content/ML/ML_8.ipynb | ###Markdown
ClusteringSo far in this course, we've focused our attention in machine learning on two fundamental tasks. - **Regression** aims to predict the value of a quantitative variable. - **Classification** aims to predict the value of a qualitative variable. However, this isn't all there is to machine learning. In this lecture, we're going to take a quick look at another task, called *clustering*. Clustering fits into the broad set of *unsupervised* machine learning tasks. In unsupervised tasks, there's no target variable to predict, and therefore no "right answer." Instead, the aim of an unsupervised algorithm is to explore the data and detect some latent structure. Clustering is the most common example of unsupervised tasks. In a clustering task, we hypothesize that the data may be naturally divided into dense clusters. The purpose of a clustering algorithm is to find these clusters. This lecture is based on the chapter [*In Depth: k-Means Clustering*](https://jakevdp.github.io/PythonDataScienceHandbook/05.11-k-means.html) of the [*Python Data Science Handbook*](https://jakevdp.github.io/PythonDataScienceHandbook/) by Jake VanderPlas.
###Code
import numpy as np
from matplotlib import pyplot as plt
import pandas as pd
###Output
_____no_output_____
###Markdown
Let's start by generating some synthetic data. The `make_blobs()` function will create a user-specified number of "blobs" of data, each of which is reasonably well-separated from the others. Under the hood, it does this by assigning a true label to each data point, which it then returns as `y_true`. However, in a standard clustering task, we would not assume that the true labels exist, and we won't use them here.
###Code
from sklearn.datasets import make_blobs
X, y_true = make_blobs(n_samples=300, centers=4,
cluster_std=0.60, random_state=0)
fig, ax = plt.subplots(1)
ax.scatter(X[:, 0], X[:, 1], s=50);
###Output
_____no_output_____
###Markdown
Visually, it appears that there are 4 clusters. Let's import `KMeans` and see how we do:
###Code
from sklearn.cluster import KMeans
kmeans = KMeans(n_clusters=4)
kmeans.fit(X)
###Output
_____no_output_____
###Markdown
To get cluster labels, we use the `predict()` method:
###Code
y_kmeans = kmeans.predict(X)
###Output
_____no_output_____
###Markdown
Now let's visualize the results. The `c` and `cmap` arguments to `ax.scatter()` allow us to easily plot points in multiple colors.
###Code
fig, ax = plt.subplots(1)
ax.scatter(X[:, 0], X[:, 1], c=y_kmeans, s=50, cmap='viridis')
###Output
_____no_output_____
###Markdown
It looks like `k-means` did a pretty good job of detecting our clusters! Under the hood, `k-means` tries to identify a "centroid" for each cluster. The two main principles are: 1. Each centroid is the mean of all the points to which it corresponds. 2. Each point is closer to its centroid than to any other centroid. The `KMeans` class makes it easy to retrieve the cluster centroids and visualize them.
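The two principles above can be sketched as a single k-means iteration in plain NumPy (a minimal illustration of the idea, not scikit-learn's actual implementation):

```python
import numpy as np

def kmeans_step(X, centers):
    """One k-means iteration: assign points, then recompute centroids."""
    # Principle 2: each point goes to its nearest centroid
    dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    labels = dists.argmin(axis=1)
    # Principle 1: each centroid is the mean of the points assigned to it
    new_centers = np.array([X[labels == k].mean(axis=0)
                            for k in range(len(centers))])
    return labels, new_centers

X = np.array([[0.0, 0.0], [0.0, 1.0], [10.0, 10.0], [10.0, 11.0]])
centers = np.array([[0.0, 0.5], [9.0, 9.0]])
labels, centers = kmeans_step(X, centers)
```

Lloyd's algorithm simply repeats this step until the assignments stop changing.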
###Code
centers = kmeans.cluster_centers_
ax.scatter(centers[:, 0], centers[:, 1], c='black', s=200, alpha=0.5)
fig
###Output
_____no_output_____
###Markdown
We can see that the cluster centroids do indeed correspond pretty nicely to the "middle" of each of the identified clusters. This experiment went very well, but of course, things in the real world aren't that easy. Let's take a look at the Palmer Penguins again, for example.
###Code
import urllib
def retrieve_data(url):
    """
    Retrieve a file from the specified url and save it in a local file
    called data.csv.
    """
    # grab the data and parse it
    filedata = urllib.request.urlopen(url)
    to_write = filedata.read()
    # write to file
    with open("data.csv", "wb") as f:
        f.write(to_write)
retrieve_data("https://philchodrow.github.io/PIC16A/datasets/palmer_penguins.csv")
penguins = pd.read_csv("data.csv")
###Output
_____no_output_____
###Markdown
Let's make a simple scatterplot of the culmen lengths and depths for the penguins.
###Code
fig, ax = plt.subplots(1)
for s in penguins['Species'].unique():
    df = penguins[penguins['Species'] == s]
    ax.scatter(df['Culmen Length (mm)'], df['Culmen Depth (mm)'], label = s)
###Output
_____no_output_____
###Markdown
When we include the colors, it looks like there might be some clusters of penguins here. Maybe even 3? Let's see.
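One caveat worth noting here (an editorial aside, not part of the original lecture): k-means uses Euclidean distance, so features on very different scales can dominate the clustering. A common precaution is to standardize each column first; a minimal sketch:

```python
import numpy as np

def standardize(X):
    """Scale each column to zero mean and unit variance."""
    X = np.asarray(X, dtype=float)
    return (X - X.mean(axis=0)) / X.std(axis=0)

X = np.array([[1.0, 100.0], [2.0, 200.0], [3.0, 300.0]])
Z = standardize(X)
```

For culmen length and depth the scales are similar, so skipping this step here is reasonable.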
###Code
X = penguins[["Culmen Length (mm)", "Culmen Depth (mm)"]].dropna()
kmeans = KMeans(n_clusters=3)
kmeans.fit(X)
fig, ax = plt.subplots(1)
ax.scatter(X["Culmen Length (mm)"], X["Culmen Depth (mm)"], c = kmeans.predict(X));
###Output
_____no_output_____ |
numpy/4. Binary Data - Solutions.ipynb | ###Markdown
Reading Binary Data with NumpyTamás Gál ([email protected])The latest version of this notebook is available at [https://github.com/Asterics2020-Obelics](https://github.com/Asterics2020-Obelics/School2019/tree/master/numpy)**Warning**: This notebook contains all the solutions. If you are currently sitting in the `NumPy` lecture, close this immediately ;-) You will now work in a blank notebook, you don't need anything else!
###Code
import numpy as np
import sys
print("Python version: {0}\n"
"NumPy version: {1}"
.format(sys.version, np.__version__))
%matplotlib inline
import matplotlib.pyplot as plt
plt.rcParams['figure.figsize'] = (16, 5)
plt.rcParams['figure.dpi'] = 300
###Output
_____no_output_____
###Markdown
Exercise: Read a KM3NeT Event File and create a histogram of the PMT ToTsUse `numpy.fromfile()` and custom `dtype`s to read an event from `School2019/numpy/IO_EVT.dat`The KM3NeT DAQ dataformat for storing an event consists of a header and two sets of hits (triggered hits and snapshot hits). The header has been skipped, so `IO_EVT.dat` only contains the **triggered** and **snapshot** hits. Triggered hits:- n_hits `[int32, little endian]`- n_hits * triggered_hit_struct - optical module ID `[int32, little endian]`, example 808476737 - PMT ID `[unsigned char (byte)]`, value between 0 and 30 - time in nanoseconds `[uint32, big endian]`, example 90544845 - ToT (time over threshold) `[unsigned byte]`, value between 0 and 255 - trigger mask `[uint64, little endian]`, bitmask, typical values are 1, 3, 4, 6 Snapshot hits: same as triggered hits but without the `trigger mask` field Solution We can use the `xxd` command to have a quick look at the binary data. If we don't know the structure, this might be a good starting point to identify some strings or recognise numbers from a proiri knowledge.
###Code
!xxd IO_EVT.dat |head -n 10
###Output
00000000: 0f00 0000 4160 3030 0205 659a 101d 0400 ....A`00..e.....
00000010: 0000 0000 0000 4160 3030 0305 659a 2515 ......A`00..e.%.
00000020: 0400 0000 0000 0000 5e7b 3030 0005 659a ........^{00..e.
00000030: 6821 0400 0000 0000 0000 5e7b 3030 0105 h!........^{00..
00000040: 659a 541b 0600 0000 0000 0000 5e7b 3030 e.T.........^{00
00000050: 0a05 659a 6511 0600 0000 0000 0000 5e7b ..e.e.........^{
00000060: 3030 1005 659a 5c1b 0600 0000 0000 0000 00..e.\.........
00000070: 5e7b 3030 1405 659a 5619 0600 0000 0000 ^{00..e.V.......
00000080: 0000 4887 3730 0105 6599 cf1a 0600 0000 ..H.70..e.......
00000090: 0000 0000 4887 3730 0b05 6599 d613 0600 ....H.70..e.....
###Markdown
The hit `dtype` We define our custom `dtype` for the hits and use the `dtype.descr` attribute as a base `dtype` for triggered hits, extended with the `triggermask` field.
###Code
hit_dtype = np.dtype([
    ("dom_id", "<i"),
    ("pmt_id", "B"),
    ("time", ">I"),
    ("tot", "B"),
])
trig_hit_dtype = np.dtype(hit_dtype.descr + [('triggermask', '<Q')])
###Output
_____no_output_____
###Markdown
The file `IO_EVT.dat` contains a single event. Opened in binary-read mode (`"rb"`), the `fobj` behaves like a stream. `np.fromfile` will call the `.read()` method with the number of bytes calculated from the given `dtype`.According to the data format specification, the first integer (represented by `dtype='<i'` where `<` indicates that it's little endian) is the number of triggered hits.To read the array of triggered hits, the `trig_hit_dtype` is used and the `count=n_trig_hits` argument is passed, otherwise `numpy` will read till the end of the file.We repeat the same process for the regular (snapshot) hits. Parsing the binary data
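As a cross-check of the layout (a hypothetical helper, not part of the original notebook), the same 18-byte triggered-hit record can be parsed with the standard `struct` module. Note that `struct` cannot mix byte orders within one format string — which is exactly why this mixed-endianness record is awkward without NumPy dtypes — so the big-endian time field needs its own unpack:

```python
import struct

def parse_trig_hit(buf):
    """Parse one 18-byte triggered hit from a bytes buffer."""
    dom_id, pmt_id = struct.unpack_from("<iB", buf, 0)   # int32 LE, uint8
    (time,) = struct.unpack_from(">I", buf, 5)           # uint32 BE
    tot, mask = struct.unpack_from("<BQ", buf, 9)        # uint8, uint64 LE
    return dom_id, pmt_id, time, tot, mask

# build one fake hit and round-trip it
raw = (struct.pack("<iB", 808476737, 3)
       + struct.pack(">I", 90544845)
       + struct.pack("<BQ", 26, 4))
assert len(raw) == 18  # 4 + 1 + 4 + 1 + 8 bytes, no padding
```

The example values here are made up for illustration; the snapshot-hit record is the same minus the final 8-byte trigger mask (10 bytes).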
###Code
with open("IO_EVT.dat", "rb") as fobj:
    n_trig_hits = np.fromfile(fobj, dtype='<i', count=1)[0]
    trig_hits = np.fromfile(fobj, dtype=trig_hit_dtype, count=n_trig_hits)
    n_hits = np.fromfile(fobj, dtype='<i', count=1)[0]
    hits = np.fromfile(fobj, dtype=hit_dtype, count=n_hits)
###Output
_____no_output_____
###Markdown
Let's see what we got:
###Code
trig_hits
hits
###Output
_____no_output_____
###Markdown
The overall ToT distributionWe can easily access a specific attribute of all hits using dictionary-notation.
###Code
plt.hist(hits['tot'])
plt.xlabel('ToT [ns]')
plt.yscale('log')
plt.ylabel('count');
###Output
_____no_output_____
###Markdown
Live Event Read-Out from the KM3NeT ORCA DetectorIn this example we will read events directly from the ORCA detector, running in the depths of the Mediterranean!Install ControlHost to communicate with the detector: **`pip install controlhost`**.To create a connection, subscribe to triggered events via the **`"IO_EVT"`** tag to **131.188.167.67**:**The header is 48 bytes, just skip it.** Retrieve 100 events and create another ToT histogram from all hits! Unfortunately `eduroam` doesn't allow the connection, so you have to use a VPN or take the binary dump `events.dat`: `fobj = open("events.dat", "rb")`
###Code
fobj = open("events.dat", "rb")
###Output
_____no_output_____
###Markdown
```pythonimport controlhost as chwith ch.Client("131.188.167.67", tag="IO_EVT") as client: for i in range(5): data = client.get_message().data[48:] print(len(data))``` Solution (live connection)
###Code
import io
import tqdm # for nice progress bars
def retrieve_hits(client):
    """Retrieves the hits of the next event using a ControlHost client"""
    data = io.BytesIO(client.get_message().data)  # create a stream
    data.read(48)  # skip the first 48 bytes
    n_trig_hits = np.frombuffer(data.read(4), dtype='<i', count=1)[0]
    triggered_hits = np.frombuffer(
        data.read(trig_hit_dtype.itemsize * n_trig_hits),
        dtype=trig_hit_dtype
    )
    n_hits = np.frombuffer(data.read(4), dtype='<i', count=1)[0]
    hits = np.frombuffer(
        data.read(hit_dtype.itemsize * n_hits),
        dtype=hit_dtype
    )
    return triggered_hits, hits
###Output
_____no_output_____
###Markdown
Solution (binary file)
###Code
def extract_hits(filename):
    """Extract the hits from a binary dump"""
    fobj = open(filename, 'rb')
    hits = []
    triggered_hits = []
    while fobj:
        header = fobj.read(48)  # skip the first 48 bytes
        if not header:
            break
        n_trig_hits = np.frombuffer(fobj.read(4), dtype='<i', count=1)[0]
        _triggered_hits = np.frombuffer(
            fobj.read(trig_hit_dtype.itemsize * n_trig_hits),
            dtype=trig_hit_dtype
        )
        triggered_hits.append(_triggered_hits)
        n_hits = np.frombuffer(fobj.read(4), dtype='<i', count=1)[0]
        _hits = np.frombuffer(
            fobj.read(hit_dtype.itemsize * n_hits),
            dtype=hit_dtype
        )
        hits.append(_hits)
    fobj.close()
    return triggered_hits, hits
###Output
_____no_output_____
###Markdown
Gathering hits data (live connection)
###Code
import controlhost as ch

tots = []
with ch.Client("131.188.167.67", tag="IO_EVT") as client:
    for i in tqdm.trange(100):
        trig_hits, hits = retrieve_hits(client)
        tots.append(hits['tot'])
###Output
100%|██████████| 100/100 [00:38<00:00, 3.19it/s]
###Markdown
Extracting hits (binary file)
###Code
triggered_hits, hits = extract_hits("events.dat")
tots = [h['tot'] for h in hits]
plt.hist(np.concatenate(tots).ravel(), bins=100)
plt.xlabel('ToT [ns]')
plt.yscale('log')
plt.ylabel('count');
###Output
_____no_output_____ |
modulos/modulo-1-introducao.ipynb | ###Markdown
Reviewing this week's content! Amy Guerra 1 - In a print statement, what happens if you omit one of the parentheses, or both?
###Code
>>> print ('Hello,Word!')
###Output
Hello,Word!
###Markdown
2 - What happens if you put a plus sign before a number? And if you write it like this: 2++2?
###Code
>>> 2++2
###Output
_____no_output_____
###Markdown
3 - What happens if you try to use 02 in Python?
###Code
>>> 02
###Output
_____no_output_____
###Markdown
4 - How many seconds are there in 42 minutes and 42 seconds?
###Code
minuto=60
segundos = 42*minuto
print("Existem {} segundos em 42 minutos e 42 segundos".format(segundos+42))
###Output
Existem 2562 segundos em 42 minutos e 42 segundos
###Markdown
5 - We saw that n = 42 is legal. What about 42 = n?
###Code
60=minuto
segundos=42*minuto
print("Existem {}segundos em 42 minutos e 42 segundos" .format(segundos+42))
###Output
_____no_output_____
###Markdown
6 - Or x = y = 1?
###Code
x = y = 1
print("X",x)
print("Y",y)
###Output
X 1
Y 1
###Markdown
7 - What happens if you put a semicolon at the end of a statement in Python? And a period?
###Code
>>> print ('Hello,Word!').
###Output
_____no_output_____
###Markdown
8 - In mathematical notation it is possible to multiply x and y like this: xy. What happens if you try to do the same in Python?
###Code
x=1
y=2
print (xy)
###Output
_____no_output_____
###Markdown
--- What other ways are there to practice these concepts? Read an integer value. Then compute the smallest possible number of banknotes (bills) into which the value can be decomposed. The notes considered are 100, 50, 20, 10, 5, 2 and 1. Print the value read and then the minimum quantity of notes of each type needed, as in the example provided below.
###Code
>>> print ("Digite um valor inteiro")
valor=int(input())
print("_"*25)
print("R$",valor)
nota100= valor//100
valor = valor - nota100*100
nota50 = valor//50
valor = valor - nota50*50
nota20= valor//20
valor = valor - nota20*20
nota10=valor//10
valor = valor - nota10*10
nota5= valor//5
valor = valor - nota5*5
nota2=valor//2
valor = valor - nota2*2
moeda1= valor// 1
valor= valor - moeda1*1
print('{} nota(s) de R$ 100,00'.format(nota100))
print('{} nota(s) de R$ 50,00'.format(nota50))
print('{} nota(s) de R$ 20,00'.format(nota20))
print('{} nota(s) de R$ 10,00'.format(nota10))
print('{} nota(s) de R$ 5,00'.format(nota5))
# vírgula funciona da mesma forma
print(nota2, 'nota(s) de R$ 2,00')
# Usando o format com variável
print('{moeda} moeda(s) de R$ 1,00'.format(moeda=moeda1))
###Output
Digite um valor inteiro
157
_________________________
R$ 157
1 nota(s) de R$ 100,00
1 nota(s) de R$ 50,00
0 nota(s) de R$ 20,00
0 nota(s) de R$ 10,00
1 nota(s) de R$ 5,00
1 nota(s) de R$ 2,00
0 moeda(s) de R$ 1,00
###Markdown
Revisando o conteúdo da semana! Daniela Rodrigues 1 - Em uma instrução print, o que acontece se você omitir um dos parênteses ou ambos?
###Code
###Output
_____no_output_____
###Markdown
2 - O que acontece se puser um sinal de mais antes de um número? E se escrever assim: 2++2?
###Code
###Output
_____no_output_____
###Markdown
3 - O que acontece se você tentar usar 02 isso no Python?
###Code
###Output
_____no_output_____
###Markdown
4 - Quantos segundos há em 42 minutos e 42 segundos?
###Code
###Output
_____no_output_____
###Markdown
5 - Vimos que n = 42 é legal. E 42 = n?
###Code
###Output
_____no_output_____
###Markdown
6 - Ou x = y = 1?
###Code
###Output
_____no_output_____
###Markdown
7 - O que acontece se você puser um ponto e vírgula no fim de uma instrução no Python? E um ponto?
###Code
###Output
_____no_output_____
###Markdown
8 - Em notação matemática é possível multiplicar x e y desta forma: xy. O que acontece se você tentar fazer o mesmo no Python?
###Code
###Output
_____no_output_____
###Markdown
Reviewing this week's content! 1 - In a print statement, what happens if you omit one of the parentheses, or both?
###Code
print "Alô, alô"
print("40tenadas"
###Output
_____no_output_____
###Markdown
2 - What happens if you put a plus sign before a number? And if you write it like this: 2++2?
###Code
print(+2)
print(2++2)
###Output
4
###Markdown
3 - What happens if you try to use 02 in Python?
###Code
print(02)
###Output
_____no_output_____
###Markdown
4 - How many seconds are there in 42 minutes and 42 seconds?
###Code
minuto = 60
minutos_42 = 42*minuto
print("Existem {} segundos em 42 minutos e 42 segundos".format(minutos_42 + 42))
###Output
Existem 2562 segundos em 42 minutos e 42 segundos
###Markdown
5 - We saw that n = 42 is legal. What about 42 = n?
###Code
42 = n
###Output
_____no_output_____
###Markdown
6 - Or x = y = 1?
###Code
x = y = 1
print("X",x)
print("Y",y)
###Output
X 1
Y 1
###Markdown
7 - What happens if you put a semicolon at the end of a statement in Python? And a period?
###Code
ponto_no_final = 2 + 2.
C = "tá achando que eu sou C amadah?";
print("COM PONTO:", ponto_no_final)
print("PONTO E VÍRGULA:", C)
###Output
COM PONTO: 4.0
PONTO E VÍRGULA: tá achando que eu sou C amadah?
###Markdown
8 - In mathematical notation it is possible to multiply x and y like this: xy. What happens if you try to do the same in Python?
###Code
x = 2
y = 3
print(xy)
###Output
_____no_output_____
###Markdown
--- What other ways are there to practice these concepts? Read an integer value. Then compute the smallest possible number of banknotes (bills) into which the value can be decomposed. The notes considered are **100, 50, 20, 10, 5, 2 and 1**. Print the value read and then the minimum quantity of notes of each type needed, as in the example provided below.
###Code
print("Digite um número inteiro:")
valor = int(input())
print("_"*25)
print("R$",valor)
notas_100 = valor // 100
valor = valor - notas_100*100
notas_50 = valor // 50
valor = valor - notas_50*50
notas_20 = valor // 20
valor = valor - notas_20*20
notas_10 = valor // 10
valor = valor - notas_10*10
notas_5 = valor // 5
valor = valor - notas_5*5
notas_2 = valor // 2
valor = valor - notas_2*2
moeda_1 = valor // 1
valor = valor - moeda_1*1
print('{} nota(s) de R$ 100,00'.format(notas_100))
print('{} nota(s) de R$ 50,00'.format(notas_50))
print('{} nota(s) de R$ 20,00'.format(notas_20))
print('{} nota(s) de R$ 10,00'.format(notas_10))
print('{} nota(s) de R$ 5,00'.format(notas_5))
# vírgula funciona da mesma forma
print(notas_2, 'nota(s) de R$ 2,00')
# Usando o format com variável
print('{moeda} moeda(s) de R$ 1,00'.format(moeda=moeda_1))
###Output
Digite um número inteiro:
12345
_________________________
R$ 12345
123 nota(s) de R$ 100,00
0 nota(s) de R$ 50,00
2 nota(s) de R$ 20,00
0 nota(s) de R$ 10,00
1 nota(s) de R$ 5,00
0 nota(s) de R$ 2,00
0 moeda(s) de R$ 1,00
###Markdown
Reminder!
###Code
# floor division, Retorna a parte inteira da divisão
print("O Resultado inteiro de 10/2:",10 // 2)
print("O Resultado inteiro de 15/2:",15 // 2)
# MOD, Retorna o RESTO da divisão
print("\nO Resto de 10/2:",10 % 2)
print("O Resto de 15/2:",15 % 2)
###Output
O Resultado inteiro de 10/2: 5
O Resultado inteiro de 15/2: 7
O Resto de 10/2: 0
O Resto de 15/2: 1
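As a side note (not in the original notebook), Python's built-in `divmod` returns the integer quotient and the remainder in one call:

```python
# divmod(a, b) == (a // b, a % b)
q, r = divmod(15, 2)
print("quotient:", q, "remainder:", r)
```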
###Markdown
How to approximate or round decimal numbers
###Code
from math import ceil, floor
value = 1.45299759
print("Arredondado de", value,"pra baixo (minimo):",floor(value))
# Format por ordem =============> valor 0 , valor 1
print("Arredondado {1} pra cima (teto): {0}".format(ceil(value), value))
print("Limitar casas decimais com o format: {0:.2f}".format(value))
###Output
Arredondado de 1.45299759 pra baixo (minimo): 1
Arredondado 1.45299759 pra cima (teto): 2
Limitar casas decimais com o format: 1.45
###Markdown
Reviewing this week's content! Débora Oliveira 1 - In a print statement, what happens if you omit one of the parentheses, or both?
###Code
print("Grupo de Estudo")
###Output
Grupo de Estudo
###Markdown
2 - What happens if you put a plus sign before a number? And if you write it like this: 2++2?
###Code
+2
2++2
5*2
10/2
###Output
_____no_output_____
###Markdown
3 - What happens if you try to use 02 in Python?
###Code
02
###Output
_____no_output_____
###Markdown
4 - How many seconds are there in 42 minutes and 42 seconds?
###Code
(42*60)+42
###Output
_____no_output_____
###Markdown
5 - We saw that n = 42 is legal. What about 42 = n?
###Code
n = 42
n
42 = n
###Output
_____no_output_____
###Markdown
6 - Or x = y = 1?
###Code
x = y = 1
x
y
###Output
_____no_output_____
###Markdown
7 - What happens if you put a semicolon at the end of a statement in Python? And a period?
###Code
k = 10;
k
l = 5.
l
###Output
_____no_output_____
###Markdown
8 - In mathematical notation it is possible to multiply x and y like this: xy. What happens if you try to do the same in Python?
###Code
e = 5
u = 5
eu
e * u
###Output
_____no_output_____
###Markdown
--- What other ways are there to practice these concepts?
###Code
valor = 4/3*3.14*5**3
print(valor)
print("{:.2f}".format(4/3*3.14*5**3))
preço = 24.95
preço_desconto = 24.95 * 0.4
qtd = 60
transporte = 3 + (0.75 * qtd)
total = preço_desconto * qtd + transporte
print(total)
print("{:.2f}".format(total))
segundos_saida = (6 * 3600) + (52 * 60)
segundos_caminhada = ((8 * 60) + 15) * 2
segundos_corrida = ((7 * 60) + 12) * 3
segundos_total = segundos_saida + segundos_caminhada + segundos_corrida
horas_chegada = segundos_total // 3600
minutos_chegada = (segundos_total % 3600) // 60
segundos_chegada = segundos_total % 60
print(horas_chegada, 'h' , minutos_chegada, 'm', segundos_chegada, 's')
print("Digite o valor que deseja:")
valor = int(input())
notas_200 = valor // 200
print("{} nota(s) de R$ 200".format(notas_200))
valor = valor - (notas_200 * 200)
notas_100 = valor // 100
print("{} nota(s) de R$ 100".format(notas_100))
valor = valor - (notas_100 * 100)
notas_50 = valor // 50
print("{} nota(s) de R$ 50".format(notas_50))
valor = valor - (notas_50 * 50)
notas_20 = valor // 20
print("{} nota(s) de R$ 20".format(notas_20))
valor = valor - (notas_20 * 20)
notas_10 = valor // 10
print("{} nota(s) de R$ 10".format(notas_10))
valor = valor - (notas_10 * 10)
notas_05 = valor // 5
print("{} nota(s) de R$ 5".format(notas_05))
valor = valor - (notas_05 * 5)
notas_02 = valor // 2
print("{} nota(s) de R$ 2".format(notas_02))
valor = valor - (notas_02 * 2)
###Output
Digite o valor que deseja:
2563
12 nota(s) de R$ 200
1 nota(s) de R$ 100
1 nota(s) de R$ 50
0 nota(s) de R$ 20
1 nota(s) de R$ 10
0 nota(s) de R$ 5
1 nota(s) de R$ 2
###Markdown
Revisando o conteúdo da semana!Milena Ferreira
###Code
###Output
_____no_output_____
###Markdown
1 - Em uma instrução print, o que acontece se você omitir um dos parênteses ou ambos?
###Code
###Output
_____no_output_____
###Markdown
2 - O que acontece se puser um sinal de mais antes de um número? E se escrever assim: 2++2?
###Code
###Output
_____no_output_____
###Markdown
3 - O que acontece se você tentar usar 02 isso no Python?
###Code
###Output
_____no_output_____
###Markdown
4 - Quantos segundos há em 42 minutos e 42 segundos?
###Code
###Output
_____no_output_____
###Markdown
5 - Vimos que n = 42 é legal. E 42 = n?
###Code
###Output
_____no_output_____
###Markdown
6 - Ou x = y = 1?
###Code
###Output
_____no_output_____
###Markdown
7 - O que acontece se você puser um ponto e vírgula no fim de uma instrução no Python? E um ponto?
###Code
###Output
_____no_output_____
###Markdown
8 - Em notação matemática é possível multiplicar x e y desta forma: xy. O que acontece se você tentar fazer o mesmo no Python?
###Code
###Output
_____no_output_____
###Markdown
Reviewing this week's content! 1 - In a print statement, what happens if you omit one of the parentheses, or both?
###Code
print("Teste")
###Output
Teste
###Markdown
2 - What happens if you put a plus sign before a number? And if you write it like this: 2++2?
###Code
###Output
_____no_output_____
###Markdown
3 - What happens if you try to use 02 in Python?
###Code
###Output
_____no_output_____
###Markdown
4 - How many seconds are there in 42 minutes and 42 seconds?
###Code
###Output
_____no_output_____
###Markdown
5 - We saw that n = 42 is legal. What about 42 = n?
###Code
###Output
_____no_output_____
###Markdown
6 - Or x = y = 1?
###Code
###Output
_____no_output_____
###Markdown
7 - What happens if you put a semicolon at the end of a statement in Python? And a period?
###Code
###Output
_____no_output_____
###Markdown
8 - In mathematical notation it is possible to multiply x and y like this: xy. What happens if you try to do the same in Python?
###Code
###Output
_____no_output_____ |
Spring2017-2019/16-LSQ/Seminar16.ipynb | ###Markdown
Семинар Задача наименьших квадратов (Least Squares Problem) Постановка задачи1. **Широкая:** пусть даны $m$ пар измерениий $(x_i, y_i)$, где $ x_i \in \mathbb{R}^n, \; y_i \in \mathbb{R}^p$. Найти такую функцию $f$, что $$\frac{1}{2}\|f(x_i) - y_i \|^2_2 \to \min$$2. **Уже:** пусть даны $m$ пар измерениий $(x_i, y_i)$, где $ x_i \in \mathbb{R}^n, \; y_i \in \mathbb{R}^p$. Найти такую *параметрическую* функцию $f(x, w)$, что $$\frac{1}{2}\|f(x_i, w) - y_i \|^2_2 \to \min_w$$3. **Ещё уже:** пусть даны $m$ пар измерениий $(x_i, y_i)$, где $ x_i \in \mathbb{R}^n, \; y_i \in \mathbb{R}$. Найти такую *параметрическую* функцию $f(x, w)$, что $$\frac{1}{2} \sum_{i=1}^m(f(x_i, w) - y_i )^2 \to \min_w$$ Линейный случайРассмотрим случай линейной зависимости между измерениями $x_i \in \mathbb{R}^n$ и $y_i \in \mathbb{R}, \; i = 1,\ldots, m$.Тогда$$f(x, w) = x^{\top}w$$или$$f(X, W) = XW$$Задача наименьших квадратов формулируется в виде$$L(w|X, y) = \frac{1}{2}\sum\limits_{i=1}^m (x^{\top}_i w - y_i)^2 = \frac{1}{2}\|Xw - y \|^2_2 \to \min_w$$**Замечание.** Везде далее $m \geq n$ и $\mathrm{rank}(X) = n$ кроме специально оговоренных случаев Нормальное уравнениеИз необходимого условия минимума первого порядка и выпуклости нормы следует, что $$L'(w^* | X, y) = 0 \Rightarrow (X^{\top}X)w^* = X^{\top}y$$или$$w^* = (X^{\top}X)^{-1}X^{\top}y = X^+y = X^{\dagger}y,$$где $X^{\dagger} = X^+ = (X^{\top}X)^{-1}X^{\top}$ - *псевдообратная матрица*.**Замечение:** убедитесь, что Вы можете вывести выражение для $w^*$!**Вопрос:** к какой задаче сведена задача оптимизации? Прямые методы Разложение Холецкого**Определение.** Любая матрица $A \in \mathbb{S}^n_{++}$ имеет единственное разложение Холецкого:$$A = LL^{\top},$$где $L$ - нижнетреугольная матрица.Алгоритм:1. Вычислить $X^{\top}X$ и $X^{\top}y$2. Вычислить разложение Холецкого матрицы $X^{\top}X$3. 
Найти $w^*$ прямой и обратной подстановкой Pro & contraPro - при $m \gg n$ хранение $X^{\top}X$ требует намного меньше памяти, чем хранение $X$- если матрица $X$ разреженная, существуют методы также дающие разреженное разложение Холецкого Contra- число обусловленности $X^{\top}X$ равно квадрату числа обусловленности $X$. Ошибка пропорциональна обусловленности.- необходимо вычислить $X^{\top}X$ QR разложение**Определение.** Любую матрицу $A \in \mathbb{R}^{m \times n}$ можно представить в виде$$A = QR,$$где $Q \in \mathbb{R}^{m \times m}$ - унитарная матрица, а $R \in \mathbb{R}^{m \times n}$ - прямоугольная верхнетреугольная. Применение1. Вычислить QR разложение матрицы $X$: $X = QR$.2. $Q = [Q_1, Q_2]$, $Q_1 \in \mathbb{R}^{m \times n}$,$R = \begin{bmatrix}R_1\\0\end{bmatrix}$,$R_1 \in \mathbb{R}^{n \times n}$ - квадратная верхнетреугольная матрица2. Задача примет вид: $$\|R_1w - Q_1^{\top}y \|^2_2 \to \min_w$$и нормальное уравнение$$R_1w^* = Q_1^{\top}y$$Получили уравнение с квадратной верхнетреугольной матрицей, которое легко решается обратной подстановкой. Pro & contraPro - ошибка пропорциональна числу обусловленности $X$, а не $X^{\top}X$- более устойчив, чем использование разложение ХолецкогоContra- нельзя контролировать устойчивость решения Сингулярное разложение (SVD)**Определение.** Любую матрицу $A \in \mathbb{R}^{m \times n}$ можно представить в виде$$A = U\widehat{\Sigma} V^* = [U_1, U_2] \begin{bmatrix} \Sigma\\ 0 \end{bmatrix} V^*,$$где $U \in \mathbb{R}^{m \times m}$ - унитарная матрица, $U_1 \in \mathbb{R}^{m \times n}$, $\Sigma = \mathrm{diag}(\sigma_1, \ldots, \sigma_n) \in \mathbb{R}^{n \times n}$ - диагональная с сингулярными числами $\sigma_i$ на диагонали, и $V \in \mathbb{R}^{n \times n}$ - унитарная. 
Application
$$\| Xw - y\|^2_2 = \left\| \begin{bmatrix} \Sigma \\ 0 \end{bmatrix} V^* w - \begin{bmatrix} U_1^{\top} \\ U_2^{\top} \end{bmatrix}y \right\|^2_2 \sim \| \Sigma V^* w - U_1^{\top}y \|^2_2$$
Solution of a linear system with a **square** matrix: $$w^* = V\Sigma^{-1}U_1^{\top}y = \sum\limits_{i=1}^n \frac{u_i^{\top}y}{\sigma_i} v_i,$$ where $v_i$ and $u_i$ are the columns of the matrices $V$ and $U_1$.

Pro & contra

Pro
- provides information about the sensitivity of the solution to perturbations of $y$
- stability control: small singular values can be discarded
- if the matrix is close to singular, only the SVD can reveal this

Contra
- computing the SVD is the most expensive option compared with the QR and Cholesky decompositions

Experiments
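Before timing the full solver suite below, the "discard small singular values" idea from the SVD pro list can be sketched in a few lines. This is a sketch, not part of the seminar's reference code; the relative threshold `tol` is an illustrative choice.

```python
import numpy as np

def truncated_svd_solve(X, y, tol=1e-10):
    # Thin SVD: X = U @ diag(s) @ Vt, singular values s in descending order
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    # Keep only singular values above a relative threshold; dropping the rest
    # regularizes the solution when X is close to rank deficient
    keep = s > tol * s[0]
    return Vt[keep].T @ ((U[:, keep].T @ y) / s[keep])

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 5))
y = rng.standard_normal(50)
w = truncated_svd_solve(X, y)
# On a well-conditioned matrix nothing is truncated, so the result coincides
# with the ordinary least squares solution
w_ref, *_ = np.linalg.lstsq(X, y, rcond=None)
```

For an ill-conditioned $X$, raising `tol` trades a larger residual for a much smaller $\|w\|$.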
###Code
import numpy as np
n = 1000
m = 2 * n
X = np.random.randn(m, n)
w = np.random.randn(n)
y = X.dot(w) + 1e-5 * np.random.randn(m)
w_est = np.linalg.solve(X.T.dot(X), X.T.dot(y))
print(np.linalg.norm(w - w_est))
import scipy.linalg as sclin
import scipy.sparse.linalg as scsplin
def CholSolve(X, y):
    # Solve the normal equations via a Cholesky factorization of X^T X
    res = sclin.cho_factor(X.T.dot(X), lower=True)
    return sclin.cho_solve(res, X.T.dot(y))

def QRSolve(X, y):
    # Solve R_1 w = Q_1^T y from the full QR decomposition of X
    Q, R = sclin.qr(X)
    return sclin.solve_triangular(R[:R.shape[1], :], Q[:, :R.shape[1]].T.dot(y))

def SVDSolve(X, y):
    # w = V Sigma^{-1} U^T y; note that scipy returns V^T as the third factor
    U, s, Vt = sclin.svd(X, full_matrices=False)
    return Vt.T.dot(np.diagflat(1.0 / s).dot(U.T.dot(y)))

def CGSolve(X, y):
    # Conjugate gradients on the normal equations, applying X^T X implicitly
    def mv(x):
        return X.T.dot(X.dot(x))
    LA = scsplin.LinearOperator((X.shape[1], X.shape[1]), matvec=mv)
    w, _ = scsplin.cg(LA, X.T.dot(y), tol=1e-10)
    return w

def NPSolve(X, y):
    # Direct solve of the normal equations (X^T X) w = X^T y
    return np.linalg.solve(X.T.dot(X), X.T.dot(y))

def LSQRSolve(X, y):
    # scipy's dedicated least squares solver, applied to X directly
    res = scsplin.lsqr(X, y)
    return res[0]
w_chol = CholSolve(X, y)
print(np.linalg.norm(w - w_chol))
w_qr = QRSolve(X, y)
print(np.linalg.norm(w - w_qr))
w_svd = SVDSolve(X, y)
print(np.linalg.norm(w - w_svd))
w_cg = CGSolve(X, y)
print(np.linalg.norm(w - w_cg))
w_np = NPSolve(X, y)
print(np.linalg.norm(w - w_np))
w_lsqr = LSQRSolve(X, y)
print(np.linalg.norm(w - w_lsqr))
%timeit w_chol = CholSolve(X, y)
%timeit w_qr = QRSolve(X, y)
%timeit w_svd = SVDSolve(X, y)
%timeit w_cg = CGSolve(X, y)
%timeit w_np = NPSolve(X, y)
%timeit w_lsqr = LSQRSolve(X, y)
%matplotlib inline
import time
import matplotlib.pyplot as plt
n = [10, 100, 1000, 2000, 5000]
chol_time = []
qr_time = []
svd_time = []
cg_time = []
np_time = []
lsqr_time = []
for dim in n:
m = int(1.5 * dim)
X = np.random.randn(m, dim)
w = np.random.randn(dim)
y = X.dot(w) + 1e-5 * np.random.randn(m)
st = time.time()
w_chol = CholSolve(X, y)
chol_time.append(time.time() - st)
st = time.time()
w_qr = QRSolve(X, y)
qr_time.append(time.time() - st)
st = time.time()
w_svd = SVDSolve(X, y)
svd_time.append(time.time() - st)
st = time.time()
w_cg = CGSolve(X, y)
cg_time.append(time.time() - st)
st = time.time()
w_np = NPSolve(X, y)
np_time.append(time.time() - st)
st = time.time()
w_lsqr = LSQRSolve(X, y)
lsqr_time.append(time.time() - st)
plt.figure(figsize=(10,8))
plt.plot(n, chol_time, linewidth=5, label="Cholesky")
plt.plot(n, qr_time, linewidth=5, label="QR")
plt.plot(n, svd_time, linewidth=5, label="SVD")
plt.plot(n, cg_time, linewidth=5, label="CG")
plt.plot(n, np_time, linewidth=5, label="Numpy")
plt.plot(n, lsqr_time, linewidth=5, label="LSQR")
plt.legend(loc="best", fontsize=20)
plt.xscale("log")
plt.yscale("log")
plt.xlabel(r"Dimension", fontsize=20)
plt.ylabel(r"Time, sec.", fontsize=20)
plt.xticks(fontsize = 20)
_ = plt.yticks(fontsize = 20)
###Output
_____no_output_____
###Markdown
The nonlinear case (J. Nocedal, S. Wright, Numerical Optimization, Ch. 10)

**Question:** what if the measurements must be modeled by a nonlinear function $f(x, w)$?
**Answer:** there is no longer an analytical solution, so iterative methods must be used.

Gauss-Newton method
- Objective function $$S = \frac{1}{2}\| f(X, w) - y\|^2_2 = \frac{1}{2}\|r(w)\|_2^2 \to \min_w$$
- Gradient $$S' = \sum_{i=1}^m r_i(w)r_i'(w) = J^{\top}(w)r(w), $$ where $J$ is the Jacobian of the residuals $r(w)$
- Hessian \begin{align*}S''(w) = & \sum_{i=1}^m r_i'(w)r_i'(w)^{\top} + \sum_{i=1}^m r_i(w)r_i''(w) \\= & J^{\top}(w)J(w) + \sum_{i=1}^m r_i(w)r_i''(w)\end{align*}

Newton's method
- The equation for the search direction in Newton's method $$S''(w_k)h_{k+1} = -J^{\top}(w_k)r(w_k)$$
- Or in more detail $$\left(J^{\top}(w_k)J(w_k) + \sum_{i=1}^m r_i(w_k)r_i''(w_k)\right) h_{k+1} = -J^{\top}(w_k)r(w_k)$$

**Question:** what changes when Gauss's name is added?

Gauss-Newton method $$\left(J^{\top}(w_k)J(w_k)\right) h_{k+1} = -J^{\top}(w_k)r(w_k)$$
**Remark:** the step size of the method is chosen by a line search using a combination of the rules covered earlier.

Alternative derivation via linearization of the objective function
- Original problem $$S = \frac{1}{2}\| f(X, w) - y\|^2_2 = \frac{1}{2}\|r(w)\|_2^2 \to \min_w$$
- Linearize the objective function at the current point $w_k$ to obtain the direction $h_k$ $$S(w_{k+1}) = \frac{1}{2}\| r(w_{k+1}) \|^2_2 \approx \frac{1}{2}\|r(w_k) + J(w_k) h_k\|_2^2 \to \min_{h_k}$$
- This is a **linear** problem, for which the analytical results above apply.

Convergence theorem

**Theorem.** Suppose the residuals $r_i(w)$ are bounded, their gradients are Lipschitz continuous, and the Jacobian $J$ has full rank. Then $$\lim_{k \to \infty} J^{\top}(w_k)r_k = 0,$$ provided the step size is chosen according to the sufficient decrease and curvature conditions.
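As a minimal sketch (not the seminar's reference implementation), the Gauss-Newton iteration can be written as follows; it takes a unit step, whereas in practice a line search would be used:

```python
import numpy as np

def gauss_newton(residual, jac, w0, max_iter=50, tol=1e-10):
    # Minimize 0.5 * ||r(w)||^2: at each step solve (J^T J) h = -J^T r
    w = np.asarray(w0, dtype=float)
    for _ in range(max_iter):
        r = residual(w)
        J = jac(w)
        h = np.linalg.solve(J.T @ J, -J.T @ r)
        w = w + h  # unit step; no line search in this sketch
        if np.linalg.norm(h) < tol:
            break
    return w

# For a *linear* residual r(w) = Aw - b, a single Gauss-Newton step is exact:
# the normal equations of the linearized problem are the normal equations
A = np.array([[2.0, 0.0], [0.0, 3.0], [1.0, 1.0]])
b = np.array([2.0, 3.0, 2.0])
w_gn = gauss_newton(lambda w: A @ w - b, lambda w: A, np.zeros(2))  # ≈ [1, 1]
```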
Convergence rate
$$\|w_{k+1} - w^* \|_2 \leq \| (J^{\top}(w^*)J(w^*))^{-1}H(w^*)\| \|w_k - w^* \|_2 + O(\|w_k - w^* \|^2_2)$$
- Depends on the relationship between $J^{\top}J$ and $H(w_k) = \sum\limits_{i=1}^m r_i(w_k)r_i''(w_k)$
- The smaller $\| (J^{\top}(w^*)J(w^*))^{-1}H(w^*) \|$, the faster the convergence
- If $H(w^*) = 0$, the convergence is locally quadratic

The case of large residuals
- In this case $H(w_k)$ cannot be neglected
- Signals that the chosen parametric function $f(X, w)$ is inadequate
- Requires *hybrid* algorithms that behave like the Gauss-Newton method when the residuals are small and like Newton's method or a quasi-Newton method when the residuals are large

Pro & contra

Pro
- no need to compute $r''(w)$
- an estimate of the Hessian is obtained from the Jacobian
- the Hessian approximation used is often very accurate in the norm sense
- if the Jacobian has full rank, the computed direction is guaranteed to be a descent direction
- interpretation as a linearization of the function $f(x, w)$ near the extremum

Contra
- the Hessian approximation can be very inaccurate
- if the matrix $J^{\top}J$ is close to singular, the solution is unstable, and even convergence is not guaranteed

Levenberg-Marquardt method

What problems have accumulated?
- Newton's method converges only **locally**, but **quadratically**
- A singular Hessian, or a singular approximation of it (as in the Gauss-Newton method), makes the solution unstable
- Gradient descent converges to a stationary point from **any** starting point, but only **linearly**

How can these problems be solved, at least partially?

**Idea:** separate the spectrum of the Hessian from 0 with an additional term of the form $\lambda I$

Levenberg-Marquardt method: $$(f''(x_k) + \lambda_k I)h_k = -f'(x_k), \qquad \lambda_k > 0$$

Why is this a good idea?
- As $\lambda_k \to 0$ the method behaves like Newton's method
- As $\lambda_k \to \infty$ the method behaves like gradient descent
- In the Gauss-Newton method, the term $\lambda_k I$ serves as an estimate of $H(w_k)$
- If the Hessian estimate $J^{\top}J$ is sparse, adding $\lambda_k I$ does not spoil the sparsity and allows the system of equations to be solved quickly
- Regularization of the original problem (see below)

One problem remains... There are many strategies for choosing $\lambda_k$. The general idea is similar to backtracking:
- pick an initial value
- if the decrease of the function is sufficient, the method is in a region where the quadratic approximation works well, so $\lambda_{k+1}$ can be decreased further
- if the decrease is not strong enough, increase $\lambda_k$, recompute the direction $h_k$, and check again whether it is acceptable

Convergence
- Convergence proofs are nontrivial because the changes of $\lambda_k$ must be taken into account
- Convergence to a stationary point is guaranteed provided the curvature of the function is modeled adequately at every point

```python
def simple_levenberg_marquardt(f, jac, x, lam, rho_min, rho_max, tol):
    # f returns the residual vector, jac its Jacobian; rho_min < 1 < rho_max
    n = x.shape[0]
    while True:
        J = jac(x)
        F = f(x)
        while True:
            # Damped Gauss-Newton step: (J^T J + lam * I) h = -J^T F
            h = np.linalg.solve(J.T.dot(J) + lam * np.eye(n), -J.T.dot(F))
            x_next = x + h
            F_next = f(x_next)
            if np.linalg.norm(F_next) < np.linalg.norm(F):
                lam = rho_min * lam
                x = x_next
                break
            else:
                lam = lam * rho_max
        if np.linalg.norm(F) - np.linalg.norm(F_next) < tol:
            break
    return x
```

Experiment

Consider the nonlinear least squares problem for the following function $$f(w|x) = w_1 e^{w_2 x}\cos(w_3 x + w_4)$$ with $w = (1, -0.5, 10, 0)$
###Code
w = np.array([1, -0.5, 10, 0])
def f(x, w=w):
return w[0] * np.exp(x * w[1]) * np.cos(w[2] * x + w[3])
num_points = 100
x_range = np.linspace(0, 5, num=num_points)
plt.plot(x_range, f(x_range))
num_samples = 50
x_samples = np.random.choice(x_range, size=num_samples)
y_samples = f(x_samples) + 0.05 * np.random.randn(num_samples)
plt.plot(x_range, f(x_range))
plt.scatter(x_samples, y_samples, c="red")
import scipy.optimize as scopt
res = lambda w: f(x_samples, w) - y_samples
def jac(w):
J = np.zeros((num_samples, 4))
J[:, 0] = np.exp(x_samples * w[1]) * np.cos(x_samples * w[2] + w[3])
J[:, 1] = w[0] * x_samples * np.exp(x_samples * w[1]) * np.cos(x_samples * w[2] + w[3])
J[:, 2] = -w[0] * x_samples * np.exp(x_samples * w[1]) * np.sin(x_samples * w[2] + w[3])
J[:, 3] = -w[0] * np.exp(x_samples * w[1]) * np.sin(x_samples * w[2] + w[3])
return J
result = {}
x0 = np.random.randn(4)
result["LM"] = scopt.least_squares(fun=res, method="lm", x0=x0, jac=jac)
# result["TR"] = scopt.least_squares(fun=res, method="trf", x0=x0, jac=jac)
# result["Dogleg"] = scopt.least_squares(fun=res, method="dogbox", x0=x0, jac=jac)
plt.figure(figsize=(8, 6))
fontsize = 16
plt.plot(x_range, f(x_range), label="Exact")
for method in result:
plt.plot(x_range, f(x_range, result[method].x), label=method)
plt.legend(fontsize=fontsize)
plt.xticks(fontsize=fontsize)
plt.yticks(fontsize=fontsize)
print("Exact parameters = {}".format(w))
for method in result:
print("{} method parameters = {}".format(method, result[method].x))
###Output
Exact parameters = [ 1. -0.5 10. 0. ]
LM method parameters = [ 1.01790846 -0.49635022 9.9670075 0.01450216]
notebooks/2b Input Driven Observations (GLM-HMM).ipynb | ###Markdown
Input Driven Observations ("GLM-HMM")

Notebook prepared by Zoe Ashwood: feel free to email me with feedback or questions (zashwood at cs dot princeton dot edu).

This notebook demonstrates the "InputDrivenObservations" class, and illustrates its use in the context of modeling decision-making data as in Ashwood et al. (2020) ([Mice alternate between discrete strategies during perceptual decision-making](https://www.biorxiv.org/content/10.1101/2020.10.19.346353v1.full.pdf)).

Compared to the model considered in the notebook ["2 Input Driven HMM"](https://github.com/lindermanlab/ssm/blob/master/notebooks/2%20Input%20Driven%20HMM.ipynb), Ashwood et al. (2020) assumes a stationary transition matrix where transition probabilities *do not* depend on external inputs. However, observation probabilities now *do* depend on external covariates according to:

$$\begin{align}\Pr(y_t = c \mid z_{t} = k, u_t, w_{kc}) = \frac{\exp\{w_{kc}^\mathsf{T} u_t\}}{\sum_{c'=1}^C \exp\{w_{kc'}^\mathsf{T} u_t\}}\end{align}$$

where $c \in \{1, ..., C\}$ indicates the categorical class for the observation, $u_{t} \in \mathbb{R}^{M}$ is the set of input covariates, and $w_{kc} \in \mathbb{R}^{M}$ is the set of input weights associated with state $k$ and class $c$. These weights, along with the transition matrix and initial state probabilities, will be learned.

In Ashwood et al. (2020), $C = 2$ as $y_{t}$ represents the binary choice made by an animal during a 2AFC (2-Alternative Forced Choice) task. The above equation then reduces to:

$$\begin{align}\Pr(y_t = 1 \mid z_{t} = k, u_t, w_{k}) = \frac{1}{1 + \exp\{-w_{k}^\mathsf{T} u_t\}}.\end{align}$$

and only a single set of weights is associated with each state.

1. Setup

The line `import ssm` imports the package for use. Here, we have also imported a few other packages for plotting.
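As a quick numerical illustration of the $C = 2$ formula above (the weight and input values here are made-up numbers, not taken from the paper):

```python
import numpy as np

def choice_prob(w_k, u_t):
    # P(y_t = 1 | z_t = k, u_t): a logistic function of w_k^T u_t
    return 1.0 / (1.0 + np.exp(-np.dot(w_k, u_t)))

w_k = np.array([6.0, 1.0])   # [stimulus weight, bias weight], illustrative only
u_t = np.array([0.25, 1.0])  # [stimulus value, constant bias covariate]
p1 = choice_prob(w_k, u_t)   # ≈ 0.924: this state tracks the stimulus strongly
```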
###Code
import numpy as np
import numpy.random as npr
import matplotlib.pyplot as plt
import ssm
from ssm.util import one_hot, find_permutation
%matplotlib inline
npr.seed(0)
###Output
_____no_output_____
###Markdown
2. Input Driven Observations

We create an HMM with input-driven observations and 'standard' (stationary) transitions with the following line:

```python
ssm.HMM(num_states, obs_dim, input_dim, observations="input_driven_obs",
        observation_kwargs=dict(C=num_categories), transitions="standard")
```

As in Ashwood et al. (2020), we are going to model an animal's binary choice data during a decision-making task, so we will set `num_categories=2` because the animal only has two options available to it. We will also set `obs_dim = 1` because the dimensionality of the observation data is 1 (if we were also modeling, for example, the binned reaction time of the animal, we could set `obs_dim = 2`). For the sake of simplicity, we will assume that an animal's choice in a particular state is only affected by the external stimulus associated with that particular trial, and its innate choice bias. Thus, we will set `input_dim = 2` and we will simulate input data that resembles sequences of stimuli in what follows. In Ashwood et al. (2020), they found that many mice used 3 decision-making states when performing 2AFC tasks. We will, thus, set `num_states = 3`.

2a. Initialize GLM-HMM
###Code
# Set the parameters of the GLM-HMM
num_states = 3 # number of discrete states
obs_dim = 1 # number of observed dimensions
num_categories = 2 # number of categories for output
input_dim = 2 # input dimensions
# Make a GLM-HMM
true_glmhmm = ssm.HMM(num_states, obs_dim, input_dim, observations="input_driven_obs",
observation_kwargs=dict(C=num_categories), transitions="standard")
###Output
_____no_output_____
###Markdown
2b. Specify parameters of generative GLM-HMM

Let's update the weights and transition matrix for the true GLM-HMM so as to bring the GLM-HMM to the parameter regime that real animals use (according to Ashwood et al. (2020)):
###Code
gen_weights = np.array([[[6, 1]], [[2, -3]], [[2, 3]]])
gen_log_trans_mat = np.log(np.array([[[0.98, 0.01, 0.01], [0.05, 0.92, 0.03], [0.02, 0.03, 0.94]]]))
true_glmhmm.observations.params = gen_weights
true_glmhmm.transitions.params = gen_log_trans_mat
# Plot generative parameters:
fig = plt.figure(figsize=(8, 3), dpi=80, facecolor='w', edgecolor='k')
plt.subplot(1, 2, 1)
cols = ['#ff7f00', '#4daf4a', '#377eb8']
for k in range(num_states):
plt.plot(range(input_dim), gen_weights[k][0], marker='o',
color=cols[k], linestyle='-',
lw=1.5, label="state " + str(k+1))
plt.yticks(fontsize=10)
plt.ylabel("GLM weight", fontsize=15)
plt.xlabel("covariate", fontsize=15)
plt.xticks([0, 1], ['stimulus', 'bias'], fontsize=12, rotation=45)
plt.axhline(y=0, color="k", alpha=0.5, ls="--")
plt.legend()
plt.title("Generative weights", fontsize = 15)
plt.subplot(1, 2, 2)
gen_trans_mat = np.exp(gen_log_trans_mat)[0]
plt.imshow(gen_trans_mat, vmin=-0.8, vmax=1, cmap='bone')
for i in range(gen_trans_mat.shape[0]):
for j in range(gen_trans_mat.shape[1]):
text = plt.text(j, i, str(np.around(gen_trans_mat[i, j], decimals=2)), ha="center", va="center",
color="k", fontsize=12)
plt.xlim(-0.5, num_states - 0.5)
plt.xticks(range(0, num_states), ('1', '2', '3'), fontsize=10)
plt.yticks(range(0, num_states), ('1', '2', '3'), fontsize=10)
plt.ylim(num_states - 0.5, -0.5)
plt.ylabel("state t", fontsize = 15)
plt.xlabel("state t+1", fontsize = 15)
plt.title("Generative transition matrix", fontsize = 15)
###Output
_____no_output_____
###Markdown
2c. Create external input sequences

Simulate an example set of external inputs for each trial in a session. We will create an array of size `(num_sess x num_trials_per_sess x num_covariates)`. As in Ashwood et al. (2020), for each trial in a session we will include the stimulus presented to the animal at that trial, as well as a '1' as the second covariate (so as to capture the animal's innate bias for one of the two options available to it). We will simulate stimuli sequences so as to resemble the sequences of stimuli in the International Brain Laboratory et al. (2020) task.
###Code
num_sess = 20 # number of example sessions
num_trials_per_sess = 100 # number of trials in a session
inpts = np.ones((num_sess, num_trials_per_sess, input_dim)) # initialize inpts array
stim_vals = [-1, -0.5, -0.25, -0.125, -0.0625, 0, 0.0625, 0.125, 0.25, 0.5, 1]
inpts[:,:,0] = np.random.choice(stim_vals, (num_sess, num_trials_per_sess)) # generate random sequence of stimuli
inpts = list(inpts) #convert inpts to correct format
###Output
_____no_output_____
###Markdown
2d. Simulate states and observations with generative model
###Code
# Generate a sequence of latents and choices for each session
true_latents, true_choices = [], []
for sess in range(num_sess):
true_z, true_y = true_glmhmm.sample(num_trials_per_sess, input=inpts[sess])
true_latents.append(true_z)
true_choices.append(true_y)
# Calculate true loglikelihood
true_ll = true_glmhmm.log_probability(true_choices, inputs=inpts)
print("true ll = " + str(true_ll))
###Output
true ll = -910.4271498215511
###Markdown
3. Fit GLM-HMM and perform recovery analysis

3a. Maximum Likelihood Estimation

Now we instantiate a new GLM-HMM and check that we can recover the generative parameters in simulated data:
###Code
new_glmhmm = ssm.HMM(num_states, obs_dim, input_dim, observations="input_driven_obs",
observation_kwargs=dict(C=num_categories), transitions="standard")
N_iters = 200 # maximum number of EM iterations; fitting will stop earlier if the increase in LL falls below the tolerance specified by the tolerance parameter
fit_ll = new_glmhmm.fit(true_choices, inputs=inpts, method="em", num_iters=N_iters, tolerance=10**-4)
# Plot the log probabilities of the true and fit models. Fit model final LL should be greater
# than or equal to true LL.
fig = plt.figure(figsize=(4, 3), dpi=80, facecolor='w', edgecolor='k')
plt.plot(fit_ll, label="EM")
plt.plot([0, len(fit_ll)], true_ll * np.ones(2), ':k', label="True")
plt.legend(loc="lower right")
plt.xlabel("EM Iteration")
plt.xlim(0, len(fit_ll))
plt.ylabel("Log Probability")
plt.show()
###Output
_____no_output_____
###Markdown
3b. Retrieved parameters

Compare retrieved weights and transition matrices to generative parameters. To do this, we may first need to permute the states of the fit GLM-HMM relative to the generative model. One way to do this uses the `find_permutation` function from `ssm`:
###Code
new_glmhmm.permute(find_permutation(true_latents[0], new_glmhmm.most_likely_states(true_choices[0], input=inpts[0])))
###Output
_____no_output_____
###Markdown
Now plot generative and retrieved weights for GLMs (analogous plot to Figure S1c in Ashwood et al. (2020)):
###Code
fig = plt.figure(figsize=(4, 3), dpi=80, facecolor='w', edgecolor='k')
cols = ['#ff7f00', '#4daf4a', '#377eb8']
recovered_weights = new_glmhmm.observations.params
for k in range(num_states):
if k ==0:
plt.plot(range(input_dim), gen_weights[k][0], marker='o',
color=cols[k], linestyle='-',
lw=1.5, label="generative")
plt.plot(range(input_dim), recovered_weights[k][0], color=cols[k],
lw=1.5, label = "recovered", linestyle = '--')
else:
plt.plot(range(input_dim), gen_weights[k][0], marker='o',
color=cols[k], linestyle='-',
lw=1.5, label="")
plt.plot(range(input_dim), recovered_weights[k][0], color=cols[k],
lw=1.5, label = '', linestyle = '--')
plt.yticks(fontsize=10)
plt.ylabel("GLM weight", fontsize=15)
plt.xlabel("covariate", fontsize=15)
plt.xticks([0, 1], ['stimulus', 'bias'], fontsize=12, rotation=45)
plt.axhline(y=0, color="k", alpha=0.5, ls="--")
plt.legend()
plt.title("Weight recovery", fontsize=15)
###Output
_____no_output_____
###Markdown
Now plot generative and retrieved transition matrices (analogous plot to Figure S1c in Ashwood et al. (2020)):
###Code
fig = plt.figure(figsize=(5, 2.5), dpi=80, facecolor='w', edgecolor='k')
plt.subplot(1, 2, 1)
gen_trans_mat = np.exp(gen_log_trans_mat)[0]
plt.imshow(gen_trans_mat, vmin=-0.8, vmax=1, cmap='bone')
for i in range(gen_trans_mat.shape[0]):
for j in range(gen_trans_mat.shape[1]):
text = plt.text(j, i, str(np.around(gen_trans_mat[i, j], decimals=2)), ha="center", va="center",
color="k", fontsize=12)
plt.xlim(-0.5, num_states - 0.5)
plt.xticks(range(0, num_states), ('1', '2', '3'), fontsize=10)
plt.yticks(range(0, num_states), ('1', '2', '3'), fontsize=10)
plt.ylim(num_states - 0.5, -0.5)
plt.ylabel("state t", fontsize = 15)
plt.xlabel("state t+1", fontsize = 15)
plt.title("generative", fontsize = 15)
plt.subplot(1, 2, 2)
recovered_trans_mat = np.exp(new_glmhmm.transitions.log_Ps)
plt.imshow(recovered_trans_mat, vmin=-0.8, vmax=1, cmap='bone')
for i in range(recovered_trans_mat.shape[0]):
for j in range(recovered_trans_mat.shape[1]):
text = plt.text(j, i, str(np.around(recovered_trans_mat[i, j], decimals=2)), ha="center", va="center",
color="k", fontsize=12)
plt.xlim(-0.5, num_states - 0.5)
plt.xticks(range(0, num_states), ('1', '2', '3'), fontsize=10)
plt.yticks(range(0, num_states), ('1', '2', '3'), fontsize=10)
plt.ylim(num_states - 0.5, -0.5)
plt.title("recovered", fontsize = 15)
plt.subplots_adjust(0, 0, 1, 1)
###Output
_____no_output_____
###Markdown
3c. Posterior State Probabilities

Let's now plot $p(z_{t} = k|\mathbf{y}, \{u_{t}\}_{t=1}^{T})$, the posterior state probabilities, which give the probability of the animal being in state k at trial t.
###Code
# Get expected states:
posterior_probs = [new_glmhmm.expected_states(data=data, input=inpt)[0]
for data, inpt
in zip(true_choices, inpts)]
fig = plt.figure(figsize=(5, 2.5), dpi=80, facecolor='w', edgecolor='k')
sess_id = 0 #session id; can choose any index between 0 and num_sess-1
for k in range(num_states):
plt.plot(posterior_probs[sess_id][:, k], label="State " + str(k + 1), lw=2,
color=cols[k])
plt.ylim((-0.01, 1.01))
plt.yticks([0, 0.5, 1], fontsize = 10)
plt.xlabel("trial #", fontsize = 15)
plt.ylabel("p(state)", fontsize = 15)
###Output
_____no_output_____
###Markdown
With these posterior state probabilities, we can assign trials to states and then plot the fractional occupancy of each state:
###Code
# concatenate posterior probabilities across sessions
posterior_probs_concat = np.concatenate(posterior_probs)
# get state with maximum posterior probability at particular trial:
state_max_posterior = np.argmax(posterior_probs_concat, axis = 1)
# now obtain state fractional occupancies:
_, state_occupancies = np.unique(state_max_posterior, return_counts=True)
state_occupancies = state_occupancies/np.sum(state_occupancies)
fig = plt.figure(figsize=(2, 2.5), dpi=80, facecolor='w', edgecolor='k')
for z, occ in enumerate(state_occupancies):
plt.bar(z, occ, width = 0.8, color = cols[z])
plt.ylim((0, 1))
plt.xticks([0, 1, 2], ['1', '2', '3'], fontsize = 10)
plt.yticks([0, 0.5, 1], ['0', '0.5', '1'], fontsize=10)
plt.xlabel('state', fontsize = 15)
plt.ylabel('frac. occupancy', fontsize=15)
###Output
_____no_output_____
###Markdown
4. Fit GLM-HMM and perform recovery analysis: Maximum A Posteriori Estimation

Above, we performed Maximum Likelihood Estimation to retrieve the generative parameters of the GLM-HMM in simulated data. In the small data regime, where we do not have many trials available to us, we may instead want to perform Maximum A Posteriori (MAP) Estimation in order to incorporate a prior term and restrict the range for the best fitting parameters. Unfortunately, what is meant by 'small data regime' is problem dependent and will be affected by the number of states in the generative GLM-HMM, and the specific parameters of the generative model, amongst other things. In practice, we may perform both Maximum Likelihood Estimation and MAP estimation and compare the ability of the fit models to make predictions on held-out data (see Section 5 on Cross-Validation below).

The prior we consider for the GLM-HMM is the product of a Gaussian prior on the GLM weights, $W$, and a Dirichlet prior on the transition matrix, $A$:

$$\begin{align}\Pr(W, A) &= \mathcal{N}(W|0, \Sigma) \Pr(A|\alpha) \\&= \mathcal{N}(W|0, diag(\sigma^{2}, \cdots, \sigma^{2})) \prod_{j=1}^{K} \dfrac{1}{B(\alpha)} \prod_{k=1}^{K} A_{jk}^{\alpha -1}\end{align}$$

There are two hyperparameters controlling the strength of the prior: $\sigma$ and $\alpha$. The larger the value of $\sigma$ is, and with $\alpha = 1$, the more similar MAP estimation becomes to Maximum Likelihood Estimation: the prior term becomes an additive offset to the objective function of the GLM-HMM that is independent of the values of $W$ and $A$. In comparison, setting $\sigma = 2$ and $\alpha = 2$ will result in the prior no longer being independent of $W$ and $A$.
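The log of this prior can be evaluated directly with `scipy.stats`; the weight and transition values below are arbitrary examples, and this sketch is independent of how `ssm` computes the prior internally:

```python
import numpy as np
from scipy.stats import dirichlet, norm

def log_prior(W, A, sigma, alpha):
    # log N(W | 0, sigma^2 I) + sum_j log Dir(A_j | alpha), as in the text
    lp_w = norm.logpdf(W.ravel(), loc=0.0, scale=sigma).sum()
    K = A.shape[0]
    lp_a = sum(dirichlet.logpdf(A[j], alpha * np.ones(K)) for j in range(K))
    return lp_w + lp_a

W = np.array([[6.0, 1.0], [2.0, -3.0], [2.0, 3.0]])  # illustrative GLM weights
A = np.array([[0.98, 0.01, 0.01],
              [0.05, 0.92, 0.03],
              [0.03, 0.03, 0.94]])                   # rows must sum to 1
lp_map = log_prior(W, A, sigma=2.0, alpha=2.0)
# With alpha = 1 the Dirichlet factor is constant, so the prior no longer
# depends on A, matching the discussion above
lp_flat = log_prior(W, A, sigma=2.0, alpha=1.0)
```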
In order to perform MAP estimation for the GLM-HMM with `ssm`, the new syntax is:

```python
ssm.HMM(num_states, obs_dim, input_dim, observations="input_driven_obs",
        observation_kwargs=dict(C=num_categories, prior_sigma=prior_sigma),
        transitions="sticky", transition_kwargs=dict(alpha=prior_alpha, kappa=0))
```

where `prior_sigma` is the $\sigma$ parameter from above, and `prior_alpha` is the $\alpha$ parameter.
###Code
# Instantiate GLM-HMM and set prior hyperparameters
prior_sigma = 2
prior_alpha = 2
map_glmhmm = ssm.HMM(num_states, obs_dim, input_dim, observations="input_driven_obs",
observation_kwargs=dict(C=num_categories,prior_sigma=prior_sigma),
transitions="sticky", transition_kwargs=dict(alpha=prior_alpha,kappa=0))
# Fit GLM-HMM with MAP estimation:
_ = map_glmhmm.fit(true_choices, inputs=inpts, method="em", num_iters=N_iters, tolerance=10**-4)
###Output
_____no_output_____
###Markdown
Compare final likelihood of data with MAP estimation and MLE to likelihood under generative model (note: we cannot use log_probability that is output of `fit` function as this incorporates prior term, which is not comparable between generative and MAP models). We want to check that MAP and MLE likelihood values are higher than true likelihood; if they are not, this may indicate a poor initialization and that we should refit these models.
###Code
true_likelihood = true_glmhmm.log_likelihood(true_choices, inputs=inpts)
mle_final_ll = new_glmhmm.log_likelihood(true_choices, inputs=inpts)
map_final_ll = map_glmhmm.log_likelihood(true_choices, inputs=inpts)
# Plot these values
fig = plt.figure(figsize=(2, 2.5), dpi=80, facecolor='w', edgecolor='k')
loglikelihood_vals = [true_likelihood, mle_final_ll, map_final_ll]
colors = ['Red', 'Navy', 'Purple']
for z, occ in enumerate(loglikelihood_vals):
plt.bar(z, occ, width = 0.8, color = colors[z])
plt.ylim((true_likelihood-5, true_likelihood+15))
plt.xticks([0, 1, 2], ['true', 'mle', 'map'], fontsize = 10)
plt.xlabel('model', fontsize = 15)
plt.ylabel('loglikelihood', fontsize=15)
###Output
_____no_output_____
###Markdown
5. Cross Validation

To assess which model is better - the model fit via Maximum Likelihood Estimation, or the model fit via MAP estimation - we can investigate the predictive power of these fit models on held-out test data sets.
###Code
# Create additional input sequences to be used as held-out test data
num_test_sess = 10
test_inpts = np.ones((num_test_sess, num_trials_per_sess, input_dim))
test_inpts[:,:,0] = np.random.choice(stim_vals, (num_test_sess, num_trials_per_sess))
test_inpts = list(test_inpts)
# Create set of test latents and choices to accompany input sequences:
test_latents, test_choices = [], []
for sess in range(num_test_sess):
test_z, test_y = true_glmhmm.sample(num_trials_per_sess, input=test_inpts[sess])
test_latents.append(test_z)
test_choices.append(test_y)
# Compare likelihood of test_choices for model fit with MLE and MAP:
mle_test_ll = new_glmhmm.log_likelihood(test_choices, inputs=test_inpts)
map_test_ll = map_glmhmm.log_likelihood(test_choices, inputs=test_inpts)
fig = plt.figure(figsize=(2, 2.5), dpi=80, facecolor='w', edgecolor='k')
loglikelihood_vals = [mle_test_ll, map_test_ll]
colors = ['Navy', 'Purple']
for z, occ in enumerate(loglikelihood_vals):
plt.bar(z, occ, width = 0.8, color = colors[z])
plt.ylim((mle_test_ll-2, mle_test_ll+5))
plt.xticks([0, 1], ['mle', 'map'], fontsize = 10)
plt.xlabel('model', fontsize = 15)
plt.ylabel('loglikelihood', fontsize=15)
###Output
_____no_output_____
###Markdown
Here we see that the model fit with MAP estimation achieves higher likelihood on the held-out dataset than the model fit with MLE, so we would choose this model as the best model of animal decision-making behavior (although we'd probably want to perform multiple fold cross-validation to be sure that this is the case in all instantiations of test data). Let's finish by comparing the retrieved weights and transition matrices from MAP estimation to the generative parameters.
###Code
map_glmhmm.permute(find_permutation(true_latents[0], map_glmhmm.most_likely_states(true_choices[0], input=inpts[0])))
fig = plt.figure(figsize=(6, 3), dpi=80, facecolor='w', edgecolor='k')
cols = ['#ff7f00', '#4daf4a', '#377eb8']
plt.subplot(1,2,1)
recovered_weights = new_glmhmm.observations.params
for k in range(num_states):
if k ==0: # show labels only for first state
plt.plot(range(input_dim), gen_weights[k][0], marker='o',
color=cols[k],
lw=1.5, label="generative")
plt.plot(range(input_dim), recovered_weights[k][0], color=cols[k],
lw=1.5, label = 'recovered', linestyle='--')
else:
plt.plot(range(input_dim), gen_weights[k][0], marker='o',
color=cols[k],
lw=1.5, label="")
plt.plot(range(input_dim), recovered_weights[k][0], color=cols[k],
lw=1.5, label = '', linestyle='--')
plt.yticks(fontsize=10)
plt.ylabel("GLM weight", fontsize=15)
plt.xlabel("covariate", fontsize=15)
plt.xticks([0, 1], ['stimulus', 'bias'], fontsize=12, rotation=45)
plt.axhline(y=0, color="k", alpha=0.5, ls="--")
plt.title("MLE", fontsize = 15)
plt.legend()
plt.subplot(1,2,2)
recovered_weights = map_glmhmm.observations.params
for k in range(num_states):
plt.plot(range(input_dim), gen_weights[k][0], marker='o',
###Markdown
Input Driven Observations ("GLM-HMM")Notebook prepared by Zoe Ashwood: feel free to email me with feedback or questions (zashwood at cs dot princeton dot edu).This notebook demonstrates the "InputDrivenObservations" class, and illustrates its use in the context of modeling decision-making data as in Ashwood et al. (2020) ([Mice alternate between discrete strategies during perceptualdecision-making](https://www.biorxiv.org/content/10.1101/2020.10.19.346353v1.full.pdf)).Compared to the model considered in the notebook ["2 Input Driven HMM"](https://github.com/lindermanlab/ssm/blob/master/notebooks/2%20Input%20Driven%20HMM.ipynb), Ashwood et al. (2020) assumes a stationary transition matrix where transition probabilities *do not* depend on external inputs. However, observation probabilities now *do* depend on external covariates according to:for $c \neq C$:$$\begin{align}\Pr(y_t = c \mid z_{t} = k, u_t, w_{kc}) = \frac{\exp\{w_{kc}^\mathsf{T} u_t\}}{1+\sum_{c'=1}^{C-1} \exp\{w_{kc'}^\mathsf{T} u_t\}}\end{align}$$and for $c = C$:$$\begin{align}\Pr(y_t = c \mid z_{t} = k, u_t, w_{kc}) = \frac{1}{1+\sum_{c'=1}^{C-1} \exp\{w_{kc'}^\mathsf{T} u_t\}}\end{align}$$where $c \in \{1, ..., C\}$ indicates the categorical class for the observation, $u_{t} \in \mathbb{R}^{M}$ is the set of input covariates, and $w_{kc} \in \mathbb{R}^{M}$ is the set of input weights associated with state $k$ and class $c$. These weights, along with the transition matrix and initial state probabilities, will be learned.In Ashwood et al. (2020), $C = 2$ as $y_{t}$ represents the binary choice made by an animal during a 2AFC (2-Alternative Forced Choice) task. 
The above equations then reduce to:$$\begin{align}\Pr(y_t = 0 \mid z_{t} = k, u_t, w_{k}) = \frac{\exp\{w_{k}^\mathsf{T} u_t\}}{1 + \exp\{w_{k}^\mathsf{T} u_t\}} = \frac{1}{1 + \exp\{-w_{k}^\mathsf{T} u_t\}}.\end{align}$$$$\begin{align}\Pr(y_t = 1 \mid z_{t} = k, u_t, w_{k}) = \frac{1}{1 + \exp\{w_{k}^\mathsf{T} u_t\}}.\end{align}$$and only a single weight vector, $w_{k}$, is associated with each state. 1. SetupThe line `import ssm` imports the package for use. Here, we have also imported a few other packages for plotting.
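For intuition, the per-state Bernoulli observation model above can be evaluated in a few lines of NumPy; the weight and input values below are made up for illustration (they mimic the 'stimulus' and 'bias' covariates used later in this notebook):

```python
import numpy as np

def bernoulli_glm_prob(w_k, u_t):
    # P(y_t = 0 | z_t = k, u_t) under the logistic model above
    return 1.0 / (1.0 + np.exp(-np.dot(w_k, u_t)))

# Illustrative (made-up) values: per-state weights for [stimulus, bias],
# and one trial's input vector [stimulus value, constant 1 for the bias].
w_k = np.array([6.0, 1.0])
u_t = np.array([0.25, 1.0])
p0 = bernoulli_glm_prob(w_k, u_t)
p1 = 1.0 - p0  # P(y_t = 1) is the complement
```

With these numbers, $w_{k}^\mathsf{T} u_t = 2.5$, so the state-$k$ GLM predicts choice 0 with probability $\approx 0.92$.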
###Code
import numpy as np
import numpy.random as npr
import matplotlib.pyplot as plt
import ssm
from ssm.util import find_permutation
npr.seed(0)
###Output
_____no_output_____
###Markdown
2. Input Driven ObservationsWe create a HMM with input-driven observations and 'standard' (stationary) transitions with the following line: ```python ssm.HMM(num_states, obs_dim, input_dim, observations="input_driven_obs", observation_kwargs=dict(C=num_categories), transitions="standard")```As in Ashwood et al. (2020), we are going to model an animal's binary choice data during a decision-making task, so we will set `num_categories=2` because the animal only has two options available to it. We will also set `obs_dim = 1` because the dimensionality of the observation data is 1 (if we were also modeling, for example, the binned reaction time of the animal, we could set `obs_dim = 2`). For the sake of simplicity, we will assume that an animal's choice in a particular state is only affected by the external stimulus associated with that particular trial, and its innate choice bias. Thus, we will set `input_dim = 2` and we will simulate input data that resembles sequences of stimuli in what follows. In Ashwood et al. (2020), they found that many mice used 3 decision-making states when performing 2AFC tasks. We will, thus, set `num_states = 3`. 2a. Initialize GLM-HMM
###Code
# Set the parameters of the GLM-HMM
num_states = 3 # number of discrete states
obs_dim = 1 # number of observed dimensions
num_categories = 2 # number of categories for output
input_dim = 2 # input dimensions
# Make a GLM-HMM
true_glmhmm = ssm.HMM(num_states, obs_dim, input_dim, observations="input_driven_obs",
observation_kwargs=dict(C=num_categories), transitions="standard")
###Output
_____no_output_____
###Markdown
2b. Specify parameters of generative GLM-HMM Let's update the weights and transition matrix for the true GLM-HMM so as to bring the GLM-HMM to the parameter regime that real animals use (according to Ashwood et al. (2020)):
###Code
gen_weights = np.array([[[6, 1]], [[2, -3]], [[2, 3]]])
gen_log_trans_mat = np.log(np.array([[[0.98, 0.01, 0.01], [0.05, 0.92, 0.03], [0.03, 0.03, 0.94]]]))
true_glmhmm.observations.params = gen_weights
true_glmhmm.transitions.params = gen_log_trans_mat
# Plot generative parameters:
fig = plt.figure(figsize=(8, 3), dpi=80, facecolor='w', edgecolor='k')
plt.subplot(1, 2, 1)
cols = ['#ff7f00', '#4daf4a', '#377eb8']
for k in range(num_states):
plt.plot(range(input_dim), gen_weights[k][0], marker='o',
color=cols[k], linestyle='-',
lw=1.5, label="state " + str(k+1))
plt.yticks(fontsize=10)
plt.ylabel("GLM weight", fontsize=15)
plt.xlabel("covariate", fontsize=15)
plt.xticks([0, 1], ['stimulus', 'bias'], fontsize=12, rotation=45)
plt.axhline(y=0, color="k", alpha=0.5, ls="--")
plt.legend()
plt.title("Generative weights", fontsize = 15)
plt.subplot(1, 2, 2)
gen_trans_mat = np.exp(gen_log_trans_mat)[0]
plt.imshow(gen_trans_mat, vmin=-0.8, vmax=1, cmap='bone')
for i in range(gen_trans_mat.shape[0]):
for j in range(gen_trans_mat.shape[1]):
text = plt.text(j, i, str(np.around(gen_trans_mat[i, j], decimals=2)), ha="center", va="center",
color="k", fontsize=12)
plt.xlim(-0.5, num_states - 0.5)
plt.xticks(range(0, num_states), ('1', '2', '3'), fontsize=10)
plt.yticks(range(0, num_states), ('1', '2', '3'), fontsize=10)
plt.ylim(num_states - 0.5, -0.5)
plt.ylabel("state t", fontsize = 15)
plt.xlabel("state t+1", fontsize = 15)
plt.title("Generative transition matrix", fontsize = 15)
###Output
_____no_output_____
###Markdown
2c. Create external input sequences Simulate an example set of external inputs for each trial in a session. We will create an array of size `(num_sess x num_trials_per_sess x num_covariates)`. As in Ashwood et al. (2020), for each trial in a session we will include the stimulus presented to the animal at that trial, as well as a '1' as the second covariate (so as to capture the animal's innate bias for one of the two options available to it). We will simulate stimuli sequences so as to resemble the sequences of stimuli in the International Brain Laboratory et al. (2020) task.
###Code
num_sess = 20 # number of example sessions
num_trials_per_sess = 100 # number of trials in a session
inpts = np.ones((num_sess, num_trials_per_sess, input_dim)) # initialize inpts array
stim_vals = [-1, -0.5, -0.25, -0.125, -0.0625, 0, 0.0625, 0.125, 0.25, 0.5, 1]
inpts[:,:,0] = np.random.choice(stim_vals, (num_sess, num_trials_per_sess)) # generate random sequence of stimuli
inpts = list(inpts) # convert inpts to the correct format (a list of arrays, one per session)
###Output
_____no_output_____
###Markdown
2d. Simulate states and observations with generative model
###Code
# Generate a sequence of latents and choices for each session
true_latents, true_choices = [], []
for sess in range(num_sess):
true_z, true_y = true_glmhmm.sample(num_trials_per_sess, input=inpts[sess])
true_latents.append(true_z)
true_choices.append(true_y)
# Calculate true loglikelihood
true_ll = true_glmhmm.log_probability(true_choices, inputs=inpts)
print("true ll = " + str(true_ll))
###Output
_____no_output_____
###Markdown
3. Fit GLM-HMM and perform recovery analysis 3a. Maximum Likelihood Estimation Now we instantiate a new GLM-HMM and check that we can recover the generative parameters in simulated data:
###Code
new_glmhmm = ssm.HMM(num_states, obs_dim, input_dim, observations="input_driven_obs",
observation_kwargs=dict(C=num_categories), transitions="standard")
N_iters = 200 # maximum number of EM iterations. Fitting will stop earlier if the increase in LL is below the tolerance parameter
fit_ll = new_glmhmm.fit(true_choices, inputs=inpts, method="em", num_iters=N_iters, tolerance=10**-4)
# Plot the log probabilities of the true and fit models. Fit model final LL should be greater
# than or equal to true LL.
fig = plt.figure(figsize=(4, 3), dpi=80, facecolor='w', edgecolor='k')
plt.plot(fit_ll, label="EM")
plt.plot([0, len(fit_ll)], true_ll * np.ones(2), ':k', label="True")
plt.legend(loc="lower right")
plt.xlabel("EM Iteration")
plt.xlim(0, len(fit_ll))
plt.ylabel("Log Probability")
plt.show()
###Output
_____no_output_____
###Markdown
3b. Retrieved parameters Compare retrieved weights and transition matrices to generative parameters. To do this, we may first need to permute the states of the fit GLM-HMM relative to thegenerative model. One way to do this uses the `find_permutation` function from `ssm`:
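Matching recovered states to generative states is a small assignment problem: relabel the inferred states so that agreement with the true state sequence is maximized. `find_permutation` handles this for us; here is a brute-force sketch of the same idea (my own toy implementation, not the `ssm` one):

```python
import numpy as np
from itertools import permutations

def match_states(z_gen, z_rec, K):
    # Try every relabeling of the K recovered states and keep the one
    # that agrees with the generative state sequence most often.
    best, best_score = None, -1
    for perm in permutations(range(K)):
        score = np.sum(np.array(perm)[z_rec] == z_gen)
        if score > best_score:
            best, best_score = perm, score
    return best

z_gen = np.array([0, 0, 1, 1, 2, 2])   # generative states
z_rec = np.array([2, 2, 0, 0, 1, 1])   # same segmentation, permuted labels
perm = match_states(z_gen, z_rec, 3)
```

Brute force is fine for a handful of states; for larger `K` this is solved efficiently as a linear assignment problem on the confusion matrix of the two label sequences.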
###Code
new_glmhmm.permute(find_permutation(true_latents[0], new_glmhmm.most_likely_states(true_choices[0], input=inpts[0])))
###Output
_____no_output_____
###Markdown
Now plot generative and retrieved weights for GLMs (analogous plot to Figure S1c in Ashwood et al. (2020)):
###Code
fig = plt.figure(figsize=(4, 3), dpi=80, facecolor='w', edgecolor='k')
cols = ['#ff7f00', '#4daf4a', '#377eb8']
recovered_weights = new_glmhmm.observations.params
for k in range(num_states):
if k ==0:
plt.plot(range(input_dim), gen_weights[k][0], marker='o',
color=cols[k], linestyle='-',
lw=1.5, label="generative")
plt.plot(range(input_dim), recovered_weights[k][0], color=cols[k],
lw=1.5, label = "recovered", linestyle = '--')
else:
plt.plot(range(input_dim), gen_weights[k][0], marker='o',
color=cols[k], linestyle='-',
lw=1.5, label="")
plt.plot(range(input_dim), recovered_weights[k][0], color=cols[k],
lw=1.5, label = '', linestyle = '--')
plt.yticks(fontsize=10)
plt.ylabel("GLM weight", fontsize=15)
plt.xlabel("covariate", fontsize=15)
plt.xticks([0, 1], ['stimulus', 'bias'], fontsize=12, rotation=45)
plt.axhline(y=0, color="k", alpha=0.5, ls="--")
plt.legend()
plt.title("Weight recovery", fontsize=15)
###Output
_____no_output_____
###Markdown
Now plot generative and retrieved transition matrices (analogous plot to Figure S1c in Ashwood et al. (2020)):
###Code
fig = plt.figure(figsize=(5, 2.5), dpi=80, facecolor='w', edgecolor='k')
plt.subplot(1, 2, 1)
gen_trans_mat = np.exp(gen_log_trans_mat)[0]
plt.imshow(gen_trans_mat, vmin=-0.8, vmax=1, cmap='bone')
for i in range(gen_trans_mat.shape[0]):
for j in range(gen_trans_mat.shape[1]):
text = plt.text(j, i, str(np.around(gen_trans_mat[i, j], decimals=2)), ha="center", va="center",
color="k", fontsize=12)
plt.xlim(-0.5, num_states - 0.5)
plt.xticks(range(0, num_states), ('1', '2', '3'), fontsize=10)
plt.yticks(range(0, num_states), ('1', '2', '3'), fontsize=10)
plt.ylim(num_states - 0.5, -0.5)
plt.ylabel("state t", fontsize = 15)
plt.xlabel("state t+1", fontsize = 15)
plt.title("generative", fontsize = 15)
plt.subplot(1, 2, 2)
recovered_trans_mat = np.exp(new_glmhmm.transitions.log_Ps)
plt.imshow(recovered_trans_mat, vmin=-0.8, vmax=1, cmap='bone')
for i in range(recovered_trans_mat.shape[0]):
for j in range(recovered_trans_mat.shape[1]):
text = plt.text(j, i, str(np.around(recovered_trans_mat[i, j], decimals=2)), ha="center", va="center",
color="k", fontsize=12)
plt.xlim(-0.5, num_states - 0.5)
plt.xticks(range(0, num_states), ('1', '2', '3'), fontsize=10)
plt.yticks(range(0, num_states), ('1', '2', '3'), fontsize=10)
plt.ylim(num_states - 0.5, -0.5)
plt.title("recovered", fontsize = 15)
plt.subplots_adjust(0, 0, 1, 1)
###Output
_____no_output_____
###Markdown
3c. Posterior State Probabilities Let's now plot $p(z_{t} = k|\mathbf{y}, \{u_{t}\}_{t=1}^{T})$, the posterior state probabilities, which give the probability of the animal being in state k at trial t.
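As a reminder, `expected_states` runs the forward-backward algorithm under the hood. A toy, self-contained forward-backward pass for a stationary-transition HMM (with made-up initial probabilities, transition matrix, and per-trial likelihoods) looks like this:

```python
import numpy as np

def forward_backward(pi, A, lik):
    # pi: (K,) initial state probabilities; A: (K, K) transition matrix;
    # lik: (T, K) likelihood of each trial's observation under each state.
    T, K = lik.shape
    alpha = np.zeros((T, K))
    beta = np.ones((T, K))
    alpha[0] = pi * lik[0]
    alpha[0] /= alpha[0].sum()                  # rescale for numerical stability
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * lik[t]
        alpha[t] /= alpha[t].sum()
    for t in range(T - 2, -1, -1):
        beta[t] = A @ (beta[t + 1] * lik[t + 1])
        beta[t] /= beta[t].sum()                # per-t rescaling cancels below
    post = alpha * beta                          # proportional to p(z_t | y_{1:T})
    return post / post.sum(axis=1, keepdims=True)

pi = np.array([0.5, 0.5])
A = np.array([[0.95, 0.05], [0.10, 0.90]])      # sticky 2-state chain
lik = np.array([[0.9, 0.1], [0.8, 0.2], [0.2, 0.8]])
posterior = forward_backward(pi, A, lik)
```

In the GLM-HMM, the per-trial likelihoods `lik[t, k]` would come from the Bernoulli GLM of state `k` evaluated at that trial's input.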
###Code
# Get expected states:
posterior_probs = [new_glmhmm.expected_states(data=data, input=inpt)[0]
                   for data, inpt
                   in zip(true_choices, inpts)]
fig = plt.figure(figsize=(5, 2.5), dpi=80, facecolor='w', edgecolor='k')
sess_id = 0 #session id; can choose any index between 0 and num_sess-1
for k in range(num_states):
plt.plot(posterior_probs[sess_id][:, k], label="State " + str(k + 1), lw=2,
color=cols[k])
plt.ylim((-0.01, 1.01))
plt.yticks([0, 0.5, 1], fontsize = 10)
plt.xlabel("trial #", fontsize = 15)
plt.ylabel("p(state)", fontsize = 15)
###Output
_____no_output_____
###Markdown
With these posterior state probabilities, we can assign trials to states and then plot the fractional occupancy of each state:
###Code
# concatenate posterior probabilities across sessions
posterior_probs_concat = np.concatenate(posterior_probs)
# get state with maximum posterior probability at particular trial:
state_max_posterior = np.argmax(posterior_probs_concat, axis = 1)
# now obtain state fractional occupancies:
_, state_occupancies = np.unique(state_max_posterior, return_counts=True)
state_occupancies = state_occupancies/np.sum(state_occupancies)
fig = plt.figure(figsize=(2, 2.5), dpi=80, facecolor='w', edgecolor='k')
for z, occ in enumerate(state_occupancies):
plt.bar(z, occ, width = 0.8, color = cols[z])
plt.ylim((0, 1))
plt.xticks([0, 1, 2], ['1', '2', '3'], fontsize = 10)
plt.yticks([0, 0.5, 1], ['0', '0.5', '1'], fontsize=10)
plt.xlabel('state', fontsize = 15)
plt.ylabel('frac. occupancy', fontsize=15)
###Output
_____no_output_____
###Markdown
4. Fit GLM-HMM and perform recovery analysis: Maximum A Priori Estimation Above, we performed Maximum Likelihood Estimation to retrieve the generative parameters of the GLM-HMM in simulated data. In the small data regime, where we do not have many trials available to us, we may instead want to perform Maximum A Priori (MAP) Estimation in order to incorporate a prior term and restrict the range of the best-fitting parameters. Unfortunately, what is meant by 'small data regime' is problem dependent and will be affected by the number of states in the generative GLM-HMM, and the specific parameters of the generative model, amongst other things. In practice, we may perform both Maximum Likelihood Estimation and MAP estimation and compare the ability of the fit models to make predictions on held-out data (see Section 5 on Cross-Validation below). The prior we consider for the GLM-HMM is the product of a Gaussian prior on the GLM weights, $W$, and a Dirichlet prior on the transition matrix, $A$:$$\begin{align}\Pr(W, A) &= \mathcal{N}(W|0, \Sigma) \Pr(A|\alpha) \\&= \mathcal{N}(W|0, \mathrm{diag}(\sigma^{2}, \cdots, \sigma^{2})) \prod_{j=1}^{K} \dfrac{1}{B(\alpha)} \prod_{k=1}^{K} A_{jk}^{\alpha -1}\end{align}$$There are two hyperparameters controlling the strength of the prior: $\sigma$ and $\alpha$. The larger the value of $\sigma$ (with $\alpha = 1$), the more similar MAP estimation becomes to Maximum Likelihood Estimation: the prior term becomes an additive offset to the objective function of the GLM-HMM that is independent of the values of $W$ and $A$. In comparison, setting $\sigma = 2$ and $\alpha = 2$ results in a prior that is no longer independent of $W$ and $A$.
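To make the role of the hyperparameters concrete, here is a sketch (my own, not `ssm` internals) that evaluates the log of this prior directly; note that with $\alpha = 1$ the Dirichlet term takes the same value for every valid transition matrix, which is why the prior then acts only as a constant offset:

```python
import numpy as np
from math import lgamma

def log_gaussian_prior(W, sigma):
    # independent N(0, sigma^2) prior on every GLM weight
    return np.sum(-0.5 * (W / sigma) ** 2 - 0.5 * np.log(2 * np.pi * sigma ** 2))

def log_dirichlet_prior(A, alpha):
    # symmetric Dirichlet(alpha) prior on each row of the transition matrix
    K = A.shape[1]
    log_B = K * lgamma(alpha) - lgamma(K * alpha)   # log normalizer per row
    return np.sum((alpha - 1) * np.log(A)) - A.shape[0] * log_B

A1 = np.array([[0.98, 0.01, 0.01], [0.05, 0.92, 0.03], [0.03, 0.03, 0.94]])
A2 = np.ones((3, 3)) / 3
# With alpha = 1 the Dirichlet prior is flat: both matrices score identically.
flat_diff = log_dirichlet_prior(A1, 1.0) - log_dirichlet_prior(A2, 1.0)
```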
In order to perform MAP estimation for the GLM-HMM with `ssm`, the new syntax is:```python ssm.HMM(num_states, obs_dim, input_dim, observations="input_driven_obs", observation_kwargs=dict(C=num_categories, prior_sigma=prior_sigma), transitions="sticky", transition_kwargs=dict(alpha=prior_alpha, kappa=0))```where `prior_sigma` is the $\sigma$ parameter from above, and `prior_alpha` is the $\alpha$ parameter.
###Code
# Instantiate GLM-HMM and set prior hyperparameters
prior_sigma = 2
prior_alpha = 2
map_glmhmm = ssm.HMM(num_states, obs_dim, input_dim, observations="input_driven_obs",
observation_kwargs=dict(C=num_categories,prior_sigma=prior_sigma),
transitions="sticky", transition_kwargs=dict(alpha=prior_alpha,kappa=0))
# Fit GLM-HMM with MAP estimation:
_ = map_glmhmm.fit(true_choices, inputs=inpts, method="em", num_iters=N_iters, tolerance=10**-4)
###Output
_____no_output_____
###Markdown
Compare the final likelihood of the data under MAP estimation and MLE to the likelihood under the generative model (note: we cannot use the log probability returned by the `fit` function, as it incorporates the prior term, which is not comparable between the generative and MAP models). We want to check that the MAP and MLE likelihood values are higher than the true likelihood; if they are not, this may indicate a poor initialization, and we should refit these models.
###Code
true_likelihood = true_glmhmm.log_likelihood(true_choices, inputs=inpts)
mle_final_ll = new_glmhmm.log_likelihood(true_choices, inputs=inpts)
map_final_ll = map_glmhmm.log_likelihood(true_choices, inputs=inpts)
# Plot these values
fig = plt.figure(figsize=(2, 2.5), dpi=80, facecolor='w', edgecolor='k')
loglikelihood_vals = [true_likelihood, mle_final_ll, map_final_ll]
colors = ['Red', 'Navy', 'Purple']
for z, occ in enumerate(loglikelihood_vals):
plt.bar(z, occ, width = 0.8, color = colors[z])
plt.ylim((true_likelihood-5, true_likelihood+15))
plt.xticks([0, 1, 2], ['true', 'mle', 'map'], fontsize = 10)
plt.xlabel('model', fontsize = 15)
plt.ylabel('loglikelihood', fontsize=15)
###Output
_____no_output_____
###Markdown
5. Cross Validation To assess which model is better - the model fit via Maximum Likelihood Estimation, or the model fit via MAP estimation - we can investigate the predictive power of these fit models on held-out test data sets.
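One practical note: raw test log-likelihoods depend on the number of test trials, so it is common (as in Ashwood et al. (2020)) to normalize by trial count and report an improvement over some baseline model in bits per trial. A hypothetical helper (the choice of baseline model is up to you):

```python
import numpy as np

def bits_per_trial(test_ll, baseline_ll, num_trials):
    # normalized improvement of a model over a baseline, in bits per trial
    return (test_ll - baseline_ll) / (num_trials * np.log(2))
```

A natural baseline here would be a model that predicts each choice with its overall frequency in the training data.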
###Code
# Create additional input sequences to be used as held-out test data
num_test_sess = 10
test_inpts = np.ones((num_test_sess, num_trials_per_sess, input_dim))
test_inpts[:,:,0] = np.random.choice(stim_vals, (num_test_sess, num_trials_per_sess))
test_inpts = list(test_inpts)
# Create set of test latents and choices to accompany input sequences:
test_latents, test_choices = [], []
for sess in range(num_test_sess):
test_z, test_y = true_glmhmm.sample(num_trials_per_sess, input=test_inpts[sess])
test_latents.append(test_z)
test_choices.append(test_y)
# Compare likelihood of test_choices for model fit with MLE and MAP:
mle_test_ll = new_glmhmm.log_likelihood(test_choices, inputs=test_inpts)
map_test_ll = map_glmhmm.log_likelihood(test_choices, inputs=test_inpts)
fig = plt.figure(figsize=(2, 2.5), dpi=80, facecolor='w', edgecolor='k')
loglikelihood_vals = [mle_test_ll, map_test_ll]
colors = ['Navy', 'Purple']
for z, occ in enumerate(loglikelihood_vals):
plt.bar(z, occ, width = 0.8, color = colors[z])
plt.ylim((mle_test_ll-2, mle_test_ll+5))
plt.xticks([0, 1], ['mle', 'map'], fontsize = 10)
plt.xlabel('model', fontsize = 15)
plt.ylabel('loglikelihood', fontsize=15)
###Output
_____no_output_____
###Markdown
Here we see that the model fit with MAP estimation achieves a higher likelihood on the held-out dataset than the model fit with MLE, so we would choose this model as the best model of the animal's decision-making behavior (although we'd probably want to perform multi-fold cross-validation to be sure that this holds across instantiations of the test data). Let's finish by comparing the retrieved weights and transition matrices from MAP estimation to the generative parameters.
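The multi-fold cross-validation mentioned above is most naturally done at the level of sessions. A minimal sketch of a session-level fold assignment (my own scheme, not from the paper):

```python
import numpy as np

def session_folds(num_sess, num_folds, seed=0):
    # Randomly partition session indices into num_folds disjoint test sets.
    order = np.random.RandomState(seed).permutation(num_sess)
    return [sorted(order[f::num_folds].tolist()) for f in range(num_folds)]

folds = session_folds(20, 5)  # e.g. 5 folds over our 20 sessions
```

For each fold, one would fit on the held-in sessions, evaluate `log_likelihood` on the held-out sessions, and then average across folds.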
###Code
map_glmhmm.permute(find_permutation(true_latents[0], map_glmhmm.most_likely_states(true_choices[0], input=inpts[0])))
fig = plt.figure(figsize=(6, 3), dpi=80, facecolor='w', edgecolor='k')
cols = ['#ff7f00', '#4daf4a', '#377eb8']
plt.subplot(1,2,1)
recovered_weights = new_glmhmm.observations.params
for k in range(num_states):
if k ==0: # show labels only for first state
plt.plot(range(input_dim), gen_weights[k][0], marker='o',
color=cols[k],
lw=1.5, label="generative")
plt.plot(range(input_dim), recovered_weights[k][0], color=cols[k],
lw=1.5, label = 'recovered', linestyle='--')
else:
plt.plot(range(input_dim), gen_weights[k][0], marker='o',
color=cols[k],
lw=1.5, label="")
plt.plot(range(input_dim), recovered_weights[k][0], color=cols[k],
lw=1.5, label = '', linestyle='--')
plt.yticks(fontsize=10)
plt.ylabel("GLM weight", fontsize=15)
plt.xlabel("covariate", fontsize=15)
plt.xticks([0, 1], ['stimulus', 'bias'], fontsize=12, rotation=45)
plt.axhline(y=0, color="k", alpha=0.5, ls="--")
plt.title("MLE", fontsize = 15)
plt.legend()
plt.subplot(1,2,2)
recovered_weights = map_glmhmm.observations.params
for k in range(num_states):
plt.plot(range(input_dim), gen_weights[k][0], marker='o',
color=cols[k],
lw=1.5, label="", linestyle = '-')
plt.plot(range(input_dim), recovered_weights[k][0], color=cols[k],
lw=1.5, label = '', linestyle='--')
plt.yticks(fontsize=10)
plt.xticks([0, 1], ['', ''], fontsize=12, rotation=45)
plt.axhline(y=0, color="k", alpha=0.5, ls="--")
plt.title("MAP", fontsize = 15)
fig = plt.figure(figsize=(7, 2.5), dpi=80, facecolor='w', edgecolor='k')
plt.subplot(1, 3, 1)
gen_trans_mat = np.exp(gen_log_trans_mat)[0]
plt.imshow(gen_trans_mat, vmin=-0.8, vmax=1, cmap='bone')
for i in range(gen_trans_mat.shape[0]):
for j in range(gen_trans_mat.shape[1]):
text = plt.text(j, i, str(np.around(gen_trans_mat[i, j], decimals=2)), ha="center", va="center",
color="k", fontsize=12)
plt.xlim(-0.5, num_states - 0.5)
plt.xticks(range(0, num_states), ('1', '2', '3'), fontsize=10)
plt.yticks(range(0, num_states), ('1', '2', '3'), fontsize=10)
plt.ylim(num_states - 0.5, -0.5)
plt.ylabel("state t", fontsize = 15)
plt.xlabel("state t+1", fontsize = 15)
plt.title("generative", fontsize = 15)
plt.subplot(1, 3, 2)
recovered_trans_mat = np.exp(new_glmhmm.transitions.log_Ps)
plt.imshow(recovered_trans_mat, vmin=-0.8, vmax=1, cmap='bone')
for i in range(recovered_trans_mat.shape[0]):
for j in range(recovered_trans_mat.shape[1]):
text = plt.text(j, i, str(np.around(recovered_trans_mat[i, j], decimals=2)), ha="center", va="center",
color="k", fontsize=12)
plt.xlim(-0.5, num_states - 0.5)
plt.xticks(range(0, num_states), ('1', '2', '3'), fontsize=10)
plt.yticks(range(0, num_states), ('1', '2', '3'), fontsize=10)
plt.ylim(num_states - 0.5, -0.5)
plt.title("recovered - MLE", fontsize = 15)
plt.subplots_adjust(0, 0, 1, 1)
plt.subplot(1, 3, 3)
recovered_trans_mat = np.exp(map_glmhmm.transitions.log_Ps)
plt.imshow(recovered_trans_mat, vmin=-0.8, vmax=1, cmap='bone')
for i in range(recovered_trans_mat.shape[0]):
for j in range(recovered_trans_mat.shape[1]):
text = plt.text(j, i, str(np.around(recovered_trans_mat[i, j], decimals=2)), ha="center", va="center",
color="k", fontsize=12)
plt.xlim(-0.5, num_states - 0.5)
plt.xticks(range(0, num_states), ('1', '2', '3'), fontsize=10)
plt.yticks(range(0, num_states), ('1', '2', '3'), fontsize=10)
plt.ylim(num_states - 0.5, -0.5)
plt.title("recovered - MAP", fontsize = 15)
plt.subplots_adjust(0, 0, 1, 1)
###Output
_____no_output_____
###Markdown
6. Multinomial GLM-HMM Until now, we have only considered the case where there are 2 output classes (the Bernoulli GLM-HMM corresponding to `C=num_categories=2`), yet the `ssm` framework is sufficiently general to allow us to fit the multinomial GLM-HMM described in Equations 1 and 2. Here we demonstrate a recovery analysis for the multinomial GLM-HMM.
###Code
# Set the parameters of the GLM-HMM
num_states = 4 # number of discrete states
obs_dim = 1 # number of observed dimensions
num_categories = 3 # number of categories for output
input_dim = 2 # input dimensions
# Make a GLM-HMM
true_glmhmm = ssm.HMM(num_states, obs_dim, input_dim, observations="input_driven_obs",
observation_kwargs=dict(C=num_categories), transitions="standard")
# Set weights of multinomial GLM-HMM
gen_weights = np.array([[[0.6,3], [2,3]], [[6,1], [6,-2]], [[1,1], [3,1]], [[2,2], [0,5]]])
print(gen_weights.shape)
true_glmhmm.observations.params = gen_weights
###Output
_____no_output_____
###Markdown
In the above, notice that the shape of the weights for the multinomial GLM-HMM is `(num_states, num_categories-1, input_dim)`. Specifically, we only learn `num_categories-1` weight vectors (of size `input_dim`) for a given state, and we set the weights for the other observation class to zero. Constraining the weight vectors for one class is important if we want to be able to identify the generative weights in simulated data. If we didn't do this, it is easy to see that one could generate the same observation probabilities with a set of weight vectors that are offset by a constant vector $w_{k}$ (the index $k$ indicates that a different offset vector could exist per state):$$\begin{align}\Pr(y_t = c \mid z_{t} = k, u_t, w_{kc}) = \frac{\exp\{w_{kc}^\mathsf{T} u_t\}}{\sum_{c'=1}^C \exp\{w_{kc'}^\mathsf{T} u_t\}} = \frac{\exp\{(w_{kc}-w_{k})^\mathsf{T} u_t\}}{\sum_{c'=1}^C \exp\{(w_{kc'}-w_{k})^\mathsf{T} u_t\}}\end{align}$$Equations 1 and 2 at the top of this notebook already take into account the fact that the weights for a particular class for a given state are fixed to zero (this is why $c = C$ is handled differently).
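To make the zero-constraint concrete, here is a sketch of the per-state class probabilities with the $C$-th class's weights fixed to zero, along with a check that shifting every class's logits by a common constant leaves the probabilities unchanged (all numbers are made up):

```python
import numpy as np

def multinomial_glm_probs(W_k, u_t):
    # W_k has shape (C-1, M); the C-th class implicitly has zero weights.
    logits = np.append(W_k @ u_t, 0.0)   # append the fixed zero logit
    logits -= logits.max()               # subtract max for numerical stability
    p = np.exp(logits)
    return p / p.sum()

W_k = np.array([[0.6, 3.0], [2.0, 3.0]])   # made-up (C-1, M) = (2, 2) weights
u_t = np.array([0.5, 1.0])                 # stimulus and bias input
p = multinomial_glm_probs(W_k, u_t)

# Shifting every class's logit by the same constant changes nothing,
# which is exactly the identifiability issue the zero-constraint removes:
shifted = np.append(W_k @ u_t, 0.0) + 7.0
q = np.exp(shifted - shifted.max())
q = q / q.sum()
```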
###Code
# Set transition matrix of multinomial GLM-HMM
gen_log_trans_mat = np.log(np.array([[[0.90, 0.04, 0.05, 0.01], [0.05, 0.92, 0.01, 0.02], [0.03, 0.02, 0.94, 0.01], [0.09, 0.01, 0.01, 0.89]]]))
true_glmhmm.transitions.params = gen_log_trans_mat
# Create external inputs sequence; compared to the example above, we will increase the number of examples
# (through the "num_trials_per_sess" parameter) since the number of parameters has increased
num_sess = 20 # number of example sessions
num_trials_per_sess = 1000 # number of trials in a session
inpts = np.ones((num_sess, num_trials_per_sess, input_dim)) # initialize inpts array
stim_vals = [-1, -0.5, -0.25, -0.125, -0.0625, 0, 0.0625, 0.125, 0.25, 0.5, 1]
inpts[:,:,0] = np.random.choice(stim_vals, (num_sess, num_trials_per_sess)) # generate random sequence of stimuli
inpts = list(inpts)
# Generate a sequence of latents and choices for each session
true_latents, true_choices = [], []
for sess in range(num_sess):
true_z, true_y = true_glmhmm.sample(num_trials_per_sess, input=inpts[sess])
true_latents.append(true_z)
true_choices.append(true_y)
# plot example data:
fig = plt.figure(figsize=(8, 3), dpi=80, facecolor='w', edgecolor='k')
plt.step(range(100),true_choices[0][range(100)], color = "red")
plt.yticks([0, 1, 2])
plt.title("example data (multinomial GLM-HMM)")
plt.xlabel("trial #", fontsize = 15)
plt.ylabel("observation class", fontsize = 15)
# Calculate true loglikelihood
true_ll = true_glmhmm.log_probability(true_choices, inputs=inpts)
print("true ll = " + str(true_ll))
# fit GLM-HMM
new_glmhmm = ssm.HMM(num_states, obs_dim, input_dim, observations="input_driven_obs",
observation_kwargs=dict(C=num_categories), transitions="standard")
N_iters = 500 # maximum number of EM iterations. Fitting will stop earlier if the increase in LL is below the tolerance parameter
fit_ll = new_glmhmm.fit(true_choices, inputs=inpts, method="em", num_iters=N_iters, tolerance=10**-4)
# Plot the log probabilities of the true and fit models. Fit model final LL should be greater
# than or equal to true LL.
fig = plt.figure(figsize=(4, 3), dpi=80, facecolor='w', edgecolor='k')
plt.plot(fit_ll, label="EM")
plt.plot([0, len(fit_ll)], true_ll * np.ones(2), ':k', label="True")
plt.legend(loc="lower right")
plt.xlabel("EM Iteration")
plt.xlim(0, len(fit_ll))
plt.ylabel("Log Probability")
plt.show()
# permute recovered state identities to match state identities of generative model
new_glmhmm.permute(find_permutation(true_latents[0], new_glmhmm.most_likely_states(true_choices[0], input=inpts[0])))
# Plot recovered parameters:
recovered_weights = new_glmhmm.observations.params
recovered_transitions = new_glmhmm.transitions.params
fig = plt.figure(figsize=(16, 8), dpi=80, facecolor='w', edgecolor='k')
plt.subplots_adjust(wspace=0.3, hspace=0.6)
plt.subplot(2, 2, 1)
cols = ['#ff7f00', '#4daf4a', '#377eb8', '#f781bf', '#a65628', '#984ea3', '#999999', '#e41a1c', '#dede00']
for c in range(num_categories):
plt.subplot(2, num_categories+1, c+1)
if c < num_categories-1:
for k in range(num_states):
plt.plot(range(input_dim), gen_weights[k,c], marker='o',
color=cols[k], lw=1.5, label="state " + str(k+1) + "; class " + str(c+1))
else:
for k in range(num_states):
plt.plot(range(input_dim), np.zeros(input_dim), marker='o',
color=cols[k], lw=1.5, label="state " + str(k+1) + "; class " + str(c+1), alpha = 0.5)
plt.axhline(y=0, color="k", alpha=0.5, ls="--")
plt.yticks(fontsize=10)
plt.xticks([0, 1], ['', ''])
if c == 0:
plt.ylabel("GLM weight", fontsize=15)
plt.legend()
plt.title("Generative weights; class " + str(c+1), fontsize = 15)
plt.ylim((-3, 10))
plt.subplot(2, num_categories+1, num_categories+1)
gen_trans_mat = np.exp(gen_log_trans_mat)[0]
plt.imshow(gen_trans_mat, vmin=-0.8, vmax=1, cmap='bone')
for i in range(gen_trans_mat.shape[0]):
for j in range(gen_trans_mat.shape[1]):
text = plt.text(j, i, str(np.around(gen_trans_mat[i, j], decimals=2)), ha="center", va="center",
color="k", fontsize=12)
plt.xlim(-0.5, num_states - 0.5)
plt.xticks(range(0, num_states), ('1', '2', '3', '4'), fontsize=10)
plt.yticks(range(0, num_states), ('1', '2', '3', '4'), fontsize=10)
plt.ylim(num_states - 0.5, -0.5)
plt.ylabel("state t", fontsize = 15)
plt.xlabel("state t+1", fontsize = 15)
plt.title("Generative transition matrix", fontsize = 15)
cols = ['#ff7f00', '#4daf4a', '#377eb8', '#f781bf', '#a65628', '#984ea3', '#999999', '#e41a1c', '#dede00']
for c in range(num_categories):
plt.subplot(2, num_categories+1, num_categories + c + 2)
if c < num_categories-1:
for k in range(num_states):
plt.plot(range(input_dim), recovered_weights[k,c], marker='o', linestyle = '--',
color=cols[k], lw=1.5, label="state " + str(k+1) + "; class " + str(c+1))
else:
for k in range(num_states):
plt.plot(range(input_dim), np.zeros(input_dim), marker='o', linestyle = '--',
color=cols[k], lw=1.5, label="state " + str(k+1) + "; class " + str(c+1), alpha = 0.5)
plt.axhline(y=0, color="k", alpha=0.5, ls="--")
plt.yticks(fontsize=10)
plt.xlabel("covariate", fontsize=15)
if c == 0:
plt.ylabel("GLM weight", fontsize=15)
plt.xticks([0, 1], ['stimulus', 'bias'], fontsize=12, rotation=45)
plt.legend()
plt.title("Recovered weights; class " + str(c+1), fontsize = 15)
plt.ylim((-3,10))
plt.subplot(2, num_categories+1, 2*num_categories+2)
recovered_trans_mat = np.exp(recovered_transitions)[0]
plt.imshow(recovered_trans_mat, vmin=-0.8, vmax=1, cmap='bone')
for i in range(recovered_trans_mat.shape[0]):
for j in range(recovered_trans_mat.shape[1]):
text = plt.text(j, i, str(np.around(recovered_trans_mat[i, j], decimals=2)), ha="center", va="center",
color="k", fontsize=12)
plt.xlim(-0.5, num_states - 0.5)
plt.xticks(range(0, num_states), ('1', '2', '3', '4'), fontsize=10)
plt.yticks(range(0, num_states), ('1', '2', '3', '4'), fontsize=10)
plt.ylim(num_states - 0.5, -0.5)
plt.ylabel("state t", fontsize = 15)
plt.xlabel("state t+1", fontsize = 15)
plt.title("Recovered transition matrix", fontsize = 15)
###Output
_____no_output_____
###Markdown
Input Driven Observations ("GLM-HMM")Notebook prepared by Zoe Ashwood: feel free to email me with feedback or questions (zashwood at cs dot princeton dot edu).This notebook demonstrates the "InputDrivenObservations" class, and illustrates its use in the context of modeling decision-making data as in Ashwood et al. (2020) ([Mice alternate between discrete strategies during perceptualdecision-making](https://www.biorxiv.org/content/10.1101/2020.10.19.346353v1.full.pdf)).Compared to the model considered in the notebook ["2 Input Driven HMM"](https://github.com/lindermanlab/ssm/blob/master/notebooks/2%20Input%20Driven%20HMM.ipynb), Ashwood et al. (2020) assumes a stationary transition matrix where transition probabilities *do not* depend on external inputs. However, observation probabilities now *do* depend on external covariates according to:for $c \neq C$:$$\begin{align}\Pr(y_t = c \mid z_{t} = k, u_t, w_{kc}) = \frac{\exp\{w_{kc}^\mathsf{T} u_t\}}{1+\sum_{c'=1}^{C-1} \exp\{w_{kc'}^\mathsf{T} u_t\}}\end{align}$$and for $c = C$:$$\begin{align}\Pr(y_t = c \mid z_{t} = k, u_t, w_{kc}) = \frac{1}{1+\sum_{c'=1}^{C-1} \exp\{w_{kc'}^\mathsf{T} u_t\}}\end{align}$$where $c \in \{1, ..., C\}$ indicates the categorical class for the observation, $u_{t} \in \mathbb{R}^{M}$ is the set of input covariates, and $w_{kc} \in \mathbb{R}^{M}$ is the set of input weights associated with state $k$ and class $c$. These weights, along with the transition matrix and initial state probabilities, will be learned.In Ashwood et al. (2020), $C = 2$ as $y_{t}$ represents the binary choice made by an animal during a 2AFC (2-Alternative Forced Choice) task. 
The above equations then reduce to:$$\begin{align}\Pr(y_t = 0 \mid z_{t} = k, u_t, w_{k}) = \frac{\exp\{w_{k}^\mathsf{T} u_t\}}{1 + \exp\{w_{k}^\mathsf{T} u_t\}} = \frac{1}{1 + \exp\{-w_{k}^\mathsf{T} u_t\}}.\end{align}$$$$\begin{align}\Pr(y_t = 1 \mid z_{t} = k, u_t, w_{k}) = \frac{1}{1 + \exp\{w_{k}^\mathsf{T} u_t\}}.\end{align}$$and only a single weight vector, $w_{k}$, is associated with each state. 1. SetupThe line `import ssm` imports the package for use. Here, we have also imported a few other packages for plotting.
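As a quick stand-alone sanity check of the two Bernoulli equations above (pure Python, independent of `ssm`; the weight and input values are made up for illustration):

```python
import math

def bernoulli_glm_probs(w, u):
    """Return (P(y=0), P(y=1)) for one state of the Bernoulli GLM-HMM.

    Implements P(y=0) = 1 / (1 + exp(-w^T u)) and P(y=1) = 1 - P(y=0),
    matching the two equations above.
    """
    a = sum(wi * ui for wi, ui in zip(w, u))  # w^T u
    p0 = 1.0 / (1.0 + math.exp(-a))
    return p0, 1.0 - p0

# Illustrative values: stimulus weight 6, bias weight 1,
# stimulus 0.25, constant bias input 1.
p0, p1 = bernoulli_glm_probs([6.0, 1.0], [0.25, 1.0])
```

Because both classes share a single weight vector $w_{k}$, the two probabilities necessarily sum to one.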
###Code
import numpy as np
import numpy.random as npr
import matplotlib.pyplot as plt
import ssm
from ssm.util import find_permutation
npr.seed(0)
###Output
_____no_output_____
###Markdown
2. Input Driven Observations We create an HMM with input-driven observations and 'standard' (stationary) transitions with the following line: ```python ssm.HMM(num_states, obs_dim, input_dim, observations="input_driven_obs", observation_kwargs=dict(C=num_categories), transitions="standard")```As in Ashwood et al. (2020), we are going to model an animal's binary choice data during a decision-making task, so we will set `num_categories=2` because the animal only has two options available to it. We will also set `obs_dim = 1` because the dimensionality of the observation data is 1 (if we were also modeling, for example, the binned reaction time of the animal, we could set `obs_dim = 2`). For the sake of simplicity, we will assume that an animal's choice in a particular state is only affected by the external stimulus associated with that particular trial, and its innate choice bias. Thus, we will set `input_dim = 2` and we will simulate input data that resembles sequences of stimuli in what follows. Ashwood et al. (2020) found that many mice used 3 decision-making states when performing 2AFC tasks. We will, thus, set `num_states = 3`. 2a. Initialize GLM-HMM
###Code
# Set the parameters of the GLM-HMM
num_states = 3 # number of discrete states
obs_dim = 1 # number of observed dimensions
num_categories = 2 # number of categories for output
input_dim = 2 # input dimensions
# Make a GLM-HMM
true_glmhmm = ssm.HMM(num_states, obs_dim, input_dim, observations="input_driven_obs",
observation_kwargs=dict(C=num_categories), transitions="standard")
###Output
_____no_output_____
###Markdown
2b. Specify parameters of generative GLM-HMM Let's update the weights and transition matrix for the true GLM-HMM so as to bring the GLM-HMM to the parameter regime that real animals use (according to Ashwood et al. (2020)):
###Code
gen_weights = np.array([[[6, 1]], [[2, -3]], [[2, 3]]])
gen_log_trans_mat = np.log(np.array([[[0.98, 0.01, 0.01], [0.05, 0.92, 0.03], [0.03, 0.03, 0.94]]]))
true_glmhmm.observations.params = gen_weights
true_glmhmm.transitions.params = gen_log_trans_mat
# Plot generative parameters:
fig = plt.figure(figsize=(8, 3), dpi=80, facecolor='w', edgecolor='k')
plt.subplot(1, 2, 1)
cols = ['#ff7f00', '#4daf4a', '#377eb8']
for k in range(num_states):
plt.plot(range(input_dim), gen_weights[k][0], marker='o',
color=cols[k], linestyle='-',
lw=1.5, label="state " + str(k+1))
plt.yticks(fontsize=10)
plt.ylabel("GLM weight", fontsize=15)
plt.xlabel("covariate", fontsize=15)
plt.xticks([0, 1], ['stimulus', 'bias'], fontsize=12, rotation=45)
plt.axhline(y=0, color="k", alpha=0.5, ls="--")
plt.legend()
plt.title("Generative weights", fontsize = 15)
plt.subplot(1, 2, 2)
gen_trans_mat = np.exp(gen_log_trans_mat)[0]
plt.imshow(gen_trans_mat, vmin=-0.8, vmax=1, cmap='bone')
for i in range(gen_trans_mat.shape[0]):
for j in range(gen_trans_mat.shape[1]):
text = plt.text(j, i, str(np.around(gen_trans_mat[i, j], decimals=2)), ha="center", va="center",
color="k", fontsize=12)
plt.xlim(-0.5, num_states - 0.5)
plt.xticks(range(0, num_states), ('1', '2', '3'), fontsize=10)
plt.yticks(range(0, num_states), ('1', '2', '3'), fontsize=10)
plt.ylim(num_states - 0.5, -0.5)
plt.ylabel("state t", fontsize = 15)
plt.xlabel("state t+1", fontsize = 15)
plt.title("Generative transition matrix", fontsize = 15)
###Output
_____no_output_____
###Markdown
2c. Create external input sequences Simulate an example set of external inputs for each trial in a session. We will create an array of size `(num_sess x num_trials_per_sess x num_covariates)`. As in Ashwood et al. (2020), for each trial in a session we will include the stimulus presented to the animal at that trial, as well as a '1' as the second covariate (so as to capture the animal's innate bias for one of the two options available to it). We will simulate stimuli sequences so as to resemble the sequences of stimuli in the International Brain Laboratory et al. (2020) task.
###Code
num_sess = 20 # number of example sessions
num_trials_per_sess = 100 # number of trials in a session
inpts = np.ones((num_sess, num_trials_per_sess, input_dim)) # initialize inpts array
stim_vals = [-1, -0.5, -0.25, -0.125, -0.0625, 0, 0.0625, 0.125, 0.25, 0.5, 1]
inpts[:,:,0] = np.random.choice(stim_vals, (num_sess, num_trials_per_sess)) # generate random sequence of stimuli
inpts = list(inpts) #convert inpts to correct format
###Output
_____no_output_____
###Markdown
2d. Simulate states and observations with generative model
###Code
# Generate a sequence of latents and choices for each session
true_latents, true_choices = [], []
for sess in range(num_sess):
true_z, true_y = true_glmhmm.sample(num_trials_per_sess, input=inpts[sess])
true_latents.append(true_z)
true_choices.append(true_y)
# Calculate true loglikelihood
true_ll = true_glmhmm.log_probability(true_choices, inputs=inpts)
print("true ll = " + str(true_ll))
###Output
true ll = -900.7834782398646
###Markdown
3. Fit GLM-HMM and perform recovery analysis 3a. Maximum Likelihood Estimation Now we instantiate a new GLM-HMM and check that we can recover the generative parameters in simulated data:
###Code
new_glmhmm = ssm.HMM(num_states, obs_dim, input_dim, observations="input_driven_obs",
observation_kwargs=dict(C=num_categories), transitions="standard")
N_iters = 200 # maximum number of EM iterations. Fitting will stop earlier if the increase in LL is below the tolerance specified by the tolerance parameter
fit_ll = new_glmhmm.fit(true_choices, inputs=inpts, method="em", num_iters=N_iters, tolerance=10**-4)
# Plot the log probabilities of the true and fit models. Fit model final LL should be greater
# than or equal to true LL.
fig = plt.figure(figsize=(4, 3), dpi=80, facecolor='w', edgecolor='k')
plt.plot(fit_ll, label="EM")
plt.plot([0, len(fit_ll)], true_ll * np.ones(2), ':k', label="True")
plt.legend(loc="lower right")
plt.xlabel("EM Iteration")
plt.xlim(0, len(fit_ll))
plt.ylabel("Log Probability")
plt.show()
###Output
_____no_output_____
###Markdown
3b. Retrieved parameters Compare retrieved weights and transition matrices to generative parameters. To do this, we may first need to permute the states of the fit GLM-HMM relative to the generative model. One way to do this uses the `find_permutation` function from `ssm`:
###Code
new_glmhmm.permute(find_permutation(true_latents[0], new_glmhmm.most_likely_states(true_choices[0], input=inpts[0])))
###Output
_____no_output_____
###Markdown
Now plot generative and retrieved weights for GLMs (analogous plot to Figure S1c in Ashwood et al. (2020)):
###Code
fig = plt.figure(figsize=(4, 3), dpi=80, facecolor='w', edgecolor='k')
cols = ['#ff7f00', '#4daf4a', '#377eb8']
recovered_weights = new_glmhmm.observations.params
for k in range(num_states):
if k ==0:
plt.plot(range(input_dim), gen_weights[k][0], marker='o',
color=cols[k], linestyle='-',
lw=1.5, label="generative")
plt.plot(range(input_dim), recovered_weights[k][0], color=cols[k],
lw=1.5, label = "recovered", linestyle = '--')
else:
plt.plot(range(input_dim), gen_weights[k][0], marker='o',
color=cols[k], linestyle='-',
lw=1.5, label="")
plt.plot(range(input_dim), recovered_weights[k][0], color=cols[k],
lw=1.5, label = '', linestyle = '--')
plt.yticks(fontsize=10)
plt.ylabel("GLM weight", fontsize=15)
plt.xlabel("covariate", fontsize=15)
plt.xticks([0, 1], ['stimulus', 'bias'], fontsize=12, rotation=45)
plt.axhline(y=0, color="k", alpha=0.5, ls="--")
plt.legend()
plt.title("Weight recovery", fontsize=15)
###Output
_____no_output_____
###Markdown
Now plot generative and retrieved transition matrices (analogous plot to Figure S1c in Ashwood et al. (2020)):
###Code
fig = plt.figure(figsize=(5, 2.5), dpi=80, facecolor='w', edgecolor='k')
plt.subplot(1, 2, 1)
gen_trans_mat = np.exp(gen_log_trans_mat)[0]
plt.imshow(gen_trans_mat, vmin=-0.8, vmax=1, cmap='bone')
for i in range(gen_trans_mat.shape[0]):
for j in range(gen_trans_mat.shape[1]):
text = plt.text(j, i, str(np.around(gen_trans_mat[i, j], decimals=2)), ha="center", va="center",
color="k", fontsize=12)
plt.xlim(-0.5, num_states - 0.5)
plt.xticks(range(0, num_states), ('1', '2', '3'), fontsize=10)
plt.yticks(range(0, num_states), ('1', '2', '3'), fontsize=10)
plt.ylim(num_states - 0.5, -0.5)
plt.ylabel("state t", fontsize = 15)
plt.xlabel("state t+1", fontsize = 15)
plt.title("generative", fontsize = 15)
plt.subplot(1, 2, 2)
recovered_trans_mat = np.exp(new_glmhmm.transitions.log_Ps)
plt.imshow(recovered_trans_mat, vmin=-0.8, vmax=1, cmap='bone')
for i in range(recovered_trans_mat.shape[0]):
for j in range(recovered_trans_mat.shape[1]):
text = plt.text(j, i, str(np.around(recovered_trans_mat[i, j], decimals=2)), ha="center", va="center",
color="k", fontsize=12)
plt.xlim(-0.5, num_states - 0.5)
plt.xticks(range(0, num_states), ('1', '2', '3'), fontsize=10)
plt.yticks(range(0, num_states), ('1', '2', '3'), fontsize=10)
plt.ylim(num_states - 0.5, -0.5)
plt.title("recovered", fontsize = 15)
plt.subplots_adjust(0, 0, 1, 1)
###Output
_____no_output_____
###Markdown
3c. Posterior State Probabilities Let's now plot $p(z_{t} = k|\mathbf{y}, \{u_{t}\}_{t=1}^{T})$, the posterior state probabilities, which give the probability of the animal being in state k at trial t.
###Code
# Get expected states:
posterior_probs = [new_glmhmm.expected_states(data=data, input=inpt)[0]
for data, inpt
in zip(true_choices, inpts)]
fig = plt.figure(figsize=(5, 2.5), dpi=80, facecolor='w', edgecolor='k')
sess_id = 0 #session id; can choose any index between 0 and num_sess-1
for k in range(num_states):
plt.plot(posterior_probs[sess_id][:, k], label="State " + str(k + 1), lw=2,
color=cols[k])
plt.ylim((-0.01, 1.01))
plt.yticks([0, 0.5, 1], fontsize = 10)
plt.xlabel("trial #", fontsize = 15)
plt.ylabel("p(state)", fontsize = 15)
###Output
_____no_output_____
###Markdown
With these posterior state probabilities, we can assign trials to states and then plot the fractional occupancy of each state:
###Code
# concatenate posterior probabilities across sessions
posterior_probs_concat = np.concatenate(posterior_probs)
# get state with maximum posterior probability at particular trial:
state_max_posterior = np.argmax(posterior_probs_concat, axis = 1)
# now obtain state fractional occupancies:
_, state_occupancies = np.unique(state_max_posterior, return_counts=True)
state_occupancies = state_occupancies/np.sum(state_occupancies)
fig = plt.figure(figsize=(2, 2.5), dpi=80, facecolor='w', edgecolor='k')
for z, occ in enumerate(state_occupancies):
plt.bar(z, occ, width = 0.8, color = cols[z])
plt.ylim((0, 1))
plt.xticks([0, 1, 2], ['1', '2', '3'], fontsize = 10)
plt.yticks([0, 0.5, 1], ['0', '0.5', '1'], fontsize=10)
plt.xlabel('state', fontsize = 15)
plt.ylabel('frac. occupancy', fontsize=15)
###Output
_____no_output_____
###Markdown
4. Fit GLM-HMM and perform recovery analysis: Maximum A Posteriori Estimation Above, we performed Maximum Likelihood Estimation to retrieve the generative parameters of the GLM-HMM in simulated data. In the small data regime, where we do not have many trials available to us, we may instead want to perform Maximum A Posteriori (MAP) Estimation in order to incorporate a prior term and restrict the range of the best-fitting parameters. Unfortunately, what is meant by 'small data regime' is problem dependent and will be affected by the number of states in the generative GLM-HMM, and the specific parameters of the generative model, amongst other things. In practice, we may perform both Maximum Likelihood Estimation and MAP estimation and compare the ability of the fit models to make predictions on held-out data (see Section 5 on Cross-Validation below). The prior we consider for the GLM-HMM is the product of a Gaussian prior on the GLM weights, $W$, and a Dirichlet prior on the transition matrix, $A$:$$\begin{align}\Pr(W, A) &= \mathcal{N}(W|0, \Sigma) \Pr(A|\alpha) \\&= \mathcal{N}(W|0, diag(\sigma^{2}, \cdots, \sigma^{2})) \prod_{j=1}^{K} \dfrac{1}{B(\alpha)} \prod_{k=1}^{K} A_{jk}^{\alpha -1}\end{align}$$There are two hyperparameters controlling the strength of the prior: $\sigma$ and $\alpha$. The larger the value of $\sigma$ (with $\alpha = 1$), the more similar MAP estimation becomes to Maximum Likelihood Estimation, as the prior term becomes an additive offset to the objective function of the GLM-HMM that is independent of the values of $W$ and $A$. In comparison, setting $\sigma = 2$ and $\alpha = 2$ will result in the prior no longer being independent of $W$ and $A$. 
In order to perform MAP estimation for the GLM-HMM with `ssm`, the new syntax is:```pythonssm.HMM(num_states, obs_dim, input_dim, observations="input_driven_obs", observation_kwargs=dict(C=num_categories,prior_sigma=prior_sigma), transitions="sticky", transition_kwargs=dict(alpha=prior_alpha,kappa=0))```where `prior_sigma` is the $\sigma$ parameter from above, and `prior_alpha` is the $\alpha$ parameter.
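To build intuition for these hyperparameters, here is an illustrative stand-alone computation of the two log-prior terms (this sketches the densities above directly; it is not the exact internal parameterization used by `ssm`):

```python
import math

def log_gaussian_prior(weights, sigma):
    """Sum of independent N(0, sigma^2) log-densities over a flat list of weights."""
    return sum(-0.5 * math.log(2 * math.pi * sigma ** 2) - w ** 2 / (2 * sigma ** 2)
               for w in weights)

def log_dirichlet_row(row, alpha):
    """Unnormalized symmetric Dirichlet(alpha) log-density of one transition-matrix row."""
    return sum((alpha - 1) * math.log(p) for p in row)

# With alpha = 1 the Dirichlet term vanishes for any valid row (a flat prior),
# and a large sigma makes the Gaussian penalty on any given weight negligible.
penalty_tight = log_gaussian_prior([0.0], 2.0) - log_gaussian_prior([6.0], 2.0)
penalty_wide = log_gaussian_prior([0.0], 100.0) - log_gaussian_prior([6.0], 100.0)
```

With $\sigma = 2$ the prior pays $6^{2} / (2 \cdot 2^{2}) = 4.5$ nats for a weight of 6, whereas with $\sigma = 100$ the cost is essentially zero, consistent with MAP estimation approaching MLE for large $\sigma$.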
###Code
# Instantiate GLM-HMM and set prior hyperparameters
prior_sigma = 2
prior_alpha = 2
map_glmhmm = ssm.HMM(num_states, obs_dim, input_dim, observations="input_driven_obs",
observation_kwargs=dict(C=num_categories,prior_sigma=prior_sigma),
transitions="sticky", transition_kwargs=dict(alpha=prior_alpha,kappa=0))
# Fit GLM-HMM with MAP estimation:
_ = map_glmhmm.fit(true_choices, inputs=inpts, method="em", num_iters=N_iters, tolerance=10**-4)
###Output
_____no_output_____
###Markdown
Compare the final likelihood of the data under MAP estimation and MLE to the likelihood under the generative model (note: we cannot use the log probability that is output by the `fit` function, as it incorporates the prior term, which is not comparable between the generative and MAP models). We want to check that the MAP and MLE likelihood values are higher than the true likelihood; if they are not, this may indicate a poor initialization, and we should refit these models.
###Code
true_likelihood = true_glmhmm.log_likelihood(true_choices, inputs=inpts)
mle_final_ll = new_glmhmm.log_likelihood(true_choices, inputs=inpts)
map_final_ll = map_glmhmm.log_likelihood(true_choices, inputs=inpts)
# Plot these values
fig = plt.figure(figsize=(2, 2.5), dpi=80, facecolor='w', edgecolor='k')
loglikelihood_vals = [true_likelihood, mle_final_ll, map_final_ll]
colors = ['Red', 'Navy', 'Purple']
for z, occ in enumerate(loglikelihood_vals):
plt.bar(z, occ, width = 0.8, color = colors[z])
plt.ylim((true_likelihood-5, true_likelihood+15))
plt.xticks([0, 1, 2], ['true', 'mle', 'map'], fontsize = 10)
plt.xlabel('model', fontsize = 15)
plt.ylabel('loglikelihood', fontsize=15)
###Output
_____no_output_____
###Markdown
5. Cross Validation To assess which model is better - the model fit via Maximum Likelihood Estimation, or the model fit via MAP estimation - we can investigate the predictive power of these fit models on held-out test data sets.
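For multi-fold cross-validation at the session level, one would repeatedly hold out a subset of sessions, fit on the rest, and score the held-out sessions with `log_likelihood`. A minimal splitter over session indices (a hypothetical helper, not part of `ssm`) might look like:

```python
def kfold_session_splits(num_sessions, k):
    """Yield (train_idx, test_idx) pairs partitioning session indices into k folds."""
    folds = [list(range(i, num_sessions, k)) for i in range(k)]
    for i, test_idx in enumerate(folds):
        train_idx = [s for j, fold in enumerate(folds) if j != i for s in fold]
        yield train_idx, test_idx

# e.g. for each (train, test): fit a GLM-HMM on [true_choices[s] for s in train]
# and evaluate log_likelihood on [true_choices[s] for s in test].
```

Below, for simplicity, we instead draw a single fresh set of test sessions from the generative model.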
###Code
# Create additional input sequences to be used as held-out test data
num_test_sess = 10
test_inpts = np.ones((num_test_sess, num_trials_per_sess, input_dim))
test_inpts[:,:,0] = np.random.choice(stim_vals, (num_test_sess, num_trials_per_sess))
test_inpts = list(test_inpts)
# Create set of test latents and choices to accompany input sequences:
test_latents, test_choices = [], []
for sess in range(num_test_sess):
test_z, test_y = true_glmhmm.sample(num_trials_per_sess, input=test_inpts[sess])
test_latents.append(test_z)
test_choices.append(test_y)
# Compare likelihood of test_choices for model fit with MLE and MAP:
mle_test_ll = new_glmhmm.log_likelihood(test_choices, inputs=test_inpts)
map_test_ll = map_glmhmm.log_likelihood(test_choices, inputs=test_inpts)
fig = plt.figure(figsize=(2, 2.5), dpi=80, facecolor='w', edgecolor='k')
loglikelihood_vals = [mle_test_ll, map_test_ll]
colors = ['Navy', 'Purple']
for z, occ in enumerate(loglikelihood_vals):
plt.bar(z, occ, width = 0.8, color = colors[z])
plt.ylim((mle_test_ll-2, mle_test_ll+5))
plt.xticks([0, 1], ['mle', 'map'], fontsize = 10)
plt.xlabel('model', fontsize = 15)
plt.ylabel('loglikelihood', fontsize=15)
###Output
_____no_output_____
###Markdown
Here we see that the model fit with MAP estimation achieves higher likelihood on the held-out dataset than the model fit with MLE, so we would choose this model as the best model of animal decision-making behavior (although we'd probably want to perform multiple-fold cross-validation to be sure that this holds across instantiations of the test data). Let's finish by comparing the retrieved weights and transition matrices from MAP estimation to the generative parameters.
###Code
map_glmhmm.permute(find_permutation(true_latents[0], map_glmhmm.most_likely_states(true_choices[0], input=inpts[0])))
fig = plt.figure(figsize=(6, 3), dpi=80, facecolor='w', edgecolor='k')
cols = ['#ff7f00', '#4daf4a', '#377eb8']
plt.subplot(1,2,1)
recovered_weights = new_glmhmm.observations.params
for k in range(num_states):
if k ==0: # show labels only for first state
plt.plot(range(input_dim), gen_weights[k][0], marker='o',
color=cols[k],
lw=1.5, label="generative")
plt.plot(range(input_dim), recovered_weights[k][0], color=cols[k],
lw=1.5, label = 'recovered', linestyle='--')
else:
plt.plot(range(input_dim), gen_weights[k][0], marker='o',
color=cols[k],
lw=1.5, label="")
plt.plot(range(input_dim), recovered_weights[k][0], color=cols[k],
lw=1.5, label = '', linestyle='--')
plt.yticks(fontsize=10)
plt.ylabel("GLM weight", fontsize=15)
plt.xlabel("covariate", fontsize=15)
plt.xticks([0, 1], ['stimulus', 'bias'], fontsize=12, rotation=45)
plt.axhline(y=0, color="k", alpha=0.5, ls="--")
plt.title("MLE", fontsize = 15)
plt.legend()
plt.subplot(1,2,2)
recovered_weights = map_glmhmm.observations.params
for k in range(num_states):
plt.plot(range(input_dim), gen_weights[k][0], marker='o',
color=cols[k],
lw=1.5, label="", linestyle = '-')
plt.plot(range(input_dim), recovered_weights[k][0], color=cols[k],
lw=1.5, label = '', linestyle='--')
plt.yticks(fontsize=10)
plt.xticks([0, 1], ['', ''], fontsize=12, rotation=45)
plt.axhline(y=0, color="k", alpha=0.5, ls="--")
plt.title("MAP", fontsize = 15)
fig = plt.figure(figsize=(7, 2.5), dpi=80, facecolor='w', edgecolor='k')
plt.subplot(1, 3, 1)
gen_trans_mat = np.exp(gen_log_trans_mat)[0]
plt.imshow(gen_trans_mat, vmin=-0.8, vmax=1, cmap='bone')
for i in range(gen_trans_mat.shape[0]):
for j in range(gen_trans_mat.shape[1]):
text = plt.text(j, i, str(np.around(gen_trans_mat[i, j], decimals=2)), ha="center", va="center",
color="k", fontsize=12)
plt.xlim(-0.5, num_states - 0.5)
plt.xticks(range(0, num_states), ('1', '2', '3'), fontsize=10)
plt.yticks(range(0, num_states), ('1', '2', '3'), fontsize=10)
plt.ylim(num_states - 0.5, -0.5)
plt.ylabel("state t", fontsize = 15)
plt.xlabel("state t+1", fontsize = 15)
plt.title("generative", fontsize = 15)
plt.subplot(1, 3, 2)
recovered_trans_mat = np.exp(new_glmhmm.transitions.log_Ps)
plt.imshow(recovered_trans_mat, vmin=-0.8, vmax=1, cmap='bone')
for i in range(recovered_trans_mat.shape[0]):
for j in range(recovered_trans_mat.shape[1]):
text = plt.text(j, i, str(np.around(recovered_trans_mat[i, j], decimals=2)), ha="center", va="center",
color="k", fontsize=12)
plt.xlim(-0.5, num_states - 0.5)
plt.xticks(range(0, num_states), ('1', '2', '3'), fontsize=10)
plt.yticks(range(0, num_states), ('1', '2', '3'), fontsize=10)
plt.ylim(num_states - 0.5, -0.5)
plt.title("recovered - MLE", fontsize = 15)
plt.subplots_adjust(0, 0, 1, 1)
plt.subplot(1, 3, 3)
recovered_trans_mat = np.exp(map_glmhmm.transitions.log_Ps)
plt.imshow(recovered_trans_mat, vmin=-0.8, vmax=1, cmap='bone')
for i in range(recovered_trans_mat.shape[0]):
for j in range(recovered_trans_mat.shape[1]):
text = plt.text(j, i, str(np.around(recovered_trans_mat[i, j], decimals=2)), ha="center", va="center",
color="k", fontsize=12)
plt.xlim(-0.5, num_states - 0.5)
plt.xticks(range(0, num_states), ('1', '2', '3'), fontsize=10)
plt.yticks(range(0, num_states), ('1', '2', '3'), fontsize=10)
plt.ylim(num_states - 0.5, -0.5)
plt.title("recovered - MAP", fontsize = 15)
plt.subplots_adjust(0, 0, 1, 1)
###Output
_____no_output_____
###Markdown
6. Multinomial GLM-HMM Until now, we have only considered the case where there are 2 output classes (the Bernoulli GLM-HMM corresponding to `C=num_categories=2`), yet the `ssm` framework is sufficiently general to allow us to fit the multinomial GLM-HMM described in Equations 1 and 2. Here we demonstrate a recovery analysis for the multinomial GLM-HMM.
###Code
# Set the parameters of the GLM-HMM
num_states = 4 # number of discrete states
obs_dim = 1 # number of observed dimensions
num_categories = 3 # number of categories for output
input_dim = 2 # input dimensions
# Make a GLM-HMM
true_glmhmm = ssm.HMM(num_states, obs_dim, input_dim, observations="input_driven_obs",
observation_kwargs=dict(C=num_categories), transitions="standard")
# Set weights of multinomial GLM-HMM
gen_weights = np.array([[[0.6,3], [2,3]], [[6,1], [6,-2]], [[1,1], [3,1]], [[2,2], [0,5]]])
print(gen_weights.shape)
true_glmhmm.observations.params = gen_weights
###Output
(4, 2, 2)
###Markdown
In the above, notice that the shape of the weights for the multinomial GLM-HMM is `(num_states, num_categories-1, input_dim)`. Specifically, we only learn `num_categories-1` weight vectors (of size `input_dim`) for a given state, and we set the weights for the other observation class to zero. Constraining the weight vectors for one class is important if we want to be able to identify the generative weights in simulated data. If we didn't do this, it is easy to see that one could generate the same observation probabilities with a set of weight vectors that are offset by a constant vector $w_{k}$ (the index k indicates that a different offset vector could exist per state):$$\begin{align}\Pr(y_t = c \mid z_{t} = k, u_t, w_{kc}) = \frac{\exp\{w_{kc}^\mathsf{T} u_t\}}{\sum_{c'=1}^C \exp\{w_{kc'}^\mathsf{T} u_t\}} = \frac{\exp\{(w_{kc}-w_{k})^\mathsf{T} u_t\}}{\sum_{c'=1}^C \exp\{(w_{kc'}-w_{k})^\mathsf{T} u_t\}}\end{align}$$Equations 1 and 2 at the top of this notebook already take into account the fact that the weights for a particular class for a given state are fixed to zero (this is why $c = C$ is handled differently).
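The offset invariance argued above is easy to verify numerically; this toy snippet (independent of `ssm`, with made-up weights) shows that shifting every class's weight vector by the same per-state offset leaves the softmax probabilities unchanged:

```python
import math

def softmax_probs(weights, u):
    """P(y = c | u) for a list of per-class weight vectors `weights`."""
    scores = [math.exp(sum(wi * ui for wi, ui in zip(w, u))) for w in weights]
    total = sum(scores)
    return [s / total for s in scores]

u = [0.25, 1.0]                            # stimulus, bias
W = [[0.6, 3.0], [2.0, 3.0], [0.0, 0.0]]   # last class pinned to zero
offset = [5.0, -2.0]                       # a constant per-state offset w_k
W_shifted = [[wi + oi for wi, oi in zip(w, offset)] for w in W]
```

Pinning one class's weights to zero removes exactly this degree of freedom, which is what makes the generative weights identifiable.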
###Code
# Set transition matrix of multinomial GLM-HMM
gen_log_trans_mat = np.log(np.array([[[0.90, 0.04, 0.05, 0.01], [0.05, 0.92, 0.01, 0.02], [0.03, 0.02, 0.94, 0.01], [0.09, 0.01, 0.01, 0.89]]]))
true_glmhmm.transitions.params = gen_log_trans_mat
# Create external input sequences; compared to the example above, we will increase the number of examples
# (through the `num_trials_per_sess` parameter) since the number of parameters has increased
num_sess = 20 # number of example sessions
num_trials_per_sess = 1000 # number of trials in a session
inpts = np.ones((num_sess, num_trials_per_sess, input_dim)) # initialize inpts array
stim_vals = [-1, -0.5, -0.25, -0.125, -0.0625, 0, 0.0625, 0.125, 0.25, 0.5, 1]
inpts[:,:,0] = np.random.choice(stim_vals, (num_sess, num_trials_per_sess)) # generate random sequence of stimuli
inpts = list(inpts)
# Generate a sequence of latents and choices for each session
true_latents, true_choices = [], []
for sess in range(num_sess):
true_z, true_y = true_glmhmm.sample(num_trials_per_sess, input=inpts[sess])
true_latents.append(true_z)
true_choices.append(true_y)
# plot example data:
fig = plt.figure(figsize=(8, 3), dpi=80, facecolor='w', edgecolor='k')
plt.step(range(100),true_choices[0][range(100)], color = "red")
plt.yticks([0, 1, 2])
plt.title("example data (multinomial GLM-HMM)")
plt.xlabel("trial #", fontsize = 15)
plt.ylabel("observation class", fontsize = 15)
# Calculate true loglikelihood
true_ll = true_glmhmm.log_probability(true_choices, inputs=inpts)
print("true ll = " + str(true_ll))
# fit GLM-HMM
new_glmhmm = ssm.HMM(num_states, obs_dim, input_dim, observations="input_driven_obs",
observation_kwargs=dict(C=num_categories), transitions="standard")
N_iters = 500 # maximum number of EM iterations. Fitting will stop earlier if the increase in LL is below the tolerance specified by the tolerance parameter
fit_ll = new_glmhmm.fit(true_choices, inputs=inpts, method="em", num_iters=N_iters, tolerance=10**-4)
# Plot the log probabilities of the true and fit models. Fit model final LL should be greater
# than or equal to true LL.
fig = plt.figure(figsize=(4, 3), dpi=80, facecolor='w', edgecolor='k')
plt.plot(fit_ll, label="EM")
plt.plot([0, len(fit_ll)], true_ll * np.ones(2), ':k', label="True")
plt.legend(loc="lower right")
plt.xlabel("EM Iteration")
plt.xlim(0, len(fit_ll))
plt.ylabel("Log Probability")
plt.show()
# permute recovered state identities to match state identities of generative model
new_glmhmm.permute(find_permutation(true_latents[0], new_glmhmm.most_likely_states(true_choices[0], input=inpts[0])))
# Plot recovered parameters:
recovered_weights = new_glmhmm.observations.params
recovered_transitions = new_glmhmm.transitions.params
fig = plt.figure(figsize=(16, 8), dpi=80, facecolor='w', edgecolor='k')
plt.subplots_adjust(wspace=0.3, hspace=0.6)
plt.subplot(2, 2, 1)
cols = ['#ff7f00', '#4daf4a', '#377eb8', '#f781bf', '#a65628', '#984ea3', '#999999', '#e41a1c', '#dede00']
for c in range(num_categories):
plt.subplot(2, num_categories+1, c+1)
if c < num_categories-1:
for k in range(num_states):
plt.plot(range(input_dim), gen_weights[k,c], marker='o',
color=cols[k], lw=1.5, label="state " + str(k+1) + "; class " + str(c+1))
else:
for k in range(num_states):
plt.plot(range(input_dim), np.zeros(input_dim), marker='o',
color=cols[k], lw=1.5, label="state " + str(k+1) + "; class " + str(c+1), alpha = 0.5)
plt.axhline(y=0, color="k", alpha=0.5, ls="--")
plt.yticks(fontsize=10)
plt.xticks([0, 1], ['', ''])
if c == 0:
plt.ylabel("GLM weight", fontsize=15)
plt.legend()
plt.title("Generative weights; class " + str(c+1), fontsize = 15)
plt.ylim((-3, 10))
plt.subplot(2, num_categories+1, num_categories+1)
gen_trans_mat = np.exp(gen_log_trans_mat)[0]
plt.imshow(gen_trans_mat, vmin=-0.8, vmax=1, cmap='bone')
for i in range(gen_trans_mat.shape[0]):
for j in range(gen_trans_mat.shape[1]):
text = plt.text(j, i, str(np.around(gen_trans_mat[i, j], decimals=2)), ha="center", va="center",
color="k", fontsize=12)
plt.xlim(-0.5, num_states - 0.5)
plt.xticks(range(0, num_states), ('1', '2', '3', '4'), fontsize=10)
plt.yticks(range(0, num_states), ('1', '2', '3', '4'), fontsize=10)
plt.ylim(num_states - 0.5, -0.5)
plt.ylabel("state t", fontsize = 15)
plt.xlabel("state t+1", fontsize = 15)
plt.title("Generative transition matrix", fontsize = 15)
cols = ['#ff7f00', '#4daf4a', '#377eb8', '#f781bf', '#a65628', '#984ea3', '#999999', '#e41a1c', '#dede00']
for c in range(num_categories):
plt.subplot(2, num_categories+1, num_categories + c + 2)
if c < num_categories-1:
for k in range(num_states):
plt.plot(range(input_dim), recovered_weights[k,c], marker='o', linestyle = '--',
color=cols[k], lw=1.5, label="state " + str(k+1) + "; class " + str(c+1))
else:
for k in range(num_states):
plt.plot(range(input_dim), np.zeros(input_dim), marker='o', linestyle = '--',
color=cols[k], lw=1.5, label="state " + str(k+1) + "; class " + str(c+1), alpha = 0.5)
plt.axhline(y=0, color="k", alpha=0.5, ls="--")
plt.yticks(fontsize=10)
plt.xlabel("covariate", fontsize=15)
if c == 0:
plt.ylabel("GLM weight", fontsize=15)
plt.xticks([0, 1], ['stimulus', 'bias'], fontsize=12, rotation=45)
plt.legend()
plt.title("Recovered weights; class " + str(c+1), fontsize = 15)
plt.ylim((-3,10))
plt.subplot(2, num_categories+1, 2*num_categories+2)
recovered_trans_mat = np.exp(recovered_transitions)[0]
plt.imshow(recovered_trans_mat, vmin=-0.8, vmax=1, cmap='bone')
for i in range(recovered_trans_mat.shape[0]):
for j in range(recovered_trans_mat.shape[1]):
text = plt.text(j, i, str(np.around(recovered_trans_mat[i, j], decimals=2)), ha="center", va="center",
color="k", fontsize=12)
plt.xlim(-0.5, num_states - 0.5)
plt.xticks(range(0, num_states), ('1', '2', '3', '4'), fontsize=10)
plt.yticks(range(0, num_states), ('1', '2', '3', '4'), fontsize=10)
plt.ylim(num_states - 0.5, -0.5)
plt.ylabel("state t", fontsize = 15)
plt.xlabel("state t+1", fontsize = 15)
plt.title("Recovered transition matrix", fontsize = 15)
###Output
_____no_output_____ |
notebooks/Effect of Near-Duplicates on Retrieval-Evaluation.ipynb | ###Markdown
The Effect of Near-Duplicates on the Evaluation of Search Engines This Jupyter notebook provides supplementary material to our ECIR paper of the same name, which reproduces and generalizes the observations by [Bernstein and Zobel](https://dl.acm.org/citation.cfm?doid=1099554.1099733) that content-equivalent.... You can use the variables `EVALUATION`, `PREPROCESSING`, and `SIMILARITY` to configure some reports that show information that we left out of the paper for brevity. Example: to include NDCG and MAP as evaluation measures, set `EVALUATION = ['NDCG', 'MAP']`. Please contact us in case of questions or problems: [Maik Fröbe](maik-froebe.de) ([webis.de](webis.de)). Please cite...
###Code
from judgment_util import *
###Output
_____no_output_____ |
Quantitative results.ipynb | ###Markdown
Table of Contents: 1. Imports, 2. CIFAR-10 Results. Imports
###Code
import os
from os.path import join
import numpy as np
# Plotting imports
%matplotlib inline
import seaborn as sns
import matplotlib.pyplot as plt
import matplotlib.font_manager
from matplotlib.patches import Patch
from matplotlib.lines import Line2D
matplotlib.rcParams['mathtext.fontset'] = 'custom'
matplotlib.rcParams['mathtext.rm'] = 'Bitstream Vera Sans'
matplotlib.rcParams['mathtext.it'] = 'Bitstream Vera Sans:italic'
matplotlib.rcParams['mathtext.bf'] = 'Bitstream Vera Sans:bold'
plt.rcParams['text.latex.preamble']=[r"\usepackage{lmodern}", r'\usepackage{amssymb}', r'\usepackage{amsmath}',
r'\usepackage{wasysym}']
params = {'text.usetex' : True,
'font.size' : 20,
'font.family' : 'sans-serif',
'font.serif' : 'Computer Modern Sans serif',
'text.latex.unicode': True,
}
plt.rcParams.update(params)
from interpretability.utils import explainers_color_map
#loading results for plotting
from project_utils import get_precomputed_results
get_precomputed_results()
def joined_listdir(path):
return [(join(path, d), d) for d in os.listdir(path)]
exps = ['pretrained-densenet121',
'densenet_121_cossched',
'densenet_121',
'pretrained-resnet34',
'resnet_34',
'pretrained-vgg11',
'vgg_11',
'pretrained-inception',
'inception_v3']
results = {}
for exp in exps:
exp_results = {}
for mdir, method in joined_listdir(join("results", exp, "localisation")):
exp_results[method] = np.loadtxt(join(mdir, "localisation_metric.np"))
results.update({exp: exp_results})
pairs = [
('pretrained-densenet121', 'densenet_121_cossched', "DenseNet-121"),
('pretrained-resnet34', 'resnet_34', "ResNet-34"),
('pretrained-vgg11', 'vgg_11', "VGG-11"),
('pretrained-inception', 'inception_v3', "InceptionNet"),
]
n_imgs = 9
fig, axes = plt.subplots(2, 2, figsize=(45 * .725, 12 * .725))
pretrained = False
labels_ordered = None
for ax_idx, (p, ax) in enumerate(zip(pairs, axes.flatten())):
offset = 0
labels1 = np.array(sorted(results[p[0]].items(),
key=lambda x: np.percentile(x[1], 50))).T.tolist()[0] + ["Ours"]
labels2, _ = np.array(sorted(results[p[1]].items(),
key=lambda x: np.percentile(x[1], 50),
reverse=False)).T.tolist()
total = len(labels1 if pretrained else labels2)
if labels_ordered is None:
labels_ordered = labels1 if pretrained else labels2
l1 = ax.hlines([1], -.4, total,
alpha=1, linestyle="dashed", label="Oracle", lw=4,
color=np.array((41, 110, 180), dtype=float)/255, zorder=20)
l2 = ax.hlines([(1/n_imgs)], -.4, total, alpha=1, linestyle="dashed", lw=4,
label="Uniform", color=np.array((255, 180, 0), dtype=float)/255, zorder=20)
box_plot = sns.boxplot(data=(
([results[p[0]][l] for l in labels1[:-1]] + [results[p[1]]["Ours"]]) if pretrained else
[results[p[1]][l] for l in labels2]
), ax=ax, fliersize=0, zorder=50, width=.7)
for i, l in enumerate(labels1 if pretrained else labels2):
mybox = box_plot.artists[i]
mybox.set_facecolor(np.array(explainers_color_map[l])/255)
mybox.set_linewidth(2)
mybox.set_zorder(20)
ax.set_xticks([])
ax.tick_params(axis='y', which='major', labelsize=34)
ax.add_artist(l1)
if ax_idx >= 2:
ax.annotate(("B-Cos " if not pretrained else "Pretrained ") + p[2], xy=(0.25, 1.2),
xycoords=("axes fraction", "data"),
fontsize=48, ha="center", va="center", bbox=dict(boxstyle="round", fc=(1, 1, 1, .5),
ec="black", lw=1))
else:
ax.annotate(("B-Cos " if not pretrained else "Pretrained ") + p[2], xy=(0.25, .45+.075),
xycoords=("axes fraction", "axes fraction"),
fontsize=48, ha="center", bbox=dict(boxstyle="round", fc=(1, 1, 1, .5),
ec="black", lw=1))
ax.set_ylim(ax.get_ylim()[0], 1.4 if ax_idx >=2 else 1.4)
if pretrained:
ax.vlines([len(labels1)-1.5], -2, 2, linestyle=(1, (4, 2)), linewidth=4)
ax.annotate("B-cos", xy=(len(labels1)-1, .5), xycoords=("data", "data"),
fontsize=32, ha="center", va="center", bbox=dict(boxstyle="round", fc=(1, 1, 1, .5),
ec="black", lw=1), rotation=90)
ax.set_yticks(np.arange(0, 1.2, .2))
ax.set_yticklabels(["${:.1f}$".format(l) for l in np.arange(0, 1.2, .2)], fontsize=40)
if ax_idx % 2 == 1:
ax.tick_params("y", labelleft=False, which="both")
ax.grid(linewidth=2, color="white", zorder=-10)
ax.set_xlim(-.5, total -1 + .5)
ax.grid(zorder=-10, linestyle="dashed", alpha=1, axis="y")
legend_elements = [Patch(facecolor=np.array(explainers_color_map[l])/255,
edgecolor='black', lw=2,
label=l) for l in labels_ordered]
fig.tight_layout(h_pad=0, w_pad=.65, rect=(0, 0, 1, 1.2))
ax = fig.add_axes([0, 1.2, 1, .1])
ax.grid(False)
ax.set_xticks([])
ax.set_yticks([])
ax.set_facecolor("white")
leg = fig.legend(handles=legend_elements, loc='upper center', ncol=len(labels1),
fontsize=48, bbox_to_anchor=[.535, 1.175+0.01], handlelength=1.5,
columnspacing=1.5, handletextpad=.5,
facecolor="w", edgecolor="black",
)
leg.set_in_layout(True)
leg.get_frame().set_linewidth(2)
legend_elements = [Line2D([0], [0], color=np.array((41, 110, 180), dtype=float)/255,
lw=4, label='Oracle attributions', linestyle="dashed"),
Line2D([0], [0], color=np.array((255, 180, 0), dtype=float)/255,
lw=4, label='Uniform attributions', linestyle="dashed")
]
leg2 = fig.legend(handles=legend_elements, loc='upper center', ncol=2,
fontsize=42, bbox_to_anchor=[.5, 1.3+0.01],
facecolor="w", edgecolor="black", framealpha=1,
)
txt = plt.figtext(-.01, .6, "Localisation Metric", rotation=90, fontsize=55, va="center", ha="center")
fig.add_artist(leg)
fig.set_facecolor("white")
n_imgs = 9
fig, axes = plt.subplots(2, 1, figsize=(60 * .6 / 2, 15 * .6))
label_order = None
for ax_idx, (p, ax) in enumerate(zip(pairs[::3], axes.flatten())):
offset = 0
labels1, _ = np.array(sorted(results[p[0]].items(),
key=lambda x: np.percentile(x[1], 50))).T.tolist()
labels2, _ = np.array(sorted(results[p[1]].items(),
key=lambda x: np.percentile(x[1], 50),
reverse=False)).T.tolist()
if label_order is None:
label_order = labels2
total = len(labels2)
l1 = ax.hlines([1], -.4, total,
alpha=1, linestyle="dashed", label="Oracle", lw=4,
color=np.array((41, 110, 180), dtype=float)/255, zorder=20)
l2 = ax.hlines([(1/n_imgs)], -.4, total, alpha=1, linestyle="dashed", lw=4,
label="Uniform", color=np.array((255, 180, 0), dtype=float)/255, zorder=20)
box_plot = sns.boxplot(data=(
[results[p[1]][l] for l in labels2]
), ax=ax, fliersize=0, zorder=50, width=.7)
for i, l in enumerate(labels2):
mybox = box_plot.artists[i]
mybox.set_facecolor(np.array(explainers_color_map[l])/255)
mybox.set_linewidth(2)
mybox.set_zorder(20)
ax.set_xticks([])
ax.tick_params(axis='y', which='major', labelsize=34)
ax.add_artist(l1)
if ax_idx >= 2:
ax.annotate("B-cos " + p[2], xy=(0.1, .6), xycoords=("axes fraction", "axes fraction"),
fontsize=42, bbox=dict(boxstyle="round", fc=(1, 1, 1, .5),
ec="black", lw=1))
else:
ax.annotate("B-cos " + p[2], xy=(0.1, .45), xycoords=("axes fraction", "axes fraction"),
fontsize=42, bbox=dict(boxstyle="round", fc=(1, 1, 1, .5),
ec="black", lw=1))
ax.set_ylim(ax.get_ylim()[0], 1.1 if ax_idx >=2 else 1.3)
if ax_idx %2 == 0 or True:
ax.set_yticks(np.arange(0, 1.2, .2))
ax.set_yticklabels(["${:.1f}$".format(l) for l in np.arange(0, 1.2, .2)], fontsize=40)
ax.grid(linewidth=2, color="white", zorder=-10)
ax.set_xlim(-.5, total -1 + .5)
ax.grid(zorder=-10, linestyle="dashed", alpha=1, axis="y")
unique_entries = [l for l in explainers_color_map.keys() if l in labels1 + labels2]
legend_elements = [Patch(facecolor=np.array(explainers_color_map[l])/255,
edgecolor='black', lw=2,
label=l) for l in label_order]
leg.set_in_layout(True)
leg.get_frame().set_linewidth(2)
fig.tight_layout(h_pad=0, w_pad=.65, rect=(0, 0, 1, 1.2))
ax = fig.add_axes([0, 1.2, 1, .1])
ax.grid(False)
ax.set_xticks([])
ax.set_yticks([])
ax.set_facecolor("white")
leg = fig.legend(handles=legend_elements, loc='upper center', ncol=len(unique_entries),
fontsize=32, bbox_to_anchor=[.55, 1.15+0.06], handlelength=1.25,
columnspacing=1.1, handletextpad=.5,
facecolor="w", edgecolor="black",
)
legend_elements = [Line2D([0], [0], color=np.array((41, 110, 180), dtype=float)/255,
lw=4, label='Oracle attributions', linestyle="dashed"),
Line2D([0], [0], color=np.array((255, 180, 0), dtype=float)/255,
lw=4, label='Uniform attributions', linestyle="dashed")
]
leg2 = fig.legend(handles=legend_elements, loc='upper center', ncol=2,
fontsize=32, bbox_to_anchor=[.55, .5+0.06], borderaxespad=0.025,
facecolor="w", edgecolor="black", framealpha=1,
)
txt = plt.figtext(-.01, .6, "Localisation Metric", rotation=90, fontsize=55, va="center", ha="center")
fig.add_artist(leg)
fig.set_facecolor("white")
###Output
_____no_output_____
###Markdown
CIFAR-10 Results
###Code
from experiments.CIFAR10.bcos.experiment_parameters import exps as c10_exps
fontsize = 24
sns.set_style("darkgrid")
results = []
labels = []
# final accs of models
accs = np.array([93.53, 93.81, 93.69, 93.69, 93.19, 92.6, 92.37])/100
for e in c10_exps.keys():
results.append(np.loadtxt(join("results", "c10", e, "localisation_metric.np")))
labels.append(e)
fig, ax = plt.subplots(1, figsize=(12, 5))
n_imgs = 9
l1 = ax.hlines([1], -.4, len(results), alpha=1, linestyle="dashed", label="Oracle", lw=3,
color=np.array((41, 110, 180), dtype=float)/255, zorder=20)
l2 = ax.hlines([(1/n_imgs)], -.4, len(results), alpha=1, linestyle="dashed", lw=3,
label="Uniform", color=np.array((255, 180, 0), dtype=float)/255, zorder=20)
box_plot = sns.boxplot(data=results, ax=ax, fliersize=0, zorder=50)
ax.set_xticks(range(len(results)))
ax.tick_params(axis='y', which='major', labelsize=fontsize)
ax.set_xticklabels([l.replace("_", "-") for l in labels], rotation=60, fontsize=fontsize)
l1 = ax.legend([l1], ["Oracle"], loc="upper right", bbox_to_anchor=(.21, 1), facecolor="white", framealpha=1,
borderaxespad=0.1, fontsize=fontsize)
ax.legend([l2], ["Uniform"], loc="upper right", bbox_to_anchor=(.45, 1), facecolor="white", framealpha=1,
borderaxespad=0.1, fontsize=fontsize)
plt.gca().add_artist(l1)
ax.set_ylabel("Localisation Metric", fontsize=24)
ax.set_yticks(np.arange(0, 1.2, .2))
ax.set_ylim(ax.get_ylim()[0], 1.2)
ax.set_xlim(-.5, len(results) - 1 + .5)
ax.grid(zorder=-10, linestyle="dashed", alpha=1, axis="y")
fig.tight_layout()
fig.set_facecolor("white")
ax.set_xlabel("Exponent B", fontsize=24)
ax.set_yticks(np.arange(.0, 1.1, 0.2))
ax.set_yticklabels(["{:.1f}".format(y) for y in np.arange(.0, 1.1, 0.2)])
ax = ax.twinx()
ax.set_xticklabels(["{:.2f}".format(b) for b in [1.00, 1.25, 1.50, 1.75, 2.00, 2.25, 2.50]], fontsize=fontsize)
cmap = matplotlib.cm.get_cmap('Greens')
colours = [cmap(i) for i in np.linspace(0, 1, len(c10_exps))]
for i, k in enumerate(colours):
mybox = box_plot.artists[i]
mybox.set_facecolor(k)
mybox.set_zorder(20)
ax.plot(np.arange(len(c10_exps)), accs, "x--", color="red", markersize=12, markeredgewidth=4)
ax.set_ylim((.90, .95))
ax.set_yticks([.91, .92, .93, .94])
ax.set_yticklabels(["{:.2f}".format(y) for y in [.91, .92, .93, .94]])
ax.tick_params("y", labelcolor="red", color="red", labelsize=20)
ax.grid(False)
ax.set_ylabel("Accuracy", color="red", fontsize=24, rotation=270, labelpad=28)
fig.tight_layout()
###Output
_____no_output_____ |
P3 - Stroke Prediction/Code/Stroke Prediction.ipynb | ###Markdown
Stroke Prediction **Vinay Nagaraj** Overview According to the World Health Organization (WHO), stroke is the 2nd leading cause of death globally, responsible for approximately 11% of total deaths. A report from the American Heart Association states that, on average, someone in the US has a stroke every 40 seconds. Stroke is a treatable disease, and if detected or predicted early, its severity can be greatly reduced. If a stroke can be predicted at an early stage, there is a 4% lower risk of in-hospital death, 4% better odds of walking independently after leaving the hospital, and 3% better odds of being sent home instead of to an institution. As part of this project, I will be using the dataset from [Kaggle](https://www.kaggle.com/fedesoriano/stroke-prediction-dataset), considering all the relevant information about each patient (gender, age, pre-existing diseases, and smoking status) to build predictive analytics techniques that identify patients at high risk who are likely to have a stroke. This can provide advance warning to alert patients so that they can take proper precautions and possibly prevent the stroke.
###Code
# Load necessary libraries
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
import scikitplot as skplt
from sklearn.model_selection import train_test_split
from imblearn.over_sampling import SMOTE
from sklearn.feature_selection import SelectKBest
from sklearn.feature_selection import f_classif
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import classification_report,confusion_matrix,accuracy_score
# Read our data
stroke_data = pd.read_csv('healthcare-dataset-stroke-data.csv')
# Check the dimension of the data frame
print("The dimension of the table is: ", stroke_data.shape)
# Lets look at some sample records to understand the data
print(stroke_data.head(5).T)
# Check the types of each feature
stroke_data.dtypes
# Check for any missing values
stroke_data.isna().sum()
# First let us round off Age to convert it to integer
stroke_data['age'] = stroke_data['age'].apply(lambda x : round(x))
# BMI values less than 12 and greater than 60 are potential outliers. So we will change them to NaN
stroke_data['bmi'] = stroke_data['bmi'].apply(lambda bmi_value: bmi_value if 12 < bmi_value < 60 else np.nan)
# Sorting DataFrame based on Gender then on Age and using Forward Fill-ffill() to fill NaN value for BMI
stroke_data.sort_values(['gender', 'age'], inplace=True)
stroke_data.reset_index(drop=True, inplace=True)
stroke_data['bmi'].ffill(inplace=True)
# Check for any missing values
stroke_data.isna().sum()
###Output
_____no_output_____
###Markdown
**Data Summary**- gender: "Male", "Female" or "Other"- age: age of the patient- hypertension: 0 if the patient doesn't have hypertension, 1 if the patient has hypertension- heart_disease: 0 if the patient doesn't have any heart diseases, 1 if the patient has a heart disease- ever_married: "No" or "Yes"- work_type: "children", "Govt_job", "Never_worked", "Private" or "Self-employed"- Residence_type: "Rural" or "Urban"- avg_glucose_level: average glucose level in blood- bmi: body mass index- smoking_status: "formerly smoked", "never smoked", "smokes" or "Unknown"*- stroke: 1 if the patient had a stroke or 0 if not\*Note: "Unknown" in smoking_status means that the information is unavailable for this patient
###Code
stroke_data.describe()
# Understand the categorical data in our dataset
for column in stroke_data.columns:
if stroke_data[column].dtype == object:
print("{} : {}".format(str(column), str(stroke_data[column].unique())))
print(stroke_data[column].value_counts())
print("-----------------------------------------------------\n\n")
# Drop 'id' feature as it is irrelevant.
stroke_data = stroke_data.drop('id', axis=1)
###Output
_____no_output_____
###Markdown
Below are our observations so far:\1) Input data has 5110 records and 12 features.\2) bmi had 201 rows of missing values, which are now filled using forward fill.\3) 'id' is an irrelevant feature for our analysis, so it was dropped. Graph Analysis/EDA
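Before moving on, the sort-then-forward-fill strategy from step 2 can be illustrated on a toy frame (the rows and BMI values below are hypothetical, for illustration only): after sorting by gender and age, each missing BMI inherits the value of the nearest preceding, similar patient.

```python
import numpy as np
import pandas as pd

# Toy frame mimicking the sort-then-ffill strategy used on the stroke data.
df = pd.DataFrame({
    "gender": ["Female", "Female", "Male", "Male"],
    "age": [30, 31, 40, 41],
    "bmi": [22.5, np.nan, 27.0, np.nan],
})
df.sort_values(["gender", "age"], inplace=True)
df.reset_index(drop=True, inplace=True)
# Each NaN takes the last observed BMI above it in the sorted order.
df["bmi"] = df["bmi"].ffill()
print(df["bmi"].tolist())  # [22.5, 22.5, 27.0, 27.0]
```

Because the frame is sorted by gender and age first, the filled-in value comes from a demographically nearby patient rather than an arbitrary row.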
###Code
# Plot of Patients who had stroke vs Patients who did not have stroke
sns.countplot('stroke', data=stroke_data)
plt.title('0: No Stroke, 1: Stroke', fontsize=14)
plt.show()
# Percentage of Patients who had stroke vs Patients who did not have stroke
Count_stroke_patients = len(stroke_data[stroke_data["stroke"]==1]) # Patients who had stroke
Count_nostroke_patients = len(stroke_data[stroke_data["stroke"]==0]) # Patients who never had stroke
print("Total count of Patients who had stroke = ",Count_stroke_patients)
print("Total count of Patients who never had stroke = ",Count_nostroke_patients)
Percentage_of_stroke_patients = Count_stroke_patients/(Count_stroke_patients+Count_nostroke_patients)
print("Percentage of Patients who had stroke = ",Percentage_of_stroke_patients*100)
Percentage_of_nostroke_patients= Count_nostroke_patients/(Count_nostroke_patients+Count_stroke_patients)
print("Percentage of Patients who never had stroke = ",Percentage_of_nostroke_patients*100)
###Output
Total count of Patients who had stroke = 249
Total count of Patients who never had stroke = 4861
Percentage of Patients who had stroke = 4.87279843444227
Percentage of Patients who never had stroke = 95.12720156555773
###Markdown
Our dataset contains a total of 4,861 rows of patients who never had a stroke and 249 rows of patients who had a stroke. We can observe that our dataset is highly imbalanced; we will handle that by over-sampling (SMOTE) before we perform model analysis.
###Code
# plot the effect of Smoking on Stroke
g= sns.catplot(x = "smoking_status", y = "stroke", data = stroke_data, kind = "bar", height = 5)
g.set_ylabels("Smoking on Stroke Probability")
plt.title("Effect of Smoking on Stroke",fontsize=15)
plt.xticks(rotation=45)
plt.show()
###Output
_____no_output_____
###Markdown
Being a smoker or a former smoker increases your risk of having a stroke. Also, it looks like people who used to smoke are more prone to a stroke than people who still smoke. Either way, smoking is injurious to health.
###Code
# plot the effect of Marriage on Stroke
g= sns.catplot(x = "ever_married", y = "stroke", data = stroke_data, kind = "bar", height = 5)
g.set_ylabels("Marriage on Stroke Probability")
plt.title("Effect of Marriage on Stroke",fontsize=15)
plt.xticks(rotation=45)
plt.show()
###Output
_____no_output_____
###Markdown
Wasn't this obvious :)
###Code
# plot the effect of Heart Disease on Stroke
g= sns.catplot(x = "heart_disease", y = "stroke", data = stroke_data, kind = "bar", height = 5)
g.set_ylabels("heart_disease on Stroke Probability")
plt.title("Effect of Heart Disease on Stroke",fontsize=15)
plt.show()
###Output
_____no_output_____
###Markdown
People with a history of heart disease are more prone to Stroke
###Code
# plot the effect of Hypertension on Stroke
g= sns.catplot(x = "hypertension", y = "stroke", data = stroke_data, kind = "bar", height = 5)
g.set_ylabels("hypertension on Stroke Probability")
plt.title("Effect of Hypertension on Stroke",fontsize=15)
plt.show()
###Output
_____no_output_____
###Markdown
People with a history of Hypertension are more prone to Stroke
###Code
# plot the effect of Gender on Stroke
g= sns.catplot(x = "gender", y = "stroke", data = stroke_data, kind = "bar", height = 5)
g.set_ylabels("Gender on Stroke Probability")
plt.title("Effect of Gender on Stroke",fontsize=15)
plt.show()
###Output
_____no_output_____
###Markdown
Male are more prone to Stroke when compared to Females.
###Code
# Stroke distribution by age Age
plt.figure(figsize=(12,10))
sns.distplot(stroke_data[stroke_data['stroke'] == 0]["age"], color='green') # No Stroke - green
sns.distplot(stroke_data[stroke_data['stroke'] == 1]["age"], color='red') # Stroke - Red
plt.title('No Stroke vs Stroke by Age', fontsize=15)
plt.xlim([18,100])
plt.show()
###Output
_____no_output_____
###Markdown
Based on the above plot, it seems clear that Age is a big factor in stroke patients - the older you get the more at risk you are.
###Code
# plot the effect of work-type on Stroke
plt.figure(figsize=(10,5))
sns.countplot(data=stroke_data[stroke_data["stroke"]==1],x='work_type',palette='cool')
plt.title("Effect of Work Type on Stroke",fontsize=15)
plt.show()
###Output
_____no_output_____
###Markdown
A private work type is associated with more strokes than self-employment or Govt work.
###Code
# plot the effect of Residence_type on Stroke
plt.figure(figsize=(10,5))
sns.countplot(data=stroke_data[stroke_data["stroke"]==1],x='Residence_type',palette='cool')
plt.title("Effect of Residence_type on Stroke",fontsize=15)
plt.show()
###Output
_____no_output_____
###Markdown
People staying in Urban areas are more prone to Stroke
###Code
# BMI Box Plot
plt.figure(figsize=(10,7))
sns.boxplot(data=stroke_data,x=stroke_data["bmi"],color='gray')
plt.title("Box Plot on BMI",fontsize=15)
plt.show()
###Output
_____no_output_____
###Markdown
Train/Test
###Code
# Update the data in gender column, by changing value of Female to 0, Male to 1 and Other to 2
stroke_data['gender'].replace({'Female': 0, 'Male': 1, 'Other': 2}, inplace = True)
# Update the data in ever_married column, by changing value of Yes to 0 and No to 1
stroke_data['ever_married'].replace({'Yes': 0, 'No': 1}, inplace = True)
# Update the data in work_type column, by changing value of Private to 0, Self-employed to 1, children to 2, Govt_job to 3 and Never_worked to 4
stroke_data['work_type'].replace({'Private': 0, 'Self-employed': 1, 'children': 2, 'Govt_job': 3, 'Never_worked': 4}, inplace = True)
# Update the data in Residence_type column, by changing value of Urban to 0 and Rural to 1
stroke_data['Residence_type'].replace({'Urban': 0, 'Rural': 1}, inplace = True)
# Update the data in smoking_status column, by changing value of never smoked to 0, formerly smoked to 1, smokes to 2 and Unknown to 3
stroke_data['smoking_status'].replace({'never smoked': 0, 'formerly smoked': 1, 'smokes': 2, 'Unknown': 3}, inplace = True)
# Pearson Correlation Heatmap
plt.subplots(figsize=(15,12))
sns.heatmap(stroke_data.corr(method = 'pearson'), annot=True, fmt='.0%')
plt.title("Pearson Correlation Heatmap",fontsize=15)
plt.show()
# Spearman Correlation Heatmap
plt.subplots(figsize=(15,12))
sns.heatmap(stroke_data.corr(method = 'spearman'), annot=True, fmt='.0%')
plt.title("Spearman Correlation Heatmap",fontsize=15)
plt.show()
# Train and test data
x=stroke_data.drop(columns=["stroke"],axis="columns")
y=stroke_data.stroke
x.head()
x_train,x_test,y_train,y_test=train_test_split(x,y,test_size=.3,random_state=42)
# Details of training dataset
print("Transaction Number x_train dataset: ", x_train.shape)
print("Transaction Number y_train dataset: ", y_train.shape)
print("Transaction Number x_test dataset: ", x_test.shape)
print("Transaction Number y_test dataset: ", y_test.shape)
print("Before OverSampling, counts of label '1': {}".format(sum(y_train==1)))
print("Before OverSampling, counts of label '0': {} \n".format(sum(y_train==0)))
sns.countplot(x=y_train, data=stroke_data, palette='CMRmap')
plt.title("Before OverSampling",fontsize=15)
plt.show()
###Output
Transaction Number x_train dataset: (3577, 10)
Transaction Number y_train dataset: (3577,)
Transaction Number x_test dataset: (1533, 10)
Transaction Number y_test dataset: (1533,)
Before OverSampling, counts of label '1': 172
Before OverSampling, counts of label '0': 3405
###Markdown
As we see above, the dataset is highly imbalanced, as most of the records belong to patients who never had a stroke. Therefore the algorithms are much more likely to classify new observations into the majority class, and high accuracy alone won't tell us anything. To address this challenge, we use an oversampling approach instead of undersampling. Oversampling increases the number of minority class members in the training set. The advantage of oversampling is that, unlike undersampling, no information from the original training set is lost: all observations from the minority and majority classes are kept. Since this approach is prone to overfitting, we have to be cautious. We use an oversampling technique called SMOTE (Synthetic Minority Oversampling Technique) to make our dataset balanced; it creates synthetic points from the minority class.
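Since the way SMOTE creates synthetic points can be hard to picture, here is a simplified, self-contained sketch of the core interpolation step (toy points, not the library's actual implementation): a synthetic minority sample lies at a random position on the segment between a minority point and one of its minority-class neighbours.

```python
import numpy as np

# Simplified sketch of SMOTE's core idea on hypothetical minority points.
rng = np.random.RandomState(0)
minority = np.array([[3.0, 3.0], [3.5, 2.8], [4.0, 3.4]])

def smote_point(X_min, i, j, rng):
    """Interpolate between minority samples i and j at a random gap in [0, 1]."""
    gap = rng.uniform(0, 1)
    return X_min[i] + gap * (X_min[j] - X_min[i])

synthetic = smote_point(minority, 0, 1, rng)
print(synthetic)  # a point on the segment between minority[0] and minority[1]
```

The real `imblearn` implementation additionally picks the neighbour from the k nearest minority-class samples, which is what the `SMOTE` call below does for us.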
###Code
# Oversample the training dataset
sm = SMOTE(random_state=2)
x_train_s, y_train_s = sm.fit_resample(x_train, y_train.ravel())
print('After OverSampling, the shape of x_train: {}'.format(x_train_s.shape))
print('After OverSampling, the shape of y_train: {} \n'.format(y_train_s.shape))
print("After OverSampling, counts of label '1', %: {}".format(sum(y_train_s==1)/len(y_train_s)*100.0,2))
print("After OverSampling, counts of label '0', %: {}".format(sum(y_train_s==0)/len(y_train_s)*100.0,2))
sns.countplot(x=y_train_s, data=stroke_data, palette='CMRmap')
plt.title("After OverSampling",fontsize=15)
plt.show()
# Determine 10 best features using SelectKBest
best_features = SelectKBest(score_func=f_classif, k=10)
fit = best_features.fit(x_train_s,y_train_s)
df_scores = pd.DataFrame(fit.scores_)
df_columns = pd.DataFrame(x_train_s.columns)
# concatenate dataframes
feature_scores = pd.concat([df_columns, df_scores],axis=1)
feature_scores.columns = ['Feature_Name','Score'] # name output columns
print(feature_scores.nlargest(10,'Score')) # print 10 best features
# Bar plot showing features in the order of score
tmp = feature_scores.sort_values(by='Score',ascending=False)
plt.title('Features importance',fontsize=14)
s = sns.barplot(x='Feature_Name',y='Score',data=tmp)
s.set_xticklabels(s.get_xticklabels(),rotation=90)
plt.show()
###Output
_____no_output_____
###Markdown
Model Evaluation & Selection Random Forest Classifier
###Code
rf = RandomForestClassifier()
rf.fit(x_train_s,y_train_s)
rf_predict = rf.predict(x_test)
dec = np.int64(np.ceil(np.log10(len(y_test))))
print('Confusion Matrix - Random Forest')
print(confusion_matrix(y_test, rf_predict), '\n')
print('Classification report - Random Forest')
print(classification_report(y_test,rf_predict, digits=dec), '\n')
print('Random Forest Accuracy Score = ', accuracy_score(y_test,rf_predict)*100)
skplt.metrics.plot_confusion_matrix(y_test,rf_predict)
plt.title('Random Forest Confusion Matrix')
plt.show()
###Output
_____no_output_____
###Markdown
k-Nearest Neighbors
###Code
kn = KNeighborsClassifier(n_neighbors=4)
kn.fit(x_train_s,y_train_s)
kn_predict = kn.predict(x_test)
print('Confusion Matrix - kNN')
print(confusion_matrix(y_test, kn_predict), '\n')
print('Classification report - kNN')
print(classification_report(y_test,kn_predict, digits=dec), '\n')
print('k-Nearest Neighbor Accuracy Score = ', accuracy_score(y_test,kn_predict)*100)
skplt.metrics.plot_confusion_matrix(y_test,kn_predict)
plt.title('kNN Confusion Matrix')
plt.show()
###Output
_____no_output_____
###Markdown
Decision Tree Classifier
###Code
dt = DecisionTreeClassifier()
dt.fit(x_train_s,y_train_s)
dt_predict = dt.predict(x_test)
print('Confusion Matrix - Decision Tree')
print(confusion_matrix(y_test, dt_predict), '\n')
print('Classification report - Decision Tree')
print(classification_report(y_test,dt_predict, digits=dec), '\n')
print('Decision Tree Accuracy Score = ', accuracy_score(y_test,dt_predict)*100)
skplt.metrics.plot_confusion_matrix(y_test,dt_predict)
plt.title('Decision Tree Confusion Matrix')
plt.show()
###Output
_____no_output_____ |
8.Where_Are_Forests_Located_Widget.ipynb | ###Markdown
"Where are Forests Located?" WidgetThis widget is a mix of a donut chart and a ranked list. It shows tree cover extent by admin region. On hover the pie chart segments display the extent, in ha and %, for that region.The donut chart should display data for the top few admin regions, and group the rest together as 'Other Districts'Displayed data, ordered by DESC area(ha). 1. Admin-2 or -1 name2. % of total extent3. Area of extent (ha)User Variables:1. Hanson extent ('Gadm28'), IFL2013, Plantations or Intact forest2. Admin-0 and -1 region
###Code
#Import Global Metadata etc
%run '0.Importable_Globals.ipynb'
# VARIABLES
location = 'All Region' # 'plantations', 'ifl_2013', or 'primary_forests'... here 'gadm28'=default
threshold = 0 # 0,10,15,20,25,30,50,75,100
adm0 = 'GBR'
adm1 = 1 # To rank admin 1 areas, set to None
# To rank admin 2 areas, specify an admin 1 level
extent_year = 2000 #extent data, 2000 or 2010
tags = ["forest_change", "land_cover"]
selectable_polynames = ['gadm28',
'wdpa',
'primary_forest',
'ifl_2013']
# get admin 1 or 2 level human-readable name info as needed:
adm1_to_name = None
adm2_to_name = None
if adm1:
tmp = get_admin2_json(iso=adm0, adm1=adm1)
adm2_to_name ={}
for row in tmp:
adm2_to_name[row.get('adm2')] = row.get('name')
tmp = get_admin1_json(iso=adm0)
adm1_to_name={}
for row in tmp:
adm1_to_name[row.get('adm1')] = row.get('name')
# Returns json object containing admin-codes, total area and extent (both in ha)
# If adm1 is not specified, it returns the total values for each adm1 region
# Else, returns the adm2 values within that adm1 region
# You may also specify a polyname (intersecting area) e.g. 'extent and % of plantations only'
# By default polyname is 'gadm28' (all forest extent)
def multiregion_extent_queries(adm0, adm1=None, year='area_extent_2000', p_name='gadm28', threshold=30):
if adm0 and not adm1:
print('Request for adm1 areas')
sql = (f"SELECT adm1 as region, sum({year}) as extent, sum(area_gadm28) as total "
f"FROM {ds} "
f"WHERE iso = '{adm0}' "
f"AND thresh = {threshold} "
f"AND polyname = '{p_name}' "
f"GROUP BY adm1 "
f"ORDER BY adm1")
elif adm0 and adm1:
print('Request for adm2 areas')
sql = (f"SELECT adm2 as region, {year} as extent, area_gadm28 as total "
f"FROM {ds} "
f"WHERE iso = '{adm0}' "
f"AND thresh = {threshold} "
f"AND polyname = '{p_name}' "
f"AND adm1 = '{adm1}' ")
return sql
# Takes the data from the above api call and generates a list containing the relevant data:
# Admin-Code, Forest Extent Area, Percentage of Admin region
# NOTE that 'area_percent' is the forest extent area relative to the area of its admin-region.
def data_output(data, adm1=None):
output = []
for d in range(0, len(data)):
tmp_ = {
'region': data[d]['region'],
'area_percent': (100*data[d]['extent']/data[d]['total']),
'area_ha': data[d]['extent']
}
output.append(tmp_)
return output
# Example sql and returned data
url = f"https://production-api.globalforestwatch.org/v1/query/{ds}"
sql = multiregion_extent_queries(adm0, adm1, extent_year_dict[extent_year], polynames[location], threshold)
properties = {"sql": sql}
r = requests.get(url, params = properties)
print(r.url)
print(f'Status: {r.status_code}')
data = r.json()['data']
data[0:3]
# After generating list return wanted metrics
# (NOTE! This is not sorted.)
extent_json = data_output(data, adm1)
extent_json[0:3]
#Sort regions by area (DESC)
newlist = sorted(extent_json, key=lambda k: k['area_ha'], reverse=True)
newlist[0:3]
# Example donut chart
# NOTE - THE COLOURS ARE NOT NECESSARILY THOSE NEEDED FOR PRODUCTION
limit = 0
sizes = []
labels = []
for r in range(0,10):
try:
if adm1:
labels.append(adm2_to_name[newlist[r].get('region')])
elif adm0:
labels.append(adm1_to_name[newlist[r].get('region')])
sizes.append(newlist[r].get('area_ha'))
except:
break
limit += 1
other_regions=0
for rows in range(limit+1,len(newlist)):
other_regions += newlist[rows].get('area_ha')
if other_regions != 0:
labels.append('Other regions')
sizes.append(other_regions)
if adm1:
title = adm1_to_name[adm1]
elif adm0:
title = iso_to_countries[adm0]
fig1, ax1 = plt.subplots()
ax1.pie(sizes, labels=labels, autopct='%1.1f%%',
shadow=False, startangle=90, colors=['#0da330', '#69ef88','green','grey'])
ax1.axis('equal')
centre_circle = plt.Circle((0,0),0.75,color='black', fc='white',linewidth=0.5)
fig1 = plt.gcf()
fig1.gca().add_artist(centre_circle)
plt.title(f'Forest cover in {title}')
plt.show()
###Output
_____no_output_____
###Markdown
Dynamic Sentence for "Where are Forests Located?" Widget 1. Returns the no. of regions responsible for >50% of the region's tree cover extent (adm1) - or, the extent (%) that the top 10% of regions are responsible for (adm2) 2. Max and min extent (%) in that region 3. Average extent (%) that each region contributes to the total
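The three statistics above can be sketched independently of the API data (the shares below are hypothetical and assumed to be pre-sorted in descending order, as in the real pipeline):

```python
# Hypothetical per-region shares of total extent, in %, sorted DESC.
shares = [30.0, 25.0, 15.0, 10.0, 10.0, 5.0, 5.0]

def regions_for_half(sorted_shares):
    """Smallest number of top regions whose cumulative share reaches 50%."""
    acc = 0.0
    for n, share in enumerate(sorted_shares, start=1):
        acc += share
        if acc >= 50:
            return n, acc
    return len(sorted_shares), acc

n_half, acc = regions_for_half(shares)
stats = {"max": shares[0], "min": shares[-1], "avg": sum(shares) / len(shares)}
print(n_half, acc)  # 2 55.0
print(stats)        # max 30.0, min 5.0, avg ~14.29
```

The cells below compute the same quantities (`lower_fity_percentile_regions`, `extent_stats`) from the live query results.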
###Code
# Calculate total tree cover extent at this threshold
total = 0
for i in range(0,len(extent_json)):
total += newlist[i]['area_ha']
# Calculate % extent for the sub-region (relative to total extent) Also filters out incorrect/duplicated data
correct_list = []
for i in range(0,len(extent_json)):
if(i != 0 and newlist[i]['region'] != newlist[i-1]['region']):
correct_list.append(100*newlist[i]['area_ha']/total)
elif i == 0:
correct_list.append(100*newlist[i]['area_ha']/total)
correct_list[0:3]
#Calculate the mean extent
mean=0
for i in range(0, len(correct_list)):
mean += correct_list[i]
mean = mean/len(correct_list)
# Percentile calcs: work out how many regions are responsible for >50% of the extent
# x is no. of adm regions.
tenth_percentile = int(len(correct_list)/10)
if adm1:
top_ten_index = tenth_percentile
total = np.sum(correct_list[0: top_ten_index+1])
accumulated_percent = 0
for n, item in enumerate(correct_list):
accumulated_percent += item
if accumulated_percent >= 50:
lower_fity_percentile_regions = n +1
break
#Extent Stats
extent_stats = { 'max': correct_list[0], 'min': correct_list[len(correct_list)-1], 'avg': mean}
extent_stats
#Dynamic sentence. For adm2.
if adm1:
if len(correct_list) > 10:
print(f"The top {tenth_percentile} sub-regions are responsible for ", end="")
print(f"{total:,.0f}% of {adm1_to_name[adm1]}'s ", end="")
if location == 'All Region':
print(f"regional tree cover in {extent_year} ", end="")
print(f"where tree canopy is greater than {threshold}%. ", end="")
elif (location == 'Mining' or 'Mining in Intact Forest Landscapes' or 'Mining in Plantation Areas'):
print(f"tree cover in areas with {location.lower()} in {extent_year} ", end="")
print(f"where tree canopy is greater than {threshold}%. ", end="")
else:
print(f"tree cover found in {location.lower()} in {extent_year} ", end="")
print(f"where tree canopy is greater than {threshold}%. ", end="")
print(f"{adm2_to_name[newlist[0].get('region')]} has the largest relative tree cover ", end="")
print(f"in {adm1_to_name[adm1]} at {extent_stats['max']:,.0f}%.", end="")
else:
#Dynamic sentence. For adm1.
if len(correct_list) > 10:
print(f"In {iso_to_countries[adm0]} {lower_fity_percentile_regions} ", end="")
print(f"regions represent more than half ({accumulated_percent:,.0f}%) ",end="")
print(f"of all tree cover extent ", end="")
if location == 'All Region':
print(f"country-wide. ", end="")
elif (location == 'Mining' or 'Mining in Intact Forest Landscapes' or 'Mining in Plantation Areas'):
print(f"in areas with {location.lower()}. ", end="")
else:
print(f"found in {location.lower()}. ", end="")
else:
print(f"Within {iso_to_countries[adm0]}, ", end="")
print(f"{adm1_to_name[newlist[0].get('region')]} ", end="")
print(f"has the largest relative tree cover in {extent_year} ", end="")
print(f"at {extent_stats['max']:,.0f}%, ", end="")
print(f"where tree canopy is greater than {threshold}%. ", end="")
###Output
The top 11 sub-regions are responsible for 1,879% of England's regional tree cover in 2000 where tree canopy is greater than 0%. Surrey has the largest relative tree cover in England at 8%.
###Markdown
"Which regions are the most forested?" WidgetA separate widget, which displays only a ranked list of subregions by relative tree cover extent (i.e. relative to the subregion's size) and a dynamic sentence.Replaces the % option in the **"Where are the forests located?"** widget.
###Code
# VARIABLES
location = 'All Region' # 'plantations', 'ifl_2013', or 'primary_forests'... here 'gadm28'=default
threshold = 0 # 0,10,15,20,25,30,50,75,100
adm0 = 'GBR'
adm1 = 1 # To rank admin 1 areas, set to None
# To rank admin 2 areas, specify an admin 1 level
extent_year = 2000 #extent data, 2000 or 2010
tags = ["forest_change", "land_cover"]
selectable_polynames = ['gadm28',
'wdpa',
'primary_forest',
'ifl_2013']
# get admin 1 or 2 level human-readable name info as needed:
adm1_to_name = None
adm2_to_name = None
if adm1:
tmp = get_admin2_json(iso=adm0, adm1=adm1)
adm2_to_name ={}
for row in tmp:
adm2_to_name[row.get('adm2')] = row.get('name')
tmp = get_admin1_json(iso=adm0)
adm1_to_name={}
for row in tmp:
adm1_to_name[row.get('adm1')] = row.get('name')
# Returns json object containing admin-codes, total area and extent (both in ha)
# If adm1 is not specified, it returns the total values for each adm1 region
# Else, returns the adm2 values within that adm1 region
# You may also specify a polyname (intersecting area) e.g. 'extent and % of plantations only'
# By default polyname is 'gadm28' (all forest extent)
def multiregion_extent_queries(adm0, adm1=None, year='area_extent_2000', p_name='gadm28', threshold=30):
if adm0 and not adm1:
print('Request for adm1 areas')
sql = (f"SELECT adm1 as region, sum({year}) as extent, sum(area_gadm28) as total "
f"FROM {ds} "
f"WHERE iso = '{adm0}' "
f"AND thresh = {threshold} "
f"AND polyname = '{p_name}' "
f"GROUP BY adm1 "
f"ORDER BY adm1")
elif adm0 and adm1:
print('Request for adm2 areas')
sql = (f"SELECT adm2 as region, {year} as extent, area_gadm28 as total "
f"FROM {ds} "
f"WHERE iso = '{adm0}' "
f"AND thresh = {threshold} "
f"AND polyname = '{p_name}' "
f"AND adm1 = '{adm1}' ")
return sql
# Takes the data from the above api call and generates a list containing the relevant data:
# Admin-Code, Forest Extent Area, Percentage of Admin region
# NOTE that 'area_percent' is the forest extent area relative to the area of its admin-region.
def data_output(data, adm1=None):
output = []
for d in range(0, len(data)):
tmp_ = {
'region': data[d]['region'],
'area_percent': (100*data[d]['extent']/data[d]['total']),
'area_ha': data[d]['extent']
}
output.append(tmp_)
return output
# Example sql and returned data
url = f"https://production-api.globalforestwatch.org/v1/query/{ds}"
sql = multiregion_extent_queries(adm0, adm1, extent_year_dict[extent_year], polynames[location], threshold)
properties = {"sql": sql}
r = requests.get(url, params = properties)
print(r.url)
print(f'Status: {r.status_code}')
data = r.json()['data']
data[0:3]
# After generating list return wanted metrics
# (NOTE! This is not sorted.)
extent_json = data_output(data, adm1)
extent_json[0:3]
#Sort regions by relative area, in % (DESC)
newlist = sorted(extent_json, key=lambda k: k['area_percent'], reverse=True)
newlist[0:3]
#Calculate total tree cover at this threshold
total = 0
for i in range(0,len(extent_json)):
total += newlist[i]['area_percent']
mean = total/len(extent_json)
mean
#Dynamic sentence. For adm2.
if adm1:
print(f"The most forested sub-region in {adm1_to_name[adm1]} ", end="")
print(f"is {adm2_to_name[newlist[0].get('region')]} ", end="")
else:
print(f"The most forested sub-region in {iso_to_countries[adm0]} ", end="")
    print(f"is {adm1_to_name[newlist[0].get('region')]} ", end="")
print(f"at {newlist[0].get('area_percent')}% ", end="")
if location == 'All Region':
print(f"tree cover in {extent_year}, ", end="")
print(f"where tree canopy is greater than {threshold}%. ", end="")
elif location in ('Mining', 'Mining in Intact Forest Landscapes', 'Mining in Plantation Areas'):
print(f"tree cover in areas with {location.lower()} in {extent_year}, ", end="")
print(f"where tree canopy is greater than {threshold}%. ", end="")
else:
print(f"tree cover found in {location.lower()} in {extent_year}, ", end="")
print(f"where tree canopy is greater than {threshold}%. ", end="")
print("This is compared to an average ", end="")
print(f"of {mean:,.1f}% across all sub-regions. ", end="")
###Output
The most forested sub-region in England is Surrey at 41.784433325254305% tree cover in 2000, where tree canopy is greater than 0%. This is compared to an average of 16.8% across all sub-regions. |
My_notebooks/collision_avoidance/data_collection.ipynb | ###Markdown
Collision Avoidance - Data CollectionIf you ran through the basic motion notebook, hopefully you're enjoying how easy it can be to make your JetBot move around! That's very cool! But what's even cooler is making JetBot move around all by itself! This is a super hard task that has many different approaches, but the whole problem is usually broken down into easier sub-problems. It could be argued that one of the most important sub-problems to solve is the problem of preventing the robot from entering dangerous situations! We're calling this *collision avoidance*. In this set of notebooks, we're going to attempt to solve the problem using deep learning and a single, very versatile, sensor: the camera. You'll see how with a neural network, camera, and the NVIDIA Jetson Nano, we can teach the robot a very useful behavior!The approach we take to avoiding collisions is to create a virtual "safety bubble" around the robot. Within this safety bubble, the robot is able to spin in a circle without hitting any objects (or other dangerous situations like falling off a ledge). Of course, the robot is limited by what's in its field of vision, and we can't prevent objects from being placed behind the robot, etc. But we can prevent the robot from entering these scenarios itself.The way we'll do this is super simple: First, we'll manually place the robot in scenarios where its "safety bubble" is violated, and label these scenarios ``blocked``. We save a snapshot of what the robot sees along with this label.Second, we'll manually place the robot in scenarios where it's safe to move forward a bit, and label these scenarios ``free``. Likewise, we save a snapshot along with this label.That's all that we'll do in this notebook; data collection. Once we have lots of images and labels, we'll upload this data to a GPU-enabled machine where we'll *train* a neural network to predict whether the robot's safety bubble is being violated based on the image it sees. 
We'll use this to implement a simple collision avoidance behavior in the end :)> IMPORTANT NOTE: When JetBot spins in place, it actually spins about the center between the two wheels, not the center of the robot chassis itself. This is an important detail to remember when you're trying to estimate whether the robot's safety bubble is violated or not. But don't worry, you don't have to be exact. If in doubt it's better to lean on the cautious side (a big safety bubble). We want to make sure JetBot doesn't enter a scenario that it couldn't get out of by turning in place. Display live camera feedSo let's get started. First, let's initialize and display our camera like we did in the *teleoperation* notebook. > Our neural network takes a 224x224 pixel image as input. We'll set our camera to that size to minimize the filesize of our dataset (we've tested that it works for this task).> In some scenarios it may be better to collect data in a larger image size and downscale to the desired size later.
###Code
import traitlets
from jetcam.usb_camera import USBCamera
from jetcam.utils import bgr8_to_jpeg
import ipywidgets as widgets
from IPython.display import display
camera = USBCamera(width=224, height=224, capture_width=640, capture_height=480, capture_device=0)
image = widgets.Image(format='jpeg', width=224, height=224) # this width and height doesn't necessarily have to match the camera
camera_link = traitlets.dlink((camera, 'value'), (image, 'value'), transform=bgr8_to_jpeg)
display(image)
camera.running = True
###Output
_____no_output_____
###Markdown
Awesome, next let's create a few directories where we'll store all our data. We'll create a folder ``dataset`` that will contain two sub-folders ``free`` and ``blocked``, where we'll place the images for each scenario.
###Code
import os
blocked_dir = '/workspace/jetbot/dataset0/blocked'
free_dir = '/workspace/jetbot/dataset0/free'
# we have this "try/except" statement because these next functions can throw an error if the directories exist already
try:
os.makedirs(free_dir)
os.makedirs(blocked_dir)
except FileExistsError:
print('Directories not created because they already exist')
###Output
_____no_output_____
###Markdown
If you refresh the Jupyter file browser on the left, you should now see those directories appear. Next, let's create and display some buttons that we'll use to save snapshots for each class label. We'll also add some text boxes that will display how many images of each category we've collected so far. This is useful because we want to make sure we collect about as many ``free`` images as ``blocked`` images. It also helps to know how many images we've collected overall.
###Code
import ipywidgets.widgets as widgets
button_layout = widgets.Layout(width='128px', height='64px')
free_button = widgets.Button(description='add free', button_style='success', layout=button_layout)
blocked_button = widgets.Button(description='add blocked', button_style='danger', layout=button_layout)
free_count = widgets.IntText(layout=button_layout, value=len(os.listdir(free_dir)))
blocked_count = widgets.IntText(layout=button_layout, value=len(os.listdir(blocked_dir)))
display(widgets.HBox([free_count, free_button]))
display(widgets.HBox([blocked_count, blocked_button]))
###Output
_____no_output_____
###Markdown
Right now, these buttons won't do anything. We have to attach functions to save images for each category to the buttons' ``on_click`` event. We'll save the value of the ``Image`` widget (rather than the camera), because it's already in compressed JPEG format!To make sure we don't repeat any file names (even across different machines!) we'll use the ``uuid`` package in python, which defines the ``uuid1`` method to generate a unique identifier. This unique identifier is generated from information like the current time and the machine address.
###Code
from uuid import uuid1
def save_snapshot(directory):
image_path = os.path.join(directory, str(uuid1()) + '.jpg')
with open(image_path, 'wb') as f:
        f.write(image.value)
def save_free():
global free_dir, free_count
save_snapshot(free_dir)
free_count.value = len(os.listdir(free_dir))
def save_blocked():
global blocked_dir, blocked_count
save_snapshot(blocked_dir)
blocked_count.value = len(os.listdir(blocked_dir))
# attach the callbacks, we use a 'lambda' function to ignore the
# parameter that the on_click event would provide to our function
# because we don't need it.
free_button.on_click(lambda x: save_free())
blocked_button.on_click(lambda x: save_blocked())
###Output
_____no_output_____
###Markdown
Great! Now the buttons above should save images to the ``free`` and ``blocked`` directories. You can use the Jupyter Lab file browser to view these files!Now go ahead and collect some data 1. Place the robot in a scenario where it's blocked and press ``add blocked``2. Place the robot in a scenario where it's free and press ``add free``3. Repeat 1, 2> REMINDER: You can move the widgets to new windows by right clicking the cell and clicking ``Create New View for Output``. Or, you can just re-display them> together as we will belowHere are some tips for labeling data1. Try different orientations2. Try different lighting3. Try varied object / collision types; walls, ledges, objects4. Try different textured floors / objects; patterned, smooth, glass, etc.Ultimately, the more data we have of scenarios the robot will encounter in the real world, the better our collision avoidance behavior will be. It's important to get *varied* data (as described by the above tips) and not just a lot of data, but you'll probably need at least 100 images of each class (that's not a science, just a helpful tip here). But don't worry, it goes pretty fast once you get going :)
###Code
display(image)
display(widgets.HBox([free_count, free_button]))
display(widgets.HBox([blocked_count, blocked_button]))
###Output
_____no_output_____
###Markdown
Again, let's close the camera connection properly so that we can use the camera in the later notebook.
###Code
camera.stop()
###Output
_____no_output_____
###Markdown
NextOnce you've collected enough data, we'll need to copy that data to our GPU desktop or cloud machine for training. First, we can call the following *terminal* command to compress our dataset folder into a single *zip* file.> The ! prefix indicates that we want to run the cell as a *shell* (or *terminal*) command.> The -r flag in the zip command below indicates *recursive* so that we include all nested files, the -q flag indicates *quiet* so that the zip command doesn't print any output
###Code
!zip -r -q /workspace/jetbot/data0.zip /workspace/jetbot/dataset0
###Output
_____no_output_____
###Markdown
You should see a file named ``data0.zip`` in the Jupyter Lab file browser. You should download the zip file using the Jupyter Lab file browser by right clicking and selecting ``Download``.Next, we'll need to upload this data to our GPU desktop or cloud machine (we refer to this as the *host*) to train the collision avoidance neural network. We'll assume that you've set up your training machine as described in the JetBot Wiki. If you have, you can navigate to ``http://<host_ip_address>:8888`` to open up the Jupyter Lab environment running on the host. The notebook you'll need to open there is called ``collision_avoidance/train_model.ipynb``.So head on over to your training machine and follow the instructions there! Once your model is trained, we'll return to the robot Jupyter Lab environment to use the model for a live demo!
###Code
camera.running = False
camera_link.unlink()
###Output
_____no_output_____ |
NoSQL/NetworkX/force.ipynb | ###Markdown
JavascriptExample of writing JSON format graph data and using the D3 Javascript library to produce an HTML/Javascript drawing.
###Code
# Author: Aric Hagberg <[email protected]>
# Copyright (C) 2011-2019 by
# Aric Hagberg <[email protected]>
# Dan Schult <[email protected]>
# Pieter Swart <[email protected]>
# All rights reserved.
# BSD license.
import json
import flask
import networkx as nx
from networkx.readwrite import json_graph
G = nx.barbell_graph(6, 3)
# this d3 example uses the name attribute for the mouse-hover value,
# so add a name to each node
for n in G:
G.nodes[n]['name'] = n
# write json formatted data
d = json_graph.node_link_data(G) # node-link format to serialize
# write json
json.dump(d, open('force/force.json', 'w'))
print('Wrote node-link JSON data to force/force.json')
# Serve the file over http to allow for cross origin requests
app = flask.Flask(__name__, static_folder="force")
@app.route('/')
def static_proxy():
return app.send_static_file('force.html')
print('\nGo to http://localhost:8000 to see the example\n')
app.run(port=8000)
###Output
_____no_output_____ |
colgado.ipynb | ###Markdown
Colgado (Hangman) This code is adapted from https://github.com/kiteco/python-youtube-code/tree/master/build-hangman-in-python.The tutorial can also be found on YouTube: https://www.youtube.com/watch?v=m4nEnsavl6w&t=363s. Libraries
###Code
# This library is used to randomly choose the word that has to be guessed.
import random
###Output
_____no_output_____
###Markdown
Functions `obtiene_palabras`
###Code
def obtiene_palabras():
"""
    Reads the palabras.txt file and stores its contents in a List. Each element
    of the List is one of the words in the file.
"""
palabras = []
with open('palabras.txt', 'r') as f_palabras:
for line in f_palabras:
for w in line.split(','):
palabras.append(w.rstrip().lstrip())
return palabras
palabras = obtiene_palabras()
print(palabras[0: 10])
###Output
['humanidad', 'humano', 'peo', 'poto', 'persona', 'gente', 'hombre', 'mujer', 'bebé', 'niño']
###Markdown
`elige_palabra(palabras)`
###Code
def elige_palabra(palabras):
"""
    This function randomly chooses one of the words in the List.
"""
palabra = random.choice(palabras)
return palabra.upper()
elige_palabra(palabras)
###Output
_____no_output_____
###Markdown
`muestra_colgado(tentativos)`
###Code
def muestra_colgado(tentativos):
"""
    Displays the state of the hangman based on the number of remaining attempts.
    The maximum number of attempts must be 6.
"""
    etapas = [ # final state: head, torso, arms and legs
"""
--------
| |
| O
| \\|/
| |
| / \\
-
""",
        # head, torso, arms and one leg
"""
--------
| |
| O
| \\|/
| |
| /
-
""",
        # head, torso and both arms
"""
--------
| |
| O
| \\|/
| |
|
-
""",
        # head, torso and one arm
"""
--------
| |
| O
| \\|
| |
|
-
""",
        # head and torso
"""
--------
| |
| O
| |
| |
|
-
""",
        # head
"""
--------
| |
| O
|
|
|
-
""",
        # initial state
"""
--------
| |
|
|
|
|
-
"""
]
return etapas[tentativos]
print(muestra_colgado(0))
###Output
--------
| |
| O
| \|/
| |
| / \
-
###Markdown
`juega(palabra)` Esta es la función principal del juego.
###Code
def juega(palabra):
adivinado = False
letras_adivinadas = []
palabras_adivinadas = []
intentos = 6
print("¡Juguemos al Colgado!")
print(muestra_colgado(intentos))
linea_palabra = "_" * len(palabra)
print(linea_palabra)
print(f'La palabra tiene {len(palabra)} letras.')
print("\n")
    # Enter the main loop of a round
while not adivinado and intentos > 0:
intento = input(
"Por favor adivina una letra o toda la palabra: "
).upper()
if len(intento) == 1 and intento.isalpha():
if intento in letras_adivinadas:
print(f'Ya intentaste con la letra {intento}')
elif intento not in palabra:
print(f'La letra {intento} no está en la palabra.')
intentos -= 1
letras_adivinadas.append(intento)
else:
print(f'¡Buena! La letra {intento} está en la palabra.')
letras_adivinadas.append(intento)
word_as_list = list(linea_palabra)
indices = [i for i, letter in enumerate(
palabra) if letter == intento]
for index in indices:
word_as_list[index] = intento
linea_palabra = "".join(word_as_list)
if "_" not in linea_palabra:
adivinado = True
elif len(intento) == len(palabra) and intento.isalpha():
if intento in palabras_adivinadas:
print(f'Ya intentaste con la palabra {intento}.')
elif intento != palabra:
print(f'La palabra no es {intento}.')
intentos -= 1
palabras_adivinadas.append(intento)
else:
adivinado = True
linea_palabra = palabra
else:
print("Ese no es un intento válido.")
print(muestra_colgado(intentos))
print(linea_palabra)
print("\n")
if adivinado:
print("¡Felicitaciones! Adivinaste la palabra y ganaste.")
else:
print(f'Pucha, te quedaste sin intentos, la palabra era {palabra}.')
def main():
palabras = obtiene_palabras()
palabra = elige_palabra(palabras)
juega(palabra)
while input("¿Juegas de nuevo? (S/N) ").upper() == "S":
        palabra = elige_palabra(palabras)
juega(palabra)
main()
###Output
¡Juguemos al Colgado!
--------
| |
|
|
|
|
-
______
La palabra tiene 6 letras.
Por favor adivina una letra o toda la palabra: a
¡Buena! La letra A está en la palabra.
--------
| |
|
|
|
|
-
____A_
Por favor adivina una letra o toda la palabra: e
¡Buena! La letra E está en la palabra.
--------
| |
|
|
|
|
-
_E__A_
Por favor adivina una letra o toda la palabra: T
La letra T no está en la palabra.
--------
| |
| O
|
|
|
-
_E__A_
Por favor adivina una letra o toda la palabra: r
¡Buena! La letra R está en la palabra.
--------
| |
| O
|
|
|
-
_ERRAR
Por favor adivina una letra o toda la palabra: Herrar
La palabra no es HERRAR.
--------
| |
| O
| |
| |
|
-
_ERRAR
Por favor adivina una letra o toda la palabra: yerrar
La palabra no es YERRAR.
--------
| |
| O
| \|
| |
|
-
_ERRAR
Por favor adivina una letra o toda la palabra: cerrar
--------
| |
| O
| \|
| |
|
-
CERRAR
¡Felicitaciones! Adivinaste la palabra y ganaste.
¿Juegas de nuevo? (S/N) n
|
notebooks/03_cudf_group_sort.ipynb | ###Markdown
Grouping and Sorting with cuDF In this notebook you will be introduced to grouping and sorting with cuDF, with performance comparisons to Pandas, before integrating what you learned in a short data analysis exercise. Objectives By the time you complete this notebook you will be able to:- Perform GPU-accelerated group and sort operations with cuDF Imports
###Code
import cudf
import pandas as pd
###Output
_____no_output_____
###Markdown
Read Data We once again read the UK population data, returning to timed comparisons with Pandas.
###Code
%time gdf = cudf.read_csv('../data/data_pop.csv')
gdf.drop(gdf.columns[0], axis=1, inplace=True)
%time df = pd.read_csv('../data/data_pop.csv')
df.drop(df.columns[0], axis=1, inplace=True)
gdf.shape == df.shape
gdf.dtypes
gdf.shape
gdf.head()
###Output
_____no_output_____
###Markdown
Grouping and Sorting Record Grouping Record grouping with cuDF works the same way as in Pandas. cuDF
###Code
%%time
counties = gdf[['county', 'age']].groupby(['county'])
avg_ages = counties.mean()
print(avg_ages[:5])
###Output
_____no_output_____
###Markdown
Pandas
###Code
%%time
counties_pd = df[['county', 'age']].groupby(['county'])
avg_ages_pd = counties_pd.mean()
print(avg_ages_pd[:5])
###Output
_____no_output_____
###Markdown
Sorting Sorting is also very similar to Pandas, though cuDF does not support in-place sorting. cuDF
###Code
%time gdf_names = gdf['name'].sort_values()
print(gdf_names[:5]) # yes, "A" is an infrequent but correct given name in the UK, according to census data
print(gdf_names[-5:])
###Output
_____no_output_____
###Markdown
Pandas This operation takes a while with Pandas. Feel free to start the next exercise while you wait.
###Code
%time df_names = df['name'].sort_values()
print(df_names[:5])
print(df_names[-5:])
###Output
_____no_output_____
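Since cuDF has no in-place sort, the idiom above always binds the returned object. In pandas both styles exist; a quick sketch of the difference:

```python
import pandas as pd

s = pd.Series([3, 1, 2])
out = s.sort_values()        # returns a new sorted Series; s is unchanged
print(list(s), list(out))    # [3, 1, 2] [1, 2, 3]

s.sort_values(inplace=True)  # pandas-only: mutates s; cuDF does not support this
print(list(s))               # [1, 2, 3]
```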
###Markdown
Exercise 3: Youngest Names For this exercise you will need to use both `groupby` and `sort_values`.We would like to know which names are associated with the lowest average age and how many people have those names. Using the `mean` and `count` methods on the data grouped by name, identify the three names with the lowest mean age and their counts. Visualize the Population - Use Bokeh to visualize the population data
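One way to tackle the exercise above, demonstrated on a tiny synthetic frame so it runs without the census file — the names and ages here are invented, and the same `groupby`/`agg`/`sort_values` calls apply unchanged to `gdf`:

```python
import pandas as pd

# invented sample data standing in for the census frame
toy = pd.DataFrame({
    'name': ['AMY', 'BOB', 'AMY', 'CAL', 'BOB', 'CAL', 'DEE'],
    'age':  [2, 30, 4, 60, 40, 70, 1],
})
# mean age and head count per name, then keep the three lowest means
summary = toy.groupby('name')['age'].agg(['mean', 'count'])
youngest_three = summary.sort_values('mean').head(3)
print(youngest_three)
```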
###Code
import cupy as cp
from bokeh import plotting as bplt
from bokeh import models as bmdl
###Output
_____no_output_____
###Markdown
Setup Visualizations RAPIDS can be used with a wide array of visualizations, both open source and proprietary. We won't teach to a specific visualization option in this workshop but will just use the open source [Bokeh](https://bokeh.pydata.org/en/latest/index.html) to illustrate the results of some machine learning algorithms. As such, please feel free to make a light pass over this section, which enables visualizations to be output in this notebook, and creates a visualization helper function `base_plot` we will use below.
###Code
# Turn on in-Jupyter viz
bplt.output_notebook()
# Helper function for visuals
def base_plot(data=None, padding=None,
tools='pan,wheel_zoom,reset', plot_width=500, plot_height=500, x_range=(0, 100), y_range=(0, 100), **plot_args):
# if we send in two columns of data, we can use them to auto-size the scale
if data is not None and padding is not None:
x_range = (min(data.iloc[:, 0]) - padding, max(data.iloc[:, 0]) + padding)
y_range = (min(data.iloc[:, 1]) - padding, max(data.iloc[:, 1]) + padding)
p = bplt.figure(tools=tools, plot_width=plot_width, plot_height=plot_height,
x_range=x_range, y_range=y_range, outline_line_color=None,
min_border=0, min_border_left=0, min_border_right=0,
min_border_top=0, min_border_bottom=0,
**plot_args)
p.axis.visible = True
p.xgrid.grid_line_color = None
p.ygrid.grid_line_color = None
p.add_tools(bmdl.BoxZoomTool(match_aspect=True))
return p
###Output
_____no_output_____
###Markdown
Subset Data for Vizualizations Bokeh, [DataShader](http://datashader.org/), and other open source visualization projects are being connected with RAPIDS via the [cuXfilter](https://github.com/rapidsai/cuxfilter) framework. For simplicity in this workshop, we will use the standard CPU Bokeh. CPU performance can be a real bottleneck to our workflows, so the typical approach is to select subsets of our data to visualize, especially during initial iterations.Here we make a subset of our data, and use the `to_pandas` method on that subset so that we can pass the pandas Dataframe for visualizations:
###Code
plot_subset = gdf.take(cp.random.choice(gdf.shape[0], size=100000, replace=True))
df_subset = plot_subset.to_pandas()
df_subset.head()
###Output
_____no_output_____
###Markdown
Visualize Population Density and Distribution To avoid overplotting, we shrink the `alpha` value and reduce the `size` of each pixel.
###Code
options = dict(line_color=None,
fill_color='blue',
size=2, # Reduce size to make points more distinct
alpha=.05) # Reduce alpha to avoid overplotting
###Output
_____no_output_____
###Markdown
We give the `easting` and `northing` columns of our data subset to our visualization helper function...
###Code
p = base_plot(data=df_subset[['easting', 'northing']],
padding=10000)
###Output
_____no_output_____
###Markdown
...plot circles for each datapoint...
###Code
p.circle(x=list(df_subset['easting']), y=list(df_subset['northing']), **options)
###Output
_____no_output_____
###Markdown
...and display.
###Code
bplt.show(p)
###Output
_____no_output_____ |
Data_Analytics_in_Action/digits.ipynb | ###Markdown
**Recognizing Handwritten Digits** Importing the dataset
###Code
from sklearn import datasets
digits = datasets.load_digits()
print(digits.DESCR)
###Output
Optical Recognition of Handwritten Digits Data Set
===================================================
Notes
-----
Data Set Characteristics:
:Number of Instances: 5620
:Number of Attributes: 64
:Attribute Information: 8x8 image of integer pixels in the range 0..16.
:Missing Attribute Values: None
:Creator: E. Alpaydin (alpaydin '@' boun.edu.tr)
:Date: July; 1998
This is a copy of the test set of the UCI ML hand-written digits datasets
http://archive.ics.uci.edu/ml/datasets/Optical+Recognition+of+Handwritten+Digits
The data set contains images of hand-written digits: 10 classes where
each class refers to a digit.
Preprocessing programs made available by NIST were used to extract
normalized bitmaps of handwritten digits from a preprinted form. From a
total of 43 people, 30 contributed to the training set and different 13
to the test set. 32x32 bitmaps are divided into nonoverlapping blocks of
4x4 and the number of on pixels are counted in each block. This generates
an input matrix of 8x8 where each element is an integer in the range
0..16. This reduces dimensionality and gives invariance to small
distortions.
For info on NIST preprocessing routines, see M. D. Garris, J. L. Blue, G.
T. Candela, D. L. Dimmick, J. Geist, P. J. Grother, S. A. Janet, and C.
L. Wilson, NIST Form-Based Handprint Recognition System, NISTIR 5469,
1994.
References
----------
- C. Kaynak (1995) Methods of Combining Multiple Classifiers and Their
Applications to Handwritten Digit Recognition, MSc Thesis, Institute of
Graduate Studies in Science and Engineering, Bogazici University.
- E. Alpaydin, C. Kaynak (1998) Cascading Classifiers, Kybernetika.
- Ken Tang and Ponnuthurai N. Suganthan and Xi Yao and A. Kai Qin.
Linear dimensionalityreduction using relevance weighted LDA. School of
Electrical and Electronic Engineering Nanyang Technological University.
2005.
- Claudio Gentile. A New Approximate Maximal Margin Classification
Algorithm. NIPS. 2000.
###Markdown
The handwritten digit image data is stored in digits.images. Each element of the array represents one image as an $8 \times 8$ matrix of numeric values, where each value corresponds to a grayscale level: 0 represents white and 16 represents black.
###Code
digits.images[0]
###Output
_____no_output_____
###Markdown
Using the matplotlib library, we can render the image:
###Code
import matplotlib.pyplot as plt
%matplotlib inline
plt.imshow(digits.images[0],cmap=plt.cm.gray_r,interpolation='nearest')
digits.target
digits.target.size
###Output
_____no_output_____
###Markdown
Learning to Predict The digits dataset has 1797 elements. We will use the first 1791 as the training set and the remaining 6 as the validation set. Let's look at them in detail:
###Code
import matplotlib.pyplot as plt
%matplotlib inline
plt.subplot(321)
plt.imshow(digits.images[1791], cmap=plt.cm.gray_r, interpolation='nearest')
plt.subplot(322)
plt.imshow(digits.images[1792], cmap=plt.cm.gray_r, interpolation='nearest')
plt.subplot(323)
plt.imshow(digits.images[1793], cmap=plt.cm.gray_r, interpolation='nearest')
plt.subplot(324)
plt.imshow(digits.images[1794], cmap=plt.cm.gray_r, interpolation='nearest')
plt.subplot(325)
plt.imshow(digits.images[1795], cmap=plt.cm.gray_r, interpolation='nearest')
plt.subplot(326)
plt.imshow(digits.images[1796], cmap=plt.cm.gray_r, interpolation='nearest')
###Output
_____no_output_____
###Markdown
Define the SVC estimator and train it:
###Code
from sklearn import svm
svc = svm.SVC(gamma=0.0001,C=100.)
svc.fit(digits.data[:1791], digits.target[:1791])
svc.predict(digits.data[1791:])
digits.target[1791:]
###Output
_____no_output_____ |
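The notebook compares predictions and targets by eye; a quick way to quantify the match is an accuracy score over the held-out images. A sketch assuming the first-1791/last-6 split described above:

```python
from sklearn import datasets, svm

digits = datasets.load_digits()
svc = svm.SVC(gamma=0.0001, C=100.)
svc.fit(digits.data[:1791], digits.target[:1791])

# score the six held-out digits
pred = svc.predict(digits.data[1791:])
accuracy = (pred == digits.target[1791:]).mean()
print(f"{len(pred)} validation images, accuracy = {accuracy:.2%}")
```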
.ipynb_checkpoints/Recommender Systems with Python-checkpoint.ipynb | ###Markdown
Movie Recommendation System with PythonIn this project, we'll develop a basic recommender system with Python and pandas.Movies will be suggested by similarity to other movies; this is not a robust recommendation system, but something to start out on.
###Code
import numpy as np
import pandas as pd
###Output
_____no_output_____
###Markdown
Data We have two datasets:- A dataset of movie ratings.- A dataset of all movie titles and their ids.
###Code
#Reading the ratings dataset.
column_names = ['user_id', 'item_id', 'rating', 'timestamp']
df = pd.read_csv('data/u.data', sep='\t', names=column_names)
df.head()
###Output
_____no_output_____
###Markdown
Reading the movie titles
###Code
movie_titles = pd.read_csv("data/Movie_Id_Titles")
movie_titles.head()
###Output
_____no_output_____
###Markdown
We can merge them together:
###Code
df = pd.merge(df,movie_titles,on='item_id')
df.head()
###Output
_____no_output_____
###Markdown
Exploratory AnalysisLet's explore the data a bit and get a look at some of the best rated movies.
###Code
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style('white')
%matplotlib inline
###Output
_____no_output_____
###Markdown
Let's create a ratings dataframe with average rating and number of ratings:
###Code
df.groupby('title')['rating'].mean().sort_values(ascending=False).head()
df.groupby('title')['rating'].count().sort_values(ascending=False).head()
ratings = pd.DataFrame(df.groupby('title')['rating'].mean())
ratings.head()
###Output
_____no_output_____
###Markdown
Setting the number of ratings column:
###Code
ratings['num of ratings'] = pd.DataFrame(df.groupby('title')['rating'].count())
ratings.head()
###Output
_____no_output_____
###Markdown
Visualizing the number of ratings
###Code
plt.figure(figsize=(10,4))
ratings['num of ratings'].hist(bins=40)
plt.figure(figsize=(10,4))
ratings['rating'].hist(bins=70)
sns.jointplot(x='rating',y='num of ratings',data=ratings,alpha=0.5)
###Output
_____no_output_____
###Markdown
Okay! Now that we have a general idea of what the data looks like, let's move on to creating a simple recommendation system: Recommending Similar Movies Now let's create a matrix that has the user ids on one access and the movie title on another axis. Each cell will then consist of the rating the user gave to that movie. Note there will be a lot of NaN values, because most people have not seen most of the movies.
###Code
moviemat = df.pivot_table(index='user_id',columns='title',values='rating')
moviemat.head()
###Output
_____no_output_____
###Markdown
The most-rated movies:
###Code
ratings.sort_values('num of ratings',ascending=False).head(10)
###Output
_____no_output_____
###Markdown
Let's choose two movies: Star Wars, a sci-fi movie, and Liar Liar, a comedy.
###Code
ratings.head()
###Output
_____no_output_____
###Markdown
Now let's grab the user ratings for those two movies:
###Code
starwars_user_ratings = moviemat['Star Wars (1977)']
liarliar_user_ratings = moviemat['Liar Liar (1997)']
starwars_user_ratings.head()
###Output
_____no_output_____
###Markdown
We can then use the corrwith() method to compute the correlation of every movie's ratings with each of these two pandas Series:
###Code
similar_to_starwars = moviemat.corrwith(starwars_user_ratings)
similar_to_liarliar = moviemat.corrwith(liarliar_user_ratings)
###Output
/Users/marci/anaconda/lib/python3.5/site-packages/numpy/lib/function_base.py:2487: RuntimeWarning: Degrees of freedom <= 0 for slice
warnings.warn("Degrees of freedom <= 0 for slice", RuntimeWarning)
###Markdown
Let's clean this by removing NaN values and using a DataFrame instead of a series:
###Code
corr_starwars = pd.DataFrame(similar_to_starwars,columns=['Correlation'])
corr_starwars.dropna(inplace=True)
corr_starwars.head()
###Output
_____no_output_____
###Markdown
Now if we sort the dataframe by correlation, we should get the most similar movies. However, note that we get some results that don't really make sense. This is because many movies were watched only once, by users who also happened to watch Star Wars (it was the most popular movie).
###Code
corr_starwars.sort_values('Correlation',ascending=False).head(10)
###Output
_____no_output_____
###Markdown
Let's fix this by filtering out movies that have fewer than 100 ratings (a threshold chosen based on the histogram from earlier).
###Code
corr_starwars = corr_starwars.join(ratings['num of ratings'])
corr_starwars.head()
###Output
_____no_output_____
###Markdown
Now sort the values and notice how the titles make a lot more sense:
###Code
corr_starwars[corr_starwars['num of ratings']>100].sort_values('Correlation',ascending=False).head()
###Output
_____no_output_____
###Markdown
Now the same for the comedy Liar Liar:
###Code
corr_liarliar = pd.DataFrame(similar_to_liarliar,columns=['Correlation'])
corr_liarliar.dropna(inplace=True)
corr_liarliar = corr_liarliar.join(ratings['num of ratings'])
corr_liarliar[corr_liarliar['num of ratings']>100].sort_values('Correlation',ascending=False).head()
###Output
_____no_output_____
###Markdown
___

Recommender Systems with Python

Welcome to the code notebook for creating Recommender Systems with Python. This notebook follows along with the presentation. Recommendation systems usually rely on larger data sets that need to be organized in a particular fashion. Because of this, we won't have a project to go along with this topic; instead we will have a more intensive walkthrough of creating a recommendation system with Python.

___

Methods Used

The two most common types of recommender systems are **Content-Based** and **Collaborative Filtering (CF)**.

* Collaborative filtering produces recommendations based on knowledge of users' attitudes toward items; that is, it uses the "wisdom of the crowd" to recommend items.
* Content-based recommender systems focus on the attributes of the items and give you recommendations based on the similarity between them.

Collaborative Filtering

In general, collaborative filtering (CF) is more commonly used than content-based systems because it usually gives better results and is relatively easy to understand (from an overall implementation perspective). The algorithm can do feature learning on its own, which means it can start to learn for itself what features to use.

CF can be divided into **Memory-Based Collaborative Filtering** and **Model-Based Collaborative Filtering**. In this tutorial, we will implement Model-Based CF using singular value decomposition (SVD) and Memory-Based CF by computing cosine similarity.

The Data

We will use the famous MovieLens dataset, one of the most common datasets used when implementing and testing recommender engines. It contains 100k movie ratings from 943 users on a selection of 1682 movies.

You can download the dataset [here](http://files.grouplens.org/datasets/movielens/ml-100k.zip) or just use the u.data file that is already included in this folder.

____

Getting Started

Let's import some libraries we will need:
###Code
import numpy as np
import pandas as pd
###Output
_____no_output_____
###Markdown
We can then read in the **u.data** file, which contains the full dataset. You can read a brief description of the dataset [here](http://files.grouplens.org/datasets/movielens/ml-100k-README.txt).Note how we specify the separator argument for a Tab separated file.
###Code
column_names = ['user_id', 'item_id', 'rating', 'timestamp']
df = pd.read_csv('u.data', sep='\t', names=column_names)
###Output
_____no_output_____
###Markdown
Get a sneak peek of the first few rows of the dataset. Next, let's count the number of unique users and movies.
###Code
df.head()
###Output
_____no_output_____
###Markdown
Note how we only have the item_id for each movie, not its title.
###Code
n_users = df.user_id.unique().shape[0]
n_items = df.item_id.unique().shape[0]
print('Number of users = ' + str(n_users) + ' | Number of movies = ' + str(n_items))
###Output
Number of users = 944 | Number of movies = 1682
###Markdown
You can use the [`scikit-learn`](http://scikit-learn.org/stable/) library to split the dataset into testing and training sets. [`model_selection.train_test_split`](http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.train_test_split.html) shuffles and splits the data into two datasets according to the percentage of test examples (``test_size``), which in this case is 0.25. (In scikit-learn versions before 0.20 this function lived in the since-removed ``sklearn.cross_validation`` module.)
###Code
from sklearn.model_selection import train_test_split
train_data, test_data = train_test_split(df, test_size=0.25)
###Output
_____no_output_____
###Markdown
Memory-Based Collaborative Filtering

Memory-Based Collaborative Filtering approaches can be divided into two main sections: **user-item filtering** and **item-item filtering**.

A *user-item filtering* approach takes a particular user, finds users that are similar to that user based on similarity of ratings, and recommends items that those similar users liked. In contrast, *item-item filtering* takes an item, finds users who liked that item, and finds other items that those users or similar users also liked. It takes items as input and outputs other items as recommendations.

* *Item-Item Collaborative Filtering*: "Users who liked this item also liked …"
* *User-Item Collaborative Filtering*: "Users who are similar to you also liked …"

In both cases, you create a user-item matrix built from the entire dataset. Since you have split the data into testing and training sets, you will need to create two ``[943 x 1682]`` matrices. The training matrix contains 75% of the ratings and the testing matrix contains 25% of the ratings.

Example of user-item matrix:

After you have built the user-item matrix you calculate the similarity and create a similarity matrix. The similarity values between items in *Item-Item Collaborative Filtering* are measured by observing all the users who have rated both items. For *User-Item Collaborative Filtering* the similarity values between users are measured by observing all the items that are rated by both users.

A distance metric commonly used in recommender systems is *cosine similarity*, where the ratings are seen as vectors in ``n``-dimensional space and the similarity is calculated based on the angle between these vectors.
Cosine similarity for users *k* and *a* can be calculated using the formula below, where you take the dot product of the user vector *$u_k$* and the user vector *$u_a$* and divide it by the product of the Euclidean lengths of the vectors.

To calculate the similarity between items *m* and *b* you use the formula:

<img class="aligncenter size-thumbnail img-responsive" src="https://latex.codecogs.com/gif.latex?s_u^{cos}(i_m,i_b)=\frac{i_m&space;\cdot&space;i_b&space;}{&space;\left&space;\|&space;i_m&space;\right&space;\|&space;\left&space;\|&space;i_b&space;\right&space;\|&space;}&space;=\frac{\sum&space;x_{a,m}x_{a,b}}{\sqrt{\sum&space;x_{a,m}^2\sum&space;x_{a,b}^2}}"/>

Your first step will be to create the user-item matrix. Since you have both testing and training data, you need to create two matrices.
###Code
#Create two user-item matrices, one for training and another for testing
train_data_matrix = np.zeros((n_users, n_items))
for line in train_data.itertuples():
    train_data_matrix[line[1]-1, line[2]-1] = line[3]
test_data_matrix = np.zeros((n_users, n_items))
for line in test_data.itertuples():
    test_data_matrix[line[1]-1, line[2]-1] = line[3]
###Output
_____no_output_____
###Markdown
You can use the [`pairwise_distances`](http://scikit-learn.org/stable/modules/generated/sklearn.metrics.pairwise.pairwise_distances.html) function from `sklearn` to calculate the cosine distance (one minus the cosine similarity). Note that the output will range from 0 to 1, since the ratings are all positive.
###Code
from sklearn.metrics.pairwise import pairwise_distances
user_similarity = pairwise_distances(train_data_matrix, metric='cosine')
item_similarity = pairwise_distances(train_data_matrix.T, metric='cosine')
###Output
_____no_output_____
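For intuition, the cosine metric used above can be sketched from scratch with NumPy (toy ratings, not the notebook's matrices; note that `pairwise_distances` returns a distance, i.e. one minus the similarity):

```python
import numpy as np

# Hedged sketch: the cosine metric in pairwise_distances is
# 1 - cosine similarity, computed here by hand on toy ratings.
def cosine_distance_matrix(M):
    norms = np.linalg.norm(M, axis=1, keepdims=True)
    norms[norms == 0] = 1.0          # guard against all-zero rows
    unit = M / norms                 # row-normalise each user vector
    return 1.0 - unit @ unit.T       # 1 - cosine similarity

toy = np.array([[5.0, 3.0, 0.0],
                [5.0, 3.0, 0.0],
                [0.0, 0.0, 4.0]])
D = cosine_distance_matrix(toy)
print(np.round(D, 3))   # identical users -> 0, disjoint users -> 1
```

Users 0 and 1 rated identically, so their distance is 0; user 2 shares no rated items with them, so the distance is 1.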
###Markdown
The next step is to make predictions. You have already created the similarity matrices `user_similarity` and `item_similarity`, so you can make a prediction by applying the following formula for user-based CF:

You can look at the similarity between users *k* and *a* as a weight that is multiplied by the ratings of a similar user *a* (corrected for the average rating of that user). You will need to normalize it so that the ratings stay between 1 and 5 and, as a final step, add back the average rating for the user you are trying to predict.

The idea here is that some users tend to always give high or low ratings to all movies: the relative difference in the ratings that these users give matters more than the absolute values. To give an example: suppose user *k* gives 4 stars to his favourite movies and 3 stars to all other good movies. Suppose now that another user *t* rates movies that he/she likes with 5 stars, and the movies he/she fell asleep over with 3 stars. These two users could have very similar taste but treat the rating system differently.

When making a prediction for item-based CF you don't need to correct for the user's average rating, since the query user itself is used to make the predictions.
###Code
def predict(ratings, similarity, type='user'):
    if type == 'user':
        mean_user_rating = ratings.mean(axis=1)
        # use np.newaxis so that mean_user_rating has the same shape as ratings
        ratings_diff = (ratings - mean_user_rating[:, np.newaxis])
        pred = mean_user_rating[:, np.newaxis] + similarity.dot(ratings_diff) / np.array([np.abs(similarity).sum(axis=1)]).T
    elif type == 'item':
        pred = ratings.dot(similarity) / np.array([np.abs(similarity).sum(axis=1)])
    return pred
item_prediction = predict(train_data_matrix, item_similarity, type='item')
user_prediction = predict(train_data_matrix, user_similarity, type='user')
###Output
_____no_output_____
###Markdown
Evaluation

There are many evaluation metrics, but one of the most popular metrics used to evaluate the accuracy of predicted ratings is *Root Mean Squared Error (RMSE)*. You can use the [`mean_squared_error`](http://scikit-learn.org/stable/modules/generated/sklearn.metrics.mean_squared_error.html) (MSE) function from `sklearn`, where the RMSE is just the square root of the MSE. To read more about different evaluation metrics you can take a look at [this article](http://research.microsoft.com/pubs/115396/EvaluationMetrics.TR.pdf).

Since you only want to consider predicted ratings that are in the test dataset, you filter out all other elements in the prediction matrix with `prediction[ground_truth.nonzero()]`.
###Code
from sklearn.metrics import mean_squared_error
from math import sqrt
def rmse(prediction, ground_truth):
    prediction = prediction[ground_truth.nonzero()].flatten()
    ground_truth = ground_truth[ground_truth.nonzero()].flatten()
    return sqrt(mean_squared_error(prediction, ground_truth))
print('User-based CF RMSE: ' + str(rmse(user_prediction, test_data_matrix)))
print('Item-based CF RMSE: ' + str(rmse(item_prediction, test_data_matrix)))
###Output
User-based CF RMSE: 3.1269170802946533
Item-based CF RMSE: 3.4566054361533025
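The nonzero-filtering step is easy to verify on a toy example (hypothetical numbers; a minimal re-implementation sketch of the `rmse` helper above):

```python
import numpy as np
from math import sqrt

# Toy illustration: RMSE is evaluated only on entries that are
# actually present in the ground-truth matrix, which is what
# prediction[ground_truth.nonzero()] achieves.
def rmse_masked(prediction, ground_truth):
    mask = ground_truth.nonzero()
    diff = prediction[mask] - ground_truth[mask]
    return sqrt(np.mean(diff ** 2))

pred  = np.array([[4.0, 2.0], [1.0, 5.0]])
truth = np.array([[5.0, 0.0], [0.0, 3.0]])   # zeros mean "not rated"
err = rmse_masked(pred, truth)
print(err)   # sqrt(((4-5)**2 + (5-3)**2) / 2)
```

Only the two rated cells contribute to the error; the zero cells are ignored entirely.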
###Markdown
Memory-based algorithms are easy to implement and produce reasonable prediction quality. The drawback of memory-based CF is that it doesn't scale to real-world scenarios and doesn't address the well-known cold-start problem, that is, when a new user or a new item enters the system. Model-based CF methods are scalable and can deal with higher sparsity levels than memory-based models, but they also suffer when new users or items without any ratings enter the system. I would like to thank Ethan Rosenthal for his [post](http://blog.ethanrosenthal.com/2015/11/02/intro-to-collaborative-filtering/) about Memory-Based Collaborative Filtering.

Model-based Collaborative Filtering

Model-based Collaborative Filtering is based on **matrix factorization (MF)**, which has received great exposure, mainly as an unsupervised learning method for latent variable decomposition and dimensionality reduction. Matrix factorization is widely used for recommender systems, where it deals better with scalability and sparsity than memory-based CF. The goal of MF is to learn the latent preferences of users and the latent attributes of items from the known ratings (that is, to learn features that describe the characteristics of the ratings), and then predict the unknown ratings through the dot product of the latent features of users and items.

When you have a very sparse matrix with a lot of dimensions, matrix factorization lets you restructure the user-item matrix into a low-rank structure, representing it as the product of two low-rank matrices whose rows contain the latent vectors. You fit this factorization to approximate the original matrix as closely as possible, and multiplying the low-rank matrices together fills in the entries missing from the original matrix.

Let's calculate the sparsity level of the MovieLens dataset:
###Code
sparsity=round(1.0-len(df)/float(n_users*n_items),3)
print('The sparsity level of MovieLens100K is ' + str(sparsity*100) + '%')
###Output
The sparsity level of MovieLens100K is 93.7%
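The figure above can be re-derived directly from the dataset sizes quoted in the introduction (943 users, 1682 movies, 100k ratings):

```python
# Quick arithmetic check of the sparsity figure: 100k ratings spread
# over a 943 x 1682 user-item grid leave roughly 93.7% of cells empty.
n_ratings, n_users_total, n_items_total = 100_000, 943, 1682
sparsity_check = 1.0 - n_ratings / (n_users_total * n_items_total)
print(f"{sparsity_check:.1%} of the user-item cells are empty")
```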
###Markdown
To give an example of the learned latent preferences of users and items: let's say for the MovieLens dataset you have the following information: _(user id, age, location, gender, movie id, director, actor, language, year, rating)_. By applying matrix factorization the model learns that the important user features are _age group (under 10, 10-18, 18-30, 30-90)_, _location_ and _gender_, and for movie features it learns that _decade_, _director_ and _actor_ are most important. Now if you look at the information you have stored, there is no such feature as the _decade_, but the model can learn it on its own. The important aspect is that the CF model only uses the data (user_id, movie_id, rating) to learn the latent features. If there is little data available, a model-based CF model will predict poorly, since it will be more difficult to learn the latent features.

Models that use both ratings and content features are called **Hybrid Recommender Systems**, where Collaborative Filtering and Content-based models are combined. Hybrid recommender systems usually show higher accuracy than Collaborative Filtering or Content-based models on their own: they are better able to address the cold-start problem, since if you don't have any ratings for a user or an item you can use the metadata from the user or item to make a prediction. Hybrid recommender systems will be covered in later tutorials.

SVD

A well-known matrix factorization method is **singular value decomposition (SVD)**. Collaborative Filtering can be formulated by approximating a matrix `X` using singular value decomposition.
The winning team at the Netflix Prize competition used SVD matrix factorization models to produce product recommendations; for more information I recommend the articles [Netflix Recommendations: Beyond the 5 stars](http://techblog.netflix.com/2012/04/netflix-recommendations-beyond-5-stars.html) and [Netflix Prize and SVD](http://buzzard.ups.edu/courses/2014spring/420projects/math420-UPS-spring-2014-gower-netflix-SVD.pdf).

The general equation can be expressed as `X = U x S x V^T`. Given an `m x n` matrix `X`:

* *`U`* is an *`(m x r)`* orthogonal matrix
* *`S`* is an *`(r x r)`* diagonal matrix with non-negative real numbers on the diagonal
* *`V^T`* is an *`(r x n)`* orthogonal matrix

Elements on the diagonal of `S` are known as the *singular values of `X`*.

Matrix *`X`* can be factorized into *`U`*, *`S`* and *`V`*. The *`U`* matrix represents the feature vectors corresponding to the users in the hidden feature space and the *`V`* matrix represents the feature vectors corresponding to the items in the hidden feature space.

Now you can make a prediction by taking the dot product of *`U`*, *`S`* and *`V^T`*.
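The factorisation can be seen concretely with NumPy's full SVD (a toy sketch with a made-up matrix, not the MovieLens data; the notebook itself uses the truncated `svds` routine):

```python
import numpy as np

# Illustrative sketch: factorise a small made-up matrix into
# U, S, V^T and confirm the product reconstructs it exactly.
X = np.array([[5.0, 3.0, 0.0, 1.0],
              [4.0, 0.0, 0.0, 1.0],
              [1.0, 1.0, 0.0, 5.0]])
U, s, Vt = np.linalg.svd(X, full_matrices=False)
X_hat = U @ np.diag(s) @ Vt          # X ~= U S V^T
print(np.allclose(X, X_hat))         # exact when all singular values are kept
```

Truncating `s` to the largest k values (as `svds` does below) gives the best rank-k approximation instead of an exact reconstruction.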
###Code
import scipy.sparse as sp
from scipy.sparse.linalg import svds
# get SVD components from the train matrix; choose k (the number of latent factors)
u, s, vt = svds(train_data_matrix, k = 20)
s_diag_matrix=np.diag(s)
X_pred = np.dot(np.dot(u, s_diag_matrix), vt)
print('User-based CF RMSE: ' + str(rmse(X_pred, test_data_matrix)))
###Output
User-based CF RMSE: 2.7178500181267085
Advance Retail Sales Clothing and Clothing Accessory Stores .ipynb | ###Markdown
Data Preprocessing

Splitting the data
###Code
# hold out the last 10% of the series as the test set; the rest is for training
length = 0.1
split_length =len(data) - int(len(data)*length)
train_data = data.iloc[:split_length]
test_data = data.iloc[split_length:]
len(train_data),len(test_data)
###Output
_____no_output_____
###Markdown
Scaling data using MinMaxScaler
###Code
#scaling data
scaler = MinMaxScaler()
scale_train = scaler.fit_transform(train_data)
scale_test = scaler.transform(test_data)
###Output
_____no_output_____
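What `MinMaxScaler` does can be sketched by hand (illustrative numbers; for the default feature range of (0, 1) the transform is x' = (x - min) / (max - min), with min/max taken from the training data only, hence `fit_transform` versus `transform` above):

```python
import numpy as np

# Minimal sketch of min-max scaling with train-set statistics.
train = np.array([[10.0], [20.0], [30.0]])
test  = np.array([[25.0], [40.0]])
lo, hi = train.min(axis=0), train.max(axis=0)   # fitted on train only
scale = lambda x: (x - lo) / (hi - lo)
print(scale(train).ravel(), scale(test).ravel())
# train maps into [0, 1]; test values can fall outside that range
```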
###Markdown
Preparing TimeseriesGenerator data for training and validation
###Code
def timeserieGenerator(length=12,batch_size=1):
    train_generator = TimeseriesGenerator(scale_train,scale_train,length = length,batch_size = batch_size)
    validation_generator = TimeseriesGenerator(scale_test,scale_test,length = length,batch_size = batch_size)
    return train_generator,validation_generator,length
length = int(input("Enter the length:"))
batch_size = int(input("Enter Batch Size:"))
train_generator,validation_generator,length = timeserieGenerator(length,batch_size)
###Output
Enter the length:12
Enter Batch Size:1
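The generator's output can be pictured with a plain-NumPy sketch (not the Keras API, just the (window, target) pairing it performs for batch_size=1):

```python
import numpy as np

# Hedged sketch of the samples TimeseriesGenerator(length=3) yields:
# each window is `length` consecutive points, and the target is the
# point that immediately follows the window.
def make_windows(series, length):
    X, y = [], []
    for i in range(len(series) - length):
        X.append(series[i:i + length])   # the past `length` steps
        y.append(series[i + length])     # the value to predict
    return np.array(X), np.array(y)

series = np.arange(10, dtype=float)
windows, targets = make_windows(series, length=3)
print(windows[0], targets[0])   # first window [0. 1. 2.] predicts 3.0
```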
###Markdown
Building Model
###Code
def building_and_fitting_model(model_type,length=12,n_features = 1):
    model1 = Sequential()
    model1.add(model_type(50,activation = 'relu',input_shape=(length,n_features)))
    model1.add(Dense(1))
    model1.compile(optimizer='adam',loss = 'mse')
    print(model1.summary())
    ES = EarlyStopping(monitor = 'val_loss',mode = 'min',patience=5)
    # fit_generator is deprecated; in newer TensorFlow, Model.fit accepts generators directly
    model1.fit_generator(train_generator,validation_data = validation_generator,epochs = 300,callbacks = [ES])
    print(str(model_type),":\n")
    df = pd.DataFrame(model1.history.history)
    df.plot()
    return model1
def forecast(to_be_forecasted,model):
    forecast = []
    first_eval_batch = scale_train[-length:]
    current_eval_batch = first_eval_batch.reshape((1,length,batch_size))
    for i in range(to_be_forecasted):
        prediction = model.predict(current_eval_batch)[0]
        forecast.append(prediction)
        # drop the oldest point and append the new prediction to the window
        current_eval_batch = np.append(current_eval_batch[:,1:,:],[[prediction]],axis=1)
    forecast = scaler.inverse_transform(forecast)
    return forecast
###Output
_____no_output_____
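The recursive loop inside `forecast()` can be sketched with a stand-in model (a moving average here, purely illustrative, instead of the trained network):

```python
# Hedged sketch of recursive multi-step forecasting: each prediction
# is appended to the window and fed back in as input, which is why
# errors compound the further out you forecast.
def recursive_forecast(history, steps, model, length=3):
    window = list(history)
    out = []
    for _ in range(steps):
        pred = model(window[-length:])   # model sees the last `length` points
        out.append(pred)
        window.append(pred)              # feed the prediction back in
    return out

moving_average = lambda w: sum(w) / len(w)   # stand-in for model.predict
print(recursive_forecast([1.0, 2.0, 3.0], 3, moving_average))
```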
###Markdown
LSTM
###Code
model_LSTM = building_and_fitting_model(LSTM,length = length , n_features = batch_size)
forecast_points = forecast(len(scale_test),model_LSTM)
test_data["LSTM"] = forecast_points
###Output
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
lstm (LSTM) (None, 50) 10400
_________________________________________________________________
dense (Dense) (None, 1) 51
=================================================================
Total params: 10,451
Trainable params: 10,451
Non-trainable params: 0
_________________________________________________________________
None
WARNING:tensorflow:From <ipython-input-9-da0b8aab6513>:11: Model.fit_generator (from tensorflow.python.keras.engine.training) is deprecated and will be removed in a future version.
Instructions for updating:
Please use Model.fit, which supports generators.
WARNING:tensorflow:sample_weight modes were coerced from
...
to
['...']
WARNING:tensorflow:sample_weight modes were coerced from
...
to
['...']
Train for 289 steps, validate for 21 steps
Epoch 1/300
289/289 [==============================] - 7s 23ms/step - loss: 0.0286 - val_loss: 0.0221
Epoch 2/300
289/289 [==============================] - 3s 11ms/step - loss: 0.0193 - val_loss: 0.0176
Epoch 3/300
289/289 [==============================] - 6s 21ms/step - loss: 0.0167 - val_loss: 0.0457
Epoch 4/300
289/289 [==============================] - 3s 12ms/step - loss: 0.0128 - val_loss: 0.0063
Epoch 5/300
289/289 [==============================] - 6s 20ms/step - loss: 0.0079 - val_loss: 0.0144
Epoch 6/300
289/289 [==============================] - 5s 17ms/step - loss: 0.0044 - val_loss: 0.0011
Epoch 7/300
289/289 [==============================] - 4s 12ms/step - loss: 0.0028 - val_loss: 0.0012
Epoch 8/300
289/289 [==============================] - 5s 16ms/step - loss: 0.0024 - val_loss: 0.0020
Epoch 9/300
289/289 [==============================] - 4s 13ms/step - loss: 0.0019 - val_loss: 0.0015
Epoch 10/300
289/289 [==============================] - 5s 19ms/step - loss: 0.0031 - val_loss: 9.9300e-04
Epoch 11/300
289/289 [==============================] - 4s 14ms/step - loss: 0.0017 - val_loss: 0.0010
Epoch 12/300
289/289 [==============================] - 6s 21ms/step - loss: 0.0016 - val_loss: 0.0032
Epoch 13/300
289/289 [==============================] - 6s 22ms/step - loss: 0.0017 - val_loss: 0.0012
Epoch 14/300
289/289 [==============================] - 5s 17ms/step - loss: 0.0015 - val_loss: 0.0010
Epoch 15/300
289/289 [==============================] - 6s 21ms/step - loss: 0.0014 - val_loss: 0.0011
<class 'tensorflow.python.keras.layers.recurrent_v2.LSTM'> :
###Markdown
SimpleRNN
###Code
len(scale_test)
model_SRNN = building_and_fitting_model(SimpleRNN,length = length , n_features = batch_size)
forecast_points = forecast(len(scale_test),model_SRNN)
test_data["SimpleRNN"] = forecast_points
###Output
Model: "sequential_1"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
simple_rnn (SimpleRNN) (None, 50) 2600
_________________________________________________________________
dense_1 (Dense) (None, 1) 51
=================================================================
Total params: 2,651
Trainable params: 2,651
Non-trainable params: 0
_________________________________________________________________
None
WARNING:tensorflow:sample_weight modes were coerced from
...
to
['...']
WARNING:tensorflow:sample_weight modes were coerced from
...
to
['...']
Train for 289 steps, validate for 21 steps
Epoch 1/300
289/289 [==============================] - 6s 20ms/step - loss: 0.0206 - val_loss: 0.0083
Epoch 2/300
289/289 [==============================] - 3s 12ms/step - loss: 0.0057 - val_loss: 0.0019
Epoch 3/300
289/289 [==============================] - 3s 10ms/step - loss: 9.8295e-04 - val_loss: 0.0016
Epoch 4/300
289/289 [==============================] - 3s 10ms/step - loss: 0.0016 - val_loss: 0.0013
Epoch 5/300
289/289 [==============================] - 4s 13ms/step - loss: 0.0012 - val_loss: 0.0025
Epoch 6/300
289/289 [==============================] - 5s 17ms/step - loss: 0.0010 - val_loss: 0.0028
Epoch 7/300
289/289 [==============================] - 5s 16ms/step - loss: 0.0014 - val_loss: 0.0041
Epoch 8/300
289/289 [==============================] - 5s 17ms/step - loss: 0.0013 - val_loss: 0.0023
Epoch 9/300
289/289 [==============================] - 4s 15ms/step - loss: 0.0022 - val_loss: 0.0015
<class 'tensorflow.python.keras.layers.recurrent.SimpleRNN'> :
###Markdown
GRU
###Code
model_GRU = building_and_fitting_model(GRU,length = length , n_features = batch_size)
forecast_points = forecast(len(scale_test),model_GRU)
test_data["GRU"] = forecast_points
test_data.plot(figsize=(12,8))
###Output
_____no_output_____
###Markdown
Evaluating the models using regression metrics
###Code
def max_error_value(true,predicted):
    return max_error(true,predicted)
def r2score(true,predicted):
    return r2_score(true,predicted)
def mean_squared_error_value(true,predicted):
    return mean_squared_error(true,predicted)
def evaluating_models():
    # Max error
    print("Max Error from LSTM:",max_error_value(test_data[['RSCCASN']],test_data[['LSTM']]))
    print("Max Error from SimpleRNN:",max_error_value(test_data[['RSCCASN']],test_data[['SimpleRNN']]))
    print("Max Error from GRU:",max_error_value(test_data[['RSCCASN']],test_data[['GRU']]))
    print("\n\n")
    # Mean squared error
    print("Mean Squared Error from LSTM: ",mean_squared_error_value(test_data[['RSCCASN']],test_data[['LSTM']]))
    print("Mean Squared Error from SimpleRNN: ",mean_squared_error_value(test_data[['RSCCASN']],test_data[['SimpleRNN']]))
    print("Mean Squared Error from GRU: ",mean_squared_error_value(test_data[['RSCCASN']],test_data[['GRU']]))
    print("\n\n")
    # r2_score: track the best-scoring model
    rscr = 0
    model = 'LSTM'
    # LSTM
    rscr = r2score(test_data[['RSCCASN']],test_data[['LSTM']])
    print("r2_score From LSTM:",rscr)
    # SimpleRNN
    temp = r2score(test_data[['RSCCASN']],test_data[['SimpleRNN']])
    print("r2_score From SimpleRNN:",temp)
    if temp>rscr:
        rscr = temp
        model = 'SimpleRNN'
    # GRU
    temp = r2score(test_data[['RSCCASN']],test_data[['GRU']])
    print("r2_score From GRU:",temp)
    if temp>rscr:
        rscr = temp
        model = 'GRU'
    print('\n\nBest Model Among All Is: ',model ,"With r2_score: ",rscr)
evaluating_models()
###Output
Max Error from LSTM: 6932.228821754456
Max Error from SimpleRNN: 2853.161606788639
Max Error from GRU: 3813.265411853794
Mean Squared Error from LSTM: 2436821.037504457
Mean Squared Error from SimpleRNN: 1203552.3144490453
Mean Squared Error from GRU: 1981421.7655668282
r2_score From LSTM: 0.8290642740266767
r2_score From SimpleRNN: 0.9155743957184854
r2_score From GRU: 0.8610091743530878
Best Model Among All Is: SimpleRNN With r2_score: 0.9155743957184854
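The r2_score that drove this choice can be sketched by hand (R-squared is 1 - SS_res / SS_tot, so 1.0 is a perfect fit and values near 0 mean no better than predicting the mean; toy numbers below):

```python
# Hand-rolled sketch of the coefficient of determination used above.
def r2(true, pred):
    mean = sum(true) / len(true)
    ss_res = sum((t - p) ** 2 for t, p in zip(true, pred))   # residual sum of squares
    ss_tot = sum((t - mean) ** 2 for t in true)              # total sum of squares
    return 1 - ss_res / ss_tot

print(r2([1.0, 2.0, 3.0], [1.1, 1.9, 3.2]))   # close predictions score near 1
```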
###Markdown
Based on the above, we will use SimpleRNN to forecast a year's worth of data.

Forecasting results with the trained SimpleRNN model

Note: the further out you forecast, the more noise is introduced, since each prediction is fed back in as an input.
###Code
scaled_data_for_forecasting = scaler.fit_transform(data)
train_data.tail()
period = int(input('Enter the number of years to be forecasted:'))
period *= 12
forecasting_result = forecast(period,model_SRNN)
forecating_index = pd.date_range(start='2017-02-01',periods=period,freq='MS')
forecating_index
forecast_dataframe = pd.DataFrame(data = forecasting_result,index = forecating_index,
columns = ['Forecast'])
# Forecasted dataframe
forecast_dataframe
###Output
_____no_output_____
###Markdown
Plotting in separate plots
###Code
train_data.plot()
forecast_dataframe.plot()
###Output
_____no_output_____
###Markdown
Plotting on the same axes
###Code
ax = train_data.plot(figsize=(12,8))
forecast_dataframe.plot(ax=ax)
plt.xlim('2015-01-01','2021-01-01')
###Output
_____no_output_____
Mycodes/chapter02_prerequisite/2.2_tensor.ipynb | ###Markdown
2.2 Data Operations
###Code
import torch
torch.manual_seed(0)
torch.cuda.manual_seed(0)
print(torch.__version__)
###Output
1.9.1+cpu
###Markdown
2.2.1 Creating a `Tensor`

Create an uninitialized 5x3 `Tensor`:
###Code
x = torch.empty(5, 3)
print(x)
###Output
tensor([[0., 0., 0.],
[0., 0., 0.],
[0., 0., 0.],
[0., 0., 0.],
[0., 0., 0.]])
###Markdown
Create a randomly initialized 5x3 `Tensor`:
###Code
x = torch.rand(5, 3)
print(x)
x = torch.randn(5, 3)
print(x)
torch.randint(1,100,(5,3))
torch.randn??
torch.rand??
torch.randint??
# randint(low=0, high, size)
###Output
_____no_output_____
###Markdown
Create a 5x3 long-typed `Tensor` of all zeros:
###Code
x = torch.zeros(5, 3, dtype=torch.long)
print(x)
###Output
tensor([[0, 0, 0],
[0, 0, 0],
[0, 0, 0],
[0, 0, 0],
[0, 0, 0]])
###Markdown
Create a `Tensor` directly from data:
###Code
x = torch.tensor([5.5, 3])
print(x)
###Output
tensor([5.5000, 3.0000])
###Markdown
You can also create a `Tensor` from an existing `Tensor`. By default these methods reuse some properties of the input, such as the data type, unless new values are specified.
###Code
x = x.new_ones(5, 3, dtype=torch.float64)  # the returned tensor has the same torch.dtype and torch.device by default
print(x)
x = torch.randn_like(x, dtype=torch.float)  # specify a new data type
print(x)
###Output
tensor([[1., 1., 1.],
[1., 1., 1.],
[1., 1., 1.],
[1., 1., 1.],
[1., 1., 1.]], dtype=torch.float64)
tensor([[ 0.2692, -0.0630, 0.0084],
[ 0.9664, 0.7486, 1.2709],
[ 0.2109, 1.5359, -2.1960],
[-0.4223, -0.1316, 0.1957],
[-1.0772, 0.4173, -0.1003]])
###Markdown
We can get the shape of a `Tensor` with `shape` or `size()`:
###Code
print(x.size())
print(x.shape)
###Output
torch.Size([5, 3])
torch.Size([5, 3])
###Markdown
> Note: the returned torch.Size is in fact a tuple, and supports all tuple operations.

2.2.2 Operations

Arithmetic operations

* **Addition, form 1**
###Code
y = torch.rand(5, 3)
print(x + y)
###Output
tensor([[ 0.3880, 0.6854, 0.0545],
[ 0.9858, 0.7628, 1.6695],
[ 1.0471, 1.5626, -1.2804],
[-0.1224, 0.5148, 0.7185],
[-1.0281, 1.3320, 0.6689]])
###Markdown
* **Addition, form 2**
###Code
print(torch.add(x, y))
result = torch.empty(5, 3)
result
torch.add(x, y)
result = torch.empty(5, 3)
torch.add(x, y, out=result)
print(result)
result = torch.empty(5, 3)
torch.add(x, y, out=result)
print(result)
torch.add??
###Output
_____no_output_____
###Markdown
* **Addition, form 3: in-place**
###Code
# adds x to y
y.add_(x)
print(y)
###Output
tensor([[ 1.3967, 1.0892, 0.4369],
[ 1.6995, 2.0453, 0.6539],
[-0.1553, 3.7016, -0.3599],
[ 0.7536, 0.0870, 1.2274],
[ 2.5046, -0.1913, 0.4760]])
###Markdown
> **Note: the in-place versions of PyTorch operations all have the suffix "_", e.g. `x.copy_(y), x.t_()`**

Indexing

We can also use NumPy-like indexing to access part of a `Tensor`. Note that **the indexed result shares memory with the original data: modifying one modifies the other.**
###Code
print(x)
y = x[0, :]
y += 1
print(y)
print(x[0, :]) # the source tensor was modified too
print(x)
###Output
tensor([[ 1.2692, 0.9370, 1.0084],
[ 0.9664, 0.7486, 1.2709],
[ 0.2109, 1.5359, -2.1960],
[-0.4223, -0.1316, 0.1957],
[-1.0772, 0.4173, -0.1003]])
###Markdown
Changing the shape

Use `view()` to change the shape of a `Tensor`:
###Code
y = x.view(15)
z = x.view(-1, 5) # the dimension given as -1 is inferred from the other dimensions
print(x.size(), y.size(), z.size())
###Output
torch.Size([5, 3]) torch.Size([15]) torch.Size([3, 5])
###Markdown
**Note that the new tensor returned by `view()` shares memory with the source tensor: changing one changes the other.**
###Code
print(x)
x += 1
print(x)
print(y) # also incremented by 1
###Output
tensor([[ 1.2692, 0.9370, 1.0084],
[ 0.9664, 0.7486, 1.2709],
[ 0.2109, 1.5359, -2.1960],
[-0.4223, -0.1316, 0.1957],
[-1.0772, 0.4173, -0.1003]])
tensor([[ 2.2692, 1.9370, 2.0084],
[ 1.9664, 1.7486, 2.2709],
[ 1.2109, 2.5359, -1.1960],
[ 0.5777, 0.8684, 1.1957],
[-0.0772, 1.4173, 0.8997]])
tensor([ 2.2692, 1.9370, 2.0084, 1.9664, 1.7486, 2.2709, 1.2109, 2.5359,
-1.1960, 0.5777, 0.8684, 1.1957, -0.0772, 1.4173, 0.8997])
###Markdown
If you don't want to share memory, it is recommended to first make a copy with `clone` and then use `view`.
###Code
print(x)
x_cp = x.clone().view(15)
x -= 1
print(x)
print(x_cp)
###Output
tensor([[ 2.2692, 1.9370, 2.0084],
[ 1.9664, 1.7486, 2.2709],
[ 1.2109, 2.5359, -1.1960],
[ 0.5777, 0.8684, 1.1957],
[-0.0772, 1.4173, 0.8997]])
tensor([[ 1.2692, 0.9370, 1.0084],
[ 0.9664, 0.7486, 1.2709],
[ 0.2109, 1.5359, -2.1960],
[-0.4223, -0.1316, 0.1957],
[-1.0772, 0.4173, -0.1003]])
tensor([ 2.2692, 1.9370, 2.0084, 1.9664, 1.7486, 2.2709, 1.2109, 2.5359,
-1.1960, 0.5777, 0.8684, 1.1957, -0.0772, 1.4173, 0.8997])
###Markdown
Another commonly used function is `item()`, which converts a scalar `Tensor` into a Python number:
###Code
x = torch.randn(1)
print(x)
print(x.item())
###Output
tensor([1.2897])
1.2897011041641235
###Markdown
2.2.3 Broadcasting
###Code
x = torch.arange(1, 3).view(1, 2)
print(x)
y = torch.arange(1, 4).view(3, 1)
print(y)
print(x + y)
###Output
tensor([[1, 2]])
tensor([[1],
[2],
[3]])
tensor([[2, 3],
[3, 4],
[4, 5]])
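Broadcasting follows one rule: align the two shapes from the trailing dimension, and a dimension of size 1 stretches to match. A pure-Python sketch of just the shape arithmetic (not PyTorch's implementation):

```python
def broadcast_shape(a, b):
    """Return the broadcast result shape of two shapes, or raise ValueError."""
    out = []
    # Walk the dimensions right-to-left, padding the shorter shape with 1s.
    for i in range(max(len(a), len(b))):
        x = a[-1 - i] if i < len(a) else 1
        y = b[-1 - i] if i < len(b) else 1
        if x != y and x != 1 and y != 1:
            raise ValueError(f"shapes {a} and {b} are not broadcastable")
        out.append(max(x, y))
    return tuple(reversed(out))

print(broadcast_shape((1, 2), (3, 1)))  # (3, 2), matching the example above
```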
###Markdown
2.2.4 Memory overhead of operations
###Code
x = torch.tensor([1, 2])
y = torch.tensor([3, 4])
id_before = id(y)
y += x
print(id(y) == id_before)
x = torch.tensor([1, 2])
y = torch.tensor([3, 4])
id_before = id(y)
y = y + x
print(id(y) == id_before)
x = torch.tensor([1, 2])
y = torch.tensor([3, 4])
id_before = id(y)
y[:] = y + x
print(id(y) == id_before)
x = torch.tensor([1, 2])
y = torch.tensor([3, 4])
id_before = id(y)
torch.add(x, y, out=y) # y += x, y.add_(x)
print(id(y) == id_before)
x = torch.tensor([1, 2])
y = torch.tensor([3, 4])
id_before = id(y)
# torch.add(x, y, out=y) # y += x, y.add_(x)
y.add_(x)
print(id(y) == id_before)
###Output
True
False
True
True
True
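The same in-place versus out-of-place distinction exists for plain Python lists: `+=` mutates the existing object, while `y = y + x` rebinds the name to a new one:

```python
x = [1, 2]
y = [3, 4]
before = id(y)
y += x                     # in-place: extends the same list object
print(id(y) == before)     # True
y = y + x                  # out-of-place: builds a brand-new list
print(id(y) == before)     # False
```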
###Markdown
2.2.5 Converting between `Tensor` and NumPy. **The `Tensor` and NumPy array produced by `numpy()` and `from_numpy()` share the same underlying memory; changing one changes the other!** `Tensor` to NumPy
###Code
a = torch.ones(5)
b = a.numpy()
print(a, b)
a += 1
print(a, b)
b += 1
print(a, b)
###Output
tensor([1., 1., 1., 1., 1.]) [1. 1. 1. 1. 1.]
tensor([2., 2., 2., 2., 2.]) [2. 2. 2. 2. 2.]
tensor([3., 3., 3., 3., 3.]) [3. 3. 3. 3. 3.]
###Markdown
NumPy array to `Tensor`
###Code
import numpy as np
a = np.ones(5)
b = torch.from_numpy(a)
print(a, b)
a += 1
print(a, b)
b += 1
print(a, b)
###Output
[1. 1. 1. 1. 1.] tensor([1., 1., 1., 1., 1.], dtype=torch.float64)
[2. 2. 2. 2. 2.] tensor([2., 2., 2., 2., 2.], dtype=torch.float64)
[3. 3. 3. 3. 3.] tensor([3., 3., 3., 3., 3.], dtype=torch.float64)
###Markdown
Converting a NumPy array with `torch.tensor()` always copies the data, so the returned `Tensor` no longer shares memory with the original array.
###Code
# torch.tensor() copies, so no memory is shared
c = torch.tensor(a)
a += 1
print(a, c)
###Output
[4. 4. 4. 4. 4.] tensor([3., 3., 3., 3., 3.], dtype=torch.float64)
###Markdown
2.2.6 `Tensor` on GPU
###Code
x
# the following only runs on a GPU build of PyTorch
if torch.cuda.is_available():
device = torch.device("cuda") # GPU
    y = torch.ones_like(x, device=device)  # create a Tensor directly on the GPU
    x = x.to(device)                       # equivalent to .to("cuda")
z = x + y
print(z)
    print(z.to("cpu", torch.double))       # to() can also change the dtype
###Output
_____no_output_____
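A device-agnostic variant of the pattern above falls back to CPU when CUDA is unavailable; a sketch, assuming PyTorch is installed:

```python
import torch

# Pick the GPU if one is available, otherwise fall back to the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
x = torch.ones(2, 3, device=device)  # created directly on the chosen device
y = torch.zeros_like(x)              # inherits x's device
z = x + y
print(z.device)
```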
module1-scrape-and-process-data/LS_DS_121_Scrape_and_process_data.ipynb | ###Markdown
_Lambda School Data Science_ Scrape and process dataObjectives- scrape and parse web pages- use list comprehensions- select rows and columns with pandasLinks- [Automate the Boring Stuff with Python, Chapter 11](https://automatetheboringstuff.com/chapter11/) - Requests - Beautiful Soup- [Python List Comprehensions: Explained Visually](https://treyhunner.com/2015/12/python-list-comprehensions-now-in-color/)- [Pandas Cheat Sheet](https://github.com/pandas-dev/pandas/blob/master/doc/cheatsheet/Pandas_Cheat_Sheet.pdf) - Subset Observations (Rows) - Subset Variables (Columns)- Python Data Science Handbook - [Chapter 3.1](https://jakevdp.github.io/PythonDataScienceHandbook/03.01-introducing-pandas-objects.html), Introducing Pandas Objects - [Chapter 3.2](https://jakevdp.github.io/PythonDataScienceHandbook/03.02-data-indexing-and-selection.html), Data Indexing and Selection Scrape the titles of PyCon 2019 talks
###Code
url = 'https://us.pycon.org/2019/schedule/talks/list/'
import bs4
import requests
result = requests.get(url)
# Making sure it went through correctly
result
soup = bs4.BeautifulSoup(result.text)
first = soup.select('h2')[0]
first
# Making it readable by removing HTML tags
first.text.strip()
titles = [tag.text.strip()
for tag in soup.select('h2')]
titles
###Output
_____no_output_____
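Beautiful Soup is the convenient choice here, but the same h2 extraction can be sketched with only the standard library's `html.parser`, which is handy when bs4 isn't installed (the HTML snippet below is a made-up stand-in for the schedule page):

```python
from html.parser import HTMLParser

class TitleParser(HTMLParser):
    """Collect the text of every <h2> tag."""
    def __init__(self):
        super().__init__()
        self.in_h2 = False
        self.titles = []
    def handle_starttag(self, tag, attrs):
        if tag == 'h2':
            self.in_h2 = True
            self.titles.append('')
    def handle_endtag(self, tag):
        if tag == 'h2':
            self.in_h2 = False
    def handle_data(self, data):
        if self.in_h2:
            self.titles[-1] += data

parser = TitleParser()
parser.feed('<h2> Talk One </h2><p>blurb</p><h2>Talk Two</h2>')
titles = [t.strip() for t in parser.titles]
print(titles)  # ['Talk One', 'Talk Two']
```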
###Markdown
5 ways to look at long titlesLet's define a long title as greater than 80 characters 1. For Loop
###Code
long_titles = []
for title in titles:
if len(title) > 80:
long_titles.append(title)
long_titles
###Output
_____no_output_____
###Markdown
2. List Comprehension
###Code
[title for title in titles
if len(title) > 80]
###Output
_____no_output_____
###Markdown
3. Filter with named function
###Code
def long(title):
return len(title) > 80
long("Short and meaningless string")
long('Supercalifragilisticexpealidociouseventhoughthesoundofitissomethingquiteatrocious')
list(filter(long, titles))
###Output
_____no_output_____
###Markdown
4. Filter with anonymous function
###Code
list(filter(lambda t: len(t) > 80, titles))
###Output
_____no_output_____
###Markdown
5. Pandaspandas documentation: [Working with Text Data](https://pandas.pydata.org/pandas-docs/stable/text.html)
###Code
import pandas as pd
pd.options.display.max_colwidth = 200
df = pd.DataFrame({'title': titles})
df.shape
df[ df['title'].str.len() > 80 ]
condition = df['title'].str.len() > 80
df[condition]
###Output
_____no_output_____
###Markdown
Make new dataframe columnspandas documentation: [apply](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.apply.html) title length
###Code
df['title length'] = df['title'].apply(len)
df.shape
df.head()
df[ df['title length'] > 80 ]
df.loc[ df['title length'] > 80, 'title']
###Output
_____no_output_____
###Markdown
long title
###Code
df['long title'] = df['title length'] > 80
df.shape
df.head()
df[ df['long title']==True]
df[df['long title']]
###Output
_____no_output_____
###Markdown
first letter
###Code
df['first letter'] = df['title'].str[0]
df[ df['first letter']=='T' ]
###Output
_____no_output_____
###Markdown
word countUsing [`textstat`](https://github.com/shivam5992/textstat)
###Code
!pip install textstat
import textstat
df['title word count'] = df['title'].apply(textstat.lexicon_count)
df.head()
df[ df['title word count'] <= 3 ]
###Output
_____no_output_____
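`textstat.lexicon_count` counts words in a string; as a rough stand-in, splitting on whitespace gives similar numbers (it won't match textstat exactly around punctuation):

```python
def word_count(text):
    # Rough stand-in for textstat.lexicon_count: whitespace-separated tokens
    return len(text.split())

print(word_count("Scrape and process data"))  # 4
```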
###Markdown
Rename column`title length` --> `title character count`pandas documentation: [rename](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.rename.html)
###Code
df = df.rename(columns={'title length': 'title character count'})
df.head()
###Output
_____no_output_____
###Markdown
Analyze the dataframe Describepandas documentation: [describe](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.describe.html)
###Code
df.describe()
df.describe(include='all')
df.describe(exclude='number')
###Output
_____no_output_____
###Markdown
Sort valuespandas documentation: [sort_values](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.sort_values.html) Five shortest titles, by character count
###Code
df.sort_values(by='title character count').head(5)
###Output
_____no_output_____
###Markdown
Titles sorted reverse alphabetically
###Code
df.sort_values(by='first letter', ascending=False).head()
###Output
_____no_output_____
###Markdown
Get value countspandas documentation: [value_counts](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.value_counts.html) Frequency counts of first letters
###Code
df['first letter'].value_counts()
###Output
_____no_output_____
###Markdown
Percentage of talks with long titles
###Code
df['long title'].value_counts(normalize=True)
###Output
_____no_output_____
###Markdown
Plotpandas documentation: [Visualization](https://pandas.pydata.org/pandas-docs/stable/visualization.html) Top 5 most frequent first letters
###Code
%matplotlib inline
(df['first letter']
.value_counts()
.head(5)
.plot
.barh(color='grey',
title='PyCon 2019 Talks: Top 5 Most Frequent First Letters'));
###Output
_____no_output_____
###Markdown
Histogram of title lengths, in characters
###Code
title = 'Distribution of title length, in characters'
df['title character count'].plot.hist(title=title);
###Output
_____no_output_____
###Markdown
Assignment**Scrape** the talk descriptions. Hint: `soup.select('.presentation-description')`**Make** new columns in the dataframe:- description- description character count- description word count**Describe** all the dataframe's columns. What's the average description word count? The minimum? The maximum?**Answer** the question: Which descriptions could fit in a tweet? Stretch Challenge**Make** another new column in the dataframe:- description grade level (you can use [this `textstat` function](https://github.com/shivam5992/textstat#the-flesch-kincaid-grade-level) to get the Flesch-Kincaid grade level)**Answer** the question: What's the distribution of grade levels? Plot a histogram.**Be aware** that [Textstat has issues when sentences aren't separated by spaces](https://github.com/shivam5992/textstat/issues/77#issuecomment-453734048). (A Lambda School Data Science student helped identify this issue, and emailed with the developer.) Also, [BeautifulSoup doesn't separate paragraph tags with spaces](https://bugs.launchpad.net/beautifulsoup/+bug/1768330).So, you may get some inaccurate or surprising grade level estimates here. Don't worry, that's ok — but optionally, can you do anything to try improving the grade level estimates?
###Code
### ASSIGNMENT
# Scraping talk descriptions
descriptions = [tag.text.strip()
for tag in soup.select('.presentation-description')]
descriptions
df.head()
# Adding new column
df['description'] = descriptions
df.head()
# Add description character count
# Add description word count
# Format like this : df['title'].apply(len)
df['description_chars'] = df['description'].apply(len)
df['description_words'] = df['description'].apply(textstat.lexicon_count)
df.head()
###Output
_____no_output_____
###Markdown
Describe all the dataframe's columns. What's the average description word count? The minimum? The maximum?Answer the question: Which descriptions could fit in a tweet?
###Code
# Average description word count
count_sum = []
for count in df["description_words"]:
count_sum.append(count)
average_count = sum(count_sum)/len(count_sum)
average_count
# Minimum description word count
minimum_word_count = min(count_sum)
minimum_word_count
# Maximum description word count
maximum_word_count = max(count_sum)
maximum_word_count
# Descriptions that could fit in a tweet
df[ df['description_chars'] <= 280 ]
# Descriptions that could fit in a tweet prior to 2018
# Probably would be important if we were, say, trying to determine which descriptions were shared on the site's twitter account over the past five years
df[ df['description_chars'] <= 140 ]
###Output
_____no_output_____
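For the stretch challenge, `textstat.flesch_kincaid_grade` implements the Flesch-Kincaid formula, 0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59. A stdlib sketch with a naive vowel-group syllable heuristic (so its estimates will differ from textstat's):

```python
import re

def fk_grade(text):
    # Flesch-Kincaid grade: 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59
    sentences = max(1, len(re.findall(r'[.!?]+', text)))
    words = re.findall(r"[A-Za-z']+", text)
    if not words:
        return 0.0
    def syllables(word):
        # crude heuristic: one syllable per vowel group, minimum 1
        return max(1, len(re.findall(r'[aeiouy]+', word.lower())))
    total_syllables = sum(syllables(w) for w in words)
    return (0.39 * (len(words) / sentences)
            + 11.8 * (total_syllables / len(words)) - 15.59)

print(round(fk_grade("The cat sat on the mat."), 2))  # -1.45
```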
###Markdown
_Lambda School Data Science_ Scrape and process dataObjectives- scrape and parse web pages- use list comprehensions- select rows and columns with pandasLinks- [Automate the Boring Stuff with Python, Chapter 11](https://automatetheboringstuff.com/chapter11/) - Requests - Beautiful Soup- [Python List Comprehensions: Explained Visually](https://treyhunner.com/2015/12/python-list-comprehensions-now-in-color/)- [Pandas Cheat Sheet](https://github.com/pandas-dev/pandas/blob/master/doc/cheatsheet/Pandas_Cheat_Sheet.pdf) - Subset Observations (Rows) - Subset Variables (Columns)- Python Data Science Handbook - [Chapter 3.1](https://jakevdp.github.io/PythonDataScienceHandbook/03.01-introducing-pandas-objects.html), Introducing Pandas Objects - [Chapter 3.2](https://jakevdp.github.io/PythonDataScienceHandbook/03.02-data-indexing-and-selection.html), Data Indexing and Selection Scrape the titles of PyCon 2019 talks
###Code
url = 'https://us.pycon.org/2019/schedule/talks/list/'
import bs4
import requests
result = requests.get(url)
result
type(result)
result.text
type(result.text)
soup = bs4.BeautifulSoup(result.text)
soup
type(soup)
soup.select('h2')
type(soup.select('h2'))
len(soup.select('h2'))
first = soup.select('h2')[0]
first
type(first)
first.text
type(first.text)
first.text.strip()
last = soup.select('h2')[-1]
last.text.strip()
# This ...
titles = []
for tag in soup.select('h2'):
title = tag.text.strip()
titles.append(title)
# ... is the same as this:
titles = [tag.text.strip()
for tag in soup.select('h2')]
type(titles), len(titles)
titles[0], titles[-1]
###Output
_____no_output_____
###Markdown
5 ways to look at long titlesLet's define a long title as greater than 80 characters 1. For Loop
###Code
long_titles = []
for title in titles:
if len(title) > 80:
long_titles.append(title)
long_titles
###Output
_____no_output_____
###Markdown
2. List Comprehension
###Code
[title for title in titles
if len(title) > 80]
###Output
_____no_output_____
###Markdown
3. Filter with named function
###Code
def long(title):
return len(title) > 80
long('Python is good!')
long('Thinking Inside the Box: How Python Helped Us Adapt to An Existing Data Ingestion Pipeline')
list(filter(long, titles))
###Output
_____no_output_____
###Markdown
4. Filter with anonymous function
###Code
list(filter(lambda t: len(t) > 80, titles))
###Output
_____no_output_____
###Markdown
5. Pandaspandas documentation: [Working with Text Data](https://pandas.pydata.org/pandas-docs/stable/text.html)
###Code
import pandas as pd
pd.options.display.max_colwidth = 200
df = pd.DataFrame({'title': titles})
df.shape
df[ df['title'].str.len() > 80 ]
condition = df['title'].str.len() > 80
df[condition]
###Output
_____no_output_____
###Markdown
Make new dataframe columnspandas documentation: [apply](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.apply.html) title length
###Code
df['title length'] = df['title'].apply(len)
df.shape
df.head()
df[ df['title length'] > 80 ]
df.loc[ df['title length'] > 80, 'title']
###Output
_____no_output_____
###Markdown
long title
###Code
df['long title'] = df['title length'] > 80
df.shape
df.head()
df[ df['long title']==True]
df[df['long title']]
###Output
_____no_output_____
###Markdown
first letter
###Code
# 'Python is great!'[0]
df['first letter'] = df['title'].str[0]
df[ df['first letter']=='P' ]
df[ df['title'].str.startswith('P') ]
###Output
_____no_output_____
###Markdown
word countUsing [`textstat`](https://github.com/shivam5992/textstat)
###Code
!pip install textstat
import textstat
df['title word count'] = df['title'].apply(textstat.lexicon_count)
df.shape
df.head()
df[ df['title word count'] <= 3 ]
###Output
_____no_output_____
###Markdown
Rename column`title length` --> `title character count`pandas documentation: [rename](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.rename.html)
###Code
df = df.rename(columns={'title length': 'title character count'})
###Output
_____no_output_____
###Markdown
Analyze the dataframe Describepandas documentation: [describe](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.describe.html)
###Code
df.describe()
df.describe(include='all')
df.describe(exclude='number')
###Output
_____no_output_____
###Markdown
Sort valuespandas documentation: [sort_values](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.sort_values.html) Five shortest titles, by character count
###Code
df.sort_values(by='title character count').head(5)
###Output
_____no_output_____
###Markdown
Titles sorted reverse alphabetically
###Code
df.sort_values(by='first letter', ascending=False).head()
###Output
_____no_output_____
###Markdown
Get value countspandas documentation: [value_counts](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.value_counts.html) Frequency counts of first letters
###Code
df['first letter'].value_counts()
###Output
_____no_output_____
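`value_counts()` is essentially a frequency table; `collections.Counter` from the standard library does the same job outside pandas (the titles below are made-up examples):

```python
from collections import Counter

titles = ["Python rocks", "Pandas tips", "Async io"]
first_letters = Counter(t[0] for t in titles)
# most_common mirrors value_counts' descending order
top = first_letters.most_common(2)
print(top)  # [('P', 2), ('A', 1)]
```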
###Markdown
Percentage of talks with long titles
###Code
df['long title'].value_counts(normalize=True)
###Output
_____no_output_____
###Markdown
Plotpandas documentation: [Visualization](https://pandas.pydata.org/pandas-docs/stable/visualization.html) Top 5 most frequent first letters
###Code
%matplotlib inline
(df['first letter']
.value_counts()
.head(5)
.plot
.barh(color='grey',
title='Top 5 most frequent first letters, PyCon 2019 talks'));
###Output
_____no_output_____
###Markdown
Histogram of title lengths, in characters
###Code
title = 'Distribution of title length, in characters'
df['title character count'].plot.hist(title=title);
###Output
_____no_output_____
###Markdown
Assignment**Scrape** the talk descriptions. Hint: `soup.select('.presentation-description')`**Make** new columns in the dataframe:- description- description character count- description word count**Describe** all the dataframe's columns. What's the average description word count? The minimum? The maximum?**Answer** the question: Which descriptions could fit in a tweet? Stretch Challenge**Make** another new column in the dataframe:- description grade level (you can use [this `textstat` function](https://github.com/shivam5992/textstat#the-flesch-kincaid-grade-level) to get the Flesch-Kincaid grade level)**Answer** the question: What's the distribution of grade levels? Plot a histogram.**Be aware** that [Textstat has issues when sentences aren't separated by spaces](https://github.com/shivam5992/textstat/issues/77#issuecomment-453734048). (A Lambda School Data Science student helped identify this issue, and emailed with the developer.) Also, [BeautifulSoup doesn't separate paragraph tags with spaces](https://bugs.launchpad.net/beautifulsoup/+bug/1768330).So, you may get some inaccurate or surprising grade level estimates here. Don't worry, that's ok — but optionally, can you do anything to try improving the grade level estimates?
###Code
descrip = [tag.text.strip() for tag in soup.select('.presentation-description')]
#print(descrip)
df['description'] = descrip
df['description char length'] = [len(x) for x in descrip]
df['description word count'] = df['description'].apply(textstat.lexicon_count)
df.describe()
###Output
_____no_output_____
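The `describe()` numbers can be cross-checked with the standard library's `statistics` module; a sketch over a made-up sample of word counts:

```python
import statistics

word_counts = [20, 130, 421, 97]  # hypothetical sample of description word counts
print(statistics.mean(word_counts), min(word_counts), max(word_counts))
```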
###Markdown
Description word count: max 421, min 20, mean 130.82. Description character count: max 2827, min 121, mean 813.07.
###Code
tweetable_descriptions = df.loc[ df['description char length'] <= 280, 'title']
print('Tweetable descriptions are')
print(df.loc[ df['description char length'] <= 280, 'title'])
print(df.loc[ df['description char length'] <= 280,'description'])
df['grade level'] = df['description'].apply(textstat.flesch_kincaid_grade)
print('These are the Flesch-Kincaid grade levels of the descriptions.')
print(df['grade level'].describe())
(df['grade level'].plot.hist(title='Flesch-Kincaid grade level histogram'));
###Output
_____no_output_____
###Markdown
_Lambda School Data Science_ Scrape and process dataObjectives- scrape and parse web pages- use list comprehensions- select rows and columns with pandasLinks- [Automate the Boring Stuff with Python, Chapter 11](https://automatetheboringstuff.com/chapter11/) - Requests - Beautiful Soup- [Python List Comprehensions: Explained Visually](https://treyhunner.com/2015/12/python-list-comprehensions-now-in-color/)- [Pandas Cheat Sheet](https://github.com/pandas-dev/pandas/blob/master/doc/cheatsheet/Pandas_Cheat_Sheet.pdf) - Subset Observations (Rows) - Subset Variables (Columns)- Python Data Science Handbook - [Chapter 3.1](https://jakevdp.github.io/PythonDataScienceHandbook/03.01-introducing-pandas-objects.html), Introducing Pandas Objects - [Chapter 3.2](https://jakevdp.github.io/PythonDataScienceHandbook/03.02-data-indexing-and-selection.html), Data Indexing and Selection Scrape the titles of PyCon 2018 talks
###Code
url = 'https://us.pycon.org/2018/schedule/talks/list/'
###Output
_____no_output_____
###Markdown
5 ways to look at long titlesLet's define a long title as greater than 80 characters 1. For Loop
###Code
###Output
_____no_output_____
###Markdown
2. List Comprehension
###Code
###Output
_____no_output_____
###Markdown
3. Filter with named function
###Code
###Output
_____no_output_____
###Markdown
4. Filter with anonymous function
###Code
###Output
_____no_output_____
###Markdown
5. Pandaspandas documentation: [Working with Text Data](https://pandas.pydata.org/pandas-docs/stable/text.html)
###Code
###Output
_____no_output_____
###Markdown
Make new dataframe columnspandas documentation: [apply](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.apply.html)
###Code
###Output
_____no_output_____
###Markdown
title length
###Code
###Output
_____no_output_____
###Markdown
long title
###Code
###Output
_____no_output_____
###Markdown
first letter
###Code
###Output
_____no_output_____
###Markdown
word countUsing [`textstat`](https://github.com/shivam5992/textstat)
###Code
#!pip install textstat
###Output
_____no_output_____
###Markdown
Rename column`title length` --> `title character count`pandas documentation: [rename](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.rename.html)
###Code
###Output
_____no_output_____
###Markdown
Analyze the dataframe Describepandas documentation: [describe](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.describe.html)
###Code
###Output
_____no_output_____
###Markdown
Sort valuespandas documentation: [sort_values](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.sort_values.html) Five shortest titles, by character count
###Code
###Output
_____no_output_____
###Markdown
Titles sorted reverse alphabetically
###Code
###Output
_____no_output_____
###Markdown
Get value countspandas documentation: [value_counts](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.value_counts.html) Frequency counts of first letters
###Code
###Output
_____no_output_____
###Markdown
Percentage of talks with long titles
###Code
###Output
_____no_output_____
###Markdown
Plotpandas documentation: [Visualization](https://pandas.pydata.org/pandas-docs/stable/visualization.html) Top 5 most frequent first letters
###Code
###Output
_____no_output_____
###Markdown
Histogram of title lengths, in characters
###Code
###Output
_____no_output_____
###Markdown
Assignment**Scrape** the talk descriptions. Hint: `soup.select('.presentation-description')`**Make** new columns in the dataframe:- description- description character count- description word count- description grade level (use [this `textstat` function](https://github.com/shivam5992/textstat#the-flesch-kincaid-grade-level) to get the Flesch-Kincaid grade level)**Describe** all the dataframe's columns. What's the average description word count? The minimum? The maximum?**Answer** these questions:- Which descriptions could fit in a tweet?- What's the distribution of grade levels? Plot a histogram.
###Code
import bs4
import requests
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
result = requests.get(url)
soup = bs4.BeautifulSoup(result.text)
descriptions = [tag.text.strip()
for tag in soup.select('.presentation-description')]
print (len(descriptions))
print (descriptions)
df = pd.DataFrame({'description': descriptions})
df['char count'] = df.description.apply(len)
df.head()
import textstat
df['descr. word count'] = df['description'].apply(textstat.lexicon_count)
df.head()
df['grade level'] = df['description'].apply(textstat.flesch_kincaid_grade)
df.head()
df.describe()
df.describe(exclude=np.number)
df['tweetable'] = df['char count']<=280
df[df['tweetable'] == True]
plt.hist(df['grade level'])
plt.title('Histogram of Description Grade Levels')
plt.show();
###Output
_____no_output_____
###Markdown
_Lambda School Data Science_ Scrape and process dataObjectives- scrape and parse web pages- use list comprehensions- select rows and columns with pandasLinks- [Automate the Boring Stuff with Python, Chapter 11](https://automatetheboringstuff.com/chapter11/) - Requests - Beautiful Soup- [Python List Comprehensions: Explained Visually](https://treyhunner.com/2015/12/python-list-comprehensions-now-in-color/)- [Pandas Cheat Sheet](https://github.com/pandas-dev/pandas/blob/master/doc/cheatsheet/Pandas_Cheat_Sheet.pdf) - Subset Observations (Rows) - Subset Variables (Columns)- Python Data Science Handbook - [Chapter 3.1](https://jakevdp.github.io/PythonDataScienceHandbook/03.01-introducing-pandas-objects.html), Introducing Pandas Objects - [Chapter 3.2](https://jakevdp.github.io/PythonDataScienceHandbook/03.02-data-indexing-and-selection.html), Data Indexing and Selection Scrape the titles of PyCon 2019 talks
###Code
url = 'https://us.pycon.org/2019/schedule/talks/list/'
import bs4
import requests
result = requests.get(url)
result
type(result)
result.text
type(result.text)
soup = bs4.BeautifulSoup(result.text)
soup
type(soup)
soup.select('h2')
type(soup.select('h2'))
len(soup.select('h2'))
first = soup.select('h2')[0]
first
type(first)
first.text
type(first.text)
first.text.strip()
last = soup.select('h2')[-1]
last.text.strip()
#This...
titles = []
for tag in soup.select('h2'):
title = tag.text.strip()
titles.append(title)
# ... is the same as this:
titles = [tag.text.strip()
for tag in soup.select('h2')]
type(titles),len(titles)
titles[0],titles[-1]
###Output
_____no_output_____
###Markdown
5 ways to look at long titlesLet's define a long title as greater than 80 characters 1. For Loop
###Code
long_titles = []
for title in titles:
if len(title) > 80:
#print(title)
long_titles.append(title)
long_titles
###Output
_____no_output_____
###Markdown
2. List Comprehension
###Code
[title for title in titles if len(title) > 80]
###Output
_____no_output_____
###Markdown
3. Filter with named function
###Code
def long(title):
return len(title)>80
long('Python is good!')
def long(title):
return len(title)>80
long('Thinking Inside the Box: How Python Helped Us Adapt to An Existing Data Ingestion Pipeline')
list(filter(long, titles))
###Output
_____no_output_____
###Markdown
4. Filter with anonymous function
###Code
#rarely used
filter(lambda t: len(t)> 80,titles)
#rarely used
list(filter(lambda t: len(t)> 80,titles))
###Output
_____no_output_____
###Markdown
5. Pandaspandas documentation: [Working with Text Data](https://pandas.pydata.org/pandas-docs/stable/text.html)
###Code
import pandas as pd
pd.options.display.max_colwidth = 200
df = pd.DataFrame({'title':titles})
df.shape
df[df['title'].str.len()>80]
df['title']
df['title'].str.len()
df['title'].str.len()>80
condition = df['title'].str.len()>80
df[condition]
###Output
_____no_output_____
###Markdown
Make new dataframe columnspandas documentation: [apply](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.apply.html) title length
###Code
df['title length'] = df['title'].apply(len)
df.shape
df.head()
df[df['title length'] > 80]
df.loc[df['title length']> 80, 'title length']
df.loc[df['title length']> 80, 'title']
###Output
_____no_output_____
###Markdown
long title
###Code
df['long title']=df['title length']>80
df.shape
df.head()
df[df['long title']==True]
df[df['long title']]
df[df['long title']==False]
df[df['long title']!=True]
df[~df['long title']]
###Output
_____no_output_____
###Markdown
first letter
###Code
'Python is great!'[-1]
'Python is great!'[0]
df['title'].str[0]
df['first letter'] = df['title'].str[0]
df[df['first letter']=='P']
'Python is great!'.startswith('P')
'Hello world!'.startswith('P')
df[df['title'].str.startswith('P')]
df[df['title'].str.contains('neural')]
df[df['title'].str.contains('Neural')]
###Output
_____no_output_____
###Markdown
word countUsing [`textstat`](https://github.com/shivam5992/textstat)
###Code
!pip install textstat
import textstat
df['title'].apply(textstat.lexicon_count)
#new column
df['title word count'] = df['title'].apply(textstat.lexicon_count)
df.shape
df.head()
df[df['title word count']<= 3]
###Output
_____no_output_____
###Markdown
Rename column`title length` --> `title character count`pandas documentation: [rename](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.rename.html)
###Code
df.head()
df = df.rename(columns={'title length': 'title character count'})
df.head()
###Output
_____no_output_____
###Markdown
Analyze the dataframe Describepandas documentation: [describe](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.describe.html)
###Code
df.describe()
df.describe(include='all')
df.describe(exclude='number')
###Output
_____no_output_____
###Markdown
Sort valuespandas documentation: [sort_values](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.sort_values.html) Five shortest titles, by character count
###Code
df.describe()
df.sort_values(by='title character count').head(5)
df.sort_values(by='title character count').head(5)['title']
###Output
_____no_output_____
###Markdown
Titles sorted reverse alphabetically
###Code
#REVERSE, reverse, reverse alphabetically
df.sort_values(by='first letter',ascending=False)
###Output
_____no_output_____
###Markdown
Get value countspandas documentation: [value_counts](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.value_counts.html) Frequency counts of first letters
###Code
df['first letter'].value_counts()
###Output
_____no_output_____
###Markdown
Percentage of talks with long titles
###Code
df['long title'].value_counts()
df['long title'].value_counts() / 95
df['long title'].value_counts() / len(df)
df['long title'].value_counts(normalize=True)
###Output
_____no_output_____
###Markdown
Plotpandas documentation: [Visualization](https://pandas.pydata.org/pandas-docs/stable/visualization.html) Top 5 most frequent first letters
###Code
df['first letter']
df['first letter'].value_counts()
df['first letter'].value_counts().head(5)
%matplotlib inline
(df['first letter']
.value_counts()
.head(5)
.plot.barh());
#the ; suppresses the line of matplotlib data
%matplotlib inline
(df['first letter']
.value_counts()
.head(5)
.plot.barh())
#without the ; the matplotlib axes object is displayed above the plot
%matplotlib inline
(df['first letter']
.value_counts()
.head(5)
.plot.barh(color='grey',
               title='Top 5 Most Frequent First Letters, PyCon 2019 Talks'));
###Output
_____no_output_____
###Markdown
Histogram of title lengths, in characters
###Code
title = 'Distribution of Title Length, In Characters'
df['title character count'].plot.hist(title=title);
###Output
_____no_output_____
###Markdown
Assignment**Scrape** the talk descriptions. Hint: `soup.select('.presentation-description')`**Make** new columns in the dataframe:- description- description character count- description word count**Describe** all the dataframe's columns. What's the average description word count? The minimum? The maximum?**Answer** the question: Which descriptions could fit in a tweet?**04.02.19 - Stretch Challenge - SCROLL DOWN**
###Code
url = 'https://us.pycon.org/2019/schedule/talks/list/'
import bs4
import requests
result = requests.get(url)
soup = bs4.BeautifulSoup(result.text)
soup
type(soup)
soup.select('div.presentation-description')
first = soup.select('div.presentation-description')[0]
first
first.text.strip()
df.describe(include='all')
#df.head()
df = df.rename(columns={'title':'description'})
df = df.rename(columns={'title character count':'description character count'})
df = df.rename(columns={'title word count':'description word count'})
df.head()
df.describe(include='all')
###Output
_____no_output_____
###Markdown
Average description word count is 8 words (rounding up from 7.978). Minimum description word count is 2 words. Maximum description word count is 19 words. All descriptions could fit in a tweet since they are all less than or equal to 140 characters. Will represent in code form below.
###Code
df[df['description character count'] <= 140]
###Output
_____no_output_____
###Markdown
Stretch Challenge**Make** another new column in the dataframe:- description grade level (you can use [this `textstat` function](https://github.com/shivam5992/textstat#the-flesch-kincaid-grade-level) to get the Flesch-Kincaid grade level)**Answer** the question: What's the distribution of grade levels? Plot a histogram.**Be aware** that [Textstat has issues when sentences aren't separated by spaces](https://github.com/shivam5992/textstat/issues/77#issuecomment-453734048). (A Lambda School Data Science student helped identify this issue, and emailed with the developer.) Also, [BeautifulSoup doesn't separate paragraph tags with spaces](https://bugs.launchpad.net/beautifulsoup/+bug/1768330).So, you may get some inaccurate or surprising grade level estimates here. Don't worry, that's ok — but optionally, can you do anything to try improving the grade level estimates?
###Code
df.describe(include='all')
#df['description grade level'] = df['description grade level',]
#loop text from description first
#textstat.flesch_kincaid_grade(df['description'][0])
#[ expression for item in list if conditional ]
# for item in flesch_kincaid_grade.description
!pip install textstat
import textstat
[textstat.flesch_kincaid_grade(item) for item in df['description'] ]
df['description grade level'] = df.description.apply(textstat.flesch_kincaid_grade)
df['description grade level'].hist();
###Output
_____no_output_____
###Markdown
_Lambda School Data Science_ Scrape and process dataObjectives- scrape and parse web pages- use list comprehensions- select rows and columns with pandasLinks- [Automate the Boring Stuff with Python, Chapter 11](https://automatetheboringstuff.com/chapter11/) - Requests - Beautiful Soup- [Python List Comprehensions: Explained Visually](https://treyhunner.com/2015/12/python-list-comprehensions-now-in-color/)- [Pandas Cheat Sheet](https://github.com/pandas-dev/pandas/blob/master/doc/cheatsheet/Pandas_Cheat_Sheet.pdf) - Subset Observations (Rows) - Subset Variables (Columns)- Python Data Science Handbook - [Chapter 3.1](https://jakevdp.github.io/PythonDataScienceHandbook/03.01-introducing-pandas-objects.html), Introducing Pandas Objects - [Chapter 3.2](https://jakevdp.github.io/PythonDataScienceHandbook/03.02-data-indexing-and-selection.html), Data Indexing and Selection Scrape the titles of PyCon 2018 talks
###Code
url = 'https://us.pycon.org/2018/schedule/talks/list/'
###Output
_____no_output_____
###Markdown
5 ways to look at long titlesLet's define a long title as greater than 80 characters 1. For Loop
###Code
###Output
_____no_output_____
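For example, the for-loop approach can be sketched like this (with a small hardcoded sample list standing in for the scraped `titles` variable):

```python
# Sample data standing in for the scraped `titles` list
titles = [
    "Dataclasses: The code generator to end all code generators",
    "Type hints",
    "Reinventing the Parser Generator: How and Why You Should Write Your Own Parsers by Hand in Python",
]

# Collect titles longer than 80 characters with a plain for loop
long_titles = []
for title in titles:
    if len(title) > 80:
        long_titles.append(title)

print(long_titles)
```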
###Markdown
2. List Comprehension
###Code
###Output
_____no_output_____
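The same filter as a one-line list comprehension (again using sample data in place of the scraped `titles`):

```python
# Sample data standing in for the scraped `titles` list
titles = [
    "Dataclasses: The code generator to end all code generators",
    "Reinventing the Parser Generator: How and Why You Should Write Your Own Parsers by Hand in Python",
]

# Keep only the titles longer than 80 characters
long_titles = [title for title in titles if len(title) > 80]
```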
###Markdown
3. Filter with named function
###Code
###Output
_____no_output_____
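With a named function, `filter()` keeps only the items for which the function returns True (sample data again stands in for the scraped titles):

```python
titles = [
    "Dataclasses: The code generator to end all code generators",
    "Reinventing the Parser Generator: How and Why You Should Write Your Own Parsers by Hand in Python",
]

def long(title):
    """Return True when the title is longer than 80 characters."""
    return len(title) > 80

# filter() yields only the items where long() is True
long_titles = list(filter(long, titles))
```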
###Markdown
4. Filter with anonymous function
###Code
###Output
_____no_output_____
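The anonymous-function version inlines the predicate as a `lambda` (sample data in place of the scraped titles):

```python
titles = [
    "Dataclasses: The code generator to end all code generators",
    "Reinventing the Parser Generator: How and Why You Should Write Your Own Parsers by Hand in Python",
]

# Same filter, with an inline anonymous function
long_titles = list(filter(lambda t: len(t) > 80, titles))
```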
###Markdown
5. Pandaspandas documentation: [Working with Text Data](https://pandas.pydata.org/pandas-docs/stable/text.html)
###Code
###Output
_____no_output_____
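In pandas, the filter becomes a boolean mask on a string column (sketched here with sample data):

```python
import pandas as pd

titles = [
    "Dataclasses: The code generator to end all code generators",
    "Reinventing the Parser Generator: How and Why You Should Write Your Own Parsers by Hand in Python",
]
df = pd.DataFrame({'title': titles})

# Boolean mask: True for rows whose title exceeds 80 characters
condition = df['title'].str.len() > 80
long_titles = df[condition]
```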
###Markdown
Make new dataframe columnspandas documentation: [apply](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.apply.html)
###Code
###Output
_____no_output_____
###Markdown
title length
###Code
###Output
_____no_output_____
###Markdown
long title
###Code
###Output
_____no_output_____
###Markdown
first letter
###Code
###Output
_____no_output_____
###Markdown
word countUsing [`textstat`](https://github.com/shivam5992/textstat)
###Code
!pip install textstat
###Output
_____no_output_____
###Markdown
Rename column`title length` --> `title character count`pandas documentation: [rename](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.rename.html)
###Code
###Output
_____no_output_____
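A minimal sketch of the rename (note that `rename` returns a new DataFrame, so the result must be assigned back):

```python
import pandas as pd

df = pd.DataFrame({'title': ['Type hints'], 'title length': [10]})

# rename returns a new DataFrame; assign it back to keep the change
df = df.rename(columns={'title length': 'title character count'})
```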
###Markdown
Analyze the dataframe Describepandas documentation: [describe](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.describe.html)
###Code
###Output
_____no_output_____
###Markdown
Sort valuespandas documentation: [sort_values](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.sort_values.html) Five shortest titles, by character count
###Code
###Output
_____no_output_____
###Markdown
Titles sorted reverse alphabetically
###Code
###Output
_____no_output_____
###Markdown
Get value countspandas documentation: [value_counts](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.value_counts.html) Frequency counts of first letters
###Code
###Output
_____no_output_____
###Markdown
Percentage of talks with long titles
###Code
###Output
_____no_output_____
###Markdown
Plotpandas documentation: [Visualization](https://pandas.pydata.org/pandas-docs/stable/visualization.html) Top 5 most frequent first letters
###Code
###Output
_____no_output_____
###Markdown
Histogram of title lengths, in characters
###Code
###Output
_____no_output_____
###Markdown
Assignment**Scrape** the talk descriptions. Hint: `soup.select('.presentation-description')`**Make** new columns in the dataframe:- description- description character count- description word count- description grade level (use [this `textstat` function](https://github.com/shivam5992/textstat#the-flesch-kincaid-grade-level) to get the Flesch-Kincaid grade level)**Describe** all the dataframe's columns. What's the average description word count? The minimum? The maximum?**Answer** these questions:- Which descriptions could fit in a tweet?- What's the distribution of grade levels? Plot a histogram.
###Code
url = 'https://us.pycon.org/2018/schedule/talks/list/'
import bs4
import requests
import pandas as pd
result = requests.get(url)
soup = bs4.BeautifulSoup(result.text)
descriptions = [tag.text.strip()
for tag in soup.select('div.presentation-description')]
df = pd.DataFrame({'description': descriptions})
df['description length'] = df.description.apply(len)
df.loc[df['description length'] > 100, 'description length']
df
df.describe()
! pip install textstat
import textstat
df['description word count'] = df.description.apply(textstat.lexicon_count)
df['description character count'] = df.description.str.len()
df['description character count']
df['kincaid grade'] = df.description.apply(textstat.flesch_kincaid_grade)
# Descriptions short enough to fit in a tweet (280 characters or fewer)
tweetable = df[df['description character count'] <= 280]
tweetable
%matplotlib inline
title = 'Distribution of the Flesch-Kincaid grade'
df['kincaid grade'].plot.hist(title=title)
###Output
_____no_output_____
###Markdown
_Lambda School Data Science_ Scrape and process dataObjectives- scrape and parse web pages- use list comprehensions- select rows and columns with pandasLinks- [Automate the Boring Stuff with Python, Chapter 11](https://automatetheboringstuff.com/chapter11/) - Requests - Beautiful Soup- [Python List Comprehensions: Explained Visually](https://treyhunner.com/2015/12/python-list-comprehensions-now-in-color/)- [Pandas Cheat Sheet](https://github.com/pandas-dev/pandas/blob/master/doc/cheatsheet/Pandas_Cheat_Sheet.pdf) - Subset Observations (Rows) - Subset Variables (Columns)- Python Data Science Handbook - [Chapter 3.1](https://jakevdp.github.io/PythonDataScienceHandbook/03.01-introducing-pandas-objects.html), Introducing Pandas Objects - [Chapter 3.2](https://jakevdp.github.io/PythonDataScienceHandbook/03.02-data-indexing-and-selection.html), Data Indexing and Selection Scrape the titles of PyCon 2019 talks
###Code
url = 'https://us.pycon.org/2019/schedule/talks/list/'
import bs4
import requests
result = requests.get(url)
result
type(result)
result.text
type(result.text)
soup = bs4.BeautifulSoup(result.text)
soup
type(soup)
soup.select('h2')
type(soup.select('h2'))
len(soup.select('h2'))
first = soup.select('h2')[0]
first
type(first)
first.text
type(first.text)
# remove whitespace
first.text.strip()
soup.select('h2')[-1]
last = soup.select('h2')[-1]
last.text.strip()
for tag in soup.select('h2'):
title = tag.text.strip()
print(title)
titles = [tag.text.strip()
for tag in soup.select('h2')]
titles
type(titles), len(titles)
###Output
_____no_output_____
###Markdown
5 ways to look at long titlesLet's define a long title as greater than 80 characters 1. For Loop
###Code
long_titles = []
for title in titles:
if len(title) > 80:
long_titles.append(title)
###Output
_____no_output_____
###Markdown
2. List Comprehension
###Code
[title for title in titles
 if len(title) > 80]
def long(title):
return len(title) > 80
long('Python is good')
list(filter(long, titles))
###Output
_____no_output_____
###Markdown
3. Filter with named function 4. Filter with anonymous function
###Code
list(filter(lambda t: len(t) > 80, titles))
###Output
_____no_output_____
###Markdown
5. Pandaspandas documentation: [Working with Text Data](https://pandas.pydata.org/pandas-docs/stable/text.html)
###Code
import pandas as pd
pd.options.display.max_colwidth = 200
df = pd.DataFrame({'title' : titles})
df.shape
df.head()
df[df['title'].str.len() > 80]
condition = df['title'].str.len() > 80
df[condition]
###Output
_____no_output_____
###Markdown
Make new dataframe columnspandas documentation: [apply](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.apply.html) title length
###Code
df['title length'] = df['title'].apply(len)
df.head()
df.loc[ df['title length'] > 80, ['title']]
###Output
_____no_output_____
###Markdown
long title
###Code
df['long title'] = df['title length'] > 80
df.shape
df.head()
df[ df['long title']==True]
df[~df['long title']]
###Output
_____no_output_____
###Markdown
first letter
###Code
# 'Python is great:'[0]
df['first letter'] = df['title'].str[0]
df[ df['first letter']=='P']
# 'Python is good!'.startswith('P')
df[ df['title'].str.startswith('P') ]
df[ df['title'].str.contains('Python')]
###Output
_____no_output_____
###Markdown
word countUsing [`textstat`](https://github.com/shivam5992/textstat)
###Code
!pip install textstat
import textstat
df['title word count'] = df['title'].apply(textstat.lexicon_count)
print(df.shape)
df.head()
df[ df['title word count'] <= 3]
###Output
_____no_output_____
###Markdown
Rename column`title length` --> `title character count`pandas documentation: [rename](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.rename.html)
###Code
df = df.rename(columns={'title length' : 'title character count'})
df.head()
###Output
_____no_output_____
###Markdown
Analyze the dataframe Describepandas documentation: [describe](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.describe.html)
###Code
df.describe()
df.describe(include='all')
df.describe(exclude='number')
###Output
_____no_output_____
###Markdown
Sort valuespandas documentation: [sort_values](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.sort_values.html) Five shortest titles, by character count
###Code
df.sort_values(by='title character count').head(5)
###Output
_____no_output_____
###Markdown
Titles sorted reverse alphabetically
###Code
df.sort_values(by='first letter', ascending=False).head()
###Output
_____no_output_____
###Markdown
Get value countspandas documentation: [value_counts](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.value_counts.html) Frequency counts of first letters
###Code
df['first letter'].value_counts()
###Output
_____no_output_____
###Markdown
Percentage of talks with long titles
###Code
df['long title'].value_counts(normalize=True)
###Output
_____no_output_____
###Markdown
Plotpandas documentation: [Visualization](https://pandas.pydata.org/pandas-docs/stable/visualization.html) Top 5 most frequent first letters
###Code
(df['first letter']
.value_counts()
.head(5)
.plot
.barh(color='gray',
          title="Top 5 First Letters of PyCon 2019 Talks"));
title = "Distribution of Title Length in Characters"
df['title character count'].plot.hist(title=title);
###Output
_____no_output_____
###Markdown
Histogram of title lengths, in characters Assignment**Scrape** the talk descriptions. Hint: `soup.select('.presentation-description')`**Make** new columns in the dataframe:- description- description character count- description word count**Describe** all the dataframe's columns. What's the average description word count? The minimum? The maximum?**Answer** the question: Which descriptions could fit in a tweet?
###Code
soup.find_all('div', id='presentation-description')
soup.select('div.presentation-description')
type('.presentation-description')
for tag in soup.select('.presentation-description'):
descriptions = tag.text.strip()
print(descriptions)
descriptions = [desc.text for desc in soup.select('.presentation-description')]
descriptions
desc_df = pd.DataFrame({'.presentation-description' : descriptions})
pd.options.display.max_colwidth = 10000
desc_df.shape
desc_df.head()
desc_df = desc_df.rename(columns={'.presentation-description' : 'Presentation Description'})
desc_df.head()
desc_df['Presentation Description Character Count'] = desc_df['Presentation Description'].apply(len)
# desc_df = desc_df.drop(columns = ['Description Character Count'])
desc_df.head()
desc_df['Presentation Description Word Count'] = desc_df['Presentation Description'].apply(textstat.lexicon_count)
print(desc_df.shape)
desc_df.head()
desc_df.describe()
desc_df.describe(include='all')
desc_df.describe(exclude='number')
# Tweetable Presentation Descriptions
desc_df.loc[ desc_df['Presentation Description Character Count'] <= 280, ['Presentation Description']]
###Output
_____no_output_____
###Markdown
Stretch Challenge**Make** another new column in the dataframe:- description grade level (you can use [this `textstat` function](https://github.com/shivam5992/textstat#the-flesch-kincaid-grade-level) to get the Flesch-Kincaid grade level)**Answer** the question: What's the distribution of grade levels? Plot a histogram.**Be aware** that [Textstat has issues when sentences aren't separated by spaces](https://github.com/shivam5992/textstat/issues/77#issuecomment-453734048). (A Lambda School Data Science student helped identify this issue, and emailed with the developer.) Also, [BeautifulSoup doesn't separate paragraph tags with spaces](https://bugs.launchpad.net/beautifulsoup/+bug/1768330).So, you may get some inaccurate or surprising grade level estimates here. Don't worry, that's ok — but optionally, can you do anything to try improving the grade level estimates?
###Code
# grade level of a single description, as a quick sanity check
desc_fkg = textstat.flesch_kincaid_grade(desc_df['Presentation Description'][0])
desc_fkg
desc_df['Presentation Description Flesch-Kincaid Grade'] = desc_df['Presentation Description'].apply(textstat.flesch_kincaid_grade)
desc_df.head()
title = "Presentation Description Flesch-Kincaid Grade"
desc_df['Presentation Description Flesch-Kincaid Grade'].plot.hist(title=title);
###Output
_____no_output_____
###Markdown
_Lambda School Data Science_ Scrape and process dataObjectives- scrape and parse web pages- use list comprehensions- select rows and columns with pandasLinks- [Automate the Boring Stuff with Python, Chapter 11](https://automatetheboringstuff.com/chapter11/) - Requests - Beautiful Soup- [Python List Comprehensions: Explained Visually](https://treyhunner.com/2015/12/python-list-comprehensions-now-in-color/)- [Pandas Cheat Sheet](https://github.com/pandas-dev/pandas/blob/master/doc/cheatsheet/Pandas_Cheat_Sheet.pdf) - Subset Observations (Rows) - Subset Variables (Columns)- Python Data Science Handbook - [Chapter 3.1](https://jakevdp.github.io/PythonDataScienceHandbook/03.01-introducing-pandas-objects.html), Introducing Pandas Objects - [Chapter 3.2](https://jakevdp.github.io/PythonDataScienceHandbook/03.02-data-indexing-and-selection.html), Data Indexing and Selection Scrape the titles of PyCon 2019 talks
###Code
url = 'https://us.pycon.org/2019/schedule/talks/list/'
import bs4
import requests
result = requests.get(url)
result
type(result)
result.text
type(result.text) #verify type of information
soup = bs4.BeautifulSoup(result.text)
#soup
type(soup) # verify what type of soup again
soup.select('h2') # pulling in the titles
type(soup.select('h2')) # what type is this and what can i do with a list?
len((1, 2, 3)) # len works on any sequence (here a tuple)
len(soup.select('h2')) # total length of the list/talks
# means our parsing is working
first = soup.select('h2')[0] # <-- index no 0
first
type(first) # its a tag == text obj
first.text # have removed h2 tag,
first.text.strip() #removing any characters from either end that are whitespace characters
# strip() can also take a string of characters to remove from both ends
last = soup.select('h2')[-1] # trick to get the last list
#assign a variable to it
last.text.strip()
# reflects what we have already done twice,
# if i just want to print the title, use a print statement
for tag in soup.select('h2'):
title = tag.text.strip()
print(title)
#this....
titles = []
for tag in soup.select('h2'):
title = tag.text.strip()
titles.append(title)
#titles # content of a list
#is the same as this
titles = [tag.text.strip()
for tag in soup.select('h2')]
#titles
len(titles)
type(titles), len(titles) # type of list, sum
titles[0], titles[-1] #first and last
###Output
_____no_output_____
###Markdown
5 ways to look at long titlesLet's define a long title as greater than 80 characters 1. For Loop
###Code
for title in titles:
if len(title) > 80: #if length is longer than 80 print titles
print(title)
#or
long_titles = []
for title in titles:
if len(title) > 80: #if length is longer than 80 print titles
long_titles.append(title)
#used mainly when things get complicated
###Output
_____no_output_____
###Markdown
2. List Comprehension
###Code
list_titles = [title for title in titles if len(title) > 80]
list_titles
###Output
_____no_output_____
###Markdown
3. Filter with named function
###Code
def long(title):
return len(title) > 80
long('Python is good!')
long('Thinking Inside the Box: How Python Helped Us Adapt to An Existing Data Ingestion Pipeline')
list(filter(long, titles)) # list of all things that pass through this filter
###Output
_____no_output_____
###Markdown
4. Filter with anonymous function
###Code
list(filter(lambda t: len(t) > 80, titles)) #where t means titles
#this isnt visually appealing, very confusing and not always used
###Output
_____no_output_____
###Markdown
5. Pandaspandas documentation: [Working with Text Data](https://pandas.pydata.org/pandas-docs/stable/text.html)
###Code
import pandas as pd
pd.options.display.max_colwidth = 200 #setting col width to not truncate the data
df = pd.DataFrame({'title': titles}) # making a dict
#we are making a dataframe here
#df
#df.shape
df[ df['title'].str.len() > 80 ] #another way to do this, we are subsetting the observations
#basically saying this column's string length is greater than 80
#if you don't understand this, break it down and solve each piece of this
# don't play computer in your head, let the computer show you what it is doing
#.str says, treat this column like a bunch of strings
#df['title'].str.len()
#condition = df['title'].str.len() > 80, or delete the condition and you will see all rows as True or False
#df[condition] only will pass the rows where that statement is true
###Output
_____no_output_____
###Markdown
Make new dataframe columnspandas documentation: [apply](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.apply.html) title length
###Code
df['title length'] = df['title'].apply(len) #returning list of 95 numbers where
df.shape
df.head()
df[ df['title length'] > 80]
df.loc[ df['title length'] >80, 'title']
df.loc[ df['title length'] >80, 'title length']
###Output
_____no_output_____
###Markdown
long title
###Code
df['long title'] = df['title length'] > 80
df.shape # shape increased
df.head()
# assigning a boolean value
df[ df['long title']== True] # gives all col where long title equales true
#df[~df['long title']] use ~ to flip a boolean condition
###Output
_____no_output_____
###Markdown
first letter
###Code
#'Python is Great!'[0] #first
#'Python is Great!'[-1]#last
df['first letter'] = df['title'].str[0] #all the first letters | assign it to a column
df[ df['first letter']=='P' ] #shows where they all start with p
'Python is good'.startswith('P') # if you didnt want to create another col
df[ df['title'].str.startswith('P')] #another way
###Output
_____no_output_____
###Markdown
word countUsing [`textstat`](https://github.com/shivam5992/textstat)
###Code
!pip install textstat
import textstat
df['title'].apply(textstat.lexicon_count).sum() # total word count across all titles
df['title word count'] = df['title'].apply(textstat.lexicon_count) # per-title word count (no .sum())
df.head()
df[ df['title word count'] <= 3]
# with .sum() in the assignment, every row held the same grand total, so this filter matched nothing
###Output
_____no_output_____
###Markdown
Rename column`title length` --> `title character count`pandas documentation: [rename](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.rename.html)
###Code
df = df.rename(columns={'title length': 'title char count'})
#make sure to assign the result back, since rename returns a new DataFrame!
df.head()
###Output
_____no_output_____
###Markdown
Analyze the dataframe Describepandas documentation: [describe](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.describe.html)
###Code
df.describe()
df.describe(include='all') #include / exclude are options as well
df.describe(exclude='number') #excludes anything numpy considers a number
###Output
_____no_output_____
###Markdown
Sort valuespandas documentation: [sort_values](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.sort_values.html) Five shortest titles, by character count
###Code
df.sort_values(by='title char count').head(5)
#[:5] is ok also
df.sort_values(by='title char count').head(5)['title']
###Output
_____no_output_____
###Markdown
Titles sorted reverse alphabetically
###Code
df.sort_values(by='first letter', ascending=False)
df.sort_values(by='first letter', ascending=False).head()
# ascending=False sorts in descending order
###Output
_____no_output_____
###Markdown
Get value countspandas documentation: [value_counts](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.value_counts.html) Frequency counts of first letters
###Code
df['first letter'].value_counts() #frequency of each
###Output
_____no_output_____
###Markdown
Percentage of talks with long titles
###Code
df['long title'].value_counts()
df['long title'].value_counts() / 95 #creating percentages 95 is len of df
###Output
_____no_output_____
###Markdown
Plotpandas documentation: [Visualization](https://pandas.pydata.org/pandas-docs/stable/visualization.html) Top 5 most frequent first letters
###Code
%matplotlib inline
(df['first letter']
 .value_counts()
 .head()
 .plot
 .barh(color='pink',
       title='Top 5 First Letters'));
#declarative style
###Output
_____no_output_____
###Markdown
Histogram of title lengths, in characters
###Code
title = 'Distribution of Title Length, in Characters'
df['title char count'].plot.hist(title=title);
###Output
_____no_output_____
###Markdown
Assignment**Scrape** the talk descriptions. Hint: `soup.select('.presentation-description')`**Make** new columns in the dataframe:- description- description character count- description word count**Describe** all the dataframe's columns. What's the average description word count? The minimum? The maximum?**Answer** the question: Which descriptions could fit in a tweet? Stretch Challenge**Make** another new column in the dataframe:- description grade level (you can use [this `textstat` function](https://github.com/shivam5992/textstat#the-flesch-kincaid-grade-level) to get the Flesch-Kincaid grade level)**Answer** the question: What's the distribution of grade levels? Plot a histogram.**Be aware** that [Textstat has issues when sentences aren't separated by spaces](https://github.com/shivam5992/textstat/issues/77#issuecomment-453734048). (A Lambda School Data Science student helped identify this issue, and emailed with the developer.) Also, [BeautifulSoup doesn't separate paragraph tags with spaces](https://bugs.launchpad.net/beautifulsoup/+bug/1768330).So, you may get some inaccurate or surprising grade level estimates here. Don't worry, that's ok — but optionally, can you do anything to try improving the grade level estimates?
###Code
url = 'https://us.pycon.org/2019/schedule/talks/list/'
import bs4
import requests
result = requests.get(url)
soup = bs4.BeautifulSoup(result.text)
#soup
# Scrape the Talk Descriptions
soup.select('.presentation-description')
# What type of data am I looking at?
type(soup.select('.presentation-description'))
# What is the total amount of lines
len(soup.select('.presentation-description'))
descriptions = [tag.text.strip()
for tag in soup.select('.presentation-description')]
descriptions
# Verify
len(descriptions)
###Output
_____no_output_____
###Markdown
**DataFrame Work**
###Code
# Pull in pandas
import pandas as pd
pd.options.display.max_colwidth = 200 #setting col width to not truncate the data
###Output
_____no_output_____
###Markdown
**Description**
###Code
# Setting up the Dataframe
df = pd.DataFrame({'description': descriptions })
# Verify
#df
df.shape
###Output
_____no_output_____
###Markdown
**Description Character Count**
###Code
df['description character count'] = df['description'].apply(len)
df.shape
df.head()
###Output
_____no_output_____
###Markdown
**Description Word Count**
###Code
import textstat
df['description word count'] = df['description'].apply(textstat.lexicon_count)
df['description'].apply(textstat.lexicon_count).sum() # checking to see the total word count
# Checking if it works
df.head()
# Verify
df.shape
###Output
_____no_output_____
###Markdown
Describe Each DF ***I wasn't very certain on what exactly was being asked, so I generated a .describe for each column***
###Code
import numpy as np
###Output
_____no_output_____
###Markdown
**Description Column*****Note: This column has no numbers. I just put it up for practice with syntax***
###Code
df['description'].describe() # include/exclude apply to DataFrame.describe, not to a single Series
###Output
_____no_output_____
###Markdown
**Description Word Count Column**
###Code
df['description word count'].describe()
###Output
_____no_output_____
###Markdown
**Average, Minimum, and Maximum**
###Code
print('The Average Description Word Count is:', df['description word count'].mean())
print('The Minimum Description Word Count is:', df['description word count'].min())
print('The Maximum Description Word Count is:', df['description word count'].max())
###Output
The Average Description Word Count is: 130.82105263157894
The Minimum Description Word Count is: 20
The Maximum Description Word Count is: 421
###Markdown
**Description Character Count Column**
###Code
df['description character count'].describe()
###Output
_____no_output_____
###Markdown
**Look at all the Columns Together**
###Code
df.describe(include='all')
###Output
_____no_output_____
###Markdown
What Descriptions Could fit in a Tweet
###Code
# Checking out character counts under 280 (the maximum tweet length)
df[ df['description character count'] < 280]
# Another way to locate
df.loc[ df['description character count'] < 280, 'description character count']
###Output
_____no_output_____
###Markdown
Stretch Goal **Create another Column** ***SOLVED***
###Code
df['description grade level'] = df['description'].apply(textstat.flesch_kincaid_grade)
df.head()
###Output
_____no_output_____
###Markdown
**Create a Histogram**
###Code
df['description grade level'].plot.hist(color='pink');
###Output
_____no_output_____
###Markdown
_Lambda School Data Science_ Scrape and process dataObjectives- scrape and parse web pages- use list comprehensions- select rows and columns with pandasLinks- [Automate the Boring Stuff with Python, Chapter 11](https://automatetheboringstuff.com/chapter11/) - Requests - Beautiful Soup- [Python List Comprehensions: Explained Visually](https://treyhunner.com/2015/12/python-list-comprehensions-now-in-color/)- [Pandas Cheat Sheet](https://github.com/pandas-dev/pandas/blob/master/doc/cheatsheet/Pandas_Cheat_Sheet.pdf) - Subset Observations (Rows) - Subset Variables (Columns)- Python Data Science Handbook - [Chapter 3.1](https://jakevdp.github.io/PythonDataScienceHandbook/03.01-introducing-pandas-objects.html), Introducing Pandas Objects - [Chapter 3.2](https://jakevdp.github.io/PythonDataScienceHandbook/03.02-data-indexing-and-selection.html), Data Indexing and Selection Scrape the titles of PyCon 2018 talks
###Code
url = 'https://us.pycon.org/2018/schedule/talks/list/'
###Output
_____no_output_____
###Markdown
5 ways to look at long titlesLet's define a long title as greater than 80 characters 1. For Loop
###Code
###Output
_____no_output_____
###Markdown
2. List Comprehension
###Code
###Output
_____no_output_____
###Markdown
3. Filter with named function
###Code
###Output
_____no_output_____
###Markdown
4. Filter with anonymous function
###Code
###Output
_____no_output_____
###Markdown
5. Pandaspandas documentation: [Working with Text Data](https://pandas.pydata.org/pandas-docs/stable/text.html)
###Code
###Output
_____no_output_____
###Markdown
Make new dataframe columnspandas documentation: [apply](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.apply.html)
###Code
###Output
_____no_output_____
###Markdown
title length
###Code
###Output
_____no_output_____
###Markdown
long title
###Code
###Output
_____no_output_____
###Markdown
first letter
###Code
###Output
_____no_output_____
###Markdown
word countUsing [`textstat`](https://github.com/shivam5992/textstat)
###Code
!pip install textstat
###Output
_____no_output_____
###Markdown
Rename column`title length` --> `title character count`pandas documentation: [rename](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.rename.html)
###Code
###Output
_____no_output_____
###Markdown
Analyze the dataframe Describepandas documentation: [describe](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.describe.html)
###Code
###Output
_____no_output_____
###Markdown
Sort valuespandas documentation: [sort_values](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.sort_values.html) Five shortest titles, by character count
###Code
###Output
_____no_output_____
###Markdown
Titles sorted reverse alphabetically
###Code
###Output
_____no_output_____
###Markdown
Get value countspandas documentation: [value_counts](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.value_counts.html) Frequency counts of first letters
###Code
###Output
_____no_output_____
###Markdown
Percentage of talks with long titles
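A sketch for this blank cell: `value_counts(normalize=True)` returns proportions rather than raw counts (a boolean `long title` column is assumed):

```python
import pandas as pd

df = pd.DataFrame({'long title': [True, False, False, False]})

# normalize=True converts counts to fractions of the total.
pct = df['long title'].value_counts(normalize=True)
print(pct)
```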
###Code
###Output
_____no_output_____
###Markdown
Plotpandas documentation: [Visualization](https://pandas.pydata.org/pandas-docs/stable/visualization.html) Top 5 most frequent first letters
###Code
###Output
_____no_output_____
###Markdown
Histogram of title lengths, in characters
###Code
###Output
_____no_output_____
###Markdown
Assignment**Scrape** the talk descriptions. Hint: `soup.select('.presentation-description')`**Make** new columns in the dataframe:- description- description character count- description word count- description grade level (use [this `textstat` function](https://github.com/shivam5992/textstat#the-flesch-kincaid-grade-level) to get the Flesch-Kincaid grade level)**Describe** all the dataframe's columns. What's the average description word count? The minimum? The maximum?**Answer** these questions:- Which descriptions could fit in a tweet?- What's the distribution of grade levels? Plot a histogram.
###Code
import requests
import bs4
url = 'https://us.pycon.org/2018/schedule/talks/list/'
result = requests.get(url)
soup = bs4.BeautifulSoup(result.text)
soup.select('.presentation-description')[0].text.strip()
descriptions = [tag.text.strip() for tag in soup.select('.presentation-description')]
titles = [tag.text.strip() for tag in soup.select('h2')]
len(descriptions), len(titles)
import pandas as pd
pd.options.display.max_colwidth = 200
df = pd.DataFrame({'description':descriptions})
df.head()
df['description char count'] = df.description.apply(len)
df.head()
!pip install textstat
import textstat
# use textstat to count words.
df['description word count'] = df.description.apply(textstat.lexicon_count)
df.head()
# readability by grade level using the Flesch-Kincaid grade level
# FK grade levels 0-18
# 0-6: Basic, 7-12: Average, 12-18: Skilled
df['description FK grade level'] = df.description.apply(textstat.flesch_kincaid_grade)
df.head()
# looks like we have one value that is way too high. might want to categorize them.
df['description FK grade level'].describe()
import numpy as np
criteria = [((df['description FK grade level'] >= 0) & (df['description FK grade level'] < 6)),
((df['description FK grade level'] >= 6) & (df['description FK grade level'] < 12)),
((df['description FK grade level'] >= 12))]
values = ['Basic', 'Average', 'Skilled']
df['description FK category'] = np.select(criteria,values)
df.head()
df['description FK category'].value_counts().plot.barh(title='Counts for each FK category');
df.describe()
list(df['description'][df['description char count'] < 280])[0]
df['tweetable description'] = df['description char count'] <= 280
df['description FK grade level'].plot.hist(title='distribution of FK grade levels');
df['description FOG grade level'] = df.description.apply(textstat.gunning_fog)
df['description SMOG grade level'] = df.description.apply(textstat.smog_index)
df.head()
df['mean grade level'] = (df['description FK grade level'] + df['description FOG grade level'] + df['description SMOG grade level']) / 3
df.head()
df['description char per word'] = df['description char count'] / df['description word count']
df['description char per word'].corr(df['mean grade level'])
df.pivot_table(values = 'description char per word', index='description FK category').plot.barh()
df.head()
df.describe()
df.corr()
soup.select('h2')[0].text.strip()
df.head(1)
df['title'] = [tag.text.strip() for tag in soup.select('h2')]
df.head(1)
df = df.drop(labels='titles', axis='columns', errors='ignore')  # 'titles' is a leftover column from an earlier run; errors='ignore' makes re-runs safe
df.head(1)
df['title char count'] = df.title.apply(len)
df['first letter in title'] = df.title.str[0]
df['title word count'] = df.title.apply(textstat.lexicon_count)
df.head(1)
df['first letter in title'] = df['first letter in title'].str.upper()
df.shape
df['title char per word'] = df['title char count'] / df['title word count']
df['bigger words in title'] = (df['title char per word'] > df['description char per word'])
df['bigger words in title'].describe()
len(soup.select('b')[1::2][0])
df = df.drop(labels='speaker names', axis='columns', errors='ignore')  # drop if present; errors='ignore' makes re-runs safe
df.head(1)
df['speaker names'] = [tag.text.strip() for tag in soup.select('b')[::2]]
df['time/place'] = [tag.text.strip() for tag in soup.select('b')[1::2]]
import re
def split(expression):
expression = re.split('\n',expression)
cols = []
event_day = expression[0].strip()
event_time = expression[1].strip()
event_location = expression[3].strip()
cols.append(event_day)
cols.append(event_time)
cols.append(event_location)
return cols
times_places = list(df['time/place'].apply(split))
days = []
times = []
locations = []
for item in times_places:
days.append(item[0])
times.append(item[1])
locations.append(item[2])
df['event day'] = days
df['event times'] = times
df['event locations'] = locations
df = df.drop(labels='time/place',axis=1)
df['event locations'].value_counts()
###Output
_____no_output_____
###Markdown
_Lambda School Data Science_ Scrape and process dataObjectives- scrape and parse web pages- use list comprehensions- select rows and columns with pandasLinks- [Automate the Boring Stuff with Python, Chapter 11](https://automatetheboringstuff.com/chapter11/) - Requests - Beautiful Soup- [Python List Comprehensions: Explained Visually](https://treyhunner.com/2015/12/python-list-comprehensions-now-in-color/)- [Pandas Cheat Sheet](https://github.com/pandas-dev/pandas/blob/master/doc/cheatsheet/Pandas_Cheat_Sheet.pdf) - Subset Observations (Rows) - Subset Variables (Columns)- Python Data Science Handbook - [Chapter 3.1](https://jakevdp.github.io/PythonDataScienceHandbook/03.01-introducing-pandas-objects.html), Introducing Pandas Objects - [Chapter 3.2](https://jakevdp.github.io/PythonDataScienceHandbook/03.02-data-indexing-and-selection.html), Data Indexing and Selection Scrape the titles of PyCon 2019 talks
###Code
url = 'https://us.pycon.org/2019/schedule/talks/list/'
import bs4
import requests
result = requests.get(url)
## Response [200] means that everything went okay with the retrieval of information
result
## what is the type of the information.. in this case, object
type(result)
## returns HTML code. the text from the URL
result.text
type(result.text)
## the bs4.BeautifulSoup constructor parses this str into a navigable tree
soup = bs4.BeautifulSoup(result.text)
soup
## beautiful soup object
type(soup)
## tab-complete to see what you can do.. .select() finds elements matching a CSS selector. Inspect the page source for clues about
## the information that you want. Trial and error until you get what you want!
soup.select('h2') ## select all h2 tags on the page
type(soup.select('h2')) ## returns a list!
len(soup.select('h2')) ## tells you the length of the list.. about 100 talks in this case!
first = soup.select('h2')[0] ## return the first element
first
type(first) ## soup tag element
## keep tab completing to see what you can do for these different types of items
first.text ## get the text from the bs4 Tag object! .. text with spaces and newline characters
type(first.text) # another string
first.text.strip() ## strip the blank spaces
first.text.strip().strip('5') ## strip with an argument removes those specific characters from the ends
last = soup.select('h2')[-1] ## select the last element
last
#loop through all the text and print the titles with spaces removed!
titles = []
for tag in soup.select('h2'):
title = tag.text.strip()
titles.append(title)
print(titles) ## a list!
type(titles)
titlesCompr = [tag.text.strip() for tag in soup.select('h2')] ## list comprehensions! same as 'titles' list above
print(titlesCompr)
titlesCompr[0], titlesCompr[-1] ## first and last titles! can iterate the list!
###Output
_____no_output_____
###Markdown
5 ways to look at long titlesLet's define a long title as greater than 80 characters 1. For Loop
###Code
## for loop
for title in titles:
if len(title) > 80:
print(title)
###Output
¡Escuincla babosa!: Creating a telenovela script in three Python deep learning frameworks
Getting started with Deep Learning: Using Keras & Numpy to detect voice disorders
How to engage Python contributors in the long term? Tech is easy, people are hard.
Lessons learned from building a community of Python users among thousands of analysts
Life Is Better Painted Black, or: How to Stop Worrying and Embrace Auto-Formatting
One Engineer, an API, and an MVP: Or, how I spent one hour improving hiring data at my company.
Put down the deep learning: When not to use neural networks and what to do instead
Thinking Inside the Box: How Python Helped Us Adapt to An Existing Data Ingestion Pipeline
###Markdown
2. List Comprehension
###Code
long_titles = [title for title in titles if len(title) > 80] # list comprehension
long_titles
###Output
_____no_output_____
###Markdown
3. Filter with named function
###Code
## function that returns true or false if title is long or not
def long(title):
return len(title) > 80
long("Python is good")
## filters for long titles (using long function).. list call to it returns it into a list
## functional style of programming
list(filter(long, titles))
###Output
_____no_output_____
###Markdown
4. Filter with anonymous function
###Code
## another way to do the same thing as before. 'Lambdas are like list comprehensions for functions'
list(filter(lambda t: len(t) > 80, titles))
###Output
_____no_output_____
###Markdown
5. Pandaspandas documentation: [Working with Text Data](https://pandas.pydata.org/pandas-docs/stable/text.html)
###Code
import pandas as pd
pd.options.display.max_colwidth = 200 ## shows full title so it doesn't get truncated
df = pd.DataFrame({'title': titles}) # create dataframe using data from the previous list!
df.shape
df[ df['title'].str.len() > 80] ## refer to pandas cheat sheet to see what is going on here!
###Output
_____no_output_____
###Markdown
Make new dataframe columnspandas documentation: [apply](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.apply.html)
###Code
###Output
_____no_output_____
###Markdown
title length
###Code
## adding column that shows the title lengths
df['title_length'] = df['title'].apply(len)
df.loc[df['title_length']>80, 'title_length'] ## returns titles with length greater than 80
###Output
_____no_output_____
###Markdown
long title
###Code
## boolean column.. if short then false, if long, then True
df['long_title'] = df['title_length'] > 80
df.shape
df[df['long_title']] ## return ones with long titles only
###Output
_____no_output_____
###Markdown
first letter
###Code
df['first_letter'] = df['title'].str[0] ## add column of first letters
df[df['first_letter']=='P'] ## show rows where first letter is 'P'
## same as.. '.startswith('P')' .. python methods.. very convenient!
df[df['title'].str.startswith('P')] ## keep in mind.. strings put in are case sensitive... .lower() or .upper() can be used
## other methods.. .contains('string')
###Output
_____no_output_____
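The `.contains` method mentioned in the comments above can be sketched like this (the sample data is illustrative, and `case=False` handles the case-sensitivity caveat):

```python
import pandas as pd

df = pd.DataFrame({'title': ['Practical decorators',
                             'Python Security Tools',
                             'Measuring Model Fairness']})

# str.contains matches a substring (regex by default); case=False ignores case.
matches = df[df['title'].str.contains('python', case=False)]
print(matches)
```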
###Markdown
word countUsing [`textstat`](https://github.com/shivam5992/textstat)
###Code
!pip install textstat
## stats on text information
import textstat
## tab complete to look at methods! always helpful
## textstat.
df['title_word_count'] = df['title'].apply(textstat.lexicon_count)
df.shape
df.head()
df[df['title_word_count'] <= 3] # look at short word count names
###Output
_____no_output_____
###Markdown
Rename column`title length` --> `title character count`pandas documentation: [rename](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.rename.html)
###Code
## rename.. make sure to reassign to the original (or a new) dataframe, depending on whether
## you want to keep them separate or not
df = df.rename(columns={'title_length': 'title_character_count'})
df.head() ## you can see that the column got renamed
###Output
_____no_output_____
###Markdown
Analyze the dataframe Describepandas documentation: [describe](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.describe.html)
###Code
df.describe() ## will only show numeric columns
df.describe(exclude='number') ## includes all columns.. but probably won't be great for some of the stats
###Output
_____no_output_____
###Markdown
Sort valuespandas documentation: [sort_values](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.sort_values.html) Five shortest titles, by character count
###Code
df.sort_values(by='title_character_count').head()['title']
###Output
_____no_output_____
###Markdown
Titles sorted reverse alphabetically
###Code
df.sort_values(by='first_letter', ascending=False).head() ## be aware of function details.. sorting is case-sensitive, so mixed-case letters may not order the way you expect
###Output
_____no_output_____
###Markdown
Get value countspandas documentation: [value_counts](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.value_counts.html) Frequency counts of first letters
###Code
df['first_letter'].value_counts()
###Output
_____no_output_____
###Markdown
Percentage of talks with long titles
###Code
df['long_title'].value_counts() / len(df) # manual way to get percentage.. divide by the length of the dataframe
df['long_title'].value_counts(normalize=True) # parameter that gives percentages
###Output
_____no_output_____
###Markdown
Plotpandas documentation: [Visualization](https://pandas.pydata.org/pandas-docs/stable/visualization.html) Top 5 most frequent first letters
###Code
## parentheses around everything allows to put chain on different lines
(df['first_letter']
.value_counts()
.head()
.plot
.barh(color='grey',
title='top five most frequent letters, Pycon 2019 talks')) # horizontal plot for top five letter counts
###Output
_____no_output_____
###Markdown
Histogram of title lengths, in characters
###Code
# histogram.. with title added to it.. of distribution of character counts
title = "distribution of title length in characters"
df['title_character_count'].plot.hist(title=title)
###Output
_____no_output_____
###Markdown
Assignment**Scrape** the talk descriptions. Hint: `soup.select('.presentation-description')`**Make** new columns in the dataframe:- description- description character count- description word count**Describe** all the dataframe's columns. What's the average description word count? The minimum? The maximum?**Answer** the question: Which descriptions could fit in a tweet? Stretch Challenge**Make** another new column in the dataframe:- description grade level (you can use [this `textstat` function](https://github.com/shivam5992/textstat#the-flesch-kincaid-grade-level) to get the Flesch-Kincaid grade level)**Answer** the question: What's the distribution of grade levels? Plot a histogram.**Be aware** that [Textstat has issues when sentences aren't separated by spaces](https://github.com/shivam5992/textstat/issues/77#issuecomment-453734048). (A Lambda School Data Science student helped identify this issue, and emailed with the developer.) Also, [BeautifulSoup doesn't separate paragraph tags with spaces](https://bugs.launchpad.net/beautifulsoup/+bug/1768330).So, you may get some inaccurate or surprising grade level estimates here. Don't worry, that's ok — but optionally, can you do anything to try improving the grade level estimates?
###Code
soup.select('.presentation-description') ## selecting data that is within the '.presentation-description' tags
descriptions = [tag.text.strip() for tag in soup.select('.presentation-description')] ## putting the stripped descriptions into a list
df_copy = df ## making a copy of the dataframe
df_copy['descriptions'] = descriptions ## making a column named 'descriptions' consisiting of the talk descriptions
df_copy.head() ## show the new data frame with the added column
## adding a column that shows the character count of the descriptions
df_copy['description_character_count'] = df_copy['descriptions'].apply(len)
df_copy.head()
## adding a column with description word counts using 'textstat' library
df_copy['descriptions_word_count'] = df_copy['descriptions'].apply(textstat.lexicon_count)
df_copy.head()
## Describe all the dataframe's columns. What's the average description word count? The minimum? The maximum?
df_copy.describe()
###Output
_____no_output_____
###Markdown
1. **Average descriptions word count**: 130.82 words 2. **Minimum descriptions word count**: 20 words 3. **Maximum descriptions word count**: 421 words
###Code
## Answer the question: Which descriptions could fit in a tweet? - Twitter current limit is 280 characters
df_copy[df_copy['description_character_count'] <= 280]
###Output
_____no_output_____
###Markdown
Only 1 description would fit in a tweet - "Making Music with Python, SuperCollider and FoxDot" ***STRETCH CHALLENGE*****Make** another new column in the dataframe:- description grade level (you can use [this `textstat` function](https://github.com/shivam5992/textstat#the-flesch-kincaid-grade-level) to get the Flesch-Kincaid grade level)**Answer** the question: What's the distribution of grade levels? Plot a histogram.**Be aware** that [Textstat has issues when sentences aren't separated by spaces](https://github.com/shivam5992/textstat/issues/77#issuecomment-453734048). (A Lambda School Data Science student helped identify this issue, and emailed with the developer.) Also, [BeautifulSoup doesn't separate paragraph tags with spaces](https://bugs.launchpad.net/beautifulsoup/+bug/1768330).So, you may get some inaccurate or surprising grade level estimates here. Don't worry, that's ok — but optionally, can you do anything to try improving the grade level estimates?
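One possible mitigation for the missing-space issue (a sketch, not the notebook's own method): extract text with `get_text(separator=' ')` so adjacent tags don't run together. The HTML snippet here is illustrative:

```python
import bs4

# Paragraph tags with no whitespace between them -- .text would run the sentences together.
html = '<div class="presentation-description"><p>First sentence.</p><p>Second sentence.</p></div>'
soup = bs4.BeautifulSoup(html, 'html.parser')

tag = soup.select('.presentation-description')[0]
run_together = tag.text                       # sentences fused, no space at the tag boundary
spaced = tag.get_text(separator=' ').strip()  # separator inserts a space between text nodes
print(spaced)
```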
###Code
## new column that shows the description grade level
kincaid_grade = [textstat.flesch_kincaid_grade(text) for text in descriptions]
df_copy['description_grade_level'] = kincaid_grade
df_copy.head()
## distribution of description grade levels histogram
df_copy['description_grade_level'].hist()
###Output
_____no_output_____
###Markdown
_Lambda School Data Science_ Scrape and process dataObjectives- scrape and parse web pages- use list comprehensions- select rows and columns with pandasLinks- [Automate the Boring Stuff with Python, Chapter 11](https://automatetheboringstuff.com/chapter11/) - Requests - Beautiful Soup- [Python List Comprehensions: Explained Visually](https://treyhunner.com/2015/12/python-list-comprehensions-now-in-color/)- [Pandas Cheat Sheet](https://github.com/pandas-dev/pandas/blob/master/doc/cheatsheet/Pandas_Cheat_Sheet.pdf) - Subset Observations (Rows) - Subset Variables (Columns)- Python Data Science Handbook - [Chapter 3.1](https://jakevdp.github.io/PythonDataScienceHandbook/03.01-introducing-pandas-objects.html), Introducing Pandas Objects - [Chapter 3.2](https://jakevdp.github.io/PythonDataScienceHandbook/03.02-data-indexing-and-selection.html), Data Indexing and Selection Scrape the titles of PyCon 2019 talks
###Code
url = 'https://us.pycon.org/2019/schedule/talks/list/'
import requests, bs4
result = requests.get(url)
result
type(result)
result.text
soup = bs4.BeautifulSoup(result.text)
type(soup)
soup.select('h2')
type(soup.select('h2'))
len(soup.select('h2'))
first = soup.select('h2')[0]
first
type(first)
first.text
type(first.text)
first.text.strip()
last = soup.select('h2')[-1]
last.text.strip()
for tag in soup.select('h2'):
title = tag.text.strip()
print(title)
titles = []
for tag in soup.select('h2'):
title = tag.text.strip()
titles.append(title)
print(titles)
titles = [tag.text.strip()
for tag in soup.select('h2')]
print(titles)
###Output
['5 Steps to Build Python Native GUI Widgets for BeeWare', '8 things that happen at the dot: Attribute Access & Descriptors', 'Account Security Patterns: How Logged-In Are you?', 'Ace Your Technical Interview Using Python', 'Advanced asyncio: Solving Real-world Production Problems', 'A Guide to Software Engineering for Visually Impaired', 'A Medieval DSL? Parsing Heraldic Blazons with Python!', 'A New Era in Python Governance', 'API Evolution the Right Way', 'A Right Stitch-up: Creating embroidery patterns with Pillow', 'A Snake in the Bits: Security Automation with Python', 'Assets in Django without losing your hair', 'Attracting the Invisible Contributors', 'Beyond Two Groups: Generalized Bayesian A/B[/C/D/E...] Testing', 'Break the Cycle: Three excellent Python tools to automate repetitive tasks', 'Building a Culture of Observability', 'Building an Open Source Artificial Pancreas', 'Building reproducible Python applications for secured environments', 'But, Why is the (Django) Admin Slow?', 'Coded Readers: Using Python to uncover surprising patterns in the books you love', 'Code Review Skills for Pythonistas', 'CUDA in your Python: Effective Parallel Programming on the GPU', "Dependency hell: a library author's guide", 'Django Channels in practice', 'Does remote work really work?', "Don't be a robot, build the bot", 'Eita! 
Why Internationalization and Localization matter', 'Engineering Ethics and Open Source Software', 'Ensuring Safe Water Access with Python and Machine Learning', 'Escape from auto-manual testing with Hypothesis!', '¡Escuincla babosa!: Creating a telenovela script in three Python deep learning frameworks', "Everything at Once: Python's Many Concurrency Models", 'Exceptional Exceptions - How to properly raise, handle and create them.', 'Extracting tabular data from PDFs with Camelot & Excalibur', 'Fighting Climate Change with Python', 'Floats are Friends: making the most of IEEE754.00000000000000002', 'From days to minutes, from minutes to milliseconds with SQLAlchemy', 'Getting Started Testing in Data Science', 'Getting started with Deep Learning: Using Keras & Numpy to detect voice disorders', 'Getting to Three Million Lines of Type-Annotated Python', 'Going from 2 to 3 on Windows, macOS and Linux', "Help! I'm now the leader of our Meetup group!", 'How to Build a Clinical Diagnostic Model in Python', 'How to engage Python contributors in the long term? 
Tech is easy, people are hard.', 'How to JIT: Writing a Python JIT from scratch in pure Python', 'How to Think about Data Visualization', 'Instant serverless APIs, powered by SQLite', 'Intentional Deployment: Best Practices for Feature Flag Management', 'Lessons learned from building a community of Python users among thousands of analysts', 'Leveraging the Type System to Write Secure Applications', 'Life Is Better Painted Black, or: How to Stop Worrying and Embrace Auto-Formatting', 'Lowering the Stakes of Failure with Pre-mortems and Post-mortems', 'Machine learning model and dataset versioning practices', 'Maintaining a Python Project When It’s Not Your Job', 'Making Music with Python, SuperCollider and FoxDot', 'Measures and Mismeasures of algorithmic fairness', 'Measuring Model Fairness', 'Migrating Pinterest from Python2 to Python3', 'Mocking and Patching Pitfalls', 'Modern solvers: Problems well-defined are problems solved', 'One Engineer, an API, and an MVP: Or, how I spent one hour improving hiring data at my company.', 'Plan your next eclipse viewing with Jupyter and geopandas', 'Plugins: Adding Flexibility to Your Apps', 'Plug-n-Stream Player Piano: Signal Processing With Python', 'Practical decorators', 'Programmatic Notebooks with papermill', 'Put down the deep learning: When not to use neural networks and what to do instead', 'Python on Windows is Okay, Actually', 'Python Security Tools', "Releasing the World's Largest Python Site Every 7 Minutes", 'Rescuing Kerala with Python', 'Scraping a Million Pokemon Battles: Distributed Systems By Example', "Set Practice: learning from Python's set types", 'Statistical Profiling (and other fun with the sys module)', 'Strategies for testing Async code', 'Supporting Engineers with Mental Health Issues', 'Syntax Trees and Python - Automated Code Transformations', 'Take Back the Web with GraphQL', 'Terrain, Art, Python and LiDAR', 'The Black Magic of Python Wheels', 'The Perils of Inheritance: Why We Should Prefer 
Composition', 'The Refactoring Balance Beam: When to Make Changes and When to Leave it Alone', 'The Zen of Python Teams', 'Things I Wish They Told Me About The Multiprocessing Module in Python 3', 'Thinking Inside the Box: How Python Helped Us Adapt to An Existing Data Ingestion Pipeline', 'Thinking like a Panda: Everything you need to know to use pandas the right way.', 'Thoth - how to recommend the best possible libraries for your application', 'Time to take out the rubbish: garbage collector', 'to GIL or not to GIL: the Future of Multi-Core (C)Python', 'Type hinting (and mypy)', 'Understanding Python’s Debugging Internals', 'What is a PLC and how do I talk Python to it?', "What's new in Python 3.7", 'Wily Python: Writing simpler and more maintainable Python', "Working with Time Zones: Everything You Wish You Didn't Need to Know"]
###Markdown
5 ways to look at long titlesLet's define a long title as greater than 80 characters 1. For Loop
###Code
long_titles = []
for title in titles:
if len(title) > 80:
# print(title)
long_titles.append(title)
###Output
_____no_output_____
###Markdown
2. List Comprehension
###Code
long_titles = [title for title in titles if len(title) > 80]
print(long_titles)
###Output
['¡Escuincla babosa!: Creating a telenovela script in three Python deep learning frameworks', 'Getting started with Deep Learning: Using Keras & Numpy to detect voice disorders', 'How to engage Python contributors in the long term? Tech is easy, people are hard.', 'Lessons learned from building a community of Python users among thousands of analysts', 'Life Is Better Painted Black, or: How to Stop Worrying and Embrace Auto-Formatting', 'One Engineer, an API, and an MVP: Or, how I spent one hour improving hiring data at my company.', 'Put down the deep learning: When not to use neural networks and what to do instead', 'Thinking Inside the Box: How Python Helped Us Adapt to An Existing Data Ingestion Pipeline']
###Markdown
3. Filter with named function
###Code
def long(title):
return len(title) > 80
long('Getting started with Deep Learning: Using Keras & Numpy to detect voice disorders')
filter(long, titles)
list(filter(long, titles))
###Output
_____no_output_____
###Markdown
4. Filter with anonymous function
###Code
list(filter(lambda t: len(t) > 80, titles))
###Output
_____no_output_____
###Markdown
5. Pandaspandas documentation: [Working with Text Data](https://pandas.pydata.org/pandas-docs/stable/text.html)
###Code
import pandas as pd
pd.options.display.max_colwidth = 200
df = pd.DataFrame({'title': titles})
print(df)
print(df.shape)
df[ df['title'].str.len() > 80 ]
###Output
_____no_output_____
###Markdown
Make new dataframe columnspandas documentation: [apply](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.apply.html) title length
###Code
df['title length'] = df['title'].apply(len)
print(df['title length'])
df.shape
df.loc[df['title length'] > 80, 'title']
###Output
_____no_output_____
###Markdown
long title
###Code
df['long title'] = df['title length'] > 80
df.shape
df[ df['long title'] == True ]
df[ df['long title']]
###Output
_____no_output_____
###Markdown
3 ways to get all rows where 'long title' is False: `df[ df['long title'] == False ]`; `df[ df['long title'] != True ]`; `df[ ~df['long title'] ]` (`~` denotes an inversion, so True returns False). first letter
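The three equivalent filters above can be sketched on a toy frame (the sample data is illustrative):

```python
import pandas as pd

df = pd.DataFrame({'long title': [True, False, False]})

# All three masks select the rows where 'long title' is False.
a = df[df['long title'] == False]
b = df[df['long title'] != True]
c = df[~df['long title']]

print(len(a), len(b), len(c))
```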
###Code
df['first character'] = df['title'].str[0]
df[ df['first character'] == 'P' ]
df[ df['title'].str.startswith('P')]
###Output
_____no_output_____
###Markdown
word countUsing [`textstat`](https://github.com/shivam5992/textstat)
###Code
!pip install textstat
import textstat
df['title word count'] = df['title'].apply(textstat.lexicon_count)
df.head()
df.shape
df [ df['title word count'] <= 3 ]
###Output
_____no_output_____
###Markdown
Rename column`title length` --> `title character count`pandas documentation: [rename](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.rename.html)
###Code
df = df.rename(columns={'title length': 'title character count'})
df.head()
###Output
_____no_output_____
###Markdown
Analyze the dataframe Describepandas documentation: [describe](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.describe.html)
###Code
df.describe()
df.describe(exclude='number')
###Output
_____no_output_____
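By default `describe()` summarizes only numeric columns; `exclude='number'` flips to the non-numeric ones (count / unique / top / freq). A sketch with a mixed-type frame:

```python
import pandas as pd

toy = pd.DataFrame({'count': [1, 2, 3], 'label': ['a', 'b', 'b']})

numeric_summary = toy.describe()               # numeric columns only (default)
text_summary = toy.describe(exclude='number')  # count / unique / top / freq
```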
###Markdown
Sort valuespandas documentation: [sort_values](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.sort_values.html) Five shortest titles, by character count
###Code
df.sort_values(by='title character count').head()
###Output
_____no_output_____
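For "sort ascending, take the first n", pandas also offers `nsmallest` as a shortcut. A sketch with made-up character counts:

```python
import pandas as pd

toy = pd.DataFrame({'title character count': [40, 12, 95, 60, 8, 33]})

shortest = toy.sort_values(by='title character count').head(5)
# nsmallest gives the same five rows without an explicit sort+head
also_shortest = toy.nsmallest(5, 'title character count')
```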
###Markdown
Titles sorted reverse alphabetically
###Code
df.sort_values(by='first character', ascending=False)
###Output
_____no_output_____
###Markdown
Get value countspandas documentation: [value_counts](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.value_counts.html) Frequency counts of first letters
###Code
df['first character'].value_counts()
###Output
_____no_output_____
###Markdown
Percentage of talks with long titles
###Code
df['long title'].value_counts(normalize=True)
###Output
_____no_output_____
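`normalize=True` converts the counts into fractions that sum to 1, which is what turns the tally above into a percentage of talks. A sketch on a toy boolean series:

```python
import pandas as pd

toy = pd.Series([True, False, False, False], name='long title')

counts = toy.value_counts()                # absolute counts
shares = toy.value_counts(normalize=True)  # fractions that sum to 1
```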
###Markdown
Plotpandas documentation: [Visualization](https://pandas.pydata.org/pandas-docs/stable/visualization.html) Top 5 most frequent first letters
###Code
%matplotlib inline
# Method chaining
(df['first character']
.value_counts()
.head()
.plot
.barh(color = 'grey',
title='Top 5 most frequent first letters, PyCon 2019 talks'));
###Output
_____no_output_____
###Markdown
Histogram of title lengths, in characters
###Code
title = 'Distribution of title length, in characters'
df['title character count'].plot.hist(title=title);
###Output
_____no_output_____
###Markdown
Assignment**Scrape** the talk descriptions. Hint: `soup.select('.presentation-description')`**Make** new columns in the dataframe:- description- description character count- description word count**Describe** all the dataframe's columns. What's the average description word count? The minimum? The maximum?**Answer** the question: Which descriptions could fit in a tweet? Stretch Challenge**Make** another new column in the dataframe:- description grade level (you can use [this `textstat` function](https://github.com/shivam5992/textstat#the-flesch-kincaid-grade-level) to get the Flesch-Kincaid grade level)**Answer** the question: What's the distribution of grade levels? Plot a histogram.**Be aware** that [Textstat has issues when sentences aren't separated by spaces](https://github.com/shivam5992/textstat/issues/77#issuecomment-453734048). (A Lambda School Data Science student helped identify this issue, and emailed with the developer.) Also, [BeautifulSoup doesn't separate paragraph tags with spaces](https://bugs.launchpad.net/beautifulsoup/+bug/1768330).So, you may get some inaccurate or surprising grade level estimates here. Don't worry, that's ok — but optionally, can you do anything to try improving the grade level estimates?
###Code
soup.select('.presentation-description')
first_description = soup.select('.presentation-description')[0]
first_description
first_description.text
first_description.text.strip()
first_description.text.strip().replace('\r\n\r\n', " ")
type(first_description.text.strip())
descriptions = [tag.text.strip().replace('\r\n\r\n', " ") for tag in soup.select('.presentation-description')]
print(descriptions)
df['description'] = descriptions
df.head()
df['description char count'] = df['description'].apply(len)
df.head()
df['description'].describe()
df['description word count'] = df['description'].apply(textstat.lexicon_count)
df.head()
df.describe()
df.describe(exclude='number')
df[ df['description char count'] <= 280]
###Output
_____no_output_____
###Markdown
Stretch ChallengeMake another new column in the dataframe:description grade level (you can use this textstat function to get the Flesh-Kincaid grade level)Answer the question: What's the distribution of grade levels? Plot a histogram.Be aware that Textstat has issues when sentences aren't separated by spaces. (A Lambda School Data Science student helped identify this issue, and emailed with the developer.)Also, BeautifulSoup doesn't separate paragraph tags with spaces.So, you may get some inaccurate or surprising grade level estimates here. Don't worry, that's ok — but optionally, can you do anything to try improving the grade level estimates?
###Code
df['description grade level'] = [textstat.flesch_kincaid_grade(text)
for text in descriptions]
df.head()
###Output
_____no_output_____
###Markdown
_Lambda School Data Science_ Scrape and process dataObjectives- scrape and parse web pages- use list comprehensions- select rows and columns with pandasLinks- [Automate the Boring Stuff with Python, Chapter 11](https://automatetheboringstuff.com/chapter11/) - Requests - Beautiful Soup- [Python List Comprehensions: Explained Visually](https://treyhunner.com/2015/12/python-list-comprehensions-now-in-color/)- [Pandas Cheat Sheet](https://github.com/pandas-dev/pandas/blob/master/doc/cheatsheet/Pandas_Cheat_Sheet.pdf) - Subset Observations (Rows) - Subset Variables (Columns)- Python Data Science Handbook - [Chapter 3.1](https://jakevdp.github.io/PythonDataScienceHandbook/03.01-introducing-pandas-objects.html), Introducing Pandas Objects - [Chapter 3.2](https://jakevdp.github.io/PythonDataScienceHandbook/03.02-data-indexing-and-selection.html), Data Indexing and Selection Scrape the titles of PyCon 2018 talks
###Code
url = 'https://us.pycon.org/2018/schedule/talks/list/'
import requests, bs4
result = requests.get(url)
# tab lists available options, shift + enter runs the cell
result.text
soup = bs4.BeautifulSoup(result.text)
soup.select('h2')
len(soup.select('h2'))
first = soup.select('h2')[0]
first.text.strip()
#[-1:] vs -1 does slicing instead of locating
last = soup.select('h2')[-1:]
titles = []
for tag in soup.select('h2'):
    titles.append(tag.text.strip())
titles =[tag.text.strip() for tag in soup.select('h2')]
type (titles), len(titles)
###Output
_____no_output_____
###Markdown
5 ways to look at long titlesLet's define a long title as greater than 80 characters 1. For Loop
###Code
for title in titles:
if len(title) > 80:
print (title)
long_titles =[]
for title in titles:
if len(title) > 80:
long_titles.append(title)
long_titles
###Output
_____no_output_____
###Markdown
2. List Comprehension
###Code
[title for title in titles if len(title) > 80]
###Output
_____no_output_____
###Markdown
3. Filter with named function
###Code
def long(title):
    return len(title) > 80
list(filter(long, titles))
###Output
_____no_output_____
###Markdown
4. Filter with anonymous function
###Code
list(filter(lambda t: len(t) > 80, titles))
df.title.apply(len)
df['title'].str[0]
df.title[0]
def first_letter(string):
return string[0]
df.title.apply(first_letter)
###Output
_____no_output_____
###Markdown
5. Pandaspandas documentation: [Working with Text Data](https://pandas.pydata.org/pandas-docs/stable/text.html)
###Code
###Output
_____no_output_____
###Markdown
Make new dataframe columnspandas documentation: [apply](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.apply.html)
###Code
import pandas as pd
pd.options.display.max_colwidth = 200
df = pd.DataFrame({'title': titles})
df[df.title.str.len()>80]
###Output
_____no_output_____
###Markdown
title length
###Code
df['title length'] = df.title.apply(len)
df.head()
df[df['title length']>80]
df.loc[df['title length']>80,'title length']
###Output
_____no_output_____
###Markdown
long title
###Code
df['long title'] = df['title length'] > 80
df[df['long title']]
###Output
_____no_output_____
###Markdown
first letter
###Code
title = 'Debugging PySpark'
first_letter = title[0]
first_letter
df['first letter']=df.title.str[0]
df.head()
df[df['first letter']=='P']
###Output
_____no_output_____
###Markdown
word countUsing [`textstat`](https://github.com/shivam5992/textstat)
###Code
!pip install textstat
import textstat
first = df.title.values[0]
last = df.title.values[-1]
first, last
textstat.lexicon_count(first),textstat.lexicon_count(last)
df['title word count']=df.title.apply(textstat.lexicon_count)
df[df['title word count']<=3]
###Output
_____no_output_____
###Markdown
Rename column`title length` --> `title character count`pandas documentation: [rename](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.rename.html)
###Code
df.rename(columns={'title length':'title character count'}, inplace=True)
df.columns
###Output
_____no_output_____
###Markdown
Analyze the dataframe Describepandas documentation: [describe](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.describe.html)
###Code
df.describe()
df.describe(include='all')
import numpy as np
df.describe(exclude=np.number)
###Output
_____no_output_____
###Markdown
Sort valuespandas documentation: [sort_values](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.sort_values.html) Five shortest titles, by character count
###Code
#ascending = False to get longer
df.sort_values(by='title character count').head(5)
###Output
_____no_output_____
###Markdown
Titles sorted reverse alphabetically
###Code
df['first letter'] =df['first letter'].str.upper()
df.sort_values(by='first letter', ascending = False)
###Output
_____no_output_____
###Markdown
Get value countspandas documentation: [value_counts](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.value_counts.html) Frequency counts of first letters
###Code
df['first letter'].value_counts().sort_index()
###Output
_____no_output_____
###Markdown
Percentage of talks with long titles
###Code
df['long title'].value_counts() / len(df)
df['long title'].value_counts(normalize=True)
###Output
_____no_output_____
###Markdown
Plotpandas documentation: [Visualization](https://pandas.pydata.org/pandas-docs/stable/visualization.html) Top 5 most frequent first letters
###Code
%matplotlib inline
###Output
_____no_output_____
###Markdown
Histogram of title lengths, in characters
###Code
(df['first letter']
.value_counts()
.head(5)
.plot.barh(color='grey',title='Top 5 most frequent first letters, Pycon 2018 Talks'));
title ='Distribution of title length in characters'
df['title character count'].plot.hist(title=title);
###Output
_____no_output_____
###Markdown
Assignment**Scrape** the talk descriptions. Hint: `soup.select('.presentation-description')`**Make** new columns in the dataframe:- description- description character count- description word count- description grade level (use [this `textstat` function](https://github.com/shivam5992/textstat#the-flesch-kincaid-grade-level) to get the Flesch-Kincaid grade level)**Describe** all the dataframe's columns. What's the average description word count? The minimum? The maximum?**Answer** these questions:- Which descriptions could fit in a tweet?- What's the distribution of grade levels? Plot a histogram.
###Code
#scrape
description =[tag.text.strip() for tag in soup.select('.presentation-description')]
#new columns
df['descriptions'] = description
df['descriptions character count'] = df.descriptions.apply(len)
df['descriptions word count']=df.descriptions.apply(textstat.lexicon_count)
df['grade level']=df.descriptions.apply(textstat.flesch_kincaid_grade)
df.head()
df.describe()
#135 average description word count. The minimum is 35 words, the maximum is 436 words
#Which descriptions could fit in a tweet?
df[df.descriptions.apply(len)<280]
df['grade level'].plot.hist(title="Grade Level");
###Output
_____no_output_____
###Markdown
_Lambda School Data Science_ Scrape and process dataObjectives- scrape and parse web pages- use list comprehensions- select rows and columns with pandasLinks- [Automate the Boring Stuff with Python, Chapter 11](https://automatetheboringstuff.com/chapter11/) - Requests - Beautiful Soup- [Python List Comprehensions: Explained Visually](https://treyhunner.com/2015/12/python-list-comprehensions-now-in-color/)- [Pandas Cheat Sheet](https://github.com/pandas-dev/pandas/blob/master/doc/cheatsheet/Pandas_Cheat_Sheet.pdf) - Subset Observations (Rows) - Subset Variables (Columns)- Python Data Science Handbook - [Chapter 3.1](https://jakevdp.github.io/PythonDataScienceHandbook/03.01-introducing-pandas-objects.html), Introducing Pandas Objects - [Chapter 3.2](https://jakevdp.github.io/PythonDataScienceHandbook/03.02-data-indexing-and-selection.html), Data Indexing and Selection Scrape the titles of PyCon 2019 talks
###Code
url = 'https://us.pycon.org/2019/schedule/talks/list/'
import bs4
import requests
result = requests.get(url)
result
soup = bs4.BeautifulSoup(result.text)
type(soup)
soup.select('h2')
first = soup.select('h2')[0]
first.text
first.text.strip()
titles = [title.text.strip() for title in soup.select('h2')]
len(titles), type(titles)
###Output
_____no_output_____
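`soup.select('h2')` relies on the third-party bs4 package. A rough stdlib-only stand-in (a hypothetical `H2Collector`, not a substitute for Beautiful Soup's full CSS-selector support) shows the same extract-and-strip idea:

```python
from html.parser import HTMLParser

class H2Collector(HTMLParser):
    """Collect the stripped text of every <h2> tag (stdlib stand-in)."""
    def __init__(self):
        super().__init__()
        self.in_h2 = False
        self.titles = []

    def handle_starttag(self, tag, attrs):
        if tag == 'h2':
            self.in_h2 = True

    def handle_endtag(self, tag):
        if tag == 'h2':
            self.in_h2 = False

    def handle_data(self, data):
        if self.in_h2 and data.strip():
            self.titles.append(data.strip())

parser = H2Collector()
parser.feed('<h2> First Talk </h2><p>desc</p><h2>Second Talk</h2>')
```

Beautiful Soup is still the better tool for real pages; this only illustrates the mechanics.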
###Markdown
5 ways to look at long titlesLet's define a long title as greater than 80 characters 1. For Loop
###Code
long_titles = []
for title in titles:
if len(title) > 80:
long_titles.append(title)
long_titles
###Output
_____no_output_____
###Markdown
2. List Comprehension
###Code
long_titles = [title for title in titles if len(title) > 80]
long_titles
###Output
_____no_output_____
###Markdown
3. Filter with named function
###Code
def long(title):
return len(title) > 80
list(filter(long, titles))
###Output
_____no_output_____
###Markdown
4. Filter with anonymous function
###Code
list(filter(lambda t: len(t) > 80, titles))
###Output
_____no_output_____
###Markdown
5. Pandaspandas documentation: [Working with Text Data](https://pandas.pydata.org/pandas-docs/stable/text.html)
###Code
import pandas as pd
pd.options.display.max_colwidth = 200
df = pd.DataFrame({'title': titles})
df.shape
df[ df['title'].str.len() > 80 ]
###Output
_____no_output_____
###Markdown
Make new dataframe columnspandas documentation: [apply](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.apply.html) title length
###Code
df['title length'] = df['title'].apply(len)
df.shape
df.head()
###Output
_____no_output_____
###Markdown
long title
###Code
df['long title'] = df['title'].apply(len) > 80
df.head()
df[ df['long title'] == True ]
###Output
_____no_output_____
###Markdown
first letter
###Code
df['first letter'] = df['title'].str[0]
df[ df['first letter'] == 'P']
###Output
_____no_output_____
###Markdown
word countUsing [`textstat`](https://github.com/shivam5992/textstat)
###Code
!pip install textstat
import textstat
df['title word count'] = df['title'].apply(textstat.lexicon_count)
df.shape
df.head()
###Output
_____no_output_____
###Markdown
Rename column`title length` --> `title character count`pandas documentation: [rename](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.rename.html)
###Code
df = df.rename(columns={'title length':'title character count'})
###Output
_____no_output_____
###Markdown
Analyze the dataframe Describepandas documentation: [describe](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.describe.html)
###Code
df.describe(exclude='number')
###Output
_____no_output_____
###Markdown
Sort valuespandas documentation: [sort_values](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.sort_values.html) Five shortest titles, by character count
###Code
df.sort_values(by='title character count')[:5]
###Output
_____no_output_____
###Markdown
Titles sorted reverse alphabetically
###Code
df.sort_values(by='title', ascending=False).head()
###Output
_____no_output_____
###Markdown
Get value countspandas documentation: [value_counts](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.value_counts.html) Frequency counts of first letters
###Code
df['first letter'].value_counts()
###Output
_____no_output_____
###Markdown
Percentage of talks with long titles
###Code
df['long title'].value_counts(normalize=True)
###Output
_____no_output_____
###Markdown
Plotpandas documentation: [Visualization](https://pandas.pydata.org/pandas-docs/stable/visualization.html) Top 5 most frequent first letters
###Code
%matplotlib inline
(df['first letter']
.value_counts()
.head(5)
.plot
.barh(
color='grey',
title='Top 5 Most Frequent First Letters, Pycon 2019 Talks'));
###Output
_____no_output_____
###Markdown
Histogram of title lengths, in characters
###Code
(df['title character count']
.plot
.hist(
color='b',
title='Distribution of Title Lengths'));
###Output
_____no_output_____
###Markdown
Assignment**Scrape** the talk descriptions. Hint: `soup.select('.presentation-description')`**Make** new columns in the dataframe:- description- description character count- description word count**Describe** all the dataframe's columns. What's the average description word count? The minimum? The maximum?**Answer** the question: Which descriptions could fit in a tweet? Stretch Challenge**Make** another new column in the dataframe:- description grade level (you can use [this `textstat` function](https://github.com/shivam5992/textstat#the-flesch-kincaid-grade-level) to get the Flesch-Kincaid grade level)**Answer** the question: What's the distribution of grade levels? Plot a histogram.**Be aware** that [Textstat has issues when sentences aren't separated by spaces](https://github.com/shivam5992/textstat/issues/77#issuecomment-453734048). (A Lambda School Data Science student helped identify this issue, and emailed with the developer.) Also, [BeautifulSoup doesn't separate paragraph tags with spaces](https://bugs.launchpad.net/beautifulsoup/+bug/1768330).So, you may get some inaccurate or surprising grade level estimates here. Don't worry, that's ok — but optionally, can you do anything to try improving the grade level estimates?
###Code
result.text
descriptions = soup.select('.presentation-description')
print(descriptions)
first = descriptions[0]
first.text.strip()
last = descriptions[-1].text.strip()
last.replace('\r\n\r\n', ' ')
descs = []
for desc in soup.select('.presentation-description'):
desc = desc.text.strip()
desc = desc.replace('\r\n\r\n', ' ')
descs.append(desc)
descs
type(descs), len(descs)
# Add presentation descriptions to dataframe
df['description'] = descs
df.head()
textstat.lexicon_count(df['description'][0])
# Add description word count column
df['description word count'] = df['description'].apply(textstat.lexicon_count)
df.head()
# Add description character count column
df['description character count'] = df['description'].apply(len)
df.head()
# Describing the dataframe's columns.
df.describe()
###Output
_____no_output_____
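The strip-and-replace cleanup applied to each description can be sketched in isolation:

```python
raw = 'Intro paragraph.\r\n\r\nSecond paragraph.  '

# strip leading/trailing whitespace, then collapse blank-line separators
clean = raw.strip().replace('\r\n\r\n', ' ')
```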
###Markdown
- The presentation description's mean word count is approximately 131 words. - The minimum presentation description's word count is 20 words and the maximum is 421 words.
###Code
df.describe(exclude='number')
# Check to see which presentation descriptions would fit in a tweet (less than 280 characters)
df[ df['description character count'] <= 280 ]
df['description'][70]
df['description grade level'] = df['description'].apply(textstat.flesch_kincaid_grade)
df.head()
df['description'][2]
df.describe()
df[ df['description grade level'] > 16 ]
import matplotlib.pyplot as plt
plt.style.use('seaborn-whitegrid')
ax = plt.axes()
ax.hist(df['description grade level'],
alpha=0.7,
histtype='stepfilled',
color='steelblue',
edgecolor='none')
ax.set(
xlabel='Description Grade Level',
ylabel='Number of Presentation Descriptions',
title='Reading Grade Level of Presentation Descriptions');
fig = plt.subplots(figsize=(16,8))
ax = plt.axes()
plt.hist2d(
df['description word count'],
df['title word count'],
bins=5,
cmap='Blues'
)
ax.set(
xlabel='Description Word Count',
ylabel='Title Word Count',
title='Relationship between Presentation Title and Description Word Count'
)
cb = plt.colorbar()
cb.set_label('counts in each bin')
fig = plt.subplots(figsize=(16,8))
ax = plt.axes()
plt.scatter(
df['description word count'],
df['description character count'],
c=df['description grade level'],
s=df['title character count'],
alpha=0.3,
cmap='viridis'
)
ax.set(
xlabel='Description Word Count',
ylabel='Description Character Count',
title='Relationship between Description Length and Readability'
)
cb = plt.colorbar()
cb.set_label('Reading Grade Level')
plt.axis([0,250,0,1500])
plt.show()
###Output
_____no_output_____
###Markdown
_Lambda School Data Science_ Scrape and process dataObjectives- scrape and parse web pages- use list comprehensions- select rows and columns with pandasLinks- [Automate the Boring Stuff with Python, Chapter 11](https://automatetheboringstuff.com/chapter11/) - Requests - Beautiful Soup- [Python List Comprehensions: Explained Visually](https://treyhunner.com/2015/12/python-list-comprehensions-now-in-color/)- [Pandas Cheat Sheet](https://github.com/pandas-dev/pandas/blob/master/doc/cheatsheet/Pandas_Cheat_Sheet.pdf) - Subset Observations (Rows) - Subset Variables (Columns)- Python Data Science Handbook - [Chapter 3.1](https://jakevdp.github.io/PythonDataScienceHandbook/03.01-introducing-pandas-objects.html), Introducing Pandas Objects - [Chapter 3.2](https://jakevdp.github.io/PythonDataScienceHandbook/03.02-data-indexing-and-selection.html), Data Indexing and Selection Scrape the titles of PyCon 2019 talks
###Code
url = 'https://us.pycon.org/2019/schedule/talks/list/'
import bs4
import requests
result = requests.get(url)
soup = bs4.BeautifulSoup(result.text)
titles = [tag.text.strip() for tag in soup.select('h2')]
###Output
_____no_output_____
###Markdown
5 ways to look at long titlesLet's define a long title as greater than 80 characters 1. For Loop
###Code
long_titles_for_loop = []
for title in titles:
    if len(title) > 80:
        long_titles_for_loop.append(title)
###Output
_____no_output_____
###Markdown
2. List Comprehension
###Code
long_titles_list_comp = [title for title in titles if len(title) > 80]
###Output
_____no_output_____
###Markdown
3. Filter with named function
###Code
def long(title):
return len(title) > 80
long_titles_named_func = list(filter(long, titles))
###Output
_____no_output_____
###Markdown
4. Filter with anonymous function
###Code
long_titles_anon_func = list(filter(lambda x: len(x) > 80, titles))
###Output
_____no_output_____
###Markdown
5. Pandaspandas documentation: [Working with Text Data](https://pandas.pydata.org/pandas-docs/stable/text.html)
###Code
import pandas as pd
pd.options.display.max_colwidth = 200
df = pd.DataFrame({'title': titles})
long_titles_pd = df[df['title'].str.len() > 80]
df.head()
###Output
_____no_output_____
###Markdown
Make new dataframe columnspandas documentation: [apply](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.apply.html) title length
###Code
df['title_length'] = df.title.apply(len)
df.head()
###Output
_____no_output_____
###Markdown
long title
###Code
df['long_title'] = df['title_length'] > 80
df.head()
###Output
_____no_output_____
###Markdown
first letter
###Code
df['first_letter'] = df['title'].str[0]
df.head()
###Output
_____no_output_____
###Markdown
word countUsing [`textstat`](https://github.com/shivam5992/textstat)
###Code
!pip install textstat
import textstat
df['title_word_count'] = df['title'].apply(textstat.lexicon_count)
df.head()
###Output
_____no_output_____
###Markdown
Rename column`title length` --> `title character count`pandas documentation: [rename](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.rename.html)
###Code
df.rename(columns={'title_length': 'title character count'}, inplace=True)
df.head()
###Output
_____no_output_____
###Markdown
Analyze the dataframe Describepandas documentation: [describe](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.describe.html)
###Code
df.describe()
###Output
_____no_output_____
###Markdown
Sort valuespandas documentation: [sort_values](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.sort_values.html) Five shortest titles, by character count
###Code
sorted_df = df.sort_values(by='title character count')
five_shortest_titles = list(sorted_df.title[0:5])
five_shortest_titles
###Output
_____no_output_____
###Markdown
Titles sorted reverse alphabetically
###Code
alpha_sort_reverse = df.sort_values(by='title', ascending=False)
reverse_sorted_titles = list(alpha_sort_reverse['title'])
reverse_sorted_titles
###Output
_____no_output_____
###Markdown
Get value countspandas documentation: [value_counts](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.value_counts.html) Frequency counts of first letters
###Code
df.first_letter.value_counts()
###Output
_____no_output_____
###Markdown
Percentage of talks with long titles
###Code
df['long_title'].value_counts(normalize=True)
###Output
_____no_output_____
###Markdown
Plotpandas documentation: [Visualization](https://pandas.pydata.org/pandas-docs/stable/visualization.html) Top 5 most frequent first letters
###Code
(df.first_letter
.value_counts()
.head(5)
.plot
.barh(color='grey',
title='Top 5 most frequent first letters'));
###Output
_____no_output_____
###Markdown
Histogram of title lengths, in characters
###Code
hist_title = 'Distribution of title length in character counts'
df['title character count'].plot.hist(title=hist_title);
###Output
_____no_output_____
###Markdown
Assignment**Scrape** the talk descriptions. Hint: `soup.select('.presentation-description')`**Make** new columns in the dataframe:- description- description character count- description word count**Describe** all the dataframe's columns. What's the average description word count? The minimum? The maximum?**Answer** the question: Which descriptions could fit in a tweet?
###Code
#Scrape the talk descriptions. Hint: soup.select('.presentation-description')
descriptions = [desc.text for desc in soup.select('.presentation-description')]
#Make new columns in the dataframe:
#description
df['description'] = descriptions
#description character count
df['description_character_count'] = df['description'].apply(len)
#description word count
df['description_word_count'] = df['description'].apply(textstat.lexicon_count)
#Describe all the dataframe's columns. What's the average description word count? The minimum? The maximum?
df.describe()
print('The avg description word count is: ' + str(round(df.description_word_count.mean(), 1)))
print('The minimum is: ' + str(df.description_word_count.min()))
print('The maximum is: ' + str(df.description_word_count.max()))
#Answer the question: Which descriptions could fit in a tweet?
twitter_max = 280
twitterable_descriptions = df[df['description_character_count'] <= twitter_max]['description']
twitterable_descriptions
###Output
_____no_output_____
###Markdown
Stretch Challenge**Make** another new column in the dataframe:- description grade level (you can use [this `textstat` function](https://github.com/shivam5992/textstat#the-flesch-kincaid-grade-level) to get the Flesch-Kincaid grade level)**Answer** the question: What's the distribution of grade levels? Plot a histogram.**Be aware** that [Textstat has issues when sentences aren't separated by spaces](https://github.com/shivam5992/textstat/issues/77#issuecomment-453734048). (A Lambda School Data Science student helped identify this issue, and emailed with the developer.) Also, [BeautifulSoup doesn't separate paragraph tags with spaces](https://bugs.launchpad.net/beautifulsoup/+bug/1768330).So, you may get some inaccurate or surprising grade level estimates here. Don't worry, that's ok — but optionally, can you do anything to try improving the grade level estimates?
###Code
#Make another new column in the dataframe: description grade level
#Note: the score bands below follow the Flesch Reading Ease scale (higher = easier),
#so use flesch_reading_ease; flesch_kincaid_grade already returns a grade level directly
df['flesch_score'] = df['description'].apply(textstat.flesch_reading_ease)
grade_levels = []
for score in df['flesch_score']:
if score < 30:
grade_levels.append('College +')
elif score < 50:
grade_levels.append('College')
elif score < 60:
grade_levels.append('Grade 10-12')
elif score < 70:
grade_levels.append('Grade 8-9')
elif score < 80:
grade_levels.append('Grade 7')
elif score < 90:
grade_levels.append('Grade 6')
else:
grade_levels.append('Grade 5')
df['description_grade_level'] = grade_levels
#Answer the question: What's the distribution of grade levels? Plot a histogram.
df['description_grade_level'].value_counts()
df['description_grade_level'].value_counts().plot.bar(title='Histogram of Grade Level for each description');
###Output
_____no_output_____
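For reference, the thresholds used above match the Flesch Reading Ease scale (0-100, where higher scores mean easier text), whereas `textstat.flesch_kincaid_grade` returns a U.S. grade level directly. A pure-Python sketch of the bucketing, assuming a reading-ease score as input:

```python
def reading_ease_bucket(score):
    # Bands for Flesch Reading Ease (not Flesch-Kincaid grade, which is
    # already a grade level): lower scores mean harder text.
    if score < 30:
        return 'College +'
    elif score < 50:
        return 'College'
    elif score < 60:
        return 'Grade 10-12'
    elif score < 70:
        return 'Grade 8-9'
    elif score < 80:
        return 'Grade 7'
    elif score < 90:
        return 'Grade 6'
    return 'Grade 5'
```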
###Markdown
_Lambda School Data Science_ Scrape and process dataObjectives- scrape and parse web pages- use list comprehensions- select rows and columns with pandasLinks- [Automate the Boring Stuff with Python, Chapter 11](https://automatetheboringstuff.com/chapter11/) - Requests - Beautiful Soup- [Python List Comprehensions: Explained Visually](https://treyhunner.com/2015/12/python-list-comprehensions-now-in-color/)- [Pandas Cheat Sheet](https://github.com/pandas-dev/pandas/blob/master/doc/cheatsheet/Pandas_Cheat_Sheet.pdf) - Subset Observations (Rows) - Subset Variables (Columns)- Python Data Science Handbook - [Chapter 3.1](https://jakevdp.github.io/PythonDataScienceHandbook/03.01-introducing-pandas-objects.html), Introducing Pandas Objects - [Chapter 3.2](https://jakevdp.github.io/PythonDataScienceHandbook/03.02-data-indexing-and-selection.html), Data Indexing and Selection Scrape the titles of PyCon 2019 talks
###Code
url = 'https://us.pycon.org/2019/schedule/talks/list/'
###Output
_____no_output_____
###Markdown
5 ways to look at long titlesLet's define a long title as greater than 80 characters 1. For Loop 2. List Comprehension 3. Filter with named function 4. Filter with anonymous function 5. Pandaspandas documentation: [Working with Text Data](https://pandas.pydata.org/pandas-docs/stable/text.html) Make new dataframe columnspandas documentation: [apply](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.apply.html) title length long title first letter word countUsing [`textstat`](https://github.com/shivam5992/textstat)
###Code
!pip install textstat
###Output
_____no_output_____
###Markdown
_Lambda School Data Science_ Scrape and process dataObjectives- scrape and parse web pages- use list comprehensions- select rows and columns with pandasLinks- [Automate the Boring Stuff with Python, Chapter 11](https://automatetheboringstuff.com/chapter11/) - Requests - Beautiful Soup- [Python List Comprehensions: Explained Visually](https://treyhunner.com/2015/12/python-list-comprehensions-now-in-color/)- [Pandas Cheat Sheet](https://github.com/pandas-dev/pandas/blob/master/doc/cheatsheet/Pandas_Cheat_Sheet.pdf) - Subset Observations (Rows) - Subset Variables (Columns)- Python Data Science Handbook - [Chapter 3.1](https://jakevdp.github.io/PythonDataScienceHandbook/03.01-introducing-pandas-objects.html), Introducing Pandas Objects - [Chapter 3.2](https://jakevdp.github.io/PythonDataScienceHandbook/03.02-data-indexing-and-selection.html), Data Indexing and Selection Scrape the titles of PyCon 2018 talks
###Code
url = 'https://us.pycon.org/2018/schedule/talks/list/'
###Output
_____no_output_____
###Markdown
5 ways to look at long titlesLet's define a long title as greater than 80 characters 1. For Loop
###Code
###Output
_____no_output_____
###Markdown
2. List Comprehension
###Code
###Output
_____no_output_____
###Markdown
3. Filter with named function
###Code
###Output
_____no_output_____
###Markdown
4. Filter with anonymous function
###Code
###Output
_____no_output_____
###Markdown
5. Pandaspandas documentation: [Working with Text Data](https://pandas.pydata.org/pandas-docs/stable/text.html)
###Code
###Output
_____no_output_____
###Markdown
Make new dataframe columnspandas documentation: [apply](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.apply.html)
###Code
###Output
_____no_output_____
###Markdown
title length
###Code
###Output
_____no_output_____
###Markdown
long title
###Code
###Output
_____no_output_____
###Markdown
first letter
###Code
###Output
_____no_output_____
###Markdown
word countUsing [`textstat`](https://github.com/shivam5992/textstat)
###Code
!pip install textstat
###Output
_____no_output_____
###Markdown
Rename column`title length` --> `title character count`pandas documentation: [rename](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.rename.html)
###Code
###Output
_____no_output_____
###Markdown
Analyze the dataframe Describepandas documentation: [describe](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.describe.html)
###Code
###Output
_____no_output_____
###Markdown
Sort valuespandas documentation: [sort_values](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.sort_values.html) Five shortest titles, by character count
###Code
###Output
_____no_output_____
###Markdown
Titles sorted reverse alphabetically
###Code
###Output
_____no_output_____
###Markdown
Get value countspandas documentation: [value_counts](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.value_counts.html) Frequency counts of first letters
###Code
###Output
_____no_output_____
###Markdown
Percentage of talks with long titles
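A minimal sketch using `value_counts(normalize=True)` on toy data (the boolean column here is invented for illustration):

```python
import pandas as pd

df = pd.DataFrame({"long title": [True, False, False, False]})

# normalize=True returns fractions; multiply by 100 for percentages
pct = df["long title"].value_counts(normalize=True) * 100
```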
###Code
###Output
_____no_output_____
###Markdown
Plotpandas documentation: [Visualization](https://pandas.pydata.org/pandas-docs/stable/visualization.html) Top 5 most frequent first letters
###Code
###Output
_____no_output_____
###Markdown
Histogram of title lengths, in characters
###Code
###Output
_____no_output_____
###Markdown
Assignment**Scrape** the talk descriptions. Hint: `soup.select('.presentation-description')`**Make** new columns in the dataframe:- description- description character count- description word count- description grade level (use [this `textstat` function](https://github.com/shivam5992/textstat#the-flesch-kincaid-grade-level) to get the Flesch-Kincaid grade level)**Describe** all the dataframe's columns. What's the average description word count? The minimum? The maximum?**Answer** these questions:- Which descriptions could fit in a tweet?- What's the distribution of grade levels? Plot a histogram.
###Code
url = 'https://us.pycon.org/2018/schedule/talks/list/'
import bs4
import requests
result = requests.get(url)
result
type(result)
result.text
# To retrieve top search result links
soup = bs4.BeautifulSoup(result.text)
soup
# Extracting talk descriptions
desc = soup.select('.presentation-description')
desc
# Extracting talk descriptions in a list with only texts
description = [tag.text.strip()
for tag in soup.select('.presentation-description')]
description
type(description), len(description)
description[0], description[2]
type(desc)
len(desc)
desc_first = desc [0]
desc_first
type(desc_first)
type(desc_first.text)
###Output
_____no_output_____
###Markdown
New Columns in the dataframe
###Code
import pandas as pd
pd.set_option('display.width', 1000) #to increase the column width
df = pd.DataFrame({'description': description})
df.head()
#df['title length'] = df.title.apply(len)
df['description character count'] = df.description.apply(len)
df.head()
# word count
!pip install textstat
#df['description word count'] =
import textstat
df['description word count'] = df.description.apply(textstat.lexicon_count)
df.head()
df['description grade level'] = df.description.apply(textstat.flesch_kincaid_grade)
df.head()
df.describe()
###Output
_____no_output_____
###Markdown
1. Average description word count is 119 2. Minimum = 35 3. Maximum = 436
###Code
# Descriptions that could fit in a tweet
df[df.description.str.len()<281]
title = 'Distribution of Description Grade Levels'
df['description grade level'].plot.hist(title = title);
###Output
_____no_output_____
###Markdown
_Lambda School Data Science_ Scrape and process dataObjectives- scrape and parse web pages- use list comprehensions- select rows and columns with pandasLinks- [Automate the Boring Stuff with Python, Chapter 11](https://automatetheboringstuff.com/chapter11/) - Requests - Beautiful Soup- [Python List Comprehensions: Explained Visually](https://treyhunner.com/2015/12/python-list-comprehensions-now-in-color/)- [Pandas Cheat Sheet](https://github.com/pandas-dev/pandas/blob/master/doc/cheatsheet/Pandas_Cheat_Sheet.pdf) - Subset Observations (Rows) - Subset Variables (Columns)- Python Data Science Handbook - [Chapter 3.1](https://jakevdp.github.io/PythonDataScienceHandbook/03.01-introducing-pandas-objects.html), Introducing Pandas Objects - [Chapter 3.2](https://jakevdp.github.io/PythonDataScienceHandbook/03.02-data-indexing-and-selection.html), Data Indexing and Selection Scrape the titles of PyCon 2018 talks
###Code
url = 'https://us.pycon.org/2018/schedule/talks/list/'
import bs4
import requests
result = requests.get(url)
type(result.text) #confirms str
result #successful
type(result)
soup = bs4.BeautifulSoup(result.text)
soup.select('h2')
#print(soup) #dont do this
type(soup.select('h2'))
len(soup.select('h2')) #95 talks
first = soup.select('h2')[0] #first talk
first
first.text #cleaner but has problems
first.text.strip() #defaults to strip white spaces and newline parse
last = soup.select('h2')[-1] #last talk
print(type(last)) #tag
print(type(soup.select('h2')[-1:])) #list, not useful
#our complete list of titles loop style
titles = []
for tag in soup.select('h2'):
    title = tag.text.strip()
    titles.append(title)
#list comp style
titles = [tag.text.strip() for tag in soup.select('h2')]
type(titles), len(titles)
titles[0], titles[-1]
###Output
_____no_output_____
###Markdown
5 ways to look at long titlesLet's define a long title as greater than 80 characters 1. For Loop
###Code
#nonfunctional compared to other methods
long_titles = []
for title in titles:
if len(title) > 80:
long_titles.append(title)
len(long_titles)
###Output
_____no_output_____
###Markdown
2. List Comprehension
###Code
long_titles = [title for title in titles if len(title) > 80]
len(long_titles)
###Output
_____no_output_____
###Markdown
3. Filter with named function
###Code
def long(title):
return len(title) > 80
long('Hello') #False
list(filter(long, titles)) #filter by itself is an object
###Output
_____no_output_____
###Markdown
4. Filter with anonymous function
###Code
list(filter(lambda t: len(t)>80, titles))
###Output
_____no_output_____
###Markdown
5. Pandaspandas documentation: [Working with Text Data](https://pandas.pydata.org/pandas-docs/stable/text.html)
###Code
import pandas as pd
pd.options.display.max_colwidth = 200
df = pd.DataFrame({'title': titles})
df[df.title.str.len() > 80]
condition = df.title.str.len() > 80 #calls a Series of booleans
df[condition]
df.title.str.len() #calls a Series of ints equal to str.len
###Output
_____no_output_____
###Markdown
Make new dataframe columnspandas documentation: [apply](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.apply.html) title length
###Code
df['title_length'] = df.title.apply(len)
df.head()
df.loc[df['title_length']>80, 'title_length']
###Output
_____no_output_____
###Markdown
long title
###Code
#alt
df['long_title'] = df['title_length'] > 80
df[df['long_title'] == True]
###Output
_____no_output_____
###Markdown
first letter
###Code
df['first_letter'] = df.title.str[0]
df.head()
df[df['first_letter'] == 'P']
###Output
_____no_output_____
###Markdown
word countUsing [`textstat`](https://github.com/shivam5992/textstat)
###Code
!pip install textstat
import textstat
first = df.title.values[0]
last = df.title.values[-1]
first, last
textstat.lexicon_count(first), textstat.lexicon_count(last)
df['title_word_count'] = df.title.apply(textstat.lexicon_count)
df[df['title_word_count'] <= 3]
###Output
_____no_output_____
###Markdown
Rename column`title length` --> `title character count`pandas documentation: [rename](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.rename.html)
###Code
df['title_character_count'] = df['title_length']
df.head()
#alt
df = df.rename(columns={'title_length': 'title_character_count'})
df.columns
###Output
_____no_output_____
###Markdown
Analyze the dataframe Describepandas documentation: [describe](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.describe.html)
###Code
import numpy as np
df.describe(exclude=np.number)
###Output
_____no_output_____
###Markdown
Sort valuespandas documentation: [sort_values](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.sort_values.html)
###Code
df.sort_values(by='title_character_count').head()
###Output
_____no_output_____
###Markdown
Five shortest titles, by character count
###Code
df['first_letter'] = df['first_letter'].str.upper()
df.sort_values(by='first_letter', ascending=False).head()
###Output
_____no_output_____
###Markdown
Titles sorted reverse alphabetically
###Code
###Output
_____no_output_____
###Markdown
Get value countspandas documentation: [value_counts](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.value_counts.html) Frequency counts of first letters
###Code
df['first_letter'].value_counts().sort_index()
df[df['first_letter']=='"']
###Output
_____no_output_____
###Markdown
Percentage of talks with long titles
###Code
df['long_title'].value_counts()/len(df)
#alt
df['long_title'].value_counts(normalize=True)*100
###Output
_____no_output_____
###Markdown
Plotpandas documentation: [Visualization](https://pandas.pydata.org/pandas-docs/stable/visualization.html) Top 5 most frequent first letters
###Code
%matplotlib inline
(df['first_letter'].value_counts().head()
.plot.barh(color='grey', title='Top 5 Most Frequent First Letters'));
#suppress subplot object with ;
###Output
_____no_output_____
###Markdown
Histogram of title lengths, in characters
###Code
title = 'Dist of Character Count'
df['title_character_count'].plot.hist(title=title);
###Output
_____no_output_____
###Markdown
Assignment**Scrape** the talk descriptions. Hint: `soup.select('.presentation-description')`**Make** new columns in the dataframe:- description- description character count- description word count- description grade level (use [this `textstat` function](https://github.com/shivam5992/textstat#the-flesch-kincaid-grade-level) to get the Flesch-Kincaid grade level)**Describe** all the dataframe's columns. What's the average description word count? The minimum? The maximum?**Answer** these questions:- Which descriptions could fit in a tweet?- What's the distribution of grade levels? Plot a histogram.
###Code
description = [tag.text.strip() for tag in soup.select('.presentation-description')]
df['description'] = description
df.head(2)
df['desc_character_count'] = df.description.apply(len)
df['desc_word_count'] = df.description.apply(textstat.lexicon_count)
df.head(2)
df['desc_gradelv'] = df.description.apply(textstat.flesch_kincaid_grade)
df.head(2)
import numpy as np
df.describe(exclude=np.number)
df.describe(include=np.number)
#min desc word count - 35
#max desc word count - 436
#mean desc word count - 134.6
df = df[df['desc_character_count'] <= 280]
df.head()
#df includes only descriptions that would be tweetable
%matplotlib inline
title = 'Distribution of Reading Levels'
ax = df['desc_gradelv'].plot.hist(title=title)
###Output
_____no_output_____ |
src/trash/PredatorStudy_ESCA.ipynb | ###Markdown
PREDATOR: **PRED**icting the imp**A**ct of cancer somatic mu**T**ations on pr**O**tein-protein inte**R**actions ESCA File LocationC:\Users\ibrah\Documents\GitHub\Predicting-Mutation-Effects\src File NamePredatorStudy_ESCA.ipynb Last EditedNovember 2nd, 2021 Purpose - [x] Apply on Cancer Datasets > ESCA* Target (Cancer) data: - *ESCA_Interface.txt*
###Code
# Common imports
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import os
import os.path as op
import sys
import random
from pathlib import Path
from pprint import pprint
from IPython.display import display
from tqdm.notebook import tqdm
from helpers.helpers_predator.displayers import (
display_label_counts,
display_labels,
visualize_label_counts,
display_data,
)
from helpers.helpers_predator.visualizers import (
visualize_sampled_train_datasets_label_counts
)
from helpers.helpers_predator.common import load_predator
from helpers.helpers_predator.common import export_data
# PATHS
ESCA_PATH = Path(r"../../My-ELASPIC-Web-API/Elaspic_Results/Merged_Results/ESCA_Interface_2021-11-02.txt")
PREDATOR_MODEL_PATH = Path(r"PredatorModels/PredatorModel_2021-10-24/04f37897/predator.pkl")
PREDICTIONS_DATASETS_FOLDER_PATH = "../data/predictions_datasets/"
# Reflect changes in the modules immediately.
%load_ext autoreload
%autoreload 2
###Output
_____no_output_____
###Markdown
Load the Predator
###Code
predator = load_predator(PREDATOR_MODEL_PATH)
###Output
2021-11-02 10:11:16 |[32m INFO [0m| helpers.helpers_predator.common | Predator object PredatorModels\PredatorModel_2021-10-24\04f37897\predator.pkl is loaded successfully.
###Markdown
Prediction TCGA on Cancer Dataset: ESCA
###Code
predator.initialize_target_data_materials(
tcga_code_path_pairs=[('esca', ESCA_PATH)]
)
###Output
2021-11-02 10:11:29 |[36m DEBUG [0m| helpers.helpers_predator.data_materials | Initialize `esca` ..
2021-11-02 10:11:29 |[36m DEBUG [0m| helpers.helpers_predator.data_materials | Initialize `target_esca_data` ..
2021-11-02 10:11:29 |[36m DEBUG [0m| helpers.helpers_predator.data_materials | Initializing target data materials ..
2021-11-02 10:11:29 |[36m DEBUG [0m| helpers.helpers_predator.data_materials | Determined features: ['Provean_score', 'EL2_score', 'Final_ddG', 'Interactor_alignment_score', 'Solvent_accessibility_wt', 'Matrix_score', 'Solvent_accessibility_mut', 'van_der_waals_mut', 'Interactor_template_sequence_identity', 'solvation_polar_wt']
2021-11-02 10:11:29 |[36m DEBUG [0m| helpers.helpers_predator.data_materials | Declaring Xs_esca data materials ..
###Markdown
TCGA Cancer Datasets ESCA
###Code
display_data(predator.data_materials["esca"])
###Output
[36mData dimensions: (2435, 103)[0m
###Markdown
Preprocessed TCGA Cancer Datasets ESCA
###Code
display_data(predator.data_materials["target_esca_data"])
###Output
[36mData dimensions: (2435, 61)[0m
###Markdown
Voting mode: `hard`
###Code
predator.predict(voting='hard')
# Predictions for first 10 experiment.
predator.predictions["esca"][:3]
predator.predictions.plot_predictions_distributions("esca")
###Output
2021-11-02 10:12:24 |[36m DEBUG [0m| helpers.helpers_predator.predictions | Initializing value counts ..
###Markdown
Predictions Post Processing Post processing of predictions involves following steps: 1. Merging Predictions with SNV Data The prediction column is merged with SNV data for each experiment.$\text{For each experiment } n: $$$ \textit{(Prediction Merged Data)}_n = \underbrace{[\textit{Predictions}_n]}_\text{0, 1 or "NoVote"} + \underbrace{[\textit{Protein }] [\textit{Mutation }] [\textit{Interactor }]}_\text{Cancer Data Triplets} + \underbrace{[\textit{Features }] }_\text{Elaspic}$$ 2. Convert to 1-isomer: `Interactor_UniProt_ID` $\textit{Interactor_UniProt_ID}$ column contains isomer proteins. Here, we convert them into primary isoform representation (i.e. without dashes). | Interactor_UniProt_ID |--------------| P38936 || P16473 || P16473-2 || P19793 | 3. Dropping Invalid Predictions Entries which predicted as both `Decreasing` and `Increasing+NoEff` are dropped. Due to having different features for the same $\textit{(protein, mutation, interactor)}$ triplet from ELASPIC, the triplet $\textit{(protein, mutation, interactor)}$ may be classified both 0 and 1. We drop such instances.
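Step 2 can be illustrated in isolation. A minimal, standalone sketch (not Predator's internal helper; the sample accessions are taken from the table above):

```python
import pandas as pd

ids = pd.Series(["P38936", "P16473", "P16473-2", "P19793"])

# Split on the isoform dash and keep the first piece, mapping every
# isoform back to its primary accession
primary = ids.str.split("-").str[0]
```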
###Code
predator.predictions_post_process()
display_data(predator.predictions["esca_predicted_datasets"][0])
predator.predictions.plot_distribution_valid_vs_invalid("esca")
predator.predictions.plot_num_finalized_predictions("esca")
predator.prepare_ensemble_prediction_data()
display_data(predator.predictions["esca_ensemble_prediction_data"])
display_data(predator.data_materials["esca"])
display_data(predator.data_materials["Xs_esca"][0])
predator.predictions.plot_ensemble_prediction_distribution("esca")
esca_prediction_results_hard = predator.predictions["esca_prediction_results"]
display_data(esca_prediction_results_hard)
esca_ensemble_prediction_data_hard = predator.predictions["esca_ensemble_prediction_data"]
esca_prediction_results_hard_no_votes_dropped = predator.predictions["esca_prediction_results_no_votes_dropped"]
display_data(esca_prediction_results_hard_no_votes_dropped)
visualize_label_counts(esca_prediction_results_hard_no_votes_dropped, 'Prediction')
###Output
[36mLabel counts:
Disrupting 591
Increasing + No Effect 582
Name: Prediction, dtype: int64[0m
###Markdown
Voting mode: `soft`
###Code
predator.initialize_target_data_materials(
tcga_code_path_pairs=[('esca', ESCA_PATH)]
)
predator.predict(voting='soft')
predator.predictions.keys()
# Predictions for first 10 experiment.
predator.predictions["esca_prob"][:3]
###Output
_____no_output_____
###Markdown
Predictions Post Processing Post processing of predictions involves following steps: 1. Merging Predictions with SNV Data The prediction column is merged with SNV data for each experiment.$\text{For each experiment } n: $$$ \textit{(Prediction Merged Data)}_n = \underbrace{[\textit{Predictions}_n]}_\text{Probs Percentages} + \underbrace{[\textit{Protein }] [\textit{Mutation }] [\textit{Interactor }]}_\text{Cancer Data Triplets} + \underbrace{[\textit{Features }] }_\text{Elaspic}$$ 2. Convert to 1-isomer: `Interactor_UniProt_ID` $\textit{Interactor_UniProt_ID}$ column contains isomer proteins. Here, we convert them into primary isoform representation (i.e. without dashes). | Interactor_UniProt_ID |--------------| P38936 || P16473 || P16473-2 || P19793 | 3. Dropping Invalid Predictions Entries whose predicted class-1 probability lies in both `Decreasing` and `Increasing+NoEff` are dropped. Due to having different features for the same $\textit{(protein, mutation, interactor)}$ triplet from ELASPIC, the triplet $\textit{(protein, mutation, interactor)}$ may contain class-1 probability prediction of both lower than 0.50 and higher than 50. We drop such instances.
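Step 3 can be sketched independently of Predator's helpers. A minimal, hypothetical example (column names invented for illustration): group the predicted class-1 probabilities by triplet and keep only triplets whose probabilities all fall on the same side of 0.50:

```python
import pandas as pd

df = pd.DataFrame({
    "UniProt_ID": ["P1", "P1", "P2"],
    "Mutation":   ["A5G", "A5G", "C9T"],
    "Interactor": ["Q1", "Q1", "Q2"],
    "prob_1":     [0.30, 0.70, 0.90],  # P1/A5G/Q1 straddles 0.5 -> invalid
})

# Which side of the 0.5 threshold each prediction falls on
side = df["prob_1"] > 0.5

# Per triplet, count how many distinct sides occur; 1 means consistent
n_sides = side.groupby(
    [df["UniProt_ID"], df["Mutation"], df["Interactor"]]
).transform("nunique")

valid = df[n_sides == 1]
```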
###Code
predator.predictions_post_process()
predator.predictions.keys()
display_data(predator.predictions["esca_predicted_probs_datasets"][0])
predator.predictions.plot_distribution_valid_vs_invalid("esca")
predator.predictions.plot_num_finalized_predictions("esca")
display_data(predator.predictions['esca_finalized_prediction_dataframes'][0])
predator.prepare_ensemble_prediction_data()
display_data(predator.predictions['esca_predictions_prob_data'])
predator.predictions.plot_ensemble_prediction_distribution("esca")
esca_prediction_results_soft = predator.predictions['esca_prediction_results']
display_data(esca_prediction_results_soft)
esca_prediction_results_soft_no_votes_dropped = predator.predictions["esca_prediction_results_no_votes_dropped"]
display_data(esca_prediction_results_soft_no_votes_dropped)
visualize_label_counts(esca_prediction_results_soft_no_votes_dropped, 'Prediction')
esca_ensemble_prediction_data_soft = predator.predictions["esca_ensemble_prediction_data"]
esca_predictions_prob_data_soft = predator.predictions["esca_predictions_prob_data"]
###Output
_____no_output_____
###Markdown
Exporting Predictions
###Code
# esca_prediction_results = esca_prediction_results_hard_no_votes_dropped
esca_prediction_results = esca_prediction_results_soft_no_votes_dropped
display_data(esca_prediction_results)
predator.export_prediction(
tcga="esca",
data=esca_prediction_results,
file_name="predictions",
folder_path=PREDICTIONS_DATASETS_FOLDER_PATH,
voting="soft",
overwrite=False,
file_extension='csv'
)
###Output
2021-11-06 13:16:05 |[36m DEBUG [0m| helpers.helpers_predator.common | Folder with ID 4be914c2 is created.
2021-11-06 13:16:05 |[36m DEBUG [0m| helpers.helpers_predator.common | Exporting data predictions at location ../data/predictions_datasets/ in folder esca_prediction_2021-11-06\4be914c2..
2021-11-06 13:16:06 |[32m INFO [0m| helpers.helpers_predator.common | ../data/predictions_datasets/esca_prediction_2021-11-06\4be914c2\predictions_soft_2021-11-06.csv is exported successfully.
2021-11-06 13:16:06 |[32m INFO [0m| helpers.helpers_predator.common | Config is exported.
|
.ipynb_checkpoints/functions-checkpoint.ipynb | ###Markdown
This notebook has all functions.Methods :Method 1 : Threshold-->Filter-->Erosion-->DilationMethod 2 : Filter-->Threshold-->Erosion-->Dilation
###Code
# imports needed by the functions below
import glob
import os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from natsort import natsorted
from scipy import ndimage as ndi
from skimage import io, filters, morphology, measure, color

# to save labelled images
BASE_DIR="/Users/Trupti/01-LIDo/02-VijiProject/ImageAnalysis/"
def save_img_method1(folder_path,img_name,iterator):
LABELLED_IMG_DIR = BASE_DIR + "AnalysisMethods/AnalysisResults/XMovie/labelled_images/"
directory=folder_path.split('/')[-1].split('.')[0] # to create a folder per experiment to save csvs
path = LABELLED_IMG_DIR + directory
try:
os.makedirs(path)
except FileExistsError:
# directory already exists
pass
plt.imsave((path + '/' +'label_image'+str(iterator)+'.png'),img_name,dpi=300)
def save_img_method2(folder_path,img_name,iterator):
LABELLED_IMG_DIR = BASE_DIR + "AnalysisMethods/AnalysisResults/AMovie/labelled_images/"
directory=folder_path.split('/')[-1].split('.')[0] # to create a folder per experiment to save csvs
path = LABELLED_IMG_DIR + directory
try:
os.makedirs(path)
except FileExistsError:
# directory already exists
pass
plt.imsave((path + '/' +'label_image'+str(iterator)+'.png'),img_name,dpi=300)
def cytoplasm_signal(img):
    '''
    This function takes an 8bit image, samples a 5x5 window at each of the
    four corners, and returns the mean pixel value as an estimate of the
    cytoplasmic background signal.
    '''
    rows, cols = img.shape
    topLeft = img[0:5, 0:5].flatten()
    topRight = img[0:5, cols-5:cols].flatten()
    bottomLeft = img[rows-5:rows, 0:5].flatten()
    bottomRight = img[rows-5:rows, cols-5:cols].flatten()
    mean_array = np.concatenate([topLeft, topRight, bottomLeft, bottomRight])
    mean = np.mean(mean_array)
    return(mean)
# remove the outliers
def outliers(df):
'''
This functions takes the dataframe as input and removes the outliers outside the first and third quartile range
'''
Q1 = df['intensity_ratio'].quantile(0.25)
Q3 = df['intensity_ratio'].quantile(0.75)
IQR = Q3 - Q1
df_out= df[~((df['intensity_ratio'] < (Q1 - 1.5 * IQR)) |(df['intensity_ratio'] > (Q3 + 1.5 * IQR)))]
return(df_out)
def prewitt_method1_BG(folder_path):
'''
This function takes the folder path of tif images and performs following steps.
1. Reads the image from the path
2. Converts the 16bit image to 8 bit
3. Prewitt Filter-->Yen Threshold-->Erosion-->dilation
For mean intensity calculation, the background noise needs to be filtered from the intensity image.
'''
df_green_final = pd.DataFrame(columns=['fname','label', 'area', 'eccentricity',
'perimeter','mean_intensity','bg_value_mask','bg_value_channel'])
df_red_final = pd.DataFrame(columns=['fname','label', 'area', 'eccentricity',
'perimeter','mean_intensity','bg_value_mask','bg_value_channel'])
# set path for images
red_chpath = os.path.join(folder_path,"pp1","*.tif") #C1 red channel
green_chpath = os.path.join(folder_path,"mask","*.tif") #C0 green channel
# create red channel image array
red_image=[]
for file in natsorted(glob.glob(red_chpath)):
red_image.append(file)
propList = ['label','area', 'eccentricity', 'perimeter', 'mean_intensity']
k=0
for file in natsorted(glob.glob(green_chpath)):
green_channel_image= io.imread(file) # This is to measure and label the particles
#Convert an (ImageJ) TIFF to an 8 bit numpy array
green_image= (green_channel_image / np.amax(green_channel_image) * 255).astype(np.uint8)
#Apply threshold
threshold = filters.threshold_yen(green_image)
#Generate thresholded image
threshold_image = green_image > threshold
# Apply prewitt filter to threshold image
prewitt_im= filters.prewitt(threshold_image)
#Apply erosion to the filtered image followed by dilation to the eroded image
erosion_im=morphology.binary_erosion(prewitt_im, selem=None, out=None)
dilation_im=morphology.binary_dilation(erosion_im, selem=None, out=None)
# label the final converted image
labelled_mask,num_labels = ndi.label(dilation_im)
#overlay onto channel image (red channel image)
red_channel_image = io.imread(red_image[k])
image_label_overlay = color.label2rgb(labelled_mask, image=red_channel_image,bg_label=0)
# #SAVE THE IMAGES : uncomment to save the images
save_img_method1(folder_path,labelled_mask,k)
#save_img(folder_path,green_image,k)
#save_img(folder_path,red_channel_image,k)
#save_img(folder_path,image_label_overlay,k)
#Calculate properties
##################################
# calculate background and subtract from intensity image
bg_value_green=cytoplasm_signal(green_channel_image) # mask image
bg_value_red=cytoplasm_signal(red_channel_image) # PP1 channel image
# subtract background
mod_green_channel_image = green_channel_image-bg_value_green
mod_red_channel_image= red_channel_image-bg_value_red
#Using regionprops or regionprops_table
all_props_green=measure.regionprops_table(labelled_mask, intensity_image=mod_green_channel_image,
properties=['label','area', 'eccentricity', 'perimeter', 'mean_intensity']) # intensity image is 16 bit green channel image
all_props_red=measure.regionprops_table(labelled_mask, intensity_image=mod_red_channel_image,
properties=['label','area', 'eccentricity', 'perimeter', 'mean_intensity']) # intensity image is 16 bit red channel image
df_green = pd.DataFrame(all_props_green)
df_green['fname']=file[-13:] # this is to shorten the filename. change this number as per the file name (check for better method)
df_red= pd.DataFrame(all_props_red)
df_red['fname']=red_image[k][-13:]
df_green['label']=str(k) +"_"+ df_green['label'].astype(str) # creates unique label which later helps to merge both dataframes
df_red['label']= str(k) +"_" + df_red['label'].astype(str)
        # assign the per-frame background before concatenating, so each
        # frame keeps its own value instead of being overwritten by the last
        df_green['bg_value_mask']=bg_value_green
        df_red['bg_value_channel']=bg_value_red
        df_green_final=pd.concat([df_green_final,df_green])
        df_red_final=pd.concat([df_red_final,df_red])
k+=1
return(df_green_final,df_red_final)
def prewitt_method1_noBG(folder_path):
'''
This function takes the folder path of tif images and performs following steps.
1. Reads the image from the path
2. Converts the 16bit image to 8 bit
3. Prewitt Filter-->Yen Threshold-->Erosion-->dilation
For mean intensity calculation,background is not removed for the mean intensity calculations
'''
df_green_final = pd.DataFrame(columns=['fname','label', 'area', 'eccentricity', 'perimeter',
'mean_intensity'])
df_red_final = pd.DataFrame(columns=['fname','label', 'area', 'eccentricity', 'perimeter',
'mean_intensity'])
# set path for images
red_chpath = os.path.join(folder_path,"pp1","*.tif") #C1 red channel
green_chpath = os.path.join(folder_path,"mask","*.tif") #C0 green channel
# create red channel image array
red_image=[]
for file in natsorted(glob.glob(red_chpath)):
red_image.append(file)
propList = ['label','area', 'eccentricity', 'perimeter', 'mean_intensity']
k=0
for file in natsorted(glob.glob(green_chpath)):
green_channel_image= io.imread(file) # This is to measure and label the particles
#Convert an (ImageJ) TIFF to an 8 bit numpy array
green_image= (green_channel_image / np.amax(green_channel_image) * 255).astype(np.uint8)
#Apply threshold
threshold = filters.threshold_yen(green_image)
#Generate thresholded image
threshold_image = green_image > threshold
# Apply prewitt filter to threshold image
prewitt_im= filters.prewitt(threshold_image)
#Apply erosion to the filtered image followed by dilation to the eroded image
erosion_im=morphology.binary_erosion(prewitt_im, selem=None, out=None)
dilation_im=morphology.binary_dilation(erosion_im, selem=None, out=None)
# label the final converted image
labelled_mask,num_labels = ndi.label(dilation_im)
#overlay onto channel image (red channel image)
red_channel_image = io.imread(red_image[k])
image_label_overlay = color.label2rgb(labelled_mask, image=red_channel_image,bg_label=0)
#Calculate properties
all_props_green=measure.regionprops_table(labelled_mask, intensity_image=green_channel_image,
properties=['label','area', 'eccentricity', 'perimeter', 'mean_intensity']) # intensity image is 16 bit green channel image
all_props_red=measure.regionprops_table(labelled_mask, intensity_image=red_channel_image,
properties=['label','area', 'eccentricity', 'perimeter', 'mean_intensity']) # intensity image is 16 bit red channel image
df_green = pd.DataFrame(all_props_green)
df_green['fname']=file[-13:] # this is to shorten the filename. change this number as per the file name (check for better method)
df_red= pd.DataFrame(all_props_red)
df_red['fname']=red_image[k][-13:]
df_green['label']=str(k) +"_"+ df_green['label'].astype(str) # creates unique label which later helps to merge both dataframes
df_red['label']= str(k) +"_" + df_red['label'].astype(str)
df_green_final=pd.concat([df_green_final,df_green])
df_red_final=pd.concat([df_red_final,df_red])
k+=1
return(df_green_final,df_red_final)
def prewitt_method2_BG(folder_path):
'''
This function takes the folder path of tif images and performs following steps.
1. Reads the image from the path
2. Converts the 16bit image to 8 bit
3. Prewitt Filter-->Yen Threshold-->Erosion-->dilation
For mean intensity calculation, the background noise needs to be filtered from the intensity image.
'''
df_green_final = pd.DataFrame(columns=['fname','label', 'area', 'eccentricity', 'perimeter',
'mean_intensity','bg_value_mask','bg_value_channel'])
df_red_final = pd.DataFrame(columns=['fname','label', 'area', 'eccentricity', 'perimeter',
'mean_intensity','bg_value_mask','bg_value_channel'])
# set path for images
red_chpath = os.path.join(folder_path,"channel","*.tif") #C1 red channel
green_chpath = os.path.join(folder_path,"mask","*.tif") #C0 green channel
# create red channel image array
red_image=[]
for file in natsorted(glob.glob(red_chpath)):
red_image.append(file)
propList = ['label','area', 'eccentricity', 'perimeter', 'mean_intensity']
k=0
for file in natsorted(glob.glob(green_chpath)):
green_channel_image= io.imread(file) # This is to measure and label the particles
#Convert an (ImageJ) TIFF to an 8 bit numpy array
green_image= (green_channel_image / np.amax(green_channel_image) * 255).astype(np.uint8)
#Apply filter
prewitt_im= filters.prewitt(green_image)
#apply threshold to filtered image
threshold = filters.threshold_yen(prewitt_im)
#Generate thresholded image
threshold_image = prewitt_im > threshold
#Apply erosion to the filtered image followed by dilation to the eroded image
erosion_im=morphology.binary_erosion(threshold_image, selem=None, out=None)
dilation_im=morphology.binary_dilation(erosion_im, selem=None, out=None)
# label the final converted image
labelled_mask,num_labels = ndi.label(dilation_im)
#overlay onto channel image (red channel image)
red_channel_image = io.imread(red_image[k])
image_label_overlay = color.label2rgb(labelled_mask, image=red_channel_image,bg_label=0)
#SAVE THE IMAGES : uncomment to save the images
save_img_method2(folder_path,labelled_mask,k)
#save_img(folder_path,green_image,k)
#save_img(folder_path,red_channel_image,k)
#save_img(folder_path,image_label_overlay,k)
#Calculate properties
##################################
# calculate background and subtract from intensity image
bg_value_green=cytoplasm_signal(green_channel_image) # mask image
bg_value_red=cytoplasm_signal(red_channel_image) # PP1 channel image
# subtract background
mod_green_channel_image = green_channel_image-bg_value_green
mod_red_channel_image= red_channel_image-bg_value_red
#Using regionprops or regionprops_table
        # 'bg_value_mask'/'bg_value_channel' are not regionprops properties;
        # they are added as plain dataframe columns after measurement
        all_props_green=measure.regionprops_table(labelled_mask, intensity_image=mod_green_channel_image,
                                                  properties=['label','area', 'eccentricity', 'perimeter', 'mean_intensity']) # intensity image is 16 bit green channel image
        all_props_red=measure.regionprops_table(labelled_mask, intensity_image=mod_red_channel_image,
                                                properties=['label','area', 'eccentricity', 'perimeter', 'mean_intensity']) # intensity image is 16 bit red channel image
df_green = pd.DataFrame(all_props_green)
df_green['fname']=file[-13:] # this is to shorten the filename. change this number as per the file name (check for better method)
df_red= pd.DataFrame(all_props_red)
df_red['fname']=red_image[k][-13:]
df_green['label']=str(k) +"_"+ df_green['label'].astype(str) # creates unique label which later helps to merge both dataframes
df_red['label']= str(k) +"_" + df_red['label'].astype(str)
df_green_final=pd.concat([df_green_final,df_green])
df_green_final['bg_value_mask']=bg_value_green
df_red_final=pd.concat([df_red_final,df_red])
df_red_final['bg_value_channel']=bg_value_red
k+=1
return(df_green_final,df_red_final)
def prewitt_method2_noBG(folder_path):
'''
This function takes the folder path of tif images and performs following steps.
1. Reads the image from the path
2. Converts the 16bit image to 8 bit
3. Prewitt Filter-->Yen Threshold-->Erosion-->dilation
Background is not removed for the mean intensity calculations
'''
df_green_final = pd.DataFrame(columns=['fname','label', 'area', 'eccentricity', 'perimeter','mean_intensity'])
df_red_final = pd.DataFrame(columns=['fname','label', 'area', 'eccentricity', 'perimeter','mean_intensity'])
# set path for images
red_chpath = os.path.join(folder_path,"channel","*.tif") #C1 red channel
green_chpath = os.path.join(folder_path,"mask","*.tif") #C0 green channel
# create red channel image array
red_image=[]
for file in natsorted(glob.glob(red_chpath)):
red_image.append(file)
propList = ['label','area', 'eccentricity', 'perimeter', 'mean_intensity']
k=0
for file in natsorted(glob.glob(green_chpath)):
green_channel_image= io.imread(file) # This is to measure and label the particles
#Convert an (ImageJ) TIFF to an 8 bit numpy array
green_image= (green_channel_image / np.amax(green_channel_image) * 255).astype(np.uint8)
#Apply filter
prewitt_im= filters.prewitt(green_image)
#apply threshold to filtered image
threshold = filters.threshold_yen(prewitt_im)
#Generate thresholded image
threshold_image = prewitt_im > threshold
#Apply erosion to the filtered image followed by dilation to the eroded image
erosion_im=morphology.binary_erosion(threshold_image, selem=None, out=None)
dilation_im=morphology.binary_dilation(erosion_im, selem=None, out=None)
# label the final converted image
labelled_mask,num_labels = ndi.label(dilation_im)
#overlay onto channel image (red channel image)
red_channel_image = io.imread(red_image[k])
image_label_overlay = color.label2rgb(labelled_mask, image=red_channel_image,bg_label=0)
#Calculate properties
all_props_green=measure.regionprops_table(labelled_mask, intensity_image=green_channel_image,
properties=['label','area', 'eccentricity', 'perimeter', 'mean_intensity']) # intensity image is 16 bit green channel image
all_props_red=measure.regionprops_table(labelled_mask, intensity_image=red_channel_image,
properties=['label','area', 'eccentricity', 'perimeter', 'mean_intensity']) # intensity image is 16 bit red channel image
df_green = pd.DataFrame(all_props_green)
df_green['fname']=file[-13:] # this is to shorten the filename. change this number as per the file name (check for better method)
df_red= pd.DataFrame(all_props_red)
df_red['fname']=red_image[k][-13:]
df_green['label']=str(k) +"_"+ df_green['label'].astype(str) # creates unique label which later helps to merge both dataframes
df_red['label']= str(k) +"_" + df_red['label'].astype(str)
df_green_final=pd.concat([df_green_final,df_green])
df_red_final=pd.concat([df_red_final,df_red])
k+=1
return(df_green_final,df_red_final)
###Output
_____no_output_____
###Markdown
Packages
###Code
import os
import pandas as pd
import seaborn as sns
from astropy.io import ascii
###Output
_____no_output_____
###Markdown
Functions
###Code
def read_data(folder_path):
filenames = os.listdir(folder_path)
for filename in filenames:
if(filename.endswith('.tbl')):
first_file = filename
break
df_data = ascii.read(folder_path + first_file).to_pandas()
for filename in filenames:
if(filename.endswith('.tbl') and not filename == first_file):
data = ascii.read(folder_path + filename).to_pandas()
df_data = pd.concat([df_data, data]) # DataFrame.append was removed in pandas 2.0
return df_data
def plot_data(dataframe):
pass # TODO: plotting not implemented
data = read_data('datasets/time-curves/2301590/')
###Output
_____no_output_____ |
04 - Clustering(KR).ipynb | ###Markdown
Clustering
Unlike *supervised* learning, *unsupervised* learning is used when there are no "ground truth" labels available to train and validate label predictions. The most common form of unsupervised learning is *clustering*, which is conceptually similar to *classification*, except that the training data does not include known values for the class label to be predicted. Clustering works by separating the training cases based on similarities that can be determined from their feature values. Think of it this way: the numeric features of a given entity can be thought of as vector coordinates that define the entity's position in n-dimensional space. What a clustering model seeks to do is identify groups, or *clusters*, of entities that are close to one another while being separated from other clusters.
For example, let's take a look at a dataset that contains measurements of different varieties of wheat seed.
> **Citation**: The seeds dataset used in this lab was originally published by the Institute of Agrophysics of the Polish Academy of Sciences in Lublin, and can be downloaded from the UCI Data Repository (Dua, D. and Graff, C. (2019). UCI Machine Learning Repository [http://archive.ics.uci.edu/ml]. Irvine, CA: University of California, School of Information and Computer Science).
###Code
import pandas as pd
# load the training dataset
data = pd.read_csv('data/seeds.csv')
# Display a random sample of 10 observations (just the features)
features = data[data.columns[0:6]]
features.sample(10)
###Output
_____no_output_____
###Markdown
As you can see, the dataset contains six data points (or *features*) for each instance (*observation*) of a seed. So you can interpret these as coordinates that describe each instance's position in six-dimensional space. Of course, six-dimensional space is difficult to visualize in three dimensions or on a two-dimensional plot, so we'll take advantage of a mathematical technique called *Principal Component Analysis* (PCA) to analyze the relationships between the features and summarize each observation as coordinates for two principal components. In other words, we'll translate the six-dimensional feature values into two-dimensional coordinates so we can represent them in two dimensions.
###Code
from sklearn.preprocessing import MinMaxScaler
from sklearn.decomposition import PCA
# Normalize the numeric features so they're on the same scale
scaled_features = MinMaxScaler().fit_transform(features[data.columns[0:6]])
# Get two principal components
pca = PCA(n_components=2).fit(scaled_features)
features_2d = pca.transform(scaled_features)
features_2d[0:10]
###Output
_____no_output_____
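###Markdown
As a quick check on how much information the two components retain, a fitted PCA object exposes `explained_variance_ratio_`. Here is a minimal, self-contained sketch on synthetic data (the array `X` below is a stand-in for illustration, not the seeds features):

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 6))         # stand-in for the 6 scaled seed features
X[:, 1] = 2.0 * X[:, 0]               # make one direction dominate the variance
pca = PCA(n_components=2).fit(X)
print(pca.explained_variance_ratio_)  # fraction of total variance per component
```

Applied to the scaled seed features above, the same attribute tells you how faithful the two-dimensional picture is to the original data.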
###Markdown
Now that we have the data points translated to two dimensions, we can visualize them in a plot.
###Code
import matplotlib.pyplot as plt
%matplotlib inline
plt.scatter(features_2d[:,0],features_2d[:,1])
plt.xlabel('Dimension 1')
plt.ylabel('Dimension 2')
plt.title('Data')
plt.show()
###Output
_____no_output_____
###Markdown
Hopefully you can see at least two, arguably three, reasonably distinct groups of data points. But here lies one of the fundamental problems with clustering: without known class labels, how do you know how many clusters to separate your data into? One way to find out is to use a data sample to create a series of clustering models with an incrementing number of clusters, and measure how tightly the data points are grouped within each cluster. A metric often used to measure this tightness is the *within-cluster sum of squares* (WCSS), with lower values meaning that the data points are closer together. You can then visualize the WCSS for each model.
###Code
#importing the libraries
import numpy as np
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans
%matplotlib inline
# Create 10 models with 1 to 10 clusters
wcss = []
for i in range(1, 11):
kmeans = KMeans(n_clusters = i)
# Fit the data points
kmeans.fit(features.values)
# Get the WCSS (inertia) value
wcss.append(kmeans.inertia_)
#Plot the WCSS values onto a line graph
plt.plot(range(1, 11), wcss)
plt.title('WCSS by Clusters')
plt.xlabel('Number of clusters')
plt.ylabel('WCSS')
plt.show()
###Output
_____no_output_____
###Markdown
The plot shows a large reduction in WCSS (so greater *tightness*) as the number of clusters increases from one to two, and a further noticeable reduction from two to three clusters. After that, the reduction is less pronounced, resulting in an "elbow" in the chart at around three clusters. This is a good indication that there are two to three reasonably well-separated clusters of data points.
K-Means Clustering
The algorithm we used to create our test clusters is *K-means*. This is a commonly used clustering algorithm that separates a dataset into *K* clusters of equal variance. The number of clusters, *K*, is user-defined. The basic algorithm has the following steps:
1. A set of K centroids is randomly chosen.
2. Clusters are formed by assigning the data points to their closest centroid.
3. The mean of each cluster is computed and the centroid is moved to the mean.
4. Steps 2 and 3 are repeated until a stopping criterion is met. Typically, the algorithm terminates when each new iteration results in negligible movement of centroids and the clusters become static.
5. When the clusters stop changing, the algorithm has *converged*, defining the locations of the clusters. Note that the random starting point for the centroids means that re-running the algorithm could result in slightly different clusters, so training typically involves multiple iterations, reinitializing the centroids each time, and the model with the best WCSS is selected.
Let's try using K-Means on our data with a K value of 3.
###Code
from sklearn.cluster import KMeans
# Create a model based on 3 centroids
model = KMeans(n_clusters=3, init='k-means++', n_init=100, max_iter=1000)
# Fit to the data and predict the cluster assignments for each data point
km_clusters = model.fit_predict(features.values)
# View the cluster assignments
km_clusters
###Output
_____no_output_____
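###Markdown
For intuition, the iterative steps described above (random centroids → assign → re-center → repeat) can be sketched from scratch in NumPy. This is a simplified illustration, not the scikit-learn implementation: it uses a single random initialization and ignores the empty-cluster edge case.

```python
import numpy as np

def kmeans(X, k, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    # Step 1: pick K distinct data points as the initial centroids
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # Step 2: assign every point to its closest centroid
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Step 3: move each centroid to the mean of its cluster
        new_centroids = np.array([X[labels == j].mean(axis=0) for j in range(k)])
        # Steps 4-5: stop once the centroids no longer move (convergence)
        if np.allclose(new_centroids, centroids):
            break
        centroids = new_centroids
    return labels, centroids
```

Calling `kmeans(features.values, 3)` would reproduce the general behavior of the model above, though the cluster numbering can differ between runs because of the random initialization.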
###Markdown
Let's look at the cluster assignments with the two-dimensional data points.
###Code
def plot_clusters(samples, clusters):
col_dic = {0:'blue',1:'green',2:'orange'}
mrk_dic = {0:'*',1:'x',2:'+'}
colors = [col_dic[x] for x in clusters]
markers = [mrk_dic[x] for x in clusters]
for sample in range(len(clusters)):
plt.scatter(samples[sample][0], samples[sample][1], color = colors[sample], marker=markers[sample], s=100)
plt.xlabel('Dimension 1')
plt.ylabel('Dimension 2')
plt.title('Assignments')
plt.show()
plot_clusters(features_2d, km_clusters)
###Output
_____no_output_____
###Markdown
Hopefully, the data has been separated into three distinct clusters.
So what is the practical use of clustering? In some cases, you have data that you need to group into distinct clusters without knowing how many clusters there are or what they represent. For example, a marketing organization might want to separate customers into distinct segments, and then investigate how those segments exhibit different purchasing behaviors.
Sometimes clustering is used as an initial step towards creating a classification model. You start by identifying distinct groups of data points, and then assign class labels to those clusters. You can then use this labeled data to train a classification model with supervised learning.
In the case of the seeds data, the different species of seed are already known and encoded as 0 (*Kama*), 1 (*Rosa*), or 2 (*Canadian*), so we can use these identifiers to compare the species classifications to the clusters identified by our unsupervised algorithm.
###Code
seed_species = data[data.columns[7]]
plot_clusters(features_2d, seed_species.values)
###Output
_____no_output_____
###Markdown
There may be some differences between the cluster assignments and the class labels, but the K-Means model should have done a reasonable job of clustering the observations, so that seeds of the same species are generally in the same cluster. Hierarchical Clustering
Hierarchical clustering methods make fewer distributional assumptions when compared to K-means methods. However, K-means methods are generally more scalable, sometimes very much so.
Hierarchical clustering creates clusters by either a *divisive* method or an *agglomerative* method. The divisive method is a "top down" approach, starting with the entire dataset and then finding partitions in a stepwise manner. Agglomerative clustering is a "bottom up" approach. In this lab you will work with agglomerative clustering, which roughly works as follows:
1. The linkage distances between each of the data points are computed.
2. Points are clustered pairwise with their nearest neighbor.
3. Linkage distances between the clusters are computed.
4. Clusters are combined pairwise into larger clusters.
5. Steps 3 and 4 are repeated until all data points are in a single cluster.
The linkage function can be computed in a number of ways:
- Ward linkage measures the increase in variance for the clusters being linked.
- Average linkage uses the mean pairwise distance between the members of the two clusters.
- Complete or maximal linkage uses the maximum distance between the members of the two clusters.
Several different distance metrics are used to compute linkage functions:
- Euclidean or l2 distance is the most widely used. This is the only metric for the Ward linkage method.
- Manhattan or l1 distance is robust to outliers and has other interesting properties.
- Cosine similarity is the dot product between the location vectors divided by the magnitudes of the vectors. Note that this metric is a measure of similarity, whereas the other two metrics are measures of difference. Similarity can be quite useful when working with data such as images or text documents. Agglomerative Clustering
Let's see an example of clustering the seeds data using an agglomerative clustering algorithm.
###Code
from sklearn.cluster import AgglomerativeClustering
agg_model = AgglomerativeClustering(n_clusters=3)
agg_clusters = agg_model.fit_predict(features.values)
agg_clusters
###Output
_____no_output_____
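###Markdown
The pairwise merge process described above can also be inspected directly with SciPy's hierarchy module. Here is a small self-contained illustration on four toy points (separate from the seeds data): each row of the linkage matrix `Z` records one merge — the two clusters joined, the linkage distance, and the new cluster's size.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Four toy points: two tight pairs far apart
pts = np.array([[0.0, 0.0], [0.0, 0.1], [5.0, 5.0], [5.0, 5.1]])

Z = linkage(pts, method='ward')   # Ward linkage (Euclidean distances)
print(Z)

# Cut the merge tree into 2 flat clusters
labels = fcluster(Z, t=2, criterion='maxclust')
print(labels)
```

Swapping `method='ward'` for `'average'` or `'complete'` applies the alternative linkage functions described above.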
###Markdown
So what do the agglomerative cluster assignments look like?
###Code
import matplotlib.pyplot as plt
%matplotlib inline
def plot_clusters(samples, clusters):
col_dic = {0:'blue',1:'green',2:'orange'}
mrk_dic = {0:'*',1:'x',2:'+'}
colors = [col_dic[x] for x in clusters]
markers = [mrk_dic[x] for x in clusters]
for sample in range(len(clusters)):
plt.scatter(samples[sample][0], samples[sample][1], color = colors[sample], marker=markers[sample], s=100)
plt.xlabel('Dimension 1')
plt.ylabel('Dimension 2')
plt.title('Assignments')
plt.show()
plot_clusters(features_2d, agg_clusters)
###Output
_____no_output_____ |
SPL3/chapter3/unigram.ipynb | ###Markdown
###Code
from collections import Counter
from random import choices
class corpus():
def __init__(self, corpus):
self.word_list = []
self.bigram_counter = {}
for sent in corpus:
words = sent.split() #the only prepocessing method I use for now.
self.word_list += words
#count bigram
for i in range(len(words) - 1):
if (words[i], words[i + 1]) not in self.bigram_counter:
self.bigram_counter[(words[i], words[i + 1])] = 1
else:
self.bigram_counter[(words[i], words[i + 1])] += 1
self.unigram_counter = Counter(self.word_list)
def count_unigram(self):
unigram_prob = {}
denominator = sum(self.unigram_counter.values())
for key in self.unigram_counter:
unigram_prob[key] = self.unigram_counter[key] / denominator
return unigram_prob
def count_bigram(self):
bigram_prob = {}
for prefix in self.unigram_counter:
relative_dict = {}
for next_word in self.unigram_counter:
if(prefix, next_word) in self.bigram_counter:
relative_dict[(prefix, next_word)] = self.bigram_counter[(prefix, next_word)]
denominator = sum(relative_dict.values())
for key in relative_dict:
relative_dict[key] /= denominator
#merge two dict
bigram_prob = {**bigram_prob, **relative_dict}
return bigram_prob
def generate(self, max_len=10):
start = '<s>'
end = '</s>'
sent = [start]
while sent[-1] != end and len(sent) < max_len:
prefix = sent[-1]
#calculate relative frequency
relative_dict = {}
for key in self.unigram_counter:
if (prefix, key) in self.bigram_counter:
relative_dict[(prefix, key)] = self.bigram_counter[(prefix, key)]
denominator = sum(relative_dict.values())
for key in relative_dict:
relative_dict[key] /= denominator
#generate a sample
next_bigram = choices(list(relative_dict.keys()), list(relative_dict.values()))
#update status
sent.append(next_bigram[0][-1])
return sent if sent[-1] == end else sent + [end]
def compute_ppl(self, text):
# perplexity = P(w1..wn)^(-1/N), where N = n - 1 is the number of bigrams
text = text.split()
bigram_prob = self.count_bigram()
ppl = 1
for i in range(len(text) - 1): # cover every bigram, including the last one
w1, w2 = text[i], text[i+1]
ppl *= bigram_prob[(w1, w2)]
return pow(ppl, -1/(len(text) - 1))
test_corpus = ['<s> I am Sam </s>',
'<s> Sam I am </s>',
'<s> I am Sam </s>',
'<s> I do not like green eggs and Sam </s>']
my_corpus = corpus(test_corpus)
my_corpus.count_unigram()
my_corpus.count_bigram()
my_corpus.generate()
my_corpus.compute_ppl('<s> I am Sam </s>')
###Output
_____no_output_____ |
workshops/tfx-caip-tf23/lab-04-tfx-metadata/labs/lab-04.ipynb | ###Markdown
Inspecting TFX metadata Learning Objectives1. Use a GRPC server to access and analyze pipeline artifacts stored in the ML Metadata service of your AI Platform Pipelines instance.In this lab, you will explore TFX pipeline metadata including pipeline and run artifacts. A hosted **AI Platform Pipelines** instance includes the [ML Metadata](https://github.com/google/ml-metadata) service. In **AI Platform Pipelines**, ML Metadata uses *MySQL* as a database backend and can be accessed using a GRPC server. Setup
###Code
import os
import json
import ml_metadata
import tensorflow_data_validation as tfdv
import tensorflow_model_analysis as tfma
from ml_metadata.metadata_store import metadata_store
from ml_metadata.proto import metadata_store_pb2
from tfx.orchestration import metadata
from tfx.types import standard_artifacts
from tensorflow.python.lib.io import file_io
!python -c "import tfx; print('TFX version: {}'.format(tfx.__version__))"
!python -c "import kfp; print('KFP version: {}'.format(kfp.__version__))"
###Output
_____no_output_____
###Markdown
Option 1: Explore metadata from existing TFX pipeline runs from AI Pipelines instance created in `lab-02` or `lab-03`. 1.1 Configure Kubernetes port forwardingTo enable access to the ML Metadata GRPC server, configure Kubernetes port forwarding.From a JupyterLab terminal, execute the following commands:```gcloud container clusters get-credentials [YOUR CLUSTER] --zone [YOUR CLUSTER ZONE] kubectl port-forward service/metadata-grpc-service --namespace [YOUR NAMESPACE] 7000:8080``` Proceed to the next step, "Connecting to ML Metadata". Option 2: Create new AI Pipelines instance and evaluate metadata on newly triggered pipeline runs.Hosted AI Pipelines incurs cost for the duration your Kubernetes cluster is running. If you deleted your previous lab instance, proceed with the 6 steps below to deploy a new TFX pipeline and trigger runs to inspect its metadata.
###Code
import yaml
# Set `PATH` to include the directory containing TFX CLI.
PATH=%env PATH
%env PATH=/home/jupyter/.local/bin:{PATH}
###Output
_____no_output_____
###Markdown
The pipeline source can be found in the `pipeline` folder. Switch to the `pipeline` folder and compile the pipeline.
###Code
%cd pipeline
###Output
_____no_output_____
###Markdown
2.1 Create AI Platform Pipelines clusterNavigate to [AI Platform Pipelines](https://console.cloud.google.com/ai-platform/pipelines/clusters) page in the Google Cloud Console.Create or select an existing Kubernetes cluster (GKE) and deploy AI Platform. Make sure to select `"Allow access to the following Cloud APIs https://www.googleapis.com/auth/cloud-platform"` to allow for programmatic access to your pipeline by the Kubeflow SDK for the rest of the lab. Also, provide an `App instance name` such as "TFX-lab-04". 2.2 Configure environment settings Update the below constants with the settings reflecting your lab environment.- `GCP_REGION` - the compute region for AI Platform Training and Prediction- `ARTIFACT_STORE` - the GCS bucket created during installation of AI Platform Pipelines. The bucket name starts with the `kubeflowpipelines-` prefix. Alternatively, you can specify create a new storage bucket to write pipeline artifacts to.
###Code
!gsutil ls
###Output
_____no_output_____
###Markdown
* `CUSTOM_SERVICE_ACCOUNT` - In the GCP console, click on the Navigation Menu. Navigate to `IAM & Admin`, then to `Service Accounts` and use the service account starting with the prefix `'tfx-tuner-caip-service-account'`. This enables CloudTuner and the Google Cloud AI Platform extensions Tuner component to work together and allows for distributed and parallel tuning backed by AI Platform Vizier's hyperparameter search algorithm. Please see the lab setup `README` for setup instructions. - `ENDPOINT` - set the `ENDPOINT` constant to the endpoint of your AI Platform Pipelines instance. The endpoint to the AI Platform Pipelines instance can be found on the [AI Platform Pipelines](https://console.cloud.google.com/ai-platform/pipelines/clusters) page in the Google Cloud Console.1. Open the *SETTINGS* for your instance2. Use the value of the `host` variable in the *Connect to this Kubeflow Pipelines instance from a Python client via Kubeflow Pipelines SDK* section of the *SETTINGS* window.
###Code
#TODO: Set your environment resource settings here for GCP_REGION, ARTIFACT_STORE_URI, ENDPOINT, and CUSTOM_SERVICE_ACCOUNT.
GCP_REGION = 'us-central1'
ARTIFACT_STORE_URI = 'gs://dougkelly-sandbox-kubeflowpipelines-default'
ENDPOINT = '60ff837483ecde05-dot-us-central2.pipelines.googleusercontent.com'
CUSTOM_SERVICE_ACCOUNT = 'tfx-tuner-caip-service-account@dougkelly-sandbox.iam.gserviceaccount.com'
PROJECT_ID = !(gcloud config get-value core/project)
PROJECT_ID = PROJECT_ID[0]
# Set your resource settings as environment variables. These override the default values in pipeline/config.py.
%env GCP_REGION={GCP_REGION}
%env ARTIFACT_STORE_URI={ARTIFACT_STORE_URI}
%env CUSTOM_SERVICE_ACCOUNT={CUSTOM_SERVICE_ACCOUNT}
%env PROJECT_ID={PROJECT_ID}
###Output
_____no_output_____
###Markdown
2.3 Compile pipeline
###Code
PIPELINE_NAME = 'tfx_covertype_lab_04'
MODEL_NAME = 'tfx_covertype_classifier'
DATA_ROOT_URI = 'gs://workshop-datasets/covertype/small'
CUSTOM_TFX_IMAGE = 'gcr.io/{}/{}'.format(PROJECT_ID, PIPELINE_NAME)
RUNTIME_VERSION = '2.3'
PYTHON_VERSION = '3.7'
USE_KFP_SA=False
ENABLE_TUNING=False
%env PIPELINE_NAME={PIPELINE_NAME}
%env MODEL_NAME={MODEL_NAME}
%env DATA_ROOT_URI={DATA_ROOT_URI}
%env KUBEFLOW_TFX_IMAGE={CUSTOM_TFX_IMAGE}
%env RUNTIME_VERSION={RUNTIME_VERSION}
%env PYTHON_VERIONS={PYTHON_VERSION}
%env USE_KFP_SA={USE_KFP_SA}
%env ENABLE_TUNING={ENABLE_TUNING}
!tfx pipeline compile --engine kubeflow --pipeline_path runner.py
###Output
_____no_output_____
###Markdown
2.4 Deploy pipeline to AI Platform
###Code
!tfx pipeline create \
--pipeline_path=runner.py \
--endpoint={ENDPOINT} \
--build_target_image={CUSTOM_TFX_IMAGE}
###Output
_____no_output_____
###Markdown
(optional) If you make local changes to the pipeline, you can update the deployed package on AI Platform with the following command:
###Code
!tfx pipeline update --pipeline_path runner.py --endpoint {ENDPOINT}
###Output
_____no_output_____
###Markdown
2.5 Create and monitor pipeline run
###Code
!tfx run create --pipeline_name={PIPELINE_NAME} --endpoint={ENDPOINT}
###Output
_____no_output_____
###Markdown
2.6 Configure Kubernetes port forwarding To enable access to the ML Metadata GRPC server, configure Kubernetes port forwarding.From a JupyterLab terminal, execute the following commands:```gcloud container clusters get-credentials [YOUR CLUSTER] --zone [YOUR CLUSTER ZONE] kubectl port-forward service/metadata-grpc-service --namespace [YOUR NAMESPACE] 7000:8080``` Connecting to ML Metadata Configure ML Metadata GRPC client
###Code
grpc_host = 'localhost'
grpc_port = 7000
connection_config = metadata_store_pb2.MetadataStoreClientConfig()
connection_config.host = grpc_host
connection_config.port = grpc_port
###Output
_____no_output_____
###Markdown
Connect to ML Metadata service
###Code
store = metadata_store.MetadataStore(connection_config)
###Output
_____no_output_____
###Markdown
ImportantA full pipeline run without tuning takes about 40-45 minutes to complete. You need to wait until a pipeline run is complete before proceeding with the steps below. Exploring ML Metadata The Metadata Store uses the following data model:- `ArtifactType` describes an artifact's type and its properties that are stored in the Metadata Store. These types can be registered on-the-fly with the Metadata Store in code, or they can be loaded in the store from a serialized format. Once a type is registered, its definition is available throughout the lifetime of the store.- `Artifact` describes a specific instance of an ArtifactType, and its properties that are written to the Metadata Store.- `ExecutionType` describes a type of component or step in a workflow, and its runtime parameters.- `Execution` is a record of a component run or a step in an ML workflow and the runtime parameters. An Execution can be thought of as an instance of an ExecutionType. Every time a developer runs an ML pipeline or step, executions are recorded for each step.- `Event` is a record of the relationship between an Artifact and Executions. When an Execution happens, Events record every Artifact that was used by the Execution, and every Artifact that was produced. These records allow for provenance tracking throughout a workflow. By looking at all Events MLMD knows what Executions happened, what Artifacts were created as a result, and can recurse back from any Artifact to all of its upstream inputs.- `ContextType` describes a type of conceptual group of Artifacts and Executions in a workflow, and its structural properties. For example: projects, pipeline runs, experiments, owners.- `Context` is an instance of a ContextType. It captures the shared information within the group. For example: project name, changelist commit id, experiment annotations. 
It has a user-defined unique name within its ContextType.- `Attribution` is a record of the relationship between Artifacts and Contexts.- `Association` is a record of the relationship between Executions and Contexts. List the registered artifact types.
###Code
for artifact_type in store.get_artifact_types():
print(artifact_type.name)
###Output
_____no_output_____
###Markdown
Display the registered execution types.
###Code
for execution_type in store.get_execution_types():
print(execution_type.name)
###Output
_____no_output_____
###Markdown
List the registered context types.
###Code
for context_type in store.get_context_types():
print(context_type.name)
###Output
_____no_output_____
###Markdown
Visualizing TFX artifacts Retrieve data analysis and validation artifacts
###Code
with metadata.Metadata(connection_config) as store:
schema_artifacts = store.get_artifacts_by_type(standard_artifacts.Schema.TYPE_NAME)
stats_artifacts = store.get_artifacts_by_type(standard_artifacts.ExampleStatistics.TYPE_NAME)
anomalies_artifacts = store.get_artifacts_by_type(standard_artifacts.ExampleAnomalies.TYPE_NAME)
schema_file = os.path.join(schema_artifacts[-1].uri, 'schema.pbtxt')
print("Generated schema file:{}".format(schema_file))
stats_path = stats_artifacts[-1].uri
train_stats_file = os.path.join(stats_path, 'train', 'stats_tfrecord')
eval_stats_file = os.path.join(stats_path, 'eval', 'stats_tfrecord')
print("Train stats file:{}, Eval stats file:{}".format(
train_stats_file, eval_stats_file))
anomalies_path = anomalies_artifacts[-1].uri
train_anomalies_file = os.path.join(anomalies_path, 'train', 'anomalies.pbtxt')
eval_anomalies_file = os.path.join(anomalies_path, 'eval', 'anomalies.pbtxt')
print("Train anomalies file:{}, Eval anomalies file:{}".format(
train_anomalies_file, eval_anomalies_file))
###Output
_____no_output_____
###Markdown
Visualize schema
###Code
schema = tfdv.load_schema_text(schema_file)
tfdv.display_schema(schema=schema)
###Output
_____no_output_____
###Markdown
Visualize statistics Exercise: looking at the features visualized below, answer the following questions:- Which feature transformations would you apply to each feature with TF Transform?- Are there data quality issues with certain features that may impact your model performance? How might you deal with it?
###Code
train_stats = tfdv.load_statistics(train_stats_file)
eval_stats = tfdv.load_statistics(eval_stats_file)
tfdv.visualize_statistics(lhs_statistics=eval_stats, rhs_statistics=train_stats,
lhs_name='EVAL_DATASET', rhs_name='TRAIN_DATASET')
###Output
_____no_output_____
###Markdown
Visualize anomalies
###Code
train_anomalies = tfdv.load_anomalies_text(train_anomalies_file)
tfdv.display_anomalies(train_anomalies)
eval_anomalies = tfdv.load_anomalies_text(eval_anomalies_file)
tfdv.display_anomalies(eval_anomalies)
###Output
_____no_output_____
###Markdown
Retrieve model artifacts
###Code
with metadata.Metadata(connection_config) as store:
model_eval_artifacts = store.get_artifacts_by_type(standard_artifacts.ModelEvaluation.TYPE_NAME)
hyperparam_artifacts = store.get_artifacts_by_type(standard_artifacts.HyperParameters.TYPE_NAME)
model_eval_path = model_eval_artifacts[-1].uri
print("Generated model evaluation result:{}".format(model_eval_path))
best_hparams_path = os.path.join(hyperparam_artifacts[-1].uri, 'best_hyperparameters.txt')
print("Generated model best hyperparameters result:{}".format(best_hparams_path))
###Output
_____no_output_____
###Markdown
Return best hyperparameters
###Code
# Latest pipeline run Tuner search space.
json.loads(file_io.read_file_to_string(best_hparams_path))['space']
# Latest pipeline run Tuner searched best_hyperparameters artifacts.
json.loads(file_io.read_file_to_string(best_hparams_path))['values']
###Output
_____no_output_____
###Markdown
Visualize model evaluations Exercise: review the model evaluation results below and answer the following questions:- Which Wilderness Area had the highest accuracy?- Which Wilderness Area had the lowest performance? Why do you think that is? What are some steps you could take to improve your next model runs?
###Code
eval_result = tfma.load_eval_result(model_eval_path)
tfma.view.render_slicing_metrics(
eval_result, slicing_column='Wilderness_Area')
###Output
_____no_output_____
###Markdown
Inspecting TFX metadata Learning Objectives1. Use a GRPC server to access and analyze pipeline artifacts stored in the ML Metadata service of your AI Platform Pipelines instance.In this lab, you will explore TFX pipeline metadata including pipeline and run artifacts. A hosted **AI Platform Pipelines** instance includes the [ML Metadata](https://github.com/google/ml-metadata) service. In **AI Platform Pipelines**, ML Metadata uses *MySQL* as a database backend and can be accessed using a GRPC server. Setup
###Code
import os
import json
import ml_metadata
import tensorflow_data_validation as tfdv
import tensorflow_model_analysis as tfma
from ml_metadata.metadata_store import metadata_store
from ml_metadata.proto import metadata_store_pb2
from tfx.orchestration import metadata
from tfx.types import standard_artifacts
from tensorflow.python.lib.io import file_io
!python -c "import tfx; print('TFX version: {}'.format(tfx.__version__))"
!python -c "import kfp; print('KFP version: {}'.format(kfp.__version__))"
###Output
_____no_output_____
###Markdown
Option 1: Explore metadata from existing TFX pipeline runs from AI Pipelines instance created in `lab-02` or `lab-03`. 1.1 Configure Kubernetes port forwardingTo enable access to the ML Metadata GRPC server, configure Kubernetes port forwarding.From a JupyterLab terminal, execute the following commands:```gcloud container clusters get-credentials [YOUR CLUSTER] --zone [YOUR CLUSTER ZONE] kubectl port-forward service/metadata-grpc-service --namespace [YOUR NAMESPACE] 7000:8080``` Proceed to the next step, "Connecting to ML Metadata". Option 2: Create new AI Pipelines instance and evaluate metadata on newly triggered pipeline runs.Hosted AI Pipelines incurs cost for the duration your Kubernetes cluster is running. If you deleted your previous lab instance, proceed with the 6 steps below to deploy a new TFX pipeline and trigger runs to inspect its metadata.
###Code
import yaml
# Set `PATH` to include the directory containing TFX CLI.
PATH=%env PATH
%env PATH=/home/jupyter/.local/bin:{PATH}
###Output
_____no_output_____
###Markdown
The pipeline source can be found in the `pipeline` folder. Switch to the `pipeline` folder and compile the pipeline.
###Code
%cd pipeline
###Output
_____no_output_____
###Markdown
2.1 Create AI Platform Pipelines clusterNavigate to [AI Platform Pipelines](https://console.cloud.google.com/ai-platform/pipelines/clusters) page in the Google Cloud Console.Create or select an existing Kubernetes cluster (GKE) and deploy AI Platform. Make sure to select `"Allow access to the following Cloud APIs https://www.googleapis.com/auth/cloud-platform"` to allow for programmatic access to your pipeline by the Kubeflow SDK for the rest of the lab. Also, provide an `App instance name` such as "TFX-lab-04". 2.2 Configure environment settings Update the below constants with the settings reflecting your lab environment.- `GCP_REGION` - the compute region for AI Platform Training and Prediction- `ARTIFACT_STORE` - the GCS bucket created during installation of AI Platform Pipelines. The bucket name starts with the `kubeflowpipelines-` prefix. Alternatively, you can specify create a new storage bucket to write pipeline artifacts to.
###Code
!gsutil ls
###Output
_____no_output_____
###Markdown
* `CUSTOM_SERVICE_ACCOUNT` - In the GCP console, click on the Navigation Menu. Navigate to `IAM & Admin`, then to `Service Accounts` and use the service account starting with the prefix `'tfx-tuner-caip-service-account'`. This enables CloudTuner and the Google Cloud AI Platform extensions Tuner component to work together and allows for distributed and parallel tuning backed by AI Platform Vizier's hyperparameter search algorithm. Please see the lab setup `README` for setup instructions. - `ENDPOINT` - set the `ENDPOINT` constant to the endpoint of your AI Platform Pipelines instance. The endpoint to the AI Platform Pipelines instance can be found on the [AI Platform Pipelines](https://console.cloud.google.com/ai-platform/pipelines/clusters) page in the Google Cloud Console.1. Open the *SETTINGS* for your instance2. Use the value of the `host` variable in the *Connect to this Kubeflow Pipelines instance from a Python client via Kubeflow Pipelines SDK* section of the *SETTINGS* window.
###Code
#TODO: Set your environment resource settings here for GCP_REGION, ARTIFACT_STORE_URI, ENDPOINT, and CUSTOM_SERVICE_ACCOUNT.
GCP_REGION = 'us-central1'
ARTIFACT_STORE_URI = 'gs://dougkelly-sandbox-kubeflowpipelines-default' #Change
ENDPOINT = '60ff837483ecde05-dot-us-central2.pipelines.googleusercontent.com' #Change
CUSTOM_SERVICE_ACCOUNT = 'tfx-tuner-caip-service-account@dougkelly-sandbox.iam.gserviceaccount.com' #Change
PROJECT_ID = !(gcloud config get-value core/project)
PROJECT_ID = PROJECT_ID[0]
# Set your resource settings as environment variables. These override the default values in pipeline/config.py.
%env GCP_REGION={GCP_REGION}
%env ARTIFACT_STORE_URI={ARTIFACT_STORE_URI}
%env CUSTOM_SERVICE_ACCOUNT={CUSTOM_SERVICE_ACCOUNT}
%env PROJECT_ID={PROJECT_ID}
###Output
_____no_output_____
###Markdown
2.3 Compile pipeline
###Code
PIPELINE_NAME = 'tfx_covertype_lab_04'
MODEL_NAME = 'tfx_covertype_classifier'
DATA_ROOT_URI = 'gs://workshop-datasets/covertype/small'
CUSTOM_TFX_IMAGE = 'gcr.io/{}/{}'.format(PROJECT_ID, PIPELINE_NAME)
RUNTIME_VERSION = '2.3'
PYTHON_VERSION = '3.7'
USE_KFP_SA=False
ENABLE_TUNING=True
%env PIPELINE_NAME={PIPELINE_NAME}
%env MODEL_NAME={MODEL_NAME}
%env DATA_ROOT_URI={DATA_ROOT_URI}
%env KUBEFLOW_TFX_IMAGE={CUSTOM_TFX_IMAGE}
%env RUNTIME_VERSION={RUNTIME_VERSION}
%env PYTHON_VERSION={PYTHON_VERSION}
%env USE_KFP_SA={USE_KFP_SA}
%env ENABLE_TUNING={ENABLE_TUNING}
!tfx pipeline compile --engine kubeflow --pipeline_path runner.py
###Output
_____no_output_____
###Markdown
2.4 Deploy pipeline to AI Platform
###Code
!tfx pipeline create \
--pipeline_path=runner.py \
--endpoint={ENDPOINT} \
--build_target_image={CUSTOM_TFX_IMAGE}
###Output
_____no_output_____
###Markdown
(optional) If you make local changes to the pipeline, you can update the deployed package on AI Platform with the following command:
###Code
!tfx pipeline update --pipeline_path runner.py --endpoint {ENDPOINT}
###Output
_____no_output_____
###Markdown
2.5 Create and monitor pipeline run
###Code
!tfx run create --pipeline_name={PIPELINE_NAME} --endpoint={ENDPOINT}
###Output
_____no_output_____
###Markdown
2.6 Configure Kubernetes port forwarding To enable access to the ML Metadata GRPC server, configure Kubernetes port forwarding.From a JupyterLab terminal, execute the following commands:```gcloud container clusters get-credentials [YOUR CLUSTER] --zone [YOUR CLUSTER ZONE] kubectl port-forward service/metadata-grpc-service --namespace [YOUR NAMESPACE] 7000:8080``` Connecting to ML Metadata Configure ML Metadata GRPC client
###Code
grpc_host = 'localhost'
grpc_port = 7000
connection_config = metadata_store_pb2.MetadataStoreClientConfig()
connection_config.host = grpc_host
connection_config.port = grpc_port
###Output
_____no_output_____
###Markdown
Connect to ML Metadata service
###Code
store = metadata_store.MetadataStore(connection_config)
###Output
_____no_output_____
###Markdown
Important A full pipeline run without tuning takes about 40-45 minutes to complete. You need to wait until a pipeline run is complete before proceeding with the steps below. Exploring ML Metadata The Metadata Store uses the following data model:- `ArtifactType` describes an artifact's type and its properties that are stored in the Metadata Store. These types can be registered on-the-fly with the Metadata Store in code, or they can be loaded into the store from a serialized format. Once a type is registered, its definition is available throughout the lifetime of the store.- `Artifact` describes a specific instance of an ArtifactType, and its properties that are written to the Metadata Store.- `ExecutionType` describes a type of component or step in a workflow, and its runtime parameters.- `Execution` is a record of a component run or a step in an ML workflow and the runtime parameters. An Execution can be thought of as an instance of an ExecutionType. Executions are recorded every time a developer runs an ML pipeline or step.- `Event` is a record of the relationship between an Artifact and Executions. When an Execution happens, Events record every Artifact that was used by the Execution, and every Artifact that was produced. These records allow for provenance tracking throughout a workflow. By looking at all Events, MLMD knows what Executions happened and what Artifacts were created as a result, and can recurse back from any Artifact to all of its upstream inputs.- `ContextType` describes a type of conceptual group of Artifacts and Executions in a workflow, and its structural properties. For example: projects, pipeline runs, experiments, owners.- `Context` is an instance of a ContextType. It captures the shared information within the group. For example: project name, changelist commit id, experiment annotations.
It has a user-defined unique name within its ContextType.- `Attribution` is a record of the relationship between Artifacts and Contexts.- `Association` is a record of the relationship between Executions and Contexts. List the registered artifact types.
###Code
for artifact_type in store.get_artifact_types():
print(artifact_type.name)
###Output
_____no_output_____
###Markdown
Display the registered execution types.
###Code
for execution_type in store.get_execution_types():
print(execution_type.name)
###Output
_____no_output_____
###Markdown
List the registered context types.
###Code
for context_type in store.get_context_types():
print(context_type.name)
###Output
_____no_output_____
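The Event records described above are what make provenance tracking possible. The following toy sketch uses plain Python dicts standing in for MLMD records (not actual `store` calls) to show how recursing over INPUT/OUTPUT events recovers an artifact's upstream inputs:

```python
# Toy records mimicking MLMD's Event model: each event links one artifact
# to one execution, and is either an INPUT to or an OUTPUT of that execution.
events = [
    {"artifact_id": 1, "execution_id": 10, "type": "INPUT"},   # raw data -> trainer
    {"artifact_id": 2, "execution_id": 10, "type": "OUTPUT"},  # trainer -> model
    {"artifact_id": 2, "execution_id": 11, "type": "INPUT"},   # model -> evaluator
    {"artifact_id": 3, "execution_id": 11, "type": "OUTPUT"},  # evaluator -> report
]

def upstream_artifacts(artifact_id):
    """Recurse from an artifact back to every artifact it was derived from."""
    producers = {e["execution_id"] for e in events
                 if e["artifact_id"] == artifact_id and e["type"] == "OUTPUT"}
    inputs = {e["artifact_id"] for e in events
              if e["execution_id"] in producers and e["type"] == "INPUT"}
    result = set(inputs)
    for upstream in inputs:
        result |= upstream_artifacts(upstream)
    return result

# The evaluation report (artifact 3) traces back through the model (2)
# to the raw data (1).
print(upstream_artifacts(3))  # -> {1, 2}
```

This is the same walk MLMD performs for you when it answers "which data produced this model?".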
###Markdown
Visualizing TFX artifacts Retrieve data analysis and validation artifacts
###Code
with metadata.Metadata(connection_config) as store:
schema_artifacts = store.get_artifacts_by_type(standard_artifacts.Schema.TYPE_NAME)
stats_artifacts = store.get_artifacts_by_type(standard_artifacts.ExampleStatistics.TYPE_NAME)
anomalies_artifacts = store.get_artifacts_by_type(standard_artifacts.ExampleAnomalies.TYPE_NAME)
schema_file = os.path.join(schema_artifacts[-1].uri, 'schema.pbtxt')
print("Generated schema file:{}".format(schema_file))
stats_path = stats_artifacts[-1].uri
train_stats_file = os.path.join(stats_path, 'train', 'stats_tfrecord')
eval_stats_file = os.path.join(stats_path, 'eval', 'stats_tfrecord')
print("Train stats file:{}, Eval stats file:{}".format(
train_stats_file, eval_stats_file))
anomalies_path = anomalies_artifacts[-1].uri
train_anomalies_file = os.path.join(anomalies_path, 'train', 'anomalies.pbtxt')
eval_anomalies_file = os.path.join(anomalies_path, 'eval', 'anomalies.pbtxt')
print("Train anomalies file:{}, Eval anomalies file:{}".format(
train_anomalies_file, eval_anomalies_file))
###Output
_____no_output_____
###Markdown
Visualize schema
###Code
schema = tfdv.load_schema_text(schema_file)
tfdv.display_schema(schema=schema)
###Output
_____no_output_____
###Markdown
Visualize statistics Exercise: looking at the features visualized below, answer the following questions:- Which feature transformations would you apply to each feature with TF Transform?- Are there data quality issues with certain features that may impact your model performance? How might you deal with it?
###Code
train_stats = tfdv.load_statistics(train_stats_file)
eval_stats = tfdv.load_statistics(eval_stats_file)
tfdv.visualize_statistics(lhs_statistics=eval_stats, rhs_statistics=train_stats,
lhs_name='EVAL_DATASET', rhs_name='TRAIN_DATASET')
###Output
_____no_output_____
###Markdown
Visualize anomalies
###Code
train_anomalies = tfdv.load_anomalies_text(train_anomalies_file)
tfdv.display_anomalies(train_anomalies)
eval_anomalies = tfdv.load_anomalies_text(eval_anomalies_file)
tfdv.display_anomalies(eval_anomalies)
###Output
_____no_output_____
###Markdown
Retrieve model artifacts
###Code
with metadata.Metadata(connection_config) as store:
model_eval_artifacts = store.get_artifacts_by_type(standard_artifacts.ModelEvaluation.TYPE_NAME)
hyperparam_artifacts = store.get_artifacts_by_type(standard_artifacts.HyperParameters.TYPE_NAME)
model_eval_path = model_eval_artifacts[-1].uri
print("Generated model evaluation result:{}".format(model_eval_path))
best_hparams_path = os.path.join(hyperparam_artifacts[-1].uri, 'best_hyperparameters.txt')
print("Generated model best hyperparameters result:{}".format(best_hparams_path))
###Output
_____no_output_____
###Markdown
Return best hyperparameters
###Code
# Latest pipeline run Tuner search space.
json.loads(file_io.read_file_to_string(best_hparams_path))['space']
# Latest pipeline run Tuner searched best_hyperparameters artifacts.
json.loads(file_io.read_file_to_string(best_hparams_path))['values']
###Output
_____no_output_____
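The `best_hyperparameters.txt` artifact read above is keras-tuner's serialized search state, with the search space under `space` and the winning trial's settings under `values`. Here is a self-contained sketch of parsing it with the standard library; the payload below is hypothetical and only illustrates the shape, a real run's names and values will differ:

```python
import json

# Hypothetical payload mimicking the structure of best_hyperparameters.txt.
raw = json.dumps({
    "space": [
        {"class_name": "Int",
         "config": {"name": "n_layers", "min_value": 1, "max_value": 3}},
        {"class_name": "Choice",
         "config": {"name": "learning_rate", "values": [1e-3, 1e-4]}},
    ],
    "values": {"n_layers": 2, "learning_rate": 1e-3},
})

config = json.loads(raw)
print("Search space dimensions:",
      [hp["config"]["name"] for hp in config["space"]])
best = config["values"]
for name, value in sorted(best.items()):
    print("  {} = {}".format(name, value))
```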
###Markdown
Visualize model evaluations Exercise: review the model evaluation results below and answer the following questions:- Which Wilderness Area had the highest accuracy?- Which Wilderness Area had the lowest performance? Why do you think that is? What are some steps you could take to improve your next model runs?
###Code
eval_result = tfma.load_eval_result(model_eval_path)
tfma.view.render_slicing_metrics(
eval_result, slicing_column='Wilderness_Area')
###Output
_____no_output_____
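As a toy illustration of what slicing by `Wilderness_Area` means (plain Python, not the TFMA API): the same metric is computed separately per slice, so a model can look strong overall while underperforming on one slice. All example rows below are hypothetical:

```python
# Hypothetical (wilderness_area, true_label, predicted_label) rows.
examples = [
    ("Rawah", 1, 1), ("Rawah", 0, 0), ("Rawah", 1, 0),
    ("Comanche", 1, 1), ("Comanche", 1, 1),
]

def accuracy_by_slice(rows):
    totals, correct = {}, {}
    for area, label, pred in rows:
        totals[area] = totals.get(area, 0) + 1
        correct[area] = correct.get(area, 0) + int(label == pred)
    return {area: correct[area] / totals[area] for area in totals}

by_slice = accuracy_by_slice(examples)
for area, acc in sorted(by_slice.items()):
    print("{}: {:.2f}".format(area, acc))  # the aggregate metric hides the weak slice
```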
###Markdown
Inspecting TFX metadata Learning Objectives1. Use a GRPC server to access and analyze pipeline artifacts stored in the ML Metadata service of your AI Platform Pipelines instance.In this lab, you will explore TFX pipeline metadata including pipeline and run artifacts. A hosted **AI Platform Pipelines** instance includes the [ML Metadata](https://github.com/google/ml-metadata) service. In **AI Platform Pipelines**, ML Metadata uses *MySQL* as a database backend and can be accessed using a GRPC server. Setup
###Code
import os
import ml_metadata
import tensorflow_data_validation as tfdv
import tensorflow_model_analysis as tfma
from ml_metadata.metadata_store import metadata_store
from ml_metadata.proto import metadata_store_pb2
from tfx.orchestration import metadata
from tfx.types import standard_artifacts
!python -c "import tfx; print('TFX version: {}'.format(tfx.__version__))"
!python -c "import kfp; print('KFP version: {}'.format(kfp.__version__))"
###Output
_____no_output_____
###Markdown
Option 1: Explore metadata from existing TFX pipeline runs from the AI Pipelines instance created in `lab-02` or `lab-03`. 1.1 Configure Kubernetes port forwardingTo enable access to the ML Metadata GRPC server, configure Kubernetes port forwarding.From a JupyterLab terminal, execute the following commands:```gcloud container clusters get-credentials [YOUR CLUSTER] --zone [YOUR CLUSTER ZONE] kubectl port-forward service/metadata-grpc-service --namespace [YOUR NAMESPACE] 7000:8080``` Proceed to the next step, "Connecting to ML Metadata". Option 2: Create a new AI Pipelines instance and evaluate metadata on newly triggered pipeline runs.Hosted AI Pipelines incurs cost for as long as your Kubernetes cluster is running. If you deleted your previous lab instance, proceed with the 6 steps below to deploy a new TFX pipeline and trigger runs to inspect its metadata.
###Code
import yaml
# Set `PATH` to include the directory containing TFX CLI.
PATH=%env PATH
%env PATH=/home/jupyter/.local/bin:{PATH}
###Output
_____no_output_____
###Markdown
The pipeline source can be found in the `pipeline` folder. Switch to the `pipeline` folder and compile the pipeline.
###Code
%cd pipeline
###Output
_____no_output_____
###Markdown
2.1 Create AI Platform Pipelines cluster Navigate to the [AI Platform Pipelines](https://console.cloud.google.com/ai-platform/pipelines/clusters) page in the Google Cloud Console.Create or select an existing Kubernetes cluster (GKE) and deploy AI Platform. Make sure to select `"Allow access to the following Cloud APIs https://www.googleapis.com/auth/cloud-platform"` to allow for programmatic access to your pipeline by the Kubeflow SDK for the rest of the lab. Also, provide an `App instance name` such as "TFX-lab-04". 2.2 Configure environment settings Update the below constants with the settings reflecting your lab environment.- `GCP_REGION` - the compute region for AI Platform Training and Prediction- `ARTIFACT_STORE` - the GCS bucket created during installation of AI Platform Pipelines. The bucket name starts with the `kubeflowpipelines-` prefix. Alternatively, you can create a new storage bucket to write pipeline artifacts to.
###Code
!gsutil ls
###Output
_____no_output_____
###Markdown
* `CUSTOM_SERVICE_ACCOUNT` - your user-created custom Google Cloud service account for your pipeline's AI Platform Training job that you created during initial setup for these labs to access the Cloud AI Platform Vizier service. This enables CloudTuner and the Google Cloud AI Platform extensions Tuner component to work together and allows for distributed and parallel tuning backed by AI Platform Vizier's hyperparameter search algorithm. Please see the lab setup `README` for setup instructions. - `ENDPOINT` - set the `ENDPOINT` constant to the endpoint of your AI Platform Pipelines instance. The endpoint can be found on the [AI Platform Pipelines](https://console.cloud.google.com/ai-platform/pipelines/clusters) page in the Google Cloud Console.1. Open the *SETTINGS* for your instance2. Use the value of the `host` variable in the *Connect to this Kubeflow Pipelines instance from a Python client via Kubeflow Pipelines SDK* section of the *SETTINGS* window.
###Code
#TODO: Set your environment resource settings here for GCP_REGION, ARTIFACT_STORE_URI, ENDPOINT, and CUSTOM_SERVICE_ACCOUNT.
GCP_REGION = 'us-central1'
ARTIFACT_STORE_URI = 'gs://dougkelly-sandbox-kubeflowpipelines-default'
ENDPOINT = '60ff837483ecde05-dot-us-central2.pipelines.googleusercontent.com'
CUSTOM_SERVICE_ACCOUNT = 'tfx-tuner-caip-service-account@dougkelly-sandbox.iam.gserviceaccount.com'
PROJECT_ID = !(gcloud config get-value core/project)
PROJECT_ID = PROJECT_ID[0]
# Set your resource settings as environment variables. These override the default values in pipeline/config.py.
%env GCP_REGION={GCP_REGION}
%env ARTIFACT_STORE_URI={ARTIFACT_STORE_URI}
%env CUSTOM_SERVICE_ACCOUNT={CUSTOM_SERVICE_ACCOUNT}
%env PROJECT_ID={PROJECT_ID}
###Output
_____no_output_____
###Markdown
2.3 Compile pipeline
###Code
PIPELINE_NAME = 'tfx_covertype_lab_04'
MODEL_NAME = 'tfx_covertype_classifier'
DATA_ROOT_URI = 'gs://workshop-datasets/covertype/small'
CUSTOM_TFX_IMAGE = 'gcr.io/{}/{}'.format(PROJECT_ID, PIPELINE_NAME)
RUNTIME_VERSION = '2.3'
PYTHON_VERSION = '3.7'
USE_KFP_SA=False
ENABLE_TUNING=False
%env PIPELINE_NAME={PIPELINE_NAME}
%env MODEL_NAME={MODEL_NAME}
%env DATA_ROOT_URI={DATA_ROOT_URI}
%env KUBEFLOW_TFX_IMAGE={CUSTOM_TFX_IMAGE}
%env RUNTIME_VERSION={RUNTIME_VERSION}
%env PYTHON_VERSION={PYTHON_VERSION}
%env USE_KFP_SA={USE_KFP_SA}
%env ENABLE_TUNING={ENABLE_TUNING}
!tfx pipeline compile --engine kubeflow --pipeline_path runner.py
###Output
_____no_output_____
###Markdown
2.4 Deploy pipeline to AI Platform
###Code
!tfx pipeline create \
--pipeline_path=runner.py \
--endpoint={ENDPOINT} \
--build_target_image={CUSTOM_TFX_IMAGE}
###Output
_____no_output_____
###Markdown
(optional) If you make local changes to the pipeline, you can update the deployed package on AI Platform with the following command:
###Code
!tfx pipeline update --pipeline_path runner.py --endpoint {ENDPOINT}
###Output
_____no_output_____
###Markdown
2.5 Create and monitor pipeline run
###Code
!tfx run create --pipeline_name={PIPELINE_NAME} --endpoint={ENDPOINT}
###Output
_____no_output_____
###Markdown
2.6 Configure Kubernetes port forwarding To enable access to the ML Metadata GRPC server, configure Kubernetes port forwarding.From a JupyterLab terminal, execute the following commands:```gcloud container clusters get-credentials [YOUR CLUSTER] --zone [YOUR CLUSTER ZONE] kubectl port-forward service/metadata-grpc-service --namespace [YOUR NAMESPACE] 7000:8080``` Connecting to ML Metadata Configure ML Metadata GRPC client
###Code
grpc_host = 'localhost'
grpc_port = 7000
connection_config = metadata_store_pb2.MetadataStoreClientConfig()
connection_config.host = grpc_host
connection_config.port = grpc_port
###Output
_____no_output_____
###Markdown
Connect to ML Metadata service
###Code
store = metadata_store.MetadataStore(connection_config)
###Output
_____no_output_____
###Markdown
Important A full pipeline run without tuning takes about 40-45 minutes to complete. You need to wait until a pipeline run is complete before proceeding with the steps below. Exploring ML Metadata The Metadata Store uses the following data model:- `ArtifactType` describes an artifact's type and its properties that are stored in the Metadata Store. These types can be registered on-the-fly with the Metadata Store in code, or they can be loaded into the store from a serialized format. Once a type is registered, its definition is available throughout the lifetime of the store.- `Artifact` describes a specific instance of an ArtifactType, and its properties that are written to the Metadata Store.- `ExecutionType` describes a type of component or step in a workflow, and its runtime parameters.- `Execution` is a record of a component run or a step in an ML workflow and the runtime parameters. An Execution can be thought of as an instance of an ExecutionType. Executions are recorded every time a developer runs an ML pipeline or step.- `Event` is a record of the relationship between an Artifact and Executions. When an Execution happens, Events record every Artifact that was used by the Execution, and every Artifact that was produced. These records allow for provenance tracking throughout a workflow. By looking at all Events, MLMD knows what Executions happened and what Artifacts were created as a result, and can recurse back from any Artifact to all of its upstream inputs.- `ContextType` describes a type of conceptual group of Artifacts and Executions in a workflow, and its structural properties. For example: projects, pipeline runs, experiments, owners.- `Context` is an instance of a ContextType. It captures the shared information within the group. For example: project name, changelist commit id, experiment annotations.
It has a user-defined unique name within its ContextType.- `Attribution` is a record of the relationship between Artifacts and Contexts.- `Association` is a record of the relationship between Executions and Contexts. List the registered artifact types.
###Code
for artifact_type in store.get_artifact_types():
print(artifact_type.name)
###Output
_____no_output_____
###Markdown
Display the registered execution types.
###Code
for execution_type in store.get_execution_types():
print(execution_type.name)
###Output
_____no_output_____
###Markdown
List the registered context types.
###Code
for context_type in store.get_context_types():
print(context_type.name)
###Output
_____no_output_____
###Markdown
Visualizing TFX artifacts Retrieve data analysis and validation artifacts
###Code
with metadata.Metadata(connection_config) as store:
stats_artifacts = store.get_artifacts_by_type(standard_artifacts.ExampleStatistics.TYPE_NAME)
schema_artifacts = store.get_artifacts_by_type(standard_artifacts.Schema.TYPE_NAME)
anomalies_artifacts = store.get_artifacts_by_type(standard_artifacts.ExampleAnomalies.TYPE_NAME)
stats_path = stats_artifacts[-1].uri
train_stats_file = os.path.join(stats_path, 'train', 'stats_tfrecord')
eval_stats_file = os.path.join(stats_path, 'eval', 'stats_tfrecord')
print("Train stats file:{}, Eval stats file:{}".format(
train_stats_file, eval_stats_file))
schema_file = os.path.join(schema_artifacts[-1].uri, 'schema.pbtxt')
print("Generated schema file:{}".format(schema_file))
anomalies_file = os.path.join(anomalies_artifacts[-1].uri, 'anomalies.pbtxt')
print("Generated anomalies file:{}".format(anomalies_file))
###Output
_____no_output_____
###Markdown
Visualize statistics Exercise: looking at the features visualized below, answer the following questions:- Which feature transformations would you apply to each feature with TF Transform?- Are there data quality issues with certain features that may impact your model performance? How might you deal with it?
###Code
train_stats = tfdv.load_statistics(train_stats_file)
eval_stats = tfdv.load_statistics(eval_stats_file)
tfdv.visualize_statistics(lhs_statistics=eval_stats, rhs_statistics=train_stats,
lhs_name='EVAL_DATASET', rhs_name='TRAIN_DATASET')
###Output
_____no_output_____
###Markdown
Visualize schema
###Code
schema = tfdv.load_schema_text(schema_file)
tfdv.display_schema(schema=schema)
###Output
_____no_output_____
###Markdown
Visualize anomalies
###Code
anomalies = tfdv.load_anomalies_text(anomalies_file)
tfdv.display_anomalies(anomalies)
###Output
_____no_output_____
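Conceptually, TFDV flags an anomaly when observed values fall outside the domains recorded in the schema. A minimal self-contained sketch of that idea (plain Python dicts, not the TFDV API; the feature values are hypothetical):

```python
# The schema records an allowed domain per categorical feature; any value
# outside it is flagged. Feature values here are hypothetical.
schema = {"Wilderness_Area": {"Rawah", "Neota", "Comanche", "Cache"}}
batch = [
    {"Wilderness_Area": "Rawah"},
    {"Wilderness_Area": "Rewah"},  # typo -> out-of-domain anomaly
]

def find_anomalies(rows, schema):
    anomalies = []
    for index, row in enumerate(rows):
        for feature, domain in schema.items():
            if row.get(feature) not in domain:
                anomalies.append((index, feature, row.get(feature)))
    return anomalies

print(find_anomalies(batch, schema))  # -> [(1, 'Wilderness_Area', 'Rewah')]
```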
###Markdown
Retrieve model evaluations
###Code
with metadata.Metadata(connection_config) as store:
model_eval_artifacts = store.get_artifacts_by_type(standard_artifacts.ModelEvaluation.TYPE_NAME)
model_eval_path = model_eval_artifacts[-1].uri
print("Generated model evaluation result:{}".format(model_eval_path))
###Output
_____no_output_____
###Markdown
Visualize model evaluations Exercise: review the model evaluation results below and answer the following questions:- Which Wilderness Area had the highest accuracy?- Which Wilderness Area had the lowest performance? Why do you think that is? What are some steps you could take to improve your next model runs?
###Code
eval_result = tfma.load_eval_result(model_eval_path)
tfma.view.render_slicing_metrics(
eval_result, slicing_column='Wilderness_Area')
###Output
_____no_output_____
###Markdown
Inspecting TFX metadata Learning Objectives1. Use a GRPC server to access and analyze pipeline artifacts stored in the ML Metadata service of your AI Platform Pipelines instance.In this lab, you will explore TFX pipeline metadata including pipeline and run artifacts. A hosted **AI Platform Pipelines** instance includes the [ML Metadata](https://github.com/google/ml-metadata) service. In **AI Platform Pipelines**, ML Metadata uses *MySQL* as a database backend and can be accessed using a GRPC server. Setup
###Code
import os
import ml_metadata
import tensorflow_data_validation as tfdv
import tensorflow_model_analysis as tfma
from ml_metadata.metadata_store import metadata_store
from ml_metadata.proto import metadata_store_pb2
from tfx.orchestration import metadata
from tfx.types import standard_artifacts
!python -c "import tfx; print('TFX version: {}'.format(tfx.__version__))"
!python -c "import kfp; print('KFP version: {}'.format(kfp.__version__))"
###Output
_____no_output_____
###Markdown
Option 1: Explore metadata from existing TFX pipeline runs from the AI Pipelines instance created in `lab-02` or `lab-03`. 1.1 Configure Kubernetes port forwardingTo enable access to the ML Metadata GRPC server, configure Kubernetes port forwarding.From a JupyterLab terminal, execute the following commands:```gcloud container clusters get-credentials [YOUR CLUSTER] --zone [YOUR CLUSTER ZONE] kubectl port-forward service/metadata-grpc-service --namespace [YOUR NAMESPACE] 7000:8080``` Proceed to the next step, "Connecting to ML Metadata". Option 2: Create a new AI Pipelines instance and evaluate metadata on newly triggered pipeline runs.Hosted AI Pipelines incurs cost for as long as your Kubernetes cluster is running. If you deleted your previous lab instance, proceed with the 6 steps below to deploy a new TFX pipeline and trigger runs to inspect its metadata.
###Code
import yaml
# Set `PATH` to include the directory containing TFX CLI.
PATH=%env PATH
%env PATH=/home/jupyter/.local/bin:{PATH}
###Output
_____no_output_____
###Markdown
The pipeline source can be found in the `pipeline` folder. Switch to the `pipeline` folder and compile the pipeline.
###Code
%cd pipeline
###Output
_____no_output_____
###Markdown
2.1 Create AI Platform Pipelines cluster Navigate to the [AI Platform Pipelines](https://console.cloud.google.com/ai-platform/pipelines/clusters) page in the Google Cloud Console.Create or select an existing Kubernetes cluster (GKE) and deploy AI Platform. Make sure to select `"Allow access to the following Cloud APIs https://www.googleapis.com/auth/cloud-platform"` to allow for programmatic access to your pipeline by the Kubeflow SDK for the rest of the lab. Also, provide an `App instance name` such as "TFX-lab-04". 2.2 Configure environment settings Update the below constants with the settings reflecting your lab environment.- `GCP_REGION` - the compute region for AI Platform Training and Prediction- `ARTIFACT_STORE` - the GCS bucket created during installation of AI Platform Pipelines. The bucket name starts with the `kubeflowpipelines-` prefix. Alternatively, you can create a new storage bucket to write pipeline artifacts to.
###Code
!gsutil ls
###Output
_____no_output_____
###Markdown
* `CUSTOM_SERVICE_ACCOUNT` - In the GCP Console, click on the Navigation Menu. Navigate to `IAM & Admin`, then to `Service Accounts`, and use the service account starting with the prefix `'tfx-tuner-caip-service-account'`. This enables CloudTuner and the Google Cloud AI Platform extensions Tuner component to work together and allows for distributed and parallel tuning backed by AI Platform Vizier's hyperparameter search algorithm. Please see the lab setup `README` for setup instructions. - `ENDPOINT` - set the `ENDPOINT` constant to the endpoint of your AI Platform Pipelines instance. The endpoint can be found on the [AI Platform Pipelines](https://console.cloud.google.com/ai-platform/pipelines/clusters) page in the Google Cloud Console.1. Open the *SETTINGS* for your instance2. Use the value of the `host` variable in the *Connect to this Kubeflow Pipelines instance from a Python client via Kubeflow Pipelines SDK* section of the *SETTINGS* window.
###Code
#TODO: Set your environment resource settings here for GCP_REGION, ARTIFACT_STORE_URI, ENDPOINT, and CUSTOM_SERVICE_ACCOUNT.
GCP_REGION = 'us-central1'
ARTIFACT_STORE_URI = 'gs://dougkelly-sandbox-kubeflowpipelines-default'
ENDPOINT = '60ff837483ecde05-dot-us-central2.pipelines.googleusercontent.com'
CUSTOM_SERVICE_ACCOUNT = 'tfx-tuner-caip-service-account@dougkelly-sandbox.iam.gserviceaccount.com'
PROJECT_ID = !(gcloud config get-value core/project)
PROJECT_ID = PROJECT_ID[0]
# Set your resource settings as environment variables. These override the default values in pipeline/config.py.
%env GCP_REGION={GCP_REGION}
%env ARTIFACT_STORE_URI={ARTIFACT_STORE_URI}
%env CUSTOM_SERVICE_ACCOUNT={CUSTOM_SERVICE_ACCOUNT}
%env PROJECT_ID={PROJECT_ID}
###Output
_____no_output_____
###Markdown
2.3 Compile pipeline
###Code
PIPELINE_NAME = 'tfx_covertype_lab_04'
MODEL_NAME = 'tfx_covertype_classifier'
DATA_ROOT_URI = 'gs://workshop-datasets/covertype/small'
CUSTOM_TFX_IMAGE = 'gcr.io/{}/{}'.format(PROJECT_ID, PIPELINE_NAME)
RUNTIME_VERSION = '2.3'
PYTHON_VERSION = '3.7'
USE_KFP_SA=False
ENABLE_TUNING=False
%env PIPELINE_NAME={PIPELINE_NAME}
%env MODEL_NAME={MODEL_NAME}
%env DATA_ROOT_URI={DATA_ROOT_URI}
%env KUBEFLOW_TFX_IMAGE={CUSTOM_TFX_IMAGE}
%env RUNTIME_VERSION={RUNTIME_VERSION}
%env PYTHON_VERSION={PYTHON_VERSION}
%env USE_KFP_SA={USE_KFP_SA}
%env ENABLE_TUNING={ENABLE_TUNING}
!tfx pipeline compile --engine kubeflow --pipeline_path runner.py
###Output
_____no_output_____
###Markdown
2.4 Deploy pipeline to AI Platform
###Code
!tfx pipeline create \
--pipeline_path=runner.py \
--endpoint={ENDPOINT} \
--build_target_image={CUSTOM_TFX_IMAGE}
###Output
_____no_output_____
###Markdown
(optional) If you make local changes to the pipeline, you can update the deployed package on AI Platform with the following command:
###Code
!tfx pipeline update --pipeline_path runner.py --endpoint {ENDPOINT}
###Output
_____no_output_____
###Markdown
2.5 Create and monitor pipeline run
###Code
!tfx run create --pipeline_name={PIPELINE_NAME} --endpoint={ENDPOINT}
###Output
_____no_output_____
###Markdown
2.6 Configure Kubernetes port forwarding To enable access to the ML Metadata GRPC server, configure Kubernetes port forwarding.From a JupyterLab terminal, execute the following commands:```gcloud container clusters get-credentials [YOUR CLUSTER] --zone [YOUR CLUSTER ZONE] kubectl port-forward service/metadata-grpc-service --namespace [YOUR NAMESPACE] 7000:8080``` Connecting to ML Metadata Configure ML Metadata GRPC client
###Code
grpc_host = 'localhost'
grpc_port = 7000
connection_config = metadata_store_pb2.MetadataStoreClientConfig()
connection_config.host = grpc_host
connection_config.port = grpc_port
###Output
_____no_output_____
###Markdown
Connect to ML Metadata service
###Code
store = metadata_store.MetadataStore(connection_config)
###Output
_____no_output_____
###Markdown
Important A full pipeline run without tuning takes about 40-45 minutes to complete. You need to wait until a pipeline run is complete before proceeding with the steps below. Exploring ML Metadata The Metadata Store uses the following data model:- `ArtifactType` describes an artifact's type and its properties that are stored in the Metadata Store. These types can be registered on-the-fly with the Metadata Store in code, or they can be loaded into the store from a serialized format. Once a type is registered, its definition is available throughout the lifetime of the store.- `Artifact` describes a specific instance of an ArtifactType, and its properties that are written to the Metadata Store.- `ExecutionType` describes a type of component or step in a workflow, and its runtime parameters.- `Execution` is a record of a component run or a step in an ML workflow and the runtime parameters. An Execution can be thought of as an instance of an ExecutionType. Executions are recorded every time a developer runs an ML pipeline or step.- `Event` is a record of the relationship between an Artifact and Executions. When an Execution happens, Events record every Artifact that was used by the Execution, and every Artifact that was produced. These records allow for provenance tracking throughout a workflow. By looking at all Events, MLMD knows what Executions happened and what Artifacts were created as a result, and can recurse back from any Artifact to all of its upstream inputs.- `ContextType` describes a type of conceptual group of Artifacts and Executions in a workflow, and its structural properties. For example: projects, pipeline runs, experiments, owners.- `Context` is an instance of a ContextType. It captures the shared information within the group. For example: project name, changelist commit id, experiment annotations.
It has a user-defined unique name within its ContextType.- `Attribution` is a record of the relationship between Artifacts and Contexts.- `Association` is a record of the relationship between Executions and Contexts. List the registered artifact types.
###Code
for artifact_type in store.get_artifact_types():
print(artifact_type.name)
###Output
_____no_output_____
###Markdown
Display the registered execution types.
###Code
for execution_type in store.get_execution_types():
print(execution_type.name)
###Output
_____no_output_____
###Markdown
List the registered context types.
###Code
for context_type in store.get_context_types():
print(context_type.name)
###Output
_____no_output_____
###Markdown
Visualizing TFX artifacts Retrieve data analysis and validation artifacts
###Code
with metadata.Metadata(connection_config) as store:
stats_artifacts = store.get_artifacts_by_type(standard_artifacts.ExampleStatistics.TYPE_NAME)
schema_artifacts = store.get_artifacts_by_type(standard_artifacts.Schema.TYPE_NAME)
anomalies_artifacts = store.get_artifacts_by_type(standard_artifacts.ExampleAnomalies.TYPE_NAME)
stats_path = stats_artifacts[-1].uri
train_stats_file = os.path.join(stats_path, 'train', 'stats_tfrecord')
eval_stats_file = os.path.join(stats_path, 'eval', 'stats_tfrecord')
print("Train stats file:{}, Eval stats file:{}".format(
train_stats_file, eval_stats_file))
schema_file = os.path.join(schema_artifacts[-1].uri, 'schema.pbtxt')
print("Generated schema file:{}".format(schema_file))
anomalies_file = os.path.join(anomalies_artifacts[-1].uri, 'anomalies.pbtxt')
print("Generated anomalies file:{}".format(anomalies_file))
###Output
_____no_output_____
###Markdown
Visualize statistics Exercise: looking at the features visualized below, answer the following questions:- Which feature transformations would you apply to each feature with TF Transform?- Are there data quality issues with certain features that may impact your model performance? How might you deal with it?
###Code
train_stats = tfdv.load_statistics(train_stats_file)
eval_stats = tfdv.load_statistics(eval_stats_file)
tfdv.visualize_statistics(lhs_statistics=eval_stats, rhs_statistics=train_stats,
lhs_name='EVAL_DATASET', rhs_name='TRAIN_DATASET')
###Output
_____no_output_____
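The side-by-side statistics above are typically used to spot train/eval skew. As a toy illustration of the idea (plain Python with hypothetical values, not TFDV's skew comparators), comparing one feature's mean across the two splits:

```python
# Hypothetical per-split samples of one numeric feature (e.g. Elevation).
train_values = [2596, 2590, 2804, 2785]
eval_values = [2606, 2605, 3500, 3510]

def mean(values):
    return sum(values) / len(values)

# A large relative shift in a summary statistic between splits is a hint
# that the eval data is not drawn from the training distribution.
drift = abs(mean(train_values) - mean(eval_values)) / mean(train_values)
print("relative mean shift: {:.2%}".format(drift))
```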
###Markdown
Visualize schema
###Code
schema = tfdv.load_schema_text(schema_file)
tfdv.display_schema(schema=schema)
###Output
_____no_output_____
###Markdown
Visualize anomalies
###Code
anomalies = tfdv.load_anomalies_text(anomalies_file)
tfdv.display_anomalies(anomalies)
###Output
_____no_output_____
###Markdown
Retrieve model evaluations
###Code
with metadata.Metadata(connection_config) as store:
model_eval_artifacts = store.get_artifacts_by_type(standard_artifacts.ModelEvaluation.TYPE_NAME)
model_eval_path = model_eval_artifacts[-1].uri
print("Generated model evaluation result:{}".format(model_eval_path))
###Output
_____no_output_____
###Markdown
Visualize model evaluations Exercise: review the model evaluation results below and answer the following questions: - Which Wilderness Area had the highest accuracy? - Which Wilderness Area had the lowest performance? Why do you think that is? What are some steps you could take to improve your next model runs?
###Code
eval_result = tfma.load_eval_result(model_eval_path)
tfma.view.render_slicing_metrics(
eval_result, slicing_column='Wilderness_Area')
###Output
_____no_output_____ |
introduction_to_python/Introduction to Python.ipynb | ###Markdown
Introduction Python is an easy to learn, powerful programming language. It has efficient high-level data structures and a simple but effective approach to object-oriented programming. Python’s elegant syntax and dynamic typing, together with its interpreted nature, make it an ideal language for scripting and rapid application development in many areas on most platforms. In this section we will cover the basics of the Python language and its features. This tutorial was created with Jupyter Notebook, a web application that allows us to share documents including live Python code; however, it can also be followed using your own Python interpreter. We will assume you have basic programming skills, so we will skip a lot of basic concepts. We will use Python 3.6 for this tutorial, which you can download and install from https://www.python.org/. After installing Python you can open your command line/bash and type "__python --version__" to check your installed Python version. You can also start your Python interpreter by typing "python" in your command line and pressing enter.
###Code
import sys
print(sys.version)
###Output
3.6.3 |Anaconda custom (32-bit)| (default, Nov 8 2017, 15:12:41) [MSC v.1900 32 bit (Intel)]
###Markdown
Basics of Python Python is a high-level, dynamically typed, multiparadigm programming language. Python code is often said to be almost like pseudocode, since it allows you to express very powerful ideas in very few lines of code while being very readable. As an example, here is an implementation of the classic quicksort algorithm in Python:
###Code
def quicksort(arr):
if len(arr) <= 1:
return arr
pivot = arr[len(arr) // 2]
left = [x for x in arr if x < pivot]
middle = [x for x in arr if x == pivot]
right = [x for x in arr if x > pivot]
return quicksort(left) + middle + quicksort(right)
print(quicksort([3,6,8,10,1,2,1]))
###Output
[1, 1, 2, 3, 6, 8, 10]
|
projects/notebooks/03_sent2vec_admissiondiagnosis_clustering.ipynb | ###Markdown
Sent2Vec: admission diagnosis clustering Group diagnoses with feature vectors from a pretrained NLP model. CODE NOT RUNNING YET, STILL BUGGY WITH THE "embed_sentences" function. Calculate feature vector for each diagnosis string
###Code
import os
import numpy as np
from collections import Counter
import sent2vec
os.makedirs("_cache", exist_ok=True)
SENT2VEC_MODEL_PATH = '/data/wiki_unigrams.bin'
sent2vec_model = sent2vec.Sent2vecModel()
assert os.path.exists(SENT2VEC_MODEL_PATH)
patient_demo_dict = np.load('_cache/patient_demo.npy', allow_pickle=True).item()
admissiondx = patient_demo_dict['apacheadmissiondx']
admissiondx_embs_cache_path = '_cache/admissiondx_embs.npy'
if os.path.exists(admissiondx_embs_cache_path):
admissiondx_embs = np.load(admissiondx_embs_cache_path, allow_pickle=True)
else:
sent2vec_model.load_model(SENT2VEC_MODEL_PATH, inference_mode=True)
admissiondx_embs = sent2vec_model.embed_sentences(admissiondx)
np.save('_cache/admissiondx_embs.npy', admissiondx_embs)
sent2vec_model.release_shared_mem(SENT2VEC_MODEL_PATH)
print(1)
admissiondx_embs.shape
admissiondx_embs = admissiondx_embs.reshape(admissiondx_embs.shape[0], -1)
admissiondx_embs.shape
###Output
_____no_output_____
###Markdown
Feature vector clustering
###Code
from sklearn.manifold import TSNE
from sklearn.decomposition import LatentDirichletAllocation, PCA
from sklearn.cluster import AffinityPropagation, DBSCAN, OPTICS
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
DBSCAN Clustering
###Code
# Cluster
DBSCAN_clusters = DBSCAN(eps=0.3, min_samples=10)
DBSCAN_clusters.fit(admissiondx_embs)
print("Number of core samples:", DBSCAN_clusters.core_sample_indices_.shape)
admissiondx_dbscan_labels = DBSCAN_clusters.labels_
core_samples_mask = np.zeros_like(admissiondx_dbscan_labels, dtype=bool)
core_samples_mask[DBSCAN_clusters.core_sample_indices_] = True
n_clusters_ = len(set(admissiondx_dbscan_labels)) - (1 if -1 in admissiondx_dbscan_labels else 0)
n_noise_ = list(admissiondx_dbscan_labels).count(-1)
print('Estimated number of clusters: %d' % n_clusters_)
print('Estimated number of noise points: %d' % n_noise_)
diagnosis_dict = {}
for i, label in enumerate(admissiondx_dbscan_labels):
if label in diagnosis_dict:
diagnosis_dict[label].append(i)
else:
diagnosis_dict[label] = [i]
admissiondx[diagnosis_dict[1]]
admissiondx[diagnosis_dict[2]]
for i in range(128):
print('\n',len(admissiondx[diagnosis_dict[i]]), '\n', admissiondx[diagnosis_dict[i]])
###Output
_____no_output_____
###Markdown
OPTICS Clustering
###Code
OPTICS_cluster = OPTICS(min_samples=50, xi=.05, min_cluster_size=.01)
OPTICS_cluster.fit(admissiondx_embs)
num_labels_optics = len(set(OPTICS_cluster.labels_))
print('Estimated number of labels: %d' % num_labels_optics)
diagnosis_dict_optics = {}
for i, label in enumerate(OPTICS_cluster.labels_):
if label in diagnosis_dict_optics:
diagnosis_dict_optics[label].append(i)
else:
diagnosis_dict_optics[label] = [i]
# f = open('diagnosis_stats.txt', 'w')
# for i in range(-1, 19):
# f.write(f'Group {i}\n')
# c = Counter(admissiondx[diagnosis_dict_optics[i]])
# for key in c:
# f.write(f'{key}: {c[key]}\n')
# f.write('\n\n\n')
# f.close()
###Output
_____no_output_____
###Markdown
Save clustering models
###Code
import joblib
joblib.dump(OPTICS_cluster, 'admission_diagnosis_cluster_OPTICS')
joblib.dump(DBSCAN_clusters, 'admission_diagnosis_cluster_DBSCAN')
OPTICS_cluster = joblib.load('admission_diagnosis_cluster_OPTICS')
DBSCAN_clusters = joblib.load('admission_diagnosis_cluster_DBSCAN')
admissiondx_dbscan_labels = DBSCAN_clusters.labels_
colors = [plt.cm.Spectral(each)
for each in np.linspace(0, 1, len(admissiondx_dbscan_labels))]
for k, col in zip(admissiondx_dbscan_labels, colors):
if k == -1:
# Black used for noise.
col = [0, 0, 0, 1]
class_member_mask = (admissiondx_dbscan_labels == k)
xy = admissiondx_embs[class_member_mask & core_samples_mask]
plt.plot(xy[:, 0], xy[:, 1], 'o', markerfacecolor=tuple(col),
markeredgecolor='k', markersize=14)
xy = admissiondx_embs[class_member_mask & ~core_samples_mask]
plt.plot(xy[:, 0], xy[:, 1], 'o', markerfacecolor=tuple(col),
markeredgecolor='k', markersize=6)
plt.title('Estimated number of clusters: %d' % n_clusters_)
plt.show()
###Output
_____no_output_____ |
01 Machine Learning/scikit_examples_jupyter/model_selection/plot_roc.ipynb | ###Markdown
======================================= Receiver Operating Characteristic (ROC) ======================================= Example of Receiver Operating Characteristic (ROC) metric to evaluate classifier output quality. ROC curves typically feature true positive rate on the Y axis, and false positive rate on the X axis. This means that the top left corner of the plot is the "ideal" point - a false positive rate of zero, and a true positive rate of one. This is not very realistic, but it does mean that a larger area under the curve (AUC) is usually better. The "steepness" of ROC curves is also important, since it is ideal to maximize the true positive rate while minimizing the false positive rate. Multiclass settings ------------------- ROC curves are typically used in binary classification to study the output of a classifier. In order to extend ROC curve and ROC area to multi-class or multi-label classification, it is necessary to binarize the output. One ROC curve can be drawn per label, but one can also draw a ROC curve by considering each element of the label indicator matrix as a binary prediction (micro-averaging). Another evaluation measure for multi-class classification is macro-averaging, which gives equal weight to the classification of each label. Note: See also :func:`sklearn.metrics.roc_auc_score`, `sphx_glr_auto_examples_model_selection_plot_roc_crossval.py`.
###Code
print(__doc__)
import numpy as np
import matplotlib.pyplot as plt
from itertools import cycle
from sklearn import svm, datasets
from sklearn.metrics import roc_curve, auc
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import label_binarize
from sklearn.multiclass import OneVsRestClassifier
from scipy import interp
# Import some data to play with
iris = datasets.load_iris()
X = iris.data
y = iris.target
# Binarize the output
y = label_binarize(y, classes=[0, 1, 2])
n_classes = y.shape[1]
# Add noisy features to make the problem harder
random_state = np.random.RandomState(0)
n_samples, n_features = X.shape
X = np.c_[X, random_state.randn(n_samples, 200 * n_features)]
# shuffle and split training and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=.5,
random_state=0)
# Learn to predict each class against the other
classifier = OneVsRestClassifier(svm.SVC(kernel='linear', probability=True,
random_state=random_state))
y_score = classifier.fit(X_train, y_train).decision_function(X_test)
# Compute ROC curve and ROC area for each class
fpr = dict()
tpr = dict()
roc_auc = dict()
for i in range(n_classes):
fpr[i], tpr[i], _ = roc_curve(y_test[:, i], y_score[:, i])
roc_auc[i] = auc(fpr[i], tpr[i])
# Compute micro-average ROC curve and ROC area
fpr["micro"], tpr["micro"], _ = roc_curve(y_test.ravel(), y_score.ravel())
roc_auc["micro"] = auc(fpr["micro"], tpr["micro"])
###Output
_____no_output_____
###Markdown
Plot of a ROC curve for a specific class
###Code
plt.figure()
lw = 2
plt.plot(fpr[2], tpr[2], color='darkorange',
lw=lw, label='ROC curve (area = %0.2f)' % roc_auc[2])
plt.plot([0, 1], [0, 1], color='navy', lw=lw, linestyle='--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Receiver operating characteristic example')
plt.legend(loc="lower right")
plt.show()
###Output
_____no_output_____
###Markdown
Plot ROC curves for the multiclass problem
###Code
# Compute macro-average ROC curve and ROC area
# First aggregate all false positive rates
all_fpr = np.unique(np.concatenate([fpr[i] for i in range(n_classes)]))
# Then interpolate all ROC curves at this points
mean_tpr = np.zeros_like(all_fpr)
for i in range(n_classes):
mean_tpr += interp(all_fpr, fpr[i], tpr[i])
# Finally average it and compute AUC
mean_tpr /= n_classes
fpr["macro"] = all_fpr
tpr["macro"] = mean_tpr
roc_auc["macro"] = auc(fpr["macro"], tpr["macro"])
# Plot all ROC curves
plt.figure()
plt.plot(fpr["micro"], tpr["micro"],
label='micro-average ROC curve (area = {0:0.2f})'
''.format(roc_auc["micro"]),
color='deeppink', linestyle=':', linewidth=4)
plt.plot(fpr["macro"], tpr["macro"],
label='macro-average ROC curve (area = {0:0.2f})'
''.format(roc_auc["macro"]),
color='navy', linestyle=':', linewidth=4)
colors = cycle(['aqua', 'darkorange', 'cornflowerblue'])
for i, color in zip(range(n_classes), colors):
plt.plot(fpr[i], tpr[i], color=color, lw=lw,
label='ROC curve of class {0} (area = {1:0.2f})'
''.format(i, roc_auc[i]))
plt.plot([0, 1], [0, 1], 'k--', lw=lw)
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Some extension of Receiver operating characteristic to multi-class')
plt.legend(loc="lower right")
plt.show()
###Output
_____no_output_____ |
doc/t_to_z_procedure.ipynb | ###Markdown
Properly Transforming T to Z Scores for Large Brain Maps We discovered a strange truncation of strongly negative values when converting from T statistic scores --> P values --> Z scores. First we will show the strangeness. The task behind the map This is a group map for a "story" contrast from a language task from the Human Connectome Project (HCP). For [this task](http://www.sciencedirect.com/science/article/pii/S1053811913005272), there are alternating blocks of doing math problems and listening to a story. This contrast is for the "story" blocks. The map We concatenated each single-subject cope1.nii.gz image representing this contrast in time, for a total of 486 subjects (timepoints), and ran randomise for 5000 iterations (FSL).
###Code
randomise -i OneSamp4D -o OneSampT -1 -T
###Output
_____no_output_____
###Markdown
Viewing the T Statistic Map Now we can read in the file, and first look at the image itself and the T-distribution.
###Code
import matplotlib
import matplotlib.pylab as plt
import numpy as np
%matplotlib inline
import nibabel as nib
from nilearn.plotting import plot_stat_map, plot_roi
from scipy.spatial.distance import pdist
from scipy.stats import norm, t
import seaborn as sns
all_copes_file = "../example/tfMRI_LANGUAGE_STORY.nii_tstat1.nii.gz"
all_copes = nib.load(all_copes_file)
plot_stat_map(all_copes)
print("Here is our map created with randomise for all 486 subjects, for the story contrast")
###Output
/home/tomo/anaconda3/envs/t2z/lib/python3.6/site-packages/nilearn/datasets/__init__.py:96: FutureWarning: Fetchers from the nilearn.datasets module will be updated in version 0.9 to return python strings instead of bytes and Pandas dataframes instead of Numpy arrays.
"Numpy arrays.", FutureWarning)
/home/tomo/anaconda3/envs/t2z/lib/python3.6/site-packages/nilearn/plotting/img_plotting.py:341: FutureWarning: Default resolution of the MNI template will change from 2mm to 1mm in version 0.10.0
anat_img = load_mni152_template()
###Markdown
Now I want to point out something about this map - we have a set of strongly negative outliers.
###Code
# Function to flag outliers
def plot_outliers(image,n_std=6):
mr = nib.load(image)
data = mr.get_data()
mean = data.mean()
std = data.std()
six_dev_up = mean + n_std * std
six_dev_down = mean - n_std*std
empty_brain = np.zeros(data.shape)
empty_brain[data>=six_dev_up] = 1
empty_brain[data<=six_dev_down] = 1
outlier_nii = nib.nifti1.Nifti1Image(empty_brain,affine=mr.get_affine(),header=mr.get_header())
plot_roi(outlier_nii)
plot_outliers(all_copes_file)
###Output
/home/tomo/anaconda3/envs/t2z/lib/python3.6/site-packages/ipykernel_launcher.py:4: DeprecationWarning: get_data() is deprecated in favor of get_fdata(), which has a more predictable return type. To obtain get_data() behavior going forward, use numpy.asanyarray(img.dataobj).
* deprecated from version: 3.0
* Will raise <class 'nibabel.deprecator.ExpiredDeprecationError'> as of version: 5.0
after removing the cwd from sys.path.
/home/tomo/anaconda3/envs/t2z/lib/python3.6/site-packages/ipykernel_launcher.py:12: DeprecationWarning: get_affine method is deprecated.
Please use the ``img.affine`` property instead.
* deprecated from version: 2.1
* Will raise <class 'nibabel.deprecator.ExpiredDeprecationError'> as of version: 4.0
if sys.path[0] == '':
/home/tomo/anaconda3/envs/t2z/lib/python3.6/site-packages/ipykernel_launcher.py:12: DeprecationWarning: get_header method is deprecated.
Please use the ``img.header`` property instead.
* deprecated from version: 2.1
* Will raise <class 'nibabel.deprecator.ExpiredDeprecationError'> as of version: 4.0
if sys.path[0] == '':
###Markdown
Whether those outliers should be there is a separate problem entirely, but for our purposes here, we would want any conversion from T to Z to preserve those outliers. Let's now look at the distribution of the data. Viewing the T Distribution
###Code
data = all_copes.get_data()
data = data[data!=0]
sns.distplot(data.flatten(), label="Original T-Stat Data")
plt.legend()
print("Here is our map created with randomise for all 486 subjects, for the story contrast")
###Output
/home/tomo/anaconda3/envs/t2z/lib/python3.6/site-packages/ipykernel_launcher.py:1: DeprecationWarning: get_data() is deprecated in favor of get_fdata(), which has a more predictable return type. To obtain get_data() behavior going forward, use numpy.asanyarray(img.dataobj).
* deprecated from version: 3.0
* Will raise <class 'nibabel.deprecator.ExpiredDeprecationError'> as of version: 5.0
"""Entry point for launching an IPython kernel.
/home/tomo/anaconda3/envs/t2z/lib/python3.6/site-packages/seaborn/distributions.py:2619: FutureWarning: `distplot` is a deprecated function and will be removed in a future version. Please adapt your code to use either `displot` (a figure-level function with similar flexibility) or `histplot` (an axes-level function for histograms).
warnings.warn(msg, FutureWarning)
###Markdown
We have a heavy left tail, meaning lots of strongly negative values. Converting from T to P-Values Next we will convert T scores into P values by way of the "survival function" from the scipy.stats t module. The survival function is 1 minus the cumulative distribution function (CDF), and gives us the probability (p-value) for each of our random variables (the T scores). The degrees of freedom should be the number of subjects from which the group map was derived, minus 2.
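As an illustrative aside (our own addition, not part of the original analysis), for a moderate T value where no floating-point precision is lost, the survival function agrees with 1 minus the CDF:

```python
from scipy.stats import t

# Illustrative check: sf(x) is the complement of cdf(x).
x, dof = 2.5, 484
p_sf = t.sf(x, df=dof)          # upper-tail p-value via the survival function
p_cdf = 1.0 - t.cdf(x, df=dof)  # same quantity, computed the long way
print(p_sf, p_cdf)              # agree to floating-point precision
```

The agreement only breaks down in the extreme tails, which is exactly where the trouble described below arises.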
###Code
dof=486 - 2
data = all_copes.get_data()
p_values = t.sf(data, df = dof)
p_values[p_values==1] = 0.99999999999999
sns.distplot(p_values.flatten(), label="P-Values from T-Stat Data")
plt.legend()
print("Here are the p-values created from the t-stat map, including all zeros in the map when we calculate")
###Output
/home/tomo/anaconda3/envs/t2z/lib/python3.6/site-packages/ipykernel_launcher.py:2: DeprecationWarning: get_data() is deprecated in favor of get_fdata(), which has a more predictable return type. To obtain get_data() behavior going forward, use numpy.asanyarray(img.dataobj).
* deprecated from version: 3.0
* Will raise <class 'nibabel.deprecator.ExpiredDeprecationError'> as of version: 5.0
/home/tomo/anaconda3/envs/t2z/lib/python3.6/site-packages/seaborn/distributions.py:2619: FutureWarning: `distplot` is a deprecated function and will be removed in a future version. Please adapt your code to use either `displot` (a figure-level function with similar flexibility) or `histplot` (an axes-level function for histograms).
warnings.warn(msg, FutureWarning)
###Markdown
Converting from P-Values to Z-Scores Now we can use the scipy.stats.norm inverse survival function to "undo" the p-values back into normal (Z) scores.
###Code
z_values = norm.isf(p_values)
sns.distplot(z_values.flatten(), label="Z-Values from T-Stat Data")
plt.legend()
print("Here are the z-values created from the t-stat map, including all zeros in the map when we calculate")
###Output
/home/tomo/anaconda3/envs/t2z/lib/python3.6/site-packages/seaborn/distributions.py:2619: FutureWarning: `distplot` is a deprecated function and will be removed in a future version. Please adapt your code to use either `displot` (a figure-level function with similar flexibility) or `histplot` (an axes-level function for histograms).
warnings.warn(msg, FutureWarning)
###Markdown
But now we see something strange. The distribution looks almost truncated. When we look at the new image, the strong negative values (previously outliers in ventricles) aren't there either:
###Code
# Need to make sure we look at the same slices :)
def plot_outliers(image,cut_coords,n_std=6):
mr = nib.load(image)
data = mr.get_data()
mean = data.mean()
std = data.std()
six_dev_up = mean + n_std * std
six_dev_down = mean - n_std*std
empty_brain = np.zeros(data.shape)
empty_brain[data>=six_dev_up] = 1
empty_brain[data<=six_dev_down] = 1
outlier_nii = nib.nifti1.Nifti1Image(empty_brain,affine=mr.get_affine(),header=mr.get_header())
plot_roi(outlier_nii,cut_coords=cut_coords)
Z_nii = nib.nifti1.Nifti1Image(z_values,affine=all_copes.get_affine(),header=all_copes.get_header())
nib.save(Z_nii,"../example/Zimage.nii")
plot_outliers("../example/Zimage.nii",cut_coords=(7,0,13))
###Output
/home/tomo/anaconda3/envs/t2z/lib/python3.6/site-packages/ipykernel_launcher.py:15: DeprecationWarning: get_affine method is deprecated.
Please use the ``img.affine`` property instead.
* deprecated from version: 2.1
* Will raise <class 'nibabel.deprecator.ExpiredDeprecationError'> as of version: 4.0
from ipykernel import kernelapp as app
/home/tomo/anaconda3/envs/t2z/lib/python3.6/site-packages/ipykernel_launcher.py:15: DeprecationWarning: get_header method is deprecated.
Please use the ``img.header`` property instead.
* deprecated from version: 2.1
* Will raise <class 'nibabel.deprecator.ExpiredDeprecationError'> as of version: 4.0
from ipykernel import kernelapp as app
/home/tomo/anaconda3/envs/t2z/lib/python3.6/site-packages/ipykernel_launcher.py:4: DeprecationWarning: get_data() is deprecated in favor of get_fdata(), which has a more predictable return type. To obtain get_data() behavior going forward, use numpy.asanyarray(img.dataobj).
* deprecated from version: 3.0
* Will raise <class 'nibabel.deprecator.ExpiredDeprecationError'> as of version: 5.0
after removing the cwd from sys.path.
/home/tomo/anaconda3/envs/t2z/lib/python3.6/site-packages/ipykernel_launcher.py:12: DeprecationWarning: get_affine method is deprecated.
Please use the ``img.affine`` property instead.
* deprecated from version: 2.1
* Will raise <class 'nibabel.deprecator.ExpiredDeprecationError'> as of version: 4.0
if sys.path[0] == '':
/home/tomo/anaconda3/envs/t2z/lib/python3.6/site-packages/ipykernel_launcher.py:12: DeprecationWarning: get_header method is deprecated.
Please use the ``img.header`` property instead.
* deprecated from version: 2.1
* Will raise <class 'nibabel.deprecator.ExpiredDeprecationError'> as of version: 4.0
if sys.path[0] == '':
/home/tomo/anaconda3/envs/t2z/lib/python3.6/site-packages/numpy/ma/core.py:2832: UserWarning: Warning: converting a masked element to nan.
order=order, subok=True, ndmin=ndmin)
###Markdown
And here is the problem. The outliers are clearly gone, and it's because the distribution has been truncated. Properly Converting T to Z I found [this paper](http://www.stats.uwo.ca/faculty/aim/2010/JSSSnipets/V23N1.pdf), which summarizes the problem. Implementing the Correct Transformation from T to Z This was modified from the code provided in the paper above. Thank you!
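A minimal, scipy-free sketch of the underlying issue (our own illustration, not from the paper): the upper-tail p-value of a strongly negative score sits closer to 1 than float64's machine epsilon (~2.2e-16), so subtracting it from 1 rounds to exactly 1.0 and any inverse CDF applied afterwards has nothing left to work with:

```python
import math

# P(Z <= -12) under a standard normal, via the complementary error function.
z = -12.0
lower_tail = 0.5 * math.erfc(-z / math.sqrt(2))  # ~1.8e-33: tiny but representable
upper_tail = 1.0 - lower_tail                    # the survival-function route

print(lower_tail)          # the left-tail probability survives
print(upper_tail == 1.0)   # True: the tail information has been rounded away
```

Working with the CDF on the appropriate side of the distribution, as the corrected code below does, never forms this doomed subtraction.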
###Code
data = all_copes.get_data()
# Let's select just the nonzero voxels
nonzero = data[data!=0]
# We will store our results here
Z = np.zeros(len(nonzero))
# Select values less than or == 0, and greater than zero
c = np.zeros(len(nonzero))
k1 = (nonzero <= c)
k2 = (nonzero > c)
# Subset the data into two sets
t1 = nonzero[k1]
t2 = nonzero[k2]
# Calculate p values for <=0
p_values_t1 = t.cdf(t1, df = dof)
z_values_t1 = norm.ppf(p_values_t1)
# Calculate p values for > 0
p_values_t2 = t.cdf(-t2, df = dof)
z_values_t2 = -norm.ppf(p_values_t2)
Z[k1] = z_values_t1
Z[k2] = z_values_t2
sns.distplot(Z, label="Z-Values from T-Stat Data")
plt.legend()
###Output
/home/tomo/anaconda3/envs/t2z/lib/python3.6/site-packages/ipykernel_launcher.py:1: DeprecationWarning: get_data() is deprecated in favor of get_fdata(), which has a more predictable return type. To obtain get_data() behavior going forward, use numpy.asanyarray(img.dataobj).
* deprecated from version: 3.0
* Will raise <class 'nibabel.deprecator.ExpiredDeprecationError'> as of version: 5.0
"""Entry point for launching an IPython kernel.
/home/tomo/anaconda3/envs/t2z/lib/python3.6/site-packages/seaborn/distributions.py:2619: FutureWarning: `distplot` is a deprecated function and will be removed in a future version. Please adapt your code to use either `displot` (a figure-level function with similar flexibility) or `histplot` (an axes-level function for histograms).
warnings.warn(msg, FutureWarning)
###Markdown
Viewing the new Z Score Map Did we fix it?
###Code
empty_nii = np.zeros(all_copes.shape)
empty_nii[all_copes.get_data()!=0] = Z
Z_nii_fixed = nib.nifti1.Nifti1Image(empty_nii,affine=all_copes.get_affine(),header=all_copes.get_header())
nib.save(Z_nii_fixed,"../example/Zfixed.nii")
plot_stat_map(Z_nii_fixed)
###Output
/home/tomo/anaconda3/envs/t2z/lib/python3.6/site-packages/ipykernel_launcher.py:2: DeprecationWarning: get_data() is deprecated in favor of get_fdata(), which has a more predictable return type. To obtain get_data() behavior going forward, use numpy.asanyarray(img.dataobj).
* deprecated from version: 3.0
* Will raise <class 'nibabel.deprecator.ExpiredDeprecationError'> as of version: 5.0
/home/tomo/anaconda3/envs/t2z/lib/python3.6/site-packages/ipykernel_launcher.py:3: DeprecationWarning: get_affine method is deprecated.
Please use the ``img.affine`` property instead.
* deprecated from version: 2.1
* Will raise <class 'nibabel.deprecator.ExpiredDeprecationError'> as of version: 4.0
This is separate from the ipykernel package so we can avoid doing imports until
/home/tomo/anaconda3/envs/t2z/lib/python3.6/site-packages/ipykernel_launcher.py:3: DeprecationWarning: get_header method is deprecated.
Please use the ``img.header`` property instead.
* deprecated from version: 2.1
* Will raise <class 'nibabel.deprecator.ExpiredDeprecationError'> as of version: 4.0
This is separate from the ipykernel package so we can avoid doing imports until
|
examples/Tutorial.ipynb | ###Markdown
Tutorial: converting, writing, and reading with heparchy Here's a quick (very incomplete) primer on using `heparchy`'s utilities to convert, write, and read files in a hierarchical and high-performance way. (Proper Sphinx documentation coming soon.)
###Code
from tqdm import tqdm # some nice progress bars :)
###Output
_____no_output_____
###Markdown
Write events hierarchically under processes within a database file
###Code
from heparchy.write.hdf import HdfWriter
###Output
_____no_output_____
###Markdown
As long as you provide `HdfWriter` with `numpy` arrays in the correct shape and data type, you can source your data however you want. In this case, we make use of the built-in HepMC file parser, `heparchy.read.hepmc.HepMC`.
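To emphasise that point, here is a hypothetical sketch of hand-built toy arrays (random made-up data, nothing to do with HepMC) in the kinds of shapes and dtypes the writer methods expect:

```python
import numpy as np

# Made-up toy event: any data source works, provided shapes/dtypes fit.
rng = np.random.default_rng(0)
n = 10
pmu = rng.normal(size=(n, 4))          # one four-momentum row per particle
pdg = np.full(n, 21, dtype=np.int32)   # PDG codes (all gluons here, say)
final = np.zeros(n, dtype=bool)
final[-3:] = True                      # pretend the last three are final state

# Arrays like these could then be handed to event.set_pmu(pmu),
# event.set_pdg(pdg) and event.set_mask(name='final', data=final)
# inside the writer contexts shown in the full example further below.
print(pmu.shape, pdg.dtype, int(final.sum()))
```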
###Code
from heparchy.read.hepmc import HepMC
###Output
_____no_output_____
###Markdown
The `heparchy.read.hepmc.HepMC` file parser returns an object whose `data` property is a `heparchy.data.event.ShowerData` object. This has some convenience methods which traverse the shower looking for a user-defined signal vertex, and then follow one of the produced particles, identifying all of its descendants with a boolean mask. To make use of this functionality during the data conversion, we will also import `heparchy.data.event.SignalVertex`, and define some vertices for this process.
###Code
from heparchy.data.event import SignalVertex
signal_vertices = [
SignalVertex( # top decay
incoming=[6], outgoing=[24,5], # defines the vertex
follow=[24,5] # specifies which of the outgoing particles to track in the shower
),
SignalVertex( # anti-top decay
incoming=[-6], outgoing=[-24,-5],
follow=[-5] # we can be selective about which outgoing particles to follow
),
]
###Output
_____no_output_____
###Markdown
Heparchy uses context managers and iterators to improve safety and speed, and to remove boilerplate. This does lead to a lot of nesting, but the result is fairly intuitive: 1. create a file to store the data; 2. add "processes" to that file (however you want to define them, _eg._ `p p > t t~`); 3. within those processes, nest events, each of which contains datasets. There are context managers for each of those stages which handle the fiddly bits and standardise the process. The returned objects then provide methods to write the datasets, as in the example below. The example below also contains the `HepMC` file parser, which itself opens HepMC files by use of a context manager, and the returned object may be iterated over to yield all of the events. So that's another two layers of nesting (yay), but pretty convenient.
###Code
with HdfWriter('showers.hdf5') as hep_file: # first we create the file
with hep_file.new_process('top') as process: # then write a process
with HepMC('/home/jlc1n20/messy/test.hepmc') as raw_file: # load in data to convert from HepMC
for shower in tqdm(raw_file): # iterate through the events in the HepMC file
signal_masks = shower.signal_mask(signal_vertices)
# signal_masks is a list in same order as signal_vertices
# each element is a dictionary, keyed by pdg code of followed particle
W_mask = signal_masks[0][24]
b_mask = signal_masks[0][5]
anti_b_mask = signal_masks[1][-5]
with process.new_event() as event: # create event for writing
# add datasets - each is optional!
event.set_edges(shower.edges) # can omit if only storing final state
event.set_pmu(shower.pmu)
event.set_pdg(shower.pdg)
event.set_mask(name='final', data=shower.final)
event.set_mask(name='W_mask', data=W_mask)
event.set_mask(name='b_mask', data=b_mask)
event.set_mask(name='anti_b_mask', data=anti_b_mask)
###Output
4999it [09:15, 9.00it/s]
###Markdown
Read data from heparchy format
###Code
from heparchy.read.hdf import HdfReader
###Output
_____no_output_____
###Markdown
Iteratively read all events of a given process Reading data follows a similar hierarchical structure to writing data, as above: 1. open the heparchy data file; 2. read processes given by name; 3. iterate over the nested events, extracting their datasets. The first two of these tasks are handled with context managers, but the final task is achieved simply by iterating over the process object, which provides event objects with properties and methods that efficiently read from the heparchy file.
###Code
with HdfReader('showers.hdf5') as hep_file:
process = hep_file.read_process(name='top')
for shower in tqdm(process):
pmu = shower.pmu
pdg = shower.pdg
num_pcls = shower.count
name = shower.name
edges = shower.edges
final = shower.mask('final')
W_mask = shower.mask('W_mask')
###Output
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 4999/4999 [00:12<00:00, 411.45it/s]
###Markdown
12 seconds - not a bad speedup, having needed 11 minutes to read the data from HepMC! Read individual events If you only need to access one event at a time, or are using this library within dataloaders which extract datasets in their own order, _eg._ for `pytorch`, you need not iterate over the process, and can instead use the `read_event` method.
###Code
with HdfReader('showers.hdf5') as hep_file:
process = hep_file.read_process(name='top')
num_events = len(process)
shower = process.read_event(128)
pmu = shower.pmu
pdg = shower.pdg
num_pcls = shower.count
name = shower.name
edges = shower.edges
final = shower.mask('final')
W_mask = shower.mask('W_mask')
b_mask = shower.mask('b_mask')
anti_b_mask = shower.mask('anti_b_mask')
pmu
###Output
_____no_output_____
###Markdown
Sanity check: the extracted data Just to calm any misgivings about what is contained in all of these properties, it's all just strings, integers, and numpy arrays. See below.
###Code
name
num_pcls
pmu
pdg
edges
final
W_mask
###Output
_____no_output_____
###Markdown
What next? You can, of course, now do whatever you want with this data. Below I list some useful idioms for handling the data afterwards. Combining masks
###Code
import numpy as np
###Output
_____no_output_____
###Markdown
Now that we have access to some boolean masks over the data, we can combine them to perform simple queries over the particle data. _eg._ to get the final state particles which descended from the W, simply perform a bitwise `and`.
###Code
W_final = np.bitwise_and(W_mask, final)
pmu[W_final] # extract momenta of final W descendants
###Output
_____no_output_____
###Markdown
If you want to do the same thing for the b-quark descendants, but also to remove neutrinos because they aren't going to show up in detector data, you can do a boolean comparison over the `pdg` output array, and perform the `and` operation over all three masks using the `ufunc.reduce` method. _ie._
###Code
neutrino_pdgs = [12, 14, 16]
neutrino_filter = ~np.isin(np.abs(pdg), neutrino_pdgs)
b_detectable = np.bitwise_and.reduce([b_mask, final, neutrino_filter])
pmu[b_detectable] # extract momenta of detectable b descendants
###Output
_____no_output_____
###Markdown
Querying events via DataFrames While this is fine for basic manipulations, it does get rather messy when more detailed data extraction is required. As a convenience, `ShowerData` objects have a method which returns a `pandas.DataFrame` object, which has extremely powerful vectorised aggregation and query methods.
###Code
from heparchy.data.event import ShowerData
with HdfReader('showers.hdf5') as hep_file:
process = hep_file.read_process(name='top')
event = process.read_event(1202) # event number chosen at whim
shower = ShowerData(
edges=event.edges,
pmu=event.pmu,
pdg=event.pdg,
final=event.mask('final'),
)
shower_df = shower.to_pandas(data=['pdg', 'pmu', 'final', 'pt', 'eta', 'phi'])
shower_df
###Output
_____no_output_____
###Markdown
This reconstructs the same dataclass object that was being handed to us from the `heparchy.hepmc.HepMC` parser, hence we can compute signal masks for the event any time we like, not just when parsing HepMC files\*. Let's compute them again, this time following the W- as well, because now we've demonstrated the ability to be selective, it's just annoying not to have it. We can then add this to the DataFrame and do some glorious compound queries on the whole dataset!---\* The data doesn't need to be from a HepMC file at all. As long as you can format the shower into numpy arrays, you can pass it to the `ShowerData` object constructor.
###Code
signal_vertices = [
SignalVertex(incoming=[6], outgoing=[24,5], follow=[24,5]),
SignalVertex(incoming=[-6], outgoing=[-24,-5], follow=[-24, -5]),
]
signal_masks = shower.signal_mask(signal_vertices)
shower_df['W'] = signal_masks[0][24]
shower_df['b'] = signal_masks[0][5]
shower_df['anti_W'] = signal_masks[1][-24]
shower_df['anti_b'] = signal_masks[1][-5]
shower_df
###Output
_____no_output_____
###Markdown
Nice, eh? If we wish to perform cuts on the data, for instance to filter out particles which wouldn't be observed in the final state, it is trivial to extract this data using `query`.
###Code
nu_pdgs = (12, 14, 16)
detect_df = shower_df.query('final and pt > 0.5 and abs(eta) < 2.5 and abs(pdg) not in @nu_pdgs')
###Output
_____no_output_____
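###Markdown
As an aside, the `query` idioms used here can be checked on toy data: `@` references a local Python variable, and `not in` performs membership filtering (implemented via `isin`). The columns below are illustrative stand-ins, not the shower data.
###Code
```python
import pandas as pd

# Toy DataFrame standing in for the shower data (hypothetical values):
# `final` is a boolean mask column, `abs_pdg` a precomputed |pdg| column.
nu_pdgs = (12, 14, 16)
toy_df = pd.DataFrame({
    "abs_pdg": [11, 12, 14, 22],
    "final": [True, True, True, False],
})
# Keep final-state rows whose |pdg| is not a neutrino code.
kept = toy_df.query("final and abs_pdg not in @nu_pdgs")
```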
###Markdown
In one line, we have extracted particles in the final state, while filtering out low-transverse-momentum particles, high-pseudorapidity particles, and neutrinos. This data set can be further queried, and aggregations performed over it to calculate useful quantities. For instance, the total jet transverse momentum for the W- boson may be calculated as follows:
###Code
anti_W_df = detect_df.query('anti_W')
pt = np.sqrt(anti_W_df['x'].sum() ** 2 + anti_W_df['y'].sum() ** 2)
pt
###Output
_____no_output_____
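###Markdown
The same summed-four-momentum pattern extends to other observables. As a hedged sketch (assuming the momentum DataFrame exposes columns `x`, `y`, `z`, `e`, which is an illustrative layout - substitute the real column names), the invariant mass of a candidate can be reconstructed from the summed components:
###Code
```python
import numpy as np
import pandas as pd

# Hypothetical four-momentum table (GeV) standing in for a query result
# such as anti_W_df above; the column names are assumptions.
candidate_df = pd.DataFrame({
    "x": [10.0, -3.0],
    "y": [5.0, 2.0],
    "z": [20.0, 1.0],
    "e": [25.0, 4.0],
})
# Sum each component over the candidate's constituents, then apply
# m^2 = E^2 - |p|^2.
px, py, pz, e = (candidate_df[c].sum() for c in ("x", "y", "z", "e"))
mass = np.sqrt(e ** 2 - px ** 2 - py ** 2 - pz ** 2)
```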
###Markdown
Introduction to pyNS pyNS is a Python library to programmatically access the Neuroscout API. pyNS lets you query the API and create analyses, without having to mess around with building any JSON requests yourself. In this tutorial, I'll demonstrate how to query the API to create your own analysis.
###Code
from pyns import Neuroscout
api = Neuroscout('[email protected]', 'yourpassword')
###Output
_____no_output_____
###Markdown
The `Neuroscout` object will be your main entry point to the API. You can instantiate this object without any credentials to access public API routes, or preferably with your Neuroscout credentials to be able to create and save your analyses. The `Neuroscout` object has links to each main API route, and each of these links implements the standard HTTP verbs that are supported by each route, such as `datasets`, `runs`, `predictors`, etc... Querying datasets and runs
###Code
api.datasets.get()
###Output
_____no_output_____
###Markdown
This request returns a list of two datasets, with information about each dataset, as well as the run IDs associated with it. Let's focus on the dataset `life`. If we want more information on the specific runs within this dataset, we can query the `runs` route, using the dataset_id associated with `life`
###Code
dataset = api.datasets.get()[1]
api.runs.get(dataset_id=dataset['id'])
###Output
_____no_output_____
###Markdown
Using this information, we can decide which runs to focus on for our analysis. Querying predictors Now let's take a look at the predictors associated with this dataset
###Code
api.predictors.get(run_id=dataset['runs'])
###Output
_____no_output_____
###Markdown
A bunch of useful information to help me choose some features! Let's keep it simple and go with 'rmse' (sound volume) and 'FramewiseDisplacement': Building an Analysis Now, let's build an analysis. For this, we can use the `Analysis` class, which makes it easy to build an Analysis locally, by mirroring the Analysis representation on the server. To build an `Analysis` object, we can use the `create_analysis` method, which pre-populates our `Analysis` object with the relevant information, including a pre-built BIDS model, and registers it to the API.
###Code
analysis = api.analyses.create_analysis(
dataset_name='Life', name='My new analysis!',
predictor_names=['rmse', 'FramewiseDisplacement'],
hrf_variables=['rmse'],
subject=['rid000001', 'rid000005']
)
###Output
_____no_output_____
###Markdown
This newly created analysis has been assigned a unique ID by the Neuroscout API
###Code
analysis.hash_id
# Some properties are read-only and came from the server
analysis.created_at
###Output
_____no_output_____
###Markdown
The analysis creation function has found the runs relevant to the subjects we're interested in, and created a basic BIDS-Model for our analysis:
###Code
analysis.model
analysis.runs
# Neuroscout API Predictor IDs
analysis.predictors
###Output
_____no_output_____
###Markdown
We can edit this Analysis object to fill in any other Analysis details, and push them to the Neuroscout API:
###Code
analysis.description = "This is my analysis, and it's probably the best"
analysis.push()
###Output
_____no_output_____
###Markdown
Reports Now that we have created and designed an analysis, we can generate some reports based on our design. Let's generate a report using only a single run
###Code
analysis.generate_report(run_id=analysis.runs[0])
###Output
_____no_output_____
###Markdown
This report should take a few seconds to a few minutes to compile, and we can check its status:
###Code
report = analysis.get_report(run_id=analysis.runs[0])
report
report
###Output
_____no_output_____
###Markdown
Great, our report was successfully generated with no errors. Now let's take a look at the resulting design matrix:
###Code
from IPython.display import Image
Image(url=report['result']['design_matrix_plot'][0])
###Output
_____no_output_____
###Markdown
Compiling the analysis Finally, now that we are happy with our analysis, we can ask Neuroscout to verify the analysis and generate an analysis bundle for us
###Code
analysis.compile()
analysis.get_status()
###Output
_____no_output_____
###Markdown
Great! Our analysis passed with no errors. We can now run our analysis using the `neuroscout-cli`. For more information on the `neuroscout-cli`, see here: https://github.com/neuroscout/neuroscout-cli Cloning our analysis Now that we've gone off and run our analysis, we realized we want to make some changes. In this case, I'm just going to change the analysis name.With Neuroscout this is easy, because I simply clone my previous analysis, and take off from I left off
###Code
new_analysis = analysis.clone()
new_analysis.hash_id
new_analysis.name = 'My new analysis name!'
###Output
_____no_output_____
###Markdown
However, what if we wanted to take this same model, and apply it to a different dataset? For example, `dataset_id` 5, which corresponds to SherlockMerlin. To do so, we have to use the `fill` function to get the correct `predictors` and `runs`, as these IDs correspond to the original dataset
###Code
new_analysis.predictors = []
new_analysis.runs = []
new_analysis.dataset_id = 5
new_analysis.fill()
###Output
_____no_output_____
###Markdown
This function automatically filled in all available runs for dataset_id = 5, and found the corresponding predictor ids based on the names used in the model. We can now compile this cloned analysis.
###Code
new_analysis.compile()
new_analysis.get_status()
###Output
_____no_output_____
###Markdown
funcX Tutorial funcX is a Function-as-a-Service (FaaS) platform for science that enables you to register functions in a cloud-hosted service and then reliably execute those functions on a remote funcX endpoint. This tutorial is configured to use a tutorial endpoint hosted by the funcX team. You can set up and use your own endpoint by following the [funcX documentation](https://funcx.readthedocs.io/en/latest/endpoints.html) funcX Python SDK The funcX Python SDK provides programming abstractions for interacting with the funcX service. Before running this tutorial locally, you should first install the funcX SDK as follows: $ pip install funcx (If you are running on binder, we've already done this for you in the binder environment.) The funcX SDK exposes a `FuncXClient` object for all interactions with the funcX service. In order to use the funcX service, you must first authenticate using one of hundreds of supported identity providers (e.g., your institution, ORCID, Google). As part of the authentication process, you must grant permission for funcX to access your identity information (to retrieve your email address), Globus Groups management access (to share functions and endpoints), and Globus Search (to discover functions and endpoints).
###Code
from funcx.sdk.client import FuncXClient
fxc = FuncXClient()
###Output
_____no_output_____
###Markdown
Basic usageThe following example demonstrates how you can register and execute a function. Registering a functionfuncX works like any other FaaS platform: you must first register a function with funcX before being able to execute it on a remote endpoint. The registration process will serialize the function body and store it securely in the funcX service. As we will see below, you may share functions with others and discover functions shared with you.When you register a function, funcX will return a universally unique identifier (UUID) for it. This UUID can then be used to manage and invoke the function.
###Code
def hello_world():
return "Hello World!"
func_uuid = fxc.register_function(hello_world)
print(func_uuid)
###Output
_____no_output_____
###Markdown
Running a function To invoke a function, you must provide a) the function's UUID; and b) the `endpoint_id` of the endpoint on which you wish to execute that function. Note: here we use the public funcX tutorial endpoint; you may change the `endpoint_id` to the UUID of any endpoint on which you have permission to execute functions. funcX functions are designed to be executed remotely and asynchronously. To avoid synchronous invocation, the result of a function invocation (called a `task`) is a UUID, which may be introspected to monitor execution status and retrieve results. The funcX service will manage the reliable execution of a task, for example, by queueing tasks when the endpoint is busy or offline and retrying tasks in case of node failures.
###Code
tutorial_endpoint = '4b116d3c-1703-4f8f-9f6f-39921e5864df' # Public tutorial endpoint
res = fxc.run(endpoint_id=tutorial_endpoint, function_id=func_uuid)
print(res)
###Output
_____no_output_____
###Markdown
Retrieving resultsWhen the task has completed executing, you can access the results via the funcX client as follows:
###Code
fxc.get_result(res)
###Output
_____no_output_____
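###Markdown
If the task has not yet finished, a simple polling loop is the usual pattern. The sketch below is a hedged illustration built around the `pending` field that `fxc.get_task` returns elsewhere in this tutorial; the stub client is purely illustrative and only shows the expected call pattern.
###Code
```python
import time

# Hedged sketch: poll a task until it is no longer pending, then fetch
# its result. Adjust field names to match your client's actual responses.
def wait_for_result(client, task_id, interval=3):
    while client.get_task(task_id)['pending'] == 'True':
        time.sleep(interval)
    return client.get_result(task_id)

# A stand-in client (hypothetical) that reports pending twice, then done.
class _StubClient:
    def __init__(self):
        self._polls = 0

    def get_task(self, task_id):
        self._polls += 1
        return {'pending': 'True' if self._polls < 3 else 'False'}

    def get_result(self, task_id):
        return 'Hello World!'

value = wait_for_result(_StubClient(), 'task-id', interval=0)
```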
###Markdown
Functions with arguments funcX supports registration and invocation of functions with arbitrary arguments and returned parameters. funcX will serialize any \*args and \*\*kwargs when invoking a function, and it will serialize any return parameters or exceptions. Note: funcX uses standard Python serialization libraries (e.g., Pickle, Dill). It also limits the size of input arguments and returned parameters to 5 MB. The following example shows a function that computes the sum of a list of input arguments. First we register the function as above:
###Code
def funcx_sum(items):
return sum(items)
sum_function = fxc.register_function(funcx_sum)
###Output
_____no_output_____
###Markdown
When invoking the function, you can pass in arguments like any other function, either by position or with keyword arguments.
###Code
items = [1, 2, 3, 4, 5]
res = fxc.run(items, endpoint_id=tutorial_endpoint, function_id=sum_function)
print (fxc.get_result(res))
###Output
_____no_output_____
###Markdown
Functions with dependencies funcX requires that functions explicitly state all dependencies within the function body. It also assumes that the dependent libraries are available on the endpoint on which the function will execute. For example, in the following function we explicitly import the `date` class from the `datetime` module.
###Code
def funcx_date():
from datetime import date
return date.today()
date_function = fxc.register_function(funcx_date)
res = fxc.run(endpoint_id=tutorial_endpoint, function_id=date_function)
print (fxc.get_result(res))
###Output
_____no_output_____
###Markdown
Calling external applications Depending on the configuration of the funcX endpoint, you can often invoke external applications that are available in the endpoint environment.
###Code
def funcx_echo(name):
import os
return os.popen("echo Hello %s" % name).read()
echo_function = fxc.register_function(funcx_echo)
res = fxc.run("World", endpoint_id=tutorial_endpoint, function_id=echo_function)
print (fxc.get_result(res))
###Output
_____no_output_____
###Markdown
Catching exceptions When functions fail, the exception is captured and serialized by the funcX endpoint, and is re-raised when you try to get the result. In the following example, the 'deterministic failure' exception is raised when `fxc.get_result` is called on the failing function.
###Code
def failing():
raise Exception("deterministic failure")
failing_function = fxc.register_function(failing)
res = fxc.run(endpoint_id=tutorial_endpoint, function_id=failing_function)
fxc.get_result(res)
###Output
_____no_output_____
###Markdown
Running functions many times After registering a function, you can invoke it repeatedly. The following example shows how the Monte Carlo method can be used to estimate pi. Specifically, if a circle with radius $r$ is inscribed inside a square with side length $2r$, the area of the circle is $\pi r^2$ and the area of the square is $(2r)^2$. Thus, if $N$ uniformly-distributed random points are dropped within the square, approximately $N\pi/4$ will fall inside the circle.
###Code
import time
# function that estimates pi by placing points in a box
def pi(num_points):
from random import random
inside = 0
for i in range(num_points):
x, y = random(), random() # Drop a random point in the box.
if x**2 + y**2 < 1: # Count points within the circle.
inside += 1
return (inside*4 / num_points)
# register the function
pi_function = fxc.register_function(pi)
# execute the function 3 times
estimates = []
for i in range(3):
estimates.append(fxc.run(10**5, endpoint_id=tutorial_endpoint, function_id=pi_function))
# wait for tasks to complete
time.sleep(5)
# wait for all tasks to complete
for e in estimates:
while fxc.get_task(e)['pending'] == 'True':
time.sleep(3)
# get the results and calculate the total
results = [fxc.get_result(i) for i in estimates]
total = 0
for r in results:
total += r
# print the results
print("Estimates: %s" % results)
print("Average: {:.5f}".format(total/len(results)))
###Output
_____no_output_____
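###Markdown
Before dispatching a function to a remote endpoint, it can be handy to sanity-check its logic locally. A local, seeded version of the same Monte Carlo estimator (a sketch, not part of the funcX API) looks like this:
###Code
```python
import random

# Local (non-funcX) version of the pi estimator above; a fixed seed makes
# the run reproducible for testing.
def pi_estimate(num_points, seed=0):
    rng = random.Random(seed)
    inside = sum(
        1 for _ in range(num_points)
        if rng.random() ** 2 + rng.random() ** 2 < 1  # point inside circle?
    )
    return 4 * inside / num_points

est = pi_estimate(100_000)
```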
###Markdown
Describing and discovering functions funcX manages a registry of functions that can be shared, discovered and reused. When registering a function, you may choose to set a description to support discovery, as well as making it `public` (so that others can run it) and/or `searchable` (so that others can discover it).
###Code
def hello_world():
return "Hello World!"
func_uuid = fxc.register_function(hello_world, description="hello world function", public=True, searchable=True)
print(func_uuid)
###Output
_____no_output_____
###Markdown
You can search previously registered functions to which you have access using `search_function`. The first parameter is matched against all fields, such as author, description, function name, and function source. You can navigate through pages of results with the `offset` and `limit` keyword args. The object returned is a simple wrapper around a list, so you can index into it, and it can also be pretty-printed as a table.
###Code
search_results = fxc.search_function("hello", offset=0, limit=5)
print(search_results)
###Output
_____no_output_____
###Markdown
Managing endpoints funcX endpoints advertise whether or not they are online as well as information about their available resources, queued tasks, and other information. If you are permitted to execute functions on an endpoint, you can also retrieve the status of the endpoint. The following example shows how to look up the status (online or offline) and the number of waiting tasks and workers connected to the endpoint.
###Code
endpoint_status = fxc.get_endpoint_status(tutorial_endpoint)
print("Status: %s" % endpoint_status['status'])
print("Workers: %s" % endpoint_status['logs'][0]['total_workers'])
print("Tasks: %s" % endpoint_status['logs'][0]['outstanding_tasks'])
###Output
_____no_output_____
###Markdown
Advanced features funcX provides several features that address more advanced use cases. Running batches After registering a function, you might want to invoke that function many times without making individual calls to the funcX service. Such cases occur when running Monte Carlo simulations, ensembles, and parameter sweep applications. funcX provides a batch interface that enables specification of a range of function invocations. To use this interface, you must create a funcX batch object and then add each invocation to that object. You can then pass the constructed object to the `batch_run` interface.
###Code
def squared(x):
return x**2
squared_function = fxc.register_function(squared)
inputs = list(range(10))
batch = fxc.create_batch()
for x in inputs:
batch.add(x, endpoint_id=tutorial_endpoint, function_id=squared_function)
batch_res = fxc.batch_run(batch)
###Output
_____no_output_____
###Markdown
Similarly, funcX provides an interface to retrieve the status of the entire batch of invocations.
###Code
fxc.get_batch_result(batch_res)
###Output
_____no_output_____
###Markdown
funcX Tutorial funcX is a Function-as-a-Service (FaaS) platform for science that enables you to convert almost any computing resource into a high-performance function serving device. Deploying a funcX endpoint will integrate your resource into the function serving fabric, allowing you to dynamically send, monitor, and receive results from function invocations. funcX is built on top of Parsl, allowing you to connect your endpoint to large compute resources via traditional batch queues, where funcX will dynamically provision, use, and release resources on demand to fulfill function requests. Here we provide an example of using funcX to register a function and run it on a publicly available tutorial endpoint. We start by creating a funcX client to interact with the service.
###Code
from funcx.sdk.client import FuncXClient
fxc = FuncXClient()
###Output
_____no_output_____
###Markdown
Here we define the tutorial endpoint to be used in this demonstration. Because the tutorial endpoint is Kubernetes-based, we select a simple python3.6 container that will be used during execution.
###Code
def funcx_sum(items):
return sum(items)
func_uuid = fxc.register_function(funcx_sum,
description="A sum function")
print(func_uuid)
payload = [1, 2, 3, 4, 66]
endpoint_uuid = '840b214f-ea5c-4d0c-b2b8-ea591634065b'
res = fxc.run(payload, endpoint_id=endpoint_uuid, function_id=func_uuid)
print(res)
fxc.get_result(res)
###Output
_____no_output_____
###Markdown
Loading Data Load train data stored in CSV format using Pandas. Pretty much any format is acceptable, just some form of text and accompanying labels. Modify according to your task. For the purpose of this tutorial, we are using a sample from the New York Times Front Page Dataset (Boydstun, 2014).
###Code
import pandas as pd

train_df = pd.read_csv("../data/tutorial_train.csv")
###Output
_____no_output_____
###Markdown
Loading test data
###Code
test_df = pd.read_csv("../data/tutorial_test.csv")
###Output
_____no_output_____
###Markdown
Just to get an idea of what this dataset looks like: paired data consisting of freeform text accompanied by supervised labels for the particular task. Here the text is headlines of news stories and the label categorizes them by subject. We have a total of 25 possible labels, each represented by a separate number.
###Code
print(len(train_df.label.values))
train_df.head()
print(train_df.text[:10].tolist(), train_df.label[:10].tolist())
###Output
['AIDS in prison, treatment costs overwhelm prison budgets', 'olympics security', 'police brutality', 'Iranian nuclear program; deal with European Union and its leaving of Iran free to develop plutonium.', 'terror alert raised', 'Job report shows unexpected vigor for US economy', "Clinton proposes West Bank Plan to Isreal's Prime Minister Netanyahu", 'Senators debate Iraq War policy', 'Myrtle Beach', 'china visit'] [12, 19, 12, 16, 16, 5, 19, 16, 14, 19]
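###Markdown
To inspect the class balance, the generic pandas `value_counts` idiom works well. The miniature DataFrame below is built from the ten labels printed above just so the snippet is self-contained; on the real data you would call `train_df.label.value_counts()` directly.
###Code
```python
import pandas as pd

# Stand-in for train_df, using the ten labels shown in the output above.
mini_df = pd.DataFrame({"label": [12, 19, 12, 16, 16, 5, 19, 16, 14, 19]})
# Count how many examples carry each label.
label_counts = mini_df["label"].value_counts()
```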
###Markdown
Learning Parameters These are training arguments that you would use to train the classifier. For the purposes of the tutorial we set some sample values. Presumably in a real case you would perform a grid search or random search CV.
###Code
lr = 1e-3
epochs = 2
print("Learning Rate ", lr)
print("Train Epochs ", epochs)
###Output
Learning Rate 0.001
Train Epochs 2
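###Markdown
The grid search mentioned above can be sketched as a loop over candidate settings. This is a hedged illustration: `validation_score` is a placeholder function (not part of this tutorial's code) that in practice would train the classifier with each setting and return a validation metric.
###Code
```python
from itertools import product

# Placeholder scoring function; replace with a real train-and-validate step.
# (It is rigged here to favour the tutorial's sample values, lr=1e-3, epochs=2.)
def validation_score(lr, epochs):
    return -abs(lr - 1e-3) - abs(epochs - 2)

# Candidate hyperparameter grid (illustrative values).
grid = {"lr": [1e-3, 1e-4, 1e-5], "epochs": [2, 3, 5]}

# Evaluate every (lr, epochs) combination and keep the best-scoring one.
best_lr, best_epochs = max(
    product(grid["lr"], grid["epochs"]),
    key=lambda params: validation_score(*params),
)
```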
###Markdown
Initialise model 1. The first argument indicates the Roberta architecture (alternatives - Bert, XLNet... as provided by Huggingface), and is also used to select the right tokenizer and classification head. 2. The second argument provides the initialisation point as provided by Huggingface [here](https://huggingface.co/transformers/pretrained_models.html). Examples - roberta-base, roberta-large, gpt2-large... 3. The tokenizer accepts the freeform text input and transforms it into a sequence of tokens suitable for input to the transformer. The transformer architecture processes these before passing them on to the classifier head, which transforms this representation into the label space. 4. The number of labels is specified below to initialise the classification head appropriately; you would change this as per the classification task. 5. You can see the training args set above are used in the model initialisation below. 6. Pass in the training arguments as initialised; note especially the output directory where the model is to be saved and where training logs will be written. The overwrite output directory parameter is a safeguard in case you're rerunning the experiment. Similarly, if you're rerunning the same experiment with different parameters, you might not want to reprocess the input every time - the first time it's done, it is cached so you can reuse it. fp16 refers to floating point precision, which you set according to the GPUs available to you; it shouldn't affect the classification result, just the performance.
###Code
model = TransformerModel('roberta', 'roberta-base', num_labels=25, reprocess_input_data=True, num_train_epochs=epochs, learning_rate=lr,
output_dir='./saved_model/', overwrite_output_dir=True, fp16=False)
###Output
_____no_output_____
###Markdown
Run training
###Code
model.train(train_df['text'], train_df['label'])
###Output
Starting Epoch: 0
Starting Epoch: 1
Training of roberta model complete. Saved to ./saved_model/.
###Markdown
To see more in-depth logs, set the flag show_running_loss=True on the call to train_model. Inference from model At training time the model is saved to the output directory that was passed in at initialization. We can either keep using the same model object, or load it from the directory where it was saved. In this example we show the loading, to illustrate how you would do the same. This is helpful when you want to train and save a classifier and use it sporadically. For example, in an online setting where you have some labelled training data, you would train and save a model, and then load and use it to classify tweets as your collection pipeline progresses.
###Code
model = TransformerModel('roberta', 'roberta-base', num_labels=25, location="./saved_model/")
###Output
_____no_output_____
###Markdown
Evaluate on test set At inference time we have access to the model outputs, which we can use to make predictions as shown below. Similarly, you could perform any empirical analysis on the output before/after saving it. Typically you would save the results for replication purposes. You can use the model outputs as you would with a normal Pytorch model; here we just show label predictions and accuracy. In this tutorial we only used a fraction of the available data, which is why the actual accuracy is not great. For the full results of our experiments, check out our paper.
###Code
import pickle
import numpy as np

result, model_outputs, wrong_predictions = model.evaluate(test_df['text'], test_df['label'])
preds = np.argmax(model_outputs, axis = 1)
len(test_df), len(preds)
correct = 0
labels = test_df['label'].tolist()
for i in range(len(labels)):
if preds[i] == labels[i]:
correct+=1
accuracy = correct/len(labels)
print("Accuracy: ", accuracy)
pickle.dump(model_outputs, open("../model_outputs.pkl", "wb"))
###Output
_____no_output_____
###Markdown
Run inference This is the use case when you only have a new set of documents and no labels. For example, if we just want to make predictions on a set of new text documents without loading a pandas DataFrame, i.e. if you just have a list of texts, predictions can be made as shown below. Note that here you have both the predictions and the model outputs.
###Code
texts = test_df['text'].tolist()
preds, model_outputs = model.predict(texts)
correct = 0
for i in range(len(labels)):
if preds[i] == labels[i]:
correct+=1
accuracy = correct/len(labels)
print("Accuracy: ", accuracy)
###Output
Accuracy: 0.23947895791583165
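###Markdown
The accuracy loops above can also be written as a vectorised one-liner with numpy. The arrays below are illustrative stand-ins for `preds` and the test labels, not the tutorial's actual data.
###Code
```python
import numpy as np

# Toy predictions and ground-truth labels (three of four agree).
preds = np.array([12, 19, 12, 16])
labels = np.array([12, 19, 5, 16])

# Mean of the elementwise-equality mask is exactly the accuracy.
accuracy = float(np.mean(preds == labels))
```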
###Markdown
Tutorial
###Code
import pandas as pd
from autoc import DataExploration, NaImputer, PreProcessor
from autoc.naimputer import missing_map
from autoc.outliersdetection import OutliersDetection
from autoc.utils.getdata import get_dataset
from autoc.utils.helpers import cserie
%matplotlib inline
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Titanic dataset
###Code
# Loading Titanic dataset
titanic = get_dataset('titanic')
titanic.head()
###Output
_____no_output_____
###Markdown
DataExploration The DataExploration class is designed to provide helpers for basic data exploration tasks
###Code
# Instantiate the class this way
exploration_titanic = DataExploration(titanic)
# The structure function gives a good summary of important characteristics of the dataset,
# like missing values, number of unique values, constant columns, column types ...
exploration_titanic.structure()
# If you want more specific primitive :
exploration_titanic.nacolcount()
cserie(exploration_titanic.narows_full) # no rows of only missing values
exploration_titanic.count_unique()
# More complete numeric summary than describe()
exploration_titanic.numeric_summary() # you can access to numeric
# Look at quantiles
exploration_titanic.dfquantiles(nb_quantiles=10)
###Output
_____no_output_____
###Markdown
Primitive list : Print Warning
###Code
# print Consistency infos
# This function helps you to track potential consistency errors in the dataset
# like duplicates columns, constant columns, full missing rows, full missing columns.
exploration_titanic.print_infos('consistency', print_empty=False)
###Output
{'duplicated_rows': {'action': 'delete',
'comment': 'You should delete this rows with df.drop_duplicates()',
'level': 'ERROR',
'value': Int64Index([ 47, 76, 77, 87, 95, 101, 121, 133, 173, 196,
...
838, 844, 846, 859, 863, 870, 877, 878, 884, 886],
dtype='int64', length=107)}}
###Markdown
Fancier Functions
###Code
# Nearzerovariance function inspired from caret
exploration_titanic.nearzerovar()
# Find highly correlated columns
exploration_titanic.findcorr() # no highly numerical correlated columns
exploration_titanic.findupcol()
# no duplicated cols
# Recheck duplicated row
titanic.duplicated().sum()
###Output
_____no_output_____
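###Markdown
For intuition on what `nearzerovar` does, here is a hedged sketch of the caret-style rule it is inspired by: a column is flagged when its most frequent value heavily dominates the second most frequent one and the column has few distinct values. The thresholds below are illustrative defaults, not necessarily those used by the library.
###Code
```python
import pandas as pd

# Caret-style near-zero-variance sketch (hypothetical thresholds).
def near_zero_var(df, freq_ratio=19.0, pct_unique=0.1):
    flagged = []
    for col in df.columns:
        counts = df[col].value_counts()
        # Ratio of the most common value's frequency to the second most common.
        ratio = counts.iloc[0] / counts.iloc[1] if len(counts) > 1 else float("inf")
        # Fraction of distinct values relative to the number of rows.
        unique_frac = df[col].nunique() / len(df)
        if ratio > freq_ratio and unique_frac < pct_unique:
            flagged.append(col)
    return flagged

# Column "a" is almost constant; column "b" is uniformly varied.
toy = pd.DataFrame({"a": [0] * 99 + [1], "b": range(100)})
flagged = near_zero_var(toy)
```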
###Markdown
Outliers Detection This is a simple class to detect one-dimensional outliers.
###Code
outlier_detection = OutliersDetection(titanic)
outlier_detection.basic_cutoff
outlier_detection.strong_cutoff
soft_outliers_fare = outlier_detection.outlier_detection_serie_1d('fare',cutoff_params=outlier_detection.basic_cutoff)
strong_outliers_fare = outlier_detection.outlier_detection_serie_1d('fare',cutoff_params=outlier_detection.strong_cutoff)
# finding index of your Dataframe
index_strong_outliers = (strong_outliers_fare.is_outlier == 1)
titanic.fare.describe()
# a lot of outliers because distribution is lognormal
titanic.loc[index_strong_outliers, :].head()
titanic.fare.hist()
outlier_detection.outlier_detection_1d(cutoff_params=outlier_detection.basic_cutoff).head(20)
###Output
_____no_output_____
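###Markdown
To make the cutoff idea concrete, here is a generic one-dimensional outlier rule (Tukey's IQR fences) sketching the kind of cutoff-based detection used above; the library's actual `basic_cutoff` and `strong_cutoff` parameters may differ from these illustrative fences.
###Code
```python
import numpy as np

# Flag values outside [Q1 - k*IQR, Q3 + k*IQR] (k=1.5 is the classic choice).
def iqr_outliers(values, k=1.5):
    q1, q3 = np.percentile(values, [25, 75])
    iqr = q3 - q1
    low, high = q1 - k * iqr, q3 + k * iqr
    return (values < low) | (values > high)

# A tight cluster plus one extreme value (illustrative data).
data = np.array([1.0, 2.0, 2.5, 3.0, 2.2, 50.0])
mask = iqr_outliers(data)
```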
###Markdown
Preprocessor
###Code
# initialize preprocessing
preprocessor = PreProcessor(titanic, copy=True)
print("We made a copy, so id(titanic): {} differs from id(preprocessor.data): {}".format(
    id(titanic), id(preprocessor.data)))
# using infos consistency from DataExploration
preprocessor.print_infos('consistency')
# basic cleaning delete constant columns
titanic_clean = preprocessor.basic_cleaning()
titanic_clean.shape # We removed the dupliated columns
titanic.shape
preprocessor.infer_subtypes() # this function tries to identify different subtypes of data
preprocessor.subtypes
###Output
_____no_output_____
###Markdown
Airbnb Dataset This is a dataset of Airbnb users (the file used here is train_users_2.csv from [this Airbnb Kaggle competition](https://www.kaggle.com/c/airbnb-recruiting-new-user-bookings/data?train_users_2.csv.zip))
###Code
df_airbnb = get_dataset('airbnb_users')
###Output
_____no_output_____
###Markdown
DataExploration
###Code
exploration_airbnb = DataExploration(df_airbnb)
exploration_airbnb.print_infos('consistency')
exploration_airbnb.structure()
exploration_airbnb.sign_summary() # Get sign summary (look for -1 na encoded value for example)
###Output
_____no_output_____
###Markdown
Outliers Detection
###Code
airbnb_od = OutliersDetection(df_airbnb)
# OutliersDetection is a subclass of DataExploration
airbnb_od.structure()
airbnb_od.numeric_summary() # you can access to numeric
airbnb_od.strong_cutoff
outliers_age = airbnb_od.outlier_detection_serie_1d('age', cutoff_params=airbnb_od.strong_cutoff)
outliers_age.head(10)
print("nb strong outliers : {}".format(outliers_age.is_outlier.sum()))
index_outliers_age = cserie(outliers_age.is_outlier==1, index=True)
df_airbnb.loc[index_outliers_age,:]
###Output
_____no_output_____
###Markdown
Naimputer
###Code
#plt.style.use('ggplot') # ggplot2 style for mathplotlib
naimp = NaImputer(df_airbnb)
naimp.data_isna.corr()
naimp.plot_corrplot_na()
missing_map(df_airbnb, nmax=200)
naimp.get_isna_ttest('age', type_test='ks')
naimp.get_isna_ttest('age', type_test='ttest')
naimp.get_overlapping_matrix()
naimp.nacolcount()
###Output
_____no_output_____
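###Markdown
For intuition on what the correlation-of-missingness plot shows, the same quantity can be computed in plain pandas (on illustrative data, not the Airbnb set): correlate the boolean `isna()` matrix to see which columns tend to be missing together.
###Code
```python
import numpy as np
import pandas as pd

# Toy data: "age" and "gender" are always missing together, "signups" never is.
toy = pd.DataFrame({
    "age": [25.0, np.nan, 30.0, np.nan],
    "gender": ["m", np.nan, "f", np.nan],
    "signups": [1, 2, 3, 4],
})
# Correlation of the missingness indicators; 1.0 means the two columns are
# missing on exactly the same rows.
na_corr = toy.isna().astype(float).corr()
```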
###Markdown
Overview of the Tutorial - Imports - Part A: Step-by-Step Walkthrough - Part B: Wrapping Function Walkthrough - Part C: Plotting Temperature Profile - Common Errors and Fixes Imports In order to import the Musical Robot packages, the package must be downloaded first; further instructions can be found in the package information
###Code
import sys
import matplotlib.pyplot as plt
import numpy as np
#sys.path.insert(0, '../musicalrobot/')
# Importing the required modules
from musicalrobot import irtemp
from musicalrobot import edge_detection as ed
from musicalrobot import pixel_analysis as pa
from musicalrobot import data_encoding as de
%matplotlib inline
###Output
_____no_output_____
###Markdown
PART A: Step-by-Step Walkthrough Use the function 'edge_detection.input_file' to load the input file - a file is provided in the data folder
###Code
frames = ed.input_file('../musicalrobot/data/10_17_19_PPA_Shallow_plate.tiff')
plt.imshow(frames[0])
###Output
_____no_output_____
###Markdown
NOTE: this needs to be replaced with data captured with the border up. Crop the input file if required to remove noise and increase the accuracy of edge detection. When cropping, focus on removing any sections of large temperature disparity and evening out the temperatures over the range of the plate in the viewfinder
###Code
crop_frame = []
for frame in frames:
crop_frame.append(frame[25:90,50:120])
plt.imshow(crop_frame[300])
plt.colorbar()
###Output
_____no_output_____
###Markdown
Equalize Image to determine sample position
###Code
img_eq = pa.image_eq(crop_frame)
###Output
_____no_output_____
###Markdown
Determining the sum of pixels in each column and row
###Code
column_sum, row_sum = pa.pixel_sum(img_eq)
###Output
_____no_output_____
###Markdown
Determining the plate and sample locations
###Code
# input of the previous outputs as well as the known layout of samples
r_peaks, c_peaks = pa.peak_values(column_sum, row_sum, 3, 3, freeze_heat=False)
sample_location = pa.locations(r_peaks, c_peaks, img_eq)
#pixel location of sample in row
r_peaks
#pixel location of sample in column
c_peaks
#outputs of all pixel locations
sample_location
###Output
_____no_output_____
###Markdown
Extract temperature profiles at all of the sample and plate locations
###Code
temp, plate_temp = pa.pixel_intensity(sample_location,crop_frame, 'Row', 'Column', 'plate_location')
###Output
_____no_output_____
###Markdown
Finding inflection Temperature
###Code
s_peaks, s_infl = ed.peak_detection(temp,plate_temp, 'Sample')
#lists all of the inflection points that were recorded over the samples
np.asarray(s_infl)[:,0]
###Output
_____no_output_____
###Markdown
Confirming validity of inflection point This function will categorize the calculated inflection points based on the noise and inflection validity of each point. In this example all of the inflection points are categorized as "noiseless" and "inflection", which is ideal. If a point has extra noise or is not definitively categorized as an inflection, that is a cue to check the graphs manually.
###Code
result_df = de.final_result(temp, plate_temp, path='../musicalrobot/data/')
result_df
###Output
_____no_output_____
###Markdown
Part B: Using the Wrapping Function All of the functions covered in Part A are wrapped and can be run with a single call after the cropping step Load and crop the image in the same way as in Part A Run the wrapping function
###Code
result_df1 = pa.pixel_temp(crop_frame,n_columns = 3, n_rows = 3, freeze_heat=False, path='../musicalrobot/data/')
###Output
_____no_output_____
###Markdown
Part C: Plotting Temperature Profiles Need to include the dual graph bit
###Code
for i in range(len(temp)):
plt.plot(plate_temp[i], temp[i])
plt.title('PPA Melting Temperature')
plt.xlabel('Plate temperature($^{\circ}$C)')
plt.ylabel('Sample Temperature($^{\circ}$C)')
# plt.savefig('../temp_profiles/ppa_'+ str(i+1)+ '.png')
# uncomment previous line to save figures into an established folder
plt.show()
###Output
_____no_output_____
###Markdown
Introduction: DDOT tutorial* __What is an ontology?__ An ontology is a hierarchical arrangement of two types of nodes: (1) genes at the leaves of the hierarchy and (2) terms at intermediate levels of the hierarchy. The hierarchy can be thought of as a directed acyclic graph (DAG), in which each node can have multiple children or multiple parent nodes. DAGs are a generalization of trees (a.k.a. dendrograms), where each node has at most one parent.* __What is DDOT?__ The DDOT Python package provides many functions for assembling, analyzing, and visualizing ontologies. The main functionalities are implemented in an object-oriented manner by an "Ontology" class, which handles ontologies that are data-driven as well as those that are manually curated like the Gene Ontology.* __What to do after reading this tutorial__ Check out a complete list of functions in the [Ontology class](http://ddot.readthedocs.io/en/latest/ontology.html) and a list of [utility functions](http://ddot.readthedocs.io/en/latest/utils.html) that may help you build more concise pipelines. Also check out [example Jupyter notebooks](https://github.com/michaelkyu/ddot/tree/master/examples) that contain pipelines for downloading and processing the Gene Ontology and for inferring data-driven gene ontologies of diseases.
###Code
# Import Ontology class from DDOT package
import ddot
from ddot import Ontology
###Output
/cellar/users/mikeyu/anaconda2/envs/ddot_py36/lib/python3.6/site-packages/requests/__init__.py:80: RequestsDependencyWarning: urllib3 (1.23) or chardet (3.0.4) doesn't match a supported version!
RequestsDependencyWarning)
###Markdown
Creating an Ontology object* An object of the Ontology class can be created in several ways.* In this tutorial, we will construct and analyze the toy ontology shown below. Create ontology through the \_\_init\_\_ constructor
###Code
# Connections from child terms to parent terms
hierarchy = [('S3', 'S1'),
('S4', 'S1'),
('S5', 'S1'),
('S5', 'S2'),
('S6', 'S2'),
('S1', 'S0'),
('S2', 'S0')]
# Connections from genes to terms
mapping = [('A', 'S3'),
('B', 'S3'),
('C', 'S3'),
('C', 'S4'),
('D', 'S4'),
('E', 'S5'),
('F', 'S5'),
('G', 'S6'),
('H', 'S6')]
# Construct ontology
ont = Ontology(hierarchy, mapping)
# Prints a summary of the ontology's structure
print(ont)
###Output
8 genes, 7 terms, 9 gene-term relations, 7 term-term relations
node_attributes: []
edge_attributes: []
###Markdown
Create an ontology from a tab-delimited table or Pandas dataframe
###Code
# Write ontology to a tab-delimited table
ont.to_table('toy_ontology.txt')
# Reconstruct the ontology from the table
ont2 = Ontology.from_table('toy_ontology.txt')
ont2
###Output
_____no_output_____
###Markdown
From the Network Data Exchange (NDEx).* It is strongly recommended that you create a free account on NDEx in order to keep track of your own ontologies.* Note that there are two NDEx servers: the main one at http://ndexbio.org/ and a test server for prototyping your code at http://test.ndexbio.org. Each server requires a separate user account. While you get familiar with DDOT, we recommend that you use an account on the test server.
###Code
# Set the NDEx server and the user account.
# This "scratch" account will work for this tutorial, but you should replace it with your own account.
ndex_server = 'http://test.ndexbio.org'
ndex_user, ndex_pass = 'scratch', 'scratch'
# Upload ontology to NDEx. The string after "v2/network/" is a unique identifier, which is called the UUID, of the ontology in NDEx
url, _ = ont.to_ndex(ndex_server=ndex_server, ndex_user=ndex_user, ndex_pass=ndex_pass)
print(url)
# Download the ontology from NDEx
ont2 = Ontology.from_ndex(url)
print(ont2)
###Output
8 genes, 7 terms, 9 gene-term relations, 7 term-term relations
node_attributes: ['Vis:Fill Color', 'name', 'Vis:Shape', 'Vis:Size', 'Vis:Border Paint']
edge_attributes: ['Vis:Visible']
###Markdown
Inspecting the structure of an ontology An Ontology object contains seven attributes:* ``genes`` : List of gene names* ``terms`` : List of term names* ``gene_2_term`` : dictionary mapping a gene name to a list of terms connected to that gene. Terms are represented as their 0-based index in ``terms``.* ``term_2_gene`` : dictionary mapping a term name to a list of genes connected to that term. Genes are represented as their 0-based index in ``genes``.* ``child_2_parent`` : dictionary mapping a child term to its parent terms.* ``parent_2_child`` : dictionary mapping a parent term to its children terms.* ``term_sizes`` : A list of each term's size, i.e. the number of unique genes contained within this term and its descendants. The order of this list is the same as ``terms``. For every ``i``, it holds that ``term_sizes[i] = len(self.term_2_gene[self.terms[i]])``
###Code
ont.genes
ont.terms
ont.gene_2_term
ont.term_2_gene
ont.child_2_parent
ont.parent_2_child
###Output
_____no_output_____
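As a plain-Python illustration of how these attributes relate, the sketch below recomputes term sizes for the toy ontology by collecting each term's descendant genes. This is a conceptual sketch of the `term_sizes` invariant described above, not DDOT's implementation.

```python
# Recompute term sizes for the toy ontology: a term's size is the number
# of unique genes in the term and all of its descendant terms.
hierarchy = [('S3', 'S1'), ('S4', 'S1'), ('S5', 'S1'), ('S5', 'S2'),
             ('S6', 'S2'), ('S1', 'S0'), ('S2', 'S0')]
mapping = [('A', 'S3'), ('B', 'S3'), ('C', 'S3'), ('C', 'S4'), ('D', 'S4'),
           ('E', 'S5'), ('F', 'S5'), ('G', 'S6'), ('H', 'S6')]

parent_2_child = {}
for child, parent in hierarchy:
    parent_2_child.setdefault(parent, []).append(child)

direct_genes = {}
for gene, term in mapping:
    direct_genes.setdefault(term, set()).add(gene)

def descendant_genes(term):
    """Genes annotated to `term` or to any of its descendant terms."""
    genes = set(direct_genes.get(term, set()))
    for child in parent_2_child.get(term, []):
        genes |= descendant_genes(child)
    return genes

term_sizes = {t: len(descendant_genes(t))
              for t in ['S0', 'S1', 'S2', 'S3', 'S4', 'S5', 'S6']}
print(term_sizes)  # S0 covers all 8 genes; S3 covers A, B, C
```

For example, S1 contains the six genes reachable through S3, S4, and S5, while the root S0 covers all eight genes.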
###Markdown
Alternatively, the hierarchical connections can be viewed as a binary matrix, using `Ontology.connected()`
###Code
conn = ont.connected()
import numpy as np
np.array(conn, dtype=np.int32)
###Output
_____no_output_____
###Markdown
A summary of an Ontology object, i.e. the number of genes, terms, and connections, can be printed with `print(ont)`
###Code
print(ont)
###Output
8 genes, 7 terms, 9 gene-term relations, 7 term-term relations
node_attributes: []
edge_attributes: []
###Markdown
Manipulating the structure of an ontology DDOT provides several convenience functions for processing Ontologies into a desirable structure. Currently, there are no functions for adding genes and terms. If this is needed, then we recommend creating a new Ontology, or manipulating the contents in a different library, such as NetworkX or igraph, and transforming the result back into an Ontology. Renaming nodes
###Code
# Renaming genes and terms.
ont2 = ont.rename(genes={'A' : 'A_alias'}, terms={'S0':'S0_alias'})
ont2.to_table()
###Output
_____no_output_____
###Markdown
Delete S1 and G while preserving transitive connections
###Code
ont2 = ont.delete(to_delete=['S1', 'G'])
print(ont2)
###Output
7 genes, 6 terms, 8 gene-term relations, 6 term-term relations
node_attributes: []
edge_attributes: []
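Conceptually, a transitivity-preserving delete rewires each child of the removed term to the removed term's parents. The sketch below applies this idea to the toy term-term edges; it is illustrative only, not DDOT's implementation.

```python
# Term-term edges of the toy ontology as (child, parent) pairs.
edges = {('S3', 'S1'), ('S4', 'S1'), ('S5', 'S1'), ('S5', 'S2'),
         ('S6', 'S2'), ('S1', 'S0'), ('S2', 'S0')}

def delete_term(edges, term):
    """Remove `term`, reconnecting its children to its parents."""
    children = {c for c, p in edges if p == term}
    parents = {p for c, p in edges if c == term}
    kept = {(c, p) for c, p in edges if term not in (c, p)}
    return kept | {(c, p) for c in children for p in parents}

new_edges = delete_term(edges, 'S1')
print(sorted(new_edges))  # S3, S4, S5 are now connected directly to S0
print(len(new_edges))
```

Deleting S1 this way leaves 6 term-term relations, matching the summary printed above.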
###Markdown
Delete S1 and G (don't preserve transitive connections)
###Code
ont2 = ont.delete(to_delete=['S1', 'G'], preserve_transitivity=False)
print(ont2)
###Output
7 genes, 6 terms, 8 gene-term relations, 3 term-term relations
node_attributes: []
edge_attributes: []
###Markdown
Propagate gene-term connections* Often times, it is convenient to explicitly include all transitive connections in the hierarchy. That is, if a hierarchy has edges A-->B and B-->C, then the hierarchy also has A-->C. This can be done by calling `Ontology.propagate(direction='forward')` function.* On the other hand, all transitive connections can be removed with `Ontology.propagate(direction='reverse')`. This is useful as a parsimonious set of connections.
###Code
# Include all transitive connections between genes and terms
ont2 = ont.propagate(direction='forward', gene_term=True, term_term=False)
print(ont2)
# Remove all transitive connections between genes and terms, retaining only a parsimonious set of connections
ont3 = ont2.propagate(direction='reverse', gene_term=True, term_term=False)
print(ont3)
###Output
8 genes, 7 terms, 27 gene-term relations, 7 term-term relations
node_attributes: []
edge_attributes: []
8 genes, 7 terms, 9 gene-term relations, 7 term-term relations
node_attributes: []
edge_attributes: []
###Markdown
Propagate term-term connections
###Code
# Include all transitive connections between terms
ont2 = ont.propagate(direction='forward', gene_term=False, term_term=True)
print(ont2)
# Remove all transitive connections between terms, retaining only a parsimonious set of connections
ont3 = ont2.propagate(direction='reverse', gene_term=False, term_term=True)
print(ont3)
###Output
8 genes, 7 terms, 9 gene-term relations, 11 term-term relations
node_attributes: []
edge_attributes: []
8 genes, 7 terms, 9 gene-term relations, 7 term-term relations
node_attributes: []
edge_attributes: []
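Forward propagation amounts to a transitive closure of the edges, and reverse propagation to a transitive reduction. A minimal closure sketch over the toy term-term edges (illustrative only, not DDOT's code):

```python
# Transitive closure of the toy (child, parent) term edges.
edges = {('S3', 'S1'), ('S4', 'S1'), ('S5', 'S1'), ('S5', 'S2'),
         ('S6', 'S2'), ('S1', 'S0'), ('S2', 'S0')}

closure = set(edges)
changed = True
while changed:
    changed = False
    for a, b in list(closure):
        for c, d in list(closure):
            # If a -> b and b -> d both hold, add the transitive edge a -> d.
            if b == c and (a, d) not in closure:
                closure.add((a, d))
                changed = True

print(len(closure))  # 11 term-term relations, as reported above
```

The four added edges connect S3, S4, S5, and S6 directly to the root S0, taking the 7 original relations to the 11 shown in the forward-propagation output.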
###Markdown
Take the subbranch consisting of all term and genes under S1
###Code
ont2 = ont.focus(branches=['S1'])
print(ont2)
###Output
Genes and Terms to keep: 10
6 genes, 4 terms, 7 gene-term relations, 3 term-term relations
node_attributes: []
edge_attributes: []
###Markdown
Inferring a data-driven ontology* Given a set of genes and a gene similarity network, we can hierarchically cluster the genes to infer cellular subsystems using the CliXO algorithm. The resulting hierarchy of subsystems defines a "data-driven gene ontology". For more information about the CLIXO algorithm, see Kramer et al. Bioinformatics, 30(12), pp.i34-i42. 2014.* Conversely, we can also "flatten" the ontology structure to infer a gene-by-gene similarity network. In particular, the similarity between two genes is calculated as the size of the smallest common subsystem, known as "Resnik semantic similarity".* The CLIXO algorithm has been designed to reconstruct the original hierarchy from the Resnik score.
###Code
# Flatten ontology to gene-by-gene network
sim, genes = ont.flatten()
print('Similarity matrix')
print(np.round(sim, 2))
print('Row/column names of similarity matrix')
print(genes)
# Reconstruct the ontology using the CLIXO algorithm.
# In general, you may feed any kind of gene-gene similarities, e.g. measurements of protein-protein interactions, gene co-expression, or genetic interactions.
ont2 = Ontology.run_clixo(sim, 0.0, 1.0, square=True, square_names=genes)
print(ont2)
ont2.to_table(edge_attr=True)
###Output
_____no_output_____
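As a rough illustration of a Resnik-style score, the sketch below takes the similarity of two genes to be -log2 of the relative size of the smallest term containing both of them in the toy ontology. This is a conceptual sketch under that assumed definition; `Ontology.flatten()` above is the authoritative computation.

```python
import math

hierarchy = [('S3', 'S1'), ('S4', 'S1'), ('S5', 'S1'), ('S5', 'S2'),
             ('S6', 'S2'), ('S1', 'S0'), ('S2', 'S0')]
mapping = [('A', 'S3'), ('B', 'S3'), ('C', 'S3'), ('C', 'S4'), ('D', 'S4'),
           ('E', 'S5'), ('F', 'S5'), ('G', 'S6'), ('H', 'S6')]
n_genes = 8

parent_2_child = {}
for child, parent in hierarchy:
    parent_2_child.setdefault(parent, []).append(child)

direct = {}
for gene, term in mapping:
    direct.setdefault(term, set()).add(gene)

def genes_under(term):
    """All genes contained in `term` or its descendants."""
    out = set(direct.get(term, set()))
    for c in parent_2_child.get(term, []):
        out |= genes_under(c)
    return out

term_genes = {t: genes_under(t) for t in
              ['S0', 'S1', 'S2', 'S3', 'S4', 'S5', 'S6']}

def resnik(g1, g2):
    """-log2 of the relative size of the smallest term containing both genes."""
    common = [len(gs) for gs in term_genes.values() if g1 in gs and g2 in gs]
    return -math.log2(min(common) / n_genes)

print(resnik('A', 'B'))  # smallest common term is S3, which has 3 of 8 genes
print(resnik('A', 'H'))  # only the root S0 contains both genes
```

Genes sharing a small term (A and B in S3) score high, while genes whose only shared term is the root (A and H) score zero, which is the intuition the CLIXO reconstruction relies on.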
###Markdown
Ontology alignment* The structures of two ontologies can be compared through a procedure known as ontology alignment. Ontology.align() implements the ontology alignment described in (Dutkowski et al. Nature biotechnology, 31(1), 2013), in which terms are matched if they contain similar sets of genes and if their parents and children terms are also similar.* Ontology alignment is particularly useful for annotating a data-driven gene ontology by aligning it to a curated ontology such as the Gene Ontology (GO). For instance, if a data-driven term is identified to have a similar set of genes as the GO term for DNA repair, then the data-driven subsystem can be annotated as being involved in DNA repair. Moreover, data-driven terms with no matches in the ontology alignment may represent new molecular mechanisms.
###Code
## Make a second ontology (the ontology to the right in the above diagram)
# Connections from child terms to parent terms
hierarchy = [('T3', 'T1'),
('T4', 'T1'),
('T1', 'T0'),
('T5', 'T0')]
# Connections from genes to terms
mapping = [('A', 'T3'),
('B', 'T3'),
('C', 'T3'),
('D', 'T4'),
('E', 'T4'),
('F', 'T4'),
('G', 'T5'),
('H', 'T5')]
# Construct ontology
ont_B = Ontology(hierarchy, mapping)
ont.align(ont_B)
###Output
collapse command: /cellar/users/mikeyu/anaconda2/envs/ddot_py36/lib/python3.6/site-packages/ddot/alignOntology/collapseRedundantNodes /tmp/tmpgdjisdao
collapse command: /cellar/users/mikeyu/anaconda2/envs/ddot_py36/lib/python3.6/site-packages/ddot/alignOntology/collapseRedundantNodes /tmp/tmp1dv8t8j0
Alignment command: /cellar/users/mikeyu/anaconda2/envs/ddot_py36/lib/python3.6/site-packages/ddot/alignOntology/calculateFDRs /tmp/tmp0e8ukp6u /tmp/tmp9gs4huql 0.05 criss_cross /tmp/tmpycs1ivbe 100 40 gene
###Markdown
Construct ontotypes* A major goal of genetics is to understand how genotype translates to phenotype. An ontology represents biological structure through which this genotype-phenotype translation happens. * Given a set of mutations comprising a genotype, DDOT allows you to propagate the impact of these mutations to the subsystems containing these genes in the ontology. In particular, the impact on a subsystem is estimated by the number of its genes that have been mutated. These subsystem activities, which we have called an "ontotype", enable more accurate and interpretable predictions of phenotype from genotype (Yu et al. Cell Systems, 2(2), pp.77-88. 2016).
###Code
# Genotypes can be represented as tuples of mutated genes
genotypes = [('A', 'B'),
('A', 'E'),
('A', 'H'),
('B', 'E'),
('B', 'H'),
('C', 'F'),
('D', 'E'),
('D', 'H'),
('E', 'H'),
('G', 'H')]
# Calculate the ontotypes, represented a genotype-by-term matrix. Each value represents the functional impact on a term in a genotype.
ontotypes = ont.get_ontotype(genotypes)
print(ontotypes)
# Genotypes can also be represented a genotype-by-gene matrix as an alternative input format
import pandas as pd, numpy as np
genotypes_df = pd.DataFrame(np.zeros((len(genotypes), len(ont.genes)), np.float64),
index=['Genotype%s' % i for i in range(len(genotypes))],
columns=ont.genes)
for i, (g1, g2) in enumerate(genotypes):
genotypes_df.loc['Genotype%s' % i, g1] = 1.0
genotypes_df.loc['Genotype%s' % i, g2] = 1.0
print('Genotype matrix')
print(genotypes_df)
print("")
ontotypes = ont.get_ontotype(genotypes_df, input_format='matrix')
print('Ontotype matrix')
print(ontotypes)
###Output
Genotype matrix
A B C D E F G H
Genotype0 1.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0
Genotype1 1.0 0.0 0.0 0.0 1.0 0.0 0.0 0.0
Genotype2 1.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0
Genotype3 0.0 1.0 0.0 0.0 1.0 0.0 0.0 0.0
Genotype4 0.0 1.0 0.0 0.0 0.0 0.0 0.0 1.0
Genotype5 0.0 0.0 1.0 0.0 0.0 1.0 0.0 0.0
Genotype6 0.0 0.0 0.0 1.0 1.0 0.0 0.0 0.0
Genotype7 0.0 0.0 0.0 1.0 0.0 0.0 0.0 1.0
Genotype8 0.0 0.0 0.0 0.0 1.0 0.0 0.0 1.0
Genotype9 0.0 0.0 0.0 0.0 0.0 0.0 1.0 1.0
Ontotype matrix
S0 S1 S2 S3 S4 S5 S6
Genotype0 0.0 0.0 0.0 2.0 0.0 0.0 0.0
Genotype1 0.0 0.0 0.0 1.0 0.0 1.0 0.0
Genotype2 0.0 0.0 0.0 1.0 0.0 0.0 1.0
Genotype3 0.0 0.0 0.0 1.0 0.0 1.0 0.0
Genotype4 0.0 0.0 0.0 1.0 0.0 0.0 1.0
Genotype5 0.0 0.0 0.0 1.0 1.0 1.0 0.0
Genotype6 0.0 0.0 0.0 0.0 1.0 1.0 0.0
Genotype7 0.0 0.0 0.0 0.0 1.0 0.0 1.0
Genotype8 0.0 0.0 0.0 0.0 0.0 1.0 1.0
Genotype9 0.0 0.0 0.0 0.0 0.0 0.0 2.0
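Consistent with the matrices above, the genotype-to-ontotype mapping can be sketched as counting, for each term, how many of the mutated genes are directly annotated to it. This is a conceptual sketch of the counting logic, not DDOT's implementation.

```python
# Direct gene-term annotations of the toy ontology.
mapping = [('A', 'S3'), ('B', 'S3'), ('C', 'S3'), ('C', 'S4'), ('D', 'S4'),
           ('E', 'S5'), ('F', 'S5'), ('G', 'S6'), ('H', 'S6')]

gene_2_terms = {}
for gene, term in mapping:
    gene_2_terms.setdefault(gene, []).append(term)

def ontotype(genotype):
    """Count, per term, how many mutated genes are annotated to it."""
    counts = {}
    for gene in genotype:
        for term in gene_2_terms.get(gene, []):
            counts[term] = counts.get(term, 0) + 1
    return counts

print(ontotype(('A', 'B')))  # {'S3': 2}, matching Genotype0 above
print(ontotype(('C', 'F')))  # C hits S3 and S4, F hits S5 (Genotype5)
```

Zero counts are simply omitted here, whereas the dataframe above shows them explicitly as 0.0 columns.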
###Markdown
Conversions to NetworkX and igraph
###Code
# Convert to an igraph object
G = ont.to_igraph()
print(G)
# Reconstruct the Ontology object from the igraph object
Ontology.from_igraph(G)
# Convert to a NetworkX object
G = ont.to_networkx()
print(G.nodes())
print(G.edges())
# Reconstruct the Ontology object from the NetworkX object
tmp = Ontology.from_networkx(G)
print(tmp)
###Output
8 genes, 7 terms, 9 gene-term relations, 7 term-term relations
node_attributes: []
edge_attributes: []
###Markdown
Ontology visualization using HiView (http://hiview.ucsd.edu)* HiView is a web application for general visualization of the hierarchical structure in ontologies.* To use HiView, you must first upload your ontology into NDEx using the [Ontology.to_ndex()](http://ddot.readthedocs.io/en/latest/ontology.html#ddot.Ontology.to_ndex) function, and then input the NDEx URL for the ontology into HiView.* In contrast to almost all other hierarchical visualization tools, which are limited to simple tree structures, HiView also supports more complicated hierarchies in the form of directed acyclic graphs, in which nodes may have multiple parents. A simple upload to NDEx and visualization in HiView* Upload ontologies to NDEx using the `Ontology.to_ndex()` function.* Setting the parameter `layout="bubble"` (default value) will identify a spanning tree of the DAG and then lay out this tree in a space-compact manner. When viewing in HiView, only the edges in the spanning tree are shown initially, while the remaining edges can optionally be shown.
###Code
url, _ = ont.to_ndex(ndex_server=ndex_server,
ndex_user=ndex_user,
ndex_pass=ndex_pass,
layout='bubble')
print('Go to http://hiview.ucsd.edu in your web browser')
print('Enter this into the "NDEx Server URL" field: %s' % ndex_server)
print('Enter this into the "UUID of the main hierarchy" field: %s' % url.split('/')[-1])
###Output
Go to http://hiview.ucsd.edu in your web browser
Enter this into the "NDEx Server URL" field: http://test.ndexbio.org
Enter this into the "UUID of the main hierarchy" field: 31385ecb-6b55-11e8-9d1c-0660b7976219
###Markdown
An alternative layout by duplicating nodes* Setting the parameter `layout="bubble-collect"` will convert the DAG into a tree by duplicating nodes.* This transformation enables the ontology structure to be visualized without edges crossing.
###Code
url, _ = ont.to_ndex(ndex_server=ndex_server,
ndex_user=ndex_user,
ndex_pass=ndex_pass,
layout='bubble-collect')
print('Go to http://hiview.ucsd.edu in your web browser')
print('Enter this into the "NDEx Server URL" field: %s' % ndex_server)
print('Enter this into the "UUID of the main hierarchy" field: %s' % url.split('/')[-1])
###Output
Go to http://hiview.ucsd.edu in your web browser
Enter this into the "NDEx Server URL" field: http://test.ndexbio.org
Enter this into the "UUID of the main hierarchy" field: 31686f7d-6b55-11e8-9d1c-0660b7976219
###Markdown
Visualizing metadata by modifying node labels, colors, and sizes* An Ontology object has a `node_attr` field that is a pandas DataFrame. The rows of the dataframe are genes or terms, and the columns are node attributes.* HiView understands special node attributes to control the node labels, colors, and sizes.
###Code
# Set the node labels (default is the gene and term names, as found in Ontology.genes and Ontology.terms)
ont.node_attr.loc['S4', 'Label'] = 'S4 alias'
ont.node_attr.loc['S5', 'Label'] = 'S5 alias'
# Set the fill color of nodes
ont.node_attr.loc['C', 'Vis:Fill Color'] = '#7fc97f'
ont.node_attr.loc['S1', 'Vis:Fill Color'] = '#beaed4'
ont.node_attr.loc['S0', 'Vis:Fill Color'] = '#fdc086'
# Set the node sizes (if not set, the default is the term size, as found in Ontology.term_sizes)
ont.node_attr.loc['C', 'Size'] = 10
ont.node_attr
url, _ = ont.to_ndex(ndex_server=ndex_server,
ndex_user=ndex_user,
ndex_pass=ndex_pass,
layout='bubble-collect')
print('Go to http://hiview.ucsd.edu in your web browser')
print('Enter this into the "NDEx Server URL" field: %s' % ndex_server)
print('Enter this into the "UUID of the main hierarchy" field: %s' % url.split('/')[-1])
# Clear node attributes (optional)
ont.clear_node_attr()
ont.node_attr
###Output
_____no_output_____
###Markdown
Visualize gene-gene interaction networks alongside the ontology* Every term in an ontology represents a biological function shared among the term's genes. Based on this intuition, those genes should be interacting in different ways, e.g. protein-protein interactions, RNA expression, or genetic interactions.* Gene-gene interaction networks can be uploaded with the ontology to NDEx, so that they can be visualized at the same time in HiView
###Code
# Calculate a gene-by-gene similarity matrix using the Resnik semantic similarity definition (see section "Inferring a data-driven ontology")
sim, genes = ont.flatten()
print(genes)
print(np.round(sim, 2))
# Convert the gene-by-gene similarity matrix into a dataframe with a "long" format, where rows represent gene pairs. This conversion can be easily done with ddot.melt_square()
import pandas as pd
sim_df = pd.DataFrame(sim, index=genes, columns=genes)
sim_long = ddot.melt_square(sim_df)
sim_long.head()
# Create other gene-gene interactions. For example, these can represent protein-protein interactions or gene co-expression. Here, we simulate interactions by adding a random noise to the Resnik similarity
sim_long['example_interaction_type1'] = sim_long['similarity'] + np.random.random(sim_long.shape[0]) / 2.
sim_long['example_interaction_type2'] = sim_long['similarity'] + np.random.random(sim_long.shape[0]) / 2.
sim_long.head()
# Include the above gene-gene interactions by setting the `network` and `main_feature` parameters.
url, _ = ont.to_ndex(ndex_server=ndex_server,
ndex_user=ndex_user,
ndex_pass=ndex_pass,
network=sim_long,
main_feature='similarity',
layout='bubble-collect')
print('Go to http://hiview.ucsd.edu in your web browser')
print('Enter this into the "NDEx Server URL" field: %s' % ndex_server)
print('Enter this into the "UUID of the main hierarchy" field: %s' % url.split('/')[-1])
###Output
Go to http://hiview.ucsd.edu in your web browser
Enter this into the "NDEx Server URL" field: http://test.ndexbio.org
Enter this into the "UUID of the main hierarchy" field: 32a5aa8c-6b55-11e8-9d1c-0660b7976219
###Markdown
**RosbagInputFormat** RosbagInputFormat is an open source **splittable** Hadoop InputFormat for the rosbag file format. Usage from Spark (pyspark) Example data can be found for instance at https://github.com/udacity/self-driving-car/tree/master/datasets published under MIT License. Check that the rosbag file version is V2.0 The code you cloned is located in ```/opt/ros_hadoop/master``` while the latest release is in ```/opt/ros_hadoop/latest```. The file ../lib/rosbaginputformat.jar is a symlink to a recent version. You can replace it with the version you would like to test.
```bash
java -jar ../lib/rosbaginputformat.jar --version -f /opt/ros_hadoop/master/dist/HMB_4.bag
```
Extract the index as configuration The index is a very small configuration file containing a protobuf array that will be given in the job configuration. **Note** that the operation **will not** process and **will not** parse the whole bag file, but will simply seek to the required offset.
###Code
%%bash
echo -e "Current working directory: $(pwd)\n\n"
tree -d -L 2 /opt/ros_hadoop/
%%bash
# assuming you start the notebook in the doc/ folder of master (default Dockerfile build)
java -jar ../lib/rosbaginputformat.jar -f /opt/ros_hadoop/master/dist/HMB_4.bag
###Output
Found: 421 chunks
It should be the same number reported by rosbag tool.
If you encounter any issues try reindexing your file and submit an issue.
###Markdown
This will generate a very small file named HMB_4.bag.idx.bin in the same folder. Copy the bag file into HDFS Using your favorite tool, put the bag file in your working HDFS folder. **Note:** keep the index file as configuration for your jobs, **do not** put small files in HDFS. For convenience we already provide an example file (/opt/ros_hadoop/master/dist/HMB_4.bag) in HDFS under /user/root/
```bash
hdfs dfs -put /opt/ros_hadoop/master/dist/HMB_4.bag
hdfs dfs -ls
```
Process the ros bag file in Spark using the RosbagInputFormat Create the Spark Session or get an existing one
###Code
from pyspark import SparkContext, SparkConf
from pyspark.sql import SparkSession
sparkConf = SparkConf()
sparkConf.setMaster("local[*]")
sparkConf.setAppName("ros_hadoop")
sparkConf.set("spark.jars", "../lib/protobuf-java-3.3.0.jar,../lib/rosbaginputformat.jar,../lib/scala-library-2.11.8.jar")
spark = SparkSession.builder.config(conf=sparkConf).getOrCreate()
sc = spark.sparkContext
###Output
_____no_output_____
###Markdown
Create an RDD from the Rosbag file**Note:** your HDFS address might differ.
###Code
fin = sc.newAPIHadoopFile(
path = "hdfs://127.0.0.1:9000/user/root/HMB_4.bag",
inputFormatClass = "de.valtech.foss.RosbagMapInputFormat",
keyClass = "org.apache.hadoop.io.LongWritable",
valueClass = "org.apache.hadoop.io.MapWritable",
conf = {"RosbagInputFormat.chunkIdx":"/opt/ros_hadoop/master/dist/HMB_4.bag.idx.bin"})
###Output
_____no_output_____
###Markdown
Interpret the Messages To interpret the messages we need the connections. We could get the connections as configuration as well. At the moment we decided to collect the connections into the Spark driver in a dictionary and use it in the subsequent RDD actions. Note that in the next version of RosbagInputFormat alternative implementations will be given. Collect the connections from all Spark partitions of the bag file into the Spark driver
###Code
conn_a = fin.filter(lambda r: r[1]['header']['op'] == 7).map(lambda r: r[1]).collect()
conn_d = {str(k['header']['topic']):k for k in conn_a}
# see topic names
conn_d.keys()
###Output
_____no_output_____
###Markdown
Load the python map functions from src/main/python/functions.py
###Code
%run -i ../src/main/python/functions.py
###Output
_____no_output_____
###Markdown
Use of msg_map to apply a function on all messages Python **rosbag.bag** needs to be installed on all Spark workers. The msg_map function (from src/main/python/functions.py) takes three arguments: 1. r = the message or RDD record tuple; 2. func = a function (default str) to apply to the ROS message; 3. conn = a connection specifying which topic to process
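For reference, a simplified sketch of such a helper is shown below. The actual implementation lives in src/main/python/functions.py and additionally deserializes the raw bytes into a typed ROS message via the rosbag library, so treat this as illustrative only; the record layout and op codes are assumed from the rosbag V2.0 format, where op 0x02 marks message data and 0x07 a connection record.

```python
def msg_map(r, func=str, conn=None):
    """Simplified sketch: yield func(raw message data) for records on the
    same connection as `conn`. The real helper deserializes the bytes
    into a ROS message object before applying `func`."""
    header = r[1]['header']
    if header['op'] == 2 and conn is not None and header['conn'] == conn['header']['conn']:
        yield func(r[1]['data'])

# Illustrative fake records: one connection record (op 7) was collected
# earlier; two data records (op 2) sit on different connections.
conn = {'header': {'op': 7, 'conn': 0, 'topic': '/imu/data'}}
records = [
    (0, {'header': {'op': 2, 'conn': 0}, 'data': b'imu-bytes'}),
    (1, {'header': {'op': 2, 'conn': 1}, 'data': b'other-bytes'}),
]
out = [m for r in records for m in msg_map(r, func=len, conn=conn)]
print(out)  # only the record on connection 0 matches
```

Because msg_map is a generator, it plugs directly into `fin.flatMap(partial(msg_map, conn=...))` as used in the cells below.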
###Code
%matplotlib inline
# use %matplotlib notebook in python3
from functools import partial
import pandas as pd
import numpy as np
# Take messages from '/imu/data' topic using default str func
rdd = fin.flatMap(
partial(msg_map, conn=conn_d['/imu/data'])
)
print(rdd.take(1)[0])
###Output
header:
seq: 1701626
stamp:
secs: 1479425728
nsecs: 747487068
frame_id: /imu
orientation:
x: -0.0251433756238
y: 0.0284643176884
z: -0.0936542998233
w: 0.994880191333
orientation_covariance: [0.017453292519943295, 0.0, 0.0, 0.0, 0.017453292519943295, 0.0, 0.0, 0.0, 0.15707963267948966]
angular_velocity:
x: 0.0
y: 0.0
z: 0.0
angular_velocity_covariance: [-1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0]
linear_acceleration:
x: 1.16041922569
y: 0.595418334007
z: 10.7565326691
linear_acceleration_covariance: [0.0004, 0.0, 0.0, 0.0, 0.0004, 0.0, 0.0, 0.0, 0.0004]
###Markdown
Image data from camera messages An example of taking messages using a func other than the default str. In our case we apply a lambda to messages from the '/center_camera/image_color/compressed' topic. As usual with Spark the operation will happen in parallel on all workers.
###Code
from PIL import Image
from io import BytesIO
res = fin.flatMap(
partial(msg_map, func=lambda r: r.data, conn=conn_d['/center_camera/image_color/compressed'])
).take(50)
Image.open(BytesIO(res[48]))
###Output
_____no_output_____
###Markdown
Plot fuel level The topic /vehicle/fuel_level_report contains 2215 ROS messages. Let us plot the header.stamp in seconds vs. fuel_level using a pandas dataframe
###Code
def f(msg):
return (msg.header.stamp.secs, msg.fuel_level)
d = fin.flatMap(
partial(msg_map, func=f, conn=conn_d['/vehicle/fuel_level_report'])
).toDF().toPandas()
d.set_index('_1').plot(legend=False);
###Output
_____no_output_____
###Markdown
Aggregate acceleration statistics
###Code
%matplotlib inline
import matplotlib.pylab as plt
import seaborn as sns
from pyspark.sql import types as T
import yaml
sns.set_style('whitegrid')
sns.set_context('talk')
schema = T.StructType()
schema = schema.add(T.StructField('seq',T.IntegerType()))
schema = schema.add(T.StructField('secs',T.IntegerType()))
schema = schema.add(T.StructField('nsecs',T.IntegerType()))
schema = schema.add(T.StructField('orientation_x',T.DoubleType()))
schema = schema.add(T.StructField('orientation_y',T.DoubleType()))
schema = schema.add(T.StructField('orientation_z',T.DoubleType()))
schema = schema.add(T.StructField('angular_velocity_x',T.DoubleType()))
schema = schema.add(T.StructField('angular_velocity_y',T.DoubleType()))
schema = schema.add(T.StructField('angular_velocity_z',T.DoubleType()))
schema = schema.add(T.StructField('linear_acceleration_x',T.DoubleType()))
schema = schema.add(T.StructField('linear_acceleration_y',T.DoubleType()))
schema = schema.add(T.StructField('linear_acceleration_z',T.DoubleType()))
def get_time_and_acc(r):
    r = yaml.safe_load(r)
return (r['header']['seq'],
r['header']['stamp']['secs'],
r['header']['stamp']['nsecs'],
r['orientation']['x'],
r['orientation']['y'],
r['orientation']['z'],
r['angular_velocity']['x'],
r['angular_velocity']['y'],
r['angular_velocity']['z'],
r['linear_acceleration']['x'],
r['linear_acceleration']['y'],
r['linear_acceleration']['z'],
)
pdf_acc = spark.createDataFrame(fin
.flatMap(partial(msg_map, conn=conn_d['/imu/data']))
.map(get_time_and_acc), schema=schema).toPandas()
pdf_acc.head()
xbins = np.arange(-5,5,0.2)
ybins = np.arange(-5,5,0.2)
h,_,_ = np.histogram2d(pdf_acc.linear_acceleration_x,pdf_acc.linear_acceleration_y, bins=(xbins,ybins))
h[h == 0] = np.NaN
fig, ax = plt.subplots(figsize=(10,8))
plt.imshow(h.T,extent=[xbins[0],xbins[-1],ybins[0],ybins[-1]],origin='lower',interpolation='nearest')
#plt.colorbar()
plt.xlabel(r'Acceleration x [m/s^2]')
plt.ylabel('Acceleration y [m/s^2]')
plt.title('Acceleration distribution');
###Output
_____no_output_____
###Markdown
Visualize track in Google maps You have to apply for a Google Maps API key to execute this section, cf. https://developers.google.com/maps/documentation/javascript/get-api-key Add your key to the next cell:
###Code
import gmaps
gmaps.configure('AI...')
schema = T.StructType()
schema = schema.add(T.StructField('seq',T.IntegerType()))
schema = schema.add(T.StructField('secs',T.IntegerType()))
schema = schema.add(T.StructField('nsecs',T.IntegerType()))
schema = schema.add(T.StructField('latitude',T.DoubleType()))
schema = schema.add(T.StructField('longitude',T.DoubleType()))
schema = schema.add(T.StructField('altitude',T.DoubleType()))
schema = schema.add(T.StructField('status_service',T.IntegerType()))
schema = schema.add(T.StructField('status_status',T.IntegerType()))
def get_gps(r):
    r = yaml.load(r, Loader=yaml.SafeLoader)
return (r['header']['seq'],
r['header']['stamp']['secs'],
r['header']['stamp']['nsecs'],
r['latitude'],
r['longitude'],
r['altitude'],
r['status']['service'],
r['status']['status']
)
pdf_gps = spark.createDataFrame(
fin
.flatMap(partial(msg_map, conn=conn_d['/vehicle/gps/fix']))
.map(get_gps),
schema=schema
).toPandas()
pdf_gps.head()
fig, ax = plt.subplots()
pdf_gps.sort_values('secs').plot('longitude','latitude',ax=ax, legend=False)
plt.xlabel('Longitude')
plt.ylabel('Latitude');
c='rgba(0,0,150,0.3)'
fig = gmaps.figure(center=(pdf_gps.latitude.mean(),pdf_gps.longitude.mean()), zoom_level=14)
track = gmaps.symbol_layer(pdf_gps[['latitude','longitude']], fill_color=c, stroke_color=c, scale=2)
fig.add_layer(track)
fig
###Output
_____no_output_____
###Markdown
Output of this cell would look like this. Machine Learning models on Spark workers. A dot product Keras "model" is built for each message from a topic. We will compare its result with the one computed with numpy. **Note** that the imports happen in the workers and not in the driver; the connection dictionary, on the other hand, is sent to the workers via the closure.
###Code
def f(msg):
from keras.layers import dot, Dot, Input
from keras.models import Model
linear_acceleration = {
'x': msg.linear_acceleration.x,
'y': msg.linear_acceleration.y,
'z': msg.linear_acceleration.z,
}
linear_acceleration_covariance = np.array(msg.linear_acceleration_covariance)
i1 = Input(shape=(3,))
i2 = Input(shape=(3,))
o = dot([i1,i2], axes=1)
model = Model([i1,i2], o)
# return a tuple with (numpy dot product, keras dot "predict")
return (
np.dot(linear_acceleration_covariance.reshape(3,3),
[linear_acceleration['x'], linear_acceleration['y'], linear_acceleration['z']]),
model.predict([
np.array([[ linear_acceleration['x'], linear_acceleration['y'], linear_acceleration['z'] ]]),
linear_acceleration_covariance.reshape((3,3))])
)
fin.flatMap(partial(msg_map, func=f, conn=conn_d['/vehicle/imu/data_raw'])).take(5)
# tuple with (numpy dot product, keras dot "predict")
from pyspark.sql import Row
pdf_steering = spark.createDataFrame(fin.flatMap(partial(msg_map, func=lambda r: Row(**yaml.load(str(r), Loader=yaml.SafeLoader)), conn=conn_d['/vehicle/steering_report']))).toPandas()
pdf_steering['secs'] = pdf_steering.header.map(lambda r: r['stamp']['secs'])
fig, axes = plt.subplots(2,1,figsize=(10,8))
pdf_steering.set_index('secs').speed.plot(ax=axes[0])
pdf_steering.set_index('secs').steering_wheel_angle.plot(ax=axes[1])
axes[0].set_ylabel('Speed [mph?]')
axes[1].set_ylabel('Steering wheel angle [%]')
###Output
_____no_output_____
###Markdown
HOW TO USE OPTICHILL IMPORTING THE NECESSARY MODULES TO RUN THE CODE
###Code
import pandas as pd
import numpy as np
import glob
import os
from optichill import bas_filter
from optichill import GBM_model
###Output
_____no_output_____
###Markdown
FILTERING OUT THE DATA * First split the data from Plant 1 into training and testing sets: (Ensure that the correct path to the data files, relative to the directory of this notebook, is given.)
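Since a wrong path is the most common failure here, a quick existence check can catch it before filtering. This is only a sketch: the base directory and the two sample filenames are assumptions taken from the lists defined in the next cell.

```python
import os

base = '../../capstone/Plt1'  # assumed data directory, as used below
needed = ['Plt1 m 2018-01.csv', 'Plt1 m 2016-11.csv']  # a sample of the filenames below
missing = [f for f in needed if not os.path.exists(os.path.join(base, f))]
if missing:
    print('Missing data files:', missing)
```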
###Code
train_data = [
'Plt1 m 2018-01.csv', 'Plt1 m 2018-02.csv', 'Plt1 m 2018-03.csv',
'Plt1 m 2018-04.csv'
]
test_data = [
'Plt1 m 2016-11.csv', 'Plt1 m 2016-12.csv', 'Plt1 m 2017-01.csv', 'Plt1 m 2017-02.csv',
'Plt1 m 2017-03.csv',
'Plt1 m 2017-04.csv', 'Plt1 m 2017-05.csv', 'Plt1 m 2017-06.csv', 'Plt1 m 2017-07.csv',
'Plt1 m 2017-08.csv', 'Plt1 m 2017-09.csv', 'Plt1 m 2017-10.csv', 'Plt1 m 2017-11.csv',
'Plt1 m 2017-12.csv'
]
points_list = '../../capstone/Plt1/Plt1 Points List.xlsx'
###Output
_____no_output_____
###Markdown
Two filtered datasets (training and testing) are obtained using the `train_single_plt` function: `include_alarms` lets you decide whether or not to include alarms, and `dim_remove` lets you specify which features to exclude from the dataset. This allows you to explore the fit with certain features only. * Use `bas_filter.train_single_plt` to import the data and filter out redundant features and alarms, producing training and testing datasets that are ready to use.
###Code
df_train, df_test = bas_filter.train_single_plt(
'../../capstone/Plt1', train_data, test_data, points_list,
include_alarms = False, dim_remove = []
)
###Output
Filtering Training Set
['../../capstone/Plt1\\Plt1 m 2018-01.csv']
['../../capstone/Plt1\\Plt1 m 2018-02.csv']
['../../capstone/Plt1\\Plt1 m 2018-03.csv']
['../../capstone/Plt1\\Plt1 m 2018-04.csv']
Descriptors in the points list that are not in the datasets.
CommunicationFailure_COV
CH3COM1F
CH3Ready
CH4COM1F
CH4Ready
CH4SURGE
CH5COM1F
CH5Ready
Original data contains 32796 points and 414 dimensions.
A CTTR_ALARM was noted and 122 datapoints were removed from the dataset.
A PCHWP3Failed was noted and 122 datapoints were removed from the dataset.
A PCHWP4Failed was noted and 122 datapoints were removed from the dataset.
A PCHWP5Failed was noted and 122 datapoints were removed from the dataset.
A SCHWP3Failed was noted and 122 datapoints were removed from the dataset.
A SCHWP4Failed was noted and 122 datapoints were removed from the dataset.
A SCHWP5Failed was noted and 122 datapoints were removed from the dataset.
A CH3_CHWSTSP_Alarm was noted and 122 datapoints were removed from the dataset.
A CH3ALARM was noted and 122 datapoints were removed from the dataset.
A CH3F was noted and 122 datapoints were removed from the dataset.
A CH4_CHWSTSP_Alarm was noted and 122 datapoints were removed from the dataset.
A CH4ALARM was noted and 126 datapoints were removed from the dataset.
A CH4F was noted and 126 datapoints were removed from the dataset.
A CH5_CHWSTSP_Alarm was noted and 126 datapoints were removed from the dataset.
A CH5ALARM was noted and 1212 datapoints were removed from the dataset.
A CH5F was noted and 1212 datapoints were removed from the dataset.
A CDWP3Failed was noted and 1212 datapoints were removed from the dataset.
A CDWP3SPD_Alarm was noted and 9685 datapoints were removed from the dataset.
A CDWP4Failed was noted and 9685 datapoints were removed from the dataset.
A CDWP4SPD_Alarm was noted and 10249 datapoints were removed from the dataset.
A CDWP5Failed was noted and 10249 datapoints were removed from the dataset.
A CDWP5SPD_Alarm was noted and 17264 datapoints were removed from the dataset.
A CT4Failed was noted and 17264 datapoints were removed from the dataset.
A CT4SPD_Alarm was noted and 17279 datapoints were removed from the dataset.
A CT5Failed was noted and 17279 datapoints were removed from the dataset.
A CT5SPD_Alarm was noted and 17279 datapoints were removed from the dataset.
Filtered data contains 15021 points and 193 dimensions.
Filtering Test Set
['../../capstone/Plt1\\Plt1 m 2016-11.csv']
['../../capstone/Plt1\\Plt1 m 2016-12.csv']
['../../capstone/Plt1\\Plt1 m 2017-01.csv']
['../../capstone/Plt1\\Plt1 m 2017-02.csv']
['../../capstone/Plt1\\Plt1 m 2017-03.csv']
###Markdown
* Split the data into the target (kW/Ton) and all the other features. This is similar to splitting the data into "x" and "y" axes:
###Code
ytrain = df_train['kW/Ton']
ytest = df_test['kW/Ton']
xtrain = df_train.drop(['kW/Ton'], axis=1)
xtest = df_test.drop(['kW/Ton'], axis=1)
###Output
_____no_output_____
###Markdown
USING GBM (GRADIENT BOOSTING MACHINES) FOR DETERMINING FEATURE IMPORTANCE AND PREDICTING EFFICIENCY * Train the model by using the `GBM_model.train_model` function. The R² score is printed below:
###Code
GBM_model.train_model(xtrain, ytrain, xtest, ytest)
GBM_model.predict_model()
###Output
_____no_output_____
###Markdown
* Save the feature importance list (a list of all the features of the plant, in order of their importance to the efficiency) into a .csv file using `GBM_model.feature_importance_list`:
###Code
GBM_model.feature_importance_list('Plt1_tutorial.csv', xtest)
###Output
The feature importance list was created as Plt1_tutorial.csv
###Markdown
funcX Tutorial. funcX is a Function-as-a-Service (FaaS) platform for science that enables you to register functions in a cloud-hosted service and then reliably execute those functions on a remote funcX endpoint. This tutorial is configured to use a tutorial endpoint hosted by the funcX team. You can set up and use your own endpoint by following the [funcX documentation](https://funcx.readthedocs.io/en/latest/endpoints.html). funcX Python SDK. The funcX Python SDK provides programming abstractions for interacting with the funcX service. Before running this tutorial locally, you should first install the funcX SDK as follows: `$ pip install funcx` (If you are running on Binder, we've already done this for you in the Binder environment.) The funcX SDK exposes a `FuncXClient` object for all interactions with the funcX service. In order to use the funcX service, you must first authenticate using one of hundreds of supported identity providers (e.g., your institution, ORCID, Google). As part of the authentication process, you must grant permission for funcX to access your identity information (to retrieve your email address), Globus Groups management access (to share functions and endpoints), and Globus Search (to discover functions and endpoints).
###Code
from funcx.sdk.client import FuncXClient
fxc = FuncXClient()
###Output
_____no_output_____
###Markdown
Basic usageThe following example demonstrates how you can register and execute a function. Registering a functionfuncX works like any other FaaS platform: you must first register a function with funcX before being able to execute it on a remote endpoint. The registration process will serialize the function body and store it securely in the funcX service. As we will see below, you may share functions with others and discover functions shared with you.When you register a function, funcX will return a universally unique identifier (UUID) for it. This UUID can then be used to manage and invoke the function.
###Code
def hello_world():
return "Hello World!"
func_uuid = fxc.register_function(hello_world)
print(func_uuid)
###Output
_____no_output_____
###Markdown
Running a function. To invoke a function, you must provide a) the function's UUID; and b) the `endpoint_id` of the endpoint on which you wish to execute that function. Note: here we use the public funcX tutorial endpoint; you may change the `endpoint_id` to the UUID of any endpoint on which you have permission to execute functions. funcX functions are designed to be executed remotely and asynchronously. To avoid synchronous invocation, the result of a function invocation (called a `task`) is a UUID, which may be introspected to monitor execution status and retrieve results. The funcX service will manage the reliable execution of a task, for example, by queueing tasks when the endpoint is busy or offline and retrying tasks in case of node failures.
###Code
tutorial_endpoint = '4b116d3c-1703-4f8f-9f6f-39921e5864df' # Public tutorial endpoint
res = fxc.run(endpoint_id=tutorial_endpoint, function_id=func_uuid)
print(res)
###Output
_____no_output_____
###Markdown
Retrieving resultsWhen the task has completed executing, you can access the results via the funcX client as follows:
###Code
fxc.get_result(res)
###Output
_____no_output_____
###Markdown
Functions with arguments. funcX supports registration and invocation of functions with arbitrary arguments and returned parameters. funcX will serialize any \*args and \*\*kwargs when invoking a function and it will serialize any return parameters or exceptions. Note: funcX uses standard Python serialization libraries (e.g., Pickle, Dill). It also limits the size of input arguments and returned parameters to 5 MB. The following example shows a function that computes the sum of a list of input arguments. First we register the function as above:
###Code
def funcx_sum(items):
return sum(items)
sum_function = fxc.register_function(funcx_sum)
###Output
_____no_output_____
###Markdown
When invoking the function, you can pass in arguments like any other function, either by position or with keyword arguments.
###Code
items = [1, 2, 3, 4, 5]
res = fxc.run(items, endpoint_id=tutorial_endpoint, function_id=sum_function)
print (fxc.get_result(res))
###Output
_____no_output_____
###Markdown
Functions with dependencies. funcX requires that functions explicitly state all dependencies within the function body. It also assumes that the dependent libraries are available on the endpoint on which the function will execute. For example, in the following function we explicitly import from the datetime module.
###Code
def funcx_date():
from datetime import date
return date.today()
date_function = fxc.register_function(funcx_date)
res = fxc.run(endpoint_id=tutorial_endpoint, function_id=date_function)
print (fxc.get_result(res))
###Output
_____no_output_____
###Markdown
Calling external applications. Depending on the configuration of the funcX endpoint, you can often invoke external applications that are available in the endpoint environment.
###Code
def funcx_echo(name):
import os
return os.popen("echo Hello %s" % name).read()
echo_function = fxc.register_function(funcx_echo)
res = fxc.run("World", endpoint_id=tutorial_endpoint, function_id=echo_function)
print (fxc.get_result(res))
###Output
_____no_output_____
###Markdown
Catching exceptions. When functions fail, the exception is captured and serialized by the funcX endpoint, and is re-raised when you try to get the result. In the following example, the 'deterministic failure' exception is raised when `fxc.get_result` is called on the failing function.
###Code
def failing():
raise Exception("deterministic failure")
failing_function = fxc.register_function(failing)
res = fxc.run(endpoint_id=tutorial_endpoint, function_id=failing_function)
fxc.get_result(res)
###Output
_____no_output_____
###Markdown
Running functions many times. After registering a function, you can invoke it repeatedly. The following example shows how the Monte Carlo method can be used to estimate pi. Specifically, if a circle with radius $r$ is inscribed inside a square with side length $2r$, the area of the circle is $\pi r^2$ and the area of the square is $(2r)^2$. Thus, if $N$ uniformly distributed random points are dropped within the square, approximately $N\pi/4$ will be inside the circle.
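As a purely local sanity check of that estimate, the same calculation can be run in plain Python before involving funcX (the fixed seed here is only for reproducibility):

```python
from random import random, seed

seed(0)                      # reproducible demo run
N = 100_000
inside = sum(1 for _ in range(N) if random()**2 + random()**2 < 1)
est = 4 * inside / N         # should land close to pi ~ 3.14159
print(est)
```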
###Code
import time
# function that estimates pi by placing points in a box
def pi(num_points):
from random import random
inside = 0
for i in range(num_points):
x, y = random(), random() # Drop a random point in the box.
if x**2 + y**2 < 1: # Count points within the circle.
inside += 1
return (inside*4 / num_points)
# register the function
pi_function = fxc.register_function(pi)
# execute the function 3 times
estimates = []
for i in range(3):
estimates.append(fxc.run(10**5, endpoint_id=tutorial_endpoint, function_id=pi_function))
# wait for tasks to complete
time.sleep(5)
# wait for all tasks to complete
for e in estimates:
while fxc.get_task(e)['pending'] == 'True':
time.sleep(3)
# get the results and calculate the total
results = [fxc.get_result(i) for i in estimates]
total = 0
for r in results:
total += r
# print the results
print("Estimates: %s" % results)
print("Average: {:.5f}".format(total/len(results)))
###Output
_____no_output_____
###Markdown
Describing and discovering functions funcX manages a registry of functions that can be shared, discovered and reused. When registering a function, you may choose to set a description to support discovery, as well as making it `public` (so that others can run it) and/or `searchable` (so that others can discover it).
###Code
def hello_world():
return "Hello World!"
func_uuid = fxc.register_function(hello_world, description="hello world function", public=True, searchable=True)
print(func_uuid)
###Output
_____no_output_____
###Markdown
You can search previously registered functions to which you have access using `search_function`. The first parameter is searched against all the fields, such as author, description, function name, and function source. You can navigate through pages of results with the `offset` and `limit` keyword args. The object returned is a simple wrapper on a list, so you can index into it, but also can have a pretty-printed table.
###Code
search_results = fxc.search_function("hello", offset=0, limit=5)
print(search_results)
###Output
_____no_output_____
###Markdown
Managing endpoints. funcX endpoints advertise whether or not they are online as well as information about their available resources, queued tasks, and other information. If you are permitted to execute functions on an endpoint, you can also retrieve its status. The following example shows how to look up the status (online or offline) and the number of waiting tasks and workers connected to the endpoint.
###Code
endpoint_status = fxc.get_endpoint_status(tutorial_endpoint)
print("Status: %s" % endpoint_status['status'])
print("Workers: %s" % endpoint_status['logs'][0]['total_workers'])
print("Tasks: %s" % endpoint_status['logs'][0]['outstanding_tasks'])
###Output
_____no_output_____
###Markdown
Advanced features. funcX provides several features that address more advanced use cases. Running batches. After registering a function, you might want to invoke that function many times without making individual calls to the funcX service. Examples include Monte Carlo simulations, ensembles, and parameter sweep applications. funcX provides a batch interface that enables specification of a range of function invocations. To use this interface, you must create a funcX batch object and then add each invocation to that object. You can then pass the constructed object to the `batch_run` interface.
###Code
def squared(x):
return x**2
squared_function = fxc.register_function(squared)
inputs = list(range(10))
batch = fxc.create_batch()
for x in inputs:
batch.add(x, endpoint_id=tutorial_endpoint, function_id=squared_function)
batch_res = fxc.batch_run(batch)
###Output
_____no_output_____
###Markdown
Similarly, funcX provides an interface to retrieve the status of the entire batch of invocations.
###Code
fxc.get_batch_status(batch_res)
###Output
_____no_output_____
###Markdown
Prerequisite. The tsumiki cell magic extension can be loaded via:
###Code
%load_ext tsumiki
###Output
_____no_output_____
###Markdown
Usage with a notebook. Write with Markdown.
###Code
%%tsumiki
:Markdown:
# Title1
## Title2
### Title3
- list1
- list2
- [ ] foo
- [x] bar
###Output
_____no_output_____
###Markdown
Write with HTML
###Code
%%tsumiki
:HTML:
<font color="red">Red</font>
</br>
<font color="green">Green</font>
###Output
_____no_output_____
###Markdown
Multiple columns. Specify the number of columns by the number of trailing `:` characters.
###Code
%%tsumiki
:Markdown::
* left1
* left2
:Markdown::
* right1
* right2
###Output
_____no_output_____
###Markdown
Write with mixed markup languages.
###Code
%%tsumiki
:Markdown:
# Title
:HTML:::
<p>col0</p>
<font color="red">Red</font>
</br>
<font color="green">Green</font>
:Markdown:::
col1
* list1
* list2
:Markdown:::
col2
* list1
* list2
###Output
_____no_output_____
###Markdown
Usage with Python. Import the module.
###Code
import tsumiki
text = """
:Markdown:
# Title
* list1
* list2
"""
print(tsumiki.Tsumiki(text).html)
###Output
<div class="tsumiki">
<style>
.tsumiki .columns1 {
margin-bottom: 12px;
}
</style>
<h1>Title</h1>
<ul>
<li>list1</li>
<li>list2</li>
</ul>
</div>
###Markdown
pyWRspice Wrapper Tutorial. Intro. PyWRspice is a Python wrapper for [WRspice](http://www.wrcad.com/), a SPICE simulation engine modified by Whiteley Research (WR) featuring Josephson junctions. In the package: - simulation.py: Simulate a complete or parametric WRspice script via the WRspice simulator. - script.py: Programmatically construct a WRspice script. - remote.py: Run WRspice simulations remotely on an SSH server. Install WRspice. Get and install the software [here](http://www.wrcad.com/xictools/index.html). *Important*: Make sure to take note of where the wrspice executable is on your machine. On Unix, it is likely "/usr/local/xictools/bin/wrspice". On Windows, "C:/usr/local/xictools/bin/wrspice.bat".
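The executable can also be located programmatically. This is only a sketch: the candidate paths are the typical install locations quoted above, and your machine may differ.

```python
import os
import shutil

# Typical WRspice install locations; adjust for your machine.
candidates = [
    "wrspice",                                # already on PATH
    "/usr/local/xictools/bin/wrspice",        # typical Unix location
    "C:/usr/local/xictools/bin/wrspice.bat",  # typical Windows location
]
found = next(
    (shutil.which(c) or c for c in candidates
     if shutil.which(c) or os.path.isfile(c)),
    None,
)
print("WRspice executable:", found)
```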
###Code
# Add pyWRspice location to system path, if you haven't run setup.py
import sys
sys.path.append("../")
import numpy as np
import logging, importlib
from pyWRspice import script, simulation, remote
import matplotlib.pyplot as plt
%matplotlib inline
logging.basicConfig(level=logging.WARNING)
###Output
_____no_output_____
###Markdown
1. Run a complete WRspice script. Let's run a simple WRspice script. **Requirements:** - Declare the script with Python format strings. - `write {output_file}` should be written by the script in the `.control` block, using the binary/text format.
###Code
script1 = """* Transient response of RLC circuit
.tran 50p 100n
* RLC model of a transmission line
R1 1 2 0.1
L1 2 3 1n
C1 3 0 20p
R2 3 0 1e3
* Load impedance
Rload 3 0 50
* Pulse voltage source
V1 1 0 pulse(0 1 1n 1n 1n 20n)
*
.control
run
set filetype=binary
write {output_file} v(2) v(3)
.endc
"""
###Output
_____no_output_____
###Markdown
Wrap the script into a WRWrapper class instance. *Important*: Make sure to specify ```command``` as the path to the wrspice executable on your machine. On Unix, it is likely ```/usr/local/xictools/bin/wrspice```. On Windows, ```C:/usr/local/xictools/bin/wrspice.bat```.
###Code
engine = simulation.WRWrapper(command = "/usr/local/xictools/bin/wrspice") # Typical for Unix
# On Windows, try:
# sw = WRWrapper(command = "C:/usr/local/xictools/bin/wrspice.bat")
###Output
_____no_output_____
###Markdown
Run the script. If you want to save the circuit file, specify the keyword argument ```circuit_file```. If you want to save the data file, specify ```output_file```. If not specified, temporary files will be created and then deleted after execution. The ```run``` method returns the output data.
###Code
dat1 = engine.run(script1)
# If you want to save the file, run: dat1 = engine.run(script1,circuit_file="dummy.cir",output_file="dummy.raw")
# Extract the data
ts = dat1.variables[0].values
v2 = dat1.variables[1].values
v3 = dat1.variables[2].values
# Or we can convert the data into pandas DataFrame object
df = dat1.to_df()
ts = df['time']
v2 = df['v(2)']
v3 = df['v(3)']
# Or we can convert the data into numpy array
df = dat1.to_array()
ts = df[0]
v2 = df[1]
v3 = df[2]
# Plot the data
fig = plt.figure(figsize=(12,6))
plt.plot(ts*1e9, v2, label="v(2)")
plt.plot(ts*1e9, v3, label="v(3)")
plt.xlabel("Time [ns]")
plt.ylabel("Voltage [V]")
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
2. Run a parametric WRspice script. We can parametrize the circuit description by using keyword substitution in Python strings. Basically, if ```s = "Value={x}"``` then ```s.format(x=2)``` results in ```Value=2```. In the example below, we parametrize the value of the capacitor as ```cap``` (pF) and the pulse duration as ```dur``` (ns).
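As a minimal, self-contained illustration of this substitution mechanism (using one circuit line of the kind that appears in the script below):

```python
# A circuit line with a named placeholder for the capacitance value
line = "C1 3 0 {cap}p"
print(line.format(cap=30))  # -> C1 3 0 30p
```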
###Code
script2 = """* Transient response of RLC circuit
.tran 50p 100n
* RLC model of a transmission line
R1 1 2 0.1
L1 2 3 1n
C1 3 0 {cap}p
R2 3 0 1e3
* Load impedance
Rload 3 0 50
* Pulse voltage source
V1 1 0 pulse(0 1 1n 1n 1n {dur}n)
*
.control
run
set filetype=binary
write {output_file} v(2) v(3)
.endc
"""
sw = simulation.WRWrapper(script2, command = "/usr/local/xictools/bin/wrspice")
###Output
_____no_output_____
###Markdown
We then specify the values of ```cap``` and ```dur``` when executing the script with the ```run``` function.
###Code
dat2 = engine.run(script2,cap=30, dur=40)
# Extract the data
dat2 = dat2.to_array()
ts = dat2[0]
v2 = dat2[1]
v3 = dat2[2]
# Plot the data
fig = plt.figure(figsize=(12,6))
plt.plot(ts*1e9, v2, label="v(2)")
plt.plot(ts*1e9, v3, label="v(3)")
plt.xlabel("Time [ns]")
plt.ylabel("Voltage [V]")
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
**Tip:** When there are many parameters, it is clumsy to pass them individually into the ```run()``` method. We can collect them into a dictionary object and pass it to ```run()``` all at once. This makes it easier to verify and change values. As an example:
###Code
params = {'cap':30,
'dur':40,
'output_file':None}
# Check the script before running
print(script2.format(**params))
# Run the script by passing the params to the run() method
dat2 = engine.run(script2,**params)
# The results should be the same as the previous run. Not shown here.
###Output
_____no_output_____
###Markdown
3. Run a WRspice script with multiple parametric values in parallel. We can pass a list of values to one or more parameters and run them all in parallel, using multiprocessing, with the ```run_parallel()``` method. Let's demonstrate it with ```cap```.
###Code
# Recycle params above
params["cap"] = [20,50,100]
params["dur"] = 40
params3, dat3 = engine.run_parallel(script2,save_file=False,**params)
###Output
_____no_output_____
###Markdown
The return value is an array of data objects corresponding to the multiple runs. We need some extra work to extract them.
###Code
params3
dat3
# Extract data
caps = params3["cap"]
v3s = []
for dat in dat3:
v3s.append(dat.to_array()[2])
ts = dat.to_array()[0]
# Plot the data
fig = plt.figure(figsize=(12,6))
for cap,v3 in zip(caps,v3s):
plt.plot(ts*1e9, v3, label="cap = %s pF" %cap)
plt.xlabel("Time [ns]")
plt.ylabel("Voltage [V]")
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Parallel run with multiple parameters. We can change multiple parameters in a single parallel run. For example, repeat the above simulation with 2 different pulse durations.
###Code
# Recycle params above
params["cap"] = [20,50,100]
params["dur"] = [30, 60]
params4, dat4 = engine.run_parallel(script2,save_file=False,**params)
# Examine the returned parameter values
for k,v in params4.items():
print("%s = %s" %(k,v))
print("")
# Get the shape of the returned data
dat4.shape
# Plot the data
fig = plt.figure(figsize=(12,6))
shape = dat4.shape
for i in range(shape[0]):
for j in range(shape[1]):
dat = dat4[i,j]
ts = dat.variables[0].values
v3 = dat.variables[2].values
plt.plot(ts*1e9, v3, label="cap=%s[pF], dur=%s[ns]" %(params4["cap"][i,j],params4["dur"][i,j]))
plt.xlabel("Time [ns]")
plt.ylabel("Voltage [V]")
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
4. Adaptive run. Will be added later if there is demand. 5. Construct a WRspice script using script.py. So far we have written the sample WRspice scripts manually. The task can become arduous for large circuits. One can use the template package ```jinja``` to ease the task. Here we explore a different, Pythonic way to construct ```script2``` above.
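For comparison, the templating approach mentioned above would look roughly like this. This is only a sketch and assumes the `jinja2` package is installed (`pip install jinja2`); the circuit line is an arbitrary example.

```python
from jinja2 import Template  # assumption: jinja2 is available in the environment

# Same keyword substitution as str.format, but via a jinja2 template
line = Template("C1 3 0 {{ cap }}p").render(cap=30)
print(line)  # -> C1 3 0 30p
```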
###Code
# Reminder: script2
print(script2)
###Output
* Transient response of RLC circuit
.tran 50p 100n
* RLC model of a transmission line
R1 1 2 0.1
L1 2 3 1n
C1 3 0 {cap}p
R2 3 0 1e3
* Load impedance
Rload 3 0 50
* Pulse voltage source
V1 1 0 pulse(0 1 1n 1n 1n {dur}n)
*
.control
run
set filetype=binary
write {output_file} v(2) v(3)
.endc
###Markdown
Set up a circuit. We can declare a component by specifying its name, a list of ```ports```, a ```value```, and additional parameters. ```ports``` entries can be numeric or string but, as with ```value```, will eventually be converted to strings.
###Code
# Circuit components
R1 = script.Component("R1",ports=[1,"p2"],value=0.1,params={},comment="") # Full description
L1 = script.Component("L1",["p2",3],"1n")
C1 = script.Component("C1",[L1.ports[1],0],"{cap}p")
R2 = script.Component("R2",[3,0],1e3)
# Set up a circuit
cir = script.Circuit()
# Add components to the circuit
cir.add_component(R1) # Add one component
cir.add_components([L1,C1,R2]) # Add a list of components
# Display the circuit
print(cir.script())
# Plot the circuit, showing value
plt.figure(figsize=(9,6))
cir.plot(show_value=True)
plt.show()
# Similarly, set up a circuit having the load resistance and voltage source
Rload = script.Component("Rload",[3,0],50, comment="Load resistance")
V1 = script.Component("V1",[1,0],"pulse(0 1 1n 1n 1n {dur}n)", comment="Pulse source")
control_cir = script.Circuit()
control_cir.add_components([Rload,V1])
print(control_cir.script())
###Output
* Load resistance
Rload 3 0 50
* Pulse source
V1 1 0 pulse(0 1 1n 1n 1n {dur}n)
###Markdown
We can add models with ```add_model()```, subcircuits with ```add_subcircuit()```, or extra script text with ```add_script()``` on the circuit object. Let's skip these for now. Set up a script.
###Code
scr = script.Script("Transient response of RLC circuit")
# Add circuits
scr.add_circuit(cir)
scr.add_circuit(control_cir)
# Specify analysis and data saving
scr.analysis = ".tran 50p 100n"
scr.config_save(["p2",3],filename=None,filetype="binary") # specify which voltages to save; filename and filetype are optional
# Print out the script
print(scr.script())
# For confirmation, plot the combined circuit
plt.figure(figsize=(9,6))
scr.plot()
plt.show()
###Output
*Transient response of RLC circuit
.tran 50p 100n
R1 1 p2 0.1
L1 p2 3 1n
C1 3 0 {cap}p
R2 3 0 1000.0
* Load resistance
Rload 3 0 50
* Pulse source
V1 1 0 pulse(0 1 1n 1n 1n {dur}n)
.control
run
set filetype=binary
write {output_file} v(p2) v(3)
.endc
###Markdown
Test run the script
###Code
# Get the circuit parameters
scr.get_params()
print(scr.params)
# Set values to the parameters
scr.params["cap"] = 100
scr.params["dur"] = 40
# Alternatively
scr.set_params(cap=100,dur=40)
# Run the script
dat5 = engine.run(scr.script(),**scr.params)
# Extract the data
dat5 = dat5.to_array()
ts = dat5[0]
v2 = dat5[1]
v3 = dat5[2]
# Plot the data
fig = plt.figure(figsize=(8,6))
plt.plot(ts*1e9, v2, label="v(2)")
plt.plot(ts*1e9, v3, label="v(3)")
plt.xlabel("Time [ns]")
plt.ylabel("Voltage [V]")
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Add an array of components. Adding an array of components or subcircuits to a circuit is now very simple. Let's use ```cir``` as a subcircuit modelling a segment of a transmission line. We will simulate 10 of them.
###Code
big_cir = script.Circuit()
# Add subcircuit
big_cir.add_subcircuit(cir,"segment",[1,3])
print(big_cir.script())
# The subcircuit can be instantiated as a general component whose name starts with 'X'
# Let's add 10 of them
for i in range(1,11):
big_cir.add_component(script.Component("X%d"%i,[i,i+1],"segment"))
# Check the result
print(big_cir.script())
# Add the source and load
Rload.ports = [11,0] # Change the ports of Rload, originally [3,0]
big_cir.add_components([V1,Rload])
# Plot the circuit, not show value
plt.figure(figsize=(12,6))
big_cir.plot()
plt.show()
# In case we forgot how the subcircuit looks like, let's plot it again
plt.figure(figsize=(8,6))
big_cir.subcircuits["segment"].plot(show_value=True)
plt.show()
# Set up a script
scr2 = script.Script("Transient response of a transmission line")
# Add circuits
scr2.add_circuit(big_cir)
# Specify analysis and data saving
scr2.analysis = ".tran 50p 100n"
scr2.config_save([1,5,11]) # Just examine a few voltages
# Final check of the script
print(scr2.script())
# Get the circuit parameters
scr2.get_params()
# Set values to the parameters
scr2.set_params(cap=100,dur=40)
# Run the script
dat5 = engine.run(scr2.script(),**scr2.params)
# Extract the data
df5 = dat5.to_df()
ts = df5["time"]
vs = df5["v(1)"]
vmid = df5["v(5)"]
vload = df5["v(11)"]
# Plot the data
fig = plt.figure(figsize=(8,6))
plt.plot(ts*1e9, vs, label="V source")
plt.plot(ts*1e9, vmid, label="V mid")
plt.plot(ts*1e9, vload, label="V load")
plt.xlabel("Time [ns]")
plt.ylabel("Voltage [V]")
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Introduction: DDOT tutorial* __What is an ontology?__ An ontology is a hierarchical arrangement of two types of nodes: (1) genes at the leaves of the hierarchy and (2) terms at intermediate levels of the hierarchy. The hierarchy can be thought of as a directed acyclic graph (DAG), in which each node can have multiple children or multiple parent nodes. DAGs are a generalization of trees (a.k.a. dendrograms), where each node has at most one parent.* __What is DDOT?__ The DDOT Python package provides many functions for assembling, analyzing, and visualizing ontologies. The main functionalities are implemented in an object-oriented manner by an "Ontology" class, which handles ontologies that are data-driven as well as those that are manually curated like the Gene Ontology.* __What to do after reading this tutorial__ Check out a complete list of functions in the [Ontology class](http://ddot.readthedocs.io/en/latest/ontology.html) and a list of [utility functions](http://ddot.readthedocs.io/en/latest/utils.html) that may help you build more concise pipelines. Also check out [example Jupyter notebooks](https://github.com/michaelkyu/ddot/tree/master/examples) that contain pipelines for downloading and processing the Gene Ontology and for inferring data-driven gene ontologies of diseases.
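The DAG structure can be made concrete with a small `networkx` sketch over the same toy hierarchy constructed below (child-to-parent edges; `networkx` is used here only for illustration and is an assumption, not a DDOT dependency):

```python
import networkx as nx

# Child -> parent edges of the toy ontology used in this tutorial
edges = [('S3', 'S1'), ('S4', 'S1'), ('S5', 'S1'), ('S5', 'S2'),
         ('S6', 'S2'), ('S1', 'S0'), ('S2', 'S0')]
dag = nx.DiGraph(edges)
print(nx.is_directed_acyclic_graph(dag))  # True: the hierarchy has no cycles
print(sorted(dag.successors('S5')))       # ['S1', 'S2']: S5 has two parents,
                                          # which a tree would not allow
```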
###Code
import os
import ddot
import numpy as np
import pandas as pd
from ddot import Ontology
###Output
_____no_output_____
###Markdown
Creating an Ontology object* An object of the Ontology class can be created in several ways.* In this tutorial, we will construct and analyze the toy ontology shown below. Create ontology through the `__init__` constructor
###Code
# Connections from child terms to parent terms
hierarchy = [('S3', 'S1'),
('S4', 'S1'),
('S5', 'S1'),
('S5', 'S2'),
('S6', 'S2'),
('S1', 'S0'),
('S2', 'S0')]
# Connections from genes to terms
mapping = [('A', 'S3'),
('B', 'S3'),
('C', 'S3'),
('C', 'S4'),
('D', 'S4'),
('E', 'S5'),
('F', 'S5'),
('G', 'S6'),
('H', 'S6')]
# Construct ontology
ont = Ontology(hierarchy, mapping)
# Prints a summary of the ontology's structure
print(ont)
###Output
8 genes, 7 terms, 9 gene-term relations, 7 term-term relations
node_attributes: []
edge_attributes: []
###Markdown
Create an ontology from a tab-delimited table or Pandas dataframe
###Code
# Write ontology to a tab-delimited table
ont.to_table('toy_ontology.txt')
# Reconstruct the ontology from the table
Ontology.from_table('toy_ontology.txt')
###Output
_____no_output_____
###Markdown
Create an Ontology from the Network Data Exchange (NDEx)* It is strongly recommended that you create a free account on NDEx in order to keep track of your own ontologies.* Note that there are two NDEx servers: the main one at http://ndexbio.org/ and a test server for prototyping your code at http://test.ndexbio.org. Each server requires a separate user account. While you get familiar with DDOT, we recommend that you use an account on the test server. Set the NDEx server and the user account. This "scratch" account will work for this tutorial, but you should replace it with your own account.
###Code
ndex_server = os.environ.get('NDEX_SERVER', default='http://ndexbio.org')
ndex_user = os.environ.get('NDEX_USERNAME', default='scratch')
ndex_pass = os.environ.get('NDEX_PASSWORD', default='scratch')
###Output
_____no_output_____
###Markdown
Upload the ontology to NDEx. The string after "v2/network/" is a unique identifier, called the UUID, of the ontology in NDEx
###Code
url, _ = ont.to_ndex(
ndex_user=ndex_user,
ndex_pass=ndex_pass,
ndex_server=ndex_server,
)
print(url)
# Download the ontology from NDEx
ont2 = Ontology.from_ndex(url)
print(ont2)
###Output
8 genes, 7 terms, 9 gene-term relations, 7 term-term relations
node_attributes: ['Vis:Fill Color', 'Vis:Border Paint', 'Vis:Size', 'Vis:Shape', 'name']
edge_attributes: ['Vis:Visible']
###Markdown
Inspecting the structure of an ontology An Ontology object contains seven attributes:* ``genes`` : List of gene names* ``terms`` : List of term names* ``gene_2_term`` : dictionary mapping a gene name to a list of terms connected to that gene. Terms are represented as their 0-based index in ``terms``.* ``term_2_gene`` : dictionary mapping a term name to a list of genes connected to that term. Genes are represented as their 0-based index in ``genes``.* ``child_2_parent`` : dictionary mapping a child term to its parent terms.* ``parent_2_child`` : dictionary mapping a parent term to its child terms.* ``term_sizes`` : A list of each term's size, i.e. the number of unique genes contained within this term and its descendants. The order of this list is the same as ``terms``. For every ``i``, it holds that ``term_sizes[i] = len(self.term_2_gene[self.terms[i]])``
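As a plain-Python illustration of the index-based encoding, ``gene_2_term`` can be rebuilt from the gene-term mapping of the toy ontology above (the term order used here is illustrative; the actual order of ``ont.terms`` may differ):

```python
# Minimal sketch, plain Python: gene_2_term stores 0-based indices into the
# terms list rather than term names. Term order here is illustrative only.
terms = ['S3', 'S4', 'S5', 'S6', 'S1', 'S2', 'S0']
mapping = [('A', 'S3'), ('B', 'S3'), ('C', 'S3'), ('C', 'S4'), ('D', 'S4'),
           ('E', 'S5'), ('F', 'S5'), ('G', 'S6'), ('H', 'S6')]
gene_2_term = {}
for gene, term in mapping:
    gene_2_term.setdefault(gene, []).append(terms.index(term))

# Translate the indices back to names for readability:
assert [terms[i] for i in gene_2_term['C']] == ['S3', 'S4']
```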
###Code
ont.genes
ont.terms
ont.gene_2_term
ont.term_2_gene
ont.child_2_parent
ont.parent_2_child
###Output
_____no_output_____
###Markdown
Alternatively, the hierarchical connections can be viewed as a binary matrix, using `Ontology.connected()`
###Code
conn = ont.connected()
np.array(conn, dtype=np.int32)
###Output
_____no_output_____
###Markdown
A summary of an Ontology object, i.e. the number of genes, terms, and connections, can be printed with `print(ont)`
###Code
print(ont)
###Output
8 genes, 7 terms, 9 gene-term relations, 7 term-term relations
node_attributes: []
edge_attributes: []
###Markdown
Manipulating the structure of an ontology DDOT provides several convenience functions for processing Ontologies into a desirable structure. Currently, there are no functions for adding genes and terms. If this is needed, then we recommend creating a new Ontology or manipulating the contents in a different library, such as NetworkX or igraph, and transforming the results into an Ontology. Renaming nodes
###Code
# Renaming genes and terms.
ont2 = ont.rename(genes={'A' : 'A_alias'}, terms={'S0':'S0_alias'})
ont2.to_table()
###Output
_____no_output_____
###Markdown
Delete S1 and G while preserving transitive connections
###Code
ont2 = ont.delete(to_delete=['S1', 'G'])
print(ont2)
###Output
7 genes, 6 terms, 8 gene-term relations, 6 term-term relations
node_attributes: []
edge_attributes: []
###Markdown
Delete S1 and G (don't preserve transitive connections)
###Code
ont2 = ont.delete(to_delete=['S1', 'G'], preserve_transitivity=False)
print(ont2)
###Output
7 genes, 6 terms, 8 gene-term relations, 3 term-term relations
node_attributes: []
edge_attributes: []
###Markdown
Propagate gene-term connections* Often, it is convenient to explicitly include all transitive connections in the hierarchy. That is, if a hierarchy has edges A-->B and B-->C, then the hierarchy also has A-->C. This can be done by calling the `Ontology.propagate(direction='forward')` function.* On the other hand, all transitive connections can be removed with `Ontology.propagate(direction='reverse')`. This is useful for obtaining a parsimonious set of connections.
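The forward propagation amounts to a transitive closure; a tiny plain-Python sketch over the toy term hierarchy (illustrative only — `Ontology.propagate()` handles the gene-term and term-term edges for you):

```python
# Transitive closure of the toy term hierarchy: repeatedly add (a, d)
# whenever edges (a, b) and (b, d) exist, until nothing changes.
edges = {('S3', 'S1'), ('S4', 'S1'), ('S5', 'S1'), ('S5', 'S2'),
         ('S6', 'S2'), ('S1', 'S0'), ('S2', 'S0')}
closure = set(edges)
changed = True
while changed:
    changed = False
    for (a, b) in list(closure):
        for (c, d) in list(closure):
            if b == c and (a, d) not in closure:
                closure.add((a, d))
                changed = True

assert ('S3', 'S0') in closure   # S3 -> S1 -> S0
assert len(closure) == 11        # matches the 11 term-term relations printed below
```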
###Code
# Include all transitive connections between genes and terms
ont2 = ont.propagate(direction='forward', gene_term=True, term_term=False)
print(ont2)
# Remove all transitive connections between genes and terms, retaining only a parsimonious set of connections
ont3 = ont2.propagate(direction='reverse', gene_term=True, term_term=False)
print(ont3)
###Output
8 genes, 7 terms, 27 gene-term relations, 7 term-term relations
node_attributes: []
edge_attributes: []
8 genes, 7 terms, 9 gene-term relations, 7 term-term relations
node_attributes: []
edge_attributes: []
###Markdown
Propagate term-term connections
###Code
# Include all transitive connections between terms
ont2 = ont.propagate(direction='forward', gene_term=False, term_term=True)
print(ont2)
# Remove all transitive connections between terms, retaining only a parsimonious set of connections
ont3 = ont2.propagate(direction='reverse', gene_term=False, term_term=True)
print(ont3)
###Output
8 genes, 7 terms, 9 gene-term relations, 11 term-term relations
node_attributes: []
edge_attributes: []
8 genes, 7 terms, 9 gene-term relations, 7 term-term relations
node_attributes: []
edge_attributes: []
###Markdown
Take the subbranch consisting of all term and genes under S1
###Code
ont2 = ont.focus(branches=['S1'])
print(ont2)
###Output
Genes and Terms to keep: 10
6 genes, 4 terms, 7 gene-term relations, 3 term-term relations
node_attributes: []
edge_attributes: []
###Markdown
Inferring a data-driven ontology* Given a set of genes and a gene similarity network, we can hierarchically cluster the genes to infer cellular subsystems using the CLIXO algorithm. The resulting hierarchy of subsystems defines a "data-driven gene ontology". For more information about the CLIXO algorithm, see [Kramer, *et al.* Bioinformatics, 30(12), pp.i34-i42. 2014](https://doi.org/10.1093/bioinformatics/btu282).* Conversely, we can also "flatten" the ontology structure to infer a gene-by-gene similarity network. In particular, the similarity between two genes is calculated as the size of the smallest common subsystem, known as "Resnik semantic similarity".* The CLIXO algorithm has been designed to reconstruct the original hierarchy from the Resnik score.
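The printed similarity values below can be checked by hand; a minimal sketch, under the assumption (ours, for illustration) that the score used here is $\log_2(N/s)$, where $N$ is the total number of genes (8) and $s$ is the size of the smallest term containing both genes:

```python
import math

# Hedged sketch: reproduce a couple of entries of the Resnik similarity
# matrix, assuming score = log2(N / s) with N = 8 genes and s = size of
# the smallest common subsystem of the two genes.
n_genes = 8

def resnik(smallest_common_term_size):
    return math.log2(n_genes / smallest_common_term_size)

assert round(resnik(3), 2) == 1.42  # e.g. A and B share S3 (3 genes)
assert round(resnik(2), 2) == 2.0   # e.g. C and D share S4 (2 genes)
```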
###Code
# Flatten ontology to gene-by-gene network
sim, genes = ont.flatten()
print('Similarity matrix:')
print(np.round(sim, 2))
print('\nRow/column names of similarity matrix:')
print(*genes)
###Output
Similarity matrix:
[[ 1.42 1.42 1.42 0.42 0.42 0.42 -0. -0. ]
[ 1.42 1.42 1.42 0.42 0.42 0.42 -0. -0. ]
[ 1.42 1.42 2. 2. 0.42 0.42 -0. -0. ]
[ 0.42 0.42 2. 2. 0.42 0.42 -0. -0. ]
[ 0.42 0.42 0.42 0.42 2. 2. 1. 1. ]
[ 0.42 0.42 0.42 0.42 2. 2. 1. 1. ]
[-0. -0. -0. -0. 1. 1. 2. 2. ]
[-0. -0. -0. -0. 1. 1. 2. 2. ]]
Row/column names of similarity matrix:
A B C D E F G H
###Markdown
Reconstruct the ontology using the CLIXO algorithm.In general, you may feed any kind of gene-gene similarities, e.g. measurements of protein-protein interactions, gene co-expression, or genetic interactions.
###Code
sim_df = pd.DataFrame(sim, index=list(genes), columns=list(genes))
ont2 = Ontology.run_clixo(
sim_df,
df_output_path='df_temp.txt',
clixo_output_path='clixo_temp.txt',
output_log_path='output_log.txt',
alpha=0.0,
beta=1.0,
square=True,
square_names=genes,
)
print(ont2)
ont2.to_table(edge_attr=True)
###Output
_____no_output_____
###Markdown
Ontology alignment* The structures of two ontologies can be compared through a procedure known as ontology alignment. Ontology.align() implements the ontology alignment described in (Dutkowski et al. Nature biotechnology, 31(1), 2013), in which terms are matched if they contain similar sets of genes and if their parent and child terms are also similar.* Ontology alignment is particularly useful for annotating a data-driven gene ontology by aligning it to a curated ontology such as the Gene Ontology (GO). For instance, if a data-driven term is identified to have a similar set of genes as the GO term for DNA repair, then the data-driven subsystem can be annotated as being involved in DNA repair. Moreover, data-driven terms with no matches in the ontology alignment may represent new molecular mechanisms.
###Code
## Make a second ontology (the ontology to the right in the above diagram)
# Connections from child terms to parent terms
hierarchy = [('T3', 'T1'),
('T4', 'T1'),
('T1', 'T0'),
('T5', 'T0')]
# Connections from genes to terms
mapping = [('A', 'T3'),
('B', 'T3'),
('C', 'T3'),
('D', 'T4'),
('E', 'T4'),
('F', 'T4'),
('G', 'T5'),
('H', 'T5')]
# Construct ontology
ont_B = Ontology(hierarchy, mapping)
ont.align(ont_B)
###Output
collapse command: /Users/cthoyt/dev/ddot/ddot/alignOntology/collapseRedundantNodes /var/folders/l8/mz5vb84x5sg3bpv8__vr91240000gn/T/tmp58gwlwxp
collapse command: /Users/cthoyt/dev/ddot/ddot/alignOntology/collapseRedundantNodes /var/folders/l8/mz5vb84x5sg3bpv8__vr91240000gn/T/tmpk_5b48wr
Alignment command: /Users/cthoyt/dev/ddot/ddot/alignOntology/calculateFDRs /var/folders/l8/mz5vb84x5sg3bpv8__vr91240000gn/T/tmptcfp2au4 /var/folders/l8/mz5vb84x5sg3bpv8__vr91240000gn/T/tmpjtzdkhiy 0.05 criss_cross /var/folders/l8/mz5vb84x5sg3bpv8__vr91240000gn/T/tmpby1ips80 100 8 gene
###Markdown
Construct ontotypes* A major goal of genetics is to understand how genotype translates to phenotype. An ontology represents the biological structure through which this genotype-phenotype translation happens. * Given a set of mutations comprising a genotype, DDOT allows you to propagate the impact of these mutations to the subsystems containing these genes in the ontology. In particular, the impact on a subsystem is estimated by the number of its genes that have been mutated. These subsystem activities, which we have called an “ontotype”, enable more accurate and interpretable predictions of phenotype from genotype (Yu et al. Cell Systems, 2(2), pp.77-88. 2016).
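The counting itself is simple; a plain-Python sketch that reproduces a few entries of the ontotype matrix printed below, using the direct gene-term annotations of the toy ontology (illustrative only — `Ontology.get_ontotype()` is the supported API):

```python
# Direct gene-term annotations of the toy ontology (no propagation),
# matching the behaviour visible in the printed ontotype matrix.
term_2_gene = {'S3': {'A', 'B', 'C'}, 'S4': {'C', 'D'},
               'S5': {'E', 'F'}, 'S6': {'G', 'H'},
               'S1': set(), 'S2': set(), 'S0': set()}

def ontotype(genotype):
    """Impact on each term = number of its genes mutated in the genotype."""
    return {t: len(set(genotype) & genes) for t, genes in term_2_gene.items()}

assert ontotype(('A', 'B'))['S3'] == 2   # Genotype0 in the matrix below
assert ontotype(('C', 'F'))['S4'] == 1   # Genotype5 hits S3, S4, and S5 once each
```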
###Code
# Genotypes can be represented as tuples of mutated genes
genotypes = [('A', 'B'),
('A', 'E'),
('A', 'H'),
('B', 'E'),
('B', 'H'),
('C', 'F'),
('D', 'E'),
('D', 'H'),
('E', 'H'),
('G', 'H')]
# Calculate the ontotypes, represented as a genotype-by-term matrix. Each value represents the functional impact on a term in a genotype.
ontotypes = ont.get_ontotype(genotypes)
print(ontotypes)
# Genotypes can also be represented as a genotype-by-gene matrix, an alternative input format
genotypes_df = pd.DataFrame(
np.zeros((len(genotypes), len(ont.genes)), np.float64),
index=[f'Genotype{i}' for i in range(len(genotypes))],
columns=ont.genes,
)
for i, (g1, g2) in enumerate(genotypes):
genotypes_df.loc['Genotype%s' % i, g1] = 1.0
genotypes_df.loc['Genotype%s' % i, g2] = 1.0
print('Genotype matrix:')
print(genotypes_df)
ontotypes = ont.get_ontotype(genotypes_df, input_format='matrix')
print('Ontotype matrix:')
print(ontotypes)
###Output
Ontotype matrix:
S0 S1 S2 S3 S4 S5 S6
Genotype0 0.0 0.0 0.0 2.0 0.0 0.0 0.0
Genotype1 0.0 0.0 0.0 1.0 0.0 1.0 0.0
Genotype2 0.0 0.0 0.0 1.0 0.0 0.0 1.0
Genotype3 0.0 0.0 0.0 1.0 0.0 1.0 0.0
Genotype4 0.0 0.0 0.0 1.0 0.0 0.0 1.0
Genotype5 0.0 0.0 0.0 1.0 1.0 1.0 0.0
Genotype6 0.0 0.0 0.0 0.0 1.0 1.0 0.0
Genotype7 0.0 0.0 0.0 0.0 1.0 0.0 1.0
Genotype8 0.0 0.0 0.0 0.0 0.0 1.0 1.0
Genotype9 0.0 0.0 0.0 0.0 0.0 0.0 2.0
###Markdown
Conversions to NetworkX and igraph
###Code
# Convert to an igraph object
G = ont.to_igraph()
print(G)
# Reconstruct the Ontology object from the igraph object
Ontology.from_igraph(G)
# Convert to a NetworkX object
G = ont.to_networkx()
print(G.nodes())
print(G.edges())
# Reconstruct the Ontology object from the NetworkX object
tmp = Ontology.from_networkx(G)
print(tmp)
###Output
8 genes, 7 terms, 9 gene-term relations, 7 term-term relations
node_attributes: []
edge_attributes: []
###Markdown
Ontology visualization using HiView (http://hiview.ucsd.edu)* HiView is a web application for general visualization of the hierarchical structure in ontologies.* To use HiView, you must first upload your ontology into NDEx using the [Ontology.to_ndex()](http://ddot.readthedocs.io/en/latest/ontology.html#ddot.Ontology.to_ndex) function, and then input the NDEx URL for the ontology into HiView.* In contrast to almost all other hierarchical visualization tools, which are limited to simple tree structures, HiView also supports more complicated hierarchies in the form of directed acyclic graphs, in which nodes may have multiple parents. A simple upload to NDEx and visualization in HiView* Upload ontologies to NDEx using the `Ontology.to_ndex()` function.* Setting the parameter `layout="bubble"` (default value) will identify a spanning tree of the DAG and then lay out this tree in a space-compact manner. When viewing in HiView, only the edges in the spanning tree are shown, while the other edges can optionally be shown.
###Code
url, _ = ont.to_ndex(
ndex_server=ndex_server,
ndex_user=ndex_user,
ndex_pass=ndex_pass,
layout=None, # 'bubble'
)
print('Go to http://hiview.ucsd.edu in your web browser')
print('Enter this into the "NDEx Sever URL" field: %s' % ndex_server)
print('Enter this into the "UUID of the main hierarchy" field: %s' % url.split('/')[-1])
###Output
Go to http://hiview.ucsd.edu in your web browser
Enter this into the "NDEx Sever URL" field: http://ndexbio.org
Enter this into the "UUID of the main hierarchy" field: 97424525-d861-11e8-aaa6-0ac135e8bacf
###Markdown
An alternative layout by duplicating nodes* Setting the parameter `layout="bubble-collect"` will convert the DAG into a tree by duplicating nodes.* This transformation enables the ontology structure to be visualized without edges crossing.
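Conceptually, `bubble-collect` duplicates each multi-parent term once per root-to-term path; a plain-Python sketch on the toy hierarchy (illustrative only):

```python
# Number of copies each term gets when the DAG is unrolled into a tree:
# one copy per distinct root-to-term path.
parents = {'S3': ['S1'], 'S4': ['S1'], 'S5': ['S1', 'S2'],
           'S6': ['S2'], 'S1': ['S0'], 'S2': ['S0'], 'S0': []}

def n_copies(term):
    if not parents[term]:
        return 1                      # the root appears exactly once
    return sum(n_copies(p) for p in parents[term])

assert n_copies('S5') == 2  # S5 has two parents (S1, S2), so it is duplicated
assert n_copies('S3') == 1
```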
###Code
url, _ = ont.to_ndex(ndex_server=ndex_server,
ndex_user=ndex_user,
ndex_pass=ndex_pass,
layout='bubble-collect')
print('Go to http://hiview.ucsd.edu in your web browser')
print('Enter this into the "NDEx Sever URL" field: %s' % ndex_server)
print('Enter this into the "UUID of the main hierarchy" field: %s' % url.split('/')[-1])
###Output
Go to http://hiview.ucsd.edu in your web browser
Enter this into the "NDEx Sever URL" field: http://ndexbio.org
Enter this into the "UUID of the main hierarchy" field: 98071bc8-d861-11e8-aaa6-0ac135e8bacf
###Markdown
Visualizing metadata by modifying node labels, colors, and sizes* An Ontology object has a `node_attr` field that is a pandas DataFrame. The rows of the dataframe are genes or terms, and the columns are node attributes.* HiView understands special node attributes to control the node labels, colors, and sizes.
###Code
# Set the node labels (default is the gene and term names, as found in Ontology.genes and Ontology.terms)
ont.node_attr.loc['S4', 'Label'] = 'S4 alias'
ont.node_attr.loc['S5', 'Label'] = 'S5 alias'
# Set the fill color of nodes
ont.node_attr.loc['C', 'Vis:Fill Color'] = '#7fc97f'
ont.node_attr.loc['S1', 'Vis:Fill Color'] = '#beaed4'
ont.node_attr.loc['S0', 'Vis:Fill Color'] = '#fdc086'
# Set the node sizes (if not set, the default is the term size, as found in Ontology.term_sizes)
ont.node_attr.loc['C', 'Size'] = 10
ont.node_attr
url, _ = ont.to_ndex(ndex_server=ndex_server,
ndex_user=ndex_user,
ndex_pass=ndex_pass,
layout='bubble-collect')
print('Go to http://hiview.ucsd.edu in your web browser')
print('Enter this into the "NDEx Sever URL" field: %s' % ndex_server)
print('Enter this into the "UUID of the main hierarchy" field: %s' % url.split('/')[-1])
# Clear node attributes (optional)
ont.clear_node_attr()
ont.node_attr
###Output
_____no_output_____
###Markdown
Visualize gene-gene interaction networks alongside the ontology* Every term in an ontology represents a biological function shared among the term's genes. Based on this intuition, those genes should be interacting in different ways, e.g. protein-protein interactions, RNA expression, or genetic interactions.* Gene-gene interaction networks can be uploaded with the ontology to NDEx, so that they can be visualized at the same time in HiView
###Code
# Calculate a gene-by-gene similarity matrix using the Resnik semantic similarity definition (see section "Inferring a data-driven ontology")
sim, genes = ont.flatten()
print(genes)
print(np.round(sim, 2))
###Output
['A' 'B' 'C' 'D' 'E' 'F' 'G' 'H']
[[ 1.42 1.42 1.42 0.42 0.42 0.42 -0. -0. ]
[ 1.42 1.42 1.42 0.42 0.42 0.42 -0. -0. ]
[ 1.42 1.42 2. 2. 0.42 0.42 -0. -0. ]
[ 0.42 0.42 2. 2. 0.42 0.42 -0. -0. ]
[ 0.42 0.42 0.42 0.42 2. 2. 1. 1. ]
[ 0.42 0.42 0.42 0.42 2. 2. 1. 1. ]
[-0. -0. -0. -0. 1. 1. 2. 2. ]
[-0. -0. -0. -0. 1. 1. 2. 2. ]]
###Markdown
Convert the gene-by-gene similarity matrix into a dataframe with a "long" format, where rows represent gene pairs. This conversion can be easily done with ddot.melt_square()
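The conversion is conceptually just an upper-triangle flattening; a plain-Python sketch of the idea (the actual column names produced by `ddot.melt_square()` may differ — only the `similarity` column is used later):

```python
# Square -> long conversion: one row per unordered gene pair,
# carrying the similarity value for that pair.
genes = ['A', 'B', 'C']
sim = [[0, 5, 2],
       [5, 0, 7],
       [2, 7, 0]]
long_rows = [(genes[i], genes[j], sim[i][j])
             for i in range(len(genes))
             for j in range(i + 1, len(genes))]

assert long_rows == [('A', 'B', 5), ('A', 'C', 2), ('B', 'C', 7)]
```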
###Code
sim_df = pd.DataFrame(sim, index=genes, columns=genes)
sim_long = ddot.melt_square(sim_df)
sim_long.head()
###Output
_____no_output_____
###Markdown
Create other gene-gene interactions. For example, these can represent protein-protein interactions or gene co-expression. Here, we simulate interactions by adding random noise to the Resnik similarity
###Code
sim_long['example_interaction_type1'] = sim_long['similarity'] + np.random.random(sim_long.shape[0]) / 2.
sim_long['example_interaction_type2'] = sim_long['similarity'] + np.random.random(sim_long.shape[0]) / 2.
sim_long.head()
# Include the above gene-gene interactions by setting the `network` and `main_feature` parameters.
url, _ = ont.to_ndex(
ndex_server=ndex_server,
ndex_user=ndex_user,
ndex_pass=ndex_pass,
network=sim_long,
main_feature='similarity',
layout='bubble-collect',
)
print('Go to http://hiview.ucsd.edu in your web browser')
print('Enter this into the "NDEx Sever URL" field: %s' % ndex_server)
print('Enter this into the "UUID of the main hierarchy" field: %s' % url.split('/')[-1])
###Output
Go to http://hiview.ucsd.edu in your web browser
Enter this into the "NDEx Sever URL" field: http://ndexbio.org
Enter this into the "UUID of the main hierarchy" field: 9d05f2b3-d861-11e8-aaa6-0ac135e8bacf
###Markdown
pyFCI tutorialThis is a prototype of a library to perform **intrinsic dimension estimation using the local full correlation integral estimator** presented in our [paper](https://www.nature.com/articles/s41598-019-53549-9). InstallationClone the repository locally git clone https://github.com/vittorioerba/pyFCI.git and install using pip cd pyFCI pip3 install . If you want to make modifications to the source code, install by symlinking cd pyFCI pip3 install -e . UsageWe recommend using numpy arrays as often as you can.
###Code
# imports
import pyFCI
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
###Output
_____no_output_____
###Markdown
Let's generate a simple dataset to play with.
###Code
N = 100;
d = 3;
dataset = np.random.rand(N,d)
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.scatter(dataset[:,0], dataset[:,1], dataset[:,2])
###Output
_____no_output_____
###Markdown
Global Intrinsic Dimension Estimation (IDE)First of all, we need to preprocess our dataset so that it has null mean, and all vectors are normalized.
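What the preprocessing does can be sketched in plain NumPy (an assumption about the exact operations, for illustration — use `pyFCI.center_and_normalize()` in practice):

```python
import numpy as np

# Sketch: subtract the sample mean, then rescale every point to unit norm.
rng = np.random.default_rng(0)
X = rng.random((100, 3))
Xc = X - X.mean(axis=0)                              # null mean
Xn = Xc / np.linalg.norm(Xc, axis=1, keepdims=True)  # unit-norm rows

assert np.allclose(np.linalg.norm(Xn, axis=1), 1.0)
```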
###Code
processed_dataset = pyFCI.center_and_normalize(dataset)
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.scatter(processed_dataset[:,0], processed_dataset[:,1], processed_dataset[:,2])
###Output
_____no_output_____
###Markdown
Then, we proceed to compute the **full correlation integral** (FCI).
###Code
fci = pyFCI.FCI(processed_dataset)
fig = plt.figure()
ax = fig.add_subplot()
ax.plot(fci[:,0], fci[:,1])
ax.set_xlim([0,2])
ax.set_ylim([0,1])
###Output
_____no_output_____
###Markdown
Notice that if your dataset has $N$ points, the ``pyFCI.FCI()`` function will have to perform $\frac{N(N-1)}{2} \sim N^2$ operations to compute the FCI exactly. If your dataset is large, it's better to compute an approximation of the FCI using the ``pyFCI.FCI_MC()`` method; its second argument gives an upper bound on the number of operations allowed (500 is a sane default; anything above that works practically as well as the exact FCI for all purposes). Let's compare the two methods.(**Attention:** the first run will call the numba jit compiler and will take much longer!)
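The quadratic cost is easy to quantify:

```python
# Number of pairwise distances the exact FCI evaluates for N points.
def n_pairs(n):
    return n * (n - 1) // 2

assert n_pairs(100) == 4950
assert n_pairs(2000) == 1_999_000   # the dataset size used just below
```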
###Code
N = 2000;
d = 10;
dataset = np.random.rand(N,d)
processed_dataset = pyFCI.center_and_normalize(dataset);
%time fci = pyFCI.FCI(processed_dataset)
%time fciMC = pyFCI.FCI_MC(processed_dataset, 1000)
fig = plt.figure()
ax = fig.add_subplot()
ax.plot(fci[:,0], fci[:,1], label="exact")
ax.plot(fciMC[:,0], fciMC[:,1], label="approx $10^3$ samples")
ax.legend(loc='upper left')
ax.set_xlim([0,2])
ax.set_ylim([0,1])
###Output
_____no_output_____
###Markdown
Now that we have the FCI, we are ready to compute the ID of the dataset.For a first check, one can use the ``pyFCI.analytical_FCI()`` function (notice that we need to use $d-1$, as normalizing the dataset eats away a degree of freedom):
###Code
fig = plt.figure()
ax = fig.add_subplot()
ax.plot(fci[:,0], fci[:,1], label="empirical exact")
ax.plot(fciMC[:,0], fciMC[:,1], label="empirical approx $10^3$ samples")
xs = np.linspace(0,2,100)
ys = pyFCI.analytical_FCI(xs,d-1,1)
ax.plot(xs, ys, label="analytical")
ax.set_xlim([0,2])
ax.set_ylim([0,1])
ax.legend(loc='upper left')
###Output
_____no_output_____
###Markdown
To actually fit the function and recover $d$, we use ``pyFCI.fit_FCI()``.
###Code
fit_exact = pyFCI.fit_FCI(fci)
fit_MC = pyFCI.fit_FCI(fciMC)
print("ID estimated with exact FCI: ", fit_exact[0])
print("ID estimated with approximate FCI: ", fit_MC[0])
###Output
ID estimated with exact FCI: 10.153064067014695
ID estimated with approximate FCI: 10.619394691123722
###Markdown
Local Intrinsic Dimension Estimation (IDE)To estimate the local ID, you need to specify a local patch of your dataset.This is done by selecting a single point in the dataset, and specifying the number of nearest neighbours that define larger and larger neighbourhoods.
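Building those neighbourhoods amounts to sorting points by distance from the chosen center; a NumPy sketch of the idea (illustrative — `pyFCI.local_FCI()` does this internally):

```python
import numpy as np

# Growing neighbourhoods of a chosen center: its k nearest neighbours
# for increasing k.
rng = np.random.default_rng(0)
X = rng.random((200, 10))
center = 0
dists = np.linalg.norm(X - X[center], axis=1)
order = np.argsort(dists)                 # order[0] is the center itself
for k in (5, 10, 15):
    patch = X[order[:k + 1]]              # the center plus its k nearest neighbours
    assert patch.shape == (k + 1, 10)
```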
###Code
center = np.random.randint(len(dataset))
ks = np.array([5*i for i in range(1,11)])
localFCI = pyFCI.local_FCI(dataset,center,ks)
print(" ks |Max dist|loc ID| x0| MSE")
with np.printoptions(precision=3, suppress=True):
print(localFCI)
###Output
ks |Max dist|loc ID| x0| MSE
[[ 5. 0.657 22.92 1.2 0.054]
[10. 0.735 10.939 1.048 0.03 ]
[15. 0.793 7.238 1.034 0.02 ]
[20. 0.812 10.555 1.033 0.036]
[25. 0.836 10.308 1.009 0.018]
[30. 0.854 8.93 0.994 0.022]
[35. 0.862 10.111 1.012 0.01 ]
[40. 0.878 11.345 1.016 0.015]
[45. 0.889 9.565 0.991 0.009]
[50. 0.895 9.33 1.017 0.012]]
###Markdown
Now you can repeat for as many local centers as you like:
###Code
Ncenters = 30
centers = np.random.randint(len(dataset),size=Ncenters)
localFCI_multiple = np.empty(shape=(0,len(ks),5))
for i in range(Ncenters):
    localFCI = pyFCI.local_FCI(dataset, centers[i], ks)
    localFCI_multiple = np.append(localFCI_multiple, [localFCI], axis=0)
###Output
_____no_output_____
###Markdown
and you can reproduce the persistence plot shown in our [paper](https://www.nature.com/articles/s41598-019-53549-9)
###Code
fig = plt.figure()
ax = fig.add_subplot()
for i in range(Ncenters):
ax.plot(localFCI_multiple[i,:,0],localFCI_multiple[i,:,2])
xs = np.linspace(0,50,2)
ax.plot(xs,[10 for x in xs],color="black")
ax.set_ylim([0,20])
N = 1000;
d = 500;
dataset = np.random.rand(N,d)
processed_dataset = pyFCI.center_and_normalize(dataset)
fci = pyFCI.FCI(processed_dataset)
fciMC = pyFCI.FCI_MC(processed_dataset, 1000)
#fit_exact = pyFCI.fit_FCI(fci)
#fit_MC = pyFCI.fit_FCI(fciMC)
#print("ID estimated with exact FCI: ", fit_exact[0])
#print("ID estimated with approximate FCI: ", fit_MC[0])
fig = plt.figure()
ax = fig.add_subplot()
ax.plot(fci[:,0], fci[:,1], label="empirical exact")
ax.plot(fciMC[:,0], fciMC[:,1], label="empirical approx $10^3$ samples")
xs = np.linspace(0,2,100)
ys = pyFCI.analytical_FCI(xs,d-1,1)
ax.plot(xs, ys, label="analytical")
ax.set_xlim([0,2])
ax.set_ylim([0,1])
ax.legend(loc='upper left')
pyFCI.analytical_FCI(xs,350,1)
###Output
<ipython-input-7-207f19900e99>:1: RuntimeWarning: invalid value encountered in double_scalars
pyFCI.analytical_FCI(xs,350,1)
###Markdown
IntroductionAn ontology is a hierarchical arrangement of two types of nodes: (1) genes at the leaves of the hierarchy and (2) terms at intermediate levels of the hierarchy. The hierarchy can be thought of as a directed acyclic graph (DAG), in which each node can have multiple children or multiple parent nodes. DAGs are a generalization of trees (a.k.a. dendrograms), where each node has at most one parent. The DDOT Python library provides many functions for assembling, analyzing, and visualizing ontologies. The main functionalities are implemented in an object-oriented manner by an "Ontology" class. This class can handle both ontologies that are data-driven as well as those that are manually curated like the Gene Ontology.
###Code
# Import Ontology class from DDOT package
from ddot import Ontology
###Output
_____no_output_____
###Markdown
Creating an Ontology objectAn object of the Ontology class can be created in several ways. To demonstratethis, we will build the following ontology Through the \_\_init\_\_ constructor
###Code
# Connections from child terms to parent terms
hierarchy = [('S3', 'S1'),
('S4', 'S1'),
('S5', 'S1'),
('S5', 'S2'),
('S6', 'S2'),
('S1', 'S0'),
('S2', 'S0')]
# Connections from genes to terms
mapping = [('A', 'S3'),
('B', 'S3'),
('C', 'S3'),
('C', 'S4'),
('D', 'S4'),
('E', 'S5'),
('F', 'S5'),
('G', 'S6'),
('H', 'S6')]
# Construct ontology
ont = Ontology(hierarchy, mapping)
###Output
_____no_output_____
###Markdown
To and from a tab-delimited table or Pandas dataframe
###Code
ont.to_table('toy_ontology.txt')
ont = Ontology.from_table('toy_ontology.txt')
###Output
_____no_output_____
###Markdown
From the Network Data Exchange (NDEx). Requires creating a free user account at http://ndexbio.org/
###Code
# Replace with your own NDEx user account
ndex_server, ndex_user, ndex_pass = 'http://test.ndexbio.org', 'scratch', 'scratch'
# ndex_user, ndex_pass = 'ddot_test', 'ddot_test'
url, _ = ont.to_ndex(ndex_server=ndex_server, ndex_user=ndex_user, ndex_pass=ndex_pass)
print(url)
ont2 = Ontology.from_ndex(url)
print(ont2)
###Output
8 genes, 7 terms, 9 gene-term relations, 7 term-term relations
node_attributes: ['name', 'y_pos', 'Vis:Fill Color', 'Vis:Border Paint', 'x_pos', 'Label', 'Vis:Shape', 'NodeType', 'Size', 'Vis:Size', 'isRoot']
edge_attributes: ['EdgeType', 'Vis:Visible', 'Is_Tree_Edge', '2']
###Markdown
Inspecting the structure of an ontology An Ontology object contains seven attributes:* ``genes`` : List of gene names* ``terms`` : List of term names* ``gene_2_term`` : dictionary mapping a gene name to a list of terms connected to that gene. Terms are represented as their 0-based index in ``terms``.* ``term_2_gene`` : dictionary mapping a term name to a list of genes connected to that term. Genes are represented as their 0-based index in ``genes``.* ``child_2_parent`` : dictionary mapping a child term to its parent terms.* ``parent_2_child`` : dictionary mapping a parent term to its child terms.* ``term_sizes`` : A list of each term's size, i.e. the number of unique genes contained within this term and its descendants. The order of this list is the same as ``terms``. For every ``i``, it holds that ``term_sizes[i] = len(self.term_2_gene[self.terms[i]])``
###Code
ont.genes
ont.terms
ont.gene_2_term
ont.term_2_gene
ont.child_2_parent
ont.parent_2_child
###Output
_____no_output_____
###Markdown
Alternatively, the hierarchical connections can be viewed as a binary matrix, using `Ontology.connected()`
###Code
conn = ont.connected()
np.array(conn, dtype=np.int32)
###Output
_____no_output_____
###Markdown
A summary of an Ontology object, i.e. the number of genes, terms, and connections, can be printed with `print(ont)`
###Code
print(ont)
###Output
8 genes, 7 terms, 9 gene-term relations, 7 term-term relations
node_attributes: []
edge_attributes: [2]
###Markdown
Manipulating the structure of an ontology DDOT provides several convenience functions for processing Ontologies into a desirable structure. Currently, there are no functions for adding genes and terms. If this is needed, then we recommend creating a new Ontology or manipulating the contents in a different library, such as NetworkX or igraph, and transforming the results into an Ontology.
###Code
# Renaming genes and terms.
ont2 = ont.rename(genes={'A' : 'A_alias'}, terms={'S0':'S0_alias'})
ont2.to_table()
###Output
_____no_output_____
###Markdown
Delete S1 and G while preserving transitive connections
###Code
ont2 = ont.delete(to_delete=['S1', 'G'])
print(ont2)
###Output
7 genes, 6 terms, 8 gene-term relations, 6 term-term relations
node_attributes: []
edge_attributes: [2]
###Markdown
Delete S1 and G (don't preserve transitive connections)
###Code
ont2 = ont.delete(to_delete=['S1', 'G'], preserve_transitivity=False)
print(ont2)
###Output
7 genes, 6 terms, 8 gene-term relations, 3 term-term relations
node_attributes: []
edge_attributes: [2]
###Markdown
Propagate gene-term connections
###Code
ont2 = ont.propagate(direction='forward', gene_term=True, term_term=False)
print(ont2)
# Remove all transitive connections, and maintain only a parsimonious set of connections
ont3 = ont2.propagate(direction='reverse', gene_term=True, term_term=False)
###Output
8 genes, 7 terms, 27 gene-term relations, 7 term-term relations
node_attributes: []
edge_attributes: [2]
###Markdown
Propagate term-term connections
###Code
ont2 = ont.propagate(direction='forward', gene_term=False, term_term=True)
print(ont2)
# Remove all transitive connections, and maintain only a parsimonious set of connections
ont3 = ont2.propagate(direction='reverse', gene_term=False, term_term=True)
###Output
8 genes, 7 terms, 9 gene-term relations, 11 term-term relations
node_attributes: []
edge_attributes: [2]
###Markdown
Take the subbranch consisting of all term and genes under S1
###Code
ont2 = ont.focus(branches=['S1'])
print(ont2)
###Output
Genes and Terms to keep: 10
6 genes, 4 terms, 7 gene-term relations, 3 term-term relations
node_attributes: ['Original_Size']
edge_attributes: [2]
###Markdown
Inferring a data-driven ontologyAn ontology can also be inferred in a data-driven manner based on an input set of node-node similarities.
###Code
sim, genes = ont.flatten()
print(genes)
print(sim)
ont2 = Ontology.run_clixo(sim, 0.0, 1.0, square=True, square_names=genes)
print(ont2)
###Output
8 genes, 7 terms, 9 gene-term relations, 7 term-term relations
node_attributes: []
edge_attributes: ['CLIXO_score']
###Markdown
Ontology alignment
###Code
## Make a second ontology
# Connections from child terms to parent terms
hierarchy = [('T3', 'T1'),
('T4', 'T1'),
('T1', 'T0'),
('T5', 'T0')]
# Connections from genes to terms
mapping = [('A', 'T3'),
('B', 'T3'),
('C', 'T3'),
('D', 'T4'),
('E', 'T4'),
('F', 'T4'),
('G', 'T5'),
('H', 'T5')]
# Construct ontology
ont_B = Ontology(hierarchy, mapping)
ont.align(ont_B)
###Output
collapse command: /cellar/users/mikeyu/DeepTranslate/ddot/ddot/alignOntology/collapseRedundantNodes /tmp/tmp69tzhltw
collapse command: /cellar/users/mikeyu/DeepTranslate/ddot/ddot/alignOntology/collapseRedundantNodes /tmp/tmpq7jbs_ag
Alignment command: /cellar/users/mikeyu/DeepTranslate/ddot/ddot/alignOntology/calculateFDRs /tmp/tmpdleaetmk /tmp/tmpwvbp55c8 0.05 criss_cross /tmp/tmp8bvabn36 100 40 gene
###Markdown
Construct ontotypes
###Code
# Genotypes can be represented as tuples of mutated genes
genotypes = [('A', 'B'),
('A', 'E'),
('A', 'H'),
('B', 'E'),
('B', 'H'),
('C', 'F'),
('D', 'E'),
('D', 'H'),
('E', 'H'),
('G', 'H')]
ontotypes = ont.get_ontotype(genotypes)
print(ontotypes)
# Genotypes can also be represented as a genotype-by-gene matrix
import pandas as pd, numpy as np
genotypes_df = pd.DataFrame(np.zeros((len(genotypes), len(ont.genes)), np.float64),
index=['Genotype%s' % i for i in range(len(genotypes))],
columns=ont.genes)
for i, (g1, g2) in enumerate(genotypes):
genotypes_df.loc['Genotype%s' % i, g1] = 1.0
genotypes_df.loc['Genotype%s' % i, g2] = 1.0
print(genotypes_df)
ontotypes = ont.get_ontotype(genotypes_df, input_format='matrix')
print(ontotypes)
###Output
A B C D E F G H
Genotype0 1.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0
Genotype1 1.0 0.0 0.0 0.0 1.0 0.0 0.0 0.0
Genotype2 1.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0
Genotype3 0.0 1.0 0.0 0.0 1.0 0.0 0.0 0.0
Genotype4 0.0 1.0 0.0 0.0 0.0 0.0 0.0 1.0
Genotype5 0.0 0.0 1.0 0.0 0.0 1.0 0.0 0.0
Genotype6 0.0 0.0 0.0 1.0 1.0 0.0 0.0 0.0
Genotype7 0.0 0.0 0.0 1.0 0.0 0.0 0.0 1.0
Genotype8 0.0 0.0 0.0 0.0 1.0 0.0 0.0 1.0
Genotype9 0.0 0.0 0.0 0.0 0.0 0.0 1.0 1.0
S0 S1 S2 S3 S4 S5 S6
Genotype0 0.0 0.0 0.0 2.0 0.0 0.0 0.0
Genotype1 0.0 0.0 0.0 1.0 0.0 1.0 0.0
Genotype2 0.0 0.0 0.0 1.0 0.0 0.0 1.0
Genotype3 0.0 0.0 0.0 1.0 0.0 1.0 0.0
Genotype4 0.0 0.0 0.0 1.0 0.0 0.0 1.0
Genotype5 0.0 0.0 0.0 1.0 1.0 1.0 0.0
Genotype6 0.0 0.0 0.0 0.0 1.0 1.0 0.0
Genotype7 0.0 0.0 0.0 0.0 1.0 0.0 1.0
Genotype8 0.0 0.0 0.0 0.0 0.0 1.0 1.0
Genotype9 0.0 0.0 0.0 0.0 0.0 0.0 2.0
###Markdown
Conversions to NetworkX and igraph
###Code
G = ont.to_igraph()
print(G)
G = ont.to_networkx()
print(G.nodes())
print(G.edges())
###Output
['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'S0', 'S1', 'S2', 'S3', 'S4', 'S5', 'S6']
[('A', 'S3'), ('B', 'S3'), ('C', 'S3'), ('C', 'S4'), ('D', 'S4'), ('E', 'S5'), ('F', 'S5'), ('G', 'S6'), ('H', 'S6'), ('S1', 'S0'), ('S2', 'S0'), ('S3', 'S1'), ('S4', 'S1'), ('S5', 'S1'), ('S5', 'S2'), ('S6', 'S2')]
###Markdown
Visualization in HiView (http://hiview.ucsd.edu)
###Code
url, _ = ont.to_ndex(ndex_server=ndex_server, ndex_user=ndex_user, ndex_pass=ndex_pass, layout='bubble-collect')
print('Enter this into the "NDEx Server URL" field')
print(ndex_server)
print('Enter this into the "UUID of the main hierarchy" field at http://hiview.ucsd.edu:')
print(url.split('/')[-1])
###Output
Enter this into the "NDEx Server URL" field
http://test.ndexbio.org
Enter this into the "UUID of the main hierarchy" field at http://hiview.ucsd.edu:
23f542c5-3a0a-11e8-9da1-0660b7976219
###Markdown
funcX TutorialfuncX is a Function-as-a-Service (FaaS) platform for science that enables you to convert almost any computing resource into a high-performance function serving device. To do this, you deploy a funcX endpoint agent on the resource, which integrates it into the function serving fabric, allowing you to dynamically send, monitor, and receive results from function invocations. funcX is built on top of [Parsl](https://parsl-project.org), enabling a funcX endpoint to use large compute resources via traditional batch queues, where funcX will dynamically provision, use, and release resources on-demand to fulfill function requests. The function service fabric, which is run centrally as a service, is hosted in AWS. Here we provide an example of using funcX to register a function and run it on a publicly available tutorial endpoint. funcX ClientWe start by instantiating a funcX client as a programmatic means of communicating with the function service fabric. The client allows you to:- Register functions- Register containers and execution environments- Launch registered functions against endpoints- Check the status of launched functions- Retrieve outputs from functions AuthenticationInstantiating a client will force an authentication flow where you will be asked to authenticate with Globus Auth. Every interaction with funcX is authenticated to allow us to enforce access control on both functions and endpoints. As part of the authentication process we request access to your identity information (to retrieve your email address), Globus Groups management access, and Globus Search. We require Groups access in order to facilitate sharing. Globus Search allows funcX to add your functions to a searchable registry and make them discoverable to permitted users (as well as yourself!).
###Code
from funcx.sdk.client import FuncXClient
fxc = FuncXClient()
###Output
_____no_output_____
###Markdown
Next we define a Python function, which we will later register with funcX. This function simply sums its input. When defining a function you can specify \*args and \*\*kwargs as inputs. Note: any dependencies for a funcX function must be specified inside the function body.
###Code
def funcx_sum(items):
return sum(items)
###Output
_____no_output_____
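The dependency rule above can be illustrated with a small sketch (`funcx_mean` is a hypothetical example, not part of this tutorial's registered functions): any module the function needs is imported inside its body, so the serialized function can resolve it on a remote worker.

```python
# Hypothetical example: the import lives inside the function body, as funcX
# requires, so a remote worker can resolve the dependency after deserialization.
def funcx_mean(items):
    import statistics
    return statistics.mean(items)

print(funcx_mean([1, 2, 3, 4]))  # 2.5
```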
###Markdown
Registering a functionTo use a function with funcX, you must first register it with the service, using `register_function`. You can optionally include a description of the function. The registration process will serialize the function body and transmit it to the funcX function service fabric. Registering a function returns a UUID for the function, which can then be used to invoke it.
###Code
func_uuid = fxc.register_function(funcx_sum,
description="tutorial summation", public=True)
print(func_uuid)
###Output
_____no_output_____
###Markdown
Searching a functionYou can search previously registered functions to which you have access using `search_function`. The first parameter `q` is searched against all the fields, such as author, description, function name, and function source. You can navigate through pages of results with the `offset` and `limit` keyword args. The object returned is a simple wrapper around a list, so you can index into it, and it can also be pretty-printed as a table. To make use of the results, you can either use the `function_uuid` field returned for each result, or, for functions that were registered with recent versions of the service, you can load the source code using the search results object's `load_result` method.
###Code
search_results = fxc.search_function("tutorial", offset=0, limit=5)
print(search_results[0])
print(search_results)
search_results.load_result(0)
result_0_uuid = search_results[0]['function_uuid']
###Output
_____no_output_____
###Markdown
Running a functionTo invoke (perform) a function, you must provide the function's UUID, returned from the registration process, and an `endpoint_id`. Note: here we use the funcX public tutorial endpoint, which is running on AWS. The client's `run` function will serialize any \*args and \*\*kwargs and pass them to the function when invoking it. Therefore, as our example function simply takes a single arg (items), we can specify an input arg and it will be used by the function. Here we define a small list of integers for our function to sum. The web service will return the UUID for the invocation of the function, which we call a task. This UUID can be used to check the status of the task and retrieve the result.
###Code
endpoint_uuid = '4b116d3c-1703-4f8f-9f6f-39921e5864df' # Public tutorial endpoint
items = [1, 2, 3, 4, 5]
res = fxc.run(items, endpoint_id=endpoint_uuid, function_id=func_uuid)
print(res)
###Output
_____no_output_____
###Markdown
You can now retrieve the result of the invocation using `get_result()` on the UUID of the task. Note: We remove the task from our database once the result has been retrieved, so you can only retrieve the result once.
###Code
fxc.get_result(res)
###Output
_____no_output_____
###Markdown
Running batchesYou might want to invoke many function calls at once. This can be easily done via the batch interface:
###Code
def squared(x):
return x**2
squared_uuid = fxc.register_function(squared, searchable=False)
inputs = list(range(10))
batch = fxc.create_batch()
for x in inputs:
batch.add(x, endpoint_id=endpoint_uuid, function_id=squared_uuid)
batch_res = fxc.batch_run(batch)
fxc.get_batch_status(batch_res)
###Output
_____no_output_____
###Markdown
Catching exceptionsWhen functions fail, the exception is captured and re-raised when you try to get the result. In the following example, the 'deterministic failure' exception is raised when `fxc.get_result` is called on the failing function.
###Code
def failing():
raise Exception("deterministic failure")
failing_uuid = fxc.register_function(failing, searchable=False)
res = fxc.run(endpoint_id=endpoint_uuid, function_id=failing_uuid)
fxc.get_result(res)
###Output
_____no_output_____
###Markdown
pyFCI tutorialThis is a prototype of a library to perform **intrinsic dimension estimation using the local full correlation integral estimator** presented in our [paper](https://www.nature.com/articles/s41598-019-53549-9). InstallationClone the repository locally    git clone https://github.com/vittorioerba/pyFCI.git and install using pip    cd pyFCI    pip3 install . If you want to make modifications to the source code, install by symlinking    cd pyFCI    pip3 install -e . UsageWe recommend using numpy arrays as often as you can.
###Code
# imports
import pyFCI
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
###Output
_____no_output_____
###Markdown
Let's generate a simple dataset to play with.
###Code
N = 100;
d = 3;
dataset = np.random.rand(N,d)
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.scatter(dataset[:,0], dataset[:,1], dataset[:,2])
###Output
_____no_output_____
###Markdown
Global Intrinsic Dimension Estimation (IDE)First of all, we need to preprocess our dataset so that it has zero mean and all vectors are normalized.
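What this preprocessing does can be sketched as follows (a minimal re-implementation for illustration only; in practice use ``pyFCI.center_and_normalize``):

```python
import numpy as np

# Subtract the dataset mean, then rescale every point to unit Euclidean norm.
def center_and_normalize_sketch(X):
    Xc = X - X.mean(axis=0)
    return Xc / np.linalg.norm(Xc, axis=1, keepdims=True)

Y = center_and_normalize_sketch(np.random.rand(100, 3))
print(np.linalg.norm(Y, axis=1)[:3])  # every row now has norm 1
```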
###Code
processed_dataset = pyFCI.center_and_normalize(dataset)
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.scatter(processed_dataset[:,0], processed_dataset[:,1], processed_dataset[:,2])
###Output
_____no_output_____
###Markdown
Then, we proceed to compute the **full correlation integral** (FCI).
###Code
fci = pyFCI.FCI(processed_dataset)
fig = plt.figure()
ax = fig.add_subplot()
ax.plot(fci[:,0], fci[:,1])
ax.set_xlim([0,2])
ax.set_ylim([0,1])
###Output
_____no_output_____
###Markdown
Notice that if your dataset has $N$ points, the ``pyFCI.FCI()`` function will have to perform $\frac{N(N-1)}{2} \sim N^2$ operations to compute the FCI exactly. If your dataset is large, it's better to compute an approximation of the FCI using the ``pyFCI.FCI_MC()`` method; its second argument gives an upper bound on the number of operations allowed (500 is a sane default; anything above that will work practically as well as the exact FCI for all purposes). Let's compare the two methods. (**Attention:** the first run will call the numba JIT compiler and will take much longer!)
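The idea behind the Monte Carlo approximation can be sketched like this (a hypothetical re-implementation of the sampling trick, not pyFCI's code): instead of enumerating all $\frac{N(N-1)}{2}$ pairs, sample a bounded number of random pairs and estimate the fraction that lie within distance $r$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Estimate the fraction of point pairs closer than r by sampling random pairs.
def fci_at_r(X, r, n_samples=1000):
    i = rng.integers(0, len(X), n_samples)
    j = rng.integers(0, len(X), n_samples)
    keep = i != j                                  # discard accidental self-pairs
    d = np.linalg.norm(X[i[keep]] - X[j[keep]], axis=1)
    return float(np.mean(d < r))

X = rng.random((2000, 10))
print(fci_at_r(X, 1.2))  # Monte Carlo estimate of the FCI at r = 1.2
```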
###Code
N = 2000;
d = 10;
dataset = np.random.rand(N,d)
processed_dataset = pyFCI.center_and_normalize(dataset);
%time fci = pyFCI.FCI(processed_dataset)
%time fciMC = pyFCI.FCI_MC(processed_dataset, 1000)
fig = plt.figure()
ax = fig.add_subplot()
ax.plot(fci[:,0], fci[:,1], label="exact")
ax.plot(fciMC[:,0], fciMC[:,1], label="approx $10^3$ samples")
ax.legend(loc='upper left')
ax.set_xlim([0,2])
ax.set_ylim([0,1])
###Output
_____no_output_____
###Markdown
Now that we have the FCI, we are ready to compute the ID of the dataset. For a first check, one can use the ``pyFCI.analytical_FCI()`` function (notice that we need to use $d-1$, as normalizing the dataset removes one degree of freedom):
###Code
fig = plt.figure()
ax = fig.add_subplot()
ax.plot(fci[:,0], fci[:,1], label="empirical exact")
ax.plot(fciMC[:,0], fciMC[:,1], label="empirical approx $10^3$ samples")
xs = np.linspace(0,2,100)
ys = pyFCI.analytical_FCI(xs,d-1,1)
ax.plot(xs, ys, label="analytical")
ax.set_xlim([0,2])
ax.set_ylim([0,1])
ax.legend(loc='upper left')
###Output
_____no_output_____
###Markdown
To actually fit the function and recover $d$, we use ``pyFCI.fit_FCI()``.
###Code
fit_exact = pyFCI.fit_FCI(fci)
fit_MC = pyFCI.fit_FCI(fciMC)
print("ID estimated with exact FCI: ", fit_exact[0])
print("ID estimated with approximate FCI: ", fit_MC[0])
###Output
ID estimated with exact FCI: 10.153064067014695
ID estimated with approximate FCI: 10.619394691123722
###Markdown
Local Intrinsic Dimension Estimation (IDE)To estimate the local ID, you need to specify a local patch of your dataset. This is done by selecting a single point in the dataset and specifying the number of nearest neighbours that define larger and larger neighbourhoods.
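Building the nested neighbourhoods can be sketched like this (an illustration of the idea, not pyFCI's implementation): sort all points by distance to the chosen center and keep the first $k$.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((200, 10))

center = 0                                     # index of the chosen point
dists = np.linalg.norm(X - X[center], axis=1)  # distance of every point to it
order = np.argsort(dists)                      # order[0] is the center itself
for k in (5, 10, 15):
    patch = X[order[:k + 1]]                   # the center plus its k nearest neighbours
    print(k, patch.shape)
```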
###Code
center = np.random.randint(len(dataset))
ks = np.array([5*i for i in range(1,11)])
localFCI = pyFCI.local_FCI(dataset,center,ks)
print(" ks |Max dist|loc ID| x0| MSE")
with np.printoptions(precision=3, suppress=True):
print(localFCI)
###Output
ks |Max dist|loc ID| x0| MSE
[[ 5. 0.657 22.92 1.2 0.054]
[10. 0.735 10.939 1.048 0.03 ]
[15. 0.793 7.238 1.034 0.02 ]
[20. 0.812 10.555 1.033 0.036]
[25. 0.836 10.308 1.009 0.018]
[30. 0.854 8.93 0.994 0.022]
[35. 0.862 10.111 1.012 0.01 ]
[40. 0.878 11.345 1.016 0.015]
[45. 0.889 9.565 0.991 0.009]
[50. 0.895 9.33 1.017 0.012]]
###Markdown
Now you can repeat for as many local centers as you like:
###Code
Ncenters = 30
centers = np.random.randint(len(dataset),size=Ncenters)
localFCI_multiple = np.empty(shape=(0,len(ks),5))
for i in range(Ncenters):
localFCI = pyFCI.local_FCI(dataset, centers[i], ks)
localFCI_multiple = np.append( localFCI_multiple, [localFCI], axis=0 )
###Output
_____no_output_____
###Markdown
and you can reproduce the persistence plot shown in our [paper](https://www.nature.com/articles/s41598-019-53549-9)
###Code
fig = plt.figure()
ax = fig.add_subplot()
for i in range(Ncenters):
ax.plot(localFCI_multiple[i,:,0],localFCI_multiple[i,:,2])
xs = np.linspace(0,50,2)
ax.plot(xs,[10 for x in xs],color="black")
ax.set_ylim([0,20])
###Output
_____no_output_____
###Markdown
Introduction: DDOT tutorial* __What is DDOT?__ The DDOT Python package provides many functions for assembling, analyzing, and visualizing ontologies. The main functionalities are implemented in an object-oriented manner by an "Ontology" class, which handles ontologies that are data-driven as well as those that are manually curated like the Gene Ontology.* __What is an ontology?__ An ontology is a hierarchical arrangement of two types of nodes: (1) genes at the leaves of the hierarchy and (2) terms at intermediate levels of the hierarchy. The hierarchy can be thought of as a directed acyclic graph (DAG), in which each node can have multiple children or multiple parent nodes. DAGs are a generalization of trees (a.k.a. dendrograms), where each node has at most one parent.* __What to do after reading this tutorial__ Check out a complete list of functions in the [Ontology class](http://ddot.readthedocs.io/en/latest/ontology.html) and a list of [utility functions](http://ddot.readthedocs.io/en/latest/utils.html) that may help you build more concise pipelines. Also check out [example Jupyter notebooks](https://github.com/michaelkyu/ddot/tree/master/examples) that contain pipelines for downloading and processing the Gene Ontology and for inferring data-driven gene ontologies of diseases.
###Code
# Import Ontology class from DDOT package
import ddot
from ddot import Ontology
import numpy as np
###Output
_____no_output_____
###Markdown
Creating an Ontology object* An object of the Ontology class can be created in several ways.* In this tutorial, we will construct and analyze the toy ontology shown below. Create an ontology through the \_\_init\_\_ constructor
###Code
# Connections from child terms to parent terms
hierarchy = [('S3', 'S1'),
('S4', 'S1'),
('S5', 'S1'),
('S5', 'S2'),
('S6', 'S2'),
('S1', 'S0'),
('S2', 'S0')]
# Connections from genes to terms
mapping = [('A', 'S3'),
('B', 'S3'),
('C', 'S3'),
('C', 'S4'),
('D', 'S4'),
('E', 'S5'),
('F', 'S5'),
('G', 'S6'),
('H', 'S6')]
# Construct ontology
ont = Ontology(hierarchy, mapping)
# Prints a summary of the ontology's structure
print(ont)
###Output
8 genes, 7 terms, 9 gene-term relations, 7 term-term relations
node_attributes: []
edge_attributes: []
###Markdown
Create an ontology from a tab-delimited table or Pandas dataframe
###Code
# Write ontology to a tab-delimited table
ont.to_table('toy_ontology.txt')
# Reconstruct the ontology from the table
ont2 = Ontology.from_table('toy_ontology.txt')
ont2
###Output
_____no_output_____
###Markdown
From the Network Data Exchange (NDEx).* It is strongly recommended that you create a free account on NDEx in order to keep track of your own ontologies.* Note that there are two NDEx servers: the main server at http://public.ndexbio.org/ and a test server for prototyping your code at http://test.ndexbio.org (also aliased as http://dev2.ndexbio.org). Each server requires a separate user account. Because the main server contains networks from publications, we recommend that you use an account on the test server while you become familiar with DDOT.
###Code
# Note: change the server to http://public.ndexbio.org, if this is where you created your NDEx account
ndex_server = 'http://test.ndexbio.org'
# Set the NDEx server and the user account (replace with your own account)
ndex_user, ndex_pass = '<enter your username>', '<enter your account password>'
# Upload ontology to NDEx. The string after "v2/network/" is a unique identifier, which is called the UUID, of the ontology in NDEx
url, _ = ont.to_ndex(ndex_server=ndex_server, ndex_user=ndex_user, ndex_pass=ndex_pass)
print(url)
# Download the ontology from NDEx
ont2 = Ontology.from_ndex(url)
print(ont2)
###Output
8 genes, 7 terms, 9 gene-term relations, 7 term-term relations
node_attributes: ['Vis:Border Paint', 'Vis:Shape', 'name', 'Vis:Fill Color']
edge_attributes: ['Vis:Visible']
###Markdown
Inspecting the structure of an ontology An Ontology object contains seven attributes:* ``genes`` : List of gene names* ``terms`` : List of term names* ``gene_2_term`` : dictionary mapping a gene name to a list of terms connected to that gene. Terms are represented as their 0-based index in ``terms``.* ``term_2_gene`` : dictionary mapping a term name to a list of genes connected to that term. Genes are represented as their 0-based index in ``genes``.* ``child_2_parent`` : dictionary mapping a child term to its parent terms.* ``parent_2_child`` : dictionary mapping a parent term to its children terms.* ``term_sizes`` : A list of each term's size, i.e. the number of unique genes contained within this term and its descendants. The order of this list is the same as ``terms``. For every ``i``, it holds that ``term_sizes[i] = len(self.term_2_gene[self.terms[i]])``
###Code
ont.genes
ont.terms
ont.gene_2_term
ont.term_2_gene
ont.child_2_parent
ont.parent_2_child
###Output
_____no_output_____
###Markdown
Alternatively, the hierarchical connections can be viewed as a binary matrix, using `Ontology.connected()`
###Code
conn = ont.connected()
np.array(conn, dtype=np.int32)
###Output
_____no_output_____
###Markdown
A summary of an Ontology object, i.e. the number of genes, terms, and connections, can be printed with `print(ont)`.
###Code
print(ont)
###Output
8 genes, 7 terms, 9 gene-term relations, 7 term-term relations
node_attributes: []
edge_attributes: []
###Markdown
Manipulating the structure of an ontology DDOT provides several convenience functions for processing Ontologies into a desirable structure. Currently, there are no functions for adding genes and terms. If this is needed, then we recommend creating a new Ontology, or manipulating the contents in a different library, such as NetworkX or igraph, and transforming the results into an Ontology. Renaming nodes
###Code
# Renaming genes and terms.
ont2 = ont.rename(genes={'A' : 'A_alias'}, terms={'S0':'S0_alias'})
ont2.to_table()
###Output
_____no_output_____
###Markdown
Delete S1 and G while preserving transitive connections
###Code
ont2 = ont.delete(to_delete=['S1', 'G'])
print(ont2)
###Output
7 genes, 6 terms, 8 gene-term relations, 6 term-term relations
node_attributes: []
edge_attributes: []
###Markdown
Delete S1 and G (don't preserve transitive connections)
###Code
ont2 = ont.delete(to_delete=['S1', 'G'], preserve_transitivity=False)
print(ont2)
###Output
7 genes, 6 terms, 8 gene-term relations, 3 term-term relations
node_attributes: []
edge_attributes: []
###Markdown
Propagate gene-term connections* Oftentimes it is convenient to explicitly include all transitive connections in the hierarchy. That is, if a hierarchy has edges A-->B and B-->C, then the hierarchy also has A-->C. This can be done by calling `Ontology.propagate(direction='forward')`.* On the other hand, all transitive connections can be removed with `Ontology.propagate(direction='reverse')`, which yields a parsimonious set of connections.
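The forward direction amounts to a transitive closure of the edge set. As a plain-Python sketch on the toy hierarchy's term-term edges (an illustration only, not the DDOT implementation):

```python
# Transitive closure: while edges (a, b) and (b, c) exist without (a, c),
# add (a, c). The toy hierarchy's 7 edges gain 4 transitive ones.
edges = {('S3', 'S1'), ('S4', 'S1'), ('S5', 'S1'), ('S5', 'S2'),
         ('S6', 'S2'), ('S1', 'S0'), ('S2', 'S0')}

def transitive_closure(edges):
    closure = set(edges)
    while True:
        new = {(a, d) for a, b in closure for c, d in closure if b == c}
        if new <= closure:
            return closure
        closure |= new

closed = transitive_closure(edges)
print(len(closed))  # 11
```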
###Code
# Include all transitive connections between genes and terms
ont2 = ont.propagate(direction='forward', gene_term=True, term_term=False)
print(ont2)
# Remove all transitive connections between genes and terms, retaining only a parsimonious set of connections
ont3 = ont2.propagate(direction='reverse', gene_term=True, term_term=False)
print(ont3)
###Output
8 genes, 7 terms, 27 gene-term relations, 7 term-term relations
node_attributes: []
edge_attributes: []
8 genes, 7 terms, 9 gene-term relations, 7 term-term relations
node_attributes: []
edge_attributes: []
###Markdown
Propagate term-term connections
###Code
# Include all transitive connections between terms
ont2 = ont.propagate(direction='forward', gene_term=False, term_term=True)
print(ont2)
# Remove all transitive connections between terms, retaining only a parsimonious set of connections
ont3 = ont2.propagate(direction='reverse', gene_term=False, term_term=True)
print(ont3)
###Output
8 genes, 7 terms, 9 gene-term relations, 11 term-term relations
node_attributes: []
edge_attributes: []
8 genes, 7 terms, 9 gene-term relations, 7 term-term relations
node_attributes: []
edge_attributes: []
###Markdown
Take the subbranch consisting of all terms and genes under S1
###Code
ont2 = ont.focus(branches=['S1'])
print(ont2)
###Output
Genes and Terms to keep: 10
6 genes, 4 terms, 7 gene-term relations, 3 term-term relations
node_attributes: []
edge_attributes: []
###Markdown
Inferring a data-driven ontology* Given a set of genes and a gene similarity network, we can hierarchically cluster the genes to infer cellular subsystems using the CLIXO algorithm. The resulting hierarchy of subsystems defines a "data-driven gene ontology". For more information about the CLIXO algorithm, see Kramer et al. Bioinformatics, 30(12), pp.i34-i42, 2014.* Conversely, we can also "flatten" the ontology structure to infer a gene-by-gene similarity network. In particular, the similarity between two genes is calculated as the size of the smallest common subsystem, known as "Resnik semantic similarity".* The CLIXO algorithm has been designed to reconstruct the original hierarchy from the Resnik score.
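The flattening step can be sketched directly on the toy ontology (hand-written propagated term memberships for illustration, not the DDOT code): the similarity of two genes is the size of the smallest term whose gene set contains both.

```python
# Propagated gene memberships of each toy term (term -> set of genes it contains).
term_genes = {
    'S3': {'A', 'B', 'C'}, 'S4': {'C', 'D'}, 'S5': {'E', 'F'}, 'S6': {'G', 'H'},
    'S1': {'A', 'B', 'C', 'D', 'E', 'F'}, 'S2': {'E', 'F', 'G', 'H'},
    'S0': {'A', 'B', 'C', 'D', 'E', 'F', 'G', 'H'},
}

def smallest_common_subsystem(g1, g2):
    """Size of the smallest term whose gene set contains both genes."""
    return min(len(gs) for gs in term_genes.values() if {g1, g2} <= gs)

print(smallest_common_subsystem('A', 'B'))  # 3: S3 is the smallest shared term
print(smallest_common_subsystem('A', 'H'))  # 8: only the root S0 contains both
```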
###Code
# Flatten ontology to gene-by-gene network
sim, genes = ont.flatten()
print('Similarity matrix')
print(np.round(sim, 2))
print('Row/column names of similarity matrix')
print(genes)
# Reconstruct the ontology using the CLIXO algorithm.
# In general, you may feed any kind of gene-gene similarities,
# e.g. measurements of protein-protein interactions, gene co-expression, or genetic interactions.
ont2 = Ontology.run_clixo(sim, alpha=0.0, beta=1.0, square=True, square_names=genes)
print(ont2)
ont2.to_table(edge_attr=True)
###Output
_____no_output_____
###Markdown
Ontology alignment* The structures of two ontologies can be compared through a procedure known as ontology alignment. Ontology.align() implements the ontology alignment described in (Dutkowski et al. Nature biotechnology, 31(1), 2013), in which terms are matched if they contain similar sets of genes and if their parents and children terms are also similar.* Ontology alignment is particularly useful for annotating a data-driven gene ontology by aligning it to a curated ontology such as the Gene Ontology (GO). For instance, if a data-driven term is identified to have a similar set of genes as the GO term for DNA repair, then the data-driven subsystem can be annotated as being involved in DNA repair. Moreover, data-driven terms with no matches in the ontology alignment may represent new molecular mechanisms.
###Code
## Make a second ontology (the ontology to the right in the above diagram)
# Connections from child terms to parent terms
hierarchy = [('T3', 'T1'),
('T4', 'T1'),
('T1', 'T0'),
('T5', 'T0')]
# Connections from genes to terms
mapping = [('A', 'T3'),
('B', 'T3'),
('C', 'T3'),
('D', 'T4'),
('E', 'T4'),
('F', 'T4'),
('G', 'T5'),
('H', 'T5')]
# Construct ontology
ont_B = Ontology(hierarchy, mapping)
ont.align(ont_B)
###Output
collapse command: /cellar/users/mikeyu/anaconda2/envs/ddot_py36/lib/python3.6/site-packages/ddot/alignOntology/collapseRedundantNodes /tmp/tmpwp1dge56
collapse command: /cellar/users/mikeyu/anaconda2/envs/ddot_py36/lib/python3.6/site-packages/ddot/alignOntology/collapseRedundantNodes /tmp/tmp1lc3e9yo
Alignment command: /cellar/users/mikeyu/anaconda2/envs/ddot_py36/lib/python3.6/site-packages/ddot/alignOntology/calculateFDRs /tmp/tmp9eqtgf9r /tmp/tmpwsvypmdo 0.05 criss_cross /tmp/tmp39uq4flt 100 40 gene
###Markdown
Construct ontotypes* A major goal of genetics is to understand how genotype translates to phenotype. An ontology represents the biological structure through which this genotype-phenotype translation happens.* Given a set of mutations comprising a genotype, DDOT allows you to propagate the impact of these mutations to the subsystems containing these genes in the ontology. In particular, the impact on a subsystem is estimated by the number of its genes that have been mutated. These subsystem activities, which we have called an "ontotype", enable more accurate and interpretable predictions of phenotype from genotype (Yu et al. Cell Systems, 2(2), pp.77-88, 2016).
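The counting behind a single ontotype row can be sketched in plain Python (a simplified illustration using hand-written direct gene-term annotations of the toy ontology, not the DDOT API):

```python
# Direct gene-to-term annotations of the toy ontology.
gene_2_terms = {'A': ['S3'], 'B': ['S3'], 'C': ['S3', 'S4'], 'D': ['S4'],
                'E': ['S5'], 'F': ['S5'], 'G': ['S6'], 'H': ['S6']}
terms = ['S0', 'S1', 'S2', 'S3', 'S4', 'S5', 'S6']

def ontotype_row(genotype):
    """Count, per term, how many of the genotype's mutated genes map to it."""
    row = dict.fromkeys(terms, 0)
    for gene in genotype:
        for term in gene_2_terms[gene]:
            row[term] += 1
    return row

print(ontotype_row(('A', 'B')))  # S3 receives both mutations
```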
###Code
# Genotypes can be represented as tuples of mutated genes
genotypes = [('A', 'B'),
('A', 'E'),
('A', 'H'),
('B', 'E'),
('B', 'H'),
('C', 'F'),
('D', 'E'),
('D', 'H'),
('E', 'H'),
('G', 'H')]
# Calculate the ontotypes, represented as a genotype-by-term matrix. Each value represents the functional impact on a term in a genotype.
ontotypes = ont.get_ontotype(genotypes)
print(ontotypes)
# Genotypes can also be represented as a genotype-by-gene matrix as an alternative input format
import pandas as pd, numpy as np
genotypes_df = pd.DataFrame(np.zeros((len(genotypes), len(ont.genes)), np.float64),
index=['Genotype%s' % i for i in range(len(genotypes))],
columns=ont.genes)
for i, (g1, g2) in enumerate(genotypes):
genotypes_df.loc['Genotype%s' % i, g1] = 1.0
genotypes_df.loc['Genotype%s' % i, g2] = 1.0
print('Genotype matrix')
print(genotypes_df)
print("")
ontotypes = ont.get_ontotype(genotypes_df, input_format='matrix')
print('Ontotype matrix')
print(ontotypes)
###Output
Genotype matrix
A B C D E F G H
Genotype0 1.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0
Genotype1 1.0 0.0 0.0 0.0 1.0 0.0 0.0 0.0
Genotype2 1.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0
Genotype3 0.0 1.0 0.0 0.0 1.0 0.0 0.0 0.0
Genotype4 0.0 1.0 0.0 0.0 0.0 0.0 0.0 1.0
Genotype5 0.0 0.0 1.0 0.0 0.0 1.0 0.0 0.0
Genotype6 0.0 0.0 0.0 1.0 1.0 0.0 0.0 0.0
Genotype7 0.0 0.0 0.0 1.0 0.0 0.0 0.0 1.0
Genotype8 0.0 0.0 0.0 0.0 1.0 0.0 0.0 1.0
Genotype9 0.0 0.0 0.0 0.0 0.0 0.0 1.0 1.0
Ontotype matrix
S0 S1 S2 S3 S4 S5 S6
Genotype0 0.0 0.0 0.0 2.0 0.0 0.0 0.0
Genotype1 0.0 0.0 0.0 1.0 0.0 1.0 0.0
Genotype2 0.0 0.0 0.0 1.0 0.0 0.0 1.0
Genotype3 0.0 0.0 0.0 1.0 0.0 1.0 0.0
Genotype4 0.0 0.0 0.0 1.0 0.0 0.0 1.0
Genotype5 0.0 0.0 0.0 1.0 1.0 1.0 0.0
Genotype6 0.0 0.0 0.0 0.0 1.0 1.0 0.0
Genotype7 0.0 0.0 0.0 0.0 1.0 0.0 1.0
Genotype8 0.0 0.0 0.0 0.0 0.0 1.0 1.0
Genotype9 0.0 0.0 0.0 0.0 0.0 0.0 2.0
###Markdown
Conversions to NetworkX and igraph
###Code
# Convert to an igraph object
G = ont.to_igraph()
print(G)
# Reconstruct the Ontology object from the igraph object
Ontology.from_igraph(G)
# Convert to a NetworkX object
G = ont.to_networkx()
print(G.nodes())
print(G.edges())
# Reconstruct the Ontology object from the NetworkX object
tmp = Ontology.from_networkx(G)
print(tmp)
###Output
8 genes, 7 terms, 9 gene-term relations, 7 term-term relations
node_attributes: []
edge_attributes: []
###Markdown
Ontology visualization using HiView (http://hiview.ucsd.edu)* HiView is a web application for general visualization of the hierarchical structure in ontologies.* To use HiView, you must first upload your ontology into NDEx using the [Ontology.to_ndex()](http://ddot.readthedocs.io/en/latest/ontology.html#ddot.Ontology.to_ndex) function, and then input the NDEx URL for the ontology to HiView.* In contrast to almost all other hierarchical visualization tools, which are limited to simple tree structures, HiView also supports more complicated hierarchies in the form of directed acyclic graphs, in which nodes may have multiple parents. A simple upload to NDEx and visualization in HiView* Upload ontologies to NDEx using the `Ontology.to_ndex()` function.* Setting the parameter `layout="bubble"` (default value) will identify a spanning tree of the DAG and then lay out this tree in a space-compact manner. When viewing in HiView, only the edges in the spanning tree are shown initially, while the remaining edges can be toggled on.
###Code
url, _ = ont.to_ndex(ndex_server=ndex_server,
ndex_user=ndex_user,
ndex_pass=ndex_pass,
layout='bubble')
print('To visualize in HiView, go to http://hiview.ucsd.edu in your web browser, and then')
print('\t--Enter this into the "NDEx Server URL" field: %s' % ddot.parse_ndex_server(url))
print('\t--Enter this into the "UUID of the main hierarchy" field: %s' % ddot.parse_ndex_uuid(url))
print('Alternatively, go to %s' % ddot.to_hiview_url(url))
###Output
To visualize in HiView, go to http://hiview.ucsd.edu in your web browser, and then
--Enter this into the "NDEx Server URL" field: http://dev2.ndexbio.org/
--Enter this into the "UUID of the main hierarchy" field: 29c16a02-fa71-11e8-ad43-0660b7976219
Alternatively, go to http://hiview.ucsd.edu/29c16a02-fa71-11e8-ad43-0660b7976219?type=test&server=http://dev2.ndexbio.org
###Markdown
An alternative layout by duplicating nodes* Setting the parameter `layout="bubble-collect"` will convert the DAG into a tree by duplicating nodes.* This transformation enables the ontology structure to be visualized without edges crossing.
###Code
url, _ = ont.to_ndex(ndex_server=ndex_server,
ndex_user=ndex_user,
ndex_pass=ndex_pass,
layout='bubble-collect')
print('To visualize in HiView, go to http://hiview.ucsd.edu in your web browser, and then')
print('\t--Enter this into the "NDEx Server URL" field: %s' % ddot.parse_ndex_server(url))
print('\t--Enter this into the "UUID of the main hierarchy" field: %s' % ddot.parse_ndex_uuid(url))
print('Alternatively, go to %s' % ddot.to_hiview_url(url))
###Output
To visualize in HiView, go to http://hiview.ucsd.edu in your web browser, and then
--Enter this into the "NDEx Server URL" field: http://dev2.ndexbio.org/
--Enter this into the "UUID of the main hierarchy" field: 29ec98b4-fa71-11e8-ad43-0660b7976219
Alternatively, go to http://hiview.ucsd.edu/29ec98b4-fa71-11e8-ad43-0660b7976219?type=test&server=http://dev2.ndexbio.org
###Markdown
Visualizing metadata by modifying node labels, colors, and sizes* An Ontology object has a `node_attr` field that is a pandas DataFrame. The rows of the dataframe are genes or terms, and the columns are node attributes.* HiView understands special node attributes to control the node labels, colors, and sizes.
###Code
# Set the node labels (default is the gene and term names, as found in Ontology.genes and Ontology.terms)
ont.node_attr.loc['S4', 'Label'] = 'S4 alias'
ont.node_attr.loc['S5', 'Label'] = 'S5 alias'
# Set the fill color of nodes
ont.node_attr.loc['C', 'Vis:Fill Color'] = '#7fc97f'
ont.node_attr.loc['S1', 'Vis:Fill Color'] = '#beaed4'
ont.node_attr.loc['S0', 'Vis:Fill Color'] = '#fdc086'
# Set the node sizes (if not set, the default is the term size, as found in Ontology.term_sizes)
ont.node_attr.loc['C', 'Size'] = 10
ont.node_attr
url, _ = ont.to_ndex(ndex_server=ndex_server,
ndex_user=ndex_user,
ndex_pass=ndex_pass,
layout='bubble-collect')
print('To visualize in HiView, go to http://hiview.ucsd.edu in your web browser, and then')
print('\t--Enter this into the "NDEx Server URL" field: %s' % ddot.parse_ndex_server(url))
print('\t--Enter this into the "UUID of the main hierarchy" field: %s' % ddot.parse_ndex_uuid(url))
print('Alternatively, go to %s' % ddot.to_hiview_url(url))
# Clear node attributes (optional)
ont.clear_node_attr()
ont.node_attr
###Output
_____no_output_____
###Markdown
Visualize gene-gene interaction networks alongside the ontology* Every term in an ontology represents a biological function shared among the term's genes. Based on this intuition, those genes should be interacting in different ways, e.g. protein-protein interactions, RNA expression, or genetic interactions.* Gene-gene interaction networks can be uploaded with the ontology to NDEx, so that they can be visualized at the same time in HiView
###Code
# Calculate a gene-by-gene similarity matrix using the Resnik semantic similarity definition (see section "Inferring a data-driven ontology")
sim, genes = ont.flatten()
print(genes)
print(np.round(sim, 2))
# Convert the gene-by-gene similarity matrix into a dataframe with a "long" format, where rows represent gene pairs. This conversion can be easily done with ddot.melt_square()
import pandas as pd
sim_df = pd.DataFrame(sim, index=genes, columns=genes)
sim_long = ddot.melt_square(sim_df)
sim_long.head()
# Create other gene-gene interactions.
# For example, these can represent protein-protein interactions or gene co-expression.
# Here, we simulate types of interactions by adding a random noise to the Resnik similarity
sim_long['example_interaction_type1'] = sim_long['similarity'] + np.random.random(sim_long.shape[0]) / 2.
sim_long['example_interaction_type2'] = sim_long['similarity'] + np.random.random(sim_long.shape[0]) / 2.
sim_long.head()
# Include the above gene-gene interactions by setting the `network` and `main_feature` parameters.
url, _ = ont.to_ndex(name="Toy Ontology",
ndex_server=ndex_server,
ndex_user=ndex_user,
ndex_pass=ndex_pass,
network=sim_long,
main_feature='similarity',
layout='bubble-collect')
print('To visualize in HiView, go to http://hiview.ucsd.edu in your web browser, and then')
print('\t--Enter this into the "NDEx Server URL" field: %s' % ddot.parse_ndex_server(url))
print('\t--Enter this into the "UUID of the main hierarchy" field: %s' % ddot.parse_ndex_uuid(url))
print('Alternatively, go to %s' % ddot.to_hiview_url(url))
###Output
To visualize in HiView, go to http://hiview.ucsd.edu in your web browser, and then
    --Enter this into the "NDEx Server URL" field: http://dev2.ndexbio.org/
--Enter this into the "UUID of the main hierarchy" field: 2b0a3dc6-fa71-11e8-ad43-0660b7976219
Alternatively, go to http://hiview.ucsd.edu/2b0a3dc6-fa71-11e8-ad43-0660b7976219?type=test&server=http://dev2.ndexbio.org
###Markdown
Using **pytwanalysis** - (**TwitterAnalysis**) + [PIP Package](https://pypi.org/project/pytwanalysis/)+ [Documentation](https://lianogueira.github.io/pytwanalysis-documentation/) Initialize package
###Code
import pytwanalysis as ta
###Output
_____no_output_____
###Markdown
Set your mongoDB connection
###Code
from pymongo import MongoClient
#db connection
mongoDBConnectionSTR = "mongodb://localhost:27017"
client = MongoClient(mongoDBConnectionSTR)
db = client.twitter_DB_API_test1 #choose your DB name here
###Output
_____no_output_____
###Markdown
Set up the folder path where you want to save all of the output files
###Code
BASE_PATH = 'D:\\Data\\MyFiles3'
###Output
_____no_output_____
###Markdown
Initialize your twitterAnalysis object
###Code
myAnalysis = ta.TwitterAnalysis(BASE_PATH, db)
###Output
_____no_output_____
###Markdown
Import data from json files into the mongoDB database
###Code
# This is the folder path where all of your twitter json files should be
JSON_FILES_PATH = 'D:\\Data\\tests\\my_json_files'
# Load json files into mongoDB
myAnalysis.loadDocFromFile(JSON_FILES_PATH)
###Output
_____no_output_____
###Markdown
Request data from Twitter's 7-day Search API-API endpoint: https://api.twitter.com/1.1/search/tweets.json-[Twitter Search API documentation](https://developer.twitter.com/en/docs/twitter-api/v1/tweets/search/overview)
###Code
# your authentication keys here - (you can retrieve these from your Twitter developer account)
consumer_key = '[your consumer_key]'
consumer_secret = '[your consumer_secret]'
access_token = '[your access_token]'
access_token_secret = '[your access_token_secret]'
query='term1 OR term2 OR love'
# send the request to Twitter and save data into MongoDB
response = myAnalysis.search7dayapi(consumer_key, consumer_secret, access_token, access_token_secret, query, result_type= 'mixed', max_count='100', lang='en')
###Output
_____no_output_____
###Markdown
Request data from Twitter's Premium Search API* 30-day API endpoint: https://api.twitter.com/1.1/tweets/search/30day/* Full-archive API endpoint: https://api.twitter.com/1.1/tweets/search/fullarchive/-[Twitter Search API documentation](https://developer.twitter.com/en/docs/twitter-api/v1/tweets/search/overview)
###Code
# options are "30day" or fullarchive
api_name = "fullarchive"
# the name of your dev environment - (The one associated with your Twitter developer account)
dev_environment = "FullArchDev.json"
# your query
query = "(term1 OR term2 OR term3) lang:en"
# start and end date
date_start = "202002150000"
date_end = "202002160000"
# twitter bearer authentication - (this can be generated from your authentication keys)
twitter_bearer = '[your bearer token]'
# send the request to Twitter and save data into MongoDB
response, next_token = myAnalysis.searchPremiumAPI(twitter_bearer, api_name, dev_environment, query, date_start, date_end, next_token=None, max_count='100')
print (next_token)
###Output
eyJtYXhJZCI6MTIyODgzMTI5ODU4OTk4NjgxNn0=
###Markdown
Create database collections that will be used to analyse the data *Depending on the size of your data, this could take a while...*
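The `step` parameter used below controls batch size: loading tweets in fixed-size batches bounds peak memory at roughly `step` documents, at the cost of more database round trips. A minimal sketch of that idea (this helper is hypothetical, not part of pytwanalysis):

```python
def iter_batches(docs, step):
    """Yield successive lists of at most `step` items from `docs`."""
    batch = []
    for doc in docs:
        batch.append(doc)
        if len(batch) == step:
            yield batch       # process this batch, then free it
            batch = []
    if batch:                 # final partial batch
        yield batch

# seven documents processed in batches of three
batches = list(iter_batches(range(7), 3))  # -> [[0, 1, 2], [3, 4, 5], [6]]
```

A small `step` lowers memory pressure but increases the number of passes; a large `step` does the opposite.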
###Code
# You can set the number of tweets to load at a time.
# (Large number may cause out of memory errors, low number may take a long time to run)
step = 50000
# Build collections
myAnalysis.build_db_collections(step)
###Output
_____no_output_____
###Markdown
Export edges from MongoDB This step will create edge files that will be used for graph analysis
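As a rough illustration of what a hashtag-connection (`ht_conn`) edge is: hashtags that appear together in one tweet can be paired into undirected edges. This sketch is hypothetical and is not pytwanalysis internals:

```python
from itertools import combinations

def hashtag_edges(tweets_hashtags):
    """Build (ht1, ht2) edge pairs from hashtags co-occurring in one tweet."""
    edges = []
    for tags in tweets_hashtags:
        # sort so each undirected edge has a canonical orientation
        for a, b in combinations(sorted(set(tags)), 2):
            edges.append((a, b))
    return edges

edges = hashtag_edges([["ai", "ml", "data"], ["ml", "ai"]])
# -> [('ai', 'data'), ('ai', 'ml'), ('data', 'ml'), ('ai', 'ml')]
```

Repeated pairs such as `('ai', 'ml')` would typically be aggregated into edge weights before graph analysis.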
###Code
# Set up the periods you want to analyze
# Set period_arr to None if you don't want to analyze separate periods
# Format: Period Name, Period Start Date, Period End Date
period_arr = [['P1', '10/08/2017 00:00:00', '10/15/2017 00:00:00'],
['P2', '01/21/2018 00:00:00', '02/04/2018 00:00:00'],
['P3', '02/04/2018 00:00:00', '02/18/2018 00:00:00'],
['P4', '02/18/2018 00:00:00', '03/04/2018 00:00:00']]
## TYPE OF GRAPH EDGES
########################################################
# You can export edges for one type, or for all
# Options: user_conn_all, --All user connections
# user_conn_mention, --Only Mentions user connections
# user_conn_retweet, --Only Retweets user connections
# user_conn_reply, --Only Replies user connections
# user_conn_quote, --Only Quotes user connections
#          ht_conn             --Hashtag connections - (Hashtags that were used together)
# all --It will export all of the above options
TYPE_OF_GRAPH = 'all'
myAnalysis.export_mult_types_edges_for_input(period_arr=period_arr, type_of_graph=TYPE_OF_GRAPH)
###Output
_____no_output_____
###Markdown
Print initial EDA. This will show you summary information about your data.
###Code
myAnalysis.eda_analysis()
###Output
_____no_output_____
###Markdown
Automation Analysis. It creates all folders and analysis files based on your given settings IMPORTANT STEP: Choose your settings here before running the automation analysis These variables will help you decide what files you want to see and with which parameters Running the analysis step could take a long time. If you want to run piece by piece so you can see results soon, you can change the flags to 'Y' one at a time TYPE OF GRAPH ANALYSIS
###Code
## TYPE OF GRAPH ANALYSIS
########################################################
# Type of graph analysis
# Options: user_conn_all, --All user connections
# user_conn_mention, --Only Mentions user connections
# user_conn_retweet, --Only Retweets user connections
# user_conn_reply, --Only Replies user connections
# user_conn_quote, --Only Quotes user connections
#          ht_conn             --Hashtag connections - (Hashtags that were used together)
TYPE_OF_GRAPH = 'user_conn_all'
#------------------------------------------------------------
###Output
_____no_output_____
###Markdown
OUTPUT PATH, PERIOD AND BOT SETTINGS
###Code
## OUTPUT PATH, PERIOD AND BOT SETTINGS
########################################################
# Path where you want to save your output files
# It will use the path you already set previously,
# but you can change here in case you want a new path
OUTPUT_PATH = BASE_PATH
#Filter bots or not. Options: (None, '0', or '1')
IS_BOT_FILTER = None
# Same period array you already set previously.
# You can change here in case you want something new,
# just follow the same format as array in previous step
PERIOD_ARR = [['P1', '10/08/2017 00:00:00', '10/15/2017 00:00:00'],
['P2', '01/21/2018 00:00:00', '02/04/2018 00:00:00'],
['P3', '02/04/2018 00:00:00', '02/18/2018 00:00:00'],
['P4', '02/18/2018 00:00:00', '03/04/2018 00:00:00']]
#------------------------------------------------------------
###Output
_____no_output_____
###Markdown
FILES TO CREATE OPTIONS Choose which files you want to create
###Code
## FILES TO CREATE OPTIONS
# Choose which files you want to create
########################################################
# Creates a separate folder for the top degree nodes
#------------------------------------------------------------
CREATE_TOP_NODES_FILES_FLAG = 'Y'
# IF you chose CREATE_TOP_NODES_FILES_FLAG='Y', you can also set these settings
# We will create subfolder for the top degree nodes based on these number
TOP_DEGREE_START = 1
TOP_DEGREE_END = 25
# We will create subfolders for the top degree nodes
# for each period based on these numbers
PERIOD_TOP_DEGREE_START = 1
PERIOD_TOP_DEGREE_END = 10
# Creates files with the edges of each folder
# and a list of nodes and their degree
#------------------------------------------------------------
CREATE_NODES_EDGES_FILES_FLAG = 'Y'
# Creates the graph visualization files
#------------------------------------------------------------
CREATE_GRAPHS_FILES_FLAG = 'Y'
# Creates files for topic discovery
#------------------------------------------------------------
# Tweet texts for that folder, word cloud, and LDA Model Visualization
CREATE_TOPIC_MODEL_FILES_FLAG = 'Y'
# If you chose CREATE_TOPIC_MODEL_FILES_FLAG='Y', you can also set this setting
# This is the number of topics to send as input to LDA model (Default is 4)
NUM_OF_TOPICS = 4
# Creates files with ht frequency
#------------------------------------------------------------
# Text files with all hashtags used, wordcloud, and barchart
CREATE_HT_FREQUENCY_FILES_FLAG = 'Y'
# Creates files with word frequency
#------------------------------------------------------------
# Text files with all words used, wordcloud, and barchart
CREATE_WORDS_FREQUENCY_FILES_FLAG = 'Y'
# If you answer yes to CREATE_WORDS_FREQUENCY_FILES_FLAG, then you can choose
# how many words you want to see in your list file.
# The number of words to save on the frequency word list file. (Default=5000)
TOP_NO_WORD_FILTER = 5000
# Creates files with time series data
#------------------------------------------------------------
CREATE_TIMESERIES_FILES_FLAG = 'Y'
# Creates graphs with hashtag information
#------------------------------------------------------------
# This can be used when you're analyzing user connections,
# but still want to see the hashtag connection graph for that group of users
CREATE_HT_CONN_FILES_FLAG = 'Y'
# IF you chose CREATE_HT_CONN_FILES_FLAG = 'Y', you can also set this setting
# This is to ignore the top hashtags in the visualization
# Sometimes ignoring the main hashtag can be helpful in visualization to
# discover other important structures within the graph
TOP_HT_TO_IGNORE = 2
# Creates louvain communities folder and files
#------------------------------------------------------------
CREATE_COMMUNITY_FILES_FLAG = 'N'
# If set CREATE_COMMUNITY_FILES_FLAG = 'Y', then you can
# set a cutoff number of edges to identify when a folder should be created
# If the commty has less edges than this number, it won't create a new folder
# Default is 200
COMMTY_EDGE_SIZE_CUTOFF = 200
#------------------------------------------------------------
## GRAPH OPTIONS #######################################
########################################################
# In case you want to print full graph, with no reduction, and without node scale
CREATE_GRAPH_WITHOUT_NODE_SCALE_FLAG = 'Y'
# In case you want to print full graph, with no reduction, but with node scale
CREATE_GRAPH_WITH_NODE_SCALE_FLAG = 'Y'
# In case you want to print reduced graph
CREATE_REDUCED_GRAPH_FLAG = 'Y'
# This is the cutoff number of edges to decide if we will print
# the graph or not. The logic will remove nodes until it can get
# to this max number of edges to plot
# If you choose a large number it may take a long time to run.
# If you choose a small number it may contract nodes too much or not print the graph at all
GRAPH_PLOT_CUTOFF_NO_NODES = 3000
GRAPH_PLOT_CUTOFF_NO_EDGES = 10000
# Reduced Graph settings
#------------------------------------------------------------
# This is a percentage number used to remove nodes
# so we can be able to plot large graphs.
# You can run this logic multiple times with different percentages.
# Each time the logic will save the graph file with a different name
# according to the parameter given
REDUCED_GRAPH_COMTY_PER = 90
# Reduce graph by removing edges with weight less than this number
# None if you don't want to use this reduction method
REDUCED_GRAPH_REMOVE_EDGE_WEIGHT = None
# Continuously reduce graph until it gets to the GRAPH_PLOT_CUTOFF numbers or to 0
REDUCED_GRAPH_REMOVE_EDGES_UNTIL_CUTOFF_FLAG = 'Y'
#------------------------------------------------------------
###Output
_____no_output_____
###Markdown
UPDATE OBJECT WITH YOUR CHOICES
###Code
# Set configurations
myAnalysis.setConfigs(type_of_graph=TYPE_OF_GRAPH,
is_bot_Filter=IS_BOT_FILTER,
period_arr=PERIOD_ARR,
create_nodes_edges_files_flag=CREATE_NODES_EDGES_FILES_FLAG,
create_graphs_files_flag=CREATE_GRAPHS_FILES_FLAG,
create_topic_model_files_flag=CREATE_TOPIC_MODEL_FILES_FLAG,
create_ht_frequency_files_flag=CREATE_HT_FREQUENCY_FILES_FLAG,
create_words_frequency_files_flag=CREATE_WORDS_FREQUENCY_FILES_FLAG,
create_timeseries_files_flag=CREATE_TIMESERIES_FILES_FLAG,
create_top_nodes_files_flag=CREATE_TOP_NODES_FILES_FLAG,
create_community_files_flag=CREATE_COMMUNITY_FILES_FLAG,
create_ht_conn_files_flag=CREATE_HT_CONN_FILES_FLAG,
num_of_topics=NUM_OF_TOPICS,
top_no_word_filter=TOP_NO_WORD_FILTER,
top_ht_to_ignore=TOP_HT_TO_IGNORE,
graph_plot_cutoff_no_nodes=GRAPH_PLOT_CUTOFF_NO_NODES,
graph_plot_cutoff_no_edges=GRAPH_PLOT_CUTOFF_NO_EDGES,
create_graph_without_node_scale_flag=CREATE_GRAPH_WITHOUT_NODE_SCALE_FLAG,
create_graph_with_node_scale_flag=CREATE_GRAPH_WITH_NODE_SCALE_FLAG,
create_reduced_graph_flag=CREATE_REDUCED_GRAPH_FLAG,
reduced_graph_comty_contract_per=REDUCED_GRAPH_COMTY_PER,
reduced_graph_remove_edge_weight=REDUCED_GRAPH_REMOVE_EDGE_WEIGHT,
reduced_graph_remove_edges=REDUCED_GRAPH_REMOVE_EDGES_UNTIL_CUTOFF_FLAG,
top_degree_start=TOP_DEGREE_START,
top_degree_end=TOP_DEGREE_END,
period_top_degree_start=PERIOD_TOP_DEGREE_START,
period_top_degree_end=PERIOD_TOP_DEGREE_END,
commty_edge_size_cutoff=COMMTY_EDGE_SIZE_CUTOFF
)
myAnalysis.edge_files_analysis(output_path=OUTPUT_PATH)
print("**** END ****")
###Output
_____no_output_____
###Markdown
Manual Analysis Examples Create LDA Analysis files
###Code
myAnalysis.lda_analysis_files('D:\\Data\\MyFiles', startDate_filter='09/20/2020 00:00:00', endDate_filter='03/04/2021 00:00:00')
###Output
_____no_output_____
###Markdown
Create hashtag frequency Analysis files
###Code
myAnalysis.ht_analysis_files('D:\\Data\\MyFiles', startDate_filter='09/20/2020 00:00:00', endDate_filter='03/04/2021 00:00:00')
###Output
_____no_output_____
###Markdown
Create word frequency Analysis files
###Code
myAnalysis.words_analysis_files('D:\\Data\\MyFiles', startDate_filter='09/20/2020 00:00:00', endDate_filter='03/04/2021 00:00:00')
###Output
_____no_output_____
###Markdown
PART A: The temperature profile of the samples and plate is determined by detecting the edges, filling and labeling them, and monitoring the temperature at their centroids. Use the function 'edge_detection.input_file' to load the input file
###Code
frames = ed.input_file('../musicalrobot/data/10_17_19_PPA_Shallow_plate.tiff')
plt.imshow(frames[0])
###Output
_____no_output_____
###Markdown
Crop the input file if required to remove the noise and increase the accuracy of edge detection
###Code
crop_frame = []
for frame in frames:
crop_frame.append(frame[35:85,40:120])
plt.imshow(crop_frame[0])
plt.colorbar()
###Output
_____no_output_____
###Markdown
Use the wrapping function edge_detection.inflection_temp
###Code
# Using the wrapping function
sorted_regprops, s_temp, p_temp, s_infl, result_df = ed.inflection_temp(crop_frame, 3, 3,'../musicalrobot/data/')
result_df
###Output
_____no_output_____
###Markdown
Plotting the locations at which the temperature was recorded
###Code
# Plotting the original image with the samples
# and centroid and plate location
plt.imshow(crop_frame[0])
plt.scatter(sorted_regprops[0]['Plate_coord'],sorted_regprops[0]['Row'],c='orange',s=6)
plt.scatter(sorted_regprops[0]['Column'],sorted_regprops[0]['Row'],s=6,c='red')
plt.title('Sample centroid and plate locations at which the temperature profile is monitored')
# Plotting the temperature profile of a sample against the temperature profile
# of the plate at a location next to the sample.
plt.plot(p_temp[5],s_temp[5])
plt.ylabel('Temperature of the sample($^\circ$C)')
plt.xlabel('Temperature of the well plate($^\circ$C)')
plt.title('Temperature of the sample against the temperature of the plate')
###Output
_____no_output_____
###Markdown
Part B:* The temperature profile of the samples and the plate is obtained by summing the pixel values over individual rows and columns and finding the troughs in the arrays of column and row sums.* The temperature profile is then obtained by monitoring the temperature value at the intersections of those trough locations in the column and row sums. Load the input file as frames. Use the function pixel_temp to get the temperature of the samples, and of plate locations next to the samples, in every frame of the input video.
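The row/column-sum idea can be sketched in a few lines. This is a hedged illustration of the approach described above, not musicalrobot's actual implementation; the 5x5 `frame` and the `find_troughs` helper are made up for the example:

```python
def find_troughs(profile):
    """Indices i where profile[i] is strictly below both neighbours."""
    return [i for i in range(1, len(profile) - 1)
            if profile[i] < profile[i - 1] and profile[i] < profile[i + 1]]

frame = [[1, 1, 1, 1, 1],
         [1, 1, 1, 1, 1],
         [1, 1, 0, 1, 1],   # one cool "sample" pixel at (2, 2)
         [1, 1, 1, 1, 1],
         [1, 1, 1, 1, 1]]
row_sums = [sum(row) for row in frame]         # [5, 5, 4, 5, 5]
col_sums = [sum(col) for col in zip(*frame)]   # [5, 5, 4, 5, 5]
# the sample centre lies at the intersection of a row trough and a column trough
sample_center = (find_troughs(row_sums)[0], find_troughs(col_sums)[0])  # (2, 2)
```

Real IR frames are noisy, so in practice the 1-D profiles would be smoothed before trough detection.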
###Code
result_df1 = pa.pixel_temp(crop_frame,n_columns = 3, n_rows = 3, freeze_heat=False, path='../musicalrobot/data/')
# Dataframe containing sample coordinates and corresponding melting points
result_df1
###Output
_____no_output_____
###Markdown
Creating a Printable Model from a 3D Medical Image A Tutorial on dicom2stl.py[https://github.com/dave3d/dicom2stl](https://github.com/dave3d/dicom2stl) 
###Code
import SimpleITK as sitk
%matplotlib notebook
###Output
_____no_output_____
###Markdown
Digital Imaging and Communications in Medicine (DICOM)DICOM is the standard for the communication and management of **medical imaging information** and related data.DICOM is most commonly used for storing and transmitting medical images enabling the **integration of medical imaging devices** such as scanners, servers, workstations, printers, network hardware, and **picture archiving and communication systems (PACS)** from multiple manufacturers[https://en.wikipedia.org/wiki/DICOM](https://en.wikipedia.org/wiki/DICOM) Imaging Modalities * CT (computed tomography) * MRI (magnetic resonance imaging) * ultrasound * X-ray * fluoroscopy * angiography * mammography * breast tomosynthesis * PET (positron emission tomography) * SPECT (single photon emission computed tomography) * Endoscopy * microscopy and whole slide imaging * OCT (optical coherence tomography).
###Code
ct_image = sitk.ReadImage('Data/ct_example.nii.gz')
mri_image = sitk.ReadImage('Data/mri_t1_example.nii.gz')
import gui
gui.MultiImageDisplay(image_list=[ct_image, mri_image], title_list=['CT Head', 'MRI T1 Head'])
###Output
_____no_output_____
###Markdown
CT Hounsfield UnitsHounsfield units (HU) are a dimensionless unit universally used in computed tomography (CT) scanning to express CT numbers in a standardized and convenient form. Hounsfield units are obtained from a linear transformation of the measured attenuation coefficients * Water is 0 HU * Air is -1000 HU * Very dense bone is 2000 HU * Metal is 3000 HU [Hounsfield Wikipedia page](https://en.wikipedia.org/wiki/Hounsfield_scale) Image SegmentationThe process of partitioning an image into multiple segments.Typically used to locate objects and boundaries in images.We use thresholding (selecting a range of image intensities), but SimpleITK has a variety of algorithms[SimpleITK Notebooks](https://github.com/InsightSoftwareConsortium/SimpleITK-Notebooks/tree/master/Python)
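The linear transformation can be written out directly. This small helper is only illustrative (the attenuation values used below are nominal examples, not readings from a calibrated scanner):

```python
def to_hounsfield(mu, mu_water, mu_air=0.0):
    """HU = 1000 * (mu - mu_water) / (mu_water - mu_air)."""
    return 1000.0 * (mu - mu_water) / (mu_water - mu_air)

hu_water = to_hounsfield(0.19, mu_water=0.19)  # water -> 0 HU
hu_air = to_hounsfield(0.0, mu_water=0.19)     # air   -> -1000 HU
```

This scale is why the threshold `ct_image > 200` used in the next cell isolates bone: it keeps only voxels denser than soft tissue.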
###Code
from myshow import myshow, myshow3d
ct_bone = ct_image>200
# To visualize the label image in RGB we need an image with a 0-255 range
ct255_image = sitk.Cast(sitk.IntensityWindowing(ct_bone,0,500.0,0.,255.),
sitk.sitkUInt8)
ct255_bone = sitk.Cast(ct_bone, sitk.sitkUInt8)
myshow(sitk.LabelOverlay(ct255_image, ct255_bone), "Basic Thresholding")
###Output
_____no_output_____
###Markdown
Iso-surface extractionExtract a polygonal surface from a 3D image. The most well-known algorithm is Marching Cubes (Lorensen & Cline, SIGGRAPH 1987). The 2D version is Marching Squares, shown below Marching CubesAnd here is the lookup table for Marching Cubes dicom2stl.py processing pipelineSimpleITK image processing pipeline * **Shrink** the volume to 256^3 * Apply **anisotropic smoothing** * **Threshold** - Preset tissue types: skin, bone, fat, soft tissue - User specified iso-value * **Median filter** * **Pad** the volume with black VTK mesh pipeline * Run **Marching Cubes** to extract surface * Apply **CleanMesh** filter to merge vertices * Apply **SmoothMesh** filter * Run **polygon reduction** * Write STL
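The core of Marching Squares/Cubes is the lookup step: threshold each cell's corner values into a bit pattern, then use that pattern as an index into a case table. A minimal 2D flavour of the indexing (illustrative only, not VTK's implementation):

```python
def cell_index(tl, tr, br, bl, iso):
    """4-bit marching-squares case index: top-left=8, top-right=4,
    bottom-right=2, bottom-left=1 (bit set when corner >= iso)."""
    idx = 0
    if tl >= iso: idx |= 8
    if tr >= iso: idx |= 4
    if br >= iso: idx |= 2
    if bl >= iso: idx |= 1
    return idx

# only the top-left corner is inside the iso-surface -> case 8
case = cell_index(250, 10, 10, 10, iso=200)
```

Each of the 16 cases then maps to a fixed set of line segments crossing the cell; in 3D there are 256 cases emitting triangles instead.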
###Code
import itkwidgets
head = sitk.ReadImage("Data/ct_head.nii.gz")
itkwidgets.view(head)
import sys, os
# download dicom2stl if it's not here already
if not os.path.isdir('dicom2stl'):
!{'git clone https://github.com/dave3d/dicom2stl.git'}
!{sys.executable} dicom2stl/dicom2stl.py -h
!{sys.executable} dicom2stl/dicom2stl.py -i 400 -o bone.stl Data/ct_head.nii.gz
from dicom2stl.utils import vtkutils
mesh = vtkutils.readMesh('bone.stl')
itkwidgets.view(head, geometries=[mesh])
###Output
_____no_output_____ |
5wk_차원축소/차원축소_문제_원본.ipynb | ###Markdown
Welcome, everyone in the 13th cohort! Let's work through the assignment together. If you're completely stuck or just can't solve something, give me a call or message me on KakaoTalk!! You just need to fill in the parts marked ''' ? '''. If you'd rather do it your own way, feel free to just implement it yourself!! Links are attached for the functions you should reference, so go in and check them out. 1) We'll walk through the PCA process step by step, so follow along carefully
###Code
import numpy as np
import numpy.linalg as lin
import matplotlib.pyplot as plt
import pandas as pd
import random
# Import the basic modules
x1 = [95, 91, 66, 94, 68, 63, 12, 73, 93, 51, 13, 70, 63, 63, 97, 56, 67, 96, 75, 6]
x2 = [56, 27, 25, 1, 9, 80, 92, 69, 6, 25, 83, 82, 54, 97, 66, 93, 76, 59, 94, 9]
x3 = [57, 34, 9, 79, 4, 77, 100, 42, 6, 96, 61, 66, 9, 25, 84, 46, 16, 63, 53, 30]
# Here we have the values of the explanatory variables x1, x2, and x3
X = np.stack((x1,x2,x3),axis=0)
# Combine the explanatory variables into a single matrix
X = pd.DataFrame(X.T,columns=['x1','x2','x3'])
X
###Output
_____no_output_____
###Markdown
1-1) Before starting PCA, you must always!!!!!! scale the data first. https://datascienceschool.net/view-notebook/f43be7d6515b48c0beb909826993c856/ will be a helpful reference
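For reference, standardization is just subtracting each column's mean and dividing by its standard deviation. A hand-rolled toy example on a single column, separate from the assignment blanks (note that sklearn's StandardScaler uses the population standard deviation, i.e. it divides by n):

```python
data = [2.0, 4.0, 6.0]
mean = sum(data) / len(data)                           # 4.0
var = sum((v - mean) ** 2 for v in data) / len(data)   # population variance
std = var ** 0.5
scaled = [(v - mean) / std for v in data]              # mean 0, std 1
```

After scaling, every column contributes on the same footing to the covariance matrix computed next.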
###Code
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
X_std = '''?'''
X_std
features = X_std.T
features
###Output
_____no_output_____
###Markdown
1-2) Now let's compute the covariance matrix. https://docs.scipy.org/doc/numpy/reference/generated/numpy.cov.html will be a helpful reference
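Two things to keep in mind about np.cov: by default it expects variables in rows (`rowvar=True`), which is why `features = X_std.T` was prepared above, and it divides by n-1 by default. The computation itself is simple; here is a hand calculation of one covariance entry on toy data (unrelated to the assignment variables):

```python
x = [1.0, 2.0, 3.0]
y = [2.0, 4.0, 6.0]
mx, my = sum(x) / len(x), sum(y) / len(y)
# sample covariance with the n-1 denominator, matching np.cov's default
cov_xy = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (len(x) - 1)  # 2.0
```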
###Code
cov_matrix = '''?'''
cov_matrix
###Output
_____no_output_____
###Markdown
1-3) Now let's find the eigenvalues and eigenvectors. The method is in the lab code!!
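Whatever routine you use, the result must satisfy the defining relation A v = lambda v: multiplying the matrix by an eigenvector just scales that eigenvector by its eigenvalue. A hand check on a tiny diagonal matrix (a toy sanity check, not part of the assignment):

```python
A = [[2.0, 0.0],
     [0.0, 3.0]]
v = [0.0, 1.0]    # an eigenvector of A
lam = 3.0         # its eigenvalue
Av = [A[0][0] * v[0] + A[0][1] * v[1],
      A[1][0] * v[0] + A[1][1] * v[1]]
# Av equals lam * v componentwise
```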
###Code
eigenvalues = '''?'''
eigenvectors = '''?'''
print(eigenvalues)
print(eigenvectors)
mat = np.zeros((3,3))
mat
mat[0][0] = eigenvalues[0]
mat[1][1] = eigenvalues[1]
mat[2][2] = eigenvalues[2]
mat
###Output
_____no_output_____
###Markdown
1-4) Now everything is ready for the eigendecomposition. Recover the original covariance matrix from the product of its factors. Refer to https://docs.scipy.org/doc/numpy/reference/generated/numpy.dot.html for multiplying matrices. The matrix product eigenvectors x mat x eigenvectors.T will do it
###Code
np.dot(np.dot(eigenvectors,mat),eigenvectors.T)
###Output
_____no_output_____
###Markdown
1-5) Finally, let's transform the values onto the eigenvector axes. I've defined this as a function. https://docs.scipy.org/doc/numpy/reference/generated/numpy.concatenate.html
###Code
def new_coordinates(X,eigenvectors):
for i in range(eigenvectors.shape[0]):
if i == 0:
new = [X.dot(eigenvectors.T[i])]
else:
new = np.concatenate((new,'''?'''),axis=0)
return new.T
# Values of the data projected onto every eigenvector axis
new_coordinates(X_std,eigenvectors)
# The data expressed in the new coordinate axes
###Output
_____no_output_____
###Markdown
2) Implement PCA yourself. If you understood the process above, you can definitely do it
###Code
from sklearn.preprocessing import StandardScaler
def MYPCA(X,number):
scaler = StandardScaler()
x_std = '''?'''
features = x_std.T
cov_matrix = '''?'''
eigenvalues = '''?'''
eigenvectors = '''?'''
new_coordinates(x_std,eigenvectors)
new_coordinate = new_coordinates(x_std,eigenvectors)
index = eigenvalues.argsort()
index = list(index)
for i in range(number):
if i==0:
new = [new_coordinate[:,index.index(i)]]
else:
new = np.concatenate(([new_coordinate[:,index.index(i)]],new),axis=0)
return new.T
MYPCA(X,3)
# Does the data come out nicely transformed onto the new axes?
# The result may differ from the PCA above, because there we did not sort the axes by descending eigenvalue
###Output
_____no_output_____
###Markdown
3) Shall we compare with sklearn? https://scikit-learn.org/stable/modules/generated/sklearn.decomposition.PCA.html will be a helpful reference
###Code
from sklearn.decomposition import PCA
pca = '''?'''
'''?'''
MYPCA(X,3)
###Output
_____no_output_____
###Markdown
4) Let's apply this to the MNIST data! The MNIST data is included with the archive so you don't need to download it separately~!!! Open this Jupyter notebook in the same directory as the mnist-original.mat file~!!!
###Code
import numpy as np
import numpy.linalg as lin
import matplotlib.pyplot as plt
import pandas as pd
# note: sklearn's fetch_mldata is deprecated (removed in recent versions);
# we load the .mat file directly with scipy.io below instead
from scipy import io
%matplotlib inline
from mpl_toolkits.mplot3d import Axes3D
# Load the MNIST handwritten-digit data
mnist = io.loadmat('mnist-original.mat')
X = mnist['data'].T
y = mnist['label'].T
# data information
# 70,000 small digit images
# rows and columns are swapped -> transpose
# grayscale 28x28 pixels = 784 features
# each pixel holds a value from 0 to 255
# labels: 10 classes in total (digits 0-9)
# represent the data by naming each pixel column
feat_cols = [ 'pixel'+str(i) for i in range(X.shape[1]) ]
df = pd.DataFrame(X,columns=feat_cols)
df.head()
# create the dataframe by attaching the label y to df
df['y'] = y
df
###Output
_____no_output_____ |
matplot-tutorial.ipynb | ###Markdown
CW-07 matplotlib Tutorial Exercise*Lance Clifner, Eric Freda*CS-510October 11, 2016 Simple PlotIn this document, we will work through the example in section 1.4.2 in the matplotlib Tutorial. This tutorial can be found at: http://www.scipy-lectures.org/intro/matplotlib/matplotlib.htmlThe first step in the exercise is to calculate a string of sine and cosine values.The code is commented to explain what is being done in each step.
###Code
# get an array of evenly-spaced values (in radians) to be used with the
# sine and cosine functions.
# the N-dimensional array has a total of 256 elements,
# and the values include the end points of the range
X = np.linspace( -np.pi, np.pi, 256, endpoint=True )
# uncomment these lines for debug purposes to see the data type and values of the X array
#type( X )
#X
# get the array of sine and cosine values represented by the X array of radians
S = np.sin( X )
C = np.cos( X )
# uncomment these lines to see the type and contents of the S and C arrays
#type( S )
#type( C )
#S, C
# let's plot the sine and cosine values against the X values
plt.plot( X, S )
plt.plot( X, C )
# normally, in a python script, in order to visualize the resultant plot,
# we must force it to be displayed. The plt command to force the display is:
#plt.show()
# However, with this notebook, the magic line, %matplotlib inline, at the top of the
# document causes the plot to appear automatically when plot is called.
# Run this code, and note that the plot below runs from -pi to +pi, following the
# contents of the X array.
###Output
_____no_output_____
###Markdown
In this next segment, we will work on customizing the sine/cosine plot above. Note that the resultant plot doesn't look pretty (meaning symmetric and well-cropped), but that is by design.
###Code
# Note that the global variables and values from earlier code segments persist
# through subsequent code seqments. Thus we can continue working without having to
# copy the code from earlier seqments.
# linestyles can be found from (uncomment these lines to report the valid linestyle choices)
# note that the linestyles don't say what they are (dashed, solid, etc), so you should
# try them out to see what they do:
#from matplotlib import lines
#lines.lineStyles.keys()
# create a new figure of size 8x6 inches, using 100 dots per inch
fig = plt.figure( figsize=(8,6), dpi=100 )
# Note that a matplotlib figure is a top-level container that holds all sorts of matplotlib elements
# Note that a figure doesn't draw or display anything, it is simply a container of data.
# create a new subplot from a grid of 1x1
plt.subplot( 1, 1, 1 )
# note that this creates a plot which runs from 0 to 1 on the x- and y-axes,
# but because our figure is not square (it's 8x6), the subplot also appears rectangular
# so the two axes are not equally scaled
# now, we will plot the sine with orange and a continuous width of pi pixels
plt.plot( X, S, color="orange", linewidth=np.pi, linestyle="-" )
# curiously, this plot killed our subplot and went back to the axial dimensions of our
# first plot, but the pixel dimensions seem to hold true (that is 800 x 600 pixels)
# now plot the cosine with a pink dashed line
plt.plot( X, C, color="pink", linewidth=5.5, linestyle="--" )
# note that the most recent plot draws "on-top" of the previous plots.
# set x tic marks, don't align them with the x limits, but do make them equi-spaced
plt.xticks( np.linspace( -4, 4, 9, endpoint=True ) )
# set limits on the x-axis, make these smaller than the actual x-range and asymmetric
plt.xlim( -3, 2.5 )
# note that we have to put the x-limit after the tics, otherwise the tics force the
# xlimits to be the tic range
# Set y ticks
plt.yticks( np.linspace( -1, 1, 11, endpoint=True ) )
# Set y limits to be just past the min/max of the curves--note this is also asymmetric
plt.ylim( -1.05, 2 )
# note that the upper y limit exceeds the range of the y tics, so there are no tics past 1
# no, the result doesn't look pretty, but it is exactly what we told it to be
# save this plot as a png file, with a non-standard dpi
plt.savefig( "cw_07_plot.png", dpi=58 )
# note that the file format is specified by the extension of the given filename.
# thus, .pdf, .jpg, etc.
###Output
_____no_output_____
###Markdown
In this next segment, we will toy with additional customization of the plot, including axes, labels, and legends. We are bypassing some of the tutorial points, as we covered those in the previous segment. For the record, we are skipping: colors & line widths, limits, and tics.We are doing tic labels, moving spines, legend, and annotating the plot.
###Code
# In this cell, we will move the spines to the center of the plot, just like axes on a graph
# note that the spines need to be moved before the limits and tics are set
axes = plt.gca() # note that gca stands for 'get current axes'; it returns the current Axes instance
# there are 4 spines, one on each side of the plotted area
# we will make two of these disappear by setting the color to nothing
# we want to keep the two spines that currently have labels attached to them
axes.spines[ 'top' ].set_color( 'none' )
axes.spines[ 'right' ].set_color( 'none' )
axes.xaxis.set_ticks_position( 'bottom' ) # this clears the tics from the top
axes.yaxis.set_ticks_position( 'left' ) # this clears the tics from the right
axes.spines[ 'bottom' ].set_position( ('data', 0) ) # stick it thru the origin
axes.spines[ 'left' ].set_position( ('data', 0) ) # stick it thru the origin
# let's play with the tic labels
# set the actual location of the ticks, then specify the labels for those tics
# we need to have the same number of labels as there are tic marks specified
plt.xticks( np.linspace( -np.pi, np.pi, 5, endpoint=True ),
[ r'$-\pi$', r'$-\pi/2$', r'$0$', r'$\pi/2$', r'$\pi$'])
# set the limits far enough out so that the tic marks are all seen
plt.xlim( -4, 4 )
# we can also make a specific list of tic marks, rather than an equi-spaced generated list
plt.yticks( [-1, -0.707, 0, 0.707, 1],
[ r'$-1$', r'$-\sqrt{2}/2$', r'$0$', r'$\sqrt{2}/2$', r'$1$'])
# let's add a legend to the most open area of the plot
plt.plot( X, S, color="green", linewidth=3, linestyle='--', label="Sine")
plt.plot( X, C, color="purple", linewidth=3, linestyle=':', label="Cosine")
plt.legend( loc='upper left' )
# let's annotate the point at pi/4 (note that sin(pi/4) == cos(pi/4) == sqrt(2)/2)
annot = np.pi/4
plt.plot( [annot, annot], [0, np.cos( annot )], color='purple', linewidth=1, linestyle="--")
plt.scatter( [annot ], [np.cos( annot )], 20, color='purple' )
plt.annotate( r'$sin(\frac{\pi}{4}) = \frac{\sqrt{2}}{2}$',
xy=(annot,np.sin(annot)), xycoords='data',
xytext=(-10,+40), textcoords='offset points', fontsize=12,
arrowprops=dict( arrowstyle="->", color='purple', connectionstyle="arc3, rad=.2" ))
###Output
_____no_output_____
###Markdown
In the next three code segments, we will look at the contour, imshow, and 3D plots.
###Code
# this is the contour plot exercise
def f(x, y):
return (1 - x / 2 + x ** 5 + y ** 3) * np.exp(-x ** 2 -y ** 2)
n = 256
x = np.linspace(-3, 3, n)
y = np.linspace(-3, 3, n)
X, Y = np.meshgrid(x, y)
# must set axes before plotting the data
plt.axes([0.025, 0.025, 0.95, 0.95])
# change to the hot map colors
plt.contourf(X, Y, f(X, Y), 8, alpha=.75, cmap=plt.cm.hot)
C = plt.contour(X, Y, f(X, Y), 8, colors='black', linewidths=.5)
# label the contour lines in place
plt.clabel(C, inline=1, fontsize=10)
# eliminate the tics around the edges (spines) of the plot
plt.xticks(())
plt.yticks(())
# this is the imshow plot exercise
def f(x, y):
return (1 - x / 2 + x ** 5 + y ** 3) * np.exp(-x ** 2 - y ** 2)
n = 10
x = np.linspace(-3, 3, 4 * n)
y = np.linspace(-3, 3, 3 * n)
X, Y = np.meshgrid(x, y)
# set the axes before plotting
plt.axes([0.025, 0.025, 0.95, 0.95])
plt.imshow(f(X, Y), cmap='bone', interpolation='nearest', origin='lower')
# add the color bar for the color map
plt.colorbar(shrink=.92)
# remove the tics from the spines
plt.xticks(())
plt.yticks(())
# this is the 3D plot exercise
from mpl_toolkits.mplot3d import Axes3D
fig = plt.figure()
ax = Axes3D(fig)
X = np.arange(-4, 4, 0.25)
Y = np.arange(-4, 4, 0.25)
X, Y = np.meshgrid(X, Y)
R = np.sqrt(X**2 + Y**2)
Z = np.sin(R)
ax.plot_surface(X, Y, Z, rstride=1, cstride=1, cmap='hot')
# set the lower z-limit
ax.set_zlim(top=2,bottom=-2)
ax.set_ylim(top=3.9)
ax.set_xlim(right=3.9)
# project filled contours onto the bottom of the plot (the surface itself was already drawn above)
ax.contourf(X, Y, Z, zdir='z', offset=-2, cmap=plt.cm.hot)
###Output
/usr/lib/python3/dist-packages/matplotlib/collections.py:571: FutureWarning: elementwise comparison failed; returning scalar instead, but in the future will perform elementwise comparison
if self._edgecolors == str('face'):
tutorials/2. Introduction to Submodular Optimization.ipynb
###Markdown
2. Introduction to Submodular OptimizationSo far we've focused on looking at submodular functions in action, primarily in the context of identifying a good selection of samples for the purpose of training machine learning models. While we have covered at a high level how submodular selection works, in this tutorial we will focus on the math and mechanics behind submodular selection.As mentioned before, submodular selection is a process of greedily selecting objects from a large set of objects in such a manner that a submodular function is maximized. This general definition allows submodular selection to be applied in a wide variety of areas. In our context we'll be focusing on the selection of samples for data analysis or machine learning purposes and using terminology common to that field.The equation for a feature-based function is below.\begin{equation}f(X) = \sum\limits_{u \in U} w_{u} \phi_{u} \left( \sum\limits_{x \in X} m_{u}(x) \right)\end{equation}In this equation, $U$ refers to all features in a sample, and $u$ refers to a specific feature. $X$ refers to the original data set that we are selecting from and $x$ refers to a single sample from that data set. $w$ is a vector of weights that indicate how important each feature is, with $w_{u}$ being a scalar referring to how important feature $u$ is. Frequently these weights are uniform. $\phi$ refers to a set of saturating functions, such as $\sqrt{X}$ or $\log(X + 1)$, that include a property of "diminishing returns" on the feature values. These diminishing returns will become very important later.When we maximize a submodular function, our goal is, at each iteration, to select the sample that yields the largest gain when added to the growing subset. This gain is dependent on the items that are already in the subset due to the saturating function. Let's talk through an example of selecting the best subset.
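To make the equation concrete, here is a minimal sketch assuming uniform weights $w_u = 1$ and the saturating function $\phi(t) = \log(t + 1)$ (the name `feature_based` and the toy data are ours, purely illustrative). It evaluates $f$ on two tiny sets with the same total feature mass:

```python
import numpy as np

# Feature-based submodular function with uniform weights and
# phi(t) = log(t + 1). Rows of X are samples, columns are features.
def feature_based(X):
    sums = X.sum(axis=0)            # column-wise feature totals
    return float(np.sum(np.log(sums + 1)))

# Two sets with identical total feature mass: A piles it onto one
# feature, B spreads it across two.
A = np.array([[1.0, 0.0],
              [1.0, 0.0]])
B = np.array([[1.0, 0.0],
              [0.0, 1.0]])

print(feature_based(A))  # log(3)     ~ 1.099
print(feature_based(B))  # 2 * log(2) ~ 1.386
```

Because $\log$ saturates, piling more mass onto an already-large feature total earns less than spreading it over fresh features, which is exactly the diminishing-returns behavior described above.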
###Code
%pylab inline
import seaborn; seaborn.set_style('whitegrid')
###Output
Populating the interactive namespace from numpy and matplotlib
###Markdown
Let's generate a two-dimensional data set for the purposes of visualization, where some samples are high in one of the features and some samples are high in the other feature. We do this purposefully so as to illustrate the effect that selection has on the marginal gain of each sample during the selection process.
###Code
numpy.random.seed(3)
X = numpy.concatenate([numpy.random.normal([0.5, 0.1], [0.3, 0.05], size=(50, 2)),
numpy.random.normal([0.1, 0.5], [0.05, 0.4], size=(50, 2))])
X = numpy.abs(X)
plt.figure(figsize=(8, 6))
plt.title("Randomly generated data", fontsize=16)
plt.scatter(X[:,0], X[:,1], s=20)
plt.xlim(0, 1.4)
plt.ylim(0, 1.4)
plt.show()
###Output
_____no_output_____
###Markdown
Now let's implement a function that calculates the gain of each sample with respect to some growing subset. Since we are using a feature based function, the gain requires summing column-wise down each sample in the growing set and then applying the saturating function in order to squash it. We can speed this up significantly by pre-computing the column-wise sums of the current selected set, so that we only need to add in the new sample to be considered.
###Code
def gains(X, z=None):
concave_sums = numpy.sum(z, axis=0) if z is not None else numpy.zeros(X.shape[1])
concave_func = numpy.log(concave_sums + 1)
gains = []
for x in X:
gain = numpy.sum(numpy.log(concave_sums + x + 1) - concave_func)
gains.append(gain)
return gains
###Output
_____no_output_____
###Markdown
We can now use this function in order to calculate the gain of each of the samples in our set if we were to use it as the first sample in our subset.
###Code
gain1 = gains(X)
plt.figure(figsize=(8, 6))
plt.title("Gain if adding this item", fontsize=16)
plt.scatter(X[:,0], X[:,1], c=gain1, cmap='Purples')
plt.colorbar()
plt.show()
###Output
_____no_output_____
###Markdown
We see a clear trend that the samples with high values in either of the two dimensions have high gains, whereas those samples with small values in each dimension have small gains. Given the simplicity of the feature-based selection algorithm this makes sense---high feature values correspond to higher gains. What happens if we select the sample with the highest gain and then recalculate gains?
###Code
idx = numpy.argmax(gain1)
z = [X[idx]]
gain2 = gains(X, z)
plt.figure(figsize=(8, 6))
plt.title("Gain if adding this item", fontsize=16)
plt.scatter(X[:,0], X[:,1], c=gain2, cmap='Purples')
plt.colorbar()
plt.scatter(X[idx, 0], X[idx, 1], c='c', s=75, label="First Selected Sample")
plt.legend(fontsize=14)
plt.show()
###Output
_____no_output_____
###Markdown
It looks like we select a sample that has the highest magnitude single feature value. While the saturating function will flatten values as the set gets larger, with small feature values and no samples in the subset yet, the raw feature value is roughly the gain. Interestingly, it looks like all of the samples that have higher values for the y-axis now have a diminished marginal gain. The highest gain samples now look like they come from those with high x-axis values.
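The shrinking gains seen in the plot can be checked directly against the saturating function itself; a minimal sketch, again assuming $\phi(t) = \log(t + 1)$:

```python
import numpy as np

# Marginal gain of adding the same feature value as the selected set's
# running feature total grows: log(total + x + 1) - log(total + 1).
def marginal_gain(x_new, total):
    return np.log(total + x_new + 1) - np.log(total + 1)

x_new = 0.5
gains_at = [marginal_gain(x_new, t) for t in (0.0, 1.0, 5.0)]
print(gains_at)  # strictly decreasing as the running total grows
```

The same feature value is worth less and less as the subset accumulates mass in that feature, which is why the y-heavy samples' gains collapse after the first selection.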
###Code
idx = numpy.argmax(gain2)
z += [X[idx]]
gain3 = gains(X, z)
plt.figure(figsize=(8, 6))
plt.title("Gain if adding this item", fontsize=16)
plt.scatter(X[:,0], X[:,1], c=gain3, cmap='Purples')
plt.colorbar()
plt.scatter(X[idx, 0], X[idx, 1], c='c', s=75, label="Highest Gain")
plt.legend(fontsize=14)
plt.show()
###Output
_____no_output_____
###Markdown
To complete the selection process, we would greedily select samples from the full set until the desired number of samples have been reached. This requires scanning over the full set (minus the samples that have been selected) one full time for each sample that we would like to select, costing $nm$ time where $n$ is the number of samples in the full set and $m$ is the number of samples that one would like to select. Because sometimes our goal is to induce a ranking over the full set, i.e., determine the order of selection for each sample, this becomes quadratic time.Is it possible to do better than this? The short answer is yes, and the reason lies in the non-negativity constraint on the input data and in the saturating function.
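The full greedy loop described above can be sketched as follows, reusing the same feature-based gain and the precomputed column-wise sums (the name `greedy_select` and the toy data are ours, purely illustrative):

```python
import numpy as np

# Greedy feature-based selection: at each step, score every remaining
# sample's marginal gain against the running column-wise totals and
# take the best one.
def greedy_select(X, k):
    selected, remaining = [], list(range(len(X)))
    sums = np.zeros(X.shape[1])          # running column-wise totals
    for _ in range(k):
        base = np.log(sums + 1)
        # gain of each candidate if it were added next
        cand_gains = [np.sum(np.log(sums + X[i] + 1) - base)
                      for i in remaining]
        best = remaining[int(np.argmax(cand_gains))]
        selected.append(best)
        remaining.remove(best)
        sums += X[best]                  # cheap O(d) update of the totals
    return selected

X_demo = np.array([[2.0, 0.0],
                   [0.0, 2.0],
                   [0.1, 0.1]])
print(greedy_select(X_demo, 2))  # picks the two high-magnitude, complementary rows
```

Each outer iteration scans all remaining samples, so selecting $m$ of $n$ samples costs $O(nm)$ gain evaluations, matching the complexity discussed above.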
###Code
import pandas
gain_table = pandas.DataFrame({"x" : X[:,0], "y" : X[:,1], "gain 1": gain1, "gain 2": gain2, "gain 3": gain3})
gain_table[['x', 'y', 'gain 1', 'gain 2', 'gain 3']].head()
###Output
_____no_output_____
jupyter/orders.ipynb
###Markdown
[index](./index.ipynb) | [accounts](./accounts.ipynb) | [orders](./orders.ipynb) | [trades](./trades.ipynb) | [positions](./positions.ipynb) | [historical](./historical.ipynb) | [streams](./streams.ipynb) | [errors](./exceptions.ipynb) OrdersThis notebook provides an example of + a MarketOrder + a simplified way for a MarketOrder by using contrib.requests.MarketOrderRequest + a LimitOrder with an expiry datetime by using *GTD* and contrib.requests.LimitOrderRequest + canceling a GTD order Create a market order request with a TakeProfit and a StopLoss order when it gets filled.
###Code
import json
import oandapyV20
import oandapyV20.endpoints.orders as orders
from exampleauth import exampleauth
accountID, access_token = exampleauth.exampleAuth()
client = oandapyV20.API(access_token=access_token)
# create a market order to enter a LONG position 10000 EUR_USD, stopLoss @1.07 takeProfit @1.10 ( current: 1.055)
# according to the docs at developer.oanda.com the requestbody looks like:
mktOrder = {
"order": {
"timeInForce": "FOK", # Fill-or-kill
"instrument": "EUR_USD",
"positionFill": "DEFAULT",
"type": "MARKET",
"units": 10000, # as integer
"takeProfitOnFill": {
"timeInForce": "GTC", # Good-till-cancelled
"price": 1.10 # as float
},
"stopLossOnFill": {
"timeInForce": "GTC",
"price": "1.07" # as string
}
}
}
r = orders.OrderCreate(accountID=accountID, data=mktOrder)
print("Request: ", r)
print("MarketOrder specs: ", json.dumps(mktOrder, indent=2))
###Output
Request: v3/accounts/101-004-1435156-001/orders
MarketOrder specs: {
"order": {
"timeInForce": "FOK",
"instrument": "EUR_USD",
"stopLossOnFill": {
"timeInForce": "GTC",
"price": "1.07"
},
"positionFill": "DEFAULT",
"units": 10000,
"takeProfitOnFill": {
"timeInForce": "GTC",
"price": 1.1
},
"type": "MARKET"
}
}
###Markdown
Well that looks fine, but constructing order bodies that way is not really what we want. Types are not checked, for instance, and all the defaults need to be supplied.These kinds of data structures can become complex, are not easy to read or construct, and are prone to errors. Types and definitionsOanda uses several *types* and *definitions* throughout their documentation. These types are covered by the *oandapyV20.types* package and the definitions by the *oandapyV20.definitions* package. Contrib.requestsThe *oandapyV20.contrib.requests* package offers classes providing an easy way to construct the data for the *data* parameter of the *OrderCreate* endpoint or the *TradeCRCDO* (Create/Replace/Cancel Dependent Orders). The *oandapyV20.contrib.requests* package makes use of the *oandapyV20.types* and *oandapyV20.definitions*.Let's improve the previous example by making use of *oandapyV20.contrib.requests*:
###Code
import json
import oandapyV20
import oandapyV20.endpoints.orders as orders
from oandapyV20.contrib.requests import (
MarketOrderRequest,
TakeProfitDetails,
StopLossDetails)
from exampleauth import exampleauth
accountID, access_token = exampleauth.exampleAuth()
client = oandapyV20.API(access_token=access_token)
# create a market order to enter a LONG position 10000 EUR_USD
mktOrder = MarketOrderRequest(instrument="EUR_USD",
units=10000,
takeProfitOnFill=TakeProfitDetails(price=1.10).data,
stopLossOnFill=StopLossDetails(price=1.07).data
).data
r = orders.OrderCreate(accountID=accountID, data=mktOrder)
print("Request: ", r)
print("MarketOrder specs: ", json.dumps(mktOrder, indent=2))
###Output
Request: v3/accounts/101-004-1435156-001/orders
MarketOrder specs: {
"order": {
"timeInForce": "FOK",
"instrument": "EUR_USD",
"positionFill": "DEFAULT",
"type": "MARKET",
"units": "10000",
"takeProfitOnFill": {
"timeInForce": "GTC",
"price": "1.10000"
},
"stopLossOnFill": {
"timeInForce": "GTC",
"price": "1.07000"
}
}
}
###Markdown
As you can see, the specs contain price values that were converted to strings and the defaults *positionFill* and *timeInForce* were added. Using *contrib.requests* makes it very easy to construct the order data body for order requests. Parameters for those requests are also validated.Next step, place the order:
###Code
rv = client.request(r)
print("Response: {}\n{}".format(r.status_code, json.dumps(rv, indent=2)))
###Output
Response: 201
{
"orderCancelTransaction": {
"time": "2017-03-09T13:17:59.319422181Z",
"userID": 1435156,
"batchID": "7576",
"orderID": "7576",
"id": "7577",
"type": "ORDER_CANCEL",
"accountID": "101-004-1435156-001",
"reason": "STOP_LOSS_ON_FILL_LOSS"
},
"lastTransactionID": "7577",
"orderCreateTransaction": {
"timeInForce": "FOK",
"instrument": "EUR_USD",
"batchID": "7576",
"accountID": "101-004-1435156-001",
"units": "10000",
"takeProfitOnFill": {
"timeInForce": "GTC",
"price": "1.10000"
},
"time": "2017-03-09T13:17:59.319422181Z",
"userID": 1435156,
"positionFill": "DEFAULT",
"id": "7576",
"type": "MARKET_ORDER",
"stopLossOnFill": {
"timeInForce": "GTC",
"price": "1.07000"
},
"reason": "CLIENT_ORDER"
},
"relatedTransactionIDs": [
"7576",
"7577"
]
}
###Markdown
Let's analyze that. We see an *orderCancelTransaction* with *reason* **STOP_LOSS_ON_FILL_LOSS**. So the order was not placed? Well, it was placed and cancelled right away. The market price of EUR_USD is, at the moment of this writing, 1.058, so the stopLoss order at 1.07 makes no sense. The status_code of 201 is as the specs say: http://developer.oanda.com/rest-live-v20/order-ep/ .Let's change the stopLoss level to below the current price and place the order once again.
###Code
mktOrder = MarketOrderRequest(instrument="EUR_USD",
units=10000,
takeProfitOnFill=TakeProfitDetails(price=1.10).data,
stopLossOnFill=StopLossDetails(price=1.05).data
).data
r = orders.OrderCreate(accountID=accountID, data=mktOrder)
rv = client.request(r)
print("Response: {}\n{}".format(r.status_code, json.dumps(rv, indent=2)))
###Output
Response: 201
{
"orderFillTransaction": {
"accountBalance": "102107.4442",
"instrument": "EUR_USD",
"batchID": "7578",
"pl": "0.0000",
"accountID": "101-004-1435156-001",
"units": "10000",
"tradeOpened": {
"tradeID": "7579",
"units": "10000"
},
"financing": "0.0000",
"price": "1.05563",
"userID": 1435156,
"orderID": "7578",
"time": "2017-03-09T13:22:13.832587780Z",
"id": "7579",
"type": "ORDER_FILL",
"reason": "MARKET_ORDER"
},
"lastTransactionID": "7581",
"orderCreateTransaction": {
"timeInForce": "FOK",
"instrument": "EUR_USD",
"batchID": "7578",
"accountID": "101-004-1435156-001",
"units": "10000",
"takeProfitOnFill": {
"timeInForce": "GTC",
"price": "1.10000"
},
"time": "2017-03-09T13:22:13.832587780Z",
"userID": 1435156,
"positionFill": "DEFAULT",
"id": "7578",
"type": "MARKET_ORDER",
"stopLossOnFill": {
"timeInForce": "GTC",
"price": "1.05000"
},
"reason": "CLIENT_ORDER"
},
"relatedTransactionIDs": [
"7578",
"7579",
"7580",
"7581"
]
}
###Markdown
We now see an *orderFillTransaction* for 10000 units EUR_USD with *reason* **MARKET_ORDER**.Let's retrieve the orders. We should see the *stopLoss* and *takeProfit* orders as *pending*:
###Code
r = orders.OrdersPending(accountID=accountID)
rv = client.request(r)
print("Response:\n", json.dumps(rv, indent=2))
###Output
Response:
{
"lastTransactionID": "7581",
"orders": [
{
"createTime": "2017-03-09T13:22:13.832587780Z",
"triggerCondition": "TRIGGER_DEFAULT",
"timeInForce": "GTC",
"price": "1.05000",
"tradeID": "7579",
"id": "7581",
"state": "PENDING",
"type": "STOP_LOSS"
},
{
"createTime": "2017-03-09T13:22:13.832587780Z",
"triggerCondition": "TRIGGER_DEFAULT",
"timeInForce": "GTC",
"price": "1.10000",
"tradeID": "7579",
"id": "7580",
"state": "PENDING",
"type": "TAKE_PROFIT"
},
{
"createTime": "2017-03-09T11:45:48.928448770Z",
"triggerCondition": "TRIGGER_DEFAULT",
"timeInForce": "GTC",
"price": "1.05000",
"tradeID": "7572",
"id": "7574",
"state": "PENDING",
"type": "STOP_LOSS"
},
{
"createTime": "2017-03-07T09:18:51.563637768Z",
"triggerCondition": "TRIGGER_DEFAULT",
"timeInForce": "GTC",
"price": "1.05000",
"tradeID": "7562",
"id": "7564",
"state": "PENDING",
"type": "STOP_LOSS"
},
{
"createTime": "2017-03-07T09:08:04.219010730Z",
"triggerCondition": "TRIGGER_DEFAULT",
"timeInForce": "GTC",
"price": "1.05000",
"tradeID": "7558",
"id": "7560",
"state": "PENDING",
"type": "STOP_LOSS"
}
]
}
###Markdown
Depending on the state of your account you should see at least the orders associated with the previously executed market order. The *relatedTransactionIDs* should be in the *orders* output of OrdersPending().Now let's cancel all pending TAKE_PROFIT orders:
###Code
r = orders.OrdersPending(accountID=accountID)
rv = client.request(r)
idsToCancel = [order.get('id') for order in rv['orders'] if order.get('type') == "TAKE_PROFIT"]
for orderID in idsToCancel:
r = orders.OrderCancel(accountID=accountID, orderID=orderID)
rv = client.request(r)
print("Request: {} ... response: {}".format(r, json.dumps(rv, indent=2)))
###Output
Request: v3/accounts/101-004-1435156-001/orders/7580/cancel ... response: {
"orderCancelTransaction": {
"time": "2017-03-09T13:26:07.480994423Z",
"userID": 1435156,
"batchID": "7582",
"orderID": "7580",
"id": "7582",
"type": "ORDER_CANCEL",
"accountID": "101-004-1435156-001",
"reason": "CLIENT_REQUEST"
},
"lastTransactionID": "7582",
"relatedTransactionIDs": [
"7582"
]
}
###Markdown
create a LimitOrder with a *GTD* "good-til-date"Create a LimitOrder and let it expire: *2018-07-02T00:00:00* using *GTD*. Make sure it is in the future when you run this example!
###Code
from oandapyV20.contrib.requests import LimitOrderRequest
# make sure GTD_TIME is in the future
# also make sure the price condition is not met
# and specify GTD_TIME as UTC or local
# GTD_TIME="2018-07-02T00:00:00Z" # UTC
GTD_TIME="2018-07-02T00:00:00"
ordr = LimitOrderRequest(instrument="EUR_USD",
units=10000,
timeInForce="GTD",
gtdTime=GTD_TIME,
price=1.08)
print(json.dumps(ordr.data, indent=4))
r = orders.OrderCreate(accountID=accountID, data=ordr.data)
rv = client.request(r)
print(json.dumps(rv, indent=2))
###Output
{
"order": {
"price": "1.08000",
"timeInForce": "GTD",
"positionFill": "DEFAULT",
"type": "LIMIT",
"instrument": "EUR_USD",
"gtdTime": "2018-07-02T00:00:00",
"units": "10000"
}
}
{
"relatedTransactionIDs": [
"8923"
],
"lastTransactionID": "8923",
"orderCreateTransaction": {
"price": "1.08000",
"triggerCondition": "DEFAULT",
"positionFill": "DEFAULT",
"type": "LIMIT_ORDER",
"requestID": "42440345970496965",
"partialFill": "DEFAULT",
"gtdTime": "2018-07-02T04:00:00.000000000Z",
"batchID": "8923",
"id": "8923",
"userID": 1435156,
"accountID": "101-004-1435156-001",
"timeInForce": "GTD",
"reason": "CLIENT_ORDER",
"instrument": "EUR_USD",
"time": "2018-06-10T12:06:30.259079220Z",
"units": "10000"
}
}
###Markdown
Request the pending orders
###Code
r = orders.OrdersPending(accountID=accountID)
rv = client.request(r)
print(json.dumps(rv, indent=2))
###Output
{
"orders": [
{
"price": "1.08000",
"triggerCondition": "DEFAULT",
"state": "PENDING",
"positionFill": "DEFAULT",
"partialFill": "DEFAULT_FILL",
"gtdTime": "2018-07-02T04:00:00.000000000Z",
"id": "8923",
"timeInForce": "GTD",
"type": "LIMIT",
"instrument": "EUR_USD",
"createTime": "2018-06-10T12:06:30.259079220Z",
"units": "10000"
}
],
"lastTransactionID": "8923"
}
###Markdown
Cancel the GTD orderFetch the *orderID* from the *pending orders* and cancel the order.
###Code
r = orders.OrderCancel(accountID=accountID, orderID=8923)
rv = client.request(r)
print(json.dumps(rv, indent=2))
###Output
{
"relatedTransactionIDs": [
"8924"
],
"orderCancelTransaction": {
"accountID": "101-004-1435156-001",
"time": "2018-06-10T12:07:35.453416669Z",
"orderID": "8923",
"reason": "CLIENT_REQUEST",
"requestID": "42440346243149289",
"type": "ORDER_CANCEL",
"batchID": "8924",
"id": "8924",
"userID": 1435156
},
"lastTransactionID": "8924"
}
###Markdown
Request pending orders once again ... the 8923 should be gone
###Code
r = orders.OrdersPending(accountID=accountID)
rv = client.request(r)
print(json.dumps(rv, indent=2))
###Output
{
"orders": [],
"lastTransactionID": "8924"
}
###Markdown
[index](./index.ipynb) | [accounts](./accounts.ipynb) | [orders](./orders.ipynb) | [trades](./trades.ipynb) | [positions](./positions.ipynb) | [historical](./historical.ipynb) | [streams](./streams.ipynb) | [errors](./exceptions.ipynb) OrdersThis notebook provides an example of + a MarketOrder + a simplyfied way for a MarketOrder by using contrib.requests.MarketOrderRequest + a LimitOrder with an expiry datetime by using *GTD* and contrib.requests.LimitOrderRequest + canceling a GTD order create a marketorder request with a TakeProfit and a StopLoss order when it gets filled.
###Code
import json
import oandapyV20
import oandapyV20.endpoints.orders as orders
from exampleauth import exampleauth
accountID, access_token = exampleauth.exampleAuth()
client = oandapyV20.API(access_token=access_token)
# create a market order to enter a LONG position 10000 EUR_USD, stopLoss @1.07 takeProfit @1.10 ( current: 1.055)
# according to the docs at developer.oanda.com the requestbody looks like:
mktOrder = {
"order": {
"timeInForce": "FOK", # Fill-or-kill
"instrument": "EUR_USD",
"positionFill": "DEFAULT",
"type": "MARKET",
"units": 10000, # as integer
"takeProfitOnFill": {
"timeInForce": "GTC", # Good-till-cancelled
"price": 1.10 # as float
},
"stopLossOnFill": {
"timeInForce": "GTC",
"price": "1.07" # as string
}
}
}
#try:
## rv = client.request(r)
#except V20Error as err:
# print("V20Error occurred: {}".format(err))
r = orders.OrderCreate(accountID=accountID, data=mktOrder)
print("Request: ", r)
print("MarketOrder specs: ", json.dumps(mktOrder, indent=2))
###Output
Request: v3/accounts/101-001-14065046-001/orders
MarketOrder specs: {
"order": {
"timeInForce": "FOK",
"instrument": "EUR_USD",
"positionFill": "DEFAULT",
"type": "MARKET",
"units": 10000,
"takeProfitOnFill": {
"timeInForce": "GTC",
"price": 1.1
},
"stopLossOnFill": {
"timeInForce": "GTC",
"price": "1.07"
}
}
}
###Markdown
Well that looks fine, but constructing orderbodies that way is not really what we want. Types are not checked for instance and all the defaults need to be supplied.This kind of datastructures can become complex, are not easy to read or construct and are prone to errors. Types and definitionsOanda uses several *types* and *definitions* througout their documentation. These types are covered by the *oandapyV20.types* package and the definitions by the *oandapyV20.definitions* package. Contrib.requestsThe *oandapyV20.contrib.requests* package offers classes providing an easy way to construct the data forthe *data* parameter of the *OrderCreate* endpoint or the *TradeCRCDO* (Create/Replace/Cancel Dependent Orders). The *oandapyV20.contrib.requests* package makes use of the *oandapyV20.types* and *oandapyV20.definitions*.Let's improve the previous example by making use of *oandapyV20.contrib.requests*:
###Code
import json
import oandapyV20
import oandapyV20.endpoints.orders as orders
from oandapyV20.contrib.requests import (
MarketOrderRequest,
TakeProfitDetails,
StopLossDetails)
from exampleauth import exampleauth
accountID, access_token = exampleauth.exampleAuth()
client = oandapyV20.API(access_token=access_token)
# create a market order to enter a LONG position 10000 EUR_USD
mktOrder = MarketOrderRequest(instrument="EUR_USD",
units=10000,
takeProfitOnFill=TakeProfitDetails(price=1.10).data,
stopLossOnFill=StopLossDetails(price=1.07).data
).data
r = orders.OrderCreate(accountID=accountID, data=mktOrder)
print("Request: ", r)
print("MarketOrder specs: ", json.dumps(mktOrder, indent=2))
###Output
Request: v3/accounts/101-004-1435156-001/orders
MarketOrder specs: {
"order": {
"timeInForce": "FOK",
"instrument": "EUR_USD",
"positionFill": "DEFAULT",
"type": "MARKET",
"units": "10000",
"takeProfitOnFill": {
"timeInForce": "GTC",
"price": "1.10000"
},
"stopLossOnFill": {
"timeInForce": "GTC",
"price": "1.07000"
}
}
}
###Markdown
As you can see, the specs contain price values that were converted to strings and the defaults *positionFill* and *timeInForce* were added. Using *contrib.requests* makes it very easy to construct the orderdata body for order requests. Parameters for those requests are also validated.Next step, place the order:
###Code
rv = client.request(r)
print("Response: {}\n{}".format(r.status_code, json.dumps(rv, indent=2)))
###Output
Response: 201
{
"orderCancelTransaction": {
"time": "2017-03-09T13:17:59.319422181Z",
"userID": 1435156,
"batchID": "7576",
"orderID": "7576",
"id": "7577",
"type": "ORDER_CANCEL",
"accountID": "101-004-1435156-001",
"reason": "STOP_LOSS_ON_FILL_LOSS"
},
"lastTransactionID": "7577",
"orderCreateTransaction": {
"timeInForce": "FOK",
"instrument": "EUR_USD",
"batchID": "7576",
"accountID": "101-004-1435156-001",
"units": "10000",
"takeProfitOnFill": {
"timeInForce": "GTC",
"price": "1.10000"
},
"time": "2017-03-09T13:17:59.319422181Z",
"userID": 1435156,
"positionFill": "DEFAULT",
"id": "7576",
"type": "MARKET_ORDER",
"stopLossOnFill": {
"timeInForce": "GTC",
"price": "1.07000"
},
"reason": "CLIENT_ORDER"
},
"relatedTransactionIDs": [
"7576",
"7577"
]
}
###Markdown
Lets analyze that. We see an *orderCancelTransaction* and *reason* **STOP_LOSS_ON_FILL_LOSS**. So the order was not placed ? Well it was placed and cancelled right away. The marketprice of EUR_USD is at the moment of this writing 1.058. So the stopLoss order at 1.07 makes no sense. The status_code of 201 is as the specs say: http://developer.oanda.com/rest-live-v20/order-ep/ .Lets change the stopLoss level below the current price and place the order once again.
###Code
mktOrder = MarketOrderRequest(instrument="EUR_USD",
units=10000,
takeProfitOnFill=TakeProfitDetails(price=1.10).data,
stopLossOnFill=StopLossDetails(price=1.05).data
).data
r = orders.OrderCreate(accountID=accountID, data=mktOrder)
rv = client.request(r)
print("Response: {}\n{}".format(r.status_code, json.dumps(rv, indent=2)))
###Output
Response: 201
{
"orderFillTransaction": {
"accountBalance": "102107.4442",
"instrument": "EUR_USD",
"batchID": "7578",
"pl": "0.0000",
"accountID": "101-004-1435156-001",
"units": "10000",
"tradeOpened": {
"tradeID": "7579",
"units": "10000"
},
"financing": "0.0000",
"price": "1.05563",
"userID": 1435156,
"orderID": "7578",
"time": "2017-03-09T13:22:13.832587780Z",
"id": "7579",
"type": "ORDER_FILL",
"reason": "MARKET_ORDER"
},
"lastTransactionID": "7581",
"orderCreateTransaction": {
"timeInForce": "FOK",
"instrument": "EUR_USD",
"batchID": "7578",
"accountID": "101-004-1435156-001",
"units": "10000",
"takeProfitOnFill": {
"timeInForce": "GTC",
"price": "1.10000"
},
"time": "2017-03-09T13:22:13.832587780Z",
"userID": 1435156,
"positionFill": "DEFAULT",
"id": "7578",
"type": "MARKET_ORDER",
"stopLossOnFill": {
"timeInForce": "GTC",
"price": "1.05000"
},
"reason": "CLIENT_ORDER"
},
"relatedTransactionIDs": [
"7578",
"7579",
"7580",
"7581"
]
}
###Markdown
We now see an *orderFillTransaction* for 10000 units EUR_USD with *reason* **MARKET_ORDER**.Lets retrieve the orders. We should see the *stopLoss* and *takeProfit* orders as *pending*:
###Code
r = orders.OrdersPending(accountID=accountID)
rv = client.request(r)
print("Response:\n", json.dumps(rv, indent=2))
###Output
Response:
{
"lastTransactionID": "7581",
"orders": [
{
"createTime": "2017-03-09T13:22:13.832587780Z",
"triggerCondition": "TRIGGER_DEFAULT",
"timeInForce": "GTC",
"price": "1.05000",
"tradeID": "7579",
"id": "7581",
"state": "PENDING",
"type": "STOP_LOSS"
},
{
"createTime": "2017-03-09T13:22:13.832587780Z",
"triggerCondition": "TRIGGER_DEFAULT",
"timeInForce": "GTC",
"price": "1.10000",
"tradeID": "7579",
"id": "7580",
"state": "PENDING",
"type": "TAKE_PROFIT"
},
{
"createTime": "2017-03-09T11:45:48.928448770Z",
"triggerCondition": "TRIGGER_DEFAULT",
"timeInForce": "GTC",
"price": "1.05000",
"tradeID": "7572",
"id": "7574",
"state": "PENDING",
"type": "STOP_LOSS"
},
{
"createTime": "2017-03-07T09:18:51.563637768Z",
"triggerCondition": "TRIGGER_DEFAULT",
"timeInForce": "GTC",
"price": "1.05000",
"tradeID": "7562",
"id": "7564",
"state": "PENDING",
"type": "STOP_LOSS"
},
{
"createTime": "2017-03-07T09:08:04.219010730Z",
"triggerCondition": "TRIGGER_DEFAULT",
"timeInForce": "GTC",
"price": "1.05000",
"tradeID": "7558",
"id": "7560",
"state": "PENDING",
"type": "STOP_LOSS"
}
]
}
###Markdown
Depending on the state of your account you should see at least the orders associated with the previously executed marketorder. The *relatedTransactionIDs* should be in the *orders* output of OrdersPending().Now lets cancel all pending TAKE_PROFIT orders:
###Code
r = orders.OrdersPending(accountID=accountID)
rv = client.request(r)
idsToCancel = [order.get('id') for order in rv['orders'] if order.get('type') == "TAKE_PROFIT"]
for orderID in idsToCancel:
r = orders.OrderCancel(accountID=accountID, orderID=orderID)
rv = client.request(r)
print("Request: {} ... response: {}".format(r, json.dumps(rv, indent=2)))
###Output
Request: v3/accounts/101-004-1435156-001/orders/7580/cancel ... response: {
"orderCancelTransaction": {
"time": "2017-03-09T13:26:07.480994423Z",
"userID": 1435156,
"batchID": "7582",
"orderID": "7580",
"id": "7582",
"type": "ORDER_CANCEL",
"accountID": "101-004-1435156-001",
"reason": "CLIENT_REQUEST"
},
"lastTransactionID": "7582",
"relatedTransactionIDs": [
"7582"
]
}
###Markdown
create a LimitOrder with a *GTD* "good-til-date"Create a LimitOrder and let it expire at *2018-07-02T00:00:00* using *GTD*. Make sure it is in the future when you run this example!
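One way to keep this example from going stale is to compute the expiry at request time instead of hard-coding it. A small sketch using only the standard library; the 30-day horizon is an arbitrary choice:

```python
from datetime import datetime, timedelta, timezone

def future_gtd(days_ahead=30):
    """Return an RFC3339 UTC timestamp days_ahead days from now, usable as gtdTime."""
    expiry = datetime.now(timezone.utc) + timedelta(days=days_ahead)
    return expiry.strftime("%Y-%m-%dT%H:%M:%SZ")

GTD_TIME = future_gtd()
print(GTD_TIME)
```

The trailing "Z" marks the timestamp as UTC, matching the commented-out variant in the cell below.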
###Code
from oandapyV20.contrib.requests import LimitOrderRequest
# make sure GTD_TIME is in the future
# also make sure the price condition is not met
# and specify GTD_TIME as UTC or local
# GTD_TIME="2018-07-02T00:00:00Z" # UTC
GTD_TIME="2018-07-02T00:00:00"
ordr = LimitOrderRequest(instrument="EUR_USD",
units=10000,
timeInForce="GTD",
gtdTime=GTD_TIME,
price=1.08)
print(json.dumps(ordr.data, indent=4))
r = orders.OrderCreate(accountID=accountID, data=ordr.data)
rv = client.request(r)
print(json.dumps(rv, indent=2))
###Output
{
"order": {
"price": "1.08000",
"timeInForce": "GTD",
"positionFill": "DEFAULT",
"type": "LIMIT",
"instrument": "EUR_USD",
"gtdTime": "2018-07-02T00:00:00",
"units": "10000"
}
}
{
"relatedTransactionIDs": [
"8923"
],
"lastTransactionID": "8923",
"orderCreateTransaction": {
"price": "1.08000",
"triggerCondition": "DEFAULT",
"positionFill": "DEFAULT",
"type": "LIMIT_ORDER",
"requestID": "42440345970496965",
"partialFill": "DEFAULT",
"gtdTime": "2018-07-02T04:00:00.000000000Z",
"batchID": "8923",
"id": "8923",
"userID": 1435156,
"accountID": "101-004-1435156-001",
"timeInForce": "GTD",
"reason": "CLIENT_ORDER",
"instrument": "EUR_USD",
"time": "2018-06-10T12:06:30.259079220Z",
"units": "10000"
}
}
###Markdown
Request the pending orders
###Code
r = orders.OrdersPending(accountID=accountID)
rv = client.request(r)
print(json.dumps(rv, indent=2))
###Output
{
"orders": [
{
"price": "1.08000",
"triggerCondition": "DEFAULT",
"state": "PENDING",
"positionFill": "DEFAULT",
"partialFill": "DEFAULT_FILL",
"gtdTime": "2018-07-02T04:00:00.000000000Z",
"id": "8923",
"timeInForce": "GTD",
"type": "LIMIT",
"instrument": "EUR_USD",
"createTime": "2018-06-10T12:06:30.259079220Z",
"units": "10000"
}
],
"lastTransactionID": "8923"
}
###Markdown
Cancel the GTD orderFetch the *orderID* from the *pending orders* and cancel the order.
###Code
r = orders.OrderCancel(accountID=accountID, orderID=8923)
rv = client.request(r)
print(json.dumps(rv, indent=2))
###Output
{
"relatedTransactionIDs": [
"8924"
],
"orderCancelTransaction": {
"accountID": "101-004-1435156-001",
"time": "2018-06-10T12:07:35.453416669Z",
"orderID": "8923",
"reason": "CLIENT_REQUEST",
"requestID": "42440346243149289",
"type": "ORDER_CANCEL",
"batchID": "8924",
"id": "8924",
"userID": 1435156
},
"lastTransactionID": "8924"
}
###Markdown
Request pending orders once again ... order 8923 should be gone
###Code
r = orders.OrdersPending(accountID=accountID)
rv = client.request(r)
print(json.dumps(rv, indent=2))
###Output
{
"orders": [],
"lastTransactionID": "8924"
}
###Markdown
[index](./index.ipynb) | [accounts](./accounts.ipynb) | [orders](./orders.ipynb) | [trades](./trades.ipynb) | [positions](./positions.ipynb) | [historical](./historical.ipynb) | [streams](./streams.ipynb) | [errors](./exceptions.ipynb) OrdersThis notebook provides an example of + a MarketOrder + a simplified way for a MarketOrder by using contrib.requests.MarketOrderRequest + a LimitOrder with an expiry datetime by using *GTD* and contrib.requests.LimitOrderRequest + canceling a GTD order create a market order request with a TakeProfit and a StopLoss order when it gets filled.
###Code
import json
import oandapyV20
import oandapyV20.endpoints.orders as orders
from authenticate import Authenticate as auth
accountID, access_token = auth('Demo', 'Primary')
client = oandapyV20.API(access_token=access_token)
# create a market order to enter a LONG position: 10000 EUR_USD, stopLoss @1.07, takeProfit @1.10 (current: 1.055)
# according to the docs at developer.oanda.com the request body looks like:
mktOrder = {
"order": {
"timeInForce": "FOK", # Fill-or-kill
"instrument": "EUR_USD",
"positionFill": "DEFAULT",
"type": "MARKET",
"units": 10000, # as integer
"takeProfitOnFill": {
"timeInForce": "GTC", # Good-till-cancelled
"price": 1.10 # as float
},
"stopLossOnFill": {
"timeInForce": "GTC",
"price": "1.07" # as string
}
}
}
r = orders.OrderCreate(accountID=accountID, data=mktOrder)
print("Request: ", r)
print("MarketOrder specs: ", json.dumps(mktOrder, indent=2))
###Output
Request: v3/accounts/101-004-1435156-001/orders
MarketOrder specs: {
"order": {
"timeInForce": "FOK",
"instrument": "EUR_USD",
"stopLossOnFill": {
"timeInForce": "GTC",
"price": "1.07"
},
"positionFill": "DEFAULT",
"units": 10000,
"takeProfitOnFill": {
"timeInForce": "GTC",
"price": 1.1
},
"type": "MARKET"
}
}
###Markdown
Well that looks fine, but constructing order bodies that way is not really what we want. Types are not checked, for instance, and all the defaults need to be supplied. Data structures like this can become complex, are not easy to read or construct, and are prone to errors. Types and definitionsOanda uses several *types* and *definitions* throughout their documentation. These types are covered by the *oandapyV20.types* package and the definitions by the *oandapyV20.definitions* package. Contrib.requestsThe *oandapyV20.contrib.requests* package offers classes providing an easy way to construct the data for the *data* parameter of the *OrderCreate* endpoint or the *TradeCRCDO* (Create/Replace/Cancel Dependent Orders). The *oandapyV20.contrib.requests* package makes use of the *oandapyV20.types* and *oandapyV20.definitions*. Let's improve the previous example by making use of *oandapyV20.contrib.requests*:
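To make the motivation concrete, here is a hypothetical sketch (not the oandapyV20 API; the function name and structure are invented for illustration) of the kind of work such a request class takes off your hands: filling defaults, converting prices to the string format the v20 API expects, and rejecting invalid input.

```python
# Hypothetical sketch -- NOT the oandapyV20 API -- of what a request builder
# like MarketOrderRequest has to do: fill defaults, normalize types, validate.
def build_market_order(instrument, units, take_profit=None, stop_loss=None):
    if not isinstance(units, int) or units == 0:
        raise ValueError("units must be a non-zero integer")
    order = {
        "type": "MARKET",
        "instrument": instrument,
        "units": str(units),          # the v20 API expects units as a string
        "timeInForce": "FOK",         # default supplied for the caller
        "positionFill": "DEFAULT",    # default supplied for the caller
    }
    if take_profit is not None:
        order["takeProfitOnFill"] = {"price": "{:.5f}".format(take_profit),
                                     "timeInForce": "GTC"}
    if stop_loss is not None:
        order["stopLossOnFill"] = {"price": "{:.5f}".format(stop_loss),
                                   "timeInForce": "GTC"}
    return {"order": order}

body = build_market_order("EUR_USD", 10000, take_profit=1.10, stop_loss=1.05)
print(body["order"]["takeProfitOnFill"]["price"])  # 1.10000
```

With oandapyV20 itself, `MarketOrderRequest` plays this role, with real validation backed by the *types* and *definitions* packages.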
###Code
import json
import oandapyV20
import oandapyV20.endpoints as endpoints
from oandapyV20.contrib.requests import (
MarketOrderRequest,
TakeProfitDetails,
StopLossDetails)
from authenticate import authorize
accountID, access_token = authorize('Demo', 'Primary')
client = oandapyV20.API(access_token=access_token)
# create a market order to enter a LONG position: 1 unit of AUD_USD with a stop loss at 1.00
mktOrder = MarketOrderRequest(instrument='AUD_USD', units=1, stopLossOnFill=StopLossDetails(price=1).data).data
mktsetup = endpoints.orders.OrderCreate(accountID=accountID, data=mktOrder)
place = client.request(mktsetup)
print(json.dumps(place, indent=2))
###Output
{
"orderCreateTransaction": {
"id": "738",
"accountID": "101-001-17385496-001",
"userID": 17385496,
"batchID": "738",
"requestID": "24851611458748511",
"time": "2021-08-28T02:43:16.822236103Z",
"type": "MARKET_ORDER",
"instrument": "AUD_USD",
"units": "1",
"timeInForce": "FOK",
"positionFill": "DEFAULT",
"stopLossOnFill": {
"price": "1.00000",
"timeInForce": "GTC",
"triggerMode": "TOP_OF_BOOK"
},
"reason": "CLIENT_ORDER"
},
"orderCancelTransaction": {
"id": "739",
"accountID": "101-001-17385496-001",
"userID": 17385496,
"batchID": "738",
"requestID": "24851611458748511",
"time": "2021-08-28T02:43:16.822236103Z",
"type": "ORDER_CANCEL",
"orderID": "738",
"reason": "MARKET_HALTED"
},
"relatedTransactionIDs": [
"738",
"739"
],
"lastTransactionID": "739"
}
###Markdown
As you can see, the specs contain price values that were converted to strings and the defaults *positionFill* and *timeInForce* were added. Using *contrib.requests* makes it very easy to construct the order data body for order requests. Parameters for those requests are also validated. Next step, place the order: rv = client.request(r); print("Response: {}\n{}".format(r.status_code, json.dumps(rv, indent=2))) Let's analyze that. We see an *orderCancelTransaction* and *reason* **STOP_LOSS_ON_FILL_LOSS**. So the order was not placed? Well, it was placed and cancelled right away. The market price of EUR_USD is, at the moment of this writing, 1.058, so the stopLoss order at 1.07 makes no sense. The status_code of 201 is as the specs say: http://developer.oanda.com/rest-live-v20/order-ep/ . Let's move the stopLoss level below the current price and place the order once again.
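A simple client-side check can catch this class of mistake before any request is sent. The helper below is a hypothetical sketch (not part of oandapyV20): for a long position, the stop loss must sit below the current market price and the take profit above it.

```python
# Hypothetical sanity check (not part of oandapyV20): for a LONG position the
# stop loss must be below the current price and the take profit above it,
# otherwise the server cancels the fill right after placing the order.
def validate_long_levels(market_price, stop_loss=None, take_profit=None):
    errors = []
    if stop_loss is not None and stop_loss >= market_price:
        errors.append("stopLoss %.5f not below market %.5f" % (stop_loss, market_price))
    if take_profit is not None and take_profit <= market_price:
        errors.append("takeProfit %.5f not above market %.5f" % (take_profit, market_price))
    return errors

print(validate_long_levels(1.058, stop_loss=1.07, take_profit=1.10))  # the 1.07 stop loss is flagged
print(validate_long_levels(1.058, stop_loss=1.05, take_profit=1.10))  # []
```

A short position would need the mirrored checks; this sketch only covers the long case discussed above.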
###Code
mktOrder = MarketOrderRequest(instrument="EUR_USD",
units=10000,
takeProfitOnFill=TakeProfitDetails(price=1.10).data,
stopLossOnFill=StopLossDetails(price=1.05).data
).data
r = orders.OrderCreate(accountID=accountID, data=mktOrder)
rv = client.request(r)
print("Response: {}\n{}".format(r.status_code, json.dumps(rv, indent=2)))
###Output
Response: 201
{
"orderFillTransaction": {
"accountBalance": "102107.4442",
"instrument": "EUR_USD",
"batchID": "7578",
"pl": "0.0000",
"accountID": "101-004-1435156-001",
"units": "10000",
"tradeOpened": {
"tradeID": "7579",
"units": "10000"
},
"financing": "0.0000",
"price": "1.05563",
"userID": 1435156,
"orderID": "7578",
"time": "2017-03-09T13:22:13.832587780Z",
"id": "7579",
"type": "ORDER_FILL",
"reason": "MARKET_ORDER"
},
"lastTransactionID": "7581",
"orderCreateTransaction": {
"timeInForce": "FOK",
"instrument": "EUR_USD",
"batchID": "7578",
"accountID": "101-004-1435156-001",
"units": "10000",
"takeProfitOnFill": {
"timeInForce": "GTC",
"price": "1.10000"
},
"time": "2017-03-09T13:22:13.832587780Z",
"userID": 1435156,
"positionFill": "DEFAULT",
"id": "7578",
"type": "MARKET_ORDER",
"stopLossOnFill": {
"timeInForce": "GTC",
"price": "1.05000"
},
"reason": "CLIENT_ORDER"
},
"relatedTransactionIDs": [
"7578",
"7579",
"7580",
"7581"
]
}
###Markdown
We now see an *orderFillTransaction* for 10000 units EUR_USD with *reason* **MARKET_ORDER**. Let's retrieve the orders. We should see the *stopLoss* and *takeProfit* orders as *pending*:
###Code
r = orders.OrdersPending(accountID=accountID)
rv = client.request(r)
print("Response:\n", json.dumps(rv, indent=2))
###Output
Response:
{
"lastTransactionID": "7581",
"orders": [
{
"createTime": "2017-03-09T13:22:13.832587780Z",
"triggerCondition": "TRIGGER_DEFAULT",
"timeInForce": "GTC",
"price": "1.05000",
"tradeID": "7579",
"id": "7581",
"state": "PENDING",
"type": "STOP_LOSS"
},
{
"createTime": "2017-03-09T13:22:13.832587780Z",
"triggerCondition": "TRIGGER_DEFAULT",
"timeInForce": "GTC",
"price": "1.10000",
"tradeID": "7579",
"id": "7580",
"state": "PENDING",
"type": "TAKE_PROFIT"
},
{
"createTime": "2017-03-09T11:45:48.928448770Z",
"triggerCondition": "TRIGGER_DEFAULT",
"timeInForce": "GTC",
"price": "1.05000",
"tradeID": "7572",
"id": "7574",
"state": "PENDING",
"type": "STOP_LOSS"
},
{
"createTime": "2017-03-07T09:18:51.563637768Z",
"triggerCondition": "TRIGGER_DEFAULT",
"timeInForce": "GTC",
"price": "1.05000",
"tradeID": "7562",
"id": "7564",
"state": "PENDING",
"type": "STOP_LOSS"
},
{
"createTime": "2017-03-07T09:08:04.219010730Z",
"triggerCondition": "TRIGGER_DEFAULT",
"timeInForce": "GTC",
"price": "1.05000",
"tradeID": "7558",
"id": "7560",
"state": "PENDING",
"type": "STOP_LOSS"
}
]
}
###Markdown
Depending on the state of your account you should see at least the orders associated with the previously executed market order. The *relatedTransactionIDs* should be in the *orders* output of OrdersPending(). Now let's cancel all pending TAKE_PROFIT orders:
###Code
r = orders.OrdersPending(accountID=accountID)
rv = client.request(r)
idsToCancel = [order.get('id') for order in rv['orders'] if order.get('type') == "TAKE_PROFIT"]
for orderID in idsToCancel:
r = orders.OrderCancel(accountID=accountID, orderID=orderID)
rv = client.request(r)
print("Request: {} ... response: {}".format(r, json.dumps(rv, indent=2)))
###Output
Request: v3/accounts/101-004-1435156-001/orders/7580/cancel ... response: {
"orderCancelTransaction": {
"time": "2017-03-09T13:26:07.480994423Z",
"userID": 1435156,
"batchID": "7582",
"orderID": "7580",
"id": "7582",
"type": "ORDER_CANCEL",
"accountID": "101-004-1435156-001",
"reason": "CLIENT_REQUEST"
},
"lastTransactionID": "7582",
"relatedTransactionIDs": [
"7582"
]
}
###Markdown
create a LimitOrder with a *GTD* "good-til-date"Create a LimitOrder and let it expire at *2018-07-02T00:00:00* using *GTD*. Make sure it is in the future when you run this example!
###Code
from oandapyV20.contrib.requests import LimitOrderRequest
# make sure GTD_TIME is in the future
# also make sure the price condition is not met
# and specify GTD_TIME as UTC or local
# GTD_TIME="2018-07-02T00:00:00Z" # UTC
GTD_TIME="2018-07-02T00:00:00"
ordr = LimitOrderRequest(instrument="EUR_USD",
units=10000,
timeInForce="GTD",
gtdTime=GTD_TIME,
price=1.08)
print(json.dumps(ordr.data, indent=4))
r = orders.OrderCreate(accountID=accountID, data=ordr.data)
rv = client.request(r)
print(json.dumps(rv, indent=2))
###Output
{
"order": {
"price": "1.08000",
"timeInForce": "GTD",
"positionFill": "DEFAULT",
"type": "LIMIT",
"instrument": "EUR_USD",
"gtdTime": "2018-07-02T00:00:00",
"units": "10000"
}
}
{
"relatedTransactionIDs": [
"8923"
],
"lastTransactionID": "8923",
"orderCreateTransaction": {
"price": "1.08000",
"triggerCondition": "DEFAULT",
"positionFill": "DEFAULT",
"type": "LIMIT_ORDER",
"requestID": "42440345970496965",
"partialFill": "DEFAULT",
"gtdTime": "2018-07-02T04:00:00.000000000Z",
"batchID": "8923",
"id": "8923",
"userID": 1435156,
"accountID": "101-004-1435156-001",
"timeInForce": "GTD",
"reason": "CLIENT_ORDER",
"instrument": "EUR_USD",
"time": "2018-06-10T12:06:30.259079220Z",
"units": "10000"
}
}
###Markdown
Request the pending orders
###Code
r = orders.OrdersPending(accountID=accountID)
rv = client.request(r)
print(json.dumps(rv, indent=2))
###Output
{
"orders": [
{
"price": "1.08000",
"triggerCondition": "DEFAULT",
"state": "PENDING",
"positionFill": "DEFAULT",
"partialFill": "DEFAULT_FILL",
"gtdTime": "2018-07-02T04:00:00.000000000Z",
"id": "8923",
"timeInForce": "GTD",
"type": "LIMIT",
"instrument": "EUR_USD",
"createTime": "2018-06-10T12:06:30.259079220Z",
"units": "10000"
}
],
"lastTransactionID": "8923"
}
###Markdown
Cancel the GTD orderFetch the *orderID* from the *pending orders* and cancel the order.
###Code
r = orders.OrderCancel(accountID=accountID, orderID=8923)
rv = client.request(r)
print(json.dumps(rv, indent=2))
###Output
{
"relatedTransactionIDs": [
"8924"
],
"orderCancelTransaction": {
"accountID": "101-004-1435156-001",
"time": "2018-06-10T12:07:35.453416669Z",
"orderID": "8923",
"reason": "CLIENT_REQUEST",
"requestID": "42440346243149289",
"type": "ORDER_CANCEL",
"batchID": "8924",
"id": "8924",
"userID": 1435156
},
"lastTransactionID": "8924"
}
###Markdown
Request pending orders once again ... order 8923 should be gone
###Code
r = orders.OrdersPending(accountID=accountID)
rv = client.request(r)
print(json.dumps(rv, indent=2))
###Output
{
"orders": [],
"lastTransactionID": "8924"
}
|
5_Sequence_Models/week01/Building a Recurrent Neural Network - Step by Step/Building a Recurrent Neural Network - Step by Step - v2(solution).ipynb | ###Markdown
Building your Recurrent Neural Network - Step by StepWelcome to Course 5's first assignment! In this assignment, you will implement your first Recurrent Neural Network in numpy.Recurrent Neural Networks (RNN) are very effective for Natural Language Processing and other sequence tasks because they have "memory". They can read inputs $x^{\langle t \rangle}$ (such as words) one at a time, and remember some information/context through the hidden layer activations that get passed from one time-step to the next. This allows a uni-directional RNN to take information from the past to process later inputs. A bidirectional RNN can take context from both the past and the future. **Notation**:- Superscript $[l]$ denotes an object associated with the $l^{th}$ layer. - Example: $a^{[4]}$ is the $4^{th}$ layer activation. $W^{[5]}$ and $b^{[5]}$ are the $5^{th}$ layer parameters.- Superscript $(i)$ denotes an object associated with the $i^{th}$ example. - Example: $x^{(i)}$ is the $i^{th}$ training example input.- Superscript $\langle t \rangle$ denotes an object at the $t^{th}$ time-step. - Example: $x^{\langle t \rangle}$ is the input x at the $t^{th}$ time-step. $x^{(i)\langle t \rangle}$ is the input at the $t^{th}$ timestep of example $i$. - Subscript $i$ denotes the $i^{th}$ entry of a vector. - Example: $a^{[l]}_i$ denotes the $i^{th}$ entry of the activations in layer $l$.We assume that you are already familiar with `numpy` and/or have completed the previous courses of the specialization. Let's get started! Let's first import all the packages that you will need during this assignment.
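The time-step notation maps directly onto the NumPy array layout used throughout this assignment; a quick illustration with assumed small sizes ($n_x = 3$ features, $m = 10$ examples, $T_x = 4$ time-steps):

```python
import numpy as np

np.random.seed(0)
x = np.random.randn(3, 10, 4)   # x[:, i, t] is x^(i)<t>
xt = x[:, :, 2]                 # every example at time-step t = 2
print(xt.shape)                 # (3, 10)
print(x[:, 5, 2].shape)         # (3,) -- example i = 5 at time-step t = 2
```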
###Code
import numpy as np
from rnn_utils import *
###Output
_____no_output_____
###Markdown
1 - Forward propagation for the basic Recurrent Neural NetworkLater this week, you will generate music using an RNN. The basic RNN that you will implement has the structure below. In this example, $T_x = T_y$. **Figure 1**: Basic RNN model Here's how you can implement an RNN: **Steps**:1. Implement the calculations needed for one time-step of the RNN.2. Implement a loop over $T_x$ time-steps in order to process all the inputs, one at a time. Let's go! 1.1 - RNN cellA Recurrent neural network can be seen as the repetition of a single cell. You are first going to implement the computations for a single time-step. The following figure describes the operations for a single time-step of an RNN cell. **Figure 2**: Basic RNN cell. Takes as input $x^{\langle t \rangle}$ (current input) and $a^{\langle t - 1\rangle}$ (previous hidden state containing information from the past), and outputs $a^{\langle t \rangle}$ which is given to the next RNN cell and also used to predict $y^{\langle t \rangle}$ **Exercise**: Implement the RNN-cell described in Figure (2).**Instructions**:1. Compute the hidden state with tanh activation: $a^{\langle t \rangle} = \tanh(W_{aa} a^{\langle t-1 \rangle} + W_{ax} x^{\langle t \rangle} + b_a)$.2. Using your new hidden state $a^{\langle t \rangle}$, compute the prediction $\hat{y}^{\langle t \rangle} = softmax(W_{ya} a^{\langle t \rangle} + b_y)$. We provided you a function: `softmax`.3. Store $(a^{\langle t \rangle}, a^{\langle t-1 \rangle}, x^{\langle t \rangle}, parameters)$ in cache4. Return $a^{\langle t \rangle}$ , $y^{\langle t \rangle}$ and cacheWe will vectorize over $m$ examples. Thus, $x^{\langle t \rangle}$ will have dimension $(n_x,m)$, and $a^{\langle t \rangle}$ will have dimension $(n_a,m)$.
###Code
# GRADED FUNCTION: rnn_cell_forward
def rnn_cell_forward(xt, a_prev, parameters):
"""
Implements a single forward step of the RNN-cell as described in Figure (2)
Arguments:
xt -- your input data at timestep "t", numpy array of shape (n_x, m).
a_prev -- Hidden state at timestep "t-1", numpy array of shape (n_a, m)
parameters -- python dictionary containing:
Wax -- Weight matrix multiplying the input, numpy array of shape (n_a, n_x)
Waa -- Weight matrix multiplying the hidden state, numpy array of shape (n_a, n_a)
Wya -- Weight matrix relating the hidden-state to the output, numpy array of shape (n_y, n_a)
ba -- Bias, numpy array of shape (n_a, 1)
by -- Bias relating the hidden-state to the output, numpy array of shape (n_y, 1)
Returns:
a_next -- next hidden state, of shape (n_a, m)
yt_pred -- prediction at timestep "t", numpy array of shape (n_y, m)
cache -- tuple of values needed for the backward pass, contains (a_next, a_prev, xt, parameters)
"""
# Retrieve parameters from "parameters"
Wax = parameters["Wax"]
Waa = parameters["Waa"]
Wya = parameters["Wya"]
ba = parameters["ba"]
by = parameters["by"]
### START CODE HERE ### (≈2 lines)
# compute next activation state using the formula given above
a_next = np.tanh(np.dot(Waa, a_prev) + np.dot(Wax, xt) + ba)
# compute output of the current cell using the formula given above
yt_pred = softmax(np.dot(Wya, a_next) + by)
### END CODE HERE ###
# store values you need for backward propagation in cache
cache = (a_next, a_prev, xt, parameters)
return a_next, yt_pred, cache
np.random.seed(1)
xt = np.random.randn(3,10)
a_prev = np.random.randn(5,10)
Waa = np.random.randn(5,5)
Wax = np.random.randn(5,3)
Wya = np.random.randn(2,5)
ba = np.random.randn(5,1)
by = np.random.randn(2,1)
parameters = {"Waa": Waa, "Wax": Wax, "Wya": Wya, "ba": ba, "by": by}
a_next, yt_pred, cache = rnn_cell_forward(xt, a_prev, parameters)
print("a_next[4] = ", a_next[4])
print("a_next.shape = ", a_next.shape)
print("yt_pred[1] =", yt_pred[1])
print("yt_pred.shape = ", yt_pred.shape)
###Output
a_next[4] = [ 0.59584544 0.18141802 0.61311866 0.99808218 0.85016201 0.99980978
-0.18887155 0.99815551 0.6531151 0.82872037]
a_next.shape = (5, 10)
yt_pred[1] = [ 0.9888161 0.01682021 0.21140899 0.36817467 0.98988387 0.88945212
0.36920224 0.9966312 0.9982559 0.17746526]
yt_pred.shape = (2, 10)
###Markdown
**Expected Output**: **a_next[4]**: [ 0.59584544 0.18141802 0.61311866 0.99808218 0.85016201 0.99980978 -0.18887155 0.99815551 0.6531151 0.82872037] **a_next.shape**: (5, 10) **yt[1]**: [ 0.9888161 0.01682021 0.21140899 0.36817467 0.98988387 0.88945212 0.36920224 0.9966312 0.9982559 0.17746526] **yt.shape**: (2, 10) 1.2 - RNN forward pass You can see an RNN as the repetition of the cell you've just built. If your input sequence of data is carried over 10 time steps, then you will copy the RNN cell 10 times. Each cell takes as input the hidden state from the previous cell ($a^{\langle t-1 \rangle}$) and the current time-step's input data ($x^{\langle t \rangle}$). It outputs a hidden state ($a^{\langle t \rangle}$) and a prediction ($y^{\langle t \rangle}$) for this time-step. **Figure 3**: Basic RNN. The input sequence $x = (x^{\langle 1 \rangle}, x^{\langle 2 \rangle}, ..., x^{\langle T_x \rangle})$ is carried over $T_x$ time steps. The network outputs $y = (y^{\langle 1 \rangle}, y^{\langle 2 \rangle}, ..., y^{\langle T_x \rangle})$. **Exercise**: Code the forward propagation of the RNN described in Figure (3).**Instructions**:1. Create a vector of zeros ($a$) that will store all the hidden states computed by the RNN.2. Initialize the "next" hidden state as $a_0$ (initial hidden state).3. Start looping over each time step, your incremental index is $t$ : - Update the "next" hidden state and the cache by running `rnn_cell_forward` - Store the "next" hidden state in $a$ ($t^{th}$ position) - Store the prediction in y - Add the cache to the list of caches4. Return $a$, $y$ and caches
###Code
# GRADED FUNCTION: rnn_forward
def rnn_forward(x, a0, parameters):
"""
Implement the forward propagation of the recurrent neural network described in Figure (3).
Arguments:
x -- Input data for every time-step, of shape (n_x, m, T_x).
a0 -- Initial hidden state, of shape (n_a, m)
parameters -- python dictionary containing:
Waa -- Weight matrix multiplying the hidden state, numpy array of shape (n_a, n_a)
Wax -- Weight matrix multiplying the input, numpy array of shape (n_a, n_x)
Wya -- Weight matrix relating the hidden-state to the output, numpy array of shape (n_y, n_a)
ba -- Bias numpy array of shape (n_a, 1)
by -- Bias relating the hidden-state to the output, numpy array of shape (n_y, 1)
Returns:
a -- Hidden states for every time-step, numpy array of shape (n_a, m, T_x)
y_pred -- Predictions for every time-step, numpy array of shape (n_y, m, T_x)
caches -- tuple of values needed for the backward pass, contains (list of caches, x)
"""
# Initialize "caches" which will contain the list of all caches
caches = []
# Retrieve dimensions from shapes of x and Wy
n_x, m, T_x = x.shape
n_y, n_a = parameters["Wya"].shape
### START CODE HERE ###
# initialize "a" and "y" with zeros (≈2 lines)
a = np.zeros([n_a, m, T_x])
y_pred = np.zeros([n_y, m, T_x])
# Initialize a_next (≈1 line)
a_next = a0
# loop over all time-steps
for t in range(T_x):
# Update next hidden state, compute the prediction, get the cache (≈1 line)
a_next, yt_pred, cache = rnn_cell_forward(x[:, :, t], a_next, parameters)
# Save the value of the new "next" hidden state in a (≈1 line)
a[:,:,t] = a_next
# Save the value of the prediction in y (≈1 line)
y_pred[:,:,t] = yt_pred
# Append "cache" to "caches" (≈1 line)
caches.append(cache)
### END CODE HERE ###
# store values needed for backward propagation in cache
caches = (caches, x)
return a, y_pred, caches
np.random.seed(1)
x = np.random.randn(3,10,4)
a0 = np.random.randn(5,10)
Waa = np.random.randn(5,5)
Wax = np.random.randn(5,3)
Wya = np.random.randn(2,5)
ba = np.random.randn(5,1)
by = np.random.randn(2,1)
parameters = {"Waa": Waa, "Wax": Wax, "Wya": Wya, "ba": ba, "by": by}
a, y_pred, caches = rnn_forward(x, a0, parameters)
print("a[4][1] = ", a[4][1])
print("a.shape = ", a.shape)
print("y_pred[1][3] =", y_pred[1][3])
print("y_pred.shape = ", y_pred.shape)
print("caches[1][1][3] =", caches[1][1][3])
print("len(caches) = ", len(caches))
###Output
a[4][1] =  [-0.99999375  0.77911235 -0.99861469 -0.99833267]
a.shape =  (5, 10, 4)
y_pred[1][3] = [ 0.79560373  0.86224861  0.11118257  0.81515947]
y_pred.shape =  (2, 10, 4)
caches[1][1][3] = [-1.1425182 -0.34934272 -0.20889423 0.58662319]
len(caches) = 2
###Markdown
**Expected Output**: **a[4][1]**: [-0.99999375 0.77911235 -0.99861469 -0.99833267] **a.shape**: (5, 10, 4) **y[1][3]**: [ 0.79560373 0.86224861 0.11118257 0.81515947] **y.shape**: (2, 10, 4) **cache[1][1][3]**: [-1.1425182 -0.34934272 -0.20889423 0.58662319] **len(cache)**: 2 Congratulations! You've successfully built the forward propagation of a recurrent neural network from scratch. This will work well enough for some applications, but it suffers from vanishing gradient problems. So it works best when each output $y^{\langle t \rangle}$ can be estimated using mainly "local" context (meaning information from inputs $x^{\langle t' \rangle}$ where $t'$ is not too far from $t$). In the next part, you will build a more complex LSTM model, which is better at addressing vanishing gradients. The LSTM will be better able to remember a piece of information and keep it saved for many timesteps. 2 - Long Short-Term Memory (LSTM) networkThe following figure shows the operations of an LSTM-cell. **Figure 4**: LSTM-cell. This tracks and updates a "cell state" or memory variable $c^{\langle t \rangle}$ at every time-step, which can be different from $a^{\langle t \rangle}$. Similar to the RNN example above, you will start by implementing the LSTM cell for a single time-step. Then you can iteratively call it from inside a for-loop to have it process an input with $T_x$ time-steps. About the gates - Forget gateFor the sake of this illustration, let's assume we are reading words in a piece of text, and want to use an LSTM to keep track of grammatical structures, such as whether the subject is singular or plural. If the subject changes from a singular word to a plural word, we need to find a way to get rid of our previously stored memory value of the singular/plural state. In an LSTM, the forget gate lets us do this: $$\Gamma_f^{\langle t \rangle} = \sigma(W_f[a^{\langle t-1 \rangle}, x^{\langle t \rangle}] + b_f)\tag{1} $$Here, $W_f$ are weights that govern the forget gate's behavior. 
We concatenate $[a^{\langle t-1 \rangle}, x^{\langle t \rangle}]$ and multiply by $W_f$. The equation above results in a vector $\Gamma_f^{\langle t \rangle}$ with values between 0 and 1. This forget gate vector will be multiplied element-wise by the previous cell state $c^{\langle t-1 \rangle}$. So if one of the values of $\Gamma_f^{\langle t \rangle}$ is 0 (or close to 0) then it means that the LSTM should remove that piece of information (e.g. the singular subject) in the corresponding component of $c^{\langle t-1 \rangle}$. If one of the values is 1, then it will keep the information. - Update gateOnce we forget that the subject being discussed is singular, we need to find a way to update it to reflect that the new subject is now plural. Here is the formula for the update gate: $$\Gamma_u^{\langle t \rangle} = \sigma(W_u[a^{\langle t-1 \rangle}, x^{\langle t \rangle}] + b_u)\tag{2} $$ Similar to the forget gate, here $\Gamma_u^{\langle t \rangle}$ is again a vector of values between 0 and 1. This will be multiplied element-wise with $\tilde{c}^{\langle t \rangle}$, in order to compute $c^{\langle t \rangle}$.
The equation we use is: $$ \tilde{c}^{\langle t \rangle} = \tanh(W_c[a^{\langle t-1 \rangle}, x^{\langle t \rangle}] + b_c)\tag{3} $$Finally, the new cell state is: $$ c^{\langle t \rangle} = \Gamma_f^{\langle t \rangle}* c^{\langle t-1 \rangle} + \Gamma_u^{\langle t \rangle} *\tilde{c}^{\langle t \rangle} \tag{4} $$ - Output gateTo decide which outputs we will use, we will use the following two formulas: $$ \Gamma_o^{\langle t \rangle}= \sigma(W_o[a^{\langle t-1 \rangle}, x^{\langle t \rangle}] + b_o)\tag{5}$$ $$ a^{\langle t \rangle} = \Gamma_o^{\langle t \rangle}* \tanh(c^{\langle t \rangle})\tag{6} $$Where in equation 5 you decide what to output using a sigmoid function, and in equation 6 you multiply that by the $\tanh$ of the new cell state $c^{\langle t \rangle}$. 2.1 - LSTM cell**Exercise**: Implement the LSTM cell described in Figure (4).**Instructions**:1. Concatenate $a^{\langle t-1 \rangle}$ and $x^{\langle t \rangle}$ in a single matrix: $concat = \begin{bmatrix} a^{\langle t-1 \rangle} \\ x^{\langle t \rangle} \end{bmatrix}$2. Compute all the formulas 1-6. You can use `sigmoid()` (provided) and `np.tanh()`.3. Compute the prediction $y^{\langle t \rangle}$. You can use `softmax()` (provided).
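Equations (1)-(6) can be sanity-checked numerically before writing the graded function. The sketch below uses random matrices with assumed sizes ($n_a = 5$, $n_x = 3$, $m = 10$) and zero biases, just to confirm the shapes and that the gate values lie in (0, 1):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Tiny numeric sketch of equations (1)-(6); sizes are assumed for illustration.
np.random.seed(1)
n_a, n_x, m = 5, 3, 10
a_prev, xt, c_prev = np.random.randn(n_a, m), np.random.randn(n_x, m), np.random.randn(n_a, m)
Wf, Wu, Wc, Wo = (np.random.randn(n_a, n_a + n_x) for _ in range(4))
bf = bu = bc = bo = np.zeros((n_a, 1))

concat = np.concatenate([a_prev, xt])                 # [a^<t-1>, x^<t>]
gamma_f = sigmoid(Wf @ concat + bf)                   # (1) forget gate
gamma_u = sigmoid(Wu @ concat + bu)                   # (2) update gate
c_tilde = np.tanh(Wc @ concat + bc)                   # (3) candidate value
c_next = gamma_f * c_prev + gamma_u * c_tilde         # (4) new cell state
gamma_o = sigmoid(Wo @ concat + bo)                   # (5) output gate
a_next = gamma_o * np.tanh(c_next)                    # (6) new hidden state
print(c_next.shape, a_next.shape)                     # (5, 10) (5, 10)
```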
###Code
# GRADED FUNCTION: lstm_cell_forward
def lstm_cell_forward(xt, a_prev, c_prev, parameters):
"""
Implement a single forward step of the LSTM-cell as described in Figure (4)
Arguments:
xt -- your input data at timestep "t", numpy array of shape (n_x, m).
a_prev -- Hidden state at timestep "t-1", numpy array of shape (n_a, m)
c_prev -- Memory state at timestep "t-1", numpy array of shape (n_a, m)
parameters -- python dictionary containing:
Wf -- Weight matrix of the forget gate, numpy array of shape (n_a, n_a + n_x)
bf -- Bias of the forget gate, numpy array of shape (n_a, 1)
Wi -- Weight matrix of the update gate, numpy array of shape (n_a, n_a + n_x)
bi -- Bias of the update gate, numpy array of shape (n_a, 1)
Wc -- Weight matrix of the first "tanh", numpy array of shape (n_a, n_a + n_x)
bc -- Bias of the first "tanh", numpy array of shape (n_a, 1)
Wo -- Weight matrix of the output gate, numpy array of shape (n_a, n_a + n_x)
bo -- Bias of the output gate, numpy array of shape (n_a, 1)
Wy -- Weight matrix relating the hidden-state to the output, numpy array of shape (n_y, n_a)
by -- Bias relating the hidden-state to the output, numpy array of shape (n_y, 1)
Returns:
a_next -- next hidden state, of shape (n_a, m)
c_next -- next memory state, of shape (n_a, m)
yt_pred -- prediction at timestep "t", numpy array of shape (n_y, m)
cache -- tuple of values needed for the backward pass, contains (a_next, c_next, a_prev, c_prev, xt, parameters)
Note: ft/it/ot stand for the forget/update/output gates, cct stands for the candidate value (c tilde),
c stands for the memory value
"""
# Retrieve parameters from "parameters"
Wf = parameters["Wf"]
bf = parameters["bf"]
Wi = parameters["Wi"]
bi = parameters["bi"]
Wc = parameters["Wc"]
bc = parameters["bc"]
Wo = parameters["Wo"]
bo = parameters["bo"]
Wy = parameters["Wy"]
by = parameters["by"]
# Retrieve dimensions from shapes of xt and Wy
n_x, m = xt.shape
n_y, n_a = Wy.shape
### START CODE HERE ###
# Concatenate a_prev and xt (≈3 lines)
concat = np.zeros((n_x + n_a, m))
concat[: n_a, :] = a_prev
concat[n_a :, :] = xt
# Compute values for ft, it, cct, c_next, ot, a_next using the formulas given figure (4) (≈6 lines)
ft = sigmoid(np.dot(Wf, concat) + bf)
it = sigmoid(np.dot(Wi, concat) + bi)
cct = np.tanh(np.dot(Wc, concat) + bc)
c_next = ft * c_prev + it * cct
ot = sigmoid(np.dot(Wo, concat) + bo)
a_next = ot * np.tanh(c_next)
# Compute prediction of the LSTM cell (≈1 line)
yt_pred = softmax(np.dot(Wy, a_next) + by)
### END CODE HERE ###
# store values needed for backward propagation in cache
cache = (a_next, c_next, a_prev, c_prev, ft, it, cct, ot, xt, parameters)
return a_next, c_next, yt_pred, cache
np.random.seed(1)
xt = np.random.randn(3,10)
a_prev = np.random.randn(5,10)
c_prev = np.random.randn(5,10)
Wf = np.random.randn(5, 5+3)
bf = np.random.randn(5,1)
Wi = np.random.randn(5, 5+3)
bi = np.random.randn(5,1)
Wo = np.random.randn(5, 5+3)
bo = np.random.randn(5,1)
Wc = np.random.randn(5, 5+3)
bc = np.random.randn(5,1)
Wy = np.random.randn(2,5)
by = np.random.randn(2,1)
parameters = {"Wf": Wf, "Wi": Wi, "Wo": Wo, "Wc": Wc, "Wy": Wy, "bf": bf, "bi": bi, "bo": bo, "bc": bc, "by": by}
a_next, c_next, yt, cache = lstm_cell_forward(xt, a_prev, c_prev, parameters)
print("a_next[4] = ", a_next[4])
print("a_next.shape = ", a_next.shape)
print("c_next[2] = ", c_next[2])
print("c_next.shape = ", c_next.shape)
print("yt[1] =", yt[1])
print("yt.shape = ", yt.shape)
print("cache[1][3] =", cache[1][3])
print("len(cache) = ", len(cache))
###Output
a_next[4] = [-0.66408471 0.0036921 0.02088357 0.22834167 -0.85575339 0.00138482
0.76566531 0.34631421 -0.00215674 0.43827275]
a_next.shape = (5, 10)
c_next[2] = [ 0.63267805 1.00570849 0.35504474 0.20690913 -1.64566718 0.11832942
0.76449811 -0.0981561 -0.74348425 -0.26810932]
c_next.shape = (5, 10)
yt[1] = [ 0.79913913 0.15986619 0.22412122 0.15606108 0.97057211 0.31146381
0.00943007 0.12666353 0.39380172 0.07828381]
yt.shape = (2, 10)
cache[1][3] = [-0.16263996 1.03729328 0.72938082 -0.54101719 0.02752074 -0.30821874
0.07651101 -1.03752894 1.41219977 -0.37647422]
len(cache) = 10
###Markdown
**Expected Output**: **a_next[4]**: [-0.66408471 0.0036921 0.02088357 0.22834167 -0.85575339 0.00138482 0.76566531 0.34631421 -0.00215674 0.43827275] **a_next.shape**: (5, 10) **c_next[2]**: [ 0.63267805 1.00570849 0.35504474 0.20690913 -1.64566718 0.11832942 0.76449811 -0.0981561 -0.74348425 -0.26810932] **c_next.shape**: (5, 10) **yt[1]**: [ 0.79913913 0.15986619 0.22412122 0.15606108 0.97057211 0.31146381 0.00943007 0.12666353 0.39380172 0.07828381] **yt.shape**: (2, 10) **cache[1][3]**: [-0.16263996 1.03729328 0.72938082 -0.54101719 0.02752074 -0.30821874 0.07651101 -1.03752894 1.41219977 -0.37647422] **len(cache)**: 10 2.2 - Forward pass for LSTM Now that you have implemented one step of an LSTM, you can iterate it over time with a for-loop to process a sequence of $T_x$ inputs. **Figure 4**: LSTM over multiple time-steps. **Exercise:** Implement `lstm_forward()` to run an LSTM over $T_x$ time-steps. **Note**: $c^{\langle 0 \rangle}$ is initialized with zeros.
###Code
# GRADED FUNCTION: lstm_forward
def lstm_forward(x, a0, parameters):
"""
Implement the forward propagation of the recurrent neural network using an LSTM-cell described in Figure (3).
Arguments:
x -- Input data for every time-step, of shape (n_x, m, T_x).
a0 -- Initial hidden state, of shape (n_a, m)
parameters -- python dictionary containing:
Wf -- Weight matrix of the forget gate, numpy array of shape (n_a, n_a + n_x)
bf -- Bias of the forget gate, numpy array of shape (n_a, 1)
Wi -- Weight matrix of the update gate, numpy array of shape (n_a, n_a + n_x)
bi -- Bias of the update gate, numpy array of shape (n_a, 1)
Wc -- Weight matrix of the first "tanh", numpy array of shape (n_a, n_a + n_x)
bc -- Bias of the first "tanh", numpy array of shape (n_a, 1)
Wo -- Weight matrix of the output gate, numpy array of shape (n_a, n_a + n_x)
bo -- Bias of the output gate, numpy array of shape (n_a, 1)
Wy -- Weight matrix relating the hidden-state to the output, numpy array of shape (n_y, n_a)
by -- Bias relating the hidden-state to the output, numpy array of shape (n_y, 1)
Returns:
a -- Hidden states for every time-step, numpy array of shape (n_a, m, T_x)
y -- Predictions for every time-step, numpy array of shape (n_y, m, T_x)
caches -- tuple of values needed for the backward pass, contains (list of all the caches, x)
"""
# Initialize "caches", which will track the list of all the caches
caches = []
### START CODE HERE ###
# Retrieve dimensions from shapes of x and Wy (≈2 lines)
n_x, m, T_x = x.shape
n_y, n_a = parameters['Wy'].shape
# initialize "a", "c" and "y" with zeros (≈3 lines)
a = np.zeros((n_a, m, T_x))
c = np.zeros((n_a, m, T_x))
y = np.zeros((n_y, m, T_x))
# Initialize a_next and c_next (≈2 lines)
a_next = a0
c_next = np.zeros((n_a, m))
# loop over all time-steps
for t in range(T_x):
# Update next hidden state, next memory state, compute the prediction, get the cache (≈1 line)
a_next, c_next, yt, cache = lstm_cell_forward(x[:, :, t], a_next, c_next, parameters)
# Save the value of the new "next" hidden state in a (≈1 line)
a[:,:,t] = a_next
# Save the value of the prediction in y (≈1 line)
y[:,:,t] = yt
# Save the value of the next cell state (≈1 line)
c[:,:,t] = c_next
# Append the cache into caches (≈1 line)
caches.append(cache)
### END CODE HERE ###
# store values needed for backward propagation in cache
caches = (caches, x)
return a, y, c, caches
np.random.seed(1)
x = np.random.randn(3,10,7)
a0 = np.random.randn(5,10)
Wf = np.random.randn(5, 5+3)
bf = np.random.randn(5,1)
Wi = np.random.randn(5, 5+3)
bi = np.random.randn(5,1)
Wo = np.random.randn(5, 5+3)
bo = np.random.randn(5,1)
Wc = np.random.randn(5, 5+3)
bc = np.random.randn(5,1)
Wy = np.random.randn(2,5)
by = np.random.randn(2,1)
parameters = {"Wf": Wf, "Wi": Wi, "Wo": Wo, "Wc": Wc, "Wy": Wy, "bf": bf, "bi": bi, "bo": bo, "bc": bc, "by": by}
a, y, c, caches = lstm_forward(x, a0, parameters)
print("a[4][3][6] = ", a[4][3][6])
print("a.shape = ", a.shape)
print("y[1][4][3] =", y[1][4][3])
print("y.shape = ", y.shape)
print("caches[1][1][1] =", caches[1][1][1])
print("c[1][2][1] =", c[1][2][1])
print("len(caches) = ", len(caches))
###Output
a[4][3][6] = 0.172117767533
a.shape = (5, 10, 7)
y[1][4][3] = 0.95087346185
y.shape = (2, 10, 7)
caches[1][1][1] = [ 0.82797464 0.23009474 0.76201118 -0.22232814 -0.20075807 0.18656139
0.41005165]
c[1][2][1] = -0.855544916718
len(caches) = 2
###Markdown
**Expected Output**: **a[4][3][6]** = 0.172117767533 **a.shape** = (5, 10, 7) **y[1][4][3]** = 0.95087346185 **y.shape** = (2, 10, 7) **caches[1][1][1]** = [ 0.82797464 0.23009474 0.76201118 -0.22232814 -0.20075807 0.18656139 0.41005165] **c[1][2][1]** = -0.855544916718 **len(caches)** = 2 Congratulations! You have now implemented the forward passes for the basic RNN and the LSTM. When using a deep learning framework, implementing the forward pass is sufficient to build systems that achieve great performance. The rest of this notebook is optional, and will not be graded. 3 - Backpropagation in recurrent neural networks (OPTIONAL / UNGRADED) In modern deep learning frameworks, you only have to implement the forward pass, and the framework takes care of the backward pass, so most deep learning engineers do not need to bother with the details of the backward pass. If, however, you are comfortable with calculus and want to see the details of backprop in RNNs, you can work through this optional portion of the notebook. When you implemented a simple (fully connected) neural network in an earlier course, you used backpropagation to compute the derivatives of the cost with respect to the parameters in order to update them. Similarly, in recurrent neural networks you calculate the derivatives of the cost with respect to the parameters in order to update them. The backprop equations are quite complicated and we did not derive them in lecture. However, we will briefly present them below. 3.1 - Basic RNN backward pass We will start by computing the backward pass for the basic RNN-cell. **Figure 5**: RNN-cell's backward pass. Just like in a fully-connected neural network, the derivative of the cost function $J$ backpropagates through the RNN by following the chain rule from calculus. The chain rule is also used to calculate $(\frac{\partial J}{\partial W_{ax}},\frac{\partial J}{\partial W_{aa}},\frac{\partial J}{\partial b})$ to update the parameters $(W_{ax}, W_{aa}, b_a)$.
Deriving the one step backward functions: To compute `rnn_cell_backward` you need to compute the following equations. It is a good exercise to derive them by hand. The derivative of $\tanh$ is $1-\tanh(x)^2$. You can find the complete proof [here](https://www.wyzant.com/resources/lessons/math/calculus/derivative_proofs/tanx). Note that $\operatorname{sech}(x)^2 = 1 - \tanh(x)^2$. Similarly for $\frac{ \partial a^{\langle t \rangle} } {\partial W_{ax}}, \frac{ \partial a^{\langle t \rangle} } {\partial W_{aa}}, \frac{ \partial a^{\langle t \rangle} } {\partial b}$, the derivative of $\tanh(u)$ is $(1-\tanh(u)^2)du$. The final two equations follow the same rule and are derived using the $\tanh$ derivative. Note that the terms are arranged so that the resulting dimensions match.
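The $\tanh$ derivative identity above is easy to sanity-check numerically with a central finite difference before using it in the backward pass (a minimal sketch using NumPy; the tolerance is illustrative):

```python
import numpy as np

# Check d/dx tanh(x) = 1 - tanh(x)^2 by central finite differences
x = np.linspace(-3.0, 3.0, 7)
eps = 1e-6
analytic = 1.0 - np.tanh(x) ** 2
numeric = (np.tanh(x + eps) - np.tanh(x - eps)) / (2 * eps)
print(np.max(np.abs(numeric - analytic)))  # tiny: well below 1e-8
```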
###Code
def rnn_cell_backward(da_next, cache):
"""
Implements the backward pass for the RNN-cell (single time-step).
Arguments:
da_next -- Gradient of loss with respect to next hidden state
cache -- python dictionary containing useful values (output of rnn_cell_forward())
Returns:
gradients -- python dictionary containing:
dx -- Gradients of input data, of shape (n_x, m)
da_prev -- Gradients of previous hidden state, of shape (n_a, m)
dWax -- Gradients of input-to-hidden weights, of shape (n_a, n_x)
dWaa -- Gradients of hidden-to-hidden weights, of shape (n_a, n_a)
dba -- Gradients of bias vector, of shape (n_a, 1)
"""
# Retrieve values from cache
(a_next, a_prev, xt, parameters) = cache
# Retrieve values from parameters
Wax = parameters["Wax"]
Waa = parameters["Waa"]
Wya = parameters["Wya"]
ba = parameters["ba"]
by = parameters["by"]
### START CODE HERE ###
# compute the gradient of tanh with respect to a_next (≈1 line)
dtanh = (1 - a_next ** 2) * da_next
# compute the gradient of the loss with respect to Wax (≈2 lines)
dxt = np.dot(Wax.T, dtanh)
dWax = np.dot(dtanh, xt.T)
# compute the gradient with respect to Waa (≈2 lines)
da_prev = np.dot(Waa.T, dtanh)
dWaa = np.dot(dtanh, a_prev.T)
# compute the gradient with respect to b (≈1 line)
dba = np.sum(dtanh, axis=1, keepdims=True)
### END CODE HERE ###
# Store the gradients in a python dictionary
gradients = {"dxt": dxt, "da_prev": da_prev, "dWax": dWax, "dWaa": dWaa, "dba": dba}
return gradients
np.random.seed(1)
xt = np.random.randn(3,10)
a_prev = np.random.randn(5,10)
Wax = np.random.randn(5,3)
Waa = np.random.randn(5,5)
Wya = np.random.randn(2,5)
ba = np.random.randn(5,1)
by = np.random.randn(2,1)
parameters = {"Wax": Wax, "Waa": Waa, "Wya": Wya, "ba": ba, "by": by}
a_next, yt, cache = rnn_cell_forward(xt, a_prev, parameters)
da_next = np.random.randn(5,10)
gradients = rnn_cell_backward(da_next, cache)
print("gradients[\"dxt\"][1][2] =", gradients["dxt"][1][2])
print("gradients[\"dxt\"].shape =", gradients["dxt"].shape)
print("gradients[\"da_prev\"][2][3] =", gradients["da_prev"][2][3])
print("gradients[\"da_prev\"].shape =", gradients["da_prev"].shape)
print("gradients[\"dWax\"][3][1] =", gradients["dWax"][3][1])
print("gradients[\"dWax\"].shape =", gradients["dWax"].shape)
print("gradients[\"dWaa\"][1][2] =", gradients["dWaa"][1][2])
print("gradients[\"dWaa\"].shape =", gradients["dWaa"].shape)
print("gradients[\"dba\"][4] =", gradients["dba"][4])
print("gradients[\"dba\"].shape =", gradients["dba"].shape)
###Output
_____no_output_____
###Markdown
**Expected Output**: **gradients["dxt"][1][2]** = -0.460564103059 **gradients["dxt"].shape** = (3, 10) **gradients["da_prev"][2][3]** = 0.0842968653807 **gradients["da_prev"].shape** = (5, 10) **gradients["dWax"][3][1]** = 0.393081873922 **gradients["dWax"].shape** = (5, 3) **gradients["dWaa"][1][2]** = -0.28483955787 **gradients["dWaa"].shape** = (5, 5) **gradients["dba"][4]** = [ 0.80517166] **gradients["dba"].shape** = (5, 1) Backward pass through the RNN Computing the gradients of the cost with respect to $a^{\langle t \rangle}$ at every time-step $t$ is useful because it is what helps the gradient backpropagate to the previous RNN-cell. To do so, you need to iterate through all the time steps starting at the end, and at each step, you increment the overall $db_a$, $dW_{aa}$, $dW_{ax}$ and you store $dx$. **Instructions**: Implement the `rnn_backward` function. Initialize the return variables with zeros first, then loop through all the time steps while calling `rnn_cell_backward` at each time step, updating the other variables accordingly.
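Any hand-derived backward pass can be sanity-checked against a finite-difference approximation before you trust it. Below is a minimal, self-contained sketch of such a gradient check on a toy scalar function (the function `f` and its gradient `grad_f` are made up for illustration; this is not the RNN cost itself):

```python
import numpy as np

def f(w):
    """Toy scalar loss."""
    return np.sum(np.tanh(w) ** 2)

def grad_f(w):
    """Hand-derived gradient of f (chain rule)."""
    return 2.0 * np.tanh(w) * (1.0 - np.tanh(w) ** 2)

np.random.seed(0)
w = np.random.randn(4)
eps = 1e-6
numeric = np.zeros_like(w)
for i in range(w.size):
    wp, wm = w.copy(), w.copy()
    wp[i] += eps
    wm[i] -= eps
    numeric[i] = (f(wp) - f(wm)) / (2 * eps)  # central difference per coordinate
print(np.max(np.abs(numeric - grad_f(w))))  # should be very small
```

The same recipe applies to `rnn_backward`: perturb one parameter entry, recompute the cost, and compare against the analytic gradient.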
###Code
def rnn_backward(da, caches):
"""
Implement the backward pass for a RNN over an entire sequence of input data.
Arguments:
da -- Upstream gradients of all hidden states, of shape (n_a, m, T_x)
caches -- tuple containing information from the forward pass (rnn_forward)
Returns:
gradients -- python dictionary containing:
dx -- Gradient w.r.t. the input data, numpy-array of shape (n_x, m, T_x)
da0 -- Gradient w.r.t the initial hidden state, numpy-array of shape (n_a, m)
dWax -- Gradient w.r.t the input's weight matrix, numpy-array of shape (n_a, n_x)
dWaa -- Gradient w.r.t the hidden state's weight matrix, numpy-arrayof shape (n_a, n_a)
dba -- Gradient w.r.t the bias, of shape (n_a, 1)
"""
### START CODE HERE ###
# Retrieve values from the first cache (t=1) of caches (≈2 lines)
(caches, x) = caches
(a1, a0, x1, parameters) = caches[0]
# Retrieve dimensions from da's and x1's shapes (≈2 lines)
n_a, m, T_x = da.shape
n_x, m = x1.shape
# initialize the gradients with the right sizes (≈6 lines)
dx = np.zeros((n_x, m, T_x))
dWax = np.zeros((n_a, n_x))
dWaa = np.zeros((n_a, n_a))
dba = np.zeros((n_a, 1))
da0 = np.zeros((n_a, m))
da_prevt = np.zeros((n_a, m))
# Loop through all the time steps
for t in reversed(range(T_x)):
    # Compute gradients at time step t: the hidden-state gradient is the upstream da plus the gradient flowing back from t+1 (≈1 line)
    gradients = rnn_cell_backward(da[:, :, t] + da_prevt, caches[t])
    # Retrieve derivatives from gradients (≈ 1 line)
    dxt, da_prevt, dWaxt, dWaat, dbat = gradients["dxt"], gradients["da_prev"], gradients["dWax"], gradients["dWaa"], gradients["dba"]
    # Increment global derivatives w.r.t parameters by adding their derivative at time-step t (≈4 lines)
    dx[:, :, t] = dxt
    dWax += dWaxt
    dWaa += dWaat
    dba += dbat
# Set da0 to the gradient of a which has been backpropagated through all time-steps (≈1 line)
da0 = da_prevt
### END CODE HERE ###
# Store the gradients in a python dictionary
gradients = {"dx": dx, "da0": da0, "dWax": dWax, "dWaa": dWaa,"dba": dba}
return gradients
np.random.seed(1)
x = np.random.randn(3,10,4)
a0 = np.random.randn(5,10)
Wax = np.random.randn(5,3)
Waa = np.random.randn(5,5)
Wya = np.random.randn(2,5)
ba = np.random.randn(5,1)
by = np.random.randn(2,1)
parameters = {"Wax": Wax, "Waa": Waa, "Wya": Wya, "ba": ba, "by": by}
a, y, caches = rnn_forward(x, a0, parameters)
da = np.random.randn(5, 10, 4)
gradients = rnn_backward(da, caches)
print("gradients[\"dx\"][1][2] =", gradients["dx"][1][2])
print("gradients[\"dx\"].shape =", gradients["dx"].shape)
print("gradients[\"da0\"][2][3] =", gradients["da0"][2][3])
print("gradients[\"da0\"].shape =", gradients["da0"].shape)
print("gradients[\"dWax\"][3][1] =", gradients["dWax"][3][1])
print("gradients[\"dWax\"].shape =", gradients["dWax"].shape)
print("gradients[\"dWaa\"][1][2] =", gradients["dWaa"][1][2])
print("gradients[\"dWaa\"].shape =", gradients["dWaa"].shape)
print("gradients[\"dba\"][4] =", gradients["dba"][4])
print("gradients[\"dba\"].shape =", gradients["dba"].shape)
###Output
_____no_output_____
###Markdown
**Expected Output**: **gradients["dx"][1][2]** = [-2.07101689 -0.59255627 0.02466855 0.01483317] **gradients["dx"].shape** = (3, 10, 4) **gradients["da0"][2][3]** = -0.314942375127 **gradients["da0"].shape** = (5, 10) **gradients["dWax"][3][1]** = 11.2641044965 **gradients["dWax"].shape** = (5, 3) **gradients["dWaa"][1][2]** = 2.30333312658 **gradients["dWaa"].shape** = (5, 5) **gradients["dba"][4]** = [-0.74747722] **gradients["dba"].shape** = (5, 1) 3.2 - LSTM backward pass 3.2.1 One Step backward The LSTM backward pass is slightly more complicated than the forward one. We have provided you with all the equations for the LSTM backward pass below. (If you enjoy calculus exercises feel free to try deriving these from scratch yourself.) 3.2.2 gate derivatives$$d \Gamma_o^{\langle t \rangle} = da_{next}*\tanh(c_{next}) * \Gamma_o^{\langle t \rangle}*(1-\Gamma_o^{\langle t \rangle})\tag{7}$$$$d\tilde c^{\langle t \rangle} = \left(dc_{next}*\Gamma_u^{\langle t \rangle} + \Gamma_o^{\langle t \rangle} (1-\tanh(c_{next})^2) * \Gamma_u^{\langle t \rangle} * da_{next}\right) * \left(1-(\tilde c^{\langle t \rangle})^2\right) \tag{8}$$$$d\Gamma_u^{\langle t \rangle} = \left(dc_{next}*\tilde c^{\langle t \rangle} + \Gamma_o^{\langle t \rangle} (1-\tanh(c_{next})^2) * \tilde c^{\langle t \rangle} * da_{next}\right)*\Gamma_u^{\langle t \rangle}*(1-\Gamma_u^{\langle t \rangle})\tag{9}$$$$d\Gamma_f^{\langle t \rangle} = \left(dc_{next}*c_{prev} + \Gamma_o^{\langle t \rangle} (1-\tanh(c_{next})^2) * c_{prev} * da_{next}\right)*\Gamma_f^{\langle t \rangle}*(1-\Gamma_f^{\langle t \rangle})\tag{10}$$ 3.2.3 parameter derivatives $$ dW_f = d\Gamma_f^{\langle t \rangle} * \begin{pmatrix} a_{prev} \\ x_t\end{pmatrix}^T \tag{11} $$$$ dW_u = d\Gamma_u^{\langle t \rangle} * \begin{pmatrix} a_{prev} \\ x_t\end{pmatrix}^T \tag{12} $$$$ dW_c = d\tilde c^{\langle t \rangle} * \begin{pmatrix} a_{prev} \\ x_t\end{pmatrix}^T \tag{13} $$$$ dW_o = d\Gamma_o^{\langle t \rangle} * \begin{pmatrix} a_{prev} \\ x_t\end{pmatrix}^T \tag{14}$$To calculate
$db_f, db_u, db_c, db_o$ you just need to sum across the horizontal axis (`axis=1`) on $d\Gamma_f^{\langle t \rangle}, d\Gamma_u^{\langle t \rangle}, d\tilde c^{\langle t \rangle}, d\Gamma_o^{\langle t \rangle}$ respectively. Note that you should use the `keepdims=True` option so each bias gradient keeps shape $(n_a, 1)$. Finally, you will compute the derivative with respect to the previous hidden state, previous memory state, and input.$$ da_{prev} = W_f^T*d\Gamma_f^{\langle t \rangle} + W_u^T * d\Gamma_u^{\langle t \rangle}+ W_c^T * d\tilde c^{\langle t \rangle} + W_o^T * d\Gamma_o^{\langle t \rangle} \tag{15}$$Here, the weights in equation 15 are the first $n_a$ columns (i.e. $W_f = W_f[:,:n_a]$ etc.), since the hidden-state part of the concatenation comes first.$$ dc_{prev} = dc_{next}*\Gamma_f^{\langle t \rangle} + \Gamma_o^{\langle t \rangle} * (1- \tanh(c_{next})^2)*\Gamma_f^{\langle t \rangle}*da_{next} \tag{16}$$$$ dx^{\langle t \rangle} = W_f^T*d\Gamma_f^{\langle t \rangle} + W_u^T * d\Gamma_u^{\langle t \rangle}+ W_c^T * d\tilde c^{\langle t \rangle} + W_o^T * d\Gamma_o^{\langle t \rangle}\tag{17} $$where the weights in equation 17 are the columns from $n_a$ to the end (i.e. $W_f = W_f[:,n_a:]$ etc.).**Exercise:** Implement `lstm_cell_backward` by implementing equations (7)-(17) below. Good luck! :)
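The `keepdims=True` detail matters for shapes: summing a gate gradient over the batch axis should leave a column vector matching the bias shape $(n_a, 1)$, not a flat array. A quick sketch with made-up sizes (`d_gamma_f` is a hypothetical gate gradient, not part of the assignment code):

```python
import numpy as np

n_a, m = 5, 10
d_gamma_f = np.random.randn(n_a, m)  # hypothetical gate gradient for one time-step

dbf = np.sum(d_gamma_f, axis=1, keepdims=True)
print(dbf.shape)                        # (5, 1): same shape as the bias bf
print(np.sum(d_gamma_f, axis=1).shape)  # (5,): without keepdims the axis is dropped
```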
###Code
def lstm_cell_backward(da_next, dc_next, cache):
"""
Implement the backward pass for the LSTM-cell (single time-step).
Arguments:
da_next -- Gradients of next hidden state, of shape (n_a, m)
dc_next -- Gradients of next cell state, of shape (n_a, m)
cache -- cache storing information from the forward pass
Returns:
gradients -- python dictionary containing:
dxt -- Gradient of input data at time-step t, of shape (n_x, m)
da_prev -- Gradient w.r.t. the previous hidden state, numpy array of shape (n_a, m)
dc_prev -- Gradient w.r.t. the previous memory state, of shape (n_a, m, T_x)
dWf -- Gradient w.r.t. the weight matrix of the forget gate, numpy array of shape (n_a, n_a + n_x)
dWi -- Gradient w.r.t. the weight matrix of the update gate, numpy array of shape (n_a, n_a + n_x)
dWc -- Gradient w.r.t. the weight matrix of the memory gate, numpy array of shape (n_a, n_a + n_x)
dWo -- Gradient w.r.t. the weight matrix of the output gate, numpy array of shape (n_a, n_a + n_x)
dbf -- Gradient w.r.t. biases of the forget gate, of shape (n_a, 1)
dbi -- Gradient w.r.t. biases of the update gate, of shape (n_a, 1)
dbc -- Gradient w.r.t. biases of the memory gate, of shape (n_a, 1)
dbo -- Gradient w.r.t. biases of the output gate, of shape (n_a, 1)
"""
# Retrieve information from "cache"
(a_next, c_next, a_prev, c_prev, ft, it, cct, ot, xt, parameters) = cache
### START CODE HERE ###
# Retrieve dimensions from xt's and a_next's shape (≈2 lines)
n_x, m = xt.shape
n_a, m = a_next.shape
# Compute gate-related derivatives using equations (7) to (10) (≈4 lines)
dot = da_next * np.tanh(c_next) * ot * (1 - ot)
dcct = (dc_next * it + ot * (1 - np.tanh(c_next) ** 2) * it * da_next) * (1 - cct ** 2)
dit = (dc_next * cct + ot * (1 - np.tanh(c_next) ** 2) * cct * da_next) * it * (1 - it)
dft = (dc_next * c_prev + ot * (1 - np.tanh(c_next) ** 2) * c_prev * da_next) * ft * (1 - ft)
# Compute parameter-related derivatives. Use equations (11)-(14) (≈8 lines)
concat = np.concatenate((a_prev, xt), axis=0)
dWf = np.dot(dft, concat.T)
dWi = np.dot(dit, concat.T)
dWc = np.dot(dcct, concat.T)
dWo = np.dot(dot, concat.T)
dbf = np.sum(dft, axis=1, keepdims=True)
dbi = np.sum(dit, axis=1, keepdims=True)
dbc = np.sum(dcct, axis=1, keepdims=True)
dbo = np.sum(dot, axis=1, keepdims=True)
# Compute derivatives w.r.t previous hidden state, previous memory state and input. Use equations (15)-(17). (≈3 lines)
Wf, Wi, Wc, Wo = parameters["Wf"], parameters["Wi"], parameters["Wc"], parameters["Wo"]
da_prev = np.dot(Wf[:, :n_a].T, dft) + np.dot(Wi[:, :n_a].T, dit) + np.dot(Wc[:, :n_a].T, dcct) + np.dot(Wo[:, :n_a].T, dot)
dc_prev = dc_next * ft + ot * (1 - np.tanh(c_next) ** 2) * ft * da_next
dxt = np.dot(Wf[:, n_a:].T, dft) + np.dot(Wi[:, n_a:].T, dit) + np.dot(Wc[:, n_a:].T, dcct) + np.dot(Wo[:, n_a:].T, dot)
### END CODE HERE ###
# Save gradients in dictionary
gradients = {"dxt": dxt, "da_prev": da_prev, "dc_prev": dc_prev, "dWf": dWf,"dbf": dbf, "dWi": dWi,"dbi": dbi,
"dWc": dWc,"dbc": dbc, "dWo": dWo,"dbo": dbo}
return gradients
np.random.seed(1)
xt = np.random.randn(3,10)
a_prev = np.random.randn(5,10)
c_prev = np.random.randn(5,10)
Wf = np.random.randn(5, 5+3)
bf = np.random.randn(5,1)
Wi = np.random.randn(5, 5+3)
bi = np.random.randn(5,1)
Wo = np.random.randn(5, 5+3)
bo = np.random.randn(5,1)
Wc = np.random.randn(5, 5+3)
bc = np.random.randn(5,1)
Wy = np.random.randn(2,5)
by = np.random.randn(2,1)
parameters = {"Wf": Wf, "Wi": Wi, "Wo": Wo, "Wc": Wc, "Wy": Wy, "bf": bf, "bi": bi, "bo": bo, "bc": bc, "by": by}
a_next, c_next, yt, cache = lstm_cell_forward(xt, a_prev, c_prev, parameters)
da_next = np.random.randn(5,10)
dc_next = np.random.randn(5,10)
gradients = lstm_cell_backward(da_next, dc_next, cache)
print("gradients[\"dxt\"][1][2] =", gradients["dxt"][1][2])
print("gradients[\"dxt\"].shape =", gradients["dxt"].shape)
print("gradients[\"da_prev\"][2][3] =", gradients["da_prev"][2][3])
print("gradients[\"da_prev\"].shape =", gradients["da_prev"].shape)
print("gradients[\"dc_prev\"][2][3] =", gradients["dc_prev"][2][3])
print("gradients[\"dc_prev\"].shape =", gradients["dc_prev"].shape)
print("gradients[\"dWf\"][3][1] =", gradients["dWf"][3][1])
print("gradients[\"dWf\"].shape =", gradients["dWf"].shape)
print("gradients[\"dWi\"][1][2] =", gradients["dWi"][1][2])
print("gradients[\"dWi\"].shape =", gradients["dWi"].shape)
print("gradients[\"dWc\"][3][1] =", gradients["dWc"][3][1])
print("gradients[\"dWc\"].shape =", gradients["dWc"].shape)
print("gradients[\"dWo\"][1][2] =", gradients["dWo"][1][2])
print("gradients[\"dWo\"].shape =", gradients["dWo"].shape)
print("gradients[\"dbf\"][4] =", gradients["dbf"][4])
print("gradients[\"dbf\"].shape =", gradients["dbf"].shape)
print("gradients[\"dbi\"][4] =", gradients["dbi"][4])
print("gradients[\"dbi\"].shape =", gradients["dbi"].shape)
print("gradients[\"dbc\"][4] =", gradients["dbc"][4])
print("gradients[\"dbc\"].shape =", gradients["dbc"].shape)
print("gradients[\"dbo\"][4] =", gradients["dbo"][4])
print("gradients[\"dbo\"].shape =", gradients["dbo"].shape)
###Output
_____no_output_____
###Markdown
**Expected Output**: **gradients["dxt"][1][2]** = 3.23055911511 **gradients["dxt"].shape** = (3, 10) **gradients["da_prev"][2][3]** = -0.0639621419711 **gradients["da_prev"].shape** = (5, 10) **gradients["dc_prev"][2][3]** = 0.797522038797 **gradients["dc_prev"].shape** = (5, 10) **gradients["dWf"][3][1]** = -0.147954838164 **gradients["dWf"].shape** = (5, 8) **gradients["dWi"][1][2]** = 1.05749805523 **gradients["dWi"].shape** = (5, 8) **gradients["dWc"][3][1]** = 2.30456216369 **gradients["dWc"].shape** = (5, 8) **gradients["dWo"][1][2]** = 0.331311595289 **gradients["dWo"].shape** = (5, 8) **gradients["dbf"][4]** = [ 0.18864637] **gradients["dbf"].shape** = (5, 1) **gradients["dbi"][4]** = [-0.40142491] **gradients["dbi"].shape** = (5, 1) **gradients["dbc"][4]** = [ 0.25587763] **gradients["dbc"].shape** = (5, 1) **gradients["dbo"][4]** = [ 0.13893342] **gradients["dbo"].shape** = (5, 1) 3.3 Backward pass through the LSTM RNN This part is very similar to the `rnn_backward` function you implemented above. You will first create variables of the same dimension as your return variables. You will then iterate over all the time steps starting from the end and call the one-step function you implemented for the LSTM at each iteration. You will then update the parameters by summing the per-step gradients. Finally, return a dictionary with the new gradients. **Instructions**: Implement the `lstm_backward` function. Create a for loop starting from $T_x$ and going backward. For each step call `lstm_cell_backward` and update your old gradients by adding the new gradients to them. Note that `dxt` is not accumulated but stored.
###Code
def lstm_backward(da, caches):
"""
Implement the backward pass for the RNN with LSTM-cell (over a whole sequence).
Arguments:
da -- Gradients w.r.t the hidden states, numpy-array of shape (n_a, m, T_x)
dc -- Gradients w.r.t the memory states, numpy-array of shape (n_a, m, T_x)
caches -- cache storing information from the forward pass (lstm_forward)
Returns:
gradients -- python dictionary containing:
dx -- Gradient of inputs, of shape (n_x, m, T_x)
da0 -- Gradient w.r.t. the previous hidden state, numpy array of shape (n_a, m)
dWf -- Gradient w.r.t. the weight matrix of the forget gate, numpy array of shape (n_a, n_a + n_x)
dWi -- Gradient w.r.t. the weight matrix of the update gate, numpy array of shape (n_a, n_a + n_x)
dWc -- Gradient w.r.t. the weight matrix of the memory gate, numpy array of shape (n_a, n_a + n_x)
dWo -- Gradient w.r.t. the weight matrix of the save gate, numpy array of shape (n_a, n_a + n_x)
dbf -- Gradient w.r.t. biases of the forget gate, of shape (n_a, 1)
dbi -- Gradient w.r.t. biases of the update gate, of shape (n_a, 1)
dbc -- Gradient w.r.t. biases of the memory gate, of shape (n_a, 1)
dbo -- Gradient w.r.t. biases of the save gate, of shape (n_a, 1)
"""
# Retrieve values from the first cache (t=1) of caches.
(caches, x) = caches
(a1, c1, a0, c0, f1, i1, cc1, o1, x1, parameters) = caches[0]
### START CODE HERE ###
# Retrieve dimensions from da's and x1's shapes (≈2 lines)
n_a, m, T_x = da.shape
n_x, m = x1.shape
# initialize the gradients with the right sizes (≈12 lines)
dx = np.zeros((n_x, m, T_x))
da0 = np.zeros((n_a, m))
da_prevt = np.zeros((n_a, m))
dc_prevt = np.zeros((n_a, m))
dWf = np.zeros((n_a, n_a + n_x))
dWi = np.zeros((n_a, n_a + n_x))
dWc = np.zeros((n_a, n_a + n_x))
dWo = np.zeros((n_a, n_a + n_x))
dbf = np.zeros((n_a, 1))
dbi = np.zeros((n_a, 1))
dbc = np.zeros((n_a, 1))
dbo = np.zeros((n_a, 1))
# loop back over the whole sequence
for t in reversed(range(T_x)):
    # Compute all gradients using lstm_cell_backward
    gradients = lstm_cell_backward(da[:, :, t] + da_prevt, dc_prevt, caches[t])
    # Store or add the gradient to the parameters' previous step's gradient
    da_prevt = gradients["da_prev"]
    dc_prevt = gradients["dc_prev"]
    dx[:, :, t] = gradients["dxt"]
    dWf += gradients["dWf"]
    dWi += gradients["dWi"]
    dWc += gradients["dWc"]
    dWo += gradients["dWo"]
    dbf += gradients["dbf"]
    dbi += gradients["dbi"]
    dbc += gradients["dbc"]
    dbo += gradients["dbo"]
# Set the first activation's gradient to the backpropagated gradient da_prev.
da0 = gradients["da_prev"]
### END CODE HERE ###
# Store the gradients in a python dictionary
gradients = {"dx": dx, "da0": da0, "dWf": dWf,"dbf": dbf, "dWi": dWi,"dbi": dbi,
"dWc": dWc,"dbc": dbc, "dWo": dWo,"dbo": dbo}
return gradients
np.random.seed(1)
x = np.random.randn(3,10,7)
a0 = np.random.randn(5,10)
Wf = np.random.randn(5, 5+3)
bf = np.random.randn(5,1)
Wi = np.random.randn(5, 5+3)
bi = np.random.randn(5,1)
Wo = np.random.randn(5, 5+3)
bo = np.random.randn(5,1)
Wc = np.random.randn(5, 5+3)
bc = np.random.randn(5,1)
parameters = {"Wf": Wf, "Wi": Wi, "Wo": Wo, "Wc": Wc, "Wy": Wy, "bf": bf, "bi": bi, "bo": bo, "bc": bc, "by": by}
a, y, c, caches = lstm_forward(x, a0, parameters)
da = np.random.randn(5, 10, 4)
gradients = lstm_backward(da, caches)
print("gradients[\"dx\"][1][2] =", gradients["dx"][1][2])
print("gradients[\"dx\"].shape =", gradients["dx"].shape)
print("gradients[\"da0\"][2][3] =", gradients["da0"][2][3])
print("gradients[\"da0\"].shape =", gradients["da0"].shape)
print("gradients[\"dWf\"][3][1] =", gradients["dWf"][3][1])
print("gradients[\"dWf\"].shape =", gradients["dWf"].shape)
print("gradients[\"dWi\"][1][2] =", gradients["dWi"][1][2])
print("gradients[\"dWi\"].shape =", gradients["dWi"].shape)
print("gradients[\"dWc\"][3][1] =", gradients["dWc"][3][1])
print("gradients[\"dWc\"].shape =", gradients["dWc"].shape)
print("gradients[\"dWo\"][1][2] =", gradients["dWo"][1][2])
print("gradients[\"dWo\"].shape =", gradients["dWo"].shape)
print("gradients[\"dbf\"][4] =", gradients["dbf"][4])
print("gradients[\"dbf\"].shape =", gradients["dbf"].shape)
print("gradients[\"dbi\"][4] =", gradients["dbi"][4])
print("gradients[\"dbi\"].shape =", gradients["dbi"].shape)
print("gradients[\"dbc\"][4] =", gradients["dbc"][4])
print("gradients[\"dbc\"].shape =", gradients["dbc"].shape)
print("gradients[\"dbo\"][4] =", gradients["dbo"][4])
print("gradients[\"dbo\"].shape =", gradients["dbo"].shape)
###Output
_____no_output_____ |
notebooks/PART1_05_Python_101.ipynb | ###Markdown
Python 101 - An Introduction to Python 1. Objects
###Code
print(2)
print(5)
print("Hello")
a = 2
b = "Hello"
c = True
d = 2.0
e = [a,c,d]
print(a,b,c,d,e)
print(
type(a),
type(b),
type(c),
type(d),
type(e),
)
###Output
<class 'int'> <class 'str'> <class 'bool'> <class 'float'> <class 'list'>
###Markdown
> __DEAL WITH ERRORS!__ 2. Operations
###Code
a = 2
b = 3
# this is a comment
print(
a+b, # this is a comment
a*b, # this is a comment
a**b, # this is a comment
a/b,
a//b
)
"Hello" + " World"
"Hello "*4
#"Hello "**4
a = (1 > 3)
b = (3 == 3)
print(a, b)
c = True
print("1 ",a or b) #1
print("2 ",a and b) #2
print("3 ",b and c) #3
###Output
False True
1 True
2 False
3 True
###Markdown
3. Methods
###Code
a = "hello world"
type(a)
print(a.capitalize())
print(a.title())
print(a.upper())
print(a.replace("o", "--"))
###Output
Hello world
Hello World
HELLO WORLD
hell-- w--rld
###Markdown
4. Indexing and Slicing
###Code
a = a.upper()
a
a[0:2]
a[2:]
a[:4]
a[::3]
a[-5:]
a[-1]
a[::-1]
"HELL" in a
"DO" in a
###Output
_____no_output_____
###Markdown
5. Collection of things* `list`* `tuple`* `dictionary` List
###Code
a = ["blueberry", "strawberry", "pineapple", 1, True]
type(a)
a[::-1]
a[-1]
a[1]
a
a[0] = "new fruit"
print(a)
a.append("a new thing")
a
a.pop()
a.pop()
a.pop()
a
a.sort()
a
a.reverse()
a
a = sorted(a)
a
a.sort()
a
###Output
_____no_output_____
###Markdown
> **Challenge:** Store a bunch of heights (in metres) in a list1. Ask five people around you for their heights (in metres).2. Store these in a list called `heights`.3. Append your own height to the list.4. Get the first height from the list and print it.
###Code
# Solution
heights = [1.72, 1.55, 1.98, 1.66, 1.78]
heights.append(1.88)
heights[0]
###Output
_____no_output_____
###Markdown
**variable assignment:** Every time you assign something to a variable, the following happens: - Example: x = 5 > (x =) - reference: the computer reserves some space in memory and creates a link between your variable name and this space > (int) - type: what you don't see is that Python automatically infers the data type of your variable from what follows the equals sign (try "print(int(5), str(5))") > (5) - object: the computer then writes all the information about your object into the reserved space
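The type-inference step can be made visible by converting the same value explicitly and asking Python for the types (a quick illustration):

```python
x = 5
print(type(x))                      # <class 'int'>
print(int(5), str(5))               # 5 5  -- same characters, different types
print(type(int(5)), type(str(5)))   # <class 'int'> <class 'str'>
x = "now a string"
print(type(x))                      # <class 'str'> -- rebinding can change the type
```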
###Code
a = 5
b = a
a = 6
print(a,b)
a = [1,2]
b = a
a.append(3)
print(a,b)
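# The two names above point to the very same list object; the copy module
# gives you an independent copy when you need one (a quick illustration):
import copy
a = [1, 2, 3]
b = a
b2 = copy.deepcopy(a)
b2.append(4)
print(a is b, a is b2)   # True False
print(a, b2)             # [1, 2, 3] [1, 2, 3, 4]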
###Output
[1, 2, 3] [1, 2, 3]
###Markdown
Basically there are two kinds of objects. Immutable ones (like numbers and strings) behave as if they were passed around by value, while mutable ones (like lists) are shared by reference. - Real-life comparison: imagine an Excel sheet on a shared computer that multiple people use. Every time someone changes it, it changes for every other user. Right now you are all using the same Jupyter notebook, but your changes only happen in your own copy. - By typing "a = b" you are telling Python that "a" should point to the same object as "b"; whether a later change through one name is visible through the other depends on whether that object is mutable. Tuple
###Code
b = (1,2,3,4,5)
type(b)
b1 = [1,2,3,4,5]
type(b1)
b1[0] = 2
b1
#b[0] = 2
###Output
_____no_output_____
###Markdown
Dictionaries`key` $\to$ `value` pairs
###Code
my_dict = {"Marry" : 22 , "Frank" : 33 }
my_dict
my_dict["Marry"]
my_dict["Frank"]
my_dict["Anne"] = 13
my_dict
my_dict["Anne"]
#my_dict["Heidi"]
my_dict.get("Heidi", "Danger no entry found!")
my_dict.items()
my_dict.keys()
my_dict.values()
###Output
_____no_output_____
###Markdown
6. Use functions
###Code
print(type(3))
print(len('hello'))
print(round(3.3))
#?round
round(3.14159,3)
dir(__builtins__);
import math
math.pi
import math as m
m.pi
from math import pi
pi
from math import *
math.sqrt(4)
sqrt(4)
math.sin(2)
import copy
a = [1,2]
b = copy.deepcopy(a)
a.append(3)
print(a,b)
###Output
[1, 2, 3] [1, 2]
###Markdown
7. DRY (_Don't repeat yourself_) __`for` Loops__
###Code
wordlist = ["hi", "hello", "by"]
import time
for word in wordlist:
print(word + "!")
time.sleep(1)
print("-----------")
print("Done")
for e, word in enumerate(wordlist):
print(e, word)
print("-----")
###Output
0 hi
-----
1 hello
-----
2 by
-----
###Markdown
> **Challenge** * Sum all of the values in a collection using a for loop `numlist = [1, 4, 77, 3]`
###Code
# solution
numlist = [1, 4, 77, 3]
total = 0
for mymother in numlist:
total = total + mymother
print(total)
###Output
85
###Markdown
> **Challenge** * Combine items from two lists and print them as one string to the console `name = ["John", "Ringo", "Paul", "George"]` `surname = ["Lennon", "Star", "McCartney", "Harrison"]`
###Code
name = ["John", "Ringo", "Paul", "George"]
surname = ["Lennon", "Star", "McCartney", "Harrison"]
# Solution 1
for e, n in enumerate(name):
print(e, n)
print(n, surname[e])
print("-----")
# Solution 2
for i in zip(name, surname):
print(i[0], i[1])
list(zip(name, surname))
###Output
_____no_output_____
###Markdown
**`while` loop**
###Code
# and want to stop once a certain condition is met.
step = 0
prod = 1
while prod < 100:
step = step + 1
prod = prod * 2
print(step, prod)
print('Reached a product of', prod, 'at step number', step)
###Output
1 2
2 4
3 8
4 16
5 32
6 64
7 128
Reached a product of 128 at step number 7
###Markdown
list comprehensions
###Code
[(y,x) for x,y in zip(name, surname)]
###Output
_____no_output_____
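A few more comprehension patterns, using the same example lists for illustration:

```python
name = ["John", "Ringo", "Paul", "George"]
surname = ["Lennon", "Star", "McCartney", "Harrison"]

# Build full names in a single expression
full = [n + " " + s for n, s in zip(name, surname)]

# A trailing if-clause filters items
short = [n for n in name if len(n) <= 4]

# Dict comprehensions work the same way
lookup = {n: s for n, s in zip(name, surname)}
```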
###Markdown
8. Control Flow
###Code
x = 0
if x > 0:
print("x is positive")
elif x < 0:
print("x is negative")
else:
print("x is zero")
###Output
x is zero
###Markdown
> **Challenge** Write a countdown!
###Code
range(10)
#time.sleep(1)
for number in range(10):
count = 10-number
print(count)
time.sleep(1)
if count == 1:
print("Engine start!")
###Output
10
9
8
7
6
5
4
3
2
1
Engine start!
###Markdown
9. IO Write a text file
###Code
f = open("../datasets/my_file.txt", "w")
for i in range(5):
f.write("Line {}\n".format(i))
f.close()
# using a context manager
with open("../datasets/my_file.txt", "a") as f:
for i in range(5):
f.write("LINE {}\n".format(i))
###Output
_____no_output_____
###Markdown
Read a file
###Code
with open ("../datasets/my_file.txt", "r") as f:
print(f.read())
###Output
Line 0
Line 1
Line 2
Line 3
Line 4
LINE 0
LINE 1
LINE 2
LINE 3
LINE 4
###Markdown
>**Challenge** * Extract the numerical values of the file `my_file.txt` into a list of floating point values.
###Code
my_storage = [] #list()
with open ("../datasets/my_file.txt", "r") as f:
for line in f:
number = float(line.split()[1])
my_storage.append(number)
my_storage
"LINE 0".split()
"LINE 0".split()[1]
float("LINE 0".split()[1])
###Output
_____no_output_____
###Markdown
10. Functions (UDFs)
###Code
def my_func(a,b,c=10):
rv = (a-b)*c
return rv
my_result = my_func(a=1, b=2)
my_result
###Output
_____no_output_____
###Markdown
> **Challenge** * Write a function that computes Kelvin from Fahrenheit (`fahrenheit_to_kelvin`)* Write a function that computes Celsius from Kelvin (`kelvin_to_celsius`)* Write a function that computes Celsius from Fahrenheit (`fahrenheit_to_celsius`); Reuse the two functions from above. @1
###Code
def fahrenheit_to_kelvin(a):
"""
    Function to compute Kelvin from Fahrenheit
"""
kelvin = (a-32.0)*5/9 + 273.15
return kelvin
fahrenheit_to_kelvin(341)
###Output
_____no_output_____
###Markdown
@2
###Code
def kelvin_to_celsius(temperature_K):
'''
Function to compute Celsius from Kelvin
'''
rv = temperature_K - 273.15
return rv
kelvin_to_celsius(0)
###Output
_____no_output_____
###Markdown
@3
###Code
def fahrenheit_to_celsius(temperature_F):
'''
    Function to compute Celsius from Fahrenheit
'''
temp_K = fahrenheit_to_kelvin(temperature_F)
temp_C = kelvin_to_celsius(temp_K)
return temp_C
###Output
_____no_output_____
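As a quick sanity check of the conversion formulas (the functions are redefined here so the snippet is self-contained):

```python
def fahrenheit_to_kelvin(temperature_F):
    return (temperature_F - 32.0) * 5 / 9 + 273.15

def kelvin_to_celsius(temperature_K):
    return temperature_K - 273.15

def fahrenheit_to_celsius(temperature_F):
    return kelvin_to_celsius(fahrenheit_to_kelvin(temperature_F))

# Known fixed points of the scales
assert abs(fahrenheit_to_kelvin(32) - 273.15) < 1e-9     # water freezes
assert abs(fahrenheit_to_celsius(212) - 100.0) < 1e-9    # water boils
assert abs(kelvin_to_celsius(0) + 273.15) < 1e-9         # absolute zero
```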
###Markdown
Code refactoring
###Code
%%writefile temperature_module.py
def kelvin_to_celsius(temperature_K):
'''
Function to compute Celsius from Kelvin
'''
rv = temperature_K - 273.15
return rv
def fahrenheit_to_celsius(temperature_F):
'''
    Function to compute Celsius from Fahrenheit
'''
temp_K = fahrenheit_to_kelvin(temperature_F)
temp_C = kelvin_to_celsius(temp_K)
return temp_C
def fahrenheit_to_kelvin(a):
"""
    Function to compute Kelvin from Fahrenheit
"""
kelvin = (a-32.0)*5/9 + 273.15
return kelvin
import temperature_module as tm
tm.kelvin_to_celsius(100)
tm.fahrenheit_to_celsius(100)
tm.fahrenheit_to_kelvin(100)
###Output
_____no_output_____
###Markdown
Final challenge Build rock paper scissors__Task__ Implement the classic children's game Rock-paper-scissors, as well as a simple predictive AI (artificial intelligence) player.Rock Paper Scissors is a two-player game.Each player chooses one of rock, paper or scissors, without knowing the other player's choice.The winner is decided by a set of rules: Rock beats scissors Scissors beat paper Paper beats rockIf both players choose the same thing, there is no winner for that round. If you don't know the rules you may find them [here](https://en.wikipedia.org/wiki/Rock%E2%80%93paper%E2%80%93scissors).For this task, the computer will be one of the players.The operator will select Rock, Paper or Scissors and the computer will keep a record of the choice frequency, and use that information to make a weighted random choice in an attempt to defeat its opponent.Consider the function `input()` to ask for user input. Try to implement an exit rule as well.
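One possible sketch of the frequency-weighted computer player the task describes, in isolation; the `ai_move` helper and its `history` argument are illustrative names, not part of any given code:

```python
from collections import Counter
from random import choices

rules = {'rock': 'paper', 'scissors': 'rock', 'paper': 'scissors'}

def ai_move(history):
    """Play the weapon that beats a frequency-weighted guess of the human's next move."""
    counts = Counter(history)
    if not counts:                       # no data yet: guess uniformly
        counts = Counter(['rock', 'paper', 'scissors'])
    weapons = list(counts)
    guess = choices(weapons, weights=[counts[w] for w in weapons])[0]
    return rules[guess]                  # the weapon that beats the guess

# A human who always plays rock gets countered with paper
move = ai_move(['rock', 'rock', 'rock'])
```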
###Code
from random import choice
rules = {'rock': 'paper', 'scissors': 'rock', 'paper': 'scissors'}
previous = ['rock', 'paper', 'scissors']
while True:
    human = input('\nchoose your weapon: ')
    if human in ('quit', 'exit'): break
    elif human in rules:
        previous.append(human)  # record the human's choice so the weighted pick adapts over time
        computer = rules[choice(previous)]  # play the weapon which beats a randomly chosen entry of "previous"
        print('the computer played', computer, end='; ')
        if rules[computer] == human: # if what beats the computer's choice is the human's choice...
            print('yay you win!')
        elif rules[human] == computer: # if what beats the human's choice is the computer's choice...
            print('the computer beat you... :(')
        else: print("it's a tie!")
    else: print("that's not a valid choice")
###Output
choose your weapon: exit
|
Basic_Classification/basic_classification.ipynb | ###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#@title MIT License
#
# Copyright (c) 2017 François Chollet
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
###Output
_____no_output_____
###Markdown
Train your first neural network: basic classification This guide trains a neural network model to classify images of clothing, like sneakers and shirts. It's okay if you don't understand all the details; this is a fast-paced overview of a complete TensorFlow program with the details explained as we go. This guide uses [tf.keras](https://www.tensorflow.org/guide/keras), a high-level API to build and train models in TensorFlow.
###Code
# TensorFlow and tf.keras
import tensorflow as tf
from tensorflow import keras
# Helper libraries
import numpy as np
import matplotlib.pyplot as plt
print(tf.__version__)
###Output
_____no_output_____
###Markdown
Import the Fashion MNIST dataset This guide uses the [Fashion MNIST](https://github.com/zalandoresearch/fashion-mnist) dataset which contains 70,000 grayscale images in 10 categories. The images show individual articles of clothing at low resolution (28 by 28 pixels), as seen here: <img src="https://tensorflow.org/images/fashion-mnist-sprite.png" alt="Fashion MNIST sprite" width="600"> Figure 1. Fashion-MNIST samples (by Zalando, MIT License). Fashion MNIST is intended as a drop-in replacement for the classic [MNIST](http://yann.lecun.com/exdb/mnist/) dataset—often used as the "Hello, World" of machine learning programs for computer vision. The MNIST dataset contains images of handwritten digits (0, 1, 2, etc) in an identical format to the articles of clothing we'll use here.This guide uses Fashion MNIST for variety, and because it's a slightly more challenging problem than regular MNIST. Both datasets are relatively small and are used to verify that an algorithm works as expected. They're good starting points to test and debug code. We will use 60,000 images to train the network and 10,000 images to evaluate how accurately the network learned to classify images. You can access the Fashion MNIST directly from TensorFlow, just import and load the data:
###Code
fashion_mnist = keras.datasets.fashion_mnist
(train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()
###Output
_____no_output_____
###Markdown
Loading the dataset returns four NumPy arrays:* The `train_images` and `train_labels` arrays are the *training set*—the data the model uses to learn.* The model is tested against the *test set*, the `test_images`, and `test_labels` arrays.The images are 28x28 NumPy arrays, with pixel values ranging between 0 and 255. The *labels* are an array of integers, ranging from 0 to 9. These correspond to the *class* of clothing the image represents:

| Label | Class       |
|-------|-------------|
| 0     | T-shirt/top |
| 1     | Trouser     |
| 2     | Pullover    |
| 3     | Dress       |
| 4     | Coat        |
| 5     | Sandal      |
| 6     | Shirt       |
| 7     | Sneaker     |
| 8     | Bag         |
| 9     | Ankle boot  |

Each image is mapped to a single label. Since the *class names* are not included with the dataset, store them here to use later when plotting the images:
###Code
class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']
###Output
_____no_output_____
###Markdown
Explore the dataLet's explore the format of the dataset before training the model. The following shows there are 60,000 images in the training set, with each image represented as 28 x 28 pixels:
###Code
train_images.shape
###Output
_____no_output_____
###Markdown
Likewise, there are 60,000 labels in the training set:
###Code
len(train_labels)
###Output
_____no_output_____
###Markdown
Each label is an integer between 0 and 9:
###Code
train_labels
###Output
_____no_output_____
###Markdown
There are 10,000 images in the test set. Again, each image is represented as 28 x 28 pixels:
###Code
test_images.shape
###Output
_____no_output_____
###Markdown
And the test set contains 10,000 image labels:
###Code
len(test_labels)
###Output
_____no_output_____
###Markdown
Preprocess the dataThe data must be preprocessed before training the network. If you inspect the first image in the training set, you will see that the pixel values fall in the range of 0 to 255:
###Code
plt.figure()
plt.imshow(train_images[0])
plt.colorbar()
plt.grid(False)
###Output
_____no_output_____
###Markdown
We scale these values to a range of 0 to 1 before feeding to the neural network model. For this, we divide the values by 255. It's important that the *training set* and the *testing set* are preprocessed in the same way:
###Code
train_images = train_images / 255.0
test_images = test_images / 255.0
###Output
_____no_output_____
###Markdown
Display the first 25 images from the *training set* and display the class name below each image. Verify that the data is in the correct format and we're ready to build and train the network.
###Code
plt.figure(figsize=(10,10))
for i in range(25):
plt.subplot(5,5,i+1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
plt.imshow(train_images[i], cmap=plt.cm.binary)
plt.xlabel(class_names[train_labels[i]])
###Output
_____no_output_____
###Markdown
Build the modelBuilding the neural network requires configuring the layers of the model, then compiling the model. Setup the layersThe basic building block of a neural network is the *layer*. Layers extract representations from the data fed into them. And, hopefully, these representations are more meaningful for the problem at hand.Most of deep learning consists of chaining together simple layers. Most layers, like `tf.keras.layers.Dense`, have parameters that are learned during training.
###Code
model = keras.Sequential([
keras.layers.Flatten(input_shape=(28, 28)),
keras.layers.Dense(128, activation=tf.nn.relu),
keras.layers.Dense(10, activation=tf.nn.softmax)
])
###Output
_____no_output_____
###Markdown
The first layer in this network, `tf.keras.layers.Flatten`, transforms the format of the images from a 2d-array (of 28 by 28 pixels), to a 1d-array of 28 * 28 = 784 pixels. Think of this layer as unstacking rows of pixels in the image and lining them up. This layer has no parameters to learn; it only reformats the data.After the pixels are flattened, the network consists of a sequence of two `tf.keras.layers.Dense` layers. These are densely-connected, or fully-connected, neural layers. The first `Dense` layer has 128 nodes (or neurons). The second (and last) layer is a 10-node *softmax* layer—this returns an array of 10 probability scores that sum to 1. Each node contains a score that indicates the probability that the current image belongs to one of the 10 classes. Compile the modelBefore the model is ready for training, it needs a few more settings. These are added during the model's *compile* step:* *Loss function* —This measures how accurate the model is during training. We want to minimize this function to "steer" the model in the right direction.* *Optimizer* —This is how the model is updated based on the data it sees and its loss function.* *Metrics* —Used to monitor the training and testing steps. The following example uses *accuracy*, the fraction of the images that are correctly classified.
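What `Flatten` does can be sketched with plain NumPy, independent of TensorFlow (the array names here are illustrative):

```python
import numpy as np

# A toy "image": 28 x 28 pixel values
image = np.arange(28 * 28).reshape(28, 28)

# Flattening turns the 2-d image into a 1-d vector of 28 * 28 = 784 values,
# which is what the following Dense layer consumes
flat = image.reshape(-1)

# For a whole batch, only the image dimensions are collapsed
batch = np.zeros((32, 28, 28))
flat_batch = batch.reshape(batch.shape[0], -1)
```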
###Code
model.compile(optimizer=tf.train.AdamOptimizer(),
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
Train the modelTraining the neural network model requires the following steps:1. Feed the training data to the model—in this example, the `train_images` and `train_labels` arrays.2. The model learns to associate images and labels.3. We ask the model to make predictions about a test set—in this example, the `test_images` array. We verify that the predictions match the labels from the `test_labels` array. To start training, call the `model.fit` method—the model is "fit" to the training data:
###Code
model.fit(train_images, train_labels, epochs=5)
###Output
_____no_output_____
###Markdown
As the model trains, the loss and accuracy metrics are displayed. This model reaches an accuracy of about 0.88 (or 88%) on the training data. Evaluate accuracyNext, compare how the model performs on the test dataset:
###Code
test_loss, test_acc = model.evaluate(test_images, test_labels)
print('Test accuracy:', test_acc)
###Output
_____no_output_____
###Markdown
It turns out, the accuracy on the test dataset is a little less than the accuracy on the training dataset. This gap between training accuracy and test accuracy is an example of *overfitting*. Overfitting is when a machine learning model performs worse on new data than on its training data. Make predictionsWith the model trained, we can use it to make predictions about some images.
###Code
predictions = model.predict(test_images)
###Output
_____no_output_____
###Markdown
Here, the model has predicted the label for each image in the testing set. Let's take a look at the first prediction:
###Code
predictions[0]
###Output
_____no_output_____
###Markdown
A prediction is an array of 10 numbers. These describe the "confidence" of the model that the image corresponds to each of the 10 different articles of clothing. We can see which label has the highest confidence value:
###Code
np.argmax(predictions[0])
###Output
_____no_output_____
###Markdown
So the model is most confident that this image is an ankle boot, or `class_names[9]`. And we can check the test label to see this is correct:
###Code
test_labels[0]
###Output
_____no_output_____
###Markdown
We can graph this to look at the full set of 10 class predictions.
###Code
def plot_image(i, predictions_array, true_label, img):
predictions_array, true_label, img = predictions_array[i], true_label[i], img[i]
plt.grid(False)
plt.xticks([])
plt.yticks([])
plt.imshow(img, cmap=plt.cm.binary)
predicted_label = np.argmax(predictions_array)
if predicted_label == true_label:
color = 'blue'
else:
color = 'red'
plt.xlabel("{} {:2.0f}% ({})".format(class_names[predicted_label],
100*np.max(predictions_array),
class_names[true_label]),
color=color)
def plot_value_array(i, predictions_array, true_label):
predictions_array, true_label = predictions_array[i], true_label[i]
plt.grid(False)
plt.xticks([])
plt.yticks([])
thisplot = plt.bar(range(10), predictions_array, color="#777777")
plt.ylim([0, 1])
predicted_label = np.argmax(predictions_array)
thisplot[predicted_label].set_color('red')
thisplot[true_label].set_color('blue')
###Output
_____no_output_____
###Markdown
Let's look at the 0th image, predictions, and prediction array.
###Code
i = 0
plt.figure(figsize=(6,3))
plt.subplot(1,2,1)
plot_image(i, predictions, test_labels, test_images)
plt.subplot(1,2,2)
plot_value_array(i, predictions, test_labels)
i = 12
plt.figure(figsize=(6,3))
plt.subplot(1,2,1)
plot_image(i, predictions, test_labels, test_images)
plt.subplot(1,2,2)
plot_value_array(i, predictions, test_labels)
###Output
_____no_output_____
###Markdown
Let's plot several images with their predictions. Correct prediction labels are blue and incorrect prediction labels are red. The number gives the percent (out of 100) for the predicted label. Note that it can be wrong even when very confident.
###Code
# Plot the first X test images, their predicted label, and the true label
# Color correct predictions in blue, incorrect predictions in red
num_rows = 5
num_cols = 3
num_images = num_rows*num_cols
plt.figure(figsize=(2*2*num_cols, 2*num_rows))
for i in range(num_images):
plt.subplot(num_rows, 2*num_cols, 2*i+1)
plot_image(i, predictions, test_labels, test_images)
plt.subplot(num_rows, 2*num_cols, 2*i+2)
plot_value_array(i, predictions, test_labels)
###Output
_____no_output_____
###Markdown
Finally, use the trained model to make a prediction about a single image.
###Code
# Grab an image from the test dataset
img = test_images[0]
print(img.shape)
###Output
_____no_output_____
###Markdown
`tf.keras` models are optimized to make predictions on a *batch*, or collection, of examples at once. So even though we're using a single image, we need to add it to a list:
###Code
# Add the image to a batch where it's the only member.
img = (np.expand_dims(img,0))
print(img.shape)
###Output
_____no_output_____
###Markdown
Now predict the image:
###Code
predictions_single = model.predict(img)
print(predictions_single)
plot_value_array(0, predictions_single, test_labels)
_ = plt.xticks(range(10), class_names, rotation=45)
###Output
_____no_output_____
###Markdown
`model.predict` returns a list of lists, one for each image in the batch of data. Grab the predictions for our (only) image in the batch:
###Code
np.argmax(predictions_single[0])
###Output
_____no_output_____ |
Univariate Linear Regression/Model/House_price_prediction_Univariate.ipynb | ###Markdown
***Plot between model price and actual price***
###Code
model_price= x*new_theta.T
fig,ax = plt.subplots(figsize=(12,8))
ax.plot(data.Size,model_price,'r',label= 'Prediction')
ax.scatter(data.Size,data.Price,label= 'Training data')
ax.legend()
ax.set_xlabel('Size')
ax.set_ylabel('Price')
ax.set_title('Predicted price vs Actual price')
fig,ax = plt.subplots(figsize=(12,8))
ax.plot(np.arange(iters),cost,'r',label= 'Error Vs Cost')
ax.legend(loc=3)
ax.set_xlabel('Iterations')
ax.set_ylabel('cost')
ax.set_title('Error Vs Iterations')
###Output
_____no_output_____
###Markdown
***Error and Accuracy calculations***
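What `mean_absolute_error` computes can be checked by hand on a toy pair of arrays (the numbers here are illustrative):

```python
import numpy as np

predicted = np.array([1.0, 2.0, 4.0])
actual = np.array([1.5, 2.0, 3.0])

# Mean absolute error: the average of |prediction - truth|
mae = np.mean(np.abs(predicted - actual))   # (0.5 + 0.0 + 1.0) / 3 = 0.5
```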
###Code
from sklearn.metrics import mean_absolute_error
Error = mean_absolute_error(model_price,y)
Accuracy = 1-Error
print('Error = {} %'.format(round(Error*100,2)))
print('Accuracy = {} %'.format(round(Accuracy*100,2)))
###Output
Error = 0.94 %
Accuracy = 99.06 %
###Markdown
***Prediction***
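The mean normalisation and its inverse used in `predict()` can be sketched with toy stand-ins for `raw_data` (the series below are illustrative, not the actual dataset):

```python
import pandas as pd

# Toy stand-ins for raw_data.Size and raw_data.Price (assumed columns)
sizes = pd.Series([800.0, 1200.0, 2000.0])
prices = pd.Series([2.0e6, 3.5e6, 6.0e6])

# Forward mean normalisation, as applied to the size input
size = 1200.0
norm = (size - sizes.mean()) / (sizes.max() - sizes.min())

# Reverse mean normalisation maps a normalised price back to currency units
norm_price = 0.1
price = norm_price * (prices.max() - prices.min()) + prices.mean()
```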
###Code
def predict(new_theta,accuracy):
#get input from the user
size= float(input("Enter the size of the House in sqft.:"))
#Mean Normalisation
size= (size - raw_data.Size.mean())/(raw_data.Size.max()-raw_data.Size.min())
#Model
price = (new_theta[0,0] + (new_theta[0,1]*size))
#Reverse Mean Normalisation
Predicted_Price = (price* (raw_data.Price.max()-raw_data.Price.min())) + (raw_data.Price.mean())
Price_at_max_accuracy = (Predicted_Price*(1/accuracy))
Price_range = Price_at_max_accuracy - Predicted_Price
return Predicted_Price, Price_range
Predicted_price, Price_range = predict(new_theta,Accuracy)
print("Your house cost is",str(round(Predicted_price)),'(+ or -)',str(Price_range))
###Output
Enter the size of the House in sqft.:1200
Your house cost is 3751161.0 (+ or -) 35691.47797277197
|
examples/sampling/adaptive-covariance-haario-bardenet.ipynb | ###Markdown
Inference: Haario-Bardenet adaptive covariance MCMCThis example shows you how to perform Bayesian inference on a time series, using a variant of [Adaptive Covariance MCMC](https://pints.readthedocs.io/en/latest/mcmc_samplers/haario_bardenet_ac_mcmc.html) detailed in supplementary materials of [1].[1] Uncertainty and variability in models of the cardiac action potential: Can we build trustworthy models? Johnstone, Chang, Bardenet, de Boer, Gavaghan, Pathmanathan, Clayton, Mirams (2015) Journal of Molecular and Cellular CardiologyIt follows on from the [first sampling example](./first-example.ipynb).
###Code
import pints
import pints.toy as toy
import pints.plot
import numpy as np
import matplotlib.pyplot as plt
import time
# Load a forward model
model = toy.LogisticModel()
# Create some toy data
real_parameters = [0.015, 500]
times = np.linspace(0, 1000, 1000)
org_values = model.simulate(real_parameters, times)
# Add noise
noise = 10
values = org_values + np.random.normal(0, noise, org_values.shape)
real_parameters = np.array(real_parameters + [noise])
# Get properties of the noise sample
noise_sample_mean = np.mean(values - org_values)
noise_sample_std = np.std(values - org_values)
# Create an object with links to the model and time series
problem = pints.SingleOutputProblem(model, times, values)
# Create a log-likelihood function (adds an extra parameter!)
log_likelihood = pints.GaussianLogLikelihood(problem)
# Create a uniform prior over both the parameters and the new noise variable
log_prior = pints.UniformLogPrior(
[0.01, 400, noise*0.1],
[0.02, 600, noise*100]
)
# Create a posterior log-likelihood (log(likelihood * prior))
log_posterior = pints.LogPosterior(log_likelihood, log_prior)
# Choose starting points for 3 mcmc chains
xs = [
real_parameters * 1.1,
real_parameters * 0.9,
real_parameters * 1.15,
]
# Create mcmc routine with three chains
mcmc = pints.MCMCController(log_posterior, 3, xs, method=pints.HaarioBardenetACMC)
# Add stopping criterion
mcmc.set_max_iterations(4000)
# Start adapting after 1000 iterations
mcmc.set_initial_phase_iterations(1000)
# Disable logging mode
mcmc.set_log_to_screen(False)
# time start
start = time.time()
# Run!
print('Running...')
chains = mcmc.run()
print('Done!')
# end time
end = time.time()
elapsed = end - start  # avoid shadowing the imported time module
# Discard warm up
chains = chains[:, 2000:, :]
# Look at distribution across all chains
pints.plot.pairwise(np.vstack(chains), kde=False)
# Show graphs
plt.show()
###Output
Running...
Done!
###Markdown
Use a results object to tabulate parameter-specific results.
###Code
results = pints.MCMCSummary(chains=chains, time=elapsed, parameter_names=["r", "k", "sigma"])
print(results)
###Output
param mean std. 2.5% 25% 50% 75% 97.5% rhat ess ess per sec.
------- ------ ------ ------ ------ ------ ------ ------- ------ ------ --------------
r 0.01 0.00 0.01 0.01 0.01 0.02 0.02 1.00 552.33 158.05
k 500.06 0.47 499.15 499.76 500.07 500.39 500.99 1.00 525.02 150.24
sigma 10.08 0.22 9.66 9.93 10.07 10.22 10.52 1.01 436.92 125.03
|
bitmex-inflow-outflow/notebooks/1.0-ea-outflow-check.ipynb | ###Markdown
Creates the Outflow Graph
###Code
import os
import json
import requests
import datetime
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
def start_end_time():
endTime = datetime.datetime.now()
startTime = endTime - datetime.timedelta(30)
endTime = str(int(endTime.timestamp()))
startTime = str(int(startTime.timestamp()))
return startTime, endTime
def get_response(url, headers=None, queryString=None):
"Get the REST response from the specified URL"
if not headers:
        headers = {'x-api-key': os.getenv("AMBERDATA_API_KEY")}  # fall back to the key from the environment
if queryString:
response = requests.request("GET", url, headers=headers, params=queryString)
else:
response = requests.request("GET", url, headers=headers)
response = json.loads(response.text)
try:
if response["title"] == "OK":
return response["payload"]
except Exception:
print(response)
return None
def reindex(data, index):
""" Returns the DataFrame calculated w/ inflow & outflow
:type data: DataFrame
:type index: List[int]
:rtype: DataFrame
"""
d = np.digitize(data.timestamp.values, index)
g = data[["inflow", "outflow"]].groupby(d).sum()
g = g.reindex(range(24*30), fill_value=0)
g.index = index
return g
def inflow_outflow(data: dict):
"Returns the inflow and outflow of the payload"
# get the column names
columns = data["metadata"]["columns"]
# load the data, dropping timestampNano
ad_hist = pd.DataFrame(data["data"], columns=columns).drop("timestampNanoseconds", axis=1)
# change dtype of appropriate columns to Int
ad_hist[["blockNumber", "timestamp", "value"]] = ad_hist[["blockNumber", "timestamp", "value"]].apply(pd.to_numeric)
# sort by blockNum desc
ad_hist = ad_hist.sort_values("timestamp").reset_index(drop=True)
# calculate inflow and outflow
ad_hist["diff"] = ad_hist["value"].diff()
ad_hist["inflow"] = np.where(ad_hist["diff"] > 0, ad_hist["diff"], 0)
ad_hist["outflow"] = np.where(ad_hist["diff"] < 0, abs(ad_hist["diff"]), 0)
# return the result
return ad_hist
def daily_inflow_outflow(address, headers, querystring):
url = "https://web3api.io/api/v2/addresses/" + address + "/account-balances/historical"
try:
payload = get_response(url=url, headers=headers, queryString=querystring)
except Exception:
return None
if len(payload["data"]) > 1: # if there is activity in the period
# calculate inflow / outflow
data = inflow_outflow(payload)
# get in the format to merge with master inflow/outflow data
g = reindex(data, index)
return g
startTime, endTime = start_end_time()
index = [10**3*(int(startTime) + i*60**2) for i in range(24*30)]
querystring = {"startDate": startTime,
"endDate": endTime
}
headers = {
'x-amberdata-blockchain-id': "bitcoin-mainnet",
'x-api-key': os.getenv("AMBERDATA_API_KEY")
}
df = pd.read_csv("../input/addresses_all.csv")
# check if we are running the full calculation
addresses = df.Address.values
activ = []
i = 0
while len(activ) < 30:
    url = "https://web3api.io/api/v2/addresses/" + addresses[i] + "/account-balances/historical"
    payload = None  # reset so a failed request is not confused with the previous payload
    try:
        payload = get_response(url=url, headers=headers, queryString=querystring)
    except Exception:
        pass
    i += 1
    if payload is not None and len(payload["data"]) > 1: # if there is activity in the period
        # calculate inflow / outflow
        data = inflow_outflow(payload)
        # get in the format to merge with master inflow/outflow data
        g = reindex(data, index)
        g.index = [datetime.datetime.fromtimestamp(ts//10**3) for ts in g.index.values]
        activ.append(g)
N = 30
data = [i.outflow for i in activ[:N]]
for i in range(len(data)):
plt.plot(data[i])
plt.title(f"BitMEX Outflows timing-{N} Addresses")
plt.xticks(rotation=45)
plt.savefig("../plots/btc_outflow.png", bbox_inches="tight")
# code inspired by http://blog.josephmisiti.com/group-by-datetimes-in-pandas
# load in the inflow data, rename columns
combined = pd.DataFrame(data).T
combined.columns = [str(i) for i in range(N)]
# simply indicate if outflow > 0
combined = combined.applymap(lambda x: 1 if x > 0 else 0)
# bring index to a column
combined = combined.reset_index().rename({"index": "ts"}, axis=1)
# making date column from timestamp
combined['date'] = combined["ts"].apply(lambda df: datetime.datetime(year=df.year, month=df.month, day=df.day))
# make dates the index
combined.set_index(combined["date"],inplace=True)
# dropping unused date and timestamp columns
combined = combined.drop(["date", "ts"], axis=1)
# group by days
combined = combined.resample('D').sum()
# test our assumption of 1 outflow per day
if combined.max(axis=1).max() == 1:
    print("Our assumption is safe.")
else:
    print("Incorrect assumption!")
###Output
Our assumption is safe.
|
OOPShiNetworkModelCSV.ipynb | ###Markdown
###Code
import json
from google.colab import drive
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import random as rnd
import networkx as nx
import math
import requests
import csv
url = "https://ric-colasanti.github.io/ASPIREColab/Data/2009data.csv"
data = requests.get(url)
lines = data.content.decode('utf-8')
cr = csv.reader(lines.splitlines(), delimiter=',')
shi_2009 = list(cr)
print(shi_2009[0])
print(len(shi_2009))
hold1 =0
hold2 = 0
hold3 =0
class Person:
T_PA = 0.12
T_EI = 0.07
def __init__(self,key,data):
self.id = key
self.gender = int(data[0])
self.age = int(data[1])
self.height = float(data[2])
self.BW = float(data[3])
self.start_BW = self.BW
self.EI = float(data[4])
self.BEE = float(data[5])
self.BW_2011 = float(data[6])
self.BMI = self.BW/(self.height * self.height)
self.BEE_calc = self.calc_BEE()
self.linked = []
self.is_part = False
self.can_choose = False
self.xpos =rnd.random()
self.ypos =rnd.random()
self.EE = self.EI #0.0 # Energy expenditure
self.PA = 0.9 * self.EI - self.BEE# 0.0 # Physical activity
self.Env = (rnd.random()*(1.08-0.82))+0.82
if self.BMI<18.5:
self.BMI_catagory = 1
elif self.BMI>=18.5 and self.BMI<24:
self.BMI_catagory = 2
elif self.BMI>=24 and self.BMI<28:
self.BMI_catagory = 3
else:
self.BMI_catagory = 4
def calc_BEE(self):
if self.gender ==1:
self.BEE = ((66.5 + 13.6 * self.BW + 500 * self.height - 6.8 * self.age) * 4186 / 1000000)
else:
self.BEE = ((655.1 + 9.5 * self.BW + 180 * self.height - 4.1 * self.age)* 4186 / 1000000)
def diffuse_behavior(self):# the calculation of influence and EI/PA change
global hold1, hold2, hold3
inf_PA = 0
inf_EI = 0
temp = 0
s = 0
inf_PA_Env = 0
inf_EI_Env = 0
for agent in self.linked:
hold2 +=1
temp = agent.PA - self.PA
s += temp
inf_PA = (1 / len(self.linked)) * s
temp = 0
s = 0
for agent in self.linked:
temp = agent.EI - self.EI
s += temp
inf_EI = (1 / len(self.linked)) * s
if inf_PA >= 0:
inf_PA_Env = inf_PA * self.Env
else:
inf_PA_Env = inf_PA / self.Env
if inf_EI < 0:
inf_EI_Env = inf_EI * self.Env
else:
inf_EI_Env = inf_EI / self.Env
if (inf_PA_Env > 0) and (abs(inf_PA_Env) > Person.T_PA * self.PA):
self.PA *= (1 + 0.05)
if (inf_PA_Env < 0) and (abs(inf_PA_Env) > Person.T_PA * self.PA):
self.PA *= (1 - 0.05)
if (inf_EI_Env > 0) and (abs(inf_EI_Env) > Person.T_EI * self.EI):
self.EI *= (1 + 0.05)
if (inf_EI_Env < 0) and (abs(inf_EI_Env) > Person.T_EI * self.EI):
self.EI *= (1 - 0.05)
hold1 += inf_EI
hold3 += inf_PA
    def update(self):  # the calculation of BW change
        self.EE = self.BEE + 0.1 * self.EI + self.PA
        EIB = 7 * (self.EI - self.EE) / 5  # weekly energy imbalance
        self.BW += (EIB / (7 * math.log(self.BW + 1) + 5))
        self.calc_BEE()
    def distance(self, agent):
        # Euclidean distance between the two agents' positions
        x_sqr = (self.xpos - agent.xpos) ** 2
        y_sqr = (self.ypos - agent.ypos) ** 2
        return math.sqrt(x_sqr + y_sqr)
class Population:
def __init__(self,selected_population):
self.persons = []
self.npos ={}
self.colors=[]
self.graph = nx.Graph(directed=False)
bcolors=["white","red","green","blue","yellow"]
for i in range(len(selected_population)):
new_person = Person(i,selected_population[i])
self.graph.add_node(new_person.id)
self.npos[new_person.id]=(new_person.xpos,new_person.ypos)
self.colors.append(bcolors[new_person.BMI_catagory])
self.persons.append(new_person)
def makeLink(self,agent,choice):
if self.graph.has_edge(agent.id,choice.id)==False:
self.graph.add_edge(agent.id,choice.id)
choice.linked.append(agent)
agent.linked.append(choice)
def linkAgentTo(self,agent):
candidate = list(filter(self.chosen_not_self_filter(agent),self.persons))
sink_agent = rnd.choice(candidate)
if rnd.random()>0.2:
candidate = list(filter(self.homophily_filter(agent),self.persons))
choicelist =[]
for agnt in candidate:
for _ in range(len(agnt.linked)):
choicelist.append(agnt)
if len(choicelist)>0 :
choice = rnd.choice(choicelist)
else:
choice = sink_agent
self.makeLink(agent,choice)
# if flag and choice in self.not_linked and len(choice.linked_to) >0:
# self.not_linked.remove(choice)
def makeGraph(self,ld = 0.267):
i = 0
while i < len(self.persons):
linkable = list(filter(self.can_choose_filter(),self.persons))
if (rnd.random()<=ld) and (len(linkable)>2):
agent = rnd.choice(linkable)
self.linkAgentTo(agent)
else:
not_linked = list(filter(self.not_chosen_filter(),self.persons))
agent = rnd.choice(not_linked)
agent.can_choose = True
i+=1
not_linked = list(filter(self.not_linked_to_filter(),self.persons))
for agent in not_linked:
self.linkAgentTo(agent)
def homophily_filter(self,agent):
agent = agent
def infun(x):
d = x
a_d = agent
if x.can_choose == False:
return False
if x == agent:
return False
if (d.BMI_catagory == a_d.BMI_catagory ) and (x.gender == agent.gender) and (abs(d.age-a_d.age)<4):
return True
elif (x.gender == agent.gender) and (abs(d.age-a_d.age)<4) and (agent.distance(x)<0.2):
return True
elif (x.gender == agent.gender) and (d.BMI_catagory == a_d.BMI_catagory) and (agent.distance(x)<0.2):
return True
elif (abs(d.age-a_d.age)<4) and (d.BMI_catagory == a_d.BMI_catagory) and (agent.distance(x)<0.2):
return True
else:
return False
return infun
def not_linked_to_filter(self):
def infun(x):
if len(x.linked) ==0:
return True
#if len(x.linked_from) == 0:
# return True
return False
return infun
def chosen_not_self_filter(self,agent):
agent = agent
def infun(x):
if x == agent:
return False
elif x.can_choose:
return True
return False
return infun
def not_chosen_filter(self):
def infun(x):
if x.can_choose:
return False
return True
return infun
def can_choose_filter(self):
def infun(x):
if x.can_choose and x.is_part:
return True
return False
return infun
def run(self):
for day in range(365*2):
if day % 7 == 0:
#print(day)
for person in self.persons:
person.diffuse_behavior()
for person in self.persons:
person.update()
population = Population(shi_2009)
population.makeGraph()
weights2009 = []
weights2011calc = []
weights2011shi = []
population.run()
for person in population.persons:
weights2009.append(person.start_BW)
weights_np_2009 = np.array(weights2009)
for person in population.persons:
weights2011calc.append(person.BW)
weights_np_2011calc = np.array(weights2011calc)
for person in population.persons:
weights2011shi.append(person.BW_2011)
weights_np_2011shi = np.array(weights2011shi)
bins = [x for x in range(0,150,5)]
plt.rcParams["figure.figsize"] = (12,12)
plt.hist([weights_np_2011calc,weights_np_2011shi] ,bins=bins,label=["2011calc","2011"])
plt.xlabel("Body weight kg")
plt.ylabel("Number of persons")
plt.legend()
plt.show()
print(hold1)
print(hold3)
print(hold2)
count = 0
for person in population.persons:
count+=len(person.linked)
print(count)
print(np.array(weights2009).mean())
print(np.array(weights2011shi).mean())
print(np.array(weights2011calc).mean())
from scipy.stats import ttest_ind
res = ttest_ind(np.array(weights2009), np.array(weights2011shi),equal_var = True)
print(res)
res = ttest_ind(np.array(weights2009), np.array(weights2011calc),equal_var = True)
print(res)
res = ttest_ind(np.array(weights2011shi), np.array(weights2011calc),equal_var = True)
print(res)
# gcc = sorted(nx.connected_components(population.graph), key=len, reverse=True)
# graph = population.graph.subgraph(gcc[0])
# degree_sequence = sorted([d for n, d in population.graph.degree()], reverse=True)
# plt.rcParams["figure.figsize"] = (30,10)
# plt.subplot(1,3,1)
# nx.draw(population.graph,pos=population.npos,node_size=10,node_color=population.colors,width=0.1,arrows=False)
# plt.subplot(1,3,2)
# nx.draw(population.graph,node_color=population.colors,node_size=10,width=0.1,arrows=False)
# plt.subplot(1,3,3)
# x,y =np.unique(degree_sequence, return_counts=True)
# plt.bar(x,y)
# #plt.subplot(2,2,4)
# #plt.plot()
# plt.show()
# print("Average shortest path length",nx.average_shortest_path_length(graph))
# print("Average clustering",nx.average_clustering(graph))
# print("number of nodes", graph.number_of_nodes())
# weights2009 = []
# weights2011calc = []
# weights2011shi = []
# population.run()
# for person in population.persons:
# weights2009.append(person.start_BW)
# weights_np_2009 = np.array(weights2009)
# for person in population.persons:
# weights2011calc.append(person.BW)
# weights_np_2011calc = np.array(weights2011calc)
# for person in population.persons:
# weights2011shi.append(person.BW_2011)
# weights_np_2011shi = np.array(weights2011shi)
# bins = [x for x in range(0,150,5)]
# plt.rcParams["figure.figsize"] = (12,12)
# plt.hist([weights_np_2009,weights_np_2011calc,weights_np_2011shi] ,bins=bins,label=["2009","2011calc","2011"])
# plt.xlabel("Body weight kg")
# plt.ylabel("Number of persons")
# plt.legend()
# plt.show()
###Output
_____no_output_____ |
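For reference, the weekly body-weight update applied in `Person.update()` can be isolated as a standalone function (a sketch using the same formula; the sample numbers are illustrative, not taken from the dataset):

```python
import math

def weight_change(bw, ei, ee):
    # Weekly body-weight update as in Person.update(): a positive energy
    # imbalance (EI > EE) increases BW, damped by the current body weight.
    eib = 7 * (ei - ee) / 5
    return bw + eib / (7 * math.log(bw + 1) + 5)

# A zero energy imbalance leaves body weight unchanged.
print(weight_change(70.0, 10.0, 10.0))  # -> 70.0
```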
PyBoss/employee_data.ipynb | ###Markdown
Employee Data Cleaning Process
###Code
# import the dependencies
import pandas as pd
import re
# import csv data and create dataframe
employee_csv = "data/employee_data.csv"
employee_df = pd.read_csv(employee_csv)
# first 5 rows of the dataframe
employee_df.head()
# last 5 rows of the dataframe
employee_df.tail()
###Output
_____no_output_____
###Markdown
Split First and Last Name of the Employee
###Code
# split the employees' names into first and last name
employee_df[['First Name', 'Last Name']] = employee_df.Name.str.split(expand=True)
# check the data
employee_df
# reorder the columns and remove unnecessary columns of the data
organized_df = employee_df[['Emp ID', 'First Name', 'Last Name', 'DOB', 'SSN', 'State']]
organized_df
###Output
_____no_output_____
###Markdown
Rewrite `State` as an abbreviation
###Code
# import us state abbreviation
us_state_abbrev = {
'Alabama': 'AL',
'Alaska': 'AK',
'Arizona': 'AZ',
'Arkansas': 'AR',
'California': 'CA',
'Colorado': 'CO',
'Connecticut': 'CT',
'Delaware': 'DE',
'Florida': 'FL',
'Georgia': 'GA',
'Hawaii': 'HI',
'Idaho': 'ID',
'Illinois': 'IL',
'Indiana': 'IN',
'Iowa': 'IA',
'Kansas': 'KS',
'Kentucky': 'KY',
'Louisiana': 'LA',
'Maine': 'ME',
'Maryland': 'MD',
'Massachusetts': 'MA',
'Michigan': 'MI',
'Minnesota': 'MN',
'Mississippi': 'MS',
'Missouri': 'MO',
'Montana': 'MT',
'Nebraska': 'NE',
'Nevada': 'NV',
'New Hampshire': 'NH',
'New Jersey': 'NJ',
'New Mexico': 'NM',
'New York': 'NY',
'North Carolina': 'NC',
'North Dakota': 'ND',
'Ohio': 'OH',
'Oklahoma': 'OK',
'Oregon': 'OR',
'Pennsylvania': 'PA',
'Rhode Island': 'RI',
'South Carolina': 'SC',
'South Dakota': 'SD',
'Tennessee': 'TN',
'Texas': 'TX',
'Utah': 'UT',
'Vermont': 'VT',
'Virginia': 'VA',
'Washington': 'WA',
'West Virginia': 'WV',
'Wisconsin': 'WI',
'Wyoming': 'WY',
}
# Tried below first, but error message still showed up
# organized_df['State'] = organized_df.loc[:,'State'].map(us_state_abbrev).fillna(organized_df.loc[:,'State'])
# fill all the states with abbreviation
organized_df['State'] = organized_df['State'].map(us_state_abbrev).fillna(organized_df['State'])
organized_df
# Check the data time for organized_df
organized_df.dtypes
###Output
_____no_output_____
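`Series.map` returns `NaN` for values missing from the mapping dict, which is why the `.fillna(...)` fallback above is needed to keep entries that are already abbreviated. A small sketch with made-up values:

```python
import pandas as pd

s = pd.Series(["Texas", "CA"])  # one full state name, one already abbreviated
# map() abbreviates known names and yields NaN for "CA"; fillna(s) restores it.
mapped = s.map({"Texas": "TX"}).fillna(s)
print(list(mapped))  # -> ['TX', 'CA']
```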
###Markdown
Rewrite Date Of Birth in `MM/DD/YYYY` (Month-Date-Year) format
###Code
# Tried below first, but error message still showed up
# organized_df['DOB'] = pd.to_datetime(organized_df.loc[:,'DOB'], errors='coerce', utc=True).dt.strftime('%m/%d/%Y')
# reformat employees' date of birth MM/DD/YYYY
organized_df['DOB'] = pd.to_datetime(organized_df['DOB'], errors='coerce', utc=True).dt.strftime('%m/%d/%Y')
organized_df
###Output
D:\Anaconda\envs\main-env\lib\site-packages\ipykernel_launcher.py:5: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
"""
###Markdown
Hide the employees' SSN
###Code
# check the dataset
organized_df
# check each types of columns
organized_df.dtypes
# overwrite SSN in the dataset
organized_df.SSN = organized_df.SSN.apply(lambda x: re.sub(r'\d', '*', x, count=5))
organized_df
# check the data before saving
organized_df
# save as a new csv file
organized_df.to_csv('data/clean_employee_data.csv', index=True)
###Output
_____no_output_____
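The masking above relies on the `count` argument of `re.sub`, which stops after a fixed number of replacements. With `count=5`, only the first five digit characters are replaced, leaving the last four of a nine-digit SSN visible (the SSN below is a made-up example):

```python
import re

ssn = "123-45-6789"  # illustrative value, not real data
# Replace only the first five digits with '*'; hyphens are untouched
# because the pattern matches digits only.
masked = re.sub(r"\d", "*", ssn, count=5)
print(masked)  # -> ***-**-6789
```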
###Markdown
Creating new column using apply syntax (saved for later)
###Code
# Create a new column
# This hides all numbers in SSN.
# organized_df['SSN_hidden'] = organized_df.SSN.apply(lambda x: re.sub(r'\d', '*', x, count=5))
# organized_df
###Output
_____no_output_____
###Markdown
Testing out syntaxes (saved for later):
###Code
# employee_df["Name"] = employee_df["Name"].str.split(" ", n=1, expand=True)
# employee_df
# split_names = employee_df["Name"].str.split(" ")
# names = split_names.to_list()
# last_first = ["First Name", "Last Name"]
# new_employee_df = pd.DataFrame(last_first, columns=names)
# print(new_employee_df)
# employee_df["Name"].str.split(" ", expand=True)
###Output
_____no_output_____ |
src/user_guide/shared_variables.ipynb | ###Markdown
Sharing variables across steps * **Difficulty level**: intermediate* **Time needed to learn**: 20 minutes or less* **Key points**: * Variables defined in steps are not accessible from other steps * Variables can be `shared` to steps that depend on it through the target `sos_variable` Section option `shared` SoS executes each step in a separate process and by default does not return any result to the master SoS process. Option `shared` is used to share variables between steps. This option accepts:* A string (variable name), or* A map between variable names and expressions (strings) that will be evaluated upon the completion of the step.* A sequence of strings (variables) or maps.For example,
###Code
%run -v1
[10: shared='myvar']
myvar = 100
[20]
print(myvar)
%run -v1
[10: shared=['v1', 'v2']]
v1 = 100
v2 = 200
[20]
print(v1)
print(v2)
###Output
100
200
###Markdown
The `dict` format of the `shared` option allows the specification of expressions to be evaluated after the completion of the step, and can be used to pass pieces of `step_output` as follows:
###Code
%run -v1
[10: shared={'res': 'step_output["res"]', 'stat': 'step_output["stat"]'}]
output: res='a.res', stat='a.txt'
_output.touch()
[20]
print(res)
print(stat)
###Output
a.res
a.txt
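Conceptually, each value in the dict is an expression string that is evaluated in the step's namespace once the step finishes, and the result is exported under the given name. A rough sketch of that mechanism (illustrative only, not SoS internals; the names mirror the example above):

```python
# The step's namespace after completion, with step_output modeled as a dict.
step_namespace = {"step_output": {"res": "a.res", "stat": "a.txt"}}
# The dict form of the `shared` option: name -> expression string.
shared = {"res": 'step_output["res"]', "stat": 'step_output["stat"]'}
# Evaluate each expression in the step's namespace to build the shared values.
exported = {name: eval(expr, {}, step_namespace) for name, expr in shared.items()}
print(exported)  # -> {'res': 'a.res', 'stat': 'a.txt'}
```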
###Markdown
`sos_variable` targets When variables are `shared` from a step, they will be available to the step that is executed after it. This is why `res` and `stat` would be accessible from step `20` after the completion of step `10`. However, in a more general case, a step would need to depend on a target `sos_variable` to access the `shared` variable in a non-forward-style workflow.For example, in the following workflow, two `sos_variable` targets create two dependencies on steps `notebookCount` and `lineCount` so that these two steps will be executed before `default` and provide the required variables.
###Code
%run -v1
[notebookCount: shared='numNotebooks']
import glob
numNotebooks = len(glob.glob('*.ipynb'))
[lineCount: shared='lineOfThisNotebook']
with open('shared_variables.ipynb') as nb:
lineOfThisNotebook = len(nb.readlines())
[default]
depends: sos_variable('numNotebooks'), sos_variable('lineOfThisNotebook')
print(f"There are {numNotebooks} notebooks in this directory")
print(f"Current notebook has {lineOfThisNotebook} lines")
###Output
There are 94 notebooks in this directory
Current notebook has 632 lines
###Markdown
Sharing variables from substeps When you share a variable from a step with multiple substeps, there can be a separate copy of the variable for each substep, and it is uncertain which copy SoS will return. The current implementation returns the variable from the last substep, but this is not guaranteed. For example, in the following workflow multiple random seeds are generated, but only the last `seed` is shared outside of step `1` and obtained by step `2`.
###Code
%run -v1
[1: shared='seed']
input: for_each={'i': range(5)}
import random
seed = random.randint(0, 1000)
print(seed)
[2]
print(f'Got seed {seed} at step 2')
###Output
50
606
267
52
701
Got seed 701 at step 2
Got seed 701 at step 2
Got seed 701 at step 2
Got seed 701 at step 2
Got seed 701 at step 2
###Markdown
If you would like to see the variable in all substeps, you can prefix the variable name with `step_`, which is a convention for option `shared` to collect variables from all substeps.
###Code
%run -v1
[1: shared='step_seed']
input: for_each={'i': range(5)}
import random
seed = random.randint(0, 1000)
[2]
print(step_seed[_index])
###Output
17
114
688
99
253
###Markdown
You can also use the `step_*` variables in expressions as in the following example:
###Code
%run -v1
[1: shared={'summed': 'sum(step_rng)', 'rngs': 'step_rng'}]
input: for_each={'i': range(10)}
import random
rng = random.randint(0, 10)
[2]
input: group_by='all'
print(rngs)
print(summed)
###Output
[5, 2, 5, 2, 7, 10, 5, 0, 2, 2]
40
###Markdown
Here we used `group_by='all'` to collapse multiple substeps into one. Sharing variables from tasks Variables generated by external tasks add another layer of complexity because tasks usually do not share variables with the substeps they belong to. To solve this problem, you will have to use the `shared` option of `task` to return the variable to the substep:
###Code
%run -v1 -q localhost
[1: shared={'summed': 'sum(step_rng)', 'rngs': 'step_rng'}]
input: for_each={'i': range(5)}
task: shared='rng'
import random
rng = random.randint(0, 10*i)
[2]
input: group_by='all'
print(rngs)
print(summed)
###Output
[0, 7, 2, 23, 24]
56
###Markdown
How to pass variables between SoS steps * **Difficulty level**: easy* **Time needed to learn**: 10 minutes or less* **Key points**: Option `shared` SoS executes each step in a separate process and by default does not return any result to the master SoS process. Option `shared` is used to share variables between steps. This option accepts:* A string (variable name), or* A map between variable names and expressions (strings) that will be evaluated upon the completion of the step.* A sequence of strings (variables) or maps.For example,
###Code
%run
[10: shared='myvar']
myvar = 100
[20]
print(myvar)
###Output
100
###Markdown
A map syntax is recommended to share `step_output` of one step with others, because the variable assignment will be evaluated only after the step is complete:
###Code
%sandbox
%run
[1: shared = {'test_output': 'step_output'}]
output: 'a.txt'
sh:
touch a.txt
[2]
print(f"Input file {test_output}")
input: test_output
###Output
Input file a.txt
###Markdown
The map syntax is evaluated as expressions; therefore it is possible to control more precisely which specific outputs, or variations of the output, to share with others. For example:
###Code
%sandbox
%run
[1: shared={'test_output_1':'step_output[0]', 'test_output_2': 'step_output[1]'}]
output: 'a.txt', 'b.txt'
sh:
touch a.txt b.txt
[2]
print(f"output 1: {test_output_1}")
print(f"output 2: {test_output_2}")
###Output
output 1: a.txt
output 2: b.txt
###Markdown
to share the first file in `output` (filename `output[0]`) instead of the entire output file list. The `shared` option also provides a `sos_variable` target. Things become more complicated when there are multiple substeps. For example, when you use option `shared` on the following step with 10 substeps, only one of the random seeds is returned because `seed` represents the last value of the variable after the completion of all substeps.
###Code
%run
[1: shared='seed']
input: for_each={'i': range(10)}
import random
seed = random.randint(0, 1000)
[2]
print(seed)
###Output
450
###Markdown
If you would like to see the variable in all substeps, you can prefix the variable name with `step_`
###Code
%run
[1: shared='step_seed']
input: for_each={'i': range(10)}
import random
seed = random.randint(0, 1000)
[2]
print(step_seed)
###Output
[858, 513, 328, 610, 142, 275, 458, 57, 762, 981]
###Markdown
You can also use the `step_*` variables in expressions as in the following example:
###Code
%run
[1: shared={'summed': 'sum(step_rng)', 'rngs': 'step_rng'}]
input: for_each={'i': range(10)}
import random
rng = random.randint(0, 10)
[2]
print(rngs)
print(summed)
###Output
[10, 0, 8, 1, 8, 9, 6, 7, 9, 1]
59
###Markdown
Variables generated by external tasks add another layer of complexity because tasks usually do not share variables with the substeps they belong to. To solve this problem, you will have to use the `shared` option of `task` to return the variable to the substep:
###Code
%run
[1: shared={'summed': 'sum(step_rng)', 'rngs': 'step_rng'}]
input: for_each={'i': range(10)}
task: shared='rng'
import random
rng = random.randint(0, 10*i)
[2]
print(rngs)
print(summed)
###Output
_____no_output_____ |
20180109_HW1_counting.ipynb | ###Markdown
This notebook includes functions for both quicksort and bubblesort. Both functions track the number of assignments and conditionals generated during each sort, as well as the runtime. At the bottom, I have plotted each of these variables against the length of the input vector.
###Code
import numpy as np
import random
import time
import matplotlib.pyplot as plt
def bsort(mylist):
# save starting time
ts = time.process_time()
# set assignment and conditional counters to 0
assign = 0
cond = 0
cond += 1
if len(mylist) > 1:
for i in range(len(mylist)-1, 0,-1):
for j in range(i):
cond+= 1
if mylist[j] > mylist[j+1]:
#saving the first value
temp = mylist[j]
assign += 1
# replacing first value with second (smaller) value
mylist[j] = mylist[j+1]
assign += 1
# replacing second value with first(larger) value
mylist[j+1] = temp
assign += 1
# save finish time
tf = time.process_time()
# get run time
runtime = tf - ts
return cond, assign, runtime
# the partition function will partition a list around a pivotvalue one time
# it returns the ending rightmark, which can then be used as a split point for further ordering
def partition(mylist, start, end):
p_assign = 0
p_cond = 0
pivotvalue = mylist[start]
leftmark = start + 1
rightmark = end
done = False
p_assign += 4
while not done:
# left mark moves along left side of list until leftmark is greater than either pivot value or rightmark
p_cond += 2
while leftmark <= rightmark and mylist[leftmark] <= pivotvalue:
leftmark = leftmark + 1
p_assign += 1
# inverse for rightmark
p_cond += 2
while rightmark >= leftmark and mylist[rightmark] >= pivotvalue:
rightmark = rightmark - 1
p_assign += 1
# when marks cross, end
p_cond += 1
if rightmark < leftmark:
done = True
p_assign += 1
# otherwise, if one of marks is wrong compared to pivotvalue, swap the marks
else:
temp = mylist[leftmark]
mylist[leftmark] = mylist[rightmark]
mylist[rightmark] = temp
p_assign += 3
# now we have two halves sorted around the pivot value and the marks have passed each other
# lets move the pivot value to the split point (where the rightmark is now)
temp = mylist[start]
mylist[start] = mylist[rightmark]
mylist[rightmark] = temp
p_assign += 3
return (rightmark, p_assign, p_cond)
def runqsort(mylist, start, end, cond, assign):
cond += 1
if start < end:
# run partition once to divide the list and get the splitpoint
splitpoint, p_assign, p_cond = partition(mylist, start, end)
assign += p_assign
cond += p_cond
# now run the function separately on each side of the splitpoint
cond, assign = runqsort(mylist, start, splitpoint - 1, cond, assign)[1:3]
cond, assign = runqsort(mylist, splitpoint + 1, end, cond, assign)[1:3]
return mylist, cond, assign
def qsort(mylist):
ts = time.process_time()
sortlist, cond, assign = runqsort(mylist, 0, (len(mylist)-1), 0, 0)
tf= time.process_time()
runtime = tf - ts
return cond, assign, runtime
lengths = list(range(100,1001,100))
b_cond = []
b_assign = []
b_runtime = []
# generate vectors of varying length
for n in lengths:
vectors = (list(map(lambda x: [random.randint(-1000,1000) for p in range(n)], range(10))))
tempcond = []
tempassign = []
tempruntime = []
# sort the vectors and keep track of each dependent variable
for i in range(len(vectors)):
c, a, t = bsort(vectors[i])
tempcond.append(c)
tempassign.append(a)
tempruntime.append(t)
b_cond.append(tempcond)
b_assign.append(tempassign)
b_runtime.append(tempruntime)
lengths = list(range(100,1001,100))
q_cond = []
q_assign = []
q_runtime = []
# generate vectors of varying length
for n in lengths:
vectors = (list(map(lambda x: [random.randint(-1000,1000) for p in range(n)], range(100))))
tempcond = []
tempassign = []
tempruntime = []
# sort the vectors and keep track of each dependent variable
for i in range(len(vectors)):
c, a, t = qsort(vectors[i])
tempcond.append(c)
tempassign.append(a)
tempruntime.append(t)
q_cond.append(tempcond)
q_assign.append(tempassign)
q_runtime.append(tempruntime)
# creating a plotting function to plot observed values and expected function for assignments, conditionals, and runtime
def sort_plot(vector, number, line, scale, yscale, ylab, title):
fig = plt.figure(dpi = 300)
filename = title + '.png'
plt.xlabel("Length of Vector")
plt.ylabel(ylab)
plt.title(title)
for i in range(len(lengths)):
x = ([lengths[i]]*number)
y = (vector[i])
plt.scatter(x, y, s = 0.5, c = "blue")
if line == 'square':
plt.plot(lengths, list((lengths[i]**2)*scale for i in range(len(lengths))), label = 'O(n) = n^2')
elif line == 'log':
plt.plot(lengths, (lengths * np.log(lengths) * scale), label = 'O(n) = nlog(n)')
if yscale == 'log':
plt.yscale("log")
plt.legend()
fig.savefig(filename)
# creating scaling factors ("k") for plotting the expected line
qr_scale = np.mean(q_runtime[0])/(100*np.log(100))
qc_scale = np.mean(q_cond[0])/((100*np.log(100)))
qa_scale = np.mean(q_assign[0])/((100*np.log(100)))
br_scale = np.mean(b_runtime[0])/(100**2)
bc_scale = np.mean(b_cond[0])/(100**2)
ba_scale = np.mean(b_assign[0])/(100**2)
# creating and saving plots
sort_plot(b_cond, 10, 'square', 1, 'standard', "Number Conditionals", 'Bubblesort Conditionals')
sort_plot(b_assign, 10, 'square', 1, 'standard', "Number Assignments", 'Bubblesort Assignments')
sort_plot(b_runtime, 10, 'square', br_scale, 'standard', "Runtime(s)", 'Bubblesort Runtime')
sort_plot(q_cond, 100, 'log', 1, 'log', "Number Conditionals", 'Quicksort Conditionals')
sort_plot(q_assign, 100, 'log', 1, 'log', "Number Assignments", 'Quicksort Assignments')
sort_plot(q_runtime, 100, 'log', qr_scale, 'log', "Runtime (s)", 'Quicksort Runtime')
###Output
_____no_output_____ |
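The sort routines above can be sanity-checked against Python's built-in `sorted()`. Here is a compact, self-contained sketch of both algorithms with the counter bookkeeping omitted (the quicksort here uses list comprehensions rather than the in-place partitioning used above):

```python
import random

def bubble_sort(xs):
    # Bubble the largest remaining element to the end on each pass.
    for i in range(len(xs) - 1, 0, -1):
        for j in range(i):
            if xs[j] > xs[j + 1]:
                xs[j], xs[j + 1] = xs[j + 1], xs[j]
    return xs

def quick_sort(xs):
    # Partition around the first element, then recurse on each side.
    if len(xs) <= 1:
        return xs
    pivot, rest = xs[0], xs[1:]
    return (quick_sort([v for v in rest if v <= pivot]) + [pivot]
            + quick_sort([v for v in rest if v > pivot]))

data = [random.randint(-1000, 1000) for _ in range(200)]
print(bubble_sort(list(data)) == sorted(data) == quick_sort(list(data)))  # -> True
```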
_notebooks/2020-12-14-PyTorch_basics.ipynb | ###Markdown
"PyTorch basics"> "A simple PyTorch tutorial to fit a function with a third order polynomial"- toc: false- branch: master- badges: true- comments: true- categories: [PyTorch, Autograd]- image: images/- hide: false- search_exclude: true- metadata_key1: metadata_value1- metadata_key2: metadata_value2- use_math: true This notebook is adapted from original [tutorial](https://https://pytorch.org/tutorials/beginner/pytorch_with_examples.html) by Justin Johnson. In this notebook, we will fit a third order polynomial on `y = sin(x)`. Our polynomial have four parameters, and we will use gradient descent to fit the random data by minimizing the Euclidean distance between the predicted output and the true output. We will see three different ways of fitting our polynomial. 1. Using numpy and manually implementing the forward and backward passes using numpy operations,2. Using the concept of **PyTorch Tensor**,3. Using the **AutoGrad** package in PyTorch which uses the automatic differentiation to automate the computation of backward passes.Let's start with numpy! --- 1. NumpyNumpy is a great tool for scientific computing but is not very handy for deep learning as it does not know anything about gradients or computation graphs. Nevertheless, it is very easy to fit a third order polynomial to our sine function. Let's see how this can be done...
###Code
import numpy as np
import math
import matplotlib.pyplot as plt
x = np.linspace(-math.pi, math.pi, 2000)
y = np.sin(x)
# We randomly initialize weights
a = np.random.randn()
b = np.random.randn()
c = np.random.randn()
d = np.random.randn()
# print randomly initialized weights
print(f'a = {a}, b = {b}, c = {c}, d = {d}')
# learning rate
lr = 1e-6
for i in range(5000):
# y = a + bx + cx^2 + dx^3
y_pred = a + b*x + c*x ** 2 + d*x ** 3
# Compute and print loss
loss = np.square(y_pred -y).sum()
if i%100 == 0:
print(i,loss)
# Backprop to compute the gradients of a, b, c, d with respect to loss
#dL/da = (dL/dy_pred) * (dy_pred/da)
#dL/db = (dL/dy_pred) * (dy_pred/db)
#dL/dc = (dL/dy_pred) * (dy_pred/dc)
#dL/dd = (dL/dy_pred) * (dy_pred/dd)
grad_y_pred = 2.0 * (y_pred-y)
grad_a = grad_y_pred.sum()
grad_b = (grad_y_pred * x).sum()
grad_c = (grad_y_pred * x ** 2).sum()
grad_d = (grad_y_pred * x ** 3).sum()
# Update Weights
a -= lr * grad_a
b -= lr * grad_b
c -= lr * grad_c
d -= lr * grad_d
plt.plot(x,y,label = 'y = sin(x)', c = 'b')
plt.plot(x, y_pred, label = 'y = a + bx + cx^2 + dx^3', c = 'r',linestyle = 'dashed')
plt.xlabel('x')
plt.ylabel('y')
plt.ylim([-2,2])
plt.legend()
plt.show()
print(f'Result: y = {a} + {b} x + {c} x^2 + {d} x^3')
###Output
a = 0.5212317253221784, b = -0.9805915149858428, c = -0.027376927441378273, d = -1.4262831777937377
0 703371.2506724729
100 1990.7685801408975
200 1327.0031477034922
300 885.8603635657721
400 592.5781313206602
500 397.5296011492919
600 267.7642695815589
700 181.39826161321042
800 123.89338756930366
900 85.58851940179167
1000 60.06144593380803
1100 43.041568420184774
1200 31.688032432273822
1300 24.11034884768269
1400 19.049958249677985
1500 15.668641966272823
1600 13.40788464942117
1700 11.895365287550472
1800 10.882761722859414
1900 10.204367249159688
2000 9.749544226693866
2100 9.444380518360695
2200 9.23946886344227
2300 9.101761631434972
2400 9.009139235728266
2500 8.946786269115005
2600 8.904772417197037
2700 8.876436698538871
2800 8.857307622798578
2900 8.844381063434946
3000 8.835637031893949
3100 8.8297160973774
3200 8.825702555431604
3300 8.822979021471838
3400 8.821128846547559
3500 8.819870574849652
3600 8.81901388552776
3700 8.81842995097411
3800 8.818031476578964
3900 8.81775924750458
4000 8.817573052631591
4100 8.817445555564575
4200 8.817358151641226
4300 8.817298164554096
4400 8.817256947450312
4500 8.8172285953216
4600 8.817209070954023
4700 8.817195610956425
4800 8.81718632167016
4900 8.817179903949093
###Markdown
--- 2. PyTorch: TensorsWe saw how easy it is to fit a third order polynomial using numpy. But what about modern deep neural networks? Unfortunately, numpy cannot utilize GPUs to accelerate its numerical computation. This is where PyTorch Tensors are useful. A Tensor is basically an n-dimensional array that can keep track of gradients and computation graphs. To run a PyTorch Tensor on a GPU, we simply need to specify the correct device. But for now, we will stick to the CPU. Let's see how we can use PyTorch Tensors to accomplish our task...
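A common runtime check for GPU availability (a small sketch; the cell below simply pins everything to the CPU) is:

```python
import torch

# use the first CUDA device when available, otherwise fall back to CPU
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
print(device)
```

Every tensor created with `device=device` then lives on whichever device was found.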
###Code
import torch
import math
dtype = torch.float
device = torch.device("cpu")
#device = torch.device("cuda:0") # Uncomment this if GPU is available.
# Create random input and data
x = torch.linspace(-math.pi, math.pi, 2000, device=device, dtype=dtype)
y = torch.sin(x)
# Randomly initialize weights
a = torch.randn((), device=device, dtype=dtype)
b = torch.randn((), device=device, dtype=dtype)
c = torch.randn((), device=device, dtype=dtype)
d = torch.randn((), device=device, dtype=dtype)
learning_rate = 1e-6
for t in range(5000):
# Forward pass: compute predicted y
y_pred = a + b * x + c * x ** 2 + d * x ** 3
# Compute and print loss
loss = (y_pred - y).pow(2).sum().item()
if t % 100 == 99:
print(t, loss)
# Backprop to compute gradients of a, b, c, d with respect to loss
grad_y_pred = 2.0 * (y_pred - y)
grad_a = grad_y_pred.sum()
grad_b = (grad_y_pred * x).sum()
grad_c = (grad_y_pred * x ** 2).sum()
grad_d = (grad_y_pred * x ** 3).sum()
# Update weights using gradient descent
a -= learning_rate * grad_a
b -= learning_rate * grad_b
c -= learning_rate * grad_c
d -= learning_rate * grad_d
plt.plot(x,y,label = 'y = sin(x)', c = 'b')
plt.plot(x, y_pred, label = 'y = a + bx + cx^2 + dx^3', c = 'r',linestyle = 'dashed')
plt.xlabel('x')
plt.ylabel('y')
plt.ylim([-2,2])
plt.legend()
plt.show()
print(f'Result: y = {a.item()} + {b.item()} x + {c.item()} x^2 + {d.item()} x^3')
###Output
99 983.1625366210938
199 657.2805786132812
299 440.57037353515625
399 296.4062805175781
499 200.46615600585938
599 136.59274291992188
699 94.05044555664062
799 65.70246124267578
899 46.803749084472656
999 34.19861602783203
1099 25.786611557006836
1199 20.169822692871094
1299 16.41724967956543
1399 13.908618927001953
1499 12.230535507202148
1599 11.107256889343262
1699 10.354836463928223
1799 9.850460052490234
1899 9.512123107910156
1999 9.284987449645996
2099 9.132362365722656
2199 9.02973747253418
2299 8.960660934448242
2399 8.914128303527832
2499 8.882747650146484
2599 8.861570358276367
2699 8.847265243530273
2799 8.837589263916016
2899 8.831039428710938
2999 8.8266019821167
3099 8.823591232299805
3199 8.821544647216797
3299 8.820154190063477
3399 8.819206237792969
3499 8.818561553955078
3599 8.818121910095215
3699 8.817822456359863
3799 8.81761646270752
3899 8.817476272583008
3999 8.81737995147705
4099 8.817313194274902
4199 8.817267417907715
4299 8.817235946655273
4399 8.817214965820312
4499 8.817200660705566
4599 8.817190170288086
4699 8.817183494567871
4799 8.817177772521973
4899 8.817174911499023
4999 8.81717300415039
###Markdown
--- 3. PyTorch: Tensors and autogradWe saw above how Tensors can also be used to fit a third order polynomial to our sine function. However, we had to implement both the forward and backward passes manually. This is not so hard for a simple task such as fitting a polynomial, but it can get very messy for deep neural networks. Fortunately, PyTorch's **Autograd** package can be used to automate the computation of backward passes. Let's see how we can do this...
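Before applying autograd to the polynomial, here it is on a single scalar: for $y = x^3$ we have $dy/dx = 3x^2$, so the gradient at $x = 2$ should be $12$.

```python
import torch

x = torch.tensor(2.0, requires_grad=True)
y = x ** 3     # forward pass builds the computation graph
y.backward()   # autograd fills in x.grad = dy/dx = 3 * x**2
print(x.grad)  # tensor(12.)
```

The same `loss.backward()` call in the cell below populates `a.grad`, `b.grad`, `c.grad` and `d.grad` all at once.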
###Code
import torch
import math
dtype = torch.float
device = torch.device("cpu")
# Create tensors to hold input and outputs
# As we don't need to compute gradients with respect to these Tensors, we can set requires_grad = False. This is also the default setting.
x = torch.linspace(-math.pi, math.pi, 2000)
y = torch.sin(x)
# Create random tensors for the weights. For these Tensors we require gradients, so we set requires_grad=True
a = torch.randn((), device = device, dtype = dtype, requires_grad=True)
b = torch.randn((), device = device, dtype = dtype, requires_grad=True)
c = torch.randn((), device = device, dtype = dtype, requires_grad=True)
d = torch.randn((), device = device, dtype = dtype, requires_grad=True)
learning_rate = 1e-6
for t in range(5000):
# Forward pass: we compute predicted y using operations on Tensors.
y_pred = a + b * x + c * x ** 2 + d * x ** 3
# Compute and print loss using operations on Tensors.
# Now loss is a Tensor of shape (1,)
# loss.item() gets the scalar value held in the loss.
loss = (y_pred - y).pow(2).sum()
if t % 100 == 99:
print(t, loss.item())
# Use autograd to compute the backward pass. This call will compute the
# gradient of loss with respect to all Tensors with requires_grad=True.
    # After this call a.grad, b.grad, c.grad and d.grad will be Tensors holding
# the gradient of the loss with respect to a, b, c, d respectively.
loss.backward()
# Manually update weights using gradient descent. Wrap in torch.no_grad()
# because weights have requires_grad=True, but we don't need to track this
# in autograd.
with torch.no_grad():
a -= learning_rate * a.grad
b -= learning_rate * b.grad
c -= learning_rate * c.grad
d -= learning_rate * d.grad
# Manually zero the gradients after updating weights
a.grad = None
b.grad = None
c.grad = None
d.grad = None
plt.plot(x,y,label = 'y = sin(x)', c = 'b')
# We need to use tensor.detach().numpy() to convert our tensor into numpy array for plotting
plt.plot(x, y_pred.detach().numpy(), label = 'y = a + bx + cx^2 + dx^3', c = 'r',linestyle = 'dashed')
plt.xlabel('x')
plt.ylabel('y')
plt.ylim([-2,2])
plt.legend()
plt.show()
print(f'Result: y = {a.item()} + {b.item()} x + {c.item()} x^2 + {d.item()} x^3')
###Output
99 1246.2117919921875
199 854.2945556640625
299 587.1728515625
399 404.901123046875
499 280.38446044921875
599 195.22470092773438
699 136.91519165039062
799 96.94408416748047
899 69.5127944946289
999 50.665863037109375
1099 37.70232391357422
1199 28.77569580078125
1299 22.622011184692383
1399 18.375377655029297
1499 15.441656112670898
1599 13.412825584411621
1699 12.008347511291504
1799 11.035103797912598
1899 10.360037803649902
1999 9.891359329223633
2099 9.565652847290039
2199 9.339130401611328
2299 9.181443214416504
2399 9.07158088684082
2499 8.994975090026855
2599 8.941520690917969
2699 8.904190063476562
2799 8.878105163574219
2899 8.859864234924316
2999 8.847099304199219
3099 8.838162422180176
3199 8.831900596618652
3299 8.82751178741455
3399 8.824435234069824
3499 8.822273254394531
3599 8.820756912231445
3699 8.819692611694336
3799 8.818944931030273
3899 8.818416595458984
3999 8.818046569824219
4099 8.817787170410156
4199 8.817604064941406
4299 8.817475318908691
4399 8.817383766174316
4499 8.817319869995117
4599 8.81727409362793
4699 8.817242622375488
4799 8.817219734191895
4899 8.817205429077148
4999 8.817193031311035
|
recover_face.ipynb | ###Markdown
hyperparams: lr 0.005-0.001, sigma 1, color True, multistart 10 Face recovery iterations
###Code
cosines_target = []
facenet_sims = []
iters = 0
with torch.no_grad():
for _ in range(2001):
start = time()
if pipeline.iters >= pipeline.N_restarts * pipeline.iters_before_restart:
pipeline.lr = 0.001
recovered_face, cos_target = pipeline()
cosines_target.append(cos_target)
time_per_iter = round(time() - start,2)
print(f"time={time_per_iter} queries={iters*pipeline.batch_size} cos_target={round(cos_target,3)} \
norm={round(pipeline.norm,4)}", end="\r")
if iters % 100 == 0:
clear_output(wait=True)
face = np.transpose(recovered_face.cpu().detach().numpy(),(1,2,0))
face = face - np.min(face)
face = face / np.max(face)
facenet_sims.append(get_sim(DEVICE,path1=IMAGE, im2=face))
plt.figure(dpi=130)
plt.subplot(1,2,1)
plt.axis("off")
plt.title(f"iterations {iters*pipeline.batch_size}")
plt.imshow(face)
plt.subplot(1,2,2)
plt.axis("off")
plt.title(f"cos_arcface={round(cos_target,3)}\ncos_facenet={round(facenet_sims[-1],3)}")
plt.imshow(np.array(Image.open(IMAGE)))
plt.show()
plt.plot(cosines_target)
plt.grid()
plt.title("arcface cos with a target embedding vs iters")
plt.show()
plt.plot(facenet_sims)
plt.grid()
plt.title("facenet cos with a target embedding vs iters")
plt.show()
iters += 1
###Output
_____no_output_____ |
examples/miscellaneous/Ch 9.ipynb | ###Markdown
9.1
###Code
from finance_ml.datasets import get_cls_data
X, label = get_cls_data(n_features=10, n_informative=5, n_redundant=0, n_samples=10000)
print(X.head())
print(label.head())
from sklearn.svm import SVC
from sklearn.pipeline import Pipeline
name = 'svc'
params_grid = {name + '__C': [1e-2, 1e-1, 1, 10, 100], name + '__gamma': [1e-2, 1e-1, 1, 10, 100]}
kernel = 'rbf'
clf = SVC(kernel=kernel, probability=True)
pipe_clf = Pipeline([(name, clf)])
fit_params = dict()
clf = clf_hyper_fit(X, label['bin'], t1=label['t1'], pipe_clf=pipe_clf, scoring='neg_log_loss',
search_params=params_grid, n_splits=3, bagging=[0, None, 1.],
rnd_search_iter=0, n_jobs=-1, pct_embargo=0., **fit_params)
###Output
_____no_output_____
###Markdown
9.2
###Code
name = 'svc'
params_dist = {name + '__C': log_uniform(a=1e-2, b=1e2),
name + '__gamma': log_uniform(a=1e-2, b=1e2)}
kernel = 'rbf'
clf = SVC(kernel=kernel, probability=True)
pipe_clf = Pipeline([(name, clf)])
fit_params = dict()
clf = clf_hyper_fit(X, label['bin'], t1=label['t1'], pipe_clf=pipe_clf, scoring='neg_log_loss',
                    search_params=params_dist, n_splits=3, bagging=[0, None, 1.],
rnd_search_iter=25, n_jobs=-1, pct_embargo=0., **fit_params)
###Output
_____no_output_____ |
cosine.ipynb | ###Markdown
Vector Spaces
###Code
import logging
#logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)
import gensim
from gensim import corpora, models, similarities
from nltk.corpus import stopwords
from collections import defaultdict
from pprint import pprint
from six import iteritems
import os
import numpy as np
import pandas as pd
import scipy.sparse
###Output
_____no_output_____
###Markdown
Load Processed Dataframe
###Code
df = pd.read_pickle('pkl/df_stop_noun.pkl')
df.head(3)
###Output
_____no_output_____
###Markdown
Convert Series to List of Strings
###Code
resumes = df['resume_nouns'].tolist()
resumes[:1]
###Output
_____no_output_____
###Markdown
From Strings to Vectors Tokenize the documents, remove stop words and words that only appear once
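The same tokenize-and-filter step can be sketched with `collections.Counter` on a toy corpus (illustrative only; the cell below uses a `defaultdict` instead):

```python
from collections import Counter

docs = ["the cat sat", "the cat ran", "a dog ran"]  # toy documents
stoplist = {"the", "a"}

# tokenize and drop stop words
texts = [[w for w in doc.lower().split() if w not in stoplist] for doc in docs]
# count each token across the whole corpus
freq = Counter(tok for text in texts for tok in text)
# keep only tokens that occur more than once
texts = [[tok for tok in text if freq[tok] > 1] for text in texts]
print(texts)  # [['cat'], ['cat', 'ran'], ['ran']]
```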
###Code
# remove common words and tokenize
stoplist = set('for a of the and to in'.split())
texts = [[word for word in resume.split() if word.lower() not in stoplist] for resume in resumes]
# count token frequencies across the corpus
frequency = defaultdict(int)
for text in texts:
for token in text:
frequency[token] += 1
# remove words that occur less than n times
texts = [[token for token in text if frequency[token] > 2] for text in texts]
###Output
_____no_output_____
###Markdown
Save Token Count Dictionary to File
###Code
dictionary = corpora.Dictionary(texts)
# store the dictionary, for future reference
dictionary.save('pkl/resume_token.dict')
print(dictionary)
###Output
Dictionary(42606 unique tokens: ['blog', 'dtac', 'melmark', 'ravishankar', 'plate']...)
###Markdown
Convert Tokenized Resumes to Vectors
###Code
corpus = [dictionary.doc2bow(text) for text in texts]
corpora.MmCorpus.serialize('pkl/resume_token.mm', corpus) # store to disk, for later use
for c in corpus[:1]:
print(c)
###Output
[(0, 1), (1, 1), (2, 1), (3, 1), (4, 1), (5, 1), (6, 1), (7, 2), (8, 1), (9, 1), (10, 1), (11, 1), (12, 1), (13, 1), (14, 1), (15, 1), (16, 1), (17, 1), (18, 1), (19, 1), (20, 1), (21, 1), (22, 1), (23, 8), (24, 2), (25, 1), (26, 1), (27, 1), (28, 2), (29, 1), (30, 1), (31, 2), (32, 1), (33, 1), (34, 1), (35, 1), (36, 1), (37, 1), (38, 1), (39, 1), (40, 1), (41, 1), (42, 1), (43, 1), (44, 1), (45, 1), (46, 1), (47, 1), (48, 1), (49, 1), (50, 1), (51, 1), (52, 1), (53, 1), (54, 1), (55, 1), (56, 1), (57, 2)]
###Markdown
Corpus Streaming – One Document at a Time
###Code
# replace 'texts' with 'open(my_file.txt)' to read from files (one line in the file is a document)
# or loop through and open each individual file (?)
# either way, dictionary.doc2bow wants a list of words (aka - line.lower().split())
class MyCorpus(object):
def __iter__(self):
for line in texts:
yield dictionary.doc2bow(line)
# doesn't load the corpus into memory!
corpus_memory_friendly = MyCorpus()
###Output
_____no_output_____
###Markdown
Similarly, to construct the dictionary without loading all texts into memory
###Code
_ = '''
# collect statistics about all tokens
dictionary = corpora.Dictionary(line.lower().split() for line in open('mycorpus.txt'))
# remove stop words and words that appear only once
stop_ids = [dictionary.token2id[stopword] for stopword in stoplist
if stopword in dictionary.token2id]
once_ids = [tokenid for tokenid, docfreq in iteritems(dictionary.dfs) if docfreq == 1]
# remove stop words and words that appear only once
dictionary.filter_tokens(stop_ids + once_ids)
# remove gaps in id sequence after words that were removed
dictionary.compactify()
print(dictionary)
'''
###Output
_____no_output_____
###Markdown
Transformation Interface
###Code
# load tokenized dictionary
if (os.path.exists('pkl/resume_token.dict')):
dictionary = corpora.Dictionary.load('pkl/resume_token.dict')
print('Tokenized dictionary LOADED as \'dictionary\'')
else:
print('Tokenized dictionary NOT FOUND')
# load sparse vector matrix
if (os.path.exists('pkl/resume_token.mm')):
corpus = corpora.MmCorpus('pkl/resume_token.mm')
print('Sparse matrix LOADED as \'corpus\'')
else:
print('Sparse matrix NOT FOUND')
###Output
Sparse matrix LOADED as 'corpus'
###Markdown
TF-IDF Transformation
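`TfidfModel` reweights raw counts by inverse document frequency. With gensim's defaults (log base 2 and L2 normalization — an assumption worth checking against your gensim version), the weight of term $t$ in document $d$ is

$$
w_{t,d} = \mathrm{tf}_{t,d} \cdot \log_2 \frac{N}{\mathrm{df}_t}
$$

where $N$ is the number of documents and $\mathrm{df}_t$ is the number of documents containing $t$; each document vector is then scaled to unit length.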
###Code
# step 1 -- initialize a model
tfidf_mdl = models.TfidfModel(corpus)
###Output
_____no_output_____
###Markdown
Calling `model[corpus]` only creates a wrapper around the old corpus document stream – actual conversions are done on-the-fly, during document iteration. We cannot convert the entire corpus at the time of calling `corpus_transformed = model[corpus]`, because that would mean storing the result in main memory, which contradicts gensim’s objective of memory-independence. If you will be iterating over the transformed `corpus_transformed` multiple times, and the transformation is costly, serialize the resulting corpus to disk first and continue using that.
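The on-the-fly behaviour can be sketched in plain Python (illustrative only, not gensim internals): the wrapper stores no transformed vectors, it just re-applies the transform every time you iterate.

```python
class TransformedCorpus:
    """Lazily apply `transform` to each document on iteration."""
    def __init__(self, transform, corpus):
        self.transform = transform
        self.corpus = corpus          # nothing is converted here

    def __iter__(self):
        for doc in self.corpus:       # conversion happens one document at a time
            yield self.transform(doc)

corpus = [[1, 2], [3, 4]]
doubled = TransformedCorpus(lambda doc: [2 * x for x in doc], corpus)
print(list(doubled))  # [[2, 4], [6, 8]]
```

Iterating twice re-runs the transform twice, which is why serializing to disk pays off for costly transformations.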
###Code
# step 2 -- use the model to transform vectors
corpus_tfidf = tfidf_mdl[corpus]
# view one resume
for doc in corpus_tfidf[:1]:
print(doc)
from sklearn.feature_extraction.text import TfidfVectorizer
n_features = 1000
tfidf_vec = TfidfVectorizer(input='content', ngram_range=(1, 3), max_df=0.9, min_df=2,
max_features=n_features, norm='l2', use_idf=True, smooth_idf=True, sublinear_tf=False)
tfidf_vec_prep = tfidf_vec.fit_transform(resumes)
from sklearn.cluster import KMeans
from sklearn import metrics
km = KMeans(n_clusters=8, init='k-means++', max_iter=100, n_init=1)
km_mdl = km.fit_predict(tfidf_vec_prep)
# Determine your k range
k_range = range(1,20)
# fit the kmeans model for each n_clusters = k
k_means_var = [KMeans(n_clusters=k).fit(tfidf_vec_prep) for k in k_range]
# pull out the cluster centers for each model
centroids = [X.cluster_centers_ for X in k_means_var]
from scipy.spatial.distance import cdist, pdist
# calculate the euclidean distance from each point to each cluster center
k_euclid = [cdist(tfidf_vec_prep.toarray(), cent, 'euclidean') for cent in centroids]
dist = [np.min(ke, axis=1) for ke in k_euclid]
# total within-cluster sum of squares
wcss = [sum(d**2) for d in dist]
# the total sum of squares
tss = sum(pdist(tfidf_vec_prep.toarray())**2)/tfidf_vec_prep.shape[1]
# the between-cluster sum of squares
bss = tss - wcss
import numpy as np
from scipy.cluster.vq import kmeans,vq
from scipy.spatial.distance import cdist
import matplotlib.pyplot as plt
##### cluster data into K=1..10 clusters #####
K = range(1,20)
# scipy.cluster.vq.kmeans
KM = [kmeans(tfidf_vec_prep.toarray(),k) for k in K]
centroids = [cent for (cent,var) in KM] # cluster centroids
# alternative: scipy.spatial.distance.cdist
D_k = [cdist(tfidf_vec_prep.toarray(), cent, 'euclidean') for cent in centroids]
cIdx = [np.argmin(D,axis=1) for D in D_k]
dist = [np.min(D,axis=1) for D in D_k]
avgWithinSS = [sum(d)/tfidf_vec_prep.shape[0] for d in dist]
##### plot ###
kIdx = 2
# elbow curve
fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot(K, avgWithinSS, 'b*-')
import seaborn as sns
sns.set_style("white")
sns.set_context("poster", font_scale=1.25, rc={"lines.linewidth": 2.5})
sns.set_palette("Set2")
colors = sns.color_palette("BrBG", 5)
# make figure
fig = plt.figure(figsize=(20,12))
ax = fig.add_subplot(111)
# color
colors = sns.color_palette("BrBG", 10)
# plots
ax.plot(K, avgWithinSS, marker='o', color=colors[-1], alpha=0.5)
# labels/titles
plt.legend(loc="best")
plt.title('Elbow for K-Means')
plt.xlabel('Number of Clusters')
plt.ylabel('Avg. Within-Cluster Sum of Squares')
# remove border
ax.spines["top"].set_visible(False)
ax.spines["bottom"].set_visible(False)
ax.spines["right"].set_visible(False)
ax.spines["left"].set_visible(False)
# show grid
ax.xaxis.grid(True, alpha=0.2)
ax.yaxis.grid(True, alpha=0.2)
# plot that biddy
plt.savefig('data/pics/{0}.png'.format('KMeans_elbow'), bbox_inches='tight')
plt.close(fig)
import numpy as np
from scipy.cluster.vq import kmeans
from scipy.spatial.distance import cdist,pdist
from sklearn import datasets
from sklearn.decomposition import RandomizedPCA
from matplotlib import pyplot as plt
from matplotlib import cm
# perform PCA dimensionality reduction
pca = RandomizedPCA(n_components=2).fit(tfidf_vec_prep.toarray())
X = pca.transform(tfidf_vec_prep.toarray())
##### cluster data into K=1..20 clusters #####
K_MAX = 20
KK = range(1,K_MAX+1)
KM = [kmeans(X,k) for k in KK]
centroids = [cent for (cent,var) in KM]
D_k = [cdist(X, cent, 'euclidean') for cent in centroids]
cIdx = [np.argmin(D,axis=1) for D in D_k]
dist = [np.min(D,axis=1) for D in D_k]
tot_withinss = [sum(d**2) for d in dist] # Total within-cluster sum of squares
totss = sum(pdist(X)**2)/X.shape[0] # The total sum of squares
betweenss = totss - tot_withinss # The between-cluster sum of squares
##### plots #####
kIdx = 4 # K=5
clr = cm.spectral( np.linspace(0,1,10) ).tolist()
mrk = 'os^p<dvh8>+x.'
# make figure
fig = plt.figure(figsize=(20,12))
ax = fig.add_subplot(111)
# color
colors = sns.color_palette("BrBG", 5)
# plots
#ax.plot(K, avgWithinSS, marker='o', color=colors[-1], alpha=0.5)
ax.plot(KK, betweenss/totss*100, marker='o', color=colors[-1], alpha=0.5)
ax.plot(KK[kIdx], betweenss[kIdx]/totss*100, marker='o', markersize=25, color=colors[0], alpha=0.5)
# labels/titles
plt.legend(loc="best")
plt.title('Elbow for KMeans Clustering')
plt.xlabel('Number of clusters')
plt.ylabel('Percentage of variance explained (%)')
ax.set_xlim((-0.1,20.5))
ax.set_ylim((-0.5,100))
# remove border
ax.spines["top"].set_visible(False)
ax.spines["bottom"].set_visible(False)
ax.spines["right"].set_visible(False)
ax.spines["left"].set_visible(False)
# show grid
ax.xaxis.grid(True, alpha=0.2)
ax.yaxis.grid(True, alpha=0.2)
# plot that biddy
plt.savefig('data/pics/{0}.png'.format('KMeans_elbow_var'), bbox_inches='tight')
plt.close(fig)
# make figure
fig = plt.figure(figsize=(20,12))
ax = fig.add_subplot(111)
# plots
for i in range(kIdx+1):
ind = (cIdx[kIdx]==i)
ax.scatter(X[ind,0],X[ind,1], s=65, c=colors[i], marker=mrk[i],
label='Cluster {0}'.format(i), alpha=1)
# labels/titles
plt.legend(loc='lower right')
plt.title('K={0} Clusters'.format(KK[kIdx]))
ax.set_xlim((-.5,.5))
ax.set_ylim((-.5,.5))
# remove border
ax.spines["top"].set_visible(False)
ax.spines["bottom"].set_visible(False)
ax.spines["right"].set_visible(False)
ax.spines["left"].set_visible(False)
# show grid
ax.xaxis.grid(True, alpha=0.2)
ax.yaxis.grid(True, alpha=0.2)
# plot that biddy
plt.savefig('data/pics/{0}.png'.format('KMeans_{0}_clusters'.format(KK[kIdx])), bbox_inches='tight')
plt.close(fig)
from sklearn.cluster import DBSCAN
from sklearn.preprocessing import StandardScaler
dbscan = DBSCAN(eps=0.5, min_samples=5, metric='cosine', algorithm='brute',
leaf_size=30, p=None, random_state=None)
dbscan_mdl = dbscan.fit_predict(tfidf_vec_prep)
###Output
_____no_output_____
###Markdown
Latent Semantic Indexing Topics
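Under the hood, LSI computes a truncated SVD of the TF-IDF term–document matrix $X$, keeping only the top $k$ (= `num_topics`) singular triplets:

$$
X \approx U_k \Sigma_k V_k^{\top}
$$

Each document is then represented by its $k$-dimensional projection into this latent space, which is what `lsi[corpus_tfidf]` yields below.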
###Code
num_topics = 100
# initialize an LSI transformation
lsi = models.LsiModel(corpus_tfidf, id2word=dictionary, num_topics=num_topics)
corpus_lsi = lsi[corpus_tfidf]
# the topics are printed to log
a = lsi.print_topics(8)
a[0]
for doc in corpus_lsi[800]: # both bow->tfidf and tfidf->lsi transformations are actually executed here, on the fly
pass
#print(doc)
###Output
_____no_output_____
###Markdown
Model Save & Load
###Code
lsi.save('pkl/lsi_mdl.lsi')
lsi = models.LsiModel.load('pkl/lsi_mdl.lsi')
###Output
_____no_output_____
###Markdown
LDA Topics
###Code
lda_mdl = models.LdaModel(corpus, id2word=dictionary, num_topics=20)
lda_mdl.top_topics
pprint(lda_mdl.print_topics(10))
print(corpus)
doc = df.iloc[0]['resume_nouns']
vec_bow = dictionary.doc2bow(doc.lower().split())
vec_lsi = lsi[vec_bow] # convert the query to LSI space
print(vec_lsi)
###Output
[(0, 2.374975010869965), (1, 0.51728887522253952), (2, -0.058935199530753268), (3, 0.3578493537974749), (4, 1.560417648600577), (5, -1.9931029846659232), (6, 0.58697139609914861), (7, 1.437193124041608), (8, -0.38633595032575146), (9, -2.3068352804125016), (10, 0.77482570234627612), (11, -0.66082521176920128), (12, -2.0221618401059822), (13, 1.3229424544863675), (14, -0.29408524037515837), (15, -1.0569710323996966), (16, 1.110889840043604), (17, 1.3434022602282594), (18, -0.095802335904933394), (19, -0.80089048085959047), (20, -0.64832039201675884), (21, 1.35059095621303), (22, 0.36313071163680766), (23, 0.23008512654094881), (24, -1.4704302056681957), (25, -0.51110545886820391), (26, 1.5065962351771218), (27, -0.85864630999976976), (28, -0.27005311330166226), (29, 1.3357001963834654), (30, 0.11920370036201439), (31, 0.20935482520268536), (32, 0.58140672694418549), (33, 0.86476990150558442), (34, 0.21906262257842274), (35, 1.2623527033747142), (36, 0.47122700487966684), (37, 0.14754992485952445), (38, -0.029780850257687785), (39, 0.41251322337680407), (40, 0.70805306705532289), (41, -0.17539941089750521), (42, 0.099208258486715051), (43, 0.52714882842769772), (44, -0.55353450448882024), (45, -0.48520621106869544), (46, 0.42932852481533534), (47, -1.0848551994364626), (48, -0.2278193012580656), (49, -0.86398865304435535), (50, 0.26069692321941718), (51, -0.17035678155826239), (52, 0.17694402303837284), (53, 0.38019775252075771), (54, 0.52907741665760166), (55, -0.56801027798438197), (56, -0.24289558061900623), (57, -0.53166839270636368), (58, -0.75397485089313621), (59, 0.43914810153445505), (60, -0.11539391176838343), (61, 0.28098629645010242), (62, -0.22417217147281987), (63, 0.04359834386371364), (64, 0.40124504321511811), (65, 0.74406715148428892), (66, 0.083025633287427653), (67, -0.56067477401379284), (68, 0.22243465345106417), (69, -0.39550436325219973), (70, -0.54147531866201193), (71, -0.55283044224248479), (72, -1.619913100721621), (73, 
-0.093405314999276637), (74, 0.30444920349708604), (75, -0.53813981022164803), (76, -0.59617088497008486), (77, -0.51219246727570034), (78, -0.13706180463557627), (79, -0.16008030773188894), (80, -0.95552532874370033), (81, -1.0713657346866474), (82, -0.39524155791968052), (83, 0.10409521414708364), (84, -0.52691807273338676), (85, 0.28081975514224211), (86, -0.93232856873163084), (87, -0.18390081515478202), (88, -0.46222984135156353), (89, 0.1668585124747386), (90, 0.87547572965713072), (91, 0.037415066391670221), (92, -0.48772947456671473), (93, -0.41313026558553678), (94, 0.85224037332425129), (95, -0.25790488005477619), (96, -0.023718854903863967), (97, 0.32059833574508628), (98, -0.24697257256407545), (99, 0.41432508817899638)]
###Markdown
Cosine Similarity
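The similarity index below ranks documents by the cosine of the angle between their LSI vectors. A minimal numpy version of the same measure (a sketch, independent of gensim):

```python
import numpy as np

def cosine_sim(u, v):
    """cos(theta) = u.v / (|u| |v|)"""
    u, v = np.asarray(u, dtype=float), np.asarray(v, dtype=float)
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

print(cosine_sim([1, 0], [1, 0]))  # 1.0  (same direction)
print(cosine_sim([1, 0], [0, 1]))  # 0.0  (orthogonal)
```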
###Code
index = similarities.MatrixSimilarity(lsi[corpus]) # transform corpus to LSI space and index it
index.save('pkl/resume_stopped.index')
index = similarities.MatrixSimilarity.load('pkl/resume_stopped.index')
sims = index[vec_lsi] # perform a similarity query against the corpus
# (document_number, document_similarity)
sim_lst = list(enumerate(sims))
import operator
sim_lst.sort(key=operator.itemgetter(1), reverse=True)
# comparing resumes within resumes
sim_lst[1:6]
' '.join(texts[0])
###Output
_____no_output_____
###Markdown
Vector Spaces
###Code
import logging
#logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)
import gensim
from gensim import corpora, models, similarities
from nltk.corpus import stopwords
from collections import defaultdict
from pprint import pprint
from six import iteritems
import os
import numpy as np
import pandas as pd
import scipy.sparse
###Output
_____no_output_____
###Markdown
Load Processed Dataframe
###Code
df = pd.read_json('data/md_contents.json')
df.head()
###Output
_____no_output_____
###Markdown
Convert Series to List of Strings
###Code
contents = df['file_contents'].tolist()
contents[:1]
###Output
_____no_output_____
###Markdown
From Strings to Vectors Tokenize the documents, remove stop words and words that only appear once
###Code
# remove common words and tokenize
stoplist = set(stopwords.words('english'))
texts = [[word.lower() for word in content.split() if word.lower() not in stoplist] for content in contents]
# count token frequencies across the corpus
frequency = defaultdict(int)
for text in texts:
for token in text:
frequency[token] += 1
# remove words that occur less than n times
texts = [[token for token in text if frequency[token] > 3] for text in texts]
len(texts)
###Output
_____no_output_____
###Markdown
Save Token Count Dictionary to File
###Code
dictionary = corpora.Dictionary(texts)
# store the dictionary, for future reference
dictionary.save('data/text_token.dict')
print(dictionary)
###Output
Dictionary(24712 unique tokens: ['connector', 'mattdesl', 'hdf', 'codrops', 'pgdata']...)
###Markdown
Convert Tokenized Resumes to Vectors
###Code
corpus = [dictionary.doc2bow(text) for text in texts]
corpora.MmCorpus.serialize('data/text_token.mm', corpus) # store to disk, for later use
for c in corpus[:1]:
print(c)
###Output
[(0, 1), (1, 1), (2, 1), (3, 1), (4, 1), (5, 1), (6, 2), (7, 1), (8, 1), (9, 1), (10, 1), (11, 1), (12, 1), (13, 1), (14, 2), (15, 1), (16, 1), (17, 1), (18, 1), (19, 1), (20, 1)]
###Markdown
Transformation Interface
###Code
# load tokenized dictionary
if (os.path.exists('data/text_token.dict')):
dictionary = corpora.Dictionary.load('data/text_token.dict')
print('Tokenized dictionary LOADED as \'dictionary\'')
else:
print('Tokenized dictionary NOT FOUND')
# load sparse vector matrix
if (os.path.exists('data/text_token.mm')):
corpus = corpora.MmCorpus('data/text_token.mm')
print('Sparse matrix LOADED as \'corpus\'')
else:
print('Sparse matrix NOT FOUND')
###Output
Sparse matrix LOADED as 'corpus'
###Markdown
TF-IDF Transformation
###Code
# step 1 -- initialize a model
tfidf_mdl = models.TfidfModel(corpus)
###Output
_____no_output_____
###Markdown
Calling `model[corpus]` only creates a wrapper around the old corpus document stream – actual conversions are done on-the-fly, during document iteration. We cannot convert the entire corpus at the time of calling `corpus_transformed = model[corpus]`, because that would mean storing the result in main memory, which contradicts gensim’s objective of memory-independence. If you will be iterating over the transformed `corpus_transformed` multiple times, and the transformation is costly, serialize the resulting corpus to disk first and continue using that.
###Code
# step 2 -- use the model to transform vectors
corpus_tfidf = tfidf_mdl[corpus]
print(len(corpus_tfidf))
# view one resume
for doc in corpus_tfidf[:1]:
print(doc)
from sklearn.feature_extraction.text import TfidfVectorizer
n_features = 1500
tfidf_vec = TfidfVectorizer(input='content', ngram_range=(1, 3), max_df=0.85, min_df=0.05,
max_features=n_features, norm='l2', use_idf=True, smooth_idf=True, sublinear_tf=False)
tfidf_vec_prep = tfidf_vec.fit_transform(contents)
from sklearn.cluster import KMeans
km = KMeans(n_clusters=5, init='k-means++', max_iter=100, n_init=1, n_jobs=-1)
km_mdl = km.fit_predict(tfidf_vec_prep)
len(km_mdl)
# Determine your k range
k_range = range(1,20)
# fit the kmeans model for each n_clusters = k
k_means_var = [KMeans(n_clusters=k).fit(tfidf_vec_prep) for k in k_range]
# pull out the cluster centers for each model
centroids = [X.cluster_centers_ for X in k_means_var]
from scipy.spatial.distance import cdist, pdist
# calculate the euclidean distance from each point to each cluster center
k_euclid = [cdist(tfidf_vec_prep.toarray(), cent, 'euclidean') for cent in centroids]
dist = [np.min(ke, axis=1) for ke in k_euclid]
# total within-cluster sum of squares
wcss = [sum(d**2) for d in dist]
# the total sum of squares
tss = sum(pdist(tfidf_vec_prep.toarray())**2)/tfidf_vec_prep.shape[1]
# the between-cluster sum of squares
bss = tss - wcss
import seaborn as sns
import matplotlib.pyplot as plt
sns.set_style("white")
sns.set_context("poster", font_scale=1.25, rc={"lines.linewidth": 2.5})
sns.set_palette("Set2")
colors = sns.color_palette("BrBG", 5)
# elbow inputs: average within-cluster distance for each k (from the cells above)
K = k_range
avgWithinSS = [sum(d)/tfidf_vec_prep.shape[0] for d in dist]
# make figure
fig = plt.figure(figsize=(20,12))
ax = fig.add_subplot(111)
# plots
ax.plot(K, avgWithinSS, marker='o', color=colors[-1], alpha=0.5)
# labels/titles
plt.legend(loc="best")
plt.title('Elbow for K-Means')
plt.xlabel('Number of Clusters')
plt.ylabel('Avg. Within-Cluster Sum of Squares')
# remove border
ax.spines["top"].set_visible(False)
ax.spines["bottom"].set_visible(False)
ax.spines["right"].set_visible(False)
ax.spines["left"].set_visible(False)
# show grid
ax.xaxis.grid(True, alpha=0.2)
ax.yaxis.grid(True, alpha=0.2)
# plot that biddy
plt.savefig('data/{0}.png'.format('KMeans_elbow'), bbox_inches='tight')
plt.close(fig)
import numpy as np
from scipy.cluster.vq import kmeans
from scipy.spatial.distance import cdist,pdist
from sklearn import datasets
from sklearn.decomposition import RandomizedPCA
from matplotlib import pyplot as plt
from matplotlib import cm
# perform PCA dimensionality reduction
pca = RandomizedPCA(n_components=2).fit(tfidf_vec_prep.toarray())
X = pca.transform(tfidf_vec_prep.toarray())
##### cluster data into K=1..20 clusters #####
K_MAX = 20
KK = range(1,K_MAX+1)
KM = [kmeans(X,k) for k in KK]
centroids = [cent for (cent,var) in KM]
D_k = [cdist(X, cent, 'euclidean') for cent in centroids]
cIdx = [np.argmin(D,axis=1) for D in D_k]
dist = [np.min(D,axis=1) for D in D_k]
tot_withinss = [sum(d**2) for d in dist] # Total within-cluster sum of squares
totss = sum(pdist(X)**2)/X.shape[0] # The total sum of squares
betweenss = totss - tot_withinss # The between-cluster sum of squares
##### plots #####
kIdx = 4 # K=5
clr = cm.spectral( np.linspace(0,1,10) ).tolist()
mrk = 'os^p<dvh8>+x.'
# make figure
fig = plt.figure(figsize=(20,12))
ax = fig.add_subplot(111)
# color
colors = sns.color_palette("BrBG", 5)
# plots
ax.plot(KK, betweenss/totss*100, marker='o', color=colors[-1], alpha=0.5)
ax.plot(KK[kIdx], betweenss[kIdx]/totss*100, marker='o', markersize=25, color=colors[0], alpha=0.5)
# labels/titles
plt.legend(loc="best")
plt.title('Elbow for KMeans Clustering')
plt.xlabel('Number of clusters')
plt.ylabel('Percentage of variance explained (%)')
ax.set_xlim((-0.1,20.5))
ax.set_ylim((-0.5,100))
# remove border
ax.spines["top"].set_visible(False)
ax.spines["bottom"].set_visible(False)
ax.spines["right"].set_visible(False)
ax.spines["left"].set_visible(False)
# show grid
ax.xaxis.grid(True, alpha=0.2)
ax.yaxis.grid(True, alpha=0.2)
# save figure
plt.savefig('data/{0}.png'.format('KMeans_elbow_var'), bbox_inches='tight')
plt.close(fig)
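The same elbow curve can also be sketched with scikit-learn's `KMeans`, whose `inertia_` attribute is the within-cluster sum of squares. This is a hedged alternative, not what the notebook uses; `X_demo` below is stand-in data for the PCA-reduced vectors:

```python
# Hedged alternative to the scipy kmeans loop above, using scikit-learn.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X_demo = rng.normal(size=(200, 2))   # stand-in for the PCA-reduced vectors

inertias = []
for k in range(1, 8):
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X_demo)
    inertias.append(km.inertia_)     # within-cluster sum of squares at this k

# inertia should not increase with k; the "elbow" is where the drop flattens
assert all(a >= b for a, b in zip(inertias, inertias[1:]))
```

Plotting `range(1, 8)` against `inertias` reproduces the same kind of elbow figure as the scipy-based code.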
# make figure
fig = plt.figure(figsize=(20,12))
ax = fig.add_subplot(111)
# plots
for i in range(kIdx+1):
ind = (cIdx[kIdx]==i)
ax.scatter(X[ind,0],X[ind,1], s=65, c=colors[i], marker=mrk[i],
label='Cluster {0}'.format(i), alpha=1)
# labels/titles
plt.legend(loc='upper right')
plt.title('K={0} Clusters'.format(KK[kIdx]))
#ax.set_xlim((-.5,.5))
#ax.set_ylim((-.3,.81))
# remove border
ax.spines["top"].set_visible(False)
ax.spines["bottom"].set_visible(False)
ax.spines["right"].set_visible(False)
ax.spines["left"].set_visible(False)
# show grid
ax.xaxis.grid(True, alpha=0.2)
ax.yaxis.grid(True, alpha=0.2)
# save figure
plt.savefig('data/{0}.png'.format('KMeans_{0}_clusters'.format(KK[kIdx])), bbox_inches='tight')
plt.close(fig)
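The elbow code above leans on two identities: the total sum of squares about the mean equals the sum of pairwise squared distances divided by n, and total SS decomposes into within-cluster plus between-cluster SS. A toy-data sanity check (illustrative only):

```python
# Toy-data check of the identities behind the elbow computation above.
import numpy as np
from scipy.spatial.distance import pdist

rng = np.random.default_rng(1)
pts = rng.normal(size=(50, 2))
labels = (pts[:, 0] > 0).astype(int)          # an arbitrary 2-cluster split

# identity 1: total SS about the mean == sum of pairwise squared distances / n
totss_direct = ((pts - pts.mean(axis=0)) ** 2).sum()
totss_pairwise = (pdist(pts) ** 2).sum() / pts.shape[0]

# identity 2: total SS == within-cluster SS + between-cluster SS
within = sum(((pts[labels == c] - pts[labels == c].mean(axis=0)) ** 2).sum()
             for c in (0, 1))
between = sum((labels == c).sum()
              * ((pts[labels == c].mean(axis=0) - pts.mean(axis=0)) ** 2).sum()
              for c in (0, 1))

assert np.isclose(totss_direct, totss_pairwise)
assert np.isclose(within + between, totss_direct)
```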
###Output
_____no_output_____
###Markdown
Latent Semantic Indexing Topics
###Code
num_topics = 100
# initialize an LSI transformation
lsi = models.LsiModel(corpus_tfidf, id2word=dictionary, num_topics=num_topics)
corpus_lsi = lsi[corpus_tfidf]
# print_topics logs the topics and also returns them as (id, string) pairs
a = lsi.print_topics(8)
a[0]
for doc in corpus_lsi[800]: # both bow->tfidf and tfidf->lsi transformations are actually executed here, on the fly
pass
#print(doc)
###Output
_____no_output_____
###Markdown
Model Save & Load
###Code
lsi.save('pkl/lsi_mdl.lsi')
lsi = models.LsiModel.load('pkl/lsi_mdl.lsi')
###Output
_____no_output_____
###Markdown
LDA Topics
###Code
lda_mdl = models.LdaModel(corpus, id2word=dictionary, num_topics=20)
lda_mdl.top_topics(corpus)  # top_topics is a method; pass the corpus to rank topics by coherence
pprint(lda_mdl.print_topics(10))
print(corpus)
doc = df.iloc[0]['resume_nouns']
vec_bow = dictionary.doc2bow(doc.lower().split())
vec_lsi = lsi[vec_bow] # convert the query to LSI space
print(vec_lsi)
###Output
[(0, 2.374975010869965), (1, 0.51728887522253952), (2, -0.058935199530753268), (3, 0.3578493537974749), (4, 1.560417648600577), (5, -1.9931029846659232), (6, 0.58697139609914861), (7, 1.437193124041608), (8, -0.38633595032575146), (9, -2.3068352804125016), (10, 0.77482570234627612), (11, -0.66082521176920128), (12, -2.0221618401059822), (13, 1.3229424544863675), (14, -0.29408524037515837), (15, -1.0569710323996966), (16, 1.110889840043604), (17, 1.3434022602282594), (18, -0.095802335904933394), (19, -0.80089048085959047), (20, -0.64832039201675884), (21, 1.35059095621303), (22, 0.36313071163680766), (23, 0.23008512654094881), (24, -1.4704302056681957), (25, -0.51110545886820391), (26, 1.5065962351771218), (27, -0.85864630999976976), (28, -0.27005311330166226), (29, 1.3357001963834654), (30, 0.11920370036201439), (31, 0.20935482520268536), (32, 0.58140672694418549), (33, 0.86476990150558442), (34, 0.21906262257842274), (35, 1.2623527033747142), (36, 0.47122700487966684), (37, 0.14754992485952445), (38, -0.029780850257687785), (39, 0.41251322337680407), (40, 0.70805306705532289), (41, -0.17539941089750521), (42, 0.099208258486715051), (43, 0.52714882842769772), (44, -0.55353450448882024), (45, -0.48520621106869544), (46, 0.42932852481533534), (47, -1.0848551994364626), (48, -0.2278193012580656), (49, -0.86398865304435535), (50, 0.26069692321941718), (51, -0.17035678155826239), (52, 0.17694402303837284), (53, 0.38019775252075771), (54, 0.52907741665760166), (55, -0.56801027798438197), (56, -0.24289558061900623), (57, -0.53166839270636368), (58, -0.75397485089313621), (59, 0.43914810153445505), (60, -0.11539391176838343), (61, 0.28098629645010242), (62, -0.22417217147281987), (63, 0.04359834386371364), (64, 0.40124504321511811), (65, 0.74406715148428892), (66, 0.083025633287427653), (67, -0.56067477401379284), (68, 0.22243465345106417), (69, -0.39550436325219973), (70, -0.54147531866201193), (71, -0.55283044224248479), (72, -1.619913100721621), (73, 
-0.093405314999276637), (74, 0.30444920349708604), (75, -0.53813981022164803), (76, -0.59617088497008486), (77, -0.51219246727570034), (78, -0.13706180463557627), (79, -0.16008030773188894), (80, -0.95552532874370033), (81, -1.0713657346866474), (82, -0.39524155791968052), (83, 0.10409521414708364), (84, -0.52691807273338676), (85, 0.28081975514224211), (86, -0.93232856873163084), (87, -0.18390081515478202), (88, -0.46222984135156353), (89, 0.1668585124747386), (90, 0.87547572965713072), (91, 0.037415066391670221), (92, -0.48772947456671473), (93, -0.41313026558553678), (94, 0.85224037332425129), (95, -0.25790488005477619), (96, -0.023718854903863967), (97, 0.32059833574508628), (98, -0.24697257256407545), (99, 0.41432508817899638)]
###Markdown
Cosine Similarity
###Code
index = similarities.MatrixSimilarity(lsi[corpus]) # transform corpus to LSI space and index it
index.save('pkl/resume_stopped.index')
index = similarities.MatrixSimilarity.load('pkl/resume_stopped.index')
sims = index[vec_lsi] # perform a similarity query against the corpus
# (document_number, document_similarity)
sim_lst = list(enumerate(sims))
import operator
sim_lst.sort(key=operator.itemgetter(1), reverse=True)
# comparing resumes within resumes
sim_lst[1:6]
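A hedged alternative to sorting (index, score) pairs with `operator.itemgetter`: since `sims` is array-like, numpy's `argsort` yields the indices of the most similar documents directly (`sims_demo` below is illustrative stand-in data):

```python
# Stand-in for sims = index[vec_lsi]; argsort gives top matches directly.
import numpy as np

sims_demo = np.array([0.12, 0.98, 0.45, 0.71, 0.33])
top_k = np.argsort(sims_demo)[::-1][:3]   # indices, highest similarity first
print(top_k.tolist())                     # -> [1, 3, 2]
```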
' '.join(texts[0])
###Output
_____no_output_____ |
Data Visualization/Seaborn/.ipynb_checkpoints/7. KDE Plot-checkpoint.ipynb | ###Markdown
KDE PLOT KDE Plot is used to estimate the probability density function of a continuous random variable.
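Before the seaborn plots, a minimal sketch of what a KDE computes, using scipy's `gaussian_kde` (an assumption for illustration: seaborn has its own estimator, but the idea is the same, a smooth estimate of the pdf from a sample):

```python
# Illustrative KDE on a standard-normal sample using scipy.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(42)
sample = rng.normal(loc=0.0, scale=1.0, size=2000)

kde = gaussian_kde(sample)          # bandwidth via Scott's rule by default
grid = np.linspace(-4.0, 4.0, 9)
density = kde(grid)                 # estimated pdf values on the grid

# the estimate should peak at the grid point nearest the true mean (0)
assert density.argmax() == 4
# and integrate to approximately 1, as a pdf should
wide = np.linspace(-6, 6, 601)
assert abs(np.trapz(kde(wide), wide) - 1.0) < 0.05
```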
###Code
sns.set_style("darkgrid")
fig1 , axes = plt.subplots(nrows=2,ncols=2 , figsize = (14,14))
x = np.random.normal(1,10,1000)
#Simple KDE Plot
axes[0,0].set_title("Simple KDE Plot")
sns.kdeplot(x,ax=axes[0,0])
# Shade under the density curve using the "shade" parameter
axes[0,1].set_title("KDE Plot (Shaded Area Under the Curve)")
sns.kdeplot(x,shade=True,ax=axes[0,1])
# Shade under the density curve using the "shade" parameter and use a different color.
axes[1,0].set_title("KDE Plot (Different Color)")
sns.kdeplot(x,ax=axes[1,0],color = 'r',shade=True,cut=0)
#Plotting the density on the vertical axis
axes[1,1].set_title("KDE Plot (Density on Vertical Axis)")
sns.kdeplot(x,vertical=True,ax=axes[1,1])  # target the intended subplot explicitly
plt.show()
plt.figure(figsize=(6,8))
x = np.linspace(0, 10, 100)
y = np.sin(x)
sns.kdeplot(x,y,shade=True,cmap="Reds", shade_lowest=False)
insurance.head()
plt.figure(figsize=(6,8))
sns.kdeplot(insurance.bmi,insurance.charges,shade=True,cmap="Reds", shade_lowest=False)
plt.show()
iris = sns.load_dataset("iris")
plt.figure(figsize=(8,6))
sns.kdeplot(iris.sepal_width, iris.sepal_length,cmap="Reds", shade=True, shade_lowest=False)
plt.show()
###Output
_____no_output_____ |
wikipedia/processing-wikipedia.ipynb | ###Markdown
Processing Wikipedia
###Code
import textwrap
import pandas as pd
wikipedia = pd.read_csv('wikipedia.csv')
wikipedia.columns
wikipedia.sentence[0]
# print body text
def print_body(body_text: str) -> None:
    print(textwrap.fill(body_text))
print_body(wikipedia.sentence[10000])
print_body(wikipedia.proc_sentence[10000])
wikipedia.shape
wikipedia.columns
wikipedia.to_csv('gs://ekaba-assets/wikipedia.csv')
!rm wikipedia.csv
wikipedia_sel = wikipedia[['proc_sentence']]
wikipedia_sel.to_csv('gs://ekaba-assets/wikipedia_proc_sentence.csv')
wikipedia_sel.to_csv( "wikipedia_proc_sentence.csv", index=False, encoding='utf-8-sig')
# convert csv to txt
import csv
import sys
maxInt = sys.maxsize
csv.field_size_limit(maxInt)
csv_file = 'wikipedia_proc_sentence.csv'
txt_file = 'wikipedia_proc_sentence.txt'
with open(txt_file, "w") as my_output_file, open(csv_file, "r") as my_input_file:
    for row in csv.reader(my_input_file):
        my_output_file.write(" ".join(row) + '\n')
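Since only a single text column is being written, a hedged alternative is to let pandas emit the plain-text file directly and skip the csv-module round-trip (the demo frame below is illustrative):

```python
# Illustrative: write one text column straight to plain text with pandas.
import io
import pandas as pd

demo = pd.DataFrame({"proc_sentence": ["first sentence", "second sentence"]})
buf = io.StringIO()                      # stands in for a file on disk
demo["proc_sentence"].to_csv(buf, header=False, index=False)
lines = buf.getvalue().splitlines()
print(lines)  # -> ['first sentence', 'second sentence']
```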
!gsutil -m cp wikipedia_proc_sentence.txt gs://ekaba-assets/
!rm wikipedia_proc_sentence.csv
!gsutil -m cp gs://ekaba-assets/processed_full_body_text_BODY.txt .
!rm -rf wiki
###Output
_____no_output_____
###Markdown
Combine two text files together
###Code
import shutil
with open('biomed_wikipedia_data.txt','wb') as wfd:
for f in ['processed_full_body_text_BODY.txt','wikipedia_proc_sentence.txt']:
with open(f,'rb') as fd:
shutil.copyfileobj(fd, wfd)
wfd.write(b"\n")
!gsutil -m cp biomed_wikipedia_data.txt gs://ekaba-assets/
!rm processed_full_body_text_BODY.txt
!rm wikipedia_proc_sentence.txt
###Output
_____no_output_____ |
scripts/run_questionnaires.ipynb | ###Markdown
Survey A
###Code
# raw data
f_A = '%s/Questionnaires/surveyA_151013.csv' % data_dir
df_A = pd.read_csv(f_A, sep = ",", parse_dates =[1,5])
###Output
_____no_output_____
###Markdown
Self-control scale
###Code
conv.run_SelfCtrl(df_A.copy(), out_dir = '%s/SCS' % internal_dir, public = False)
conv.run_SelfCtrl(df_A.copy(), out_dir = '%s/SCS' % restricted_dir, public = True)
raw_A = pd.read_csv('%s/SCS/SCS.csv' % restricted_dir,
sep = ",", parse_dates =[1,5],
dtype={'ids':str})
sums.run_SelfCtrl(raw_A.copy(), out_dir = '%s/SCS' % open_dir)
###Output
_____no_output_____
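The same three-step pattern (write a private internal copy, write a public restricted copy, read it back and compute summary scores) repeats for every instrument in this notebook. A hypothetical helper could collapse it to one call per questionnaire; `conv_fn`/`sums_fn` stand in for the notebook's `conv.run_*`/`sums.run_*` functions, and all directory names here are illustrative:

```python
# Hypothetical refactoring sketch; not the notebook's actual code.
def process_questionnaire(name, df, conv_fn, sums_fn,
                          internal_dir="internal", restricted_dir="restricted",
                          open_dir="open"):
    conv_fn(df.copy(), out_dir="%s/%s" % (internal_dir, name), public=False)
    conv_fn(df.copy(), out_dir="%s/%s" % (restricted_dir, name), public=True)
    # in the notebook, the restricted csv is read back here before summarizing
    sums_fn(df.copy(), out_dir="%s/%s" % (open_dir, name))
    return "%s/%s" % (open_dir, name)

calls = []                                           # stubs record the calls
stub = lambda df, out_dir, **kw: calls.append(out_dir)
result = process_questionnaire("SCS", {"dummy": 1}, stub, stub)
```

The stubs only record which directories would be written, which keeps the sketch runnable without the real `conv`/`sums` modules.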
###Markdown
Internet addiction test
###Code
conv.run_IAT(df_A.copy(), out_dir = '%s/IAT' % internal_dir, public = False)
conv.run_IAT(df_A.copy(), out_dir = '%s/IAT' % restricted_dir, public = True)
raw_A = pd.read_csv('%s/IAT/IAT.csv' % restricted_dir,
sep = ",", parse_dates =[1,5],
dtype={'ids':str})
sums.run_IAT(raw_A.copy(), out_dir = '%s/IAT' % open_dir)
###Output
_____no_output_____
###Markdown
Varieties of inner speech
###Code
conv.run_VIS(df_A.copy(), out_dir = '%s/VISQ' % internal_dir, public = False)
conv.run_VIS(df_A.copy(), out_dir = '%s/VISQ' % restricted_dir, public = True)
raw_A = pd.read_csv('%s/VISQ/VISQ.csv' % restricted_dir,
sep = ",", parse_dates =[1,5],
dtype={'ids':str})
sums.run_VIS(raw_A.copy(), out_dir = '%s/VISQ' % open_dir)
###Output
_____no_output_____
###Markdown
Spontaneous and Deliberate Mind Wandering
###Code
conv.run_MW_SD(df_A.copy(), out_dir = '%s/S-D-MW' % internal_dir, public = False)
conv.run_MW_SD(df_A.copy(), out_dir = '%s/S-D-MW' % restricted_dir, public = True)
raw_A = pd.read_csv('%s/S-D-MW/S-D-MW.csv' % restricted_dir,
sep = ",", parse_dates =[1,5],
dtype={'ids':str})
sums.run_MW_SD(raw_A.copy(), out_dir = '%s/S-D-MW' % open_dir)
###Output
_____no_output_____
###Markdown
Short dark triad
###Code
conv.run_SDT(df_A.copy(), out_dir = '%s/SD3' % internal_dir, public = False)
conv.run_SDT(df_A.copy(), out_dir = '%s/SD3' % restricted_dir, public = True)
raw_A = pd.read_csv('%s/SD3/SD3.csv' % restricted_dir,
sep = ",", parse_dates =[1,5],
dtype={'ids':str})
sums.run_SDT(raw_A.copy(), out_dir = '%s/SD3' % open_dir)
###Output
_____no_output_____
###Markdown
Social desirability
###Code
conv.run_SDS(df_A.copy(), out_dir = '%s/SDS' % internal_dir, public = False)
conv.run_SDS(df_A.copy(), out_dir = '%s/SDS' % restricted_dir, public = True)
raw_A = pd.read_csv('%s/SDS/SDS.csv' % restricted_dir,
sep = ",", parse_dates =[1,5],
dtype={'ids':str})
sums.run_SDS(raw_A.copy(), out_dir = '%s/SDS' % open_dir)
###Output
_____no_output_____
###Markdown
Impulsivity
###Code
conv.run_UPPSP(df_A.copy(), out_dir = '%s/UPPS-P' % internal_dir, public = False)
conv.run_UPPSP(df_A.copy(), out_dir = '%s/UPPS-P' % restricted_dir, public = True)
raw_A = pd.read_csv('%s/UPPS-P/UPPS-P.csv' % restricted_dir,
sep = ",", parse_dates =[1,5],
dtype={'ids':str})
sums.run_UPPSP(raw_A.copy(), out_dir = '%s/UPPS-P' % open_dir)
###Output
_____no_output_____
###Markdown
Tuckmann Procrastination Scale
###Code
conv.run_TPS(df_A.copy(), out_dir = '%s/TPS' % internal_dir, public = False)
conv.run_TPS(df_A.copy(), out_dir = '%s/TPS' % restricted_dir, public = True)
raw_A = pd.read_csv('%s/TPS/TPS.csv' % restricted_dir,
sep = ",", parse_dates =[1,5],
dtype={'ids':str})
sums.run_TPS(raw_A.copy(), out_dir = '%s/TPS' % open_dir)
###Output
_____no_output_____
###Markdown
ASR 18 - 59
###Code
conv.run_ASR(df_A.copy(), out_dir = '%s/ASR' % internal_dir, public = False)
conv.run_ASR(df_A.copy(), out_dir = '%s/ASR' % restricted_dir, public = True)
raw_A = pd.read_csv('%s/ASR/ASR.csv' % restricted_dir,
sep = ",", parse_dates =[1,5],
dtype={'ids':str})
sums.run_ASR(raw_A.copy(), out_dir = '%s/ASR' % open_dir)
###Output
_____no_output_____
###Markdown
Self-esteem scale
###Code
conv.run_SE(df_A.copy(), out_dir = '%s/SE' % internal_dir, public = False)
conv.run_SE(df_A.copy(), out_dir = '%s/SE' % restricted_dir, public = True)
raw_A = pd.read_csv('%s/SE/SE.csv' % restricted_dir,
sep = ",", parse_dates =[1,5],
dtype={'ids':str})
sums.run_SE(raw_A.copy(), out_dir = '%s/SE' % open_dir)
###Output
_____no_output_____
###Markdown
Involuntary Musical Imagery Scale
###Code
conv.run_IMIS(df_A.copy(), out_dir = '%s/IMIS' % internal_dir, public = False)
conv.run_IMIS(df_A.copy(), out_dir = '%s/IMIS' % restricted_dir, public = True)
raw_A = pd.read_csv('%s/IMIS/IMIS.csv' % restricted_dir,
sep = ",", parse_dates =[1,5],
dtype={'ids':str})
sums.run_IMIS(raw_A.copy(), out_dir = '%s/IMIS' % open_dir)
###Output
_____no_output_____
###Markdown
Goldsmiths Musical Sophistication Index
###Code
conv.run_GoldMSI(df_A.copy(), out_dir = '%s/Gold-MSI' % internal_dir, public = False)
conv.run_GoldMSI(df_A.copy(), out_dir = '%s/Gold-MSI' % restricted_dir, public = True)
raw_A = pd.read_csv('%s/Gold-MSI/Gold-MSI.csv' % restricted_dir,
sep = ",", parse_dates =[1,5],
dtype={'ids':str})
sums.run_GoldMSI(raw_A.copy(), out_dir = '%s/Gold-MSI' % open_dir)
###Output
_____no_output_____
###Markdown
Multi-gender identity questionnaire
###Code
conv.run_MGIQ(df_A.copy(), out_dir = '%s/MGIQ' % internal_dir, public = False)
conv.run_MGIQ(df_A.copy(), out_dir = '%s/MGIQ' % restricted_dir, public = True)
###Output
_____no_output_____
###Markdown
Survey B
###Code
# raw data
f_B = '%s/Questionnaires/surveyB_151013.csv' % data_dir
f2_B = '%s/Questionnaires/surveyF_151013.csv' % data_dir # due to neo ffi items
df_B = pd.read_csv(f_B, sep = ",", parse_dates =[1,5])
###Output
_____no_output_____
###Markdown
NEO PI-R
###Code
conv.run_NEOPIR(f_B, f2_B, out_dir = '%s/NEO-PI-R' % internal_dir, public = False)
conv.run_NEOPIR(f_B, f2_B, out_dir = '%s/NEO-PI-R' % restricted_dir, public = True)
raw_B = pd.read_csv('%s/NEO-PI-R/NEO-PI-R.csv' % restricted_dir,
sep = ",", parse_dates =[1,5],
dtype={'ids':str})
sums.run_NEOPIR(raw_B.copy(), out_dir = '%s/NEO-PI-R' % open_dir)
###Output
_____no_output_____
###Markdown
Epworth sleepiness scale
###Code
conv.run_ESS(df_B.copy(), out_dir = '%s/ESS' % internal_dir, public = False)
conv.run_ESS(df_B.copy(), out_dir = '%s/ESS' % restricted_dir, public = True)
raw_B = pd.read_csv('%s/ESS/ESS.csv' % restricted_dir,
sep = ",", parse_dates =[1,5],
dtype={'ids':str})
sums.run_ESS(raw_B.copy(), out_dir = '%s/ESS' % open_dir)
###Output
_____no_output_____
###Markdown
BDI
###Code
conv.run_BDI(df_B.copy(), out_dir = '%s/BDI' % internal_dir, public = False)
conv.run_BDI(df_B.copy(), out_dir = '%s/BDI' % restricted_dir, public = True)
raw_B = pd.read_csv('%s/BDI/BDI.csv' % restricted_dir,
sep = ",", parse_dates =[1,5],
dtype={'ids':str})
sums.run_BDI(raw_B.copy(), out_dir = '%s/BDI' % open_dir)
###Output
_____no_output_____
###Markdown
Hospital Anxiety and Depression Scale (HADS)
###Code
conv.run_HADS(df_B.copy(), out_dir = '%s/HADS' % internal_dir, public = False)
conv.run_HADS(df_B.copy(), out_dir = '%s/HADS' % restricted_dir, public = True)
raw_B = pd.read_csv('%s/HADS/HADS.csv' % restricted_dir,
sep = ",", parse_dates =[1,5],
dtype={'ids':str})
sums.run_HADS(raw_B.copy(), out_dir = '%s/HADS' % open_dir)
###Output
_____no_output_____
###Markdown
Boredom proneness scale
###Code
conv.run_BPS(df_B.copy(), out_dir = '%s/BP' % internal_dir, public = False)
conv.run_BPS(df_B.copy(), out_dir = '%s/BP' % restricted_dir, public = True)
raw_B = pd.read_csv('%s/BP/BP.csv' % restricted_dir,
sep = ",", parse_dates =[1,5],
dtype={'ids':str})
sums.run_BPS(raw_B.copy(), out_dir = '%s/BP' % open_dir)
###Output
_____no_output_____
###Markdown
Derryberry Attention Control Scale
###Code
conv.run_ACS(df_B.copy(), out_dir = '%s/ACS' % internal_dir, public = False)
conv.run_ACS(df_B.copy(), out_dir = '%s/ACS' % restricted_dir, public = True)
raw_B = pd.read_csv('%s/ACS/ACS.csv' % restricted_dir,
sep = ",", parse_dates =[1,5],
dtype={'ids':str})
sums.run_ACS(raw_B.copy(), out_dir = '%s/ACS' % open_dir)
###Output
_____no_output_____
###Markdown
PSSI - Persönlichkeitsstil- und Störungsinventar (Personality Style and Disorder Inventory)
###Code
conv.run_PSSI(df_B.copy(), out_dir = '%s/PSSI' % internal_dir, public = False)
conv.run_PSSI(df_B.copy(), out_dir = '%s/PSSI' % restricted_dir, public = True)
raw_B = pd.read_csv('%s/PSSI/PSSI.csv' % restricted_dir,
sep = ",", parse_dates =[1,5],
dtype={'ids':str})
sums.run_PSSI(raw_B.copy(), out_dir = '%s/PSSI' % open_dir)
###Output
_____no_output_____
###Markdown
Multi-media inventory
###Code
conv.run_MMI(df_B.copy(), out_dir = '%s/MMI' % internal_dir, public = False)
conv.run_MMI(df_B.copy(), out_dir = '%s/MMI' % restricted_dir, public = True)
raw_B = pd.read_csv('%s/MMI/MMI.csv' % restricted_dir,
sep = ",", parse_dates =[1,5],
dtype={'ids':str})
sums.run_MMI(raw_B.copy(), out_dir = '%s/MMI' % open_dir)
###Output
_____no_output_____
###Markdown
Mobile phone usage
###Code
conv.run_mobile(df_B.copy(), out_dir = '%s/MPU' % internal_dir, public = False)
conv.run_mobile(df_B.copy(), out_dir = '%s/MPU' % open_dir, public = True)
###Output
_____no_output_____
###Markdown
Survey C (scanning day)
###Code
# raw data
f_C1 = '%s/Questionnaires/surveyCactive_151013.csv' % data_dir
f_C2 = '%s/Questionnaires/surveyCinactive_151013.csv' % data_dir
f_C3 = '%s/Questionnaires/surveyCcorrected_151013.csv' % data_dir
df_C1 = pd.read_csv(f_C1, sep = ",", parse_dates =[1,5])
df_C1['DS14 answer codes'] = pd.Series(np.zeros(len(df_C1)), index=df_C1.index)
df_C2 = pd.read_csv(f_C2, sep = ",", parse_dates =[1,5])
df_C2['DS14 answer codes'] = pd.Series(np.zeros(len(df_C2)), index=df_C2.index)
df_C3 = pd.read_csv(f_C3, sep = ",", parse_dates =[1,5])
df_C3['DS14 answer codes'] = pd.Series(np.ones(len(df_C3)), index=df_C3.index)
df_C = pd.concat([df_C1, df_C2, df_C3])
###Output
_____no_output_____
###Markdown
Facebook intensity scale
###Code
conv.run_FIS(df_C.copy(), out_dir = '%s/FBI' % internal_dir, public = False)
conv.run_FIS(df_C.copy(), out_dir = '%s/FBI' % restricted_dir, public = True)
###Output
_____no_output_____
###Markdown
NYC-Q on scanning day full NYC-Q (LIMIT)
###Code
# raw data
f_NYCQ_postscan = '%s/Questionnaires/LIMIT -NYC-Q post Scan.ods - LIMIT_20151215.csv' % data_dir
df_postscan = pd.read_csv(f_NYCQ_postscan)
conv.run_NYCQ_postscan(df_postscan, out_dir = '%s/NYC-Q_postscan' % internal_dir, public = False)
conv.run_NYCQ_postscan(df_postscan, out_dir = '%s/NYC-Q_postscan' % open_dir, public = True)
conv.run_NYCQ_posttasks(df_C.copy(), out_dir = '%s/NYC-Q_posttasks' % internal_dir, public = False)
conv.run_NYCQ_posttasks(df_C.copy(), out_dir = '%s/NYC-Q_posttasks' % open_dir, public = True)
###Output
_____no_output_____
###Markdown
short NYC-Q
###Code
# raw data
f_NYCQ_prescan = '%s/Questionnaires/Prescan short NYC-Q_20151215.csv' % data_dir
f_NYCQ_inscan = '%s/Questionnaires/NYCQ-short_inscanner.csv' % data_dir
f_NYCQ_postETS = '%s/Questionnaires/NYCQ-short-slider post Win_20151215.csv' % data_dir
df_prescan = pd.read_csv(f_NYCQ_prescan)
df_inscan = pd.read_csv(f_NYCQ_inscan)
df_postETS = pd.read_csv(f_NYCQ_postETS)
conv.run_NYCQ_prescan(df_prescan.copy(), out_dir = '%s/Short-NYC_prescan' % internal_dir, public = False)
conv.run_NYCQ_prescan(df_prescan.copy(), out_dir = '%s/Short-NYC_prescan' % open_dir, public = True)
conv.run_NYCQ_inscan(df_inscan.copy(), scan=1, out_dir = '%s/Short-NYC_inscan1' % internal_dir, public = False)
conv.run_NYCQ_inscan(df_inscan.copy(), scan=1, out_dir = '%s/Short-NYC_inscan1' % open_dir, public = True)
conv.run_NYCQ_inscan(df_inscan.copy(), scan=2, out_dir = '%s/Short-NYC_inscan2' % internal_dir, public = False)
conv.run_NYCQ_inscan(df_inscan.copy(), scan=2, out_dir = '%s/Short-NYC_inscan2' % open_dir, public = True)
conv.run_NYCQ_inscan(df_inscan.copy(), scan=3, out_dir = '%s/Short-NYC_inscan3' % internal_dir, public = False)
conv.run_NYCQ_inscan(df_inscan.copy(), scan=3, out_dir = '%s/Short-NYC_inscan3' % open_dir, public = True)
conv.run_NYCQ_inscan(df_inscan.copy(), scan=4, out_dir = '%s/Short-NYC_inscan4' % internal_dir, public = False)
conv.run_NYCQ_inscan(df_inscan.copy(), scan=4, out_dir = '%s/Short-NYC_inscan4' % open_dir, public = True)
# where to put this
conv.run_NYCQ_postETS(df_postETS.copy(), out_dir = '%s/Short-NYC_postETS' % internal_dir, public = False)
conv.run_NYCQ_postETS(df_postETS.copy(), out_dir = '%s/Short-NYC_postETS' % open_dir, public = True)
###Output
_____no_output_____
###Markdown
Survey F
###Code
# raw data
f_F = '%s/Questionnaires/surveyF_151013.csv' % data_dir
df_F = pd.read_csv(f_F, sep = ",", parse_dates =[1,5])
lemon_dir = '/nobackup/adenauer2/XNAT/Emotion Battery LEMON001-229_ 1-4 Rounds_CSV files'
###Output
_____no_output_____
###Markdown
STAI
###Code
conv.run_STAI(df_F.copy(), out_dir = '%s/STAI-G-X2' % internal_dir, public = False)
conv.run_STAI(df_F.copy(), out_dir = '%s/STAI-G-X2' % restricted_dir, public = True)
raw_STAI_lsd = pd.read_csv('%s/STAI-G-X2/STAI-G-X2.csv' % restricted_dir,
sep = ",", parse_dates =[1,5],
dtype={'ids':str})
raw_STAI_lemon = pd.read_csv('%s/STAI/STAI_G_Form_x2__20.csv' % lemon_dir,
sep = ",",
dtype={'ids':str})
cols = ['STAI_1', 'STAI_2', 'STAI_3', 'STAI_4', 'STAI_5', 'STAI_6',
'STAI_7', 'STAI_8', 'STAI_9', 'STAI_10', 'STAI_11', 'STAI_12',
'STAI_13', 'STAI_14', 'STAI_15', 'STAI_16', 'STAI_17', 'STAI_18',
'STAI_19', 'STAI_20']
idx = raw_STAI_lemon[cols].dropna(how='all').index
raw_STAI_lemon = raw_STAI_lemon.loc[idx]  # .ix was removed from pandas
raw_STAI = pd.concat([raw_STAI_lsd, raw_STAI_lemon])
raw_STAI.reset_index(drop=True, inplace=True)
sums.run_STAI(raw_STAI.copy(), out_dir = '%s/STAI-G-X2' % open_dir)
###Output
_____no_output_____
###Markdown
STAXI
###Code
conv.run_STAXI(df_F.copy(), out_dir = '%s/STAXI' % internal_dir, public = False)
conv.run_STAXI(df_F.copy(), out_dir = '%s/STAXI' % restricted_dir, public = True)
raw_STAXI_lsd = pd.read_csv('%s/STAXI/STAXI.csv' % restricted_dir,
sep = ",", parse_dates =[1,5],
dtype={'ids':str})
raw_STAXI_lemon = pd.read_csv('%s/STAXI/STAXI_44.csv' % lemon_dir,
sep = ",",
dtype={'ids':str})
cols = ['STAXI_1', 'STAXI_2', 'STAXI_3', 'STAXI_4', 'STAXI_5',
'STAXI_6', 'STAXI_7', 'STAXI_8', 'STAXI_9', 'STAXI_10', 'STAXI_11',
'STAXI_12', 'STAXI_13', 'STAXI_14', 'STAXI_15', 'STAXI_16',
'STAXI_17', 'STAXI_18', 'STAXI_19', 'STAXI_20', 'STAXI_21',
'STAXI_22', 'STAXI_23', 'STAXI_24', 'STAXI_25', 'STAXI_26',
'STAXI_27', 'STAXI_28', 'STAXI_29', 'STAXI_30', 'STAXI_31',
'STAXI_32', 'STAXI_33', 'STAXI_34', 'STAXI_35', 'STAXI_36',
'STAXI_37', 'STAXI_38', 'STAXI_39', 'STAXI_40', 'STAXI_41',
'STAXI_42', 'STAXI_43', 'STAXI_44']
idx = raw_STAXI_lemon[cols].dropna(how='all').index
raw_STAXI_lemon = raw_STAXI_lemon.loc[idx]  # .ix was removed from pandas
raw_STAXI = pd.concat([raw_STAXI_lsd, raw_STAXI_lemon])
raw_STAXI.reset_index(drop=True, inplace=True)
sums.run_STAXI(raw_STAXI.copy(), out_dir = '%s/STAXI' % open_dir)
###Output
_____no_output_____
###Markdown
BIS BAS
###Code
conv.run_BISBAS(df_F.copy(), out_dir = '%s/BISBAS' % internal_dir, public = False)
conv.run_BISBAS(df_F.copy(), out_dir = '%s/BISBAS' % restricted_dir, public = True)
raw_BISBAS_lsd = pd.read_csv('%s/BISBAS/BISBAS.csv' % restricted_dir,
sep = ",", parse_dates =[1,5],
dtype={'ids':str})
raw_BISBAS_lemon = pd.read_csv('%s/BISBAS/BISBAS_24.csv' % lemon_dir,
sep = ",",
dtype={'ids':str})
cols = ['BISBAS_1', 'BISBAS_2', 'BISBAS_3', 'BISBAS_4', 'BISBAS_5',
'BISBAS_6', 'BISBAS_7', 'BISBAS_8', 'BISBAS_9', 'BISBAS_10',
'BISBAS_11', 'BISBAS_12', 'BISBAS_13', 'BISBAS_14', 'BISBAS_15',
'BISBAS_16', 'BISBAS_17', 'BISBAS_18', 'BISBAS_19', 'BISBAS_20',
'BISBAS_21', 'BISBAS_22', 'BISBAS_23', 'BISBAS_24']
idx = raw_BISBAS_lemon[cols].dropna(how='all').index
raw_BISBAS_lemon = raw_BISBAS_lemon.loc[idx]  # .ix was removed from pandas
raw_BISBAS = pd.concat([raw_BISBAS_lsd, raw_BISBAS_lemon])
raw_BISBAS.reset_index(drop=True, inplace=True)
sums.run_BISBAS(raw_BISBAS.copy(), out_dir = '%s/BISBAS' % open_dir)
###Output
_____no_output_____
###Markdown
Survey G
###Code
# raw data
f_G = '%s/Questionnaires/surveyG_151013.csv' % data_dir
# AMAS was part of F and G
df_G = pd.read_csv(f_G, sep = ",", parse_dates =[1,5])
df_G = pd.concat([df_F, df_G])
###Output
_____no_output_____
###Markdown
Abbreviated Math Anxiety Scale
###Code
conv.run_AMAS(df_G.copy(), out_dir = '%s/AMAS' % internal_dir, public = False)
conv.run_AMAS(df_G.copy(), out_dir = '%s/AMAS' % restricted_dir, public = True)
raw_G = pd.read_csv('%s/AMAS/AMAS.csv' % restricted_dir,
sep = ",", parse_dates =[1,5],
dtype={'ids':str})
sums.run_AMAS(raw_G.copy(), out_dir = '%s/AMAS' % open_dir)
###Output
_____no_output_____
###Markdown
Survey Creativity
###Code
# raw data
f_Cr = '%s/Questionnaires/survey_creativity_metacog.csv' % data_dir
f_syn = '%s/Questionnaires/synesthesia_color_picker.csv' % data_dir
df_Cr = pd.read_csv(f_Cr, sep = ",", parse_dates =[1,5],
encoding="utf-8-sig").rename(columns = {'IDcode' : 'ID'})
df_syn = pd.read_csv(f_syn, sep = ",", parse_dates =[1,5]).rename(columns = {'DB_ID' : 'ID'})
###Output
_____no_output_____
###Markdown
Creative achievement questionnaire
###Code
conv.run_CAQ(df_Cr.copy(), out_dir = '%s/CAQ' % internal_dir, public = False)
conv.run_CAQ(df_Cr.copy(), out_dir = '%s/CAQ' % restricted_dir, public = True)
raw_Cr = pd.read_csv('%s/CAQ/CAQ.csv' % restricted_dir,
sep = ",", parse_dates =[1,5],
dtype={'ids':str})
sums.run_CAQ(raw_Cr.copy(), out_dir = '%s/CAQ' % open_dir)
###Output
_____no_output_____
###Markdown
Metacognition questionnaire
###Code
conv.run_MCQ30(df_Cr.copy(), out_dir = '%s/MCQ-30' % internal_dir, public = False)
conv.run_MCQ30(df_Cr.copy(), out_dir = '%s/MCQ-30' % restricted_dir, public = True)
raw_Cr = pd.read_csv('%s/MCQ-30/MCQ30.csv' % restricted_dir,
sep = ",", parse_dates =[1,5],
dtype={'ids':str})
sums.run_MCQ30(raw_Cr.copy(), out_dir = '%s/MCQ-30' % open_dir)
###Output
_____no_output_____
###Markdown
Body Consciousness Questionnaire
###Code
conv.run_BCQ(df_Cr.copy(), out_dir = '%s/BCQ' % internal_dir, public = False)
conv.run_BCQ(df_Cr.copy(), out_dir = '%s/BCQ' % restricted_dir, public = True)
raw_Cr = pd.read_csv('%s/BCQ/BCQ.csv' % restricted_dir,
sep = ",", parse_dates =[1,5],
dtype={'ids':str})
sums.run_BCQ(raw_Cr.copy(), out_dir = '%s/BCQ' % open_dir)
###Output
_____no_output_____
###Markdown
Five Facet Mindfulness Questionnaire
###Code
conv.run_FFMQ(df_Cr.copy(), out_dir = '%s/FFMQ' % internal_dir, public = False)
conv.run_FFMQ(df_Cr.copy(), out_dir = '%s/FFMQ' % restricted_dir, public = True)
raw_Cr = pd.read_csv('%s/FFMQ/FFMQ.csv' % restricted_dir,
sep = ",", parse_dates =[1,5],
dtype={'ids':str})
sums.run_FFMQ(raw_Cr.copy(), out_dir = '%s/FFMQ' % open_dir)
###Output
_____no_output_____
###Markdown
Synesthesia Color picker test
###Code
conv.run_syn(df_syn.copy(), out_dir = '%s/SYN' % internal_dir, public = False)
conv.run_syn(df_syn.copy(), out_dir = '%s/SYN' % open_dir, public = True)
###Output
_____no_output_____ |
deeplearning.ai/COURSE4 CNN/Week 01/Convolution model - Step by Step/Convolution+model+-+Step+by+Step+-+v2.ipynb | ###Markdown
Convolutional Neural Networks: Step by StepWelcome to Course 4's first assignment! In this assignment, you will implement convolutional (CONV) and pooling (POOL) layers in numpy, including both forward propagation and (optionally) backward propagation. **Notation**:- Superscript $[l]$ denotes an object of the $l^{th}$ layer. - Example: $a^{[4]}$ is the $4^{th}$ layer activation. $W^{[5]}$ and $b^{[5]}$ are the $5^{th}$ layer parameters.- Superscript $(i)$ denotes an object from the $i^{th}$ example. - Example: $x^{(i)}$ is the $i^{th}$ training example input. - Lowerscript $i$ denotes the $i^{th}$ entry of a vector. - Example: $a^{[l]}_i$ denotes the $i^{th}$ entry of the activations in layer $l$, assuming this is a fully connected (FC) layer. - $n_H$, $n_W$ and $n_C$ denote respectively the height, width and number of channels of a given layer. If you want to reference a specific layer $l$, you can also write $n_H^{[l]}$, $n_W^{[l]}$, $n_C^{[l]}$. - $n_{H_{prev}}$, $n_{W_{prev}}$ and $n_{C_{prev}}$ denote respectively the height, width and number of channels of the previous layer. If referencing a specific layer $l$, this could also be denoted $n_H^{[l-1]}$, $n_W^{[l-1]}$, $n_C^{[l-1]}$. We assume that you are already familiar with `numpy` and/or have completed the previous courses of the specialization. Let's get started! 1 - PackagesLet's first import all the packages that you will need during this assignment. - [numpy](www.numpy.org) is the fundamental package for scientific computing with Python.- [matplotlib](http://matplotlib.org) is a library to plot graphs in Python.- np.random.seed(1) is used to keep all the random function calls consistent. It will help us grade your work.
###Code
import numpy as np
import h5py
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (5.0, 4.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
%load_ext autoreload
%autoreload 2
np.random.seed(1)
###Output
_____no_output_____
###Markdown
2 - Outline of the AssignmentYou will be implementing the building blocks of a convolutional neural network! Each function you will implement will have detailed instructions that will walk you through the steps needed:- Convolution functions, including: - Zero Padding - Convolve window - Convolution forward - Convolution backward (optional)- Pooling functions, including: - Pooling forward - Create mask - Distribute value - Pooling backward (optional) This notebook will ask you to implement these functions from scratch in `numpy`. In the next notebook, you will use the TensorFlow equivalents of these functions to build the following model:**Note** that for every forward function, there is its corresponding backward equivalent. Hence, at every step of your forward module you will store some parameters in a cache. These parameters are used to compute gradients during backpropagation. 3 - Convolutional Neural NetworksAlthough programming frameworks make convolutions easy to use, they remain one of the hardest concepts to understand in Deep Learning. A convolution layer transforms an input volume into an output volume of different size, as shown below. In this part, you will build every step of the convolution layer. You will first implement two helper functions: one for zero padding and the other for computing the convolution function itself. 3.1 - Zero-PaddingZero-padding adds zeros around the border of an image: **Figure 1** : **Zero-Padding** Image (3 channels, RGB) with a padding of 2. The main benefits of padding are the following:- It allows you to use a CONV layer without necessarily shrinking the height and width of the volumes. This is important for building deeper networks, since otherwise the height/width would shrink as you go to deeper layers. An important special case is the "same" convolution, in which the height/width is exactly preserved after one layer. - It helps us keep more of the information at the border of an image. 
Without padding, very few values at the next layer would be affected by pixels at the edges of an image.**Exercise**: Implement the following function, which pads all the images of a batch of examples X with zeros. [Use np.pad](https://docs.scipy.org/doc/numpy/reference/generated/numpy.pad.html). Note if you want to pad the array "a" of shape $(5,5,5,5,5)$ with `pad = 1` for the 2nd dimension, `pad = 3` for the 4th dimension and `pad = 0` for the rest, you would do:```pythona = np.pad(a, ((0,0), (1,1), (0,0), (3,3), (0,0)), 'constant', constant_values = (..,..))```
###Code
# GRADED FUNCTION: zero_pad
def zero_pad(X, pad):
"""
Pad with zeros all images of the dataset X. The padding is applied to the height and width of an image,
as illustrated in Figure 1.
Argument:
X -- python numpy array of shape (m, n_H, n_W, n_C) representing a batch of m images
pad -- integer, amount of padding around each image on vertical and horizontal dimensions
Returns:
X_pad -- padded image of shape (m, n_H + 2*pad, n_W + 2*pad, n_C)
"""
### START CODE HERE ### (≈ 1 line)
X_pad = np.pad(X, ((0,0), (pad, pad), (pad, pad), (0,0)), 'constant', constant_values = (0, 0))
### END CODE HERE ###
return X_pad
np.random.seed(1)
x = np.random.randn(4, 3, 3, 2)
x_pad = zero_pad(x, 2)
print ("x.shape =", x.shape)
print ("x_pad.shape =", x_pad.shape)
print ("x[1,1] =", x[1,1])
print ("x_pad[1,1] =", x_pad[1,1])
fig, axarr = plt.subplots(1, 2)
axarr[0].set_title('x')
axarr[0].imshow(x[0,:,:,0])
axarr[1].set_title('x_pad')
axarr[1].imshow(x_pad[0,:,:,0])
###Output
x.shape = (4, 3, 3, 2)
x_pad.shape = (4, 7, 7, 2)
x[1,1] = [[ 0.90085595 -0.68372786]
[-0.12289023 -0.93576943]
[-0.26788808 0.53035547]]
x_pad[1,1] = [[ 0. 0.]
[ 0. 0.]
[ 0. 0.]
[ 0. 0.]
[ 0. 0.]
[ 0. 0.]
[ 0. 0.]]
###Markdown
**Expected Output**: **x.shape**: (4, 3, 3, 2) **x_pad.shape**: (4, 7, 7, 2) **x[1,1]**: [[ 0.90085595 -0.68372786] [-0.12289023 -0.93576943] [-0.26788808 0.53035547]] **x_pad[1,1]**: [[ 0. 0.] [ 0. 0.] [ 0. 0.] [ 0. 0.] [ 0. 0.] [ 0. 0.] [ 0. 0.]] 3.2 - Single step of convolution In this part, implement a single step of convolution, in which you apply the filter to a single position of the input. This will be used to build a convolutional unit, which: - Takes an input volume - Applies a filter at every position of the input- Outputs another volume (usually of different size) **Figure 2** : **Convolution operation** with a filter of 2x2 and a stride of 1 (stride = amount you move the window each time you slide) In a computer vision application, each value in the matrix on the left corresponds to a single pixel value, and we convolve a 3x3 filter with the image by multiplying its values element-wise with the original matrix, then summing them up and adding a bias. In this first step of the exercise, you will implement a single step of convolution, corresponding to applying a filter to just one of the positions to get a single real-valued output. Later in this notebook, you'll apply this function to multiple positions of the input to implement the full convolutional operation. **Exercise**: Implement conv_single_step(). [Hint](https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.sum.html).
###Code
# GRADED FUNCTION: conv_single_step
def conv_single_step(a_slice_prev, W, b):
"""
Apply one filter defined by parameters W on a single slice (a_slice_prev) of the output activation
of the previous layer.
Arguments:
a_slice_prev -- slice of input data of shape (f, f, n_C_prev)
W -- Weight parameters contained in a window - matrix of shape (f, f, n_C_prev)
b -- Bias parameters contained in a window - matrix of shape (1, 1, 1)
Returns:
Z -- a scalar value, result of convolving the sliding window (W, b) on a slice x of the input data
"""
### START CODE HERE ### (≈ 2 lines of code)
# Element-wise product between a_slice and W. Do not add the bias yet.
s = a_slice_prev * W
# Sum over all entries of the volume s.
Z = np.sum(s)
    # Add bias b to Z. Cast b to a float() so that Z results in a scalar value.
    Z = Z + float(b)
    ### END CODE HERE ###
    return Z
np.random.seed(1)
a_slice_prev = np.random.randn(4, 4, 3)
W = np.random.randn(4, 4, 3)
b = np.random.randn(1, 1, 1)
Z = conv_single_step(a_slice_prev, W, b)
print("Z =", Z)
###Output
Z = -6.99908945068
###Markdown
**Expected Output**: **Z** -6.99908945068 3.3 - Convolutional Neural Networks - Forward pass In the forward pass, you will take many filters and convolve them on the input. Each 'convolution' gives you a 2D matrix output. You will then stack these outputs to get a 3D volume: **Exercise**: Implement the function below to convolve the filters W on an input activation A_prev. This function takes as input A_prev, the activations output by the previous layer (for a batch of m inputs), F filters/weights denoted by W, and a bias vector denoted by b, where each filter has its own (single) bias. Finally, you also have access to the hyperparameters dictionary, which contains the stride and the padding. **Hint**: 1. To select a 2x2 slice at the upper left corner of a matrix "a_prev" (shape (5,5,3)), you would do:```pythona_slice_prev = a_prev[0:2,0:2,:]```This will be useful when you define `a_slice_prev` below, using the `start/end` indexes you will define.2. To define a_slice you will need to first define its corners `vert_start`, `vert_end`, `horiz_start` and `horiz_end`. This figure may be helpful for you to find how each of the corners can be defined using h, w, f and s in the code below. **Figure 3** : **Definition of a slice using vertical and horizontal start/end (with a 2x2 filter)** This figure shows only a single channel. **Reminder**:The formulas relating the output shape of the convolution to the input shape are:$$ n_H = \lfloor \frac{n_{H_{prev}} - f + 2 \times pad}{stride} \rfloor +1 $$$$ n_W = \lfloor \frac{n_{W_{prev}} - f + 2 \times pad}{stride} \rfloor +1 $$$$ n_C = \text{number of filters used in the convolution}$$For this exercise, we won't worry about vectorization, and will just implement everything with for-loops.
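As a quick sanity check of the shape formulas above, here is a small helper (hypothetical, not part of the assignment) that evaluates them directly:

```python
import math

def conv_output_shape(n_H_prev, n_W_prev, f, pad, stride, n_filters):
    # floor((n_prev - f + 2*pad) / stride) + 1, per the reminder above
    n_H = math.floor((n_H_prev - f + 2 * pad) / stride) + 1
    n_W = math.floor((n_W_prev - f + 2 * pad) / stride) + 1
    return (n_H, n_W, n_filters)

# the test cell below convolves (4, 4, 3) inputs with 2x2 filters, pad=2, stride=2, 8 filters
print(conv_output_shape(4, 4, f=2, pad=2, stride=2, n_filters=8))  # (4, 4, 8)
```

This agrees with the (m, 4, 4, 8) shape of `Z` produced by the test cell below.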
###Code
# GRADED FUNCTION: conv_forward
def conv_forward(A_prev, W, b, hparameters):
"""
Implements the forward propagation for a convolution function
Arguments:
A_prev -- output activations of the previous layer, numpy array of shape (m, n_H_prev, n_W_prev, n_C_prev)
W -- Weights, numpy array of shape (f, f, n_C_prev, n_C)
b -- Biases, numpy array of shape (1, 1, 1, n_C)
hparameters -- python dictionary containing "stride" and "pad"
Returns:
Z -- conv output, numpy array of shape (m, n_H, n_W, n_C)
cache -- cache of values needed for the conv_backward() function
"""
### START CODE HERE ###
# Retrieve dimensions from A_prev's shape (≈1 line)
    (m, n_H_prev, n_W_prev, n_C_prev) = A_prev.shape
    # Retrieve dimensions from W's shape (≈1 line)
    (f, f, n_C_prev, n_C) = W.shape
# Retrieve information from "hparameters" (≈2 lines)
stride = hparameters['stride']
pad = hparameters['pad']
# Compute the dimensions of the CONV output volume using the formula given above. Hint: use int() to floor. (≈2 lines)
    n_H = int((n_H_prev - f + 2 * pad) / stride) + 1
    n_W = int((n_W_prev - f + 2 * pad) / stride) + 1
# Initialize the output volume Z with zeros. (≈1 line)
Z = np.zeros((m, n_H, n_W, n_C))
# Create A_prev_pad by padding A_prev
A_prev_pad = np.pad(A_prev, ((0,0), (pad,pad), (pad,pad), (0,0)), 'constant', constant_values = (0, 0))
for i in range(m): # loop over the batch of training examples
a_prev_pad = A_prev_pad[i] # Select ith training example's padded activation
for h in range(n_H): # loop over vertical axis of the output volume
for w in range(n_W): # loop over horizontal axis of the output volume
for c in range(n_C): # loop over channels (= #filters) of the output volume
# Find the corners of the current "slice" (≈4 lines)
vert_start = h * stride
vert_end = vert_start + f
horiz_start = w * stride
horiz_end = horiz_start + f
# Use the corners to define the (3D) slice of a_prev_pad (See Hint above the cell). (≈1 line)
a_slice_prev = a_prev_pad[vert_start:vert_end, horiz_start:horiz_end, :]
# Convolve the (3D) slice with the correct filter W and bias b, to get back one output neuron. (≈1 line)
                    Z[i, h, w, c] = np.sum(a_slice_prev * W[:, :, :, c]) + float(b[:, :, :, c])
### END CODE HERE ###
# Making sure your output shape is correct
assert(Z.shape == (m, n_H, n_W, n_C))
# Save information in "cache" for the backprop
cache = (A_prev, W, b, hparameters)
return Z, cache
np.random.seed(1)
A_prev = np.random.randn(10,4,4,3)
W = np.random.randn(2,2,3,8)
b = np.random.randn(1,1,1,8)
hparameters = {"pad" : 2,
"stride": 2}
Z, cache_conv = conv_forward(A_prev, W, b, hparameters)
print("Z's mean =", np.mean(Z))
print("Z[3,2,1] =", Z[3,2,1])
print("cache_conv[0][1][2][3] =", cache_conv[0][1][2][3])
###Output
Z's mean = 0.0489952035289
Z[3,2,1] = [-0.61490741 -6.7439236 -2.55153897 1.75698377 3.56208902 0.53036437
5.18531798 8.75898442]
cache_conv[0][1][2][3] = [-0.20075807 0.18656139 0.41005165]
###Markdown
**Expected Output**: **Z's mean** 0.0489952035289 **Z[3,2,1]** [-0.61490741 -6.7439236 -2.55153897 1.75698377 3.56208902 0.53036437 5.18531798 8.75898442] **cache_conv[0][1][2][3]** [-0.20075807 0.18656139 0.41005165] Finally, a CONV layer should also contain an activation, in which case we would add the following line of code:```python Convolve the window to get back one output neuronZ[i, h, w, c] = ... Apply activationA[i, h, w, c] = activation(Z[i, h, w, c])```You don't need to do it here. 4 - Pooling layer The pooling (POOL) layer reduces the height and width of the input. It helps reduce computation, as well as helps make feature detectors more invariant to their position in the input. The two types of pooling layers are: - Max-pooling layer: slides an ($f, f$) window over the input and stores the max value of the window in the output.- Average-pooling layer: slides an ($f, f$) window over the input and stores the average value of the window in the output.These pooling layers have no parameters for backpropagation to train. However, they have hyperparameters such as the window size $f$. This specifies the height and width of the fxf window you would compute a max or average over. 4.1 - Forward Pooling Now, you are going to implement MAX-POOL and AVG-POOL, in the same function. **Exercise**: Implement the forward pass of the pooling layer. Follow the hints in the comments below.**Reminder**:As there's no padding, the formulas binding the output shape of the pooling to the input shape are:$$ n_H = \lfloor \frac{n_{H_{prev}} - f}{stride} \rfloor +1 $$$$ n_W = \lfloor \frac{n_{W_{prev}} - f}{stride} \rfloor +1 $$$$ n_C = n_{C_{prev}}$$
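As with the conv layer, the pooling shape formulas can be sanity-checked with a small helper (hypothetical, for illustration only):

```python
def pool_output_shape(n_H_prev, n_W_prev, n_C_prev, f, stride):
    # no padding in pooling: floor((n_prev - f) / stride) + 1; channels pass through
    n_H = (n_H_prev - f) // stride + 1
    n_W = (n_W_prev - f) // stride + 1
    return (n_H, n_W, n_C_prev)

# the test cell below pools (4, 4, 3) inputs with f=3, stride=2
print(pool_output_shape(4, 4, 3, f=3, stride=2))  # (1, 1, 3)
```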
###Code
# GRADED FUNCTION: pool_forward
def pool_forward(A_prev, hparameters, mode = "max"):
"""
Implements the forward pass of the pooling layer
Arguments:
A_prev -- Input data, numpy array of shape (m, n_H_prev, n_W_prev, n_C_prev)
hparameters -- python dictionary containing "f" and "stride"
mode -- the pooling mode you would like to use, defined as a string ("max" or "average")
Returns:
A -- output of the pool layer, a numpy array of shape (m, n_H, n_W, n_C)
cache -- cache used in the backward pass of the pooling layer, contains the input and hparameters
"""
# Retrieve dimensions from the input shape
(m, n_H_prev, n_W_prev, n_C_prev) = A_prev.shape
# Retrieve hyperparameters from "hparameters"
f = hparameters["f"]
stride = hparameters["stride"]
# Define the dimensions of the output
n_H = int(1 + (n_H_prev - f) / stride)
n_W = int(1 + (n_W_prev - f) / stride)
n_C = n_C_prev
# Initialize output matrix A
A = np.zeros((m, n_H, n_W, n_C))
### START CODE HERE ###
for i in range(m): # loop over the training examples
for h in range(n_H): # loop on the vertical axis of the output volume
for w in range(n_W): # loop on the horizontal axis of the output volume
for c in range (n_C): # loop over the channels of the output volume
# Find the corners of the current "slice" (≈4 lines)
vert_start = h * stride
vert_end = vert_start + f
horiz_start = w * stride
horiz_end = horiz_start + f
# Use the corners to define the current slice on the ith training example of A_prev, channel c. (≈1 line)
a_prev_slice = A_prev[i, vert_start:vert_end, horiz_start:horiz_end, c]
# Compute the pooling operation on the slice. Use an if statment to differentiate the modes. Use np.max/np.mean.
if mode == "max":
A[i, h, w, c] = np.max(a_prev_slice)
elif mode == "average":
A[i, h, w, c] = np.average(a_prev_slice)
### END CODE HERE ###
# Store the input and hparameters in "cache" for pool_backward()
cache = (A_prev, hparameters)
# Making sure your output shape is correct
assert(A.shape == (m, n_H, n_W, n_C))
return A, cache
np.random.seed(1)
A_prev = np.random.randn(2, 4, 4, 3)
hparameters = {"stride" : 2, "f": 3}
A, cache = pool_forward(A_prev, hparameters)
print("mode = max")
print("A =", A)
print()
A, cache = pool_forward(A_prev, hparameters, mode = "average")
print("mode = average")
print("A =", A)
###Output
mode = max
A = [[[[ 1.74481176 0.86540763 1.13376944]]]
[[[ 1.13162939 1.51981682 2.18557541]]]]
mode = average
A = [[[[ 0.02105773 -0.20328806 -0.40389855]]]
[[[-0.22154621 0.51716526 0.48155844]]]]
###Markdown
**Expected Output:** A = [[[[ 1.74481176 0.86540763 1.13376944]]] [[[ 1.13162939 1.51981682 2.18557541]]]] A = [[[[ 0.02105773 -0.20328806 -0.40389855]]] [[[-0.22154621 0.51716526 0.48155844]]]] Congratulations! You have now implemented the forward passes of all the layers of a convolutional network. The remainder of this notebook is optional, and will not be graded. 5 - Backpropagation in convolutional neural networks (OPTIONAL / UNGRADED) In modern deep learning frameworks, you only have to implement the forward pass, and the framework takes care of the backward pass, so most deep learning engineers don't need to bother with the details of the backward pass. The backward pass for convolutional networks is complicated. If you wish, however, you can work through this optional portion of the notebook to get a sense of what backprop in a convolutional network looks like. When in an earlier course you implemented a simple (fully connected) neural network, you used backpropagation to compute the derivatives with respect to the cost to update the parameters. Similarly, in convolutional neural networks you can calculate the derivatives with respect to the cost in order to update the parameters. The backprop equations are not trivial and we did not derive them in lecture, but we briefly present them below. 5.1 - Convolutional layer backward pass Let's start by implementing the backward pass for a CONV layer. 5.1.1 - Computing dA: This is the formula for computing $dA$ with respect to the cost for a certain filter $W_c$ and a given training example:$$ dA += \sum _{h=0} ^{n_H} \sum_{w=0} ^{n_W} W_c \times dZ_{hw} \tag{1}$$Where $W_c$ is a filter and $dZ_{hw}$ is a scalar corresponding to the gradient of the cost with respect to the output of the conv layer Z at the hth row and wth column (corresponding to the dot product taken at the ith stride left and jth stride down). Note that at each time, we multiply the same filter $W_c$ by a different dZ when updating dA.
We do so mainly because when computing the forward propagation, each filter is dotted and summed with a different a_slice. Therefore when computing the backprop for dA, we are just adding the gradients of all the a_slices. In code, inside the appropriate for-loops, this formula translates into:```pythonda_prev_pad[vert_start:vert_end, horiz_start:horiz_end, :] += W[:,:,:,c] * dZ[i, h, w, c]``` 5.1.2 - Computing dW: This is the formula for computing $dW_c$ ($dW_c$ is the derivative of one filter) with respect to the loss:$$ dW_c += \sum _{h=0} ^{n_H} \sum_{w=0} ^ {n_W} a_{slice} \times dZ_{hw} \tag{2}$$Where $a_{slice}$ corresponds to the slice which was used to generate the activation $Z_{hw}$. Hence, this ends up giving us the gradient for $W$ with respect to that slice. Since it is the same $W$, we will just add up all such gradients to get $dW$. In code, inside the appropriate for-loops, this formula translates into:```pythondW[:,:,:,c] += a_slice * dZ[i, h, w, c]``` 5.1.3 - Computing db: This is the formula for computing $db$ with respect to the cost for a certain filter $W_c$:$$ db = \sum_h \sum_w dZ_{hw} \tag{3}$$As you have previously seen in basic neural networks, db is computed by summing $dZ$. In this case, you are just summing over all the gradients of the conv output (Z) with respect to the cost. In code, inside the appropriate for-loops, this formula translates into:```pythondb[:,:,:,c] += dZ[i, h, w, c]```**Exercise**: Implement the `conv_backward` function below. You should sum over all the training examples, filters, heights, and widths. You should then compute the derivatives using formulas 1, 2 and 3 above.
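Restricted to a single window and a single filter, equations (1)-(3) can be verified against a finite difference. The sketch below uses illustrative random values and treats `dZ` as the upstream gradient $dL/dZ$ for that one output:

```python
import numpy as np

np.random.seed(0)
a_slice = np.random.randn(2, 2, 3)   # the input window used in the forward pass
W_c = np.random.randn(2, 2, 3)       # one filter
b = 0.5
dZ = 1.7                             # upstream gradient for this single output

# analytic gradients, per equations (1)-(3) restricted to one window
dA_slice = W_c * dZ
dW_c = a_slice * dZ
db = dZ

# finite-difference check on one weight: L = Z * dZ, with Z = sum(a_slice * W) + b
eps = 1e-6
W_pert = W_c.copy()
W_pert[0, 0, 0] += eps
L0 = (np.sum(a_slice * W_c) + b) * dZ
L1 = (np.sum(a_slice * W_pert) + b) * dZ
print(np.isclose((L1 - L0) / eps, dW_c[0, 0, 0]))  # True
```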
###Code
def conv_backward(dZ, cache):
"""
Implement the backward propagation for a convolution function
Arguments:
dZ -- gradient of the cost with respect to the output of the conv layer (Z), numpy array of shape (m, n_H, n_W, n_C)
cache -- cache of values needed for the conv_backward(), output of conv_forward()
Returns:
dA_prev -- gradient of the cost with respect to the input of the conv layer (A_prev),
numpy array of shape (m, n_H_prev, n_W_prev, n_C_prev)
dW -- gradient of the cost with respect to the weights of the conv layer (W)
numpy array of shape (f, f, n_C_prev, n_C)
db -- gradient of the cost with respect to the biases of the conv layer (b)
numpy array of shape (1, 1, 1, n_C)
"""
### START CODE HERE ###
# Retrieve information from "cache"
    (A_prev, W, b, hparameters) = cache
# Retrieve dimensions from A_prev's shape
(m, n_H_prev, n_W_prev, n_C_prev) = A_prev.shape
# Retrieve dimensions from W's shape
(f, f, n_C_prev, n_C) = W.shape
# Retrieve information from "hparameters"
stride = hparameters['stride']
pad = hparameters['pad']
# Retrieve dimensions from dZ's shape
(m, n_H, n_W, n_C) = dZ.shape
# Initialize dA_prev, dW, db with the correct shapes
dA_prev = np.zeros((m, n_H_prev, n_W_prev, n_C_prev))
dW = np.zeros((f, f, n_C_prev, n_C))
db = np.zeros((1, 1, 1, n_C))
# Pad A_prev and dA_prev
A_prev_pad = np.pad(A_prev, ((0,0), (pad,pad), (pad,pad), (0,0)), 'constant', constant_values = (0, 0))
dA_prev_pad = np.pad(dA_prev, ((0,0), (pad,pad), (pad,pad), (0,0)), 'constant', constant_values = (0, 0))
for i in range(m): # loop over the training examples
# select ith training example from A_prev_pad and dA_prev_pad
a_prev_pad = A_prev_pad[i]
da_prev_pad = dA_prev_pad[i]
for h in range(n_H): # loop over vertical axis of the output volume
for w in range(n_W): # loop over horizontal axis of the output volume
for c in range(n_C): # loop over the channels of the output volume
# Find the corners of the current "slice"
vert_start = h * stride
vert_end = vert_start + f
horiz_start = w * stride
horiz_end = horiz_start + f
# Use the corners to define the slice from a_prev_pad
a_slice = a_prev_pad[vert_start:vert_end, horiz_start:horiz_end, :]
# Update gradients for the window and the filter's parameters using the code formulas given above
da_prev_pad[vert_start:vert_end, horiz_start:horiz_end, :] += W[:,:,:,c] * dZ[i, h, w, c]
dW[:,:,:,c] += a_slice * dZ[i, h, w, c]
db[:,:,:,c] += dZ[i, h, w, c]
        # Set the ith training example's dA_prev to the unpadded da_prev_pad (Hint: use X[pad:-pad, pad:-pad, :])
dA_prev[i, :, :, :] = dA_prev_pad[i, pad:-pad, pad:-pad, :]
### END CODE HERE ###
# Making sure your output shape is correct
assert(dA_prev.shape == (m, n_H_prev, n_W_prev, n_C_prev))
return dA_prev, dW, db
np.random.seed(1)
dA, dW, db = conv_backward(Z, cache_conv)
print("dA_mean =", np.mean(dA))
print("dW_mean =", np.mean(dW))
print("db_mean =", np.mean(db))
###Output
dA_mean = 1.45243777754
dW_mean = 1.72699145831
db_mean = 7.83923256462
###Markdown
**Expected Output:** **dA_mean** 1.45243777754 **dW_mean** 1.72699145831 **db_mean** 7.83923256462 5.2 Pooling layer - backward pass Next, let's implement the backward pass for the pooling layer, starting with the MAX-POOL layer. Even though a pooling layer has no parameters for backprop to update, you still need to backpropagate the gradient through the pooling layer in order to compute gradients for layers that came before the pooling layer. 5.2.1 Max pooling - backward pass Before jumping into the backpropagation of the pooling layer, you are going to build a helper function called `create_mask_from_window()` which does the following: $$ X = \begin{bmatrix}1 && 3 \\4 && 2\end{bmatrix} \quad \rightarrow \quad M =\begin{bmatrix}0 && 0 \\1 && 0\end{bmatrix}\tag{4}$$As you can see, this function creates a "mask" matrix which keeps track of where the maximum of the matrix is. True (1) indicates the position of the maximum in X, the other entries are False (0). You'll see later that the backward pass for average pooling will be similar to this but using a different mask. **Exercise**: Implement `create_mask_from_window()`. This function will be helpful for pooling backward. Hints:- `np.max()` may be helpful. It computes the maximum of an array.- If you have a matrix X and a scalar x: `A = (X == x)` will return a matrix A of the same size as X such that:```A[i,j] = True if X[i,j] = xA[i,j] = False if X[i,j] != x```- Here, you don't need to consider cases where there are several maxima in a matrix.
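A minimal sketch of equation (4), and of how the mask routes an upstream gradient during the backward pass (7.0 is an arbitrary example gradient):

```python
import numpy as np

x = np.array([[1., 3.],
              [4., 2.]])
mask = (x == np.max(x))       # True only at the position of the max
print(mask.astype(int))
# [[0 0]
#  [1 0]]

dA = 7.0                      # upstream gradient for this pooled output
print(mask * dA)              # the whole gradient flows to the max entry
# [[0. 0.]
#  [7. 0.]]
```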
###Code
def create_mask_from_window(x):
"""
Creates a mask from an input matrix x, to identify the max entry of x.
Arguments:
x -- Array of shape (f, f)
Returns:
mask -- Array of the same shape as window, contains a True at the position corresponding to the max entry of x.
"""
### START CODE HERE ### (≈1 line)
    mask = (x == np.max(x))
### END CODE HERE ###
return mask
np.random.seed(1)
x = np.random.randn(2,3)
mask = create_mask_from_window(x)
print('x = ', x)
print("mask = ", mask)
###Output
x = [[ 1.62434536 -0.61175641 -0.52817175]
[-1.07296862 0.86540763 -2.3015387 ]]
mask = [[ True False False]
[False False False]]
###Markdown
**Expected Output:** **x =**[[ 1.62434536 -0.61175641 -0.52817175] [-1.07296862 0.86540763 -2.3015387 ]] **mask =**[[ True False False] [False False False]] Why do we keep track of the position of the max? It's because this is the input value that ultimately influenced the output, and therefore the cost. Backprop is computing gradients with respect to the cost, so anything that influences the ultimate cost should have a non-zero gradient. So, backprop will "propagate" the gradient back to this particular input value that had influenced the cost. 5.2.2 - Average pooling - backward pass In max pooling, for each input window, all the "influence" on the output came from a single input value--the max. In average pooling, every element of the input window has equal influence on the output. So to implement backprop, you will now implement a helper function that reflects this.For example if we did average pooling in the forward pass using a 2x2 filter, then the mask you'll use for the backward pass will look like: $$ dZ = 1 \quad \rightarrow \quad dZ =\begin{bmatrix}1/4 && 1/4 \\1/4 && 1/4\end{bmatrix}\tag{5}$$This implies that each position in the $dZ$ matrix contributes equally to output because in the forward pass, we took an average. **Exercise**: Implement the function below to equally distribute a value dz through a matrix of dimension shape. [Hint](https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.ones.html)
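A quick check of equation (5): distributing `dz` over an $f \times f$ window gives each entry $dz / f^2$, and the total gradient mass is preserved:

```python
import numpy as np

dz = 2.0
f = 2
a = (dz / (f * f)) * np.ones((f, f))
print(a)        # every entry is dz / 4 = 0.5
print(a.sum())  # 2.0 -- sums back to dz
```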
###Code
def distribute_value(dz, shape):
"""
Distributes the input value in the matrix of dimension shape
Arguments:
dz -- input scalar
shape -- the shape (n_H, n_W) of the output matrix for which we want to distribute the value of dz
Returns:
a -- Array of size (n_H, n_W) for which we distributed the value of dz
"""
### START CODE HERE ###
# Retrieve dimensions from shape (≈1 line)
    (n_H, n_W) = shape
# Compute the value to distribute on the matrix (≈1 line)
average = dz / (n_H * n_W)
# Create a matrix where every entry is the "average" value (≈1 line)
a = average * np.ones((n_H, n_W))
### END CODE HERE ###
return a
a = distribute_value(2, (2,2))
print('distributed value =', a)
###Output
distributed value = [[ 0.5 0.5]
[ 0.5 0.5]]
###Markdown
**Expected Output**: distributed_value =[[ 0.5 0.5] [ 0.5 0.5]] 5.2.3 Putting it together: Pooling backward You now have everything you need to compute backward propagation on a pooling layer.**Exercise**: Implement the `pool_backward` function in both modes (`"max"` and `"average"`). You will once again use 4 for-loops (iterating over training examples, height, width, and channels). You should use an `if/elif` statement to see if the mode is equal to `'max'` or `'average'`. If it is equal to 'average' you should use the `distribute_value()` function you implemented above to create a matrix of the same shape as `a_slice`. Otherwise, the mode is equal to '`max`', and you will create a mask with `create_mask_from_window()` and multiply it by the corresponding value of dZ.
###Code
def pool_backward(dA, cache, mode = "max"):
"""
Implements the backward pass of the pooling layer
Arguments:
dA -- gradient of cost with respect to the output of the pooling layer, same shape as A
cache -- cache output from the forward pass of the pooling layer, contains the layer's input and hparameters
mode -- the pooling mode you would like to use, defined as a string ("max" or "average")
Returns:
dA_prev -- gradient of cost with respect to the input of the pooling layer, same shape as A_prev
"""
### START CODE HERE ###
# Retrieve information from cache (≈1 line)
    (A_prev, hparameters) = cache
# Retrieve hyperparameters from "hparameters" (≈2 lines)
stride = hparameters['stride']
f = hparameters['f']
# Retrieve dimensions from A_prev's shape and dA's shape (≈2 lines)
m, n_H_prev, n_W_prev, n_C_prev = A_prev.shape
m, n_H, n_W, n_C = dA.shape
# Initialize dA_prev with zeros (≈1 line)
dA_prev = np.zeros((m, n_H_prev, n_W_prev, n_C_prev))
for i in range(m): # loop over the training examples
# select training example from A_prev (≈1 line)
a_prev = A_prev[i]
for h in range(n_H): # loop on the vertical axis
for w in range(n_W): # loop on the horizontal axis
for c in range(n_C): # loop over the channels (depth)
# Find the corners of the current "slice" (≈4 lines)
vert_start = h * stride
vert_end = vert_start + f
horiz_start = w * stride
horiz_end = horiz_start + f
# Compute the backward propagation in both modes.
if mode == "max":
# Use the corners and "c" to define the current slice from a_prev (≈1 line)
a_prev_slice = a_prev[vert_start: vert_end, horiz_start: horiz_end, c]
# Create the mask from a_prev_slice (≈1 line)
                        mask = create_mask_from_window(a_prev_slice)
# Set dA_prev to be dA_prev + (the mask multiplied by the correct entry of dA) (≈1 line)
dA_prev[i, vert_start: vert_end, horiz_start: horiz_end, c] += dA[i, h, w, c] * mask
elif mode == "average":
# Get the value a from dA (≈1 line)
da = dA[i, h, w, c]
# Define the shape of the filter as fxf (≈1 line)
shape = (f, f)
# Distribute it to get the correct slice of dA_prev. i.e. Add the distributed value of da. (≈1 line)
dA_prev[i, vert_start: vert_end, horiz_start: horiz_end, c] += distribute_value(da, shape)
### END CODE ###
# Making sure your output shape is correct
assert(dA_prev.shape == A_prev.shape)
return dA_prev
np.random.seed(1)
A_prev = np.random.randn(5, 5, 3, 2)
hparameters = {"stride" : 1, "f": 2}
A, cache = pool_forward(A_prev, hparameters)
dA = np.random.randn(5, 4, 2, 2)
dA_prev = pool_backward(dA, cache, mode = "max")
print("mode = max")
print('mean of dA = ', np.mean(dA))
print('dA_prev[1,1] = ', dA_prev[1,1])
print()
dA_prev = pool_backward(dA, cache, mode = "average")
print("mode = average")
print('mean of dA = ', np.mean(dA))
print('dA_prev[1,1] = ', dA_prev[1,1])
###Output
mode = max
mean of dA = 0.145713902729
dA_prev[1,1] = [[ 0. 0. ]
[ 5.05844394 -1.68282702]
[ 0. 0. ]]
mode = average
mean of dA = 0.145713902729
dA_prev[1,1] = [[ 0.08485462 0.2787552 ]
[ 1.26461098 -0.25749373]
[ 1.17975636 -0.53624893]]
|
circuits/plots/exp_004_complexity_analysis.ipynb | ###Markdown
Data Paths
###Code
# AES
aes_dir = '../aes'
aes_log_file = aes_dir + '/aes.-1.ac.log'
aes_dot_log_file = aes_dir + '/aes.dot.log'
aes_vcd_log_file = aes_dir + '/aes.vcd.log'
aes_fanin_json = aes_dir + '/aes.fanin.json'
aes_reg2reg_json = aes_dir + '/aes.reg2reg.json'
# UART
uart_dir = '../uart'
uart_log_file = uart_dir + '/uart.-1.ac.log'
uart_dot_log_file = uart_dir + '/uart.dot.log'
uart_vcd_log_file = uart_dir + '/uart.vcd.log'
uart_fanin_json = uart_dir + '/uart.fanin.json'
uart_reg2reg_json = uart_dir + '/uart.reg2reg.json'
# OR1200
or1200_dir = '../or1200'
or1200_log_file = or1200_dir + '/or1200.-1.ac.log'
or1200_dot_log_file = or1200_dir + '/or1200.dot.log'
or1200_vcd_log_file = or1200_dir + '/or1200.vcd.log'
or1200_fanin_json = or1200_dir + '/or1200.fanin.json'
or1200_reg2reg_json = or1200_dir + '/or1200.reg2reg.json'
# PICORV32
picorv32_dir = '../picorv32'
picorv32_log_file = picorv32_dir + '/testbench.-1.ac.log'
picorv32_dot_log_file = picorv32_dir + '/testbench.dot.log'
picorv32_vcd_log_file = picorv32_dir + '/testbench.vcd.log'
picorv32_fanin_json = picorv32_dir + '/testbench.fanin.json'
picorv32_reg2reg_json = picorv32_dir + '/testbench.reg2reg.json'
# # CORTEX-M0
# cortex_m0_dir = '../arm_cortex_m0/output.bk'
# cortex_m0_log_file = cortex_m0_dir + '/cortexm0ds_logic.-1.ac.log'
# # cortex_m0_dot_log_file = cortex_m0_dir + '/cortexm0ds_logic.dot.log'
# # cortex_m0_vcd_log_file = cortex_m0_dir + '/cortexm0ds_logic.vcd.log'
# cortex_m0_fanin_json = cortex_m0_dir + '/cortexm0ds_logic.fanin.json'
# cortex_m0_reg2reg_json = cortex_m0_dir + '/cortexm0ds_logic.reg2reg.json'
###Output
_____no_output_____
###Markdown
Load Overall Fan-in/Reg2Reg Data
###Code
DESIGN_STR = 'Design'
TYPE_STR = 'Type'
FANIN_STR = 'Fan-in'
REG2REG_STR = 'Reg2Reg Distance'
fanin_data_dict = {
DESIGN_STR: [],
TYPE_STR: [],
FANIN_STR: []
}
reg2reg_data_dict = {
DESIGN_STR: [],
TYPE_STR: [],
REG2REG_STR: []
}
def load_stats(design, log_file):
avg_fanin = -1
max_fanin = -1
avg_reg2reg_path = -1
max_reg2reg_path = -1
    with open(log_file, "r") as f:
        for line in f:
            if "Average Fan-in" in line:
                avg_fanin = float(line.split('=')[-1].strip())
            if "Max Fan-in" in line:
                max_fanin = int(line.split('=')[-1].strip())
            if "Average Reg2Reg" in line:
                avg_reg2reg_path = float(line.split('=')[-1].strip())
            if "Max Reg2Reg" in line:
                max_reg2reg_path = int(line.split('=')[-1].strip())
fanin_data_dict[DESIGN_STR].append(design)
fanin_data_dict[DESIGN_STR].append(design)
fanin_data_dict[TYPE_STR].append('Avg')
fanin_data_dict[TYPE_STR].append('Max')
fanin_data_dict[FANIN_STR].append(avg_fanin)
fanin_data_dict[FANIN_STR].append(max_fanin)
reg2reg_data_dict[DESIGN_STR].append(design)
reg2reg_data_dict[DESIGN_STR].append(design)
reg2reg_data_dict[TYPE_STR].append('Avg')
reg2reg_data_dict[TYPE_STR].append('Max')
reg2reg_data_dict[REG2REG_STR].append(avg_reg2reg_path)
reg2reg_data_dict[REG2REG_STR].append(max_reg2reg_path)
load_stats('AES', aes_log_file)
load_stats('UART', uart_log_file)
load_stats('OR1200', or1200_log_file)
load_stats('RISC-V', picorv32_log_file)
# load_stats('ARM CORTEX-M0', cortex_m0_log_file)
fanin_df = pd.DataFrame(fanin_data_dict)
reg2reg_df = pd.DataFrame(reg2reg_data_dict)
reg2reg_df
###Output
_____no_output_____
###Markdown
Load Local Fan-in/Reg2Reg Data
###Code
local_fanin_dict = {
FANIN_STR: [],
DESIGN_STR: []
}
local_reg2reg_dict = {
REG2REG_STR: [],
DESIGN_STR: []
}
# Append each design's per-node fan-in values and reg2reg path lengths,
# tagged with the design name, to the shared dictionaries.
def load_local_stats(design, fanin_json, reg2reg_json):
    with open(fanin_json, "r") as jf:
        fanins = json.load(jf)['Fan-in']
    local_fanin_dict[FANIN_STR].extend(fanins)
    local_fanin_dict[DESIGN_STR].extend([design] * len(fanins))
    with open(reg2reg_json, "r") as jf:
        path_lengths = json.load(jf)['Reg2Reg Path Length']
    local_reg2reg_dict[REG2REG_STR].extend(path_lengths)
    local_reg2reg_dict[DESIGN_STR].extend([design] * len(path_lengths))
load_local_stats('AES', aes_fanin_json, aes_reg2reg_json)
load_local_stats('UART', uart_fanin_json, uart_reg2reg_json)
load_local_stats('OR1200', or1200_fanin_json, or1200_reg2reg_json)
load_local_stats('RISC-V', picorv32_fanin_json, picorv32_reg2reg_json)
# load_local_stats('ARM CORTEX-M0', cortex_m0_fanin_json, cortex_m0_reg2reg_json)
local_fanin_df = pd.DataFrame(local_fanin_dict)
local_reg2reg_df = pd.DataFrame(local_reg2reg_dict)
# Remove Outliers
local_fanin_df[FANIN_STR] = np.where(local_fanin_df[FANIN_STR] > 50, 50, local_fanin_df[FANIN_STR])
###Output
_____no_output_____
###Markdown
Load Run-time Data
###Code
SIM_RUNTIME_STR = 'Simulation Runtime (s)'
SSCCLASS_RUNTIME_STR = 'SSC Classification (s)'
SSCENUM_RUNTIME_STR = 'SSC Enumeration (s)'
DFGGEN_RUNTIME_STR = 'DFG Generation (s)'
TOTAL_RUNTIME_STR = 'Bomberman Runtime (s)'
SIZE_STR = 'Num. Regs'
runtime_data_dict = {
DESIGN_STR: [],
DFGGEN_RUNTIME_STR: [],
SIM_RUNTIME_STR: [],
SSCENUM_RUNTIME_STR: [],
SSCCLASS_RUNTIME_STR: [],
TOTAL_RUNTIME_STR: [],
SIZE_STR: []
}
def convert_time_str_2_seconds(t_str):
(seconds, frac_seconds) = t_str.split('.')
x = time.strptime(seconds.split('.')[0], '%H:%M:%S')
x = datetime.timedelta(hours=x.tm_hour,minutes=x.tm_min,seconds=x.tm_sec).total_seconds()
x += (float(frac_seconds) / 100.0)
return x
def load_runtimes(design, log_file, dot_log_file, vcd_log_file):
runtime_data_dict[DESIGN_STR].append(design)
# IVL Stats
with open(vcd_log_file, "r") as f:
for line in f:
if "real" in line:
(seconds, frac_seconds) = line.split()[-1].rstrip().split('.')
frac_seconds = float(frac_seconds.rstrip('s')) / 1000.0
t = time.strptime(seconds, '%Mm%S')
t = datetime.timedelta(hours=t.tm_hour,minutes=t.tm_min,seconds=t.tm_sec).total_seconds()
sim_runtime = t + frac_seconds
runtime_data_dict[SIM_RUNTIME_STR].append(sim_runtime)
f.close()
# DFG Stats
with open(dot_log_file, "r") as f:
for line in f:
if "real" in line:
(seconds, frac_seconds) = line.split()[-1].rstrip().split('.')
frac_seconds = float(frac_seconds.rstrip('s')) / 1000.0
t = time.strptime(seconds, '%Mm%S')
t = datetime.timedelta(hours=t.tm_hour,minutes=t.tm_min,seconds=t.tm_sec).total_seconds()
dfg_runtime = t + frac_seconds
runtime_data_dict[DFGGEN_RUNTIME_STR].append(dfg_runtime)
f.close()
# Python Script Stats
with open(log_file, "r") as f:
for line in f:
if "Num. Total FFs/Inputs:" in line:
num_regs = int(line.split()[-1].rstrip())
runtime_data_dict[SIZE_STR].append(num_regs)
if "Identifying Coalesced Counter Candidates..." in line:
for _ in range(5):
line = f.readline()
ct_enum = convert_time_str_2_seconds(line.split()[-1].rstrip())
for _ in range(7):
line = f.readline()
dt_enum = convert_time_str_2_seconds(line.split()[-1].rstrip())
total_enum = ct_enum + dt_enum
runtime_data_dict[SSCENUM_RUNTIME_STR].append(total_enum)
if "Finding malicious coalesced signals..." in line:
while "Execution Time:" not in line:
line = f.readline()
            ct_class = convert_time_str_2_seconds(line.split()[-1].rstrip())
            line = f.readline()  # advance past the first "Execution Time:" line
            while "Execution Time:" not in line:
line = f.readline()
dt_class = convert_time_str_2_seconds(line.split()[-1].rstrip())
total_class = ct_class + dt_class
runtime_data_dict[SSCCLASS_RUNTIME_STR].append(total_class)
# if "Analysis complete." in line:
# line = f.readline()
# line = f.readline()
# t = convert_time_str_2_seconds(line.split()[-1].rstrip())
# break
f.close()
total_bm_runtime = dfg_runtime + total_enum + total_class
runtime_data_dict[TOTAL_RUNTIME_STR].append(total_bm_runtime)
dfg_runtime_percentage = (float(dfg_runtime) / float(total_bm_runtime)) * 100.0
total_enum_percentage = (float(total_enum) / float(total_bm_runtime)) * 100.0
total_class_percentage = (float(total_class) / float(total_bm_runtime)) * 100.0
percentages = [dfg_runtime_percentage, total_enum_percentage, total_class_percentage]
return percentages
aes_runtime_percentages = load_runtimes('AES', aes_log_file, aes_dot_log_file, aes_vcd_log_file)
uart_runtime_percentages = load_runtimes('UART', uart_log_file, uart_dot_log_file, uart_vcd_log_file)
or1200_runtime_percentages = load_runtimes('OR1200', or1200_log_file, or1200_dot_log_file, or1200_vcd_log_file)
picorv32_runtime_percentages = load_runtimes('RISC-V', picorv32_log_file, picorv32_dot_log_file, picorv32_vcd_log_file)
runtime_df = pd.DataFrame(runtime_data_dict)
print(runtime_df)
###Output
Design DFG Generation (s) Simulation Runtime (s) SSC Enumeration (s) \
0 AES 5.017 3.658 0.24
1 UART 0.412 3.972 0.60
2 OR1200 14.290 27.602 7.00
3 RISC-V 0.360 5.572 9.28
SSC Classification (s) Bomberman Runtime (s) Num. Regs
0 0.20 5.457 2440
1 3.90 4.912 340
2 1.28 22.570 814
3 1.20 10.840 317
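The time-string parser used above is easy to sanity-check in isolation. Below is a minimal, self-contained sketch (re-declaring the helper together with its imports) confirming that the fractional field is treated as hundredths of a second:

```python
import datetime
import time

def convert_time_str_2_seconds(t_str):
    # Parse "H:MM:SS.ff", where the fractional part is hundredths of a second
    seconds, frac_seconds = t_str.split('.')
    x = time.strptime(seconds, '%H:%M:%S')
    x = datetime.timedelta(hours=x.tm_hour, minutes=x.tm_min,
                           seconds=x.tm_sec).total_seconds()
    x += float(frac_seconds) / 100.0
    return x

print(convert_time_str_2_seconds('0:01:30.50'))  # -> 90.5
```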
###Markdown
Plot Settings
###Code
# Set Color Scheme
sns.set()
###Output
_____no_output_____
###Markdown
Plot Runtime Breakdown
###Code
fig, ((ax1, ax2), (ax3, ax4)) = plt.subplots(2, 2, figsize=(8, 8))
plt.rcParams['font.sans-serif'] = 'Arial'
plt.rcParams['font.family'] = 'sans-serif'
plt.rcParams['text.color'] = '#000000'
plt.rcParams['axes.labelcolor'] = '#000000'
plt.rcParams['xtick.color'] = '#909090'
plt.rcParams['ytick.color'] = '#909090'
plt.rcParams['font.size'] = 12
color_palette_list = sns.color_palette()
# color_palette_list = ['#009ACD', '#ADD8E6', '#63D1F4', '#0EBFE9', '#C1F0F6', '#0099CC']
labels = ['DFG Generation', 'SSC Enumeration', 'SSC Classification']
explode = (0, 0, 0)
PERCENTAGE_DIST = 1.15
# AES
ax1.pie(aes_runtime_percentages, explode=explode,
colors=color_palette_list[0:3], autopct='%1.0f%%',
shadow=False, startangle=0, pctdistance=PERCENTAGE_DIST)
ax1.axis('equal')
ax1.set_title("AES")
# UART
ax2.pie(uart_runtime_percentages, explode=explode,
colors=color_palette_list[0:3], autopct='%1.0f%%',
shadow=False, startangle=0, pctdistance=PERCENTAGE_DIST)
ax2.axis('equal')
ax2.set_title("UART")
# OR1200
ax3.pie(or1200_runtime_percentages, explode=explode,
colors=color_palette_list[0:3], autopct='%1.0f%%',
shadow=False, startangle=0, pctdistance=PERCENTAGE_DIST)
ax3.axis('equal')
ax3.set_title("OR1200")
# RISC-V
ax4.pie(picorv32_runtime_percentages, explode=explode,
colors=color_palette_list[0:3], autopct='%1.0f%%',
shadow=False, startangle=0, pctdistance=PERCENTAGE_DIST)
ax4.axis('equal')
ax4.set_title("RISC-V")
ax4.legend(frameon=False, labels=labels, bbox_to_anchor=(0.3,1.25))
plt.savefig('bomberman_runtime_breakdown.pdf', format='pdf')
###Output
_____no_output_____
###Markdown
Plot Fan-in & Reg2Reg Path
###Code
sns.set()
fig, ((ax1, ax2), (ax3, ax4)) = plt.subplots(2, 2, figsize=(12, 8))
sns.boxenplot(x=DESIGN_STR, y=FANIN_STR, data=local_fanin_df, linewidth=2.5, ax=ax1)
ax1.set_ylim([0,50])
sns.boxenplot(x=DESIGN_STR, y=REG2REG_STR, data=local_reg2reg_df, linewidth=2.5, ax=ax2)
ax2.set_ylim([0,8])
sns.barplot(x=DESIGN_STR, y=SIM_RUNTIME_STR, data=runtime_df, ax=ax3)
ax3.set_ylim([0,30])
sns.barplot(x=DESIGN_STR, y=TOTAL_RUNTIME_STR, data=runtime_df, ax=ax4)
ax4.set_ylim([0,30])
plt.tight_layout()
plt.savefig('bomberman_complexity_analysis.pdf', format='pdf')
sns.set()
fig, ax_r2r = plt.subplots(1, 1, figsize=(6, 3))
# sns.boxenplot(x=DESIGN_STR, y=REG2REG_STR, data=local_reg2reg_df, linewidth=2.5, ax=ax_r2r)
# sns.swarmplot(x=DESIGN_STR, y=REG2REG_STR, data=local_reg2reg_df, ax=ax_r2r)
sns.boxenplot(x=REG2REG_STR, y=DESIGN_STR, data=local_reg2reg_df, linewidth=2.5, ax=ax_r2r)
# sns.swarmplot(x=REG2REG_STR, y=DESIGN_STR, data=local_reg2reg_df, ax=ax_r2r)
ax_r2r.set_xlim([0,50])
ax_r2r.set_xlabel('Pipeline Logic Depth')
ax_r2r.set_ylabel('Design')
plt.tight_layout()
ax_r2r.text(23, 0.1, 'Bomberman RT: 5.457s', style='italic', bbox={'facecolor': 'white', 'alpha': 1.0, 'pad': 2})
ax_r2r.text(23, 1.1, 'Bomberman RT: 4.912s', style='italic', bbox={'facecolor': 'white', 'alpha': 1.0, 'pad': 2})
ax_r2r.text(23, 2.1, 'Bomberman RT: 22.570s', style='italic', bbox={'facecolor': 'white', 'alpha': 1.0, 'pad': 2})
ax_r2r.text(23, 3.1, 'Bomberman RT: 10.840s', style='italic', bbox={'facecolor': 'white', 'alpha': 1.0, 'pad': 2})
ax_r2r.text(23, 4.1, 'Bomberman RT: 643.568s', style='italic', bbox={'facecolor': 'white', 'alpha': 1.0, 'pad': 2})
plt.savefig('bomberman_reg2reg_analysis_warm_rts.pdf', format='pdf')
sns.set()
fig, ax_runtime = plt.subplots(1, 1, figsize=(6, 4))
sns.barplot(x=DESIGN_STR, y=TOTAL_RUNTIME_STR, data=runtime_df, ax=ax_runtime)
ax_runtime.set_ylim([0,30])
# ax_r2r.set_ylabel('Pipeline Logic Depth\n(# stages)')
# ax_r2r.set_xlabel('Design\n(Max. Clock Frequency)')
plt.tight_layout()
plt.savefig('bomberman_runtimes.pdf', format='pdf')
print(runtime_df)
###Output
Bomberman Runtime (s) DFG Generation (s) Design Num. Regs \
0 5.457 5.017 AES 2440
1 4.912 0.412 UART 340
2 22.570 14.290 OR1200 814
3 10.840 0.360 RISC-V 317
SSC Classification (s) SSC Enumeration (s) Simulation Runtime (s)
0 0.20 0.24 3.658
1 3.90 0.60 3.972
2 1.28 7.00 27.602
3 1.20 9.28 5.572
|
Muro_FinalsNumMeth.ipynb | ###Markdown
###Code
import math
def f(x):
return(math.exp(x))
a = -1
b = 1
n = 10
h = (b-a)/n
S = (f(a)+f(b))/2
for i in range(1,n):
    S += f(a+i*h)
Integral = S*h
print('Integral = %0.4f' %Integral)
###Output
Integral = 2.3582
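For reference, the exact value of the integral of e^x over [-1, 1] is e - e^{-1} ≈ 2.3504, and a standalone sketch of the composite trapezoidal rule (same f, a, b, and n = 10 as above) lands close to it:

```python
import math

def trapezoid(f, a, b, n):
    # Composite trapezoidal rule: h * (endpoint average + interior points)
    h = (b - a) / n
    total = (f(a) + f(b)) / 2
    for i in range(1, n):
        total += f(a + i * h)
    return total * h

approx = trapezoid(math.exp, -1, 1, 10)
exact = math.e - 1 / math.e
print(approx, exact)  # approximation overshoots slightly (f is convex)
```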
|
Assignment_10_Pascual_Dulay.ipynb | ###Markdown
Laboratory 10 : Linear Combination and Vector Spaces

Objectives

At the end of this activity you will be able to:

* Be familiar with representing linear combinations in the 2-dimensional plane.
* Visualize spans using vector fields in Python.
* Perform vector field operations using scientific programming.

Discussion
###Code
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
Linear Combination

In linear algebra, a linear combination is an expression built by multiplying each vector by a scalar and then summing the results. Let's first try the vectors below:$$R = \begin{bmatrix} 6\\-1 \\\end{bmatrix} , M = \begin{bmatrix} 4\\3 \\\end{bmatrix} $$
###Code
vectR = np.array([6,-1])
vectM = np.array([4,3])
###Output
_____no_output_____
###Markdown
Span of single vectors

The span of a single vector is the set of all of its scalar multiples. However much a scalar stretches or shrinks the vector, its direction (slope) does not change, so every multiple lies on the same line through the origin. Let's take vector X as an example. $$X = c\cdot \begin{bmatrix} 6\\-1 \\\end{bmatrix} $$
###Code
c = np.arange(-5,20,0.5)
plt.scatter(c*vectR[0],c*vectR[1])
plt.xlim(-20,20)
plt.ylim(-20,20)
plt.axhline(y=0, color='red')
plt.axvline(x=0, color='red')
plt.grid()
plt.show()
c = np.arange(-40,40,1.5)
plt.scatter(c*vectM[0],c*vectM[1])
plt.xlim(-50,50)
plt.ylim(-50,50)
plt.axhline(y=0, color='green')
plt.axvline(x=0, color='green')
plt.grid()
plt.show()
###Output
_____no_output_____
###Markdown
Span of a linear combination of vectors

The span of a set of vectors is the collection of all of their linear combinations. For a single vector v, every point on the line through v is a valid linear combination of v, so the span is the endless line that travels through v. Let's take the span of the linear combination below: $$S = \begin{Bmatrix} c_1 \cdot\begin{bmatrix} -4\\1 \\\end{bmatrix}, c_2 \cdot \begin{bmatrix} 3\\7 \\\end{bmatrix}\end{Bmatrix} $$
###Code
vectA = np.array([-4,1])
vectB = np.array([3,7])
R = np.arange(-20,20,2.5)
c1, c2 = np.meshgrid(R,R)
vectR = vectA + vectB
spanRx = c1*vectA[0] + c2*vectB[0]
spanRy = c1*vectA[1] + c2*vectB[1]
##plt.scatter(R*vectA[0],R*vectA[1])
##plt.scatter(R*vectB[0],R*vectB[1])
plt.scatter(spanRx,spanRy, s=5, alpha=0.75)
plt.axhline(y=0, color='blue')
plt.axvline(x=0, color='blue')
plt.grid()
plt.show()
vectP = np.array([7,-12])
vectQ = np.array([-4,11])
R = np.arange(-50,50,5)
c1, c2 = np.meshgrid(R,R)
vectR = vectP + vectQ
spanRx = c1*vectP[0] + c2*vectQ[0]
spanRy = c1*vectP[1] + c2*vectQ[1]
##plt.scatter(R*vectA[0],R*vectA[1])
##plt.scatter(R*vectB[0],R*vectB[1])
plt.scatter(spanRx,spanRy, s=5, alpha=0.75)
plt.axhline(y=0, color='orange')
plt.axvline(x=0, color='orange')
plt.grid()
plt.show()
###Output
_____no_output_____
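One way to see why this span fills the whole plane: the two vectors are linearly independent (the matrix with them as columns has a nonzero determinant), so the coefficients for any target point can be solved for exactly. A small standalone check using the same vectA and vectB as above:

```python
import numpy as np

vectA = np.array([-4.0, 1.0])
vectB = np.array([3.0, 7.0])

# Columns of M are the spanning vectors, so M @ [c1, c2] = target
M = np.column_stack([vectA, vectB])
print(np.linalg.det(M))  # nonzero -> linearly independent

target = np.array([5.0, 5.0])
c1, c2 = np.linalg.solve(M, target)
print(c1, c2)

# The recovered coefficients reproduce the target exactly
assert np.allclose(c1 * vectA + c2 * vectB, target)
```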
###Markdown
Activity

Try different linear combinations using different scalar values. In your methodology, discuss the different functions that you have used, the linear equation and vector form of the linear combination, and the flowchart for declaring and displaying linear combinations. Please make sure that your flowchart uses only a few words and does not reproduce the entire code, as that is bad practice. In your results, display and discuss the linear combination visualization you made. You should use the cells below for displaying the equation markdowns using LaTeX and your code.
###Code
vectR = np.array([10,-5])
vectM = np.array([20,40])
###Output
_____no_output_____
###Markdown
$$R = 10x - 5y$$$$M = 20x + 40y$$ $$R = \begin{bmatrix} 10\\-5 \\\end{bmatrix} ,\ M = \begin{bmatrix} 20\\40 \\\end{bmatrix} $$$$X_{R} = c\cdot \begin{bmatrix} 10\\-5 \\\end{bmatrix} $$ $$X_{M} = c\cdot \begin{bmatrix} 20\\40 \\\end{bmatrix} $$
###Code
c = np.arange(-10,10,0.5)
plt.scatter(c*vectR[0],c*vectR[1])
plt.xlim(-20,20)
plt.ylim(-20,20)
plt.axhline(y=0, color='green')
plt.axvline(x=0, color='red')
plt.grid()
plt.show()
c = np.arange(0,20,1)
plt.scatter(c*vectM[0],c*vectM[1])
plt.xlim(-50,50)
plt.ylim(-50,50)
plt.axhline(y=0, color='green')
plt.axvline(x=0, color='red')
plt.grid()
plt.show()
###Output
_____no_output_____
###Markdown
$$D = -8x + 2y$$$$P = 6x + 14y$$ $$S = \begin{Bmatrix} c_1 \cdot\begin{bmatrix} -8\\2 \\\end{bmatrix}, c_2 \cdot \begin{bmatrix} 6\\14 \\\end{bmatrix}\end{Bmatrix} $$
###Code
vectD = np.array([-8,2])
vectP = np.array([6,14])
R = np.arange(-20,20,1.5)
c1, c2 = np.meshgrid(R,R)
vectR = vectD + vectP
spanRx = c1*vectD[0] + c2*vectP[0]
spanRy = c1*vectD[1] + c2*vectP[1]
plt.scatter(spanRx,spanRy, s=5, alpha=0.75)
plt.axhline(y=0, color='pink')
plt.axvline(x=0, color='pink')
plt.grid()
###Output
_____no_output_____
###Markdown
$$P = 2x + 12y$$$$Q = 4x + 16y$$ $$S = \begin{Bmatrix} c_1 \cdot\begin{bmatrix} 2\\12 \\\end{bmatrix}, c_2 \cdot \begin{bmatrix} 4\\16 \\\end{bmatrix}\end{Bmatrix} $$
###Code
vectP = np.array([2,12])
vectQ = np.array([4,16])
R = np.arange(-50,50,2.5)
c1, c2 = np.meshgrid(R,R)
vectR = vectP + vectQ
spanRx = c1*vectP[0] + c2*vectQ[0]
spanRy = c1*vectP[1] + c2*vectQ[1]
plt.scatter(spanRx,spanRy, s=5, alpha=0.75)
plt.axhline(y=0, color='brown')
plt.axvline(x=0, color='orange')
plt.grid()
plt.show()
###Output
_____no_output_____ |
features_final.ipynb | ###Markdown
Features considered in this notebook:

Total number of features considered = 11

- Mean
- Standard Deviation
- Kurtosis
- Skewness
- Shannon Entropy
- Activity
- Mobility
- Complexity
- Permutation Entropy
- Sample Entropy
- Approximate Entropy
###Code
# Imports (assumed: `entropy` is the PyPI entropy/antropy package providing
# perm_entropy, sample_entropy and app_entropy; `df` is assumed to be loaded
# in an earlier cell as raw EEG windows with an annotation column)
import numpy as np
import pandas as pd
import scipy.stats
import entropy
from scipy.signal import welch

# Hyperparams
window_length = 32
# %%time
# class NeonatalSeizureFeatures:
# def __init__(self, row):
# self.row = row
# def skewness(self):
# row = np.array(self.row)
# row = row[:-1]
# row = np.reshape(row, (21, window_length))
# return (pd.Series(scipy.stats.skew(x, axis = 0, bias = False) for x in row))
# df_new = df.apply(lambda row: NeonatalSeizureFeatures(row).skewness(), axis = 1)
# df_new
###Output
Wall time: 5min 9s
###Markdown
Helper Methods:
###Code
def hMob(x):
row = np.array(x)
return (np.sqrt(np.var(np.gradient(x)) / np.var(x)))
###Output
_____no_output_____
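Hjorth mobility is the ratio of the standard deviation of a signal's derivative to that of the signal itself, and complexity is the mobility of the derivative divided by the mobility of the signal. A pure sinusoid is the baseline case: its complexity is approximately 1. A standalone sketch mirroring the `hMob` helper above:

```python
import numpy as np

def h_mob(x):
    # Hjorth mobility: sqrt(var(x') / var(x))
    return np.sqrt(np.var(np.gradient(x)) / np.var(x))

def h_comp(x):
    # Hjorth complexity: mobility of the derivative over mobility of the signal
    return h_mob(np.gradient(x)) / h_mob(x)

t = np.linspace(0.0, 1.0, 256)
sig = np.sin(2 * np.pi * 5 * t)
print(h_mob(sig), h_comp(sig))  # complexity stays near 1 for a pure sine
```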
###Markdown
Feature Methods:
###Code
# Feature Methods
def feature_mean(row):
row = np.array(row)
row = row[:-1]
row = np.reshape(row, (21, window_length))
return (pd.Series(np.mean(x, axis = 0) for x in row))
def feature_stddev(row):
row = np.array(row)
row = row[:-1]
row = np.reshape(row, (21, window_length))
return (pd.Series(np.std(x, axis = 0) for x in row))
def kurtosis(row):
row = np.array(row)
annotation = row[-1]
row = row[:-1]
row = np.reshape(row, (21, window_length))
return (pd.Series(scipy.stats.kurtosis(x, axis = 0, bias = False) for x in row))
def skewness(row):
row = np.array(row)
row = row[:-1]
row = np.reshape(row, (21, window_length))
return (pd.Series(scipy.stats.skew(x, axis = 0, bias = False) for x in row))
def spectral_entropy(row, sf = 32, nperseg = window_length, axis = 1):
row = np.array(row)
annotation = row[-1]
row = row[:-1]
row = np.reshape(row, (21, window_length))
_, psd = welch(row, sf, nperseg=nperseg, axis=axis)
psd_norm = psd / psd.sum(axis=axis, keepdims=True)
se = - np.where(psd_norm == 0, 0, psd_norm * np.log(psd_norm) / np.log(2)).sum(axis=axis)
return pd.Series(se)
def hjorthActivity(row):
row = np.array(row)
row = row[:-1]
row = np.reshape(row, (21, window_length))
return (pd.Series(np.var(x, axis = 0) for x in row))
def hjorthMobility(row):
row = np.array(row)
row = row[:-1]
row = np.reshape(row, (21, window_length))
return (pd.Series(np.sqrt(np.var(np.gradient(x)) / np.var(x)) for x in row))
def hjorthComplexity(row):
row = np.array(row)
row = row[:-1]
row = np.reshape(row, (21, window_length))
return (pd.Series((hMob(np.gradient(x)) / hMob(x)) for x in row))
def permutation_entropy(row):
row = np.array(row)
row = row[:-1]
row = np.reshape(row, (21, window_length))
return (pd.Series(entropy.perm_entropy(x) for x in row))
def sample_entropy(row):
row = np.array(row)
row = row[:-1]
row = np.reshape(row, (21, window_length))
return (pd.Series(entropy.sample_entropy(x) for x in row))
def approximate_entropy(row):
row = np.array(row)
row = row[:-1]
row = np.reshape(row, (21, window_length))
return (pd.Series(entropy.app_entropy(x) for x in row))
list_of_feature_methods = [feature_mean, feature_stddev, kurtosis, skewness, spectral_entropy, hjorthActivity, hjorthMobility,
hjorthComplexity, permutation_entropy, sample_entropy, approximate_entropy]
%%time
df_list = list()
for i, j in zip(list_of_feature_methods, range(len(list_of_feature_methods))):
print("Epoch %d ..." % (j+1))
df_temp = df.apply(lambda row: i(row), axis = 1)
df_list.append(df_temp)
new_df = pd.concat(df_list, axis = 1)
new_df
feature_df = pd.concat([new_df, df[df.columns[-1]]], axis = 1)
feature_df
feature_df.to_csv('Full_feature_data1sec.csv', index = False)
feature_df1 = feature_df.replace([np.inf, -np.inf], np.nan)
feature_df1.dropna(inplace = True)
feature_df1.reset_index(drop = True, inplace = True)
feature_df1.columns = [i for i in range(232)]
feature_df1.columns
np.isinf(feature_df1).values.any()
###Output
_____no_output_____
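The `entropy` package calls above can be opaque; permutation entropy in particular is simple enough to sketch from scratch. The version below (assumed defaults: order 3, delay 1, normalized) counts ordinal patterns of consecutive samples: a monotone ramp produces a single pattern (entropy 0), while white noise uses all patterns nearly uniformly (entropy near 1):

```python
import numpy as np
from math import factorial

def perm_entropy_np(x, order=3, delay=1):
    # Normalized permutation (ordinal-pattern) entropy
    x = np.asarray(x, dtype=float)
    n = len(x) - (order - 1) * delay
    # Ordinal pattern of each length-`order` window
    patterns = np.array([np.argsort(x[i:i + order * delay:delay])
                         for i in range(n)])
    # Encode each permutation as a unique integer and count occurrences
    codes = patterns @ (order ** np.arange(order))
    _, counts = np.unique(codes, return_counts=True)
    p = counts / counts.sum()
    pe = -np.sum(p * np.log2(p))
    return pe / np.log2(factorial(order))  # normalize to [0, 1]

print(perm_entropy_np(np.arange(100)))             # 0 for a monotone ramp
rng = np.random.default_rng(0)
print(perm_entropy_np(rng.standard_normal(2000)))  # close to 1 for white noise
```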
###Markdown
Principal Component Analysis (PCA):
###Code
# Imports
from sklearn.decomposition import PCA
# Set hyperparams for PCA
n_components = 20
random_state = 32
###Output
_____no_output_____
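Before committing to a component count, it can help to look at the explained-variance ratio. A minimal numpy-only sketch of what `sklearn.decomposition.PCA` computes under the hood (centering, SVD, variance ratios, projection), on synthetic stand-in data:

```python
import numpy as np

rng = np.random.default_rng(32)
X = rng.standard_normal((200, 8))   # stand-in for the feature matrix

Xc = X - X.mean(axis=0)             # center each feature
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = S**2 / np.sum(S**2)     # explained-variance ratio per component

k = 4
scores = Xc @ Vt[:k].T              # projection onto the first k components
print(scores.shape)                 # -> (200, 4)
print(explained[:k].sum())          # fraction of variance retained
```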
###Markdown
PCA 20
###Code
pca_20 = PCA(n_components = n_components, random_state = random_state)
feature_df_20 = pca_20.fit_transform(feature_df1[feature_df1.columns[:-1]])
# feature_df_20 = pca_20.fit_transform(feature_df1[feature_df1[[-1]]])
pca_20_df = pd.DataFrame(data = feature_df_20)
pca_20_df = pd.concat([pca_20_df, feature_df1[feature_df1.columns[-1]]], axis = 1)
pca_20_df
pca_20_df.to_csv('1sec/PCA_20_features.csv', index = False)
###Output
_____no_output_____
###Markdown
PCA 50
###Code
n_components = 50
pca_50 = PCA(n_components = n_components, random_state = random_state)
feature_df_50 = pca_50.fit_transform(feature_df1[feature_df1.columns[:-1]])
pca_50_df = pd.DataFrame(data = feature_df_50)
pca_50_df = pd.concat([pca_50_df, feature_df1[feature_df1.columns[-1]]], axis = 1)
pca_50_df
pca_50_df.to_csv('1sec/PCA_50_features.csv', index = False)
###Output
_____no_output_____
###Markdown
PCA 70
###Code
n_components = 70
pca_70 = PCA(n_components = n_components, random_state = random_state)
feature_df_70 = pca_70.fit_transform(feature_df1[feature_df1.columns[:-1]])
pca_70_df = pd.DataFrame(data = feature_df_70)
pca_70_df = pd.concat([pca_70_df, feature_df1[feature_df1.columns[-1]]], axis = 1)
pca_70_df.to_csv('1sec/PCA_70_features.csv', index = False)
###Output
_____no_output_____
###Markdown
PCA 100
###Code
n_components = 100
pca_100 = PCA(n_components = n_components, random_state = random_state)
feature_df_100 = pca_100.fit_transform(feature_df1[feature_df1.columns[:-1]])
pca_100_df = pd.DataFrame(data = feature_df_100)
pca_100_df = pd.concat([pca_100_df, feature_df1[feature_df1.columns[-1]]], axis = 1)
pca_100_df.to_csv('1sec/PCA_100_features.csv', index = False)
###Output
_____no_output_____ |
examples/YutaMouse41-ephys-viz.ipynb | ###Markdown
YutaMouse41-ephys-viz

To use this notebook you will need to install the ephys_viz package. Roughly speaking, that involves installing ephys_viz from PyPI and installing the reactopya_jup notebook and/or lab extensions.

See: https://github.com/flatironinstitute/ephys-viz
###Code
# Imports and initialization of ephys-viz in this notebook
import pynwb
from pynwb import NWBHDF5IO
from nwbwidgets import nwb2widget
import ephys_viz as ev
from nwbwidgets.ephys_viz_interface import ephys_viz_neurodata_vis_spec as vis_spec
ev.init_jupyter()
# You need to have an .nwb file on your computer
file_name = 'YutaMouse41-150903.nwb'
# Lazy-load the nwb file
nwb_io = NWBHDF5IO(file_name, mode='r')
nwb = nwb_io.read()
# Display the LFP using ephys-viz
nwb2widget(nwb.fields['processing']['ecephys']['LFP'], vis_spec)
# The ephys-viz widget is integrated into nwbwidgets
nwb2widget(nwb, vis_spec)
###Output
_____no_output_____ |
courses/machine_learning/deepdive2/image_classification/labs/2_mnist_models.ipynb | ###Markdown
MNIST Image Classification with TensorFlow on Cloud AI Platform

This notebook demonstrates how to implement different image models on MNIST using the [tf.keras API](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras).

Learning Objectives

1. Understand how to build a Dense Neural Network (DNN) for image classification
2. Understand how to use dropout (DNN) for image classification
3. Understand how to use Convolutional Neural Networks (CNN)
4. Know how to deploy and use an image classification model using Google Cloud's [AI Platform](https://cloud.google.com/ai-platform/)

Each learning objective will correspond to a __TODO__ in the notebook, where you will complete the notebook cell's code before running the cell. Refer to the [solution notebook](../solutions/2_mnist_models.ipynb) for reference.

First things first. Configure the parameters below to match your own Google Cloud project details.
###Code
!sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst
# Here we'll show the currently installed version of TensorFlow
import tensorflow as tf
print(tf.__version__)
from datetime import datetime
import os
PROJECT = "your-project-id-here" # REPLACE WITH YOUR PROJECT ID
BUCKET = "your-bucket-id-here" # REPLACE WITH YOUR BUCKET NAME
REGION = "us-central1" # REPLACE WITH YOUR BUCKET REGION e.g. us-central1
MODEL_TYPE = "cnn" # "linear", "cnn", "dnn_dropout", or "dnn"
# Do not change these
os.environ["PROJECT"] = PROJECT
os.environ["BUCKET"] = BUCKET
os.environ["REGION"] = REGION
os.environ["MODEL_TYPE"] = MODEL_TYPE
os.environ["TFVERSION"] = "2.6" # Tensorflow version
os.environ["IMAGE_URI"] = os.path.join("gcr.io", PROJECT, "mnist_models")
###Output
_____no_output_____
###Markdown
Building a dynamic model

In the previous notebook, 1_mnist_linear.ipynb, we ran our code directly from the notebook. In order to run it on the AI Platform, it needs to be packaged as a python module.

The boilerplate structure for this module has already been set up in the folder `mnist_models`. The module lives in the sub-folder, `trainer`, and is designated as a python package with the empty `__init__.py` (`mnist_models/trainer/__init__.py`) file. It still needs the model and a trainer to run it, so let's make them.

Let's start with the trainer file first. This file parses command line arguments to feed into the model.
###Code
%%writefile mnist_models/trainer/task.py
import argparse
import json
import os
import sys
from . import model
def _parse_arguments(argv):
"""Parses command-line arguments."""
parser = argparse.ArgumentParser()
parser.add_argument(
'--model_type',
help='Which model type to use',
type=str, default='linear')
parser.add_argument(
'--epochs',
help='The number of epochs to train',
type=int, default=10)
parser.add_argument(
'--steps_per_epoch',
help='The number of steps per epoch to train',
type=int, default=100)
parser.add_argument(
'--job-dir',
help='Directory where to save the given model',
type=str, default='mnist_models/')
return parser.parse_known_args(argv)
def main():
"""Parses command line arguments and kicks off model training."""
args = _parse_arguments(sys.argv[1:])[0]
# Configure path for hyperparameter tuning.
trial_id = json.loads(
os.environ.get('TF_CONFIG', '{}')).get('task', {}).get('trial', '')
    output_path = args.job_dir if not trial_id else args.job_dir + '/' + trial_id
    model_layers = model.get_layers(args.model_type)
    image_model = model.build_model(model_layers, output_path)
    model_history = model.train_and_evaluate(
        image_model, args.epochs, args.steps_per_epoch, output_path)
if __name__ == '__main__':
main()
###Output
_____no_output_____
###Markdown
Next, let's group non-model functions into a util file to keep the model file simple. We'll copy over the `scale` and `load_dataset` functions from the previous lab.
###Code
%%writefile mnist_models/trainer/util.py
import tensorflow as tf
def scale(image, label):
"""Scales images from a 0-255 int range to a 0-1 float range"""
image = tf.cast(image, tf.float32)
image /= 255
image = tf.expand_dims(image, -1)
return image, label
def load_dataset(
data, training=True, buffer_size=5000, batch_size=100, nclasses=10):
"""Loads MNIST dataset into a tf.data.Dataset"""
(x_train, y_train), (x_test, y_test) = data
x = x_train if training else x_test
y = y_train if training else y_test
# One-hot encode the classes
y = tf.keras.utils.to_categorical(y, nclasses)
dataset = tf.data.Dataset.from_tensor_slices((x, y))
dataset = dataset.map(scale).batch(batch_size)
if training:
dataset = dataset.shuffle(buffer_size).repeat()
return dataset
###Output
_____no_output_____
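Before moving on, the scaling step above can be sanity-checked without TensorFlow. The sketch below mirrors what `scale` does to pixel values (dividing a 0-255 integer range down to 0-1), using plain Python lists instead of tensors:

```python
# Pure-Python mirror of the scale() transform above: map 0-255 pixel
# values into the 0-1 float range the model expects.
def scale_pixels(pixels):
    return [p / 255 for p in pixels]

scaled = scale_pixels([0, 128, 255])  # lowest, middle, highest pixel values
```

The endpoints map exactly to 0.0 and 1.0, which is why no further normalization is needed downstream.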
###Markdown
Finally, let's code the models! The [tf.keras API](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras) accepts an array of [layers](https://www.tensorflow.org/api_docs/python/tf/keras/layers) into a [model object](https://www.tensorflow.org/api_docs/python/tf/keras/Model), so we can create a dictionary of layers based on the different model types we want to use. The below file has three functions: `get_layers`, `build_model`, and `train_and_evaluate`. We will build the structure of our model in `get_layers`. Last but not least, we'll copy over the training code from the previous lab into `train_and_evaluate`.**TODO 1**: Define the Keras layers for a DNN model **TODO 2**: Define the Keras layers for a dropout model **TODO 3**: Define the Keras layers for a CNN model Hint: These models progressively build on each other. Look at the imported `tensorflow.keras.layers` modules and the default values for the variables defined in `get_layers` for guidance.
###Code
%%writefile mnist_models/trainer/model.py
import os
import shutil
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
from tensorflow.keras import Sequential
from tensorflow.keras.callbacks import TensorBoard
from tensorflow.keras.layers import (
Conv2D, Dense, Dropout, Flatten, MaxPooling2D, Softmax)
from . import util
# Image Variables
WIDTH = 28
HEIGHT = 28
def get_layers(
model_type,
nclasses=10,
hidden_layer_1_neurons=400,
hidden_layer_2_neurons=100,
dropout_rate=0.25,
num_filters_1=64,
kernel_size_1=3,
pooling_size_1=2,
num_filters_2=32,
kernel_size_2=3,
pooling_size_2=2):
"""Constructs layers for a keras model based on a dict of model types."""
model_layers = {
'linear': [
Flatten(),
Dense(nclasses),
Softmax()
],
'dnn': [
# TODO
],
'dnn_dropout': [
# TODO
],
'cnn': [
# TODO
]
}
return model_layers[model_type]
def build_model(layers, output_dir):
"""Compiles keras model for image classification."""
model = Sequential(layers)
model.compile(optimizer='adam',
loss='categorical_crossentropy',
metrics=['accuracy'])
return model
def train_and_evaluate(model, num_epochs, steps_per_epoch, output_dir):
"""Compiles keras model and loads data into it for training."""
mnist = tf.keras.datasets.mnist.load_data()
train_data = util.load_dataset(mnist)
validation_data = util.load_dataset(mnist, training=False)
callbacks = []
if output_dir:
tensorboard_callback = TensorBoard(log_dir=output_dir)
callbacks = [tensorboard_callback]
history = model.fit(
train_data,
validation_data=validation_data,
epochs=num_epochs,
steps_per_epoch=steps_per_epoch,
verbose=2,
callbacks=callbacks)
if output_dir:
export_path = os.path.join(output_dir, 'keras_export')
model.save(export_path, save_format='tf')
return history
###Output
_____no_output_____
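When filling in the `cnn` TODO above, it helps to know what spatial sizes the default hyperparameters produce. The sketch below works that arithmetic out, assuming `'valid'` padding and stride 1 for `Conv2D` and non-overlapping `MaxPooling2D` (the Keras defaults):

```python
# Feature-map arithmetic for the default CNN hyperparameters above,
# assuming 'valid' padding and stride 1 (the Keras defaults).
def conv_out(size, kernel):       # Conv2D output size with 'valid' padding
    return size - kernel + 1

def pool_out(size, pool):         # MaxPooling2D with stride == pool size
    return size // pool

size = 28                                  # MNIST input height/width
size = pool_out(conv_out(size, 3), 2)      # Conv2D(64, 3) -> pool(2): 28 -> 26 -> 13
size = pool_out(conv_out(size, 3), 2)      # Conv2D(32, 3) -> pool(2): 13 -> 11 -> 5
flattened = size * size * 32               # units entering the Dense head
```

So with the defaults, `Flatten()` hands 800 features to the dense classifier head.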
###Markdown
Local TrainingWith everything set up, let's run locally to test the code. Some of the previous tests have been copied over into a testing script `mnist_models/trainer/test.py` to make sure the model still passes our previous checks. On `line 13`, you can specify which model types you would like to check. `line 14` and `line 15` have the number of epochs and steps per epoch respectively.Moment of truth! Run the code below to check your models against the unit tests. If you see "OK" at the end when it's finished running, congrats! You've passed the tests!
###Code
!python3 -m mnist_models.trainer.test
###Output
_____no_output_____
###Markdown
Now that we know that our models are working as expected, let's run it on the [Google Cloud AI Platform](https://cloud.google.com/ml-engine/docs/). We can run it as a Python module locally first using the command line.The below cell transfers some of our variables to the command line and creates a job directory that includes a timestamp.
###Code
current_time = datetime.now().strftime("%y%m%d_%H%M%S")
model_type = 'cnn' # "linear", "cnn", "dnn_dropout", or "dnn"
os.environ["MODEL_TYPE"] = model_type
os.environ["JOB_DIR"] = "mnist_models/models/{}_{}/".format(
model_type, current_time)
###Output
_____no_output_____
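The timestamp suffix in `JOB_DIR` is what keeps repeated runs from overwriting each other's checkpoints. A quick sketch of the same string assembly, with a regex check on the `%y%m%d_%H%M%S` format:

```python
# How the JOB_DIR string above is assembled: model type plus a
# yymmdd_HHMMSS timestamp, so each run gets its own directory.
import re
from datetime import datetime

current_time = datetime.now().strftime("%y%m%d_%H%M%S")
job_dir = "mnist_models/models/{}_{}/".format("cnn", current_time)
```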
###Markdown
The cell below runs the local version of the code. The `epochs` and `steps_per_epoch` flags can be changed to run for longer or shorter, as defined in our `mnist_models/trainer/task.py` file.
###Code
%%bash
python3 -m mnist_models.trainer.task \
--job-dir=$JOB_DIR \
--epochs=5 \
--steps_per_epoch=50 \
--model_type=$MODEL_TYPE
###Output
_____no_output_____
###Markdown
Training on the cloudSince we're using an unreleased version of TensorFlow on AI Platform, we can instead use a [Deep Learning Container](https://cloud.google.com/ai-platform/deep-learning-containers/docs/overview) in order to take advantage of libraries and applications not normally packaged with AI Platform. Below is a simple [Dockerfile](https://docs.docker.com/engine/reference/builder/) which copies our code to be used in a TF2 environment.
###Code
%%writefile mnist_models/Dockerfile
FROM gcr.io/deeplearning-platform-release/tf2-cpu
COPY mnist_models/trainer /mnist_models/trainer
ENTRYPOINT ["python3", "-m", "mnist_models.trainer.task"]
###Output
_____no_output_____
###Markdown
The below command builds the image and ships it off to Google Cloud so it can be used for AI Platform. When built, it will show up [here](http://console.cloud.google.com/gcr) with the name `mnist_models`. ([Click here](https://console.cloud.google.com/cloud-build) to enable Cloud Build)
###Code
!docker build -f mnist_models/Dockerfile -t $IMAGE_URI ./
!docker push $IMAGE_URI
###Output
_____no_output_____
###Markdown
Finally, we can kick off the [AI Platform training job](https://cloud.google.com/sdk/gcloud/reference/ai-platform/jobs/submit/training). We can pass in our Docker image using the `master-image-uri` flag.
###Code
current_time = datetime.now().strftime("%y%m%d_%H%M%S")
model_type = 'cnn' # "linear", "cnn", "dnn_dropout", or "dnn"
os.environ["MODEL_TYPE"] = model_type
os.environ["JOB_DIR"] = "gs://{}/mnist_{}_{}/".format(
BUCKET, model_type, current_time)
os.environ["JOB_NAME"] = "mnist_{}_{}".format(
model_type, current_time)
%%bash
echo $JOB_DIR $REGION $JOB_NAME
gcloud ai-platform jobs submit training $JOB_NAME \
--staging-bucket=gs://$BUCKET \
--region=$REGION \
--master-image-uri=$IMAGE_URI \
--scale-tier=BASIC_GPU \
--job-dir=$JOB_DIR \
-- \
--model_type=$MODEL_TYPE
###Output
_____no_output_____
###Markdown
Deploying and predicting with modelOnce you have a model you're proud of, let's deploy it! All we need to do is give AI Platform the location of the model. Below uses the keras export path of the previous job, but `${JOB_DIR}keras_export/` can always be changed to a different path.Uncomment the delete commands below if you are getting an "already exists error" and want to deploy a new model.
###Code
%%bash
MODEL_NAME="mnist"
MODEL_VERSION=${MODEL_TYPE}
MODEL_LOCATION=${JOB_DIR}keras_export/
echo "Deleting and deploying $MODEL_NAME $MODEL_VERSION from $MODEL_LOCATION ... this will take a few minutes"
#yes | gcloud ai-platform versions delete ${MODEL_VERSION} --model ${MODEL_NAME}
#yes | gcloud ai-platform models delete ${MODEL_NAME}
gcloud config set ai_platform/region global
gcloud ai-platform models create ${MODEL_NAME} --regions $REGION
gcloud ai-platform versions create ${MODEL_VERSION} \
--model ${MODEL_NAME} \
--origin ${MODEL_LOCATION} \
--framework tensorflow \
--runtime-version=2.6
###Output
_____no_output_____
###Markdown
To predict with the model, let's take one of the example images.**TODO 4**: Write a `.json` file with image data to send to an AI Platform deployed model
###Code
import json, codecs
import tensorflow as tf
import matplotlib.pyplot as plt
from mnist_models.trainer import util
HEIGHT = 28
WIDTH = 28
IMGNO = 12
mnist = tf.keras.datasets.mnist.load_data()
(x_train, y_train), (x_test, y_test) = mnist
test_image = x_test[IMGNO]
jsondata = test_image.reshape(HEIGHT, WIDTH, 1).tolist()
json.dump(jsondata, codecs.open("test.json", "w", encoding = "utf-8"))
plt.imshow(test_image.reshape(HEIGHT, WIDTH));
###Output
_____no_output_____
###Markdown
Finally, we can send it to the prediction service. The output will have a 1 at the index of the digit it is predicting. Congrats! You've completed the lab!
###Code
%%bash
gcloud ai-platform predict \
--model=mnist \
--version=${MODEL_TYPE} \
--json-instances=./test.json
###Output
_____no_output_____
###Markdown
MNIST Image Classification with TensorFlow on Cloud ML EngineThis notebook demonstrates how to implement different image models on MNIST using the [tf.keras API](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras). Learning Objectives1. Understand how to build a Dense Neural Network (DNN) for image classification2. Understand how to use dropout (DNN) for image classification3. Understand how to use Convolutional Neural Networks (CNN)4. Know how to deploy and use an image classification model using Google Cloud's [AI Platform](https://cloud.google.com/ai-platform/)First things first. Configure the parameters below to match your own Google Cloud project details.
###Code
from datetime import datetime
import os
PROJECT = "your-project-id-here" # REPLACE WITH YOUR PROJECT ID
BUCKET = "your-bucket-id-here" # REPLACE WITH YOUR BUCKET NAME
REGION = "us-central1" # REPLACE WITH YOUR BUCKET REGION e.g. us-central1
# Do not change these
os.environ["PROJECT"] = PROJECT
os.environ["BUCKET"] = BUCKET
os.environ["REGION"] = REGION
os.environ["IMAGE_URI"] = os.path.join("gcr.io", PROJECT, "mnist_models")
###Output
_____no_output_____
###Markdown
Building a dynamic modelIn the previous notebook, mnist_linear.ipynb, we ran our code directly from the notebook. In order to run it on the AI Platform, it needs to be packaged as a python module.The boilerplate structure for this module has already been set up in the folder `mnist_models`. The module lives in the sub-folder, `trainer`, and is designated as a python package with the empty `__init__.py` (`mnist_models/trainer/__init__.py`) file. It still needs the model and a trainer to run it, so let's make them.Let's start with the trainer file first. This file parses command line arguments to feed into the model.
###Code
%%writefile mnist_models/trainer/task.py
import argparse
import json
import os
import sys
from . import model
def _parse_arguments(argv):
"""Parses command-line arguments."""
parser = argparse.ArgumentParser()
parser.add_argument(
'--model_type',
help='Which model type to use',
type=str, default='linear')
parser.add_argument(
'--epochs',
help='The number of epochs to train',
type=int, default=10)
parser.add_argument(
'--steps_per_epoch',
help='The number of steps per epoch to train',
type=int, default=100)
parser.add_argument(
'--job-dir',
help='Directory where to save the given model',
type=str, default='mnist_models/')
return parser.parse_known_args(argv)
def main():
"""Parses command line arguments and kicks off model training."""
args = _parse_arguments(sys.argv[1:])[0]
# Configure path for hyperparameter tuning.
trial_id = json.loads(
os.environ.get('TF_CONFIG', '{}')).get('task', {}).get('trial', '')
    output_path = args.job_dir if not trial_id else args.job_dir + '/' + trial_id
model_layers = model.get_layers(args.model_type)
image_model = model.build_model(model_layers, args.job_dir)
model_history = model.train_and_evaluate(
image_model, args.epochs, args.steps_per_epoch, args.job_dir)
if __name__ == '__main__':
main()
###Output
_____no_output_____
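The hyperparameter-tuning lookup in `main()` can be exercised in isolation. AI Platform injects a JSON `TF_CONFIG` environment variable, and when tuning is on a trial ID appears under `task.trial`; the sketch below mirrors the same parsing logic with two hypothetical payloads:

```python
# Mirrors the TF_CONFIG parsing in task.py's main(): a plain training
# job yields an empty trial ID, a tuning trial yields its ID string.
import json

def get_trial_id(tf_config):
    return json.loads(tf_config or '{}').get('task', {}).get('trial', '')

no_tuning = get_trial_id('{}')                     # ordinary training job
tuning = get_trial_id('{"task": {"trial": "7"}}')  # hyperparameter trial 7
```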
###Markdown
Next, let's group non-model functions into a util file to keep the model file simple. We'll copy over the `scale` and `load_dataset` functions from the previous lab.
###Code
%%writefile mnist_models/trainer/util.py
import tensorflow as tf
def scale(image, label):
"""Scales images from a 0-255 int range to a 0-1 float range"""
image = tf.cast(image, tf.float32)
image /= 255
image = tf.expand_dims(image, -1)
return image, label
def load_dataset(
data, training=True, buffer_size=5000, batch_size=100, nclasses=10):
"""Loads MNIST dataset into a tf.data.Dataset"""
(x_train, y_train), (x_test, y_test) = data
x = x_train if training else x_test
y = y_train if training else y_test
# One-hot encode the classes
y = tf.keras.utils.to_categorical(y, nclasses)
dataset = tf.data.Dataset.from_tensor_slices((x, y))
dataset = dataset.map(scale).batch(batch_size)
if training:
dataset = dataset.shuffle(buffer_size).repeat()
return dataset
###Output
_____no_output_____
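The one-hot step in `load_dataset` is worth seeing concretely. A pure-Python equivalent of `tf.keras.utils.to_categorical` for a single label (a sketch, so TensorFlow isn't needed to follow along):

```python
# Pure-Python view of the one-hot encoding above: label 3 with 10
# classes becomes a length-10 vector with a single 1.0 at index 3.
def one_hot(label, nclasses=10):
    vec = [0.0] * nclasses
    vec[label] = 1.0
    return vec

encoded = one_hot(3)
```

This is also why the model compiles with `categorical_crossentropy` rather than the sparse variant.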
###Markdown
Finally, let's code the models! The [tf.keras API](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras) accepts an array of [layers](https://www.tensorflow.org/api_docs/python/tf/keras/layers) into a [model object](https://www.tensorflow.org/api_docs/python/tf/keras/Model), so we can create a dictionary of layers based on the different model types we want to use. The below file has three functions: `get_layers`, `build_model`, and `train_and_evaluate`. We will build the structure of our model in `get_layers`. Last but not least, we'll copy over the training code from the previous lab into `train_and_evaluate`.**TODO 1**: Define the Keras layers for a DNN model **TODO 2**: Define the Keras layers for a dropout model **TODO 3**: Define the Keras layers for a CNN model Hint: These models progressively build on each other. Look at the imported `tensorflow.keras.layers` modules and the default values for the variables defined in `get_layers` for guidance.
###Code
%%writefile mnist_models/trainer/model.py
import os
import shutil
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
from tensorflow.keras import Sequential
from tensorflow.keras.callbacks import TensorBoard
from tensorflow.keras.layers import (
Conv2D, Dense, Dropout, Flatten, MaxPooling2D, Softmax)
from . import util
# Image Variables
WIDTH = 28
HEIGHT = 28
def get_layers(
model_type,
nclasses=10,
hidden_layer_1_neurons=400,
hidden_layer_2_neurons=100,
dropout_rate=0.25,
num_filters_1=64,
kernel_size_1=3,
pooling_size_1=2,
num_filters_2=32,
kernel_size_2=3,
pooling_size_2=2):
"""Constructs layers for a keras model based on a dict of model types."""
model_layers = {
'linear': [
Flatten(),
Dense(nclasses),
Softmax()
],
'dnn': [
# TODO
],
'dnn_dropout': [
# TODO
],
'cnn': [
# TODO
]
}
return model_layers[model_type]
def build_model(layers, output_dir):
"""Compiles keras model for image classification."""
model = Sequential(layers)
model.compile(optimizer='adam',
loss='categorical_crossentropy',
metrics=['accuracy'])
return model
def train_and_evaluate(model, num_epochs, steps_per_epoch, output_dir):
"""Compiles keras model and loads data into it for training."""
mnist = tf.keras.datasets.mnist.load_data()
train_data = util.load_dataset(mnist)
validation_data = util.load_dataset(mnist, training=False)
callbacks = []
if output_dir:
tensorboard_callback = TensorBoard(log_dir=output_dir)
callbacks = [tensorboard_callback]
history = model.fit(
train_data,
validation_data=validation_data,
epochs=num_epochs,
steps_per_epoch=steps_per_epoch,
verbose=2,
callbacks=callbacks)
if output_dir:
export_path = os.path.join(output_dir, 'keras_export')
model.save(export_path, save_format='tf')
return history
###Output
_____no_output_____
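A quick note on the training volume implied by the defaults: since the training dataset repeats, each epoch consumes `steps_per_epoch * batch_size` images. The arithmetic below assumes the default flags from `task.py` (`epochs=10`, `steps_per_epoch=100`) and the `batch_size=100` used by `util.load_dataset`:

```python
# Images consumed per epoch and per run under the default settings
# (epochs=10, steps_per_epoch=100 from task.py; batch_size=100 from util).
EPOCHS, STEPS_PER_EPOCH, BATCH_SIZE = 10, 100, 100

images_per_epoch = STEPS_PER_EPOCH * BATCH_SIZE
total_images = images_per_epoch * EPOCHS
```

At 10,000 images per epoch, a default run sees roughly 100,000 (possibly repeated) training examples out of MNIST's 60,000.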
###Markdown
Local TrainingWith everything set up, let's run locally to test the code. Some of the previous tests have been copied over into a testing script `mnist_models/trainer/test.py` to make sure the model still passes our previous checks. On `line 13`, you can specify which model types you would like to check. `line 14` and `line 15` have the number of epochs and steps per epoch respectively.Moment of truth! Run the code below to check your models against the unit tests. If you see "OK" at the end when it's finished running, congrats! You've passed the tests!
###Code
!python3 -m mnist_models.trainer.test
###Output
_____no_output_____
###Markdown
Now that we know that our models are working as expected, let's run it on the [Google Cloud AI Platform](https://cloud.google.com/ml-engine/docs/). We can run it as a Python module locally first using the command line.The below cell transfers some of our variables to the command line and creates a job directory that includes a timestamp. This is where our model and TensorBoard data will be stored.
###Code
current_time = datetime.now().strftime("%y%m%d_%H%M%S")
model_type = 'cnn'
os.environ["MODEL_TYPE"] = model_type
os.environ["JOB_DIR"] = "mnist_models/models/{}_{}/".format(
model_type, current_time)
###Output
_____no_output_____
###Markdown
The cell below runs the local version of the code. The `epochs` and `steps_per_epoch` flags can be changed to run for longer or shorter, as defined in our `mnist_models/trainer/task.py` file.
###Code
%%bash
python3 -m mnist_models.trainer.task \
--job-dir=$JOB_DIR \
--epochs=5 \
--steps_per_epoch=50 \
--model_type=$MODEL_TYPE
###Output
_____no_output_____
###Markdown
Training on the cloudSince we're using an unreleased version of TensorFlow on AI Platform, we can instead use a [Deep Learning Container](https://cloud.google.com/ai-platform/deep-learning-containers/docs/overview) in order to take advantage of libraries and applications not normally packaged with AI Platform. Below is a simple [Dockerfile](https://docs.docker.com/engine/reference/builder/) which copies our code to be used in a TF2 environment.
###Code
%%writefile mnist_models/Dockerfile
FROM gcr.io/deeplearning-platform-release/tf2-cpu
COPY mnist_models/trainer /mnist_models/trainer
ENTRYPOINT ["python3", "-m", "mnist_models.trainer.task"]
###Output
_____no_output_____
###Markdown
The below command builds the image and ships it off to Google Cloud so it can be used for AI Platform. When built, it will show up [here](http://console.cloud.google.com/gcr) with the name `mnist_models`. ([Click here](https://console.cloud.google.com/cloud-build) to enable Cloud Build)
###Code
!docker build -f mnist_models/Dockerfile -t $IMAGE_URI ./
!docker push $IMAGE_URI
###Output
_____no_output_____
###Markdown
Finally, we can kick off the [AI Platform training job](https://cloud.google.com/sdk/gcloud/reference/ai-platform/jobs/submit/training). We can pass in our Docker image using the `master-image-uri` flag.
###Code
current_time = datetime.now().strftime("%y%m%d_%H%M%S")
model_type = 'cnn'
os.environ["MODEL_TYPE"] = model_type
os.environ["JOB_DIR"] = "gs://{}/mnist_{}_{}/".format(
BUCKET, model_type, current_time)
os.environ["JOB_NAME"] = "mnist_{}_{}".format(
model_type, current_time)
%%bash
echo $JOB_DIR $REGION $JOB_NAME
gcloud ai-platform jobs submit training $JOB_NAME \
--staging-bucket=gs://$BUCKET \
--region=$REGION \
--master-image-uri=$IMAGE_URI \
--scale-tier=BASIC_GPU \
--job-dir=$JOB_DIR \
-- \
--model_type=$MODEL_TYPE
###Output
_____no_output_____
###Markdown
Can't wait to see the results? Run the code below and copy the output into the [Google Cloud Shell](https://console.cloud.google.com/home/dashboard?cloudshell=true) to follow along with TensorBoard. Look at the web preview on port 6006.
###Code
!echo "tensorboard --logdir $JOB_DIR"
###Output
_____no_output_____
###Markdown
Deploying and predicting with modelOnce you have a model you're proud of, let's deploy it! All we need to do is give AI Platform the location of the model. Below uses the Keras export path of the previous job, but `${JOB_DIR}keras_export/` can always be changed to a different path.Even though we're using a 1.14 runtime, it's compatible with TF2 exported models. Phew!Uncomment the delete commands below if you are getting an "already exists error" and want to deploy a new model.
###Code
%%bash
MODEL_NAME="mnist"
MODEL_VERSION=${MODEL_TYPE}
MODEL_LOCATION=${JOB_DIR}keras_export/
echo "Deleting and deploying $MODEL_NAME $MODEL_VERSION from $MODEL_LOCATION ... this will take a few minutes"
#yes | gcloud ai-platform versions delete ${MODEL_VERSION} --model ${MODEL_NAME}
#yes | gcloud ai-platform models delete ${MODEL_NAME}
gcloud ai-platform models create ${MODEL_NAME} --regions $REGION
gcloud ai-platform versions create ${MODEL_VERSION} \
--model ${MODEL_NAME} \
--origin ${MODEL_LOCATION} \
--framework tensorflow \
--runtime-version=1.14
###Output
_____no_output_____
###Markdown
To predict with the model, let's take one of the example images.**TODO 4**: Write a `.json` file with image data to send to an AI Platform deployed model
###Code
import json, codecs
import tensorflow as tf
import matplotlib.pyplot as plt
from mnist_models.trainer import util
HEIGHT = 28
WIDTH = 28
IMGNO = 12
mnist = tf.keras.datasets.mnist.load_data()
(x_train, y_train), (x_test, y_test) = mnist
test_image = x_test[IMGNO]
jsondata = test_image.reshape(HEIGHT, WIDTH, 1).tolist()
json.dump(jsondata, codecs.open("test.json", "w", encoding = "utf-8"))
plt.imshow(test_image.reshape(HEIGHT, WIDTH));
###Output
_____no_output_____
###Markdown
Finally, we can send it to the prediction service. The output will have a 1 at the index of the digit it is predicting. Congrats! You've completed the lab!
###Code
%%bash
gcloud ai-platform predict \
--model=mnist \
--version=${MODEL_TYPE} \
--json-instances=./test.json
###Output
_____no_output_____
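Reading off the predicted digit from the service's per-class output vector is just an argmax. A pure-Python sketch, using a made-up output vector standing in for a real prediction:

```python
# Recover the predicted digit from a softmax-style output vector:
# the answer is the index of the largest probability.
def predicted_digit(probs):
    return max(range(len(probs)), key=lambda i: probs[i])

# Hypothetical service output for one instance; mass concentrated on 7.
sample_output = [0.01, 0.02, 0.01, 0.01, 0.01, 0.01, 0.01, 0.9, 0.01, 0.01]
digit = predicted_digit(sample_output)
```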
###Markdown
MNIST Image Classification with TensorFlow on Cloud AI PlatformThis notebook demonstrates how to implement different image models on MNIST using the [tf.keras API](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras). Learning Objectives1. Understand how to build a Dense Neural Network (DNN) for image classification2. Understand how to use dropout (DNN) for image classification3. Understand how to use Convolutional Neural Networks (CNN)4. Know how to deploy and use an image classification model using Google Cloud's [AI Platform](https://cloud.google.com/ai-platform/)Each learning objective will correspond to a __TODO__ in the notebook, where you will complete the notebook cell's code before running the cell. Refer to the [solution notebook](training-data-analyst/courses/machine_learning/deepdive2/image_classification/solutions/2_mnist_models.ipynb) for reference.First things first. Configure the parameters below to match your own Google Cloud project details.
###Code
!sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst
# Here we'll show the currently installed version of TensorFlow
import tensorflow as tf
print(tf.__version__)
from datetime import datetime
import os
PROJECT = "your-project-id-here" # REPLACE WITH YOUR PROJECT ID
BUCKET = "your-bucket-id-here" # REPLACE WITH YOUR BUCKET NAME
REGION = "us-central1" # REPLACE WITH YOUR BUCKET REGION e.g. us-central1
MODEL_TYPE = "cnn" # "linear", "cnn", "dnn_dropout", or "dnn"
# Do not change these
os.environ["PROJECT"] = PROJECT
os.environ["BUCKET"] = BUCKET
os.environ["REGION"] = REGION
os.environ["MODEL_TYPE"] = MODEL_TYPE
os.environ["TFVERSION"] = "2.5" # Tensorflow version
os.environ["IMAGE_URI"] = os.path.join("gcr.io", PROJECT, "mnist_models")
###Output
_____no_output_____
###Markdown
Building a dynamic modelIn the previous notebook, mnist_linear.ipynb, we ran our code directly from the notebook. In order to run it on the AI Platform, it needs to be packaged as a python module.The boilerplate structure for this module has already been set up in the folder `mnist_models`. The module lives in the sub-folder, `trainer`, and is designated as a python package with the empty `__init__.py` (`mnist_models/trainer/__init__.py`) file. It still needs the model and a trainer to run it, so let's make them.Let's start with the trainer file first. This file parses command line arguments to feed into the model.
###Code
%%writefile mnist_models/trainer/task.py
import argparse
import json
import os
import sys
from . import model
def _parse_arguments(argv):
"""Parses command-line arguments."""
parser = argparse.ArgumentParser()
parser.add_argument(
'--model_type',
help='Which model type to use',
type=str, default='linear')
parser.add_argument(
'--epochs',
help='The number of epochs to train',
type=int, default=10)
parser.add_argument(
'--steps_per_epoch',
help='The number of steps per epoch to train',
type=int, default=100)
parser.add_argument(
'--job-dir',
help='Directory where to save the given model',
type=str, default='mnist_models/')
return parser.parse_known_args(argv)
def main():
"""Parses command line arguments and kicks off model training."""
args = _parse_arguments(sys.argv[1:])[0]
# Configure path for hyperparameter tuning.
trial_id = json.loads(
os.environ.get('TF_CONFIG', '{}')).get('task', {}).get('trial', '')
    output_path = args.job_dir if not trial_id else args.job_dir + '/' + trial_id
model_layers = model.get_layers(args.model_type)
image_model = model.build_model(model_layers, args.job_dir)
model_history = model.train_and_evaluate(
image_model, args.epochs, args.steps_per_epoch, args.job_dir)
if __name__ == '__main__':
main()
###Output
_____no_output_____
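One detail in `task.py` worth noting: `parse_known_args()` returns a `(namespace, leftovers)` tuple, so flags the parser doesn't recognize (such as extras injected by AI Platform) are collected rather than raising an error. A minimal standalone demonstration with a hypothetical unused flag:

```python
# parse_known_args(), as used in task.py above: unknown flags land in a
# leftovers list instead of aborting, and defaults fill unset arguments.
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--model_type', type=str, default='linear')
parser.add_argument('--epochs', type=int, default=10)

args, unknown = parser.parse_known_args(
    ['--model_type=cnn', '--some_unused_flag=1'])
```

This is why `_parse_arguments` can safely receive `sys.argv[1:]` verbatim, and why `main()` takes element `[0]` of the result.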
###Markdown
Next, let's group non-model functions into a util file to keep the model file simple. We'll copy over the `scale` and `load_dataset` functions from the previous lab.
###Code
%%writefile mnist_models/trainer/util.py
import tensorflow as tf
def scale(image, label):
"""Scales images from a 0-255 int range to a 0-1 float range"""
image = tf.cast(image, tf.float32)
image /= 255
image = tf.expand_dims(image, -1)
return image, label
def load_dataset(
data, training=True, buffer_size=5000, batch_size=100, nclasses=10):
"""Loads MNIST dataset into a tf.data.Dataset"""
(x_train, y_train), (x_test, y_test) = data
x = x_train if training else x_test
y = y_train if training else y_test
# One-hot encode the classes
y = tf.keras.utils.to_categorical(y, nclasses)
dataset = tf.data.Dataset.from_tensor_slices((x, y))
dataset = dataset.map(scale).batch(batch_size)
if training:
dataset = dataset.shuffle(buffer_size).repeat()
return dataset
###Output
_____no_output_____
###Markdown
Finally, let's code the models! The [tf.keras API](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras) accepts an array of [layers](https://www.tensorflow.org/api_docs/python/tf/keras/layers) into a [model object](https://www.tensorflow.org/api_docs/python/tf/keras/Model), so we can create a dictionary of layers based on the different model types we want to use. The below file has three functions: `get_layers`, `build_model`, and `train_and_evaluate`. We will build the structure of our model in `get_layers`. Last but not least, we'll copy over the training code from the previous lab into `train_and_evaluate`.**TODO 1**: Define the Keras layers for a DNN model **TODO 2**: Define the Keras layers for a dropout model **TODO 3**: Define the Keras layers for a CNN model Hint: These models progressively build on each other. Look at the imported `tensorflow.keras.layers` modules and the default values for the variables defined in `get_layers` for guidance.
###Code
%%writefile mnist_models/trainer/model.py
import os
import shutil
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
from tensorflow.keras import Sequential
from tensorflow.keras.callbacks import TensorBoard
from tensorflow.keras.layers import (
Conv2D, Dense, Dropout, Flatten, MaxPooling2D, Softmax)
from . import util
# Image Variables
WIDTH = 28
HEIGHT = 28
def get_layers(
model_type,
nclasses=10,
hidden_layer_1_neurons=400,
hidden_layer_2_neurons=100,
dropout_rate=0.25,
num_filters_1=64,
kernel_size_1=3,
pooling_size_1=2,
num_filters_2=32,
kernel_size_2=3,
pooling_size_2=2):
"""Constructs layers for a keras model based on a dict of model types."""
model_layers = {
'linear': [
Flatten(),
Dense(nclasses),
Softmax()
],
'dnn': [
# TODO
],
'dnn_dropout': [
# TODO
],
'cnn': [
# TODO
]
}
return model_layers[model_type]
def build_model(layers, output_dir):
"""Compiles keras model for image classification."""
model = Sequential(layers)
model.compile(optimizer='adam',
loss='categorical_crossentropy',
metrics=['accuracy'])
return model
def train_and_evaluate(model, num_epochs, steps_per_epoch, output_dir):
"""Compiles keras model and loads data into it for training."""
mnist = tf.keras.datasets.mnist.load_data()
train_data = util.load_dataset(mnist)
validation_data = util.load_dataset(mnist, training=False)
callbacks = []
if output_dir:
tensorboard_callback = TensorBoard(log_dir=output_dir)
callbacks = [tensorboard_callback]
history = model.fit(
train_data,
validation_data=validation_data,
epochs=num_epochs,
steps_per_epoch=steps_per_epoch,
verbose=2,
callbacks=callbacks)
if output_dir:
export_path = os.path.join(output_dir, 'keras_export')
model.save(export_path, save_format='tf')
return history
###Output
_____no_output_____
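For intuition about the `'linear'` baseline above: `Flatten()` turns a 28x28 image into 784 features, and `Dense(nclasses)` adds one weight per feature per class plus one bias per class. The arithmetic, worked out in plain Python:

```python
# Parameter count of the 'linear' model above:
# Flatten -> 784 inputs, Dense(10) -> weights plus per-class biases.
WIDTH, HEIGHT, NCLASSES = 28, 28, 10

features = WIDTH * HEIGHT        # 28 * 28 = 784 flattened inputs
weights = features * NCLASSES    # one weight per input per class
biases = NCLASSES                # one bias per class
linear_params = weights + biases
```

The DNN and CNN variants you define in the TODOs will have many times more parameters than this 7,850-parameter baseline.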
###Markdown
Local TrainingWith everything set up, let's run locally to test the code. Some of the previous tests have been copied over into a testing script `mnist_models/trainer/test.py` to make sure the model still passes our previous checks. On `line 13`, you can specify which model types you would like to check. `line 14` and `line 15` have the number of epochs and steps per epoch respectively.Moment of truth! Run the code below to check your models against the unit tests. If you see "OK" at the end when it's finished running, congrats! You've passed the tests!
###Code
!python3 -m mnist_models.trainer.test
###Output
_____no_output_____
###Markdown
Now that we know that our models are working as expected, let's run it on the [Google Cloud AI Platform](https://cloud.google.com/ml-engine/docs/). We can run it as a Python module locally first using the command line.The below cell transfers some of our variables to the command line and creates a job directory that includes a timestamp.
###Code
current_time = datetime.now().strftime("%y%m%d_%H%M%S")
model_type = 'cnn' # "linear", "cnn", "dnn_dropout", or "dnn"
os.environ["MODEL_TYPE"] = model_type
os.environ["JOB_DIR"] = "mnist_models/models/{}_{}/".format(
model_type, current_time)
###Output
_____no_output_____
###Markdown
The cell below runs the local version of the code. The `epochs` and `steps_per_epoch` flags can be changed to run for longer or shorter, as defined in our `mnist_models/trainer/task.py` file.
###Code
%%bash
python3 -m mnist_models.trainer.task \
--job-dir=$JOB_DIR \
--epochs=5 \
--steps_per_epoch=50 \
--model_type=$MODEL_TYPE
###Output
_____no_output_____
###Markdown
Training on the cloudSince we're using an unreleased version of TensorFlow on AI Platform, we can instead use a [Deep Learning Container](https://cloud.google.com/ai-platform/deep-learning-containers/docs/overview) in order to take advantage of libraries and applications not normally packaged with AI Platform. Below is a simple [Dockerfile](https://docs.docker.com/engine/reference/builder/) which copies our code to be used in a TF2 environment.
###Code
%%writefile mnist_models/Dockerfile
FROM gcr.io/deeplearning-platform-release/tf2-cpu
COPY mnist_models/trainer /mnist_models/trainer
ENTRYPOINT ["python3", "-m", "mnist_models.trainer.task"]
###Output
_____no_output_____
###Markdown
The below command builds the image and ships it off to Google Cloud so it can be used for AI Platform. When built, it will show up [here](http://console.cloud.google.com/gcr) with the name `mnist_models`. ([Click here](https://console.cloud.google.com/cloud-build) to enable Cloud Build)
###Code
!docker build -f mnist_models/Dockerfile -t $IMAGE_URI ./
!docker push $IMAGE_URI
###Output
_____no_output_____
###Markdown
Finally, we can kick off the [AI Platform training job](https://cloud.google.com/sdk/gcloud/reference/ai-platform/jobs/submit/training). We can pass in our Docker image using the `master-image-uri` flag.
###Code
current_time = datetime.now().strftime("%y%m%d_%H%M%S")
model_type = 'cnn' # "linear", "cnn", "dnn_dropout", or "dnn"
os.environ["MODEL_TYPE"] = model_type
os.environ["JOB_DIR"] = "gs://{}/mnist_{}_{}/".format(
BUCKET, model_type, current_time)
os.environ["JOB_NAME"] = "mnist_{}_{}".format(
model_type, current_time)
%%bash
echo $JOB_DIR $REGION $JOB_NAME
gcloud ai-platform jobs submit training $JOB_NAME \
--staging-bucket=gs://$BUCKET \
--region=$REGION \
--master-image-uri=$IMAGE_URI \
--scale-tier=BASIC_GPU \
--job-dir=$JOB_DIR \
-- \
--model_type=$MODEL_TYPE
###Output
_____no_output_____
###Markdown
Deploying and predicting with modelOnce you have a model you're proud of, let's deploy it! All we need to do is give AI Platform the location of the model. Below uses the keras export path of the previous job, but `${JOB_DIR}keras_export/` can always be changed to a different path.Uncomment the delete commands below if you are getting an "already exists error" and want to deploy a new model.
###Code
%%bash
MODEL_NAME="mnist"
MODEL_VERSION=${MODEL_TYPE}
MODEL_LOCATION=${JOB_DIR}keras_export/
echo "Deleting and deploying $MODEL_NAME $MODEL_VERSION from $MODEL_LOCATION ... this will take a few minutes"
#yes | gcloud ai-platform versions delete ${MODEL_VERSION} --model ${MODEL_NAME}
#yes | gcloud ai-platform models delete ${MODEL_NAME}
gcloud config set ai_platform/region global
gcloud ai-platform models create ${MODEL_NAME} --regions $REGION
gcloud ai-platform versions create ${MODEL_VERSION} \
--model ${MODEL_NAME} \
--origin ${MODEL_LOCATION} \
--framework tensorflow \
--runtime-version=2.5
###Output
_____no_output_____
###Markdown
To predict with the model, let's take one of the example images.**TODO 4**: Write a `.json` file with image data to send to an AI Platform deployed model
###Code
import json, codecs
import tensorflow as tf
import matplotlib.pyplot as plt
from mnist_models.trainer import util
HEIGHT = 28
WIDTH = 28
IMGNO = 12
mnist = tf.keras.datasets.mnist.load_data()
(x_train, y_train), (x_test, y_test) = mnist
test_image = x_test[IMGNO]
jsondata = test_image.reshape(HEIGHT, WIDTH, 1).tolist()
json.dump(jsondata, codecs.open("test.json", "w", encoding = "utf-8"))
plt.imshow(test_image.reshape(HEIGHT, WIDTH));
###Output
_____no_output_____
###Markdown
Finally, we can send it to the prediction service. The output will have a value close to 1 at the index of the digit it predicts. Congrats! You've completed the lab!
###Code
%%bash
gcloud ai-platform predict \
--model=mnist \
--version=${MODEL_TYPE} \
--json-instances=./test.json
###Output
_____no_output_____
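The prediction comes back as a 10-element vector of class probabilities. A minimal sketch, using a hypothetical response vector (not actual service output), of how to read off the predicted digit:

```python
# Hypothetical probability vector for one instance; the largest value
# sits at the index of the predicted digit.
prediction = [0.01, 0.0, 0.02, 0.0, 0.0, 0.0, 0.0, 0.95, 0.01, 0.01]

# Index of the maximum probability is the predicted class.
predicted_digit = max(range(len(prediction)), key=lambda i: prediction[i])
print(predicted_digit)  # -> 7
```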
###Markdown
MNIST Image Classification with TensorFlow on Cloud ML EngineThis notebook demonstrates how to implement different image models on MNIST using the [tf.keras API](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras). Learning Objectives1. Understand how to build a Dense Neural Network (DNN) for image classification2. Understand how to use dropout (DNN) for image classification3. Understand how to use Convolutional Neural Networks (CNN)4. Know how to deploy and use an image classification model using Google Cloud's [AI Platform](https://cloud.google.com/ai-platform/)First things first. Configure the parameters below to match your own Google Cloud project details.
###Code
from datetime import datetime
import os
PROJECT = "your-project-id-here" # REPLACE WITH YOUR PROJECT ID
BUCKET = "your-bucket-id-here" # REPLACE WITH YOUR BUCKET NAME
REGION = "us-central1" # REPLACE WITH YOUR BUCKET REGION e.g. us-central1
# Do not change these
os.environ["PROJECT"] = PROJECT
os.environ["BUCKET"] = BUCKET
os.environ["REGION"] = REGION
os.environ["IMAGE_URI"] = os.path.join("gcr.io", PROJECT, "mnist_trainer")
###Output
_____no_output_____
###Markdown
Building a dynamic modelIn the previous notebook, mnist_linear.ipynb, we ran our code directly from the notebook. In order to run it on the AI Platform, it needs to be packaged as a python module.The boilerplate structure for this module has already been set up in the folder `mnist_models`. The module lives in the sub-folder, `trainer`, and is designated as a python package with the empty `__init__.py` (`mnist_models/trainer/__init__.py`) file. It still needs the model and a trainer to run it, so let's make them.Let's start with the trainer file first. This file parses command line arguments to feed into the model.
###Code
%%writefile mnist_trainer/trainer/task.py
import argparse
import json
import os
import sys
import model
def _parse_arguments(argv):
"""Parses command-line arguments."""
parser = argparse.ArgumentParser()
parser.add_argument(
'--model_type',
help='Which model type to use',
type=str, default='linear')
parser.add_argument(
'--epochs',
help='The number of epochs to train',
type=int, default=10)
parser.add_argument(
'--steps_per_epoch',
help='The number of steps per epoch to train',
type=int, default=100)
parser.add_argument(
'--job-dir',
help='Directory where to save the given model',
type=str, default='mnist_models/')
return parser.parse_known_args(argv)
def main():
"""Parses command line arguments and kicks off model training."""
args = _parse_arguments(sys.argv[1:])[0]
# Configure path for hyperparameter tuning.
trial_id = json.loads(
os.environ.get('TF_CONFIG', '{}')).get('task', {}).get('trial', '')
    output_path = args.job_dir if not trial_id else args.job_dir + '/' + trial_id
model_layers = model.get_layers(args.model_type)
image_model = model.build_model(model_layers, args.job_dir)
model_history = model.train_and_evaluate(
image_model, args.epochs, args.steps_per_epoch, args.job_dir)
if __name__ == '__main__':
main()
###Output
_____no_output_____
###Markdown
Next, let's group non-model functions into a util file to keep the model file simple. We'll copy over the `scale` and `load_dataset` functions from the previous lab.
###Code
%%writefile mnist_trainer/trainer/util.py
import tensorflow as tf
def scale(image, label):
"""Scales images from a 0-255 int range to a 0-1 float range"""
image = tf.cast(image, tf.float32)
image /= 255
image = tf.expand_dims(image, -1)
return image, label
def load_dataset(
data, training=True, buffer_size=5000, batch_size=100, nclasses=10):
"""Loads MNIST dataset into a tf.data.Dataset"""
(x_train, y_train), (x_test, y_test) = data
x = x_train if training else x_test
y = y_train if training else y_test
# One-hot encode the classes
y = tf.keras.utils.to_categorical(y, nclasses)
dataset = tf.data.Dataset.from_tensor_slices((x, y))
dataset = dataset.map(scale).batch(batch_size)
if training:
dataset = dataset.shuffle(buffer_size).repeat()
return dataset
###Output
_____no_output_____
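For illustration, the one-hot encoding that `tf.keras.utils.to_categorical` performs can be sketched in plain Python (a conceptual stand-in, not the keras implementation):

```python
def to_one_hot(labels, nclasses=10):
    """One-hot encodes integer class labels, mirroring keras' to_categorical."""
    return [[1.0 if i == label else 0.0 for i in range(nclasses)]
            for label in labels]

print(to_one_hot([3, 0], nclasses=5))
# -> [[0.0, 0.0, 0.0, 1.0, 0.0], [1.0, 0.0, 0.0, 0.0, 0.0]]
```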
###Markdown
Finally, let's code the models! The [tf.keras API](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras) accepts an array of [layers](https://www.tensorflow.org/api_docs/python/tf/keras/layers) into a [model object](https://www.tensorflow.org/api_docs/python/tf/keras/Model), so we can create a dictionary of layers based on the different model types we want to use. The below file has three functions: `get_layers`, `build_model`, and `train_and_evaluate`. We will build the structure of our model in `get_layers`. Last but not least, we'll copy over the training code from the previous lab into `train_and_evaluate`.**TODO 1**: Define the Keras layers for a DNN model **TODO 2**: Define the Keras layers for a dropout model **TODO 3**: Define the Keras layers for a CNN model Hint: These models progressively build on each other. Look at the imported `tensorflow.keras.layers` modules and the default values for the variables defined in `get_layers` for guidance.
###Code
%%writefile mnist_trainer/trainer/model.py
import os
import shutil
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
from tensorflow.keras import Sequential
from tensorflow.keras.callbacks import TensorBoard
from tensorflow.keras.layers import (
Conv2D, Dense, Dropout, Flatten, MaxPooling2D, Softmax)
import util
# Image Variables
WIDTH = 28
HEIGHT = 28
def get_layers(
model_type,
nclasses=10,
hidden_layer_1_neurons=400,
hidden_layer_2_neurons=100,
dropout_rate=0.25,
num_filters_1=64,
kernel_size_1=3,
pooling_size_1=2,
num_filters_2=32,
kernel_size_2=3,
pooling_size_2=2):
"""Constructs layers for a keras model based on a dict of model types."""
model_layers = {
'linear': [
Flatten(),
Dense(nclasses),
Softmax()
],
'dnn': [
# TODO
],
'dnn_dropout': [
# TODO
],
'cnn': [
# TODO
]
}
return model_layers[model_type]
def build_model(layers, output_dir):
"""Compiles keras model for image classification."""
model = Sequential(layers)
model.compile(optimizer='adam',
loss='categorical_crossentropy',
metrics=['accuracy'])
return model
def train_and_evaluate(model, num_epochs, steps_per_epoch, output_dir):
"""Compiles keras model and loads data into it for training."""
mnist = tf.keras.datasets.mnist.load_data()
train_data = util.load_dataset(mnist)
validation_data = util.load_dataset(mnist, training=False)
callbacks = []
if output_dir:
tensorboard_callback = TensorBoard(log_dir=output_dir)
callbacks = [tensorboard_callback]
history = model.fit(
train_data,
validation_data=validation_data,
epochs=num_epochs,
steps_per_epoch=steps_per_epoch,
verbose=2,
callbacks=callbacks)
if output_dir:
export_path = os.path.join(output_dir, 'keras_export')
model.save(export_path, save_format='tf')
return history
###Output
_____no_output_____
###Markdown
Local TrainingWith everything set up, let's run locally to test the code. Some of the previous tests have been copied over into a testing script `mnist_models/trainer/test.py` to make sure the model still passes our previous checks. On `line 34`, you can specify which model types you would like to check. `line 37` and `line 38` have the number of epochs and steps per epoch respectively. Now that we know that our models are working as expected, let's run it on the [Google Cloud AI Platform](https://cloud.google.com/ml-engine/docs/). We can run it as a python module locally first using the command line.The below cell transfers some of our variables to the command line as well as creates a job directory including a timestamp. This is where our model and tensorboard data will be stored.
###Code
current_time = datetime.now().strftime("%y%m%d_%H%M%S")
model_type = 'cnn'
os.environ["MODEL_TYPE"] = model_type
os.environ["JOB_DIR"] = "mnist_trainer/models/{}_{}/".format(
model_type, current_time)
###Output
_____no_output_____
###Markdown
The cell below runs the local version of the code. The epochs and steps_per_epoch flags can be changed to run for longer or shorter, as defined in our `mnist_trainer/trainer/task.py` file.
###Code
%%bash
mkdir $JOB_DIR
python3 mnist_trainer/trainer/task.py \
--job-dir=$JOB_DIR \
--epochs=5 \
--steps_per_epoch=50 \
--model_type=$MODEL_TYPE
###Output
_____no_output_____
###Markdown
Let's check out how the model did in tensorboard and confirm that it's good to go before kicking it off to train on the cloud. If running on a Deep Learning VM, open the folder corresponding to the `--job-dir` above. Then, go to File > New Launcher. Click on Tensorboard under "Other".If running locally, the following command can be run in a terminal:`tensorboard --logdir=` Training on the cloudSince we're using an unreleased version of TensorFlow on AI Platform, we can instead use a [Deep Learning Container](https://cloud.google.com/ai-platform/deep-learning-containers/docs/overview) in order to take advantage of libraries and applications not normally packaged with AI Platform. Below is a simple [Dockerfile](https://docs.docker.com/engine/reference/builder/) which copies our code to be used in a TF2 environment.
###Code
%%writefile mnist_trainer/Dockerfile
FROM gcr.io/deeplearning-platform-release/tf2-cpu
COPY mnist_trainer/trainer /mnist_trainer/trainer
ENTRYPOINT ["python3", "mnist_trainer/trainer/task.py"]
###Output
_____no_output_____
###Markdown
The below command builds the image and ships it off to Google Cloud so it can be used for AI Platform. When built, it will show up [here](http://console.cloud.google.com/gcr) with the name `mnist_trainer`. ([Click here](https://console.cloud.google.com/cloud-build) to enable Cloud Build)
###Code
!docker build -f mnist_trainer/Dockerfile -t $IMAGE_URI ./
!docker push $IMAGE_URI
###Output
_____no_output_____
###Markdown
Finally, we can kick off the [AI Platform training job](https://cloud.google.com/sdk/gcloud/reference/ai-platform/jobs/submit/training). We can pass in our docker image using the `master-image-uri` flag.
###Code
current_time = datetime.now().strftime("%y%m%d_%H%M%S")
model_type = 'cnn'
os.environ["MODEL_TYPE"] = model_type
os.environ["JOB_DIR"] = "gs://{}/mnist_{}_{}/".format(
BUCKET, model_type, current_time)
os.environ["JOB_NAME"] = "mnist_{}_{}".format(
model_type, current_time)
%%bash
echo $JOB_DIR $REGION $JOB_NAME
gcloud ai-platform jobs submit training $JOB_NAME \
--staging-bucket=gs://$BUCKET \
--region=$REGION \
--master-image-uri=$IMAGE_URI \
--scale-tier=BASIC_GPU \
--job-dir=$JOB_DIR \
-- \
--model_type=$MODEL_TYPE
###Output
_____no_output_____
###Markdown
Can't wait to see the results? Run the code below and copy the output into the [Google Cloud Shell](https://console.cloud.google.com/home/dashboard?cloudshell=true) to follow along with TensorBoard. Look at the web preview on port 6006.
###Code
!echo "tensorboard --logdir $JOB_DIR"
###Output
_____no_output_____
###Markdown
Deploying and predicting with modelOnce you have a model you're proud of, let's deploy it! All we need to do is give AI Platform the location of the model. Below uses the keras export path of the previous job, but `${JOB_DIR}keras_export/` can always be changed to a different path.Even though we're using a 1.14 runtime, it's compatible with TF2 exported models. Phew!Uncomment the delete commands below if you are getting an "already exists error" and want to deploy a new model.
###Code
%%bash
MODEL_NAME="mnist"
MODEL_VERSION=${MODEL_TYPE}
MODEL_LOCATION=${JOB_DIR}keras_export/
echo "Deleting and deploying $MODEL_NAME $MODEL_VERSION from $MODEL_LOCATION ... this will take a few minutes"
#yes | gcloud ai-platform versions delete ${MODEL_VERSION} --model ${MODEL_NAME}
#yes | gcloud ai-platform models delete ${MODEL_NAME}
gcloud ai-platform models create ${MODEL_NAME} --regions $REGION
gcloud ai-platform versions create ${MODEL_VERSION} \
--model ${MODEL_NAME} \
--origin ${MODEL_LOCATION} \
--framework tensorflow \
--runtime-version=1.14
###Output
_____no_output_____
###Markdown
To predict with the model, let's take one of the example images.**TODO 4**: Write a `.json` file with image data to send to an AI Platform deployed model
###Code
import json, codecs
import tensorflow as tf
import matplotlib.pyplot as plt
from mnist_trainer.trainer import util
HEIGHT = 28
WIDTH = 28
IMGNO = 12
mnist = tf.keras.datasets.mnist.load_data()
(x_train, y_train), (x_test, y_test) = mnist
test_image = x_test[IMGNO]
jsondata = test_image.reshape(HEIGHT, WIDTH, 1).tolist()
json.dump(jsondata, codecs.open("test.json", "w", encoding = "utf-8"))
plt.imshow(test_image.reshape(HEIGHT, WIDTH));
###Output
_____no_output_____
###Markdown
Finally, we can send it to the prediction service. The output will have a value close to 1 at the index of the digit it predicts. Congrats! You've completed the lab!
###Code
%%bash
gcloud ai-platform predict \
--model=mnist \
--version=${MODEL_TYPE} \
--json-instances=./test.json
###Output
_____no_output_____
###Markdown
MNIST Image Classification with TensorFlow on Cloud AI PlatformThis notebook demonstrates how to implement different image models on MNIST using the [tf.keras API](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras). Learning Objectives1. Understand how to build a Dense Neural Network (DNN) for image classification2. Understand how to use dropout (DNN) for image classification3. Understand how to use Convolutional Neural Networks (CNN)4. Know how to deploy and use an image classification model using Google Cloud's [AI Platform](https://cloud.google.com/ai-platform/)Each learning objective will correspond to a __TODO__ in the notebook, where you will complete the notebook cell's code before running the cell. Refer to the [solution notebook](../solutions/2_mnist_models.ipynb) for reference.First things first. Configure the parameters below to match your own Google Cloud project details.
###Code
!sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst
# Here we'll show the currently installed version of TensorFlow
import tensorflow as tf
print(tf.__version__)
from datetime import datetime
import os
PROJECT = "your-project-id-here" # REPLACE WITH YOUR PROJECT ID
BUCKET = "your-bucket-id-here" # REPLACE WITH YOUR BUCKET NAME
REGION = "us-central1" # REPLACE WITH YOUR BUCKET REGION e.g. us-central1
MODEL_TYPE = "cnn" # "linear", "cnn", "dnn_dropout", or "dnn"
# Do not change these
os.environ["PROJECT"] = PROJECT
os.environ["BUCKET"] = BUCKET
os.environ["REGION"] = REGION
os.environ["MODEL_TYPE"] = MODEL_TYPE
os.environ["TFVERSION"] = "2.6" # Tensorflow version
os.environ["IMAGE_URI"] = os.path.join("gcr.io", PROJECT, "mnist_models")
###Output
_____no_output_____
###Markdown
Building a dynamic modelIn the previous notebook, 1_mnist_linear.ipynb, we ran our code directly from the notebook. In order to run it on the AI Platform, it needs to be packaged as a python module.The boilerplate structure for this module has already been set up in the folder `mnist_models`. The module lives in the sub-folder, `trainer`, and is designated as a python package with the empty `__init__.py` (`mnist_models/trainer/__init__.py`) file. It still needs the model and a trainer to run it, so let's make them.Let's start with the trainer file first. This file parses command line arguments to feed into the model.
###Code
%%writefile mnist_models/trainer/task.py
import argparse
import json
import os
import sys
from . import model
def _parse_arguments(argv):
"""Parses command-line arguments."""
parser = argparse.ArgumentParser()
parser.add_argument(
'--model_type',
help='Which model type to use',
type=str, default='linear')
parser.add_argument(
'--epochs',
help='The number of epochs to train',
type=int, default=10)
parser.add_argument(
'--steps_per_epoch',
help='The number of steps per epoch to train',
type=int, default=100)
parser.add_argument(
'--job-dir',
help='Directory where to save the given model',
type=str, default='mnist_models/')
return parser.parse_known_args(argv)
def main():
"""Parses command line arguments and kicks off model training."""
args = _parse_arguments(sys.argv[1:])[0]
# Configure path for hyperparameter tuning.
trial_id = json.loads(
os.environ.get('TF_CONFIG', '{}')).get('task', {}).get('trial', '')
    output_path = args.job_dir if not trial_id else args.job_dir + '/' + trial_id
model_layers = model.get_layers(args.model_type)
image_model = model.build_model(model_layers, args.job_dir)
model_history = model.train_and_evaluate(
image_model, args.epochs, args.steps_per_epoch, args.job_dir)
if __name__ == '__main__':
main()
###Output
_____no_output_____
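Note that `parse_known_args` (unlike `parse_args`) tolerates flags the trainer doesn't define, which matters because AI Platform can append its own arguments. A small sketch with a hypothetical extra flag:

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--model_type', type=str, default='linear')
parser.add_argument('--epochs', type=int, default=10)

# Flags the trainer doesn't define (e.g. ones the platform adds) end up
# in `unknown` instead of raising an error.
args, unknown = parser.parse_known_args(
    ['--model_type=cnn', '--epochs=5', '--some_platform_flag=1'])
print(args.model_type, args.epochs)  # -> cnn 5
print(unknown)                       # -> ['--some_platform_flag=1']
```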
###Markdown
Next, let's group non-model functions into a util file to keep the model file simple. We'll copy over the `scale` and `load_dataset` functions from the previous lab.
###Code
%%writefile mnist_models/trainer/util.py
import tensorflow as tf
def scale(image, label):
"""Scales images from a 0-255 int range to a 0-1 float range"""
image = tf.cast(image, tf.float32)
image /= 255
image = tf.expand_dims(image, -1)
return image, label
def load_dataset(
data, training=True, buffer_size=5000, batch_size=100, nclasses=10):
"""Loads MNIST dataset into a tf.data.Dataset"""
(x_train, y_train), (x_test, y_test) = data
x = x_train if training else x_test
y = y_train if training else y_test
# One-hot encode the classes
y = tf.keras.utils.to_categorical(y, nclasses)
dataset = tf.data.Dataset.from_tensor_slices((x, y))
dataset = dataset.map(scale).batch(batch_size)
if training:
dataset = dataset.shuffle(buffer_size).repeat()
return dataset
###Output
_____no_output_____
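The arithmetic inside `scale` (casting 0-255 ints to 0-1 floats and adding a trailing channel axis) can be sketched without TensorFlow, using a tiny hypothetical image:

```python
# A hypothetical 2x2 grayscale patch with 0-255 integer pixels.
image = [[0, 255],
         [128, 64]]

# Scale to [0, 1] and add a trailing channel axis (shape HxW -> HxWx1),
# mirroring tf.cast(image, tf.float32) / 255 and tf.expand_dims(image, -1).
scaled = [[[pixel / 255] for pixel in row] for row in image]
print(scaled[0][1])  # -> [1.0]
```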
###Markdown
Finally, let's code the models! The [tf.keras API](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras) accepts an array of [layers](https://www.tensorflow.org/api_docs/python/tf/keras/layers) into a [model object](https://www.tensorflow.org/api_docs/python/tf/keras/Model), so we can create a dictionary of layers based on the different model types we want to use. The below file has three functions: `get_layers`, `build_model`, and `train_and_evaluate`. We will build the structure of our model in `get_layers`. Last but not least, we'll copy over the training code from the previous lab into `train_and_evaluate`.**TODO 1**: Define the Keras layers for a DNN model **TODO 2**: Define the Keras layers for a dropout model **TODO 3**: Define the Keras layers for a CNN model Hint: These models progressively build on each other. Look at the imported `tensorflow.keras.layers` modules and the default values for the variables defined in `get_layers` for guidance.
###Code
%%writefile mnist_models/trainer/model.py
import os
import shutil
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
from tensorflow.keras import Sequential
from tensorflow.keras.callbacks import TensorBoard
from tensorflow.keras.layers import (
Conv2D, Dense, Dropout, Flatten, MaxPooling2D, Softmax)
from . import util
# Image Variables
WIDTH = 28
HEIGHT = 28
def get_layers(
model_type,
nclasses=10,
hidden_layer_1_neurons=400,
hidden_layer_2_neurons=100,
dropout_rate=0.25,
num_filters_1=64,
kernel_size_1=3,
pooling_size_1=2,
num_filters_2=32,
kernel_size_2=3,
pooling_size_2=2):
"""Constructs layers for a keras model based on a dict of model types."""
model_layers = {
'linear': [
Flatten(),
Dense(nclasses),
Softmax()
],
'dnn': [
# TODO
],
'dnn_dropout': [
# TODO
],
'cnn': [
# TODO
]
}
return model_layers[model_type]
def build_model(layers, output_dir):
"""Compiles keras model for image classification."""
model = Sequential(layers)
model.compile(optimizer='adam',
loss='categorical_crossentropy',
metrics=['accuracy'])
return model
def train_and_evaluate(model, num_epochs, steps_per_epoch, output_dir):
"""Compiles keras model and loads data into it for training."""
mnist = tf.keras.datasets.mnist.load_data()
train_data = util.load_dataset(mnist)
validation_data = util.load_dataset(mnist, training=False)
callbacks = []
if output_dir:
tensorboard_callback = TensorBoard(log_dir=output_dir)
callbacks = [tensorboard_callback]
history = model.fit(
train_data,
validation_data=validation_data,
epochs=num_epochs,
steps_per_epoch=steps_per_epoch,
verbose=2,
callbacks=callbacks)
if output_dir:
export_path = os.path.join(output_dir, 'keras_export')
model.save(export_path, save_format='tf')
return history
###Output
_____no_output_____
###Markdown
Local TrainingWith everything set up, let's run locally to test the code. Some of the previous tests have been copied over into a testing script `mnist_models/trainer/test.py` to make sure the model still passes our previous checks. On `line 13`, you can specify which model types you would like to check. `line 14` and `line 15` have the number of epochs and steps per epoch respectively.Moment of truth! Run the code below to check your models against the unit tests. If you see "OK" at the end when it's finished running, congrats! You've passed the tests!
###Code
!python3 -m mnist_models.trainer.test
###Output
_____no_output_____
###Markdown
Now that we know that our models are working as expected, let's run it on the [Google Cloud AI Platform](https://cloud.google.com/ml-engine/docs/). We can run it as a python module locally first using the command line.The below cell transfers some of our variables to the command line as well as creates a job directory including a timestamp.
###Code
current_time = datetime.now().strftime("%y%m%d_%H%M%S")
model_type = 'cnn' # "linear", "cnn", "dnn_dropout", or "dnn"
os.environ["MODEL_TYPE"] = model_type
os.environ["JOB_DIR"] = "mnist_models/models/{}_{}/".format(
model_type, current_time)
###Output
_____no_output_____
###Markdown
The cell below runs the local version of the code. The epochs and steps_per_epoch flags can be changed to run for longer or shorter, as defined in our `mnist_models/trainer/task.py` file.
###Code
%%bash
python3 -m mnist_models.trainer.task \
--job-dir=$JOB_DIR \
--epochs=5 \
--steps_per_epoch=50 \
--model_type=$MODEL_TYPE
###Output
_____no_output_____
###Markdown
Training on the cloudSince we're using an unreleased version of TensorFlow on AI Platform, we can instead use a [Deep Learning Container](https://cloud.google.com/ai-platform/deep-learning-containers/docs/overview) in order to take advantage of libraries and applications not normally packaged with AI Platform. Below is a simple [Dockerfile](https://docs.docker.com/engine/reference/builder/) which copies our code to be used in a TF2 environment.
###Code
%%writefile mnist_models/Dockerfile
FROM gcr.io/deeplearning-platform-release/tf2-cpu
COPY mnist_models/trainer /mnist_models/trainer
ENTRYPOINT ["python3", "-m", "mnist_models.trainer.task"]
###Output
_____no_output_____
###Markdown
The below command builds the image and ships it off to Google Cloud so it can be used for AI Platform. When built, it will show up [here](http://console.cloud.google.com/gcr) with the name `mnist_models`. ([Click here](https://console.cloud.google.com/cloud-build) to enable Cloud Build)
###Code
!docker build -f mnist_models/Dockerfile -t $IMAGE_URI ./
!docker push $IMAGE_URI
###Output
_____no_output_____
###Markdown
Finally, we can kick off the [AI Platform training job](https://cloud.google.com/sdk/gcloud/reference/ai-platform/jobs/submit/training). We can pass in our docker image using the `master-image-uri` flag.
###Code
current_time = datetime.now().strftime("%y%m%d_%H%M%S")
model_type = 'cnn' # "linear", "cnn", "dnn_dropout", or "dnn"
os.environ["MODEL_TYPE"] = model_type
os.environ["JOB_DIR"] = "gs://{}/mnist_{}_{}/".format(
BUCKET, model_type, current_time)
os.environ["JOB_NAME"] = "mnist_{}_{}".format(
model_type, current_time)
%%bash
echo $JOB_DIR $REGION $JOB_NAME
gcloud ai-platform jobs submit training $JOB_NAME \
--staging-bucket=gs://$BUCKET \
--region=$REGION \
--master-image-uri=$IMAGE_URI \
--scale-tier=BASIC_GPU \
--job-dir=$JOB_DIR \
-- \
--model_type=$MODEL_TYPE
###Output
_____no_output_____
###Markdown
Deploying and predicting with modelOnce you have a model you're proud of, let's deploy it! All we need to do is give AI Platform the location of the model. Below uses the keras export path of the previous job, but `${JOB_DIR}keras_export/` can always be changed to a different path.Uncomment the delete commands below if you are getting an "already exists error" and want to deploy a new model.
###Code
%%bash
MODEL_NAME="mnist"
MODEL_VERSION=${MODEL_TYPE}
MODEL_LOCATION=${JOB_DIR}keras_export/
echo "Deleting and deploying $MODEL_NAME $MODEL_VERSION from $MODEL_LOCATION ... this will take a few minutes"
#yes | gcloud ai-platform versions delete ${MODEL_VERSION} --model ${MODEL_NAME}
#yes | gcloud ai-platform models delete ${MODEL_NAME}
gcloud config set ai_platform/region global
gcloud ai-platform models create ${MODEL_NAME} --regions $REGION
gcloud ai-platform versions create ${MODEL_VERSION} \
--model ${MODEL_NAME} \
--origin ${MODEL_LOCATION} \
--framework tensorflow \
--runtime-version=2.6
###Output
_____no_output_____
###Markdown
To predict with the model, let's take one of the example images.**TODO 4**: Write a `.json` file with image data to send to an AI Platform deployed model
###Code
import json, codecs
import tensorflow as tf
import matplotlib.pyplot as plt
from mnist_models.trainer import util
HEIGHT = 28
WIDTH = 28
IMGNO = 12
mnist = tf.keras.datasets.mnist.load_data()
(x_train, y_train), (x_test, y_test) = mnist
test_image = x_test[IMGNO]
jsondata = test_image.reshape(HEIGHT, WIDTH, 1).tolist()
json.dump(jsondata, codecs.open("test.json", "w", encoding = "utf-8"))
plt.imshow(test_image.reshape(HEIGHT, WIDTH));
###Output
_____no_output_____
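As a quick sanity check, the nested-list layout written to `test.json` can be round-tripped and its shape verified. This sketch uses a hypothetical all-zero image so it stands alone:

```python
import json

# Hypothetical all-zero image in the same 28x28x1 nested-list layout
# that json.dump writes for the real test image.
jsondata = [[[0.0] for _ in range(28)] for _ in range(28)]
with open("test.json", "w") as f:
    json.dump(jsondata, f)

# Reload and confirm the HEIGHT x WIDTH x 1 shape the model expects.
with open("test.json") as f:
    instance = json.load(f)
print(len(instance), len(instance[0]), len(instance[0][0]))  # -> 28 28 1
```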
###Markdown
Finally, we can send it to the prediction service. The output will have a value close to 1 at the index of the digit it predicts. Congrats! You've completed the lab!
###Code
%%bash
gcloud ai-platform predict \
--model=mnist \
--version=${MODEL_TYPE} \
--json-instances=./test.json
###Output
_____no_output_____
###Markdown
MNIST Image Classification with TensorFlow on Cloud ML EngineThis notebook demonstrates how to implement different image models on MNIST using the [tf.keras API](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras). Learning Objectives1. Understand how to build a Dense Neural Network (DNN) for image classification2. Understand how to use dropout (DNN) for image classification3. Understand how to use Convolutional Neural Networks (CNN)4. Know how to deploy and use an image classification model using Google Cloud's [AI Platform](https://cloud.google.com/ai-platform/)First things first. Configure the parameters below to match your own Google Cloud project details.
###Code
from datetime import datetime
import os
PROJECT = "your-project-id-here" # REPLACE WITH YOUR PROJECT ID
BUCKET = "your-bucket-id-here" # REPLACE WITH YOUR BUCKET NAME
REGION = "us-central1" # REPLACE WITH YOUR BUCKET REGION e.g. us-central1
# Do not change these
os.environ["PROJECT"] = PROJECT
os.environ["BUCKET"] = BUCKET
os.environ["REGION"] = REGION
os.environ["IMAGE_URI"] = os.path.join("gcr.io", PROJECT, "mnist_models")
###Output
_____no_output_____
###Markdown
Building a dynamic modelIn the previous notebook, mnist_linear.ipynb, we ran our code directly from the notebook. In order to run it on the AI Platform, it needs to be packaged as a python module.The boilerplate structure for this module has already been set up in the folder `mnist_models`. The module lives in the sub-folder, `trainer`, and is designated as a python package with the empty `__init__.py` (`mnist_models/trainer/__init__.py`) file. It still needs the model and a trainer to run it, so let's make them.Let's start with the trainer file first. This file parses command line arguments to feed into the model.
###Code
%%writefile mnist_models/trainer/task.py
import argparse
import json
import os
import sys
from . import model
def _parse_arguments(argv):
"""Parses command-line arguments."""
parser = argparse.ArgumentParser()
parser.add_argument(
'--model_type',
help='Which model type to use',
type=str, default='linear')
parser.add_argument(
'--epochs',
help='The number of epochs to train',
type=int, default=10)
parser.add_argument(
'--steps_per_epoch',
help='The number of steps per epoch to train',
type=int, default=100)
parser.add_argument(
'--job-dir',
help='Directory where to save the given model',
type=str, default='mnist_models/')
return parser.parse_known_args(argv)
def main():
"""Parses command line arguments and kicks off model training."""
args = _parse_arguments(sys.argv[1:])[0]
# Configure path for hyperparameter tuning.
trial_id = json.loads(
os.environ.get('TF_CONFIG', '{}')).get('task', {}).get('trial', '')
    output_path = args.job_dir if not trial_id else args.job_dir + '/' + trial_id
model_layers = model.get_layers(args.model_type)
image_model = model.build_model(model_layers, args.job_dir)
model_history = model.train_and_evaluate(
image_model, args.epochs, args.steps_per_epoch, args.job_dir)
if __name__ == '__main__':
main()
###Output
_____no_output_____
###Markdown
Next, let's group non-model functions into a util file to keep the model file simple. We'll copy over the `scale` and `load_dataset` functions from the previous lab.
###Code
%%writefile mnist_models/trainer/util.py
import tensorflow as tf
def scale(image, label):
"""Scales images from a 0-255 int range to a 0-1 float range"""
image = tf.cast(image, tf.float32)
image /= 255
image = tf.expand_dims(image, -1)
return image, label
def load_dataset(
data, training=True, buffer_size=5000, batch_size=100, nclasses=10):
"""Loads MNIST dataset into a tf.data.Dataset"""
(x_train, y_train), (x_test, y_test) = data
x = x_train if training else x_test
y = y_train if training else y_test
# One-hot encode the classes
y = tf.keras.utils.to_categorical(y, nclasses)
dataset = tf.data.Dataset.from_tensor_slices((x, y))
dataset = dataset.map(scale).batch(batch_size)
if training:
dataset = dataset.shuffle(buffer_size).repeat()
return dataset
###Output
_____no_output_____
###Markdown
Finally, let's code the models! The [tf.keras API](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras) accepts an array of [layers](https://www.tensorflow.org/api_docs/python/tf/keras/layers) into a [model object](https://www.tensorflow.org/api_docs/python/tf/keras/Model), so we can create a dictionary of layers based on the different model types we want to use. The file below has three functions: `get_layers`, `build_model`, and `train_and_evaluate`. We will build the structure of our model in `get_layers`. Last but not least, we'll copy over the training code from the previous lab into `train_and_evaluate`.

**TODO 1**: Define the Keras layers for a DNN model
**TODO 2**: Define the Keras layers for a dropout model
**TODO 3**: Define the Keras layers for a CNN model

Hint: These models progressively build on each other. Look at the imported `tensorflow.keras.layers` modules and the default values for the variables defined in `get_layers` for guidance.
###Code
%%writefile mnist_models/trainer/model.py
import os
import shutil
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
from tensorflow.keras import Sequential
from tensorflow.keras.callbacks import TensorBoard
from tensorflow.keras.layers import (
Conv2D, Dense, Dropout, Flatten, MaxPooling2D, Softmax)
from . import util
# Image Variables
WIDTH = 28
HEIGHT = 28
def get_layers(
model_type,
nclasses=10,
hidden_layer_1_neurons=400,
hidden_layer_2_neurons=100,
dropout_rate=0.25,
num_filters_1=64,
kernel_size_1=3,
pooling_size_1=2,
num_filters_2=32,
kernel_size_2=3,
pooling_size_2=2):
"""Constructs layers for a keras model based on a dict of model types."""
model_layers = {
'linear': [
Flatten(),
Dense(nclasses),
Softmax()
],
'dnn': [
# TODO
],
'dnn_dropout': [
# TODO
],
'cnn': [
# TODO
]
}
return model_layers[model_type]
def build_model(layers, output_dir):
"""Compiles keras model for image classification."""
model = Sequential(layers)
model.compile(optimizer='adam',
loss='categorical_crossentropy',
metrics=['accuracy'])
return model
def train_and_evaluate(model, num_epochs, steps_per_epoch, output_dir):
"""Compiles keras model and loads data into it for training."""
mnist = tf.keras.datasets.mnist.load_data()
train_data = util.load_dataset(mnist)
validation_data = util.load_dataset(mnist, training=False)
callbacks = []
if output_dir:
tensorboard_callback = TensorBoard(log_dir=output_dir)
callbacks = [tensorboard_callback]
history = model.fit(
train_data,
validation_data=validation_data,
epochs=num_epochs,
steps_per_epoch=steps_per_epoch,
verbose=2,
callbacks=callbacks)
if output_dir:
export_path = os.path.join(output_dir, 'keras_export')
model.save(export_path, save_format='tf')
return history
###Output
_____no_output_____
###Markdown
Local Training
With everything set up, let's run locally to test the code. Some of the previous tests have been copied over into a testing script, `mnist_models/trainer/test.py`, to make sure the model still passes our previous checks. On `line 13`, you can specify which model types you would like to check. `line 14` and `line 15` have the number of epochs and steps per epoch, respectively.

Moment of truth! Run the code below to check your models against the unit tests. If you see "OK" at the end when it's finished running, congrats! You've passed the tests!
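The contents of `mnist_models/trainer/test.py` aren't shown in this notebook, but the shape of such a script is ordinary `unittest` code. As a purely hypothetical, self-contained sketch (a stub stands in here for the real `mnist_models.trainer.model.get_layers`):

```python
import unittest

def get_layers(model_type):
    # Stub standing in for mnist_models.trainer.model.get_layers:
    # dictionary dispatch from a model-type string to its list of layers.
    model_layers = {'linear': ['Flatten', 'Dense', 'Softmax']}
    return model_layers[model_type]

class TestGetLayers(unittest.TestCase):
    def test_known_type_returns_layers(self):
        self.assertGreater(len(get_layers('linear')), 0)

    def test_unknown_type_raises(self):
        # An unrecognized model type should fail loudly, not silently.
        with self.assertRaises(KeyError):
            get_layers('not_a_model')

result = unittest.TextTestRunner().run(
    unittest.TestLoader().loadTestsFromTestCase(TestGetLayers))
```

The actual checks run in the next cell via `python3 -m mnist_models.trainer.test`.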
###Code
!python3 -m mnist_models.trainer.test
###Output
_____no_output_____
###Markdown
Now that we know that our models are working as expected, let's run them on [Google Cloud AI Platform](https://cloud.google.com/ml-engine/docs/). First, we can run the code as a Python module locally using the command line.

The cell below transfers some of our variables to the command line and creates a job directory that includes a timestamp. This is where our model and TensorBoard data will be stored.
###Code
current_time = datetime.now().strftime("%y%m%d_%H%M%S")
model_type = 'cnn'
os.environ["MODEL_TYPE"] = model_type
os.environ["JOB_DIR"] = "mnist_models/models/{}_{}/".format(
model_type, current_time)
###Output
_____no_output_____
###Markdown
The cell below runs the local version of the code. The `epochs` and `steps_per_epoch` flags can be changed to run for longer or shorter, as defined in our `mnist_models/trainer/task.py` file.
###Code
%%bash
python3 -m mnist_models.trainer.task \
--job-dir=$JOB_DIR \
--epochs=5 \
--steps_per_epoch=50 \
--model_type=$MODEL_TYPE
###Output
_____no_output_____
###Markdown
Let's check out how the model did in TensorBoard and confirm that it's good to go before kicking it off to train on the cloud. If running on a Deep Learning VM, open the folder corresponding to the `--job-dir` above. Then, go to File > New Launcher and click on Tensorboard under "Other".

If running locally, the following command can be run in a terminal: `tensorboard --logdir=`

Training on the cloud
Since we're using an unreleased version of TensorFlow on AI Platform, we can instead use a [Deep Learning Container](https://cloud.google.com/ai-platform/deep-learning-containers/docs/overview) in order to take advantage of libraries and applications not normally packaged with AI Platform. Below is a simple [Dockerfile](https://docs.docker.com/engine/reference/builder/) which copies our code to be used in a TF2 environment.
###Code
%%writefile mnist_models/Dockerfile
FROM gcr.io/deeplearning-platform-release/tf2-cpu
COPY mnist_models/trainer /mnist_models/trainer
ENTRYPOINT ["python3", "-m", "mnist_models.trainer.task"]
###Output
_____no_output_____
###Markdown
The below command builds the image and ships it off to Google Cloud so it can be used for AI Platform. When built, it will show up [here](http://console.cloud.google.com/gcr) with the name `mnist_models`. ([Click here](https://console.cloud.google.com/cloud-build) to enable Cloud Build)
###Code
!docker build -f mnist_models/Dockerfile -t $IMAGE_URI ./
!docker push $IMAGE_URI
###Output
_____no_output_____
###Markdown
Finally, we can kick off the [AI Platform training job](https://cloud.google.com/sdk/gcloud/reference/ai-platform/jobs/submit/training). We can pass in our Docker image using the `master-image-uri` flag.
###Code
current_time = datetime.now().strftime("%y%m%d_%H%M%S")
model_type = 'cnn'
os.environ["MODEL_TYPE"] = model_type
os.environ["JOB_DIR"] = "gs://{}/mnist_{}_{}/".format(
BUCKET, model_type, current_time)
os.environ["JOB_NAME"] = "mnist_{}_{}".format(
model_type, current_time)
%%bash
echo $JOB_DIR $REGION $JOB_NAME
gcloud ai-platform jobs submit training $JOB_NAME \
--staging-bucket=gs://$BUCKET \
--region=$REGION \
--master-image-uri=$IMAGE_URI \
--scale-tier=BASIC_GPU \
--job-dir=$JOB_DIR \
-- \
--model_type=$MODEL_TYPE
###Output
_____no_output_____
###Markdown
Can't wait to see the results? Run the code below and copy the output into the [Google Cloud Shell](https://console.cloud.google.com/home/dashboard?cloudshell=true) to follow along with TensorBoard. Look at the web preview on port 6006.
###Code
!echo "tensorboard --logdir $JOB_DIR"
###Output
_____no_output_____
###Markdown
Deploying and predicting with model
Once you have a model you're proud of, let's deploy it! All we need to do is give AI Platform the location of the model. The command below uses the keras export path of the previous job, but `${JOB_DIR}keras_export/` can always be changed to a different path.

Even though we're using a 1.14 runtime, it's compatible with TF2 exported models. Phew!

Uncomment the delete commands below if you are getting an "already exists" error and want to deploy a new model.
###Code
%%bash
MODEL_NAME="mnist"
MODEL_VERSION=${MODEL_TYPE}
MODEL_LOCATION=${JOB_DIR}keras_export/
echo "Deleting and deploying $MODEL_NAME $MODEL_VERSION from $MODEL_LOCATION ... this will take a few minutes"
#yes | gcloud ai-platform versions delete ${MODEL_VERSION} --model ${MODEL_NAME}
#yes | gcloud ai-platform models delete ${MODEL_NAME}
gcloud ai-platform models create ${MODEL_NAME} --regions $REGION
gcloud ai-platform versions create ${MODEL_VERSION} \
--model ${MODEL_NAME} \
--origin ${MODEL_LOCATION} \
--framework tensorflow \
--runtime-version=1.14
###Output
_____no_output_____
###Markdown
To predict with the model, let's take one of the example images.**TODO 4**: Write a `.json` file with image data to send to an AI Platform deployed model
###Code
import json, codecs
import tensorflow as tf
import matplotlib.pyplot as plt
from mnist_models.trainer import util
HEIGHT = 28
WIDTH = 28
IMGNO = 12
mnist = tf.keras.datasets.mnist.load_data()
(x_train, y_train), (x_test, y_test) = mnist
test_image = x_test[IMGNO]
jsondata = test_image.reshape(HEIGHT, WIDTH, 1).tolist()
json.dump(jsondata, codecs.open("test.json", "w", encoding = "utf-8"))
plt.imshow(test_image.reshape(HEIGHT, WIDTH));
###Output
_____no_output_____
###Markdown
Finally, we can send it to the prediction service. The output will have a 1 in the index of the corresponding digit it is predicting. Congrats! You've completed the lab!
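Decoding the response is a one-liner: the predicted digit is the index of the largest score. A small sketch with a made-up prediction row (not real service output):

```python
import numpy as np

# A hypothetical prediction row for one image: ten class scores, with the
# largest value at the index of the predicted digit.
prediction = [0.0, 0.01, 0.0, 0.0, 0.0, 0.0, 0.0, 0.98, 0.0, 0.01]
digit = int(np.argmax(prediction))
print(f"Predicted digit: {digit}")  # Predicted digit: 7
```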
###Code
%%bash
gcloud ai-platform predict \
--model=mnist \
--version=${MODEL_TYPE} \
--json-instances=./test.json
###Output
_____no_output_____
###Markdown
MNIST Image Classification with TensorFlow on Cloud AI Platform

This notebook demonstrates how to implement different image models on MNIST using the [tf.keras API](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras).

Learning objectives
1. Understand how to build a Dense Neural Network (DNN) for image classification
2. Understand how to use dropout (DNN) for image classification
3. Understand how to use Convolutional Neural Networks (CNN)
4. Know how to deploy and use an image classification model using Google Cloud's [Vertex AI](https://cloud.google.com/vertex-ai/)

Each learning objective will correspond to a __TODO__ in the notebook, where you will complete the notebook cell's code before running the cell. Refer to the [solution notebook](../solutions/2_mnist_models.ipynb) for reference.

First things first. Configure the parameters below to match your own Google Cloud project details.
###Code
!sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst
# Here we'll show the currently installed version of TensorFlow
import tensorflow as tf
print(tf.__version__)
from datetime import datetime
import os
PROJECT = "your-project-id-here" # REPLACE WITH YOUR PROJECT ID
BUCKET = "your-bucket-id-here" # REPLACE WITH YOUR BUCKET NAME
REGION = "us-central1" # REPLACE WITH YOUR BUCKET REGION e.g. us-central1
MODEL_TYPE = "cnn" # "linear", "cnn", "dnn_dropout", or "dnn"
# Do not change these
os.environ["PROJECT"] = PROJECT
os.environ["BUCKET"] = BUCKET
os.environ["REGION"] = REGION
os.environ["MODEL_TYPE"] = MODEL_TYPE
os.environ["TFVERSION"] = "2.6" # Tensorflow version
os.environ["IMAGE_URI"] = os.path.join("gcr.io", PROJECT, "mnist_models")
###Output
_____no_output_____
###Markdown
Building a dynamic model
In the previous notebook, 1_mnist_linear.ipynb, we ran our code directly from the notebook. In order to run it on AI Platform, it needs to be packaged as a Python module.

The boilerplate structure for this module has already been set up in the folder `mnist_models`. The module lives in the sub-folder `trainer` and is designated as a Python package by the empty `__init__.py` (`mnist_models/trainer/__init__.py`) file. It still needs the model and a trainer to run it, so let's make them.

Let's start with the trainer file. This file parses command-line arguments to feed into the model.
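For reference, the package layout this lab assumes (all file names appear elsewhere in this notebook) looks like:

```
mnist_models/
├── Dockerfile
└── trainer/
    ├── __init__.py
    ├── task.py
    ├── util.py
    ├── model.py
    └── test.py
```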
###Code
%%writefile mnist_models/trainer/task.py
import argparse
import json
import os
import sys
from . import model
def _parse_arguments(argv):
"""Parses command-line arguments."""
parser = argparse.ArgumentParser()
parser.add_argument(
'--model_type',
help='Which model type to use',
type=str, default='linear')
parser.add_argument(
'--epochs',
help='The number of epochs to train',
type=int, default=10)
parser.add_argument(
'--steps_per_epoch',
help='The number of steps per epoch to train',
type=int, default=100)
parser.add_argument(
'--job-dir',
help='Directory where to save the given model',
type=str, default='mnist_models/')
return parser.parse_known_args(argv)
def main():
"""Parses command line arguments and kicks off model training."""
args = _parse_arguments(sys.argv[1:])[0]
# Configure path for hyperparameter tuning.
trial_id = json.loads(
os.environ.get('TF_CONFIG', '{}')).get('task', {}).get('trial', '')
output_path = args.job_dir if not trial_id else args.job_dir + '/' + trial_id
model_layers = model.get_layers(args.model_type)
image_model = model.build_model(model_layers, args.job_dir)
model_history = model.train_and_evaluate(
image_model, args.epochs, args.steps_per_epoch, args.job_dir)
if __name__ == '__main__':
main()
###Output
_____no_output_____
###Markdown
Next, let's group non-model functions into a util file to keep the model file simple. We'll copy over the `scale` and `load_dataset` functions from the previous lab.
###Code
%%writefile mnist_models/trainer/util.py
import tensorflow as tf
def scale(image, label):
"""Scales images from a 0-255 int range to a 0-1 float range"""
image = tf.cast(image, tf.float32)
image /= 255
image = tf.expand_dims(image, -1)
return image, label
def load_dataset(
data, training=True, buffer_size=5000, batch_size=100, nclasses=10):
"""Loads MNIST dataset into a tf.data.Dataset"""
(x_train, y_train), (x_test, y_test) = data
x = x_train if training else x_test
y = y_train if training else y_test
# One-hot encode the classes
y = tf.keras.utils.to_categorical(y, nclasses)
dataset = tf.data.Dataset.from_tensor_slices((x, y))
dataset = dataset.map(scale).batch(batch_size)
if training:
dataset = dataset.shuffle(buffer_size).repeat()
return dataset
###Output
_____no_output_____
###Markdown
Finally, let's code the models! The [tf.keras API](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras) accepts an array of [layers](https://www.tensorflow.org/api_docs/python/tf/keras/layers) into a [model object](https://www.tensorflow.org/api_docs/python/tf/keras/Model), so we can create a dictionary of layers based on the different model types we want to use. The file below has three functions: `get_layers`, `build_model`, and `train_and_evaluate`. We will build the structure of our model in `get_layers`. Last but not least, we'll copy over the training code from the previous lab into `train_and_evaluate`.

**TODO 1**: Define the Keras layers for a DNN model
**TODO 2**: Define the Keras layers for a dropout model
**TODO 3**: Define the Keras layers for a CNN model

Hint: These models progressively build on each other. Look at the imported `tensorflow.keras.layers` modules and the default values for the variables defined in `get_layers` for guidance.
###Code
%%writefile mnist_models/trainer/model.py
import os
import shutil
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
from tensorflow.keras import Sequential
from tensorflow.keras.callbacks import TensorBoard
from tensorflow.keras.layers import (
Conv2D, Dense, Dropout, Flatten, MaxPooling2D, Softmax)
from . import util
# Image Variables
WIDTH = 28
HEIGHT = 28
def get_layers(
model_type,
nclasses=10,
hidden_layer_1_neurons=400,
hidden_layer_2_neurons=100,
dropout_rate=0.25,
num_filters_1=64,
kernel_size_1=3,
pooling_size_1=2,
num_filters_2=32,
kernel_size_2=3,
pooling_size_2=2):
"""Constructs layers for a keras model based on a dict of model types."""
model_layers = {
'linear': [
Flatten(),
Dense(nclasses),
Softmax()
],
'dnn': [
# TODO
],
'dnn_dropout': [
# TODO
],
'cnn': [
# TODO
]
}
return model_layers[model_type]
def build_model(layers, output_dir):
"""Compiles keras model for image classification."""
model = Sequential(layers)
model.compile(optimizer='adam',
loss='categorical_crossentropy',
metrics=['accuracy'])
return model
def train_and_evaluate(model, num_epochs, steps_per_epoch, output_dir):
"""Compiles keras model and loads data into it for training."""
mnist = tf.keras.datasets.mnist.load_data()
train_data = util.load_dataset(mnist)
validation_data = util.load_dataset(mnist, training=False)
callbacks = []
if output_dir:
tensorboard_callback = TensorBoard(log_dir=output_dir)
callbacks = [tensorboard_callback]
history = model.fit(
train_data,
validation_data=validation_data,
epochs=num_epochs,
steps_per_epoch=steps_per_epoch,
verbose=2,
callbacks=callbacks)
if output_dir:
export_path = os.path.join(output_dir, 'keras_export')
model.save(export_path, save_format='tf')
return history
###Output
_____no_output_____
###Markdown
Local Training
With everything set up, let's run locally to test the code. Some of the previous tests have been copied over into a testing script, `mnist_models/trainer/test.py`, to make sure the model still passes our previous checks. On `line 13`, you can specify which model types you would like to check. `line 14` and `line 15` have the number of epochs and steps per epoch, respectively.

Moment of truth! Run the code below to check your models against the unit tests. If you see "OK" at the end when it's finished running, congrats! You've passed the tests!
###Code
!python3 -m mnist_models.trainer.test
###Output
_____no_output_____
###Markdown
Now that we know that our models are working as expected, let's run them on [Google Cloud AI Platform](https://cloud.google.com/ml-engine/docs/). First, we can run the code as a Python module locally using the command line.

The cell below transfers some of our variables to the command line and creates a job directory that includes a timestamp.
###Code
current_time = datetime.now().strftime("%y%m%d_%H%M%S")
model_type = 'cnn' # "linear", "cnn", "dnn_dropout", or "dnn"
os.environ["MODEL_TYPE"] = model_type
os.environ["JOB_DIR"] = "mnist_models/models/{}_{}/".format(
model_type, current_time)
###Output
_____no_output_____
###Markdown
The cell below runs the local version of the code. The `epochs` and `steps_per_epoch` flags can be changed to run for longer or shorter, as defined in our `mnist_models/trainer/task.py` file.
###Code
%%bash
python3 -m mnist_models.trainer.task \
--job-dir=$JOB_DIR \
--epochs=5 \
--steps_per_epoch=50 \
--model_type=$MODEL_TYPE
###Output
_____no_output_____
###Markdown
Training on the cloud
Since we're using an unreleased version of TensorFlow on AI Platform, we can instead use a [Deep Learning Container](https://cloud.google.com/ai-platform/deep-learning-containers/docs/overview) in order to take advantage of libraries and applications not normally packaged with AI Platform. Below is a simple [Dockerfile](https://docs.docker.com/engine/reference/builder/) which copies our code to be used in a TF2 environment.
###Code
%%writefile mnist_models/Dockerfile
FROM gcr.io/deeplearning-platform-release/tf2-cpu
COPY mnist_models/trainer /mnist_models/trainer
ENTRYPOINT ["python3", "-m", "mnist_models.trainer.task"]
###Output
_____no_output_____
###Markdown
The below command builds the image and ships it off to Google Cloud so it can be used for AI Platform. When built, it will show up [here](http://console.cloud.google.com/gcr) with the name `mnist_models`. ([Click here](https://console.cloud.google.com/cloud-build) to enable Cloud Build)
###Code
!docker build -f mnist_models/Dockerfile -t $IMAGE_URI ./
!docker push $IMAGE_URI
###Output
_____no_output_____
###Markdown
Finally, we can kick off the [AI Platform training job](https://cloud.google.com/sdk/gcloud/reference/ai-platform/jobs/submit/training). We can pass in our Docker image using the `master-image-uri` flag.
###Code
current_time = datetime.now().strftime("%y%m%d_%H%M%S")
model_type = 'cnn' # "linear", "cnn", "dnn_dropout", or "dnn"
os.environ["MODEL_TYPE"] = model_type
os.environ["JOB_DIR"] = "gs://{}/mnist_{}_{}/".format(
BUCKET, model_type, current_time)
os.environ["JOB_NAME"] = "mnist_{}_{}".format(
model_type, current_time)
###Output
_____no_output_____
###Markdown
The AI Platform job could take around 10 minutes to complete. Enable the **AI Platform Training & Prediction API**, if required.
###Code
%%bash
echo $JOB_DIR $REGION $JOB_NAME
gcloud ai-platform jobs submit training $JOB_NAME \
--staging-bucket=gs://$BUCKET \
--region=$REGION \
--master-image-uri=$IMAGE_URI \
--scale-tier=BASIC_GPU \
--job-dir=$JOB_DIR \
-- \
--model_type=$MODEL_TYPE
###Output
_____no_output_____
###Markdown
Deploying and predicting with model
Once you have a model you're proud of, let's deploy it! All we need to do is give AI Platform the location of the model. The command below uses the keras export path of the previous job, but `${JOB_DIR}keras_export/` can always be changed to a different path.

Uncomment the delete commands below if you are getting an "already exists" error and want to deploy a new model.
###Code
%%bash
gcloud config set ai_platform/region global
%%bash
MODEL_NAME="mnist"
MODEL_VERSION=${MODEL_TYPE}
MODEL_LOCATION=${JOB_DIR}keras_export/
echo "Deleting and deploying $MODEL_NAME $MODEL_VERSION from $MODEL_LOCATION ... this will take a few minutes"
#yes | gcloud ai-platform versions delete ${MODEL_VERSION} --model ${MODEL_NAME}
#yes | gcloud ai-platform models delete ${MODEL_NAME}
gcloud ai-platform models create ${MODEL_NAME} --regions $REGION
gcloud ai-platform versions create ${MODEL_VERSION} \
--model ${MODEL_NAME} \
--origin ${MODEL_LOCATION} \
--framework tensorflow \
--runtime-version=2.6
###Output
_____no_output_____
###Markdown
To predict with the model, let's take one of the example images.**TODO 4**: Write a `.json` file with image data to send to an AI Platform deployed model
###Code
import json, codecs
import tensorflow as tf
import matplotlib.pyplot as plt
from mnist_models.trainer import util
HEIGHT = 28
WIDTH = 28
IMGNO = 12
mnist = tf.keras.datasets.mnist.load_data()
(x_train, y_train), (x_test, y_test) = mnist
test_image = x_test[IMGNO]
jsondata = test_image.reshape(HEIGHT, WIDTH, 1).tolist()
json.dump(jsondata, codecs.open("test.json", "w", encoding = "utf-8"))
plt.imshow(test_image.reshape(HEIGHT, WIDTH));
###Output
_____no_output_____
###Markdown
Finally, we can send it to the prediction service. The output will have a 1 in the index of the corresponding digit it is predicting. Congrats! You've completed the lab!
###Code
%%bash
gcloud ai-platform predict \
--model=mnist \
--version=${MODEL_TYPE} \
--json-instances=./test.json
###Output
_____no_output_____
###Markdown
MNIST Image Classification with TensorFlow on Cloud AI Platform

This notebook demonstrates how to implement different image models on MNIST using the [tf.keras API](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras).

Learning objectives
1. Understand how to build a Dense Neural Network (DNN) for image classification
2. Understand how to use dropout (DNN) for image classification
3. Understand how to use Convolutional Neural Networks (CNN)
4. Know how to deploy and use an image classification model using Google Cloud's [Vertex AI](https://cloud.google.com/vertex-ai/)

Each learning objective will correspond to a __TODO__ in the notebook, where you will complete the notebook cell's code before running the cell. Refer to the [solution notebook](../solutions/2_mnist_models.ipynb) for reference.

First things first. Configure the parameters below to match your own Google Cloud project details.
###Code
!sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst
# Here we'll show the currently installed version of TensorFlow
import tensorflow as tf
print(tf.__version__)
from datetime import datetime
import os
PROJECT = "your-project-id-here" # REPLACE WITH YOUR PROJECT ID
BUCKET = "your-bucket-id-here" # REPLACE WITH YOUR BUCKET NAME
REGION = "us-central1" # REPLACE WITH YOUR BUCKET REGION e.g. us-central1
MODEL_TYPE = "cnn" # "linear", "cnn", "dnn_dropout", or "dnn"
# Do not change these
os.environ["PROJECT"] = PROJECT
os.environ["BUCKET"] = BUCKET
os.environ["REGION"] = REGION
os.environ["MODEL_TYPE"] = MODEL_TYPE
os.environ["TFVERSION"] = "2.6" # Tensorflow version
os.environ["IMAGE_URI"] = os.path.join("gcr.io", PROJECT, "mnist_models")
###Output
_____no_output_____
###Markdown
Building a dynamic model
In the previous notebook, 1_mnist_linear.ipynb, we ran our code directly from the notebook. In order to run it on AI Platform, it needs to be packaged as a Python module.

The boilerplate structure for this module has already been set up in the folder `mnist_models`. The module lives in the sub-folder `trainer` and is designated as a Python package by the empty `__init__.py` (`mnist_models/trainer/__init__.py`) file. It still needs the model and a trainer to run it, so let's make them.

Let's start with the trainer file. This file parses command-line arguments to feed into the model.
###Code
%%writefile mnist_models/trainer/task.py
import argparse
import json
import os
import sys
from . import model
def _parse_arguments(argv):
"""Parses command-line arguments."""
parser = argparse.ArgumentParser()
parser.add_argument(
'--model_type',
help='Which model type to use',
type=str, default='linear')
parser.add_argument(
'--epochs',
help='The number of epochs to train',
type=int, default=10)
parser.add_argument(
'--steps_per_epoch',
help='The number of steps per epoch to train',
type=int, default=100)
parser.add_argument(
'--job-dir',
help='Directory where to save the given model',
type=str, default='mnist_models/')
return parser.parse_known_args(argv)
def main():
"""Parses command line arguments and kicks off model training."""
args = _parse_arguments(sys.argv[1:])[0]
# Configure path for hyperparameter tuning.
trial_id = json.loads(
os.environ.get('TF_CONFIG', '{}')).get('task', {}).get('trial', '')
output_path = args.job_dir if not trial_id else args.job_dir + '/' + trial_id
model_layers = model.get_layers(args.model_type)
image_model = model.build_model(model_layers, args.job_dir)
model_history = model.train_and_evaluate(
image_model, args.epochs, args.steps_per_epoch, args.job_dir)
if __name__ == '__main__':
main()
###Output
_____no_output_____
###Markdown
Next, let's group non-model functions into a util file to keep the model file simple. We'll copy over the `scale` and `load_dataset` functions from the previous lab.
###Code
%%writefile mnist_models/trainer/util.py
import tensorflow as tf
def scale(image, label):
"""Scales images from a 0-255 int range to a 0-1 float range"""
image = tf.cast(image, tf.float32)
image /= 255
image = tf.expand_dims(image, -1)
return image, label
def load_dataset(
data, training=True, buffer_size=5000, batch_size=100, nclasses=10):
"""Loads MNIST dataset into a tf.data.Dataset"""
(x_train, y_train), (x_test, y_test) = data
x = x_train if training else x_test
y = y_train if training else y_test
# One-hot encode the classes
y = tf.keras.utils.to_categorical(y, nclasses)
dataset = tf.data.Dataset.from_tensor_slices((x, y))
dataset = dataset.map(scale).batch(batch_size)
if training:
dataset = dataset.shuffle(buffer_size).repeat()
return dataset
###Output
_____no_output_____
###Markdown
Finally, let's code the models! The [tf.keras API](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras) accepts an array of [layers](https://www.tensorflow.org/api_docs/python/tf/keras/layers) into a [model object](https://www.tensorflow.org/api_docs/python/tf/keras/Model), so we can create a dictionary of layers based on the different model types we want to use. The file below has three functions: `get_layers`, `build_model`, and `train_and_evaluate`. We will build the structure of our model in `get_layers`. Last but not least, we'll copy over the training code from the previous lab into `train_and_evaluate`.

**TODO 1**: Define the Keras layers for a DNN model
**TODO 2**: Define the Keras layers for a dropout model
**TODO 3**: Define the Keras layers for a CNN model

Hint: These models progressively build on each other. Look at the imported `tensorflow.keras.layers` modules and the default values for the variables defined in `get_layers` for guidance.
###Code
%%writefile mnist_models/trainer/model.py
import os
import shutil
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
from tensorflow.keras import Sequential
from tensorflow.keras.callbacks import TensorBoard
from tensorflow.keras.layers import (
Conv2D, Dense, Dropout, Flatten, MaxPooling2D, Softmax)
from . import util
# Image Variables
WIDTH = 28
HEIGHT = 28
def get_layers(
model_type,
nclasses=10,
hidden_layer_1_neurons=400,
hidden_layer_2_neurons=100,
dropout_rate=0.25,
num_filters_1=64,
kernel_size_1=3,
pooling_size_1=2,
num_filters_2=32,
kernel_size_2=3,
pooling_size_2=2):
"""Constructs layers for a keras model based on a dict of model types."""
model_layers = {
'linear': [
Flatten(),
Dense(nclasses),
Softmax()
],
'dnn': [
# TODO
],
'dnn_dropout': [
# TODO
],
'cnn': [
# TODO
]
}
return model_layers[model_type]
def build_model(layers, output_dir):
"""Compiles keras model for image classification."""
model = Sequential(layers)
model.compile(optimizer='adam',
loss='categorical_crossentropy',
metrics=['accuracy'])
return model
def train_and_evaluate(model, num_epochs, steps_per_epoch, output_dir):
"""Compiles keras model and loads data into it for training."""
mnist = tf.keras.datasets.mnist.load_data()
train_data = util.load_dataset(mnist)
validation_data = util.load_dataset(mnist, training=False)
callbacks = []
if output_dir:
tensorboard_callback = TensorBoard(log_dir=output_dir)
callbacks = [tensorboard_callback]
history = model.fit(
train_data,
validation_data=validation_data,
epochs=num_epochs,
steps_per_epoch=steps_per_epoch,
verbose=2,
callbacks=callbacks)
if output_dir:
export_path = os.path.join(output_dir, 'keras_export')
model.save(export_path, save_format='tf')
return history
###Output
_____no_output_____
###Markdown
Local Training
With everything set up, let's run locally to test the code. Some of the previous tests have been copied over into a testing script, `mnist_models/trainer/test.py`, to make sure the model still passes our previous checks. On `line 13`, you can specify which model types you would like to check. `line 14` and `line 15` have the number of epochs and steps per epoch, respectively.

Moment of truth! Run the code below to check your models against the unit tests. If you see "OK" at the end when it's finished running, congrats! You've passed the tests!
###Code
!python3 -m mnist_models.trainer.test
###Output
_____no_output_____
###Markdown
Now that we know that our models are working as expected, let's run them on [Google Cloud AI Platform](https://cloud.google.com/ml-engine/docs/). First, we can run the code as a Python module locally using the command line.

The cell below transfers some of our variables to the command line and creates a job directory that includes a timestamp.
###Code
current_time = datetime.now().strftime("%y%m%d_%H%M%S")
model_type = 'cnn' # "linear", "cnn", "dnn_dropout", or "dnn"
os.environ["MODEL_TYPE"] = model_type
os.environ["JOB_DIR"] = "mnist_models/models/{}_{}/".format(
model_type, current_time)
###Output
_____no_output_____
###Markdown
The cell below runs the local version of the code. The `epochs` and `steps_per_epoch` flags can be changed to run for longer or shorter, as defined in our `mnist_models/trainer/task.py` file.
###Code
%%bash
python3 -m mnist_models.trainer.task \
--job-dir=$JOB_DIR \
--epochs=5 \
--steps_per_epoch=50 \
--model_type=$MODEL_TYPE
###Output
_____no_output_____
###Markdown
Training on the cloud
Since we're using an unreleased version of TensorFlow on AI Platform, we can instead use a [Deep Learning Container](https://cloud.google.com/ai-platform/deep-learning-containers/docs/overview) in order to take advantage of libraries and applications not normally packaged with AI Platform. Below is a simple [Dockerfile](https://docs.docker.com/engine/reference/builder/) which copies our code to be used in a TF2 environment.
###Code
%%writefile mnist_models/Dockerfile
FROM gcr.io/deeplearning-platform-release/tf2-cpu
COPY mnist_models/trainer /mnist_models/trainer
ENTRYPOINT ["python3", "-m", "mnist_models.trainer.task"]
###Output
_____no_output_____
###Markdown
The below command builds the image and ships it off to Google Cloud so it can be used for AI Platform. When built, it will show up [here](http://console.cloud.google.com/gcr) with the name `mnist_models`. ([Click here](https://console.cloud.google.com/cloud-build) to enable Cloud Build)
###Code
!docker build -f mnist_models/Dockerfile -t $IMAGE_URI ./
!docker push $IMAGE_URI
###Output
_____no_output_____
###Markdown
Finally, we can kick off the [AI Platform training job](https://cloud.google.com/sdk/gcloud/reference/ai-platform/jobs/submit/training). We can pass in our Docker image using the `master-image-uri` flag.
###Code
current_time = datetime.now().strftime("%y%m%d_%H%M%S")
model_type = 'cnn' # "linear", "cnn", "dnn_dropout", or "dnn"
os.environ["MODEL_TYPE"] = model_type
os.environ["JOB_DIR"] = "gs://{}/mnist_{}_{}/".format(
BUCKET, model_type, current_time)
os.environ["JOB_NAME"] = "mnist_{}_{}".format(
model_type, current_time)
###Output
_____no_output_____
###Markdown
The AI Platform job could take around 10 minutes to complete. Enable the **AI Platform Training & Prediction API**, if required.
###Code
%%bash
echo $JOB_DIR $REGION $JOB_NAME
gcloud ai-platform jobs submit training $JOB_NAME \
--staging-bucket=gs://$BUCKET \
--region=$REGION \
--master-image-uri=$IMAGE_URI \
--scale-tier=BASIC_GPU \
--job-dir=$JOB_DIR \
-- \
--model_type=$MODEL_TYPE
###Output
_____no_output_____
###Markdown
Deploying and predicting with modelOnce you have a model you're proud of, let's deploy it! All we need to do is give AI Platform the location of the model. The commands below use the keras export path of the previous job, but `${JOB_DIR}keras_export/` can always be changed to a different path. Uncomment the delete commands below if you are getting an "already exists" error and want to deploy a new model.
###Code
%%bash
MODEL_NAME="mnist"
MODEL_VERSION=${MODEL_TYPE}
MODEL_LOCATION=${JOB_DIR}keras_export/
echo "Deleting and deploying $MODEL_NAME $MODEL_VERSION from $MODEL_LOCATION ... this will take a few minutes"
#yes | gcloud ai-platform versions delete ${MODEL_VERSION} --model ${MODEL_NAME}
#yes | gcloud ai-platform models delete ${MODEL_NAME}
gcloud config set ai_platform/region global
gcloud ai-platform models create ${MODEL_NAME} --regions $REGION
gcloud ai-platform versions create ${MODEL_VERSION} \
--model ${MODEL_NAME} \
--origin ${MODEL_LOCATION} \
--framework tensorflow \
--runtime-version=2.6
###Output
_____no_output_____
###Markdown
To predict with the model, let's take one of the example images.**TODO 4**: Write a `.json` file with image data to send to an AI Platform deployed model
###Code
import json, codecs
import tensorflow as tf
import matplotlib.pyplot as plt
from mnist_models.trainer import util
HEIGHT = 28
WIDTH = 28
IMGNO = 12
mnist = tf.keras.datasets.mnist.load_data()
(x_train, y_train), (x_test, y_test) = mnist
test_image = x_test[IMGNO]
jsondata = test_image.reshape(HEIGHT, WIDTH, 1).tolist()
json.dump(jsondata, codecs.open("test.json", "w", encoding = "utf-8"))
plt.imshow(test_image.reshape(HEIGHT, WIDTH));
###Output
_____no_output_____
###Markdown
Finally, we can send it to the prediction service. The output will have a 1 in the index of the corresponding digit it is predicting. Congrats! You've completed the lab!
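To interpret such a response, the predicted digit is simply the index of the largest value in the returned vector. A minimal sketch, using a made-up probability vector for one image:

```python
# Hypothetical model output for one image: 10 class probabilities (digits 0-9).
probs = [0.0, 0.0, 0.01, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.99]

# The predicted digit is the index of the largest probability.
predicted_digit = max(range(len(probs)), key=lambda i: probs[i])
print(predicted_digit)  # 9
```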
###Code
%%bash
gcloud ai-platform predict \
--model=mnist \
--version=${MODEL_TYPE} \
--json-instances=./test.json
###Output
_____no_output_____
###Markdown
MNIST Image Classification with TensorFlow on Cloud ML EngineThis notebook demonstrates how to implement different image models on MNIST using the [tf.keras API](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras). Learning Objectives1. Understand how to build a Dense Neural Network (DNN) for image classification2. Understand how to use dropout (DNN) for image classification3. Understand how to use Convolutional Neural Networks (CNN)4. Know how to deploy and use an image classification model using Google Cloud's [AI Platform](https://cloud.google.com/ai-platform/)First things first. Configure the parameters below to match your own Google Cloud project details.
###Code
from datetime import datetime
import os
PROJECT = "your-project-id-here" # REPLACE WITH YOUR PROJECT ID
BUCKET = "your-bucket-id-here" # REPLACE WITH YOUR BUCKET NAME
REGION = "us-central1" # REPLACE WITH YOUR BUCKET REGION e.g. us-central1
# Do not change these
os.environ["PROJECT"] = PROJECT
os.environ["BUCKET"] = BUCKET
os.environ["REGION"] = REGION
os.environ["IMAGE_URI"] = os.path.join("gcr.io", PROJECT, "mnist_models")
###Output
_____no_output_____
###Markdown
Building a dynamic modelIn the previous notebook, mnist_linear.ipynb, we ran our code directly from the notebook. In order to run it on the AI Platform, it needs to be packaged as a python module.The boilerplate structure for this module has already been set up in the folder `mnist_models`. The module lives in the sub-folder, `trainer`, and is designated as a python package with the empty `__init__.py` (`mnist_models/trainer/__init__.py`) file. It still needs the model and a trainer to run it, so let's make them.Let's start with the trainer file first. This file parses command line arguments to feed into the model.
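The trainer below uses `parse_known_args`, which returns both the recognized arguments and any extras; this lets AI Platform pass through flags the trainer doesn't define without raising an error. A small standalone illustration:

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--model_type', type=str, default='linear')
parser.add_argument('--epochs', type=int, default=10)

# Unrecognized flags (e.g. ones the platform injects) land in `extras`
# instead of causing argparse to exit with an error.
args, extras = parser.parse_known_args(
    ['--model_type=cnn', '--epochs=5', '--some_platform_flag=1'])

print(args.model_type, args.epochs)  # cnn 5
print(extras)                        # ['--some_platform_flag=1']
```

(`--some_platform_flag` is a made-up flag purely for illustration.)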
###Code
%%writefile mnist_models/trainer/task.py
import argparse
import json
import os
import sys
from . import model
def _parse_arguments(argv):
"""Parses command-line arguments."""
parser = argparse.ArgumentParser()
parser.add_argument(
'--model_type',
help='Which model type to use',
type=str, default='linear')
parser.add_argument(
'--epochs',
help='The number of epochs to train',
type=int, default=10)
parser.add_argument(
'--steps_per_epoch',
help='The number of steps per epoch to train',
type=int, default=100)
parser.add_argument(
'--job-dir',
help='Directory where to save the given model',
type=str, default='mnist_models/')
return parser.parse_known_args(argv)
def main():
"""Parses command line arguments and kicks off model training."""
args = _parse_arguments(sys.argv[1:])[0]
# Configure path for hyperparameter tuning.
trial_id = json.loads(
os.environ.get('TF_CONFIG', '{}')).get('task', {}).get('trial', '')
output_path = args.job_dir if not trial_id else args.job_dir + '/'
model_layers = model.get_layers(args.model_type)
image_model = model.build_model(model_layers, args.job_dir)
model_history = model.train_and_evaluate(
image_model, args.epochs, args.steps_per_epoch, args.job_dir)
if __name__ == '__main__':
main()
###Output
_____no_output_____
###Markdown
Next, let's group non-model functions into a util file to keep the model file simple. We'll copy over the `scale` and `load_dataset` functions from the previous lab.
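The `scale` transformation is simple enough to sanity-check outside TensorFlow. An equivalent NumPy sketch (the real function uses `tf.cast` and `tf.expand_dims` on tensors):

```python
import numpy as np

def scale_np(image):
    """NumPy equivalent of the tf `scale` step: 0-255 ints -> 0-1 floats, plus a channel axis."""
    image = image.astype(np.float32) / 255.0
    return np.expand_dims(image, -1)  # (28, 28) -> (28, 28, 1)

img = np.full((28, 28), 255, dtype=np.uint8)
scaled = scale_np(img)
print(scaled.shape)  # (28, 28, 1)
print(scaled.max())  # 1.0
```

The trailing channel axis matters because `Conv2D` layers expect images shaped `(height, width, channels)`.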
###Code
%%writefile mnist_models/trainer/util.py
import tensorflow as tf
def scale(image, label):
"""Scales images from a 0-255 int range to a 0-1 float range"""
image = tf.cast(image, tf.float32)
image /= 255
image = tf.expand_dims(image, -1)
return image, label
def load_dataset(
data, training=True, buffer_size=5000, batch_size=100, nclasses=10):
"""Loads MNIST dataset into a tf.data.Dataset"""
(x_train, y_train), (x_test, y_test) = data
x = x_train if training else x_test
y = y_train if training else y_test
# One-hot encode the classes
y = tf.keras.utils.to_categorical(y, nclasses)
dataset = tf.data.Dataset.from_tensor_slices((x, y))
dataset = dataset.map(scale).batch(batch_size)
if training:
dataset = dataset.shuffle(buffer_size).repeat()
return dataset
###Output
_____no_output_____
###Markdown
Finally, let's code the models! The [tf.keras API](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras) accepts an array of [layers](https://www.tensorflow.org/api_docs/python/tf/keras/layers) into a [model object](https://www.tensorflow.org/api_docs/python/tf/keras/Model), so we can create a dictionary of layers based on the different model types we want to use. The below file has three functions: `get_layers`, `build_model`, and `train_and_evaluate`. We will build the structure of our model in `get_layers` and compile it in `build_model`. Last but not least, we'll copy over the training code from the previous lab into `train_and_evaluate`.**TODO 1**: Define the Keras layers for a DNN model **TODO 2**: Define the Keras layers for a dropout model **TODO 3**: Define the Keras layers for a CNN model Hint: These models progressively build on each other. Look at the imported `tensorflow.keras.layers` modules and the default values for the variables defined in `get_layers` for guidance.
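For orientation, one plausible way the three TODO stacks could look, wired up with the default hyperparameter values from `get_layers` (a hedged sketch, not necessarily the lab's answer key; try your own variants first):

```python
from tensorflow.keras.layers import (
    Conv2D, Dense, Dropout, Flatten, MaxPooling2D, Softmax)

# Hypothetical fill-ins; the literal numbers mirror get_layers() defaults
# (hidden_layer_1_neurons=400, hidden_layer_2_neurons=100, dropout_rate=0.25,
# num_filters_1=64, num_filters_2=32, kernel sizes 3, pooling sizes 2).
dnn = [
    Flatten(),
    Dense(400, activation='relu'),
    Dense(100, activation='relu'),
    Dense(10),
    Softmax()
]

dnn_dropout = [
    Flatten(),
    Dense(400, activation='relu'),
    Dense(100, activation='relu'),
    Dropout(0.25),
    Dense(10),
    Softmax()
]

cnn = [
    Conv2D(64, kernel_size=3, activation='relu', input_shape=(28, 28, 1)),
    MaxPooling2D(2),
    Conv2D(32, kernel_size=3, activation='relu'),
    MaxPooling2D(2),
    Flatten(),
    Dense(400, activation='relu'),
    Dense(100, activation='relu'),
    Dropout(0.25),
    Dense(10),
    Softmax()
]
```

Note how each stack extends the previous one, matching the hint that the models progressively build on each other.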
###Code
%%writefile mnist_models/trainer/model.py
import os
import shutil
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
from tensorflow.keras import Sequential
from tensorflow.keras.callbacks import TensorBoard
from tensorflow.keras.layers import (
Conv2D, Dense, Dropout, Flatten, MaxPooling2D, Softmax)
from . import util
# Image Variables
WIDTH = 28
HEIGHT = 28
def get_layers(
model_type,
nclasses=10,
hidden_layer_1_neurons=400,
hidden_layer_2_neurons=100,
dropout_rate=0.25,
num_filters_1=64,
kernel_size_1=3,
pooling_size_1=2,
num_filters_2=32,
kernel_size_2=3,
pooling_size_2=2):
"""Constructs layers for a keras model based on a dict of model types."""
model_layers = {
'linear': [
Flatten(),
Dense(nclasses),
Softmax()
],
'dnn': [
# TODO
],
'dnn_dropout': [
# TODO
],
'cnn': [
# TODO
]
}
return model_layers[model_type]
def build_model(layers, output_dir):
"""Compiles keras model for image classification."""
model = Sequential(layers)
model.compile(optimizer='adam',
loss='categorical_crossentropy',
metrics=['accuracy'])
return model
def train_and_evaluate(model, num_epochs, steps_per_epoch, output_dir):
"""Compiles keras model and loads data into it for training."""
mnist = tf.keras.datasets.mnist.load_data()
train_data = util.load_dataset(mnist)
validation_data = util.load_dataset(mnist, training=False)
callbacks = []
if output_dir:
tensorboard_callback = TensorBoard(log_dir=output_dir)
callbacks = [tensorboard_callback]
history = model.fit(
train_data,
validation_data=validation_data,
epochs=num_epochs,
steps_per_epoch=steps_per_epoch,
verbose=2,
callbacks=callbacks)
if output_dir:
export_path = os.path.join(output_dir, 'keras_export')
model.save(export_path, save_format='tf')
return history
###Output
_____no_output_____
###Markdown
Local TrainingWith everything set up, let's run locally to test the code. We can run it as a python module locally using the command line. The cell below transfers some of our variables to the command line and creates a job directory that includes a timestamp. This is where our model and TensorBoard data will be stored.
###Code
current_time = datetime.now().strftime("%y%m%d_%H%M%S")
model_type = 'cnn'
os.environ["MODEL_TYPE"] = model_type
os.environ["JOB_DIR"] = "mnist_trained/models/{}_{}/".format(
model_type, current_time)
###Output
_____no_output_____
###Markdown
Create local directory to store our model files.
###Code
%%bash
mkdir mnist_trained
mkdir mnist_trained/models
###Output
_____no_output_____
###Markdown
The cell below runs the local version of the code. The `--epochs` and `--steps_per_epoch` flags can be changed to run for longer or shorter, as defined in our `mnist_models/trainer/task.py` file.
###Code
%%bash
mkdir $JOB_DIR
export PYTHONPATH=$PYTHONPATH:$PWD/mnist_models
python3 -m trainer.task \
--job-dir=$JOB_DIR \
--epochs=5 \
--steps_per_epoch=50 \
--model_type=$MODEL_TYPE
###Output
_____no_output_____
###Markdown
Let's check out how the model did in TensorBoard and confirm that it's good to go before kicking it off to train on the cloud. If running on a Deep Learning VM, open the folder corresponding to the `--job-dir` above. Then, go to File > New Launcher. Click on TensorBoard under "Other". If running locally, the following command can be run in a terminal:`tensorboard --logdir=` Training on the cloudSince we're using an unreleased version of TensorFlow on AI Platform, we can instead use a [Deep Learning Container](https://cloud.google.com/ai-platform/deep-learning-containers/docs/overview) in order to take advantage of libraries and applications not normally packaged with AI Platform. Below is a simple [Dockerfile](https://docs.docker.com/engine/reference/builder/) which copies our code to be used in a TF2 environment.
###Code
%%writefile mnist_models/Dockerfile
FROM gcr.io/deeplearning-platform-release/tf2-cpu
COPY mnist_models/trainer /mnist_models/trainer
ENTRYPOINT ["python3", "mnist_models/trainer/task.py"]
###Output
_____no_output_____
###Markdown
The below command builds the image and ships it off to Google Cloud so it can be used for AI Platform. When built, it will show up [here](http://console.cloud.google.com/gcr) with the name `mnist_models`. ([Click here](https://console.cloud.google.com/cloud-build) to enable Cloud Build)
###Code
!docker build -f mnist_models/Dockerfile -t $IMAGE_URI ./
!docker push $IMAGE_URI
###Output
_____no_output_____
###Markdown
Finally, we can kick off the [AI Platform training job](https://cloud.google.com/sdk/gcloud/reference/ai-platform/jobs/submit/training). We can pass in our Docker image using the `master-image-uri` flag.
###Code
current_time = datetime.now().strftime("%y%m%d_%H%M%S")
model_type = 'cnn'
os.environ["MODEL_TYPE"] = model_type
os.environ["JOB_DIR"] = "gs://{}/mnist_{}_{}/".format(
BUCKET, model_type, current_time)
os.environ["JOB_NAME"] = "mnist_{}_{}".format(
model_type, current_time)
%%bash
echo $JOB_DIR $REGION $JOB_NAME
gcloud ai-platform jobs submit training $JOB_NAME \
--staging-bucket=gs://$BUCKET \
--region=$REGION \
--master-image-uri=$IMAGE_URI \
--scale-tier=BASIC_GPU \
--job-dir=$JOB_DIR \
-- \
--model_type=$MODEL_TYPE
###Output
_____no_output_____
###Markdown
Can't wait to see the results? Run the code below and copy the output into the [Google Cloud Shell](https://console.cloud.google.com/home/dashboard?cloudshell=true) to follow along with TensorBoard. Look at the web preview on port 6006.
###Code
!echo "tensorboard --logdir $JOB_DIR"
###Output
_____no_output_____
###Markdown
Deploying and predicting with modelOnce you have a model you're proud of, let's deploy it! All we need to do is give AI Platform the location of the model. The commands below use the keras export path of the previous job, but `${JOB_DIR}keras_export/` can always be changed to a different path. Even though we're using a 1.14 runtime, it's compatible with TF2 exported models. Phew! Uncomment the delete commands below if you are getting an "already exists" error and want to deploy a new model.
###Code
%%bash
MODEL_NAME="mnist"
MODEL_VERSION=${MODEL_TYPE}
MODEL_LOCATION=${JOB_DIR}keras_export/
echo "Deleting and deploying $MODEL_NAME $MODEL_VERSION from $MODEL_LOCATION ... this will take a few minutes"
#yes | gcloud ai-platform versions delete ${MODEL_VERSION} --model ${MODEL_NAME}
#yes | gcloud ai-platform models delete ${MODEL_NAME}
gcloud ai-platform models create ${MODEL_NAME} --regions $REGION
gcloud ai-platform versions create ${MODEL_VERSION} \
--model ${MODEL_NAME} \
--origin ${MODEL_LOCATION} \
--framework tensorflow \
--runtime-version=1.14
###Output
_____no_output_____
###Markdown
To predict with the model, let's take one of the example images.**TODO 4**: Write a `.json` file with image data to send to an AI Platform deployed model
###Code
import json, codecs
import tensorflow as tf
import matplotlib.pyplot as plt
from mnist_models.trainer import util
HEIGHT = 28
WIDTH = 28
IMGNO = 12
mnist = tf.keras.datasets.mnist.load_data()
(x_train, y_train), (x_test, y_test) = mnist
test_image = x_test[IMGNO]
jsondata = test_image.reshape(HEIGHT, WIDTH, 1).tolist()
json.dump(jsondata, codecs.open("test.json", "w", encoding = "utf-8"))
plt.imshow(test_image.reshape(HEIGHT, WIDTH));
###Output
_____no_output_____
###Markdown
Finally, we can send it to the prediction service. The output will have a 1 in the index of the corresponding digit it is predicting. Congrats! You've completed the lab!
###Code
%%bash
gcloud ai-platform predict \
--model=mnist \
--version=${MODEL_TYPE} \
--json-instances=./test.json
###Output
_____no_output_____
###Markdown
MNIST Image Classification with TensorFlow on Cloud AI PlatformThis notebook demonstrates how to implement different image models on MNIST using the [tf.keras API](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras). Learning Objectives1. Understand how to build a Dense Neural Network (DNN) for image classification2. Understand how to use dropout (DNN) for image classification3. Understand how to use Convolutional Neural Networks (CNN)4. Know how to deploy and use an image classification model using Google Cloud's [AI Platform](https://cloud.google.com/ai-platform/)First things first. Configure the parameters below to match your own Google Cloud project details.
###Code
from datetime import datetime
import os
PROJECT = "your-project-id-here" # REPLACE WITH YOUR PROJECT ID
BUCKET = "your-bucket-id-here" # REPLACE WITH YOUR BUCKET NAME
REGION = "us-central1" # REPLACE WITH YOUR BUCKET REGION e.g. us-central1
MODEL_TYPE = "cnn"  # "linear", "dnn", "dnn_dropout", or "cnn"
# Do not change these
os.environ["PROJECT"] = PROJECT
os.environ["BUCKET"] = BUCKET
os.environ["REGION"] = REGION
os.environ["MODEL_TYPE"] = MODEL_TYPE
os.environ["TFVERSION"] = "2.1" # Tensorflow version
os.environ["IMAGE_URI"] = os.path.join("gcr.io", PROJECT, "mnist_models")
###Output
_____no_output_____
###Markdown
Building a dynamic modelIn the previous notebook, mnist_linear.ipynb, we ran our code directly from the notebook. In order to run it on the AI Platform, it needs to be packaged as a python module.The boilerplate structure for this module has already been set up in the folder `mnist_models`. The module lives in the sub-folder, `trainer`, and is designated as a python package with the empty `__init__.py` (`mnist_models/trainer/__init__.py`) file. It still needs the model and a trainer to run it, so let's make them.Let's start with the trainer file first. This file parses command line arguments to feed into the model.
###Code
%%writefile mnist_models/trainer/task.py
import argparse
import json
import os
import sys
from . import model
def _parse_arguments(argv):
"""Parses command-line arguments."""
parser = argparse.ArgumentParser()
parser.add_argument(
'--model_type',
help='Which model type to use',
type=str, default='linear')
parser.add_argument(
'--epochs',
help='The number of epochs to train',
type=int, default=10)
parser.add_argument(
'--steps_per_epoch',
help='The number of steps per epoch to train',
type=int, default=100)
parser.add_argument(
'--job-dir',
help='Directory where to save the given model',
type=str, default='mnist_models/')
return parser.parse_known_args(argv)
def main():
"""Parses command line arguments and kicks off model training."""
args = _parse_arguments(sys.argv[1:])[0]
# Configure path for hyperparameter tuning.
trial_id = json.loads(
os.environ.get('TF_CONFIG', '{}')).get('task', {}).get('trial', '')
output_path = args.job_dir if not trial_id else args.job_dir + '/'
model_layers = model.get_layers(args.model_type)
image_model = model.build_model(model_layers, args.job_dir)
model_history = model.train_and_evaluate(
image_model, args.epochs, args.steps_per_epoch, args.job_dir)
if __name__ == '__main__':
main()
###Output
_____no_output_____
###Markdown
Next, let's group non-model functions into a util file to keep the model file simple. We'll copy over the `scale` and `load_dataset` functions from the previous lab.
###Code
%%writefile mnist_models/trainer/util.py
import tensorflow as tf
def scale(image, label):
"""Scales images from a 0-255 int range to a 0-1 float range"""
image = tf.cast(image, tf.float32)
image /= 255
image = tf.expand_dims(image, -1)
return image, label
def load_dataset(
data, training=True, buffer_size=5000, batch_size=100, nclasses=10):
"""Loads MNIST dataset into a tf.data.Dataset"""
(x_train, y_train), (x_test, y_test) = data
x = x_train if training else x_test
y = y_train if training else y_test
# One-hot encode the classes
y = tf.keras.utils.to_categorical(y, nclasses)
dataset = tf.data.Dataset.from_tensor_slices((x, y))
dataset = dataset.map(scale).batch(batch_size)
if training:
dataset = dataset.shuffle(buffer_size).repeat()
return dataset
###Output
_____no_output_____
###Markdown
Finally, let's code the models! The [tf.keras API](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras) accepts an array of [layers](https://www.tensorflow.org/api_docs/python/tf/keras/layers) into a [model object](https://www.tensorflow.org/api_docs/python/tf/keras/Model), so we can create a dictionary of layers based on the different model types we want to use. The below file has three functions: `get_layers`, `build_model`, and `train_and_evaluate`. We will build the structure of our model in `get_layers` and compile it in `build_model`. Last but not least, we'll copy over the training code from the previous lab into `train_and_evaluate`.**TODO 1**: Define the Keras layers for a DNN model **TODO 2**: Define the Keras layers for a dropout model **TODO 3**: Define the Keras layers for a CNN model Hint: These models progressively build on each other. Look at the imported `tensorflow.keras.layers` modules and the default values for the variables defined in `get_layers` for guidance.
###Code
%%writefile mnist_models/trainer/model.py
import os
import shutil
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
from tensorflow.keras import Sequential
from tensorflow.keras.callbacks import TensorBoard
from tensorflow.keras.layers import (
Conv2D, Dense, Dropout, Flatten, MaxPooling2D, Softmax)
from . import util
# Image Variables
WIDTH = 28
HEIGHT = 28
def get_layers(
model_type,
nclasses=10,
hidden_layer_1_neurons=400,
hidden_layer_2_neurons=100,
dropout_rate=0.25,
num_filters_1=64,
kernel_size_1=3,
pooling_size_1=2,
num_filters_2=32,
kernel_size_2=3,
pooling_size_2=2):
"""Constructs layers for a keras model based on a dict of model types."""
model_layers = {
'linear': [
Flatten(),
Dense(nclasses),
Softmax()
],
'dnn': [
# TODO
],
'dnn_dropout': [
# TODO
],
'cnn': [
# TODO
]
}
return model_layers[model_type]
def build_model(layers, output_dir):
"""Compiles keras model for image classification."""
model = Sequential(layers)
model.compile(optimizer='adam',
loss='categorical_crossentropy',
metrics=['accuracy'])
return model
def train_and_evaluate(model, num_epochs, steps_per_epoch, output_dir):
"""Compiles keras model and loads data into it for training."""
mnist = tf.keras.datasets.mnist.load_data()
train_data = util.load_dataset(mnist)
validation_data = util.load_dataset(mnist, training=False)
callbacks = []
if output_dir:
tensorboard_callback = TensorBoard(log_dir=output_dir)
callbacks = [tensorboard_callback]
history = model.fit(
train_data,
validation_data=validation_data,
epochs=num_epochs,
steps_per_epoch=steps_per_epoch,
verbose=2,
callbacks=callbacks)
if output_dir:
export_path = os.path.join(output_dir, 'keras_export')
model.save(export_path, save_format='tf')
return history
###Output
_____no_output_____
###Markdown
Local TrainingWith everything set up, let's run locally to test the code. Some of the previous tests have been copied over into a testing script `mnist_models/trainer/test.py` to make sure the model still passes our previous checks. On `line 13`, you can specify which model types you would like to check. `line 14` and `line 15` hold the number of epochs and steps per epoch, respectively. Moment of truth! Run the code below to check your models against the unit tests. If you see "OK" at the end when it's finished running, congrats! You've passed the tests!
###Code
!python3 -m mnist_models.trainer.test
###Output
_____no_output_____
###Markdown
Now that we know that our models are working as expected, let's run it on the [Google Cloud AI Platform](https://cloud.google.com/ml-engine/docs/). First, we can run it as a python module locally using the command line. The cell below transfers some of our variables to the command line and creates a job directory that includes a timestamp.
###Code
current_time = datetime.now().strftime("%y%m%d_%H%M%S")
model_type = 'cnn'
os.environ["MODEL_TYPE"] = model_type
os.environ["JOB_DIR"] = "mnist_models/models/{}_{}/".format(
model_type, current_time)
###Output
_____no_output_____
###Markdown
The cell below runs the local version of the code. The `--epochs` and `--steps_per_epoch` flags can be changed to run for longer or shorter, as defined in our `mnist_models/trainer/task.py` file.
###Code
%%bash
python3 -m mnist_models.trainer.task \
--job-dir=$JOB_DIR \
--epochs=5 \
--steps_per_epoch=50 \
--model_type=$MODEL_TYPE
###Output
_____no_output_____
###Markdown
Training on the cloudSince we're using an unreleased version of TensorFlow on AI Platform, we can instead use a [Deep Learning Container](https://cloud.google.com/ai-platform/deep-learning-containers/docs/overview) in order to take advantage of libraries and applications not normally packaged with AI Platform. Below is a simple [Dockerfile](https://docs.docker.com/engine/reference/builder/) which copies our code to be used in a TF2 environment.
###Code
%%writefile mnist_models/Dockerfile
FROM gcr.io/deeplearning-platform-release/tf2-cpu
COPY mnist_models/trainer /mnist_models/trainer
ENTRYPOINT ["python3", "-m", "mnist_models.trainer.task"]
###Output
_____no_output_____
###Markdown
The below command builds the image and ships it off to Google Cloud so it can be used for AI Platform. When built, it will show up [here](http://console.cloud.google.com/gcr) with the name `mnist_models`. ([Click here](https://console.cloud.google.com/cloud-build) to enable Cloud Build)
###Code
!docker build -f mnist_models/Dockerfile -t $IMAGE_URI ./
!docker push $IMAGE_URI
###Output
_____no_output_____
###Markdown
Finally, we can kick off the [AI Platform training job](https://cloud.google.com/sdk/gcloud/reference/ai-platform/jobs/submit/training). We can pass in our Docker image using the `master-image-uri` flag.
###Code
current_time = datetime.now().strftime("%y%m%d_%H%M%S")
model_type = 'cnn'
os.environ["MODEL_TYPE"] = model_type
os.environ["JOB_DIR"] = "gs://{}/mnist_{}_{}/".format(
BUCKET, model_type, current_time)
os.environ["JOB_NAME"] = "mnist_{}_{}".format(
model_type, current_time)
%%bash
echo $JOB_DIR $REGION $JOB_NAME
gcloud ai-platform jobs submit training $JOB_NAME \
--staging-bucket=gs://$BUCKET \
--region=$REGION \
--master-image-uri=$IMAGE_URI \
--scale-tier=BASIC_GPU \
--job-dir=$JOB_DIR \
-- \
--model_type=$MODEL_TYPE
###Output
_____no_output_____
###Markdown
Deploying and predicting with modelOnce you have a model you're proud of, let's deploy it! All we need to do is give AI Platform the location of the model. The commands below use the keras export path of the previous job, but `${JOB_DIR}keras_export/` can always be changed to a different path. Uncomment the delete commands below if you are getting an "already exists" error and want to deploy a new model.
###Code
%%bash
MODEL_NAME="mnist"
MODEL_VERSION=${MODEL_TYPE}
MODEL_LOCATION=${JOB_DIR}keras_export/
echo "Deleting and deploying $MODEL_NAME $MODEL_VERSION from $MODEL_LOCATION ... this will take a few minutes"
#yes | gcloud ai-platform versions delete ${MODEL_VERSION} --model ${MODEL_NAME}
#yes | gcloud ai-platform models delete ${MODEL_NAME}
gcloud ai-platform models create ${MODEL_NAME} --regions $REGION
gcloud ai-platform versions create ${MODEL_VERSION} \
--model ${MODEL_NAME} \
--origin ${MODEL_LOCATION} \
--framework tensorflow \
--runtime-version=2.1
###Output
_____no_output_____
###Markdown
To predict with the model, let's take one of the example images.**TODO 4**: Write a `.json` file with image data to send to an AI Platform deployed model
###Code
import json, codecs
import tensorflow as tf
import matplotlib.pyplot as plt
from mnist_models.trainer import util
HEIGHT = 28
WIDTH = 28
IMGNO = 12
mnist = tf.keras.datasets.mnist.load_data()
(x_train, y_train), (x_test, y_test) = mnist
test_image = x_test[IMGNO]
jsondata = test_image.reshape(HEIGHT, WIDTH, 1).tolist()
json.dump(jsondata, codecs.open("test.json", "w", encoding = "utf-8"))
plt.imshow(test_image.reshape(HEIGHT, WIDTH));
###Output
_____no_output_____
###Markdown
Finally, we can send it to the prediction service. The output will have a 1 in the index of the corresponding digit it is predicting. Congrats! You've completed the lab!
###Code
%%bash
gcloud ai-platform predict \
--model=mnist \
--version=${MODEL_TYPE} \
--json-instances=./test.json
###Output
_____no_output_____
###Markdown
MNIST Image Classification with TensorFlow on Cloud AI PlatformThis notebook demonstrates how to implement different image models on MNIST using the [tf.keras API](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras). Learning Objectives1. Understand how to build a Dense Neural Network (DNN) for image classification2. Understand how to use dropout (DNN) for image classification3. Understand how to use Convolutional Neural Networks (CNN)4. Know how to deploy and use an image classification model using Google Cloud's [AI Platform](https://cloud.google.com/ai-platform/)First things first. Configure the parameters below to match your own Google Cloud project details.
###Code
!sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst
from datetime import datetime
import os
PROJECT = "your-project-id-here" # REPLACE WITH YOUR PROJECT ID
BUCKET = "your-bucket-id-here" # REPLACE WITH YOUR BUCKET NAME
REGION = "us-central1" # REPLACE WITH YOUR BUCKET REGION e.g. us-central1
MODEL_TYPE = "cnn"  # "linear", "dnn", "dnn_dropout", or "cnn"
# Do not change these
os.environ["PROJECT"] = PROJECT
os.environ["BUCKET"] = BUCKET
os.environ["REGION"] = REGION
os.environ["MODEL_TYPE"] = MODEL_TYPE
os.environ["TFVERSION"] = "2.1" # Tensorflow version
os.environ["IMAGE_URI"] = os.path.join("gcr.io", PROJECT, "mnist_models")
###Output
_____no_output_____
###Markdown
Building a dynamic modelIn the previous notebook, mnist_linear.ipynb, we ran our code directly from the notebook. In order to run it on the AI Platform, it needs to be packaged as a python module.The boilerplate structure for this module has already been set up in the folder `mnist_models`. The module lives in the sub-folder, `trainer`, and is designated as a python package with the empty `__init__.py` (`mnist_models/trainer/__init__.py`) file. It still needs the model and a trainer to run it, so let's make them.Let's start with the trainer file first. This file parses command line arguments to feed into the model.
###Code
%%writefile mnist_models/trainer/task.py
import argparse
import json
import os
import sys
from . import model
def _parse_arguments(argv):
"""Parses command-line arguments."""
parser = argparse.ArgumentParser()
parser.add_argument(
'--model_type',
help='Which model type to use',
type=str, default='linear')
parser.add_argument(
'--epochs',
help='The number of epochs to train',
type=int, default=10)
parser.add_argument(
'--steps_per_epoch',
help='The number of steps per epoch to train',
type=int, default=100)
parser.add_argument(
'--job-dir',
help='Directory where to save the given model',
type=str, default='mnist_models/')
return parser.parse_known_args(argv)
def main():
"""Parses command line arguments and kicks off model training."""
args = _parse_arguments(sys.argv[1:])[0]
# Configure path for hyperparameter tuning.
trial_id = json.loads(
os.environ.get('TF_CONFIG', '{}')).get('task', {}).get('trial', '')
output_path = args.job_dir if not trial_id else args.job_dir + '/'
model_layers = model.get_layers(args.model_type)
image_model = model.build_model(model_layers, args.job_dir)
model_history = model.train_and_evaluate(
image_model, args.epochs, args.steps_per_epoch, args.job_dir)
if __name__ == '__main__':
main()
###Output
_____no_output_____
###Markdown
Next, let's group non-model functions into a util file to keep the model file simple. We'll copy over the `scale` and `load_dataset` functions from the previous lab.
###Code
%%writefile mnist_models/trainer/util.py
import tensorflow as tf
def scale(image, label):
"""Scales images from a 0-255 int range to a 0-1 float range"""
image = tf.cast(image, tf.float32)
image /= 255
image = tf.expand_dims(image, -1)
return image, label
def load_dataset(
data, training=True, buffer_size=5000, batch_size=100, nclasses=10):
"""Loads MNIST dataset into a tf.data.Dataset"""
(x_train, y_train), (x_test, y_test) = data
x = x_train if training else x_test
y = y_train if training else y_test
# One-hot encode the classes
y = tf.keras.utils.to_categorical(y, nclasses)
dataset = tf.data.Dataset.from_tensor_slices((x, y))
dataset = dataset.map(scale).batch(batch_size)
if training:
dataset = dataset.shuffle(buffer_size).repeat()
return dataset
###Output
_____no_output_____
###Markdown
Finally, let's code the models! The [tf.keras API](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras) accepts an array of [layers](https://www.tensorflow.org/api_docs/python/tf/keras/layers) into a [model object](https://www.tensorflow.org/api_docs/python/tf/keras/Model), so we can create a dictionary of layers based on the different model types we want to use. The below file has three functions: `get_layers`, `build_model`, and `train_and_evaluate`. We will build the structure of our model in `get_layers` and compile it in `build_model`. Last but not least, we'll copy over the training code from the previous lab into `train_and_evaluate`.**TODO 1**: Define the Keras layers for a DNN model **TODO 2**: Define the Keras layers for a dropout model **TODO 3**: Define the Keras layers for a CNN model Hint: These models progressively build on each other. Look at the imported `tensorflow.keras.layers` modules and the default values for the variables defined in `get_layers` for guidance.
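Since these TODOs are the heart of the lab, here is one plausible way to fill them in — a sketch based on the default hyperparameter values of `get_layers`, not necessarily the lab's official solution:

```python
# One plausible answer for the three TODO layer stacks, using the default
# hyperparameter values from get_layers(). This is a sketch, not
# necessarily the lab's official solution.
from tensorflow.keras.layers import (
    Conv2D, Dense, Dropout, Flatten, MaxPooling2D, Softmax)


def get_layers_sketch(nclasses=10,
                      hidden_layer_1_neurons=400,
                      hidden_layer_2_neurons=100,
                      dropout_rate=0.25,
                      num_filters_1=64, kernel_size_1=3, pooling_size_1=2,
                      num_filters_2=32, kernel_size_2=3, pooling_size_2=2):
    """Returns candidate layer lists for the 'dnn', 'dnn_dropout', 'cnn' keys."""
    return {
        'dnn': [
            Flatten(),
            Dense(hidden_layer_1_neurons, activation='relu'),
            Dense(hidden_layer_2_neurons, activation='relu'),
            Dense(nclasses),
            Softmax()
        ],
        'dnn_dropout': [
            Flatten(),
            Dense(hidden_layer_1_neurons, activation='relu'),
            Dense(hidden_layer_2_neurons, activation='relu'),
            Dropout(dropout_rate),  # randomly zero activations during training
            Dense(nclasses),
            Softmax()
        ],
        'cnn': [
            Conv2D(num_filters_1, kernel_size=kernel_size_1,
                   activation='relu', input_shape=(28, 28, 1)),
            MaxPooling2D(pooling_size_1),
            Conv2D(num_filters_2, kernel_size=kernel_size_2,
                   activation='relu'),
            MaxPooling2D(pooling_size_2),
            Flatten(),
            Dense(hidden_layer_1_neurons, activation='relu'),
            Dense(nclasses),
            Softmax()
        ],
    }


candidate = get_layers_sketch()
print({name: len(stack) for name, stack in candidate.items()})
```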
###Code
%%writefile mnist_models/trainer/model.py
import os
import shutil
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
from tensorflow.keras import Sequential
from tensorflow.keras.callbacks import TensorBoard
from tensorflow.keras.layers import (
Conv2D, Dense, Dropout, Flatten, MaxPooling2D, Softmax)
from . import util
# Image Variables
WIDTH = 28
HEIGHT = 28
def get_layers(
model_type,
nclasses=10,
hidden_layer_1_neurons=400,
hidden_layer_2_neurons=100,
dropout_rate=0.25,
num_filters_1=64,
kernel_size_1=3,
pooling_size_1=2,
num_filters_2=32,
kernel_size_2=3,
pooling_size_2=2):
"""Constructs layers for a keras model based on a dict of model types."""
model_layers = {
'linear': [
Flatten(),
Dense(nclasses),
Softmax()
],
'dnn': [
# TODO
],
'dnn_dropout': [
# TODO
],
'cnn': [
# TODO
]
}
return model_layers[model_type]
def build_model(layers, output_dir):
"""Compiles keras model for image classification."""
model = Sequential(layers)
model.compile(optimizer='adam',
loss='categorical_crossentropy',
metrics=['accuracy'])
return model
def train_and_evaluate(model, num_epochs, steps_per_epoch, output_dir):
"""Compiles keras model and loads data into it for training."""
mnist = tf.keras.datasets.mnist.load_data()
train_data = util.load_dataset(mnist)
validation_data = util.load_dataset(mnist, training=False)
callbacks = []
if output_dir:
tensorboard_callback = TensorBoard(log_dir=output_dir)
callbacks = [tensorboard_callback]
history = model.fit(
train_data,
validation_data=validation_data,
epochs=num_epochs,
steps_per_epoch=steps_per_epoch,
verbose=2,
callbacks=callbacks)
if output_dir:
export_path = os.path.join(output_dir, 'keras_export')
model.save(export_path, save_format='tf')
return history
###Output
_____no_output_____
###Markdown
Local TrainingWith everything set up, let's run locally to test the code. Some of the previous tests have been copied over into a testing script `mnist_models/trainer/test.py` to make sure the model still passes our previous checks. On `line 13`, you can specify which model types you would like to check. `line 14` and `line 15` have the number of epochs and steps per epoch, respectively.Moment of truth! Run the code below to check your models against the unit tests. If you see "OK" at the end when it's finished running, congrats! You've passed the tests!
###Code
!python3 -m mnist_models.trainer.test
###Output
_____no_output_____
###Markdown
Now that we know that our models are working as expected, let's run them on the [Google Cloud AI Platform](https://cloud.google.com/ml-engine/docs/). We can run it as a python module locally first using the command line.The below cell transfers some of our variables to the command line as well as creating a job directory including a timestamp.
###Code
current_time = datetime.now().strftime("%y%m%d_%H%M%S")
model_type = 'cnn'
os.environ["MODEL_TYPE"] = model_type
os.environ["JOB_DIR"] = "mnist_models/models/{}_{}/".format(
model_type, current_time)
###Output
_____no_output_____
###Markdown
The cell below runs the local version of the code. The `epochs` and `steps_per_epoch` flags can be changed to run for longer or shorter, as defined in our `mnist_models/trainer/task.py` file.
###Code
%%bash
python3 -m mnist_models.trainer.task \
--job-dir=$JOB_DIR \
--epochs=5 \
--steps_per_epoch=50 \
--model_type=$MODEL_TYPE
###Output
_____no_output_____
###Markdown
Training on the cloudSince we're using an unreleased version of TensorFlow on AI Platform, we can instead use a [Deep Learning Container](https://cloud.google.com/ai-platform/deep-learning-containers/docs/overview) in order to take advantage of libraries and applications not normally packaged with AI Platform. Below is a simple [Dockerfile](https://docs.docker.com/engine/reference/builder/) which copies our code to be used in a TF2 environment.
###Code
%%writefile mnist_models/Dockerfile
FROM gcr.io/deeplearning-platform-release/tf2-cpu
COPY mnist_models/trainer /mnist_models/trainer
ENTRYPOINT ["python3", "-m", "mnist_models.trainer.task"]
###Output
_____no_output_____
###Markdown
The below command builds the image and ships it off to Google Cloud so it can be used for AI Platform. When built, it will show up [here](http://console.cloud.google.com/gcr) with the name `mnist_models`. ([Click here](https://console.cloud.google.com/cloud-build) to enable Cloud Build)
###Code
!docker build -f mnist_models/Dockerfile -t $IMAGE_URI ./
!docker push $IMAGE_URI
###Output
_____no_output_____
###Markdown
Finally, we can kick off the [AI Platform training job](https://cloud.google.com/sdk/gcloud/reference/ai-platform/jobs/submit/training). We can pass in our Docker image using the `master-image-uri` flag.
###Code
current_time = datetime.now().strftime("%y%m%d_%H%M%S")
model_type = 'cnn'
os.environ["MODEL_TYPE"] = model_type
os.environ["JOB_DIR"] = "gs://{}/mnist_{}_{}/".format(
BUCKET, model_type, current_time)
os.environ["JOB_NAME"] = "mnist_{}_{}".format(
model_type, current_time)
%%bash
echo $JOB_DIR $REGION $JOB_NAME
gcloud ai-platform jobs submit training $JOB_NAME \
--staging-bucket=gs://$BUCKET \
--region=$REGION \
--master-image-uri=$IMAGE_URI \
--scale-tier=BASIC_GPU \
--job-dir=$JOB_DIR \
-- \
--model_type=$MODEL_TYPE
###Output
_____no_output_____
###Markdown
Deploying and predicting with modelOnce you have a model you're proud of, let's deploy it! All we need to do is give AI Platform the location of the model. Below uses the keras export path of the previous job, but `${JOB_DIR}keras_export/` can always be changed to a different path.Uncomment the delete commands below if you are getting an "already exists error" and want to deploy a new model.
###Code
%%bash
MODEL_NAME="mnist"
MODEL_VERSION=${MODEL_TYPE}
MODEL_LOCATION=${JOB_DIR}keras_export/
echo "Deleting and deploying $MODEL_NAME $MODEL_VERSION from $MODEL_LOCATION ... this will take a few minutes"
#yes | gcloud ai-platform versions delete ${MODEL_VERSION} --model ${MODEL_NAME}
#yes | gcloud ai-platform models delete ${MODEL_NAME}
gcloud config set ai_platform/region global
gcloud ai-platform models create ${MODEL_NAME} --regions $REGION
gcloud ai-platform versions create ${MODEL_VERSION} \
--model ${MODEL_NAME} \
--origin ${MODEL_LOCATION} \
--framework tensorflow \
--runtime-version=2.1
###Output
_____no_output_____
###Markdown
To predict with the model, let's take one of the example images.**TODO 4**: Write a `.json` file with image data to send to an AI Platform deployed model
###Code
import json, codecs
import tensorflow as tf
import matplotlib.pyplot as plt
from mnist_models.trainer import util
HEIGHT = 28
WIDTH = 28
IMGNO = 12
mnist = tf.keras.datasets.mnist.load_data()
(x_train, y_train), (x_test, y_test) = mnist
test_image = x_test[IMGNO]
jsondata = test_image.reshape(HEIGHT, WIDTH, 1).tolist()
json.dump(jsondata, codecs.open("test.json", "w", encoding = "utf-8"))
plt.imshow(test_image.reshape(HEIGHT, WIDTH));
###Output
_____no_output_____
###Markdown
Finally, we can send it to the prediction service. The output will have a 1 in the index of the corresponding digit it is predicting. Congrats! You've completed the lab!
###Code
%%bash
gcloud ai-platform predict \
--model=mnist \
--version=${MODEL_TYPE} \
--json-instances=./test.json
###Output
_____no_output_____
###Markdown
MNIST Image Classification with TensorFlow on Cloud ML EngineThis notebook demonstrates how to implement different image models on MNIST using the [tf.keras API](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras). Learning Objectives1. Understand how to build a Dense Neural Network (DNN) for image classification2. Understand how to use dropout (DNN) for image classification3. Understand how to use Convolutional Neural Networks (CNN)4. Know how to deploy and use an image classification model using Google Cloud's [AI Platform](https://cloud.google.com/ai-platform/)First things first. Configure the parameters below to match your own Google Cloud project details.
###Code
from datetime import datetime
import os
PROJECT = "your-project-id-here" # REPLACE WITH YOUR PROJECT ID
BUCKET = "your-bucket-id-here" # REPLACE WITH YOUR BUCKET NAME
REGION = "us-central1" # REPLACE WITH YOUR BUCKET REGION e.g. us-central1
# Do not change these
os.environ["PROJECT"] = PROJECT
os.environ["BUCKET"] = BUCKET
os.environ["REGION"] = REGION
os.environ["IMAGE_URI"] = os.path.join("gcr.io", PROJECT, "mnist_models")
###Output
_____no_output_____
###Markdown
Building a dynamic modelIn the previous notebook, mnist_linear.ipynb, we ran our code directly from the notebook. In order to run it on the AI Platform, it needs to be packaged as a python module.The boilerplate structure for this module has already been set up in the folder `mnist_models`. The module lives in the sub-folder, `trainer`, and is designated as a python package with the empty `__init__.py` (`mnist_models/trainer/__init__.py`) file. It still needs the model and a trainer to run it, so let's make them.Let's start with the trainer file first. This file parses command line arguments to feed into the model.
###Code
%%writefile mnist_models/trainer/task.py
import argparse
import json
import os
import sys
import model
def _parse_arguments(argv):
"""Parses command-line arguments."""
parser = argparse.ArgumentParser()
parser.add_argument(
'--model_type',
help='Which model type to use',
type=str, default='linear')
parser.add_argument(
'--epochs',
help='The number of epochs to train',
type=int, default=10)
parser.add_argument(
'--steps_per_epoch',
help='The number of steps per epoch to train',
type=int, default=100)
parser.add_argument(
'--job-dir',
help='Directory where to save the given model',
type=str, default='mnist_models/')
return parser.parse_known_args(argv)
def main():
"""Parses command line arguments and kicks off model training."""
args = _parse_arguments(sys.argv[1:])[0]
# Configure path for hyperparameter tuning.
trial_id = json.loads(
os.environ.get('TF_CONFIG', '{}')).get('task', {}).get('trial', '')
output_path = args.job_dir if not trial_id else args.job_dir + '/'
model_layers = model.get_layers(args.model_type)
image_model = model.build_model(model_layers, args.job_dir)
model_history = model.train_and_evaluate(
image_model, args.epochs, args.steps_per_epoch, args.job_dir)
if __name__ == '__main__':
main()
###Output
_____no_output_____
###Markdown
Next, let's group non-model functions into a util file to keep the model file simple. We'll copy over the `scale` and `load_dataset` functions from the previous lab.
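As a quick sanity check of what these helpers do, here is a NumPy-only analogue of the scaling and one-hot steps — no TensorFlow required; the shapes assume the 28x28 MNIST images:

```python
# NumPy-only analogue of scale() and the to_categorical() call, to make
# the resulting shapes concrete for 28x28 MNIST images.
import numpy as np

image = (np.arange(784).reshape(28, 28) % 256).astype(np.uint8)  # fake image
scaled = image.astype(np.float32) / 255      # 0-255 ints -> 0.0-1.0 floats
scaled = scaled[..., np.newaxis]             # add channel axis -> (28, 28, 1)

nclasses = 10
label = 7
one_hot = np.eye(nclasses, dtype=np.float32)[label]  # like to_categorical

print(scaled.shape, float(scaled.max()), one_hot)
```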
###Code
%%writefile mnist_models/trainer/util.py
import tensorflow as tf
def scale(image, label):
"""Scales images from a 0-255 int range to a 0-1 float range"""
image = tf.cast(image, tf.float32)
image /= 255
image = tf.expand_dims(image, -1)
return image, label
def load_dataset(
data, training=True, buffer_size=5000, batch_size=100, nclasses=10):
"""Loads MNIST dataset into a tf.data.Dataset"""
(x_train, y_train), (x_test, y_test) = data
x = x_train if training else x_test
y = y_train if training else y_test
# One-hot encode the classes
y = tf.keras.utils.to_categorical(y, nclasses)
dataset = tf.data.Dataset.from_tensor_slices((x, y))
dataset = dataset.map(scale).batch(batch_size)
if training:
dataset = dataset.shuffle(buffer_size).repeat()
return dataset
###Output
_____no_output_____
###Markdown
Finally, let's code the models! The [tf.keras API](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras) accepts an array of [layers](https://www.tensorflow.org/api_docs/python/tf/keras/layers) into a [model object](https://www.tensorflow.org/api_docs/python/tf/keras/Model), so we can create a dictionary of layers based on the different model types we want to use. The below file has three functions: `get_layers`, `build_model`, and `train_and_evaluate`. We will build the structure of our model in `get_layers` and compile it in `build_model`. Last but not least, we'll copy over the training code from the previous lab into `train_and_evaluate`.**TODO 1**: Define the Keras layers for a DNN model **TODO 2**: Define the Keras layers for a dropout model **TODO 3**: Define the Keras layers for a CNN model Hint: These models progressively build on each other. Look at the imported `tensorflow.keras.layers` modules and the default values for the variables defined in `get_layers` for guidance.
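Since these TODOs are the heart of the lab, here is one plausible way to fill them in — a sketch based on the default hyperparameter values of `get_layers`, not necessarily the lab's official solution:

```python
# One plausible answer for the three TODO layer stacks, using the default
# hyperparameter values from get_layers(). This is a sketch, not
# necessarily the lab's official solution.
from tensorflow.keras.layers import (
    Conv2D, Dense, Dropout, Flatten, MaxPooling2D, Softmax)


def get_layers_sketch(nclasses=10,
                      hidden_layer_1_neurons=400,
                      hidden_layer_2_neurons=100,
                      dropout_rate=0.25,
                      num_filters_1=64, kernel_size_1=3, pooling_size_1=2,
                      num_filters_2=32, kernel_size_2=3, pooling_size_2=2):
    """Returns candidate layer lists for the 'dnn', 'dnn_dropout', 'cnn' keys."""
    return {
        'dnn': [
            Flatten(),
            Dense(hidden_layer_1_neurons, activation='relu'),
            Dense(hidden_layer_2_neurons, activation='relu'),
            Dense(nclasses),
            Softmax()
        ],
        'dnn_dropout': [
            Flatten(),
            Dense(hidden_layer_1_neurons, activation='relu'),
            Dense(hidden_layer_2_neurons, activation='relu'),
            Dropout(dropout_rate),  # randomly zero activations during training
            Dense(nclasses),
            Softmax()
        ],
        'cnn': [
            Conv2D(num_filters_1, kernel_size=kernel_size_1,
                   activation='relu', input_shape=(28, 28, 1)),
            MaxPooling2D(pooling_size_1),
            Conv2D(num_filters_2, kernel_size=kernel_size_2,
                   activation='relu'),
            MaxPooling2D(pooling_size_2),
            Flatten(),
            Dense(hidden_layer_1_neurons, activation='relu'),
            Dense(nclasses),
            Softmax()
        ],
    }


candidate = get_layers_sketch()
print({name: len(stack) for name, stack in candidate.items()})
```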
###Code
%%writefile mnist_models/trainer/model.py
import os
import shutil
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
from tensorflow.keras import Sequential
from tensorflow.keras.callbacks import TensorBoard
from tensorflow.keras.layers import (
Conv2D, Dense, Dropout, Flatten, MaxPooling2D, Softmax)
import util
# Image Variables
WIDTH = 28
HEIGHT = 28
def get_layers(
model_type,
nclasses=10,
hidden_layer_1_neurons=400,
hidden_layer_2_neurons=100,
dropout_rate=0.25,
num_filters_1=64,
kernel_size_1=3,
pooling_size_1=2,
num_filters_2=32,
kernel_size_2=3,
pooling_size_2=2):
"""Constructs layers for a keras model based on a dict of model types."""
model_layers = {
'linear': [
Flatten(),
Dense(nclasses),
Softmax()
],
'dnn': [
# TODO
],
'dnn_dropout': [
# TODO
],
'cnn': [
# TODO
]
}
return model_layers[model_type]
def build_model(layers, output_dir):
"""Compiles keras model for image classification."""
model = Sequential(layers)
model.compile(optimizer='adam',
loss='categorical_crossentropy',
metrics=['accuracy'])
return model
def train_and_evaluate(model, num_epochs, steps_per_epoch, output_dir):
"""Compiles keras model and loads data into it for training."""
mnist = tf.keras.datasets.mnist.load_data()
train_data = util.load_dataset(mnist)
validation_data = util.load_dataset(mnist, training=False)
callbacks = []
if output_dir:
tensorboard_callback = TensorBoard(log_dir=output_dir)
callbacks = [tensorboard_callback]
history = model.fit(
train_data,
validation_data=validation_data,
epochs=num_epochs,
steps_per_epoch=steps_per_epoch,
verbose=2,
callbacks=callbacks)
if output_dir:
export_path = os.path.join(output_dir, 'keras_export')
model.save(export_path, save_format='tf')
return history
###Output
_____no_output_____
###Markdown
Local TrainingWith everything set up, let's run locally to test the code. Some of the previous tests have been copied over into a testing script `mnist_models/trainer/test.py` to make sure the model still passes our previous checks. On `line 34`, you can specify which model types you would like to check. `line 37` and `line 38` have the number of epochs and steps per epoch, respectively. Now that we know that our models are working as expected, let's run them on the [Google Cloud AI Platform](https://cloud.google.com/ml-engine/docs/). We can run it as a python module locally first using the command line.The below cell transfers some of our variables to the command line as well as creating a job directory including a timestamp. This is where our model and tensorboard data will be stored.
###Code
current_time = datetime.now().strftime("%y%m%d_%H%M%S")
model_type = 'cnn'
os.environ["MODEL_TYPE"] = model_type
os.environ["JOB_DIR"] = "mnist_trained/models/{}_{}/".format(
model_type, current_time)
###Output
_____no_output_____
###Markdown
The cell below runs the local version of the code. The `epochs` and `steps_per_epoch` flags can be changed to run for longer or shorter, as defined in our `mnist_models/trainer/task.py` file.
###Code
%%bash
mkdir $JOB_DIR
python3 mnist_models/trainer/task.py \
--job-dir=$JOB_DIR \
--epochs=5 \
--steps_per_epoch=50 \
--model_type=$MODEL_TYPE
###Output
_____no_output_____
###Markdown
Let's check out how the model did in tensorboard and confirm that it's good to go before kicking it off to train on the cloud. If running on a Deep Learning VM, open the folder corresponding to the `--job-dir` above. Then, go to File > New Launcher. Click on Tensorboard under "Other".If running locally, the following command can be run in a terminal:`tensorboard --logdir=` Training on the cloudSince we're using an unreleased version of TensorFlow on AI Platform, we can instead use a [Deep Learning Container](https://cloud.google.com/ai-platform/deep-learning-containers/docs/overview) in order to take advantage of libraries and applications not normally packaged with AI Platform. Below is a simple [Dockerfile](https://docs.docker.com/engine/reference/builder/) which copies our code to be used in a TF2 environment.
###Code
%%writefile mnist_models/Dockerfile
FROM gcr.io/deeplearning-platform-release/tf2-cpu
COPY mnist_models/trainer /mnist_models/trainer
ENTRYPOINT ["python3", "mnist_models/trainer/task.py"]
###Output
_____no_output_____
###Markdown
The below command builds the image and ships it off to Google Cloud so it can be used for AI Platform. When built, it will show up [here](http://console.cloud.google.com/gcr) with the name `mnist_models`. ([Click here](https://console.cloud.google.com/cloud-build) to enable Cloud Build)
###Code
!docker build -f mnist_models/Dockerfile -t $IMAGE_URI ./
!docker push $IMAGE_URI
###Output
_____no_output_____
###Markdown
Finally, we can kick off the [AI Platform training job](https://cloud.google.com/sdk/gcloud/reference/ai-platform/jobs/submit/training). We can pass in our Docker image using the `master-image-uri` flag.
###Code
current_time = datetime.now().strftime("%y%m%d_%H%M%S")
model_type = 'cnn'
os.environ["MODEL_TYPE"] = model_type
os.environ["JOB_DIR"] = "gs://{}/mnist_{}_{}/".format(
BUCKET, model_type, current_time)
os.environ["JOB_NAME"] = "mnist_{}_{}".format(
model_type, current_time)
%%bash
echo $JOB_DIR $REGION $JOB_NAME
gcloud ai-platform jobs submit training $JOB_NAME \
--staging-bucket=gs://$BUCKET \
--region=$REGION \
--master-image-uri=$IMAGE_URI \
--scale-tier=BASIC_GPU \
--job-dir=$JOB_DIR \
-- \
--model_type=$MODEL_TYPE
###Output
_____no_output_____
###Markdown
Can't wait to see the results? Run the code below and copy the output into the [Google Cloud Shell](https://console.cloud.google.com/home/dashboard?cloudshell=true) to follow along with TensorBoard. Look at the web preview on port 6006.
###Code
!echo "tensorboard --logdir $JOB_DIR"
###Output
_____no_output_____
###Markdown
Deploying and predicting with modelOnce you have a model you're proud of, let's deploy it! All we need to do is give AI Platform the location of the model. Below uses the keras export path of the previous job, but `${JOB_DIR}keras_export/` can always be changed to a different path.Even though we're using a 1.14 runtime, it's compatible with TF2 exported models. Phew!Uncomment the delete commands below if you are getting an "already exists error" and want to deploy a new model.
###Code
%%bash
MODEL_NAME="mnist"
MODEL_VERSION=${MODEL_TYPE}
MODEL_LOCATION=${JOB_DIR}keras_export/
echo "Deleting and deploying $MODEL_NAME $MODEL_VERSION from $MODEL_LOCATION ... this will take a few minutes"
#yes | gcloud ai-platform versions delete ${MODEL_VERSION} --model ${MODEL_NAME}
#yes | gcloud ai-platform models delete ${MODEL_NAME}
gcloud ai-platform models create ${MODEL_NAME} --regions $REGION
gcloud ai-platform versions create ${MODEL_VERSION} \
--model ${MODEL_NAME} \
--origin ${MODEL_LOCATION} \
--framework tensorflow \
--runtime-version=1.14
###Output
_____no_output_____
###Markdown
To predict with the model, let's take one of the example images.**TODO 4**: Write a `.json` file with image data to send to an AI Platform deployed model
###Code
import json, codecs
import tensorflow as tf
import matplotlib.pyplot as plt
from mnist_models.trainer import util
HEIGHT = 28
WIDTH = 28
IMGNO = 12
mnist = tf.keras.datasets.mnist.load_data()
(x_train, y_train), (x_test, y_test) = mnist
test_image = x_test[IMGNO]
jsondata = test_image.reshape(HEIGHT, WIDTH, 1).tolist()
json.dump(jsondata, codecs.open("test.json", "w", encoding = "utf-8"))
plt.imshow(test_image.reshape(HEIGHT, WIDTH));
###Output
_____no_output_____
###Markdown
Finally, we can send it to the prediction service. The output will have a 1 in the index of the corresponding digit it is predicting. Congrats! You've completed the lab!
###Code
%%bash
gcloud ai-platform predict \
--model=mnist \
--version=${MODEL_TYPE} \
--json-instances=./test.json
###Output
_____no_output_____
###Markdown
MNIST Image Classification with TensorFlow on Cloud AI PlatformThis notebook demonstrates how to implement different image models on MNIST using the [tf.keras API](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras). Learning Objectives1. Understand how to build a Dense Neural Network (DNN) for image classification2. Understand how to use dropout (DNN) for image classification3. Understand how to use Convolutional Neural Networks (CNN)4. Know how to deploy and use an image classification model using Google Cloud's [AI Platform](https://cloud.google.com/ai-platform/)First things first. Configure the parameters below to match your own Google Cloud project details.
###Code
!sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst
from datetime import datetime
import os
PROJECT = "your-project-id-here" # REPLACE WITH YOUR PROJECT ID
BUCKET = "your-bucket-id-here" # REPLACE WITH YOUR BUCKET NAME
REGION = "us-central1" # REPLACE WITH YOUR BUCKET REGION e.g. us-central1
MODEL_TYPE = "cnn"  # "linear", "dnn", "dnn_dropout", or "cnn"
# Do not change these
os.environ["PROJECT"] = PROJECT
os.environ["BUCKET"] = BUCKET
os.environ["REGION"] = REGION
os.environ["MODEL_TYPE"] = MODEL_TYPE
os.environ["TFVERSION"] = "2.1" # Tensorflow version
os.environ["IMAGE_URI"] = os.path.join("gcr.io", PROJECT, "mnist_models")
###Output
_____no_output_____
###Markdown
Building a dynamic modelIn the previous notebook, mnist_linear.ipynb, we ran our code directly from the notebook. In order to run it on the AI Platform, it needs to be packaged as a python module.The boilerplate structure for this module has already been set up in the folder `mnist_models`. The module lives in the sub-folder, `trainer`, and is designated as a python package with the empty `__init__.py` (`mnist_models/trainer/__init__.py`) file. It still needs the model and a trainer to run it, so let's make them.Let's start with the trainer file first. This file parses command line arguments to feed into the model.
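The `parse_known_args` call below is what lets the trainer tolerate extra flags that a caller may append to the command line. Here is a quick standalone illustration; the `--verbosity` flag is an arbitrary made-up extra:

```python
# Standalone illustration of argparse.parse_known_args(), which task.py
# uses so that unrecognized flags don't crash the trainer. The
# --verbosity flag here is an arbitrary made-up extra.
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--model_type', type=str, default='linear')
parser.add_argument('--epochs', type=int, default=10)

known, unknown = parser.parse_known_args(
    ['--model_type=cnn', '--epochs=5', '--verbosity=high'])
print(known.model_type, known.epochs, unknown)  # cnn 5 ['--verbosity=high']
```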
###Code
%%writefile mnist_models/trainer/task.py
import argparse
import json
import os
import sys
from . import model
def _parse_arguments(argv):
"""Parses command-line arguments."""
parser = argparse.ArgumentParser()
parser.add_argument(
'--model_type',
help='Which model type to use',
type=str, default='linear')
parser.add_argument(
'--epochs',
help='The number of epochs to train',
type=int, default=10)
parser.add_argument(
'--steps_per_epoch',
help='The number of steps per epoch to train',
type=int, default=100)
parser.add_argument(
'--job-dir',
help='Directory where to save the given model',
type=str, default='mnist_models/')
return parser.parse_known_args(argv)
def main():
"""Parses command line arguments and kicks off model training."""
args = _parse_arguments(sys.argv[1:])[0]
# Configure path for hyperparameter tuning.
trial_id = json.loads(
os.environ.get('TF_CONFIG', '{}')).get('task', {}).get('trial', '')
output_path = args.job_dir if not trial_id else args.job_dir + '/'
model_layers = model.get_layers(args.model_type)
image_model = model.build_model(model_layers, args.job_dir)
model_history = model.train_and_evaluate(
image_model, args.epochs, args.steps_per_epoch, args.job_dir)
if __name__ == '__main__':
main()
###Output
_____no_output_____
###Markdown
Next, let's group non-model functions into a util file to keep the model file simple. We'll copy over the `scale` and `load_dataset` functions from the previous lab.
###Code
%%writefile mnist_models/trainer/util.py
import tensorflow as tf
def scale(image, label):
"""Scales images from a 0-255 int range to a 0-1 float range"""
image = tf.cast(image, tf.float32)
image /= 255
image = tf.expand_dims(image, -1)
return image, label
def load_dataset(
data, training=True, buffer_size=5000, batch_size=100, nclasses=10):
"""Loads MNIST dataset into a tf.data.Dataset"""
(x_train, y_train), (x_test, y_test) = data
x = x_train if training else x_test
y = y_train if training else y_test
# One-hot encode the classes
y = tf.keras.utils.to_categorical(y, nclasses)
dataset = tf.data.Dataset.from_tensor_slices((x, y))
dataset = dataset.map(scale).batch(batch_size)
if training:
dataset = dataset.shuffle(buffer_size).repeat()
return dataset
###Output
_____no_output_____
###Markdown
Finally, let's code the models! The [tf.keras API](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras) accepts an array of [layers](https://www.tensorflow.org/api_docs/python/tf/keras/layers) into a [model object](https://www.tensorflow.org/api_docs/python/tf/keras/Model), so we can create a dictionary of layers based on the different model types we want to use. The below file has three functions: `get_layers`, `build_model`, and `train_and_evaluate`. We will build the structure of our model in `get_layers` and compile it in `build_model`. Last but not least, we'll copy over the training code from the previous lab into `train_and_evaluate`.**TODO 1**: Define the Keras layers for a DNN model **TODO 2**: Define the Keras layers for a dropout model **TODO 3**: Define the Keras layers for a CNN model Hint: These models progressively build on each other. Look at the imported `tensorflow.keras.layers` modules and the default values for the variables defined in `get_layers` for guidance.
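Since these TODOs are the heart of the lab, here is one plausible way to fill them in — a sketch based on the default hyperparameter values of `get_layers`, not necessarily the lab's official solution:

```python
# One plausible answer for the three TODO layer stacks, using the default
# hyperparameter values from get_layers(). This is a sketch, not
# necessarily the lab's official solution.
from tensorflow.keras.layers import (
    Conv2D, Dense, Dropout, Flatten, MaxPooling2D, Softmax)


def get_layers_sketch(nclasses=10,
                      hidden_layer_1_neurons=400,
                      hidden_layer_2_neurons=100,
                      dropout_rate=0.25,
                      num_filters_1=64, kernel_size_1=3, pooling_size_1=2,
                      num_filters_2=32, kernel_size_2=3, pooling_size_2=2):
    """Returns candidate layer lists for the 'dnn', 'dnn_dropout', 'cnn' keys."""
    return {
        'dnn': [
            Flatten(),
            Dense(hidden_layer_1_neurons, activation='relu'),
            Dense(hidden_layer_2_neurons, activation='relu'),
            Dense(nclasses),
            Softmax()
        ],
        'dnn_dropout': [
            Flatten(),
            Dense(hidden_layer_1_neurons, activation='relu'),
            Dense(hidden_layer_2_neurons, activation='relu'),
            Dropout(dropout_rate),  # randomly zero activations during training
            Dense(nclasses),
            Softmax()
        ],
        'cnn': [
            Conv2D(num_filters_1, kernel_size=kernel_size_1,
                   activation='relu', input_shape=(28, 28, 1)),
            MaxPooling2D(pooling_size_1),
            Conv2D(num_filters_2, kernel_size=kernel_size_2,
                   activation='relu'),
            MaxPooling2D(pooling_size_2),
            Flatten(),
            Dense(hidden_layer_1_neurons, activation='relu'),
            Dense(nclasses),
            Softmax()
        ],
    }


candidate = get_layers_sketch()
print({name: len(stack) for name, stack in candidate.items()})
```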
###Code
%%writefile mnist_models/trainer/model.py
import os
import shutil
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
from tensorflow.keras import Sequential
from tensorflow.keras.callbacks import TensorBoard
from tensorflow.keras.layers import (
Conv2D, Dense, Dropout, Flatten, MaxPooling2D, Softmax)
from . import util
# Image Variables
WIDTH = 28
HEIGHT = 28
def get_layers(
model_type,
nclasses=10,
hidden_layer_1_neurons=400,
hidden_layer_2_neurons=100,
dropout_rate=0.25,
num_filters_1=64,
kernel_size_1=3,
pooling_size_1=2,
num_filters_2=32,
kernel_size_2=3,
pooling_size_2=2):
"""Constructs layers for a keras model based on a dict of model types."""
model_layers = {
'linear': [
Flatten(),
Dense(nclasses),
Softmax()
],
'dnn': [
# TODO
],
'dnn_dropout': [
# TODO
],
'cnn': [
# TODO
]
}
return model_layers[model_type]
def build_model(layers, output_dir):
"""Compiles keras model for image classification."""
model = Sequential(layers)
model.compile(optimizer='adam',
loss='categorical_crossentropy',
metrics=['accuracy'])
return model
def train_and_evaluate(model, num_epochs, steps_per_epoch, output_dir):
"""Compiles keras model and loads data into it for training."""
mnist = tf.keras.datasets.mnist.load_data()
train_data = util.load_dataset(mnist)
validation_data = util.load_dataset(mnist, training=False)
callbacks = []
if output_dir:
tensorboard_callback = TensorBoard(log_dir=output_dir)
callbacks = [tensorboard_callback]
history = model.fit(
train_data,
validation_data=validation_data,
epochs=num_epochs,
steps_per_epoch=steps_per_epoch,
verbose=2,
callbacks=callbacks)
if output_dir:
export_path = os.path.join(output_dir, 'keras_export')
model.save(export_path, save_format='tf')
return history
###Output
_____no_output_____
###Markdown
Local TrainingWith everything set up, let's run locally to test the code. Some of the previous tests have been copied over into a testing script `mnist_models/trainer/test.py` to make sure the model still passes our previous checks. On `line 13`, you can specify which model types you would like to check. `line 14` and `line 15` have the number of epochs and steps per epoch, respectively.Moment of truth! Run the code below to check your models against the unit tests. If you see "OK" at the end when it's finished running, congrats! You've passed the tests!
###Code
!python3 -m mnist_models.trainer.test
###Output
_____no_output_____
###Markdown
Now that we know that our models are working as expected, let's run them on the [Google Cloud AI Platform](https://cloud.google.com/ml-engine/docs/). We can run it as a python module locally first using the command line.The below cell transfers some of our variables to the command line as well as creating a job directory including a timestamp.
###Code
current_time = datetime.now().strftime("%y%m%d_%H%M%S")
model_type = 'cnn'
os.environ["MODEL_TYPE"] = model_type
os.environ["JOB_DIR"] = "mnist_models/models/{}_{}/".format(
model_type, current_time)
###Output
_____no_output_____
###Markdown
The cell below runs the local version of the code. The `epochs` and `steps_per_epoch` flags can be changed to run for longer or shorter, as defined in our `mnist_models/trainer/task.py` file.
###Code
%%bash
python3 -m mnist_models.trainer.task \
--job-dir=$JOB_DIR \
--epochs=5 \
--steps_per_epoch=50 \
--model_type=$MODEL_TYPE
###Output
_____no_output_____
###Markdown
Training on the cloudSince we're using an unreleased version of TensorFlow on AI Platform, we can instead use a [Deep Learning Container](https://cloud.google.com/ai-platform/deep-learning-containers/docs/overview) in order to take advantage of libraries and applications not normally packaged with AI Platform. Below is a simple [Dockerfile](https://docs.docker.com/engine/reference/builder/) which copies our code to be used in a TF2 environment.
###Code
%%writefile mnist_models/Dockerfile
FROM gcr.io/deeplearning-platform-release/tf2-cpu
COPY mnist_models/trainer /mnist_models/trainer
ENTRYPOINT ["python3", "-m", "mnist_models.trainer.task"]
###Output
_____no_output_____
###Markdown
The below command builds the image and ships it off to Google Cloud so it can be used for AI Platform. When built, it will show up [here](http://console.cloud.google.com/gcr) with the name `mnist_models`. ([Click here](https://console.cloud.google.com/cloud-build) to enable Cloud Build)
###Code
!docker build -f mnist_models/Dockerfile -t $IMAGE_URI ./
!docker push $IMAGE_URI
###Output
_____no_output_____
###Markdown
Finally, we can kick off the [AI Platform training job](https://cloud.google.com/sdk/gcloud/reference/ai-platform/jobs/submit/training). We can pass in our Docker image using the `master-image-uri` flag.
###Code
current_time = datetime.now().strftime("%y%m%d_%H%M%S")
model_type = 'cnn'
os.environ["MODEL_TYPE"] = model_type
os.environ["JOB_DIR"] = "gs://{}/mnist_{}_{}/".format(
BUCKET, model_type, current_time)
os.environ["JOB_NAME"] = "mnist_{}_{}".format(
model_type, current_time)
%%bash
echo $JOB_DIR $REGION $JOB_NAME
gcloud ai-platform jobs submit training $JOB_NAME \
--staging-bucket=gs://$BUCKET \
--region=$REGION \
--master-image-uri=$IMAGE_URI \
--scale-tier=BASIC_GPU \
--job-dir=$JOB_DIR \
-- \
--model_type=$MODEL_TYPE
###Output
_____no_output_____
###Markdown
Deploying and predicting with the modelOnce you have a model you're proud of, let's deploy it! All we need to do is give AI Platform the location of the model. The commands below use the Keras export path of the previous job, but `${JOB_DIR}keras_export/` can always be changed to a different path. Uncomment the delete commands below if you are getting an "already exists" error and want to deploy a new model.
###Code
%%bash
MODEL_NAME="mnist"
MODEL_VERSION=${MODEL_TYPE}
MODEL_LOCATION=${JOB_DIR}keras_export/
echo "Deleting and deploying $MODEL_NAME $MODEL_VERSION from $MODEL_LOCATION ... this will take a few minutes"
#yes | gcloud ai-platform versions delete ${MODEL_VERSION} --model ${MODEL_NAME}
#yes | gcloud ai-platform models delete ${MODEL_NAME}
gcloud ai-platform models create ${MODEL_NAME} --regions $REGION
gcloud ai-platform versions create ${MODEL_VERSION} \
--model ${MODEL_NAME} \
--origin ${MODEL_LOCATION} \
--framework tensorflow \
--runtime-version=2.1
###Output
_____no_output_____
###Markdown
To predict with the model, let's take one of the example images.**TODO 4**: Write a `.json` file with image data to send to an AI Platform deployed model
###Code
import json, codecs
import tensorflow as tf
import matplotlib.pyplot as plt
from mnist_models.trainer import util
HEIGHT = 28
WIDTH = 28
IMGNO = 12
mnist = tf.keras.datasets.mnist.load_data()
(x_train, y_train), (x_test, y_test) = mnist
test_image = x_test[IMGNO]
jsondata = test_image.reshape(HEIGHT, WIDTH, 1).tolist()
json.dump(jsondata, codecs.open("test.json", "w", encoding = "utf-8"))
plt.imshow(test_image.reshape(HEIGHT, WIDTH));
###Output
_____no_output_____
###Markdown
Finally, we can send it to the prediction service. The output will have a 1 in the index of the corresponding digit it is predicting. Congrats! You've completed the lab!
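The response is a one-hot-style vector over the ten digit classes, so the predicted digit can be recovered with an argmax. A quick sketch -- the `prediction` row here is made up for illustration; a real response comes back from the `gcloud` command as JSON:

```python
import numpy as np

# Hypothetical one-hot response row for a model that predicts the digit 9
prediction = [0, 0, 0, 0, 0, 0, 0, 0, 0, 1]
predicted_digit = int(np.argmax(prediction))
print(predicted_digit)  # 9
```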
###Code
%%bash
gcloud ai-platform predict \
--model=mnist \
--version=${MODEL_TYPE} \
--json-instances=./test.json
###Output
_____no_output_____ |
quantization.ipynb | ###Markdown
Signal QuantizationQuantize the given data source "Dat_2.mat" using both uniform quantization and the Lloyd–Max quantization method. > 1) Quantize into 16 and 64 levels. 2) Use the boundary (-6, 6). 3) Analyze the quantization noise.
###Code
import numpy as np
import scipy.io as sio
import scipy.integrate as integrate
import matplotlib.pyplot as plt
from scipy.interpolate import interp1d
import warnings
warnings.filterwarnings('ignore')
###Output
_____no_output_____
###Markdown
Load the data file into numpy array
###Code
mat_content = sio.loadmat("Dat_2.mat")
data = mat_content['X'].reshape(10000)
bound = (-6, 6)
###Output
_____no_output_____
###Markdown
Have a look at the data
###Code
plt.plot(data)
plt.grid()
###Output
_____no_output_____
###Markdown
Define the distortion function (total squared error over all samples)
###Code
def distortion(x, y):
return(((y - x) ** 2).sum())
###Output
_____no_output_____
###Markdown
Uniform quantization
###Code
def uniform_quanti(data, n, bound=(min(data), max(data))):
data[data < bound[0]] = bound[0]
data[data > bound[1]] = bound[1]
delta = (bound[1] - bound[0])/n
return(delta * ((data/delta + 1/2) // 1))
uni_quan16 = uniform_quanti(data, 16, (-6, 6))
uni_quan64 = uniform_quanti(data, 64, (-6, 6))
uni_distortion16 = distortion(data, uni_quan16)
uni_distortion64 = distortion(data, uni_quan64)
print("4-bit distortion:", uni_distortion16)
print("6-bit distortion:", uni_distortion64)
###Output
4-bit distortion: 463.4149198485427
6-bit distortion: 29.585591761753818
###Markdown
Lloyd–Max quantizationGiven a fixed code length (4/6-bit), the optimization goal is to minimize the distortion. The distortion $D(b, y)$ is continuous and differentiable with respect to both $b$ and $y$. Thus the minimum point satisfies: $\displaystyle \frac{\partial D}{\partial b_k} = 0 \Rightarrow b_k = \frac{y_{k-1} + y_k}{2}$ $\displaystyle \frac{\partial D}{\partial y_k} = 0 \Rightarrow y_k = \frac{\int_{b_k}^{b_{k+1}} x \cdot \text{pdf}(x)dx}{\int_{b_k}^{b_{k+1}}\text{pdf}(x)dx}$ Iterate between these two conditions to optimize the result. Estimate the PDF from the sample data
###Code
def count_freq(data, n=10000, bound=(min(data), max(data))):
delta = (bound[1] - bound[0]) / n
freq = np.array([(((i*delta + bound[0]) <= data) & (data < ((i+1)*delta + bound[0]))).sum() for i in range(n)])
return((freq/data.size, delta))
n = 50
(freq, delta) = count_freq(data, n, bound)
x = np.linspace(bound[0], bound[1], n)
# Using a cubic interpolation to approximate the PDF
# Curve fitting should be a better approach, but considering
# the programming complexity, leave it for another time.
pdf = interp1d(x, freq/delta, kind="cubic")
xpdf = interp1d(x, x*freq/delta, kind="cubic")
plt.plot(x, pdf(x), '-', label='PDF')
plt.fill_between(x, pdf(x), 0, facecolor='orange')
plt.title("PDF of the data")
plt.text(-6, 0.2, "$\int_{-6} ^{6} pdf(x)dx = $ %.5f"%(integrate.quad(pdf, -6, 6)[0]), fontsize=14)
plt.grid()
def LM_iterate(b, y, n, pdf, xpdf):
B_Y = []
for _ in range(max(n) + 1):
b[1 : -1] = (y[1:] + y[0:-1]) / 2
for i in range(y.size):
num, err = integrate.quad(xpdf, b[i], b[i+1])
den, err = integrate.quad(pdf, b[i], b[i+1])
y[i] = num/den
if(_ in n): B_Y.append([b.copy(), y.copy()])
return(B_Y)
def quantify(data, b, y):
data[data < b[0]] = b[0]
data[data > b[-1]] = b[-1]
q_data = np.array([y[np.argmax(b >= data[i]) - 1] for i in range(data.size)])
return(q_data.copy())
y16 = np.sort(((np.random.rand(16)- 1/2) * 2 * 6))
b16 = np.zeros(16 + 1)
y64 = np.sort(((np.random.rand(64)- 1/2) * 2 * 6))
b64 = np.zeros(64 + 1)
b16[0] = -6
b16[-1] = 6
b64[0] = -6
b64[-1] = 6
n = [5, 10, 20, 30, 40, 50, 60, 80, 100]
B_Y16 = LM_iterate(b16, y16, n, pdf, xpdf)
B_Y64 = LM_iterate(b64, y64, n, pdf, xpdf)
q_data16 = []
q_data64 = []
for b, y in B_Y16:
q_data16.append(quantify(data, b, y))
for b, y in B_Y64:
q_data64.append(quantify(data, b, y))
distortions16 = []
distortions64 = []
for qd in q_data16:
distortions16.append(distortion(data, qd))
for qd in q_data64:
distortions64.append(distortion(data, qd))
plt.figure(figsize=(16, 5))
plt.subplot(1, 2, 1)
plt.plot(n, distortions16, '-o')
plt.title("4-bit quantization distortion")
plt.xticks(n)
plt.yticks(distortions16)
plt.ylabel("Distortion")
plt.xlabel("Number of iteration(n)")
plt.text(25, 293, "4-bit uniform quantization distortion: %.1f"%(uni_distortion16))
plt.grid()
plt.subplot(1, 2, 2)
plt.plot(n, distortions64, '-o')
plt.title("6-bit quantization distortion")
plt.xticks(n)
plt.yticks(distortions64)
plt.ylabel("Distortion")
plt.xlabel("Number of iteration(n)")
plt.text(25, 42, "6-bit uniform quantization distortion: %.1f"%(uni_distortion64))
plt.grid()
#plt.savefig("distortion.png", dpi=200)
###Output
_____no_output_____ |
universal-computation/universal_computation/Split_EuroSAT.ipynb | ###Markdown
This code splits the EuroSAT dataset into train / test sets
###Code
from google.colab import drive
drive.mount('/content/drive', force_remount=True)
#############################
import sys
sys.path.insert(0, '/content/drive/GoogleDrive/MyDrive/GitHub/TransformerRS/RSdataset/')
import os
import numpy as np
import shutil
import pandas as pd
print("########### Train Test Val Script started ###########")
#data_csv = pd.read_csv("DataSet_Final.csv") ##Use if you have classes saved in any .csv file
root_dir = '/content/drive/MyDrive/GitHub/RSdataset/data/eurosat-rgb/split/'
classes_dir = ['AnnualCrop', 'Forest', 'HerbaceousVegetation', 'Highway', 'Industrial' , 'Pasture', 'PermanentCrop', 'Residential', 'River' , 'SeaLake']
processed_dir = '/content/drive/MyDrive/GitHub/RSdataset/data/eurosat-rgb/2750'
val_ratio = 0.0
test_ratio = 0.20
for cls in classes_dir:
folder_name = cls
print("$$$$$$$ Class Name " + folder_name + " $$$$$$$")
src = processed_dir +"/" + folder_name # Folder to copy images from
allFileNames = os.listdir(src)
np.random.shuffle(allFileNames)
train_FileNames, test_FileNames, val_FileNames = np.split(np.array(allFileNames),
[int(len(allFileNames) * (1 - (val_ratio + test_ratio))),
int(len(allFileNames) * (1 - val_ratio)),
])
train_FileNames = [src + '/' + name for name in train_FileNames.tolist()]
val_FileNames = [src + '/' + name for name in val_FileNames.tolist()]
test_FileNames = [src + '/' + name for name in test_FileNames.tolist()]
print('Total images: '+ str(len(allFileNames)))
print('Training: '+ str(len(train_FileNames)))
print('Validation: '+ str(len(val_FileNames)))
print('Testing: '+ str(len(test_FileNames)))
# # Creating Train / Val / Test folders (One time use)
os.makedirs(root_dir + 'train/' + folder_name)
os.makedirs(root_dir + 'val/' + folder_name)
os.makedirs(root_dir + 'test/' + folder_name)
# Copy-pasting images
for name in train_FileNames:
shutil.copy(name, root_dir + 'train/' + folder_name)
for name in val_FileNames:
shutil.copy(name, root_dir + 'val/' + folder_name)
for name in test_FileNames:
shutil.copy(name, root_dir + 'test/' + folder_name)
print("########### Train Test Val Script Ended ###########")
###Output
$$$$$$$ Class Name AnnualCrop $$$$$$$
Total images: 3000
Training: 2400
Validation: 0
Testing: 600
$$$$$$$ Class Name Forest $$$$$$$
Total images: 3000
Training: 2400
Validation: 0
Testing: 600
$$$$$$$ Class Name HerbaceousVegetation $$$$$$$
Total images: 3000
Training: 2400
Validation: 0
Testing: 600
$$$$$$$ Class Name Highway $$$$$$$
Total images: 2500
Training: 2000
Validation: 0
Testing: 500
$$$$$$$ Class Name Industrial $$$$$$$
Total images: 2500
Training: 2000
Validation: 0
Testing: 500
$$$$$$$ Class Name Pasture $$$$$$$
Total images: 2000
Training: 1600
Validation: 0
Testing: 400
$$$$$$$ Class Name PermanentCrop $$$$$$$
Total images: 2500
Training: 2000
Validation: 0
Testing: 500
$$$$$$$ Class Name Residential $$$$$$$
Total images: 3000
Training: 2400
Validation: 0
Testing: 600
$$$$$$$ Class Name River $$$$$$$
Total images: 2500
Training: 2000
Validation: 0
Testing: 500
$$$$$$$ Class Name SeaLake $$$$$$$
Total images: 3000
Training: 2400
Validation: 0
Testing: 600
|
examples/screenshot.ipynb | ###Markdown
**Note**: this operation is asynchronous. We need to wait for the widgets to synchronize behind the scenes...
###Code
from IPython.display import Image
with open('screenshot.png', 'wb') as f:
try:
out = plot.screenshot.decode('base64')
except: # Python 3
from base64 import b64decode
out = b64decode(plot.screenshot)
f.write(out)
Image(url='screenshot.png')
###Output
_____no_output_____
###Markdown
Expected result:
###Code
plot.screenshot_scale = 4.0
plot.fetch_screenshot()
###Output
_____no_output_____
###Markdown
**Note**: this operation is asynchronous. We need to wait for the widgets to synchronize behind the scenes...
###Code
with open('screenshot_upscale.png', 'wb') as f:
try:
out = plot.screenshot.decode('base64')
except: # Python 3
from base64 import b64decode
out = b64decode(plot.screenshot)
f.write(out)
from scipy import misc
print(misc.imread('screenshot.png').shape, misc.imread('screenshot_upscale.png').shape)
###Output
_____no_output_____
###Markdown
**Note**: this operation is asynchronous. We need to wait for the widgets to synchronize behind the scenes...
###Code
from IPython.display import Image
with open('screenshot.png', 'wb') as f:
try:
out = plot.screenshot.decode('base64')
except: # Python 3
from base64 import b64decode
out = b64decode(plot.screenshot)
f.write(out)
Image(url='screenshot.png')
###Output
_____no_output_____
###Markdown
Expected result:
###Code
plot.screenshot_scale = 4.0
plot.fetch_screenshot()
###Output
_____no_output_____
###Markdown
**Note**: this operation is asynchronous. We need to wait for the widgets to synchronize behind the scenes...
###Code
with open('screenshot_upscale.png', 'wb') as f:
try:
out = plot.screenshot.decode('base64')
except: # Python 3
from base64 import b64decode
out = b64decode(plot.screenshot)
f.write(out)
import imageio
print(imageio.imread('screenshot.png').shape, imageio.imread('screenshot_upscale.png').shape)
###Output
_____no_output_____
###Markdown
Taking a screenshot
###Code
import jupyterlab_dosbox
import matplotlib.pyplot as plt
db = jupyterlab_dosbox.DosboxModel()
###Output
_____no_output_____
###Markdown
Now we have to wait a moment, because I don't know yet how to make the spin-up be communicated back to python.
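A plain `time.sleep(...)` does the job, or a small polling helper like the sketch below. Note that `wait_for` is not part of `jupyterlab_dosbox` -- it is just an illustration of how to block until some observable condition holds:

```python
import time

def wait_for(predicate, timeout=10.0, interval=0.5):
    """Poll `predicate` until it returns True or `timeout` seconds elapse."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if predicate():
            return True
        time.sleep(interval)
    return False

# e.g. block until the emulator has produced its first screenshot:
# wait_for(lambda: db.last_screenshot is not None)
```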
###Code
db.send_command("dir")
db.screenshot()
plt.imshow(db.last_screenshot)
###Output
_____no_output_____
lab3/solutions/RL_Solution.ipynb | ###Markdown
Now that we have defined the core network architecture, we will define an *action function* that executes a forward pass through the network, given a set of observations, and samples from the output. This sampling from the output probabilities will be used to select the next action for the agent. **Critically, this action function is totally general -- we will use this function for both Cartpole and Pong, and it is applicable to other RL tasks, as well!**
###Code
### Define the agent's action function ###
# Function that takes observations as input, executes a forward pass through model,
# and outputs a sampled action.
# Arguments:
# model: the network that defines our agent
# observation: observation which is fed as input to the model
# Returns:
# action: choice of agent action
def choose_action(model, observation):
# add batch dimension to the observation
observation = np.expand_dims(observation, axis=0)
"""TODO: feed the observations through the model to predict the log probabilities of each possible action."""
logits = model.predict(observation) # TODO
# logits = model.predict('''TODO''')
# pass the log probabilities through a softmax to compute true probabilities
prob_weights = tf.nn.softmax(logits).numpy()
"""TODO: randomly sample from the prob_weights to pick an action.
Hint: carefully consider the dimensionality of the input probabilities (vector) and the output action (scalar)"""
action = np.random.choice(n_actions, size=1, p=prob_weights.flatten())[0] # TODO
# action = np.random.choice('''TODO''', size=1, p=''''TODO''')['''TODO''']
return action
###Output
_____no_output_____
###Markdown
3.2 Define the agent's memoryNow that we have instantiated the environment and defined the agent network architecture and action function, we are ready to move on to the next step in our RL workflow:1. **Initialize our environment and our agent**: here we will describe the different observations and actions the agent can make in the environment.2. **Define our agent's memory**: this will enable the agent to remember its past actions, observations, and rewards.3. **Define the learning algorithm**: this will be used to reinforce the agent's good behaviors and discourage bad behaviors.In reinforcement learning, training occurs alongside the agent's acting in the environment; an *episode* refers to a sequence of actions that ends in some terminal state, such as the pole falling down or the cart crashing. The agent will need to remember all of its observations and actions, such that once an episode ends, it can learn to "reinforce" the good actions and punish the undesirable actions via training. Our first step is to define a simple memory buffer that contains the agent's observations, actions, and received rewards from a given episode. **Once again, note the modularity of this memory buffer -- it can and will be applied to other RL tasks as well!**
###Code
### Agent Memory ###
class Memory:
def __init__(self):
self.clear()
# Resets/restarts the memory buffer
def clear(self):
self.observations = []
self.actions = []
self.rewards = []
# Add observations, actions, rewards to memory
def add_to_memory(self, new_observation, new_action, new_reward):
self.observations.append(new_observation)
"""TODO: update the list of actions with new action"""
self.actions.append(new_action) # TODO
# ['''TODO''']
"""TODO: update the list of rewards with new reward"""
self.rewards.append(new_reward) # TODO
# ['''TODO''']
memory = Memory()
###Output
_____no_output_____
###Markdown
3.3 Reward functionWe're almost ready to begin the learning algorithm for our agent! The next step is to compute the rewards of our agent as it acts in the environment. Since we (and the agent) are uncertain about if and when the game or task will end (i.e., when the pole will fall), it is useful to emphasize getting rewards **now** rather than later in the future -- this is the idea of discounting. Discounting rewards is a similar concept to discounting money in the case of interest.To compute the expected cumulative reward, known as the **return**, at a given timestep in a learning episode, we sum the discounted rewards expected at that time step $t$ and projecting into the future within the episode. We define the return (cumulative reward) at a time step $t$, $R_{t}$ as:>$R_{t}=\sum_{k=0}^\infty\gamma^kr_{t+k}$where $0 < \gamma < 1$ is the discount factor and $r_{t}$ is the reward at time step $t$, and the index $k$ increments the projection into the future within a single learning episode. Intuitively, you can think of this function as depreciating any rewards received at later time steps, which will force the agent to prioritize getting rewards now. Since we can't extend episodes to infinity, in practice the computation will be limited to the number of timesteps in an episode -- after that the reward is assumed to be zero.Take note of the form of this sum -- we'll have to be clever about how we implement this function. Specifically, we'll need to initialize an array of zeros, with length of the number of time steps, and fill it with the real discounted reward values as we loop through the rewards from the episode, which will have been saved in the agent's memory. 
What we ultimately care about is which actions are better relative to other actions taken in that episode -- so, we'll normalize our computed rewards, using the mean and standard deviation of the rewards across the learning episode.
###Code
### Reward function ###
# Helper function that normalizes an np.array x
def normalize(x):
x -= np.mean(x)
x /= np.std(x)
return x.astype(np.float32)
# Compute normalized, discounted, cumulative rewards (i.e., return)
# Arguments:
# rewards: reward at timesteps in episode
# gamma: discounting factor
# Returns:
# normalized discounted reward
def discount_rewards(rewards, gamma=0.95):
discounted_rewards = np.zeros_like(rewards)
R = 0
for t in reversed(range(0, len(rewards))):
# update the total discounted reward
R = R * gamma + rewards[t]
discounted_rewards[t] = R
return normalize(discounted_rewards)
###Output
_____no_output_____
###Markdown
3.4 Learning algorithmNow we can start to define the learning algorithm which will be used to reinforce good behaviors of the agent and discourage bad behaviors. In this lab, we will focus on *policy gradient* methods which aim to **maximize** the likelihood of actions that result in large rewards. Equivalently, this means that we want to **minimize** the negative likelihood of these same actions. We achieve this by simply **scaling** the probabilities by their associated rewards -- effectively amplifying the likelihood of actions that result in large rewards.Since the log function is monotonically increasing, this means that minimizing **negative likelihood** is equivalent to minimizing **negative log-likelihood**. Recall that we can easily compute the negative log-likelihood of a discrete action by evaluating its [softmax cross entropy](https://www.tensorflow.org/api_docs/python/tf/nn/sparse_softmax_cross_entropy_with_logits). Like in supervised learning, we can use stochastic gradient descent methods to achieve the desired minimization. Let's begin by defining the loss function.
###Code
### Loss function ###
# Arguments:
# logits: network's predictions for actions to take
# actions: the actions the agent took in an episode
# rewards: the rewards the agent received in an episode
# Returns:
# loss
def compute_loss(logits, actions, rewards):
"""TODO: complete the function call to compute the negative log probabilities"""
neg_logprob = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=logits, labels=actions) # TODO
# neg_logprob = tf.nn.sparse_softmax_cross_entropy_with_logits(logits='''TODO''', labels='''TODO''')
"""TODO: scale the negative log probability by the rewards"""
loss = tf.reduce_mean(neg_logprob * rewards) # TODO
# loss = tf.reduce_mean('''TODO''')
return loss
###Output
_____no_output_____
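The effect of the reward scaling can be seen with plain NumPy, without TensorFlow. Below is a toy sketch of the cross-entropy term for a single step -- the logits and reward values are made up for illustration:

```python
import numpy as np

def neg_log_prob(logits, action):
    # Numerically stable softmax cross-entropy for one sampled action
    z = logits - logits.max()
    log_probs = z - np.log(np.exp(z).sum())
    return -log_probs[action]

logits = np.array([2.0, 0.0])       # the agent strongly prefers action 0
unlikely = neg_log_prob(logits, 1)  # taking the unlikely action 1...
# ...contributes a larger loss term when it is followed by a larger reward,
# so gradient descent adjusts the policy more aggressively for that action:
print(unlikely * 0.1 < unlikely * 1.0)  # True
```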
###Markdown
Now let's use the loss function to define a training step of our learning algorithm:
###Code
### Training step (forward and backpropagation) ###
def train_step(model, optimizer, observations, actions, discounted_rewards):
with tf.GradientTape() as tape:
# Forward propagate through the agent network
logits = model(observations)
"""TODO: call the compute_loss function to compute the loss"""
loss = compute_loss(logits, actions, discounted_rewards) # TODO
# loss = compute_loss('''TODO''', '''TODO''', '''TODO''')
"""TODO: run backpropagation to minimize the loss using the tape.gradient method"""
grads = tape.gradient(loss, model.trainable_variables) # TODO
# grads = tape.gradient('''TODO''', model.trainable_variables)
optimizer.apply_gradients(zip(grads, model.trainable_variables))
###Output
_____no_output_____
###Markdown
3.5 Run cartpole!Having had no prior knowledge of the environment, the agent will begin to learn how to balance the pole on the cart based only on the feedback received from the environment! Having defined how our agent can move, how it takes in new observations, and how it updates its state, we'll see how it gradually learns a policy of actions to optimize balancing the pole as long as possible. To do this, we'll track how the rewards evolve as a function of training -- how should the rewards change as training progresses?
###Code
### Cartpole training! ###
# Learning rate and optimizer
learning_rate = 1e-3
optimizer = tf.keras.optimizers.Adam(learning_rate)
# instantiate cartpole agent
cartpole_model = create_cartpole_model()
# to track our progress
smoothed_reward = mdl.util.LossHistory(smoothing_factor=0.9)
plotter = mdl.util.PeriodicPlotter(sec=2, xlabel="Iterations", ylabel="Rewards")
if hasattr(tqdm, "_instances"):
tqdm._instances.clear() # clear if it exists
for i_episode in range(500):
plotter.plot(smoothed_reward.get())
# Restart the environment
observation = env.reset()
memory.clear()
while True:
# using our observation, choose an action and take it in the environment
action = choose_action(cartpole_model, observation)
next_observation, reward, done, info = env.step(action)
# add to memory
memory.add_to_memory(observation, action, reward)
# is the episode over? did you crash or do so well that you're done?
if done:
# determine total reward and keep a record of this
total_reward = sum(memory.rewards)
smoothed_reward.append(total_reward)
# initiate training - remember we don't know anything about how the
# agent is doing until it has crashed!
train_step(
cartpole_model,
optimizer,
observations=np.vstack(memory.observations),
actions=np.array(memory.actions),
discounted_rewards=discount_rewards(memory.rewards),
)
# reset the memory
memory.clear()
break
# update our observatons
observation = next_observation
###Output
_____no_output_____
###Markdown
To get a sense of how our agent did, we can save a video of the trained model working on balancing the pole. Realize that this is a brand new environment that the agent has not seen before!Let's display the saved video to watch how our agent did!
###Code
saved_cartpole = mdl.lab3.save_video_of_model(cartpole_model, "CartPole-v0")
mdl.lab3.play_video(saved_cartpole)
###Output
_____no_output_____
###Markdown
How does the agent perform? Could you train it for shorter amounts of time and still perform well? Do you think that training longer would help even more? Part 2: PongIn Cartpole, we dealt with an environment that was static -- in other words, it didn't change over time. What happens if our environment is dynamic and unpredictable? Well that's exactly the case in [Pong](https://en.wikipedia.org/wiki/Pong), since part of the environment is the opposing player. We don't know how our opponent will act or react to our actions, so the complexity of our problem increases. It also becomes much more interesting, since we can compete to beat our opponent. RL provides a powerful framework for training AI systems with the ability to handle and interact with dynamic, unpredictable environments. In this part of the lab, we'll use the tools and workflow we explored in Part 1 to build an RL agent capable of playing the game of Pong. 3.6 Define and inspect the Pong environmentAs with Cartpole, we'll instantiate the Pong environment in the OpenAI gym, using a seed of 1.
###Code
env = gym.make("Pong-v0", frameskip=5)
env.seed(1)
# for reproducibility
###Output
_____no_output_____
###Markdown
Let's next consider the observation space for the Pong environment. Instead of four physical descriptors of the cart-pole setup, in the case of Pong our observations are the individual video frames (i.e., images) that depict the state of the board. Thus, the observations are 210x160 RGB images (arrays of shape (210,160,3)).We can again confirm the size of the observation space by query:
###Code
print("Environment has observation space =", env.observation_space)
###Output
_____no_output_____
###Markdown
In Pong, at every time step, the agent (which controls the paddle) has six actions to choose from: no-op (no operation), move right, move left, fire, fire right, and fire left. Let's confirm the size of the action space by querying the environment:
###Code
n_actions = env.action_space.n
print("Number of possible actions that the agent can choose from =", n_actions)
###Output
_____no_output_____
###Markdown
3.7 Define the Pong agentAs before, we'll use a neural network to define our agent. What network architecture do you think would be especially well suited to this game? Since our observations are now in the form of images, we'll add convolutional layers to the network to increase the learning capacity of our network.
###Code
### Define the Pong agent ###
# Functionally define layers for convenience
# All convolutional layers will have ReLu activation
Conv2D = functools.partial(tf.keras.layers.Conv2D, padding="same", activation="relu")
Flatten = tf.keras.layers.Flatten
Dense = tf.keras.layers.Dense
# Defines a CNN for the Pong agent
def create_pong_model():
model = tf.keras.models.Sequential(
[
# Convolutional layers
# First, 16 7x7 filters and 4x4 stride
Conv2D(filters=16, kernel_size=7, strides=4),
# TODO: define convolutional layers with 32 5x5 filters and 2x2 stride
Conv2D(filters=32, kernel_size=5, strides=2), # TODO
# Conv2D('''TODO'''),
# TODO: define convolutional layers with 48 3x3 filters and 2x2 stride
Conv2D(filters=48, kernel_size=3, strides=2), # TODO
# Conv2D('''TODO'''),
Flatten(),
# Fully connected layer and output
Dense(units=64, activation="relu"),
# TODO: define the output dimension of the last Dense layer.
# Pay attention to the space the agent needs to act in
Dense(units=n_actions, activation=None) # TODO
# Dense('''TODO''')
]
)
return model
pong_model = create_pong_model()
###Output
_____no_output_____
###Markdown
Since we've already defined the action function, `choose_action(model, observation)`, we don't need to define it again. Instead, we'll be able to reuse it later on by passing in our new model we've just created, `pong_model`. This is awesome because our action function provides a modular and generalizable method for all sorts of RL agents! 3.8 Pong-specific functionsIn Part 1 (Cartpole), we implemented some key functions and classes to build and train our RL agent -- `choose_action(model, observation)` and the `Memory` class, for example. However, in getting ready to apply these to a new game like Pong, we might need to make some slight modifications. Namely, we need to think about what happens when a game ends. In Pong, we know a game has ended if the reward is +1 (we won!) or -1 (we lost unfortunately). Otherwise, we expect the reward at a timestep to be zero -- the players (or agents) are just playing eachother. So, after a game ends, we will need to reset the reward to zero when a game ends. This will result in a modified reward function.
###Code
### Pong reward function ###
# Compute normalized, discounted rewards for Pong (i.e., return)
# Arguments:
# rewards: reward at timesteps in episode
# gamma: discounting factor. Note increase to 0.99 -- rate of depreciation will be slower.
# Returns:
# normalized discounted reward
def discount_rewards(rewards, gamma=0.99):
discounted_rewards = np.zeros_like(rewards)
R = 0
for t in reversed(range(0, len(rewards))):
# NEW: Reset the sum if the reward is not 0 (the game has ended!)
if rewards[t] != 0:
R = 0
# update the total discounted reward as before
R = R * gamma + rewards[t]
discounted_rewards[t] = R
return normalize(discounted_rewards)
###Output
_____no_output_____
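To see the effect of the reset in isolation, here is the same recursion stripped of the final normalization, applied to a made-up reward sequence (illustrative only):

```python
import numpy as np

def discount_no_normalize(rewards, gamma=0.5):
    # Same loop as discount_rewards above, minus the normalization step
    discounted = np.zeros(len(rewards))
    R = 0.0
    for t in reversed(range(len(rewards))):
        if rewards[t] != 0:  # a point was scored -- a game ended here
            R = 0.0
        R = R * gamma + rewards[t]
        discounted[t] = R
    return discounted

print(discount_no_normalize([0, 0, 1, 0, -1]).tolist())
# [0.25, 0.5, 1.0, -0.5, -1.0] -- the discounted sum restarts after each game
```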
###Markdown
Additionally, we have to consider the nature of the observations in the Pong environment, and how they will be fed into our network. Our observations in this case are images. Before we input an image into our network, we'll do a bit of pre-processing to crop and scale, clean up the background colors to a single color, and set the important game elements to a single color. Let's use this function to visualize what an observation might look like before and after pre-processing.
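The actual helper `mdl.lab3.preprocess_pong` ships with the course package; a minimal sketch of typical Pong preprocessing (crop, downsample, binarize) looks roughly like this. The crop rows and background color codes below are illustrative assumptions, not the lab's exact values:

```python
import numpy as np

def preprocess_pong_sketch(image):
    img = image[35:195:2, ::2, 0]   # crop the scoreboard, downsample 2x -> 80x80
    img = img.astype(np.float32)
    img[img == 144] = 0             # erase one background color (assumed code)
    img[img == 109] = 0             # erase the other background color (assumed)
    img[img != 0] = 1               # paddles and ball -> single color (1)
    return img[..., np.newaxis]     # add a channel axis: (80, 80, 1)

fake_frame = np.full((210, 160, 3), 144, dtype=np.uint8)  # all-background frame
print(preprocess_pong_sketch(fake_frame).shape)  # (80, 80, 1)
```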
###Code
observation = env.reset()
for i in range(30):
observation, _, _, _ = env.step(0)
observation_pp = mdl.lab3.preprocess_pong(observation)
f = plt.figure(figsize=(10, 3))
ax = f.add_subplot(121)
ax2 = f.add_subplot(122)
ax.imshow(observation)
ax.grid(False)
ax2.imshow(np.squeeze(observation_pp))
ax2.grid(False)
plt.title("Preprocessed Observation");
###Output
_____no_output_____
###Markdown
What do you notice? How might these changes be important for training our RL algorithm? 3.9 Training PongWe're now all set up to start training our RL algorithm and agent for the game of Pong! We've already defined our loss function with `compute_loss`, which employs policy gradient learning, as well as our backpropagation step with `train_step` which is beautiful! We will use these functions to execute training the Pong agent. Let's walk through the training block.In Pong, rather than feeding our network one image at a time, it can actually improve performance to input the difference between two consecutive observations, which really gives us information about the movement between frames -- how the game is changing. We'll first pre-process the raw observation, `x`, and then we'll compute the difference with the image frame we saw one timestep before. This observation change will be forward propagated through our Pong agent, the CNN network model, which will then predict the next action to take based on this observation. The raw reward will be computed, and the observation, action, and reward will be recorded into memory. This will continue until a training episode, i.e., a game, ends.Then, we will compute the discounted rewards, and use this information to execute a training step. Memory will be cleared, and we will do it all over again!Let's run the code block to train our Pong agent. Note that completing training will take quite a bit of time (estimated at least a couple of hours). We will again visualize the evolution of the total reward as a function of training to get a sense of how the agent is learning.
###Code
### Training Pong ###
# Hyperparameters
learning_rate = 1e-4
MAX_ITERS = 10000 # increase the maximum number of episodes, since Pong is more complex!
# Model and optimizer
pong_model = create_pong_model()
optimizer = tf.keras.optimizers.Adam(learning_rate)
# plotting
smoothed_reward = mdl.util.LossHistory(smoothing_factor=0.9)
plotter = mdl.util.PeriodicPlotter(sec=5, xlabel="Iterations", ylabel="Rewards")
memory = Memory()
for i_episode in range(MAX_ITERS):
plotter.plot(smoothed_reward.get())
# Restart the environment
observation = env.reset()
previous_frame = mdl.lab3.preprocess_pong(observation)
while True:
# Pre-process image
current_frame = mdl.lab3.preprocess_pong(observation)
"""TODO: determine the observation change
Hint: this is the difference between the past two frames"""
obs_change = current_frame - previous_frame # TODO
# obs_change = # TODO
"""TODO: choose an action for the pong model, using the frame difference, and evaluate"""
action = choose_action(pong_model, obs_change) # TODO
# action = # TODO
# Take the chosen action
next_observation, reward, done, info = env.step(action)
"""TODO: save the observed frame difference, the action that was taken, and the resulting reward!"""
memory.add_to_memory(obs_change, action, reward) # TODO
# is the episode over? did you crash or do so well that you're done?
if done:
# determine total reward and keep a record of this
total_reward = sum(memory.rewards)
smoothed_reward.append(total_reward)
# begin training
train_step(
pong_model,
compute_loss,
optimizer,
observations=np.stack(memory.observations, 0),
actions=np.array(memory.actions),
discounted_rewards=discount_rewards(memory.rewards),
)
memory.clear()
break
observation = next_observation
previous_frame = current_frame
###Output
_____no_output_____
###Markdown
Finally we can put our trained agent to the test! It will play in a newly instantiated Pong environment against the "computer", a base AI system for Pong. Your agent plays as the green paddle. Let's watch the match instant replay!
###Code
saved_pong = mdl.lab3.save_video_of_model(pong_model, "Pong-v0", obs_diff=True, pp_fn=mdl.lab3.preprocess_pong)
mdl.lab3.play_video(saved_pong)
###Output
_____no_output_____
###Markdown
Visit MIT Deep Learning Run in Google Colab View Source on GitHub Copyright Information
###Code
# Copyright 2022 MIT 6.S191 Introduction to Deep Learning. All Rights Reserved.
#
# Licensed under the MIT License. You may not use this file except in compliance
# with the License. Use and/or modification of this code outside of 6.S191 must
# reference:
#
# © MIT 6.S191: Introduction to Deep Learning
# http://introtodeeplearning.com
#
###Output
_____no_output_____
###Markdown
Laboratory 3: Reinforcement LearningReinforcement learning (RL) is a subset of machine learning which poses learning problems as interactions between agents and environments. It often assumes agents have no prior knowledge of a world, so they must learn to navigate environments by optimizing a reward function. Within an environment, an agent can take certain actions and receive feedback, in the form of positive or negative rewards, with respect to its decisions. As such, an agent's feedback loop is somewhat akin to the idea of "trial and error", or the manner in which a child might learn to distinguish between "good" and "bad" actions.In practical terms, our RL agent will interact with the environment by taking an action at each timestep, receiving a corresponding reward, and updating its state according to what it has "learned". While the ultimate goal of reinforcement learning is to teach agents to act in the real, physical world, simulated environments -- like games and simulation engines -- provide a convenient proving ground for developing RL algorithms and agents.In previous labs, we have explored both supervised (with LSTMs, CNNs) and unsupervised / semi-supervised (with VAEs) learning tasks. Reinforcement learning is fundamentally different, in that we are training a deep learning algorithm to govern the actions of our RL agent as it tries, within its environment, to find the optimal way to achieve a goal. The goal of training an RL agent is to determine the best next step to take to earn the greatest final payoff or return. In this lab, we focus on building a reinforcement learning algorithm to master two different environments with varying complexity. 1. **Cartpole**: Balance a pole, protruding from a cart, in an upright position by only moving the base left or right. Environment with a low-dimensional observation space.2. 
[**Driving in VISTA**](https://www.mit.edu/~amini/pubs/pdf/learning-in-simulation-vista.pdf): Learn a driving control policy for an autonomous vehicle, end-to-end from raw pixel inputs and entirely in the data-driven simulation environment of VISTA. Environment with a high-dimensional observation space -- learning directly from raw pixels.Let's get started! First we'll import TensorFlow, the course package, and some dependencies.
###Code
# Import Tensorflow 2.0
%tensorflow_version 2.x
import tensorflow as tf
gpus = tf.config.experimental.list_physical_devices('GPU')
if gpus:
for gpu in gpus:
tf.config.experimental.set_memory_growth(gpu, True)
# Download and import the MIT 6.S191 package
!printf "Installing MIT deep learning package... "
!pip install --upgrade git+https://github.com/aamini/introtodeeplearning.git &> /dev/null
!echo "Done"
#Install some dependencies for visualizing the agents
!apt-get install -y xvfb python-opengl x11-utils &> /dev/null
!pip install gym pyvirtualdisplay scikit-video ffio pyrender &> /dev/null
!pip install tensorflow_probability==0.12.0 &> /dev/null
import os
os.environ['PYOPENGL_PLATFORM'] = 'egl'
import numpy as np
import matplotlib, cv2
import matplotlib.pyplot as plt
import base64, io, os, time, gym
import IPython, functools
import time
from tqdm import tqdm
import tensorflow_probability as tfp
import mitdeeplearning as mdl
###Output
_____no_output_____
###Markdown
Before we dive in, let's take a step back and outline our approach, which is generally applicable to reinforcement learning problems:1. **Initialize our environment and our agent**: here we will describe the different observations and actions the agent can make in the environment.2. **Define our agent's memory**: this will enable the agent to remember its past actions, observations, and rewards.3. **Define a reward function**: describes the reward associated with an action or sequence of actions.4. **Define the learning algorithm**: this will be used to reinforce the agent's good behaviors and discourage bad behaviors. Part 1: Cartpole 3.1 Define the Cartpole environment and agent Environment In order to model the environment for the Cartpole task, we'll be using a toolkit developed by OpenAI called [OpenAI Gym](https://gym.openai.com/). It provides several pre-defined environments for training and testing reinforcement learning agents, including those for classic physics control tasks, Atari video games, and robotic simulations. To access the Cartpole environment, we can use `env = gym.make("CartPole-v1")`, which we gained access to when we imported the `gym` package. We can instantiate different [environments](https://gym.openai.com/envs/classic_control) by passing the environment name to the `make` function.One issue we might experience when developing RL algorithms is that many aspects of the learning process are inherently random: initializing game states, changes in the environment, and the agent's actions. As such, it can be helpful to set an initial "seed" for the environment to ensure some level of reproducibility. Much like you might use `numpy.random.seed`, we can call the comparable function in gym, `seed`, with our defined environment to ensure the environment's random variables are initialized the same each time.
###Code
### Instantiate the Cartpole environment ###
env = gym.make("CartPole-v1")
env.seed(1)
###Output
_____no_output_____
###Markdown
In Cartpole, a pole is attached by an un-actuated joint to a cart, which moves along a frictionless track. The pole starts upright, and the goal is to prevent it from falling over. The system is controlled by applying a force of +1 or -1 to the cart. A reward of +1 is provided for every timestep that the pole remains upright. The episode ends when the pole is more than 15 degrees from vertical, or the cart moves more than 2.4 units from the center of the track. A visual summary of the cartpole environment is depicted below:Given this setup for the environment and the objective of the game, we can think about: 1) what observations help define the environment's state; 2) what actions the agent can take. First, let's consider the observation space. In this Cartpole environment our observations are:1. Cart position2. Cart velocity3. Pole angle4. Pole rotation rateWe can confirm the size of the space by querying the environment's observation space:
###Code
n_observations = env.observation_space
print("Environment has observation space =", n_observations)
###Output
_____no_output_____
###Markdown
Second, we consider the action space. At every time step, the agent can move either right or left. Again we can confirm the size of the action space by querying the environment:
###Code
n_actions = env.action_space.n
print("Number of possible actions that the agent can choose from =", n_actions)
###Output
_____no_output_____
###Markdown
Cartpole agentNow that we have instantiated the environment and understood the dimensionality of the observation and action spaces, we are ready to define our agent. In deep reinforcement learning, a deep neural network defines the agent. This network will take as input an observation of the environment and output the probability of taking each of the possible actions. Since Cartpole is defined by a low-dimensional observation space, a simple feed-forward neural network should work well for our agent. We will define this using the `Sequential` API.
###Code
### Define the Cartpole agent ###
# Defines a feed-forward neural network
def create_cartpole_model():
model = tf.keras.models.Sequential([
# First Dense layer
tf.keras.layers.Dense(units=32, activation='relu'),
# TODO: Define the last Dense layer, which will provide the network's output.
# Think about the space the agent needs to act in!
tf.keras.layers.Dense(units=n_actions, activation=None) # TODO
# ['''TODO''' Dense layer to output action probabilities]
])
return model
cartpole_model = create_cartpole_model()
###Output
_____no_output_____
###Markdown
Now that we have defined the core network architecture, we will define an *action function* that executes a forward pass through the network, given a set of observations, and samples from the output. This sampling from the output probabilities will be used to select the next action for the agent. We will also add support so that the `choose_action` function can handle either a single observation or a batch of observations.**Critically, this action function is totally general -- we will use this function for learning control algorithms for Cartpole, but it is applicable to other RL tasks, as well!**
###Code
### Define the agent's action function ###
# Function that takes observations as input, executes a forward pass through model,
# and outputs a sampled action.
# Arguments:
# model: the network that defines our agent
# observation: observation(s) which is/are fed as input to the model
# single: flag as to whether we are handling a single observation or batch of
# observations, provided as an np.array
# Returns:
# action: choice of agent action
def choose_action(model, observation, single=True):
# add batch dimension to the observation if only a single example was provided
observation = np.expand_dims(observation, axis=0) if single else observation
'''TODO: feed the observations through the model to predict the log probabilities of each possible action.'''
logits = model.predict(observation) # TODO
# logits = model.predict('''TODO''')
'''TODO: Choose an action from the categorical distribution defined by the log
probabilities of each possible action.'''
action = tf.random.categorical(logits, num_samples=1)
# action = ['''TODO''']
action = action.numpy().flatten()
return action[0] if single else action
###Output
_____no_output_____
###Markdown
3.2 Define the agent's memoryNow that we have instantiated the environment and defined the agent network architecture and action function, we are ready to move on to the next step in our RL workflow:1. **Initialize our environment and our agent**: here we will describe the different observations and actions the agent can make in the environment.2. **Define our agent's memory**: this will enable the agent to remember its past actions, observations, and rewards.3. **Define a reward function**: describes the reward associated with an action or sequence of actions.4. **Define the learning algorithm**: this will be used to reinforce the agent's good behaviors and discourage bad behaviors.In reinforcement learning, training occurs alongside the agent's acting in the environment; an *episode* refers to a sequence of actions that ends in some terminal state, such as the pole falling down or the cart crashing. The agent will need to remember all of its observations and actions, such that once an episode ends, it can learn to "reinforce" the good actions and punish the undesirable actions via training. Our first step is to define a simple `Memory` buffer that contains the agent's observations, actions, and received rewards from a given episode. We will also add support to combine a list of `Memory` objects into a single `Memory`. This will be very useful for batching, which will help you accelerate training later on in the lab.**Once again, note the modularity of this memory buffer -- it can and will be applied to other RL tasks as well!**
###Code
### Agent Memory ###
class Memory:
def __init__(self):
self.clear()
# Resets/restarts the memory buffer
def clear(self):
self.observations = []
self.actions = []
self.rewards = []
# Add observations, actions, rewards to memory
def add_to_memory(self, new_observation, new_action, new_reward):
self.observations.append(new_observation)
'''TODO: update the list of actions with new action'''
self.actions.append(new_action) # TODO
# ['''TODO''']
'''TODO: update the list of rewards with new reward'''
self.rewards.append(new_reward) # TODO
# ['''TODO''']
def __len__(self):
return len(self.actions)
# Instantiate a single Memory buffer
memory = Memory()
###Output
_____no_output_____
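The text above mentions combining a list of `Memory` objects into a single `Memory` for batching, but the cell leaves that helper out. Here is a minimal, hypothetical sketch of what such an aggregation helper could look like (the name `aggregate_memories` is our own, not an identifier from the course package; the `Memory` class is restated compactly so the sketch is self-contained):

```python
class Memory:
    # Minimal restatement of the Memory buffer defined above
    def __init__(self):
        self.observations, self.actions, self.rewards = [], [], []

    def add_to_memory(self, new_observation, new_action, new_reward):
        self.observations.append(new_observation)
        self.actions.append(new_action)
        self.rewards.append(new_reward)

    def __len__(self):
        return len(self.actions)


def aggregate_memories(memories):
    """Combine several per-episode Memory buffers into one batched buffer."""
    batch = Memory()
    for m in memories:
        for obs, act, rew in zip(m.observations, m.actions, m.rewards):
            batch.add_to_memory(obs, act, rew)
    return batch
```

With this, a list of memories collected from parallel episodes can be merged and fed to a single training step.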
###Markdown
3.3 Reward functionWe're almost ready to begin the learning algorithm for our agent! The next step is to compute the rewards of our agent as it acts in the environment. Since we (and the agent) are uncertain about if and when the game or task will end (i.e., when the pole will fall), it is useful to emphasize getting rewards **now** rather than later in the future -- this is the idea of discounting. Recall from lecture that we use reward discounting to give more preference to getting rewards now rather than later; the idea is similar to discounting money in the case of interest.To compute the expected cumulative reward, known as the **return**, at a given timestep in a learning episode, we sum the discounted rewards from that time step $t$ onwards within the learning episode, projecting into the future. We define the return (cumulative reward) at a time step $t$, $R_{t}$ as:>$R_{t}=\sum_{k=0}^\infty\gamma^kr_{t+k}$where $0 < \gamma < 1$ is the discount factor and $r_{t}$ is the reward at time step $t$, and the index $k$ increments the projection into the future within a single learning episode. Intuitively, you can think of this function as depreciating any rewards received at later time steps, which will force the agent to prioritize getting rewards now. Since we can't extend episodes to infinity, in practice the computation will be limited to the number of timesteps in an episode -- after that the reward is assumed to be zero.Take note of the form of this sum -- we'll have to be clever about how we implement this function. Specifically, we'll need to initialize an array of zeros, with length of the number of time steps, and fill it with the real discounted reward values as we loop through the rewards from the episode, which will have been saved in the agent's memory. 
What we ultimately care about is which actions are better relative to other actions taken in that episode -- so, we'll normalize our computed rewards, using the mean and standard deviation of the rewards across the learning episode.We will use this definition of the reward function in both parts of the lab so make sure you have it executed!
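A small worked example of the (un-normalized) discounted return, using the same reverse loop described above: for three rewards of 1 with $\gamma = 0.5$, the returns are $R_2 = 1$, $R_1 = 1 + 0.5 \cdot 1 = 1.5$, and $R_0 = 1 + 0.5 \cdot 1.5 = 1.75$. (This sketch omits the normalization that the lab's `discount_rewards` applies afterwards.)

```python
def discounted_returns(rewards, gamma=0.5):
    # Walk backwards through the episode, accumulating R = r_t + gamma * R,
    # and fill in the return at every timestep.
    returns = [0.0] * len(rewards)
    R = 0.0
    for t in reversed(range(len(rewards))):
        R = rewards[t] + gamma * R
        returns[t] = R
    return returns

print(discounted_returns([1.0, 1.0, 1.0]))  # [1.75, 1.5, 1.0]
```

Note how earlier timesteps accumulate larger returns: they "see" all future rewards, each depreciated by a factor of gamma per step.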
###Code
### Reward function ###
# Helper function that normalizes an np.array x
def normalize(x):
x -= np.mean(x)
x /= np.std(x)
return x.astype(np.float32)
# Compute normalized, discounted, cumulative rewards (i.e., return)
# Arguments:
# rewards: reward at timesteps in episode
# gamma: discounting factor
# Returns:
# normalized discounted reward
def discount_rewards(rewards, gamma=0.95):
discounted_rewards = np.zeros_like(rewards)
R = 0
for t in reversed(range(0, len(rewards))):
# update the total discounted reward
R = R * gamma + rewards[t]
discounted_rewards[t] = R
return normalize(discounted_rewards)
###Output
_____no_output_____
###Markdown
3.4 Learning algorithmNow we can start to define the learning algorithm which will be used to reinforce good behaviors of the agent and discourage bad behaviors. In this lab, we will focus on *policy gradient* methods which aim to **maximize** the likelihood of actions that result in large rewards. Equivalently, this means that we want to **minimize** the negative likelihood of these same actions. We achieve this by simply **scaling** the probabilities by their associated rewards -- effectively amplifying the likelihood of actions that result in large rewards.Since the log function is monotonically increasing, this means that minimizing **negative likelihood** is equivalent to minimizing **negative log-likelihood**. Recall that we can easily compute the negative log-likelihood of a discrete action by evaluating its [softmax cross entropy](https://www.tensorflow.org/api_docs/python/tf/nn/sparse_softmax_cross_entropy_with_logits). Like in supervised learning, we can use stochastic gradient descent methods to achieve the desired minimization. Let's begin by defining the loss function.
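To make the reward scaling concrete, here is a minimal pure-Python sketch of the policy gradient loss for a tiny batch (illustrative values only -- this re-implements the softmax cross entropy by hand rather than calling TensorFlow):

```python
import math

def neg_log_prob(logits, action):
    # Softmax cross entropy for one discrete action: -log softmax(logits)[action],
    # computed with the standard max-subtraction trick for numerical stability.
    m = max(logits)
    log_sum = m + math.log(sum(math.exp(l - m) for l in logits))
    return log_sum - logits[action]

def policy_loss(logits_batch, actions, rewards):
    # Mean of the reward-weighted negative log-probabilities of the taken actions
    terms = [neg_log_prob(l, a) * r
             for l, a, r in zip(logits_batch, actions, rewards)]
    return sum(terms) / len(terms)

# Equal logits => each of the 2 actions has probability 0.5, so the
# unweighted negative log-probability is log(2) ~= 0.693.
loss = policy_loss([[0.0, 0.0]], actions=[1], rewards=[1.0])
print(loss)
```

Doubling the reward for an action doubles its contribution to the loss, which is exactly the "scaling" described above: gradient descent then pushes harder to make highly-rewarded actions more likely.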
###Code
### Loss function ###
# Arguments:
# logits: network's predictions for actions to take
# actions: the actions the agent took in an episode
# rewards: the rewards the agent received in an episode
# Returns:
# loss
def compute_loss(logits, actions, rewards):
'''TODO: complete the function call to compute the negative log probabilities'''
neg_logprob = tf.nn.sparse_softmax_cross_entropy_with_logits(
logits=logits, labels=actions) # TODO
# neg_logprob = tf.nn.sparse_softmax_cross_entropy_with_logits(
# logits='''TODO''', labels='''TODO''')
'''TODO: scale the negative log probability by the rewards'''
loss = tf.reduce_mean( neg_logprob * rewards ) # TODO
# loss = tf.reduce_mean('''TODO''')
return loss
###Output
_____no_output_____
###Markdown
Now let's use the loss function to define a training step of our learning algorithm. This is a very general definition, which we will use again later in this lab.
###Code
### Training step (forward and backpropagation) ###
def train_step(model, loss_function, optimizer, observations, actions, discounted_rewards, custom_fwd_fn=None):
with tf.GradientTape() as tape:
# Forward propagate through the agent network
if custom_fwd_fn is not None:
prediction = custom_fwd_fn(observations)
else:
prediction = model(observations)
'''TODO: call the compute_loss function to compute the loss'''
loss = loss_function(prediction, actions, discounted_rewards) # TODO
# loss = loss_function('''TODO''', '''TODO''', '''TODO''')
'''TODO: run backpropagation to minimize the loss using the tape.gradient method.
Unlike supervised learning, RL is *extremely* noisy, so you will benefit
from additionally clipping your gradients to avoid falling into
dangerous local minima. After computing your gradients try also clipping
by a global normalizer. Try different clipping values, usually clipping
between 0.5 and 5 provides reasonable results. '''
grads = tape.gradient(loss, model.trainable_variables) # TODO
# grads = tape.gradient('''TODO''', '''TODO''')
grads, _ = tf.clip_by_global_norm(grads, 2)
# grads, _ = tf.clip_by_global_norm(grads, '''TODO''')
optimizer.apply_gradients(zip(grads, model.trainable_variables))
###Output
_____no_output_____
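The `tf.clip_by_global_norm` call in `train_step` rescales all gradients jointly whenever their combined norm exceeds the threshold. A NumPy-only sketch of the same computation (for intuition -- not the TensorFlow implementation itself, which also returns the *pre-clip* global norm):

```python
import numpy as np

def clip_by_global_norm(grads, clip_norm):
    # Joint (global) norm across all gradient tensors
    global_norm = np.sqrt(sum(np.sum(g ** 2) for g in grads))
    if global_norm > clip_norm:
        # Scale every gradient by the same factor, preserving their directions
        scale = clip_norm / global_norm
        grads = [g * scale for g in grads]
    return grads, global_norm

# Gradients with global norm sqrt(3^2 + 4^2) = 5, clipped to norm 2
grads, norm = clip_by_global_norm([np.array([3.0]), np.array([4.0])], clip_norm=2.0)
print(norm)               # 5.0
print(grads[0], grads[1])  # [1.2] [1.6]
```

Because all gradients are scaled by one common factor, clipping by global norm shrinks the overall update size without changing its direction -- useful for the noisy gradients typical of RL.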
###Markdown
3.5 Run cartpole!Having had no prior knowledge of the environment, the agent will begin to learn how to balance the pole on the cart based only on the feedback received from the environment! Having defined how our agent can move, how it takes in new observations, and how it updates its state, we'll see how it gradually learns a policy of actions to optimize balancing the pole as long as possible. To do this, we'll track how the rewards evolve as a function of training -- how should the rewards change as training progresses?
###Code
## Training parameters ##
## Re-run this cell to restart training from scratch ##
# TODO: Learning rate and optimizer
learning_rate = 1e-3
# learning_rate = '''TODO'''
optimizer = tf.keras.optimizers.Adam(learning_rate)
# optimizer = '''TODO'''
# instantiate cartpole agent
cartpole_model = create_cartpole_model()
# to track our progress
smoothed_reward = mdl.util.LossHistory(smoothing_factor=0.95)
plotter = mdl.util.PeriodicPlotter(sec=2, xlabel='Iterations', ylabel='Rewards')
## Cartpole training! ##
## Note: stopping and restarting this cell will pick up training where you
# left off. To restart training you need to rerun the cell above as
# well (to re-initialize the model and optimizer)
if hasattr(tqdm, '_instances'): tqdm._instances.clear() # clear if it exists
for i_episode in range(500):
plotter.plot(smoothed_reward.get())
# Restart the environment
observation = env.reset()
memory.clear()
while True:
# using our observation, choose an action and take it in the environment
action = choose_action(cartpole_model, observation)
next_observation, reward, done, info = env.step(action)
# add to memory
memory.add_to_memory(observation, action, reward)
# is the episode over? did you crash or do so well that you're done?
if done:
# determine total reward and keep a record of this
total_reward = sum(memory.rewards)
smoothed_reward.append(total_reward)
# initiate training - remember we don't know anything about how the
# agent is doing until it has crashed!
g = train_step(cartpole_model, compute_loss, optimizer,
observations=np.vstack(memory.observations),
actions=np.array(memory.actions),
discounted_rewards = discount_rewards(memory.rewards))
# reset the memory
memory.clear()
break
# update our observations
observation = next_observation
###Output
_____no_output_____
###Markdown
To get a sense of how our agent did, we can save a video of the trained model working on balancing the pole. Realize that this is a brand new environment that the agent has not seen before!Let's display the saved video to watch how our agent did!
###Code
matplotlib.use('Agg')
saved_cartpole = mdl.lab3.save_video_of_model(cartpole_model, "CartPole-v1")
mdl.lab3.play_video(saved_cartpole)
###Output
_____no_output_____
###Markdown
How does the agent perform? Could you train it for shorter amounts of time and still perform well? Do you think that training longer would help even more? Part 2: Training Autonomous Driving Policies in VISTAAutonomous control has traditionally been dominated by algorithms that explicitly decompose individual aspects of the control pipeline. For example, in autonomous driving, traditional methods work by first detecting road and lane boundaries, and then using path planning and rule-based methods to derive a control policy. Deep learning offers something very different -- the possibility of optimizing all these steps simultaneously, learning control end-to-end directly from raw sensory inputs.**You will explore the power of deep learning to learn autonomous control policies that are trained *end-to-end, directly from raw sensory data, and entirely within a simulated world*.**We will use the data-driven simulation engine [VISTA](https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=8957584&tag=1), which uses techniques in computer vision to synthesize new photorealistic trajectories and driving viewpoints, that are still consistent with the world's appearance and fall within the envelope of a real driving scene. This is a powerful approach -- we can synthesize data that is photorealistic, grounded in the real world, and then use this data for training and testing autonomous vehicle control policies within this simulator.In this part of the lab, you will use reinforcement learning to build a self-driving agent with a neural network-based controller trained on RGB camera data. We will train the self-driving agent for the task of lane following. 
Beyond this data modality and control task, VISTA also supports [different data modalities](https://arxiv.org/pdf/2111.12083.pdf) (such as LiDAR data) and [different learning tasks](https://arxiv.org/pdf/2111.12137.pdf) (such as multi-car interactions).You will put your agent to the test in the VISTA environment, and potentially, on board a full-scale autonomous vehicle! Specifically, as part of the MIT lab competitions, high-performing agents -- evaluated based on the maximum distance they can travel without crashing -- will have the opportunity to be put to the ***real*** test onboard a full-scale autonomous vehicle!!! We start by installing dependencies. This includes installing the VISTA package itself.
###Code
!pip install --upgrade git+https://github.com/vista-simulator/vista-6s191.git
import vista
from vista.utils import logging
logging.setLevel(logging.ERROR)
###Output
_____no_output_____
###Markdown
VISTA provides some documentation which will be very helpful to completing this lab. You can always use the `?vista` command to access the package documentation.
###Code
### Access documentation for VISTA
### Run ?vista.<[name of module or function]>
?vista.Display
###Output
_____no_output_____
###Markdown
3.6 Create an environment in VISTAEnvironments in VISTA are based on and built from human-collected driving *traces*. A trace is the data from a single driving run. In this case we'll be working with RGB camera data, from the viewpoint of the driver looking out at the road: the camera collects this data as the car drives around!We will start by accessing a trace. We use that trace to instantiate an environment within VISTA. This is our `World` and defines the environment we will use for reinforcement learning. The trace itself helps to define a space for the environment; with VISTA, we can use the trace to generate new photorealistic viewpoints anywhere within that space. This provides valuable new training data as well as a robust testing environment.The simulated environment of VISTA will serve as our training ground and testbed for reinforcement learning. We also define an `Agent` -- a car -- that will actually move around in the environment, and make and carry out *actions* in this world. Because this is an entirely simulated environment, our car agent will also be simulated!
###Code
# Download and extract the data for vista (auto-skip if already downloaded)
!wget -nc -q --show-progress https://www.dropbox.com/s/3qogfzuugi852du/vista_traces.zip
print("Unzipping data...")
!unzip -o -q vista_traces.zip
print("Done downloading and unzipping data!")
trace_root = "./vista_traces"
trace_path = [
"20210726-154641_lexus_devens_center",
"20210726-155941_lexus_devens_center_reverse",
"20210726-184624_lexus_devens_center",
"20210726-184956_lexus_devens_center_reverse",
]
trace_path = [os.path.join(trace_root, p) for p in trace_path]
# Create a virtual world with VISTA, the world is defined by a series of data traces
world = vista.World(trace_path, trace_config={'road_width': 4})
# Create a car in our virtual world. The car will be able to step and take different
# control actions. As the car moves, its sensors will simulate any changes in its environment
car = world.spawn_agent(
config={
'length': 5.,
'width': 2.,
'wheel_base': 2.78,
'steering_ratio': 14.7,
'lookahead_road': True
})
# Create a camera on the car for synthesizing the sensor data that we can use to train with!
camera = car.spawn_camera(config={'size': (200, 320)})
# Define a rendering display so we can visualize the simulated car camera stream and also
# see its physical location with respect to the road in its environment.
display = vista.Display(world, display_config={"gui_scale": 2, "vis_full_frame": False})
# Define a simple helper function that allows us to reset VISTA and the rendering display
def vista_reset():
world.reset()
display.reset()
vista_reset()
###Output
_____no_output_____
###Markdown
If successful, you should see a blank black screen at this point. Your rendering display has been initialized. 3.7 Our virtual agent: the carOur goal is to learn a control policy for our agent, our (hopefully) autonomous vehicle, end-to-end directly from RGB camera sensory input. As in Cartpole, we need to define how our virtual agent will interact with its environment. Define agent's action functionsIn the case of driving, the car agent can act -- taking a step in the VISTA environment -- according to a given control command. This amounts to moving with a desired speed and a desired *curvature*, which reflects the car's turn radius. Curvature has units $\frac{1}{meter}$. So, if a car is traversing a circle of radius $r$ meters, then it is turning with a curvature $\frac{1}{r}$. The curvature is therefore correlated with the car's steering wheel angle, which actually controls its turn radius. Let's define the car agent's step function to capture the action of moving with a desired speed and desired curvature.
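As a quick check on the units: a car traversing a circle of radius $r$ meters drives with curvature $\frac{1}{r}$. The sketch below relates a curvature command to a steering wheel angle using a simple bicycle-model approximation -- this mapping is an illustrative assumption on our part, not necessarily how VISTA converts commands internally, though the `wheel_base` and `steering_ratio` values mirror the agent config above:

```python
import math

def curvature_from_radius(radius_m):
    # Curvature has units of 1/meter
    return 1.0 / radius_m

def steering_wheel_angle(curvature, wheel_base=2.78, steering_ratio=14.7):
    # Bicycle-model approximation (assumption): tire angle = atan(wheel_base * curvature),
    # and the steering wheel turns steering_ratio times further than the tires.
    tire_angle = math.atan(wheel_base * curvature)
    return math.degrees(tire_angle) * steering_ratio

k = curvature_from_radius(50.0)   # a gentle 50 m radius turn -> curvature 0.02 (1/m)
print(k)
print(steering_wheel_angle(k))    # roughly 47 degrees at the wheel
```

The takeaway: curvature near zero means driving straight, and larger magnitudes mean tighter turns -- which is why it is a convenient, bounded action variable for the agent.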
###Code
# First we define a step function, to allow our virtual agent to step
# with a given control command through the environment
# agent can act with a desired curvature (turning radius, like steering angle)
# and desired speed. if either is not provided then this step function will
# use whatever the human executed at that time in the real data.
def vista_step(curvature=None, speed=None):
# Arguments:
# curvature: curvature to step with
# speed: speed to step with
if curvature is None:
curvature = car.trace.f_curvature(car.timestamp)
if speed is None:
speed = car.trace.f_speed(car.timestamp)
car.step_dynamics(action=np.array([curvature, speed]), dt=1/15.)
car.step_sensors()
###Output
_____no_output_____
###Markdown
Inspect driving trajectories in VISTARecall that our VISTA environment is based on an initial human-collected driving trace. Also, we defined the agent's step function to defer to what the human executed if it is not provided with a desired speed and curvature with which to move.Thus, we can further inspect our environment by using the step function to move the driving agent through the environment along the human path. The stepping and rendering will run at roughly one iteration per second. We will then observe the data that comes out to see the agent's traversal of the environment.
###Code
import shutil, os, subprocess, cv2
# Create a simple helper class that will assist us in storing videos of the render
class VideoStream():
def __init__(self):
self.tmp = "./tmp"
if os.path.exists(self.tmp) and os.path.isdir(self.tmp):
shutil.rmtree(self.tmp)
os.mkdir(self.tmp)
def write(self, image, index):
cv2.imwrite(os.path.join(self.tmp, f"{index:04}.png"), image)
def save(self, fname):
cmd = f"/usr/bin/ffmpeg -f image2 -i {self.tmp}/%04d.png -crf 0 -y {fname}"
subprocess.call(cmd, shell=True)
## Render and inspect a human trace ##
vista_reset()
stream = VideoStream()
for i in tqdm(range(100)):
vista_step()
# Render and save the display
vis_img = display.render()
stream.write(vis_img[:, :, ::-1], index=i)
if car.done:
break
print("Saving trajectory of human following...")
stream.save("human_follow.mp4")
mdl.lab3.play_video("human_follow.mp4")
###Output
_____no_output_____
###Markdown
Check out the simulated VISTA environment. What do you notice about the environment, the agent, and the setup of the simulation engine? How could these aspects be useful for training models? Define terminal states: crashing! (oh no)Recall from Cartpole that our training episodes ended when the pole toppled, i.e., the agent crashed and failed. Similarly, for training vehicle control policies in VISTA, we have to define what a ***crash*** means. We will define a crash as any time the car moves out of its lane or exceeds its maximum rotation. This will define the end of a training episode.
###Code
## Define terminal states and crashing conditions ##
def check_out_of_lane(car):
distance_from_center = np.abs(car.relative_state.x)
road_width = car.trace.road_width
half_road_width = road_width / 2
return distance_from_center > half_road_width
def check_exceed_max_rot(car):
maximal_rotation = np.pi / 10.
current_rotation = np.abs(car.relative_state.yaw)
return current_rotation > maximal_rotation
def check_crash(car):
return check_out_of_lane(car) or check_exceed_max_rot(car) or car.done
###Output
_____no_output_____
###Markdown
Quick check: acting with a random control policyAt this point, we have (1) an environment; (2) an agent, with a step function. Before we start learning a control policy for our vehicle agent, we start by testing out the behavior of the agent in the virtual world by providing it with a completely random control policy. Naturally we expect that the behavior will not be very robust! Let's take a look.
###Code
## Behavior with random control policy ##
i = 0
num_crashes = 5
stream = VideoStream()
for _ in range(num_crashes):
vista_reset()
while not check_crash(car):
# Sample a random curvature (between +/- 1/3), keep speed constant
curvature = np.random.uniform(-1/3, 1/3)
# Step the simulated car with the same action
vista_step(curvature=curvature)
# Render and save the display
vis_img = display.render()
stream.write(vis_img[:, :, ::-1], index=i)
i += 1
print(f"Car crashed on step {i}")
for _ in range(5):
stream.write(vis_img[:, :, ::-1], index=i)
i += 1
print("Saving trajectory with random policy...")
stream.save("random_policy.mp4")
mdl.lab3.play_video("random_policy.mp4")
###Output
_____no_output_____
###Markdown
3.8 Preparing to learn a control policy: data preprocessingSo, hopefully you saw that the random control policy was, indeed, not very robust. Yikes. Our overall goal in this lab is to build a self-driving agent using a neural network controller trained entirely in the simulator VISTA. This means that the data used to train and test the self-driving agent will be supplied by VISTA. As a step towards this, we will do some data preprocessing to make it easier for the network to learn from these visual data.Previously we rendered the data with a display as a quick check that the environment was working properly. For training the agent, we will directly access the car's observations, extract Regions Of Interest (ROI) from those observations, crop them out, and use these crops as training data for our self-driving agent controller. Observe both the full observation and the extracted ROI.
###Code
from google.colab.patches import cv2_imshow
# Directly access the raw sensor observations of the simulated car
vista_reset()
full_obs = car.observations[camera.name]
cv2_imshow(full_obs)
## ROIs ##
# Crop a smaller region of interest (ROI). This is necessary because:
# 1. The full observation will have distortions on the edge as the car deviates from the human
# 2. A smaller image of the environment will be easier for our model to learn from
region_of_interest = camera.camera_param.get_roi()
i1, j1, i2, j2 = region_of_interest
cropped_obs = full_obs[i1:i2, j1:j2]
cv2_imshow(cropped_obs)
###Output
_____no_output_____
###Markdown
We will group these steps into some helper functions that we can use during training: 1. `preprocess`: takes a full observation as input and returns a preprocessed version. This can include whatever preprocessing steps you would like! For example, ROI extraction, cropping, augmentations, and so on. You are welcome to add and modify this function as you seek to optimize your self-driving agent!2. `grab_and_preprocess`: grab the car's current observation (i.e., image view from its perspective), and then call the `preprocess` function on that observation.
###Code
## Data preprocessing functions ##
def preprocess(full_obs):
# Extract ROI
i1, j1, i2, j2 = camera.camera_param.get_roi()
obs = full_obs[i1:i2, j1:j2]
# Rescale to [0, 1]
obs = obs / 255.
return obs
def grab_and_preprocess_obs(car):
full_obs = car.observations[camera.name]
obs = preprocess(full_obs)
return obs
###Output
_____no_output_____
###Markdown
3.9 Define the self-driving agent and learning algorithmAs before, we'll use a neural network to define our agent and output actions that it will take. Fixing the agent's driving speed, we will train this network to predict a curvature -- a continuous value -- that will relate to the car's turn radius. Specifically, define the model to output a prediction of a continuous distribution of curvature, defined by a mean $\mu$ and standard deviation $\sigma$. These parameters will define a Normal distribution over curvature.What network architecture do you think would be especially well suited to the task of end-to-end control learning from RGB images? Since our observations are in the form of RGB images, we'll start with a convolutional network. Note that you will be tasked with completing a template CNN architecture for the self-driving car agent -- but you should certainly experiment beyond this template to try to optimize performance!
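Before completing the template, it can help to reason about how the spatial dimensions shrink through the stride-2, `'valid'`-padding convolutions. A minimal sketch (the input size of 74 below is a hypothetical placeholder -- the real value depends on the ROI your camera parameters produce):

```python
# With 'valid' padding, a convolution maps a spatial size n to
# floor((n - kernel) / stride) + 1.
def valid_conv_out(n, kernel, stride):
    return (n - kernel) // stride + 1

n = 74  # hypothetical ROI height -- substitute your actual crop size
for kernel, stride in [(5, 2), (5, 2), (3, 2), (3, 2)]:  # template conv layers
    n = valid_conv_out(n, kernel, stride)
    print(n)  # prints 35, 16, 7, 3 across the four layers
```

This is useful for sanity-checking that the feature map entering `Flatten` is neither degenerate nor unnecessarily large.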
###Code
### Define the self-driving agent ###
# Note: we start with a template CNN architecture -- experiment away as you
# try to optimize your agent!
# Functionally define layers for convenience
# All convolutional layers will use the swish activation defined here
act = tf.keras.activations.swish
Conv2D = functools.partial(tf.keras.layers.Conv2D, padding='valid', activation=act)
Flatten = tf.keras.layers.Flatten
Dense = tf.keras.layers.Dense
# Defines a CNN for the self-driving agent
def create_driving_model():
model = tf.keras.models.Sequential([
# Convolutional layers
# First, 32 5x5 filters and 2x2 stride
Conv2D(filters=32, kernel_size=5, strides=2),
# TODO: define convolutional layers with 48 5x5 filters and 2x2 stride
Conv2D(filters=48, kernel_size=5, strides=2), # TODO
# Conv2D('''TODO'''),
# TODO: define two convolutional layers with 64 3x3 filters and 2x2 stride
Conv2D(filters=64, kernel_size=3, strides=2), # TODO
Conv2D(filters=64, kernel_size=3, strides=2), # TODO
# Conv2D('''TODO'''),
Flatten(),
# Fully connected layer and output
Dense(units=128, activation=act),
# TODO: define the output dimension of the last Dense layer.
# Pay attention to the space the agent needs to act in.
# Remember that this model is outputing a distribution of *continuous*
# actions, which take a different shape than discrete actions.
# How many outputs should there be to define a distribution?'''
Dense(units=2, activation=None) # TODO
# Dense('''TODO''')
])
return model
driving_model = create_driving_model()
###Output
_____no_output_____
###Markdown
Now we will define the learning algorithm which will be used to reinforce good behaviors of the agent and discourage bad behaviors. As with Cartpole, we will use a *policy gradient* method that aims to **maximize** the likelihood of actions that result in large rewards. However, there are some key differences. In Cartpole, the agent's action space was discrete: it could only move left or right. In driving, the agent's action space is continuous: the control network outputs a curvature, which is a continuous variable. We will define a probability distribution, parameterized by a mean and standard deviation, over this continuous action space to define the possible actions the self-driving agent can take.You will define two functions that reflect these changes and form the core of the learning algorithm:1. `run_driving_model`: takes an input image and outputs a prediction of a continuous distribution of curvature. This will take the form of a Normal distribution defined using TensorFlow Probability's [`tfp.distributions.Normal`](https://www.tensorflow.org/probability/api_docs/python/tfp/distributions/Normal) function, so the model's prediction will include both a mean and a standard deviation. Operates on the instance `driving_model` defined above.2. `compute_driving_loss`: computes the loss for a prediction in the form of a continuous Normal distribution. Recall that, as in Cartpole, computing the loss involves multiplying the predicted log probabilities by the discounted rewards. Similar to `compute_loss` in Cartpole.The `train_step` function that uses the loss function to execute a training step will be the same as we used in Cartpole! It will have to be executed above in order for the driving agent to train properly.
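To see what the Normal log-probability looks like numerically, here is a hand-computed version that mirrors what `tfp.distributions.Normal(mu, sigma).log_prob(x)` returns (the mu and sigma values below are illustrative):

```python
import numpy as np

# Log-density of a Normal distribution, written out by hand.
def normal_log_prob(x, mu, sigma):
    return -0.5 * np.log(2 * np.pi * sigma**2) - (x - mu)**2 / (2 * sigma**2)

mu, sigma = 0.0, 0.05
near = normal_log_prob(0.01, mu, sigma)  # action close to the mean
far = normal_log_prob(0.12, mu, sigma)   # action far from the mean
print(near, far)
assert near > far  # likely actions get higher (less negative) log-probability
```

Scaling these log-probabilities by rewards, exactly as in Cartpole, is what lets the same policy gradient machinery carry over to a continuous action space.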
###Code
## The self-driving learning algorithm ##
# hyperparameters
max_curvature = 1/8.
max_std = 0.1
def run_driving_model(image):
# Arguments:
# image: an input image
# Returns:
# pred_dist: predicted distribution of control actions
single_image_input = tf.rank(image) == 3 # missing 4th batch dimension
if single_image_input:
image = tf.expand_dims(image, axis=0)
'''TODO: get the prediction of the model given the current observation.'''
distribution = driving_model(image) # TODO
# distribution = ''' TODO '''
mu, logsigma = tf.split(distribution, 2, axis=1)
mu = max_curvature * tf.tanh(mu) # conversion
sigma = max_std * tf.sigmoid(logsigma) + 0.005 # conversion
'''TODO: define the predicted distribution of curvature, given the predicted
mean mu and standard deviation sigma. Use a Normal distribution as defined
in TF probability (hint: tfp.distributions)'''
pred_dist = tfp.distributions.Normal(mu, sigma) # TODO
# pred_dist = ''' TODO '''
return pred_dist
def compute_driving_loss(dist, actions, rewards):
# Arguments:
# dist: predicted distribution of control actions (from run_driving_model)
# actions: the actions the agent took in an episode
# rewards: the rewards the agent received in an episode
# Returns:
# loss
'''TODO: complete the function call to compute the negative log probabilities
of the agent's actions.'''
neg_logprob = -1 * dist.log_prob(actions)
# neg_logprob = '''TODO'''
'''TODO: scale the negative log probability by the rewards.'''
loss = tf.reduce_mean( neg_logprob * rewards ) # TODO
# loss = tf.reduce_mean('''TODO''')
return loss
###Output
_____no_output_____
###Markdown
3.10 Train the self-driving agentWe're now all set up to start training our RL algorithm and agent for autonomous driving!We begin by initializing an optimizer, environment, a new driving agent, and `Memory` buffer. This is done in the first code block. To restart training completely, you will need to re-run this cell to re-initialize everything.The second code block is the main training script. Here, reinforcement learning episodes will be executed by agents in the VISTA environment. Since the self-driving agent starts out with literally zero knowledge of its environment, it can often take a long time to train and achieve stable behavior -- keep this in mind! For convenience, stopping and restarting the second cell will pick up training where you left off.The training block will execute a policy in the environment until the agent crashes. When the agent crashes, the (state, action, reward) triplets `(s,a,r)` from the episode, saved in the `Memory` buffer, will be provided as input to the policy gradient loss function. This information will be used to execute optimization within the training step. Memory will then be cleared, and we will do it all over again!Let's run the code block to train our self-driving agent. We will again visualize the evolution of the total reward as a function of training to get a sense of how the agent is learning. **You should reach a reward of at least 100 to achieve bare-minimum stable behavior.**
###Code
## Training parameters and initialization ##
## Re-run this cell to restart training from scratch ##
''' TODO: Learning rate and optimizer '''
learning_rate = 5e-4
# learning_rate = '''TODO'''
optimizer = tf.keras.optimizers.Adam(learning_rate)
# optimizer = '''TODO'''
# instantiate driving agent
vista_reset()
driving_model = create_driving_model()
# NOTE: the variable driving_model will be used in run_driving_model execution
# to track our progress
smoothed_reward = mdl.util.LossHistory(smoothing_factor=0.9)
plotter = mdl.util.PeriodicPlotter(sec=2, xlabel='Iterations', ylabel='Rewards')
# instantiate Memory buffer
memory = Memory()
## Driving training! Main training block. ##
## Note: stopping and restarting this cell will pick up training where you
# left off. To restart training you need to rerun the cell above as
# well (to re-initialize the model and optimizer)
max_batch_size = 300
max_reward = float('-inf') # keep track of the maximum reward achieved during training
if hasattr(tqdm, '_instances'): tqdm._instances.clear() # clear if it exists
for i_episode in range(500):
plotter.plot(smoothed_reward.get())
# Restart the environment
vista_reset()
memory.clear()
observation = grab_and_preprocess_obs(car)
while True:
# TODO: using the car's current observation compute the desired
# action (curvature) distribution by feeding it into our
# driving model (use the function you already built to do this!) '''
curvature_dist = run_driving_model(observation)
# curvature_dist = '''TODO'''
# TODO: sample from the action *distribution* to decide how to step
# the car in the environment. You may want to check the documentation
# for tfp.distributions.Normal online. Remember that the sampled action
# should be a single scalar value after this step.
curvature_action = curvature_dist.sample()[0,0]
# curvature_action = '''TODO'''
# Step the simulated car with the same action
vista_step(curvature_action)
observation = grab_and_preprocess_obs(car)
# TODO: Compute the reward for this iteration. You define
# the reward function for this policy, start with something
# simple - for example, give a reward of 1 if the car did not
# crash and a reward of 0 if it did crash.
reward = 1.0 if not check_crash(car) else 0.0
# reward = '''TODO'''
# add to memory
memory.add_to_memory(observation, curvature_action, reward)
# is the episode over? did you crash or do so well that you're done?
if reward == 0.0:
# determine total reward and keep a record of this
total_reward = sum(memory.rewards)
smoothed_reward.append(total_reward)
# execute training step - remember we don't know anything about how the
# agent is doing until it has crashed! if the training step is too large
# we need to sample a mini-batch for this step.
batch_size = min(len(memory), max_batch_size)
i = np.random.choice(len(memory), batch_size, replace=False)
train_step(driving_model, compute_driving_loss, optimizer,
observations=np.array(memory.observations)[i],
actions=np.array(memory.actions)[i],
discounted_rewards = discount_rewards(memory.rewards)[i],
custom_fwd_fn=run_driving_model)
# reset the memory
memory.clear()
break
###Output
_____no_output_____
###Markdown
3.11 Evaluate the self-driving agentFinally, we can put our trained self-driving agent to the test! It will execute autonomous control, in VISTA, based on the learned controller. We will evaluate how well it does based on the distance the car travels without crashing. We await the result...
###Code
## Evaluation block!##
i_step = 0
num_episodes = 5
num_reset = 5
stream = VideoStream()
for i_episode in range(num_episodes):
# Restart the environment
vista_reset()
observation = grab_and_preprocess_obs(car)
print("rolling out in env")
episode_step = 0
while not check_crash(car) and episode_step < 100:
# using our observation, choose an action and take it in the environment
curvature_dist = run_driving_model(observation)
curvature = curvature_dist.mean()[0,0]
# Step the simulated car with the same action
vista_step(curvature)
observation = grab_and_preprocess_obs(car)
vis_img = display.render()
stream.write(vis_img[:, :, ::-1], index=i_step)
i_step += 1
episode_step += 1
for _ in range(num_reset):
stream.write(np.zeros_like(vis_img), index=i_step)
i_step += 1
print(f"Average reward: {(i_step - (num_reset*num_episodes)) / num_episodes}")
print("Saving trajectory with trained policy...")
stream.save("trained_policy.mp4")
mdl.lab3.play_video("trained_policy.mp4")
###Output
_____no_output_____
###Markdown
Copyright Information
###Code
# Copyright 2020 MIT 6.S191 Introduction to Deep Learning. All Rights Reserved.
#
# Licensed under the MIT License. You may not use this file except in compliance
# with the License. Use and/or modification of this code outside of 6.S191 must
# reference:
#
# © MIT 6.S191: Introduction to Deep Learning
# http://introtodeeplearning.com
#
###Output
_____no_output_____
###Markdown
Laboratory 3: Reinforcement LearningReinforcement learning (RL) is a subset of machine learning which poses learning problems as interactions between agents and environments. It often assumes agents have no prior knowledge of a world, so they must learn to navigate environments by optimizing a reward function. Within an environment, an agent can take certain actions and receive feedback, in the form of positive or negative rewards, with respect to their decision. As such, an agent's feedback loop is somewhat akin to the idea of "trial and error", or the manner in which a child might learn to distinguish between "good" and "bad" actions.In practical terms, our RL agent will interact with the environment by taking an action at each timestep, receiving a corresponding reward, and updating its state according to what it has "learned". While the ultimate goal of reinforcement learning is to teach agents to act in the real, physical world, games provide a convenient proving ground for developing RL algorithms and agents. Games have some properties that make them particularly well suited for RL: 1. In many cases, games have perfectly describable environments. For example, all rules of chess can be formally written and programmed into a chess game simulator;2. Games are massively parallelizable. Since they do not require running in the real world, simultaneous environments can be run on large data clusters; 3. Simpler scenarios in games enable fast prototyping. This speeds up the development of algorithms that could eventually run in the real-world; and4. ... Games are fun! In previous labs, we have explored both supervised (with LSTMs, CNNs) and unsupervised / semi-supervised (with VAEs) learning tasks. Reinforcement learning is fundamentally different, in that we are training a deep learning algorithm to govern the actions of our RL agent, that is trying, within its environment, to find the optimal way to achieve a goal. 
The goal of training an RL agent is to determine the best next step to take to earn the greatest final payoff or return. In this lab, we focus on building a reinforcement learning algorithm to master two different environments with varying complexity. 1. **Cartpole**: Balance a pole, protruding from a cart, in an upright position by only moving the base left or right. Environment with a low-dimensional observation space.2. [**Pong**](https://en.wikipedia.org/wiki/Pong): Beat your competitors (whether other AI or humans!) at the game of Pong. Environment with a high-dimensional observation space -- learning directly from raw pixels.Let's get started! First we'll import TensorFlow, the course package, and some dependencies.
###Code
!apt-get install -y xvfb python-opengl x11-utils > /dev/null 2>&1
!pip install gym pyvirtualdisplay scikit-video > /dev/null 2>&1
%tensorflow_version 2.x
import tensorflow as tf
import numpy as np
import base64, io, time, gym
import IPython, functools
import matplotlib.pyplot as plt
from tqdm import tqdm
!pip install mitdeeplearning
import mitdeeplearning as mdl
###Output
_____no_output_____
###Markdown
Before we dive in, let's take a step back and outline our approach, which is generally applicable to reinforcement learning problems:1. **Initialize our environment and our agent**: here we will describe the different observations and actions the agent can make in the environment.2. **Define our agent's memory**: this will enable the agent to remember its past actions, observations, and rewards.3. **Define a reward function**: describes the reward associated with an action or sequence of actions.4. **Define the learning algorithm**: this will be used to reinforce the agent's good behaviors and discourage bad behaviors. Part 1: Cartpole 3.1 Define the Cartpole environment and agent Environment In order to model the environment for both the Cartpole and Pong tasks, we'll be using a toolkit developed by OpenAI called [OpenAI Gym](https://gym.openai.com/). It provides several pre-defined environments for training and testing reinforcement learning agents, including those for classic physics control tasks, Atari video games, and robotic simulations. To access the Cartpole environment, we can use `env = gym.make("CartPole-v0")`, which we gained access to when we imported the `gym` package. We can instantiate different [environments](https://gym.openai.com/envs/classic_control) by passing the environment name to the `make` function.One issue we might experience when developing RL algorithms is that many aspects of the learning process are inherently random: initializing game states, changes in the environment, and the agent's actions. As such, it can be helpful to set an initial "seed" for the environment to ensure some level of reproducibility. Much like you might use `numpy.random.seed`, we can call the comparable function in gym, `seed`, with our defined environment to ensure the environment's random variables are initialized the same each time.
###Code
### Instantiate the Cartpole environment ###
env = gym.make("CartPole-v0")
env.seed(1)
###Output
_____no_output_____
###Markdown
In Cartpole, a pole is attached by an un-actuated joint to a cart, which moves along a frictionless track. The pole starts upright, and the goal is to prevent it from falling over. The system is controlled by applying a force of +1 or -1 to the cart. A reward of +1 is provided for every timestep that the pole remains upright. The episode ends when the pole is more than 15 degrees from vertical, or the cart moves more than 2.4 units from the center of the track. A visual summary of the cartpole environment is depicted below:Given this setup for the environment and the objective of the game, we can think about: 1) what observations help define the environment's state; 2) what actions the agent can take. First, let's consider the observation space. In this Cartpole environment our observations are:1. Cart position2. Cart velocity3. Pole angle4. Pole rotation rateWe can confirm the size of the space by querying the environment's observation space:
###Code
n_observations = env.observation_space
print("Environment has observation space =", n_observations)
###Output
_____no_output_____
###Markdown
Second, we consider the action space. At every time step, the agent can move either right or left. Again we can confirm the size of the action space by querying the environment:
###Code
n_actions = env.action_space.n
print("Number of possible actions that the agent can choose from =", n_actions)
###Output
_____no_output_____
###Markdown
Cartpole agentNow that we have instantiated the environment and understood the dimensionality of the observation and action spaces, we are ready to define our agent. In deep reinforcement learning, a deep neural network defines the agent. This network will take as input an observation of the environment and output the probability of taking each of the possible actions. Since Cartpole is defined by a low-dimensional observation space, a simple feed-forward neural network should work well for our agent. We will define this using the `Sequential` API.
###Code
### Define the Cartpole agent ###
# Defines a feed-forward neural network
def create_cartpole_model():
model = tf.keras.models.Sequential([
# First Dense layer
tf.keras.layers.Dense(units=32, activation='relu'),
# TODO: Define the last Dense layer, which will provide the network's output.
# Think about the space the agent needs to act in!
tf.keras.layers.Dense(units=n_actions, activation=None) # TODO
# [TODO Dense layer to output action probabilities]
])
return model
cartpole_model = create_cartpole_model()
###Output
_____no_output_____
###Markdown
Now that we have defined the core network architecture, we will define an *action function* that executes a forward pass through the network, given a set of observations, and samples from the output. This sampling from the output probabilities will be used to select the next action for the agent. **Critically, this action function is totally general -- we will use this function for both Cartpole and Pong, and it is applicable to other RL tasks, as well!**
###Code
### Define the agent's action function ###
# Function that takes observations as input, executes a forward pass through model,
# and outputs a sampled action.
# Arguments:
# model: the network that defines our agent
# observation: observation which is fed as input to the model
# Returns:
# action: choice of agent action
def choose_action(model, observation):
# add batch dimension to the observation
observation = np.expand_dims(observation, axis=0)
'''TODO: feed the observations through the model to predict the log probabilities of each possible action.'''
logits = model.predict(observation) # TODO
# logits = model.predict('''TODO''')
# pass the log probabilities through a softmax to compute true probabilities
prob_weights = tf.nn.softmax(logits).numpy()
'''TODO: randomly sample from the prob_weights to pick an action.
Hint: carefully consider the dimensionality of the input probabilities (vector) and the output action (scalar)'''
action = np.random.choice(n_actions, size=1, p=prob_weights.flatten())[0] # TODO
# action = np.random.choice('''TODO''', size=1, p=''''TODO''')['''TODO''']
return action
###Output
_____no_output_____
###Markdown
3.2 Define the agent's memoryNow that we have instantiated the environment and defined the agent network architecture and action function, we are ready to move on to the next step in our RL workflow:1. **Initialize our environment and our agent**: here we will describe the different observations and actions the agent can make in the environment.2. **Define our agent's memory**: this will enable the agent to remember its past actions, observations, and rewards.3. **Define the learning algorithm**: this will be used to reinforce the agent's good behaviors and discourage bad behaviors.In reinforcement learning, training occurs alongside the agent's acting in the environment; an *episode* refers to a sequence of actions that ends in some terminal state, such as the pole falling down or the cart crashing. The agent will need to remember all of its observations and actions, such that once an episode ends, it can learn to "reinforce" the good actions and punish the undesirable actions via training. Our first step is to define a simple memory buffer that contains the agent's observations, actions, and received rewards from a given episode. **Once again, note the modularity of this memory buffer -- it can and will be applied to other RL tasks as well!**
###Code
### Agent Memory ###
class Memory:
def __init__(self):
self.clear()
# Resets/restarts the memory buffer
def clear(self):
self.observations = []
self.actions = []
self.rewards = []
# Add observations, actions, rewards to memory
def add_to_memory(self, new_observation, new_action, new_reward):
self.observations.append(new_observation)
'''TODO: update the list of actions with new action'''
self.actions.append(new_action) # TODO
# ['''TODO''']
'''TODO: update the list of rewards with new reward'''
self.rewards.append(new_reward) # TODO
# ['''TODO''']
memory = Memory()
###Output
_____no_output_____
###Markdown
3.3 Reward functionWe're almost ready to begin the learning algorithm for our agent! The next step is to compute the rewards of our agent as it acts in the environment. Since we (and the agent) are uncertain about if and when the game or task will end (i.e., when the pole will fall), it is useful to emphasize getting rewards **now** rather than later in the future -- this is the idea of discounting. Recall from lecture that discounting rewards is a similar concept to discounting money in the case of interest.To compute the expected cumulative reward, known as the **return**, at a given timestep in a learning episode, we sum the discounted rewards expected at that time step $t$, within a learning episode, projecting into the future. We define the return (cumulative reward) at a time step $t$, $R_{t}$, as:>$R_{t}=\sum_{k=0}^\infty\gamma^kr_{t+k}$where $0 < \gamma < 1$ is the discount factor, $r_{t}$ is the reward at time step $t$, and the index $k$ increments the projection into the future within a single learning episode. Intuitively, you can think of this function as depreciating any rewards received at later time steps, which forces the agent to prioritize getting rewards now. Since we can't extend episodes to infinity, in practice the computation is limited to the number of timesteps in an episode -- after that the reward is assumed to be zero.Take note of the form of this sum -- we'll have to be clever about how we implement this function. Specifically, we'll need to initialize an array of zeros, with length equal to the number of time steps, and fill it with the real discounted reward values as we loop through the rewards from the episode, which will have been saved in the agent's memory.
What we ultimately care about is which actions are better relative to other actions taken in that episode -- so, we'll normalize our computed rewards, using the mean and standard deviation of the rewards across the learning episode.
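As a quick worked example of the return computation (before the normalization step), consider three timesteps of reward 1 with gamma = 0.5, chosen so the numbers are easy to check by hand:

```python
import numpy as np

gamma = 0.5
rewards = [1.0, 1.0, 1.0]
R = 0.0
returns = np.zeros(len(rewards))
# Loop backwards: each step's return is its reward plus the discounted
# return of the step after it.
for t in reversed(range(len(rewards))):
    R = R * gamma + rewards[t]
    returns[t] = R
print(returns)  # returns are 1.75, 1.5, 1.0 -- earlier steps accumulate more
```

Earlier timesteps earn larger returns, matching the intuition that actions taken early contribute to all the rewards that follow.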
###Code
### Reward function ###
# Helper function that normalizes an np.array x
def normalize(x):
x -= np.mean(x)
x /= np.std(x)
return x.astype(np.float32)
# Compute normalized, discounted, cumulative rewards (i.e., return)
# Arguments:
# rewards: reward at timesteps in episode
# gamma: discounting factor
# Returns:
# normalized discounted reward
def discount_rewards(rewards, gamma=0.95):
discounted_rewards = np.zeros_like(rewards)
R = 0
for t in reversed(range(0, len(rewards))):
# update the total discounted reward
R = R * gamma + rewards[t]
discounted_rewards[t] = R
return normalize(discounted_rewards)
###Output
_____no_output_____
###Markdown
3.4 Learning algorithmNow we can start to define the learning algorithm which will be used to reinforce good behaviors of the agent and discourage bad behaviors. In this lab, we will focus on *policy gradient* methods, which aim to **maximize** the likelihood of actions that result in large rewards. Equivalently, this means that we want to **minimize** the negative likelihood of these same actions. We achieve this by simply **scaling** the probabilities by their associated rewards -- effectively amplifying the likelihood of actions that result in large rewards.Since the log function is monotonically increasing, minimizing the **negative likelihood** is equivalent to minimizing the **negative log-likelihood**. Recall that we can easily compute the negative log-likelihood of a discrete action by evaluating its [softmax cross entropy](https://www.tensorflow.org/api_docs/python/tf/nn/sparse_softmax_cross_entropy_with_logits). As in supervised learning, we can use stochastic gradient descent methods to achieve the desired minimization. Let's begin by defining the loss function.
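To convince yourself that softmax cross entropy really is the negative log-likelihood of the chosen action, here is a hand-computed version mirroring `tf.nn.sparse_softmax_cross_entropy_with_logits` (the logits below are made up):

```python
import numpy as np

# Negative log-probability of the chosen action under a softmax over logits.
def neg_log_prob(logits, action):
    probs = np.exp(logits) / np.sum(np.exp(logits))
    return -np.log(probs[action])

logits = [2.0, 0.5]  # the network favors action 0
print(neg_log_prob(logits, 0))  # small loss for the likely action
print(neg_log_prob(logits, 1))  # larger loss for the unlikely action
```

Scaling this per-action loss by the (discounted) reward is what makes high-reward actions more likely after a gradient step.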
###Code
### Loss function ###
# Arguments:
# logits: network's predictions for actions to take
# actions: the actions the agent took in an episode
# rewards: the rewards the agent received in an episode
# Returns:
# loss
def compute_loss(logits, actions, rewards):
'''TODO: complete the function call to compute the negative log probabilities'''
neg_logprob = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=logits, labels=actions) # TODO
# neg_logprob = tf.nn.sparse_softmax_cross_entropy_with_logits(logits='''TODO''', labels='''TODO''')
'''TODO: scale the negative log probability by the rewards'''
loss = tf.reduce_mean( neg_logprob * rewards ) # TODO
# loss = tf.reduce_mean('''TODO''')
return loss
###Output
_____no_output_____
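To see numerically what this loss is doing, here is a small NumPy sketch (a hand-rolled stand-in for the TF call above, using hypothetical logits and rewards) for two timesteps and two actions:

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax along the last axis
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Hypothetical logits, the actions actually taken, and their discounted rewards
logits = np.array([[2.0, 0.0], [0.0, 2.0]])
actions = np.array([0, 0])
rewards = np.array([1.0, -1.0])

probs = softmax(logits)
# Negative log-likelihood of each chosen action (softmax cross entropy)
neg_logprob = -np.log(probs[np.arange(len(actions)), actions])

# Scale by reward: a likely action with positive reward contributes a small
# positive term; an unlikely action with negative reward a large negative one
loss = np.mean(neg_logprob * rewards)
```

Minimizing this quantity pushes probability mass toward actions associated with large rewards and away from those associated with negative rewards.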
###Markdown
Now let's use the loss function to define a training step of our learning algorithm:
###Code
### Training step (forward and backpropagation) ###
def train_step(model, optimizer, observations, actions, discounted_rewards):
with tf.GradientTape() as tape:
# Forward propagate through the agent network
logits = model(observations)
'''TODO: call the compute_loss function to compute the loss'''
loss = compute_loss(logits, actions, discounted_rewards) # TODO
# loss = compute_loss('''TODO''', '''TODO''', '''TODO''')
'''TODO: run backpropagation to minimize the loss using the tape.gradient method'''
grads = tape.gradient(loss, model.trainable_variables) # TODO
# grads = tape.gradient('''TODO''', model.trainable_variables)
optimizer.apply_gradients(zip(grads, model.trainable_variables))
###Output
_____no_output_____
###Markdown
3.5 Run cartpole!With no prior knowledge of the environment, the agent will begin to learn how to balance the pole on the cart based only on the feedback it receives from the environment! Having defined how our agent can move, take in new observations, and update its state, we'll see how it gradually learns a policy of actions that keeps the pole balanced for as long as possible. To do this, we'll track how the rewards evolve as a function of training -- how should the rewards change as training progresses?
###Code
### Cartpole training! ###
# Learning rate and optimizer
learning_rate = 1e-3
optimizer = tf.keras.optimizers.Adam(learning_rate)
# instantiate cartpole agent
cartpole_model = create_cartpole_model()
# to track our progress
smoothed_reward = mdl.util.LossHistory(smoothing_factor=0.9)
plotter = mdl.util.PeriodicPlotter(sec=2, xlabel='Iterations', ylabel='Rewards')
if hasattr(tqdm, '_instances'): tqdm._instances.clear() # clear if it exists
for i_episode in range(500):
plotter.plot(smoothed_reward.get())
# Restart the environment
observation = env.reset()
memory.clear()
while True:
# using our observation, choose an action and take it in the environment
action = choose_action(cartpole_model, observation)
next_observation, reward, done, info = env.step(action)
# add to memory
memory.add_to_memory(observation, action, reward)
# is the episode over? did you crash or do so well that you're done?
if done:
# determine total reward and keep a record of this
total_reward = sum(memory.rewards)
smoothed_reward.append(total_reward)
# initiate training - remember we don't know anything about how the
# agent is doing until it has crashed!
train_step(cartpole_model, optimizer,
observations=np.vstack(memory.observations),
actions=np.array(memory.actions),
discounted_rewards = discount_rewards(memory.rewards))
# reset the memory
memory.clear()
break
    # update our observations
observation = next_observation
###Output
_____no_output_____
###Markdown
To get a sense of how our agent did, we can save a video of the trained model working to balance the pole. Note that this is a brand new environment instance that the agent has not seen before!Let's display the saved video to watch how our agent did!
###Code
saved_cartpole = mdl.lab3.save_video_of_model(cartpole_model, "CartPole-v0")
mdl.lab3.play_video(saved_cartpole)
###Output
_____no_output_____
###Markdown
How does the agent perform? Could you train it for shorter amounts of time and still perform well? Do you think that training longer would help even more? Part 2: PongIn Cartpole, we dealt with an environment that was static -- in other words, it didn't change over time. What happens if our environment is dynamic and unpredictable? Well that's exactly the case in [Pong](https://en.wikipedia.org/wiki/Pong), since part of the environment is the opposing player. We don't know how our opponent will act or react to our actions, so the complexity of our problem increases. It also becomes much more interesting, since we can compete to beat our opponent. RL provides a powerful framework for training AI systems with the ability to handle and interact with dynamic, unpredictable environments. In this part of the lab, we'll use the tools and workflow we explored in Part 1 to build an RL agent capable of playing the game of Pong. 3.6 Define and inspect the Pong environmentAs with Cartpole, we'll instantiate the Pong environment in the OpenAI gym, using a seed of 1.
###Code
env = gym.make("Pong-v0", frameskip=5)
env.seed(1); # for reproducibility
###Output
_____no_output_____
###Markdown
Let's next consider the observation space for the Pong environment. Instead of four physical descriptors of the cart-pole setup, in the case of Pong our observations are the individual video frames (i.e., images) that depict the state of the board. Thus, the observations are 210x160 RGB images (arrays of shape (210,160,3)).We can again confirm the size of the observation space by querying it:
###Code
print("Environment has observation space =", env.observation_space)
###Output
_____no_output_____
###Markdown
In Pong, at every time step, the agent (which controls the paddle) has six actions to choose from: no-op (no operation), move right, move left, fire, fire right, and fire left. Let's confirm the size of the action space by querying the environment:
###Code
n_actions = env.action_space.n
print("Number of possible actions that the agent can choose from =", n_actions)
###Output
_____no_output_____
###Markdown
3.7 Define the Pong agentAs before, we'll use a neural network to define our agent. What network architecture do you think would be especially well suited to this game? Since our observations are now in the form of images, we'll add convolutional layers to the network to increase its learning capacity.
###Code
### Define the Pong agent ###
# Functionally define layers for convenience
# All convolutional layers will have ReLU activation
Conv2D = functools.partial(tf.keras.layers.Conv2D, padding='same', activation='relu')
Flatten = tf.keras.layers.Flatten
Dense = tf.keras.layers.Dense
# Defines a CNN for the Pong agent
def create_pong_model():
model = tf.keras.models.Sequential([
# Convolutional layers
# First, 16 7x7 filters and 4x4 stride
Conv2D(filters=16, kernel_size=7, strides=4),
# TODO: define convolutional layers with 32 5x5 filters and 2x2 stride
Conv2D(filters=32, kernel_size=5, strides=2), # TODO
# Conv2D('''TODO'''),
# TODO: define convolutional layers with 48 3x3 filters and 2x2 stride
Conv2D(filters=48, kernel_size=3, strides=2), # TODO
# Conv2D('''TODO'''),
Flatten(),
# Fully connected layer and output
Dense(units=64, activation='relu'),
# TODO: define the output dimension of the last Dense layer.
# Pay attention to the space the agent needs to act in
Dense(units=n_actions, activation=None) # TODO
# Dense('''TODO''')
])
return model
pong_model = create_pong_model()
###Output
_____no_output_____
###Markdown
Since we've already defined the action function, `choose_action(model, observation)`, we don't need to define it again. Instead, we'll be able to reuse it later on by passing in the new model we've just created, `pong_model`. This is awesome because our action function provides a modular and generalizable method for all sorts of RL agents! 3.8 Pong-specific functionsIn Part 1 (Cartpole), we implemented some key functions and classes to build and train our RL agent -- `choose_action(model, observation)` and the `Memory` class, for example. However, in getting ready to apply these to a new game like Pong, we need to make some slight modifications. Namely, we need to think about what happens when a game ends. In Pong, we know a game has ended if the reward is +1 (we won!) or -1 (we lost, unfortunately). Otherwise, we expect the reward at a timestep to be zero -- the players (or agents) are just rallying against each other. So, when a game ends, we will need to reset the running discounted sum to zero before continuing backward through the episode. This results in a modified reward function.
###Code
### Pong reward function ###
# Compute normalized, discounted rewards for Pong (i.e., return)
# Arguments:
# rewards: reward at timesteps in episode
# gamma: discounting factor. Note increase to 0.99 -- rate of depreciation will be slower.
# Returns:
# normalized discounted reward
def discount_rewards(rewards, gamma=0.99):
discounted_rewards = np.zeros_like(rewards)
R = 0
for t in reversed(range(0, len(rewards))):
# NEW: Reset the sum if the reward is not 0 (the game has ended!)
if rewards[t] != 0:
R = 0
# update the total discounted reward as before
R = R * gamma + rewards[t]
discounted_rewards[t] = R
return normalize(discounted_rewards)
###Output
_____no_output_____
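On a toy reward sequence covering two games, the reset keeps the discounted sum from leaking across a game boundary. The standalone sketch below mirrors the loop logic above (normalization omitted for clarity):

```python
import numpy as np

def discount_with_reset(rewards, gamma=0.5):
    # Same reverse loop, but the running sum restarts whenever a game ends
    out = np.zeros(len(rewards), dtype=np.float32)
    R = 0.0
    for t in reversed(range(len(rewards))):
        if rewards[t] != 0:  # a game ended at step t
            R = 0.0
        R = R * gamma + rewards[t]
        out[t] = R
    return out

# Game 1 ends with a win (+1); game 2 ends with a loss (-1)
returns = discount_with_reset([0.0, 1.0, 0.0, -1.0])
# returns == [0.5, 1.0, -0.5, -1.0]: the -1 never discounts back into game 1
```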
###Markdown
Additionally, we have to consider the nature of the observations in the Pong environment, and how they will be fed into our network. Our observations in this case are images. Before we input an image into our network, we'll do a bit of pre-processing to crop and scale, clean up the background colors to a single color, and set the important game elements to a single color. Let's use this function to visualize what an observation might look like before and after pre-processing.
###Code
observation = env.reset()
for i in range(30):
observation, _,_,_ = env.step(0)
observation_pp = mdl.lab3.preprocess_pong(observation)
f = plt.figure(figsize=(10,3))
ax = f.add_subplot(121)
ax2 = f.add_subplot(122)
ax.imshow(observation); ax.grid(False);
ax2.imshow(np.squeeze(observation_pp)); ax2.grid(False); plt.title('Preprocessed Observation');
###Output
_____no_output_____
###Markdown
What do you notice? How might these changes be important for training our RL algorithm? 3.9 Training PongWe're now all set up to start training our RL algorithm and agent for the game of Pong! We've already defined our loss function with `compute_loss`, which employs policy gradient learning, as well as our backpropagation step with `train_step` which is beautiful! We will use these functions to execute training the Pong agent. Let's walk through the training block.In Pong, rather than feeding our network one image at a time, it can actually improve performance to input the difference between two consecutive observations, which really gives us information about the movement between frames -- how the game is changing. We'll first pre-process the raw observation, `x`, and then we'll compute the difference with the image frame we saw one timestep before. This observation change will be forward propagated through our Pong agent, the CNN network model, which will then predict the next action to take based on this observation. The raw reward will be computed, and the observation, action, and reward will be recorded into memory. This will continue until a training episode, i.e., a game, ends.Then, we will compute the discounted rewards, and use this information to execute a training step. Memory will be cleared, and we will do it all over again!Let's run the code block to train our Pong agent. Note that completing training will take quite a bit of time (estimated at least a couple of hours). We will again visualize the evolution of the total reward as a function of training to get a sense of how the agent is learning.
###Code
### Training Pong ###
# Hyperparameters
learning_rate=1e-4
MAX_ITERS = 10000 # increase the maximum number of episodes, since Pong is more complex!
# Model and optimizer
pong_model = create_pong_model()
optimizer = tf.keras.optimizers.Adam(learning_rate)
# plotting
smoothed_reward = mdl.util.LossHistory(smoothing_factor=0.9)
plotter = mdl.util.PeriodicPlotter(sec=5, xlabel='Iterations', ylabel='Rewards')
memory = Memory()
for i_episode in range(MAX_ITERS):
plotter.plot(smoothed_reward.get())
# Restart the environment
observation = env.reset()
previous_frame = mdl.lab3.preprocess_pong(observation)
while True:
# Pre-process image
current_frame = mdl.lab3.preprocess_pong(observation)
'''TODO: determine the observation change
Hint: this is the difference between the past two frames'''
obs_change = current_frame - previous_frame # TODO
# obs_change = # TODO
'''TODO: choose an action for the pong model, using the frame difference, and evaluate'''
action = choose_action(pong_model, obs_change) # TODO
# action = # TODO
# Take the chosen action
next_observation, reward, done, info = env.step(action)
'''TODO: save the observed frame difference, the action that was taken, and the resulting reward!'''
memory.add_to_memory(obs_change, action, reward) # TODO
# is the episode over? did you crash or do so well that you're done?
if done:
# determine total reward and keep a record of this
total_reward = sum(memory.rewards)
smoothed_reward.append( total_reward )
# begin training
train_step(pong_model,
optimizer,
observations = np.stack(memory.observations, 0),
actions = np.array(memory.actions),
discounted_rewards = discount_rewards(memory.rewards))
memory.clear()
break
observation = next_observation
previous_frame = current_frame
###Output
_____no_output_____
###Markdown
Finally we can put our trained agent to the test! It will play in a newly instantiated Pong environment against the "computer", a base AI system for Pong. Your agent plays as the green paddle. Let's watch the match instant replay!
###Code
saved_pong = mdl.lab3.save_video_of_model(
pong_model, "Pong-v0", obs_diff=True,
pp_fn=mdl.lab3.preprocess_pong)
mdl.lab3.play_video(saved_pong)
###Output
_____no_output_____
###Markdown
Visit MIT Deep Learning Run in Google Colab View Source on GitHub Copyright Information
###Code
# Copyright 2020 MIT 6.S191 Introduction to Deep Learning. All Rights Reserved.
#
# Licensed under the MIT License. You may not use this file except in compliance
# with the License. Use and/or modification of this code outside of 6.S191 must
# reference:
#
# © MIT 6.S191: Introduction to Deep Learning
# http://introtodeeplearning.com
#
###Output
_____no_output_____
###Markdown
Laboratory 3: Reinforcement LearningReinforcement learning (RL) is a subset of machine learning which poses learning problems as interactions between agents and environments. It often assumes agents have no prior knowledge of a world, so they must learn to navigate environments by optimizing a reward function. Within an environment, an agent can take certain actions and receive feedback, in the form of positive or negative rewards, with respect to their decision. As such, an agent's feedback loop is somewhat akin to the idea of "trial and error", or the manner in which a child might learn to distinguish between "good" and "bad" actions.In practical terms, our RL agent will interact with the environment by taking an action at each timestep, receiving a corresponding reward, and updating its state according to what it has "learned". While the ultimate goal of reinforcement learning is to teach agents to act in the real, physical world, games provide a convenient proving ground for developing RL algorithms and agents. Games have some properties that make them particularly well suited for RL: 1. In many cases, games have perfectly describable environments. For example, all rules of chess can be formally written and programmed into a chess game simulator;2. Games are massively parallelizable. Since they do not require running in the real world, simultaneous environments can be run on large data clusters; 3. Simpler scenarios in games enable fast prototyping. This speeds up the development of algorithms that could eventually run in the real-world; and4. ... Games are fun! In previous labs, we have explored both supervised (with LSTMs, CNNs) and unsupervised / semi-supervised (with VAEs) learning tasks. Reinforcement learning is fundamentally different, in that we are training a deep learning algorithm to govern the actions of our RL agent, that is trying, within its environment, to find the optimal way to achieve a goal. 
The goal of training an RL agent is to determine the best next step to take to earn the greatest final payoff or return. In this lab, we focus on building a reinforcement learning algorithm to master two different environments with varying complexity. 1. **Cartpole**: Balance a pole, protruding from a cart, in an upright position by only moving the base left or right. Environment with a low-dimensional observation space.2. [**Pong**](https://en.wikipedia.org/wiki/Pong): Beat your competitors (whether other AI or humans!) at the game of Pong. Environment with a high-dimensional observation space -- learning directly from raw pixels.Let's get started! First we'll import TensorFlow, the course package, and some dependencies.
###Code
!apt-get install -y xvfb python-opengl x11-utils > /dev/null 2>&1
!pip install gym pyvirtualdisplay scikit-video > /dev/null 2>&1
%tensorflow_version 2.x
import tensorflow as tf
import numpy as np
import base64, io, time, gym
import IPython, functools
import matplotlib.pyplot as plt
from tqdm import tqdm
!pip install mitdeeplearning
import mitdeeplearning as mdl
###Output
_____no_output_____
###Markdown
Before we dive in, let's take a step back and outline our approach, which is generally applicable to reinforcement learning problems:1. **Initialize our environment and our agent**: here we will describe the different observations and actions the agent can make in the environment.2. **Define our agent's memory**: this will enable the agent to remember its past actions, observations, and rewards.3. **Define a reward function**: describes the reward associated with an action or sequence of actions.4. **Define the learning algorithm**: this will be used to reinforce the agent's good behaviors and discourage bad behaviors. Part 1: Cartpole 3.1 Define the Cartpole environment and agent Environment In order to model the environment for both the Cartpole and Pong tasks, we'll be using a toolkit developed by OpenAI called [OpenAI Gym](https://gym.openai.com/). It provides several pre-defined environments for training and testing reinforcement learning agents, including those for classic physics control tasks, Atari video games, and robotic simulations. To access the Cartpole environment, we can use `env = gym.make("CartPole-v0")`, which we gained access to when we imported the `gym` package. We can instantiate different [environments](https://gym.openai.com/envs/classic_control) by passing the environment name to the `make` function.One issue we might experience when developing RL algorithms is that many aspects of the learning process are inherently random: initializing game states, changes in the environment, and the agent's actions. As such, it can be helpful to set an initial "seed" for the environment to ensure some level of reproducibility. Much like you might use `numpy.random.seed`, we can call the comparable function in gym, `seed`, with our defined environment to ensure the environment's random variables are initialized the same each time.
###Code
### Instantiate the Cartpole environment ###
env = gym.make("CartPole-v0")
env.seed(1)
###Output
_____no_output_____
###Markdown
In Cartpole, a pole is attached by an un-actuated joint to a cart, which moves along a frictionless track. The pole starts upright, and the goal is to prevent it from falling over. The system is controlled by applying a force of +1 or -1 to the cart. A reward of +1 is provided for every timestep that the pole remains upright. The episode ends when the pole is more than 15 degrees from vertical, or the cart moves more than 2.4 units from the center of the track. A visual summary of the cartpole environment is depicted below:Given this setup for the environment and the objective of the game, we can think about: 1) what observations help define the environment's state; 2) what actions the agent can take. First, let's consider the observation space. In this Cartpole environment our observations are:1. Cart position2. Cart velocity3. Pole angle4. Pole rotation rateWe can confirm the size of the space by querying the environment's observation space:
###Code
n_observations = env.observation_space
print("Environment has observation space =", n_observations)
###Output
_____no_output_____
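The episode-termination rule described earlier (pole more than 15 degrees from vertical, or cart more than 2.4 units off-center) can be sketched as a simple predicate. This is purely illustrative -- the real check lives inside the gym environment:

```python
def episode_done(pole_angle_deg, cart_position):
    # Episode ends when the pole tilts too far or the cart leaves the track
    return abs(pole_angle_deg) > 15.0 or abs(cart_position) > 2.4

assert episode_done(20.0, 0.0)      # pole fell over
assert episode_done(0.0, -3.0)      # cart strayed off the track
assert not episode_done(5.0, 1.0)   # still balancing, reward +1 this step
```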
###Markdown
Second, we consider the action space. At every time step, the agent can move either right or left. Again we can confirm the size of the action space by querying the environment:
###Code
n_actions = env.action_space.n
print("Number of possible actions that the agent can choose from =", n_actions)
###Output
_____no_output_____
###Markdown
Cartpole agentNow that we have instantiated the environment and understood the dimensionality of the observation and action spaces, we are ready to define our agent. In deep reinforcement learning, a deep neural network defines the agent. This network will take as input an observation of the environment and output the probability of taking each of the possible actions. Since Cartpole is defined by a low-dimensional observation space, a simple feed-forward neural network should work well for our agent. We will define this using the `Sequential` API.
###Code
### Define the Cartpole agent ###
# Defines a feed-forward neural network
def create_cartpole_model():
model = tf.keras.models.Sequential([
# First Dense layer
tf.keras.layers.Dense(units=32, activation='relu'),
# TODO: Define the last Dense layer, which will provide the network's output.
# Think about the space the agent needs to act in!
tf.keras.layers.Dense(units=n_actions, activation=None) # TODO
# [TODO Dense layer to output action probabilities]
])
return model
cartpole_model = create_cartpole_model()
###Output
_____no_output_____
###Markdown
Now that we have defined the core network architecture, we will define an *action function* that executes a forward pass through the network, given a set of observations, and samples from the output. This sampling from the output probabilities will be used to select the next action for the agent. **Critically, this action function is totally general -- we will use this function for both Cartpole and Pong, and it is applicable to other RL tasks, as well!**
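The heart of the action function is sampling from a categorical distribution defined by the network's outputs. In isolation (a standalone NumPy sketch, separate from the TF-based function implemented here), that step looks like:

```python
import numpy as np

rng = np.random.default_rng(0)  # seeded for reproducibility

# Hypothetical logits for two actions; softmax turns them into probabilities
logits = np.array([1.0, -1.0])
probs = np.exp(logits) / np.exp(logits).sum()  # ~[0.88, 0.12]

# Sampling many times shows action 0 chosen roughly 88% of the time
samples = rng.choice(len(probs), size=10_000, p=probs)
frac_action0 = np.mean(samples == 0)
```

Sampling (rather than always taking the argmax) keeps the agent exploring, which is essential early in training.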
###Code
### Define the agent's action function ###
# Function that takes observations as input, executes a forward pass through model,
# and outputs a sampled action.
# Arguments:
# model: the network that defines our agent
# observation: observation which is fed as input to the model
# Returns:
# action: choice of agent action
def choose_action(model, observation):
# add batch dimension to the observation
observation = np.expand_dims(observation, axis=0)
'''TODO: feed the observations through the model to predict the log probabilities of each possible action.'''
logits = model.predict(observation) # TODO
# logits = model.predict('''TODO''')
# pass the log probabilities through a softmax to compute true probabilities
prob_weights = tf.nn.softmax(logits).numpy()
'''TODO: randomly sample from the prob_weights to pick an action.
Hint: carefully consider the dimensionality of the input probabilities (vector) and the output action (scalar)'''
action = np.random.choice(n_actions, size=1, p=prob_weights.flatten())[0] # TODO
# action = np.random.choice('''TODO''', size=1, p=''''TODO''')['''TODO''']
return action
###Output
_____no_output_____
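To make the softmax-then-sample step inside `choose_action` concrete, here is a minimal pure-NumPy sketch of the same math; the `toy_logits` values are made up for illustration:

```python
import numpy as np

def softmax(logits):
    # subtract the max before exponentiating for numerical stability
    z = np.exp(logits - np.max(logits))
    return z / np.sum(z)

# made-up logits for a 2-action agent (e.g., Cartpole's left/right)
toy_logits = np.array([2.0, 0.5])
prob_weights = softmax(toy_logits)

# the probabilities are valid: non-negative and summing to 1
assert np.isclose(prob_weights.sum(), 1.0)

# sample an action index according to those probabilities,
# mirroring the np.random.choice call in choose_action above
np.random.seed(0)
action = np.random.choice(len(toy_logits), p=prob_weights)
print(action)  # an int in {0, 1}
```

Because the first logit is larger, action 0 is sampled more often, but action 1 still occurs with nonzero probability -- this stochasticity is what lets the agent explore.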
###Markdown
3.2 Define the agent's memoryNow that we have instantiated the environment and defined the agent network architecture and action function, we are ready to move on to the next step in our RL workflow:1. **Initialize our environment and our agent**: here we will describe the different observations and actions the agent can make in the environment.2. **Define our agent's memory**: this will enable the agent to remember its past actions, observations, and rewards.3. **Define the learning algorithm**: this will be used to reinforce the agent's good behaviors and discourage bad behaviors.In reinforcement learning, training occurs alongside the agent's acting in the environment; an *episode* refers to a sequence of actions that ends in some terminal state, such as the pole falling down or the cart crashing. The agent will need to remember all of its observations and actions, such that once an episode ends, it can learn to "reinforce" the good actions and punish the undesirable actions via training. Our first step is to define a simple memory buffer that contains the agent's observations, actions, and received rewards from a given episode. **Once again, note the modularity of this memory buffer -- it can and will be applied to other RL tasks as well!**
###Code
### Agent Memory ###
class Memory:
def __init__(self):
self.clear()
# Resets/restarts the memory buffer
def clear(self):
self.observations = []
self.actions = []
self.rewards = []
# Add observations, actions, rewards to memory
def add_to_memory(self, new_observation, new_action, new_reward):
self.observations.append(new_observation)
'''TODO: update the list of actions with new action'''
self.actions.append(new_action) # TODO
# ['''TODO''']
'''TODO: update the list of rewards with new reward'''
self.rewards.append(new_reward) # TODO
# ['''TODO''']
memory = Memory()
###Output
_____no_output_____
###Markdown
3.3 Reward functionWe're almost ready to begin the learning algorithm for our agent! The next step is to compute the rewards of our agent as it acts in the environment. Since we (and the agent) are uncertain about if and when the game or task will end (i.e., when the pole will fall), it is useful to emphasize getting rewards **now** rather than later in the future -- this is the idea of discounting. This is a similar concept to discounting money in the case of interest.To compute the expected cumulative reward, known as the **return**, at a given timestep in a learning episode, we sum the discounted rewards expected from that time step $t$ onward, within a learning episode. We define the return (cumulative reward) at a time step $t$, $R_{t}$ as:>$R_{t}=\sum_{k=0}^\infty\gamma^kr_{t+k}$where $0 < \gamma < 1$ is the discount factor, $r_{t}$ is the reward at time step $t$, and the index $k$ projects into the future within a single learning episode. Intuitively, you can think of this function as depreciating any rewards received at later time steps, which will force the agent to prioritize getting rewards now. Since we can't extend episodes to infinity, in practice the computation is limited to the number of timesteps in an episode -- after that the reward is assumed to be zero.Take note of the form of this sum -- we'll have to be clever about how we implement this function. Specifically, we'll need to initialize an array of zeros with length equal to the number of time steps, and fill it with the real discounted reward values as we loop through the rewards from the episode, which will have been saved in the agent's memory. 
What we ultimately care about is which actions are better relative to other actions taken in that episode -- so, we'll normalize our computed rewards, using the mean and standard deviation of the rewards across the learning episode.
###Code
### Reward function ###
# Helper function that normalizes an np.array x
def normalize(x):
x -= np.mean(x)
x /= np.std(x)
return x.astype(np.float32)
# Compute normalized, discounted, cumulative rewards (i.e., return)
# Arguments:
# rewards: reward at timesteps in episode
# gamma: discounting factor
# Returns:
# normalized discounted reward
def discount_rewards(rewards, gamma=0.95):
discounted_rewards = np.zeros_like(rewards)
R = 0
for t in reversed(range(0, len(rewards))):
# update the total discounted reward
R = R * gamma + rewards[t]
discounted_rewards[t] = R
return normalize(discounted_rewards)
###Output
_____no_output_____
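Before moving on, it can help to verify that the backward-accumulation loop in `discount_rewards` really computes the sum defined earlier. The following self-contained check (with arbitrarily chosen toy rewards and $\gamma$) compares the loop against the closed-form $R_{t}=\sum_{k}\gamma^{k}r_{t+k}$, before normalization:

```python
import numpy as np

def discounted(rewards, gamma):
    # same backward accumulation as discount_rewards, without normalization
    out = np.zeros(len(rewards))
    R = 0.0
    for t in reversed(range(len(rewards))):
        R = R * gamma + rewards[t]
        out[t] = R
    return out

rewards = [1.0, 1.0, 1.0]  # toy episode: a reward of 1 at every step
gamma = 0.5
loop_result = discounted(rewards, gamma)

# closed-form: R_t = sum_k gamma^k * r_{t+k}
closed_form = [sum(gamma**k * rewards[t + k] for k in range(len(rewards) - t))
               for t in range(len(rewards))]

assert np.allclose(loop_result, closed_form)  # both are [1.75, 1.5, 1.0]
```

Note how earlier timesteps accumulate larger returns, since they still have the whole (depreciated) future ahead of them.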
###Markdown
3.4 Learning algorithmNow we can start to define the learning algorithm which will be used to reinforce good behaviors of the agent and discourage bad behaviors. In this lab, we will focus on *policy gradient* methods which aim to **maximize** the likelihood of actions that result in large rewards. Equivalently, this means that we want to **minimize** the negative likelihood of these same actions. We achieve this by simply **scaling** the probabilities by their associated rewards -- effectively amplifying the likelihood of actions that result in large rewards.Since the log function is monotonically increasing, this means that minimizing **negative likelihood** is equivalent to minimizing **negative log-likelihood**. Recall that we can easily compute the negative log-likelihood of a discrete action by evaluating its [softmax cross entropy](https://www.tensorflow.org/api_docs/python/tf/nn/sparse_softmax_cross_entropy_with_logits). Like in supervised learning, we can use stochastic gradient descent methods to achieve the desired minimization. Let's begin by defining the loss function.
###Code
### Loss function ###
# Arguments:
# logits: network's predictions for actions to take
# actions: the actions the agent took in an episode
# rewards: the rewards the agent received in an episode
# Returns:
# loss
def compute_loss(logits, actions, rewards):
'''TODO: complete the function call to compute the negative log probabilities'''
neg_logprob = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=logits, labels=actions) # TODO
# neg_logprob = tf.nn.sparse_softmax_cross_entropy_with_logits(logits='''TODO''', labels='''TODO''')
'''TODO: scale the negative log probability by the rewards'''
loss = tf.reduce_mean( neg_logprob * rewards ) # TODO
# loss = tf.reduce_mean('''TODO''')
return loss
###Output
_____no_output_____
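Note that the softmax cross entropy used in `compute_loss` is exactly the negative log of the softmax probability assigned to the chosen action. A small self-contained NumPy sketch (with made-up logits) makes that equivalence concrete:

```python
import numpy as np

def neg_log_prob(logits, action):
    # negative log-likelihood of `action` under softmax(logits):
    # -log softmax(logits)[action], computed in a numerically stable way
    z = logits - np.max(logits)
    log_softmax = z - np.log(np.sum(np.exp(z)))
    return -log_softmax[action]

logits = np.array([1.0, 3.0])  # made-up network outputs for 2 actions
p = np.exp(logits) / np.sum(np.exp(logits))  # softmax probabilities

# the stable computation agrees with -log(softmax) for each action
for a in range(2):
    assert np.isclose(neg_log_prob(logits, a), -np.log(p[a]))
```

Scaling this quantity by a large positive reward and minimizing it pushes the probability of the taken action up, which is the heart of the policy gradient update.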
###Markdown
Now let's use the loss function to define a training step of our learning algorithm:
###Code
### Training step (forward and backpropagation) ###
def train_step(model, optimizer, observations, actions, discounted_rewards):
with tf.GradientTape() as tape:
# Forward propagate through the agent network
logits = model(observations)
'''TODO: call the compute_loss function to compute the loss'''
loss = compute_loss(logits, actions, discounted_rewards) # TODO
# loss = compute_loss('''TODO''', '''TODO''', '''TODO''')
'''TODO: run backpropagation to minimize the loss using the tape.gradient method'''
grads = tape.gradient(loss, model.trainable_variables) # TODO
# grads = tape.gradient('''TODO''', model.trainable_variables)
optimizer.apply_gradients(zip(grads, model.trainable_variables))
###Output
_____no_output_____
###Markdown
3.5 Run cartpole!Having had no prior knowledge of the environment, the agent will begin to learn how to balance the pole on the cart based only on the feedback received from the environment! Having defined how our agent can move, how it takes in new observations, and how it updates its state, we'll see how it gradually learns a policy of actions to optimize balancing the pole as long as possible. To do this, we'll track how the rewards evolve as a function of training -- how should the rewards change as training progresses?
###Code
### Cartpole training! ###
# Learning rate and optimizer
learning_rate = 1e-3
optimizer = tf.keras.optimizers.Adam(learning_rate)
# instantiate cartpole agent
cartpole_model = create_cartpole_model()
# to track our progress
smoothed_reward = mdl.util.LossHistory(smoothing_factor=0.9)
plotter = mdl.util.PeriodicPlotter(sec=2, xlabel='Iterations', ylabel='Rewards')
if hasattr(tqdm, '_instances'): tqdm._instances.clear() # clear if it exists
for i_episode in range(500):
plotter.plot(smoothed_reward.get())
# Restart the environment
observation = env.reset()
memory.clear()
while True:
# using our observation, choose an action and take it in the environment
action = choose_action(cartpole_model, observation)
next_observation, reward, done, info = env.step(action)
# add to memory
memory.add_to_memory(observation, action, reward)
# is the episode over? did you crash or do so well that you're done?
if done:
# determine total reward and keep a record of this
total_reward = sum(memory.rewards)
smoothed_reward.append(total_reward)
# initiate training - remember we don't know anything about how the
# agent is doing until it has crashed!
train_step(cartpole_model, optimizer,
observations=np.vstack(memory.observations),
actions=np.array(memory.actions),
discounted_rewards = discount_rewards(memory.rewards))
# reset the memory
memory.clear()
break
# update our observatons
observation = next_observation
###Output
_____no_output_____
###Markdown
To get a sense of how our agent did, we can save a video of the trained model working on balancing the pole. Realize that this is a brand new environment that the agent has not seen before!Let's display the saved video to watch how our agent did!
###Code
saved_cartpole = mdl.lab3.save_video_of_model(cartpole_model, "CartPole-v0")
mdl.lab3.play_video(saved_cartpole)
###Output
_____no_output_____
###Markdown
How does the agent perform? Could you train it for shorter amounts of time and still perform well? Do you think that training longer would help even more? Part 2: PongIn Cartpole, we dealt with an environment that was static -- in other words, it didn't change over time. What happens if our environment is dynamic and unpredictable? Well that's exactly the case in [Pong](https://en.wikipedia.org/wiki/Pong), since part of the environment is the opposing player. We don't know how our opponent will act or react to our actions, so the complexity of our problem increases. It also becomes much more interesting, since we can compete to beat our opponent. RL provides a powerful framework for training AI systems with the ability to handle and interact with dynamic, unpredictable environments. In this part of the lab, we'll use the tools and workflow we explored in Part 1 to build an RL agent capable of playing the game of Pong. 3.6 Define and inspect the Pong environmentAs with Cartpole, we'll instantiate the Pong environment in the OpenAI gym, using a seed of 1.
###Code
env = gym.make("Pong-v0", frameskip=5)
env.seed(1); # for reproducibility
###Output
_____no_output_____
###Markdown
Let's next consider the observation space for the Pong environment. Instead of four physical descriptors of the cart-pole setup, in the case of Pong our observations are the individual video frames (i.e., images) that depict the state of the board. Thus, the observations are 210x160 RGB images (arrays of shape (210,160,3)).We can again confirm the size of the observation space by query:
###Code
print("Environment has observation space =", env.observation_space)
###Output
_____no_output_____
###Markdown
In Pong, at every time step, the agent (which controls the paddle) has six actions to choose from: no-op (no operation), move right, move left, fire, fire right, and fire left. Let's confirm the size of the action space by querying the environment:
###Code
n_actions = env.action_space.n
print("Number of possible actions that the agent can choose from =", n_actions)
###Output
_____no_output_____
###Markdown
3.7 Define the Pong agentAs before, we'll use a neural network to define our agent. What network architecture do you think would be especially well suited to this game? Since our observations are now in the form of images, we'll add convolutional layers to the network to increase the learning capacity of our network.
###Code
### Define the Pong agent ###
# Functionally define layers for convenience
# All convolutional layers will have ReLu activation
Conv2D = functools.partial(tf.keras.layers.Conv2D, padding='same', activation='relu')
Flatten = tf.keras.layers.Flatten
Dense = tf.keras.layers.Dense
# Defines a CNN for the Pong agent
def create_pong_model():
model = tf.keras.models.Sequential([
# Convolutional layers
# First, 16 7x7 filters and 4x4 stride
Conv2D(filters=16, kernel_size=7, strides=4),
# TODO: define convolutional layers with 32 5x5 filters and 2x2 stride
Conv2D(filters=32, kernel_size=5, strides=2), # TODO
# Conv2D('''TODO'''),
# TODO: define convolutional layers with 48 3x3 filters and 2x2 stride
Conv2D(filters=48, kernel_size=3, strides=2), # TODO
# Conv2D('''TODO'''),
Flatten(),
# Fully connected layer and output
Dense(units=64, activation='relu'),
# TODO: define the output dimension of the last Dense layer.
# Pay attention to the space the agent needs to act in
Dense(units=n_actions, activation=None) # TODO
# Dense('''TODO''')
])
return model
pong_model = create_pong_model()
###Output
_____no_output_____
###Markdown
Since we've already defined the action function, `choose_action(model, observation)`, we don't need to define it again. Instead, we'll be able to reuse it later on by passing in our new model we've just created, `pong_model`. This is awesome because our action function provides a modular and generalizable method for all sorts of RL agents! 3.8 Pong-specific functionsIn Part 1 (Cartpole), we implemented some key functions and classes to build and train our RL agent -- `choose_action(model, observation)` and the `Memory` class, for example. However, in getting ready to apply these to a new game like Pong, we might need to make some slight modifications. Namely, we need to think about what happens when a game ends. In Pong, we know a game has ended if the reward is +1 (we won!) or -1 (we lost unfortunately). Otherwise, we expect the reward at a timestep to be zero -- the players (or agents) are just playing each other. So, we will need to reset the accumulated discounted sum to zero whenever a game ends. This will result in a modified reward function.
###Code
### Pong reward function ###
# Compute normalized, discounted rewards for Pong (i.e., return)
# Arguments:
# rewards: reward at timesteps in episode
# gamma: discounting factor. Note increase to 0.99 -- rate of depreciation will be slower.
# Returns:
# normalized discounted reward
def discount_rewards(rewards, gamma=0.99):
discounted_rewards = np.zeros_like(rewards)
R = 0
for t in reversed(range(0, len(rewards))):
# NEW: Reset the sum if the reward is not 0 (the game has ended!)
if rewards[t] != 0:
R = 0
# update the total discounted reward as before
R = R * gamma + rewards[t]
discounted_rewards[t] = R
return normalize(discounted_rewards)
###Output
_____no_output_____
###Markdown
Additionally, we have to consider the nature of the observations in the Pong environment, and how they will be fed into our network. Our observations in this case are images. Before we input an image into our network, we'll do a bit of pre-processing to crop and scale, clean up the background colors to a single color, and set the important game elements to a single color. Let's use this function to visualize what an observation might look like before and after pre-processing.
###Code
observation = env.reset()
for i in range(30):
observation, _,_,_ = env.step(0)
observation_pp = mdl.lab3.preprocess_pong(observation)
f = plt.figure(figsize=(10,3))
ax = f.add_subplot(121)
ax2 = f.add_subplot(122)
ax.imshow(observation); ax.grid(False);
ax2.imshow(np.squeeze(observation_pp)); ax2.grid(False); plt.title('Preprocessed Observation');
###Output
_____no_output_____
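The internals of `mdl.lab3.preprocess_pong` aren't shown in this lab, but a typical Pong preprocessing pipeline of the kind described above (crop, downsample, collapse the background, binarize the game elements) can be sketched as follows. Every numeric constant here -- the crop bounds, the 2x downsampling, and the background pixel values -- is an assumption for illustration, not the actual implementation:

```python
import numpy as np

def preprocess_pong_sketch(frame):
    # Illustrative sketch only: the crop bounds, downsampling factor, and
    # background values (144, 109) are assumptions, not the actual
    # internals of mdl.lab3.preprocess_pong.
    img = frame[35:195]                        # crop away scoreboard/borders
    img = img[::2, ::2, 0].astype(np.float32)  # downsample 2x, keep one channel
    img[img == 144] = 0                        # erase an assumed background shade
    img[img == 109] = 0                        # erase another assumed shade
    img[img != 0] = 1                          # paddles/ball -> a single value
    return np.expand_dims(img, axis=-1)        # add a channel dimension

# a synthetic all-background "frame" with the raw (210, 160, 3) shape
fake_frame = np.full((210, 160, 3), 144, dtype=np.uint8)
processed = preprocess_pong_sketch(fake_frame)
print(processed.shape)  # (80, 80, 1)
```

The payoff is a much smaller, nearly binary input in which only the moving game elements are nonzero -- far easier for the CNN to learn from than raw RGB frames.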
###Markdown
What do you notice? How might these changes be important for training our RL algorithm? 3.9 Training PongWe're now all set up to start training our RL algorithm and agent for the game of Pong! We've already defined our loss function with `compute_loss`, which employs policy gradient learning, as well as our backpropagation step with `train_step` which is beautiful! We will use these functions to execute training the Pong agent. Let's walk through the training block.In Pong, rather than feeding our network one image at a time, it can actually improve performance to input the difference between two consecutive observations, which really gives us information about the movement between frames -- how the game is changing. We'll first pre-process the raw observation, `x`, and then we'll compute the difference with the image frame we saw one timestep before. This observation change will be forward propagated through our Pong agent, the CNN network model, which will then predict the next action to take based on this observation. The raw reward will be computed, and the observation, action, and reward will be recorded into memory. This will continue until a training episode, i.e., a game, ends.Then, we will compute the discounted rewards, and use this information to execute a training step. Memory will be cleared, and we will do it all over again!Let's run the code block to train our Pong agent. Note that completing training will take quite a bit of time (estimated at least a couple of hours). We will again visualize the evolution of the total reward as a function of training to get a sense of how the agent is learning.
###Code
### Training Pong ###
# Hyperparameters
learning_rate=1e-4
MAX_ITERS = 10000 # increase the maximum number of episodes, since Pong is more complex!
# Model and optimizer
pong_model = create_pong_model()
optimizer = tf.keras.optimizers.Adam(learning_rate)
# plotting
smoothed_reward = mdl.util.LossHistory(smoothing_factor=0.9)
plotter = mdl.util.PeriodicPlotter(sec=5, xlabel='Iterations', ylabel='Rewards')
memory = Memory()
for i_episode in range(MAX_ITERS):
plotter.plot(smoothed_reward.get())
# Restart the environment
observation = env.reset()
previous_frame = mdl.lab3.preprocess_pong(observation)
while True:
# Pre-process image
current_frame = mdl.lab3.preprocess_pong(observation)
'''TODO: determine the observation change
Hint: this is the difference between the past two frames'''
obs_change = current_frame - previous_frame # TODO
# obs_change = # TODO
'''TODO: choose an action for the pong model, using the frame difference, and evaluate'''
action = choose_action(pong_model, obs_change) # TODO
# action = # TODO
# Take the chosen action
next_observation, reward, done, info = env.step(action)
'''TODO: save the observed frame difference, the action that was taken, and the resulting reward!'''
memory.add_to_memory(obs_change, action, reward) # TODO
# is the episode over? did you crash or do so well that you're done?
if done:
# determine total reward and keep a record of this
total_reward = sum(memory.rewards)
smoothed_reward.append( total_reward )
# begin training
train_step(pong_model,
optimizer,
observations = np.stack(memory.observations, 0),
actions = np.array(memory.actions),
discounted_rewards = discount_rewards(memory.rewards))
memory.clear()
break
observation = next_observation
previous_frame = current_frame
###Output
_____no_output_____
###Markdown
Finally we can put our trained agent to the test! It will play in a newly instantiated Pong environment against the "computer", a base AI system for Pong. Your agent plays as the green paddle. Let's watch the match instant replay!
###Code
saved_pong = mdl.lab3.save_video_of_model(
pong_model, "Pong-v0", obs_diff=True,
pp_fn=mdl.lab3.preprocess_pong)
mdl.lab3.play_video(saved_pong)
###Output
_____no_output_____
###Markdown
Copyright Information
###Code
# Copyright 2021 MIT 6.S191 Introduction to Deep Learning. All Rights Reserved.
#
# Licensed under the MIT License. You may not use this file except in compliance
# with the License. Use and/or modification of this code outside of 6.S191 must
# reference:
#
# © MIT 6.S191: Introduction to Deep Learning
# http://introtodeeplearning.com
#
###Output
_____no_output_____
###Markdown
Laboratory 3: Reinforcement LearningReinforcement learning (RL) is a subset of machine learning which poses learning problems as interactions between agents and environments. It often assumes agents have no prior knowledge of a world, so they must learn to navigate environments by optimizing a reward function. Within an environment, an agent can take certain actions and receive feedback, in the form of positive or negative rewards, with respect to their decision. As such, an agent's feedback loop is somewhat akin to the idea of "trial and error", or the manner in which a child might learn to distinguish between "good" and "bad" actions.In practical terms, our RL agent will interact with the environment by taking an action at each timestep, receiving a corresponding reward, and updating its state according to what it has "learned". While the ultimate goal of reinforcement learning is to teach agents to act in the real, physical world, games provide a convenient proving ground for developing RL algorithms and agents. Games have some properties that make them particularly well suited for RL: 1. In many cases, games have perfectly describable environments. For example, all rules of chess can be formally written and programmed into a chess game simulator;2. Games are massively parallelizable. Since they do not require running in the real world, simultaneous environments can be run on large data clusters; 3. Simpler scenarios in games enable fast prototyping. This speeds up the development of algorithms that could eventually run in the real-world; and4. ... Games are fun! In previous labs, we have explored both supervised (with LSTMs, CNNs) and unsupervised / semi-supervised (with VAEs) learning tasks. Reinforcement learning is fundamentally different, in that we are training a deep learning algorithm to govern the actions of our RL agent, that is trying, within its environment, to find the optimal way to achieve a goal. 
The goal of training an RL agent is to determine the best next step to take to earn the greatest final payoff or return. In this lab, we focus on building a reinforcement learning algorithm to master two different environments with varying complexity. 1. **Cartpole**: Balance a pole, protruding from a cart, in an upright position by only moving the base left or right. Environment with a low-dimensional observation space.2. [**Pong**](https://en.wikipedia.org/wiki/Pong): Beat your competitors (whether other AI or humans!) at the game of Pong. Environment with a high-dimensional observation space -- learning directly from raw pixels.Let's get started! First we'll import TensorFlow, the course package, and some dependencies.
###Code
#Install some dependencies for visualizing the agents
!apt-get install -y xvfb python-opengl x11-utils > /dev/null 2>&1
!pip install gym pyvirtualdisplay scikit-video > /dev/null 2>&1
# Import Tensorflow 2.0
%tensorflow_version 2.x
import tensorflow as tf
import numpy as np
import base64, io, time, gym
import IPython, functools
import matplotlib.pyplot as plt
import time
from tqdm import tqdm
# Download and import the MIT 6.S191 package
!pip install mitdeeplearning
import mitdeeplearning as mdl
###Output
_____no_output_____
###Markdown
Before we dive in, let's take a step back and outline our approach, which is generally applicable to reinforcement learning problems:1. **Initialize our environment and our agent**: here we will describe the different observations and actions the agent can make in the environment.2. **Define our agent's memory**: this will enable the agent to remember its past actions, observations, and rewards.3. **Define a reward function**: describes the reward associated with an action or sequence of actions.4. **Define the learning algorithm**: this will be used to reinforce the agent's good behaviors and discourage bad behaviors. Part 1: Cartpole 3.1 Define the Cartpole environment and agent Environment In order to model the environment for both the Cartpole and Pong tasks, we'll be using a toolkit developed by OpenAI called [OpenAI Gym](https://gym.openai.com/). It provides several pre-defined environments for training and testing reinforcement learning agents, including those for classic physics control tasks, Atari video games, and robotic simulations. To access the Cartpole environment, we can use `env = gym.make("CartPole-v0")`, which we gained access to when we imported the `gym` package. We can instantiate different [environments](https://gym.openai.com/envs/classic_control) by passing the environment name to the `make` function. One issue we might experience when developing RL algorithms is that many aspects of the learning process are inherently random: initializing game states, changes in the environment, and the agent's actions. As such, it can be helpful to set an initial "seed" for the environment to ensure some level of reproducibility. Much like you might use `numpy.random.seed`, we can call the comparable function in gym, `seed`, with our defined environment to ensure the environment's random variables are initialized the same each time.
###Code
### Instantiate the Cartpole environment ###
env = gym.make("CartPole-v0")
env.seed(1)
###Output
_____no_output_____
###Markdown
In Cartpole, a pole is attached by an un-actuated joint to a cart, which moves along a frictionless track. The pole starts upright, and the goal is to prevent it from falling over. The system is controlled by applying a force of +1 or -1 to the cart. A reward of +1 is provided for every timestep that the pole remains upright. The episode ends when the pole is more than 15 degrees from vertical, or the cart moves more than 2.4 units from the center of the track. A visual summary of the cartpole environment is depicted below. Given this setup for the environment and the objective of the game, we can think about: 1) what observations help define the environment's state; 2) what actions the agent can take. First, let's consider the observation space. In this Cartpole environment our observations are:1. Cart position2. Cart velocity3. Pole angle4. Pole rotation rateWe can confirm the size of the space by querying the environment's observation space:
###Code
n_observations = env.observation_space
print("Environment has observation space =", n_observations)
###Output
_____no_output_____
###Markdown
Second, we consider the action space. At every time step, the agent can move either right or left. Again we can confirm the size of the action space by querying the environment:
###Code
n_actions = env.action_space.n
print("Number of possible actions that the agent can choose from =", n_actions)
###Output
_____no_output_____
###Markdown
Cartpole agentNow that we have instantiated the environment and understood the dimensionality of the observation and action spaces, we are ready to define our agent. In deep reinforcement learning, a deep neural network defines the agent. This network will take as input an observation of the environment and output the probability of taking each of the possible actions. Since Cartpole is defined by a low-dimensional observation space, a simple feed-forward neural network should work well for our agent. We will define this using the `Sequential` API.
###Code
### Define the Cartpole agent ###
# Defines a feed-forward neural network
def create_cartpole_model():
model = tf.keras.models.Sequential([
# First Dense layer
tf.keras.layers.Dense(units=32, activation='relu'),
# TODO: Define the last Dense layer, which will provide the network's output.
# Think about the space the agent needs to act in!
tf.keras.layers.Dense(units=n_actions, activation=None) # TODO
# [TODO Dense layer to output action probabilities]
])
return model
cartpole_model = create_cartpole_model()
###Output
_____no_output_____
###Markdown
Now that we have defined the core network architecture, we will define an *action function* that executes a forward pass through the network, given a set of observations, and samples from the output. This sampling from the output probabilities will be used to select the next action for the agent. We will also add support so that the `choose_action` function can handle either a single observation or a batch of observations.**Critically, this action function is totally general -- we will use this function for both Cartpole and Pong, and it is applicable to other RL tasks, as well!**
###Code
### Define the agent's action function ###
# Function that takes observations as input, executes a forward pass through model,
# and outputs a sampled action.
# Arguments:
# model: the network that defines our agent
# observation: observation(s) which is/are fed as input to the model
# single: flag as to whether we are handling a single observation or batch of
# observations, provided as an np.array
# Returns:
# action: choice of agent action
def choose_action(model, observation, single=True):
# add batch dimension to the observation if only a single example was provided
observation = np.expand_dims(observation, axis=0) if single else observation
'''TODO: feed the observations through the model to predict the log probabilities of each possible action.'''
logits = model.predict(observation) # TODO
# logits = model.predict('''TODO''')
'''TODO: Choose an action from the categorical distribution defined by the log
probabilities of each possible action.'''
action = tf.random.categorical(logits, num_samples=1)
# action = ['''TODO''']
action = action.numpy().flatten()
return action[0] if single else action
###Output
_____no_output_____
###Markdown
3.2 Define the agent's memoryNow that we have instantiated the environment and defined the agent network architecture and action function, we are ready to move on to the next step in our RL workflow:1. **Initialize our environment and our agent**: here we will describe the different observations and actions the agent can make in the environment.2. **Define our agent's memory**: this will enable the agent to remember its past actions, observations, and rewards.3. **Define the learning algorithm**: this will be used to reinforce the agent's good behaviors and discourage bad behaviors.In reinforcement learning, training occurs alongside the agent's acting in the environment; an *episode* refers to a sequence of actions that ends in some terminal state, such as the pole falling down or the cart crashing. The agent will need to remember all of its observations and actions, such that once an episode ends, it can learn to "reinforce" the good actions and punish the undesirable actions via training. Our first step is to define a simple `Memory` buffer that contains the agent's observations, actions, and received rewards from a given episode. We will also add support to combine a list of `Memory` objects into a single `Memory`. This will be very useful for batching, which will help you accelerate training later on in the lab.**Once again, note the modularity of this memory buffer -- it can and will be applied to other RL tasks as well!**
###Code
### Agent Memory ###
class Memory:
def __init__(self):
self.clear()
# Resets/restarts the memory buffer
def clear(self):
self.observations = []
self.actions = []
self.rewards = []
# Add observations, actions, rewards to memory
def add_to_memory(self, new_observation, new_action, new_reward):
self.observations.append(new_observation)
'''TODO: update the list of actions with new action'''
self.actions.append(new_action) # TODO
# ['''TODO''']
'''TODO: update the list of rewards with new reward'''
self.rewards.append(new_reward) # TODO
# ['''TODO''']
# Helper function to combine a list of Memory objects into a single Memory.
# This will be very useful for batching.
def aggregate_memories(memories):
batch_memory = Memory()
for memory in memories:
for step in zip(memory.observations, memory.actions, memory.rewards):
batch_memory.add_to_memory(*step)
return batch_memory
# Instantiate a single Memory buffer
memory = Memory()
###Output
_____no_output_____
###Markdown
3.3 Reward function

We're almost ready to begin the learning algorithm for our agent! The next step is to compute the rewards of our agent as it acts in the environment. Since we (and the agent) are uncertain about whether and when the game or task will end (i.e., when the pole will fall), it is useful to emphasize getting rewards **now** rather than later in the future -- this is the idea of discounting. The idea of discounting rewards is similar to discounting money in the case of interest.

To compute the expected cumulative reward, known as the **return**, at a given timestep in a learning episode, we sum the discounted rewards expected from that time step $t$ onward within the episode. We define the return (cumulative reward) at a time step $t$, $R_{t}$, as:

>$R_{t}=\sum_{k=0}^\infty\gamma^kr_{t+k}$

where $0 < \gamma < 1$ is the discount factor, $r_{t}$ is the reward at time step $t$, and the index $k$ projects into the future within a single learning episode. Intuitively, you can think of this function as depreciating any rewards received at later time steps, which will force the agent to prioritize getting rewards now. Since we can't extend episodes to infinity, in practice the computation will be limited to the number of timesteps in an episode -- after that the reward is assumed to be zero.

Take note of the form of this sum -- we'll have to be clever about how we implement this function. Specifically, we'll need to initialize an array of zeros, with length equal to the number of time steps, and fill it with the real discounted reward values as we loop through the rewards from the episode, which will have been saved in the agent's memory.
What we ultimately care about is which actions are better relative to other actions taken in that episode -- so, we'll normalize our computed rewards, using the mean and standard deviation of the rewards across the learning episode.
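As a concrete check of the discounting and normalization logic, here is a small standalone sketch (pure NumPy, mirroring the recurrence that the `discount_rewards` function in the next cell implements) applied to a toy three-step episode:

```python
import numpy as np

def discount_and_normalize(rewards, gamma=0.95):
    # Walk backwards through the episode, accumulating R = r_t + gamma * R
    discounted = np.zeros(len(rewards), dtype=np.float32)
    R = 0.0
    for t in reversed(range(len(rewards))):
        R = R * gamma + rewards[t]
        discounted[t] = R
    # Normalize so learning depends on relative, not absolute, reward
    return (discounted - discounted.mean()) / discounted.std()

# Three timesteps of +1 reward (as in Cartpole). The raw returns, computed
# backwards, are [1 + 0.95 * 1.95, 1 + 0.95 * 1, 1] = [2.8525, 1.95, 1.0]:
# earlier timesteps accumulate larger returns before normalization.
out = discount_and_normalize([1.0, 1.0, 1.0])
print(out)
```

Note how the earliest timestep ends up with the largest (normalized) return, which is exactly the "rewards now over rewards later" preference described above.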
###Code
### Reward function ###
# Helper function that normalizes an np.array x
def normalize(x):
x -= np.mean(x)
x /= np.std(x)
return x.astype(np.float32)
# Compute normalized, discounted, cumulative rewards (i.e., return)
# Arguments:
# rewards: reward at timesteps in episode
# gamma: discounting factor
# Returns:
# normalized discounted reward
def discount_rewards(rewards, gamma=0.95):
discounted_rewards = np.zeros_like(rewards)
R = 0
for t in reversed(range(0, len(rewards))):
# update the total discounted reward
R = R * gamma + rewards[t]
discounted_rewards[t] = R
return normalize(discounted_rewards)
###Output
_____no_output_____
###Markdown
3.4 Learning algorithm

Now we can start to define the learning algorithm, which will be used to reinforce good behaviors of the agent and discourage bad behaviors. In this lab, we will focus on *policy gradient* methods, which aim to **maximize** the likelihood of actions that result in large rewards. Equivalently, this means that we want to **minimize** the negative likelihood of these same actions. We achieve this by simply **scaling** the probabilities by their associated rewards -- effectively amplifying the likelihood of actions that result in large rewards.

Since the log function is monotonically increasing, minimizing the **negative likelihood** is equivalent to minimizing the **negative log-likelihood**. Recall that we can easily compute the negative log-likelihood of a discrete action by evaluating its [softmax cross entropy](https://www.tensorflow.org/api_docs/python/tf/nn/sparse_softmax_cross_entropy_with_logits). As in supervised learning, we can use stochastic gradient descent methods to achieve the desired minimization.

Let's begin by defining the loss function.
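To see what this loss actually computes, here is a NumPy re-implementation of the same idea on toy numbers -- the softmax cross entropy is written out by hand (this is a sketch of the math, not the TensorFlow implementation used in the cell below):

```python
import numpy as np

def policy_gradient_loss(logits, actions, rewards):
    # Softmax over the action logits (subtract the max for numerical stability)
    z = logits - logits.max(axis=1, keepdims=True)
    probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    # Negative log-likelihood of the actions actually taken
    neg_logprob = -np.log(probs[np.arange(len(actions)), actions])
    # Scale by rewards: actions followed by large rewards get reinforced more
    return np.mean(neg_logprob * rewards)

logits = np.array([[2.0, 0.0],   # network strongly favored action 0...
                   [0.0, 2.0]])  # ...then action 1
rewards = np.array([1.0, 1.0])
# Taking the likely actions gives a small loss; the unlikely ones, a large loss
good = policy_gradient_loss(logits, np.array([0, 1]), rewards)
bad = policy_gradient_loss(logits, np.array([1, 0]), rewards)
print(good, bad)
```

Minimizing this quantity therefore pushes probability mass toward actions that earned high rewards, which is the policy-gradient objective described above.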
###Code
### Loss function ###
# Arguments:
# logits: network's predictions for actions to take
# actions: the actions the agent took in an episode
# rewards: the rewards the agent received in an episode
# Returns:
# loss
def compute_loss(logits, actions, rewards):
'''TODO: complete the function call to compute the negative log probabilities'''
neg_logprob = tf.nn.sparse_softmax_cross_entropy_with_logits(
logits=logits, labels=actions) # TODO
# neg_logprob = tf.nn.sparse_softmax_cross_entropy_with_logits(
# logits='''TODO''', labels='''TODO''')
'''TODO: scale the negative log probability by the rewards'''
loss = tf.reduce_mean( neg_logprob * rewards ) # TODO
# loss = tf.reduce_mean('''TODO''')
return loss
###Output
_____no_output_____
###Markdown
Now let's use the loss function to define a training step of our learning algorithm:
###Code
### Training step (forward and backpropagation) ###
def train_step(model, optimizer, observations, actions, discounted_rewards):
with tf.GradientTape() as tape:
# Forward propagate through the agent network
logits = model(observations)
'''TODO: call the compute_loss function to compute the loss'''
loss = compute_loss(logits, actions, discounted_rewards) # TODO
# loss = compute_loss('''TODO''', '''TODO''', '''TODO''')
'''TODO: run backpropagation to minimize the loss using the tape.gradient method'''
grads = tape.gradient(loss, model.trainable_variables) # TODO
# grads = tape.gradient('''TODO''', model.trainable_variables)
optimizer.apply_gradients(zip(grads, model.trainable_variables))
###Output
_____no_output_____
###Markdown
3.5 Run cartpole!

Having had no prior knowledge of the environment, the agent will begin to learn how to balance the pole on the cart based only on the feedback received from the environment! Having defined how our agent can move, how it takes in new observations, and how it updates its state, we'll see how it gradually learns a policy of actions to optimize balancing the pole for as long as possible. To do this, we'll track how the rewards evolve as a function of training -- how should the rewards change as training progresses?
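The training cell below tracks progress with `mdl.util.LossHistory(smoothing_factor=0.9)`, an exponentially-smoothed reward history. A minimal sketch of such a smoother is shown here -- the class name and the exact update rule of the course utility are assumptions for illustration:

```python
class SmoothedMetric:
    """Exponentially-smoothed running value: new = f * old + (1 - f) * x."""
    def __init__(self, smoothing_factor=0.9):
        self.f = smoothing_factor
        self.value = None
    def append(self, x):
        # First sample initializes the smoother; later samples blend in slowly
        self.value = x if self.value is None else self.f * self.value + (1 - self.f) * x
    def get(self):
        return self.value

sm = SmoothedMetric(smoothing_factor=0.9)
for reward in [10, 20, 30]:
    sm.append(reward)
print(sm.get())  # 10 -> 0.9*10 + 0.1*20 = 11 -> 0.9*11 + 0.1*30 = 12.9
```

With a smoothing factor of 0.9, a single noisy episode barely moves the curve, which is why the plotted reward trends are readable despite high per-episode variance.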
###Code
### Cartpole training! ###
# Learning rate and optimizer
learning_rate = 1e-3
optimizer = tf.keras.optimizers.Adam(learning_rate)
# instantiate cartpole agent
cartpole_model = create_cartpole_model()
# to track our progress
smoothed_reward = mdl.util.LossHistory(smoothing_factor=0.9)
plotter = mdl.util.PeriodicPlotter(sec=2, xlabel='Iterations', ylabel='Rewards')
if hasattr(tqdm, '_instances'): tqdm._instances.clear() # clear if it exists
for i_episode in range(500):
plotter.plot(smoothed_reward.get())
# Restart the environment
observation = env.reset()
memory.clear()
while True:
# using our observation, choose an action and take it in the environment
action = choose_action(cartpole_model, observation)
next_observation, reward, done, info = env.step(action)
# add to memory
memory.add_to_memory(observation, action, reward)
# is the episode over? did you crash or do so well that you're done?
if done:
# determine total reward and keep a record of this
total_reward = sum(memory.rewards)
smoothed_reward.append(total_reward)
# initiate training - remember we don't know anything about how the
# agent is doing until it has crashed!
train_step(cartpole_model, optimizer,
observations=np.vstack(memory.observations),
actions=np.array(memory.actions),
discounted_rewards = discount_rewards(memory.rewards))
# reset the memory
memory.clear()
break
# update our observatons
observation = next_observation
###Output
_____no_output_____
###Markdown
To get a sense of how our agent did, we can save a video of the trained model working on balancing the pole. Realize that this is a brand new environment that the agent has not seen before!

Let's display the saved video to watch how our agent did!
###Code
saved_cartpole = mdl.lab3.save_video_of_model(cartpole_model, "CartPole-v0")
mdl.lab3.play_video(saved_cartpole)
###Output
_____no_output_____
###Markdown
How does the agent perform? Could you train it for shorter amounts of time and still perform well? Do you think that training longer would help even more?

Part 2: Pong

In Cartpole, we dealt with an environment that was static -- in other words, it didn't change over time. What happens if our environment is dynamic and unpredictable? Well, that's exactly the case in [Pong](https://en.wikipedia.org/wiki/Pong), since part of the environment is the opposing player. We don't know how our opponent will act or react to our actions, so the complexity of our problem increases. It also becomes much more interesting, since we can compete to beat our opponent. RL provides a powerful framework for training AI systems with the ability to handle and interact with dynamic, unpredictable environments. In this part of the lab, we'll use the tools and workflow we explored in Part 1 to build an RL agent capable of playing the game of Pong.

3.6 Define and inspect the Pong environment

As with Cartpole, we'll instantiate the Pong environment in the OpenAI gym, using a seed of 1.
###Code
def create_pong_env():
return gym.make("Pong-v0", frameskip=5)
env = create_pong_env()
env.seed(1); # for reproducibility
###Output
_____no_output_____
###Markdown
Let's next consider the observation space for the Pong environment. Instead of four physical descriptors of the cart-pole setup, in the case of Pong our observations are the individual video frames (i.e., images) that depict the state of the board. Thus, the observations are 210x160 RGB images (arrays of shape (210,160,3)).

We can again confirm the size of the observation space by querying the environment:
###Code
print("Environment has observation space =", env.observation_space)
###Output
_____no_output_____
###Markdown
In Pong, at every time step, the agent (which controls the paddle) has six actions to choose from: no-op (no operation), move right, move left, fire, fire right, and fire left. Let's confirm the size of the action space by querying the environment:
###Code
n_actions = env.action_space.n
print("Number of possible actions that the agent can choose from =", n_actions)
###Output
_____no_output_____
###Markdown
3.7 Define the Pong agent

As before, we'll use a neural network to define our agent. What network architecture do you think would be especially well suited to this game? Since our observations are now in the form of images, we'll add convolutional layers to the network to increase its learning capacity. Note that you will be tasked with completing a template CNN architecture for the Pong agent -- but you should certainly experiment beyond this template to try to optimize performance!
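A quick shape sanity check for the template below: with `padding='same'` and stride 2, each convolution halves the spatial dimensions (ceiling division). Taking the raw 210x160 frame size as an illustrative input (the model will actually receive pre-processed frames, which may be smaller, so these exact numbers are an assumption):

```python
import math

def same_pad_stride2(dim):
    # With padding='same' and stride s, the output size is ceil(dim / s)
    return math.ceil(dim / 2)

h, w = 210, 160  # raw Pong frame size (illustrative; real input is pre-processed)
for layer in range(4):  # the template stacks four stride-2 convolutions
    h, w = same_pad_stride2(h), same_pad_stride2(w)
    print(f"after conv {layer + 1}: {h} x {w}")
# Flattened feature count feeding the Dense layer: h * w * 64 channels
print(h * w * 64)
```

Working through this arithmetic before building a CNN is a cheap way to catch accidental shape explosions in the flattened features before training.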
###Code
### Define the Pong agent ###
# Functionally define layers for convenience
# All convolutional layers will have ReLu activation
Conv2D = functools.partial(tf.keras.layers.Conv2D, padding='same', activation='relu')
Flatten = tf.keras.layers.Flatten
Dense = tf.keras.layers.Dense
# Defines a CNN for the Pong agent
def create_pong_model():
model = tf.keras.models.Sequential([
# Convolutional layers
# First, 32 5x5 filters and 2x2 stride
Conv2D(filters=32, kernel_size=5, strides=2),
# TODO: define convolutional layers with 48 5x5 filters and 2x2 stride
Conv2D(filters=48, kernel_size=5, strides=2), # TODO
# Conv2D('''TODO'''),
# TODO: define two convolutional layers with 64 3x3 filters and 2x2 stride
Conv2D(filters=64, kernel_size=3, strides=2), # TODO
Conv2D(filters=64, kernel_size=3, strides=2),
# Conv2D('''TODO'''),
Flatten(),
# Fully connected layer and output
Dense(units=128, activation='relu'),
# TODO: define the output dimension of the last Dense layer.
# Pay attention to the space the agent needs to act in
Dense(units=n_actions, activation=None) # TODO
# Dense('''TODO''')
])
return model
pong_model = create_pong_model()
###Output
_____no_output_____
###Markdown
Since we've already defined the action function, `choose_action(model, observation)`, we don't need to define it again. Instead, we'll be able to reuse it later on by passing in the new model we've just created, `pong_model`. This is awesome because our action function provides a modular and generalizable method for all sorts of RL agents!

3.8 Pong-specific functions

In Part 1 (Cartpole), we implemented some key functions and classes to build and train our RL agent -- `choose_action(model, observation)` and the `Memory` class, for example. However, in getting ready to apply these to a new game like Pong, we need to make some slight modifications. Namely, we need to think about what happens when a game ends. In Pong, we know a game has ended if the reward is +1 (we won!) or -1 (we lost, unfortunately). Otherwise, we expect the reward at a timestep to be zero -- the players (or agents) are just playing each other. So, we will need to reset the cumulative reward sum to zero whenever a game ends. This results in a modified reward function.
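The effect of this reset is easiest to see on a toy episode containing two Pong games. The sketch below applies the same recurrence as the modified `discount_rewards` in the next cell, with normalization omitted so the numbers stay readable:

```python
import numpy as np

def discount_with_reset(rewards, gamma=0.99):
    discounted = np.zeros(len(rewards), dtype=np.float32)
    R = 0.0
    for t in reversed(range(len(rewards))):
        if rewards[t] != 0:
            R = 0.0  # a game ended at this timestep; don't leak reward across games
        R = R * gamma + rewards[t]
        discounted[t] = R
    return discounted  # normalization omitted for readability

# Two Pong games in one episode: we won the first (+1), lost the second (-1)
rewards = [0.0, 0.0, 1.0, 0.0, 0.0, -1.0]
print(discount_with_reset(rewards))
```

The first three timesteps get positive discounted returns from the win, the last three get negative returns from the loss -- and, crucially, the winning reward never bleeds into the timesteps of the losing game.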
###Code
### Pong reward function ###
# Compute normalized, discounted rewards for Pong (i.e., return)
# Arguments:
# rewards: reward at timesteps in episode
# gamma: discounting factor. Note increase to 0.99 -- rate of depreciation will be slower.
# Returns:
# normalized discounted reward
def discount_rewards(rewards, gamma=0.99):
discounted_rewards = np.zeros_like(rewards)
R = 0
for t in reversed(range(0, len(rewards))):
# NEW: Reset the sum if the reward is not 0 (the game has ended!)
if rewards[t] != 0:
R = 0
# update the total discounted reward as before
R = R * gamma + rewards[t]
discounted_rewards[t] = R
return normalize(discounted_rewards)
###Output
_____no_output_____
###Markdown
Additionally, we have to consider the nature of the observations in the Pong environment, and how they will be fed into our network. Our observations in this case are images. Before we input an image into our network, we'll do a bit of pre-processing: cropping and scaling, cleaning up the background colors to a single color, and setting the important game elements to a single color. Let's use the lab's pre-processing function to visualize what a single observation might look like before and after pre-processing.
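The internals of `mdl.lab3.preprocess_pong` aren't shown here, but a typical Pong pre-processing pipeline follows the pattern sketched below: crop away the scoreboard, downsample, erase the background, and binarize the game elements. The crop bounds and background color values in this sketch are assumptions for illustration, not the lab's actual implementation:

```python
import numpy as np

def preprocess_pong_sketch(frame):
    """Illustrative pre-processing; crop rows and color values are assumed."""
    img = frame[35:195]                          # crop out scoreboard and border
    img = img[::2, ::2, 0].astype(np.float32)    # downsample by 2, one channel
    img[(img == 144) | (img == 109)] = 0         # erase background (assumed colors)
    img[img != 0] = 1                            # paddles and ball become one color
    return img[..., np.newaxis]                  # restore a channel dimension

# A synthetic all-background frame with a fake "ball" painted in
fake_frame = np.full((210, 160, 3), 144, dtype=np.uint8)
fake_frame[100:110, 70:74] = 236
out = preprocess_pong_sketch(fake_frame)
print(out.shape)  # (80, 80, 1)
```

The result is a small binary image in which only the game elements survive -- far easier for a CNN to learn from than raw 210x160 RGB frames.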
###Code
observation = env.reset()
for i in range(30):
action = np.random.choice(n_actions)
observation, _,_,_ = env.step(action)
observation_pp = mdl.lab3.preprocess_pong(observation)
f = plt.figure(figsize=(10,3))
ax = f.add_subplot(121)
ax2 = f.add_subplot(122)
ax.imshow(observation); ax.grid(False);
ax2.imshow(np.squeeze(observation_pp)); ax2.grid(False); plt.title('Preprocessed Observation');
###Output
_____no_output_____
###Markdown
Let's also consider the fact that, unlike CartPole, the Pong environment has an additional element of uncertainty -- regardless of what action the agent takes, we don't know how the opponent will play. That is, the environment is changing over time, based on *both* the actions we take and the actions of the opponent, which result in motion of the ball and motion of the paddles.
Therefore, to capture the dynamics, we also consider how the environment changes by looking at the difference between a previous observation (image frame) and the current observation (image frame). We've implemented a helper function, `pong_change`, that pre-processes two frames, calculates the change between the two, and then re-normalizes the values. Let's inspect this to visualize how the environment can change:
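While `pong_change` also pre-processes and re-normalizes, its core idea is just a subtraction of consecutive frames. On toy (already pre-processed) frames, the static background cancels and only the motion survives:

```python
import numpy as np

# Two consecutive "pre-processed" frames: the ball moves, everything else is static
prev = np.zeros((80, 80, 1), dtype=np.float32)
curr = np.zeros((80, 80, 1), dtype=np.float32)
prev[40, 30] = 1.0   # ball position in the previous frame
curr[40, 32] = 1.0   # ball has moved two pixels to the right
diff = curr - prev
print(np.nonzero(diff[..., 0]))  # only the old and new ball positions remain
```

Everything that did not move subtracts to zero, so the difference image encodes exactly the motion information the agent needs.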
###Code
next_observation, _,_,_ = env.step(np.random.choice(n_actions))
diff = mdl.lab3.pong_change(observation, next_observation)
f, ax = plt.subplots(1, 3, figsize=(15,15))
for a in ax:
a.grid(False)
a.axis("off")
ax[0].imshow(observation); ax[0].set_title('Previous Frame');
ax[1].imshow(next_observation); ax[1].set_title('Current Frame');
ax[2].imshow(np.squeeze(diff)); ax[2].set_title('Difference (Model Input)');
###Output
_____no_output_____
###Markdown
What do you notice? How and why might these pre-processing changes be important for training our RL algorithm? How and why might consideration of the difference between frames be important for training and performance?

Rollout function
We're now set up to define our key action algorithm for the game of Pong, which will ultimately be used to train our Pong agent. This function can be thought of as a "rollout", where the agent will 1) make an observation of the environment, 2) select an action based on its state in the environment, 3) execute that action in the environment, resulting in some reward and a change to the environment, and 4) finally add a memory of that action-reward pair to its `Memory` buffer. We will define this algorithm in the `collect_rollout` function below, and use it soon within a training block.
Earlier you visually inspected the raw environment frames, the pre-processed frames, and the difference between previous and current frames. As you may have gathered, in a dynamic game like Pong, it can actually be helpful to consider the difference between two consecutive observations. This gives us information about the movement between frames -- how the game is changing. We will do this using the `pong_change` function we explored above (which also pre-processes frames for us).
We will use differences between frames as the input on which actions will be selected. These observation changes will be forward propagated through our Pong agent, the CNN network model, which will then predict the next action to take based on this observation. The raw reward will be computed. The observation, action, and reward will be recorded into memory. This will loop until a particular game ends -- the rollout is completed.
For now, we will define `collect_rollout` such that a batch of observations (i.e., from a batch of agent-environment worlds) can be processed serially (i.e., one at a time, in sequence). We will later utilize a parallelized version of this function that will parallelize batch processing to help speed up training! Let's get to it.
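The serial rollout loop described above can be sketched with a toy gym-style environment -- the `ToyEnv` class and the random policy here are illustrative stand-ins, not part of the lab code:

```python
import random

class ToyEnv:
    """Stand-in environment exposing the gym-style reset/step interface."""
    def __init__(self, length=5):
        self.length = length
    def reset(self):
        self.t = 0
        return 0.0                        # initial observation
    def step(self, action):
        self.t += 1
        done = self.t >= self.length
        reward = 1.0 if done else 0.0     # reward only at the end of the "game"
        return float(self.t), reward, done, {}

def collect_rollout_sketch(batch_size, env, policy):
    """Serial rollouts: one full episode per batch element, stored as (obs, act, rew)."""
    memories = []
    for _ in range(batch_size):
        memory = []
        obs, done = env.reset(), False
        while not done:
            action = policy(obs)
            next_obs, reward, done, _ = env.step(action)
            memory.append((obs, action, reward))
            obs = next_obs
        memories.append(memory)
    return memories

random.seed(1)
memories = collect_rollout_sketch(3, ToyEnv(), policy=lambda obs: random.choice([0, 1]))
print([len(m) for m in memories])  # each episode ran to completion: [5, 5, 5]
```

Because each batch element runs its episode to completion before the next one starts, the total time is the sum of the episode lengths -- which is exactly why the parallelized version used later is worth the extra machinery.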
###Code
### Rollout function ###
# Key steps for agent's operation in the environment, until completion of a rollout.
# An observation is drawn; the agent (controlled by model) selects an action;
# the agent executes that action in the environment and collects rewards;
# information is added to memory.
# This is repeated until the completion of the rollout -- the Pong game ends.
# Processes multiple batches serially.
#
# Arguments:
# batch_size: number of batches, to be processed serially
# env: environment
# model: Pong agent model
# choose_action: choose_action function
# Returns:
# memories: array of Memory buffers, of length batch_size, corresponding to the
# episode executions from the rollout
def collect_rollout(batch_size, env, model, choose_action):
# Holder array for the Memory buffers
memories = []
# Process batches serially by iterating through them
for b in range(batch_size):
# Instantiate Memory buffer, restart the environment
memory = Memory()
next_observation = env.reset()
previous_frame = next_observation
done = False # tracks whether the episode (game) is done or not
while not done:
current_frame = next_observation
'''TODO: determine the observation change.
Hint: this is the difference between the past two frames'''
frame_diff = mdl.lab3.pong_change(previous_frame, current_frame) # TODO
# frame_diff = # TODO
'''TODO: choose an action for the pong model, using the frame difference, and evaluate'''
action = choose_action(model, frame_diff) # TODO
# action = # TODO
# Take the chosen action
next_observation, reward, done, info = env.step(action)
'''TODO: save the observed frame difference, the action that was taken, and the resulting reward!'''
memory.add_to_memory(frame_diff, action, reward) # TODO
previous_frame = current_frame
# Add the memory from this batch to the array of all Memory buffers
memories.append(memory)
return memories
###Output
_____no_output_____
###Markdown
To get a sense of what is encapsulated by `collect_rollout`, we will instantiate an *untrained* Pong model, run a single rollout using this model, save the memory, and play back the observations the model sees. Note that these will be frame *differences*.
###Code
### Rollout with untrained Pong model ###
# Model
test_model = create_pong_model()
# Rollout with single batch
single_batch_size = 1
memories = collect_rollout(single_batch_size, env, test_model, choose_action)
rollout_video = mdl.lab3.save_video_of_memory(memories[0], "Pong-Random-Agent.mp4")
# Play back video of memories
mdl.lab3.play_video(rollout_video)
###Output
_____no_output_____
###Markdown
3.9 Training Pong

We're now all set up to start training our RL algorithm and agent for the game of Pong! We've already defined the following:

1. Loss function, `compute_loss`, and backpropagation step, `train_step`. Our loss function employs policy gradient learning. `train_step` executes a single forward pass and backpropagation gradient update.
2. RL agent algorithm: `collect_rollout`. Serially processes batches of episodes, executing actions in the environment, collecting rewards, and saving these to `Memory`.

We will use these functions to train the Pong agent.

In the training block, episodes will be executed by agents in the environment via the RL algorithm defined in the `collect_rollout` function. Since RL agents start off with literally zero knowledge of their environment, it can often take a long time to train them and achieve stable behavior. To alleviate this, we have implemented a parallelized version of the RL algorithm, `parallelized_collect_rollout`, which you can use to accelerate training across multiple parallel batches.

For training, information in the `Memory` buffer from all these batches will be aggregated (after all episodes, i.e., games, end). Discounted rewards will be computed, and this information will be used to execute a training step. Memory will be cleared, and we will do it all over again!

Let's run the code block to train our Pong agent. Note that, even with parallelization, completing training and getting stable behavior will take quite a bit of time (estimated at least a couple of hours). We will again visualize the evolution of the total reward as a function of training to get a sense of how the agent is learning.
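The training block tracks performance as a win percentage derived from the aggregated rewards: every nonzero reward marks the end of a point, and +1 marks a point our agent won. On a toy reward sequence (values made up for illustration) the calculation looks like this:

```python
import numpy as np

# Rewards aggregated across a batch of Pong play: +1 = point won, -1 = point lost
rewards = np.array([0, 0, 1, 0, -1, 0, 1, 0, 0, -1, 1])
total_wins = np.sum(rewards == 1)        # points our agent won
total_games = np.sum(np.abs(rewards))    # every nonzero reward ends a point
win_rate = total_wins / total_games
print(f"{100 * win_rate:.0f}%")          # 3 wins out of 5 points -> 60%
```

This is why the plot is labeled "Win Percentage (%)": it is the smoothed fraction of points won, not a raw reward sum.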
###Code
### Hyperparameters and setup for training ###
# Rerun this cell if you want to re-initialize the training process
# (i.e., create new model, reset loss, etc)
# Hyperparameters
learning_rate = 1e-3
MAX_ITERS = 1000 # increase the maximum to train longer
batch_size = 5 # number of batches to run
# Model, optimizer
pong_model = create_pong_model()
optimizer = tf.keras.optimizers.Adam(learning_rate)
iteration = 0 # counter for training steps
# Plotting
smoothed_reward = mdl.util.LossHistory(smoothing_factor=0.9)
smoothed_reward.append(0) # start the reward at zero for baseline comparison
plotter = mdl.util.PeriodicPlotter(sec=15, xlabel='Iterations', ylabel='Win Percentage (%)')
# Batches and environment
# To parallelize batches, we need to make multiple copies of the environment.
envs = [create_pong_env() for _ in range(batch_size)] # For parallelization
### Training Pong ###
# You can run this cell and stop it anytime in the middle of training to save
# a progress video (see next codeblock). To continue training, simply run this
# cell again, your model will pick up right where it left off. To reset training,
# you need to run the cell above.
games_to_win_episode = 21 # this is set by OpenAI gym and cannot be changed.
# Main training loop
while iteration < MAX_ITERS:
plotter.plot(smoothed_reward.get())
tic = time.time()
# RL agent algorithm. By default, uses serial batch processing.
# memories = collect_rollout(batch_size, env, pong_model, choose_action)
# Parallelized version. Uncomment line below (and comment out line above) to parallelize
memories = mdl.lab3.parallelized_collect_rollout(batch_size, envs, pong_model, choose_action)
print(time.time()-tic)
# Aggregate memories from multiple batches
batch_memory = aggregate_memories(memories)
# Track performance based on win percentage (calculated from rewards)
total_wins = sum(np.array(batch_memory.rewards) == 1)
total_games = sum(np.abs(np.array(batch_memory.rewards)))
win_rate = total_wins / total_games
smoothed_reward.append(100 * win_rate)
# Training!
train_step(
pong_model,
optimizer,
observations = np.stack(batch_memory.observations, 0),
actions = np.array(batch_memory.actions),
discounted_rewards = discount_rewards(batch_memory.rewards)
)
# Save a video of progress -- this can be played back later
if iteration % 100 == 0:
mdl.lab3.save_video_of_model(pong_model, "Pong-v0",
suffix="_"+str(iteration))
iteration += 1 # Mark next episode
###Output
_____no_output_____
###Markdown
Finally we can put our trained agent to the test! It will play in a newly instantiated Pong environment against the "computer", a base AI system for Pong. Your agent plays as the green paddle. Let's watch the match instant replay!
###Code
latest_pong = mdl.lab3.save_video_of_model(
pong_model, "Pong-v0", suffix="_latest")
mdl.lab3.play_video(latest_pong, width=400)
###Output
_____no_output_____
###Markdown
Copyright Information
###Code
# Copyright 2022 MIT 6.S191 Introduction to Deep Learning. All Rights Reserved.
#
# Licensed under the MIT License. You may not use this file except in compliance
# with the License. Use and/or modification of this code outside of 6.S191 must
# reference:
#
# © MIT 6.S191: Introduction to Deep Learning
# http://introtodeeplearning.com
#
###Output
_____no_output_____
###Markdown
Laboratory 3: Reinforcement Learning

Reinforcement learning (RL) is a subset of machine learning which poses learning problems as interactions between agents and environments. It often assumes agents have no prior knowledge of a world, so they must learn to navigate environments by optimizing a reward function. Within an environment, an agent can take certain actions and receive feedback, in the form of positive or negative rewards, with respect to their decisions. As such, an agent's feedback loop is somewhat akin to the idea of "trial and error", or the manner in which a child might learn to distinguish between "good" and "bad" actions.

In practical terms, our RL agent will interact with the environment by taking an action at each timestep, receiving a corresponding reward, and updating its state according to what it has "learned". While the ultimate goal of reinforcement learning is to teach agents to act in the real, physical world, simulated environments -- like games and simulation engines -- provide a convenient proving ground for developing RL algorithms and agents.

In previous labs, we have explored both supervised (with LSTMs, CNNs) and unsupervised / semi-supervised (with VAEs) learning tasks. Reinforcement learning is fundamentally different, in that we are training a deep learning algorithm to govern the actions of our RL agent as it tries, within its environment, to find the optimal way to achieve a goal. The goal of training an RL agent is to determine the best next step to take to earn the greatest final payoff or return. In this lab, we focus on building a reinforcement learning algorithm to master two different environments with varying complexity.

1. **Cartpole**: Balance a pole, protruding from a cart, in an upright position by only moving the base left or right. An environment with a low-dimensional observation space.
2. 
[**Driving in VISTA**](https://www.mit.edu/~amini/pubs/pdf/learning-in-simulation-vista.pdf): Learn a driving control policy for an autonomous vehicle, end-to-end from raw pixel inputs and entirely in the data-driven simulation environment of VISTA. An environment with a high-dimensional observation space -- learning directly from raw pixels.

Let's get started! First we'll import TensorFlow, the course package, and some dependencies.
###Code
# Import Tensorflow 2.0
%tensorflow_version 2.x
import tensorflow as tf
gpus = tf.config.experimental.list_physical_devices('GPU')
if gpus:
for gpu in gpus:
tf.config.experimental.set_memory_growth(gpu, True)
# Download and import the MIT 6.S191 package
!printf "Installing MIT deep learning package... "
!pip install --upgrade git+https://github.com/aamini/introtodeeplearning.git &> /dev/null
!echo "Done"
#Install some dependencies for visualizing the agents
!apt-get install -y xvfb python-opengl x11-utils &> /dev/null
!pip install gym pyvirtualdisplay scikit-video ffio pyrender &> /dev/null
!pip install tensorflow_probability==0.12.0 &> /dev/null
import os
os.environ['PYOPENGL_PLATFORM'] = 'egl'
import numpy as np
import matplotlib, cv2
import matplotlib.pyplot as plt
import base64, io, os, time, gym
import IPython, functools
import time
from tqdm import tqdm
import tensorflow_probability as tfp
import mitdeeplearning as mdl
###Output
_____no_output_____
###Markdown
Before we dive in, let's take a step back and outline our approach, which is generally applicable to reinforcement learning problems:

1. **Initialize our environment and our agent**: here we will describe the different observations and actions the agent can make in the environment.
2. **Define our agent's memory**: this will enable the agent to remember its past actions, observations, and rewards.
3. **Define a reward function**: describes the reward associated with an action or sequence of actions.
4. **Define the learning algorithm**: this will be used to reinforce the agent's good behaviors and discourage bad behaviors.

Part 1: Cartpole

3.1 Define the Cartpole environment and agent

Environment

In order to model the environment for the Cartpole task, we'll be using a toolkit developed by OpenAI called [OpenAI Gym](https://gym.openai.com/). It provides several pre-defined environments for training and testing reinforcement learning agents, including those for classic physics control tasks, Atari video games, and robotic simulations. To access the Cartpole environment, we can use `env = gym.make("CartPole-v1")`, which we gained access to when we imported the `gym` package. We can instantiate different [environments](https://gym.openai.com/envs/classic_control) by passing the environment name to the `make` function.

One issue we might experience when developing RL algorithms is that many aspects of the learning process are inherently random: initializing game states, changes in the environment, and the agent's actions. As such, it can be helpful to set an initial "seed" for the environment to ensure some level of reproducibility. Much like you might use `numpy.random.seed`, we can call the comparable function in gym, `seed`, with our defined environment to ensure the environment's random variables are initialized the same each time.
###Code
### Instantiate the Cartpole environment ###
env = gym.make("CartPole-v1")
env.seed(1)
###Output
_____no_output_____
###Markdown
In Cartpole, a pole is attached by an un-actuated joint to a cart, which moves along a frictionless track. The pole starts upright, and the goal is to prevent it from falling over. The system is controlled by applying a force of +1 or -1 to the cart. A reward of +1 is provided for every timestep that the pole remains upright. The episode ends when the pole is more than 15 degrees from vertical, or the cart moves more than 2.4 units from the center of the track. A visual summary of the cartpole environment is depicted below.

Given this setup for the environment and the objective of the game, we can think about: 1) what observations help define the environment's state; 2) what actions the agent can take. First, let's consider the observation space. In this Cartpole environment, our observations are:

1. Cart position
2. Cart velocity
3. Pole angle
4. Pole rotation rate

We can confirm the size of the space by querying the environment's observation space:
###Code
n_observations = env.observation_space
print("Environment has observation space =", n_observations)
###Output
_____no_output_____
###Markdown
Second, we consider the action space. At every time step, the agent can move either right or left. Again we can confirm the size of the action space by querying the environment:
###Code
n_actions = env.action_space.n
print("Number of possible actions that the agent can choose from =", n_actions)
###Output
_____no_output_____
###Markdown
Cartpole agentNow that we have instantiated the environment and understood the dimensionality of the observation and action spaces, we are ready to define our agent. In deep reinforcement learning, a deep neural network defines the agent. This network will take as input an observation of the environment and output the probability of taking each of the possible actions. Since Cartpole is defined by a low-dimensional observation space, a simple feed-forward neural network should work well for our agent. We will define this using the `Sequential` API.
###Code
### Define the Cartpole agent ###
# Defines a feed-forward neural network
def create_cartpole_model():
model = tf.keras.models.Sequential([
# First Dense layer
tf.keras.layers.Dense(units=32, activation='relu'),
# TODO: Define the last Dense layer, which will provide the network's output.
# Think about the space the agent needs to act in!
tf.keras.layers.Dense(units=n_actions, activation=None) # TODO
# ['''TODO''' Dense layer to output action probabilities]
])
return model
cartpole_model = create_cartpole_model()
###Output
_____no_output_____
###Markdown
Now that we have defined the core network architecture, we will define an *action function* that executes a forward pass through the network, given a set of observations, and samples from the output. This sampling from the output probabilities will be used to select the next action for the agent. We will also add support so that the `choose_action` function can handle either a single observation or a batch of observations.**Critically, this action function is totally general -- we will use this function for learning control algorithms for Cartpole, but it is applicable to other RL tasks, as well!**
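To make the sampling step concrete, here is a plain-Python sketch of what `tf.random.categorical` does: it draws an action index from the categorical distribution defined by unnormalized log-probabilities (logits). The numbers below are illustrative:

```python
import math
import random

def softmax(logits):
    m = max(logits)                      # subtract the max for numerical stability
    exps = [math.exp(l - m) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

def sample_action(logits):
    # draw one action index, weighted by the softmax probabilities
    probs = softmax(logits)
    return random.choices(range(len(logits)), weights=probs, k=1)[0]

logits = [2.0, 0.5]                      # network favors action 0
probs = softmax(logits)
assert abs(sum(probs) - 1.0) < 1e-9      # probabilities sum to 1
assert probs[0] > probs[1]               # higher logit -> higher probability
assert sample_action(logits) in (0, 1)   # sampling returns a valid action index
```

Sampling (rather than always taking the argmax) keeps the agent exploring, which is essential early in training.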
###Code
### Define the agent's action function ###
# Function that takes observations as input, executes a forward pass through model,
# and outputs a sampled action.
# Arguments:
# model: the network that defines our agent
# observation: observation(s) which is/are fed as input to the model
# single: flag as to whether we are handling a single observation or batch of
# observations, provided as an np.array
# Returns:
# action: choice of agent action
def choose_action(model, observation, single=True):
# add batch dimension to the observation if only a single example was provided
observation = np.expand_dims(observation, axis=0) if single else observation
'''TODO: feed the observations through the model to predict the log probabilities of each possible action.'''
logits = model.predict(observation) # TODO
# logits = model.predict('''TODO''')
'''TODO: Choose an action from the categorical distribution defined by the log
probabilities of each possible action.'''
action = tf.random.categorical(logits, num_samples=1)
# action = ['''TODO''']
action = action.numpy().flatten()
return action[0] if single else action
###Output
_____no_output_____
###Markdown
3.2 Define the agent's memoryNow that we have instantiated the environment and defined the agent network architecture and action function, we are ready to move on to the next step in our RL workflow:1. **Initialize our environment and our agent**: here we will describe the different observations and actions the agent can make in the environment.2. **Define our agent's memory**: this will enable the agent to remember its past actions, observations, and rewards.3. **Define a reward function**: describes the reward associated with an action or sequence of actions.4. **Define the learning algorithm**: this will be used to reinforce the agent's good behaviors and discourage bad behaviors.In reinforcement learning, training occurs alongside the agent's acting in the environment; an *episode* refers to a sequence of actions that ends in some terminal state, such as the pole falling down or the cart crashing. The agent will need to remember all of its observations and actions, such that once an episode ends, it can learn to "reinforce" the good actions and punish the undesirable actions via training. Our first step is to define a simple `Memory` buffer that contains the agent's observations, actions, and received rewards from a given episode. We will also add support to combine a list of `Memory` objects into a single `Memory`. This will be very useful for batching, which will help you accelerate training later on in the lab.**Once again, note the modularity of this memory buffer -- it can and will be applied to other RL tasks as well!**
###Code
### Agent Memory ###
class Memory:
def __init__(self):
self.clear()
# Resets/restarts the memory buffer
def clear(self):
self.observations = []
self.actions = []
self.rewards = []
# Add observations, actions, rewards to memory
def add_to_memory(self, new_observation, new_action, new_reward):
self.observations.append(new_observation)
'''TODO: update the list of actions with new action'''
self.actions.append(new_action) # TODO
# ['''TODO''']
'''TODO: update the list of rewards with new reward'''
self.rewards.append(new_reward) # TODO
# ['''TODO''']
def __len__(self):
return len(self.actions)
# Instantiate a single Memory buffer
memory = Memory()
###Output
_____no_output_____
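###Markdown
The text above mentions combining a list of `Memory` objects into a single `Memory`, but the cell does not include that helper. One possible sketch -- the name `aggregate_memories` and its exact behavior are assumptions, and a minimal stand-in `Memory` class is included so the sketch runs on its own:

```python
# Minimal stand-in for the Memory class defined above, so this sketch is self-contained
class Memory:
    def __init__(self):
        self.observations, self.actions, self.rewards = [], [], []
    def add_to_memory(self, obs, action, reward):
        self.observations.append(obs)
        self.actions.append(action)
        self.rewards.append(reward)

# Hypothetical helper: merge several episode buffers into one batch buffer
def aggregate_memories(memories):
    batch = Memory()
    for m in memories:
        for step in zip(m.observations, m.actions, m.rewards):
            batch.add_to_memory(*step)
    return batch

m1, m2 = Memory(), Memory()
m1.add_to_memory([0.0], 1, 1.0)
m2.add_to_memory([0.1], 0, 1.0)
m2.add_to_memory([0.2], 1, 1.0)
combined = aggregate_memories([m1, m2])
assert len(combined.actions) == 3
assert combined.rewards == [1.0, 1.0, 1.0]
```

Batching several episodes into one buffer lets a single training step average over more experience, which reduces gradient noise.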
###Markdown
3.3 Reward functionWe're almost ready to define the learning algorithm for our agent! The next step is to compute the rewards of our agent as it acts in the environment. Since we (and the agent) are uncertain about whether and when the game or task will end (i.e., when the pole will fall), it is useful to emphasize getting rewards **now** rather than later in the future -- this is the idea of discounting. The idea is similar to discounting money in the case of interest.To compute the expected cumulative reward, known as the **return**, at a given timestep in a learning episode, we sum the discounted rewards expected from that time step $t$ onward, projecting into the future. We define the return (cumulative reward) at a time step $t$, $R_{t}$, as:>$R_{t}=\sum_{k=0}^\infty\gamma^kr_{t+k}$where $0 < \gamma < 1$ is the discount factor, $r_{t}$ is the reward at time step $t$, and the index $k$ projects into the future within a single learning episode. Intuitively, you can think of this function as depreciating any rewards received at later time steps, which forces the agent to prioritize getting rewards now. Since we can't extend episodes to infinity, in practice the computation is limited to the number of timesteps in the episode -- after that the reward is assumed to be zero.Take note of the form of this sum -- we'll have to be clever about how we implement this function. Specifically, we'll need to initialize an array of zeros with length equal to the number of time steps, and fill it with the discounted reward values as we loop backward through the rewards from the episode, which will have been saved in the agent's memory. 
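A small worked example of the backward recursion $R_t = r_t + \gamma R_{t+1}$, which is exactly what `discount_rewards` below implements (before normalization). The rewards and the small $\gamma$ are illustrative, chosen to keep the arithmetic easy to follow:

```python
gamma = 0.5                                  # small gamma to keep the numbers simple
rewards = [1.0, 1.0, 1.0]                    # +1 per surviving timestep

returns = [0.0] * len(rewards)
R = 0.0
for t in reversed(range(len(rewards))):      # walk backward from the end of the episode
    R = rewards[t] + gamma * R               # R_t = r_t + gamma * R_{t+1}
    returns[t] = R

# R_2 = 1, R_1 = 1 + 0.5*1 = 1.5, R_0 = 1 + 0.5*1.5 = 1.75
assert returns == [1.75, 1.5, 1.0]
```

Note how earlier timesteps accumulate larger returns: they get credit for everything that happened afterward, discounted by how far away it was.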
What we ultimately care about is which actions are better relative to other actions taken in that episode -- so, we'll normalize our computed rewards, using the mean and standard deviation of the rewards across the learning episode.We will use this definition of the reward function in both parts of the lab so make sure you have it executed!
###Code
### Reward function ###
# Helper function that normalizes an np.array x
def normalize(x):
x -= np.mean(x)
x /= np.std(x)
return x.astype(np.float32)
# Compute normalized, discounted, cumulative rewards (i.e., return)
# Arguments:
# rewards: reward at timesteps in episode
# gamma: discounting factor
# Returns:
# normalized discounted reward
def discount_rewards(rewards, gamma=0.95):
discounted_rewards = np.zeros_like(rewards)
R = 0
for t in reversed(range(0, len(rewards))):
# update the total discounted reward
R = R * gamma + rewards[t]
discounted_rewards[t] = R
return normalize(discounted_rewards)
###Output
_____no_output_____
###Markdown
3.4 Learning algorithmNow we can start to define the learning algorithm, which will be used to reinforce good behaviors of the agent and discourage bad behaviors. In this lab, we will focus on *policy gradient* methods, which aim to **maximize** the likelihood of actions that result in large rewards. Equivalently, this means that we want to **minimize** the negative likelihood of these same actions. We achieve this by simply **scaling** the log-probabilities by their associated rewards -- effectively amplifying the likelihood of actions that result in large rewards.Since the log function is monotonically increasing, minimizing the **negative likelihood** is equivalent to minimizing the **negative log-likelihood**. Recall that we can easily compute the negative log-likelihood of a discrete action by evaluating its [softmax cross entropy](https://www.tensorflow.org/api_docs/python/tf/nn/sparse_softmax_cross_entropy_with_logits). Like in supervised learning, we can use stochastic gradient descent methods to achieve the desired minimization. Let's begin by defining the loss function.
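Before implementing `compute_loss`, here is a plain-Python sketch of the quantity it computes: the negative log-probability of each chosen action (softmax cross entropy), scaled by the reward that action led to, then averaged. All numbers are illustrative:

```python
import math

def neg_log_prob(logits, action):
    # -log softmax(logits)[action], computed via the log-sum-exp trick
    m = max(logits)
    log_z = m + math.log(sum(math.exp(l - m) for l in logits))
    return log_z - logits[action]

logits_batch = [[2.0, 0.0], [0.0, 2.0]]      # one row of logits per timestep
actions = [0, 0]                             # actions the agent actually took
rewards = [1.0, -1.0]                        # (normalized) discounted returns

losses = [neg_log_prob(l, a) * r
          for l, a, r in zip(logits_batch, actions, rewards)]
loss = sum(losses) / len(losses)

# an action the network already favors has a small negative log-probability
assert neg_log_prob([2.0, 0.0], 0) < neg_log_prob([2.0, 0.0], 1)
```

Minimizing this loss pushes probability mass toward actions that received high reward and away from actions that received low (or, after normalization, negative) reward.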
###Code
### Loss function ###
# Arguments:
# logits: network's predictions for actions to take
# actions: the actions the agent took in an episode
# rewards: the rewards the agent received in an episode
# Returns:
# loss
def compute_loss(logits, actions, rewards):
'''TODO: complete the function call to compute the negative log probabilities'''
neg_logprob = tf.nn.sparse_softmax_cross_entropy_with_logits(
logits=logits, labels=actions) # TODO
# neg_logprob = tf.nn.sparse_softmax_cross_entropy_with_logits(
# logits='''TODO''', labels='''TODO''')
'''TODO: scale the negative log probability by the rewards'''
loss = tf.reduce_mean( neg_logprob * rewards ) # TODO
# loss = tf.reduce_mean('''TODO''')
return loss
###Output
_____no_output_____
###Markdown
Now let's use the loss function to define a training step of our learning algorithm. This is a very generalizable definition, which we will reuse when training the self-driving agent in Part 2 of this lab.
###Code
### Training step (forward and backpropagation) ###
def train_step(model, loss_function, optimizer, observations, actions, discounted_rewards, custom_fwd_fn=None):
with tf.GradientTape() as tape:
# Forward propagate through the agent network
if custom_fwd_fn is not None:
prediction = custom_fwd_fn(observations)
else:
prediction = model(observations)
'''TODO: call the compute_loss function to compute the loss'''
loss = loss_function(prediction, actions, discounted_rewards) # TODO
# loss = loss_function('''TODO''', '''TODO''', '''TODO''')
'''TODO: run backpropagation to minimize the loss using the tape.gradient method.
Unlike supervised learning, RL is *extremely* noisy, so you will benefit
from additionally clipping your gradients to avoid falling into
dangerous local minima. After computing your gradients try also clipping
by a global normalizer. Try different clipping values, usually clipping
between 0.5 and 5 provides reasonable results. '''
grads = tape.gradient(loss, model.trainable_variables) # TODO
# grads = tape.gradient('''TODO''', '''TODO''')
grads, _ = tf.clip_by_global_norm(grads, 2)
# grads, _ = tf.clip_by_global_norm(grads, '''TODO''')
optimizer.apply_gradients(zip(grads, model.trainable_variables))
###Output
_____no_output_____
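###Markdown
The training step above clips gradients by their global norm. A plain-Python sketch of the rescaling that `tf.clip_by_global_norm` performs -- if the combined norm of all gradients exceeds `clip_norm`, every gradient is scaled by `clip_norm / global_norm` -- with illustrative values:

```python
import math

def clip_by_global_norm(grads, clip_norm):
    # global norm: sqrt of the sum of squares of every element across all gradients
    global_norm = math.sqrt(sum(g * g for grad in grads for g in grad))
    if global_norm > clip_norm:
        scale = clip_norm / global_norm
        grads = [[g * scale for g in grad] for grad in grads]
    return grads, global_norm

grads = [[3.0, 4.0]]                    # global norm = 5
clipped, norm = clip_by_global_norm(grads, clip_norm=2.0)
assert norm == 5.0
clipped_norm = math.sqrt(sum(g * g for g in clipped[0]))
assert abs(clipped_norm - 2.0) < 1e-9   # rescaled down to the clip value
```

Because RL gradients are noisy, a single bad episode can produce a huge update; clipping caps the step size while preserving the gradient's direction.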
###Markdown
3.5 Run cartpole!Having had no prior knowledge of the environment, the agent will begin to learn how to balance the pole on the cart based only on the feedback received from the environment! Having defined how our agent can move, how it takes in new observations, and how it updates its state, we'll see how it gradually learns a policy of actions to optimize balancing the pole as long as possible. To do this, we'll track how the rewards evolve as a function of training -- how should the rewards change as training progresses?
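Since total reward varies wildly from episode to episode, the cell below plots a smoothed reward. As a sketch (the internals of `mdl.util.LossHistory` are an assumption here), a smoothing factor acts like an exponential moving average:

```python
def smooth(values, factor=0.95):
    # exponential moving average: each point is a blend of the running
    # average and the newest value
    smoothed, running = [], None
    for v in values:
        running = v if running is None else factor * running + (1 - factor) * v
        smoothed.append(running)
    return smoothed

raw = [10.0, 200.0, 10.0, 200.0]        # noisy per-episode rewards
out = smooth(raw, factor=0.5)
assert out[0] == 10.0                   # the first value passes through
assert out[1] == 105.0                  # 0.5*10 + 0.5*200
# the smoothed curve damps episode-to-episode noise, making the trend visible
```

If learning is working, the smoothed reward should trend upward as training progresses.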
###Code
## Training parameters ##
## Re-run this cell to restart training from scratch ##
# TODO: Learning rate and optimizer
learning_rate = 1e-3
# learning_rate = '''TODO'''
optimizer = tf.keras.optimizers.Adam(learning_rate)
# optimizer = '''TODO'''
# instantiate cartpole agent
cartpole_model = create_cartpole_model()
# to track our progress
smoothed_reward = mdl.util.LossHistory(smoothing_factor=0.95)
plotter = mdl.util.PeriodicPlotter(sec=2, xlabel='Iterations', ylabel='Rewards')
## Cartpole training! ##
## Note: stopping and restarting this cell will pick up training where you
# left off. To restart training you need to rerun the cell above as
# well (to re-initialize the model and optimizer)
if hasattr(tqdm, '_instances'): tqdm._instances.clear() # clear if it exists
for i_episode in range(500):
plotter.plot(smoothed_reward.get())
# Restart the environment
observation = env.reset()
memory.clear()
while True:
# using our observation, choose an action and take it in the environment
action = choose_action(cartpole_model, observation)
next_observation, reward, done, info = env.step(action)
# add to memory
memory.add_to_memory(observation, action, reward)
# is the episode over? did you crash or do so well that you're done?
if done:
# determine total reward and keep a record of this
total_reward = sum(memory.rewards)
smoothed_reward.append(total_reward)
# initiate training - remember we don't know anything about how the
# agent is doing until it has crashed!
g = train_step(cartpole_model, compute_loss, optimizer,
observations=np.vstack(memory.observations),
actions=np.array(memory.actions),
discounted_rewards = discount_rewards(memory.rewards))
# reset the memory
memory.clear()
break
  # update our observations
observation = next_observation
###Output
_____no_output_____
###Markdown
To get a sense of how our agent did, we can save a video of the trained model working on balancing the pole. Realize that this is a brand new environment that the agent has not seen before!Let's display the saved video to watch how our agent did!
###Code
matplotlib.use('Agg')
saved_cartpole = mdl.lab3.save_video_of_model(cartpole_model, "CartPole-v1")
mdl.lab3.play_video(saved_cartpole)
###Output
_____no_output_____
###Markdown
How does the agent perform? Could you train it for shorter amounts of time and still perform well? Do you think that training longer would help even more? Part 2: Training Autonomous Driving Policies in VISTAAutonomous control has traditionally been dominated by algorithms that explicitly decompose individual aspects of the control pipeline. For example, in autonomous driving, traditional methods work by first detecting road and lane boundaries, and then using path planning and rule-based methods to derive a control policy. Deep learning offers something very different -- the possibility of optimizing all these steps simultaneously, learning control end-to-end directly from raw sensory inputs.**You will explore the power of deep learning to learn autonomous control policies that are trained *end-to-end, directly from raw sensory data, and entirely within a simulated world*.**We will use the data-driven simulation engine [VISTA](https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=8957584&tag=1), which uses techniques in computer vision to synthesize new photorealistic trajectories and driving viewpoints that are still consistent with the world's appearance and fall within the envelope of a real driving scene. This is a powerful approach -- we can synthesize data that is photorealistic and grounded in the real world, and then use this data for training and testing autonomous vehicle control policies within the simulator.In this part of the lab, you will use reinforcement learning to build a self-driving agent with a neural network-based controller trained on RGB camera data. We will train the self-driving agent for the task of lane following. 
Beyond this data modality and control task, VISTA also supports [different data modalities](https://arxiv.org/pdf/2111.12083.pdf) (such as LiDAR data) and [different learning tasks](https://arxiv.org/pdf/2111.12137.pdf) (such as multi-car interactions).You will put your agent to the test in the VISTA environment, and potentially, on board a full-scale autonomous vehicle! Specifically, as part of the MIT lab competitions, high-performing agents -- evaluated based on the maximum distance they can travel without crashing -- will have the opportunity to be put to the ***real*** test onboard a full-scale autonomous vehicle!!! We start by installing dependencies. This includes installing the VISTA package itself.
###Code
!pip install --upgrade git+https://github.com/vista-simulator/vista-6s191.git
import vista
from vista.utils import logging
logging.setLevel(logging.ERROR)
###Output
_____no_output_____
###Markdown
VISTA provides some documentation which will be very helpful to completing this lab. You can always use the `?vista` command to access the package documentation.
###Code
### Access documentation for VISTA
### Run ?vista.<[name of module or function]>
?vista.Display
###Output
_____no_output_____
###Markdown
3.6 Create an environment in VISTAEnvironments in VISTA are based on and built from human-collected driving *traces*. A trace is the data from a single driving run. In this case we'll be working with RGB camera data, from the viewpoint of the driver looking out at the road: the camera collects this data as the car drives around!We will start by accessing a trace. We use that trace to instantiate an environment within VISTA. This is our `World` and defines the environment we will use for reinforcement learning. The trace itself helps to define a space for the environment; with VISTA, we can use the trace to generate new photorealistic viewpoints anywhere within that space. This provides valuable new training data as well as a robust testing environment.The simulated environment of VISTA will serve as our training ground and testbed for reinforcement learning. We also define an `Agent` -- a car -- that will actually move around in the environment, and make and carry out *actions* in this world. Because this is an entirely simulated environment, our car agent will also be simulated!
###Code
# Download and extract the data for vista (auto-skip if already downloaded)
!wget -nc -q --show-progress https://www.dropbox.com/s/62pao4mipyzk3xu/vista_traces.zip
print("Unzipping data...")
!unzip -o -q vista_traces.zip
print("Done downloading and unzipping data!")
trace_root = "./vista_traces"
trace_path = [
"20210726-154641_lexus_devens_center",
"20210726-155941_lexus_devens_center_reverse",
"20210726-184624_lexus_devens_center",
"20210726-184956_lexus_devens_center_reverse",
]
trace_path = [os.path.join(trace_root, p) for p in trace_path]
# Create a virtual world with VISTA, the world is defined by a series of data traces
world = vista.World(trace_path, trace_config={'road_width': 4})
# Create a car in our virtual world. The car will be able to step and take different
# control actions. As the car moves, its sensors will simulate any changes in its environment
car = world.spawn_agent(
config={
'length': 5.,
'width': 2.,
'wheel_base': 2.78,
'steering_ratio': 14.7,
'lookahead_road': True
})
# Create a camera on the car for synthesizing the sensor data that we can use to train with!
camera = car.spawn_camera(config={'size': (200, 320)})
# Define a rendering display so we can visualize the simulated car camera stream and also
# get see its physical location with respect to the road in its environment.
display = vista.Display(world, display_config={"gui_scale": 2, "vis_full_frame": False})
# Define a simple helper function that allows us to reset VISTA and the rendering display
def vista_reset():
world.reset()
display.reset()
vista_reset()
###Output
_____no_output_____
###Markdown
If successful, you should see a blank black screen at this point. Your rendering display has been initialized. 3.7 Our virtual agent: the carOur goal is to learn a control policy for our agent, our (hopefully) autonomous vehicle, end-to-end directly from RGB camera sensory input. As in Cartpole, we need to define how our virtual agent will interact with its environment. Define agent's action functionsIn the case of driving, the car agent can act -- taking a step in the VISTA environment -- according to a given control command. This amounts to moving with a desired speed and a desired *curvature*, which reflects the car's turn radius. Curvature has units $\frac{1}{meter}$. So, if a car is traversing a circle of radius $r$ meters, then it is turning with a curvature $\frac{1}{r}$. The curvature is therefore correlated with the car's steering wheel angle, which actually controls its turn radius. Let's define the car agent's step function to capture the action of moving with a desired speed and desired curvature.
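A quick illustration of the curvature convention described above: a car following a circle of radius $r$ meters drives with curvature $\frac{1}{r}$ (in units of $\frac{1}{meter}$), and driving straight corresponds to curvature 0. The radii below are illustrative:

```python
def curvature_from_radius(radius_m):
    # curvature is the reciprocal of the turn radius
    return 1.0 / radius_m

assert curvature_from_radius(10.0) == 0.1      # tight 10 m circle
assert curvature_from_radius(100.0) == 0.01    # gentle 100 m circle
# smaller turn radius -> larger curvature -> sharper steering
assert curvature_from_radius(10.0) > curvature_from_radius(100.0)
```

This is why the control network can output a single bounded number: small magnitudes mean gentle steering, larger magnitudes mean tighter turns, and the sign picks the direction.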
###Code
# First we define a step function, to allow our virtual agent to step
# with a given control command through the environment
# agent can act with a desired curvature (turning radius, like steering angle)
# and desired speed. if either is not provided then this step function will
# use whatever the human executed at that time in the real data.
def vista_step(curvature=None, speed=None):
# Arguments:
# curvature: curvature to step with
# speed: speed to step with
if curvature is None:
curvature = car.trace.f_curvature(car.timestamp)
if speed is None:
speed = car.trace.f_speed(car.timestamp)
car.step_dynamics(action=np.array([curvature, speed]), dt=1/15.)
car.step_sensors()
###Output
_____no_output_____
###Markdown
Inspect driving trajectories in VISTARecall that our VISTA environment is based off an initial human-collected driving trace. Also, we defined the agent's step function to defer to what the human executed if it is not provided with a desired speed and curvature with which to move.Thus, we can further inspect the environment by using the driving agent's step function to follow the human path. Stepping runs at roughly one iteration per second. We will then observe the data that comes out to see the agent's traversal of the environment.
###Code
import shutil, os, subprocess, cv2
# Create a simple helper class that will assist us in storing videos of the render
class VideoStream():
def __init__(self):
self.tmp = "./tmp"
if os.path.exists(self.tmp) and os.path.isdir(self.tmp):
shutil.rmtree(self.tmp)
os.mkdir(self.tmp)
def write(self, image, index):
cv2.imwrite(os.path.join(self.tmp, f"{index:04}.png"), image)
def save(self, fname):
cmd = f"/usr/bin/ffmpeg -f image2 -i {self.tmp}/%04d.png -crf 0 -y {fname}"
subprocess.call(cmd, shell=True)
## Render and inspect a human trace ##
vista_reset()
stream = VideoStream()
for i in tqdm(range(100)):
vista_step()
# Render and save the display
vis_img = display.render()
stream.write(vis_img[:, :, ::-1], index=i)
if car.done:
break
print("Saving trajectory of human following...")
stream.save("human_follow.mp4")
mdl.lab3.play_video("human_follow.mp4")
###Output
_____no_output_____
###Markdown
Check out the simulated VISTA environment. What do you notice about the environment, the agent, and the setup of the simulation engine? How could these aspects be useful for training models? Define terminal states: crashing! (oh no)Recall from Cartpole, our training episodes ended when the pole toppled, i.e., the agent crashed and failed. Similarly, for training vehicle control policies in VISTA, we have to define what a ***crash*** means. We will define a crash as any time the car moves out of its lane or exceeds its maximum rotation. This will define the end of a training episode.
###Code
## Define terminal states and crashing conditions ##
def check_out_of_lane(car):
distance_from_center = np.abs(car.relative_state.x)
road_width = car.trace.road_width
half_road_width = road_width / 2
return distance_from_center > half_road_width
def check_exceed_max_rot(car):
maximal_rotation = np.pi / 10.
current_rotation = np.abs(car.relative_state.yaw)
return current_rotation > maximal_rotation
def check_crash(car):
return check_out_of_lane(car) or check_exceed_max_rot(car) or car.done
###Output
_____no_output_____
###Markdown
Quick check: acting with a random control policyAt this point, we have (1) an environment; (2) an agent, with a step function. Before we start learning a control policy for our vehicle agent, we start by testing out the behavior of the agent in the virtual world by providing it with a completely random control policy. Naturally we expect that the behavior will not be very robust! Let's take a look.
###Code
## Behavior with random control policy ##
i = 0
num_crashes = 5
stream = VideoStream()
for _ in range(num_crashes):
vista_reset()
while not check_crash(car):
# Sample a random curvature (between +/- 1/3), keep speed constant
curvature = np.random.uniform(-1/3, 1/3)
# Step the simulated car with the same action
vista_step(curvature=curvature)
# Render and save the display
vis_img = display.render()
stream.write(vis_img[:, :, ::-1], index=i)
i += 1
print(f"Car crashed on step {i}")
for _ in range(5):
stream.write(vis_img[:, :, ::-1], index=i)
i += 1
print("Saving trajectory with random policy...")
stream.save("random_policy.mp4")
mdl.lab3.play_video("random_policy.mp4")
###Output
_____no_output_____
###Markdown
3.8 Preparing to learn a control policy: data preprocessingSo, hopefully you saw that the random control policy was, indeed, not very robust. Yikes. Our overall goal in this lab is to build a self-driving agent using a neural network controller trained entirely in the simulator VISTA. This means that the data used to train and test the self-driving agent will be supplied by VISTA. As a step towards this, we will do some data preprocessing to make it easier for the network to learn from these visual data.Previously we rendered the data with a display as a quick check that the environment was working properly. For training the agent, we will directly access the car's observations, extract Regions Of Interest (ROI) from those observations, crop them out, and use these crops as training data for our self-driving agent controller. Observe both the full observation and the extracted ROI.
###Code
from google.colab.patches import cv2_imshow
# Directly access the raw sensor observations of the simulated car
vista_reset()
full_obs = car.observations[camera.name]
cv2_imshow(full_obs)
## ROIs ##
# Crop a smaller region of interest (ROI). This is necessary because:
# 1. The full observation will have distortions on the edge as the car deviates from the human
# 2. A smaller image of the environment will be easier for our model to learn from
region_of_interest = camera.camera_param.get_roi()
i1, j1, i2, j2 = region_of_interest
cropped_obs = full_obs[i1:i2, j1:j2]
cv2_imshow(cropped_obs)
###Output
_____no_output_____
###Markdown
We will group these steps into some helper functions that we can use during training: 1. `preprocess`: takes a full observation as input and returns a preprocessed version. This can include whatever preprocessing steps you would like! For example, ROI extraction, cropping, augmentations, and so on. You are welcome to add and modify this function as you seek to optimize your self-driving agent!2. `grab_and_preprocess`: grab the car's current observation (i.e., image view from its perspective), and then call the `preprocess` function on that observation.
###Code
## Data preprocessing functions ##
def preprocess(full_obs):
# Extract ROI
i1, j1, i2, j2 = camera.camera_param.get_roi()
obs = full_obs[i1:i2, j1:j2]
# Rescale to [0, 1]
obs = obs / 255.
return obs
def grab_and_preprocess_obs(car):
full_obs = car.observations[camera.name]
obs = preprocess(full_obs)
return obs
###Output
_____no_output_____
###Markdown
3.9 Define the self-driving agent and learning algorithmAs before, we'll use a neural network to define our agent and output actions that it will take. Fixing the agent's driving speed, we will train this network to predict a curvature -- a continuous value -- that will relate to the car's turn radius. Specifically, define the model to output a prediction of a continuous distribution of curvature, defined by a mean $\mu$ and standard deviation $\sigma$. These parameters will define a Normal distribution over curvature.What network architecture do you think would be especially well suited to the task of end-to-end control learning from RGB images? Since our observations are in the form of RGB images, we'll start with a convolutional network. Note that you will be tasked with completing a template CNN architecture for the self-driving car agent -- but you should certainly experiment beyond this template to try to optimize performance!
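Before building the network, it helps to see how two unconstrained outputs can be mapped to the parameters of a Normal distribution over curvature: a `tanh` bounds the mean and a `sigmoid` keeps the standard deviation positive. This is a sketch of the squashing used later in `run_driving_model`; the constants mirror hyperparameters that appear in that cell (`max_curvature = 1/8`, `max_std = 0.1`), and the raw input values are illustrative:

```python
import math

def to_distribution_params(raw_mu, raw_logsigma,
                           max_curvature=1/8., max_std=0.1):
    # bound the mean curvature to [-max_curvature, max_curvature]
    mu = max_curvature * math.tanh(raw_mu)
    # keep the standard deviation positive and bounded, with a small floor
    sigma = max_std * (1.0 / (1.0 + math.exp(-raw_logsigma))) + 0.005
    return mu, sigma

mu, sigma = to_distribution_params(0.0, 0.0)
assert mu == 0.0                        # tanh(0) = 0 -> steer straight ahead
assert abs(sigma - 0.055) < 1e-9        # 0.1 * sigmoid(0) + 0.005
mu_big, _ = to_distribution_params(100.0, 0.0)
assert mu_big <= 1/8.                   # the mean curvature is always bounded
```

Bounding the outputs this way keeps the sampled steering commands physically plausible no matter what the raw network activations are.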
###Code
### Define the self-driving agent ###
# Note: we start with a template CNN architecture -- experiment away as you
# try to optimize your agent!
# Functionally define layers for convenience
# All convolutional layers will use the Swish activation
act = tf.keras.activations.swish
Conv2D = functools.partial(tf.keras.layers.Conv2D, padding='valid', activation=act)
Flatten = tf.keras.layers.Flatten
Dense = tf.keras.layers.Dense
# Defines a CNN for the self-driving agent
def create_driving_model():
model = tf.keras.models.Sequential([
# Convolutional layers
# First, 32 5x5 filters and 2x2 stride
Conv2D(filters=32, kernel_size=5, strides=2),
# TODO: define convolutional layers with 48 5x5 filters and 2x2 stride
Conv2D(filters=48, kernel_size=5, strides=2), # TODO
# Conv2D('''TODO'''),
# TODO: define two convolutional layers with 64 3x3 filters and 2x2 stride
Conv2D(filters=64, kernel_size=3, strides=2), # TODO
Conv2D(filters=64, kernel_size=3, strides=2), # TODO
# Conv2D('''TODO'''),
Flatten(),
# Fully connected layer and output
Dense(units=128, activation=act),
# TODO: define the output dimension of the last Dense layer.
# Pay attention to the space the agent needs to act in.
  # Remember that this model is outputting a distribution of *continuous*
# actions, which take a different shape than discrete actions.
# How many outputs should there be to define a distribution?'''
Dense(units=2, activation=None) # TODO
# Dense('''TODO''')
])
return model
driving_model = create_driving_model()
###Output
_____no_output_____
###Markdown
Now we will define the learning algorithm, which will be used to reinforce good behaviors of the agent and discourage bad behaviors. As with Cartpole, we will use a *policy gradient* method that aims to **maximize** the likelihood of actions that result in large rewards. However, there are some key differences. In Cartpole, the agent's action space was discrete: it could only move left or right. In driving, the agent's action space is continuous: the control network outputs a curvature, which is a continuous variable. We will define a probability distribution, parameterized by a mean and variance, over this continuous action space to define the possible actions the self-driving agent can take.You will define two functions that reflect these changes and form the core of the learning algorithm:1. `run_driving_model`: takes an input image and outputs a prediction of a continuous distribution of curvature. This will take the form of a Normal distribution and will be defined using TensorFlow Probability's [`tfp.distributions.Normal`](https://www.tensorflow.org/probability/api_docs/python/tfp/distributions/Normal) function, so the model's prediction will include both a mean and variance. It operates on the instance `driving_model` defined above.2. `compute_driving_loss`: computes the loss for a prediction that is in the form of a continuous Normal distribution. Recall that, as in Cartpole, computing the loss involves multiplying the predicted log probabilities by the discounted rewards. It is similar to `compute_loss` in Cartpole.The `train_step` function that uses the loss function to execute a training step will be the same as we used in Cartpole! It will have to be executed above in order for the driving agent to train properly.
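Here is a plain-Python sketch of the continuous-action analogue of the Cartpole loss: the log-probability of a sampled curvature under a Normal$(\mu, \sigma)$, which is what `tfp.distributions.Normal(...).log_prob` computes in the cell below. The values of `mu`, `sigma`, the actions, and the rewards are illustrative:

```python
import math

def normal_log_prob(x, mu, sigma):
    # log density of a Normal(mu, sigma) evaluated at x
    return (-0.5 * math.log(2 * math.pi * sigma**2)
            - (x - mu)**2 / (2 * sigma**2))

mu, sigma = 0.02, 0.05
# actions near the predicted mean are more likely than actions far from it
assert normal_log_prob(0.02, mu, sigma) > normal_log_prob(0.10, mu, sigma)

# negative log-probability scaled by reward, averaged -> policy-gradient loss
actions = [0.02, 0.10]
rewards = [1.0, -1.0]
loss = sum(-normal_log_prob(a, mu, sigma) * r
           for a, r in zip(actions, rewards)) / len(actions)
```

The structure of the loss is unchanged from Cartpole; only the source of the log-probabilities differs -- a Normal density instead of a softmax over discrete actions.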
###Code
## The self-driving learning algorithm ##
# hyperparameters
max_curvature = 1/8.
max_std = 0.1
def run_driving_model(image):
# Arguments:
# image: an input image
# Returns:
# pred_dist: predicted distribution of control actions
single_image_input = tf.rank(image) == 3 # missing 4th batch dimension
if single_image_input:
image = tf.expand_dims(image, axis=0)
'''TODO: get the prediction of the model given the current observation.'''
distribution = driving_model(image) # TODO
# distribution = ''' TODO '''
mu, logsigma = tf.split(distribution, 2, axis=1)
mu = max_curvature * tf.tanh(mu) # conversion
sigma = max_std * tf.sigmoid(logsigma) + 0.005 # conversion
'''TODO: define the predicted distribution of curvature, given the predicted
mean mu and standard deviation sigma. Use a Normal distribution as defined
in TF probability (hint: tfp.distributions)'''
pred_dist = tfp.distributions.Normal(mu, sigma) # TODO
# pred_dist = ''' TODO '''
return pred_dist
def compute_driving_loss(dist, actions, rewards):
# Arguments:
# dist: the network's predicted distribution over actions to take
# actions: the actions the agent took in an episode
# rewards: the rewards the agent received in an episode
# Returns:
# loss
'''TODO: complete the function call to compute the negative log probabilities
of the agent's actions.'''
neg_logprob = -1 * dist.log_prob(actions)
# neg_logprob = '''TODO'''
'''TODO: scale the negative log probability by the rewards.'''
loss = tf.reduce_mean( neg_logprob * rewards ) # TODO
# loss = tf.reduce_mean('''TODO''')
return loss
###Output
_____no_output_____
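As a sanity check on the continuous-action loss, the negative log-probability of a Normal distribution can be computed analytically. The NumPy sketch below (illustrative only, not part of the lab code; the function name `normal_neg_logprob` and the toy values are made up) mirrors what `compute_driving_loss` does with `tfp.distributions.Normal`: compute per-action negative log-probabilities, scale by rewards, and average.

```python
import numpy as np

def normal_neg_logprob(actions, mu, sigma):
    # -log N(a; mu, sigma) = 0.5*log(2*pi*sigma^2) + (a - mu)^2 / (2*sigma^2)
    return 0.5 * np.log(2 * np.pi * sigma**2) + (actions - mu)**2 / (2 * sigma**2)

# toy values: three sampled curvatures, a fixed predicted mean/std, and rewards
actions = np.array([0.05, -0.02, 0.10])
mu, sigma = 0.0, 0.1
rewards = np.array([1.0, 1.0, 0.5])

neg_logprob = normal_neg_logprob(actions, mu, sigma)
# actions far from the predicted mean (relative to sigma) incur a larger penalty
loss = np.mean(neg_logprob * rewards)
print(neg_logprob, loss)
```

Note how the reward weighting means that high-reward actions dominate the gradient: the optimizer pushes the predicted distribution toward the actions that paid off.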
###Markdown
3.10 Train the self-driving agentWe're now all set up to start training our RL algorithm and agent for autonomous driving!We begin by initializing an optimizer, environment, a new driving agent, and a `Memory` buffer. This will be in the first code block. To restart training completely, you will need to re-run this cell to re-initialize everything.The second code block is the main training script. Here, reinforcement learning episodes will be executed by agents in the VISTA environment. Since the self-driving agent starts out with literally zero knowledge of its environment, it can often take a long time to train and achieve stable behavior -- keep this in mind! For convenience, stopping and restarting the second cell will pick up training where you left off.The training block will execute a policy in the environment until the agent crashes. When the agent crashes, the (state, action, reward) triplets `(s,a,r)` saved in the `Memory` buffer over the episode will be provided as input to the policy gradient loss function. This information will be used to execute optimization within the training step. Memory will be cleared, and we will then do it all over again!Let's run the code block to train our self-driving agent. We will again visualize the evolution of the total reward as a function of training to get a sense of how the agent is learning. **You should reach a reward of at least 100 to get bare minimum stable behavior.**
###Code
## Training parameters and initialization ##
## Re-run this cell to restart training from scratch ##
''' TODO: Learning rate and optimizer '''
learning_rate = 5e-4
# learning_rate = '''TODO'''
optimizer = tf.keras.optimizers.Adam(learning_rate)
# optimizer = '''TODO'''
# instantiate driving agent
vista_reset()
driving_model = create_driving_model()
# NOTE: the variable driving_model will be used in run_driving_model execution
# to track our progress
smoothed_reward = mdl.util.LossHistory(smoothing_factor=0.9)
plotter = mdl.util.PeriodicPlotter(sec=2, xlabel='Iterations', ylabel='Rewards')
# instantiate Memory buffer
memory = Memory()
## Driving training! Main training block. ##
## Note: stopping and restarting this cell will pick up training where you
# left off. To restart training you need to rerun the cell above as
# well (to re-initialize the model and optimizer)
max_batch_size = 300
max_reward = float('-inf') # keep track of the maximum reward achieved during training
if hasattr(tqdm, '_instances'): tqdm._instances.clear() # clear if it exists
for i_episode in range(500):
plotter.plot(smoothed_reward.get())
# Restart the environment
vista_reset()
memory.clear()
observation = grab_and_preprocess_obs(car)
while True:
# TODO: using the car's current observation compute the desired
# action (curvature) distribution by feeding it into our
# driving model (use the function you already built to do this!)
curvature_dist = run_driving_model(observation)
# curvature_dist = '''TODO'''
# TODO: sample from the action *distribution* to decide how to step
# the car in the environment. You may want to check the documentation
# for tfp.distributions.Normal online. Remember that the sampled action
# should be a single scalar value after this step.
curvature_action = curvature_dist.sample()[0,0]
# curvature_action = '''TODO'''
# Step the simulated car with the same action
vista_step(curvature_action)
observation = grab_and_preprocess_obs(car)
# TODO: Compute the reward for this iteration. You define
# the reward function for this policy, start with something
# simple - for example, give a reward of 1 if the car did not
# crash and a reward of 0 if it did crash.
reward = 1.0 if not check_crash(car) else 0.0
# reward = '''TODO'''
# add to memory
memory.add_to_memory(observation, curvature_action, reward)
# is the episode over? did you crash or do so well that you're done?
if reward == 0.0:
# determine total reward and keep a record of this
total_reward = sum(memory.rewards)
smoothed_reward.append(total_reward)
# execute training step - remember we don't know anything about how the
# agent is doing until it has crashed! if the training step is too large
# we need to sample a mini-batch for this step.
batch_size = min(len(memory), max_batch_size)
i = np.random.choice(len(memory), batch_size, replace=False)
train_step(driving_model, compute_driving_loss, optimizer,
observations=np.array(memory.observations)[i],
actions=np.array(memory.actions)[i],
discounted_rewards = discount_rewards(memory.rewards)[i],
custom_fwd_fn=run_driving_model)
# reset the memory
memory.clear()
break
###Output
_____no_output_____
###Markdown
3.11 Evaluate the self-driving agentFinally, we can put our trained self-driving agent to the test! It will execute autonomous control, in VISTA, based on the learned controller. We will evaluate how well it does based on the distance the car travels without crashing. We await the result...
###Code
## Evaluation block! ##
i_step = 0
num_episodes = 5
num_reset = 5
stream = VideoStream()
for i_episode in range(num_episodes):
# Restart the environment
vista_reset()
observation = grab_and_preprocess_obs(car)
print("rolling out in env")
episode_step = 0
while not check_crash(car) and episode_step < 100:
# using our observation, choose an action and take it in the environment
curvature_dist = run_driving_model(observation)
curvature = curvature_dist.mean()[0,0]
# Step the simulated car with the same action
vista_step(curvature)
observation = grab_and_preprocess_obs(car)
vis_img = display.render()
stream.write(vis_img[:, :, ::-1], index=i_step)
i_step += 1
episode_step += 1
for _ in range(num_reset):
stream.write(np.zeros_like(vis_img), index=i_step)
i_step += 1
print(f"Average reward: {(i_step - (num_reset*num_episodes)) / num_episodes}")
print("Saving trajectory with trained policy...")
stream.save("trained_policy.mp4")
mdl.lab3.play_video("trained_policy.mp4")
###Output
_____no_output_____
###Markdown
Copyright Information
###Code
# Copyright 2021 MIT 6.S191 Introduction to Deep Learning. All Rights Reserved.
#
# Licensed under the MIT License. You may not use this file except in compliance
# with the License. Use and/or modification of this code outside of 6.S191 must
# reference:
#
# © MIT 6.S191: Introduction to Deep Learning
# http://introtodeeplearning.com
#
###Output
_____no_output_____
###Markdown
Laboratory 3: Reinforcement LearningReinforcement learning (RL) is a subset of machine learning which poses learning problems as interactions between agents and environments. It often assumes agents have no prior knowledge of a world, so they must learn to navigate environments by optimizing a reward function. Within an environment, an agent can take certain actions and receive feedback, in the form of positive or negative rewards, with respect to their decision. As such, an agent's feedback loop is somewhat akin to the idea of "trial and error", or the manner in which a child might learn to distinguish between "good" and "bad" actions.In practical terms, our RL agent will interact with the environment by taking an action at each timestep, receiving a corresponding reward, and updating its state according to what it has "learned". While the ultimate goal of reinforcement learning is to teach agents to act in the real, physical world, games provide a convenient proving ground for developing RL algorithms and agents. Games have some properties that make them particularly well suited for RL: 1. In many cases, games have perfectly describable environments. For example, all rules of chess can be formally written and programmed into a chess game simulator;2. Games are massively parallelizable. Since they do not require running in the real world, simultaneous environments can be run on large data clusters; 3. Simpler scenarios in games enable fast prototyping. This speeds up the development of algorithms that could eventually run in the real-world; and4. ... Games are fun! In previous labs, we have explored both supervised (with LSTMs, CNNs) and unsupervised / semi-supervised (with VAEs) learning tasks. Reinforcement learning is fundamentally different, in that we are training a deep learning algorithm to govern the actions of our RL agent, that is trying, within its environment, to find the optimal way to achieve a goal. 
The goal of training an RL agent is to determine the best next step to take to earn the greatest final payoff or return. In this lab, we focus on building a reinforcement learning algorithm to master two different environments with varying complexity. 1. **Cartpole**: Balance a pole, protruding from a cart, in an upright position by only moving the base left or right. Environment with a low-dimensional observation space.2. [**Pong**](https://en.wikipedia.org/wiki/Pong): Beat your competitors (whether other AI or humans!) at the game of Pong. Environment with a high-dimensional observation space -- learning directly from raw pixels.Let's get started! First we'll import TensorFlow, the course package, and some dependencies.
###Code
# Import Tensorflow 2.0
%tensorflow_version 2.x
import tensorflow as tf
import numpy as np
import base64, io, time, gym
import IPython, functools
import matplotlib.pyplot as plt
import time
from tqdm import tqdm
# Download and import the MIT 6.S191 package
!pip install mitdeeplearning
import mitdeeplearning as mdl
###Output
_____no_output_____
###Markdown
Before we dive in, let's take a step back and outline our approach, which is generally applicable to reinforcement learning problems:1. **Initialize our environment and our agent**: here we will describe the different observations and actions the agent can make in the environment.2. **Define our agent's memory**: this will enable the agent to remember its past actions, observations, and rewards.3. **Define a reward function**: describes the reward associated with an action or sequence of actions.4. **Define the learning algorithm**: this will be used to reinforce the agent's good behaviors and discourage bad behaviors. Part 1: Cartpole 3.1 Define the Cartpole environment and agent Environment In order to model the environment for both the Cartpole and Pong tasks, we'll be using a toolkit developed by OpenAI called [OpenAI Gym](https://gym.openai.com/). It provides several pre-defined environments for training and testing reinforcement learning agents, including those for classic physics control tasks, Atari video games, and robotic simulations. To access the Cartpole environment, we can use `env = gym.make("CartPole-v0")`, which we gained access to when we imported the `gym` package. We can instantiate different [environments](https://gym.openai.com/envs/classic_control) by passing the environment name to the `make` function.One issue we might experience when developing RL algorithms is that many aspects of the learning process are inherently random: initializing game states, changes in the environment, and the agent's actions. As such, it can be helpful to set an initial "seed" for the environment to ensure some level of reproducibility. Much like you might use `numpy.random.seed`, we can call the comparable function in gym, `seed`, with our defined environment to ensure the environment's random variables are initialized the same each time.
###Code
### Instantiate the Cartpole environment ###
env = gym.make("CartPole-v0")
env.seed(1)
###Output
_____no_output_____
###Markdown
In Cartpole, a pole is attached by an un-actuated joint to a cart, which moves along a frictionless track. The pole starts upright, and the goal is to prevent it from falling over. The system is controlled by applying a force of +1 or -1 to the cart. A reward of +1 is provided for every timestep that the pole remains upright. The episode ends when the pole is more than 15 degrees from vertical, or the cart moves more than 2.4 units from the center of the track. A visual summary of the cartpole environment is depicted below:Given this setup for the environment and the objective of the game, we can think about: 1) what observations help define the environment's state; 2) what actions the agent can take. First, let's consider the observation space. In this Cartpole environment our observations are:1. Cart position2. Cart velocity3. Pole angle4. Pole rotation rateWe can confirm the size of the space by querying the environment's observation space:
###Code
n_observations = env.observation_space
print("Environment has observation space =", n_observations)
###Output
_____no_output_____
###Markdown
Second, we consider the action space. At every time step, the agent can move either right or left. Again we can confirm the size of the action space by querying the environment:
###Code
n_actions = env.action_space.n
print("Number of possible actions that the agent can choose from =", n_actions)
###Output
_____no_output_____
###Markdown
Cartpole agentNow that we have instantiated the environment and understood the dimensionality of the observation and action spaces, we are ready to define our agent. In deep reinforcement learning, a deep neural network defines the agent. This network will take as input an observation of the environment and output the probability of taking each of the possible actions. Since Cartpole is defined by a low-dimensional observation space, a simple feed-forward neural network should work well for our agent. We will define this using the `Sequential` API.
###Code
### Define the Cartpole agent ###
# Defines a feed-forward neural network
def create_cartpole_model():
model = tf.keras.models.Sequential([
# First Dense layer
tf.keras.layers.Dense(units=32, activation='relu'),
# TODO: Define the last Dense layer, which will provide the network's output.
# Think about the space the agent needs to act in!
tf.keras.layers.Dense(units=n_actions, activation=None) # TODO
# [TODO Dense layer to output action probabilities]
])
return model
cartpole_model = create_cartpole_model()
###Output
_____no_output_____
###Markdown
Now that we have defined the core network architecture, we will define an *action function* that executes a forward pass through the network, given a set of observations, and samples from the output. This sampling from the output probabilities will be used to select the next action for the agent. We will also add support so that the `choose_action` function can handle either a single observation or a batch of observations.**Critically, this action function is totally general -- we will use this function for both Cartpole and Pong, and it is applicable to other RL tasks, as well!**
###Code
### Define the agent's action function ###
# Function that takes observations as input, executes a forward pass through model,
# and outputs a sampled action.
# Arguments:
# model: the network that defines our agent
# observation: observation(s) which is/are fed as input to the model
# single: flag as to whether we are handling a single observation or batch of
# observations, provided as an np.array
# Returns:
# action: choice of agent action
def choose_action(model, observation, single=True):
# add batch dimension to the observation if only a single example was provided
observation = np.expand_dims(observation, axis=0) if single else observation
'''TODO: feed the observations through the model to predict the log probabilities of each possible action.'''
logits = model.predict(observation) # TODO
# logits = model.predict('''TODO''')
'''TODO: Choose an action from the categorical distribution defined by the log
probabilities of each possible action.'''
action = tf.random.categorical(logits, num_samples=1)
# action = ['''TODO''']
action = action.numpy().flatten()
return action[0] if single else action
###Output
_____no_output_____
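To build intuition for what `tf.random.categorical` does with the model's logits, here is a NumPy sketch (an illustration of the sampling semantics, not the lab's implementation; `sample_action` is a made-up helper): convert the logits to probabilities with a softmax, then sample an action index from that distribution.

```python
import numpy as np

def sample_action(logits, rng):
    # softmax converts unnormalized log-probabilities into a distribution
    exp = np.exp(logits - np.max(logits))  # subtract max for numerical stability
    probs = exp / exp.sum()
    # sample an action index according to those probabilities
    return rng.choice(len(logits), p=probs)

rng = np.random.default_rng(0)
logits = np.array([2.0, 0.0])  # the network strongly favors action 0
samples = [sample_action(logits, rng) for _ in range(1000)]
# the mean is the fraction of action-1 samples; expected to be roughly
# exp(0) / (exp(2) + exp(0)), i.e. about 0.12
print(np.mean(samples))
```

Sampling (rather than always taking the argmax action) is what lets the agent explore: low-probability actions are still tried occasionally, which is essential early in training.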
###Markdown
3.2 Define the agent's memoryNow that we have instantiated the environment and defined the agent network architecture and action function, we are ready to move on to the next step in our RL workflow:1. **Initialize our environment and our agent**: here we will describe the different observations and actions the agent can make in the environment.2. **Define our agent's memory**: this will enable the agent to remember its past actions, observations, and rewards.3. **Define the learning algorithm**: this will be used to reinforce the agent's good behaviors and discourage bad behaviors.In reinforcement learning, training occurs alongside the agent's acting in the environment; an *episode* refers to a sequence of actions that ends in some terminal state, such as the pole falling down or the cart crashing. The agent will need to remember all of its observations and actions, such that once an episode ends, it can learn to "reinforce" the good actions and punish the undesirable actions via training. Our first step is to define a simple `Memory` buffer that contains the agent's observations, actions, and received rewards from a given episode. We will also add support to combine a list of `Memory` objects into a single `Memory`. This will be very useful for batching, which will help you accelerate training later on in the lab.**Once again, note the modularity of this memory buffer -- it can and will be applied to other RL tasks as well!**
###Code
### Agent Memory ###
class Memory:
def __init__(self):
self.clear()
# Resets/restarts the memory buffer
def clear(self):
self.observations = []
self.actions = []
self.rewards = []
# Add observations, actions, rewards to memory
def add_to_memory(self, new_observation, new_action, new_reward):
self.observations.append(new_observation)
'''TODO: update the list of actions with new action'''
self.actions.append(new_action) # TODO
# ['''TODO''']
'''TODO: update the list of rewards with new reward'''
self.rewards.append(new_reward) # TODO
# ['''TODO''']
# Helper function to combine a list of Memory objects into a single Memory.
# This will be very useful for batching.
def aggregate_memories(memories):
batch_memory = Memory()
for memory in memories:
for step in zip(memory.observations, memory.actions, memory.rewards):
batch_memory.add_to_memory(*step)
return batch_memory
# Instantiate a single Memory buffer
memory = Memory()
###Output
_____no_output_____
###Markdown
3.3 Reward functionWe're almost ready to begin the learning algorithm for our agent! The next step is to compute the rewards of our agent as it acts in the environment. Since we (and the agent) are uncertain about if and when the game or task will end (i.e., when the pole will fall), it is useful to emphasize getting rewards **now** rather than later in the future -- this is the idea of discounting, a concept similar to discounting money in the case of interest.To compute the expected cumulative reward, known as the **return**, at a given timestep in a learning episode, we sum the discounted rewards expected from that time step $t$ onwards, projecting into the future. We define the return (cumulative reward) at a time step $t$, $R_{t}$, as:>$R_{t}=\sum_{k=0}^\infty\gamma^kr_{t+k}$where $0 < \gamma < 1$ is the discount factor, $r_{t}$ is the reward at time step $t$, and the index $k$ increments the projection into the future within a single learning episode. Intuitively, you can think of this function as depreciating any rewards received at later time steps, which will force the agent to prioritize getting rewards now. Since we can't extend episodes to infinity, in practice the computation will be limited to the number of timesteps in an episode -- after that the reward is assumed to be zero.Take note of the form of this sum -- we'll have to be clever about how we implement this function. Specifically, we'll need to initialize an array of zeros, with length of the number of time steps, and fill it with the real discounted reward values as we loop through the rewards from the episode, which will have been saved in the agent's memory. 
What we ultimately care about is which actions are better relative to other actions taken in that episode -- so, we'll normalize our computed rewards, using the mean and standard deviation of the rewards across the learning episode.
###Code
### Reward function ###
# Helper function that normalizes an np.array x
def normalize(x):
x -= np.mean(x)
x /= np.std(x)
return x.astype(np.float32)
# Compute normalized, discounted, cumulative rewards (i.e., return)
# Arguments:
# rewards: reward at timesteps in episode
# gamma: discounting factor
# Returns:
# normalized discounted reward
def discount_rewards(rewards, gamma=0.95):
discounted_rewards = np.zeros_like(rewards)
R = 0
for t in reversed(range(0, len(rewards))):
# update the total discounted reward
R = R * gamma + rewards[t]
discounted_rewards[t] = R
return normalize(discounted_rewards)
###Output
_____no_output_____
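As a quick numerical check of the backwards loop above (before normalization), consider three timesteps with reward 1 each and $\gamma = 0.5$. Working backwards: $R_3 = 1$, $R_2 = 1 + 0.5 \cdot 1 = 1.5$, $R_1 = 1 + 0.5 \cdot 1.5 = 1.75$. A standalone sketch (mirroring the accumulation in `discount_rewards`, with normalization omitted for clarity):

```python
import numpy as np

def discount(rewards, gamma):
    # walk backwards, accumulating the discounted sum exactly as in discount_rewards
    discounted = np.zeros(len(rewards), dtype=np.float32)
    R = 0.0
    for t in reversed(range(len(rewards))):
        R = R * gamma + rewards[t]
        discounted[t] = R
    return discounted

print(discount([1.0, 1.0, 1.0], gamma=0.5))  # [1.75, 1.5, 1.0]
```

Earlier timesteps accumulate more (discounted) future reward, which is exactly the "rewards now are worth more" behavior the discount factor encodes.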
###Markdown
3.4 Learning algorithmNow we can start to define the learning algorithm which will be used to reinforce good behaviors of the agent and discourage bad behaviors. In this lab, we will focus on *policy gradient* methods, which aim to **maximize** the likelihood of actions that result in large rewards. Equivalently, this means that we want to **minimize** the negative likelihood of these same actions. We achieve this by simply **scaling** the probabilities by their associated rewards -- effectively amplifying the likelihood of actions that result in large rewards.Since the log function is monotonically increasing, minimizing **negative likelihood** is equivalent to minimizing **negative log-likelihood**. Recall that we can easily compute the negative log-likelihood of a discrete action by evaluating its [softmax cross entropy](https://www.tensorflow.org/api_docs/python/tf/nn/sparse_softmax_cross_entropy_with_logits). Like in supervised learning, we can use stochastic gradient descent methods to achieve the desired minimization. Let's begin by defining the loss function.
###Code
### Loss function ###
# Arguments:
# logits: network's predictions for actions to take
# actions: the actions the agent took in an episode
# rewards: the rewards the agent received in an episode
# Returns:
# loss
def compute_loss(logits, actions, rewards):
'''TODO: complete the function call to compute the negative log probabilities'''
neg_logprob = tf.nn.sparse_softmax_cross_entropy_with_logits(
logits=logits, labels=actions) # TODO
# neg_logprob = tf.nn.sparse_softmax_cross_entropy_with_logits(
# logits='''TODO''', labels='''TODO''')
'''TODO: scale the negative log probability by the rewards'''
loss = tf.reduce_mean( neg_logprob * rewards ) # TODO
# loss = tf.reduce_mean('''TODO''')
return loss
###Output
_____no_output_____
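The identity behind this loss can be checked by hand: for a discrete action $a$ with logits $z$, the sparse softmax cross entropy equals $-\log \text{softmax}(z)[a]$. A NumPy sketch (illustrative only; the helper name is made up, and the real lab code uses `tf.nn.sparse_softmax_cross_entropy_with_logits`):

```python
import numpy as np

def sparse_softmax_cross_entropy(logits, action):
    # -log( softmax(logits)[action] ), computed in a numerically stable way
    logits = logits - np.max(logits)
    log_probs = logits - np.log(np.sum(np.exp(logits)))
    return -log_probs[action]

logits = np.array([1.0, -1.0])  # the network favors action 0
# taking the favored action yields a smaller loss than the unfavored one
loss_fav = sparse_softmax_cross_entropy(logits, action=0)
loss_unfav = sparse_softmax_cross_entropy(logits, action=1)
print(loss_fav, loss_unfav)
```

Scaling these per-action losses by the (discounted) rewards, as `compute_loss` does, makes gradient descent increase the probability of actions that preceded large rewards.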
###Markdown
Now let's use the loss function to define a training step of our learning algorithm:
###Code
### Training step (forward and backpropagation) ###
def train_step(model, optimizer, observations, actions, discounted_rewards):
with tf.GradientTape() as tape:
# Forward propagate through the agent network
logits = model(observations)
'''TODO: call the compute_loss function to compute the loss'''
loss = compute_loss(logits, actions, discounted_rewards) # TODO
# loss = compute_loss('''TODO''', '''TODO''', '''TODO''')
'''TODO: run backpropagation to minimize the loss using the tape.gradient method'''
grads = tape.gradient(loss, model.trainable_variables) # TODO
# grads = tape.gradient('''TODO''', model.trainable_variables)
optimizer.apply_gradients(zip(grads, model.trainable_variables))
###Output
_____no_output_____
###Markdown
3.5 Run cartpole!Having had no prior knowledge of the environment, the agent will begin to learn how to balance the pole on the cart based only on the feedback received from the environment! Having defined how our agent can move, how it takes in new observations, and how it updates its state, we'll see how it gradually learns a policy of actions to optimize balancing the pole as long as possible. To do this, we'll track how the rewards evolve as a function of training -- how should the rewards change as training progresses?
###Code
### Cartpole training! ###
# Learning rate and optimizer
learning_rate = 1e-3
optimizer = tf.keras.optimizers.Adam(learning_rate)
# instantiate cartpole agent
cartpole_model = create_cartpole_model()
# to track our progress
smoothed_reward = mdl.util.LossHistory(smoothing_factor=0.9)
plotter = mdl.util.PeriodicPlotter(sec=2, xlabel='Iterations', ylabel='Rewards')
if hasattr(tqdm, '_instances'): tqdm._instances.clear() # clear if it exists
for i_episode in range(500):
plotter.plot(smoothed_reward.get())
# Restart the environment
observation = env.reset()
memory.clear()
while True:
# using our observation, choose an action and take it in the environment
action = choose_action(cartpole_model, observation)
next_observation, reward, done, info = env.step(action)
# add to memory
memory.add_to_memory(observation, action, reward)
# is the episode over? did you crash or do so well that you're done?
if done:
# determine total reward and keep a record of this
total_reward = sum(memory.rewards)
smoothed_reward.append(total_reward)
# initiate training - remember we don't know anything about how the
# agent is doing until it has crashed!
train_step(cartpole_model, optimizer,
observations=np.vstack(memory.observations),
actions=np.array(memory.actions),
discounted_rewards = discount_rewards(memory.rewards))
# reset the memory
memory.clear()
break
# update our observations
observation = next_observation
###Output
_____no_output_____
###Markdown
To get a sense of how our agent did, we can save a video of the trained model working on balancing the pole. Realize that this is a brand new environment that the agent has not seen before!Let's display the saved video to watch how our agent did!
###Code
saved_cartpole = mdl.lab3.save_video_of_model(cartpole_model, "CartPole-v0")
mdl.lab3.play_video(saved_cartpole)
###Output
_____no_output_____
###Markdown
How does the agent perform? Could you train it for shorter amounts of time and still perform well? Do you think that training longer would help even more? Part 2: PongIn Cartpole, we dealt with an environment that was static -- in other words, it didn't change over time. What happens if our environment is dynamic and unpredictable? Well that's exactly the case in [Pong](https://en.wikipedia.org/wiki/Pong), since part of the environment is the opposing player. We don't know how our opponent will act or react to our actions, so the complexity of our problem increases. It also becomes much more interesting, since we can compete to beat our opponent. RL provides a powerful framework for training AI systems with the ability to handle and interact with dynamic, unpredictable environments. In this part of the lab, we'll use the tools and workflow we explored in Part 1 to build an RL agent capable of playing the game of Pong. 3.6 Define and inspect the Pong environmentAs with Cartpole, we'll instantiate the Pong environment in the OpenAI gym, using a seed of 1.
###Code
def create_pong_env():
return gym.make("Pong-v0", frameskip=5)
env = create_pong_env()
env.seed(1); # for reproducibility
###Output
_____no_output_____
###Markdown
Let's next consider the observation space for the Pong environment. Instead of four physical descriptors of the cart-pole setup, in the case of Pong our observations are the individual video frames (i.e., images) that depict the state of the board. Thus, the observations are 210x160 RGB images (arrays of shape (210,160,3)).We can again confirm the size of the observation space by query:
###Code
print("Environment has observation space =", env.observation_space)
###Output
_____no_output_____
###Markdown
In Pong, at every time step, the agent (which controls the paddle) has six actions to choose from: no-op (no operation), move right, move left, fire, fire right, and fire left. Let's confirm the size of the action space by querying the environment:
###Code
n_actions = env.action_space.n
print("Number of possible actions that the agent can choose from =", n_actions)
###Output
_____no_output_____
###Markdown
3.7 Define the Pong agentAs before, we'll use a neural network to define our agent. What network architecture do you think would be especially well suited to this game? Since our observations are now in the form of images, we'll add convolutional layers to the network to increase the learning capacity of our network. Note that you will be tasked with completing a template CNN architecture for the Pong agent -- but you should certainly experiment beyond this template to try to optimize performance!
###Code
### Define the Pong agent ###
# Functionally define layers for convenience
# All convolutional layers will have ReLu activation
Conv2D = functools.partial(tf.keras.layers.Conv2D, padding='same', activation='relu')
Flatten = tf.keras.layers.Flatten
Dense = tf.keras.layers.Dense
# Defines a CNN for the Pong agent
def create_pong_model():
model = tf.keras.models.Sequential([
# Convolutional layers
# First, 32 5x5 filters and 2x2 stride
Conv2D(filters=32, kernel_size=5, strides=2),
# TODO: define convolutional layers with 48 5x5 filters and 2x2 stride
Conv2D(filters=48, kernel_size=5, strides=2), # TODO
# Conv2D('''TODO'''),
# TODO: define two convolutional layers with 64 3x3 filters and 2x2 stride
Conv2D(filters=64, kernel_size=3, strides=2), # TODO
Conv2D(filters=64, kernel_size=3, strides=2),
# Conv2D('''TODO'''),
Flatten(),
# Fully connected layer and output
Dense(units=128, activation='relu'),
# TODO: define the output dimension of the last Dense layer.
# Pay attention to the space the agent needs to act in
Dense(units=n_actions, activation=None) # TODO
# Dense('''TODO''')
])
return model
pong_model = create_pong_model()
###Output
_____no_output_____
###Markdown
Since we've already defined the action function, `choose_action(model, observation)`, we don't need to define it again. Instead, we'll be able to reuse it later on by passing in our new model we've just created, `pong_model`. This is awesome because our action function provides a modular and generalizable method for all sorts of RL agents! 3.8 Pong-specific functionsIn Part 1 (Cartpole), we implemented some key functions and classes to build and train our RL agent -- `choose_action(model, observation)` and the `Memory` class, for example. However, in getting ready to apply these to a new game like Pong, we might need to make some slight modifications. Namely, we need to think about what happens when a game ends. In Pong, we know a game has ended if the reward is +1 (we won!) or -1 (we lost, unfortunately). Otherwise, we expect the reward at a timestep to be zero -- the players (or agents) are just playing each other. So, we will need to reset the reward sum to zero when a game ends. This will result in a modified reward function.
###Code
### Pong reward function ###
# Compute normalized, discounted rewards for Pong (i.e., return)
# Arguments:
# rewards: reward at timesteps in episode
#   gamma: discounting factor. Note the increase to 0.99 -- rewards will depreciate more slowly.
# Returns:
# normalized discounted reward
def discount_rewards(rewards, gamma=0.99):
discounted_rewards = np.zeros_like(rewards)
R = 0
for t in reversed(range(0, len(rewards))):
# NEW: Reset the sum if the reward is not 0 (the game has ended!)
if rewards[t] != 0:
R = 0
# update the total discounted reward as before
R = R * gamma + rewards[t]
discounted_rewards[t] = R
return normalize(discounted_rewards)
###Output
_____no_output_____
###Markdown
Additionally, we have to consider the nature of the observations in the Pong environment, and how they will be fed into our network. Our observations in this case are images. Before we input an image into our network, we'll do a bit of pre-processing to crop and scale, clean up the background colors to a single color, and set the important game elements to a single color. Let's use this function to visualize what a single observation might look like before and after pre-processing.
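For a concrete sense of what such pre-processing might involve, here is one common recipe (following the well-known "Pong from Pixels" approach): crop away the scoreboard, downsample, and binarize the frame. This is an illustrative sketch -- the actual `mdl.lab3.preprocess_pong` may differ in its details, and the background color codes below are assumptions.

```python
import numpy as np

def preprocess_pong_sketch(frame):
    """Illustrative Pong pre-processing: crop, downsample 2x, binarize.
    The background color codes (144, 109) are assumed Atari Pong values."""
    img = frame[35:195:2, ::2, 0].astype(np.float32)  # crop field, downsample -> (80, 80)
    img[img == 144] = 0    # erase background color 1
    img[img == 109] = 0    # erase background color 2
    img[img != 0] = 1      # paddles and ball -> 1
    return img[..., None]  # add a channel axis -> (80, 80, 1)

# A dummy 210x160 RGB frame: background everywhere, one bright "ball" pixel
dummy = np.full((210, 160, 3), 144, dtype=np.uint8)
dummy[51, 50] = 236
print(preprocess_pong_sketch(dummy).shape)  # (80, 80, 1)
```

Collapsing each frame to a small binary image throws away color and texture the agent doesn't need, which drastically shrinks the input the CNN has to process.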
###Code
observation = env.reset()
for i in range(30):
action = np.random.choice(n_actions)
observation, _,_,_ = env.step(action)
observation_pp = mdl.lab3.preprocess_pong(observation)
f = plt.figure(figsize=(10,3))
ax = f.add_subplot(121)
ax2 = f.add_subplot(122)
ax.imshow(observation); ax.grid(False);
ax2.imshow(np.squeeze(observation_pp)); ax2.grid(False); plt.title('Preprocessed Observation');
###Output
_____no_output_____
###Markdown
Let's also consider the fact that, unlike CartPole, the Pong environment has an additional element of uncertainty -- regardless of what action the agent takes, we don't know how the opponent will play. That is, the environment is changing over time, based on *both* the actions we take and the actions of the opponent, which result in motion of the ball and motion of the paddles.
Therefore, to capture the dynamics, we also consider how the environment changes by looking at the difference between a previous observation (image frame) and the current observation (image frame). We've implemented a helper function, `pong_change`, that pre-processes two frames, calculates the change between the two, and then re-normalizes the values. Let's inspect this to visualize how the environment can change:
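As a rough sketch of the frame-difference idea (the provided `pong_change` may differ in its pre-processing and re-normalization details -- the grayscale step here is just a stand-in), the change between two frames can be computed as a normalized difference of pre-processed frames:

```python
import numpy as np

def frame_change_sketch(prev_frame, curr_frame):
    """Difference of (stand-in) pre-processed frames, rescaled to unit norm.
    A static scene maps to all zeros; any motion yields a unit-norm image."""
    prev = prev_frame.mean(axis=-1) / 255.0  # stand-in pre-processing: grayscale
    curr = curr_frame.mean(axis=-1) / 255.0
    diff = curr - prev
    norm = np.linalg.norm(diff)
    return diff / norm if norm > 0 else diff

a = np.zeros((210, 160, 3))
b = a.copy()
b[100, 80] = 255  # one pixel "moved"
print(frame_change_sketch(a, a).max())  # 0.0 -- identical frames cancel out
```

Everything static (background, walls) cancels out in the difference, so the network's input highlights exactly the moving elements: the ball and the paddles.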
###Code
next_observation, _,_,_ = env.step(np.random.choice(n_actions))
diff = mdl.lab3.pong_change(observation, next_observation)
f, ax = plt.subplots(1, 3, figsize=(15,15))
for a in ax:
a.grid(False)
a.axis("off")
ax[0].imshow(observation); ax[0].set_title('Previous Frame');
ax[1].imshow(next_observation); ax[1].set_title('Current Frame');
ax[2].imshow(np.squeeze(diff)); ax[2].set_title('Difference (Model Input)');
###Output
_____no_output_____
###Markdown
What do you notice? How and why might these pre-processing changes be important for training our RL algorithm? How and why might consideration of the difference between frames be important for training and performance? Rollout function
We're now set up to define our key action algorithm for the game of Pong, which will ultimately be used to train our Pong agent. This function can be thought of as a "rollout", where the agent will 1) make an observation of the environment, 2) select an action according to its policy, based on its state in the environment, 3) execute that action in the environment, resulting in some reward and a change to the environment, and 4) finally add memory of that action-reward to its `Memory` buffer. We will define this algorithm in the `collect_rollout` function below, and use it soon within a training block.
Earlier you visually inspected the raw environment frames, the pre-processed frames, and the difference between previous and current frames. As you may have gathered, in a dynamic game like Pong, it can actually be helpful to consider the difference between two consecutive observations. This gives us information about the movement between frames -- how the game is changing. We will do this using the `pong_change` function we explored above (which also pre-processes frames for us).
We will use differences between frames as the input on which actions will be selected. These observation changes will be forward propagated through our Pong agent, the CNN network model, which will then predict the next action to take based on this observation. The raw reward will be computed. The observation, action, and reward will be recorded into memory. This will loop until a particular game ends -- the rollout is completed.
For now, we will define `collect_rollout` such that a batch of observations (i.e., from a batch of agent-environment worlds) can be processed serially (i.e., one at a time, in sequence). We will later utilize a parallelized version of this function that will parallelize batch processing to help speed up training! Let's get to it.
###Code
### Rollout function ###
# Key steps for agent's operation in the environment, until completion of a rollout.
# An observation is drawn; the agent (controlled by model) selects an action;
# the agent executes that action in the environment and collects rewards;
# information is added to memory.
# This is repeated until the completion of the rollout -- the Pong game ends.
# Processes multiple batches serially.
#
# Arguments:
# batch_size: number of batches, to be processed serially
# env: environment
# model: Pong agent model
# choose_action: choose_action function
# Returns:
# memories: array of Memory buffers, of length batch_size, corresponding to the
# episode executions from the rollout
def collect_rollout(batch_size, env, model, choose_action):
# Holder array for the Memory buffers
memories = []
# Process batches serially by iterating through them
for b in range(batch_size):
# Instantiate Memory buffer, restart the environment
memory = Memory()
next_observation = env.reset()
previous_frame = next_observation
done = False # tracks whether the episode (game) is done or not
while not done:
current_frame = next_observation
'''TODO: determine the observation change.
Hint: this is the difference between the past two frames'''
frame_diff = mdl.lab3.pong_change(previous_frame, current_frame) # TODO
# frame_diff = # TODO
'''TODO: choose an action for the pong model, using the frame difference, and evaluate'''
action = choose_action(model, frame_diff) # TODO
# action = # TODO
# Take the chosen action
next_observation, reward, done, info = env.step(action)
'''TODO: save the observed frame difference, the action that was taken, and the resulting reward!'''
memory.add_to_memory(frame_diff, action, reward) # TODO
previous_frame = current_frame
# Add the memory from this batch to the array of all Memory buffers
memories.append(memory)
return memories
###Output
_____no_output_____
###Markdown
To get a sense of what is encapsulated by `collect_rollout`, we will instantiate an *untrained* Pong model, run a single rollout using this model, save the memory, and play back the observations the model sees. Note that these will be frame *differences*.
###Code
### Rollout with untrained Pong model ###
# Model
test_model = create_pong_model()
# Rollout with single batch
single_batch_size = 1
memories = collect_rollout(single_batch_size, env, test_model, choose_action)
rollout_video = mdl.lab3.save_video_of_memory(memories[0], "Pong-Random-Agent.mp4")
# Play back video of memories
mdl.lab3.play_video(rollout_video)
###Output
_____no_output_____
###Markdown
3.9 Training PongWe're now all set up to start training our RL algorithm and agent for the game of Pong! We've already defined the following:1. Loss function, `compute_loss`, and backpropagation step, `train_step`. Our loss function employs policy gradient learning. `train_step` executes a single forward pass and backpropagation gradient update.2. RL agent algorithm: `collect_rollout`. Serially processes batches of episodes, executing actions in the environment, collecting rewards, and saving these to `Memory`.We will use these functions to train the Pong agent.In the training block, episodes will be executed by agents in the environment via the RL algorithm defined in the `collect_rollout` function. Since RL agents start off with literally zero knowledge of their environment, it can often take a long time to train them and achieve stable behavior. To alleviate this, we have implemented a parallelized version of the RL algorithm, `parallelized_collect_rollout`, which you can use to accelerate training across multiple parallel batches.For training, information in the `Memory` buffer from all these batches will be aggregated (after all episodes, i.e., games, end). Discounted rewards will be computed, and this information will be used to execute a training step. Memory will be cleared, and we will do it all over again!Let's run the code block to train our Pong agent. Note that, even with parallelization, completing training and getting stable behavior will take quite a bit of time (estimated at least a couple of hours). We will again visualize the evolution of the total reward as a function of training to get a sense of how the agent is learning.
###Code
### Hyperparameters and setup for training ###
# Rerun this cell if you want to re-initialize the training process
# (i.e., create new model, reset loss, etc)
# Hyperparameters
learning_rate = 1e-3
MAX_ITERS = 1000 # increase the maximum to train longer
batch_size = 5 # number of batches to run
# Model, optimizer
pong_model = create_pong_model()
optimizer = tf.keras.optimizers.Adam(learning_rate)
iteration = 0 # counter for training steps
# Plotting
smoothed_reward = mdl.util.LossHistory(smoothing_factor=0.9)
smoothed_reward.append(0) # start the reward at zero for baseline comparison
plotter = mdl.util.PeriodicPlotter(sec=15, xlabel='Iterations', ylabel='Win Percentage (%)')
# Batches and environment
# To parallelize batches, we need to make multiple copies of the environment.
envs = [create_pong_env() for _ in range(batch_size)] # For parallelization
### Training Pong ###
# You can run this cell and stop it anytime in the middle of training to save
# a progress video (see next codeblock). To continue training, simply run this
# cell again, your model will pick up right where it left off. To reset training,
# you need to run the cell above.
games_to_win_episode = 21 # this is set by OpenAI gym and cannot be changed.
# Main training loop
while iteration < MAX_ITERS:
plotter.plot(smoothed_reward.get())
tic = time.time()
# RL agent algorithm. By default, uses serial batch processing.
# memories = collect_rollout(batch_size, env, pong_model, choose_action)
# Parallelized version. Uncomment line below (and comment out line above) to parallelize
memories = mdl.lab3.parallelized_collect_rollout(batch_size, envs, pong_model, choose_action)
print(time.time()-tic)
# Aggregate memories from multiple batches
batch_memory = aggregate_memories(memories)
# Track performance based on win percentage (calculated from rewards)
total_wins = sum(np.array(batch_memory.rewards) == 1)
total_games = sum(np.abs(np.array(batch_memory.rewards)))
win_rate = total_wins / total_games
smoothed_reward.append(100 * win_rate)
# Training!
train_step(
pong_model,
optimizer,
observations = np.stack(batch_memory.observations, 0),
actions = np.array(batch_memory.actions),
discounted_rewards = discount_rewards(batch_memory.rewards)
)
# Save a video of progress -- this can be played back later
if iteration % 100 == 0:
mdl.lab3.save_video_of_model(pong_model, "Pong-v0",
suffix="_"+str(iteration))
iteration += 1 # Mark next episode
###Output
_____no_output_____
###Markdown
Finally we can put our trained agent to the test! It will play in a newly instantiated Pong environment against the "computer", a base AI system for Pong. Your agent plays as the green paddle. Let's watch the match instant replay!
###Code
latest_pong = mdl.lab3.save_video_of_model(
pong_model, "Pong-v0", suffix="_latest")
mdl.lab3.play_video(latest_pong, width=400)
###Output
_____no_output_____
###Markdown
Copyright Information
###Code
# Copyright 2021 MIT 6.S191 Introduction to Deep Learning. All Rights Reserved.
#
# Licensed under the MIT License. You may not use this file except in compliance
# with the License. Use and/or modification of this code outside of 6.S191 must
# reference:
#
# © MIT 6.S191: Introduction to Deep Learning
# http://introtodeeplearning.com
#
###Output
_____no_output_____
###Markdown
Laboratory 3: Reinforcement LearningReinforcement learning (RL) is a subset of machine learning which poses learning problems as interactions between agents and environments. It often assumes agents have no prior knowledge of the world, so they must learn to navigate environments by optimizing a reward function. Within an environment, an agent can take certain actions and receive feedback, in the form of positive or negative rewards, with respect to their decisions. As such, an agent's feedback loop is somewhat akin to the idea of "trial and error", or the manner in which a child might learn to distinguish between "good" and "bad" actions.In practical terms, our RL agent will interact with the environment by taking an action at each timestep, receiving a corresponding reward, and updating its state according to what it has "learned". While the ultimate goal of reinforcement learning is to teach agents to act in the real, physical world, games provide a convenient proving ground for developing RL algorithms and agents. Games have some properties that make them particularly well suited for RL: 1. In many cases, games have perfectly describable environments. For example, all rules of chess can be formally written and programmed into a chess game simulator;2. Games are massively parallelizable. Since they do not require running in the real world, simultaneous environments can be run on large data clusters; 3. Simpler scenarios in games enable fast prototyping. This speeds up the development of algorithms that could eventually run in the real world; and4. ... Games are fun! In previous labs, we have explored both supervised (with LSTMs, CNNs) and unsupervised / semi-supervised (with VAEs) learning tasks. Reinforcement learning is fundamentally different, in that we are training a deep learning algorithm to govern the actions of an RL agent as it tries, within its environment, to find the optimal way to achieve a goal. 
The goal of training an RL agent is to determine the best next step to take to earn the greatest final payoff or return. In this lab, we focus on building a reinforcement learning algorithm to master two different environments with varying complexity. 1. **Cartpole**: Balance a pole, protruding from a cart, in an upright position by only moving the base left or right. Environment with a low-dimensional observation space.2. [**Pong**](https://en.wikipedia.org/wiki/Pong): Beat your competitors (whether other AI or humans!) at the game of Pong. Environment with a high-dimensional observation space -- learning directly from raw pixels.Let's get started! First we'll import TensorFlow, the course package, and some dependencies.
###Code
# Import Tensorflow 2.0
%tensorflow_version 2.x
import tensorflow as tf
import numpy as np
import base64, io, time, gym
import IPython, functools
import matplotlib.pyplot as plt
import time
from tqdm import tqdm
# Download and import the MIT 6.S191 package
!pip install mitdeeplearning
import mitdeeplearning as mdl
###Output
_____no_output_____
###Markdown
Before we dive in, let's take a step back and outline our approach, which is generally applicable to reinforcement learning problems:1. **Initialize our environment and our agent**: here we will describe the different observations and actions the agent can make in the environment.2. **Define our agent's memory**: this will enable the agent to remember its past actions, observations, and rewards.3. **Define a reward function**: describes the reward associated with an action or sequence of actions.4. **Define the learning algorithm**: this will be used to reinforce the agent's good behaviors and discourage bad behaviors. Part 1: Cartpole 3.1 Define the Cartpole environment and agent Environment In order to model the environment for both the Cartpole and Pong tasks, we'll be using a toolkit developed by OpenAI called [OpenAI Gym](https://gym.openai.com/). It provides several pre-defined environments for training and testing reinforcement learning agents, including those for classic physics control tasks, Atari video games, and robotic simulations. To access the Cartpole environment, we can use `env = gym.make("CartPole-v0")`, which we gained access to when we imported the `gym` package. We can instantiate different [environments](https://gym.openai.com/envs/classic_control) by passing the environment name to the `make` function.One issue we might experience when developing RL algorithms is that many aspects of the learning process are inherently random: initializing game states, changes in the environment, and the agent's actions. As such, it can be helpful to set an initial "seed" for the environment to ensure some level of reproducibility. Much like you might use `numpy.random.seed`, we can call the comparable function in gym, `seed`, with our defined environment to ensure the environment's random variables are initialized the same each time.
###Code
### Instantiate the Cartpole environment ###
env = gym.make("CartPole-v0")
env.seed(1)
###Output
_____no_output_____
###Markdown
In Cartpole, a pole is attached by an un-actuated joint to a cart, which moves along a frictionless track. The pole starts upright, and the goal is to prevent it from falling over. The system is controlled by applying a force of +1 or -1 to the cart. A reward of +1 is provided for every timestep that the pole remains upright. The episode ends when the pole is more than 15 degrees from vertical, or the cart moves more than 2.4 units from the center of the track. A visual summary of the cartpole environment is depicted below:Given this setup for the environment and the objective of the game, we can think about: 1) what observations help define the environment's state; 2) what actions the agent can take. First, let's consider the observation space. In this Cartpole environment our observations are:1. Cart position2. Cart velocity3. Pole angle4. Pole rotation rateWe can confirm the size of the space by querying the environment's observation space:
###Code
n_observations = env.observation_space
print("Environment has observation space =", n_observations)
###Output
_____no_output_____
###Markdown
Second, we consider the action space. At every time step, the agent can move either right or left. Again we can confirm the size of the action space by querying the environment:
###Code
n_actions = env.action_space.n
print("Number of possible actions that the agent can choose from =", n_actions)
###Output
_____no_output_____
###Markdown
Cartpole agentNow that we have instantiated the environment and understood the dimensionality of the observation and action spaces, we are ready to define our agent. In deep reinforcement learning, a deep neural network defines the agent. This network will take as input an observation of the environment and output the probability of taking each of the possible actions. Since Cartpole is defined by a low-dimensional observation space, a simple feed-forward neural network should work well for our agent. We will define this using the `Sequential` API.
###Code
### Define the Cartpole agent ###
# Defines a feed-forward neural network
def create_cartpole_model():
model = tf.keras.models.Sequential([
# First Dense layer
tf.keras.layers.Dense(units=32, activation='relu'),
# TODO: Define the last Dense layer, which will provide the network's output.
# Think about the space the agent needs to act in!
tf.keras.layers.Dense(units=n_actions, activation=None) # TODO
# [TODO Dense layer to output action probabilities]
])
return model
cartpole_model = create_cartpole_model()
###Output
_____no_output_____
###Markdown
Now that we have defined the core network architecture, we will define an *action function* that executes a forward pass through the network, given a set of observations, and samples from the output. This sampling from the output probabilities will be used to select the next action for the agent. We will also add support so that the `choose_action` function can handle either a single observation or a batch of observations.**Critically, this action function is totally general -- we will use this function for both Cartpole and Pong, and it is applicable to other RL tasks, as well!**
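The sampling step at the heart of the action function can be illustrated in plain numpy. This is a hedged analogue of what `tf.random.categorical` does with the model's logits; the logit values below are made up for illustration.

```python
import numpy as np

def sample_action(logits, rng):
    """Sample an action index from the categorical distribution the logits define."""
    logits = logits - logits.max()                 # subtract max for numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()  # softmax -> probabilities
    return rng.choice(len(probs), p=probs)

rng = np.random.default_rng(0)
logits = np.array([2.0, 0.0])  # strongly favors action 0
actions = [sample_action(logits, rng) for _ in range(1000)]
print(np.mean(np.array(actions) == 0))  # close to e^2 / (e^2 + 1), about 0.88
```

Because the action is *sampled* rather than taken greedily, the agent keeps occasionally trying actions its policy considers less likely -- essential for exploration early in training.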
###Code
### Define the agent's action function ###
# Function that takes observations as input, executes a forward pass through model,
# and outputs a sampled action.
# Arguments:
# model: the network that defines our agent
# observation: observation(s) which is/are fed as input to the model
# single: flag as to whether we are handling a single observation or batch of
# observations, provided as an np.array
# Returns:
# action: choice of agent action
def choose_action(model, observation, single=True):
# add batch dimension to the observation if only a single example was provided
observation = np.expand_dims(observation, axis=0) if single else observation
'''TODO: feed the observations through the model to predict the log probabilities of each possible action.'''
logits = model.predict(observation) # TODO
# logits = model.predict('''TODO''')
'''TODO: Choose an action from the categorical distribution defined by the log
probabilities of each possible action.'''
action = tf.random.categorical(logits, num_samples=1)
# action = ['''TODO''']
action = action.numpy().flatten()
return action[0] if single else action
###Output
_____no_output_____
###Markdown
3.2 Define the agent's memoryNow that we have instantiated the environment and defined the agent network architecture and action function, we are ready to move on to the next step in our RL workflow:1. **Initialize our environment and our agent**: here we will describe the different observations and actions the agent can make in the environment.2. **Define our agent's memory**: this will enable the agent to remember its past actions, observations, and rewards.3. **Define the learning algorithm**: this will be used to reinforce the agent's good behaviors and discourage bad behaviors.In reinforcement learning, training occurs alongside the agent's acting in the environment; an *episode* refers to a sequence of actions that ends in some terminal state, such as the pole falling down or the cart crashing. The agent will need to remember all of its observations and actions, such that once an episode ends, it can learn to "reinforce" the good actions and punish the undesirable actions via training. Our first step is to define a simple `Memory` buffer that contains the agent's observations, actions, and received rewards from a given episode. We will also add support to combine a list of `Memory` objects into a single `Memory`. This will be very useful for batching, which will help you accelerate training later on in the lab.**Once again, note the modularity of this memory buffer -- it can and will be applied to other RL tasks as well!**
###Code
### Agent Memory ###
class Memory:
def __init__(self):
self.clear()
# Resets/restarts the memory buffer
def clear(self):
self.observations = []
self.actions = []
self.rewards = []
# Add observations, actions, rewards to memory
def add_to_memory(self, new_observation, new_action, new_reward):
self.observations.append(new_observation)
'''TODO: update the list of actions with new action'''
self.actions.append(new_action) # TODO
# ['''TODO''']
'''TODO: update the list of rewards with new reward'''
self.rewards.append(new_reward) # TODO
# ['''TODO''']
# Helper function to combine a list of Memory objects into a single Memory.
# This will be very useful for batching.
def aggregate_memories(memories):
batch_memory = Memory()
for memory in memories:
for step in zip(memory.observations, memory.actions, memory.rewards):
batch_memory.add_to_memory(*step)
return batch_memory
# Instantiate a single Memory buffer
memory = Memory()
###Output
_____no_output_____
###Markdown
3.3 Reward functionWe're almost ready to begin the learning algorithm for our agent! The next step is to compute the rewards of our agent as it acts in the environment. Since we (and the agent) are uncertain about whether and when the game or task will end (i.e., when the pole will fall), it is useful to emphasize getting rewards **now** rather than later in the future -- this is the idea of discounting. Recall from lecture that, much like interest depreciates the value of future money, discounting gives more weight to rewards received now than to rewards received later.To compute the expected cumulative reward, known as the **return**, at a given timestep in a learning episode, we sum the discounted rewards expected from that time step $t$ onward, projecting into the future within the episode. We define the return (cumulative reward) at a time step $t$, $R_{t}$, as:>$R_{t}=\sum_{k=0}^\infty\gamma^kr_{t+k}$where $0 < \gamma < 1$ is the discount factor, $r_{t}$ is the reward at time step $t$, and the index $k$ increments the projection into the future within a single learning episode. Intuitively, you can think of this function as depreciating any rewards received at later time steps, which will force the agent to prioritize getting rewards now. Since we can't extend episodes to infinity, in practice the computation will be limited to the number of timesteps in an episode -- after that the reward is assumed to be zero.Take note of the form of this sum -- we'll have to be clever about how we implement this function. Specifically, we'll need to initialize an array of zeros, with length equal to the number of time steps, and fill it with the real discounted reward values as we loop through the rewards from the episode, which will have been saved in the agent's memory. 
What we ultimately care about is which actions are better relative to other actions taken in that episode -- so, we'll normalize our computed rewards, using the mean and standard deviation of the rewards across the learning episode.
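As a quick sanity check of the return definition (with made-up rewards, and leaving out the normalization step), the direct double sum $R_t=\sum_k\gamma^kr_{t+k}$ and the efficient single reverse-loop recursion $R \leftarrow \gamma R + r_t$ give identical values:

```python
import numpy as np

gamma = 0.95
rewards = np.array([1.0, 1.0, 0.0, 2.0])  # made-up episode rewards

# Direct evaluation of R_t = sum_k gamma^k * r_{t+k}
direct = np.array([sum(gamma**k * rewards[t + k] for k in range(len(rewards) - t))
                   for t in range(len(rewards))])

# Efficient single reverse pass: R <- gamma * R + r_t
loop = np.zeros_like(rewards)
R = 0.0
for t in reversed(range(len(rewards))):
    R = R * gamma + rewards[t]
    loop[t] = R

print(np.allclose(direct, loop))  # True
```

The reverse pass does the same work in one loop over the episode instead of a nested sum per timestep.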
###Code
### Reward function ###
# Helper function that normalizes an np.array x
def normalize(x):
x -= np.mean(x)
x /= np.std(x)
return x.astype(np.float32)
# Compute normalized, discounted, cumulative rewards (i.e., return)
# Arguments:
# rewards: reward at timesteps in episode
# gamma: discounting factor
# Returns:
# normalized discounted reward
def discount_rewards(rewards, gamma=0.95):
discounted_rewards = np.zeros_like(rewards)
R = 0
for t in reversed(range(0, len(rewards))):
# update the total discounted reward
R = R * gamma + rewards[t]
discounted_rewards[t] = R
return normalize(discounted_rewards)
###Output
_____no_output_____
###Markdown
3.4 Learning algorithmNow we can start to define the learning algorithm which will be used to reinforce good behaviors of the agent and discourage bad behaviors. In this lab, we will focus on *policy gradient* methods which aim to **maximize** the likelihood of actions that result in large rewards. Equivalently, this means that we want to **minimize** the negative likelihood of these same actions. We achieve this by simply **scaling** the probabilities by their associated rewards -- effectively amplifying the likelihood of actions that result in large rewards.Since the log function is monotonically increasing, this means that minimizing **negative likelihood** is equivalent to minimizing **negative log-likelihood**. Recall that we can easily compute the negative log-likelihood of a discrete action by evaluating its [softmax cross entropy](https://www.tensorflow.org/api_docs/python/tf/nn/sparse_softmax_cross_entropy_with_logits). Like in supervised learning, we can use stochastic gradient descent methods to achieve the desired minimization. Let's begin by defining the loss function.
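To make the scaling idea concrete, here is a numpy sketch of the loss. The cross-entropy term mirrors what `tf.nn.sparse_softmax_cross_entropy_with_logits` computes; the logits, actions, and rewards below are made up for illustration.

```python
import numpy as np

def pg_loss_sketch(logits, actions, rewards):
    """Mean of (negative log-likelihood of the chosen action) x (discounted reward)."""
    logits = logits - logits.max(axis=1, keepdims=True)  # for numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    neg_logprob = -log_probs[np.arange(len(actions)), actions]  # softmax cross entropy
    return np.mean(neg_logprob * rewards)

logits = np.array([[1.0, -1.0], [0.5, 0.5]])  # two timesteps, two actions
actions = np.array([0, 1])                    # the actions the agent took
rewards = np.array([1.0, -1.0])               # first action rewarded, second punished
print(pg_loss_sketch(logits, actions, rewards))
```

A positive reward keeps the chosen action's negative log-likelihood in the loss, so minimizing pushes that action's probability up; a negative reward flips the sign and pushes the action's probability down.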
###Code
### Loss function ###
# Arguments:
# logits: network's predictions for actions to take
# actions: the actions the agent took in an episode
# rewards: the rewards the agent received in an episode
# Returns:
# loss
def compute_loss(logits, actions, rewards):
'''TODO: complete the function call to compute the negative log probabilities'''
neg_logprob = tf.nn.sparse_softmax_cross_entropy_with_logits(
logits=logits, labels=actions) # TODO
# neg_logprob = tf.nn.sparse_softmax_cross_entropy_with_logits(
# logits='''TODO''', labels='''TODO''')
'''TODO: scale the negative log probability by the rewards'''
loss = tf.reduce_mean( neg_logprob * rewards ) # TODO
# loss = tf.reduce_mean('''TODO''')
return loss
###Output
_____no_output_____
###Markdown
Now let's use the loss function to define a training step of our learning algorithm:
###Code
### Training step (forward and backpropagation) ###
def train_step(model, optimizer, observations, actions, discounted_rewards):
with tf.GradientTape() as tape:
# Forward propagate through the agent network
logits = model(observations)
'''TODO: call the compute_loss function to compute the loss'''
loss = compute_loss(logits, actions, discounted_rewards) # TODO
# loss = compute_loss('''TODO''', '''TODO''', '''TODO''')
'''TODO: run backpropagation to minimize the loss using the tape.gradient method'''
grads = tape.gradient(loss, model.trainable_variables) # TODO
# grads = tape.gradient('''TODO''', model.trainable_variables)
optimizer.apply_gradients(zip(grads, model.trainable_variables))
###Output
_____no_output_____
###Markdown
3.5 Run cartpole!Having had no prior knowledge of the environment, the agent will begin to learn how to balance the pole on the cart based only on the feedback received from the environment! Having defined how our agent can move, how it takes in new observations, and how it updates its state, we'll see how it gradually learns a policy of actions to optimize balancing the pole as long as possible. To do this, we'll track how the rewards evolve as a function of training -- how should the rewards change as training progresses?
###Code
### Cartpole training! ###
# Learning rate and optimizer
learning_rate = 1e-3
optimizer = tf.keras.optimizers.Adam(learning_rate)
# instantiate cartpole agent
cartpole_model = create_cartpole_model()
# to track our progress
smoothed_reward = mdl.util.LossHistory(smoothing_factor=0.9)
plotter = mdl.util.PeriodicPlotter(sec=2, xlabel='Iterations', ylabel='Rewards')
if hasattr(tqdm, '_instances'): tqdm._instances.clear() # clear if it exists
for i_episode in range(500):
plotter.plot(smoothed_reward.get())
# Restart the environment
observation = env.reset()
memory.clear()
while True:
# using our observation, choose an action and take it in the environment
action = choose_action(cartpole_model, observation)
next_observation, reward, done, info = env.step(action)
# add to memory
memory.add_to_memory(observation, action, reward)
# is the episode over? did you crash or do so well that you're done?
if done:
# determine total reward and keep a record of this
total_reward = sum(memory.rewards)
smoothed_reward.append(total_reward)
# initiate training - remember we don't know anything about how the
# agent is doing until it has crashed!
train_step(cartpole_model, optimizer,
observations=np.vstack(memory.observations),
actions=np.array(memory.actions),
discounted_rewards = discount_rewards(memory.rewards))
# reset the memory
memory.clear()
break
    # update our observation
observation = next_observation
###Output
_____no_output_____
###Markdown
To get a sense of how our agent did, we can save a video of the trained model working on balancing the pole. Realize that this is a brand new environment that the agent has not seen before!Let's display the saved video to watch how our agent did!
###Code
saved_cartpole = mdl.lab3.save_video_of_model(cartpole_model, "CartPole-v0")
mdl.lab3.play_video(saved_cartpole)
###Output
_____no_output_____
###Markdown
How does the agent perform? Could you train it for shorter amounts of time and still perform well? Do you think that training longer would help even more? Part 2: PongIn Cartpole, we dealt with an environment that was static -- in other words, it didn't change over time. What happens if our environment is dynamic and unpredictable? Well that's exactly the case in [Pong](https://en.wikipedia.org/wiki/Pong), since part of the environment is the opposing player. We don't know how our opponent will act or react to our actions, so the complexity of our problem increases. It also becomes much more interesting, since we can compete to beat our opponent. RL provides a powerful framework for training AI systems with the ability to handle and interact with dynamic, unpredictable environments. In this part of the lab, we'll use the tools and workflow we explored in Part 1 to build an RL agent capable of playing the game of Pong. 3.6 Define and inspect the Pong environmentAs with Cartpole, we'll instantiate the Pong environment in the OpenAI gym, using a seed of 1.
###Code
def create_pong_env():
return gym.make("Pong-v0", frameskip=5)
env = create_pong_env()
env.seed(1); # for reproducibility
###Output
_____no_output_____
###Markdown
Let's next consider the observation space for the Pong environment. Instead of four physical descriptors of the cart-pole setup, in the case of Pong our observations are the individual video frames (i.e., images) that depict the state of the board. Thus, the observations are 210x160 RGB images (arrays of shape (210,160,3)).We can again confirm the size of the observation space by query:
###Code
print("Environment has observation space =", env.observation_space)
###Output
_____no_output_____
###Markdown
In Pong, at every time step, the agent (which controls the paddle) has six actions to choose from: no-op (no operation), move right, move left, fire, fire right, and fire left. Let's confirm the size of the action space by querying the environment:
###Code
n_actions = env.action_space.n
print("Number of possible actions that the agent can choose from =", n_actions)
###Output
_____no_output_____
###Markdown
3.7 Define the Pong agentAs before, we'll use a neural network to define our agent. What network architecture do you think would be especially well suited to this game? Since our observations are now in the form of images, we'll add convolutional layers to the network to increase the learning capacity of our network. Note that you will be tasked with completing a template CNN architecture for the Pong agent -- but you should certainly experiment beyond this template to try to optimize performance!
###Code
### Define the Pong agent ###
# Functionally define layers for convenience
# All convolutional layers will have ReLu activation
Conv2D = functools.partial(tf.keras.layers.Conv2D, padding='same', activation='relu')
Flatten = tf.keras.layers.Flatten
Dense = tf.keras.layers.Dense
# Defines a CNN for the Pong agent
def create_pong_model():
model = tf.keras.models.Sequential([
# Convolutional layers
# First, 32 5x5 filters and 2x2 stride
Conv2D(filters=32, kernel_size=5, strides=2),
# TODO: define convolutional layers with 48 5x5 filters and 2x2 stride
Conv2D(filters=48, kernel_size=5, strides=2), # TODO
# Conv2D('''TODO'''),
# TODO: define two convolutional layers with 64 3x3 filters and 2x2 stride
Conv2D(filters=64, kernel_size=3, strides=2), # TODO
Conv2D(filters=64, kernel_size=3, strides=2),
# Conv2D('''TODO'''),
Flatten(),
# Fully connected layer and output
Dense(units=128, activation='relu'),
# TODO: define the output dimension of the last Dense layer.
# Pay attention to the space the agent needs to act in
Dense(units=n_actions, activation=None) # TODO
# Dense('''TODO''')
])
return model
pong_model = create_pong_model()
###Output
_____no_output_____
###Markdown
Since we've already defined the action function, `choose_action(model, observation)`, we don't need to define it again. Instead, we'll be able to reuse it later on by passing in the new model we've just created, `pong_model`. This is awesome because our action function provides a modular and generalizable method for all sorts of RL agents! 3.8 Pong-specific functionsIn Part 1 (Cartpole), we implemented some key functions and classes to build and train our RL agent -- `choose_action(model, observation)` and the `Memory` class, for example. However, in getting ready to apply these to a new game like Pong, we might need to make some slight modifications. Namely, we need to think about what happens when a game ends. In Pong, we know a game has ended if the reward is +1 (we won!) or -1 (we lost, unfortunately). Otherwise, we expect the reward at a timestep to be zero -- the players (or agents) are just playing each other. So, we will need to reset the running reward sum to zero whenever a game ends. This will result in a modified reward function.
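As a reminder of what that modular action function does, here is a hedged, numpy-only sketch of the sample-from-policy idea (the lab's actual `choose_action` runs the TensorFlow model; `choose_action_sketch` and `policy_fn` are illustrative names, not the lab's API):

```python
import numpy as np

def softmax(logits):
    z = logits - np.max(logits)          # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

def choose_action_sketch(policy_fn, observation, n_actions, rng=np.random):
    # policy_fn maps an observation to unnormalized action scores (logits).
    logits = policy_fn(observation)
    probs = softmax(logits)
    # Sample stochastically according to the current policy, rather than
    # greedily taking the argmax -- this is what drives exploration.
    return rng.choice(n_actions, p=probs)

# Toy policy that strongly prefers action 1, regardless of observation.
action = choose_action_sketch(lambda obs: np.array([0.0, 5.0]),
                              observation=None, n_actions=2)
```

Because the action is sampled rather than chosen greedily, the same model can be plugged in for Cartpole, Pong, or any other environment with a discrete action space.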
###Code
### Pong reward function ###
# Compute normalized, discounted rewards for Pong (i.e., return)
# Arguments:
# rewards: reward at timesteps in episode
# gamma: discounting factor. Note increase to 0.99 -- rate of depreciation will be slower.
# Returns:
# normalized discounted reward
def discount_rewards(rewards, gamma=0.99):
discounted_rewards = np.zeros_like(rewards)
R = 0
for t in reversed(range(0, len(rewards))):
# NEW: Reset the sum if the reward is not 0 (the game has ended!)
if rewards[t] != 0:
R = 0
# update the total discounted reward as before
R = R * gamma + rewards[t]
discounted_rewards[t] = R
return normalize(discounted_rewards)
###Output
_____no_output_____
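To make the game-boundary reset concrete, here is a self-contained toy run of the same logic. Note that `normalize` is provided by the lab's helper code; the version below *assumes* it is a standard mean/variance normalization, which may differ from the actual implementation:

```python
import numpy as np

def normalize(x):
    # Assumed standardization: zero mean, unit variance.
    x = x.astype(np.float64)
    return (x - x.mean()) / (x.std() + 1e-8)

def discount_rewards_sketch(rewards, gamma=0.99):
    discounted = np.zeros(len(rewards), dtype=np.float64)
    R = 0.0
    for t in reversed(range(len(rewards))):
        if rewards[t] != 0:   # a nonzero reward marks the end of a Pong game
            R = 0.0           # so the running sum restarts at that boundary
        R = R * gamma + rewards[t]
        discounted[t] = R
    return normalize(discounted)

# Two games in one episode: the first is lost (-1), the second is won (+1).
rewards = [0, 0, -1, 0, 0, 1]
returns = discount_rewards_sketch(rewards)
```

Every timestep of the lost game receives a negative return and every timestep of the won game a positive one -- the loss from the first game never "leaks" into the credit assigned to the second.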
###Markdown
Additionally, we have to consider the nature of the observations in the Pong environment, and how they will be fed into our network. Our observations in this case are images. Before we input an image into our network, we'll do a bit of pre-processing to crop and scale, clean up the background colors to a single color, and set the important game elements to a single color. Let's use this function to visualize what a single observation might look like before and after pre-processing.
###Code
observation = env.reset()
for i in range(30):
action = np.random.choice(n_actions)
observation, _,_,_ = env.step(action)
observation_pp = mdl.lab3.preprocess_pong(observation)
f = plt.figure(figsize=(10,3))
ax = f.add_subplot(121)
ax2 = f.add_subplot(122)
ax.imshow(observation); ax.grid(False);
ax2.imshow(np.squeeze(observation_pp)); ax2.grid(False); plt.title('Preprocessed Observation');
###Output
_____no_output_____
###Markdown
Let's also consider the fact that, unlike CartPole, the Pong environment has an additional element of uncertainty -- regardless of what action the agent takes, we don't know how the opponent will play. That is, the environment is changing over time, based on *both* the actions we take and the actions of the opponent, which result in motion of the ball and motion of the paddles.
Therefore, to capture the dynamics, we also consider how the environment changes by looking at the difference between a previous observation (image frame) and the current observation (image frame). We've implemented a helper function, `pong_change`, that pre-processes two frames, calculates the change between the two, and then re-normalizes the values. Let's inspect this to visualize how the environment can change:
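For intuition, frame differencing itself is simple to sketch. The function below is a hypothetical stand-in for `mdl.lab3.pong_change` (the real helper also crops and pre-processes the frames, which is omitted here):

```python
import numpy as np

def frame_change_sketch(prev_frame, curr_frame):
    """Difference of two (already pre-processed) frames, rescaled to [-1, 1]."""
    diff = curr_frame.astype(np.float64) - prev_frame.astype(np.float64)
    max_abs = np.abs(diff).max()
    if max_abs > 0:
        diff = diff / max_abs   # re-normalize so the largest change has magnitude 1
    return diff

prev = np.zeros((4, 4))
curr = np.zeros((4, 4))
curr[1, 2] = 2.0   # e.g., the ball has moved into this pixel
change = frame_change_sketch(prev, curr)
```

Static background pixels cancel out to zero, so only the moving elements (ball and paddles) survive in the model's input.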
###Code
next_observation, _,_,_ = env.step(np.random.choice(n_actions))
diff = mdl.lab3.pong_change(observation, next_observation)
f, ax = plt.subplots(1, 3, figsize=(15,15))
for a in ax:
a.grid(False)
a.axis("off")
ax[0].imshow(observation); ax[0].set_title('Previous Frame');
ax[1].imshow(next_observation); ax[1].set_title('Current Frame');
ax[2].imshow(np.squeeze(diff)); ax[2].set_title('Difference (Model Input)');
###Output
_____no_output_____
###Markdown
What do you notice? How and why might these pre-processing changes be important for training our RL algorithm? How and why might consideration of the difference between frames be important for training and performance? Rollout function
We're now set up to define our key action algorithm for the game of Pong, which will ultimately be used to train our Pong agent. This function can be thought of as a "rollout", where the agent will 1) make an observation of the environment, 2) select an action based on its state in the environment, 3) execute a policy based on that action, resulting in some reward and a change to the environment, and 4) finally add memory of that action-reward to its `Memory` buffer. We will define this algorithm in the `collect_rollout` function below, and use it soon within a training block.
Earlier you visually inspected the raw environment frames, the pre-processed frames, and the difference between previous and current frames. As you may have gathered, in a dynamic game like Pong, it can actually be helpful to consider the difference between two consecutive observations. This gives us information about the movement between frames -- how the game is changing. We will do this using the `pong_change` function we explored above (which also pre-processes frames for us).
We will use differences between frames as the input on which actions will be selected. These observation changes will be forward propagated through our Pong agent, the CNN network model, which will then predict the next action to take based on this observation. The raw reward will be computed. The observation, action, and reward will be recorded into memory. This will loop until a particular game ends -- the rollout is completed.
For now, we will define `collect_rollout` such that a batch of observations (i.e., from a batch of agent-environment worlds) can be processed serially (i.e., one at a time, in sequence). We will later utilize a parallelized version of this function that will parallelize batch processing to help speed up training! Let's get to it.
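Before that, recall the `Memory` buffer used inside each rollout. A minimal, hedged sketch of what it stores (`MemorySketch` is illustrative; the lab's actual `Memory` class lives in the helper code and may differ):

```python
class MemorySketch:
    """Illustrative stand-in for the lab's Memory buffer."""

    def __init__(self):
        self.clear()

    def clear(self):
        # One parallel list per quantity recorded at each timestep.
        self.observations, self.actions, self.rewards = [], [], []

    def add_to_memory(self, observation, action, reward):
        self.observations.append(observation)
        self.actions.append(action)
        self.rewards.append(reward)

m = MemorySketch()
m.add_to_memory(observation=[0.1, 0.2], action=1, reward=0.0)
m.add_to_memory(observation=[0.2, 0.3], action=0, reward=1.0)
```

Keeping observations, actions, and rewards index-aligned is what lets the training step later pair each action with the discounted return that followed it.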
###Code
### Rollout function ###
# Key steps for agent's operation in the environment, until completion of a rollout.
# An observation is drawn; the agent (controlled by model) selects an action;
# the agent executes that action in the environment and collects rewards;
# information is added to memory.
# This is repeated until the completion of the rollout -- the Pong game ends.
# Processes multiple batches serially.
#
# Arguments:
# batch_size: number of batches, to be processed serially
# env: environment
# model: Pong agent model
# choose_action: choose_action function
# Returns:
# memories: array of Memory buffers, of length batch_size, corresponding to the
# episode executions from the rollout
def collect_rollout(batch_size, env, model, choose_action):
# Holder array for the Memory buffers
memories = []
# Process batches serially by iterating through them
for b in range(batch_size):
# Instantiate Memory buffer, restart the environment
memory = Memory()
next_observation = env.reset()
previous_frame = next_observation
done = False # tracks whether the episode (game) is done or not
while not done:
current_frame = next_observation
'''TODO: determine the observation change.
Hint: this is the difference between the past two frames'''
frame_diff = mdl.lab3.pong_change(previous_frame, current_frame) # TODO
# frame_diff = # TODO
'''TODO: choose an action for the pong model, using the frame difference, and evaluate'''
action = choose_action(model, frame_diff) # TODO
# action = # TODO
# Take the chosen action
next_observation, reward, done, info = env.step(action)
'''TODO: save the observed frame difference, the action that was taken, and the resulting reward!'''
memory.add_to_memory(frame_diff, action, reward) # TODO
previous_frame = current_frame
# Add the memory from this batch to the array of all Memory buffers
memories.append(memory)
return memories
###Output
_____no_output_____
###Markdown
To get a sense of what is encapsulated by `collect_rollout`, we will instantiate an *untrained* Pong model, run a single rollout using this model, save the memory, and play back the observations the model sees. Note that these will be frame *differences*.
###Code
### Rollout with untrained Pong model ###
# Model
test_model = create_pong_model()
# Rollout with single batch
single_batch_size = 1
memories = collect_rollout(single_batch_size, env, test_model, choose_action)
rollout_video = mdl.lab3.save_video_of_memory(memories[0], "Pong-Random-Agent.mp4")
# Play back video of memories
mdl.lab3.play_video(rollout_video)
###Output
_____no_output_____
###Markdown
3.9 Training PongWe're now all set up to start training our RL algorithm and agent for the game of Pong! We've already defined the following:1. Loss function, `compute_loss`, and backpropagation step, `train_step`. Our loss function employs policy gradient learning. `train_step` executes a single forward pass and backpropagation gradient update.2. RL agent algorithm: `collect_rollout`. Serially processes batches of episodes, executing actions in the environment, collecting rewards, and saving these to `Memory`.We will use these functions to train the Pong agent.In the training block, episodes will be executed by agents in the environment via the RL algorithm defined in the `collect_rollout` function. Since RL agents start off with literally zero knowledge of their environment, it can often take a long time to train them and achieve stable behavior. To alleviate this, we have implemented a parallelized version of the RL algorithm, `parallelized_collect_rollout`, which you can use to accelerate training across multiple parallel batches.For training, information in the `Memory` buffer from all these batches will be aggregated (after all episodes, i.e., games, end). Discounted rewards will be computed, and this information will be used to execute a training step. Memory will be cleared, and we will do it all over again!Let's run the code block to train our Pong agent. Note that, even with parallelization, completing training and getting stable behavior will take quite a bit of time (estimated at least a couple of hours). We will again visualize the evolution of the total reward as a function of training to get a sense of how the agent is learning.
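Before running the training block, it may help to see the math behind the policy-gradient loss mentioned in item 1: the negative log-probability of each chosen action, weighted by the discounted return that followed it. Below is a self-contained numpy sketch of that computation -- illustrative only; the lab's `compute_loss` operates on TensorFlow tensors, and `log_softmax`/`policy_gradient_loss` are hypothetical names:

```python
import numpy as np

def log_softmax(logits):
    z = logits - logits.max(axis=1, keepdims=True)   # numerical stability
    return z - np.log(np.exp(z).sum(axis=1, keepdims=True))

def policy_gradient_loss(logits, actions, discounted_rewards):
    # Negative log-likelihood of each chosen action...
    neg_logp = -log_softmax(logits)[np.arange(len(actions)), actions]
    # ...weighted by the return that followed it, averaged over the batch.
    return np.mean(neg_logp * discounted_rewards)

# Two timesteps where the sampled action matched the model's preference
# and was followed by a positive return.
logits = np.array([[2.0, 0.0], [0.0, 2.0]])
actions = np.array([0, 1])
rewards = np.array([1.0, 1.0])
loss = policy_gradient_loss(logits, actions, rewards)
```

Minimizing this quantity pushes up the probability of actions followed by positive returns and pushes down those followed by negative returns -- which is exactly the update `train_step` applies via backpropagation.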
###Code
### Hyperparameters and setup for training ###
# Rerun this cell if you want to re-initialize the training process
# (i.e., create new model, reset loss, etc)
# Hyperparameters
learning_rate = 1e-3
MAX_ITERS = 1000 # increase the maximum to train longer
batch_size = 5 # number of batches to run
# Model, optimizer
pong_model = create_pong_model()
optimizer = tf.keras.optimizers.Adam(learning_rate)
iteration = 0 # counter for training steps
# Plotting
smoothed_reward = mdl.util.LossHistory(smoothing_factor=0.9)
smoothed_reward.append(0) # start the reward at zero for baseline comparison
plotter = mdl.util.PeriodicPlotter(sec=15, xlabel='Iterations', ylabel='Win Percentage (%)')
# Batches and environment
# To parallelize batches, we need to make multiple copies of the environment.
envs = [create_pong_env() for _ in range(batch_size)] # For parallelization
### Training Pong ###
# You can run this cell and stop it anytime in the middle of training to save
# a progress video (see next codeblock). To continue training, simply run this
# cell again, your model will pick up right where it left off. To reset training,
# you need to run the cell above.
games_to_win_episode = 21 # this is set by OpenAI gym and cannot be changed.
# Main training loop
while iteration < MAX_ITERS:
plotter.plot(smoothed_reward.get())
tic = time.time()
# RL agent algorithm. By default, uses serial batch processing.
# memories = collect_rollout(batch_size, env, pong_model, choose_action)
# Parallelized version. Uncomment line below (and comment out line above) to parallelize
memories = mdl.lab3.parallelized_collect_rollout(batch_size, envs, pong_model, choose_action)
print(time.time()-tic)
# Aggregate memories from multiple batches
batch_memory = aggregate_memories(memories)
# Track performance based on win percentage (calculated from rewards)
total_wins = sum(np.array(batch_memory.rewards) == 1)
total_games = sum(np.abs(np.array(batch_memory.rewards)))
win_rate = total_wins / total_games
smoothed_reward.append(100 * win_rate)
# Training!
train_step(
pong_model,
optimizer,
observations = np.stack(batch_memory.observations, 0),
actions = np.array(batch_memory.actions),
discounted_rewards = discount_rewards(batch_memory.rewards)
)
# Save a video of progress -- this can be played back later
if iteration % 100 == 0:
mdl.lab3.save_video_of_model(pong_model, "Pong-v0",
suffix="_"+str(iteration))
iteration += 1 # Mark next episode
###Output
_____no_output_____
###Markdown
Finally we can put our trained agent to the test! It will play in a newly instantiated Pong environment against the "computer", a base AI system for Pong. Your agent plays as the green paddle. Let's watch the match instant replay!
###Code
latest_pong = mdl.lab3.save_video_of_model(
pong_model, "Pong-v0", suffix="_latest")
mdl.lab3.play_video(latest_pong, width=400)
###Output
_____no_output_____ |
.ipynb_checkpoints/Project_02_Group_Bimbo_Inventory_Demand-checkpoint.ipynb | ###Markdown
**Grupo Bimbo Inventory Demand** *March 6, 2020* **1. Problem overview** ---[Grupo Bimbo](https://www.grupobimbo.com) strives to meet daily consumer demand for fresh bakery products on the shelves of more than 1 million stores along its 45,000 routes across Mexico. Currently, daily inventory calculations are performed by direct-delivery sales employees, who must single-handedly forecast product inventory needs and demand based on their personal experience at each store. Since some breads have a shelf life of one week, the acceptable margin for error is small. **Objective:** in this machine learning project, we will develop a model to accurately forecast inventory demand based on historical sales data. This will keep consumers of the more than 100 bakery products from staring at empty shelves, while also reducing the amount spent on refunds to store owners left with surplus products unfit for sale. To build this project, we will use the R language and the datasets available on Kaggle at:* https://www.kaggle.com/c/grupo-bimbo-inventory-demand--- **2. Loading the data** **2.1 Importing the required libraries** Let's begin our project by importing all the libraries needed for the initial data exploration and transformation (*Data Munging*) phases.
###Code
# If you do not have one of the libraries imported below, install it with the following command:
install.packages(c(
'data.table',
'bigreadr',
'dplyr',
'ggplot2',
'fasttime',
'lubridate',
'corrplot',
'anomalize',
'stringr'
))
# Suppress warnings.
options(warn = -1)
# Import the libraries.
library(data.table)
library(bigreadr)
library(dplyr)
library(ggplot2)
library(fasttime)
library(lubridate)
library(corrplot)
library(anomalize)
library(stringr)
###Output
_____no_output_____
###Markdown
**2.2 Loading the *cliente_tabla* dataset**
###Code
# Import the dataset.
client <- fread('/content/datasets/cliente_tabla.csv')
# Check the first rows of the dataset.
head(client)
###Output
_____no_output_____
###Markdown
**2.3 Loading the *producto_tabla* dataset**
###Code
# Import the dataset.
product <- fread('/content/datasets/producto_tabla.csv')
# Check the first rows of the dataset.
head(product)
###Output
_____no_output_____
###Markdown
**2.4 Loading the *town_state* dataset**
###Code
# Import the dataset.
town <- fread('/content/datasets/town_state.csv')
# Check the first rows of the dataset.
head(town)
###Output
_____no_output_____
###Markdown
**2.5 Loading the training data**
###Code
# Import the dataset.
train <- fread('/content/datasets/train.csv')
# Check the first rows of the dataset.
head(train)
###Output
_____no_output_____
###Markdown
**2.6 Loading the test data**
###Code
# Import the dataset.
test <- fread('/content/datasets/test.csv')
# Check the first rows of the dataset.
head(test)
###Output
_____no_output_____
###Markdown
**3. Data Munging - Removing inconsistencies from the datasets** The [documentation](https://www.kaggle.com/c/grupo-bimbo-inventory-demand/data) warns us about some problems that must be handled within the dataset, such as duplicate records. We will therefore briefly explore the datasets to remove any inconsistencies they contain. **3.1 The *client* dataset**
###Code
# View the first 10 rows of the dataset.
head(client, 10)
###Output
_____no_output_____
###Markdown
We can see that there are repeated *IDs* and unknown client names (recorded as **"SIN NOMBRE"**) in the dataset, which will need to be handled.Let's count the number of duplicate records in the dataset.
###Code
# Check for duplicated rows in the dataset.
table(duplicated(client))
###Output
_____no_output_____
###Markdown
We found no duplicate records, which leads us to believe that the **NombreCliente** variable holds strings of different lengths for each duplicated ID. Let's confirm this theory.When listing the first rows of the dataset, we saw that *ID* **4** appears twice. Based on this, let's capture and inspect the **NombreCliente** value associated with this *ID* in each observation.
###Code
# Set the ID number to look up.
id <- 4
# Capture the rows containing the specified ID.
client[client$Cliente_ID == id,]
# Capture the names associated with each of the duplicated records.
fName <- client[client$Cliente_ID == id,][1, 'NombreCliente']
sName <- client[client$Cliente_ID == id,][2, 'NombreCliente']
# Determine the number of characters in each duplicated record's name.
nchar(fName)
nchar(sName)
###Output
_____no_output_____
###Markdown
From this result we can confirm that there is a difference between the **NombreCliente** values of the duplicated records. This is probably due to a different number of spaces in each name. We will count the number of duplicate records based on the **Cliente_ID** variable.
###Code
# Check the number of duplicated IDs in the dataset.
table(duplicated(client$Cliente_ID))
# Remove records with a duplicated ID number.
client <- client[!duplicated(client$Cliente_ID),]
###Output
_____no_output_____
###Markdown
We will determine the number of records without a client name.
###Code
# Check the number of records without a client name.
nrow(client[client$NombreCliente == 'SIN NOMBRE', ])
###Output
_____no_output_____
###Markdown
There are **356 observations** without a client name.
###Code
# Check whether there are null values in the dataset.
anyNA(client)
###Output
_____no_output_____
###Markdown
There are no null values in the dataset.
###Code
# Check the data type of the dataset's variables.
glimpse(client)
###Output
Rows: 930,500
Columns: 2
$ Cliente_ID    <int> 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 1…
$ NombreCliente <chr> "SIN NOMBRE", "OXXO XINANTECATL", "SIN NOMBRE", "EL MOR…
###Markdown
The dataset contains records for **930,500** clients. **3.2 The *product* dataset**
###Code
# View the first 10 rows of the dataset.
head(product, 10)
###Output
_____no_output_____
###Markdown
Let's count the number of duplicate records in the dataset.
###Code
# Check for duplicated rows in the dataset.
table(duplicated(product))
###Output
_____no_output_____
###Markdown
No duplicated observations were found.
###Code
# Check the number of duplicated IDs in the dataset.
table(duplicated(product$Producto_ID))
###Output
_____no_output_____
###Markdown
No duplicated *ID* numbers were found. However, the product with *ID* **0** has no product name (it holds the value **"NO IDENTIFICADO"**). We will check whether this occurs only in this record.
###Code
# Capture the specified substring.
pattern <- "NO IDENTIFICADO"
# Find the rows that lack an identified product name.
rows <- grep(pattern, product[, 'NombreProducto'], value = F)
# View the rows that do not contain the product name.
product[rows, ]
###Output
_____no_output_____
###Markdown
We conclude that only *ID* **0** lacks a product name. Perhaps this happens because a single *ID* is being used to identify products that have not yet been properly registered in the dataset.
###Code
# Check whether there are null values in the dataset.
anyNA(product)
###Output
_____no_output_____
###Markdown
There are no null values in the dataset.
###Code
# Check the data type of the dataset's variables.
glimpse(product)
###Output
Rows: 2,592
Columns: 2
$ Producto_ID    <int> 0, 9, 41, 53, 72, 73, 98, 99, 100, 106, 107, 108, 109,…
$ NombreProducto <chr> "NO IDENTIFICADO 0", "Capuccino Moka 750g NES 9", "Bim…
###Markdown
The dataset contains records for **2,592 products**. Note that the **NombreProducto** variable contains other information besides the product name. The string appears to follow this pattern:| | | | | | ||:------------------------------|:------------------|:------------------|:-------|:----------------------|:---------------|| **NombreProducto** | *Product name* | *Number of pieces* | *Weight* | *Manufacturer abbreviation* | *Product ID*|Note that this pattern is not present in every value of the variable, but it predominates in most of the data. Well, we do not need all of this information for the analysis we are going to perform, so we will extract only the **name**, **weight**, and **manufacturer abbreviation** of each product.
###Code
## Extract the product's unit of mass.
# Extract the substring with the raw information into a temporary variable.
tmp <- str_extract(product$NombreProducto, "([0-9 ] |[0-9])+(G|g|Kg|kg|ml)")
# Create a variable to store the number associated with the product's weight.
product$Npeso <- as.integer(str_extract(tmp, "[0-9]+"))
# Create a variable to store the unit of measure of the product's weight.
product$UniPeso <- tolower(str_extract(tmp, "[A-z]+"))
# Create a variable to store the manufacturer's abbreviation.
product$Productor <- toupper(str_extract(
    str_extract(product$NombreProducto, "( [A-Z]+[a-z ]+[A-Z]+ [A-Z ]+ [0-9]+$| [A-Z ]+[A-Z ]+ [0-9]+$)"), "( [A-Z]+[a-z ]+[A-Z]+ [A-Z ]+ | [A-Z ]+[A-Z ]+ )"
))
# Extract the product name.
product$NombreProducto <- str_extract(product$NombreProducto, "[A-z ]+")
# View the dataset after extracting the desired information.
head(product)
# Check whether there are null values in each variable of the dataset.
sapply(product, function(v){
    table(is.na(v))
})
###Output
_____no_output_____
###Markdown
As a final result, we found that it was not possible to determine the weight of **51 products** or the abbreviation of **1 manufacturer**. **3.3 The *town* dataset**
###Code
# View the first 10 rows of the dataset.
head(town, 10)
###Output
_____no_output_____
###Markdown
Let's check whether there are duplicated records or agency *IDs* in the dataset.
###Code
# Check for duplicated rows in the dataset.
table(duplicated(town))
###Output
_____no_output_____
###Markdown
No duplicate records were found.
###Code
# Check the number of duplicated IDs in the dataset.
table(duplicated(town$Agencia_ID))
###Output
_____no_output_____
###Markdown
No record contains a duplicated *ID* number.
###Code
# Check whether there are null values in the dataset.
anyNA(town)
###Output
_____no_output_____
###Markdown
There are no null values in the dataset.
###Code
# Check the data type of the dataset's variables.
glimpse(town)
###Output
Rows: 790
Columns: 3
$ Agencia_ID <int> 1110, 1111, 1112, 1113, 1114, 1116, 1117, 1118, 1119, 1120…
$ Town       <chr> "2008 AG. LAGO FILT", "2002 AG. AZCAPOTZALCO", "2004 AG. C…
$ State      <chr> "MÉXICO, D.F.", "MÉXICO, D.F.", "ESTADO DE MÉXICO", "MÉXIC…
###Markdown
The dataset contains records for **790 towns** and their respective states. **3.4 The *train* dataset**
###Code
# View the first 10 rows of the dataset.
head(train, 10)
# Check for duplicated rows in the dataset.
table(duplicated(train))
###Output
_____no_output_____
###Markdown
There are no duplicate records in the dataset.
###Code
# Check whether there are null values in the dataset.
anyNA(train)
###Output
_____no_output_____
###Markdown
There are no null values in the dataset.
###Code
# Check the data type of the dataset's variables.
glimpse(train)
###Output
Rows: 74,180,464
Columns: 11
$ Semana            <int> 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, …
$ Agencia_ID        <int> 1110, 1110, 1110, 1110, 1110, 1110, 1110, 1110, 111…
$ Canal_ID          <int> 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, …
$ Ruta_SAK          <int> 3301, 3301, 3301, 3301, 3301, 3301, 3301, 3301, 330…
$ Cliente_ID        <int> 15766, 15766, 15766, 15766, 15766, 15766, 15766, 15…
$ Producto_ID       <int> 1212, 1216, 1238, 1240, 1242, 1250, 1309, 3894, 408…
$ Venta_uni_hoy     <int> 3, 4, 4, 4, 3, 5, 3, 6, 4, 6, 8, 4, 12, 7, 10, 5, 3…
$ Venta_hoy         <dbl> 25.14, 33.52, 39.32, 33.52, 22.92, 38.20, 20.28, 56…
$ Dev_uni_proxima   <int> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, …
$ Dev_proxima       <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, …
$ Demanda_uni_equil <int> 3, 4, 4, 4, 3, 5, 3, 6, 4, 6, 8, 4, 12, 7, 10, 5, 3…
###Markdown
The training dataset contains **74,180,464 records** and **11 columns**. **3.5 The *test* dataset**
###Code
# View the first 10 rows of the dataset.
head(test, 10)
# Check for duplicated rows in the dataset.
table(duplicated(test))
###Output
_____no_output_____
###Markdown
There are no duplicate records in the dataset.
###Code
# Check whether there are null values in the dataset.
anyNA(test)
###Output
_____no_output_____
###Markdown
There are no null values in the dataset.
###Code
# Check the data type of the dataset's variables.
glimpse(test)
###Output
Rows: 6,999,251
Columns: 7
$ id          <int> 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16,…
$ Semana      <int> 11, 11, 10, 11, 11, 11, 11, 10, 10, 11, 11, 10, 11, 10, 1…
$ Agencia_ID  <int> 4037, 2237, 2045, 1227, 1219, 1146, 2057, 1612, 1349, 146…
$ Canal_ID    <int> 1, 1, 1, 1, 1, 4, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, …
$ Ruta_SAK    <int> 2209, 1226, 2831, 4448, 1130, 6601, 4507, 2837, 1223, 120…
$ Cliente_ID  <int> 4639078, 4705135, 4549769, 4717855, 966351, 1741414, 4659…
$ Producto_ID <int> 35305, 1238, 32940, 43066, 1277, 972, 1232, 35305, 1240, …
###Markdown
The test dataset contains **6,999,251 records** and **7 columns**. **4. Exploratory data analysis** **4.1 Overview** According to the project [documentation](https://www.kaggle.com/c/grupo-bimbo-inventory-demand/data), each row of the training data contains a sales record with the following variables:| Variable | Description ||:------------------------------|:-----------------------------------------------------------------------|| **Semana** | the week number *(Thursday through Wednesday)*; || **Agencia_ID** | the sales depot *ID*; || **Canal_ID** | the sales channel *ID*; || **Ruta_SAK** | the route *ID* *(several routes = one sales depot)*; || **Cliente_ID** | the client *ID*; || **NombreCliente** | the client name; || **Producto_ID** | the product *ID*; || **NombreProducto** | the product name; || **Venta_uni_hoy** | the number of units sold in the week; || **Venta_hoy** | the sales value in the week *(monetary unit: pesos)*; || **Dev_uni_proxima** | the number of units returned the next week; || **Dev_proxima** | the value returned the next week *(monetary unit: pesos)*; and || **Demanda_uni_equil (Target)**| *the variable to be predicted*, the adjusted demand. |In this step we will try to understand the layout and characteristics of the data within the training dataset, as well as extract insights that may help in the process of building the predictive model.
###Code
# Verificando o tipo de dado das variáveis do dataset.
glimpse(train)
###Output
Rows: 74,180,464
Columns: 11
$ Semana            <int> 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, …
$ Agencia_ID        <int> 1110, 1110, 1110, 1110, 1110, 1110, 1110, 1110, 111…
$ Canal_ID          <int> 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, …
$ Ruta_SAK          <int> 3301, 3301, 3301, 3301, 3301, 3301, 3301, 3301, 330…
$ Cliente_ID        <int> 15766, 15766, 15766, 15766, 15766, 15766, 15766, 15…
$ Producto_ID       <int> 1212, 1216, 1238, 1240, 1242, 1250, 1309, 3894, 408…
$ Venta_uni_hoy     <int> 3, 4, 4, 4, 3, 5, 3, 6, 4, 6, 8, 4, 12, 7, 10, 5, 3…
$ Venta_hoy         <dbl> 25.14, 33.52, 39.32, 33.52, 22.92, 38.20, 20.28, 56…
$ Dev_uni_proxima   <int> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, …
$ Dev_proxima       <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, …
$ Demanda_uni_equil <int> 3, 4, 4, 4, 3, 5, 3, 6, 4, 6, 8, 4, 12, 7, 10, 5, 3…
###Markdown
The training dataset contains **74,180,464 records** and **11 columns**. All variables are numeric.
###Code
# Verificando o número de valores únicos para cada variável do dataset.
t <- sapply(train, function(c) {
length(unique(c))
})
# Exibindo os resultados.
print(t)
###Output
Semana Agencia_ID Canal_ID Ruta_SAK
7 552 9 3603
Cliente_ID Producto_ID Venta_uni_hoy Venta_hoy
880604 1799 2116 78140
Dev_uni_proxima Dev_proxima Demanda_uni_equil
558 14707 2091
###Markdown
These results are interesting. In particular, the dataset contains records for **7 weeks**, **880,604 distinct client IDs**, and **1,799 distinct product IDs**. **4.2 Analyzing each variable separately** **4.2.1 Creating helper functions** We will create a few functions to standardize the plots we produce.
###Code
# Definindo uma função para criar gráficos de barra.
barPlot <- function(col, data) {
data %>%
mutate_at(c(var = col), as.factor) %>%
group_by(var) %>%
summarise(absFreq = n()) %>%
ggplot(aes(x = var, y = absFreq)) +
geom_bar(stat = 'identity', alpha = 0.75, fill = '#086788') +
ylab('Frequency') +
xlab(col) +
labs(title = paste('Bar plot for variable:', col)) +
theme_bw()
}
# Definindo uma função para criar gráficos de boxplot.
boxPlot <- function(col, data) {
data %>%
group_by_at(col) %>%
summarise(absFreq = n()) %>%
ggplot(aes(x = absFreq)) +
geom_boxplot(fill = '#566E3D', color = '#373D20', alpha = 0.8) +
theme_bw() +
theme(axis.text.y = element_blank()) +
xlab(paste(col, 'frequency')) +
labs(title = paste('Boxplot for variable:', col))
}
###Output
_____no_output_____
###Markdown
**4.2.2 Variável Semana**
###Code
# Definindo o nome da variável a ser analisada.
col <- 'Semana'
# Criando um gráfico de barras para a variável especificada.
barPlot(col, train)
###Output
_____no_output_____
###Markdown
The chart shows an approximately uniform distribution of records across the weeks.
###Code
# Contabilizando o número de registros por semana.
table(Semana = train$Semana)
###Output
_____no_output_____
###Markdown
**4.2.3 Variável Agencia_ID**
###Code
# Criando um boxplot com as frequências com que os ID das agências aparecem no conjunto de dados.
# Definindo o nome da variável a ser analisada.
col <- 'Agencia_ID'
# Criando um gráfico boxplot para a variável especificada.
boxPlot(col, train)
###Output
_____no_output_____
###Markdown
We can see outliers in the frequencies with which agency *IDs* appear in the dataset. Let's determine the town and state of the agencies whose *ID* frequencies are outlying.
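The outlier extraction below relies on `anomalize()` with `method = 'iqr'`. As a rough sketch of the idea, here is a plain Tukey-fence rule (the multiplier `anomalize` derives internally from its `alpha` argument may differ from the classic `k = 1.5` used here):

```r
# Flag values outside [Q1 - k*IQR, Q3 + k*IQR]; k = 1.5 is the classic Tukey choice.
iqr_outliers <- function(x, k = 1.5) {
  q <- quantile(x, c(0.25, 0.75), names = FALSE)
  spread <- q[2] - q[1]                          # interquartile range
  x < q[1] - k * spread | x > q[2] + k * spread  # TRUE where x is an outlier
}

iqr_outliers(c(1, 2, 3, 4, 100))  # FALSE FALSE FALSE FALSE TRUE
```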
###Code
# Determinando o endereço, o estado e ordenando de forma decrescente a frequência dos IDs das agências no dataset.
AgencyIdFreq <- train %>%
select(Agencia_ID) %>%
group_by(Agencia_ID) %>%
summarise(absFreq = n()) %>%
arrange(desc(absFreq)) %>%
inner_join(town, by = 'Agencia_ID')
# Visualizando os 5 IDs de agências mais frequentes.
head(AgencyIdFreq)
# Extraindo outliers do dataset.
AgencyIdStates <- AgencyIdFreq %>%
anomalize(absFreq, method = 'iqr', alpha = 0.10) %>%
filter(anomaly == 'Yes') %>%
select(-c(absFreq_l1, absFreq_l2, anomaly))
# Visualizando as informações dos outliers.
AgencyIdStates
###Output
_____no_output_____
###Markdown
There seems to be a group of states that recurs among these outlying agencies. To make this easier to see, we will create a bar chart.
###Code
# Criando um gráfico de barras para os estados das agências que apresentaram uma frequência discrepante.
# Definindo o nome da variável a ser analisada.
col <- 'State'
# Criando um gráfico de barras para a variável especificada.
barPlot(col, AgencyIdStates)
# Determinando a proporção dos estados identificados entre os IDs das agências que apresentaram uma frequência discrepante.
prop.table(table(AgencyIdStates$State))
###Output
_____no_output_____
###Markdown
We conclude that the **State of Mexico** contains 6 of the most recurrent agencies in the dataset and that **Jalisco** is the state with the single most frequent agency. **4.2.4 Variable Canal_ID**
###Code
# Plotando um gráfico de barras para o conjunto de dados da variável.
# Definindo o nome da variável a ser analisada.
col <- 'Canal_ID'
# Criando um gráfico de barras para a variável especificada.
barPlot(col, train)
###Output
_____no_output_____
###Markdown
We observe that the sales channel with *ID* **1** is the most frequent. Let's determine each channel's share of the dataset.
###Code
# Determining the share of each channel.
train %>%
mutate(Canal_ID = as.factor(Canal_ID)) %>%
group_by(Canal_ID) %>%
summarise(Prop = round(n() / length(train$Canal_ID) * 100, digits = 3))
###Output
_____no_output_____
###Markdown
We conclude that roughly **91%** of the records in the dataset are associated with the **channel with ID = 1**. **4.2.5 Variable Ruta_SAK**
###Code
# Criando um boxplot com as frequências com que as rotas usadas aparecem no conjunto de dados.
# Definindo o nome da variável a ser analisada.
col <- 'Ruta_SAK'
# Criando um gráfico de boxplot para a variável especificada.
boxPlot(col, train)
###Output
_____no_output_____
###Markdown
This variable contains a large number of outliers. It may be worth extracting and analyzing them separately.
###Code
# Determinando em ordem decrescente as rotas mais frequentes dentro do conjunto de dados.
routeFreq <- train %>%
group_by(Ruta_SAK) %>%
summarise(absFreq = n()) %>%
arrange(desc(absFreq))
# Visualizando as primeiras linhas do dataset.
head(routeFreq)
# Extraindo outliers do dataset.
routeOutFreq <- routeFreq %>%
anomalize(absFreq, method = 'iqr', alpha = 0.10) %>%
filter(anomaly == 'Yes') %>%
select(-c(absFreq_l1, absFreq_l2, anomaly))
# Determinando número de outliers.
nrow(routeOutFreq)
###Output
_____no_output_____
###Markdown
We detected **605 routes** with outlying frequencies in the dataset.
###Code
# Determinando a rota mais frequente dentro do conjunto de dados.
routeOutFreq[routeOutFreq$absFreq == max(routeOutFreq$absFreq), ]
###Output
_____no_output_____
###Markdown
We find that **route 1201 is the most recurrent** in the dataset.
###Code
# Determinando a proporção de rotas com frequências discrepantes.
length(routeOutFreq$Ruta_SAK) / length(unique(train$Ruta_SAK)) * 100
# Determinando a proporção de registros associados as rotas discrepantes.
sum(routeOutFreq$absFreq) / length(train$Ruta_SAK) * 100
###Output
_____no_output_____
###Markdown
Finally, we conclude that roughly **16.8%** of the routes have outlying frequencies and that they account for **86.38%** of the deliveries. **4.2.6 Variable Cliente_ID**
###Code
# Criando um boxplot com as frequências com que os IDs dos clientes que estão presentes no dataset aparecem.
# Definindo o nome da variável a ser analisada.
col <- 'Cliente_ID'
# Criando um gráfico de boxplot para a variável especificada.
boxPlot(col, train)
###Output
_____no_output_____
###Markdown
Interestingly, one client appears with an extremely high frequency relative to the others and ends up distorting the plot. Let's identify this client's name along with those of the other outliers.
###Code
# Determinando em ordem decrescente os clientes mais frequentes dentro do conjunto de dados.
clientFreq <- train %>%
select(Cliente_ID) %>%
group_by(Cliente_ID) %>%
summarise(absFreq = n()) %>%
arrange(desc(absFreq)) %>%
inner_join(client, by = 'Cliente_ID')
# Visualizando as primeiras linhas do dataset.
head(clientFreq)
###Output
_____no_output_____
###Markdown
The most extreme client has a frequency roughly **23.4 times higher** than that of the second-ranked client, which gives a sense of how far this first-place client sits from the rest.
###Code
# Extraindo outliers do dataset.
clientOutFreq <- clientFreq %>%
anomalize(absFreq, method = 'iqr', alpha = 0.0415) %>%
filter(anomaly == 'Yes') %>%
select(-c(absFreq_l1, absFreq_l2, anomaly))
# Determinando número de outliers.
nrow(clientOutFreq)
###Output
_____no_output_____
###Markdown
We found **1,622** client *IDs* with outlying frequencies in the dataset.
###Code
# Determinando o cliente mais frequente dentro do conjunto de dados.
mostFrequentClient <- clientOutFreq[clientOutFreq$absFreq == max(clientOutFreq$absFreq), ]
# Visualizando o cliente mais frequente dentro do conjunto de dados.
mostFrequentClient
###Output
_____no_output_____
###Markdown
We identified that the client with the most recurrent *ID* in the dataset is named **"Puebla Remision"**.
###Code
# Determinando a proporção de registros que contém o ID do Cliente com a frequência mais discrepante.
mostFrequentClient$absFreq / length(train$Cliente_ID) * 100
###Output
_____no_output_____
###Markdown
The client **"Puebla Remision"** is associated with roughly **0.167%** of the records in the dataset.
###Code
# Determinando a proporção de registros que contém os IDs dos Clientes com as frequências mais discrepantes.
sum(clientOutFreq$absFreq) / length(train$Cliente_ID) * 100
###Output
_____no_output_____
###Markdown
All records associated with client *IDs* that have an outlying frequency amount to roughly **1.37%** of the total. This tells us that most of the data we are handling relates to a large number of clients who purchase at an unremarkable frequency. **4.2.7 Variable Producto_ID**
###Code
# Criando um boxplot com as frequências com que os IDs dos produtos que estão presentes no dataset aparecem.
# Definindo o nome da variável a ser analisada.
col <- 'Producto_ID'
# Criando um gráfico de boxplot para a variável especificada.
boxPlot(col, train)
###Output
_____no_output_____
###Markdown
There are many outliers among the product frequencies, indicating a subset of items whose sales pattern departs from that of the remaining products.
###Code
# Determinando em ordem decrescente os produtos mais frequentes dentro do conjunto de dados.
productFreq <- train %>%
select(Producto_ID) %>%
group_by(Producto_ID) %>%
summarise(absFreq = n()) %>%
arrange(desc(absFreq)) %>%
inner_join(product, by = 'Producto_ID')
# Visualizando as primeiras linhas do dataset.
head(productFreq)
# Extraindo outliers do dataset.
productOutFreq <- productFreq %>%
anomalize(absFreq, method = 'iqr', alpha = 0.1) %>%
filter(anomaly == 'Yes') %>%
select(-c(absFreq_l1, absFreq_l2, anomaly))
# Determinando número de outliers.
nrow(productOutFreq)
###Output
_____no_output_____
###Markdown
We observe **333 products** with outlying frequencies.
###Code
# Determinando o produto mais frequente dentro do conjunto de dados.
mostFrequentProduct <- productOutFreq[productOutFreq$absFreq == max(productOutFreq$absFreq), ]
# Visualizando o produto mais frequente dentro do conjunto de dados.
mostFrequentProduct
###Output
_____no_output_____
###Markdown
The product with the most sales records in our dataset is named **"Mantecadas Vainilla"**.
###Code
# Determinando a proporção de registros que contém os IDs dos produtos com as frequências discrepantes.
sum(productOutFreq$absFreq) / length(train$Producto_ID) * 100
###Output
_____no_output_____
###Markdown
The products with outlying frequencies account for roughly **96.76%** of the records in the dataset.
###Code
# Determinando a sigla do fabricante mais recorrente dentro do conjunto de dados que contém os IDs dos produtos com as frequências discrepantes.
manufacturersOut <- productOutFreq %>%
group_by(Productor) %>%
summarise(absFreq = n()) %>%
arrange(desc(absFreq))
# Visualizando as primeiras linhas do dataset.
head(manufacturersOut)
###Output
_____no_output_____
###Markdown
We conclude that the manufacturer identified by the abbreviation **BIM** is the most recurrent among the products with outlying frequencies. **4.2.8 Variable Venta_uni_hoy**
###Code
# Verificando a distribuição dos dados.
summary(train$Venta_uni_hoy)
# Verificando a frequência com que cada número de unidades aparece no dataset.
t <- train %>%
group_by(Venta_uni_hoy) %>%
summarise(absFreq = n())
# Ordenando dados das frequências em ordem decrescente.
t <- t[order(t$absFreq, decreasing = T), ]
# Visualizando o número de unidades mais frequentes no dataset.
head(t, 10)
###Output
_____no_output_____
###Markdown
We conclude that sales of **2 units** are the most frequent in the dataset and that the 10 most frequent unit counts range between **1 and 10**. **4.2.9 Variable Venta_hoy**
###Code
# Verificando a distribuição dos dados.
summary(train$Venta_hoy)
# Definindo o valor total de vendas por semana.
train %>%
group_by(Semana) %>%
summarise(total_Venta_hoy = sum(Venta_hoy))
# Definindo o valor total de vendas por cliente.
train %>%
group_by(Cliente_ID ) %>%
summarise(total_Venta_hoy = sum(Venta_hoy)) %>%
arrange(desc(total_Venta_hoy)) %>%
head(10)
###Output
_____no_output_____
###Markdown
**4.2.10 Variable Dev_uni_proxima**
###Code
# Verificando a distribuição dos dados.
summary(train$Dev_uni_proxima)
# Definindo o número total de unidades retornadas na próxima semana.
train %>%
group_by(Semana) %>%
summarise(total_Dev_uni_proxima = sum(Dev_uni_proxima))
# Definindo o número total de unidades retornadas na próxima semana por cliente.
train %>%
group_by(Cliente_ID) %>%
summarise(total_Dev_uni_proxima = sum(Dev_uni_proxima)) %>%
arrange(desc(total_Dev_uni_proxima)) %>%
head()
###Output
_____no_output_____
###Markdown
**4.2.11 Variable Dev_proxima**
###Code
# Verificando a distribuição dos dados.
summary(train$Dev_proxima)
# Definindo o valor total retornado na próxima semana.
train %>%
group_by(Semana) %>%
summarise(total_Dev_proxima = sum(Dev_proxima))
# Definindo o valor total retornado na próxima semana por cliente.
train %>%
group_by(Cliente_ID) %>%
summarise(total_Dev_proxima = sum(Dev_proxima)) %>%
arrange(desc(total_Dev_proxima)) %>%
head()
###Output
_____no_output_____
###Markdown
**4.2.12 Variable Demanda_uni_equil**
###Code
# Criando um boxplot para visualizar a distribuição dos dados da variável Demanda_uni_equil.
# Definindo o nome da variável a ser analisada.
col <- 'Demanda_uni_equil'
# Criando um gráfico de boxplot para a variável especificada.
train %>%
ggplot(aes(x = Demanda_uni_equil)) +
geom_boxplot(fill = '#566E3D', color = '#373D20', alpha = 0.8) +
theme_bw() +
theme(axis.text.y = element_blank()) +
xlab('Values') +
labs(title = 'Boxplot for variable: Demanda uni equil')
###Output
_____no_output_____
###Markdown
The values of the target variable are heavily distorted by a few extreme outliers. We will have to address this problem to improve the performance of the predictive models we build. **4.2.13 Variable Town** Next, we will check how many states and per-town agencies are present in the dataset, and which ones they are.
###Code
# Contabilizando a frequência de agências por cidade dentro do dataset.
t <- town %>%
group_by(Town) %>%
summarise(absFreq = n())
# Ordenando resultados.
t <- t[order(t$absFreq, decreasing = T),]
# Visualizando as primeiras 10 linhas da tabela.
head(t, 10)
# Determinando o número de agências por cidade atendidas.
nrow(t)
###Output
_____no_output_____
###Markdown
We detected **260** town-level agencies served. We will create a boxplot to check for outliers in the per-town agency frequencies.
###Code
# Definindo o nome da variável a ser analisada.
col <- 'Town'
# Criando um gráfico de boxplot para a variável especificada.
boxPlot(col, town)
###Output
_____no_output_____
###Markdown
We conclude that the agency ***2013 AG. MEGA NAUCALPAN*** has an absolute frequency that departs from the pattern of the others in the dataset. **4.2.14 Variable State**
###Code
# Contabilizando a frequência de cada estado dentro do dataset.
t <- town %>%
group_by(State) %>%
summarise(absFreq = n())
# Ordenando resultados.
t <- t[order(t$absFreq, decreasing = T),]
# Visualizando as primeiras 10 linhas da tabela.
head(t, 10)
# Determinando o número de estados atendidos.
nrow(t)
###Output
_____no_output_____
###Markdown
We found that **33 states** are served.
###Code
# Definindo o nome da variável a ser analisada.
col <- 'State'
# Criando um gráfico de boxplot para a variável especificada.
boxPlot(col, town)
###Output
_____no_output_____
###Markdown
We conclude that the ***ESTADO DE MÉXICO*** has an absolute frequency that departs from the pattern of the other states in the dataset. **5. Predictive Analysis** **5.1 Importing required libraries** We will import all the libraries needed for the predictive modeling steps.
###Code
# If you do not already have one of the libraries imported below, install it with:
# install.packages(c('Metrics', 'xgboost', 'randomForest', 'caret'))
# Importando bibliotecas.
library(Metrics)
library(xgboost)
library(randomForest)
library(caret)
###Output
_____no_output_____
###Markdown
**5.2 Feature Selection** Note that the variables **Venta_uni_hoy**, **Venta_hoy**, **Dev_uni_proxima**, and **Dev_proxima** are not present in the test dataset, so we will drop them from the training dataset.
###Code
# Selecionando as variáveis que serão utilizadas na fase de modelagem preditiva dentro do dataset de treino.
train <- train %>% select(Semana, Agencia_ID, Canal_ID, Ruta_SAK, Cliente_ID, Producto_ID, Demanda_uni_equil)
###Output
_____no_output_____
###Markdown
**5.3 Feature Engineering I - Transforming the target variable** We now return to the problem of the distorted values in the target variable. Once again we will create a boxplot to visualize the distribution of the data, along with a density plot.
###Code
# Criando um boxplot para visualizar a distribuição dos dados da variável Demanda_uni_equil.
train %>%
ggplot(aes(x = Demanda_uni_equil)) +
geom_boxplot(fill = '#566E3D', color = '#373D20', alpha = 0.8) +
theme_bw() +
theme(axis.text.y = element_blank()) +
xlab('Values') +
labs(title = 'Boxplot for variable: Demanda uni equil')
# Criando um gráfico de densidade para visualizar a distribuição dos dados da variável Demanda_uni_equil.
train %>%
ggplot(aes(x = Demanda_uni_equil)) +
geom_density(fill = '#A6EBC9') +
theme_bw() +
labs(title = 'Density graph for variable: Demanda uni equil') +
xlab('Demanda uni equil')
###Output
_____no_output_____
###Markdown
To work around this distortion, we will apply the **log1p (i.e., log(x + 1))** transform to reduce the skew in the data and make the patterns in this variable more visible. We will also use **expm1 (i.e., exp(x) - 1)** to invert the results obtained by applying **log1p**. In other words, we will transform the target variable for the predictive analysis and, at the end, convert the generated predictions back to their original scale. For more on how the **log** function acts on skewed data, [see this link](http://onlinestatbook.com/2/transformations/log.html). To better understand the **log1p and expm1** functions, [see this link](https://www.johndcook.com/blog/2010/06/07/math-library-functions-that-seem-unnecessary/). We highlight that the main reason for using **log1p** is the presence of zero values in the variable, which rules out the plain **log** function, since [log(0) is undefined](https://www.rapidtables.com/math/algebra/logarithm/Logarithm_of_0.html).
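A quick sanity check of this transform pair (a minimal sketch using the base-R `log1p()` and `expm1()` functions):

```r
# log1p is safe at zero: log1p(0) == 0, whereas log(0) is -Inf.
x <- c(0, 1, 9, 99)
y <- log1p(x)       # forward transform: log(x + 1)
back <- expm1(y)    # inverse transform: exp(y) - 1
all.equal(back, x)  # TRUE: the round trip recovers the original data
```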
###Code
# Verificando a existência de valores nulos dentro do conjunto de dados da variável Demanda_uni_equil.
prop.table(table(train$Demanda_uni_equil == 0))
###Output
_____no_output_____
###Markdown
We found that roughly **1.8%** of the target variable's values are equal to **0**. Let's apply **log1p** to the data.
###Code
# Calcula o logaritmo natural de cada valor da variável Demanda_uni_equil acrescido de 1 unidade, ou seja, log(Demanda_uni_equil + 1).
train$Demanda_uni_equil <- log1p(train$Demanda_uni_equil)
###Output
_____no_output_____
###Markdown
Once again we will create a boxplot and a density plot to visualize the effect of the transformation on the data.
###Code
# Criando um boxplot para visualizar a distribuição dos dados da variável Demanda_uni_equil transformada.
train %>%
ggplot(aes(x = Demanda_uni_equil)) +
geom_boxplot(fill = '#566E3D', color = '#373D20', alpha = 0.8) +
theme_bw() +
theme(axis.text.y = element_blank()) +
xlab('log(Demanda uni equil + 1)') +
labs(title = 'Boxplot for variable: Demanda uni equil')
# Criando um gráfico de densidade para visualizar a distribuição dos dados da variável Demanda_uni_equil transformada.
train %>%
ggplot(aes(x = Demanda_uni_equil)) +
geom_density(fill = '#A6EBC9') +
theme_bw() +
labs(title = 'Density graph for variable: Demanda uni equil') +
xlab('log(Demanda uni equil + 1)')
###Output
_____no_output_____
###Markdown
We conclude that applying **log1p** reduced the distortion caused by the extreme values of the target variable, which will help us reach better scores with the models we create. **5.4 Combining training and test data into a single dataset** Our goal in this step is to build a single dataset containing both the training and the test data. Before doing so, note that each of these datasets has variables exclusive to it. The test dataset has an **id** variable that is absent from the training data, so we will create it for the training set; we will do the same for the **Demanda_uni_equil** variable in the test dataset. To distinguish records belonging to the training set from those belonging to the test set, we will create a binary variable named **toTest** (**0**: training data; **1**: test data).
###Code
# Criando variável ID para o conjunto de dados de treino com um valor auxiliar.
train$id <- 0
# Criando a variável para indicar se o registro pertence ou não ao dados de treino ou aos dados de teste.
train$toTest <- 0
# Criando variável Demanda_uni_equil para o conjunto de dados de teste com um valor auxiliar.
test$Demanda_uni_equil <- 0
# Criando a variável para indicar se o registro pertence ou não ao dados de treino ou aos dados de teste.
test$toTest <- 1
###Output
_____no_output_____
###Markdown
To start this merge, we will take the records of just one of the weeks in the training dataset and combine them with the test data. The training records not used in this step will be used in the Feature Engineering step that follows.
###Code
# Unindo os registros do dataset de treino em que a variável Semana é igual a 9 com todos os registros de teste.
data <- rbind(train[Semana == 9], test)
###Output
_____no_output_____
###Markdown
Since all test records have already been merged, we can free memory by deleting the *test* variable.
###Code
# Removendo o dataset test.
rm(test)
###Output
_____no_output_____
###Markdown
**5.4.1 Feature Engineering II - Creating new predictor variables** The training records from the weeks not used in the previous step will now be aggregated into the new dataset we are building, in order to create new variables.
###Code
# Determinando a média da demanda ajustada de clientes por produto e a quantidade de registros de clientes por produto.
train[Semana <= 8][ , .(meanClientProd = mean(Demanda_uni_equil), countClientProd = .N),
by = .(Producto_ID, Cliente_ID)] %>%
merge(data, all.y = TRUE, by = c("Producto_ID", "Cliente_ID")) -> data
# Determinando a média da demanda ajustada por produto e a quantidade de registros por produto.
train[Semana <= 8][ , .(meanProd = mean(Demanda_uni_equil), countProd = .N),
by = .(Producto_ID)] %>%
merge(data, all.y = TRUE, by = c("Producto_ID")) -> data
# Determinando a média da demanda ajustada por cliente e a quantidade de registros por cliente.
train[Semana <= 8][ , .(meanClient = mean(Demanda_uni_equil), countCliente = .N),
by = .(Cliente_ID)] %>%
merge(data, all.y = TRUE, by = c("Cliente_ID")) -> data
# Visualizando as primeiras linhas do dataset.
head(data)
###Output
_____no_output_____
###Markdown
Note that this phase eliminates the problem of products appearing in the test data but not in the training data, since we now work with group-level means and counts for each variable. We can now also delete the *train* variable.
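A toy illustration of why this works (hypothetical data): `merge(..., all.y = TRUE)` keeps every row of the right-hand table and fills the aggregate columns with NA for groups never seen in the training weeks, rather than dropping those rows.

```r
library(data.table)

hist <- data.table(Producto_ID = c(1L, 2L),
                   meanProd    = c(5.0, 7.0))  # training-week aggregates
new  <- data.table(Producto_ID = c(1L, 3L))    # product 3 never seen before

merge(hist, new, all.y = TRUE, by = "Producto_ID")
#    Producto_ID meanProd
# 1:           1        5
# 2:           3       NA   <- unseen product kept, with NA instead of being dropped
```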
###Code
# Removendo o dataset train.
rm(train)
###Output
_____no_output_____
###Markdown
**5.5 Feature Engineering III - Transforming predictor variables** In this step we will scale the predictor variables to the range 0 to 1.
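Under the hood, `preProcess(..., method = 'range')` applies min-max scaling. A minimal sketch of the formula (assuming caret's default target range of [0, 1]):

```r
# Min-max scaling: (x - min) / (max - min) maps a column onto [0, 1].
min_max <- function(x) (x - min(x)) / (max(x) - min(x))

min_max(c(2, 4, 6, 10))  # 0.00 0.25 0.50 1.00
```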
###Code
# Definindo método de pré-processamento.
params <- preProcess(data[, !c('id', 'Demanda_uni_equil', 'toTest')], method = 'range')
# Transformando os dados.
data <- predict(params, data)
# Visualizando as primeiras linhas do dataset.
head(data)
###Output
_____no_output_____
###Markdown
**5.6 Splitting training and test data** We will now separate the training and test records from the dataset built in the previous steps.
###Code
# Extraindo registros de treino.
train <- data %>%
filter(toTest == 0) %>%
select(-c(id, toTest))
# Visualizando as primeiras linhas do dataset.
head(train)
# Extraindo registros de teste.
test <- data %>%
filter(toTest == 1) %>%
select(-c(Demanda_uni_equil, toTest))
# Visualizando as primeiras linhas do dataset.
head(test)
###Output
_____no_output_____
###Markdown
We can now remove the *data* variable.
###Code
# Removendo o dataset data.
rm(data)
###Output
_____no_output_____
###Markdown
**5.7 Creating a function to generate XGBoost models with different parameter settings** We chose the **XGBoost** algorithm for our predictive model because it performs well on the evaluation metric to be used and is considerably faster than other algorithms. Since we don't know which configuration values to use, we will create a function that generates models with different settings and then select the one that performs best on the test data. We will evaluate the candidate models on the training data, so we must watch for *overfitting* when selecting the model to use for the test-set predictions. With this strategy we can extract the best this algorithm has to offer.
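The grid-search function below scores models with `rmsle` from the Metrics package. For reference, a minimal sketch of the standard RMSLE definition (note that since the target was already log1p-transformed above, the scores reported here are effectively computed on transformed values):

```r
# RMSLE: root mean squared error computed on log1p-transformed values.
rmsle_manual <- function(actual, predicted) {
  sqrt(mean((log1p(predicted) - log1p(actual))^2))
}

rmsle_manual(c(1, 2, 3), c(1, 2, 3))  # 0: perfect predictions
```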
###Code
# Definindo uma função para gerar diferentes modelos com diferentes valores de parametrização baseados no algoritmo XGboost.
getBetterXGboostParameters <- function(data, label, maxDepth = 13, nEta = 0.2, nRounds = 86, subsample = 0.85, colsample = 0.7, statusPrint = F) {
# Criando o dataframe para salvar os resultados dos modelos.
featuresXGboost <- data.frame()
# Define uma varíavel auxiliar para permitir o acompanhamento do progresso na avaliação dos modelos criados.
count <- 0
# Define o número total de modelos a serem criados.
total <- length(maxDepth) * length(nEta) * length(nRounds) * length(subsample) * length(colsample)
# Convertendo os dados das variáveis do dataset para o tipo DMatrix (uma matriz densa).
dTrain <- xgb.DMatrix(
data = data, # Define as variáveis preditoras.
label = label # Define a variável a ser prevista.
)
for(m in maxDepth) {
for(e in nEta) {
for(r in nRounds) {
for(s in subsample) {
for(c in colsample) {
# Define um seed para permitir que o mesmo resultado do experimento seja reproduzível.
set.seed(100)
# Criando o modelo baseado no algoritmo XGboost.
model_xgb <- xgb.train(
params = list(
objective = "reg:linear", # Linear regression objective (not logistic; newer XGBoost versions use "reg:squarederror").
booster = "gbtree", # Definindo o booster a ser utilizado.
eta = e, # Define a taxa de aprendizado do modelo.
max_depth = m, # Define o tamanho máximo da árvore.
subsample = s, # Define a proporção de subamostra da instância de treinamento.
colsample_bytree = c # Define a proporção da subamostra de colunas ao construir cada árvore.
),
data = dTrain, # Define as variáveis preditoras e a variável a ser prevista.
feval = rmsle, # Define a função de avaliação a ser utilizada.
nrounds = r, # Define o número de iterações que o algoritmo deve executar.
verbose = F, # Define a exibição da queda da taxa de erro durante o treinamento.
maximize = FALSE, # Define que a pontuação da avaliação deve ser minimizada.
nthread = 16 # Define o número de threads que devem ser usadas. Quanto maior for esse número, mais rápido será o treinamento.
)
# Realizando as previsões com o modelo baseado no algoritmo XGboost.
pred <- predict(model_xgb, data)
# Armazena os parâmetros utilizados para criação do modelo e o score da métrica RMSLE obtido em um dataframe.
featuresXGboost <- rbind(featuresXGboost, data.frame(
maxDepth = m,
eta = e,
nRounds = r,
s = s,
c = c,
rmsle = rmsle(label, pred)
))
# Incrementa o número de modelos avaliados.
count <- count + 1
# Imprime a porcetagem de progresso do treinamento e o melhor score da métrica RMSLE já alcançado.
print(paste(100 * count / total, '%, best rmsle: ', min(featuresXGboost$rmsle)))
# Salvando dataframe com os resultados gerados em um arquivo .csv.
write.csv(
x = featuresXGboost, # Determinando o conjunto de dados a ser salvo.
file = "/content/outputs/featuresXGboost.csv", # Define o nome com o qual o conjunto de dados deve ser salvo.
row.names = FALSE # Indica que o nome das linhas não deve ser gravado no arquivo a ser salvo.
)
}
}
}
}
}
# Retorna o dataframe com os resultados obtidos pelo treinamento de cada modelo.
featuresXGboost
}
###Output
_____no_output_____
###Markdown
**5.8 Creating the XGBoost model** XGBoost can handle *NA* values natively, so we will not transform the data to deal with them. With that said, we can build our model.
###Code
# Gerando diferentes modelos baseados no algoritmo XGboost e determinando os scores para a métrica RMSLE de cada um.
featuresXGboost <- getBetterXGboostParameters(
data = as.matrix(train %>% select(- Demanda_uni_equil)),
label = train$Demanda_uni_equil,
maxDepth = 12:14,
nEta = 0.2,
nRounds = 85:87,
subsample = 0.85,
colsample = 0.7,
statusPrint = F
)
# Salvando dataframe com os resultados gerados em um arquivo .csv.
fwrite(featuresXGboost, '/content/outputs/featuresXGboost.csv')
###Output
_____no_output_____
###Markdown
If you wish to skip running the previous code block, simply load the precomputed results saved in the CSV file below:
###Code
# Load the dataframe with the results obtained for each XGBoost model created.
featuresXGboost <- fread('/content/outputs/featuresXGboost.csv')
###Output
_____no_output_____
###Markdown
Let's print the records of the models we created.
###Code
# View the dataframe with the results obtained during training.
featuresXGboost
###Output
_____no_output_____
###Markdown
After using each of the configurations above to predict the test data, we observed that the one with the best result is described in **row 5**. The models recorded after this row perform worse because they begin to overfit.
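In pandas terms, the "inspect the log and pick a configuration" step looks roughly like the sketch below. The values are hypothetical, and, as noted, the row with the lowest training RMSLE is not automatically the best model on unseen data:

```python
import pandas as pd

# Hypothetical grid-search log mirroring the R `featuresXGboost` dataframe.
results = pd.DataFrame({
    "maxDepth": [12, 12, 13, 13, 14],
    "nRounds":  [85, 86, 85, 86, 85],
    "rmsle":    [0.472, 0.465, 0.459, 0.455, 0.451],
})

# Rank candidates by training RMSLE as a first filter, then validate the
# top rows on held-out data before committing: the lowest training score
# may simply belong to the most overfit model.
ranked = results.sort_values("rmsle")
best = ranked.iloc[0]
```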
###Code
# View the best configuration for predicting the test data.
bestXGboost <- featuresXGboost[5, ]
bestXGboost
# Convert the dataset's variables to the DMatrix type (a dense matrix).
dTrain <- xgb.DMatrix(
  data = as.matrix(train %>% select(- Demanda_uni_equil)), # Predictor variables.
  label = train$Demanda_uni_equil # Target variable to predict.
)
# Set a seed so the experiment's results are reproducible.
set.seed(100)
# Create the model based on the XGBoost algorithm.
model_xgb <- xgb.train(
params = list(
    objective = "reg:linear", # Fit a linear regression model.
    booster = "gbtree", # The booster to use.
    eta = bestXGboost$eta, # The model's learning rate.
    max_depth = bestXGboost$maxDepth, # Maximum tree depth.
    subsample = bestXGboost$s, # Subsample ratio of the training instances.
    colsample_bytree = bestXGboost$c # Subsample ratio of columns when constructing each tree.
),
  data = dTrain, # Predictor variables and the target variable.
  feval = rmsle, # The evaluation function to use.
  nrounds = bestXGboost$nRounds, # Number of boosting iterations to run.
  verbose = T, # Print the error-rate decline during training.
  print_every_n = 5, # Print the evaluation message every 5 iterations.
  maximize = FALSE, # The evaluation score should be minimized.
  nthread = 16 # Number of threads to use; more threads speed up training.
)
# Generate predictions with the XGBoost-based model.
pred <- predict(model_xgb, as.matrix(test %>% select(- id)))
# Convert the predictions back to the target variable's original scale (exp(Demanda_uni_equil) - 1).
pred <- expm1(pred)
# Replace any negative prediction with zero.
pred[pred < 0] <- 0
# Save the generated results to a CSV file.
write.csv(
  x = data.frame(id = as.integer(test$id), Demanda_uni_equil = pred), # The dataset to be saved.
  file = "/content/outputs/results.csv", # File path for the saved dataset.
  row.names = FALSE # Do not write row names to the file.
)
###Output
_____no_output_____ |
1220-base_feature.ipynb | ###Markdown
A quick test of classification accuracy using only the base features.
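The code below trains several classifiers with 10-fold cross-validation. As a minimal, self-contained illustration of that protocol (synthetic data, a single classifier, assuming a recent scikit-learn):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for the feature CSV loaded below.
X, y = make_classification(n_samples=200, n_features=8, random_state=0)

# One accuracy score per fold; the notebook reports the mean of these.
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y,
                         cv=10, scoring="accuracy")
print(scores.mean())
```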
###Code
import codecs
from itertools import *
import numpy as np
def load_data(filename):
file = codecs.open(filename,'r','utf-8')
data = []
label = []
for line in islice(file,0,None):
line = line.strip().split(',')
#print ("reading data....")
data.append([float(i) for i in line[0:-1]])
label.append(line[-1])
x = np.array(data)
y = np.array(label)
print (x)
print (y)
return x,y
import pylab as pl
from itertools import *
from sklearn import svm
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn import tree
from sklearn import model_selection
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
def train(x_train,y_label): #### Use 10-fold cross-validation to ensure the accuracy estimates are reliable
##########logisticRegression################
clf1 = LogisticRegression()
score1 = model_selection.cross_val_score(clf1,x_train,y_label,cv=10,scoring="accuracy")
x = [int(i) for i in range(1,11)]
y = score1
pl.ylabel(u'Accuracy')
pl.xlabel(u'times')
pl.plot(x,y,label='LR')
pl.legend()
pl.savefig("1_LR.png")
#pl.show()
print ('The accuracy of LogisticRegression:')
print (np.mean(score1))
###############SVM(linear)###################
clf2 = svm.LinearSVC(random_state=2016)
score2 = model_selection.cross_val_score(clf2,x_train,y_label,cv=10,scoring='accuracy')
#print score2
print ('The accuracy of linearSVM:')
print ((np.mean(score2)))
x = [int(i) for i in range(1, 11)]
y = score2
pl.ylabel(u'Accuracy')
pl.xlabel(u'times')
pl.plot(x, y,label='SVM')
pl.legend()
pl.savefig("2_SVM.png")
#pl.show()
#################Naive Bayes################
clf3 = GaussianNB()
score3 = model_selection.cross_val_score(clf3,x_train,y_label,cv=10,scoring='accuracy')
print ("The accuracy of Naive Bayes:")
print ((np.mean(score3)))
x = [int(i) for i in range(1, 11)]
y = score3
pl.ylabel(u'Accuracy')
pl.xlabel(u'times')
pl.plot(x, y,label='NB')
pl.legend()
pl.savefig("3_NB.png")
#pl.show()
################DecisionTree###############
clf4 = tree.DecisionTreeClassifier()
score4 = model_selection.cross_val_score(clf4,x_train,y_label,cv=10,scoring="accuracy")
print ('The accuracy of DB:')
print (np.mean(score4))
x = [int(i) for i in range(1, 11)]
y = score4
pl.ylabel(u'Accuracy')
pl.xlabel(u'times')
pl.plot(x, y,label='DB')
pl.legend()
pl.savefig("4_DB.png")
#pl.show()
X,Y = load_data('base_feature/feature_ATGC_freq.csv')
train(X,Y)
###Output
The accuracy of LogisticRegression:
0.610124610592
The accuracy of linearSVM:
0.619496365524
The accuracy of Naive Bayes:
0.546936656282
The accuracy of DB:
0.574852890273
|
examples/Convallaria/Convallaria-Training.ipynb | ###Markdown
DivNoising - Training. This notebook contains an example of how to train a DivNoising VAE. This requires a noise model (a model of the imaging noise), which can either be measured from calibration data or estimated from the raw noisy images themselves. If you haven't done so, please first run 'Convallaria-CreateNoiseModel.ipynb', which will download the data and create a noise model.
###Code
# We import all our dependencies.
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torch.utils.data import TensorDataset
from torch.utils.data import Dataset, DataLoader
from torch.nn import init
import os
import glob
from tifffile import imread
from matplotlib import pyplot as plt
import sys
sys.path.append('../../')
from divnoising import dataLoader
from divnoising import utils
from divnoising import training
from nets import model
from divnoising import histNoiseModel
from divnoising.gaussianMixtureNoiseModel import GaussianMixtureNoiseModel
import urllib
import os
import zipfile
from tqdm import tqdm
device = torch.device("cuda:0")
###Output
_____no_output_____
###Markdown
Specify ```path``` to load data. Your data should be stored in the directory indicated by ```path```.
###Code
path="./data/Convallaria_diaphragm/"
observation= imread(path+'20190520_tl_25um_50msec_05pc_488_130EM_Conv.tif')
###Output
_____no_output_____
###Markdown
Training Data Preparation For training we first need to follow some preprocessing steps that prepare the data. Data preprocessing: we first divide the data into training and validation sets, with 85% of the images allocated to the training set and the rest to the validation set. Then we augment the training data 8-fold by 90 degree rotations and flips.
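`utils.augment_data` comes from the DivNoising package. Below is a plausible sketch of the 8-fold augmentation it reports (the four 90-degree rotations of a square image stack, each with and without a horizontal flip); this is an assumption about its behavior, not the package's actual source:

```python
import numpy as np

def augment_8fold(stack):
    # stack: (N, H, W) array of square images; returns an (8N, H, W) array.
    out = []
    for k in range(4):                    # rotations by 0, 90, 180, 270 degrees
        rot = np.rot90(stack, k=k, axes=(1, 2))
        out.append(rot)
        out.append(np.flip(rot, axis=2))  # plus a horizontal flip of each
    return np.concatenate(out, axis=0)
```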
###Code
train_images = observation[:int(0.85*observation.shape[0])]
val_images = observation[int(0.85*observation.shape[0]):]
print("Shape of training images:", train_images.shape, "Shape of validation images:", val_images.shape)
train_images = utils.augment_data(train_images)
###Output
Shape of training images: (85, 1024, 1024) Shape of validation images: (15, 1024, 1024)
Raw image size after augmentation (680, 1024, 1024)
###Markdown
We extract overlapping patches of size ```patch_size x patch_size``` from training and validation images. Specify the parameter ```patch_size```. The number of patches to be extracted is automatically determined depending on the size of images.
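`utils.extract_patches` is also part of the DivNoising package. A minimal sketch of random patch cropping of this kind follows; it is an assumption about the behavior, with a hypothetical `seed` argument for reproducibility:

```python
import numpy as np

def extract_random_patches(stack, patch_size, num_patches, seed=0):
    # stack: (N, H, W); samples `num_patches` random square crops per image.
    rng = np.random.default_rng(seed)
    n, h, w = stack.shape
    patches = np.empty((n * num_patches, patch_size, patch_size), stack.dtype)
    idx = 0
    for img in stack:
        for _ in range(num_patches):
            y = rng.integers(0, h - patch_size + 1)
            x = rng.integers(0, w - patch_size + 1)
            patches[idx] = img[y:y + patch_size, x:x + patch_size]
            idx += 1
    return patches
```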
###Code
patch_size = 128
img_width = observation.shape[2]
img_height = observation.shape[1]
num_patches = int(float(img_width*img_height)/float(patch_size**2)*2)
x_train_crops = utils.extract_patches(train_images, patch_size, num_patches)
x_val_crops = utils.extract_patches(val_images, patch_size, num_patches)
###Output
100%|██████████| 680/680 [00:04<00:00, 151.58it/s]
100%|██████████| 15/15 [00:00<00:00, 157.53it/s]
###Markdown
Finally, we compute the mean and standard deviation of our combined train and validation sets and do some additional preprocessing.
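`utils.getMeanStdData` reduces the pooled train and validation pixels to a single mean and standard deviation, later used to normalize the network's inputs. A minimal sketch, assuming simple pooling over all pixels:

```python
import numpy as np

def get_mean_std(*stacks):
    # Flatten every image stack into one pixel vector and return its
    # global mean and standard deviation.
    pooled = np.concatenate([np.asarray(s, float).ravel() for s in stacks])
    return pooled.mean(), pooled.std()
```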
###Code
data_mean, data_std = utils.getMeanStdData(train_images, val_images)
x_train, x_val = utils.convertToFloat32(x_train_crops, x_val_crops)
x_train_extra_axis = x_train[:,np.newaxis]
x_val_extra_axis = x_val[:,np.newaxis]
x_train_tensor = utils.convertNumpyToTensor(x_train_extra_axis)
x_val_tensor = utils.convertNumpyToTensor(x_val_extra_axis)
print("Shape of training tensor:", x_train_tensor.shape)
###Output
Shape of training tensor: torch.Size([87040, 1, 128, 128])
###Markdown
Configure DivNoising model Here we specify some parameters of our DivNoising network needed for training. The parameter z_dim specifies the size of the bottleneck dimension corresponding to each pixel. The parameter in_channels specifies the number of input channels, which for this dataset is 1. We currently support only 1-channel input, but this may be extended to an arbitrary number of channels in the future. The parameter init_filters specifies the number of filters in the first layer of the network. The parameter n_depth specifies the depth of the network. The parameter batch_size specifies the batch size used for training. The parameter n_filters_per_depth specifies the number of convolutions per depth. The parameter directory_path specifies the directory where the model will be saved. The parameter n_epochs specifies the number of training epochs. The parameter lr specifies the learning rate. The parameter val_loss_patience specifies the number of epochs after which training will be terminated if the validation loss does not decrease by a factor of 1e-6. The parameter noiseModel is the noise model you want to use. Run the notebook ```Convallaria-CreateNoiseModel.ipynb``` if you have not yet generated the noise model for this dataset. If set to None, a Gaussian noise model is used. The parameter gaussian_noise_std is the standard deviation of the Gaussian noise model. This should only be set if 'noiseModel' is None. Otherwise, if you have already created a noise model, set it to ```None```. The parameter model_name specifies the name under which the model weights will be saved for prediction later. __Note:__ We observed good performance of the DivNoising network for most datasets with the default settings in the next cell. However, we also observed that tuning the parameters sensibly can improve performance further.
###Code
z_dim=64
in_channels = 1
init_filters = 32
n_filters_per_depth=2
n_depth=2
batch_size=32
directory_path = "./"
n_epochs = int(22000000/(x_train_tensor.shape[0])) # A heuristic to set the number of epochs
lr=0.001
val_loss_patience = 100
gaussian_noise_std = None
#noise_model_params= np.load("/home/krull/fileserver/experiments/ReDo/convallaria/GMMNoiseModel_convallaria_3_2_calibration.npz")
noise_model_params= np.load("data/Convallaria_diaphragm/GMMNoiseModel_convallaria_3_2_calibration.npz")
noiseModel = GaussianMixtureNoiseModel(params = noise_model_params, device = device)
model_name = "convallaria-"
###Output
_____no_output_____
###Markdown
Train network __Note:__ We observed that for certain datasets, the KL loss goes towards 0. This phenomenon is called ```posterior collapse``` and is undesirable. We prevent it by aborting and restarting the training once the KL drops below a threshold (```kl_min```). An alternative approach is a technique called *KL annealing*, where we gradually increase the weight on the KL divergence loss term from 0 to 1 over a number of steps. This can be activated by setting the parameter ```kl_annealing``` to ```True```. The parameter ```kl_start``` specifies the epoch when KL annealing will start. The parameter ```kl_annealtime``` specifies until which epoch KL annealing will be operational. If the parameter ```kl_annealing``` is set to ```False```, the values of ```kl_start``` and ```kl_annealtime``` are ignored.
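A linear KL-annealing schedule of the kind described above can be sketched as follows. This illustrates the idea only; `kl_weight` is a hypothetical helper, not the package's implementation:

```python
def kl_weight(epoch, kl_start=0, kl_annealtime=3):
    # Weight on the KL term: 0 before `kl_start`, then a linear ramp that
    # reaches 1 after `kl_annealtime` epochs and stays there.
    if epoch < kl_start:
        return 0.0
    return min(1.0, (epoch - kl_start) / kl_annealtime)
```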
###Code
train_dataset = dataLoader.MyDataset(x_train_tensor,x_train_tensor)
val_dataset = dataLoader.MyDataset(x_val_tensor,x_val_tensor)
trainHist, reconHist, klHist, valHist = None, None, None, None
attempts=0
while trainHist is None:
attempts+=1
print('start training: attempt '+ str(attempts))
vae = model.VAE(z_dim=z_dim,
in_channels=in_channels,
init_filters = init_filters,
n_filters_per_depth=n_filters_per_depth,
n_depth=n_depth)
train_loader = DataLoader(train_dataset, batch_size=batch_size, shuffle=True)
val_loader = DataLoader(val_dataset, batch_size=batch_size, shuffle=True)
trainHist, reconHist, klHist, valHist = training.trainNetwork(net=vae, train_loader=train_loader,
val_loader=val_loader,
device=device,directory_path=directory_path,
model_name=model_name,
n_epochs=n_epochs, batch_size=batch_size,lr=lr,
val_loss_patience = val_loss_patience,
kl_annealing = False,
kl_start = 0,
kl_annealtime = 3,
kl_min=1e-5,
data_mean =data_mean,data_std=data_std,
noiseModel = noiseModel,
gaussian_noise_std = gaussian_noise_std)
###Output
start training: attempt 1
postersior collapse: aborting
start training: attempt 2
postersior collapse: aborting
start training: attempt 3
Epoch[1/252] Training Loss: 6.474 Reconstruction Loss: 6.306 KL Loss: 0.167
kl_weight: 1.0
saving ./convallaria-last_vae.net
saving ./convallaria-best_vae.net
Patience: 0 Validation Loss: 6.30342960357666 Min validation loss: 6.30342960357666
Time for epoch: 151seconds
Est remaining time: 10:31:41 or 37901 seconds
----------------------------------------
###Markdown
Plotting losses
###Code
trainHist=np.load(directory_path+"/train_loss.npy")
reconHist=np.load(directory_path+"/train_reco_loss.npy")
klHist=np.load(directory_path+"/train_kl_loss.npy")
valHist=np.load(directory_path+"/val_loss.npy")
plt.figure(figsize=(18, 3))
plt.subplot(1,3,1)
plt.plot(trainHist,label='training')
plt.plot(valHist,label='validation')
plt.xlabel("epochs")
plt.ylabel("loss")
plt.legend()
plt.subplot(1,3,2)
plt.plot(reconHist,label='training')
plt.xlabel("epochs")
plt.ylabel("reconstruction loss")
plt.legend()
plt.subplot(1,3,3)
plt.plot(klHist,label='training')
plt.xlabel("epochs")
plt.ylabel("KL loss")
plt.legend()
plt.show()
###Output
_____no_output_____ |
docs/apis/python-bindings/tutorials/ClassAds-Introduction.ipynb | ###Markdown
ClassAds Introduction Launch this tutorial in a Jupyter Notebook on Binder: [](https://mybinder.org/v2/gh/htcondor/htcondor-python-bindings-tutorials/master?urlpath=lab/tree/ClassAds-Introduction.ipynb) In this tutorial, we will learn the basics of the [ClassAd language](https://research.cs.wisc.edu/htcondor/classad/classad.html), the policy and data exchange language that underpins all of HTCondor. ClassAds are fundamental in the HTCondor ecosystem, so understanding them will be good preparation for future tutorials. The Python implementation of the ClassAd language is in the `classad` module:
###Code
import classad
###Output
_____no_output_____
###Markdown
Expressions The ClassAd language is built around _values_ and _expressions_. If you know Python, both concepts are familiar. Examples of familiar values include: integers (`1`, `2`, `3`), floating point numbers (`3.145`, `-1e-6`), and booleans (`true` and `false`). Examples of expressions are: attribute references (`foo`), boolean expressions (`a && b`), arithmetic expressions (`123 + c`), and function calls (`ifThenElse(foo == 123, 3.14, 5.2)`). Expressions can be evaluated to values. Unlike many programming languages, expressions are lazily evaluated: they are kept in memory as expressions until a value is explicitly requested. ClassAds holding expressions to be evaluated later are how many internal parts of HTCondor, like job requirements, are expressed. Expressions are represented in Python with `ExprTree` objects. The desired ClassAd expression is passed as a string to the constructor:
###Code
arith_expr = classad.ExprTree("1 + 4")
print(f"ClassAd arithmetic expression: {arith_expr} (of type {type(arith_expr)})")
###Output
ClassAd arithmetic expression: 1 + 4 (of type <class 'classad.classad.ExprTree'>)
###Markdown
Expressions can be evaluated on-demand:
###Code
print(arith_expr.eval())
###Output
5
###Markdown
Here's an expression that includes a ClassAd function:
###Code
function_expr = classad.ExprTree("ifThenElse(4 > 6, 123, 456)")
print(f"Function expression: {function_expr}")
value = function_expr.eval()
print(f"Corresponding value: {value} (of type {type(value)})")
###Output
Function expression: ifThenElse(4 > 6,123,456)
Corresponding value: 456 (of type <class 'int'>)
###Markdown
Notice that, when possible, we convert ClassAd values to Python values. Hence, the result of evaluating the expression above is the Python `int` `456`. There are two important values in the ClassAd language that have no direct equivalent in Python: `Undefined` and `Error`. `Undefined` occurs when an expression references an attribute that is not defined; it is analogous to a `NameError` exception in Python (but there is no concept of an exception in ClassAds). For example, evaluating an unset attribute produces `Undefined`:
###Code
print(classad.ExprTree("foo").eval())
###Output
Undefined
###Markdown
`Error` occurs primarily when an expression combines two different types or when a function call occurs with incorrect arguments. Note that even in this case, no Python exception is raised!
###Code
print(classad.ExprTree('5 + "bar"').eval())
print(classad.ExprTree('ifThenElse(1, 2, 3, 4, 5)').eval())
###Output
Error
Error
###Markdown
ClassAds The concept that makes the ClassAd language special is, of course, the _ClassAd_! The ClassAd is analogous to a Python or JSON dictionary. _Unlike_ a dictionary, which is a set of unique key-value pairs, the ClassAd object is a set of key-_expression_ pairs. The expressions in the ad can contain attribute references to other keys in the ad, which will be followed when evaluated. There are two common ways to represent ClassAds in text. The "new ClassAd" format: ```[ a = 1; b = "foo"; c = b ]``` And the "old ClassAd" format, with one `key = value` assignment per line and no semicolons. Despite the "new" and "old" monikers, "new" is over a decade old. HTCondor command line tools utilize the "old" representation. The Python bindings default to "new". A `ClassAd` object may be initialized via a string in either of the above representations. As a ClassAd is so similar to a Python dictionary, they may also be constructed from a dictionary. Let's construct some ClassAds!
###Code
ad1 = classad.ClassAd("""
[
a = 1;
b = "foo";
c = b;
d = a + 4;
]""")
print(ad1)
###Output
[
a = 1;
b = "foo";
c = b;
d = a + 4
]
###Markdown
We can construct the same ClassAd from a dictionary:
###Code
ad_from_dict = classad.ClassAd(
{
"a": 1,
"b": "foo",
"c": classad.ExprTree("b"),
"d": classad.ExprTree("a + 4"),
})
print(ad_from_dict)
###Output
[
d = a + 4;
c = b;
b = "foo";
a = 1
]
###Markdown
ClassAds are quite similar to dictionaries; in Python, the `ClassAd` object behaves similarly to a dictionary and has similar convenience methods:
###Code
print(ad1["a"])
print(ad1["not_here"])
print(ad1.get("not_here", 5))
ad1.update({"e": 8, "f": True})
print(ad1)
###Output
[
f = true;
e = 8;
a = 1;
b = "foo";
c = b;
d = a + 4
]
###Markdown
Remember our example of an `Undefined` attribute above? We can now evaluate references within the context of the ad:
###Code
print(ad1.eval("d"))
###Output
5
###Markdown
Note that an expression is still not evaluated until requested, even if it is invalid:
###Code
ad1["g"] = classad.ExprTree("b + 5")
print(ad1["g"])
print(type(ad1["g"]))
print(ad1.eval("g"))
###Output
b + 5
<class 'classad.classad.ExprTree'>
Error
ClassAds The concept that makes the ClassAd language special is, of course, the _ClassAd_!The ClassAd is analogous to a Python or JSON dictionary. _Unlike_ a dictionary, which is a set of unique key-value pairs, the ClassAd object is a set of key-_expression_ pairs.The expressions in the ad can contain attribute references to other keys in the ad, which will be followed when evaluated.There are two common ways to represent ClassAds in text.The "new ClassAd" format:```[ a = 1; b = "foo"; c = b]```And the "old ClassAd" format:```a = 1b = "foo"c = b```Despite the "new" and "old" monikers, "new" is over a decade old.HTCondor command line tools utilize the "old" representation.The Python bindings default to "new".A `ClassAd` object may be initialized via a string in either of the above representation.As a ClassAd is so similar to a Python dictionary, they may also be constructed from a dictionary.Let's construct some ClassAds!
###Code
ad1 = classad.ClassAd("""
[
a = 1;
b = "foo";
c = b;
d = a + 4;
]""")
print(ad1)
###Output
[
a = 1;
b = "foo";
c = b;
d = a + 4
]
###Markdown
We can construct the same ClassAd from a dictionary:
###Code
ad_from_dict = classad.ClassAd(
{
"a": 1,
"b": "foo",
"c": classad.ExprTree("b"),
"d": classad.ExprTree("a + 4"),
})
print(ad_from_dict)
###Output
[
d = a + 4;
c = b;
b = "foo";
a = 1
]
###Markdown
ClassAds are quite similar to dictionaries; in Python, the `ClassAd` object behaves similarly to a dictionary and has similar convenience methods:
###Code
print(ad1["a"])
print(ad1["not_here"])
print(ad1.get("not_here", 5))
ad1.update({"e": 8, "f": True})
print(ad1)
###Output
[
f = true;
e = 8;
a = 1;
b = "foo";
c = b;
d = a + 4
]
###Markdown
Remember our example of an `Undefined` attribute above? We now can evaluate references within the context of the ad:
###Code
print(ad1.eval("d"))
###Output
5
###Markdown
Note that an expression is still not evaluated until requested, even if it is invalid:
###Code
ad1["g"] = classad.ExprTree("b + 5")
print(ad1["g"])
print(type(ad1["g"]))
print(ad1.eval("g"))
###Output
b + 5
<class 'classad.classad.ExprTree'>
Error
|
code/data/make_events.ipynb | ###Markdown
Overview This script takes as input the in-MEG behavioral data (which is trial based) and transforms it into events for MNE processing.Each trial has several distinct events: - trial onset - precue - each RSVP image (including the target on 'present' trials) - postcue - response - etc. Each of these has a corresponding MEG code:| Event | Code Value ||-------------------------|------------|| precue - cue shown | 81 || precue - cue not shown | 80 || object - flower | 11 || object - car | 12 || object - shoe | 13 || object - chair | 14 || scene - woods | 21 || scene - bathroom | 22 || scene - desert | 23 || scene - coast | 24 || target image | 9 || postcue - cue shown | 181 || postcue - cue not shown | 180 || response screen | 77 | This structure makes it easy to categorize events by the type of condition they were a part of. Below is an example of a precued, target-present trial, for which the target was a flower, and which the subject got correct: | subject | trial_num | response_correct | cue_type | precue_type | postcue_type | target_identity | target_category | variable | value ||---------|-----------|------------------|----------|:-----------:|--------------|-----------------|-----------------|----------------|-------|| s002 | 1 | 1 | precue | precue | nocue | flower | object | precue | 81 || s002 | 1 | 1 | precue | precue | nocue | flower | object | picture1_value | 24 || s002 | 1 | 1 | precue | precue | nocue | flower | object | picture2_value | 119 || s002 | 1 | 1 | precue | precue | nocue | flower | object | picture3_value | 14 || s002 | 1 | 1 | precue | precue | nocue | flower | object | picture4_value | 12 || s002 | 1 | 1 | precue | precue | nocue | flower | object | picture5_value | 22 || s002 | 1 | 1 | precue | precue | nocue | flower | object | picture6_value | 21 || s002 | 1 | 1 | precue | precue | nocue | flower | object | postcue | 180 |It may seem redundant, but I can easily subset the MEG data into epochs for "picture 5" for which the target category 
was an object. All of the event/trial categorization should take place in this script. For example, if I wanted to look at the first half of trials, I'd add a column to the resulting events file for "exp_half [1/2]".
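As a quick sketch of that coding scheme (the dict and helper names below are illustrative only, not part of this pipeline), the code table can be captured in plain Python, with target pictures recognized by the trailing 9 that gets appended further down in the script:

```python
# Illustrative mapping of the event-code table above (assumed names).
EVENT_CODES = {
    "precue_shown": 81, "precue_absent": 80,
    "flower": 11, "car": 12, "shoe": 13, "chair": 14,
    "woods": 21, "bathroom": 22, "desert": 23, "coast": 24,
    "postcue_shown": 181, "postcue_absent": 180,
    "response_screen": 77,
}

def decode_picture(value):
    """Split a picture event value into (stimulus_code, is_target).

    A target picture carries a trailing 9, e.g. a flower that is the
    target is coded 119 (11 -> flower, 9 -> target).
    """
    text = str(value)
    if len(text) == 3 and text.endswith("9"):
        return int(text[:2]), True
    return value, False

print(decode_picture(119))  # (11, True): a flower that was the target
print(decode_picture(24))   # (24, False): a coast filler image
```

With the long-format events table produced below, a helper like this makes it easy to select, say, all target pictures regardless of category.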
###Code
import os
import pandas as pd
data_path = '../../data/raw/'
infiles = [item for item in os.listdir(data_path) if item.endswith('txt')]
columns = ['trial_num','cue_type','target_identity','target_category','choices','target_presence',
'response','response_correct','response_time','total_trial_time',
'precue_value','IDUNNOpre1','IDUNNOpre2','precue_time','precue_time_actual','precue_position',
'picture1_value','IDUNNO1','picture1_stim','picture1_time','picture1_time_actual', 'picture1_posititon',
'picture2_value','IDUNNO2','picture2_stim','picture2_time','picture2_time_actual', 'picture2_posititon',
'picture3_value','IDUNNO3','picture3_stim','picture3_time','picture3_time_actual', 'picture3_posititon',
'picture4_value','IDUNNO4','picture4_stim','picture4_time','picture4_time_actual', 'picture4_posititon',
'picture5_value','IDUNNO5','picture5_stim','picture5_time','picture5_time_actual', 'picture5_posititon',
'picture6_value','IDUNNO6','picture6_stim','picture6_time','picture6_time_actual', 'picture6_posititon',
'postcue_value','IDUNNOpost1','IDUNNOpost2','postcue_time','postcue_time_actual','postcue_position']
df_all = []
for infile in infiles:
subject = infile[:4]
if subject not in ['s002','s008']: #s002 is ok, just not the same format
df = pd.read_csv(data_path+infile, sep="\t", header = None)
df.columns = columns
df['subject'] = subject
df_all.append(df)
elif subject == 's002':
df2 = pd.read_csv(data_path+infile, sep="\t")
df2['subject'] = subject
df2.columns = columns+['subject']
df_all.append(df2)
result = pd.concat(df_all)
result = result.reset_index(drop=True)
result.head(10)
# Take the RSVP images and replace each filename with its numeric event code
stimlist = result[['subject','picture1_value','picture2_value','picture3_value','picture4_value','picture5_value','picture6_value']]
stimlist = stimlist.apply(lambda x: x.astype(str).str.replace('[', '', regex=False))
stimlist = stimlist.apply(lambda x: x.astype(str).str.replace(']', '', regex=False))
stimlist = stimlist.apply(lambda x: x.astype(str).str.replace("'", '', regex=False))
scene_dict = {'woods':1,'bathroom':2,'desert':3,'coast':4}
object_dict = {'flower':1,'car':2,'shoe':3,'chair':4}
stimlist.replace(r'\bwoods_\d*\b', 'woods', regex=True,inplace=True)
stimlist.replace(r'\bbathroom_\d*\b', 'bathroom', regex=True,inplace=True)
stimlist.replace(r'\bdesert_\d*\b', 'desert', regex=True,inplace=True)
stimlist.replace(r'\bcoast_\d*\b', 'coast', regex=True,inplace=True)
stimlist.replace(r'\bflower_\d*\b', 'flower', regex=True,inplace=True)
stimlist.replace(r'\bcar_\d*\b', 'car', regex=True,inplace=True)
stimlist.replace(r'\bshoe_\d*\b', 'shoe', regex=True,inplace=True)
stimlist.replace(r'\bchair_\d*\b', 'chair', regex=True,inplace=True)
stimlist['target_identity'] = result['target_identity']
stimlist.replace({'woods':21,'bathroom':22,'desert':23,'coast':24},inplace=True)
stimlist.replace({'flower':11,'car':12,'shoe':13,'chair':14},inplace=True)
stimlist = stimlist.reset_index(drop=True)
stimlist1 = pd.DataFrame(stimlist)
for i,row in stimlist1.iterrows():
    for item,key in zip(row.values[1:7],stimlist1.columns[1:7]): # pictures 1-6; index 0 is 'subject'
        if item == row.values[7]: # row.values[7] is the target's event code
stimlist1.loc[i,key] = int(str(item)+'9')
stimlist1['cue_type'] = result['cue_type']
stimlist1['trial_num'] = result['trial_num']
stimlist1['response_correct'] = result['response_correct']
stimlist1['target_identity'] = result['target_identity']
stimlist1['target_category'] = result['target_category']
stimlist1['precue'] = 80
stimlist1['postcue'] = 180
stimlist1.loc[stimlist1.cue_type == 'precue','precue'] = 81
stimlist1.loc[stimlist1.cue_type == 'postcue','postcue'] = 181
stimlist1.loc[stimlist1.cue_type == 'doublecue','precue'] = 81
stimlist1.loc[stimlist1.cue_type == 'doublecue','postcue'] = 181
stimList_final = stimlist1[['subject','trial_num','response_correct','cue_type','target_identity','target_category',
'precue','picture1_value','picture2_value','picture3_value','picture4_value',
                          'picture5_value','picture6_value','postcue']].copy() # copy to avoid SettingWithCopyWarning below
stimList_final.head()
stimList_final.loc[stimList_final.cue_type == 'precue','precue_type'] = 'precue'
stimList_final.loc[stimList_final.cue_type == 'postcue','precue_type'] = 'nocue'
stimList_final.loc[stimList_final.cue_type == 'nocue','precue_type'] = 'nocue'
stimList_final.loc[stimList_final.cue_type == 'doublecue','precue_type'] = 'precue'
stimList_final.loc[stimList_final.cue_type == 'precue','postcue_type'] = 'nocue'
stimList_final.loc[stimList_final.cue_type == 'postcue','postcue_type'] = 'postcue'
stimList_final.loc[stimList_final.cue_type == 'nocue','postcue_type'] = 'nocue'
stimList_final.loc[stimList_final.cue_type == 'doublecue','postcue_type'] = 'postcue'
df_list = pd.melt(stimList_final,
id_vars=['subject','trial_num','response_correct','cue_type',
'precue_type','postcue_type','target_identity',
'target_category'],
value_vars=['precue','picture1_value','picture2_value','picture3_value',
'picture4_value','picture5_value','picture6_value',
'postcue'])
df_list = df_list.sort_values(['subject','trial_num']).reset_index(drop=True)
df_list.head(10)
data_path_events = '../../data/processed/events/'
for subject in df_list.subject.unique():
df_list[df_list.subject == subject][['response_correct','cue_type','precue_type','postcue_type',
'target_identity','target_category','value']].to_csv(data_path_events+subject+'_events.txt',
header=None, index=None)
###Output
_____no_output_____ |
practical_ai/archive/07-Deep-Q-Learning/01-Manual-DQN.ipynb | ###Markdown
______Copyright by Pierian Data Inc.For more information, visit us at www.pieriandata.com Manually Creating a DQN Model Deep-Q-LearningIn this notebook we will create our first Deep Reinforcement Learning model, called a Deep-Q-Network (DQN).We are again using a simple environment from openai gym. However, you will soon see the enormous gain we get by switching from standard Q-Learning to Deep Q-Learning.In this notebook we again take a look at the CartPole problem (https://gym.openai.com/envs/CartPole-v1/) Let us start by importing the necessary packages Part 0: ImportsNotice how we're importing the TF libraries together here at the top; in some rare instances, if you import them later on, you get strange bugs, so it's best just to import everything from Tensorflow here at the top.
###Code
from collections import deque
import random
import numpy as np
import gym # Contains the game we want to play
from tensorflow.keras.models import Sequential # To compose multiple Layers
from tensorflow.keras.layers import Dense # Fully-Connected layer
from tensorflow.keras.layers import Activation # Activation functions
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.models import clone_model
###Output
_____no_output_____
###Markdown
Part 1: The Environment
###Code
env_name = 'CartPole-v1'
env = gym.make(env_name) # create the environment
###Output
_____no_output_____
###Markdown
Remember, the goal of the CartPole challenge was to balance the stick upright
###Code
env.reset() # reset the environment to the initial state
for _ in range(200): # play for max 200 iterations
env.render(mode="human") # render the current game state on your screen
random_action = env.action_space.sample() # chose a random action
env.step(random_action) # execute that action
env.close() # close the environment
###Output
c:\users\marcial\anaconda_new\envs\rl_recording\lib\site-packages\gym\logger.py:30: UserWarning: [33mWARN: You are calling 'step()' even though this environment has already returned done = True. You should always call 'reset()' once you receive 'done = True' -- any further steps are undefined behavior.[0m
warnings.warn(colorize('%s: %s'%('WARN', msg % args), 'yellow'))
###Markdown
Part 2: The Artificial Neural Network Let us build our first Neural NetworkTo build our network, we first need to find out how many actions and observations our environment has.We can either get that information from the source code (https://github.com/openai/gym/blob/master/gym/envs/classic_control/cartpole.py) or via the following commands:
###Code
num_actions = env.action_space.n
num_observations = env.observation_space.shape[0] # You can use this command to get the number of observations
print(f"There are {num_actions} possible actions and {num_observations} observations")
###Output
There are 2 possible actions and 4 observations
###Markdown
So our network needs to have an input dimension of 4 and an output dimension of 2. In between we are free to choose. Let's just say we want to use a three-layer architecture:1. The first layer has 16 neurons2. The second layer has 32 neurons3. The third layer (output layer) has 2 neuronsThis yields 690 parameters$$ \text{4 observations} * 16 (\text{neurons}) + 16 (\text{bias}) + (16*32) + 32 + (32*2)+2 = 690$$
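The parameter count above can be sanity-checked with a couple of lines of plain Python (no TensorFlow required):

```python
# Each Dense layer contributes (inputs * neurons) weights + (neurons) biases.
layer_sizes = [4, 16, 32, 2]  # observations -> hidden -> hidden -> actions
params = sum(n_in * n_out + n_out for n_in, n_out in zip(layer_sizes, layer_sizes[1:]))
print(params)  # 690
```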
###Code
model = Sequential()
model.add(Dense(16, input_shape=(1, num_observations)))
model.add(Activation('relu'))
model.add(Dense(32))
model.add(Activation('relu'))
model.add(Dense(num_actions))
model.add(Activation('linear'))
print(model.summary())
###Output
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
dense (Dense) (None, 1, 16) 80
_________________________________________________________________
activation (Activation) (None, 1, 16) 0
_________________________________________________________________
dense_1 (Dense) (None, 1, 32) 544
_________________________________________________________________
activation_1 (Activation) (None, 1, 32) 0
_________________________________________________________________
dense_2 (Dense) (None, 1, 2) 66
_________________________________________________________________
activation_2 (Activation) (None, 1, 2) 0
=================================================================
Total params: 690
Trainable params: 690
Non-trainable params: 0
_________________________________________________________________
None
###Markdown
Now we have our model, which takes an observation as input and outputs a value for each action.The higher the value, the more suitable the corresponding action is for the current observation.As stated in the lecture, Deep-Q-Learning works better when using a target network.So let's just copy the above network
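(As a tiny plain-Python aside, with made-up numbers: this is how the two output values will eventually be turned into an action — we simply pick the index of the larger one.)

```python
# Hypothetical network outputs for the two CartPole actions (left, right).
q_values = [0.12, 0.87]
greedy_action = q_values.index(max(q_values))
print(greedy_action)  # 1 -> push the cart to the right
```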
###Code
#model.load_weights("34.ckt")
target_model = clone_model(model)
###Output
_____no_output_____
###Markdown
Now it is time to define our hyperparameters. Part 3: Hyperparameters and Update Function
###Code
EPOCHS = 1000
epsilon = 1.0
EPSILON_REDUCE = 0.995 # is multiplied with epsilon each epoch to reduce it
LEARNING_RATE = 0.001 #NOT THE SAME AS ALPHA FROM Q-LEARNING FROM BEFORE!!
GAMMA = 0.95
###Output
_____no_output_____
###Markdown
Let us use the epsilon greedy action selection function once again:
###Code
def epsilon_greedy_action_selection(model, epsilon, observation):
if np.random.random() > epsilon:
prediction = model.predict(observation) # perform the prediction on the observation
action = np.argmax(prediction) # Chose the action with the higher value
else:
action = np.random.randint(0, env.action_space.n) # Else use random action
return action
###Output
_____no_output_____
###Markdown
As shown in the lecture, we need a replay buffer.We can use the **deque** data structure for this, which already implements the circular behavior.The *maxlen* argument specifies the number of elements the buffer can store before it starts overwriting the oldest entries.The following cell shows an example usage of deque. You can see that in the first example all values fit into the deque, so nothing is overwritten. In the second example, the deque is printed in each iteration. It can hold all values for the first five iterations, but then it needs to delete the oldest value in the deque to make room for the new one.
###Code
### deque examples
deque_1 = deque(maxlen=5)
for i in range(5): # all values fit into the deque, no overwriting
deque_1.append(i)
print(deque_1)
print("---------------------")
deque_2 = deque(maxlen=5)
# after the first 5 values are stored, it needs to overwrite the oldest value to store the new one
for i in range(10):
deque_2.append(i)
print(deque_2)
###Output
deque([0, 1, 2, 3, 4], maxlen=5)
---------------------
deque([0], maxlen=5)
deque([0, 1], maxlen=5)
deque([0, 1, 2], maxlen=5)
deque([0, 1, 2, 3], maxlen=5)
deque([0, 1, 2, 3, 4], maxlen=5)
deque([1, 2, 3, 4, 5], maxlen=5)
deque([2, 3, 4, 5, 6], maxlen=5)
deque([3, 4, 5, 6, 7], maxlen=5)
deque([4, 5, 6, 7, 8], maxlen=5)
deque([5, 6, 7, 8, 9], maxlen=5)
###Markdown
Let's say we allow our replay buffer a maximum size of 20000
###Code
replay_buffer = deque(maxlen=20000)
update_target_model = 10
###Output
_____no_output_____
###Markdown
As mentioned in the lecture, action replaying is crucial for Deep Q-Learning. The following cell implements one version of the action replay algorithm. It uses the zip statement paired with the * (Unpacking Argument Lists) operator to create batches from the samples for efficient prediction and training.The zip statement returns all corresponding pairs from each entry. It might look confusing but the following example should clarify it
###Code
test_tuple = [(1, 2, 3), (4, 5, 6), (7, 8, 9)]
zipped_list = list(zip(*test_tuple))
a, b, c = zipped_list
print(a, b, c)
###Output
(1, 4, 7) (2, 5, 8) (3, 6, 9)
###Markdown
Now it's time to write the replay function
###Code
def replay(replay_buffer, batch_size, model, target_model):
# As long as the buffer has not enough elements we do nothing
if len(replay_buffer) < batch_size:
return
# Take a random sample from the buffer with size batch_size
samples = random.sample(replay_buffer, batch_size)
# to store the targets predicted by the target network for training
target_batch = []
# Efficient way to handle the sample by using the zip functionality
zipped_samples = list(zip(*samples))
states, actions, rewards, new_states, dones = zipped_samples
# Predict targets for all states from the sample
targets = target_model.predict(np.array(states))
# Predict Q-Values for all new states from the sample
q_values = model.predict(np.array(new_states))
# Now we loop over all predicted values to compute the actual targets
for i in range(batch_size):
# Take the maximum Q-Value for each sample
q_value = max(q_values[i][0])
# Store the ith target in order to update it according to the formula
target = targets[i].copy()
if dones[i]:
target[0][actions[i]] = rewards[i]
else:
target[0][actions[i]] = rewards[i] + q_value * GAMMA
target_batch.append(target)
# Fit the model based on the states and the updated targets for 1 epoch
model.fit(np.array(states), np.array(target_batch), epochs=1, verbose=0)
###Output
_____no_output_____
###Markdown
We need to update our target network every once in a while. Keras provides the *set_weights()* and *get_weights()* methods which do the work for us, so we only need to check whether we hit an update epoch
###Code
def update_model_handler(epoch, update_target_model, model, target_model):
if epoch > 0 and epoch % update_target_model == 0:
target_model.set_weights(model.get_weights())
###Output
_____no_output_____
###Markdown
Part 4: Training the Model Now it is time to write the training loop! First we compile the model
###Code
model.compile(loss='mse', optimizer=Adam(lr=LEARNING_RATE))
###Output
_____no_output_____
###Markdown
Then we perform the training routine. This might take some time, so make sure to grab your favorite beverage and watch your model learn. Feel free to use our provided checkpoints as a starting point.
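One thing worth noting before starting: the exploration schedule is a plain geometric decay. Because epsilon is multiplied by `EPSILON_REDUCE` once per epoch (including epoch 0) before being printed, the epsilon shown at epoch $n$ equals $0.995^{\,n+1}$ — this reproduces the values in the training log below, up to floating-point rounding:

```python
# Reproduce the epsilon values printed in the training log below.
EPSILON_REDUCE = 0.995
for n in (0, 25, 100):
    print(n, EPSILON_REDUCE ** (n + 1))
```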
###Code
best_so_far = 0
for epoch in range(EPOCHS):
observation = env.reset() # Get inital state
# Keras expects the input to be of shape [1, X] thus we have to reshape
observation = observation.reshape([1, 4])
done = False
points = 0
while not done: # as long current run is active
# Select action acc. to strategy
action = epsilon_greedy_action_selection(model, epsilon, observation)
# Perform action and get next state
next_observation, reward, done, info = env.step(action)
next_observation = next_observation.reshape([1, 4]) # Reshape!!
replay_buffer.append((observation, action, reward, next_observation, done)) # Update the replay buffer
observation = next_observation # update the observation
points+=1
# Most important step! Training the model by replaying
replay(replay_buffer, 32, model, target_model)
epsilon *= EPSILON_REDUCE # Reduce epsilon
# Check if we need to update the target model
update_model_handler(epoch, update_target_model, model, target_model)
if points > best_so_far:
best_so_far = points
if epoch %25 == 0:
print(f"{epoch}: Points reached: {points} - epsilon: {epsilon} - Best: {best_so_far}")
###Output
0: Points reached: 18 - epsilon: 0.995 - Best: 18
WARNING:tensorflow:Model was constructed with shape (None, 1, 4) for input KerasTensor(type_spec=TensorSpec(shape=(None, 1, 4), dtype=tf.float32, name='dense_input'), name='dense_input', description="created by layer 'dense_input'"), but it was called on an input with incompatible shape (None, 4).
25: Points reached: 13 - epsilon: 0.8778091417340573 - Best: 53
50: Points reached: 25 - epsilon: 0.7744209942832988 - Best: 80
75: Points reached: 32 - epsilon: 0.6832098777212641 - Best: 80
100: Points reached: 79 - epsilon: 0.6027415843082742 - Best: 118
125: Points reached: 39 - epsilon: 0.531750826943791 - Best: 151
150: Points reached: 24 - epsilon: 0.46912134373457726 - Best: 191
175: Points reached: 160 - epsilon: 0.41386834584198684 - Best: 191
200: Points reached: 148 - epsilon: 0.36512303261753626 - Best: 287
225: Points reached: 130 - epsilon: 0.322118930542046 - Best: 287
250: Points reached: 205 - epsilon: 0.28417984116121187 - Best: 287
275: Points reached: 139 - epsilon: 0.2507092085103961 - Best: 299
300: Points reached: 169 - epsilon: 0.2211807388415433 - Best: 351
325: Points reached: 184 - epsilon: 0.19513012515638165 - Best: 351
350: Points reached: 230 - epsilon: 0.17214774642209296 - Best: 351
375: Points reached: 175 - epsilon: 0.1518722266715875 - Best: 380
400: Points reached: 143 - epsilon: 0.13398475271138335 - Best: 380
425: Points reached: 152 - epsilon: 0.11820406108847166 - Best: 380
450: Points reached: 176 - epsilon: 0.1042820154910064 - Best: 380
475: Points reached: 161 - epsilon: 0.09199970504166631 - Best: 380
500: Points reached: 185 - epsilon: 0.0811640021330769 - Best: 380
525: Points reached: 158 - epsilon: 0.0716045256805401 - Best: 380
550: Points reached: 139 - epsilon: 0.06317096204211972 - Best: 380
575: Points reached: 148 - epsilon: 0.05573070148010834 - Best: 380
600: Points reached: 156 - epsilon: 0.04916675299948831 - Best: 380
625: Points reached: 141 - epsilon: 0.043375904776212296 - Best: 380
650: Points reached: 159 - epsilon: 0.03826710124979409 - Best: 380
675: Points reached: 167 - epsilon: 0.033760011361539714 - Best: 380
700: Points reached: 135 - epsilon: 0.029783765425331846 - Best: 380
725: Points reached: 169 - epsilon: 0.026275840769466357 - Best: 380
750: Points reached: 157 - epsilon: 0.023181078627322618 - Best: 380
775: Points reached: 146 - epsilon: 0.020450816818411825 - Best: 380
###Markdown
Part 5: Using Trained Model
###Code
observation = env.reset()
for counter in range(300):
env.render()
    # Choose the greedy action predicted by the trained model
    action = np.argmax(model.predict(observation.reshape([1,4])))
    # Perform the action
    observation, reward, done, info = env.step(action)
if done:
print(f"done")
break
env.close()
###Output
_____no_output_____ |
jupyter_russian/topic04_linear_models/topic4_linear_models_part5_valid_learning_curves.ipynb | ###Markdown
Open Machine Learning Course. Author: Yury Kashnitsky, research programmer at Mail.ru Group and senior lecturer at the Faculty of Computer Science, Higher School of Economics. The material is distributed under the [Creative Commons CC BY-NC-SA 4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/) license. It may be used for any purpose (edited, corrected, or taken as a basis), except commercial, with mandatory attribution of the author. Topic 4. Linear models for classification and regression. Part 5. Validation and learning curves
###Code
from __future__ import division, print_function
# suppress various Anaconda warnings
import warnings
warnings.filterwarnings("ignore")
%matplotlib inline
import numpy as np
import pandas as pd
import seaborn as sns
from matplotlib import pyplot as plt
from sklearn.linear_model import LogisticRegression, LogisticRegressionCV, SGDClassifier
from sklearn.model_selection import validation_curve
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import PolynomialFeatures, StandardScaler
###Output
_____no_output_____
###Markdown
We have already gained some insight into model validation, cross-validation, and regularization.Now let's consider the main question:**If the quality of the model does not satisfy us, what should we do?**- Make the model more complex, or simplify it?- Add more features?- Or do we simply need more training data?The answers to these questions do not always lie on the surface. In particular, sometimes a more complex model will degrade performance; sometimes adding new observations will not bring noticeable changes. The ability to make the right decision and choose the right way to improve the model is, in fact, what distinguishes a good specialist from a bad one. We will work with the familiar data on the churn of a telecom operator's customers.
###Code
data = pd.read_csv("../../data/telecom_churn.csv").drop("State", axis=1)
data["International plan"] = data["International plan"].map({"Yes": 1, "No": 0})
data["Voice mail plan"] = data["Voice mail plan"].map({"Yes": 1, "No": 0})
y = data["Churn"].astype("int").values
X = data.drop("Churn", axis=1).values
###Output
_____no_output_____
###Markdown
**We will train logistic regression with stochastic gradient descent. For now, let's just say that it is faster this way; later in the course there is a separate article on this topic.**
###Code
alphas = np.logspace(-2, 0, 20)
sgd_logit = SGDClassifier(loss="log", n_jobs=-1, random_state=17)
logit_pipe = Pipeline(
[
("scaler", StandardScaler()),
("poly", PolynomialFeatures(degree=2)),
("sgd_logit", sgd_logit),
]
)
val_train, val_test = validation_curve(
    logit_pipe, X, y, param_name="sgd_logit__alpha", param_range=alphas, cv=5, scoring="roc_auc"
)
###Output
_____no_output_____
###Markdown
**Let's plot validation curves showing how model quality (ROC AUC) on the training and validation sets changes with the regularization parameter.**
###Code
def plot_with_err(x, data, **kwargs):
mu, std = data.mean(1), data.std(1)
lines = plt.plot(x, mu, "-", **kwargs)
plt.fill_between(
x,
mu - std,
mu + std,
edgecolor="none",
facecolor=lines[0].get_color(),
alpha=0.2,
)
plot_with_err(alphas, val_train, label="training scores")
plot_with_err(alphas, val_test, label="validation scores")
plt.xlabel(r"$\alpha$")
plt.ylabel("ROC AUC")
plt.legend();
###Output
_____no_output_____
###Markdown
The trend is visible right away, and it is very common.1. For simple models, the training and validation errors are close, and both are large. This indicates that the model **underfits**: it does not have enough parameters.2. For overly complex models, the training and validation errors differ significantly. This can be explained by **overfitting**: when there are too many parameters, or when regularization is not strict enough, the algorithm can be "distracted" by the noise in the data and lose track of the main trend. How much data is needed?It is well known that the more data a model uses, the better. But how do we know in a particular situation whether new data will help? Say, is it worth spending \$ N on assessors' work to double the dataset?Since the new data may not be available yet, it is reasonable to vary the size of the existing training set and see how the quality of the solution depends on the amount of training data. This is how we get **learning curves**.The idea is simple: we plot the error as a function of the number of examples used for training, while the model parameters are fixed in advance.
###Code
from sklearn.model_selection import learning_curve
def plot_learning_curve(degree=2, alpha=0.01):
train_sizes = np.linspace(0.05, 1, 20)
logit_pipe = Pipeline(
[
("scaler", StandardScaler()),
("poly", PolynomialFeatures(degree=degree)),
("sgd_logit", SGDClassifier(n_jobs=-1, random_state=17, alpha=alpha)),
]
)
N_train, val_train, val_test = learning_curve(
logit_pipe, X, y, train_sizes=train_sizes, cv=5, scoring="roc_auc"
)
plot_with_err(N_train, val_train, label="training scores")
plot_with_err(N_train, val_test, label="validation scores")
plt.xlabel("Training Set Size")
plt.ylabel("AUC")
plt.legend()
###Output
_____no_output_____
###Markdown
Let's see what we get for a linear model, with the regularization coefficient set to a large value.
###Code
plot_learning_curve(degree=2, alpha=10)
###Output
_____no_output_____
###Markdown
A typical situation: for a small amount of data, the errors on the training set and during cross-validation differ quite significantly, which indicates overfitting. For the same model with a large amount of data, the errors "converge", which indicates underfitting.If we add more data, the error on the training set will not grow, but, on the other hand, the error on the test data will not decrease either. So the errors have "converged", and adding new data will not help. This case is actually the most interesting one for business. It is possible to increase the dataset 10-fold, but if the model complexity is not changed, this may not help. In other words, the strategy "tune once, then use 10 times" may not work. What happens if we change the regularization coefficient?We see a good trend: the curves gradually converge, and if we move further to the right (add more data to the model), the quality on validation can be improved even further.
###Code
plot_learning_curve(degree=2, alpha=0.05)
###Output
_____no_output_____
###Markdown
What if we make the model even more complex?Overfitting shows up: AUC drops both on the training set and on validation.
###Code
plot_learning_curve(degree=2, alpha=1e-4)
###Output
_____no_output_____ |
09-NeuralWordEmbedding/Multi_class_Sentiment_Analysis_Deployment.ipynb | ###Markdown
5 - Multi-class Sentiment AnalysisIn all of the previous notebooks we have performed sentiment analysis on a dataset with only two classes, positive or negative. When we have only two classes, our output can be a single scalar, bound between 0 and 1, that indicates which class an example belongs to. When we have more than 2 classes, our output must be a $C$ dimensional vector, where $C$ is the number of classes.In this notebook, we'll be performing classification on a dataset with 6 classes. Note that this dataset isn't actually a sentiment analysis dataset; it's a dataset of questions, and the task is to classify what category the question belongs to. However, everything covered in this notebook applies to any dataset with examples that contain an input sequence belonging to one of $C$ classes.Below, we set up the fields and load the dataset. The first difference is that we do not need to set the `dtype` in the `LABEL` field. When doing a multi-class problem, PyTorch expects the labels to be numericalized `LongTensor`s. The second difference is that we use `TREC` instead of `IMDB` to load the `TREC` dataset. The `fine_grained` argument allows us to use the fine-grained labels (of which there are 50 classes) or not (in which case there'll be 6 classes). You can change this however you please. Also update to torchtext 0.7.0
###Code
! pip install torchtext==0.7.0
import torchtext
torchtext.__version__
import torch
from torchtext import data
from torchtext import datasets
import random
SEED = 1234
torch.manual_seed(SEED)
torch.backends.cudnn.deterministic = True
TEXT = data.Field(tokenize = 'spacy')
LABEL = data.LabelField()
train_data, test_data = datasets.TREC.splits(TEXT, LABEL, fine_grained=False)
train_data, valid_data = train_data.split(random_state = random.seed(SEED))
###Output
/usr/local/lib/python3.6/dist-packages/torchtext/data/field.py:150: UserWarning: Field class will be retired in the 0.8.0 release and moved to torchtext.legacy. Please see 0.7.0 release notes for further information.
warnings.warn('{} class will be retired in the 0.8.0 release and moved to torchtext.legacy. Please see 0.7.0 release notes for further information.'.format(self.__class__.__name__), UserWarning)
/usr/local/lib/python3.6/dist-packages/torchtext/data/field.py:150: UserWarning: LabelField class will be retired in the 0.8.0 release and moved to torchtext.legacy. Please see 0.7.0 release notes for further information.
warnings.warn('{} class will be retired in the 0.8.0 release and moved to torchtext.legacy. Please see 0.7.0 release notes for further information.'.format(self.__class__.__name__), UserWarning)
###Markdown
Let's look at one of the examples in the training set.
###Code
vars(train_data[-1])
###Output
_____no_output_____
###Markdown
Next, we'll build the vocabulary. As this dataset is small (only ~3800 training examples), it also has a very small vocabulary (~7500 unique tokens), which means we do not need to set a `max_size` on the vocabulary as before.
###Code
MAX_VOCAB_SIZE = 25_000
TEXT.build_vocab(train_data,
max_size = MAX_VOCAB_SIZE,
vectors = "glove.6B.100d",
unk_init = torch.Tensor.normal_)
LABEL.build_vocab(train_data)
###Output
_____no_output_____
###Markdown
Next, we can check the labels.The 6 labels (for the non-fine-grained case) correspond to the 6 types of questions in the dataset:- `HUM` for questions about humans- `ENTY` for questions about entities- `DESC` for questions asking you for a description - `NUM` for questions where the answer is numerical- `LOC` for questions where the answer is a location- `ABBR` for questions asking about abbreviations
###Code
TEXT.vocab.freqs.most_common(10)
print(LABEL.vocab.stoi)
###Output
defaultdict(None, {'HUM': 0, 'ENTY': 1, 'DESC': 2, 'NUM': 3, 'LOC': 4, 'ABBR': 5})
###Markdown
As always, we set up the iterators.
###Code
BATCH_SIZE = 64
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
train_iterator, valid_iterator, test_iterator = data.BucketIterator.splits(
(train_data, valid_data, test_data),
batch_size = BATCH_SIZE,
device = device)
###Output
/usr/local/lib/python3.6/dist-packages/torchtext/data/iterator.py:48: UserWarning: BucketIterator class will be retired in the 0.8.0 release and moved to torchtext.legacy. Please see 0.7.0 release notes for further information.
warnings.warn('{} class will be retired in the 0.8.0 release and moved to torchtext.legacy. Please see 0.7.0 release notes for further information.'.format(self.__class__.__name__), UserWarning)
###Markdown
We'll be using the CNN model from the previous notebook, however any of the models covered in these tutorials will work on this dataset. The only difference is now the `output_dim` will be $C$ instead of $1$.
###Code
import torch.nn as nn
import torch.nn.functional as F
class CNN(nn.Module):
def __init__(self, vocab_size, embedding_dim, n_filters, filter_sizes, output_dim,
dropout, pad_idx):
super().__init__()
self.embedding = nn.Embedding(vocab_size, embedding_dim)
self.convs = nn.ModuleList([
nn.Conv2d(in_channels = 1,
out_channels = n_filters,
kernel_size = (fs, embedding_dim))
for fs in filter_sizes
])
self.fc = nn.Linear(len(filter_sizes) * n_filters, output_dim)
self.dropout = nn.Dropout(dropout)
def forward(self, text):
#text = [sent len, batch size]
text = text.permute(1, 0)
#text = [batch size, sent len]
embedded = self.embedding(text)
#embedded = [batch size, sent len, emb dim]
embedded = embedded.unsqueeze(1)
#embedded = [batch size, 1, sent len, emb dim]
conved = [F.relu(conv(embedded)).squeeze(3) for conv in self.convs]
#conv_n = [batch size, n_filters, sent len - filter_sizes[n]]
pooled = [F.max_pool1d(conv, int(conv.shape[2])).squeeze(2) for conv in conved]
#pooled_n = [batch size, n_filters]
cat = self.dropout(torch.cat(pooled, dim = 1))
#cat = [batch size, n_filters * len(filter_sizes)]
return self.fc(cat)
###Output
_____no_output_____
###Markdown
We define our model, making sure to set `OUTPUT_DIM` to $C$. We can get $C$ easily by using the size of the `LABEL` vocab, much like we used the length of the `TEXT` vocab to get the size of the vocabulary of the input.The examples in this dataset are generally a lot smaller than those in the IMDb dataset, so we'll use smaller filter sizes.
###Code
INPUT_DIM = len(TEXT.vocab)
EMBEDDING_DIM = 100
N_FILTERS = 100
FILTER_SIZES = [2,3,4]
OUTPUT_DIM = len(LABEL.vocab)
DROPOUT = 0.5
PAD_IDX = TEXT.vocab.stoi[TEXT.pad_token]
model = CNN(INPUT_DIM, EMBEDDING_DIM, N_FILTERS, FILTER_SIZES, OUTPUT_DIM, DROPOUT, PAD_IDX)
###Output
_____no_output_____
###Markdown
Checking the number of parameters, we can see how the smaller filter sizes mean we have about a third of the parameters that we had for the CNN model on the IMDb dataset.
###Code
def count_parameters(model):
return sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f'The model has {count_parameters(model):,} trainable parameters')
! pip install torchsummaryX
from torchsummaryX import summary
inputs = torch.zeros((100, 1), dtype=torch.long)
summary(model.to(device), inputs.to(device))
###Output
========================================================================
Kernel Shape Output Shape Params Mult-Adds
Layer
0_embedding [100, 7503] [1, 100, 100] 750.3k 750.3k
1_convs.Conv2d_0 [1, 100, 2, 100] [1, 100, 99, 1] 20.1k 1.98M
2_convs.Conv2d_1 [1, 100, 3, 100] [1, 100, 98, 1] 30.1k 2.94M
3_convs.Conv2d_2 [1, 100, 4, 100] [1, 100, 97, 1] 40.1k 3.88M
4_dropout - [1, 300] - -
5_fc [300, 6] [1, 6] 1.806k 1.8k
------------------------------------------------------------------------
Totals
Total params 842.406k
Trainable params 842.406k
Non-trainable params 0.0
Mult-Adds 9.5521M
========================================================================
###Markdown
Next, we'll load our pre-trained embeddings.
###Code
pretrained_embeddings = TEXT.vocab.vectors
model.embedding.weight.data.copy_(pretrained_embeddings)
###Output
_____no_output_____
###Markdown
Then zero the initial weights of the unknown and padding tokens.
###Code
UNK_IDX = TEXT.vocab.stoi[TEXT.unk_token]
model.embedding.weight.data[UNK_IDX] = torch.zeros(EMBEDDING_DIM)
model.embedding.weight.data[PAD_IDX] = torch.zeros(EMBEDDING_DIM)
###Output
_____no_output_____
###Markdown
Another difference from the previous notebooks is our loss function (aka criterion). Before, we used `BCEWithLogitsLoss`; now we use `CrossEntropyLoss`. Without going into too much detail, `CrossEntropyLoss` performs a *softmax* function over our model outputs and the loss is given by the *cross entropy* between that and the label.Generally:- `CrossEntropyLoss` is used when our examples exclusively belong to one of $C$ classes- `BCEWithLogitsLoss` is used when our examples exclusively belong to only 2 classes (0 and 1) and is also used in the case where our examples belong to between 0 and $C$ classes (aka multilabel classification).
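As a small illustration (plain Python, not part of the original notebook), the per-example computation `CrossEntropyLoss` performs is a softmax followed by a negative log-likelihood:

```python
import math

def cross_entropy(logits, target):
    # CE = -log(softmax(logits)[target]), computed stably by
    # subtracting the max logit before exponentiating.
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    return -math.log(exps[target] / sum(exps))

logits = [5.1, 0.3, 0.1, 2.1, 0.2, 0.6]
print(cross_entropy(logits, target=0))  # confident, correct -> small loss
print(cross_entropy(logits, target=5))  # same logits, wrong label -> large loss
```

PyTorch then averages this quantity over the batch by default (`reduction='mean'`).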
###Code
import torch.optim as optim
optimizer = optim.Adam(model.parameters())
criterion = nn.CrossEntropyLoss()
model = model.to(device)
criterion = criterion.to(device)
###Output
_____no_output_____
###Markdown
Before, we had a function that calculated accuracy in the binary label case, where we said that if the value was over 0.5 then we would assume it is positive. In the case where we have more than 2 classes, our model outputs a $C$ dimensional vector, where the value of each element is the belief that the example belongs to that class. For example, in our labels we have: 'HUM' = 0, 'ENTY' = 1, 'DESC' = 2, 'NUM' = 3, 'LOC' = 4 and 'ABBR' = 5. If the output of our model was something like: **[5.1, 0.3, 0.1, 2.1, 0.2, 0.6]** this means that the model strongly believes the example belongs to class 0, a question about a human, and slightly believes the example belongs to class 3, a numerical question.We calculate the accuracy by performing an `argmax` to get the index of the maximum value in the prediction for each element in the batch, and then counting how many times this equals the actual label. We then average this across the batch.
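The same argmax-and-count logic in plain Python, on a hypothetical batch of three predictions (the numbers are made up for illustration):

```python
preds = [[5.1, 0.3, 0.1, 2.1, 0.2, 0.6],   # argmax -> 0
         [0.2, 0.1, 3.0, 0.0, 0.5, 0.1],   # argmax -> 2
         [0.0, 2.0, 0.1, 4.0, 0.3, 0.2]]   # argmax -> 3
labels = [0, 2, 1]

correct = sum(max(range(len(row)), key=row.__getitem__) == y
              for row, y in zip(preds, labels))
accuracy = correct / len(labels)
print(accuracy)  # 2 of the 3 predictions match their labels
```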
###Code
def categorical_accuracy(preds, y):
"""
Returns accuracy per batch, i.e. if you get 8/10 right, this returns 0.8, NOT 8
"""
max_preds = preds.argmax(dim = 1, keepdim = True) # get the index of the max probability
correct = max_preds.squeeze(1).eq(y)
return correct.sum() / torch.FloatTensor([y.shape[0]]).to(device)
###Output
_____no_output_____
###Markdown
The training loop is similar to before, without the need to `squeeze` the model predictions as `CrossEntropyLoss` expects the input to be **[batch size, n classes]** and the label to be **[batch size]**.The label needs to be a `LongTensor`, which it is by default as we did not set the `dtype` to a `FloatTensor` as before.
###Code
batch = next(iter(train_iterator))
batch.label, batch.text
batch.text
def train(model, iterator, optimizer, criterion):
epoch_loss = 0
epoch_acc = 0
model.train()
for batch in iterator:
batch.text = batch.text.to(device)
batch.label = batch.label.to(device)
optimizer.zero_grad()
predictions = model(batch.text)
loss = criterion(predictions, batch.label)
acc = categorical_accuracy(predictions, batch.label)
loss.backward()
optimizer.step()
epoch_loss += loss.item()
epoch_acc += acc.item()
return epoch_loss / len(iterator), epoch_acc / len(iterator)
###Output
_____no_output_____
###Markdown
The evaluation loop is, again, similar to before.
###Code
def evaluate(model, iterator, criterion):
epoch_loss = 0
epoch_acc = 0
model.eval()
with torch.no_grad():
for batch in iterator:
predictions = model(batch.text)
loss = criterion(predictions, batch.label)
acc = categorical_accuracy(predictions, batch.label)
epoch_loss += loss.item()
epoch_acc += acc.item()
return epoch_loss / len(iterator), epoch_acc / len(iterator)
import time
def epoch_time(start_time, end_time):
elapsed_time = end_time - start_time
elapsed_mins = int(elapsed_time / 60)
elapsed_secs = int(elapsed_time - (elapsed_mins * 60))
return elapsed_mins, elapsed_secs
###Output
_____no_output_____
###Markdown
Next, we train our model.
###Code
N_EPOCHS = 15
best_valid_loss = float('inf')
model = model.to(device)
for epoch in range(N_EPOCHS):
start_time = time.time()
train_loss, train_acc = train(model, train_iterator, optimizer, criterion)
valid_loss, valid_acc = evaluate(model, valid_iterator, criterion)
end_time = time.time()
epoch_mins, epoch_secs = epoch_time(start_time, end_time)
if valid_loss < best_valid_loss:
best_valid_loss = valid_loss
torch.save(model.state_dict(), 'tut5-model.pt')
print(f'Epoch: {epoch+1:02} | Epoch Time: {epoch_mins}m {epoch_secs}s')
print(f'\tTrain Loss: {train_loss:.3f} | Train Acc: {train_acc*100:.2f}%')
print(f'\t Val. Loss: {valid_loss:.3f} | Val. Acc: {valid_acc*100:.2f}%')
###Output
/usr/local/lib/python3.6/dist-packages/torchtext/data/batch.py:23: UserWarning: Batch class will be retired in the 0.8.0 release and moved to torchtext.legacy. Please see 0.7.0 release notes for further information.
warnings.warn('{} class will be retired in the 0.8.0 release and moved to torchtext.legacy. Please see 0.7.0 release notes for further information.'.format(self.__class__.__name__), UserWarning)
###Markdown
Finally, let's run our model on the test set!
###Code
model.load_state_dict(torch.load('tut5-model.pt'))
test_loss, test_acc = evaluate(model, test_iterator, criterion)
print(f'Test Loss: {test_loss:.3f} | Test Acc: {test_acc*100:.2f}%')
###Output
Test Loss: 0.275 | Test Acc: 89.05%
###Markdown
Similar to how we made a function to predict sentiment for any given sentence, we can now make a function that will predict the class of a given question.The only difference here is that instead of using a sigmoid function to squash the output between 0 and 1, we use the `argmax` to get the highest predicted class index. We then use this index with the label vocab to get the human readable label.
###Code
import spacy
nlp = spacy.load('en')
def predict_class(model, sentence, min_len = 4):
model.eval()
tokenized = [tok.text for tok in nlp.tokenizer(sentence)]
if len(tokenized) < min_len:
tokenized += ['<pad>'] * (min_len - len(tokenized))
indexed = [TEXT.vocab.stoi[t] for t in tokenized]
tensor = torch.LongTensor(indexed).to(device)
tensor = tensor.unsqueeze(1)
preds = model(tensor)
max_preds = preds.argmax(dim = 1)
return max_preds.item()
type(nlp)
sentence = 'how old are you?'
tokenized = [tok.text for tok in nlp.tokenizer(sentence)]
tokenized
indexed = [TEXT.vocab.stoi[t] for t in tokenized]
indexed
tensor = torch.LongTensor(indexed).to(device)
tensor
predicted = model(tensor.unsqueeze(1).to('cpu')).squeeze(0)
predicted = F.softmax(predicted)
predicted
sorted_values = predicted.argsort(descending=True).cpu().numpy()
sorted_values
list(map(lambda x: { "label_idx": x.item(), "label_name": LABEL.vocab.itos[x], 'confidence': predicted[x].item() } , sorted_values))
###Output
_____no_output_____
###Markdown
Now, let's try it out on a few different questions...
###Code
pred_class = predict_class(model, "Who is Keyser Söze?")
print(f'Predicted class is: {pred_class} = {LABEL.vocab.itos[pred_class]}')
pred_class = predict_class(model, "How many minutes are in six hundred and eighteen hours?")
print(f'Predicted class is: {pred_class} = {LABEL.vocab.itos[pred_class]}')
pred_class = predict_class(model, "What continent is Bulgaria in?")
print(f'Predicted class is: {pred_class} = {LABEL.vocab.itos[pred_class]}')
pred_class = predict_class(model, "What does WYSIWYG stand for?")
print(f'Predicted class is: {pred_class} = {LABEL.vocab.itos[pred_class]}')
###Output
Predicted class is: 5 = ABBR
###Markdown
Save the Model and the Vocab
###Code
def save_vocab(vocab, path):
import pickle
output = open(path, 'wb')
pickle.dump(vocab, output)
output.close()
torch.save(model, 'conv-sentimental-mclass.pt')
save_vocab({ 'TEXT.vocab': TEXT.vocab, 'LABEL.vocab': LABEL.vocab }, 'conv-sentimental-vocab.pkl')
###Output
_____no_output_____
###Markdown
We need to use a scripted model, since a traced model would make the shapes constant and we wouldn't be able to use variable-length strings
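A conceptual way to see the hazard (this is a plain-Python sketch, not actual `torch.jit` code): tracing records the result of running the model on one concrete example, so anything derived from that example's shape gets frozen, while scripting keeps the shape-dependent logic:

```python
def model_like(tokens):
    # Stand-in for a model whose output depends on the input's length.
    return len(tokens) * 2

trace_input = [1, 2, 3]
traced_constant = model_like(trace_input)  # "traced" with length 3: frozen at 6

print(model_like([1, 2, 3, 4, 5]))  # scripted behaviour follows the input: 10
print(traced_constant)              # traced behaviour stays at: 6
```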
###Code
scripted_model = torch.jit.script(model.to('cpu'))
scripted_model(torch.zeros((5, 1), dtype=torch.long))
scripted_model.save('conv-sentimental-mclass.scripted.pt')
###Output
_____no_output_____ |
pynq_dpu/edge/notebooks/dpu_inception_v1.ipynb | ###Markdown
DPU example: Inception_v1This notebook shows an example of DPU applications. The application, as well as the DPU IP, is pulled from the official [Vitis AI Github Repository](https://github.com/Xilinx/Vitis-AI).For more information, please refer to the [Xilinx Vitis AI page](https://www.xilinx.com/products/design-tools/vitis/vitis-ai.html).In this notebook, we will show how to use the **Python API** to run DPU tasks. 1. Prepare the overlayWe will download the overlay onto the board.
###Code
from pynq_dpu import DpuOverlay
overlay = DpuOverlay("dpu.bit")
###Output
_____no_output_____
###Markdown
The VAI package has been installed onto your board. There are multiple binaries installed; for example, you can check the current DPU status using `dexplorer`. You should be able to see reasonable values in the output.
###Code
!dexplorer -w
###Output
[DPU IP Spec]
IP Timestamp : 2020-03-26 13:30:00
DPU Core Count : 2
[DPU Core Configuration List]
DPU Core : #0
DPU Enabled : Yes
DPU Arch : B4096
DPU Target Version : v1.4.1
DPU Freqency : 300 MHz
Ram Usage : High
DepthwiseConv : Enabled
DepthwiseConv+Relu6 : Enabled
Conv+Leakyrelu : Enabled
Conv+Relu6 : Enabled
Channel Augmentation : Enabled
Average Pool : Enabled
DPU Core : #1
DPU Enabled : Yes
DPU Arch : B4096
DPU Target Version : v1.4.1
DPU Freqency : 300 MHz
Ram Usage : High
DepthwiseConv : Enabled
DepthwiseConv+Relu6 : Enabled
Conv+Leakyrelu : Enabled
Conv+Relu6 : Enabled
Channel Augmentation : Enabled
Average Pool : Enabled
###Markdown
The compiled quantized model may have different kernel names depending on the DPU architectures.This piece of information can usually be found when compiling the `*.elf` model file.The `load_model()` method can automatically parse the kernel name from the provided `*.elf` model file.
###Code
overlay.load_model("dpu_inception_v1_0.elf")
###Output
_____no_output_____
###Markdown
2. Run Python programWe will use DNNDK's Python API to run DPU tasks.In this example, we will set the number of iterations to 500, meaning that a single picture will be taken and classified 500 times. Users can adjust this value if they want.
###Code
from ctypes import *
import cv2
import numpy as np
from dnndk import n2cube
import os
import threading
import time
from pynq_dpu import dputils
KERNEL_CONV = "inception_v1_0"
KERNEL_CONV_INPUT = "conv1_7x7_s2"
KERNEL_FC_OUTPUT = "loss3_classifier"
num_iterations = 500
lock = threading.Lock()
###Output
_____no_output_____
###Markdown
Let's first take a picture from the image folder and display it.
###Code
from IPython.display import display
from PIL import Image
image_folder = "./img"
listimage = [i for i in os.listdir(image_folder) if i.endswith("JPEG")]
path = os.path.join(image_folder, listimage[0])
img = cv2.imread(path)
display(Image.open(path))
###Output
_____no_output_____
###Markdown
We will also open and initialize the DPU device. We will create a DPU kernel and reuse it throughout the entire notebook, so we don't have to redo this step.**Note**: if you open and close the DPU multiple times, the Jupyter kernel might die; this is because the current DNNDK implementation requires the bitstream to be downloaded by XRT, which is not supported by the `pynq` package. Hence we encourage users to stay with one single DPU session, both for program robustness and for higher performance.
###Code
n2cube.dpuOpen()
kernel = n2cube.dpuLoadKernel(KERNEL_CONV)
###Output
_____no_output_____
###Markdown
Single executionWe define a function that will use the DPU to make a prediction on an input image and provide a softmax output.
###Code
def predict_label(img):
task = n2cube.dpuCreateTask(kernel, 0)
dputils.dpuSetInputImage2(task, KERNEL_CONV_INPUT, img)
n2cube.dpuGetInputTensor(task, KERNEL_CONV_INPUT)
n2cube.dpuRunTask(task)
size = n2cube.dpuGetOutputTensorSize(task, KERNEL_FC_OUTPUT)
channel = n2cube.dpuGetOutputTensorChannel(task, KERNEL_FC_OUTPUT)
conf = n2cube.dpuGetOutputTensorAddress(task, KERNEL_FC_OUTPUT)
outputScale = n2cube.dpuGetOutputTensorScale(task, KERNEL_FC_OUTPUT)
softmax = n2cube.dpuRunSoftmax(conf, channel, size//channel, outputScale)
n2cube.dpuDestroyTask(task)
with open("img/words.txt", "r") as f:
lines = f.readlines()
return lines[np.argmax(softmax)]
label = predict_label(img)
print('Class label: {}'.format(label))
###Output
Class label: tricycle, trike, velocipede
###Markdown
Multiple executionsAfter we have verified the correctness of a single execution, we can try multiple executions and measure the throughput in Frames Per Second (FPS).Let's define a function that processes a single image over multiple iterations. The parameters are:* `kernel`: DPU kernel.* `img`: image to be classified.* `count`: number of test rounds.The number of iterations is defined as `num_iterations` in previous cells.
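The throughput measurement itself is just wall-clock timing over a fixed number of calls; a minimal generic sketch (the trivial stand-in workload is only to keep the example self-contained off the board):

```python
import time

def measure_fps(fn, iterations=500):
    # Time `iterations` back-to-back calls to `fn`; return calls per second.
    start = time.time()
    for _ in range(iterations):
        fn()
    elapsed = time.time() - start
    return iterations / elapsed

# On the board, `fn` would wrap one DPU classification of `img`;
# here a trivial stand-in plays that role.
fps = measure_fps(lambda: sum(range(100)), iterations=1000)
print(f"{fps:.2f} FPS")
```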
###Code
def run_dpu_task(kernel, img, count):
task = n2cube.dpuCreateTask(kernel, 0)
count = 0
while count < num_iterations:
dputils.dpuSetInputImage2(task, KERNEL_CONV_INPUT, img)
n2cube.dpuGetInputTensor(task, KERNEL_CONV_INPUT)
n2cube.dpuRunTask(task)
size = n2cube.dpuGetOutputTensorSize(task, KERNEL_FC_OUTPUT)
channel = n2cube.dpuGetOutputTensorChannel(task, KERNEL_FC_OUTPUT)
conf = n2cube.dpuGetOutputTensorAddress(task, KERNEL_FC_OUTPUT)
outputScale = n2cube.dpuGetOutputTensorScale(task, KERNEL_FC_OUTPUT)
softmax = n2cube.dpuRunSoftmax(
conf, channel, size//channel, outputScale)
lock.acquire()
count = count + threadnum
lock.release()
n2cube.dpuDestroyTask(task)
###Output
_____no_output_____
###Markdown
Now we are able to run the batch processing and print out the DPU throughput.Users can change the `image_folder` to point to other picture locations.We will use the previously defined and classified image `img` and process it `num_iterations` times.In this example, we will just use a single thread.The following cell may take a while to run. Please be patient.
###Code
threadAll = []
threadnum = 1
start = time.time()
for i in range(threadnum):
t1 = threading.Thread(target=run_dpu_task, args=(kernel, img, i))
threadAll.append(t1)
for x in threadAll:
x.start()
for x in threadAll:
x.join()
end = time.time()
fps = float(num_iterations/(end-start))
print("%.2f FPS" % fps)
###Output
89.63 FPS
###Markdown
Clean upFinally, when you are done with the DPU experiments, remember to destroy the kernel and close the DPU.
###Code
n2cube.dpuDestroyKernel(kernel)
###Output
_____no_output_____ |
code/.ipynb_checkpoints/Data prep - Bathymetry and Coord Grid Generator (Py3)-checkpoint.ipynb | ###Markdown
Plots to check channels etc. Taken from the old checkbathy and plotgrids files.
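The functions `expandf` and `grid_angle` are imported below from a local `helpers` module that is not included here. Judging by how `expandf` is used (it turns f-point arrays of shape `(NY, NX)` into `(NY+1, NX+1)` corner arrays for `pcolormesh`), a plausible sketch is linear extrapolation along the leading row and column; this is an assumption, not the real implementation:

```python
import numpy as np

def expandf_sketch(glamf, gphif):
    # Pad (NY, NX) f-point arrays to (NY+1, NX+1) cell-corner arrays
    # by linearly extrapolating one extra row and column.
    def expand(a):
        out = np.zeros((a.shape[0] + 1, a.shape[1] + 1))
        out[1:, 1:] = a
        out[0, 1:] = 2 * a[0, :] - a[1, :]     # extrapolate leading row
        out[1:, 0] = 2 * a[:, 0] - a[:, 1]     # extrapolate leading column
        out[0, 0] = 2 * out[0, 1] - out[0, 2]  # fill the remaining corner
        return out
    return expand(glamf), expand(gphif)
```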
###Code
import scipy.io as sio
from IPython.core.display import display, HTML
display(HTML("<style>.container { width:90% !important; }</style>"))
from helpers import expandf, grid_angle
# grid
def load1(f):
with nc.Dataset(f) as ncid:
glamt = ncid.variables["glamt"][0, :, :].filled()
gphit = ncid.variables["gphit"][0, :, :].filled()
glamu = ncid.variables["glamu"][0, :, :].filled()
gphiu = ncid.variables["gphiu"][0, :, :].filled()
glamv = ncid.variables["glamv"][0, :, :].filled()
gphiv = ncid.variables["gphiv"][0, :, :].filled()
glamf = ncid.variables["glamf"][0, :, :].filled()
gphif = ncid.variables["gphif"][0, :, :].filled()
return glamt, glamu, glamv, glamf, gphit, gphiu, gphiv, gphif
#
def load2(f):
with nc.Dataset(f) as ncid:
e1t = ncid.variables["e1t"][0, :, :].filled()
e1u = ncid.variables["e1u"][0, :, :].filled()
e1v = ncid.variables["e1v"][0, :, :].filled()
e1f = ncid.variables["e1f"][0, :, :].filled()
e2t = ncid.variables["e2t"][0, :, :].filled()
e2u = ncid.variables["e2u"][0, :, :].filled()
e2v = ncid.variables["e2v"][0, :, :].filled()
e2f = ncid.variables["e2f"][0, :, :].filled()
return e1t,e1u,e1v,e1f,e2t,e2u,e2v,e2f
def load3(f):
with nc.Dataset(f) as ncid:
depth = ncid.variables["Bathymetry"][:, :].filled()
latt = ncid.variables["nav_lat"][:, :].filled()
lont = ncid.variables["nav_lon"][:, :].filled()
return depth, latt, lont
# for rivers - GO
def load4(f):
with nc.Dataset(f) as ncid:
rorunoff = ncid.variables["rorunoff"][6, :, :].filled()
latt = ncid.variables["nav_lat"][:, :].filled()
lont = ncid.variables["nav_lon"][:, :].filled()
return rorunoff, latt, lont
# grid
def plotgrid1(f):
glamt, glamu, glamv, glamf, gphit, gphiu, gphiv, gphif = load1(f)
plt.figure(figsize=(7,5)); plt.clf()
# Draw sides of every box
glamfe, gphife = expandf(glamf, gphif)
NY,NX = glamfe.shape
print(glamt.shape)
print(glamu.shape)
print(glamf.shape)
for j in range(NY):
plt.plot(glamfe[j,:],gphife[j,:], 'k')
for i in range(NX):
plt.plot(glamfe[:,i],gphife[:,i], 'k')
# Plot t, u, v, f points in red, green, blue, magenta
plt.plot(glamt, gphit, 'r.')
plt.plot(glamu, gphiu, 'g.')
plt.plot(glamv, gphiv, 'b.')
plt.plot(glamf, gphif, 'm.')
plt.tight_layout()
plt.xlim([-123.5,-123.3])
plt.ylim([46.84,46.95])
#plt.savefig(f.replace(".nc","_gridpts.png"))
# grid
def plotgrid2(f):
glamt, glamu, glamv, glamf, gphit, gphiu, gphiv, gphif = load1(f)
e1t,e1u,e1v,e1f,e2t,e2u,e2v,e2f = load2(f)
glamfe, gphife = expandf(glamf, gphif)
A = grid_angle(f)
plt.figure(figsize=(12,4))
plt.subplot(1,3,1)
plt.pcolormesh(glamfe,gphife,e1t); plt.colorbar(); plt.title("e1t (m)")
plt.subplot(1,3,2)
plt.pcolormesh(glamfe,gphife,e2t); plt.colorbar(); plt.title("e2t (m)")
plt.subplot(1,3,3)
plt.pcolormesh(glamf,gphif,A); plt.colorbar(); plt.title("angle (deg)")
plt.tight_layout()
plt.savefig(f.replace(".nc","_resolution_angle.png"))
# bathy
def plotgrid3(f):
depth, latt, lont = load3(f)
depth[depth==0]=np.nan
depth[depth>0]=1
#print(depth.shape)
# can do edits below
# made permanent in the main create bathy above
# north to south
#depth[178,128] = 400 #northern fjord
# depth[296,54] = 60 #northern fjord
# depth[296,53] = 60 #northern fjord
plt.figure(figsize=(8,8))
plt.subplot(1,1,1)
plt.pcolormesh(depth, cmap=plt.plasma()); plt.colorbar(); plt.title("depth")
#plt.pcolormesh(depth); plt.colorbar(); plt.title("depth")
#plt.pcolormesh(ma_rorunoff, cmap=plt.pink()); plt.title("rodepth")
plt.tight_layout()
plt.savefig(f.replace(".nc","_bathycheck.png"))
# runoff / rivers
def plotgrid4(f):
depth, latt, lont = load3(f)
# added for river runoff overlay
rorunoff, latt2, lontt2 = load4('c:/temp/runofftools/rivers_month_202101GO.nc')
#rorunoff[rorunoff==0]=np.nan
#print(rorunoff.shape)
ma_rorunoff = np.ma.masked_array(rorunoff, rorunoff == 0)
depth[depth==0]=np.nan
depth[depth>0]=1
#print(depth.shape)
plt.figure(figsize=(8,8))
plt.subplot(1,1,1)
plt.pcolormesh(depth, cmap=plt.plasma()); plt.colorbar(); plt.title("depth")
#plt.pcolormesh(depth); plt.colorbar(); plt.title("depth")
#plt.pcolormesh(ma_rorunoff, cmap=plt.pink()); plt.title("rodepth")
plt.tight_layout()
plt.savefig("C:/temp/runofftools/runoffcheck2.png")
# #################################################################
# #################### BASIC PLOT OF BATHY ########################
gridfilename = '..//data//grid//coordinates_salishsea_1500m.nc'
#bathyfilename = 'bathy_salishsea_1500m_before_manual_edits.nc'
#bathyfilename = '..//data//bathymetry//bathy_salishsea_1500m_Dec30.nc'
with nc.Dataset(gridfilename) as ncid:
glamt = ncid.variables["glamt"][0, :, :].filled()
gphit = ncid.variables["gphit"][0, :, :].filled()
glamf = ncid.variables["glamf"][0, :, :].filled()
gphif = ncid.variables["gphif"][0, :, :].filled()
glamfe,gphife=expandf(glamf,gphif)
with nc.Dataset(bathyout_filename) as nc_b_file:
bathy = nc_b_file.variables["Bathymetry"][:, :].filled()
bb=np.copy(bathy); bb[bb==0]=np.nan
plt.figure(figsize=(8,8))
plt.subplot(1,1,1)
plt.pcolormesh(glamfe,gphife,bb); plt.colorbar()
# Coastlines
mfile = sio.loadmat('..//data//reference//PNW.mat')
ncst = mfile['ncst']
plt.plot(ncst[:,0],ncst[:,1],'k')
mfile2 = sio.loadmat('..//data//reference//PNWrivers.mat')
ncst2 = mfile2['ncst']
plt.plot(ncst2[:,0],ncst2[:,1],'k')
##########################################################
############### PLOTS TO CHECK BATHY ETC #################
# plotgrid1('coordinates_seagrid_SalishSea2.nc')
#plotgrid1('coordinates_salishsea_1km.nc')
#plotgrid1('coordinates_salishsea_1500m.nc')
#plotgrid1('coordinates_salishsea_2km.nc')
#plotgrid2('coordinates_seagrid_SalishSea2.nc')
# plotgrid2('coordinates_salishsea_1km.nc')
#plotgrid2('coordinates_salishsea_2km.nc')
#plotgrid2('coordinates_salishsea_1p5km.nc')
#plotgrid3('bathy_salishsea_1500m_Dec21.nc')
plotgrid3(bathyout_filename)
#plotgrid3('bathy_salishsea_2km.nc')
# junk code below
a = range(24)
b = a[::3]
list(b)
my_list[0] = [_ for _ in 'abcdefghi']
my_list[1] = [_ for _ in 'abcdefghi']
my_list[0:-1]
glamu.shape
a[296,10]
############################################################
### EXPLORE TWO MESHES - NEMO ORAS5 and SS1500 #############
### Apr 2021
import sys
# load mask (tmask)
def loadmask(f):
with nc.Dataset(f) as ncid:
tmaskutil = ncid.variables["tmaskutil"][0,:, :].filled()
latt = ncid.variables["nav_lat"][:, :].filled()
lont = ncid.variables["nav_lon"][:, :].filled()
e1t = ncid.variables["e1t"][0,:, :].filled()
e2t = ncid.variables["e2t"][0,:, :].filled()
return tmaskutil, latt, lont, e1t, e2t
def plot_two_grids(f,g):
# load ss1500mask
tmask, latt, lont, e1t, e2t = loadmask(f)
# load ORAS5
tmask2, latt2, lont2, e1t2, e2t2 = loadmask(g)
#print(tmask[:,])
#plt.subplot(1,1,1)
#plt.figure(figsize=(7,5)); plt.clf()
plt.scatter(lont, latt, tmask)
plt.scatter(lont2, latt2, tmask2)
# Draw sides of every box
#glamfe, gphife = expandf(glamf, gphif)
#NY,NX = glamfe.shape
#for j in range(NY):
# plt.plot(glamfe[j,:],gphife[j,:], 'k')
#for i in range(NX):
# plt.plot(glamfe[:,i],gphife[:,i], 'k')
# Plot t, u, v, f points in red, green, blue, magenta
#plt.plot(glamt, gphit, 'r.')
#plt.plot(glamu, gphiu, 'g.')
#plt.plot(glamv, gphiv, 'b.')
#plt.plot(glamf, gphif, 'm.')
#plt.plot(glamt_2, gphit_2, 'b.')
#plt.plot(glamu, gphiu, 'g.')
#plt.plot(glamv, gphiv, 'b.')
#plt.plot(glamf, gphif, 'm.')
plt.tight_layout()
plt.xlim([-126.2,-122.1])
plt.ylim([46.84,52])
#plt.savefig(f.replace(".nc","_gridpts.png"))
res = "1500m"
ss1500grid = "..//data//grid//coordinates_salishsea_{}.nc".format(res) # in
datetag = "20210406"
oras5grid = "..//data//reference//ORAS5 Mask and Bathy//mesh_mask.nc"
ss1500meshmask = "..//data//mesh mask//mesh_mask_20210406.nc"
np.set_printoptions(threshold=sys.maxsize)
plot_two_grids(ss1500meshmask, oras5grid)
tmask, latt, lont, e1t, e2t = load2(f)
plt.figure(figsize=(8,8))
plt.subplot(1,1,1)
plt.pcolormesh(tmask[:,:], cmap=plt.pink()); plt.title("model_mask")
plt.tight_layout()
plt.figure(figsize=(7,5)); plt.clf()
plt.plot(tmaskutil[0,:],tmaskutil[:,0], 'r.')
with nc.Dataset(ss1500meshmask) as ncid:
print(tmaskutil[:,0])
###Output
[0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0]
|
onnx_conversion_scripts/onnx_to_tensorflow.ipynb | ###Markdown
Onnx to Tensorflow conversion explorationIn this notebook we test the [onnx-to-tensorflow](https://github.com/onnx/onnx-tensorflow/) converter package by running the original models and the converted models and comparing the outcomes.
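A handy pattern for this kind of comparison (the function name and the tolerances below are our own illustrative float32-scale choices, not values from either package):

```python
import numpy as np

def outputs_match(a, b, rtol=1e-4, atol=1e-5):
    # Summarize the discrepancy and decide if it is within float noise.
    diff = np.abs(a - b)
    print("mean", diff.mean(), "std", diff.std(), "max", diff.max())
    return np.allclose(a, b, rtol=rtol, atol=atol)

np.random.seed(0)
x = np.random.randn(4, 10).astype(np.float32)
print(outputs_match(x, x + 1e-6))  # tiny float noise: True
print(outputs_match(x, x + 1.0))   # a real discrepancy: False
```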
###Code
from pathlib import Path
import onnx
from onnx_tf.backend import prepare
import numpy as np
import dianna
###Output
_____no_output_____
###Markdown
Create functions that run a model either converted to TensorFlow or directly through the ONNX runner
###Code
def run_onnx_through_tf(onnx_model_path, data):
onnx_model = onnx.load(onnx_model_path) # load onnx model
tf_output = prepare(onnx_model).run(data).output
return tf_output
def run_onnx_using_runner(onnx_model_path, data):
runner = dianna.utils.onnx_runner.SimpleModelRunner(str(onnx_model_path))
onnx_runner_output = runner(data)
return onnx_runner_output
###Output
_____no_output_____
###Markdown
Case 1: Leafsnap
###Code
folder = Path(r'C:\Users\ChristiaanMeijer\Documents\dianna\tutorials')
leafsnap_model_path = folder/'leafsnap_model.onnx'
np.random.seed = 1234
leafsnap_input = np.random.randn(64,3,128,128).astype(np.float32)
abs_diff = np.abs(run_onnx_through_tf(leafsnap_model_path, leafsnap_input)
- run_onnx_using_runner(leafsnap_model_path, leafsnap_input))
print('mean', np.mean(abs_diff), '\nstd', np.std(abs_diff), '\nmax', np.max(abs_diff))
###Output
WARNING:tensorflow:From C:\Users\ChristiaanMeijer\AppData\Roaming\Python\Python39\site-packages\tensorflow\python\ops\array_ops.py:5043: calling gather (from tensorflow.python.ops.array_ops) with validate_indices is deprecated and will be removed in a future version.
Instructions for updating:
The `validate_indices` argument has no effect. Indices are always validated on CPU and never validated on GPU.
mean 2.69079e-05
std 2.9605108e-05
max 0.00030517578
###Markdown
Conclusion: outputs are equivalent.

Case 2: MNIST
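Rather than eyeballing mean/std/max statistics, the equivalence check can be made programmatic with `np.allclose`. This is a sketch that is not part of the original notebook, and the tolerance values are assumptions chosen for float32 outputs:

```python
import numpy as np

def assert_outputs_equivalent(a, b, rtol=1e-3, atol=1e-3):
    """Raise if two model outputs differ beyond float32-scale tolerances."""
    a, b = np.asarray(a), np.asarray(b)
    if not np.allclose(a, b, rtol=rtol, atol=atol):
        raise AssertionError(f"outputs differ, max abs diff = {np.max(np.abs(a - b))}")
    return True

# usage with stand-in arrays (in the notebook these would be
# tf_output and onnx_runner_output)
x = np.random.randn(4, 10).astype(np.float32)
print(assert_outputs_equivalent(x, x + np.float32(1e-5)))
```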
###Code
mnist_model_path = folder/'mnist_model.onnx'
mnist_input = np.random.randn(64,1,28,28).astype(np.float32)
abs_diff = np.abs(run_onnx_through_tf(mnist_model_path, mnist_input)
- run_onnx_using_runner(mnist_model_path, mnist_input))
print('mean', np.mean(abs_diff), '\nstd', np.std(abs_diff), '\nmax', np.max(abs_diff))
###Output
mean 7.450581e-09
std 1.9712383e-08
max 5.9604645e-08
|
whale_analysis-Copy1.ipynb | ###Markdown
A Whale off the Port(folio)

---

In this assignment, you'll get to use what you've learned this week to evaluate the performance among various algorithmic, hedge, and mutual fund portfolios and compare them against the S&P 500 Index.
###Code
# Initial imports
import pandas as pd
import numpy as np
import datetime as dt
from pathlib import Path
%matplotlib inline
###Output
_____no_output_____
###Markdown
Data Cleaning

In this section, you will need to read the CSV files into DataFrames and perform any necessary data cleaning steps. After cleaning, combine all DataFrames into a single DataFrame.

Files:

* `whale_returns.csv`: Contains returns of some famous "whale" investors' portfolios.
* `algo_returns.csv`: Contains returns from the in-house trading algorithms from Harold's company.
* `sp500_history.csv`: Contains historical closing prices of the S&P 500 Index.

Whale Returns

Read the Whale Portfolio daily returns and clean the data
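The cells below arrive at this workflow in pieces; as a compact sketch, reading, cleaning, and converting one of these price CSVs to daily returns can look like the following (the two-row inline CSV is a stand-in, not data from the Resources folder):

```python
import io
import pandas as pd

# stand-in for a file like sp500_history.csv
csv_text = "date,close\n1-Oct-19,$2940.25\n2-Oct-19,$2887.61\n"

df = pd.read_csv(io.StringIO(csv_text), parse_dates=["date"], index_col="date")
# strip the dollar sign and convert the prices to floats
df["close"] = df["close"].str.replace("$", "", regex=False).astype(float)
df.sort_index(inplace=True)
daily_returns = df["close"].pct_change().dropna()
print(daily_returns)
```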
###Code
# Reading whale returns
# Set the file paths
whale_returns_data = Path("./Resources/whale_returns.csv")
algo_returns_data = Path("./Resources/algo_returns.csv")
sp500_history_data = Path("./Resources/sp500_history.csv")
aapl_historical_data = Path("./Resources/aapl_historical.csv")
cost_historical_data = Path("./Resources/cost_historical.csv")
goog_historical_data = Path("./Resources/goog_historical.csv")
#whale_returns_data = Path("./Resources/whale_returns.csv")
# Read the CSVs and set the `date` column as a datetime index to the DataFrame
#whale_returns_df = pd.read_csv(whale_returns_data, index_col="date", infer_datetime_format=True, parse_dates=True)
#algo_returns_data = pd.read_csv(algo_returns_data, index_col="date", infer_datetime_format=True, parse_dates=True)
sp500_history_df = pd.read_csv(sp500_history_data)
#index_col="Date", infer_datetime_format=True, parse_dates=True)
aapl_historical_df = pd.read_csv(aapl_historical_data)
#index_col="Trade DATE", infer_datetime_format=True, parse_dates=True)
cost_historical_df = pd.read_csv(cost_historical_data)
#index_col="Trade DATE", infer_datetime_format=True, parse_dates=True)
goog_historical_df = pd.read_csv(goog_historical_data)
#index_col="Trade DATE", infer_datetime_format=True, parse_dates=True)
whale_returns_df = pd.read_csv(whale_returns_data)  # pd.read does not exist; read_csv is the correct call
algo_returns_df = pd.read_csv(algo_returns_data)
# sort each of the data frames with respect to the date
# set the collumn names
sp500_history_df.columns = ['date', 'close',]
#drop $ symbols
# Read dates to match accross all CSVs (format)
sp500_history_df['date'] = pd.to_datetime(sp500_history_df['date'], format = '%d-%b-%y')
pd.to_datetime(sp500_history_df['date'], format = '%d-%b-%y')
# drop symbol
aapl_historical_df = aapl_historical_df.drop(columns = ['Symbol'])
cost_historical_df = cost_historical_df.drop(columns = ['Symbol'])
goog_historical_df = goog_historical_df.drop(columns = ['Symbol'])
aapl_historical_df.columns = ['date', 'close']
aapl_historical_df['date'] = pd.to_datetime(aapl_historical_df['date'], format='%m/%d/%Y')
cost_historical_df.columns = ['date', 'close']
cost_historical_df['date'] = pd.to_datetime(cost_historical_df['date'], format='%m/%d/%Y')
goog_historical_df.columns = ['date', 'close']
goog_historical_df['date'] = pd.to_datetime(goog_historical_df['date'], format='%m/%d/%Y')
# Print rows
print(sp500_history_df.head(10))
print(sp500_history_df.dtypes)
print(aapl_historical_df.head(10))
print(aapl_historical_df.dtypes)
print(cost_historical_df.head(10))
print(cost_historical_df.dtypes)
print(goog_historical_df.head(10))
print(goog_historical_df.dtypes)
#print(whale_returns_data_df.head(10))
#print(whale_returns_data.dtypes)
#set index
#sp500_history_df = sp500_history_df.set_index("Date")
#aapl_historical_df = aapl_historical_df.set_index("Date")
#cost_historical_df = cost_historical_df.set_index("Date")
#goog_historical_df = goog_historical_df.set_index("Date")
# Sort each of the data frames with respect to their corresponding dates
#sp500_history_df.sort_index(inplace=True)
#aapl_historical_df.sort_index(inplace=True)
#cost_historical_df.sort_index(inplace=True)
#goog_historical_df.sort_index(inplace=True)
#(inplace=True, index_col="date", infer_datatime_format=True, parse_dates=True)
# Display a few rows
#whale_returns_df wrk_df.head()
#SEARCH AND FIND TO IMPLEMENT: INFER_DATETIME_FORMAT = TRUE, PARSE_DATES-TRUE (DELETE)
#concat
#combined_df = pd.concat([sp500_history_df, aapl_historical_df, goog_historical_df, cost_historical_df], axis = "columns", join = "inner")
# sort datetime index in ascending order
#combined_df.sort_index(inplace=True)
# set column names
#combined_df.columns = ['SP500', 'AAPL', 'COST', 'GOOG']
#sp500_history_df.head()
#combined_df.head()
#set index
sp500_history_df = sp500_history_df.set_index("date")
aapl_historical_df = aapl_historical_df.set_index("date")
cost_historical_df = cost_historical_df.set_index("date")
goog_historical_df = goog_historical_df.set_index("date")
# Sort each of the data frames with respect to their corresponding dates
sp500_history_df.sort_index(inplace=True)
aapl_historical_df.sort_index(inplace=True)
cost_historical_df.sort_index(inplace=True)
goog_historical_df.sort_index(inplace=True)
combined_df = pd.concat([sp500_history_df, aapl_historical_df, goog_historical_df, cost_historical_df], axis = "columns", join = "inner")
combined_df.columns = ['SP500', 'AAPL', 'COST', 'GOOG']
#sp500_history_df.head()
combined_df.head()
# Helper to strip the dollar sign from the price strings
def clean_dollarsign(value):
if isinstance(value, str):
return(value.replace('$', ''))
return(value)
#set column names
sp500_history_df.columns = ["date", "close"]
whale_returns_df.columns = ["date", "soros_fund", "paulson_fund", "tiger_global_fund", "berkshare_fund"]
algo_returns_df.columns = ["date", "algo_1", "algo_2"]
#set the date format from the csv file into the sp500 dataframe
sp500_history_df['date'] = pd.to_datetime(sp500_history_df['date'], format='%d-%b-%y')
#remove the dollar sign and change the type to float
sp500_history_df['close'] = sp500_history_df['close'].apply(clean_dollarsign).astype('float')
#set index
sp500_history_df = sp500_history_df.set_index("date")
whale_returns_df = whale_returns_df.set_index("date")
algo_returns_df = algo_returns_df.set_index("date")
# Sort each of the data frames with respect to their corresponding dates
sp500_history_df.sort_index(inplace=True)
whale_returns_df.sort_index(inplace=True)
algo_returns_df.sort_index(inplace=True)
#calculate returns for the sp500_history_df
sp500_returns_df = sp500_history_df.pct_change()
#check for null values
print(f" Null values in s&p500 :\n{sp500_history_df.isnull().sum()}\n")
print(f" Null values in Whale :\n{whale_returns_df.isnull().sum()}\n")
print(f" Null values in Algo :\n{algo_returns_df.isnull().sum()}\n")
# Print out all CSVs to see the drop dollar sign changes
print(sp500_history_df.head())
print(sp500_history_df.dtypes)
print(aapl_historical_df.head())
print(aapl_historical_df.dtypes)
print(cost_historical_df.head())
print(cost_historical_df.dtypes)
print(goog_historical_df.head())
print(goog_historical_df.dtypes)
# Count nulls
sp500_history_df.isnull().sum()
aapl_historical_df.isnull().sum()
cost_historical_df.isnull().sum()
goog_historical_df.isnull().sum()
# Drop nulls (reassign: dropna is not in-place by default)
sp500_history_df = sp500_history_df.dropna()
aapl_historical_df = aapl_historical_df.dropna()
cost_historical_df = cost_historical_df.dropna()
goog_historical_df = goog_historical_df.dropna()
###Output
_____no_output_____
###Markdown
Algorithmic Daily Returns

Read the algorithmic daily returns and clean the data
###Code
# Reading algorithmic returns
algo_returns_data = Path("./Resources/algo_returns.csv")
algo_returns_df = pd.read_csv(algo_returns_data)
#set columns
algo_returns_df.columns = ['date', 'Algo1', 'Algo2',]
print(algo_returns_df.head(10))
print(algo_returns_df.dtypes)
# Count nulls
algo_returns_df.isnull().sum()
# Drop nulls (reassign: dropna is not in-place by default)
algo_returns_df = algo_returns_df.dropna()
###Output
_____no_output_____
###Markdown
S&P 500 Returns

Read the S&P 500 historic closing prices and create a new daily returns DataFrame from the data.
###Code
# Reading S&P 500 Closing Prices
# Check Data Types
# Fix Data Types
# Calculate Daily Returns
# Drop nulls
# Rename `Close` Column to be specific to this portfolio.
###Output
_____no_output_____
###Markdown
Combine Whale, Algorithmic, and S&P 500 Returns
###Code
# Join Whale Returns, Algorithmic Returns, and the S&P 500 Returns into a single DataFrame with columns for each portfolio's returns.
###Output
_____no_output_____
###Markdown
---

Conduct Quantitative Analysis

In this section, you will calculate and visualize performance and risk metrics for the portfolios.

Performance Analysis

Calculate and Plot the daily returns.
###Code
# Plot daily returns of all portfolios
###Output
_____no_output_____
###Markdown
Calculate and Plot cumulative returns.
###Code
# Calculate cumulative returns of all portfolios
# Plot cumulative returns
###Output
_____no_output_____
###Markdown
---

Risk Analysis

Determine the _risk_ of each portfolio:

1. Create a box plot for each portfolio.
2. Calculate the standard deviation for all portfolios.
3. Determine which portfolios are riskier than the S&P 500.
4. Calculate the Annualized Standard Deviation.

Create a box plot for each portfolio
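The annualization step can be sketched on a synthetic returns series (252 is the usual trading-day count; the numbers themselves are made up):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
daily_returns = pd.Series(rng.normal(0.0005, 0.01, size=504))  # two synthetic "years"

daily_std = daily_returns.std()
annualized_std = daily_std * np.sqrt(252)  # scale daily volatility to an annual figure
print(round(float(annualized_std), 4))
```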
###Code
# Box plot to visually show risk
###Output
_____no_output_____
###Markdown
Calculate Standard Deviations
###Code
# Calculate the daily standard deviations of all portfolios
###Output
_____no_output_____
###Markdown
Determine which portfolios are riskier than the S&P 500
###Code
# Calculate the daily standard deviation of S&P 500
# Determine which portfolios are riskier than the S&P 500
###Output
_____no_output_____
###Markdown
Calculate the Annualized Standard Deviation
###Code
# Calculate the annualized standard deviation (252 trading days)
###Output
_____no_output_____
###Markdown
---

Rolling Statistics

Risk changes over time. Analyze the rolling statistics for Risk and Beta.

1. Calculate and plot the rolling standard deviation for the S&P 500 using a 21-day window.
2. Calculate the correlation between each stock to determine which portfolios may mimic the S&P 500.
3. Choose one portfolio, then calculate and plot the 60-day rolling beta between it and the S&P 500.

Calculate and plot rolling `std` for all portfolios with 21-day window
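A self-contained sketch of the 21-day rolling standard deviation and a 60-day rolling beta (all series below are synthetic; only the window lengths follow the assignment):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
market = pd.Series(rng.normal(0, 0.01, size=300))
portfolio = 0.8 * market + pd.Series(rng.normal(0, 0.005, size=300))

rolling_std = portfolio.rolling(window=21).std()

# rolling beta: rolling covariance with the market over rolling market variance
rolling_beta = portfolio.rolling(window=60).cov(market) / market.rolling(window=60).var()
print(round(float(rolling_beta.dropna().mean()), 2))
```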
###Code
# Calculate the rolling standard deviation for all portfolios using a 21-day window
# Plot the rolling standard deviation
###Output
_____no_output_____
###Markdown
Calculate and plot the correlation
###Code
# Calculate the correlation
# Display the correlation matrix
###Output
_____no_output_____
###Markdown
Calculate and Plot Beta for a chosen portfolio and the S&P 500
###Code
# Calculate covariance of a single portfolio
# Calculate variance of S&P 500
# Computing beta
# Plot beta trend
###Output
_____no_output_____
###Markdown
Rolling Statistics Challenge: Exponentially Weighted Average

An alternative way to calculate a rolling window is to take the exponentially weighted moving average. This is like a moving window average, but it assigns greater importance to more recent observations. Try calculating the [`ewm`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.ewm.html) with a 21-day half-life.
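A small sketch of the `ewm` call next to its fixed-window counterpart (the returns series is synthetic):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
returns = pd.Series(rng.normal(0, 0.01, size=120))

ewm_std = returns.ewm(halflife=21).std()        # exponentially weighted, 21-day half-life
rolling_std = returns.rolling(window=21).std()  # plain 21-day window

# the ewm estimate starts almost immediately, while the rolling one
# needs a full 21-observation window before its first value
print(int(ewm_std.notna().sum()), int(rolling_std.notna().sum()))
```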
###Code
# Use `ewm` to calculate the rolling window
###Output
_____no_output_____
###Markdown
---

Sharpe Ratios

In reality, investment managers and their institutional investors look at the ratio of return-to-risk, and not just returns alone. After all, if you could invest in one of two portfolios, and each offered the same 10% return, yet one offered lower risk, you'd take that one, right?

Using the daily returns, calculate and visualize the Sharpe ratios using a bar plot
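As a sketch, the annualized Sharpe ratio on daily returns (a 0% risk-free rate is assumed here, which is common in this kind of exercise; the return series are synthetic):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
daily_returns = pd.DataFrame({
    "algo_1": rng.normal(0.0008, 0.010, size=252),
    "sp500": rng.normal(0.0004, 0.012, size=252),
})

# annualized mean return over annualized volatility
sharpe_ratios = (daily_returns.mean() * 252) / (daily_returns.std() * np.sqrt(252))
print(sharpe_ratios.round(3))
```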
###Code
# Annualized Sharpe Ratios
# Visualize the sharpe ratios as a bar plot
###Output
_____no_output_____
###Markdown
Determine whether the algorithmic strategies outperform both the market (S&P 500) and the whale portfolios.

Write your answer here!

---

Create Custom Portfolio

In this section, you will build your own portfolio of stocks, calculate the returns, and compare the results to the Whale Portfolios and the S&P 500.

1. Choose 3-5 custom stocks with at least 1 year's worth of historic prices and create a DataFrame of the closing prices and dates for each stock.
2. Calculate the weighted returns for the portfolio assuming an equal number of shares for each stock.
3. Join your portfolio returns to the DataFrame that contains all of the portfolio returns.
4. Re-run the performance and risk analysis with your portfolio to see how it compares to the others.
5. Include correlation analysis to determine which stocks (if any) are correlated.

Choose 3-5 custom stocks with at least 1 year's worth of historic prices and create a DataFrame of the closing prices and dates for each stock.

For this demo solution, we fetch data from three companies listed in the S&P 500 index.

* `GOOG` - [Google, LLC](https://en.wikipedia.org/wiki/Google)
* `AAPL` - [Apple Inc.](https://en.wikipedia.org/wiki/Apple_Inc.)
* `COST` - [Costco Wholesale Corporation](https://en.wikipedia.org/wiki/Costco)
###Code
# Reading data from 1st stock
# Reading data from 2nd stock
# Reading data from 3rd stock
# Combine all stocks in a single DataFrame
# Reset Date index
# Reorganize portfolio data by having a column per symbol
# Calculate daily returns
# Drop NAs
# Display sample data
###Output
_____no_output_____
###Markdown
Calculate the weighted returns for the portfolio assuming an equal number of shares for each stock
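With equal weights, the portfolio return is a row-wise dot product of the individual stock returns with the weight vector. A sketch with made-up numbers (in the demo these would be the GOOG/AAPL/COST daily returns):

```python
import pandas as pd

stock_returns = pd.DataFrame({
    "GOOG": [0.010, -0.020, 0.005],
    "AAPL": [0.020, 0.010, -0.010],
    "COST": [0.000, 0.005, 0.010],
})
weights = [1/3, 1/3, 1/3]

# each day's portfolio return is the weighted sum of that day's stock returns
portfolio_returns = stock_returns.dot(weights)
print(portfolio_returns)
```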
###Code
# Set weights
weights = [1/3, 1/3, 1/3]
# Calculate portfolio return
# Display sample data
###Output
_____no_output_____
###Markdown
Join your portfolio returns to the DataFrame that contains all of the portfolio returns
###Code
# Join your returns DataFrame to the original returns DataFrame
# Only compare dates where return data exists for all the stocks (drop NaNs)
###Output
_____no_output_____
###Markdown
Re-run the risk analysis with your portfolio to see how it compares to the others

Calculate the Annualized Standard Deviation
###Code
# Calculate the annualized `std`
###Output
_____no_output_____
###Markdown
Calculate and plot rolling `std` with 21-day window
###Code
# Calculate rolling standard deviation
# Plot rolling standard deviation
###Output
_____no_output_____
###Markdown
Calculate and plot the correlation
###Code
# Calculate and plot the correlation
###Output
_____no_output_____
###Markdown
Calculate and Plot Rolling 60-day Beta for Your Portfolio compared to the S&P 500
###Code
# Calculate and plot Beta
###Output
_____no_output_____
###Markdown
Using the daily returns, calculate and visualize the Sharpe ratios using a bar plot
###Code
# Calculate Annualized Sharpe Ratios
# Visualize the sharpe ratios as a bar plot
###Output
_____no_output_____ |
.ipynb_checkpoints/7_Logistic_Regression_And_PolynomialFeature(degree)_LogisticRegression(C=C, penalty = 'l1' or 'l2')-checkpoint.ipynb | ###Markdown
1. Sigmoid function
###Code
# import
import numpy as np
import matplotlib.pyplot as plt
# sigmoid: the probability function for a binary classification problem
def sigmoid(x):
y = 1/(1 + np.exp(-x))
return y
# plot the sigmoid function
x = np.linspace(-20,20,100)
y = sigmoid(x)
plt.plot(x,y,'r')
plt.show()
###Output
_____no_output_____
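One caveat with the `sigmoid` above: `np.exp(-x)` overflows for large negative inputs. A numerically stable variant — a standard trick, not part of the original notebook — only ever exponentiates non-positive values:

```python
import numpy as np

def stable_sigmoid(x):
    """Numerically stable sigmoid: never exponentiates a large positive number."""
    x = np.asarray(x, dtype=float)
    out = np.empty_like(x)
    pos = x >= 0
    out[pos] = 1.0 / (1.0 + np.exp(-x[pos]))  # safe: -x[pos] <= 0
    exp_x = np.exp(x[~pos])                   # safe: x[~pos] < 0
    out[~pos] = exp_x / (1.0 + exp_x)
    return out

print(stable_sigmoid(np.array([-1000.0, 0.0, 1000.0])))
```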
###Markdown
2. Logistic Regression implemented in sklearn
###Code
# data set
np.random.seed(666)
X = np.random.normal(0, 1, size=(300, 2))  # X.shape = (300, 2)
y = np.array(X[:, 0] + X[:, 1] < 0.5, dtype='int')  # y.shape = (300,)
#y
# flip some random labels to add noise (index over the full 300 samples)
for _ in range(20):
    y[np.random.randint(300)] = 1
type(y)
# show the data
plt.scatter(X[y == 0,0], X[y == 0,1])
plt.scatter(X[y == 1,0], X[y == 1,1])
plt.show()
###Output
_____no_output_____
###Markdown
2.1 Split the data and build the logistic regression model
###Code
# try to split data
from sklearn.model_selection import train_test_split
X_train,X_test,y_train,y_test = train_test_split(X,y,random_state = 666)
# model
from sklearn.linear_model import LogisticRegression
logre_clf = LogisticRegression()
logre_clf.fit(X_train,y_train)
# test the score on training data set
logre_clf.score(X_train,y_train)
###Output
_____no_output_____
###Markdown
2.2 Test the model on test data set
###Code
# test on test data set
logre_clf.score(X_test,y_test)
###Output
_____no_output_____
###Markdown
2.3 Plot the decision boundary
###Code
def plot_decision_boundary(model, axis):
x0, x1 = np.meshgrid(
np.linspace(axis[0], axis[1], int((axis[1]-axis[0])*100)).reshape(-1, 1),
np.linspace(axis[2], axis[3], int((axis[3]-axis[2])*100)).reshape(-1, 1),
)
X_new = np.c_[x0.ravel(), x1.ravel()]
y_predict = model.predict(X_new)
zz = y_predict.reshape(x0.shape)
from matplotlib.colors import ListedColormap
custom_cmap = ListedColormap(['#EF9A9A','#FFF59D','#90CAF9'])
plt.contourf(x0, x1, zz, cmap=custom_cmap)
plot_decision_boundary(logre_clf,axis = [-4,4,-4,4])
plt.scatter(X[y == 0,0],X[y == 0,1])
plt.scatter(X[y ==1,0],X[y == 1,1])
plt.show()
###Output
_____no_output_____
###Markdown
3 Logistic Regression with PolynomialFeatures : degree = 20
###Code
from sklearn.preprocessing import PolynomialFeatures
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import Pipeline
def LogisticregressionPolynomialFeatures(degree):
return Pipeline([
('poly',PolynomialFeatures(degree = degree)),
('standardscaler',StandardScaler()),
('logregression',LogisticRegression())
])
log_polynomialfeatures_clf = LogisticregressionPolynomialFeatures(degree = 20)
log_polynomialfeatures_clf.fit(X_train,y_train)
# test the score
log_polynomialfeatures_clf.score(X_train,y_train)
log_polynomialfeatures_clf.score(X_test,y_test)
# plot the decision boundary
plot_decision_boundary(log_polynomialfeatures_clf,axis = [-4,4,-4,4])
plt.scatter(X[y == 0,0],X[y == 0,1])
plt.scatter(X[y ==1,0],X[y == 1,1])
plt.show()
###Output
_____no_output_____
###Markdown
4 Logistic Regression with PolynomialFeatures : degree = 20 and C = 0.1
###Code
def LogisticregressionPolynomialFeatures_C(degree,C ):
return Pipeline([
('poly',PolynomialFeatures(degree = degree)),
('standardscaler',StandardScaler()),
('logregression',LogisticRegression(C = C)) # C value for logistic Regression
])
logreg_C_clf =LogisticregressionPolynomialFeatures_C(degree = 20, C= 0.1)
logreg_C_clf.fit(X_train,y_train)
logreg_C_clf.score(X_train,y_train)
logreg_C_clf.score(X_test,y_test)
# plot the decision boundary
plot_decision_boundary(logreg_C_clf,axis = [-4,4,-4,4])
plt.scatter(X[y == 0,0],X[y == 0,1])
plt.scatter(X[y ==1,0],X[y == 1,1])
plt.show()
# 5. Logistic Regression with PolynomialFeatures: degree, C, and an L1/L2 penalty
def PolynomialLogisticRegression_penality(degree, C, penalty='l2'):  # L2 regularization by default
return Pipeline([
('poly', PolynomialFeatures(degree=degree)),
('std_scaler', StandardScaler()),
        ('log_reg', LogisticRegression(C=C, penalty=penalty))  # penalty='l1' needs an l1-capable solver (e.g. liblinear or saga) in recent scikit-learn
])
PolynomialLogistic_penality = PolynomialLogisticRegression_penality(degree = 12, C= 0.1,penalty = 'l1')
poly_clf = PolynomialLogistic_penality.fit(X_train,y_train)
# modeling
poly_clf.score(X_train,y_train)
poly_clf.score(X_test,y_test)
###Output
_____no_output_____
###Markdown
5.1 L1 regularization result:
###Code
# plot the decision boundary
plot_decision_boundary(poly_clf, axis =[-4,4,-4,4])
plt.scatter(X[y == 0,0],X[y == 0,1])
plt.scatter(X[y == 1,0],X[y == 1,1])
plt.show()
###Output
_____no_output_____
###Markdown
5.2 L2 regularization result
###Code
PolynomialLogistic_penality = PolynomialLogisticRegression_penality(degree = 12, C= 0.1,penalty = 'l2')
poly_clf_l2 = PolynomialLogistic_penality.fit(X_train,y_train)
poly_clf_l2.score(X_test,y_test)
plot_decision_boundary(poly_clf_l2, axis =[-4,4,-4,4])
plt.scatter(X[y == 0,0],X[y == 0,1])
plt.scatter(X[y == 1,0],X[y == 1,1])
plt.show()
###Output
_____no_output_____ |
jupyter/Experiment_unconstrained_problem_freelancer_pop08_rare01.ipynb | ###Markdown
Import packages
###Code
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
import os
import sys
import dill
import yaml
import numpy as np
import pandas as pd
import ast
import collections
import seaborn as sns
sns.set(style='ticks')
###Output
_____no_output_____
###Markdown
Import submodular-optimization packages
###Code
sys.path.insert(0, "/Users/smnikolakaki/GitHub/submodular-linear-cost-maximization/submodular_optimization/")
###Output
_____no_output_____
###Markdown
Visualizations directory
###Code
VIZ_DIR = os.path.abspath("/Users/smnikolakaki/GitHub/submodular-linear-cost-maximization/submodular_optimization/viz/")
###Output
_____no_output_____
###Markdown
Legends and style dictionary
###Code
legends = {
"distorted_greedy":"DistortedGreedy",
"cost_scaled_lazy_exact_greedy":"CSLG",
"unconstrained_linear":"OnlineCSG",
"unconstrained_distorted_greedy":"UnconstrainedDistortedGreedy",
"stochastic_distorted_greedy_0.01":"StochasticDistortedGreedy",
"baseline_topk": "Top-k-Experts",
"greedy":"Greedy"
}
legends = collections.OrderedDict(sorted(legends.items()))
line_styles = {'distorted_greedy':':',
'cost_scaled_lazy_exact_greedy':'-',
'unconstrained_linear':'-',
'unconstrained_distorted_greedy':'-',
'stochastic_distorted_greedy_0.01':'-.',
'baseline_topk':'--',
"greedy":"--"
}
line_styles = collections.OrderedDict(sorted(line_styles.items()))
marker_style = {'distorted_greedy':'s',
'cost_scaled_lazy_exact_greedy':'x',
'unconstrained_linear':'*',
'unconstrained_distorted_greedy':'+',
'stochastic_distorted_greedy_0.01':'o',
'baseline_topk':'d',
"greedy":"h"
}
marker_style = collections.OrderedDict(sorted(marker_style.items()))
marker_size = {'distorted_greedy':25,
'cost_scaled_lazy_exact_greedy':30,
'unconstrained_linear':25,
'unconstrained_distorted_greedy':25,
'stochastic_distorted_greedy_0.01':25,
'baseline_topk':22,
"greedy":30
}
marker_size = collections.OrderedDict(sorted(marker_size.items()))
marker_edge_width = {'distorted_greedy':6,
'cost_scaled_lazy_exact_greedy':10,
'unconstrained_linear':10,
'unconstrained_distorted_greedy':6,
'stochastic_distorted_greedy_0.01':6,
'baseline_topk':6,
"greedy":6
}
marker_edge_width = collections.OrderedDict(sorted(marker_edge_width.items()))
line_width = {'distorted_greedy':5,
'cost_scaled_lazy_exact_greedy':5,
'unconstrained_linear':5,
'unconstrained_distorted_greedy':5,
'stochastic_distorted_greedy_0.01':5,
'baseline_topk':5,
"greedy":5
}
line_width = collections.OrderedDict(sorted(line_width.items()))
name_objective = "Combined objective (g)"
fontsize = 53
legendsize = 42
labelsize = 53
x_size = 20
y_size = 16
###Output
_____no_output_____
###Markdown
Plotting utilities
###Code
def set_style():
# This sets reasonable defaults for font size for a paper
sns.set_context("paper")
# Set the font to be serif
sns.set(font='serif')#, rc={'text.usetex' : True})
# Make the background white, and specify the specific font family
sns.set_style("white", {
"font.family": "serif",
"font.serif": ["Times", "Palatino", "serif"]
})
# Set tick size for axes
sns.set_style("ticks", {"xtick.major.size": 6, "ytick.major.size": 6})
def set_size(fig, width=13, height=12):
fig.set_size_inches(width, height)
plt.tight_layout()
def save_fig(fig, filename):
fig.savefig(os.path.join(VIZ_DIR, filename), dpi=600, format='pdf', bbox_inches='tight')
###Output
_____no_output_____
###Markdown
Plots
###Code
df1 = pd.read_csv("/Users/smnikolakaki/GitHub/submodular-linear-cost-maximization/jupyter/experiment_00_freelancer_pop08_rare01_final.csv",
header=0,
index_col=False)
df1.columns = ['Algorithm', 'sol', 'val', 'submodular_val', 'cost', 'runtime', 'lazy_epsilon',
'sample_epsilon','user_sample_ratio','scaling_factor','num_rare_skills','num_common_skills',
'num_popular_skills','num_sampled_skills','seed','k']
df2 = pd.read_csv("/Users/smnikolakaki/GitHub/submodular-linear-cost-maximization/jupyter/experiment_00_freelancer_pop08_rare01_greedy.csv",
header=0,
index_col=False)
df2.columns = ['Algorithm', 'sol', 'val', 'submodular_val', 'cost', 'runtime', 'lazy_epsilon',
'sample_epsilon','user_sample_ratio','scaling_factor','num_rare_skills','num_common_skills',
'num_popular_skills','num_sampled_skills','seed','k']
frames = []
frames.append(df1)
frames.append(df2)
df_final = pd.concat(frames)
df_final.to_csv("/Users/smnikolakaki/GitHub/submodular-linear-cost-maximization/jupyter/experiment_00_freelancer_pop08_rare01.csv", index=False)
###Output
_____no_output_____
###Markdown
Details

Original marginal gain: $$g(e|S) = f(e|S) - w(e)$$

Scaled marginal gain: $$\tilde{g}(e|S) = f(e|S) - 2w(e)$$

Distorted marginal gain: $$\tilde{g}(e|S) = (1-\frac{\gamma}{n})^{n-(i+1)}f(e|S) - w(e)$$

Algorithms:

1. Cost Scaled Greedy: The algorithm performs iterations i = 0,...,n-1. In each iteration the algorithm selects the element that maximizes the scaled marginal gain. It adds the element to the solution if the original marginal gain of the element is >= 0. The algorithm returns a solution S: f(S) - w(S) >= (1/2)f(OPT) - w(OPT). The running time is O($n^2$).
2. Cost Scaled Exact Lazy Greedy: The algorithm first initializes a max heap with all the elements. The key of each element is its scaled marginal gain and the value is the element id. In each iteration we pop the top element and recompute its scaled marginal gain: if the recomputed gain is >= the next element's old gain we return the popped element; otherwise, if its new scaled marginal gain is >= 0 we reinsert the element into the heap and repeat, and if not we discard it and repeat. If the returned element's original marginal gain is >= 0 we add it to the solution. The algorithm returns a solution S: f(S) - w(S) >= (1/2)f(OPT) - w(OPT). The running time is O($n^2$).
3. Unconstrained Linear: The algorithm performs i = 0,...,n-1 iterations (one for each arriving element). For each element it adds it to the solution if its scaled marginal gain is > 0. The algorithm returns a solution S: f(S) - w(S) >= (1/2)f(OPT) - w(OPT). The running time is O($n$).
4. Distorted Greedy: The algorithm performs i = 0,...,n-1 iterations. In each iteration the algorithm selects the element that maximizes the distorted marginal gain. It adds the element to the solution if the distorted marginal gain of the element is > 0. The algorithm returns a solution S: f(S) - w(S) >= (1-1/e)f(OPT) - w(OPT). The running time is O($n^2$). The algorithmic implementation is based on Algorithm 1 found [here](https://arxiv.org/pdf/1904.09354.pdf) for k=n.
5. Stochastic Distorted Greedy: The algorithm performs i = 0,...,n-1 iterations. In each iteration the algorithm chooses a sample of s=log(1/ε) elements uniformly and independently and from this sample it selects the element that maximizes the distorted marginal gain. It adds the element to the solution if the distorted marginal gain of the element is > 0. We set $ε=0.01$. The algorithm returns a solution S: E[f(S) - w(S)] >= (1-1/e-ε)f(OPT) - w(OPT). The running time is O($n\log{1/ε}$). The algorithmic implementation is based on Algorithm 2 found [here](https://arxiv.org/pdf/1904.09354.pdf) for k=n.
6. Unconstrained Distorted Greedy: The algorithm performs i = 0,...,n-1 iterations. In each iteration the algorithm chooses a random single element uniformly. It adds the element to the solution if the distorted marginal gain of the element is > 0. The algorithm returns a solution S: E[f(S) - w(S)] >= (1-1/e)f(OPT) - w(OPT). The running time is O($n$).

Performance comparison
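A runnable sketch of the DistortedGreedy selection rule from item 4, taking the curvature parameter γ = 1, on a toy weighted-coverage instance (the coverage sets and costs below are made up; only the distorted-gain rule follows the description above):

```python
# toy instance: each element covers some items and has a linear cost (made up)
coverage = {"a": {1, 2, 3}, "b": {3, 4}, "c": {5}, "d": {1, 5, 6, 7}}
cost = {"a": 1.0, "b": 0.5, "c": 2.0, "d": 1.5}

def distorted_greedy(coverage, cost):
    """DistortedGreedy with k = n: at step i pick the element maximizing
    (1 - 1/n)^(n-(i+1)) * f(e|S) - w(e), adding it only if that gain is > 0."""
    n = len(coverage)
    covered, solution = set(), []
    remaining = set(coverage)
    for i in range(n):
        distortion = (1.0 - 1.0 / n) ** (n - (i + 1))
        best, best_gain = None, 0.0
        for e in remaining:
            marginal = len(covered | coverage[e]) - len(covered)  # f(e|S)
            gain = distortion * marginal - cost[e]
            if gain > best_gain:
                best, best_gain = e, gain
        if best is not None:
            solution.append(best)
            covered |= coverage[best]
            remaining.remove(best)
    # return the solution and the combined objective g(S) = f(S) - w(S)
    return solution, len(covered) - sum(cost[e] for e in solution)

sol, objective = distorted_greedy(coverage, cost)
print(sol, objective)
```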
###Code
def plot_performance_comparison(df):
palette = sns.color_palette(['#b30000','#dd8452', '#4c72b0','#ccb974',
'#55a868', '#64b5cd',
'#8172b3', '#937860', '#da8bc3', '#8c8c8c',
'#ccb974', '#64b5cd'],7)
ax = sns.lineplot(x='user_sample_ratio', y='val', data=df,
style="Algorithm",hue='Algorithm', ci='sd',
mfc='none',palette=palette, dashes=False)
i = 0
for key, val in line_styles.items():
ax.lines[i].set_linestyle(val)
# ax.lines[i].set_color(colors[key])
ax.lines[i].set_linewidth(line_width[key])
ax.lines[i].set_marker(marker_style[key])
ax.lines[i].set_markersize(marker_size[key])
ax.lines[i].set_markeredgewidth(marker_edge_width[key])
ax.lines[i].set_markeredgecolor(None)
i += 1
plt.yticks(np.arange(0, 45000, 5000))
plt.xticks(np.arange(0, 1.1, 0.1))
plt.xlabel('Expert sample fraction', fontsize=fontsize)
plt.ylabel(name_objective, fontsize=fontsize)
# plt.title('Performance comparison')
fig = plt.gcf()
figlegend = plt.legend([val for key,val in legends.items()],loc=3, bbox_to_anchor=(0., 1.02, 1., .102),
ncol=2, mode="expand", borderaxespad=0., frameon=False,prop={'size': legendsize})
ax = plt.gca()
plt.gca().tick_params(axis='y', labelsize=labelsize)
plt.gca().tick_params(axis='x', labelsize=labelsize)
return fig, ax
df = pd.read_csv("/Users/smnikolakaki/GitHub/submodular-linear-cost-maximization/jupyter/experiment_00_freelancer_pop08_rare01.csv",
header=0,
index_col=False)
df.columns = ['Algorithm', 'sol', 'val', 'submodular_val', 'cost', 'runtime', 'lazy_epsilon',
'sample_epsilon','user_sample_ratio','scaling_factor','num_rare_skills','num_common_skills',
'num_popular_skills','num_sampled_skills','seed','k']
df = df[(df.Algorithm == 'distorted_greedy')
# |(df.Algorithm == 'cost_scaled_greedy')
|(df.Algorithm == 'cost_scaled_lazy_greedy')
|(df.Algorithm == 'unconstrained_linear')
|(df.Algorithm == 'unconstrained_distorted_greedy')
|(df.Algorithm == 'stochastic_distorted_greedy_0.01')
|(df.Algorithm == 'baseline_topk')
|(df.Algorithm == 'greedy')
]
df0 = df[(df['sample_epsilon'].isnull()) | (df['sample_epsilon'] == 0.01)]
df0.sort_values(by ='Algorithm',inplace=True)
set_style()
fig, axes = plot_performance_comparison(df0)
set_size(fig, x_size, y_size)
save_fig(fig,'score_unconstrained_freelancer_pop08_rare01.pdf')
###Output
_____no_output_____
###Markdown
Runtime comparison for different dataset sizes
###Code
legends = {
"distorted_greedy":"DistortedGreedy",
"cost_scaled_greedy":"CSG",
"cost_scaled_lazy_exact_greedy":"CSLG",
"unconstrained_linear":"OnlineCSG",
"unconstrained_distorted_greedy":"UnconstrainedDistortedGreedy",
"stochastic_distorted_greedy_0.01":"StochasticDistortedGreedy",
"baseline_topk": "Top-k-Experts",
"greedy":"Greedy"
}
legends = collections.OrderedDict(sorted(legends.items()))
line_styles = {'distorted_greedy':':',
'cost_scaled_greedy':'-',
'cost_scaled_lazy_exact_greedy':'-',
'unconstrained_linear':'-',
'unconstrained_distorted_greedy':'-',
'stochastic_distorted_greedy_0.01':'-.',
'baseline_topk':'--',
"greedy":"--"
}
line_styles = collections.OrderedDict(sorted(line_styles.items()))
marker_style = {'distorted_greedy':'s',
'cost_scaled_greedy':'x',
'cost_scaled_lazy_exact_greedy':'x',
'unconstrained_linear':'*',
'unconstrained_distorted_greedy':'+',
'stochastic_distorted_greedy_0.01':'o',
'baseline_topk':'d',
"greedy":"h"
}
marker_style = collections.OrderedDict(sorted(marker_style.items()))
marker_size = {'distorted_greedy':25,
'cost_scaled_greedy':30,
'cost_scaled_lazy_exact_greedy':30,
'unconstrained_linear':25,
'unconstrained_distorted_greedy':25,
'stochastic_distorted_greedy_0.01':25,
'baseline_topk':22,
"greedy":30
}
marker_size = collections.OrderedDict(sorted(marker_size.items()))
marker_edge_width = {'distorted_greedy':6,
'cost_scaled_greedy':10,
'cost_scaled_lazy_exact_greedy':10,
'unconstrained_linear':6,
'unconstrained_distorted_greedy':6,
'stochastic_distorted_greedy_0.01':6,
'baseline_topk':6,
"greedy":6
}
marker_edge_width = collections.OrderedDict(sorted(marker_edge_width.items()))
line_width = {'distorted_greedy':5,
'cost_scaled_greedy':5,
'cost_scaled_lazy_exact_greedy':5,
'unconstrained_linear':5,
'unconstrained_distorted_greedy':5,
'stochastic_distorted_greedy_0.01':5,
'baseline_topk':5,
"greedy":5}
line_width = collections.OrderedDict(sorted(line_width.items()))
name_objective = "Combined objective (g)"
fontsize = 53
legendsize = 42
labelsize = 53
x_size = 20
y_size = 16
def plot_performance_comparison(df):
palette = sns.color_palette(['#b30000','#937860','#dd8452', '#4c72b0','#ccb974'
,'#55a868', '#64b5cd',
'#8172b3', '#937860', '#da8bc3', '#8c8c8c',
'#ccb974', '#64b5cd'],8)
ax = sns.lineplot(x='user_sample_ratio', y='runtime', data=df,
style="Algorithm",hue='Algorithm', ci='sd',
mfc='none',palette=palette, dashes=False)
i = 0
for key, val in line_styles.items():
ax.lines[i].set_linestyle(val)
# ax.lines[i].set_color(colors[key])
ax.lines[i].set_linewidth(line_width[key])
ax.lines[i].set_marker(marker_style[key])
ax.lines[i].set_markersize(marker_size[key])
ax.lines[i].set_markeredgewidth(marker_edge_width[key])
ax.lines[i].set_markeredgecolor(None)
i += 1
# plt.yticks(np.arange(0, 45000, 5000))
plt.xticks(np.arange(0, 1.1, 0.1))
plt.xlabel('Expert sample fraction', fontsize=fontsize)
plt.ylabel('Time (sec)', fontsize=fontsize)
# plt.title('Performance comparison')
fig = plt.gcf()
figlegend = plt.legend([val for key,val in legends.items()],loc=3, bbox_to_anchor=(0., 1.02, 1., .102),
ncol=2, mode="expand", borderaxespad=0., frameon=False,prop={'size': legendsize})
ax = plt.gca()
plt.gca().tick_params(axis='y', labelsize=labelsize)
plt.gca().tick_params(axis='x', labelsize=labelsize)
a = plt.axes([.17, .43, .35, .3])
ax2 = sns.lineplot(x='user_sample_ratio', y='runtime', data=df,
hue='Algorithm', legend=False,
mfc='none',palette=palette,label=False)
i = 0
for key, val in line_styles.items():
ax2.lines[i].set_linestyle(val)
# ax.lines[i].set_color(colors[key])
ax2.lines[i].set_linewidth(2)
ax2.lines[i].set_marker(marker_style[key])
ax2.lines[i].set_markersize(12)
ax2.lines[i].set_markeredgewidth(3)
ax2.lines[i].set_markeredgecolor(None)
i += 1
ax2.set(ylim=(0, 2))
ax2.set(xlim=(0, 1))
ax2.set_ylabel('')
ax2.set_xlabel('')
# plt.gca().xaxis.set_major_formatter(mtick.FormatStrFormatter('%.1e'))
# plt.gca().yaxis.set_major_formatter(mtick.FormatStrFormatter('%.1e'))
plt.gca().tick_params(axis='x', labelsize=22)
plt.gca().tick_params(axis='y', labelsize=22)
return fig, ax
df = pd.read_csv("/Users/smnikolakaki/GitHub/submodular-linear-cost-maximization/jupyter/experiment_00_freelancer_pop08_rare01.csv",
header=0,
index_col=False)
df.columns = ['Algorithm', 'sol', 'val', 'submodular_val', 'cost', 'runtime', 'lazy_epsilon',
'sample_epsilon','user_sample_ratio','scaling_factor','num_rare_skills','num_common_skills',
'num_popular_skills','num_sampled_skills','seed','k']
df = df[(df.Algorithm == 'distorted_greedy')
|(df.Algorithm == 'cost_scaled_greedy')
|(df.Algorithm == 'cost_scaled_lazy_greedy')
|(df.Algorithm == 'unconstrained_linear')
|(df.Algorithm == 'unconstrained_distorted_greedy')
|(df.Algorithm == 'stochastic_distorted_greedy_0.01')
|(df.Algorithm == 'baseline_topk')
|(df.Algorithm == 'greedy')
]
df0 = df[(df['sample_epsilon'].isnull()) | (df['sample_epsilon'] == 0.01)]
df0 = df0.sort_values(by='Algorithm')
set_style()
fig, axes = plot_performance_comparison(df0)
set_size(fig, x_size, y_size)
save_fig(fig,'time_unconstrained_freelancer_pop08_rare01.pdf')
###Output
/opt/anaconda3/envs/python3.6/lib/python3.6/site-packages/ipykernel_launcher.py:3: UserWarning: This figure includes Axes that are not compatible with tight_layout, so results might be incorrect.
This is separate from the ipykernel package so we can avoid doing imports until
|
MLOps-Specialization/course-1-introduction-to-machine-learning-in-production/week-1-overview-of-ml-lifecycle-and-deployment/part_1_deploying_machine_learning_model.ipynb | ###Markdown
Part 1 - Deploying a Machine Learning Model

Welcome to this ungraded lab! If you are reading this it means you did the setup properly, nice work!

This lab is all about deploying a real machine learning model, and checking what doing so feels like. More concretely, you will deploy a computer vision model trained to detect common objects in pictures. Deploying a model is one of the last steps in a prototypical machine learning lifecycle. However, we thought it would be exciting to get you to deploy a model right away.

This lab uses a pretrained model called [`YOLOV3`](https://pjreddie.com/darknet/yolo/). This model is very convenient for two reasons: it runs really fast, and for object detection it yields accurate results.

The sequence of steps/tasks to complete in this lab is as follows:

1. Inspect the image data set used for object detection
2. Take a look at the model itself
3. Deploy the model using [`fastAPI`](https://fastapi.tiangolo.com/)

Setup
###Code
!pip install cvlib uvicorn fastapi nest_asyncio python-multipart pyngrok
from IPython.display import Image, display
import os
import cv2
import cvlib as cv
from cvlib.object_detection import draw_bbox
import io
import uvicorn
import numpy as np
import nest_asyncio
from pyngrok import ngrok
from enum import Enum
from fastapi import FastAPI, UploadFile, File, HTTPException
from fastapi.responses import StreamingResponse
%%shell
wget -q https://raw.githubusercontent.com/rahiakela/MLOps-Specialization/main/course-1-introduction-to-machine-learning-in-production/week-1-overview-of-ml-lifecycle-and-deployment/images/images.zip
unzip -q images.zip
rm -rf images.zip
# move all images to images folder
mkdir images
mv *.jpg images/
###Output
_____no_output_____
###Markdown
Object Detection with YOLOV3

Inspecting the images

Let's take a look at the images that will be passed to the YOLOV3 model. This will bring insight into what types of common objects are present for detection. These images are part of the [`ImageNet`](http://www.image-net.org/index) dataset.
###Code
# Some example images
image_files = [
'apple.jpg',
'clock.jpg',
'oranges.jpg',
'car.jpg'
]
for image_file in image_files:
print(f"\nDisplaying image: {image_file}")
display(Image(filename=f"images/{image_file}"))
###Output
Displaying image: apple.jpg
###Markdown
Overview of the model

Now that you have a sense of the image data and the objects present, let's try and see if the model is able to detect and classify them correctly. For this you will be using [`cvlib`](https://www.cvlib.net/), which is a very simple but powerful library for object detection that is fueled by [`OpenCV`](https://docs.opencv.org/4.5.1/) and [`Tensorflow`](https://www.tensorflow.org/).

More concretely, you will use the [`detect_common_objects`](https://docs.cvlib.net/object_detection/) function, which takes an image formatted as a [`numpy array`](https://numpy.org/doc/stable/reference/generated/numpy.array.html) and returns:

- `bbox`: list of lists containing bounding box coordinates for detected objects. Example:
  ```python
  [[32, 76, 128, 192], [130, 83, 220, 185]]
  ```
- `label`: list of labels for detected objects. Example:
  ```python
  ['apple', 'apple']
  ```
- `conf`: list of confidence scores for detected objects. Example:
  ```python
  [0.6187325716018677, 0.42835739254951477]
  ```

In the next section you will visually see these elements in action.

Creating the detect_and_draw_box function

Before using the object detection model, create a directory where you can store the resulting images:
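To make the shape of this output concrete, here is a small pure-Python sketch of thresholding such parallel lists. The boxes, labels, and scores below are made-up values for illustration, not real model output:

```python
# Hypothetical detection output in the same shape cvlib returns:
# parallel lists of bounding boxes, labels, and confidence scores.
bbox = [[32, 76, 128, 192], [130, 83, 220, 185], [10, 10, 50, 50]]
label = ["apple", "apple", "orange"]
conf = [0.62, 0.43, 0.71]

def filter_detections(bbox, label, conf, threshold=0.5):
    """Keep only detections whose confidence meets the threshold."""
    kept = [(b, l, c) for b, l, c in zip(bbox, label, conf) if c >= threshold]
    # Unzip back into three parallel lists (or three empty lists).
    return tuple(map(list, zip(*kept))) if kept else ([], [], [])

fbbox, flabel, fconf = filter_detections(bbox, label, conf, threshold=0.5)
print(flabel)  # -> ['apple', 'orange']
```

This mirrors what the confidence argument of `detect_common_objects` does for you internally.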
###Code
dir_name = "images_with_boxes"
if not os.path.exists(dir_name):
os.mkdir(dir_name)
###Output
_____no_output_____
###Markdown
Let's define the `detect_and_draw_box` function, which takes as input arguments: the **filename** of a file on your system, a **model**, and a **confidence level**. With these inputs, it detects common objects in the image and saves a new image displaying the bounding boxes alongside the detected object.

You might ask yourself why this function receives the model as an input argument. What models are there to choose from? The answer is that `detect_common_objects` uses the `yolov3` model by default. However, there is another option available that is much tinier and requires less computational power. It is the `yolov3-tiny` version. As the model name indicates, this model is designed for constrained environments that cannot store big models. With this comes a natural tradeoff: the results are less accurate than the full model. However, it still works pretty well. Going forward, we recommend you stick to it since it is a lot smaller than the regular `yolov3` and downloading its pretrained weights takes less time.

The model output is a vector of probabilities for the presence of different objects in the image. The last input argument, the confidence level, determines the threshold that the probability needs to surpass to report that a given object is detected in the supplied image. By default, `detect_common_objects` uses a value of 0.5 for this.
###Code
def detect_and_draw_box(filename, model="yolov3-tiny", confidence=0.5):
"""
Detects common objects on an image and creates a new image with bounding boxes.
Args:
filename (str): Filename of the image.
model (str): Either "yolov3" or "yolov3-tiny". Defaults to "yolov3-tiny".
confidence (float, optional): Desired confidence level. Defaults to 0.5.
"""
# Images are stored under the images/ directory
img_filepath = f"images/{filename}"
# Read the image into a numpy array
img = cv2.imread(img_filepath)
# Perform the object detection
bbox, label, conf = cv.detect_common_objects(img, confidence=confidence, model=model)
# Print current image's filename
print(f"========================\nImage processed: {filename}\n")
# Print detected objects with confidence level
for l, c in zip(label, conf):
print(f"Detected object: {l} with confidence level of {c}\n")
# Create a new image that includes the bounding boxes
output_image = draw_bbox(img, bbox, label, conf)
# Save the image in the directory images_with_boxes
cv2.imwrite(f"images_with_boxes/{filename}", output_image)
# Display the image with bounding boxes
display(Image(f"images_with_boxes/{filename}"))
###Output
_____no_output_____
###Markdown
Let's try it out for the example images.
###Code
for image_file in image_files:
detect_and_draw_box(image_file)
###Output
Downloading yolov3-tiny.cfg from https://github.com/pjreddie/darknet/raw/master/cfg/yolov3-tiny.cfg
###Markdown
Changing the confidence level

Looks like the object detection went fairly well. Let's try it out on a more difficult image containing several objects:
###Code
detect_and_draw_box("fruits.jpg")
###Output
========================
Image processed: fruits.jpg
Detected object: apple with confidence level of 0.5818482041358948
Detected object: orange with confidence level of 0.5346484184265137
Detected object: orange with confidence level of 0.5150989890098572
###Markdown
The **model failed to detect** several fruits and **misclassified** an orange as an apple. This might seem strange since it was able to detect one apple before, so one might think the model has a fair representation of what an apple looks like.

One possibility is that the model **did** detect the other fruits, but with a confidence level lower than 0.5. Let's test if this is a valid hypothesis:
###Code
detect_and_draw_box("fruits.jpg", confidence=0.2)
###Output
========================
Image processed: fruits.jpg
Detected object: apple with confidence level of 0.5818482041358948
Detected object: orange with confidence level of 0.5346484184265137
Detected object: orange with confidence level of 0.5150989890098572
Detected object: apple with confidence level of 0.3475987911224365
Detected object: orange with confidence level of 0.3287609815597534
Detected object: apple with confidence level of 0.31244683265686035
Detected object: orange with confidence level of 0.27986058592796326
Detected object: orange with confidence level of 0.27499768137931824
Detected object: apple with confidence level of 0.27445051074028015
Detected object: orange with confidence level of 0.21419072151184082
###Markdown
By lowering the confidence level the model successfully detects most of the fruits. However, in order to correctly detect the objects present, we had to set the confidence level really low. In general, you should be careful when decreasing or increasing these kinds of parameters, as changing them might yield undesired results.

As for this concrete example, where an orange was misclassified as an apple, it serves as a reminder that these models are not perfect and this should be considered when using them for tasks in production.

Deploying the model using fastAPI

Placing your object detection model in a server

Now that you know how the model works it is time for you to deploy it! Aren't you excited? :)

Before diving into deployment, let's quickly recap some important concepts and how they translate to `fastAPI`. Let's also create a directory to store the images that are uploaded to the server.
###Code
dir_name = "images_uploaded"
if not os.path.exists(dir_name):
os.mkdir(dir_name)
###Output
_____no_output_____
###Markdown
Some concept clarifications

**Client-Server model**

When talking about **deploying**, what is usually meant is to put all of the software required for predicting in a `server`. By doing this, a `client` can interact with the model by sending `requests` to the server. This client-server interaction is out of the scope of this notebook, but there are a lot of resources on the internet that you can use to understand it better.

The important thing you need to focus on is that the Machine Learning model lives in a server waiting for clients to submit prediction requests. The client should provide the required information that the model needs in order to make a prediction. Keep in mind that it is common to batch many predictions in a single request. The server will use the information provided to return predictions to the client, who can then use them at their leisure.

Let's get started by creating an instance of the `FastAPI` class:

```python
app = FastAPI()
```

The next step is using this instance to create endpoints that will handle the logic for predicting (more on this next). Once all the code is in place to run the server you only need to use the command:

```python
uvicorn.run(app)
```

Your API is coded using fastAPI but the serving is done using [`uvicorn`](https://www.uvicorn.org/), which is a really fast Asynchronous Server Gateway Interface (ASGI) implementation. Both technologies are closely interconnected and you don't need to understand the implementation details. Knowing that uvicorn handles the serving is sufficient for the purpose of this lab.

**Endpoints**

You can host multiple Machine Learning models on the same server. For this to work, you can assign a different `endpoint` to each model so you always know what model is being used. An endpoint is represented by a pattern in the `URL`.
For example, if you have a website called `myawesomemodel.com` you could have three different models in the following endpoints:

- `myawesomemodel.com/count-cars/`
- `myawesomemodel.com/count-apples/`
- `myawesomemodel.com/count-plants/`

Each model would do what the name pattern suggests.

In fastAPI you define an endpoint by creating a function that will handle all of the logic for that endpoint and [decorating](https://www.python.org/dev/peps/pep-0318/) it with a function that contains information on the HTTP method allowed (more on this next) and the pattern in the URL that it will use.

The following example shows how to allow an HTTP GET request for the endpoint "/my-endpoint":

```python
@app.get("/my-endpoint")
def handle_endpoint():
    ...
    ...
```

**HTTP Requests**

The client and the server communicate with each other through a protocol called `HTTP`. The key concept here is that this communication between client and server uses some verbs to denote common actions. Two very common verbs are:

- `GET` -> Retrieves information from the server.
- `POST` -> Provides information to the server, which it uses to respond.

If your client does a `GET request` to an endpoint of a server you will get some information from this endpoint without the need to provide additional information. In the case of a `POST request` you are explicitly telling the server that you will provide some information for it that must be processed in some way.

Interactions with Machine Learning models living on endpoints are usually done via a `POST request` since you need to provide the information that is required to compute a prediction.

Let's take a look at a POST request:

```python
@app.post("/my-other-endpoint")
def handle_other_endpoint(param1: int, param2: str):
    ...
    ...
```

For POST requests, the handler function contains parameters. In contrast with GET, POST requests expect the client to provide some information to it.
In this case we supplied two parameters: an integer and a string.

**Why fastAPI?**

With fastAPI you can create web servers to host your models very easily. Additionally, this platform is extremely fast and it **has a built-in client that can be used to interact with the server**. To use it you will need to visit the "/docs" endpoint, which in this case means visiting http://localhost:8000/docs. Isn't that convenient?

Enough chatter, let's get going!
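The endpoint idea itself is not specific to fastAPI. As a toy illustration only (this is not fastAPI's actual routing machinery, and the paths and handlers are made up), a router can be thought of as a mapping from an HTTP verb and URL pattern to a handler function:

```python
# Toy router: maps an (HTTP method, path) pair to a handler function.
routes = {}

def route(method, path):
    """Decorator that registers a handler for a (method, path) pair."""
    def register(handler):
        routes[(method, path)] = handler
        return handler
    return register

@route("GET", "/")
def home():
    return "API is working"

@route("POST", "/predict")
def predict(model: str):
    # A real server would run the model here; this just echoes the choice.
    return f"would run model {model!r} here"

def handle(method, path, **params):
    """Dispatch a request to the registered handler, or report a 404."""
    handler = routes.get((method, path))
    if handler is None:
        return "404 Not Found"
    return handler(**params)

print(handle("GET", "/"))                               # -> API is working
print(handle("POST", "/predict", model="yolov3-tiny"))
```

fastAPI layers path parameters, validation, async handling, and documentation generation on top of this basic dispatch idea.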
###Code
import io
import uvicorn
import numpy as np
import nest_asyncio
from enum import Enum
from fastapi import FastAPI, UploadFile, File, HTTPException
from fastapi.responses import StreamingResponse
# Assign an instance of the FastAPI class to the variable "app".
# You will interact with your api using this instance.
app = FastAPI(title="Deploying a ML Model with FastAPI")
# List available models using Enum for convenience. This is useful when the options are pre-defined.
class Model(str, Enum):
yolov3tiny = "yolov3-tiny"
yolov3 = "yolov3"
# By using @app.get("/") you are allowing the GET method to work for the / endpoint.
@app.get("/")
def home():
return "Congratulations! Your API is working as expected. Now head over to http://localhost:8000/docs."
# This endpoint handles all the logic necessary for the object detection to work.
# It requires the desired model and the image in which to perform object detection.
@app.post("/predict")
def prediction(model: Model, file: UploadFile = File(...)):
# 1. VALIDATE INPUT FILE
filename = file.filename
fileExtension = filename.split(".")[-1] in ("jpg", "jpeg", "png")
if not fileExtension:
raise HTTPException(status_code=415, detail="Unsupported file provided.")
# 2. TRANSFORM RAW IMAGE INTO CV2 image
# Read image as a stream of bytes
image_stream = io.BytesIO(file.file.read())
# Start the stream from the beginning (position zero)
image_stream.seek(0)
# Write the stream of bytes into a numpy array
file_bytes = np.asarray(bytearray(image_stream.read()), dtype=np.uint8)
# Decode the numpy array as an image
image = cv2.imdecode(file_bytes, cv2.IMREAD_COLOR)
# 3. RUN OBJECT DETECTION MODEL
# Run object detection
bbox, label, conf = cv.detect_common_objects(image, model=model)
# Create image that includes bounding boxes and labels
output_image = draw_bbox(image, bbox, label, conf)
# Save it in a folder within the server
cv2.imwrite(f"images_uploaded/{filename}", output_image)
# 4. STREAM THE RESPONSE BACK TO THE CLIENT
# Open the saved image for reading in binary mode
file_image = open(f"images_uploaded/{filename}", mode="rb")
# Return the image as a stream specifying media type
return StreamingResponse(file_image, media_type="image/jpeg")
###Output
_____no_output_____
###Markdown
By running the following cell you will spin up the server!

This causes the notebook to block (no cells/code can run) until you manually interrupt the kernel. You can do this by clicking on the `Kernel` tab and then on `Interrupt`. You can also enter Jupyter's command mode by pressing the `ESC` key and tapping the `I` key twice.

```python
# For a local machine:

# Allows the server to be run in this interactive environment
nest_asyncio.apply()

# Host depends on the setup you selected (docker or virtual env)
host = "0.0.0.0" if os.getenv("DOCKER-SETUP") else "127.0.0.1"

# Spin up the server!
uvicorn.run(app, host=host, port=8000)
```
###Code
ngrok_tunnel = ngrok.connect(8000)
print('Public URL:', ngrok_tunnel.public_url)
nest_asyncio.apply()
uvicorn.run(app, port=8000)
###Output
Public URL: http://05808ad9a081.ngrok.io
|
Tutorials/2.Content/2.2-Pricing/TUT_2.2.04-Pricing-Chain.ipynb | ###Markdown
Data Library for Python

Content - Pricing - Chain constituents

This notebook demonstrates how to use the Pricing interface to retrieve the constituents of a Chain instrument:

- either as a static snapshot of the current Constituent RICs
- or streaming updates for any changes to Constituent RICs

Set the location of the configuration file

For ease of use, you can set various initialization parameters of the RD Library in the **_refinitiv-data.config.json_** configuration file - as described in the Quick Start -> Sessions example.

One config file for the tutorials

As these tutorial Notebooks are categorised into sub-folders and to avoid the need for multiple config files, we will use the _RD_LIB_CONFIG_PATH_ environment variable to point to a single instance of the config file in the top-level ***Configuration*** folder.

Before proceeding, please **ensure you have entered your credentials** into the config file in the ***Configuration*** folder.
###Code
import os
os.environ["RD_LIB_CONFIG_PATH"] = "../../../Configuration"
from refinitiv.data.content import pricing
import refinitiv.data as rd
from pandas import DataFrame
from IPython.display import display, clear_output
###Output
_____no_output_____
###Markdown
Open the default session

To open the default session, ensure you have a '*refinitiv-data.config.json*' in the ***Configuration*** directory, populated with your credentials, and that a 'default' session is specified in the config file.
###Code
rd.open_session()
###Output
_____no_output_____
###Markdown
Define and open Chain

Define a streaming price object for the FTSE index.
###Code
# define a chain to fetch FTSE constituent RICs
ftse = pricing.chain.Definition(name="0#.FTSE").get_stream()
###Output
_____no_output_____
###Markdown
The open() method tells the Chain object to subscribe to a stream of the constituent RICs.
###Code
ftse.open()
###Output
_____no_output_____
###Markdown
Get a list of the current Constituent RICs

Once the open method returns, the Chain object is ready to be used. Its internal cache will be updated as and when the list of constituents changes - which for many Chains is not that often; the FTSE constituents, for example, change rarely. However, for some chains, the constituents can change more often.
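Because the constituent list can change between observations, a common pattern is to diff two snapshots of it. A minimal pure-Python sketch with made-up RICs (not live data):

```python
# Two hypothetical snapshots of a chain's constituent list.
before = ["AAL.L", "ABF.L", "AZN.L", "BP.L"]
after_snapshot = ["AAL.L", "AZN.L", "BP.L", "SHEL.L"]

def diff_constituents(old, new):
    """Return (added, removed) RICs between two snapshots."""
    old_set, new_set = set(old), set(new)
    added = sorted(new_set - old_set)
    removed = sorted(old_set - new_set)
    return added, removed

added, removed = diff_constituents(before, after_snapshot)
print(added)    # -> ['SHEL.L']
print(removed)  # -> ['ABF.L']
```

The same comparison works against the cached `constituents` list at two points in time.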
###Code
constituent_list = ftse.constituents
display(constituent_list)
###Output
_____no_output_____
###Markdown
Other means of accessing the list of constituents

Check if the Stream really is for a chain instrument.
###Code
# check is this a chain or not?
print(f"{ftse} is_chain :", ftse.is_chain )
###Output
<refinitiv.data.content.pricing.chain.Stream object at 0x28b50586f40 {name='0#.FTSE'}> is_chain : True
###Markdown
Get constituent in the chain record
###Code
# At this point we take a snapshot of the 1st RIC - as it's a streaming request, it may differ from the above
first_constituent = ftse.constituents[0]
print(f"{ftse} constituent at index 0 :", first_constituent )
# loop over all constituents in the chain record
for index, constituent in enumerate( ftse.constituents ):
print(f"{index} = {constituent}")
###Output
0 = AAL.L
1 = ABDN.L
2 = ABF.L
3 = ADML.L
4 = AHT.L
5 = ANTO.L
6 = AUTOA.L
7 = AV.L
8 = AVST.L
9 = AVV.L
10 = AZN.L
11 = BAES.L
12 = BARC.L
13 = BATS.L
14 = BDEV.L
15 = BHPB.L
16 = BKGH.L
17 = BLND.L
18 = BMEB.L
19 = BNZL.L
20 = BP.L
21 = BRBY.L
22 = BT.L
23 = CCH.L
24 = CPG.L
25 = CRDA.L
26 = CRH.L
27 = DCC.L
28 = DGE.L
29 = DPH.L
30 = ECM.L
31 = ENT.L
32 = EVRE.L
33 = EXPN.L
34 = FERG.L
35 = FLTRF.L
36 = FRES.L
37 = GLEN.L
38 = GSK.L
39 = HIK.L
40 = HLMA.L
41 = HRGV.L
42 = HSBA.L
43 = ICAG.L
44 = ICP.L
45 = IHG.L
46 = III.L
47 = IMB.L
48 = INF.L
49 = ITRK.L
50 = ITV.L
51 = JD.L
52 = KGF.L
53 = LAND.L
54 = LGEN.L
55 = LLOY.L
56 = LSEG.L
57 = MGGT.L
58 = MNDI.L
59 = MNG.L
60 = MRON.L
61 = NG.L
62 = NWG.L
63 = NXT.L
64 = OCDO.L
65 = PHNX.L
66 = POLYP.L
67 = PRU.L
68 = PSHP.L
69 = PSN.L
70 = PSON.L
71 = RDSa.L
72 = RDSb.L
73 = REL.L
74 = RIO.L
75 = RKT.L
76 = RMG.L
77 = RMV.L
78 = RR.L
79 = RTO.L
80 = SBRY.L
81 = SDR.L
82 = SGE.L
83 = SGRO.L
84 = SJP.L
85 = SKG.L
86 = SMDS.L
87 = SMIN.L
88 = SMT.L
89 = SN.L
90 = SPX.L
91 = SSE.L
92 = STAN.L
93 = SVT.L
94 = TSCO.L
95 = TW.L
96 = ULVR.L
97 = UU.L
98 = VOD.L
99 = WPP.L
100 = WTB.L
###Markdown
Get the summary links of the chain record
###Code
# Chains often have Summary RICs for the chain
summary_links = ftse.summary_links
print(f"summary links of the chain are : {summary_links}")
###Output
summary links of the chain are : ['.FTSE', '.AD.FTSE']
###Markdown
Close the Streaming Chain instrument
###Code
ftse.close()
###Output
_____no_output_____
###Markdown
Once close() is called, the Chain stops updating its internal cache of constituents. The `constituents` property can still be read, but it will always return the state of the chain before the close was called.

Additional Parameters

You can control whether to skip summary links and/or empty constituents - with the optional parameters, which default to True.
###Code
ftse = rd.content.pricing.chain.Definition(name="0#.FTSE",
skip_summary_links=True, skip_empty=True ).get_stream()
###Output
_____no_output_____
###Markdown
Snap the Chain constituents

If you are not planning to use the Chain over an extended period of time and/or just want to snap the current constituents, you can open it without updates.
###Code
ftse.open(with_updates=False)
###Output
_____no_output_____
###Markdown
The Library will request the Chain and then close the stream once it has received a response from the server. You can then use the `constituents` property to access the constituent list as it was at the time of the open() call.

Close the session
###Code
rd.close_session()
###Output
_____no_output_____ |
exploring.ipynb | ###Markdown
TAMU Datathon: Taco/Burrito Challenge

Team Name: Taco 'Bout It!

Team Members: Alex Riley, Jacqueline Antwi-Danso

We decided to participate in the Goldman Sachs data challenge, which centers on a dataset logging taco and burrito menu items in the United States ([Kaggle link](https://www.kaggle.com/datafiniti/restaurants-burritos-and-tacos/)). The tasks of the challenge are:

```
The final product of your efforts should include a visualization of your output, with supporting documentation detailing the modeling and analysis performed.
```

We'll start with the usual Python imports, plus some that will be useful for data cleaning (dealing with zip codes).
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from uszipcode import SearchEngine
import plotly.graph_objects as go
%matplotlib inline
# so plotly map can render
import plotly
plotly.offline.init_notebook_mode(connected=True)
# contains mappings between state name and abbreviation
import utils
###Output
_____no_output_____
###Markdown
Take a look at data
###Code
file = 'data/just-tacos-and-burritos.csv'
data = pd.read_csv(file)
data.head(5)
###Output
_____no_output_____
###Markdown
There are a lot of unnamed columns that are filled with `NaN`. Let's get rid of those.
###Code
empty = data.isna().sum() == len(data)
assert np.array(["Unnamed" in col for col in data.columns[empty]]).all()
data.drop(columns=data.columns[empty], inplace=True)
###Output
_____no_output_____
###Markdown
List of columns

* `id`: unique ID for restaurant
* `address`: restaurant address (number and street name)
* `categories`: categories for restaurant (e.g. "Restaurant" or "Restaurant Delivery")
* `city`: city name
* `country`: country (note: all are in the US)
* `cuisines`: type of restaurant, e.g. "Coffee" or "Mexican". Not unique (one example is "Buffets, Pizza")
* `dateAdded`: date that entry was added to dataset
* `dateUpdated`: date that entry was last updated (can be equal to `dateAdded`)
* `keys`: ???
* `latitude`: latitude of the restaurant
* `longitude`: longitude of the restaurant
* `menuPageURL`: URL to menu
* `menus.amountMax`: max amount on menu? (sparsely filled; 37,000 NaN)
* `menus.amountMin`: min amount on menu? (sparsely filled; 37,000 NaN)
* `menus.category`: category that item falls under in menu (e.g. "Main Course", "Tacos"). Sparsely filled, 73,531 NaN
* `menus.currency`: currency used on item. Usually USD, 16 entries are EUR
* `menus.dateseen`: date that menu was observed
* `menus.description`: description of item in menu
* `name`: name of restaurant
* `postalCode`: ZIP code of restaurant
* `priceRangeCurrency`: currency used for `menus.priceRangeMin/Max`. Usually USD, one entry in AUD
* `priceRangeMin`: minimum price of items on menu
* `priceRangeMax`: maximum price of items on menu
* `province`: typically state but not always. Needs cleaning
* `websites`: website for the restaurant

Potential data cleaning issues

* `name` can have multiple values, like `McDonald's` and `Mc Donalds`
* many columns are incomplete, including `postalCode` and `latitude/longitude`, which might make analysis/visualizing the spatial distribution of restaurants difficult

Cleaning the data

Consistent identification of city + state (`province` is not a clean version of this). We'll start off by creating a new column named `state`.

**Note: this section can be skipped if it's already been run once**
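For the `name` inconsistency noted above (e.g. `McDonald's` vs `Mc Donalds`), one rough approach is to normalize names before grouping. This simple rule is only a sketch and won't catch every variant:

```python
import re

def normalize_name(name):
    """Lowercase, drop punctuation such as apostrophes, and remove whitespace."""
    name = name.lower()
    name = re.sub(r"[^a-z0-9 ]", "", name)  # drop punctuation like apostrophes
    name = re.sub(r"\s+", "", name)         # remove whitespace entirely
    return name

variants = ["McDonald's", "Mc Donalds", "MCDONALDS", "Mcdonald's"]
print({normalize_name(v) for v in variants})  # -> {'mcdonalds'}
```

All four spellings collapse to one key, which makes grouping by restaurant name more reliable.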
###Code
data['state'] = data['province']
# three entries had no province info, all were in San Francisco
data.loc[data['state'].isna(), 'state'] = 'CA'
###Output
_____no_output_____
###Markdown
Now we have a few freebies. These were common (top 25-ish) values for `province` that are easily mapped to states, as well as `province` values that were 2 characters that did not match state abbreviation codes.
###Code
data.loc[data['state'] == 'California', 'state'] = 'CA'
data.loc[data['state'] == 'Manhattan', 'state'] = 'NY'
data.loc[data['state'] == 'New York City', 'state'] = 'NY'
data.loc[data['state'] == 'Ny', 'state'] = 'NY'
data.loc[data['state'] == 'Ls', 'state'] = 'MO'
###Output
_____no_output_____
###Markdown
The entries that remain would be a pain to fix one by one. For these ~8000 entries, we will use the `uszipcode` package to map the provided zip codes to states.
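The lookup-with-fallback logic used in the next cell can be sketched without the `uszipcode` dependency. The zip-to-state table below is a tiny hypothetical stand-in for the real database:

```python
# Tiny stand-in for a real zip-code database (hypothetical subset).
ZIP_TO_STATE = {"94103": "CA", "10001": "NY", "77840": "TX"}

def resolve_state(postal_code, current_state):
    """Map a zip code to its state, keeping the existing value as a fallback."""
    if postal_code and postal_code in ZIP_TO_STATE:
        return ZIP_TO_STATE[postal_code]
    return current_state

print(resolve_state("94103", "California"))  # -> CA
print(resolve_state(None, "TX"))             # -> TX
```

The `uszipcode` search below plays the role of `ZIP_TO_STATE`, with the same fall-back to the existing `state` value when no zip code is present.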
###Code
badmask = data['state'].apply(len) != 2
search = SearchEngine()
data.loc[badmask, 'state'] = data[badmask].apply(lambda x: search.by_zipcode(x['postalCode']).state if x['postalCode'] else x['state'], axis=1)
###Output
_____no_output_____
###Markdown
And we're done! We can check that all of the `state` items are valid state codes by cross-referencing against the list located in `utils.py`.
###Code
data['state'].apply(lambda x: True if x in utils.abbrev_us_state else False).all()
data['citystate'] = data.apply(lambda x: x['city']+', '+x['state'], axis=1)
###Output
_____no_output_____
###Markdown
Question: Where are the authentic Mexican restaurants? Marking out "authentic". We want to exclude stores that can be reliably marked as "inauthentic," like Subway or McDonald's. For this, we'll exclude any restaurant on this list of the [32 biggest fast food chains in America](https://www.qsrmagazine.com/content/32-biggest-fast-food-chains-america). We also opt to include Chili's in the list of excluded chains. Notice that some names have spelling permutations that match names occurring in the top 100 (e.g. McDonald's), so those variants are enumerated explicitly.
###Code
def chains_mask(data):
exclude_list = ["Subway", "Starbucks",
"McDonald's", "Mcdonald's", "Mc Donald's", "Mcdonalds", "McDonalds",
"Dunkin", "Pizza Hut", "Burger King", "Wendy's", "Taco Bell",
"Domino's", "Dairy Queen", "Little Caesars", "KFC",
"Sonic Drive In", "SONIC Drive In", "Sonic Drive-in", "Sonic Drive-In",
"Papa John's", "Arby's", "Jimmy John's",
"Baskin-Robbins", "Chipotle Mexican Grill", "Chick-Fil-A", "Popeye's",
"Jack in the Box", "Jack In The Box",
"Panda Express", "Panera", "Carl's Jr.", "Jersey Mike's", "Papa Murphy's",
"Five Guys", "Auntie Anne's", "Wingstop", "Firehouse Subs"]
# also exclude Chili's
exclude_list.append("Chili's Grill & Bar")
exclude_list.append("Chili's Grill Bar")
exclude_list.append("Chili's")
exclude_list.append("Chili's Too")
chain = [False] * len(data)
for name in exclude_list:
chain |= data['name'] == name
return ~chain
authentic = data[chains_mask(data)]
###Output
_____no_output_____
###Markdown
Now we are interested in the question: where are authentic **restaurants** concentrated in the U.S.? For this, we need a list of authentic restaurants, not a list of authentic burritos/tacos (the raw data has one row per menu item). Luckily, we can just mask duplicated `id`s.
###Code
unique_restaurant_mask = ~authentic['id'].duplicated()
restaurants = authentic[unique_restaurant_mask]
###Output
_____no_output_____
###Markdown
Now we can get a very simple answer for which cities host the most authentic Mexican restaurants in the U.S.:
###Code
citycounts = restaurants['citystate'].value_counts()
citycounts.head(7)
###Output
_____no_output_____
###Markdown
However, this just looks like a list of big cities with a lot of people (who would therefore have a lot of authentic Mexican restaurants). To fix this, we can instead try to get the number of restaurants _per capita_. For population data, we'll use [population estimates from the U.S. Census Bureau](https://www.census.gov/data/tables/time-series/demo/popest/2010s-total-cities-and-towns.html) for 2018.
###Code
popfile = 'data/sub-est2018_all.csv'
popdata = pd.read_csv(popfile, encoding="ISO-8859-1")
popdata.head()
###Output
_____no_output_____
###Markdown
So we simply need to find a city by matching the city name and state in the `popdata` table. We also need to convert the state code (e.g. "AL") to a state name ("Alabama").
###Code
population = []
for name, count in citycounts.iteritems():  # only the city name is needed here
match = popdata['NAME'] == name[:-4] + ' city'
match |= popdata['NAME'] == name[:-4] + ' town'
match |= popdata['NAME'] == name[:-4] + ' village'
match &= popdata['STNAME'] == utils.abbrev_us_state[name[-2:]]
pop = np.max(popdata[match]['POPESTIMATE2018'])
population.append(pop)
population = np.array(population)
citycounts_percapita = (citycounts/population) * 1000
citycounts_percapita.sort_values(ascending=False).head(7)
###Output
_____no_output_____
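The normalization step divides each city's restaurant count by its population and rescales to a per-1,000-residents rate. A toy example (with made-up numbers) shows why tiny towns jump to the top of the ranking:

```python
import numpy as np
import pandas as pd

counts = pd.Series({"Los Angeles, CA": 120, "Espanola, NM": 6})
population = np.array([3_990_000, 10_000])   # hypothetical populations
per_capita = counts / population * 1000      # restaurants per 1,000 residents
print(per_capita.to_dict())
```

Even with far fewer restaurants in absolute terms, the small town ends up with a much higher per-capita rate, which is why a population threshold is applied next.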
###Markdown
So these are the places that have a lot of authentic Mexican restaurants relative to how big their population is. However, all of these places are very small and isolated. Let's place a cut on the data requiring a population of over 50,000 (the high end of what is considered the threshold for a "city")
###Code
threshold = 50000
citycounts_percapita[population > threshold].sort_values(ascending=False).head(7)
###Output
_____no_output_____
###Markdown
As can be (somewhat) expected, the list is now dominated by cities in California (with one entry from New Mexico). Visualization: where are the tacos?
###Code
cities = data['city'].unique().tolist()
def food_nums(loc_arr):
    df = data
    tacos = []; burritos = []
    for name in loc_arr:
        menu_options = df[df['city'] == name]['menus.name']
        # for each city, count the menu items containing "Taco" or "Burrito"
        total_tacos = sum("Taco" in option for option in menu_options)
        total_burritos = sum("Burrito" in option for option in menu_options)
        tacos.append(total_tacos)
        burritos.append(total_burritos)
    return tacos, burritos
num_tacos, num_burritos = food_nums(cities)
# replace zeros with -1 so the division is well-defined; those ratios are flagged as inf below
num_tacos = np.array(num_tacos)
num_burritos = np.array(num_burritos)
tmp_num_tacos = np.copy(num_tacos)
tmp_num_burritos = np.copy(num_burritos)
tmp_num_tacos[tmp_num_tacos == 0] = -1
tmp_num_burritos[tmp_num_burritos == 0] = -1
city_ratio = tmp_num_burritos/tmp_num_tacos
city_ratio[city_ratio < 0] = np.inf
# find unique lon and lat for each city
lon = []; lat = []
for city in cities:
lon.append(np.unique(data[data['city'] == city]['longitude'])[0])
lat.append(np.unique(data[data['city'] == city]['latitude'])[0])
# renaming because plotly has similar keyword
lon_arr = np.copy(lon); lat_arr = np.copy(lat)
tmp_cities = np.array(cities)[~np.isinf(city_ratio)]
text_arr = [city + '<br>B/T: ' + str(np.round(num, decimals=2)) for city, num in zip(cities, city_ratio)]  # plotly hover text uses <br> for line breaks
fig = go.Figure()
limits = [(0,3),(4,7),(8,11),(12,15),(16,20)]
colors = ["brown","magenta","cyan","orange","green"]
for i in range(len(limits)):
lim = limits[i]
fig.add_trace(go.Scattergeo(
locationmode = 'USA-states',
lon = lon_arr,
lat = lat_arr,
text = text_arr,
marker = dict(
size = city_ratio*5,
color = colors[i],
sizemode = 'area',
),
name = '{0} - {1}'.format(lim[0],lim[1])))
fig.update_layout(
#title_text = 'Menu options by city <br>(Click legend to populate map)',
title_text = 'Menu options by city <br>(Hover on point to see burrito/taco ratio)',
showlegend = False,
geo = dict(
scope = 'usa',
landcolor = 'rgb(217, 217, 217)',))
###Output
_____no_output_____
###Markdown
Exploring the links datasetIn this notebook, we visualize some of the features from the links dataset generated from the RICO dataset.
###Code
import numpy as np
with open('training.npy', 'rb') as f:
X = np.load(f)
Y = np.load(f)
links = [
{
'source.hue': X[index][0],
'source.saturation': X[index][1],
'source.lightness': X[index][2],
'semantic_similarity': X[index][3],
'is_link': Y[index],
}
for index in range(len(Y))
]
import pandas as pd
links_df = pd.DataFrame(links)
links_df.head()
links_df.describe()
links_df.groupby('is_link').describe()
import seaborn as sns
sns.scatterplot(data=links_df, x="source.saturation", y="source.lightness", hue="is_link")
g = sns.FacetGrid(links_df, row="is_link")
g.map(sns.distplot, "semantic_similarity")
g = sns.FacetGrid(links_df, row="is_link")
g.map(sns.distplot, "source.hue")
g = sns.FacetGrid(links_df, row="is_link")
g.map(sns.distplot, "source.saturation")
g = sns.FacetGrid(links_df, row="is_link")
g.map(sns.distplot, "source.lightness")
sns.displot(data=links_df, x="semantic_similarity", hue="is_link", kind="kde")
rough_total_sample_size = 10000
sample = links_df.groupby('is_link').apply(lambda link_class: link_class.sample(int(rough_total_sample_size/2))).reset_index(drop=True)
proportional_sample = links_df.groupby('is_link').apply(lambda link_class: link_class.sample(int(len(link_class)/len(links_df)*rough_total_sample_size))).reset_index(drop=True)
sample
proportional_sample
sns.displot(data=sample, x="source.hue", hue="is_link", kind="kde")
sns.displot(data=sample, x="source.saturation", hue="is_link", kind="kde")
sns.displot(data=sample, x="source.lightness", hue="is_link", kind="kde")
sns.displot(data=sample, x="semantic_similarity", hue="is_link", kind="kde")
sns.displot(data=sample, x="semantic_similarity", y="source.saturation", hue="is_link", kind="kde")
sns.displot(data=sample, x="source.lightness", y="source.saturation", hue="is_link", kind="kde")
sns.displot(data=sample, x="source.hue", y="source.saturation", hue="is_link", kind="kde")
sns.displot(data=sample, x="source.lightness", y="semantic_similarity", hue="is_link", kind="kde")
###Output
_____no_output_____
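The `groupby(...).apply(lambda ...: ....sample(...))` idiom above draws an equal number of rows from each class, which is what makes `sample` balanced while `proportional_sample` mirrors the original class ratio. A minimal sketch of the balanced case:

```python
import pandas as pd

# 90/10 class imbalance in a toy frame
df = pd.DataFrame({"is_link": [0] * 90 + [1] * 10, "x": range(100)})
n = 10  # rough total sample size
balanced = (df.groupby("is_link")
              .apply(lambda g: g.sample(n // 2, random_state=0))
              .reset_index(drop=True))
print(balanced["is_link"].value_counts().to_dict())
```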
###Markdown
SetupLet's import some useful libs and configure the basics parameters.Then, we need to import the CSV files into datasets.
###Code
import pandas as pd # to create the datasets
import matplotlib.pyplot as plt # to plot graphics
# Defining the default options for our plots
%matplotlib inline
plt.rcParams['figure.figsize'] = (18,6)
###Output
_____no_output_____
###Markdown
Importing the files into CSV files and checking the first lines:
###Code
vmstat = pd.read_csv('./vmstat.csv')
vmstat.head()
pidstat = pd.read_csv('./pidstat.csv')
pidstat.head()
###Output
_____no_output_____
###Markdown
Exploring the datasetsWe have to take a look at both datasets to identify possible missing values, import errors, or other strange behaviors, and to understand each feature.The pidstat dataset has a Time column in Unix epoch format. It is necessary to convert it to standard time.
###Code
print('Datasets Shapes\n' + '-' * 20)
for ds in ['pidstat', 'vmstat']:
print(ds, eval(ds).shape)
vmstat['datetime'] = pd.to_datetime(vmstat['date'].astype(str) + ' ' + vmstat['time'])
vmstat['datetime'] = vmstat['datetime'].dt.tz_localize('UTC').dt.tz_convert('America/Sao_Paulo')
vmstat['datetime'] = vmstat['datetime'] + pd.Timedelta('03:00:00')
print(vmstat['datetime'].dtypes)
vmstat.head()
pidstat['Time'] = pd.to_datetime(pidstat['Time'], unit='s', origin='unix')
pidstat['Time'] = pidstat['Time'].dt.tz_localize('UTC').dt.tz_convert('America/Sao_Paulo')
print(pidstat['Time'].dtypes)
pidstat.head()
###Output
_____no_output_____
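The epoch-to-local-time conversion can be checked in isolation; the timestamp below is arbitrary, and Sao Paulo has been at UTC-3 year-round since DST was abolished there:

```python
import pandas as pd

t = pd.Series([1_600_000_000, 1_600_000_060])          # seconds since the Unix epoch
dt = pd.to_datetime(t, unit="s", origin="unix")        # naive timestamps in UTC
dt = dt.dt.tz_localize("UTC").dt.tz_convert("America/Sao_Paulo")
print(dt.iloc[0])  # 1_600_000_000 s = 2020-09-13 12:26:40 UTC = 09:26:40 local
```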
###Markdown
Studying Pidstat
###Code
pidstat.Command.describe()
# Top 15 most frequent commands
pidstat.Command.value_counts()[:15,]
# Which process is heaviest in kernel space?
# Let's calculate the average kernel CPU usage (%system) for each command and
# print a list with the TOP 5
g_pidstat = pidstat.groupby('Command')
top5_kernel = g_pidstat['%system'].mean().sort_values(ascending=False)[:5,]
print(top5_kernel)
fig, ax = plt.subplots()
x_pos = pd.np.arange(5)
ax.bar(x_pos, top5_kernel.values)
ax.set_xticks(x_pos)
ax.set_xticklabels(list(top5_kernel.index))
plt.show()
# And the top 5 processes consuming resources in user space
top5_user = g_pidstat['%usr'].mean().sort_values(ascending=False)[:5,]
print(top5_user)
fig, ax = plt.subplots()
x_pos = pd.np.arange(5)
ax.bar(x_pos, top5_user.values)
ax.set_xticks(x_pos)
ax.set_xticklabels(list(top5_user.index))
plt.show()
###Output
_____no_output_____
###Markdown
Studying Vmstat
###Code
# Let's preview it again to remember the features
vmstat.head()
# I would like to see more details about IO
io_info = vmstat.loc[:, ['dsk_read', 'dsk_write', 'datetime']]
n_rows = len(io_info)
fig, ax = plt.subplots()
ax.plot(io_info['dsk_write'], color='darkred', label='dsk_write')
ax.plot(io_info['dsk_read'], color='blue', alpha=0.5, label='dsk_read')
ax.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Cross Data CheckingThe last graph shows some peaks in reads and writes.It would be a good idea to verify the times at which they occurred and look up the processes that were running.To accomplish this task we will need to compare data in two different datasets.
###Code
# Finding the disk io peaks
top_5_read = io_info.sort_values(by='dsk_read', ascending=False)[:5]
top_5_write = io_info.sort_values(by='dsk_write', ascending=False)[:5]
print(top_5_read, '\n\n', top_5_write)
reads = pidstat.loc[pidstat['Time'].isin(top_5_read['datetime'])]
writes = pidstat.loc[pidstat['Time'].isin(top_5_write['datetime'])]
reads.sort_values(by=['%wait','%CPU'], ascending=False)[:5]
writes.sort_values(by=['%wait','%CPU'], ascending=False)[:5]
###Output
_____no_output_____
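The cross-dataset lookup is simply a membership test of pidstat's `Time` values against the peak timestamps; in miniature (made-up numbers):

```python
import pandas as pd

vm = pd.DataFrame({"datetime": [1, 2, 3, 4], "dsk_read": [10, 900, 15, 800]})
pid = pd.DataFrame({"Time": [1, 2, 2, 3],
                    "Command": ["bash", "dd", "kworker", "bash"]})

peaks = vm.sort_values("dsk_read", ascending=False).head(2)   # the two read peaks
during_peaks = pid.loc[pid["Time"].isin(peaks["datetime"])]   # processes seen then
print(during_peaks["Command"].tolist())  # ['dd', 'kworker']
```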
###Markdown
Park-Vorhersage Website to extract info from
###Code
url = r'https://www.parken-osnabrueck.de/'
###Output
_____no_output_____
###Markdown
SeleniumUse selenium to drive headless firefox (other browsers can be configured, too)
###Code
from selenium import webdriver
from selenium.webdriver.firefox.options import Options
options = Options()
options.headless = True
driver = webdriver.Firefox(options=options)
###Output
_____no_output_____
###Markdown
robots.txtCheck if crawling is generally not desired by website operator
###Code
from urllib import robotparser
parser = robotparser.RobotFileParser(url=url)
parser.read()
'Website can be parsed: {}'.format(parser.can_fetch('*', url))
###Output
_____no_output_____
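`RobotFileParser` can also be exercised offline by feeding it the lines of a hypothetical robots.txt via `parse()`, which makes the allow/deny logic easy to verify without hitting the network:

```python
from urllib import robotparser

# Hypothetical robots.txt rules, parsed locally
rules = [
    "User-agent: *",
    "Disallow: /private/",
]
parser = robotparser.RobotFileParser()
parser.parse(rules)
print(parser.can_fetch("*", "https://example.com/"))           # True
print(parser.can_fetch("*", "https://example.com/private/x"))  # False
```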
###Markdown
Read page source and extract information
###Code
driver.get(url)
###Output
_____no_output_____
###Markdown
Use BeautifulSoup to easily read page source
###Code
from bs4 import BeautifulSoup
soup = BeautifulSoup(driver.page_source, 'html.parser')  # pass a parser explicitly to avoid the bs4 warning
for item in soup.find_all('span', 'parking-ramp-utilization'):
print(item.text)
###Output
_____no_output_____ |
05. Python for Data Analysis - NumPy/5.18 np_array_introduction.ipynb | ###Markdown
**Linspace returns evenly spaced numbers over a specified interval. `arange` returns values from [start, stop) with a given step size, while `linspace` takes the number of points you want as its third argument and spaces them evenly between start and stop.**
###Code
np.linspace(0,5,10) # Between 0 to 5 it sends 10 evenly spaced points.
np.linspace(0,10,100)
###Output
_____no_output_____
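The contrast between the two can be made concrete; `arange`'s third argument is a step, `linspace`'s is a count:

```python
import numpy as np

a = np.arange(0, 10, 2)    # third argument = STEP size -> [0 2 4 6 8]
b = np.linspace(0, 10, 5)  # third argument = NUMBER of points -> [ 0.   2.5  5.   7.5 10. ]
print(a, b)
```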
###Markdown
**Number of brackets tells the dimension of the array at the opening or the closing. For a 2D array you'd find [[ ]] at the beginning and end.** Range takes the third argument as the step size you want, whereas linspace takes the third argument as the number of points you want.
###Code
# Creating an IDENTITY Matrix - A 2D matrix with rows = columns and anything except primary diagonal = 0
np.eye(4) # 4 being the number of rows and columns in the identity matrix that we are making
###Output
_____no_output_____
###Markdown
**To Create Arrays of Random Numbers**
###Code
np.random.rand(5) # Creates an array of the given shape that we pass in.
np.random.rand(5,5)
###Output
_____no_output_____
###Markdown
**`randn` returns numbers drawn not from a uniform distribution over [0, 1) but from a standard normal distribution centred around zero.**
###Code
np.random.randn(4,4) # No need to pass tuple, just the dimensions.
np.random.randint(1,100) # Lowest inclusive and highest exclusive to be returned. [low,high,size]
np.random.randint(1,100,4) # Here we have specified size as well. By default 3rd parameter size is 1.
arr = np.arange(25)
arr
ranarr=np.random.randint(0,50,10)
ranarr
###Output
_____no_output_____
###Markdown
Reshape an Array **Returns array having same data but in a new shape**
###Code
arr.reshape(5,5) # Passing reshape(number_of_rows,number_of_columns)
# Also you need to fill out all the elements in the new array of exact same size, nothing less, nothing more.
# Number of rows*Number of Columns in the reshaped array = Number of elements in the original array being reshaped.
ranarr
ranarr.max()
ranarr.min()
ranarr.argmax() # Gets the index location of the maximum value
ranarr.argmin() # Gets the index location of the minimum value
arr
arr.shape # (25,) it means that arr was just a 1D vector
arr = arr.reshape(5,5)
arr
arr.shape # No parentheses; shape is an attribute, not a method
arr.dtype # Returns datatype in the array.
#Shortcut for having randint or another method inside numpy that you know you will be using
# regularly is to import it
from numpy.random import randint as ri
ri(1,20,10) # direct use of randint without np.random.randint()
###Output
_____no_output_____
M_accelerate_ALL.ipynb | ###Markdown
MobileNet - Pytorch Step 1: Prepare data
###Code
# MobileNet-Pytorch
import argparse
import torch
import numpy as np
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torch.optim.lr_scheduler import StepLR
from torchvision import datasets, transforms
from torch.autograd import Variable
from torch.utils.data.sampler import SubsetRandomSampler
from sklearn.metrics import accuracy_score
#from mobilenets import mobilenet
use_cuda = torch.cuda.is_available()
dtype = torch.cuda.FloatTensor if use_cuda else torch.FloatTensor
# Train, Validate, Test. Heavily inspired by Kevinzakka https://github.com/kevinzakka/DenseNet/blob/master/data_loader.py
normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225])
valid_size=0.1
# define transforms
valid_transform = transforms.Compose([
transforms.ToTensor(),
normalize
])
train_transform = transforms.Compose([
transforms.RandomCrop(32, padding=4),
transforms.RandomHorizontalFlip(),
transforms.ToTensor(),
normalize
])
# load the dataset
train_dataset = datasets.CIFAR10(root="data", train=True,
download=True, transform=train_transform)
valid_dataset = datasets.CIFAR10(root="data", train=True,
download=True, transform=valid_transform)
num_train = len(train_dataset)
indices = list(range(num_train))
split = int(np.floor(valid_size * num_train)) # 10% of the 50,000 images are held out as the validation set
np.random.seed(42)
np.random.shuffle(indices) # randomly shuffle [0, 1, ..., 49999]
train_idx, valid_idx = indices[split:], indices[:split]
train_sampler = SubsetRandomSampler(train_idx) # samples from the given indices without replacement
valid_sampler = SubsetRandomSampler(valid_idx)
###################################################################################
# ------------------------- experimenting with different batch sizes -------------
###################################################################################
show_step=2 # with larger batches, use a smaller show_step
max_epoch=150 # maximum number of training epochs
train_loader = torch.utils.data.DataLoader(train_dataset,
batch_size=256, sampler=train_sampler)
valid_loader = torch.utils.data.DataLoader(valid_dataset,
batch_size=256, sampler=valid_sampler)
test_transform = transforms.Compose([
transforms.ToTensor(), normalize
])
test_dataset = datasets.CIFAR10(root="data",
train=False,
download=True,transform=test_transform)
test_loader = torch.utils.data.DataLoader(test_dataset,
batch_size=1,
shuffle=True)
###Output
Files already downloaded and verified
Files already downloaded and verified
Files already downloaded and verified
###Markdown
Step 2: Model Config. The 32x32 input is downsampled 5 times to 1x1@1024. An earlier draft of the model (from https://github.com/kuangliu/pytorch-cifar), kept here for reference:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class Block(nn.Module):
        '''Depthwise conv + Pointwise conv'''
        def __init__(self, in_planes, out_planes, stride=1):
            super(Block, self).__init__()
            # number of groups = number of input channels (depthwise convolution)
            self.conv1 = nn.Conv2d(in_planes, in_planes, kernel_size=3, stride=stride,
                                   padding=1, groups=in_planes, bias=False)
            self.bn1 = nn.BatchNorm2d(in_planes)
            self.conv2 = nn.Conv2d(in_planes, out_planes, kernel_size=1, stride=1, padding=0, bias=False)
            one_conv_kernel_size = 3
            self.conv1D = nn.Conv1d(1, out_planes, one_conv_kernel_size, stride=1, padding=1,
                                    groups=1, dilation=1, bias=False)  # initialized in __init__
            self.bn2 = nn.BatchNorm2d(out_planes)

        def forward(self, x):
            out = F.relu(self.bn1(self.conv1(x)))
            # -------------------------- Attention -----------------------
            w = F.avg_pool2d(x, x.shape[-1])       # [bs, in_Channel, 1, 1]
            w = w.view(w.shape[0], 1, w.shape[1])  # [bs, 1, in_Channel]
            w = self.conv1D(w)                     # [bs, out_channel, in_Channel]
            w = 0.5 * F.tanh(w)                    # [-0.5, +0.5]
            w = w.view(w.shape[0], w.shape[1], w.shape[2], 1, 1)
            # ------------------------- fusion --------------------------
            out = out.view(out.shape[0], 1, out.shape[1], out.shape[2], out.shape[3])
            out = out * w
            out = out.sum(dim=2)
            out = F.relu(self.bn2(out))
            return out

    class MobileNet(nn.Module):
        # (128,2) means conv planes=128, conv stride=2; by default conv stride=1
        cfg = [64, (128,2), 128, (256,2), 256, (512,2), 512, 512, 512, 512, 512, (1024,2), 1024]

        def __init__(self, num_classes=10):
            super(MobileNet, self).__init__()
            self.conv1 = nn.Conv2d(3, 32, kernel_size=3, stride=1, padding=1, bias=False)
            self.bn1 = nn.BatchNorm2d(32)
            self.layers = self._make_layers(in_planes=32)  # build the layer stack automatically
            self.linear = nn.Linear(1024, num_classes)

        def _make_layers(self, in_planes):
            layers = []
            for x in self.cfg:
                out_planes = x if isinstance(x, int) else x[0]
                stride = 1 if isinstance(x, int) else x[1]
                layers.append(Block(in_planes, out_planes, stride))
                in_planes = out_planes
            return nn.Sequential(*layers)

        def forward(self, x):
            out = F.relu(self.bn1(self.conv1(x)))
            out = self.layers(out)
            out = F.avg_pool2d(out, 2)
            out = out.view(out.size(0), -1)
            out = self.linear(out)
            return out
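A quick sanity check of why the depthwise + pointwise factorization used throughout this model is cheap: counting weights for a 3x3 layer with 32 input and 64 output channels (pure arithmetic, no framework needed):

```python
# Weight counts: dense KxK conv vs depthwise-separable (depthwise KxK + pointwise 1x1)
cin, cout, k = 32, 64, 3
standard = k * k * cin * cout          # dense 3x3 conv: 18432 weights
separable = k * k * cin + cin * cout   # 288 + 2048 = 2336 weights
print(standard, separable, round(standard / separable, 1))  # 18432 2336 7.9
```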
###Code
# 32 缩放5次到 1x1@1024
# From https://github.com/kuangliu/pytorch-cifar
import torch
import torch.nn as nn
import torch.nn.functional as F
class Block_Attention_HALF(nn.Module):
    '''Depthwise conv + Pointwise conv'''
    def __init__(self, in_planes, out_planes, stride=1):
        super(Block_Attention_HALF, self).__init__()
        # number of groups = number of input channels (depthwise convolution)
        self.conv1 = nn.Conv2d(in_planes, in_planes, kernel_size=3, stride=stride, padding=1, groups=in_planes, bias=False)
        self.bn1 = nn.BatchNorm2d(in_planes)
        #------------------ first half of the output channels: plain 1x1 conv -------
        self.conv2 = nn.Conv2d(in_planes, out_planes//2, kernel_size=1, stride=1, padding=0, bias=False)
        #------------------ other half: 1x1 weights generated by a 1-D conv ---------
        one_conv_kernel_size = 9 # [3,7,9]
        self.conv1D= nn.Conv1d(1, out_planes//2, one_conv_kernel_size, stride=1,padding=4,groups=1,dilation=1,bias=False) # initialized in __init__
        #------------------------------------------------------------
        self.bn2 = nn.BatchNorm2d(out_planes)
    def forward(self, x):
        out = F.relu6(self.bn1(self.conv1(x)))
        #out = self.bn1(self.conv1(x))
        # -------------------------- Attention -----------------------
        w = F.avg_pool2d(x,x.shape[-1]) # better to define this pooling in __init__
        #print(w.shape)
        # [bs,in_Channel,1,1]
        in_channel=w.shape[1]
        #w = w.view(w.shape[0],1,w.shape[1])
        # [bs,1,in_Channel]
        # average over the batch while keeping dim 0
        #w= w.mean(dim=0,keepdim=True)
        # MAX=w.shape[0]
        # NUM=torch.floor(MAX*torch.rand(1)).long()
        # if NUM>=0 and NUM<MAX:
        #     w=w[NUM]
        # else:
        #     w=w[0]
        w=w[0]
        w=w.view(1,1,in_channel)
        # [bs=1,1,in_Channel]
        # one_conv_filter = nn.Conv1d(1, out_channel, one_conv_kernel_size, stride=1,padding=1,groups=1,dilation=1) # initialized in __init__
        # [bs=1,out_channel//2,in_Channel]
        w = self.conv1D(w)
        # [bs=1,out_channel//2,in_Channel]
        #-------------------------------------
        w = F.tanh(w) # squashes to (-1, +1)
        #w=F.relu6(w) # relu6 here hurts accuracy considerably
        # [bs=1,out_channel//2,in_Channel]
        w=w.view(w.shape[1],w.shape[2],1,1)
        # [out_channel//2,in_Channel,1,1]
        # -------------- softmax ---------------------------
        #print(w.shape)
        # ------------------------- fusion --------------------------
        # conv 1x1
        out_1=self.conv2(out)
        out_2=F.conv2d(out,w,bias=None,stride=1,groups=1,dilation=1)
        out=torch.cat([out_1,out_2],1)
        # ----------------------- try without relu -------------------------------
        #out = self.bn2(out)
        out=F.relu6(self.bn2(out))
        return out
class Block_Attention(nn.Module):
    '''Depthwise conv + Pointwise conv'''
    def __init__(self, in_planes, out_planes, stride=1):
        super(Block_Attention, self).__init__()
        # number of groups = number of input channels (depthwise convolution)
        self.conv1 = nn.Conv2d(in_planes, in_planes, kernel_size=3, stride=stride, padding=1, groups=in_planes, bias=False)
        self.bn1 = nn.BatchNorm2d(in_planes)
        #self.conv2 = nn.Conv2d(in_planes, out_planes, kernel_size=1, stride=1, padding=0, bias=False)
        one_conv_kernel_size = 17 # [3,7,9]
        self.conv1D= nn.Conv1d(1, out_planes, one_conv_kernel_size, stride=1,padding=8,groups=1,dilation=1,bias=False) # initialized in __init__
        self.bn2 = nn.BatchNorm2d(out_planes)
    def forward(self, x):
        out = F.relu(self.bn1(self.conv1(x)))
        # -------------------------- Attention -----------------------
        w = F.avg_pool2d(x,x.shape[-1]) # better to define this pooling in __init__
        #print(w.shape)
        # [bs,in_Channel,1,1]
        in_channel=w.shape[1]
        #w = w.view(w.shape[0],1,w.shape[1])
        # [bs,1,in_Channel]
        # average over the batch while keeping dim 0
        #w= w.mean(dim=0,keepdim=True)
        # MAX=w.shape[0]
        # NUM=torch.floor(MAX*torch.rand(1)).long()
        # if NUM>=0 and NUM<MAX:
        #     w=w[NUM]
        # else:
        #     w=w[0]
        w=w[0]
        w=w.view(1,1,in_channel)
        # [bs=1,1,in_Channel]
        # one_conv_filter = nn.Conv1d(1, out_channel, one_conv_kernel_size, stride=1,padding=1,groups=1,dilation=1) # initialized in __init__
        # [bs=1,out_channel,in_Channel]
        w = self.conv1D(w)
        # [bs=1,out_channel,in_Channel]
        w = 0.5*F.tanh(w) # [-0.5,+0.5]
        # [bs=1,out_channel,in_Channel]
        w=w.view(w.shape[1],w.shape[2],1,1)
        # [out_channel,in_Channel,1,1]
        # -------------- softmax ---------------------------
        #print(w.shape)
        # ------------------------- fusion --------------------------
        # conv 1x1
        out=F.conv2d(out,w,bias=None,stride=1,groups=1,dilation=1)
        out = F.relu(self.bn2(out))
        return out
class Block(nn.Module):
'''Depthwise conv + Pointwise conv'''
def __init__(self, in_planes, out_planes, stride=1):
super(Block, self).__init__()
        # number of groups = number of input channels (depthwise convolution)
self.conv1 = nn.Conv2d(in_planes, in_planes, kernel_size=3, stride=stride, padding=1, groups=in_planes, bias=False)
self.bn1 = nn.BatchNorm2d(in_planes)
self.conv2 = nn.Conv2d(in_planes, out_planes, kernel_size=1, stride=1, padding=0, bias=False)
self.bn2 = nn.BatchNorm2d(out_planes)
def forward(self, x):
out = F.relu(self.bn1(self.conv1(x)))
out = F.relu(self.bn2(self.conv2(out)))
return out
class MobileNet(nn.Module):
# (128,2) means conv planes=128, conv stride=2, by default conv stride=1
#cfg = [64, (128,2), 128, (256,2), 256, (512,2), 512, 512, 512, 512, 512, (1024,2), 1024]
#cfg = [64, (128,2), 128, (256,2), 256, (512,2), 512, 512, 512, 512, 512, (1024,2), [1024,1]]
cfg = [64, (128,2), 128, (256,2), (256,1), (512,2), [512,1], [512,1], [512,1],[512,1], [512,1], [1024,2], [1024,1]]
def __init__(self, num_classes=10):
super(MobileNet, self).__init__()
self.conv1 = nn.Conv2d(3, 32, kernel_size=3, stride=1, padding=1, bias=False)
self.bn1 = nn.BatchNorm2d(32)
        self.layers = self._make_layers(in_planes=32) # build the layer stack automatically from cfg
self.linear = nn.Linear(1024, num_classes)
def _make_layers(self, in_planes):
layers = []
for x in self.cfg:
if isinstance(x, int):
out_planes = x
stride = 1
layers.append(Block(in_planes, out_planes, stride))
elif isinstance(x, tuple):
out_planes = x[0]
stride = x[1]
layers.append(Block(in_planes, out_planes, stride))
            # attention ("AC") blocks are configured via a list entry in cfg
elif isinstance(x, list):
out_planes= x[0]
stride = x[1] if len(x)==2 else 1
layers.append(Block_Attention_HALF(in_planes, out_planes, stride))
else:
pass
in_planes = out_planes
return nn.Sequential(*layers)
def forward(self, x):
out = F.relu(self.bn1(self.conv1(x)))
out = self.layers(out)
out = F.avg_pool2d(out, 2)
out = out.view(out.size(0), -1)
out = self.linear(out)
return out
# From https://github.com/Z0m6ie/CIFAR-10_PyTorch
#model = mobilenet(num_classes=10, large_img=False)
# From https://github.com/kuangliu/pytorch-cifar
if torch.cuda.is_available():
model=MobileNet(10).cuda()
else:
model=MobileNet(10)
optimizer = optim.Adam(model.parameters(), lr=0.01)
scheduler = StepLR(optimizer, step_size=20, gamma=0.5)
criterion = nn.CrossEntropyLoss()
# Implement validation
def train(epoch):
model.train()
#writer = SummaryWriter()
for batch_idx, (data, target) in enumerate(train_loader):
if use_cuda:
data, target = data.cuda(), target.cuda()
data, target = Variable(data), Variable(target)
optimizer.zero_grad()
output = model(data)
correct = 0
pred = output.data.max(1, keepdim=True)[1] # get the index of the max log-probability
correct += pred.eq(target.data.view_as(pred)).sum()
loss = criterion(output, target)
loss.backward()
accuracy = 100. * (correct.cpu().numpy()/ len(output))
optimizer.step()
if batch_idx % 5*show_step == 0:
# if batch_idx % 2*show_step == 0:
# print(model.layers[1].conv1D.weight.shape)
# print(model.layers[1].conv1D.weight[0:2][0:2])
print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}, Accuracy: {:.2f}'.format(
epoch, batch_idx * len(data), len(train_loader.dataset),
100. * batch_idx / len(train_loader), loss.item(), accuracy))
f1=open("Cifar10_INFO.txt","a+")
f1.write("\n"+'Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}, Accuracy: {:.2f}'.format(
epoch, batch_idx * len(data), len(train_loader.dataset),
100. * batch_idx / len(train_loader), loss.item(), accuracy))
f1.close()
#writer.add_scalar('Loss/Loss', loss.item(), epoch)
#writer.add_scalar('Accuracy/Accuracy', accuracy, epoch)
scheduler.step()
def validate(epoch):
model.eval()
#writer = SummaryWriter()
valid_loss = 0
correct = 0
for data, target in valid_loader:
if use_cuda:
data, target = data.cuda(), target.cuda()
data, target = Variable(data), Variable(target)
output = model(data)
valid_loss += F.cross_entropy(output, target, size_average=False).item() # sum up batch loss
pred = output.data.max(1, keepdim=True)[1] # get the index of the max log-probability
correct += pred.eq(target.data.view_as(pred)).sum()
valid_loss /= len(valid_idx)
accuracy = 100. * correct.cpu().numpy() / len(valid_idx)
print('\nValidation set: Average loss: {:.4f}, Accuracy: {}/{} ({:.2f}%)\n'.format(
valid_loss, correct, len(valid_idx),
100. * correct / len(valid_idx)))
f1=open("Cifar10_INFO.txt","a+")
f1.write('\nValidation set: Average loss: {:.4f}, Accuracy: {}/{} ({:.2f}%)\n'.format(
valid_loss, correct, len(valid_idx),
100. * correct / len(valid_idx)))
f1.close()
#writer.add_scalar('Loss/Validation_Loss', valid_loss, epoch)
#writer.add_scalar('Accuracy/Validation_Accuracy', accuracy, epoch)
return valid_loss, accuracy
# Fix best model
def test(epoch):
model.eval()
test_loss = 0
correct = 0
for data, target in test_loader:
if use_cuda:
data, target = data.cuda(), target.cuda()
data, target = Variable(data), Variable(target)
output = model(data)
test_loss += F.cross_entropy(output, target, reduction='sum').item() # sum up batch loss
pred = output.data.max(1, keepdim=True)[1] # get the index of the max log-probability
correct += pred.eq(target.data.view_as(pred)).cpu().sum()
test_loss /= len(test_loader.dataset)
print('\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.2f}%)\n'.format(
test_loss, correct, len(test_loader.dataset),
100. * correct.cpu().numpy() / len(test_loader.dataset)))
f1=open("Cifar10_INFO.txt","a+")
f1.write('\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.2f}%)\n'.format(
test_loss, correct, len(test_loader.dataset),
100. * correct.cpu().numpy() / len(test_loader.dataset)))
f1.close()
def save_best(loss, accuracy, best_loss, best_acc):
if best_loss == None:
best_loss = loss
best_acc = accuracy
file = 'saved_models/best_save_model.p'
torch.save(model.state_dict(), file)
elif loss < best_loss and accuracy > best_acc:
best_loss = loss
best_acc = accuracy
file = 'saved_models/best_save_model.p'
torch.save(model.state_dict(), file)
return best_loss, best_acc
# Fantastic logger for tensorboard and pytorch,
# run tensorboard by opening a new terminal and run "tensorboard --logdir runs"
# open tensorboard at http://localhost:6006/
from tensorboardX import SummaryWriter
best_loss = None
best_acc = None
import time
SINCE=time.time()
for epoch in range(max_epoch):
train(epoch)
loss, accuracy = validate(epoch)
best_loss, best_acc = save_best(loss, accuracy, best_loss, best_acc)
NOW=time.time()
DURINGS=NOW-SINCE
SINCE=NOW
print("the time of this epoch:[{} s]".format(DURINGS))
# writer = SummaryWriter()
# writer.export_scalars_to_json("./all_scalars.json")
# writer.close()
#---------------------------- Test ------------------------------
test(epoch)
###Output
Train Epoch: 0 [0/50000 (0%)] Loss: 2.317187, Accuracy: 10.16
Train Epoch: 0 [1280/50000 (3%)] Loss: 2.739555, Accuracy: 9.38
Train Epoch: 0 [2560/50000 (6%)] Loss: 2.468929, Accuracy: 9.77
Train Epoch: 0 [3840/50000 (9%)] Loss: 2.271804, Accuracy: 11.72
Train Epoch: 0 [5120/50000 (11%)] Loss: 2.214532, Accuracy: 16.80
Train Epoch: 0 [6400/50000 (14%)] Loss: 2.125456, Accuracy: 18.36
Train Epoch: 0 [7680/50000 (17%)] Loss: 2.063283, Accuracy: 18.75
Train Epoch: 0 [8960/50000 (20%)] Loss: 1.957888, Accuracy: 25.39
Train Epoch: 0 [10240/50000 (23%)] Loss: 2.006796, Accuracy: 23.05
Train Epoch: 0 [11520/50000 (26%)] Loss: 1.940412, Accuracy: 26.56
Train Epoch: 0 [12800/50000 (28%)] Loss: 1.925677, Accuracy: 23.83
Train Epoch: 0 [14080/50000 (31%)] Loss: 1.894595, Accuracy: 26.95
Train Epoch: 0 [15360/50000 (34%)] Loss: 1.874364, Accuracy: 28.12
Train Epoch: 0 [16640/50000 (37%)] Loss: 1.920679, Accuracy: 23.83
Train Epoch: 0 [17920/50000 (40%)] Loss: 1.921081, Accuracy: 26.95
Train Epoch: 0 [19200/50000 (43%)] Loss: 1.870906, Accuracy: 23.05
Train Epoch: 0 [20480/50000 (45%)] Loss: 1.905605, Accuracy: 28.52
Train Epoch: 0 [21760/50000 (48%)] Loss: 1.875262, Accuracy: 25.78
Train Epoch: 0 [23040/50000 (51%)] Loss: 1.760782, Accuracy: 31.25
Train Epoch: 0 [24320/50000 (54%)] Loss: 1.815875, Accuracy: 29.69
Train Epoch: 0 [25600/50000 (57%)] Loss: 1.789346, Accuracy: 28.12
Train Epoch: 0 [26880/50000 (60%)] Loss: 1.729920, Accuracy: 32.81
Train Epoch: 0 [28160/50000 (62%)] Loss: 1.692929, Accuracy: 39.06
Train Epoch: 0 [29440/50000 (65%)] Loss: 1.751308, Accuracy: 35.94
Train Epoch: 0 [30720/50000 (68%)] Loss: 1.733815, Accuracy: 29.30
Train Epoch: 0 [32000/50000 (71%)] Loss: 1.748714, Accuracy: 37.50
Train Epoch: 0 [33280/50000 (74%)] Loss: 1.753810, Accuracy: 28.52
Train Epoch: 0 [34560/50000 (77%)] Loss: 1.652142, Accuracy: 37.50
Train Epoch: 0 [35840/50000 (80%)] Loss: 1.726063, Accuracy: 28.91
Train Epoch: 0 [37120/50000 (82%)] Loss: 1.670838, Accuracy: 34.38
Train Epoch: 0 [38400/50000 (85%)] Loss: 1.573907, Accuracy: 43.36
Train Epoch: 0 [39680/50000 (88%)] Loss: 1.650179, Accuracy: 35.16
Train Epoch: 0 [40960/50000 (91%)] Loss: 1.646149, Accuracy: 37.89
Train Epoch: 0 [42240/50000 (94%)] Loss: 1.821646, Accuracy: 35.16
Train Epoch: 0 [43520/50000 (97%)] Loss: 1.581002, Accuracy: 39.06
Train Epoch: 0 [35000/50000 (99%)] Loss: 1.636860, Accuracy: 37.00
Validation set: Average loss: 4.5531, Accuracy: 1197/5000 (23.00%)
the time of this epoch:[21.662639617919922 s]
Train Epoch: 1 [0/50000 (0%)] Loss: 1.654787, Accuracy: 39.06
Train Epoch: 1 [1280/50000 (3%)] Loss: 1.533673, Accuracy: 39.45
Train Epoch: 1 [2560/50000 (6%)] Loss: 1.631841, Accuracy: 32.42
Train Epoch: 1 [3840/50000 (9%)] Loss: 1.631126, Accuracy: 38.67
Train Epoch: 1 [5120/50000 (11%)] Loss: 1.639353, Accuracy: 38.67
Train Epoch: 1 [6400/50000 (14%)] Loss: 1.574432, Accuracy: 44.53
Train Epoch: 1 [7680/50000 (17%)] Loss: 1.517393, Accuracy: 47.27
Train Epoch: 1 [8960/50000 (20%)] Loss: 1.620072, Accuracy: 39.84
Train Epoch: 1 [10240/50000 (23%)] Loss: 1.485670, Accuracy: 45.31
Train Epoch: 1 [11520/50000 (26%)] Loss: 1.407409, Accuracy: 48.44
Train Epoch: 1 [12800/50000 (28%)] Loss: 1.564423, Accuracy: 42.19
Train Epoch: 1 [14080/50000 (31%)] Loss: 1.428197, Accuracy: 44.14
Train Epoch: 1 [15360/50000 (34%)] Loss: 1.486374, Accuracy: 44.53
Train Epoch: 1 [16640/50000 (37%)] Loss: 1.496168, Accuracy: 46.09
Train Epoch: 1 [17920/50000 (40%)] Loss: 1.353985, Accuracy: 48.05
Train Epoch: 1 [19200/50000 (43%)] Loss: 1.440946, Accuracy: 49.61
Train Epoch: 1 [20480/50000 (45%)] Loss: 1.428810, Accuracy: 49.61
Train Epoch: 1 [21760/50000 (48%)] Loss: 1.366629, Accuracy: 51.56
Train Epoch: 1 [23040/50000 (51%)] Loss: 1.520613, Accuracy: 41.41
Train Epoch: 1 [24320/50000 (54%)] Loss: 1.424382, Accuracy: 45.70
Train Epoch: 1 [25600/50000 (57%)] Loss: 1.417356, Accuracy: 49.61
Train Epoch: 1 [26880/50000 (60%)] Loss: 1.472661, Accuracy: 45.31
Train Epoch: 1 [28160/50000 (62%)] Loss: 1.338052, Accuracy: 47.66
Train Epoch: 1 [29440/50000 (65%)] Loss: 1.241622, Accuracy: 57.42
Train Epoch: 1 [30720/50000 (68%)] Loss: 1.254776, Accuracy: 51.56
Train Epoch: 1 [32000/50000 (71%)] Loss: 1.353367, Accuracy: 50.00
Train Epoch: 1 [33280/50000 (74%)] Loss: 1.335058, Accuracy: 50.00
Train Epoch: 1 [34560/50000 (77%)] Loss: 1.464982, Accuracy: 48.83
Train Epoch: 1 [35840/50000 (80%)] Loss: 1.362602, Accuracy: 54.30
Train Epoch: 1 [37120/50000 (82%)] Loss: 1.333230, Accuracy: 52.73
Train Epoch: 1 [38400/50000 (85%)] Loss: 1.346182, Accuracy: 49.22
Train Epoch: 1 [39680/50000 (88%)] Loss: 1.174814, Accuracy: 58.98
Train Epoch: 1 [40960/50000 (91%)] Loss: 1.270859, Accuracy: 51.17
Train Epoch: 1 [42240/50000 (94%)] Loss: 1.222242, Accuracy: 59.38
Train Epoch: 1 [43520/50000 (97%)] Loss: 1.269724, Accuracy: 54.30
Train Epoch: 1 [35000/50000 (99%)] Loss: 1.163435, Accuracy: 57.00
Validation set: Average loss: 4.8672, Accuracy: 1912/5000 (38.00%)
the time of this epoch:[21.558705806732178 s]
Train Epoch: 2 [0/50000 (0%)] Loss: 1.170546, Accuracy: 58.20
Train Epoch: 2 [1280/50000 (3%)] Loss: 1.167825, Accuracy: 55.86
Train Epoch: 2 [2560/50000 (6%)] Loss: 1.267082, Accuracy: 57.03
Train Epoch: 2 [3840/50000 (9%)] Loss: 1.203408, Accuracy: 57.42
Train Epoch: 2 [5120/50000 (11%)] Loss: 1.226529, Accuracy: 49.22
Train Epoch: 2 [6400/50000 (14%)] Loss: 1.311463, Accuracy: 53.91
Train Epoch: 2 [7680/50000 (17%)] Loss: 1.213612, Accuracy: 59.38
Train Epoch: 2 [8960/50000 (20%)] Loss: 1.147260, Accuracy: 56.64
Train Epoch: 2 [10240/50000 (23%)] Loss: 1.254088, Accuracy: 55.08
Train Epoch: 2 [11520/50000 (26%)] Loss: 1.197541, Accuracy: 56.25
Train Epoch: 2 [12800/50000 (28%)] Loss: 1.137027, Accuracy: 55.86
Train Epoch: 2 [14080/50000 (31%)] Loss: 1.194584, Accuracy: 61.33
Train Epoch: 2 [15360/50000 (34%)] Loss: 1.204290, Accuracy: 60.16
Train Epoch: 2 [16640/50000 (37%)] Loss: 1.172325, Accuracy: 55.86
Train Epoch: 2 [17920/50000 (40%)] Loss: 1.149843, Accuracy: 59.38
Train Epoch: 2 [19200/50000 (43%)] Loss: 1.126659, Accuracy: 58.20
Train Epoch: 2 [20480/50000 (45%)] Loss: 1.092484, Accuracy: 58.98
Train Epoch: 2 [21760/50000 (48%)] Loss: 1.099942, Accuracy: 55.47
Train Epoch: 2 [23040/50000 (51%)] Loss: 1.186884, Accuracy: 60.94
Train Epoch: 2 [24320/50000 (54%)] Loss: 1.117447, Accuracy: 60.55
Train Epoch: 2 [25600/50000 (57%)] Loss: 1.173386, Accuracy: 55.08
Train Epoch: 2 [26880/50000 (60%)] Loss: 1.084559, Accuracy: 58.98
Train Epoch: 2 [28160/50000 (62%)] Loss: 1.171377, Accuracy: 59.77
Train Epoch: 2 [29440/50000 (65%)] Loss: 1.049761, Accuracy: 62.89
Train Epoch: 2 [30720/50000 (68%)] Loss: 1.029481, Accuracy: 63.28
Train Epoch: 2 [32000/50000 (71%)] Loss: 1.121746, Accuracy: 58.98
Train Epoch: 2 [33280/50000 (74%)] Loss: 1.131498, Accuracy: 58.98
Train Epoch: 2 [34560/50000 (77%)] Loss: 1.166869, Accuracy: 60.55
Train Epoch: 2 [35840/50000 (80%)] Loss: 1.035683, Accuracy: 60.55
Train Epoch: 2 [37120/50000 (82%)] Loss: 1.028708, Accuracy: 59.38
Train Epoch: 2 [38400/50000 (85%)] Loss: 1.008805, Accuracy: 62.11
Train Epoch: 2 [39680/50000 (88%)] Loss: 1.161276, Accuracy: 58.59
Train Epoch: 2 [40960/50000 (91%)] Loss: 1.036611, Accuracy: 63.28
Train Epoch: 2 [42240/50000 (94%)] Loss: 1.166182, Accuracy: 57.03
Train Epoch: 2 [43520/50000 (97%)] Loss: 1.036265, Accuracy: 62.89
Train Epoch: 2 [35000/50000 (99%)] Loss: 0.937140, Accuracy: 70.50
Validation set: Average loss: 2.1692, Accuracy: 2482/5000 (49.00%)
the time of this epoch:[21.577322244644165 s]
Train Epoch: 3 [0/50000 (0%)] Loss: 1.088407, Accuracy: 61.33
Train Epoch: 3 [1280/50000 (3%)] Loss: 1.053350, Accuracy: 62.89
Train Epoch: 3 [2560/50000 (6%)] Loss: 1.126616, Accuracy: 57.03
Train Epoch: 3 [3840/50000 (9%)] Loss: 0.988742, Accuracy: 65.23
Train Epoch: 3 [5120/50000 (11%)] Loss: 1.202767, Accuracy: 57.42
Train Epoch: 3 [6400/50000 (14%)] Loss: 1.000872, Accuracy: 63.67
Train Epoch: 3 [7680/50000 (17%)] Loss: 0.907074, Accuracy: 67.58
Train Epoch: 3 [8960/50000 (20%)] Loss: 1.005800, Accuracy: 62.11
Train Epoch: 3 [10240/50000 (23%)] Loss: 0.993526, Accuracy: 66.02
Train Epoch: 3 [11520/50000 (26%)] Loss: 0.856707, Accuracy: 71.48
Train Epoch: 3 [12800/50000 (28%)] Loss: 1.010143, Accuracy: 61.33
###Markdown
Step 3: Test
###Code
test(epoch)
###Output
Test set: Average loss: 0.6860, Accuracy: 8937/10000 (89.37%)
###Markdown
First run: scale in [0,1] 
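The parsing cell below reads the loss and accuracy from fixed token positions after splitting each log line on single spaces. As an illustration, here is that extraction applied to one `Train` line taken verbatim from the output above:

```python
# Sample line copied from the training log printed above.
line = "Train Epoch: 0 [0/50000 (0%)]\tLoss: 2.317187, Accuracy: 10.16"
res = line.strip("\n").split(" ")
accuracy = float(res[-1])            # last token: the accuracy value
loss = float(res[-3].split(",")[0])  # third-from-last token: "2.317187,"
print(loss, accuracy)  # 2.317187 10.16
```

The trailing comma after the loss value is why the code splits that token on `,` before converting it to a float.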
###Code
# Inspect the information logged during training
import matplotlib.pyplot as plt
def parse(in_file,flag):
num=-1
ys=list()
xs=list()
losses=list()
with open(in_file,"r") as reader:
for aLine in reader:
#print(aLine)
res=[e for e in aLine.strip('\n').split(" ")]
if res[0]=="Train" and flag=="Train":
num=num+1
ys.append(float(res[-1]))
xs.append(int(num))
losses.append(float(res[-3].split(',')[0]))
if res[0]=="Validation" and flag=="Validation":
num=num+1
xs.append(int(num))
tmp=[float(e) for e in res[-2].split('/')]
ys.append(100*float(tmp[0]/tmp[1]))
losses.append(float(res[-4].split(',')[0]))
plt.figure(1)
plt.plot(xs,ys,'ro')
plt.figure(2)
plt.plot(xs, losses, 'ro')
plt.show()
def main():
in_file="D://INFO.txt"
# Show accuracy and loss for the training phase
parse(in_file,"Train") # "Validation"
# Show accuracy and loss for the validation phase
#parse(in_file,"Validation") # "Validation"
if __name__=="__main__":
main()
# Inspect the information logged during training
import matplotlib.pyplot as plt
def parse(in_file,flag):
num=-1
ys=list()
xs=list()
losses=list()
with open(in_file,"r") as reader:
for aLine in reader:
#print(aLine)
res=[e for e in aLine.strip('\n').split(" ")]
if res[0]=="Train" and flag=="Train":
num=num+1
ys.append(float(res[-1]))
xs.append(int(num))
losses.append(float(res[-3].split(',')[0]))
if res[0]=="Validation" and flag=="Validation":
num=num+1
xs.append(int(num))
tmp=[float(e) for e in res[-2].split('/')]
ys.append(100*float(tmp[0]/tmp[1]))
losses.append(float(res[-4].split(',')[0]))
plt.figure(1)
plt.plot(xs,ys,'r-')
plt.figure(2)
plt.plot(xs, losses, 'r-')
plt.show()
def main():
in_file="D://INFO.txt"
# Show accuracy and loss for the training phase
parse(in_file,"Train") # "Validation"
# Show accuracy and loss for the validation phase
parse(in_file,"Validation") # "Validation"
if __name__=="__main__":
main()
###Output
_____no_output_____ |
visualization/.ipynb_checkpoints/pygrib_viz-checkpoint.ipynb | ###Markdown
Visualization of a grib file using pygrib. pygrib will need to be installed prior to using this notebook. In order to install pygrib, you can use conda: `conda install -c conda-forge pygrib`
###Code
import pygrib
# We'll be using widgets in the notebook
import ipywidgets as widgets
from IPython.display import display
###Output
_____no_output_____
###Markdown
Now to select a grib file. This can be any grib file, but you can use our example grib file in the `data/` directory of this repository.
###Code
grib_file = '../data/gdas.t12z.pgrb2.1p00.f000'
###Output
_____no_output_____
###Markdown
Opening a Grib file in pygrib is similar to any other file. Additionally, since it seeks to different byte offsets in the file, it only loads into memory what you ask.
###Code
fh = pygrib.open(grib_file)
num_messages = fh.messages
print(num_messages)
fh.message(1)
###Output
_____no_output_____
###Markdown
Now we can select the variables
###Code
grib_messages = [(fh.message(i), i) for i in range(1, num_messages + 1)]  # grib message numbers are 1-based, so include the last one
w = widgets.Dropdown(
options=grib_messages,
value=1,
description="Select which grib message you would like to visualize")
display(w)
w.value
fh.seek(w.value)
message = fh[w.value]
data = message.values
lats,lons = message.latlons()
###Output
_____no_output_____
###Markdown
With your variable selected, we can now visualize the data.
###Code
import matplotlib.pyplot as plt # used to plot the data.
import cartopy.crs as ccrs # Used to georeference data.
import cartopy.util as cutil
data = data.data
proj = ccrs.PlateCarree(central_longitude=-90)
plt.gcf().set_size_inches(15,15)
ax = plt.axes(projection=proj)
plt.contourf(lons, lats, data, transform=ccrs.PlateCarree())  # the data is in plain lat/lon, so transform without the longitude shift
ax.coastlines()
plt.show()
data, lons_1d = cutil.add_cyclic_point(data, coord=lons[0])
###Output
_____no_output_____ |
source/07_Generative_Algorithms/Code.ipynb | ###Markdown
Today:* Generative models* Naive Bayes Resources:* Generative model: https://en.wikipedia.org/wiki/Generative_model* Naive Bayes: http://cs229.stanford.edu/notes/cs229-notes2.pdf* Naive Bayes: https://en.wikipedia.org/wiki/Naive_Bayes_classifier
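Before the TensorFlow version below, the generative-classification idea (fit one class-conditional Gaussian per feature, then classify by the largest log-posterior under uniform priors) can be sketched with the standard library alone. The data here is a hypothetical one-dimensional toy set, not the Iris data used later:

```python
import math
from collections import defaultdict

def fit(xs, ys):
    """Fit one Gaussian per class: per-class mean and (biased) variance."""
    by_class = defaultdict(list)
    for x, y in zip(xs, ys):
        by_class[y].append(x)
    params = {}
    for c, pts in by_class.items():
        m = sum(pts) / len(pts)
        v = sum((p - m) ** 2 for p in pts) / len(pts)
        params[c] = (m, v)
    return params

def log_gauss(x, m, v):
    """Log-density of N(m, v) at x."""
    return -0.5 * math.log(2 * math.pi * v) - (x - m) ** 2 / (2 * v)

def predict(x, params):
    # Uniform priors, so the argmax over log P(x | c) decides the class.
    return max(params, key=lambda c: log_gauss(x, *params[c]))

# Toy one-dimensional data (hypothetical values, two classes).
params = fit([1.0, 1.2, 0.8, 5.0, 5.2, 4.8], [0, 0, 0, 1, 1, 1])
print(predict(1.1, params), predict(4.9, params))  # 0 1
```

The TensorFlow cell below performs exactly these steps, only vectorized: `tf.nn.moments` computes the per-class mean and variance, `dist.log_prob` plays the role of `log_gauss`, and `tf.argmax` picks the class.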
###Code
import numpy as np
import tensorflow as tf
import matplotlib.mlab as mlab
import matplotlib.pyplot as plt
from scipy.stats import norm
from sklearn import datasets
iris = datasets.load_iris()
# Only take the first feature
X = iris.data[:, :1]
y = iris.target
print('IRIS DATASET')
print('Features', X[:3])
print('Class', y[:3])
# Separate training points by class (nb_classes * nb_samples * nb_features)
unique_y = np.unique(y)
print('Unique labels', unique_y)
points_by_class = np.array([[x for x, t in zip(X, y) if t == c] for c in unique_y])
print('Separated', points_by_class.shape)
with tf.Session() as sess:
# FIT
# Estimate mean and variance for each class / feature
# Shape: number_of_classes * number_of_features
mean, var = tf.nn.moments(tf.constant(points_by_class), axes=[1])
print('Mean', mean.eval())
print('Var', var.eval())
# Materialize the per-class mean and standard deviation as flat NumPy arrays;
# the plotting cell below expects them under the names `mu` and `sigma`.
mu, sigma = sess.run([mean, tf.sqrt(var)])
mu, sigma = mu.ravel(), sigma.ravel()
# Create a 3x2 univariate normal distribution with the known mean and variance
dist = tf.distributions.Normal(loc=mean, scale=tf.sqrt(var))
# PREDICT
nb_classes, nb_features = map(int, dist.scale.shape)
print(nb_classes, nb_features)
X = X[45:55]
print(X.shape)
print(tf.reshape(tf.tile(X, [1, nb_classes]), [-1, nb_classes, nb_features]).shape)
# Conditional probabilities log P(x|c) with shape (nb_samples, nb_classes)
cond_probs = tf.reduce_sum(
dist.log_prob(tf.reshape(tf.tile(X, [1, nb_classes]), [-1, nb_classes, nb_features])),
axis=2
)
# uniform priors
priors = np.log(np.array([1. / nb_classes] * nb_classes))
# posterior log probability, log P(c) + log P(x|c)
joint_likelihood = tf.add(priors, cond_probs)
# normalize to get (log)-probabilities
norm_factor = tf.reduce_logsumexp(joint_likelihood, axis=1, keep_dims=True)
log_prob = joint_likelihood - norm_factor
# exp to get the actual probabilities
Z = sess.run(tf.argmax(tf.exp(log_prob), axis=1))
print(y[45:55])
print(Z)
fig, ax = plt.subplots()
colors = ['r', 'g', 'b']
x = np.linspace(0, 10, 1000)
for i in range(len(mu)):
# Create a normal distribution
dist = norm(mu[i], sigma[i])
# Plot
plt.plot(x, dist.pdf(x), c=colors[i], label=r'$\mu=%.1f,\ \sigma=%.1f$' % (mu[i], sigma[i]))
plt.xlim(4.0, 8.2)
plt.ylim(0, 1.5)
plt.title('Gaussian Distribution')
plt.legend()
plt.show()
###Output
_____no_output_____ |
05_Data_Modelling_Unsupervised_Learning.ipynb | ###Markdown
Data Modelling
###Code
from pyspark.sql.session import SparkSession
from pyspark.ml.feature import VectorAssembler,VectorIndexer
from pyspark.ml.evaluation import ClusteringEvaluator
from pyspark.ml.clustering import KMeans
from helpers.helper_functions import translate_to_file_string
from pyspark.ml.tuning import CrossValidator, ParamGridBuilder
from pyspark.ml.evaluation import RegressionEvaluator
from pyspark.ml.feature import StringIndexer
from pyspark.mllib.evaluation import BinaryClassificationMetrics
from pyspark.ml import Pipeline
from pyspark.mllib.evaluation import MulticlassMetrics
inputFile = translate_to_file_string("./data/Data_Preparation_Result.csv")
def prettyPrint(dm, collArray) :
rows = dm.toArray().tolist()
dfDM = spark.createDataFrame(rows,collArray)
newDf = dfDM.toPandas()
from IPython.display import display, HTML
return HTML(newDf.to_html(index=False))
###Output
_____no_output_____
###Markdown
Create Spark Session
###Code
#create a SparkSession
spark = (SparkSession
.builder
.appName("DataModelling")
.getOrCreate())
# create a DataFrame using an inferred schema
df = spark.read.option("header", "true") \
.option("inferSchema", "true") \
.option("delimiter", ";") \
.csv(inputFile)
print(df.printSchema())
###Output
root
|-- Bundesland: string (nullable = true)
|-- BundeslandIndex: integer (nullable = true)
|-- Landkreis: string (nullable = true)
|-- LandkreisIndex: integer (nullable = true)
|-- Altersgruppe: string (nullable = true)
|-- AltersgruppeIndex: double (nullable = true)
|-- Geschlecht: string (nullable = true)
|-- GeschlechtIndex: double (nullable = true)
|-- FallStatus: string (nullable = true)
|-- FallStatusIndex: double (nullable = true)
|-- Falldatum: string (nullable = true)
None
###Markdown
Data Preparation Filtering the records For training this model it makes sense to consider only the cases for which the outcome of the corona infection is already known ("GENESEN" or "GESTORBEN"). The cases of persons who are still ill are therefore filtered out. The FallStatusIndex also has to be reassigned so that this feature only contains the values 0 or 1.
###Code
dfNeu = df.filter(df.FallStatus != "NICHTEINGETRETEN").drop("FallStatusIndex")
###Output
_____no_output_____
###Markdown
FallStatusIndex
###Code
indexer = StringIndexer(inputCol="FallStatus", outputCol="FallStatusIndex")
dfReindexed = indexer.fit(dfNeu).transform(dfNeu)
###Output
_____no_output_____
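For intuition, StringIndexer's default `frequencyDesc` ordering assigns index 0 to the most frequent label. A minimal pure-Python sketch of that behaviour, using toy labels rather than the actual case data:

```python
from collections import Counter

def string_indexer(labels):
    """Index labels by descending frequency (ties broken alphabetically)."""
    counts = Counter(labels)
    ordered = sorted(counts, key=lambda lab: (-counts[lab], lab))
    mapping = {lab: float(i) for i, lab in enumerate(ordered)}
    return [mapping[lab] for lab in labels], mapping

indexed, mapping = string_indexer(
    ["GENESEN", "GENESEN", "GESTORBEN", "GENESEN", "GESTORBEN"])
print(mapping)  # GENESEN is most frequent, so it gets index 0.0
```

This is why, after filtering, the new FallStatusIndex only takes the values 0.0 and 1.0: exactly two labels remain.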
###Markdown
Drawing a Sample Since the data set is very large, it may be necessary to train on a smaller subset. The `fraction` parameter can be used here to adjust the size of the sample.
###Code
dfsample = dfReindexed.sample(withReplacement=False, fraction=1.0, seed=12334556)
###Output
_____no_output_____
###Markdown
Undersampling Similar to the fraud-detection example by Tara Boyle (2019), the class of corona deaths is underrepresented in the present data set, which is why one speaks of a data imbalance here. This becomes apparent when comparing the number of deaths with the number of recovered cases.
###Code
# Compare the case counts
dfsample.groupBy("FallStatus").count().show()
###Output
+----------+-------+
|FallStatus| count|
+----------+-------+
| GENESEN|3471830|
| GESTORBEN| 88350|
+----------+-------+
###Markdown
Most machine learning algorithms work best when the number of samples is roughly the same in all classes. This was also observed with the different regression models in the course of this work: since the individual models try to reduce their error, in the end all models predicted only the class GENESEN for every record, because that gave the highest probability of being correct. There are two ways to solve this problem: under- and oversampling, both of which fall under the term resampling. With undersampling, records are deleted from the class with the most instances, whereas with oversampling, new values are added to the class with the fewest instances. (Will Badr 2019; Tara Boyle 2019) Since enough records are available in this case, the former is the obvious choice.
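The undersampling idea, randomly dropping majority-class rows until every class matches the size of the smallest one, can be sketched in pure Python. The rows here are hypothetical toy tuples; the actual cell below does the equivalent with Spark's `DataFrame.sample`:

```python
import random

def undersample(rows, label_of, seed=42):
    """Randomly downsample every class to the size of the smallest class."""
    by_class = {}
    for row in rows:
        by_class.setdefault(label_of(row), []).append(row)
    n_min = min(len(members) for members in by_class.values())
    rng = random.Random(seed)
    balanced = []
    for members in by_class.values():
        balanced.extend(rng.sample(members, n_min))
    return balanced

# Hypothetical, heavily imbalanced toy rows.
rows = [("GENESEN", i) for i in range(1000)] + [("GESTORBEN", i) for i in range(30)]
balanced = undersample(rows, label_of=lambda r: r[0])
print(len(balanced))  # 60, i.e. 30 rows of each class
```

Sampling with a fixed seed keeps the result reproducible, which mirrors the `seed` argument passed to Spark's sampler in the cell below.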
###Code
# Determine the number of deceased cases
dfGestorben = dfsample.filter(dfsample.FallStatus == "GESTORBEN")
anzahlGestorben = dfGestorben.count()
print("Number deceased : %s" % anzahlGestorben)
# Determine the ratio of deceased to recovered cases
dfGenesen = dfsample.filter(dfsample.FallStatus == "GENESEN")
anzahlGenesen = dfGenesen.count()
print("Number recovered : %s" % anzahlGenesen)
ratio = anzahlGestorben / anzahlGenesen
print("Ratio : %s" % ratio)
# Draw a sample of approximately the same size as the deceased class
dfGenesenSample = dfGenesen.sample(fraction=ratio, seed=12345)
dfGesamtSample = dfGestorben.union(dfGenesenSample)
# Sanity check
dfGesamtSample.groupBy("FallStatus").count().show()
###Output
+----------+-----+
|FallStatus|count|
+----------+-----+
| GENESEN|88520|
| GESTORBEN|88350|
+----------+-----+
###Markdown
Splitting into Training and Test Data
###Code
splits = dfGesamtSample.randomSplit([0.8, 0.2], 345678)
trainingData = splits[0]
testData = splits[1]
###Output
_____no_output_____
###Markdown
Building the Feature Vector
###Code
assembler = VectorAssembler(outputCol="features", inputCols=["GeschlechtIndex","FallStatusIndex","AltersgruppeIndex", "LandkreisIndex","BundeslandIndex"])
###Output
_____no_output_____
###Markdown
Building a VectorIndexer A VectorIndexer is used to categorize features in a vector data set.
###Code
featureIndexer = VectorIndexer(inputCol="features",outputCol="indexedFeatures", maxCategories=10)
# TODO: test different values for maxCategories
###Output
_____no_output_____
###Markdown
Modelling K-Means K-Means is one of the most widely used clustering algorithms. (Apache Spark 2021)
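For intuition, Lloyd's algorithm behind k-means alternates between assigning each point to its nearest center and recomputing each center as the mean of its assigned points. A minimal one-dimensional sketch with toy data and fixed initial centers (Spark's actual implementation uses the parallel k-means|| initialization instead):

```python
def kmeans_1d(points, centers, iters=10):
    """Plain Lloyd iterations on scalars: assign, then recompute means."""
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for p in points:
            nearest = min(range(len(centers)), key=lambda i: abs(p - centers[i]))
            clusters[nearest].append(p)
        # Keep the old center if a cluster ends up empty.
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers

# Two well-separated toy groups; fixed initial centers keep this deterministic.
centers = kmeans_1d([1.0, 1.5, 2.0, 10.0, 10.5, 11.0], centers=[0.0, 5.0])
print(centers)  # [1.5, 10.5]
```

The `k` hyperparameter tuned via the grid search further below corresponds to the number of initial centers chosen here.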
###Code
kmeans = KMeans(featuresCol="indexedFeatures", predictionCol="prediction", seed=122334455) #predictionCol="prediction",
###Output
_____no_output_____
###Markdown
Pipeline
###Code
pipeline = Pipeline(stages=[assembler,featureIndexer, kmeans])
###Output
_____no_output_____
###Markdown
Evaluator The clustering is evaluated with a dedicated ClusteringEvaluator.
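The evaluator's metric is the silhouette: for each point, `a` is its mean distance to its own cluster, `b` is the smallest mean distance to any other cluster, and the score is `(b - a) / max(a, b)`. A minimal sketch on toy one-dimensional data (plain Euclidean distance for clarity; Spark's evaluator defaults to a squared-Euclidean variant):

```python
def silhouette(points, labels):
    """Mean silhouette coefficient for 1-D points (plain Euclidean distance)."""
    def mean_dist(p, members):
        return sum(abs(p - q) for q in members) / len(members)
    scores = []
    for i, (p, lab) in enumerate(zip(points, labels)):
        own = [q for j, q in enumerate(points) if labels[j] == lab and j != i]
        if not own:  # a singleton cluster scores 0 by convention
            scores.append(0.0)
            continue
        a = mean_dist(p, own)  # cohesion: mean distance within the own cluster
        b = min(mean_dist(p, [q for j, q in enumerate(points) if labels[j] == c])
                for c in set(labels) if c != lab)  # separation
        scores.append((b - a) / max(a, b))
    return sum(scores) / len(scores)

print(silhouette([1.0, 1.2, 10.0, 10.2], [0, 0, 1, 1]))  # close to 1.0
```

Scores near 1 indicate tight, well-separated clusters; scores near 0 indicate overlapping clusters, which is what the fairly low value reported for this model further below suggests.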
###Code
# Define the evaluator
evaluator = ClusteringEvaluator()
###Output
_____no_output_____
###Markdown
Parameter Tuning An important task in machine learning is selecting a suitable model and finding the right parameters for it. The latter is also called parameter tuning. The MLlib included in PySpark offers dedicated tooling for exactly this: either a CrossValidator or a TrainValidationSplit can be used. The prerequisites are an estimator (a model or a pipeline), a parameter grid and an evaluator. This is also related to the topic of cross-validation. (Apache Spark 2020a)
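What the CrossValidator does can be sketched in pure Python: split the record indices into k folds, and for every candidate parameter value average an evaluation score over the held-out folds, keeping the best. The scoring function here is a toy stand-in for fitting and evaluating the actual pipeline:

```python
def k_folds(n, k):
    """Split indices 0..n-1 into k contiguous folds of near-equal size."""
    size, rem = divmod(n, k)
    folds, start = [], 0
    for i in range(k):
        end = start + size + (1 if i < rem else 0)
        folds.append(list(range(start, end)))
        start = end
    return folds

def cross_validate(param_grid, score, n_samples, k=2):
    """Pick the parameter with the best score averaged over held-out folds."""
    best_param, best_score = None, float("-inf")
    for param in param_grid:
        fold_scores = []
        for held_out in k_folds(n_samples, k):
            train = [i for i in range(n_samples) if i not in set(held_out)]
            fold_scores.append(score(param, train, held_out))
        avg = sum(fold_scores) / len(fold_scores)
        if avg > best_score:
            best_param, best_score = param, avg
    return best_param, best_score

# Toy scoring function standing in for fit-and-evaluate; it pretends k=4 is best.
best, _ = cross_validate([3, 4, 6], lambda p, train, held_out: -abs(p - 4),
                         n_samples=10)
print(best)  # 4
```

In the real CrossValidator the score function fits the pipeline on the training folds and applies the evaluator to the held-out fold, which is exactly what the cells below configure.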
###Code
paramGrid = ParamGridBuilder()\
.addGrid(kmeans.k, [3,4,6]) \
.addGrid(kmeans.maxIter, [10])\
.build()
###Output
_____no_output_____
###Markdown
Cross-Validation
###Code
# Define the cross-validator
# numFolds specifies into how many folds the data set is split.
crossval = CrossValidator(estimator=pipeline,
estimatorParamMaps=paramGrid,
evaluator=evaluator,
numFolds=2,
parallelism=2)
###Output
_____no_output_____
###Markdown
Training
###Code
# Fit the model and select the best parameters
cvModel = crossval.fit(trainingData)
###Output
_____no_output_____
###Markdown
Determining the Parameters
###Code
kmModel = cvModel.bestModel.stages[2]
print(kmModel.explainParams())
centers = kmModel.clusterCenters()
print("Cluster Centers: ")
for center in centers:
print(center)
###Output
distanceMeasure: the distance measure. Supported options: 'euclidean' and 'cosine'. (default: euclidean)
featuresCol: features column name. (default: features, current: indexedFeatures)
initMode: The initialization algorithm. This can be either "random" to choose random points as initial cluster centers, or "k-means||" to use a parallel variant of k-means++ (default: k-means||)
initSteps: The number of steps for k-means|| initialization mode. Must be > 0. (default: 2)
k: The number of clusters to create. Must be > 1. (default: 2, current: 4)
maxIter: max number of iterations (>= 0). (default: 20, current: 10)
predictionCol: prediction column name. (default: prediction, current: prediction)
seed: random seed. (default: -81890329110200490, current: 122334455)
tol: the convergence tolerance for iterative algorithms (>= 0). (default: 0.0001)
weightCol: weight column name. If this is not set or empty, we treat all instance weights as 1.0. (undefined)
Cluster Centers:
[4.97482656e-01 5.81090188e-01 2.02731417e+00 1.49107083e+04
1.45687215e+01]
[5.06732740e-01 4.66102794e-01 1.82227642e+00 4.30144703e+03
4.00996290e+00]
[5.02451641e-01 4.96901773e-01 1.88541947e+00 1.00675486e+04
9.78544103e+00]
[5.07824630e-01 4.79187717e-01 1.85541070e+00 7.27558676e+03
6.86731400e+00]
###Markdown
Testing the Model
###Code
predictions = cvModel.transform(testData)
predictions.show()
# Silhouette score
silhouette = evaluator.evaluate(predictions)
print("Silhouette with squared euclidean distance = " , silhouette)
predictions.groupBy( "prediction","FallStatus", "Altersgruppe", "Geschlecht").count().orderBy("prediction").show()
###Output
+----------+----------+------------+----------+-----+
|prediction|FallStatus|Altersgruppe|Geschlecht|count|
+----------+----------+------------+----------+-----+
| 0| GENESEN| A00-A04| W| 20|
| 0| GESTORBEN| A80+| W| 1335|
| 0| GENESEN| A15-A34| M| 256|
| 0| GENESEN| A35-A59| W| 573|
| 0| GENESEN| A80+| M| 67|
| 0| GENESEN| A05-A14| W| 67|
| 0| GENESEN| A80+| W| 166|
| 0| GENESEN| A60-A79| W| 262|
| 0| GESTORBEN| A80+| M| 1132|
| 0| GESTORBEN| A15-A34| M| 5|
| 0| GENESEN| A00-A04| M| 31|
| 0| GENESEN| A15-A34| W| 310|
| 0| GENESEN| A35-A59| M| 491|
| 0| GESTORBEN| A60-A79| M| 669|
| 0| GENESEN| A60-A79| M| 248|
| 0| GESTORBEN| A15-A34| W| 5|
| 0| GESTORBEN| A35-A59| W| 36|
| 0| GESTORBEN| A60-A79| W| 367|
| 0| GESTORBEN| A35-A59| M| 79|
| 0| GENESEN| A05-A14| M| 68|
+----------+----------+------------+----------+-----+
only showing top 20 rows
|
tutorials/001 - Introduction.ipynb | ###Markdown
[](https://github.com/awslabs/aws-data-wrangler) 1 - Introduction What is AWS Data Wrangler?An [open-source](https://github.com/awslabs/aws-data-wrangler) Python package that extends the power of the [Pandas](https://github.com/pandas-dev/pandas) library to AWS, connecting **DataFrames** and AWS data related services (**Amazon Redshift**, **AWS Glue**, **Amazon Athena**, **Amazon Timestream**, **Amazon EMR**, etc).Built on top of other open-source projects like [Pandas](https://github.com/pandas-dev/pandas), [Apache Arrow](https://github.com/apache/arrow) and [Boto3](https://github.com/boto/boto3), it offers abstracted functions to execute usual ETL tasks like load/unload data from **Data Lakes**, **Data Warehouses** and **Databases**.Check our [list of functionalities](https://aws-data-wrangler.readthedocs.io/en/2.5.0/api.html). How to install?The Wrangler runs almost anywhere over Python 3.6, 3.7, 3.8 and 3.9, so there are several different ways to install it in the desired environment. - [PyPi (pip)](https://aws-data-wrangler.readthedocs.io/en/2.5.0/install.html#pypi-pip) - [Conda](https://aws-data-wrangler.readthedocs.io/en/2.5.0/install.html#conda) - [AWS Lambda Layer](https://aws-data-wrangler.readthedocs.io/en/2.5.0/install.html#aws-lambda-layer) - [AWS Glue Python Shell Jobs](https://aws-data-wrangler.readthedocs.io/en/2.5.0/install.html#aws-glue-python-shell-jobs) - [AWS Glue PySpark Jobs](https://aws-data-wrangler.readthedocs.io/en/2.5.0/install.html#aws-glue-pyspark-jobs) - [Amazon SageMaker Notebook](https://aws-data-wrangler.readthedocs.io/en/2.5.0/install.html#amazon-sagemaker-notebook) - [Amazon SageMaker Notebook Lifecycle](https://aws-data-wrangler.readthedocs.io/en/2.5.0/install.html#amazon-sagemaker-notebook-lifecycle) - [EMR Cluster](https://aws-data-wrangler.readthedocs.io/en/2.5.0/install.html#emr-cluster) - [From source](https://aws-data-wrangler.readthedocs.io/en/2.5.0/install.html#from-source)Some good practices for most of the above methods are: - Use
new and individual Virtual Environments for each project ([venv](https://docs.python.org/3/library/venv.html)) - On Notebooks, always restart your kernel after installations. Let's Install it!
###Code
!pip install awswrangler
###Output
_____no_output_____
###Markdown
> Restart your kernel after the installation!
###Code
import awswrangler as wr
wr.__version__
###Output
_____no_output_____
###Markdown
[](https://github.com/awslabs/aws-data-wrangler) 1 - Introduction What is AWS Data Wrangler?An [open-source](https://github.com/awslabs/aws-data-wrangler>) Python package that extends the power of [Pandas](https://github.com/pandas-dev/pandas>) library to AWS connecting **DataFrames** and AWS data related services (**Amazon Redshift**, **AWS Glue**, **Amazon Athena**, **Amazon Timestream**, **Amazon EMR**, etc).Built on top of other open-source projects like [Pandas](https://github.com/pandas-dev/pandas), [Apache Arrow](https://github.com/apache/arrow) and [Boto3](https://github.com/boto/boto3), it offers abstracted functions to execute usual ETL tasks like load/unload data from **Data Lakes**, **Data Warehouses** and **Databases**.Check our [list of functionalities](https://aws-data-wrangler.readthedocs.io/en/2.7.0/api.html). How to install?The Wrangler runs almost anywhere over Python 3.6, 3.7, 3.8 and 3.9, so there are several different ways to install it in the desired enviroment. - [PyPi (pip)](https://aws-data-wrangler.readthedocs.io/en/2.7.0/install.htmlpypi-pip) - [Conda](https://aws-data-wrangler.readthedocs.io/en/2.7.0/install.htmlconda) - [AWS Lambda Layer](https://aws-data-wrangler.readthedocs.io/en/2.7.0/install.htmlaws-lambda-layer) - [AWS Glue Python Shell Jobs](https://aws-data-wrangler.readthedocs.io/en/2.7.0/install.htmlaws-glue-python-shell-jobs) - [AWS Glue PySpark Jobs](https://aws-data-wrangler.readthedocs.io/en/2.7.0/install.htmlaws-glue-pyspark-jobs) - [Amazon SageMaker Notebook](https://aws-data-wrangler.readthedocs.io/en/2.7.0/install.htmlamazon-sagemaker-notebook) - [Amazon SageMaker Notebook Lifecycle](https://aws-data-wrangler.readthedocs.io/en/2.7.0/install.htmlamazon-sagemaker-notebook-lifecycle) - [EMR Cluster](https://aws-data-wrangler.readthedocs.io/en/2.7.0/install.htmlemr-cluster) - [From source](https://aws-data-wrangler.readthedocs.io/en/2.7.0/install.htmlfrom-source)Some good practices for most of the above methods are: - Use 
new and individual Virtual Environments for each project ([venv](https://docs.python.org/3/library/venv.html)) - On Notebooks, always restart your kernel after installations. Let's Install it!
###Code
!pip install awswrangler
###Output
_____no_output_____
###Markdown
> Restart your kernel after the installation!
###Code
import awswrangler as wr
wr.__version__
###Output
_____no_output_____
###Markdown
[AWS Data Wrangler](https://github.com/awslabs/aws-data-wrangler)

# 1 - Introduction

## What is AWS Data Wrangler?

An [open-source](https://github.com/awslabs/aws-data-wrangler) Python package that extends the power of the [Pandas](https://github.com/pandas-dev/pandas) library to AWS, connecting **DataFrames** and AWS data related services (**Amazon Redshift**, **AWS Glue**, **Amazon Athena**, **Amazon Timestream**, **Amazon EMR**, etc.).

Built on top of other open-source projects like [Pandas](https://github.com/pandas-dev/pandas), [Apache Arrow](https://github.com/apache/arrow) and [Boto3](https://github.com/boto/boto3), it offers abstracted functions to execute usual ETL tasks like loading/unloading data from **Data Lakes**, **Data Warehouses** and **Databases**.

Check our [list of functionalities](https://aws-data-wrangler.readthedocs.io/en/2.6.0/api.html).

## How to install?

The Wrangler runs almost anywhere over Python 3.6, 3.7, 3.8 and 3.9, so there are several different ways to install it in the desired environment.

- [PyPi (pip)](https://aws-data-wrangler.readthedocs.io/en/2.6.0/install.html#pypi-pip)
- [Conda](https://aws-data-wrangler.readthedocs.io/en/2.6.0/install.html#conda)
- [AWS Lambda Layer](https://aws-data-wrangler.readthedocs.io/en/2.6.0/install.html#aws-lambda-layer)
- [AWS Glue Python Shell Jobs](https://aws-data-wrangler.readthedocs.io/en/2.6.0/install.html#aws-glue-python-shell-jobs)
- [AWS Glue PySpark Jobs](https://aws-data-wrangler.readthedocs.io/en/2.6.0/install.html#aws-glue-pyspark-jobs)
- [Amazon SageMaker Notebook](https://aws-data-wrangler.readthedocs.io/en/2.6.0/install.html#amazon-sagemaker-notebook)
- [Amazon SageMaker Notebook Lifecycle](https://aws-data-wrangler.readthedocs.io/en/2.6.0/install.html#amazon-sagemaker-notebook-lifecycle)
- [EMR Cluster](https://aws-data-wrangler.readthedocs.io/en/2.6.0/install.html#emr-cluster)
- [From source](https://aws-data-wrangler.readthedocs.io/en/2.6.0/install.html#from-source)

Some good practices for most of the above methods are:

- Use new and individual Virtual Environments for each project ([venv](https://docs.python.org/3/library/venv.html))
- On Notebooks, always restart your kernel after installations.

## Let's install it!
###Code
!pip install awswrangler
###Output
_____no_output_____
###Markdown
> Restart your kernel after the installation!
###Code
import awswrangler as wr
wr.__version__
###Output
_____no_output_____
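###Markdown
Once the import succeeds, the usual ETL round trip is to write a DataFrame to Amazon S3 as Parquet and read it back. The sketch below shows the shape of those calls; the bucket name `my-bucket` is a placeholder, and the `wr.s3` calls are commented out because they need AWS credentials and an existing bucket.

```python
import pandas as pd

# A small DataFrame standing in for real data.
df = pd.DataFrame({"id": [1, 2], "name": ["foo", "boo"]})

# Typical Wrangler round trip (requires AWS credentials and a real
# S3 bucket, so it is commented out here; "my-bucket" is a placeholder):
# import awswrangler as wr
# wr.s3.to_parquet(df=df, path="s3://my-bucket/dataset/", dataset=True)
# df2 = wr.s3.read_parquet(path="s3://my-bucket/dataset/", dataset=True)

print(df.shape)  # (2, 2)
```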
###Markdown
[](https://github.com/awslabs/aws-data-wrangler) 1 - Introduction What is AWS Data Wrangler?An [open-source](https://github.com/awslabs/aws-data-wrangler>) Python package that extends the power of [Pandas](https://github.com/pandas-dev/pandas>) library to AWS connecting **DataFrames** and AWS data related services (**Amazon Redshift**, **AWS Glue**, **Amazon Athena**, **Amazon Timestream**, **Amazon EMR**, etc).Built on top of other open-source projects like [Pandas](https://github.com/pandas-dev/pandas), [Apache Arrow](https://github.com/apache/arrow) and [Boto3](https://github.com/boto/boto3), it offers abstracted functions to execute usual ETL tasks like load/unload data from **Data Lakes**, **Data Warehouses** and **Databases**.Check our [list of functionalities](https://aws-data-wrangler.readthedocs.io/en/2.11.0/api.html). How to install?The Wrangler runs almost anywhere over Python 3.6, 3.7, 3.8 and 3.9, so there are several different ways to install it in the desired enviroment. - [PyPi (pip)](https://aws-data-wrangler.readthedocs.io/en/2.11.0/install.htmlpypi-pip) - [Conda](https://aws-data-wrangler.readthedocs.io/en/2.11.0/install.htmlconda) - [AWS Lambda Layer](https://aws-data-wrangler.readthedocs.io/en/2.11.0/install.htmlaws-lambda-layer) - [AWS Glue Python Shell Jobs](https://aws-data-wrangler.readthedocs.io/en/2.11.0/install.htmlaws-glue-python-shell-jobs) - [AWS Glue PySpark Jobs](https://aws-data-wrangler.readthedocs.io/en/2.11.0/install.htmlaws-glue-pyspark-jobs) - [Amazon SageMaker Notebook](https://aws-data-wrangler.readthedocs.io/en/2.11.0/install.htmlamazon-sagemaker-notebook) - [Amazon SageMaker Notebook Lifecycle](https://aws-data-wrangler.readthedocs.io/en/2.11.0/install.htmlamazon-sagemaker-notebook-lifecycle) - [EMR Cluster](https://aws-data-wrangler.readthedocs.io/en/2.11.0/install.htmlemr-cluster) - [From source](https://aws-data-wrangler.readthedocs.io/en/2.11.0/install.htmlfrom-source)Some good practices for most of the above methods 
are: - Use new and individual Virtual Environments for each project ([venv](https://docs.python.org/3/library/venv.html)) - On Notebooks, always restart your kernel after installations. Let's Install it!
###Code
!pip install awswrangler
###Output
_____no_output_____
###Markdown
> Restart your kernel after the installation!
###Code
import awswrangler as wr
wr.__version__
###Output
_____no_output_____
###Markdown
[](https://github.com/awslabs/aws-data-wrangler) 1 - Introduction What is AWS Data Wrangler?An [open-source](https://github.com/awslabs/aws-data-wrangler>) Python package that extends the power of [Pandas](https://github.com/pandas-dev/pandas>) library to AWS connecting **DataFrames** and AWS data related services (**Amazon Redshift**, **AWS Glue**, **Amazon Athena**, **Amazon Timestream**, **Amazon EMR**, etc).Built on top of other open-source projects like [Pandas](https://github.com/pandas-dev/pandas), [Apache Arrow](https://github.com/apache/arrow) and [Boto3](https://github.com/boto/boto3), it offers abstracted functions to execute usual ETL tasks like load/unload data from **Data Lakes**, **Data Warehouses** and **Databases**.Check our [list of functionalities](https://aws-data-wrangler.readthedocs.io/en/stable/api.html). How to install?The Wrangler runs almost anywhere over Python 3.6, 3.7 and 3.8, so there are several different ways to install it in the desired enviroment. - [PyPi (pip)](https://aws-data-wrangler.readthedocs.io/en/stable/install.htmlpypi-pip) - [Conda](https://aws-data-wrangler.readthedocs.io/en/stable/install.htmlconda) - [AWS Lambda Layer](https://aws-data-wrangler.readthedocs.io/en/stable/install.htmlaws-lambda-layer) - [AWS Glue Python Shell Jobs](https://aws-data-wrangler.readthedocs.io/en/stable/install.htmlaws-glue-python-shell-jobs) - [AWS Glue PySpark Jobs](https://aws-data-wrangler.readthedocs.io/en/stable/install.htmlaws-glue-pyspark-jobs) - [Amazon SageMaker Notebook](https://aws-data-wrangler.readthedocs.io/en/stable/install.htmlamazon-sagemaker-notebook) - [Amazon SageMaker Notebook Lifecycle](https://aws-data-wrangler.readthedocs.io/en/stable/install.htmlamazon-sagemaker-notebook-lifecycle) - [EMR Cluster](https://aws-data-wrangler.readthedocs.io/en/stable/install.htmlemr-cluster) - [From source](https://aws-data-wrangler.readthedocs.io/en/stable/install.htmlfrom-source)Some good practices for most of the above methods are: - 
Use new and individual Virtual Environments for each project ([venv](https://docs.python.org/3/library/venv.html)) - On Notebooks, always restart your kernel after installations. Let's Install it!
###Code
!pip install awswrangler
###Output
_____no_output_____
###Markdown
> Restart your kernel after the installation!
###Code
import awswrangler as wr
wr.__version__
###Output
_____no_output_____
###Markdown
[](https://github.com/awslabs/aws-data-wrangler) 1 - Introduction What is AWS Data Wrangler?An [open-source](https://github.com/awslabs/aws-data-wrangler>) Python package that extends the power of [Pandas](https://github.com/pandas-dev/pandas>) library to AWS connecting **DataFrames** and AWS data related services (**Amazon Redshift**, **AWS Glue**, **Amazon Athena**, **Amazon EMR**, etc).Built on top of other open-source projects like [Pandas](https://github.com/pandas-dev/pandas), [Apache Arrow](https://github.com/apache/arrow), [Boto3](https://github.com/boto/boto3), [s3fs](https://github.com/dask/s3fs), [SQLAlchemy](https://github.com/sqlalchemy/sqlalchemy), [Psycopg2](https://github.com/psycopg/psycopg2) and [PyMySQL](https://github.com/PyMySQL/PyMySQL), it offers abstracted functions to execute usual ETL tasks like load/unload data from **Data Lakes**, **Data Warehouses** and **Databases**.Check our [list of functionalities](https://aws-data-wrangler.readthedocs.io/en/latest/api.html). How to install?The Wrangler runs almost anywhere over Python 3.6, 3.7 and 3.8, so there are several different ways to install it in the desired enviroment. 
- [PyPi (pip)](https://aws-data-wrangler.readthedocs.io/en/latest/install.htmlpypi-pip) - [Conda](https://aws-data-wrangler.readthedocs.io/en/latest/install.htmlconda) - [AWS Lambda Layer](https://aws-data-wrangler.readthedocs.io/en/latest/install.htmlaws-lambda-layer) - [AWS Glue Wheel](https://aws-data-wrangler.readthedocs.io/en/latest/install.htmlaws-glue-wheel) - [Amazon SageMaker Notebook](https://aws-data-wrangler.readthedocs.io/en/latest/install.htmlamazon-sagemaker-notebook) - [Amazon SageMaker Notebook Lifecycle](https://aws-data-wrangler.readthedocs.io/en/latest/install.htmlamazon-sagemaker-notebook-lifecycle) - [EMR Cluster](https://aws-data-wrangler.readthedocs.io/en/latest/install.htmlemr-cluster) - [From source](https://aws-data-wrangler.readthedocs.io/en/latest/install.htmlfrom-source)Some good practices for most of the above methods are: - Use new and individual Virtual Environments for each project ([venv](https://docs.python.org/3/library/venv.html)) - On Notebooks, always restart your kernel after installations. Let's Install it!
###Code
!pip install awswrangler
###Output
_____no_output_____
###Markdown
> Restart your kernel after the installation!
###Code
import awswrangler as wr
wr.__version__
###Output
_____no_output_____
###Markdown
[](https://github.com/awslabs/aws-data-wrangler) 1 - Introduction What is AWS Data Wrangler?An [open-source](https://github.com/awslabs/aws-data-wrangler>) Python package that extends the power of [Pandas](https://github.com/pandas-dev/pandas>) library to AWS connecting **DataFrames** and AWS data related services (**Amazon Redshift**, **AWS Glue**, **Amazon Athena**, **Amazon EMR**, etc).Built on top of other open-source projects like [Pandas](https://github.com/pandas-dev/pandas), [Apache Arrow](https://github.com/apache/arrow), [Boto3](https://github.com/boto/boto3), [SQLAlchemy](https://github.com/sqlalchemy/sqlalchemy), [Psycopg2](https://github.com/psycopg/psycopg2) and [PyMySQL](https://github.com/PyMySQL/PyMySQL), it offers abstracted functions to execute usual ETL tasks like load/unload data from **Data Lakes**, **Data Warehouses** and **Databases**.Check our [list of functionalities](https://aws-data-wrangler.readthedocs.io/en/latest/api.html). How to install?The Wrangler runs almost anywhere over Python 3.6, 3.7 and 3.8, so there are several different ways to install it in the desired enviroment. 
- [PyPi (pip)](https://aws-data-wrangler.readthedocs.io/en/latest/install.htmlpypi-pip) - [Conda](https://aws-data-wrangler.readthedocs.io/en/latest/install.htmlconda) - [AWS Lambda Layer](https://aws-data-wrangler.readthedocs.io/en/latest/install.htmlaws-lambda-layer) - [AWS Glue Python Shell Jobs](https://aws-data-wrangler.readthedocs.io/en/latest/install.htmlaws-glue-python-shell-jobs) - [AWS Glue PySpark Jobs](https://aws-data-wrangler.readthedocs.io/en/latest/install.htmlaws-glue-pyspark-jobs) - [Amazon SageMaker Notebook](https://aws-data-wrangler.readthedocs.io/en/latest/install.htmlamazon-sagemaker-notebook) - [Amazon SageMaker Notebook Lifecycle](https://aws-data-wrangler.readthedocs.io/en/latest/install.htmlamazon-sagemaker-notebook-lifecycle) - [EMR Cluster](https://aws-data-wrangler.readthedocs.io/en/latest/install.htmlemr-cluster) - [From source](https://aws-data-wrangler.readthedocs.io/en/latest/install.htmlfrom-source)Some good practices for most of the above methods are: - Use new and individual Virtual Environments for each project ([venv](https://docs.python.org/3/library/venv.html)) - On Notebooks, always restart your kernel after installations. Let's Install it!
###Code
!pip install awswrangler
###Output
_____no_output_____
###Markdown
> Restart your kernel after the installation!
###Code
import awswrangler as wr
wr.__version__
###Output
_____no_output_____
###Markdown
[](https://github.com/awslabs/aws-data-wrangler) 1 - Introduction What is AWS Data Wrangler?An [open-source](https://github.com/awslabs/aws-data-wrangler>) Python package that extends the power of [Pandas](https://github.com/pandas-dev/pandas>) library to AWS connecting **DataFrames** and AWS data related services (**Amazon Redshift**, **AWS Glue**, **Amazon Athena**, **Amazon Timestream**, **Amazon EMR**, etc).Built on top of other open-source projects like [Pandas](https://github.com/pandas-dev/pandas), [Apache Arrow](https://github.com/apache/arrow) and [Boto3](https://github.com/boto/boto3), it offers abstracted functions to execute usual ETL tasks like load/unload data from **Data Lakes**, **Data Warehouses** and **Databases**.Check our [list of functionalities](https://aws-data-wrangler.readthedocs.io/en/2.12.1/api.html). How to install?The Wrangler runs almost anywhere over Python 3.6, 3.7, 3.8 and 3.9, so there are several different ways to install it in the desired enviroment. - [PyPi (pip)](https://aws-data-wrangler.readthedocs.io/en/2.12.1/install.htmlpypi-pip) - [Conda](https://aws-data-wrangler.readthedocs.io/en/2.12.1/install.htmlconda) - [AWS Lambda Layer](https://aws-data-wrangler.readthedocs.io/en/2.12.1/install.htmlaws-lambda-layer) - [AWS Glue Python Shell Jobs](https://aws-data-wrangler.readthedocs.io/en/2.12.1/install.htmlaws-glue-python-shell-jobs) - [AWS Glue PySpark Jobs](https://aws-data-wrangler.readthedocs.io/en/2.12.1/install.htmlaws-glue-pyspark-jobs) - [Amazon SageMaker Notebook](https://aws-data-wrangler.readthedocs.io/en/2.12.1/install.htmlamazon-sagemaker-notebook) - [Amazon SageMaker Notebook Lifecycle](https://aws-data-wrangler.readthedocs.io/en/2.12.1/install.htmlamazon-sagemaker-notebook-lifecycle) - [EMR Cluster](https://aws-data-wrangler.readthedocs.io/en/2.12.1/install.htmlemr-cluster) - [From source](https://aws-data-wrangler.readthedocs.io/en/2.12.1/install.htmlfrom-source)Some good practices for most of the above methods 
are: - Use new and individual Virtual Environments for each project ([venv](https://docs.python.org/3/library/venv.html)) - On Notebooks, always restart your kernel after installations. Let's Install it!
###Code
!pip install awswrangler
###Output
_____no_output_____
###Markdown
> Restart your kernel after the installation!
###Code
import awswrangler as wr
wr.__version__
###Output
_____no_output_____
###Markdown
[](https://github.com/awslabs/aws-data-wrangler) 1 - Introduction What is AWS Data Wrangler?An [open-source](https://github.com/awslabs/aws-data-wrangler>) Python package that extends the power of [Pandas](https://github.com/pandas-dev/pandas>) library to AWS connecting **DataFrames** and AWS data related services (**Amazon Redshift**, **AWS Glue**, **Amazon Athena**, **Amazon Timestream**, **Amazon EMR**, etc).Built on top of other open-source projects like [Pandas](https://github.com/pandas-dev/pandas), [Apache Arrow](https://github.com/apache/arrow) and [Boto3](https://github.com/boto/boto3), it offers abstracted functions to execute usual ETL tasks like load/unload data from **Data Lakes**, **Data Warehouses** and **Databases**.Check our [list of functionalities](https://aws-data-wrangler.readthedocs.io/en/2.12.0/api.html). How to install?The Wrangler runs almost anywhere over Python 3.6, 3.7, 3.8 and 3.9, so there are several different ways to install it in the desired enviroment. - [PyPi (pip)](https://aws-data-wrangler.readthedocs.io/en/2.12.0/install.htmlpypi-pip) - [Conda](https://aws-data-wrangler.readthedocs.io/en/2.12.0/install.htmlconda) - [AWS Lambda Layer](https://aws-data-wrangler.readthedocs.io/en/2.12.0/install.htmlaws-lambda-layer) - [AWS Glue Python Shell Jobs](https://aws-data-wrangler.readthedocs.io/en/2.12.0/install.htmlaws-glue-python-shell-jobs) - [AWS Glue PySpark Jobs](https://aws-data-wrangler.readthedocs.io/en/2.12.0/install.htmlaws-glue-pyspark-jobs) - [Amazon SageMaker Notebook](https://aws-data-wrangler.readthedocs.io/en/2.12.0/install.htmlamazon-sagemaker-notebook) - [Amazon SageMaker Notebook Lifecycle](https://aws-data-wrangler.readthedocs.io/en/2.12.0/install.htmlamazon-sagemaker-notebook-lifecycle) - [EMR Cluster](https://aws-data-wrangler.readthedocs.io/en/2.12.0/install.htmlemr-cluster) - [From source](https://aws-data-wrangler.readthedocs.io/en/2.12.0/install.htmlfrom-source)Some good practices for most of the above methods 
are: - Use new and individual Virtual Environments for each project ([venv](https://docs.python.org/3/library/venv.html)) - On Notebooks, always restart your kernel after installations. Let's Install it!
###Code
!pip install awswrangler
###Output
_____no_output_____
###Markdown
> Restart your kernel after the installation!
###Code
import awswrangler as wr
wr.__version__
###Output
_____no_output_____
###Markdown
[](https://github.com/awslabs/aws-data-wrangler) 1 - Introduction What is AWS Data Wrangler?An [open-source](https://github.com/awslabs/aws-data-wrangler>) Python package that extends the power of [Pandas](https://github.com/pandas-dev/pandas>) library to AWS connecting **DataFrames** and AWS data related services (**Amazon Redshift**, **AWS Glue**, **Amazon Athena**, **Amazon Timestream**, **Amazon EMR**, etc).Built on top of other open-source projects like [Pandas](https://github.com/pandas-dev/pandas), [Apache Arrow](https://github.com/apache/arrow) and [Boto3](https://github.com/boto/boto3), it offers abstracted functions to execute usual ETL tasks like load/unload data from **Data Lakes**, **Data Warehouses** and **Databases**.Check our [list of functionalities](https://aws-data-wrangler.readthedocs.io/en/2.8.0/api.html). How to install?The Wrangler runs almost anywhere over Python 3.6, 3.7, 3.8 and 3.9, so there are several different ways to install it in the desired enviroment. - [PyPi (pip)](https://aws-data-wrangler.readthedocs.io/en/2.8.0/install.htmlpypi-pip) - [Conda](https://aws-data-wrangler.readthedocs.io/en/2.8.0/install.htmlconda) - [AWS Lambda Layer](https://aws-data-wrangler.readthedocs.io/en/2.8.0/install.htmlaws-lambda-layer) - [AWS Glue Python Shell Jobs](https://aws-data-wrangler.readthedocs.io/en/2.8.0/install.htmlaws-glue-python-shell-jobs) - [AWS Glue PySpark Jobs](https://aws-data-wrangler.readthedocs.io/en/2.8.0/install.htmlaws-glue-pyspark-jobs) - [Amazon SageMaker Notebook](https://aws-data-wrangler.readthedocs.io/en/2.8.0/install.htmlamazon-sagemaker-notebook) - [Amazon SageMaker Notebook Lifecycle](https://aws-data-wrangler.readthedocs.io/en/2.8.0/install.htmlamazon-sagemaker-notebook-lifecycle) - [EMR Cluster](https://aws-data-wrangler.readthedocs.io/en/2.8.0/install.htmlemr-cluster) - [From source](https://aws-data-wrangler.readthedocs.io/en/2.8.0/install.htmlfrom-source)Some good practices for most of the above methods are: - Use 
new and individual Virtual Environments for each project ([venv](https://docs.python.org/3/library/venv.html)) - On Notebooks, always restart your kernel after installations. Let's Install it!
###Code
!pip install awswrangler
###Output
_____no_output_____
###Markdown
> Restart your kernel after the installation!
###Code
import awswrangler as wr
wr.__version__
###Output
_____no_output_____
###Markdown
[](https://github.com/awslabs/aws-data-wrangler) 1 - Introduction What is AWS Data Wrangler?An [open-source](https://github.com/awslabs/aws-data-wrangler>) Python package that extends the power of [Pandas](https://github.com/pandas-dev/pandas>) library to AWS connecting **DataFrames** and AWS data related services (**Amazon Redshift**, **AWS Glue**, **Amazon Athena**, **Amazon Timestream**, **Amazon EMR**, etc).Built on top of other open-source projects like [Pandas](https://github.com/pandas-dev/pandas), [Apache Arrow](https://github.com/apache/arrow) and [Boto3](https://github.com/boto/boto3), it offers abstracted functions to execute usual ETL tasks like load/unload data from **Data Lakes**, **Data Warehouses** and **Databases**.Check our [list of functionalities](https://aws-data-wrangler.readthedocs.io/en/2.9.0/api.html). How to install?The Wrangler runs almost anywhere over Python 3.6, 3.7, 3.8 and 3.9, so there are several different ways to install it in the desired enviroment. - [PyPi (pip)](https://aws-data-wrangler.readthedocs.io/en/2.9.0/install.htmlpypi-pip) - [Conda](https://aws-data-wrangler.readthedocs.io/en/2.9.0/install.htmlconda) - [AWS Lambda Layer](https://aws-data-wrangler.readthedocs.io/en/2.9.0/install.htmlaws-lambda-layer) - [AWS Glue Python Shell Jobs](https://aws-data-wrangler.readthedocs.io/en/2.9.0/install.htmlaws-glue-python-shell-jobs) - [AWS Glue PySpark Jobs](https://aws-data-wrangler.readthedocs.io/en/2.9.0/install.htmlaws-glue-pyspark-jobs) - [Amazon SageMaker Notebook](https://aws-data-wrangler.readthedocs.io/en/2.9.0/install.htmlamazon-sagemaker-notebook) - [Amazon SageMaker Notebook Lifecycle](https://aws-data-wrangler.readthedocs.io/en/2.9.0/install.htmlamazon-sagemaker-notebook-lifecycle) - [EMR Cluster](https://aws-data-wrangler.readthedocs.io/en/2.9.0/install.htmlemr-cluster) - [From source](https://aws-data-wrangler.readthedocs.io/en/2.9.0/install.htmlfrom-source)Some good practices for most of the above methods are: - Use 
new and individual Virtual Environments for each project ([venv](https://docs.python.org/3/library/venv.html)) - On Notebooks, always restart your kernel after installations. Let's Install it!
###Code
!pip install awswrangler
###Output
_____no_output_____
###Markdown
> Restart your kernel after the installation!
###Code
import awswrangler as wr
wr.__version__
###Output
_____no_output_____
###Markdown
[](https://github.com/awslabs/aws-data-wrangler) 1 - Introduction What is AWS Data Wrangler?An [open-source](https://github.com/awslabs/aws-data-wrangler>) Python package that extends the power of [Pandas](https://github.com/pandas-dev/pandas>) library to AWS connecting **DataFrames** and AWS data related services (**Amazon Redshift**, **AWS Glue**, **Amazon Athena**, **Amazon Timestream**, **Amazon EMR**, etc).Built on top of other open-source projects like [Pandas](https://github.com/pandas-dev/pandas), [Apache Arrow](https://github.com/apache/arrow) and [Boto3](https://github.com/boto/boto3), it offers abstracted functions to execute usual ETL tasks like load/unload data from **Data Lakes**, **Data Warehouses** and **Databases**.Check our [list of functionalities](https://aws-data-wrangler.readthedocs.io/en/2.14.0/api.html). How to install?The Wrangler runs almost anywhere over Python 3.6, 3.7, 3.8 and 3.9, so there are several different ways to install it in the desired enviroment. - [PyPi (pip)](https://aws-data-wrangler.readthedocs.io/en/2.14.0/install.htmlpypi-pip) - [Conda](https://aws-data-wrangler.readthedocs.io/en/2.14.0/install.htmlconda) - [AWS Lambda Layer](https://aws-data-wrangler.readthedocs.io/en/2.14.0/install.htmlaws-lambda-layer) - [AWS Glue Python Shell Jobs](https://aws-data-wrangler.readthedocs.io/en/2.14.0/install.htmlaws-glue-python-shell-jobs) - [AWS Glue PySpark Jobs](https://aws-data-wrangler.readthedocs.io/en/2.14.0/install.htmlaws-glue-pyspark-jobs) - [Amazon SageMaker Notebook](https://aws-data-wrangler.readthedocs.io/en/2.14.0/install.htmlamazon-sagemaker-notebook) - [Amazon SageMaker Notebook Lifecycle](https://aws-data-wrangler.readthedocs.io/en/2.14.0/install.htmlamazon-sagemaker-notebook-lifecycle) - [EMR Cluster](https://aws-data-wrangler.readthedocs.io/en/2.14.0/install.htmlemr-cluster) - [From source](https://aws-data-wrangler.readthedocs.io/en/2.14.0/install.htmlfrom-source)Some good practices for most of the above methods 
are: - Use new and individual Virtual Environments for each project ([venv](https://docs.python.org/3/library/venv.html)) - On Notebooks, always restart your kernel after installations. Let's Install it!
###Code
!pip install awswrangler
###Output
_____no_output_____
###Markdown
> Restart your kernel after the installation!
###Code
import awswrangler as wr
wr.__version__
###Output
_____no_output_____
###Markdown
[](https://github.com/awslabs/aws-data-wrangler) 1 - Introduction What is AWS Data Wrangler?An [open-source](https://github.com/awslabs/aws-data-wrangler>) Python package that extends the power of [Pandas](https://github.com/pandas-dev/pandas>) library to AWS connecting **DataFrames** and AWS data related services (**Amazon Redshift**, **AWS Glue**, **Amazon Athena**, **Amazon Timestream**, **Amazon EMR**, etc).Built on top of other open-source projects like [Pandas](https://github.com/pandas-dev/pandas), [Apache Arrow](https://github.com/apache/arrow) and [Boto3](https://github.com/boto/boto3), it offers abstracted functions to execute usual ETL tasks like load/unload data from **Data Lakes**, **Data Warehouses** and **Databases**.Check our [list of functionalities](https://aws-data-wrangler.readthedocs.io/en/2.4.0-docs/api.html). How to install?The Wrangler runs almost anywhere over Python 3.6, 3.7 and 3.8, so there are several different ways to install it in the desired enviroment. - [PyPi (pip)](https://aws-data-wrangler.readthedocs.io/en/2.4.0-docs/install.htmlpypi-pip) - [Conda](https://aws-data-wrangler.readthedocs.io/en/2.4.0-docs/install.htmlconda) - [AWS Lambda Layer](https://aws-data-wrangler.readthedocs.io/en/2.4.0-docs/install.htmlaws-lambda-layer) - [AWS Glue Python Shell Jobs](https://aws-data-wrangler.readthedocs.io/en/2.4.0-docs/install.htmlaws-glue-python-shell-jobs) - [AWS Glue PySpark Jobs](https://aws-data-wrangler.readthedocs.io/en/2.4.0-docs/install.htmlaws-glue-pyspark-jobs) - [Amazon SageMaker Notebook](https://aws-data-wrangler.readthedocs.io/en/2.4.0-docs/install.htmlamazon-sagemaker-notebook) - [Amazon SageMaker Notebook Lifecycle](https://aws-data-wrangler.readthedocs.io/en/2.4.0-docs/install.htmlamazon-sagemaker-notebook-lifecycle) - [EMR Cluster](https://aws-data-wrangler.readthedocs.io/en/2.4.0-docs/install.htmlemr-cluster) - [From source](https://aws-data-wrangler.readthedocs.io/en/2.4.0-docs/install.htmlfrom-source)Some good 
practices for most of the above methods are: - Use new and individual Virtual Environments for each project ([venv](https://docs.python.org/3/library/venv.html)) - On Notebooks, always restart your kernel after installations. Let's Install it!
###Code
!pip install awswrangler
###Output
_____no_output_____
###Markdown
> Restart your kernel after the installation!
###Code
import awswrangler as wr
wr.__version__
###Output
_____no_output_____
###Markdown
[](https://github.com/awslabs/aws-data-wrangler) 1 - Introduction What is AWS Data Wrangler?An [open-source](https://github.com/awslabs/aws-data-wrangler>) Python package that extends the power of [Pandas](https://github.com/pandas-dev/pandas>) library to AWS connecting **DataFrames** and AWS data related services (**Amazon Redshift**, **AWS Glue**, **Amazon Athena**, **Amazon Timestream**, **Amazon EMR**, etc).Built on top of other open-source projects like [Pandas](https://github.com/pandas-dev/pandas), [Apache Arrow](https://github.com/apache/arrow) and [Boto3](https://github.com/boto/boto3), it offers abstracted functions to execute usual ETL tasks like load/unload data from **Data Lakes**, **Data Warehouses** and **Databases**.Check our [list of functionalities](https://aws-data-wrangler.readthedocs.io/en/2.15.1/api.html). How to install?Wrangler runs almost anywhere over Python 3.7, 3.8, 3.9 and 3.10, so there are several different ways to install it in the desired environment. - [PyPi (pip)](https://aws-data-wrangler.readthedocs.io/en/2.15.1/install.htmlpypi-pip) - [Conda](https://aws-data-wrangler.readthedocs.io/en/2.15.1/install.htmlconda) - [AWS Lambda Layer](https://aws-data-wrangler.readthedocs.io/en/2.15.1/install.htmlaws-lambda-layer) - [AWS Glue Python Shell Jobs](https://aws-data-wrangler.readthedocs.io/en/2.15.1/install.htmlaws-glue-python-shell-jobs) - [AWS Glue PySpark Jobs](https://aws-data-wrangler.readthedocs.io/en/2.15.1/install.htmlaws-glue-pyspark-jobs) - [Amazon SageMaker Notebook](https://aws-data-wrangler.readthedocs.io/en/2.15.1/install.htmlamazon-sagemaker-notebook) - [Amazon SageMaker Notebook Lifecycle](https://aws-data-wrangler.readthedocs.io/en/2.15.1/install.htmlamazon-sagemaker-notebook-lifecycle) - [EMR Cluster](https://aws-data-wrangler.readthedocs.io/en/2.15.1/install.htmlemr-cluster) - [From source](https://aws-data-wrangler.readthedocs.io/en/2.15.1/install.htmlfrom-source)Some good practices for most of the above methods 
are: - Use new and individual Virtual Environments for each project ([venv](https://docs.python.org/3/library/venv.html)) - On Notebooks, always restart your kernel after installations. Let's Install it!
###Code
!pip install awswrangler
###Output
_____no_output_____
###Markdown
> Restart your kernel after the installation!
###Code
import awswrangler as wr
wr.__version__
###Output
_____no_output_____
###Markdown
[](https://github.com/awslabs/aws-data-wrangler) 1 - Introduction What is AWS Data Wrangler?An [open-source](https://github.com/awslabs/aws-data-wrangler>) Python package that extends the power of [Pandas](https://github.com/pandas-dev/pandas>) library to AWS connecting **DataFrames** and AWS data related services (**Amazon Redshift**, **AWS Glue**, **Amazon Athena**, **Amazon EMR**, etc).Built on top of other open-source projects like [Pandas](https://github.com/pandas-dev/pandas), [Apache Arrow](https://github.com/apache/arrow), [Boto3](https://github.com/boto/boto3), [SQLAlchemy](https://github.com/sqlalchemy/sqlalchemy), [Psycopg2](https://github.com/psycopg/psycopg2) and [PyMySQL](https://github.com/PyMySQL/PyMySQL), it offers abstracted functions to execute usual ETL tasks like load/unload data from **Data Lakes**, **Data Warehouses** and **Databases**.Check our [list of functionalities](https://aws-data-wrangler.readthedocs.io/en/latest/api.html). How to install?The Wrangler runs almost anywhere over Python 3.6, 3.7 and 3.8, so there are several different ways to install it in the desired enviroment. 
- [PyPi (pip)](https://aws-data-wrangler.readthedocs.io/en/latest/install.htmlpypi-pip) - [Conda](https://aws-data-wrangler.readthedocs.io/en/latest/install.htmlconda) - [AWS Lambda Layer](https://aws-data-wrangler.readthedocs.io/en/latest/install.htmlaws-lambda-layer) - [AWS Glue Python Shell Jobs](https://aws-data-wrangler.readthedocs.io/en/latest/install.htmlaws-glue-python-shell-jobs) - [AWS Glue PySpark Jobs](https://aws-data-wrangler.readthedocs.io/en/latest/install.htmlaws-glue-pyspark-jobs) - [Amazon SageMaker Notebook](https://aws-data-wrangler.readthedocs.io/en/latest/install.htmlamazon-sagemaker-notebook) - [Amazon SageMaker Notebook Lifecycle](https://aws-data-wrangler.readthedocs.io/en/latest/install.htmlamazon-sagemaker-notebook-lifecycle) - [EMR Cluster](https://aws-data-wrangler.readthedocs.io/en/latest/install.htmlemr-cluster) - [From source](https://aws-data-wrangler.readthedocs.io/en/latest/install.htmlfrom-source)Some good practices for most of the above methods are: - Use new and individual Virtual Environments for each project ([venv](https://docs.python.org/3/library/venv.html)) - On Notebooks, always restart your kernel after installations. Let's Install it!
###Code
!pip install awswrangler
###Output
_____no_output_____
###Markdown
> Restart your kernel after the installation!
###Code
import awswrangler as wr
wr.__version__
###Output
_____no_output_____
###Markdown
[](https://github.com/awslabs/aws-data-wrangler) 1 - Introduction What is AWS Data Wrangler?An [open-source](https://github.com/awslabs/aws-data-wrangler>) Python package that extends the power of [Pandas](https://github.com/pandas-dev/pandas>) library to AWS connecting **DataFrames** and AWS data related services (**Amazon Redshift**, **AWS Glue**, **Amazon Athena**, **Amazon Timestream**, **Amazon EMR**, etc).Built on top of other open-source projects like [Pandas](https://github.com/pandas-dev/pandas), [Apache Arrow](https://github.com/apache/arrow) and [Boto3](https://github.com/boto/boto3), it offers abstracted functions to execute usual ETL tasks like load/unload data from **Data Lakes**, **Data Warehouses** and **Databases**.Check our [list of functionalities](https://aws-data-wrangler.readthedocs.io/en/2.10.0/api.html). How to install?The Wrangler runs almost anywhere over Python 3.6, 3.7, 3.8 and 3.9, so there are several different ways to install it in the desired enviroment. - [PyPi (pip)](https://aws-data-wrangler.readthedocs.io/en/2.10.0/install.htmlpypi-pip) - [Conda](https://aws-data-wrangler.readthedocs.io/en/2.10.0/install.htmlconda) - [AWS Lambda Layer](https://aws-data-wrangler.readthedocs.io/en/2.10.0/install.htmlaws-lambda-layer) - [AWS Glue Python Shell Jobs](https://aws-data-wrangler.readthedocs.io/en/2.10.0/install.htmlaws-glue-python-shell-jobs) - [AWS Glue PySpark Jobs](https://aws-data-wrangler.readthedocs.io/en/2.10.0/install.htmlaws-glue-pyspark-jobs) - [Amazon SageMaker Notebook](https://aws-data-wrangler.readthedocs.io/en/2.10.0/install.htmlamazon-sagemaker-notebook) - [Amazon SageMaker Notebook Lifecycle](https://aws-data-wrangler.readthedocs.io/en/2.10.0/install.htmlamazon-sagemaker-notebook-lifecycle) - [EMR Cluster](https://aws-data-wrangler.readthedocs.io/en/2.10.0/install.htmlemr-cluster) - [From source](https://aws-data-wrangler.readthedocs.io/en/2.10.0/install.htmlfrom-source)Some good practices for most of the above methods 
are: - Use new and individual Virtual Environments for each project ([venv](https://docs.python.org/3/library/venv.html)) - On Notebooks, always restart your kernel after installations. Let's Install it!
###Code
!pip install awswrangler
###Output
_____no_output_____
###Markdown
> Restart your kernel after the installation!
###Code
import awswrangler as wr
wr.__version__
###Output
_____no_output_____
###Markdown
[](https://github.com/awslabs/aws-data-wrangler) 1 - Introduction What is AWS Data Wrangler?An [open-source](https://github.com/awslabs/aws-data-wrangler) Python package that extends the power of the [Pandas](https://github.com/pandas-dev/pandas) library to AWS, connecting **DataFrames** and AWS data-related services (**Amazon Redshift**, **AWS Glue**, **Amazon Athena**, **Amazon Timestream**, **Amazon EMR**, etc.).Built on top of other open-source projects like [Pandas](https://github.com/pandas-dev/pandas), [Apache Arrow](https://github.com/apache/arrow) and [Boto3](https://github.com/boto/boto3), it offers abstracted functions to execute usual ETL tasks like load/unload data from **Data Lakes**, **Data Warehouses** and **Databases**.Check our [list of functionalities](https://aws-data-wrangler.readthedocs.io/en/2.13.0/api.html). How to install?The Wrangler runs almost anywhere over Python 3.6, 3.7, 3.8 and 3.9, so there are several different ways to install it in the desired environment. - [PyPi (pip)](https://aws-data-wrangler.readthedocs.io/en/2.13.0/install.html#pypi-pip) - [Conda](https://aws-data-wrangler.readthedocs.io/en/2.13.0/install.html#conda) - [AWS Lambda Layer](https://aws-data-wrangler.readthedocs.io/en/2.13.0/install.html#aws-lambda-layer) - [AWS Glue Python Shell Jobs](https://aws-data-wrangler.readthedocs.io/en/2.13.0/install.html#aws-glue-python-shell-jobs) - [AWS Glue PySpark Jobs](https://aws-data-wrangler.readthedocs.io/en/2.13.0/install.html#aws-glue-pyspark-jobs) - [Amazon SageMaker Notebook](https://aws-data-wrangler.readthedocs.io/en/2.13.0/install.html#amazon-sagemaker-notebook) - [Amazon SageMaker Notebook Lifecycle](https://aws-data-wrangler.readthedocs.io/en/2.13.0/install.html#amazon-sagemaker-notebook-lifecycle) - [EMR Cluster](https://aws-data-wrangler.readthedocs.io/en/2.13.0/install.html#emr-cluster) - [From source](https://aws-data-wrangler.readthedocs.io/en/2.13.0/install.html#from-source)Some good practices for most of the above methods 
are: - Use new and individual Virtual Environments for each project ([venv](https://docs.python.org/3/library/venv.html)) - On Notebooks, always restart your kernel after installations. Let's Install it!
###Code
!pip install awswrangler
###Output
_____no_output_____
###Markdown
> Restart your kernel after the installation!
###Code
import awswrangler as wr
wr.__version__
###Output
_____no_output_____
###Markdown
[](https://github.com/awslabs/aws-data-wrangler) 1 - Introduction What is AWS Data Wrangler?An [open-source](https://github.com/awslabs/aws-data-wrangler) Python package that extends the power of the [Pandas](https://github.com/pandas-dev/pandas) library to AWS, connecting **DataFrames** and AWS data-related services (**Amazon Redshift**, **AWS Glue**, **Amazon Athena**, **Amazon Timestream**, **Amazon EMR**, etc.).Built on top of other open-source projects like [Pandas](https://github.com/pandas-dev/pandas), [Apache Arrow](https://github.com/apache/arrow) and [Boto3](https://github.com/boto/boto3), it offers abstracted functions to execute usual ETL tasks like load/unload data from **Data Lakes**, **Data Warehouses** and **Databases**.Check our [list of functionalities](https://aws-data-wrangler.readthedocs.io/en/2.15.0/api.html). How to install?Wrangler runs almost anywhere over Python 3.7, 3.8, 3.9 and 3.10, so there are several different ways to install it in the desired environment. - [PyPi (pip)](https://aws-data-wrangler.readthedocs.io/en/2.15.0/install.html#pypi-pip) - [Conda](https://aws-data-wrangler.readthedocs.io/en/2.15.0/install.html#conda) - [AWS Lambda Layer](https://aws-data-wrangler.readthedocs.io/en/2.15.0/install.html#aws-lambda-layer) - [AWS Glue Python Shell Jobs](https://aws-data-wrangler.readthedocs.io/en/2.15.0/install.html#aws-glue-python-shell-jobs) - [AWS Glue PySpark Jobs](https://aws-data-wrangler.readthedocs.io/en/2.15.0/install.html#aws-glue-pyspark-jobs) - [Amazon SageMaker Notebook](https://aws-data-wrangler.readthedocs.io/en/2.15.0/install.html#amazon-sagemaker-notebook) - [Amazon SageMaker Notebook Lifecycle](https://aws-data-wrangler.readthedocs.io/en/2.15.0/install.html#amazon-sagemaker-notebook-lifecycle) - [EMR Cluster](https://aws-data-wrangler.readthedocs.io/en/2.15.0/install.html#emr-cluster) - [From source](https://aws-data-wrangler.readthedocs.io/en/2.15.0/install.html#from-source)Some good practices for most of the above methods 
are: - Use new and individual Virtual Environments for each project ([venv](https://docs.python.org/3/library/venv.html)) - On Notebooks, always restart your kernel after installations. Let's Install it!
###Code
!pip install awswrangler
###Output
_____no_output_____
###Markdown
> Restart your kernel after the installation!
###Code
import awswrangler as wr
wr.__version__
###Output
_____no_output_____
###Markdown
[](https://github.com/awslabs/aws-data-wrangler) 1 - Introduction What is AWS Data Wrangler?An [open-source](https://github.com/awslabs/aws-data-wrangler) Python package that extends the power of the [Pandas](https://github.com/pandas-dev/pandas) library to AWS, connecting **DataFrames** and AWS data-related services (**Amazon Redshift**, **AWS Glue**, **Amazon Athena**, **Amazon EMR**, etc.).Built on top of other open-source projects like [Pandas](https://github.com/pandas-dev/pandas), [Apache Arrow](https://github.com/apache/arrow), [Boto3](https://github.com/boto/boto3), [SQLAlchemy](https://github.com/sqlalchemy/sqlalchemy), [Psycopg2](https://github.com/psycopg/psycopg2) and [PyMySQL](https://github.com/PyMySQL/PyMySQL), it offers abstracted functions to execute usual ETL tasks like load/unload data from **Data Lakes**, **Data Warehouses** and **Databases**.Check our [list of functionalities](https://aws-data-wrangler.readthedocs.io/en/stable/api.html). How to install?The Wrangler runs almost anywhere over Python 3.6, 3.7 and 3.8, so there are several different ways to install it in the desired environment. 
- [PyPi (pip)](https://aws-data-wrangler.readthedocs.io/en/stable/install.html#pypi-pip) - [Conda](https://aws-data-wrangler.readthedocs.io/en/stable/install.html#conda) - [AWS Lambda Layer](https://aws-data-wrangler.readthedocs.io/en/stable/install.html#aws-lambda-layer) - [AWS Glue Python Shell Jobs](https://aws-data-wrangler.readthedocs.io/en/stable/install.html#aws-glue-python-shell-jobs) - [AWS Glue PySpark Jobs](https://aws-data-wrangler.readthedocs.io/en/stable/install.html#aws-glue-pyspark-jobs) - [Amazon SageMaker Notebook](https://aws-data-wrangler.readthedocs.io/en/stable/install.html#amazon-sagemaker-notebook) - [Amazon SageMaker Notebook Lifecycle](https://aws-data-wrangler.readthedocs.io/en/stable/install.html#amazon-sagemaker-notebook-lifecycle) - [EMR Cluster](https://aws-data-wrangler.readthedocs.io/en/stable/install.html#emr-cluster) - [From source](https://aws-data-wrangler.readthedocs.io/en/stable/install.html#from-source)Some good practices for most of the above methods are: - Use new and individual Virtual Environments for each project ([venv](https://docs.python.org/3/library/venv.html)) - On Notebooks, always restart your kernel after installations. Let's Install it!
###Code
!pip install awswrangler
###Output
_____no_output_____
###Markdown
> Restart your kernel after the installation!
###Code
import awswrangler as wr
wr.__version__
###Output
_____no_output_____ |
parseNameNode.ipynb | ###Markdown
Parsing Call Relationships Full-sampling info - wordcount call tree relationships
###Code
import pandas as pd  # Parse and hash_tree are assumed to be defined in earlier cells of this notebook

try:
w_tree = pd.read_pickle('pickle/wordcount_tiny_namenode.pkl')
except:
wordcount = Parse('wordcount.out')
wordcount.build_tree()
trees = [node for node in wordcount.trees]
hashtree = hash_tree(trees)
se_tree = pd.Series(hashtree)
w_tree = se_tree.value_counts().to_frame()
w_tree.to_pickle('pickle/wordcount_tiny_namenode.pkl')
n = Parse('htrace.out')
n.build_tree()
trees = [node for node in n.trees]
hashtree = hash_tree(trees)
se_tree = pd.Series(hashtree)
n_tree = se_tree.value_counts().to_frame()
c = pd.concat([w_tree, n_tree], axis=1, sort=True)
c.dropna(axis=0,how='any')
###Output
_____no_output_____
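`Parse` and `hash_tree` are defined outside this excerpt. As an assumption about what the hashing step does (a structural hash of each call tree, followed by a frequency count like `pd.Series(hashes).value_counts()`), here is a minimal stdlib sketch of the idea, not the project's actual implementation:

```python
from collections import Counter

class Node:
    """Minimal call-tree node: an operation name plus child calls."""
    def __init__(self, name, children=()):
        self.name = name
        self.children = list(children)

def tree_hash(node):
    # Hash a subtree by its name plus the sorted hashes of its children,
    # so two structurally identical call trees collapse to one value.
    return hash((node.name, tuple(sorted(tree_hash(c) for c in node.children))))

def count_tree_shapes(trees):
    # Frequency of each distinct tree shape, mirroring value_counts().
    return Counter(tree_hash(t) for t in trees)

a = Node("getFileInfo", [Node("lock"), Node("read")])
b = Node("getFileInfo", [Node("read"), Node("lock")])  # same shape, children reordered
c = Node("mkdirs")
counts = count_tree_shapes([a, b, c])
```

Sorting the child hashes makes the hash order-insensitive, so reordered but structurally identical call trees are counted together.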
###Markdown
Full-sampling info - terasort
###Code
try:
t_tree = pd.read_pickle('pickle/terasort_tiny_namenode.pkl')
except:
terasort = Parse('terasort.out')
terasort.build_tree()
trees = [node for node in terasort.trees]
hashtree = hash_tree(trees)
se_tree = pd.Series(hashtree)
t_tree = se_tree.value_counts().to_frame()
t_tree.to_pickle('pickle/terasort_tiny_namenode.pkl')
t_tree
###Output
_____no_output_____
###Markdown
Full-sampling info - sort
###Code
try:
s_tree = pd.read_pickle('pickle/sort_tiny_namenode.pkl')
except:
sort = Parse('sort_tiny.out')
sort.build_tree()
trees = [node for node in sort.trees]
hashtree = hash_tree(trees)
se_tree = pd.Series(hashtree)
s_tree = se_tree.value_counts().to_frame()
s_tree.to_pickle('pickle/sort_tiny_namenode.pkl')
s_tree
###Output
_____no_output_____
###Markdown
Kmeans
###Code
try:
k_tree = pd.read_pickle('pickle/kmeans_tiny_namenode.pkl')
except:
kmeans = Parse('kmeans.out')
kmeans.build_tree()
trees = [node for node in kmeans.trees]
hashtree = hash_tree(trees)
se_tree = pd.Series(hashtree)
k_tree = se_tree.value_counts().to_frame()
k_tree.to_pickle('pickle/kmeans_tiny_namenode.pkl')
k_tree
###Output
_____no_output_____
###Markdown
Bayes
###Code
b_tree = pd.read_pickle('pickle/bayes_tiny_namenode.pkl')
b_tree
###Output
_____no_output_____
###Markdown
PageRank
###Code
try:
p_tree = pd.read_pickle('pickle/pagerank_tiny_namenode.pkl')
except:
pagerank = Parse('pagerank.out')
pagerank.build_tree()
trees = [node for node in pagerank.trees]
hashtree = hash_tree(trees)
se_tree = pd.Series(hashtree)
p_tree = se_tree.value_counts().to_frame()
p_tree.to_pickle('pickle/pagerank_tiny_namenode.pkl')
p_tree
###Output
_____no_output_____
###Markdown
Statistics Summary
###Code
c = pd.concat([w_tree, s_tree, t_tree, k_tree, b_tree, p_tree], axis=1, sort=True)
c.columns = ['wordcount','sort', 'terasort', 'kmeans', 'bayes', 'pagerank']
c.dropna(axis=0,how='any')
c = pd.concat([w_tree, s_tree, p_tree], axis=1, sort=True)
c.columns = ['wordcount','sort', 'pagerank']
c.dropna(axis=0,how='any')
###Output
_____no_output_____ |
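The `try: read_pickle / except: compute-and-cache` pattern repeated in the cells above can be factored into one small helper. This is a stdlib sketch of the idea (the cache path and compute function below are illustrative, not code from the notebook):

```python
import pickle
import tempfile
from pathlib import Path

def cached(path, compute):
    """Load a pickled result if it exists; otherwise compute, cache, and return it."""
    p = Path(path)
    if p.exists():
        with p.open("rb") as f:
            return pickle.load(f)
    result = compute()
    p.parent.mkdir(parents=True, exist_ok=True)
    with p.open("wb") as f:
        pickle.dump(result, f)
    return result

calls = []
def build_tree_counts():
    calls.append(1)  # track how many times we actually recompute
    return {"tree_a": 42, "tree_b": 7}

cache_file = Path(tempfile.mkdtemp()) / "wordcount_tiny_namenode.pkl"
first = cached(cache_file, build_tree_counts)
second = cached(cache_file, build_tree_counts)  # served from the cache file
```

Each workload cell above could then become a single `cached(...)` call instead of a repeated try/except block.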
codebase/notebooks/03_link_occupational_data/Link_occupations_to_UK_employment.ipynb | ###Markdown
Linking ESCO occupations to UK employment statisticsHere, we use employment estimates from the EU LFS to derive rough estimates of workers employed in each 'top level' ESCO occupation. From the EU LFS data we have derived employment estimates at the level of three-digit ISCO occupational groups. We then uniformly redistribute the number of employed workers across all 'top level' ESCO occupations belonging to the respective ISCO three-digit group (top level refers to level 5 in the ESCO hierarchy which follows immediately after ISCO four-digit unit groups). 0. Import dependencies and inputs
###Code
%run ../notebook_preamble.ipy
# Import all ESCO occupations
occ = pd.read_csv(data_folder + 'processed/ESCO_occupational_hierarchy/ESCO_occupational_hierarchy.csv')
# Import EU LFS estimates of UK employment
file_path = useful_paths.project_dir + '/supplementary_online_data/demographic_analysis/national_count_isco/uk_breakdown_by_isco_w_risk.csv'
lfs_estimates = pd.read_csv(file_path)
# Which year to use
year = '2018'
# Total number of workers in employment
n_total = lfs_estimates[year].sum()
print(f'Total number of employed workers in {year}: {n_total/1e+3} million')
lfs_estimates.head(3)
###Output
Total number of employed workers in 2018: 32.151679 million
###Markdown
1. Redistribute workers from 3-digit ISCO to ESCO occupations
###Code
# Distribute equally the number of workers across all lower level occupations
occupations_employment = occ.copy()
# Note: We only do this for the top-level ESCO occupations
occupations_employment = occupations_employment[occupations_employment.is_top_level==True]
occupations_employment['employment_share'] = np.nan
occupations_employment['employment_count'] = np.nan
for j, row in lfs_estimates.iterrows():
occ_rows = occupations_employment[occupations_employment.isco_level_3==row.isco_code].id.to_list()
occupations_employment.loc[occ_rows, 'employment_share'] = (row[year]/n_total) / len(occ_rows)
occupations_employment.loc[occ_rows, 'employment_count'] = row[year] / len(occ_rows)
occupations_employment['employment_count'] = np.round(occupations_employment['employment_count'] * 1000)
# Check that we're still in the right range of total employment
print(occupations_employment.employment_count.sum())
# Sanity check
print(occupations_employment.employment_share.sum())
###Output
0.9999999999999999
###Markdown
One can compare this estimate to the Office for National Statistics data (e.g. see [here](https://www.ons.gov.uk/employmentandlabourmarket/peopleinwork/employmentandemployeetypes/timeseries/mgrz/lms)).
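The uniform redistribution used above can be illustrated in isolation. A self-contained sketch with made-up numbers (the group codes, occupation names, and counts are all illustrative):

```python
# Hypothetical ISCO three-digit employment counts (in thousands) and,
# per group, the ESCO occupations that belong to it.
isco_employment = {"251": 900.0, "711": 300.0}
esco_members = {
    "251": ["web developer", "data engineer", "systems analyst"],
    "711": ["bricklayer", "stonemason"],
}

n_total = sum(isco_employment.values())

employment_share = {}
employment_count = {}
for code, count in isco_employment.items():
    members = esco_members[code]
    for occ in members:
        # Split the group's employment equally across its ESCO occupations
        employment_share[occ] = (count / n_total) / len(members)
        employment_count[occ] = count / len(members)
```

Because every group's share is divided evenly among its members, the occupation-level shares still sum to 1, which is exactly the sanity check performed in the notebook.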
###Code
# Largest ESCO occupations
occupations_employment.sort_values('employment_count', ascending=False).head(10)
###Output
_____no_output_____
###Markdown
1.1 Check null valuesNote that some ISCO three-digit codes have been omitted from the EU LFS results. Hence, some 'top level' ESCO occupations will not have an employment estimate.
###Code
occupations_employment.info()
###Output
<class 'pandas.core.frame.DataFrame'>
Int64Index: 1701 entries, 1 to 2941
Data columns (total 16 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 id 1701 non-null int64
1 concept_type 1701 non-null object
2 concept_uri 1701 non-null object
3 preferred_label 1701 non-null object
4 isco_level_1 1701 non-null int64
5 isco_level_2 1701 non-null int64
6 isco_level_3 1701 non-null int64
7 isco_level_4 1701 non-null int64
8 is_top_level 1701 non-null bool
9 is_second_level 1701 non-null bool
10 is_third_level 1701 non-null bool
11 is_fourth_level 1701 non-null bool
12 parent_occupation_id 0 non-null float64
13 top_level_parent_id 1701 non-null int64
14 employment_share 1627 non-null float64
15 employment_count 1627 non-null float64
dtypes: bool(4), float64(3), int64(6), object(3)
memory usage: 259.4+ KB
###Markdown
2. Export
###Code
occupations_employment[[
'id', 'concept_uri', 'preferred_label', 'isco_level_3', 'isco_level_4', 'is_top_level',
'employment_share', 'employment_count']].to_csv(
data_folder + 'processed/linked_data/ESCO_top_occupations_UK_employment.csv', index=False)
###Output
_____no_output_____ |
ml-model.ipynb | ###Markdown
We have 2 datasets. - A Beşiktaş-specific dataset: BesiktasHiz.csv - A dataset with all districts mixed together: Fuseed_Data_İlçe.csv
###Code
BESIKTAS_DATA = "./data/BesiktasHiz.csv"
FUSEED_DATA = "./data/Fuseed_Data_İlçe.csv"
besiktas_df = pd.read_csv(BESIKTAS_DATA, sep=";")
besiktas_df.head()
fuseed_df = pd.read_csv(FUSEED_DATA)
fuseed_df.head()
###Output
_____no_output_____
###Markdown
Let's filter only the Beşiktaş records out of the all-districts data.
###Code
fuseed_besiktas_df = fuseed_df[fuseed_df.vSegID.isin(besiktas_df.vSegID)]
fuseed_besiktas_df.head()
###Output
_____no_output_____
###Markdown
Let's merge the data to build the main dataframe.
###Code
df = besiktas_df.append(fuseed_besiktas_df)
df.head()
###Output
_____no_output_____
###Markdown
Total Row Counts
###Code
print(f'Besiktas: {len(besiktas_df.index)} rows')
print(f'Besiktas within all districts: {len(fuseed_besiktas_df.index)} rows')
print(f'Total Besiktas: {len(df.index)} rows')
###Output
Besiktas: 938941 rows
Besiktas within all districts: 921576 rows
Total Besiktas: 1860517 rows
###Markdown
Latitude/longitude data corresponding to the segment IDs
###Code
segment_list = pd.read_excel("./data/bjk-segment.xlsx")
segment_list.head()
###Output
_____no_output_____
###Markdown
Selected road --> BEŞİKTAŞ BARBAROS BULVARI, SEGMENTID = 65 Filter by the Selected SegmentID - This road has data for both directions. For now, let's work with the outbound direction: vSegDir = 0
###Code
df = df.loc[df['vSegID'] == 65]
df = df.loc[df['vSegDir'] == 0]
df.head()
###Output
_____no_output_____
###Markdown
Road Maintenance There are 2 maintenance-work datasets. - Merge the two datasets. - Filter by Beşiktaş Barbaros Bulvarı.
###Code
yol_bakim1 = pd.read_csv("./data/bakim-veri/yol_bakım.csv")
yol_bakim2 = pd.read_csv("./data/bakim-veri/yol_bakim_2.csv")
result = pd.concat([yol_bakim1, yol_bakim2])
result = result.loc[result["ilce"] == "BEŞİKTAŞ"].reset_index(drop=True)
result.loc[result['yol_adi'].str.contains("BARBAROS")].reset_index(drop=True)
###Output
_____no_output_____
###Markdown
TKM Announcements
###Code
tkm_duyuru = pd.read_csv("../Trafik Analizi Verileri/Microsoft_azure_calisma/TKM_Duyurular.csv")
tkm_duyuru.loc[tkm_duyuru['DUYURUBASLIK'].str.contains("Beşiktaş")]
###Output
_____no_output_____
###Markdown
The road-maintenance data is not hourly, so it will not be added to the main data. Renaming Columns
###Code
df.rename(columns={'vSegID': 'SegmentID',
'vSegDir': 'SegmentDirection',
'fusedYear': 'Year',
'fusedMonth': 'Month',
'fusedday' : 'Day',
'fusedHour' : 'Hour' ,
'avgspeed': 'AverageSpeed',
'GRP' : 'Minute'
},
inplace=True)
###Output
_____no_output_____
###Markdown
Building a DateTime Column from Year, Month, Day, Hour - This column will be needed for both sorting and grouping.
###Code
df['DateTime'] = pd.to_datetime(df[['Year','Month','Day','Hour']])
df.head()
df.loc[df['DateTime'] == pd.Timestamp(2019, 3, 11, 2)]
df.loc[df['DateTime'] == pd.Timestamp(2019, 4, 23, 11)]
###Output
_____no_output_____
###Markdown
The measurements taken at minutes 15 and 45 of the same hour are grouped, and we proceed with hourly averages.
###Code
df = df.groupby(["DateTime"]).mean().reset_index()
df.head()
###Output
_____no_output_____
###Markdown
National Holidays Column
###Code
holidays = pd.read_excel("data/resmi_tatiller.xlsx")
holiday_dates = list(holidays['tarih'])
holiday_dates
df['isNationalHoliday'] = df['DateTime'].apply(lambda x : 1 if str(x.date()) in holiday_dates else 0)
###Output
_____no_output_____
###Markdown
Adding a Weekend Column Using the DateTime
###Code
df['DayName'] = df.apply(lambda tr: tr['DateTime'].day_name(), axis=1)
df['isWeekend'] = df.apply(lambda tr: 1 if tr['DayName'] in ["Saturday", "Sunday"] else 0, axis=1)
df.head()
###Output
_____no_output_____
###Markdown
School Holidays
###Code
school_hols = pd.read_csv("./data/Okul-Tatilleri.csv")
school_holiday_dates = list(school_hols['Okul Tatilleri'])
school_holiday_dates
df['isSchoolHoliday'] = df['DateTime'].apply(lambda x : 1 if str(x.date()) in school_holiday_dates else 0)
df.head(10)
###Output
_____no_output_____
###Markdown
Classifying Traffic Levels - Instead of predicting a speed between 0 and 90, the ML model should classify speeds as low, average, or high. Clustering with K-means
###Code
from sklearn.cluster import KMeans
km = KMeans(n_clusters=3)
df['label'] = km.fit_predict(df[['AverageSpeed']])
df[['label', 'AverageSpeed']]
CLASS0 = df.loc[df['label'] == 0]['AverageSpeed'].min(), df.loc[df['label'] == 0]['AverageSpeed'].max()
CLASS0
CLASS1 = df.loc[df['label'] == 1]['AverageSpeed'].min(), df.loc[df['label'] == 1]['AverageSpeed'].max()
CLASS1
CLASS2 = df.loc[df['label'] == 2]['AverageSpeed'].min(), df.loc[df['label'] == 2]['AverageSpeed'].max()
CLASS2
###Output
_____no_output_____
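Given the per-cluster (min, max) speed ranges above, mapping a raw speed back to its traffic class is a simple interval lookup. A minimal sketch of that lookup (the interval boundaries below are illustrative, not the ones produced by the fitted KMeans):

```python
# Hypothetical speed intervals per cluster label, e.g. taken from the
# (min, max) of AverageSpeed within each KMeans label.
class_ranges = {
    0: (5.0, 25.0),   # heavy traffic
    1: (25.1, 50.0),  # moderate traffic
    2: (50.1, 90.0),  # free flow
}

def speed_class(speed, ranges):
    """Return the label whose [lo, hi] interval contains `speed`."""
    for label, (lo, hi) in ranges.items():
        if lo <= speed <= hi:
            return label
    raise ValueError(f"speed {speed} is outside all known ranges")
```

This is the inverse step the trained classifier's output eventually needs: turning a predicted label back into a human-readable speed band.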
###Markdown
Rain Data
###Code
rain_df = pd.read_csv("./ETL/saatlik_ortalamalar.csv", sep=";")
# example rows with rain values greater than 0
rain_df.loc[rain_df.ort_yagis > 15].reset_index(drop=True)
# the date column name contains an invisible character (a BOM); let's clean it up.
print(rain_df.columns[0])
rain_df.rename(columns={'\ufefftarih': 'tarih'}, inplace=True, errors="raise")
print(rain_df.columns[0])
###Output
tarih
tarih
###Markdown
Let's Cluster the Rain Data
###Code
rain_df['isRainy'] = rain_df['ort_yagis'].apply(lambda x : 1 if x > 10 else 0)
rain_df.loc[rain_df['ort_yagis'] > 10]
###Output
_____no_output_____
###Markdown
We assigned 1 if the rain average is greater than 10, otherwise 0. Adding the Rain DataOn the **DF** side we keep time as a pd.datetime type. On the **rain** side, time is a string. To merge the data we will use a fixed common column. For that we will use - the date part of the DateTime column and the value of the hour column.
###Code
df["period"] = df['DateTime'].apply(lambda x : x.date())
df["period"] = df['period'].astype(str) + " " + df['Hour'].astype(str)
df.head()
###Output
_____no_output_____
###Markdown
Now we apply the same operation to the rain data.
###Code
rain_df["period"] = rain_df['tarih'] + " " + rain_df['saat'].astype(str)
# let's check that the operation we did is correct.
IDX = 177
print(rain_df['period'][IDX])
print(df['period'][IDX])
print(df['period'][IDX] == rain_df['period'][IDX])
###Output
2019-01-08 9
2019-01-08 9
True
###Markdown
Yes: with the technique above we now have a shared column of the same data type in both datasets --> **PERIOD**. Now let's merge them.
###Code
# set the period column as the index and join.
df = df.join(rain_df.set_index('period'), on='period')
df.loc[df['period'] == '2019-05-05 1']
rain_df.loc[rain_df['period'] == '2019-05-05 1']
DATE = '2019-01-05 4'
print(df.loc[df['period'] == DATE]['isRainy'])
print(rain_df.loc[df['period'] == DATE]['isRainy'])
df.dropna(inplace=True)
df['isRainy'] = df.isRainy.astype(int)
#df.dropna(inplace=True)
###Output
_____no_output_____
###Markdown
Important Example - even though the average rainfall is high, the speed did not drop much.
###Code
#df.loc[df['ort_yagis'] > 30][['period', 'ort_yagis', 'AverageSpeed']].reset_index(drop=True)
###Output
_____no_output_____
###Markdown
Dropping Unwanted Columns
###Code
# I need AverageSpeed on the prediction side, but it must be dropped here, so create a copy.
copy_df = df.copy()
df = df[['Month', 'Day', 'Hour', 'isNationalHoliday', 'isWeekend',
'isSchoolHoliday', 'isRainy', 'label']]
df.head()
###Output
_____no_output_____
###Markdown
Let's Create the ML Model
###Code
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor
from sklearn.linear_model import Perceptron, LinearRegression
from sklearn.naive_bayes import MultinomialNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
df.dropna(inplace=True)
# Split into features and class labels.
y = df['label'].values
X = df.drop(['label'], axis=1)
###Output
_____no_output_____
###Markdown
Prediction
###Code
def create_prediction_example(m, d, h, isN, isW, isS, r):
    data = pd.DataFrame(0, index=[0], columns=list(X.columns))  # use X; X_train is never defined in this notebook
data['Month'] = m
data['Day'] = d
data['Hour'] = h
data['isNationalHoliday'] = isN
data['isWeekend'] = isW
data['isSchoolHoliday'] = isS
data['isRainy'] = r
return data
###Output
_____no_output_____
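For reference, the helper above can be exercised in isolation. This standalone sketch mirrors its behaviour with the feature columns hard-coded (the example values are made up):

```python
import pandas as pd

feature_cols = ['Month', 'Day', 'Hour', 'isNationalHoliday', 'isWeekend',
                'isSchoolHoliday', 'isRainy']

def make_example(m, d, h, is_national, is_weekend, is_school, rainy):
    # One-row frame with every feature zeroed, then filled in positionally,
    # matching the column order the model was trained on.
    row = pd.DataFrame(0, index=[0], columns=feature_cols)
    row.loc[0, :] = [m, d, h, is_national, is_weekend, is_school, rainy]
    return row

# e.g. May 17th, 8am, a rainy ordinary weekday
example = make_example(5, 17, 8, 0, 0, 0, 1)
```

Keeping the column order identical to the training frame matters, since the fitted estimator's `predict` expects features in the same positions.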
###Markdown
Cross Validation
###Code
from sklearn.model_selection import KFold
def cross_validation(df):
    piece = KFold(shuffle=True)  # with shuffle=True, the rows are shuffled before being split into folds
inputs = df.iloc[:,:-1].values
outputs = df.iloc[:,-1].values
models = {'Perceptron': Perceptron(),
'NaiveBayes': MultinomialNB(),
'KNearest': KNeighborsClassifier(n_neighbors=3),
'DecisionTree': DecisionTreeRegressor(max_depth= 110, min_samples_leaf = 25),
'SupportVector': SVC(kernel="linear")}
    # KFold splits the input into train and test folds.
for (name, model) in models.items():
total_score = 0
for idx, (train_idx, test_idx) in enumerate(piece.split(inputs)):
            # the dataset is now split into folds;
            # assign the training part for training and the test part for testing.
            train_input = inputs[train_idx, :]
            test_input = inputs[test_idx, :]
            train_output = outputs[train_idx]  # the output side is just the single label column
test_output = outputs[test_idx]
model.fit(train_input, train_output)
score = round(model.score(test_input, test_output) * 100, 2)
print(f'{str(name):17}{idx+1}x: %{score}')
total_score += score
print(f'Average %{round((total_score / 5),2)}')
print("-" * 52)
cross_validation(df)
###Output
Perceptron 1x: %55.56
Perceptron 2x: %51.55
Perceptron 3x: %61.31
Perceptron 4x: %47.59
Perceptron 5x: %35.23
Average %50.25
----------------------------------------------------
NaiveBayes 1x: %50.49
NaiveBayes 2x: %51.3
NaiveBayes 3x: %51.55
NaiveBayes 4x: %51.67
NaiveBayes 5x: %54.14
Average %51.83
----------------------------------------------------
KNearest 1x: %73.58
KNearest 2x: %75.77
KNearest 3x: %77.75
KNearest 4x: %73.05
KNearest 5x: %76.39
Average %75.31
----------------------------------------------------
DecisionTree 1x: %61.5
DecisionTree 2x: %59.84
DecisionTree 3x: %59.04
DecisionTree 4x: %61.13
DecisionTree 5x: %62.44
Average %60.79
----------------------------------------------------
SupportVector 1x: %62.59
SupportVector 2x: %62.92
SupportVector 3x: %63.54
SupportVector 4x: %66.01
SupportVector 5x: %63.29
Average %63.67
----------------------------------------------------
###Markdown
Saving Model
###Code
model = KNeighborsClassifier(n_neighbors=3)
piece = KFold(shuffle=True)  # with shuffle=True, the rows are shuffled before being split into folds
inputs = df.iloc[:,:-1].values
outputs = df.iloc[:,-1].values
for idx, (train_idx, test_idx) in enumerate(piece.split(inputs)):
    # the dataset is now split into folds;
    # assign the training part for training and the test part for testing.
    train_input = inputs[train_idx, :]
    test_input = inputs[test_idx, :]
    train_output = outputs[train_idx]  # the output side is just the single label column
test_output = outputs[test_idx]
model.fit(train_input, train_output)
from joblib import dump
dump(model, 'model.joblib')
CLASS0, CLASS1, CLASS2
###Output
_____no_output_____ |
site/en/r1/tutorials/eager/automatic_differentiation.ipynb | ###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Automatic differentiation and gradient tape Run in Google Colab View source on GitHub > Note: This is an archived TF1 notebook. These are configured to run in TF2's [compatibility mode](https://www.tensorflow.org/guide/migrate) but will run in TF1 as well. To use TF1 in Colab, use the [%tensorflow_version 1.x](https://colab.research.google.com/notebooks/tensorflow_version.ipynb) magic. In the previous tutorial we introduced `Tensor`s and operations on them. In this tutorial we will cover [automatic differentiation](https://en.wikipedia.org/wiki/Automatic_differentiation), a key technique for optimizing machine learning models. Setup
###Code
import tensorflow.compat.v1 as tf
###Output
_____no_output_____
###Markdown
Gradient tapesTensorFlow provides the [tf.GradientTape](https://www.tensorflow.org/api_docs/python/tf/GradientTape) API for automatic differentiation - computing the gradient of a computation with respect to its input variables. Tensorflow "records" all operations executed inside the context of a `tf.GradientTape` onto a "tape". Tensorflow then uses that tape and the gradients associated with each recorded operation to compute the gradients of a "recorded" computation using [reverse mode differentiation](https://en.wikipedia.org/wiki/Automatic_differentiation).For example:
###Code
x = tf.ones((2, 2))
with tf.GradientTape() as t:
t.watch(x)
y = tf.reduce_sum(x)
z = tf.multiply(y, y)
# Derivative of z with respect to the original input tensor x
dz_dx = t.gradient(z, x)
for i in [0, 1]:
for j in [0, 1]:
assert dz_dx[i][j].numpy() == 8.0
###Output
_____no_output_____
###Markdown
You can also request gradients of the output with respect to intermediate values computed during a "recorded" `tf.GradientTape` context.
###Code
x = tf.ones((2, 2))
with tf.GradientTape() as t:
t.watch(x)
y = tf.reduce_sum(x)
z = tf.multiply(y, y)
# Use the tape to compute the derivative of z with respect to the
# intermediate value y.
dz_dy = t.gradient(z, y)
assert dz_dy.numpy() == 8.0
###Output
_____no_output_____
###Markdown
By default, the resources held by a GradientTape are released as soon as GradientTape.gradient() method is called. To compute multiple gradients over the same computation, create a `persistent` gradient tape. This allows multiple calls to the `gradient()` method as resources are released when the tape object is garbage collected. For example:
###Code
x = tf.constant(3.0)
with tf.GradientTape(persistent=True) as t:
t.watch(x)
y = x * x
z = y * y
dz_dx = t.gradient(z, x) # 108.0 (4*x^3 at x = 3)
dy_dx = t.gradient(y, x) # 6.0
del t # Drop the reference to the tape
###Output
_____no_output_____
###Markdown
Recording control flowBecause tapes record operations as they are executed, Python control flow (using `if`s and `while`s for example) is naturally handled:
###Code
def f(x, y):
output = 1.0
for i in range(y):
if i > 1 and i < 5:
output = tf.multiply(output, x)
return output
def grad(x, y):
with tf.GradientTape() as t:
t.watch(x)
out = f(x, y)
return t.gradient(out, x)
x = tf.convert_to_tensor(2.0)
assert grad(x, 6).numpy() == 12.0
assert grad(x, 5).numpy() == 12.0
assert grad(x, 4).numpy() == 4.0
###Output
_____no_output_____
###Markdown
Higher-order gradientsOperations inside of the `GradientTape` context manager are recorded for automatic differentiation. If gradients are computed in that context, then the gradient computation is recorded as well. As a result, the exact same API works for higher-order gradients as well. For example:
###Code
x = tf.Variable(1.0) # Create a Tensorflow variable initialized to 1.0
with tf.GradientTape() as t:
with tf.GradientTape() as t2:
y = x * x * x
# Compute the gradient inside the 't' context manager
# which means the gradient computation is differentiable as well.
dy_dx = t2.gradient(y, x)
d2y_dx2 = t.gradient(dy_dx, x)
assert dy_dx.numpy() == 3.0
assert d2y_dx2.numpy() == 6.0
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Automatic differentiation and gradient tape Run in Google Colab View source on GitHub In the previous tutorial we introduced `Tensor`s and operations on them. In this tutorial we will cover [automatic differentiation](https://en.wikipedia.org/wiki/Automatic_differentiation), a key technique for optimizing machine learning models. Setup
###Code
from __future__ import absolute_import, division, print_function, unicode_literals
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x
except Exception:
pass
import tensorflow.compat.v1 as tf
###Output
_____no_output_____
###Markdown
Gradient tapesTensorFlow provides the [tf.GradientTape](https://www.tensorflow.org/api_docs/python/tf/GradientTape) API for automatic differentiation - computing the gradient of a computation with respect to its input variables. Tensorflow "records" all operations executed inside the context of a `tf.GradientTape` onto a "tape". Tensorflow then uses that tape and the gradients associated with each recorded operation to compute the gradients of a "recorded" computation using [reverse mode differentiation](https://en.wikipedia.org/wiki/Automatic_differentiation).For example:
###Code
x = tf.ones((2, 2))
with tf.GradientTape() as t:
t.watch(x)
y = tf.reduce_sum(x)
z = tf.multiply(y, y)
# Derivative of z with respect to the original input tensor x
dz_dx = t.gradient(z, x)
for i in [0, 1]:
for j in [0, 1]:
assert dz_dx[i][j].numpy() == 8.0
###Output
_____no_output_____
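To make the "tape" idea concrete, here is a toy pure-Python sketch of reverse-mode differentiation - an illustration only, not TensorFlow's actual implementation. Each operation records its inputs and local gradients, and `gradient()` replays the recorded graph backwards in topological order:

```python
class Var:
    """A scalar that records how it was produced, for reverse-mode autodiff."""
    def __init__(self, value, parents=()):
        self.value = value
        self.parents = parents  # (parent, local_gradient) pairs
        self.grad = 0.0

    def __add__(self, other):
        # d(a + b)/da = 1, d(a + b)/db = 1
        return Var(self.value + other.value, [(self, 1.0), (other, 1.0)])

    def __mul__(self, other):
        # d(a * b)/da = b, d(a * b)/db = a
        return Var(self.value * other.value,
                   [(self, other.value), (other, self.value)])


def gradient(output, wrt):
    """Propagate d(output)/d(node) backwards through the recorded graph."""
    order, seen = [], set()

    def visit(node):  # topological order, so each node's grad is final
        if id(node) not in seen:
            seen.add(id(node))
            for parent, _ in node.parents:
                visit(parent)
            order.append(node)

    visit(output)
    output.grad = 1.0
    for node in reversed(order):
        for parent, local in node.parents:
            parent.grad += node.grad * local
    return wrt.grad


x = Var(2.0)
z = (x + x) * (x + x)   # z = (2x)^2 = 4x^2, so dz/dx = 8x = 16 at x = 2
print(gradient(z, x))   # 16.0
```

The tutorial's `t.watch(x)` plays the role of marking `x` as a leaf whose accumulated `grad` we care about.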
###Markdown
You can also request gradients of the output with respect to intermediate values computed during a "recorded" `tf.GradientTape` context.
###Code
x = tf.ones((2, 2))
with tf.GradientTape() as t:
t.watch(x)
y = tf.reduce_sum(x)
z = tf.multiply(y, y)
# Use the tape to compute the derivative of z with respect to the
# intermediate value y.
dz_dy = t.gradient(z, y)
assert dz_dy.numpy() == 8.0
###Output
_____no_output_____
###Markdown
By default, the resources held by a `GradientTape` are released as soon as the `GradientTape.gradient()` method is called. To compute multiple gradients over the same computation, create a `persistent` gradient tape. This allows multiple calls to the `gradient()` method; resources are released when the tape object is garbage collected. For example:
###Code
x = tf.constant(3.0)
with tf.GradientTape(persistent=True) as t:
t.watch(x)
y = x * x
z = y * y
dz_dx = t.gradient(z, x) # 108.0 (4*x^3 at x = 3)
dy_dx = t.gradient(y, x) # 6.0
del t # Drop the reference to the tape
###Output
_____no_output_____
###Markdown
Recording control flow. Because tapes record operations as they are executed, Python control flow (using `if`s and `while`s, for example) is naturally handled:
###Code
def f(x, y):
output = 1.0
for i in range(y):
if i > 1 and i < 5:
output = tf.multiply(output, x)
return output
def grad(x, y):
with tf.GradientTape() as t:
t.watch(x)
out = f(x, y)
return t.gradient(out, x)
x = tf.convert_to_tensor(2.0)
assert grad(x, 6).numpy() == 12.0
assert grad(x, 5).numpy() == 12.0
assert grad(x, 4).numpy() == 4.0
###Output
_____no_output_____
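The asserted values above can be sanity-checked numerically: since only the executed branch is recorded, `f(x, 6)` is effectively `x**3` and `f(x, 4)` is effectively `x**2`. A hedged pure-Python check using central differences (`numeric_grad` is a hypothetical helper, independent of TensorFlow):

```python
def f(x, y):
    # Same control flow as the tutorial's f, in plain Python.
    output = 1.0
    for i in range(y):
        if 1 < i < 5:
            output *= x
    return output

def numeric_grad(x, y, h=1e-6):
    """Central-difference approximation of df/dx at fixed y."""
    return (f(x + h, y) - f(x - h, y)) / (2 * h)

assert abs(numeric_grad(2.0, 6) - 12.0) < 1e-3  # f is x**3 here: 3x^2 = 12
assert abs(numeric_grad(2.0, 5) - 12.0) < 1e-3  # same branch is taken
assert abs(numeric_grad(2.0, 4) - 4.0) < 1e-3   # f is x**2 here: 2x = 4
```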
###Markdown
Higher-order gradients. Operations inside the `GradientTape` context manager are recorded for automatic differentiation. If gradients are computed in that context, then the gradient computation is recorded as well. As a result, the exact same API works for higher-order gradients. For example:
###Code
x = tf.Variable(1.0)  # Create a TensorFlow variable initialized to 1.0
with tf.GradientTape() as t:
with tf.GradientTape() as t2:
y = x * x * x
# Compute the gradient inside the 't' context manager
# which means the gradient computation is differentiable as well.
dy_dx = t2.gradient(y, x)
d2y_dx2 = t.gradient(dy_dx, x)
assert dy_dx.numpy() == 3.0
assert d2y_dx2.numpy() == 6.0
###Output
_____no_output_____
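The two asserted values can be cross-checked without a tape at all by nesting central differences - a rough numerical check of the math, not how `GradientTape` works internally:

```python
def central_diff(g, x, h=1e-5):
    """First derivative of g at x via a symmetric difference quotient."""
    return (g(x + h) - g(x - h)) / (2 * h)

cube = lambda x: x ** 3

dy_dx = central_diff(cube, 1.0)                               # ~3.0 = 3x^2 at x=1
d2y_dx2 = central_diff(lambda x: central_diff(cube, x), 1.0)  # ~6.0 = 6x at x=1

assert abs(dy_dx - 3.0) < 1e-6
assert abs(d2y_dx2 - 6.0) < 1e-4
```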
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Automatic differentiation and gradient tape > Note: This is an archived TF1 notebook. These are configured to run in TF2's [compatibility mode](https://www.tensorflow.org/guide/migrate) but will run in TF1 as well. To use TF1 in Colab, use the [%tensorflow_version 1.x](https://colab.research.google.com/notebooks/tensorflow_version.ipynb) magic. In the previous tutorial we introduced `Tensor`s and operations on them. In this tutorial we will cover [automatic differentiation](https://en.wikipedia.org/wiki/Automatic_differentiation), a key technique for optimizing machine learning models. Setup
###Code
import tensorflow.compat.v1 as tf
###Output
_____no_output_____
###Markdown
Gradient tapesTensorFlow provides the [tf.GradientTape](https://www.tensorflow.org/api_docs/python/tf/GradientTape) API for automatic differentiation - computing the gradient of a computation with respect to its input variables. Tensorflow "records" all operations executed inside the context of a `tf.GradientTape` onto a "tape". Tensorflow then uses that tape and the gradients associated with each recorded operation to compute the gradients of a "recorded" computation using [reverse mode differentiation](https://en.wikipedia.org/wiki/Automatic_differentiation).For example:
###Code
x = tf.ones((2, 2))
with tf.GradientTape() as t:
t.watch(x)
y = tf.reduce_sum(x)
z = tf.multiply(y, y)
# Derivative of z with respect to the original input tensor x
dz_dx = t.gradient(z, x)
for i in [0, 1]:
for j in [0, 1]:
assert dz_dx[i][j].numpy() == 8.0
###Output
_____no_output_____
###Markdown
You can also request gradients of the output with respect to intermediate values computed during a "recorded" `tf.GradientTape` context.
###Code
x = tf.ones((2, 2))
with tf.GradientTape() as t:
t.watch(x)
y = tf.reduce_sum(x)
z = tf.multiply(y, y)
# Use the tape to compute the derivative of z with respect to the
# intermediate value y.
dz_dy = t.gradient(z, y)
assert dz_dy.numpy() == 8.0
###Output
_____no_output_____
###Markdown
By default, the resources held by a GradientTape are released as soon as GradientTape.gradient() method is called. To compute multiple gradients over the same computation, create a `persistent` gradient tape. This allows multiple calls to the `gradient()` method as resources are released when the tape object is garbage collected. For example:
###Code
x = tf.constant(3.0)
with tf.GradientTape(persistent=True) as t:
t.watch(x)
y = x * x
z = y * y
dz_dx = t.gradient(z, x) # 108.0 (4*x^3 at x = 3)
dy_dx = t.gradient(y, x) # 6.0
del t # Drop the reference to the tape
###Output
_____no_output_____
###Markdown
Recording control flowBecause tapes record operations as they are executed, Python control flow (using `if`s and `while`s for example) is naturally handled:
###Code
def f(x, y):
output = 1.0
for i in range(y):
if i > 1 and i < 5:
output = tf.multiply(output, x)
return output
def grad(x, y):
with tf.GradientTape() as t:
t.watch(x)
out = f(x, y)
return t.gradient(out, x)
x = tf.convert_to_tensor(2.0)
assert grad(x, 6).numpy() == 12.0
assert grad(x, 5).numpy() == 12.0
assert grad(x, 4).numpy() == 4.0
###Output
_____no_output_____
###Markdown
Higher-order gradientsOperations inside of the `GradientTape` context manager are recorded for automatic differentiation. If gradients are computed in that context, then the gradient computation is recorded as well. As a result, the exact same API works for higher-order gradients as well. For example:
###Code
x = tf.Variable(1.0) # Create a Tensorflow variable initialized to 1.0
with tf.GradientTape() as t:
with tf.GradientTape() as t2:
y = x * x * x
# Compute the gradient inside the 't' context manager
# which means the gradient computation is differentiable as well.
dy_dx = t2.gradient(y, x)
d2y_dx2 = t.gradient(dy_dx, x)
assert dy_dx.numpy() == 3.0
assert d2y_dx2.numpy() == 6.0
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
|
qoqo/examples/Teleportation_Example.ipynb | ###Markdown
Quantum Teleportation with qoqo & the use of conditional measurementsThis notebook is designed to demonstrate the use of conditional measurements, by following through an example of quantum state teleportation.In quantum teleportation there are two end users: The first user, Alice, wishes to send a particular quantum state to the second user, Bob. The protocol requires a total of three qubits, and the transmission of two classical bits. The sender Alice controls qubits 0 and 1, and the receiver Bob controls qubit 2.
###Code
from qoqo_quest import Backend
from qoqo import Circuit
from qoqo import operations as ops
from math import pi
###Output
_____no_output_____
###Markdown
State preparationThe first step is to prepare the quantum state which Alice will send to Bob. As an example, the most general single qubit quantum state is given by:\begin{equation}|\psi \rangle = cos(\frac{\theta}{2}) |0 \rangle + e^{i \phi} sin(\frac{\theta}{2}) |1 \rangle.\end{equation}This state can be prepared by a sequence of two single qubit rotations. In the code block below we first define a function that takes the angles $\theta$ and $\phi$ as input and prepares qubit 0 of a quantum register in the state $| \psi \rangle$.Next we use an instance of the function with the angles $\theta=\frac{\pi}{2}$ and $\phi=0$ to create a circuit which prepares the state: \begin{equation}|\psi \rangle = \frac{1}{\sqrt{2}} \big ( |0 \rangle + |1 \rangle \big ) = | + \rangle.\end{equation}
###Code
def prep_psi(Theta: float, Phi: float) -> Circuit:
circuit = Circuit()
circuit += ops.RotateY(qubit=0, theta=Theta)
circuit += ops.RotateZ(qubit=0, theta=Phi)
return circuit
init_circuit = prep_psi(pi/2, 0.0)
###Output
_____no_output_____
###Markdown
Preparing an entangled resource stateQuantum teleportation requires that the end users initially share an entangled resource state, \begin{equation}|\Phi_{+} \rangle = \frac{1}{\sqrt{2}} \big ( |00 \rangle + |11 \rangle \big ) .\end{equation}The following circuit prepares the state $|\Phi_{+} \rangle$ between qubit 1, held by Alice, and qubit 2, held by Bob.
###Code
entangling_circ = Circuit()
entangling_circ += ops.Hadamard(qubit=1)
entangling_circ += ops.CNOT(control=1, target=2)
###Output
_____no_output_____
###Markdown
Encoding the state to be sent in the entangled resource stateThe next step of the procedure is to encode the state of qubit 0, $\psi$, into the entangled resource state. This is accomplished by way of the circuit defined below, which is similar to that used to prepare the entangled resource.
###Code
encoding_circ = Circuit()
encoding_circ += ops.CNOT(control=0, target=1)
encoding_circ += ops.Hadamard(qubit=0)
###Output
_____no_output_____
###Markdown
State transfer part 1: MeasurementAt this stage in the process both of Alice's qubits, 0 and 1, are measured. The measurement consumes the entangled resource and leaves the state of qubit 2, Bob's qubit, in a state that depends on the two measurement outcomes. Let us call the classical bit which results from measuring qubit 0 'M1' and the bit resulting from measuring qubit 1 'M2'. The circuit below defines the classical register named 'M1M2', performs the measurement of qubits 0 and 1, and stores the results in the register 'M1M2'.
###Code
meas_circ = Circuit()
meas_circ += ops.DefinitionBit(name='M1M2', length=2, is_output=True) #for classical bits corresponding to measurement outcomes
meas_circ += ops.MeasureQubit(qubit=0,readout='M1M2',readout_index=0)
meas_circ += ops.MeasureQubit(qubit=1,readout='M1M2',readout_index=1)
###Output
_____no_output_____
###Markdown
Defining the circuit for a conditional operationConditional operations in qoqo have three inputs: the name of a classical register containing boolean values, the index of the register containing the value to be used to condition the operation, and the operation or sequence of operations to be performed if the boolean condition value is True. To prepare the third input, it is necessary to create circuit snippets corresponding to the operations to be completed if the condition is True. In the case of quantum teleportation, we need two conditional operations. The first is a Pauli Z acting on Bob's qubit, conditioned on the measurement result M1. The second is a Pauli X acting on Bob's qubit, conditioned on the measurement result M2. Hence we prepare circuit snippets corresponding to a Pauli Z and a Pauli X operation.
###Code
conditional_Z = Circuit()
conditional_Z += ops.PauliZ(qubit=2)
conditional_X = Circuit()
conditional_X += ops.PauliX(qubit=2)
###Output
_____no_output_____
###Markdown
State transfer part 2: conditional operationsThe final stage of the teleportation protocol is to perform corrections to the state of Bob's qubit 2, according to the measurement outcomes 'M1' and 'M2'.The below circuit makes use of the circuit snippets defined above to perform the conditional corrections to the state of qubit 2.
###Code
conditional_circ = Circuit()
conditional_circ += ops.PragmaConditional(condition_register='M1M2',condition_index=1, circuit=conditional_X)
conditional_circ += ops.PragmaConditional(condition_register='M1M2',condition_index=0, circuit=conditional_Z)
###Output
_____no_output_____
###Markdown
Putting it all togetherCombining each of the circuits we have defined yields the full teleportation protocol. We can verify that the protocol is successful by reading out the final state vector and comparing it to the state which was to be sent, $|\psi \rangle$.
###Code
verification = Circuit()
# Create register for state vector readout
verification += ops.DefinitionComplex(name='psi', length=8, is_output=True)
verification += ops.PragmaGetStateVector(readout='psi', circuit=Circuit())
# Combine parts for full protocol
teleportation_circuit = init_circuit + entangling_circ + encoding_circ + meas_circ + conditional_circ + verification
# Run simulation and collect outputs
backend = Backend(number_qubits=3)
(result_bit_registers, result_float_registers, result_complex_registers)=backend.run_circuit(teleportation_circuit)
# View measurement outcomes and post-protocol state of qubits
print(result_bit_registers['M1M2'])
print(result_complex_registers['psi'])
###Output
[[True, False]]
[[0j, (0.7071067811865476+0j), 0j, 0j, (-0+0j), (0.7071067811865475-0j), (-0+0j), (-0+0j)]]
###Markdown
Quantum Teleportation with qoqo & the use of conditional measurementsThis notebook is designed to demonstrate the use of conditional measurements, by following through an example of quantum state teleportation.In quantum teleportation there are two end users: The first user, Alice, wishes to send a particular quantum state to the second user, Bob. The protocol requires a total of three qubits, and the transmission of two classical bits. The sender Alice controls qubits 0 and 1, and the receiver Bob controls qubit 2.
###Code
from qoqo_pyquest import PyQuestBackend
from qoqo import Circuit
from qoqo import operations as ops
import numpy as np
from math import sqrt, pi
###Output
_____no_output_____
###Markdown
State preparationThe first step is to prepare the quantum state which Alice will send to Bob. As an example, the most general single qubit quantum state is given by:\begin{equation}|\psi \rangle = cos(\frac{\theta}{2}) |0 \rangle + e^{i \phi} sin(\frac{\theta}{2}) |1 \rangle.\end{equation}This state can be prepared by a sequence of two single qubit rotations. In the code block below we first define a function that takes the angles $\theta$ and $\phi$ as input and prepares qubit 0 of a quantum register in the state $| \psi \rangle$.Next we use an instance of the function with the angles $\theta=\frac{\pi}{2}$ and $\phi=0$ to create a circuit which prepares the state: \begin{equation}|\psi \rangle = \frac{1}{\sqrt{2}} \big{(} |0 \rangle + |1 \rangle \big{)} = | + \rangle.\end{equation}
###Code
def prep_psi(Theta: float, Phi: float) -> Circuit:
circuit = Circuit()
circuit += ops.RotateY(qubit=0, theta=Theta)
circuit += ops.RotateZ(qubit=0, theta=Phi)
return circuit
init_circuit = prep_psi(pi/2, 0.0)
###Output
_____no_output_____
###Markdown
Preparing an entangled resource stateQuantum teleportation requires that the end users initially share an entangled resource state, \begin{equation}|\Phi_{+} \rangle = \frac{1}{\sqrt{2}} \big{(} |00 \rangle + |11 \rangle \big{)} .\end{equation}The following circuit prepares the state $|\Phi_{+} \rangle$ between qubit 1, held by Alice, and qubit 2, held by Bob.
###Code
entangling_circ = Circuit()
entangling_circ += ops.Hadamard(qubit=1)
entangling_circ += ops.CNOT(control=1, target=2)
###Output
_____no_output_____
###Markdown
Encoding the state to be sent in the entangled resource stateThe next step of the procedure is to encode the state of qubit 0, $\psi$, into the entangled resource state. This is accomplished by way of the circuit defined below, which is similar to that used to prepare the entangled resource.
###Code
encoding_circ = Circuit()
encoding_circ += ops.CNOT(control=0, target=1)
encoding_circ += ops.Hadamard(qubit=0)
###Output
_____no_output_____
###Markdown
State transfer part 1: MeasurementAt this stage in the process both of Alice's qubits, 0 and 1, are measured. The measurement consumes the entangled resource and leaves the state of qubit 2, Bob's qubit, in a state that depends on the two measurement outcomes. Let us call the classical bit which results from measuring qubit 0 'M1' and the bit resulting from measuring qubit 1 'M2'. The circuit below defines the classical register named 'M1M2', performs the measurement of qubits 0 and 1, and stores the results in the register 'M1M2'.
###Code
meas_circ = Circuit()
meas_circ += ops.DefinitionBit(name='M1M2', length=2, is_output=True) #for classical bits corresponding to measurement outcomes
meas_circ += ops.MeasureQubit(qubit=0,readout='M1M2',readout_index=0)
meas_circ += ops.MeasureQubit(qubit=1,readout='M1M2',readout_index=1)
###Output
_____no_output_____
###Markdown
Defining the circuit for a conditional operationConditional operations in qoqo have three inputs: the name of a classical register containing boolean values, the index of the register containing the value to be used to condition the operation, and the operation or sequence of operations to be performed if the boolean condition value is True. To prepare the third input, it is necessary to create circuit snippets corresponding to the operations to be completed if the condition is True. In the case of quantum teleportation, we need two conditional operations. The first is a Pauli Z acting on Bob's qubit, conditioned on the measurement result M1. The second is a Pauli X acting on Bob's qubit, conditioned on the measurement result M2. Hence we prepare circuit snippets corresponding to a Pauli Z and a Pauli X operation.
###Code
conditional_Z = Circuit()
conditional_Z += ops.PauliZ(qubit=2)
conditional_X = Circuit()
conditional_X += ops.PauliX(qubit=2)
###Output
_____no_output_____
###Markdown
State transfer part 2: conditional operationsThe final stage of the teleportation protocol is to perform corrections to the state of Bob's qubit 2, according to the measurement outcomes 'M1' and 'M2'.The below circuit makes use of the circuit snippets defined above to perform the conditional corrections to the state of qubit 2.
###Code
conditional_circ = Circuit()
conditional_circ += ops.PragmaConditional(condition_register='M1M2',condition_index=1, circuit=conditional_X)
conditional_circ += ops.PragmaConditional(condition_register='M1M2',condition_index=0, circuit=conditional_Z)
###Output
_____no_output_____
###Markdown
Putting it all togetherCombining each of the circuits we have defined yields the full teleportation protocol. We can verify that the protocol is successful by reading out the final state vector and comparing it to the state which was to be sent, $|\psi \rangle$.
###Code
verification = Circuit()
# Create register for state vector readout
verification += ops.DefinitionComplex(name='psi', length=8, is_output=True)
verification += ops.PragmaGetStateVector(readout='psi', circuit=Circuit())
# Combine parts for full protocol
teleportation_circuit = init_circuit + entangling_circ + encoding_circ + meas_circ + conditional_circ + verification
# Run simulation and collect outputs
backend = PyQuestBackend(number_qubits=3)
(result_bit_registers, result_float_registers, result_complex_registers)=backend.run_circuit(teleportation_circuit)
# View measurement outcomes and post-protocol state of qubits
print(result_bit_registers['M1M2'])
print(result_complex_registers['psi'])
###Output
[[True, False]]
[array([0. +0.j, 0.70710678+0.j, 0. +0.j, 0. +0.j,
0. +0.j, 0.70710678+0.j, 0. +0.j, 0. +0.j])]
|
SILAM_NO2.ipynb | ###Markdown
Air Pollution over IndonesiaNitrogen dioxide data is obtained from the Finnish Meteorological Institute (https://silam.fmi.fi/) and overlaid with coal power plant data from the World Resources Institute (WRI). Units are in ug/m^3 and values below 10 ug/m^3 are not shown on the map. This map is updated every hour. For more information, please visit https://josefmtd.com/.
###Code
# Imports assumed by this notebook (not present in the original cell)
import datetime
import ee
import geemap
from ipyleaflet import basemaps
ee.Initialize()  # assumes Earth Engine authentication is already configured
# Get the observed datetime
now = datetime.datetime.utcnow()
obs = datetime.datetime(now.year, now.month, now.day, now.hour)
print('Datetime :', obs.strftime("%Y-%m-%d %H:%M UTC"))
# Create a map service
Map = geemap.Map(center=(0,120), zoom=5,
min_zoom=5, max_zoom=12,
basemap=basemaps.CartoDB.DarkMatter,
add_google_map=False
)
# Obtain NO2 data from SILAM TDS Catalog exported to Google Cloud Storage
fname = f'NO2_{obs.strftime("%Y%m%dT%H")}.tif'
var = ee.Image.loadGeoTIFF(f'gs://silam-neonet-rasters/{fname}')
# Resample and only show values above 10 ug/m^3
data = var.resample().reproject(ee.Projection('EPSG:4326'), scale=1000)
data = data.updateMask(data.gt(10))
# Add Coal Power Plant from WRI's Global Power Plant Database
indo_power = ee.FeatureCollection('WRI/GPPD/power_plants') \
.filter(ee.Filter.eq('country', 'IDN')) \
.filter(ee.Filter.eq('fuel1', 'Coal'))
# Limit the area around Indonesia
bbox = ee.Geometry.BBox(80.0, -15.0, 160.0, 15.0)
# Visualization Parameters
vmin = 0.0
vmax = 250.0
palette = ['blue', 'yellow', 'red', 'purple']
vis_params = {
'min' : vmin,
'max' : vmax,
'palette' : palette,
'opacity' : 0.5,
}
# Add Coal Power Plant data and Nitrogen Dioxide data
Map.addLayer(data.clip(bbox), vis_params, 'NO2')
Map.addLayer(indo_power, {'color' : 'ff0000'}, 'Coal Power Plant')
Map.add_colorbar_branca(colors=palette, vmin=vmin, vmax=vmax,
layer_name='NO2')
Map.addLayerControl()
Map
###Output
_____no_output_____ |
Inteligência Artificial/Ciência de Dados/2. Análise de Dados - Medidas.ipynb | ###Markdown
# Data Analysis - Measures

As the title suggests, this guide puts data analysis into context through **measures** and shows how we can do this in a simple way.

Today, the internet generates an enormous amount of data at every instant. Roughly two million Google searches are made per minute, and Google is just one of the search engines. To analyze this data, many techniques and tools can be used (all grounded in theory), but **the most basic of all is an understanding of simple measures**, which are statistical measures.

## Summary
1. [Data types](git1)
2. [Data scales](git2)
3. [Let's describe the data!](git3)
    1. [Frequency measure](git3.1)
    2. [Central measures](git3.2)
        1. [Mode](git3.2.1)
        2. [Mean](git3.2.2)
        3. [Median](git3.2.3)
        4. [When to use the Mean and the Median](git3.2.4)
        5. [Quartile and percentile](git3.2.5)
        6. [Boxplot](git3.2.6)
    3. [Dispersion measures](git3.3)
        1. [Variance](git3.3.1)
        2. [Standard deviation](git3.3.2)
        3. [IQR](git3.3.3)
4. [A comforting message + application in a small piece of code](git4)

## 1. Data types [🠡](intro)

Before getting into the business of analyzing data, we need to understand what data is and what the data types are. Numbers are **not** the largest share of it, but nowadays **data is transformed into numbers so that the computer can interpret it**.

**Unstructured data** is data without a concrete structure, and it comes in enormous variety:

1. Text - any kind of text found on the internet;
2. Audio, video, and images;
3. Graphs (networks of nodes, such as the Facebook friend network, which suggests friends in common, etc.);
4. Webpages (the pages' source code);
5. Time series (data about the same object that varies over time).

If we take Facebook or Wikipedia, we can find all of the types of **unstructured data** above.
Can you think of another platform that also contains all of this?

As said before, this data is transformed so that the computer can interpret it. It is also transformed so that **we humans** can analyze it. We therefore turn **unstructured data** into **structured data**, which has attributes/values.

The data is structured in matrices, where each row corresponds to an object (an image, for example) and each column to an attribute (what that image represents). In the example below, a table analyzes the object **car types** and has the attributes **engine**, **how many kilometers it travels per liter**, and **year of manufacture**.

Object | Engine | Fuel consumption (km/L) | Year
--------- | ------ | ---- | -------
Car 1 | x | 10.3 | 2007
Car 2 | y | 8.7 | 2012
... | ... | ... | ...
Car n | z | 9.0 | 2020

The data **does not have to be numeric**. Let's look at some variable types:

1. Qualitative:
    1. Nominal (no mathematical meaning, e.g. **engine x, y, z**);
    2. Ordinal (also not numbers, but they represent an order, e.g. **little, medium, much** or **low, medium, high**).
2. Quantitative:
    1. Discrete (countable values, e.g. **year = 2007**);
    2. Continuous (real values, e.g. **fuel consumption = 10.3**, **weight**, **distance**, etc.).

The fictional table below lets us analyze the relationships between a person's various variables to understand what explains a _Final grade_.

Code | Name | Age | Sex | Region | Education | Final grade
--- | --- | --- | --- | --- | --- | ---
1 | Mário | 20 | Male | Southeast | High School | 70
2 | Julia | 19 | Female | Central-West | High School | 73
3 | Clebson | 32 | Male | Northeast | Higher Education | 85
... | ... | ... | ... | ... | ... | ...
77 | Roberta | 26 | Female | North | Higher Education | 83

Although the **Name** and **Code** columns represent the **Object**, they can be understood as **qualitative nominal** data.
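The variable types in the table above map naturally onto pandas dtypes. A minimal sketch, assuming a pandas environment; the rows and English column names are illustrative, not part of the original:

```python
import pandas as pd

# Illustrative rows mirroring the table above
df = pd.DataFrame({
    "Name": ["Mário", "Julia", "Clebson"],
    "Age": [20, 19, 32],                    # quantitative, discrete
    "Sex": ["Male", "Female", "Male"],      # qualitative, nominal
    "Education": ["High School", "High School", "Higher Education"],
    "Final grade": [70.0, 73.0, 85.0],      # quantitative, continuous
})

# Nominal data: an unordered category (only = and != make sense)
df["Sex"] = df["Sex"].astype("category")

# Ordinal data: an ordered category, so < and > comparisons also work
order = ["Elementary School", "High School", "Higher Education"]
df["Education"] = pd.Categorical(df["Education"], categories=order, ordered=True)

print(df.dtypes)
print((df["Education"] < "Higher Education").tolist())  # [True, True, False]
```

Making `Education` an *ordered* categorical is what legitimizes order comparisons on otherwise non-numeric values, which is exactly the nominal/ordinal distinction described here.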
The Code column contains numeric values, but the number is just a symbol identifying a person. The **Region** and **Sex** columns also contain qualitative nominal data.

The **Education** column also contains qualitative data, but unlike the previous ones, these are **qualitative ordinal**, since the education level can be interpreted as **low**, **medium**, and **high**, and can even be turned into numerals, such as **1**, **2**, and **3**.

The **Age** column is a **quantitative discrete** datum, since it is a number we can count easily. The **Final grade** column, in turn, contains **quantitative continuous** data: although it is numeric, it is a number with variables of its own (for example, the weight of each question on a test, the average across them, the grade of an essay, etc.).

## 2. Data scales [🠡](intro)

The **data scale** determines which logical operations can be performed on the attribute values. Let's understand this better by going through the data types and the operations each one allows.

1. Qualitative:
    1. Nominal: **=** and **≠**. Example: Southeast **=** Southeast; North ≠ Northeast;
    2. Ordinal: **=**, **≠**, **<**, **>**, **≤**, **≥**. These additional operations are possible because qualitative ordinal data can be **ordered**. **Low education < high education**.
2. Quantitative:
    1. Interval: **=, ≠, <, >, ≤, ≥, +** and **-**: dates, temperature, distance, etc. This kind of value cannot be treated like an ordinary number. 20 degrees Celsius is not twice 10 degrees Celsius, because the scale is based on Kelvin. The year 2000 is likewise not double the year 1000, since the calendar is based on arbitrary dates. Some years are longer than others, for example.
    2. Ratio: **=, ≠, <, >, ≤, ≥, +, -, \*** and **/**. Unlike interval values, ratio values have an **absolute meaning**. A major difference between interval and ratio numbers is that the latter can contain an absolute zero.
For example, the Kelvin scale itself, which has an absolute zero, as well as salary, numbers of objects and people, and account balance. Here we can **multiply and divide**. Half of a value is obtained by dividing by two.

See the table below for what each symbol means and a more visual illustration!

Symbol | Operation | Qualitative nominal | Qualitative ordinal | Quantitative interval | Quantitative ratio
-- | --- | --- | --- | -- | ---
= | Equal | Southeast = Southeast | Low = Low | 32º F = 32º F | 9.807 m/s² = 9.807 m/s² (Earth's gravity)
≠ | Not equal | North ≠ Northeast | Much ≠ Little | 32º F ≠ -32º F | 1,000 N ≠ 3,000 N (force of a boxer's punch)
< | Less than | | Low < High | 10ºC < 20ºC | 20 cents < 21 cents
> | Greater than | | High > Low | 20ºC > 10ºC | 20.10 reais > 20.01 reais
≤ | Less than or equal | | High ≤ High | year 200 BC ≤ year 400 AD | 200 K ≤ 300 K
≥ | Greater than or equal | | High ≥ High | year 200 BC ≥ year 200 BC | 4 oranges ≥ 4 oranges
+ | Positive | | | 20ºC | +200 reais balance
- | Negative | | | -20ºC | -200 reais balance
/ | Division | | | | 800 K / 2 = 400 K
* | Multiplication | | | | 400 reais * 2 = 800 reais

## 3. Let's describe the data! [🠡](intro)

So far we have seen:

1. what the different attribute types are
2. how we classify the values
3. which operations we can perform

Now we can **describe the data** using methods from **Descriptive Statistics**. The measures we will analyze are the following:

1. Frequency measure;
2. Central measures;
3. Dispersion measures.

Let's expand the table we used earlier to illustrate each of the measures! We will consider only the first 10 rows of the matrix.
Código | Nome | Idade | Sexo | Região | Escolaridade | Nota final
--- | --- | --- | --- | --- | --- | ---
1 | Mário | 20 | Male | Sudeste | High school | 70
2 | Julia | 19 | Female | Centro-oeste | High school | 73
3 | Clebson | 32 | Male | Nordeste | Higher education | 85
4 | Kelly | 43 | Female | Sudeste | High school | 75
5 | Salviano | 77 | Male | Norte | High school | 34
6 | Pietro | 17 | Male | Sul | Higher education | 43
7 | Jade | 24 | Female | Sul | Higher education | 62
8 | Gabrielly | 17 | Female | Nordeste | High school | 16
9 | Joesley | 56 | Male | Centro-oeste | High school | 64
10 | Paulo | 24 | Male | Sudeste | Higher education | 94
... | ... | ... | ... | ... | ... | ...

3.1 Frequency [🠡](intro)

**Frequency** is the best-known measure! It refers to how often a given value appears. Take the **Sexo** variable: the value **Male** appears 6 times and **Female** appears 4 times, so intuitively 60% are male and 40% are female.

To compute a frequency **x**, the _rule of three_ can be used. The formula is **number of rows * x = number of occurrences * 100**, where the number of occurrences is how many times the value appears in the selected rows.

To measure the frequency **x** of male individuals:

10 * x = 6 * 100
10x = 600
x = 600/10
x = 60%

3.2 Measures of central tendency [🠡](intro)

Mode [🠡](intro)

The first measure of central tendency is the **Mode**. It is usually applied to **nominal** data (although any type of structured data can be measured) and its goal is to **return the most common value**.

Let's compute the **mode of the Região variable** over the first 10 rows of the matrix.
To do so, we count the number of occurrences of each value and identify which one appears most often:

Região | Number of occurrences
--- | ---
Sul | 2
Sudeste | 3
Centro-oeste | 2
Norte | 1
Nordeste | 2

From this quick example we can see that **the mode of the Região variable is Sudeste**.

If you want to represent the frequency or the mode in a **chart**, prefer a pie chart. Ordering the values in a bar chart could falsely suggest that something is growing or shrinking, which should be avoided because it is not what this analysis is meant to convey.

Mean [🠡](intro)

To determine the central tendency of **quantitative variables**, we compute the **mean**: we **add** the values and **divide** by the total number of observations (rows considered).

To compute the mean of the **Idade** variable, we sum every value in the column and divide by the number of rows (i.e., by the number of values summed).
###Code
media_idade = (20 + 19 + 32 + 43 + 77 + 17 + 24 + 17 + 56 + 24) / 10
print("The mean of the Idade variable is:", media_idade)
###Output
The mean of the Idade variable is: 32.9
###Markdown
Median [🠡](intro)

**Do not confuse the mean with the median.** The median is the **central value** of an ordered set. Let's illustrate by computing the median of a fictitious **weight** variable:

Jorge | Matheus | Fernanda | Samanta | Carla
-- | -- | -- | -- | --
54.3 kg | 76.2 kg | 97.7 kg | 55.0 kg | 69.6 kg

To find the central value, we need to:

1. Sort the values (ascending or descending);

Jorge | Samanta | Carla | Matheus | Fernanda
-- | -- | -- | -- | --
54.3 kg | 55.0 kg | 69.6 kg | 76.2 kg | 97.7 kg

2. Check whether the number of elements in the set is **odd** or **even**:
   1. Odd: the **median** is the middle value, in this case **69.6 kg**;
   2. Even: the **median** is the sum of the two middle values divided by 2.

Let's add one more value to compute the median of an even-sized set:

Jorge | Samanta | Carla | Matheus | Fernanda | Roberto
-- | -- | -- | -- | -- | --
54.3 kg | 55.0 kg | 69.6 kg | 76.2 kg | 97.7 kg | 101.2 kg

Since the total number of elements is now **even**, the **median** is the sum of the two central values divided by two.
###Code
Carla = 69.6
Matheus = 76.2
mediana_par = (Carla + Matheus) / 2
print("The median of the set is:", mediana_par)
###Output
The median of the set is: 72.9
###Markdown
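The sort-then-check-parity procedure above can be wrapped in a small helper function. A minimal sketch (the function name `mediana` is ours, not from any library):

```python
def mediana(valores):
    ordenados = sorted(valores)        # step 1: sort the values
    n = len(ordenados)
    meio = n // 2
    if n % 2 == 1:                     # odd number of elements:
        return ordenados[meio]         # the median is the middle value
    # even number of elements: mean of the two middle values
    return (ordenados[meio - 1] + ordenados[meio]) / 2

pesos = [54.3, 76.2, 97.7, 55.0, 69.6]
print(mediana(pesos))            # odd-sized set
print(mediana(pesos + [101.2]))  # even-sized set
```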
When should you use the mean and the median? [🠡](intro)

Computing the mean and the median usually serves the same purpose, but in some cases the **median** is better and more reliable. That happens because some value distributions are very uneven! Imagine we want to compute the mean **height** of a basketball team.

Jersey | Name | Height
--- | --- | ---
1 | Miguel | 204 cm
2 | Joel | 192 cm
3 | Manoel | 198 cm
4 | Daniel | 202 cm
5 | Natanael | 189 cm
6 | Leonel | 195 cm
7 | Tafarel | 196 cm
8 | Josiel | 198 cm
9 | Rafael | 202 cm
10 | Gabriel | 199 cm
11 | Uliel | 192 cm
12 | Pedro | 1 cm

If we compute the **mean** of the **height** variable over the whole team, we get:
###Code
media_altura = (204 + 192 + 198 + 202 + 189 + 195 + 196 + 198 + 202 + 199 + 192 + 1) / 12
print("The mean height of the basketball team is:", media_altura, "centimeters")
###Output
The mean height of the basketball team is: 180.66666666666666 centimeters
###Markdown
If instead we compute the **median**, we get:
###Code
mediana_altura = (195 + 196) / 2
print("The median height of the basketball team is:", mediana_altura, "centimeters")
###Output
The median height of the basketball team is: 195.5 centimeters
###Markdown
In this case the **mean** cannot be trusted, because of an **outlier**. That outlier is **Pedro**, an ant. Being so small, he destabilized the mean!

It is not Pedro's fault: he is part of the basketball team and MUST BE COUNTED in the calculation, it is his right!

So that the presence of this **outlier** does not distort our analysis, **we will choose the median**. The more the outlier differs from the rest of the set, the greater the distance between the mean and the median.

**Now that you know how to do the calculation by hand, you can use a library to make the process faster!**
###Code
import numpy as np
time_basquete = [204, 192, 198, 202, 189, 195, 196, 198, 202, 199, 192, 1]
média_time_basquete = np.mean(time_basquete)
print("Mean:", média_time_basquete)
mediana_time_basquete = np.median(time_basquete)
print("Median:", mediana_time_basquete)
###Output
Mean: 180.66666666666666
Median: 197.0
###Markdown
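The frequency and mode from sections 3.1 and 3.2 can also be delegated to libraries, and `np.percentile` handles the quartiles discussed next. A minimal sketch, restating the Região and Idade columns of the 10-row sample table as plain lists:

```python
from collections import Counter
import numpy as np

# The Região and Idade columns from the 10-row sample table above.
regiao = ["Sudeste", "Centro-oeste", "Nordeste", "Sudeste", "Norte",
          "Sul", "Sul", "Nordeste", "Centro-oeste", "Sudeste"]
idade = [20, 19, 32, 43, 77, 17, 24, 17, 56, 24]

contagem = Counter(regiao)

# Frequency: the rule of three reduces to count / total * 100.
freq_sudeste = contagem["Sudeste"] / len(regiao) * 100

# Mode: the most common value.
moda = contagem.most_common(1)[0][0]

# Quartiles: np.percentile generalizes the median to any percentage.
q1, q2, q3 = np.percentile(idade, [25, 50, 75])

print("Frequency of Sudeste:", freq_sudeste, "%")
print("Mode of Região:", moda)
print("Q1, median and Q3 of Idade:", q1, q2, q3)
```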
Quartiles and percentiles [🠡](intro)

Once the median has been computed, we can also compute **quartiles** and **percentiles**.

With **quartiles**, we split the set into four parts, and the median becomes the 2nd quartile, or 50%: it is greater than 50% of the observations.

1. The 1st quartile is the value with **25%** of the other values below it;
2. The 2nd quartile is the value with **50%** of the other values below it;
3. The 3rd quartile is the value with **75%** of the other values below it.

1 | 2 | 3 | 4 | 5 | 6 | 7
- | - | - | - | - | - | -
 | 1st | | 2nd | | 3rd |

By identifying the values of **all three quartiles**, **we can analyze the whole data set more accurately**, especially when the set contains outliers!

The difference between quartiles and **percentiles** is that the latter are not restricted to 25%, 50%, and 75%: **any percentage can be used**.

Boxplot [🠡](intro)

The chart type used to represent quartiles and percentiles is the **boxplot**. Look at the image below, where:

1. the red box is the **distance between the first and the third quartile**;
2. the yellow line is the median of the set;
3. the whiskers at the edges are defined arbitrarily; they exclude from the representation any **outlier** values that would harm the visual analysis.

3.3 Measures of dispersion [🠡](intro)

These are the last kind of measures we will study in this guide. **Measures of dispersion** are useful when **different** data sets share **the same mean**, as in the example below.
###Code
X = [5, 6, 1, 4]
Y = [2, 6, 0, 8]
Z = [4, 4, 4, 4]
média_X = np.mean(X)
média_Y = np.mean(Y)
média_Z = np.mean(Z)
print("X:", média_X, "Y:", média_Y, "Z:", média_Z)
###Output
X: 4.0 Y: 4.0 Z: 4.0
###Markdown
The goal of dispersion measures is to quantify the **dispersion**/spread of a set of values. The question to ask is: **how are the values spread around the mean?**

To answer it, several different dispersion measures can be used, but the three most common are:

1. Variance;
2. Standard deviation;
3. IQR (interquartile range).

Variance [🠡](intro)

The variance is the expectation of the squared deviation of X from its own expectation:

E[(X - E(X))²]

In other words, the variance is **the mean squared distance from the mean**.

Standard deviation [🠡](intro)

The **standard deviation** is the square root of the variance, which brings the measure back to the original units of the data. When our data is only a **sample**, i.e., only part of all possible data, we use the sample version of these formulas. Example: a course has 800 enrolled students, but data was collected for only 100 of them, so the **sample standard deviation** should be considered. The difference between the population formula and the sample formula is that the latter divides by N - 1 instead of N.

"N" is the total size of the **population**/number of elements in a set. In the last sets we saw (X, Y, and Z), **N** equals **4**, so the sample calculation uses **N - 1 = 3**.

IQR [🠡](intro)

The **IQR**, or interquartile range, is another very important dispersion measure. When you read "interquartile", you probably remembered the quartiles and the boxplot chart. They are used when an analysis is sensitive to **outliers**, and the same applies here to the **IQR**: **if the mean is affected by outliers, so is the variance**.

We compute the interquartile range as the **third** quartile **minus** the **first** quartile:

IQR = Q3 - Q1

4.
Comfort message + a small code application [🠡](intro)

Like me, you may have felt quite uncomfortable with these last dispersion measures, because their complexity **is** real, even if you have seen people discuss them as if they were the most natural thing in the world. **Don't worry**: what matters is understanding **when** you will use these calculations and **why** they produced a given result.

Nowadays there are several programming languages with good libraries built specifically to perform all of these calculations in a single line of code! Python is the language most used today for Data Science and Machine Learning, and you are on the right track.

To wrap up, I will run below the variance calculation together with the **interquartile range**. For that, I will use the **numpy** library to compute the **mean** and the **scipy** library to compute the **IQR**.
###Code
from scipy.stats import iqr
def variancia(conjunto): # function that computes the variance of a set
    media = np.mean(conjunto) # first compute the mean of the set
    N = len(conjunto) # as we saw, N is the number of elements in the sample/set
    variancia = 0 # starts at zero and is accumulated below
    for i in np.arange(0, len(conjunto)): # for each index "i" from 0 up to the length of the set
        variancia = variancia + (conjunto[i]-media)**2 # add the squared deviation from the mean
    variancia = variancia/(N-1) # sample variance: divide by N - 1
    return variancia
def print_variancia(conjunto):
    print("Mean of", conjunto, " =", np.mean(conjunto))
    print("Variance of", conjunto, " =", variancia(conjunto))
    print("IQR of", conjunto, " =", iqr(conjunto))
    print("Range of", conjunto, " =", np.max(conjunto)-np.min(conjunto))
    print("")
X = [5, 6, 1, 4]
Y = [2, 6, 0, 8]
Z = [4, 4, 4, 4]
QUALQUEROUTRA = [0, 0, 1, 1, 18]
print_variancia(X)
print_variancia(Y)
print_variancia(Z)
print_variancia(QUALQUEROUTRA)
###Output
Mean of [5, 6, 1, 4]  = 4.0
Variance of [5, 6, 1, 4]  = 4.666666666666667
IQR of [5, 6, 1, 4]  = 2.0
Range of [5, 6, 1, 4]  = 5
Mean of [2, 6, 0, 8]  = 4.0
Variance of [2, 6, 0, 8]  = 13.333333333333334
IQR of [2, 6, 0, 8]  = 5.0
Range of [2, 6, 0, 8]  = 8
Mean of [4, 4, 4, 4]  = 4.0
Variance of [4, 4, 4, 4]  = 0.0
IQR of [4, 4, 4, 4]  = 0.0
Range of [4, 4, 4, 4]  = 0
Mean of [0, 0, 1, 1, 18]  = 4.0
Variance of [0, 0, 1, 1, 18]  = 61.5
IQR of [0, 0, 1, 1, 18]  = 1.0
Range of [0, 0, 1, 1, 18]  = 18
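As a cross-check of the hand-written loop above, numpy's built-ins give the same numbers; a quick sketch (the `ddof=1` argument applies the same N - 1 sample correction used in `variancia()`):

```python
import numpy as np

X = [5, 6, 1, 4]

# Population variance: the mean squared distance from the mean (divides by N).
var_pop = np.var(X)

# Sample variance and standard deviation: ddof=1 divides by N - 1 instead.
var_amostral = np.var(X, ddof=1)
desvio_padrao = np.std(X, ddof=1)

# The standard deviation is simply the square root of the variance.
print(var_pop, var_amostral, desvio_padrao)
```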
1_Neural Networks and Deep Learning/Week 3/Planar_data_classification_with_one_hidden_layer.ipynb | ###Markdown
Planar data classification with one hidden layerWelcome to your week 3 programming assignment. It's time to build your first neural network, which will have a hidden layer. You will see a big difference between this model and the one you implemented using logistic regression. **You will learn how to:**- Implement a 2-class classification neural network with a single hidden layer- Use units with a non-linear activation function, such as tanh - Compute the cross entropy loss - Implement forward and backward propagation 1 - Packages Let's first import all the packages that you will need during this assignment.- [numpy](www.numpy.org) is the fundamental package for scientific computing with Python.- [sklearn](http://scikit-learn.org/stable/) provides simple and efficient tools for data mining and data analysis. - [matplotlib](http://matplotlib.org) is a library for plotting graphs in Python.- testCases provides some test examples to assess the correctness of your functions- planar_utils provide various useful functions used in this assignment
###Code
# Package imports
import numpy as np
import matplotlib.pyplot as plt
from testCases import *
import sklearn
import sklearn.datasets
import sklearn.linear_model
from planar_utils import plot_decision_boundary, sigmoid, load_planar_dataset, load_extra_datasets
%matplotlib inline
np.random.seed(1) # set a seed so that the results are consistent
###Output
_____no_output_____
###Markdown
2 - Dataset First, let's get the dataset you will work on. The following code will load a "flower" 2-class dataset into variables `X` and `Y`.
###Code
X, Y = load_planar_dataset()
###Output
_____no_output_____
###Markdown
Visualize the dataset using matplotlib. The data looks like a "flower" with some red (label y=0) and some blue (y=1) points. Your goal is to build a model to fit this data.
###Code
# Visualize the data:
plt.scatter(X[0, :], X[1, :], c=Y[0,:], s=40, cmap=plt.cm.Spectral);
###Output
_____no_output_____
###Markdown
You have:
- a numpy-array (matrix) X that contains your features (x1, x2)
- a numpy-array (vector) Y that contains your labels (red:0, blue:1).

Let's first get a better sense of what our data is like.

**Exercise**: How many training examples do you have? In addition, what is the `shape` of the variables `X` and `Y`?

**Hint**: How do you get the shape of a numpy array? [(help)](https://docs.scipy.org/doc/numpy/reference/generated/numpy.ndarray.shape.html)
###Code
### START CODE HERE ### (≈ 3 lines of code)
m=X.shape[1]
shape_X=X.shape
shape_Y=Y.shape
### END CODE HERE ###
print ('The shape of X is: ' + str(shape_X))
print ('The shape of Y is: ' + str(shape_Y))
print ('I have m = %d training examples!' % (m))
###Output
The shape of X is: (2, 400)
The shape of Y is: (1, 400)
I have m = 400 training examples!
###Markdown
**Expected Output**: **shape of X** (2, 400) **shape of Y** (1, 400) **m** 400 3 - Simple Logistic RegressionBefore building a full neural network, lets first see how logistic regression performs on this problem. You can use sklearn's built-in functions to do that. Run the code below to train a logistic regression classifier on the dataset.
###Code
# Train the logistic regression classifier
clf = sklearn.linear_model.LogisticRegressionCV();
clf.fit(X.T, (Y.T).ravel());
###Output
_____no_output_____
###Markdown
You can now plot the decision boundary of these models. Run the code below.
###Code
# Plot the decision boundary for logistic regression
Y1=Y.ravel()
plot_decision_boundary(lambda x: clf.predict(x), X, Y1)
plt.title("Logistic Regression")
# Print accuracy
LR_predictions = clf.predict(X.T)
print ('Accuracy of logistic regression: %d ' % float((np.dot(Y1,LR_predictions) + np.dot(1-Y1,1-LR_predictions))/float(Y1.size)*100) +
'% ' + "(percentage of correctly labelled datapoints)")
###Output
Accuracy of logistic regression: 47 % (percentage of correctly labelled datapoints)
###Markdown
**Expected Output**: **Accuracy** 47%

**Interpretation**: The dataset is not linearly separable, so logistic regression doesn't perform well. Hopefully a neural network will do better. Let's try this now!

4 - Neural Network model

Logistic regression did not work well on the "flower dataset". You are going to train a Neural Network with a single hidden layer.

**Here is our model**:

**Mathematically**, for one example $x^{(i)}$:

$$z^{[1] (i)} = W^{[1]} x^{(i)} + b^{[1] (i)}\tag{1}$$
$$a^{[1] (i)} = \tanh(z^{[1] (i)})\tag{2}$$
$$z^{[2] (i)} = W^{[2]} a^{[1] (i)} + b^{[2] (i)}\tag{3}$$
$$\hat{y}^{(i)} = a^{[2] (i)} = \sigma(z^{ [2] (i)})\tag{4}$$
$$y^{(i)}_{prediction} = \begin{cases} 1 & \mbox{if } a^{[2](i)} > 0.5 \\ 0 & \mbox{otherwise } \end{cases}\tag{5}$$

Given the predictions on all the examples, you can also compute the cost $J$ as follows:

$$J = - \frac{1}{m} \sum\limits_{i = 0}^{m} \large\left(\small y^{(i)}\log\left(a^{[2] (i)}\right) + (1-y^{(i)})\log\left(1- a^{[2] (i)}\right) \large \right) \small \tag{6}$$

**Reminder**: The general methodology to build a Neural Network is to:

1. Define the neural network structure (# of input units, # of hidden units, etc.).
2. Initialize the model's parameters.
3. Loop:
    - Implement forward propagation
    - Compute loss
    - Implement backward propagation to get the gradients
    - Update parameters (gradient descent)

You often build helper functions to compute steps 1-3 and then merge them into one function we call `nn_model()`. Once you've built `nn_model()` and learnt the right parameters, you can make predictions on new data.

4.1 - Defining the neural network structure

**Exercise**: Define three variables:

- n_x: the size of the input layer
- n_h: the size of the hidden layer (set this to 4)
- n_y: the size of the output layer

**Hint**: Use shapes of X and Y to find n_x and n_y. Also, hard code the hidden layer size to be 4.
###Code
# GRADED FUNCTION: layer_sizes
def layer_sizes(X, Y):
"""
Arguments:
X -- input dataset of shape (input size, number of examples)
Y -- labels of shape (output size, number of examples)
Returns:
n_x -- the size of the input layer
n_h -- the size of the hidden layer
n_y -- the size of the output layer
"""
### START CODE HERE ### (≈ 3 lines of code)
n_x=X.shape[0]
n_h=4
n_y=Y.shape[0]
### END CODE HERE ###
return (n_x, n_h, n_y)
X_assess, Y_assess = layer_sizes_test_case()
(n_x, n_h, n_y) = layer_sizes(X_assess, Y_assess)
print("The size of the input layer is: n_x = " + str(n_x))
print("The size of the hidden layer is: n_h = " + str(n_h))
print("The size of the output layer is: n_y = " + str(n_y))
###Output
The size of the input layer is: n_x = 5
The size of the hidden layer is: n_h = 4
The size of the output layer is: n_y = 2
###Markdown
**Expected Output** (these are not the sizes you will use for your network, they are just used to assess the function you've just coded). **n_x** 5 **n_h** 4 **n_y** 2 4.2 - Initialize the model's parameters **Exercise**: Implement the function `initialize_parameters()`.**Instructions**:- Make sure your parameters' sizes are right. Refer to the neural network figure above if needed.- You will initialize the weights matrices with random values. - Use: `np.random.randn(a,b) * 0.01` to randomly initialize a matrix of shape (a,b).- You will initialize the bias vectors as zeros. - Use: `np.zeros((a,b))` to initialize a matrix of shape (a,b) with zeros.
###Code
# GRADED FUNCTION: initialize_parameters
def initialize_parameters(n_x, n_h, n_y):
"""
Argument:
n_x -- size of the input layer
n_h -- size of the hidden layer
n_y -- size of the output layer
Returns:
params -- python dictionary containing your parameters:
W1 -- weight matrix of shape (n_h, n_x)
b1 -- bias vector of shape (n_h, 1)
W2 -- weight matrix of shape (n_y, n_h)
b2 -- bias vector of shape (n_y, 1)
"""
np.random.seed(2) # we set up a seed so that your output matches ours although the initialization is random.
### START CODE HERE ### (≈ 4 lines of code)
W1=np.random.randn(n_h,n_x)*0.01
b1=np.zeros((n_h,1))
W2=np.random.randn(n_y,n_h)*0.01
b2=np.zeros((n_y,1))
### END CODE HERE ###
assert (W1.shape == (n_h, n_x))
assert (b1.shape == (n_h, 1))
assert (W2.shape == (n_y, n_h))
assert (b2.shape == (n_y, 1))
parameters = {"W1": W1,
"b1": b1,
"W2": W2,
"b2": b2}
return parameters
n_x, n_h, n_y = initialize_parameters_test_case()
parameters = initialize_parameters(n_x, n_h, n_y)
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
###Output
W1 = [[-0.00416758 -0.00056267]
[-0.02136196 0.01640271]
[-0.01793436 -0.00841747]
[ 0.00502881 -0.01245288]]
b1 = [[0.]
[0.]
[0.]
[0.]]
W2 = [[-0.01057952 -0.00909008 0.00551454 0.02292208]]
b2 = [[0.]]
###Markdown
**Expected Output**: **W1** [[-0.00416758 -0.00056267] [-0.02136196 0.01640271] [-0.01793436 -0.00841747] [ 0.00502881 -0.01245288]] **b1** [[ 0.] [ 0.] [ 0.] [ 0.]] **W2** [[-0.01057952 -0.00909008 0.00551454 0.02292208]] **b2** [[ 0.]] 4.3 - The Loop **Question**: Implement `forward_propagation()`.**Instructions**:- Look above at the mathematical representation of your classifier.- You can use the function `sigmoid()`. It is built-in (imported) in the notebook.- You can use the function `np.tanh()`. It is part of the numpy library.- The steps you have to implement are: 1. Retrieve each parameter from the dictionary "parameters" (which is the output of `initialize_parameters()`) by using `parameters[".."]`. 2. Implement Forward Propagation. Compute $Z^{[1]}, A^{[1]}, Z^{[2]}$ and $A^{[2]}$ (the vector of all your predictions on all the examples in the training set).- Values needed in the backpropagation are stored in "`cache`". The `cache` will be given as an input to the backpropagation function.
###Code
# GRADED FUNCTION: forward_propagation
def forward_propagation(X, parameters):
"""
Argument:
X -- input data of size (n_x, m)
parameters -- python dictionary containing your parameters (output of initialization function)
Returns:
A2 -- The sigmoid output of the second activation
cache -- a dictionary containing "Z1", "A1", "Z2" and "A2"
"""
# Retrieve each parameter from the dictionary "parameters"
### START CODE HERE ### (≈ 4 lines of code)
W1=parameters["W1"]
b1=parameters["b1"]
W2=parameters["W2"]
b2=parameters["b2"]
### END CODE HERE ###
# Implement Forward Propagation to calculate A2 (probabilities)
### START CODE HERE ### (≈ 4 lines of code)
Z1=W1@X+b1
A1=np.tanh(Z1)
Z2=W2@A1+b2
A2=sigmoid(Z2)
### END CODE HERE ###
assert(A2.shape == (1, X.shape[1]))
cache = {"Z1": Z1,
"A1": A1,
"Z2": Z2,
"A2": A2}
return A2, cache
X_assess, parameters = forward_propagation_test_case()
A2, cache = forward_propagation(X_assess, parameters)
# Note: we use the mean here just to make sure that your output matches ours.
print(np.mean(cache['Z1']) ,np.mean(cache['A1']),np.mean(cache['Z2']),np.mean(cache['A2']))
###Output
0.26281864019752443 0.09199904522700109 -1.3076660128732143 0.21287768171914198
###Markdown
**Expected Output**: 0.262818640198 0.091999045227 -1.30766601287 0.212877681719

Now that you have computed $A^{[2]}$ (in the Python variable "`A2`"), which contains $a^{[2](i)}$ for every example, you can compute the cost function as follows:

$$J = - \frac{1}{m} \sum\limits_{i = 0}^{m} \large{(} \small y^{(i)}\log\left(a^{[2] (i)}\right) + (1-y^{(i)})\log\left(1- a^{[2] (i)}\right) \large{)} \small\tag{13}$$

**Exercise**: Implement `compute_cost()` to compute the value of the cost $J$.

**Instructions**:
- There are many ways to implement the cross-entropy loss. To help you, we give you how we would have implemented $- \sum\limits_{i=0}^{m} y^{(i)}\log(a^{[2](i)})$:

```python
logprobs = np.multiply(np.log(A2), Y)
cost = - np.sum(logprobs)   # no need to use a for loop!
```

(you can use either `np.multiply()` and then `np.sum()` or directly `np.dot()`).
###Code
# GRADED FUNCTION: compute_cost
def compute_cost(A2, Y, parameters):
"""
Computes the cross-entropy cost given in equation (13)
Arguments:
A2 -- The sigmoid output of the second activation, of shape (1, number of examples)
Y -- "true" labels vector of shape (1, number of examples)
parameters -- python dictionary containing your parameters W1, b1, W2 and b2
Returns:
cost -- cross-entropy cost given equation (13)
"""
    m = Y.shape[1] # number of examples
# Compute the cross-entropy cost
### START CODE HERE ### (≈ 2 lines of code)
cost=(-1/m)*np.sum((np.dot(Y,(np.log(A2)).T)+np.dot(1-Y,(np.log(1-A2)).T)))
### END CODE HERE ###
cost = np.squeeze(cost) # makes sure cost is the dimension we expect.
# E.g., turns [[17]] into 17
assert(isinstance(cost, float))
return cost
A2, Y_assess, parameters = compute_cost_test_case()
print("cost = " + str(compute_cost(A2, Y_assess, parameters)))
###Output
cost = 0.6930587610394646
###Markdown
**Expected Output**: **cost** 0.6930587610394646 Using the cache computed during forward propagation, you can now implement backward propagation.**Question**: Implement the function `backward_propagation()`.**Instructions**:Backpropagation is usually the hardest (most mathematical) part in deep learning. To help you, here again is the slide from the lecture on backpropagation. You'll want to use the six equations on the right of this slide, since you are building a vectorized implementation. <!--$\frac{\partial \mathcal{J} }{ \partial z_{2}^{(i)} } = \frac{1}{m} (a^{[2](i)} - y^{(i)})$$\frac{\partial \mathcal{J} }{ \partial W_2 } = \frac{\partial \mathcal{J} }{ \partial z_{2}^{(i)} } a^{[1] (i) T} $$\frac{\partial \mathcal{J} }{ \partial b_2 } = \sum_i{\frac{\partial \mathcal{J} }{ \partial z_{2}^{(i)}}}$$\frac{\partial \mathcal{J} }{ \partial z_{1}^{(i)} } = W_2^T \frac{\partial \mathcal{J} }{ \partial z_{2}^{(i)} } * ( 1 - a^{[1] (i) 2}) $$\frac{\partial \mathcal{J} }{ \partial W_1 } = \frac{\partial \mathcal{J} }{ \partial z_{1}^{(i)} } X^T $$\frac{\partial \mathcal{J} _i }{ \partial b_1 } = \sum_i{\frac{\partial \mathcal{J} }{ \partial z_{1}^{(i)}}}$- Note that $*$ denotes elementwise multiplication.- The notation you will use is common in deep learning coding: - dW1 = $\frac{\partial \mathcal{J} }{ \partial W_1 }$ - db1 = $\frac{\partial \mathcal{J} }{ \partial b_1 }$ - dW2 = $\frac{\partial \mathcal{J} }{ \partial W_2 }$ - db2 = $\frac{\partial \mathcal{J} }{ \partial b_2 }$ !-->- Tips: - To compute dZ1 you'll need to compute $g^{[1]'}(Z^{[1]})$. Since $g^{[1]}(.)$ is the tanh activation function, if $a = g^{[1]}(z)$ then $g^{[1]'}(z) = 1-a^2$. So you can compute $g^{[1]'}(Z^{[1]})$ using `(1 - np.power(A1, 2))`.
###Code
# GRADED FUNCTION: backward_propagation
def backward_propagation(parameters, cache, X, Y):
"""
Implement the backward propagation using the instructions above.
Arguments:
parameters -- python dictionary containing our parameters
cache -- a dictionary containing "Z1", "A1", "Z2" and "A2".
X -- input data of shape (2, number of examples)
Y -- "true" labels vector of shape (1, number of examples)
Returns:
grads -- python dictionary containing your gradients with respect to different parameters
"""
m = X.shape[1]
# First, retrieve W1 and W2 from the dictionary "parameters".
### START CODE HERE ### (≈ 2 lines of code)
W1=parameters["W1"]
W2=parameters["W2"]
### END CODE HERE ###
# Retrieve also A1 and A2 from dictionary "cache".
### START CODE HERE ### (≈ 2 lines of code)
A1=cache["A1"]
A2=cache["A2"]
### END CODE HERE ###
# Backward propagation: calculate dW1, db1, dW2, db2.
### START CODE HERE ### (≈ 6 lines of code, corresponding to 6 equations on slide above)
dZ2=A2-Y
dW2=(1/m)*np.dot(dZ2,A1.T)
db2=(1/m)*np.sum(dZ2,axis=1,keepdims=True)
dZ1=np.dot(W2.T,dZ2)*(1-np.power(A1,2))
dW1=(1/m)*np.dot(dZ1,X.T)
db1=(1/m)*np.sum(dZ1,axis=1,keepdims=True)
### END CODE HERE ###
grads = {"dW1": dW1,
"db1": db1,
"dW2": dW2,
"db2": db2}
return grads
parameters, cache, X_assess, Y_assess = backward_propagation_test_case()
grads = backward_propagation(parameters, cache, X_assess, Y_assess)
print ("dW1 = "+ str(grads["dW1"]))
print ("db1 = "+ str(grads["db1"]))
print ("dW2 = "+ str(grads["dW2"]))
print ("db2 = "+ str(grads["db2"]))
###Output
dW1 = [[ 0.00301023 -0.00747267]
[ 0.00257968 -0.00641288]
[-0.00156892 0.003893 ]
[-0.00652037 0.01618243]]
db1 = [[ 0.00176201]
[ 0.00150995]
[-0.00091736]
[-0.00381422]]
dW2 = [[ 0.00078841 0.01765429 -0.00084166 -0.01022527]]
db2 = [[-0.16655712]]
###Markdown
**Expected output**: **dW1** [[ 0.00301023 -0.00747267] [ 0.00257968 -0.00641288] [-0.00156892 0.003893 ] [-0.00652037 0.01618243]] **db1** [[ 0.00176201] [ 0.00150995] [-0.00091736] [-0.00381422]] **dW2** [[ 0.00078841 0.01765429 -0.00084166 -0.01022527]] **db2** [[-0.16655712]] **Question**: Implement the update rule. Use gradient descent. You have to use (dW1, db1, dW2, db2) in order to update (W1, b1, W2, b2).**General gradient descent rule**: $ \theta = \theta - \alpha \frac{\partial J }{ \partial \theta }$ where $\alpha$ is the learning rate and $\theta$ represents a parameter.**Illustration**: The gradient descent algorithm with a good learning rate (converging) and a bad learning rate (diverging). Images courtesy of Adam Harley.
###Code
# GRADED FUNCTION: update_parameters
def update_parameters(parameters, grads, learning_rate = 1.2):
"""
Updates parameters using the gradient descent update rule given above
Arguments:
parameters -- python dictionary containing your parameters
grads -- python dictionary containing your gradients
Returns:
parameters -- python dictionary containing your updated parameters
"""
# Retrieve each parameter from the dictionary "parameters"
### START CODE HERE ### (≈ 4 lines of code)
W1=parameters["W1"]
b1=parameters["b1"]
W2=parameters["W2"]
b2=parameters["b2"]
### END CODE HERE ###
# Retrieve each gradient from the dictionary "grads"
### START CODE HERE ### (≈ 4 lines of code)
dW1=grads["dW1"]
db1=grads["db1"]
dW2=grads["dW2"]
db2=grads["db2"]
    ### END CODE HERE ###
# Update rule for each parameter
### START CODE HERE ### (≈ 4 lines of code)
W1-=learning_rate*dW1
b1-=learning_rate*db1
W2-=learning_rate*dW2
b2-=learning_rate*db2
### END CODE HERE ###
parameters = {"W1": W1,
"b1": b1,
"W2": W2,
"b2": b2}
return parameters
parameters, grads = update_parameters_test_case()
parameters = update_parameters(parameters, grads)
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
###Output
W1 = [[-0.00643025 0.01936718]
[-0.02410458 0.03978052]
[-0.01653973 -0.02096177]
[ 0.01046864 -0.05990141]]
b1 = [[-1.02420756e-06]
[ 1.27373948e-05]
[ 8.32996807e-07]
[-3.20136836e-06]]
W2 = [[-0.01041081 -0.04463285 0.01758031 0.04747113]]
b2 = [[0.00010457]]
###Markdown
**Expected Output**: **W1** [[-0.00643025 0.01936718] [-0.02410458 0.03978052] [-0.01653973 -0.02096177] [ 0.01046864 -0.05990141]] **b1** [[ -1.02420756e-06] [ 1.27373948e-05] [ 8.32996807e-07] [ -3.20136836e-06]] **W2** [[-0.01041081 -0.04463285 0.01758031 0.04747113]] **b2** [[ 0.00010457]] 4.4 - Integrate parts 4.1, 4.2 and 4.3 in nn_model() **Question**: Build your neural network model in `nn_model()`.**Instructions**: The neural network model has to use the previous functions in the right order.
###Code
# GRADED FUNCTION: nn_model
def nn_model(X, Y, n_h, num_iterations = 10000, print_cost=False):
"""
Arguments:
X -- dataset of shape (2, number of examples)
Y -- labels of shape (1, number of examples)
n_h -- size of the hidden layer
num_iterations -- Number of iterations in gradient descent loop
print_cost -- if True, print the cost every 1000 iterations
Returns:
parameters -- parameters learnt by the model. They can then be used to predict.
"""
np.random.seed(3)
n_x = layer_sizes(X, Y)[0]
n_y = layer_sizes(X, Y)[2]
# Initialize parameters, then retrieve W1, b1, W2, b2. Inputs: "n_x, n_h, n_y". Outputs = "W1, b1, W2, b2, parameters".
### START CODE HERE ### (≈ 5 lines of code)
parameters = initialize_parameters(n_x, n_h, n_y)
### END CODE HERE ###
# Loop (gradient descent)
for i in range(0, num_iterations):
### START CODE HERE ### (≈ 4 lines of code)
# Forward propagation. Inputs: "X, parameters". Outputs: "A2, cache".
A2, cache = forward_propagation(X, parameters)
# Cost function. Inputs: "A2, Y, parameters". Outputs: "cost".
cost=compute_cost(A2, Y, parameters)
# Backpropagation. Inputs: "parameters, cache, X, Y". Outputs: "grads".
grads = backward_propagation(parameters, cache, X, Y)
# Gradient descent parameter update. Inputs: "parameters, grads". Outputs: "parameters".
parameters = update_parameters(parameters, grads)
### END CODE HERE ###
# Print the cost every 1000 iterations
if print_cost and i % 1000 == 0:
print ("Cost after iteration %i: %f" %(i, cost))
return parameters
X_assess, Y_assess = nn_model_test_case()
parameters = nn_model(X_assess, Y_assess, 4, num_iterations=10000, print_cost=True)
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
###Output
Cost after iteration 0: 0.692739
Cost after iteration 1000: 0.000218
Cost after iteration 2000: 0.000107
Cost after iteration 3000: 0.000071
Cost after iteration 4000: 0.000053
Cost after iteration 5000: 0.000042
Cost after iteration 6000: 0.000035
Cost after iteration 7000: 0.000030
Cost after iteration 8000: 0.000026
Cost after iteration 9000: 0.000023
W1 = [[-0.65848169 1.21866811]
[-0.76204273 1.39377573]
[ 0.5792005 -1.10397703]
[ 0.76773391 -1.41477129]]
b1 = [[ 0.287592 ]
[ 0.3511264 ]
[-0.2431246 ]
[-0.35772805]]
W2 = [[-2.45566237 -3.27042274 2.00784958 3.36773273]]
b2 = [[0.20459656]]
###Markdown
**Expected Output**: **cost after iteration 0** 0.692739 $\vdots$ $\vdots$ **W1** [[-0.65848169 1.21866811] [-0.76204273 1.39377573] [ 0.5792005 -1.10397703] [ 0.76773391 -1.41477129]] **b1** [[ 0.287592 ] [ 0.3511264 ] [-0.2431246 ] [-0.35772805]] **W2** [[-2.45566237 -3.27042274 2.00784958 3.36773273]] **b2** [[ 0.20459656]] 4.5 Predictions**Question**: Use your model to predict by building predict().Use forward propagation to predict results.**Reminder**: predictions = $y_{prediction} = \mathbb 1 \text{{activation > 0.5}} = \begin{cases} 1 & \text{if}\ activation > 0.5 \\ 0 & \text{otherwise} \end{cases}$ As an example, if you would like to set the entries of a matrix X to 0 and 1 based on a threshold you would do: ```X_new = (X > threshold)```
###Code
# GRADED FUNCTION: predict
def predict(parameters, X):
"""
Using the learned parameters, predicts a class for each example in X
Arguments:
parameters -- python dictionary containing your parameters
X -- input data of size (n_x, m)
Returns
predictions -- vector of predictions of our model (red: 0 / blue: 1)
"""
# Computes probabilities using forward propagation, and classifies to 0/1 using 0.5 as the threshold.
### START CODE HERE ### (≈ 2 lines of code)
A2, _ = forward_propagation(X, parameters)
predictions=(A2>0.5)*1
### END CODE HERE ###
return predictions
parameters, X_assess = predict_test_case()
predictions = predict(parameters, X_assess)
print("predictions mean = " + str(np.mean(predictions)))
###Output
predictions mean = 0.6666666666666666
###Markdown
**Expected Output**: **predictions mean** 0.666666666667 It is time to run the model and see how it performs on a planar dataset. Run the following code to test your model with a single hidden layer of $n_h$ hidden units.
###Code
# Build a model with a n_h-dimensional hidden layer
parameters = nn_model(X, Y, n_h = 4, num_iterations = 10000, print_cost=True)
# Plot the decision boundary
plot_decision_boundary(lambda x: predict(parameters, x.T), X, Y.ravel())
plt.title("Decision Boundary for hidden layer size " + str(4))
###Output
Cost after iteration 0: 0.693048
Cost after iteration 1000: 0.288083
Cost after iteration 2000: 0.254385
Cost after iteration 3000: 0.233864
Cost after iteration 4000: 0.226792
Cost after iteration 5000: 0.222644
Cost after iteration 6000: 0.219731
Cost after iteration 7000: 0.217504
Cost after iteration 8000: 0.219469
Cost after iteration 9000: 0.218611
###Markdown
**Expected Output**: **Cost after iteration 9000** 0.218607
###Code
# Print accuracy
predictions = predict(parameters, X)
print ('Accuracy: %d' % float((np.dot(Y,predictions.T) + np.dot(1-Y,1-predictions.T))/float(Y.size)*100) + '%')
###Output
Accuracy: 90%
###Markdown
**Expected Output**: **Accuracy** 90% Accuracy is really high compared to Logistic Regression. The model has learnt the leaf patterns of the flower! Neural networks are able to learn even highly non-linear decision boundaries, unlike logistic regression. Now, let's try out several hidden layer sizes. 4.6 - Tuning hidden layer size (optional/ungraded exercise) Run the following code. It may take 1-2 minutes. You will observe different behaviors of the model for various hidden layer sizes.
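Aside: the `Accuracy` line above counts agreements with two dot products — `Y·Pᵀ` counts true positives and `(1−Y)·(1−P)ᵀ` counts true negatives. A standalone check of the same trick on toy labels (hypothetical values, not the flower data):

```python
import numpy as np

Y = np.array([[1, 0, 1, 1]])  # toy true labels, shape (1, m)
P = np.array([[1, 0, 0, 1]])  # toy predictions
# true positives + true negatives, computed as two dot products
matches = (np.dot(Y, P.T) + np.dot(1 - Y, 1 - P.T))[0, 0]
accuracy = 100.0 * matches / Y.size
print(accuracy)  # 75.0 -- three of the four examples match
```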
###Code
# This may take about 2 minutes to run
plt.figure(figsize=(16, 32))
hidden_layer_sizes = [1, 2, 3, 4, 5, 20, 50]
for i, n_h in enumerate(hidden_layer_sizes):
plt.subplot(5, 2, i+1)
plt.title('Hidden Layer of size %d' % n_h)
parameters = nn_model(X, Y, n_h, num_iterations = 5000)
plot_decision_boundary(lambda x: predict(parameters, x.T), X, Y.ravel())
predictions = predict(parameters, X)
accuracy = float((np.dot(Y,predictions.T) + np.dot(1-Y,1-predictions.T))/float(Y.size)*100)
print ("Accuracy for {} hidden units: {} %".format(n_h, accuracy))
###Output
Accuracy for 1 hidden units: 67.5 %
Accuracy for 2 hidden units: 67.25 %
Accuracy for 3 hidden units: 90.75 %
Accuracy for 4 hidden units: 90.5 %
Accuracy for 5 hidden units: 91.25 %
Accuracy for 20 hidden units: 90.0 %
Accuracy for 50 hidden units: 90.25 %
###Markdown
**Interpretation**:- The larger models (with more hidden units) are able to fit the training set better, until eventually the largest models overfit the data. - The best hidden layer size seems to be around n_h = 5. Indeed, a value around here seems to fit the data well without incurring noticeable overfitting.- You will also learn later about regularization, which lets you use very large models (such as n_h = 50) without much overfitting. **Optional questions**:**Note**: Remember to submit the assignment by clicking the blue "Submit Assignment" button at the upper-right. Some optional/ungraded questions that you can explore if you wish: - What happens when you swap the tanh activation for a sigmoid activation or a ReLU activation?- Play with the learning_rate. What happens?- What if we change the dataset? (See part 5 below!) **You've learnt to:**- Build a complete neural network with a hidden layer- Make good use of a non-linear unit- Implement forward propagation and backpropagation, and train a neural network- See the impact of varying the hidden layer size, including overfitting. Nice work! 5) Performance on other datasets If you want, you can rerun the whole notebook (minus the dataset part) for each of the following datasets.
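On the first optional question above (swapping tanh for sigmoid or ReLU), the three candidate hidden-layer activations can be compared side by side — a standalone sketch, not part of the graded code:

```python
import numpy as np

def activate(z, kind="tanh"):
    """Hypothetical helper comparing the activations discussed above."""
    if kind == "tanh":
        return np.tanh(z)                 # output in (-1, 1), zero-centered
    if kind == "sigmoid":
        return 1.0 / (1.0 + np.exp(-z))   # output in (0, 1)
    if kind == "relu":
        return np.maximum(0.0, z)         # output in [0, inf)
    raise ValueError(f"unknown activation: {kind}")
```

Trying the experiment amounts to replacing the tanh call in your forward pass with one of these (and updating the corresponding derivative term in backpropagation).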
###Code
# Datasets
noisy_circles, noisy_moons, blobs, gaussian_quantiles, no_structure = load_extra_datasets()
datasets = {"noisy_circles": noisy_circles,
"noisy_moons": noisy_moons,
"blobs": blobs,
"gaussian_quantiles": gaussian_quantiles}
### START CODE HERE ### (choose your dataset)
dataset = "noisy_moons"  # example choice -- any key from the datasets dict above works
### END CODE HERE ###
X, Y = datasets[dataset]
X, Y = X.T, Y.reshape(1, Y.shape[0])
# make blobs binary
if dataset == "blobs":
Y = Y%2
# Visualize the data
plt.scatter(X[0, :], X[1, :], c=Y, s=40, cmap=plt.cm.Spectral);
###Output
_____no_output_____ |
src/notebooks/mixs-to-rdf/mixs-to-rdf.ipynb | ###Markdown
MIxS to RDF This notebook demonstrates how to use the mixs_to_rdf library to convert MIxS spreadsheets to RDF. Load mixs-to-rdf library* In order to find the library, you need to add the path to the system. * rdflib is needed in order to work with output graphs
###Code
import os, sys
sys.path.append(os.path.abspath('../../code/mixs_to_rdf/')) # add mixs_to_rdf module to sys path
from mixs_file_to_rdf import mixs_package_file_to_rdf, mixs_package_directory_to_rdf
from rdflib import Graph
###Output
_____no_output_____
###Markdown
Review help information for mixs_package_file_to_rdf function.
###Code
help(mixs_package_file_to_rdf)
###Output
Help on function mixs_package_file_to_rdf in module mixs_file_to_rdf:
mixs_package_file_to_rdf(file_name, mixs_version, package_name='', term_type='class', file_type='excel', sep='\t', base_iri='https://gensc.org/mixs#', ontology_iri='https://gensc.org/mixs.owl', output_file='', ontology_format='turtle', print_output=False)
Builds an ontology (rdflib graph) from a MIxS package file.
Args:
file_name: The name of MIxS package file.
mixs_version: The version of MIxS package.
package_name: Overrides the package name provided in the package Excel spreadsheet.
This argument is necessary when the input file is a csv or tsv.
term_type: Specifies if the MIxS terms will be represented as classes or data properties.
Accepted values: 'class', 'data property'
Default: 'class'
file_type: The type of file being processed.
If file type is not 'excel', a field separator/delimitor must be provided.
Default: 'excel'
sep: Specifies the field separator/delimitor for non-Excel files.
base_iri: The IRI used as prefix for MIxS terms.
ontology_iri: The IRI used for the output ontology.
output_file: The file used to save the output.
If saving to different directory, include the path (e.g., '../output/mixs.ttl').
ontology_format: The rdf syntax of the output ontology.
Accepted values: 'turtle', 'ttl', 'nt', 'ntriples', 'trix', 'json-ld', 'xml'
Default: 'turtle'
print_output: Specifies whether to print ontology on screen.
Default: False
Returns:
rdflib Graph
###Markdown
Test creating RDF versions of the MIxS-air, version 5, package. RDF files are output to the output directory.* test_classes.ttl will contain MIxS terms converted to classes.* test_dataproperties.ttl will contain MIxS terms converted to data properties.
###Code
test_file = "../../mixs_data/mixs_v5_packages/MIxSair_20180621.xlsx"
graph_cls = mixs_package_file_to_rdf(test_file, 5, output_file='output/test_classes.ttl')
graph_dp = mixs_package_file_to_rdf(test_file, 5, term_type='data property', output_file='output/test_dataproperties.ttl')
###Output
_____no_output_____
###Markdown
Test creating RDF versions of all MIxS version 4 & 5 packages from the specified directories. RDF files are output to the output directory.* mixs_package_class.ttl will contain MIxS terms converted to classes.* mixs_package_dp.ttl will contain MIxS terms converted to data properties. Review help information for the mixs_package_directory_to_rdf function.
###Code
help(mixs_package_directory_to_rdf)
version_4_dir = '../../mixs_data/mixs_v4_packages/'
version_5_dir = '../../mixs_data/mixs_v5_packages/'
###Output
_____no_output_____
###Markdown
First create the version with terms as classes.**NB:** The base IRI is changed to `https://gensc.org/mixs/mixs-class#`
###Code
mixs_4_package_class_graph = mixs_package_directory_to_rdf(version_4_dir, 4, base_iri="https://gensc.org/mixs/mixs-class#")
mixs_5_package_class_graph = mixs_package_directory_to_rdf(version_5_dir, 5, base_iri="https://gensc.org/mixs/mixs-class#")
###Output
processing: MIxShumanskin_20180621.xlsx
processing: MIxSwater_20180621.xlsx
processing: MIxShydrocarbcores_20180621.xlsx
processing: MIxShumangut_20180621.xlsx
processing: MIxSair_20180621.xlsx
processing: MIxShumanoral_20180621.xlsx
processing: MIxShydrocarbfs_20180621.xlsx
processing: MIxSbuiltenv_20180621.xlsx
processing: MIxShumanassoc_20180621.xlsx
processing: MIxSsoil_20180621.xlsx
processing: MIxSsediment_20180621.xlsx
processing: MIxShostassoc_20180621.xlsx
processing: MIxSwastesludge_20180621.xlsx
processing: MIxShumanvaginal_20180621.xlsx
processing: MIxSplantassoc_20180621.xlsx
processing: MIxSmatbiofilm_20180621.xlsx
processing: MIxSmisc_20180621.xlsx
###Markdown
Merge MIxS 4 & 5 class graphs and save output
###Code
mixs_package_class_graph = Graph()
mixs_package_class_graph = mixs_4_package_class_graph + mixs_5_package_class_graph
## save output
mixs_package_class_graph.serialize(format='turtle', destination='output/mixs_package_class.ttl')
###Output
_____no_output_____
###Markdown
Next create the version with terms as data properties.**NB:** The base IRI is changed to `https://gensc.org/mixs/mixs-data-property#`
###Code
mixs_4_package_dp_graph = mixs_package_directory_to_rdf(version_4_dir, 4, term_type='data property', base_iri="https://gensc.org/mixs/mixs-data-property#")
mixs_5_package_dp_graph = mixs_package_directory_to_rdf(version_5_dir, 5, term_type='data property', base_iri="https://gensc.org/mixs/mixs-data-property#")
###Output
processing: MIxShumanskin_20180621.xlsx
processing: MIxSwater_20180621.xlsx
processing: MIxShydrocarbcores_20180621.xlsx
processing: MIxShumangut_20180621.xlsx
processing: MIxSair_20180621.xlsx
processing: MIxShumanoral_20180621.xlsx
processing: MIxShydrocarbfs_20180621.xlsx
processing: MIxSbuiltenv_20180621.xlsx
processing: MIxShumanassoc_20180621.xlsx
processing: MIxSsoil_20180621.xlsx
processing: MIxSsediment_20180621.xlsx
processing: MIxShostassoc_20180621.xlsx
processing: MIxSwastesludge_20180621.xlsx
processing: MIxShumanvaginal_20180621.xlsx
processing: MIxSplantassoc_20180621.xlsx
processing: MIxSmatbiofilm_20180621.xlsx
processing: MIxSmisc_20180621.xlsx
###Markdown
Merge MIxS 4 & 5 data property graphs and save output
###Code
mixs_package_dp_graph = Graph()
mixs_package_dp_graph = mixs_4_package_dp_graph + mixs_5_package_dp_graph
## save output
mixs_package_dp_graph.serialize(format='turtle', destination='output/mixs_package_dp.ttl')
###Output
_____no_output_____
###Markdown
Test SPARQL queries on ontologies As an example, I'll use the class version of MIxS terms. Note: rdflib is not the best library for doing queries. It is SLOW. For demonstration purposes it is fine. Find the first five terms and labels
###Code
query = """
prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#>
prefix mixs: <https://gensc.org/mixs/mixs-class#>
select
?iri ?label
where {
?iri rdfs:subClassOf mixs:mixs_term ;
rdfs:label ?label .
}
limit 5
"""
results = mixs_package_class_graph.query(query)
for r in results:
print(f"""{r.iri:60} {r.label}""")
###Output
https://gensc.org/mixs/mixs-class#annual_season_precpt mean annual and seasonal precipitation
https://gensc.org/mixs/mixs-class#host_common_name host common name
https://gensc.org/mixs/mixs-class#root_med_carbon rooting medium carbon
https://gensc.org/mixs/mixs-class#urogenit_disord urogenital disorder
https://gensc.org/mixs/mixs-class#assembly_software assembly software
###Markdown
Find the number of terms in versions 4 & 5
###Code
query = """
prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#>
prefix mixs: <https://gensc.org/mixs/mixs-class#>
select
(count (?iri_v4) as ?num_v4)
(count (?iri_v5) as ?num_v5)
where {
{
?iri_v4 rdfs:subClassOf mixs:mixs_term ;
mixs:mixs_version ?version .
filter (?version = 4)
} union {
?iri_v5 rdfs:subClassOf mixs:mixs_term ;
mixs:mixs_version ?version .
filter (?version = 5)
}
}
"""
results = mixs_package_class_graph.query(query)
for r in results:
print(f"""
number of mixs 4 terms: {r.num_v4}
number of mixs 5 terms: {r.num_v5}
""")
###Output
number of mixs 4 terms: 343
number of mixs 5 terms: 601
###Markdown
Find terms that are in both versions 4 & 5
###Code
query = """
prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#>
prefix mixs: <https://gensc.org/mixs/mixs-class#>
select
?iri ?version_4 ?version_5
where {
?iri rdfs:subClassOf mixs:mixs_term ;
mixs:mixs_version ?version_4, ?version_5 .
values (?version_4 ?version_5) { (4 5) }
}
limit 5
"""
results = mixs_package_class_graph.query(query)
for r in results:
print(f"""{r.iri:60} {r.version_4} {r.version_5}""")
###Output
https://gensc.org/mixs/mixs-class#host_common_name 4 5
https://gensc.org/mixs/mixs-class#urogenit_disord 4 5
https://gensc.org/mixs/mixs-class#host_disease_stat 4 5
https://gensc.org/mixs/mixs-class#sewage_type 4 5
https://gensc.org/mixs/mixs-class#reactor_type 4 5
###Markdown
Find total number of terms that are in both versions 4 & 5
###Code
query = """
prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#>
prefix mixs: <https://gensc.org/mixs/mixs-class#>
select
(count (?iri) as ?num)
where {
?iri rdfs:subClassOf mixs:mixs_term ;
mixs:mixs_version ?version_4, ?version_5 .
values (?version_4 ?version_5) { (4 5) }
}
"""
results = mixs_package_class_graph.query(query)
for r in results:
print(f"""number of mixs terms in version 4 & 5: {r.num}""")
###Output
number of mixs terms in version 4 & 5: 329
|
Reinforcement Learning for Ion Traps.ipynb | ###Markdown
Reinforcement Learning for Ion Trap Quantum Computers This exercise is a short extension of the **Ion Trap Reinforcement Learning Environment** where we are going to employ a Projective Simulation (PS) agent to use short laser pulse sequences mapping an initially unentangled state $|000\rangle$ onto a GHZ-like state:\begin{align}|\mathrm{GHZ}\rangle = \frac{1}{\sqrt{d}}\sum_{i=0}^{d-1}|iii\rangle.\nonumber\end{align}We will consider three qutrits, i.e., $d=3$ for simplicity but you may choose to extend this at your own leisure.More formally, we do not want to find GHZ states exactly but those states which are maximally entangled. We consider $n$ $d$-level states to be maximally entangled if they have a *Schmidt rank vector* (SRV) of $(d,...,d)$ where the $i$th entry is the rank of the reduced density matrix $\rho_i=\mathrm{tr}_{\bar{i}}(\rho)$ where $\bar{i}$ is the complement of $\{i\}$ in $\{1,...,n\}$.Luckily, you don't really have to take care of this since this is already the default settings of the environment which we are going to load now:
###Code
from ion_trap import IonTrapEnv
###Output
_____no_output_____
###Markdown
That was easy. According to the docs in the `init` method, the class allows the following kwargs:* `num_ions` (int): The number of ions. Defaults to 3.* `dim` (int): The local (odd) dimension of an ion. Defaults to 3.* `goal` (list): List of SRVs that are rewarded. Defaults to `[[3,3,3]]`.* `phases` (dict): The phases defining the laser gate set. Defaults to `{'pulse_angles': [np.pi/2], 'pulse_phases': [0, np.pi/2, np.pi/6], 'ms_phases': [-np.pi/2]}`* `max_steps` (int): The maximum number of allowed time steps. Defaults to 10.If you want to change anything you need to provide kwargs in form of a `dict` with the desired arguments as follows `IonTrapEnv(**{ 'max_steps': 20 })`. Indeed, let us submit a small change. Since this is just supposed to be a small scale test, let us reduce the number of allowed phases and therefore, the number of possible actions.
###Code
import numpy as np
KWARGS = {'phases': {'pulse_angles': [np.pi/2], 'pulse_phases': [np.pi/2], 'ms_phases': [-np.pi/2]}}
env = IonTrapEnv(**KWARGS)
###Output
_____no_output_____
###Markdown
Next, we need to get the reinforcement learning agent that is to learn some pulse sequences. We have a simple PS agent for you in store:
###Code
from ps import PSAgent
###Output
_____no_output_____
###Markdown
For the args of this class the docs say the following:* `num_actions` (int): The number of available actions.* `glow` (float, optional): The glow (or eta) parameter. Defaults to 0.1* `damp` (float, optional): The damping (or gamma) parameter. Defaults to 0.* `softmax` (float, optional): The softmax (or beta) parameter. Defaults to 0.1.We don't know the number of actions at this point, but possibly want to keep all the other default parameters. Let's ask the environment how many actions there are and initialize the agent accordingly.
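(Background on the `softmax` (beta) parameter: a PS agent typically turns the h-values in its memory into action probabilities with a softmax rule like the sketch below — an illustrative guess at the mechanism, not necessarily the exact `ps.py` implementation.)

```python
import numpy as np

def softmax_policy(h_values, beta=0.1):
    """Illustrative PS-style action selection: P(a) proportional to exp(beta * h(a))."""
    h = np.asarray(h_values, dtype=float)
    weights = np.exp(beta * (h - h.max()))  # subtract max for numerical stability
    return weights / weights.sum()
```

Larger `beta` makes the agent greedier about high-h actions; `beta` near 0 approaches uniform random selection.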
###Code
num_actions = env.num_actions
agent = PSAgent(num_actions)
###Output
_____no_output_____
###Markdown
Fantastic, we have everything ready for a first run. Let's do that. The interaction between an environment and an agent is standardized through the [*openAI* `gym`](https://github.com/openai/gym) environments. In terms of code, we can imagine the interaction to go as follows.Indeed, every reinforcement learning environment should provide at least two methods:* `reset()`: Resets the environment to its initial state. *Returns* the initial observation.* `step(action)`: Performs an action (given by an action index) on the environment. *Returns* the new observation, an associated reward and a bool value `done` which indicates whether a terminal state has been reached.The agent, on the other hand, supports the following two main methods:* `predict(observation)`: Given an observation, the agent predicts an action. *Returns* an action index.* `learn(reward)` (called `train(reward)` in the code below): Uses the current reward to update the internal network.Knowing that the `IonTrapEnv` has been built according to this standard and the agent features the two methods above, we can start coding the interaction between agent and environment:
###Code
# data set for performance evaluation
DATA_STEPS = []
# maximum number of episodes
NUM_EPISODES = 5000
for i in range(NUM_EPISODES):
# initial observation from environment
observation = env.reset()
#bool: whether or not the environment has finished the episode
done = False
#int: the current time step in this episode
num_steps = 0
action_seq = []
while not done:
# increment counter
num_steps += 1
# predict action
action = agent.predict(observation)
action_seq.append(action)
# perform action on environment and receive observation and reward
observation, reward, done = env.step(action)
# learn from reward
agent.train(reward)
# gather statistics
if done:
DATA_STEPS.append(num_steps)
print(action_seq)
###Output
[0, 1, 5, 3, 0]
###Markdown
And this is all the code that is needed to have an agent interact with our environment! In `DATA_STEPS` we have gathered the data that keeps track of the length of pulse sequences that generate GHZ-like states. We can use `matplotlib` to visualize the performance of the agent over time:
###Code
import matplotlib.pyplot as plt
import numpy as np
x_axis = np.arange(len(DATA_STEPS))
plt.plot(x_axis, DATA_STEPS)
plt.ylabel('Length of pulse sequence')
plt.xlabel('Episode')
###Output
_____no_output_____
###Markdown
Reinforcement Learning for Ion Trap Quantum Computers This exercise is a short extension of the two Tutorials: - **Ion Trap Reinforcement Learning Environment Tutorial** - **Projective Simulation Tutorial** Here we are going to employ the implemented Projective Simulation (PS) agent to use short laser pulse sequences mapping an initially unentangled state $|000\rangle$ onto a GHZ-like state:\begin{align}|\mathrm{GHZ}\rangle = \frac{1}{\sqrt{d}}\sum_{i=0}^{d-1}|iii\rangle.\nonumber\end{align}We will consider three qutrits, i.e., $d=3$ for simplicity but you may choose to extend this at your own leisure.More formally, we do not want to find GHZ states exactly but those states which are maximally entangled. We consider $n$ $d$-level states to be maximally entangled if they have a *Schmidt rank vector* (SRV) of $(d,...,d)$ where the $i$th entry is the rank of the reduced density matrix $\rho_i=\mathrm{tr}_{\bar{i}}(\rho)$ where $\bar{i}$ is the complement of $\{i\}$ in $\{1,...,n\}$.Luckily, you don't really have to take care of this since this is already the default settings of the environment which we are going to load now:
###Code
from ENV.IonTrap_env import IonTrapEnv
###Output
_____no_output_____
###Markdown
That was easy. According to the docs in the `init` method, the class allows the following kwargs:* `num_ions` (int): The number of ions. Defaults to 3.* `dim` (int): The local (odd) dimension of an ion. Defaults to 3.* `goal` (list): List of SRVs that are rewarded. Defaults to `[[3,3,3]]`.* `phases` (dict): The phases defining the laser gate set. Defaults to `{'pulse_angles': [np.pi/2], 'pulse_phases': [0, np.pi/2, np.pi/6], 'ms_phases': [-np.pi/2]}`* `max_steps` (int): The maximum number of allowed time steps. Defaults to 10.If you want to change anything you need to provide kwargs in form of a `dict` with the desired arguments as follows `IonTrapEnv(**{ 'max_steps': 20 })`. Indeed, let us submit a small change. Since this is just supposed to be a small scale test, let us reduce the number of allowed phases and therefore, the number of possible actions.
###Code
import numpy as np
KWARGS = {'phases': {'pulse_angles': [np.pi/2], 'pulse_phases': [np.pi/2], 'ms_phases': [-np.pi/2]}}
env = IonTrapEnv(**KWARGS)
###Output
_____no_output_____
###Markdown
Next, we need to get the PS agent and the ECM:
###Code
from PS.agent.Universal_Agent import UniversalAgent
from PS.ecm.Universal_ECM import UniversalECM
###Output
_____no_output_____
###Markdown
For the initialisation we read through the docs: Agent: * `ECM` (object): Episodic compositional memory (ECM). The brain of the agent.* `actions` (np.ndarray): An array of possible actions. Specified by the environment.* `adj_matrix` (np.ndarray): Adjacency matrix representing the structure of the default decision tree.ECM: * `gamma_damping` (float): The damping (or gamma) parameter. Set to zero if the environment doesn't change in time. Defaults to 0.* `eta_glow_damping` (float): glow parameter. Defaults to 0.1.* `beta` (float): softmax parameter. Defaults to 1.We don't know the actions and the adjacency matrix at this point, but want to keep all the other default parameters. Let's first initialize the adjacency matrix. For now a two-layered clip network is enough; later you can try other structures. I have a little task here.__TASK:__ Initialize the adjacency matrix for the following decision tree. Use the PS Tutorial for help. Tip: The size of the matrix is (number actions + 1, number actions + 1)
###Code
###Output
_____no_output_____
###Markdown
__SOLUTION:__
###Code
num_actions = len(env.actions)
adj_matrix = np.zeros((num_actions + 1, num_actions + 1))
adj_matrix[0][list(range(1, num_actions + 1))] = 1
###Output
_____no_output_____
###Markdown
Now we can ask the environment what the actions are and initialize the agent accordingly:
###Code
actions = env.actions
ecm = UniversalECM()
agent = UniversalAgent(ECM=ecm, actions=actions, adj_matrix=adj_matrix)
###Output
_____no_output_____
###Markdown
Fantastic, we have everything ready for a first run. Let's do that. The interaction between an environment and an agent is standardized through the [*openAI* `gym`](https://github.com/openai/gym) environments. In terms of code, we can imagine the interaction to go as follows.Indeed, every reinforcement learning environment should provide at least two methods:* `reset()`: Resets the environment to its initial state. *Returns* the initial observation.* `step(action)`: Performs an action (given by an action index) on the environment. *Returns* the new observation, an associated reward and a bool value `done` which indicates whether a terminal state has been reached.The agent, on the other hand, supports the following two main methods:* `predict(observation)` (here: `step(observation)`): Given an observation, the agent predicts an action. *Returns* an action index.* `learn(reward)`: Uses the current reward to update the internal network.Knowing that the `IonTrapEnv` has been built according to this standard and the agent features the two methods above, we can start coding the interaction between agent and environment:
###Code
# data set for performance evaluation
DATA_STEPS = []
# maximum number of episodes
NUM_EPISODES = 500
for i in range(NUM_EPISODES):
# initial observation from environment
observation = env.reset()
#bool: whether or not the environment has finished the episode
done = False
#int: the current time step in this episode
num_steps = 0
action_seq = []
while not done:
# increment counter
num_steps += 1
# predict action
action = agent.step(observation)
action_seq.append(action)
# perform action on environment and receive observation and reward
observation, reward, done = env.step(action)
# learn from reward
agent.learn(reward)
# gather statistics
if done:
DATA_STEPS.append(num_steps)
print(action_seq)
###Output
[0, 4, 1, 0, 3, 5, 0]
###Markdown
And this is all the code that is needed to have an agent interact with our environment! In `DATA_STEPS` we have gathered the data that keeps track of the length of pulse sequences that generate GHZ-like states. We can use `matplotlib` to visualize the performance of the agent over time:
###Code
import matplotlib.pyplot as plt
import numpy as np
x_axis = np.arange(len(DATA_STEPS))
plt.plot(x_axis, DATA_STEPS)
plt.ylabel('Length of pulse sequence')
plt.xlabel('Episode')
###Output
_____no_output_____ |
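Learning curves like `DATA_STEPS` tend to be noisy from episode to episode. A common refinement — sketched here with a simple moving average, an assumption rather than anything the tutorial prescribes — is to smooth the curve before plotting:

```python
import numpy as np

def moving_average(values, window=20):
    """Smooth a noisy learning curve with a simple moving average."""
    values = np.asarray(values, dtype=float)
    if len(values) < window:
        return values
    kernel = np.ones(window) / window
    return np.convolve(values, kernel, mode='valid')

# Toy learning curve, just to show the effect
toy_steps = [10, 9, 8, 8, 7, 7, 6, 6, 5, 5]
smoothed = moving_average(toy_steps, window=2)
# plt.plot(smoothed) would then show the de-noised trend
```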
Integrated-gradient-camptum-CIFARImage.ipynb | ###Markdown
**Computer Vision: Saliency Map for CIFAR Dataset** Interpret the deep learning model's result by looking at its gradients. The method used in the code is the vanilla gradient method; there are multiple other saliency methods.
###Code
import torch
import torch.nn as nn
import torchvision
import matplotlib.pyplot as plt
import numpy as np
from torch.autograd import Variable
from torchvision import datasets
from torchvision import transforms
# Functional module contains helper functions
import torch.nn.functional as F
from captum.attr import IntegratedGradients
from captum.attr import Saliency
from captum.attr import DeepLift
from captum.attr import NoiseTunnel
from captum.attr import visualization as viz
###Output
_____no_output_____
###Markdown
**Set up the deep learning model**
###Code
net = torch.hub.load('pytorch/vision:v0.6.0', 'alexnet', pretrained=True)
# Update the second classifier
net.classifier[4] = nn.Linear(4096,1024)
# Update the last classifier (the output layer); it needs 10 output nodes, one per CIFAR-10 class
net.classifier[6] = nn.Linear(1024,10)
net.load_state_dict(torch.load("./2.model.path"))
###Output
Using cache found in C:\Users\merna/.cache\torch\hub\pytorch_vision_v0.6.0
###Markdown
**Open the Image and preprocess**
###Code
from PIL import Image
import torch
import torch.nn as nn
import matplotlib.pyplot as plt
from torch.autograd import Variable
# Torchvision module contains various utilities, classes, models and datasets
# used towards computer vision usecases
from torchvision import datasets
from torchvision import transforms
# Functional module contains helper functions
import torch.nn.functional as F
transform = transforms.Compose([
transforms.Resize(224),
transforms.ToTensor(),
transforms.Normalize((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010)),
])
testset = torchvision.datasets.CIFAR10(root='./data', train=False,
download=True, transform=transform)
testloader = torch.utils.data.DataLoader(testset, batch_size=4,
shuffle=False, num_workers=2)
dataiter = iter(testloader)
images, labels = next(dataiter)
ind = 3
X = images[ind].unsqueeze(0)
###Output
Files already downloaded and verified
###Markdown
**Retrieve the gradient**
###Code
net.eval()
# Set the requires_grad_ to the image for retrieving gradients
X.requires_grad = True
#saliency = None
# Retrieve output from the image
output = net(X)
# Catch the output
output_idx = output.argmax()
output_max = output[0, output_idx]
# Do backpropagation to get the derivative
# of the output based on the image
output_max.backward()
###Output
_____no_output_____
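After `output_max.backward()`, `X.grad` holds a gradient of shape `(batch_size, channel, height, width)`. The vanilla-gradient saliency map keeps, for each pixel, the maximum absolute gradient across channels. A NumPy sketch of that reduction step on dummy data (the array below is random, standing in for the real gradients):

```python
import numpy as np

# Dummy gradient tensor shaped like X.grad: (batch, channels, height, width)
rng = np.random.default_rng(0)
grad = rng.normal(size=(1, 3, 4, 4))

# Vanilla-gradient saliency: max of |gradient| over the channel axis
saliency_map = np.abs(grad).max(axis=1)  # shape (1, 4, 4)
```

In PyTorch the equivalent reduction would be `X.grad.data.abs().max(dim=1)`.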
###Markdown
**Visualize the Result**
###Code
def attribute_image_features(algorithm, input, **kwargs):
net.zero_grad()
tensor_attributions = algorithm.attribute(input,
target=labels[ind],
**kwargs
)
return tensor_attributions
import torch
import torch.nn as nn
# Retrieve the saliency map (absolute value of the input gradients) with Captum's Saliency.
# The result has shape (batch_size, channel, width, height); we rearrange it to (width, height, channel) for visualization.
saliency = Saliency(net)
grads = saliency.attribute(X, target=labels[ind].item())
grads = np.transpose(grads.squeeze().cpu().detach().numpy(), (1, 2, 0))
ig = IntegratedGradients(net)
attr_ig, delta = attribute_image_features(ig, X, baselines=X*0, return_convergence_delta=True)
attr_ig = np.transpose(attr_ig.squeeze().cpu().detach().numpy(), (1, 2, 0))
# Approximate unnormalization for display (the transform above used per-channel CIFAR stats, so /2 + 0.5 is only a rough inverse)
original_image = np.transpose((images[ind].cpu().detach().numpy() / 2) + 0.5, (1, 2, 0))
_ = viz.visualize_image_attr(None, original_image,
method="original_image", title="Original Image")
_ = viz.visualize_image_attr(grads, original_image, method="blended_heat_map", sign="absolute_value",
show_colorbar=True, title="Overlayed Gradient Magnitudes")
_ = viz.visualize_image_attr(attr_ig, original_image, method="blended_heat_map",sign="all",
show_colorbar=True, title="Overlayed Integrated Gradients")
###Output
Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers).
|
examples/colab/Training/binary_text_classification/NLU_training_sarcasam_classifier_demo_news_headlines.ipynb | ###Markdown
[](https://colab.research.google.com/github/JohnSnowLabs/nlu/blob/master/examples/colab/Training/binary_text_classification/NLU_training_sarcasam_classifier_demo_news_headlines.ipynb) Training a Sentiment Analysis Classifier with NLU 2 Class News Headlines Sarcasm TrainingWith the [SentimentDL model](https://nlp.johnsnowlabs.com/docs/en/annotatorssentimentdl-multi-class-sentiment-analysis-annotator) from Spark NLP you can achieve state-of-the-art results on any multi-class text classification problem. This notebook showcases the following features: - How to train the deep learning classifier- How to store a pipeline to disk- How to load the pipeline from disk (enables NLU offline mode)You can achieve these results or even better on this dataset, both with the training data and with the test data. 1. Install Java 8 and NLU
###Code
import os
from sklearn.metrics import classification_report
! apt-get update -qq > /dev/null
# Install java
! apt-get install -y openjdk-8-jdk-headless -qq > /dev/null
os.environ["JAVA_HOME"] = "/usr/lib/jvm/java-8-openjdk-amd64"
os.environ["PATH"] = os.environ["JAVA_HOME"] + "/bin:" + os.environ["PATH"]
! pip install nlu pyspark==2.4.7 > /dev/null
import nlu
###Output
_____no_output_____
###Markdown
2. Download the News Headlines Sarcasm dataset https://www.kaggle.com/rmisra/news-headlines-dataset-for-sarcasm-detectionContextPast studies in Sarcasm Detection mostly make use of Twitter datasets collected using hashtag-based supervision, but such datasets are noisy in terms of labels and language. Furthermore, many tweets are replies to other tweets and detecting sarcasm in these requires the availability of contextual tweets.To overcome the limitations related to noise in Twitter datasets, this News Headlines dataset for Sarcasm Detection is collected from two news websites. TheOnion aims at producing sarcastic versions of current events, and we collected all the headlines from the News in Brief and News in Photos categories (which are sarcastic). We collect real (and non-sarcastic) news headlines from HuffPost.This new dataset has the following advantages over the existing Twitter datasets:Since news headlines are written by professionals in a formal manner, there are no spelling mistakes and no informal usage. This reduces the sparsity and also increases the chance of finding pre-trained embeddings.Furthermore, since the sole purpose of TheOnion is to publish sarcastic news, we get high-quality labels with much less noise as compared to Twitter datasets.Unlike tweets, which are replies to other tweets, the news headlines we obtained are self-contained. This helps us in teasing apart the real sarcastic elements.
###Code
! wget http://ckl-it.de/wp-content/uploads/2021/02/Sarcasm_Headlines_Dataset_v2.csv
import pandas as pd
test_path = '/content/Sarcasm_Headlines_Dataset_v2.csv'
train_df = pd.read_csv(test_path,sep=",")
cols = ["y","text"]
train_df = train_df[cols]
from sklearn.model_selection import train_test_split
train_df, test_df = train_test_split(train_df, test_size=0.2)
train_df
###Output
_____no_output_____
###Markdown
3. Train Deep Learning Classifier using nlu.load('train.sentiment')Your dataset's label column should be named 'y' and the feature column with the text data should be named 'text'
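For reference, a minimal DataFrame in the expected shape — the column names are the convention NLU looks for, while the two rows here are invented examples:

```python
import pandas as pd

# NLU trainable pipelines expect a 'y' label column and a 'text' feature column
toy_train_df = pd.DataFrame({
    'y': ['positive', 'negative'],
    'text': ['Scientists discover water on Mars',
             'Area man heroically refreshes inbox'],
})
```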
###Code
import nlu
# load a trainable pipeline by specifying the train. prefix and fit it on a dataset with label and text columns
# by default the Universal Sentence Encoder (USE) sentence embeddings are used for embedding generation
trainable_pipe = nlu.load('train.sentiment')
fitted_pipe = trainable_pipe.fit(train_df.iloc[:50])
# predict with the trainable pipeline on dataset and get predictions
preds = fitted_pipe.predict(train_df.iloc[:50],output_level='document')
# The sentence detector that is part of the pipe generates some NaNs; let's drop them first
preds.dropna(inplace=True)
print(classification_report(preds['y'], preds['sentiment']))
preds
###Output
tfhub_use download started this may take some time.
Approximate size to download 923.7 MB
[OK!]
precision recall f1-score support
negative 1.00 0.54 0.70 26
neutral 0.00 0.00 0.00 0
positive 0.96 0.96 0.96 24
accuracy 0.74 50
macro avg 0.65 0.50 0.55 50
weighted avg 0.98 0.74 0.82 50
###Markdown
4. Test the fitted pipe on new example
###Code
fitted_pipe.predict('Aliens are immortal!')
###Output
_____no_output_____
###Markdown
5. Configure pipe training parameters
###Code
trainable_pipe.print_info()
###Output
The following parameters are configurable for this NLU pipeline (You can copy paste the examples) :
>>> pipe['sentiment_dl'] has settable params:
pipe['sentiment_dl'].setMaxEpochs(2) | Info: Maximum number of epochs to train | Currently set to : 2
pipe['sentiment_dl'].setLr(0.005) | Info: Learning Rate | Currently set to : 0.005
pipe['sentiment_dl'].setBatchSize(64) | Info: Batch size | Currently set to : 64
pipe['sentiment_dl'].setDropout(0.5) | Info: Dropout coefficient | Currently set to : 0.5
pipe['sentiment_dl'].setEnableOutputLogs(True) | Info: Whether to use stdout in addition to Spark logs. | Currently set to : True
pipe['sentiment_dl'].setThreshold(0.6) | Info: The minimum threshold for the final result otheriwse it will be neutral | Currently set to : 0.6
pipe['sentiment_dl'].setThresholdLabel('neutral') | Info: In case the score is less than threshold, what should be the label. Default is neutral. | Currently set to : neutral
>>> pipe['default_tokenizer'] has settable params:
pipe['default_tokenizer'].setTargetPattern('\S+') | Info: pattern to grab from text as token candidates. Defaults \S+ | Currently set to : \S+
pipe['default_tokenizer'].setContextChars(['.', ',', ';', ':', '!', '?', '*', '-', '(', ')', '"', "'"]) | Info: character list used to separate from token boundaries | Currently set to : ['.', ',', ';', ':', '!', '?', '*', '-', '(', ')', '"', "'"]
pipe['default_tokenizer'].setCaseSensitiveExceptions(True) | Info: Whether to care for case sensitiveness in exceptions | Currently set to : True
pipe['default_tokenizer'].setMinLength(0) | Info: Set the minimum allowed legth for each token | Currently set to : 0
pipe['default_tokenizer'].setMaxLength(99999) | Info: Set the maximum allowed legth for each token | Currently set to : 99999
>>> pipe['default_name'] has settable params:
pipe['default_name'].setDimension(512) | Info: Number of embedding dimensions | Currently set to : 512
pipe['default_name'].setLoadSP(False) | Info: Whether to load SentencePiece ops file which is required only by multi-lingual models. This is not changeable after it's set with a pretrained model nor it is compatible with Windows. | Currently set to : False
pipe['default_name'].setStorageRef('tfhub_use') | Info: unique reference name for identification | Currently set to : tfhub_use
>>> pipe['sentence_detector'] has settable params:
pipe['sentence_detector'].setUseAbbreviations(True) | Info: whether to apply abbreviations at sentence detection | Currently set to : True
pipe['sentence_detector'].setDetectLists(True) | Info: whether detect lists during sentence detection | Currently set to : True
pipe['sentence_detector'].setUseCustomBoundsOnly(False) | Info: Only utilize custom bounds in sentence detection | Currently set to : False
pipe['sentence_detector'].setCustomBounds([]) | Info: characters used to explicitly mark sentence bounds | Currently set to : []
pipe['sentence_detector'].setExplodeSentences(False) | Info: whether to explode each sentence into a different row, for better parallelization. Defaults to false. | Currently set to : False
pipe['sentence_detector'].setMinLength(0) | Info: Set the minimum allowed length for each sentence. | Currently set to : 0
pipe['sentence_detector'].setMaxLength(99999) | Info: Set the maximum allowed length for each sentence | Currently set to : 99999
>>> pipe['document_assembler'] has settable params:
pipe['document_assembler'].setCleanupMode('shrink') | Info: possible values: disabled, inplace, inplace_full, shrink, shrink_full, each, each_full, delete_full | Currently set to : shrink
###Markdown
6. Retrain with new parameters
###Code
# Train longer!
trainable_pipe['sentiment_dl'].setMaxEpochs(5)
fitted_pipe = trainable_pipe.fit(train_df.iloc[:50])
# predict with the trainable pipeline on dataset and get predictions
preds = fitted_pipe.predict(train_df.iloc[:50],output_level='document')
# The sentence detector that is part of the pipe generates some NaNs; let's drop them first
preds.dropna(inplace=True)
print(classification_report(preds['y'], preds['sentiment']))
preds
###Output
precision recall f1-score support
negative 1.00 0.96 0.98 26
positive 0.96 1.00 0.98 24
accuracy 0.98 50
macro avg 0.98 0.98 0.98 50
weighted avg 0.98 0.98 0.98 50
###Markdown
7. Try training with different Embeddings
###Code
# We can use nlu.print_components(action='embed_sentence') to see every possible sentence embedding we could use. Let's use BERT!
nlu.print_components(action='embed_sentence')
trainable_pipe = nlu.load('en.embed_sentence.small_bert_L12_768 train.sentiment')
# We usually need to train longer and use a smaller LR for non-USE sentence embeddings
# We could tune the hyperparameters further with methods like grid search
# Longer training also gives more accuracy
trainable_pipe['sentiment_dl'].setMaxEpochs(120)
trainable_pipe['sentiment_dl'].setLr(0.0005)
fitted_pipe = trainable_pipe.fit(train_df)
# predict with the trainable pipeline on dataset and get predictions
preds = fitted_pipe.predict(train_df,output_level='document')
# The sentence detector that is part of the pipe generates some NaNs; let's drop them first
preds.dropna(inplace=True)
print(classification_report(preds['y'], preds['sentiment']))
#preds
###Output
sent_small_bert_L12_768 download started this may take some time.
Approximate size to download 392.9 MB
[OK!]
precision recall f1-score support
negative 0.91 0.87 0.89 3952
neutral 0.00 0.00 0.00 0
positive 0.90 0.89 0.89 4048
accuracy 0.88 8000
macro avg 0.60 0.59 0.59 8000
weighted avg 0.90 0.88 0.89 8000
###Markdown
7.1 Evaluate on Test Data
###Code
preds = fitted_pipe.predict(test_df,output_level='document')
# The sentence detector that is part of the pipe generates some NaNs; let's drop them first
preds.dropna(inplace=True)
print(classification_report(preds['y'], preds['sentiment']))
###Output
precision recall f1-score support
negative 0.87 0.82 0.84 1048
neutral 0.00 0.00 0.00 0
positive 0.83 0.84 0.83 952
accuracy 0.83 2000
macro avg 0.57 0.55 0.56 2000
weighted avg 0.85 0.83 0.84 2000
###Markdown
8. Let's save the model
###Code
stored_model_path = './models/classifier_dl_trained'
fitted_pipe.save(stored_model_path)
###Output
Stored model in ./models/classifier_dl_trained
###Markdown
9. Let's load the model from HDD. This makes offline NLU usage possible! You need to call nlu.load(path=path_to_the_pipe) to load a model/pipeline from disk.
###Code
hdd_pipe = nlu.load(path=stored_model_path)
preds = hdd_pipe.predict('Aliens are immortal!')
preds
hdd_pipe.print_info()
###Output
The following parameters are configurable for this NLU pipeline (You can copy paste the examples) :
>>> pipe['document_assembler'] has settable params:
pipe['document_assembler'].setCleanupMode('shrink') | Info: possible values: disabled, inplace, inplace_full, shrink, shrink_full, each, each_full, delete_full | Currently set to : shrink
>>> pipe['sentence_detector'] has settable params:
pipe['sentence_detector'].setCustomBounds([]) | Info: characters used to explicitly mark sentence bounds | Currently set to : []
pipe['sentence_detector'].setDetectLists(True) | Info: whether detect lists during sentence detection | Currently set to : True
pipe['sentence_detector'].setExplodeSentences(False) | Info: whether to explode each sentence into a different row, for better parallelization. Defaults to false. | Currently set to : False
pipe['sentence_detector'].setMaxLength(99999) | Info: Set the maximum allowed length for each sentence | Currently set to : 99999
pipe['sentence_detector'].setMinLength(0) | Info: Set the minimum allowed length for each sentence. | Currently set to : 0
pipe['sentence_detector'].setUseAbbreviations(True) | Info: whether to apply abbreviations at sentence detection | Currently set to : True
pipe['sentence_detector'].setUseCustomBoundsOnly(False) | Info: Only utilize custom bounds in sentence detection | Currently set to : False
>>> pipe['regex_tokenizer'] has settable params:
pipe['regex_tokenizer'].setCaseSensitiveExceptions(True) | Info: Whether to care for case sensitiveness in exceptions | Currently set to : True
pipe['regex_tokenizer'].setTargetPattern('\S+') | Info: pattern to grab from text as token candidates. Defaults \S+ | Currently set to : \S+
pipe['regex_tokenizer'].setMaxLength(99999) | Info: Set the maximum allowed length for each token | Currently set to : 99999
pipe['regex_tokenizer'].setMinLength(0) | Info: Set the minimum allowed length for each token | Currently set to : 0
>>> pipe['glove'] has settable params:
pipe['glove'].setBatchSize(32) | Info: Batch size. Large values allows faster processing but requires more memory. | Currently set to : 32
pipe['glove'].setCaseSensitive(False) | Info: whether to ignore case in tokens for embeddings matching | Currently set to : False
pipe['glove'].setDimension(768) | Info: Number of embedding dimensions | Currently set to : 768
pipe['glove'].setMaxSentenceLength(128) | Info: Max sentence length to process | Currently set to : 128
pipe['glove'].setIsLong(False) | Info: Use Long type instead of Int type for inputs buffer - Some Bert models require Long instead of Int. | Currently set to : False
pipe['glove'].setStorageRef('sent_small_bert_L12_768') | Info: unique reference name for identification | Currently set to : sent_small_bert_L12_768
>>> pipe['sentiment_dl'] has settable params:
pipe['sentiment_dl'].setThreshold(0.6) | Info: The minimum threshold for the final result otheriwse it will be neutral | Currently set to : 0.6
pipe['sentiment_dl'].setThresholdLabel('neutral') | Info: In case the score is less than threshold, what should be the label. Default is neutral. | Currently set to : neutral
pipe['sentiment_dl'].setClasses(['positive', 'negative']) | Info: get the tags used to trained this SentimentDLModel | Currently set to : ['positive', 'negative']
pipe['sentiment_dl'].setStorageRef('sent_small_bert_L12_768') | Info: unique reference name for identification | Currently set to : sent_small_bert_L12_768
|
Labs/11-Kmeans/11-K-Means.ipynb | ###Markdown
Lab 11: Unsupervised Learning with $k$-meansIn this lab, we begin our survey of common unsupervised learning methods. Supervised vs. Unsupervised LearningAs we know, in the supervised setting, we are presented with a set of training pairs $(\mathbf{x}^{(i)},y^{(i)}), \mathbf{x}^{(i)} \in {\cal X}, y^{(i)} \in {\cal Y},i \in 1..m$, where typically ${\cal X} = \mathbb{R}^n$ and either ${\cal Y} = \mathbb{R}$ (regression) or ${\cal Y} = \{ 1, \ldots, k \}$ (classification). The goal is, given a new $\mathbf{x} \in {\cal X}$, to come up with the best possible prediction $\hat{y} \in {\cal Y}$ corresponding to $\mathbf{x}$ or a set of predicted probabilities $p(y=y_i \mid \mathbf{x}), i \in \{1, \ldots, k\}$.In the *unsupervised setting*, we are presented with a set of training items $\mathbf{x}^{(i)} \in {\cal X}$ without any labels or targets. The goal is generally to understand, given a new $\mathbf{x} \in {\cal X}$, the relationship of $\mathbf{x}$ with the training examples $\mathbf{x}^{(i)}$.The phrase *understand the relationship* can mean many different things depending on the problem setting. Among the most common specific goals is *clustering*, in which we map the training data to $K$ *clusters*, then, given $\mathbf{x}$, find the most similar cluster $c \in \{1,\ldots,K\}$. $k$-means ClusteringClustering is the most common unsupervised learning problem, and $k$-means is the most frequently used clustering algorithm. $k$-means is suitable when ${\cal X} = \mathbb{R}^n$ and Euclidean distance is a reasonable model of dissimilarity between items in ${\cal X}$.The algorithm is very simple:1. Randomly initialize $k$ cluster centroids $\mu_1, \ldots, \mu_k \in \mathbb{R}^n$.2. Repeat until convergence: 1. For $i \in 1..m, c^{(i)} \leftarrow \text{argmin}_j \| \mathbf{x}^{(i)} - \mu_j \|^2.$ 2. For $j \in 1..k,$ $$ \mu_j \leftarrow \frac{\sum_{i=1}^m \delta(c^{(i)} = j)\mathbf{x}^{(i)}}{\sum_{i=1}^m \delta(c^{(i)}=j)}$$ In-Lab ExerciseWrite Python code to generate 100 examples from each of three different well-separated 2D Gaussian distributions. Plot the data, initialize three arbitrary means, and animate the process of iterative cluster assignment and cluster mean assignment. *Hint: there's a naive implementation of the algorithm in this notebook below. You can use it or make your own implementation.* Example with Kaggle Customer Segmentation DataThis example is based on the [Kaggle Mall Customers Dataset](https://www.kaggle.com/vjchoudhary7/customer-segmentation-tutorial-in-python) and [Caner Dabakoglu's](https://www.kaggle.com/cdabakoglu) tutorial on the dataset. The goal is customer segmentation.The dataset has 5 columns, `CustomerID`, `Gender`, `Age`, `Annual Income`, and `Spending score`.We will use three of these variables, namely `Age`, `Annual Income`, and `Spending score`, for segmenting customers.(Give some thought to why we don't use `CustomerID` or `Gender`.)First, let's import some libraries:
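A minimal sketch of the data-generation step for the exercise (the cluster centers and spread below are arbitrary choices, picked only so that the clusters are well separated):

```python
import numpy as np

rng = np.random.default_rng(42)

# Three well-separated 2D Gaussian clusters, 100 points each
gauss_centers = np.array([[0.0, 0.0], [8.0, 8.0], [0.0, 8.0]])
points = np.vstack([
    rng.normal(loc=center, scale=0.8, size=(100, 2))
    for center in gauss_centers
])
# points.shape == (300, 2); plot with plt.scatter(points[:, 0], points[:, 1])
```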
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from mpl_toolkits.mplot3d import Axes3D
import warnings
warnings.filterwarnings("ignore")
###Output
_____no_output_____
###Markdown
Next we read the data set and print out some information about it.
###Code
df = pd.read_csv("Mall_Customers.csv")
print('Dataset information:\n')
df.info()
print('\nDataset head (first five rows):\n')
df.head()
###Output
Dataset information:
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 200 entries, 0 to 199
Data columns (total 5 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 CustomerID 200 non-null int64
1 Gender 200 non-null object
2 Age 200 non-null int64
3 Annual Income (k$) 200 non-null int64
4 Spending Score (1-100) 200 non-null int64
dtypes: int64(4), object(1)
memory usage: 7.9+ KB
Dataset head (first five rows):
###Markdown
Let's drop the `CustomerID` column, as it's not useful.
###Code
df.drop(["CustomerID"], axis = 1, inplace=True)
###Output
_____no_output_____
###Markdown
Next, let's visualize the marginal distribution over each variable, to get an idea of how cohesive they are. We can see that the variables are notquite Gaussian and have some skew:
###Code
sns.distplot(df.Age)
_ = plt.title('Customer Age distribution')
sns.distplot(df['Spending Score (1-100)'])
_ = plt.title('Customer Spending Score distribution')
sns.distplot(df['Annual Income (k$)'])
_ = plt.title('Customer Income distribution')
###Output
_____no_output_____
###Markdown
Next, let's make a 3D scatter plot of the relevant variables:
###Code
sns.set_style("white")
fig = plt.figure(figsize=(10,10))
ax = fig.add_subplot(111, projection='3d')
ax.scatter(df.Age, df["Annual Income (k$)"], df["Spending Score (1-100)"], c='blue', s=60)
ax.view_init(0, 45)
plt.xlabel("Age")
plt.ylabel("Annual Income (k$)")
ax.set_zlabel('Spending Score (1-100)')
plt.show()
###Output
_____no_output_____
###Markdown
Next, let's implement $k$-means:
###Code
# Initialize a k-means model given a dataset
def init_kmeans(X, k):
m = X.shape[0]
n = X.shape[1]
means = np.zeros((k,n))
order = np.random.permutation(m)[:k]
for i in range(k):
means[i,:] = X[order[i],:]
return means
# Run one iteration of k-means
def iterate_kmeans(X, means):
m = X.shape[0]
n = X.shape[1]
k = means.shape[0]
distortion = np.zeros(m)
c = np.zeros(m)
for i in range(m):
min_j = 0
min_dist = 0
for j in range(k):
dist_j = np.linalg.norm(X[i,:] - means[j,:])
if dist_j < min_dist or j == 0:
min_dist = dist_j
min_j = j
distortion[i] = min_dist
c[i] = min_j
for j in range(k):
means[j,:] = np.zeros((1,n))
nj = 0
for i in range(m):
if c[i] == j:
nj = nj + 1
means[j,:] = means[j,:] + X[i,:]
if nj > 0:
means[j,:] = means[j,:] / nj
return means, c, np.sum(distortion)
###Output
_____no_output_____
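The double loop in `iterate_kmeans` can also be vectorized with NumPy broadcasting. A sketch of the cluster-assignment step (the same math as above with no explicit Python loops — the data here is random demo input):

```python
import numpy as np

rng = np.random.default_rng(0)
X_demo = rng.normal(size=(6, 2))   # 6 points in 2D
means_demo = X_demo[:2].copy()     # 2 arbitrary initial centroids

# Pairwise distances via broadcasting: (m, 1, n) - (1, k, n) -> (m, k, n)
diffs = X_demo[:, None, :] - means_demo[None, :, :]
dists = np.linalg.norm(diffs, axis=2)   # shape (m, k)

c_demo = dists.argmin(axis=1)           # nearest centroid per point
distortion_demo = dists.min(axis=1).sum()
```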
###Markdown
Let's build models with $k \in 1..20$, plot the distortion for each $k$, and try to choose a good value for $k$ using the so-called "elbow method."
###Code
# Convert dataframe to matrix
X = np.array(df.iloc[:,1:])
# Initialize hyperparameters
max_k = 20
epsilon = 0.001
# For each value of k, do several runs and record the lowest resulting cost (Euclidean distortion)
distortions = np.zeros(max_k)
for k in range(1, max_k + 1):
    for l in range(5):
means = init_kmeans(X, k)
prev_distortion = 0
while True:
means, c, distortion = iterate_kmeans(X, means)
if prev_distortion > 0 and prev_distortion - distortion < epsilon:
break
prev_distortion = distortion
if l == 0 or distortion < distortions[k-1]:
distortions[k-1] = distortion
# Plot distortion as function of k
plt.figure(figsize=(16,8))
plt.plot(range(1,max_k+1), distortions, 'bx-')
plt.xlabel('k')
plt.ylabel('Distortion')
plt.title('k-means distortion as a function of k')
plt.show()
###Output
_____no_output_____
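When no sharp elbow appears in a plot like the one above, a common complementary heuristic is the silhouette score (higher is better). A sketch using scikit-learn on small synthetic blobs — the blob parameters are invented for illustration:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score

# Well-separated synthetic data with 3 true clusters
X_blobs, _ = make_blobs(n_samples=150, centers=3, cluster_std=0.5, random_state=0)

sil_scores = {}
for k in range(2, 6):
    blob_labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X_blobs)
    sil_scores[k] = silhouette_score(X_blobs, blob_labels)

best_k = max(sil_scores, key=sil_scores.get)
```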
###Markdown
Read about the so-called "elbow method" in [Wikipedia](https://en.wikipedia.org/wiki/Elbow_method_(clustering)). Note what it says,that "In practice there may not be a sharp elbow, and as a heuristic method, such an 'elbow' cannot always be unambiguously identified." Do you see a unique elbow in the distortion plot above?Note that the results are somewhat noisy, being dependent on initial conditions.Here's a visualization of the results for three clusters:
###Code
# Re-run k-means with k=3
k = 3
means = init_kmeans(X, k)
prev_distortion = 0
while True:
means, c, distortion = iterate_kmeans(X, means)
if prev_distortion > 0 and prev_distortion - distortion < epsilon:
break
prev_distortion = distortion
# Set labels in dataset to cluster IDs according to k-means model.
df["label"] = c
# Plot the data
fig = plt.figure(figsize=(10,10))
ax = fig.add_subplot(111, projection='3d')
ax.scatter(df.Age[df.label == 0], df["Annual Income (k$)"][df.label == 0], df["Spending Score (1-100)"][df.label == 0], c='blue', s=60)
ax.scatter(df.Age[df.label == 1], df["Annual Income (k$)"][df.label == 1], df["Spending Score (1-100)"][df.label == 1], c='red', s=60)
ax.scatter(df.Age[df.label == 2], df["Annual Income (k$)"][df.label == 2], df["Spending Score (1-100)"][df.label == 2], c='green', s=60)
# For 5 clusters, you can uncomment the following two lines.
#ax.scatter(df.Age[df.label == 3], df["Annual Income (k$)"][df.label == 3], df["Spending Score (1-100)"][df.label == 3], c='orange', s=60)
#ax.scatter(df.Age[df.label == 4], df["Annual Income (k$)"][df.label == 4], df["Spending Score (1-100)"][df.label == 4], c='purple', s=60)
ax.view_init(0, 45)
plt.xlabel("Age")
plt.ylabel("Annual Income (k$)")
ax.set_zlabel('Spending Score (1-100)')
plt.title('Customer segments (k=3)')
plt.show()
###Output
_____no_output_____
###Markdown
In-Lab Exercise 21. Consider the three cluster centers above. Look at the three means closely and come up with English descriptions of each cluster from a business point of view. Label the clusters in the visualization accordingly.2. Note that the distortion plot is quite noisy due to random initial conditions. Modify the optimization to perform, for each $k$, several different runs, and take the minimum distortion over those runs. Re-plot the distortion plot and see if an "elbow" is more prominent. K-Means in PyTorchNow, to get more experience with PyTorch, let's do the same thing with the library. First, some imports. You may need to install some packages for this to work: pip install kmeans-pytorch pip install tqdm First, import the libraries:
###Code
import torch
from kmeans_pytorch import kmeans
x = torch.from_numpy(X)
# Use the GPU if one is available, otherwise fall back to the CPU
device = 'cuda:0' if torch.cuda.is_available() else 'cpu'
c, means = kmeans(X=x, num_clusters=3, distance='euclidean', device=torch.device(device))
df["label"] = c
fig = plt.figure(figsize=(10,10))
ax = fig.add_subplot(111, projection='3d')
ax.scatter(df.Age[df.label == 0], df["Annual Income (k$)"][df.label == 0], df["Spending Score (1-100)"][df.label == 0], c='blue', s=60)
ax.scatter(df.Age[df.label == 1], df["Annual Income (k$)"][df.label == 1], df["Spending Score (1-100)"][df.label == 1], c='red', s=60)
ax.scatter(df.Age[df.label == 2], df["Annual Income (k$)"][df.label == 2], df["Spending Score (1-100)"][df.label == 2], c='green', s=60)
#ax.scatter(df.Age[df.label == 3], df["Annual Income (k$)"][df.label == 3], df["Spending Score (1-100)"][df.label == 3], c='orange', s=60)
#ax.scatter(df.Age[df.label == 4], df["Annual Income (k$)"][df.label == 4], df["Spending Score (1-100)"][df.label == 4], c='purple', s=60)
ax.view_init(0, 45)
plt.xlabel("Age")
plt.ylabel("Annual Income (k$)")
ax.set_zlabel('Spending Score (1-100)')
plt.title('Customer Segments (PyTorch k=3)')
plt.show()
###Output
_____no_output_____ |
FailurePrediction/ConstantRotationalSpeed/EnvelopeSpectrum/Envelope_Inner_014.ipynb | ###Markdown
ENVELOPE SPECTRUM - INNER RACE (Fault Diameter 0.014")
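The `envelope_spectrum2` helper used below comes from a local module whose source is not shown here. A typical implementation of an envelope spectrum — a sketch assuming the standard Hilbert-transform approach, which the local helper may or may not follow exactly — looks like this:

```python
import numpy as np
from scipy.signal import hilbert

def envelope_spectrum_sketch(x, fs):
    """Envelope spectrum: magnitude FFT of the demeaned Hilbert envelope."""
    envelope = np.abs(hilbert(x))           # amplitude envelope of the signal
    envelope = envelope - envelope.mean()   # drop the DC component
    spectrum = np.abs(np.fft.rfft(envelope)) / len(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    return freqs, spectrum

# Synthetic check: a 100 Hz carrier amplitude-modulated at 7 Hz
fs_demo = 1000
t_demo = np.arange(0, 4, 1 / fs_demo)
sig_demo = (1 + 0.5 * np.cos(2 * np.pi * 7 * t_demo)) * np.sin(2 * np.pi * 100 * t_demo)
freqs_demo, spec_demo = envelope_spectrum_sketch(sig_demo, fs_demo)
peak_freq_demo = freqs_demo[np.argmax(spec_demo)]
```

On such a signal the envelope-spectrum peak sits at the modulation frequency rather than the carrier, which is exactly why the technique exposes bearing fault rates like the BPFI below.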
###Code
import scipy.io as sio
import numpy as np
import matplotlib.pyplot as plt
import lee_dataset_CWRU
from lee_dataset_CWRU import *
import envelope_spectrum
from envelope_spectrum import *
faultRates = [3.585, 5.415, 1] #[outer, inner, shaft]
Fs = 12000
DE_I1, FE_I1, t_DE_I1, t_FE_I1, RPM_I1, samples_s_DE_I1, samples_s_FE_I1 = lee_dataset('../DataCWRU/169.mat')
DE_I2, FE_I2, t_DE_I2, t_FE_I2, RPM_I2, samples_s_DE_I2, samples_s_FE_I2 = lee_dataset('../DataCWRU/170.mat')
DE_I3, FE_I3, t_DE_I3, t_FE_I3, RPM_I3, samples_s_DE_I3, samples_s_FE_I3 = lee_dataset('../DataCWRU/171.mat')
DE_I4, FE_I4, t_DE_I4, t_FE_I4, RPM_I4, samples_s_DE_I4, samples_s_FE_I4 = lee_dataset('../DataCWRU/172.mat')
fr_I1 = RPM_I1 / 60
BPFI_I1 = 5.4152 * fr_I1
BPFO_I1 = 3.5848 * fr_I1
fr_I2 = RPM_I2 / 60
BPFI_I2 = 5.4152 * fr_I2
BPFO_I2 = 3.5848 * fr_I2
fr_I3 = RPM_I3 / 60
BPFI_I3 = 5.4152 * fr_I3
BPFO_I3 = 3.5848 * fr_I3
fr_I4 = RPM_I4 / 60
BPFI_I4 = 5.4152 * fr_I4
BPFO_I4 = 3.5848 * fr_I4
fSpec_I1, xSpec_I1 = envelope_spectrum2(DE_I1, Fs)
fSpec_I2, xSpec_I2 = envelope_spectrum2(DE_I2, Fs)
fSpec_I3, xSpec_I3 = envelope_spectrum2(DE_I3, Fs)
fSpec_I4, xSpec_I4 = envelope_spectrum2(DE_I4, Fs)
fig, ((ax1, ax2), (ax3, ax4)) = plt.subplots(2, 2)
fig.set_size_inches(14, 10)
ax1.plot(fSpec_I1, xSpec_I1, label = 'Env. spectrum')
ax1.axvline(x = fr_I1, color = 'k', linestyle = '--', lw = 1.5, label = 'fr', alpha = 0.6)
ax1.axvline(x = BPFI_I1, color = 'r', linestyle = '--', lw = 1.5, label = 'BPFI', alpha = 0.6)
ax1.axvline(x = BPFO_I1, color = 'g', linestyle = '--', lw = 1.5, label = 'BPFO', alpha = 0.6)
ax1.set_xlim(0,200)
ax1.set_xlabel('Frequency')
ax1.set_ylabel('Env. spectrum')
ax1.set_title('Inner race. Fault Diameter 0.014", 1797 RPM')
ax1.legend(loc = 2)
ax2.plot(fSpec_I2, xSpec_I2, label = 'Env. spectrum')
ax2.axvline(x = fr_I2, color = 'k', linestyle = '--', lw = 1.5, label = 'fr', alpha = 0.6)
ax2.axvline(x = BPFI_I2, color = 'r', linestyle = '--', lw = 1.5, label = 'BPFI', alpha = 0.6)
ax2.axvline(x = BPFO_I2, color = 'g', linestyle = '--', lw = 1.5, label = 'BPFO', alpha = 0.6)
ax2.set_xlim(0,200)
ax2.legend(loc = 2)
ax2.set_xlabel('Frequency')
ax2.set_ylabel('Env. spectrum')
ax2.set_title('Inner race. Fault Diameter 0.014", 1772 RPM')
ax3.plot(fSpec_I3, xSpec_I3, label = 'Env. spectrum')
ax3.axvline(x = fr_I3, color = 'k', linestyle = '--', lw = 1.5, label = 'fr', alpha = 0.6)
ax3.axvline(x = BPFI_I3, color = 'r', linestyle = '--', lw = 1.5, label = 'BPFI', alpha = 0.6)
ax3.axvline(x = BPFO_I3, color = 'g', linestyle = '--', lw = 1.5, label = 'BPFO', alpha = 0.6)
ax3.set_xlim(0,200)
ax3.legend(loc = 2)
ax3.set_xlabel('Frequency')
ax3.set_ylabel('Env. spectrum')
ax3.set_title('Inner race. Fault Diameter 0.014", 1750 RPM')
ax4.plot(fSpec_I4, xSpec_I4, label = 'Env. spectrum')
ax4.axvline(x = fr_I4, color = 'k', linestyle = '--', lw = 1.5, label = 'fr', alpha = 0.6)
ax4.axvline(x = BPFI_I4, color = 'r', linestyle = '--', lw = 1.5, label = 'BPFI', alpha = 0.6)
ax4.axvline(x = BPFO_I4, color = 'g', linestyle = '--', lw = 1.5, label = 'BPFO', alpha = 0.6)
ax4.set_xlim(0,200)
ax4.legend(loc = 2)
ax4.set_xlabel('Frequency')
ax4.set_ylabel('Env. spectrum')
ax4.set_title('Inner race. Fault Diameter 0.014", 1730 RPM')
clasificacion_inner = pd.DataFrame({'Señal': ['169.mat', '170.mat', '171.mat', '172.mat'],
'Estado': ['Fallo Inner Race'] * 4,
'Predicción': [clasificacion_envelope(fSpec_I1, xSpec_I1, fr_I1, BPFO_I1, BPFI_I1),
clasificacion_envelope(fSpec_I2, xSpec_I2, fr_I2, BPFO_I2, BPFI_I2),
clasificacion_envelope(fSpec_I3, xSpec_I3, fr_I3, BPFO_I3, BPFI_I3),
clasificacion_envelope(fSpec_I4, xSpec_I4, fr_I4, BPFO_I4, BPFI_I4)]})
clasificacion_inner
###Output
_____no_output_____ |
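`envelope_spectrum2` comes from the local `envelope_spectrum` module, which is not included in this excerpt. As a hedged sketch of the standard approach — FFT-based Hilbert transform, envelope magnitude, then the spectrum of the envelope — the following self-contained NumPy version recovers a 30 Hz modulation (a stand-in for a bearing fault frequency) from an amplitude-modulated carrier. All names and the toy signal are illustrative assumptions:

```python
import numpy as np

def envelope_spectrum(x, fs):
    """Envelope spectrum via an FFT-based analytic signal (Hilbert transform)."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)                     # frequency-domain Hilbert filter
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    analytic = np.fft.ifft(X * h)
    env = np.abs(analytic)              # the envelope of the signal
    env = env - env.mean()              # drop the DC component before the FFT
    spec = np.abs(np.fft.rfft(env)) / n
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    return freqs, spec

# Bearing-like toy signal: 2 kHz carrier amplitude-modulated at 30 Hz.
fs = 12000
t = np.arange(0, 1.0, 1.0 / fs)
x = (1.0 + 0.8 * np.cos(2 * np.pi * 30 * t)) * np.sin(2 * np.pi * 2000 * t)
freqs, spec = envelope_spectrum(x, fs)
peak_hz = freqs[np.argmax(spec)]
print(peak_hz)  # the modulation (fault) frequency, ~30 Hz
```

The envelope spectrum peaks at the modulation rate rather than the carrier, which is exactly why it is used to expose BPFI/BPFO components riding on high-frequency resonances.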
Huawei-interview/Huawei Research London Coding Interview LSTM.ipynb | ###Markdown
Coding Test You will be assessed overall on:1) How far you get in the allotted time.2) Code optimisations.3) Code reusability.4) Code readability.Some hints: 1) Take regular breaks (at least 5 minutes every hour) or change activity.2) Avoid awkward, static postures by regularly changing position.3) Get up and move or do stretching exercises.4) Avoid eye fatigue by changing focus or blinking from time to time.
###Code
import gym
import torch
import numpy
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torch.distributions import Categorical
###Output
_____no_output_____
###Markdown
Part 1: PPO - Implement a vanilla PPO learning agent and train it on 'Acrobot-v1'.
###Code
learning_rate = 0.00005
gamma = 0.98
lmbda = 0.95
#extra-hyperparameter
eps_clip = 0.1
K_epoch = 3
class PPO(nn.Module):
def __init__(self):
super(PPO, self).__init__()
self.prep_data = []
self.function1 = nn.Linear(6,256)
self.function_pi = nn.Linear(256,3)
self.function_v = nn.Linear(256,1)
self.optimizer = optim.Adam(self.parameters(), lr=learning_rate)
def pi(self, x, softmax_dim = 0):
x = F.relu(self.function1(x))
x = self.function_pi(x)
prob = F.softmax(x, dim=softmax_dim)
return prob
def v(self, x):
x = F.relu(self.function1(x))
v = self.function_v(x)
return v
def put_data(self, transition):
        self.prep_data.append(transition)
def make_batch(self):
s_lst, a_lst, r_lst, s_prime_lst, prob_a_lst, done_lst = [], [], [], [], [], []
for transition in self.prep_data:
s, a, r, s_prime, prob_a, done = transition
s_lst.append(s)
a_lst.append([a])
r_lst.append([r])
s_prime_lst.append(s_prime)
prob_a_lst.append([prob_a])
done_mask = 0 if done else 1
done_lst.append([done_mask])
s,a,r,s_prime,done_mask, prob_a = torch.tensor(s_lst, dtype=torch.float), torch.tensor(a_lst), \
torch.tensor(r_lst), torch.tensor(s_prime_lst, dtype=torch.float), \
torch.tensor(done_lst, dtype=torch.float), torch.tensor(prob_a_lst)
self.prep_data = []
return s, a, r, s_prime, done_mask, prob_a
def train_net(self):
s, a, r, s_prime, done_mask, prob_a = self.make_batch()
for i in range(K_epoch):
td_target = r + gamma * self.v(s_prime) * done_mask
delta = td_target - self.v(s)
delta = delta.detach().numpy()
advantage_lst = []
advantage = 0.0
for delta_t in delta[::-1]:
advantage = gamma * lmbda * advantage + delta_t[0]
advantage_lst.append([advantage])
advantage_lst.reverse()
advantage = torch.tensor(advantage_lst, dtype=torch.float)
pi = self.pi(s, softmax_dim=1)
pi_a = pi.gather(1,a)
ratio = torch.exp(torch.log(pi_a) - torch.log(prob_a)) # a/b == exp(log(a)-log(b))
surr1 = ratio * advantage
surr2 = torch.clamp(ratio, 1-eps_clip, 1+eps_clip) * advantage
loss = -torch.min(surr1, surr2) + F.smooth_l1_loss(self.v(s) , td_target.detach())
self.optimizer.zero_grad()
loss.mean().backward()
self.optimizer.step()
def main():
env = gym.make('Acrobot-v1')
model = PPO()
score = 0.0
print_interval = 200
for n_epi in range(10000):
s = env.reset()
done = False
test_a = 0
mn_a = 1000
while not done:
for t in range(20):
prob = model.pi(torch.from_numpy(s).float())
m = Categorical(prob)
a = m.sample().item()
# env.render()
s_prime, r, done, info = env.step(a)
test_a = max(test_a, a)
mn_a = min(mn_a, a)
model.prep_data.append((s, a, r/100.0, s_prime, prob[a].item(), done))
s = s_prime
score += r
if done:
break
model.train_net()
if n_epi%print_interval==0 and n_epi!=0:
print("# of episode :{}, avg score : {:.1f}".format(n_epi, score/print_interval))
score = 0.0
env.close()
if __name__ == '__main__':
main()
###Output
_____no_output_____ |
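The reversed loop inside `train_net` implements Generalized Advantage Estimation (GAE). Pulled out of the PyTorch graph, the same recursion can be sketched in plain NumPy — the function name and the toy deltas below are illustrative assumptions:

```python
import numpy as np

def gae(deltas, gamma=0.98, lmbda=0.95):
    """advantage_t = delta_t + gamma * lambda * advantage_{t+1}, computed backwards."""
    advantage = 0.0
    out = []
    for delta_t in deltas[::-1]:
        advantage = gamma * lmbda * advantage + delta_t
        out.append(advantage)
    return np.array(out[::-1])

adv = gae(np.array([1.0, 0.0, -1.0]))
print(adv)  # last step is just its own delta; earlier steps add discounted tails
```

Working backwards keeps the computation O(T): each advantage reuses the already-accumulated discounted sum of later TD errors.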
20201114_ResNet50V2_kfold.ipynb | ###Markdown
Model
###Code
import tensorflow as tf
from tensorflow.keras.preprocessing import image
import cv2
import matplotlib.pyplot as plt
from PIL import Image
from sklearn.model_selection import train_test_split, KFold, RepeatedKFold, GroupKFold, RepeatedStratifiedKFold
from sklearn.utils import shuffle
import numpy as np
import pandas as pd
import os
import os.path as pth
import shutil
import random
from glob import glob
import time
from tqdm import tqdm
import itertools
from itertools import product, combinations
import numpy as np
from PIL import Image
from IPython.display import clear_output
from multiprocessing import Process, Queue
import datetime
import tensorflow.keras as keras
from tensorflow.keras.utils import to_categorical, Sequence
from tensorflow.keras.layers import Input, Dense, Activation, BatchNormalization, \
Flatten, Conv3D, AveragePooling3D, MaxPooling3D, Dropout, \
Concatenate, GlobalMaxPool3D, GlobalAvgPool3D
from tensorflow.keras.models import Sequential, Model, load_model
from tensorflow.keras.optimizers import SGD, Adam
from tensorflow.keras.callbacks import ModelCheckpoint,LearningRateScheduler, \
EarlyStopping
from tensorflow.keras.losses import mean_squared_error, mean_absolute_error
from tensorflow.keras import backend as K
from tensorflow.keras.constraints import max_norm
def build_cnn(config):
input_layer = Input(shape=config['input_shape'], name='input_layer')
pret_model = my_model(
input_tensor=input_layer, include_top=False, weights='imagenet',
input_shape=config['input_shape'], pooling=config['between_type'],
classes=config['output_size']
)
pret_model.trainable = False
x = pret_model.output
if config['between_type'] == None:
x = Flatten(name='flatten_layer')(x)
if config['is_dropout']:
x = Dropout(config['dropout_rate'], name='output_dropout')(x)
x = Dense(config['output_size'], activation=config['output_activation'],
name='output_fc')(x)
# x = Activation(activation=config['output_activation'], name='output_activation')(x)
model = Model(inputs=input_layer, outputs=x, name='{}'.format(BASE_MODEL_NAME))
return model
model = build_cnn(config)
model.summary(line_length=150)
del model
model_base_path = data_base_path
model_checkpoint_path = pth.join(model_base_path, 'checkpoint')
def seed_everything(seed):
random.seed(seed)
np.random.seed(seed)
os.environ['PYTHONHASHSEED'] = str(seed)
tf.random.set_seed(seed)
AUTO = tf.data.experimental.AUTOTUNE
FILENAMES = tf.io.gfile.glob(pth.join(data_base_path, 'train_tfrec', '*'))
TEST_FILENAMES = tf.io.gfile.glob(pth.join(data_base_path, 'test_tfrec', '*'))
# training tfrecords 로드
def read_tr_tfrecord(example):
TFREC_FORMAT = {
"image_raw": tf.io.FixedLenFeature([], tf.string),
"landmark_id": tf.io.FixedLenFeature([], tf.int64),
'id': tf.io.FixedLenFeature([], tf.string),
}
example = tf.io.parse_single_example(example, TFREC_FORMAT)
return example
# image = example['image_raw']
# target = tf.cast(example['landmark_id'], tf.int64)
# return image, target
# validation tfrecords 로드
def read_val_tfrecord(example):
TFREC_FORMAT = {
"image_raw": tf.io.FixedLenFeature([], tf.string),
"landmark_id": tf.io.FixedLenFeature([], tf.int64),
'id': tf.io.FixedLenFeature([], tf.string),
}
example = tf.io.parse_single_example(example, TFREC_FORMAT)
return example
# image = example['image_raw']
# target = tf.cast(example['landmark_id'], tf.int64)
# return image, target
# test tfrecords 로드
def read_test_tfrecord(example):
TFREC_FORMAT = {
"image_raw": tf.io.FixedLenFeature([], tf.string),
'id': tf.io.FixedLenFeature([], tf.string),
}
example = tf.io.parse_single_example(example, TFREC_FORMAT)
return example
# image = example['image_raw']
# id = example['id']
# return image, id
def get_training_dataset(filenames, ordered = False):
ignore_order = tf.data.Options()
if not ordered:
ignore_order.experimental_deterministic = False
dataset = tf.data.TFRecordDataset(filenames, num_parallel_reads = AUTO)
dataset = dataset.with_options(ignore_order)
dataset = dataset.map(read_tr_tfrecord, num_parallel_calls = AUTO)
#dataset = dataset.map(_parse_image_function, num_parallel_calls=tf.data.experimental.AUTOTUNE)
# dataset = dataset.cache()
dataset = dataset.map(map_func, num_parallel_calls=tf.data.experimental.AUTOTUNE)
dataset = dataset.map(resize_and_crop_func, num_parallel_calls=tf.data.experimental.AUTOTUNE)
dataset = dataset.map(image_aug_func, num_parallel_calls=tf.data.experimental.AUTOTUNE)
dataset = dataset.repeat()
dataset = dataset.shuffle(config['buffer_size'])
dataset = dataset.batch(config['batch_size'])
dataset = dataset.map(post_process_func, num_parallel_calls=tf.data.experimental.AUTOTUNE)
dataset = dataset.prefetch(buffer_size=tf.data.experimental.AUTOTUNE)
return dataset
def get_validation_dataset(filenames, ordered = True, prediction = False):
ignore_order = tf.data.Options()
if not ordered:
ignore_order.experimental_deterministic = False
dataset = tf.data.TFRecordDataset(filenames, num_parallel_reads = AUTO)
dataset = dataset.with_options(ignore_order)
dataset = dataset.map(read_val_tfrecord, num_parallel_calls = AUTO)
dataset = dataset.map(map_func, num_parallel_calls=tf.data.experimental.AUTOTUNE)
dataset = dataset.map(resize_and_crop_func, num_parallel_calls=tf.data.experimental.AUTOTUNE)
#dataset = dataset.map(image_aug_func, num_parallel_calls=tf.data.experimental.AUTOTUNE)
if prediction:
dataset = dataset.batch(config['batch_size'] * 4) # why 4 times?
else:
dataset = dataset.batch(config['batch_size'])
dataset = dataset.map(post_process_func, num_parallel_calls=tf.data.experimental.AUTOTUNE)
dataset = dataset.prefetch(AUTO)
return dataset
image_feature_description_for_test = {
'image_raw': tf.io.FixedLenFeature([], tf.string),
# 'randmark_id': tf.io.FixedLenFeature([], tf.int64),
# 'id': tf.io.FixedLenFeature([], tf.string),
}
def _parse_image_function_for_test(example_proto):
return tf.io.parse_single_example(example_proto, image_feature_description_for_test)
def map_func_for_test(target_record):
img = target_record['image_raw']
img = tf.image.decode_jpeg(img, channels=3)
img = tf.dtypes.cast(img, tf.float32)
return img
def resize_and_crop_func_for_test(image):
result_image = tf.image.resize(image, config['aug']['resize'])
#result_image = tf.image.random_crop(image, size=config['input_shape'], seed=7777) # revive
return result_image
def image_aug_func_for_test(img):
#pass
img = tf.image.random_flip_left_right(img)
#img = tf.image.random_hue(img, 0.01)
img = tf.image.random_saturation(img, 0.7, 1.3)
img = tf.image.random_contrast(img, 0.8, 1.2)
img = tf.image.random_brightness(img, 0.1)
return img
def post_process_func_for_test(image):
# result_image = result_image / 255
result_image = my_model_base.preprocess_input(image)
return result_image
# def test_just_image(image, id):
# return image
# def test_just_id(image, id):
# return id
def get_test_dataset(filenames, ordered=True, prediction=False, name=False):
ignore_order = tf.data.Options()
if not ordered:
ignore_order.experimental_deterministic = False
dataset = tf.data.TFRecordDataset(filenames, num_parallel_reads = AUTO)
dataset = dataset.with_options(ignore_order)
dataset = dataset.map(read_test_tfrecord, num_parallel_calls = AUTO)
dataset = dataset.map(map_func_for_test, num_parallel_calls=tf.data.experimental.AUTOTUNE)
dataset = dataset.map(resize_and_crop_func_for_test, num_parallel_calls=tf.data.experimental.AUTOTUNE)
dataset = dataset.map(image_aug_func_for_test, num_parallel_calls=tf.data.experimental.AUTOTUNE)
dataset = dataset.repeat()
# if name:
# dataset = dataset.map(test_just_id, num_parallel_calls = AUTO)
# else:
# dataset = dataset.map(test_just_image, num_parallel_calls = AUTO)
dataset = dataset.batch(config['batch_size'])
dataset = dataset.map(post_process_func_for_test, num_parallel_calls=tf.data.experimental.AUTOTUNE)
dataset = dataset.prefetch(AUTO)
return dataset
# USE DIFFERENT SEED FOR DIFFERENT STRATIFIED KFOLD
SEED = 42
# NUMBER OF FOLDS. USE 3, 5, OR 15
FOLDS = 5
#BATCH_SIZES = [32]*FOLDS
EPOCHS = [8]*FOLDS
PRE_TRAIN_EPOCH = 1
# WGTS - this should be 1/FOLDS for each fold. This is the weight when ensembling the folds to predict the test set. If you want a weird ensemble, you can use different weights.
# WEIGHTS FOR FOLD MODELS WHEN PREDICTING TEST
WGTS = [1/FOLDS]*FOLDS
# TEST TIME AUGMENTATION STEPS
TTA = 2
def get_lr_callback():
lr_start = 0.000001*10*0.5
lr_max = 0.0000005 * config['batch_size'] * 10*0.5
lr_min = 0.000001 * 10*0.5
#lr_ramp_ep = 3 #### TODO: NEED TO BE CONSIDERED WISELY. # 5
lr_ramp_ep = config['batch'] // 3 #### (small lr) going up -> ramp (large max lr) -> going down (small lr)
lr_sus_ep = 0
lr_decay = 0.8
def lrfn(epoch):
if epoch < lr_ramp_ep:
lr = (lr_max - lr_start) / lr_ramp_ep * epoch + lr_start
elif epoch < lr_ramp_ep + lr_sus_ep:
lr = lr_max
else:
lr = (lr_max - lr_min) * lr_decay**(epoch - lr_ramp_ep - lr_sus_ep) + lr_min
print('lr=',lr)
return lr
lr_callback = tf.keras.callbacks.LearningRateScheduler(lrfn, verbose = False)
return lr_callback
base = BASE_MODEL_NAME
base += '_resize_{}'.format(config['aug']['resize'][0])
#base += '_input_{}'.format(config['input_shape'][0])
base += '_conv_{}'.format('-'.join(map(lambda x:str(x),config['conv']['conv_num'])))
base += '_basech_{}'.format(config['conv']['base_channel'])
base += '_act_{}'.format(config['activation'])
base += '_pool_{}'.format(config['pool']['type'])
base += '_betw_{}'.format(config['between_type'])
base += '_fc_{}'.format(config['fc']['fc_num'])
base += '_zscore_{}'.format(config['is_zscore'])
base += '_batch_{}'.format(config['batch_size'])
if config['is_dropout']:
base += '_DO_'+str(config['dropout_rate']).replace('.', '')
if config['is_batchnorm']:
base += '_BN'+'_O'
else:
base += '_BN'+'_X'
model_name = base
import gc
from sklearn.model_selection import KFold
FILENAMES = np.array(FILENAMES)
oof_pred = []; oof_tar = []; oof_val = []; oof_names = []; oof_folds = []
preds = np.zeros((len(TEST_FILENAMES),config['num_class']))
skf = KFold(n_splits = FOLDS, shuffle=True,random_state=SEED)
for fold, (tr_index, val_index) in enumerate(skf.split(FILENAMES)):
# if fold == 0:
# continue
print('#'*25); print('#### FOLD',fold+1)
#gc.collect()
#print('################', 'lr=', LEARNING_RATE)
print(model_name)
TRAINING_FILENAMES, VALIDATION_FILENAMES = FILENAMES[tr_index], FILENAMES[val_index]
#NUM_TRAINING_IMAGES = count_data_items(TRAINING_FILENAMES)
np.random.shuffle(TRAINING_FILENAMES); print('#'*25)
#seed_everything(SEED)
train_dataset = get_training_dataset(TRAINING_FILENAMES,ordered = False)
val_dataset = get_validation_dataset(VALIDATION_FILENAMES,ordered = True, prediction = False)
print('FILENAMES=', len(FILENAMES))
print('TRAINING_FILENAMES=', len(TRAINING_FILENAMES))
print('VALIDATION_FILENAMES=', len(VALIDATION_FILENAMES))
STEPS_PER_EPOCH = np.ceil(len(TRAINING_FILENAMES)/config['batch_size'])
print('STEPS_PER_EPOCH=', STEPS_PER_EPOCH)
model_path = pth.join(
model_checkpoint_path, model_name,
)
model = build_cnn(config)
initial_epoch = 0
# if pth.isdir(model_path) and len([_ for _ in os.listdir(model_path) if _.endswith('hdf5')]) >= 1:
# for layer in model.layers[:166]:
# layer.trainable = False
# for layer in model.layers[166:]:
# layer.trainable = True
# model.compile(loss=config['loss'], optimizer=Adam(lr=config['learning_rate']),
# metrics=['acc', 'Precision', 'Recall', 'AUC'])
# model_chk_name = sorted(os.listdir(model_path))[-1]
# initial_epoch = int(model_chk_name.split('-')[0])
# model.load_weights(pth.join(model_path, model_chk_name))
# else:
model.compile(optimizer='rmsprop', loss='categorical_crossentropy',
metrics=['acc', 'Precision', 'Recall', 'AUC'])
model.fit(
x=train_dataset, epochs=PRE_TRAIN_EPOCH, # train only top layers for just a few epochs.
validation_data=val_dataset, shuffle=True,
steps_per_epoch=STEPS_PER_EPOCH,
#callbacks = [checkpointer, es], #batch_size=config['batch_size']
initial_epoch=initial_epoch,
# steps_per_epoch=train_num_steps, validation_steps=val_num_steps,
verbose=1)
for i, layer in enumerate(model.layers):
print(i, layer.name)
for layer in model.layers[:166]:
layer.trainable = False
for layer in model.layers[166:]:
layer.trainable = True
model.compile(loss=config['loss'], optimizer=Adam(lr=config['learning_rate']),
metrics=['acc', 'Precision', 'Recall', 'AUC'])
initial_epoch=PRE_TRAIN_EPOCH
# ### Freeze first layer
# conv_list = [layer for layer in model.layers if isinstance(layer, keras.layers.Conv2D)]
# conv_list[0].trainable = False
# # conv_list[1].trainable = False
os.makedirs(model_path, exist_ok=True)
model_filename = pth.join(model_path, f'fold{fold:02d}-' +'{epoch:06d}-{val_loss:0.6f}-{loss:0.6f}.hdf5')
checkpointer = ModelCheckpoint(
filepath=model_filename, verbose=1,
period=1, save_best_only=True,
monitor='val_loss'
)
es = EarlyStopping(monitor='val_loss', verbose=1, patience=10)
hist = model.fit(
x=train_dataset, #epochs=config['num_epoch'],
#batch_size = BATCH_SIZES[fold],
epochs=EPOCHS[fold],
steps_per_epoch=STEPS_PER_EPOCH,
validation_data=val_dataset, shuffle=True,
callbacks = [get_lr_callback(), checkpointer], #, es], #batch_size=config['batch_size']
initial_epoch=0, #### JUST 0 TO FIXED EPOCH COUNT #initial_epoch,
# steps_per_epoch=train_num_steps, validation_steps=val_num_steps,
verbose=1
)
model_chk_name = sorted(glob(pth.join(model_path, f'fold{fold:02d}-*')))[-1]
print('selected weight to load=', model_chk_name)
model.load_weights(model_chk_name)
ct_test = len(TEST_FILENAMES)
STEPS = TTA * ct_test / config['batch_size']
pred = model.predict(test_dataset,steps=STEPS, verbose=1)[:ct_test * TTA,]
preds += np.mean(pred.reshape((ct_test, TTA, config['num_class']), order='F'), axis=1) * WGTS[fold]
K.clear_session()
del(model)
# chk_name_list = sorted([name for name in os.listdir(model_path) if name != '000000_last.hdf5'])
# for chk_name in chk_name_list[:-20]:
# os.remove(pth.join(model_path, chk_name))
# clear_output()
### Inference
submission_base_path = pth.join(data_base_path, 'submission')
os.makedirs(submission_base_path, exist_ok=True)
pred_labels = np.argsort(-preds)
submission_csv_path = pth.join(data_base_path, submission_csv_name)
submission_df = pd.read_csv(submission_csv_path)
today_str = datetime.date.today().strftime('%Y%m%d')
result_filename = '{}.csv'.format(model_name)
submission_csv_fileaname = pth.join(submission_base_path, '_'.join([today_str, result_filename]))
submission_csv_fileaname_top1 = pth.join(submission_base_path, '_'.join([today_str, 'top1', result_filename]))
merged_df = []
RANK_TO_SAVE = 5
for i in range(RANK_TO_SAVE):
tmp_df = submission_df.copy()
tmp_labels = pred_labels[:, i]
tmp_df['landmark_id'] = tmp_labels
tmp_df['conf'] = np.array([pred[indice] for pred, indice in zip(preds, tmp_labels)])
if i == 0:
tmp_df.to_csv(submission_csv_fileaname_top1, index=False)
merged_df.append(tmp_df)
submission_df = pd.concat(merged_df)
submission_df.to_csv(submission_csv_fileaname, index=False)
model_path = pth.join(
model_checkpoint_path, model_name,
)
model = build_cnn(config)
# model.summary()
model.compile(loss=config['loss'], optimizer=Adam(lr=config['learning_rate']),
metrics=['acc', 'Precision', 'Recall', 'AUC'])
#model_chk_name = sorted(glob(pth.join(model_path, 'fold{fold:02d}-*')))[-1]
model_chk_name = sorted(glob(pth.join(model_path, '*')))[-1]
print('selected weight to load=', model_chk_name)
model.load_weights(model_chk_name)
test_dataset = get_test_dataset(TEST_FILENAMES)
###Output
_____no_output_____
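The `lrfn` closure inside `get_lr_callback` encodes a ramp-up / (optional) sustain / exponential-decay schedule. Stripped of the Keras callback wrapper, the same shape can be sketched and sanity-checked in plain Python — the default numbers below are illustrative, not the notebook's exact values:

```python
def lr_schedule(epoch, lr_start=5e-6, lr_max=8e-5, lr_min=5e-6,
                ramp_ep=3, sus_ep=0, decay=0.8):
    """Linear ramp to lr_max, optional flat sustain, then exponential decay to lr_min."""
    if epoch < ramp_ep:
        return (lr_max - lr_start) / ramp_ep * epoch + lr_start
    if epoch < ramp_ep + sus_ep:
        return lr_max
    return (lr_max - lr_min) * decay ** (epoch - ramp_ep - sus_ep) + lr_min

lrs = [lr_schedule(e) for e in range(10)]
print(lrs)  # rises for ramp_ep epochs, peaks at lr_max, then decays toward lr_min
```

The decay term is `(lr_max - lr_min) * decay**k + lr_min`, so the learning rate approaches `lr_min` asymptotically instead of collapsing to zero.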
###Markdown
Define dataset
test_dataset = tf.data.TFRecordDataset(test_tfrecord_path, compression_type='GZIP')
test_dataset = test_dataset.map(_parse_image_function_for_test, num_parallel_calls=tf.data.experimental.AUTOTUNE)
test_dataset = test_dataset.map(map_func_for_test, num_parallel_calls=tf.data.experimental.AUTOTUNE)
test_dataset = test_dataset.map(resize_and_crop_func_for_test, num_parallel_calls=tf.data.experimental.AUTOTUNE)
test_dataset = test_dataset.map(image_aug_func, num_parallel_calls=tf.data.experimental.AUTOTUNE)
test_dataset = test_dataset.repeat()
test_dataset = test_dataset.batch(config['batch_size'])
test_dataset = test_dataset.map(post_process_func_for_test, num_parallel_calls=tf.data.experimental.AUTOTUNE)
test_dataset = test_dataset.prefetch(buffer_size=tf.data.experimental.AUTOTUNE)
###Code
pred = model.predict(test_dataset,verbose=1) #[:TTA*ct_test,]
np.shape(pred)
preds = np.zeros((len(TEST_FILENAMES),config['num_class']))
ct_test = len(TEST_FILENAMES)
TTA = 3
STEPS = TTA * ct_test / config['batch_size']
pred = model.predict(test_dataset,steps=STEPS, verbose=1)[:ct_test * TTA,]
tmp = pred[:ct_test * TTA,]
preds += np.mean(tmp.reshape((ct_test, TTA, config['num_class']), order='F'), axis=1) * WGTS[fold]
np.shape(preds)
np.shape(m)
preds += m
m[0]
preds[0]
m = np.mean(tmp,axis=1)
np.shape(preds)
np.shape(m)
ds_test = get_dataset(files_test,labeled=False,return_image_names=False,augment=True,
repeat=True,shuffle=False,dim=IMG_SIZES[fold],batch_size=BATCH_SIZES[fold]*4)
ct_test = count_data_items(files_test); STEPS = TTA * ct_test/BATCH_SIZES[fold]/4/REPLICAS
pred = model.predict(ds_test,steps=STEPS,verbose=VERBOSE)[:TTA*ct_test,]
preds[:,0] += np.mean(pred.reshape((ct_test,TTA),order='F'),axis=1) * WGTS[fold]
###Output
_____no_output_____
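The `order='F'` reshape used for test-time augmentation deserves a sanity check: `model.predict` on the repeated dataset returns all images for pass 1, then all images for pass 2, stacked row-wise, and the Fortran-order reshape regroups those rows so that axis 1 indexes the TTA passes. A small NumPy demo with toy shapes (all names illustrative):

```python
import numpy as np

ct_test, TTA, num_class = 3, 2, 4
rng = np.random.default_rng(0)
pass1 = rng.random((ct_test, num_class))   # predictions from TTA pass 1
pass2 = rng.random((ct_test, num_class))   # predictions from TTA pass 2
pred = np.vstack([pass1, pass2])           # what predict() yields on the repeated dataset

avg = np.mean(pred.reshape((ct_test, TTA, num_class), order='F'), axis=1)
print(avg.shape)  # (3, 4): one averaged prediction per test image
```

With the default C-order reshape the rows of different images would be mixed together, so `order='F'` is essential here.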
###Markdown
Inference
###Code
image_feature_description_for_test = {
'image_raw': tf.io.FixedLenFeature([], tf.string),
# 'randmark_id': tf.io.FixedLenFeature([], tf.int64),
# 'id': tf.io.FixedLenFeature([], tf.string),
}
def _parse_image_function_for_test(example_proto):
return tf.io.parse_single_example(example_proto, image_feature_description_for_test)
def map_func_for_test(target_record):
img = target_record['image_raw']
img = tf.image.decode_jpeg(img, channels=3)
img = tf.dtypes.cast(img, tf.float32)
return img
def resize_and_crop_func_for_test(image):
result_image = tf.image.resize(image, config['aug']['resize'])
#result_image = tf.image.random_crop(image, size=config['input_shape'], seed=7777) # revive
return result_image
def post_process_func_for_test(image):
# result_image = result_image / 255
result_image = my_model_base.preprocess_input(image)
return result_image
submission_base_path = pth.join(data_base_path, 'submission')
os.makedirs(submission_base_path, exist_ok=True)
preds = []
# for conv_comb, activation, base_channel, \
# between_type, fc_num, batch_size \
# in itertools.product(conv_comb_list, activation_list,
# base_channel_list, between_type_list, fc_list,
# batch_size_list):
# config['conv']['conv_num'] = conv_comb
# config['conv']['base_channel'] = base_channel
# config['activation'] = activation
# config['between_type'] = between_type
# config['fc']['fc_num'] = fc_num
# config['batch_size'] = batch_size
for LEARNING_RATE in [1e-3]: #, 1e-4, 1e-5]: # just once
base = BASE_MODEL_NAME
base += '_resize_{}'.format(config['aug']['resize'][0])
#base += '_input_{}'.format(config['input_shape'][0])
base += '_conv_{}'.format('-'.join(map(lambda x:str(x),config['conv']['conv_num'])))
base += '_basech_{}'.format(config['conv']['base_channel'])
base += '_act_{}'.format(config['activation'])
base += '_pool_{}'.format(config['pool']['type'])
base += '_betw_{}'.format(config['between_type'])
base += '_fc_{}'.format(config['fc']['fc_num'])
base += '_zscore_{}'.format(config['is_zscore'])
base += '_batch_{}'.format(config['batch_size'])
if config['is_dropout']:
base += '_DO_'+str(config['dropout_rate']).replace('.', '')
if config['is_batchnorm']:
base += '_BN'+'_O'
else:
base += '_BN'+'_X'
model_name = base
print(model_name)
### Define dataset
test_dataset = tf.data.TFRecordDataset(test_tfrecord_path, compression_type='GZIP')
test_dataset = test_dataset.map(_parse_image_function_for_test, num_parallel_calls=tf.data.experimental.AUTOTUNE)
test_dataset = test_dataset.map(map_func_for_test, num_parallel_calls=tf.data.experimental.AUTOTUNE)
test_dataset = test_dataset.map(resize_and_crop_func_for_test, num_parallel_calls=tf.data.experimental.AUTOTUNE)
test_dataset = test_dataset.batch(config['batch_size'])
test_dataset = test_dataset.map(post_process_func_for_test, num_parallel_calls=tf.data.experimental.AUTOTUNE)
test_dataset = test_dataset.prefetch(buffer_size=tf.data.experimental.AUTOTUNE)
model_path = pth.join(
model_checkpoint_path, model_name,
)
model = build_cnn(config)
# model.summary()
model.compile(loss=config['loss'], optimizer=Adam(lr=config['learning_rate']),
metrics=['acc', 'Precision', 'Recall', 'AUC'])
initial_epoch = 0
model_chk_name = sorted(os.listdir(model_path))[-1]
print('selected weight to load=', model_chk_name)
initial_epoch = int(model_chk_name.split('-')[0])
model.load_weights(pth.join(model_path, model_chk_name))
preds = model.predict(test_dataset, verbose=1)
#pred_labels = np.argmax(preds, axis=1)
#pred_probs = np.array([pred[indice] for pred, indice in zip(preds, pred_labels)])
# argmax --> top3
pred_labels = np.argsort(-preds)
submission_csv_path = pth.join(data_base_path, submission_csv_name)
submission_df = pd.read_csv(submission_csv_path)
merged_df = []
RANK_TO_SAVE = 5
for i in range(RANK_TO_SAVE):
tmp_df = submission_df.copy()
tmp_labels = pred_labels[:, i]
tmp_df['landmark_id'] = tmp_labels
tmp_df['conf'] = np.array([pred[indice] for pred, indice in zip(preds, tmp_labels)])
merged_df.append(tmp_df)
submission_df = pd.concat(merged_df)
#submission_df['landmark_id'] = pred_labels
#submission_df['conf'] = pred_probs
today_str = datetime.date.today().strftime('%Y%m%d')
result_filename = '{}.csv'.format(model_name)
submission_csv_fileaname = pth.join(submission_base_path, '_'.join([today_str, result_filename]))
submission_df.to_csv(submission_csv_fileaname, index=False)
submission_csv_path = pth.join(data_base_path, submission_csv_name)
submission_df = pd.read_csv(submission_csv_path)
merged_df = []
RANK_TO_SAVE = 1
for i in range(RANK_TO_SAVE):
tmp_df = submission_df.copy()
tmp_labels = pred_labels[:, i]
tmp_df['landmark_id'] = tmp_labels
tmp_df['conf'] = np.array([pred[indice] for pred, indice in zip(preds, tmp_labels)])
merged_df.append(tmp_df)
submission_df = pd.concat(merged_df)
#submission_df['landmark_id'] = pred_labels
#submission_df['conf'] = pred_probs
today_str = datetime.date.today().strftime('%Y%m%d')
result_filename = '{}_top1.csv'.format(model_name)
submission_csv_fileaname = pth.join(submission_base_path, '_'.join([today_str, result_filename]))
submission_df.to_csv(submission_csv_fileaname, index=False)
###Output
_____no_output_____ |
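`np.argsort(-preds)` ranks the classes of each row by descending confidence, which is what feeds the top-1 / top-5 submission frames above. A tiny demo of the indexing, with toy numbers:

```python
import numpy as np

preds = np.array([[0.1, 0.7, 0.2],
                  [0.5, 0.2, 0.3]])
pred_labels = np.argsort(-preds)     # per row: class ids, most confident first
top1 = pred_labels[:, 0]
top1_conf = np.array([p[i] for p, i in zip(preds, top1)])
print(pred_labels)
```

Negating the scores turns an ascending argsort into a descending ranking; slicing column `i` of `pred_labels` then gives the rank-`i` class for every sample.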
week_2/.ipynb_checkpoints/day_9_lab-checkpoint.ipynb | ###Markdown
Exercise 1Write a function that takes a string as input and returns whether the string is a valid password (the return value is True) or not (the return value is False). A string is a valid password if - it contains at least 1 number between 0 and 9,- it contains at least 1 character from the list ['$','<','@','.','!','?','>'],- it has a minimum length of at least 6 characters.Please check that the input to the function is indeed a string. If it is not, print a diagnostic message and raise a ValueError.Test your function with the strings below to make sure it works correctly. Note how the test strings try all possible ways the conditions can fail. You should aim to test your code with such thoroughness.'ilikeplums!' => False (fails the first condition)'plum2020' => False (fails the second condition)'a2b3?' => False (fails the third condition)'applesaretasty' => False (fails the first and second conditions)'plum!' => False (fails the first and third conditions)'plum5' => False (fails the second and third conditions)'apple' => False (fails all three conditions)'<apple>1234' => True'!p1umsareblue' => True
###Code
# this is a list with all the test passwords
# write a for loop to iterate through the elements, call your function on each element,
# and check if your function gives the correct output
passwords_lst = ['ilikeplums!','plum2020','a2b3?','applesaretasty','plum!','plum5','apple','<apple>1234','!p1umsareblue']
def check_pwd(pwd):
# add your code here:
return
for password in passwords_lst:
print(check_pwd(password))
###Output
_____no_output_____ |
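For reference, one possible solution sketch for `check_pwd`. Hedged assumption: the special-character list below takes '<' and '>' as the two entries lost from the exercise text during extraction, which is consistent with '<apple>1234' being listed as a valid password:

```python
def check_pwd(pwd):
    """Return True if pwd is a valid password; raise ValueError on non-string input."""
    if not isinstance(pwd, str):
        print('check_pwd expects a string, got {}'.format(type(pwd).__name__))
        raise ValueError('password must be a string')
    special_chars = ['$', '<', '@', '.', '!', '?', '>']   # assumed set, see note above
    has_digit = any(ch.isdigit() for ch in pwd)           # condition 1
    has_special = any(ch in special_chars for ch in pwd)  # condition 2
    return has_digit and has_special and len(pwd) >= 6    # condition 3

passwords_lst = ['ilikeplums!', 'plum2020', 'a2b3?', 'applesaretasty',
                 'plum!', 'plum5', 'apple', '<apple>1234', '!p1umsareblue']
results = [check_pwd(p) for p in passwords_lst]
print(results)  # expected: seven False values followed by two True values
```

Each condition is computed independently, which makes it easy to see which one a failing test string violates.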