# Write a program to remove characters from a string starting from zero up to n and return a new string.
__Example:__
remove_char("Untitled", 4) should output "tled". Here we need to remove the first four characters from the string.
```
def remove_char(a, b):
# Write your code here
print("started")
a="Untitled"
b=4
remove_char(a,b)
```
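One possible solution (not part of the original stub) is a one-line slice; the test call mirrors the example above.
```python
def remove_char(a, b):
    # Drop the first b characters and return the rest
    return a[b:]

print(remove_char("Untitled", 4))  # -> tled
```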
# Write a program to find how many times substring appears in the given string.
__Example:__
"You can use Markdown to format documentation you add to Markdown cells" sub_string: Markdown
In the above, the substring "Markdown" appears two times, so the count is two.
```
def sub_string(m_string,s_string):
# Write your code here
print("started")
m_string="You can use Markdown to format documentation you add to Markdown cells"
s_string="Markdown"
sub_string(m_string,s_string)
```
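A minimal sketch of one way to complete the stub, using the built-in `str.count`, which counts non-overlapping occurrences.
```python
def sub_string(m_string, s_string):
    return m_string.count(s_string)

text = "You can use Markdown to format documentation you add to Markdown cells"
print(sub_string(text, "Markdown"))  # -> 2
```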
# Write a program to check if the given number is a palindrome number.
__Example:__
A palindrome number is a number that remains the same when its digits are reversed. For example, 242 is a palindrome number.
```
def palindrom_check(a):
# Write your code here
print("started")
palindrom_check(242)
```
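One possible completion of the stub: compare the number's digits to their reverse.
```python
def palindrom_check(a):
    s = str(a)
    return s == s[::-1]

print(palindrom_check(242))  # -> True
print(palindrom_check(123))  # -> False
```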
# Write a program to Extract Unique values from dictionary values
__Example:__
test = {'gfg': [5, 6, 7, 8], 'is': [10, 11, 7, 5], 'best': [6, 12, 10, 8], 'for': [1, 2, 5]}
output: [1, 2, 5, 6, 7, 8, 10, 11, 12]
```
def extract_unique(a):
# Write your code here
print("started")
test= {'gfg': [5, 6, 7, 8], 'is': [10, 11, 7, 5], 'best' : [6, 12, 10, 8], 'for': [1, 2, 5]}
extract_unique(test)
```
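A sketch of one possible solution: flatten the value lists into a set and sort it.
```python
def extract_unique(a):
    # Collect every value from every list, deduplicate, then sort
    return sorted({v for values in a.values() for v in values})

test = {'gfg': [5, 6, 7, 8], 'is': [10, 11, 7, 5], 'best': [6, 12, 10, 8], 'for': [1, 2, 5]}
print(extract_unique(test))  # -> [1, 2, 5, 6, 7, 8, 10, 11, 12]
```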
# Write a program to find the dictionary with maximum count of pairs
__Example:__
Input: test_list = [{"gfg": 2, "best":4}, {"gfg": 2, "is" : 3, "best": 4, "CS":9}, {"gfg":2}]
Output: 4
```
def max_count(a):
# Write your code here
print("started")
test_list = [{"gfg": 2, "best":4}, {"gfg": 2, "is" : 3, "best": 4, "CS":9}, {"gfg":2}]
max_count(test_list)
```
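One way to complete the stub: the answer is the length of the largest dictionary in the list.
```python
def max_count(a):
    return max(len(d) for d in a)

test_list = [{"gfg": 2, "best": 4}, {"gfg": 2, "is": 3, "best": 4, "CS": 9}, {"gfg": 2}]
print(max_count(test_list))  # -> 4
```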
# Access the value of key 'history' from the below dict
```
def key_access(a):
# Write your code here
print("started")
sampleDict = {
"class":{
"student":{
"name": "Mike",
"marks" : {
"physics":70,
"history":80
}
}
}
}
key_access(sampleDict)
```
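A possible completion: chain the nested keys down to 'history'.
```python
def key_access(a):
    return a["class"]["student"]["marks"]["history"]

sampleDict = {"class": {"student": {"name": "Mike", "marks": {"physics": 70, "history": 80}}}}
print(key_access(sampleDict))  # -> 80
```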
# Print the value of the key 'hair'
# Print the third element of the key 'interested_in'
```
def third_ele(a):
# Write your code here
print("started")
info={
"personal data":{
"name":"Lauren",
"age":20,
"major":"Information Science",
"physical_features":{
"color":{
"eye":"blue",
"hair":"brown"
},
"height":"5'8"
}
},
"other":{
"favorite_colors":[
"purple",
"green",
"blue"
],
"interested_in":[
"social media",
"intellectual property",
"copyright",
"music",
"books"
]
}
}
third_ele(info)
import pandas as pd
import numpy as np
exam_data = {'name': ['Anastasia', 'Dima', 'Katherine', 'James', 'Emily', 'Michael', 'Matthew', 'Laura', 'Kevin', 'Jonas'],
'score': [12.5, 9, 16.5, np.nan, 9, 20, 14.5, np.nan, 8, 19],
'attempts': [1, 3, 2, 3, 2, 3, 1, 1, 2, 1],
'qualify': ['yes', 'no', 'yes', 'no', 'no', 'yes', 'yes', 'no', 'no', 'yes']}
labels = ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j']
df = pd.DataFrame(exam_data , index=labels)
df
```
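A sketch of one way to answer both prompts above; it assumes the `info` dictionary defined in the cell above.
```python
def third_ele(a):
    # Value of the nested key 'hair'
    print(a["personal data"]["physical_features"]["color"]["hair"])  # -> brown
    # Third element of the 'interested_in' list (index 2)
    print(a["other"]["interested_in"][2])                            # -> copyright

third_ele(info)
```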
# Print the Unique values from attempts column
```
def un_values(df):
# Write your code here
print("started")
un_values(df)
```
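One possible completion, assuming the exam-data `df` built in the cell above: `Series.unique` returns the distinct values in order of first appearance.
```python
def un_values(df):
    print(df['attempts'].unique())  # -> [1 3 2]

un_values(df)
```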
# Print the top five rows from the data frame
```
def top_five(df):
# Write your code here
print("started")
top_five(df)
```
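A minimal completion, again assuming the `df` defined earlier: `DataFrame.head` returns the first rows.
```python
def top_five(df):
    print(df.head(5))

top_five(df)
```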
# Print the max and min values of the column 'attempts'
```
def min_max(df):
# Write your code here
print("started")
min_max(df)
```
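One way to complete the stub, assuming the same `df`: apply the `max`/`min` reductions to the column.
```python
def min_max(df):
    print("max attempts:", df['attempts'].max())  # -> 3
    print("min attempts:", df['attempts'].min())  # -> 1

min_max(df)
```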
# Pattern Mining
## Library
```
source("https://raw.githubusercontent.com/eogasawara/mylibrary/master/myPreprocessing.R")
loadlibrary("arules")
loadlibrary("arulesViz")
loadlibrary("arulesSequences")
data(AdultUCI)
dim(AdultUCI)
head(AdultUCI)
```
## Removing attributes
```
AdultUCI$fnlwgt <- NULL
AdultUCI$"education-num" <- NULL
```
## Conceptual Hierarchy and Binning
```
AdultUCI$age <- ordered(cut(AdultUCI$age, c(15,25,45,65,100)),
labels = c("Young", "Middle-aged", "Senior", "Old"))
AdultUCI$"hours-per-week" <- ordered(cut(AdultUCI$"hours-per-week",
c(0,25,40,60,168)),
labels = c("Part-time", "Full-time", "Over-time", "Workaholic"))
AdultUCI$"capital-gain" <- ordered(cut(AdultUCI$"capital-gain",
c(-Inf,0,median(AdultUCI$"capital-gain"[AdultUCI$"capital-gain">0]),
Inf)), labels = c("None", "Low", "High"))
AdultUCI$"capital-loss" <- ordered(cut(AdultUCI$"capital-loss",
c(-Inf,0, median(AdultUCI$"capital-loss"[AdultUCI$"capital-loss">0]),
Inf)), labels = c("None", "Low", "High"))
head(AdultUCI)
```
## Convert to transactions
```
AdultTrans <- as(AdultUCI, "transactions")
```
## A Priori
```
rules <- apriori(AdultTrans, parameter=list(supp = 0.5, conf = 0.9, minlen=2, maxlen= 10, target = "rules"),
appearance=list(rhs = c("capital-gain=None"), default="lhs"), control=NULL)
inspect(rules)
rules_a <- as(rules, "data.frame")
head(rules_a)
```
## Analysis of Rules
```
imrules <- interestMeasure(rules, transactions = AdultTrans)
head(imrules)
```
## Removing redundant rules
```
nrules <- rules[!is.redundant(rules)]
arules::inspect(nrules)
```
## Showing the transactions that support the rules
In this example, we can see the transactions (trans) that support rule 1.
```
st <- supportingTransactions(nrules[1], AdultTrans)
trans <- unique(st@data@i)
length(trans)
print(c(length(trans)/length(AdultTrans), nrules[1]@quality$support))
```
Now we can see the transactions (trans) that support rules 1 and 2.
As can be observed, the support for both rules is not the sum of the support of each rule.
```
st <- supportingTransactions(nrules[1:2], AdultTrans)
trans <- unique(st@data@i)
length(trans)
print(c(length(trans)/length(AdultTrans), nrules[1:2]@quality$support))
```
## Rules visualization
```
options(repr.plot.width=7, repr.plot.height=4)
plot(rules)
options(repr.plot.width=7, repr.plot.height=4)
plot(rules, method="paracoord", control=list(reorder=TRUE))
```
# Sequence Mining
```
x <- read_baskets(con = system.file("misc", "zaki.txt", package = "arulesSequences"), info = c("sequenceID","eventID","SIZE"))
as(x, "data.frame")
s1 <- cspade(x, parameter = list(support = 0.4), control = list(verbose = TRUE))
as(s1, "data.frame")
```
```
# %load CommonFunctions.py
# COMMON ATOMIC AND ASTRING FUNCTIONS
############### One String Pulse with width, shift and scale #############
def StringPulse(String1, t: float, a = 1., b = 0., c = 1., d = 0.) -> float:
x = (t - b)/a
if (x < -1):
res = -0.5
elif (x > 1):
res = 0.5
else:
res = String1(x)
res = d + res * c
return res
###### Atomic String Applied to list with width, shift and scale #############
def String(String1, x: list, a = 1., b = 0., c = 1., d = 0.) -> list:
res = []
for i in range(len(x)):
res.append(StringPulse(String1, x[i], a, b, c, d))
return res
###### Summation of two lists #############
def Sum(x1: list, x2: list) -> list:
res = []
for i in range(len(x1)):
res.append(x1[i] + x2[i])
return res
##########################################################
##This script introduces Atomic Function
################### One Pulse of atomic function
def up1(x: float) -> float:
#Atomic function table
up_y = [0.5, 0.48, 0.460000017,0.440000421,0.420003478,0.400016184, 0.380053256, 0.360139056, 0.340308139, 0.320605107,
0.301083436, 0.281802850, 0.262826445, 0.244218000, 0.226041554, 0.208361009, 0.191239338, 0.174736305,
0.158905389, 0.143991189, 0.129427260, 0.115840866, 0.103044024, 0.9110444278e-01, 0.798444445e-01, 0.694444445e-01,
0.598444445e-01, 0.510444877e-01, 0.430440239e-01, 0.358409663e-01, 0.294282603e-01, 0.237911889e-01, 0.189053889e-01,
0.147363055e-01, 0.112393379e-01, 0.836100883e-02, 0.604155412e-02, 0.421800000e-02, 0.282644445e-02, 0.180999032e-02,
0.108343562e-02, 0.605106267e-03, 0.308138660e-03, 0.139055523e-03, 0.532555251e-04, 0.161841328e-04, 0.347816874e-05,
0.420576116e-05, 0.167693347e-07, 0.354008603e-10, 0]
up_x = np.arange(0.5, 1.01, 0.01)
res = 0.
if ((x >= 0.5) and (x <= 1)):
for i in range(len(up_x) - 1):
if (up_x[i] >= x) and (x < up_x[i+1]):
N1 = 1 - (x - up_x[i])/0.01
res = N1 * up_y[i] + (1 - N1) * up_y[i+1]
return res
return res
############### Atomic Function Pulse with width, shift and scale #############
def pulse(up1, t: float, a = 1., b = 0., c = 1., d = 0.) -> float:
x = (t - b)/a
res = 0.
if (x >= 0.5) and (x <= 1):
res = up1(x)
elif (x >= 0.0) and (x < 0.5):
res = 1 - up1(1 - x)
elif (x >= -1 and x <= -0.5):
res = up1(-x)
elif (x > -0.5) and (x < 0):
res = 1 - up1(1 + x)
res = d + res * c
return res
############### Atomic Function Applied to list with width, shift and scale #############
def up(up1, x: list, a = 1., b = 0., c = 1., d = 0.) -> list:
res = []
for i in range(len(x)):
res.append(pulse(up1, x[i], a, b, c, d))
return res
############### Atomic String #############
def AString1(x: float) -> float:
res = 1 * (pulse(up1, x/2.0 - 0.5) - 0.5)
return res
############### Atomic String Pulse with width, shift and scale #############
def AStringPulse(t: float, a = 1., b = 0., c = 1., d = 0.) -> float:
x = (t - b)/a
if (x < -1):
res = -0.5
elif (x > 1):
res = 0.5
else:
res = AString1(x)
res = d + res * c
return res
###### Atomic String Applied to list with width, shift and scale #############
def AString(x: list, a = 1., b = 0., c = 1., d = 0.) -> list:
res = []
for i in range(len(x)):
res.append(AStringPulse(x[i], a, b, c, d))
return res
import numpy as np
import pylab as pl
x = np.arange(-2.0, 2.0, 0.01)
pl.title('Atomic Function')
pl.plot(x, up(up1, x), label='Atomic Function')
pl.grid(True)
pl.show()
pl.title('Atomic String')
pl.plot(x, String(AString1, x, 1.0, 0, 1, 0), label='Atomic String')
pl.grid(True)
pl.show()
x = np.arange(-4.0, 4.0, 0.01)
dx = x[1] - x[0]
pl.title('Atomic String')
pl.plot(x, String(AString1, x, 1., 0., 1., 1.), label='Atomic String')
IntAString = np.cumsum(String(AString1, x, 1., 0., 1., 1.)) * dx
pl.plot(x, IntAString, label='AString Integral')
Int2AString = np.cumsum(IntAString) * dx
pl.plot(x, Int2AString, label='AString Integral Integral')
pl.title('AString with Integrals')
pl.legend(loc='best', numpoints=1)
pl.grid(True)
pl.show()
```
## Summary and Observations
1) AString Integrals provide smooth curly connections between two straight lines
2) Further integrals provide smooth curly connections between parabolas!!
3) In general, AString integrals can provide smooth connections between any similar shapes!!!
<a href="https://colab.research.google.com/github/yohanesnuwara/reservoir-geomechanics/blob/master/delft%20course%20dr%20weijermars/stress_tensor.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
import numpy as np
import matplotlib.pyplot as plt
```
# Introduction to vectors
Plot a vector with components (2,4,4) and another vector with components (1,2,3). Find the direction cosines of each vector, the angles of each vector to the three axes, and the angle between the two vectors!
```
from mpl_toolkits.mplot3d import axes3d
X = np.array((0, 0))
Y= np.array((0, 0))
Z = np.array((0, 0))
U = np.array((2, 1))
V = np.array((4, 2))
W = np.array((4, 3))
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.quiver(X, Y, Z, U, V, W)
ax.set_xlim([-4, 4])
ax.set_ylim([-4, 4])
ax.set_zlim([-4, 4])
# vector A and B
A_mag = np.sqrt(((U[0] - X[0])**2) + ((V[0] - Y[0])**2) + ((W[0] - Z[0])**2))
print('Magnitude of vector A:', A_mag, 'units')
B_mag = np.sqrt(((U[1] - X[1])**2) + ((V[1] - Y[1])**2) + ((W[1] - Z[1])**2))
print('Magnitude of vector B:', B_mag, 'units')
# direction cosines
l_A = (U[0] - X[0]) / A_mag
m_A = (V[0] - Y[0]) / A_mag
n_A = (W[0] - Z[0]) / A_mag
print('Direction cosine to x axis (cos alpha):', l_A, "to y axis (cos beta):", m_A, "to z axis (cos gamma):", n_A)
print('Pythagorean Sum of direction cosines of vector A:', l_A**2 + m_A**2 + n_A**2, "and must be equals to 1")
l_B = (U[1] - X[1]) / B_mag
m_B = (V[1] - Y[1]) / B_mag
n_B = (W[1] - Z[1]) / B_mag
print('Direction cosine to x axis (cos alpha):', l_B, "to y axis (cos beta):", m_B, "to z axis (cos gamma):", n_B)
print('Pythagorean Sum of direction cosines of vector B:', l_B**2 + m_B**2 + n_B**2, "and must be equals to 1")
# angles
alpha_A = np.rad2deg(np.arccos(l_A))
beta_A = np.rad2deg(np.arccos(m_A))
gamma_A = np.rad2deg(np.arccos(n_A))
print('Angle to x axis (alpha):', alpha_A, "to y axis (beta):", beta_A, "to z axis (gamma):", gamma_A)
alpha_B = np.rad2deg(np.arccos(l_B))
beta_B= np.rad2deg(np.arccos(m_B))
gamma_B = np.rad2deg(np.arccos(n_B))
print('Angle to x axis (alpha):', alpha_B, "to y axis (beta):", beta_B, "to z axis (gamma):", gamma_B)
# angle between two vectors
cosine_angle = (l_A * l_B) + (m_A * m_B) + (n_A * n_B)
angle = np.rad2deg(np.arccos(cosine_angle))
print('Angle between vector A and B:', angle, 'degrees')
```
# Exercise 10-3. Effective, Normal, and Shear Stress on a Plane
Consider a plane that makes an angle 60 degrees with $\sigma_1$ and 60 degrees with $\sigma_3$. The principal stresses are: -600, -400, -200 MPa. Calculate:
* Total effective stress
* Normal stress
* Shear stress
```
# principle stresses
sigma_1 = -600; sigma_2 = -400; sigma_3 = -200
# calculate the angle of plane to second principal stress sigma 2
# using pythagorean
alpha = 60; gamma = 60
l = np.cos(np.deg2rad(alpha))
n = np.cos(np.deg2rad(gamma))
m = np.sqrt(1 - l**2 - n**2)
beta = np.rad2deg(np.arccos(m))
print("The second principal stress sigma 2 makes angle:", beta, "degrees to the plane")
# effective stress
sigma_eff = np.sqrt(((sigma_1**2) * (l**2)) + ((sigma_2**2) * (m**2)) + ((sigma_3**2) * (n**2)))
print("The effective stress is:", -sigma_eff, "MPa (minus because it's compressive)")
# normal stress
sigma_normal = (sigma_1 * (l**2)) + (sigma_2 * (m**2)) + (sigma_3 * (n**2))
print("The normal stress is:", sigma_normal, "MPa")
# shear stress
sigma_shear = np.sqrt((sigma_eff**2) - (sigma_normal**2))
print("The shear stress is:", sigma_shear, "MPa")
```
# Stress Tensor Components
```
# General (symbolic) form of the stress tensor; the sigma_* names are not defined as
# Python variables in this notebook, so the expression is shown as a comment:
# stress_tensor = [[sigma_xx, sigma_xy, sigma_xz],
#                  [sigma_yx, sigma_yy, sigma_yz],
#                  [sigma_zx, sigma_zy, sigma_zz]]
import numpy as np
from mpl_toolkits.mplot3d import Axes3D
import matplotlib.pyplot as plt
# point of cube
points = np.array([[-5, -5, -5],
[5, -5, -5 ],
[5, 5, -5],
[-5, 5, -5],
[-5, -5, 5],
[5, -5, 5 ],
[5, 5, 5],
[-5, 5, 5]])
# vector
a = np.array((0, 0))
b= np.array((0, 0))
c = np.array((0, 0))
u = np.array((0, -4))
v = np.array((5, 0))
w = np.array((0, -4))
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.quiver(a, b, c, u, v, w, color='black')
ax.set_xlim([-5, 5])
ax.set_ylim([-5, 5])
ax.set_zlim([-5, 5])
r = [-5,5]
X, Y = np.meshgrid(r, r)
one = np.array([5, 5, 5, 5])
one = one.reshape(2, 2)
ax.plot_wireframe(X,Y,one, alpha=0.5)
ax.plot_wireframe(X,Y,-one, alpha=0.5)
ax.plot_wireframe(X,-one,Y, alpha=0.5)
ax.plot_wireframe(X,one,Y, alpha=0.5)
ax.plot_wireframe(one,X,Y, alpha=0.5)
ax.plot_wireframe(-one,X,Y, alpha=0.5)
ax.scatter3D(points[:, 0], points[:, 1], points[:, 2])
ax.set_xlabel('X')
ax.set_ylabel('Y')
ax.set_zlabel('Z')
plt.show()
np.ones(4)
```
# Exercise 10-7 Total Stress, Deviatoric Stress, Effective Stress, Cauchy Summation
$$\sigma_{ij}=\tau_{ij}+P_{ij}$$
$$P_{ij}=P \cdot \delta_{ij}$$
Pressure is: $P=|\sigma_{mean}|=|\frac{\sigma_{xx}+\sigma_{yy}+\sigma_{zz}}{3}|$
Kronecker delta is: $\delta_{ij}=\begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}$
Pressure tensor is: $P_{ij}=P \cdot \delta_{ij}$
So, overall the total stress is: $\sigma_{ij}=\begin{bmatrix} P+\tau_{xx} & \tau_{xy} & \tau_{xz} \\ \tau_{yx} & P+\tau_{yy} & \tau_{yz} \\ \tau_{zx} & \tau_{zy} & P+\tau_{zz} \end{bmatrix}$
Cauchy summation to calculate the components of effective stress
$$\sigma_{eff}=\begin{bmatrix} \sigma_x \\ \sigma_y \\ \sigma_z \end{bmatrix}=\begin{bmatrix} \sigma_{xx} & \sigma_{xy} & \sigma_{xz} \\ \sigma_{yx} & \sigma_{yy} & \sigma_{yz} \\ \sigma_{zx} & \sigma_{zy} & \sigma_{zz} \end{bmatrix} \cdot \begin{bmatrix} l \\ m \\ n \end{bmatrix}$$
**Known**: direction cosines of plane ABC, total stress tensor.
**Task**:
* Determine the deviatoric stress tensor
* Calculate the components of effective stress on plane ABC (use Cauchy's summation)
* Calculate total effective stress, total normal stress, total shear stress
```
# known
l, m, n = 0.7, 0.5, 0.5 # direction cosines
alpha, beta, gamma = 45, 60, 60 # angles
stress_ij = np.array([[-40, -40, -35],
[-40, 45, -50],
[-35, -50, -20]]) # total stress tensor
# calculate pressure
P = np.abs(np.mean(np.array([(stress_ij[0][0]), (stress_ij[1][1]), (stress_ij[2][2])])))
print("Pressure:", P, "MPa")
# pressure TENSOR
kronecker = np.array([[1, 0, 0],
[0, 1, 0],
[0, 0, 1]])
P_ij = P * kronecker
print('Pressure tensor:')
print(P_ij)
# deviatoric stress TENSOR
tau_ij = stress_ij - P_ij
print('Deviatoric stress tensor:')
print(tau_ij)
# direction cosines VECTOR
lmn = np.array([[l],
[m],
[n]])
# effective stress VECTOR
stress_eff = np.dot(stress_ij, lmn)
stress_eff_1 = stress_eff[0][0]
stress_eff_2 = stress_eff[1][0]
stress_eff_3 = stress_eff[2][0]
print('Effective stress vector:')
print(stress_eff)
print('X component of effective stress:', stress_eff_1, 'MPa')
print('Y component of effective stress:', stress_eff_2, 'MPa')
print('Z component of effective stress:', stress_eff_3, 'MPa')
# total / magnitude of effective stress, is SCALAR
sigma_eff = np.sqrt((stress_eff_1**2) + (stress_eff_2**2) + (stress_eff_3**2))
print("The total effective stress is:", -sigma_eff, "MPa")
# principal stresses
sigma_1 = stress_eff_1 / l
sigma_2 = stress_eff_2 / m
sigma_3 = stress_eff_3 / n
print('X component of principal stress:', sigma_1, 'MPa')
print('Y component of principal stress:', sigma_2, 'MPa')
print('Z component of principal stress:', sigma_3, 'MPa')
# total normal stress
sigma_normal = (sigma_1 * (l**2)) + (sigma_2 * (m**2)) + (sigma_3 * (n**2))
print("The normal stress is:", sigma_normal, "MPa")
print("Because normal stress", sigma_normal, "MPa nearly equals to sigma 1", sigma_1, "MPa, the plane is nearly normal to sigma 1")
# total shear stress
sigma_shear = np.sqrt((sigma_eff**2) - (sigma_normal**2))
print("The shear stress is:", sigma_shear, "MPa")
```
<div>
<img src="https://user-images.githubusercontent.com/51282928/77084625-cdfbe280-6a31-11ea-9c3f-c4e592d5cfd9.jpeg" width="500"/>
</div>
# Exercise 10-8 Transforming the Stress Tensor (containing all 9 shear and normal components) into the Principal Stress Tensor using the Cubic Equation
```
sigma_ij = np.array([[0, 0, 100],
[0, 0, 0],
[-100, 0, 0]]) # stress tensor
# cubic equation
coeff3 = 1
coeff2 = -((sigma_ij[0][0] + sigma_ij[1][1] + sigma_ij[2][2]))
coeff1 = (sigma_ij[0][0] * sigma_ij[1][1]) + (sigma_ij[1][1] * sigma_ij[2][2]) + (sigma_ij[2][2] * sigma_ij[0][0]) - ((sigma_ij[0][1])**2) - ((sigma_ij[1][2])**2) - ((sigma_ij[2][0])**2)
coeff0 = -((sigma_ij[0][0] * sigma_ij[1][1] * sigma_ij[2][2]) + (2 * sigma_ij[0][1] * sigma_ij[1][2] * sigma_ij[2][0]) - (sigma_ij[0][0] * (sigma_ij[1][2])**2) - (sigma_ij[1][1] * (sigma_ij[2][0])**2) - (sigma_ij[2][2]* (sigma_ij[0][1])**2))
roots = np.roots([coeff3, coeff2, coeff1, coeff0])
sigma = np.sort(roots)
sigma_1 = sigma[2]
sigma_2 = sigma[1]
sigma_3 = sigma[0]
sigma_principal = np.array([[sigma_1, 0, 0],
[0, sigma_2, 0],
[0, 0, sigma_3]])
print("The principal stresses are, sigma 1:", sigma_1, "MPa, sigma 2:", sigma_2, "MPa, and sigma 3:", sigma_3, "MPa")
print("Principal stress tensor:")
print(sigma_principal)
denominator_l = (sigma_ij[0][0] * sigma_ij[2][2]) - (sigma_ij[1][1] * sigma_1) - (sigma_ij[2][2] * sigma_1) + (sigma_1)**2 - (sigma_ij[1][2])**2
denominator_m = (sigma_2 * sigma_ij[0][1]) + (sigma_ij[2][0] * sigma_ij[1][2]) - (sigma_ij[0][1] * sigma_ij[2][2])
denominator_n = (sigma_3 * sigma_ij[2][0]) + (sigma_ij[0][1] * sigma_ij[1][2]) - (sigma_ij[2][0] * sigma_ij[1][1])
denominator_l, denominator_m, denominator_n
```
# ***
```
from mpl_toolkits.mplot3d import axes3d
X = np.array((0))
Y= np.array((0))
U = np.array((0))
V = np.array((4))
fig, ax = plt.subplots()
q = ax.quiver(X, Y, U, V,units='xy' ,scale=1)
plt.grid()
ax.set_aspect('equal')
plt.xlim(-5,5)
plt.ylim(-5,5)
from mpl_toolkits.mplot3d import axes3d
X = np.array((0))
Y= np.array((0))
Z = np.array((0))
U = np.array((1))
V = np.array((1))
W = np.array((1))
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.quiver(X, Y, Z, U, V, W)
ax.set_xlim([-1, 1])
ax.set_ylim([-1, 1])
ax.set_zlim([-1, 1])
from mpl_toolkits.mplot3d import axes3d
v_mag = 10  # assumed example magnitude; v_mag is not defined elsewhere in this notebook
vx_mag = v_mag * l
vy_mag = v_mag * m
vz_mag = v_mag * n
x = 0; y = 0; z = 0
fig = plt.figure()
ax = fig.gca(projection='3d')
ax.quiver(x, y, z, vx_mag, vy_mag, vz_mag)
ax.set_xlim(0, 10); ax.set_ylim(0, 10); ax.set_zlim(0, 5)
```
# AutoGluon Tabular with SageMaker
[AutoGluon](https://github.com/awslabs/autogluon) automates machine learning tasks enabling you to easily achieve strong predictive performance in your applications. With just a few lines of code, you can train and deploy high-accuracy deep learning models on tabular, image, and text data.
This notebook shows how to use AutoGluon-Tabular with Amazon SageMaker by creating custom containers.
## Prerequisites
If using a SageMaker hosted notebook, select kernel `conda_mxnet_p36`.
```
# Make sure docker compose is set up properly for local mode
!./setup.sh
# Imports
import os
import boto3
import sagemaker
from time import sleep
from collections import Counter
import numpy as np
import pandas as pd
from sagemaker import get_execution_role, local, Model, utils, fw_utils, s3
from sagemaker.estimator import Estimator
from sagemaker.predictor import RealTimePredictor, csv_serializer, StringDeserializer
from sklearn.metrics import accuracy_score, classification_report
from IPython.core.display import display, HTML
from IPython.core.interactiveshell import InteractiveShell
# Print settings
InteractiveShell.ast_node_interactivity = "all"
pd.set_option('display.max_columns', 500)
pd.set_option('display.max_rows', 10)
# Account/s3 setup
session = sagemaker.Session()
local_session = local.LocalSession()
bucket = session.default_bucket()
prefix = 'sagemaker/autogluon-tabular'
region = session.boto_region_name
role = get_execution_role()
client = session.boto_session.client(
"sts", region_name=region, endpoint_url=utils.sts_regional_endpoint(region)
)
account = client.get_caller_identity()['Account']
ecr_uri_prefix = utils.get_ecr_image_uri_prefix(account, region)
registry_id = fw_utils._registry_id(region, 'mxnet', 'py3', account, '1.6.0')
registry_uri = utils.get_ecr_image_uri_prefix(registry_id, region)
```
### Build docker images
First, build autogluon package to copy into docker image.
```
if not os.path.exists('package'):
!pip install PrettyTable -t package
!pip install --upgrade boto3 -t package
!pip install bokeh -t package
!pip install --upgrade matplotlib -t package
!pip install autogluon -t package
```
Now build the training/inference image and push to ECR
```
training_algorithm_name = 'autogluon-sagemaker-training'
inference_algorithm_name = 'autogluon-sagemaker-inference'
!./container-training/build_push_training.sh {account} {region} {training_algorithm_name} {ecr_uri_prefix} {registry_id} {registry_uri}
!./container-inference/build_push_inference.sh {account} {region} {inference_algorithm_name} {ecr_uri_prefix} {registry_id} {registry_uri}
```
### Get the data
In this example we'll use the direct-marketing dataset to build a binary classification model that predicts whether customers will accept or decline a marketing offer.
First we'll download the data and split it into train and test sets. AutoGluon does not require a separate validation set (it uses bagged k-fold cross-validation).
```
# Download and unzip the data
!aws s3 cp --region {region} s3://sagemaker-sample-data-{region}/autopilot/direct_marketing/bank-additional.zip .
!unzip -qq -o bank-additional.zip
!rm bank-additional.zip
local_data_path = './bank-additional/bank-additional-full.csv'
data = pd.read_csv(local_data_path)
# Split train/test data
train = data.sample(frac=0.7, random_state=42)
test = data.drop(train.index)
# Split test X/y
label = 'y'
y_test = test[label]
X_test = test.drop(columns=[label])
```
##### Check the data
```
train.head(3)
train.shape
test.head(3)
test.shape
X_test.head(3)
X_test.shape
```
Upload the data to s3
```
train_file = 'train.csv'
train.to_csv(train_file,index=False)
train_s3_path = session.upload_data(train_file, key_prefix='{}/data'.format(prefix))
test_file = 'test.csv'
test.to_csv(test_file,index=False)
test_s3_path = session.upload_data(test_file, key_prefix='{}/data'.format(prefix))
X_test_file = 'X_test.csv'
X_test.to_csv(X_test_file,index=False)
X_test_s3_path = session.upload_data(X_test_file, key_prefix='{}/data'.format(prefix))
```
## Hyperparameter Selection
The minimum required settings for training is just a target label, `fit_args['label']`.
Additional optional hyperparameters can be passed to the `autogluon.task.TabularPrediction.fit` function via `fit_args`.
Below shows a more in depth example of AutoGluon-Tabular hyperparameters from the example [Predicting Columns in a Table - In Depth](https://autogluon.mxnet.io/tutorials/tabular_prediction/tabular-indepth.html#model-ensembling-with-stacking-bagging). Please see [fit parameters](https://autogluon.mxnet.io/api/autogluon.task.html?highlight=eval_metric#autogluon.task.TabularPrediction.fit) for further information. Note that in order for hyperparameter ranges to work in SageMaker, values passed to the `fit_args['hyperparameters']` must be represented as strings.
```python
nn_options = {
'num_epochs': "10",
'learning_rate': "ag.space.Real(1e-4, 1e-2, default=5e-4, log=True)",
'activation': "ag.space.Categorical('relu', 'softrelu', 'tanh')",
'layers': "ag.space.Categorical([100],[1000],[200,100],[300,200,100])",
'dropout_prob': "ag.space.Real(0.0, 0.5, default=0.1)"
}
gbm_options = {
'num_boost_round': "100",
'num_leaves': "ag.space.Int(lower=26, upper=66, default=36)"
}
model_hps = {'NN': nn_options, 'GBM': gbm_options}
fit_args = {
'label': 'y',
'presets': ['best_quality', 'optimize_for_deployment'],
'time_limits': 60*10,
'hyperparameters': model_hps,
'hyperparameter_tune': True,
'search_strategy': 'skopt'
}
hyperparameters = {
'fit_args': fit_args,
'feature_importance': True
}
```
**Note:** Your hyperparameter choices may affect the size of the model package, which could result in additional time taken to upload your model and complete training. Including `'optimize_for_deployment'` in the list of `fit_args['presets']` is recommended to greatly reduce upload times.
<br>
```
# Define required label and optional additional parameters
fit_args = {
'label': 'y',
# Adding 'best_quality' to presets list will result in better performance (but longer runtime)
'presets': ['optimize_for_deployment'],
}
# Pass fit_args to SageMaker estimator hyperparameters
hyperparameters = {
'fit_args': fit_args,
'feature_importance': True
}
```
## Train
For local training, set `train_instance_type` to `local`.
For non-local training, the recommended instance type is `ml.m5.2xlarge`.
**Note:** Depending on how many underlying models are trained, `train_volume_size` may need to be increased so that they all fit on disk.
```
%%time
instance_type = 'ml.m5.2xlarge'
#instance_type = 'local'
ecr_image = f'{ecr_uri_prefix}/{training_algorithm_name}:latest'
estimator = Estimator(image_name=ecr_image,
role=role,
train_instance_count=1,
train_instance_type=instance_type,
hyperparameters=hyperparameters,
train_volume_size=100)
# Set inputs. Test data is optional, but requires a label column.
inputs = {'training': train_s3_path, 'testing': test_s3_path}
estimator.fit(inputs)
```
### Create Model
```
# Create predictor object
class AutoGluonTabularPredictor(RealTimePredictor):
def __init__(self, *args, **kwargs):
super().__init__(*args, content_type='text/csv',
serializer=csv_serializer,
deserializer=StringDeserializer(), **kwargs)
ecr_image = f'{ecr_uri_prefix}/{inference_algorithm_name}:latest'
if instance_type == 'local':
model = estimator.create_model(image=ecr_image, role=role)
else:
model_uri = os.path.join(estimator.output_path, estimator._current_job_name, "output", "model.tar.gz")
model = Model(model_uri, ecr_image, role=role, sagemaker_session=session, predictor_cls=AutoGluonTabularPredictor)
```
### Batch Transform
For local mode, either `s3://<bucket>/<prefix>/output/` or `file:///<absolute_local_path>` can be used as outputs.
By including the label column in the test data, you can also evaluate prediction performance (In this case, passing `test_s3_path` instead of `X_test_s3_path`).
```
output_path = f's3://{bucket}/{prefix}/output/'
# output_path = f'file://{os.getcwd()}'
transformer = model.transformer(instance_count=1,
instance_type=instance_type,
strategy='MultiRecord',
max_payload=6,
max_concurrent_transforms=1,
output_path=output_path)
transformer.transform(test_s3_path, content_type='text/csv', split_type='Line')
transformer.wait()
```
### Endpoint
##### Deploy remote or local endpoint
```
instance_type = 'ml.m5.2xlarge'
#instance_type = 'local'
predictor = model.deploy(initial_instance_count=1,
instance_type=instance_type)
```
##### Attach to endpoint (or reattach if kernel was restarted)
```
# Select standard or local session based on instance_type
if instance_type == 'local':
sess = local_session
else:
sess = session
# Attach to endpoint
predictor = AutoGluonTabularPredictor(predictor.endpoint, sagemaker_session=sess)
```
##### Predict on unlabeled test data
```
results = predictor.predict(X_test.to_csv(index=False)).splitlines()
# Check output
print(Counter(results))
```
##### Predict on data that includes label column
Prediction performance metrics will be printed to endpoint logs.
```
results = predictor.predict(test.to_csv(index=False)).splitlines()
# Check output
print(Counter(results))
```
##### Check that classification performance metrics match evaluation printed to endpoint logs as expected
```
y_results = np.array(results)
print("accuracy: {}".format(accuracy_score(y_true=y_test, y_pred=y_results)))
print(classification_report(y_true=y_test, y_pred=y_results, digits=6))
```
##### Clean up endpoint
```
predictor.delete_endpoint()
```
<a href="https://colab.research.google.com/gist/taruma/b00880905f297013f046dad95dc2e284/taruma_hk73_bmkg.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Based on issue [#73](https://github.com/taruma/hidrokit/issues/73): **request: process files from BMKG data**
Description:
- process the Excel file obtained from the BMKG online data so it is ready to use
- check the condition of the data
Expected functions:
__General__
- Check whether the data is complete or not. If not, which data and on which dates?
- Check whether there are values for no data / no measurement (9999) or data not measured (8888). If so, which data and on which dates?
- Display the "slices" of rows that have no data / no measurements.
# DATASET
```
# ACCESS GOOGLE DRIVE
from google.colab import drive
drive.mount('/content/gdrive')
# DRIVE PATH
DRIVE_DROP_PATH = '/content/gdrive/My Drive/Colab Notebooks/_dropbox'
DRIVE_DATASET_PATH = '/content/gdrive/My Drive/Colab Notebooks/_dataset/uma_pamarayan'
DATASET_PATH = DRIVE_DATASET_PATH + '/klimatologi_geofisika_tangerang_1998_2009.xlsx'
```
# FUNCTIONS
```
import pandas as pd
import numpy as np
from operator import itemgetter
from itertools import groupby
def _read_bmkg(io):
return pd.read_excel(
io, skiprows=8, skipfooter=16, header=0, index_col=0, parse_dates=True,
date_parser=lambda x: pd.to_datetime(x, format='%d-%m-%Y')
)
def _have_nan(dataset):
if dataset.isna().any().any():
return True
else:
return False
def _get_index1D(array1D_bool):
return np.argwhere(array1D_bool).reshape(-1,)
def _get_nan(dataset):
nan = {}
for col in dataset.columns:
nan[col] = _get_index1D(dataset[col].isna().values).tolist()
return nan
def _get_missing(dataset):
missing = {}
for col in dataset.columns:
masking = (dataset[col] == 8888) | (dataset[col] == 9999)
missing[col] = _get_index1D(masking.values)
return missing
def _check_nan(dataset):
if _have_nan(dataset):
return _get_nan(dataset)
else:
return None
def _get_nan_columns(dataset):
return dataset.columns[dataset.isna().any()].tolist()
def _group_as_list(array):
# based on https://stackoverflow.com/a/15276206
group_list = []
for _, g in groupby(enumerate(array), lambda x: x[0]-x[1]):
single_list = sorted(list(map(itemgetter(1), g)))
group_list.append(single_list)
return group_list
def _group_as_index(
group_list, index=None, date_format='%Y%m%d',
format_date = '{}-{}'
):
group_index = []
date_index = isinstance(index, pd.DatetimeIndex)
for item in group_list:
if len(item) == 1:
if date_index:
group_index.append(index[item[0]].strftime(date_format))
else:
group_index.append(index[item[0]])
else:
if date_index:
group_index.append(
format_date.format(
index[item[0]].strftime(date_format),
index[item[-1]].strftime(date_format)
)
)
else:
group_index.append(
format_date.format(
index[item[0]], index[item[-1]]
)
)
return group_index
```
# USAGE
## The `_read_bmkg` function
Purpose: import the BMKG Excel file into a DataFrame
```
dataset = _read_bmkg(DATASET_PATH)
dataset.head()
dataset.tail()
```
## The `_have_nan()` function
Purpose: check whether the table contains missing values (np.nan)
```
_have_nan(dataset)
```
## The `_get_index1D()` function
Purpose: obtain the indices of missing data for a single array
```
_get_index1D(dataset['RH_avg'].isna().values)
```
## The `_get_nan()` function
Purpose: obtain the indices of missing data for each column, as a `dictionary`
```
_get_nan(dataset).keys()
print(_get_nan(dataset)['RH_avg'])
```
## The `_get_nan_columns()` function
Purpose: obtain the names of the columns that contain missing values (`NaN`).
```
_get_nan_columns(dataset)
```
## The `_check_nan()` function
Purpose: a combination of `_have_nan()` and `_get_nan()`. Checks whether the dataset contains `NaN`; if it does, returns the result of `_get_nan()`, otherwise returns `None`.
```
_check_nan(dataset).items()
# If the dataset has no NaN values
print(_check_nan(dataset.drop(_get_nan_columns(dataset), axis=1)))
```
## The `_group_as_list()` function
Purpose: group contiguous runs of the array (consecutive values) into separate lists.
Reference: https://stackoverflow.com/a/15276206 (modified for Python 3.x and readability)
```
missing_dict = _get_nan(dataset)
missing_RH_avg = missing_dict['RH_avg']
print(missing_RH_avg)
print(_group_as_list(missing_RH_avg))
```
## The `_group_as_index()` function
Purpose: convert the grouping result into the dataset's index type (in this case dates, rather than the dataset's integer indices).
```
_group_as_index(_group_as_list(missing_RH_avg), index=dataset.index, date_format='%d %b %Y')
```
## The `_get_missing()` function
Purpose: obtain the indices with unmeasured values (`8888` or `9999`) for each column
```
_get_missing(dataset)
```
# Application
## Displaying problematic indices
Purpose: after obtaining indices from `_get_missing()` or `_get_nan()`, display those slices of the DataFrame.
```
dataset.iloc[_get_missing(dataset)['RR']]
_group_as_list(_get_missing(dataset)['RR'])
_group_as_index(_, index=dataset.index, date_format='%d %b %Y', format_date='{} sampai {}')
```
# Changelog
```
- 20190928 - 1.0.0 - Initial
```
#### Copyright © 2019 [Taruma Sakti Megariansyah](https://taruma.github.io)
Source code in this notebook is licensed under a [MIT License](https://choosealicense.com/licenses/mit/). Data in this notebook is licensed under a [Creative Common Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0/).
# Homework - Random Walks (18 pts)
## Continuous random walk in three dimensions
Write a program simulating a three-dimensional random walk in a continuous space. Let 1000 independent particles all start at random positions within a cube with corners at (0,0,0) and (1,1,1). At each time step each particle will move in a random direction by a random amount between -1 and 1 along each axis (x, y, z).
1. (3 pts) Create data structure(s) to store your simulated particle positions for each of 2000 time steps and initialize them with the particles starting positions.
```
import numpy as np
numTimeSteps = 2000
numParticles = 1000
positions = np.zeros( (numParticles, 3, numTimeSteps) )
# initialize starting positions on first time step
positions[:,:,0] = np.random.random( (numParticles, 3) )
```
2. (3 pts) Write code to run your simulation for 2000 time steps.
```
for t in range(numTimeSteps-1):
# 2 * [0 to 1] - 1 --> [-1 to 1]
jumpsForAllParticles = 2 * np.random.random((numParticles, 3)) - 1
positions[:,:,t+1] = positions[:,:,t] + jumpsForAllParticles
# just for fun, here's another way to run the simulation above without a loop
jumpsForAllParticlesAndAllTimeSteps = 2 * np.random.random((numParticles, 3, numTimeSteps-1)) - 1
positions[:,:,1:] = positions[:,:,0].reshape(numParticles, 3, 1) + np.cumsum(jumpsForAllParticlesAndAllTimeSteps, axis=2)
```
3. (3 pts) Generate a series of four 3D scatter plots at selected time points to visually convey what is going on. Arrange the plots in a single row from left to right. Make sure you indicate which time points you are showing.
```
import matplotlib.pyplot as plt
from mpl_toolkits import mplot3d
lim = 70
plt.figure(figsize=(12,3))
for (i,t) in enumerate([0, 100, 1000, 1999]):
ax = plt.subplot(1, 4, i+1, projection='3d')
x = positions[:,0,t]
y = positions[:,1,t]
z = positions[:,2,t]
ax.scatter(x, y, z)
plt.xlim([-lim, lim])
plt.ylim([-lim, lim])
ax.set_zlim([-lim, lim])
plt.xlabel("x")
plt.ylabel("y")
ax.set_zlabel("z")
plt.title(f"Time {t}");
```
4. (3 pts) Draw the path of a single particle (your choice) across all time steps in a 3D plot.
```
ax = plt.subplot(1, 1, 1, projection='3d')
i = 10 # particle index
x = positions[i,0,:]
y = positions[i,1,:]
z = positions[i,2,:]
plt.plot(x, y, z)
plt.xlabel("x")
plt.ylabel("y")
ax.set_zlabel("z")
plt.title(f"Particle {i}");
```
5. (3 pts) Find the minimum, maximum, mean and variance for the jump distances of all particles throughout the entire simulation. Jump distance is the Euclidean distance moved on each time step $\sqrt{dx^2+dy^2+dz^2}$. *Hint: numpy makes this very simple.*
```
jumpsXYZForAllParticlesAndAllTimeSteps = positions[:,:,1:] - positions[:,:,:-1]
jumpDistancesForAllParticlesAndAllTimeSteps = np.sqrt(np.sum(jumpsXYZForAllParticlesAndAllTimeSteps**2, axis=1))
print(f"min = {jumpDistancesForAllParticlesAndAllTimeSteps.min()}")
print(f"max = {jumpDistancesForAllParticlesAndAllTimeSteps.max()}")
print(f"mean = {jumpDistancesForAllParticlesAndAllTimeSteps.mean()}")
print(f"var = {jumpDistancesForAllParticlesAndAllTimeSteps.var()}")
```
6. (3 pts) Repeat the simulation, but this time confine the particles to a unit cell of dimension 10x10x10. Make it so that if a particle leaves one edge of the cell, it enters on the opposite edge (this is the sort of thing most molecular dynamics simulations do). Show plots as in #3 to visualize the simulation (note that most interesting stuff likely happens in the first 100 time steps).
```
for t in range(numTimeSteps-1):
# 2 * [0 to 1] - 1 --> [-1 to 1]
jumpsForAllParticles = 2 * np.random.random((numParticles, 3)) - 1
positions[:,:,t+1] = positions[:,:,t] + jumpsForAllParticles
# check for out-of-bounds and warp to opposite bound
for i in range(numParticles):
for j in range(3):
if positions[i,j,t+1] < 0:
positions[i,j,t+1] += 10
elif positions[i,j,t+1] > 10:
positions[i,j,t+1] -= 10
plt.figure(figsize=(12,3))
for (i,t) in enumerate([0, 3, 10, 1999]):
ax = plt.subplot(1, 4, i+1, projection='3d')
x = positions[:,0,t]
y = positions[:,1,t]
z = positions[:,2,t]
ax.scatter(x, y, z)
plt.xlim([0, 10])
plt.ylim([0, 10])
ax.set_zlim([0, 10])
plt.xlabel("x")
plt.ylabel("y")
ax.set_zlabel("z")
plt.title(f"Time {t}");
```
- Malicious domain name detection using n-grams
- Reference paper: https://www.researchgate.net/publication/330843380_Malicious_Domain_Names_Detection_Algorithm_Based_on_N_-Gram
```
import numpy as np
import pandas as pd
import tldextract
import matplotlib.pyplot as plt
import os
import re
import time
from scipy import sparse
%matplotlib inline
```
## Load the data
- Load benign domain names
```
df_benign_domain = pd.read_csv('top-1m.csv', index_col=0, header=None).reset_index(drop=True)
df_benign_domain.columns = ['domain']
df_benign_domain['label'] = 0
```
- Load malicious domain names
```
df_malicious_domain = pd.read_csv('malicious-domain.csv', engine='python', header=None)
df_malicious_domain = df_malicious_domain[[1]]
df_malicious_domain.columns = ['domain']
df_malicious_domain = df_malicious_domain[df_malicious_domain['domain'] != '-']
df_malicious_domain['label'] = 1
df_domain = pd.concat([df_benign_domain, df_malicious_domain], axis=0)
def remove_tld(domain):
ext = tldextract.extract(domain)
if ext.subdomain != '':
domain = ext.subdomain + '.' + ext.domain
else:
domain = ext.domain
return domain
df_domain['domain'] = df_domain['domain'].map(lambda x: tldextract.extract(x).domain)
```
## Extract n-gram features
```
from sklearn.feature_extraction.text import CountVectorizer
domain_list = df_domain[df_domain['label'] == 0]['domain'].values.tolist()
benign_text_str = '.'.join(domain_list)
benign_text = re.split(r'[.-]', benign_text_str)
benign_text = list(filter(lambda x: len(x) >= 3, benign_text))
def get_ngram_weight_dict(benign_text):
cv = CountVectorizer(ngram_range = (3, 7), analyzer='char', max_features=100000)
cv.fit(benign_text)
feature_names = cv.get_feature_names()
benign_text_vectors = cv.transform(benign_text)
ngram_count = benign_text_vectors.sum(axis=0)
window_sizes = np.array(list(map(lambda x: len(x), feature_names)))
ngram_weights = np.multiply(np.log2(ngram_count), window_sizes)
ngram_weights = sparse.csr_matrix(ngram_weights)
feature_names = cv.get_feature_names()
ngram_weights_dict = dict()
for ngram, weight in zip(feature_names, ngram_weights.toarray()[0].tolist()):
ngram_weights_dict[ngram] = weight
return ngram_weights_dict
ngram_weights_dict = get_ngram_weight_dict(benign_text)
```
## Compute the domain reputation score
```
def get_reputation_value(ngram_weights_dict, domain):
if len(domain) < 3:
return 1000
domains = re.split(r'[.-]', domain)
reputation = 0
domain_len = 0
for domain in domains:
domain_len += len(domain)
for window_size in range(3, 8):
for i in range(len(domain) - window_size + 1):
reputation += ngram_weights_dict.get(domain[i:i+window_size], 0)
reputation = reputation / domain_len
return reputation
get_reputation_value(ngram_weights_dict, 'google')
get_reputation_value(ngram_weights_dict, 'ta0ba0')
get_reputation_value(ngram_weights_dict, 'dskdjisuowerwdfskdfj000')
start = time.time()
df_domain['reputation'] = df_domain['domain'].map(lambda x: get_reputation_value(ngram_weights_dict, x))
end = time.time()
print('cost time : {}'.format(end - start))
df_domain[df_domain['label'] == 0]['reputation'].describe()
df_domain[df_domain['label'] == 1]['reputation'].describe()
```
## Save the model file
```
import joblib
joblib.dump(ngram_weights_dict, 'ngram_weights_dict.m', compress=4)
```
# Data preparation for tutorial
This notebook contains the code to convert raw downloaded external data into a cleaned or simplified version for tutorial purposes.
The raw data is expected to be in the `./raw` sub-directory (not included in the git repo).
```
%matplotlib inline
import pandas as pd
import geopandas
```
## Countries dataset
http://www.naturalearthdata.com/downloads/110m-cultural-vectors/110m-admin-0-countries/
```
countries = geopandas.read_file("zip://./raw/original_data_ne/ne_110m_admin_0_countries.zip")
countries.head()
len(countries)
countries_subset = countries[['ADM0_A3', 'NAME', 'CONTINENT', 'POP_EST', 'GDP_MD_EST', 'geometry']]
countries_subset.columns = countries_subset.columns.str.lower()
countries_subset = countries_subset.rename(columns={'adm0_a3': 'iso_a3'})
countries_subset.head()
countries_subset.to_file("ne_110m_admin_0_countries.shp")
```
## Natural Earth - Cities dataset
http://www.naturalearthdata.com/downloads/110m-cultural-vectors/110m-populated-places/ (simple, version 4.0.0, downloaded May 2018)
```
cities = geopandas.read_file("zip://./raw/original_data_ne/ne_110m_populated_places_simple.zip")
cities.head()
len(cities)
cities_subset = cities[['name', 'geometry']]
cities_subset.head()
cities_subset.to_file("ne_110m_populated_places.shp")
```
## Natural Earth - Rivers dataset
http://www.naturalearthdata.com/downloads/50m-physical-vectors/50m-rivers-lake-centerlines/ (version 4.0.0, downloaded May 2018)
```
rivers = geopandas.read_file("zip://./raw/ne_50m_rivers_lake_centerlines.zip")
rivers.head()
```
Remove rows with missing geometry:
```
len(rivers)
rivers = rivers[~rivers.geometry.isna()].reset_index(drop=True)
len(rivers)
```
Subset of the columns:
```
rivers_subset = rivers[['featurecla', 'name_en', 'geometry']].rename(columns={'name_en': 'name'})
rivers_subset.head()
rivers_subset.to_file("ne_50m_rivers_lake_centerlines.shp")
```
## Paris districts
Source: https://opendata.paris.fr/explore/dataset/quartier_paris/ (downloaded as GeoJSON file on August 20, 2018)
Administrative districts, polygon dataset
```
districts = geopandas.read_file("./raw/quartier_paris.geojson")
districts.head()
districts = districts.rename(columns={'l_qu': 'district_name', 'c_qu': 'id'}).sort_values('id').reset_index(drop=True)
```
Add population data (based on pdfs downloaded from ..):
```
population = pd.read_csv("./raw/paris-population.csv")
population['temp'] = population.district_name.str.lower()
population['temp'] = population['temp'].replace({
'javel': 'javel 15art',
'saint avoye': 'sainte avoie',
"saint germain l'auxerrois": "st germain l'auxerrois",
'plaine monceau': 'plaine de monceaux',
'la chapelle': 'la chapelle'})
districts['temp'] = (districts.district_name.str.lower().str.replace('-', ' ')
.str.replace('é', 'e').str.replace('è', 'e').str.replace('ê', 'e').str.replace('ô', 'o'))
res = pd.merge(districts, population[['population', 'temp']], on='temp', how='outer')
assert len(res) == len(districts)
districts = res[['id', 'district_name', 'population', 'geometry']]
districts.head()
districts.to_file("processed/paris_districts.geojson", driver='GeoJSON')
districts = districts.to_crs(epsg=32631)
districts.to_file("paris_districts_utm.geojson", driver='GeoJSON')
```
## Commerces de Paris
Source: https://opendata.paris.fr/explore/dataset/commercesparis/ (downloaded as csv file (`commercesparis.csv`) on October 30, 2018)
```
df = pd.read_csv("./raw/commercesparis.csv", sep=';')
df.iloc[0]
```
Take subset of the restaurants:
```
restaurants = df[df['CODE ACTIVITE'].str.startswith('CH1', na=False)].copy()
restaurants['LIBELLE ACTIVITE'].value_counts()
restaurants = restaurants.dropna(subset=['XY']).reset_index(drop=True)
```
Translate the restaurants and rename column:
```
restaurants['LIBELLE ACTIVITE'] = restaurants['LIBELLE ACTIVITE'].replace({
'Restaurant traditionnel français': 'Traditional French restaurant',
'Restaurant asiatique': 'Asian restaurant',
    'Restaurant européen': 'European restaurant',
'Restaurant indien, pakistanais et Moyen Orient': 'Indian / Middle Eastern restaurant',
'Restaurant maghrébin': 'Maghrebian restaurant',
'Restaurant africain': 'African restaurant',
'Autre restaurant du monde': 'Other world restaurant',
    'Restaurant central et sud américain': 'Central and South American restaurant',
'Restaurant antillais': 'Caribbean restaurant'
})
restaurants = restaurants.rename(columns={'LIBELLE ACTIVITE': 'type'})
```
Create GeoDataFrame
```
from shapely.geometry import Point
restaurants['geometry'] = restaurants['XY'].str.split(', ').map(lambda x: Point(float(x[1]), float(x[0])))
restaurants = geopandas.GeoDataFrame(restaurants[['type', 'geometry']], crs={'init': 'epsg:4326'})
restaurants.head()
restaurants.to_file("processed/paris_restaurants.gpkg", driver='GPKG')
```
<!--BOOK_INFORMATION-->
<img align="left" style="padding-right:10px;" src="images/book_cover.jpg" width="120">
*This notebook contains an excerpt from the [Python Programming and Numerical Methods - A Guide for Engineers and Scientists](https://www.elsevier.com/books/python-programming-and-numerical-methods/kong/978-0-12-819549-9), the content is also available at [Berkeley Python Numerical Methods](https://pythonnumericalmethods.berkeley.edu/notebooks/Index.html).*
*The copyright of the book belongs to Elsevier. We also have this interactive book online for a better learning experience. The code is released under the [MIT license](https://opensource.org/licenses/MIT). If you find this content useful, please consider supporting the work on [Elsevier](https://www.elsevier.com/books/python-programming-and-numerical-methods/kong/978-0-12-819549-9) or [Amazon](https://www.amazon.com/Python-Programming-Numerical-Methods-Scientists/dp/0128195495/ref=sr_1_1?dchild=1&keywords=Python+Programming+and+Numerical+Methods+-+A+Guide+for+Engineers+and+Scientists&qid=1604761352&sr=8-1)!*
<!--NAVIGATION-->
< [14.1 Basics of Linear Algebra](chapter14.01-Basics-of-Linear-Algebra.ipynb) | [Contents](Index.ipynb) | [14.3 Systems of Linear Equations](chapter14.03-Systems-of-Linear-Equations.ipynb) >
# Linear Transformations
For vectors $x$ and $y$, and scalars $a$ and $b$, it is sufficient to say that a function, $F$, is a **linear transformation** if
$$
F(ax + by) = aF(x) + bF(y).
$$
It can be shown that multiplying an ${m} \times {n}$ matrix, $A$, and an ${n} \times {1}$ vector, $v$, of compatible size is a linear transformation of $v$. Therefore from this point forward, a matrix will be synonymous with a linear transformation function.
**TRY IT!** Let $x$ be a vector and let $F(x)$ be defined by $F(x) = Ax$ where $A$ is a rectangular matrix of appropriate size. Show that $F(x)$ is a linear transformation.
Proof:
Since $F(x) = Ax$, then
for vectors $v$ and $w$, and scalars $a$ and $b$, $F(av +
bw) = A(av + bw)$ (by definition of $F$)$=$$aAv + bAw$ (by
distributive property of matrix multiplication)$=$$aF(v) +
bF(w)$ (by definition of $F$).
If $A$ is an ${m} \times {n}$ matrix, then there are two important subspaces associated with $A$, one is ${\mathbb{R}}^n$, the other is ${\mathbb{R}}^m$. The **domain** of $A$ is a subspace of ${\mathbb{R}}^n$. It is the set of all vectors that can be multiplied by $A$ on the right. The **range** of $A$ is a subspace of ${\mathbb{R}}^m$. It is the set of all vectors $y$ such that $y=Ax$. It can be denoted as $\mathcal{R}(\mathbf{A})$, where $\mathcal{R}(\mathbf{A}) = \{y \in {\mathbb{R}}^m: Ax = y\}$. Another way to think about the range of $A$ is the set of all linear combinations of the columns in $A$, where $x_i$ is the coefficient of the ith column in $A$. The **null space** of $A$, defined as $\mathcal{N}(\mathbf{A}) = \{x \in {\mathbb{R}}^n: Ax = 0_m\}$, is the subset of vectors in the domain of $A, x$, such that $Ax = 0_m$, where $0_m$ is the **zero vector** (i.e., a vector in ${\mathbb{R}}^m$ with all zeros).
**TRY IT!** Let $A = [[1, 0, 0], [0, 1, 0], [0, 0, 0]]$ and let the domain of $A$ be ${\mathbb{R}}^3$. Characterize the range and nullspace of $A$.
Let $v = [x,y,z]$ be a vector in ${\mathbb{R}}^3$. Then $u = Av$ is the vector $u = [x,y,0]$. Since $x,y\in {\mathbb{R}}$, the range of $A$ is the $x$-$y$ plane at $z = 0$.
Let $v = [0,0,z]$ for $z\in {\mathbb{R}}$. Then $u = Av$ is the vector $u = [0, 0, 0]$. Therefore, the nullspace of $A$ is the $z$-axis (i.e., the set of vectors $[0,0,z]$ $z\in {\mathbb{R}}$).
Therefore, this linear transformation "flattens" any $z$-component from a vector.
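As a quick numerical check (not from the original text), the flattening matrix above can be explored with NumPy; the vectors and scalars below are arbitrary choices used only to illustrate linearity.
```python
import numpy as np

A = np.array([[1, 0, 0],
              [0, 1, 0],
              [0, 0, 0]])

v = np.array([1.0, 2.0, 3.0])
w = np.array([-4.0, 0.5, 2.0])
a, b = 2.0, -3.0

# Linearity: F(a*v + b*w) equals a*F(v) + b*F(w)
print(np.allclose(A @ (a*v + b*w), a*(A @ v) + b*(A @ w)))  # True

# The z-component is flattened to zero
print(A @ v)  # [1. 2. 0.]
```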
<!--NAVIGATION-->
< [14.1 Basics of Linear Algebra](chapter14.01-Basics-of-Linear-Algebra.ipynb) | [Contents](Index.ipynb) | [14.3 Systems of Linear Equations](chapter14.03-Systems-of-Linear-Equations.ipynb) >
# Another attempt at MC Simulation on AHP/ANP
The ideas are the following:
1. There is a class MCAnp that has a sim() method that will simulate any Prioritizer
2. MCAnp also has a sim_fill() function that does fills in the data needed for a single simulation
## Import needed libs
```
import pandas as pd
import sys
import os
sys.path.insert(0, os.path.abspath("../"))
import numpy as np
from scipy.stats import triang
from copy import deepcopy
from pyanp.priority import pri_eigen
from pyanp.pairwise import Pairwise
from pyanp.ahptree import AHPTree, AHPTreeNode
from pyanp.direct import Direct
```
# MCAnp class
```
def ascale_mscale(val:(float,int))->float:
if val is None:
return 0
elif val < 0:
val = -val
val += 1
val = 1.0/val
return val
else:
return val+1
def mscale_ascale(val:(float,int))->float:
if val == 0:
return None
elif val >= 1:
return val - 1
else:
val = 1/val
val = val-1
return -val
DEFAULT_DISTRIB = triang(c=0.5, loc=-1.5, scale=3.0)
def avote_random(avote):
"""
Returns a random additive vote in the neighborhood of the additive vote avote
according to the default disribution DEFAULT_DISTRIB
"""
if avote is None:
return None
raw_val = DEFAULT_DISTRIB.rvs(size=1)[0]
return avote+raw_val
def mvote_random(mvote):
"""
    Returns a random multiplicative vote in the neighborhood of the multiplicative vote mvote
    according to the default distribution DEFAULT_DISTRIB. This is handled by converting
    the multiplicative vote to an additive vote, calling avote_random() and converting the
    result back to a multiplicative vote
"""
avote = mscale_ascale(mvote)
rval_a = avote_random(avote)
rval = ascale_mscale(rval_a)
return rval
def direct_random(direct, max_percent_chg=0.2)->float:
"""
Returns a random direct data value near the value `direct'. This function
creates a random percent change, between -max_percent_chg and +max_percent_chg, and
then changes the direct value by that factor, and returns it.
"""
pchg = np.random.uniform(low=-max_percent_chg, high=max_percent_chg)
return direct * (1 + pchg)
class MCAnp:
def __init__(self):
# Setup the random pairwise vote generator
self.pwvote_random = mvote_random
# Setup the random direct vote generator
self.directvote_random = direct_random
# Set the default user to use across the simulation
# follows the standard from Pairwise class, i.e. it can be a list
# of usernames, a single username, or None (which means total group average)
self.username = None
# What is the pairwise priority calculation?
self.pwprioritycalc = pri_eigen
def sim_fill(self, src, dest):
"""
Fills in data on a structure prior to doing the simulation calculations.
This function calls sim_NAME_fill depending on the class of the src object.
If the dest object is None, we create a dest object by calling deepcopy().
In either case, we always return the allocated dest object
"""
if dest is None:
dest = deepcopy(src)
# Which kind of src do we have
if isinstance(src, np.ndarray):
# We are simulating on a pairwise comparison matrix
return self.sim_pwmat_fill(src, dest)
elif isinstance(src, Pairwise):
# We are simulating on a multi-user pairwise comparison object
return self.sim_pw_fill(src, dest)
elif isinstance(src, AHPTree):
# We are simulating on an ahp tree object
return self.sim_ahptree_fill(src, dest)
elif isinstance(src, Direct):
# We are simulating on an ahp direct data
return self.sim_direct_fill(src, dest)
else:
raise ValueError("Src class is not handled, it is "+type(src).__name__)
def sim_pwmat_fill(self, pwsrc:np.ndarray, pwdest:np.ndarray=None)->np.ndarray:
"""
Fills in a pairwise comparison matrix with noisy votes based on pwsrc
If pwsrc is None, we create a new matrix, otherwise we fill in pwdest
with noisy values based on pwsrc and the self.pwvote_random parameter.
In either case, we return the resulting noisy matrix
"""
if pwdest is None:
pwdest = deepcopy(pwsrc)
size = len(pwsrc)
for row in range(size):
pwdest[row,row] = 1.0
for col in range(row+1, size):
val = pwsrc[row,col]
if val >= 1:
nvote = self.pwvote_random(val)
pwdest[row, col]=nvote
pwdest[col, row]=1/nvote
elif val!= 0:
nvote = self.pwvote_random(1/val)
pwdest[col, row] = nvote
pwdest[row, col] = 1/nvote
                else:
                    # val == 0 means no vote was entered; keep "no comparison" rather than
                    # reusing a stale nvote from a previous iteration
                    pwdest[row, col] = 0
                    pwdest[col, row] = 0
return pwdest
def sim_pwmat(self, pwsrc:np.ndarray, pwdest:np.ndarray=None)->np.ndarray:
"""
creates a noisy pw comparison matrix from pwsrc, stores the matrix in pwdest (which
is created if pwdest is None) calculates the resulting priority and returns that
"""
pwdest = self.sim_pwmat_fill(pwsrc, pwdest)
rval = self.pwprioritycalc(pwdest)
return rval
def sim_pw(self, pwsrc:Pairwise, pwdest:Pairwise)->np.ndarray:
"""
Performs a simulation on a pairwise comparison matrix object and returns the
resulting priorities
"""
pwdest = self.sim_pw_fill(pwsrc, pwdest)
mat = pwdest.matrix(self.username)
rval = self.pwprioritycalc(mat)
return rval
def sim_pw_fill(self, pwsrc:Pairwise, pwdest:Pairwise=None)->Pairwise:
"""
Fills in the pairwise comparison structure of pwdest with noisy pairwise data from pwsrc.
If pwdest is None, we create one first, then fill in. In either case, we return the pwdest
object with new noisy data in it.
"""
if pwdest is None:
pwdest = deepcopy(pwsrc)
for user in pwsrc.usernames():
srcmat = pwsrc.matrix(user)
destmat = pwdest.matrix(user)
self.sim_pwmat_fill(srcmat, destmat)
return pwdest
def sim_direct_fill(self, directsrc:Direct, directdest:Direct=None)->Direct:
"""
Fills in the direct data structure of directdest with noisy data from directsrc.
If directdest is None, we create one as a deep copy of directsrc, then fill in.
In either case, we return the directdest object with new noisy data in it.
"""
if directdest is None:
directdest = deepcopy(directsrc)
for altpos in range(len(directdest)):
orig = directsrc[altpos]
newvote = self.directvote_random(orig)
directdest.data[altpos] = newvote
return directdest
def sim_direct(self, directsrc:Direct, directdest:Direct=None)->np.ndarray:
"""
Simulates for direct data
"""
directdest = self.sim_direct_fill(directsrc, directdest)
return directdest.priority()
def sim_ahptree_fill(self, ahpsrc:AHPTree, ahpdest:AHPTree)->AHPTree:
"""
Fills in the ahp tree structure of ahpdest with noisy data from ahpsrc.
If ahpdest is None, we create one as a deepcopy of ahpsrc, then fill in.
In either case, we return the ahpdest object with new noisy data in it.
"""
if ahpdest is None:
ahpdest = deepcopy(ahpsrc)
self.sim_ahptreenode_fill(ahpsrc.root, ahpdest.root)
return ahpdest
def sim_ahptreenode_fill(self, nodesrc:AHPTreeNode, nodedest:AHPTreeNode)->AHPTreeNode:
"""
Fills in noisy data in an AHPTreeNode and, recursively, in its children
"""
#Okay, first we fill in for the alt_prioritizer
if nodesrc.alt_prioritizer is not None:
self.sim_fill(nodesrc.alt_prioritizer, nodedest.alt_prioritizer)
#Now we fill in the child prioritizer
if nodesrc.child_prioritizer is not None:
self.sim_fill(nodesrc.child_prioritizer, nodedest.child_prioritizer)
#Now for each child, fill in
for childsrc, childdest in zip(nodesrc.children, nodedest.children):
self.sim_ahptreenode_fill(childsrc, childdest)
#We are done, return the dest
return nodedest
def sim_ahptree(self, ahpsrc:AHPTree, ahpdest:AHPTree)->np.ndarray:
"""
Perform the actual simulation
"""
ahpdest = self.sim_ahptree_fill(ahpsrc, ahpdest)
return ahpdest.priority()
mc = MCAnp()
pw = np.array([
[1, 1/2, 3],
[2, 1, 5],
[1/3, 1/5, 1]
])
rpw= mc.sim_pwmat_fill(pw)
rpw
[mc.sim_pwmat(pw) for i in range(20)]
pwobj = Pairwise(alts=['alt '+str(i) for i in range(3)])
pwobj.vote_matrix(user_name='u1', val=pw)
```
## Checking that the deep copy is actually a deep copy
For some reason deepcopy was not copying the matrix, so I had to override `__deepcopy__` in `Pairwise`; a sketch of what such an override can look like is shown below.
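The override itself lives in the imported library and is not shown here; the following is only a minimal, hypothetical sketch of the idea, assuming a class that (like `Pairwise` in this notebook) stores its matrices inside a pandas DataFrame attribute called `df`:
```
# Hypothetical sketch only: the real Pairwise class comes from the imported AHP library.
from copy import deepcopy
import pandas as pd

class PairwiseLike:
    def __init__(self, df: pd.DataFrame):
        self.df = df  # a 'Matrix' column holds one numpy array per user

    def __deepcopy__(self, memo=None):
        # DataFrame.copy() copies the frame, but the cells still reference the
        # same numpy arrays, so each stored matrix is deep-copied explicitly.
        new_df = self.df.copy()
        new_df['Matrix'] = [deepcopy(m) for m in self.df['Matrix']]
        return PairwiseLike(new_df)
```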
```
pwobj.matrix('u1')
rpwobj = pwobj.__deepcopy__()
a=rpwobj
b=pwobj
a.df
display(a.df.loc['u1', 'Matrix'])
display(b.df.loc['u1', 'Matrix'])
display(a.matrix('u1') is b.matrix('u1'))
display(a.matrix('u1') == b.matrix('u1'))
```
## Now let's try to simulate
```
[mc.sim_pw(pwobj, rpwobj) for i in range(20)]
pwobj.matrix('u1')
```
## Try to simulate direct data
```
dd = Direct(alt_names=['a1', 'a2', 'a3'])
dd.data[0]=0.5
dd.data[1]=0.3
dd.data[2]=0.2
rdd=mc.sim_direct_fill(dd)
rdd.data
```
## Simulate an ahptree
```
alts=['alt '+str(i) for i in range(3)]
tree = AHPTree(alt_names=alts)
kids = ['crit '+str(i) for i in range(4)]
for kid in kids:
tree.add_child(kid)
node = tree.get_node(kid)
direct = node.alt_prioritizer
s = 0
for alt in alts:
direct[alt] = np.random.uniform()
s += direct[alt]
if s != 0:
for alt in alts:
direct[alt] /= s
tree.priority()
mc.sim_ahptree(tree, None)
tree.priority()
```
| github_jupyter |
# Lab 5
## Data: _European Union lesbian, gay, bisexual and transgender survey (2012)_
Link to the data [here](https://www.kaggle.com/ruslankl/european-union-lgbt-survey-2012).
### Context
The FRA (Fundamental Rights Agency) ran an online survey to identify how lesbian, gay, bisexual and transgender (LGBT) people living in the European Union and Croatia experience the fulfilment of their fundamental rights. The evidence produced by the survey will support the development of more effective laws and policies to fight discrimination, violence and harassment, improving equal treatment across society. The need for an EU-wide survey of this kind became evident after the publication in 2009 of the FRA's first report on homophobia and discrimination on grounds of sexual orientation or gender identity, which highlighted the absence of comparable data. The European Commission asked the FRA to collect comparable data across the EU on this topic. The FRA organised the data collection as an online survey covering all EU Member States and Croatia. Respondents were people aged 18 or over who identify as lesbian, gay, bisexual or transgender, surveyed anonymously. The survey was available online from April to July 2012 in the 23 official EU languages (except Irish) plus Catalan, Croatian, Luxembourgish, Russian and Turkish. In total, 93,079 LGBT people completed the survey. The FRA's in-house experts designed the survey, which was implemented by Gallup, one of the market leaders in large-scale surveys. In addition, civil-society organisations such as ILGA-Europe (the European Region of the International Lesbian, Gay, Bisexual, Trans and Intersex Association) and Transgender Europe (TGEU) provided advice on how best to reach LGBT people.
More information about the survey methodology can be found in the [__EU LGBT survey technical report. Methodology, online survey, questionnaire and sample__](https://fra.europa.eu/sites/default/files/eu-lgbt-survey-technical-report_en.pdf).
### Content
The dataset consists of 5 .csv files representing 5 blocks of questions: daily life, discrimination, violence and harassment, rights awareness, and transgender-specific questions.
The schema of all the tables is identical:
* `CountryCode` - name of the country
* `subset` - Lesbian, Gay, Bisexual women, Bisexual men or Transgender (for Transgender Specific Questions table the value is only Transgender)
* `question_code` - unique code ID for the question
* `question_label` - full question text
* `answer` - answer given
* `percentage`
* `notes` - [0]: small sample size; [1]: NA due to small sample size; [2]: missing value
In today's lab we will only use the daily-life block, available in the `LGBT_Survey_DailyLife.csv` file inside the `data` folder.
```
import os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
daily_life_raw = pd.read_csv(os.path.join("..", "data", "LGBT_Survey_DailyLife.csv"))
daily_life_raw.head()
daily_life_raw.info()
daily_life_raw.describe(include="all").T
questions = (
daily_life_raw.loc[: , ["question_code", "question_label"]]
.drop_duplicates()
.set_index("question_code")
.squeeze()
)
for idx, value in questions.items():
print(f"Question code {idx}:\n\n{value}\n\n")
```
### Data preprocessing
Did you notice that the `percentage` column is not numeric? That is because of the records with notes `[1]`, so we will remove them.
```
daily_life_raw.notes.unique()
daily_life = (
daily_life_raw.query("notes != ' [1] '")
.astype({"percentage": "int"})
.drop(columns=["question_label", "notes"])
.rename(columns={"CountryCode": "country"})
)
daily_life.head()
```
## Exercise 1
(1 pt)
What type of data (nominal, ordinal, discrete, continuous) does each column of the `daily_life` DataFrame correspond to?
Tip: look at the unique values of each column (one way to inspect them is sketched below the cell).
```
daily_life.dtypes
# FREE STYLE #
```
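One possible way to inspect the columns before answering (a sketch, not the official solution):
```
# Sketch: number of distinct values per column plus a few example values
for col in daily_life.columns:
    print(col, daily_life[col].nunique(), daily_life[col].unique()[:5])
```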
__Answer:__
* `country`:
* `subset`:
* `question_code`:
* `answer`:
* `percentage`:
## Exercise 2
(1 pt)
Create a new dataframe `df1` that only contains records from Belgium, for the question with code `b1_b`, and where the answer was _Very widespread_.
Then create a vertical bar chart with matplotlib's `bar` function showing the percentage of answers for each group. The figure must be 10 x 6 in size and the bars must be green (one possible solution is sketched after the cell).
```
print(f"Question b1_b:\n\n{questions['b1_b']}")
df1 = # FIX ME #
df1
x = # FIX ME #
y = # FIX ME #
fig = plt.figure(# FIX ME #)
plt# FIX ME #
plt.show()
```
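For reference, one possible way to fill in the blanks above (a sketch only; it assumes the column names produced earlier in this notebook):
```
df1 = daily_life.query(
    "country == 'Belgium' and question_code == 'b1_b' and answer == 'Very widespread'"
)

x = df1["subset"]
y = df1["percentage"]

fig = plt.figure(figsize=(10, 6))
plt.bar(x, y, color="green")
plt.show()
```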
## Exercise 3
(1 pt)
For the question with code `g5`, what is the average percentage for each answer value (note that the answers to this question are numeric)?
```
print(f"Question g5:\n\n{questions['g5']}")
```
Create a DataFrame called `df2` such that (one possible construction is sketched after the cell):
1. It only contains records for the question with code `g5`.
2. The `answer` column is cast to `int`.
3. It is grouped by country and answer, averaging the percentage column (use `agg`).
4. The index is reset.
```
df2 = (
# FIX ME #
)
df2
```
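One possible construction (a reference sketch):
```
df2 = (
    daily_life.query("question_code == 'g5'")
    .astype({"answer": "int"})
    .groupby(["country", "answer"])
    .agg({"percentage": "mean"})
    .reset_index()
)
```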
Create a DataFrame called `df2_mean` such that:
1. It groups `df2` by answer and averages the percentage.
2. The index is reset.
```
df2_mean = df2.# FIX ME #
df2_mean.head()
```
Now, plot the following (a reference sketch is provided after the cell):
1. A figure with two columns, figure size 15 x 12, sharing both the x and y axes. Use `plt.subplots`.
2. On the first _Axes_ (`ax1`), draw a _scatter plot_ with the answer values of `df2` on the x axis and the percentages of `df2` on the y axis. Remember that these are per-country averages, so there will be more than 10 points in the plot.
3. On the second _Axes_ (`ax2`), draw a horizontal bar chart with the answer values of `df2_mean` and the percentages of `df2_mean`.
```
x = # FIX ME #
y = # FIX ME #
x_mean = # FIX ME #
y_mean = # FIX ME #
fig, (ax1, ax2) = plt.subplots(# FIX ME #)
ax1.# FIX ME #
ax1.grid(alpha=0.3)
ax2.# FIX ME #
ax2.grid(alpha=0.3)
fig.show()
```
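One possible way to complete the cell above (a sketch; the exact axis assignment for the horizontal bar chart is an assumption):
```
x = df2["answer"]
y = df2["percentage"]
x_mean = df2_mean["answer"]
y_mean = df2_mean["percentage"]

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(15, 12), sharex=True, sharey=True)
ax1.scatter(x, y)
ax1.grid(alpha=0.3)
ax2.barh(x_mean, y_mean)
ax2.grid(alpha=0.3)
fig.show()
```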
## Exercise 4
(1 pt)
For the same question `g5`, how are the percentages distributed, on average, for each country and group?
We will use the heat map presented in class; to do so, the data needs a bit of processing to build the elements the heat map requires.
Create a DataFrame called `df3` such that (a reference sketch follows the cell):
1. It only contains records for the question with code `g5`.
2. The `answer` column is cast to `int`.
3. It is grouped by country and subset, then the percentage column is averaged (use `agg`).
4. The index is reset.
5. It is pivoted so that the index holds the countries, the columns the groups, and the values the average percentages.
6. Null values are filled with zero. Use `fillna`.
```
## Code from:
# https://matplotlib.org/3.1.1/gallery/images_contours_and_fields/image_annotated_heatmap.html#sphx-glr-gallery-images-contours-and-fields-image-annotated-heatmap-py
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
def heatmap(data, row_labels, col_labels, ax=None,
cbar_kw={}, cbarlabel="", **kwargs):
"""
Create a heatmap from a numpy array and two lists of labels.
Parameters
----------
data
A 2D numpy array of shape (N, M).
row_labels
A list or array of length N with the labels for the rows.
col_labels
A list or array of length M with the labels for the columns.
ax
A `matplotlib.axes.Axes` instance to which the heatmap is plotted. If
not provided, use current axes or create a new one. Optional.
cbar_kw
A dictionary with arguments to `matplotlib.Figure.colorbar`. Optional.
cbarlabel
The label for the colorbar. Optional.
**kwargs
All other arguments are forwarded to `imshow`.
"""
if not ax:
ax = plt.gca()
# Plot the heatmap
im = ax.imshow(data, **kwargs)
# Create colorbar
cbar = ax.figure.colorbar(im, ax=ax, **cbar_kw)
cbar.ax.set_ylabel(cbarlabel, rotation=-90, va="bottom")
# We want to show all ticks...
ax.set_xticks(np.arange(data.shape[1]))
ax.set_yticks(np.arange(data.shape[0]))
# ... and label them with the respective list entries.
ax.set_xticklabels(col_labels)
ax.set_yticklabels(row_labels)
# Let the horizontal axes labeling appear on top.
ax.tick_params(top=True, bottom=False,
labeltop=True, labelbottom=False)
# Rotate the tick labels and set their alignment.
plt.setp(ax.get_xticklabels(), rotation=-30, ha="right",
rotation_mode="anchor")
# Turn spines off and create white grid.
for edge, spine in ax.spines.items():
spine.set_visible(False)
ax.set_xticks(np.arange(data.shape[1]+1)-.5, minor=True)
ax.set_yticks(np.arange(data.shape[0]+1)-.5, minor=True)
ax.grid(which="minor", color="w", linestyle='-', linewidth=3)
ax.tick_params(which="minor", bottom=False, left=False)
return im, cbar
def annotate_heatmap(im, data=None, valfmt="{x:.2f}",
textcolors=["black", "white"],
threshold=None, **textkw):
"""
A function to annotate a heatmap.
Parameters
----------
im
The AxesImage to be labeled.
data
Data used to annotate. If None, the image's data is used. Optional.
valfmt
The format of the annotations inside the heatmap. This should either
use the string format method, e.g. "$ {x:.2f}", or be a
`matplotlib.ticker.Formatter`. Optional.
textcolors
A list or array of two color specifications. The first is used for
values below a threshold, the second for those above. Optional.
threshold
Value in data units according to which the colors from textcolors are
applied. If None (the default) uses the middle of the colormap as
separation. Optional.
**kwargs
All other arguments are forwarded to each call to `text` used to create
the text labels.
"""
if not isinstance(data, (list, np.ndarray)):
data = im.get_array()
# Normalize the threshold to the images color range.
if threshold is not None:
threshold = im.norm(threshold)
else:
threshold = im.norm(data.max())/2.
# Set default alignment to center, but allow it to be
# overwritten by textkw.
kw = dict(horizontalalignment="center",
verticalalignment="center")
kw.update(textkw)
# Get the formatter in case a string is supplied
if isinstance(valfmt, str):
valfmt = matplotlib.ticker.StrMethodFormatter(valfmt)
# Loop over the data and create a `Text` for each "pixel".
# Change the text's color depending on the data.
texts = []
for i in range(data.shape[0]):
for j in range(data.shape[1]):
kw.update(color=textcolors[int(im.norm(data[i, j]) > threshold)])
text = im.axes.text(j, i, valfmt(data[i, j], None), **kw)
texts.append(text)
return texts
df3 = (
# FIX ME #
)
df3.head()
```
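One possible construction of `df3` (a reference sketch):
```
df3 = (
    daily_life.query("question_code == 'g5'")
    .astype({"answer": "int"})
    .groupby(["country", "subset"])
    .agg({"percentage": "mean"})
    .reset_index()
    .pivot(index="country", columns="subset", values="percentage")
    .fillna(0)
)
df3.head()
```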
Finally, the ingredients for the heat map are:
```
countries = df3.index.tolist()
subsets = df3.columns.tolist()
answers = df3.values
```
The heat map must be built as follows (a reference sketch is shown after the cell):
* Figure size: 15 x 20
* cmap = "YlGn"
* cbarlabel = "Porcentaje promedio (%)"
* Annotation precision: float with two decimals.
```
fig, ax = plt.subplots(# FIX ME #)
im, cbar = heatmap(# FIX ME #)
texts = annotate_heatmap(# FIX ME #)
fig.tight_layout()
plt.show()
```
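One possible way to complete the heat map cell using the spec above (a sketch):
```
fig, ax = plt.subplots(figsize=(15, 20))
im, cbar = heatmap(answers, countries, subsets, ax=ax,
                   cmap="YlGn", cbarlabel="Porcentaje promedio (%)")
texts = annotate_heatmap(im, valfmt="{x:.2f}")
fig.tight_layout()
plt.show()
```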
| github_jupyter |
# Talktorial 1
# Compound data acquisition (ChEMBL)
#### Developed in the CADD seminars 2017 and 2018, AG Volkamer, Charité/FU Berlin
Paula Junge and Svetlana Leng
## Aim of this talktorial
We learn how to extract data from ChEMBL:
* Find ligands which were tested on a certain target
* Filter by available bioactivity data
* Calculate pIC50 values
* Merge dataframes and draw extracted molecules
## Learning goals
### Theory
* ChEMBL database
* ChEMBL web services
* ChEMBL webresource client
* Compound activity measures
* IC50
* pIC50
### Practical
Goal: Get list of compounds with bioactivity data for a given target
* Connect to ChEMBL database
* Get target data (EGFR kinase)
* Bioactivity data
* Download and filter bioactivities
* Clean and convert
* Compound data
* Get list of compounds
* Prepare output data
* Output
* Draw molecules with highest pIC50
* Write output file
## References
* ChEMBL bioactivity database (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5210557/)
* ChEMBL web services: <i>Nucleic Acids Res.</i> (2015), <b>43</b>, 612-620 (https://academic.oup.com/nar/article/43/W1/W612/2467881)
* ChEMBL webresource client GitHub (https://github.com/chembl/chembl_webresource_client)
* myChEMBL webservices version 2.x (https://github.com/chembl/mychembl/blob/master/ipython_notebooks/09_myChEMBL_web_services.ipynb)
* ChEMBL web-interface (https://www.ebi.ac.uk/chembl/)
* EBI-RDF platform (https://www.ncbi.nlm.nih.gov/pubmed/24413672)
* IC50 and pIC50 (https://en.wikipedia.org/wiki/IC50)
* UniProt website (https://www.uniprot.org/)
_____________________________________________________________________________________________________________________
## Theory
### ChEMBL database
* Open large-scale bioactivity database
* **Current data content (as of 10.2018):**
* \>1.8 million distinct compound structures
* \>15 million activity values from 1 million assays
* Assays are mapped to ∼12 000 targets
* **Data sources** include scientific literature, PubChem bioassays, Drugs for Neglected Diseases Initiative (DNDi), BindingDB database, ...
* ChEMBL data can be accessed via a [web-interface](https://www.ebi.ac.uk/chembl/), the [EBI-RDF platform](https://www.ncbi.nlm.nih.gov/pubmed/24413672) and the [ChEMBL web services](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4489243/#B5)
#### ChEMBL web services
* RESTful web service
* ChEMBL web service version 2.x resource schema:
[](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4489243/figure/F2/)
*Figure 1:*
"ChEMBL web service schema diagram. The oval shapes represent ChEMBL web service resources and the line between two resources indicates that they share a common attribute. The arrow direction shows where the primary information about a resource type can be found. A dashed line indicates the relationship between two resources behaves differently. For example, the `Image` resource provides a graphical based representation of a `Molecule`."
Figure and description taken from: [<i>Nucleic Acids Res.</i> (2015), <b>43</b>, 612-620](https://academic.oup.com/nar/article/43/W1/W612/2467881).
#### ChEMBL webresource client
* Python client library for accessing ChEMBL data
* Handles interaction with the HTTPS protocol
* Lazy evaluation of results -> reduced number of network requests
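A minimal sketch of how the client is typically queried (the full workflow used in this talktorial is shown in the practical part below):
```
from chembl_webresource_client.new_client import new_client

# Search for targets by name; results are retrieved lazily, page by page
egfr_targets = new_client.target.search('EGFR')
print(egfr_targets[0]['target_chembl_id'])
```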
### Compound activity measures
#### IC50
* [Half maximal inhibitory concentration](https://en.wikipedia.org/wiki/IC50)
* Indicates how much of a particular drug or other substance is needed to inhibit a given biological process by half
[<img src="https://upload.wikimedia.org/wikipedia/commons/8/81/Example_IC50_curve_demonstrating_visually_how_IC50_is_derived.png" width="450" align="center" >](https://commons.wikimedia.org/wiki/File:Example_IC50_curve_demonstrating_visually_how_IC50_is_derived.png)
*Figure 2:* Visual demonstration of how to derive an IC50 value: Arrange data with inhibition on vertical axis and log(concentration) on horizontal axis; then identify max and min inhibition; then the IC50 is the concentration at which the curve passes through the 50% inhibition level.
#### pIC50
* To facilitate the comparison of IC50 values, we define pIC50 values on a logarithmic scale, such that <br />
$ pIC_{50} = -log_{10}(IC_{50}) $ where $ IC_{50}$ is specified in units of M.
* Higher pIC50 values indicate exponentially greater potency of the drug
* pIC50 is given in terms of molar concentration (mol/L or M) <br />
* IC50 should be specified in M to convert to pIC50
* For nM: $pIC_{50} = -log_{10}(IC_{50}*10^{-9})= 9-log_{10}(IC_{50}) $
Besides IC50 and pIC50, other bioactivity measures are used, such as the equilibrium constant [KI](https://en.wikipedia.org/wiki/Equilibrium_constant) and the half maximal effective concentration [EC50](https://en.wikipedia.org/wiki/EC50).
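As a quick numerical sketch of the nM-to-pIC50 conversion used later in this talktorial:
```
import math

def pic50_from_ic50_nM(ic50_nM):
    # pIC50 = -log10(IC50 in M) = 9 - log10(IC50 in nM)
    return 9 - math.log10(ic50_nM)

print(pic50_from_ic50_nM(100))  # an IC50 of 100 nM corresponds to a pIC50 of 7.0
```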
## Practical
In the following, we want to download all molecules that have been tested against our target of interest, the EGFR kinase.
### Connect to ChEMBL database
First, the ChEMBL webresource client as well as other python libraries are imported.
```
from chembl_webresource_client.new_client import new_client
import pandas as pd
import math
from rdkit.Chem import PandasTools
```
Create resource objects for API access.
```
targets = new_client.target
compounds = new_client.molecule
bioactivities = new_client.activity
```
## Target data
* Get UniProt-ID (http://www.uniprot.org/uniprot/P00533) of the target of interest (EGFR kinase) from UniProt website (https://www.uniprot.org/)
* Use UniProt-ID to get target information
* Select a different UniProt-ID if you are interested into another target
```
uniprot_id = 'P00533'
# Get target information from ChEMBL but restrict to specified values only
target_P00533 = targets.get(target_components__accession=uniprot_id) \
.only('target_chembl_id', 'organism', 'pref_name', 'target_type')
print(type(target_P00533))
pd.DataFrame.from_records(target_P00533)
```
### After checking the entries, we select the first entry as our target of interest
`CHEMBL203`: It is a single protein and represents the human Epidermal growth factor receptor (EGFR, also named erbB1)
```
target = target_P00533[0]
target
```
Save selected ChEMBL-ID.
```
chembl_id = target['target_chembl_id']
chembl_id
```
### Bioactivity data
Now, we want to query bioactivity data for the target of interest.
#### Download and filter bioactivities for the target
In this step, we download and filter the bioactivity data and only consider
* human proteins
* bioactivity type IC50
* exact measurements (relation '=')
* binding data (assay type 'B')
```
bioact = bioactivities.filter(target_chembl_id = chembl_id) \
.filter(type = 'IC50') \
.filter(relation = '=') \
.filter(assay_type = 'B') \
.only('activity_id','assay_chembl_id', 'assay_description', 'assay_type', \
'molecule_chembl_id', 'type', 'units', 'relation', 'value', \
'target_chembl_id', 'target_organism')
len(bioact), len(bioact[0]), type(bioact), type(bioact[0])
```
If you experience difficulties to query the ChEMBL database, we provide here a file containing the results for the query in the previous cell (11 April 2019). We do this using the Python package pickle which serializes Python objects so they can be saved to a file, and loaded in a program again later on.
(Learn more about object serialization on [DataCamp](https://www.datacamp.com/community/tutorials/pickle-python-tutorial))
You can load the "pickled" compounds by uncommenting and running the next cell.
```
#import pickle
#bioact = pickle.load(open("../data/T1/EGFR_compounds_from_chembl_query_20190411.p", "rb"))
```
#### Clean and convert bioactivity data
The data is stored as a list of dictionaries
```
bioact[0]
```
Convert to pandas dataframe (this might take some minutes).
```
bioact_df = pd.DataFrame.from_records(bioact)
bioact_df.head(10)
bioact_df.shape
```
Delete entries with missing values.
```
bioact_df = bioact_df.dropna(axis=0, how = 'any')
bioact_df.shape
```
Delete duplicates:
Sometimes the same molecule (`molecule_chembl_id`) has been tested more than once, in this case, we only keep the first one.
```
bioact_df = bioact_df.drop_duplicates('molecule_chembl_id', keep = 'first')
bioact_df.shape
```
We would like to only keep bioactivity data measured in molar units. The following print statements will help us to see what units are contained and to control what is kept after dropping some rows.
```
print(bioact_df.units.unique())
bioact_df = bioact_df.drop(bioact_df.index[~bioact_df.units.str.contains('M')])
print(bioact_df.units.unique())
bioact_df.shape
```
Since we deleted some rows, but we want to iterate over the index later, we reset index to be continuous.
```
bioact_df = bioact_df.reset_index(drop=True)
bioact_df.head()
```
To allow further comparison of the IC50 values, we convert all units to nM. First, we write a helper function, which can be applied to the whole dataframe in the next step.
```
def convert_to_NM(unit, bioactivity):
# c=0
# for i, unit in enumerate(bioact_df.units):
if unit != "nM":
if unit == "pM":
value = float(bioactivity)/1000
elif unit == "10'-11M":
value = float(bioactivity)/100
elif unit == "10'-10M":
value = float(bioactivity)/10
elif unit == "10'-8M":
value = float(bioactivity)*10
elif unit == "10'-1microM" or unit == "10'-7M":
value = float(bioactivity)*100
elif unit == "uM" or unit == "/uM" or unit == "10'-6M":
value = float(bioactivity)*1000
elif unit == "10'1 uM":
value = float(bioactivity)*10000
elif unit == "10'2 uM":
value = float(bioactivity)*100000
elif unit == "mM":
value = float(bioactivity)*1000000
elif unit == "M":
value = float(bioactivity)*1000000000
else:
print ('unit not recognized...', unit)
return value
else: return bioactivity
bioactivity_nM = []
for i, row in bioact_df.iterrows():
bioact_nM = convert_to_NM(row['units'], row['value'])
bioactivity_nM.append(bioact_nM)
bioact_df['value'] = bioactivity_nM
bioact_df['units'] = 'nM'
bioact_df.head()
```
### Compound data
We have a data frame containing all molecules tested (with the respective measure) against EGFR. Now, we want to get the molecules that are stored behind the respective ChEMBL IDs.
#### Get list of compounds
Let's have a look at the compounds from ChEMBL we have defined bioactivity data for. First, we retrieve ChEMBL ID and structures for the compounds with desired bioactivity data.
```
cmpd_id_list = list(bioact_df['molecule_chembl_id'])
compound_list = compounds.filter(molecule_chembl_id__in = cmpd_id_list) \
.only('molecule_chembl_id','molecule_structures')
```
Then, we convert the list to a pandas dataframe and delete duplicates (again, the pandas from_records function might take some time).
```
compound_df = pd.DataFrame.from_records(compound_list)
compound_df = compound_df.drop_duplicates('molecule_chembl_id', keep = 'first')
print(compound_df.shape)
print(bioact_df.shape)
compound_df.head()
```
So far, we have multiple different molecular structure representations. We only want to keep the canonical SMILES.
```
for i, cmpd in compound_df.iterrows():
if compound_df.loc[i, 'molecule_structures'] is not None:
# use .loc[row, col] so the assignment actually modifies compound_df (chained indexing would not)
compound_df.loc[i, 'molecule_structures'] = cmpd['molecule_structures']['canonical_smiles']
print(compound_df.shape)
```
#### Prepare output data
Merge values of interest in one dataframe on ChEMBL-IDs:
* ChEMBL-IDs
* SMILES
* units
* IC50
```
output_df = pd.merge(bioact_df[['molecule_chembl_id','units','value']], compound_df, on='molecule_chembl_id')
print(output_df.shape)
output_df.head()
```
For distinct column names, we rename IC50 and SMILES columns.
```
output_df = output_df.rename(columns= {'molecule_structures':'smiles', 'value':'IC50'})
output_df.shape
```
If we do not have a SMILES representation of a compound, we cannot use it in the following talktorials. Therefore, we delete compounds with a missing SMILES entry.
```
output_df = output_df[~output_df['smiles'].isnull()]
print(output_df.shape)
output_df.head()
```
In the next cell, you see that the low IC50 values are difficult to read. Therefore, we prefer to convert the IC50 values to pIC50.
```
output_df = output_df.reset_index(drop=True)
ic50 = output_df.IC50.astype(float)
print(len(ic50))
print(ic50.head(10))
# Convert IC50 to pIC50 and add pIC50 column:
pIC50 = pd.Series(dtype=float)
i = 0
while i < len(output_df.IC50):
value = 9 - math.log10(ic50[i]) # pIC50=-log10(IC50 mol/l) --> for nM: -log10(IC50*10**-9)= 9-log10(IC50)
if value < 0:
print("Negative pIC50 value at index "+str(i))
pIC50.at[i] = value
i += 1
output_df['pIC50'] = pIC50
output_df.head()
```
### Collected bioactivity data for EGFR
Let's have a look at our collected data set.
#### Draw molecules
In the next steps, we add a molecule column to our datafame and look at the structures of the molecules with the highest pIC50 values.
```
PandasTools.AddMoleculeColumnToFrame(output_df, smilesCol='smiles')
```
Sort molecules by pIC50.
```
output_df.sort_values(by="pIC50", ascending=False, inplace=True)
output_df.reset_index(drop=True, inplace=True)
```
Show the most active molecules = molecules with the highest pIC50 values.
```
output_df.drop("smiles", axis=1).head()
```
#### Write output file
To use the data for the following talktorials, we save the data as csv file. Note that it is advisable to drop the molecule column (only contains an image of the molecules) when saving the data.
```
output_df.drop("ROMol", axis=1).to_csv("../data/T1/EGFR_compounds.csv")
```
## Discussion
In this tutorial, we collected all available bioactivity data for our target of interest from the ChEMBL database. We filtered the data set to only contain molecules with measured IC50 or pIC50 bioactivity values.
Be aware that ChEMBL data originates from various sources. Compound data has been generated in different labs by different people all over the world. Therefore, we have to be cautious with the predictions we make using this dataset. It is always important to consider the source of the data and consistency of data production assays when interpreting the results and determining how much confidence we have in our predictions.
In the next tutorials we will filter our acquired data by the Lipinski rule of five and by unwanted substructures. Another important step would be to clean the data and remove duplicates. As this is not shown in any of our talktorials (yet), we would like to refer to the standardiser library ([github Francis Atkinson](https://github.com/flatkinson/standardiser)) or [MolVS](https://molvs.readthedocs.io/en/latest/) as possible tools for this task.
## Quiz
* We have downloaded in this talktorial molecules and bioactivity data from ChEMBL. What else is the ChEMBL database useful for?
* What is the difference between IC50 and EC50?
* What can we use the data extracted from ChEMBL for?
| github_jupyter |
<a href="https://colab.research.google.com/github/BreakoutMentors/Data-Science-and-Machine-Learning/blob/main/machine_learning/lesson%204%20-%20ML%20Apps/Gradio/EMNIST_Gradio_Tutorial.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Making ML Applications with Gradio
[Gradio](https://www.gradio.app/) is a Python library that provides web interfaces for your models. It is very high-level, which makes it one of the easiest libraries for beginners to learn. Here we use a dataset called [EMNIST](https://pytorch.org/vision/stable/datasets.html#emnist), which extends MNIST (a dataset of handwritten digit images) by adding images of uppercase and lowercase letters, for a total of 62 classes.
Using Gradio, an interface is created at the bottom of this notebook that uses the model trained here to accept our drawings of digits or letters and predict what they are.
## Importing libraries and Installing Gradio using PIP
Google Colab machines do not come with Gradio pre-installed, so it is necessary to install it on the specific machine you are using right now. If you choose another runtime machine, you will need to repeat this step.
**Also, please run this code with a GPU**
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
# Importing PyTorch
import torch
import torch.nn as nn
# Importing torchvision for dataset
import torchvision
import torchvision.transforms as transforms
# Installing gradio using PIP
!pip install gradio
```
## Downloading and Preparing EMNIST Dataset
**Note:** Even though the images in the EMNIST dataset are 28x28, just like the regular MNIST dataset, EMNIST needs extra transforms. Without them, the images appear rotated 90° counter-clockwise and flipped vertically. To undo these two issues, we first rotate each image 90° counter-clockwise and then flip it horizontally.
Here is the image before processing:
<img src="https://raw.githubusercontent.com/BreakoutMentors/Data-Science-and-Machine-Learning/main/machine_learning/lesson%204%20-%20ML%20Apps/images/image_before_processing.jpg" width=200>
Here is the image after processing:
<img src="https://github.com/BreakoutMentors/Data-Science-and-Machine-Learning/blob/main/machine_learning/lesson%204%20-%20ML%20Apps/images/image_after_processing.jpg?raw=true" width=200>
```
# Getting Dataset
!mkdir EMNIST
root = '/content/EMNIST'
# Creating Transforms
transforms = transforms.Compose([
# Rotating image 90 degrees counter-clockwise
transforms.RandomRotation((-90,-90)),
# Flipping images horizontally
transforms.RandomHorizontalFlip(p=1),
# Converting images to tensor
transforms.ToTensor()
])
# Getting dataset
training_dataset = torchvision.datasets.EMNIST(root,
split='byclass',
train=True,
download=True,
transform=transforms)
test_dataset = torchvision.datasets.EMNIST(root,
split='byclass',
train=False,
download=True,
transform=transforms)
# Loading Dataset into dataloaders
batch_size = 2048
training_dataloader = torch.utils.data.DataLoader(training_dataset, batch_size=batch_size, shuffle=True)
test_dataloader = torch.utils.data.DataLoader(test_dataset, batch_size=batch_size, shuffle=False)
# Getting shapes of dataset
print('Shape of the training dataset:', training_dataset.data.shape)
print('Shape of the test dataset:', test_dataset.data.shape)
# Getting reverted class_to_idx dictionary to get classes by idx
idx_to_class = {val:key for key, val in training_dataset.class_to_idx.items()}
# Plotting 5 images with classes
plt.figure(figsize=(10,2))
for i in range(5):
plt.subplot(1,5,i+1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
plt.imshow(training_dataset[i][0].squeeze().numpy(), cmap=plt.cm.binary)
plt.xlabel(idx_to_class[training_dataset[i][1]])
```
## Building the Model
```
class Neural_Network(nn.Module):
# Constructor
def __init__(self, num_classes):
super(Neural_Network, self).__init__()
# Defining Fully-Connected Layers
self.fc1 = nn.Linear(28*28, 392) # 28*28 since each image is 28*28
self.fc2 = nn.Linear(392, 196)
self.fc3 = nn.Linear(196, 98)
self.fc4 = nn.Linear(98, num_classes)
# Activation function
self.relu = nn.ReLU()
def forward(self, x):
# Need to flatten each image in the batch
x = x.flatten(start_dim=1)
# Input it into the Fully connected layers
x = self.relu(self.fc1(x))
x = self.relu(self.fc2(x))
x = self.relu(self.fc3(x))
x = self.fc4(x)
return x
# Getting number of classes
num_classes = len(idx_to_class)
model = Neural_Network(num_classes)
print(model)
```
## Defining Loss Function and Optimizer
```
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
```
## Moving model to GPU
If you have not changed the runtime type to a GPU, please do so now. This helps with the speed of training.
```
# Use GPU if available
device = "cuda" if torch.cuda.is_available() else "cpu"
# Moving model to use GPU
model.to(device)
```
## Training the Model
```
# Function that returns a torch tensor with predictions to compare with labels
def get_preds_from_logits(logits):
# Using softmax to get an array that sums to 1, and then getting the index with the highest value
return torch.nn.functional.softmax(logits, dim=1).argmax(dim=1)
epochs = 10
train_losses = []
train_accuracies = []
for epoch in range(1, epochs+1):
train_loss = 0.0
train_counts = 0
###################
# train the model #
###################
# Setting model to train mode
model.train()
for images, labels in training_dataloader:
# Moving data to GPU if available
images, labels = images.to(device), labels.to(device)
# Setting all gradients to zero
optimizer.zero_grad()
# Calculate Output
output = model(images)
# Calculate Loss
loss = criterion(output, labels)
# Calculate Gradients
loss.backward()
# Perform Gradient Descent Step
optimizer.step()
# Saving loss
train_loss += loss.item()
# Get Predictions
train_preds = get_preds_from_logits(output)
# Saving number of right predictions for accuracy
train_counts += train_preds.eq(labels).sum().item()
# Averaging and Saving Losses
train_loss/=len(training_dataset)
train_losses.append(train_loss)
# Getting accuracies and saving them
train_acc = train_counts/len(training_dataset)
train_accuracies.append(train_acc)
print('Epoch: {} \tTraining Loss: {:.6f} \tTraining Accuracy: {:.2f}%'.format(epoch, train_loss, train_acc*100))
plt.plot(train_losses)
plt.xlabel('epoch')
plt.ylabel('Mean Squared Error')
plt.title('Training Loss')
plt.show()
plt.plot(train_accuracies)
plt.xlabel('epoch')
plt.ylabel('Accuracy')
plt.title('Training Accuracy')
plt.show()
```
## Evaluating the model
Here we will display the test loss and accuracy and examples of images that were misclassified.
```
test_loss = 0.0
test_counts = 0
# Setting model to evaluation mode, no parameters will change
model.eval()
for images, labels in test_dataloader:
# Moving to GPU if available
images, labels = images.to(device), labels.to(device)
# Calculate Output
output = model(images)
# Calculate Loss
loss = criterion(output, labels)
# Saving loss
test_loss += loss.item()
# Get Predictions
test_preds = get_preds_from_logits(output)
# Saving number of right predictions for accuracy
test_counts += test_preds.eq(labels).sum().item()
# Calculating test accuracy
test_acc = test_counts/len(test_dataset)
print('Test Loss: {:.6f} \tTest Accuracy: {:.2f}%'.format(test_loss, test_acc*100))
import torchvision.transforms as transforms
# Have to another set of transforms to rotate and flip testing data
test_transforms = transforms.Compose([
# Rotating image 90 degrees counter-clockwise
transforms.RandomRotation((-90,-90)),
# Flipping images horizontally
transforms.RandomHorizontalFlip(p=1)
])
# Transforming the data and normalizing them
test_images = test_transforms(test_dataset.data).to(device)/255
# Getting Predictions
predictions = get_preds_from_logits(model(test_images))
# Getting Labels
test_labels = test_dataset.targets.to(device)
# Getting misclassified booleans
correct_bools = test_labels.eq(predictions)
misclassified_indices = []
for i in range(len(correct_bools)):
if correct_bools[i] == False:
misclassified_indices.append(i)
# Plotting 5 misclassified images
plt.figure(figsize=(10,2))
for i in range(5):
idx = misclassified_indices[i]
plt.subplot(1,5,i+1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
plt.imshow(test_images[idx].squeeze().cpu().numpy(), cmap=plt.cm.binary)
true_label = idx_to_class[test_labels[idx].item()]
pred_label = idx_to_class[predictions[idx].item()]
plt.xlabel(f'True: {true_label}, Pred: {pred_label}')
```
# How to use Gradio
There are three parts of using Gradio
1. Define a function that takes input and returns your model's output
2. Define what type of input the interface will use
3. Define what type of output the interface will give
The function `recognize_image` takes a 28x28 image that is not yet normalized and returns a dictionary with the keys being the classes and the values being the probabilities for that class.
The class [`gradio.inputs.Image`](https://www.gradio.app/docs#i_image) is used as the input that provides a window in the Gradio interface, but there are many customizations you can provide.
These are some the parameters:
1. shape - (width, height) shape to crop and resize image to; if None, matches input image size.
2. image_mode - "RGB" if color, or "L" if black and white.
3. invert_colors - whether to invert the image as a preprocessing step.
4. source - Source of image. "upload" creates a box where user can drop an image file, "webcam" allows user to take snapshot from their webcam, "canvas" defaults to a white image that can be edited and drawn upon with tools.
The class [gradio.outputs.Label](https://www.gradio.app/docs#o_label) is used as the output, which provides probabilities to the interface for the purpose of displaying them.
These are the parameters:
1. num_top_classes - number of most confident classes to show.
2. type - Type of value to be passed to component. "value" expects a single out label, "confidences" expects a dictionary mapping labels to confidence scores, "auto" detects return type.
3. label - component name in interface.
The interface class [gradio.Interface](https://www.gradio.app/docs#interface) is responsible for creating the interface that ties the input and output components together. Its `.launch()` method launches the interface in this notebook after it is compiled.
These are the parameters used in this interface:
1. fn - the function to wrap an interface around.
2. inputs - a single Gradio input component, or list of Gradio input components. Components can either be passed as instantiated objects, or referred to by their string shortcuts. The number of input components should match the number of parameters in fn.
3. outputs - a single Gradio output component, or list of Gradio output components. Components can either be passed as instantiated objects, or referred to by their string shortcuts. The number of output components should match the number of values returned by fn.
4. title - a title for the interface; if provided, appears above the input and output components.
5. description - a description for the interface; if provided, appears above the input and output components.
6. live - whether the interface should automatically reload on change.
7. interpretation - function that provides interpretation explaining prediction output. Pass "default" to use built-in interpreter.
I encourage you to view the [documentation](https://www.gradio.app/docs) for the interface, inputs and outputs; you can find all the information you need there. It is helpful to refer to the documentation to understand other parameters that are not used in this lesson.
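Before the full EMNIST app below, here is a minimal, self-contained sketch of the three parts with a trivial function (the component names follow the same pre-3.0 Gradio API used in this notebook):
```
import gradio as gr

def greet(name):
    return f"Hello, {name}!"

demo = gr.Interface(fn=greet,
                    inputs=gr.inputs.Textbox(label="Your name"),
                    outputs=gr.outputs.Textbox(label="Greeting"))
# demo.launch() would start the web app, just like iface.launch() does below
```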
```
import gradio
import gradio as gr
# Function that returns a torch tensor with predictions to compare with labels
def get_probs_from_logits(logits):
# Using softmax to get probabilities from the logits
return torch.nn.functional.softmax(logits, dim=1)
# Function that takes the img drawn in the Gradio interface, then gives probabilities
def recognize_image(img):
# Normalizes inputted image and converts it to a tensor for the model
img = torch.tensor(img/255, dtype=torch.float).unsqueeze(dim=0).to(device)
# Getting output
output = model(img)
# Getting probabilites of the image
probabilities = get_probs_from_logits(output).flatten()
# Returns a dictionary with the key being the class and val being the probability
probabilities_dict = {idx_to_class[i]:probabilities[i].item() for i in range(num_classes)}
return probabilities_dict
im = gradio.inputs.Image(shape=(28, 28),
image_mode='L',
invert_colors=True,
source="canvas")
title = "Number and Letter Classifier App"
description = """This app is able to guess the number or letter you draw below.
The ML model was trained on the EMNIST dataset, please use below!"""
iface = gr.Interface(fn=recognize_image,
inputs=im,
outputs=gradio.outputs.Label(num_top_classes=5),
title=title,
description=description,
live=True,
interpretation="default")
iface.launch()
```
# What's next?
The next challenge will cover pretrained models, which are models that have already been trained for us and give us the ability to make predictions without training anything ourselves. You will create another Gradio app that uses pretrained models to classify images.
| github_jupyter |
# Downloading the dataset from Kaggle
```
# installing the kaggle lib
!pip install --upgrade kaggle
!pip install plotly
# for visualizing missing data
!pip install missingno
# requesting the upload of the Kaggle authentication token
# NOTE: the kaggle.json file needs to be downloaded from your personal Kaggle account.
from google.colab import files
uploaded = files.upload()
for fn in uploaded.keys():
print('User uploaded file "{name}" with length {length} bytes'.format(name=fn, length=len(uploaded[fn]) ))
# placing the kaggle.json file in its proper location and granting read/write permissions on it
!mkdir -p ~/.kaggle
!mv kaggle.json ~/.kaggle
!chmod 600 ~/.kaggle/kaggle.json
!kaggle datasets download -d gpreda/covid-world-vaccination-progress
!!unzip covid-world-vaccination-progress.zip -d data_folder
```
# The exploratory analysis code itself
```
import pandas as pd
import matplotlib.pyplot as plt
import missingno as msno
import plotly.graph_objects as go
import matplotlib.ticker as ticker
def gera_lista_vacinas(dataset):
'''
Generates a list with every vaccine in the dataset
input: data DataFrame
output: list of all vaccines
'''
todas_vacinas = list(dataset.groupby(['vacinas']).count().index)
conjunto_vacinas = set()
for lista_vacinas in todas_vacinas:
lista_vacinas = lista_vacinas.split(', ')
for vacina in lista_vacinas:
conjunto_vacinas.add(vacina)
lista_vacinas = list(conjunto_vacinas)
lista_vacinas.sort()
return lista_vacinas
def gera_lista_paises(dataset):
'''
Generates the list of countries that are vaccinating
input: data DataFrame
output: list of all countries
'''
return list(dataset.groupby(['pais']).count().index)
def gera_dataframe_vacinas(dataset):
'''
Generates a new DataFrame in which the vaccines previously listed in the 'vacinas' column
are spread across 10 columns, one per vaccine, filled with 0's and 1's.
A 1 means that vaccine is being applied in that country, a 0 means it is not!
input: data DataFrame
output: DataFrame with the vaccine data one-hot encoded
'''
labels_vacinas = gera_lista_vacinas(dataset) # list of vaccines, used as column labels
dataset_vacinas = dataset['vacinas']
array_temporario_vacinas = [] # starts as an empty list
for linha_vacina in dataset_vacinas:
sublista_vacinas = linha_vacina.split(', ')
# list of length len(labels_vacinas) with 1's for vaccines present in sublista_vacinas and 0's otherwise
nova_linha = [int(vacina in sublista_vacinas) for vacina in labels_vacinas]
array_temporario_vacinas.append(nova_linha)
dataset_temporario_vacinas = pd.DataFrame(array_temporario_vacinas, columns=labels_vacinas)
dataset.drop(columns=['vacinas'], axis=1, inplace=True)
dataset = pd.concat([dataset, dataset_temporario_vacinas], axis=1)
return dataset
dataset = pd.read_csv(r'data_folder/country_vaccinations.csv')
nome_colunas = ['pais', 'codigo_iso', 'data', 'total_vacinacoes', 'pessoas_vacinadas',
'pessoas_tot_vacinadas', 'vacinacoes_diarias_raw', 'vacinacoes_diarias',
'tot_vacinacoes_por_cent', 'pessoas_vacinadas_por_cent', 'pessoas_tot_vacinadas_por_cent',
'vacinacoes_diarias_por_milhao', 'vacinas', 'fonte_dados', 'website_fonte']
nome_colunas_antigo = list(dataset.columns)
dataset.rename(columns=dict(zip(nome_colunas_antigo, nome_colunas)), inplace=True)
dataset.head()
# DATAFRAME WITH THE VACCINE INFO
# (fix: lista_vacinas was never defined and the per-vaccine 0/1 columns were never built,
# so we create both here before aggregating by country)
lista_vacinas = gera_lista_vacinas(dataset)
freq_vacinas = gera_dataframe_vacinas(dataset.copy()).groupby('pais').max()
demais_colunas = [coluna for coluna in nome_colunas if coluna not in lista_vacinas and coluna not in ['pais', 'vacinas']]
freq_vacinas.drop(columns=demais_colunas, axis=1, inplace=True)
# for the bar plot of vaccines x num_paises
densidade_vacinas = pd.DataFrame(freq_vacinas.sum(), columns=['num_paises'])
# BARPLOT DAS VACINAS
fig_disposicao_vacinas = plt.figure(figsize = (20, 10))
plt.title('Número de países que utilizam as vacinas', fontsize=18)
y_label = densidade_vacinas.index
x_label = densidade_vacinas['num_paises'].values
plt.bar(y_label, x_label)
plt.grid()
for i in range(len(x_label)):
plt.annotate(str(x_label[i]), xy=(y_label[i], x_label[i]), ha='center', va='bottom', fontsize=14)
plt.show()
# missing data across the entire dataset
msno.matrix(dataset)
# Let's visualize the distribution of missing data PER COUNTRY
from math import floor
# if this breaks, it is possible that a new country has been added!
num_rows = 25
num_columns = 6
fig, axarr = plt.subplots(num_rows, num_columns, figsize=(24, 90))
lista_paises = gera_lista_paises(dataset)
for pais in enumerate(lista_paises):
# extracting the country's name and index
num_pais = pais[0]
nome_pais = pais[1]
# defining where in the subplot grid this country will be plotted
x_plot = floor(num_pais/num_columns)
y_plot = num_pais % num_columns
axarr[x_plot][y_plot].set_title(nome_pais)
msno.matrix(dataset[dataset['pais'] == nome_pais], ax=axarr[x_plot][y_plot], labels=False)
dataset.describe()
```
# Code for building the charts and maps
```
groupby_country = dataset.groupby(['pais'])
listof_dataframe_countries = []
for idx, group in enumerate(groupby_country):
listof_dataframe_countries.append(group)
total_vac_top_countries = pd.DataFrame()
# total_vacinacoes pessoas_vacinadas pessoas_tot_vacinadas
for i in range(len(listof_dataframe_countries)):
country_df = listof_dataframe_countries[i][1]
filtered_df = country_df[country_df['total_vacinacoes'].notna()]
latest_day_data = filtered_df.iloc[-1:]
total_vac_top_countries = total_vac_top_countries.append(latest_day_data, ignore_index=True)
total_vac_top_countries = total_vac_top_countries.sort_values(by=['total_vacinacoes'], ascending=False)
fig, axes = plt.subplots(nrows=2, ncols=5)
i = 0
j = 0
for pais in total_vac_top_countries.head(10).iterrows():
country = dataset[dataset['pais'] == pais[1]['pais']]
filtered = country[country['total_vacinacoes'].notna()].reset_index()
fig2 = filtered[['total_vacinacoes','pessoas_vacinadas','pessoas_tot_vacinadas']].plot(title=pais[1]['pais'], ax=axes[j][i], grid=True)
fig2.yaxis.set_major_formatter(ticker.EngFormatter())
i+=1
if(i%5 == 0):
j+=1
i=0
plt.show()
fig, axes = plt.subplots(nrows=2, ncols=5)
i = 0
j = 0
for pais in total_vac_top_countries.head(10).iterrows():
country = dataset[dataset['pais'] == pais[1]['pais']]
filtered = country[country['tot_vacinacoes_por_cent'].notna()].reset_index()
fig2 = filtered[['tot_vacinacoes_por_cent','pessoas_vacinadas_por_cent','pessoas_tot_vacinadas_por_cent']].plot(title=pais[1]['pais'], ax=axes[j][i], grid=True)
fig2.yaxis.set_major_formatter(ticker.PercentFormatter())
fig2.set_ylim(0, 100)
fig2.legend(('Total doses', 'Pessoas vacinadas', 'Pessoas imunizadas'))
i+=1
if(i%5 == 0):
j+=1
i=0
plt.show()
for i in range(len(listof_dataframe_countries)):
country_name = listof_dataframe_countries[i][0]
if(country_name in ["United States", "Austria", "Brazil", "United Kingdom"]):
country_df = listof_dataframe_countries[i][1]
filtered_df = country_df[country_df['total_vacinacoes'].notna()]
filtered_df[['total_vacinacoes','pessoas_vacinadas','pessoas_tot_vacinadas']].plot(title=country_name)
plt.show()
df = pd.DataFrame()
for i in range(len(listof_dataframe_countries)):
country_name = listof_dataframe_countries[i][0]
country_df = listof_dataframe_countries[i][1]
filtered_df = country_df[country_df['pessoas_vacinadas_por_cent'].notna()]
latest_day_data = filtered_df.iloc[-1:]
df = df.append(latest_day_data, ignore_index=True)
df.to_csv('./pessoas_vacinadas_por_cent.csv')
fig_pessoas_vacinadas = go.Figure(data=go.Choropleth(
locations = df['codigo_iso'],
z = df['pessoas_vacinadas_por_cent'],
text = df['pais'],
colorscale = 'YlGnBu',
autocolorscale=False,
marker_line_width=0.5,
colorbar_title = '% pessoas<br>vacinadas',
))
config = {
'modeBarButtonsToRemove': ['lasso2d','zoomInGeo','zoomOutGeo']
}
fig_pessoas_vacinadas.update_layout(
title_text='Covid-19 World Vaccination - Porcentagem de pessoas que tomaram pelo menos uma dose da vacina',
geo=dict(
showframe=False,
showcoastlines=False,
projection_type='equirectangular'
)
)
fig_pessoas_vacinadas.data[0].update(zmin=0, zmax=60)
fig_pessoas_vacinadas.show(config=config)
df2 = pd.DataFrame()
for i in range(len(listof_dataframe_countries)):
country_name = listof_dataframe_countries[i][0]
country_df = listof_dataframe_countries[i][1]
filtered_df = country_df[country_df['total_vacinacoes'].notna()]
latest_day_data = filtered_df.iloc[-1:]
df2 = df2.append(latest_day_data, ignore_index=True)
df2.to_csv('./total_vacinacoes.csv')
fig_total_doses = go.Figure(data=go.Choropleth(
locations = df2['codigo_iso'],
z = df2['total_vacinacoes'],
text = df2['pais'],
colorscale = 'Blues',
autocolorscale=False,
marker_line_width=0.5,
colorbar_title = 'Total<br>vacinas<br>(milhões)',
))
config = {
'modeBarButtonsToRemove': ['lasso2d','zoomInGeo','zoomOutGeo']
}
fig_total_doses.update_layout(
title_text='Covid-19 World Vaccination - Total de doses aplicadas',
geo=dict(
showframe=False,
showcoastlines=False,
projection_type='equirectangular'
)
)
fig_total_doses.show(config=config)
df3 = pd.DataFrame()
for i in range(len(listof_dataframe_countries)):
country_name = listof_dataframe_countries[i][0]
country_df = listof_dataframe_countries[i][1]
filtered_df = country_df[country_df['vacinacoes_diarias_por_milhao'].notna()]
latest_day_data = filtered_df.iloc[-1:]
df3 = df3.append(latest_day_data, ignore_index=True)
df3.to_csv('./vac_diarias_milhao.csv')
fig_vac_diarias_milhao = go.Figure(data=go.Choropleth(
locations = df3['codigo_iso'],
z = df3['vacinacoes_diarias_por_milhao'],
text = df3['pais'],
colorscale = 'YlGnBu',
autocolorscale=False,
reversescale=False,
marker_line_width=0.5,
colorbar_title = 'vacinações<br>diárias<br>p/ milhão',
))
config = {
'modeBarButtonsToRemove': ['lasso2d','zoomInGeo','zoomOutGeo']
}
fig_vac_diarias_milhao.update_layout(
title_text='Covid-19 World Vaccination - Vacinações diárias por milhão',
geo=dict(
showframe=False,
showcoastlines=False,
projection_type='equirectangular'
)
)
fig_vac_diarias_milhao.data[0].update(zmin=500, zmax=15000)
fig_vac_diarias_milhao.show(config=config)
```
| github_jupyter |
# Backprop Core Example: Text Summarisation
Text summarisation takes a chunk of text and extracts the key information.
```
# Set your API key to do inference on Backprop's platform
# Leave as None to run locally
api_key = None
import backprop
summarisation = backprop.Summarisation(api_key=api_key)
# Change this up.
input_text = """
Britain began its third COVID-19 lockdown on Tuesday with the government calling for one last major national effort to defeat the spread of a virus that has infected an estimated one in 50 citizens before mass vaccinations turn the tide.
Finance minister Rishi Sunak announced a new package of business grants worth 4.6 billion pounds ($6.2 billion) to help keep people in jobs and firms afloat until measures are relaxed gradually, at the earliest from mid-February but likely later.
Britain has been among the countries worst-hit by COVID-19, with the second highest death toll in Europe and an economy that suffered the sharpest contraction of any in the Group of Seven during the first wave of infections last spring.
Prime Minister Boris Johnson said the latest data showed 2% of the population were currently infected - more than a million people in England.
“When everybody looks at the position, people understand overwhelmingly that we have no choice,” he told a news conference.
More than 1.3 million people in Britain have already received their first dose of a COVID-19 vaccination, but this is not enough to have an impact on transmission yet.
Johnson announced the new lockdown late on Monday, saying the highly contagious new coronavirus variant first identified in Britain was spreading so fast the National Health Service risked being overwhelmed within 21 days.
In England alone, some 27,000 people are in hospital with COVID, 40% more than during the first peak in April, with infection numbers expected to rise further after increased socialising during the Christmas period.
Since the start of the pandemic, more than 75,000 people have died in the United Kingdom within 28 days of testing positive for coronavirus, according to official figures. The number of daily new infections passed 60,000 for the first time on Tuesday.
A Savanta-ComRes poll taken just after Johnson’s address suggested four in five adults in England supported the lockdown.
“I definitely think it was the right decision to make,” said Londoner Kaitlin Colucci, 28. “I just hope that everyone doesn’t struggle too much with having to be indoors again.”
Downing Street said Johnson had cancelled a visit to India later this month to focus on the response to the virus, and Buckingham Palace called off its traditional summer garden parties this year.
Under the new rules in England, schools are closed to most pupils, people should work from home if possible, and all hospitality and non-essential shops are closed. Semi-autonomous executives in Scotland, Wales and Northern Ireland have imposed similar measures.
As infection rates soar across Europe, other countries are also clamping down on public life. Germany is set to extend its strict lockdown until the end of the month, and Italy will keep nationwide restrictions in place this weekend while relaxing curbs on weekdays.
Sunak’s latest package of grants adds to the eye-watering 280 billion pounds in UK government support already announced for this financial year to stave off total economic collapse.
The new lockdown is likely to cause the economy to shrink again, though not as much as during the first lockdown last spring. JP Morgan economist Allan Monks said he expected the economy to shrink by 2.5% in the first quarter of 2021 -- compared with almost 20% in the second quarter of 2020.
To end the cycle of lockdowns, the government is pinning its hopes on vaccines. It aims to vaccinate all elderly care home residents and their carers, everyone over the age of 70, all frontline health and social care workers, and everyone who is clinically extremely vulnerable by mid-February.
"""
summary = summarisation(input_text)
print(summary)
```
| github_jupyter |
```
%pylab --no-import-all
%matplotlib inline
import PyDSTool as pdt
ab = np.loadtxt('birdsynth/test/ba_example_ab.dat')
#ab = np.zeros((40000, 2))
ab[:, 0] += np.random.normal(0, 0.01, len(ab))
t_mom = np.linspace(0, len(ab)/44100, len(ab))
inputs = pdt.pointset_to_traj(pdt.Pointset(coorddict={'a': ab[:, 1], 'b':ab[:, 0]}, indepvardict={'t': t_mom}))
```
# Jacobian calculation
```
x = pdt.Var('x')
y = pdt.Var('y')
gm = pdt.Par('gm')
a = pdt.Par('a')
b = pdt.Par('b')
t = pdt.Var('t')
xdot = pdt.Fun(y, [y], 'xdot')
ydot = pdt.Fun(-a*gm*gm - b*gm*gm*x -gm*gm*x*x*x -gm*x*x*y + gm*gm*x*x - gm*x*y, [x, y], 'ydot')
F = pdt.Fun([xdot(y), ydot(x, y)], [x,y], 'F')
jac = pdt.Fun(pdt.Diff(F, [x, y]), [t, x, y], 'Jacobian')
jac.simplify()
print(jac.eval(t=t, x=x, y=y))
```
# Simple model
```
icdict = {'x': 0, 'y': 0}
pardict = {
'gm': 2 # g is γ in Boari 2015
}
vardict = {
'x': xdot(y),
'y': ydot(x,y),
}
args = pdt.args()
args.name = 'birdsynth'
args.fnspecs = [jac, xdot, ydot]
args.ics = icdict
args.pars = pardict
args.inputs = inputs
args.tdata = [0, 1]
args.varspecs = vardict
ds = pdt.Generator.Vode_ODEsystem(args)
ds.haveJacobian()
traj = ds.compute('demo')
plt.plot(traj.sample(dt=1/(44100*20))['x'])
auxdict = {'Pi':(['t', 'x', 'a_'], 'if(t > 0, a_ * x - r * 1, 0)'),
'Pt':(['t', 'x', 'a_'], '(1 - r) * Pi(t - 0.5 * T, x, a_)')
}
icdict = {'x': 0, 'y': 0, 'o1':0, 'i1':0, 'i3':0}
pardict = {'g': 2400, # g is γ in Boari 2015
'T': 0.2,
'r': 0.1,
'a_p': -540e6,
'b_p': -7800,
'c_p': 1.8e8,
'd_p': 1.2e-2,
'e_p': 7.2e-1,
'f_p': -0.83e-2,
'g_p': -5e2,
'h_p': 1e-4
}
vardict = {'x': 'y',
'y': '-a*Pow(g, 2) - b * Pow(g, 2) * x - Pow(g, 2) * Pow(x, 3) - g * Pow(x, 2) * y + Pow(g, 2) * x * x'
'- g * x * y',
'i1': 'o1',
'o1': 'a_p * i1 + b_p * o1 + c_p * i3 + d_p * Pt(t, x, a) + e_p * Pt(t, x, a)',
'i3': 'f_p * o1 + g_p * i3 + h_p * Pt(t, x, a)'
}
args = pdt.args()
args.name = 'birdsynth'
args.ics = icdict
args.pars = pardict
args.fnspecs = auxdict
args.inputs = inputs
args.tdata = [0, len(ab)/44100]
args.varspecs = vardict
ds = pdt.Generator.Vode_ODEsystem(args)
traj = ds.compute('demo')
pts = traj.sample(dt=1/(44100))
plt.plot(pts['t'], pts['x'])
x = ds.variables['x']
y_0 = pdt.Var('-a*Pow(g, 2) - b * Pow(g, 2) * x - Pow(g, 2) * Pow(x, 3) - g * Pow(x, 2) * y + Pow(g, 2) * x * x'
'- g * x * y', 'y_0')
Pi(2)
```
| github_jupyter |
# Bayesian Hierarchical Modeling
This jupyter notebook accompanies the Bayesian Hierarchical Modeling lecture(s) delivered by Stephen Feeney as part of David Hogg's [Computational Data Analysis class](http://dwh.gg/FlatironCDA). As part of the lecture(s) you will be asked to complete a number of tasks, some of which will involve direct coding into the notebook; these sections are marked by task. This notebook requires numpy, matplotlib, scipy, [corner](https://github.com/sfeeney/bhm_lecture.git), [pystan](https://pystan.readthedocs.io/en/latest/getting_started.html) and pickle to run (the last two are required solely for the final task).
The model we're going to be inferring is below.
<img src="bhm_plot.png" alt="drawing" width="500"/>
We start with imports...
```
from __future__ import print_function
# make sure everything we need is installed if running on Google Colab
def is_colab():
try:
cfg = get_ipython().config
if cfg['IPKernelApp']['kernel_class'] == 'google.colab._kernel.Kernel':
return True
else:
return False
except NameError:
return False
if is_colab():
!pip install --quiet numpy matplotlib scipy corner pystan
import numpy as np
import numpy.random as npr
import matplotlib.pyplot as mp
%matplotlib inline
```
... and immediately move to...
## Task 2
In which I ask you to write a Python function to generate a simulated Cepheid sample using the period-luminosity relation $m_{ij} = \mu_i + M^* + s\,\log p_{ij} + \epsilon(\sigma_{\rm int})$. For simplicity, assume Gaussian priors on everything, Gaussian intrinsic scatter and Gaussian measurement uncertainties. Assume only the first host has a distance modulus estimate.
```
# setup
n_gal = 2
n_star = 200
n_samples = 50000
# PL relation parameters
abs_bar = -26.0 # mean of standard absolute magnitude prior
abs_sig = 4.0 # std dev of standard absolute magnitude prior
s_bar = -1.0 # mean of slope prior
s_sig = 1.0 # std dev of slope prior
mu_bar = 30.0 # mean of distance modulus prior
mu_sig = 5.0 # std dev of distance modulus prior
m_sig_int = 0.05 # intrinsic scatter, assumed known
# uncertainties
mu_hat_sig = 0.01 # distance modulus measurement uncertainty
m_hat_sig = 0.02 # apparent magnitude measurement uncertainty
def simulate(n_gal, n_star, abs_bar, abs_sig, s_bar, s_sig, mu_bar, mu_sig, mu_hat_sig, m_sig_int, m_hat_sig):
# draw CPL parameters from Gaussian prior with means abs_bar and s_bar and standard deviations
# abs_sig and s_sig
#abs_true = abs_bar
#s_true = s_bar
abs_true = abs_bar + npr.randn() * abs_sig
s_true = s_bar + npr.randn() * s_sig
# draw n_gal distance moduli from Gaussian prior with mean mu_bar and standard deviation mu_sig
# i've chosen to sort here so the closest galaxy is the one with the measured distance modulus
mu_true = np.sort(mu_bar + npr.randn(n_gal) * mu_sig)
# measure ONLY ONE galaxy's distance modulus noisily. the noise here is assumed Gaussian with
# zero mean and standard deviation mu_hat_sig
mu_hat = mu_true[0] + npr.randn() * mu_hat_sig
# draw log periods. these are assumed to be perfectly observed in this model, so they
# are simply a set of pre-specified numbers. i have chosen to generate new values with
# each simulation, drawn such that log-periods are uniformly drawn in the range 1-2 (i.e.,
# 10 to 100 days). you can have these for free!
lp_true = 1.0 + npr.rand(n_gal, n_star)
# draw true apparent magnitudes. these are distributed around the Cepheid period-luminosity
# relation with Gaussian intrinsic scatter (mean 0, standard deviation m_sig_int)
m_true = np.zeros((n_gal, n_star))
for i in range(n_gal):
m_true[i, :] = mu_true[i] + abs_true + s_true * lp_true[i, :] + npr.randn(n_star) * m_sig_int
# measure the apparent magnitudes noisily, all with the same measurement uncertainty m_hat_sig
m_hat = m_true + npr.randn(n_gal, n_star) * m_hat_sig
# return!
return (abs_true, s_true, mu_true, lp_true, m_true, mu_hat, m_hat)
```
Let's check that the simulation generates something sane. A simple test is to confirm that the magnitude measurement errors are correctly generated.
```
# simulate
abs_true, s_true, mu_true, lp_true, m_true, mu_hat, m_hat = \
simulate(n_gal, n_star, abs_bar, abs_sig, s_bar, s_sig, mu_bar, mu_sig, mu_hat_sig, m_sig_int, m_hat_sig)
# plot difference between true and observed apparent magnitudes. this should be the
# noise, which is Gaussian distributed with mean zero and std dev m_hat_sig
outs = mp.hist((m_true - m_hat).flatten())
dm_grid = np.linspace(np.min(outs[1]), np.max(outs[1]))
mp.plot(dm_grid, np.exp(-0.5 * (dm_grid/m_hat_sig) ** 2) * np.max(outs[0]))
mp.xlabel(r'$m_{ij} - \hat{m}_{ij}$')
mp.ylabel(r'$N \left(m_{ij} - \hat{m}_{ij}\right)$')
```
And another test that the intrinsic scatter is added as expected.
```
# plot difference between true apparent magnitudes and expected apparent
# magnitude given a perfect (i.e., intrinsic-scatter-free) period-luminosity
# relation. this should be the intrinsic scatter, which is Gaussian-
# distributed with mean zero and std dev m_sig_int
eps = np.zeros((n_gal, n_star))
for i in range(n_gal):
eps[i, :] = mu_true[i] + abs_true + s_true * lp_true[i, :] - m_true[i, :]
outs = mp.hist(eps.flatten())
dm_grid = np.linspace(np.min(outs[1]), np.max(outs[1]))
mp.plot(dm_grid, np.exp(-0.5 * (dm_grid/m_sig_int) ** 2) * np.max(outs[0]))
mp.xlabel(r'$m_{ij} - \hat{m}_{ij}$')
mp.ylabel(r'$N \left(m_{ij} - \hat{m}_{ij}\right)$')
```
## Generalized Least Squares Demo
Coding up the [GLS estimator](https://en.wikipedia.org/wiki/Generalized_least_squares) is a little involved, so I've done it for you below. Note that, rather unhelpfully, I've done so in a different order than in the notes. When I get a chance I will re-write. For now, you can simply evaluate the cells and bask in the glory of the fastest inference you will ever do!
```
def gls_fit(n_gal, n_star, mu_hat, mu_hat_sig, m_hat, m_sig_int, m_hat_sig, \
lp_true, priors=None):
# setup
# n_obs is one anchor constraint and one magnitude per Cepheid.
# n_par is one mu per Cepheid host and 2 CPL params. if priors
# are used, we add on n_gal + 2 observations: one prior constraint
# on each host distance modulus and CPL parameter
n_obs = n_gal * n_star + 1
n_par = n_gal + 2
if priors is not None:
n_obs += n_gal + 2
data = np.zeros(n_obs)
design = np.zeros((n_obs, n_par))
cov_inv = np.zeros((n_obs, n_obs))
# anchor
data[0] = mu_hat
design[0, 0] = 1.0
cov_inv[0, 0] = 1.0 / mu_hat_sig ** 2
# Cepheids
k = 1
for i in range(0, n_gal):
for j in range(0, n_star):
data[k] = m_hat[i, j]
design[k, i] = 1.0
design[k, n_gal] = 1.0
design[k, n_gal + 1] = lp_true[i, j]
cov_inv[k, k] = 1.0 / (m_hat_sig ** 2 + m_sig_int ** 2)
k += 1
# and, finally, priors if desired
if priors is not None:
abs_bar, abs_sig, s_bar, s_sig, mu_bar, mu_sig = priors
for i in range(n_gal):
data[k] = mu_bar
design[k, i] = 1.0
cov_inv[k, k] = 1.0 / mu_sig ** 2
k += 1
data[k] = abs_bar
design[k, n_gal] = 1.0
cov_inv[k, k] = 1.0 / abs_sig ** 2
k += 1
data[k] = s_bar
design[k, n_gal + 1] = 1.0
cov_inv[k, k] = 1.0 / s_sig ** 2
k += 1
# fit and return
destci = np.dot(design.transpose(), cov_inv)
pars_cov = np.linalg.inv(np.dot(destci, design))
pars = np.dot(np.dot(pars_cov, destci), data)
res = data - np.dot(design, pars)
dof = n_obs - n_par
chisq_dof = np.dot(res.transpose(), np.dot(cov_inv, res))
return pars, pars_cov, chisq_dof
gls_pars, gls_pars_cov, gls_chisq = gls_fit(n_gal, n_star, mu_hat, mu_hat_sig, m_hat, \
m_sig_int, m_hat_sig, lp_true, \
priors=[abs_bar, abs_sig, s_bar, s_sig, mu_bar, mu_sig])
```
In order to plot the outputs of the GLS fit we could draw a large number of samples from the resulting multivariate Gaussian posterior and pass them to something like [`corner`](https://corner.readthedocs.io/en/latest/); however, as we have analytic results we might as well use those directly. I've coded up something totally hacky here in order to do so. Information on how to draw confidence ellipses can be found in [Dan Coe's note](https://arxiv.org/pdf/0906.4123.pdf).
```
# this is a hacky function designed to transform the analytic GLS outputs
# into a corner.py style triangle plot, containing 1D and 2D marginalized
# posteriors
import scipy.stats as sps
import matplotlib.patches as mpp
def schmorner(par_mean, par_cov, par_true, par_label):
# setup
par_std = np.sqrt(np.diag(par_cov))
x_min = par_mean[0] - 3.5 * par_std[0]
x_max = par_mean[0] + 3.5 * par_std[0]
y_min = par_mean[1] - 3.5 * par_std[1]
y_max = par_mean[1] + 3.5 * par_std[1]
fig, axes = mp.subplots(2, 2)
# 1D marge
x = np.linspace(x_min, x_max, 100)
axes[0, 0].plot(x, sps.norm.pdf(x, par_mean[0], par_std[0]), 'k')
axes[0, 0].axvline(par_true[0])
axes[1, 0].axvline(par_true[0])
axes[0, 0].set_xticklabels([])
axes[0, 0].set_yticklabels([])
axes[0, 0].set_xlim(x_min, x_max)
axes[0, 0].set_title(par_label[0])
axes[0, 0].set_title(par_label[0] + r'$=' + '{:6.2f}'.format(par_mean[0]) + \
r'\pm' + '{:4.2f}'.format(par_std[0]) + r'$')
y = np.linspace(y_min, y_max, 100)
axes[1, 1].plot(y, sps.norm.pdf(y, par_mean[1], par_std[1]), 'k')
axes[1, 0].axhline(par_true[1])
axes[1, 1].axvline(par_true[1])
axes[1, 1].tick_params(labelleft=False)
axes[1, 1].set_xlim(y_min, y_max)
for tick in axes[1, 1].get_xticklabels():
tick.set_rotation(45)
axes[1, 1].set_title(par_label[1] + r'$=' + '{:5.2f}'.format(par_mean[1]) + \
r'\pm' + '{:4.2f}'.format(par_std[1]) + r'$')
# 2D marge
vals, vecs = np.linalg.eig(par_cov)
theta = np.degrees(np.arctan2(*vecs[::-1, 0]))
w, h = 2 * np.sqrt(vals)
ell = mpp.Ellipse(xy=par_mean, width=w, height=h,
angle=theta, color='k')
ell.set_facecolor("none")
axes[1, 0].add_artist(ell)
ell = mpp.Ellipse(xy=par_mean, width=2*w, height=2*h,
angle=theta, color='k')
ell.set_facecolor("none")
axes[1, 0].add_artist(ell)
axes[1, 0].set_xlim(x_min, x_max)
axes[1, 0].set_ylim(y_min, y_max)
for tick in axes[1, 0].get_xticklabels():
tick.set_rotation(45)
for tick in axes[1, 0].get_yticklabels():
tick.set_rotation(45)
axes[1, 0].set_xlabel(par_label[0])
axes[1, 0].set_ylabel(par_label[1])
fig.delaxes(axes[0, 1])
fig.subplots_adjust(hspace=0, wspace=0)
test = schmorner(gls_pars[n_gal:], gls_pars_cov[n_gal:, n_gal:], \
[abs_true, s_true], [r'$M$', r'$s$'])
#
#lazy = npr.multivariate_normal(gls_pars[n_gal:], gls_pars_cov[n_gal:, n_gal:], n_samples)
#fig = corner.corner(samples.T, labels=[r"$M$", r"$s$"],
# show_titles=True, truths=[abs_bar, s_bar])
```
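For comparison, here is a minimal sketch of the sampling-based alternative mentioned above: draw samples from the analytic multivariate Gaussian GLS posterior and hand them to `corner` (the same package used later for the Gibbs results). The contours should sit on top of the analytic ellipses.
```
import corner

# sample the analytic GLS posterior for (M*, s) and plot with corner;
# this should closely match the ellipses drawn by schmorner above
gls_samples = npr.multivariate_normal(gls_pars[n_gal:], gls_pars_cov[n_gal:, n_gal:], n_samples)
fig = corner.corner(gls_samples, labels=[r"$M^*$", r"$s$"],
                    show_titles=True, truths=[abs_true, s_true])
```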
## Task 3B
Below I've written the majority of a Gibbs sampler to infer the hyper-parameters of the Cepheid PL relation from our simulated sample. One component is missing: drawing from the conditional distribution of the standard absolute magnitude, $M^*$. Please fill it in, using the results of whiteboard/paper Task 3A.
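For reference (this simply restates what the filled-in code below does), the conditional for $M^*$ is the standard product-of-two-Gaussians result. Writing the pseudo-measurement of $M^*$ implied by the period-luminosity relation as

$$\bar{M}_{\rm PL} = \frac{1}{n_{\rm gal}\,n_{\rm star}} \sum_{i,j} \left( m_{ij} - \mu_i - s \log p_{ij} \right), \qquad \sigma_{\rm PL} = \frac{\sigma_{\rm int}}{\sqrt{n_{\rm gal}\,n_{\rm star}}},$$

the conditional combines this with the Gaussian prior (mean $\bar{M}$ = `abs_bar`, standard deviation $\sigma_M$ = `abs_sig`), giving a Gaussian with

$$\langle M^* \rangle = \frac{\sigma_M^2\,\bar{M}_{\rm PL} + \sigma_{\rm PL}^2\,\bar{M}}{\sigma_M^2 + \sigma_{\rm PL}^2}, \qquad {\rm Var}(M^*) = \frac{\sigma_M^2\,\sigma_{\rm PL}^2}{\sigma_M^2 + \sigma_{\rm PL}^2}.$$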
```
def gibbs_sample(n_samples, n_gal, n_star, abs_bar, abs_sig, \
s_bar, s_sig, mu_bar, mu_sig, mu_hat_sig, \
m_sig_int, m_hat_sig, mu_hat, lp_true, m_hat):
# storage
abs_samples = np.zeros(n_samples)
s_samples = np.zeros(n_samples)
mu_samples = np.zeros((n_gal, n_samples))
m_samples = np.zeros((n_gal, n_star, n_samples))
# initialize sampler
abs_samples[0] = abs_bar + npr.randn() * abs_sig
s_samples[0] = s_bar + npr.randn() * s_sig
mu_samples[:, 0] = mu_bar + npr.randn(n_gal) * mu_sig
for i in range(n_gal):
m_samples[i, :, 0] = mu_samples[i, 0] + abs_samples[0] + s_samples[0] * lp_true[i, :]
# sample!
for k in range(1, n_samples):
# sample abs mag
abs_sig_pl = m_sig_int / np.sqrt(n_gal * n_star)
abs_bar_pl = 0.0
for j in range(n_gal):
abs_bar_pl += np.sum(m_samples[j, :, k - 1] - mu_samples[j, k - 1] - s_samples[k - 1] * lp_true[j, :])
abs_bar_pl /= (n_gal * n_star)
abs_std = np.sqrt((abs_sig * abs_sig_pl) ** 2 / (abs_sig ** 2 + abs_sig_pl ** 2))
abs_mean = (abs_sig ** 2 * abs_bar_pl + abs_sig_pl ** 2 * abs_bar) / \
(abs_sig ** 2 + abs_sig_pl ** 2)
abs_samples[k] = abs_mean + npr.randn() * abs_std
# sample slope
s_sig_pl = m_sig_int / np.sqrt(np.sum(lp_true ** 2))
s_bar_pl = 0.0
for j in range(n_gal):
s_bar_pl += np.sum((m_samples[j, :, k - 1] - mu_samples[j, k - 1] - abs_samples[k]) * lp_true[j, :])
s_bar_pl /= np.sum(lp_true ** 2)
s_std = np.sqrt((s_sig * s_sig_pl) ** 2 / (s_sig ** 2 + s_sig_pl ** 2))
s_mean = (s_sig ** 2 * s_bar_pl + s_sig_pl ** 2 * s_bar) / \
(s_sig ** 2 + s_sig_pl ** 2)
s_samples[k] = s_mean + npr.randn() * s_std
# sample apparent magnitudes
for j in range(n_gal):
m_mean_pl = mu_samples[j, k - 1] + abs_samples[k] + s_samples[k] * lp_true[j, :]
m_std = np.sqrt(m_sig_int ** 2 * m_hat_sig ** 2 / (m_sig_int ** 2 + m_hat_sig ** 2))
m_mean = (m_sig_int ** 2 * m_hat[j, :] + m_hat_sig ** 2 * m_mean_pl) / (m_sig_int ** 2 + m_hat_sig ** 2)
m_samples[j, :, k] = m_mean + npr.randn(n_star) * m_std
# sample distance moduli
mu_sig_pl = m_sig_int / np.sqrt(n_star)
mu_bar_pl = np.mean(m_samples[0, :, k] - abs_samples[k] - s_samples[k] * lp_true[0, :])
mu_var = 1.0 / (1.0 / mu_sig ** 2 + 1.0 / mu_hat_sig ** 2 + 1.0 / mu_sig_pl ** 2)
mu_mean = (mu_bar / mu_sig ** 2 + mu_hat / mu_hat_sig ** 2 + mu_bar_pl / mu_sig_pl ** 2) * mu_var
mu_samples[0, k] = mu_mean + npr.randn() * np.sqrt(mu_var)
for j in range(1, n_gal):
mu_sig_pl = m_sig_int / np.sqrt(n_star)
mu_bar_pl = np.mean(m_samples[j, :, k] - abs_samples[k] - s_samples[k] * lp_true[j, :])
mu_std = np.sqrt((mu_sig * mu_sig_pl) ** 2 / (mu_sig ** 2 + mu_sig_pl ** 2))
mu_mean = (mu_sig ** 2 * mu_bar_pl + mu_sig_pl ** 2 * mu_bar) / \
(mu_sig ** 2 + mu_sig_pl ** 2)
mu_samples[j, k] = mu_mean + npr.randn() * mu_std
return (abs_samples, s_samples, mu_samples, m_samples)
```
Now let's sample, setting aside the first half of the samples as warmup.
```
all_samples = gibbs_sample(n_samples, n_gal, n_star, abs_bar, abs_sig, \
s_bar, s_sig, mu_bar, mu_sig, mu_hat_sig, \
m_sig_int, m_hat_sig, mu_hat, lp_true, m_hat)
n_warmup = int(n_samples / 2)
g_samples = [samples[n_warmup:] for samples in all_samples]
```
Let's make sure that the absolute magnitude is being inferred as expected. First, generate a trace plot of the absolute magnitude samples (the first entry in `g_samples`), overlaying the ground truth. Then print out the mean and standard deviation of the marginalized absolute magnitude posterior. Recall that marginalizing is as simple as throwing away the samples of all other parameters.
```
mp.plot(g_samples[0])
mp.axhline(abs_true)
mp.xlabel('sample')
mp.ylabel(r'$M^*$')
print('Truth {:6.2f}; inferred {:6.2f} +/- {:4.2f}'.format(abs_true, np.mean(g_samples[0]), np.std(g_samples[0])))
```
Now let's generate some marginalized parameter posteriors (by simply discarding all samples of the latent parameters) using DFM's [`corner`](https://corner.readthedocs.io/en/latest/) package. Note the near identical nature of this plot to the `schmorner` plot we generated above.
```
import corner
samples = np.stack((g_samples[0], g_samples[1]))
fig = corner.corner(samples.T, labels=[r"$M^*$", r"$s$"],
show_titles=True, truths=[abs_true, s_true])
```
## Task 4
The final task is to write a [Stan model](https://pystan.readthedocs.io/en/latest/getting_started.html) to infer the parameters of the period-luminosity relation. I've coded up the other two blocks required (`data` and `parameters`), so all that is required is for you to write the joint posterior (factorized into its individual components) in Stan's sampling-statement-based syntax. Essentially all you need are Gaussian sampling statements (`abs_true ~ normal(abs_bar, abs_sig);`) and for loops (`for(i in 1: n_gal){...}`).
When you evaluate this cell, Stan will translate your model into `c++` code and compile it. We will then pickle the compiled model so you can re-use it rapidly without recompiling. To do so, please set `recompile = False` in the notebook.
```
import sys
import pystan as ps
import pickle
stan_code = """
data {
int<lower=0> n_gal;
int<lower=0> n_star;
real mu_hat;
real mu_hat_sig;
real m_hat[n_gal, n_star];
real m_hat_sig;
real m_sig_int;
real lp_true[n_gal, n_star];
real abs_bar;
real abs_sig;
real s_bar;
real s_sig;
real mu_bar;
real mu_sig;
}
parameters {
real mu_true[n_gal];
real m_true[n_gal, n_star];
real abs_true;
real s_true;
}
model {
// priors
abs_true ~ normal(abs_bar, abs_sig);
s_true ~ normal(s_bar, s_sig);
mu_true ~ normal(mu_bar, mu_sig);
// whatevers
for(i in 1: n_gal){
for(j in 1: n_star){
m_true[i, j] ~ normal(mu_true[i] + abs_true + s_true * lp_true[i, j], m_sig_int);
}
}
// likelihoods
mu_hat ~ normal(mu_true[1], mu_hat_sig);
for(i in 1: n_gal){
for(j in 1: n_star){
m_hat[i, j] ~ normal(m_true[i, j], m_hat_sig);
}
}
}
"""
n_samples_stan = 5000
recompile = True
pkl_fname = 'bhms_stan_model_v{:d}p{:d}p{:d}.pkl'.format(sys.version_info[0], \
sys.version_info[1], \
sys.version_info[2])
if recompile:
stan_model = ps.StanModel(model_code=stan_code)
with open(pkl_fname, 'wb') as f:
pickle.dump(stan_model, f)
else:
try:
with open(pkl_fname, 'rb') as f:
stan_model = pickle.load(f)
except EnvironmentError:
print('ERROR: pickled Stan model (' + pkl_fname + ') not found. ' + \
'Please set recompile = True')
```
Now let's sample...
```
stan_data = {'n_gal': n_gal, 'n_star': n_star, 'mu_hat': mu_hat, 'mu_hat_sig': mu_hat_sig, \
'm_hat': m_hat, 'm_hat_sig': m_hat_sig, 'm_sig_int': m_sig_int, 'lp_true': lp_true, \
'abs_bar': abs_bar, 'abs_sig': abs_sig, 's_bar': s_bar, 's_sig': s_sig, \
'mu_bar': mu_bar, 'mu_sig': mu_sig}
fit = stan_model.sampling(data=stan_data, iter=n_samples_stan, chains=4)
```
... print out Stan's posterior summary (note this is for _all_ parameters)...
```
samples = fit.extract(permuted=True)
print(fit)
```
... and plot the marginalized posterior of the PL parameters, as with the Gibbs sampler.
```
c_samples = np.stack((samples['abs_true'], samples['s_true']))
fig = corner.corner(c_samples.T, labels=[r"$M^*$", r"$s$"],
show_titles=True, truths=[abs_true, s_true])
```
Our work here is done!
| github_jupyter |
# Detecting Loops in Linked Lists
In this notebook, you'll implement a function that detects if a loop exists in a linked list. The way we'll do this is by having two pointers, called "runners", moving through the list at different rates. Typically we have a "slow" runner which moves at one node per step and a "fast" runner that moves at two nodes per step.
If a loop exists in the list, the fast runner will eventually wrap around the loop and end up behind the slow runner. From there it gains on the slow runner one node per step, so eventually both runners will be pointing to the same node at the same time. If this happens then you know there is a loop in the linked list. Below is an example where we have a slow runner (the green arrow) and a fast runner (the red arrow).
<center><img src='assets/two_runners_circular.png' alt="Visual walk through of the steps described above to determine if a loop exists in a linked list." width=300px></center>
```
class Node:
def __init__(self, value):
self.value = value
self.next = None
class LinkedList:
def __init__(self, init_list=None):
self.head = None
if init_list:
for value in init_list:
self.append(value)
def append(self, value):
if self.head is None:
self.head = Node(value)
return
# Move to the tail (the last node)
node = self.head
while node.next:
node = node.next
node.next = Node(value)
return
def __iter__(self):
node = self.head
while node:
yield node.value
node = node.next
def __repr__(self):
return str([i for i in self])
list_with_loop = LinkedList([2, -1, 3, 0, 5])
# Creating a loop where the last node points back to the second node
loop_start = list_with_loop.head.next
node = list_with_loop.head
while node.next:
node = node.next
node.next = loop_start
# You will encounter an infinite loop when running this cell
# Click on stop to interrupt the kernel
# Then right click on the output and choose `Clear Output`
for i in list_with_loop:
print(i)
```
### Write the function definition here
**Exercise:** Given a linked list, implement a function `iscircular` that returns `True` if a loop exists in the list and `False` otherwise.
```
def iscircular(linked_list):
"""
Determine whether the Linked List is circular or not
Args:
linked_list(obj): Linked List to be checked
Returns:
bool: Return True if the linked list is circular, return False otherwise
"""
# TODO: Write function to check if linked list is circular
if linked_list is None:
return False
slow, fast = linked_list.head, linked_list.head
while fast and fast.next:
slow, fast = slow.next, fast.next.next
if slow == fast:
return True
return False
```
### Let's test your function
```
iscircular(list_with_loop)
# Test Cases
# Create another circular linked list
small_loop = LinkedList([0])
small_loop.head.next = small_loop.head
print ("Pass" if iscircular(list_with_loop) else "Fail") # Pass
print ("Pass" if iscircular(LinkedList([-4, 7, 2, 5, -1])) else "Fail") # Fail
print ("Pass" if iscircular(LinkedList([1])) else "Fail") # Fail
print ("Pass" if iscircular(small_loop) else "Fail") # Pass
print ("Pass" if iscircular(LinkedList([])) else "Fail") # Fail
```
| github_jupyter |
```
import numpy as np
from scipy.stats import norm
import matplotlib.pylab as plt
import pandas as pd
from bokeh.layouts import row, widgetbox, layout, gridplot
from bokeh.models import CustomJS, Slider
from bokeh.plotting import figure, output_file, show, ColumnDataSource
from bokeh.models.glyphs import MultiLine
from bokeh.io import output_notebook
from bokeh.models.widgets import Div
%matplotlib inline
output_notebook()
num_data = 10
X = norm.rvs(size=(num_data,3), random_state=42)
#X = np.dot(Y, np.linalg.cholesky([[1, 0.6], [0.6, 0.6]]))
m = X.mean(axis=0)
X = X - m
X
from mpl_toolkits.mplot3d import Axes3D
import matplotlib.pyplot as plt
import numpy as np
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.scatter(X[:,0], X[:,1], X[:,2])
ax.set_xlabel('X Label')
ax.set_ylabel('Y Label')
ax.set_zlabel('Z Label')
plt.show()
a, b = np.linalg.eig(np.cov(X.T));
a
b
from sklearn.decomposition import PCA
pca = PCA(n_components=3)
pca.fit(X)
print(pca.components_)
print(pca.explained_variance_)
X_star = pca.transform(X)
X_star
# keep projections onto first two pcs
F_2 = np.dot(pca.components_[0:2,:], X.T)
np.dot(F_2, F_2.T)
# keep projection onto first pc
F_1 = np.dot(pca.components_[0,:], X.T)
F_1
X
XF = np.outer(pca.components_[0,:].T, F_1)
XF
resid = X.T - XF
resid
np.dot(resid, resid.T)
from sklearn.decomposition import PCA
class RiskModelPCA():
ANN_FACTOR = 252
def __init__(self, num_factors):
self._num_factors = num_factors
self.num_stocks_ = None
self.factor_betas_ = None
self.factor_returns_ = None
self.common_returns_ = None
self.residuals_ = None
self.factor_cov_matrix_ = None
self.idio_var_matrix_ = None
self.explained_variance_ratio_ = None
def fit(self, returns):
self.num_stocks_ = len(returns.columns)
mod = PCA(n_components=self._num_factors, svd_solver='full')
mod.fit(returns)
self.factor_betas_ = pd.DataFrame(
data=mod.components_.T,
index=returns.columns
)
self.factor_returns_ = pd.DataFrame(
data=mod.transform(returns),
index=returns.index
)
self.explained_variance_ratio_ = mod.explained_variance_ratio_
self.common_returns_ = pd.DataFrame(
data=np.dot(self.factor_returns_, self.factor_betas_.T),
index=returns.index
)
self.common_returns_.columns = returns.columns
self.residuals_ = (returns - self.common_returns_)
self.factor_cov_matrix_ = np.diag(
self.factor_returns_.var(axis=0, ddof=1)*RiskModelPCA.ANN_FACTOR
)
self.idio_var_matrix_ = pd.DataFrame(
data=np.diag(np.var(self.residuals_))*RiskModelPCA.ANN_FACTOR,
index=returns.columns
)
self.idio_var_vector_ = pd.DataFrame(
data=np.diag(self.idio_var_matrix_.values),
index=returns.columns
)
self.idio_var_matrix_.columns = returns.columns
def get_factor_exposures(self, weights):
F = self.factor_betas_.loc[weights.index]
return F.T.dot(weights)
def predict(self, weights):
""" Calculates expected portfolio risk as sqrt(h'XFX'h + h'Sh).
This will fail if your portfolio has asset weights not in the risk model"""
all_assets = pd.DataFrame(
data=np.repeat(0, self.num_stocks_),
index=self.factor_betas_.index)
all_assets.loc[weights.index] = weights
h = all_assets
X = self.factor_betas_
F = self.factor_cov_matrix_
S = self.idio_var_matrix_
return np.sqrt(h.T.dot(X).dot(F).dot(X.T).dot(h) + h.T.dot(S).dot(h))[0].values[0]
rm = RiskModelPCA(1)
rm.fit(pd.DataFrame(X))
rm.idio_var_matrix_/252
```
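As a quick usage sketch for the fitted risk model above, the factor exposure of an equal-weight portfolio over the three simulated assets can be computed as follows; the weight vector here is made up purely for illustration.
```
# equal-weight portfolio over the three columns of the simulated returns
weights = pd.Series([1/3, 1/3, 1/3], index=pd.DataFrame(X).columns)

# exposure of the portfolio to the single retained PCA factor
print(rm.get_factor_exposures(weights))
```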
| github_jupyter |
<a href="https://colab.research.google.com/github/google-research/tapas/blob/master/notebooks/sqa_predictions.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
##### Copyright 2020 The Google AI Language Team Authors
Licensed under the Apache License, Version 2.0 (the "License");
```
# Copyright 2019 The Google AI Language Team Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
Running a Tapas fine-tuned checkpoint
---
This notebook shows how to load and make predictions with a TAPAS model, which was introduced in the paper: [TAPAS: Weakly Supervised Table Parsing via Pre-training](https://arxiv.org/abs/2004.02349)
# Clone and install the repository
First, let's install the code.
```
! pip install tapas-table-parsing
```
# Fetch models fom Google Storage
Next we can get a pretrained checkpoint from Google Storage. For the sake of speed, this is a base-sized model trained on [SQA](https://www.microsoft.com/en-us/download/details.aspx?id=54253). Note that the best results in the paper were obtained with a large model, with 24 layers instead of 12.
```
! gsutil cp gs://tapas_models/2020_04_21/tapas_sqa_base.zip . && unzip tapas_sqa_base.zip
```
# Imports
```
import tensorflow.compat.v1 as tf
import os
import shutil
import csv
import pandas as pd
import IPython
tf.get_logger().setLevel('ERROR')
from tapas.utils import tf_example_utils
from tapas.protos import interaction_pb2
from tapas.utils import number_annotation_utils
from tapas.scripts import prediction_utils
```
# Load checkpoint for prediction
Here's the prediction code, which will create an `interaction_pb2.Interaction` protobuf object, the data structure we use to store examples, and then call the prediction script.
```
os.makedirs('results/sqa/tf_examples', exist_ok=True)
os.makedirs('results/sqa/model', exist_ok=True)
with open('results/sqa/model/checkpoint', 'w') as f:
f.write('model_checkpoint_path: "model.ckpt-0"')
for suffix in ['.data-00000-of-00001', '.index', '.meta']:
shutil.copyfile(f'tapas_sqa_base/model.ckpt{suffix}', f'results/sqa/model/model.ckpt-0{suffix}')
max_seq_length = 512
vocab_file = "tapas_sqa_base/vocab.txt"
config = tf_example_utils.ClassifierConversionConfig(
vocab_file=vocab_file,
max_seq_length=max_seq_length,
max_column_id=max_seq_length,
max_row_id=max_seq_length,
strip_column_names=False,
add_aggregation_candidates=False,
)
converter = tf_example_utils.ToClassifierTensorflowExample(config)
def convert_interactions_to_examples(tables_and_queries):
"""Calls Tapas converter to convert interaction to example."""
for idx, (table, queries) in enumerate(tables_and_queries):
interaction = interaction_pb2.Interaction()
for position, query in enumerate(queries):
question = interaction.questions.add()
question.original_text = query
question.id = f"{idx}-0_{position}"
for header in table[0]:
interaction.table.columns.add().text = header
for line in table[1:]:
row = interaction.table.rows.add()
for cell in line:
row.cells.add().text = cell
number_annotation_utils.add_numeric_values(interaction)
for i in range(len(interaction.questions)):
try:
yield converter.convert(interaction, i)
except ValueError as e:
print(f"Can't convert interaction: {interaction.id} error: {e}")
def write_tf_example(filename, examples):
with tf.io.TFRecordWriter(filename) as writer:
for example in examples:
writer.write(example.SerializeToString())
def predict(table_data, queries):
table = [list(map(lambda s: s.strip(), row.split("|")))
for row in table_data.split("\n") if row.strip()]
examples = convert_interactions_to_examples([(table, queries)])
write_tf_example("results/sqa/tf_examples/test.tfrecord", examples)
write_tf_example("results/sqa/tf_examples/random-split-1-dev.tfrecord", [])
! python -m tapas.run_task_main \
--task="SQA" \
--output_dir="results" \
--noloop_predict \
--test_batch_size={len(queries)} \
--tapas_verbosity="ERROR" \
--compression_type= \
--init_checkpoint="tapas_sqa_base/model.ckpt" \
--bert_config_file="tapas_sqa_base/bert_config.json" \
--mode="predict" 2> error
results_path = "results/sqa/model/test_sequence.tsv"
all_coordinates = []
df = pd.DataFrame(table[1:], columns=table[0])
display(IPython.display.HTML(df.to_html(index=False)))
print()
with open(results_path) as csvfile:
reader = csv.DictReader(csvfile, delimiter='\t')
for row in reader:
coordinates = prediction_utils.parse_coordinates(row["answer_coordinates"])
all_coordinates.append(coordinates)
answers = ', '.join([table[row + 1][col] for row, col in coordinates])
position = int(row['position'])
print(">", queries[position])
print(answers)
return all_coordinates
```
# Predict
```
# Example nu-1000-0
result = predict("""
Doctor_ID|Doctor_Name|Department|opd_day|Morning_time|Evening_time
1|ABCD|Nephrology|Monday|9|5
2|ABC|Opthomology|Tuesday|9|6
3|DEF|Nephrology|Wednesday|9|6
4|GHI|Gynaecology|Thursday|9|6
5|JKL|Orthopeadics|Friday|9|6
6|MNO|Cardiology|Saturday|9|6
7|PQR|Dentistry|Sunday|9|5
8|STU|Epidemology|Monday|9|6
9|WVX|ENT|Tuesday|9|5
10|GILOY|Genetics|Wednesday|9|6
11|Rajeev|Neurology|Wednesday|10|4:30
12|Makan|Immunology|Tuesday|9|4:30
13|Arora|Paediatrics|Sunday|11|4:30
14|Piyush|Radiology|Monday|11:20|2
15|Roha|Gynaecology|Wednesday|9:20|2
16|Bohra|Dentistry|Thursday|11|2
17|Rajeev Khan|Virology|Tuesday|10|2
18|Arnab|Pharmocology|Sunday|10|2
19|Muskan|ENT|Friday|10|2
20|pamela|Epidemology|Monday|10|2
21|Rohit|Radiology|Tuesday|10|2
22|Aniket|Cardiology|Saturday|10|2
23|Darbar|Genetics|Saturday|10|2
24|Suyash|Neurology|Friday|10|2
25|Abhishek|Immunology|Wednesday|10|2
26|Yogesh|Immunology|Saturday|10|2
27|Kunal|Paediatrics|Monday|10|2
28|Vimal|Pharmocology|Friday|10|2
29|Kalyan|Virology|Tuesday|10|2
30|DSS|Nephrology|Thursday|10|2
""", ["How many doctors are there in Immunology department?", "of these, which doctor is available on Saturday?"])
```
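As a further usage sketch, the `predict` helper defined above works on any small pipe-separated table; the toy table and questions below are invented purely for illustration and can be replaced with your own data.
```
# A made-up table purely for illustration
result = predict("""
Name|Department|Age
Alice|Cardiology|34
Bob|Neurology|41
Carol|Cardiology|29
""", ["How many people work in Cardiology?", "of these, who is the youngest?"])
```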
| github_jupyter |
# Tutorial Part 6: Going Deeper On Molecular Featurizations
One of the most important steps of doing machine learning on molecular data is transforming the data into a form amenable to the application of learning algorithms. This process is broadly called "featurization" and involves turning a molecule into a vector or tensor of some sort. There are a number of different ways of doing such transformations, and the choice of featurization is often dependent on the problem at hand.
In this tutorial, we explore the different featurization methods available for molecules. These featurization methods include:
1. `ConvMolFeaturizer`,
2. `WeaveFeaturizer`,
3. `CircularFingerprints`
4. `RDKitDescriptors`
5. `BPSymmetryFunction`
6. `CoulombMatrix`
7. `CoulombMatrixEig`
8. `AdjacencyFingerprints`
## Colab
This tutorial and the rest in this sequence are designed to be done in Google colab. If you'd like to open this notebook in colab, you can use the following link.
[](https://colab.research.google.com/github/deepchem/deepchem/blob/master/examples/tutorials/06_Going_Deeper_on_Molecular_Featurizations.ipynb)
## Setup
To run DeepChem within Colab, you'll need to run the following cell of installation commands. This will take about 5 minutes to run to completion and install your environment.
```
!wget -c https://repo.anaconda.com/archive/Anaconda3-2019.10-Linux-x86_64.sh
!chmod +x Anaconda3-2019.10-Linux-x86_64.sh
!bash ./Anaconda3-2019.10-Linux-x86_64.sh -b -f -p /usr/local
!conda install -y -c deepchem -c rdkit -c conda-forge -c omnia deepchem-gpu=2.3.0
import sys
sys.path.append('/usr/local/lib/python3.7/site-packages/')
```
Let's start with some basic imports
```
from __future__ import print_function
from __future__ import division
from __future__ import unicode_literals
import numpy as np
from rdkit import Chem
from deepchem.feat import ConvMolFeaturizer, WeaveFeaturizer, CircularFingerprint
from deepchem.feat import AdjacencyFingerprint, RDKitDescriptors
from deepchem.feat import BPSymmetryFunctionInput, CoulombMatrix, CoulombMatrixEig
from deepchem.utils import conformers
```
We use propane ($CH_3CH_2CH_3$) as a running example throughout this tutorial. Many of the featurization methods use conformers of the molecules. A conformer can be generated using the `ConformerGenerator` class in `deepchem.utils.conformers`.
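For example (using the same calls that appear in each section below), a single conformer for propane can be generated like this:
```
from rdkit import Chem
from deepchem.utils import conformers

example_mol = Chem.MolFromSmiles("CCC")  # propane
engine = conformers.ConformerGenerator(max_conformers=1)
example_mol = engine.generate_conformers(example_mol)
print("Number of available conformers for propane: ", len(example_mol.GetConformers()))
```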
### RDKitDescriptors
`RDKitDescriptors` featurizes a molecule by computing descriptor values for specified descriptors. Intrinsic to the featurizer is a set of allowed descriptors, which can be accessed using `RDKitDescriptors.allowedDescriptors`.
The featurizer uses the descriptors in `rdkit.Chem.Descriptors.descList`, checks if they are in the list of allowed descriptors and computes the descriptor value for the molecule.
```
example_smile = "CCC"
example_mol = Chem.MolFromSmiles(example_smile)
```
Let's check the allowed list of descriptors. As you will see shortly, there's a wide range of chemical properties that RDKit computes for us.
```
for descriptor in RDKitDescriptors.allowedDescriptors:
print(descriptor)
rdkit_desc = RDKitDescriptors()
features = rdkit_desc._featurize(example_mol)
print('The number of descriptors present are: ', len(features))
```
### BPSymmetryFunction
The `Behler-Parrinello symmetry function` featurizer, or `BPSymmetryFunction`, featurizes a molecule by computing the atomic number and coordinates for each atom in the molecule. The features can be used as input for symmetry functions, like `RadialSymmetry`, `DistanceMatrix` and `DistanceCutoff`. More details on these symmetry functions can be found in [this paper](https://journals.aps.org/prl/pdf/10.1103/PhysRevLett.98.146401). These functions can be found in `deepchem.feat.coulomb_matrices`.
The featurizer takes in `max_atoms` as an argument. As input, it takes in a conformer of the molecule and computes:
1. coordinates of every atom in the molecule (in Bohr units)
2. the atomic numbers for all atoms.
These features are concatenated and padded with zeros to account for the different numbers of atoms across molecules.
```
example_smile = "CCC"
example_mol = Chem.MolFromSmiles(example_smile)
engine = conformers.ConformerGenerator(max_conformers=1)
example_mol = engine.generate_conformers(example_mol)
```
Let's now take a look at the actual featurized matrix that comes out.
```
bp_sym = BPSymmetryFunctionInput(max_atoms=20)
features = bp_sym._featurize(mol=example_mol)
features
```
A simple check for the featurization would be to count the different atomic numbers present in the features.
```
atomic_numbers = features[:, 0]
from collections import Counter
unique_numbers = Counter(atomic_numbers)
print(unique_numbers)
```
For propane, we have $3$ `C-atoms` and $8$ `H-atoms`, and these numbers are in agreement with the results shown above. There's also the additional padding of 9 atoms, to equalize with `max_atoms`.
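A quick way to confirm the padding is to count the rows of the feature matrix whose atomic number is zero:
```
import numpy as np

# 3 carbons + 8 hydrogens = 11 real atoms; the remaining rows (up to max_atoms=20) are zero padding
print("Total rows: ", len(atomic_numbers))                 # 20
print("Padding rows: ", int(np.sum(atomic_numbers == 0)))  # 9
```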
### CoulombMatrix
`CoulombMatrix` featurizes a molecule by computing the Coulomb matrices for different conformers of the molecule and returning them as a list.
A Coulomb matrix tries to encode the energy structure of a molecule. The matrix is symmetric, with the off-diagonal elements capturing the Coulombic repulsion between pairs of atoms and the diagonal elements capturing atomic energies using the atomic numbers. More information on the functional forms used can be found [here](https://journals.aps.org/prl/pdf/10.1103/PhysRevLett.108.058301).
The featurizer takes in `max_atoms` as an argument and also has options for removing hydrogens from the molecule (`remove_hydrogens`), generating additional random Coulomb matrices (`randomize`), and getting only the upper triangular matrix (`upper_tri`).
```
example_smile = "CCC"
example_mol = Chem.MolFromSmiles(example_smile)
engine = conformers.ConformerGenerator(max_conformers=1)
example_mol = engine.generate_conformers(example_mol)
print("Number of available conformers for propane: ", len(example_mol.GetConformers()))
coulomb_mat = CoulombMatrix(max_atoms=20, randomize=False, remove_hydrogens=False, upper_tri=False)
features = coulomb_mat._featurize(mol=example_mol)
```
A simple check for the featurization is to see if the feature list has the same length as the number of conformers
```
print(len(example_mol.GetConformers()) == len(features))
```
### CoulombMatrixEig
`CoulombMatrix` is invariant to molecular rotation and translation, since the interatomic distances and atomic numbers do not change. However, the matrix is not invariant to random permutations of the atoms' indices. To deal with this, the `CoulombMatrixEig` featurizer was introduced, which uses the eigenvalue spectrum of the Coulomb matrix and is invariant to random permutations of the atoms' indices.
`CoulombMatrixEig` inherits from `CoulombMatrix` and featurizes a molecule by first computing the Coulomb matrices for different conformers of the molecule and then computing the eigenvalues for each Coulomb matrix. These eigenvalues are then padded to account for variation in the number of atoms across molecules.
The featurizer takes in `max_atoms` as an argument and also has options for removing hydrogens from the molecule (`remove_hydrogens`) and generating additional random Coulomb matrices (`randomize`).
```
example_smile = "CCC"
example_mol = Chem.MolFromSmiles(example_smile)
engine = conformers.ConformerGenerator(max_conformers=1)
example_mol = engine.generate_conformers(example_mol)
print("Number of available conformers for propane: ", len(example_mol.GetConformers()))
coulomb_mat_eig = CoulombMatrixEig(max_atoms=20, randomize=False, remove_hydrogens=False)
features = coulomb_mat_eig._featurize(mol=example_mol)
print(len(example_mol.GetConformers()) == len(features))
```
### Adjacency Fingerprints
TODO(rbharath): This tutorial still needs to be expanded out with the additional fingerprints.
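Until that expansion lands, here is a minimal sketch of how `AdjacencyFingerprint` (imported at the top of this notebook) would plug into the same pattern as the other featurizers. The constructor argument shown is an assumption; check the signature in your installed DeepChem version before relying on it.
```
example_smile = "CCC"
example_mol = Chem.MolFromSmiles(example_smile)

# NOTE: max_n_atoms is an assumed argument name -- verify it against the
# AdjacencyFingerprint documentation for your DeepChem version.
adj_fp = AdjacencyFingerprint(max_n_atoms=20)
features = adj_fp._featurize(example_mol)
print(features)
```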
# Congratulations! Time to join the Community!
Congratulations on completing this tutorial notebook! If you enjoyed working through the tutorial, and want to continue working with DeepChem, we encourage you to finish the rest of the tutorials in this series. You can also help the DeepChem community in the following ways:
## Star DeepChem on [GitHub](https://github.com/deepchem/deepchem)
This helps build awareness of the DeepChem project and the tools for open source drug discovery that we're trying to build.
## Join the DeepChem Gitter
The DeepChem [Gitter](https://gitter.im/deepchem/Lobby) hosts a number of scientists, developers, and enthusiasts interested in deep learning for the life sciences. Join the conversation!
| github_jupyter |
# Neural Networks
In the previous part of this exercise, you implemented multi-class logistic regression to recognize handwritten digits. However, logistic regression cannot form more complex hypotheses as it is only a linear classifier.<br><br>
In this part of the exercise, you will implement a neural network to recognize handwritten digits using the same training set as before. The <strong>neural network</strong> will be able to represent complex models that form <strong>non-linear hypotheses</strong>. For this week, you will be using parameters from <strong>a neural network that we have already trained</strong>. Your goal is to implement the <strong>feedforward propagation algorithm to use our weights for prediction</strong>. In next week’s exercise, you will write the backpropagation algorithm for learning the neural network parameters.<br><br>
The file <strong><em>ex3data1</em></strong> contains a training set.<br>
The structure of the dataset is described below:<br>
1. X array = <strong>400 columns describing the pixel values of the 20*20 images in flattened format for 5000 samples</strong>
2. y array = <strong>Value of image (number between 0-9)</strong>
<br><br>
<strong>
Our assignment has these sections:
1. Visualizing the Data
1. Converting .mat to .csv
2. Loading Dataset and Trained Neural Network Weights
3. Ploting Data
2. Model Representation
3. Feedforward Propagation and Prediction
</strong>
A full description is provided in each section.
## 1. Visualizing the Dataset
Before starting on any task, it is often useful to understand the data by visualizing it.<br>
### 1.A Converting .mat to .csv
In this specific assignment, the instructor provided .mat files containing the training set and the weights of a trained neural network, but we have to convert them to .csv to use them in Python.<br>
After that we are ready to import our new csv files into pandas dataframes, do preprocessing on them, and make them ready for the next steps.
```
# import libraries
import scipy.io
import numpy as np
data = scipy.io.loadmat("ex3data1")
weights = scipy.io.loadmat('ex3weights')
```
Now we extract the X and y variables from the data file and the Theta1 and Theta2 variables from the weights file, and save them into .csv files for further usage. After running the code below <strong>you should see X.csv, y.csv, Theta1.csv and Theta2.csv files</strong> in your directory.
```
for i in data:
if '__' not in i and 'readme' not in i:
np.savetxt((i+".csv"),data[i],delimiter=',')
for i in weights:
if '__' not in i and 'readme' not in i:
np.savetxt((i+".csv"),weights[i],delimiter=',')
```
### 1.B Loading Dataset and Trained Neural Network Weights
First we import .csv files into pandas dataframes then save them into numpy arrays.<br><br>
There are <strong>5000 training examples</strong> in ex3data1.mat, where each training example is a <strong>20 pixel by 20 pixel <em>grayscale</em> image of the digit</strong>. Each pixel is represented by a floating point number indicating the <strong>grayscale intensity</strong> at that location. The 20 by 20 grid of pixels is <strong>"flattened" into a 400-dimensional vector</strong>. <strong>Each of these training examples becomes a single row in our data matrix X</strong>. This gives us a 5000 by 400 matrix X where every row is a training example for a handwritten digit image.<br><br>
The second part of the training set is a <strong>5000-dimensional vector y that contains labels</strong> for the training set.<br><br>
<strong>Notice: In the dataset, the digit zero is mapped to the value ten. Therefore, a "0" digit is labeled as "10", while the digits "1" to "9" are labeled as "1" to "9" in their natural order.<br></strong>
But this makes things harder, so we map zero back to its natural label of 0.
```
# import library
import pandas as pd
# saving .csv files to pandas dataframes
x_df = pd.read_csv('X.csv',names= np.arange(0,400))
y_df = pd.read_csv('y.csv',names=['label'])
# saving .csv files to pandas dataframes
Theta1_df = pd.read_csv('Theta1.csv',names = np.arange(0,401))
Theta2_df = pd.read_csv('Theta2.csv',names = np.arange(0,26))
# saving x_df and y_df into numpy arrays
x = x_df.iloc[:,:].values
y = y_df.iloc[:,:].values
m, n = x.shape
# bring back 0 to 0 !!!
y = y.reshape(m,)
y[y==10] = 0
y = y.reshape(m,1)
print('#{} Number of training samples, #{} features per sample'.format(m,n))
# saving Theta1_df and Theta2_df into numpy arrays
theta1 = Theta1_df.iloc[:,:].values
theta2 = Theta2_df.iloc[:,:].values
```
### 1.C Plotting Data
You will begin by visualizing a subset of the training set. In the first part, the code <strong>randomly selects 100 rows from X</strong> and passes those rows to the <strong>display_data</strong> function. This function maps each row to a 20 pixel by 20 pixel grayscale image and displays the images together.<br>
After plotting, you should see an image like this:<img src='img/plot.jpg'>
```
import numpy as np
import matplotlib.pyplot as plt
import random
amount = 100
lines = 10
columns = 10
image = np.zeros((amount, 20, 20))
number = np.zeros(amount)
for i in range(amount):
rnd = random.randint(0,4999)
image[i] = x[rnd].reshape(20, 20)
y_temp = y.reshape(m,)
number[i] = y_temp[rnd]
fig = plt.figure(figsize=(8,8))
for i in range(amount):
ax = fig.add_subplot(lines, columns, 1 + i)
# Turn off tick labels
ax.set_yticklabels([])
ax.set_xticklabels([])
plt.imshow(image[i], cmap='binary')
plt.show()
print(number)
```
# 2. Model Representation
Our neural network is shown in the figure below. It has <strong>3 layers: an input layer, a hidden layer and an output layer</strong>. Recall that our <strong>inputs are pixel</strong> values of digit images. Since the images are of <strong>size 20×20</strong>, this gives us <strong>400 input layer units</strong> (excluding the extra bias unit which always outputs +1).<br><br><img src='img/nn.jpg'><br>
You have been provided with a set of <strong>network parameters (Θ<sup>(1)</sup>; Θ<sup>(2)</sup>)</strong> already trained by the instructor.<br><br>
<strong>The parameters Theta1 and Theta2 have dimensions that are sized for a neural network with 25 units in the second layer and 10 output units (corresponding to the 10 digit classes).</strong>
```
print('theta1 shape = {}, theta2 shape = {}'.format(theta1.shape,theta2.shape))
```
The provided weights are stored transposed, so we transpose them to match the layout our neural network expects.
```
theta1 = theta1.transpose()
theta2 = theta2.transpose()
print('theta1 shape = {}, theta2 shape = {}'.format(theta1.shape,theta2.shape))
```
# 3. Feedforward Propagation and Prediction
Now you will implement feedforward propagation for the neural network.<br>
You should implement the <strong>feedforward computation</strong> that computes <strong>h<sub>θ</sub>(x<sup>(i)</sup>)</strong> for every example i and returns the associated predictions. Similar to the one-vs-all classification strategy, the prediction from the neural network will be the <strong>label</strong> that has the <strong>largest output <strong>h<sub>θ</sub>(x)<sub>k</sub></strong></strong>.
<strong>Implementation Note:</strong> The matrix X contains the examples in rows. When you complete the code, <strong>you will need to add the column of 1’s</strong> to the matrix. The matrices <strong>Theta1 and Theta2 contain the parameters for each unit in rows.</strong> Specifically, the first row of Theta1 corresponds to the first hidden unit in the second layer. <br>
You must get <strong>a<sup>(l)</sup></strong> as a column vector.<br><br>
You should see that the <strong>accuracy is about 97.5%</strong>.
```
# adding column of 1's to x
x = np.append(np.ones(shape=(m,1)),x,axis = 1)
```
<strong>h = hypothesis(x,theta)</strong> will compute the <strong>sigmoid</strong> function of <strong>θ<sup>T</sup>X</strong> and return a number h with <strong>0<=h<=1</strong>.<br>
You can use <a href='https://docs.scipy.org/doc/scipy-0.15.1/reference/generated/scipy.special.expit.html'>this</a> SciPy function for calculating the sigmoid.
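For reference, the SciPy routine linked above computes exactly this logistic function; the cell after it defines the equivalent sigmoid by hand with NumPy.
```
from scipy.special import expit
import numpy as np

# expit(z) = 1 / (1 + exp(-z)), the same sigmoid defined manually below
print(expit(np.array([-5.0, 0.0, 5.0])))  # approximately [0.0067, 0.5, 0.9933]
```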
```
def sigmoid(z):
return 1/(1+np.exp(-z))
def lr_hypothesis(x,theta):
return np.dot(x,theta)
```
<strong>predict(theta1, theta2, x):</strong> outputs the predicted label of x given the trained weights of a neural network (theta1, theta2).
```
layers = 3
num_labels = 10
```
<strong>Because the original dataset mapped the digit 0 to the label "10", the trained weights follow that ordering as well. We therefore rotate the output columns one step to the right to predict the correct values.<br>
Recall that we changed the mapping of 0 from "10" back to "0" in our labels, but we cannot apply that remapping inside the weights of the neural network. So we have to do this rotation on the final output probabilities.</strong>
```
def rotate_column(array):
array_ = np.zeros(shape=(m,num_labels))
temp = np.zeros(num_labels,)
temp= array[:,9]
array_[:,1:10] = array[:,0:9]
array_[:,0] = temp
return array_
def predict(theta1,theta2,x):
z2 = np.dot(x,theta1) # hidden layer
a2 = sigmoid(z2) # hidden layer
# adding column of 1's to a2
a2 = np.append(np.ones(shape=(m,1)),a2,axis = 1)
z3 = np.dot(a2,theta2)
a3 = sigmoid(z3)
# fix the label mapping by rotating the output columns one step to the right
y_prob = rotate_column(a3)
# prediction: pick the class with the highest probability
y_pred = np.argmax(y_prob, axis=1).reshape(-1,1)
return y_pred
y_pred = predict(theta1,theta2,x)
y_pred.shape
```
Now we will compare our predicted result to the true one with the <a href='http://scikit-learn.org/stable/modules/generated/sklearn.metrics.confusion_matrix.html'>confusion_matrix</a> function of the scikit-learn library.
```
from sklearn.metrics import confusion_matrix
# Function for accuracy
def acc(confusion_matrix):
t = 0
for i in range(num_labels):
t += confusion_matrix[i][i]
f = m-t
ac = t/(m)
return (t,f,ac)
#import library
from sklearn.metrics import confusion_matrix
cm_train = confusion_matrix(y.reshape(m,),y_pred.reshape(m,))
t,f,ac = acc(cm_train)
print('With #{} correct, #{} wrong ==========> accuracy = {}%'
.format(t,f,ac*100))
cm_train
```
| github_jupyter |
[View in Colaboratory](https://colab.research.google.com/github/neoaksa/IMDB_Spider/blob/master/Movie_Analysis.ipynb)
```
# I've already uploaded three files onto Google Drive; you can use the upload function below to upload the files.
# # upload
# uploaded = files.upload()
# for fn in uploaded.keys():
# print('User uploaded file "{name}" with length {length} bytes'.format(
# name=fn, length=len(uploaded[fn])))
import pandas as pd
import numpy as np
import urllib.request
! pip install pydrive
# these classes allow you to request the Google drive API
from pydrive.auth import GoogleAuth
from pydrive.drive import GoogleDrive
from google.colab import auth
from oauth2client.client import GoogleCredentials
from googleapiclient.discovery import build
from google.colab import auth
# authenticate google drive
auth.authenticate_user()
drive_service = build('drive', 'v3')
# 1. Authenticate and create the PyDrive client.
gauth = GoogleAuth()
gauth.credentials = GoogleCredentials.get_application_default()
drive = GoogleDrive(gauth)
def downloadFile(inputfilename,outputfilename):
downloaded = drive.CreateFile({'id': inputfilename})
# assume the file is called file.csv and it's located at the root of your drive
downloaded.GetContentFile(outputfilename)
# traning file download
MovieItemFile = downloadFile("1w8Ce9An_6vJH_o5Ux7A8Zf0zc2E419xN","MovieItem.csv")
MovieReview = downloadFile("1R7kAHF9X_YnPGwsclqMn2_XA1WgVgjlC","MovieReview.csv")
MovieStar = downloadFile("15d3ZiHoqvxxdRhS9-5it979D0M60Ued0","MovieStar.csv")
df_movieItem = pd.read_csv('MovieItem.csv', delimiter=',',index_col=['id'])
df_movieReview = pd.read_csv('MovieReview.csv', delimiter=',',index_col=['id'])
df_movieStar = pd.read_csv('MovieStar.csv', delimiter=',',index_col=['id'])
# sort by index id(also known by rating)
df_movieItem = df_movieItem.sort_index(axis=0)
# rating overview
import seaborn as sns
sns.stripplot(data=df_movieItem,y='rating',jitter= True,orient = 'v' ,size=6)
plt.title('Movie Rating Overview')
plt.show()
# stars analysis
# pre-process for movieItem and movieStar
star_list = []
for index,stars in df_movieItem[['stars','stars_id']].iterrows():
star_list += [(x.lstrip().replace('"',''),y.lstrip().replace('"',''))
for x,y in zip(stars['stars'][1:-1].replace('\'','').split(','),stars['stars_id'][1:-1].replace('\'','').split(','))]
# star_id_list += [x.lstrip().replace('"','') for x in stars['stars_id'][1:-1].replace('\'','').split(',')]
# reduce duplicate
star_list = list(set(star_list))
# create a dataframe for output
df_star = pd.DataFrame(columns=['stars_id','stars','avg_rating','num_movie'])
df_star['stars_id'] = [x[1] for x in star_list]
df_star['stars'] = [x[0] for x in star_list]
for index,star_id in enumerate(df_star['stars_id']):
filter = df_movieItem['stars_id'].str.contains(star_id)
df_star['num_movie'][index] = len(df_movieItem[filter])
df_star['avg_rating'][index] = pd.to_numeric(df_movieItem[filter]['rating'].str[2:-2]).sum(axis=0)/df_star['num_movie'][index]
# left join star information
df_star
# order by # of movies
df_star = df_star.sort_values(['num_movie'],ascending=False)
print(df_star.head(10))
# order by avg rating
df_star = df_star.sort_values(['avg_rating'],ascending=False)
print(df_star.head(10))
```
According to this brief table, we can see that **Robert De Niro** appears in the most movies on the top 250 list, followed by **Harrison**, **Tom** and **Leonardo**.
```
# visual stars
import matplotlib.pyplot as plt
# figure = plt.figure()
ax1 = plt.subplot()
df_aggbyMovie = df_star[df_star['num_movie']>0].groupby(['num_movie']).agg({'stars_id':np.size})
df_aggbyMovie.columns.values[0] ='freq'
df_aggbyMovie = df_aggbyMovie.sort_values(['freq'])
acc_numMovie = np.cumsum(df_aggbyMovie['freq'])
ax1.plot(acc_numMovie)
ax1.set_xlabel('# of movies')
ax1.set_ylabel('cumulated # of stars')
ax1.set_title('Cumulated chart for each segment')
plt.gca().invert_xaxis()
plt.show()
ax2 = plt.subplot()
ax2.pie(df_aggbyMovie,
labels=df_aggbyMovie.index,
startangle=90,
autopct='%1.1f%%')
ax2.set_title('Percentage of segments')
plt.show()
# check out which movies the best stars performed in - best stars: those who took more than one movie in the top 250 list
df_star_2plus = df_star[df_star['num_movie']>1]['stars_id']
i = 0
movie_list = []
for index,row in df_movieItem[['stars_id','title']].iterrows():
for x in df_star_2plus.values:
if x in row['stars_id']:
i +=1
movie_list.append(row['title'])
break
df_movieItem[df_movieItem['title'].isin(movie_list)].head(10)
```
**165** of the top 250 movies feature at least one of the **100** best stars, where a best star is defined as one who took more than one movie on the list. We keep these 100 movie stars for the star research that follows.
```
# movie star relationship analysis
df_movie_star_plus = df_star[df_star['num_movie']>2][['stars_id','stars']]
# transfer star list to relationship list
def starlist2network(list):
bi_list = []
i = 0
while i<len(list):
j = 1
while j<len(list)-i:
bi_list.append((list[i],list[i+j]))
j += 1
i += 1
return tuple(bi_list)
star_map_list =set()
for index,stars in df_movieItem[['stars']].iterrows():
star_list = []
star_list += [x.lstrip().replace('"','')
for x in stars['stars'][1:-1].replace('\'','').split(',')]
for item in starlist2network(star_list):
if item[0] in df_movie_star_plus['stars'].values and item[1] in df_movie_star_plus['stars'].values:
star_map_list.add(tuple(sorted(item)))
!pip install networkx
import networkx as nx
import matplotlib.pyplot as plt
# Creating a Graph
G = nx.Graph() # Right now G is empty
G.add_edges_from(star_map_list)
# k controls the distance between the nodes and varies between 0 and 1
# iterations is the number of times simulated annealing is run
# default k =0.1 and iterations=50
pos = nx.spring_layout(G,k=0.55,iterations=50)
nx.draw(G,pos, with_labels=True, font_weight='bold',node_shape = 'o')
```
I picked the stars who appear in more than two movies in the top 250 list and created a relationship network for them. We can see five major blocks; if we loosen the filter, we may find more.
```
# pick 100 stars for age analysis
# rebin the year by 10 years
df_movieStar_bin = df_movieStar.copy()
df_movieStar_bin['name'] = df_movieStar_bin['name'].str[2:-2]
df_movieStar_bin['born_year'] = df_movieStar_bin['born_year'].str[2:-2]
df_movieStar_bin['born_area'] = df_movieStar_bin['born_area'].str[2:-2]
df_movieStar_bin['born_year'] = pd.cut(pd.to_numeric(df_movieStar_bin['born_year'].str[0:4]),range(1900,2020,10),right=False)
df_movieStar_bin = df_movieStar_bin.dropna()
df_movieStar_bin['born_year'] = df_movieStar_bin['born_year'].astype(str).str[1:5] + 's'
df_movieStar_bin = df_movieStar_bin[df_movieStar_bin.index.isin(df_star_2plus.values)]
fig = plt.figure(figsize=(12,6))
plt.style.use('fivethirtyeight')
ax3 = plt.subplot()
ax3.hist(df_movieStar_bin['born_year'])
ax3.set_title('Histogram of Star born year')
plt.xlabel('Star Born Year')
plt.ylabel('# of Star')
plt.show()
# star city anlysis
df_movieStar_bin['born_state'] = [x.split(',')[1] for x in df_movieStar_bin['born_area']]
df_movieStar_by_state = df_movieStar_bin.groupby(['born_state']).size().sort_values(ascending=False)
df_movieStar_by_state = df_movieStar_by_state[df_movieStar_by_state>=2].append(
pd.Series(df_movieStar_by_state[df_movieStar_by_state<2].sum(),index=['Others']))
# print(df_movieStar_by_state)
fig = plt.figure(figsize=(20,6))
plt.bar(range(len(df_movieStar_by_state)), df_movieStar_by_state, align='center', alpha=0.5)
plt.xticks(range(len(df_movieStar_by_state)), df_movieStar_by_state.index)
plt.ylabel('# of Stars')
plt.title('Movie Star by States')
plt.show()
```
Of the 100 picked movie stars, most were born between the **1930s and 1970s**. **California, Illinois, and New Jersey** are the states with the most movie stars. Even so, no single state or region is predominant.
```
# review analysis
!pip install wordcloud
!pip install multidict
from wordcloud import WordCloud
import matplotlib.pyplot as plt
import nltk
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer
import string
import multidict as multidict
nltk.download('stopwords')
nltk.download('punkt')
nltk.download('averaged_perceptron_tagger')
nltk.download('wordnet')
Lemmatizer = WordNetLemmatizer()
# remove punctuation
list_word = []
for text in df_movieReview['content'].values:
nopunc = [char.lower() for char in text if char not in string.punctuation]
nopunc = ''.join(nopunc)
list_word.append(nopunc)
# setting words unuseful
del_words = ['movie','character','film','story','wa','ha']# excluded words
word_type_list_In = ("JJ","NN") # only picked adj and noun
# word_list_Ex = ("/", "br", "<", ">","be","movie","film","have","do","none","none none")
words = {}
for sent in list_word:
text = nltk.word_tokenize(sent) # tokenize sentence to words
text = [Lemmatizer.lemmatize(word) for word in text] # get stem of words
text_tag = nltk.pos_tag(text) # get words type
for item in [x[0] for x in text_tag if x[1][:2] in word_type_list_In and x[0] not in del_words and x[0] not in stopwords.words('english')]:
if item not in words:
words[item] = 1
else:
words[item] += 1
#sort by value
sorted_words = sorted(words.items(), key=lambda x: x[1],reverse=True)
# filtered_words = ' '.join([x[0] for x in sorted_words if x[1]>=1000])
print(sorted_words[0:20])
fullTermsDict = multidict.MultiDict()
for key in words:
fullTermsDict.add(key, words[key])
# Create the wordcloud object
wordcloud = WordCloud(width=1600, height=800, margin=0,max_font_size=100).generate_from_frequencies(fullTermsDict)
# Display the generated image:
plt.imshow(wordcloud, interpolation='bilinear')
plt.axis("off")
plt.margins(x=0, y=0)
plt.show()
```
| github_jupyter |
```
#%%
from dataclasses import dataclass, field
import numpy as np
from sklearn import metrics
from tqdm import tqdm
import random
from typing import List, Dict
from sklearn.utils import resample
from scipy.special import expit
from shared import bootstrap_auc
from sklearn.model_selection import train_test_split
# start off by seeding random number generators:
RANDOM_SEED = 12345
random.seed(RANDOM_SEED)
np.random.seed(RANDOM_SEED)
# import data; choose feature space
from dataset_poetry import y_train, Xd_train, y_vali, Xd_vali
X_train = Xd_train["numeric"]
X_vali = Xd_vali["numeric"]
#%%
from sklearn.linear_model import LogisticRegression
m = LogisticRegression(random_state=RANDOM_SEED, penalty="none", max_iter=2000)
m.fit(X_train, y_train)
print("skLearn-LR AUC: {:.3}".format(np.mean(bootstrap_auc(m, X_vali, y_vali))))
print("skLearn-LR Acc: {:.3}".format(m.score(X_vali, y_vali)))
@dataclass
class LogisticRegressionModel:
# Managed to squeeze bias into this weights array by adding some +1s.
weights: np.ndarray
@staticmethod
def random(D: int) -> "LogisticRegressionModel":
weights = np.random.randn(D + 1, 1)
return LogisticRegressionModel(weights)
def decision_function(self, X: np.ndarray) -> np.ndarray:
""" Compute the expit of the signed distance from the self.weights hyperplane. """
(N, D) = X.shape
assert self.weights[:D].shape == (D, 1)
# Matrix multiplication; sprinkle transpose and assert to get the shapes you want (or remember Linear Algebra)... or both!
output = np.dot(self.weights[:D].transpose(), X.transpose())
assert output.shape == (1, N)
# now add bias and put it through the 'S'/sigmoid/'expit' function.
return np.array(expit(output + self.weights[-1])).ravel()
def predict(self, X: np.ndarray) -> np.ndarray:
return np.array(self.decision_function(X) > 0.5, dtype="int32").ravel()
def score(self, X: np.ndarray, y: np.ndarray) -> float:
""" Take predictions and compute accuracy. """
y_hat = self.predict(X)
return metrics.accuracy_score(np.asarray(y), y_hat) # type:ignore
@dataclass
class ModelTrainingCurve:
train: List[float] = field(default_factory=list)
validation: List[float] = field(default_factory=list)
def add_sample(
self,
m: LogisticRegressionModel,
X: np.ndarray,
y: np.ndarray,
X_vali: np.ndarray,
y_vali: np.ndarray,
) -> None:
self.train.append(m.score(X, y))
self.validation.append(m.score(X_vali, y_vali))
(N, D) = X_train.shape
learning_curves: Dict[str, ModelTrainingCurve] = {}
def compute_gradient_update(m, X, y) -> np.ndarray:
""" Predict using m over X; compare to y, calculate the gradient update."""
(N, D) = X.shape
y_hat = m.decision_function(X)
y_diffs = np.array(y_hat - y)
# look at all the predictions to compute our derivative:
gradient = np.zeros((D + 1, 1))
# Literally a bajillion times faster if we ditch the for loops!
# 1. scale X matrix by the y_diffs; then sum columns:
x_scaled_by_y = X.T * y_diffs
non_bias_gradient = np.sum(x_scaled_by_y, axis=1)
gradient[:D] = non_bias_gradient.reshape((D, 1))
# 2. the bias term is always 1 in X rows; so we just sum up the y_diffs for this.
gradient[D] += np.sum(y_diffs)
    # take a gradient step in the negative direction ('down')
return -(gradient / N)
def train_logistic_regression_gd(a, name="LR-GD", num_iter=100):
plot = ModelTrainingCurve()
learning_curves[name] = plot
m = LogisticRegressionModel.random(D)
# Alpha is the 'learning rate'.
alpha = a
for _ in tqdm(range(num_iter), total=num_iter, desc=name):
# Each step is scaled by alpha, to control how fast we move, overall:
m.weights += alpha * compute_gradient_update(m, X_train, y_train)
# record performance:
plot.add_sample(m, X_train, y_train, X_vali, y_vali)
return m
m = train_logistic_regression_gd(a=1, num_iter=2000)
print("LR-GD AUC: {:.3}".format(np.mean(bootstrap_auc(m, X_vali, y_vali))))
print("LR-GD Acc: {:.3}".format(m.score(X_vali, y_vali)))
def train_logistic_regression_sgd_opt(a, name="LR-SGD", num_iter=100, minibatch_size=512):
""" This is bootstrap-sampling minibatch SGD """
plot = ModelTrainingCurve()
learning_curves[name] = plot
m = LogisticRegressionModel.random(D)
alpha = a
n_samples = max(1, N // minibatch_size)
for _ in tqdm(range(num_iter), total=num_iter, desc=name):
for _ in range(n_samples):
X_mb, y_mb = resample(X_train, y_train, n_samples=minibatch_size)
m.weights += alpha * compute_gradient_update(m, X_mb, y_mb)
# record performance:
plot.add_sample(m, X_train, y_train, X_vali, y_vali)
return m
m = train_logistic_regression_sgd_opt(a=1, num_iter=2000)
print("LR-SGD AUC: {:.3}".format(np.mean(bootstrap_auc(m, X_vali, y_vali))))
print("LR-SGD Acc: {:.3}".format(m.score(X_vali, y_vali)))
## Create training curve plots:
import matplotlib.pyplot as plt
for key, dataset in learning_curves.items():
xs = np.array(list(range(len(dataset.train))))
plt.plot(xs, dataset.train, label="{}".format(key), alpha=0.7)
plt.title("Training Curves")
plt.xlabel("Iteration")
plt.ylabel("Accuracy")
plt.legend()
plt.tight_layout()
plt.savefig("graphs/p12-training-curves.png")
plt.show()
## Create validation curve plots:
for key, dataset in learning_curves.items():
xs = np.array(list(range(len(dataset.validation))))
plt.plot(xs, dataset.validation, label="{}".format(key), alpha=0.7)
plt.title("Validation Curves")
plt.xlabel("Iteration")
plt.ylabel("Accuracy")
plt.legend()
plt.tight_layout()
plt.savefig("graphs/p12-vali-curves.png")
plt.show()
```
# TODO:
#
### 1. pick SGD or GD (I recommend SGD)
I looked at both.
### 2. pick a smaller max_iter that gets good performance.
max_iter = 1000
##### Do either A or B:
##### (A) Explore Learning Rates:
##### 3. make ``alpha``, the learning rate, a parameter of the train function.
done
##### 4. make a graph including some faster and slower alphas:
##### .... alpha = [0.05, 0.1, 0.5, 1.0]
##### .... what do you notice?
An alpha of 1.0 converges faster than 0.05; a sketch for graphing several alphas is below.
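A minimal sketch of how part (A) could be graphed (my own addition), reusing `train_logistic_regression_sgd_opt` and the `learning_curves` dictionary defined above; the alphas and iteration count are illustrative choices:
```
import numpy as np
import matplotlib.pyplot as plt

for alpha in [0.05, 0.1, 0.5, 1.0]:
    # each run records its own curve under a distinct name in learning_curves
    train_logistic_regression_sgd_opt(a=alpha, name="LR-SGD a={}".format(alpha), num_iter=500)

for key, dataset in learning_curves.items():
    xs = np.array(list(range(len(dataset.validation))))
    plt.plot(xs, dataset.validation, label=key, alpha=0.7)
plt.title("Validation Accuracy for Different Learning Rates")
plt.xlabel("Iteration")
plt.ylabel("Accuracy")
plt.legend()
plt.tight_layout()
plt.show()
```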
##### (B) Explore 'Automatic/Early Stopping'
##### 3. split the 'training' data into **another** validation set.
##### 4. modify the SGD/GD loop to keep track of loss/accuarcy on this mini validation set at each iteration.
##### 5. add a tolerance parameter, and stop looping when the loss/accuracy on the mini validation set stops going down.
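A minimal sketch of option (B) (my own illustration, not the assignment solution): carve a mini validation set out of the training data and stop once its accuracy stops improving. It reuses `LogisticRegressionModel` and `compute_gradient_update` from above; `tolerance` and `patience` are assumed values.
```
X_tr, X_mini, y_tr, y_mini = train_test_split(
    X_train, y_train, test_size=0.2, random_state=RANDOM_SEED)

m = LogisticRegressionModel.random(D)
alpha, tolerance, patience = 0.5, 1e-4, 20     # assumed hyperparameters
best_acc, since_best = 0.0, 0

for it in range(2000):
    m.weights += alpha * compute_gradient_update(m, X_tr, y_tr)
    acc = m.score(X_mini, y_mini)              # accuracy on the mini validation split
    if acc > best_acc + tolerance:
        best_acc, since_best = acc, 0
    else:
        since_best += 1
    if since_best >= patience:                 # no improvement for `patience` iterations
        print("Early stop at iteration", it, "best mini-validation accuracy:", best_acc)
        break
```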
| github_jupyter |
```
# This cell is added by sphinx-gallery
!pip install mrsimulator --quiet
%matplotlib inline
import mrsimulator
print(f'You are using mrsimulator v{mrsimulator.__version__}')
```
# ²⁹Si 1D MAS spinning sideband (CSA)
After acquiring an NMR spectrum, we often require a least-squares analysis to
determine site populations and nuclear spin interaction parameters. Generally, this
comprises two steps:
- create a fitting model, and
- determine the model parameters that give the best fit to the spectrum.
Here, we will use the mrsimulator objects to create a fitting model, and use the
`LMFIT <https://lmfit.github.io/lmfit-py/>`_ library for performing the least-squares
fitting optimization.
In this example, we use a synthetic $^{29}\text{Si}$ NMR spectrum of cuspidine,
generated from the tensor parameters reported by Hansen `et al.` [#f1]_, to
demonstrate a simple fitting procedure.
We will begin by importing relevant modules and establishing figure size.
```
import csdmpy as cp
import matplotlib.pyplot as plt
from lmfit import Minimizer, Parameters
from mrsimulator import Simulator, SpinSystem, Site
from mrsimulator.methods import BlochDecaySpectrum
from mrsimulator import signal_processing as sp
from mrsimulator.utils import spectral_fitting as sf
```
## Import the dataset
Use the `csdmpy <https://csdmpy.readthedocs.io/en/stable/index.html>`_
module to load the synthetic dataset as a CSDM object.
```
file_ = "https://sandbox.zenodo.org/record/835664/files/synthetic_cuspidine_test.csdf?"
synthetic_experiment = cp.load(file_).real
# standard deviation of noise from the dataset
sigma = 0.03383338
# convert the dimension coordinates from Hz to ppm
synthetic_experiment.x[0].to("ppm", "nmr_frequency_ratio")
# Plot of the synthetic dataset.
plt.figure(figsize=(4.25, 3.0))
ax = plt.subplot(projection="csdm")
ax.plot(synthetic_experiment, "k", alpha=0.5)
ax.set_xlim(50, -200)
plt.grid()
plt.tight_layout()
plt.show()
```
## Create a fitting model
Before you can fit a simulation to an experiment, in this case, the synthetic dataset,
you will first need to create a fitting model. We will use the ``mrsimulator`` objects
as tools in creating a model for the least-squares fitting.
**Step 1:** Create initial guess sites and spin systems.
The initial guess is often based on some prior knowledge about the system under
investigation. For the current example, we know that Cuspidine is a crystalline silica
polymorph with one crystallographic Si site. Therefore, our initial guess model is a
single $^{29}\text{Si}$ site spin system. For non-linear fitting algorithms, as
a general recommendation, the initial guess model parameters should be a good starting
point for the algorithms to converge.
```
# the guess model comprising of a single site spin system
site = Site(
isotope="29Si",
isotropic_chemical_shift=-82.0, # in ppm,
shielding_symmetric={"zeta": -63, "eta": 0.4}, # zeta in ppm
)
spin_system = SpinSystem(
name="Si Site",
description="A 29Si site in cuspidine",
sites=[site], # from the above code
abundance=100,
)
```
**Step 2:** Create the method object.
The method should be the same as the one used
in the measurement. In this example, we use the `BlochDecaySpectrum` method. Note,
when creating the method object, the value of the method parameters must match the
respective values used in the experiment.
```
MAS = BlochDecaySpectrum(
channels=["29Si"],
magnetic_flux_density=7.1, # in T
rotor_frequency=780, # in Hz
spectral_dimensions=[
{
"count": 2048,
"spectral_width": 25000, # in Hz
"reference_offset": -5000, # in Hz
}
],
experiment=synthetic_experiment, # add the measurement to the method.
)
```
**Step 3:** Create the Simulator object, add the method and spin system objects, and
run the simulation.
```
sim = Simulator(spin_systems=[spin_system], methods=[MAS])
sim.run()
```
**Step 4:** Create a SignalProcessor class and apply post simulation processing.
```
processor = sp.SignalProcessor(
operations=[
sp.IFFT(), # inverse FFT to convert frequency based spectrum to time domain.
sp.apodization.Exponential(FWHM="200 Hz"), # apodization of time domain signal.
sp.FFT(), # forward FFT to convert time domain signal to frequency spectrum.
sp.Scale(factor=3), # scale the frequency spectrum.
]
)
processed_data = processor.apply_operations(data=sim.methods[0].simulation).real
```
**Step 5:** Plot the spectrum. We also plot the synthetic dataset for comparison.
```
plt.figure(figsize=(4.25, 3.0))
ax = plt.subplot(projection="csdm")
ax.plot(synthetic_experiment, "k", linewidth=1, label="Experiment")
ax.plot(processed_data, "r", alpha=0.75, linewidth=1, label="guess spectrum")
ax.set_xlim(50, -200)
plt.legend()
plt.grid()
plt.tight_layout()
plt.show()
```
## Setup a Least-squares minimization
Now that our model is ready, the next step is to set up a least-squares minimization.
You may use any optimization package of choice, here we show an application using
LMFIT. You may read more on the LMFIT
`documentation page <https://lmfit.github.io/lmfit-py/index.html>`_.
### Create fitting parameters
Next, you will need a list of parameters that will be used in the fit. The *LMFIT*
library provides a `Parameters <https://lmfit.github.io/lmfit-py/parameters.html>`_
class to create a list of parameters.
```
site1 = spin_system.sites[0]
params = Parameters()
params.add(name="iso", value=site1.isotropic_chemical_shift)
params.add(name="eta", value=site1.shielding_symmetric.eta, min=0, max=1)
params.add(name="zeta", value=site1.shielding_symmetric.zeta)
params.add(name="FWHM", value=processor.operations[1].FWHM)
params.add(name="factor", value=processor.operations[3].factor)
```
### Create a minimization function
Note, the above set of parameters does not know about the model. You will need to
set up a function that will
- update the parameters of the `Simulator` and `SignalProcessor` object based on the
LMFIT parameter updates,
- re-simulate the spectrum based on the updated values, and
- return the difference between the experiment and simulation.
```
def minimization_function(params, sim, processor, sigma=1):
values = params.valuesdict()
# the experiment data as a Numpy array
intensity = sim.methods[0].experiment.y[0].components[0].real
# Here, we update simulation parameters iso, eta, and zeta for the site object
site = sim.spin_systems[0].sites[0]
site.isotropic_chemical_shift = values["iso"]
site.shielding_symmetric.eta = values["eta"]
site.shielding_symmetric.zeta = values["zeta"]
# run the simulation
sim.run()
# update the SignalProcessor parameter and apply line broadening.
# update the scaling factor parameter at index 3 of operations list.
processor.operations[3].factor = values["factor"]
# update the exponential apodization FWHM parameter at index 1 of operations list.
processor.operations[1].FWHM = values["FWHM"]
# apply signal processing
processed_data = processor.apply_operations(sim.methods[0].simulation)
# return the difference vector.
diff = intensity - processed_data.y[0].components[0].real
return diff / sigma
```
<div class="alert alert-info"><h4>Note</h4><p>To automate the fitting process, we provide a function to parse the
``Simulator`` and ``SignalProcessor`` objects for parameters and construct an
*LMFIT* ``Parameters`` object. Similarly, a minimization function, analogous to
the above `minimization_function`, is also included in the *mrsimulator*
library. See the next example for usage instructions.</p></div>
### Perform the least-squares minimization
With the synthetic dataset, simulation, and the initial guess parameters, we are ready
to perform the fit. To fit, we use the *LMFIT*
`Minimizer <https://lmfit.github.io/lmfit-py/fitting.html>`_ class.
```
minner = Minimizer(minimization_function, params, fcn_args=(sim, processor, sigma))
result = minner.minimize()
result
```
The plot of the fit, measurement and the residuals is shown below.
```
best_fit = sf.bestfit(sim, processor)[0]
residuals = sf.residuals(sim, processor)[0]
plt.figure(figsize=(4.25, 3.0))
ax = plt.subplot(projection="csdm")
ax.plot(synthetic_experiment, "k", linewidth=1, label="Experiment")
ax.plot(best_fit, "r", alpha=0.75, linewidth=1, label="Best Fit")
ax.plot(residuals, alpha=0.75, linewidth=1, label="Residuals")
ax.set_xlabel("Frequency / Hz")
ax.set_xlim(50, -200)
plt.legend()
plt.grid()
plt.tight_layout()
plt.show()
```
.. [#f1] Hansen, M. R., Jakobsen, H. J., Skibsted, J., $^{29}\text{Si}$
Chemical Shift Anisotropies in Calcium Silicates from High-Field
$^{29}\text{Si}$ MAS NMR Spectroscopy, Inorg. Chem. 2003,
**42**, *7*, 2368-2377.
`DOI: 10.1021/ic020647f <https://doi.org/10.1021/ic020647f>`_
| github_jupyter |
<a href="https://colab.research.google.com/github/lamahechag/pytorch_tensorflow/blob/master/pytorch.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Pytorch
PyTorch is a framework that challenges you to build an ANN almost from scratch.
This tutorial aims to explain how to load non-image data in PyTorch and how to create classification models.
1. Learn how to generate synthetic data for classification. The more complex the two-dimensional pattern, the larger the high-dimensional transformation needed to find a hyperplane that separates the classes.
1. Understand the basic components of a neural network in PyTorch: layers, forward pass, gradient calculation, and weight updates with a gradient descent method.
1. Draw a parallel view of TensorFlow and PyTorch.
1. Apply transformations to the loss function for training with imbalanced data: class weights, focal loss, etc. (a small sketch follows this list).
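A minimal, self-contained sketch of point 4 (my own illustration, not code from this notebook): `BCEWithLogitsLoss` accepts a `pos_weight` argument that up-weights the rare positive class; the counts below are made-up numbers.
```
import torch
import torch.nn as nn

n_neg, n_pos = 9500, 500                       # made-up class counts for a 95/5 imbalanced problem
pos_weight = torch.tensor([n_neg / n_pos])     # up-weight the rare positive class
criterion = nn.BCEWithLogitsLoss(pos_weight=pos_weight)

logits = torch.randn(8, 1)                     # fake raw model outputs (no sigmoid applied)
targets = torch.randint(0, 2, (8, 1)).float()  # fake binary labels
print(criterion(logits, targets).item())
```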
__References__
https://towardsdatascience.com/pytorch-tabular-binary-classification-a0368da5bb89
https://towardsdatascience.com/pytorch-basics-intro-to-dataloaders-and-loss-functions-868e86450047
https://towardsdatascience.com/understanding-pytorch-with-an-example-a-step-by-step-tutorial-81fc5f8c4e8e
https://cs230.stanford.edu/blog/pytorch/
```
import torch
import torch.nn as nn
import torch.optim as optim
from torch.utils.data import Dataset, DataLoader
# plotlib and sklearn modules
import numpy as np
from sklearn.datasets import make_moons, make_circles, make_classification
from sklearn.metrics import accuracy_score, f1_score
from sklearn.model_selection import train_test_split
from matplotlib import pyplot as plt
from matplotlib.colors import ListedColormap
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
print(device)
# binary imbalanced set
X_imb, y_imb = make_classification(n_samples=10000, n_features=2, n_redundant=0, n_informative=2,
random_state=1, n_clusters_per_class=1, weights=[0.95, 0.05], class_sep=1.5)
rng = np.random.RandomState(2)
X_imb += 2 * rng.uniform(size=X_imb.shape)
# multiclass set
X_multi, y_multi = make_classification(n_samples=10000, n_features=2, n_informative=2,
n_redundant=0, n_repeated=0, n_classes=3,
n_clusters_per_class=1, weights=[0.33, 0.33, 0.33],
class_sep=0.8, random_state=7)
# non-linear separable
X_moon, y_moon = make_moons(n_samples=10000, noise=0.3, random_state=3)
plt.scatter(X_imb[:, 0], X_imb[:, 1], marker='o', c=y_imb,
s=25, edgecolor='k')
plt.scatter(X_moon[:, 0], X_moon[:, 1], marker='o', c=y_moon,
s=25, edgecolor='k')
plt.scatter(X_multi[:, 0], X_multi[:, 1], marker='o', c=y_multi,
s=25, edgecolor='k')
```
# Data loader
We create a custom dataset class to iterate over our data in the PyTorch dataloader:
`trainData(torch.FloatTensor(X_train), torch.FloatTensor(y_train))`
Then we use `DataLoader` to enable automatic batching. The function `loader_data()` gathers the whole pipeline that loads the data into PyTorch tensors.
```
class trainData(Dataset):
def __init__(self, X_data, y_data):
self.X_data = X_data
self.y_data = y_data
def __getitem__(self, index):
return self.X_data[index], self.y_data[index]
def __len__ (self):
return len(self.X_data)
class testData(Dataset):
def __init__(self, X_data):
self.X_data = X_data
def __getitem__(self, index):
return self.X_data[index]
def __len__ (self):
return len(self.X_data)
def loader_data(X, y, BATCH_SIZE=500):
# create function that recive the X and y, batch and returns: train_loader and test_loader.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33,
random_state=69, stratify=y_imb)
train_data = trainData(torch.FloatTensor(X_train), torch.FloatTensor(y_train))
test_data = testData(torch.FloatTensor(X_test))
train_loader = DataLoader(dataset=train_data, batch_size=BATCH_SIZE, shuffle=True)
test_loader = DataLoader(dataset=test_data, batch_size=100)
return train_loader, test_loader, y_test
```
# Pytorch Model
To build a model in PyTorch, you define a `Class`. The class has two parts:
1. `__init__` defines the different calculation elements, such as hidden layers, activation functions, dropouts, etc.
1. `forward` method, where you define how the input flows through each calculation element.
You will see that the binary classifiers have no `sigmoid` function in the output layer; in PyTorch the sigmoid can be included in the loss function, which will be defined later.
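As a tiny illustrative check (my addition, not part of the original notebook), `BCEWithLogitsLoss` applies the sigmoid internally, so it matches `BCELoss` applied to sigmoid-transformed logits:
```
import torch
import torch.nn as nn

logits = torch.tensor([[0.7], [-1.2]])
targets = torch.tensor([[1.0], [0.0]])
loss_a = nn.BCEWithLogitsLoss()(logits, targets)
loss_b = nn.BCELoss()(torch.sigmoid(logits), targets)
print(loss_a.item(), loss_b.item())  # the two values agree up to floating point
```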
```
class LogisClassifier(nn.Module):
def __init__(self, num_input=2):
super(LogisClassifier, self).__init__()
self.num_input = num_input
# Number of input features
self.layer_1 = nn.Linear(self.num_input, 1)
def forward(self, inputs):
x = self.layer_1(inputs)
return x
class binaryClassification(nn.Module):
def __init__(self, num_input=2):
super(binaryClassification, self).__init__()
self.num_input = num_input
# Number of input features
self.layer_1 = nn.Linear(self.num_input, 120)
self.layer_2 = nn.Linear(120, 64)
self.layer_out = nn.Linear(64, 1)
self.relu = nn.ReLU()
self.dropout = nn.Dropout(p=0.2)
self.batchnorm1 = nn.BatchNorm1d(120)
self.batchnorm2 = nn.BatchNorm1d(64)
def forward(self, inputs):
x = self.relu(self.layer_1(inputs))
x = self.batchnorm1(x)
x = self.relu(self.layer_2(x))
x = self.batchnorm2(x)
x = self.dropout(x)
x = self.layer_out(x)
return x
class Multiclass(nn.Module):
def __init__(self):
super().__init__()
self.fc1 = nn.Linear(2, 50)
self.relu1 = nn.ReLU()
self.dout = nn.Dropout(0.2)
self.fc2 = nn.Linear(50, 100)
self.prelu = nn.PReLU(1)
self.out = nn.Linear(100, 1)
self.out_act = nn.Softmax(dim=1)
def forward(self, input_):
a1 = self.fc1(input_)
h1 = self.relu1(a1)
dout = self.dout(h1)
a2 = self.fc2(dout)
h2 = self.prelu(a2)
a3 = self.out(h2)
y = self.out_act(a3)
return y
```
# Training loop
In a neural network, the learning process is as follows: calculate the output, calculate the gradient, do the backward pass, and update the weights.
Within the training loop, you should do this in each iteration.
1. reset gradient to zero.
1. perform backward step.
1. update parameters.
The accuracy measure and the evaluation routine also need to be defined as PyTorch operations before training.
```
def binary_acc(y_pred, y_test):
y_pred_tag = torch.round(torch.sigmoid(y_pred))
correct_results_sum = (y_pred_tag == y_test).sum().float()
acc = correct_results_sum/y_test.shape[0]
acc = torch.round(acc * 100)
return acc
def eval_testdata(model, test_loader):
y_pred_list = []
model.eval()
# this 'with' is to evaluate without a gradient step.
with torch.no_grad():
for X_batch in test_loader:
X_batch = X_batch.to(device)
y_test_pred = model(X_batch)
y_test_pred = torch.sigmoid(y_test_pred)
y_pred_tag = torch.round(y_test_pred)
y_pred_list += y_pred_tag.cpu().numpy().squeeze().tolist()
return y_pred_list
def train_model(model, criterion, optimizer, train_loader, EPOCHS, test_loader, y_test):
model.train()
for e in range(1, EPOCHS+1):
epoch_loss = 0
epoch_acc = 0
for X_batch, y_batch in train_loader:
X_batch, y_batch = X_batch.to(device), y_batch.to(device)
y_pred = model(X_batch)
loss = criterion(y_pred, y_batch.unsqueeze(1))
acc = binary_acc(y_pred, y_batch.unsqueeze(1))
# Zero gradients, perform a backward pass, and update the weights.
optimizer.zero_grad()
loss.backward()
optimizer.step()
epoch_loss += loss.item()
epoch_acc += acc.item()
y_pred_test = eval_testdata(model, test_loader)
eval_acc = round(accuracy_score(y_true=y_test, y_pred=y_pred_test), 2)
eval_f1 = round(f1_score(y_true=y_test, y_pred=y_pred_test),2)
print(f'Epoch {e+0:03}: | Loss: {epoch_loss/len(train_loader):.5f} | Acc: {epoch_acc/len(train_loader):.3f} | Acc_eval: {eval_acc} | f1_eval: {eval_f1}')
```
# Declare model and train
We have defined a training loop, but we need a loss function and an optimizer to perform the gradient descent steps.
In the first line the data are loaded, followed by the model declaration; the model is then sent to the `GPU` device in this case.
## First experiment: Logistic classifier.
```
train_loader, test_loader, y_test = loader_data(X_moon, y_moon, BATCH_SIZE=10)
model = LogisClassifier()
model.to(device)
# define loss function
criterion = nn.BCEWithLogitsLoss()
LEARNING_RATE = 0.001
# define gradient decent optimizer
optimizer = optim.Adam(model.parameters(), lr=LEARNING_RATE)
print(model)
# now train(fit) the model
EPOCHS = 100
train_model(model, criterion, optimizer, train_loader, EPOCHS, test_loader, y_test)
```
| github_jupyter |
```
import pandas as pd
import scipy.stats as st
import matplotlib.pyplot as plt
import numpy as np
import operator
```
# Crimes
### Svetozar Mateev
## Putting Crime in the US in Context
First I remove the NaN values, then I calculate the total crimes by dividing the population by 100,000 and multiplying it by the crimes per capita.
```
crime_reports=pd.read_csv("report.csv")
crime_reports=crime_reports.dropna()
crime_reports=crime_reports.reset_index()
crime_reports["total_crimes"]=(crime_reports.population/100000*crime_reports.crimes_percapita)
#crime_reports[["population",'crimes_percapita','total_crimes']]
```
• Have a look at the “months_reported” column values. What do they mean? What percent of the rows have less than 12 months? How significant is that?
```
crime_reports["months_reported"].unique()
less_than_twelve=crime_reports[crime_reports.months_reported<12]
print(str(len(less_than_twelve)/len(crime_reports.months_reported)*100)+'%')
```
The months_reported column indicates how many months of the year were reported. Only about 1.9% of the rows have fewer than 12 months reported, which, being well under 5% of the data, is not a significant concern.
• Overall crime popularity: Create a bar chart of crime frequencies (total, not per capita). Display the type of crime and total occurrences (sum over all years). Sort largest to smallest. Are there any patterns? Which crime is most common?
```
homicides_total_sum=crime_reports.homicides.sum()
rapes_total_sum=crime_reports.rapes.sum()
assaults_total_sum=crime_reports.assaults.sum()
robberies_total_sum=crime_reports.robberies.sum()
total_crimes_total_sum= crime_reports.total_crimes.sum()
homicides_frequency=homicides_total_sum/total_crimes_total_sum
rapes_frequency=rapes_total_sum/total_crimes_total_sum
assaults_frequency=assaults_total_sum/total_crimes_total_sum
robberies_frequency=robberies_total_sum/total_crimes_total_sum
plt.bar(height=[assaults_frequency,robberies_frequency,rapes_frequency,homicides_frequency],left=[1,2,3,4], align = "center",width=0.2)
plt.xticks([1,2,3,4,],['Assaults','Robberies','Rapes','Homicides'])
plt.ylabel("Frequency of a crime")
plt.show()
```
The most frequent crimes are assaults, and we can see from the diagram that less serious crimes are committed more often.
• Crime popularity by year: Break down the analysis of the previous graph by year. What is the most common crime (total, not per capita) for each year? What is the least common one?
```
homicides_sum=0
rapes_sum=0
assaults_sum=0
robberies_sum=0
for year in crime_reports.report_year.unique():
year_df=crime_reports[crime_reports.report_year==year]
homicides_sum_year=year_df.homicides.sum()
rapes_sum_year=year_df.rapes.sum()
assaults_sum_year=year_df.assaults.sum()
robberies_sum_year=year_df.robberies.sum()
if(homicides_sum_year>rapes_sum_year and homicides_sum_year>assaults_sum_year and homicides_sum_year>robberies_sum_year):
        homicides_sum+=1
print(str(year)+' '+"homicides")
elif(homicides_sum_year<rapes_sum_year and rapes_sum_year>assaults_sum_year and rapes_sum_year>robberies_sum_year):
rapes_sum+=1
print(str(year)+' '+"rapes")
elif(homicides_sum_year<assaults_sum_year and rapes_sum_year<assaults_sum_year and assaults_sum_year>robberies_sum_year):
assaults_sum+=1
print(str(year)+' '+"assaults")
elif(homicides_sum_year<robberies_sum_year and rapes_sum_year<robberies_sum_year and assaults_sum_year<robberies_sum_year):
robberies_sum+=1
print(str(year)+' '+"robberies")
plt.bar(height=[assaults_sum,robberies_sum,homicides_sum,rapes_sum],left=[1,2,3,4],align='center')#most common one through the years
plt.xticks([1,2,3,4,],['Assaults','Robberies','Homicides','Rapes'])
plt.ylabel("Times a crime was most often for a year")
plt.show()
```
We can see from the bar chart that assaults were the most common crime in almost thirty of the years, while homicides and rapes were never the most common crime in any year.
• Crime evolution (e.g. crime rates as a function of time): How do crime rates per capita evolve over the years? Create a plot (or a series of plots) displaying how each rate evolves. Create another plot of all crimes (total, not per capita) over the years.
```
rapes_per_capita=[]
homicides_per_capita=[]
assaults_per_capita=[]
robberies_per_capita=[]
for year in crime_reports.report_year.unique():
year_df=crime_reports[crime_reports.report_year==year]
homicides_mean_year=year_df.homicides_percapita.mean()
rapes_mean_year=year_df.rapes_percapita.mean()
assaults_mean_year=year_df.assaults_percapita.mean()
robberies_mean_year=year_df.robberies_percapita.mean()
homicides_per_capita.append(homicides_mean_year)
rapes_per_capita.append(rapes_mean_year)
assaults_per_capita.append(assaults_mean_year)
robberies_per_capita.append(robberies_mean_year)
plt.plot(crime_reports.report_year.unique(),rapes_per_capita)
plt.suptitle("Rapes")
plt.xlabel("Years")
plt.ylabel('Crimes per capira')
plt.show()
plt.plot(crime_reports.report_year.unique(),homicides_per_capita)
plt.suptitle("Homicides")
plt.xlabel("Years")
plt.ylabel('Crimes per capira')
plt.show()
plt.plot(crime_reports.report_year.unique(),assaults_per_capita)
plt.suptitle("Assaults")
plt.xlabel("Years")
plt.ylabel('Crimes per capira')
plt.show()
plt.plot(crime_reports.report_year.unique(),robberies_per_capita)
plt.suptitle("Robberies")
plt.xlabel("Years")
plt.ylabel('Crimes per capira')
plt.show()
```
From the plots we can see that each crime now has a significantly lower rate per capita than at its peak, which for all of them was between 1990 and 1995.
```
rapes_per_year=[]
homicides_per_year=[]
assaults_per_year=[]
robberies_per_year=[]
for year in crime_reports.report_year.unique():
year_df=crime_reports[crime_reports.report_year==year]
homicides_mean_year=year_df.homicides.sum()
rapes_mean_year=year_df.rapes.sum()
assaults_mean_year=year_df.assaults.sum()
robberies_mean_year=year_df.robberies.sum()
homicides_per_year.append(homicides_mean_year)
rapes_per_year.append(rapes_mean_year)
assaults_per_year.append(assaults_mean_year)
robberies_per_year.append(robberies_mean_year)
plt.plot(crime_reports.report_year.unique(),rapes_per_year,label="Rapes")
plt.plot(crime_reports.report_year.unique(),assaults_per_year,label="Assaults")
plt.plot(crime_reports.report_year.unique(),homicides_per_year,label="Homicides")
plt.plot(crime_reports.report_year.unique(),robberies_per_year,label="Robberies")
plt.legend()
plt.ylabel("Number of crimes")
plt.xlabel("Years")
plt.show()
```
Again our observation is confirmed: the peak of crime is around 1990, and at present there are far fewer crimes, except for rapes, which began to rise slightly between 2010 and 2015.
## Crimes by States
• “Criminal” jurisdictions: Plot the sum of all crimes (total, not per capita) for each jurisdiction. Sort largest to smallest. Are any jurisdictions more prone to crime?
```
#agency_jurisdiction
jurisdicitons=[]
counter=0
crimes_per_jurisdiction=[]
agencies_df=crime_reports.sort_values('violent_crimes',ascending=False)
for jurisdiciton in agencies_df.agency_jurisdiction.unique():
jurisdicition_df=agencies_df[agencies_df.agency_jurisdiction==jurisdiciton]
all_crimes=jurisdicition_df.violent_crimes.sum()
crimes_per_jurisdiction.append(all_crimes)
counter+=1
jurisdicitons.append(jurisdiciton)
if counter==10:
break
df_plottt=pd.DataFrame({'area':jurisdicitons,'num':crimes_per_jurisdiction})
df_plottt=df_plottt.sort_values('num',ascending=False)
plt.bar(height=df_plottt.num,left=[1,2,3,4,5,6,7,8,9,10],align='center')
plt.xticks([1,2,3,4,5,6,7,8,9,10],df_plottt.area,rotation='vertical')
plt.ylabel("Number of Crimes")
plt.show()
```
From the bar chart we can see that the New York City, NY jurisdiction has the most crimes.
• “Criminal” jurisdictions, part 2: Create the same type of chart as above, but use the crime rates per capita this time. Are you getting the same distribution? Why? You may need data from the “population” column to answer this. Don’t perform significance tests, just inspect the plots.
```
jurisdicitons=[]
counter=0
crimes_per_jurisdiction=[]
population=[]
# total crimes per capita for each jurisdiction; a groupby-transform keeps the result aligned with each row
agencies_df = crime_reports.copy()
agencies_df['crimes_percapita_per_agency'] = agencies_df.groupby('agency_jurisdiction')['crimes_percapita'].transform('sum')
agencies_df = agencies_df.sort_values('crimes_percapita_per_agency', ascending=False)
for jurisdiciton in agencies_df.agency_jurisdiction.unique():
jurisdicition_df=agencies_df[agencies_df.agency_jurisdiction==jurisdiciton]
all_crimes=jurisdicition_df.crimes_percapita.sum()
crimes_per_jurisdiction.append(all_crimes)
counter+=1
jurisdicitons.append(jurisdiciton)
population.append(jurisdicition_df.population.mean())
if counter==10:
break
df_plot=pd.DataFrame({'jurisdicitons':jurisdicitons,'num':crimes_per_jurisdiction})
df_plot=df_plot.sort_values('num',ascending=False,axis=0)
plt.bar(height=df_plot.num,left=[1,2,3,4,5,6,7,8,9,10],align='center')
plt.xticks([1,2,3,4,5,6,7,8,9,10],df_plot.jurisdicitons,rotation='vertical')
plt.ylabel("Number of Crimes")
plt.show()
df_pop_plot=pd.DataFrame({'area':jurisdicitons,'num':population})
df_pop_plot=df_pop_plot.sort_values('num',ascending=False,axis=0)
plt.bar(height=df_pop_plot.num,left=[1,2,3,4,5,6,7,8,9,10],align='center')
plt.xticks([1,2,3,4,5,6,7,8,9,10],df_pop_plot.area,rotation='vertical')
plt.ylabel("Population")
plt.show()
```
We can see that, contrary to the previous plot, Miami has the highest crime rate per capita. There also appears to be little correlation between the number of crimes per capita and the population.
• “Criminal states”: Create the same type of chart as in the first subproblem, but use the states instead. You can get the state name in two ways: either the first two letters of the agency_code column or the symbols after the comma in the agency_jurisdiction column.
```
parts = crime_reports['agency_jurisdiction'].str.extract(r"(\w+), (\w+)", expand=True)
parts.columns = ['something_else', 'state']
crime_reports['state']=parts['state']
crime_states=[]
total_crimes=[]
counter=0
gencies_df=crime_reports.sort_values('violent_crimes',ascending=False)
for state in crime_reports.state.unique():
jurisdicition_df=crime_reports[crime_reports.state==state]
all_crimes=jurisdicition_df.violent_crimes.sum()
total_crimes.append(all_crimes)
crime_states.append(state)
counter+=1
jurisdicitons.append(jurisdiciton)
if counter==10:
break
plot_df=pd.DataFrame({'states':crime_states,'num':total_crimes})
plot_df=plot_df.sort_values('num',ascending=False)
plt.bar(height=plot_df.num,left=[1,2,3,4,5,6,7,8,9,10],align='center')
plt.xticks([1,2,3,4,5,6,7,8,9,10],plot_df.states)
plt.ylabel("Number Of Crimes")
plt.show()
```
From the chart we can see that New York has the biggest number of crimes.
• Hypothesis testing: Are crime rates per capita related to population, e. g. does a more densely populated community produce more crime (because there are more people), or less crime (because there is a better police force)? Plot the total number of crimes vs. population to find out. Is there any correlation? If so, what is it? Is the correlation significant?
```
total_crimes=[]
agency_jurisdiction=[]
population=[]
counter=0
for jurisdiction in crime_reports.agency_jurisdiction.unique():
jurisdicition_df=crime_reports[crime_reports.agency_jurisdiction==jurisdiction]
all_crimes=jurisdicition_df.violent_crimes.sum()
total_crimes.append(all_crimes)
counter+=1
agency_jurisdiction.append(jurisdiction)
population.append(jurisdicition_df.population.mean())
if counter==10:
break
print(len(total_crimes),len(agency_jurisdiction))
plot_df=pd.DataFrame({'states':agency_jurisdiction,'num':total_crimes,'popu':population})
plot_df=plot_df.sort_values('num',ascending=False)
plt.bar(height=plot_df.popu,left=[1,2,3,4,5,6,7,8,9,10],align='center',color='r',label="Population")
plt.bar(height=plot_df.num,left=[1,2,3,4,5,6,7,8,9,10],align='center',color='b',label="Crimes")
plt.xticks([1,2,3,4,5,6,7,8,9,10],plot_df.states,rotation='vertical')
plt.ylabel("Number")
plt.legend()
plt.show()
```
We can see that there isn't a clear correlation between population and crime: some places like Atlanta, GA suggest there might be, but others like Baltimore County, MD show totally different results.
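As a more direct check (my own addition), we can aggregate over all jurisdictions and compute a correlation coefficient with `scipy.stats`; the named-aggregation syntax assumes a reasonably recent pandas version.
```
per_jur = crime_reports.groupby('agency_jurisdiction').agg(
    total_crimes=('violent_crimes', 'sum'),
    population=('population', 'mean')).dropna()

r, p_value = st.pearsonr(per_jur['population'], per_jur['total_crimes'])
print("Pearson r = {:.2f}, p-value = {:.3g}".format(r, p_value))

plt.scatter(per_jur['population'], per_jur['total_crimes'], alpha=0.5)
plt.xlabel('Population')
plt.ylabel('Total violent crimes')
plt.show()
```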
## Additional data
First I drop some unnecessary columns and then transform the dates to datetime objects.
```
crimes=pd.read_csv("crimes.csv")
crimes=crimes.drop(['x','y','OBJECTID','ESRI_OID','Time'],axis=1)
crimes.columns=['publicaddress', 'controlnbr', 'CCN', 'precinct', 'reported_date',
'begin_date', 'offense', 'description', 'UCRCode', 'entered_date',
'long', 'lat', 'neighborhood', 'lastchanged', 'last_update_date']
crimes.dtypes
#2015-09-21T14:16:59.000Z
# the timestamps are ISO 8601 strings (see comment above), so let pandas infer the format
crimes['reported_date']=pd.to_datetime(crimes['reported_date'])
crimes['entered_date']=pd.to_datetime(crimes['entered_date'])
crimes['lastchanged']=pd.to_datetime(crimes['lastchanged'])
crimes['last_update_date']=pd.to_datetime(crimes['last_update_date'])
crimes['begin_date']=pd.to_datetime(crimes['begin_date'])
crimes=crimes.dropna()
```
• Total number of crimes per year: Count all crimes for years in the dataset (2010-2016). Print the total number.
```
print(str(len(crimes))+" "+"crimes between 2010 and 2016")
```
• Plot how crimes evolve each year
```
year_10=0
year_11=0
year_12=0
year_13=0
year_14=0
year_15=0
year_16=0
for date in crimes.begin_date:
if date.year==2010:
year_10+=1
elif date.year==2011:
year_11+=1
elif date.year==2012:
year_12+=1
elif date.year==2013:
year_13+=1
elif date.year==2014:
year_14+=1
elif date.year==2015:
year_15+=1
elif date.year==2016:
year_16+=1
plt.bar(height=[year_10,year_11,year_12,year_13,year_14,year_15,year_16],left=[1, 2, 3, 4 ,5 ,6 ,7],align='center')
plt.ylabel("Number of Crimes")
plt.xticks([1, 2, 3, 4 ,5 ,6 ,7],['2010','2011','2012','2013','2014','2015','2016',])
plt.show()
```
From 2010 to 2012 there is a slight rise in the number of crimes. However, from 2012 to 2016 there is a drop in the number of crimes committed.
• Compare the previous plot to the plots in the previous exercise.
Note: In order to make comparison better, plot the data for all states again, but this time filter only years 2010-2016. Does the crime rate in MN have any connection to the total crime rate? What percentage of the total crime rate (in all given states) is given by MN?
```
crime_states=[]
total_crimes=[]
counter=0
gencies_df=crime_reports.sort_values('violent_crimes',ascending=False)
for state in crime_reports.state.unique():
jurisdicition_df=crime_reports[crime_reports.state==state]
right_year=jurisdicition_df[jurisdicition_df.report_year>2009]
all_crimes=right_year.violent_crimes.sum()
total_crimes.append(all_crimes)
crime_states.append(state)
counter+=1
jurisdicitons.append(jurisdiciton)
if counter==10:
break
plot_df=pd.DataFrame({'states':crime_states,'num':total_crimes})
plot_df=plot_df.sort_values('num',ascending=False)
plt.bar(height=plot_df.num,left=[1,2,3,4,5,6,7,8,9,10],align='center')
plt.xticks([1,2,3,4,5,6,7,8,9,10],plot_df.states)
plt.ylabel("Number Of Crimes")
plt.show()
year_10=0
year_11=0
year_12=0
year_13=0
year_14=0
year_15=0
year_16=0
for date in crimes.begin_date:
if date.year==2010:
year_10+=1
elif date.year==2011:
year_11+=1
elif date.year==2012:
year_12+=1
elif date.year==2013:
year_13+=1
elif date.year==2014:
year_14+=1
elif date.year==2015:
year_15+=1
elif date.year==2016:
year_16+=1
plt.bar(height=[year_10,year_11,year_12,year_13,year_14,year_15,year_16],left=[1, 2, 3, 4 ,5 ,6 ,7],align='center')
plt.ylabel("Number of Crimes")
plt.xticks([1, 2, 3, 4 ,5 ,6 ,7],['2010','2011','2012','2013','2014','2015','2016',])
plt.show()
whole_number = sum(i for i in total_crimes)
print(str(len(crimes)/whole_number)+' '+'% from the total number of crimes committed between 2010 and 2016')
```
• Cross-dataset matching: Get data from the previous dataset (crime rates in the US) again. This time, search only for MN and only for years 2010-2016. Do you have any results? If so, the results for total crime in MN should match in both datasets. Do they match?
```
year_10n=4064.0
year_11n=3722.0
year_12n=3872.0
year_13n=4038.0
year_14n=4093.0
year_15n=0
year_16n=0
MN=crime_reports[crime_reports.state=="MN"]
MN=MN[MN.report_year>2009]
number_crimes=sum(MN.violent_crimes)
print(str(int(number_crimes))+" from the first data set")
print(str(len(crimes))+" "+"from the second data set")
year_10=0
year_11=0
year_12=0
year_13=0
year_14=0
year_15=0
year_16=0
for date in crimes.begin_date:
if date.year==2010:
year_10+=1
elif date.year==2011:
year_11+=1
elif date.year==2012:
year_12+=1
elif date.year==2013:
year_13+=1
elif date.year==2014:
year_14+=1
elif date.year==2015:
year_15+=1
elif date.year==2016:
year_16+=1
plt.bar(height=[year_10,year_11,year_12,year_13,year_14,year_15,year_16],left=[1, 2, 3, 4 ,5 ,6 ,7],align='center',color='r',label="Second DataSet values")
plt.bar(height=[year_10n,year_11n,year_12n,year_13n,year_14n,year_15n,year_16n],left=[1,2,3,4,5,6,7],align='center',color='b',label="First DataSet values")
plt.legend()
plt.xticks([1,2,3,4,5,6,7],['2010','2011','2012','2013','2014','2015','2016',])
plt.ylabel("Crimes")
plt.show()
```
The values in the first data set only go up to 2014, and they are much smaller than those in the second. There is a big difference between the two.
## Temporal Analysis
• Look at the crime categories. Which is the most popular crime category in MN overall?
```
crimes.description.unique()
d={'Shoplifting':1, 'Theft From Motr Vehc':1, 'Other Theft':1,
'Theft From Building':1, 'Crim Sex Cond-rape':1, 'Burglary Of Dwelling':1,
'Theft From Person':1, 'Motor Vehicle Theft':1, 'Robbery Of Business':1,
'Aslt-police/emerg P':1, 'Domestic Assault/Strangulation':1,
'Theft-motr Veh Parts':1, 'Robbery Of Person':1, 'Asslt W/dngrs Weapon':1,
'Robbery Per Agg':1, 'Burglary Of Business':1, 'Arson':1,
'Theft By Swindle':1, 'Aslt-great Bodily Hm':1, 'Aslt-sgnfcnt Bdly Hm':1,
'On-line Theft':1, '2nd Deg Domes Aslt':1, 'Murder (general)':1,
'Adulteration/poison':1, 'Gas Station Driv-off':1,
'Other Vehicle Theft':1, '3rd Deg Domes Aslt':1, 'Pocket-picking':1,
'Theft/coinop Device':1, 'Disarm a Police Officer':1,
'Theft By Computer':1, '1st Deg Domes Asslt':1, 'Bike Theft':1,
'Scrapping-Recycling Theft':1, 'Justifiable Homicide':0, 'Looting':1}
for desc in crimes.description:
d[desc]+=1
sorted_d = sorted(d.items(), key=operator.itemgetter(1))
print(sorted_d)
```
The most common type is 'Other Theft', but since that category is so unspecific, we can say that 'Burglary Of Dwelling' is the most common clearly defined type of theft.
• Break down the data by months. Plot the total number of crimes for each month, summed over the years. Is there a seasonal component? Which month has the highest crime rate? Which has the smallest? Are the differences significant?
```
january=0
february=0
march=0
april=0
may=0
june=0
july=0
august=0
september=0
october=0
november=0
december=0
for date in crimes.begin_date:
if(date.month==1):
january+=1
elif(date.month==2):
february+=1
elif(date.month==3):
march+=1
elif(date.month==4):
april+=1
elif(date.month==5):
may+=1
elif(date.month==6):
june+=1
elif(date.month==7):
july+=1
elif(date.month==8):
august+=1
elif(date.month==9):
september+=1
elif(date.month==10):
october+=1
elif(date.month==11):
november+=1
elif(date.month==12):
december+=1
plt.bar(height=[january,february,march,april,may,june,july,august,september,october,november,december]
,left=[1,2,3,4,5,6,7,8,9,10,11,12],align='center')
plt.xticks([1,2,3,4,5,6,7,8,9,10,11,12],
['january','february','march','april','may','june','july','august','september','october','november','december']
,rotation='vertical')
plt.ylabel("Number Of Crimes")
plt.show()
```
We can see that most crimes occur in June and that there is a seasonal tendency for more crimes to be committed in the summer.
• Break the results by weekday. You can get the weekday from the date (there are functions for this). Do more crimes happen on
the weekends?
```
Monday=0
Tuesday=0
Wednesday=0
Thursday=0
Friday=0
Saturday=0
Sunday=0
for date in crimes.begin_date:
if(date.weekday()==0):
Monday+=1
elif(date.weekday()==1):
Tuesday+=1
elif(date.weekday()==2):
Wednesday+=1
elif(date.weekday()==3):
Thursday+=1
elif(date.weekday()==4):
Friday+=1
elif(date.weekday()==5):
Saturday+=1
elif(date.weekday()==6):
Sunday+=1
plt.bar(height=[Monday,Tuesday,Wednesday,Thursday,Friday,Saturday,Sunday]
,left=[1,2,3,4,5,6,7],align='center')
plt.xticks([1,2,3,4,5,6,7],['Monday','Tuesday','Wednesday','Thursday','Friday','Saturday','Sunday'],rotation='vertical')
plt.ylabel("Number Of Crimes")
plt.show()
```
Most crimes are committed on Fridays, followed by Thursdays.
• Break the weekday data by crime type. Are certain types of crime more likely to happen on a given day? Comment your findings.
I had no time to complete this because of a Programming Fundamentals exam, but the approach would be to make seven plots, one for each day of the week, showing the top 10 crime types. A sketch of that approach follows.
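A sketch of that approach (my own addition; it assumes `begin_date` is already a datetime column and reuses the `description` field from the category analysis above):
```
crimes['weekday'] = crimes['begin_date'].dt.day_name()
top10 = crimes['description'].value_counts().head(10).index
subset = crimes[crimes['description'].isin(top10)]
counts = pd.crosstab(subset['weekday'], subset['description'])

for day in ['Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday', 'Sunday']:
    if day not in counts.index:
        continue
    counts.loc[day].sort_values(ascending=False).plot(kind='bar', title=day)
    plt.ylabel('Number of crimes')
    plt.tight_layout()
    plt.show()
```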
## 5. Significant Factors in Crime
```
communities= pd.read_table("communities.data",sep=',',header=None)
communities.columns
communities_names= pd.read_table('communities.names',header=None)
communities_names
```
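As a first look at which factors matter (a hedged sketch of my own: it assumes this is the UCI Communities and Crime file, where missing values are coded as '?' and the last column is the per-capita violent crime rate):
```
communities_num = communities.replace('?', np.nan).apply(pd.to_numeric, errors='coerce')
target = communities_num.iloc[:, -1]                  # assumed to be ViolentCrimesPerPop
correlations = communities_num.iloc[:, :-1].corrwith(target).dropna().sort_values()

print(correlations.tail(10))   # attributes most positively correlated with violent crime
print(correlations.head(10))   # attributes most negatively correlated with violent crime
```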
| github_jupyter |
```
#default_exp core
```
# fastdot.core
> Drawing graphs with graphviz.
```
#export
from fastcore.all import *
import pydot
from uuid import uuid4                  # used by `uniq_name` below
import matplotlib.pyplot as plt         # used by the color-map helpers below
from matplotlib.colors import rgb2hex, hex2color
#export
_all_ = ['pydot']
#hide
from nbdev.showdoc import *
```
## Nodes
```
#export
def Dot(defaults=None, rankdir='LR', directed=True, compound=True, **kwargs):
"Create a `pydot.Dot` graph with fastai/fastdot style defaults"
return pydot.Dot(rankdir=rankdir, directed=directed, compound=compound, **kwargs)
#export
def uniq_name(o): return 'n'+(uuid4().hex)
def quote(x, q='"'):
'Surround `x` with `"`'
return f'"{x}"'
@patch
def _repr_svg_(self:pydot.Dot):
return self.create_svg().decode('utf-8')
#export
graph_objects = {}
object_names = {}
#export
def add_mapping(graph_item, obj):
graph_objects[graph_item.get_name()] = graph_item
object_names[id(obj)] = graph_item.get_name()
return graph_item
#export
def _pydot_create(f, obj, **kwargs):
for k,v in kwargs.items():
if callable(v): v = kwargs[k] = v(obj)
if k not in ('name','graph_name'): kwargs[k] = quote(v)
return add_mapping(f(**kwargs), obj)
#export
node_defaults = dict(label=str, tooltip=str, name=uniq_name, shape='box', style='rounded, filled', fillcolor='white')
#export
def Node(obj, **kwargs):
"Create a `pydot.Node` with a unique name"
if not isinstance(obj,str) and isinstance(obj, Collection) and len(obj)==2:
obj,kwargs['tooltip'] = obj
kwargs = merge(node_defaults, kwargs)
return _pydot_create(pydot.Node, obj, **kwargs)
```
`pydot` uses the same name-based approach to identifying graph items as `graphviz`. However we would rather use python objects. Therefore, we patch `pydot` to use unique names.
```
g = Dot()
a = Node('a')
g.add_node(a)
g
```
If a 2-tuple is passed to `Node`, then the 2nd element becomes the tooltip. You can also pass any `kwargs` that are accepted by `graphviz`.
```
g = Dot()
g.add_node(Node(['a', "My tooltip"], fillcolor='pink'))
g
```
Keyword args can also be arbitrary functions, which will be called with the node's label.
```
g = Dot()
o = 'a'
g.add_node(Node(o, fillcolor=lambda o:'pink'))
g
#export
def object2graph(o):
"Get graph item representing `o`"
return graph_objects[object_names[id(o)]]
object2graph(o).get_fillcolor()
```
## Colors
The callable kwargs functionality can be used to map labels to colors in a consistent way.
```
#export
def obj2node_color(cm, minalpha, rangealpha, o):
"Create a consistent mapping from objects to colors, using colormap `cm`"
h = hash(o)
i = float(h % 256) / 256
alpha = (h^hash('something')) % rangealpha + minalpha
return rgb2hex(cm(i)) + f'{alpha:02X}'
#exports
graph_colors1 = partial(obj2node_color, plt.get_cmap('rainbow'), 30, 160)
graph_colors2 = partial(obj2node_color, plt.get_cmap('tab20'), 30, 160)
```
These predefined color mapping functions provide a good range of colors and readable text.
```
g = Dot()
g.add_node(Node('a', fillcolor=graph_colors1))
g.add_node(Node('b', fillcolor=graph_colors1))
g
g = Dot()
g.add_node(Node('a', fillcolor=graph_colors2))
g.add_node(Node('b', fillcolor=graph_colors2))
g
```
We'll use the former color function as our default. You can change it by simply modifying `node_defaults`.
```
#export
node_defaults['fillcolor'] = graph_colors1
```
## Clusters and Items
```
#export
cluster_defaults = dict(label=str, tooltip=str, graph_name=uniq_name, style='rounded, filled', fillcolor='#55555522')
#export
def Cluster(obj='', **kwargs):
"Create a `pydot.Cluster` with a unique name"
kwargs = merge(cluster_defaults, kwargs)
return _pydot_create(pydot.Cluster, obj, **kwargs)
g = Dot()
sg = Cluster('clus', tooltip='Cluster tooltip')
sg.add_node(Node(['a', "My tooltip"]))
sg.add_node(Node('b'))
g.add_subgraph(sg)
g
#export
@patch
def nodes(self:pydot.Graph):
    "All labeled nodes in `Graph`"
    return L(o for o in self.get_nodes() if o.get_label() is not None)
#export
@patch
def __getitem__(self:pydot.Graph, i):
"`i`th node in `Graph`"
return self.nodes()[i]
```
You can subscript into a `Graph`'s `Node`s by index:
```
print(sg[0].get_label())
#export
@patch
def add_item(self:pydot.Graph, item, **kwargs):
"Add a `Cluster`, `Node`, or `Edge` to the `Graph`"
if not isinstance(item, (pydot.Edge,pydot.Node,pydot.Graph)): item = Node(item, **kwargs)
f = self.add_node if isinstance(item, pydot.Node ) else \
self.add_subgraph if isinstance(item, pydot.Graph) else \
self.add_edge if isinstance(item, pydot.Edge ) else None
f(item)
return item
```
There's no good reason to have different methods for adding clusters vs nodes (as `pydot` requires), so we provide a single method.
```
g = Dot()
sg = Cluster('clus')
g.add_item(sg)
sg.add_item('a')
g
#export
@patch
def add_items(self:pydot.Graph, *items, **kwargs):
"Add `items` the `Graph`"
return L(self.add_item(it, **kwargs) for it in items)
#export
def graph_items(*items, **kwargs):
"Add `items` to a new `pydot.Dot`"
g = Dot()
g.add_items(*items, **kwargs)
return g
sg1 = Cluster('clus')
sg1.add_items('n1', 'n2')
sg2 = Cluster()
sg2.add_item('n')
graph_items(sg1,sg2)
```
## Edges
```
#export
@patch
def first(self:pydot.Graph):
"First node in `Graph`, searching subgraphs recursively as needed"
nodes = self.nodes()
if nodes: return nodes[0]
for subg in self.get_subgraphs():
res = subg.first()
if res: return res
#export
@patch
def last(self:pydot.Graph):
    "Last node in `Graph`, searching subgraphs recursively as needed"
nodes = self.nodes()
if nodes: return nodes[-1]
for subg in reversed(self.get_subgraphs()):
res = subg.last()
if res: return res
#export
@patch
def with_compass(self:(pydot.Node,pydot.Graph), compass=None):
r = self.get_name()
return f'{r}:{compass}' if compass else r
# export
@patch
def connect(self:(pydot.Node,pydot.Graph), item, compass1=None, compass2=None, **kwargs):
"Connect two nodes or clusters"
a,b,ltail,lhead = self,item,'',''
if isinstance(self,pydot.Graph):
a = self.last()
ltail=self.get_name()
if isinstance(item,pydot.Graph):
b = item.first()
lhead=item.get_name()
a,b = a.with_compass(compass1),b.with_compass(compass2)
return pydot.Edge(a, b, lhead=lhead, ltail=ltail, **kwargs)
sg2 = Cluster('clus2')
n1 = sg2.add_item('n1', fillcolor='pink')
n2 = sg2.add_item('n2', fillcolor='lightblue')
sg2.add_item(n1.connect(n2))
sg1 = Cluster('clus1')
sg1.add_item(sg2)
a,b = Node('a'),Node('b')
edges = a.connect(b),a.connect(a),sg1.connect(b),sg2[0].connect(a)
g = Dot()
g.add_items(sg1, a, b, *edges)
g
#export
def object_connections(conns):
"Create connections between all pairs in `conns`"
return [object2graph(a).connect(object2graph(b)) for a,b in conns]
```
This is a shortcut for creating connections between objects that are already in a graph.
```
a,b = 'a','b'
g = graph_items(a, b)
g.add_items(*object_connections([(a,b)]))
g
```
## Sequential
Since it's common to want to connect a series sequentially, we provide some simple shortcuts for this functionality.
```
#export
def graph_edges_seq(items):
"Add edges between each pair of nodes in `items`"
return L(items[i].connect(items[i+1]) for i in range(len(items)-1))
#export
@patch
def add_edges_seq(self:pydot.Graph, items):
"Add edges between each pair of nodes in `items`"
return self.add_items(*graph_edges_seq(items))
g = Dot()
its = g.add_items('a','b','c')
g.add_edges_seq(its)
g
#export
def seq_cluster(items, cluster_label='', **kwargs):
sg = Cluster(cluster_label)
its = sg.add_items(*items, **kwargs)
sg.add_edges_seq(its)
return sg
g = Dot()
g.add_item(seq_cluster(['a','b','c'], 'clust'))
g.add_item(seq_cluster(['1','2','c'], 'clust2'))
g
g = Dot()
g.add_item(seq_cluster(['a','b','c'], 'clust'))
g
sg1 = seq_cluster(['a','b','c'], 'clust1')
sg2 = seq_cluster(['a1','a2',sg1], 'clust2')
g = Dot()
g.add_item(sg2)
g
sg1 = seq_cluster(['inp'], 'clust1')
sg2 = seq_cluster(['a','b','c'], 'clust2')
sg2.add_items(sg1.connect(sg2[-1]), sg1.connect(sg2))
g = Dot()
g.add_items(sg1,sg2)
g
# export
def Point(label='pnt', **kwargs):
"Create a `Node` with a 'point' shape"
    return Node(label, shape='point', **kwargs)
sg = Cluster('clus')
a,b,c = sg.add_items('a','b','c')
p = sg.add_item(Point())
sg.add_item(p.connect(c))
sg.add_items(p.connect(a), a.connect(b), b.connect(c))
g = Dot()
g.add_items(sg)
g
```
# Export -
```
#hide
from nbdev.export import notebook2script
notebook2script()
```
| github_jupyter |
# Data Science Boot Camp
## Introduction to Pandas Part 1
* __Pandas__ is a Python package providing fast, flexible, and expressive data structures designed to work with both *relational* and *labeled* data.<br>
<br>
* It is a fundamental high-level building block for doing practical, real world data analysis in Python.<br>
<br>
* Python has always been great for prepping and munging data, but it's never been great for analysis - you'd usually end up using R or loading it into a database and using SQL. Pandas makes Python great for analysis.<br>
* Pandas is well suited for:<br>
<br>
* Tabular data with heterogeneously-typed columns, as in an SQL table or Excel spreadsheet<br>
<br>
* Ordered and unordered (not necessarily fixed-frequency) time series data.<br>
<br>
* Arbitrary matrix data (homogeneously typed or heterogeneous) with row and column labels<br>
<br>
* Any other form of observational / statistical data sets. The data actually need not be labeled at all to be placed into a pandas data structure<br>
* Key features of Pandas:<br>
<br>
* Easy handling of __missing data__<br>
<br>
* __Size mutability__: columns can be inserted and deleted from DataFrame and higher dimensional objects.<br>
<br>
* Automatic and explicit __data alignment__: objects can be explicitly aligned to a set of labels, or the data can be aligned automatically.<br>
<br>
* __Fast__ and __efficient__ DataFrame object with default and customized indexing.<br>
<br>
* __Reshaping__ and __pivoting__ of data sets.<br>
* Key features of Pandas (Continued):<br>
<br>
* Label-based __slicing__, __indexing__, __fancy indexing__ and __subsetting__ of large data sets.<br>
<br>
* __Group by__ data for aggregation and transformations.<br>
<br>
* High performance __merging__ and __joining__ of data.<br>
<br>
* __IO Tools__ for loading data into in-memory data objects from different file formats.<br>
<br>
* __Time Series__ functionality.<br>
* First, we have to import the pandas and numpy libraries under the aliases pd and np.<br>
<br>
* Then check our pandas version.<br>
```
%matplotlib inline
import pandas as pd
import numpy as np
print(pd.__version__)
```
* Let's set some options for `Pandas`
```
pd.set_option('display.notebook_repr_html', False)
pd.set_option('max_columns', 10)
pd.set_option('max_rows', 10)
```
## Pandas Objects
* At the very basic level, Pandas objects can be thought of as enhanced versions of NumPy structured arrays in which the rows and columns are identified with labels rather than simple integer indices.<br>
<br>
* There are three fundamental Pandas data structures: the Series, DataFrame, and Index.
### Series
* A __Series__ is a single vector of data (like a NumPy array) with an *index* that labels each element in the vector.<br><br>
* It can be created from a list or array as follows:
```
counts = pd.Series([15029231, 7529491, 7499740, 5445026, 2702492, 2742534, 4279677, 2133548, 2146129])
counts
```
* If an index is not specified, a default sequence of integers is assigned as the index. A NumPy array comprises the values of the `Series`, while the index is a pandas `Index` object.
```
counts.values
counts.index
```
* We can assign meaningful labels to the index, if they are available:
```
population = pd.Series([15029231, 7529491, 7499740, 5445026, 2702492, 2742534, 4279677, 2133548, 2146129],
index=['Istanbul Total', 'Istanbul Males', 'Istanbul Females', 'Ankara Total', 'Ankara Males', 'Ankara Females', 'Izmir Total', 'Izmir Males', 'Izmir Females'])
population
```
* These labels can be used to refer to the values in the `Series`.
```
population['Istanbul Total']
mask = [city.endswith('Females') for city in population.index]
mask
population[mask]
```
* As you noticed, we can use masking with a Series.<br>
<br>
* Also, we can still use positional indexing even if we assign meaningful labels to the index.<br>
```
population[0]
```
* We can give both the array of values and the index meaningful labels themselves:<br>
```
population.name = 'population'
population.index.name = 'city'
population
```
* Also, NumPy's math functions and other operations can be applied to Series without losing the data structure.<br>
```
np.ceil(population / 1000000) * 1000000
```
* We can also filter according to the values in the `Series`, just like in NumPy:
```
population[population>3000000]
```
* A `Series` can be thought of as an ordered key-value store. In fact, we can create one from a `dict`:
```
populationDict = {'Istanbul Total': 15029231, 'Ankara Total': 5445026, 'Izmir Total': 4279677}
pd.Series(populationDict)
```
* Notice that the `Series` is created in key-sorted order.<br>
<br>
* If we pass a custom index to `Series`, it will select the corresponding values from the dict, and treat indices without corresponding values as missing. Pandas uses the `NaN` (not a number) type for missing values.<br>
```
population2 = pd.Series(populationDict, index=['Istanbul Total','Ankara Total','Izmir Total','Bursa Total', 'Antalya Total'])
population2
population2.isnull()
```
* Critically, the labels are used to **align data** when used in operations with other Series objects:
```
population + population2
```
* Contrast this with NumPy arrays, where arrays of the same length combine values element-wise; adding Series combines values with the same label in the resulting series. Notice also that the missing values were propagated by the addition.
### DataFrame
* A `DataFrame` represents a tabular, spreadsheet-like data structure containing an ordered collection of columns, each of which can be a different value type (numeric, string, boolean, etc.).<br>
<br>
* `DataFrame` has both a row and column index; it can be thought of as a dict of Series (all sharing the same index).
```
areaDict = {'Istanbul': 5461, 'Ankara': 25632, 'Izmir': 11891,
'Bursa': 10813, 'Antalya': 20177}
area = pd.Series(areaDict)
area
populationDict = {'Istanbul': 15029231, 'Ankara': 5445026, 'Izmir': 4279677, 'Bursa': 2936803, 'Antalya': 2364396}
population3 = pd.Series(populationDict)
population3
```
* Now that we have two Series, population by city and area by city, we can use a dictionary to construct a single two-dimensional object containing this information:
```
cities = pd.DataFrame({'population': population3, 'area': area})
cities
```
* Or we can create our cities `DataFrame` with lists and indexes.
```
cities = pd.DataFrame({
'population':[15029231, 5445026, 4279677, 2936803, 2364396],
'area':[5461, 25632, 11891, 10813, 20177],
'city':['Istanbul', 'Ankara', 'Izmir', 'Bursa', 'Antalya']
})
cities
```
Notice the `DataFrame` is sorted by column name. We can change the order by indexing them in the order we desire:
```
cities[['city','area', 'population']]
```
* A `DataFrame` has a second index, representing the columns:
```
cities.columns
```
* If we wish to access columns, we can do so either by dictionary like indexing or by attribute:
```
cities['area']
cities.area
type(cities.area)
type(cities[['area']])
```
* Notice this is different than with `Series`, where dictionary like indexing retrieved a particular element (row). If we want access to a row in a `DataFrame`, we index its `iloc` attribute.
```
cities.iloc[2]
cities.iloc[0:2]
```
Alternatively, we can create a `DataFrame` with a dict of dicts:
```
cities = pd.DataFrame({
0: {'city': 'Istanbul', 'area': 5461, 'population': 15029231},
1: {'city': 'Ankara', 'area': 25632, 'population': 5445026},
2: {'city': 'Izmir', 'area': 11891, 'population': 4279677},
3: {'city': 'Bursa', 'area': 10813, 'population': 2936803},
4: {'city': 'Antalya', 'area': 20177, 'population': 2364396},
})
cities
```
* We probably want this transposed:
```
cities = cities.T
cities
```
* It's important to note that the Series returned when a DataFrame is indexed is merely a **view** on the DataFrame, and not a copy of the data itself. <br>
<br>
* So you must be cautious when manipulating this data, just as with NumPy arrays.<br>
```
areas = cities.area
areas
areas[3] = 0
areas
cities
```
* This is useful behavior for large data sets, but to prevent it you can use the `copy` method.<br>
```
areas = cities.area.copy()
areas[3] = 10813
areas
cities
```
* We can create or modify columns by assignment:<br>
```
cities.area[3] = 10813
cities
cities['year'] = 2017
cities
```
* But note that we cannot use the attribute indexing method to add a new column:<br>
```
cities.projection2020 = 20000000
cities
```
* It just creates a new attribute on the object, rather than a new column.<br>
```
cities.projection2020
```
* Specifying a `Series` as a new column causes its values to be added according to the `DataFrame`'s index:
```
populationIn2000 = pd.Series([11076840, 3889199, 3431204, 2150571, 1430539])
populationIn2000
cities['population_2000'] = populationIn2000
cities
```
* Other Python data structures (ones without an index) need to be the same length as the `DataFrame`:
```
populationIn2007 = [12573836, 4466756, 3739353, 2439876]
cities['population_2007'] = populationIn2007
```
* We can use `del` to remove columns, in the same way `dict` entries can be removed:
```
cities
del cities['population_2000']
cities
```
* We can extract the underlying data as a simple `ndarray` by accessing the `values` attribute:<br>
```
cities.values
```
* Notice that because of the mix of string and integer (and possibly `NaN`) values, the dtype of the array is `object`.
* The dtype will automatically be chosen to be as general as needed to accommodate all the columns.
```
df = pd.DataFrame({'integers': [1,2,3], 'floatNumbers':[0.5, -1.25, 2.5]})
df
print(df.values.dtype)
df.values
```
* Pandas uses a custom data structure to represent the indices of Series and DataFrames.
```
cities.index
```
* Index objects are immutable:
```
cities.index[0] = 15
```
* This is so that Index objects can be shared between data structures without fear that they will be changed.
* That means you can move or copy your meaningful labels to other `DataFrames`.
```
cities
cities.index = population2.index
cities
```
## Importing data
* A key, but often underappreciated, step in data analysis is importing the data that we wish to analyze.<br>
<br>
* Though it is easy to load basic data structures into Python using built-in tools or those provided by packages like NumPy, it is non-trivial to import structured data well, and to easily convert this input into a robust data structure.<br>
<br>
* Pandas provides a convenient set of functions for importing tabular data in a number of formats directly into a `DataFrame` object.
* Let's start with some more population data, stored in csv format.
```
!cat data/population.csv
```
* This table can be read into a DataFrame using `read_csv`:
```
populationDF = pd.read_csv("data/population.csv")
populationDF
```
* Notice that `read_csv` automatically considered the first row in the file to be a header row.<br>
<br>
* We can override this default behavior by customizing some of the arguments, like `header`, `names` or `index_col` (a small sketch follows the next cell).<br>
* `read_csv` is just a convenience function for `read_table`, since csv is such a common format:<br>
```
pd.set_option('max_columns', 5)
populationDF = pd.read_table("data/population_missing.csv", sep=';')
populationDF
```
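* As a small illustration of the `header` and `names` arguments mentioned above, here is a minimal sketch; the inline CSV text and column names are made up purely for illustration and are not part of the original data files:
```
from io import StringIO
# Hypothetical CSV text without a header row, just to illustrate header/names
csv_text = "1;Istanbul;15029231\n2;Ankara;5445026"
pd.read_csv(StringIO(csv_text), sep=';', header=None, names=['rank', 'city', 'population'])
```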
* The `sep` argument can be customized as needed to accommodate arbitrary separators.<br>
* If we have sections of data that we do not wish to import (for example, the empty rows in this file), we can populate the `skiprows` argument:
```
populationDF = pd.read_csv("data/population_missing.csv", sep=';', skiprows=[1,2])
populationDF
```
* For a more useful index, we can specify the first column, which provides a unique index for the data.
```
populationDF = pd.read_csv("data/population.csv", sep=';', index_col='Provinces')
populationDF.index
```
Conversely, if we only want to import a small number of rows from, say, a very large data file we can use `nrows`:
```
pd.read_csv("data/population.csv", sep=';', nrows=4)
```
* Most real-world data is incomplete, with values missing due to incomplete observation, data entry or transcription error, or other reasons. Pandas will automatically recognize and parse common missing data indicators, including `NA`, `NaN`, `NULL`.
```
pd.read_csv("data/population_missing.csv", sep=';').head(10)
```
Above, Pandas recognized `NaN` and an empty field as missing data.
```
pd.isnull(pd.read_csv("data/population_missing.csv", sep=';')).head(10)
```
### Microsoft Excel
* Since so much financial and scientific data ends up in Excel spreadsheets, Pandas' ability to directly import Excel spreadsheets is valuable. <br>
<br>
* This support is contingent on having one or two dependencies (depending on what version of Excel file is being imported) installed: `xlrd` and `openpyxl`.<br>
<br>
* Importing Excel data to Pandas is a two-step process. First, we create an `ExcelFile` object using the path of the file:
```
excel_file = pd.ExcelFile('data/population.xlsx')
excel_file
```
* Then, since modern spreadsheets consist of one or more "sheets", we parse the sheet with the data of interest:
```
excelDf = excel_file.parse("Sheet 1 ")
excelDf
```
* Also, there is a `read_excel` convenience function in Pandas that combines these steps into a single call:
```
excelDf2 = pd.read_excel('data/population.xlsx', sheet_name='Sheet 1 ')
excelDf2.head(10)
```
* On the first day, we learned how to read and write `JSON` files; in the same way, you can also import JSON files into `DataFrames`.
* Also, you can connect to databases and import your data into `DataFrames` with the help of 3rd party libraries.
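* A minimal sketch of both routes, assuming a hypothetical `data/population.json` file and an in-memory SQLite database (neither is used elsewhere in this notebook):
```
# Reading JSON into a DataFrame (assumes a data/population.json file exists)
# jsonDF = pd.read_json("data/population.json")

# Reading from a database: here we use the standard library's sqlite3 module
import sqlite3
conn = sqlite3.connect(":memory:")          # an in-memory database, just for illustration
cities.to_sql("cities", conn, index=False)  # write the DataFrame defined earlier to a table
pd.read_sql("SELECT * FROM cities", conn)   # read it back into a new DataFrame
```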
## Pandas Fundamentals
* This section introduces the new user to the key functionality of Pandas that is required to use the software effectively.<br>
<br>
* For some variety, we will leave our population data behind and employ some `Superhero` data.<br>
* The data comes from Marvel Wikia.<br>
<br>
* The file has the following variables:<br>
<table>
<thead>
<tr>
<th style="text-align:left;">Variable</th>
<th style="text-align:left;">Definition</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align:left;">page_id</td>
<td style="text-align:left;">The unique identifier for that characters page within the wikia</td>
</tr>
<tr>
<td style="text-align:left;">name</td>
<td style="text-align:left;">The name of the character</td>
</tr>
<tr>
<td style="text-align:left;">urlslug</td>
<td style="text-align:left;">The unique url within the wikia that takes you to the character</td>
</tr>
<tr>
<td style="text-align:left;">ID</td>
<td style="text-align:left;">The identity status of the character (Secret Identity, Public identity No Dual Identity)</td>
</tr>
<tr>
<td style="text-align:left;">ALIGN</td>
<td style="text-align:left;">If the character is Good, Bad or Neutral</td>
</tr>
<tr>
<td style="text-align:left;">EYE</td>
<td style="text-align:left;">Eye color of the character</td>
</tr>
<tr>
<td style="text-align:left;">HAIR</td>
<td style="text-align:left;">Hair color of the character</td>
</tr>
<tr>
<td style="text-align:left;">SEX</td>
<td style="text-align:left;">Sex of the character (e.g. Male, Female, etc.)</td>
</tr>
<tr>
<td style="text-align:left;">GSM</td>
<td style="text-align:left;">If the character is a gender or sexual minority (e.g. Homosexual characters, bisexual characters)</td>
</tr>
<tr>
<td style="text-align:left;">ALIVE</td>
<td style="text-align:left;">If the character is alive or deceased</td>
</tr>
<tr>
<td style="text-align:left;">APPEARANCES</td>
<td style="text-align:left;">The number of appareances of the character in comic books (as of Sep. 2, 2014. Number will become increasingly out of date as time goes on.)</td>
</tr>
<tr>
<td style="text-align:left;">FIRST APPEARANCE</td>
<td style="text-align:left;">The month and year of the character's first appearance in a comic book, if available</td>
</tr>
<tr>
<td style="text-align:left;">YEAR</td>
<td style="text-align:left;">The year of the character's first appearance in a comic book, if available</td>
</tr>
</tbody>
</table>
```
pd.set_option('max_columns', 12)
pd.set_option('display.notebook_repr_html', True)
marvelDF = pd.read_csv("data/marvel-wikia-data.csv", index_col='page_id')
marvelDF.head(5)
```
* Notice that we specified the `page_id` column as the index, since it appears to be a unique identifier. We could try to create a unique index ourselves by trimming `name`:
* First, import Python's regular expression module, `re`.<br>
<br>
* Then, trim the `name` column with a regex.<br>
```
import re
pattern = re.compile('([a-zA-Z]|-|\s|\.|\')*([a-zA-Z])')
heroName = []
for name in marvelDF.name:
match = re.search(pattern, name)
if match:
heroName.append(match.group())
else:
heroName.append(name)
heroName
```
* This looks okay, let's copy '__marvelDF__' to '__marvelDF_newID__' and assign new indexes.<br>
```
marvelDF_newID = marvelDF.copy()
marvelDF_newID.index = heroName
marvelDF_newID.head(5)
```
* Let's check the uniqueness of ID's:
```
marvelDF_newID.index.is_unique
```
* So, indices need not be unique. Our choice is not unique because some superheroes have several different variations.
```
pd.Series(marvelDF_newID.index).value_counts()
```
* The most important consequence of a non-unique index is that indexing by label will return multiple values for some labels:
```
marvelDF_newID.loc['Peter Parker']
```
* Let's give a truly unique index by not trimming the `name` column:
```
hero_id = marvelDF.name
marvelDF_newID = marvelDF.copy()
marvelDF_newID.index = hero_id
marvelDF_newID.head()
marvelDF_newID.index.is_unique
```
* We can create meaningful indices more easily using a hierarchical index.<br>
<br>
* For now, we will stick with the numeric IDs as our index for '__NewID__' DataFrame.<br>
```
marvelDF_newID.index = range(16376)
marvelDF.index = marvelDF['name']
marvelDF_newID.head(5)
```
### Manipulating indices
* __Reindexing__ allows users to manipulate the data labels in a DataFrame. <br>
<br>
* It forces a DataFrame to conform to the new index, and optionally fills in missing data if requested.<br>
<br>
* A simple use of `reindex` is to reverse the order of the rows:
```
marvelDF_newID.reindex(marvelDF_newID.index[::-1]).head()
```
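* The optional filling mentioned above can be requested via the `fill_value` (or `method`) argument; a minimal sketch with a small made-up Series:
```
pd.Series([1, 2, 3], index=['a', 'b', 'c']).reindex(['a', 'b', 'c', 'd'], fill_value=0)
```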
* Keep in mind that `reindex` does not work if we pass a non-unique index series.
* We can remove rows or columns via the `drop` method:
```
marvelDF_newID.shape
marvelDF_dropped = marvelDF_newID.drop([16375, 16374])
print(marvelDF_newID.shape)
print(marvelDF_dropped.shape)
marvelDF_dropped = marvelDF_newID.drop(['EYE','HAIR'], axis=1)
print(marvelDF_newID.shape)
print(marvelDF_dropped.shape)
```
## Indexing and Selection
* Indexing works like indexing in NumPy arrays, except we can use the labels in the `Index` object to extract values in addition to arrays of integers.<br>
```
heroAppearances = marvelDF.APPEARANCES
heroAppearances
```
* Let's start with Numpy style indexing:
```
heroAppearances[:3]
```
* Indexing by Label:
```
heroAppearances[['Spider-Man (Peter Parker)','Hulk (Robert Bruce Banner)']]
```
* We can also slice with data labels, since they have an intrinsic order within the Index:
```
heroAppearances['Spider-Man (Peter Parker)':'Matthew Murdock (Earth-616)']
```
* You can modify the sliced array, and if you get a warning, that's OK.<br>
```
heroAppearances['Minister of Castile D\'or (Earth-616)':'Yologarch (Earth-616)'] = 0
heroAppearances
```
* In a `DataFrame` we can slice along either or both axes:
```
marvelDF[['SEX','ALIGN']]
mask = marvelDF.APPEARANCES>50
marvelDF[mask]
```
* The indexing field `loc` allows us to select subsets of rows and columns in an intuitive way:
```
marvelDF.loc['Spider-Man (Peter Parker)', ['ID', 'EYE', 'HAIR']]
marvelDF.loc[['Spider-Man (Peter Parker)','Thor (Thor Odinson)'],['ID', 'EYE', 'HAIR']]
```
## Operations
* `DataFrame` and `Series` objects allow for several operations to take place either on a single object, or between two or more objects.<br>
<br>
* For example, we can perform arithmetic on the elements of two objects, such as change in population across years:
```
populationDF
pop2000 = populationDF['2000']
pop2017 = populationDF['2017']
pop2000DF = pd.Series(pop2000.values, index=populationDF.index)
pop2017DF = pd.Series(pop2017.values, index=populationDF.index)
popDiff = pop2017DF - pop2000DF
popDiff
```
* Let's assume our '__pop2000DF__' DataFrame has no row whose index is "Yalova".
```
pop2000DF["Yalova"] = np.nan
pop2000DF
popDiff = pop2017DF - pop2000DF
popDiff
```
* To access the non-null elements, we can use Pandas' `notnull` function.
```
popDiff[popDiff.notnull()]
```
* We can add the `fill_value` argument to insert a zero for the `NaN` values.
```
pop2017DF.subtract(pop2000DF, fill_value=0)
```
* We can also apply functions to each column or row of a `DataFrame`:
```
minPop = pop2017DF.values.min()
indexOfMinPop = pop2017DF.index[pop2017DF.values.argmin()]
print(indexOfMinPop + " -> " + str(minPop))
populationDF['2000'] = np.ceil(populationDF['2000'] / 10000) * 10000
populationDF
```
## Sorting and Ranking
* Pandas objects include methods for re-ordering data.
```
populationDF.sort_index(ascending=True).head()
populationDF.sort_index().head()
populationDF.sort_index(axis=1, ascending=False).head()
```
* We can also use `sort_values` to sort a `Series` by value, rather than by label.
* For a `DataFrame`, we can sort according to the values of one or more columns using the `by` argument of `sort_values`:
```
populationDF[['2017','2001']].sort_values(by=['2017', '2001'],ascending=[False,True]).head(10)
```
* __Ranking__ does not re-arrange data, but instead returns an index that ranks each value relative to others in the Series.
```
populationDF['2010'].rank(ascending=False)
populationDF[['2017','2001']].sort_values(by=['2017', '2001'],ascending=[False,True]).rank(ascending=False)
```
* Ties are assigned the mean value of the tied ranks, which may result in decimal values.
```
pd.Series([50,60,50]).rank()
```
* Alternatively, you can break ties via one of several methods, such as by the order in which they occur in the dataset:
```
pd.Series([100,50,100]).rank(method='first')
```
* Calling the `DataFrame`'s `rank` method results in the ranks of all columns:
```
populationDF.rank(ascending=False)
```
## Hierarchical indexing
* Hierarchical indexing is an important feature of pandas enabling you to have multiple (two or more) index levels on an axis.<br>
<br>
* Somewhat abstractly, it provides a way for you to work with higher dimensional data in a lower dimensional form.<br>
* Let’s create a Series with a list of lists or arrays as the index:
```
data = pd.Series(np.random.randn(10),
index=[['a', 'a', 'a', 'b', 'b', 'b', 'c', 'c', 'd', 'd'],
[1, 2, 3, 1, 2, 3, 1, 2, 2, 3]])
data
data.index
```
* With a hierarchically-indexed object, so-called partial indexing is possible, enabling you to concisely select subsets of the data:
```
data['b']
data['a':'c']
```
* Selection is even possible in some cases from an “inner” level:
```
data[:, 1]
```
* Hierarchical indexing plays a critical role in reshaping data and group-based operations like forming a pivot table. For example, this data could be rearranged into a DataFrame using its unstack method:
```
dataDF = data.unstack()
dataDF
```
* The inverse operation of unstack is stack:
```
dataDF.stack()
```
## Missing data
* The occurrence of missing data is so prevalent that it pays to use tools like Pandas, which seamlessly integrates missing data handling so that it can be dealt with easily, and in the manner required by the analysis at hand.
* Missing data are represented in `Series` and `DataFrame` objects by the `NaN` floating point value. However, `None` is also treated as missing, since it is commonly used as such in other contexts (NumPy).
```
weirdSeries = pd.Series([np.nan, None, 'string', 1])
weirdSeries
weirdSeries.isnull()
```
* Missing values may be dropped or indexed out:
```
population2
population2.dropna()
population2[population2.notnull()]
dataDF
```
* By default, `dropna` drops entire rows in which one or more values are missing.
```
dataDF.dropna()
```
* This can be overridden by passing the `how='all'` argument, which only drops a row when every field is a missing value.
```
dataDF.dropna(how='all')
```
* This can be customized further by specifying how many values need to be present before a row is dropped via the `thresh` argument.
```
dataDF[2]['c'] = np.nan
dataDF
dataDF.dropna(thresh=2)
```
* If we want to drop missing values column-wise instead of row-wise, we use `axis=1`.
```
dataDF[1]['d'] = np.random.randn(1)
dataDF
dataDF.dropna(axis=1)
```
* Rather than omitting missing data from an analysis, in some cases it may be suitable to fill the missing value in, either with a default value (such as zero) or a value that is either imputed or carried forward/backward from similar data points. <br>
<br>
* We can do this programmatically in Pandas with the `fillna` argument.<br>
```
dataDF
dataDF.fillna(0)
dataDF.fillna({2: 1.5, 3:0.50})
```
* Notice that `fillna` by default returns a new object with the desired filling behavior, rather than changing the `Series` or `DataFrame` in place.
```
dataDF
```
* If you don't like this behaviour you can alter values in-place using `inplace=True`.
```
dataDF.fillna({2: 1.5, 3:0.50}, inplace=True)
dataDF
```
* Missing values can also be interpolated, using any one of a variety of methods:
```
dataDF[2]['c'] = np.nan
dataDF[3]['d'] = np.nan
dataDF
```
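* A minimal sketch of interpolation (linear by default; other methods can be chosen via the `method` argument):
```
dataDF.interpolate()
```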
* We can also propagate non-null values forward or backward.
```
dataDF.fillna(method='ffill')
dataDF.fillna(dataDF.mean())
```
## Data summarization
* We often wish to summarize data in `Series` or `DataFrame` objects, so that they can more easily be understood or compared with similar data.<br>
<br>
* The NumPy package contains several functions that are useful here, but several summarization or reduction methods are built into Pandas data structures.<br>
```
marvelDF.sum()
```
* Clearly, `sum` is more meaningful for some columns (such as total appearances) than others.<br>
* For methods like `mean`, for which application to string variables is not just meaningless but impossible, these columns are automatically excluded:
```
marvelDF.mean()
```
* An important difference between NumPy's functions and Pandas' methods is that NumPy has separate functions for handling missing data (like `nansum`), while Pandas' methods skip missing values by default; a small comparison follows the next cell.
```
dataDF
dataDF.mean()
```
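* For comparison, a small sketch of the NumPy side; the array here is made up for illustration:
```
arr = np.array([1.0, 2.0, np.nan])
print(np.sum(arr))           # nan - the plain sum propagates the missing value
print(np.nansum(arr))        # 3.0 - the nan-aware variant ignores it
print(pd.Series(arr).sum())  # 3.0 - Pandas skips NaN by default
```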
* Sometimes we may not want to ignore missing values, and allow the `nan` to propagate.
```
dataDF.mean(skipna=False)
```
* A useful summarization that gives a quick snapshot of multiple statistics for a `Series` or `DataFrame` is `describe`:
```
dataDF.describe()
```
* `describe` can detect non-numeric data and sometimes yield useful information about it.
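* For example, applied to a single non-numeric column, it reports the count, the number of unique values, the most frequent value, and its frequency:
```
marvelDF['ALIGN'].describe()
```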
## Writing Data to Files
* Pandas can also export data to a variety of storage formats.<br>
<br>
* We will bring your attention to just a couple of these.
```
myDF = populationDF['2000']
myDF.to_csv("data/roundedPopulation2000.csv")
```
* The `to_csv` method writes a `DataFrame` to a comma-separated values (csv) file.<br>
<br>
* You can specify custom delimiters (via the `sep` argument), how missing values are written (via the `na_rep` argument), whether the index is written (via the `index` argument), and whether the header is included (via the `header` argument), among other options.
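* A minimal sketch combining several of these options; the output file name is just an example:
```
myDF.to_csv("data/population2000_custom.csv", sep=';', na_rep='NA', header=True, index=True)
```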
| github_jupyter |
```
import pandas as pd
import numpy as np
import IPython.display as dsp
from pyqstrat.pq_utils import zero_to_nan, get_empty_np_value, infer_frequency, resample_trade_bars, has_display, strtup2date
from pyqstrat.plot import TradeBarSeries, TimeSeries, Subplot, Plot
from typing import Optional, Sequence, Tuple, Union, Callable
def _sort_trade_bars_key(a: str) -> int:
sorted_cols = ['timestamp', 'o', 'h', 'l', 'c', 'v', 'vwap']
if a in sorted_cols:
return sorted_cols.index(a)
else:
return len(sorted_cols)
def sort_trade_bars(columns: Sequence[str]) -> Sequence[str]:
    '''Given a list of column names, sort them in ohlcv order'''
columns = sorted(list(columns)) # Use stable sort to sort columns that we don't know about alphabetically
return sorted(columns, key=_sort_trade_bars_key)
class TradeBars:
'''Used to store OHLCV bars. You must at least supply timestamps and close prices. All other fields are optional.
Attributes:
        timestamps: A numpy datetime array with the datetime for each bar. Must be monotonically increasing.
c: A numpy float array with close prices for the bar.
o: A numpy float array with open prices . Default None
h: A numpy float array with high prices. Default None
        l: A numpy float array with low prices. Default None
v: A numpy integer array with volume for the bar. Default None
vwap: A numpy float array with the volume weighted average price for the bar. Default None
'''
def __init__(self,
timestamps: np.ndarray,
c: np.ndarray,
o: Optional[np.ndarray] = None,
h: Optional[np.ndarray] = None,
l: Optional[np.ndarray] = None,
v: Optional[np.ndarray] = None,
vwap: Optional[np.ndarray] = None) -> None:
'''Zeroes in o, h, l, c are set to nan'''
assert(len(timestamps) > 1)
assert(len(c) == len(timestamps))
assert(o is None or len(o) == len(timestamps))
assert(h is None or len(h) == len(timestamps))
assert(l is None or len(l) == len(timestamps))
assert(v is None or len(v) == len(timestamps))
assert(vwap is None or len(vwap) == len(timestamps))
        if not np.all(np.diff(timestamps).astype(float) > 0):  # check for monotonically increasing timestamps
raise Exception('timestamps must be unique monotonically increasing')
self.timestamps, self.o, self.h, self.l, self.c, self.v, self.vwap = timestamps, o, h, l, c, v, vwap
for field in ['timestamps', 'h', 'l', 'c', 'v', 'vwap']:
v = getattr(self, field)
if isinstance(v, pd.Series):
setattr(self, field, v.values)
for field in ['o', 'h', 'l', 'c']:
setattr(self, field, zero_to_nan(getattr(self, field)))
self._set_valid_rows()
def add_timestamps(self, timestamps: np.ndarray) -> None:
'''
Adds new timestamps to a market data object.
Args:
timestamps (np.array of np.datetime64): New timestamps to add. Does not have to be sorted or unique
>>> timestamps = np.array(['2018-01-05', '2018-01-09', '2018-01-10'], dtype = 'M8[ns]')
>>> c = np.array([8.1, 8.2, 8.3])
>>> o = np.array([9, 10, 11])
>>> trade_bar = TradeBars(timestamps, c, o)
>>> new_timestamps = np.array(['2018-01-07', '2018-01-09'], dtype = 'M8[ns]')
>>> trade_bar.add_timestamps(new_timestamps)
>>> print(trade_bar.timestamps)
['2018-01-05T00:00:00.000000000' '2018-01-07T00:00:00.000000000'
'2018-01-09T00:00:00.000000000' '2018-01-10T00:00:00.000000000']
>>> np.set_printoptions(formatter = {'float': lambda x: f'{x:.4f}'}) # After numpy 1.13 positive floats don't have a leading space for sign
>>> print(trade_bar.o, trade_bar.c)
[9.0000 nan 10.0000 11.0000] [8.1000 nan 8.2000 8.3000]
'''
if timestamps is None or len(timestamps) == 0: return
timestamps = np.unique(timestamps)
new_timestamps = np.setdiff1d(timestamps, self.timestamps, assume_unique=True)
all_timestamps = np.concatenate([self.timestamps, new_timestamps])
col_list = ['o', 'h', 'l', 'c', 'vwap']
sort_index = all_timestamps.argsort()
for col in col_list:
v = getattr(self, col)
if v is None: continue
dtype = getattr(self, col).dtype
fill_value = get_empty_np_value(dtype)
v = np.concatenate([v, np.full(len(new_timestamps), fill_value, dtype=dtype)])
v = v[sort_index]
setattr(self, col, v)
self.timestamps = np.sort(all_timestamps)
        self._set_valid_rows()
def _get_fill_value(self, col_name: str) -> np.generic:
dtype = getattr(self, col_name).dtype
return get_empty_np_value(dtype)
def _set_valid_rows(self) -> None:
col_list = [col for col in [self.o, self.h, self.l, self.c, self.vwap] if col is not None]
nans = np.any(np.isnan(col_list), axis=0)
self.valid_rows = ~nans
def valid_row(self, i: int) -> bool:
'''Return True if the row with index i has no nans in it.'''
return self.valid_rows[i]
def resample(self, sampling_frequency: str) -> Optional['TradeBars']:
'''
Downsample the trade bars data into a new bar frequency
Args:
sampling_frequency: See sampling frequency in pandas
'''
if sampling_frequency is None:
return self
df = self.df()
# Rename timestamps to timestamp
df.index.name = 'timestamp'
df = resample_trade_bars(df, sampling_frequency)
o = df.o if 'o' in df.columns else None
h = df.h if 'h' in df.columns else None
_l = df.l if 'l' in df.columns else None
v = df.v if 'v' in df.columns else None
vwap = df.vwap if 'vwap' in df.columns else None
trade_bar = TradeBars(df.timestamp, df.c, o, h, _l, v, vwap)
trade_bar._set_valid_rows()
return trade_bar
def errors(self, display: bool = True) -> Optional[pd.DataFrame]:
'''Returns a dataframe indicating any highs that are lower than opens, closes, lows or lows that are higher than other columns
Also includes any ohlcv values that are negative
'''
df = self.df()
errors_list = []
if 'h' in df.columns:
bad_highs = df[(df.h < df.c) | (df.h < df.o)]
if len(bad_highs):
bad_highs.insert(len(df.columns), 'error', 'bad high')
errors_list.append(bad_highs)
if 'l' in df.columns:
bad_lows = df[(df.l > df.c) | (df.l > df.o)]
if len(bad_lows):
bad_lows.insert(len(df.columns), 'error', 'bad low')
errors_list.append(bad_lows)
neg_values_mask = (df.c < 0)
for col in ['o', 'h', 'l', 'c', 'v', 'vwap']:
if col in df.columns:
neg_values_mask |= (df[col] < 0)
neg_values = df[neg_values_mask]
if len(neg_values):
neg_values.insert(len(df.columns), 'error', 'negative values')
errors_list.append(neg_values)
if not len(errors_list): return None
df = pd.concat(errors_list)
df = df[sort_trade_bars(df.columns)]
if display: dsp.display(df)
return df
    def warnings(self, warn_std: int = 10, display: bool = True) -> Optional[pd.DataFrame]:
'''Returns a dataframe indicating any values where the bar over bar change is more than warn_std standard deviations.
Args:
warn_std: Number of standard deviations to use as a threshold (default 10)
display: Whether to print out the warning dataframe as well as returning it
'''
df = self.df()
warnings_list = []
for col in ['o', 'h', 'l', 'c', 'vwap']:
if col in df.columns:
ret = np.abs(df[col].pct_change())
std = ret.std()
mask = ret > warn_std * std
df_tmp = df[mask]
if len(df_tmp):
double_mask = mask | mask.shift(-1) # Add the previous row so we know the two values computing a return
df_tmp = df[double_mask]
df_tmp.insert(len(df_tmp.columns), 'ret', ret[mask])
df_tmp.insert(len(df_tmp.columns), 'warning', f'{col} ret > {warn_std} * std: {std:.5g}')
warnings_list.append(df_tmp)
if not len(warnings_list): return None
df = pd.concat(warnings_list)
df = df[sort_trade_bars(df.columns)]
if display: dsp.display(df)
return df
def overview(self, display: bool = True) -> pd.DataFrame:
'''Returns a dataframe showing basic information about the data, including count, number and percent missing, min, max
Args:
display: Whether to print out the warning dataframe as well as returning it
'''
df = self.df().reset_index()
df_overview = pd.DataFrame({'count': len(df),
'num_missing': df.isnull().sum(),
'pct_missing': df.isnull().sum() / len(df),
'min': df.min(),
'max': df.max()})
df_overview = df_overview.T
df_overview = df_overview[sort_trade_bars(df_overview.columns)]
if display: dsp.display(df_overview)
return df_overview
def time_distribution(self,
frequency: str = '15 minutes',
display: bool = True,
plot: bool = True,
figsize: Optional[Tuple[int, int]] = None) -> pd.DataFrame:
'''
Return a dataframe with the time distribution of the bars
Args:
frequency: The width of each bin (default "15 minutes"). You can use hours or days as well.
display: Whether to display the data in addition to returning it.
plot: Whether to plot the data in addition to returning it.
figsize: If plot is set, optional figure size for the plot (default (20,8))
'''
group_col = None
n = int(frequency.split(' ')[0])
freq = frequency.split(' ')[1]
df = self.df().reset_index()
if freq == 'minutes' or freq == 'mins' or freq == 'min':
group_col = [df.date.dt.hour, df.date.dt.minute // n * n]
names = ['hour', 'minute']
elif freq == 'hours' or freq == 'hrs' or freq == 'hr':
group_col = [df.date.dt.weekday_name, df.date.dt.hour // n * n]
names = ['weekday', 'hour']
elif freq == 'weekdays' or freq == 'days' or freq == 'day':
            group_col = df.date.dt.weekday_name
names = ['weekday']
else:
raise Exception(f'unknown time freq: {freq}')
count = df.groupby(group_col)['c'].count()
tdf = pd.DataFrame({'close_count': count, 'count_pct': count / df.c.count()})[['close_count', 'count_pct']]
if 'v' in df.columns:
vsum = df.groupby(group_col)['v'].sum()
vdf = pd.DataFrame({'volume': vsum, 'volume_pct': vsum / df.v.sum()})[['volume', 'volume_pct']]
tdf = pd.concat([vdf, tdf], axis=1)
tdf.index.names = names
if display:
dsp.display(tdf)
if plot:
if not figsize: figsize = (20, 8)
cols = ['close_count', 'volume'] if 'v' in df.columns else ['close_count']
if not has_display():
print('no display found, cannot plot time distribution')
return tdf
tdf[cols].plot(figsize=figsize, kind='bar', subplots=True, title='Time Distribution')
return tdf
def freq_str(self) -> str:
freq = infer_frequency(self.timestamps)
if freq < 1:
freq_str = f'{round(freq * 24. * 60, 2)} minutes'
else:
freq_str = f'{freq} days'
return freq_str
def describe(self,
warn_std: int = 10,
time_distribution_frequency: str = '15 min',
print_time_distribution: bool = False) -> None:
'''
Describe the bars. Shows an overview, errors and warnings for the bar data. This is a good function to use
before running any backtests on a set of bar data.
Args:
warn_std: See warning function
time_distribution_frequency: See time_distribution function
print_time_distribution: Whether to print the time distribution in addition to plotting it.
'''
print(f'Inferred Frequency: {self.freq_str()}')
self.overview()
print('Errors:')
self.errors()
print('Warnings:')
self.warnings(warn_std=warn_std)
print('Time distribution:')
self.time_distribution(display=print_time_distribution, frequency=time_distribution_frequency)
def has_ohlc(self) -> bool:
'''
Returns True if we have all ohlc columns and none are empty
'''
return not (self.o is None or self.h is None or self.l is None or self.c is None)
def plot(self,
figsize: Tuple[int, int] = (15, 8),
date_range: Optional[Union[Tuple[str, str], Tuple[np.datetime64, np.datetime64]]] = None,
sampling_frequency: str = None,
title: str = 'Price / Volume') -> None:
'''
Plot a candlestick or line plot depending on whether we have ohlc data or just close prices
Args:
figsize: Size of the figure (default (15,8))
date_range: A tuple of strings or numpy datetimes for plotting a smaller sample of the data, e.g. ("2018-01-01", "2018-01-06")
sampling_frequency: Downsample before plotting. See pandas frequency strings for possible values.
title: Title of the graph, default "Price / Volume"
'''
if date_range and isinstance(date_range[0], str):
date_range = strtup2date(date_range)
data: Union[TradeBarSeries, TimeSeries]
if self.has_ohlc():
data = TradeBarSeries('price', self.timestamps, self.o, self.h, self.l, self.c, self.v, self.vwap)
else:
data = TimeSeries('price', self.timestamps, self.c)
subplot = Subplot(data)
plot = Plot([subplot], figsize=figsize, date_range=date_range, sampling_frequency=sampling_frequency, title=title)
plot.draw()
def df(self,
start_date: Optional[np.datetime64] = None,
end_date: Optional[np.datetime64] = None) -> pd.DataFrame:
df = pd.DataFrame({'date': self.timestamps, 'c': self.c}).set_index('date')
for tup in [('o', self.o), ('h', self.h), ('l', self.l), ('v', self.v), ('vwap', self.vwap)]:
if tup[1] is not None: df.insert(0, tup[0], tup[1])
if start_date: df = df[df.index.values >= start_date]
if end_date: df = df[df.index.values <= end_date]
return df
def roll_futures(fut_prices: pd.DataFrame,
date_func: Callable[[pd.DataFrame], np.ndarray],
condition_func: Callable[[pd.DataFrame], np.ndarray],
expiries: pd.DataFrame = None,
return_full_df: bool = False) -> pd.DataFrame:
'''Construct a continuous futures dataframe with one row per datetime given rolling logic
Args:
fut_prices: A dataframe containing the columns 'date', 'series', and any other market data,
for example, ohlcv data. Date can contain time for sub-daily bars.
The series column must contain a different string name for each futures series, e.g. SEP2018, DEC2018, etc.
date_func: A function that takes the future prices as an input and returns a numpy array of booleans
True indicates that the future should be rolled on this date if the condition specified in condition_func is met.
This function can assume that we have all the columns in the original market data object plus the same
columns suffixed with _next for the potential series to roll over to.
condition_func: A function that takes the future prices as input and returns a numpy array of booleans.
True indicates that we should try to roll the future at that row.
expiries: An optional dataframe with 2 columns, 'series' and 'expiry'. This should have one row per future series
indicating that future's expiry date.
If you don't pass this in, the function will assume that the expiry column is present in the original dataframe.
        return_full_df: If set, will return the dataframe without removing extra timestamps so you can use your own logic for rolling,
including the _next columns and the roll flag
Returns:
A pandas DataFrame with one row per date, which contains the columns in the original md DataFrame and the same columns suffixed with _next
representing the series we want to roll to. There is also a column called roll_flag which is set to True whenever
the date and roll condition functions are met.
>>> fut_prices = pd.DataFrame({'timestamp': np.concatenate((np.arange(np.datetime64('2018-03-11'), np.datetime64('2018-03-16')),
... np.arange(np.datetime64('2018-03-11'), np.datetime64('2018-03-16')))),
... 'c': [10, 10.1, 10.2, 10.3, 10.4] + [10.35, 10.45, 10.55, 10.65, 10.75],
... 'v': [200, 200, 150, 100, 100] + [100, 50, 200, 250, 300],
... 'series': ['MAR2018'] * 5 + ['JUN2018'] * 5})[['timestamp','series', 'c', 'v']]
>>> expiries = pd.Series(np.array(['2018-03-15', '2018-06-15'], dtype = 'M8[D]'), index = ['MAR2018', 'JUN2018'], name = "expiry")
>>> date_func = lambda fut_prices: fut_prices.expiry - fut_prices.timestamp <= np.timedelta64(3, 'D')
>>> condition_func = lambda fut_prices: fut_prices.v_next > fut_prices.v
>>> df = roll_futures(fut_prices, date_func, condition_func, expiries)
>>> print(df[df.series == 'MAR2018'].timestamp.max() == np.datetime64('2018-03-14'))
True
>>> print(df[df.series == 'JUN2018'].timestamp.max() == np.datetime64('2018-03-15'))
True
'''
if 'timestamp' not in fut_prices.columns or 'series' not in fut_prices.columns:
        raise Exception(f'timestamp or series not found in columns: {fut_prices.columns}')
if expiries is not None:
expiries = expiries.to_frame(name='expiry')
fut_prices = pd.merge(fut_prices, expiries, left_on=['series'], right_index=True, how='left')
else:
if 'expiry' not in fut_prices.columns: raise Exception('expiry column must be present in market data if expiries argument is not specified')
        expiries = fut_prices[['series', 'expiry']].drop_duplicates().sort_values(by='expiry').set_index('series')
expiries = pd.merge(expiries, expiries.shift(-1), left_index=True, right_index=True, how='left', suffixes=['', '_next'])
orig_cols = [col for col in fut_prices.columns if col not in ['timestamp']]
fut_prices1 = pd.merge(fut_prices, expiries[['expiry', 'expiry_next']], on=['expiry'], how='left')
fut_prices = pd.merge(fut_prices1, fut_prices, left_on=['timestamp', 'expiry_next'],
right_on=['timestamp', 'expiry'], how='left', suffixes=['', '_next'])
fut_prices = fut_prices.sort_values(by=['expiry', 'timestamp'])
roll_flag = date_func(fut_prices) & condition_func(fut_prices)
df_roll = pd.DataFrame({'series': fut_prices.series, 'timestamp': fut_prices.timestamp, 'roll_flag': roll_flag})
df_roll = df_roll[df_roll.roll_flag].groupby('series', as_index=False).first()
fut_prices = pd.merge(fut_prices, df_roll, on=['series', 'timestamp'], how='left')
fut_prices.roll_flag = fut_prices.roll_flag.fillna(False)
cols = ['timestamp'] + orig_cols + [col + '_next' for col in orig_cols] + ['roll_flag']
fut_prices = fut_prices[cols]
if return_full_df: return fut_prices
df_list = []
for series, g in fut_prices.groupby('expiry'):
roll_flag = g.roll_flag
true_values = roll_flag[roll_flag]
if len(true_values):
first_true_index = true_values.index[0]
roll_flag = roll_flag[first_true_index:]
false_after_true_values = roll_flag[~roll_flag]
if len(false_after_true_values):
first_false_after_true_idx = false_after_true_values.index[0]
g = g.loc[:first_false_after_true_idx]
df_list.append(g)
full_df = pd.concat(df_list)
full_df = full_df.sort_values(by=['expiry', 'timestamp']).drop_duplicates(subset=['timestamp'])
return full_df
def test_trade_bars() -> None:
from datetime import datetime, timedelta
np.random.seed(0)
timestamps = np.arange(datetime(2018, 1, 1, 9, 0, 0), datetime(2018, 3, 1, 16, 0, 0), timedelta(minutes=5))
timestamps = np.array([dt for dt in timestamps.astype(object) if dt.hour >= 9 and dt.hour <= 16]).astype('M8[m]')
rets = np.random.normal(size=len(timestamps)) / 1000
c_0 = 100
c = np.round(c_0 * np.cumprod(1 + rets), 2)
_l = np.round(c * (1. - np.abs(np.random.random(size=len(timestamps)) / 1000.)), 2) # PEP8 thinks l is hard to distinguish
h = np.round(c * (1. + np.abs(np.random.random(size=len(timestamps)) / 1000.)), 2)
o = np.round(_l + (h - _l) * np.random.random(size=len(timestamps)), 2)
v = np.abs(np.round(np.random.normal(size=len(timestamps)) * 1000))
vwap = 0.5 * (_l + h)
c[18] = np.nan
_l[85] = 1000
trade_bar = TradeBars(timestamps, c, o, h, _l, v, vwap)
trade_bar.describe()
trade_bar.plot(date_range=('2018-01-02', '2018-01-02 12:00'))
if __name__ == "__main__":
test_trade_bars()
import doctest
doctest.testmod(optionflags=doctest.NORMALIZE_WHITESPACE)
```
| github_jupyter |
```
import os, json, sys, time, random
import numpy as np
import torch
from easydict import EasyDict
from math import floor
from easydict import EasyDict
from steves_utils.vanilla_train_eval_test_jig import Vanilla_Train_Eval_Test_Jig
from steves_utils.torch_utils import get_dataset_metrics, independent_accuracy_assesment
from steves_models.configurable_vanilla import Configurable_Vanilla
from steves_utils.torch_sequential_builder import build_sequential
from steves_utils.lazy_map import Lazy_Map
from steves_utils.sequence_aggregator import Sequence_Aggregator
from steves_utils.stratified_dataset.traditional_accessor import Traditional_Accessor_Factory
from steves_utils.cnn_do_report import (
get_loss_curve,
get_results_table,
get_parameters_table,
get_domain_accuracies,
)
from steves_utils.torch_utils import (
confusion_by_domain_over_dataloader,
independent_accuracy_assesment
)
from steves_utils.utils_v2 import (
per_domain_accuracy_from_confusion,
get_datasets_base_path
)
# from steves_utils.ptn_do_report import TBD
required_parameters = {
"experiment_name",
"lr",
"device",
"dataset_seed",
"seed",
"labels",
"domains_target",
"domains_source",
"num_examples_per_domain_per_label_source",
"num_examples_per_domain_per_label_target",
"batch_size",
"n_epoch",
"patience",
"criteria_for_best",
"normalize_source",
"normalize_target",
"x_net",
"NUM_LOGS_PER_EPOCH",
"BEST_MODEL_PATH",
"pickle_name_source",
"pickle_name_target",
"torch_default_dtype",
}
from steves_utils.ORACLE.utils_v2 import (
ALL_SERIAL_NUMBERS,
ALL_DISTANCES_FEET_NARROWED,
)
standalone_parameters = {}
standalone_parameters["experiment_name"] = "MANUAL CORES CNN"
standalone_parameters["lr"] = 0.0001
standalone_parameters["device"] = "cuda"
standalone_parameters["dataset_seed"] = 1337
standalone_parameters["seed"] = 1337
standalone_parameters["labels"] = ALL_SERIAL_NUMBERS
standalone_parameters["domains_source"] = [8,32,50]
standalone_parameters["domains_target"] = [14,20,26,38,44,]
standalone_parameters["num_examples_per_domain_per_label_source"]=-1
standalone_parameters["num_examples_per_domain_per_label_target"]=-1
standalone_parameters["pickle_name_source"] = "oracle.Run1_framed_2000Examples_stratified_ds.2022A.pkl"
standalone_parameters["pickle_name_target"] = "oracle.Run2_framed_2000Examples_stratified_ds.2022A.pkl"
standalone_parameters["torch_default_dtype"] = "torch.float32"
standalone_parameters["batch_size"]=128
standalone_parameters["n_epoch"] = 3
standalone_parameters["patience"] = 10
standalone_parameters["criteria_for_best"] = "target_accuracy"
standalone_parameters["normalize_source"] = False
standalone_parameters["normalize_target"] = False
standalone_parameters["x_net"] = [
{"class": "nnReshape", "kargs": {"shape":[-1, 1, 2, 256]}},
{"class": "Conv2d", "kargs": { "in_channels":1, "out_channels":256, "kernel_size":(1,7), "bias":False, "padding":(0,3), },},
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm2d", "kargs": {"num_features":256}},
{"class": "Conv2d", "kargs": { "in_channels":256, "out_channels":80, "kernel_size":(2,7), "bias":True, "padding":(0,3), },},
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm2d", "kargs": {"num_features":80}},
{"class": "Flatten", "kargs": {}},
{"class": "Linear", "kargs": {"in_features": 80*256, "out_features": 256}}, # 80 units per IQ pair
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm1d", "kargs": {"num_features":256}},
{"class": "Linear", "kargs": {"in_features": 256, "out_features": len(standalone_parameters["labels"])}},
]
standalone_parameters["NUM_LOGS_PER_EPOCH"] = 10
standalone_parameters["BEST_MODEL_PATH"] = "./best_model.pth"
# Parameters
parameters = {
"experiment_name": "cnn_1:oracle.run2",
"labels": [
"3123D52",
"3123D65",
"3123D79",
"3123D80",
"3123D54",
"3123D70",
"3123D7B",
"3123D89",
"3123D58",
"3123D76",
"3123D7D",
"3123EFE",
"3123D64",
"3123D78",
"3123D7E",
"3124E4A",
],
"domains_source": [8, 32, 50],
"domains_target": [14, 20, 26, 38, 44],
"pickle_name_source": "oracle.Run2_10kExamples_stratified_ds.2022A.pkl",
"pickle_name_target": "oracle.Run2_10kExamples_stratified_ds.2022A.pkl",
"device": "cuda",
"lr": 0.0001,
"batch_size": 128,
"normalize_source": False,
"normalize_target": False,
"num_examples_per_domain_per_label_source": -1,
"num_examples_per_domain_per_label_target": -1,
"torch_default_dtype": "torch.float32",
"n_epoch": 50,
"patience": 3,
"criteria_for_best": "target_accuracy",
"x_net": [
{"class": "nnReshape", "kargs": {"shape": [-1, 1, 2, 256]}},
{
"class": "Conv2d",
"kargs": {
"in_channels": 1,
"out_channels": 256,
"kernel_size": [1, 7],
"bias": False,
"padding": [0, 3],
},
},
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm2d", "kargs": {"num_features": 256}},
{
"class": "Conv2d",
"kargs": {
"in_channels": 256,
"out_channels": 80,
"kernel_size": [2, 7],
"bias": True,
"padding": [0, 3],
},
},
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm2d", "kargs": {"num_features": 80}},
{"class": "Flatten", "kargs": {}},
{"class": "Linear", "kargs": {"in_features": 20480, "out_features": 256}},
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm1d", "kargs": {"num_features": 256}},
{"class": "Linear", "kargs": {"in_features": 256, "out_features": 16}},
],
"NUM_LOGS_PER_EPOCH": 10,
"BEST_MODEL_PATH": "./best_model.pth",
"dataset_seed": 7,
"seed": 7,
}
# Set this to True if you want to run this template directly
STANDALONE = False
if STANDALONE:
print("parameters not injected, running with standalone_parameters")
parameters = standalone_parameters
if not 'parameters' in locals() and not 'parameters' in globals():
raise Exception("Parameter injection failed")
#Use an easy dict for all the parameters
p = EasyDict(parameters)
supplied_keys = set(p.keys())
if supplied_keys != required_parameters:
print("Parameters are incorrect")
if len(supplied_keys - required_parameters)>0: print("Shouldn't have:", str(supplied_keys - required_parameters))
if len(required_parameters - supplied_keys)>0: print("Need to have:", str(required_parameters - supplied_keys))
raise RuntimeError("Parameters are incorrect")
###################################
# Set the RNGs and make it all deterministic
###################################
np.random.seed(p.seed)
random.seed(p.seed)
torch.manual_seed(p.seed)
torch.use_deterministic_algorithms(True)
torch.set_default_dtype(eval(p.torch_default_dtype))
###################################
# Build the network(s)
# Note: It's critical to do this AFTER setting the RNG
###################################
x_net = build_sequential(p.x_net)
start_time_secs = time.time()
def wrap_in_dataloader(p, ds):
return torch.utils.data.DataLoader(
ds,
batch_size=p.batch_size,
shuffle=True,
num_workers=1,
persistent_workers=True,
prefetch_factor=50,
pin_memory=True
)
taf_source = Traditional_Accessor_Factory(
labels=p.labels,
domains=p.domains_source,
num_examples_per_domain_per_label=p.num_examples_per_domain_per_label_source,
pickle_path=os.path.join(get_datasets_base_path(), p.pickle_name_source),
seed=p.dataset_seed
)
train_original_source, val_original_source, test_original_source = \
taf_source.get_train(), taf_source.get_val(), taf_source.get_test()
taf_target = Traditional_Accessor_Factory(
labels=p.labels,
domains=p.domains_target,
    num_examples_per_domain_per_label=p.num_examples_per_domain_per_label_target,
pickle_path=os.path.join(get_datasets_base_path(), p.pickle_name_target),
seed=p.dataset_seed
)
train_original_target, val_original_target, test_original_target = \
taf_target.get_train(), taf_target.get_val(), taf_target.get_test()
# For CNN We only use X and Y. And we only train on the source.
# Properly form the data using a transform lambda and Lazy_Map. Finally wrap them in a dataloader
transform_lambda = lambda ex: ex[:2] # Strip the tuple to just (x,y)
train_processed_source = wrap_in_dataloader(
p,
Lazy_Map(train_original_source, transform_lambda)
)
val_processed_source = wrap_in_dataloader(
p,
Lazy_Map(val_original_source, transform_lambda)
)
test_processed_source = wrap_in_dataloader(
p,
Lazy_Map(test_original_source, transform_lambda)
)
train_processed_target = wrap_in_dataloader(
p,
Lazy_Map(train_original_target, transform_lambda)
)
val_processed_target = wrap_in_dataloader(
p,
Lazy_Map(val_original_target, transform_lambda)
)
test_processed_target = wrap_in_dataloader(
p,
Lazy_Map(test_original_target, transform_lambda)
)
datasets = EasyDict({
"source": {
"original": {"train":train_original_source, "val":val_original_source, "test":test_original_source},
"processed": {"train":train_processed_source, "val":val_processed_source, "test":test_processed_source}
},
"target": {
"original": {"train":train_original_target, "val":val_original_target, "test":test_original_target},
"processed": {"train":train_processed_target, "val":val_processed_target, "test":test_processed_target}
},
})
ep = next(iter(test_processed_target))
ep[0].dtype
model = Configurable_Vanilla(
x_net=x_net,
label_loss_object=torch.nn.NLLLoss(),
learning_rate=p.lr
)
jig = Vanilla_Train_Eval_Test_Jig(
model=model,
path_to_best_model=p.BEST_MODEL_PATH,
device=p.device,
label_loss_object=torch.nn.NLLLoss(),
)
jig.train(
train_iterable=datasets.source.processed.train,
source_val_iterable=datasets.source.processed.val,
target_val_iterable=datasets.target.processed.val,
patience=p.patience,
num_epochs=p.n_epoch,
num_logs_per_epoch=p.NUM_LOGS_PER_EPOCH,
criteria_for_best=p.criteria_for_best
)
total_experiment_time_secs = time.time() - start_time_secs
source_test_label_accuracy, source_test_label_loss = jig.test(datasets.source.processed.test)
target_test_label_accuracy, target_test_label_loss = jig.test(datasets.target.processed.test)
source_val_label_accuracy, source_val_label_loss = jig.test(datasets.source.processed.val)
target_val_label_accuracy, target_val_label_loss = jig.test(datasets.target.processed.val)
history = jig.get_history()
total_epochs_trained = len(history["epoch_indices"])
val_dl = wrap_in_dataloader(p, Sequence_Aggregator((datasets.source.original.val, datasets.target.original.val)))
confusion = confusion_by_domain_over_dataloader(model, p.device, val_dl, forward_uses_domain=False)
per_domain_accuracy = per_domain_accuracy_from_confusion(confusion)
# Add a key to per_domain_accuracy for if it was a source domain
for domain, accuracy in per_domain_accuracy.items():
per_domain_accuracy[domain] = {
"accuracy": accuracy,
"source?": domain in p.domains_source
}
# Do an independent accuracy assesment JUST TO BE SURE!
# _source_test_label_accuracy = independent_accuracy_assesment(model, datasets.source.processed.test, p.device)
# _target_test_label_accuracy = independent_accuracy_assesment(model, datasets.target.processed.test, p.device)
# _source_val_label_accuracy = independent_accuracy_assesment(model, datasets.source.processed.val, p.device)
# _target_val_label_accuracy = independent_accuracy_assesment(model, datasets.target.processed.val, p.device)
# assert(_source_test_label_accuracy == source_test_label_accuracy)
# assert(_target_test_label_accuracy == target_test_label_accuracy)
# assert(_source_val_label_accuracy == source_val_label_accuracy)
# assert(_target_val_label_accuracy == target_val_label_accuracy)
###################################
# Write out the results
###################################
experiment = {
"experiment_name": p.experiment_name,
"parameters": p,
"results": {
"source_test_label_accuracy": source_test_label_accuracy,
"source_test_label_loss": source_test_label_loss,
"target_test_label_accuracy": target_test_label_accuracy,
"target_test_label_loss": target_test_label_loss,
"source_val_label_accuracy": source_val_label_accuracy,
"source_val_label_loss": source_val_label_loss,
"target_val_label_accuracy": target_val_label_accuracy,
"target_val_label_loss": target_val_label_loss,
"total_epochs_trained": total_epochs_trained,
"total_experiment_time_secs": total_experiment_time_secs,
"confusion": confusion,
"per_domain_accuracy": per_domain_accuracy,
},
"history": history,
"dataset_metrics": get_dataset_metrics(datasets, "cnn"),
}
get_loss_curve(experiment)
get_results_table(experiment)
get_domain_accuracies(experiment)
print("Source Test Label Accuracy:", experiment["results"]["source_test_label_accuracy"], "Target Test Label Accuracy:", experiment["results"]["target_test_label_accuracy"])
print("Source Val Label Accuracy:", experiment["results"]["source_val_label_accuracy"], "Target Val Label Accuracy:", experiment["results"]["target_val_label_accuracy"])
json.dumps(experiment)
```
| github_jupyter |
\# Developer: Ali Hashaam ([email protected]) <br>
\# 5th March 2019 <br>
\# © 2019 initOS GmbH <br>
\# License MIT <br>
\# Library for TSVM and SelfLearning taken from https://github.com/tmadl/semisup-learn <br>
\# Library for lagrangean-S3VM taken from https://github.com/fbagattini/lagrangean-s3vm <br>
```
from __future__ import division  # must come before any other import in the cell
from sklearn.svm import SVC
import pandas as pd
import numpy as np
import re
from sklearn.model_selection import StratifiedShuffleSplit
from sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer
from frameworks.SelfLearning import *
from imblearn.over_sampling import SMOTE
from collections import Counter
from imblearn.under_sampling import RepeatedEditedNearestNeighbours
from sklearn.naive_bayes import MultinomialNB
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.externals import joblib
import time
import matplotlib.pyplot as plt
from sklearn.metrics import classification_report
from methods.scikitTSVM import SKTSVM
from sklearn.metrics import confusion_matrix
from sklearn.utils.multiclass import unique_labels
source_domain = pd.read_csv('github_preprocessed_data.csv')
target_domain = pd.read_csv('mantis_data_for_domain_adaptation.csv')
source_domain.drop(["Unnamed: 0"], axis=1, inplace=True)
target_domain.drop(["Unnamed: 0"], axis=1, inplace=True)
source_domain['text'] = source_domain['text'].fillna("")
target_domain['textual_data'] = target_domain['textual_data'].fillna("")
print source_domain['type'].value_counts()
print target_domain['type'].value_counts()
unlabelled_index_target = target_domain[(target_domain['bug_or_not'].isnull())].index
labelled_index_target = target_domain[~(target_domain['bug_or_not'].isnull())].index
len(unlabelled_index_target), len(labelled_index_target)
target_domain_labeled = target_domain.loc[labelled_index_target]
len(target_domain_labeled), len(target_domain)
tfidf_vectorizer_source = TfidfVectorizer(max_df=0.95, min_df=2, max_features=500, stop_words='english')
source_domain_balanced = source_domain.groupby('type').apply(lambda x: x.sample(400))
print source_domain_balanced['type'].value_counts()
source_domain_X = tfidf_vectorizer_source.fit_transform(source_domain_balanced['text'])
source_domain_Y = np.array(source_domain_balanced['type'])
stratified_shuffle_split = StratifiedShuffleSplit(n_splits=3, test_size=0.3, random_state=0)
scores = []
iteration = 1
for train_index, test_index in stratified_shuffle_split.split(source_domain_X, source_domain_Y):
X_train = source_domain_X[train_index].copy()
Y_train = source_domain_Y[train_index].copy()
X_test = source_domain_X[test_index].copy()
Y_test = source_domain_Y[test_index].copy()
clf = MultinomialNB()
clf.fit(X_train, Y_train)
print clf.score(X_test, Y_test.astype(float))
y_pred = clf.predict(X_test)
result = classification_report(Y_test.astype(float), y_pred.astype(float), output_dict=True)
src = pd.DataFrame(result)
src.transpose().to_csv('{}_{}_{}_latex_table_report.csv'.format('source_vs_source', '500', iteration))
print src.transpose()
iteration += 1
```
# Baseline TL Source VS Target Supervised
```
source_domain_X = tfidf_vectorizer_source.fit_transform(source_domain['text'])
source_domain_Y = np.array(source_domain['type'])
for x in xrange(5):
clf = MultinomialNB()
clf.fit(source_domain_X, source_domain_Y)
target_domain_labeled_balanced = target_domain_labeled.groupby('type').apply(lambda x: x.sample(90))
target_domain_X = tfidf_vectorizer_source.transform(target_domain_labeled_balanced['textual_data'])
target_domain_Y = np.array(target_domain_labeled_balanced['type'])
print("members for classes {}".format(",".join("(%s,%s)" % tup for tup in sorted(Counter(target_domain_Y).items()))))
score = clf.score(target_domain_X, target_domain_Y.astype(float))
print "Baseline TL Score: "+ str(score)
y_pred = clf.predict(target_domain_X)
print("members for classes {}".format(",".join("(%s,%s)" % tup for tup in sorted(Counter(y_pred).items()))))
result = classification_report(target_domain_Y.astype(float), y_pred.astype(float), output_dict=True)
src = pd.DataFrame(result)
print src
src.transpose().to_csv('{}_{}_{}_latex_table_report.csv'.format('source_vs_target_supervised', '500', x))
```
# TL Source Semi-Supervised
```
target_domain_unlabeled = target_domain.loc[unlabelled_index_target, ["textual_data", "type"]].copy()
target_domain_unlabeled["type"] = -1
source_domain_df = source_domain[["text", "type"]].copy()
source_domain_df.rename({"text": "textual_data"}, axis=1, inplace=True)
domain_adaptation_df = pd.concat([source_domain_df, target_domain_unlabeled])
len(domain_adaptation_df), len(source_domain_df), len(target_domain_unlabeled)
print domain_adaptation_df['type'].value_counts()
print target_domain_labeled_balanced['type'].value_counts()
from collections import Counter
#print("members for classes {}".format(",".join("(%s,%s)" % tup for tup in sorted(Counter(Y).items()))))
def domain_adaptation(dom_a_df, target_domain_labeled,
classifier, label_type, neg_class, classifier_name):
dom_a_df.loc[dom_a_df['type']==0, 'type'] = neg_class
dom_a_df.loc[dom_a_df['type']==1, 'type'] = 1
target_domain_labeled.loc[target_domain_labeled['type']==0, 'type'] = neg_class
target_domain_labeled.loc[target_domain_labeled['type']==1, 'type'] = 1
tfidf_vectorizer_source = TfidfVectorizer(max_df=0.95, min_df=2, max_features=500, stop_words='english')
source_X = tfidf_vectorizer_source.fit_transform(dom_a_df['textual_data']).toarray()
source_Y = np.array(dom_a_df['type'])
target_domain_X = tfidf_vectorizer_source.transform(target_domain_labeled['textual_data']).toarray()
target_domain_Y = np.array(target_domain_labeled['type'])
if label_type != 'int':
source_Y = source_Y.astype(float)
else:
source_Y = source_Y.astype(int)
classifier.fit(source_X, source_Y)
score = classifier.score(target_domain_X, target_domain_Y.astype(int))
joblib.dump(classifier, 'models/DA_{}.pkl'.format(classifier_name))
joblib.dump(target_domain_X, 'models/X_test_DA_{}.pkl'.format(classifier_name))
joblib.dump(target_domain_Y, 'models/Y_test_DA_{}.pkl'.format(classifier_name))
print "{} score: {}".format(classifier_name, score)
sklearn_lr = LogisticRegression(solver='lbfgs')
domain_adaptation(domain_adaptation_df.copy(), target_domain_labeled_balanced.copy(),
SelfLearningModel(sklearn_lr), 'float', 0, 'ST_LR')
domain_adaptation(domain_adaptation_df.copy(), target_domain_labeled_balanced.copy(),
SKTSVM(), 'int', 0, 'TSVM')
def plot_confusion_matrix(y_true, y_pred, classes,
normalize=False,
title=None,
cmap=plt.cm.Blues):
"""
This function prints and plots the confusion matrix.
Normalization can be applied by setting `normalize=True`.
"""
if not title:
if normalize:
title = 'Normalized confusion matrix'
else:
title = 'Confusion matrix, without normalization'
# Compute confusion matrix
cm = confusion_matrix(y_true, y_pred)
# Only use the labels that appear in the data
#classes = classes[unique_labels(y_true, y_pred)]
if normalize:
cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
print("Normalized confusion matrix")
else:
print('Confusion matrix, without normalization')
print(cm)
fig, ax = plt.subplots()
im = ax.imshow(cm, interpolation='nearest', cmap=cmap)
ax.figure.colorbar(im, ax=ax)
# We want to show all ticks...
ax.set(xticks=np.arange(cm.shape[1]),
yticks=np.arange(cm.shape[0]),
# ... and label them with the respective list entries
xticklabels=['Bug', 'Non-Bug'], yticklabels=['Bug', 'Non-Bug'],
title=title,
ylabel='True label',
xlabel='Predicted label')
# Rotate the tick labels and set their alignment.
plt.setp(ax.get_xticklabels(), rotation=45, ha="right",
rotation_mode="anchor")
# Loop over data dimensions and create text annotations.
fmt = '.2f' if normalize else 'd'
thresh = cm.max() / 2.
for i in range(cm.shape[0]):
for j in range(cm.shape[1]):
ax.text(j, i, format(cm[i, j], fmt),
ha="center", va="center",
color="white" if cm[i, j] > thresh else "black")
fig.tight_layout()
#plt.savefig('Confusion_matrix(Phase_3).png', bbox_inches='tight')
return ax
def get_results(classifier, data_type):
dict_features = {}
model = joblib.load('models/DA_{}.pkl'.format(classifier, 3))
x_tst = joblib.load('models/X_test_DA_{}.pkl'.format(classifier, 3))
y_tst = joblib.load('models/Y_test_DA_{}.pkl'.format(classifier, 3))
acc = model.score(x_tst, y_tst.astype(data_type))
print acc
y_pred = model.predict(x_tst)
print("members for classes {}".format(",".join("(%s,%s)" % tup for tup in sorted(Counter(y_tst).items()))))
print("members for classes {}".format(",".join("(%s,%s)" % tup for tup in sorted(Counter(y_pred).items()))))
result = classification_report(y_tst.astype(data_type), y_pred.astype(data_type), output_dict=True)
result_df = pd.DataFrame(result)
result_df.transpose().to_csv('DA_{}_latex_table_report.csv'.format(classifier))
np.set_printoptions(precision=2)
# Plot non-normalized confusion matrix
plot_confusion_matrix(y_tst.astype(data_type), y_pred.astype(data_type), classes=[0, 1],
title='Confusion matrix, without normalization')
# Plot normalized confusion matrix
#plot_confusion_matrix(y_tst.astype(data_type), y_pred.astype(data_type), classes=[0, 1], normalize=True,
# title='Normalized confusion matrix')
plt.show()
print result_df.transpose()
return result_df
st_results = get_results('ST_LR', float)
tsvm_results = get_results('TSVM', int)
```
| github_jupyter |
<a href="https://colab.research.google.com/github/lauraAriasFdez/Ciphers/blob/master/project_tfif.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
### 1. Connect To Google Drive + Get Data
```
# MAIN DIRECTORY STILL TO DO
from google.colab import drive
drive.mount('/content/gdrive')
data_file = "/content/gdrive/MyDrive/CSCI4511W/project/sentiments.csv"
import pandas as pd
import numpy as np
cols = ['sentiment','id','date','query_string','user','text']
sms_data = pd.read_csv(data_file, encoding='latin-1',header=None,names=cols)
# replace lables 0 = neg 1= pos
sms_data.sentiment = sms_data.sentiment.replace({0: 0, 4: 1})
labels = sms_data[sms_data.columns[0]].to_numpy()
```
### Preprocess Data
```
import re
import nltk
nltk.download('stopwords')
from nltk.corpus import stopwords
from nltk.stem.porter import PorterStemmer
#We import English stop-words from the NLTK package and remove them if found in the sentence.
#While removing stop-words, we also perform stemming: any word that is not a stop-word is converted to its root form.
"""
https://stackoverflow.com/questions/52026677/sentiment140-preprocessing
https://www.analyticsvidhya.com/blog/2020/11/understanding-naive-bayes-svm-and-its-implementation-on-spam-sms/
"""
def clean_data(content):
stemming = PorterStemmer()
for i in range (0,len(content)):
        ## print cleaning progress
if (i%1000000==0):
print(i ," already cleaned")
#remove @mentions
tweet = re.sub(r'@[A-Za-z0-9]+',"",content[i])
#remove urls
tweet = re.sub(r'https?:\/\/\S+',"",tweet)
        #remove all unnecessary characters such as punctuation
tweet = re.sub('[^a-zA-Z]',repl = ' ',string = tweet)
        tweet = tweet.lower()
tweet = tweet.split()
        ## stemming and stop-word removal
tweet = [stemming.stem(word) for word in tweet if word not in set(stopwords.words('english'))]
tweet = ' '.join(tweet)
        #cleaned tweet
content[i] = tweet
return content
```
https://getpocket.com/read/3040941140
Texthero is designed as a Pandas wrapper, so it makes it easier than ever to preprocess and analyze text based Pandas Series
```
!pip install texthero
import pandas as pd
import texthero as hero #config import cid, csec, ua
custom_cleaning = [
#Replace not assigned values with empty space
hero.preprocessing.fillna,
hero.preprocessing.lowercase,
hero.preprocessing.remove_digits,
hero.preprocessing.remove_punctuation,
hero.preprocessing.remove_diacritics,
hero.preprocessing.remove_stopwords,
hero.preprocessing.remove_whitespace,
hero.preprocessing.stem
]
content = hero.clean(sms_data['text'], pipeline = custom_cleaning)
#content = content.to_numpy()
```
### TF-IDF Feature Extraction
```
from sklearn.feature_extraction.text import TfidfVectorizer
tfidf = TfidfVectorizer()
tfidf_data = tfidf.fit_transform(content)
from sklearn.model_selection import train_test_split
tfidf_x_train,tfidf_x_test,y_train,y_test = train_test_split(tfidf_data,labels,test_size = 0.3, stratify=labels,random_state=100)
```
### Multinomial Naive Bayes
```
from sklearn.naive_bayes import MultinomialNB
from sklearn.metrics import classification_report, confusion_matrix
from sklearn.metrics import f1_score
# NAIVE BAYES + TF-IDF
print("NAIVE BAYES + TF-IDF______________________________________________________________")
clf_multinomialnb = MultinomialNB()
clf_multinomialnb.fit(tfidf_x_train,y_train)
y_pred = clf_multinomialnb.predict(tfidf_x_test)
print(classification_report(y_test,y_pred))
#>>> f1_score(y_true, y_pred, average='weighted')
f1_score(y_test,y_pred)
```
### SVM
```
from sklearn.svm import LinearSVC
# SVM + TF-IDF
print("LINEAR SVM + TF-IDF______________________________________________________________")
linearsvc = LinearSVC()
linearsvc.fit(tfidf_x_train,y_train)
y_pred = linearsvc.predict(tfidf_x_test)
print(classification_report(y_test,y_pred))
f1_score(y_test,y_pred)
```
### Logistic Regression
```
#https://towardsdatascience.com/logistic-regression-using-python-sklearn-numpy-mnist-handwriting-recognition-matplotlib-a6b31e2b166a
from sklearn.linear_model import LogisticRegression
logisticRegr = LogisticRegression()
logisticRegr.fit(tfidf_x_train,y_train)
y_pred = logisticRegr.predict(tfidf_x_test)
print(classification_report(y_test,y_pred))
f1_score(y_test,y_pred)
```
| github_jupyter |
# Chainer MNIST Model Deployment
* Wrap a Chainer MNIST python model for use as a prediction microservice in seldon-core
* Run locally on Docker to test
* Deploy on seldon-core running on minikube
## Dependencies
* [Helm](https://github.com/kubernetes/helm)
* [Minikube](https://github.com/kubernetes/minikube)
* [S2I](https://github.com/openshift/source-to-image)
```bash
pip install seldon-core
pip install chainer==6.2.0
```
## Train locally
```
#!/usr/bin/env python
import argparse
import chainer
import chainer.functions as F
import chainer.links as L
import chainerx
from chainer import training
from chainer.training import extensions
# Network definition
class MLP(chainer.Chain):
def __init__(self, n_units, n_out):
super(MLP, self).__init__()
with self.init_scope():
# the size of the inputs to each layer will be inferred
self.l1 = L.Linear(None, n_units) # n_in -> n_units
self.l2 = L.Linear(None, n_units) # n_units -> n_units
self.l3 = L.Linear(None, n_out) # n_units -> n_out
def forward(self, x):
h1 = F.relu(self.l1(x))
h2 = F.relu(self.l2(h1))
return self.l3(h2)
def main():
parser = argparse.ArgumentParser(description="Chainer example: MNIST")
parser.add_argument(
"--batchsize",
"-b",
type=int,
default=100,
help="Number of images in each mini-batch",
)
parser.add_argument(
"--epoch",
"-e",
type=int,
default=20,
help="Number of sweeps over the dataset to train",
)
parser.add_argument(
"--frequency", "-f", type=int, default=-1, help="Frequency of taking a snapshot"
)
parser.add_argument(
"--device",
"-d",
type=str,
default="-1",
help="Device specifier. Either ChainerX device "
"specifier or an integer. If non-negative integer, "
"CuPy arrays with specified device id are used. If "
"negative integer, NumPy arrays are used",
)
parser.add_argument(
"--out", "-o", default="result", help="Directory to output the result"
)
parser.add_argument(
"--resume", "-r", type=str, help="Resume the training from snapshot"
)
parser.add_argument("--unit", "-u", type=int, default=1000, help="Number of units")
parser.add_argument(
"--noplot",
dest="plot",
action="store_false",
help="Disable PlotReport extension",
)
group = parser.add_argument_group("deprecated arguments")
group.add_argument(
"--gpu",
"-g",
dest="device",
type=int,
nargs="?",
const=0,
help="GPU ID (negative value indicates CPU)",
)
args = parser.parse_args(args=[])
device = chainer.get_device(args.device)
print("Device: {}".format(device))
print("# unit: {}".format(args.unit))
print("# Minibatch-size: {}".format(args.batchsize))
print("# epoch: {}".format(args.epoch))
print("")
# Set up a neural network to train
# Classifier reports softmax cross entropy loss and accuracy at every
# iteration, which will be used by the PrintReport extension below.
model = L.Classifier(MLP(args.unit, 10))
model.to_device(device)
device.use()
# Setup an optimizer
optimizer = chainer.optimizers.Adam()
optimizer.setup(model)
# Load the MNIST dataset
train, test = chainer.datasets.get_mnist()
train_iter = chainer.iterators.SerialIterator(train, args.batchsize)
test_iter = chainer.iterators.SerialIterator(
test, args.batchsize, repeat=False, shuffle=False
)
# Set up a trainer
updater = training.updaters.StandardUpdater(train_iter, optimizer, device=device)
trainer = training.Trainer(updater, (args.epoch, "epoch"), out=args.out)
# Evaluate the model with the test dataset for each epoch
trainer.extend(extensions.Evaluator(test_iter, model, device=device))
# Dump a computational graph from 'loss' variable at the first iteration
# The "main" refers to the target link of the "main" optimizer.
# TODO(niboshi): Temporarily disabled for chainerx. Fix it.
if device.xp is not chainerx:
trainer.extend(extensions.DumpGraph("main/loss"))
# Take a snapshot for each specified epoch
frequency = args.epoch if args.frequency == -1 else max(1, args.frequency)
trainer.extend(extensions.snapshot(), trigger=(frequency, "epoch"))
# Write a log of evaluation statistics for each epoch
trainer.extend(extensions.LogReport())
# Save two plot images to the result dir
if args.plot and extensions.PlotReport.available():
trainer.extend(
extensions.PlotReport(
["main/loss", "validation/main/loss"], "epoch", file_name="loss.png"
)
)
trainer.extend(
extensions.PlotReport(
["main/accuracy", "validation/main/accuracy"],
"epoch",
file_name="accuracy.png",
)
)
# Print selected entries of the log to stdout
# Here "main" refers to the target link of the "main" optimizer again, and
# "validation" refers to the default name of the Evaluator extension.
# Entries other than 'epoch' are reported by the Classifier link, called by
# either the updater or the evaluator.
trainer.extend(
extensions.PrintReport(
[
"epoch",
"main/loss",
"validation/main/loss",
"main/accuracy",
"validation/main/accuracy",
"elapsed_time",
]
)
)
# Print a progress bar to stdout
trainer.extend(extensions.ProgressBar())
if args.resume is not None:
# Resume from a snapshot
chainer.serializers.load_npz(args.resume, trainer)
# Run the training
trainer.run()
if __name__ == "__main__":
main()
```
Wrap model using s2i
```
!s2i build . seldonio/seldon-core-s2i-python37-ubi8:1.7.0-dev chainer-mnist:0.1
!docker run --name "mnist_predictor" -d --rm -p 5000:5000 chainer-mnist:0.1
```
Send some random features that conform to the contract
```
!seldon-core-tester contract.json 0.0.0.0 5000 -p
!docker rm mnist_predictor --force
```
## Test using Minikube
**Due to a [minikube/s2i issue](https://github.com/SeldonIO/seldon-core/issues/253) you will need [s2i >= 1.1.13](https://github.com/openshift/source-to-image/releases/tag/v1.1.13)**
```
!minikube start --memory 4096
```
## Setup Seldon Core
Use the setup notebook to [Setup Cluster](https://docs.seldon.io/projects/seldon-core/en/latest/examples/seldon_core_setup.html#Setup-Cluster) with [Ambassador Ingress](https://docs.seldon.io/projects/seldon-core/en/latest/examples/seldon_core_setup.html#Ambassador) and [Install Seldon Core](https://docs.seldon.io/projects/seldon-core/en/latest/examples/seldon_core_setup.html#Install-Seldon-Core). Instructions [also online](https://docs.seldon.io/projects/seldon-core/en/latest/examples/seldon_core_setup.html).
```
!eval $(minikube docker-env) && s2i build . seldonio/seldon-core-s2i-python37-ubi8:1.7.0-dev chainer-mnist:0.1
!kubectl create -f chainer_mnist_deployment.json
!kubectl rollout status deploy/chainer-mnist-deployment-chainer-mnist-predictor-76478b2
!seldon-core-api-tester contract.json `minikube ip` `kubectl get svc ambassador -o jsonpath='{.spec.ports[0].nodePort}'` \
seldon-deployment-example --namespace default -p
!minikube delete
```
| github_jupyter |
```
# coding=utf-8
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from keras.utils import np_utils
from keras.models import Sequential,load_model,save_model
from keras.layers import Dense, Dropout, Activation,LeakyReLU
from keras.optimizers import SGD, Adam
from keras.callbacks import EarlyStopping,ModelCheckpoint
from keras import backend as K
from sklearn import preprocessing
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import roc_auc_score,accuracy_score
from scipy import sparse
import gc
from time import strftime, localtime
# Print the current time
def printTime():
print(strftime("%Y-%m-%d %H:%M:%S", localtime()))
return
printTime()
csr_trainData0 = sparse.load_npz(r'../trainTestData/trainData13100.npz')
csr_trainData0.shape
csr_trainData1 = sparse.load_npz(r'../trainTestData/trainData15112.npz')
csr_trainData1.shape
csr_trainData = sparse.hstack((csr_trainData0,csr_trainData1),format='csr')
del csr_trainData0,csr_trainData1
gc.collect()
age_train = pd.read_csv(r'../data/age_train.csv',header=None)
label = age_train[1].values
print(label.shape)
import time
seed = 7
np.random.seed(seed)
kfold = StratifiedKFold(n_splits=10, shuffle=True, random_state=seed)
model_filePath = r'../model/model28212_NN_'
currK = 0
val_index_list, score = [], []
val_probability = np.zeros((2010000,7))
printTime()
for train_index, val_index in kfold.split(csr_trainData,label):
K.clear_session()
trainData, trainLabel, valData, valLabel = csr_trainData[train_index,:], label[train_index], csr_trainData[val_index,:] , label[val_index]
trainLabel,valLabel = np_utils.to_categorical(trainLabel,num_classes=7),np_utils.to_categorical(valLabel,num_classes=7)
print('----------------------------------------------------------------------------------------------------------------------------------')
print(currK,'split Done!\n')
    # Fully connected model
model = Sequential()
model.add(Dense(4000, activation='tanh', input_shape=(csr_trainData.shape[1],)))
model.add(Dense(2000, activation='relu'))
model.add(Dense(1000, activation='sigmoid'))
model.add(Dense(7, activation='softmax'))
    # Use cross-entropy as the loss function
adam = Adam(lr=0.0003)
model.compile(loss='categorical_crossentropy',
optimizer = adam,
metrics=['accuracy'])
    # Model training
batch_size = 1024
epochs = 100
early_stopping = EarlyStopping(monitor='val_loss', patience=5, verbose=2)
bestModel = ModelCheckpoint(model_filePath + str(currK) + r'.h5', monitor='val_loss', verbose=0, save_best_only=True, save_weights_only=False, mode='auto', period=1)
hist = model.fit(trainData, trainLabel,
batch_size=batch_size,
epochs=epochs,
verbose=1,
shuffle=True,
validation_data=(valData,valLabel),
callbacks=[early_stopping,bestModel],
)
print('\n',currK,'train Done!')
printTime()
K.clear_session()
model = load_model(model_filePath + str(currK) + r'.h5')
probability = model.predict(valData,batch_size=1024)
val_probability[val_index,:] = probability
score.append(np.max(hist.history['val_acc']))
y_label = label[val_index]
val_label = np.argmax(probability,axis=1)
print(currK,'val_acc:',accuracy_score(val_label,y_label),'\n\n')
currK += 1
K.clear_session()
del trainData, valData, trainLabel,valLabel,model
print('----------------------------------------------------------------------------------------------------------------------------------')
print('mean val_acc:', np.mean(score))
printTime()
accuracy_score(np.argmax(val_probability,axis=1) ,label)
del csr_trainData
import gc
gc.collect()
```
# Validation set
```
val_probability = pd.DataFrame(val_probability)
print(val_probability.shape)
print(val_probability.head())
val_probability.drop(labels=[0],axis=1,inplace=True)
val_probability.to_csv(r'../processed/val_probability_28212.csv',header=None,index=False)
```
# Test set
```
import os
model_file = r'../model/model28212_NN_'
csr_testData0 = sparse.load_npz(r'../trainTestData/trainData13100.npz')
csr_testData0.shape
csr_testData1 = sparse.load_npz(r'../trainTestData/trainData15112.npz')
csr_testData1.shape
csr_testData = sparse.hstack((csr_testData0, csr_testData1),format='csr')
del csr_testData0,csr_testData1
gc.collect()
age_test = pd.read_csv(r'../data/age_test.csv',header=None,usecols=[0])
printTime()
proflag = True
model_Num = 0
for i in list(range(10)):
model = load_model(model_file + str(i) + '.h5')
if proflag==True:
probability = model.predict(csr_testData,batch_size=1024,verbose=1)
proflag = False
else:
probability += model.predict(csr_testData,batch_size=1024,verbose=1)
model_Num += 1
print(model_Num)
K.clear_session()
del model
printTime()
model_Num
probability /= model_Num
age = np.argmax(probability,axis=1)
age_test = pd.read_csv(r'../data/age_test.csv',header=None,usecols=[0])
age_test = age_test.values
type(age_test)
print(probability.shape)
pro = np.column_stack((age_test,probability))
pro = pd.DataFrame(pro)
pro.drop(labels=[0,1],axis=1,inplace=True)
print(pro.shape)
pro.to_csv(r'../processed/test_probability_28212.csv',index=False,header=False)
```
| github_jupyter |
```
import torch
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans
from statsmodels.discrete.discrete_model import Probit
import patsy
import matplotlib.pylab as plt
import tqdm
import itertools
ax = np.newaxis
```
Make sure you have installed the pygrpfe package. You can simply call `pip install pygrpfe` in the terminal or call the magic command `!pip install pygrpfe` from within the notebook. If you are using the binder link, then `pygrpfe` is already installed. You can import the package directly.
```
import pygrpfe as gfe
```
# A simple model of wage and participation
\begin{align*}
Y^*_{it} & = \alpha_i + \epsilon_{it} \\
D_{it} &= 1\big[ u(\alpha_i) \geq c(D_{it-1}) + V_{it} \big] \\
Y_{it} &= D_{it} Y^*_{it} \\
\end{align*}
where we use
$$u(\alpha) = \frac{e^{(1-\gamma) \alpha } -1}{1-\gamma}$$
and use as initial conditions $D_{i1} = 1\big[ u(\alpha_i) \geq c(1) + V_{i1} \big]$.
```
def dgp_simulate(ni,nt,gamma=2.0,eps_sd=1.0):
""" simulates according to the model """
alpha = np.random.normal(size=(ni))
eps = np.random.normal(size=(ni,nt))
v = np.random.normal(size=(ni,nt))
# non-censored outcome
W = alpha[:,ax] + eps*eps_sd
# utility
U = (np.exp( alpha * (1-gamma)) - 1)/(1-gamma)
U = U - U.mean()
# costs
C1 = -1; C0=0;
# binary decision
Y = np.ones((ni,nt))
Y[:,0] = U.squeeze() > C1 + v[:,0]
for t in range(1,nt):
Y[:,t] = U > C1*Y[:,t-1] + C0*(1-Y[:,t-1]) + v[:,t]
W = W * Y
return(W,Y)
```
# Estimating the model
We show the steps to estimating the model. Later on, we will run a Monte-Carlo Simulation.
We simulate from the DGP we have defined.
```
ni = 1000
nt = 50
Y,D = dgp_simulate(ni,nt,2.0)
```
## Step 1: grouping observations
We group individuals based on their outcomes. We consider as moments the average value of $Y$ and the average value of $D$. We give our gfe function the $t$-specific values so that it can compute the within-individual variation. This is a measure used to pick the number of groups.
The `group` function chooses the number of groups based on the rule described in the paper.
```
# we create the moments
# this has dimension ni x nt x nm
M_itm = np.stack([Y,D],axis=2)
# we use our sugar function to get the groups
G_i,_ = gfe.group(M_itm)
print("Number of groups = {:d}".format(G_i.max()))
```
We can plot the grouping:
```
dd = pd.DataFrame({'Y':Y.mean(1),'G':G_i,'D':D.mean(1)})
plt.scatter(dd.Y,dd.D,c=dd.G*1.0)
plt.show()
```
## Step 2: Estimate the likelihood model with group specific parameters
In the model we proposed, this second step is a probit. We can then directly use the Python probit routine (from statsmodels) with group dummies.
```
ni,nt = D.shape
# next we minimize using groups as FE
dd = pd.DataFrame({
'd': D[:,range(1,nt)].flatten(),
'dl':D[:,range(nt-1)].flatten(),
'gi':np.broadcast_to(G_i[:,ax], (ni,nt-1)).flatten()})
yv,Xv = patsy.dmatrices("d ~ 0 + dl + C(gi)", dd, return_type='matrix')
mod = Probit(dd['d'], Xv)
res = mod.fit(maxiter=2000,method='bfgs')
print("Estimated cost parameters = {:.3f}".format(res.params[-1]))
```
## Step 2 (alternative implementation): Pytorch and auto-diff
We next write down a likelihood that we want to optimize. Instead of using the Python routine for the Probit, we make use of automatic differentiation from PyTorch. This makes it easy to modify the estimating model to accommodate less standard likelihoods!
We create a class which initializes the parameters in the `__init__` method and computes the loss in the `loss` method. We will see later how we can use this to define a fixed effect estimator.
```
class GrpProbit:
# initialize parameters and data
def __init__(self,D,G_i):
# define parameters and tell PyTorch to keep track of gradients
self.alpha = torch.tensor( np.ones(G_i.max()+1), requires_grad=True)
self.cost = torch.tensor( np.random.normal(1), requires_grad=True)
self.params = [self.alpha,self.cost]
# predefine some components
ni,nt = D.shape
self.ni = ni
self.G_i = G_i
self.Dlag = torch.tensor(D[:,range(0,nt-1)])
self.Dout = torch.tensor(D[:,range(1,nt)])
self.N = torch.distributions.normal.Normal(0,1)
# define our loss function
def loss(self):
Id = self.alpha[self.G_i].reshape(self.ni,1) + self.cost * self.Dlag
lik_it = self.Dout * torch.log( torch.clamp( self.N.cdf( Id ), min=1e-7)) + \
(1-self.Dout)*torch.log( torch.clamp( self.N.cdf( -Id ), min=1e-7) )
return(- lik_it.mean())
# initialize the model with groups and estimate it
model = GrpProbit(D,G_i)
gfe.train(model)
print("Estimated cost parameters = {:.3f}".format(model.params[1]))
```
## Use PyTorch to estimate Fixed Effect version
Since PyTorch makes use of efficient automatic differentiation, we can use it with many variables. This allows us to give each individual their own group, effectively estimating a fixed-effect model.
```
model_fe = GrpProbit(D,np.arange(ni))
gfe.train(model_fe)
print("Estimated cost parameters FE = {:.3f}".format(model_fe.params[1]))
```
# Monte-Carlo
We finish with running a short Monte-Carlo exercise.
```
all = []
import itertools
ll = list(itertools.product(range(50), [10,20,30,40]))
for r, nt in tqdm.tqdm(ll):
ni = 1000
gamma =2.0
Y,D = dgp_simulate(ni,nt,gamma)
M_itm = np.stack([Y,D],axis=2)
    G_i,_ = gfe.group(M_itm,scale=True)
model_fe = GrpProbit(D,np.arange(ni))
gfe.train(model_fe)
model_gfe = GrpProbit(D,G_i)
gfe.train(model_gfe)
all.append({
'c_fe' : model_fe.params[1].item(),
'c_gfe': model_gfe.params[1].item(),
'ni':ni,
'nt':nt,
'gamma':gamma,
'ng':G_i.max()+1})
df = pd.DataFrame(all)
df2 = df.groupby(['ni','nt','gamma']).mean().reset_index()
plt.plot(df2['nt'],df2['c_gfe'],label="gfe",color="orange")
plt.plot(df2['nt'],df2['c_fe'],label="fe",color="red")
plt.axhline(1.0,label="true",color="black",linestyle=":")
plt.xlabel("T")
plt.legend()
plt.show()
df.groupby(['ni','nt','gamma']).mean()
```
| github_jupyter |
# GDP and life expectancy
Richer countries can afford to invest more on healthcare, on work and road safety, and other measures that reduce mortality. On the other hand, richer countries may have less healthy lifestyles. Is there any relation between the wealth of a country and the life expectancy of its inhabitants?
The following analysis checks whether there is any correlation between the total gross domestic product (GDP) of a country in 2013 and the life expectancy of people born in that country in 2013.
## Getting the data
Two datasets of the World Bank are considered. One dataset, available at http://data.worldbank.org/indicator/NY.GDP.MKTP.CD, lists the GDP of the world's countries in current US dollars, for various years. The use of a common currency allows us to compare GDP values across countries. The other dataset, available at http://data.worldbank.org/indicator/SP.DYN.LE00.IN, lists the life expectancy of the world's countries. The datasets were downloaded as CSV files in March 2016.
```
import warnings
warnings.simplefilter('ignore', FutureWarning)
import pandas as pd
YEAR = 2018
GDP_INDICATOR = 'NY.GDP.MKTP.CD'
gdpReset = pd.read_csv('WB 2018 GDP.csv')
LIFE_INDICATOR = 'SP.DYN.LE00.IN_'
lifeReset = pd.read_csv('WB 2018 LE.csv')
lifeReset.head()
```
## Cleaning the data
Inspecting the data with `head()` and `tail()` shows that:
1. the first 34 rows are aggregated data, for the Arab World, the Caribbean small states, and other country groups used by the World Bank;
2. GDP and life expectancy values are missing for some countries.
The data is therefore cleaned by:
1. removing the first 34 rows;
2. removing rows with unavailable values.
```
gdpCountries = gdpReset.dropna()
lifeCountries = lifeReset.dropna()
```
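If the raw World Bank downloads still contain the 34 aggregate rows mentioned above, they could be sliced off before dropping missing values. The CSV files used here may already have been pre-filtered, so the following is only a sketch under that assumption (the variable names are illustrative):
```
# Sketch: drop the leading aggregate rows (assumed to occupy the first 34 rows of the raw download),
# then drop countries with unavailable values.
gdpCountriesOnly = gdpReset[34:].dropna()
lifeCountriesOnly = lifeReset[34:].dropna()
```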
## Transforming the data
The World Bank reports GDP in US dollars and cents. To make the data easier to read, the GDP is converted to millions of British pounds (the author's local currency) with the following auxiliary functions, using the average 2013 dollar-to-pound conversion rate provided by <http://www.ukforex.co.uk/forex-tools/historical-rate-tools/yearly-average-rates>.
```
def roundToMillions (value):
return round(value / 1000000)
def usdToGBP (usd):
return usd / 1.334801
GDP = 'GDP (£m)'
gdpCountries[GDP] = gdpCountries[GDP_INDICATOR].apply(usdToGBP).apply(roundToMillions)
gdpCountries.head()
COUNTRY = 'Country Name'
headings = [COUNTRY, GDP]
gdpClean = gdpCountries[headings]
gdpClean.head()
LIFE = 'Life expectancy (years)'
lifeCountries[LIFE] = lifeCountries[LIFE_INDICATOR].apply(round)
headings = [COUNTRY, LIFE]
lifeClean = lifeCountries[headings]
lifeClean.head()
gdpVsLife = pd.merge(gdpClean, lifeClean, on=COUNTRY, how='inner')
gdpVsLife.head()
```
## Calculating the correlation
To measure if the life expectancy and the GDP grow together, the Spearman rank correlation coefficient is used. It is a number from -1 (perfect inverse rank correlation: if one indicator increases, the other decreases) to 1 (perfect direct rank correlation: if one indicator increases, so does the other), with 0 meaning there is no rank correlation. A perfect correlation doesn't imply any cause-effect relation between the two indicators. A p-value below 0.05 means the correlation is statistically significant.
```
from scipy.stats import spearmanr
gdpColumn = gdpVsLife[GDP]
lifeColumn = gdpVsLife[LIFE]
(correlation, pValue) = spearmanr(gdpColumn, lifeColumn)
print('The correlation is', correlation)
if pValue < 0.05:
print('It is statistically significant.')
else:
print('It is not statistically significant.')
```
The value shows a direct correlation, i.e. richer countries tend to have longer life expectancy.
## Showing the data
Measures of correlation can be misleading, so it is best to see the overall picture with a scatterplot. The GDP axis uses a logarithmic scale to better display the vast range of GDP values, from a few million to several billion (million of million) pounds.
```
%matplotlib inline
gdpVsLife.plot(x=GDP, y=LIFE, kind='scatter', grid=True, logx=True, figsize=(10, 4))
```
The plot shows there is no clear correlation: there are rich countries with low life expectancy, poor countries with high expectancy, and countries with around 10 thousand (10^4) million pounds GDP have almost the full range of values, from below 50 to over 80 years. Towards the lower and higher end of GDP, the variation diminishes. Above 40 thousand million pounds of GDP (3rd tick mark to the right of 10^4), most countries have an expectancy of 70 years or more, whilst below that threshold most countries' life expectancy is below 70 years.
Comparing the 10 poorest countries and the 10 countries with the lowest life expectancy shows that total GDP is a rather crude measure. The population size should be taken into account for a more precise definition of what 'poor' and 'rich' means; a sketch of that adjustment follows the tables below. Furthermore, looking at the countries below, droughts and internal conflicts may also play a role in life expectancy.
```
# the 10 countries with lowest GDP
gdpVsLife.sort_values(GDP).head(10)
# the 10 countries with lowest life expectancy
gdpVsLife.sort_values(LIFE).head(10)
```
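As noted above, a per-capita measure would be a fairer basis for 'poor' and 'rich'. Neither dataset used here includes population, so the following is a purely hypothetical sketch of the adjustment; the figures in the example call are generic, not real country data:
```
# Hypothetical sketch: convert total GDP (in £ millions) to GDP per head,
# assuming a population figure were available for each country.
def gdp_per_head(gdp_millions, population):
    return gdp_millions * 1000000 / population

# e.g. a country with £2,000,000m GDP and 66 million inhabitants
print(round(gdp_per_head(2000000, 66000000)))  # ~30303 pounds per person
```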
## Conclusions
To sum up, there is no strong correlation between a country's wealth and the life expectancy of its inhabitants: there is often a wide variation of life expectancy for countries with similar GDP, countries with the lowest life expectancy are not the poorest countries, and countries with the highest expectancy are not the richest countries. Nevertheless there is some relationship, because the vast majority of countries with a life expectancy below 70 years is on the left half of the scatterplot.
| github_jupyter |
# Immune disease associations of Neanderthal-introgressed SNPs
This code investigates if Neanderthal-introgressed SNPs (present in Chen introgressed sequences) have been associated with any immune-related diseases, including infectious diseases, allergic diseases, autoimmune diseases and autoinflammatory diseases, using data from the NHGRI-EBI GWAS Catalog.
Neanderthal-introgressed SNPs from:
1. Dannemann M, Prufer K & Kelso J. Functional implications of Neandertal introgression in modern humans. *Genome Biol* 2017 **18**:61.
2. Simonti CN *et al.* The phenotypic legacy of admixture between modern humans and Neandertals. *Science* 2016 **351**:737-41.
Neanderthal-introgressed sequences by Chen *et al.* from:
* Chen L *et al.* Identifying and interpreting apparent Neanderthal ancestry in African individuals. *Cell* 2020 **180**:677-687.
GWAS summary statistics from:
* [GWAS Catalog](https://www.ebi.ac.uk/gwas/docs/file-downloads)
```
# Import modules
import pandas as pd
```
## Get Neanderthal SNPs present in GWAS Catalog
```
# Load Chen Neanderthal-introgressed SNPs
chen = pd.read_excel('../chen/Additional File 1.xlsx', 'Sheet1', usecols=['Chromosome', 'Position', 'Source', 'ID', 'Chen'])
neanderthal = chen.loc[chen.Chen == 'Yes'].copy()
neanderthal.drop('Chen', axis=1)
# Load GWAS catalog
catalog = pd.read_csv('GWAS_Catalog.tsv', sep="\t", header=0,
usecols=['DISEASE/TRAIT', 'CHR_ID', 'CHR_POS', 'REPORTED GENE(S)', 'MAPPED_GENE',
'STRONGEST SNP-RISK ALLELE', 'SNPS', 'RISK ALLELE FREQUENCY', 'P-VALUE', 'OR or BETA',
'95% CI (TEXT)', 'MAPPED_TRAIT', 'STUDY ACCESSION'], low_memory=False)
catalog = catalog.loc[catalog.CHR_ID != 'X'].copy()
catalog = catalog.loc[catalog.CHR_ID != 'Y'].copy()
catalog.rename(columns={'CHR_ID': 'Chromosome', 'CHR_POS': 'Position', 'SNPS': 'ID'}, inplace=True)
# Neanderthal SNPs present in GWAS catalog
nean_catalog = neanderthal.merge(catalog.drop(columns=['Chromosome', 'Position']), how='inner', on='ID')
nean_catalog
```
## Immune-related diseases associated with Neanderthal SNPs
### Infections
```
nean_catalog.loc[nean_catalog['DISEASE/TRAIT'].str.contains('influenza')]
nean_catalog.loc[nean_catalog['DISEASE/TRAIT'].str.contains('wart')]
nean_catalog.loc[nean_catalog['DISEASE/TRAIT'].str.contains('HIV')]
nean_catalog.loc[nean_catalog['DISEASE/TRAIT'].str.contains('Malaria')]
```
### Allergic diseases
```
nean_catalog.loc[nean_catalog['MAPPED_TRAIT'].str.contains('allerg')]
nean_catalog.loc[nean_catalog['MAPPED_TRAIT'].str.contains('asthma')]
nean_catalog.loc[nean_catalog['MAPPED_TRAIT'].str.contains('Eczema')]
```
### Autoimmune/autoinflammatory diseases
```
nean_catalog.loc[nean_catalog['MAPPED_TRAIT'].str.contains('lupus')]
nean_catalog.loc[nean_catalog['MAPPED_TRAIT'].str.contains('rheumatoid')]
nean_catalog.loc[nean_catalog['MAPPED_TRAIT'].str.contains('scleroderma')]
nean_catalog.loc[nean_catalog['MAPPED_TRAIT'].str.contains('Sjogren')]
nean_catalog.loc[nean_catalog['MAPPED_TRAIT'].str.contains('Grave')]
nean_catalog.loc[nean_catalog['MAPPED_TRAIT'].str.contains('glomerulonephritis')]
nean_catalog.loc[nean_catalog['MAPPED_TRAIT'].str.contains('colitis')]
nean_catalog.loc[nean_catalog['MAPPED_TRAIT'].str.contains('Crohn')]
nean_catalog.loc[nean_catalog['MAPPED_TRAIT'].str.contains('bowel')]
nean_catalog.loc[nean_catalog['MAPPED_TRAIT'].str.contains('psoriasis')]
nean_catalog.loc[nean_catalog['MAPPED_TRAIT'].str.contains('celiac')]
nean_catalog.loc[nean_catalog['MAPPED_TRAIT'].str.contains('multiple sclerosis')]
```
## Do immune disease-associated Neanderthal SNPs show eQTL?
```
# Load eQTL data
fairfax_ori = pd.read_csv("../fairfax/tab2_a_cis_eSNPs.txt", sep="\t", usecols=["SNP", "Gene", "Min.dataset", "LPS2.FDR", "LPS24.FDR", "IFN.FDR", "Naive.FDR"])
fairfax_re = pd.read_csv('overlap_filtered_fairfax.csv', usecols=['rsid', 'pvalue', 'gene_id', 'Condition', 'beta'])
fairfax_re.sort_values('pvalue', inplace=True)
fairfax_re.drop_duplicates(subset=['rsid', 'gene_id', 'Condition'], keep='first', inplace=True)
nedelec_re = pd.read_csv('overlap_filtered_nedelec.csv', usecols=['rsid', 'pvalue', 'gene_id', 'Condition', 'beta'])
nedelec_re.sort_values('pvalue', inplace=True)
nedelec_re.drop_duplicates(subset=['rsid', 'gene_id', 'Condition'], keep='first', inplace=True)
quach = pd.read_csv('overlap_filtered_quach.csv', usecols=['rsid', 'pvalue', 'gene_id', 'Condition', 'beta'])
quach.sort_values('pvalue', inplace=True)
quach.drop_duplicates(subset=['rsid', 'gene_id', 'Condition'], keep='first', inplace=True)
alasoo = pd.read_csv('overlap_filtered_alasoo.csv', usecols=['rsid', 'pvalue', 'gene_id', 'Condition', 'beta'])
alasoo.sort_values('pvalue', inplace=True)
alasoo.drop_duplicates(subset=['rsid', 'gene_id', 'Condition'], keep='first', inplace=True)
# Selected Neanderthal SNPs with immune disease associations
gwas = open('overlapped_SNPs.txt', 'r').read().splitlines()
gwas
# Overlap with original Fairfax eQTLs
ls = set(list(fairfax_ori.SNP)).intersection(gwas)
fairfax_ori.loc[fairfax_ori.SNP.isin(ls)]
# Overlap with recomputed Fairfax eQTLs
ls = set(list(fairfax_re.rsid)).intersection(gwas)
fairfax_re.loc[fairfax_re.rsid.isin(ls)]
# Overlap with recomputed Nedelec eQTLs
ls = set(list(nedelec_re.rsid)).intersection(gwas)
nedelec_re.loc[nedelec_re.rsid.isin(ls)]
# Overlap with recomputed Quach eQTLs
ls = set(list(quach.rsid)).intersection(gwas)
quach.loc[quach.rsid.isin(ls)]
# Overlap with recomputed Alasoo eQTLs
ls = set(list(alasoo.rsid)).intersection(gwas)
alasoo.loc[alasoo.rsid.isin(ls)]
```
| github_jupyter |
# American Gut Project example
This notebook was created from a question we recieved from a user of MGnify.
The question was:
```
I am attempting to retrieve some of the MGnify results from samples that are part of the American Gut Project based on sample location.
However latitude and longitude do not appear to be searchable fields.
Is it possible to query these fields myself or to work with someone to retrieve a list of samples from a specific geographic range? I am interested in samples from people in Hawaii, so 20.5 - 20.7 and -154.0 - -161.2.
```
Let's decompose the question:
- project "American Gut Project"
- Metadata filtration using the geographic location of a sample.
- Get samples for Hawaii: 20.5 - 20.7 ; -154.0 - -161.2
Each sample in MGnify is obtained from [ENA](https://www.ebi.ac.uk/ena).
## Get samples
The first step is to obtain the samples using [ENA advanced search API](https://www.ebi.ac.uk/ena/browser/advanced-search).
```
from pandas import DataFrame
import requests
base_url = 'https://www.ebi.ac.uk/ena/portal/api/search'
# parameters
params = {
'result': 'sample',
'query': ' AND '.join([
'geo_box1(16.9175,-158.4687,21.6593,-152.7969)',
'description="*American Gut Project*"'
]),
'fields': ','.join(['secondary_sample_accession', 'lat', 'lon']),
'format': 'json',
}
response = requests.post(base_url, data=params)
agp_samples = response.json()
df = DataFrame(columns=('secondary_sample_accession', 'lat', 'lon'))
df.index.name = 'accession'
for s in agp_samples:
df.loc[s.get('accession')] = [
s.get('secondary_sample_accession'),
s.get('lat'),
s.get('lon')
]
df
```
Now we can use EMG API to get the information.
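The script below expects a tab-separated file as its first argument, with the secondary sample accession in the second column. A small bridging step (the file name is just an assumption for this example) writes the DataFrame from the previous cell in that shape:
```
# Write the samples table to a TSV so it can be passed to the script below.
# The index (accession) becomes the first column, so secondary_sample_accession is the second.
df.to_csv("agp_hawaii_samples.tsv", sep="\t")
```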
```
#!/usr/bin/env python
import requests
import sys
def get_links(data):
return data["links"]["related"]
if __name__ == "__main__":
samples_url = "https://www.ebi.ac.uk/metagenomics/api/v1/samples/"
tsv = sys.argv[1] if len(sys.argv) == 2 else None
if not tsv:
print("The first arg is the tsv file")
exit(1)
tsv_fh = open(tsv, "r")
# header
next(tsv_fh)
for record in tsv_fh:
# get the runs first
# mgnify references the secondary accession
_, sec_acc, *_ = record.split("\t")
samples_res = requests.get(samples_url + sec_acc)
if samples_res.status_code == 404:
print(sec_acc + " not found in MGnify")
continue
# then the analysis for that run
runs_url = get_links(samples_res.json()["data"]["relationships"]["runs"])
if not runs_url:
print("No runs for sample " + sec_acc)
continue
print("Getting the runs: " + runs_url)
run_res = requests.get(runs_url)
if run_res.status_code != 200:
            print(runs_url + " failed", file=sys.stderr)
continue
# iterate over the sample runs
run_data = run_res.json()
# this script doesn't consider pagination, it's just an example
# there could be more that one page of runs
# use links -> next to get the next page
for run in run_data["data"]:
analyses_url = get_links(run["relationships"]["analyses"])
if not analyses_url:
print("No analyses for run " + run)
continue
analyses_res = requests.get(analyses_url)
if analyses_res.status_code != 200:
print(analyses_url + " failed", file=sys.stderr)
continue
# dump
print("Raw analyses data")
print(analyses_res.json())
print("=" * 30)
tsv_fh.close()
```
| github_jupyter |
# Employee Attrition Prediction
There is a class of problems that predict that some event happens after N years. Examples are employee attrition, hard drive failure, life expectancy, etc.
Usually these kinds of problems are considered simple, and the models built for them have various degrees of performance. Usually the task is treated as a classification problem, predicting whether the event happens after exactly N years. The problem with this approach is that people care not so much about the likelihood that the event happens exactly after N years, but about the probability that the event happens today. While you can infer this using Bayes' theorem, doing it at prediction time will not give you good accuracy because the Bayesian inference will be based on a single piece of data. It is better to do this kind of inference at training time, and learn the probability rather than the likelihood function.
Thus, the problem is learning the conditional probability that the person quits, given that they have not quit yet, and it is similar to the hazard function in a survival analysis problem.
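As a toy illustration of this framing (not part of the original analysis): if the yearly hazard $p$ were constant, the probability of making it through the first $t$ years would be $(1-p)^t$, and the probability of surviving those years and then quitting in year $t$ would be $(1-p)^t\,p$, matching the likelihood written out further down. A minimal sketch with assumed numbers:
```
# Toy illustration with an assumed constant yearly hazard p (not part of the original analysis).
def survival_prob(p, t):
    # probability of staying through years 0 .. t-1
    return (1 - p) ** t

def quit_in_year(p, t):
    # probability of staying through years 0 .. t-1 and then quitting in year t
    return (1 - p) ** t * p

print(survival_prob(0.01, 10))  # ~0.904
print(quit_in_year(0.01, 10))   # ~0.009
```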
```
#Import
import numpy as np
import pandas as pd
import numpy.random
import matplotlib.pyplot as plt
import seaborn as sns
import tensorflow as tf
from sklearn.preprocessing import MinMaxScaler
import math
%matplotlib inline
numpy.random.seed(1239)
# Read the data
# Source: https://www.ibm.com/communities/analytics/watson-analytics-blog/hr-employee-attrition/
raw_data = pd.read_csv('data/WA_Fn-UseC_-HR-Employee-Attrition.csv')
#Check if any is nan. If no nans, we don't need to worry about dealing with them
raw_data.isna().sum().sum()
def prepare_data(raw_data):
'''
Prepare the data
1. Set EmployeeNumber as the index
2. Drop redundant columns
3. Reorder columns to make YearsAtCompany first
4. Change OverTime to the boolean type
5. Do 1-hot encoding
'''
labels = raw_data.Attrition == 'Yes'
employee_data = raw_data.set_index('EmployeeNumber').drop(columns=['Attrition', 'EmployeeCount', 'Over18'])
employee_data.loc[:, 'OverTime'] = (employee_data.OverTime == 'Yes').astype('float')
employee_data = pd.get_dummies(employee_data)
employee_data = pd.concat([employee_data.YearsAtCompany, employee_data.drop(columns='YearsAtCompany')], axis=1)
return employee_data, labels
#Split to features and labels
employee_data, labels = prepare_data(raw_data)
```
First we will work on a synthetic set of labels; for this reason we will not split the dataset into train/test yet.
```
#Scale every column to the range [0, max_year] so the other features end up on a scale
#similar to the first column (YearsAtCompany), which is left essentially unchanged
max_year = employee_data.YearsAtCompany.max()
scaler = MinMaxScaler(feature_range=(0, max_year))
scaled_data = pd.DataFrame(scaler.fit_transform(employee_data.values.astype('float')),
columns=employee_data.columns,
index=employee_data.index)
```
Based on the chart it seems like a realistic data set.
Now we need to construct our loss function. It will have an additional parameter: the number of years.
We define the probability $p(x, t)$ that the person quits at time $t$, given that they have not quit before, where $t$ is the number of years and $x$ is the remaining features. Then the likelihood that the person has quit after the year $t$ is
$$P(x,t) = (\prod_{l=0}^{t-1} (1-p(x,l))) p(x,t) $$ whereas the likelihood that the person will remain after the year $t$ is
$$P(x,t) = \prod_{l=0}^{t} (1-p(x,l)) $$
Strictly speaking x is also dependent on t, but we don't have the historical data for this, so we assume that x is independent of t.
Using the principle of maximum likelihood, we derive the loss function by taking the negative log of the likelihood function:
$$\mathscr{L}(y,p) = -\sum_{l=0}^{t-1} \log(1-p(x,l)) - y \log{p} - (1-y) \log(1-p) $$
where $y$ is an indicator of whether the person quit after working exactly $t$ years or not.
Notice that the last two terms are the cross-entropy loss function, and the first term is a historical term.
We will use a modified Cox hazard function mechanism and model the conditional probability $p(x,l)$ as a sigmoid function (for simplicity we include the bias, as well as the weight for the $t$ parameter, in the list of weights): $$p=\frac{1}{1 + e^{-\bf{w}\bf{x}}}$$
To create a synthetic set we assume that p does not depend on anything. Then the maximum likelihood gives us this simple formula: $$Pos=M p \bar{t}$$
Here Pos is the number of positive examples (people who quit), M is the total number of examples, and $\bar{t}$ is the mean time (number of years).
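To make the loss $\mathscr{L}$ above concrete, here is a minimal NumPy sketch (with made-up hazard values, separate from the TensorFlow implementation further down) that evaluates it for a single example:
```
# Sketch: evaluate the loss above for one person, given assumed hazards for each year.
import numpy as np

def single_example_loss(p_history, p_t, y):
    # p_history: hazards p(x,l) for l = 0 .. t-1; p_t: hazard in year t; y: 1 if the person quit in year t
    historical = -np.sum(np.log(1 - np.asarray(p_history)))
    cross_entropy = -(y * np.log(p_t) + (1 - y) * np.log(1 - p_t))
    return historical + cross_entropy

print(single_example_loss([0.01, 0.02, 0.015], p_t=0.02, y=1))
```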
```
#pick a p
p = 0.01
#Get the maximum years. We need it to make sure that the product of p and YearsAtCompany never exceeds 1.
#In reality that is not a problem, but we will use it to correctly create synthetic labels
scaled_data.YearsAtCompany.max()
#Create the synthetic labels.
synthetic_labels = numpy.random.rand(employee_data.shape[0]) < p * employee_data.YearsAtCompany
#Plot the data with the synthetic labels
sns.swarmplot(y='years', x='quit', data=pd.DataFrame({"quit":synthetic_labels, 'years':employee_data.YearsAtCompany}));
#We expect the probability estimated from the synthesized data to be close to p
synthetic_labels.sum()/len(synthetic_labels)/employee_data.YearsAtCompany.mean()
```
Indeed pretty close to the value of p we set beforehand
## Logistic Regression with the synthetic labels
In this version of the POC we will use TensorFlow
We need to add ones to the dataframe.
But since we scaled everything to be between `0` and `40`, the convergence will be faster if we add `40.0` instead of `1`
```
#Add a column of ones to the employee data
#But to make convergence faster, we add 40.0 instead of 1 (matching the scale of the other columns)
scaled_data['Ones'] = 40.0
scaled_data
def reset_graph(seed=1239):
tf.reset_default_graph()
tf.set_random_seed(seed)
np.random.seed(seed)
def create_year_column(X, w, year):
year_term = tf.reshape(X[:,0]-year, (-1,1)) * w[0]
year_column = tf.reshape(X @ w - year_term,(-1,))
return year_column * tf.cast(tf.greater(X[:,0],year), dtype=tf.float32)
def logit(X, w):
'''
IMPORTANT: This assumes that the weight for the temporal variable is w[0]
TODO: Remove this assumption and allow to specify the index of the temporal variable
'''
max_year_tf = tf.reduce_max(X[:,0])
tensors = tf.map_fn(lambda year: create_year_column(X, w, year), tf.range(max_year_tf))
return tf.transpose(tensors)
logit_result = logit(X,weights)
init = tf.global_variables_initializer()
with tf.Session() as sess:
sess.run(init)
result = logit_result.eval()
result[1]
def get_loss(X, y, w):
'''
The loss function
'''
#The first term
logit_ = logit(X, w)
temp_tensor = tf.sigmoid(logit_) * tf.cast(tf.greater(logit_, 0), tf.float32)
sum_loss = tf.reduce_sum(tf.log(1-temp_tensor),1)
sum_loss = tf.reshape(sum_loss, (-1,1))
logistic_prob = tf.sigmoid(X @ w)
return -sum_loss - y * tf.log(logistic_prob) - (1-y) * tf.log(1-logistic_prob)
loss_result = get_loss(X, y, weights/100)
init = tf.global_variables_initializer()
with tf.Session() as sess:
sess.run(init)
result = loss_result.eval()
result
reset_graph()
learning_rate = 0.0005
l2 = 2.0
X = tf.constant(scaled_data.values, dtype=tf.float32, name="X")
y = tf.constant(synthetic_labels.values.reshape(-1, 1), dtype=tf.float32, name="y")
weights = tf.Variable(tf.random_uniform([scaled_data.values.shape[1], 1], -0.01, 0.01, seed=1239), name="weights")
loss = get_loss(X, y, weights)
l2_regularizer = tf.nn.l2_loss(weights) - 0.5 * weights[-1] ** 2
cost = tf.reduce_mean(loss) + l2 * l2_regularizer
optimizer = tf.train.GradientDescentOptimizer(learning_rate)
training_op = optimizer.minimize(cost)
init = tf.global_variables_initializer()
n_epochs = 20000
with tf.Session() as sess:
sess.run(init)
for epoch in range(n_epochs):
if epoch % 1000 == 0:
print("Epoch", epoch, "Cost =", cost.eval())
print(f'w: {weights[-1].eval()}')
sess.run(training_op)
best_theta = weights.eval()
```
The cost will never go down to zero, because of the additional term in the loss function.
```
#We will print the learned weights.
learned_weights = [(column_name,float(best_theta[column_num])) \
for column_num, column_name in enumerate(scaled_data.columns)]
#We print the weights sorted by the absolute value of the value
sorted(learned_weights, key=lambda x: abs(x[1]), reverse=True)
```
To compare with the earlier result we need to multiply the last weight by 40
```
print(f'The predicted probability is: {float(1/(1+np.exp(-best_theta[-1]*40)))}')
```
This is indeed very close to the value `0.01` we used to create the synthetic dataset.
| github_jupyter |
```
# Configuration --- Change to your setup and preferences!
CAFFE_ROOT = "~/caffe2"
# What image do you want to test? Can be local or URL.
# IMAGE_LOCATION = "images/cat.jpg"
# IMAGE_LOCATION = "https://upload.wikimedia.org/wikipedia/commons/thumb/f/f8/Whole-Lemon.jpg/1235px-Whole-Lemon.jpg"
# IMAGE_LOCATION = "https://upload.wikimedia.org/wikipedia/commons/7/7b/Orange-Whole-%26-Split.jpg"
# IMAGE_LOCATION = "https://upload.wikimedia.org/wikipedia/commons/7/7c/Zucchini-Whole.jpg"
# IMAGE_LOCATION = "https://upload.wikimedia.org/wikipedia/commons/a/ac/Pretzel.jpg"
IMAGE_LOCATION = "https://cdn.pixabay.com/photo/2015/02/10/21/28/flower-631765_1280.jpg"
# What model are we using? You should have already converted or downloaded one.
# format below is the model's:
# folder, init_net, predict_net, mean, input image size
# you can switch the comments on MODEL to try out different model conversions
MODEL = 'squeezenet', 'init_net.pb', 'run_net.pb', 'ilsvrc_2012_mean.npy', 227
# googlenet will fail with "enforce fail at fully_connected_op.h:25"
# MODEL = 'bvlc_googlenet', 'init_net.pb', 'predict_net.pb', 'ilsvrc_2012_mean.npy', 224
# these will run out of memory and fail... waiting for C++ version of predictor
# MODEL = 'bvlc_alexnet', 'init_net.pb', 'predict_net.pb', 'ilsvrc_2012_mean.npy', 224
# MODEL = 'finetune_flickr_style', 'init_net.pb', 'predict_net.pb', 'ilsvrc_2012_mean.npy', 224
# The list of output codes for the AlexNet models (squeezenet)
codes = "https://gist.githubusercontent.com/maraoz/388eddec39d60c6d52d4/raw/791d5b370e4e31a4e9058d49005be4888ca98472/gistfile1.txt"
print "Config set!"
%matplotlib inline
from caffe2.proto import caffe2_pb2
import numpy as np
import skimage.io
import skimage.transform
from matplotlib import pyplot
import os
from caffe2.python import core, workspace
import urllib2
print("Required modules imported.")
def crop_center(img,cropx,cropy):
y,x,c = img.shape
startx = x//2-(cropx//2)
starty = y//2-(cropy//2)
return img[starty:starty+cropy,startx:startx+cropx]
def rescale(img, input_height, input_width):
print("Original image shape:" + str(img.shape) + " and remember it should be in H, W, C!")
print("Model's input shape is %dx%d") % (input_height, input_width)
aspect = img.shape[1]/float(img.shape[0])
print("Orginal aspect ratio: " + str(aspect))
if(aspect>1):
# landscape orientation - wide image
res = int(aspect * input_height)
imgScaled = skimage.transform.resize(img, (input_width, res))
if(aspect<1):
# portrait orientation - tall image
res = int(input_width/aspect)
imgScaled = skimage.transform.resize(img, (res, input_height))
if(aspect == 1):
imgScaled = skimage.transform.resize(img, (input_width, input_height))
pyplot.figure()
pyplot.imshow(imgScaled)
pyplot.axis('on')
pyplot.title('Rescaled image')
print("New image shape:" + str(imgScaled.shape) + " in HWC")
return imgScaled
print "Functions set."
# set paths and variables from model choice
CAFFE_ROOT = os.path.expanduser(CAFFE_ROOT)
CAFFE_MODELS = os.path.join(CAFFE_ROOT, 'models')
MEAN_FILE = os.path.join(CAFFE_MODELS, MODEL[0], MODEL[3])
if not os.path.exists(MEAN_FILE):
mean = 128
else:
mean = np.load(MEAN_FILE).mean(1).mean(1)
mean = mean[:, np.newaxis, np.newaxis]
print "mean was set to: ", mean
INPUT_IMAGE_SIZE = MODEL[4]
if not os.path.exists(CAFFE_ROOT):
print("Houston, you may have a problem.")
INIT_NET = os.path.join(CAFFE_MODELS, MODEL[0], MODEL[1])
PREDICT_NET = os.path.join(CAFFE_MODELS, MODEL[0], MODEL[2])
if not os.path.exists(INIT_NET):
print(INIT_NET + " not found!")
else:
print "Found ", INIT_NET, "...Now looking for", PREDICT_NET
if not os.path.exists(PREDICT_NET):
print "Caffe model file, " + PREDICT_NET + " was not found!"
else:
print "All needed files found! Loading the model in the next block."
# initialize the neural net
p = workspace.Predictor(INIT_NET, PREDICT_NET)
# load and transform image
img = skimage.img_as_float(skimage.io.imread(IMAGE_LOCATION)).astype(np.float32)
img = rescale(img, INPUT_IMAGE_SIZE, INPUT_IMAGE_SIZE)
img = crop_center(img, INPUT_IMAGE_SIZE, INPUT_IMAGE_SIZE)
print "After crop: " , img.shape
pyplot.figure()
pyplot.imshow(img)
pyplot.axis('on')
pyplot.title('Cropped')
# switch to CHW
img = img.swapaxes(1, 2).swapaxes(0, 1)
pyplot.figure()
for i in range(3):
# For some reason, pyplot subplot follows Matlab's indexing
# convention (starting with 1). Well, we'll just follow it...
pyplot.subplot(1, 3, i+1)
pyplot.imshow(img[i])
pyplot.axis('off')
pyplot.title('RGB channel %d' % (i+1))
# switch to BGR
img = img[(2, 1, 0), :, :]
# remove mean for better results
img = img * 255 - mean
# add batch size
img = img[np.newaxis, :, :, :].astype(np.float32)
print "NCHW: ", img.shape
# run the net and return prediction
results = p.run([img])
results = np.asarray(results)
results = np.delete(results, 1)
index = 0
highest = 0
arr = np.empty((0,2), dtype=object)
arr[:,0] = int(10)
arr[:,1:] = float(10)
for i, r in enumerate(results):
    # imagenet index begins with 1!
i=i+1
arr = np.append(arr, np.array([[i,r]]), axis=0)
if (r > highest):
highest = r
index = i
print index, " :: ", highest
# top 3
# sorted(arr, key=lambda x: x[1], reverse=True)[:3]
response = urllib2.urlopen(codes)
for line in response:
code, result = line.partition(":")[::2]
if (code.strip() == str(index)):
print result.strip()[1:-2]
```
Check [this list](https://gist.github.com/maraoz/388eddec39d60c6d52d4) to verify the results.
| github_jupyter |
# Training a Boltzmann Generator for Alanine Dipeptide
This notebook introduces basic concepts behind `bgflow`.
It shows how to build and train a Boltzmann generator for a small peptide. The most important aspects it will cover are
- retrieval of molecular training data
- defining an internal coordinate transform
- defining normalizing flow classes
- combining different normalizing flows
- training a Boltzmann generator via NLL and KLL
The main purpose of this tutorial is to introduce the implementation. The network design is optimized for educational purposes rather than good performance. In the conclusions, we will discuss some aspects of the generator that are not ideal and outline improvements.
## Some Preliminaries
We instruct jupyter to reload any imports automatically and define the device and datatype, on which we want to perform the computations.
```
%load_ext autoreload
%autoreload 2
import torch
device = "cuda:3" if torch.cuda.is_available() else "cpu"
dtype = torch.float32
# a context tensor to send data to the right device and dtype via '.to(ctx)'
ctx = torch.zeros([], device=device, dtype=dtype)
```
## Load the Data and the Molecular System
Molecular trajectories and their corresponding potential energy functions are available from the `bgmol` repository.
```
# import os
# from bgmol.datasets import Ala2TSF300
# target_energy = Ala2TSF300().get_energy_model(n_workers=1)
import os
import mdtraj
#dataset = mdtraj.load('output.dcd', top='ala2_fromURL.pdb')
dataset = mdtraj.load('TSFtraj.dcd', top='ala2_fromURL.pdb')
#fname = "obc_xmlsystem_savedmodel"
#coordinates = dataset.xyz
#target_energy = Ala2TSF300().get_energy_model(n_workers=1)
print(dataset)
import numpy as np
rigid_block = np.array([6, 8, 9, 10, 14])
z_matrix = np.array([
[0, 1, 4, 6],
[1, 4, 6, 8],
[2, 1, 4, 0],
[3, 1, 4, 0],
[4, 6, 8, 14],
[5, 4, 6, 8],
[7, 6, 8, 4],
[11, 10, 8, 6],
[12, 10, 8, 11],
[13, 10, 8, 11],
[15, 14, 8, 16],
[16, 14, 8, 6],
[17, 16, 14, 15],
[18, 16, 14, 8],
[19, 18, 16, 14],
[20, 18, 16, 19],
[21, 18, 16, 19]
])
def dimensions(dataset):
return np.prod(dataset.xyz[0].shape)
dim = dimensions(dataset)
print(dim)
from simtk import openmm
with open('ala2_xml_system.txt') as f:
xml = f.read()
system = openmm.XmlSerializer.deserialize(xml)
from bgflow.distribution.energy.openmm import OpenMMBridge, OpenMMEnergy
from openmmtools import integrators
from simtk import unit
temperature = 300.0 * unit.kelvin
collision_rate = 1.0 / unit.picosecond
timestep = 4.0 * unit.femtosecond
integrator = integrators.LangevinIntegrator(temperature=temperature,collision_rate=collision_rate,timestep=timestep)
energy_bridge = OpenMMBridge(system, integrator, n_workers=1)
target_energy = OpenMMEnergy(int(dim), energy_bridge)
```
The energy model is a `bgflow.Energy` that wraps around OpenMM. The `n_workers` argument determines the number of OpenMM contexts that are used for energy evaluations. In notebooks, we set `n_workers=1` to avoid hiccups. In production, we can omit this argument so that `n_workers` is automatically set to the number of CPU cores.
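For example, a production setup could simply drop the argument (a sketch reusing the `OpenMMBridge`/`OpenMMEnergy` calls from the cell above; the variable names here are made up):
```
# Sketch: omit n_workers so that it defaults to the number of CPU cores
energy_bridge_prod = OpenMMBridge(system, integrator)
target_energy_prod = OpenMMEnergy(int(dim), energy_bridge_prod)
```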
### Visualize Data: Ramachandran Plot for the Backbone Angles
```
import numpy as np
import mdtraj as md
from matplotlib import pyplot as plt
from matplotlib.colors import LogNorm
# These helpers are used later to visualize generated samples
def compute_phi_psi(trajectory):
    phi_atoms = [4, 6, 8, 14]
    phi = md.compute_dihedrals(trajectory, indices=[phi_atoms])[:, 0]
    psi_atoms = [6, 8, 14, 16]
    psi = md.compute_dihedrals(trajectory, indices=[psi_atoms])[:, 0]
    return phi, psi
def plot_phi_psi(ax, trajectory):
    if not isinstance(trajectory, md.Trajectory):
        trajectory = md.Trajectory(
            xyz=trajectory.cpu().detach().numpy().reshape(-1, 22, 3),
            topology=md.load('ala2_fromURL.pdb').topology
        )
    phi, psi = compute_phi_psi(trajectory)
    ax.hist2d(phi, psi, 50, norm=LogNorm())
    ax.set_xlim(-np.pi, np.pi)
    ax.set_ylim(-np.pi, np.pi)
    ax.set_xlabel("$\phi$")
    _ = ax.set_ylabel("$\psi$")
    return trajectory
import numpy as np
n_train = len(dataset)//2
n_test = len(dataset) - n_train
permutation = np.random.permutation(n_train)
all_data = dataset.xyz.reshape(-1, dimensions(dataset))
training_data = torch.tensor(all_data[permutation]).to(ctx)
test_data = torch.tensor(all_data[permutation + n_train]).to(ctx)
#print(training_data.shape)
```
## Define the Internal Coordinate Transform
Rather than generating all-Cartesian coordinates, we use a mixed internal coordinate transform.
The five central alanine atoms will serve as a Cartesian "anchor", from which all other atoms are placed with respect to internal coordinates (IC) defined through a z-matrix. We have deposited a valid `z_matrix` and the corresponding `rigid_block` in the `dataset.system` from `bgmol`.
```
import bgflow as bg
# throw away 6 degrees of freedom (rotation and translation)
dim_cartesian = len(rigid_block) * 3 - 6
print(dim_cartesian)
#dim_cartesian = len(system.rigid_block) * 3
dim_bonds = len(z_matrix)
print(dim_bonds)
dim_angles = dim_bonds
dim_torsions = dim_bonds
coordinate_transform = bg.MixedCoordinateTransformation(
data=training_data,
z_matrix=z_matrix,
fixed_atoms=rigid_block,
#keepdims=None,
keepdims=dim_cartesian,
normalize_angles=True,
).to(ctx)
```
For demonstration, we transform the first 3 samples from the training data set into internal coordinates as follows:
```
bonds, angles, torsions, cartesian, dlogp = coordinate_transform.forward(training_data[:3])
bonds.shape, angles.shape, torsions.shape, cartesian.shape, dlogp.shape
#print(bonds)
```
## Prior Distribution
The next step is to define a prior distribution that we can easily sample from. The normalizing flow will be trained to transform such latent samples into molecular coordinates. Here, we just take a normal distribution, which is a rather naive choice for reasons that will be discussed in other notebooks.
```
dim_ics = dim_bonds + dim_angles + dim_torsions + dim_cartesian
mean = torch.zeros(dim_ics).to(ctx)
# passing the mean explicitly to create samples on the correct device
prior = bg.NormalDistribution(dim_ics, mean=mean)
```
## Normalizing Flow
Next, we set up the normalizing flow by stacking together different neural networks. For now, we will do this in a rather naive way, not distinguishing between bonds, angles, and torsions. Therefore, we will first define a flow that splits the output from the prior into the different IC terms.
### Split Layer
```
split_into_ics_flow = bg.SplitFlow(dim_bonds, dim_angles, dim_torsions, dim_cartesian)
# test
#print(prior.sample(3))
# ics = split_into_ics_flow(prior.sample(1))
# #print(_ics)
# coordinate_transform.forward(*ics, inverse=True)[0].shape
```
### Coupling Layers
Next, we will set up so-called RealNVP coupling layers, which split the input into two channels and then learn affine transformations of channel 1 conditioned on channel 2. Here we will do the split naively between the first and second half of the degrees of freedom.
```
class RealNVP(bg.SequentialFlow):
def __init__(self, dim, hidden):
self.dim = dim
self.hidden = hidden
super().__init__(self._create_layers())
def _create_layers(self):
dim_channel1 = self.dim//2
dim_channel2 = self.dim - dim_channel1
split_into_2 = bg.SplitFlow(dim_channel1, dim_channel2)
layers = [
# -- split
split_into_2,
# --transform
self._coupling_block(dim_channel1, dim_channel2),
bg.SwapFlow(),
self._coupling_block(dim_channel2, dim_channel1),
# -- merge
bg.InverseFlow(split_into_2)
]
return layers
def _dense_net(self, dim1, dim2):
return bg.DenseNet(
[dim1, *self.hidden, dim2],
activation=torch.nn.ReLU()
)
def _coupling_block(self, dim1, dim2):
return bg.CouplingFlow(bg.AffineTransformer(
shift_transformation=self._dense_net(dim1, dim2),
scale_transformation=self._dense_net(dim1, dim2)
))
#RealNVP(dim_ics, hidden=[128]).to(ctx).forward(prior.sample(3))[0].shape
```
### Boltzmann Generator
Finally, we define the Boltzmann generator.
It will sample molecular conformations by
1. sampling in latent space from the normal prior distribution,
2. transforming the samples into a more complicated distribution through a number of RealNVP blocks (the parameters of these blocks will be subject to optimization),
3. splitting the output of the network into blocks that define the internal coordinates, and
4. transforming the internal coordinates into Cartesian coordinates through the inverse IC transform.
```
n_realnvp_blocks = 5
layers = []
for i in range(n_realnvp_blocks):
layers.append(RealNVP(dim_ics, hidden=[128, 128, 128]))
layers.append(split_into_ics_flow)
layers.append(bg.InverseFlow(coordinate_transform))
flow = bg.SequentialFlow(layers).to(ctx)
# test
#flow.forward(prior.sample(3))[0].shape
flow.load_state_dict(torch.load('modelTSFtraj_xmlsystem_20000KLL.pt'))
# print number of trainable parameters
"#Parameters:", np.sum([np.prod(p.size()) for p in flow.parameters()])
generator = bg.BoltzmannGenerator(
flow=flow,
prior=prior,
target=target_energy
)
def plot_energies(ax, samples, target_energy, test_data):
sample_energies = target_energy.energy(samples).cpu().detach().numpy()
md_energies = target_energy.energy(test_data[:len(samples)]).cpu().detach().numpy()
cut = max(np.percentile(sample_energies, 80), 20)
ax.set_xlabel("Energy [$k_B T$]")
# y-axis on the right
ax2 = plt.twinx(ax)
ax.get_yaxis().set_visible(False)
ax2.hist(sample_energies, range=(-50, cut), bins=40, density=False, label="BG")
ax2.hist(md_energies, range=(-50, cut), bins=40, density=False, label="MD")
ax2.set_ylabel(f"Count [#Samples / {len(samples)}]")
ax2.legend()
def plot_energy_onlyMD(ax, target_energy, test_data):
md_energies = target_energy.energy(test_data[:1000]).cpu().detach().numpy()
ax.set_xlabel("Energy [$k_B T$]")
# y-axis on the right
ax2 = plt.twinx(ax)
ax.get_yaxis().set_visible(False)
#ax2.hist(sample_energies, range=(-50, cut), bins=40, density=False, label="BG")
ax2.hist(md_energies, bins=40, density=False, label="MD")
ax2.set_ylabel(f"Count [#Samples / 1000]")
ax2.legend()
n_samples = 10000
samples = generator.sample(n_samples)
print(samples.shape)
fig, axes = plt.subplots(1, 2, figsize=(6,3))
fig.tight_layout()
samplestrajectory = plot_phi_psi(axes[0], samples)
plot_energies(axes[1], samples, target_energy, test_data)
#plt.savefig(f"varysnapshots/{fname}.png", bbox_inches = 'tight')
#samplestrajectory.save("mytraj_full_samples.dcd")
#del samples
```
```
bonds, angles, torsions, cartesian, dlogp = coordinate_transform.forward(samples)
print(bonds.shape)
print('1:', bonds[0])
CHbond_indices = [0, 2, 3, 7, 8, 9, 14, 15, 16]
bonds_new = bonds.clone().detach()
bonds_new[:,CHbond_indices] = 0.109
print('2:', bonds_new[0:3])
samples_corrected = coordinate_transform.forward(bonds_new,angles,torsions,cartesian,inverse=True)
print(samples_corrected[0].shape)
```
```
samplestrajectory = mdtraj.Trajectory(
xyz=samples[0].cpu().detach().numpy().reshape(-1, 22, 3),
topology=mdtraj.load('ala2_fromURL.pdb').topology
)
#samplestrajectory.save('mysamples_traj_correctedonce.dcd')
import nglview as nv
#samplestrajectory.save("Samplestraj.pdb")
#md.save(samplestrajectory, "obcstride10Samplestraj.dcd")
widget = nv.show_mdtraj(samplestrajectory)
widget
```
## Conclusions
This tutorial has introduced the most basic concepts and implementations underlying Boltzmann generators and `bgflow`. That said, the trained networks did not do a particularly good job in reproducing the molecular Boltzmann distribution. Specifically, they only modeled the major modes of the $\phi$ angle and still produced many samples with unreasonably large energies. Let's look at a few shortcomings of the present architecture:
### 1) Unconstrained Internal Coordinates
Bonds, angles, and torsions cannot take arbitrary values in principle. Bond lengths need to be positive, angles live in $[0,\pi],$ and torsions are periodic in $[-\pi, \pi].$ Neither those bounds nor the periodicity of the torsion distributions have been taken into account by the present Boltzmann generator. The layers of the normalizing flow should be built in a way that preserves these constraints on the ICs.
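As a toy illustration of how such bounds could be enforced (plain PyTorch, not a bgflow built-in; in a real flow layer the log-determinant of the Jacobian of this map would also have to be tracked):
```
import math
import torch
import torch.nn.functional as F

def squash_ics(raw_bonds, raw_angles, raw_torsions):
    """Map unconstrained network outputs onto valid IC ranges (sketch only)."""
    bonds = F.softplus(raw_bonds)                        # bond lengths > 0
    angles = math.pi * torch.sigmoid(raw_angles)         # angles in (0, pi)
    torsions = torch.atan2(torch.sin(raw_torsions),      # torsions wrapped to (-pi, pi]
                           torch.cos(raw_torsions))
    return bonds, angles, torsions
```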
### 2) Arbitrary Coupling
The input for the coupling layers was split into two channels rather arbitrarily (first vs. second half). A partial remedy is to define the conditioning in a physically informed manner. Another solution is to augment the base space by momenta, which can be done with augmented normalizing flows (see for instance the notebook on temperature-steering flows).
### 3) RealNVP Layers
Affine coupling layers are well-known to perform poorly in separating modes. This explains that the metastable region around $\phi \approx \pi/2$ was not captured by the generator. Other architectures such as augmented flows or neural spline flows do a better job for complicated, multimodal distributions.
### 4) Training
The generators were only trained for relatively few iterations and performance may improve with longer training and better choices of the learning rate and hyperparameters.
| github_jupyter |
# LassoLars Regression with Robust Scaler
This code template is for regression analysis using a simple LassoLars regression, a lasso model implemented with the LARS algorithm, combined with feature scaling using RobustScaler in a pipeline.
### Required Packages
```
import warnings
import numpy as np
import pandas as pd
import seaborn as se
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import RobustScaler
from sklearn.metrics import r2_score, mean_absolute_error, mean_squared_error
from sklearn.linear_model import LassoLars
warnings.filterwarnings('ignore')
```
### Initialization
Filepath of CSV file
```
#filepath
file_path= ""
```
List of features required for model training.
```
#x_values
features=[]
```
Target feature for prediction.
```
#y_value
target=''
```
### Data Fetching
Pandas is an open-source, BSD-licensed library providing high-performance, easy-to-use data manipulation and data analysis tools.
We will use the pandas library to read the CSV file from its storage path, and use the head function to display the first few rows.
```
df=pd.read_csv(file_path)
df.head()
```
### Feature Selections
Feature selection is the process of reducing the number of input variables when developing a predictive model. It is used both to reduce the computational cost of modelling and, in some cases, to improve the performance of the model.
We will assign all the required input features to X and target/outcome to Y.
```
X=df[features]
Y=df[target]
```
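As a side note, the feature list above is picked by hand. A sketch of an automated alternative (not used in this template) with scikit-learn's `SelectKBest` could look like the following; it assumes the features have already been encoded and cleaned as in the preprocessing step below, and `k=10` is an arbitrary illustrative value:
```
from sklearn.feature_selection import SelectKBest, f_regression

# Keep only the 10 features with the strongest univariate relationship to the target
selector = SelectKBest(score_func=f_regression, k=10)
X_selected = selector.fit_transform(X, Y)
```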
### Data Preprocessing
Since most machine learning models in the scikit-learn library don't handle string categorical data or null values, we have to explicitly remove or replace them. The snippet below defines functions that remove null values, if any exist, and convert string categorical data in the dataset by encoding it as dummy/indicator variables.
```
def NullClearner(df):
if(isinstance(df, pd.Series) and (df.dtype in ["float64","int64"])):
df.fillna(df.mean(),inplace=True)
return df
elif(isinstance(df, pd.Series)):
df.fillna(df.mode()[0],inplace=True)
return df
else:return df
def EncodeX(df):
return pd.get_dummies(df)
```
Calling preprocessing functions on the feature and target set.
```
x=X.columns.to_list()
for i in x:
X[i]=NullClearner(X[i])
X=EncodeX(X)
Y=NullClearner(Y)
X.head()
```
#### Correlation Map
In order to check the correlation between the features, we will plot a correlation matrix. It is effective in summarizing a large amount of data where the goal is to see patterns.
```
f,ax = plt.subplots(figsize=(18, 18))
matrix = np.triu(X.corr())
se.heatmap(X.corr(), annot=True, linewidths=.5, fmt= '.1f',ax=ax, mask=matrix)
plt.show()
```
### Data Splitting
The train-test split is a procedure for evaluating the performance of an algorithm. The procedure involves taking a dataset and dividing it into two subsets. The first subset is utilized to fit/train the model. The second subset is used for prediction. The main motive is to estimate the performance of the model on new data.
```
x_train,x_test,y_train,y_test=train_test_split(X,Y,test_size=0.2,random_state=123)
```
### Model
LassoLars is a lasso model implemented using the LARS algorithm, and unlike the implementation based on coordinate descent, this yields the exact solution, which is piecewise linear as a function of the norm of its coefficients.
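To see this piecewise-linear solution path, it can be computed and plotted directly (a sketch using scikit-learn's `lars_path` on the `X` and `Y` prepared above):
```
from sklearn.linear_model import lars_path

# Full Lasso-LARS regularization path: each coefficient is piecewise linear in alpha
alphas, _, coefs = lars_path(X.values.astype(float), Y.values.astype(float), method='lasso')
plt.plot(alphas, coefs.T)
plt.xlabel('alpha')
plt.ylabel('coefficients')
plt.title('LassoLars coefficient path')
plt.show()
```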
### Tuning parameters
> **fit_intercept** -> whether to calculate the intercept for this model. If set to false, no intercept will be used in calculations
> **alpha** -> Constant that multiplies the penalty term. Defaults to 1.0. alpha = 0 is equivalent to an ordinary least square, solved by LinearRegression. For numerical reasons, using alpha = 0 with the LassoLars object is not advised and you should prefer the LinearRegression object.
> **eps** -> The machine-precision regularization in the computation of the Cholesky diagonal factors. Increase this for very ill-conditioned systems. Unlike the tol parameter in some iterative optimization-based algorithms, this parameter does not control the tolerance of the optimization.
> **max_iter** -> Maximum number of iterations to perform.
> **positive** -> Restrict coefficients to be >= 0. Be aware that you might want to remove fit_intercept which is set True by default. Under the positive restriction the model coefficients will not converge to the ordinary-least-squares solution for small values of alpha. Only coefficients up to the smallest alpha value (alphas_[alphas_ > 0.].min() when fit_path=True) reached by the stepwise Lars-Lasso algorithm are typically in congruence with the solution of the coordinate descent Lasso estimator.
> **precompute** -> Whether to use a precomputed Gram matrix to speed up calculations.
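For reference, these parameters can be passed directly to the `LassoLars` step of the pipeline. The values below are purely illustrative, not tuned:
```
# Illustrative only: a pipeline with explicit LassoLars hyperparameters
tuned_model = make_pipeline(RobustScaler(), LassoLars(alpha=0.1, fit_intercept=True, max_iter=500, positive=False))
```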
### Feature Scaling
Robust Scaler scale features using statistics that are robust to outliers.
This Scaler removes the median and scales the data according to the quantile range (defaults to IQR: Interquartile Range). The IQR is the range between the 1st quartile (25th quantile) and the 3rd quartile (75th quantile).<br>
For more information... [click here](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.RobustScaler.html)
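With its default settings, the transformation applied to each feature $x$ is therefore:
$$x_{scaled} = \frac{x - \mathrm{median}(x)}{Q_{3}(x) - Q_{1}(x)}$$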
```
model=make_pipeline(RobustScaler(),LassoLars())
model.fit(x_train,y_train)
```
#### Model Accuracy
We will use the trained model to make predictions on the test set, and then use the predicted values to measure the accuracy of our model.
score: The score function returns the coefficient of determination R2 of the prediction.
```
print("Accuracy score {:.2f} %\n".format(model.score(x_test,y_test)*100))
```
> **r2_score**: The **r2_score** function computes the coefficient of determination, i.e. the proportion of the variability in the target that is explained by our model.
> **mae**: The **mean absolute error** function calculates the average absolute distance between the real values and the predicted values.
> **mse**: The **mean squared error** function averages the squared errors, penalizing the model more heavily for large errors.
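For reference, the standard definitions of these metrics are:
$$R^2 = 1 - \frac{\sum_i (y_i - \hat{y}_i)^2}{\sum_i (y_i - \bar{y})^2}, \qquad \mathrm{MAE} = \frac{1}{n}\sum_i \lvert y_i - \hat{y}_i \rvert, \qquad \mathrm{MSE} = \frac{1}{n}\sum_i (y_i - \hat{y}_i)^2$$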
```
y_pred=model.predict(x_test)
print("R2 Score: {:.2f} %".format(r2_score(y_test,y_pred)*100))
print("Mean Absolute Error {:.2f}".format(mean_absolute_error(y_test,y_pred)))
print("Mean Squared Error {:.2f}".format(mean_squared_error(y_test,y_pred)))
```
#### Prediction Plot
First, we plot the actual target values for the first 20 records of the test set against their record number.
Then we plot the model's predictions for the same records, so the predicted and true values can be compared.
```
plt.figure(figsize=(14,10))
plt.plot(range(20),y_test[0:20], color = "green")
plt.plot(range(20),model.predict(x_test[0:20]), color = "red")
plt.legend(["Actual","prediction"])
plt.title("Predicted vs True Value")
plt.xlabel("Record number")
plt.ylabel(target)
plt.show()
```
#### Creator: Anu Rithiga , Github: [Profile](https://github.com/iamgrootsh7)
| github_jupyter |
```
import pandas as pd
import numpy as np
import xgboost as xgb
from sklearn.metrics import mean_absolute_error as mae
from sklearn.model_selection import cross_val_score
from hyperopt import hp, fmin, tpe, STATUS_OK
import eli5
from eli5.sklearn import PermutationImportance
```
## Loading the data
```
df = pd.read_hdf("../data/car.h5")
df.sample()
SUFFIX_CAT = '__cat'
for feat in df.columns:
if isinstance(df[feat][0], list):
continue
factorized_values = df[feat].factorize()[0]
if SUFFIX_CAT in feat:
df[feat] = factorized_values
else:
df[feat+ SUFFIX_CAT] = factorized_values
cat_feats = [x for x in df.columns if SUFFIX_CAT in x]
cat_feats = [x for x in cat_feats if 'price' not in x]
df['param_rok-produkcji'] = df['param_rok-produkcji'].map(lambda x: -1 if str(x) == 'None' else int(x))
df['param_moc'] = df['param_moc'].map(lambda x: -1 if str(x) =='None' else int(x.split(' ')[0]))
df['param_pojemność-skokowa'] = df['param_pojemność-skokowa'].map(lambda x: -1 if str(x) =='None' else int(x.split('cm')[0].replace(' ', '')))
feats = [
'param_rok-produkcji',
'param_stan__cat',
'param_napęd__cat',
'param_skrzynia-biegów__cat',
'param_moc',
'param_faktura-vat__cat',
'param_marka-pojazdu__cat',
'param_typ__cat',
'feature_kamera-cofania__cat',
'param_wersja__cat',
'param_model-pojazdu__cat',
'param_pojemność-skokowa',
'param_kod-silnika__cat',
'seller_name__cat',
'feature_wspomaganie-kierownicy__cat',
'feature_czujniki-parkowania-przednie__cat',
'param_uszkodzony__cat',
'feature_system-start-stop__cat',
'feature_regulowane-zawieszenie__cat',
'feature_asystent-pasa-ruchu__cat',
]
def run_model(model, feats):
X = df[feats].values
y = df['price_value'].values
scores = cross_val_score(model, X, y, cv=3, scoring='neg_mean_absolute_error')
return np.mean(scores), np.std(scores)
```
## XGBoost
```
xgb_params = {
'max_depth':5,
    'n_estimators': 50,
'learning_rate':0.1,
'seed':0,
'nthread': 3
}
model = xgb.XGBRegressor(**xgb_params)
run_model(model, feats)
def obj_func(params):
print("Traniang with params: ")
print(params)
mean_mae, score_std = run_model(xgb.XGBRegressor(**params), feats)
return {"loss": np.abs(mean_mae), "status": STATUS_OK}
xgb_reg_params = {
'learning_rate': hp.choice('learning_rate', np.arange(0.05, 0.31, 0.05)),
'max_depth': hp.choice('max_depth', np.arange(5, 16, 1, dtype=int)),
'subsample': hp.quniform('subsample', 0.5, 1, 0.05),
'colsample_bytree': hp.quniform('colsample_bytree', 0.5, 1, 0.05),
'objective': 'reg:squarederror',
    'n_estimators': 100,
'seed':0,
'nthread': 4
}
best = fmin(obj_func, xgb_reg_params, algo=tpe.suggest, max_evals=30)
best
```
| github_jupyter |
# Introduction to SLAM Algorithms
## 1. Terminology
### 1.1 What is SLAM?
SLAM stands for Simultaneous Localization and Mapping. It describes the following kind of process: a robot moves through an unknown environment and, by processing the information collected by various sensors about itself and its surroundings, obtains an accurate estimate of its own position ("localization"), and then uses that position to determine the locations of features in the environment ("mapping").
During SLAM, the robot continuously collects data from its sensors, such as lidar point clouds, camera images, IMU readings, and wheel odometry. By analyzing and processing this ever-changing sensor data, the robot produces an estimate of its trajectory in real time (for example, a sequence of poses), but this trajectory usually contains considerable error and therefore needs to be corrected and optimized; the correction step is often no longer performed in real time. The real-time process that produces the trajectory is generally called the "front end", and the correction and optimization process is generally called the "back end".
Back-end processing methods fall into two categories: filtering and optimization.
### 1.2 What is filtering?
In general engineering, filtering means selecting parts of a signal according to some rule and keeping the desired content, as in high-pass, low-pass, or band-pass filtering. In the context of SLAM, however, filtering refers to the family of "filters" derived from Bayesian filtering: using probabilistic reasoning, they combine sensor readings, sensor parameters, the robot's pose at the previous time step, and other information to correct the robot's pose at the next time step. The robot's rough, inaccurate trajectory becomes more accurate after being "filtered".
Common filters in SLAM include the EKF (extended Kalman filter), the UKF (unscented Kalman filter), and the particle filter.
### 1.3 What is an optimization problem? What is a nonlinear least-squares problem?
Filtering methods used to dominate SLAM, but as maps grow larger (for example, when the robot covers a wider area, or when visual algorithms make the map "finer"), the computation required by filtering keeps increasing. As a result, optimization-based methods have become the mainstream approach for the SLAM back end.
So what is an optimization problem? Suppose a function f takes x as input and produces y as output; an optimization problem consists of finding, by some means, an x that maximizes or minimizes y. In SLAM, x usually denotes the state variables to be determined, such as the robot's pose at each time step and the spatial positions of map feature points, while y usually denotes some error, such as the difference between a sensor measurement and the corresponding state variables. The function f to be optimized in SLAM is usually nonlinear and takes the form of a sum of squared terms, so the problem is a nonlinear least-squares problem.
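As a generic illustration (standard graph-based SLAM notation, not tied to any particular system), such a problem can be written as
$$\mathbf{x}^{*} = \arg\min_{\mathbf{x}} \; \frac{1}{2} \sum_{i} \mathbf{e}_{i}(\mathbf{x})^{\top} \, \Omega_{i} \, \mathbf{e}_{i}(\mathbf{x}),$$
where each residual $\mathbf{e}_{i}(\mathbf{x})$ measures the mismatch between a sensor measurement and the value predicted from the states $\mathbf{x}$, and the information matrix $\Omega_{i}$ weights the measurement by its confidence.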
Open-source libraries for solving such nonlinear optimization problems include Google's Ceres, which is used in algorithms such as Cartographer and VINS.
### 1.4 What is graph optimization?
Graph optimization means formulating an optimization problem as a "graph" (note: "graph" here refers to a particular data structure), so that properties and algorithms from graph theory can be applied; in essence it is still an optimization problem. A simple way to think about it: the state variables to be optimized, i.e. the robot poses at each time step and the spatial positions of map feature points, become the vertices of the graph; related vertices are connected by edges, and each edge represents an error term. The graph optimization problem is then to adjust the "positions" of the vertices so that the sum of all edge errors is minimized.
Open-source libraries for graph optimization include g2o, which is used in algorithms such as ORB-SLAM.
### 1.5 What is a constraint?
In a graph optimization problem, an edge connecting two vertices is called a "constraint". A constraint can represent, for example, the difference between a laser measurement and a position state, or the difference between an IMU measurement and a position state.
### 1.6 What is loop closure detection?
Loop closure detection (also called loop detection) simply means that the robot "sees" a scene it has seen before; when that happens, loop closure detection has succeeded. Loop closure detection plays an important role in the back-end optimization of SLAM.
### 1.7 A minimal example:
[graph slam tutorial: from derivation to application, part 1](https://heyijia.blog.csdn.net/article/details/47686523)
## 2. Examples
Main weapon versus auxiliary weapons:
For a tank, the main gun in the middle of the turret is clearly the main weapon, while the auxiliary weapons include machine guns, anti-tank missiles, and so on.
Similarly, in laser SLAM the lidar is the main weapon and the IMU, odometry, etc. are auxiliary weapons; in visual SLAM the camera is the main weapon and the IMU, odometry, etc. are auxiliary weapons.
### 2.1 Laser SLAM example:
cartographer

In the engineering practice of SLAM, nonlinear optimization does not appear only in the global optimization stage of the back end. Take Google's Cartographer as an example:
The front end receives laser scans frame by frame and inserts them into a small submap (for example, every 90 scans form one submap); the question of where to insert a scan into the submap is answered by calling the nonlinear optimization library Ceres, with the IMU and odometry providing the initial guess for this optimization. The back end performs loop closure detection: it searches for constraints between newly built submaps and earlier scans and calls Ceres to compute these constraints, using a method called "branch and bound" to provide the initial guess for this kind of optimization. Finally, the back end performs a global optimization of all existing scans and submaps based on the constraints, once again calling Ceres to solve the problem.
So, roughly speaking, nonlinear optimization is used in three places in Cartographer.
### 2.2 Visual SLAM example:
VINS-mono

HKUST's VINS is a classic example of fusing camera images with IMU information for SLAM. Take VINS-Mono, which is built around a monocular camera, as an example:
First comes the "initialization" step, in which the camera images and the IMU assist each other: the IMU resolves the fact that a monocular camera cannot measure depth and provides the direction of gravity, while the images calibrate certain internal parameters of the IMU.
Using a "sliding window" approach, a nonlinear optimization problem is built from the image and IMU information to solve for the optimized pose of each frame. Together, these components form the VIO, the so-called visual-inertial odometry, which can be regarded as the front end; note that this front end itself continuously optimizes the per-frame poses with nonlinear optimization.
If loop closure detection finds a loop, "relocalization" is performed through nonlinear optimization to adjust the poses inside the sliding window; finally, a global optimization corrects the poses of all frames, again using nonlinear optimization.
The following figure from the paper illustrates relocalization and global optimization:

To aid understanding, here is a summary of the role of the IMU in the different SLAM algorithms:
1. The main role of the IMU in Cartographer: before a laser frame is inserted into a submap by scan matching, it predicts the robot's new pose, providing the initial guess for the nonlinear optimization.
2. The main role of the IMU in VINS: during initialization, it helps recover parameters such as the depth scale of the images; it also takes part in building the VIO optimization constraints.
| github_jupyter |
```
from fknn import *
import numpy as np
import pandas as pd
dataset = pd.read_csv("iris-virginica.csv")
dataset = dataset.sample(frac=1)
dataset
X = dataset.iloc[:, 1:3].values
Y = dataset.iloc[:,0].values
from sklearn.model_selection import train_test_split
xTrain, xTest, yTrain, yTest = train_test_split(X,Y)
from sklearn.model_selection import cross_val_score
from sklearn.metrics import accuracy_score, mean_squared_error
model = FuzzyKNN()
model.fit(xTrain, yTrain)
model.score(xTest, yTest)
model.mean_squared_error(xTest, yTest)
model.predict(xTrain[3])
```
# Cross Validation
```
value_array = []
error_array = []
from sklearn.model_selection import StratifiedKFold
skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=1)
for train_index, test_index in skf.split(X, Y):
print("TRAIN:", train_index, "TEST:", test_index)
xTrain, xTest = X[train_index], X[test_index]
yTrain, yTest = Y[train_index], Y[test_index]
model.fit(xTrain, yTrain)
value = model.score(xTest, yTest)
error = model.mean_squared_error(xTest, yTest)
value_array.append(value)
error_array.append(error)
np.mean(value_array)
np.mean(error_array)
```
# Model Selection & Cross Validation
```
a = np.arange (1, 21, 2)
parameters = {"k" : a}
parameters["k"]
from sklearn.model_selection import GridSearchCV
clf = GridSearchCV(model, parameters, cv = 5)
clf.fit(xTrain, yTrain)
clf.score(xTest, yTest)
best_params = clf.best_params_
best_params
model = clf.best_estimator_
def MSE_membership(self, X, y):
memb, _ = self.predict(X)
res = []
for t in memb:
res.append(t[1])
return mean_squared_error(y, res)
model.RMSE_membership(xTest, yTest)
from sklearn.model_selection import GridSearchCV, StratifiedKFold
from sklearn.metrics import classification_report, mean_squared_error
from sklearn.metrics import accuracy_score
from sklearn.utils import shuffle
df = pd.read_csv('iris-setosa.csv')
X = df.iloc[:, 1:3].values
y = df.iloc[:,0].values
seed = 10
X, y = shuffle(X, y, random_state=seed)
a = np.arange (1, 21, 2)
parameters = {"k" : a}
N_SPLIT = 5
err = []
acc = []
skf = StratifiedKFold(n_splits=N_SPLIT, shuffle=False)  # random_state is only meaningful when shuffle=True
for train_index, validation_index in skf.split(X, y):
print(train_index)
X_train, X_validation = X[train_index], X[validation_index]
y_train, y_validation = y[train_index], y[validation_index]
model = FuzzyKNN()
clf = GridSearchCV(model, parameters, cv=5)
clf.fit(X_train, y_train)
best_model = clf.best_estimator_
best_model.fit(X_train, y_train)
acc.append(best_model.score(X_validation, y_validation))
val = best_model.RMSE_membership(X_validation, y_validation)
err.append(val)
acc
err
```
| github_jupyter |
```
import tensorflow as tf
import numpy as np
rng = np.random
import matplotlib.pyplot as plt
learning_rate = 0.0001
training_epochs = 1000
display_step = 50
with tf.name_scope("Creation_of_array"):
x_array=np.asarray([2.0,9.4,3.32,0.88,-2.23,1.11,0.57,-2.25,-3.31,6.45])
y_array=np.asarray([1.22,0.34,-0.08,2.25,4.41,3.09,-6.66,-9.77,0.001,2.25])
x = tf.constant(x_array,dtype = tf.float32,name = "x_array")
y = tf.constant(y_array,dtype = tf.float32, name= "y_array")
with tf.name_scope("Calculating_y_mean"):
mean_y = tf.reduce_mean(y, name = "mean_y")
with tf.Session() as sess:
result_y = sess.run(mean_y)
print(result_y)
with tf.name_scope("Calculating_x_mean_and_x_variance"):
mean_x, variance = tf.nn.moments(x, [0], name = "mean_x_and_variance_x")
with tf.Session() as sess:
m, v = sess.run([mean_x, variance])
print(m)
print(v)
with tf.name_scope("Calculating_covariance"):
def tensorflow_covariance(x_array,y_array,x_mean,y_mean):
cov = 0.0
for i in range(0,10):
x_val = tf.subtract(x_array[i],x_mean, name="Finding_difference_of_xval_and_mean")
y_val = tf.subtract(y_array[i],y_mean, name="Finding_difference_of_yval_and_mean")
total_val = tf.multiply(x_val,y_val, name="Multiplying_found_values")
cov = tf.add(cov,total_val, name="Recursive_addition")
return cov/10.0
with tf.Session() as sess:
covar = sess.run(tensorflow_covariance(x,y,m,result_y))
print(covar)
with tf.name_scope("Calculating_slope_m_and_c"):
slope = tf.div(covar,v,name="Finding_slope")
intm = tf.multiply(slope,m,name = "Intermediate_step")
c_intm = tf.subtract(result_y,intm,name = "Finding_c")
with tf.Session() as sess:
m_slope = sess.run(slope)
c = sess.run(c_intm)
print(m_slope)
print(c)
with tf.name_scope("Plotting"):
n_samples = x_array.shape[0]
X = tf.placeholder("float")
Y = tf.placeholder("float")
# Set model weights
W = tf.Variable(rng.randn(), name="weight")
b = tf.Variable(rng.randn(), name="bias")
# Construct a linear model
pred = tf.add(tf.multiply(X, W), b)
# Mean squared error
cost = tf.reduce_sum(tf.pow(pred-Y, 2))/(2*n_samples)
# Gradient descent
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)
# Initializing the variables
init = tf.global_variables_initializer()
# Launch the graph
with tf.Session() as sess:
sess.run(init)
# Fit all training data
for epoch in range(training_epochs):
for (p, r) in zip(x_array, y_array):
sess.run(optimizer, feed_dict={X: p, Y: r})
# Display logs per epoch step
if (epoch+1) % display_step == 0:
c = sess.run(cost, feed_dict={X: x_array, Y:y_array})
print("Epoch:", '%04d' % (epoch+1), "cost=", "{:.9f}".format(c), \
"W=", sess.run(W), "b=", sess.run(b))
print("Optimization Finished!")
training_cost = sess.run(cost, feed_dict={X: x_array, Y: y_array})
print("Training cost=", training_cost, "W=", sess.run(W), "b=", sess.run(b), '\n')
# Graphic display
plt.plot(x_array, y_array, 'ro', label='Original data')
plt.plot(x_array, sess.run(W) * x_array + sess.run(b), label='Fitted line')
plt.legend()
plt.show()
```
| github_jupyter |
```
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
import gc
plt.style.use('ggplot')
dtypes = {
'ip' : 'uint32',
'app' : 'uint16',
'device' : 'uint16',
'os' : 'uint16',
'channel' : 'uint16',
'is_attributed' : 'uint8',
}
random = pd.read_csv('train_random_10_percent.csv', dtype=dtypes)
df = random.sample(3000000)
# prepare test data
test = pd.read_csv("test.csv", dtype=dtypes)
df = df.sort_values(['ip','click_time'])
test = test.sort_values(['ip','click_time'])
df.shape
gc.collect()
df['click_time'] = pd.to_datetime(df.click_time)
df['attributed_time'] = pd.to_datetime(df.attributed_time)
test['click_time'] = pd.to_datetime(test.click_time)
did_download = df[df.is_attributed==1].ip.values
did_download
df[df.is_attributed==1]
#ip of people that downloaded an application at some point
did_download = df[df.ip.apply(lambda x: x in did_download)]
did_download
did_download.shape
ip_ad_exposure = did_download.ip.value_counts()
ip_ad_exposure
app_or_channel = did_download[did_download.is_attributed == 1]
app_or_channel.shape
downloaded = did_download.dropna()
#lets explore more just the adds that led to download
time_of_exposure = did_download.attributed_time.dropna().groupby(did_download.attributed_time.dt.hour).count()
time_of_exposure
t = downloaded.attributed_time - downloaded.click_time
channel_success = did_download.groupby(['channel']).is_attributed.mean()
channel_success.head(10)
app_success = did_download.groupby(['app']).is_attributed.mean()
channel_success = channel_success.to_dict()
app_success = app_success.to_dict()
df['channel_success'] = df.channel.map(channel_success)
df['app_success'] = df.channel.map(app_success)
df.channel_success.fillna(0,inplace=True)
df.app_success.fillna(0,inplace=True)
df.head(10)
s = df.groupby(['ip']).os.value_counts().to_frame().rename(columns={'os':'ip_os_count'}).reset_index()
u = test.groupby(['ip']).os.value_counts().to_frame().rename(columns={'os':'ip_os_count'}).reset_index()
s.head(10)
gc.collect()
df = pd.merge(df,s,on=['ip','os'])
df['ip_os_count'] = df.ip_os_count.astype('float')
test = pd.merge(test,u,on=['ip','os'])
test['ip_os_count'] = test.ip_os_count.astype('float')
df.head(10)
n_chans = df.groupby(['ip','app']).channel.count().reset_index().rename(columns={'channel':'ip_app_count'})
df = df.merge(n_chans,on=['ip','app'],how='left')
x_chans = test.groupby(['ip','app']).channel.count().reset_index().rename(columns={'channel':'ip_app_count'})
test = test.merge(x_chans,on=['ip','app'],how='left')
test.head(10)
df['clicked'] = np.ones(df.shape[0],dtype= np.float64)
df['app_exposure'] = df.groupby(['ip','app',]).clicked.cumsum()
df['channel_exposure'] = df.groupby(['ip','channel',]).clicked.cumsum()
test['clicked'] = np.ones(test.shape[0],dtype= np.float64)
test['app_exposure'] = test.groupby(['ip','app',]).clicked.cumsum()
test['channel_exposure'] = test.groupby(['ip','channel',]).clicked.cumsum()
df.head(10)
df['daily_usage'] = df.groupby(['ip',df.click_time.dt.day]).clicked.cumsum()
df.head(10)
df['hour'] = df.click_time.dt.hour
df['hour_cumative_clicks'] = df.groupby(['ip',df.click_time.dt.hour]).clicked.cumsum()
df.head(10)
gc.collect()
test['daily_usage'] = test.groupby(['ip', test.click_time.dt.day]).clicked.cumsum()
test['hour'] = test.click_time.dt.hour
test['hour_cumative_clicks'] = test.groupby(['ip', test.click_time.dt.hour]).clicked.cumsum()
gc.collect()
from sklearn.model_selection import train_test_split
X = df[['app','device','os','channel','app_exposure','daily_usage','hour','hour_cumative_clicks','ip_os_count']]
y = df.is_attributed
X_test = test[['app','device','os','channel','app_exposure','daily_usage','hour','hour_cumative_clicks','ip_os_count']]
gc.collect()
from catboost import CatBoostClassifier
categorical_features_indices = np.where(X.dtypes != np.float)[0]
categorical_features_indices = np.where(X_test.dtypes != np.float)[0]
cat = CatBoostClassifier()
model = cat.fit(X, y,cat_features=categorical_features_indices,plot=False,verbose=True)
y_pred_prob = model.predict_proba(X_test)
gc.collect()
output = pd.DataFrame(test['click_id'])
output['is_attributed'] = y_pred_prob[:,1]
output = output.set_index('click_id')
output.to_csv("submission_stackF.csv")
```
| github_jupyter |
```
# likely the simplest possible version?
# import turtle as t
# def sier(n,length):
# if (n==0):
# return
# for i in range(3):
# sier(n-1, length/2)
# t.fd(length)
# t.rt(120)
#!/usr/bin/env python
##########################################################################################
# a very complicated version
# import necessary modules
# ------------------------
from numpy import *
import turtle
##########################################################################################
# Functions defining the drawing actions
# (used by the function DrawSierpinskiTriangle).
# ----------------------------------------------
def Left(turn, point, fwd, angle, turt):
turt.left(angle)
return [turn, point, fwd, angle, turt]
def Right(turn, point, fwd, angle, turt):
turt.right(angle)
return [turn, point, fwd, angle, turt]
def Forward(turn, point, fwd, angle, turt):
turt.forward(fwd)
return [turn, point, fwd, angle, turt]
##########################################################################################
# The drawing function
# --------------------
#
# level level of Sierpinski triangle (minimum value = 1)
# ss screensize (Draws on a screen of size ss x ss. Default value = 400.)
#-----------------------------------------------------------------------------------------
def DrawSierpinskiTriangle(level, ss=400):
# typical values
turn = 0 # initial turn (0 to start horizontally)
angle=60.0 # in degrees
# Initialize the turtle
turtle.hideturtle()
turtle.screensize(ss,ss)
turtle.penup()
turtle.degrees()
# The starting point on the canvas
fwd0 = float(ss)
point=array([-fwd0/2.0, -fwd0/2.0])
# Setting up the Lindenmayer system
# Assuming that the triangle will be drawn in the following way:
# 1.) Start at a point
# 2.) Draw a straight line - the horizontal line (H)
# 3.) Bend twice by 60 degrees to the left (--)
# 4.) Draw a straight line - the slanted line (X)
# 5.) Bend twice by 60 degrees to the left (--)
# 6.) Draw a straight line - another slanted line (X)
# This produces the triangle in the first level. (so the axiom to begin with is H--X--X)
# 7.) For the next level replace each horizontal line using
# X->XX
# H -> H--X++H++X--H
# The lengths will be halved.
decode = {'-':Left, '+':Right, 'X':Forward, 'H':Forward}
axiom = 'H--X--X'
# Start the drawing
turtle.goto(point[0], point[1])
turtle.pendown()
turtle.hideturtle()
turt=turtle.getpen()
startposition=turt.clone()
# Get the triangle in the Lindenmayer system
fwd = fwd0/(2.0**level)
path = axiom
for i in range(0,level):
path=path.replace('X','XX')
path=path.replace('H','H--X++H++X--H')
# Draw it.
for i in path:
[turn, point, fwd, angle, turt]=decode[i](turn, point, fwd, angle, turt)
##########################################################################################
DrawSierpinskiTriangle(5)
```
| github_jupyter |
## Analyze A/B Test Results
You may either submit your notebook through the workspace here, or you may work from your local machine and submit through the next page. Either way assure that your code passes the project [RUBRIC](https://review.udacity.com/#!/projects/37e27304-ad47-4eb0-a1ab-8c12f60e43d0/rubric). **Please save regularly.**
This project will assure you have mastered the subjects covered in the statistics lessons. The hope is to have this project be as comprehensive of these topics as possible. Good luck!
## Table of Contents
- [Introduction](#intro)
- [Part I - Probability](#probability)
- [Part II - A/B Test](#ab_test)
- [Part III - Regression](#regression)
<a id='intro'></a>
### Introduction
A/B tests are very commonly performed by data analysts and data scientists. It is important that you get some practice working with the difficulties of these tests.
For this project, you will be working to understand the results of an A/B test run by an e-commerce website. Your goal is to work through this notebook to help the company understand if they should implement the new page, keep the old page, or perhaps run the experiment longer to make their decision.
**As you work through this notebook, follow along in the classroom and answer the corresponding quiz questions associated with each question.** The labels for each classroom concept are provided for each question. This will assure you are on the right track as you work through the project, and you can feel more confident in your final submission meeting the criteria. As a final check, assure you meet all the criteria on the [RUBRIC](https://review.udacity.com/#!/projects/37e27304-ad47-4eb0-a1ab-8c12f60e43d0/rubric).
<a id='probability'></a>
#### Part I - Probability
To get started, let's import our libraries.
```
import pandas as pd
import numpy as np
import random
import matplotlib.pyplot as plt
%matplotlib inline
#We are setting the seed to assure you get the same answers on quizzes as we set up
random.seed(42)
```
`1.` Now, read in the `ab_data.csv` data. Store it in `df`. **Use your dataframe to answer the questions in Quiz 1 of the classroom.**
a. Read in the dataset and take a look at the top few rows here:
```
#import the dataset
df = pd.read_csv('ab_data.csv')
#show the first 5 rows
df.head()
```
b. Use the cell below to find the number of rows in the dataset.
```
#show the total number of rows
df.shape[0]
```
c. The number of unique users in the dataset.
```
#calculate the number of unique user_id values
len(df['user_id'].unique())
```
d. The proportion of users converted.
```
#calculate the converted users
df['converted'].mean()
```
e. The number of times the `new_page` and `treatment` don't match.
```
#treatment in group will be called A and new_page in landing_page will be called B
df_A_not_B = df.query('group == "treatment" & landing_page != "new_page"')
df_B_not_A = df.query('group != "treatment" & landing_page == "new_page"')
#calculate thenumber of time new_page and treatment don't line up
len(df_A_not_B) + len(df_B_not_A)
```
f. Do any of the rows have missing values?
```
#view if there is any missing value
df.info()
```
**No missing Values**
`2.` For the rows where **treatment** does not match with **new_page** or **control** does not match with **old_page**, we cannot be sure if this row truly received the new or old page. Use **Quiz 2** in the classroom to figure out how we should handle these rows.
a. Now use the answer to the quiz to create a new dataset that meets the specifications from the quiz. Store your new dataframe in **df2**.
```
#remove the mismatch rows
df1 = df.drop(df[(df.group == "treatment") & (df.landing_page != "new_page")].index)
df2 = df1.drop(df1[(df1.group == "control") & (df1.landing_page != "old_page")].index)
# Double Check all of the correct rows were removed - this should be 0
df2[((df2['group'] == 'treatment') == (df2['landing_page'] == 'new_page')) == False].shape[0]
```
`3.` Use **df2** and the cells below to answer questions for **Quiz3** in the classroom.
a. How many unique **user_id**s are in **df2**?
```
#calculate the number of unique user_id values
len(df2['user_id'].unique())
```
b. There is one **user_id** repeated in **df2**. What is it?
```
#find out the duplicate user_id
df2.loc[df2.user_id.duplicated()]
```
c. What is the row information for the repeat **user_id**?
```
#find out the duplicate user_id
df2.loc[df2.user_id.duplicated()]
```
d. Remove **one** of the rows with a duplicate **user_id**, but keep your dataframe as **df2**.
```
# Now we remove the row with the duplicate user_id
df2 = df2.drop_duplicates(subset='user_id', keep='first')
# Check again that no duplicated user_ids remain
sum(df2.user_id.duplicated())
```
`4.` Use **df2** in the cells below to answer the quiz questions related to **Quiz 4** in the classroom.
a. What is the probability of an individual converting regardless of the page they receive?
```
# Probability of an individual converting regardless of the page they receive
df2['converted'].mean()
```
b. Given that an individual was in the `control` group, what is the probability they converted?
```
# The probability of an individual converting given that an individual was in the control group
control_group = len(df2.query('group=="control" and converted==1'))/len(df2.query('group=="control"'))
control_group
```
c. Given that an individual was in the `treatment` group, what is the probability they converted?
```
# The probability of an individual converting given that an individual was in the treatment group
treatment_group = len(df2.query('group=="treatment" and converted==1'))/len(df2.query('group=="treatment"'))
treatment_group
```
d. What is the probability that an individual received the new page?
```
# The probability of individual received new page
len(df2.query('landing_page=="new_page"'))/len(df2.index)
```
e. Consider your results from parts (a) through (d) above, and explain below whether you think there is sufficient evidence to conclude that the new treatment page leads to more conversions.
Based on the probabilities computed above, the conversion rates of the control and treatment groups are very close to each other, so there is not sufficient evidence at this point to conclude that the new treatment page leads to more conversions.
<a id='ab_test'></a>
### Part II - A/B Test
Notice that because of the time stamp associated with each event, you could technically run a hypothesis test continuously as each observation was observed.
However, then the hard question is do you stop as soon as one page is considered significantly better than another or does it need to happen consistently for a certain amount of time? How long do you run to render a decision that neither page is better than another?
These questions are the difficult parts associated with A/B tests in general.
`1.` For now, consider you need to make the decision just based on all the data provided. If you want to assume that the old page is better unless the new page proves to be definitely better at a Type I error rate of 5%, what should your null and alternative hypotheses be? You can state your hypothesis in terms of words or in terms of **$p_{old}$** and **$p_{new}$**, which are the converted rates for the old and new pages.
$$H_0: p_{new} - p_{old} \leq 0$$
$$H_1: p_{new} - p_{old} > 0$$
That is, we assume the old page is at least as good as the new page unless the new page proves to be better at a Type I error rate of 5%.
`2.` Assume under the null hypothesis, $p_{new}$ and $p_{old}$ both have "true" success rates equal to the **converted** success rate regardless of page - that is $p_{new}$ and $p_{old}$ are equal. Furthermore, assume they are equal to the **converted** rate in **ab_data.csv** regardless of the page. <br><br>
Use a sample size for each page equal to the ones in **ab_data.csv**. <br><br>
Perform the sampling distribution for the difference in **converted** between the two pages over 10,000 iterations of calculating an estimate from the null. <br><br>
Use the cells below to provide the necessary parts of this simulation. If this doesn't make complete sense right now, don't worry - you are going to work through the problems below to complete this problem. You can use **Quiz 5** in the classroom to make sure you are on the right track.<br><br>
a. What is the **conversion rate** for $p_{new}$ under the null?
```
p_new = len(df2.query( 'converted==1'))/len(df2.index)
p_new
```
b. What is the **conversion rate** for $p_{old}$ under the null? <br><br>
```
p_old = len(df2.query('converted==1'))/len(df2.index)
p_old
p_new = len(df2.query( 'converted==1'))/len(df2.index)
p_new
# probablity under null
p=np.mean([p_old,p_new])
p
# difference of p_new and p_old
p_diff=p_new-p_old
```
#### Under null p_old is equal to p_new
c. What is $n_{new}$, the number of individuals in the treatment group?
```
#calculate number of queries when landing_page is equal to new_page
n_new = len(df2.query('landing_page=="new_page"'))
#print n_new
n_new
```
d. What is $n_{old}$, the number of individuals in the control group?
```
#calculate number of queries when landing_page is equal to old_page
n_old = len(df2.query('landing_page=="old_page"'))
#print n_old
n_old
```
e. Simulate $n_{new}$ transactions with a conversion rate of $p_{new}$ under the null. Store these $n_{new}$ 1's and 0's in **new_page_converted**.
```
# simulate n_new transactions with a convert rate of p_new under the null
new_page_converted = np.random.choice([0, 1], n_new, p=[1-p_new, p_new])
```
f. Simulate $n_{old}$ transactions with a conversion rate of $p_{old}$ under the null. Store these $n_{old}$ 1's and 0's in **old_page_converted**.
```
# simulate n_old transactions with a convert rate of p_old under the null
old_page_converted = np.random.choice([0, 1], n_old, p=[1-p_old, p_old])
```
g. Find $p_{new}$ - $p_{old}$ for your simulated values from part (e) and (f).
```
# difference between the simulated conversion rates from parts (e) and (f)
obs_diff = new_page_converted.mean() - old_page_converted.mean()
obs_diff
```
h. Create 10,000 $p_{new}$ - $p_{old}$ values using the same simulation process you used in parts (a) through (g) above. Store all 10,000 values in a NumPy array called **p_diffs**.
```
# Create sampling distribution for difference in p_new-p_old simulated values
# with boostrapping
p_diffs = []
for i in range(10000):
# 1st parameter dictates the choices you want. In this case [1, 0]
p_new1 = np.random.choice([1, 0],n_new,replace = True,p = [p_new, 1-p_new])
p_old1 = np.random.choice([1, 0],n_old,replace = True,p = [p_old, 1-p_old])
p_new2 = p_new1.mean()
p_old2 = p_old1.mean()
p_diffs.append(p_new2-p_old2)
#_p_diffs = np.array(_p_diffs)
```
i. Plot a histogram of the **p_diffs**. Does this plot look like what you expected? Use the matching problem in the classroom to assure you fully understand what was computed here.
```
p_diffs=np.array(p_diffs)
#histogram of p_diff
plt.hist(p_diffs)
plt.title('Graph of p_diffs')#title of graphs
plt.xlabel('Page difference') # x-label of graphs
plt.ylabel('Count') # y-label of graphs
```
j. What proportion of the **p_diffs** are greater than the actual difference observed in **ab_data.csv**?
```
#histogram of p_diffs
plt.hist(p_diffs);
plt.title('Graph of p_diffs') #title of graph
plt.xlabel('Page difference') # x-label of graph
plt.ylabel('Count') # y-label of graph
# actual difference observed in ab_data.csv
actual_diff = treatment_group - control_group
plt.axvline(x=actual_diff, color='r');
# proportion of simulated differences greater than the actual observed difference (the p-value)
(p_diffs > actual_diff).mean()
```
k. Please explain using the vocabulary you've learned in this course what you just computed in part **j.** What is this value called in scientific studies? What does this value mean in terms of whether or not there is a difference between the new and old pages?
89.57% is the proportion of the p_diffs that are greater than the actual difference observed in ab_data.csv. In scientific studies this value is called the p-value. It means that we cannot reject the null hypothesis and that we do not have sufficient evidence that the new_page has a higher conversion rate than the old_page.
l. We could also use a built-in to achieve similar results. Though using the built-in might be easier to code, the above portions are a walkthrough of the ideas that are critical to correctly thinking about statistical significance. Fill in the below to calculate the number of conversions for each page, as well as the number of individuals who received each page. Let `n_old` and `n_new` refer the the number of rows associated with the old page and new pages, respectively.
```
import statsmodels.api as sm
convert_old = len(df2.query('converted==1 and landing_page=="old_page"')) #rows converted with old_page
convert_new = len(df2.query('converted==1 and landing_page=="new_page"')) #rows converted with new_page
n_old = len(df2.query('landing_page=="old_page"')) #rows_associated with old_page
n_new = len(df2.query('landing_page=="new_page"')) #rows associated with new_page
n_new
```
m. Now use `stats.proportions_ztest` to compute your test statistic and p-value. [Here](https://docs.w3cub.com/statsmodels/generated/statsmodels.stats.proportion.proportions_ztest/) is a helpful link on using the built in.
```
#Computing z_score and p_value
z_score, p_value = sm.stats.proportions_ztest([convert_old,convert_new], [n_old, n_new],alternative='smaller')
#display z_score and p_value
print(z_score,p_value)
```
n. What do the z-score and p-value you computed in the previous question mean for the conversion rates of the old and new pages? Do they agree with the findings in parts **j.** and **k.**?
```
from scipy.stats import norm
norm.cdf(z_score) #how significant our z_score is
norm.ppf(1-(0.05)) #critical value of 95% confidence
```
The z-score and the p-value mean that one does not reject the null. The null is that the converted rate of the old_page is the same as or greater than the converted rate of the new_page. The p-value is 0.91, which is higher than the 0.05 significance level. That means we cannot be confident at a 95% confidence level that the converted rate of the new_page is larger than that of the old_page.
<a id='regression'></a>
### Part III - A regression approach
`1.` In this final part, you will see that the result you achieved in the A/B test in Part II above can also be achieved by performing regression.<br><br>
a. Since each row is either a conversion or no conversion, what type of regression should you be performing in this case?
The dependent variable is a binary variable (converted vs not converted). Thus, you need to use a logistic regression.
b. The goal is to use **statsmodels** to fit the regression model you specified in part **a.** to see if there is a significant difference in conversion based on which page a customer receives. However, you first need to create in df2 a column for the intercept, and create a dummy variable column for which page each user received. Add an **intercept** column, as well as an **ab_page** column, which is 1 when an individual receives the **treatment** and 0 if **control**.
```
#adding an intercept column
df2['intercept'] = 1
#Create dummy variable column
df2['ab_page'] = pd.get_dummies(df2['group'])['treatment']
df2.head()
```
c. Use **statsmodels** to instantiate your regression model on the two columns you created in part b., then fit the model using the two columns you created in part **b.** to predict whether or not an individual converts.
```
import statsmodels.api as sm
model=sm.Logit(df2['converted'],df2[['intercept','ab_page']])
results=model.fit()
```
d. Provide the summary of your model below, and use it as necessary to answer the following questions.
```
results.summary()
```
e. What is the p-value associated with **ab_page**? Why does it differ from the value you found in **Part II**?<br><br> **Hint**: What are the null and alternative hypotheses associated with your regression model, and how do they compare to the null and alternative hypotheses in **Part II**?
The p-value associated with ab_page is 0.19. It is higher than 0.05. Thus, the coefficient is not significant.
Alternative hypothesis from part II: the conversion rate of the old_page is less than the conversion rate of the new_page. This assumes a one-tailed test. In Part III, the alternative hypothesis can be formulated as follows: (1) The landing_page type influences (positively or negatively) the conversion rate or (2) the conversion rate of the old_page is different to the conversion rate of the new_page. This assumes a two-tailed test.
In both cases, the results do not sufficiently support the alternative hypothesis.
The p-values differ substantially: in Part II the one-tailed p-value is 0.91, while the regression model (which also includes an intercept) reports a two-tailed p-value for the ab_page coefficient, which is why it comes out around 0.19.
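One way to see the one-tailed vs. two-tailed relationship numerically (a rough sketch that reuses the `z_score` computed in part m.; the exact values depend on that cell's output):

```
from scipy.stats import norm
# One-tailed p-value, as returned by proportions_ztest(alternative='smaller') in Part II
p_one_tailed = norm.cdf(z_score)                  # roughly 0.91
# Two-tailed p-value, which is what the regression reports for the ab_page coefficient
p_two_tailed = 2 * (1 - norm.cdf(abs(z_score)))   # roughly 0.19
print(p_one_tailed, p_two_tailed)
```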
f. Now, you are considering other things that might influence whether or not an individual converts. Discuss why it is a good idea to consider other factors to add into your regression model. Are there any disadvantages to adding additional terms into your regression model?
It is a good idea to consider other factors in order to identify other potential influences on the conversion rate.
A disadvantage is that the model gets more complex.
g. Now along with testing if the conversion rate changes for different pages, also add an effect based on which country a user lives in. You will need to read in the **countries.csv** dataset and merge together your datasets on the appropriate rows. [Here](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.join.html) are the docs for joining tables.
Does it appear that country had an impact on conversion? Don't forget to create dummy variables for these country columns - **Hint: You will need two columns for the three dummy variables.** Provide the statistical output as well as a written response to answer this question.
```
# Store Countries.csv data in dataframe
countries = pd.read_csv('countries.csv')
countries.head()
#Inner join two datas
new = countries.set_index('user_id').join(df2.set_index('user_id'), how = 'inner')
new.head()
#adding dummy variables with 'CA' as the baseline
new[['US', 'UK']] = pd.get_dummies(new['country'])[['US', "UK"]]
new.head()
new['US_ab_page'] = new['US']*new['ab_page']
new.head()
new['UK_ab_page'] = new['UK']*new['ab_page']
new.head()
new['intercept'] = 1
logit3 = sm.Logit(new['converted'], new[['intercept', 'ab_page', 'US', 'UK', 'US_ab_page', 'UK_ab_page']])
logit3
#Check the result
results = logit3.fit()
#Check the result
results.summary()
```
h. Though you have now looked at the individual factors of country and page on conversion, we would now like to look at an interaction between page and country to see if there are significant effects on conversion. Create the necessary additional columns, and fit the new model.
Provide the summary results, and your conclusions based on the results.
**Conclusions:** None of the variables have significant p-values. Therefore, we will fail to reject the null and conclude that there is not sufficient evidence to suggest that there is an interaction between country and page received that will predict whether a user converts or not.
In the larger picture, based on the available information, we do not have sufficient evidence to suggest that the new page results in more conversions than the old page.
<a id='conclusions'></a>
## Finishing Up
> Congratulations! You have reached the end of the A/B Test Results project! You should be very proud of all you have accomplished!
> **Tip**: Once you are satisfied with your work here, check over your report to make sure that it satisfies all the areas of the rubric (found on the project submission page at the end of the lesson). You should also probably remove all of the "Tips" like this one so that the presentation is as polished as possible.
## Directions to Submit
> Before you submit your project, you need to create a .html or .pdf version of this notebook in the workspace here. To do that, run the code cell below. If it worked correctly, you should get a return code of 0, and you should see the generated .html file in the workspace directory (click on the orange Jupyter icon in the upper left).
> Alternatively, you can download this report as .html via the **File** > **Download as** submenu, and then manually upload it into the workspace directory by clicking on the orange Jupyter icon in the upper left, then using the Upload button.
> Once you've done this, you can submit your project by clicking on the "Submit Project" button in the lower right here. This will create and submit a zip file with this .ipynb doc and the .html or .pdf version you created. Congratulations!
```
from subprocess import call
call(['python', '-m', 'nbconvert', 'Analyze_ab_test_results_notebook.ipynb'])
```
| github_jupyter |
# Chapter 2: Differential and Integral Calculus
## 2.1 Functions
```
# Import the required libraries
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
# For PDF output
from IPython.display import set_matplotlib_formats
set_matplotlib_formats('png', 'pdf')
def f(x):
return x**2 +1
f(1)
f(2)
```
### Figure 2-2: Plotting the points (x, f(x)) and the graph of y = f(x)
```
x = np.linspace(-3, 3, 601)
y = f(x)
x1 = np.linspace(-3, 3, 7)
y1 = f(x1)
plt.figure(figsize=(6,6))
plt.ylim(-2,10)
plt.plot([-3,3],[0,0],c='k')
plt.plot([0,0],[-2,10],c='k')
plt.scatter(x1,y1,c='k',s=50)
plt.grid()
plt.xlabel('x',fontsize=14)
plt.ylabel('y',fontsize=14)
plt.show()
x2 = np.linspace(-3, 3, 31)
y2 = f(x2)
plt.figure(figsize=(6,6))
plt.ylim(-2,10)
plt.plot([-3,3],[0,0],c='k')
plt.plot([0,0],[-2,10],c='k')
plt.scatter(x2,y2,c='k',s=50)
plt.grid()
plt.xlabel('x',fontsize=14)
plt.ylabel('y',fontsize=14)
plt.show()
plt.figure(figsize=(6,6))
plt.plot(x,y,c='k')
plt.ylim(-2,10)
plt.plot([-3,3],[0,0],c='k')
plt.plot([0,0],[-2,10],c='k')
plt.scatter([1,2],[2,5],c='k',s=50)
plt.grid()
plt.xlabel('x',fontsize=14)
plt.ylabel('y',fontsize=14)
plt.show()
```
## 2.2 Composite Functions and Inverse Functions
### Figure 2-6: Graph of an inverse function
```
def f(x):
return(x**2 + 1)
def g(x):
return(np.sqrt(x - 1))
xx1 = np.linspace(0.0, 4.0, 200)
xx2 = np.linspace(1.0, 4.0, 200)
yy1 = f(xx1)
yy2 = g(xx2)
plt.figure(figsize=(6,6))
plt.xlabel('$x$',fontsize=14)
plt.ylabel('$y$',fontsize=14)
plt.ylim(-2.0, 4.0)
plt.xlim(-2.0, 4.0)
plt.grid()
plt.plot(xx1,yy1, linestyle='-', c='k', label='$y=x^2+1$')
plt.plot(xx2,yy2, linestyle='-.', c='k', label='$y=\sqrt{x-1}$')
plt.plot([-2,4],[-2,4], color='black')
plt.plot([-2,4],[0,0], color='black')
plt.plot([0,0],[-2,4],color='black')
plt.legend(fontsize=14)
plt.show()
```
## 2.3 Derivatives and Limits
### Figure 2-7: Zooming in on the graph of a function
```
from matplotlib import pyplot as plt
import numpy as np
def f(x):
return(x**3 - x)
delta = 2.0
x = np.linspace(0.5-delta, 0.5+delta, 200)
y = f(x)
fig = plt.figure(figsize=(6,6))
plt.ylim(-3.0/8.0-delta, -3.0/8.0+delta)
plt.xlim(0.5-delta, 0.5+delta)
plt.plot(x, y, 'b-', lw=1, c='k')
plt.scatter([0.5], [-3.0/8.0])
plt.xlabel('x',fontsize=14)
plt.ylabel('y',fontsize=14)
plt.grid()
plt.title('delta = %.4f' % delta, fontsize=14)
plt.show()
delta = 0.2
x = np.linspace(0.5-delta, 0.5+delta, 200)
y = f(x)
fig = plt.figure(figsize=(6,6))
plt.ylim(-3.0/8.0-delta, -3.0/8.0+delta)
plt.xlim(0.5-delta, 0.5+delta)
plt.plot(x, y, 'b-', lw=1, c='k')
plt.scatter([0.5], [-3.0/8.0])
plt.xlabel('x',fontsize=14)
plt.ylabel('y',fontsize=14)
plt.grid()
plt.title('delta = %.4f' % delta, fontsize=14)
plt.show()
delta = 0.01
x = np.linspace(0.5-delta, 0.5+delta, 200)
y = f(x)
fig = plt.figure(figsize=(6,6))
plt.ylim(-3.0/8.0-delta, -3.0/8.0+delta)
plt.xlim(0.5-delta, 0.5+delta)
plt.plot(x, y, 'b-', lw=1, c='k')
plt.scatter(0.5, -3.0/8.0)
plt.xlabel('x',fontsize=14)
plt.ylabel('y',fontsize=14)
plt.grid()
plt.title('delta = %.4f' % delta, fontsize=14)
plt.show()
```
### Figure 2-8: Slope of the line joining two points on the graph of a function
```
delta = 2.0
x = np.linspace(0.5-delta, 0.5+delta, 200)
x1 = 0.6
x2 = 1.0
y = f(x)
fig = plt.figure(figsize=(6,6))
plt.ylim(-1, 0.5)
plt.xlim(0, 1.5)
plt.plot(x, y, 'b-', lw=1, c='k')
plt.scatter([x1, x2], [f(x1), f(x2)], c='k', lw=1)
plt.plot([x1, x2], [f(x1), f(x2)], c='k', lw=1)
plt.plot([x1, x2, x2], [f(x1), f(x1), f(x2)], c='k', lw=1)
plt.tick_params(labelbottom=False, labelleft=False, labelright=False, labeltop=False)
plt.tick_params(color='white')
plt.show()
```
### Figure 2-10: Equation of a tangent line
```
def f(x):
return(x**2 - 4*x)
def g(x):
return(-2*x -1)
x = np.linspace(-2, 6, 500)
fig = plt.figure(figsize=(6,6))
plt.scatter([1],[-3],c='k')
plt.plot(x, f(x), 'b-', lw=1, c='k')
plt.plot(x, g(x), 'b-', lw=1, c='b')
plt.plot([x.min(), x.max()], [0, 0], lw=2, c='k')
plt.plot([0, 0], [g(x).min(), f(x).max()], lw=2, c='k')
plt.grid(lw=2)
plt.tick_params(labelbottom=False, labelleft=False, labelright=False, labeltop=False)
plt.tick_params(color='white')
plt.xlabel('X')
plt.show()
```
## 2.4 Local Maxima and Minima
### Figure 2-11: Graph of y = x^3 - 3x and its local maximum and minimum
```
def f1(x):
return(x**3 - 3*x)
x = np.linspace(-3, 3, 500)
y = f1(x)
fig = plt.figure(figsize=(6,6))
plt.ylim(-4, 4)
plt.xlim(-3, 3)
plt.plot(x, y, 'b-', lw=1, c='k')
plt.plot([0,0],[-4,4],c='k')
plt.plot([-3,3],[0,0],c='k')
plt.grid()
plt.show()
```
### Figure 2-12: An example that is neither a local maximum nor a minimum (the graph of y = x^3)
```
def f2(x):
return(x**3)
x = np.linspace(-3, 3, 500)
y = f2(x)
fig = plt.figure(figsize=(6,6))
plt.ylim(-4, 4)
plt.xlim(-3, 3)
plt.plot(x, y, 'b-', lw=1, c='k')
plt.plot([0,0],[-4,4],c='k')
plt.plot([-3,3],[0,0],c='k')
plt.grid()
plt.show()
```
## 2.7 Differentiating Composite Functions
### Figure 2-14: Derivative of an inverse function
```
# Derivative of an inverse function
def f(x):
return(x**2 + 1)
def g(x):
return(np.sqrt(x - 1))
xx1 = np.linspace(0.0, 4.0, 200)
xx2 = np.linspace(1.0, 4.0, 200)
yy1 = f(xx1)
yy2 = g(xx2)
plt.figure(figsize=(6,6))
plt.xlabel('$x$',fontsize=14)
plt.ylabel('$y$',fontsize=14)
plt.ylim(-2.0, 4.0)
plt.xlim(-2.0, 4.0)
plt.grid()
plt.plot(xx1,yy1, linestyle='-', color='blue')
plt.plot(xx2,yy2, linestyle='-', color='blue')
plt.plot([-2,4],[-2,4], color='black')
plt.plot([-2,4],[0,0], color='black')
plt.plot([0,0],[-2,4],color='black')
plt.show()
```
## 2.9 Integration
### Figure 2-15: Relationship between the area function S(x) and f(x)
```
def f(x) :
return x**2 + 1
xx = np.linspace(-4.0, 4.0, 200)
yy = f(xx)
plt.figure(figsize=(6,6))
plt.xlim(-2,2)
plt.ylim(-1,4)
plt.plot(xx, yy)
plt.plot([-2,2],[0,0],c='k',lw=1)
plt.plot([0,0],[-1,4],c='k',lw=1)
plt.plot([0,0],[0,f(0)],c='b')
plt.plot([1,1],[0,f(1)],c='b')
plt.plot([1.5,1.5],[0,f(1.5)],c='b')
plt.plot([1,1.5],[f(1),f(1)],c='b')
plt.tick_params(labelbottom=False, labelleft=False, labelright=False, labeltop=False)
plt.tick_params(color='white')
plt.show()
```
### Figure 2-16: Area under a graph and the definite integral
```
plt.figure(figsize=(6,6))
plt.xlim(-2,2)
plt.ylim(-1,4)
plt.plot(xx, yy)
plt.plot([-2,2],[0,0],c='k',lw=1)
plt.plot([0,0],[-1,4],c='k',lw=1)
plt.plot([0,0],[0,f(0)],c='b')
plt.plot([1,1],[0,f(1)],c='b')
plt.plot([1.5,1.5],[0,f(1.5)],c='b')
plt.tick_params(labelbottom=False, labelleft=False, labelright=False, labeltop=False)
plt.tick_params(color='white')
plt.show()
```
### Figure 2-17: Relationship between integration and area
```
def f(x) :
return x**2 + 1
x = np.linspace(-1.0, 2.0, 200)
y = f(x)
N = 10
xx = np.linspace(0.5, 1.5, N+1)
yy = f(xx)
print(xx)
plt.figure(figsize=(6,6))
plt.xlim(-1,2)
plt.ylim(-1,4)
plt.plot(x, y)
plt.plot([-1,2],[0,0],c='k',lw=2)
plt.plot([0,0],[-1,4],c='k',lw=2)
plt.plot([0.5,0.5],[0,f(0.5)],c='b')
plt.plot([1.5,1.5],[0,f(1.5)],c='b')
plt.bar(xx[:-1], yy[:-1], align='edge', width=1/N*0.9)
plt.tick_params(labelbottom=False, labelleft=False, labelright=False, labeltop=False)
plt.tick_params(color='white')
plt.grid()
plt.show()
```
| github_jupyter |
```
import random
class Coin:
def __init__(self, rare = False, clean = True, heads = True, **kwargs):
for key,value in kwargs.items():
setattr(self,key,value)
self.is_rare = rare
self.is_clean = clean
self.heads = heads
if self.is_rare:
self.value = self.original_value * 1.25
else:
self.value = self.original_value
        if self.is_clean:  # use the boolean flag; self.clean is the method defined below
self.colour = self.clean_colour
else:
self.colour = self.rusty_colour
def rust(self):
self.colour = self.rusty_colour
def clean(self):
self.colour = self.clean_colour
def __del__(self):
print("Coin spent!")
def flip(self):
heads_options = [True, False]
choice = random.choice(heads_options)
self.heads = choice
def __str__(self):
if self.original_value >= 1:
return "£{} coin".format(int(self.original_value))
else:
return "{}p Coin".format(int(self.original_value * 100))
class One_Pence(Coin):
def __init__(self):
data = {
"original_value": 0.01,
"clean_colour": "bronze",
"rusty_colour": "brownish",
"num_edges": 1,
"diameter": 20.3, #mm
"thickness": 1.52, #mm
"mass": 3.56, #grams
}
super().__init__(**data)
class Two_Pence(Coin):
def __init__(self):
data = {
"original_value": 0.02,
"clean_colour": "bronze",
"rusty_colour": "brownish",
"num_edges": 1,
"diameter": 25.9, #mm
"thickness": 1.85, #mm
"mass": 7.12, #grams
}
super().__init__(**data)
class Five_Pence(Coin):
def __init__(self):
data = {
"original_value": 0.05,
"clean_colour": "silver",
"rusty_colour": None,
"num_edges": 1,
"diameter": 18.0, #mm
"thickness": 1.77, #mm
"mass": 3.25, #grams
}
super().__init__(**data)
def rust(self):
self.colour = self.clean_colour
def clean(self):
self.colour = self.clean_colour
class Ten_Pence(Coin):
def __init__(self):
data = {
"original_value": 0.10,
"clean_colour": "silver",
"rusty_colour": None,
"num_edges": 1,
"diameter": 24.5, #mm
"thickness": 1.85, #mm
"mass": 6.50, #grams
}
super().__init__(**data)
def rust(self):
self.colour = self.clean_colour
def clean(self):
self.colour = self.clean_colour
class Twenty_Pence(Coin):
def __init__(self):
data = {
"original_value": 0.20,
"clean_colour": "silver",
"rusty_colour": None,
"num_edges": 7,
"diameter": 21.4, #mm
"thickness": 1.7, #mm
"mass": 5.00, #grams
}
super().__init__(**data)
def rust(self):
self.colour = self.clean_colour
def clean(self):
self.colour = self.clean_colour
class Fifty_Pence(Coin):
def __init__(self):
data = {
"original_value": 0.50,
"clean_colour": "silver",
"rusty_colour": None,
"num_edges": 7,
"diameter": 27.3, #mm
"thickness": 1.78, #mm
"mass": 8.00, #grams
}
super().__init__(**data)
def rust(self):
self.colour = self.clean_colour
def clean(self):
self.colour = self.clean_colour
class One_Pound(Coin):
def __init__(self):
data = {
"original_value": 1.00,
"clean_colour": "gold",
"rusty_colour": "greenish",
"num_edges": 1,
"diameter": 22.5, #mm
"thickness": 3.15, #mm
"mass": 9.5, #grams
}
super().__init__(**data)
class Two_Pound(Coin):
def __init__(self):
data = {
"original_value": 2.00,
"clean_colour": "gold & silver",
"rusty_colour": "greenish",
"num_edges": 1,
"diameter": 28.4, #mm
"thickness": 2.50, #mm
"mass": 12.00, #grams
}
super().__init__(**data)
coins =[One_Pence(), Two_Pence(), Five_Pence(), Ten_Pence(), Twenty_Pence(),
Fifty_Pence(), One_Pound(), Two_Pound()]
for coin in coins:
arguments = [coin, coin.colour, coin.value, coin.diameter, coin.thickness,
coin.num_edges, coin.mass]
string = "{} - Colour: {}, value:{}, diameter(mm):{}, thickness(mm):{}, number of edges:{}, mass(g):{}".format(*arguments)
print(string)
```
| github_jupyter |
# VacationPy
----
#### Note
* Keep an eye on your API usage. Use https://developers.google.com/maps/reporting/gmp-reporting as reference for how to monitor your usage and billing.
* Instructions have been included for each segment. You do not have to follow them exactly, but they are included to help you think through the steps.
```
# Dependencies and Setup
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import requests
import gmaps
import os
import time
from urllib.parse import urlencode
# Import API key
from api_keys import g_key
# Configure gmaps
gmaps.configure(api_key=g_key)
```
### Store Part I results into DataFrame
* Load the csv exported in Part I to a DataFrame
```
# Create vacation dataframe
#clean_city_data_df.to_csv('../Resources/city_output.csv')
vacation_df = pd.read_csv('../Resources/city_output.csv')
#vacation_df = vacation_df.drop(columns="Unnamed: 0")
vacation_df.head()
```
### Humidity Heatmap
* Configure gmaps.
* Use the Lat and Lng as locations and Humidity as the weight.
* Add Heatmap layer to map.
```
# Store latitude and longitude in locations
locations = vacation_df[["lat", "long"]]
weights = vacation_df["humidity"].astype(float)
fig = gmaps.figure()
# Create heat layer
heat_layer = gmaps.heatmap_layer(locations, weights=weights,
dissipating=False, max_intensity=10,
point_radius=300)
fig
```
### Create new DataFrame fitting weather criteria
* Narrow down the cities to fit weather conditions.
* Drop any rows will null values.
```
#vacation_df.dropna(inplace = True) max temp, cloudiness = 0, wind speed <10, 70> <80
city_weather_df = vacation_df.copy()
city_weather_df.dropna(inplace = True)
city_weather_df
```
### Hotel Map
* Store into variable named `hotel_df`.
* Add a "Hotel Name" column to the DataFrame.
* Set parameters to search for hotels within 5000 meters.
* Hit the Google Places API for each city's coordinates.
* Store the first Hotel result into the DataFrame.
* Plot markers on top of the heatmap.
```
#Search for hotel in cities and assign to a new column in hotel_df
hotelname = []
hotel_df = city_weather_df.copy()
params = {}
base_url = "https://maps.googleapis.com/maps/api/place/nearbysearch/json?"
for index, row in hotel_df.iterrows():
# get city name, lat, lng from df
lat = row["lat"]
lng = row["long"]
city_name = row["city"]
# add keyword to params dict
params["location"] = f"{lat},{lng}"
params["radius"] = "5000"
params["type"] = "hotel"
params['keyword'] = 'hotel'
params["key"] = gkey
url_params = urlencode(params)
# assemble url and make API request
#print(f"Retrieving Results for Index {index}: {city_name}.")
query_string = base_url+url_params
#pprint(query_string)
# save the hotel name to dataframe
try:
response = requests.get(query_string).json()
# extract results
results = response['results']
#print(f"Closest hotel in {city_name} is {results[0]['name']}.")
hotel_df.loc[index, "Hotel Name"] = results[0]['name']
time.sleep(.2)
# if there is no hotel available, show missing field
except (KeyError, IndexError):
print(f"{index} - There isn't any hotel in a 5000m radius.")
#print("------------")
# Print end of search once searching is completed
#print("-------End of Search-------")
hotel_df
hotel_df = hotel_df.dropna()
# Narrow hotel_df down to the weather criteria below; note that maxtemp is in Kelvin,
# so the 270-290 K bounds used here are wider than the stated 70-80 F (roughly 294-300 K).
# 1. A max temperature lower than 80 degrees but higher than 70.
# 2. Wind speed less than 10 mph.
# 3. Zero cloudiness.
hotel_df = hotel_df.loc[(hotel_df.maxtemp < 290) & (hotel_df.maxtemp > 270)]
hotel_df = hotel_df.loc[hotel_df.windspeed < 10]
hotel_df = hotel_df.loc[hotel_df.cloudiness == 0]
hotel_df
#NOTE: Do not change any of the code in this cell
# Using the template add the hotel marks to the heatmap
info_box_template = """
<dl>
<dt>Name</dt><dd>{Hotel Name}</dd>
<dt>City</dt><dd>{city}</dd>
<dt>Country</dt><dd>{country}</dd>
</dl>
"""
# Store the DataFrame Row
# NOTE: be sure to update with your DataFrame name
hotel_info = [info_box_template.format(**row) for index, row in hotel_df.iterrows()]
locations = hotel_df[["lat", "long"]]
# Create a map using state centroid coordinates to set markers
marker_locations = locations
# Create a marker_layer
#fig = gmaps.figure()
markers = gmaps.marker_layer(marker_locations, info_box_content=hotel_info)
fig.add_layer(markers)
fig
```
| github_jupyter |
```
import numpy as np
import pandas as pd
import matplotlib as mpl
import matplotlib.pyplot as plt
from mpl_toolkits.basemap import Basemap as Basemap
from matplotlib.patches import Polygon
from matplotlib.colorbar import ColorbarBase
%config InlineBackend.figure_format = 'retina'
```
To install basemap
`conda install -c conda-forge proj4`
`conda install -c anaconda basemap`
In this notebook we will preprocess data to be able to compute death rates by state due to covid. You will need this data for plotting a map in hw3.
## Dataframes
A DataFrame object is a two-dimensional matrix with rows and columns. Each column can have different data types, but all values within a column must be of the same data type. The columns behave like [series objects](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.html).
Data frames columns are ordered and the name-to-column mapping is stored in an index. Data frames also have an index for the rows, just like a series has an index into the values of the series. So, a data frame has two indexes which lets us zero in, for example, on a specific element using row and column index values.
Let's use `pd.read_csv` to read a csv file with all covid cases per state. Taken from the [nytimes github]( https://github.com/nytimes/covid-19-data). `.head()` gives the top 5 rows of the dataframe.
```
covid = pd.read_csv("data/us-states.csv")
covid.head()
```
This dataframe has population estimates and a lot of other info. See `data/nst-est2019-alldata.pdf` for a description of all columns.
```
population = pd.read_csv("data/nst-est2019-alldata.csv")
population.head()
## let's look at the columns. I am looking for the population of 2019 per state.
#list(population.columns)
```
Always look at the shapes of objects before and after you manipulate them. You will get `(number of rows, number of columns)`. How many states are there in the United States of America?
```
covid.shape, population.shape
covid.describe()
# note that the counts are different because there are missing values in some columns
# covid["confirmed_cases"]
covid["confirmed_cases"].isnull()
# count how many rows are null?
(covid["confirmed_cases"].isnull() == True).sum()
# similarly
(covid["confirmed_cases"].isnull()).sum()
# is.na() also works
(covid["confirmed_cases"].isna()).sum()
# take first 10 elements of the column "confirmed_cases"
c = covid["confirmed_cases"][:10]
c
# be careful on how different functions behave with respect to NAs
len(c), c.count(), c.sum(), c.sum(skipna=False), np.sum(c), sum(c)
# if you want to fill the NAs you can do
covid = covid.fillna(-1)
covid.head()
```
### Exercise 1
How would you fill NAs with different values for different columns? A possible approach is sketched below.
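A minimal sketch (the fill values below are arbitrary placeholders, and it only has an effect if NaNs are still present):

```
# fillna accepts a dict mapping column name -> fill value
covid = covid.fillna({"confirmed_cases": -1, "deaths": 0})
```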
## Subsetting and merging dataframes
We need info about deaths from the covid dataframe and info about population from other dataframe. Let's keep just that. Also we need a way to combine (merge) the two dataframes. The column `fips` is a unique identifier for the state so I will keep that. Also the state name can be useful.
```
covid.head()
# selecting columns
covid = covid[["state", "fips", "deaths"]]
covid.head()
population.head()
# from the pdf we have the following info
# STATE = State FIPS code
# NAME = State name
# POPESTIMATE2019 = 7/1/2019 resident total population estimate
population = population[["STATE", "NAME", "POPESTIMATE2019"]]
# show first 10 rows
population.iloc[:10]
# we are not interested in top values of the population table (aggregates)
population = population.iloc[5:] # all rows after 5
population.head()
covid.shape, population.shape
```
There are various ways to merge two dataframes. At the moment we want to preserve all the data.
`outer`: use union of keys from both frames
```
# Can we merge on state name? Here we merge on the FIPS code instead ('fips' vs 'STATE')
rates = covid.merge(population, how="outer", left_on='fips', right_on='STATE')
rates.iloc[:15]
# let's look at rows with NAs
na_index = rates["POPESTIMATE2019"].isnull()
rates[na_index]
## Let's drop them
rates = rates.dropna()
rates.shape
# cleaning up some more
rates = rates[["state", "fips", "deaths", "POPESTIMATE2019"]]
rates["rates"] = 1000*rates["deaths"]/rates["POPESTIMATE2019"] # set a new column
rates
# sorting by rates
rates = rates.sort_values(by=["rates"])
#rates
## mean value of the rate column
rates["rates"].mean(), rates["rates"].median()
rates["rates"].quantile(q=[0.1, 0.25, 0.5, 0.75, 0.9])
# if you want 7 groups of color you need 8 quantiles
q = np.linspace(0, 1, 8, endpoint=True) # equidistant numbers between 0 and 1
q
# compute quantile of covid rates
rates["rates"].quantile(q=q)
qq = rates["rates"].quantile(q=q)
type(qq) # what is the type?
type(qq.values) # I prefer working with numpy arrays
boundaries = rates["rates"].quantile(q=q).values
boundaries
## let's define a new ordinal variable based on the quantiles of the rates
rates["color"] = pd.qcut(rates["rates"], 7)
rates["color"]
rates["color"].unique()
## let's directly put colors here for our plot
colors = ["#ffffd4", "#fee391", "#fec44f", "#fe9929", "#ec7014", "#cc4c02", "#8c2d04"] # from colorbrewer2.org
rates["color"] = pd.qcut(rates["rates"], 7, labels=colors)
rates["color"].values
```
## Dictionary of color per state
```
# iterate through rows
for i, row in rates.iterrows():
print(row["state"], row["color"])
# make a dictionary of color per state
state2color = {}
for i, row in rates.iterrows():
state2color[row["state"]] = row["color"]
# here is a shortcut of the same
# dictionary comprehension
state2color = {row["state"]: row["color"] for i, row in rates.iterrows()}
```
## Making a map in matplotlib
Based on these examples
https://github.com/matplotlib/basemap/blob/master/examples/fillstates.py
https://stackoverflow.com/questions/39742305/how-to-use-basemap-python-to-plot-us-with-50-states
```
# Lambert Conformal map of lower 48 states.
m = Basemap(llcrnrlon=-119,llcrnrlat=22,urcrnrlon=-64,urcrnrlat=49,
projection='lcc',lat_1=33,lat_2=45,lon_0=-95)
# load the shapefile, use the name 'states'
shape = m.readshapefile('st99_d00', name='states', drawbounds=True)
ax = plt.gca() # get current axes instance
# list of states in the data
states = [shapedict['NAME'] for shapedict in m.states_info]
for i, seg in enumerate(m.states):
state = states[i]
color = state2color[state]
poly = Polygon(seg, facecolor=color, edgecolor=color)
ax.add_patch(poly)
states = [shapedict['NAME'] for shapedict in m.states_info] # list comprehension
#states
```
## How to make a column bar
```
colors = ["#ffffd4", "#fee391", "#fec44f", "#fe9929", "#ec7014", "#cc4c02", "#8c2d04"]
bounds = [1,2,3,4,5,6,7,8]
boundaries = [0.055, 0.139, 0.23, 0.316, 0.387, 0.588, 0.832, 1.804]
fig, ax = plt.subplots(figsize=(1, 8))
fig.subplots_adjust(bottom=0.5)
cmap = mpl.colors.ListedColormap(colors)
cb2 = ColorbarBase(ax, cmap=cmap,
boundaries=bounds,
ticks=bounds,
label=boundaries,
orientation='vertical')
cb2.set_label('Covid rates')
cb2.set_ticklabels(boundaries)
```
## Put it together
```
# rounding
boundaries = [0.00, 0.14, 0.23, 0.32, 0.39, 0.59, 0.83, 1.80]
# Lambert Conformal map of lower 48 states.
fig, ax = plt.subplots(figsize=(12,6))
m = Basemap(llcrnrlon=-119,llcrnrlat=22,urcrnrlon=-64,urcrnrlat=49,
projection='lcc',lat_1=33,lat_2=45,lon_0=-95)
# load the shapefile, use the name 'states'
shape = m.readshapefile('st99_d00', name='states', drawbounds=True,
linewidth=0.2,color='#808080')
# list of states in the data
states = [shapedict['NAME'] for shapedict in m.states_info]
for i, seg in enumerate(m.states):
state = states[i]
color = state2color[state]
poly = Polygon(seg, facecolor=color, edgecolor=color)
ax.add_patch(poly)
ax.spines["top"].set_visible(False)
ax.spines["right"].set_visible(False)
ax.spines["bottom"].set_visible(False)
ax.spines["left"].set_visible(False)
plt.annotate("Covid death rates per thousands", xy=(0, 1.05), xycoords='axes fraction', fontsize=20, color='#303030')
# [left, bottom, width, height]
ax_c = fig.add_axes([0.25, 0.05, 0.5, 0.03])
cmap = mpl.colors.ListedColormap(colors)
cb2 = ColorbarBase(ax_c, cmap=cmap,
boundaries=bounds,
ticks=bounds,
label=boundaries,
orientation='horizontal')
cb2.set_label("")
cb2.set_ticklabels(boundaries)
```
## More on dataframe manipulation
`.iloc` for slicing a dataframe
```
rates.head()
rates = rates.reset_index(drop=True)
rates.head()
## keep the first 7 rows
rates_top7 = rates.iloc[:7]
rates_top7
## keep columns 2 and 3
rates_top7_cols23 = rates_top7.iloc[:, 2:4]
rates_top7_cols23
# we can do it at the same time
rates.iloc[:7, 2:4]
```
**Exercise 2**: Make a map of the `rate of covid cases` per state. Can you use a diverging palette to show which states are above or below average? Which makes more sense for this problem, a diverging palette or a sequential one?
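A possible starting point for Exercise 2 (a sketch only: it re-reads the raw csv, assumes the `confirmed_cases` column seen earlier, and uses a diverging ColorBrewer palette centered on the mean rate):

```
# Re-read so we keep case counts, then join to the population table from above
covid_all = pd.read_csv("data/us-states.csv")[["state", "fips", "confirmed_cases"]]
case_rates = covid_all.merge(population, how="inner", left_on="fips", right_on="STATE").dropna()
case_rates["rate"] = 1000 * case_rates["confirmed_cases"] / case_rates["POPESTIMATE2019"]
# diverging RdBu palette (colorbrewer2.org), centered on the mean rate
div_colors = ["#2166ac", "#67a9cf", "#d1e5f0", "#f7f7f7", "#fddbc7", "#ef8a62", "#b2182b"]
centered = case_rates["rate"] - case_rates["rate"].mean()
case_rates["color"] = pd.cut(centered, bins=7, labels=div_colors)
# reuse the same plotting loop as above with this state -> color mapping
state2color_cases = {row["state"]: row["color"] for _, row in case_rates.iterrows()}
```

A diverging palette makes above/below-average states easy to spot, while a sequential palette is arguably more honest when the rate has no natural midpoint.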
**Exercise 3**: (hard) Can you annotate the plot to highlight the states with the highest death rates?
| github_jupyter |
## 1. Meet Dr. Ignaz Semmelweis
<p><img style="float: left;margin:5px 20px 5px 1px" src="https://assets.datacamp.com/production/project_20/img/ignaz_semmelweis_1860.jpeg"></p>
<!--
<img style="float: left;margin:5px 20px 5px 1px" src="https://assets.datacamp.com/production/project_20/datasets/ignaz_semmelweis_1860.jpeg">
-->
<p>This is Dr. Ignaz Semmelweis, a Hungarian physician born in 1818 and active at the Vienna General Hospital. If Dr. Semmelweis looks troubled it's probably because he's thinking about <em>childbed fever</em>: A deadly disease affecting women that just have given birth. He is thinking about it because in the early 1840s at the Vienna General Hospital as many as 10% of the women giving birth die from it. He is thinking about it because he knows the cause of childbed fever: It's the contaminated hands of the doctors delivering the babies. And they won't listen to him and <em>wash their hands</em>!</p>
<p>In this notebook, we're going to reanalyze the data that made Semmelweis discover the importance of <em>handwashing</em>. Let's start by looking at the data that made Semmelweis realize that something was wrong with the procedures at Vienna General Hospital.</p>
```
# Importing modules
import pandas as pd
# Read datasets/yearly_deaths_by_clinic.csv into yearly
yearly = pd.read_csv("datasets/yearly_deaths_by_clinic.csv")
# Print out yearly
yearly
```
## 2. The alarming number of deaths
<p>The table above shows the number of women giving birth at the two clinics at the Vienna General Hospital for the years 1841 to 1846. You'll notice that giving birth was very dangerous; an <em>alarming</em> number of women died as the result of childbirth, most of them from childbed fever.</p>
<p>We see this more clearly if we look at the <em>proportion of deaths</em> out of the number of women giving birth. Let's zoom in on the proportion of deaths at Clinic 1.</p>
```
# Calculate proportion of deaths per no. births
yearly["proportion_deaths"] = yearly["deaths"] / yearly["births"]
# Extract Clinic 1 data into clinic_1 and Clinic 2 data into clinic_2
clinic_1 = yearly[yearly["clinic"] == "clinic 1"]
clinic_2 = yearly[yearly["clinic"] == "clinic 2"]
# Print out clinic_1
clinic_1
```
## 3. Death at the clinics
<p>If we now plot the proportion of deaths at both Clinic 1 and Clinic 2 we'll see a curious pattern…</p>
```
# This makes plots appear in the notebook
%matplotlib inline
# Plot yearly proportion of deaths at the two clinics
ax = clinic_1.plot(x="year", y="proportion_deaths", label="Clinic 1")
clinic_2.plot(x="year", y="proportion_deaths", label="Clinic 2", ax=ax, ylabel="Proportion deaths")
```
## 4. The handwashing begins
<p>Why is the proportion of deaths consistently so much higher in Clinic 1? Semmelweis saw the same pattern and was puzzled and distressed. The only difference between the clinics was that many medical students served at Clinic 1, while mostly midwife students served at Clinic 2. While the midwives only tended to the women giving birth, the medical students also spent time in the autopsy rooms examining corpses. </p>
<p>Semmelweis started to suspect that something on the corpses spread from the hands of the medical students, caused childbed fever. So in a desperate attempt to stop the high mortality rates, he decreed: <em>Wash your hands!</em> This was an unorthodox and controversial request, nobody in Vienna knew about bacteria at this point in time. </p>
<p>Let's load in monthly data from Clinic 1 to see if the handwashing had any effect.</p>
```
# Read datasets/monthly_deaths.csv into monthly
monthly = pd.read_csv("datasets/monthly_deaths.csv", parse_dates=["date"])
# Calculate proportion of deaths per no. births
monthly["proportion_deaths"] = monthly["deaths"] / monthly["births"]
# Print out the first rows in monthly
monthly.head()
```
## 5. The effect of handwashing
<p>With the data loaded we can now look at the proportion of deaths over time. In the plot below we haven't marked where obligatory handwashing started, but it reduced the proportion of deaths to such a degree that you should be able to spot it!</p>
```
# Plot monthly proportion of deaths
ax = monthly.plot(x="date", y="proportion_deaths", ylabel="Proportion deaths")
```
## 6. The effect of handwashing highlighted
<p>Starting from the summer of 1847 the proportion of deaths is drastically reduced and, yes, this was when Semmelweis made handwashing obligatory. </p>
<p>The effect of handwashing is made even more clear if we highlight this in the graph.</p>
```
# Date when handwashing was made mandatory
handwashing_start = pd.to_datetime('1847-06-01')
# Split monthly into before and after handwashing_start
before_washing = monthly[monthly["date"] < handwashing_start]
after_washing = monthly[monthly["date"] >= handwashing_start]
# Plot monthly proportion of deaths before and after handwashing
ax = before_washing.plot(x="date", y="proportion_deaths",
label="Before handwashing")
after_washing.plot(x="date", y="proportion_deaths",
label="After handwashing", ax=ax, ylabel="Proportion deaths")
```
## 7. More handwashing, fewer deaths?
<p>Again, the graph shows that handwashing had a huge effect. How much did it reduce the monthly proportion of deaths on average?</p>
```
# Difference in mean monthly proportion of deaths due to handwashing
before_proportion = before_washing["proportion_deaths"]
after_proportion = after_washing["proportion_deaths"]
mean_diff = after_proportion.mean() - before_proportion.mean()
mean_diff
```
## 8. A Bootstrap analysis of Semmelweis handwashing data
<p>It reduced the proportion of deaths by around 8 percentage points! From 10% on average to just 2% (which is still a high number by modern standards). </p>
<p>To get a feeling for the uncertainty around how much handwashing reduces mortalities we could look at a confidence interval (here calculated using the bootstrap method).</p>
```
# A bootstrap analysis of the reduction of deaths due to handwashing
boot_mean_diff = []
for i in range(3000):
boot_before = before_proportion.sample(frac=1, replace=True)
boot_after = after_proportion.sample(frac=1, replace=True)
boot_mean_diff.append( boot_after.mean() - boot_before.mean() )
# Calculating a 95% confidence interval from boot_mean_diff
confidence_interval = pd.Series(boot_mean_diff).quantile([0.025, 0.975])
confidence_interval
```
## 9. The fate of Dr. Semmelweis
<p>So handwashing reduced the proportion of deaths by between 6.7 and 10 percentage points, according to a 95% confidence interval. All in all, it would seem that Semmelweis had solid evidence that handwashing was a simple but highly effective procedure that could save many lives.</p>
<p>The tragedy is that, despite the evidence, Semmelweis' theory — that childbed fever was caused by some "substance" (what we today know as <em>bacteria</em>) from autopsy room corpses — was ridiculed by contemporary scientists. The medical community largely rejected his discovery and in 1849 he was forced to leave the Vienna General Hospital for good.</p>
<p>One reason for this was that statistics and statistical arguments were uncommon in medical science in the 1800s. Semmelweis only published his data as long tables of raw data, but he didn't show any graphs nor confidence intervals. If he would have had access to the analysis we've just put together he might have been more successful in getting the Viennese doctors to wash their hands.</p>
```
# The data Semmelweis collected points to that:
doctors_should_wash_their_hands = True
```
| github_jupyter |
```
import os
from glob import glob
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
```
## Cleaning Up (& Stats About It)
- For each annotator:
- How many annotation files?
- How many txt files?
- Number of empty .ann files
- How many non-empty .ann files have a `TranscriptionError_Document`/`DuplicatePage` tag?
- How many .ann files have ONLY one of those two tags and are empty o/w? -> remove if so
=> remove corresponding .txt files
=> create new corpus
```
def get_all_files(annotator):
""" collapsing folder structure per annotator"""
data_dir = "../Data/"
ann_dir = data_dir+annotator+"/"
for cur_dir in glob(ann_dir+"/6*"):
txt_files = sorted(glob(cur_dir+"/*.txt"))
ann_files = sorted(glob(cur_dir+"/*.ann"))
yield from zip(txt_files, ann_files)
def has_error_tag(any_string):
"""Return strings with error tags"""
return "TranscriptionError_Document" in any_string or\
"DuplicatePage" in any_string
def remove_error_tag_lines(ann_file_content):
return [line for line in ann_file_content.strip().split("\n")
if not has_error_tag(line)]
annotators = "A B C Silja Yolien".split()
results = {}
print("Total Annotation Files Per Annotator\n")
for anno in annotators:
empty = []
cur_keep = []
error_tag = []
error_tag_but_non_empty = []
ann_files = list(get_all_files(anno))
print(anno, len(ann_files))
for txt, ann in ann_files:
with open(ann) as handle:
contents = handle.read()
if not contents.strip():
empty.append((txt, ann))
elif has_error_tag(contents):
error_tags_removed = remove_error_tag_lines(
contents
)
if error_tags_removed == []:
error_tag.append((txt, ann))
else:
error_tag_but_non_empty.append((txt, ann))
else:
cur_keep.append((txt, ann))
results[anno] = [cur_keep, empty, error_tag, error_tag_but_non_empty]
from tabulate import tabulate
stats = pd.DataFrame([
[k, sum(map(len, v))]+
[len(v[0])+len(v[-1])]+
list(map(len, v)) for k, v in results.items()
],
columns=["Annotator", "Total", "Keep",
"Non-empty-No error", "Empty", "Error", "Err.&Non-Empty"]).set_index("Annotator")
print(stats)
stats_T = pd.melt(stats[["Total", "Empty", "Keep", "Error"]].reset_index(),
id_vars=["Annotator"], value_name="Number")
plt.figure(figsize=(10, 7))
sns.barplot(data=stats_T, x='Annotator', y="Number", hue="variable")
keep = {anno: v[0]+v[-1] for anno, v in results.items()}
{k: len(v) for k, v in keep.items()}
# keep
```
### Make New Corpus
by copying files
```
from shutil import copy2
already_copied = True
if not already_copied:
from tqdm import tqdm
os.makedirs('Keep')
for anno, ls in tqdm(keep.items()):
cur_dir = f"Keep/{anno}"
os.makedirs(cur_dir)
for txt, ann in ls:
copy2(txt, cur_dir)
copy2(ann, cur_dir)
else:
print("Already copied, doing nothing!")
```
# Pairwise Intersections of Annotation Files
```
def only_names(file_list):
"returns only names of files in a particular list"
return [ann.split("/")[-1] for txt, ann in file_list]
ls = []
for a1, fs1 in keep.items():
for a2, fs2 in keep.items():
if not a1 == a2:
names1, names2 = only_names(fs1), only_names(fs2)
inter = set(names1) & set(names2) #names of files are identical
val = len(inter)/len(names1)
total_names1 = only_names(tup for ls in results[a1] for tup in ls)
total_names2 = only_names(tup for ls in results[a2] for tup in ls)
total_inter = set(total_names1) & set(total_names2)
total_val = len(total_inter)/len(total_names1)
jacc_val = len(set(names1).intersection(set(names2)))/len(set(names1).union(set(names2)))
jacc_val_2 = len(set(total_names1).intersection(set(total_names2)))/len(set(total_names1).union(set(total_names2)))
ls.append([a1, a2, len(inter), val,
len(total_inter), total_val, jacc_val, jacc_val_2])
inter_stats = pd.DataFrame(ls,
columns=["Anno1", "Anno2",
"Intersection", "normed_Intersection",
"total_Intersection", "total_normed_Intersection", "Jaccard_distance", "Jaccard_Distance_2"])
# inter_stats
```
#### Jaccard Distance to Understand the Overlap of Pages between Annotators
```
inter_stats_T = inter_stats.pivot_table(
values="Jaccard_distance",
index="Anno1", columns="Anno2"
)
sns.heatmap(inter_stats_T*100, annot=True, cmap="YlGnBu")
_ = plt.title("Before Clean Up: Jaccard Distance (percentage)")
plt.show()
inter_stats_T = inter_stats.pivot_table(
values="Jaccard_Distance_2",
index="Anno1", columns="Anno2"
)
sns.heatmap(inter_stats_T*100, annot=True, cmap="YlGnBu")
_ = plt.title("After Clean Up: Jaccard Distance (percentage)")
plt.show()
# inter_stats_T = inter_stats.pivot_table(
# values="Intersection",
# index="Anno1", columns="Anno2"
# )
# sns.heatmap(inter_stats_T,
# annot=True, cmap="YlGnBu")
# _ = plt.title("Before Clean Up: Raw Counts")
```
**Conclusion**: Each pair of annotators has, on average, about 6% overlap (relative to the total documents they both annotated).
## Check Tag Distributions
```
def get_lines(ann_file):
with open(ann_file) as handle:
for l in handle:
if not l.strip(): continue
yield l.strip().split("\t")
def get_entities(ann_file):
for line in get_lines(ann_file):
if line[0].startswith("T") and len(line) >= 2:
tag_type, tag, string = line
yield tag.split()[0]
ents = {a: [e for txt, ann in files for e in get_entities(ann)]
for a, files in keep.items()}
from collections import Counter
entity_stats = pd.DataFrame(
[[a, e, c] for a in ents for e, c in Counter(ents[a]).items() if not e in ["DuplicatePage", "Noteworthy", "TranscriptionError_Document"]],
columns=["Annotator", "EntityType", "Count"]
)
plt.figure(figsize=(10, 7))
_ = sns.barplot(data=entity_stats, x='Annotator', y="Count", hue="EntityType")
```
**Conclusion**:
Here we see that most annotators follow a similar trend in the entities they annotated; the only annotator who stands out is '3'.
| github_jupyter |
# Lambda School Data Science - Loading, Cleaning and Visualizing Data
Objectives for today:
- Load data from multiple sources into a Python notebook
- From a URL (github or otherwise)
- CSV upload method
- !wget method
- "Clean" a dataset using common Python libraries
- Removing NaN values "Data Imputation"
- Create basic plots appropriate for different data types
- Scatter Plot
- Histogram
- Density Plot
- Pairplot (if we have time)
# Part 1 - Loading Data
Data comes in many shapes and sizes - we'll start by loading tabular data, usually in csv format.
Data set sources:
- https://archive.ics.uci.edu/ml/datasets.html
- https://github.com/awesomedata/awesome-public-datasets
- https://registry.opendata.aws/ (beyond scope for now, but good to be aware of)
Let's start with an example - [data about flags](https://archive.ics.uci.edu/ml/datasets/Flags).
## Lecture example - flag data
```
# Step 1 - find the actual file to download
# From navigating the page, clicking "Data Folder"
flag_data_url = 'https://archive.ics.uci.edu/ml/machine-learning-databases/flags/flag.data'
# You can "shell out" in a notebook for more powerful tools
# https://jakevdp.github.io/PythonDataScienceHandbook/01.05-ipython-and-shell-commands.html
# Funny extension, but on inspection looks like a csv
!curl https://archive.ics.uci.edu/ml/machine-learning-databases/flags/flag.data
# Extensions are just a norm! You have to inspect to be sure what something is
# Step 2 - load the data
# How to deal with a csv? 🐼
import pandas as pd
flag_data = pd.read_csv(flag_data_url)
# Step 3 - verify we've got *something*
flag_data.head()
# Step 4 - Looks a bit odd - verify that it is what we want
flag_data.count()
!curl https://archive.ics.uci.edu/ml/machine-learning-databases/flags/flag.data | wc
# So we have 193 observations with funny names, file has 194 rows
# Looks like the file has no header row, but read_csv assumes it does
help(pd.read_csv)
# Alright, we can pass header=None to fix this
flag_data = pd.read_csv(flag_data_url, header=None)
flag_data.head()
flag_data.count()
flag_data.isna().sum()
```
### Yes, but what does it *mean*?
This data is fairly nice - it was "donated" and is already "clean" (no missing values). But there are no variable names - so we have to look at the codebook (also from the site).
```
1. name: Name of the country concerned
2. landmass: 1=N.America, 2=S.America, 3=Europe, 4=Africa, 4=Asia, 6=Oceania
3. zone: Geographic quadrant, based on Greenwich and the Equator; 1=NE, 2=SE, 3=SW, 4=NW
4. area: in thousands of square km
5. population: in round millions
6. language: 1=English, 2=Spanish, 3=French, 4=German, 5=Slavic, 6=Other Indo-European, 7=Chinese, 8=Arabic, 9=Japanese/Turkish/Finnish/Magyar, 10=Others
7. religion: 0=Catholic, 1=Other Christian, 2=Muslim, 3=Buddhist, 4=Hindu, 5=Ethnic, 6=Marxist, 7=Others
8. bars: Number of vertical bars in the flag
9. stripes: Number of horizontal stripes in the flag
10. colours: Number of different colours in the flag
11. red: 0 if red absent, 1 if red present in the flag
12. green: same for green
13. blue: same for blue
14. gold: same for gold (also yellow)
15. white: same for white
16. black: same for black
17. orange: same for orange (also brown)
18. mainhue: predominant colour in the flag (tie-breaks decided by taking the topmost hue, if that fails then the most central hue, and if that fails the leftmost hue)
19. circles: Number of circles in the flag
20. crosses: Number of (upright) crosses
21. saltires: Number of diagonal crosses
22. quarters: Number of quartered sections
23. sunstars: Number of sun or star symbols
24. crescent: 1 if a crescent moon symbol present, else 0
25. triangle: 1 if any triangles present, 0 otherwise
26. icon: 1 if an inanimate image present (e.g., a boat), otherwise 0
27. animate: 1 if an animate image (e.g., an eagle, a tree, a human hand) present, 0 otherwise
28. text: 1 if any letters or writing on the flag (e.g., a motto or slogan), 0 otherwise
29. topleft: colour in the top-left corner (moving right to decide tie-breaks)
30. botright: Colour in the bottom-left corner (moving left to decide tie-breaks)
```
Exercise - read the help for `read_csv` and figure out how to load the data with the above variable names. One pitfall to note - with `header=None` pandas generated variable names starting from 0, but the above list starts from 1...
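One possible solution sketch (column names taken directly from the codebook above; passing `names=` makes pandas treat the file as headerless, so the off-by-one numbering takes care of itself):

```
flag_columns = ['name', 'landmass', 'zone', 'area', 'population', 'language',
                'religion', 'bars', 'stripes', 'colours', 'red', 'green', 'blue',
                'gold', 'white', 'black', 'orange', 'mainhue', 'circles', 'crosses',
                'saltires', 'quarters', 'sunstars', 'crescent', 'triangle', 'icon',
                'animate', 'text', 'topleft', 'botright']
flag_data = pd.read_csv(flag_data_url, names=flag_columns)
flag_data.head()
```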
## Steps of Loading and Exploring a Dataset:
- Find a dataset that looks interesting
- Learn what you can about it
- What's in it?
- How many rows and columns?
- What types of variables?
- Look at the raw contents of the file
- Load it into your workspace (notebook)
- Handle any challenges with headers
- Handle any problems with missing values
- Then you can start to explore the data
- Look at the summary statistics
- Look at counts of different categories
- Make some plots to look at the distribution of the data
## 3 ways of loading a dataset
### From its URL
```
dataset_url = 'https://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.data'
column_headers = ['age', 'workclass', 'fnlwgt', 'education', 'education-num',
'marital-status', 'occupation', 'relationship', 'race', 'sex',
'capital-gain', 'capital-loss', 'hours-per-week',
'native-country', 'income']
df = pd.read_csv(dataset_url, names=column_headers)
print(df.shape)
df.head()
```
### From a local file
```
from google.colab import files
uploaded = files.upload()
```
### Using the `!wget` command
```
!wget https://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.data
```
# Part 2 - Deal with Missing Values
## Diagnose Missing Values
Let's use the Adult Dataset from UCI: <https://github.com/ryanleeallred/datasets>
```
df.isnull().sum()
```
## Fill Missing Values
```
dataset_url = 'https://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.data'
column_headers = ['age', 'workclass', 'fnlwgt', 'education', 'education-num',
'marital-status', 'occupation', 'relationship', 'race', 'sex',
'capital-gain', 'capital-loss', 'hours-per-week',
'native-country', 'income']
df = pd.read_csv(dataset_url, names=column_headers, na_values=[' ?'])
print(df.shape)
df.head(20)
df.dtypes
df.iloc[14][13]
```
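The cell above only re-reads the data so that ' ?' becomes NaN; it doesn't actually fill anything yet. A minimal sketch of filling those gaps (the column choice and the 'Unknown' label are just illustrative):

```
for col in ["workclass", "occupation", "native-country"]:
    df[col] = df[col].fillna("Unknown")
df.isnull().sum()
```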
# Part 3 - Explore the Dataset:
## Look at "Summary Statistics
### Numeric
```
df.describe()
```
### Non-Numeric
```
df.describe(exclude="number")
```
## Look at Categorical Values
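For example, counts per category for a couple of the non-numeric columns (assuming the adult `df` loaded above):

```
print(df["workclass"].value_counts(dropna=False))
print(df["income"].value_counts())
```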
# Part 4 - Basic Visualizations (using the Pandas Library)
## Histogram
```
# Pandas Histogram
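# Example sketch (assumes the adult df from above): distribution of age
df["age"].plot.hist(bins=20);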
```
## Density Plot (KDE)
```
# Pandas Density Plot
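# Example sketch: kernel density estimate of hours worked per week
df["hours-per-week"].plot.density();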
```
## Scatter Plot
```
# Pandas Scatterplot
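# Example sketch: age vs. hours-per-week, with alpha to reduce overplotting
df.plot.scatter(x="age", y="hours-per-week", alpha=0.1);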
```
| github_jupyter |
# Sci-Fi IRL #1: Technology Terminology Velocity
### A Data Storytelling Project by Tobias Reaper
### ---- Datalogue 008 ----
---
---
### Imports and Configuration
```
# Three Musketeers
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats
# For using the API
import requests
# More advanced vizualizations with Bokeh
from bokeh.plotting import figure, output_file, output_notebook, show
from bokeh.layouts import column
from bokeh.models.glyphs import Patches
# Import color library
import colorcet as cc
# Define color palette
palette = [cc.bkr[i*15] for i in range(17)]
palette
# Set pandas display options to allow for more columns and rows
pd.set_option("display.max_columns", 100)
pd.set_option("display.max_rows", 500)
```
---
### Functions
```
def pushshift_api_request(query, subreddit, frequency="month", aggs="created_utc"):
"""
Returns the JSON response of a PushShift API aggregate comment search as a Python dictionary.
Note: if you're reading this note, that means that this function is still only written
with the intention of automating a specific set of actions for a specific project.
---- Arguments ----
query: (str) keyword to search.
subreddit: (str) subreddit name
frequency: (str) set the size of the time buckets.
aggs: (str) aggregate function name. Default is "created_utc".
(For more information, read the PushShift API Documentation.)
-------------------
"""
# Build the query url based on endpoints and parameters
url = f"https://api.pushshift.io/reddit/search/comment/?q={query}&subreddit={subreddit}&aggs={aggs}&frequency={frequency}&size=100"
# Send the request and save the response into the response object
response = requests.get(url)
# Check the response; stop execution if failed
assert response.status_code == 200
# Parse the JSON into a Python dictionary
# and return it for further processing
return response.json()
def create_df(data, keyword, frequency="month"):
"""
Returns cleaned Pandas DataFrame of keyword frequency over time, given correctly-formatted Python dictionary.
Renames the frequency column to keyword; converts month to datetime.
Note: if you're reading this note, that means that this function is still only written
with the intention of automating a specific set of actions for a specific project.
---- Arguments ----
data: (dict) Python dictionary converted from JSON API response.
keyword: (str) the keyword that was queried.
    frequency: (str) size of the time buckets, which is also the name of the resulting DataFrame column. Defaults to "month".
-------------------
"""
# Convert the python object into a pandas dataframe
df = pd.DataFrame(data["aggs"]["created_utc"])
# Convert "key" into a datetime column
df["key"] = pd.to_datetime(df["key"], unit="s", origin="unix")
# Rename "key" to reflect the fact that it is the beginning of the time bucket
df = df.rename(mapper={"key": frequency, "doc_count": keyword}, axis="columns")
# Return the DataFrame
return df
def comments_df(data):
"""
Returns Reddit comments in Pandas DataFrame, given the correctly-formatted Python dictionary.
Note: if you're reading this note, that means that this function is still only written
with the intention of automating a specific set of actions for a specific project.
---- Arguments ----
data: (dict) Python dictionary converted from JSON API response.
-------------------
"""
# Convert the comments into a pandas dataframe
df = pd.DataFrame(data["data"])
# Return the DataFrame
return df
def df_to_csv(data, filename):
"""
Basically just a wrapper around the Pandas `.to_csv()` method,
created to standardize the inputs and outputs.
---- Arguments ----
data: (pd.DataFrame) Pandas DataFrame to be saved as a csv.
filepath: (str) name or path of the file to be saved.
-------------------
"""
# Saves the DataFrame to csv
data.to_csv(path_or_buf=filename, index=False)
# And that's it, folks!
def reddit_data_setter(keywords, subreddits, csv=False, frequency="month", aggs="created_utc"):
"""
Creates two DataFrames that hold combined data of all combinations of keywords / subreddits.
Note: if you're reading this note, that means that this function is still only written
with the intention of automating a specific set of actions for a specific project.
---- Arguments ----
keywords: (list) keyword(s) to search.
subreddits: (list) name of subreddit(s) to include.
csv: (bool) if True, save the resulting dataframes as csv file.
frequency: (str) set the size of the time buckets.
aggs: (str) aggregate function name. Default is "created_utc".
(For more information, read the PushShift API Documentation.)
-------------------
"""
from time import sleep
comment_df_list = [] # Empty list to hold comment dataframes
word_df_list = [] # Empty list to hold monthly word count dataframes
df_comm = pd.DataFrame() # Empty dataframe for comment data
df_main = pd.DataFrame() # Empty dataframe for keyword counts
# Create the "month" (datetime) column - to be used when joining
df_main["month"] = pd.date_range(start="2005-01-01", end="2019-09-01", freq="MS")
# Run query for individual keywords on each subreddit
# Subreddit (outer) -> keyword (inner) = all keywords in one subreddit at a time
for subreddit in subreddits:
for word in keywords:
# Create unique column name for each subreddit / word combo
col_name = f"{subreddit}_{word.replace(' ', '')}"
# Indicates current subreddit / keyword
start = f"{col_name}..."
print(start)
sleep(0.5) # Add sleep time to reduce API load
# Make request and convert response to dictionary
dictionary = pushshift_api_request(word, subreddit)
# Append aggs word count df to word_df_list
word_df_list.append(create_df(dictionary, col_name))
# Append comments df to comment_df_list
comment_df_list.append(comments_df(dictionary))
sleep(0.5) # More sleep to reduce API load
sleep(0.5)
# Set "month" as index in order to concatenate list of dataframes
df_main = pd.concat([df.set_index("month") for df in word_df_list],
axis=1, join="outer").reset_index()
# Concatenate comment_df_list dataframes
df_comm = pd.concat(comment_df_list, axis=0, sort=False,
join="outer", ignore_index=True)
# If csv parameter is set to True, save datasets to filesystem as csv
if csv:
df_to_csv(df_main, f"{keywords[0]}-monthly.csv")
df_to_csv(df_comm, f"{keywords[0]}-comments.csv")
# Return df_main, df_comm, respectively
return df_main, df_comm
```
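A quick sanity check of the helpers above (a sketch that queries a single keyword/subreddit pair and converts the aggregate counts into a DataFrame):

```
resp = pushshift_api_request("algorithm", "technology")
monthly_counts = create_df(resp, "technology_algorithm")
monthly_counts.head()
```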
---
---
## Term Velocity: Algorithm
The velocity of the term "algorithm" in each of the target subreddits.
```
# Define keywords and subreddits as python lists
words = [
"algorithm",
]
subs = [
"Futurology",
"technology",
"science",
"askscience",
"gadgets",
"books",
"scifi",
"movies",
"gaming",
"television",
"news",
"worldnews",
"politics",
"philosophy",
"AskReddit",
"todayilearned",
"explainlikeimfive",
]
# Run the function to create and save the dataset
df_main, df_comm = reddit_data_setter(words, subs, True)
# Take a look to be sure it worked as expected
print(df_main.shape)
df_main.head()
```
---
### Visualizations
```
# Load csv
df_main = pd.read_csv("008-Session_Exports/algorithm-monthly.csv")
df_main["month"] = pd.to_datetime(df_main["month"], infer_datetime_format=True)
df_main.head()
df_main.dtypes
# Color assignments
subs_colors = {}
for i in range(len(subs)):
subs_colors[f"{subs[i]}"] = f"{palette[i]}"
# Output to current notebook
output_notebook()
output_file(f"{words[0]}-velocity-viz.html")
p = {} # dict to hold plots
p_names = [] # list for plot names
for sub in subs_colors:
p[f"{sub}"] = figure(title=f"Comments that mention '{words[0]}' in r/{sub}",
plot_width=1000, plot_height=200,
x_axis_type="datetime", x_range=(df_main.iloc[14][0], df_main.iloc[-1][0]))
p[f"{sub}"].line(df_main["month"], df_main[f"{sub}_{words[0]}"], line_width=2, line_color=f"{subs_colors[sub]}")
p_names.append(p[f"{sub}"])
# Show the results
show(column(p_names))
```
---
---
## Term Velocity: AI
The velocity of the term "AI" (abbreviation of artificial intelligence) in each of the target subreddits.
```
# Define keywords and subreddits as python lists
words = [
"AI",
]
subs = [
"Futurology",
"technology",
"science",
"askscience",
"gadgets",
"books",
"scifi",
"movies",
"gaming",
"television",
"news",
"worldnews",
"politics",
"philosophy",
"AskReddit",
"todayilearned",
"explainlikeimfive",
]
# Run the function to create and save the dataset
df_main, df_comm = reddit_data_setter(words, subs, True)
# Take a look to be sure it worked as expected
print(df_main.shape)
df_main.head()
```
---
### Visualizations
```
# Color assignments
subs_colors = {}
for i in range(len(subs)):
subs_colors[f"{subs[i]}"] = f"{palette[i]}"
# Output to current notebook
output_notebook()
output_file(f"{words[0]}-velocity-viz.html")
p = {} # dict to hold plots
p_names = [] # list for plot names
for sub in subs_colors:
p[f"{sub}"] = figure(title=f"Comments that mention '{words[0]}' in r/{sub}",
plot_width=1000, plot_height=200,
x_axis_type="datetime", x_range=(df_main.iloc[14][0], df_main.iloc[-1][0]))
p[f"{sub}"].line(df_main["month"], df_main[f"{sub}_{words[0]}"], line_width=2, line_color=f"{subs_colors[sub]}")
p_names.append(p[f"{sub}"])
# Show the results
show(column(p_names))
```
---
---
## Term Velocity: AR
The velocity of the term "AR" (abbreviation of augmented reality) in each of the target subreddits.
```
# Define keywords and subreddits as python lists
words = [
"AR",
]
subs = [
"Futurology",
"technology",
"science",
"askscience",
"gadgets",
"books",
"scifi",
"movies",
"gaming",
"television",
"news",
"worldnews",
"politics",
"philosophy",
"AskReddit",
"todayilearned",
"explainlikeimfive",
]
# Run the function to create and save the dataset
df_main, df_comm = reddit_data_setter(words, subs, True)
# Take a look to be sure it worked as expected
print(df_main.shape)
df_main.head()
```
---
### Visualizations
```
# Color assignments
subs_colors = {}
for i in range(len(subs)):
subs_colors[f"{subs[i]}"] = f"{palette[i]}"
# Output to current notebook
output_notebook()
output_file(f"{words[0]}-velocity-viz.html")
p = {} # dict to hold plots
p_names = [] # list for plot names
for sub in subs_colors:
p[f"{sub}"] = figure(title=f"Comments that mention '{words[0]}' in r/{sub}",
plot_width=1000, plot_height=200,
x_axis_type="datetime", x_range=(df_main.iloc[14][0], df_main.iloc[-1][0]))
p[f"{sub}"].line(df_main["month"], df_main[f"{sub}_{words[0]}"], line_width=2, line_color=f"{subs_colors[sub]}")
p_names.append(p[f"{sub}"])
# Show the results
show(column(p_names))
```
---
---
## Term Velocity: Automation
The velocity of the term "automation" in each of the target subreddits.
```
# Define keywords and subreddits as python lists
words = [
"automation",
]
subs = [
"Futurology",
"technology",
"science",
"askscience",
"gadgets",
"books",
"scifi",
"movies",
"gaming",
"television",
"news",
"worldnews",
"politics",
"philosophy",
"AskReddit",
"todayilearned",
"explainlikeimfive",
]
# Run the function to create and save the dataset
df_main, df_comm = reddit_data_setter(words, subs, True)
# Take a look to be sure it worked as expected
print(df_main.shape)
df_main.head()
```
---
### Visualizations
```
# Output to current notebook
output_notebook()
output_file(f"{words[0]}-velocity-viz.html")
p = {} # dict to hold plots
p_names = [] # list for plot names
for sub in subs_colors:
p[f"{sub}"] = figure(title=f"Comments that mention '{words[0]}' in r/{sub}",
plot_width=1000, plot_height=200,
x_axis_type="datetime", x_range=(df_main.iloc[14][0], df_main.iloc[-1][0]))
p[f"{sub}"].line(df_main["month"], df_main[f"{sub}_{words[0]}"], line_width=2, line_color=f"{subs_colors[sub]}")
p_names.append(p[f"{sub}"])
# Show the results
show(column(p_names))
```
---
---
## Term Velocity: Big Data
The velocity of the term "big data" in each of the target subreddits.
```
# Define keywords and subreddits as python lists
words = [
"big data",
]
subs = [
"Futurology",
"technology",
"science",
"askscience",
"gadgets",
"books",
"scifi",
"movies",
"gaming",
"television",
"news",
"worldnews",
"politics",
"philosophy",
"AskReddit",
"todayilearned",
"explainlikeimfive",
]
# Run the function to create and save the dataset
df_main, df_comm = reddit_data_setter(words, subs, True)
# Take a look to be sure it worked as expected
print(df_main.shape)
df_main.head()
```
---
### Visualizations
```
# Output to current notebook
output_notebook()
output_file(f"{words[0].replace(' ', '')}-velocity-viz.html")
p = {} # dict to hold plots
p_names = [] # list for plot names
for sub in subs_colors:
p[f"{sub}"] = figure(title=f"Comments that mention '{words[0]}' in r/{sub}",
plot_width=1000, plot_height=200,
x_axis_type="datetime", x_range=(df_main.iloc[14][0], df_main.iloc[-1][0]))
p[f"{sub}"].line(df_main["month"], df_main[f"{sub}_{words[0].replace(' ', '')}"], line_width=2, line_color=f"{subs_colors[sub]}")
p_names.append(p[f"{sub}"])
# Show the results
show(column(p_names))
```
---
---
## Overall Subreddit Comment Velocity
The total number of comments made in each of the subreddits. This is one way I can normalize the data (a short sketch of that normalization follows once the dataset is built below).
```
# Define keywords and subreddits as python lists
words = [""] # Passing in an empty list this time to look at all comments
subs = [
"Futurology",
"technology",
"science",
"askscience",
"gadgets",
"books",
"scifi",
"movies",
"gaming",
"television",
"news",
"worldnews",
"politics",
"philosophy",
"AskReddit",
"todayilearned",
"explainlikeimfive",
]
```
---
```
def all_comments_monthly(subreddit, frequency="month", aggs="created_utc"):
"""
Returns the JSON response of a PushShift API aggregate comment search as a Python dictionary.
Note: if you're reading this note, that means that this function is still only written
with the intention of automating a specific set of actions for a specific project.
---- Arguments ----
subreddit: (str) subreddit name
frequency: (str) set the size of the time buckets.
aggs: (str) aggregate function name. Default is "created_utc".
(For more information, read the PushShift API Documentation.)
-------------------
"""
# Build the query url based on endpoints and parameters
url = f"https://api.pushshift.io/reddit/search/comment/?subreddit={subreddit}&aggs={aggs}&frequency={frequency}&size=100"
# Send the request and save the response into the response object
response = requests.get(url)
# Check the response; stop execution if failed
assert response.status_code == 200
# Parse the JSON into a Python dictionary and return it for further processing
return response.json()
def all_comments_aggregator(keywords, subreddits, csv=False, frequency="month", aggs="created_utc"):
"""
Creates two DataFrames that hold combined data of all comments in all the target subreddits.
Note: if you're reading this note, that means that this function is still only written
with the intention of automating a specific set of actions for a specific project.
---- Arguments ----
keywords: (list) keyword(s) to search.
subreddits: (list) name of subreddit(s) to include.
csv: (bool) if True, save the resulting dataframes as csv file.
frequency: (str) set the size of the time buckets.
aggs: (str) aggregate function name. Default is "created_utc".
(For more information, read the PushShift API Documentation.)
-------------------
"""
from time import sleep
comment_df_list = [] # Empty list to hold comment dataframes
word_df_list = [] # Empty list to hold monthly word count dataframes
df_comm = pd.DataFrame() # Empty dataframe for comment data
df_main = pd.DataFrame() # Empty dataframe for keyword counts
# Create the "month" (datetime) column - to be used when joining
df_main["month"] = pd.date_range(start="2005-01-01", end="2019-09-01", freq="MS")
# Run query for individual keywords on each subreddit
# Subreddit (outer) -> keyword (inner) = all keywords in one subreddit at a time
for subreddit in subreddits:
for word in keywords:
# Create unique column name for each subreddit / word combo
col_name = f"{subreddit}_{word.replace(' ', '')}"
# Indicates current subreddit / keyword
start = f"{col_name}..."
print(start)
sleep(0.5) # Add sleep time to reduce API load
# Make request and convert response to dictionary
dictionary = pushshift_api_request(word, subreddit)
# Append aggs word count df to word_df_list
word_df_list.append(create_df(dictionary, col_name))
# Append comments df to comment_df_list
comment_df_list.append(comments_df(dictionary))
sleep(0.5) # More sleep to reduce API load
sleep(0.5)
# Set "month" as index in order to concatenate list of dataframes
df_main = pd.concat([df.set_index("month") for df in word_df_list],
axis=1, join="outer").reset_index()
# Concatenate comment_df_list dataframes
df_comm = pd.concat(comment_df_list, axis=0, sort=False,
join="outer", ignore_index=True)
# If csv parameter is set to True, save datasets to filesystem as csv
if csv:
df_to_csv(df_main, f"{keywords[0]}-monthly.csv")
df_to_csv(df_comm, f"{keywords[0]}-comments.csv")
# Return df_main, df_comm, respectively
return df_main, df_comm
```
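A quick sanity check of the helper above - a sketch only, since it makes a live PushShift request and assumes the API is reachable:
```
# Illustrative only: fetch the monthly aggregates for a single subreddit and peek at the payload keys
sample = all_comments_monthly("Futurology")
print(type(sample), list(sample.keys()))
```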
---
```
# Run the function to create and save the dataset
df_main, df_comm = reddit_data_setter(words, subs, True)
# Take a look to be sure it worked as expected
print(df_main.shape)
df_main.head()
```
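With the overall totals in hand, here is a minimal sketch of the normalization idea mentioned earlier: divide a term's monthly counts by the subreddit's total monthly comments. It assumes the "algorithm" export from earlier is still available at the path used above, and that `df_main` currently holds the overall counts (columns such as `Futurology_`).
```
# Sketch: normalize "algorithm" mentions in r/Futurology by that subreddit's total comment volume
df_algo = pd.read_csv("008-Session_Exports/algorithm-monthly.csv")
df_algo["month"] = pd.to_datetime(df_algo["month"], infer_datetime_format=True)
# Align the keyword counts with the overall totals on the month column
merged = df_algo.merge(df_main, on="month")
# Fraction of all r/Futurology comments that mention "algorithm" each month
merged["Futurology_algorithm_norm"] = merged["Futurology_algorithm"] / merged["Futurology_"]
merged[["month", "Futurology_algorithm", "Futurology_", "Futurology_algorithm_norm"]].tail()
```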
---
### Visualizations
```
# Output to current notebook
output_notebook()
output_file("overall-subreddit-velocity-viz.html")
p = {} # dict to hold plots
p_names = [] # list for plot names
for sub in subs_colors:
p[f"{sub}"] = figure(title=f"Comments in r/{sub}",
plot_width=1000, plot_height=200,
x_axis_type="datetime", x_range=(df_main.iloc[14][0], df_main.iloc[-1][0]))
p[f"{sub}"].line(df_main["month"], df_main[f"{sub}_"], line_width=2, line_color=f"{subs_colors[sub]}")
p_names.append(p[f"{sub}"])
# Show the results
show(column(p_names))
```
| github_jupyter |
# ORF recognition by CNN
Compare to ORF_CNN_101.
Use a 2-block CNN (two Conv1D layers per block).
Run on Mac.
```
PC_SEQUENCES=20000 # how many protein-coding sequences
NC_SEQUENCES=20000 # how many non-coding sequences
PC_TESTS=1000
NC_TESTS=1000
BASES=1000 # how long is each sequence
ALPHABET=4 # how many different letters are possible
INPUT_SHAPE_2D = (BASES,ALPHABET,1) # Conv2D needs 3D inputs
INPUT_SHAPE = (BASES,ALPHABET) # Conv1D needs 2D inputs
FILTERS = 32 # how many different patterns the model looks for
NEURONS = 16
WIDTH = 3 # how wide each pattern is, in bases
STRIDE_2D = (1,1) # For Conv2D how far in each direction
STRIDE = 1 # For Conv1D, how far between pattern matches, in bases
EPOCHS=10 # how many times to train on all the data
SPLITS=5 # SPLITS=3 means train on 2/3 and validate on 1/3
FOLDS=5 # train the model this many times (range 1 to SPLITS)
import sys
try:
from google.colab import drive
IN_COLAB = True
print("On Google CoLab, mount cloud-local file, get our code from GitHub.")
PATH='/content/drive/'
#drive.mount(PATH,force_remount=True) # hardly ever need this
#drive.mount(PATH) # Google will require login credentials
DATAPATH=PATH+'My Drive/data/' # must end in "/"
import requests
r = requests.get('https://raw.githubusercontent.com/ShepherdCode/Soars2021/master/SimTools/RNA_gen.py')
with open('RNA_gen.py', 'w') as f:
f.write(r.text)
from RNA_gen import *
r = requests.get('https://raw.githubusercontent.com/ShepherdCode/Soars2021/master/SimTools/RNA_describe.py')
with open('RNA_describe.py', 'w') as f:
f.write(r.text)
from RNA_describe import *
r = requests.get('https://raw.githubusercontent.com/ShepherdCode/Soars2021/master/SimTools/RNA_prep.py')
with open('RNA_prep.py', 'w') as f:
f.write(r.text)
from RNA_prep import *
except:
print("CoLab not working. On my PC, use relative paths.")
IN_COLAB = False
DATAPATH='data/' # must end in "/"
sys.path.append("..") # append parent dir in order to use sibling dirs
from SimTools.RNA_gen import *
from SimTools.RNA_describe import *
from SimTools.RNA_prep import *
MODELPATH="BestModel" # saved on cloud instance and lost after logout
#MODELPATH=DATAPATH+MODELPATH # saved on Google Drive but requires login
if not assert_imported_RNA_gen():
print("ERROR: Cannot use RNA_gen.")
if not assert_imported_RNA_prep():
print("ERROR: Cannot use RNA_prep.")
from os import listdir
import time # datetime
import csv
from zipfile import ZipFile
import numpy as np
import pandas as pd
from scipy import stats # mode
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import KFold
from sklearn.model_selection import cross_val_score
from keras.models import Sequential
from keras.layers import Dense,Embedding
from keras.layers import Conv1D,Conv2D
from keras.layers import Flatten,MaxPooling1D,MaxPooling2D
from keras.losses import BinaryCrossentropy
# tf.keras.losses.BinaryCrossentropy
import matplotlib.pyplot as plt
from matplotlib import colors
mycmap = colors.ListedColormap(['red','blue']) # list color for label 0 then 1
np.set_printoptions(precision=2)
t = time.time()
time.strftime('%Y-%m-%d %H:%M:%S %Z', time.localtime(t))
# Use code from our SimTools library.
def make_generators(seq_len):
pcgen = Collection_Generator()
pcgen.get_len_oracle().set_mean(seq_len)
pcgen.set_seq_oracle(Transcript_Oracle())
ncgen = Collection_Generator()
ncgen.get_len_oracle().set_mean(seq_len)
return pcgen,ncgen
pc_sim,nc_sim = make_generators(BASES)
pc_train = pc_sim.get_sequences(PC_SEQUENCES)
nc_train = nc_sim.get_sequences(NC_SEQUENCES)
print("Train on",len(pc_train),"PC seqs")
print("Train on",len(nc_train),"NC seqs")
# Use code from our LearnTools library.
X,y = prepare_inputs_len_x_alphabet(pc_train,nc_train,ALPHABET) # shuffles
print("Data ready.")
def make_DNN():
print("make_DNN")
print("input shape:",INPUT_SHAPE)
dnn = Sequential()
#dnn.add(Embedding(input_dim=INPUT_SHAPE,output_dim=INPUT_SHAPE))
dnn.add(Conv1D(filters=FILTERS,kernel_size=WIDTH,strides=STRIDE,padding="same",
input_shape=INPUT_SHAPE))
dnn.add(Conv1D(filters=FILTERS,kernel_size=WIDTH,strides=STRIDE,padding="same"))
dnn.add(MaxPooling1D())
dnn.add(Conv1D(filters=FILTERS,kernel_size=WIDTH,strides=STRIDE,padding="same"))
dnn.add(Conv1D(filters=FILTERS,kernel_size=WIDTH,strides=STRIDE,padding="same"))
dnn.add(MaxPooling1D())
dnn.add(Flatten())
dnn.add(Dense(NEURONS,activation="sigmoid",dtype=np.float32))
dnn.add(Dense(1,activation="sigmoid",dtype=np.float32))
dnn.compile(optimizer='adam',
loss=BinaryCrossentropy(from_logits=False),
metrics=['accuracy']) # add to default metrics=loss
dnn.build(input_shape=INPUT_SHAPE)
#ln_rate = tf.keras.optimizers.Adam(learning_rate = LN_RATE)
#bc=tf.keras.losses.BinaryCrossentropy(from_logits=False)
#model.compile(loss=bc, optimizer=ln_rate, metrics=["accuracy"])
return dnn
model = make_DNN()
print(model.summary())
from keras.callbacks import ModelCheckpoint
def do_cross_validation(X,y):
cv_scores = []
fold=0
mycallbacks = [ModelCheckpoint(
filepath=MODELPATH, save_best_only=True,
monitor='val_accuracy', mode='max')]
splitter = KFold(n_splits=SPLITS) # this does not shuffle
for train_index,valid_index in splitter.split(X):
if fold < FOLDS:
fold += 1
X_train=X[train_index] # inputs for training
y_train=y[train_index] # labels for training
X_valid=X[valid_index] # inputs for validation
y_valid=y[valid_index] # labels for validation
print("MODEL")
# Call constructor on each CV. Else, continually improves the same model.
model = make_DNN()
print("FIT") # model.fit() implements learning
start_time=time.time()
history=model.fit(X_train, y_train,
epochs=EPOCHS,
verbose=1, # ascii art while learning
callbacks=mycallbacks, # called at end of each epoch
validation_data=(X_valid,y_valid))
end_time=time.time()
elapsed_time=(end_time-start_time)
print("Fold %d, %d epochs, %d sec"%(fold,EPOCHS,elapsed_time))
# print(history.history.keys()) # all these keys will be shown in figure
pd.DataFrame(history.history).plot(figsize=(8,5))
plt.grid(True)
plt.gca().set_ylim(0,1) # any losses > 1 will be off the scale
plt.show()
do_cross_validation(X,y)
from keras.models import load_model
pc_test = pc_sim.get_sequences(PC_TESTS)
nc_test = nc_sim.get_sequences(NC_TESTS)
X,y = prepare_inputs_len_x_alphabet(pc_test,nc_test,ALPHABET)
best_model=load_model(MODELPATH)
scores = best_model.evaluate(X, y, verbose=0)
print("The best model parameters were saved during cross-validation.")
print("Best was defined as maximum validation accuracy at end of any epoch.")
print("Now re-load the best model and test it on previously unseen data.")
print("Test on",len(pc_test),"PC seqs")
print("Test on",len(nc_test),"NC seqs")
print("%s: %.2f%%" % (best_model.metrics_names[1], scores[1]*100))
from sklearn.metrics import roc_curve
from sklearn.metrics import roc_auc_score
ns_probs = [0 for _ in range(len(y))]
bm_probs = best_model.predict(X)
ns_auc = roc_auc_score(y, ns_probs)
bm_auc = roc_auc_score(y, bm_probs)
ns_fpr, ns_tpr, _ = roc_curve(y, ns_probs)
bm_fpr, bm_tpr, _ = roc_curve(y, bm_probs)
plt.plot(ns_fpr, ns_tpr, linestyle='--', label='Guess, auc=%.4f'%ns_auc)
plt.plot(bm_fpr, bm_tpr, marker='.', label='Model, auc=%.4f'%bm_auc)
plt.title('ROC')
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.legend()
plt.show()
print("%s: %.2f%%" %('AUC',bm_auc))
```
| github_jupyter |
```
"""
You can run either this notebook locally (if you have all the dependencies and a GPU) or on Google Colab.
Instructions for setting up Colab are as follows:
1. Open a new Python 3 notebook.
2. Import this notebook from GitHub (File -> Upload Notebook -> "GITHUB" tab -> copy/paste GitHub URL)
3. Connect to an instance with a GPU (Runtime -> Change runtime type -> select "GPU" for hardware accelerator)
4. Run this cell to set up dependencies.
"""
# If you're using Google Colab and not running locally, run this cell.
## Install dependencies
!pip install wget
!apt-get install sox libsndfile1 ffmpeg
!pip install unidecode
# ## Install NeMo
!python -m pip install --upgrade git+https://github.com/NVIDIA/NeMo.git#egg=nemo_toolkit[all]
## Install TorchAudio
!pip install torchaudio>=0.6.0 -f https://download.pytorch.org/whl/torch_stable.html
## Grab the config we'll use in this example
!mkdir configs
```
# minGPT License
*This notebook ports the [minGPT codebase](https://github.com/karpathy/minGPT) into equivalent NeMo code. The license for minGPT has therefore been attached here.*
```
The MIT License (MIT) Copyright (c) 2020 Andrej Karpathy
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
```
# torch-rnn License
*This notebook utilizes the `tiny-shakespeare` dataset from the [torch-rnn](https://github.com/jcjohnson/torch-rnn) codebase. The license for torch-rnn has therefore been attached here.*
```
The MIT License (MIT)
Copyright (c) 2016 Justin Johnson
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
```
-------
***Note: This notebook will intentionally introduce some errors to show the power of Neural Types or model development concepts, inside the cells marked with `[ERROR CELL]`. The explanation of and resolution of such errors can be found in the subsequent cells.***
-----
# The NeMo Model
NeMo comes with many state-of-the-art pre-trained Conversational AI models that let users quickly start training and fine-tuning on their own datasets.
In the previous [NeMo Primer](https://colab.research.google.com/github/NVIDIA/NeMo/blob/main/tutorials/00_NeMo_Primer.ipynb) notebook, we learned how to download pretrained checkpoints with NeMo and we also discussed the fundamental concepts of the NeMo Model. The previous tutorial showed us how to use, modify, save, and restore NeMo Models.
In this tutorial we will learn how to develop a non-trivial NeMo model from scratch. This helps us to understand the underlying components and how they interact with the overall PyTorch ecosystem.
-------
At the heart of NeMo lies the concept of the "Model". For NeMo developers, a "Model" is the neural network(s) as well as all the infrastructure supporting those network(s), wrapped into a singular, cohesive unit. As such, most NeMo models are constructed to contain the following out of the box (note: some NeMo models support additional functionality specific to the domain/use case!) -
- Neural Network architecture - all of the modules that are required for the model.
- Dataset + Data Loaders - all of the components that prepare the data for consumption during training or evaluation.
- Preprocessing + Postprocessing - any of the components that process the datasets so the modules can easily consume them.
- Optimizer + Schedulers - basic defaults that work out of the box and allow further experimentation with ease.
- Any other supporting infrastructure - tokenizers, language model configuration, data augmentation, etc.
# Constructing a NeMo Model
NeMo "Models" are comprised of a few key components, so let's tackle them one by one. We will attempt to go in the order that's stated above.
To make this slightly challenging, let's port a model from the NLP domain this time. Transformers are all the rage, with BERT and his friends from Sesame Street forming the core infrastructure for many NLP tasks.
An excellent (yet simple) implementation of one such model - GPT - can be found in the `minGPT` repository - https://github.com/karpathy/minGPT. While the script is short, it explains and succinctly explores all of the core components we expect in a NeMo model, so it's a prime candidate for NeMo! Sidenote: NeMo supports GPT in its NLP collection, and as such, this notebook aims to be an in-depth development walkthrough for such models.
In the following notebook, we will attempt to port minGPT to NeMo, and along the way, discuss some core concepts of NeMo itself.
# Constructing the Neural Network Architecture
First, on the list - the neural network that forms the backbone of the NeMo Model.
So how do we create such a model? Using PyTorch! As you'll see below, NeMo components are compatible with all of PyTorch, so you can augment your workflow without ever losing the flexibility of PyTorch itself!
Let's start with a couple of imports -
```
import torch
import nemo
from nemo.core import NeuralModule
from nemo.core import typecheck
```
## Neural Module
Wait, what's `NeuralModule`? Where is the wonderful `torch.nn.Module`?
`NeuralModule` is a subclass of `torch.nn.Module`, and it brings with it a few additional functionalities.
In addition to being a `torch.nn.Module`, thereby being entirely compatible with the PyTorch ecosystem, it has the following capabilities -
1) `Typing` - It adds support for `Neural Type Checking` to the model. `Typing` is optional but quite useful, as we will discuss below!
2) `Serialization` - Remember the `OmegaConf` config dict and YAML config files? Well, all `NeuralModules` inherently support serialization/deserialization from such config dictionaries!
3) `FileIO` - This is another entirely optional file serialization system. Does your `NeuralModule` require some way to preserve data that can't be saved into a PyTorch checkpoint? Write your serialization and deserialization logic in two handy methods! **Note**: When you create the final NeMo Model, this will be implemented for you! Automatic serialization and deserialization support of NeMo models!
```
class MyEmptyModule(NeuralModule):
def forward(self):
print("Neural Module ~ hello world!")
x = MyEmptyModule()
x()
```
## Neural Types
Neural Types? You might be wondering what that term refers to.
Almost all NeMo components inherit the class `Typing`. `Typing` is a simple class that adds two properties to the class that inherits it - `input_types` and `output_types`. A NeuralType, by its shortest definition, is simply a semantic tensor. It contains information regarding the semantic shape the tensor should hold, as well as the semantic information of what that tensor represents. That's it.
So what semantic information does such a typed tensor contain? Let's take an example below.
------
Across the Deep Learning domain, we often encounter cases where tensor shapes may match, but the semantics don't match at all. For example take a look at the following rank 3 tensors -
```
# Case 1:
embedding = torch.nn.Embedding(num_embeddings=10, embedding_dim=30)
x = torch.randint(high=10, size=(1, 5))
print("x :", x)
print("embedding(x) :", embedding(x).shape)
# Case 2
lstm = torch.nn.LSTM(1, 30, batch_first=True)
x = torch.randn(1, 5, 1)
print("x :", x)
print("lstm(x) :", lstm(x)[0].shape) # Let's take all timestep outputs of the LSTM
```
-------
As you can see, the output of Case 1 is an embedding of shape [1, 5, 30], and the output of Case 2 is an LSTM output (state `h` over all time steps), also of the same shape [1, 5, 30].
Do they have the same shape? **Yes**. <br>If we do a Case 1 .shape == Case 2 .shape, will we get True as an output? **Yes**. <br>
Do they represent the same concept? **No**. <br>
The ability to recognize that the two tensors do not represent the same semantic information is precisely why we utilize Neural Types. It contains the information of both the shape and the semantic concept of what that tensor represents. If we performed a neural type check between the two outputs of those tensors, it would raise an error saying semantically they were different things (more technically, it would say that they are `INCOMPATIBLE` with each other)!
--------
You may have read of concepts such as [Named Tensors](https://pytorch.org/docs/stable/named_tensor.html). While conceptually similar, Neural Types attached by NeMo are not as tightly bound to the PyTorch ecosystem - practically any object of a class can be attached with a neural type!
## Neural Types - Usage
Neural Types sound interesting, so how do we go about adding them? Let's take a few cases below.
Neural Types are one of the core foundations of NeMo - you will find them in a vast majority of Neural Modules, and every NeMo Model will have its Neural Types defined. While they are entirely optional and unintrusive, NeMo takes great care to support it so that there is no semantic incompatibility between components being used by users.
Let's start with a basic example of a type checked module.
```
from nemo.core.neural_types import NeuralType
from nemo.core.neural_types import *
class EmbeddingModule(NeuralModule):
def __init__(self):
super().__init__()
self.embedding = torch.nn.Embedding(num_embeddings=10, embedding_dim=30)
@typecheck()
def forward(self, x):
return self.embedding(x)
@property
def input_types(self):
return {
'x': NeuralType(axes=('B', 'T'), elements_type=Index())
}
@property
def output_types(self):
return {
'y': NeuralType(axes=('B', 'T', 'C'), elements_type=EmbeddedTextType())
}
```
To show the benefit of Neural Types, we are going to replicate the above cases inside NeuralModules.
Let's discuss how we added type checking support to the above class.
1) `forward` has a decorator `@typecheck()` on it.
2) `input_types` and `output_types` properties are defined.
That's it!
-------
Let's expand on each of the above steps.
- `@typecheck()` is a simple decorator that takes any class that inherits `Typing` (NeuralModule does this for us) and adds the two default properties of `input_types` and `output_types`, which by default return None.
The `@typecheck()` decorator's explicit use ensures that, by default, neural type checking is **disabled**. NeMo does not wish to intrude on the development process of models. So users can "opt-in" to type checking by overriding the two properties. Therefore, the decorator ensures that users are not burdened with type checking before they wish to have it.
So what is `@typecheck()`? Simply put, you can wrap **any** function of a class that inherits `Typing` with this decorator, and it will look up the definition of the types of that class and enforce them. Typically, `torch.nn.Module` subclasses only implement `forward()` so it is most common to wrap that method, but `@typecheck()` is a very flexible decorator. Inside NeMo, we will show some advanced use cases (which are quite crucial to particular domains such as TTS).
------
As we see above, `@typecheck()` enforces the types. How then, do we provide this type of information to NeMo?
By overriding `input_types` and `output_types` properties of the class, we can return a dictionary mapping a string name to a `NeuralType`.
In the above case, we define a `NeuralType` as two components -
- `axes`: This is the semantic information carried by the axes themselves. The most common axes information comes from single-character notation.
> `B` = Batch <br>
> `C` / `D` - Channel / Dimension (treated the same) <br>
> `T` - Time <br>
> `H` / `W` - Height / Width <br>
- `elements_type`: This is the semantic information of "what the tensor represents". All such types are derived from the basic `ElementType`, and merely subclassing `ElementType` allows us to build a hierarchy of custom semantic types that can be used by NeMo!
Here, we declare that the input is an element_type of `Index` (index of the character in the vocabulary) and that the output is an element_type of `EmbeddedTextType` (the text embedding)
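A `NeuralType` can also be constructed and inspected on its own - a small sketch combining the two components described above:
```
# A semantic tensor type by itself: a batch of time-steps holding vocabulary indices
index_type = NeuralType(axes=('B', 'T'), elements_type=Index())
print(index_type)
```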
```
embedding_module = EmbeddingModule()
```
Now let's construct the equivalent of the Case 2 above, but as a `NeuralModule`.
```
class LSTMModule(NeuralModule):
def __init__(self):
super().__init__()
self.lstm = torch.nn.LSTM(1, 30, batch_first=True)
@typecheck()
def forward(self, x):
return self.lstm(x)
@property
def input_types(self):
return {
'x': NeuralType(axes=('B', 'T', 'C'), elements_type=SpectrogramType())
}
@property
def output_types(self):
return {
'y': NeuralType(axes=('B', 'T', 'C'), elements_type=EncodedRepresentation())
}
```
------
Here, we define the LSTM module from the Case 2 above.
We changed the input to be a rank three tensor, now representing a "SpectrogramType". We intentionally keep it generic - it can accept a `MelSpectrogramType` or an `MFCCSpectrogramType` as its input!
The output of an LSTM is now an `EncodedRepresentation`. Practically, this can be the output of a CNN layer, a Transformer block, or in this case, an LSTM layer. We can, of course, specialize by subclassing EncodedRepresentation and then using that!
```
lstm_module = LSTMModule()
```
------
Now for the test!
```
# Case 1 [ERROR CELL]
x1 = torch.randint(high=10, size=(1, 5))
print("x :", x1)
print("embedding(x) :", embedding_module(x1).shape)
```
-----
You might be wondering why we get a `TypeError` right off the bat. This `TypeError` is raised by design.
Positional arguments can cause significant issues during model development, mostly when the model/module design is not finalized. To reduce the potential for mistakes caused by wrong positional arguments and enforce the name of arguments provided to the function, `Typing` requires you to **call all of your type-checked functions by kwargs only**.
```
# Case 1
print("x :", x1)
print("embedding(x) :", embedding_module(x=x1).shape)
```
Now let's try the same for the `LSTMModule` in Case 2
```
# Case 2 [ERROR CELL]
x2 = torch.randn(1, 5, 1)
print("x :", x2)
print("lstm(x) :", lstm_module(x=x2)[0].shape) # Let's take all timestep outputs of the LSTM
```
-----
Now we get a type error stating that the number of output arguments provided does not match what is expected.
What exactly is going on here? Well, inside our `LSTMModule` class, we declare the output types to be a single NeuralType - an `EncodedRepresentation` of shape [B, T, C].
But the output of an LSTM layer is a tuple of two state values - the hidden state `h` and the cell state `c`!
So the neural type system raises an error saying that the number of output arguments does not match what is expected.
Let's fix the above.
```
class CorrectLSTMModule(LSTMModule): # Let's inherit the wrong class to make it easy to override
@property
def output_types(self):
return {
'h': NeuralType(axes=('B', 'T', 'C'), elements_type=EncodedRepresentation()),
'c': NeuralType(axes=('B', 'T', 'C'), elements_type=EncodedRepresentation()),
}
lstm_module = CorrectLSTMModule()
# Case 2
x2 = torch.randn(1, 5, 1)
print("x :", x2)
print("lstm(x) :", lstm_module(x=x2)[0].shape) # Let's take all timestep outputs of the LSTM `h` gate
```
------
Great! So now, the type checking system is happy.
If you looked closely, the outputs were ordinary Torch Tensors (this is good news; we don't want to be incompatible with torch Tensors after all!). So, where exactly is the type of information stored?
When the `output_types` is overridden, and valid torch tensors are returned as a result, these tensors are attached with the attribute `neural_type`. Let's inspect this -
```
emb_out = embedding_module(x=x1)
lstm_out = lstm_module(x=x2)[0]
assert hasattr(emb_out, 'neural_type')
assert hasattr(lstm_out, 'neural_type')
print("Embedding tensor :", emb_out.neural_type)
print("LSTM tensor :", lstm_out.neural_type)
```
-------
So we see that these tensors now have this attribute called `neural_type` and are the same shape.
This exercise's entire goal was to assert that the two outputs are semantically **not** the same object, even if they are the same shape.
Let's test this!
```
emb_out.neural_type.compare(lstm_out.neural_type)
emb_out.neural_type == lstm_out.neural_type
```
## Neural Types - Limitations
You might have noticed one interesting fact - our inputs were just `torch.Tensor` to both typed function calls, and they had no `neural_type` assigned to them.
So why did the type check system not raise any error?
This is to maintain compatibility - type checking is meant to work on a chain of function calls - and each of these functions should themselves be wrapped with the `@typecheck()` decorator. This is also done because we don't want to overtax the forward call with dozens of checks, and therefore we only type modules that perform some higher-order logical computation.
------
As an example, it is mostly unnecessary (but still possible) to type the input and output of every residual block of a ResNet model. However, it is practically important to type the encoder (no matter how many layers are inside it) and the decoder (the classification head) separately so that when one does fine-tuning, there is no semantic mismatch of the tensors input to the encoder and bound to the decoder.
-------
For this case, since it would be impractical to extend a class to attach a type to the input tensor, we can take a shortcut and directly attach the neural type to the input!
```
embedding_module = EmbeddingModule()
x1 = torch.randint(high=10, size=(1, 5))
# Attach correct neural type
x1.neural_type = NeuralType(('B', 'T'), Index())
print("embedding(x) :", embedding_module(x=x1).shape)
# Attach wrong neural type [ERROR CELL]
x1.neural_type = NeuralType(('B', 'T'), LabelsType())
print("embedding(x) :", embedding_module(x=x1).shape)
```
## Let's create the minGPT components
Now that we have a somewhat firm grasp of neural type checking, let's begin porting the minGPT example code. Once again, most of the code will be a direct port from the [minGPT repository](https://github.com/karpathy/minGPT).
Here, you will notice one thing. By just changing class imports, one `@typecheck()` on forward, and adding `input_types` and `output_types` (which are also entirely optional!), we are almost entirely done with the PyTorch Lightning port!
```
import math
from typing import List, Set, Dict, Tuple, Optional
import torch
import torch.nn as nn
from torch.nn import functional as F
```
## Creating Element Types
So far, we have used the Neural Types provided by the NeMo core. But we need not be restricted to the pre-defined element types!
Users have total flexibility in defining any hierarchy of element types as they please!
```
class AttentionType(EncodedRepresentation):
"""Basic Attention Element Type"""
class SelfAttentionType(AttentionType):
"""Self Attention Element Type"""
class CausalSelfAttentionType(SelfAttentionType):
"""Causal Self Attention Element Type"""
```
## Creating the modules
Neural Modules are generally top-level modules but can be used at any level of the module hierarchy.
For demonstration, we will treat an encoder comprising a block of Causal Self Attention modules as a typed Neural Module. Of course, we can also treat each Causal Self Attention layer itself as a neural module if we require it, but top-level modules are generally preferred.
```
class CausalSelfAttention(nn.Module):
"""
A vanilla multi-head masked self-attention layer with a projection at the end.
It is possible to use torch.nn.MultiheadAttention here but I am including an
explicit implementation here to show that there is nothing too scary here.
"""
def __init__(self, n_embd, block_size, n_head, attn_pdrop, resid_pdrop):
super().__init__()
assert n_embd % n_head == 0
self.n_head = n_head
# key, query, value projections for all heads
self.key = nn.Linear(n_embd, n_embd)
self.query = nn.Linear(n_embd, n_embd)
self.value = nn.Linear(n_embd, n_embd)
# regularization
self.attn_drop = nn.Dropout(attn_pdrop)
self.resid_drop = nn.Dropout(resid_pdrop)
# output projection
self.proj = nn.Linear(n_embd, n_embd)
# causal mask to ensure that attention is only applied to the left in the input sequence
self.register_buffer("mask", torch.tril(torch.ones(block_size, block_size))
.view(1, 1, block_size, block_size))
def forward(self, x, layer_past=None):
B, T, C = x.size()
# calculate query, key, values for all heads in batch and move head forward to be the batch dim
k = self.key(x).view(B, T, self.n_head, C // self.n_head).transpose(1, 2) # (B, nh, T, hs)
q = self.query(x).view(B, T, self.n_head, C // self.n_head).transpose(1, 2) # (B, nh, T, hs)
v = self.value(x).view(B, T, self.n_head, C // self.n_head).transpose(1, 2) # (B, nh, T, hs)
# causal self-attention; Self-attend: (B, nh, T, hs) x (B, nh, hs, T) -> (B, nh, T, T)
att = (q @ k.transpose(-2, -1)) * (1.0 / math.sqrt(k.size(-1)))
att = att.masked_fill(self.mask[:,:,:T,:T] == 0, float('-inf'))
att = F.softmax(att, dim=-1)
att = self.attn_drop(att)
y = att @ v # (B, nh, T, T) x (B, nh, T, hs) -> (B, nh, T, hs)
y = y.transpose(1, 2).contiguous().view(B, T, C) # re-assemble all head outputs side by side
# output projection
y = self.resid_drop(self.proj(y))
return y
class Block(nn.Module):
""" an unassuming Transformer block """
def __init__(self, n_embd, block_size, n_head, attn_pdrop, resid_pdrop):
super().__init__()
self.ln1 = nn.LayerNorm(n_embd)
self.ln2 = nn.LayerNorm(n_embd)
self.attn = CausalSelfAttention(n_embd, block_size, n_head, attn_pdrop, resid_pdrop)
self.mlp = nn.Sequential(
nn.Linear(n_embd, 4 * n_embd),
nn.GELU(),
nn.Linear(4 * n_embd, n_embd),
nn.Dropout(resid_pdrop),
)
def forward(self, x):
x = x + self.attn(self.ln1(x))
x = x + self.mlp(self.ln2(x))
return x
```
## Building the NeMo Model
Since a NeMo Model is comprised of various parts, we are going to iterate on the model step by step inside this notebook. As such, we will have multiple intermediate NeMo "Models", which will be partial implementations, and they will inherit each other iteratively.
In a complete implementation of a NeMo Model (as found in the NeMo collections), all of these components will generally be found in a single class.
Let's start by inheriting `ModelPT` - the core class of a PyTorch NeMo Model, which inherits the PyTorch Lightning Module.
-------
**Remember**:
- The NeMo equivalent of `torch.nn.Module` is the `NeuralModule`.
- The NeMo equivalent of the `LightningModule` is `ModelPT`.
```
import pytorch_lightning as ptl
from nemo.core import ModelPT
from omegaconf import OmegaConf
```
------
Next, let's construct the bare minimum implementation of the NeMo Model - just the constructor, the initializer of weights, and the forward method.
Initially, we will follow the steps of the minGPT implementation, and progressively refactor it for NeMo.
```
class PTLGPT(ptl.LightningModule):
def __init__(self,
# model definition args
vocab_size: int, # size of the vocabulary (number of possible tokens)
block_size: int, # length of the model's context window in time
n_layer: int, # depth of the model; number of Transformer blocks in sequence
n_embd: int, # the "width" of the model, number of channels in each Transformer
n_head: int, # number of heads in each multi-head attention inside each Transformer block
# model optimization args
learning_rate: float = 3e-4, # the base learning rate of the model
weight_decay: float = 0.1, # amount of regularizing L2 weight decay on MatMul ops
betas: Tuple[float, float] = (0.9, 0.95), # momentum terms (betas) for the Adam optimizer
embd_pdrop: float = 0.1, # \in [0,1]: amount of dropout on input embeddings
resid_pdrop: float = 0.1, # \in [0,1]: amount of dropout in each residual connection
attn_pdrop: float = 0.1, # \in [0,1]: amount of dropout on the attention matrix
):
super().__init__()
# save these for optimizer init later
self.learning_rate = learning_rate
self.weight_decay = weight_decay
self.betas = betas
# input embedding stem: drop(content + position)
self.tok_emb = nn.Embedding(vocab_size, n_embd)
self.pos_emb = nn.Parameter(torch.zeros(1, block_size, n_embd))
self.drop = nn.Dropout(embd_pdrop)
# deep transformer: just a sequence of transformer blocks
self.blocks = nn.Sequential(*[Block(n_embd, block_size, n_head, attn_pdrop, resid_pdrop) for _ in range(n_layer)])
# decoder: at the end one more layernorm and decode the answers
self.ln_f = nn.LayerNorm(n_embd)
self.head = nn.Linear(n_embd, vocab_size, bias=False) # no need for extra bias due to one in ln_f
self.block_size = block_size
self.apply(self._init_weights)
print("number of parameters: %e" % sum(p.numel() for p in self.parameters()))
def forward(self, idx):
b, t = idx.size()
assert t <= self.block_size, "Cannot forward, model block size is exhausted."
# forward the GPT model
token_embeddings = self.tok_emb(idx) # each index maps to a (learnable) vector
position_embeddings = self.pos_emb[:, :t, :] # each position maps to a (learnable) vector
x = self.drop(token_embeddings + position_embeddings)
x = self.blocks(x)
x = self.ln_f(x)
logits = self.head(x)
return logits
def get_block_size(self):
return self.block_size
def _init_weights(self, module):
"""
Vanilla model initialization:
- all MatMul weights \in N(0, 0.02) and biases to zero
- all LayerNorm post-normalization scaling set to identity, so weight=1, bias=0
"""
if isinstance(module, (nn.Linear, nn.Embedding)):
module.weight.data.normal_(mean=0.0, std=0.02)
if isinstance(module, nn.Linear) and module.bias is not None:
module.bias.data.zero_()
elif isinstance(module, nn.LayerNorm):
module.bias.data.zero_()
module.weight.data.fill_(1.0)
```
------
Let's create a PyTorch Lightning Model above, just to make sure it works !
```
m = PTLGPT(vocab_size=100, block_size=32, n_layer=1, n_embd=32, n_head=4)
```
------
Now, let's convert the above easily into a NeMo Model.
A NeMo Model constructor generally accepts only two things -
1) `cfg`: An OmegaConf DictConfig object that defines precisely the components required by the model to define its neural network architecture, data loader setup, optimizer setup, and any additional components needed for the model itself.
2) `trainer`: An optional Trainer from PyTorch Lightning if the NeMo model will be used for training. It can be set after construction (if required) using the `set_trainer` method. For this notebook, we will not be constructing the config for the Trainer object.
## Refactoring Neural Modules
As we discussed above, Neural Modules are generally higher-level components of the Model and can potentially be replaced by equivalent Neural Modules.
As we see above, the embedding modules, deep transformer network, and final decoder layer have all been combined inside the PyTorch Lightning implementation constructor.
------
However, the decoder could have been an RNN instead of a simple Linear layer, or it could have been a 1D-CNN instead.
Likewise, the deep encoder could potentially have a different implementation of Self Attention modules.
Such changes can no longer be made easily inside the above implementation. However, if we refactor these components into their respective NeuralModules, then we can easily replace them with equivalent modules we construct in the future!
### Refactoring the Embedding module
Let's first refactor out the embedding module from the above implementation
```
class GPTEmbedding(NeuralModule):
def __init__(self, vocab_size: int, n_embd: int, block_size: int, embd_pdrop: float = 0.0):
super().__init__()
# input embedding stem: drop(content + position)
self.tok_emb = nn.Embedding(vocab_size, n_embd)
self.pos_emb = nn.Parameter(torch.zeros(1, block_size, n_embd))
self.drop = nn.Dropout(embd_pdrop)
@typecheck()
def forward(self, idx):
b, t = idx.size()
# forward the GPT model
token_embeddings = self.tok_emb(idx) # each index maps to a (learnable) vector
position_embeddings = self.pos_emb[:, :t, :] # each position maps to a (learnable) vector
x = self.drop(token_embeddings + position_embeddings)
return x
@property
def input_types(self):
return {
'idx': NeuralType(('B', 'T'), Index())
}
@property
def output_types(self):
return {
'embeddings': NeuralType(('B', 'T', 'C'), EmbeddedTextType())
}
```
### Refactoring the Encoder
Next, let's refactor the Encoder - the multi layer Transformer Encoder
```
class GPTTransformerEncoder(NeuralModule):
def __init__(self, n_embd: int, block_size: int, n_head: int, n_layer: int, attn_pdrop: float = 0.0, resid_pdrop: float = 0.0):
super().__init__()
self.blocks = nn.Sequential(*[Block(n_embd, block_size, n_head, attn_pdrop, resid_pdrop)
for _ in range(n_layer)])
@typecheck()
def forward(self, embed):
return self.blocks(embed)
@property
def input_types(self):
return {
'embed': NeuralType(('B', 'T', 'C'), EmbeddedTextType())
}
@property
def output_types(self):
return {
'encoding': NeuralType(('B', 'T', 'C'), CausalSelfAttentionType())
}
```
### Refactoring the Decoder
Finally, let's refactor the Decoder - the small one-layer feed-forward network to decode the answer.
-------
Note an interesting detail - the `input_types` of the Decoder accepts the generic `EncodedRepresentation()`, whereas the `neural_type` of the `GPTTransformerEncoder` has the `output_type` of `CausalSelfAttentionType`.
This is semantically *not* a mismatch! As you can see above in the inheritance chart, we declare `EncodedRepresentation` -> `AttentionType` -> `SelfAttentionType` -> `CausalSelfAttentionType`.
Such an inheritance hierarchy for the `element_type` allows future encoders (which also have a neural output type of at least `EncodedRepresentation`) to be swapped in place of the current GPT Causal Self Attention Encoder while keeping the rest of the NeMo model working just fine!
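As a small standalone sketch of that compatibility claim (the exact `NeuralTypeComparisonResult` printed may depend on the NeMo version):
```
# Compare the generic type a decoder might expect against the specialized type our encoder produces
generic = NeuralType(('B', 'T', 'C'), EncodedRepresentation())
specialized = NeuralType(('B', 'T', 'C'), CausalSelfAttentionType())
print(generic.compare(specialized))
```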
```
class GPTDecoder(NeuralModule):
def __init__(self, n_embd: int, vocab_size: int):
super().__init__()
self.ln_f = nn.LayerNorm(n_embd)
self.head = nn.Linear(n_embd, vocab_size, bias=False) # no need for extra bias due to one in ln_f
@typecheck()
def forward(self, encoding):
x = self.ln_f(encoding)
logits = self.head(x)
return logits
@property
def input_types(self):
return {
'encoding': NeuralType(('B', 'T', 'C'), EncodedRepresentation())
}
@property
def output_types(self):
return {
'logits': NeuralType(('B', 'T', 'C'), LogitsType())
}
```
### Refactoring the NeMo GPT Model
Now that we have 3 NeuralModules for the embedding, the encoder, and the decoder, let's refactor the NeMo model to take advantage of this refactor!
This time, we inherit from `ModelPT` instead of the general `LightningModule`.
```
class AbstractNeMoGPT(ModelPT):
def __init__(self, cfg: OmegaConf, trainer: ptl.Trainer = None):
super().__init__(cfg=cfg, trainer=trainer)
# input embedding stem: drop(content + position)
self.embedding = self.from_config_dict(self.cfg.embedding)
# deep transformer: just a sequence of transformer blocks
self.encoder = self.from_config_dict(self.cfg.encoder)
# decoder: at the end one more layernorm and decode the answers
self.decoder = self.from_config_dict(self.cfg.decoder)
self.block_size = self.cfg.embedding.block_size
self.apply(self._init_weights)
print("number of parameters: %e" % self.num_weights)
@typecheck()
def forward(self, idx):
b, t = idx.size()
assert t <= self.block_size, "Cannot forward, model block size is exhausted."
# forward the GPT model
# Remember: Only kwargs are allowed !
e = self.embedding(idx=idx)
x = self.encoder(embed=e)
logits = self.decoder(encoding=x)
return logits
def get_block_size(self):
return self.block_size
def _init_weights(self, module):
"""
Vanilla model initialization:
- all MatMul weights \in N(0, 0.02) and biases to zero
- all LayerNorm post-normalization scaling set to identity, so weight=1, bias=0
"""
if isinstance(module, (nn.Linear, nn.Embedding)):
module.weight.data.normal_(mean=0.0, std=0.02)
if isinstance(module, nn.Linear) and module.bias is not None:
module.bias.data.zero_()
elif isinstance(module, nn.LayerNorm):
module.bias.data.zero_()
module.weight.data.fill_(1.0)
@property
def input_types(self):
return {
'idx': NeuralType(('B', 'T'), Index())
}
@property
def output_types(self):
return {
'logits': NeuralType(('B', 'T', 'C'), LogitsType())
}
```
## Creating a config for a Model
At first glance, not much changed compared to the PyTorch Lightning implementation above. Other than the constructor, which now accepts a config, nothing changed at all!
NeMo operates on the concept of a NeMo Model being accompanied by a corresponding config dict (instantiated as an OmegaConf object). This enables us to rapidly prototype the model using Hydra. It also brings various other benefits - such as hyperparameter optimization and serialization/deserialization of NeMo models.
Let's look at how actually to construct such config objects!
```
# model definition args (required)
# ================================
# vocab_size: int # size of the vocabulary (number of possible tokens)
# block_size: int # length of the model's context window in time
# n_layer: int # depth of the model; number of Transformer blocks in sequence
# n_embd: int # the "width" of the model, number of channels in each Transformer
# n_head: int # number of heads in each multi-head attention inside each Transformer block
# model definition args (optional)
# ================================
# embd_pdrop: float = 0.1, # \in [0,1]: amount of dropout on input embeddings
# resid_pdrop: float = 0.1, # \in [0,1]: amount of dropout in each residual connection
# attn_pdrop: float = 0.1, # \in [0,1]: amount of dropout on the attention matrix
```
------
As we look at the required parameters above, we need a way to tell OmegaConf that these values are currently not set, but the user should set them before we use them.
OmegaConf supports such behavior using the `MISSING` value. A similar effect can be achieved in YAML configs by using `???` as a placeholder.
```
from omegaconf import MISSING
# Let's create a utility for building the class path
def get_class_path(cls):
return f'{cls.__module__}.{cls.__name__}'
```
### Structure of a Model config
Let's first create a config for the common components of the model level config -
```
common_config = OmegaConf.create({
'vocab_size': MISSING,
'block_size': MISSING,
'n_layer': MISSING,
'n_embd': MISSING,
'n_head': MISSING,
})
```
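For reference, the same "must be set later" behaviour can be expressed in YAML with the `???` placeholder - a small sketch, loading the YAML from a string here rather than a file:
```
# The ??? placeholder in YAML is equivalent to MISSING when loaded by OmegaConf
yaml_common_config = OmegaConf.create("""
vocab_size: ???
block_size: ???
n_layer: ???
n_embd: ???
n_head: ???
""")
print(OmegaConf.is_missing(yaml_common_config, 'vocab_size'))  # True until the user sets it
```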
-----
The model config right now is still being built - it needs to contain a lot more details!
A complete Model Config should have the sub-configs of all of its top-level modules as well. This means the configs of the `embedding`, `encoder`, and the `decoder`.
### Structure of sub-module config
For top-level models, we generally don't change the actual module very often, and instead, primarily change the hyperparameters of that model.
So we will make use of `Hydra`'s Class instantiation method - which can easily be accessed via the class method `ModelPT.from_config_dict()`.
Let's take a few examples below -
```
embedding_config = OmegaConf.create({
'_target_': get_class_path(GPTEmbedding),
'vocab_size': '${model.vocab_size}',
'n_embd': '${model.n_embd}',
'block_size': '${model.block_size}',
'embd_pdrop': 0.1
})
encoder_config = OmegaConf.create({
'_target_': get_class_path(GPTTransformerEncoder),
'n_embd': '${model.n_embd}',
'block_size': '${model.block_size}',
'n_head': '${model.n_head}',
'n_layer': '${model.n_layer}',
'attn_pdrop': 0.1,
'resid_pdrop': 0.1
})
decoder_config = OmegaConf.create({
'_target_': get_class_path(GPTDecoder),
# n_embd: int, vocab_size: int
'n_embd': '${model.n_embd}',
'vocab_size': '${model.vocab_size}'
})
```
##### What is `_target_`?
--------
In the above config, we see a `_target_` in the config. `_target_` is usually a full classpath to the actual class in the python package/user local directory. It is required for Hydra to locate and instantiate the model from its path correctly.
So why do we want to set a classpath?
In general, when developing models, we don't often change the encoder or the decoder, but we do change the hyperparameters of the encoder and decoder.
This notation helps us keep the Model level declaration of the forward step neat and precise. It also logically helps us demarcate which parts of the model can be easily replaced - in the future, we can easily replace the encoder with some other type of self-attention block or the decoder with an RNN or 1D-CNN neural module (as long as they have the same Neural Type definition as the current blocks).
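As a quick look at what actually ends up in `_target_`, the helper defined earlier can be printed directly (the module prefix depends on where the classes are defined - in a notebook it is typically `__main__`):
```
# The fully qualified class paths that Hydra/NeMo will import when instantiating each sub-module
print(get_class_path(GPTEmbedding))
print(get_class_path(GPTTransformerEncoder))
print(get_class_path(GPTDecoder))
```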
##### What is the `${}` syntax?
-------
OmegaConf, and by extension, Hydra, supports Variable Interpolation. As you can see in the `__init__` of embedding, encoder, and decoder neural modules, they often share many parameters between each other.
It would become tedious and error-prone to set each of these constructors' values separately in each of the embedding, encoder, and decoder configs.
So instead, we define standard keys inside of the `model` level config and then interpolate these values inside of the respective configs!
### Attaching the model and module-level configs
So now, we have a Model level and per-module level configs for the core components. Sub-module configs generally fall under the "model" namespace, but you have the flexibility to define the structure as you require.
Let's attach them!
```
model_config = OmegaConf.create({
'model': common_config
})
# Then let's attach the sub-module configs
model_config.model.embedding = embedding_config
model_config.model.encoder = encoder_config
model_config.model.decoder = decoder_config
```
-----
Let's print this config!
```
print(OmegaConf.to_yaml(model_config))
```
-----
Wait, why did OmegaConf not fill in the value of the variable interpolation for the configs yet?
This is because OmegaConf takes a deferred approach to variable interpolation. To force it ahead of time, we can use the following snippet -
```
temp_config = OmegaConf.create(OmegaConf.to_container(model_config, resolve=True))
print(OmegaConf.to_yaml(temp_config))
```
-----
Now that we have a config, let's try to create an object of the NeMo Model !
```
import copy
# Let's work on a copy of the model config and update it before we send it into the Model.
cfg = copy.deepcopy(model_config)
# Let's set the values of the config (for some plausible small model)
cfg.model.vocab_size = 100
cfg.model.block_size = 128
cfg.model.n_layer = 1
cfg.model.n_embd = 32
cfg.model.n_head = 4
print(OmegaConf.to_yaml(cfg))
# Try to create a model with this config [ERROR CELL]
m = AbstractNeMoGPT(cfg.model)
```
-----
You will note that we added the `Abstract` tag to this NeMo Model for a reason - when we try to instantiate it, it raises an error telling us that we need to implement certain specific methods.
1) `setup_training_data` & `setup_validation_data` - All NeMo models should implement two data loaders - the training data loader and the validation data loader. Optionally, they can go one step further and also implement the `setup_test_data` method to add support for evaluating the Model on its own.
Why do we enforce this? NeMo Models are meant to be unified, cohesive objects containing the details about the neural network underlying the Model, as well as the data loaders to train, validate, and optionally test that Model.
In doing so, once the Model is created/deserialized, it would take just a few more steps to train the Model from scratch / fine-tune/evaluate the Model on any data that the user provides, as long as this user-provided dataset is in a format supported by the Dataset / DataLoader that is used by this Model!
2) `list_available_models` - This is a utility method to provide a list of pre-trained NeMo models to the user from the cloud.
Typically, NeMo models can be easily packaged into a tar file (which we call a .nemo file in the earlier primer notebook). These tar files contain the model config + the pre-trained checkpoint weights of the Model, and can easily be downloaded from some cloud service.
For this notebook, we will not be implementing this method.
--------
Finally, let's create a concrete implementation of the above NeMo Model!
```
from nemo.core.classes.common import PretrainedModelInfo
class BasicNeMoGPT(AbstractNeMoGPT):
@classmethod
def list_available_models(cls) -> PretrainedModelInfo:
return None
def setup_training_data(self, train_data_config: OmegaConf):
self._train_dl = None
def setup_validation_data(self, val_data_config: OmegaConf):
self._validation_dl = None
def setup_test_data(self, test_data_config: OmegaConf):
self._test_dl = None
```
------
Now let's try to create an object of the `BasicNeMoGPT` model
```
m = BasicNeMoGPT(cfg.model)
```
## Setting up train-val-test steps
The above `BasicNeMoGPT` Model is a basic PyTorch Lightning Module, with some added functionality -
1) Neural Type checks support - as defined in the Model as well as the internal modules.
2) Save and restore of the Model (in the trivial case) to a tarfile.
But as the Model is right now, it crucially does not support PyTorch Lightning's `Trainer`. As such, while this Model can be called manually, it cannot be easily trained or evaluated by using the PyTorch Lightning framework.
------
Let's begin adding support for this then -
```
class BasicNeMoGPTWithSteps(BasicNeMoGPT):
def step_(self, split, batch, batch_idx=None):
idx, targets = batch
logits = self(idx=idx)
loss = F.cross_entropy(logits.view(-1, logits.size(-1)), targets.view(-1))
key = 'loss' if split == 'train' else f"{split}_loss"
return {key: loss}
def training_step(self, *args, **kwargs):
return self.step_('train', *args, **kwargs)
def validation_step(self, *args, **kwargs):
return self.step_('val', *args, **kwargs)
def test_step(self, *args, **kwargs):
return self.step_('test', *args, **kwargs)
# This is useful for multiple validation data loader setup
def multi_validation_epoch_end(self, outputs, dataloader_idx: int = 0):
val_loss_mean = torch.stack([x['val_loss'] for x in outputs]).mean()
return {'val_loss': val_loss_mean}
# This is useful for multiple test data loader setup
def multi_test_epoch_end(self, outputs, dataloader_idx: int = 0):
test_loss_mean = torch.stack([x['test_loss'] for x in outputs]).mean()
return {'test_loss': test_loss_mean}
m = BasicNeMoGPTWithSteps(cfg=cfg.model)
```
### Setup for Multi Validation and Multi Test data loaders
As discussed in the NeMo Primer, NeMo has in-built support for multiple data loaders for validation and test steps. Therefore, as an example of how easy it is to add such support, we include the `multi_validation_epoch_end` and `multi_test_epoch_end` overrides.
It is also practically essential to collate results from more than one distributed GPU, and then aggregate the results properly at the end of the epoch. NeMo strictly enforces the correct collation of results, even if you work on only one device! Future-proofing is baked into the model design for this case!
Therefore NeMo provides the above two generic methods to support aggregation and simultaneously support multiple datasets!
**Please note, you can simply prefix your already existing `validation_epoch_end` and `test_epoch_end` implementations with `multi_` in the name, and that alone is sufficient to enable multi-dataset and multi-GPU support!**
------
**Note: To disable multi-dataset support, simply override `validation_epoch_end` and `test_epoch_end` instead of `multi_validation_epoch_end` and `multi_test_epoch_end`!**
## Setting up the optimizer / scheduler
We are relatively close to reaching feature parity with the MinGPT Model! But we are missing a crucial piece - the optimizer.
All NeMo Model's come with a default implementation of `setup_optimization()`, which will parse the provided model config to obtain the `optim` and `sched` sub-configs, and automatically configure the optimizer and scheduler.
If training GPT was as simple as plugging in an Adam optimizer over all the parameters with a cosine weight decay schedule, we could do that from the config alone.
-------
But GPT is not such a trivial model - more specifically, it requires weight decay to be applied to the weight matrices but not to the biases, the embedding matrix, or the LayerNorm layers.
We can set aside the default support that NeMo provides for such special cases and instead utilize the PyTorch Lightning method `configure_optimizers` to perform the same task.
-------
Note, for NeMo Models, `configure_optimizers` is implemented as a trivial call to `setup_optimization()` followed by returning the generated optimizer and scheduler! So we can override the `configure_optimizers` method and manage the optimizer creation manually!
NeMo's goal is to provide usable defaults for the general case and simply back off to either PyTorch Lightning or PyTorch nn.Module itself in cases where the additional flexibility becomes necessary!
```
class BasicNeMoGPTWithOptim(BasicNeMoGPTWithSteps):
def configure_optimizers(self):
"""
This long function is unfortunately doing something very simple and is being very defensive:
We are separating out all parameters of the model into two buckets: those that will experience
weight decay for regularization and those that won't (biases, and layernorm/embedding weights).
We are then returning the PyTorch optimizer object.
"""
# separate out all parameters to those that will and won't experience weight decay
decay = set()
no_decay = set()
whitelist_weight_modules = (torch.nn.Linear, )
blacklist_weight_modules = (torch.nn.LayerNorm, torch.nn.Embedding)
for mn, m in self.named_modules():
for pn, p in m.named_parameters():
fpn = '%s.%s' % (mn, pn) if mn else pn # full param name
if pn.endswith('bias'):
# all biases will not be decayed
no_decay.add(fpn)
elif pn.endswith('weight') and isinstance(m, whitelist_weight_modules):
# weights of whitelist modules will be weight decayed
decay.add(fpn)
elif pn.endswith('weight') and isinstance(m, blacklist_weight_modules):
# weights of blacklist modules will NOT be weight decayed
no_decay.add(fpn)
# special case the position embedding parameter in the root GPT module as not decayed
no_decay.add('embedding.pos_emb')
# validate that we considered every parameter
param_dict = {pn: p for pn, p in self.named_parameters()}
inter_params = decay & no_decay
union_params = decay | no_decay
assert len(inter_params) == 0, "parameters %s made it into both decay/no_decay sets!" % (str(inter_params), )
assert len(param_dict.keys() - union_params) == 0, "parameters %s were not separated into either decay/no_decay set!" \
% (str(param_dict.keys() - union_params), )
# create the pytorch optimizer object
optim_groups = [
{"params": [param_dict[pn] for pn in sorted(list(decay))], "weight_decay": self.cfg.optim.weight_decay},
{"params": [param_dict[pn] for pn in sorted(list(no_decay))], "weight_decay": 0.0},
]
optimizer = torch.optim.AdamW(optim_groups, lr=self.cfg.optim.lr, betas=self.cfg.optim.betas)
return optimizer
m = BasicNeMoGPTWithOptim(cfg=cfg.model)
```
-----
Now let's setup the config for the optimizer !
```
OmegaConf.set_struct(cfg.model, False)
optim_config = OmegaConf.create({
'lr': 3e-4,
'weight_decay': 0.1,
'betas': [0.9, 0.95]
})
cfg.model.optim = optim_config
OmegaConf.set_struct(cfg.model, True)
```
## Setting up the dataset / data loaders
So far, we have been able to almost entirely replicate the MinGPT implementation.
Remember, NeMo models should contain all of the logic to load the Dataset and DataLoader for at least the train and validation step.
We temporarily provided empty implementations to get around it till now, but let's fill that in now!
-------
**Note for datasets**: Below, we will show an example using a very small dataset called `tiny_shakespeare`, found at the original [char-rnn repository](https://github.com/karpathy/char-rnn), but practically you could use any text corpus. The one suggested in minGPT is available at http://mattmahoney.net/dc/textdata.html
### Creating the Dataset
NeMo has Neural Type checking support, even for Datasets! It's just a minor change of the import in most cases and one difference in how we handle `collate_fn`.
We could paste the dataset info from minGPT, and you'd only need to make 2 changes!
-----
In this example, we will be writing a thin subclass over the datasets provided by `nlp` from HuggingFace!
```
from nemo.core import Dataset
from torch.utils import data
from torch.utils.data.dataloader import DataLoader
class TinyShakespeareDataset(Dataset):
def __init__(self, data_path, block_size, crop=None, override_vocab=None):
# load the data and crop it appropriately
with open(data_path, 'r') as f:
if crop is None:
data = f.read()
else:
f.seek(crop[0])
data = f.read(crop[1])
# build a vocabulary from data or inherit it
vocab = sorted(list(set(data))) if override_vocab is None else override_vocab
# Add UNK
special_tokens = ['<PAD>', '<UNK>'] # We use just <UNK> and <PAD> in the call, but can add others.
if not override_vocab:
vocab = [*special_tokens, *vocab] # Update train vocab with special tokens
data_size, vocab_size = len(data), len(vocab)
print('data of crop %s has %d characters, vocab of size %d.' % (str(crop), data_size, vocab_size))
print('Num samples in dataset : %d' % (data_size // block_size))
self.stoi = { ch:i for i,ch in enumerate(vocab) }
self.itos = { i:ch for i,ch in enumerate(vocab) }
self.block_size = block_size
self.vocab_size = vocab_size
self.data = data
self.vocab = vocab
self.special_tokens = special_tokens
def __len__(self):
return len(self.data) // self.block_size
def __getitem__(self, idx):
# attempt to fetch a chunk of (block_size + 1) items, but (block_size) will work too
chunk = self.data[idx*self.block_size : min(len(self.data), (idx+1)*self.block_size + 1)]
# map the string into a sequence of integers
ixes = [self.stoi[s] if s in self.stoi else self.stoi['<UNK>'] for s in chunk ]
# if stars align (last idx and len(self.data) % self.block_size == 0), pad with <PAD>
if len(ixes) < self.block_size + 1:
assert len(ixes) == self.block_size # i believe this is the only way this could happen, make sure
ixes.append(self.stoi['<PAD>'])
dix = torch.tensor(ixes, dtype=torch.long)
return dix[:-1], dix[1:]
@property
def output_types(self):
return {
'input': NeuralType(('B', 'T'), Index()),
'target': NeuralType(('B', 'T'), LabelsType())
}
```
------
We didn't have to change anything until here. How then is type-checking done?
NeMo does type-checking inside of the collate function implementation itself! In this case, it is not necessary to override the `collate_fn` inside the Dataset, but if we did need to override it, **NeMo requires that the private method `_collate_fn` be overridden instead**.
We can then use data loaders with minor modifications!
**Also, there is no need to implement the `input_types` for Dataset, as they are the ones generating the input for the model!**
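As a hedged sketch of the note above (not required for this tutorial, and the subclass name is made up), a custom collation would be wired in by overriding the private `_collate_fn` method, leaving NeMo's public `collate_fn` free to wrap it with the type checks:
```
class PaddedTinyShakespeareDataset(TinyShakespeareDataset):
    def _collate_fn(self, batch):
        # `batch` is a list of (input, target) tensor pairs produced by __getitem__
        inputs = torch.stack([item[0] for item in batch], dim=0)
        targets = torch.stack([item[1] for item in batch], dim=0)
        return inputs, targets
```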
-----
Let's prepare the dataset that we are going to use - Tiny Shakespeare from the following codebase [char-rnn](https://github.com/karpathy/char-rnn).
```
import os
if not os.path.exists('tiny-shakespeare.txt'):
!wget https://raw.githubusercontent.com/jcjohnson/torch-rnn/master/data/tiny-shakespeare.txt
!head -n 5 tiny-shakespeare.txt
train_dataset = TinyShakespeareDataset('tiny-shakespeare.txt', cfg.model.block_size, crop=(0, int(1e6)))
val_dataset = TinyShakespeareDataset('tiny-shakespeare.txt', cfg.model.block_size, crop=(int(1e6), int(50e3)), override_vocab=train_dataset.vocab)
test_dataset = TinyShakespeareDataset('tiny-shakespeare.txt', cfg.model.block_size, crop=(int(1.05e6), int(100e3)), override_vocab=train_dataset.vocab)
```
### Setting up dataset/data loader support in the Model
So we now know our data loader works. Let's integrate it as part of the Model itself!
To do this, we use the three special attributes of the NeMo Model - `self._train_dl`, `self._validation_dl` and `self._test_dl`. Once you construct your DataLoader, assign it to one of these three variables.
For multi-data loader support, the same applies! NeMo will automatically handle the management of multiple data loaders for you!
```
class NeMoGPT(BasicNeMoGPTWithOptim):
def _setup_data_loader(self, cfg):
if self.vocab is None:
override_vocab = None
else:
override_vocab = self.vocab
dataset = TinyShakespeareDataset(
data_path=cfg.data_path,
block_size=cfg.block_size,
crop=tuple(cfg.crop) if 'crop' in cfg else None,
override_vocab=override_vocab
)
if self.vocab is None:
self.vocab = dataset.vocab
return DataLoader(
dataset=dataset,
batch_size=cfg.batch_size,
shuffle=cfg.shuffle,
collate_fn=dataset.collate_fn, # <-- this is necessary for type checking
pin_memory=cfg.pin_memory if 'pin_memory' in cfg else False,
num_workers=cfg.num_workers if 'num_workers' in cfg else 0
)
def setup_training_data(self, train_data_config: OmegaConf):
self.vocab = None
self._train_dl = self._setup_data_loader(train_data_config)
def setup_validation_data(self, val_data_config: OmegaConf):
self._validation_dl = self._setup_data_loader(val_data_config)
def setup_test_data(self, test_data_config: OmegaConf):
self._test_dl = self._setup_data_loader(test_data_config)
```
### Creating the dataset / dataloader config
The final step to setup this model is to add the `train_ds`, `validation_ds` and `test_ds` configs inside the model config!
```
OmegaConf.set_struct(cfg.model, False)
# Set the data path and update vocabulary size
cfg.model.data_path = 'tiny-shakespeare.txt'
cfg.model.vocab_size = train_dataset.vocab_size
OmegaConf.set_struct(cfg.model, True)
train_ds = OmegaConf.create({
'data_path': '${model.data_path}',
'block_size': '${model.block_size}',
'crop': [0, int(1e6)],
'batch_size': 64,
'shuffle': True,
})
validation_ds = OmegaConf.create({
'data_path': '${model.data_path}',
'block_size': '${model.block_size}',
'crop': [int(1e6), int(50e3)],
'batch_size': 4,
'shuffle': False,
})
test_ds = OmegaConf.create({
'data_path': '${model.data_path}',
'block_size': '${model.block_size}',
'crop': [int(1.05e6), int(100e3)],
'batch_size': 4,
'shuffle': False,
})
# Attach to the model config
OmegaConf.set_struct(cfg.model, False)
cfg.model.train_ds = train_ds
cfg.model.validation_ds = validation_ds
cfg.model.test_ds = test_ds
OmegaConf.set_struct(cfg.model, True)
# Let's see the config now !
print(OmegaConf.to_yaml(cfg))
# Let's try creating a model now !
model = NeMoGPT(cfg=cfg.model)
```
-----
All the data loaders load properly ! Yay!
# Evaluate the model - end to end!
Now that the data loaders have been set up, all that's left is to train and test the model! We have most of the components required by this model - the train, val and test data loaders, the optimizer, and the type-checked forward step to perform the train-validation-test steps!
But training a GPT model from scratch is not the goal of this primer, so instead, let's do a sanity check by merely testing the model for a few steps using random initial weights.
The above will ensure that -
1) Our data loaders work as intended
2) The type checking system assures us that our Neural Modules are performing their forward step correctly.
3) The loss is calculated, and therefore the model runs end to end, ultimately supporting PyTorch Lightning.
```
if torch.cuda.is_available():
cuda = 1
else:
cuda = 0
trainer = ptl.Trainer(gpus=cuda, test_percent_check=1.0)
trainer.test(model)
```
# Saving and restoring models
NeMo internally keeps track of the model configuration, as well as the model checkpoints and parameters.
As long as your NeMo follows the above general guidelines, you can call the `save_to` and `restore_from` methods to save and restore your models!
```
model.save_to('gpt_model.nemo')
!ls -d -- *.nemo
temp_model = NeMoGPT.restore_from('gpt_model.nemo')
# [ERROR CELL]
temp_model.setup_test_data(temp_model.cfg.test_ds)
```
-----
Hmm, it seems it wasn't so easy in this case. Non-trivial models have non-trivial issues!
Remember, our NeMoGPT model sets its `self.vocab` inside the `setup_training_data` step. But that depends on the vocabulary generated by the train set... which is **not** restored during model restoration (unless you call `setup_training_data` explicitly!).
We can quickly resolve this issue by constructing an external data file to enable save and restore support, and NeMo supports that too! We will use the `register_artifact` API in NeMo to support external files being attached to the .nemo checkpoint.
```
class NeMoGPTv2(NeMoGPT):
def setup_training_data(self, train_data_config: OmegaConf):
self.vocab = None
self._train_dl = self._setup_data_loader(train_data_config)
# Save the vocab into a text file for now
with open('vocab.txt', 'w') as f:
for token in self.vocab:
f.write(f"{token}<SEP>")
# This is going to register the file into .nemo!
# When you later use .save_to(), it will copy this file into the tar file.
self.register_artifact(None, 'vocab.txt')
def setup_validation_data(self, val_data_config: OmegaConf):
# This is going to try to find the same file, and if it fails,
# it will use the copy in .nemo
vocab_file = self.register_artifact(None, 'vocab.txt')
with open(vocab_file, 'r') as f:
vocab = []
vocab = f.read().split('<SEP>')[:-1] # the -1 here is for the dangling <SEP> token in the file
self.vocab = vocab
self._validation_dl = self._setup_data_loader(val_data_config)
def setup_test_data(self, test_data_config: OmegaConf):
# This is going to try to find the same file, and if it fails,
# it will use the copy in .nemo
vocab_file = self.register_artifact(None, 'vocab.txt')
with open(vocab_file, 'r') as f:
vocab = []
vocab = f.read().split('<SEP>')[:-1] # the -1 here is for the dangling <SEP> token in the file
self.vocab = vocab
self._test_dl = self._setup_data_loader(test_data_config)
# Let's try creating a model now !
model = NeMoGPTv2(cfg=cfg.model)
# Now let's try to save and restore !
model.save_to('gpt_model.nemo')
temp_model = NeMoGPTv2.restore_from('gpt_model.nemo')
temp_model.setup_multiple_test_data(temp_model.cfg.test_ds)
if torch.cuda.is_available():
cuda = 1
else:
cuda = 0
trainer = ptl.Trainer(gpus=cuda, test_percent_check=1.0)
trainer.test(model)
```
------
There we go! Now our model can be serialized and de-serialized without any issue, even with an external vocab file!
| github_jupyter |
```
import pandas as pd
import numpy as np
#upload the csv file or
#!git clone
#and locate the csv and change location
df=pd.read_csv("/content/T1.csv", engine='python')
df.head()
lst=df["Wind Speed (m/s)"]
lst
max(lst)
min(lst)
lst=list(df["Wind Speed (m/s)"])
# Python program to get average of a list
def Average(lst):
return sum(lst) / len(lst)
# Driver Code
average = Average(lst)
# Printing average of the list
print("Average of the list =", round(average, 2))
for i in range(len(lst)):
lst[i]=round(lst[i],0)
lst
# Python program to count the frequency of
# elements in a list using a dictionary
def CountFrequency(my_list):
# Creating an empty dictionary
freq = {}
for item in my_list:
if (item in freq):
freq[item] += 1
else:
freq[item] = 1
for key, value in freq.items():
print ("% d : % d"%(key, value))
return freq
f=CountFrequency(lst)
dictionary_items = f.items()
sorted_items = sorted(dictionary_items)
sorted_items
#x wind speed
#y frequency
x=[]
y=[]
for each in sorted_items:
print(each)
x.append(each[0])
y.append(each[1])
x
y
ybar=np.array(y)/5
ybar=ybar/10
xbar=np.array(x)
from scipy import stats
import matplotlib.pyplot as plt
plt.figure(figsize=(20,8))
plt.style.use('dark_background')
#plt.rcParams["font.family"] = "Times New Roman"
plt.rcParams["font.size"] = "16"
plt.title('Actual Distribution of Wind Speed in a Practical Scenario', fontsize=30)
plt.grid(False)
from scipy.interpolate import make_interp_spline, BSpline
T,power=xbar,ybar
# 300 represents number of points to make between T.min and T.max
xnew = np.linspace(T.min(), T.max(), 300)
spl = make_interp_spline(T, power, k=3) # type: BSpline
power_smooth = spl(xnew)
plt.plot(xnew, power_smooth,color="w")
#plt.show()
#plt.plot(xbar, ybar)
#plt.hist(x, bins=np.linspace(0, 16, 33), normed=True, alpha=0.5);
width=0.8
bar1=plt.bar(xbar, ybar, width,color="y")
for rect,val in zip(bar1,ybar):
height = rect.get_height()
#print(val)
if(val==0):
plt.text(rect.get_x() + rect.get_width()/2.0, height+0.01, str("-"), ha='center', va='bottom',fontsize=20)
else:
plt.text(rect.get_x() + rect.get_width()/2.0, height+2, str(int(round(val,0))), ha='center', va='bottom',fontsize=12)
#plt.xticks(np.arange(25) + width , list(range(25)))
plt.rcParams['xtick.labelsize']=16
plt.rcParams['ytick.labelsize']=16
plt.xlabel('Wind Speed(m/s)', fontsize=18)
plt.ylabel('Frequency[%]', fontsize=18)
plt.show()
def percentage(y):
#print(y)
tot=y.sum()
#print(tot)
y=y/tot
return y*100
ybar=percentage(np.array(y))
#print(ybar)
xbar=np.array(x)
from scipy import stats
import matplotlib.pyplot as plt
plt.figure(figsize=(20,8))
plt.style.use('dark_background')
#plt.rcParams["font.family"] = "Times New Roman"
plt.rcParams["font.size"] = "16"
plt.title('Actual Distribution of Wind Speed in a Practical Scenario', fontsize=30)
plt.grid(False)
from scipy.interpolate import make_interp_spline, BSpline
T,power=xbar,ybar
# 300 represents number of points to make between T.min and T.max
xnew = np.linspace(T.min(), T.max(), 300)
spl = make_interp_spline(T, power, k=3) # type: BSpline
power_smooth = spl(xnew)
plt.plot(xnew, power_smooth,color="w")
#plt.show()
#plt.plot(xbar, ybar)
#plt.hist(x, bins=np.linspace(0, 16, 33), normed=True, alpha=0.5);
width=0.8
bar1=plt.bar(xbar, ybar, width,color="y")
for rect,val in zip(bar1,ybar):
height = rect.get_height()
#print(val)
if(val==0):
plt.text(rect.get_x() + rect.get_width()/2.0, height+0.01, str("-"), ha='center', va='bottom',fontsize=20)
else:
plt.text(rect.get_x() + rect.get_width()/2.0, height+0.2, str(round(val,1)), ha='center', va='bottom',fontsize=12)
#plt.xticks(np.arange(25) + width , list(range(25)))
plt.rcParams['xtick.labelsize']=16
plt.rcParams['ytick.labelsize']=16
plt.xlabel('Wind Speed(m/s)', fontsize=18)
plt.ylabel('Frequency[%]', fontsize=18)
plt.savefig("actual_distribution.png" ,dpi=100)
plt.show()
from scipy import stats
import matplotlib.pyplot as plt
#input for pseudo data
N = 100
Kappa_in = 2.08
Lambda_in = 8.97
a_in = 1
loc_in = 0
#Generate data from given input
data = stats.exponweib.rvs(a=a_in,c=Kappa_in, loc=loc_in, scale=Lambda_in, size = N)
#The a and loc are fixed in the fit since it is standard to assume they are known
a_out, Kappa_out, loc_out, Lambda_out = stats.exponweib.fit(data, f0=a_in,floc=loc_in)
#Plot
bins = range(25)
fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)
y=stats.exponweib.pdf(bins, a=a_out,c=Kappa_out,loc=loc_out,scale = Lambda_out)
ax.plot(bins,y*1000)
#ax.hist(data, bins = bins , alpha=0.5)
#ax.annotate("Shape: $k = %.2f$ \n Scale: $\lambda = %.2f$"%(Kappa_out,Lambda_out), xy=(0.7, 0.85), xycoords=ax.transAxes)
plt.show()
def percentage(y):
#print(y)
tot=y.sum()
#print(tot)
y=y/tot
return y*100
ybar=percentage(np.array(y))
from scipy import stats
import matplotlib.pyplot as plt
#input for pseudo data
N = 100
Kappa_in = 2.08
Lambda_in = 8.97
a_in = 1
loc_in = 0
#Generate data from given input
data = stats.exponweib.rvs(a=a_in,c=Kappa_in, loc=loc_in, scale=Lambda_in, size = N)
#The a and loc are fixed in the fit since it is standard to assume they are known
a_out, Kappa_out, loc_out, Lambda_out = stats.exponweib.fit(data, f0=a_in,floc=loc_in)
#Plot
#print(ybar)
xbar=np.array(x)
from scipy import stats
import matplotlib.pyplot as plt
plt.figure(figsize=(20,8))
plt.style.use('dark_background')
bins = range(25)
#fig = plt.figure()
#ax = fig.add_subplot(1, 1, 1)
yhat=stats.exponweib.pdf(bins, a=a_out,c=Kappa_out,loc=loc_out,scale = Lambda_out)
plt.plot(bins,yhat*100, linewidth=4,markersize=12,marker='o',color='green')
#ax.hist(data, bins = bins , alpha=0.5)
#ax.annotate("Shape: $k = %.2f$ \n Scale: $\lambda = %.2f$"%(Kappa_out,Lambda_out), xy=(0.7, 0.85), xycoords=ax.transAxes)
#plt.show()
#plt.rcParams["font.family"] = "Times New Roman"
plt.rcParams["font.size"] = "16"
plt.title('Comparative Distribution of Wind Speed', fontsize=30)
plt.grid(False)
from scipy.interpolate import make_interp_spline, BSpline
T,power=xbar[:-1],ybar
print(xbar.shape,ybar.shape)
# 300 represents number of points to make between T.min and T.max
xnew = np.linspace(T.min(), T.max(), 300)
spl = make_interp_spline(T, power, k=3) # type: BSpline
power_smooth = spl(xnew)
plt.plot(xnew, power_smooth,color="red" ,linewidth=4,markersize=12,marker='+')
#plt.show()
#plt.plot(xbar, ybar)
#plt.hist(x, bins=np.linspace(0, 16, 33), normed=True, alpha=0.5);
width=0.8
#bar1=plt.bar(xbar, ybar, width,color="y")
"""
for rect,val in zip(bar1,ybar):
height = rect.get_height()
#print(val)
if(val==0):
plt.text(rect.get_x() + rect.get_width()/2.0, height+0.01, str("-"), ha='center', va='bottom',fontsize=20)
else:
plt.text(rect.get_x() + rect.get_width()/2.0, height+0.2, str(round(val,1)), ha='center', va='bottom',fontsize=12)
"""
#plt.xticks(np.arange(25) + width , list(range(25)))
plt.rcParams['xtick.labelsize']=16
plt.rcParams['ytick.labelsize']=16
plt.xlabel('Wind Speed(m/s)', fontsize=18)
plt.ylabel('Frequency[%]', fontsize=18)
plt.savefig("new_distribution.png" ,dpi=100)
plt.show()
def percentage(y):
#print(y)
tot=y.sum()
#print(tot)
y=y/tot
return y*100
ybar=percentage(np.array(y))
from scipy import stats
import matplotlib.pyplot as plt
#input for pseudo data
N = 100
Kappa_in = 2.08
Lambda_in = 8.97
a_in = 1
loc_in = 0
#Generate data from given input
data = stats.exponweib.rvs(a=a_in,c=Kappa_in, loc=loc_in, scale=Lambda_in, size = N)
#The a and loc are fixed in the fit since it is standard to assume they are known
a_out, Kappa_out, loc_out, Lambda_out = stats.exponweib.fit(data, f0=a_in,floc=loc_in)
#Plot
#print(ybar)
xbar=np.array(x)
from scipy import stats
import matplotlib.pyplot as plt
plt.figure(figsize=(20,8))
plt.style.use('dark_background')
bins = range(25)
#fig = plt.figure()
#ax = fig.add_subplot(1, 1, 1)
yhat=stats.exponweib.pdf(bins, a=a_out,c=Kappa_out,loc=loc_out,scale = Lambda_out)
plt.plot(bins,yhat*100, linewidth=4,color='chartreuse',label="Theoretical Weibull Distribution")
#ax.hist(data, bins = bins , alpha=0.5)
#ax.annotate("Shape: $k = %.2f$ \n Scale: $\lambda = %.2f$"%(Kappa_out,Lambda_out), xy=(0.7, 0.85), xycoords=ax.transAxes)
#plt.show()
#plt.rcParams["font.family"] = "Times New Roman"
plt.rcParams["font.size"] = "16"
plt.title('Comparative Distribution of Wind Speed', fontsize=30)
plt.grid(False)
from scipy.interpolate import make_interp_spline, BSpline
T,power=xbar[:-1],ybar
# 300 represents number of points to make between T.min and T.max
xnew = np.linspace(T.min(), T.max(), 300)
spl = make_interp_spline(T, power, k=3) # type: BSpline
power_smooth = spl(xnew)
plt.plot(xnew, power_smooth,color="red" ,linewidth=4,label=" Practical Distribution")
#plt.show()
#plt.plot(xbar, ybar)
#plt.hist(x, bins=np.linspace(0, 16, 33), normed=True, alpha=0.5);
width=0.8
#bar1=plt.bar(xbar, ybar, width,color="y")
"""
for rect,val in zip(bar1,ybar):
height = rect.get_height()
#print(val)
if(val==0):
plt.text(rect.get_x() + rect.get_width()/2.0, height+0.01, str("-"), ha='center', va='bottom',fontsize=20)
else:
plt.text(rect.get_x() + rect.get_width()/2.0, height+0.2, str(round(val,1)), ha='center', va='bottom',fontsize=12)
"""
#plt.xticks(np.arange(25) + width , list(range(25)))
plt.rcParams['xtick.labelsize']=16
plt.rcParams['ytick.labelsize']=16
lg=plt.legend(loc='best',title='Distribution Type', prop={'size': 20})
lg.get_title().set_fontsize(20)
lg._legend_box.align = "center"
plt.xlabel('Wind Speed(m/s)', fontsize=18)
plt.ylabel('Frequency[%]', fontsize=18)
plt.savefig("new_distribution.png" ,dpi=100)
plt.show()
```

The graphical (rank-regression) method for estimating the Weibull parameters works as follows (a minimal code sketch of these steps is included after the next code cell):

1. Sort the data in ascending order.
2. Assign each point a rank, such that the lowest data point is 1, the second lowest is 2, etc.
3. Assign each data point a probability. For beginners, I recommend (i - 0.5)/n, where i and n are the rank and sample size, respectively.
4. Take the natural log of the data.
5. Calculate ln(-ln(1 - P)) for every data point, where P is the probability calculated in step 3.
6. Run a linear regression with the results of step 5 as Y and the results of step 4 as X. Alternatively, you can fit a trendline in Excel.
7. The slope of the regression line is the shape parameter, aka the Weibull modulus. The intercept is the negative of the product of the shape parameter and the natural log of the scale parameter.

```
from scipy.interpolate import make_interp_spline, BSpline
T,power=xbar,ybar
# 300 represents number of points to make between T.min and T.max
xnew = np.linspace(T.min(), T.max(), 300)
spl = make_interp_spline(T, power, k=3) # type: BSpline
power_smooth = spl(xnew)
plt.plot(xnew, power_smooth)
plt.show()
#x = np.random.normal(size=100)
import seaborn as sns
sns.distplot(x);
sns.jointplot(x=x, y=y);
```
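Below is a minimal sketch of the rank-regression fit described in the numbered steps above. It is illustrative only: it assumes the wind-speed values are available in the `lst` list loaded earlier, and the helper name `weibull_fit` is made up for this example.
```
import numpy as np

def weibull_fit(speeds):
    # Step 1: keep positive speeds (the log requires them) and sort ascending
    data = np.sort(np.asarray([s for s in speeds if s > 0], dtype=float))
    n = len(data)
    ranks = np.arange(1, n + 1)               # Step 2: ranks 1..n
    P = (ranks - 0.5) / n                     # Step 3: plotting-position probabilities
    x_ln = np.log(data)                       # Step 4: natural log of the data
    y_ln = np.log(-np.log(1.0 - P))           # Step 5: ln(-ln(1 - P))
    k, intercept = np.polyfit(x_ln, y_ln, 1)  # Step 6: linear regression (slope, intercept)
    lam = np.exp(-intercept / k)              # Step 7: intercept = -k * ln(lambda)
    return k, lam

# k_hat, lambda_hat = weibull_fit(lst)  # compare against Kappa_out / Lambda_out from the cells above
```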
| github_jupyter |
```
from sklearn.datasets import load_iris
iris_dataset = load_iris()
'''
This is an example of a classification problem. The possible outputs (different species of irises) are called classes. Every iris in the dataset
belongs to one of three classes, so this problem is a three-class classification problem.
The desired output for a single data point (an iris) is the species of this flower.
For a particular data point, the species it belongs to is called its label.
'''
iris_dataset.keys()
## target: species of flower that we want to predict
iris_dataset['target_names']
# each row (entity) is known as a sample
# each column is known as a feature
iris_dataset['feature_names']
'''
all of the elements in a NumPy array should be homogeneous. The mathematical operations that are meant to be
performed on arrays would be extremely inefficient if the arrays weren’t homogeneous.
NumPy uses much less memory to store data and it provides a mechanism of specifying the data types.
This allows the code to be optimized even further.
'''
type(iris_dataset['data'])
iris_dataset['data'].shape
iris_dataset['data']
iris_dataset['data'].shape
# Target: is a one-dimension array
iris_dataset['target']
# we cannot use the data we used to build the model to evaluate it
# we need to show it new data with labels
# this is usually done by splitting the data into a training set and a test set
'''
In scikit-learn , data is usually denoted with a capital X , while labels are denoted by
a lowercase y . This is inspired by the standard formulation f(x)=y in mathematics,
where x is the input to a function and y is the output.
we use a capital X because the data is a two-dimensional array (a
matrix) and a lowercase y because the target is a one-dimensional array (a vector).
'''
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(iris_dataset['data'], iris_dataset['target'], random_state=0)
# 75% - training set
# 25% - test set
# inspect the data
# inspecting your data is a good way to find abnormalities and peculiarities.
# One of the best ways to inspect data is to visualize it.
# pair plot
import pandas as pd
from pandas.plotting import scatter_matrix
iris_dataframe = pd.DataFrame(X_train, columns=iris_dataset.feature_names)
grr = scatter_matrix(iris_dataframe, c=y_train, figsize=(15, 15), marker='o', hist_kwds={'bins': 20}, s=60, alpha=.8)
# k-Nearest Neighbors
from sklearn.neighbors import KNeighborsClassifier
knn = KNeighborsClassifier(n_neighbors=1)
# we call the fit method of the knn object,
knn.fit(X_train, y_train)
iris_dataset['target_names']
X_new = [[5, 2.9, 1, 0.2]] # as scikit-learn always expects two-dimensional arrays for the data.
knn.predict(X_new)
iris_dataset['target_names'][knn.predict(X_new)]
# Evaluating the Model
import numpy as np
y_pred = knn.predict(X_test)
np.mean(y_pred == y_test)
```
| github_jupyter |
# Fundus Analysis - Pathological Myopia
```
!nvidia-smi
```
**Import Data from Google Drive**
```
from google.colab import drive
drive.mount('/content/gdrive')
import os
os.environ['KAGGLE_CONFIG_DIR'] = "/content/gdrive/My Drive/Kaggle"
%cd /content/gdrive/My Drive/Kaggle
pwd
```
**Download Data in Colab**
```
!kaggle datasets download -d andrewmvd/ocular-disease-recognition-odir5k
!ls
```
**Un-zip the Data**
```
!unzip \*.zip && rm *.zip
```
## Classification
Import Statements
```
import numpy as np
import pandas as pd
import cv2
import random
from tqdm import tqdm
from sklearn.metrics import roc_curve, auc
import matplotlib.pyplot as plt
from tensorflow.keras.preprocessing.image import ImageDataGenerator, load_img, img_to_array
import numpy as np
import matplotlib.pyplot as plt
from itertools import cycle
from sklearn import svm, datasets
from sklearn.metrics import roc_curve, auc
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import label_binarize
from sklearn.multiclass import OneVsRestClassifier
from scipy import interp
from sklearn.metrics import roc_auc_score
import os
df = pd.read_csv("/content/gdrive/My Drive/Kaggle/full_df.csv")
df.head()
def has_myopia(text):
    # check each keyword explicitly; `"a" or "b" in text` would always evaluate to True
    if "pathological myopia" in text or "myopia" in text:
        return 1
    else:
        return 0
df["left_myopia"] = df["Left-Diagnostic Keywords"].apply(lambda x: has_myopia(x))
df["right_myopia"] = df["Right-Diagnostic Keywords"].apply(lambda x: has_myopia(x))
left_myopia = df.loc[(df.M == 1) & (df.left_myopia == 1)]["Left-Fundus"].values
print(left_myopia[:10])
right_myopia = df.loc[(df.M == 1) & (df.right_myopia == 1)]["Right-Fundus"].values
print(right_myopia[:10])
print("Left Eye Images having myopia: {}".format(len(left_myopia)))
print("Right Eye Images having myopia: {}".format(len(right_myopia)))
left_normal = df.loc[(df.C ==0) & (df["Left-Diagnostic Keywords"] == "normal fundus")]["Left-Fundus"].sample(300,random_state=42).values
right_normal = df.loc[(df.C ==0) & (df["Right-Diagnostic Keywords"] == "normal fundus")]["Right-Fundus"].sample(300,random_state=42).values
print(left_normal[:10])
print(right_normal[:10])
```
Left and Right Images Together
```
myopia = np.concatenate((left_myopia,right_myopia),axis=0)
normal = np.concatenate((left_normal,right_normal),axis=0)
print("myopia: {}".format(len(myopia)))
print("Normal: {}".format(len(normal)))
dataset_dir = "/content/gdrive/MyDrive/Kaggle/preprocessed_images/"
image_size = 224
labels = []
dataset = []
def create_dataset(image_category,label):
for img in tqdm(image_category):
image_path = os.path.join(dataset_dir,img)
try:
image = cv2.imread(image_path,cv2.IMREAD_COLOR)
image = cv2.resize(image,(image_size,image_size))
except:
continue
dataset.append([np.array(image),np.array(label)])
random.shuffle(dataset)
return dataset
dataset = create_dataset(myopia,1)
len(dataset)
dataset = create_dataset(normal,0)
len(dataset)
plt.figure(figsize=(12,7))
for i in range(10):
sample = random.choice(range(len(dataset)))
image = dataset[sample][0]
category = dataset[sample][1]
if category == 0:
label = "Normal"
else:
label = "Myopia"
plt.subplot(2,5,i+1)
plt.imshow(image)
plt.xlabel(label)
plt.tight_layout()
x = np.array([i[0] for i in dataset]).reshape(-1,image_size,image_size,3)
y = np.array([i[1] for i in dataset])
x_train, x_test, y_train, y_test = train_test_split(x,y,test_size=0.2)
```
**Keras Pretrained Models**
```
!kaggle datasets download -d gaborfodor/keras-pretrained-models
!unzip \*.zip && rm *.zip
!ls
pwd
from keras.applications.vgg16 import VGG16, preprocess_input
vgg16_weight_path = '/content/gdrive/MyDrive/Kaggle/vgg16_weights_tf_dim_ordering_tf_kernels_notop.h5'
vgg = VGG16(
weights = vgg16_weight_path,
include_top = False,
input_shape = (224,224,3)
)
for layer in vgg.layers:
layer.trainable = False
```
**Model**
```
from tensorflow.keras import Sequential
from keras import layers
from tensorflow.keras.layers import Flatten ,Dense
model = Sequential()
model.add(vgg)
model.add(Dense(256, activation='relu'))
model.add(layers.Dropout(rate=0.5))
model.add(Dense(128, activation='sigmoid'))
model.add(layers.Dropout(rate=0.2))
model.add(Dense(128, activation='relu'))
model.add(layers.Dropout(0.1))
model.add(Flatten())
model.add(Dense(1,activation="sigmoid"))
```
Model's Summary
```
model.summary()
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
history = model.fit(x_train, y_train,
batch_size = 32,
epochs = 30,
validation_data = (x_test, y_test)
)
%cd /content/gdrive/MyDrive/Kaggle
model.save('fundus_model_MYO.h5')
print('saved')
!ls
from sklearn.metrics import confusion_matrix,classification_report,accuracy_score
y_pred = model.predict_classes(x_test)
accuracy_score(y_test,y_pred)
print(classification_report(y_test,y_pred))
```
## Predictions
```
# from IPython.display import Image, display
# images = ["/content/gdrive/MyDrive/Kaggle/preprocessed_images/560_right.jpg",
# "/content/gdrive/MyDrive/Kaggle/preprocessed_images/1550_right.jpg",
# "/content/gdrive/MyDrive/Kaggle/preprocessed_images/2330_right.jpg",
# "/content/gdrive/MyDrive/Kaggle/preprocessed_images/0_left.jpg",
# "/content/gdrive/MyDrive/Kaggle/preprocessed_images/179_right.jpg"]
# for image in images:
# display(Image(image, width = 120, height = 120))
# print()
```
Loaded Model
```
pwd
from tensorflow import keras
model = keras.models.load_model('/content/gdrive/MyDrive/Kaggle/fundus_model_MYO.h5')
print('loaded')
model.summary()
from keras.utils.vis_utils import plot_model
plot_model(model, to_file='vgg.png')
from keras.preprocessing.image import load_img
image = load_img("/content/gdrive/MyDrive/Kaggle/preprocessed_images/179_right.jpg", target_size=(224, 224))
from keras.preprocessing.image import img_to_array
# convert the image pixels to a numpy array
image = img_to_array(image)
# reshape data for the model
image = image.reshape((1, image.shape[0], image.shape[1], image.shape[2]))
from keras.applications.vgg16 import preprocess_input
# prepare the image for the VGG model
image = preprocess_input(image)
```
Normal Fundus
```
def disease(predic):
if predic > 0.75:
return 'Pathological Myopia'
return 'Normal'
pred = model.predict(image)
status = disease(pred[0])
print("Situation: {}".format(status))
print("Percentage: {}".format(round(int(pred[0]), 1)))
```
Myopic Fundus
```
def ready_image(img_path):
image = load_img(img_path, target_size=(224, 224))
image = img_to_array(image)
image = image.reshape((1, image.shape[0], image.shape[1], image.shape[2]))
image = preprocess_input(image)
return image
image = ready_image("/content/gdrive/MyDrive/Kaggle/preprocessed_images/13_right.jpg")
pred = model.predict(image)
status = disease(pred[0])
print("Situation: {}".format(status))
print("Percentage: {}".format(round(int(pred[0]), 1)))
image = ready_image("/content/gdrive/MyDrive/Kaggle/preprocessed_images/233_right.jpg")
pred = model.predict(image)
status = disease(pred[0])
print("Situation: {}".format(status))
print("Percentage: {}".format(round(int(pred[0]), 1)))
```
| github_jupyter |
```
from dask_gateway import Gateway
import os
# External IPs
gateway = Gateway(
"http://ad7f4b0a2492a11eabd750e8c5de8801-1750344606.us-west-2.elb.amazonaws.com",
proxy_address="tls://ad7f57e7d492a11eabd750e8c5de8801-778017149.us-west-2.elb.amazonaws.com:8786",
auth='jupyterhub'
)
# Internal IPs
gateway = Gateway(
"http://10.100.90.71:80",
proxy_address="tls://10.100.210.56:8786",
auth='jupyterhub'
)
gateway.list_clusters()
os.environ['JUPYTER_IMAGE']
```
Started 15:05. Ended
```
cluster = gateway.new_cluster(image=os.environ['JUPYTER_IMAGE'])
```
## Error Messages
### Cluster-Autoscaler pod Logs
Pod jhub/dask-gateway-salvis2-scheduler-9bef8d32f29b4619b6122eab446837c6 is unschedulable
Pod dask-gateway-salvis2-scheduler-9bef8d32f29b4619b6122eab446837c6 can't be scheduled on eksctl-jupyterhub-salvis-nodegroup-user-spot-NodeGroup-1MDSBX01QDJ20, predicate failed: PodToleratesNodeTaints predicate mismatch, reason: node(s) had taints that the pod didn't tolerate
Pod dask-gateway-salvis2-scheduler-9bef8d32f29b4619b6122eab446837c6 can't be scheduled on eksctl-jupyterhub-salvis-nodegroup-worker-spot-NodeGroup-1IHH8XDNZ0NT8, predicate failed: PodToleratesNodeTaints predicate mismatch, reason: node(s) had taints that the pod didn't tolerate
Event(v1.ObjectReference{Kind:"Pod", Namespace:"jhub", Name:"dask-gateway-salvis2-scheduler-9bef8d32f29b4619b6122eab446837c6", UID:"7bd0dfca-492b-11ea-bd75-0e8c5de88014", APIVersion:"v1", ResourceVersion:"3963", FieldPath:""}): type: 'Normal' reason: 'NotTriggerScaleUp' pod didn't trigger scale-up (it wouldn't fit if a new node is added): 2 node(s) had taints that the pod didn't tolerate, 1 max limit reached
### Scheduler-proxy-dask-gateway pod Logs
Lots of "Extracting SNI: Error reading TLS record header: EOF"
### gateway-dask-gateway pod Logs
[I 2020-02-06 22:18:07.146 DaskGateway] Starting cluster 0fb1652be05749e1b290fcdea95f7bf9 for user salvis2...
[I 2020-02-06 22:18:07.172 DaskGateway] Cluster 0fb1652be05749e1b290fcdea95f7bf9 has started, waiting for connection
[I 2020-02-06 22:18:27.158 DaskGateway] 200 GET /api/clusters/0fb1652be05749e1b290fcdea95f7bf9?wait (192.168.139.35) 20004.70ms
[I 2020-02-06 22:18:47.680 DaskGateway] 200 GET /api/clusters/0fb1652be05749e1b290fcdea95f7bf9?wait (192.168.139.35) 20014.58ms
[W 2020-02-06 22:19:07.153 DaskGateway] Cluster 0fb1652be05749e1b290fcdea95f7bf9 startup timed out after 60.0 seconds
## To Try
Longer timeout? No timeout is specified in cluster creation, and a 10 min limit still hit the timeout, so this is probably not a timeout problem.
Pod tolerations bad? Do I need to re-enable the one tag I took out?
Pod tolerations: nothing specified in dask-gateway-config.yml.
[Incorrect address?](https://github.com/dask/dask-gateway/issues/163) No
Internal IPs? No.
| github_jupyter |
```
%matplotlib inline
import matplotlib.pyplot as plt
import sys,os
sys.path.insert(0,'../')
from ml_tools.descriptors import RawSoapInternal
from ml_tools.models.KRR import KRR,TrainerCholesky,KRRFastCV
from ml_tools.kernels import KernelPower,KernelSum
from ml_tools.utils import get_mae,get_rmse,get_sup,get_spearman,get_score,load_pck,tqdm_cs
from ml_tools.split import KFold,LCSplit,ShuffleSplit
from ml_tools.compressor import FPSFilter
import numpy as np
from ase.io import read,write
from ase.visualize import view
```
# Build a kernel Matrix
```
# load the structures
frames = read('data/dft-smiles_500.xyz',':')
global_species = []
for frame in frames:
global_species.extend(frame.get_atomic_numbers())
global_species = np.unique(global_species)
# split the structures in 2 sets
frames_train = frames[:300]
frames_test = frames[300:]
# set up the soap parameters
soap_params = dict(rc=3.5, nmax=6, lmax=6, awidth=0.4,
global_species=global_species,nocenters=[])
representation = RawSoapInternal(**soap_params)
# set up the kernel parameters
kernel = KernelSum(KernelPower(zeta = 2),chunk_shape=[100,100])
# compute the soap vectors
rawsoaps = representation.transform(frames_train)
X_train = dict(feature_matrix=rawsoaps,strides=representation.strides)
# compute the soap vectors
rawsoaps = representation.transform(frames_test)
X_test = dict(feature_matrix=rawsoaps,strides=representation.strides)
# compute the square kernel matrix
Kmat = kernel.transform(X_train)
# compute a rectangular kernel matrix
Kmat_rect = kernel.transform(X_test,X_train)
```
# FPS selection of the samples
```
# load the structures
frames = read('data/dft-smiles_500.xyz',':300')
global_species = []
for frame in frames:
global_species.extend(frame.get_atomic_numbers())
global_species = np.unique(global_species)
# set up the soap parameters
soap_params = dict(rc=3.5, nmax=6, lmax=6, awidth=0.4,
global_species=global_species,nocenters=[])
representation = RawSoapInternal(**soap_params)
# set up the kernel parameters
kernel = KernelSum(KernelPower(zeta = 2),chunk_shape=[100,100])
# compute the soap vectors
rawsoaps = representation.transform(frames)
X = dict(feature_matrix=rawsoaps,strides=representation.strides)
# run the fps selection on the set and plot the minmax distance
Nselect = 250
compressor = FPSFilter(Nselect,kernel,act_on='sample',precompute_kernel=True,disable_pbar=True)
compressor.fit(X,dry_run=True)
compressor.plot()
# select the appropriate number of samples to select
compressor.Nselect = 250
# and compress
X_compressed = compressor.transform(X)
compressor.selected_ids[:compressor.Nselect]
X['feature_matrix'].shape
X_compressed['feature_matrix'].shape
X_compressed['strides'].shape
```
# FPS selection of the features
```
# load the structures
frames = read('data/dft-smiles_500.xyz',':300')
global_species = []
for frame in frames:
global_species.extend(frame.get_atomic_numbers())
global_species = np.unique(global_species)
# set up the soap parameters
soap_params = dict(rc=3.5, nmax=6, lmax=6, awidth=0.4,
global_species=global_species,nocenters=[])
representation = RawSoapInternal(**soap_params)
# set up the kernel parameters
kernel = KernelPower(zeta = 2)
# compute the soap vectors
X = representation.transform(frames)
# run the fps selection on the set and plot the minmax distance
Nselect = 250
compressor = FPSFilter(Nselect,kernel,act_on='feature',precompute_kernel=True,disable_pbar=True)
compressor.fit(X,dry_run=True)
compressor.plot()
# select the appropriate number of features to select
compressor.Nselect = 500
# and compress
X_compressed = compressor.transform(X)
compressor.selected_ids[:compressor.Nselect]
```
# get a cross validation score
```
# load the structures
frames = read('data/dft-smiles_500.xyz',':')
global_species = []
y = []
for frame in frames:
global_species.extend(frame.get_atomic_numbers())
y.append(frame.info['dft_formation_energy_per_atom_in_eV'])
y = np.array(y)
global_species = np.unique(global_species)
# set up the soap parameters
soap_params = dict(rc=3.5, nmax=6, lmax=6, awidth=0.4,
global_species=global_species,nocenters=[])
representation = RawSoapInternal(**soap_params)
# set up the kernel parameters
kernel = KernelSum(KernelPower(zeta = 2),chunk_shape=[100,100])
# set up the splitting strategy
cv = KFold(n_splits=6,random_state=10,shuffle=True)
# set up the regression model
jitter = 1e-8
krr = KRRFastCV(jitter, 1.,cv)
# compute the soap vectors
rawsoaps = representation.transform(frames)
X = dict(feature_matrix=rawsoaps,strides=representation.strides)
rawsoaps.shape
# compute the kernel matrix for the dataset
Kmat = kernel.transform(X)
# fit the model
krr.fit(Kmat,y)
# get the predictions for each folds
y_pred = krr.predict()
# compute the CV score for the dataset
get_score(y_pred,y)
```
# Learning curve (LC)
```
# load the structures
frames = read('data/dft-smiles_500.xyz',':')
global_species = []
y = []
for frame in frames:
global_species.extend(frame.get_atomic_numbers())
y.append(frame.info['dft_formation_energy_per_atom_in_eV'])
y = np.array(y)
global_species = np.unique(global_species)
# set up the soap parameters
soap_params = dict(rc=3.5, nmax=6, lmax=6, awidth=0.4,
global_species=global_species,nocenters=[])
representation = RawSoapInternal(**soap_params)
# set up the kernel parameters
kernel = KernelSum(KernelPower(zeta = 2),chunk_shape=[100,100])
# set up the training solver
trainer = TrainerCholesky(memory_efficient=True)
# set up the regression model
jitter = 1e-8
krr = KRR(jitter,1.,trainer)
train_sizes=[20,50,100]
lc = LCSplit(ShuffleSplit, n_repeats=[20,20,20],train_sizes=train_sizes,test_size=100, random_state=10)
rawsoaps = representation.transform(frames)
X = dict(feature_matrix=rawsoaps,strides=representation.strides)
K = kernel.transform(X)
scores = {size:[] for size in train_sizes}
for train,test in tqdm_cs(lc.split(y),total=lc.n_splits):
Ntrain = len(train)
k_train = K[np.ix_(train,train)]
y_train = y[train]
k_test = K[np.ix_(test,train)]
krr.fit(k_train,y_train)
y_pred = krr.predict(k_test)
scores[Ntrain].append(get_score(y_pred,y[test]))
sc_name = 'RMSE'
Ntrains = []
avg_scores = []
for Ntrain, score in scores.items():
avg = 0
for sc in score:
avg += sc[sc_name]
avg /= len(score)
avg_scores.append(avg)
Ntrains.append(Ntrain)
plt.plot(Ntrains,avg_scores,'--o')
plt.xlabel('Number of training samples')
plt.ylabel('Test {}'.format(sc_name))
plt.xscale('log')
plt.yscale('log')
```
| github_jupyter |
```
import csv
from numpy import genfromtxt
import numpy as np
import pandas as pd
from random import random
import torch.optim as optim
import torch.nn as nn
import torch.nn.functional as F
import math
import sklearn.linear_model
import sklearn.feature_selection  # needed for the RFE feature ranking below
# Function to check and remove NaNs from dataset
def dataChecker(arr):
idxRow = -1
for row in arr:
idxRow = idxRow + 1
for idx in range(len(row)):
if math.isnan(arr[idxRow,idx]) == True:
arr[idxRow, idx] = 0
return arr
# Find max value in the dataset and its index
def maxVal(arr):
idxRow = -1
maxVal = -100
indexes = np.empty(2)
for row in arr:
idxRow = idxRow + 1
for idx in range(len(row)):
if ((arr[idxRow,idx] > maxVal) and (idx != 0 and idx != 4 and idx != 5 and idx != 6 and idx != 7 and idx != 8)):
maxVal = arr[idxRow, idx]
indexes[0] = idxRow
indexes[1] = idx
return indexes, maxVal
# Find max value in the dataset and its index
def minVal(arr):
idxRow = -1
minVal = 100
indexes = np.empty(2)
for row in arr:
idxRow = idxRow + 1
for idx in range(len(row)):
if ((arr[idxRow,idx] < minVal) and (idx != 0 and idx != 4 and idx != 5 and idx != 6 and idx != 7 and idx != 8)):
minVal = arr[idxRow, idx]
indexes[0] = idxRow
indexes[1] = idx
return indexes, minVal
# Scale all values in the array that are the waveform or waveform-dependent to a range
def scaleVals(arrIn, arrOut, minAllowed, maxAllowed, minValue, maxValue):
idxRow = -1
for row in arrIn:
idxRow = idxRow + 1
for idx in range(len(row)):
if(idx != 0 and idx != 4 and idx != 5 and idx != 6 and idx != 7 and idx != 8):
scaled = (((maxAllowed - minAllowed) * (arrIn[idxRow,idx] - minValue)) / (maxValue - minValue)) + minAllowed
arrOut[idxRow, idx] = scaled
else:
arrOut[idxRow, idx] = arrIn[idxRow,idx]
return arrOut
# Perform Recursive Feature Elimination to identify the 3 top features
def RFE(arr):
#data = X, target = Y
X = arr[:,1:9]
Y = arr[:,0]
#Feature extraction
model = sklearn.linear_model.LogisticRegression()
rfeFeatures = sklearn.feature_selection.RFE(model, 3)
fit = rfeFeatures.fit(X,Y)
return fit.ranking_
# Number of waveforms for each neuron cell type
valsFS = 1438775
valsPT = 319484
valsIT = 126460
# Number of rows in each array
rows_FS = valsFS
rows_PT = valsPT
rows_IT = valsIT
# Separation value to split up training:testing sets (67:33)
sep_FS = 2 * rows_FS // 3
sep_PT = 2 * rows_PT // 3
sep_IT = 2 * rows_IT // 3
# Create training sets
col = 38
trainArrSize = sep_FS
train_set_FS = np.empty((trainArrSize,col))
train_set_PT_attr = np.empty((trainArrSize,col))
train_set_IT_attr = np.empty((trainArrSize,col))
# Fill the training sets with the 66% that is already existent (prior to oversampling)
for indFS_init in range(sep_FS):
train_set_FS[indFS_init, :] = FS[indFS_init,:]
for indPT_init in range(sep_PT):
train_set_PT_attr[indPT_init, :] = PT[indPT_init,:]
for indIT_init in range(sep_IT):
train_set_IT_attr[indIT_init, :] = IT[indIT_init,:]
# Fill the test sets to completion
test_set_FS = np.empty((0,col))
test_size_FS = valsFS - sep_FS
test_set_PT = np.zeros((0,col))
test_size_PT = valsPT - sep_PT
test_set_IT = np.zeros((0,col))
test_size_IT = valsIT - sep_IT
test_set_FS = np.append(test_set_FS, FS[sep_FS:valsFS, :], axis = 0)
test_set_PT = np.append(test_set_PT, PT[sep_PT:valsPT, :], axis = 0)
test_set_IT = np.append(test_set_IT, IT[sep_IT:valsIT, :], axis = 0)
# Oversampling the minority with replacement
# Determine how much to add to PT/IT and size of pre-oversampling array
numAdd_PT = sep_FS - sep_PT
numAdd_IT = sep_FS - sep_IT
trainPTArrSize = sep_PT
trainITArrSize = sep_IT
# Randomize attribute-wise (_attr) for all features but the waveform,
# which will be randomized as single unit
for indPT_2 in range(trainPTArrSize,numAdd_PT+trainPTArrSize):
for attrPT in range(9):
rand = int(random() * (sep_PT+1))
train_set_PT_attr[indPT_2,attrPT] = train_set_PT_attr[rand, attrPT]
rand = int(random() * (sep_PT+1))
train_set_PT_attr[indPT_2, 9:] = train_set_PT_attr[rand, 9:]
for indIT_2 in range(trainITArrSize,numAdd_IT+trainITArrSize):
for attrIT in range(9):
rand = int(random() * (sep_IT+1))
train_set_IT_attr[indIT_2,attrIT] = train_set_IT_attr[rand, attrIT]
rand = int(random() * (sep_IT+1))
train_set_IT_attr[indIT_2, 9:] = train_set_IT_attr[rand, 9:]
# Randomly combine individual training and testing sets into master training and testing sets
train_set_attr = np.empty((trainArrSize * 3, col))
countFS = 0
countPT = 0
countIT = 0
indTrain= 0
while indTrain < (trainArrSize * 3):
rand = int(random() * 3 + 1)
if rand == 1 and (countFS + 1 <= trainArrSize):
train_set_attr[indTrain,:] = train_set_FS[countFS,:]
countFS = countFS + 1
indTrain = indTrain + 1
elif rand == 2 and (countPT + 1 <= trainArrSize):
train_set_attr[indTrain,:] = train_set_PT_attr[countPT,:]
countPT = countPT + 1
indTrain = indTrain + 1
elif rand == 3 and (countIT + 1 <= trainArrSize):
train_set_attr[indTrain,:] = train_set_IT_attr[countIT,:]
countIT = countIT + 1
indTrain = indTrain + 1
test_set = np.empty((test_size_FS + test_size_PT + test_size_IT, col))
countFS = 0
countPT = 0
countIT = 0
indTest = 0
while indTest < (test_size_FS + test_size_PT + test_size_IT):
rand = int(random() * 3 + 1)
if rand == 1 and (countFS + 1 <= test_size_FS):
test_set[indTest,:] = test_set_FS[countFS,:]
countFS = countFS + 1
indTest = indTest + 1
elif rand == 2 and (countPT + 1 <= test_size_PT):
test_set[indTest,:] = test_set_PT[countPT,:]
countPT = countPT + 1
indTest = indTest + 1
elif rand == 3 and (countIT + 1 <= test_size_IT):
test_set[indTest,:] = test_set_IT[countIT,:]
countIT = countIT + 1
indTest = indTest + 1
# Remove NaNs in each array
train_set_attr = dataChecker(train_set_attr)
test_set = dataChecker(test_set)
# Scaling inputs to 0-1
train_set_attr_scld = np.empty((2877549, 38))
test_set_scld = np.empty((628241, 38))
minValue = -0.00098502
maxValue = 0.0011485
train_set_attr_scld = scaleVals(train_set_attr, train_set_attr_scld, 0, 1, minValue, maxValue)
test_set_scld = scaleVals(test_set, test_set_scld, 0, 1, minValue, maxValue)
# Save files as a .csv
np.savetxt('train_set_attr_scld.csv', train_set_attr_scld, delimiter = ",")
np.savetxt('test_set_scld.csv', test_set_scld, delimiter = ",")
```
| github_jupyter |
# How to detect breast cancer with a Support Vector Machine (SVM) and k-nearest neighbours clustering and compare results.
Load some packages
```
import sys
import scipy
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import sklearn
from sklearn import preprocessing
from sklearn.model_selection import train_test_split # cross_validation is deprecated
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn import model_selection
from sklearn.metrics import classification_report, accuracy_score
from pandas.plotting import scatter_matrix
print('NumPy must be 1.14 to run this, it is {}'.format(np.__version__))
print('Python should be version 2.7 or higher, it is {}'.format(sys.version))
```
Read in the dataset from the UCI data repository.
This details a lot of information about cells, such as their size, clump thickness, and shape. A pathologist would consider these features to determine whether a cell is cancerous.
Specifically, we use the read_csv command from pd (pandas) package and supply a url of the dataset and some column names. Then we display the table.
```
# Load Dataset
url = "https://archive.ics.uci.edu/ml/machine-learning-databases/breast-cancer-wisconsin/breast-cancer-wisconsin.data"
names = ['id', 'clump_thickness', 'uniform_cell_size', 'uniform_cell_shape',
'marginal_adhesion', 'single_epithelial_size', 'bare_nuclei',
'bland_chromatin', 'normal_nucleoli', 'mitoses', 'class']
df = pd.read_csv(url, names=names)
df.drop(['id'], 1, inplace = True) # We have removed the id field from the dataframe as we would not be running any models on it and we already know that each row represents a single cell.
display(df)
```
Get some summary statistics for each of our variables
```
df.describe()
```
The dataset has some missing values. You can use .isnull() to return boolean True/False values and then tabulate them with .describe() to see how many occurrences of True or False there are.
```
df.isnull().describe()
```
If you have missing data, you can replace it.
```
df.replace('?', -9999, inplace = True)
```
Class contains information on whether the tumour is benign (class = 2) or malignant (class = 4).
Next we plot a histogram of each variable to show its distribution.
```
df.hist(figsize = (15,15))
plt.show() # render just the plot, without also printing the return value of the last command
```
Look at the relationship between variables with a scatter matrix.
There looks to be a pretty strong linear relationship between uniform cell shape and uniform cell size.
If you look at the cells representing comparisons with class (our outcome variable), it appears that there are a range of values for each of the items.
```
scatter_matrix(df, figsize = (15,15))
plt.show() # render just the plot, without also printing the return value of the last command
```
### Models
Create training and testing datasets.
We need to keep some of the data back to validate the model, seeing how well it generalises to other data.
x data will contain all the potential explanatory variables (called features in this context)
y will contain the outcome data (called the label in ML)
```
X_df = np.array(df.drop(['class'], 1)) # this will create a variable called X_df which is df except class
y_df = np.array(df['class']) # this is just the class field
X_train, X_test, y_train, y_test = train_test_split(X_df, y_df, test_size=0.2) # split the dataset into four, two with features, two with labels (and choose 20% of the data for testing (validation))
```
Set a seed and a scoring metric. The seed makes the cross-validation splits reproducible; without it, the results would change a little each time we run the models.
```
seed = 8
scoring = 'accuracy'
```
### Create training models
Make an empty list of models, then append each classifier to it.
```
models = []
models.append(('KNN', KNeighborsClassifier(n_neighbors = 5))) # You can alter the number of neighbours
models.append(('SVM', SVC()))
results = [] # also create lists for results and names. We use this to print out the results
names = []
```
Evaluate each model in turn
```
for name, model in models:
kfold = model_selection.KFold(n_splits=10, random_state = seed, shuffle = True)
cv_results = model_selection.cross_val_score(model, X_train, y_train, cv=kfold, scoring=scoring)
results.append(cv_results)
names.append(name)
msg = "%s: %f (%f)" % (name, cv_results.mean(), cv_results.std())
print(msg)
```
The KNN classifier assigns each data point to the malignant or benign group based on its nearest neighbours, whilst the SVM looks for the optimal separating hyperplane that divides the data points into malignant and benign cells.
## Making predictions
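A minimal sketch of this step: fit each model on the full training split, predict on the held-out test split, and report accuracy and a classification report using the metrics imported earlier.
```
for name, model in models:
    # Fit on the full training split, then evaluate on the held-out test split
    model.fit(X_train, y_train)
    predictions = model.predict(X_test)
    print(name)
    print(accuracy_score(y_test, predictions))
    print(classification_report(y_test, predictions))
```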
| github_jupyter |
<a href="https://colab.research.google.com/github/timeseriesAI/tsai/blob/master/tutorial_nbs/02_ROCKET_a_new_SOTA_classifier.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
created by Ignacio Oguiza - email: [email protected]
<img src="https://github.com/timeseriesAI/tsai/blob/master/tutorial_nbs/images/Rocket.svg?raw=1" width="150">
ROCKET (RandOm Convolutional KErnel Transform) is a new Time Series Classification (TSC) method that has just been released (Oct 29th, 2019), and has achieved **state-of-the-art performance on the UCR univariate time series classification datasets, surpassing HIVE-COTE (the previous state of the art since 2017) in accuracy, with exceptional speed compared to other traditional DL methods.**
To achieve these 2 things at once is **VERY IMPRESSIVE**. ROCKET is certainly a new TSC method you should try.
Authors:
Dempster, A., Petitjean, F., & Webb, G. I. (2019). ROCKET: Exceptionally fast and accurate time series classification using random convolutional kernels. arXiv preprint arXiv:1910.13051.
[paper](https://arxiv.org/pdf/1910.13051)
There are 2 main limitations to the original ROCKET method though:
- Released code doesn't handle multivariate data
- It doesn't run on a GPU, so it's slow when used with large datasets
In this notebook you will learn:
- how you can use the original ROCKET method
- you will also learn about a new ROCKET version I have developed in Pytorch, that handles both **univariate and multivariate** data, and uses **GPU**
- you will see how you can integrate the ROCKET features with fastai or other classifiers
## Import libraries 📚
```
# ## NOTE: UNCOMMENT AND RUN THIS CELL IF YOU NEED TO INSTALL/ UPGRADE TSAI
# stable = False # True: latest version from github, False: stable version in pip
# if stable:
# !pip install -Uqq tsai
# else:
# !pip install -Uqq git+https://github.com/timeseriesAI/tsai.git
# ## NOTE: REMEMBER TO RESTART YOUR RUNTIME ONCE THE INSTALLATION IS FINISHED
from tsai.all import *
print('tsai :', tsai.__version__)
print('fastai :', fastai.__version__)
print('fastcore :', fastcore.__version__)
print('torch :', torch.__version__)
```
## How to use the original ROCKET method? 🚀
ROCKET is applied in 2 phases:
1. Generate features from each time series: ROCKET calculates 20k features from each time series, independently of the sequence length.
2. Apply a classifier to those calculated features. Those features are then used by the classifier of your choice. In the original code they use 2 simple linear classifiers: RidgeClassifierCV and Logistic Regression, but you can use any classifier.
### 1️⃣ Generate features
Let's first generate the features. We'll import data from a UCR Time Series dataset.
The original method requires the time series to be in a 2d array of shape (samples, len). Remember that only univariate sequences are allowed in this original method.
```
X_train, y_train, X_valid, y_valid = get_UCR_data('OliveOil')
seq_len = X_train.shape[-1]
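# keep only the single univariate channel so the arrays are 2d: (samples, len)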
X_train = X_train[:, 0]
X_valid = X_valid[:, 0]
labels = np.unique(y_train)
transform = {}
for i, l in enumerate(labels): transform[l] = i
y_train = np.vectorize(transform.get)(y_train)
y_valid = np.vectorize(transform.get)(y_valid)
```
Now we normalize the data **'per sample'**, as recommended by the authors; that is, each sample is set to mean 0 and std 1.
```
X_train = (X_train - X_train.mean(axis = 1, keepdims = True)) / (X_train.std(axis = 1, keepdims = True) + 1e-8)
X_valid = (X_valid - X_valid.mean(axis = 1, keepdims = True)) / (X_valid.std(axis = 1, keepdims = True) + 1e-8)
X_train.mean(axis = 1, keepdims = True).shape
```
To generate the features, we first need to create the 10k random kernels that will be used to process the data.
```
kernels = generate_kernels(seq_len, 10000)
```
Now we apply those random kernels to the data:
```
X_train_tfm = apply_kernels(X_train, kernels)
X_valid_tfm = apply_kernels(X_valid, kernels)
```
### 2️⃣ Apply a classifier
So now we have the features, and we are ready to apply a classifier.
Let's use a simple, linear RidgeClassifierCV as they propose in the paper. We first instantiate it.
Note:
alphas: Array of alpha values to try. Regularization strength; must be a positive float. Regularization improves the conditioning of the problem and reduces the variance of the estimates. Larger values specify stronger regularization. Alpha corresponds to C^-1 in other linear models such as LogisticRegression or LinearSVC.
```
from sklearn.linear_model import RidgeClassifierCV
classifier = RidgeClassifierCV(alphas=np.logspace(-3, 3, 7), normalize=True)
classifier.fit(X_train_tfm, y_train)
classifier.score(X_valid_tfm, y_valid)
```
☣️ **This is pretty impressive! It matches or exceeds the state-of-the-art performance without any fine tuning in <2 seconds!!!**
```
kernels = generate_kernels(seq_len, 10000)
X_train_tfm = apply_kernels(X_train, kernels)
X_valid_tfm = apply_kernels(X_valid, kernels)
from sklearn.linear_model import RidgeClassifierCV
classifier = RidgeClassifierCV(alphas=np.logspace(-3, 3, 7), normalize=True)
classifier.fit(X_train_tfm, y_train)
classifier.score(X_valid_tfm, y_valid)
```
⚠️ Bear in mind that this process is not deterministic since there is randomness involved in the kernels. In this case, performance may vary between .9 and .933.
## How to use ROCKET with large and/or multivariate datasets on GPU? - Recommended ⭐️
As stated before, the current ROCKET method doesn't support multivariate time series or GPU. This may be a drawback in some cases.
To overcome both limitations I've created a multivariate ROCKET on GPU in Pytorch.
### 1️⃣ Generate features
First you prepare the input data and normalize it per sample. The input to ROCKET Pytorch is a 3d tensor of shape (samples, vars, len), preferably on the GPU.
The way to use ROCKET in Pytorch is the following:
* Create a dataset as you would normally do in `tsai`.
* Create a TSDataLoaders with the following kwargs:
* drop_last=False. In this way we get features for every input sample.
* shuffle_train=False
* batch_tfms=[TSStandardize(by_sample=True)] so that input is normalized by sample, as recommended by the authors
```
X, y, splits = get_UCR_data('HandMovementDirection', split_data=False)
tfms = [None, [Categorize()]]
batch_tfms = [TSStandardize(by_sample=True)]
dls = get_ts_dls(X, y, splits=splits, tfms=tfms, drop_last=False, shuffle_train=False, batch_tfms=batch_tfms, bs=10_000)
```
☣️☣️ You will be able to create a dls (TSDataLoaders) object with unusually large batch sizes. I've tested it with a large dataset and a batch size = 100_000 and it worked fine. This is because ROCKET is not a usual Deep Learning model. It just applies convolutions (kernels) one at a time to create the features.
Instantiate a rocket model with the desired n_kernels (authors use 10_000) and kernel sizes (7, 9 and 11 in the original paper).
```
model = build_ts_model(ROCKET, dls=dls) # n_kernels=10_000, kss=[7, 9, 11] set by default, but you can pass other values as kwargs
```
Now generate ROCKET features for the entire train and valid datasets using the convenience function `create_rocket_features`.
This transforms the original data, creating 20k features per sample.
```
X_train, y_train = create_rocket_features(dls.train, model)
X_valid, y_valid = create_rocket_features(dls.valid, model)
X_train.shape, X_valid.shape
```
### 2️⃣ Apply a classifier
Once you build the 20k features per sample, you can use them to train any classifier of your choice.
#### RidgeClassifierCV
And now you apply a classifier of your choice.
With RidgeClassifierCV in particular, there's no need to normalize the calculated features before passing them to the classifier, as it does it internally (if normalize is set to True as recommended by the authors).
```
from sklearn.linear_model import RidgeClassifierCV
ridge = RidgeClassifierCV(alphas=np.logspace(-8, 8, 17), normalize=True)
ridge.fit(X_train, y_train)
print(f'alpha: {ridge.alpha_:.2E} train: {ridge.score(X_train, y_train):.5f} valid: {ridge.score(X_valid, y_valid):.5f}')
```
This result is amazing!! The previous state of the art (InceptionTime) was .37837
#### Logistic Regression
In the case of other classifiers (like Logistic Regression), the authors recommend a per-feature normalization.
```
eps = 1e-6
Cs = np.logspace(-5, 5, 11)
from sklearn.linear_model import LogisticRegression
best_loss = np.inf
for i, C in enumerate(Cs):
f_mean = X_train.mean(axis=0, keepdims=True)
f_std = X_train.std(axis=0, keepdims=True) + eps # epsilon to avoid dividing by 0
X_train_tfm2 = (X_train - f_mean) / f_std
X_valid_tfm2 = (X_valid - f_mean) / f_std
classifier = LogisticRegression(penalty='l2', C=C, n_jobs=-1)
classifier.fit(X_train_tfm2, y_train)
probas = classifier.predict_proba(X_train_tfm2)
loss = nn.CrossEntropyLoss()(torch.tensor(probas), torch.tensor(y_train)).item()
train_score = classifier.score(X_train_tfm2, y_train)
val_score = classifier.score(X_valid_tfm2, y_valid)
if loss < best_loss:
best_eps = eps
best_C = C
best_loss = loss
best_train_score = train_score
best_val_score = val_score
print('{:2} eps: {:.2E} C: {:.2E} loss: {:.5f} train_acc: {:.5f} valid_acc: {:.5f}'.format(
i, eps, C, loss, train_score, val_score))
print('\nBest result:')
print('eps: {:.2E} C: {:.2E} train_loss: {:.5f} train_acc: {:.5f} valid_acc: {:.5f}'.format(
best_eps, best_C, best_loss, best_train_score, best_val_score))
```
☣️ Note: Epsilon has a large impact on the result. You can actually test several values to find the one that best fits your problem, but bear in mind you can only select C and epsilon based on train data!!!
##### RandomSearch
One way to do this would be to perform a random search using several epsilon and C values
```
n_tests = 10
epss = np.logspace(-8, 0, 9)
Cs = np.logspace(-5, 5, 11)
from sklearn.linear_model import LogisticRegression
best_loss = np.inf
for i in range(n_tests):
eps = np.random.choice(epss)
C = np.random.choice(Cs)
f_mean = X_train.mean(axis=0, keepdims=True)
f_std = X_train.std(axis=0, keepdims=True) + eps # epsilon
X_train_tfm2 = (X_train - f_mean) / f_std
X_valid_tfm2 = (X_valid - f_mean) / f_std
classifier = LogisticRegression(penalty='l2', C=C, n_jobs=-1)
classifier.fit(X_train_tfm2, y_train)
probas = classifier.predict_proba(X_train_tfm2)
loss = nn.CrossEntropyLoss()(torch.tensor(probas), torch.tensor(y_train)).item()
train_score = classifier.score(X_train_tfm2, y_train)
val_score = classifier.score(X_valid_tfm2, y_valid)
if loss < best_loss:
best_eps = eps
best_C = C
best_loss = loss
best_train_score = train_score
best_val_score = val_score
print('{:2} eps: {:.2E} C: {:.2E} loss: {:.5f} train_acc: {:.5f} valid_acc: {:.5f}'.format(
i, eps, C, loss, train_score, val_score))
print('\nBest result:')
print('eps: {:.2E} C: {:.2E} train_loss: {:.5f} train_acc: {:.5f} valid_acc: {:.5f}'.format(
best_eps, best_C, best_loss, best_train_score, best_val_score))
```
In general, the original method may be a bit faster than the GPU method, but for larger datasets, there's a great benefit in using the GPU version.
In addition to this, I have also run the code on the TSC UCR multivariate datasets (all the ones that don't contain nan values), and the results are also very good, beating the previous state-of-the-art in this category as well by a large margin. For example, ROCKET reduces InceptionTime errors by 26% on average.
#### Fastai classifier head
```
X = concat(X_train, X_valid)
y = concat(y_train, y_valid)
splits = get_predefined_splits(X_train, X_valid)
tfms = [None, [Categorize()]]
dsets = TSDatasets(X, y, tfms=tfms, splits=splits)
dls = TSDataLoaders.from_dsets(dsets.train, dsets.valid, bs=128, batch_tfms=[TSStandardize(by_var=True)])# per feature normalization
dls.show_batch()
def lin_zero_init(layer):
if isinstance(layer, nn.Linear):
nn.init.constant_(layer.weight.data, 0.)
if layer.bias is not None: nn.init.constant_(layer.bias.data, 0.)
model = create_mlp_head(dls.vars, dls.c, dls.len)
model.apply(lin_zero_init)
learn = Learner(dls, model, metrics=accuracy, cbs=ShowGraph())
learn.fit_one_cycle(50, lr_max=1e-4)
learn.plot_metrics()
```
#### XGBoost
```
eps = 1e-6
# normalize 'per feature'
f_mean = X_train.mean(axis=0, keepdims=True)
f_std = X_train.std(axis=0, keepdims=True) + eps
X_train_norm = (X_train - f_mean) / f_std
X_valid_norm = (X_valid - f_mean) / f_std
import xgboost as xgb
classifier = xgb.XGBClassifier(max_depth=3,
learning_rate=0.1,
n_estimators=100,
verbosity=1,
objective='binary:logistic',
booster='gbtree',
tree_method='auto',
n_jobs=-1,
gpu_id=default_device().index,
gamma=0,
min_child_weight=1,
max_delta_step=0,
subsample=.5,
colsample_bytree=1,
colsample_bylevel=1,
colsample_bynode=1,
reg_alpha=0,
reg_lambda=1,
scale_pos_weight=1,
base_score=0.5,
random_state=0,
missing=None)
classifier.fit(X_train_norm, y_train)
preds = classifier.predict(X_valid_norm)
(preds == y_valid).mean()
```
## Conclusions
ROCKET is a great method for TSC that has established a new level of performance both in terms of accuracy and time. It does it by successfully applying an approach quite different from the traditional DL approaches. The method uses 10k random kernels to generate features that are then classified by linear classifiers (although you may use a classifier of your choice).
The original method has 2 limitations (lack of multivariate and lack of GPU support) that are overcome by the Pytorch implementation shared in this notebook.
So this is all the code you need to train a state-of-the-art model using rocket and GPU in `tsai`:
```
X, y, splits = get_UCR_data('HandMovementDirection', return_split=False)
tfms = [None, [Categorize()]]
batch_tfms = [TSStandardize(by_sample=True)]
dsets = TSDatasets(X, y, tfms=tfms, splits=splits)
dls = TSDataLoaders.from_dsets(dsets.train, dsets.valid, bs=64, drop_last=False, shuffle_train=False, batch_tfms=[TSStandardize(by_sample=True)])
model = create_model(ROCKET, dls=dls)
X_train, y_train = create_rocket_features(dls.train, model)
X_valid, y_valid = create_rocket_features(dls.valid, model)
ridge = RidgeClassifierCV(alphas=np.logspace(-8, 8, 17), normalize=True)
ridge.fit(X_train, y_train)
print(f'alpha: {ridge.alpha_:.2E} train: {ridge.score(X_train, y_train):.5f} valid: {ridge.score(X_valid, y_valid):.5f}')
```
| github_jupyter |
# Brownian process in stock price dynamics
Brownian Moton:

source: https://en.wikipedia.org/wiki/Brownian_motion

A **random-walk** can be seen as a **motion** resulting from a succession of discrete **random steps**.
The random-walk after the i-th steps is:
\begin{equation}
\tag{1}
X_{i} = X_{i-1} + \epsilon_{i}
\end{equation}
where $X_{i=0} = X_{0} = 0$ is the starting point and $\epsilon_{i}$ is a random variable
```
# conda install -c anaconda pandas-datareader
# pip install pandas-datareader
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
sns.set()
# Possible steps
steps = [-1,1] # backward and forward steps of 1 unit
# Nr of steps n_steps
n_steps = 100
# Initialise the random walk variable X
X = np.zeros(n_steps) #<--- numpy array of (N=n_steps) zeros
# Fill in X according to eq. 1
for i in range(1,n_steps):
X[i]= X[i-1] + np.random.choice(steps)#<--- from 1 to fulfill Initial condition
# Faster alternative
def random_walk(steps,n_steps):
random_steps = np.random.choice(steps,size=n_steps)
X = random_steps.cumsum()
X[0] = 0 # <--- initial position
return X
for i in range(4):
plt.plot(random_walk(steps,n_steps))
```
**If we repeat the experiment, where does the walker end up on average?**
```
# Repeat the random walk n_trials time
# Record the last position for each trial
def monte_random_walk(n_steps,steps,n_trials):
X_fin = np.zeros(n_trials)#<-- X_fin numpy array of (N=n_trial) zeros
for i in range(n_trials):
X_fin[i] =random_walk(steps,n_steps)[-1]
return X_fin
n_trial = 20000
steps = [-1,1]
n_steps = 100
X_fin = monte_random_walk(n_steps,steps,n_trial)
# Plot the distribution of X_fin
width_bin = 4
n_bins = int(np.ceil((np.max(X_fin)-np.min(X_fin))/width_bin))
sns.distplot(X_fin,kde=True,bins=n_bins);
plt.xlabel('Final position');
np.std(X_fin)
```

We can see a Brownian process $B(t)$ as a **continuous Gaussian** random walk.
**Gaussian & continuous**: we divide the observation time $t$ into $N$ small timesteps $\Delta t$, so that $t=N\cdot\Delta t$.
For any time $t_i=i\cdot\Delta t$, the change in $B$ is normally distributed:
$$ B_{i+1}-B_i \sim \sqrt{\Delta t}\cdot N(0,1)$$
Taking the time step $\Delta t$ smaller and smaller makes $B$ a continuous random walk.
```
def brownian_motion(T,N,n_trials,random_seed = None):
np.random.seed(random_seed)
dt = T/N
random_steps = np.sqrt(dt)*np.random.normal(loc = 0,scale = 1,size = (N,n_trials))
random_steps[0,:] = 0
X = np.cumsum(random_steps,axis=0)
return X
T=7
N=100
n_trials=2000
random_seed = 1
dt=T/N
dt
X= brownian_motion(T,N,n_trials,random_seed)
# Last step
X_fin = X[-1,:]
plt.plot(X);
# Plot the distribution of X_fin
width_bin = .51
n_bins = int(np.ceil((np.max(X_fin)-np.min(X_fin))/width_bin))
sns.distplot(X_fin,bins=n_bins);
```
### Connection to stock-price
The dynamics of stock-prices can be modeled by the following equation:
\begin{equation}
\tag{2}
\Delta S_{t} = \mu S_{t} \Delta t + \sigma S_{t}\Delta B_{t}
\end{equation}
where:
$S$ is the stock price,
$\mu$ is the drift coefficient (a.k.a. the mean of returns),
$\sigma$ is the diffusion coefficient (a.k.a. the standard deviation of returns),
$B$ is the Brownian motion.
Eq. (2) admits the following solution:
\begin{equation}
\tag{3}
S(t) = S_{0} \cdot e^{[(\mu - \sigma^2/2)\cdot t + \sigma \cdot B_{t}] }
\end{equation}
```
def stock_price(N,S0,u,sigma,T,n_trials,random_seed = None):
"""
N: number of intervals
S0: initial stock price
u: mean of returns over some period
sigma: volatility a.k.a. standard deviation of returns
random_seed: seed for pseudorandom generator
T: observation time
n_trials: number of Brownian paths
"""
dt = T/N
t = np.arange(0.,T,dt)
t=t[:,np.newaxis]
drift = (u - (sigma/np.sqrt(2))**2)*t
shock = sigma * brownian_motion(T,N,n_trials,random_seed = None)
S = S0*np.exp(drift + shock)
return t, S
```
### Scraping from Yahoo Finance
```
from pandas_datareader import data as scraper
import pandas as pd
symbol = 'FB' # 'FB'Facebook, 'FCA.MI' FIAT Crysler, 'AAPL' Apple
start_date = '2020-01-01'
end_date = '2020-12-31'
df = scraper.DataReader(symbol, 'yahoo', start_date, end_date)
df.head()
df.describe()
#close price
close_price = df['Close']
close_price.plot();
plt.ylabel('Price $');
# Calculate the daily percentage return
daily_return= (close_price.pct_change() )
daily_return.plot(label='Daily Return')
(close_price*.002).plot(label='Close Price');
plt.legend();
# Plot the distribution of daily_return
width_bin = .01
n_bins = int(np.ceil((np.max(daily_return)-np.min(daily_return))/width_bin))
sns.distplot(daily_return,bins=n_bins);
plt.title("Daily returns on FB, 2020");
# compute the return mu and the sigma
mu = np.mean(daily_return)
sigma = np.std(daily_return)
print(f'Mean of daily-returns μ: {round(mu,4)*100} %')
print('')
print(f'Volatility σ: {round(sigma,3)}')
# Parameters simulation
N = 5000 # <--- lenght of each trials
T=252 # <--- # days of a business year
S0=close_price[0] # <--- Initial close-price
n_trials=25500 # <--- # of trials
T/N # <--- Δt about 0.05
# Extracting stock price pathways and time vector from the model
t,model_S = stock_price(N,S0,mu,sigma,T,n_trials,random_seed = 42)
#model_S.shape
# Define other two time range
t2=np.arange(0,253,1)
# Plot simulated and actual stock-prizes
plt.plot(t,model_S);
#plt.plot(t3,close_price[-12:],linewidth=3,c='k');
plt.plot(t2,close_price[:],linewidth=3,c='k');
plt.xlabel('Days');
plt.ylabel('Stock Price');
# Compute final predicted stock-price
S_fin = model_S[-1,:]
# Calculate mean and std from S_fin
mean = np.mean(S_fin)
median=np.median(S_fin)
std_ = np.std(S_fin)
min_ = np.min(S_fin)
max_ = np.max(S_fin)
print('*******************')
print(f' * Statistics *')
print('*******************\n')
print(f'Min: {round(min_)} $')
print(f'Max: {round(max_)} $')
print(f'Median: {round(median)} $')
print(f'Mean: {round(mean)} $')
print(f'Standard deviation: {round(std_)} $')
# Plot the simulated final stock-price
sns.distplot(S_fin);
plt.plot([median,median], [0, .02], 'k-.', lw=6,label='median')
plt.plot([mean,mean], [0, .02], 'b-.', lw=2,label='mean')
plt.plot([close_price[-1],close_price[-1]], [0, .02], 'g-', lw=2,label='actual prize')
plt.ylim(top=0.004);
plt.xlim(left=-100,right=1200)
plt.legend();
plt.title('Montecarlo Simulation on Facebook Stock-Price');
plt.xlabel('Stock price $');
from scipy.stats import norm,lognorm,t
def lognorm_fit(data_,x_min,x_max,dx):
# Fits the datas with a log-norm distribution
params = lognorm.fit(data_)
shape, mean, std = params
# Generate a log-norm probability distribution function pdf
x = np.arange(x_min,x_max,dx)
lnd = lognorm(s=shape,loc=mean,scale=std)# <--- initialise the log-norm distribution
lognormal_pdf =lnd.pdf(x)
# Calculate the mode of distribution
index_max = np.argmax(lognormal_pdf) #np.where(lognormal_pdf == np.max(lognormal_pdf))
mode =x[index_max]
return lnd,lognormal_pdf, mode,x
x_min=0
x_max=5000
dx=.1
# Distribution and mode
lnd_S,lognormal_pdf_S,mode_S,x = lognorm_fit(S_fin,x_min,x_max,dx)
# Plot the simulated final stock-price
sns.distplot(S_fin);
sns.lineplot(x,lognormal_pdf_S,label = 'log-normal')
plt.plot([mode_S,mode_S],[0,.02],'r-.',label= 'mode')
plt.plot([median,median], [0, .02], 'k-.', lw=6,label='median')
plt.plot([mean,mean], [0, .02], 'b-.', lw=2,label='mean')
plt.plot([close_price[-1],close_price[-1]], [0, .02], 'g-', lw=2,label='actual prize')
plt.ylim(top=0.004);
plt.xlim(left=-100,right=1200)
plt.legend();
plt.title('Montecarlo Simulation on Facebook Stock-Price');
plt.xlabel('Stock price $');
```
What is the probability of having a loss after one year?
```
# Annual Return
annual_return_pct = (S_fin -S0)/S0
# Calculate mean and std from S_fin
mean_ar = np.mean(annual_return_pct)
median_ar=np.median(annual_return_pct)
std_ar = np.std(annual_return_pct)
min_ar = np.min(annual_return_pct)
max_ar = np.max(annual_return_pct)
print('*******************')
print(f' * Statistics *')
print('*******************\n')
print(f'Min: {round(min_ar,2)} %')
print(f'Max: {round(max_ar,2)} %')
print(f'Median: {round(median_ar,2)} %')
print(f'Mean: {round(mean_ar,2)} %')
print(f'Standard deviation: {round(std_ar,2)} %')
# Plot distribution of simulated annual return
sns.distplot(annual_return_pct);
plt.ylim(top=0.8);
plt.xlim(left=-3,right=6)
plt.title('Montecarlo Simulation on Facebook Stock-Price');
plt.xlabel('Annual Return % ');
```
Analysis of underlying distribution
```
x_min=-5
x_max=6
dx=.001
# Distribution and mode
lnd_ar,lognormal_pdf_ar,mode_ar,x_ar = lognorm_fit(annual_return_pct,x_min,x_max,dx)
# Plot distribution of simulated annual return
sns.distplot(annual_return_pct);
sns.lineplot(x_ar,lognormal_pdf_ar,label = 'log-normal');
plt.plot([mode_ar,mode_ar],[0,.9],'k-.',label= 'mode');
plt.ylim(top=0.8);
plt.xlim(left=-3,right=6)
plt.legend();
plt.text(x=2,y=.5,s=f'mode @ {round(mode_ar,3)}',)
plt.title('Montecarlo Simulation on Facebook Stock-Price');
plt.xlabel('Annual Return % ');
# Cumulative distribution Function CDF (probability of obtaining a value equal or smaller than the given value)
cdf = lnd_ar.cdf(x_ar) # <--- cumulative
# Plot CDF SF function
sns.lineplot(x_ar,cdf,label='CDF');
plt.plot([0,0], [0, 1], 'r-', lw=2,label='No returns');
plt.legend();
plt.xlabel('Annual Return %');
def get_prob(value_return1,value_return2=None):
mask_1 = (x_ar<=value_return1)
if value_return2==None:
prob = round(np.max(cdf[mask_1])*100,2)
else:
mask_2 = (x_ar<=value_return2)
area1 = np.max(cdf[mask_1])*100
area2 = np.max(cdf[mask_2])*100
prob = np.round(area2 - area1,2)
return prob
print('**************************************')
print(' * Results *')
print('**************************************\n')
print(' Return_1 Return_2 Probability\n')
print(f'Loss {get_prob(-0.0001)} %')
print(f'Gain 0.1% 1% {get_prob(0.1,1)} % ')
print(f'Gain 1% 2% {get_prob(1,2)} % ')
```
| github_jupyter |
# City street network orientations
Compare the spatial orientations of city street networks with OSMnx.
- [Overview of OSMnx](http://geoffboeing.com/2016/11/osmnx-python-street-networks/)
- [GitHub repo](https://github.com/gboeing/osmnx)
- [Examples, demos, tutorials](https://github.com/gboeing/osmnx-examples)
- [Documentation](https://osmnx.readthedocs.io/en/stable/)
- [Journal article/citation](http://geoffboeing.com/publications/osmnx-complex-street-networks/)
```
import matplotlib.pyplot as plt
import numpy as np
import osmnx as ox
import pandas as pd
ox.config(log_console=True, use_cache=True)
weight_by_length = False
ox.__version__
# define the study sites as label : query
places = {'Atlanta' : 'Atlanta, GA, USA',
'Boston' : 'Boston, MA, USA',
'Buffalo' : 'Buffalo, NY, USA',
'Charlotte' : 'Charlotte, NC, USA',
'Chicago' : 'Chicago, IL, USA',
'Cleveland' : 'Cleveland, OH, USA',
'Dallas' : 'Dallas, TX, USA',
'Houston' : 'Houston, TX, USA',
'Denver' : 'Denver, CO, USA',
'Detroit' : 'Detroit, MI, USA',
'Las Vegas' : 'Las Vegas, NV, USA',
'Los Angeles' : {'city':'Los Angeles', 'state':'CA', 'country':'USA'},
'Manhattan' : 'Manhattan, NYC, NY, USA',
'Miami' : 'Miami, FL, USA',
'Minneapolis' : 'Minneapolis, MN, USA',
'Orlando' : 'Orlando, FL, USA',
'Philadelphia' : 'Philadelphia, PA, USA',
'Phoenix' : 'Phoenix, AZ, USA',
'Portland' : 'Portland, OR, USA',
'Sacramento' : 'Sacramento, CA, USA',
'San Francisco' : {'city':'San Francisco', 'state':'CA', 'country':'USA'},
'Seattle' : 'Seattle, WA, USA',
'St Louis' : 'St. Louis, MO, USA',
'Tampa' : 'Tampa, FL, USA',
'Washington' : 'Washington, DC, USA'}
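# Override the study sites above and analyze a single city for this run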
places = {'Accra' : 'Accra Metropolitan, Greater Accra Region, Ghana'}
# verify OSMnx geocodes each query to what you expect
gdf = ox.gdf_from_places(places.values())
gdf
```
## Get the street networks and their edge bearings
```
def reverse_bearing(x):
return x + 180 if x < 180 else x - 180
bearings = {}
for place in sorted(places.keys()):
# get the graph
query = places[place]
G = ox.graph_from_place(query, network_type='drive')
# calculate edge bearings
Gu = ox.add_edge_bearings(ox.get_undirected(G))
if weight_by_length:
# weight bearings by length (meters)
city_bearings = []
for u, v, k, d in Gu.edges(keys=True, data=True):
city_bearings.extend([d['bearing']] * int(d['length']))
b = pd.Series(city_bearings)
bearings[place] = pd.concat([b, b.map(reverse_bearing)]).reset_index(drop='True')
else:
# don't weight bearings, just take one value per street segment
b = pd.Series([d['bearing'] for u, v, k, d in Gu.edges(keys=True, data=True)])
bearings[place] = pd.concat([b, b.map(reverse_bearing)]).reset_index(drop='True')
```
## Visualize it
```
def count_and_merge(n, bearings):
# make twice as many bins as desired, then merge them in pairs
# prevents bin-edge effects around common values like 0° and 90°
n = n * 2
bins = np.arange(n + 1) * 360 / n
count, _ = np.histogram(bearings, bins=bins)
# move the last bin to the front, so eg 0.01° and 359.99° will be binned together
count = np.roll(count, 1)
return count[::2] + count[1::2]
# function to draw a polar histogram for a set of edge bearings
def polar_plot(ax, bearings, n=36, title=''):
bins = np.arange(n + 1) * 360 / n
count = count_and_merge(n, bearings)
_, division = np.histogram(bearings, bins=bins)
frequency = count / count.sum()
division = division[0:-1]
width = 2 * np.pi / n
ax.set_theta_zero_location('N')
ax.set_theta_direction('clockwise')
x = division * np.pi / 180
bars = ax.bar(x, height=frequency, width=width, align='center', bottom=0, zorder=2,
color='#003366', edgecolor='k', linewidth=0.5, alpha=0.7)
ax.set_ylim(top=frequency.max())
title_font = {'family':'Century Gothic', 'size':24, 'weight':'bold'}
xtick_font = {'family':'Century Gothic', 'size':10, 'weight':'bold', 'alpha':1.0, 'zorder':3}
ytick_font = {'family':'Century Gothic', 'size': 9, 'weight':'bold', 'alpha':0.2, 'zorder':3}
ax.set_title(title.upper(), y=1.05, fontdict=title_font)
ax.set_yticks(np.linspace(0, max(ax.get_ylim()), 5))
yticklabels = ['{:.2f}'.format(y) for y in ax.get_yticks()]
yticklabels[0] = ''
ax.set_yticklabels(labels=yticklabels, fontdict=ytick_font)
xticklabels = ['N', '', 'E', '', 'S', '', 'W', '']
ax.set_xticklabels(labels=xticklabels, fontdict=xtick_font)
ax.tick_params(axis='x', which='major', pad=-2)
# create figure and axes
n = len(places)
ncols = int(np.ceil(np.sqrt(n)))
nrows = int(np.ceil(n / ncols))
figsize = (ncols * 5, nrows * 5)
fig, axes = plt.subplots(nrows, ncols, figsize=figsize, subplot_kw={'projection':'polar'})
# plot each city's polar histogram
for ax, place in zip(axes.flat, sorted(places.keys())):
polar_plot(ax, bearings[place].dropna(), title=place)
# add super title and save full image
suptitle_font = {'family':'Century Gothic', 'fontsize':60, 'fontweight':'normal', 'y':1.07}
fig.suptitle('City Street Network Orientation', **suptitle_font)
fig.tight_layout()
fig.subplots_adjust(hspace=0.35)
fig.savefig('images/street-orientations.png', dpi=120, bbox_inches='tight')
plt.close()
```
| github_jupyter |
# Robot Modeling
Recalling the previous practice, the differential equation that characterizes a mass-spring-damper system is:
$$
m \ddot{x} + c \dot{x} + k x = F
$$
and we reviewed 3 ways to obtain the behavior of that system. However, we are interested in the behavior of a more complex system, a robot; we will start with a simple pendulum, which has the following equation of motion:
$$
m l^2 \ddot{q} + m g l \cos{q} = \tau
$$
As we can see, they are similar in that both involve a single variable. However, in the second equation our variable appears inside a nonlinear function ($\cos{q}$), so the differential equation is nonlinear, and therefore we _cannot_ use the transfer-function formalism to solve it; we have to use the ```odeint``` function instead.
Since it is of second order, we have to split our differential equation into two simpler ones, using the following trick:
$$
\frac{d}{dt} q = \dot{q}
$$
then we have two differential equations, so we can solve for two unknowns, $q$ and $\dot{q}$.
Using our knowledge of linear algebra, we can arrange our system of equations in matrix form, so that where before we had:
$$
\begin{align}
\frac{d}{dt} q &= \dot{q} \\
\frac{d}{dt} \dot{q} &= \ddot{q} = \frac{\tau - m g l \cos{q}}{ml^2}
\end{align}
$$
So we can see that our system of equations has a larger state than before; the differential equation that was nonlinear and of second order can be written as nonlinear and of first order, as long as our state is larger.
Let us define what we mean by the state:
$$
x =
\begin{pmatrix}
q \\
\dot{q}
\end{pmatrix}
$$
with this definition of the state, we can write the system of equations above as:
$$
\frac{d}{dt} x = \dot{x} = \frac{d}{dt}
\begin{pmatrix}
q \\
\dot{q}
\end{pmatrix} =
\begin{pmatrix}
\dot{q} \\
\frac{\tau - m g l \cos{q}}{ml^2}
\end{pmatrix}
$$
or $\dot{x} = f(x)$, where $f(x)$ is a vector-valued function, that is, a vector of functions:
$$
f(x) =
\begin{pmatrix}
\dot{q} \\
\frac{\tau - m g l \cos{q}}{ml^2}
\end{pmatrix}
$$
With this we are ready to simulate this mechanical system with the help of ```odeint()```; let's start by importing the necessary libraries:
```
from scipy.integrate import odeint
from numpy import linspace
```
and defining a function that returns an array with the values of $f(x)$:
```
def f(x, t):
from numpy import cos
q, q̇ = x
τ = 0
m = 1
g = 9.81
l = 1
return [q̇, (τ - m*g*l*cos(q))/(m*l**2)]
```
We will simulate from time $0$ to $10$, and the initial conditions of the pendulum are $q=0$ and $\dot{q} = 0$.
```
ts = linspace(0, 10, 100)
x0 = [0, 0]
```
We use the ```odeint``` function to simulate the behavior of the pendulum, passing it the function we programmed with the dynamics $f(x)$, and we extract the values of $q$ and $\dot{q}$ that ```odeint``` returns wrapped in the state $x$:
```
xs = odeint(func = f, y0 = x0, t = ts)
qs, q̇s = list(zip(*xs.tolist()))
```
At this point we already have our simulation data; all that is left is to plot it to interpret the results:
```
%matplotlib inline
from matplotlib.pyplot import style, plot, figure
style.use("ggplot")
fig1 = figure(figsize = (8, 8))
ax1 = fig1.gca()
ax1.plot(xs);
fig2 = figure(figsize = (8, 8))
ax2 = fig2.gca()
ax2.plot(qs)
ax2.plot(q̇s);
```
But trajectory plots are boring; remember that we can make an animation with matplotlib:
```
from matplotlib import animation
from numpy import sin, cos, arange
# Define the figure size
fig = figure(figsize=(8, 8))
# Define a single axes in the figure and set the x and y axis limits
axi = fig.add_subplot(111, autoscale_on=False, xlim=(-1.5, 1.5), ylim=(-2, 1))
# Use a line plot for the pendulum link
linea, = axi.plot([], [], "-o", lw=2, color='gray')
def init():
# This function runs only once and initializes the system
linea.set_data([], [])
return linea
def animate(i):
# This function runs for each frame of the GIF
# Get the x and y coordinates of the link
xs, ys = [[0, cos(qs[i])], [0, sin(qs[i])]]
linea.set_data(xs, ys)
return linea
# Create the animation, passing the figure defined at the beginning, the function to
# run for each frame, the number of frames to generate, the interval between frames,
# and the initialization function
ani = animation.FuncAnimation(fig, animate, arange(1, len(qs)), interval=25,
blit=True, init_func=init)
# Save the GIF to the indicated file
ani.save('./imagenes/pendulo-simple.gif', writer='imagemagick');
```

# Problems
1. Create a trajectory plot and an animation of a double pendulum.
| github_jupyter |
```
import pandas as pd
from lifelines import KaplanMeierFitter
import seaborn as sns
import matplotlib.pyplot as plt
preprints_df = pd.read_csv("output/biorxiv_article_metadata.tsv", sep="\t",)
preprints_df["date_received"] = pd.to_datetime(preprints_df["date_received"])
xml_df = (
preprints_df.sort_values(by="date_received")
.dropna(subset=["date_received"])
.groupby("doi")
.first()
)
api_df = pd.read_csv("output/biorxiv_published_api_data.tsv", sep="\t")
api_df[api_df["published_date"].str.contains(":")]
index = api_df[api_df["published_date"].str.contains(":")].index
api_df.loc[index, "published_date"] = (
api_df.loc[index, "published_date"].str.split(":").str[0]
)
for col in ["preprint_date", "published_date"]:
api_df[col] = pd.to_datetime(api_df[col])
api_df.set_index("biorxiv_doi")
merged_df = pd.merge(
xml_df,
api_df.set_index("biorxiv_doi"),
left_index=True,
right_index=True,
how="outer",
)
merged_df
merged_df["document"].isna().sum()
merged_df["published_doi"].isna().sum()
len(merged_df)
# lets ignore papers we don't have xmls for
merged_df = pd.merge(
xml_df,
api_df.set_index("biorxiv_doi"),
left_index=True,
right_index=True,
how="left",
)
merged_df["published"] = ~merged_df["published_doi"].isna()
# I should change this to when the data was pulled, but I didn't record that for now :(
merged_df.loc[merged_df["published"], "observation_date"] = merged_df.loc[
merged_df["published"], "published_date"
]
merged_df.loc[~merged_df["published"], "observation_date"] = pd.datetime.today()
merged_df["observation_duration"] = (
merged_df["observation_date"] - merged_df["date_received"]
)
(merged_df["observation_duration"] < pd.Timedelta(0)).sum()
merged_df = merged_df[merged_df["observation_duration"] > pd.Timedelta(0)]
ax = sns.distplot(
merged_df["observation_duration"].dt.total_seconds() / 60 / 60 / 24 / 365
)
kmf = KaplanMeierFitter()
kmf.fit(
merged_df["observation_duration"].dt.total_seconds() / 60 / 60 / 24 / 365,
event_observed=merged_df["published"],
)
ax = kmf.plot(label="all papers", logx=True)
_ = ax.set_ylabel("proportion of unpublished biorxiv papers")
_ = ax.set_xlabel("timeline (years)")
_ = ax.set_ylim(0, 1)
f = plt.figure(figsize=(10, 8))
ax = None
for category, cat_group in merged_df.groupby("category"):
kmf.fit(
cat_group["observation_duration"].dt.total_seconds() / 60 / 60 / 24 / 365,
event_observed=cat_group["published"],
)
ax = kmf.plot(label=category, ax=ax, ci_show=False, logx=True)
# Shrink current axis by 20%
box = ax.get_position()
ax.set_position([box.x0, box.y0, box.width * 0.8, box.height])
# Put a legend to the right of the current axis
_ = ax.legend(loc="center left", bbox_to_anchor=(1, 0.5), title="Biorxiv category")
_ = ax.set_ylabel("proportion of unpublished biorxiv papers")
_ = ax.set_xlabel("timeline (years)")
_ = ax.set_ylim(0, 1)
merged_df["doi_prefix"] = merged_df["published_doi"].str.split("/").str[0]
%%time
f = plt.figure(figsize=(10, 8))
ax = None
for category, cat_group in merged_df.groupby("doi_prefix"):
if len(cat_group) > 100:
kmf.fit(
cat_group["observation_duration"].dt.total_seconds() / 60 / 60 / 24 / 365,
event_observed=cat_group["published"],
)
ax = kmf.plot(label=category, ax=ax, ci_show=False, logx=True)
# Shrink current axis by 20%
box = ax.get_position()
ax.set_position([box.x0, box.y0, box.width * 0.8, box.height])
# Put a legend to the right of the current axis
_ = ax.legend(loc="center left", bbox_to_anchor=(1, 0.5), title="DOI prefix")
_ = ax.set_ylabel("proportion of unpublished biorxiv papers")
_ = ax.set_xlabel("timeline (years)")
_ = ax.set_ylim(0, 1)
%%time
doi_prefix_df = merged_df.groupby("doi_prefix").apply(
lambda cat_group: pd.Series(
{
"count": len(cat_group),
"80th_percentile": kmf.fit(
cat_group["observation_duration"].dt.total_seconds() / 60 / 60 / 24,
event_observed=cat_group["published"],
).percentile(0.8),
}
)
)
doi_prefix_df[doi_prefix_df["count"] > 50].sort_values("80th_percentile").head()
```
F1000 Research Ltd <== 10.12688
MDPI AG <== 10.3390 - wikipedia notes questionable quality of peer-review
| github_jupyter |
```
import calour as ca
import calour_utils as cu
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
import glob
import os
import pandas as pd
import shutil
ca.set_log_level('INFO')
%matplotlib inline
pwd
```
# Load the data
### Without the known blooming bacteria (from American Gut paper)
```
ca.set_log_level('ERROR')
ratios=ca.read_amplicon('../lefse_ratios/ratios.biom','../studies/index.csv',
feature_metadata_file='../taxonomy/DB1-15_taxonomy_svs_numbers.tsv',normalize=None, min_reads=None)
ca.set_log_level('INFO')
ratios.sparse = False
ratios
np.sum(np.sum(ratios.data==0,axis=0)>30)
ratios.feature_metadata['keep']=(np.sum(ratios.data==0,axis=0)<=30)
ratios=ratios.filter_by_metadata('keep',[True],axis='f')
```
## Fix taxonomy and filter chloroplast/mitochondria
```
ratios.feature_metadata['taxonomy'] = ratios.feature_metadata.Taxon
ratios.feature_metadata['taxonomy'].fillna('NA',inplace=True)
ratios = ratios.filter_by_taxonomy(['chloroplast','cyanobacteria','mitochondria'],negate=True)
disease_colors = {}
disease_colors = {xx: (0,0,0) for xx in ratios.sample_metadata.disease.unique()}
disease_colors.update({'HIV': (1.00,0.93,0.35),'Autism': (0.50,0.99,0.52),'Bipolar': (1.00, 0.63, 0.00),
'IBD_Crohn disease': (0.72,0.11,0.11),'IBD_Ulcerative Colitis': (0.043,1,0.97),
'IBD_Inflammtory bowel disease': (0.90,0.59,0.043),
'Diabetes T2': (0.47,0.53,0.80),
'Depression': (0.48,0.12,0.64),
'Obesity': (0.25,0.32,0.71),
'Parkinson': (0.29,0.08,0.55),
'Schizophrenia': (0.88,0.75,0.91),
'Gastroenteritis': (0.94,0.33,0.31),
'Heart diseases': (0.33,0.43,1.00),
'Irritable bowel syndrom': (0.90,0.45,0.45),
'Alzheimer': (0.83, 0.83, 0.83), 'Anorexia': (0.83, 0.83, 0.83), 'Cancer': (0.83, 0.83, 0.83), 'Autoimmun diseases': (0.83, 0.83, 0.83), 'C.difficile infection': (0.83, 0.83, 0.83),
'Cancer': (0.83, 0.83, 0.83), 'Chronic fatigue syndrome': (0.83, 0.83, 0.83), 'Diabetes T1': (0.83, 0.83, 0.83), 'Gout': (0.83, 0.83, 0.83),
'Hepatitis B': (0.83, 0.83, 0.83), 'Hepatitis C': (0.83, 0.83, 0.83), 'Hypertension': (0.83, 0.83, 0.83),
'Lupus': (0.83, 0.83, 0.83), 'Pancreatitis': (0.83, 0.83, 0.83), 'Psoriasis': (0.83, 0.83, 0.83), 'Rheumatoid arthritis': (0.83, 0.83, 0.83),
})
```
### Create a pie chart for diseases
```
ratios.sample_metadata['pie_disease']=ratios.sample_metadata.disease.copy()
ratios.sample_metadata.pie_disease.replace('Gout','Other',inplace=True)
ratios.sample_metadata.pie_disease.replace('Irritable bowel syndrom','IBS',inplace=True)
ratios.sample_metadata.pie_disease.replace('Hepatitis B','Other',inplace=True)
ratios.sample_metadata.pie_disease.replace('IBD_Crohn disease','IBD',inplace=True)
ratios.sample_metadata.pie_disease.replace('IBD_Ulcerative Colitis','IBD',inplace=True)
ratios.sample_metadata.pie_disease.replace('IBD_Inflammtory bowel disease','IBD',inplace=True)
ratios.sample_metadata.pie_disease.replace('Alzheimer','Other',inplace=True)
ratios.sample_metadata.pie_disease.replace('Anorexia','Other',inplace=True)
ratios.sample_metadata.pie_disease.replace('Autoimmun diseases','Other',inplace=True)
ratios.sample_metadata.pie_disease.replace('Cancer','Other',inplace=True)
ratios.sample_metadata.pie_disease.replace('C.difficile infection','Other',inplace=True)
ratios.sample_metadata.pie_disease.replace('Diabetes T1','Other',inplace=True)
ratios.sample_metadata.pie_disease.replace('Hypertension','Other',inplace=True)
ratios.sample_metadata.pie_disease.replace('Chronic fatigue syndrome','Other',inplace=True)
ratios.sample_metadata.pie_disease.replace('Gout','Other',inplace=True)
ratios.sample_metadata.pie_disease.replace('C.difficile infection','Other',inplace=True)
ratios.sample_metadata.pie_disease.replace('Gout','Other',inplace=True)
ratios.sample_metadata.pie_disease.replace('Lupus','Other',inplace=True)
ratios.sample_metadata.pie_disease.replace('Pancreatitis','Other',inplace=True)
ratios.sample_metadata.pie_disease.replace('Psoriasis','Other',inplace=True)
ratios.sample_metadata.pie_disease.replace('Rheumatoid arthritis','Other',inplace=True)
pass
disease_colors.update({'HIV': (1.00,0.93,0.35),'Autism': (0.50,0.99,0.52),
'Bipolar': (1.00, 0.63, 0.00),
'IBD': (0.72,0.11,0.11),
'Diabetes T2': (0.47,0.53,0.80),
'Depression': (0.48,0.12,0.64),
'Obesity': (0.25,0.32,0.71),
'Parkinson’s': (0.29,0.08,0.55),
'Schizophrenia': (0.88,0.75,0.91),
'Gastroenteritis': (0.94,0.33,0.31),
'Heart diseases': (0.33,0.43,1.00),
'IBS': (0.90,0.45,0.45),
'Other': (0.83, 0.83, 0.83)})
plt.figure()
pp=plt.pie(ratios.sample_metadata.pie_disease.value_counts(),textprops={'fontsize': 7}, labels=ratios.sample_metadata.pie_disease.unique(), labeldistance=0.5, rotatelabels=True)
for pie_wedge in pp[0]:
pie_wedge.set_edgecolor('white')
pie_wedge.set_facecolor(disease_colors[pie_wedge.get_label()])
```
### Prepare the colormap for the heatmaps
We want coolwarm, with white for exact 0s (which mean not present)
```
current_cmap = mpl.cm.get_cmap('coolwarm')
current_cmap.set_bad(color='red')
ncm = current_cmap(np.linspace(0,1,1000000))
ncm[500000]=(1,1,1,1)
ncm=mpl.colors.ListedColormap(ncm)
```
# Look at the data
```
ratios.feature_metadata
ratios.plot(gui='cli',norm=None,cmap=ncm ,clim=[-0.5,0.5], bad_color='w')
ratios.plot(gui='cli',norm=None,cmap=ncm ,clim=[-1,1], bad_color='w')
ratios=ratios.sort_abundance(key=np.mean)
ratios.plot(gui='cli',norm=None,cmap=ncm ,clim=[-1,1], bad_color='w')
# cu.splot(ratios,'disease',norm=None,cmap=ncm,clim=[-0.5,0.5],xticks_max=None)
```
# Plot all bacteria
## aggregate all samples by disease so CD/UC count as 1
```
ratios_agg=ratios.aggregate_by_metadata('disease',agg='mean')
ratios_agg
# cu.splot(ratios_agg,'disease',norm=None,cmap=ncm,clim=[-0.25,0.25],xticks_max=None)
ratios_agg.plot(sample_field='disease',norm=None,cmap=ncm,clim=[-0.25,0.25],xticks_max=None)
ratios
np.sum(ratios_agg.data[:]>0)
np.sum(ratios_agg.data[:]<0)
np.sum(ratios_agg.data[:]==0)
```
## Sort by mean abundance over all disease
With 1 sample per disease (aggregation by mean)
```
ratios_agg=ratios_agg.sort_abundance(key=np.mean)
# cu.splot(ratios_agg,'disease',norm=None,cmap=ncm,clim=[-0.25,0.25],xticks_max=None)
allbact = ratios.filter_ids(ratios_agg.feature_metadata.index)
allbact = allbact.sort_samples('disease')
allbact
f=allbact.plot(sample_field='disease',norm=None,cmap=ncm,clim=[-1,1],xticks_max=None,xticklabel_len=None,
xticklabel_kwargs={'size':8, 'rotation':90}, barx_fields=['disease'],barx_label=False,barx_colors=disease_colors)
f.save_figure('../figures/sup-heatmap-allbact-lefse.pdf')
```
# Plot the non-specific bacteria
Using the binomial sign test (only on experiments where the bacteria is present), with at least 4 experiments per bacteria. FDR=0.1
The test is done on 1 aggregated sample per disease to prevent bias from diseases with many studies.
```
np.random.seed(2020)
nonspecific_agg=cu.get_sign_pvals(ratios_agg,alpha=0.25,min_present=4)
nonspecific = ratios.filter_ids(nonspecific_agg.feature_metadata.index)
nonspecific = nonspecific.sort_samples('disease')
nonspecific.feature_metadata = nonspecific.feature_metadata.join(nonspecific_agg.feature_metadata,lsuffix='',rsuffix='_agg')
nonspecific.plot(sample_field='disease',norm=None,cmap=ncm,clim=[-0.5,0.5],xticks_max=None,xticklabel_len=None)
cu.splot(nonspecific,'disease',norm=None,cmap=ncm,clim=[-1,1],xticks_max=None,xticklabel_len=None)
f=nonspecific.plot(sample_field='disease',norm=None,cmap=ncm,clim=[-1,1],xticks_max=None,xticklabel_len=None,
xticklabel_kwargs={'size':8, 'rotation':90},barx_fields=['disease'],barx_label=False,barx_colors=disease_colors)
f.save_figure('../figures/sup-heatmap-nonspecific-lefse.pdf')
```
### Save the non-specific bacteria
```
nonspecific_agg.save('../lefse_ratios/nonspecific/nonspecific')
nonspecific_agg.save_fasta('../lefse_ratios/nonspecific/nonspecific.fa',header='seq')
nonspecific.save('../lefse_ratios/nonspecific/nonspecific_all',fmt='txt')
```
### Also save only the ones going up or down
```
nsup_ids=nonspecific_agg.feature_metadata[nonspecific_agg.feature_metadata.esize > 0]
nsdown_ids=nonspecific_agg.feature_metadata[nonspecific_agg.feature_metadata.esize < 0]
len(nsup_ids)
len(nsdown_ids)
nsup = nonspecific.filter_ids(nsup_ids.index)
nsup.save('../lefse_ratios/nonspecific/nonspecific-up')
nsdown = nonspecific.filter_ids(nsdown_ids.index)
nsdown.save('../lefse_ratios/nonspecific/nonspecific-down')
```
## how many higher/lower in non-specific
```
np.sum(nonspecific_agg.feature_metadata.esize<0)
np.sum(nonspecific_agg.feature_metadata.esize>0)
```
## Get the enriched dbBact terms
```
nonspecific_agg.feature_metadata['_calour_stat'] = nonspecific_agg.feature_metadata['esize']
nonspecific_agg.feature_metadata['_calour_direction'] = 'down'
nonspecific_agg.feature_metadata.loc[nonspecific_agg.feature_metadata['esize']>0,'_calour_direction']='up'
f,dterms = nonspecific_agg.plot_diff_abundance_enrichment()
f.figure.savefig('../figures/sup-nonspecific-dbbact-terms-lefse.pdf')
```
### Draw the dbbact term wordcloud for the non-specific bacteria
```
dbbact=ca.database._get_database_class('dbbact')
f=dbbact.draw_wordcloud(nonspecific)
f.savefig('../figures/sup-wordcloud-nonspecific-lefse.pdf')
f=dbbact.draw_wordcloud(nsup)
f.savefig('../figures/sup-wordcloud-nonspecific-up-lefse.pdf')
f=dbbact.draw_wordcloud(nsdown)
f.savefig('../figures/sup-wordcloud-nonspecific-down-lefse.pdf')
```
# IBD specific
```
def nzdiff(data,labels):
'''Calculate the mean difference between two groups without using 0s
used for the calour.diff_abundance for only non-zero samples
Parameters
----------
data: np.array
feature * sample array (features in rows, samples in columns)
labels: np.array of 0s and 1s
the label for each sample.
Returns
-------
np.array
for each feature, mean(group1:group1!=0)- mean(group2: group2!=0)
'''
data0=data[:,labels==0]
data1=data[:,labels==1]
res = np.zeros(data.shape[0])
for i in range(data.shape[0]):
m1=data1[i,:]
m1=m1[m1!=0]
if len(m1) == 0:
continue
m1=np.mean(m1)
m0=data0[i,:]
m0=m0[m0!=0]
if len(m0) == 0:
continue
m0=np.mean(m0)
res[i]= m1 - m0
return res
def ratio_enrichment(exp, field, val1, val2=None, alpha=0.1, min_prev=3, random_seed=None, transform=None):
'''Identify bacteria significantly enriched (i.e. ratios higher/lower) in samples with field=val1 vs. val2 (or all other samples if val2==None)
Test is performed only on non-zero features present in at least min_prev samples in each group.
Parameters
----------
exp: calour.Experiment
The experiment to test
field: str
Name of the field for identifying the 2 groups of samples
val1: str or list of str
Values of field for the first group of samples
val2: str or list of str or None
Values of field for the second group of samples. If None, use all samples not with val1
alpha: float, optional
the dsFDR threshold
min_prev: int, optional
use only bacteria present in at least min_prev samples (not 0) in each group
random_seed: int, optional
transform: str or None, optional
the data transform (from ca.diff_abundance)
'''
# pre filter the data to keep only features present in enough samples in both groups
e1 = exp.filter_samples(field, val1)
e1.sparse=False
e1.data[e1.data!=0] = 1
e1 = e1.filter_sum_abundance(min_prev)
if val2 is None:
e2 = exp.filter_samples(field, val1, negate=True)
else:
e2 = exp.filter_samples(field, val2)
e2.sparse=False
e2.data[e2.data!=0] = 1
e2 = e2.filter_sum_abundance(min_prev)
# keep only features present in > min_prev samples in group1 and group2
exp = exp.filter_ids(e1.feature_metadata.index)
exp = exp.filter_ids(e2.feature_metadata.index)
print('%d remaining after filtering for min_prev %d' % (len(exp.feature_metadata), min_prev))
# find the features significantly different between group1 and group2
# we use the nzdiff statistic defined above
dd=exp.diff_abundance(field,val1,val2, transform=transform,alpha=alpha,method=nzdiff,random_seed=random_seed)
return dd
```
### remove the biopsies studies
```
ratios_no_biop = ratios.filter_samples('_sample_id',['23', '29', '49', '52'],negate=True)
ratios_no_biop
```
# Calculate the specific bacteria
## without the Gevers biopsies studies
```
def nice_taxonomy(exp):
'''add nice taxonomy string (only phyla+genus+species if available) for heatmap
Parameters
----------
exp: calour.AmpliconExperiment
with the taxonomy in 'Taxon' field
Returns
-------
exp: calour.AmpliconExperiment, with added feature metadata field "nice_tax"
'''
nice_tax=[]
for cidx,crow in exp.feature_metadata.iterrows():
ctax = crow['Taxon']
ctax=ctax.split(';')
new_tax = ctax[1].split('_')[-1]+'|'
if len(ctax) > 5:
new_tax += ctax[5].split('_')[-1]
if len(ctax) > 6:
if len(ctax[6])>4:
new_tax += '|'+ctax[6].split('_')[-1]
else:
new_tax += ctax[-1].split('_')[-1]
nice_tax.append(new_tax)
newexp = exp.copy()
newexp.feature_metadata['nice_tax'] = nice_tax
return newexp
np.random.seed(2020)
specific_no_biop=ratio_enrichment(ratios_no_biop, 'disease',['IBD_Crohn disease','IBD_Ulcerative Colitis'],
alpha=0.1, min_prev=3,random_seed=2020, transform='rankdata')
specific_no_biop.save('../lefse_ratios/ibd_specific/ibd-no-biopsies-specific')
specific_no_biop.save_fasta('../lefse_ratios/ibd_specific/ibd-no-biopsies-specific')
specific_no_biop = specific_no_biop.sort_samples('disease')
specific_no_biop = nice_taxonomy(specific_no_biop)
f=specific_no_biop.plot(sample_field='disease',norm=None,cmap=ncm,clim=[-1,1],
xticks_max=None,xticklabel_len=None, xticklabel_kwargs={'size':5, 'rotation':90},
feature_field='nice_tax', yticklabel_len=None, yticklabel_kwargs={'size':5}, barx_fields=['disease'],barx_label=False,barx_colors=disease_colors)
f.figure.savefig('../figures/sup-heatmap-specific-lefse.pdf')
```
### draw the wordcloud for the CD/UC specific bacteria
```
f=dbbact.draw_wordcloud(specific_no_biop)
f.savefig('../figures/sup-wordcloud-specific-lefse.pdf')
```
# Venn comparison to main analysis
```
import matplotlib_venn
ns_norarefaction_down = pd.read_csv('../ratios/nonspecific/nonspecific-down_feature.txt',sep='\t')
ns_lefse_down = pd.read_csv('../lefse_ratios/nonspecific/nonspecific-down_feature.txt',sep='\t')
ns_norarefaction_up = pd.read_csv('../ratios/nonspecific/nonspecific-up_feature.txt',sep='\t')
ns_lefse_up = pd.read_csv('../lefse_ratios/nonspecific/nonspecific-up_feature.txt',sep='\t')
f=plt.figure()
matplotlib_venn.venn3([set(ns_norarefaction_up['_feature_id'].values),set(ns_lefse_up['_feature_id'].values),set(ns_lefse_down['_feature_id'].values)],set_labels=['NR up','LEFSE up','LEFSE down'])
f.savefig('../figures/sup-fig-venn-lefse-up.pdf')
f=plt.figure()
matplotlib_venn.venn3([set(ns_norarefaction_down['_feature_id'].values),set(ns_lefse_up['_feature_id'].values),set(ns_lefse_down['_feature_id'].values)],set_labels=['NR up','LEFSE up','LEFSE down'])
f.savefig('../figures/sup-fig-venn-lefse-down.pdf')
spec_norarefaction = pd.read_csv('../ratios/ibd_specific/ibd-no-biopsies-specific_feature.txt',sep='\t')
spec_lefse = pd.read_csv('../lefse_ratios/ibd_specific/ibd-no-biopsies-specific_feature.txt',sep='\t')
f=plt.figure()
matplotlib_venn.venn2([set(spec_norarefaction['_feature_id'].values),set(spec_lefse['_feature_id'].values)],set_labels=['NRMS','LEFSE'])
# f.savefig('../figures/sup-fig-venn-lefse-down.pdf')
ib=set(spec_norarefaction['_feature_id'].values).intersection(set(spec_lefse['_feature_id'].values))
print([spec_norarefaction[spec_norarefaction['_feature_id']==x]['SV_number'].values for x in ib])
spec_norarefaction.iloc[0]['Taxon']
```
# compare lefse to nrmd using all lefse features and direction of change
```
nrmd_up=pd.read_csv('../ratios/nonspecific/nonspecific-up_feature.txt',sep='\t',index_col=0)
nrmd_down=pd.read_csv('../ratios/nonspecific/nonspecific-down_feature.txt',sep='\t',index_col=0)
all_lefse = pd.read_csv('../lefse_ratios/all_lefse_ratios.txt',sep='\t',index_col=0)
up_dir=all_lefse.filter(nrmd_up.index,axis='index')
down_dir=all_lefse.filter(nrmd_down.index,axis='index')
print('in NRMD up (%d), %d (LEFSE>0), %d (LEFSE<0)'% (len(nrmd_up),np.sum(np.mean(up_dir, axis=1)>0),np.sum(np.mean(up_dir, axis=1)<0)))
print('in NRMD down (%d), %d (LEFSE>0), %d (LEFSE<0)'% (len(nrmd_down),np.sum(np.mean(down_dir, axis=1)>0),np.sum(np.mean(down_dir, axis=1)<0)))
smd=pd.read_csv('../studies/index.csv',sep='\t',index_col=0)
smd.index=smd.index.astype(str)
xx=ca.AmpliconExperiment.from_pandas(up_dir.transpose())
xx.sample_metadata=xx.sample_metadata.merge(smd,how='left',left_index=True,right_index=True)
xx=xx.sort_samples('disease')
f=xx.plot(sample_field='disease', clim=[-1,1],norm=None,cmap=ncm, xticks_max=None,xticklabel_len=None,
xticklabel_kwargs={'size':8, 'rotation':90},barx_fields=['disease'],barx_label=False,barx_colors=disease_colors,bad_color='w')
f.save_figure('../figures/sup-lefse-dir-for-nrmd-up.pdf')
xx=ca.AmpliconExperiment.from_pandas(down_dir.transpose())
xx.sample_metadata=xx.sample_metadata.merge(smd,how='left',left_index=True,right_index=True)
xx=xx.sort_samples('disease')
f=xx.plot(sample_field='disease', clim=[-1,1],norm=None,cmap=ncm, xticks_max=None,xticklabel_len=None,
xticklabel_kwargs={'size':8, 'rotation':90},barx_fields=['disease'],barx_label=False,barx_colors=disease_colors, bad_color='w')
f.save_figure('../figures/sup-lefse-dir-for-nrmd-down.pdf')
```
| github_jupyter |
# Use BlackJAX with Numpyro
BlackJAX can take any log-probability function as long as it is compatible with JAX's JIT. In this notebook we show how we can use Numpyro as a modeling language and BlackJAX as an inference library.
We reproduce the Eight Schools example from the [Numpyro documentation](https://github.com/pyro-ppl/numpyro) (all credit for the model goes to the Numpyro team). For this notebook to run you will need to install Numpyro:
```bash
pip install numpyro
```
```
import jax
import numpy as np
import numpyro
import numpyro.distributions as dist
from numpyro.infer.reparam import TransformReparam
from numpyro.infer.util import initialize_model
import blackjax
num_warmup = 1000
# We can use this notebook for simple benchmarking by setting
# below to True and run from Terminal.
# $ipython examples/use_with_numpyro.ipynb
RUN_BENCHMARK = False
if RUN_BENCHMARK:
num_sample = 5_000_000
print(f"Benchmark with {num_warmup} warmup steps and {num_sample} sampling steps.")
else:
num_sample = 10_000
```
## Data
```
# Data of the Eight Schools Model
J = 8
y = np.array([28.0, 8.0, -3.0, 7.0, -1.0, 1.0, 18.0, 12.0])
sigma = np.array([15.0, 10.0, 16.0, 11.0, 9.0, 11.0, 10.0, 18.0])
```
## Model
We use the non-centered version of the model described towards the end of the README on Numpyro's repository.
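Concretely, instead of sampling each school effect directly from $\mathcal{N}(\mu, \tau)$, the non-centered parameterization samples a standard-normal base variable and then shifts and scales it; this is what the `TransformReparam` handler together with the `AffineTransform` in the model below implements:

$$
\theta^{\text{base}}_j \sim \mathcal{N}(0, 1), \qquad \theta_j = \mu + \tau\,\theta^{\text{base}}_j, \qquad y_j \sim \mathcal{N}(\theta_j, \sigma_j).
$$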
```
# Eight Schools example - Non-centered Reparametrization
def eight_schools_noncentered(J, sigma, y=None):
mu = numpyro.sample("mu", dist.Normal(0, 5))
tau = numpyro.sample("tau", dist.HalfCauchy(5))
with numpyro.plate("J", J):
with numpyro.handlers.reparam(config={"theta": TransformReparam()}):
theta = numpyro.sample(
"theta",
dist.TransformedDistribution(
dist.Normal(0.0, 1.0), dist.transforms.AffineTransform(mu, tau)
),
)
numpyro.sample("obs", dist.Normal(theta, sigma), obs=y)
```
We need to translate the model into a log-probability function that will be used by BlackJAX to perform inference. For that we use the `initialize_model` function in Numpyro's internals. We will also use the initial position it returns:
```
rng_key = jax.random.PRNGKey(0)
init_params, potential_fn_gen, *_ = initialize_model(
rng_key,
eight_schools_noncentered,
model_args=(J, sigma, y),
dynamic_args=True,
)
```
Now we create the potential using the `potential_fn_gen` provided by Numpyro and initialize the NUTS state with BlackJAX:
```
if RUN_BENCHMARK:
print("\nBlackjax:")
print("-> Running warmup.")
```
We now run the window adaptation in BlackJAX:
```
%%time
initial_position = init_params.z
logprob = lambda position: -potential_fn_gen(J, sigma, y)(position)
adapt = blackjax.window_adaptation(
blackjax.nuts, logprob, num_warmup, target_acceptance_rate=0.8
)
last_state, kernel, _ = adapt.run(rng_key, initial_position)
```
Let us now perform inference using the previously computed step size and inverse mass matrix. We also time the sampling to give you an idea of how fast BlackJAX can be on simple models:
```
if RUN_BENCHMARK:
print("-> Running sampling.")
%%time
def inference_loop(rng_key, kernel, initial_state, num_samples):
@jax.jit
def one_step(state, rng_key):
state, info = kernel(rng_key, state)
return state, (state, info)
keys = jax.random.split(rng_key, num_samples)
_, (states, infos) = jax.lax.scan(one_step, initial_state, keys)
return states, (
infos.acceptance_probability,
infos.is_divergent,
infos.integration_steps,
)
# Sample from the posterior distribution
states, infos = inference_loop(rng_key, kernel, last_state, num_sample)
_ = states.position["mu"].block_until_ready()
```
Let us compute the average acceptance probability and check the number of divergences (to make sure that the model sampled correctly, and that the sampling time is not a result of a majority of divergent transitions):
```
acceptance_rate = np.mean(infos[0])
num_divergent = np.mean(infos[1])
print(f"\nAcceptance rate: {acceptance_rate:.2f}")
print(f"{100*num_divergent:.2f}% divergent transitions")
```
Let us now plot the distribution of the parameters. Note that since we use a transformed variable, Numpyro does not output the school treatment effect directly:
```
if not RUN_BENCHMARK:
import seaborn as sns
from matplotlib import pyplot as plt
samples = states.position
fig, axes = plt.subplots(ncols=2)
fig.set_size_inches(12, 5)
sns.kdeplot(samples["mu"], ax=axes[0])
sns.kdeplot(samples["tau"], ax=axes[1])
axes[0].set_xlabel("mu")
axes[1].set_xlabel("tau")
fig.tight_layout()
if not RUN_BENCHMARK:
fig, axes = plt.subplots(8, 2, sharex="col", sharey="col")
fig.set_size_inches(12, 10)
for i in range(J):
axes[i][0].plot(samples["theta_base"][:, i])
axes[i][0].title.set_text(f"School {i} relative treatment effect chain")
sns.kdeplot(samples["theta_base"][:, i], ax=axes[i][1], shade=True)
axes[i][1].title.set_text(f"School {i} relative treatment effect distribution")
axes[J - 1][0].set_xlabel("Iteration")
axes[J - 1][1].set_xlabel("School effect")
fig.tight_layout()
plt.show()
if not RUN_BENCHMARK:
for i in range(J):
print(
f"Relative treatment effect for school {i}: {np.mean(samples['theta_base'][:, i]):.2f}"
)
```
## Compare sampling time with Numpyro
We compare the time it took BlackJAX to run the warmup and draw the samples with Numpyro's own NUTS implementation:
```
from numpyro.infer import MCMC, NUTS
if RUN_BENCHMARK:
print("\nNumpyro:")
print("-> Running warmup+sampling.")
%%time
nuts_kernel = NUTS(eight_schools_noncentered, target_accept_prob=0.8)
mcmc = MCMC(
nuts_kernel, num_warmup=num_warmup, num_samples=num_sample, progress_bar=False
)
rng_key = jax.random.PRNGKey(0)
mcmc.run(rng_key, J, sigma, y=y, extra_fields=("num_steps", "accept_prob"))
samples = mcmc.get_samples()
_ = samples["mu"].block_until_ready()
print(f"\nAcceptance rate: {mcmc.get_extra_fields()['accept_prob'].mean():.2f}")
print(f"{100*mcmc.get_extra_fields()['diverging'].mean():.2f}% divergent transitions")
print(f"\nBlackjax average {infos[2].mean():.2f} leapfrog per iteration.")
print(
f"Numpyro average {mcmc.get_extra_fields()['num_steps'].mean():.2f} leapfrog per iteration."
)
```
| github_jupyter |
## Exercises for Unit 4
### Subprograms
It is recommended to do all the exercises in the same Python file. Once you have written the corresponding function or
subprogram, you must check that it works correctly.
1. Subprogram that receives two integers, adds them and returns the result.
2. Procedure that receives two integers, adds them and prints the resulting value.
3. Subprogram that receives two numbers and returns the larger one.
4. Subprogram that receives three numbers and returns the largest one.
4.1 Write a subprogram that receives three numbers and returns the largest one, using the subprogram from item 3.
5. Subprogram that receives three numbers and prints whether at least two of them are equal.
6. Subprogram that receives a number and returns its absolute value.
7. Subprogram that receives a character and returns true if the character is a vowel.
8. Subprogram that receives two integers and returns true if the first is a divisor of the second.
9. Subprogram named llegir_int that returns an integer read from the keyboard.
10. Subprogram that reads two integers from the keyboard and returns true if the first is a divisor of the second.
11. Subprogram that receives an integer and returns a boolean value depending on whether it is a prime number or not.
12. Subprogram that receives an integer and returns the smallest prime number greater than that number.
```
For example
- The number 7 would return 11.
- The number 14 would return 17.
- 22 would return 23.
```
13. Subprogram that receives a character of the English alphabet, or a digit between 0 and 9, and returns it in
uppercase.
```
For example
The character a returns the character A.
The character B returns the character B.
The character 9 returns the character 9.
```
14. Subprogram that receives the two legs of a right triangle and returns the hypotenuse.
15. Write a subprogram that receives two integers and returns their GCD.
16. Write the subprogram named _word2num_ that reads numeric characters ('1', '2', '3', …, '0') as if they formed a
word and returns an integer. You can use a period or Enter as the end of the sequence.
```
If we read the character sequence '1234' it must return the number 1234.
```
17. Write a subprogram that receives two integers representing a fraction. The function should reduce the fraction
to its lowest terms and then print both the numerator and the denominator of the reduced
fraction.
```
Example:
If the parameters passed to the function are 6 and 63, then the result is 2 and 21.
```
18. Write the subprogram named _sumaseq_ that receives an integer in the range 1 to 9 and a number of repetitions.
The program
must perform the following operation:
```
If it receives 8 and 5 repetitions it must return the result of the sum 8 + 88 + 888 + 8888 + 88888
If it receives 5 and 2 repetitions it must return: 5 + 55
If it receives 1 and 8 repetitions it must return: 1 + 11 + 111 + 1111 + 11111 + 111111 + 1111111 + 11111111
```
19. Write a program that receives 3 numbers (d, m, a) representing a date (day, month and year). The program must
return the day after that date. You must take into account the number of days in each month and leap years.
### Sequences
Below is a list of problems related to text sequences. Before doing these exercises you should understand the
material of Unit 4 well and master the subprogram exercises. It is recommended to create a document for each
of the problems.
1. Count the number of even words (even number of letters) and the number of odd words (odd number of letters).
2. Count the words that contain at least one 'a'.
3. Count the words that contain at least one vowel.
4. Count the words that start with 'sa'. **For example:** in "savis són els que en saben", 2 words start with sa.
5. Create a program that counts the words that have more vowels than consonants.
6. Create a program that tells us whether a sequence of words is an abecegrama. An abecegrama is a sentence whose
words are arranged in alphabetical order; that is, the first word of the sentence starts with a; the second, with b;
the third, with c ...
**For example:**
ahir brollava calor d emocions fum gelat hui immens jardi karma latent malejant nu onades peregrines que
restrenyen salobre temps un venerable wagneria xiprer yep zingar.
7. Write a program that tells us how many letters the longest word of a sequence ending in a period has.
8. Write a program that reports how many words contain the letter 'j' where it is neither the first nor the
last of their letters, in a text sequence ending in '.'.
### Numeric sequences
Below is a set of problems related to numeric sequences. Es Before doing these exercises it is recommended to create a file for
each of the problems.
1. Write a program that generates 50 random numbers with values between 0 and 10 and shows how many times two
consecutive numbers are equal. We also want to know which numbers they were.
1.1 Make a variant of the previous program that shows the number of times the previous number is a multiple of the
current one, and which pairs they are.
2. In the range (0, 2223), how many numbers contain the digit 2 at least twice in a row? For example: 22, 221 ...
3. Twin primes are pairs of prime numbers that differ by 2. That is, two numbers p and q
(with p < q) are twin primes if q = p + 2, except for the case of 2 and 3. Which are the first twin primes greater
than 150? Solution: (179, 181). How many twin primes are there between 100 and 1000? Solution: 27. Which are they?
4. How many five-digit numbers start with 4, end in 5 and have digits that add up to 18?
5. Write a program that generates 100 random numbers with values between 100 and 1000. It must show those whose
digits are in strictly decreasing order.
| github_jupyter |
The visualization used for this homework is based on Alexandr Verinov's code.
# Generative models
In this homework we will try several criteria for learning an implicit model. Almost everything is written for you; you only need to implement the objective for the game and play around with the model.
**0)** Read the code
**1)** Implement objective for a vanilla [Generative Adversarial Networks](https://papers.nips.cc/paper/5423-generative-adversarial-nets.pdf) (GAN). The hyperparameters are already set in the code. The model will converge if you implement the objective (1) right.
**2)** Note the discussion in the paper that the objective for $G$ can be of two kinds: $\min_G \log(1 - D)$ and $\min_G -\log(D)$. Implement the second objective and ensure the model converges. Most likely you will not notice the difference in this example, but people usually use the second objective; it really matters in more complicated scenarios.
**3 & 4)** Implement [Wasserstein GAN](https://arxiv.org/abs/1701.07875) ([WGAN](https://arxiv.org/abs/1704.00028)) and WGAN-GP. To make the discriminator have the Lipschitz property you need to clip the discriminator's weights to the $[-0.01, 0.01]$ range (WGAN) or use a gradient penalty (WGAN-GP). You will need to make a few modifications to the code: 1) remove the sigmoids from the discriminator, 2) add weight clipping / the gradient penalty, 3) change the objective. See [implementation 1](https://github.com/martinarjovsky/WassersteinGAN/) / [implementation 2](https://github.com/caogang/wgan-gp). They also use a different optimizer. The default hyperparameters may not work, so spend time tuning them.
**5) Bonus: same thing without GANs** Implement a maximum mean discrepancy (MMD) estimator. MMD is a discrepancy measure between distributions; in our case we use it to calculate the discrepancy between real and fake data. You need to implement the RBF kernel $k(x,x')=\exp \left(-{\frac {1}{2\sigma ^{2}}}||x-x'||^{2}\right)$ and an MMD estimator (see eq. 8 from https://arxiv.org/pdf/1505.03906.pdf). MMD is then used instead of the discriminator.
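For the bonus task, a minimal sketch of the RBF kernel and a biased MMD² estimate between two batches of samples could look like the following. The function names and the fixed bandwidth `sigma` are my own choices, not part of the assignment scaffold.
```
# Hedged sketch: RBF kernel and biased MMD^2 estimate between two sample batches.
import torch

def rbf_kernel(x, y, sigma=1.0):
    # k(x, x') = exp(-||x - x'||^2 / (2 * sigma^2)) for all pairs of rows
    sq_dists = torch.cdist(x, y) ** 2
    return torch.exp(-sq_dists / (2 * sigma ** 2))

def mmd2(x, y, sigma=1.0):
    # biased estimator: mean k(x,x) - 2 * mean k(x,y) + mean k(y,y)
    return (rbf_kernel(x, x, sigma).mean()
            - 2 * rbf_kernel(x, y, sigma).mean()
            + rbf_kernel(y, y, sigma).mean())
```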
```
#!L
"""
Please, implement everything in one notebook, using if statements to switch between the tasks
"""
TASK = 1 # 2, 3, 4, 5
```
# Imports
```
#!L
import numpy as np
import time
from torch.autograd import Variable
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import torch
import matplotlib.pyplot as plt
%matplotlib inline
np.random.seed(12345)
lims=(-5, 5)
```
# Define sampler from real data and Z
```
#!L
from scipy.stats import rv_discrete
MEANS = np.array(
[[-1,-3],
[1,3],
[-2,0],
])
COVS = np.array(
[[[1,0.8],[0.8,1]],
[[1,-0.5],[-0.5,1]],
[[1,0],[0,1]],
])
PROBS = np.array([
0.2,
0.5,
0.3
])
assert len(MEANS) == len(COVS) == len(PROBS), "number of components mismatch"
COMPONENTS = len(MEANS)
comps_dist = rv_discrete(values=(range(COMPONENTS), PROBS))
def sample_true(N):
comps = comps_dist.rvs(size=N)
conds = np.arange(COMPONENTS)[:,None] == comps[None,:]
arr = np.array([np.random.multivariate_normal(MEANS[c], COVS[c], size=N)
for c in range(COMPONENTS)])
return np.select(conds[:,:,None], arr).astype(np.float32)
NOISE_DIM = 20
def sample_noise(N):
return np.random.normal(size=(N,NOISE_DIM)).astype(np.float32)
```
# Visualization functions
```
#!L
def vis_data(data):
"""
Visualizes data as histogram
"""
hist = np.histogram2d(data[:, 1], data[:, 0], bins=100, range=[lims, lims])
plt.pcolormesh(hist[1], hist[2], hist[0], alpha=0.5)
fixed_noise = sample_noise(1000)
def vis_g():
"""
Visualizes generator's samples as circles
"""
data = generator(Variable(torch.Tensor(fixed_noise))).data.numpy()
if np.isnan(data).any():
return
plt.scatter(data[:,0], data[:,1], alpha=0.2, c='b')
plt.xlim(lims)
plt.ylim(lims)
def vis_d():
"""
Visualizes discriminator's gradient on grid
"""
X, Y = np.meshgrid(np.linspace(lims[0], lims[1], 30), np.linspace(lims[0], lims[1], 30))
X = X.flatten()
Y = Y.flatten()
grid = Variable(torch.Tensor(np.vstack([X, Y]).T), requires_grad=True)
data_gen = generator(Variable(torch.Tensor(fixed_noise)))
loss = d_loss(discriminator(data_gen), discriminator(grid))
loss.backward()
grads = - grid.grad.data.numpy()
plt.quiver(X, Y, grads[:, 0], grads[:, 1], color='black',alpha=0.9)
```
# Define architectures
After you've passed task 1 you can play with architectures.
#### Generator
```
#!L
class Generator(nn.Module):
def __init__(self, noise_dim, out_dim, hidden_dim=100):
super(Generator, self).__init__()
self.fc1 = nn.Linear(noise_dim, hidden_dim)
nn.init.xavier_normal_(self.fc1.weight)
nn.init.constant_(self.fc1.bias, 0.0)
self.fc2 = nn.Linear(hidden_dim, hidden_dim)
nn.init.xavier_normal_(self.fc2.weight)
nn.init.constant_(self.fc2.bias, 0.0)
self.fc3 = nn.Linear(hidden_dim, out_dim)
nn.init.xavier_normal_(self.fc3.weight)
nn.init.constant_(self.fc3.bias, 0.0)
def forward(self, z):
"""
Generator takes a vector of noise and produces sample
"""
h1 = F.tanh(self.fc1(z))
h2 = F.leaky_relu(self.fc2(h1))
y_gen = self.fc3(h2)
return y_gen
```
#### Discriminator
```
#!L
class Discriminator(nn.Module):
def __init__(self, in_dim, hidden_dim=100):
super(Discriminator, self).__init__()
self.fc1 = nn.Linear(in_dim, hidden_dim)
nn.init.xavier_normal_(self.fc1.weight)
nn.init.constant_(self.fc1.bias, 0.0)
self.fc2 = nn.Linear(hidden_dim, hidden_dim)
nn.init.xavier_normal_(self.fc2.weight)
nn.init.constant_(self.fc2.bias, 0.0)
self.fc3 = nn.Linear(hidden_dim, hidden_dim)
nn.init.xavier_normal_(self.fc3.weight)
nn.init.constant_(self.fc3.bias, 0.0)
self.fc4 = nn.Linear(hidden_dim, 1)
nn.init.xavier_normal_(self.fc4.weight)
nn.init.constant_(self.fc4.bias, 0.0)
def forward(self, x):
h1 = F.tanh(self.fc1(x))
h2 = F.leaky_relu(self.fc2(h1))
h3 = F.leaky_relu(self.fc3(h2))
score = torch.sigmoid(self.fc4(h3))
return score
```
# Define updates and losses
```
#!L
generator = Generator(NOISE_DIM, out_dim = 2)
discriminator = Discriminator(in_dim = 2)
lr = 0.001
g_optimizer = optim.Adam(generator.parameters(), lr=lr, betas=(0.5, 0.999))
d_optimizer = optim.Adam(discriminator.parameters(), lr=lr, betas=(0.5, 0.999))
```
Notice we are using the Adam optimizer with `beta1=0.5` for both the generator and the discriminator. This is a common practice and works well. Motivation: the models should be flexible and adapt rapidly to the distributions.
You can try different optimizers and parameters.
```
#!L
################################
# IMPLEMENT HERE
# Define the g_loss and d_loss here
# these are the only lines of code you need to change to implement GAN game
def g_loss(d_scores_fake):
    # if TASK == 1:
    #     do something with the discriminator scores on generated samples
    return # TODO
def d_loss(d_scores_fake, d_scores_real):
    # if TASK == 1:
    #     do something with the discriminator scores on fake and real samples
    return # TODO
################################
```
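One possible way to fill in the two functions above for Tasks 1 and 2, shown here only as a hedged sketch that matches how `g_loss` and `d_loss` are called in the visualization and training loop below:
```
# Hedged sketch of the vanilla GAN objectives; the first argument holds discriminator
# scores on generated samples, the second (for d_loss) the scores on real samples.
eps = 1e-8

def g_loss(d_scores_fake):
    if TASK == 1:
        # saturating objective: min_G log(1 - D(G(z)))
        return torch.log(1 - d_scores_fake + eps).mean()
    # non-saturating objective (Task 2): min_G -log(D(G(z)))
    return -torch.log(d_scores_fake + eps).mean()

def d_loss(d_scores_fake, d_scores_real):
    # max_D log(D(x)) + log(1 - D(G(z))), written as a loss to minimize
    return -(torch.log(d_scores_real + eps) + torch.log(1 - d_scores_fake + eps)).mean()
```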
# Get real data
```
#!L
data = sample_true(100000)
def iterate_minibatches(X, batchsize, y=None):
perm = np.random.permutation(X.shape[0])
for start in range(0, X.shape[0], batchsize):
end = min(start + batchsize, X.shape[0])
if y is None:
yield X[perm[start:end]]
else:
yield X[perm[start:end]], y[perm[start:end]]
#!L
plt.rcParams['figure.figsize'] = (12, 12)
vis_data(data)
vis_g()
vis_d()
```
**Legend**:
- Blue dots are generated samples.
- Colored histogram at the back shows density of real data.
- And with arrows we show gradients of the discriminator -- they are the directions that discriminator pushes generator's samples.
# Train the model
```
#!L
from IPython import display
plt.xlim(lims)
plt.ylim(lims)
num_epochs = 100
batch_size = 64
# ===========================
# IMPORTANT PARAMETER:
# Number of D updates per G update
# ===========================
k_d, k_g = 4, 1
accs = []
try:
for epoch in range(num_epochs):
for input_data in iterate_minibatches(data, batch_size):
# Optimize D
for _ in range(k_d):
# Sample noise
noise = Variable(torch.Tensor(sample_noise(len(input_data))))
# Do an update
inp_data = Variable(torch.Tensor(input_data))
data_gen = generator(noise)
loss = d_loss(discriminator(data_gen), discriminator(inp_data))
d_optimizer.zero_grad()
loss.backward()
d_optimizer.step()
# Optimize G
for _ in range(k_g):
# Sample noise
noise = Variable(torch.Tensor(sample_noise(len(input_data))))
# Do an update
data_gen = generator(noise)
loss = g_loss(discriminator(data_gen))
g_optimizer.zero_grad()
loss.backward()
g_optimizer.step()
# Visualize
plt.clf()
vis_data(data); vis_g(); vis_d()
display.clear_output(wait=True)
display.display(plt.gcf())
except KeyboardInterrupt:
pass
```
# Describe your findings here
A ya tomat.
| github_jupyter |
# 1. Import libraries
```
#----------------------------Reproducible----------------------------------------------------------------------------------------
import numpy as np
import tensorflow as tf
import random as rn
import os
seed=0
os.environ['PYTHONHASHSEED'] = str(seed)
np.random.seed(seed)
rn.seed(seed)
#session_conf = tf.ConfigProto(intra_op_parallelism_threads=1, inter_op_parallelism_threads=1)
session_conf =tf.compat.v1.ConfigProto(intra_op_parallelism_threads=1, inter_op_parallelism_threads=1)
from keras import backend as K
#tf.set_random_seed(seed)
tf.compat.v1.set_random_seed(seed)
#sess = tf.Session(graph=tf.get_default_graph(), config=session_conf)
sess = tf.compat.v1.Session(graph=tf.compat.v1.get_default_graph(), config=session_conf)
K.set_session(sess)
#----------------------------Reproducible----------------------------------------------------------------------------------------
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
#--------------------------------------------------------------------------------------------------------------------------------
import matplotlib
import matplotlib.pyplot as plt
import matplotlib.cm as cm
%matplotlib inline
matplotlib.style.use('ggplot')
import random
import scipy.sparse as sparse
import scipy.io
from keras.utils import to_categorical
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import MinMaxScaler
from sklearn.preprocessing import StandardScaler
from skfeature.function.similarity_based import lap_score
from skfeature.utility import construct_W
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from sklearn.linear_model import LinearRegression
import time
import pandas as pd
def mse_check(train, val):
LR = LinearRegression(n_jobs = -1)
LR.fit(train[0], train[1])
MSELR = ((LR.predict(val[0]) - val[1]) ** 2).mean()
return MSELR
def next_batch(samples, labels, num):
# Return a total of `num` random samples and labels.
idx = np.random.choice(len(samples), num)
return samples[idx], labels[idx]
def standard_single_hidden_layer_autoencoder(X, units, O):
reg_alpha = 1e-3
D = X.shape[1]
weights = tf.get_variable("weights", [D, units])
biases = tf.get_variable("biases", [units])
X = tf.matmul(X, weights) + biases
X = tf.layers.dense(X, O, kernel_regularizer = tf.contrib.layers.l2_regularizer(reg_alpha))
return X, weights
def aefs_subset_selector(train, K, epoch_num=1000, alpha=0.1):
D = train[0].shape[1]
O = train[1].shape[1]
learning_rate = 0.001
tf.reset_default_graph()
X = tf.placeholder(tf.float32, (None, D))
TY = tf.placeholder(tf.float32, (None, O))
Y, weights = standard_single_hidden_layer_autoencoder(X, K, O)
loss = tf.reduce_mean(tf.square(TY - Y)) + alpha * tf.reduce_sum(tf.sqrt(tf.reduce_sum(tf.square(weights), axis=1)), axis=0) + tf.losses.get_total_loss()
train_op = tf.train.AdamOptimizer(learning_rate).minimize(loss)
init = tf.global_variables_initializer()
batch_size = 8
batch_per_epoch = train[0].shape[0] // batch_size
costs = []
session_config = tf.ConfigProto()
session_config.gpu_options.allow_growth = False
with tf.Session(config = session_config) as sess:
sess.run(init)
for ep in range(epoch_num):
cost = 0
for batch_n in range(batch_per_epoch):
imgs, yimgs = next_batch(train[0], train[1], batch_size)
_, c, p = sess.run([train_op, loss, weights], feed_dict = {X: imgs, TY: yimgs})
cost += c / batch_per_epoch
costs.append(cost)
return list(np.argmax(np.abs(p), axis=0)), costs
def AEFS(train, test, K, debug = True):
x_train, x_val, y_train, y_val = train_test_split(train[0], train[1], test_size = 0.1)
print("y_train.shape",y_train.shape)
bindices = []
bmse = 1e100
    for alpha in [1e-3, 1e-1, 1e1, 1e3]:
        print("alpha",alpha)
        indices, _ = aefs_subset_selector(train, K, alpha=alpha)
mse = mse_check((train[0][:, indices], train[1]), (x_val[:, indices], y_val))
if bmse > mse:
bmse = mse
bindices = indices
if debug:
print(bindices, bmse)
return train[0][:, bindices], test[0][:, bindices]
#--------------------------------------------------------------------------------------------------------------------------------
def ETree(p_train_feature,p_train_label,p_test_feature,p_test_label,p_seed):
clf = ExtraTreesClassifier(n_estimators=50, random_state=p_seed)
# Training
clf.fit(p_train_feature, p_train_label)
# Training accuracy
print('Training accuracy:',clf.score(p_train_feature, np.array(p_train_label)))
print('Training accuracy:',accuracy_score(np.array(p_train_label),clf.predict(p_train_feature)))
#print('Training accuracy:',np.sum(clf.predict(p_train_feature)==np.array(p_train_label))/p_train_label.shape[0])
# Testing accuracy
print('Testing accuracy:',clf.score(p_test_feature, np.array(p_test_label)))
print('Testing accuracy:',accuracy_score(np.array(p_test_label),clf.predict(p_test_feature)))
#print('Testing accuracy:',np.sum(clf.predict(p_test_feature)==np.array(p_test_label))/p_test_label.shape[0])
#--------------------------------------------------------------------------------------------------------------------------------
def write_to_csv(p_data,p_path):
dataframe = pd.DataFrame(p_data)
dataframe.to_csv(p_path, mode='a',header=False,index=False,sep=',')
```
# 2. Loading data
```
data_path="./Dataset/Prostate_GE.mat"
Data = scipy.io.loadmat(data_path)
data_arr=Data['X']
label_arr=Data['Y'][:, 0]-1
Data=MinMaxScaler(feature_range=(0,1)).fit_transform(data_arr)
C_train_x,C_test_x,C_train_y,C_test_y= train_test_split(Data,label_arr,test_size=0.2,random_state=seed)
print('Shape of C_train_x: ' + str(C_train_x.shape))
print('Shape of C_train_y: ' + str(C_train_y.shape))
print('Shape of C_test_x: ' + str(C_test_x.shape))
print('Shape of C_test_y: ' + str(C_test_y.shape))
key_feture_number=64
```
# 3. Model
```
train=(C_train_x,C_train_x)
test=(C_test_x,C_test_x)
start = time.clock()
C_train_selected_x, C_test_selected_x = AEFS((train[0], train[0]), (test[0], test[0]), key_feture_number)
time_cost=time.clock() - start
write_to_csv(np.array([time_cost]),"./log/AEFS_time"+str(key_feture_number)+".csv")
```
# 4. Classifying
### Extra Trees
```
train_feature=C_train_x
train_label=C_train_y
test_feature=C_test_x
test_label=C_test_y
print('Shape of train_feature: ' + str(train_feature.shape))
print('Shape of train_label: ' + str(train_label.shape))
print('Shape of test_feature: ' + str(test_feature.shape))
print('Shape of test_label: ' + str(test_label.shape))
p_seed=seed
ETree(train_feature,train_label,test_feature,test_label,p_seed)
train_feature=C_train_selected_x
train_label=C_train_y
test_feature=C_test_selected_x
test_label=C_test_y
print('Shape of train_feature: ' + str(train_feature.shape))
print('Shape of train_label: ' + str(train_label.shape))
print('Shape of test_feature: ' + str(test_feature.shape))
print('Shape of test_label: ' + str(test_label.shape))
p_seed=seed
ETree(train_feature,train_label,test_feature,test_label,p_seed)
```
# 6. Reconstruction loss
```
from sklearn.linear_model import LinearRegression
def mse_check(train, test):
LR = LinearRegression(n_jobs = -1)
LR.fit(train[0], train[1])
MSELR = ((LR.predict(test[0]) - test[1]) ** 2).mean()
return MSELR
train_feature_tuple=(C_train_selected_x,C_train_x)
test_feature_tuple=(C_test_selected_x,C_test_x)
reconstruction_loss=mse_check(train_feature_tuple, test_feature_tuple)
print(reconstruction_loss)
```
| github_jupyter |
# Deprecated - Connecting brain regions through BAMS information
This script connects brain regions through BAMS connectivity information.
However, at this level the connectivity information carries no reference to the original source, which is not acceptable. Therefore do **not** use this.
```
### DEPRECATED
import pandas as pd
import re
import itertools
from difflib import SequenceMatcher
root = "Data/csvs/basal_ganglia/regions"
sim_csv_loc = "/region_similarity.csv"
def similar(a, b):
return SequenceMatcher(None, a, b).ratio()
## Prepare regions and regions_other csvs
df_all_regions = pd.read_csv(root + "/all_regions.csv", dtype="object")
df = pd.DataFrame(columns = ["ID1", "Region_name_1", "ID2", "Region_name_2", "Sim"])
# Put region names and ID into tuple list
subset = df_all_regions[["ID", "Region_name"]]
region_name_tuples = [tuple(x) for x in subset.to_numpy()]
# Find all combinations of region_names and look at similarity in name
for a, b in itertools.combinations(region_name_tuples, 2):
id1, reg1 = a
id2, reg2 = b
sim_score = similar(reg1, reg2)
if(sim_score > 0.7):
a_row = pd.Series([id1, reg1, id2, reg2, sim_score], index = ["ID1", "Region_name_1", "ID2", "Region_name_2", "Sim"])
df = df.append(a_row, ignore_index=True)
# Store similarities
df_sorted = df.sort_values('Sim')
df_sorted.to_csv(root + sim_csv_loc, encoding='utf-8')
print("Similarities stored in", sim_csv_loc)
def get_count_of_type(label, session):
q = "MATCH (n:%s) RETURN count(n)" % label
res = session.run(q)
print("Added", res.value()[0], "nodes of type", label)
def get_count_of_relationship(label, session):
q = "MATCH ()-[r:%s]-() RETURN count(*)" %label
res = session.run(q)
print("Added", res.value()[0], "relationships of type", label)
def get_csv_path(csv_file):
path_all_csv = os.path.realpath("Data/csvs/basal_ganglia/regions")
return os.path.join(path_all_csv, csv_file).replace("\\","/")
## Then find the regions that correspond to each other and stor that in a new CSV file
# Add relation to all areas that define positions
positioning = ["caudal", "rostral", "ventral", "dorsal"]
area_describing = ["internal", "compact", "core", "shell"]
df_sims = pd.read_csv(root + sim_csv_loc, converters = {"Sim": float})
# All pairs with a score above 0.95 are the same region
# Also the same: "Substantia innominata, basal",103,"Substantia innominata, basal part" 0.91
df_equals = df_sims.loc[df_sims['Sim'] > 0.95]
df_equals.to_csv(root + "/regions_equal.csv", encoding='utf-8')
from neo4j import GraphDatabase, basic_auth
from dotenv import load_dotenv
import os
load_dotenv()
neo4jUser = os.getenv("NEO4J_USER")
neo4jPwd = os.getenv("NEO4J_PASSWORD")
driver = GraphDatabase.driver("bolt://localhost:7687",auth=basic_auth(neo4jUser, neo4jPwd))
# Relationship EQUALS between equal BrainRegion nodes
csv_file_path = "file:///%s" % get_csv_path("regions_equal.csv")
query="""
LOAD CSV WITH HEADERS FROM "%s" AS row
MATCH (a:BrainRegion { id: row.ID1})
MATCH (c:BrainRegion { id: row.ID2 })
MERGE (a)-[:EQUALS]->(c)
""" % csv_file_path
with driver.session() as session:
session.run(query)
get_count_of_relationship("EQUALS", session)
## TODO add rel for belongs-to/part of
```
| github_jupyter |
```
import requests as r
url = 'https://api.covid19api.com/dayone/country/brazil'
resp = r.get(url)
resp.status_code
raw_data = resp.json()
raw_data[0]
final_data = []
for data in raw_data:
final_data.append([data['Confirmed'], data['Deaths'], data['Recovered'], data['Active'], data['Date']])
final_data.insert(0, ['Confirmed', 'Deaths', 'Recovered', 'Active', 'Date'])
final_data
Confirmed = 0
Deaths = 1
Recovered = 2
Active = 3
Date = 4
for i in range(1, len(final_data)):
final_data[i][Date] = final_data[i][Date][:10]
final_data
import datetime as dt
print(dt.time(12, 6, 21, 7)) # hour:minute:second:microsecond
print(dt.date(2021, 7, 8)) # year-month-day
print(dt.datetime(2021, 7, 8, 12, 6, 21, 7)) # year-month-day hour:minute:second:microsecond
import csv
with open('brasil-covid.csv', 'w', encoding='utf-8') as file:
writer = csv.writer(file)
writer.writerows(final_data)
for i in range(1, len(final_data)):
final_data[i][Date] = dt.datetime.strptime(final_data[i][Date], '%Y-%m-%d')
final_data
def get_datasets(y, labels):
if type(y[0]) == list:
datasets = []
for i in range(len(y)):
datasets.append({
'labels' : labels[i],
'data' : y[i]
})
return datasets
else:
return [
{
                'label' : labels[0],
'data' : y
}
]
def set_title(title=''):
if title != '':
display = 'true'
else:
display = 'false'
return {
'title' : title,
'display' : display
}
def creater_chart(x, y, labels, kind='bar', title=''):
datasets = get_datasets(y, labels)
options = set_title(title)
chart = {
'type' : kind,
'data' : {
'labels' : x,
'datasets' : datasets
},
'options' : options
}
return chart
def get_api_chart(chart):
url_base = 'https://quickchart.io/chart'
resp = r.get(f'{url_base}?c={str(chart)}')
return resp.content
def save_image(path, content):
with open(path, 'wb') as image:
image.write(content)
from PIL import Image
from IPython.display import display
def display_image(path):
img_pil = Image.open(path)
display(img_pil)
y_data_1 = []
for obs in final_data[1::10]:
y_data_1.append(obs[Confirmed])
y_data_2 = []
for obs in final_data[1::10]:
y_data_2.append(obs[Recovered])
labels = ['Confirmed', 'Recovered']
x = []
for obs in final_data[1::10]:
x.append(obs[Date].strftime('%d/%m/%Y'))
chart = creater_chart(x, [y_data_1, y_data_2], labels, title='Gráfico: Confirmados x Recuperados')
chart_content = get_api_chart(chart)
save_image('grafico.png', chart_content)
display_image('grafico.png')
```
| github_jupyter |
# Tile Coding
---
Tile coding is an innovative way of discretizing a continuous space that enables better generalization compared to a single grid-based approach. The fundamental idea is to create several overlapping grids or _tilings_; then for any given sample value, you need only check which tiles it lies in. You can then encode the original continuous value by a vector of integer indices or bits that identifies each activated tile.
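As a toy illustration before we build the real thing (the numbers below are made up and independent of the environment used later), consider a single continuous value covered by three 1-D tilings of unit-width tiles, each shifted by a different offset:
```
# Toy 1-D example: three tilings with different offsets.
import numpy as np

value = 3.8
offsets = [0.0, 0.33, 0.66]                        # one offset per tiling
active = [int(np.floor(value - off)) for off in offsets]
print(active)                                      # tile activated in each tiling: [3, 3, 3]
# A nearby value such as 4.1 activates [4, 3, 3]; sharing two of the three tiles
# is what lets tile coding generalize between neighbouring values.
```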
### 1. Import the Necessary Packages
```
# Import common libraries
import sys
import gym
import numpy as np
import matplotlib.pyplot as plt
# Set plotting options
%matplotlib inline
plt.style.use('ggplot')
np.set_printoptions(precision=3, linewidth=120)
```
### 2. Specify the Environment, and Explore the State and Action Spaces
We'll use [OpenAI Gym](https://gym.openai.com/) environments to test and develop our algorithms. These simulate a variety of classic as well as contemporary reinforcement learning tasks. Let's begin with an environment that has a continuous state space, but a discrete action space.
```
# Create an environment
env = gym.make('Acrobot-v1')
env.seed(505);
# Explore state (observation) space
print("State space:", env.observation_space)
print("- low:", env.observation_space.low)
print("- high:", env.observation_space.high)
# Explore action space
print("Action space:", env.action_space)
```
Note that the state space is multi-dimensional, with most dimensions ranging from -1 to 1 (positions of the two joints), while the final two dimensions have a larger range. How do we discretize such a space using tiles?
### 3. Tiling
Let's first design a way to create a single tiling for a given state space. This is very similar to a uniform grid! The only difference is that you should include an offset for each dimension that shifts the split points.
For instance, if `low = [-1.0, -5.0]`, `high = [1.0, 5.0]`, `bins = (10, 10)`, and `offsets = (-0.1, 0.5)`, then return a list of 2 NumPy arrays (2 dimensions) each containing the following split points (9 split points per dimension):
```
[array([-0.9, -0.7, -0.5, -0.3, -0.1, 0.1, 0.3, 0.5, 0.7]),
array([-3.5, -2.5, -1.5, -0.5, 0.5, 1.5, 2.5, 3.5, 4.5])]
```
Notice how the split points for the first dimension are offset by `-0.1`, and for the second dimension are offset by `+0.5`. This might mean that some of our tiles, especially along the perimeter, are partially outside the valid state space, but that is unavoidable and harmless.
```
def float_range(start: float, stop: float, step_size: float):
count: int = 0
while True:
temp = start + count * step_size
if step_size > 0 and temp >= stop:
break
if step_size < 0 and temp <= stop:
break
yield temp
count += 1
def create_tiling_grid(low, high, bins=(10, 10), offsets=(0.0, 0.0)):
"""Define a uniformly-spaced grid that can be used for tile-coding a space.
Parameters
----------
low : array_like
Lower bounds for each dimension of the continuous space.
high : array_like
Upper bounds for each dimension of the continuous space.
bins : tuple
Number of bins or tiles along each corresponding dimension.
offsets : tuple
Split points for each dimension should be offset by these values.
Returns
-------
grid : list of array_like
A list of arrays containing split points for each dimension.
"""
tiling_grid_d = []
for d in range(0, len(bins)):
low_bound_d = low[d]
high_bound_d = high[d]
range_d = abs(high_bound_d - low_bound_d)
step_size_d = range_d / bins[d]
offset_d = offsets[d]
raw_tiling_grid_d = [x for x in \
float_range(low_bound_d + step_size_d + offset_d, \
high_bound_d, step_size_d)]
tiling_grid_d.append(raw_tiling_grid_d[:(bins[d]-1)])
return tiling_grid_d
low = [-1.0, -5.0]
high = [1.0, 5.0]
create_tiling_grid(low, high, bins=(10, 10), offsets=(-0.1, 0.5)) # [test]
```
You can now use this function to define a set of tilings that are a little offset from each other.
```
def create_tilings(low, high, tiling_specs):
"""Define multiple tilings using the provided specifications.
Parameters
----------
low : array_like
Lower bounds for each dimension of the continuous space.
high : array_like
Upper bounds for each dimension of the continuous space.
tiling_specs : list of tuples
A sequence of (bins, offsets) to be passed to create_tiling_grid().
Returns
-------
tilings : list
A list of tilings (grids), each produced by create_tiling_grid().
"""
return [create_tiling_grid(low, high, bins, offset) for bins, offset in tiling_specs]
# Tiling specs: [(<bins>, <offsets>), ...]
tiling_specs = [((10, 10), (-0.066, -0.33)),
((10, 10), (0.0, 0.0)),
((10, 10), (0.066, 0.33))]
tilings = create_tilings(low, high, tiling_specs)
```
It may be hard to gauge whether you are getting desired results or not. So let's try to visualize these tilings.
```
from matplotlib.lines import Line2D
def visualize_tilings(tilings):
"""Plot each tiling as a grid."""
prop_cycle = plt.rcParams['axes.prop_cycle']
colors = prop_cycle.by_key()['color']
linestyles = ['-', '--', ':']
legend_lines = []
fig, ax = plt.subplots(figsize=(10, 10))
for i, grid in enumerate(tilings):
for x in grid[0]:
l = ax.axvline(x=x, color=colors[i % len(colors)], linestyle=linestyles[i % len(linestyles)], label=i)
for y in grid[1]:
l = ax.axhline(y=y, color=colors[i % len(colors)], linestyle=linestyles[i % len(linestyles)])
legend_lines.append(l)
ax.grid('off')
ax.legend(legend_lines, ["Tiling #{}".format(t) for t in range(len(legend_lines))], facecolor='white', framealpha=0.9)
ax.set_title("Tilings")
return ax # return Axis object to draw on later, if needed
visualize_tilings(tilings);
```
Great! Now that we have a way to generate these tilings, we can next write our encoding function that will convert any given continuous state value to a discrete vector.
### 4. Tile Encoding
Implement the following to produce a vector that contains the indices for each tile that the input state value belongs to. The shape of the vector can be the same as the arrangement of tiles you have, or it can ultimately be flattened for convenience.
You can use the same `discretize()` function here from grid-based discretization, and simply call it for each tiling.
```
def discretize(sample, grid):
"""Discretize a sample as per given grid.
Parameters
----------
sample : array_like
A single sample from the (original) continuous space.
grid : list of array_like
A list of arrays containing split points for each dimension.
Returns
-------
discretized_sample : array_like
A sequence of integers with the same number of dimensions as sample.
"""
digitized_d = ()
for dimension in range(0, len(sample)):
digitized_d = digitized_d + (int(np.digitize(sample[dimension],
grid[dimension],
right=False)),)
return digitized_d
def tile_encode(sample, tilings, flatten=False):
"""Encode given sample using tile-coding.
Parameters
----------
sample : array_like
A single sample from the (original) continuous space.
tilings : list
A list of tilings (grids), each produced by create_tiling_grid().
flatten : bool
If true, flatten the resulting binary arrays into a single long vector.
Returns
-------
encoded_sample : list or array_like
A list of binary vectors, one for each tiling, or flattened into one.
"""
encoded_tiles = [discretize(sample, tiling) for tiling in tilings]
if flatten:
return np.concatenate(encoded_tiles)
else:
return encoded_tiles
# Test with some sample values
samples = [(-1.2 , -5.1 ),
(-0.75, 3.25),
(-0.5 , 0.0 ),
( 0.25, -1.9 ),
( 0.15, -1.75),
( 0.75, 2.5 ),
( 0.7 , -3.7 ),
( 1.0 , 5.0 )]
encoded_samples = [tile_encode(sample, tilings) for sample in samples]
print("\nSamples:", repr(samples), sep="\n")
print("\nEncoded samples:", repr(encoded_samples), sep="\n")
```
Note that we did not flatten the encoding above, which is why each sample's representation is a pair of indices for each tiling. This makes it easy to visualize it using the tilings.
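As a quick illustration (the exact indices depend on the tilings defined above), the flattened form simply concatenates the per-tiling index pairs into one vector:
```
# Illustrative comparison of the unflattened and flattened encodings of one sample
sample = samples[3]
print(tile_encode(sample, tilings))                 # one (row, col) index pair per tiling
print(tile_encode(sample, tilings, flatten=True))   # the same indices concatenated
```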
```
from matplotlib.patches import Rectangle
def visualize_encoded_samples(samples, encoded_samples, tilings, low=None, high=None):
"""Visualize samples by activating the respective tiles."""
samples = np.array(samples) # for ease of indexing
# Show tiling grids
ax = visualize_tilings(tilings)
# If bounds (low, high) are specified, use them to set axis limits
if low is not None and high is not None:
ax.set_xlim(low[0], high[0])
ax.set_ylim(low[1], high[1])
else:
# Pre-render (invisible) samples to automatically set reasonable axis limits, and use them as (low, high)
ax.plot(samples[:, 0], samples[:, 1], 'o', alpha=0.0)
low = [ax.get_xlim()[0], ax.get_ylim()[0]]
high = [ax.get_xlim()[1], ax.get_ylim()[1]]
# Map each encoded sample (which is really a list of indices) to the corresponding tiles it belongs to
tilings_extended = [np.hstack((np.array([low]).T, grid, np.array([high]).T)) for grid in tilings] # add low and high ends
tile_centers = [(grid_extended[:, 1:] + grid_extended[:, :-1]) / 2 for grid_extended in tilings_extended] # compute center of each tile
tile_toplefts = [grid_extended[:, :-1] for grid_extended in tilings_extended] # compute topleft of each tile
tile_bottomrights = [grid_extended[:, 1:] for grid_extended in tilings_extended] # compute bottomright of each tile
prop_cycle = plt.rcParams['axes.prop_cycle']
colors = prop_cycle.by_key()['color']
for sample, encoded_sample in zip(samples, encoded_samples):
for i, tile in enumerate(encoded_sample):
# Shade the entire tile with a rectangle
topleft = tile_toplefts[i][0][tile[0]], tile_toplefts[i][1][tile[1]]
bottomright = tile_bottomrights[i][0][tile[0]], tile_bottomrights[i][1][tile[1]]
ax.add_patch(Rectangle(topleft, bottomright[0] - topleft[0], bottomright[1] - topleft[1],
color=colors[i], alpha=0.33))
# In case sample is outside tile bounds, it may not have been highlighted properly
if any(sample < topleft) or any(sample > bottomright):
# So plot a point in the center of the tile and draw a connecting line
cx, cy = tile_centers[i][0][tile[0]], tile_centers[i][1][tile[1]]
ax.add_line(Line2D([sample[0], cx], [sample[1], cy], color=colors[i]))
ax.plot(cx, cy, 's', color=colors[i])
# Finally, plot original samples
ax.plot(samples[:, 0], samples[:, 1], 'o', color='r')
ax.margins(x=0, y=0) # remove unnecessary margins
ax.set_title("Tile-encoded samples")
return ax
visualize_encoded_samples(samples, encoded_samples, tilings);
```
Inspect the results and make sure you understand how the corresponding tiles are being chosen. Note that some samples may have one or more tiles in common.
### 5. Q-Table with Tile Coding
The next step is to design a special Q-table that is able to utilize this tile coding scheme. It should have the same kind of interface as a regular table, i.e. given a `<state, action>` pair, it should return a `<value>`. Similarly, it should also allow you to update the `<value>` for a given `<state, action>` pair (note that this should update all the tiles that `<state>` belongs to).
The `<state>` supplied here is assumed to be from the original continuous state space, and `<action>` is discrete (an integer index). The Q-table should internally convert the `<state>` to its tile-coded representation when required.
```
class QTable:
"""Simple Q-table."""
def __init__(self, state_size, action_size):
"""Initialize Q-table.
Parameters
----------
state_size : tuple
Number of discrete values along each dimension of state space.
action_size : int
Number of discrete actions in action space.
"""
self.state_size = state_size
self.action_size = action_size
self.q_table = np.zeros(shape=(self.state_size + (self.action_size,)))
# TODO: Create Q-table, initialize all Q-values to zero
# Note: If state_size = (9, 9), action_size = 2, q_table.shape should be (9, 9, 2)
print("QTable(): size =", self.q_table.shape)
class TiledQTable:
"""Composite Q-table with an internal tile coding scheme."""
def __init__(self, low, high, tiling_specs, action_size):
"""Create tilings and initialize internal Q-table(s).
Parameters
----------
low : array_like
Lower bounds for each dimension of state space.
high : array_like
Upper bounds for each dimension of state space.
tiling_specs : list of tuples
A sequence of (bins, offsets) to be passed to create_tilings() along with low, high.
action_size : int
Number of discrete actions in action space.
"""
self.tilings = create_tilings(low, high, tiling_specs)
self.state_sizes = [tuple(len(splits)+1 for splits in tiling_grid) for tiling_grid in self.tilings]
self.action_size = action_size
self.q_tables = [QTable(state_size, self.action_size) for state_size in self.state_sizes]
print("TiledQTable(): no. of internal tables = ", len(self.q_tables))
def get(self, state, action):
"""Get Q-value for given <state, action> pair.
Parameters
----------
state : array_like
Vector representing the state in the original continuous space.
action : int
Index of desired action.
Returns
-------
value : float
Q-value of given <state, action> pair, averaged from all internal Q-tables.
"""
# TODO: Encode state to get tile indices
state_encoding = tile_encode(state, self.tilings)
# TODO: Retrieve q-value for each tiling, and return their average
action_value: float = 0.0
for i, tile_q_table in enumerate(self.q_tables):
action_value += tile_q_table.q_table[tuple(state_encoding[i] + (action,))]
return action_value / len(self.q_tables)
def update(self, state, action, value, alpha=0.1):
"""Soft-update Q-value for given <state, action> pair to value.
Instead of overwriting Q(state, action) with value, perform soft-update:
Q(state, action) = alpha * value + (1.0 - alpha) * Q(state, action)
Parameters
----------
state : array_like
Vector representing the state in the original continuous space.
action : int
Index of desired action.
value : float
Desired Q-value for <state, action> pair.
alpha : float
Update factor to perform soft-update, in [0.0, 1.0] range.
"""
# TODO: Encode state to get tile indices
state_encoding = tile_encode(state, self.tilings)
# TODO: Update q-value for each tiling by update factor alpha
for i, tile_q_table in enumerate(self.q_tables):
q_table_value = tile_q_table.q_table[tuple(state_encoding[i] + (action,))]
new_value = alpha * value + (1.0 - alpha) * q_table_value
tile_q_table.q_table[tuple(state_encoding[i] + (action,))] = new_value
# Test with a sample Q-table
tq = TiledQTable(low, high, tiling_specs, 2)
s1 = 3; s2 = 4; a = 0; q = 1.0
print("[GET] Q({}, {}) = {}".format(samples[s1], a, tq.get(samples[s1], a))) # check value at sample = s1, action = a
print("[UPDATE] Q({}, {}) = {}".format(samples[s2], a, q)); tq.update(samples[s2], a, q) # update value for sample with some common tile(s)
print("[GET] Q({}, {}) = {}".format(samples[s1], a, tq.get(samples[s1], a))) # check value again, should be slightly updated
```
If you update the q-value for a particular state (say, `(0.25, -1.91)`) and action (say, `0`), then you should notice the q-value of a nearby state (e.g. `(0.15, -1.75)` and same action) has changed as well! This is how tile-coding is able to generalize values across the state space better than a single uniform grid.
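As a quick, purely illustrative check (the exact numbers depend on the tilings and any earlier updates), you can watch the neighbouring state's value move after a single update:
```
# Illustrative: updating one state also shifts the value of a nearby state
a = 0
print(tq.get((0.15, -1.75), a))     # value before the update
tq.update((0.25, -1.91), a, 1.0)    # update a nearby state that shares tiles
print(tq.get((0.15, -1.75), a))     # value has moved because the states share tiles
```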
### 6. Implement a Q-Learning Agent using Tile-Coding
Now it's your turn to apply this discretization technique to design and test a complete learning agent!
```
class QLearningAgentTileCoding:
"""Q-Learning agent that can act on a continuous state space by discretizing it."""
def __init__(self, env, tiled_q_table, alpha=0.02, gamma=0.99,
epsilon=1.0, epsilon_decay_rate=0.9995, min_epsilon=.01, seed=123):
"""Initialize variables, create grid for discretization."""
# Environment info
self.env = env
        self.state_size = tiled_q_table.state_sizes
self.action_size = self.env.action_space.n # 1-dimensional discrete action space
self.seed = np.random.seed(seed)
print("Environment:", self.env)
print("State space size:", self.state_size)
print("Action space size:", self.action_size)
# Learning parameters
self.alpha = alpha # learning rate
self.gamma = gamma # discount factor
self.epsilon = self.initial_epsilon = epsilon # initial exploration rate
self.epsilon_decay_rate = epsilon_decay_rate # how quickly should we decrease epsilon
self.min_epsilon = min_epsilon
# Create Q-table
self.tiled_q_table = tiled_q_table
def reset_episode(self, state):
"""Reset variables for a new episode."""
# Gradually decrease exploration rate
self.epsilon *= self.epsilon_decay_rate
self.epsilon = max(self.epsilon, self.min_epsilon)
# Decide initial action
self.last_state = state
Q_state = [self.tiled_q_table.get(state, action) for action in range(self.action_size)]
self.last_action = np.argmax(Q_state)
return self.last_action
def reset_exploration(self, epsilon=None):
"""Reset exploration rate used when training."""
self.epsilon = epsilon if epsilon is not None else self.initial_epsilon
def act(self, state, reward=None, done=None, mode='train'):
"""Pick next action and update internal Q table (when mode != 'test')."""
Q_state = [self.tiled_q_table.get(state, action) for action in range(self.action_size)]
if mode == 'test':
# Test mode: Simply produce an action
action = np.argmax(Q_state)
else:
# Train mode (default): Update Q table, pick next action
# Note: We update the Q table entry for the *last* (state, action) pair with current state, reward
action_value = reward + self.gamma * max(Q_state)
self.tiled_q_table.update(self.last_state, self.last_action, action_value, self.alpha)
# Exploration vs. exploitation
do_exploration = np.random.uniform(0, 1) < self.epsilon
if do_exploration:
# Pick a random action
action = np.random.randint(0, self.action_size)
else:
# Pick the best action from Q table
action = np.argmax(Q_state)
# Roll over current state, action for next step
self.last_state = state
self.last_action = action
return action
n_bins = 10
obs_space = env.observation_space
n_actions = env.action_space.n
obs_space_shape = env.observation_space.shape[0]
bins = tuple([n_bins]*obs_space_shape)
offset_positions = (obs_space.high - obs_space.low)/(3*n_bins)
tiling_specifications = [(bins, -offset_positions),
(bins, tuple([0.0] * obs_space_shape)),
(bins, +offset_positions)]
tiled_q_tables = TiledQTable(obs_space.low,
obs_space.high,
tiling_specifications,
n_actions)
agent = QLearningAgentTileCoding(env=env,
tiled_q_table=tiled_q_tables)
print(f'''Observation Space Shape: {obs_space_shape}''')
print(f'''Bins: {bins}''')
print(f'''Offsets: {offset_positions}''')
print(f'''Tilings: {tiling_specifications}''')
def run(agent, env, num_episodes=10000, mode='train'):
"""Run agent in given reinforcement learning environment and return scores."""
scores = []
max_avg_score = -np.inf
for i_episode in range(1, num_episodes+1):
# Initialize episode
state = env.reset()
action = agent.reset_episode(state)
total_reward = 0
done = False
# Roll out steps until done
while not done:
state, reward, done, info = env.step(action)
total_reward += reward
action = agent.act(state, reward, done, mode)
# Save final score
scores.append(total_reward)
# Print episode stats
if mode == 'train':
if len(scores) > 100:
avg_score = np.mean(scores[-100:])
if avg_score > max_avg_score:
max_avg_score = avg_score
if i_episode % 100 == 0:
print("\rEpisode {}/{} | Max Average Score: {}".format(i_episode, num_episodes, max_avg_score), end="")
sys.stdout.flush()
return scores
scores = run(agent, env)
import pandas as pd
def plot_scores(scores, rolling_window=100):
"""Plot scores and optional rolling mean using specified window."""
plt.plot(scores); plt.title("Scores");
rolling_mean = pd.Series(scores).rolling(rolling_window).mean()
plt.plot(rolling_mean);
return rolling_mean
rolling_mean = plot_scores(scores)
```
| github_jupyter |
```
import h5py
import numpy as np
files = ['../Data/ModelNet40_train/ply_data_train0.h5',
'../Data/ModelNet40_train/ply_data_train1.h5',
'../Data/ModelNet40_train/ply_data_train2.h5',
'../Data/ModelNet40_train/ply_data_train3.h5',
'../Data/ModelNet40_train/ply_data_train4.h5']
#files = ['../Data/ModelNet10_train/modelnet10_train.h5']
d = []
l = []
for i in range(len(files)):
    fh5 = h5py.File(files[i], 'r')  # read each training file in turn, not just the first
data = fh5['data'][:]
label = fh5['label'][:]
fh5.close()
if(i != 0):
d = np.append(d, data, axis=0)
l = np.append(l, label, axis=0)
else:
d = data
l = label
print d.shape
print l.shape
import matplotlib.pyplot as plt
plt.hist(l, bins=100)
plt.show()
from keras.utils import to_categorical
Y_train = to_categorical(l)
classes = Y_train.shape[1]
print Y_train.shape
print "Loaded dataset with %s classes"%(classes)
from tqdm import trange
# now we need to voxelize that point cloud...
def voxelize(dim, data):
# uncomment below if you have not already normalized your object to [0,1]^3
#m = max(x.min(), x.max(), key=abs)
#data /= m # This puts the data in [0,1]
data *= (dim/2) # This puts the data in [0,dim]
data += (dim/2)
data = np.asarray([[int(i[0]), int(i[1]), int(i[2])] for i in data])
    data = np.unique(data, axis=0)  # drop duplicate voxel coordinates
retval = np.zeros((dim, dim, dim))
for i in data:
retval[i[0]][i[1]][i[2]] = 1
retval = np.asarray([retval])
return retval
X_train = [voxelize(32, i) for i in d]
X_train = np.asarray(X_train)
X_train = np.reshape(X_train, (-1, 32, 32, 32, 1))
print X_train.shape
files = ['../Data/ModelNet40_test/ply_data_test0.h5',
'../Data/ModelNet40_test/ply_data_test1.h5']
d = []
l = []
for i in range(len(files)):
    fh5 = h5py.File(files[i], 'r')  # read each test file in turn, not just the first
data = fh5['data'][:]
label = fh5['label'][:]
fh5.close()
if(i != 0):
d = np.append(d, data, axis=0)
l = np.append(l, label, axis=0)
else:
d = data
l = label
print d.shape
print l.shape
Y_test = to_categorical(l)
X_test = [voxelize(32, i) for i in d]
X_test = np.asarray(X_test)
X_test = np.reshape(X_test, (-1, 32, 32, 32, 1))
import keras
from keras import backend as K
from keras.models import Sequential
from keras.layers import Convolution3D, MaxPooling3D
from keras.layers import Conv3D
from keras.layers.core import Activation, Dense, Dropout, Flatten
from keras.layers.advanced_activations import LeakyReLU
from keras.regularizers import l2
from keras.callbacks import LearningRateScheduler, ModelCheckpoint
from keras.optimizers import SGD
import random
import numpy as np
num_classes = classes
# Defining VoxNet in Keras 2
model = Sequential()
model.add(Conv3D(input_shape=(32, 32, 32, 1), filters=32,
kernel_size=(5,5,5), strides=(2, 2, 2)))
model.add(Activation(LeakyReLU(alpha=0.1)))
model.add(Dropout(rate=0.3))
model.add(Conv3D(filters=32, kernel_size=(3,3,3)))
model.add(Activation(LeakyReLU(alpha=0.1)))
model.add(MaxPooling3D(pool_size=(2, 2, 2), strides=None))
model.add(Dropout(rate=0.4))
model.add(Flatten())
model.add(Dense(units=128, activation='relu'))
model.add(Dropout(rate=0.5))
model.add(Dense(units=num_classes, kernel_initializer='normal', activation='relu'))
model.add(Activation("softmax"))
model.compile(loss='categorical_crossentropy', optimizer='sgd', metrics=["accuracy"])
model.summary()
history = model.fit(x=X_train, y=Y_train, batch_size=16,
epochs=25, verbose=1, validation_data=(X_test, Y_test))
# serialize model to JSON
from keras.models import model_from_json
import os
#model_json = model.to_json()
#with open("voxnet40.json", "w") as json_file:
# json_file.write(model_json)
# serialize weights to HDF5
model.save_weights("VoxNet-ModelNet40.h5")
print("Saved model to disk")
```
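Since only the weights were saved above (the JSON export of the architecture is commented out), reusing the model later means rebuilding the same architecture and then calling `load_weights`. A minimal sketch, assuming the cells above have been run so that `classes`, `X_test`, `Y_test` and the Keras imports are available:

```python
# Sketch: rebuild the same VoxNet architecture, then load the saved weights.
def build_voxnet(num_classes):
    model = Sequential()
    model.add(Conv3D(input_shape=(32, 32, 32, 1), filters=32,
                     kernel_size=(5, 5, 5), strides=(2, 2, 2)))
    model.add(LeakyReLU(alpha=0.1))
    model.add(Dropout(rate=0.3))
    model.add(Conv3D(filters=32, kernel_size=(3, 3, 3)))
    model.add(LeakyReLU(alpha=0.1))
    model.add(MaxPooling3D(pool_size=(2, 2, 2)))
    model.add(Dropout(rate=0.4))
    model.add(Flatten())
    model.add(Dense(units=128, activation='relu'))
    model.add(Dropout(rate=0.5))
    model.add(Dense(units=num_classes))
    model.add(Activation("softmax"))
    return model

inference_model = build_voxnet(classes)
inference_model.load_weights("VoxNet-ModelNet40.h5")
inference_model.compile(loss='categorical_crossentropy', optimizer='sgd', metrics=["accuracy"])
print(inference_model.evaluate(X_test, Y_test, verbose=0))  # [loss, accuracy]
```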
| github_jupyter |
# Sorting
### 1. Bubble: $O(n^2)$
repeatedly swaps adjacent elements that are in the wrong order
### 2. Selection: $O(n^2)$
repeatedly finds the largest remaining element and places it in its final position
### 3. Insertion: $O(n^2)$
### 4. Shell: $O(n^2)$
### 5. Merge: $O(n \log n)$
### 6. Quick: $O(n \log n)$
choosing a good pivot is important for performance
### 7. Counting: $O(n)$
### 8. Radix: $O(n)$
### 9. Bucket: $O(n)$
---
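Of the algorithms listed above, merge sort is not implemented in the sections below; a minimal reference sketch:

```python
# Merge sort: split recursively, then merge the two sorted halves (O(n log n)).
def merge_sort(arr):
    if len(arr) <= 1:
        return arr
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])
    right = merge_sort(arr[mid:])
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

# test code
print(merge_sort([64, 34, 25, 12, 22, 11, 90]))
```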
# Bubble
```
def bubble(arr):
n = len(arr)
for i in range(n):
        # (n-1)-i: the (i+1)-th index counted from the end
        # pass 0 -> the inner cursor moves up to index n-1
        # pass 1 -> up to n-1-1, and so on
for j in range(0, (n-1)-i):
print(j)
if arr[j] > arr[j+1]:
arr[j], arr[j+1] = arr[j+1], arr[j]
def bubble(arr):
    n = len(arr)
    for i in range(n):
        for j in range(0, (n-1)-i):
            if arr[j] > arr[j+1]:
                arr[j], arr[j+1] = arr[j+1], arr[j]
arr = [64, 34, 25, 12, 22, 11, 90]
bubble(arr)
arr
def bubble2(arr):
n = len(arr)
for i in range(n):
swapped = False
for j in range(0, n-1-i):
if arr[j] > arr[j+1]:
arr[j], arr[j+1] = arr[j+1], arr[j]
                # a swap happened, so the array was not yet fully sorted
swapped = True
if swapped == False:
break
def b(arr):
n = len(arr)
for i in range(n):
swapped = False
for j in range(0, n-1-i):
if arr[j] > arr[j+1]:
swapped = True
arr[j], arr[j+1] = arr[j+1], arr[j]
if swapped == False:
return
```
# Selection Sorting
```
def Selection(arr):
n = len(arr)
for i in range(n-1, 0, -1):
positionOfMax=0
for loc in range(1, i+1):
if arr[loc] > arr[positionOfMax]:
positionOfMax = loc
        arr[i], arr[positionOfMax] = arr[positionOfMax], arr[i]
# test code
arr = [54,26,93,17,77,31,44,55,20]
Selection(arr)
print(arr)
```
# Quick
```
# partition: cur walks from low up to high-1, keeping the elements <= pivot behind index i
def partition(arr, low, high):
i = low - 1
pivot = arr[high]
for cur in range(low, high):
print(cur, i)
if arr[cur] <= pivot:
i += 1
arr[i], arr[cur] = arr[cur], arr[i]
arr[i+1], arr[high] = arr[high], arr[i+1]
return i+1
def QuickSort(arr, low, high):
if low < high:
pi = partition(arr, low, high)
        # recurse on the left half
QuickSort(arr, low, pi-1)
        # recurse on the right half
QuickSort(arr, pi+1, high)
# test code
arr = [10, 7, 8, 9, 1, 5]
n = len(arr)
QuickSort(arr, 0, n-1)
for i in range(n):
print(arr[i])
```
# Quick2
```
def partition(arr, start, end):
    pivot = arr[start]
    i = start + 1
    j = end - 1
while True:
# i: traverse from begin
# j: traverse from end
# if arr[i](left side of pivot) smaller than pivot, then pass
while (i <= j and arr[i] <= pivot):
i += 1
# if arr[j](right side of pivot) larger than pivot, then pass
while (i <= j and arr[j] >= pivot):
j -= 1
if i <= j:
arr[i], arr[j] = arr[j], arr[i]
print(start)
        # once i and j have crossed, swap the pivot (at the front) with the rightmost element of the left partition
else:
arr[start], arr[j] = arr[j], arr[start]
return j
def quicksort(arr, start, end):
if end - start > 1:
# p: pivot location
p = partition(arr, start, end)
quicksort(arr, start=start, end=p)
quicksort(arr, start=p+1, end=end)
```
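For consistency with the other sections, a quick test of this variant; note that `end` is exclusive here, so the initial call passes `len(arr)`:

```python
# test code (end is exclusive in this version)
arr = [10, 7, 8, 9, 1, 5]
quicksort(arr, 0, len(arr))
print(arr)  # expected: [1, 5, 7, 8, 9, 10]
```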
# Counting Sort
- reference: https://www.geeksforgeeks.org/radix-sort/
- count_arr: count how many each of 0,1,2,...,n is in arr
- iter 0, 1, ..., n
- fill ans with 0, 1, ..., n
```
# the key is building the counting array
# then emitting each value as many times as it was counted
def counting_sort(arr, max_val):
count_arr = [0 for _ in range(max_val)]
for num in arr:
count_arr[num] += 1
i = 0
for num in range(max_val):
iter_n = count_arr[num]
for _ in range(iter_n):
arr[i] = num
i += 1
return arr
# test code
arr = [5,1,5,1,1,2,4,3,4,3,2]
max_val = 6
counting_sort(arr, max_val)
```
# Radix Sort
## Key idea
- `number //` the desired digit place (1st digit: 1, 2nd digit: 10, ...) `% 10`
- `// 10^(digit-1)`: shifts the digit we want into the last position
- e.g. to bring the third digit from the end of 25948 (the 9) into the last position: 25948 // 10^(3-1) = 259
- `% 10`: keeps only that last digit
```
4378 // 10**(4-1) % 10
def SortingByDigit(arr, exp):
n = len(arr)
output = [0 for _ in range(n)]
count = [0 for _ in range(10)]
for num in arr:
last_digit = num // exp % 10
count[last_digit] += 1
i = 1
    while i < 10:  # cumulative counts over the 10 possible digit values (0-9)
count[i] += count[i-1]
i += 1
    print('digit:', len(str(exp)))  # which decimal place is being sorted (1 = ones, 2 = tens, ...)
print(count)
    # why iterate backwards? it keeps the placement stable, which matters when later digits are sorted
i = n-1
while i >= 0:
last_digit = (arr[i] // exp) % 10
idx_by_cum = count[last_digit]
output[idx_by_cum - 1] = arr[i]
count[last_digit] -= 1
i -= 1
print(count)
# update arr
i = 0
for i in range(0,len(arr)):
arr[i] = output[i]
# arr = [i for i in output]
print(arr)
print()
def radixSort(arr):
max_ = max(arr)
exp = 1
while (max_ // exp) > 0:
print(max_, exp)
SortingByDigit(arr, exp)
exp *= 10
# test code
arr = [170, 5145, 3145, 2145, 802, 24]
radixSort(arr)
```
| github_jupyter |
# Physically labeled data: pyfocs single-ended examples
Finally, after all of that (probably confusing) work we can map the data to physical coordinates.
```
import xarray as xr
import pyfocs
import os
```
# 1. Load data
## 1.1 Configuration files
As in the previous example we will load and prepare the configuration files. This time we will load all the configuration files.
Physically labeled data is triggered by setting the below flag within the configuration file.
```python
final_flag = True
```
```
dir_example = os.path.join('../tests/data/')
# Grab a configuration file for the twisted pair pvc fiber and for the stainless steel fiber
config_names = [
'example_configuration_steelfiber.yml',
'example_twistedpair_bothwls.yml',
'example_twistedpair_p1wls.yml',
'example_twistedpair_p2wls.yml',
]
cfg_fname = os.path.join(dir_example, config_names[0])
cfg_ss, lib_ss = pyfocs.check.config(cfg_fname, ignore_flags=True)
cfg_fname = os.path.join(dir_example, config_names[1])
cfg_both, lib_both = pyfocs.check.config(cfg_fname, ignore_flags=True)
cfg_fname = os.path.join(dir_example, config_names[2])
cfg_p1, lib_p1 = pyfocs.check.config(cfg_fname, ignore_flags=True)
cfg_fname = os.path.join(dir_example, config_names[3])
cfg_p2, lib_p2 = pyfocs.check.config(cfg_fname, ignore_flags=True)
```
## 1.2 Data
- In this case we only use a single twisted pair, p1, since it is closer to the DTS device in LAF space yielding a less noisy signal.
- Additionally, we will load the paired heated-unheated stainless steel fiber that has been interpolated to a common spatial index.
```
ds_p1 = xr.open_dataset(os.path.join(dir_example, 'multifiledemo', 'final', 'multifiledemo_final_20190722-0000_p1-wls_unheated.nc'))
ds_p2 = xr.open_dataset(os.path.join(dir_example, 'multifiledemo', 'final', 'multifiledemo_final_20190722-0000_p2-wls_unheated.nc'))
ds_cold = xr.open_dataset(os.path.join(dir_example, 'multifiledemo', 'final', 'multifiledemo_final_20190722-0000_ss-wls_unheated.nc'))
ds_heat = xr.open_dataset(os.path.join(dir_example, 'multifiledemo', 'final', 'multifiledemo_final_20190722-0000_ss-wls_heated.nc'))
print('=================')
print('Unheated fibers - Twisted PVC fiber, pair 1')
print(ds_p1)
print('')
print('=================')
print('Unheated fibers - Twisted PVC fiber, pair 2')
print(ds_p2)
print('')
print('=================')
print('Unheated fibers - stainless steel')
print(ds_cold)
print('')
print('=================')
print('Heated fibers - stainless steel')
print(ds_heat)
print('')
```
Here we see that all datasets now have `x`, `y`, and `z` coordinates which are labeled using the `xyz` multiindex. Other quantities have been dropped.
The netcdf files are also now labeled differently. Channel information has been excluded and there is now a label on the location type at the end of the file name.
# 2. Calculate wind speed
## 2.1 Construct the power variable
Here I will construct a `power` data variable. The details of what is happening here are not important beyond the fact that `power` is a data variable with an LAF dimension. The wind speed code can accept `power` either as a DataArray sharing dimensions with `cal_temp` or as a single float.
```
import numpy as np
power_loc = {
'1': [1892.5, 2063.5],
'2': [2063.5, 2205.5],
'3': [2207.0, 2361.],
'4': [2361., 2524.]}
power_vals = {
'1': 6.1,
'2': 6.4,
'3': 4.7,
'4': 5.4,}
ds_heat['power'] = ('LAF', np.zeros_like(ds_heat.LAF))
for p in power_vals:
laf_mask = ((ds_heat.LAF > power_loc[p][0]) & (ds_heat.LAF < power_loc[p][1]))
ds_heat['power'] = xr.where(laf_mask, np.ones_like(ds_heat.LAF.values) * power_vals[p], ds_heat.power.values)
```
## 2.2 Calculate wind speed
```
wind_speed = pyfocs.wind_speed.calculate(ds_heat.cal_temp, ds_cold.cal_temp, ds_heat.power)
```
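As noted in 2.1, `power` can also be passed as a single float. Under the (hypothetical) assumption that every heated section received the same power, the call would reduce to:

```python
# Hypothetical: a single, uniform heating power for the whole heated fiber
uniform_power = 6.0
wind_speed_uniform = pyfocs.wind_speed.calculate(ds_heat.cal_temp, ds_cold.cal_temp, uniform_power)
```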
## 2.3 Split up wind speed based on fiber orientation
Wind speed is most efficiently measured in the direction orthogonal to the fiber. Since we have fibers that are orthogonal to each other, we effectively measured wind in two different directions. We represent that here by grouping sections that are parallel to each other.
```
cross_valley_components = ['OR_SE', 'OR_NW']
logic = [wind_speed.unheated == l for l in cross_valley_components]
logic = xr.concat(logic, dim='locations').any(dim='locations')
wind_speed_cross_valley = wind_speed.where(logic, drop=True)
along_valley_components = ['OR_SW2', 'OR_SW1', 'OR_NE1', 'OR_NE2']
logic = [wind_speed.unheated == l for l in along_valley_components]
logic = xr.concat(logic, dim='locations').any(dim='locations')
wind_speed_along_valley = wind_speed.where(logic, drop=True)
```
## 2.4 Create a Dataset that contains all unheated data
```
unheated = xr.concat([ds_cold, ds_p1], dim='xyz', coords='different')
```
# 3. Plot your Fiber Optic Distributed Sensing data
## 3.1 Wind speed and temperature
```
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(12, 6),)
spec = fig.add_gridspec(ncols=4,
nrows=2,
width_ratios=[1, 0.08, 0.04, 0.08],
hspace=0.18, wspace=0.25,
)
ax_ew_cbar = fig.add_subplot(spec[0, 3])
ax_ns_cbar = fig.add_subplot(spec[1, 3])
ax_t_cbar = fig.add_subplot(spec[:, 1])
ax_temp = fig.add_subplot(spec[:, 0])
im = ax_temp.scatter(unheated.x, unheated.y, s=10,
c=unheated.mean(dim='time').cal_temp.values,
cmap='viridis', vmin=8.5, vmax=10)
ax_temp.set_ylabel('Relative Northing (m)')
ax_temp.set_xlabel('Relative Easting (m)')
plt.colorbar(im, cax=ax_t_cbar, extend='both')
ax_t_cbar.set_ylabel('Temperature (C)')
ax_temp.set_title('a) LOVE19 Outer Array', loc='left')
im = ax_temp.scatter(wind_speed_along_valley.x * 1.1,
wind_speed_along_valley.y * 1.1,
s=10,
c=wind_speed_along_valley.mean(dim='time').values,
cmap='Oranges', vmin=0.5, vmax=4)
plt.colorbar(im, cax=ax_ew_cbar, extend='max')
ax_ew_cbar.set_ylabel('Along valley wind (m/s)')
im = ax_temp.scatter(wind_speed_cross_valley.x * 1.1,
wind_speed_cross_valley.y * 1.1,
s=10,
c=wind_speed_cross_valley.mean(dim='time').values,
cmap='Blues', vmin=0.5, vmax=4)
plt.colorbar(im, cax=ax_ns_cbar, extend='max')
ax_ns_cbar.set_ylabel('Cross valley wind (m/s)')
```
## 3.2 Biases in space
```
ds_p2 = ds_p2.interp_like(ds_p1)
fig = plt.figure(figsize=(8, 6),)
spec = fig.add_gridspec(ncols=2,
nrows=1,
width_ratios=[1, 0.1],
hspace=0.18, wspace=0.25,
)
ax_t_cbar = fig.add_subplot(spec[:, 1])
ax_temp = fig.add_subplot(spec[:, 0])
im = ax_temp.scatter(
ds_p1.x,
ds_p1.y,
s=10,
c=(ds_p1.cal_temp - ds_p2.cal_temp).mean(dim='time').values,
cmap='RdBu', vmin=-0.5, vmax=0.5)
ax_temp.set_ylabel('Relative Northing (m)')
ax_temp.set_xlabel('Relative Easting (m)')
plt.colorbar(im, cax=ax_t_cbar, extend='both')
ax_t_cbar.set_ylabel('p1 - p2 (K)')
ax_temp.set_title('LOVE19 Twisted PVC Fiber Bias', loc='left')
```
Here we can see that the reference sections are a bit misleading. While they evaluate to effectively zero bias, there are substantial biases between what should be replicate measurements. We have found this to be typical of DTS observations. The cause and correction are a subject of ongoing research, but we highlight it as a final word of caution on DTS. The method is exceptionally powerful but very far from a push-button operation. It requires a substantial investment of time at every step: setting up the fiber takes much longer than for other instruments, preparing the dataset is a long process even with the tools provided by pyfocs, and it is still a new technique that is subject to uncertainties not yet fully known to the community.
| github_jupyter |
# Machine Translation English-German Example Using SageMaker Seq2Seq
1. [Introduction](#Introduction)
2. [Setup](#Setup)
3. [Download dataset and preprocess](#Download-dataset-and-preprocess)
4. [Training the Machine Translation model](#Training-the-Machine-Translation-model)
5. [Inference](#Inference)
## Introduction
Welcome to our Machine Translation end-to-end example! In this demo, we will train an English-German translation model and test the predictions on a few examples.
SageMaker Seq2Seq algorithm is built on top of [Sockeye](https://github.com/awslabs/sockeye), a sequence-to-sequence framework for Neural Machine Translation based on MXNet. SageMaker Seq2Seq implements state-of-the-art encoder-decoder architectures which can also be used for tasks like Abstractive Summarization in addition to Machine Translation.
To get started, we need to set up the environment with a few prerequisite steps, for permissions, configurations, and so on.
## Setup
Let's start by specifying:
- The S3 bucket and prefix that you want to use for training and model data. **This should be within the same region as the Notebook Instance, training, and hosting.**
- The IAM role arn used to give training and hosting access to your data. See the documentation for how to create these. Note, if more than one role is required for notebook instances, training, and/or hosting, please replace the boto regexp in the cell below with the appropriate full IAM role arn string(s).
```
# S3 bucket and prefix
bucket = '<your_s3_bucket_name_here>'
prefix = 'sagemaker/<your_s3_prefix_here>' # E.g.'sagemaker/seq2seq/eng-german'
import boto3
import re
from sagemaker import get_execution_role
role = get_execution_role()
```
Next, we'll import the Python libraries we'll need for the remainder of the exercise.
```
from time import gmtime, strftime
import time
import numpy as np
import os
import json
# For plotting attention matrix later on
import matplotlib
%matplotlib inline
import matplotlib.pyplot as plt
```
## Download dataset and preprocess
In this notebook, we will train an English-to-German translation model on a dataset from the
[Conference on Machine Translation (WMT) 2017](http://www.statmt.org/wmt17/).
```
%%bash
wget http://data.statmt.org/wmt17/translation-task/preprocessed/de-en/corpus.tc.de.gz & \
wget http://data.statmt.org/wmt17/translation-task/preprocessed/de-en/corpus.tc.en.gz & wait
gunzip corpus.tc.de.gz & \
gunzip corpus.tc.en.gz & wait
mkdir validation
curl http://data.statmt.org/wmt17/translation-task/preprocessed/de-en/dev.tgz | tar xvzf - -C validation
```
Please note that it is a common practice to split words into subwords using Byte Pair Encoding (BPE). Please refer to [this](https://github.com/awslabs/sockeye/tree/master/tutorials/wmt) tutorial if you are interested in performing BPE.
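As a rough, toy illustration of the idea (not the tooling used for the WMT data), BPE starts from characters and repeatedly merges the most frequent adjacent symbol pair; the vocabulary and merge count below are made up for the example:

```python
import re
import collections

def get_stats(vocab):
    # count frequencies of adjacent symbol pairs across the vocabulary
    pairs = collections.defaultdict(int)
    for word, freq in vocab.items():
        symbols = word.split()
        for i in range(len(symbols) - 1):
            pairs[(symbols[i], symbols[i + 1])] += freq
    return pairs

def merge_vocab(pair, vocab):
    # replace every occurrence of the chosen pair with a single merged symbol
    bigram = re.escape(' '.join(pair))
    pattern = re.compile(r'(?<!\S)' + bigram + r'(?!\S)')
    return {pattern.sub(''.join(pair), word): freq for word, freq in vocab.items()}

# toy vocabulary: words split into characters, with an end-of-word marker
vocab = {'l o w </w>': 5, 'l o w e r </w>': 2, 'n e w e s t </w>': 6, 'w i d e s t </w>': 3}
for _ in range(10):
    pairs = get_stats(vocab)
    best = max(pairs, key=pairs.get)
    vocab = merge_vocab(best, vocab)
    print(best)
```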
Since training on the whole dataset might take several hours/days, for this demo, let us train on the **first 10,000 lines only**. Don't run the next cell if you want to train on the complete dataset.
```
!head -n 10000 corpus.tc.en > corpus.tc.en.small
!head -n 10000 corpus.tc.de > corpus.tc.de.small
```
Now, let's use the preprocessing script `create_vocab_proto.py` (provided with this notebook) to create vocabulary mappings (strings to integers) and convert these files to x-recordio-protobuf as required for training by SageMaker Seq2Seq.
Uncomment the cell below and run it to check the arguments this script expects.
```
%%bash
# python3 create_vocab_proto.py -h
```
The cell below does the preprocessing. If you are using the complete dataset, the script might take around 10-15 min on an m4.xlarge notebook instance. Remove ".small" from the file names for training on full datasets.
```
%%time
%%bash
python3 create_vocab_proto.py \
--train-source corpus.tc.en.small \
--train-target corpus.tc.de.small \
--val-source validation/newstest2014.tc.en \
--val-target validation/newstest2014.tc.de
```
The script will output 4 files, namely:
- train.rec : Contains source and target sentences for training in protobuf format
- val.rec : Contains source and target sentences for validation in protobuf format
- vocab.src.json : Vocabulary mapping (string to int) for source language (English in this example)
- vocab.trg.json : Vocabulary mapping (string to int) for target language (German in this example)
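Before uploading, it can be useful to peek at the vocabulary files to sanity-check the mappings. Assuming they are plain JSON objects mapping token strings to integer ids (as described above), a quick look might be:

```python
import json

# Inspect the source-language vocabulary produced by create_vocab_proto.py
with open("vocab.src.json") as f:
    vocab_src = json.load(f)
print("source vocabulary size:", len(vocab_src))
print(list(vocab_src.items())[:10])  # a handful of (token, id) pairs
```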
Let's upload the pre-processed dataset and vocabularies to S3
```
def upload_to_s3(bucket, prefix, channel, file):
s3 = boto3.resource('s3')
data = open(file, "rb")
key = prefix + "/" + channel + '/' + file
s3.Bucket(bucket).put_object(Key=key, Body=data)
upload_to_s3(bucket, prefix, 'train', 'train.rec')
upload_to_s3(bucket, prefix, 'validation', 'val.rec')
upload_to_s3(bucket, prefix, 'vocab', 'vocab.src.json')
upload_to_s3(bucket, prefix, 'vocab', 'vocab.trg.json')
region_name = boto3.Session().region_name
containers = {'us-west-2': '433757028032.dkr.ecr.us-west-2.amazonaws.com/seq2seq:latest',
'us-east-1': '811284229777.dkr.ecr.us-east-1.amazonaws.com/seq2seq:latest',
'us-east-2': '825641698319.dkr.ecr.us-east-2.amazonaws.com/seq2seq:latest',
'eu-west-1': '685385470294.dkr.ecr.eu-west-1.amazonaws.com/seq2seq:latest'}
container = containers[region_name]
print('Using SageMaker Seq2Seq container: {} ({})'.format(container, region_name))
```
## Training the Machine Translation model
```
job_name = 'seq2seq-en-de-p2-xlarge-' + strftime("%Y-%m-%d-%H", gmtime())
print("Training job", job_name)
create_training_params = \
{
"AlgorithmSpecification": {
"TrainingImage": container,
"TrainingInputMode": "File"
},
"RoleArn": role,
"OutputDataConfig": {
"S3OutputPath": "s3://{}/{}/".format(bucket, prefix)
},
"ResourceConfig": {
# Seq2Seq does not support multiple machines. Currently, it only supports single machine, multiple GPUs
"InstanceCount": 1,
"InstanceType": "ml.p2.xlarge", # We suggest one of ["ml.p2.16xlarge", "ml.p2.8xlarge", "ml.p2.xlarge"]
"VolumeSizeInGB": 50
},
"TrainingJobName": job_name,
"HyperParameters": {
# Please refer to the documentation for complete list of parameters
"max_seq_len_source": "60",
"max_seq_len_target": "60",
"optimized_metric": "bleu",
"batch_size": "64", # Please use a larger batch size (256 or 512) if using ml.p2.8xlarge or ml.p2.16xlarge
"checkpoint_frequency_num_batches": "1000",
"rnn_num_hidden": "512",
"num_layers_encoder": "1",
"num_layers_decoder": "1",
"num_embed_source": "512",
"num_embed_target": "512",
"checkpoint_threshold": "3",
"max_num_batches": "2100"
# Training will stop after 2100 iterations/batches.
# This is just for demo purposes. Remove the above parameter if you want a better model.
},
"StoppingCondition": {
"MaxRuntimeInSeconds": 48 * 3600
},
"InputDataConfig": [
{
"ChannelName": "train",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": "s3://{}/{}/train/".format(bucket, prefix),
"S3DataDistributionType": "FullyReplicated"
}
},
},
{
"ChannelName": "vocab",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": "s3://{}/{}/vocab/".format(bucket, prefix),
"S3DataDistributionType": "FullyReplicated"
}
},
},
{
"ChannelName": "validation",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": "s3://{}/{}/validation/".format(bucket, prefix),
"S3DataDistributionType": "FullyReplicated"
}
},
}
]
}
sagemaker_client = boto3.Session().client(service_name='sagemaker')
sagemaker_client.create_training_job(**create_training_params)
status = sagemaker_client.describe_training_job(TrainingJobName=job_name)['TrainingJobStatus']
print(status)
status = sagemaker_client.describe_training_job(TrainingJobName=job_name)['TrainingJobStatus']
print(status)
# if the job failed, determine why
if status == 'Failed':
    message = sagemaker_client.describe_training_job(TrainingJobName=job_name)['FailureReason']
print('Training failed with the following error: {}'.format(message))
raise Exception('Training job failed')
```
> Now wait for the training job to complete and proceed to the next step after you see model artifacts in your S3 bucket.
You can jump to [Use a pretrained model](#Use-a-pretrained-model) as training might take some time.
## Inference
A trained model does nothing on its own. We now want to use the model to perform inference. For this example, that means translating sentence(s) from English to German.
This section involves several steps,
- Create model - Create a model using the artifact (model.tar.gz) produced by training
- Create Endpoint Configuration - Create a configuration defining an endpoint, using the above model
- Create Endpoint - Use the configuration to create an inference endpoint.
- Perform Inference - Perform inference on some input data using the endpoint.
### Create model
We now create a SageMaker Model from the training output. Using the model, we can then create an Endpoint Configuration.
```
use_pretrained_model = False
```
### Use a pretrained model
#### Please uncomment and run the cell below if you want to use a pretrained model, as training might take several hours/days to complete.
```
# use_pretrained_model = True
# model_name = "pretrained-en-de-model"
# !curl https://s3-us-west-2.amazonaws.com/gsaur-seq2seq-data/seq2seq/eng-german/full-nb-translation-eng-german-p2-16x-2017-11-24-22-25-53/output/model.tar.gz > model.tar.gz
# !curl https://s3-us-west-2.amazonaws.com/gsaur-seq2seq-data/seq2seq/eng-german/full-nb-translation-eng-german-p2-16x-2017-11-24-22-25-53/output/vocab.src.json > vocab.src.json
# !curl https://s3-us-west-2.amazonaws.com/gsaur-seq2seq-data/seq2seq/eng-german/full-nb-translation-eng-german-p2-16x-2017-11-24-22-25-53/output/vocab.trg.json > vocab.trg.json
# upload_to_s3(bucket, prefix, 'pretrained_model', 'model.tar.gz')
# model_data = "s3://{}/{}/pretrained_model/model.tar.gz".format(bucket, prefix)
%%time
sage = boto3.client('sagemaker')
if not use_pretrained_model:
info = sage.describe_training_job(TrainingJobName=job_name)
model_name=job_name
model_data = info['ModelArtifacts']['S3ModelArtifacts']
print(model_name)
print(model_data)
primary_container = {
'Image': container,
'ModelDataUrl': model_data
}
create_model_response = sage.create_model(
ModelName = model_name,
ExecutionRoleArn = role,
PrimaryContainer = primary_container)
print(create_model_response['ModelArn'])
```
### Create endpoint configuration
Use the model to create an endpoint configuration. The endpoint configuration also contains information about the type and number of EC2 instances to use when hosting the model.
Since SageMaker Seq2Seq is based on Neural Nets, we could use an ml.p2.xlarge (GPU) instance, but for this example we will use a free tier eligible ml.m4.xlarge.
```
from time import gmtime, strftime
endpoint_config_name = 'Seq2SeqEndpointConfig-' + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
print(endpoint_config_name)
create_endpoint_config_response = sage.create_endpoint_config(
EndpointConfigName = endpoint_config_name,
ProductionVariants=[{
'InstanceType':'ml.m4.xlarge',
'InitialInstanceCount':1,
'ModelName':model_name,
'VariantName':'AllTraffic'}])
print("Endpoint Config Arn: " + create_endpoint_config_response['EndpointConfigArn'])
```
### Create endpoint
Lastly, we create the endpoint that serves up the model, by specifying the name and configuration defined above. The end result is an endpoint that can be validated and incorporated into production applications. This takes 10-15 minutes to complete.
```
%%time
import time
endpoint_name = 'Seq2SeqEndpoint-' + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
print(endpoint_name)
create_endpoint_response = sage.create_endpoint(
EndpointName=endpoint_name,
EndpointConfigName=endpoint_config_name)
print(create_endpoint_response['EndpointArn'])
resp = sage.describe_endpoint(EndpointName=endpoint_name)
status = resp['EndpointStatus']
print("Status: " + status)
# wait until the status has changed
sage.get_waiter('endpoint_in_service').wait(EndpointName=endpoint_name)
# print the status of the endpoint
endpoint_response = sage.describe_endpoint(EndpointName=endpoint_name)
status = endpoint_response['EndpointStatus']
print('Endpoint creation ended with EndpointStatus = {}'.format(status))
if status != 'InService':
raise Exception('Endpoint creation failed.')
```
If you see the message,
> Endpoint creation ended with EndpointStatus = InService
then congratulations! You now have a functioning inference endpoint. You can confirm the endpoint configuration and status by navigating to the "Endpoints" tab in the AWS SageMaker console.
We will finally create a runtime object from which we can invoke the endpoint.
```
runtime = boto3.client(service_name='runtime.sagemaker')
```
# Perform Inference
### Using JSON format for inference (Suggested for a single or small number of data instances)
#### Note that you don't have to convert string to text using the vocabulary mapping for inference using JSON mode
```
sentences = ["you are so good !",
"can you drive a car ?",
"i want to watch a movie ."
]
payload = {"instances" : []}
for sent in sentences:
payload["instances"].append({"data" : sent})
response = runtime.invoke_endpoint(EndpointName=endpoint_name,
ContentType='application/json',
Body=json.dumps(payload))
response = response["Body"].read().decode("utf-8")
response = json.loads(response)
print(response)
```
### Retrieving the Attention Matrix
Passing `"attention_matrix":"true"` in `configuration` of the data instance will return the attention matrix.
```
sentence = 'can you drive a car ?'
payload = {"instances" : [{
"data" : sentence,
"configuration" : {"attention_matrix":"true"}
}
]}
response = runtime.invoke_endpoint(EndpointName=endpoint_name,
ContentType='application/json',
Body=json.dumps(payload))
response = response["Body"].read().decode("utf-8")
response = json.loads(response)['predictions'][0]
source = sentence
target = response["target"]
attention_matrix = np.array(response["matrix"])
print("Source: %s \nTarget: %s" % (source, target))
# Define a function for plotting the attention matrix
def plot_matrix(attention_matrix, target, source):
source_tokens = source.split()
target_tokens = target.split()
assert attention_matrix.shape[0] == len(target_tokens)
plt.imshow(attention_matrix.transpose(), interpolation="nearest", cmap="Greys")
plt.xlabel("target")
plt.ylabel("source")
plt.gca().set_xticks([i for i in range(0, len(target_tokens))])
plt.gca().set_yticks([i for i in range(0, len(source_tokens))])
plt.gca().set_xticklabels(target_tokens)
plt.gca().set_yticklabels(source_tokens)
plt.tight_layout()
plot_matrix(attention_matrix, target, source)
```
### Using Protobuf format for inference (Suggested for efficient bulk inference)
Read in the vocabulary mappings, as this mode of inference accepts lists of integers and returns lists of integers.
```
import io
import tempfile
from record_pb2 import Record
from create_vocab_proto import vocab_from_json, reverse_vocab, write_recordio, list_to_record_bytes, read_next
source = vocab_from_json("vocab.src.json")
target = vocab_from_json("vocab.trg.json")
source_rev = reverse_vocab(source)
target_rev = reverse_vocab(target)
sentences = ["this is so cool",
"i am having dinner .",
"i am sitting in an aeroplane .",
"come let us go for a long drive ."]
```
Convert the strings to integers, followed by protobuf encoding:
```
# Convert strings to integers using source vocab mapping. Out-of-vocabulary strings are mapped to 1 - the mapping for <unk>
sentences = [[source.get(token, 1) for token in sentence.split()] for sentence in sentences]
f = io.BytesIO()
for sentence in sentences:
record = list_to_record_bytes(sentence, [])
write_recordio(f, record)
response = runtime.invoke_endpoint(EndpointName=endpoint_name,
ContentType='application/x-recordio-protobuf',
Body=f.getvalue())
response = response["Body"].read()
```
Now, parse the protobuf response and convert list of integers back to strings
```
def _parse_proto_response(received_bytes):
output_file = tempfile.NamedTemporaryFile()
output_file.write(received_bytes)
output_file.flush()
target_sentences = []
with open(output_file.name, 'rb') as datum:
next_record = True
while next_record:
next_record = read_next(datum)
if next_record:
rec = Record()
rec.ParseFromString(next_record)
target = list(rec.features["target"].int32_tensor.values)
target_sentences.append(target)
else:
break
return target_sentences
targets = _parse_proto_response(response)
resp = [" ".join([target_rev.get(token, "<unk>") for token in sentence]) for
sentence in targets]
print(resp)
```
# Stop / Close the Endpoint (Optional)
Finally, we should delete the endpoint before we close the notebook.
```
sage.delete_endpoint(EndpointName=endpoint_name)
```
| github_jupyter |
<a href="https://colab.research.google.com/github/eunyul24/eunyul24.github.io/blob/master/B_DS2.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
from google.colab import drive
drive.mount('/content/drive')
import numpy as np
import csv
```
```
header = []
userId = []
movieId = []
ratings = []
test = []
rownum = -1
with open('/content/drive/My Drive/Colab Notebooks/ml-20m/ratings.csv','r') as f:
data = csv.reader(f)
for row in data:
rownum += 1
if rownum == 0:
header = row
continue
if int(row[3]) < 1388502017:
userId.append(int(row[0]))
movieId.append(int(row[1]))
ratings.append(float(row[2]))
else: test.append([int(row[0]), int(row[1]), float(row[2]), int(row[3])])
print(len(userId))
print(len(test))
```
```
userIdx = dict()
for i, uid in enumerate(np.unique(userId)):
userIdx[uid] = i
movieIdx = dict()
for i, mid in enumerate(np.unique(movieId)):
movieIdx[mid] = i
X = np.zeros((len(ratings),2), dtype=int)
for i in range(len(userId)):
X[i] = [userIdx[userId[i]], movieIdx[movieId[i]]]
```
```
class MatrixFactorization():
def __init__(self, ratings, X, k = 10, learning_rate = 0.01, reg_param = 0.1, epochs = 20):
"""
param R: ratings
param X: userId, movieId
param k: latent parameter
param learning_rate: alpha on weight update
param reg_param: beta on weight update
param epochs: training epochs
"""
self.ratings = ratings
self.X = X
self.num_users = len(np.unique(X[:, 0]))
self.num_movies = len(np.unique(X[:, 1]))
self.k = k
self.learning_rate = learning_rate
self.reg_param = reg_param
self.epochs = epochs
def fit(self):
"""
training Matrix Factorization : Update matrix latent weight and bias
return: training_process
"""
# init latent features
self.P = np.random.normal(size=(self.num_users, self.k))
self.Q = np.random.normal(size=(self.num_movies, self.k))
# init biases
self.b = np.mean(self.ratings)
self.b_P = np.zeros(self.num_users)
self.b_Q = np.zeros(self.num_movies)
# train while epochs
self.training_process = []
for epoch in range(self.epochs):
for i,rating in enumerate(self.ratings):
self.gradient_descent(self.X[i, 0], self.X[i, 1], rating)
rmse = self.rmse()
self.training_process.append((epoch,rmse))
# print status
if (epoch + 1) % 10 == 0:
print("Iteration: %d ; RMSE = %.4f" % (epoch + 1, rmse))
return self.training_process
def rmse(self):
"""
compute root mean square error
return: rmse cost
"""
        error = 0
        for i, rating in enumerate(self.ratings):
            error += pow(rating - self.get_prediction(self.X[i, 0], self.X[i, 1]), 2)
        return np.sqrt(error / len(self.ratings))  # divide by N so this is a true RMSE
def gradient_descent(self, i, j, rating):
"""
        gradient descent function
param i: user index of matrix
param j: item index of matrix
param rating: rating of (i,j)
"""
# get error
prediction = self.get_prediction(i, j)
error = rating - prediction
# update biases
self.b_P[i] += self.learning_rate * (error - self.reg_param * self.b_P[i])
self.b_Q[j] += self.learning_rate * (error - self.reg_param * self.b_Q[j])
# update latent feature
self.P[i, :] += self.learning_rate * (error * self.Q[j, :] - self.reg_param * self.P[i, :])
self.Q[j, :] += self.learning_rate * (error * self.P[i, :] - self.reg_param * self.Q[j, :])
def get_prediction(self, i, j):
"""
get predicted rating: user_i, item_j
return: prediction of r_ij
"""
return self.b + self.b_P[i] + self.b_Q[j] + self.P[i, :].dot(self.Q[j, :].T)
```
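For reference, the quantities updated in `fit` / `gradient_descent` above are those of biased matrix factorization trained with stochastic gradient descent:

$$\hat{r}_{ij} = \mu + b^{P}_i + b^{Q}_j + p_i \cdot q_j, \qquad e_{ij} = r_{ij} - \hat{r}_{ij}$$

$$b^{P}_i \leftarrow b^{P}_i + \alpha\,(e_{ij} - \beta\, b^{P}_i), \qquad p_i \leftarrow p_i + \alpha\,(e_{ij}\, q_j - \beta\, p_i)$$

with symmetric updates for $b^{Q}_j$ and $q_j$, where $\mu$ is the global mean rating (`self.b`), $\alpha$ is `learning_rate`, and $\beta$ is `reg_param`.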
```
MF = MatrixFactorization(ratings, X)
training_process = MF.fit()
print("train RMSE:", MF.rmse())
f = open('/content/drive/My Drive/Colab Notebooks/ml-20m/B_results_DS2.csv', 'w', encoding='utf-8')
header[2] = 'predicted rating'
wr = csv.writer(f)
wr.writerow(header)
error = 0
for uId, mId, rating, time in test:
if uId in userIdx.keys() and mId in movieIdx.keys():
predicted = MF.get_prediction(userIdx[uId], movieIdx[mId])
elif not uId in userIdx.keys() and mId in movieIdx.keys():
predicted = np.mean([ratings[i] for i in np.where(X[:, 1] == movieIdx[mId])[0]])
elif uId in userIdx.keys() and not mId in movieIdx.keys():
predicted = np.mean([ratings[i] for i in np.where(X[:, 0] == userIdx[uId])[0]])
else:
predicted = np.mean(ratings)
error += pow(rating - predicted, 2)
wr.writerow([uId, mId, predicted,time])
f.close()
print("test RMSE:", np.sqrt(error))
```
| github_jupyter |
# 911 Calls Capstone Project - Solutions
For this capstone project we will be analyzing some 911 call data from [Kaggle](https://www.kaggle.com/mchirico/montcoalert). The data contains the following fields:
* lat : String variable, Latitude
* lng: String variable, Longitude
* desc: String variable, Description of the Emergency Call
* zip: String variable, Zipcode
* title: String variable, Title
* timeStamp: String variable, YYYY-MM-DD HH:MM:SS
* twp: String variable, Township
* addr: String variable, Address
* e: String variable, Dummy variable (always 1)
Just go along with this notebook and try to complete the instructions or answer the questions in bold using your Python and Data Science skills!
___
* Import numpy and Pandas
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
```
* Import visualization libraries and set %matplotlib inline.
```
df = pd.read_csv('911.csv')
```
* Read in the csv file as a dataframe called df
```
df.dtypes
df.info()
df.head(3)
```
# Short Questions
* What are the bottom 5 zipcodes for 911 calls?
```
df['zip'].value_counts().tail(5)
df.head()
```
* What are the top 5 townships (twp) for 911 calls?
```
df['twp'].value_counts().head(5)
```
* Take a look at the 'title' column, how many unique title codes are there?
```
df['title'].nunique()
```
# Adding New Features
* In the titles column there are "Reasons/Departments" specified before the title code. These are EMS, Fire, and Traffic. Use .apply() with a custom lambda expression to create a new column called "Reason" that contains this string value.
* *For example, if the title column value is EMS: BACK PAINS/INJURY , the Reason column value would be EMS.*
```
df['Reason'] = df['title'].apply(lambda title: title.split(':')[0])
df.head()
```
* Most common Reason for a 911 call based off of this new column?
```
# df3 = df2.value_counts()
# df3.columns= 'count'
df['Reason'].value_counts()
sns.countplot(x='Reason',data=df,palette='viridis')
```
___
* Now let us begin to focus on time information. What is the data type of the objects in the timeStamp column?
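Before converting, a quick check confirms that the raw values are plain strings (pandas stores the column with `object` dtype):

```python
# The timeStamp values are strings until pd.to_datetime is applied
print(df['timeStamp'].dtype)          # object
print(type(df['timeStamp'].iloc[0]))  # <class 'str'>
```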
```
# Convert it to DateTime object
df['timeStamp'] = pd.to_datetime(df['timeStamp'])
df['Hour'] = df['timeStamp'].apply(lambda time: time.hour)
df['Month'] = df['timeStamp'].apply(lambda time: time.month)
df['Day of Week'] = df['timeStamp'].apply(lambda time: time.dayofweek)
# map Day of week column according to the days in a week
dmap = {0:'Mon',1:'Tue',2:'Wed',3:'Thu',4:'Fri',5:'Sat',6:'Sun'}
df['Day of Week'] = df['Day of Week'].map(dmap)
sns.countplot(x='Day of Week',data=df,hue='Reason',palette='viridis')
# To relocate the legend
plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
sns.countplot(x='Month',data=df,hue='Reason',palette='viridis')
# To relocate the legend
plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
```
* You should have noticed it was missing some months. Let's see if we can fill in this information by plotting it in another way, for example a simple line plot that fills in the missing months; to do this, we'll need to do some work with pandas...
* Now create a groupby object called byMonth, where you group the DataFrame by the month column and use the count() method for aggregation. Use the head() method on this returned DataFrame.
```
byMonth = df.groupby('Month').count()
byMonth.head()
# Simple line plot of any column of byMonth
byMonth['twp'].plot()
# Now see if you can use seaborn's lmplot() to create a linear fit
# on the number of calls per month. Keep in mind you
# may need to reset the index to a column.
sns.lmplot(x='Month',y='twp',data=byMonth.reset_index())
# Create a new column Date in the df
df['Date']=df['timeStamp'].apply(lambda t: t.date())
df.head()
```
* Now groupby this Date column with the count() aggregate and create a plot of counts of 911 calls.
```
# use .plot()
df.groupby('Date').count()['twp'].plot()
plt.tight_layout()
```
* Now recreate this plot but create 3 separate plots with each plot representing a Reason for the 911 call
```
# Traffic
df[df['Reason']=='Traffic'].groupby('Date').count()['twp'].plot()
plt.title('Traffic')
plt.tight_layout()
# Fire
df[df['Reason']=='Fire'].groupby('Date').count()['twp'].plot()
plt.title('Fire')
plt.tight_layout()
# EMS
df[df['Reason']=='EMS'].groupby('Date').count()['twp'].plot()
plt.title('EMS')
plt.tight_layout()
```
* Now let's move on to creating heatmaps with seaborn and our data. We'll first need to restructure the dataframe so that the columns become the Hours and the Index becomes the Day of the Week. There are lots of ways to do this, but I would recommend trying to combine groupby with an [unstack](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.unstack.html) method.
```
dayHour = df.groupby(by=['Day of Week','Hour']).count()['Reason'].unstack()
dayHour.head()
plt.figure(figsize=(12,6))
sns.heatmap(dayHour)
sns.clustermap(dayHour)
```
* Now repeat these same plots and operations, for a DataFrame that shows the Month as the column.
```
dayMonth = df.groupby(by=['Day of Week','Month']).count()['Reason'].unstack()
dayMonth.head()
plt.figure(figsize=(12,6))
sns.heatmap(dayMonth)
sns.clustermap(dayMonth)
```
# Excellent job!
Keep exploring data however you see fit
| github_jupyter |
# Let's Grow your Own Inner Core!
### Choose a model in the list:
- geodyn_trg.TranslationGrowthRotation()
- geodyn_static.Hemispheres()
### Choose a proxy type:
- age
- position
- phi
- theta
- growth rate
### set the parameters for the model : geodynModel.set_parameters(parameters)
### set the units : geodynModel.define_units()
### Choose a data set:
- data.SeismicFromFile(filename) # Lauren's data set
- data.RandomData(numbers_of_points)
- data.PerfectSamplingEquator(numbers_of_points)
organized on a cartesian grid. numbers_of_points is the number of points along the x or y axis. The total number of points is numbers_of_points**2*pi/4
- has a special plot function to show streamlines: plot_c_vec(self, modelgeodyn)
- data.PerfectSamplingEquatorRadial(Nr, Ntheta)
same as above, but organized on a polar grid, not a cartesian grid.
### Extract the info:
- calculate the proxy value for all points of the data set: geodyn.evaluate_proxy(data_set, geodynModel)
- extract the positions as numpy arrays: extract_rtp or extract_xyz
- calculate other variables: positions.angular_distance_to_point(t,p, t_point, p_point)
```
%matplotlib inline
# import statements
import numpy as np
import matplotlib.pyplot as plt #for figures
from mpl_toolkits.basemap import Basemap #to render maps
import math
import json #to write dict with parameters
from GrowYourIC import positions, geodyn, geodyn_trg, geodyn_static, plot_data, data
plt.rcParams['figure.figsize'] = (8.0, 3.0) #size of figures
cm = plt.cm.get_cmap('viridis')
cm2 = plt.cm.get_cmap('winter')
```
## Define the geodynamical model
Un-comment one of the model
```
## un-comment one of them
geodynModel = geodyn_trg.TranslationGrowthRotation() #can do all the models presented in the paper
# geodynModel = geodyn_static.Hemispheres() #this is a static model, only hemispheres.
```
Change the values of the parameters to get the model you want (here, parameters for .TranslationGrowthRotation())
```
age_ic_dim = 1e9 #in years
rICB_dim = 1221. #in km
v_g_dim = rICB_dim/age_ic_dim # in km/years #growth rate
print("Growth rate is {:.2e} km/years".format(v_g_dim))
v_g_dim_seconds = v_g_dim*1e3/(np.pi*1e7)
translation_velocity_dim = 0.8*v_g_dim_seconds#4e-10 #0.8*v_g_dim_seconds#4e-10 #m.s, value for today's Earth with Q_cmb = 10TW (see Alboussiere et al. 2010)
time_translation = rICB_dim*1e3/translation_velocity_dim /(np.pi*1e7)
maxAge = 2.*time_translation/1e6
print("The translation recycles the inner core material in {0:.2e} million years".format(maxAge))
print("Translation velocity is {0:.2e} km/years".format(translation_velocity_dim*np.pi*1e7/1e3))
units = None #we give them already dimensionless parameters.
rICB = 1.
age_ic = 1.
omega = 0.#0.5*np.pi/200e6*age_ic_dim#0.5*np.pi #0. #0.5*np.pi/200e6*age_ic_dim# 0.#0.5*np.pi#0.#0.5*np.pi/200e6*age_ic_dim #0. #-0.5*np.pi # Rotation rates has to be in ]-np.pi, np.pi[
print("Rotation rate is {:.2e}".format(omega))
velocity_amplitude = translation_velocity_dim*age_ic_dim*np.pi*1e7/rICB_dim/1e3
velocity_center = [0., 100.]#center of the eastern hemisphere
velocity = geodyn_trg.translation_velocity(velocity_center, velocity_amplitude)
exponent_growth = 1.#0.1#1
print(v_g_dim, velocity_amplitude, omega/age_ic_dim*180/np.pi*1e6)
```
Define a proxy type, and a proxy name (to be used in the figures to annotate the axes)
You can re-define it later if you want (or define another proxy_type2 if needed)
```
proxy_type = "age"#"growth rate"
proxy_name = "age (Myears)" #growth rate (km/Myears)"
proxy_lim = [0, maxAge] #or None
#proxy_lim = None
fig_name = "figures/test_" #to name the figures
print(rICB, age_ic, velocity_amplitude, omega, exponent_growth, proxy_type)
print(velocity)
```
### Parameters for the geodynamical model
This will input the different parameters in the model.
```
parameters = dict({'units': units,
'rICB': rICB,
'tau_ic':age_ic,
'vt': velocity,
'exponent_growth': exponent_growth,
'omega': omega,
'proxy_type': proxy_type})
geodynModel.set_parameters(parameters)
geodynModel.define_units()
param = parameters
param['vt'] = parameters['vt'].tolist() #for json serialization
# write file with parameters, readable with json, byt also human-readable
with open(fig_name+'parameters.json', 'w') as f:
json.dump(param, f)
print(parameters)
```
## Different data set and visualisations
### Perfect sampling at the equator (to visualise the flow lines)
You can add more points to get better precision.
```
npoints = 10 #number of points in the x direction for the data set.
data_set = data.PerfectSamplingEquator(npoints, rICB = 1.)
data_set.method = "bt_point"
proxy = geodyn.evaluate_proxy(data_set, geodynModel, proxy_type="age", verbose = False)
data_set.plot_c_vec(geodynModel, proxy=proxy, cm=cm, nameproxy="age (Myears)")
plt.savefig(fig_name+"equatorial_plot.pdf", bbox_inches='tight')
```
### Perfect sampling in the first 100km (to visualise the depth evolution)
```
data_meshgrid = data.Equator_upperpart(10,10)
data_meshgrid.method = "bt_point"
proxy_meshgrid = geodyn.evaluate_proxy(data_meshgrid, geodynModel, proxy_type=proxy_type, verbose = False)
#r, t, p = data_meshgrid.extract_rtp("bottom_turning_point")
fig3, ax3 = plt.subplots(figsize=(8, 2))
X, Y, Z = data_meshgrid.mesh_RPProxy(proxy_meshgrid)
sc = ax3.contourf(Y, rICB_dim*(1.-X), Z, 100, cmap=cm)
sc2 = ax3.contour(sc, levels=sc.levels[::15], colors = "k")
ax3.set_ylim(-0, 120)
fig3.gca().invert_yaxis()
ax3.set_xlim(-180,180)
cbar = fig3.colorbar(sc)
#cbar.set_clim(0, maxAge)
cbar.set_label(proxy_name)
ax3.set_xlabel("longitude")
ax3.set_ylabel("depth below ICB (km)")
plt.savefig(fig_name+"meshgrid.pdf", bbox_inches='tight')
npoints = 20 #number of points in the x direction for the data set.
data_set = data.PerfectSamplingSurface(npoints, rICB = 1., depth=0.01)
data_set.method = "bt_point"
proxy_surface = geodyn.evaluate_proxy(data_set, geodynModel, proxy_type=proxy_type, verbose = False)
#r, t, p = data_set.extract_rtp("bottom_turning_point")
X, Y, Z = data_set.mesh_TPProxy(proxy_surface)
## map
m, fig = plot_data.setting_map()
y, x = m(Y, X)
sc = m.contourf(y, x, Z, 30, cmap=cm, zorder=2, edgecolors='none')
plt.title("Dataset: {},\n geodynamic model: {}".format(data_set.name, geodynModel.name))
cbar = plt.colorbar(sc)
cbar.set_label(proxy_name)
fig.savefig(fig_name+"map_surface.pdf", bbox_inches='tight')
```
### Random data set, in the first 100km - bottom turning point only
#### Calculate the data
```
# random data set
data_set_random = data.RandomData(300)
data_set_random.method = "bt_point"
proxy_random = geodyn.evaluate_proxy(data_set_random, geodynModel, proxy_type=proxy_type, verbose=False)
data_path = "../GrowYourIC/data/"
geodynModel.data_path = data_path
if proxy_type == "age":
# ## domain size and Vp
proxy_random_size = geodyn.evaluate_proxy(data_set_random, geodynModel, proxy_type="domain_size", verbose=False)
proxy_random_dV = geodyn.evaluate_proxy(data_set_random, geodynModel, proxy_type="dV_V", verbose=False)
r, t, p = data_set_random.extract_rtp("bottom_turning_point")
dist = positions.angular_distance_to_point(t, p, *velocity_center)
## map
m, fig = plot_data.setting_map()
x, y = m(p, t)
sc = m.scatter(x, y, c=proxy_random,s=8, zorder=10, cmap=cm, edgecolors='none')
plt.title("Dataset: {},\n geodynamic model: {}".format(data_set_random.name, geodynModel.name))
cbar = plt.colorbar(sc)
cbar.set_label(proxy_name)
fig.savefig(fig_name+data_set_random.shortname+"_map.pdf", bbox_inches='tight')
## phi and distance plots
fig, ax = plt.subplots(2,2, figsize=(8.0, 5.0))
sc1 = ax[0,0].scatter(p, proxy_random, c=abs(t),s=3, cmap=cm2, vmin =-0, vmax =90, linewidth=0)
phi = np.linspace(-180,180, 50)
#analytic_equator = np.maximum(2*np.sin((phi-10)*np.pi/180.)*rICB_dim*1e3/translation_velocity_dim /(np.pi*1e7)/1e6,0.)
#ax[0,0].plot(phi,analytic_equator, 'r', linewidth=2)
ax[0,0].set_xlabel("longitude")
ax[0,0].set_ylabel(proxy_name)
if proxy_lim is not None:
ax[0,0].set_ylim(proxy_lim)
sc2 = ax[0,1].scatter(dist, proxy_random, c=abs(t), cmap=cm2, vmin=-0, vmax =90, s=3, linewidth=0)
ax[0,1].set_xlabel("angular distance to ({}, {})".format(*velocity_center))
phi = np.linspace(-90,90, 100)
if proxy_type == "age":
analytic_equator = np.maximum(2*np.sin((phi-10)*np.pi/180.)*rICB_dim*1e3/translation_velocity_dim /(np.pi*1e7)/1e6,0.)
ax[0,0].plot(phi,analytic_equator, 'r', linewidth=2)
analytic_equator = np.maximum(2*np.sin((-phi)*np.pi/180.)*rICB_dim*1e3/translation_velocity_dim /(np.pi*1e7)/1e6,0.)
ax[0,1].plot(phi+90,analytic_equator, 'r', linewidth=2)
ax[0,1].set_xlim([0,180])
ax[0,0].set_xlim([-180,180])
cbar = fig.colorbar(sc1)
cbar.set_label("longitude: abs(theta)")
if proxy_lim is not None:
ax[0,1].set_ylim(proxy_lim)
## figure with domain size and Vp
if proxy_type == "age":
sc3 = ax[1,0].scatter(dist, proxy_random_size, c=abs(t), cmap=cm2, vmin =-0, vmax =90, s=3, linewidth=0)
ax[1,0].set_xlabel("angular distance to ({}, {})".format(*velocity_center))
ax[1,0].set_ylabel("domain size (m)")
ax[1,0].set_xlim([0,180])
ax[1,0].set_ylim([0, 2500.000])
sc4 = ax[1,1].scatter(dist, proxy_random_dV, c=abs(t), cmap=cm2, vmin=-0, vmax =90, s=3, linewidth=0)
ax[1,1].set_xlabel("angular distance to ({}, {})".format(*velocity_center))
ax[1,1].set_ylabel("dV/V")
ax[1,1].set_xlim([0,180])
ax[1,1].set_ylim([-0.017, -0.002])
fig.savefig(fig_name +data_set_random.shortname+ '_long_dist.pdf', bbox_inches='tight')
fig, ax = plt.subplots(figsize=(8, 2))
sc=ax.scatter(p,rICB_dim*(1.-r), c=proxy_random, s=10,cmap=cm, linewidth=0)
ax.set_ylim(-0,120)
fig.gca().invert_yaxis()
ax.set_xlim(-180,180)
cbar = fig.colorbar(sc)
if proxy_lim is not None:
cbar.set_clim(0, maxAge)
ax.set_xlabel("longitude")
ax.set_ylabel("depth below ICB (km)")
cbar.set_label(proxy_name)
fig.savefig(fig_name+data_set_random.shortname+"_depth.pdf", bbox_inches='tight')
```
### Real Data set from Waszek paper
```
## real data set
data_set = data.SeismicFromFile("../GrowYourIC/data/WD11.dat")
data_set.method = "bt_point"
proxy2 = geodyn.evaluate_proxy(data_set, geodynModel, proxy_type=proxy_type, verbose=False)
if proxy_type == "age":
## domain size and DV/V
proxy_size = geodyn.evaluate_proxy(data_set, geodynModel, proxy_type="domain_size", verbose=False)
proxy_dV = geodyn.evaluate_proxy(data_set, geodynModel, proxy_type="dV_V", verbose=False)
r, t, p = data_set.extract_rtp("bottom_turning_point")
dist = positions.angular_distance_to_point(t, p, *velocity_center)
## map
m, fig = plot_data.setting_map()
x, y = m(p, t)
sc = m.scatter(x, y, c=proxy2,s=8, zorder=10, cmap=cm, edgecolors='none')
plt.title("Dataset: {},\n geodynamic model: {}".format(data_set.name, geodynModel.name))
cbar = plt.colorbar(sc)
cbar.set_label(proxy_name)
fig.savefig(fig_name+data_set.shortname+"_map.pdf", bbox_inches='tight')
## phi and distance plots
fig, ax = plt.subplots(2,2, figsize=(8.0, 5.0))
sc1 = ax[0,0].scatter(p, proxy2, c=abs(t),s=3, cmap=cm2, vmin =-0, vmax =90, linewidth=0)
phi = np.linspace(-180,180, 50)
#analytic_equator = np.maximum(2*np.sin((phi-10)*np.pi/180.)*rICB_dim*1e3/translation_velocity_dim /(np.pi*1e7)/1e6,0.)
#ax[0,0].plot(phi,analytic_equator, 'r', linewidth=2)
ax[0,0].set_xlabel("longitude")
ax[0,0].set_ylabel(proxy_name)
if proxy_lim is not None:
ax[0,0].set_ylim(proxy_lim)
sc2 = ax[0,1].scatter(dist, proxy2, c=abs(t), cmap=cm2, vmin=-0, vmax =90, s=3, linewidth=0)
ax[0,1].set_xlabel("angular distance to ({}, {})".format(*velocity_center))
phi = np.linspace(-90,90, 100)
if proxy_type == "age":
analytic_equator = np.maximum(2*np.sin((-phi)*np.pi/180.)*rICB_dim*1e3/translation_velocity_dim /(np.pi*1e7)/1e6,0.)
ax[0,1].plot(phi+90,analytic_equator, 'r', linewidth=2)
analytic_equator = np.maximum(2*np.sin((phi-10)*np.pi/180.)*rICB_dim*1e3/translation_velocity_dim /(np.pi*1e7)/1e6,0.)
ax[0,0].plot(phi,analytic_equator, 'r', linewidth=2)
ax[0,1].set_xlim([0,180])
ax[0,0].set_xlim([-180,180])
cbar = fig.colorbar(sc1)
cbar.set_label("longitude: abs(theta)")
if proxy_lim is not None:
ax[0,1].set_ylim(proxy_lim)
## figure with domain size and Vp
if proxy_type == "age":
sc3 = ax[1,0].scatter(dist, proxy_size, c=abs(t), cmap=cm2, vmin =-0, vmax =90, s=3, linewidth=0)
ax[1,0].set_xlabel("angular distance to ({}, {})".format(*velocity_center))
ax[1,0].set_ylabel("domain size (m)")
ax[1,0].set_xlim([0,180])
ax[1,0].set_ylim([0, 2500.000])
sc4 = ax[1,1].scatter(dist, proxy_dV, c=abs(t), cmap=cm2, vmin=-0, vmax =90, s=3, linewidth=0)
ax[1,1].set_xlabel("angular distance to ({}, {})".format(*velocity_center))
ax[1,1].set_ylabel("dV/V")
ax[1,1].set_xlim([0,180])
ax[1,1].set_ylim([-0.017, -0.002])
fig.savefig(fig_name + data_set.shortname+'_long_dist.pdf', bbox_inches='tight')
fig, ax = plt.subplots(figsize=(8, 2))
sc=ax.scatter(p,rICB_dim*(1.-r), c=proxy2, s=10,cmap=cm, linewidth=0)
ax.set_ylim(-0,120)
fig.gca().invert_yaxis()
ax.set_xlim(-180,180)
cbar = fig.colorbar(sc)
if proxy_lim is not None:
cbar.set_clim(0, maxAge)
ax.set_xlabel("longitude")
ax.set_ylabel("depth below ICB (km)")
cbar.set_label(proxy_name)
fig.savefig(fig_name+data_set.shortname+"_depth.pdf", bbox_inches='tight')
```
| github_jupyter |
<a href="https://colab.research.google.com/github/danzerzine/seospider-colab/blob/main/Running_screamingfrog_SEO_spider_in_Colab_notebook.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Running the Screaming Frog SEO Spider bot in the cloud via Google Colab
-------------
> *Protip: for a large site, the best fit is a High-RAM (25 GB) instance without GPU/TPU, available with the PRO subscription*
### Cosmetic improvement: add line wrapping for long single-line commands
```
from IPython.display import HTML, display
def set_css():
display(HTML('''
<style>
pre {
white-space: pre-wrap;
}
</style>
'''))
get_ipython().events.register('pre_run_cell', set_css)
```
### Mount the Google Drive that holds the bot's configs and where the crawl results will be saved
```
from google.colab import drive
drive.mount('/content/drive')
```
### Find out the instance's external IP
so that it can be added manually to the Cloudflare firewall exceptions -- otherwise we will quickly hit the rate limit and start being served the human-verification page
```
!wget -qO- http://ipecho.net/plain | xargs echo && wget -qO - icanhazip.com
```
### Install the latest version of the SEO Spider and do some housekeeping
* Update the installed Linux packages
* Copy the settings from the desktop version of the SEO Spider into the instance's local folder (this is needed to carry over the authorization tokens for Google Search Console, GA and so on)
```
#@title Settings directory on GDrive { vertical-output: true, display-mode: "both" }
settings_path = "" #@param {type:"string"}
!wget https://download.screamingfrog.co.uk/products/seo-spider/screamingfrogseospider_16.3_all.deb
!apt-get install -y ./screamingfrogseospider_16.3_all.deb
!sudo apt-get update && sudo apt-get upgrade -y
!mkdir -p ~/.ScreamingFrogSEOSpider
!cp -r $settings_path/* ~/.ScreamingFrogSEOSpider
```
### Run a bash script that finishes configuring the instance and the bot
It adds a virtual display for the Java output, switches the bot to storing crawl results on disk instead of in RAM, and so on.
```
!wget https://raw.githubusercontent.com/fili/screaming-frog-on-google-compute-engine/master/gce-sf.sh -O install.sh && chmod +x install.sh && source ./install.sh
```
### Symlink the hidden folder holding the bot's temporary files and settings
in case something needs to be edited or pulled out of it on the fly; otherwise it is not visible in the file browser on the left
```
!ln -s ~/.ScreamingFrogSEOSpider ~/ScreamingFrogSEOSpider
```
### Launch the bot in headless mode
passing all the required flags for exports, settings, reports, bulk exports and so on
```
#@title Crawl settings { vertical-output: true }
url_start = "" #@param {type:"string"}
use_gcs = "" #@param ["", "--use-google-search-console \"account \""] {allow-input: true}
config_path = "" #@param {type:"string"}
output_folder = "" #@param {type:"string"}
!screamingfrogseospider --crawl "$url_start" $use_gcs --headless --config "$config_path" --output-folder "$output_folder" --timestamped-output --save-crawl --export-tabs "Internal:All,Response Codes:All,Response Codes:Blocked by Robots.txt,Response Codes:Blocked Resource,Response Codes:No Response,Response Codes:Redirection (3xx),Response Codes:Redirection (JavaScript),Response Codes:Redirection (Meta Refresh),Response Codes:Client Error (4xx),Response Codes:Server Error (5xx),Page Titles:All,Page Titles:Missing,Page Titles:Duplicate,Page Titles:Over X Characters,Page Titles:Below X Characters,Page Titles:Over X Pixels,Page Titles:Below X Pixels,Page Titles:Same as H1,Page Titles:Multiple,Meta Description:All,Meta Description:Missing,Meta Description:Duplicate,Meta Description:Over X Characters,Meta Description:Below X Characters,Meta Description:Over X Pixels,Meta Description:Below X Pixels,Meta Description:Multiple,Meta Keywords:All,Meta Keywords:Missing,Meta Keywords:Duplicate,Meta Keywords:Multiple,Canonicals:All,Canonicals:Contains Canonical,Canonicals:Self Referencing,Canonicals:Canonicalised,Canonicals:Missing,Canonicals:Multiple,Canonicals:Non-Indexable Canonical,Directives:All,Directives:Index,Directives:Noindex,Directives:Follow,Directives:Nofollow,Directives:None,Directives:NoArchive,Directives:NoSnippet,Directives:Max-Snippet,Directives:Max-Image-Preview,Directives:Max-Video-Preview,Directives:NoODP,Directives:NoYDIR,Directives:NoImageIndex,Directives:NoTranslate,Directives:Unavailable_After,Directives:Refresh,AMP:All,AMP:Non-200 Response,AMP:Missing Non-AMP Return Link,AMP:Missing Canonical to Non-AMP,AMP:Non-Indexable Canonical,AMP:Indexable,AMP:Non-Indexable,AMP:Missing <html amp> Tag,AMP:Missing/Invalid <!doctype html> Tag,AMP:Missing <head> Tag,AMP:Missing <body> Tag,AMP:Missing Canonical,AMP:Missing/Invalid <meta charset> Tag,AMP:Missing/Invalid <meta viewport> Tag,AMP:Missing/Invalid AMP Script,AMP:Missing/Invalid AMP Boilerplate,AMP:Contains Disallowed HTML,AMP:Other Validation Errors,Structured Data:All,Structured Data:Contains Structured Data,Structured Data:Missing,Structured Data:Validation Errors,Structured Data:Validation Warnings,Structured Data:Parse Errors,Structured Data:Microdata URLs,Structured Data:JSON-LD URLs,Structured Data:RDFa URLs,Sitemaps:All,Sitemaps:URLs in Sitemap,Sitemaps:URLs not in Sitemap,Sitemaps:Orphan URLs,Sitemaps:Non-Indexable URLs in Sitemap,Sitemaps:URLs in Multiple Sitemaps,Sitemaps:XML Sitemap with over 50k URLs,Sitemaps:XML Sitemap over 50MB" --bulk-export "Canonicals:Contains Canonical Inlinks,Canonicals:Self Referencing Inlinks,Canonicals:Canonicalised Inlinks,Canonicals:Missing Inlinks,Canonicals:Multiple Inlinks,Canonicals:Non-Indexable Canonical Inlinks,AMP:All Inlinks,AMP:Non-200 Response Inlinks,AMP:Missing Non-AMP Return Link Inlinks,AMP:Missing Canonical to Non-AMP Inlinks,AMP:Non-Indexable Canonical Inlinks,AMP:Indexable Inlinks,AMP:Non-Indexable Inlinks,Structured Data:Contains Structured Data,Structured Data:Validation Errors,Structured Data:Validation Warnings,Structured Data:JSON-LD URLs,Structured Data:Microdata URLs,Structured Data:RDFa URLs,Sitemaps:URLs in Sitemap Inlinks,Sitemaps:Orphan URLs Inlinks,Sitemaps:Non-Indexable URLs in Sitemap Inlinks,Sitemaps:URLs in Multiple Sitemaps Inlinks" --save-report "Crawl Overview,Redirects:All Redirects,Redirects:Redirect Chains,Redirects:Redirect & Canonical Chains,Canonicals:Canonical Chains,Canonicals:Non-Indexable Canonicals,Pagination:Non-200 Pagination 
URLs,Pagination:Unlinked Pagination URLs,Hreflang:All hreflang URLs,Hreflang:Non-200 hreflang URLs,Hreflang:Unlinked hreflang URLs,Hreflang:Missing Return Links,Hreflang:Inconsistent Language & Region Return Links,Hreflang:Non Canonical Return Links,Hreflang:Noindex Return Links,Insecure Content,SERP Summary,Orphan Pages,Structured Data:Validation Errors & Warnings Summary,Structured Data:Validation Errors & Warnings,Structured Data:Google Rich Results Features Summary,Structured Data:Google Rich Results Features,HTTP Headers:HTTP Header Summary,Cookies:Cookie Summary" --export-format xlsx --export-custom-summary "Site Crawled,Date,Time,Total URLs Encountered,Total URLs Crawled,Total Internal blocked by robots.txt,Total External blocked by robots.txt,URLs Displayed,Total Internal URLs,Total External URLs,Total Internal Indexable URLs,Total Internal Non-Indexable URLs,JavaScript:All,JavaScript:Uses Old AJAX Crawling Scheme URLs,JavaScript:Uses Old AJAX Crawling Scheme Meta Fragment Tag,JavaScript:Page Title Only in Rendered HTML,JavaScript:Page Title Updated by JavaScript,JavaScript:H1 Only in Rendered HTML,JavaScript:H1 Updated by JavaScript,JavaScript:Meta Description Only in Rendered HTML,JavaScript:Meta Description Updated by JavaScript,JavaScript:Canonical Only in Rendered HTML,JavaScript:Canonical Mismatch,JavaScript:Noindex Only in Original HTML,JavaScript:Nofollow Only in Original HTML,JavaScript:Contains JavaScript Links,JavaScript:Contains JavaScript Content,JavaScript:Pages with Blocked Resources,H1:All,H1:Missing,H1:Duplicate,H1:Over X Characters,H1:Multiple,H2:All,H2:Missing,H2:Duplicate,H2:Over X Characters,H2:Multiple,Internal:All,Internal:HTML,Internal:JavaScript,Internal:CSS,Internal:Images,Internal:PDF,Internal:Flash,Internal:Other,Internal:Unknown,External:All,External:HTML,External:JavaScript,External:CSS,External:Images,External:PDF,External:Flash,External:Other,External:Unknown,AMP:All,AMP:Non-200 Response,AMP:Missing Non-AMP Return Link,AMP:Missing Canonical to Non-AMP,AMP:Non-Indexable Canonical,AMP:Indexable,AMP:Non-Indexable,AMP:Missing <html amp> Tag,AMP:Missing/Invalid <!doctype html> Tag,AMP:Missing <head> Tag,AMP:Missing <body> Tag,AMP:Missing Canonical,AMP:Missing/Invalid <meta charset> Tag,AMP:Missing/Invalid <meta viewport> Tag,AMP:Missing/Invalid AMP Script,AMP:Missing/Invalid AMP Boilerplate,AMP:Contains Disallowed HTML,AMP:Other Validation Errors,Canonicals:All,Canonicals:Contains Canonical,Canonicals:Self Referencing,Canonicals:Canonicalised,Canonicals:Missing,Canonicals:Multiple,Canonicals:Non-Indexable Canonical,Content:All,Content:Spelling Errors,Content:Grammar Errors,Content:Near Duplicates,Content:Exact Duplicates,Content:Low Content Pages,Custom Extraction:All,Custom Search:All,Directives:All,Directives:Index,Directives:Noindex,Directives:Follow,Directives:Nofollow,Directives:None,Directives:NoArchive,Directives:NoSnippet,Directives:Max-Snippet,Directives:Max-Image-Preview,Directives:Max-Video-Preview,Directives:NoODP,Directives:NoYDIR,Directives:NoImageIndex,Directives:NoTranslate,Directives:Unavailable_After,Directives:Refresh,Analytics:All,Analytics:Sessions Above 0,Analytics:Bounce Rate Above 70%,Analytics:No GA Data,Analytics:Non-Indexable with GA Data,Analytics:Orphan URLs,Search Console:All,Search Console:Clicks Above 0,Search Console:No GSC Data,Search Console:Non-Indexable with GSC Data,Search Console:Orphan URLs,Hreflang:All,Hreflang:Contains hreflang,Hreflang:Non-200 hreflang URLs,Hreflang:Unlinked hreflang URLs,Hreflang:Missing Return 
Links,Hreflang:Inconsistent Language & Region Return Links,Hreflang:Non-Canonical Return Links,Hreflang:Noindex Return Links,Hreflang:Incorrect Language & Region Codes,Hreflang:Multiple Entries,Hreflang:Missing Self Reference,Hreflang:Not Using Canonical,Hreflang:Missing X-Default,Hreflang:Missing,Images:All,Images:Over X KB,Images:Missing Alt Text,Images:Missing Alt Attribute,Images:Alt Text Over X Characters,Link Metrics:All,Meta Description:All,Meta Description:Missing,Meta Description:Duplicate,Meta Description:Over X Characters,Meta Description:Below X Characters,Meta Description:Over X Pixels,Meta Description:Below X Pixels,Meta Description:Multiple,Meta Keywords:All,Meta Keywords:Missing,Meta Keywords:Duplicate,Meta Keywords:Multiple,PageSpeed:All,PageSpeed:Eliminate Render-Blocking Resources,PageSpeed:Defer Offscreen Images,PageSpeed:Efficiently Encode Images,PageSpeed:Properly Size Images,PageSpeed:Minify CSS,PageSpeed:Minify JavaScript,PageSpeed:Reduce Unused CSS,PageSpeed:Reduce Unused JavaScript,PageSpeed:Serve Images in Next-Gen Formats,PageSpeed:Enable Text Compression,PageSpeed:Preconnect to Required Origins,PageSpeed:Reduce Server Response Times (TTFB),PageSpeed:Avoid Multiple Page Redirects,PageSpeed:Preload Key Requests,PageSpeed:Use Video Formats for Animated Content,PageSpeed:Avoid Excessive DOM Size,PageSpeed:Reduce JavaScript Execution Time,PageSpeed:Serve Static Assets with an Efficient Cache Policy,PageSpeed:Minimize Main-Thread Work,PageSpeed:Ensure Text Remains Visible During Webfont Load,PageSpeed:Image Elements Do Not Have Explicit Width & Height,PageSpeed:Avoid Large Layout Shifts,PageSpeed:Avoid Serving Legacy JavaScript to Modern Browsers,PageSpeed:Request Errors,Pagination:All,Pagination:Contains Pagination,Pagination:First Page,Pagination:Paginated 2+ Pages,Pagination:Pagination URL Not in Anchor Tag,Pagination:Non-200 Pagination URLs,Pagination:Unlinked Pagination URLs,Pagination:Non-Indexable,Pagination:Multiple Pagination URLs,Pagination:Pagination Loop,Pagination:Sequence Error,Response Codes:All,Response Codes:Blocked by Robots.txt,Response Codes:Blocked Resource,Response Codes:No Response,Response Codes:Success (2xx),Response Codes:Redirection (3xx),Response Codes:Redirection (JavaScript),Response Codes:Redirection (Meta Refresh),Response Codes:Client Error (4xx),Response Codes:Server Error (5xx),Security:All,Security:HTTP URLs,Security:HTTPS URLs,Security:Mixed Content,Security:Form URL Insecure,Security:Form on HTTP URL,Security:Unsafe Cross-Origin Links,Security:Missing HSTS Header,Security:Bad Content Type,Security:Missing X-Content-Type-Options Header,Security:Missing X-Frame-Options Header,Security:Protocol-Relative Resource Links,Security:Missing Content-Security-Policy Header,Security:Missing Secure Referrer-Policy Header,Sitemaps:All,Sitemaps:URLs in Sitemap,Sitemaps:URLs not in Sitemap,Sitemaps:Orphan URLs,Sitemaps:Non-Indexable URLs in Sitemap,Sitemaps:URLs in Multiple Sitemaps,Sitemaps:XML Sitemap with over 50k URLs,Sitemaps:XML Sitemap over 50MB,Structured Data:All,Structured Data:Contains Structured Data,Structured Data:Missing,Structured Data:Validation Errors,Structured Data:Validation Warnings,Structured Data:Parse Errors,Structured Data:Microdata URLs,Structured Data:JSON-LD URLs,Structured Data:RDFa URLs,Page Titles:All,Page Titles:Missing,Page Titles:Duplicate,Page Titles:Over X Characters,Page Titles:Below X Characters,Page Titles:Over X Pixels,Page Titles:Below X Pixels,Page Titles:Same as H1,Page Titles:Multiple,URL:All,URL:Non 
ASCII Characters,URL:Underscores,URL:Uppercase,URL:Parameters,URL:Over X Characters,URL:Multiple Slashes,URL:Repetitive Path,URL:Contains Space,URL:Broken Bookmark,URL:Internal Search,Depth 1,Depth 2,Depth 3,Depth 4,Depth 5,Depth 6,Depth 7,Depth 8,Depth 9,Depth 10+,Top Inlinks 1 URL,Top Inlinks 1 Number of Inlinks,Top Inlinks 2 URL,Top Inlinks 2 Number of Inlinks,Top Inlinks 3 URL,Top Inlinks 3 Number of Inlinks,Top Inlinks 4 URL,Top Inlinks 4 Number of Inlinks,Top Inlinks 5 URL,Top Inlinks 5 Number of Inlinks,Top Inlinks 6 URL,Top Inlinks 6 Number of Inlinks,Top Inlinks 7 URL,Top Inlinks 7 Number of Inlinks,Top Inlinks 8 URL,Top Inlinks 8 Number of Inlinks,Top Inlinks 9 URL,Top Inlinks 9 Number of Inlinks,Top Inlinks 10 URL,Top Inlinks 10 Number of Inlinks,Top Inlinks 11 URL,Top Inlinks 11 Number of Inlinks,Top Inlinks 12 URL,Top Inlinks 12 Number of Inlinks,Top Inlinks 13 URL,Top Inlinks 13 Number of Inlinks,Top Inlinks 14 URL,Top Inlinks 14 Number of Inlinks,Top Inlinks 15 URL,Top Inlinks 15 Number of Inlinks,Top Inlinks 16 URL,Top Inlinks 16 Number of Inlinks,Top Inlinks 17 URL,Top Inlinks 17 Number of Inlinks,Top Inlinks 18 URL,Top Inlinks 18 Number of Inlinks,Top Inlinks 19 URL,Top Inlinks 19 Number of Inlinks,Top Inlinks 20 URL,Top Inlinks 20 Number of Inlinks,Response Times 0s to 1s,Response Times 1s to 2s,Response Times 2s to 3s,Response Times 3s to 4s,Response Times 4s to 5s,Response Times 5s to 6s,Response Times 6s to 7s,Response Times 7s to 8s,Response Times 8s to 9s,Response Times 10s or more"
```
# ✦ *Colab Still Alive Console Script:*
<p><font size=2px color="red">Tip - Set a JavaScript interval to click the connect button every 60 seconds. Open the developer tools in your web browser with Ctrl+Shift+I (on Mac press Option+Command+I), switch to the Console tab, and paste the script at the console prompt.</font></p><b>Copy the script from the hidden cell and paste it into your browser console. DO NOT CLOSE YOUR BROWSER, OTHERWISE THE SCRIPT WILL STOP RUNNING.</b>
<code>function ClickConnect(){
console.log("Working");
document.querySelector("colab-connect-button").click()
}setInterval(ClickConnect,60000)</code>
# *What you get in the end*
Ideally, the output is a folder named with the crawl date
containing the following exports in Excel format
**Tabs**:
```
Internal:All
Response Codes:All
Response Codes:Blocked by Robots.txt
Response Codes:Blocked Resource
Response Codes:No Response
Response Codes:Redirection (3xx)
Response Codes:Redirection (JavaScript)
Response Codes:Redirection (Meta Refresh)
Response Codes:Client Error (4xx)
Response Codes:Server Error (5xx)
Page Titles:All
Page Titles:Missing
Page Titles:Duplicate
Page Titles:Over X Characters
Page Titles:Below X Characters
Page Titles:Over X Pixels
Page Titles:Below X Pixels
Page Titles:Same as H1
Page Titles:Multiple
Meta Description:All
Meta Description:Missing
Meta Description:Duplicate
Meta Description:Over X Characters
Meta Description:Below X Characters
Meta Description:Over X Pixels
Meta Description:Below X Pixels
Meta Description:Multiple
Meta Keywords:All
Meta Keywords:Missing
Meta Keywords:Duplicate
Meta Keywords:Multiple
Canonicals:All
Canonicals:Contains Canonical
Canonicals:Self Referencing
Canonicals:Canonicalised
Canonicals:Missing
Canonicals:Multiple
Canonicals:Non-Indexable Canonical
Directives:All
Directives:Index
Directives:Noindex
Directives:Follow
Directives:Nofollow
Directives:None
Directives:NoArchive
Directives:NoSnippet
Directives:Max-Snippet
Directives:Max-Image-Preview
Directives:Max-Video-Preview
Directives:NoODP
Directives:NoYDIR
Directives:NoImageIndex
Directives:NoTranslate
Directives:Unavailable_After
Directives:Refresh
AMP:All
AMP:Non-200 Response
AMP:Missing Non-AMP Return Link
AMP:Missing Canonical to Non-AMP
AMP:Non-Indexable Canonical
AMP:Indexable
AMP:Non-Indexable
AMP:Missing <html amp> Tag
AMP:Missing/Invalid <!doctype html> Tag
AMP:Missing <head> Tag
AMP:Missing <body> Tag
AMP:Missing Canonical
AMP:Missing/Invalid <meta charset> Tag
AMP:Missing/Invalid <meta viewport> Tag
AMP:Missing/Invalid AMP Script
AMP:Missing/Invalid AMP Boilerplate
AMP:Contains Disallowed HTML
AMP:Other Validation Errors
Structured Data:All
Structured Data:Contains Structured Data
Structured Data:Missing
Structured Data:Validation Errors
Structured Data:Validation Warnings
Structured Data:Parse Errors
Structured Data:Microdata URLs
Structured Data:JSON-LD URLs
Structured Data:RDFa URLs
Sitemaps:All
Sitemaps:URLs in Sitemap
Sitemaps:URLs not in Sitemap
Sitemaps:Orphan URLs
Sitemaps:Non-Indexable URLs in Sitemap
Sitemaps:URLs in Multiple Sitemaps
Sitemaps:XML Sitemap with over 50k URLs
Sitemaps:XML Sitemap over 50MB
```
**Bulk Exports**:
```
Canonicals:Contains Canonical Inlinks
Canonicals:Self Referencing Inlinks
Canonicals:Canonicalised Inlinks
Canonicals:Missing Inlinks
Canonicals:Multiple Inlinks
Canonicals:Non-Indexable Canonical Inlinks
AMP:All Inlinks
AMP:Non-200 Response Inlinks
AMP:Missing Non-AMP Return Link Inlinks
AMP:Missing Canonical to Non-AMP Inlinks
AMP:Non-Indexable Canonical Inlinks
AMP:Indexable Inlinks
AMP:Non-Indexable Inlinks
Structured Data:Contains Structured Data
Structured Data:Validation Errors
Structured Data:Validation Warnings
Structured Data:JSON-LD URLs
Structured Data:Microdata URLs
Structured Data:RDFa URLs
Sitemaps:URLs in Sitemap Inlinks
Sitemaps:Orphan URLs Inlinks
Sitemaps:Non-Indexable URLs in Sitemap Inlinks
Sitemaps:URLs in Multiple Sitemaps Inlinks
```
**Reports**:
```
Crawl Overview
Redirects:All Redirects
Redirects:Redirect Chains
Redirects:Redirect & Canonical Chains
Canonicals:Canonical Chains
Canonicals:Non-Indexable Canonicals
Pagination:Non-200 Pagination URLs
Pagination:Unlinked Pagination URLs
Hreflang:All hreflang URLs
Hreflang:Non-200 hreflang URLs
Hreflang:Unlinked hreflang URLs
Hreflang:Missing Return Links
Hreflang:Inconsistent Language & Region Return Links
Hreflang:Non Canonical Return Links
Hreflang:Noindex Return Links
Insecure Content
SERP Summary
Orphan Pages
Structured Data:Validation Errors & Warnings Summary
Structured Data:Validation Errors & Warnings
Structured Data:Google Rich Results Features Summary
Structured Data:Google Rich Results Features
HTTP Headers:HTTP Header Summary
Cookies:Cookie Summary
```
**Summary**:
```
Site Crawled
Date
Time
Total URLs Encountered
Total URLs Crawled
Total Internal blocked by robots.txt
Total External blocked by robots.txt
URLs Displayed
Total Internal URLs
Total External URLs
Total Internal Indexable URLs
Total Internal Non-Indexable URLs
JavaScript:All
JavaScript:Uses Old AJAX Crawling Scheme URLs
JavaScript:Uses Old AJAX Crawling Scheme Meta Fragment Tag
JavaScript:Page Title Only in Rendered HTML
JavaScript:Page Title Updated by JavaScript
JavaScript:H1 Only in Rendered HTML
JavaScript:H1 Updated by JavaScript
JavaScript:Meta Description Only in Rendered HTML
JavaScript:Meta Description Updated by JavaScript
JavaScript:Canonical Only in Rendered HTML
JavaScript:Canonical Mismatch
JavaScript:Noindex Only in Original HTML
JavaScript:Nofollow Only in Original HTML
JavaScript:Contains JavaScript Links
JavaScript:Contains JavaScript Content
JavaScript:Pages with Blocked Resources
H1:All
H1:Missing
H1:Duplicate
H1:Over X Characters
H1:Multiple
H2:All
H2:Missing
H2:Duplicate
H2:Over X Characters
H2:Multiple
Internal:All
Internal:HTML
Internal:JavaScript
Internal:CSS
Internal:Images
Internal:PDF
Internal:Flash
Internal:Other
Internal:Unknown
External:All
External:HTML
External:JavaScript
External:CSS
External:Images
External:PDF
External:Flash
External:Other
External:Unknown
AMP:All
AMP:Non-200 Response
AMP:Missing Non-AMP Return Link
AMP:Missing Canonical to Non-AMP
AMP:Non-Indexable Canonical
AMP:Indexable
AMP:Non-Indexable
AMP:Missing <html amp> Tag
AMP:Missing/Invalid <!doctype html> Tag
AMP:Missing <head> Tag
AMP:Missing <body> Tag
AMP:Missing Canonical
AMP:Missing/Invalid <meta charset> Tag
AMP:Missing/Invalid <meta viewport> Tag
AMP:Missing/Invalid AMP Script
AMP:Missing/Invalid AMP Boilerplate
AMP:Contains Disallowed HTML
AMP:Other Validation Errors
Canonicals:All
Canonicals:Contains Canonical
Canonicals:Self Referencing
Canonicals:Canonicalised
Canonicals:Missing
Canonicals:Multiple
Canonicals:Non-Indexable Canonical
Content:All
Content:Spelling Errors
Content:Grammar Errors
Content:Near Duplicates
Content:Exact Duplicates
Content:Low Content Pages
Custom Extraction:All
Custom Search:All
Directives:All
Directives:Index
Directives:Noindex
Directives:Follow
Directives:Nofollow
Directives:None
Directives:NoArchive
Directives:NoSnippet
Directives:Max-Snippet
Directives:Max-Image-Preview
Directives:Max-Video-Preview
Directives:NoODP
Directives:NoYDIR
Directives:NoImageIndex
Directives:NoTranslate
Directives:Unavailable_After
Directives:Refresh
Analytics:All
Analytics:Sessions Above 0
Analytics:Bounce Rate Above 70%
Analytics:No GA Data
Analytics:Non-Indexable with GA Data
Analytics:Orphan URLs
Search Console:All
Search Console:Clicks Above 0
Search Console:No GSC Data
Search Console:Non-Indexable with GSC Data
Search Console:Orphan URLs
Hreflang:All
Hreflang:Contains hreflang
Hreflang:Non-200 hreflang URLs
Hreflang:Unlinked hreflang URLs
Hreflang:Missing Return Links
Hreflang:Inconsistent Language & Region Return Links
Hreflang:Non-Canonical Return Links
Hreflang:Noindex Return Links
Hreflang:Incorrect Language & Region Codes
Hreflang:Multiple Entries
Hreflang:Missing Self Reference
Hreflang:Not Using Canonical
Hreflang:Missing X-Default
Hreflang:Missing
Images:All
Images:Over X KB
Images:Missing Alt Text
Images:Missing Alt Attribute
Images:Alt Text Over X Characters
Link Metrics:All
Meta Description:All
Meta Description:Missing
Meta Description:Duplicate
Meta Description:Over X Characters
Meta Description:Below X Characters
Meta Description:Over X Pixels
Meta Description:Below X Pixels
Meta Description:Multiple
Meta Keywords:All
Meta Keywords:Missing
Meta Keywords:Duplicate
Meta Keywords:Multiple
PageSpeed:All
PageSpeed:Eliminate Render-Blocking Resources
PageSpeed:Defer Offscreen Images
PageSpeed:Efficiently Encode Images
PageSpeed:Properly Size Images
PageSpeed:Minify CSS
PageSpeed:Minify JavaScript
PageSpeed:Reduce Unused CSS
PageSpeed:Reduce Unused JavaScript
PageSpeed:Serve Images in Next-Gen Formats
PageSpeed:Enable Text Compression
PageSpeed:Preconnect to Required Origins
PageSpeed:Reduce Server Response Times (TTFB)
PageSpeed:Avoid Multiple Page Redirects
PageSpeed:Preload Key Requests
PageSpeed:Use Video Formats for Animated Content
PageSpeed:Avoid Excessive DOM Size
PageSpeed:Reduce JavaScript Execution Time
PageSpeed:Serve Static Assets with an Efficient Cache Policy
PageSpeed:Minimize Main-Thread Work
PageSpeed:Ensure Text Remains Visible During Webfont Load
PageSpeed:Image Elements Do Not Have Explicit Width & Height
PageSpeed:Avoid Large Layout Shifts
PageSpeed:Avoid Serving Legacy JavaScript to Modern Browsers
PageSpeed:Request Errors
Pagination:All
Pagination:Contains Pagination
Pagination:First Page
Pagination:Paginated 2+ Pages
Pagination:Pagination URL Not in Anchor Tag
Pagination:Non-200 Pagination URLs
Pagination:Unlinked Pagination URLs
Pagination:Non-Indexable
Pagination:Multiple Pagination URLs
Pagination:Pagination Loop
Pagination:Sequence Error
Response Codes:All
Response Codes:Blocked by Robots.txt
Response Codes:Blocked Resource
Response Codes:No Response
Response Codes:Success (2xx)
Response Codes:Redirection (3xx)
Response Codes:Redirection (JavaScript)
Response Codes:Redirection (Meta Refresh)
Response Codes:Client Error (4xx)
Response Codes:Server Error (5xx)
Security:All
Security:HTTP URLs
Security:HTTPS URLs
Security:Mixed Content
Security:Form URL Insecure
Security:Form on HTTP URL
Security:Unsafe Cross-Origin Links
Security:Missing HSTS Header
Security:Bad Content Type
Security:Missing X-Content-Type-Options Header
Security:Missing X-Frame-Options Header
Security:Protocol-Relative Resource Links
Security:Missing Content-Security-Policy Header
Security:Missing Secure Referrer-Policy Header
Sitemaps:All
Sitemaps:URLs in Sitemap
Sitemaps:URLs not in Sitemap
Sitemaps:Orphan URLs
Sitemaps:Non-Indexable URLs in Sitemap
Sitemaps:URLs in Multiple Sitemaps
Sitemaps:XML Sitemap with over 50k URLs
Sitemaps:XML Sitemap over 50MB
Structured Data:All
Structured Data:Contains Structured Data
Structured Data:Missing
Structured Data:Validation Errors
Structured Data:Validation Warnings
Structured Data:Parse Errors
Structured Data:Microdata URLs
Structured Data:JSON-LD URLs
Structured Data:RDFa URLs
Page Titles:All
Page Titles:Missing
Page Titles:Duplicate
Page Titles:Over X Characters
Page Titles:Below X Characters
Page Titles:Over X Pixels
Page Titles:Below X Pixels
Page Titles:Same as H1
Page Titles:Multiple
URL:All
URL:Non ASCII Characters
URL:Underscores
URL:Uppercase
URL:Parameters
URL:Over X Characters
URL:Multiple Slashes
URL:Repetitive Path
URL:Contains Space
URL:Broken Bookmark
URL:Internal Search
Depth 1
Depth 2
Depth 3
Depth 4
Depth 5
Depth 6
Depth 7
Depth 8
Depth 9
Depth 10+
Top Inlinks 1 URL
Top Inlinks 1 Number of Inlinks
Top Inlinks 2 URL
Top Inlinks 2 Number of Inlinks
Top Inlinks 3 URL
Top Inlinks 3 Number of Inlinks
Top Inlinks 4 URL
Top Inlinks 4 Number of Inlinks
Top Inlinks 5 URL
Top Inlinks 5 Number of Inlinks
Top Inlinks 6 URL
Top Inlinks 6 Number of Inlinks
Top Inlinks 7 URL
Top Inlinks 7 Number of Inlinks
Top Inlinks 8 URL
Top Inlinks 8 Number of Inlinks
Top Inlinks 9 URL
Top Inlinks 9 Number of Inlinks
Top Inlinks 10 URL
Top Inlinks 10 Number of Inlinks
Top Inlinks 11 URL
Top Inlinks 11 Number of Inlinks
Top Inlinks 12 URL
Top Inlinks 12 Number of Inlinks
Top Inlinks 13 URL
Top Inlinks 13 Number of Inlinks
Top Inlinks 14 URL
Top Inlinks 14 Number of Inlinks
Top Inlinks 15 URL
Top Inlinks 15 Number of Inlinks
Top Inlinks 16 URL
Top Inlinks 16 Number of Inlinks
Top Inlinks 17 URL
Top Inlinks 17 Number of Inlinks
Top Inlinks 18 URL
Top Inlinks 18 Number of Inlinks
Top Inlinks 19 URL
Top Inlinks 19 Number of Inlinks
Top Inlinks 20 URL
Top Inlinks 20 Number of Inlinks
Response Times 0s to 1s
Response Times 1s to 2s
Response Times 2s to 3s
Response Times 3s to 4s
Response Times 4s to 5s
Response Times 5s to 6s
Response Times 6s to 7s
Response Times 7s to 8s
Response Times 8s to 9s
Response Times 10s or more
```
----
### Bayes' theorem
- A formula for computing a conditional probability given the data as the condition
- $P(A|B) = \frac{P(B|A)P(A)}{P(B)}$
----
- $P(A|B)$ : posterior. The updated probability of event A after event B has occurred
- $P(A)$ : prior. The probability of event A held before event B occurs
- $P(B|A)$ : likelihood. The probability of event B given that event A has occurred
- $P(B)$ : normalizing constant, or evidence. Rescales the probability to the right magnitude
---
#### Bayes' theorem, extension 1
- $P(A_1|B)$
$= \frac{P(B|A_1)P(A_1)}{P(B)}$
$= \frac{P(B|A_1)P(A_1)}{\sum_i P(A_i,B)}$
$= \frac{P(B|A_1)P(A_1)}{\sum_i P(B|A_i)P(A_i)}$
- When comparing $P(A_i|B)$ across different values of $i$, only the numerators need to be compared, since the denominator is the same
---
#### Advantages and disadvantages for classification
- Advantage: if the top-ranked answer is wrong, the 2nd- and 3rd-ranked answers are also available.
- Disadvantage: to handle 4 classes, 4 posterior probabilities have to be computed.
---
#### The case $A_1 = A,\ A_2 = A^\complement$
- $P(A|B)$
$ = \frac{P(B|A)P(A)}{P(B)}$
$ = \frac{P(B|A)P(A)}{P(B,A)+P(B,A^\complement)}$
$ = \frac{P(B|A)P(A)}{P(B|A)P(A) + P(B|A^\complement)P(A^\complement)}$
$ = \frac{P(B|A)P(A)}{P(B|A)P(A)+P(B|A^\complement)(1-P(A))}$
- This is the binary classification problem
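As an illustrative sketch (not part of the original notes), the binary form above can be wrapped in a small helper function; the function name and argument names here are my own:
```
def posterior(p_b_given_a, p_a, p_b_given_not_a):
    """Return P(A|B) for the two-class case: A vs. its complement."""
    numerator = p_b_given_a * p_a
    return numerator / (numerator + p_b_given_not_a * (1 - p_a))

# Numbers from the diagnostic test example below: P(S|D)=0.99, P(D)=0.002, P(S|D^c)=0.05
print(round(posterior(0.99, 0.002, 0.05), 3))  # 0.038
```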
---
### The diagnostic test problem
1) Events
- Having the disease: D
- Testing positive: S
- A person with the disease testing positive: S|D
- A person who tested positive having the disease: D|S
2) Problem
- Given $P(S|D) = 0.99$, find $P(D|S)$.
----
#### By Bayes' theorem
- $P(D|S) = \frac{P(S|D)P(D)}{P(S)}$
- This cannot be evaluated yet, because $P(S)$ and $P(D)$ are unknown.
----
3) Additional information
- The disease is rare: only 0.2% of the whole population has it.
: $P(D) = 0.002$
- When a person who does not have the disease takes the test, the probability of a positive result is 5%.
: $P(S|D^\complement) = 0.05$
---
#### By the extension of Bayes' theorem
- $P(D|S)$
$= \frac{P(S|D)P(D)}{P(S)}$
$ = \frac{P(S|D)P(D)}{P(S,D)+P(S,D^\complement)} $
$ = \frac{P(S|D)P(D)}{P(S|D)P(D)+P(S|D^\complement)P(D^\complement)}$
$ = \frac{P(S|D)P(D)}{P(S|D)P(D)+P(S|D^\complement)(1-P(D))}$
$ = \frac{0.99\cdot 0.002}{0.99\cdot 0.002+0.05\cdot (1-0.002)}$
$ = 0.038$
```
round((0.99*0.002) / (0.99*0.002 + 0.05*(1-0.002)), 3)
```
----
#### TabularCPD(variable, variable_card, values, evidence=None, evidence_card=None)
- BayesianModel : applies Bayes' theorem
- TabularCPD : implements a conditional probability distribution
----
- variable : name string of the random variable
- variable_card : number of values the random variable can take
- values : array of conditional probabilities. Each column corresponds to one condition, so each column must sum to 1.
- evidence : list of name strings of the conditioning random variables
- evidence_card : list of the numbers of values the conditioning random variables can take
For an unconditional (marginal) probability: evidence = None, evidence_card = None
#### Prior probability of having the disease: $P(D) = P(X=1)$; prior probability of not having it: $P(D^\complement) = P(X = 0)$
```
from pgmpy.factors.discrete import TabularCPD
cpd_X = TabularCPD('X', 2, [[1-0.002, 0.002]])
print(cpd_X)
```
#### Probability of a positive result $P(S) = P(Y = 1)$, probability of a negative result $P(S^\complement) = P(Y=0)$
- When the probabilities for the random variable $Y$ are put into the Bayesian model, they must be given in the form $P(Y|X)$.
- evidence : which random variable acts as the condition
- evidence_card : how many values that conditioning variable can take
```
import numpy as np

cpd_Y_on_X = TabularCPD('Y', 2, np.array(
[[0.95, 0.01], [0.05, 0.99]]), evidence=['X'], evidence_card=[2])
print(cpd_Y_on_X)
from pgmpy.models import BayesianModel
```
#### BayesianModel(variables)
- variables : list of name strings of the random variables included in the probabilistic model
- add_cpds() : adds conditional probability distributions to the model
- check_model() : checks that the model is consistent; True means the model is valid
```
model = BayesianModel([('X','Y')])
model.add_cpds(cpd_X,cpd_Y_on_X)
model.check_model()
from pgmpy.inference import VariableElimination
```
#### VariableElimination provides inference based on the variable elimination method
#### query(variables, evidence)
- query() computes the posterior probability
----
- variables : list of names of the random variables whose posterior probabilities are computed
- evidence : dictionary giving the observed values of the conditioning random variables
```
inference = VariableElimination(model)
posterior = inference.query(['X'], evidence={'Y':1})
print(posterior)
```
----
#### Bayes' theorem, extension 2
- Bayes' theorem computes the probability of event A updated by the occurrence of event B.
- In extension 2, an additional event C occurs on top of this.
- $P(A|B,C) = \frac{P(C|A,B)P(A|B)}{P(C|B)}$
----
### The Monty Hall problem
- Definition of the random variables
1) C : the door hiding the car, taking values 0, 1, 2
2) X : the door chosen by the contestant, taking values 0, 1, 2
3) H : the door opened by the host, taking values 0, 1, 2
---
##### The problem asks for a conditional probability: the location of the car, conditioned on the actions of the contestant and the host
FACTS
1) The host who places the car cannot predict the contestant's choice, and the contestant cannot see the car, so the car's location and the contestant's choice are independent
- $P(C,X) = P(C)P(X)$
2) Which door the host opens depends on the location of the car and on the contestant's choice.
- $P(H_0|C_0,X_1) = 0$
- $P(H_1|C_0,X_1) = 0$
- $P(H_2|C_0,X_1) = 1$
----
- If the contestant picks door 1 and the host opens door 2 to show there is no car behind it, the probability that the car is behind door 0 is
$P(C_0|X_1,H_2) = \frac{2}{3}$
$ = \frac{P(C_0,X_1,H_2)}{P(X_1,H_2)}$
$ = \frac{P(H_2|C_0,X_1)P(C_0,X_1)}{P(X_1,H_2)}$
$ = \frac{P(C_0)P(X_1)}{P(H_2|X_1)P(X_1)}$
$ = \frac{P(C_0)}{P(H_2|X_1)}$
$ = \frac{P(C_0)}{P(H_2,C_0|X_1)+P(H_2,C_1|X_1)+P(H_2,C_2|X_1)}$
$ = \frac{P(C_0)}{P(H_2|X_1,C_0)P(C_0)+P(H_2|X_1,C_1)P(C_1)+P(H_2|X_1,C_2)P(C_2)}$
$ = \frac{\frac{1}{3}}{1\cdot \frac{1}{3} + \frac{1}{2}\cdot \frac{1}{3}+0\cdot \frac{1}{3}}$
$ = \frac{2}{3}$
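As a sanity check (this simulation is my own addition, not part of the lecture), a short Monte Carlo run confirms that the car is behind door 0 about 2/3 of the time in the situation above, i.e. switching wins with probability 2/3:
```
import random

def monty_hall(n=100_000):
    stay_wins = switch_wins = 0
    for _ in range(n):
        car = random.randint(0, 2)   # door hiding the car
        pick = 1                     # contestant always picks door 1
        # host opens a door that is neither the pick nor the car
        if car == pick:
            host = random.choice([d for d in (0, 1, 2) if d != pick])
        else:
            host = next(d for d in (0, 1, 2) if d != pick and d != car)
        switch = next(d for d in (0, 1, 2) if d != pick and d != host)
        stay_wins += (pick == car)
        switch_wins += (switch == car)
    return stay_wins / n, switch_wins / n

print(monty_hall())  # roughly (0.333, 0.667)
```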
# Machine Learning
## Overview
Machine learning is the ability of computers to take a dataset of objects and learn patterns about them. This dataset is structured as a table, where each row is a vector representing some object by encoding its properties as the values of the vector. The columns represent **features** - properties that all the objects share.
There are, broadly speaking, two kinds of machine learning. **Supervised learning** has an extra column at the end of the dataset, and the program learns to predict the value of this based on the input features for some new object. If the output value is continuous, it is **regression**, otherwise it is **classification**. **Unsupervised learning** seeks to find patterns within the data by, for example, clustering.

## Supervised Learning
One of the most critical concepts in supervised learning is the dataset. This represents the knowledge about the set of objects in question that you wish the machine to learn. It is essentially a table where the rows represent objects, and the columns represent the properties. 'Training' is essentially the creation of an object called a model, which can take a row missing the last column, and predict what its value will be by examining the data in the dataset. For example...
```
import pandas as pd
iris_dataset = pd.read_csv("../data/iris.csv")
iris_dataset.head()
```
Here a dataset has been loaded from CSV into a pandas dataframe. Each row represents a flower, on which four measurements have been taken, and each flower belongs to one of three classes. A supervised learning model would take this dataset of 150 flowers and train such that any other flower for which the relevant measurements were known could have its class predicted. This would obviously be a classification problem, not regression.
A very simple model would take just two features and map them to one of two classes. The dataset can be reduced to this form as follows:
```
simple_iris = iris_dataset.iloc[0:100, [0, 2, 4]]
simple_iris.head()
simple_iris.tail()
```
Because this is just two dimensions, it can be easily visualised as a scatter plot.
```
import sys
sys.path.append("..")
import numerus.learning as ml
ml.plot_dataset(simple_iris)
```
The data can be seen to be **linearly separable** - there is a line that can be drawn between them that would separate them perfectly.
One of the simplest classifiers for supervised learning is the perceptron. Perceptrons have a weights vector which they dot with an input vector to get some level of activation. If the activation is above some threshold, one class is predicted - otherwise the other is predicted. Training a perceptron means giving the model training inputs until it has values for the weights and threshold that effectively separate the classes.
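The `Perceptron` class used below comes from the course's own `numerus.learning` module, so its internals are not shown here. As an illustration only, a minimal perceptron with the classic update rule might look like the following sketch (the class and method names are my own, not the `numerus` implementation):
```
import numpy as np

class TinyPerceptron:
    """Predicts 1 if w.x + b > 0 (else 0) and learns w, b with the perceptron rule."""

    def __init__(self, n_features, lr=0.1):
        self.w = np.zeros(n_features)
        self.b = 0.0
        self.lr = lr

    def predict(self, x):
        return int(np.dot(self.w, x) + self.b > 0)

    def fit(self, X, y, epochs=20):
        for _ in range(epochs):
            for xi, target in zip(X, y):
                error = target - self.predict(xi)      # -1, 0 or +1
                self.w += self.lr * error * np.asarray(xi)  # nudge the boundary toward misclassified points
                self.b += self.lr * error
        return self
```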
The data must be split into training and test data, and then a perceptron created from the training data.
```
train_simple_iris, test_simple_iris = ml.split_data(simple_iris)
ml.plot_dataset(train_simple_iris, title="Training Data")
perceptron = ml.Perceptron(train_simple_iris)
print(perceptron)
```
## _*Using Qiskit Aqua for clique problems*_
This Qiskit Aqua Optimization notebook demonstrates how to use the VQE quantum algorithm to compute the clique of a given graph.
The problem is defined as follows. A clique in a graph $G$ is a complete subgraph of $G$. That is, it is a subset $K$ of the vertices such that every two vertices in $K$ are the two endpoints of an edge in $G$. A maximal clique is a clique to which no more vertices can be added. A maximum clique is a clique that includes the largest possible number of vertices.
We will go through three examples to show (1) how to run the optimization in the non-programming way, (2) how to run the optimization in the programming way, (3) how to run the optimization with the VQE.
We will omit the details for the support of CPLEX, which are explained in other notebooks such as maxcut.
Note that the solution may not be unique.
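For a quick classical cross-check (my own addition; networkx is assumed to be installed and is not used elsewhere in this notebook), maximal cliques of a small graph can be enumerated directly:
```
import networkx as nx

G = nx.Graph([(0, 1), (1, 2), (0, 2), (2, 3)])  # a toy graph, not the random graph used below
print(list(nx.find_cliques(G)))                 # maximal cliques, e.g. [[0, 1, 2], [2, 3]]
```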
### The problem and a brute-force method.
```
import numpy as np
from qiskit import Aer
from qiskit_aqua import run_algorithm
from qiskit_aqua.input import EnergyInput
from qiskit_aqua.translators.ising import clique
from qiskit_aqua.algorithms import ExactEigensolver
```
First, let us have a look at the graph, which is given in adjacency-matrix form.
```
K = 3 # K means the size of the clique
np.random.seed(100)
num_nodes = 5
w = clique.random_graph(num_nodes, edge_prob=0.8, weight_range=10)
print(w)
```
Let us try a brute-force method. Basically, we exhaustively try all the binary assignments. In each binary assignment, the entry of a vertex is either 0 (meaning the vertex is not in the clique) or 1 (meaning the vertex is in the clique). We print the binary assignment that satisfies the definition of the clique (Note the size is specified as K).
```
def brute_force():
# brute-force way: try every possible assignment!
def bitfield(n, L):
result = np.binary_repr(n, L)
return [int(digit) for digit in result]
L = num_nodes # length of the bitstring that represents the assignment
max = 2**L
has_sol = False
for i in range(max):
cur = bitfield(i, L)
cur_v = clique.satisfy_or_not(np.array(cur), w, K)
if cur_v:
has_sol = True
break
return has_sol, cur
has_sol, sol = brute_force()
if has_sol:
print("solution is ", sol)
else:
print("no solution found for K=", K)
```
### Part I: run the optimization in the non-programming way
```
qubit_op, offset = clique.get_clique_qubitops(w, K)
algo_input = EnergyInput(qubit_op)
params = {
'problem': {'name': 'ising'},
'algorithm': {'name': 'ExactEigensolver'}
}
result = run_algorithm(params, algo_input)
x = clique.sample_most_likely(len(w), result['eigvecs'][0])
ising_sol = clique.get_graph_solution(x)
if clique.satisfy_or_not(ising_sol, w, K):
print("solution is", ising_sol)
else:
print("no solution found for K=", K)
```
### Part II: run the optimization in the programming way
```
algo = ExactEigensolver(algo_input.qubit_op, k=1, aux_operators=[])
result = algo.run()
x = clique.sample_most_likely(len(w), result['eigvecs'][0])
ising_sol = clique.get_graph_solution(x)
if clique.satisfy_or_not(ising_sol, w, K):
print("solution is", ising_sol)
else:
print("no solution found for K=", K)
```
### Part III: run the optimization with the VQE
```
algorithm_cfg = {
'name': 'VQE',
'operator_mode': 'matrix'
}
optimizer_cfg = {
'name': 'COBYLA'
}
var_form_cfg = {
'name': 'RY',
'depth': 5,
'entanglement': 'linear'
}
params = {
'problem': {'name': 'ising', 'random_seed': 10598},
'algorithm': algorithm_cfg,
'optimizer': optimizer_cfg,
'variational_form': var_form_cfg
}
backend = Aer.get_backend('statevector_simulator')
result = run_algorithm(params, algo_input, backend=backend)
x = clique.sample_most_likely(len(w), result['eigvecs'][0])
ising_sol = clique.get_graph_solution(x)
if clique.satisfy_or_not(ising_sol, w, K):
print("solution is", ising_sol)
else:
print("no solution found for K=", K)
```
# Test shifting template experiments
```
%load_ext autoreload
%autoreload 2
import os
import sys
import pandas as pd
import numpy as np
import random
import umap
import glob
import pickle
import tensorflow as tf
from keras.models import load_model
from sklearn.decomposition import PCA
from plotnine import (ggplot,
labs,
geom_point,
aes,
ggsave,
theme_bw,
theme,
facet_wrap,
scale_color_manual,
guides,
guide_legend,
element_blank,
element_text,
element_rect,
element_line,
coords)
import warnings
warnings.filterwarnings(action='ignore')
from ponyo import utils, train_vae_modules, simulate_expression_data
# Set seeds to get reproducible VAE trained models
# The below is necessary in Python 3.2.3 onwards to
# have reproducible behavior for certain hash-based operations.
# See these references for further details:
# https://keras.io/getting-started/faq/#how-can-i-obtain-reproducible-results-using-keras-during-development
# https://docs.python.org/3.4/using/cmdline.html#envvar-PYTHONHASHSEED
# https://github.com/keras-team/keras/issues/2280#issuecomment-306959926
os.environ["PYTHONHASHSEED"] = "0"
# The below is necessary for starting Numpy generated random numbers
# in a well-defined initial state.
np.random.seed(42)
# The below is necessary for starting core Python generated random numbers
# in a well-defined state.
random.seed(12345)
# The below tf.set_random_seed() will make random number generation
# in the TensorFlow backend have a well-defined initial state.
tf.set_random_seed(1234)
# Read in config variables
base_dir = os.path.abspath(os.path.join(os.getcwd(),"../"))
config_filename = os.path.abspath(os.path.join(base_dir,
"human_tests",
"config_test_human.tsv"))
params = utils.read_config(config_filename)
# Load parameters
local_dir = params["local_dir"]
dataset_name = params['dataset_name']
analysis_name = params["simulation_type"]
rpkm_data_filename = params["raw_data_filename"]
normalized_data_filename = params["normalized_data_filename"]
metadata_filename = params["metadata_filename"]
NN_architecture = params['NN_architecture']
scaler_filename = params['scaler_transform_filename']
num_runs = params['num_simulated']
metadata_delimiter = params["metadata_delimiter"]
experiment_id_colname = params['metadata_experiment_colname']
sample_id_colname = params['metadata_sample_colname']
project_id = params['project_id']
NN_dir = os.path.join(
base_dir,
dataset_name,
"models",
NN_architecture)
assert os.path.exists(rpkm_data_filename)
```
## Setup directories
```
utils.setup_dir(config_filename)
```
## Pre-process data
```
train_vae_modules.normalize_expression_data(base_dir,
config_filename,
rpkm_data_filename,
normalized_data_filename)
```
## Train VAE
```
# Directory containing log information from VAE training
vae_log_dir = os.path.join(
base_dir,
dataset_name,
"logs",
NN_architecture)
# Train VAE
train_vae_modules.train_vae(config_filename,
normalized_data_filename)
```
## Shift template experiment
```
#tmp result dir
tmp = os.path.join(local_dir, "pseudo_experiment")
os.makedirs(tmp, exist_ok=True)
# Load pickled file
scaler = pickle.load(open(scaler_filename, "rb"))
# Run simulation
normalized_data = pd.read_csv(
normalized_data_filename, header=0, sep="\t", index_col=0
)
for run in range(num_runs):
simulate_expression_data.shift_template_experiment(
normalized_data,
NN_architecture,
dataset_name,
scaler,
metadata_filename,
metadata_delimiter,
experiment_id_colname,
sample_id_colname,
project_id,
local_dir,
base_dir,
run)
```
## Visualize latent transform compendium
```
# Load VAE models
model_encoder_filename = glob.glob(os.path.join(
NN_dir,
"*_encoder_model.h5"))[0]
weights_encoder_filename = glob.glob(os.path.join(
NN_dir,
"*_encoder_weights.h5"))[0]
model_decoder_filename = glob.glob(os.path.join(
NN_dir,
"*_decoder_model.h5"))[0]
weights_decoder_filename = glob.glob(os.path.join(
NN_dir,
"*_decoder_weights.h5"))[0]
# Load saved models
loaded_model = load_model(model_encoder_filename)
loaded_decode_model = load_model(model_decoder_filename)
loaded_model.load_weights(weights_encoder_filename)
loaded_decode_model.load_weights(weights_decoder_filename)
pca = PCA(n_components=2)
# Read data
normalized_compendium = pd.read_csv(normalized_data_filename, header=0, sep="\t", index_col=0)
# Encode normalized compendium into latent space
compendium_encoded = loaded_model.predict_on_batch(normalized_compendium)
compendium_encoded_df = pd.DataFrame(data=compendium_encoded,
index=normalized_compendium.index)
# Get and save PCA model
model = pca.fit(compendium_encoded_df)
compendium_PCAencoded = model.transform(compendium_encoded_df)
compendium_PCAencoded_df = pd.DataFrame(data=compendium_PCAencoded,
index=compendium_encoded_df.index,
columns=['1','2'])
# Add label
compendium_PCAencoded_df['experiment_id'] = 'background'
# Embedding of real template experiment (encoded)
template_filename = os.path.join(local_dir,
"pseudo_experiment",
"template_normalized_data_"+project_id+"_test.txt")
template_data = pd.read_csv(template_filename, header=0, sep='\t', index_col=0)
# Encode template experiment into latent space
template_encoded = loaded_model.predict_on_batch(template_data)
template_encoded_df = pd.DataFrame(data=template_encoded,
index=template_data.index)
template_PCAencoded = model.transform(template_encoded_df)
template_PCAencoded_df = pd.DataFrame(data=template_PCAencoded,
index=template_encoded_df.index,
columns=['1','2'])
# Add back label column
template_PCAencoded_df['experiment_id'] = 'template_experiment'
# Embedding of simulated experiment (encoded)
encoded_simulated_filename = os.path.join(local_dir,
"pseudo_experiment",
"selected_simulated_encoded_data_"+project_id+"_1.txt")
simulated_encoded_df = pd.read_csv(encoded_simulated_filename,header=0, sep='\t', index_col=0)
simulated_PCAencoded = model.transform(simulated_encoded_df)
simulated_PCAencoded_df = pd.DataFrame(data=simulated_PCAencoded,
index=simulated_encoded_df.index,
columns=['1','2'])
# Add back label column
simulated_PCAencoded_df['experiment_id'] = 'simulated_experiment'
# Concatenate dataframes
combined_PCAencoded_df = pd.concat([compendium_PCAencoded_df,
template_PCAencoded_df,
simulated_PCAencoded_df])
print(combined_PCAencoded_df.shape)
combined_PCAencoded_df.head()
# Plot
fig = ggplot(combined_PCAencoded_df, aes(x='1', y='2'))
fig += geom_point(aes(color='experiment_id'), alpha=0.2)
fig += labs(x ='PCA 1',
y = 'PCA 2',
title = 'PCA original data with experiments (latent space)')
fig += theme_bw()
fig += theme(
legend_title_align = "center",
plot_background=element_rect(fill='white'),
legend_key=element_rect(fill='white', colour='white'),
legend_title=element_text(family='sans-serif', size=15),
legend_text=element_text(family='sans-serif', size=12),
plot_title=element_text(family='sans-serif', size=15),
axis_text=element_text(family='sans-serif', size=12),
axis_title=element_text(family='sans-serif', size=15)
)
fig += guides(colour=guide_legend(override_aes={'alpha': 1}))
fig += scale_color_manual(['#bdbdbd', 'red', 'blue'])
fig += geom_point(data=combined_PCAencoded_df[combined_PCAencoded_df['experiment_id'] == 'template_experiment'],
alpha=0.2,
color='blue')
fig += geom_point(data=combined_PCAencoded_df[combined_PCAencoded_df['experiment_id'] == 'simulated_experiment'],
alpha=0.1,
color='red')
print(fig)
```
# Selection
## Boolean types, numeric values, and expressions

- Note: the equality comparison operator uses two equals signs; a single equals sign means assignment
- In Python the integer 0 can represent False, and any other number represents True
- The use of `is` in conditional statements will be covered later
```
1 == True
while 1:
print('hahaha')
```
## String comparison uses ASCII values
```
'a'>True
0<10>100
num=eval(input('>>'))
if num>=90:
print('A')
elif 80<=num<90:
print('B')
else :
print('C')
```
## Markdown
- https://github.com/younghz/Markdown
## EP:
- <img src="../Photo/34.png"></img>
- Input a number and determine whether it is odd or even
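A minimal sketch of one possible solution to the odd/even exercise (the prompt text is my own):
```
num = eval(input('>>'))
if num % 2 == 0:
    print('even')
else:
    print('odd')
```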
## Generating random numbers
- The function random.randint(a,b) produces a random integer between a and b, including both a and b
```
import random
a=random.randint(1,5)
print(a)
while True:
num=eval(input('>>'))
if num == a:
print('Success')
break
elif num>a:
print('太大了')
elif num<a:
print('太小了')
```
## Other random methods
- random.random() returns a random float in the half-open interval [0.0, 1.0)
- random.randrange(a,b) returns a random integer in the half-open interval [a, b), i.e. b is excluded
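A quick demonstration of the two methods above (the printed values change on every run):
```
import random
print(random.random())         # a float in [0.0, 1.0)
print(random.randrange(1, 5))  # an integer from 1 to 4; 5 is excluded
```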
## EP:
- Generate two random integers number1 and number2, display them to the user, have the user enter their sum, and check whether the answer is correct
- Advanced: write a random roll-call program that picks a student at random (see the sketch after the code cell below)
```
import random
a=random.randint(1,5)
b=random.randint(2,6)
print(a,b)
# num=eval(input('>>'))
# if num==a+b:
# print('Success')
# else :
# print('失败')
num=a+b
while 1:
    guess = eval(input('>>'))
    if guess == num:
        print('Success')
        break
    else :
        print('失败')
```
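The advanced roll-call exercise above is not covered by the code in this cell; one possible sketch, using a made-up class roster:
```
import random

students = ['student 1', 'student 2', 'student 3', 'student 4']  # placeholder roster
print('Called on:', random.choice(students))
```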
## The if statement
- A one-way if statement runs its body only when the condition is true
- Python offers several kinds of selection statements:
> - one-way if
- two-way if-else
- nested if
- multi-way if-elif-else
- Note: when a statement contains sub-statements, there must be at least one level of indentation; in other words, if a statement has children, they must be indented
- Never mix the Tab key and spaces; use only tabs or only spaces
- If some output must be displayed regardless of whether the if condition is true, that statement should be aligned with the if
```
a=eval(input('>>'))
if a<=30:
b=input('>>')
if b!='丑':
c=input('>>')
if c=='高':
d=input('>>')
if d=='是':
print('见')
else:
print('不见')
else :
print('不见')
else :
print('不见')
else:
print('too old')
```
## EP:
- The user enters a number; determine whether it is odd or even
- Advanced: see Section 4.5, the "guess the birthday" case study
## The two-way if-else statement
- If the condition is true, the if branch is executed; otherwise the else branch is executed
## EP:
- Generate two random integers number1 and number2, display them to the user, have the user enter their sum, and check the answer: print "you're correct" if it is right, otherwise print the correct answer
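A minimal sketch of one possible solution (the prompt wording is my own):
```
import random

number1 = random.randint(0, 9)
number2 = random.randint(0, 9)
print(number1, number2)
answer = eval(input('Enter the sum: '))
if answer == number1 + number2:
    print("you're correct")
else:
    print('wrong, the correct answer is', number1 + number2)
```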
## Nested if and multi-way if-elif-else

## EP:
- Prompt the user for a year, then display the zodiac animal for that year

- A program that computes the body mass index (BMI)
- BMI = weight in kilograms divided by the square of the height in meters

```
a=eval(input('>>'))
num=a%12
if num==0:
print('猴')
elif num == 1:
print('鸡')
elif num == 2:
print('狗')
elif num == 3:
print('猪')
elif num== 4:
print('鼠')
elif num== 5:
print('牛')
elif num== 6:
print('虎')
elif num== 7:
print('兔')
elif num== 8:
print('龙')
elif num== 9:
print('蛇')
elif num== 10:
print('马')
else:
print('羊')
w,h=eval(input('>>'))
bmi=w/(h*h)
print(bmi)
if bmi<18.5:
print('超轻')
elif 18.5<=bmi<25.0:
print('标准')
elif 25.0<=bmi<30.0:
print('超重')
else :
print('痴肥')
```
## Logical operators



```
a=[1,2,3,4]
1 not in a
```
## EP:
- Leap year test: a year is a leap year if it is divisible by 4 but not by 100, or if it is divisible by 400
- Prompt the user for a year and report whether it is a leap year
- Prompt the user for a number and determine whether it is a narcissistic (Armstrong) number
```
year=eval(input('>>'))
a=year%4==0
b=year%100!=0
c=year%400==0
if (a or c) and b :
print('闰年')
else :
print('非闰年')
n=eval(input('>>'))
a1=n//100
a2=n//10%10
a3=n%10
s=a1**3+a2**3+a3**3
if s == n:
print('是水仙花数')
else :
print('结束')
```
## Case study: the lottery

```
import random
a1=random.randint(0,9)
a2=random.randint(0,9)
print(a1,a2)
a=str(a1)+str(a2)
num=input('>>')
if num==a:
print('一等奖')
elif (num[0]==a[1] and (num[1]== a[0])):
print('二等奖')
elif ((num[0]==a[0]) or (num[1]==a[0]) or (num[0]==a[1]) or (num[1]==a[1])):
print('三等奖')
else :
    print('未中奖')
```
# Homework
- 1

```
import math
a,b,c=eval(input('>>'))
pan=b**2-4*a*c
if pan>0:
    r1=((-b)+math.sqrt(pan))/(2*a)
    r2=((-b)-math.sqrt(pan))/(2*a)
    print(r1,r2)
elif pan==0:
    r1=(-b)/(2*a)
    print(r1)
else :
    print('The equation has no real roots')
```
- 2

```
import random
a1=random.randint(0,99)
a2=random.randint(0,99)
print(a1,a2)
num=eval(input('>>'))
number=a1+a2
if num == number:
print('True')
else :
print('False')
```
- 3

```
day = eval(input('今天是哪一天(星期天是0,星期一是1,。。。,星期六是6):'))
days = eval(input('今天之后到未来某天的天数:'))
n = day + days
if day==0:
a='星期日'
elif day==1:
a='星期一'
elif day==2:
a='星期二'
elif day==3:
a='星期三'
elif day==4:
a='星期四'
elif day==5:
a='星期五'
elif day==6:
a='星期六'
if n%7 ==0:
print('今天是'+str(a)+'并且'+str(days)+'天之后是星期天')
elif n%7 ==1:
print('今天是'+str(a)+'并且'+str(days)+'天之后是星期一')
elif n%7 ==2:
print('今天是'+str(a)+'并且'+str(days)+'天之后是星期二')
elif n%7 ==3:
print('今天是'+str(a)+'并且'+str(days)+'天之后是星期三')
elif n%7 ==4:
print('今天是'+str(a)+'并且'+str(days)+'天之后是星期四')
elif n%7 ==5:
print('今天是'+str(a)+'并且'+str(days)+'天之后是星期五')
elif n%7 ==6:
print('今天是'+str(a)+'并且'+str(days)+'天之后是星期六')
```
- 4

```
a,b,c = eval(input('输入三个整数:'))
if a>=b and b>=c:
print(c,b,a)
elif a>=b and b<=c and a>=c:
print(b,c,a)
elif b>=a and a>=c :
print(c,a,b)
elif b>=a and a<=c and b>=c:
print(a,c,b)
elif c>=b and b>=a:
print(a,b,c)
elif c>=b and b<=a and c>=a:
print(b,a,c)
```
- 5

```
a1,a2=eval(input('输入第一种重量和价钱:'))
b1,b2=eval(input('输入第一种重量和价钱:'))
num1=a2/a1
num2=b2/b1
if num1>num2:
print('购买第二种更加合适')
else :
print('购买第一种更合适')
```
- 6

```
m,year=eval(input('输入月份和年'))
a=year%4==0
b=year%100!=0
c=year%400==0
r=[1,3,5,7,8,10,12]
if (a or c) and b and m==2:
print(str(year)+'年'+str(m)+'月有29天')
elif ((m==1) or (m==3) or (m==5) or (m==7) or (m==8) or (m==10) or (m==12)):
print(str(year)+'年'+str(m)+'月有31天')
elif ((m==4) or (m==6) or (m==9) or (m==11)):
print(str(year)+'年'+str(m)+'月有30天')
else :
print(str(year)+'年'+str(m)+'月有28天')
```
- 7

```
import random
a=random.randint(0,1)
print(a)
num=eval(input('>>'))
if a==num:
print('正确')
else :
print('错误')
```
- 8

```
a=eval(input('输入1,2或0:'))
import random
d=random.randint(0,2)
if d==a:
print('平局')
elif a==0 and d==1:
print('你输了')
elif a==0 and d==2:
print('你赢了')
elif a==1 and d==0:
print('你赢了')
elif a==1 and d==2:
print('你输了')
elif a==2 and d==1:
print('你赢了')
elif a==2 and d==0:
print('你输了')
```
- 9

```
y = eval(input('请输入年份:'))
m = eval(input('请输入月份:'))
q = eval(input('请输入天数:'))
# Zeller's congruence: January and February are counted as months 13 and 14 of the previous year
if m == 1:
    m = 13
    y = y - 1
elif m == 2:
    m = 14
    y = y - 1
j = y//100
k = y%100
h = (q + (26*(m+1))//10 + k + k//4 + j//4 + 5*j) % 7
print(h)   # h: 0 = Saturday, 1 = Sunday, ..., 6 = Friday
```
- 10

```
import random
size=['Ace',2,3,4,5,6,7,8,9,10,'Jack','Queen','King']
A=random.randint(0,len(size)-1)
color=['Diamond','Heart','Spade','Club']
B=random.randint(0,len(color)-1)
print('The card you picked is the ' + str(size[A]) + ' of ' + str(color[B]))
```
- 11

```
x = input('Enter a three-digit integer:')
if x[0] == x[2] :
print(str(x)+'is a palindrome')
else:
print(str(x)+'is not a palindrome')
```
- 12

```
lenght1,lenght2,lenght3, =eval(input('Enter three adges:'))
perimeter = lenght1 + lenght2 + lenght3
if lenght1 + lenght2 > lenght3 and lenght1 + lenght3 > lenght2 and lenght2 + lenght3 > lenght1:
print('The perimeter is',perimeter)
else:
print('The perimeter invalid')
```
<img src="../../../../../images/qiskit_header.png" alt="Note: In order for images to show up in this jupyter notebook you need to select File => Trusted Notebook" align="middle">
# _*Qiskit Finance: Option Pricing*_
The latest version of this notebook is available on https://github.com/Qiskit/qiskit-tutorials.
***
### Contributors
Stefan Woerner<sup>[1]</sup>, Daniel Egger<sup>[1]</sup>, Christa Zoufal<sup>[1]</sup>, Shaohan Hu<sup>[1]</sup>, Stephen Wood<sup>[1]</sup>, Marco Pistoia<sup>[1]</sup>
### Affiliation
- <sup>[1]</sup>IBMQ
In this notebook we provide an overview of the available Qiskit Finance tutorials on how to use Quantum Amplitude Estimation (QAE) for option pricing. We analyze different types of options with increasing complexity, featuring:
- single asset / multi asset (basket) options,
- piecewise linear payoff functions (arbitrary number of break points, possibly non-continuous), and
- path-dependency (sum/average, barrier, etc.).
The basic ideas on using QAE for option pricing and risk analysis are provided here:<br>
<a href="https://www.nature.com/articles/s41534-019-0130-6">Quantum Risk Analysis. Stefan Woerner, Daniel J. Egger (2019)</a>.
A Qiskit Aqua tutorial on QAE can be found here:<br>
<a href="../../aqua/general/amplitude_estimation.ipynb">Qiskit Tutorial on QAE</a>
We provide tutorials for the following types of simple options:
- <a href="european_call_option_pricing.ipynb">European Call Option</a> (univariate, payoff with 2 segments)
- <a href="european_put_option_pricing.ipynb">European Put Option</a> (univariate, payoff with 2 segments)
- <a href="bull_spread_pricing.ipynb">Bull Spread</a> (univariate, payoff with 3 segments)
Note that the provided framework can cover all options of this type, i.e., options that are fully determined by a piecewise linear payoff with respect to the spot price at maturity of the underlying asset.
However, the framework also allows to price more complex options, for instance, options that depend on multiple assets or are path-dependent:
- <a href="basket_option_pricing.ipynb">Basket Option</a> (multivariate, payoff with 2 segments)
- <a href="asian_barrier_spread_pricing.ipynb">Asian Barrier Spread</a> (multivariate, path-dependent, payoff with 3 segments)
More examples on option pricing with a quantum computer can be found in the [Qiskit Finance Community](https://github.com/Qiskit/qiskit-tutorials-community/tree/master/finance) section of the Qiskit Tutorials.
All examples illustrate how to use the generic Qiskit Finance framework to construct QAE-operators (uncertainty problems). The same framework can be easily adjusted to estimate risk as well, for instance, the Value at Risk (VaR) or the Conditional Value at Risk (CVaR, also known as Expected Shortfall). How to use Qiskit Finance for risk analysis is illustrated in the following tutorial:
<a href="credit_risk_analysis.ipynb">Credit Risk Analysis</a>.
An example of how quantum Generative Adversarial Networks (qGANs) can be used to learn and efficiently load generic random distributions for option pricing can be found here:
<a href="../machine_learning/qgan_option_pricing.ipynb">QGANs to learn and load random distributions for option pricing</a>
```
# Synapse Classification Challenge
# Introduction to Connectomics 2017
# Darius Irani
your_name = 'irani_darius'
!pip install mahotas
!pip install ndparse
%matplotlib inline
# Load data
import numpy as np
import tensorflow as tf
data = np.load('./synchallenge2017_training.npz')
imtrain = data['imtrain']
annotrain = data['annotrain']
ytrain = data['ytrain']
data = np.load('./synchallenge2017_validation.npz')
imvalid = data['imvalid']
annovalid = data['annovalid']
yvalid = data['yvalid']
# Define feature extraction code
import skimage.feature as skif
def extract_features(imdata):
xtrain = []
for im in imdata:
fvector = []
# 50th percentile based on intensity
fvector.append(np.percentile(im,50))
# add a contrast feature
g = skif.greycomatrix(im, [1, 2], [0, np.pi/2],normed=True, symmetric=True)
homogeneity = skif.greycoprops(g, 'homogeneity')
# explict way to add feature elements one at a time
homogeneity = np.ravel(homogeneity)
for i in homogeneity:
fvector.append(i)
# compute Harris corner measure response image
cor = skif.corner_harris(im,method='k',k=0.1,eps=1.5e-06,sigma=2)
cor = np.ravel(cor)
for i in cor:
fvector.append(i)
# edge filter an image using the Canny algorithm
can = skif.canny(im,sigma=1.5, low_threshold=None, high_threshold=None)
can = np.ravel(can)
for i in can:
fvector.append(i)
# extract FAST corners for a given image
fast = skif.corner_shi_tomasi(im,sigma=2)
fast = np.ravel(fast)
for i in fast:
fvector.append(i)
fvector = np.asarray(fvector)
xtrain.append(fvector)
return np.asarray(xtrain)
# Extract Features from training
xtrain = extract_features(imtrain)
# Train Classifier
from sklearn.ensemble import RandomForestClassifier
clf = RandomForestClassifier(n_estimators=200)
clf = clf.fit(xtrain, ytrain)
# Extract features from validation set
xvalid = extract_features(imvalid)
# Run Classifier on validation set
scoresvalid = clf.predict_proba(xvalid)
# Best f1 score report on validation set
from sklearn.metrics import f1_score
# Can add post-processing here if desired
prob_syn = scoresvalid[:,1]
# default threshold
print('default f1 score: {}'.format(np.round(f1_score(yvalid, prob_syn >=0.5),2)))
f1_out = 0
thresh = 0
for i in np.arange(0.0, 1, 0.05):
f1_test = f1_score(yvalid, prob_syn > i)
if f1_test > f1_out:
f1_out = f1_test
thresh = i
print('My best validation f1-score is: {} at {} threshold.'.format(np.round(f1_out,2), thresh))
# here we can inspect results
valid_labels = np.asarray(prob_syn > thresh,dtype='int')
# find images we did well on
idx_correct_syn = np.where((valid_labels == yvalid) & (yvalid == 1))[0]
idx_correct_nosyn = np.where((valid_labels == yvalid) & (yvalid == 0))[0]
# find images we did poorly on
idx_wrong_syn = np.where((valid_labels != yvalid) & (yvalid == 1))[0]
idx_wrong_nosyn = np.where((valid_labels != yvalid) & (yvalid == 0))[0]
import ndparse as ndp
print('synapse present - true positive')
ndp.plot(imvalid[idx_correct_syn[3]])
print('no synapse present - true negative')
ndp.plot(imvalid[idx_correct_nosyn[3]])
print('synapse present - false negative')
ndp.plot(imvalid[idx_wrong_syn[3]])
print('no synapse present - false positive')
ndp.plot(imvalid[idx_wrong_nosyn[3]])
# Validate performance on test set (should only run/score once!)
data = np.load('./synchallenge2017_test_notruth.npz')
imtest = data['imtest']
annotest = data['annotest']
# Extract features from test set
xtest = extract_features(imtest)
# Run classifier on test set
scoretest = clf.predict_proba(xtest)
# Post-processing
prob_syntest = scoretest[:,1]
syntest_predict = prob_syntest > thresh
syntest_predict = np.asarray(syntest_predict,dtype = 'uint8')
# save file and upload to google docs with label vector
np.save(your_name+'_synchallenge_testdata.npy',syntest_predict)
```
```
# Load dependencies
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.preprocessing import StandardScaler
from sklearn.preprocessing import MinMaxScaler
# Frequently used helper functions
def hist_show(a, b = 50):
plt.hist(a, bins = b)
plt.show()
def replace_zero_to_mean(a):
mean_data = int(a.mean())
return a.replace(0, mean_data)
def mm_scaler(a):
a = np.array(a).reshape(-1, 1)
a =MinMaxScaler().fit_transform(a).flatten()
return a
def standard_scaler(a):
a = np.array(a).reshape(-1, 1)
a =StandardScaler().fit_transform(a).flatten()
return a
# Load and inspect the dataset
country_dataset = pd.read_csv('Набор_3_страны_мира.csv', sep=';')
country_dataset.head(10)
# Create a dataframe that will hold the processed data
dataset = pd.DataFrame()
# "region" column
data = country_dataset['region']
data = pd.get_dummies(data)
data = np.array([data[i[1]] * (i[0]+1) for i in enumerate(data)]).flatten()
data = data[data != 0]
hist_show(data)
hist_show(data**0.5)
hist_show(np.log(data))
dataset['region'] = mm_scaler(data**0.5)
dataset.head(10)
# "population" column
data = country_dataset['population']
hist_show(data)
data = np.clip(data, 0, 94000000)
hist_show(data)
data = replace_zero_to_mean(data)
hist_show(data)
hist_show(data**0.5)
hist_show(np.log(data))
dataset['population'] = mm_scaler(np.log(data))
dataset.head(10)
# "area" column
data = country_dataset['area']
hist_show(data)
data = np.clip(data, 0, 1275200)
hist_show(data)
hist_show(data**0.5)
hist_show(np.log(data))
dataset['area'] = mm_scaler(np.log(data))
dataset.head(10)
# "infant_mortality" column
country_dataset['infant_mortality'] = country_dataset['infant_mortality'].astype(str)
country_dataset['infant_mortality'] = [x.replace(',', '.') for x in country_dataset['infant_mortality']]
country_dataset['infant_mortality'] = country_dataset['infant_mortality'].astype(float)
data = country_dataset['infant_mortality']
plt.hist(data, bins = 50)
plt.show()
data = data.replace(0, data.mean())
plt.hist(data, bins = 50)
plt.show()
hist_show(data**0.5)
hist_show(np.log(data))
plt.hist(np.log(data), bins = 50)
plt.show()
dataset['infant_mortality'] = mm_scaler(np.log(data))
dataset.head(10)
# "gdp" column
data = country_dataset['gdp']
hist_show(data)
data = np.clip(data, 0, 38000)
hist_show(data)
hist_show(data**0.5)
hist_show(np.log(data))
dataset['gdp'] = mm_scaler(data**0.5)
dataset.head(10)
# "literacy" column
country_dataset['literacy'] = country_dataset['literacy'].astype(str)
country_dataset['literacy'] = [x.replace(',', '.') for x in country_dataset['literacy']]
country_dataset['literacy'] = country_dataset['literacy'].astype(float)
data = country_dataset['literacy']
hist_show(data)
data = replace_zero_to_mean(data)
hist_show(data)
data = np.clip(data, 38, 100)
hist_show(data)
hist_show(data**0.5)
hist_show(np.log(data))
dataset['literacy'] = mm_scaler(data**0.5)
dataset.head(10)
# "arable" column
country_dataset['arable'] = country_dataset['arable'].astype(str)
country_dataset['arable'] = [x.replace(',', '.') for x in country_dataset['arable']]
country_dataset['arable'] = country_dataset['arable'].astype(float)
data = country_dataset['arable']
hist_show(data)
data = replace_zero_to_mean(data)
hist_show(data)
hist_show(data**0.5)
hist_show(np.log(data))
dataset['arable'] = mm_scaler(data**0.5)
dataset.head(10)
# "birthrate" column
country_dataset['birthrate'] = country_dataset['birthrate'].astype(str)
country_dataset['birthrate'] = [x.replace(',', '.') for x in country_dataset['birthrate']]
country_dataset['birthrate'] = country_dataset['birthrate'].astype(float)
data = country_dataset['birthrate']
hist_show(data)
data = replace_zero_to_mean(data)
hist_show(data)
hist_show(data**0.5)
hist_show(np.log(data))
dataset['birthrate'] = mm_scaler(data**0.5)
dataset.head(10)
# "deathrate" column
country_dataset['deathrate'] = country_dataset['deathrate'].astype(str)
country_dataset['deathrate'] = [x.replace(',', '.') for x in country_dataset['deathrate']]
country_dataset['deathrate'] = country_dataset['deathrate'].astype(float)
data = country_dataset['deathrate']
hist_show(data)
data = np.clip(data, 0, 23)
hist_show(data)
data = replace_zero_to_mean(data)
hist_show(data)
hist_show(data**0.5)
hist_show(np.log(data))
dataset['deathrate'] = mm_scaler(data**0.5)
dataset.head(10)
dataset.to_csv('prepared_data.csv')
```
| github_jupyter |
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
```
# 1. Decision Trees for Classification (continued)
In the previous class we covered the idea behind Decision Trees:

Let us now look at **how the split in each node is chosen**, that is, how the **training stage** works. There are at least two reasons to understand this: first, it lets us solve classification problems with 3 or more classes; second, it gives us a way to compute feature *importances* in a trained model.
To begin with, let's see what kinds of decision trees there are.
----
A decision tree does **not have to be binary** in general; in practice, however, binary trees are used, because for any non-binary decision tree an **equivalent binary one can be built** (at the cost of a deeper tree).
### 1. Decision trees use a simple one-dimensional predicate to split the objects
This means that in each node the objects are split (and two child nodes are created) based on **exactly one** feature:
*All objects whose value of some feature is below a threshold go to one node, and the rest go to the other:*
$$
[x_j < t]
$$
Strictly speaking this is not required: for example, in each node one could fit any model (say, logistic regression or KNN) that looks at several features at once.
### 2. Split quality
We have already discussed a simple functional for evaluating a split (i.e. for **choosing the threshold**): the number of errors (1 - accuracy).
In practice two criteria are used: the Gini impurity index and Information gain.
**Gini impurity index**
$$
I_{Gini} = 1 - \sum_i^K p_i^2
$$
where $K$ is the number of classes and $p_i = \frac{|n_i|}{n}$ is the share of objects of the $i$-th class in the given node
**Entropy**
$$
H(p) = - \sum_i^K p_i\log(p_i)
$$
**Information gain**
$$
IG = H(\text{parent}) - \sum_{\text{child}} \frac{n_{\text{child}}}{n} H(\text{child})
$$
#### The split is made on the feature and the threshold for which the weighted average of the quality criterion over the child nodes is smallest.
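To make this concrete, here is a minimal sketch of how a node can pick the feature and threshold with the smallest weighted impurity of the children. The helper names and the brute-force search are my own illustration, not the lecture's code:
```
import numpy as np

def gini(y):
    _, counts = np.unique(y, return_counts=True)
    p = counts / len(y)
    return 1 - (p ** 2).sum()

def best_split(X, y):
    # try every feature and every observed value as a candidate threshold
    best_feature, best_threshold, best_score = None, None, np.inf
    n = len(y)
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            left = X[:, j] < t
            if left.sum() == 0 or left.sum() == n:
                continue
            # weighted average of the children's impurities
            score = (left.sum() * gini(y[left]) + (~left).sum() * gini(y[~left])) / n
            if score < best_score:
                best_feature, best_threshold, best_score = j, t, score
    return best_feature, best_threshold, best_score
```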
### 3. Stopping criteria
We have already talked about Decision Tree parameters such as the minimum number of objects in a leaf,
and the minimum number of objects a node must contain in order to be split. Another criterion is
the depth of the tree. Other criteria are possible (the matching sklearn parameters are sketched right after this list).
* A limit on the number of objects in a leaf
* A limit on the number of objects a node must contain in order to be split
* A limit on the depth of the tree
* A minimum improvement of the Entropy / Information gain criterion required for a split
* Stopping when all objects in a leaf belong to the same class
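In sklearn these stopping criteria correspond to constructor parameters of DecisionTreeClassifier; the particular values below are illustrative only:
```
from sklearn.tree import DecisionTreeClassifier

model = DecisionTreeClassifier(
    min_samples_leaf=5,          # minimum number of objects in a leaf
    min_samples_split=10,        # minimum number of objects in a node for it to be split
    max_depth=4,                 # maximum depth of the tree
    min_impurity_decrease=1e-3,  # minimum decrease of the impurity criterion for a split
)
```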
In the previous lecture we also discussed a technique called **pruning**, an alternative to stopping criteria: first an overfitted tree is grown, and then it is simplified in some way. In practice, for a number of reasons, stopping criteria are used more often than pruning.
For more details see https://github.com/esokolov/ml-course-hse/blob/master/2018-fall/lecture-notes/lecture07-trees.pdf
On the specifics of splitting continuous features:
* http://kevinmeurer.com/a-simple-guide-to-entropy-based-discretization/
* http://clear-lines.com/blog/post/Discretizing-a-continuous-variable-using-Entropy.aspx
---
## 1.1. Evaluating split quality in a node
```
def gini_impurity(y_current):
n = y_current.shape[0]
val, count = np.unique(y_current, return_counts=True)
gini = 1 - ((count/n)**2).sum()
return gini
def entropy(y_current):
    # entropy H(p) = -sum(p_i * log(p_i)) of the class distribution in the node
    n = y_current.shape[0]
    val, count = np.unique(y_current, return_counts=True)
    p = count/n
    return -p.dot(np.log(p))
n = 100
Y_example = np.zeros((100,100))
for i in range(100):
for j in range(i, 100):
Y_example[i, j] = 1
gini = [gini_impurity(y) for y in Y_example]
ig = [entropy(y) for y in Y_example]
plt.figure(figsize=(7,7))
plt.plot(np.linspace(0,1,100), gini, label='Index Gini');
plt.plot(np.linspace(0,1,100), ig, label ='Entropy');
plt.legend()
plt.xlabel('Share of positive-class\n examples')
plt.ylabel('Value of the optimized\n criterion');
```
## 1.2. A Decision Tree in action
**Gini impurity** and **Information gain** are measures of how homogeneous a set of labels is (how similar the objects in the set are). Heterogeneity is maximal when the classes are represented in equal proportions; homogeneity is maximal when all objects in the set belong to one class.
By splitting a set of objects into two subsets, we aim to reduce the heterogeneity inside each subset.
Let's look at an example using Fisher's Iris dataset.
### Fisher's Iris dataset
```
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier
iris = load_iris()
model = DecisionTreeClassifier()
model = model.fit(iris.data, iris.target)
feature_names = ['sepal length', 'sepal width', 'petal length', 'petal width']
target_names = ['setosa', 'versicolor', 'virginica']
model.feature_importances_
np.array(model.decision_path(iris.data).todense())[0]
np.array(model.decision_path(iris.data).todense())[90]
iris.data[0]
model.predict(iris.data)
model.tree_.node_count
```
### Digits. Interpretability
```
from sklearn.datasets import load_digits
X, y = load_digits(n_class=2, return_X_y=True)
plt.figure(figsize=(12,12))
for i in range(9):
ax = plt.subplot(3,3,i+1)
ax.imshow(X[i].reshape(8,8), cmap='gray')
from sklearn.metrics import accuracy_score
model = DecisionTreeClassifier()
model.fit(X, y)
y_pred = model.predict(X)
print(accuracy_score(y, y_pred))
print(X.shape)
np.array(model.decision_path(X).todense())[0]
model.feature_importances_
plt.imshow(model.feature_importances_.reshape(8,8));
from sklearn.tree import export_graphviz
export_graphviz(model, out_file='tree.dot', filled=True)
# #sudo apt-get install graphviz
# !dot -Tpng 'tree.dot' -o 'tree.png'
# 
np.array(model.decision_path(X).todense())[0]
plt.imshow(X[0].reshape(8,8))
```
## 2.3. Decision trees generalize easily to multiclass classification
### Handwritten digits example
```
X, y = load_digits(n_class=10, return_X_y=True)
plt.figure(figsize=(12,12))
for i in range(9):
ax = plt.subplot(3,3,i+1)
ax.imshow(X[i].reshape(8,8), cmap='gray')
ax.set_title(y[i])
ax.set_xticks([])
ax.set_yticks([])
model = DecisionTreeClassifier()
model.fit(X, y)
y_pred = model.predict(X)
print(accuracy_score(y, y_pred))
plt.imshow(model.feature_importances_.reshape(8,8));
model.feature_importances_
```
### Question: where does feature importance come from?
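A sketch of one possible answer, assuming sklearn's definition: the importance of a feature is the total impurity decrease over all splits made on it, weighted by the number of samples reaching the node, and normalized to sum to one. The loop below only illustrates that definition, it is not sklearn's actual code:
```
t = model.tree_
importances = np.zeros(X.shape[1])
for node in range(t.node_count):
    left, right = t.children_left[node], t.children_right[node]
    if left == -1:  # leaf node: no split, no impurity decrease
        continue
    decrease = (t.weighted_n_node_samples[node] * t.impurity[node]
                - t.weighted_n_node_samples[left] * t.impurity[left]
                - t.weighted_n_node_samples[right] * t.impurity[right])
    importances[t.feature[node]] += decrease
importances /= importances.sum()
# importances should now be close to model.feature_importances_
```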
## 2.4. An example where a decision tree builds a very complex decision boundary
The example is taken from https://habr.com/ru/company/ods/blog/322534/#slozhnyy-sluchay-dlya-derevev-resheniy .
As we remember, decision trees use a one-dimensional predicate to split the set of objects.
This means that if the data are poorly separable along **each** individual feature, the resulting decision rule can turn out to be very complex.
```
from sklearn.tree import DecisionTreeClassifier
def form_linearly_separable_data(n=500, x1_min=0, x1_max=30, x2_min=0, x2_max=30):
data, target = [], []
for i in range(n):
x1, x2 = np.random.randint(x1_min, x1_max), np.random.randint(x2_min, x2_max)
if np.abs(x1 - x2) > 0.5:
data.append([x1, x2])
target.append(np.sign(x1 - x2))
return np.array(data), np.array(target)
X, y = form_linearly_separable_data()
plt.figure(figsize=(10,10))
plt.scatter(X[:, 0], X[:, 1], c=y, cmap='autumn');
```
Let's look at how the data project onto a single axis
```
plt.figure(figsize=(15,5))
ax1 = plt.subplot(1,2,1)
ax1.set_title('Projection onto the $X_0$ axis')
ax1.hist(X[y==1, 0], alpha=.3);
ax1.hist(X[y==-1, 0], alpha=.6);
ax2 = plt.subplot(1,2,2)
ax2.set_title('Projection onto the $X_1$ axis')
ax2.hist(X[y==1, 1], alpha=.3);
ax2.hist(X[y==-1, 1], alpha=.6);
def get_grid(data, eps=0.01):
x_min, x_max = data[:, 0].min() - 1, data[:, 0].max() + 1
y_min, y_max = data[:, 1].min() - 1, data[:, 1].max() + 1
return np.meshgrid(np.arange(x_min, x_max, eps),
np.arange(y_min, y_max, eps))
tree = DecisionTreeClassifier(random_state=17).fit(X, y)
xx, yy = get_grid(X, eps=.05)
predicted = tree.predict(np.c_[xx.ravel(), yy.ravel()]).reshape(xx.shape)
plt.figure(figsize=(10,10))
plt.pcolormesh(xx, yy, predicted, cmap='autumn', alpha=0.3)
plt.scatter(X[y==1, 0], X[y==1, 1], marker='x', s=100, cmap='autumn', linewidth=1.5)
plt.scatter(X[y==-1, 0], X[y==-1, 1], marker='o', s=100, cmap='autumn', edgecolors='k',linewidth=1.5)
plt.title('Easy task. Decision tree complexifies everything');
# export_graphviz(tree, out_file='complex_tree.dot', filled=True)
# !dot -Tpng 'complex_tree.dot' -o 'complex_tree.png'
```
## 2.5. Decision trees for regression (briefly)
See sklearn.tree.DecisionTreeRegressor; a minimal sketch follows below.
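A minimal sketch on toy data of my own: a regression tree uses the same greedy threshold search, but measures split quality by the variance (MSE) of the targets, and each leaf predicts the mean target of its objects.
```
from sklearn.tree import DecisionTreeRegressor

X_reg = np.linspace(0, 2 * np.pi, 200).reshape(-1, 1)
y_reg = np.sin(X_reg).ravel()

reg = DecisionTreeRegressor(max_depth=3)
reg.fit(X_reg, y_reg)

plt.figure(figsize=(7, 5))
plt.scatter(X_reg, y_reg, s=10, label='data')
plt.plot(X_reg, reg.predict(X_reg), color='red', label='tree prediction (piecewise constant)')
plt.legend();
```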
# 3. Tree ensembles. Random Forest.
What if we have several classifiers (each of them possibly not very *smart*) that make mistakes on different objects?
Then, if we take the *mode* of their predictions, we can expect better predictive power.
### Idea 1
How do we get models that make mistakes in different places?
Take *simple* trees, but train each of them on a **different subset of features**!
### Idea 2
How do we get models that make mistakes in different places?
Take *simple* trees, but train each of them on a **different subsample of objects**!
### The result: a Random Forest.
See sklearn.ensemble.RandomForestClassifier (a minimal example follows below).
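A hedged minimal example combining both ideas on the X, y generated above; the hyperparameter values are illustrative only:
```
from sklearn.ensemble import RandomForestClassifier

forest = RandomForestClassifier(
    n_estimators=100,     # number of trees
    max_features='sqrt',  # Idea 1: a random subset of features is considered at each split
    bootstrap=True,       # Idea 2: each tree is trained on a bootstrap subsample of objects
    random_state=17,
    n_jobs=-1,
)
forest.fit(X, y)
predicted_forest = forest.predict(np.c_[xx.ravel(), yy.ravel()]).reshape(xx.shape)

plt.figure(figsize=(10, 10))
plt.pcolormesh(xx, yy, predicted_forest, cmap='autumn', alpha=0.3)
plt.scatter(X[y==1, 0], X[y==1, 1], marker='x', s=100, linewidth=1.5)
plt.scatter(X[y==-1, 0], X[y==-1, 1], marker='o', s=100, edgecolors='k', linewidth=1.5)
plt.title('Random Forest on the same task');
```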
| github_jupyter |
```
# Boilerplate that all notebooks reuse:
from analysis_common import *
%matplotlib inline
```
# Kernel analysis
```
df = read_ods("./results.ods", "matmul-kernel")
expand_modes(df)
print(df["MODE"].unique())
#############################################
# Disregard the store result for the kernel #
#############################################
df.loc[df["MODE"] == "AD (volatile result)", "MODE"] = "AD"
order = ['DRAM', 'AD', 'AD (in-place FMA)', 'MM (hot)', 'MM (cold)']
hue_order = [7000, 1000]
# Split the two families of experiments
df_rowcol = df[df.MATRIX_SIDE != 0]
df = df[df.MATRIX_SIDE == 0]
sns.barplot(x='MODE', y='TIMING',
data=df[(df.BLOCKSIZE == 1000)],
capsize=0.1,
order=order,
palette=custom_kernel_palette(6))
plt.title("Submatrix size: 1000x1000 (small object)")
plt.xticks(rotation=25, horizontalalignment='right')
plt.show()
sns.barplot(x='MODE', y='TIMING',
data=df[(df.BLOCKSIZE == 7000)],
capsize=0.1,
order=order,
palette=custom_kernel_palette(6))
plt.title("Submatrix size: 7000x7000 (big object)")
plt.xticks(rotation=25, horizontalalignment='right')
plt.show()
###################################
# sns.barplot(x='MODE', y='TIMING',
# data=df_rowcol[(df_rowcol.BLOCKSIZE == 1000)],
# capsize=0.1,
# order=order,
# palette=palette)
# plt.title("BLOCKSIZE: 1k || row x col")
# plt.xticks(rotation=25, horizontalalignment='right')
# plt.show()
# sns.barplot(x='MODE', y='TIMING',
# data=df_rowcol[(df_rowcol.BLOCKSIZE == 7000)],
# capsize=0.1,
# order=order,
# palette=palette)
# plt.title("BLOCKSIZE: 7k || row x col")
# plt.xticks(rotation=25, horizontalalignment='right')
# plt.show()
# Remove MM-NVM as it is outlier-ish
#df = df[df.MODE != 'MM-NVM']
# ... or maybe not? trying set_ylim maybe:
#axes = plt.gca()
#axes.set_ylim([0,1.5])
#plt.title("...")
#plt.show()
df.loc[(df.BLOCKSIZE == 1000), "NORMALIZED"] = df.TIMING
df.loc[(df.BLOCKSIZE == 7000), "NORMALIZED"] = df.TIMING / (7*7*7)
ax = sns.barplot(y='MODE', x='NORMALIZED',
data=df,
capsize=0.1,
order=order,
hue_order=hue_order,
hue="BLOCKSIZE",
palette="muted")
kernel_plot_tweaks(ax, 7*7*7, legend_title="Submatrix blocksize")
plt.savefig("matmul-kernel.pdf", bbox_inches='tight')
plt.show()
kernel_times = df.groupby(["BLOCKSIZE", "MODE"]).min()
kernel_times
#rowcol_times = df_rowcol.groupby(["BLOCKSIZE", "MODE"]).min()
#rowcol_times
```
# Matmul results analysis
```
df = read_ods("./results.ods", "matmul-app")
expand_modes(df)
df
for bs in [1000, 7000]:
df.loc[(df.BLOCKSIZE == bs) & (df.MODE == "DRAM"), "ATOM_KERNEL"] = \
kernel_times.loc[(bs, "DRAM"), "TIMING"]
df.loc[(df.BLOCKSIZE == bs) & (df.MODE == "AD (volatile result)"), "ATOM_KERNEL"] = \
kernel_times.loc[(bs, "AD"), "TIMING"]
df.loc[(df.BLOCKSIZE == bs) & (df.MODE == "AD (store result)"), "ATOM_KERNEL"] = \
kernel_times.loc[(bs, "AD"), "TIMING"]
df.loc[(df.BLOCKSIZE == bs) & (df.MODE == "AD (in-place FMA)"), "ATOM_KERNEL"] = \
kernel_times.loc[(bs, "AD (in-place FMA)"), "TIMING"]
df.loc[(df.BLOCKSIZE == bs) & (df.MODE == "DAOS (volatile result)"), "ATOM_KERNEL"] = \
kernel_times.loc[(bs, "DRAM"), "TIMING"]
df.loc[(df.BLOCKSIZE == bs) & (df.MODE == "DAOS (store result)"), "ATOM_KERNEL"] = \
kernel_times.loc[(bs, "DRAM"), "TIMING"]
df.loc[(df.BLOCKSIZE == 1000)
& (df.MATRIX_SIDE == 42)
& (df.MODE == "MM"),
"ATOM_KERNEL"] = kernel_times.loc[(1000, "MM (hot)"), "TIMING"]
df.loc[(df.BLOCKSIZE == 7000)
& (df.MATRIX_SIDE == 6)
& (df.MODE == "MM"),
"ATOM_KERNEL"] = kernel_times.loc[(7000, "MM (hot)"), "TIMING"]
df.loc[(df.BLOCKSIZE == 1000)
& (df.MATRIX_SIDE == 84)
& (df.MODE == "MM"),
"ATOM_KERNEL"] = kernel_times.loc[(1000, "MM (cold)"), "TIMING"]
df.loc[(df.BLOCKSIZE == 7000)
& (df.MATRIX_SIDE == 12)
& (df.MODE == "MM"),
"ATOM_KERNEL"] = kernel_times.loc[(7000, "MM (cold)"), "TIMING"]
df["KERNEL_TIME"] = df["MATRIX_SIDE"]**3 * df["ATOM_KERNEL"]
# Sanity check
null_values = df[df.isnull().values]
if len(null_values) > 0:
print('There are null values, check null_values variable')
df
```
# Article image generation
```
sns.set(style="whitegrid")
order = ['DRAM', 'AD (volatile result)', 'AD (store result)', 'AD (in-place FMA)',
'MM', 'DAOS (volatile result)', 'DAOS (store result)']
small = (
((df.BLOCKSIZE == 1000) & (df.MATRIX_SIDE == 42)) |
((df.BLOCKSIZE == 7000) & (df.MATRIX_SIDE == 6))
)
big = (
((df.BLOCKSIZE == 1000) & (df.MATRIX_SIDE == 84)) |
((df.BLOCKSIZE == 7000) & (df.MATRIX_SIDE == 12))
)
ax = sns.barplot(y='MODE', x="TIMING",
data=df[small],
capsize=0.1,
order=order,
hue_order=hue_order,
palette="colorblind",
hue=df.BLOCKSIZE)
bottom = sns.barplot(y='MODE', x="KERNEL_TIME",
data=df[small],
capsize=0,
order=order,
hue_order=hue_order,
palette="pastel",
hue=df.BLOCKSIZE)
crop_axis(ax, 800)
ylabel_tweaks(ax, [2, 5], ['non-active', 'active'], 0.40, 0.005)
legend_tweaks(bottom, ["big objects", "small objects", "kernel comp."], placement='upper center')
ax.set_xlabel("execution time (s)")
plt.title("Small dataset")
save_tweaks("matmul-small.pdf", big=True)
plt.show()
ax = sns.barplot(y='MODE', x="TIMING",
data=df[big],
capsize=0.1,
order=order,
hue_order=hue_order,
palette="colorblind",
hue=df.BLOCKSIZE)
annotate_dram(ax)
bottom = sns.barplot(y='MODE', x="KERNEL_TIME",
data=df[big],
capsize=0,
order=order,
hue_order=hue_order,
palette="pastel",
hue=df.BLOCKSIZE)
crop_axis(ax, 6000)
ylabel_tweaks(ax, [2, 5], ['non-active', 'active'], 0.40, 0.005)
legend_tweaks(bottom, ["big objects", "small objects", "kernel comp."], placement='upper center')
ax.set_xlabel("execution time (s)")
plt.title("Big dataset")
save_tweaks("matmul-big.pdf", big=True)
plt.show()
df.groupby(["BLOCKSIZE", "MATRIX_SIDE", "MODE"]).mean()
```
| github_jupyter |
```
import re
import numpy as np
import pandas as pd
import collections
from sklearn import metrics
from sklearn.preprocessing import LabelEncoder
import tensorflow as tf
from sklearn.model_selection import train_test_split
from unidecode import unidecode
from tqdm import tqdm
import time
rules_normalizer = {
'experience': 'pengalaman',
'bagasi': 'bagasi',
'kg': 'kampung',
'kilo': 'kilogram',
'g': 'gram',
'grm': 'gram',
'k': 'okay',
'abgkat': 'abang dekat',
'abis': 'habis',
'ade': 'ada',
'adoi': 'aduh',
'adoii': 'aduhh',
'aerodarat': 'kapal darat',
'agkt': 'angkat',
'ahh': 'ah',
'ailior': 'air liur',
'airasia': 'air asia x',
'airasiax': 'penerbangan',
'airline': 'penerbangan',
'airlines': 'penerbangan',
'airport': 'lapangan terbang',
'airpot': 'lapangan terbang',
'aje': 'sahaja',
'ajelah': 'sahajalah',
'ajer': 'sahaja',
'ak': 'aku',
'aq': 'aku',
'all': 'semua',
'ambik': 'ambil',
'amek': 'ambil',
'amer': 'amir',
'amik': 'ambil',
'ana': 'saya',
'angkt': 'angkat',
'anual': 'tahunan',
'apapun': 'apa pun',
'ape': 'apa',
'arab': 'arab',
'area': 'kawasan',
'aritu': 'hari itu',
'ask': 'tanya',
'astro': 'astro',
'at': 'pada',
'attitude': 'sikap',
'babi': 'khinzir',
'back': 'belakang',
'bag': 'beg',
'bang': 'abang',
'bangla': 'bangladesh',
'banyk': 'banyak',
'bard': 'pujangga',
'bargasi': 'bagasi',
'bawak': 'bawa',
'bawanges': 'bawang',
'be': 'jadi',
'behave': 'berkelakuan baik',
'belagak': 'berlagak',
'berdisiplin': 'berdisplin',
'berenti': 'berhenti',
'beskal': 'basikal',
'bff': 'rakan karib',
'bg': 'bagi',
'bgi': 'bagi',
'biase': 'biasa',
'big': 'besar',
'bike': 'basikal',
'bile': 'bila',
'binawe': 'binatang',
'bini': 'isteri',
'bkn': 'bukan',
'bla': 'bila',
'blom': 'belum',
'bnyak': 'banyak',
'body': 'tubuh',
'bole': 'boleh',
'boss': 'bos',
'bowling': 'boling',
'bpe': 'berapa',
'brand': 'jenama',
'brg': 'barang',
'briefing': 'taklimat',
'brng': 'barang',
'bro': 'abang',
'bru': 'baru',
'bruntung': 'beruntung',
'bsikal': 'basikal',
'btnggjwb': 'bertanggungjawab',
'btul': 'betul',
'buatlh': 'buatlah',
'buh': 'letak',
'buka': 'buka',
'but': 'tetapi',
'bwk': 'bawa',
'by': 'dengan',
'byr': 'bayar',
'bz': 'sibuk',
'camera': 'kamera',
'camni': 'macam ini',
'cane': 'macam mana',
'cant': 'tak boleh',
'carakerja': 'cara kerja',
'care': 'jaga',
'cargo': 'kargo',
'cctv': 'kamera litar tertutup',
'celako': 'celaka',
'cer': 'cerita',
'cheap': 'murah',
'check': 'semak',
'ciput': 'sedikit',
'cite': 'cerita',
'citer': 'cerita',
'ckit': 'sikit',
'ckp': 'cakap',
'class': 'kelas',
'cm': 'macam',
'cmni': 'macam ini',
'cmpak': 'campak',
'committed': 'komited',
'company': 'syarikat',
'complain': 'aduan',
'corn': 'jagung',
'couldnt': 'tak boleh',
'cr': 'cari',
'crew': 'krew',
'cube': 'cuba',
'cuma': 'cuma',
'curinyaa': 'curinya',
'cust': 'pelanggan',
'customer': 'pelanggan',
'd': 'di',
'da': 'dah',
'dn': 'dan',
'dahh': 'dah',
'damaged': 'rosak',
'dapek': 'dapat',
'day': 'hari',
'dazrin': 'dazrin',
'dbalingnya': 'dibalingnya',
'de': 'ada',
'deep': 'dalam',
'deliberately': 'sengaja',
'depa': 'mereka',
'dessa': 'desa',
'dgn': 'dengan',
'dh': 'dah',
'didunia': 'di dunia',
'diorang': 'mereka',
'diorng': 'mereka',
'direct': 'secara terus',
'diving': 'junam',
'dkt': 'dekat',
'dlempar': 'dilempar',
'dlm': 'dalam',
'dlt': 'padam',
'dlu': 'dulu',
'done': 'siap',
'dont': 'jangan',
'dorg': 'mereka',
'dpermudhkn': 'dipermudahkan',
'dpt': 'dapat',
'dr': 'dari',
'dri': 'dari',
'dsb': 'dan sebagainya',
'dy': 'dia',
'educate': 'mendidik',
'ensure': 'memastikan',
'everything': 'semua',
'ewahh': 'wah',
'expect': 'sangka',
'fb': 'facebook',
'fired': 'pecat',
'first': 'pertama',
'fkr': 'fikir',
'flight': 'kapal terbang',
'for': 'untuk',
'free': 'percuma',
'friend': 'kawan',
'fyi': 'untuk pengetahuan anda',
'gantila': 'gantilah',
'gantirugi': 'ganti rugi',
'gentlemen': 'lelaki budiman',
'gerenti': 'jaminan',
'gile': 'gila',
'gk': 'juga',
'gnti': 'ganti',
'go': 'pergi',
'gomen': 'kerajaan',
'goment': 'kerajaan',
'good': 'baik',
'ground': 'tanah',
'guarno': 'macam mana',
'hampa': 'mereka',
'hampeh': 'teruk',
'hanat': 'jahanam',
'handle': 'kawal',
'handling': 'kawalan',
'hanta': 'hantar',
'haritu': 'hari itu',
'hate': 'benci',
'have': 'ada',
'hawau': 'celaka',
'henpon': 'telefon',
'heran': 'hairan',
'him': 'dia',
'his': 'dia',
'hmpa': 'mereka',
'hntr': 'hantar',
'hotak': 'otak',
'hr': 'hari',
'i': 'saya',
'hrga': 'harga',
'hrp': 'harap',
'hu': 'sedih',
'humble': 'merendah diri',
'ibon': 'ikon',
'ichi': 'inci',
'idung': 'hidung',
'if': 'jika',
'ig': 'instagram',
'iklas': 'ikhlas',
'improve': 'menambah baik',
'in': 'masuk',
'isn t': 'tidak',
'isyaallah': 'insyallah',
'ja': 'sahaja',
'japan': 'jepun',
'jd': 'jadi',
'je': 'saja',
'jee': 'saja',
'jek': 'saja',
'jepun': 'jepun',
'jer': 'saja',
'jerr': 'saja',
'jez': 'saja',
'jg': 'juga',
'jgk': 'juga',
'jgn': 'jangan',
'jgnla': 'janganlah',
'jibake': 'celaka',
'jjur': 'jujur',
'job': 'kerja',
'jobscope': 'skop kerja',
'jogja': 'jogjakarta',
'jpam': 'jpam',
'jth': 'jatuh',
'jugak': 'juga',
'ka': 'ke',
'kalo': 'kalau',
'kalu': 'kalau',
'kang': 'nanti',
'kantoi': 'temberang',
'kasi': 'beri',
'kat': 'dekat',
'kbye': 'ok bye',
'kearah': 'ke arah',
'kecik': 'kecil',
'keja': 'kerja',
'keje': 'kerja',
'kejo': 'kerja',
'keksongan': 'kekosongan',
'kemana': 'ke mana',
'kene': 'kena',
'kenekan': 'kenakan',
'kesah': 'kisah',
'ketempat': 'ke tempat',
'kije': 'kerja',
'kijo': 'kerja',
'kiss': 'cium',
'kite': 'kita',
'kito': 'kita',
'kje': 'kerja',
'kjr': 'kerja',
'kk': 'okay',
'kmi': 'kami',
'kt': 'kat',
'tlg': 'tolong',
'kl': 'kuala lumpur',
'klai': 'kalau',
'klau': 'kalau',
'klia': 'klia',
'klo': 'kalau',
'klu': 'kalau',
'kn': 'kan',
'knapa': 'kenapa',
'kne': 'kena',
'ko': 'kau',
'kompom': 'sah',
'korang': 'kamu semua',
'korea': 'korea',
'korg': 'kamu semua',
'kot': 'mungkin',
'krja': 'kerja',
'ksalahan': 'kesalahan',
'kta': 'kita',
'kuar': 'keluar',
'kut': 'mungkin',
'la': 'lah',
'laa': 'lah',
'lahabau': 'celaka',
'lahanat': 'celaka',
'lainda': 'lain dah',
'lak': 'pula',
'last': 'akhir',
'le': 'lah',
'leader': 'ketua',
'leave': 'pergi',
'ler': 'lah',
'less': 'kurang',
'letter': 'surat',
'lg': 'lagi',
'lgi': 'lagi',
'lngsong': 'langsung',
'lol': 'hehe',
'lorr': 'lah',
'low': 'rendah',
'lps': 'lepas',
'luggage': 'bagasi',
'lumbe': 'lumba',
'lyak': 'layak',
'maap': 'maaf',
'maapkan': 'maafkan',
'mahai': 'mahal',
'mampos': 'mampus',
'mart': 'kedai',
'mau': 'mahu',
'mcm': 'macam',
'mcmtu': 'macam itu',
'memerlukn': 'memerlukan',
'mengembirakan': 'menggembirakan',
'mengmbilnyer': 'mengambilnya',
'mengtasi': 'mengatasi',
'mg': 'memang',
'mihak': 'memihak',
'min': 'admin',
'mingu': 'minggu',
'mintak': 'minta',
'mjtuhkn': 'menjatuhkan',
'mkyong': 'mak yong',
'mlibatkn': 'melibatkan',
'mmg': 'memang',
'mmnjang': 'memanjang',
'mmpos': 'mampus',
'mn': 'mana',
'mna': 'mana',
'mntak': 'minta',
'mntk': 'minta',
'mnyusun': 'menyusun',
'mood': 'suasana',
'most': 'paling',
'mr': 'tuan',
'msa': 'masa',
'msia': 'malaysia',
'mst': 'mesti',
'mu': 'awak',
'much': 'banyak',
'muko': 'muka',
'mum': 'emak',
'n': 'dan',
'nah': 'nah',
'nanny': 'nenek',
'napo': 'kenapa',
'nati': 'nanti',
'ngan': 'dengan',
'ngn': 'dengan',
'ni': 'ini',
'nie': 'ini',
'nii': 'ini',
'nk': 'nak',
'nmpk': 'nampak',
'nye': 'nya',
'ofis': 'pejabat',
'ohh': 'oh',
'oii': 'hoi',
'one': 'satu',
'online': 'dalam talian',
'or': 'atau',
'org': 'orang',
'orng': 'orang',
'otek': 'otak',
'p': 'pergi',
'paid': 'dah bayar',
'palabana': 'kepala otak',
'pasni': 'lepas ini',
'passengers': 'penumpang',
'passengger': 'penumpang',
'pastu': 'lepas itu',
'pd': 'pada',
'pegi': 'pergi',
'pekerje': 'pekerja',
'pekrja': 'pekerja',
'perabih': 'perabis',
'perkerja': 'pekerja',
'pg': 'pergi',
'phuii': 'puih',
'pikir': 'fikir',
'pilot': 'juruterbang',
'pk': 'fikir',
'pkerja': 'pekerja',
'pkerjaan': 'pekerjaan',
'pki': 'pakai',
'please': 'tolong',
'pls': 'tolong',
'pn': 'pun',
'pnh': 'pernah',
'pnt': 'penat',
'pnya': 'punya',
'pon': 'pun',
'priority': 'keutamaan',
'properties': 'harta benda',
'ptugas': 'petugas',
'pub': 'kelab malam',
'pulak': 'pula',
'puye': 'punya',
'pwrcuma': 'percuma',
'pyahnya': 'payahnya',
'quality': 'kualiti',
'quit': 'keluar',
'ramly': 'ramly',
'rege': 'harga',
'reger': 'harga',
'report': 'laporan',
'resigned': 'meletakkan jawatan',
'respect': 'hormat',
'rizal': 'rizal',
'rosak': 'rosak',
'rosok': 'rosak',
'rse': 'rasa',
'sacked': 'buang',
'sado': 'tegap',
'salute': 'sanjung',
'sam': 'sama',
'same': 'sama',
'samp': 'sampah',
'sbb': 'sebab',
'sbgai': 'sebagai',
'sblm': 'sebelum',
'sblum': 'sebelum',
'sbnarnya': 'sebenarnya',
'sbum': 'sebelum',
'sdg': 'sedang',
'sebb': 'sebab',
'sebijik': 'sebiji',
'see': 'lihat',
'seen': 'dilihat',
'selangor': 'selangor',
'selfie': 'swafoto',
'sempoi': 'cantik',
'senaraihitam': 'senarai hitam',
'seorg': 'seorang',
'service': 'perkhidmatan',
'sgt': 'sangat',
'shared': 'kongsi',
'shirt': 'kemeja',
'shut': 'tutup',
'sib': 'nasib',
'skali': 'sekali',
'sket': 'sikit',
'sma': 'sama',
'smoga': 'semoga',
'smpoi': 'cantik',
'sndiri': 'sendiri',
'sndr': 'sendiri',
'sndri': 'sendiri',
'sne': 'sana',
'so': 'jadi',
'sop': 'tatacara pengendalian piawai',
'sorang': 'seorang',
'spoting': 'pembintikan',
'sronok': 'seronok',
'ssh': 'susah',
'staff': 'staf',
'standing': 'berdiri',
'start': 'mula',
'steady': 'mantap',
'stiap': 'setiap',
'stress': 'stres',
'student': 'pelajar',
'study': 'belajar',
'studycase': 'kajian kes',
'sure': 'pasti',
'sykt': 'syarikat',
'tah': 'entah',
'taik': 'tahi',
'takan': 'tak akan',
'takat': 'setakat',
'takde': 'tak ada',
'takkan': 'tak akan',
'taknak': 'tak nak',
'tang': 'tentang',
'tanggungjawab': 'bertanggungjawab',
'taraa': 'sementara',
'tau': 'tahu',
'tbabit': 'terbabit',
'team': 'pasukan',
'terbaekk': 'terbaik',
'teruknye': 'teruknya',
'tgk': 'tengok',
'that': 'itu',
'thinking': 'fikir',
'those': 'itu',
'time': 'masa',
'tk': 'tak',
'tnggongjwb': 'tanggungjawab',
'tngok': 'tengok',
'tngu': 'tunggu',
'to': 'kepada',
'tosak': 'rosak',
'tp': 'tapi',
'tpi': 'tapi',
'tpon': 'telefon',
'transfer': 'pindah',
'trgelak': 'tergelak',
'ts': 'tan sri',
'tstony': 'tan sri tony',
'tu': 'itu',
'tuh': 'itu',
'tula': 'itulah',
'umeno': 'umno',
'unfortunately': 'malangnya',
'unhappy': 'tidak gembira',
'up': 'naik',
'upkan': 'naikkan',
'ur': 'awak',
'utk': 'untuk',
'very': 'sangat',
'viral': 'tular',
'vote': 'undi',
'warning': 'amaran',
'warranty': 'waranti',
'wassap': 'whatsapp',
'wat': 'apa',
'weii': 'wei',
'well': 'maklumlah',
'win': 'menang',
'with': 'dengan',
'wt': 'buat',
'x': 'tak',
'tw': 'tahu',
'ye': 'ya',
'yee': 'ya',
'yg': 'yang',
'yng': 'yang',
'you': 'awak',
'your': 'awak',
'sakai': 'selekeh',
'rmb': 'billion ringgit',
'rmj': 'juta ringgit',
'rmk': 'ribu ringgit',
'rm': 'ringgit',
}
permulaan = [
'bel',
'se',
'ter',
'men',
'meng',
'mem',
'memper',
'di',
'pe',
'me',
'ke',
'ber',
'pen',
'per',
]
hujung = ['kan', 'kah', 'lah', 'tah', 'nya', 'an', 'wan', 'wati', 'ita']
def naive_stemmer(word):
assert isinstance(word, str), 'input must be a string'
hujung_result = [e for e in hujung if word.endswith(e)]
if len(hujung_result):
hujung_result = max(hujung_result, key = len)
if len(hujung_result):
word = word[: -len(hujung_result)]
permulaan_result = [e for e in permulaan if word.startswith(e)]
if len(permulaan_result):
permulaan_result = max(permulaan_result, key = len)
if len(permulaan_result):
word = word[len(permulaan_result) :]
return word
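# build_dataset builds the vocabulary: indices 0-3 are reserved for the special tokens
# GO/PAD/EOS/UNK, the remaining words are ranked by frequency, and words missing from
# the dictionary are later mapped to UNK (index 3).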
def build_dataset(words, n_words):
count = [['GO', 0], ['PAD', 1], ['EOS', 2], ['UNK', 3]]
counter = collections.Counter(words).most_common(n_words)
count.extend(counter)
dictionary = dict()
for word, _ in count:
dictionary[word] = len(dictionary)
data = list()
unk_count = 0
for word in words:
        index = dictionary.get(word, 3)
        if index == 3:  # UNK
            unk_count += 1
        data.append(index)
    count[3][1] = unk_count
reversed_dictionary = dict(zip(dictionary.values(), dictionary.keys()))
return data, count, dictionary, reversed_dictionary
def classification_textcleaning(string):
string = re.sub(
'http\S+|www.\S+',
'',
' '.join(
[i for i in string.split() if i.find('#') < 0 and i.find('@') < 0]
),
)
string = unidecode(string).replace('.', ' . ').replace(',', ' , ')
string = re.sub('[^A-Za-z ]+', ' ', string)
string = re.sub(r'[ ]+', ' ', string.lower()).strip()
string = [rules_normalizer.get(w, w) for w in string.split()]
string = [naive_stemmer(word) for word in string]
return ' '.join([word for word in string if len(word) > 1])
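# str_idx turns cleaned sentences into a fixed-width matrix of word ids: the first
# `maxlen` tokens are kept, placed right-aligned (left-padded with zeros), and
# out-of-vocabulary tokens fall back to the UNK id (3).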
def str_idx(corpus, dic, maxlen, UNK = 3):
X = np.zeros((len(corpus), maxlen))
for i in range(len(corpus)):
for no, k in enumerate(corpus[i].split()[:maxlen][::-1]):
X[i, -1 - no] = dic.get(k, UNK)
return X
classification_textcleaning('kerajaan sebenarnya sangat bencikan rakyatnya, minyak naik dan segalanya')
df = pd.read_csv('sentiment-data-v2.csv')
Y = LabelEncoder().fit_transform(df.label)
with open('polarity-negative-translated.txt','r') as fopen:
texts = fopen.read().split('\n')
labels = [0] * len(texts)
with open('polarity-positive-translated.txt','r') as fopen:
positive_texts = fopen.read().split('\n')
labels += [1] * len(positive_texts)
texts += positive_texts
texts += df.iloc[:,1].tolist()
labels += Y.tolist()
assert len(labels) == len(texts)
import json
with open('bm-amazon.json') as fopen:
amazon = json.load(fopen)
with open('bm-imdb.json') as fopen:
imdb = json.load(fopen)
with open('bm-yelp.json') as fopen:
yelp = json.load(fopen)
texts += amazon['negative']
labels += [0] * len(amazon['negative'])
texts += amazon['positive']
labels += [1] * len(amazon['positive'])
texts += imdb['negative']
labels += [0] * len(imdb['negative'])
texts += imdb['positive']
labels += [1] * len(imdb['positive'])
texts += yelp['negative']
labels += [0] * len(yelp['negative'])
texts += yelp['positive']
labels += [1] * len(yelp['positive'])
import os
for i in [i for i in os.listdir('negative') if 'Store' not in i]:
with open('negative/'+i) as fopen:
a = json.load(fopen)
texts += a
labels += [0] * len(a)
import os
for i in [i for i in os.listdir('positive') if 'Store' not in i]:
with open('positive/'+i) as fopen:
a = json.load(fopen)
texts += a
labels += [1] * len(a)
for i in range(len(texts)):
texts[i] = classification_textcleaning(texts[i])
concat = ' '.join(texts).split()
vocabulary_size = len(list(set(concat)))
data, count, dictionary, rev_dictionary = build_dataset(concat, vocabulary_size)
print('vocab from size: %d'%(vocabulary_size))
print('Most common words', count[4:10])
print('Sample data', data[:10], [rev_dictionary[i] for i in data[:10]])
max_features = len(dictionary)
maxlen = 100
batch_size = 32
embedded_size = 256
train_X, test_X, train_Y, test_Y = train_test_split(texts,
labels,
test_size = 0.2)
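# A fastText-style classifier: embed the tokens, average the embeddings over the
# sequence, and feed the result to a single dense layer that outputs the class logits.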
class Model:
def __init__(
self, embedded_size, dict_size, dimension_output, learning_rate
):
self.X = tf.placeholder(tf.int32, [None, None])
self.Y = tf.placeholder(tf.int32, [None])
encoder_embeddings = tf.Variable(
tf.random_uniform([dict_size, embedded_size], -1, 1)
)
encoder_embedded = tf.nn.embedding_lookup(encoder_embeddings, self.X)
self.logits = tf.identity(
tf.layers.dense(
tf.reduce_mean(encoder_embedded, 1), dimension_output
),
name = 'logits',
)
self.cost = tf.reduce_mean(
tf.nn.sparse_softmax_cross_entropy_with_logits(
logits = self.logits, labels = self.Y
)
)
self.optimizer = tf.train.AdamOptimizer(learning_rate).minimize(
self.cost
)
correct_pred = tf.equal(
tf.argmax(self.logits, 1, output_type = tf.int32), self.Y
)
self.accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
tf.reset_default_graph()
sess = tf.InteractiveSession()
model = Model(embedded_size, max_features, 2, 5e-4)
sess.run(tf.global_variables_initializer())
saver = tf.train.Saver(tf.trainable_variables())
saver.save(sess, 'fast-text/model.ckpt')
strings = ','.join(
[
n.name
for n in tf.get_default_graph().as_graph_def().node
if ('Variable' in n.op
or 'Placeholder' in n.name
or 'logits' in n.name)
and 'Adam' not in n.name
and 'beta' not in n.name
]
)
strings.split(',')
tf.trainable_variables()
from tqdm import tqdm
import time
EARLY_STOPPING, CURRENT_CHECKPOINT, CURRENT_ACC, EPOCH = 3, 0, 0, 0
while True:
lasttime = time.time()
if CURRENT_CHECKPOINT == EARLY_STOPPING:
print('break epoch:%d\n' % (EPOCH))
break
train_acc, train_loss, test_acc, test_loss = 0, 0, 0, 0
pbar = tqdm(
range(0, len(train_X), batch_size), desc = 'train minibatch loop'
)
for i in pbar:
batch_x = str_idx(train_X[i : min(i + batch_size, len(train_X))], dictionary, maxlen)
batch_y = train_Y[i : min(i + batch_size, len(train_X))]
batch_x_expand = np.expand_dims(batch_x,axis = 1)
acc, cost, _ = sess.run(
[model.accuracy, model.cost, model.optimizer],
feed_dict = {
model.Y: batch_y,
model.X: batch_x
},
)
assert not np.isnan(cost)
train_loss += cost
train_acc += acc
pbar.set_postfix(cost = cost, accuracy = acc)
pbar = tqdm(range(0, len(test_X), batch_size), desc = 'test minibatch loop')
for i in pbar:
batch_x = str_idx(test_X[i : min(i + batch_size, len(test_X))], dictionary, maxlen)
batch_y = test_Y[i : min(i + batch_size, len(test_X))]
batch_x_expand = np.expand_dims(batch_x,axis = 1)
acc, cost = sess.run(
[model.accuracy, model.cost],
feed_dict = {
model.Y: batch_y,
model.X: batch_x
},
)
test_loss += cost
test_acc += acc
pbar.set_postfix(cost = cost, accuracy = acc)
train_loss /= len(train_X) / batch_size
train_acc /= len(train_X) / batch_size
test_loss /= len(test_X) / batch_size
test_acc /= len(test_X) / batch_size
if test_acc > CURRENT_ACC:
print(
'epoch: %d, pass acc: %f, current acc: %f'
% (EPOCH, CURRENT_ACC, test_acc)
)
CURRENT_ACC = test_acc
CURRENT_CHECKPOINT = 0
else:
CURRENT_CHECKPOINT += 1
print('time taken:', time.time() - lasttime)
print(
'epoch: %d, training loss: %f, training acc: %f, valid loss: %f, valid acc: %f\n'
% (EPOCH, train_loss, train_acc, test_loss, test_acc)
)
EPOCH += 1
real_Y, predict_Y = [], []
pbar = tqdm(
range(0, len(test_X), batch_size), desc = 'validation minibatch loop'
)
for i in pbar:
batch_x = str_idx(test_X[i : min(i + batch_size, len(test_X))], dictionary, maxlen)
batch_y = test_Y[i : min(i + batch_size, len(test_X))]
predict_Y += np.argmax(
sess.run(
model.logits, feed_dict = {model.X: batch_x, model.Y: batch_y}
),
1,
).tolist()
real_Y += batch_y
saver.save(sess, 'fast-text/model.ckpt')
print(
metrics.classification_report(
real_Y, predict_Y, target_names = ['negative', 'positive']
)
)
text = 'kerajaan sebenarnya sangat bencikan rakyatnya, minyak naik dan segalanya'
new_vector = str_idx([classification_textcleaning(text)], dictionary, len(text.split()))
sess.run(tf.nn.softmax(model.logits), feed_dict={model.X:new_vector})
import json
with open('fast-text-sentiment.json','w') as fopen:
fopen.write(json.dumps({'dictionary':dictionary,'reverse_dictionary':rev_dictionary}))
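# freeze_graph loads the latest checkpoint from `model_dir`, folds the trained variables
# into constants, and writes a standalone frozen_model.pb next to the checkpoint files.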
def freeze_graph(model_dir, output_node_names):
if not tf.gfile.Exists(model_dir):
raise AssertionError(
"Export directory doesn't exists. Please specify an export "
'directory: %s' % model_dir
)
checkpoint = tf.train.get_checkpoint_state(model_dir)
input_checkpoint = checkpoint.model_checkpoint_path
absolute_model_dir = '/'.join(input_checkpoint.split('/')[:-1])
output_graph = absolute_model_dir + '/frozen_model.pb'
clear_devices = True
with tf.Session(graph = tf.Graph()) as sess:
saver = tf.train.import_meta_graph(
input_checkpoint + '.meta', clear_devices = clear_devices
)
saver.restore(sess, input_checkpoint)
output_graph_def = tf.graph_util.convert_variables_to_constants(
sess,
tf.get_default_graph().as_graph_def(),
output_node_names.split(','),
)
with tf.gfile.GFile(output_graph, 'wb') as f:
f.write(output_graph_def.SerializeToString())
print('%d ops in the final graph.' % len(output_graph_def.node))
freeze_graph('fast-text', strings)
def load_graph(frozen_graph_filename):
with tf.gfile.GFile(frozen_graph_filename, 'rb') as f:
graph_def = tf.GraphDef()
graph_def.ParseFromString(f.read())
with tf.Graph().as_default() as graph:
tf.import_graph_def(graph_def)
return graph
g = load_graph('fast-text/frozen_model.pb')
x = g.get_tensor_by_name('import/Placeholder:0')
logits = g.get_tensor_by_name('import/logits:0')
test_sess = tf.InteractiveSession(graph = g)
test_sess.run(tf.nn.softmax(logits), feed_dict = {x: new_vector})
```
| github_jupyter |