path | concatenated_notebook
---|---
reading_assignments/questions/3_Note-Classification.ipynb | ###Markdown
$\newcommand{\xv}{\mathbf{x}} \newcommand{\wv}{\mathbf{w}} \newcommand{\Chi}{\mathcal{X}} \newcommand{\R}{\rm I\!R} \newcommand{\sign}{\text{sign}} \newcommand{\Tm}{\mathbf{T}} \newcommand{\Xm}{\mathbf{X}}$ Gaussian DistributionHow can we model data? What is a good representation? One simple statistic is the mean, which describes the center of the data points: $$ \mu = \frac{1}{N} \sum_i^N x_i$$Beyond the center point, we want a model that shows how the data is spread around the center. The model we want is highest at the center and decreases as we move away from it. The model outputs are also expected to be positive. So, we can think of $$m(x) = \frac{1}{\Vert x - \mu \Vert}$$
###Code
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
xs = np.linspace(-5,5,1000)
mu = 0
plt.plot(xs, 1/np.linalg.norm((xs - mu).reshape((-1, 1)), axis=1))
plt.ylim(0,20)
plt.plot([mu, mu], [0, 20], 'r--',lw=2)
plt.xlabel('$x$')
plt.ylabel('$m(x)$');
###Output
_____no_output_____
###Markdown
The model meets our needs: 1. positive values, 2. centered around the mean and decreasing when moving away from the center. The problem with this model, however, is the infinite value at the mean. We can move the distance into an exponent so that the denominator never becomes zero: $$m(x) = \frac{1}{2^{\Vert x - \mu \Vert}}$$
###Code
plt.plot(xs, 1/2**np.linalg.norm((xs - mu).reshape((-1, 1)), axis=1))
plt.plot([mu, mu], [0, 1.1], 'r--',lw=2)
plt.xlabel('$x$')
plt.ylabel('$m(x)$');
###Output
_____no_output_____
###Markdown
This drops too fast, and the kink at the center (the derivative is discontinuous there) is not desirable. We can square the distance term to slow the decay near the center:$$m(x) = \frac{1}{2^{\Vert x - \mu \Vert^2}}$$
###Code
plt.plot(xs, 1/2**((xs - mu)** 2))
plt.plot([mu, mu], [0, 1.1], 'r--',lw=2)
plt.xlabel('$x$')
plt.ylabel('$m(x)$');
###Output
_____no_output_____
###Markdown
Can we control the drop rate? Let us add a coefficient to the exponent:$$m(x) = \frac{1}{2^{c \Vert x - \mu \Vert^2}}$$
###Code
c = 0.4
plt.plot(xs, 1/2**(c * (xs - mu)** 2))
plt.plot([mu, mu], [0, 1.1], 'r--',lw=2)
plt.xlabel('$x$')
plt.ylabel('$m(x)$');
###Output
_____no_output_____
###Markdown
Now we have reached the shape of the model we wanted, and we have a scalar to control the spread. The exponential base 2 is awkward for later manipulation and differentiation, so for mathematical convenience let us change the base to $e$:$$m(x) = \frac{1}{e^{c \Vert x - \mu \Vert^2}}$$
###Code
c = 0.4
plt.plot(xs, 1/np.exp(c * (xs - mu)** 2))
plt.plot([mu, mu], [0, 1.1], 'r--',lw=2)
plt.xlabel('$x$')
plt.ylabel('$m(x)$')
###Output
_____no_output_____
###Markdown
The scalar $c$ spreads the curve out when it is small and narrows it when it is large. So, we define $\sigma = c^{-\frac{1}{2}}$ for a more intuitive interpretation: now, when $\sigma$ grows, the curve widens, describing a wider spread of data. $$m(x) = \frac{1}{e^{\sigma^{-2} \Vert x - \boldsymbol\mu \Vert^2}} = \frac{1}{ e^{ \frac{ \Vert x - \boldsymbol\mu \Vert^2}{\sigma^2} } } = e^{ -\big( \frac{ \Vert x - \boldsymbol\mu \Vert}{\sigma} \big)^2 } $$Looking ahead to taking derivatives, the squared term will produce a factor of 2. To cancel this out, we can multiply the exponent by $\frac{1}{2}$: $$m(x) = e^{ - \frac{1}{2} \big( \frac{ \Vert x - \boldsymbol\mu \Vert}{\sigma} \big)^2 } $$To represent a probability distribution over the data, we need to scale $m(x)$ to a $p(x)$ that satisfies - $0 < p(x) < 1 $, - $\int_{-\infty}^{+\infty} p(x) dx = 1$.Thus, we arrive at the Gaussian distribution function: $$p(x) = \frac{1}{\sqrt{2\pi\sigma^2}} e^{ - \frac{1}{2} \big( \frac{ \Vert x - \boldsymbol\mu \Vert}{\sigma} \big)^2 } $$Here we call $\mu$ the mean and $\sigma$ the standard deviation. Multivariate Normal Distributionhttps://upload.wikimedia.org/wikipedia/commons/8/8e/MultivariateNormal.pngWe can generalize the previous 1-dimensional Gaussian distribution to multiple dimensions. For the variables in $\xv$, we also need to consider how they are correlated with each other.For instance, in two dimensions the deviation from the mean is $ \boldsymbol\delta = \xv - \boldsymbol\mu = (\delta_1, \delta_2)$ with squared distance $$\Vert \boldsymbol\delta \Vert^2 = \delta_1^2 + \delta_2^2.$$To also capture how $\delta_1$ and $\delta_2$ vary together, we add a cross term and weight each term with a scalar, $(c_1, c_2, c_3)$:$$c_1 \delta_1^2 + 2 c_2 \delta_1\delta_2 + c_3 \delta_2^2.$$Writing this as a matrix, we can define$$\boldsymbol\Sigma = \begin{bmatrix} c_1 & c_2 \\ c_2 & c_3 \end{bmatrix}.$$Now, we can extend $\Vert \boldsymbol\delta \Vert^2 = \boldsymbol\delta^\top \boldsymbol\delta$ with the coefficients, $$ \boldsymbol\delta^\top \boldsymbol\Sigma \boldsymbol\delta = c_1 \delta_1^2 + 2 c_2 \delta_1\delta_2 + c_3 \delta_2^2.$$In the 1-D Gaussian we divide the squared distance by the variance, so here the weighting matrix appears as the inverse covariance $\boldsymbol\Sigma^{-1}$ rather than $\boldsymbol\Sigma$. Based on this, converting the normal density to the multidimensional case, we get $$p(\xv) = \frac{1}{ (2\pi)^{\frac{d}{2}} \vert \boldsymbol\Sigma \vert^{\frac{1}{2}}} e^{ - \frac{1}{2} (\xv - \boldsymbol\mu)^\top \boldsymbol\Sigma^{-1} (\xv - \boldsymbol\mu) }.$$
###Code
def normald(X, mu, sigma):
""" normald:
X contains samples, one per row, N x D.
mu is mean vector, D x 1.
sigma is covariance matrix, D x D. """
D = X.shape[1]
detSigma = sigma if D == 1 else np.linalg.det(sigma)
if detSigma == 0:
raise np.linalg.LinAlgError('normald(): Singular covariance matrix')
sigmaI = 1.0/sigma if D == 1 else np.linalg.inv(sigma)
normConstant = 1.0 / np.sqrt((2*np.pi)**D * detSigma)
diffv = X - mu.T # change column vector mu to be row vector
return normConstant * np.exp(-0.5 * np.sum(np.dot(diffv, sigmaI) * diffv, axis=1))[:,np.newaxis]
X = np.array([[1,2],[3,5],[2.1,1.9]])
mu = np.array([[2],[2]])
Sigma = np.array([[1,0],[0,1]])
print(X)
print(mu)
print(Sigma)
normald(X, mu, Sigma)
###Output
[[ 1. 2. ]
[ 3. 5. ]
[ 2.1 1.9]]
[[2]
[2]]
[[1 0]
[0 1]]
###Markdown
Generative ModelA linear model such as the perceptron produces the class labels directly as its outputs. An alternative approach, called the generative model, builds a model that can *generate* values for both the observations and the targets. Typically, generative models are probabilistic: they estimate the joint distribution $P(X, T)$ of the input $X$ and the target labels $T$. Bayes' rule is frequently applied to compute the joint distribution from the conditional probability. $$ P(X, T) = P(X \mid T) P(T) = P(T \mid X) P(X)$$$$ P(T \mid X) = \frac{P(X \mid T) P(T)}{P(X)}$$Based on this, we can build a regression or classification model that estimates the target $T$ given the input $X$.The probabilistic model of the inputs and outputs can also give additional information, such as the ability to sample new data. $\newcommand{\xv}{\mathbf{x}} \newcommand{\wv}{\mathbf{w}} \newcommand{\Chi}{\mathcal{X}} \newcommand{\R}{\rm I\!R} \newcommand{\sign}{\text{sign}} \newcommand{\Tm}{\mathbf{T}} \newcommand{\Xm}{\mathbf{X}}$ Discriminant Analysis Bayes Rule for ClassificationPreviously, we discussed the generative model and Bayes' rule for supervised learning. For the given data, we assume the target $T$ is discrete as before.For instance, in the MNIST dataset, we observe various image inputs $X$, and what we want to know is the probability of each class given the data, thus $$ P(T = 2 \mid X = x_i) \quad\text{or}\quad P(T = k \mid X = x_i) \quad\text{for class label } k$$From the sample data, we must know, or be able to model, the class-conditional distribution $P(X = x_i \mid T = k)$.Assuming equally sampled data (ten digit classes and $N$ sample images), $$P(T=k) = \frac{1}{10} \\\\P(X=x_i) = \frac{1}{N} $$where $N$ is the number of sample images. Using Bayes' rule, $$\begin{align*}P(T = k \mid X = x_i) &= \frac{P(X = x_i \mid T = k) P(T=k) } {P(X=x_i)} \\ \\ &= \frac{P(X = x_i \mid T = k) \frac{1}{10}}{\frac{1}{N}} = \frac{N}{10} P(X = x_i \mid T = k)\end{align*}$$ Choice of LikelihoodNow, how do we get $P(X = x_i \mid T = k)$? One good assumption is the Gaussian model (normal distribution). Because of its mathematical tractability and the central limit theorem, the Gaussian assumption is popular:$$p(\xv \mid T = k) = \frac{1}{(2\pi)^{\frac{d}{2}} \vert \boldsymbol\Sigma_k \vert^{\frac{1}{2}}} e^{ -\frac{1}{2} (\xv - \boldsymbol\mu_k)^\top \boldsymbol\Sigma_k^{-1} (\xv - \boldsymbol\mu_k) }.$$Here, we abbreviate $X = x_i$ as $\xv$, assuming vector-valued inputs. Now, let us apply Bayes' rule to $P(T = k \mid \xv)$. $$\begin{align*}P(T = k \mid \xv) &= \frac{P(\xv \mid T = k) P(T = k)} { P(\xv) } \\\\ &= \frac{P(\xv \mid T = k) P(T = k)} {\sum_{c=1}^{K} P(\xv, T=c)} \\ \\ &= \frac{P(\xv \mid T = k) P(T = k)} {\sum_{c=1}^{K} P(\xv \mid T = c) P(T = c)} \end{align*}$$Plugging the Gaussian model in for the likelihood, we obtain $$P(T = k \mid \xv) = \frac{ \Big( (2\pi)^{\frac{d}{2}} \vert \boldsymbol\Sigma_k \vert^{\frac{1}{2}} \Big)^{-1} e^{ -\frac{1}{2} (\xv - \boldsymbol\mu_k)^\top \boldsymbol\Sigma_k^{-1} (\xv - \boldsymbol\mu_k)} P(T = k)} { P(\xv) }.$$ Quadratic Discriminant Analysis (QDA) When we have a binary classification problem, $k \in \{-1, +1\}$, a sample $\xv$ with the positive label should have a higher posterior probability $P(T = +1 \mid \xv)$. Thus, $$P(T = +1 \mid \xv) > P(T = -1 \mid \xv).$$The inequality is reversed for negative samples. To build our model to meet this expectation, we can work through the algebra a little bit.
$$\begin{align*} P(T = +1 \mid \xv) &> P(T = -1 \mid \xv) \\ \\ \frac{P(\xv \mid T = +1) P(T = +1)} { P(\xv) } &> \frac{P(\xv \mid T = -1) P(T = -1)} { P(\xv) } \\ \\ P(\xv \mid T = +1) P(T = +1) &> P(\xv \mid T = -1) P(T = -1) \\ \\ \Big( (2\pi)^{\frac{d}{2}} \vert \boldsymbol\Sigma_+ \vert^{\frac{1}{2}} \Big)^{-1} e^{ -\frac{1}{2} (\xv - \boldsymbol\mu_+)^\top \boldsymbol\Sigma_+^{-1} (\xv - \boldsymbol\mu_+)} P(T = +1) &> \Big( (2\pi)^{\frac{d}{2}} \vert \boldsymbol\Sigma_- \vert^{\frac{1}{2}} \Big)^{-1} e^{ -\frac{1}{2} (\xv - \boldsymbol\mu_-)^\top \boldsymbol\Sigma_-^{-1} (\xv - \boldsymbol\mu_-)} P(T = -1) \\ \\ \Big( \vert \boldsymbol\Sigma_+ \vert^{\frac{1}{2}} \Big)^{-1} e^{ -\frac{1}{2} (\xv - \boldsymbol\mu_+)^\top \boldsymbol\Sigma_+^{-1} (\xv - \boldsymbol\mu_+)} P(T = +1) &> \Big( \vert \boldsymbol\Sigma_- \vert^{\frac{1}{2}} \Big)^{-1} e^{ -\frac{1}{2} (\xv - \boldsymbol\mu_-)^\top \boldsymbol\Sigma_-^{-1} (\xv - \boldsymbol\mu_-)} P(T = -1) \end{align*}$$Taking the logarithm removes the exponentials and turns products into sums, which makes the computation easier:$$-\frac{1}{2} \ln \vert \boldsymbol\Sigma_+ \vert - \frac{1}{2} (\xv - \boldsymbol\mu_+)^\top \boldsymbol\Sigma_+^{-1} (\xv - \boldsymbol\mu_+) + \ln P(T = +1) > -\frac{1}{2} \ln \vert \boldsymbol\Sigma_- \vert - \frac{1}{2} (\xv - \boldsymbol\mu_-)^\top \boldsymbol\Sigma_-^{-1} (\xv - \boldsymbol\mu_-) + \ln P(T = -1)$$Since both sides have the same form, we can define the discriminant function $\delta_k(\xv)$ as$$\delta_k(\xv) = -\frac{1}{2} \ln \vert \boldsymbol\Sigma_k \vert - \frac{1}{2} (\xv - \boldsymbol\mu_k)^\top \boldsymbol\Sigma_k^{-1} (\xv - \boldsymbol\mu_k) + \ln P(T = k). $$Now, for a new sample $\tilde{\xv}$, we can predict the label with$$y = \arg\max_k \delta_k(\tilde{\xv}). $$The decision boundary lies where the discriminant functions meet, i.e. $\delta_1(\xv) = \delta_2(\xv)$. Since $\delta_k$ is quadratic in $\xv$, the decision boundary is quadratic. We call this approach **Quadratic Discriminant Analysis (QDA)**. Practice - Write QDA code and apply it to the following simple data.
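For reference before the practice cell, here is a minimal sketch of one way the discriminant function could be coded (an illustration, not the only correct answer; it assumes `X` is an $N \times D$ array of samples, `mu` a length-$D$ class mean, `sigma` a $D \times D$ class covariance, and `prior` the scalar $P(T=k)$):

```python
import numpy as np

def qda_discriminant(X, mu, sigma, prior):
    """delta_k(x) = -1/2 ln|Sigma_k| - 1/2 (x - mu_k)^T Sigma_k^{-1} (x - mu_k) + ln P(T=k)"""
    sigma_inv = np.linalg.inv(sigma)
    diff = X - mu                                     # N x D deviations from the class mean
    quad = np.sum((diff @ sigma_inv) * diff, axis=1)  # row-wise quadratic form
    return -0.5 * np.log(np.linalg.det(sigma)) - 0.5 * quad + np.log(prior)
```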
###Code
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from mpl_toolkits.mplot3d import Axes3D
mu1 = [-1, -1]
cov1 = np.eye(2)
mu2 = [2,3]
cov2 = np.eye(2) * 3
C1 = np.random.multivariate_normal(mu1, cov1, 50)
C2 = np.random.multivariate_normal(mu2, cov2, 50)
plt.plot(C1[:, 0], C1[:, 1], 'or')
plt.plot(C2[:, 0], C2[:, 1], 'xb')
plt.xlim([-3, 6])
plt.ylim([-3, 7])
X = np.vstack((C1, C2))
T = np.ones(100)
T[:50] *= -1
# Train and Test data
N1 = C1.shape[0]
N2 = C2.shape[0]
N = N1 + N2
Xtrain = np.vstack((C1, C2))
Ttrain = np.ones(N)  # one training label per sample (N = N1 + N2)
Ttrain[:N1] *= -1
# define QDA discriminant function
def QDA(X, mu, sigma, prior):
    # TODO: finish the discriminant function here.
    raise NotImplementedError('Implement the QDA discriminant function delta_k(x) here.')
# QDA train
## compute the mean and covariance
means, stds = np.mean(Xtrain, 0), np.std(Xtrain, 0)
Xs = (Xtrain - means) / stds
mu1 = np.mean(Xs[:N1], 0)
mu2 = np.mean(Xs[N1:], 0)
Sigma1 = np.cov(Xs[:N1].T)
Sigma2 = np.cov(Xs[N1:].T)
prior1 = N1 / N
prior2 = N2 / N
## now compute the discriminant function on test data
xs, ys = np.meshgrid(np.linspace(-3,6, 500), np.linspace(-3,7, 500))
Xtest = np.vstack((xs.flat, ys.flat)).T
XtestS = (Xtest-means)/stds
d1 = QDA(XtestS, mu1, Sigma1, prior1)
d2 = QDA(XtestS, mu2, Sigma2, prior2)
fig = plt.figure(figsize=(8,8))
ax = fig.gca(projection='3d')
ax.plot_surface(xs, ys, d1.reshape(xs.shape), alpha=0.2)
ax.plot_surface(xs, ys, d2.reshape(xs.shape), alpha=0.4)
plt.title("QDA Discriminant Functions")
plt.figure(figsize=(6,6))
plt.contourf(xs, ys, (d1-d2 > 0).reshape(xs.shape))
plt.title("Decision Boundary")
# Plot generative distributions p(x | Class=k) starting with discriminant functions
fig = plt.figure(figsize=(8,8))
ax = fig.gca(projection='3d')
prob1 = np.exp( d1.reshape(xs.shape) - 0.5*X.shape[1]*np.log(2*np.pi) - np.log(prior1))
prob2 = np.exp( d2.reshape(xs.shape) - 0.5*X.shape[1]*np.log(2*np.pi) - np.log(prior2))
ax.plot_surface(xs, ys, prob1, alpha=0.2)
ax.plot_surface(xs, ys, prob2, alpha=0.4)
plt.ylabel("QDA P(x|Class=k)\n from disc funcs", multialignment="center")
###Output
_____no_output_____
###Markdown
Linear Discriminant Analysis (LDA)Maintaining a separate covariance matrix for every class is not cheap: for input dimension $d$, a symmetric covariance matrix contains $\frac{d (d+1)}{2}$ free parameters. Also, if the data is undersampled, the resulting class boundary has a high chance of overfitting. By simply using the same covariance for all the classes, we reach the **linear discriminant analysis** model, which can overcome the problems stated above. Let $\boldsymbol\Sigma_k = \boldsymbol\Sigma$. $$\begin{align*}\delta_+(\xv) &> \delta_-(\xv) \\\\-\frac{1}{2} \ln \vert \boldsymbol\Sigma \vert - \frac{1}{2} (\xv - \boldsymbol\mu_+)^\top \boldsymbol\Sigma^{-1} (\xv - \boldsymbol\mu_+) + \ln P(T = +1) &> -\frac{1}{2} \ln \vert \boldsymbol\Sigma \vert - \frac{1}{2} (\xv - \boldsymbol\mu_-)^\top \boldsymbol\Sigma^{-1} (\xv - \boldsymbol\mu_-) + \ln P(T = -1)\\\\ - \frac{1}{2} (\xv - \boldsymbol\mu_+)^\top \boldsymbol\Sigma^{-1} (\xv - \boldsymbol\mu_+) + \ln P(T = +1) &> - \frac{1}{2} (\xv - \boldsymbol\mu_-)^\top \boldsymbol\Sigma^{-1} (\xv - \boldsymbol\mu_-) + \ln P(T = -1)\\ \\ - \frac{1}{2} \Big[ \xv^\top \boldsymbol\Sigma^{-1}\xv -2 \xv^\top \boldsymbol\Sigma^{-1} \boldsymbol\mu_+ + \boldsymbol\mu_+^\top \boldsymbol\Sigma^{-1}\boldsymbol\mu_+ \Big] + \ln P(T = +1) &> - \frac{1}{2} \Big[ \xv^\top \boldsymbol\Sigma^{-1}\xv -2 \xv^\top \boldsymbol\Sigma^{-1} \boldsymbol\mu_- + \boldsymbol\mu_-^\top \boldsymbol\Sigma^{-1}\boldsymbol\mu_- \Big] + \ln P(T = -1)\\ \\ \xv^\top \boldsymbol\Sigma^{-1} \boldsymbol\mu_+ -\frac{1}{2} \boldsymbol\mu_+^\top \boldsymbol\Sigma^{-1}\boldsymbol\mu_+ + \ln P(T = +1) &> \xv^\top \boldsymbol\Sigma^{-1} \boldsymbol\mu_- - \frac{1}{2}\boldsymbol\mu_-^\top \boldsymbol\Sigma^{-1}\boldsymbol\mu_- + \ln P(T = -1)\end{align*}$$Unifying the covariance matrix removes the quadratic term from our discriminant function: $$\delta_k(\xv) = \xv^\top \boldsymbol\Sigma^{-1} \boldsymbol\mu_k -\frac{1}{2} \boldsymbol\mu_k^\top \boldsymbol\Sigma^{-1}\boldsymbol\mu_k + \ln P(T = k).$$In many cases, for simplicity, the covariance matrix $\boldsymbol\Sigma$ is chosen as the weighted average of the covariance matrices of all classes,$$\boldsymbol\Sigma = \sum_k^K \frac{N_k}{N} \boldsymbol\Sigma_k.$$ Practice - Write LDA code and apply it to the previous data.
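Similarly, a minimal sketch of the linear discriminant (again only an illustration; `sigma` is assumed to be the single covariance matrix shared by all classes, e.g. the weighted average above):

```python
import numpy as np

def lda_discriminant(X, mu, sigma, prior):
    """delta_k(x) = x^T Sigma^{-1} mu_k - 1/2 mu_k^T Sigma^{-1} mu_k + ln P(T=k)"""
    sigma_inv = np.linalg.inv(sigma)
    linear = X @ sigma_inv @ mu                      # x^T Sigma^{-1} mu_k for each row of X
    offset = -0.5 * mu @ sigma_inv @ mu + np.log(prior)
    return linear + offset
```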
###Code
# define LDA discriminant function
def LDA(X, mu, sigma, prior):
    # TODO: finish the discriminant function here.
    raise NotImplementedError('Implement the LDA discriminant function delta_k(x) here.')
# LDA train
## compute the mean and covariance
means, stds = np.mean(Xtrain, 0), np.std(Xtrain, 0)
Xs = (Xtrain - means) / stds
mu1 = np.mean(Xs[:N1], 0)
mu2 = np.mean(Xs[N1:], 0)
Sigma = np.cov(Xs.T)
prior1 = N1 / N
prior2 = N2 / N
## now compute the discriminant function on test data
xs, ys = np.meshgrid(np.linspace(-3,6, 500), np.linspace(-3,7, 500))
Xtest = np.vstack((xs.flat, ys.flat)).T
XtestS = (Xtest-means)/stds
d1 = LDA(XtestS, mu1, Sigma, prior1)
d2 = LDA(XtestS, mu2, Sigma, prior2)
fig = plt.figure(figsize=(8,8))
ax = fig.gca(projection='3d')
ax.plot_surface(xs, ys, d1.reshape(xs.shape), alpha=0.2)
ax.plot_surface(xs, ys, d2.reshape(xs.shape), alpha=0.4)
plt.title("LDA Discriminant Functions")
plt.figure(figsize=(6,6))
plt.contourf(xs, ys, (d1-d2 > 0).reshape(xs.shape))
plt.title("Decision Boundary")
###Output
_____no_output_____
###Markdown
$\newcommand{\xv}{\mathbf{x}} \newcommand{\wv}{\mathbf{w}} \newcommand{\yv}{\mathbf{y}} \newcommand{\Chi}{\mathcal{X}} \newcommand{\R}{\rm I\!R} \newcommand{\sign}{\text{sign}} \newcommand{\Tm}{\mathbf{T}} \newcommand{\Xm}{\mathbf{X}}$ Logistic Regression Previously we discussed using least squares to fit the discrete targets for classification.When dealing with multiple classes, this can cause a masking problem, where the estimate for one class is masked by the predictions of the others. Now, we consider a model, built on the same linear form, that directly predicts $P(T=k \mid \xv)$ rather than the class label $k$. We call this approach **logistic regression**. Again, let us use the same linear model: $$\kappa = f(\xv ; \wv) = \Xm \wv.$$Thus,$$P(T=k \mid \xv) = h(\Xm \wv) = h(\kappa) = \yv.$$ TargetTo generate a probability output for each class, we use indicator output targets. $$\Tm = \begin{bmatrix} t_{1,1} & t_{1,2} & \cdots & t_{1, K} \\ t_{2,1} & t_{2,2} & \cdots & t_{2, K} \\ \vdots & & & \vdots \\ t_{N,1} & t_{N,2} & \cdots & t_{N, K} \\ \end{bmatrix}$$where $t_{n,k}$ is 0 or 1 with exactly one 1 per row. Note: Here the weight $\wv$ is no longer a vector; it is a matrix with $(D+1) \times K$ dimensions. LikelihoodAssuming i.i.d. (independent and identically distributed) data, we can compute the likelihood as$$P(\Tm \mid \wv) = \prod_{n=1}^{N} \prod_{k=1}^{K} P(T = k \mid x_n)^{t_{n,k}} = \prod_{n=1}^{N} \prod_{k=1}^{K} y_{n,k}^{t_{n,k}}$$Since we maximize the likelihood function, we define our error function as its negative logarithm:$$E(\wv) = - \ln P(\Tm \mid \wv) = - \sum_{n=1}^{N} \sum_{k=1}^{K} t_{n,k} \ln y_{n,k}.$$This is called the *cross-entropy* error function for the multiclass classification problem. Gradient DescentAs we practiced with least mean squares, we update the weight $\wv$ with the gradient:$$\wv \leftarrow \wv - \alpha \nabla_\wv E(\wv).$$with the learning rate $\alpha$. Softmax TransformationBefore computing the derivative, let us select the function $h(\cdot)$. Since $P(T=k \mid \xv)$ is a probability, it must satisfy - the outputs are non-negative,- the probabilities over all classes sum to one. A first idea is$$P(T=k \mid \xv) = \frac{\kappa_k}{\sum_{c=1}^K \kappa_c}$$but since we work with logarithms and need strictly positive values, using an exponential is a better choice:$$g_k(\xv) = P(T=k \mid \xv) = \frac{e^{\kappa_k}}{\sum_{c=1}^K e^{\kappa_c}}$$This function is called the **softmax function**.
This generalizes the logistic sigmoid function, and its derivative can be written in terms of the function itself:$$\frac{\partial g_k}{\partial \kappa_j} = g_k (I_{kj} - g_j).$$ Back to the DerivativeHere, $$\begin{align*}\nabla_{\wv_j} g_{n,k}(\xv) &= g_k(\xv) (I_{kj} - g_j(\xv)) \nabla_{\wv_j} (\wv_j^\top \xv) \\ \\ &= g_k(\xv) (I_{kj} - g_j(\xv)) \xv.\end{align*}$$$$\begin{align*}\nabla_{\wv_j} E(\wv) &= \nabla_{\wv_j} \Big(-\sum_{n=1}^{N} \sum_{k=1}^{K} t_{n,k} \ln g_{n,k}(\xv_n) \Big) \\ \\ &= -\sum_{n=1}^{N} \sum_{k=1}^{K} t_{n,k} \frac{1}{g_{n,k}(\xv_n)} \nabla_{\wv_j} g_{n,k}(\xv_n)\\ \\ &= -\sum_{n=1}^{N} \sum_{k=1}^{K} t_{n,k} (I_{kj} - g_j(\xv_n)) \xv_n\\ \\ &= -\sum_{n=1}^{N} \Bigg( \sum_{k=1}^{K} t_{n,k} (I_{kj} - g_j(\xv_n)) \Bigg) \xv_n\\ \\ &= -\sum_{n=1}^{N} \Bigg( \sum_{k=1}^{K} t_{n,k} I_{kj} - g_j(\xv_n) \sum_{k=1}^{K} t_{n,k} \Bigg) \xv_n\\ \\ &= -\sum_{n=1}^{N} \Bigg( t_{n,j} - g_j(\xv_n)\Bigg) \xv_n\end{align*}$$Using this gradient, we can now update the weights, $$\wv_j \leftarrow \wv_j + \alpha \sum_{n=1}^{N} \Big( t_{n,j} - g_j(\xv_n)\Big) \xv_n.$$Converting the summation into a matrix calculation,$$\wv_j \leftarrow \wv_j + \alpha \Xm^\top \Big( t_{*,j} - g_j(\Xm)\Big).$$ Implementation Before writing code, let us check the matrix sizes!- $\Xm: N \times (D+1)$- $\Tm: N \times K$- $\wv: (D+1) \times K$- $t_{*,j}: N \times 1 $- $g_j(\Xm): N \times 1 $- $\Xm^\top \big( t_{*,j} - g_j(\Xm) \big)$: $(D+1) \times N \cdot \big( N \times 1 - N \times 1 \big) \Rightarrow (D+1) \times 1$This gradient updates one column of the weight matrix, so we can combine the computations for all columns:$$\wv \leftarrow \wv + \alpha \Xm^\top \Big( \Tm - g(\Xm)\Big).$$Double-checking the size of the matrices,- $\Xm^\top \big( \Tm - g(\Xm) \big)$: $(D+1) \times N \cdot \big( N \times K - N \times K \big) \Rightarrow (D+1) \times K$. PracticeRead the note and the practice code.Find the TODO comments and finish the following:- the $g(\cdot)$ function,- the training code.
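As a hedged sketch of the two TODO pieces in the practice cell (one possible way to fill them in, using the variable names `X1`, `Ttrain`, `w`, `alpha`, and `likeli` that the cell sets up; subtracting the row-wise maximum before exponentiating is only a numerical-stability trick and does not change the softmax values):

```python
import numpy as np

def g(X, w):
    """Softmax of the linear scores X @ w; returns one probability row per sample."""
    kappa = X @ w
    kappa = kappa - kappa.max(axis=1, keepdims=True)   # avoid overflow in exp()
    e = np.exp(kappa)
    return e / e.sum(axis=1, keepdims=True)

# One iteration of the training loop could then look like:
#     Y = g(X1, w)                                        # N x K predicted probabilities
#     w = w + alpha * X1.T @ (Ttrain - Y)                 # batch update derived above
#     likeli.append(-np.sum(Ttrain * np.log(Y + 1e-12)))  # track the cross-entropy error
```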
###Code
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
# g(.) the softmax function
def g(X, w):
    # TODO: Finish your softmax function here
    raise NotImplementedError('Implement the softmax of X @ w here.')
# Data for testing
N1 = 50
N2 = 50
N = N1 + N2
D = 2
K = 2
mu1 = [-1, -1]
cov1 = np.eye(2)
mu2 = [2,3]
cov2 = np.eye(2) * 3
#
# Train Data
#
C1 = np.random.multivariate_normal(mu1, cov1, N1)
C2 = np.random.multivariate_normal(mu2, cov2, N2)
plt.plot(C1[:, 0], C1[:, 1], 'or')
plt.plot(C2[:, 0], C2[:, 1], 'xb')
plt.xlim([-3, 6])
plt.ylim([-3, 7])
Xtrain = np.vstack((C1, C2))
Ttrain = np.zeros((N, 2))
Ttrain[:50, 0] = 1
Ttrain[50:, 1] = 1
means, stds = np.mean(Xtrain, 0), np.std(Xtrain, 0)
# normalize inputs
Xtrains = (Xtrain - means) / stds
#
# Test Data
#
Ct1 = np.random.multivariate_normal(mu1, cov1, 20)
Ct2 = np.random.multivariate_normal(mu2, cov2, 20)
Xtest = np.vstack((Ct1, Ct2))
Ttest = np.zeros((40, 2))
Ttest[:20, 0] = 1
Ttest[20:, 1] = 1
# normalize inputs
Xtests = (Xtest - means) / stds
plt.figure()
plt.plot(Ct1[:, 0], Ct1[:, 1], 'or')
plt.plot(Ct2[:, 0], Ct2[:, 1], 'xb')
plt.xlim([-3, 6])
plt.ylim([-3, 7])
# initialize the weight matrix
w = np.random.rand(D+1, K)
#w = np.zeros((D+1), K)
import IPython.display as ipd # for display and clear_output
fig = plt.figure(figsize=(16, 8))
# iterate to update weights
niter = 1000
alpha = 0.1
X1 = np.hstack((np.ones((N, 1)), Xtrain))
likeli = []
for step in range(niter):
# TODO: add training code here!
ipd.clear_output(wait=True)
ipd.display(fig)
ipd.clear_output(wait=True)
X1t = np.hstack((np.ones((Xtest.shape[0],1)), Xtest))
Y = g(X1t, w)
Y
# retrieve labels and plot
Yl = np.argmax(Y, 1)
Tl = np.argmax(Ttest, 1)
plt.plot(Tl)
plt.plot(Yl)
print("Accuracy: ", 100 - np.mean(np.abs(Tl - Yl)) * 100, "%")
# show me the boundary
x = np.linspace(-3, 6, 1000)
y = np.linspace(-3, 7, 1000)
xs, ys = np.meshgrid(x, y)
X = np.vstack((xs.flat, ys.flat)).T
X1 = np.hstack((np.ones((X.shape[0], 1)), X))
Y = g(X1, w)
zs = np.argmax(Y, 1)
plt.figure(figsize=(6,6))
plt.contourf(xs, ys, zs.reshape(xs.shape))
plt.title("Decision Boundary")
plt.plot(Ct1[:, 0], Ct1[:, 1], 'or')
plt.plot(Ct2[:, 0], Ct2[:, 1], 'xb')
###Output
_____no_output_____ |
Collaborative_filtering_MovieRecommendation.ipynb | ###Markdown
Movie Recommendation by predicting a customer's rating for a movie Introduction: Goal: predicting customer ratings for movies using truncated SVD.Only basic Python libraries (numpy, pandas and scipy) are used here, to understand the algorithm better.Suppose we have, as input, the ratings given by different users to different movies.We can then predict a user's rating for a movie they have not rated yet and thereby recommend them new movies.Since the input (the utility matrix) is a sparse matrix, SVD can be used to decompose the user-movie interactions.The number of latent features is selected based on the RMSE: with 12 features the RMSE was found to be lowest, hence SVD with 12 features is used.Inspired by the Netflix competition, https://towardsdatascience.com/the-netflix-prize-how-even-ai-leaders-can-trip-up-5c1f38e95c9fCredits to 1. https://towardsdatascience.com/beginners-guide-to-creating-an-svd-recommender-system-1fd7326d1f65 for the reference on the code. The code was adapted to the latest Python version and for additional visualisations.2. https://towardsdatascience.com/movie-recommendation-system-based-on-movielens-ef0df580cd0e for the reference on the dataset**Author:** Akshaya Ravi, **Date:** 11/10/2020
###Code
import pandas as pd
import numpy as np
from scipy.linalg import sqrtm
df_movieinput = pd.read_csv("Movies.csv",delimiter=';')
df_userrating = pd.read_csv("Ratings_small.csv",delimiter=';')
df_userrating.head()
df_userrating.info()
df= df_userrating #Renaming for easier use
df['userId'] = df['userId'].astype('str')
df['movieId'] = df['movieId'].astype('str')
users = df['userId'].unique() #list of all users
movies = df['movieId'].unique() #list of all movies
print("Number of users", len(users))
print("Number of movies", len(movies))
print(df.head())
test = pd.DataFrame(columns=df.columns)
train = pd.DataFrame(columns=df.columns)
test_ratio = 0.2 #fraction of data to be used as test set.
for u in users:
temp = df[df['userId'] == u]
n = len(temp)
test_size = int(test_ratio*n)
temp = temp.sort_values('timestamp').reset_index()
temp.drop('index', axis=1, inplace=True)
dummy_test = temp.iloc[test_size:]
dummy_train = temp.iloc[:-(n-test_size)]
test = pd.concat([test, dummy_test])
train = pd.concat([train, dummy_train])
# test.head()
print(test.shape)
print(train.shape)
print(df.shape) # Original shape is retained after test and train split
train.head()
test.head()
def create_utility_matrix(data, formatizer = {'user':0, 'item': 1, 'value': 2}):
"""
:param data: Array-like, 2D, nx3
:param formatizer:pass the formatizer
:return: utility matrix (n x m), n=users, m=items
"""
itemField = formatizer['item']
userField = formatizer['user']
valueField = formatizer['value']
userList = data.iloc[:,userField].tolist()
itemList = data.iloc[:,itemField].tolist()
valueList = data.iloc[:,valueField].tolist()
users = list(set(data.iloc[:,userField]))
items = list(set(data.iloc[:,itemField]))
users_index = {users[i]: i for i in range(len(users))}
pd_dict = {item: [np.nan for i in range(len(users))]
for item in items}
for i in range(0,len(data)):
item = itemList[i]
user = userList[i]
value = valueList[i]
pd_dict[item][users_index[user]] = value
X = pd.DataFrame(pd_dict)
X.index = users
itemcols = list(X.columns)
items_index = {itemcols[i]: i for i in range(len(itemcols))}
# users_index gives us a mapping of user_id to index of user
# items_index provides the same for items
return X, users_index, items_index
X,user_index,items_index= create_utility_matrix(train)
X.shape
print(train.userId.nunique())
print(train.movieId.nunique())
print(test.userId.nunique())
print(test.movieId.nunique())
def svd(train, k):
utilMat = np.array(train) # the nan or unavailable entries are masked
mask = np.isnan(utilMat)
masked_arr = np.ma.masked_array(utilMat, mask)
    item_means = np.mean(masked_arr, axis=0) # nan entries will be replaced by the average rating for each item
utilMat = masked_arr.filled(item_means)
x = np.tile(item_means, (utilMat.shape[0],1))
# we remove the per item average from all entries.
# the above mentioned nan entries will be essentially zero now
utilMat = utilMat - x
# The magic happens here. U and V are user and item features
U, s, V=np.linalg.svd(utilMat, full_matrices=False)
s=np.diag(s) # we take only the k most significant features
s=s[0:k,0:k]
U=U[:,0:k]
V=V[0:k,:]
s_root=sqrtm(s)
Usk=np.dot(U,s_root)
skV=np.dot(s_root,V)
UsV = np.dot(Usk, skV)
UsV = UsV + x
print("svd done")
print("The shape of the matrix from SVD is",UsV.shape)
return UsV
#Calculating rmse value
def rmse(true, pred):
# this will be used towards the end
x = true - pred
    return np.sqrt(sum([xi*xi for xi in x])/len(x))  # root of the mean squared error
# to test the performance over a different number of features
# Selecting the singular values with respect to highest importance
# found by the SVD decomposition
no_of_features = [8,10,12,13,14,17] #hyperparameter
utilMat, users_index, items_index = create_utility_matrix(train)
rmse_features = []
#Iterating for each feature
for f in no_of_features:
svdout = svd(utilMat, k=f)
pred = []
#to store the predicted ratings for each user in test data
for _,row in test.iterrows():
user = row['userId']
item = row['movieId']
u_index = users_index[user]
        # test data only contains users already seen in the training data; the cold start problem is not addressed
if item in items_index:
i_index = items_index[item]
pred_rating = svdout[u_index, i_index] #calls the utility matrix found from SVD
else:
pred_rating = np.mean(svdout[u_index, :]) # When certain item or movie is not found from training,
# mean rating over that user is taken
pred.append(pred_rating)
a=rmse(test['rating'], pred)
print("RMSE value of feature value {} is {}".format(f,a))
rmse_features.append(a)
import matplotlib.pyplot as plt
plt.plot(no_of_features,rmse_features)
plt.rcParams["figure.figsize"] = (6,5)
plt.xlabel('Number of features')
plt.ylabel('RMSE of customer rating')
plt.title('Prediction error of customer rating across different features')
#selecting feature value as 12 based on RMSE
svdout = svd(utilMat, k=12)
pred = []
#to store the predicted ratings for each user in test data
for _,row in test.iterrows():
user = row['userId']
item = row['movieId']
u_index = users_index[user]
    # test data only contains users already seen in the training data; the cold start problem is not addressed
if item in items_index:
i_index = items_index[item]
pred_rating = svdout[u_index, i_index] #calls the utility matrix found from SVD
else:
pred_rating = np.mean(svdout[u_index, :]) # When certain item or movie is not found from training,
# mean rating over that user is taken
pred.append(pred_rating) #prediction for test data is stored in pred
#Checking if it works for random movie id
train.head()
test.shape
new_index=pd.Series(np.arange(0,80251,1))
test.set_index(new_index)
predicted_rating = pred[25]
original_rating = test.iloc[25,2]
print("predicted is", predicted_rating)
print("original is", original_rating)
###Output
predicted is 4.3644602712160605
original is 4.0
|
_numpy_pandas/pandas_unpivot_WDI.ipynb | ###Markdown
Matplotlib: Exploring Data Visualization World Development IndicatorsThis week, we will be using an open dataset from Kaggle: the World Development Indicators dataset obtained from the World Bank, containing over a thousand annual indicators of economic development from hundreds of countries around the world.This is a slightly modified version of the original dataset from the World Bank; a list of the available indicators and a list of the available countries are provided with it. Step 1: Initial exploration of the Dataset
###Code
import pandas as pd
import numpy as np
import random
import matplotlib.pyplot as plt
!find ~ | grep -i Indicators
!curl http://databank.worldbank.org/data/download/WDI_csv.zip -o ../_data/WDI.zip
!head -1 ../_data/WDIData.csv
!head -1 ../_data/WDICountry.csv
data = pd.read_csv('../_data/../_data/WDIData.csv')
data.info()
# Unpivot (melt) years
data2 = pd.melt(data, id_vars=list(data.columns[:4]), value_vars=list(data.columns[4:]))
data2.info()
# NaN percentage
data2['value'].isnull().sum() / len(data2) * 100
data2 = data2.dropna()
data2.head()
data = data2
data = data.rename(index=str, columns={"Country Name":"CountryName", "Country Code":"CountryCode",
"Indicator Name":"IndicatorName", "Indicator Code":"IndicatorCode",
"variable": "Year", "value": "Value"})
data.head()
data.columns
###Output
_____no_output_____
###Markdown
Looks like it has different indicators for different countries with the year and value of the indicator. How many UNIQUE country names are there ?
###Code
countries = data['CountryName'].unique().tolist()
len(countries)
###Output
_____no_output_____
###Markdown
Are there same number of country codes ?
###Code
# How many unique country codes are there ? (should be the same #)
countryCodes = data['CountryCode'].unique().tolist()
len(countryCodes)
###Output
_____no_output_____
###Markdown
Are there many indicators or few ?
###Code
# How many unique indicators are there ? (should be the same #)
indicators = data['IndicatorName'].unique().tolist()
len(indicators)
###Output
_____no_output_____
###Markdown
How many years of data do we have ?
###Code
# How many years of data do we have ?
years = data['Year'].unique().tolist()
len(years)
###Output
_____no_output_____
###Markdown
What's the range of years?
###Code
print(min(years)," to ",max(years))
###Output
_____no_output_____
###Markdown
Matplotlib: Basic Plotting, Part 1 Let's pick a country and an indicator to explore: CO2 Emissions per capita and the USA
###Code
data.info()
# select CO2 emissions for the United States
hist_indicator = 'CO2 emissions \(metric'
hist_country = 'USA'
mask1 = data['IndicatorName'].str.contains(hist_indicator)
mask2 = data['CountryCode'].str.contains(hist_country)
# stage is just those indicators matching the USA for country code and CO2 emissions over time.
stage = data[mask1 & mask2]
stage.head()
###Output
_____no_output_____
###Markdown
Let's see how emissions have changed over time using MatplotLib
###Code
# get the years
years = stage['Year'].values
# get the values
co2 = stage['Value'].values
# create
plt.bar(years,co2)
plt.show();
###Output
_____no_output_____
###Markdown
Turns out emissions per capita have dropped a bit over time, but let's make this graphic a bit more appealing before we continue to explore it.
###Code
# switch to a line plot
plt.plot(stage['Year'].values, stage['Value'].values)
# Label the axes
plt.xlabel('Year')
plt.ylabel(stage['IndicatorName'].iloc[0])
#label the figure
plt.title('CO2 Emissions in USA')
# to be more honest, start the y axis at 0
plt.axis([1959, 2011, 0, 25])
plt.show()
###Output
_____no_output_____
###Markdown
Using Histograms to explore the distribution of valuesWe could also visualize this data as a histogram to better explore the ranges of values in CO2 production per year.
###Code
# If you want to include only values within one standard deviation of the mean, you could do the following
# lower = stage['Value'].mean() - stage['Value'].std()
# upper = stage['Value'].mean() + stage['Value'].std()
# hist_data = [x for x in stage[:10000]['Value'] if x>lower and x<upper ]
# Otherwise, let's look at all the data
hist_data = stage['Value'].values
print(len(hist_data))
# the histogram of the data
plt.hist(hist_data, 10, normed=False, facecolor='green')
plt.xlabel(stage['IndicatorName'].iloc[0])
plt.ylabel('# of Years')
plt.title('Histogram Example')
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
So the USA has many years where it produced between 19-20 metric tons per capita with outliers on either side.
###Code
data['Year'] = data['Year'].astype(int)
###Output
_____no_output_____
###Markdown
But how do the USA's numbers relate to those of other countries?
###Code
# select CO2 emissions for all countries in 2011
hist_indicator = 'CO2 emissions \(metric'
hist_year = 2011
mask1 = data['IndicatorName'].str.contains(hist_indicator)
mask2 = data['Year'].isin([hist_year])
mask1.shape, mask2.shape, data[mask1 & mask2].shape
# apply our mask
co2_2011 = data[mask1 & mask2]
co2_2011.head()
###Output
_____no_output_____
###Markdown
For how many countries do we have CO2 per capita emissions data in 2011
###Code
print(len(co2_2011))
# let's plot a histogram of the emissions per capita by country
# subplots returns a tuple with the figure and axis objects.
fig, ax = plt.subplots()
ax.annotate("USA",
xy=(18, 5), xycoords='data',
xytext=(18, 30), textcoords='data',
arrowprops=dict(arrowstyle="->",
connectionstyle="arc3"),
)
plt.hist(co2_2011['Value'], 10, normed=False, facecolor='green')
plt.xlabel(stage['IndicatorName'].iloc[0])
plt.ylabel('# of Countries')
plt.title('Histogram of CO2 Emissions Per Capita')
#plt.axis([10, 22, 0, 14])
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
So the USA, at ~18 CO2 emissions (metric tons per capita), is quite high among all countries.An interesting next step, which we'll save for you, would be to explore how this relates to other industrialized nations and to look at the outliers with those values in the 40s! Matplotlib: Basic Plotting, Part 2 Relationship between GDP and CO2 Emissions in USA
###Code
# select GDP Per capita emissions for the United States
hist_indicator = 'GDP per capita \(constant 20'
hist_country = 'USA'
mask1 = data['IndicatorName'].str.contains(hist_indicator)
mask2 = data['CountryCode'].str.contains(hist_country)
# stage is just those indicators matching the USA for country code and CO2 emissions over time.
gdp_stage = data[mask1 & mask2]
#plot gdp_stage vs stage
gdp_stage.head(2)
stage.head(2)
# switch to a line plot
plt.plot(gdp_stage['Year'].values, gdp_stage['Value'].values)
# Label the axes
plt.xlabel('Year')
plt.ylabel(gdp_stage['IndicatorName'].iloc[0])
#label the figure
plt.title('GDP Per Capita USA')
# to be more honest, start the y axis at 0
#plt.axis([1959, 2011,0,25])
plt.show();
###Output
_____no_output_____
###Markdown
So although we've seen a decline in the CO2 emissions per capita, it does not seem to translate to a decline in GDP per capita ScatterPlot for comparing GDP against CO2 emissions (per capita)First, we'll need to make sure we're looking at the same time frames
###Code
print("GDP Min Year = ", gdp_stage['Year'].min(), "max: ", gdp_stage['Year'].max())
print("CO2 Min Year = ", stage['Year'].min(), "max: ", stage['Year'].max())
###Output
_____no_output_____
###Markdown
We have 3 extra years of GDP data, so let's trim those off so the scatterplot has equal length arrays to compare (this is actually required by scatterplot)
###Code
gdp_stage_trunc = gdp_stage[gdp_stage['Year'] < 2015]
print(len(gdp_stage_trunc))
print(len(stage))
# Sanity check Years in both sets
set(range(1960, 2015)) - set(stage.Year.astype(int))
set(range(1960, 2015)) - set(gdp_stage.Year.astype(int))
%matplotlib inline
import matplotlib.pyplot as plt
fig, axis = plt.subplots()
# Grid lines, Xticks, Xlabel, Ylabel
axis.yaxis.grid(True)
axis.set_title('CO2 Emissions vs. GDP \(per capita\)',fontsize=10)
axis.set_xlabel(gdp_stage_trunc['IndicatorName'].iloc[0],fontsize=10)
axis.set_ylabel(stage['IndicatorName'].iloc[0],fontsize=10)
X = gdp_stage_trunc['Value']
Y = stage['Value']
axis.scatter(X, Y)
plt.show();
###Output
_____no_output_____
###Markdown
This doesn't look like a strong relationship. We can test this by looking at correlation.
###Code
np.corrcoef(gdp_stage_trunc['Value'],stage['Value'])
###Output
_____no_output_____ |
notebooks/phase-correlation.ipynb | ###Markdown
Cross correlation vs Phase correlation
###Code
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
from scipy import signal
from matplotlib.ticker import ScalarFormatter, AutoMinorLocator
mpl.rcParams['grid.color'] = 'k'
mpl.rcParams['grid.linestyle'] = ':'
mpl.rcParams['grid.linewidth'] = 0.5
mpl.rcParams['font.size'] = 32
mpl.rcParams['figure.autolayout'] = True
mpl.rcParams['figure.figsize'] = (7.2,4.45)
mpl.rcParams['axes.titlesize'] = 32
mpl.rcParams['axes.labelsize'] = 32
mpl.rcParams['lines.linewidth'] = 2
mpl.rcParams['lines.markersize'] = 6
mpl.rcParams['legend.fontsize'] = 13
mpl.rcParams['mathtext.fontset'] = 'stix'
mpl.rcParams['font.family'] = 'STIXGeneral'
mpl.rcParams['lines.linewidth'] = 3.5
mpl.rcParams['xtick.labelsize'] = 32
mpl.rcParams['ytick.labelsize'] = 32
mpl.rcParams['legend.fontsize'] = 32
def setup_axis(ax):
ax.set_xlabel('')
ax.yaxis.set_major_formatter(ScalarFormatter())
ax.yaxis.major.formatter._useMathText = True
ax.yaxis.set_minor_locator( AutoMinorLocator(5))
ax.xaxis.set_minor_locator( AutoMinorLocator(5))
ax.tick_params(direction='out', length=12,
width=2,
grid_alpha=0.5)
ax.tick_params(direction='out', which='minor', length=6,
width=1,
grid_alpha=0.5)
ax.grid(True)
# The quick-look plot and the MATLAB-style cross-correlation below are kept as
# comments: s1 and s2 are only defined a few lines further down, and the same
# comparison is redone in Python with scipy.signal.correlate later in this cell.
# fig, ax = plt.subplots(figsize=(8, 8))
# ax.plot(t, s1, label='S1')
# ax.plot(t, s2, label='S2')
# Axis_x = [-length(s1)+1 : 1 : length(s1)-1];   % MATLAB
# cross_corr = xcorr(s1, s2, 'coeff');           % MATLAB
# figure(2); clf;                                % MATLAB
# plot(Axis_x, cross_corr, 'r');                 % MATLAB
clock = np.arange(1, 1000+1)
phaseshift = 2*np.pi/3
s1 = np.sin(2*np.pi*clock/500)
s2 = np.sin(2*np.pi*clock/500 + phaseshift)
fig, axes = plt.subplots(5, 1, sharex=True, figsize=(12, 15))
setup_axis(axes[0])
setup_axis(axes[1])
setup_axis(axes[2])
setup_axis(axes[3])
setup_axis(axes[4])
axes[0].plot(clock, s1)
#ax_orig.plot(clock, sig[clock], 'ro')
axes[0].set_title('S1 = $\sin(2\pi t/500)$')
#ax_noise.plot(sig_noise)
corr = signal.correlate(s1, s2, mode='same') / len(clock)
axes[1].plot(clock, s2)
axes[1].set_title('S2 = $\sin(2\pi t/500 + 2\pi/3)$')
axes[2].plot(corr)
axes[2].plot(clock, corr[clock-1], 'ro')
axes[2].axhline(0.5, ls=':')
axes[2].set_title('Cross-correlation(S1,S2)')
axes[0].margins(0, 0.1)
lag = np.argmax(corr)
s2_sig = np.roll(s2, shift=int(np.ceil(lag)))
print( 'Cross-correlation error: {}'.format(np.mean((s2_sig - s1)**2)))
axes[3].plot(clock, s2_sig, linestyle='dashed', linewidth=3)
axes[3].plot(clock, s1, linestyle='dashed', linewidth=3)
axes[3].set_title('Aligned using cross-correlation')
fft_sig1 = np.fft.fft(s1)
fft_sig2 = np.fft.fft(s2)
fft_sig2_conj = np.conj(fft_sig2)
R = (fft_sig1 * fft_sig2_conj) / abs(fft_sig1 * fft_sig2_conj)
r = np.fft.ifft(R)
time_shift = np.argmax(r)
print('time shift = %d' % (time_shift))
s2_sig = np.roll(s2, shift=int(np.ceil(time_shift/2)))
print( 'Phase-correlation error: {}'.format(np.mean((s2_sig - s1)**2) ))
axes[4].plot(clock, s2_sig, linestyle='dashed', linewidth=3)
axes[4].plot(clock, s1, linestyle='dashed', linewidth=3)
axes[4].set_title('Aligned using Phase correlation')
fig.tight_layout()
fig.show()
fig.savefig('phase.png')
plt.plot(np.abs(r))
2*np.pi*lag/500
phaseshift
def phase_correlation(a, b):
G_a = np.fft.fft(a)
G_b = np.fft.fft(b)
conj_b = np.ma.conjugate(G_b)
R = G_a*conj_b
R /= np.absolute(R)
r = np.fft.ifft(R).real
return r
plt.plot(clock, phase_correlation(s1, s1))
2*np.pi*327/1000
# Generic recipe for turning a cross-correlation peak into a physical offset.
# x and y are the x (e.g. time) and y components of your data, and template is
# what you are cross-correlating with. Kept as comments because x, y and
# template are not defined in this notebook:
#   ycorr = signal.correlate(y, template, mode="full")   # do the correlation
#   xcorr = np.arange(ycorr.size)                        # index axis for the correlation
#   lags = xcorr - (y.size - 1)                          # lag units, not yet physical
#   distancePerLag = (x[-1] - x[0]) / float(x.size)      # x-spacing (timestep) of the data
#   offsets = -lags * distancePerLag                     # convert the lags into physical units
import colorsys
import numpy as np
from scipy.optimize import minimize
from scipy.interpolate import interp1d
from scipy.ndimage.interpolation import shift
from statsmodels.tsa.stattools import ccovf, ccf
from scipy import signal
import matplotlib.pyplot as plt
def align_spectra(reference, target, ROI, order=1,init=0.1,res=1,b=1):
'''
NH[0], NH[i]
    Aligns the target spectrum within the region of interest (ROI) to the reference spectrum's ROI
res - resolution of the data, only used if passing in higher resolution data and the initial value
is given in native pixel coordinates not the high res coordinates
b - symmetric bounds for constraining the shift search around the initial guess
'''
ROI[0] = int(ROI[0]*res)
ROI[1] = int(ROI[1]*res)
# ROI - region of interest to focus on computing the residuals for
# LIMS - shifting limits
reference = reference/np.mean(reference[ROI[0]:ROI[1]])
# define objective function: returns the array to be minimized
def fcn2min(x):
# x = shift length
shifted = shift(target,x,order=order)
shifted = shifted/np.mean(shifted[ROI[0]:ROI[1]])
return np.sum( ((reference - shifted)**2 )[ROI[0]:ROI[1]] )
#result = minimize(fcn2min,init,method='Nelder-Mead')
minb = min( [(init-b)*res,(init+b)*res] )
maxb = max( [(init-b)*res,(init+b)*res] )
result = minimize(fcn2min,init,method='L-BFGS-B',bounds=[ (minb,maxb) ])
return result.x[0]/res
def phase_spectra(ref,tar,ROI,res=100):
'''
Cross-Correlate data within ROI with a precision of 1./res
interpolate data onto higher resolution grid and
align target to reference
'''
x,r1 = highres(ref[ROI[0]:ROI[1]],kind='linear',res=res)
x,r2 = highres(tar[ROI[0]:ROI[1]],kind='linear',res=res)
r1 -= r1.mean()
r2 -= r2.mean()
cc = ccovf(r1,r2,demean=False,unbiased=False)
if np.argmax(cc) == 0:
cc = ccovf(r2,r1,demean=False,unbiased=False)
mod = -1
else:
mod = 1
s1 = np.argmax(cc)*mod*(1./res)
return s1
# older method that behaves the same just uses more lines of code
x,r1 = highres(ref[ROI[0]:ROI[1]],kind='linear',res=res)
x,r2 = highres(tar[ROI[0]:ROI[1]],kind='linear',res=res)
r1 -= r1.mean()
r1 -= r2.mean()
# compute the POC function
product = np.fft.fft(r1) * np.fft.fft(r2).conj()
cc = np.fft.fftshift(np.fft.ifft(product))
l = ref[ROI[0]:ROI[1]].shape[0]
shifts = np.linspace(-0.5*l,0.5*l,l*res)
return shifts[np.argmax(cc.real)]
def highres(y,kind='cubic',res=100):
# interpolate onto higher resolution grid with res* more data points than original input
# from scipy import interpolate
y = np.array(y)
x = np.arange(0, y.shape[0])
f = interp1d(x, y,kind='cubic')
xnew = np.linspace(0, x.shape[0]-1, x.shape[0]*res)
ynew = f(xnew)
return xnew,ynew
def error(x,y):
# basic uncertainty on poisson quantities of x and y for f(x,y) = x/y
sigx = np.sqrt(x)
sigy = np.sqrt(y)
dfdx = 1./y
dfdy = x/(y*y)
er = np.sqrt( dfdx**2 * sigx**2 + dfdy**2 * sigy**2 )
return er
if __name__ == "__main__":
NPTS = 100
SHIFTVAL = 4
NOISE = 1e-3
# generate some noisy data and simulate a shift
x = np.linspace(0,4*np.pi,NPTS)
y = signal.gaussian(NPTS, std=4) * np.random.normal(1,NOISE,NPTS)
shifted = np.roll( signal.gaussian(NPTS, std=4) ,SHIFTVAL) * np.random.normal(1,NOISE,NPTS)
# np roll can only do integer shifts
# align the shifted spectrum back to the real
s = phase_spectra(y, shifted, [10,190])
print('phase shift value to align is',s)
# chi squared alignment at native resolution
s = align_spectra(y, shifted, [10,190],init=-4,b=1)
print('chi square alignment',s)
plt.plot(x,y,label='original data')
plt.plot(x,shifted,label='shifted data')
plt.plot(x,shift(shifted,s),label='aligned data') # use shift function to linearly interp data
plt.legend(loc='best')
plt.show()
NPTS = 100
SHIFTVAL = 4
NOISE = 1e-3
# generate some noisy data and simulate a shift
x = np.linspace(0,4*np.pi,NPTS)
y = signal.gaussian(NPTS, std=4) * np.random.normal(1,NOISE,NPTS)
shifted = np.roll( signal.gaussian(NPTS, std=4) ,SHIFTVAL) * np.random.normal(1,NOISE,NPTS)
# np roll can only do integer shifts
# align the shifted spectrum back to the real
s = phase_spectra(y, shifted, [10,190])
print('phase shift value to align is',s)
# chi squared alignment at native resolution
s = align_spectra(y, shifted, [10,190],init=-4,b=1)
print('chi square alignment',s)
plt.plot(x,y,'k-',label='original data')
plt.plot(x,shifted,'r-',label='shifted data')
plt.plot(x,shift(shifted,s),'o--',label='aligned data') # use shift function to linearly interp data
plt.legend(loc='best')
plt.show()
###Output
_____no_output_____ |
data_engineering/setup/Install_Python_3_on_Ubuntu.ipynb | ###Markdown
---title: "Install Python 3 on Ubuntu"author: "Kedar Dabhadkar"date: 2021-02-22T05:46:18.464519description: "Steps to install Python3.7 on an Ubuntu machine."type: technical_notedraft: false---
###Code
### Setup Python3.7 on Ubuntu
###Output
_____no_output_____
###Markdown
I often use a Google Cloud VM to for my projects. Depending on the type of the machine, it may or may not come with a preinstalled version of Python 3. Here's a simple snippet that I put together from multiple sources to install Python and configure pip on the environment.
###Code
sudo apt update
sudo add-apt-repository ppa:deadsnakes/ppa
sudo apt-get update
sudo apt-get install python3.7
sudo apt-get install python3.7-dev
sudo apt-get install python3.7-venv
sudo apt install python3-pip
###Output
_____no_output_____ |
notebooks/rolldecay/04_simplified_ikeda/05.4_maa_mdl_db_simplified_ikeda_regression.ipynb | ###Markdown
Simplified Ikeda regression
###Code
from jupyterthemes import jtplot
jtplot.style(theme='onedork', context='notebook', ticks=True, grid=False)
%matplotlib inline
%load_ext autoreload
%autoreload 2
import pandas as pd
pd.options.display.max_rows = 999
pd.options.display.max_columns = 999
import numpy as np
import matplotlib.pyplot as plt
from pylab import rcParams
rcParams['figure.figsize'] = 15, 5
from rolldecayestimators.simplified_ikeda import calculate_roll_damping
from rolldecayestimators import equations
import sympy as sp
from rolldecayestimators import symbols
from rolldecayestimators.substitute_dynamic_symbols import lambdify
from mdldb.tables import Run
from rolldecayestimators.ikeda_estimator import IkedaQuadraticEstimator
from rolldecayestimators.transformers import CutTransformer, LowpassFilterDerivatorTransformer, ScaleFactorTransformer, OffsetTransformer
from sklearn.pipeline import Pipeline
from rolldecay import database
import data
import copy
from rolldecay import database
from sympy.physics.vector.printing import vpprint, vlatex
from IPython.display import display, Math, Latex
df_ikeda = database.load('rolldecay_simplified_ikeda',limit_score=0.5,
exclude_table_name='rolldecay_exclude')
#mask = (df_ikeda['ship_speed']==0) ## Zero speed!
#df_ikeda = df_ikeda.loc[mask].copy()
df_ikeda.describe()
df_ikeda['score'].mean()
B1_zeta_lambda = lambdify(sp.solve(equations.zeta_B1_equation, symbols.B_1)[0])
B2_d_lambda = lambdify(sp.solve(equations.d_B2_equation, symbols.B_2)[0])
def linearize(result):
g=9.81
rho=1000
m = result.Volume*rho/(result.scale_factor**3)
#result['B_1'] = B1_zeta_lambda(GM=result.gm, g=g, m=m, omega0=result.omega0,
# zeta=result.zeta)
#result['B_2'] = B2_d_lambda(GM=result.gm, g=g, m=m, omega0=result.omega0, d=result.d)
factor=1.0 # Factor
phi_a = result.phi_start.abs()/factor # Radians
B_e_lambda=lambdify(sp.solve(equations.B_e_equation, symbols.B_e)[0])
result['B_e'] = B_e_lambda(B_1=result['B_1'], B_2=result['B_2'], omega0=result.omega0,
phi_a=phi_a)
return result
#df_ikeda = linearize(df_ikeda)
df_ikeda.describe()
df_ikeda['score'].hist(bins=30)
df_direct = database.load('rolldecay_quadratic_b',limit_score=0.7)
mask = (df_direct['ship_speed']==0) ## Zero speed!
df_direct = df_direct.loc[mask].copy()
df_direct = linearize(df_direct)
mask = (df_direct['B_e'] < df_direct['B_e'].quantile(q=0.98))
df_direct=df_direct.loc[mask]
df_direct['score'].hist(bins=30)
df_direct['B_e'].hist(bins=30)
df_ikeda['B_e'].hist(bins=30)
df_compare = pd.merge(left=df_ikeda, right=df_direct, how='left', left_index=True, right_index=True,
suffixes=('_ikeda',''))
#df_compare.dropna(inplace=True,subset=['B_e','B_e_ikeda'])
fig,ax=plt.subplots()
df_compare.plot(x='B_e', y='B_e_ikeda', style='o', alpha=0.5, ax=ax)
xlim=ax.get_xlim()
ylim=ax.get_ylim()
ax.plot([0,xlim[1]],[0,xlim[1]],'r-')
ax.set_xlim(xlim)
ax.set_ylim(ylim)
ax.set_aspect('equal', 'box')
ax.set_xlabel('$B_e$ (Model test)')
ax.set_ylabel('$B_e$ Ikeda')
fig,ax=plt.subplots()
N=20
bins = np.linspace(df_compare['B_e_ikeda'].min(),df_compare['B_e_ikeda'].max(),N)
df_compare['B_e'].hist(bins=bins, ax=ax, label='linear')
df_compare['B_e_ikeda'].hist(bins=bins, ax=ax, label='linear', alpha=0.5)
###Output
_____no_output_____ |
recommender games/context based.ipynb | ###Markdown
import data
###Code
from pymongo import MongoClient
client = MongoClient("mongodb://analytics:[email protected]:27017,gamerec-shard-00-01-nbybv.mongodb.net:27017,gamerec-shard-00-02-nbybv.mongodb.net:27017/test?ssl=true&replicaSet=gamerec-shard-0&authSource=admin&retryWrites=true")
print(client.gamerec)
client.database_names()
db = client.cleaned_full_comments
collection = db.cleaned_full_comments
import pandas as pd
comm_df= pd.DataFrame(list(collection.find({}, {'_id': 0})))
###Output
_____no_output_____
###Markdown
Comments aggregation per game
###Code
# Use this to get each unique game with platform. Since a game can be on multiple platforms.
df_pivot = pd.pivot_table(comm_df, values = ['Userscore'], index = ['Title', 'Platform'])
unique_games_platform_list = df_pivot.index
# Assign unique ID to each game/platform, in case we lose the index
game_id_list = []
game_list = []
platform_list = []
game_id = 0
for unique_game in unique_games_platform_list:
game_id += 1
game_id_list.append(game_id)
game_list.append(unique_game[0])
platform_list.append(unique_game[1])
game_id_df = pd.DataFrame()
game_id_df['game_id'] = game_id_list
game_id_df['Title'] = game_list
game_id_df['Platform'] = platform_list
game_id_df.head()
# Assign unique ID to each user, in case we lose the index
users_list = set(comm_df['Username'])
user_id_list = []
user_id = 0
for user in users_list:
user_id += 1
user_id_list.append(user_id)
user_id_df = pd.DataFrame()
user_id_df['user_id'] = user_id_list
user_id_df['Username'] = users_list
user_id_df.head()
# Merge game_id_df and user_id_df to original df to apply the ID's
df = pd.merge(comm_df, game_id_df, on=['Title','Platform'])
df = pd.merge(df, user_id_df, on='Username')
df = df.reindex(['game_id','Title','Platform','Userscore', 'user_id','Username','Comment'], axis=1)
df.head()
df.loc[df['game_id']==573]
df['Comment'] = df['Comment'].apply(str)  # make sure every comment is stored as a string
# Aggregate all comments for each game/platform in a blob
game_id_comment_list = []
comments_for_games_list = []
#df.applymap(str).loc[df['Comment']]
for i in game_id_list:
review_text = df[df['game_id'] == i]['Comment']
#map to string values in case of int or float values
d = " ".join([str(i) for i in review_text])
game_id_comment_list.append(i)
comments_for_games_list.append(d)
comments_for_games_df = pd.DataFrame()
comments_for_games_df['game_id'] = game_id_comment_list
comments_for_games_df['reviews'] = comments_for_games_list
# Each game should have a blob of text like this
comments_for_games_df['reviews'][5]
###Output
_____no_output_____
###Markdown
Remove all punctuations
###Code
#Remove all punctuations
comments_for_games_df['reviews'] = comments_for_games_df['reviews'].str.replace('[^\w\s]',' ')
###Output
_____no_output_____
###Markdown
Remove stopwords
###Code
game_titles = list(df['Title'].unique())
lowercase_game_titles = [title.lower().split(': ') for title in game_titles]
import nltk
#nltk.download()
###Output
_____no_output_____
###Markdown
NLTK is a very useful Python library for converting text (words) into something a computer can understand, a step referred to as pre-processing. One of the major forms of pre-processing is filtering out useless data. In natural language processing, such useless words are referred to as stop words. For an intro see https://www.geeksforgeeks.org/removing-stop-words-nltk-python/
###Code
from nltk.util import ngrams
from nltk.corpus import stopwords
'''
    Some game titles have two parts (e.g. a subtitle after ': '), and here we are going to split them, since many reviewers only write out part of the game title.
'''
lowercase_game_titles = [title.lower().split(': ') for title in game_titles]
titles_to_remove = []
for title in lowercase_game_titles:
if len(title) == 2:
titles_to_remove.append(title[0])
titles_to_remove.append(title[1])
else:
titles_to_remove.append(title[0])
# Add all titles to stopwords list
stop = stopwords.words('english')
stop.extend(titles_to_remove)
###Output
_____no_output_____
###Markdown
In order to extract the most relevant content from the comments for our purpose, we will remove the most common and the rarest words. This helps make sure we don't have too much noise in the data.
###Code
# Can add some common words into stopwords list; filtered through the top 100 common words and added to stop list
word_frequency = pd.Series(' '.join(comments_for_games_df['reviews']).split()).value_counts()
word_frequency[0:100]
###Output
_____no_output_____
###Markdown
remove rare words
###Code
# Collect the rarest words (the long tail of the word-frequency distribution) to add to the stopword list
rare_words = word_frequency[-200900:]
rare_words = list(rare_words.index)
#rare_words
stop.extend(rare_words)
stopword_dict = {}
for stopword in stop:
stopword_dict[stopword] = 1
comments_for_games_df['reviews'] = comments_for_games_df['reviews'].apply(lambda x: " ".join(x for x in x.split() if x not in stopword_dict))
###Output
_____no_output_____
###Markdown
LemmatizationAmong the words in the blob texts we formed earlier, there are words used in different tenses and with different inflections. So, in order to keep the number of distinct words in our blob texts to a minimum, we have to reduce these words to their base form. For this, we will use an NLP technique called lemmatization: "Lemmatization usually refers to doing things properly with the use of a vocabulary and morphological analysis of words, normally aiming to remove inflectional endings only and to return the base or dictionary form of a word, which is known as the lemma. If confronted with the token saw, stemming might return just s, whereas lemmatization would attempt to return either see or saw depending on whether the use of the token was as a verb or a noun."Compare this to the stemming technique, which chops off suffixes such as '-er', '-ing', '-ed', etc., but may not leave you with a real word; lemmatizing will always output a real word, but it is much more computationally intensive. Here, the lemma is used as the root form of each word.
###Code
from textblob import Word
comments_for_games_df['reviews'] = comments_for_games_df['reviews'].apply(lambda x: " ".join([Word(word).lemmatize() for word in x.split()]))
###Output
_____no_output_____
###Markdown
Further, we have to learn the vocabulary of the blob texts and then transform them into a dataframe that has meaning and can be used for building models.
###Code
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.decomposition import TruncatedSVD, NMF, LatentDirichletAllocation
from sklearn.neighbors import NearestNeighbors
from gensim.models import word2vec
train_data_reviews = comments_for_games_df['reviews']
###Output
_____no_output_____
###Markdown
CountVectorizer counts the occurrences of each word in its vocabulary, so extremely common words like 'the' and 'and' become very prominent features even though they add little meaning to the text. CountVectorizer has a few parameters: - stop_words: a list of words you don't want to use as features (performed earlier) - ngram_range: an n-gram is a string of n words in a row; setting ngram_range=(a,b), where a is the minimum and b the maximum size of the n-grams you want to include as features - min_df, max_df: the minimum and maximum document frequencies that words/n-grams must have to be used as features; min_df defaults to 1 (int) and max_df defaults to 1.0 (float) - max_features: the maximum number of (most frequent) features to keep. TfidfVectorizer converts text to word-frequency vectors re-weighted by inverse document frequency (it has the same parameters as CountVectorizer).
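A toy sketch of the two vectorizers on a made-up three-document corpus (illustration only, not the review data):

```python
# Toy corpus: compare raw n-gram counts with TF-IDF-weighted values
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer

docs = ["great open world game", "great racing game", "boring racing story"]
cv = CountVectorizer(ngram_range=(1, 2))
tfidf = TfidfVectorizer(ngram_range=(1, 2))
print(cv.fit_transform(docs).toarray())              # integer counts of unigrams/bigrams
print(tfidf.fit_transform(docs).toarray().round(2))  # same features, down-weighted if they appear in many documents
```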
###Code
count_vectorizer = CountVectorizer(ngram_range=(1, 2),
stop_words='english',
token_pattern="\\b[a-z][a-z]+\\b",
lowercase=True,
max_df = 0.6)
tfidf_vectorizer = TfidfVectorizer(ngram_range=(1, 2),
stop_words='english',
token_pattern="\\b[a-z][a-z]+\\b",
lowercase=True,
max_df = 0.6)
cv_data = count_vectorizer.fit_transform(train_data_reviews)
tfidf_data = tfidf_vectorizer.fit_transform(train_data_reviews)
def display_topics(model, feature_names, no_top_words, topic_names=None):
for ix, topic in enumerate(model.components_):
if not topic_names or not topic_names[ix]:
print("\nTopic ", ix)
else:
print("\nTopic: '",topic_names[ix],"'")
print(", ".join([feature_names[i]
for i in topic.argsort()[:-no_top_words - 1:-1]]))
###Output
_____no_output_____
###Markdown
Singular Value Decomposition, or SVD for short, is a dimensionality reduction technique that is closely related to both PCA and matrix factorization. Applied to a term-document matrix (as TruncatedSVD below), it is also known as Latent Semantic Analysis (LSA). More about the technique: https://blog.statsbot.co/singular-value-decomposition-tutorial-52c695315254
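A minimal sketch of what TruncatedSVD does to a document-term matrix (toy documents; the real run below uses 20 components):

```python
# Toy sketch: TruncatedSVD compresses a sparse document-term matrix into a few latent "topic" axes
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import CountVectorizer

docs = ["zelda dungeon puzzle", "zelda puzzle boss", "fifa soccer league", "soccer league season"]
X = CountVectorizer().fit_transform(docs)
svd = TruncatedSVD(n_components=2)
print(svd.fit_transform(X).round(2))   # each document as a 2-d latent vector
print(svd.explained_variance_ratio_)   # variance captured by each latent axis
```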
###Code
n_comp = 20
lsa = TruncatedSVD(n_components=n_comp)
lsa_cv_data = lsa.fit_transform(cv_data)
lsa_tfidf_data = lsa.fit_transform(tfidf_data)
# Display topics for LSA on CountVectorizer (note: `lsa` was re-fit on the TF-IDF matrix above, so these loadings actually come from the TF-IDF fit)
display_topics(lsa,count_vectorizer.get_feature_names(),15)
# Display topics for LSA on TF-IDF Vectorizer
display_topics(lsa,tfidf_vectorizer.get_feature_names(),15)
# Display topics for NMF on TF-IDF Vectorizer
#display_topics(nmf,tfidf_vectorizer.get_feature_names(),15)
###Output
_____no_output_____
###Markdown
LDA, or latent Dirichlet allocation, is a "generative probabilistic model" of a collection of composites made up of parts. In terms of topic modeling, the composites are documents and the parts are words and/or phrases (n-grams). The probabilistic topic model estimated by LDA consists of two tables (matrices). The first table describes the probability (chance) of selecting a particular part when sampling a particular topic (category). The second table describes the chance of selecting a particular topic when sampling a particular document or composite.
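Those two tables map directly onto scikit-learn objects, sketched here on toy documents: `components_` (topic to word weights) and the output of `fit_transform` (document to topic proportions).

```python
# Toy sketch of LDA's two matrices on four tiny documents
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

docs = ["zelda dungeon puzzle", "zelda puzzle boss", "fifa soccer league", "soccer league season"]
X = CountVectorizer().fit_transform(docs)
lda_toy = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topic = lda_toy.fit_transform(X)   # rows: documents, columns: topic proportions
topic_word = lda_toy.components_       # rows: topics, columns: (unnormalized) word weights
print(doc_topic.round(2))
print(topic_word.round(2))
```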
###Code
n_comp = 20
lda = LatentDirichletAllocation(n_components=n_comp)
lda_cv_data = lda.fit_transform(cv_data)
lda_tfidf_data = lda.fit_transform(tfidf_data)
# Display topics for LDA on CountVectorizer
display_topics(lda,count_vectorizer.get_feature_names(),5)
# Display topics for LDA on TF-IDF Vectorizer
display_topics(lda,tfidf_vectorizer.get_feature_names(),5)
###Output
Topic 0
zelda, resident evil, resident, zelda game, survival horror
Topic 1
yakuza, glados, ufc, suikoden, overcooked
Topic 2
dark cloud, bayonetta, onimusha, perfect perfect, outland
Topic 3
drake, paper mario, chronicles, birthright, brawl
Topic 4
ezio, ootp, mycareer, abe, cuphead
Topic 5
telltale, iv, borderlands, splinter, splinter cell
Topic 6
warhammer, new vegas, samus, nathan, metroid game
Topic 7
hourglass, warioware, nascar, played career, spelunky
Topic 8
majora, majora mask, best zelda, hyrule, oot
Topic 9
dj hero, minecraft, injustice, rally, dj
Topic 10
multiplayer, combat, puzzle, enemy, car
Topic 11
fifa, soccer, pes, league, ops
Topic 12
spyro, ori, hawk, homeworld, tony hawk
Topic 13
civ, dirt, rally, destiny, kojima
Topic 14
arkham, batman, asylum, arkham asylum, arkham city
Topic 15
pokemon, pokemon game, new pokemon, kuni, ni kuni
Topic 16
faction, wheel, gba, best racing, handling
Topic 17
braid, bayonetta, layton, torgue, freespace
Topic 18
mycareer, dreamcast, disciples, spike lee, steins gate
Topic 19
pikmin, finch, tony hawk, psychonauts, sin punishment
###Markdown
Now that we have transformed the data into a cleaner and more usable format, we preprocess it (scaling) and cluster it so it can be used with a machine learning model.
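The cells below do this for each of the topic-model outputs; as a compact sketch of the scale, cluster, and score loop (on random stand-in data, not the real topic vectors):

```python
# Sketch of the scaling -> KMeans -> silhouette loop used below, on random 20-d stand-in data
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

toy = np.random.RandomState(0).normal(size=(100, 20))
toy_scaled = StandardScaler().fit_transform(toy)
for k in (2, 3, 4):
    labels = KMeans(n_clusters=k, random_state=0).fit_predict(toy_scaled)
    print(k, round(silhouette_score(toy_scaled, labels), 3))  # higher silhouette = better-separated clusters
```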
###Code
from sklearn.cluster import KMeans
from sklearn.manifold import TSNE
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import silhouette_score, silhouette_samples
# Fit NMF topic models as well (nmf_cv_data / nmf_tfidf_data are used below), mirroring the LSA/LDA fits above
nmf = NMF(n_components=n_comp)
nmf_cv_data = nmf.fit_transform(cv_data)
nmf_tfidf_data = nmf.fit_transform(tfidf_data)
ssx_nmf_cv = StandardScaler().fit_transform(nmf_cv_data)
ssx_nmf_tfidf = StandardScaler().fit_transform(nmf_tfidf_data)
ssx_lsa_cv = StandardScaler().fit_transform(lsa_cv_data)
ssx_lsa_tfidf = StandardScaler().fit_transform(lsa_tfidf_data)
ssx_lda_cv = StandardScaler().fit_transform(lda_cv_data)
ssx_lda_tfidf = StandardScaler().fit_transform(lda_tfidf_data)
def get_cluster_centers(X, labels, k_num):
CC_list = []
for k in range(k_num):
# get the mean coordinates of each cluster
CC_list.append(np.mean(X[labels == k], axis = 0))
return CC_list
# for each cluster substract the mean from each data point to get the error
# then get the magnitude of each error, square it, and sum it
def get_SSE(X, labels):
k_num = len(np.unique(labels))
CC_list = get_cluster_centers(X, labels, k_num)
CSEs = []
for k in range(k_num):
# for each cluster of k we get the coordinates of how far off each point is to the cluster
error_cords = X[labels == k] - CC_list[k]
# square the coordinates and sum to get the magnitude squared
error_cords_sq = error_cords ** 2
error_mag_sq = np.sum(error_cords_sq, axis = 1)
# since we already have the magnitude of the error squared we can just take the sum for the cluster
CSE = np.sum(error_mag_sq)
CSEs.append(CSE)
# sum each cluster's sum of squared errors
return sum(CSEs)
def get_silhouette_sse(vectorized_data, cluster_range):
Sil_coefs = []
SSEs = []
for k in cluster_range:
km = KMeans(n_clusters=k, random_state=25)
km.fit(vectorized_data)
labels = km.labels_
Sil_coefs.append(silhouette_score(vectorized_data, labels, metric='euclidean'))
SSEs.append(get_SSE(vectorized_data, labels))
return cluster_range, Sil_coefs, SSEs
# used to show silhouette scores in detail
for k in range(2,15):
plt.figure(dpi=120, figsize=(8,6))
ax1 = plt.gca()
km = KMeans(n_clusters=k, random_state=1)
km.fit(ssx_lda_cv)
labels = km.labels_
silhouette_avg = silhouette_score(ssx_lda_cv, labels)
print("For n_clusters =", k,
"The average silhouette_score is :", silhouette_avg)
# Compute the silhouette scores for each sample
sample_silhouette_values = silhouette_samples(ssx_lda_cv, labels)
y_lower = 10
for i in range(k):
# Aggregate the silhouette scores for samples belonging to
# cluster i, and sort them
ith_cluster_silhouette_values = \
sample_silhouette_values[labels == i]
ith_cluster_silhouette_values.sort()
size_cluster_i = ith_cluster_silhouette_values.shape[0]
y_upper = y_lower + size_cluster_i
color = plt.cm.jet(float(i) / k)
ax1.fill_betweenx(np.arange(y_lower, y_upper),
0, ith_cluster_silhouette_values,
facecolor=color, edgecolor=color, alpha=0.7)
# Label the silhouette plots with their cluster numbers at the middle
ax1.text(-0.05, y_lower + 0.5 * size_cluster_i, str(i))
# Compute the new y_lower for next plot
y_lower = y_upper + 10 # 10 for the 0 samples
ax1.set_title("The silhouette plot for the various clusters.")
ax1.set_xlabel("The silhouette coefficient values")
ax1.set_ylabel("Cluster label")
# The vertical line for average silhouette score of all the values
ax1.axvline(x=silhouette_avg, color="red", linestyle="--")
ax1.set_yticks([]) # Clear the yaxis labels / ticks
ax1.set_xticks([-0.1, 0, 0.2, 0.4, 0.6, 0.8, 1])
km = KMeans(n_clusters=20, n_init=100, max_iter=1000, random_state=25)
ypred = km.fit_predict(ssx_nmf_cv)
tsne_model = TSNE(n_components=2, random_state=25, verbose=2)
low_dim_nmf_cv = tsne_model.fit_transform(ssx_nmf_cv)
plt.figure(dpi=150)
plt.scatter(low_dim_nmf_cv[:,0], low_dim_nmf_cv[:,1], c=km.labels_, cmap=plt.cm.rainbow)
plt.show()
def get_recommendations(first_article, model, vectorizer, training_vectors):
'''
first_article: (string) An article that we want to use to find similar articles
model: (a fit dimensionality reducer) Projects vectorized words onto a subspace
(uses NMF or SVD/LSA typically)
vectorizer: Vectorizes first_article
training_vectors: (numpy array shape) a (num_docs in training) x (NMF/SVD/LSA) dimensional array.
Used to train NearestNeighbors model
'''
new_vec = model.transform(
vectorizer.transform([first_article]))
nn = NearestNeighbors(n_neighbors=100, metric='cosine', algorithm='brute')
nn.fit(training_vectors)
results = nn.kneighbors(new_vec)
return results[1][0]
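# Hypothetical usage of the helper above (equivalent to the manual nearest-neighbour lookup a few cells below):
#   get_recommendations(train_data['reviews'][data_index], lsa, tfidf_vectorizer, lsa_tfidf_data)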
game_id_df[game_id_df.Title == 'The Legend of Zelda: Breath of the Wild']
data_index = game_id_df[(game_id_df.Title == 'The Legend of Zelda: Breath of the Wild') & (game_id_df.Platform == 'Switch')].index[0]
train_data['reviews'][data_index]
new_datapoint = [train_data['reviews'][data_index]]
new_datapoint
new_vec = lsa.transform(tfidf_vectorizer.transform(new_datapoint))
nn = NearestNeighbors(n_neighbors=100, metric='cosine', algorithm='brute')
nn.fit(lsa_tfidf_data)
result = nn.kneighbors(new_vec)
result[1][0]
for r in result[1][0]:
#g_id = train_data['game_id'][r]
game = game_id_df.Title[r]
plat = game_id_df.Platform[r]
print(f'{game} on {plat}')
###Output
_____no_output_____ |
notebooks/Dataset F - Indian Liver Patient/Synthetic data generation/WGANGP Dataset F - Indian Liver Patient.ipynb | ###Markdown
This GAN is based on the healthGAN description (https://github.com/yknot/ESANN2019)
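For reference, the standard WGAN-GP critic objective (general formulation, with generator distribution $\mathbb{P}_g$, real-data distribution $\mathbb{P}_r$, gradient-penalty weight $\lambda$, and $\hat{x}$ sampled uniformly along straight lines between real and generated points) is:

$$ L = \mathbb{E}_{\tilde{x} \sim \mathbb{P}_g}\big[D(\tilde{x})\big] - \mathbb{E}_{x \sim \mathbb{P}_r}\big[D(x)\big] + \lambda \, \mathbb{E}_{\hat{x} \sim \mathbb{P}_{\hat{x}}}\Big[\big(\Vert \nabla_{\hat{x}} D(\hat{x}) \Vert_2 - 1\big)^2\Big] $$

The gradient-penalty term replaces the weight clipping of the original WGAN and enforces the 1-Lipschitz constraint on the critic $D$.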
###Code
import os
import numpy as np
from sklearn.preprocessing import LabelEncoder, OneHotEncoder
from numpy import asarray
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
import matplotlib.pyplot as plt
HOME_PATH = '' #home path of the project
TRAIN_FILE = 'REAL DATASETS/TRAIN DATASETS/F_IndianLiverPatient_Real_Train.csv'
SYNTHETIC_FILE = 'SYNTHETIC DATASETS/WGANGP/F_IndianLiverPatient_Synthetic_WGANGP.csv'
#define directory of functions and actual directory
FUNCTIONS_DIR = HOME_PATH + 'STDG APPROACHES'
ACTUAL_DIR = os.getcwd()
#change directory to functions directory
os.chdir(FUNCTIONS_DIR)
#import functions for univariate resemblance analisys
from preprocessing import DataPreProcessor
#change directory to actual directory
os.chdir(ACTUAL_DIR)
from ydata_synthetic.synthesizers.regular import WGAN_GP
print('Functions imported!!')
###Output
Functions imported!!
###Markdown
Data Preprocessing
###Code
import pandas as pd
real_data = pd.read_csv(HOME_PATH + TRAIN_FILE)
cat_cols = ['gender','class']
for c in cat_cols :
real_data[c] = real_data[c].astype('category')
data_cols = real_data.columns
data_train = real_data
real_data
# data configuration
preprocessor = DataPreProcessor(data_train)
data_train = preprocessor.preprocess_train_data()
data_train
###Output
_____no_output_____
###Markdown
Train the Model Next, let's define the neural network for generating synthetic data. We will be using a [GAN](https://www.wikiwand.com/en/Generative_adversarial_network) network that comprises a generator and a discriminator (here a critic, since this is a WGAN-GP) which try to beat each other and, in the process, learn a vector representation of the data. The model was taken from a [Github repository](https://github.com/ydataai/gan-playground) where it is used to generate synthetic data for credit card fraud data. Next, let's define the training parameters for the GAN network.
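As a rough sketch of the key ingredient (an illustrative implementation assuming a Keras-style `critic` callable; it is not the internals of `ydata_synthetic`'s `WGAN_GP`):

```python
# Illustrative WGAN-GP gradient penalty (assumes a Keras-style `critic` model; not the library's own code)
import tensorflow as tf

def gradient_penalty(critic, real, fake):
    # real, fake: float32 tensors of shape (batch, n_features)
    eps = tf.random.uniform([tf.shape(real)[0], 1], 0.0, 1.0)
    interp = eps * real + (1.0 - eps) * fake          # points between paired real/fake samples
    with tf.GradientTape() as tape:
        tape.watch(interp)
        scores = critic(interp, training=True)
    grads = tape.gradient(scores, interp)
    grad_norm = tf.sqrt(tf.reduce_sum(tf.square(grads), axis=1) + 1e-12)
    return tf.reduce_mean((grad_norm - 1.0) ** 2)     # penalize ||grad|| deviating from 1
```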
###Code
# training configuration
noise_dim = 32
dim = 128
batch_size = 16
log_step = 200
epochs = 5000+1
learning_rate = 5e-4
beta_1 = 0.5
beta_2 = 0.9
models_dir = 'my_model_datasetF/'
data_dim = data_train.shape[1]
print('Shape of data: ', data_train.shape)
#Define the GAN and training parameters
gan_args = [batch_size, learning_rate, beta_1, beta_2, noise_dim, data_dim, dim]
train_args = ['', epochs, log_step]
###Output
_____no_output_____
###Markdown
Finally, let's run the training and see if the model is able to learn something.
###Code
!mkdir my_model_datasetF
!mkdir my_model_datasetF/gan
!mkdir my_model_datasetF/gan/saved
#Training the GAN model chosen: Vanilla GAN, CGAN, DCGAN, etc.
synthesizer = WGAN_GP(gan_args, n_critic=2)
synthesizer.train(data_train, train_args)
synthesizer.generator.summary()
synthesizer.critic.summary()
###Output
Model: "model_3"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_4 (InputLayer) [(16, 13)] 0
_________________________________________________________________
dense_12 (Dense) (16, 512) 7168
_________________________________________________________________
dropout_2 (Dropout) (16, 512) 0
_________________________________________________________________
dense_13 (Dense) (16, 256) 131328
_________________________________________________________________
dropout_3 (Dropout) (16, 256) 0
_________________________________________________________________
dense_14 (Dense) (16, 128) 32896
_________________________________________________________________
dense_15 (Dense) (16, 1) 129
=================================================================
Total params: 171,521
Trainable params: 171,521
Non-trainable params: 0
_________________________________________________________________
###Markdown
Generate data
###Code
size = len(data_train)
generated_samples = synthesizer.sample(size)
generated_samples.columns = data_train.columns
generated_samples
###Output
Synthetic data generation: 100%|██████████| 30/30 [00:00<00:00, 228.13it/s]
###Markdown
Transform and process generated data
###Code
synthetic_data = preprocessor.transform_data(generated_samples)
synthetic_data = synthetic_data[0:len(real_data)]
synthetic_data
print(real_data.dtypes, '\n', synthetic_data.dtypes)
print(real_data.shape, synthetic_data.shape)
real_data.describe()
synthetic_data.describe()
#Save generated samples
synthetic_data.to_csv(HOME_PATH + SYNTHETIC_FILE, index=False)
###Output
_____no_output_____ |
lab_3.2.ipynb | ###Markdown
Course "Computational Practicum". Assignment No. 3.2: Finding the derivatives of a table-defined function using numerical differentiation formulas. Kovalchukov Alexander, group 223. Variant No. 4. Problem statement: For a table-defined function $f(x)$ with equally spaced nodes with step $h$, find the values of its first and second derivatives at the nodes of the table, accurate up to terms of order $h^2$. To do this, use the well-known elementary numerical differentiation formulas that have an error of order $O(h^2)$. Let the nodes $x_0, x_1, \dots, x_n$ be equally spaced, i.e. $x_{i+1} = x_i + h$ $(i = 0, 1, \dots, n - 1)$, and let the values of the function $f(x)$ at these nodes be known, $y_i = f(x_i)$. Then the first derivative is computed by the following formulas: $f'(x_i) = \frac{y_{i+1} - y_{i-1}}{2h} + O(h^2), \; i = 1, 2, \dots, n-1$, $f'(x_i) = \frac{ - 3 y_{i} + 4 y_{i+1} - y_{i+2}}{2h} + O(h^2), \; i = 0$, $f'(x_i) = \frac{ 3 y_{i} - 4 y_{i-1} + y_{i-2}}{2h} + O(h^2), \; i = n$. The second derivative is computed by the formula $f''(x_i) = \frac{y_{i+1} - 2 y_i + y_{i-1}}{h^2} + O(h^2), \; i = 1, \dots, n-1$. Note that the second-derivative formula of accuracy order $h^2$ does not allow the second derivative to be computed at the end nodes of the table. The solution is illustrated with the function $f(x) = e^{1.5 * 5 * x}$. Problem parameters: $a$ - the first node, $m$ - the number of values in the table + 1, $h$ - the node spacing. The parameters $a, m, h$ are entered by the user from the keyboard. The program is written in Python using the interactive Jupyter notebook environment.
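As a quick, independent sanity check of the $O(h^2)$ claim for the central difference (a toy check, separate from the program below): halving $h$ should cut the error roughly fourfold.

```python
# Toy check that the central difference (f(x+h) - f(x-h)) / (2h) has error of order h^2
from math import exp

f = lambda x: exp(7.5 * x)         # the test function f(x) = e^(1.5*5*x)
df = lambda x: 7.5 * exp(7.5 * x)  # its exact derivative
x0 = 0.5
for h in (1e-2, 5e-3, 2.5e-3):
    approx = (f(x0 + h) - f(x0 - h)) / (2 * h)
    print(h, abs(approx - df(x0)))  # error shrinks ~4x each time h is halved
```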
###Code
import pandas as pd
from math import exp
from numpy import arange
# Define the function and its exact derivatives in the code
def f(x):
return exp(1.5 * 5 * x)
def df(x):
return 1.5 * 5 * exp(1.5 * 5 * x)
def d2f(x):
return 1.5**2 * 5**2 * exp(1.5 * 5 * x)
def derivative():
    # Read the input parameters
print('Введите a - начальный узел:', end=' ')
a = float(input())
print('Введите m - количество узлов в таблице + 1:', end=' ')
m = int(input())
print('Введите h - шаг узлов:', end=' ')
h = float(input())
    # If the table has fewer than 3 values, the second derivative cannot be computed
while m < 2:
print("Слишком мало значений в таблице. Введите другое m")
m = int(input())
    # Fill the table with equally spaced nodes and the function values at them
table = {"x": [i for i in arange(a, a + (m + 0.5) * h, h)],
"f(x)": [f(x) for x in arange(a, a + (m + 0.5) * h, h)],
"f'(x)т": [df(x) for x in arange(a, a + (m + 0.5) * h, h)],
"f'(x)чд": [0 for i in range(m + 1)],
"абс.факт.погр. f'": [0 for i in range(m + 1)],
"относ.погр. f'": [0 for i in range(m + 1)],
"f''(x)т": [d2f(x) for x in arange(a, a + (m + 0.5) * h, h)],
"f''(x)чд": [0 for i in range(m + 1)],
"абс.факт.погр. f''": [0 for i in range(m + 1)],
"относ.погр. f''": [0 for i in range(m + 1)]
}
    # Compute the first derivatives at the boundary nodes
table["f'(x)чд"][0] = (-3 * table["f(x)"][0] + 4 * table["f(x)"][1] - table["f(x)"][2]) / (2 * h)
table["f'(x)чд"][m] = (3 * table["f(x)"][m] - 4 * table["f(x)"][m - 1] + table["f(x)"][m - 2]) / (2 * h)
    # Compute the first derivatives at the interior nodes
for i in range(1, m):
table["f'(x)чд"][i] = (table["f(x)"][i+1] - table["f(x)"][i - 1]) / (2 * h)
    # Absolute error of the first derivative
table["абс.факт.погр. f'"] = [abs(table["f'(x)т"][i] - table["f'(x)чд"][i]) for i in range(m + 1)]
table["относ.погр. f'"] = [table["абс.факт.погр. f'"][i] / table["f'(x)т"][i] for i in range(m + 1)]
    # Second derivatives at the boundary nodes (the O(h^2) formula does not apply there, so they are set to 0)
table["f''(x)чд"][0] = 0
table["f''(x)чд"][m] = 0
    # Compute the second derivatives at the interior nodes
for i in range(1, m):
table["f''(x)чд"][i] = (table["f(x)"][i+1] - 2 * table["f(x)"][i] + table["f(x)"][i - 1]) / (h * h)
    # Absolute error of the second derivative
table["абс.факт.погр. f''"] = [abs(table["f''(x)т"][i] - table["f''(x)чд"][i]) for i in range(m + 1)]
table["относ.погр. f''"] = [table["абс.факт.погр. f''"][i] / table["f''(x)т"][i] for i in range(m + 1)]
table["абс.факт.погр. f''"][0] = table["абс.факт.погр. f''"][-1] = 0
    # Print the results
data = pd.DataFrame(table)
#data = data.drop(["f'(x)т", "f''(x)т"], axis=1)
pd.set_option('display.max_rows', data.shape[0]+1)
print(data)
while True:
derivative()
print("\nВведите q, чтобы завершить программу, или любую другую"
" клавишу, чтобы продолжить и ввести новые значения a, m, h:", end=' ')
k = input()
if k == 'q':
break
else:
print('~~~~~~~~~~~~~~~~~~~~~~~~~~\n\n\n')
###Output
Введите a - начальный узел: -0.5
Введите m - количество узлов в таблице + 1: 7
Введите h - шаг узлов: 0.01
x f(x) f'(x)т f'(x)чд абс.факт.погр. f' относ.погр. f' \
0 -0.50 0.023518 0.176383 0.176033 0.000350 0.001984
1 -0.49 0.025349 0.190121 0.190299 0.000178 0.000938
2 -0.48 0.027324 0.204928 0.205120 0.000192 0.000938
3 -0.47 0.029452 0.220889 0.221096 0.000207 0.000938
4 -0.46 0.031746 0.238092 0.238316 0.000223 0.000938
5 -0.45 0.034218 0.256636 0.256877 0.000241 0.000938
6 -0.44 0.036883 0.276624 0.276883 0.000259 0.000938
7 -0.43 0.039756 0.298168 0.297640 0.000529 0.001773
f''(x)т f''(x)чд абс.факт.погр. f'' относ.погр. f''
0 1.322873 0.000000 0.000000 1.000000
1 1.425904 1.426573 0.000669 0.000469
2 1.536959 1.537680 0.000721 0.000469
3 1.656664 1.657441 0.000777 0.000469
4 1.785692 1.786529 0.000837 0.000469
5 1.924769 1.925672 0.000902 0.000469
6 2.074678 2.075651 0.000973 0.000469
7 2.236263 0.000000 0.000000 1.000000
Введите q, чтобы завершить программу, или любую другую клавишу, чтобы продолжить и ввести новые значения a, m, h: c
~~~~~~~~~~~~~~~~~~~~~~~~~~
Введите a - начальный узел: 0.4
Введите m - количество узлов в таблице + 1: 11
Введите h - шаг узлов: 0.01
x f(x) f'(x)т f'(x)чд абс.факт.погр. f' \
0 0.40 20.085537 150.641527 150.342615 0.298912
1 0.41 21.649882 162.374114 162.526383 0.152269
2 0.42 23.336065 175.020484 175.184612 0.164128
3 0.43 25.153574 188.651806 188.828717 0.176911
4 0.44 27.112639 203.344792 203.535481 0.190689
5 0.45 29.224284 219.182128 219.387669 0.205541
6 0.46 31.500392 236.252942 236.474492 0.221549
7 0.47 33.953774 254.653302 254.892107 0.238805
8 0.48 36.598234 274.486758 274.744162 0.257404
9 0.49 39.448657 295.864926 296.142378 0.277451
10 0.50 42.521082 318.908115 319.207175 0.299060
11 0.51 45.832800 343.746003 343.136498 0.609505
относ.погр. f' f''(x)т f''(x)чд абс.факт.погр. f'' \
0 0.001984 1129.811452 0.000000 0.000000
1 0.000938 1217.805858 1218.376811 0.570954
2 0.000938 1312.653633 1313.269054 0.615422
3 0.000938 1414.888546 1415.551900 0.663353
4 0.000938 1525.085939 1525.800957 0.715018
5 0.000938 1643.865963 1644.636669 0.770707
6 0.000938 1771.897067 1772.727800 0.830732
7 0.000938 1909.899766 1910.795199 0.895433
8 0.000938 2058.650687 2059.615861 0.965173
9 0.000938 2218.986948 2220.027293 1.040345
10 0.000938 2391.810863 2392.932234 1.121372
11 0.001773 2578.095021 0.000000 0.000000
относ.погр. f''
0 1.000000
1 0.000469
2 0.000469
3 0.000469
4 0.000469
5 0.000469
6 0.000469
7 0.000469
8 0.000469
9 0.000469
10 0.000469
11 1.000000
Введите q, чтобы завершить программу, или любую другую клавишу, чтобы продолжить и ввести новые значения a, m, h: c
~~~~~~~~~~~~~~~~~~~~~~~~~~
Введите a - начальный узел: 0.49
Введите m - количество узлов в таблице + 1: 11
Введите h - шаг узлов: 0.001
x f(x) f'(x)т f'(x)чд абс.факт.погр. f' \
0 0.490 39.448657 295.864926 295.859348 0.005579
1 0.491 39.745634 298.092255 298.095050 0.002795
2 0.492 40.044847 300.336352 300.339168 0.002816
3 0.493 40.346312 302.597343 302.600180 0.002837
4 0.494 40.650047 304.875355 304.878213 0.002858
5 0.495 40.956069 307.170516 307.173396 0.002880
6 0.496 41.264394 309.482956 309.485857 0.002901
7 0.497 41.575041 311.812804 311.815727 0.002923
8 0.498 41.888026 314.160192 314.163137 0.002945
9 0.499 42.203367 316.525251 316.528218 0.002967
10 0.500 42.521082 318.908115 318.911105 0.002990
11 0.501 42.841189 321.308918 321.302927 0.005991
относ.погр. f' f''(x)т f''(x)чд абс.факт.погр. f'' \
0 0.000019 2218.986948 0.000000 0.000000
1 0.000009 2235.691916 2235.702395 0.010480
2 0.000009 2252.522641 2252.533200 0.010559
3 0.000009 2269.480072 2269.490710 0.010638
4 0.000009 2286.565162 2286.575880 0.010718
5 0.000009 2303.778871 2303.789670 0.010799
6 0.000009 2321.122169 2321.133049 0.010880
7 0.000009 2338.596030 2338.606992 0.010962
8 0.000009 2356.201438 2356.212483 0.011045
9 0.000009 2373.939383 2373.950511 0.011128
10 0.000009 2391.810863 2391.822074 0.011212
11 0.000019 2409.816882 0.000000 0.000000
относ.погр. f''
0 1.000000
1 0.000005
2 0.000005
3 0.000005
4 0.000005
5 0.000005
6 0.000005
7 0.000005
8 0.000005
9 0.000005
10 0.000005
11 1.000000
Введите q, чтобы завершить программу, или любую другую клавишу, чтобы продолжить и ввести новые значения a, m, h: c
~~~~~~~~~~~~~~~~~~~~~~~~~~
Введите a - начальный узел: 0.499
Введите m - количество узлов в таблице + 1: 11
Введите h - шаг узлов: 0.0001
x f(x) f'(x)т f'(x)чд абс.факт.погр. f' \
0 0.4990 42.203367 316.525251 316.525192 0.000059
1 0.4991 42.235031 316.762734 316.762764 0.000030
2 0.4992 42.266719 317.000395 317.000425 0.000030
3 0.4993 42.298431 317.238235 317.238264 0.000030
4 0.4994 42.330167 317.476253 317.476282 0.000030
5 0.4995 42.361927 317.714449 317.714479 0.000030
6 0.4996 42.393710 317.952824 317.952854 0.000030
7 0.4997 42.425517 318.191378 318.191408 0.000030
8 0.4998 42.457348 318.430111 318.430141 0.000030
9 0.4999 42.489203 318.669024 318.669053 0.000030
10 0.5000 42.521082 318.908115 318.908145 0.000030
11 0.5001 42.552985 319.147386 319.147326 0.000060
относ.погр. f' f''(x)т f''(x)чд абс.факт.погр. f'' \
0 1.876061e-07 2373.939383 0.000000 0.000000
1 9.374971e-08 2375.720505 2375.720618 0.000113
2 9.375004e-08 2377.502964 2377.503076 0.000112
3 9.375008e-08 2379.286760 2379.286871 0.000111
4 9.374974e-08 2381.071894 2381.072004 0.000110
5 9.374976e-08 2382.858368 2382.858482 0.000113
6 9.374997e-08 2384.646182 2384.646294 0.000111
7 9.375002e-08 2386.435338 2386.435450 0.000113
8 9.374980e-08 2388.225836 2388.225946 0.000110
9 9.374977e-08 2390.017677 2390.017791 0.000114
10 9.374997e-08 2391.810863 2391.810974 0.000111
11 1.873947e-07 2393.605394 0.000000 0.000000
относ.погр. f''
0 1.000000e+00
1 4.743375e-08
2 4.720996e-08
3 4.664411e-08
4 4.621080e-08
5 4.757568e-08
6 4.673769e-08
7 4.715105e-08
8 4.601138e-08
9 4.765752e-08
10 4.661155e-08
11 1.000000e+00
|
examples/Interactive Widgets/Widget Styling.ipynb | ###Markdown
[Index](Index.ipynb) - [Back](Widget%20Events.ipynb) - [Next](Custom Widget - Hello World.ipynb)
###Code
%%html
<style>
.example-container { background: #999999; padding: 2px; min-height: 100px; }
.example-container.sm { min-height: 50px; }
.example-box { background: #9999FF; width: 50px; height: 50px; text-align: center; vertical-align: middle; color: white; font-weight: bold; margin: 2px;}
.example-box.med { width: 65px; height: 65px; }
.example-box.lrg { width: 80px; height: 80px; }
</style>
import ipywidgets as widgets
from IPython.display import display
###Output
_____no_output_____
###Markdown
Widget Styling Basic styling The widgets distributed with IPython can be styled by setting the following traits:- width - height - fore_color - back_color - border_color - border_width - border_style - font_style - font_weight - font_size - font_family The example below shows how a `Button` widget can be styled:
###Code
button = widgets.Button(
description='Hello World!',
width=100, # Integers are interpreted as pixel measurements.
height='2em', # em is valid HTML unit of measurement.
color='lime', # Colors can be set by name,
background_color='#0022FF', # and also by color code.
border_color='red')
display(button)
###Output
_____no_output_____
###Markdown
Parent/child relationships To display widget A inside widget B, widget A must be a child of widget B. Widgets that can contain other widgets have a **`children` attribute**. This attribute can be **set via a keyword argument** in the widget's constructor **or after construction**. Calling display on an **object with children automatically displays those children**, too.
###Code
from IPython.display import display
float_range = widgets.FloatSlider()
string = widgets.Text(value='hi')
container = widgets.Box(children=[float_range, string])
container.border_color = 'red'
container.border_style = 'dotted'
container.border_width = 3
display(container) # Displays the `container` and all of its children.
###Output
_____no_output_____
###Markdown
After the parent is displayed Children **can be added to parents** after the parent has been displayed. The **parent is responsible for rendering its children**.
###Code
container = widgets.Box()
container.border_color = 'red'
container.border_style = 'dotted'
container.border_width = 3
display(container)
int_range = widgets.IntSlider()
container.children=[int_range]
###Output
_____no_output_____
###Markdown
Fancy boxes If you need to display a more complicated set of widgets, there are **specialized containers** that you can use. To display **multiple sets of widgets**, you can use an **`Accordion` or a `Tab` in combination with one `Box` per set of widgets** (as seen below). The "pages" of these widgets are their children. To set the titles of the pages, one can **call `set_title`**. Accordion
###Code
name1 = widgets.Text(description='Location:')
zip1 = widgets.BoundedIntText(description='Zip:', min=0, max=99999)
page1 = widgets.Box(children=[name1, zip1])
name2 = widgets.Text(description='Location:')
zip2 = widgets.BoundedIntText(description='Zip:', min=0, max=99999)
page2 = widgets.Box(children=[name2, zip2])
accord = widgets.Accordion(children=[page1, page2])
display(accord)
accord.set_title(0, 'From')
accord.set_title(1, 'To')
###Output
_____no_output_____
###Markdown
TabWidget
###Code
name = widgets.Text(description='Name:')
color = widgets.Dropdown(description='Color:', options=['red', 'orange', 'yellow', 'green', 'blue', 'indigo', 'violet'])
page1 = widgets.Box(children=[name, color])
age = widgets.IntSlider(description='Age:', min=0, max=120, value=50)
gender = widgets.RadioButtons(description='Gender:', options=['male', 'female'])
page2 = widgets.Box(children=[age, gender])
tabs = widgets.Tab(children=[page1, page2])
display(tabs)
tabs.set_title(0, 'Name')
tabs.set_title(1, 'Details')
###Output
_____no_output_____
###Markdown
Alignment Most widgets have a **`description` attribute**, which allows a label for the widget to be defined.The label of the widget **has a fixed minimum width**.The text of the label is **always right aligned and the widget is left aligned**:
###Code
display(widgets.Text(description="a:"))
display(widgets.Text(description="aa:"))
display(widgets.Text(description="aaa:"))
###Output
_____no_output_____
###Markdown
If a **label is longer** than the minimum width, the **widget is shifted to the right**:
###Code
display(widgets.Text(description="a:"))
display(widgets.Text(description="aa:"))
display(widgets.Text(description="aaa:"))
display(widgets.Text(description="aaaaaaaaaaaaaaaaaa:"))
###Output
_____no_output_____
###Markdown
If a `description` is **not set** for the widget, the **label is not displayed**:
###Code
display(widgets.Text(description="a:"))
display(widgets.Text(description="aa:"))
display(widgets.Text(description="aaa:"))
display(widgets.Text())
###Output
_____no_output_____
###Markdown
Flex boxes Widgets can be aligned using the `FlexBox`, `HBox`, and `VBox` widgets. Application to widgets Widgets display vertically by default:
###Code
buttons = [widgets.Button(description=str(i)) for i in range(3)]
display(*buttons)
###Output
_____no_output_____
###Markdown
Using hbox To make widgets display horizontally, you need to **child them to a `HBox` widget**.
###Code
container = widgets.HBox(children=buttons)
display(container)
###Output
_____no_output_____
###Markdown
By setting the width of the container to 100% and its `pack` to `center`, you can center the buttons.
###Code
container.width = '100%'
container.pack = 'center'
###Output
_____no_output_____
###Markdown
Visibility Sometimes it is necessary to **hide or show widgets** in place, **without having to re-display** the widget.The `visible` property of widgets can be used to hide or show **widgets that have already been displayed** (as seen below). The `visible` property can be:* `True` - the widget is displayed* `False` - the widget is hidden, and the empty space where the widget would be is collapsed* `None` - the widget is hidden, and the empty space where the widget would be is shown
###Code
w1 = widgets.Latex(value="First line")
w2 = widgets.Latex(value="Second line")
w3 = widgets.Latex(value="Third line")
display(w1, w2, w3)
w2.visible=None
w2.visible=False
w2.visible=True
###Output
_____no_output_____
###Markdown
Another example In the example below, a form is rendered, which conditionally displays widgets depending on the state of other widgets. Try toggling the student checkbox.
###Code
form = widgets.VBox()
first = widgets.Text(description="First Name:")
last = widgets.Text(description="Last Name:")
student = widgets.Checkbox(description="Student:", value=False)
school_info = widgets.VBox(visible=False, children=[
widgets.Text(description="School:"),
widgets.IntText(description="Grade:", min=0, max=12)
])
pet = widgets.Text(description="Pet's Name:")
form.children = [first, last, student, school_info, pet]
display(form)
def on_student_toggle(name, value):
if value:
school_info.visible = True
else:
school_info.visible = False
student.on_trait_change(on_student_toggle, 'value')
###Output
_____no_output_____
HiSeqRuns_combined/04_assemblies/01_LLMGA/03_Rep/02_llmga.ipynb | ###Markdown
Table of Contents: 1 Goal; 2 Var; 3 Init; 4 Just Reptilia; 5 llmga (5.1 Config, 5.2 Run); 6 Summary (6.1 Load, 6.2 No. of genomes, 6.3 CheckM: 6.3.1 Taxonomy, 6.3.2 Taxonomic novelty, 6.3.3 Quality ~ Taxonomy); 7 sessionInfo Goal * Running the LLMGA pipeline on all Reptilia samples Var
###Code
work_dir = '/ebio/abt3_projects/databases_no-backup/animal_gut_metagenomes/wOutVertebrata/MG_assembly_rep/LLMGA/'
samples_file = '/ebio/abt3_projects/Georg_animal_feces/data/metagenome/HiSeqRuns126-133-0138/wOutVertebrata/LLMGQC/samples_cov-gte0.3.tsv'
metadata_file = '/ebio/abt3_projects/Georg_animal_feces/data/mapping/unified_metadata_complete_190529.tsv'
pipeline_dir = '/ebio/abt3_projects/Georg_animal_feces/bin/llmga/'
###Output
_____no_output_____
###Markdown
Init
###Code
library(dplyr)
library(tidyr)
library(ggplot2)
source('/ebio/abt3_projects/Georg_animal_feces/code/misc_r_functions/init.R')
make_dir(work_dir)
###Output
Directory already exists: /ebio/abt3_projects/databases_no-backup/animal_gut_metagenomes/wOutVertebrata/MG_assembly_rep/LLMGA/
###Markdown
Just Reptilia
###Code
meta = read.delim(metadata_file, sep='\t') %>%
dplyr::select(SampleID, class, order, family, genus, scientific_name, diet, habitat)
meta %>% dfhead
samps = read.delim(samples_file, sep='\t') %>%
mutate(Sample = gsub('^XF', 'F', Sample))
samps %>% dfhead
setdiff(samps$Sample, meta$SampleID) # samples without a metadata entry
# joining
samps = samps %>%
inner_join(meta, c('Sample'='SampleID'))
samps %>% dfhead
# all metadata
samps %>%
group_by(class) %>%
summarize(n = n()) %>%
ungroup()
samps_f = samps %>%
filter(class == 'Reptilia')
samps_f %>% dfhead
outF = file.path(work_dir, 'samples_rep.tsv')
samps_f %>%
arrange(class, order, family, genus) %>%
write.table(outF, sep='\t', quote=FALSE, row.names=FALSE)
cat('File written:', outF, '\n')
###Output
File written: /ebio/abt3_projects/databases_no-backup/animal_gut_metagenomes/wOutVertebrata/MG_assembly_rep/LLMGA//samples_rep.tsv
###Markdown
llmga Config
###Code
F = file.path(work_dir, 'config.yaml')
cat_file(F)
###Output
#-- I/O --#
# table with sample --> read_file information
samples_file: /ebio/abt3_projects/databases_no-backup/animal_gut_metagenomes/wOutVertebrata/MG_assembly_rep/LLMGA/samples_rep.tsv
# output location
output_dir: /ebio/abt3_projects/databases_no-backup/animal_gut_metagenomes/wOutVertebrata/MG_assembly_rep/LLMGA/
#-- reference genome(s) for metacompass --#
metacompass_ref: /ebio/abt3_projects/Georg_animal_feces/data/metagenome/HiSeqRuns126-133-0138/wOutVertebrata/MG_assembly_rep/LLMGA-find-refs/references/ref_genomes.fna
#-- database --#
kraken2_db: /ebio/abt3_projects/databases_no-backup/kraken2/nt_db/hash.k2d
krakenuniq_db: /ebio/abt3_projects/databases_no-backup/krakenuniq/taxonomy/nodes.dmp
checkM_data: /ebio/abt3_projects/databases_no-backup/checkM/
sourmash_db: /ebio/abt3_projects/databases_no-backup/sourmash/genbank-k31.sbt.json
sourmash_lca_db: /ebio/abt3_projects/databases_no-backup/sourmash/v2/genbank-k31.lca.json.gz
gtdbtk_db: /ebio/abt3_projects/databases_no-backup/GTDB/release89/db_info.md
#-- re-running --#
# use this to prevent re-running the assembly steps if you just need to rerun post-assembly steps
skip_assembly: False
#-- software parameters --#
# Notes:
## Use "Skip" to skip any step. If no params, just use ""
## Note: You must skip either metacompass_megahit or metacompass_metaspades (or both)
## Note: for the *_batches params, the number must be <= the number of MAGs
## Note: using >15mil paired-end reads may not scale well!
## Note: subsampling just applies to samples with > the number of reads
## Note: for diff. cov. binning, you can select a certain number of samples to use (the origin sample is always used)
params:
subsample_reads: 10000000
fastqc_on_raw: ""
# metacompass
## ref-based assembly; use "Skip" to skip the ref-based assembly
metacompass: ""
metacompass_buildcontig: --pickref breadth --mincov 3 -l 500 -n T -b F -u F
## de-novo assembly; use "Skip" to skip the de-novo assembly
metacompass_metaspades: -k auto --only-assembler
metacompass_megahit: Skip # eg., (--min-count 3 --min-contig-len 500 --presets meta-sensitive)
## contig length cutoff
metacompass_min_contig_len: 2000 # min contig length retained
metacompass_derep_contigs: minidentity=100 minscaf=500 minoverlappercent=95
# co-assembly
## normalization/subsampling/dereplication
bbnorm_metacompass_unmapped: Skip # eg., (target=100 k=31 minkmers=15 prefilter=t passes=1)
subsample_combined: Skip # max number of read pairs to use for co-assembly (eg., 10000000)
bbnorm_metacompass_unmapped_combined: Skip # eg., (target=100 k=31 minkmers=15 prefilter=t passes=1)
## co-assembly (default => Skipped)
coassemble_metaspades_hybrid: Skip # eg., (-k auto --only-assembler)
# combined, final contigs
combine_all_contigs: Skip # 'Skip' will cause all samples to be binned seperately; must not be skipped if using co-assembly
metaquast: --max-ref-number 0 # job run on rick to use internet if `--max-ref-number` > 0
contig_rename: minscaf=2000 # minscaf = scaffold length cutoff
cut_up_fasta: Skip # cutting long contigs for possibly better binning (eg., -c 20000 -o 0)
# contig binning
## mapping
num_map_samples: 30 # how many samples to use for differential cov. binning ('all' = all samples)
### bowtie2 (more sensitive than kraken)
samtools: -q 0 # -q = MAPQ cutoff
bam_to_depth: --percentIdentity 97
### kraken (faster than bowtie2, but less sensitive)
kraken2: Skip #--memory-mapping
krakenuniq_build: --kmer-len 31 --minimizer-len 15
krakenuniq: --hll-precision 12
krakenuniq_kmer_cutoff: 1000
## binning
### maxbin2 (2 different binning parameters used)
maxbin2_low_prob: -min_contig_length 2000 -markerset 40 -prob_threshold 0.6
maxbin2_high_prob: -min_contig_length 2000 -markerset 40 -prob_threshold 0.8
### metabat (2 different binning parameters used)
metabat2_low_PE: --minContig 2000 --minCV 0.5 --minCVSum 0.5 --maxP 92 --maxEdges 150 --seed 8394
metabat2_high_PE: --minContig 2000 --minCV 0.5 --minCVSum 0.5 --maxP 97 --maxEdges 500 --seed 8394
### vamb
vamb: Skip #-m 2000
# bin refinement/assessment
## selecting the 'best' bins from all binning methods
das_tool: --search_engine diamond
## bin assessment
bin_batches: 10 # process MAGs in batches
checkm: --tab_table
sourmash_compute: --scaled 10000 -k 31
sourmash_gather: -k 31 --dna
fastani_batches: 10 # process MAGs in batches
fastani: --fragLen 1000 --minFrag 50 -k 16
gtdbtk_classify_wf: --min_perc_aa 10
drep: -comp 50 -con 5 -sa 99
### anivo; use "Skip" on anvio_gen_contigs_db to skip all of anvio
anvio_gen_contigs_db: Skip # eg., (--skip-mindful-splitting --kmer-size 4)
anvio_run_ncbi_cogs: --cog-data-dir /ebio/abt3_projects/databases_no-backup/anvio_v4/
anvio_centrifuge: -x /ebio/abt3_projects/databases_no-backup/centrifuge/p+h+v
anvio_profile: --min-mean-coverage 0 --min-contig-length 2000 --cluster-contigs
anvio_merge: -S coassembly_contigs --skip-concoct-binning
anvio_interactive_script: -P 8080 -C DAS_Tool
#-- snakemake pipeline --#
# your username will be added automatically to the `temp_folder` path
pipeline:
bwlimit: 100m # rsync bwlimit
snakemake_folder: ./
script_folder: bin/scripts/
temp_folder: /tmp/global2/
random_number_seed: 83421
###Markdown
Run ```(snakemake_dev) @ rick:/ebio/abt3_projects/vadinCA11/bin/llmga$ screen -L -S llmga-ga-rep ./snakemake_sge.sh /ebio/abt3_projects/databases_no-backup/animal_gut_metagenomes/wOutVertebrata/MG_assembly_rep/LLMGA/config.yaml cluster.json /ebio/abt3_projects/databases_no-backup/animal_gut_metagenomes/wOutVertebrata/MG_assembly_rep/LLMGA/SGE_log 20``` Summary
###Code
asmbl_dir = '/ebio/abt3_projects/databases_no-backup/animal_gut_metagenomes/wOutVertebrata/MG_assembly_rep/LLMGA/'
checkm_markers_file = file.path(asmbl_dir, 'checkm', 'markers_qa_summary.tsv')
gtdbtk_bac_sum_file = file.path(asmbl_dir, 'gtdbtk', 'gtdbtk_bac_summary.tsv')
gtdbtk_arc_sum_file = file.path(asmbl_dir, 'gtdbtk', 'gtdbtk_ar_summary.tsv')
bin_dir = file.path(asmbl_dir, 'bin')
das_tool_dir = file.path(asmbl_dir, 'bin_refine', 'DAS_Tool')
drep_dir = file.path(asmbl_dir, 'drep', 'drep')
###Output
_____no_output_____
###Markdown
Load
###Code
# bin genomes
## maxbin2
bin_files = list.files(bin_dir, '*.fasta$', full.names=TRUE, recursive=TRUE)
bin = data.frame(binID = gsub('\\.fasta$', '', basename(bin_files)),
fasta = bin_files,
binner = bin_files %>% dirname %>% basename,
sample = bin_files %>% dirname %>% dirname %>% basename)
## metabat2
bin_files = list.files(bin_dir, '*.fa$', full.names=TRUE, recursive=TRUE)
X = data.frame(binID = gsub('\\.fa$', '', basename(bin_files)),
fasta = bin_files,
binner = bin_files %>% dirname %>% basename,
sample = bin_files %>% dirname %>% dirname %>% basename)
## combine
bin = rbind(bin, X)
X = NULL
bin %>% dfhead
# DAS-tool genomes
dastool_files = list.files(das_tool_dir, '*.fa$', full.names=TRUE, recursive=TRUE)
dastool = data.frame(binID = gsub('\\.fa$', '', basename(dastool_files)),
fasta = dastool_files)
dastool %>% dfhead
# drep genome files
P = file.path(drep_dir, 'dereplicated_genomes')
drep_files = list.files(P, '*.fa$', full.names=TRUE)
drep = data.frame(binID = gsub('\\.fa$', '', basename(drep_files)),
fasta = drep_files)
drep %>% dfhead
# checkm info
markers_sum = read.delim(checkm_markers_file, sep='\t')
markers_sum %>% nrow %>% print
drep_j = drep %>%
inner_join(markers_sum, c('binID'='Bin.Id'))
drep_j %>% dfhead
# gtdb
## bacteria
X = read.delim(gtdbtk_bac_sum_file, sep='\t') %>%
dplyr::select(-other_related_references.genome_id.species_name.radius.ANI.AF.) %>%
separate(classification, c('Domain', 'Phylum', 'Class', 'Order', 'Family', 'Genus', 'Species'), sep=';')
X %>% nrow %>% print
if(file.size(gtdbtk_arc_sum_file) > 0){
## archaea
Y = read.delim(gtdbtk_arc_sum_file, sep='\t') %>%
dplyr::select(-other_related_references.genome_id.species_name.radius.ANI.AF.) %>%
separate(classification, c('Domain', 'Phylum', 'Class', 'Order', 'Family', 'Genus', 'Species'), sep=';')
Y %>% nrow %>% print
X = rbind(X, Y)
}
## combined
drep_j = drep_j %>%
left_join(X, c('binID'='user_genome'))
## status
X = Y = NULL
drep_j %>% dfhead
###Output
[1] 11
###Markdown
No. of genomes
###Code
cat('Number of binned genomes:', bin$fasta %>% unique %>% length)
cat('Number of DAS-Tool passed genomes:', dastool$binID %>% unique %>% length)
cat('Number of 99% ANI de-rep genomes:', drep_j$binID %>% unique %>% length)
###Output
Number of 99% ANI de-rep genomes: 7
###Markdown
CheckM
###Code
# checkm stats
p = drep_j %>%
dplyr::select(binID, Completeness, Contamination) %>%
gather(Metric, Value, -binID) %>%
ggplot(aes(Value)) +
geom_histogram(bins=30) +
labs(y='No. of MAGs\n(>=99% ANI derep.)') +
facet_grid(Metric ~ ., scales='free_y') +
theme_bw()
dims(4,3)
plot(p)
###Output
_____no_output_____
###Markdown
Taxonomy
###Code
# summarizing by taxonomy
p = drep_j %>%
unite(Taxonomy, Phylum, Class, sep=';', remove=FALSE) %>%
group_by(Taxonomy, Phylum) %>%
summarize(n = n()) %>%
ungroup() %>%
ggplot(aes(Taxonomy, n, fill=Phylum)) +
geom_bar(stat='identity') +
coord_flip() +
labs(y='No. of MAGs\n(>=99% ANI derep.)') +
theme_bw()
dims(7,4)
plot(p)
###Output
_____no_output_____
###Markdown
Taxonomic novelty
###Code
# no close ANI matches
p = drep_j %>%
unite(Taxonomy, Phylum, Class, sep=';', remove=FALSE) %>%
mutate(closest_placement_ani = closest_placement_ani %>% as.character,
closest_placement_ani = ifelse(closest_placement_ani == 'N/A',
0, closest_placement_ani),
closest_placement_ani = ifelse(is.na(closest_placement_ani),
0, closest_placement_ani),
closest_placement_ani = closest_placement_ani %>% as.Num) %>%
mutate(has_species_placement = ifelse(closest_placement_ani >= 95,
'ANI >= 95%', 'No match')) %>%
ggplot(aes(Taxonomy, fill=Phylum)) +
geom_bar() +
facet_grid(. ~ has_species_placement) +
coord_flip() +
labs(y='Closest placement ANI') +
theme_bw()
dims(7,4)
plot(p)
p = drep_j %>%
filter(Genus == 'g__') %>%
unite(Taxonomy, Phylum, Class, Order, Family, sep='; ', remove=FALSE) %>%
mutate(Taxonomy = stringr::str_wrap(Taxonomy, 45),
Taxonomy = gsub(' ', '', Taxonomy)) %>%
group_by(Taxonomy, Phylum) %>%
summarize(n = n()) %>%
ungroup() %>%
ggplot(aes(Taxonomy, n, fill=Phylum)) +
geom_bar(stat='identity') +
coord_flip() +
labs(y='No. of MAGs lacking a\ngenus-level classification') +
theme_bw()
dims(8,4)
plot(p)
###Output
_____no_output_____
###Markdown
Quality ~ Taxonomy
###Code
p = drep_j %>%
unite(Taxonomy, Phylum, Class, sep='; ', remove=FALSE) %>%
dplyr::select(Taxonomy, Phylum, Completeness, Contamination) %>%
gather(Metric, Value, -Taxonomy, -Phylum) %>%
ggplot(aes(Taxonomy, Value, color=Phylum)) +
geom_boxplot() +
facet_grid(. ~ Metric, scales='free_x') +
coord_flip() +
labs(y='CheckM quality') +
theme_bw()
dims(7,4)
plot(p)
# just unclassified at genus/species
p = drep_j %>%
filter(Genus == 'g__' | Species == 's__') %>%
unite(Taxonomy, Phylum, Class, sep='; ', remove=FALSE) %>%
dplyr::select(Taxonomy, Phylum, Completeness, Contamination) %>%
gather(Metric, Value, -Taxonomy, -Phylum) %>%
ggplot(aes(Taxonomy, Value, color=Phylum)) +
geom_boxplot() +
facet_grid(. ~ Metric, scales='free_x') +
coord_flip() +
labs(y='CheckM quality') +
theme_bw()
dims(7,4)
plot(p)
# just unclassified at genus
p = drep_j %>%
filter(Genus == 'g__') %>%
unite(Taxonomy, Phylum, Class, sep='; ', remove=FALSE) %>%
dplyr::select(Taxonomy, Phylum, Completeness, Contamination) %>%
gather(Metric, Value, -Taxonomy, -Phylum) %>%
ggplot(aes(Taxonomy, Value, color=Phylum)) +
geom_boxplot() +
facet_grid(. ~ Metric, scales='free_x') +
coord_flip() +
labs(y='CheckM quality') +
theme_bw()
dims(7,4)
plot(p)
###Output
_____no_output_____
###Markdown
sessionInfo
###Code
pipelineInfo(pipeline_dir)
sessionInfo()
###Output
_____no_output_____ |
Experiments/Notebook4.ipynb | ###Markdown
GPDM Experiments---
###Code
import GPflow
import numpy as np
import matplotlib as mpl
import matplotlib.cm as cm
from GPflow.gplvm import GPLVM
from gpdm import GPDM
import matplotlib.pyplot as plt
from bcgplvm import BCGPLVM
from GPflow import kernels, ekernels
from GPflow.plotting import plotLatent
%matplotlib inline
# set parameters for experiments
np.random.seed(42)
# throw error if quadrature is used for kernel expectations
GPflow.settings.numerics.quadrature = 'error'
###Output
_____no_output_____
###Markdown
Experiment with Linear Dynamical System---
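To restate the data-generating process coded in the cell below (just a summary of that code, with $A$, $B$, $C$ denoting the three random $10\times 2$ projection matrices it draws): the 2-d latent trajectory follows a stable linear dynamical system, and two 10-d observation sets are produced from it, one through a linear map and one through a nonlinear map,

$$x(t) = \begin{bmatrix} e^{-t} \\ e^{-2t} \end{bmatrix} \;(\text{then mean-centered}), \qquad y_1 = A x + \varepsilon, \qquad y_2 = \sin(Bx) \odot \tanh(Cx) + \varepsilon, \qquad \varepsilon \sim \mathcal{N}(0,\, 0.01\, I).$$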
###Code
# d = 2, D = 10, linear dyn, linear map
d = 2
D = 10
nSamples = 100
noiseVar = 0.01
t = np.linspace(0.0,5.0,num=nSamples,endpoint=True)
# generate latent points based on LDS x(t) = [[-1 0],[0 -2]]*x(t-1)
X = np.exp(np.asarray([-t,-2*t])).T
X = X - X.mean(axis=0)
# generate high dimensional observation data
Y1 = np.matmul(np.random.randn(D,d),X.T).T + np.random.randn(nSamples,D)*np.sqrt(noiseVar)
Y2 = np.sin(np.matmul(np.random.randn(D,d),X.T).T)*np.tanh(np.matmul(np.random.randn(D,d),X.T).T) + np.random.randn(nSamples,D)*np.sqrt(noiseVar)
# visualize original latent data
fig = plt.figure(figsize=(10,6))
plt.scatter(X[:,0], X[:,1], color='k')
plt.title('Original Data')
# train GPLVM, GPDM on given data
m1 = GPLVM(Y1, d, kern=kernels.Linear(d))
m1.likelihood.variance = noiseVar
_ = m1.optimize(disp=True, maxiter=3000)
m2 = GPDM(Y1, d, map_kern=kernels.Linear(d), dyn_kern=kernels.Linear(d))
m2.likelihood.variance = noiseVar
m2.dyn_likelihood.variance = 1e-5
_ = m2.optimize(disp=True, maxiter=3000)
# visualize the learned latent spaces (linear mapping data)
fig,ax = plt.subplots(1,2,figsize=(10,6))
ax[0].scatter(m1.X.value[:,0], m1.X.value[:,1], color='k')
ax[0].set_title('GPLVM Latent Space')
ax[1].scatter(m2.X.value[:,0], m2.X.value[:,1], color='k')
ax[1].set_title('GPDM Latent Space')
plt.suptitle('Linear Mapping')
# train GPLVM, GPDM on given data
m1 = GPLVM(Y2, d, kern=kernels.RBF(d))
m1.likelihood.variance = noiseVar
_ = m1.optimize(disp=True, maxiter=3000)
m2 = GPDM(Y2, d, map_kern=kernels.RBF(d), dyn_kern=kernels.Linear(d))
m2.likelihood.variance = noiseVar
m2.dyn_likelihood.variance = 1e-5
_ = m2.optimize(disp=True, maxiter=3000)
# visualize the learned latent spaces (nonlinear mapping data)
fig,ax = plt.subplots(1,2,figsize=(10,6))
ax[0].scatter(m1.X.value[:,0], m1.X.value[:,1], color='k')
ax[0].set_title('GPLVM Latent Space')
ax[1].scatter(m2.X.value[:,0], m2.X.value[:,1], color='k')
ax[1].set_title('GPDM Latent Space')
plt.suptitle('Nonlinear Mapping')
###Output
_____no_output_____ |
_notebooks/2017-08-13-mf-autograd-adagrad.ipynb | ###Markdown
Adagrad based matrix factorization> Adagrad optimizer for matrix factorisation- toc: true - badges: true- comments: true- author: Nipun Batra- categories: [ML] In a [previous post](./nnmf-tensorflow.html), we had seen how to perform non-negative matrix factorization (NNMF) using Tensorflow. In [another previous post](./linear-regression-adagrad-vs-gd.html), I had shown how to use Adagrad for linear regression. This current post can be considered an extension of the linear regression using Adagrad post. Just for the purpose of education, I'll poorly initialise the estimate of one of the decomposed matrices, to see how well Adagrad can adjust weights! Customary imports
###Code
import autograd.numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from matplotlib.animation import FuncAnimation
from matplotlib import gridspec
%matplotlib inline
###Output
_____no_output_____
###Markdown
Creating the matrix to be decomposed
###Code
A = np.array([[3, 4, 5, 2],
[4, 4, 3, 3],
[5, 5, 4, 3]], dtype=np.float32).T
###Output
_____no_output_____
###Markdown
Masking one entry
###Code
A[0, 0] = np.NAN
A
###Output
_____no_output_____
###Markdown
Defining the cost function
###Code
def cost(param_list):
W, H = param_list
pred = np.dot(W, H)
mask = ~np.isnan(A)
return np.sqrt(((pred - A)[mask].flatten() ** 2).mean(axis=None))
###Output
_____no_output_____
###Markdown
Decomposition params
###Code
rank = 2
learning_rate=0.01
n_steps = 10000
###Output
_____no_output_____
###Markdown
Adagrad routine
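For reference, the per-parameter update implemented by the routine below is, elementwise,

$$\theta_{t+1} = \theta_t - \frac{\eta}{\sqrt{\epsilon + \sum_{\tau \le t} g_\tau^{2}}}\; g_t,$$

where $g_t$ is the gradient of the cost at iteration $t$ and $\eta$ is the base learning rate, so parameters that have accumulated large gradients get a smaller effective learning rate. (The code folds the fudge factor $\epsilon$ into the running sum at every step, a small variation on the usual $\sqrt{\cdot} + \epsilon$ placement.)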
###Code
def adagrad_gd(param_init, cost, niter=5, lr=1e-2, eps=1e-8, random_seed=0):
"""
param_init: List of initial values of parameters
cost: cost function
niter: Number of iterations to run
lr: Learning rate
eps: Fudge factor, to avoid division by zero
"""
from copy import deepcopy
from autograd import grad
# Fixing the random_seed
np.random.seed(random_seed)
# Function to compute the gradient of the cost function
grad_cost = grad(cost)
params = deepcopy(param_init)
param_array, grad_array, lr_array, cost_array = [params], [], [[lr*np.ones_like(_) for _ in params]], [cost(params)]
# Initialising sum of squares of gradients for each param as 0
sum_squares_gradients = [np.zeros_like(param) for param in params]
for i in range(niter):
out_params = []
gradients = grad_cost(params)
# At each iteration, we add the square of the gradients to `sum_squares_gradients`
sum_squares_gradients= [eps + sum_prev + np.square(g) for sum_prev, g in zip(sum_squares_gradients, gradients)]
# Adapted learning rate for parameter list
lrs = [np.divide(lr, np.sqrt(sg)) for sg in sum_squares_gradients]
# Paramter update
params = [param-(adapted_lr*grad_param) for param, adapted_lr, grad_param in zip(params, lrs, gradients)]
param_array.append(params)
lr_array.append(lrs)
grad_array.append(gradients)
cost_array.append(cost(params))
return params, param_array, grad_array, lr_array, cost_array
###Output
_____no_output_____
###Markdown
Running Adagrad

Fixing initial parameters

I'm poorly initialising `H` here to see how the learning rates vary for `W` and `H`.
###Code
np.random.seed(0)
shape = A.shape
H_init = -5*np.abs(np.random.randn(rank, shape[1]))
W_init = np.abs(np.random.randn(shape[0], rank))
param_init = [W_init, H_init]
H_init
W_init
# Cost for initial set of parameters
cost(param_init)
lr = 0.1
eps=1e-8
niter=2000
ada_params, ada_param_array, ada_grad_array, ada_lr_array, ada_cost_array = adagrad_gd(param_init, cost, niter=niter, lr=lr, eps=eps)
###Output
_____no_output_____
###Markdown
Cost v/s iterations
###Code
pd.Series(ada_cost_array).plot(logy=True)
plt.ylabel("Cost (log scale)")
plt.xlabel("# Iterations")
###Output
_____no_output_____
###Markdown
Final set of parameters and recovered matrix
###Code
W_final, H_final = ada_params
pred = np.dot(W_final, H_final)
pred_df = pd.DataFrame(pred).round()
pred_df
###Output
_____no_output_____
###Markdown
Learning rate evolution for W
###Code
W_lrs = np.array(ada_lr_array)[:, 0]
fig= plt.figure(figsize=(4, 2))
gs = gridspec.GridSpec(1, 2, width_ratios=[8, 1])
ax = plt.subplot(gs[0]), plt.subplot(gs[1])
max_W, min_W = np.max([np.max(x) for x in W_lrs]), np.min([np.min(x) for x in W_lrs])
def update(iteration):
ax[0].cla()
ax[1].cla()
sns.heatmap(W_lrs[iteration], vmin=min_W, vmax=max_W, ax=ax[0], annot=True, fmt='.4f', cbar_ax=ax[1])
ax[0].set_title("Learning rate update for W\nIteration: {}".format(iteration))
fig.tight_layout()
anim = FuncAnimation(fig, update, frames=np.arange(0, 200, 10), interval=500)
anim.save('W_update.gif', dpi=80, writer='imagemagick')
plt.close()
###Output
_____no_output_____
###Markdown
 Learning rate evolution for H
###Code
H_lrs = np.array(ada_lr_array)[:, 1]
fig= plt.figure(figsize=(4, 2))
gs = gridspec.GridSpec(1, 2, width_ratios=[10, 1])
ax = plt.subplot(gs[0]), plt.subplot(gs[1])
max_H, min_H = np.max([np.max(x) for x in H_lrs]), np.min([np.min(x) for x in H_lrs])
def update(iteration):
ax[0].cla()
ax[1].cla()
sns.heatmap(H_lrs[iteration], vmin=min_H, vmax=max_H, ax=ax[0], annot=True, fmt='.2f', cbar_ax=ax[1])
ax[0].set_title("Learning rate update for H\nIteration: {}".format(iteration))
fig.tight_layout()
anim = FuncAnimation(fig, update, frames=np.arange(0, 200, 10), interval=500)
anim.save('H_update.gif', dpi=80, writer='imagemagick')
plt.close()
###Output
_____no_output_____ |
tools/capacity_spectrum_method_class-procB.ipynb | ###Markdown
ATC40 - Capacity Spectrum Method (Proc. B)
###Code
import math
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style("whitegrid")
from streng.tools.bilin import Bilin
import streng.codes.eurocodes.ec8.cls.seismic_action.spectra as spec_ec8
from streng.codes.eurocodes.ec8.raw.ch3.seismic_action.spectra import η
from streng.codes.usa.atc40.cls.nl_static_analysis.csm import CapacitySpectrumMethodProcedureB as csm_procB
from streng.codes.usa.atc40.cls.nl_static_analysis.csm import StructureProperties, Demand
from streng.common.math.numerical import intersection
bl = Bilin()
# bl.load_space_delimited(r'D:/MyBooks/TEI/RepairsExample/sapfiles/fema/PushoverCurve_modal.pushcurve', ' ')
bl.curve_ini.load_delimited(r'http://seivas.net/mkd/PushoverCurve_modal.pushcurve', ' ')
mystructure = StructureProperties(m = np.array([39.08, 39.08, 39.08]),
φ = np.array([0.0483, 0.0920, 0.1217]),
T0 = 0.753,
pushover_curve_F = bl.curve_ini.y,
pushover_curve_δ = bl.curve_ini.x,
behavior ='A')
###Output
_____no_output_____
###Markdown
Steps 1 and 2
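For context, these steps rely on the standard EC8 definitions that the helpers used below are assumed to implement (a note added here, not taken from the `streng` source): the damping correction factor is $\eta = \sqrt{10/(5+\xi)} \ge 0.55$, with $\xi$ the viscous damping in percent, and the ADRS (Sa–Sd) form of each spectrum follows from

$$S_d(T) = \left(\frac{T}{2\pi}\right)^{2} S_a(T).$$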
###Code
damps = list(range(5, 41, 5))
T_range = np.linspace(1e-10, 4, 401)
mydemands = []
for d in damps:
dem = Demand(T_range=T_range,
Sa=None,
Sd=None,
TC=None)
dem.ec8_elastic(αgR=0.24*9.81,
γI=1.0,
ground_type = 'C',
spectrum_type = 1,
η = η(d),
q=1.0,
β=0.2)
mydemands.append({'damping': d, 'demand': dem})
for dem in mydemands:
plt.plot(dem['demand'].Sd, dem['demand'].Sa, lw=2, label=f'{dem["damping"]}%')
plt.ylabel('$S_{a}$ (m/sec2)')
plt.xlabel('$S_{d}$ (m)')
plt.title('EC8 elastic spectrum: Sa-Sd')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Step 3
###Code
mycsm = csm_procB(structure = mystructure,
demands = mydemands)
plt.plot(mycsm.structure.Sd, mycsm.structure.Sa)
for dem in mycsm.demands:
plt.plot(dem['demand'].Sd, dem['demand'].Sa, lw=2, label=f'{dem["damping"]}%')
plt.ylabel('$Sa$ (m/sec2)')
plt.xlabel('$Sd$ (m)')
plt.show()
###Output
_____no_output_____
###Markdown
Step 4
###Code
mycsm.calc_performance_point()
print(f'd*={mycsm.dstar_intersection[0]:.3f}')
print(f'a(d*)={mycsm.dstar_intersection[1]:.3f}m/sec2. Note: this is the intersection point, not a*')
###Output
d*=0.079
a(d*)=5.257m/sec2. Note: this is the intersection point, not a*
###Markdown
Bilinear curve up to d*
###Code
print(mycsm.bilinear_curve.all_quantities)
plt.figure(figsize=(12,8))
plt.plot(mycsm.structure.Sd, mycsm.structure.Sa)
for dem in mycsm.demands:
plt.plot(dem['demand'].Sd, dem['demand'].Sa, lw=2, label=f'{dem["damping"]}%')
plt.plot([0, mycsm.dstar_intersection[0]], [0, mycsm.dstar_intersection[1]],'--')
plt.plot(mycsm.dstar_intersection[0], mycsm.dstar_intersection[1],'*k')
plt.plot(mycsm.bilinear_curve.d_array, mycsm.bilinear_curve.a_array, '-.', lw=3, color='black', label='bilin')
plt.plot(mycsm.bilinear_curve.d_array, mycsm.bilinear_curve.a_array, 'bo', markersize=10)
plt.ylabel('$Sa$ (m/sec2)')
plt.xlabel('$Sd$ (m)')
plt.legend()
plt.show()
print(f'a*={mycsm.astar:.2f}m/sec2')
###Output
a*=2.81m/sec2
###Markdown
Step 5: Take several values of dpi slightly below and slightly above d\*. Here I use 11 values, from 0.5d\* to 1.5d\*. Then compute the corresponding api, β0 and βeff for each dpi.
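For reference, the effective damping reported below is assumed to follow the standard ATC-40 relation $\beta_{eff} = \kappa\,\beta_0 + 0.05$, where $\beta_0$ is the hysteretic damping of the bilinear representation and $\kappa$ is the damping modification factor for the chosen structural behavior type (here `behavior='A'`). The printed values are consistent with this, e.g. $0.066 + 0.05 = 0.116$ for the first point, with $\kappa$ dropping below 1 at the larger $\beta_0$ values.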
###Code
print(f'dpi_rng = {mycsm.dpi_rng}')
print(f'api_rng = {mycsm.api_rng}')
print(f'β0_rng = {mycsm.β0_rng}')
print(f'βeff_rng = {mycsm.βeff_rng}')
###Output
dpi_rng = [0.03973914 0.04768696 0.05563479 0.06358262 0.07153044 0.07947827
0.0874261 0.09537392 0.10332175 0.11126958 0.11921741]
api_rng = [2.42829909 2.50474221 2.58118532 2.65762843 2.73407154 2.81051465
2.88695776 2.96340087 3.03984398 3.11628709 3.1927302 ]
β0_rng = [0.06615026 0.1401169 0.18863524 0.22157418 0.24438408 0.26030995
0.27139694 0.27899262 0.28401713 0.28711728 0.28875883]
βeff_rng = [0.11615026 0.1901169 0.23465186 0.26104843 0.27830909 0.28986627
0.29767203 0.30290606 0.30631755 0.30840228 0.30949992]
###Markdown
Step 6: Compute the new Sa-Sd spectra for each value of βeff. ATC40 does this step graphically, which is why it relies on the many spectra drawn above; in the end, the spectra plotted at 5% damping increments are not used in the calculation at all.
###Code
new_demands = []
for d in mycsm.βeff_rng :
dem = Demand(T_range=T_range,
Sa=None,
Sd=None,
TC=None)
dem.ec8_elastic(αgR=0.24*9.81,
γI=1.0,
ground_type = 'C',
spectrum_type = 1,
η = η(100*d),
q=1.0,
β=0.2)
new_demands.append({'damping': d, 'demand': dem})
api_rng = []
for i, dem in enumerate(new_demands):
_Sa = dem['demand'].Sa
_Sd = dem['demand'].Sd
_api = np.interp(mycsm.dpi_rng[i], _Sd, _Sa)
api_rng.append(_api)
###Output
_____no_output_____
###Markdown
Step 7: For each new spectrum, find the Sa corresponding to each dpi. Then connect these points with a line; its intersection with the capacity curve gives the solution for Sd.
###Code
plt.figure(figsize=(12,8))
plt.plot(mycsm.structure.Sd, mycsm.structure.Sa)
for dem in new_demands:
plt.plot(dem['demand'].Sd, dem['demand'].Sa, lw=2, label=f'{dem["damping"]:.3f}%')
plt.plot(mycsm.bilinear_curve.d_array, mycsm.bilinear_curve.a_array, '-.', lw=3, color='black', label='bilin')
plt.plot(mycsm.dpi_rng, api_rng,'*k', color='red', markersize=10)
plt.plot(mycsm.dpi_rng, api_rng,'--', lw=1, color='red', label='solution')
plt.ylabel('$Sa$ (m/sec2)')
plt.xlabel('$Sd$ (m)')
plt.legend()
plt.show()
solution_sd, solution_sa = intersection(mycsm.dpi_rng, np.array(api_rng), mycsm.structure.Sd, mycsm.structure.Sa)
print(f'Solution: Sd={solution_sd[0]:.3f}m - Sa={solution_sa[0]:.2f}m/sec2')
###Output
Solution: Sd=0.056m - Sa=2.64m/sec2
|
notebooksML101/06_CIFAR10_First_CNN.ipynb | ###Markdown
Building a CNN to classify images in the CIFAR-10 DatasetWe will work with the CIFAR-10 Dataset. This is a well-known dataset for image classification, which consists of 60000 32x32 color images in 10 classes, with 6000 images per class. There are 50000 training images and 10000 test images.The 10 classes are: airplane automobile bird cat deer dog frog horse ship truckFor details about CIFAR-10 see:https://www.cs.toronto.edu/~kriz/cifar.htmlFor a compilation of published performance results on CIFAR 10, see:http://rodrigob.github.io/are_we_there_yet/build/classification_datasets_results.html--- Building Convolutional Neural NetsIn this exercise we will build and train our first convolutional neural networks. In the first part, we walk through the different layers and how they are configured. In the second part, you will build your own model, train it, and compare the performance.
###Code
from __future__ import print_function
import keras
from keras.datasets import cifar10
from keras.preprocessing.image import ImageDataGenerator
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation, Flatten
from keras.layers import Conv2D, MaxPooling2D
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
# The data, shuffled and split between train and test sets:
(x_train, y_train), (x_test, y_test) = cifar10.load_data()
print('x_train shape:', x_train.shape)
print(x_train.shape[0], 'train samples')
print(x_test.shape[0], 'test samples')
## Each image is a 32 x 32 x 3 numpy array
x_train[444].shape
## Let's look at one of the images
print(y_train[444])
plt.imshow(x_train[444]);
num_classes = 10
print(y_test)
y_test_lab=np.copy(y_test)
print(y_test_lab)
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)
# now instead of classes described by an integer between 0-9 we have a vector with a 1 in the (Pythonic) 9th position
y_train[444]
# As before, let's make everything float and scale
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train /= 255
x_test /= 255
###Output
_____no_output_____
###Markdown
Keras Layers for CNNs
- Previously we built Neural Networks using primarily the Dense, Activation and Dropout Layers.
- Here we will describe how to use some of the CNN-specific layers provided by Keras

Conv2D
```python
keras.layers.convolutional.Conv2D(filters, kernel_size, strides=(1, 1), padding='valid', data_format=None, dilation_rate=(1, 1), activation=None, use_bias=True, kernel_initializer='glorot_uniform', bias_initializer='zeros', kernel_regularizer=None, bias_regularizer=None, activity_regularizer=None, kernel_constraint=None, bias_constraint=None, **kwargs)
```
A few parameters explained:
- `filters`: the number of filters used per location. In other words, the depth of the output.
- `kernel_size`: an (x,y) tuple giving the height and width of the kernel to be used
- `strides`: an (x,y) tuple giving the stride in each dimension. Default is `(1,1)`
- `input_shape`: required only for the first layer

Note, the size of the output will be determined by the kernel_size and strides.

MaxPooling2D
`keras.layers.pooling.MaxPooling2D(pool_size=(2, 2), strides=None, padding='valid', data_format=None)`
- `pool_size`: the (x,y) size of the grid to be pooled.
- `strides`: Assumed to be the `pool_size` unless otherwise specified

Flatten
Turns its input into a one-dimensional vector (per instance). Usually used when transitioning between convolutional layers and fully connected layers.

---
First CNN
Below we will build our first CNN. For demonstration purposes (so that it will train quickly) it is not very deep and has relatively few parameters. We use strides of 2 in the first two convolutional layers which quickly reduces the dimensions of the output. After a MaxPooling layer, we flatten, and then have a single fully connected layer before our final classification layer.
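As a quick sanity check on how `kernel_size`, `strides`, and `padding` determine the spatial output size (a small added sketch, not part of the original notebook): with `padding='same'` the output size along each axis is `ceil(input / stride)`, while with `padding='valid'` it is `floor((input - kernel) / stride) + 1`. Applied to the model below, this reproduces the shapes reported by `model_1.summary()`.

```python
import math

def conv_out(size, kernel, stride, padding):
    # spatial output size of a convolution/pooling op along one axis
    if padding == 'same':
        return math.ceil(size / stride)
    return (size - kernel) // stride + 1  # 'valid' padding

print(conv_out(32, 5, 2, 'same'))   # 16 -> first Conv2D: 32x32 -> 16x16
print(conv_out(16, 5, 2, 'valid'))  # 6  -> second Conv2D: 16x16 -> 6x6
print(conv_out(6, 2, 2, 'valid'))   # 3  -> 2x2 max pooling: 6x6 -> 3x3
```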
###Code
# Let's build a CNN using Keras' Sequential capabilities
model_1 = Sequential()
## 5x5 convolution with 2x2 stride and 32 filters
model_1.add(Conv2D(32, (5, 5), strides = (2,2), padding='same',
input_shape=x_train.shape[1:]))
model_1.add(Activation('relu'))
## Another 5x5 convolution with 2x2 stride and 32 filters
model_1.add(Conv2D(32, (5, 5), strides = (2,2)))
model_1.add(Activation('relu'))
## 2x2 max pooling reduces to 3 x 3 x 32
model_1.add(MaxPooling2D(pool_size=(2, 2)))
model_1.add(Dropout(0.25))
## Flatten turns 3x3x32 into 288x1
model_1.add(Flatten())
model_1.add(Dense(512))
model_1.add(Activation('relu'))
model_1.add(Dropout(0.5))
model_1.add(Dense(num_classes))
model_1.add(Activation('softmax'))
model_1.summary()
###Output
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d_1 (Conv2D) (None, 16, 16, 32) 2432
_________________________________________________________________
activation_1 (Activation) (None, 16, 16, 32) 0
_________________________________________________________________
conv2d_2 (Conv2D) (None, 6, 6, 32) 25632
_________________________________________________________________
activation_2 (Activation) (None, 6, 6, 32) 0
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 3, 3, 32) 0
_________________________________________________________________
dropout_1 (Dropout) (None, 3, 3, 32) 0
_________________________________________________________________
flatten_1 (Flatten) (None, 288) 0
_________________________________________________________________
dense_1 (Dense) (None, 512) 147968
_________________________________________________________________
activation_3 (Activation) (None, 512) 0
_________________________________________________________________
dropout_2 (Dropout) (None, 512) 0
_________________________________________________________________
dense_2 (Dense) (None, 10) 5130
_________________________________________________________________
activation_4 (Activation) (None, 10) 0
=================================================================
Total params: 181,162
Trainable params: 181,162
Non-trainable params: 0
_________________________________________________________________
###Markdown
We still have 181K parameters, even though this is a "small" model.
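As a rough check of where the 181,162 parameters come from (added arithmetic based on the summary above): a Conv2D layer has `(kernel_h * kernel_w * in_channels + 1) * filters` weights, and a Dense layer has `(inputs + 1) * units`.

```python
conv1 = (5 * 5 * 3 + 1) * 32     # 2,432
conv2 = (5 * 5 * 32 + 1) * 32    # 25,632
dense1 = (288 + 1) * 512         # 147,968 (288 = 3 * 3 * 32 after Flatten)
dense2 = (512 + 1) * 10          # 5,130
print(conv1 + conv2 + dense1 + dense2)  # 181,162
```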
###Code
batch_size = 32
# initiate RMSprop optimizer
opt = keras.optimizers.rmsprop(lr=0.0005, decay=1e-6)
# Let's train the model using RMSprop
model_1.compile(loss='categorical_crossentropy',
optimizer=opt,
metrics=['accuracy'])
run_hist_1=model_1.fit(x_train, y_train,
batch_size=batch_size,
epochs=15,
validation_data=(x_test, y_test),
shuffle=True)
from sklearn.metrics import confusion_matrix, precision_recall_curve, roc_auc_score, roc_curve, accuracy_score
y_pred_class = model_1.predict_classes(x_test)
y_pred_prob = model_1.predict_proba(x_test)
print('accuracy is {:.3f}'.format(accuracy_score(y_test_lab,y_pred_class)))
fig, ax = plt.subplots()
ax.plot(run_hist_1.history["loss"],'r', marker='.', label="Train Loss")
ax.plot(run_hist_1.history["val_loss"],'b', marker='.', label="Validation Loss")
ax.plot(run_hist_1.history["acc"],'g', marker='.', label="Train acc")
ax.plot(run_hist_1.history["val_acc"],'k', marker='.', label="Validation acc")
ax.legend()
###Output
_____no_output_____ |
use-cases/retail_recommend/2_retail_recommend_train_tune.ipynb | ###Markdown
Recommendation Engine for E-Commerce Sales: Part 2. Train and Make PredictionsThis notebook gives an overview of techniques and services offer by SageMaker to build and deploy a personalized recommendation engine. DatasetThe dataset for this demo comes from the [UCI Machine Learning Repository](https://archive.ics.uci.edu/ml/datasets/Online+Retail). It contains all the transactions occurring between 01/12/2010 and 09/12/2011 for a UK-based and registered non-store online retail. The company mainly sells unique all-occasion gifts. The following attributes are included in our dataset:+ InvoiceNo: Invoice number. Nominal, a 6-digit integral number uniquely assigned to each transaction. If this code starts with letter 'c', it indicates a cancellation.+ StockCode: Product (item) code. Nominal, a 5-digit integral number uniquely assigned to each distinct product.+ Description: Product (item) name. Nominal.+ Quantity: The quantities of each product (item) per transaction. Numeric.+ InvoiceDate: Invice Date and time. Numeric, the day and time when each transaction was generated.+ UnitPrice: Unit price. Numeric, Product price per unit in sterling.+ CustomerID: Customer number. Nominal, a 5-digit integral number uniquely assigned to each customer.+ Country: Country name. Nominal, the name of the country where each customer resides. Citation: Daqing Chen, Sai Liang Sain, and Kun Guo, Data mining for the online retail industry: A case study of RFM model-based customer segmentation using data mining, Journal of Database Marketing and Customer Strategy Management, Vol. 19, No. 3, pp. 197–208, 2012 (Published online before print: 27 August 2012. doi: 10.1057/dbm.2012.17)
###Code
!pip install sagemaker==2.21.0 boto3==1.16.40
%store -r
%store
import sagemaker
from sagemaker.lineage import context, artifact, association, action
import boto3
from model_package_src.inference_specification import InferenceSpecification
import json
import numpy as np
import pandas as pd
import datetime
import time
from scipy.sparse import csr_matrix, hstack, load_npz
from sklearn.preprocessing import OneHotEncoder
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split
assert sagemaker.__version__ >= "2.21.0"
region = boto3.Session().region_name
boto3.setup_default_session(region_name=region)
boto_session = boto3.Session(region_name=region)
s3_client = boto3.client("s3", region_name=region)
sagemaker_boto_client = boto_session.client("sagemaker")
sagemaker_session = sagemaker.session.Session(
boto_session=boto_session, sagemaker_client=sagemaker_boto_client
)
sagemaker_role = sagemaker.get_execution_role()
bucket = sagemaker_session.default_bucket()
prefix = "personalization"
output_prefix = f"s3://{bucket}/{prefix}/output"
###Output
_____no_output_____
###Markdown
Read the data

Prepare Data For Modeling
+ Split the data into training and testing sets
+ Write the data to protobuf recordIO format for Pipe mode. [Read more](https://docs.aws.amazon.com/sagemaker/latest/dg/cdf-training.html) about protobuf recordIO format.
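The recordIO conversion and upload are assumed to have been done in Part 1 of this example; the `train_data_location` and `test_data_location` variables used later in this notebook are restored via the `%store -r` call above. For reference only, a minimal sketch of how such files can be produced from the sparse matrices (the helper name and S3 keys here are illustrative assumptions, not the ones actually used):

```python
import io
import boto3
import sagemaker.amazon.common as smac

def upload_recordio(X, y, bucket, key):
    # serialize a scipy sparse matrix plus labels to protobuf recordIO and upload to S3
    buf = io.BytesIO()
    smac.write_spmatrix_to_sparse_tensor(buf, X, y)
    buf.seek(0)
    boto3.resource('s3').Object(bucket, key).upload_fileobj(buf)
    return f's3://{bucket}/{key}'

# illustrative usage; the real locations come from Part 1 via %store
# train_data_location = upload_recordio(X_train, y_train.astype('float32'), bucket, f'{prefix}/train/train.protobuf')
# test_data_location = upload_recordio(X_test, y_test.astype('float32'), bucket, f'{prefix}/test/test.protobuf')
```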
###Code
# load array
X_train = load_npz("./data/X_train.npz")
X_test = load_npz("./data/X_test.npz")
y_train_npzfile = np.load("./data/y_train.npz")
y_test_npzfile = np.load("./data/y_test.npz")
y_train = y_train_npzfile.f.arr_0
y_test = y_test_npzfile.f.arr_0
X_train.shape, X_test.shape, y_train.shape, y_test.shape
input_dims = X_train.shape[1]
%store input_dims
###Output
Stored 'input_dims' (int)
###Markdown
Train the factorization machine model

Once we have the data preprocessed and available in the correct format for training, the next step is to actually train the model using the data. We'll use the Amazon SageMaker Python SDK to kick off training and monitor status until it is completed. In this example that takes only a few minutes. Although the model itself needs only 1-2 minutes to train, there is some extra time required upfront to provision hardware and load the algorithm container. First, let's specify our containers. To find the right container, we'll create a small lookup. More details on algorithm containers can be found in [AWS documentation.](https://docs-aws.amazon.com/sagemaker/latest/dg/sagemaker-algo-docker-registry-paths.html)
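As background on the algorithm itself (the standard second-order factorization machine formulation, stated here for context rather than taken from the SageMaker implementation): an FM scores a feature vector $x$ as

$$\hat{y}(x) = w_0 + \sum_i w_i x_i + \sum_i \sum_{j>i} \langle \mathbf{v}_i, \mathbf{v}_j \rangle\, x_i x_j,$$

where each feature $i$ has a latent vector $\mathbf{v}_i$ whose length corresponds to the `num_factors` hyperparameter set below (64). The pairwise interaction terms are what let the model score customer-item combinations that never co-occur in the training data.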
###Code
container = sagemaker.image_uris.retrieve("factorization-machines", region=boto_session.region_name)
fm = sagemaker.estimator.Estimator(
container,
sagemaker_role,
instance_count=1,
instance_type="ml.c5.xlarge",
output_path=output_prefix,
sagemaker_session=sagemaker_session,
)
fm.set_hyperparameters(
feature_dim=input_dims,
predictor_type="regressor",
mini_batch_size=1000,
num_factors=64,
epochs=20,
)
if 'training_job_name' not in locals():
fm.fit({'train': train_data_location, 'test': test_data_location})
training_job_name = fm.latest_training_job.job_name
%store training_job_name
else:
print(f'Using previous training job: {training_job_name}')
training_job_info = sagemaker_boto_client.describe_training_job(TrainingJobName=training_job_name)
###Output
_____no_output_____
###Markdown
Training data artifact
###Code
training_data_s3_uri = training_job_info["InputDataConfig"][0]["DataSource"]["S3DataSource"][
"S3Uri"
]
matching_artifacts = list(
artifact.Artifact.list(source_uri=training_data_s3_uri, sagemaker_session=sagemaker_session)
)
if matching_artifacts:
training_data_artifact = matching_artifacts[0]
print(f"Using existing artifact: {training_data_artifact.artifact_arn}")
else:
training_data_artifact = artifact.Artifact.create(
artifact_name="TrainingData",
source_uri=training_data_s3_uri,
artifact_type="Dataset",
sagemaker_session=sagemaker_session,
)
print(f"Create artifact {training_data_artifact.artifact_arn}: SUCCESSFUL")
###Output
Using existing artifact: arn:aws:sagemaker:us-east-2:645431112437:artifact/cdd7fbecb4eefa22c43b2ad48140acc2
###Markdown
Code Artifact

We do not need a code artifact because we are using a built-in SageMaker Algorithm called Factorization Machines. The Factorization Machines container contains all of the code and, by default, our model training stores the Factorization Machines image for tracking purposes.

Model artifact
###Code
trained_model_s3_uri = training_job_info["ModelArtifacts"]["S3ModelArtifacts"]
matching_artifacts = list(
artifact.Artifact.list(source_uri=trained_model_s3_uri, sagemaker_session=sagemaker_session)
)
if matching_artifacts:
model_artifact = matching_artifacts[0]
print(f"Using existing artifact: {model_artifact.artifact_arn}")
else:
model_artifact = artifact.Artifact.create(
artifact_name="TrainedModel",
source_uri=trained_model_s3_uri,
artifact_type="Model",
sagemaker_session=sagemaker_session,
)
print(f"Create artifact {model_artifact.artifact_arn}: SUCCESSFUL")
###Output
Using existing artifact: arn:aws:sagemaker:us-east-2:645431112437:artifact/3acde2fc029adeff9c767be68feac3a7
###Markdown
Set artifact associations
###Code
trial_component = sagemaker_boto_client.describe_trial_component(
TrialComponentName=training_job_name + "-aws-training-job"
)
trial_component_arn = trial_component["TrialComponentArn"]
###Output
_____no_output_____
###Markdown
Store artifacts
###Code
artifact_list = [[training_data_artifact, "ContributedTo"], [model_artifact, "Produced"]]
for art, assoc in artifact_list:
try:
association.Association.create(
source_arn=art.artifact_arn,
destination_arn=trial_component_arn,
association_type=assoc,
sagemaker_session=sagemaker_session,
)
print(f"Association with {art.artifact_type}: SUCCEESFUL")
except:
print(f"Association already exists with {art.artifact_type}")
model_name = "retail-recommendations"
model_matches = sagemaker_boto_client.list_models(NameContains=model_name)["Models"]
if not model_matches:
print(f"Creating model {model_name}")
model = sagemaker_session.create_model_from_job(
name=model_name,
training_job_name=training_job_info["TrainingJobName"],
role=sagemaker_role,
image_uri=training_job_info["AlgorithmSpecification"]["TrainingImage"],
)
else:
print(f"Model {model_name} already exists.")
###Output
_____no_output_____
###Markdown
SageMaker Model Registry

Once a useful model has been trained and its artifacts properly associated, the next step is to register the model for future reference and possible deployment.

Create Model Package Group

A Model Package Group holds multiple versions or iterations of a model. Though it is not required to create one for every model in the registry, they help organize various models which all have the same purpose and provide automatic versioning.
###Code
if 'mpg_name' not in locals():
timestamp = datetime.datetime.now().strftime('%Y-%m-%d-%H-%M')
mpg_name = f'retail-recommendation-{timestamp}'
%store mpg_name
print(f'Model Package Group name: {mpg_name}')
mpg_input_dict = {
"ModelPackageGroupName": mpg_name,
"ModelPackageGroupDescription": "Recommendation for Online Retail Sales",
}
matching_mpg = sagemaker_boto_client.list_model_package_groups(NameContains=mpg_name)[
"ModelPackageGroupSummaryList"
]
if matching_mpg:
print(f"Using existing Model Package Group: {mpg_name}")
else:
mpg_response = sagemaker_boto_client.create_model_package_group(**mpg_input_dict)
print(f"Create Model Package Group {mpg_name}: SUCCESSFUL")
model_metrics_report = {"regression_metrics": {}}
for metric in training_job_info["FinalMetricDataList"]:
stat = {metric["MetricName"]: {"value": metric["Value"]}}
model_metrics_report["regression_metrics"].update(stat)
with open("training_metrics.json", "w") as f:
json.dump(model_metrics_report, f)
metrics_s3_key = f"training_jobs/{training_job_info['TrainingJobName']}/training_metrics.json"
s3_client.upload_file(Filename="training_metrics.json", Bucket=bucket, Key=metrics_s3_key)
###Output
_____no_output_____
###Markdown
Define the inference spec
###Code
mp_inference_spec = InferenceSpecification().get_inference_specification_dict(
ecr_image=training_job_info["AlgorithmSpecification"]["TrainingImage"],
supports_gpu=False,
supported_content_types=["application/x-recordio-protobuf", "application/json"],
supported_mime_types=["text/csv"],
)
mp_inference_spec["InferenceSpecification"]["Containers"][0]["ModelDataUrl"] = training_job_info[
"ModelArtifacts"
]["S3ModelArtifacts"]
###Output
_____no_output_____
###Markdown
Define model metrics

Metrics other than model quality can be defined. See the Boto3 documentation for [creating a model package](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/sagemaker.html#SageMaker.Client.create_model_package).
###Code
model_metrics = {
"ModelQuality": {
"Statistics": {
"ContentType": "application/json",
"S3Uri": f"s3://{bucket}/{metrics_s3_key}",
}
}
}
mp_input_dict = {
"ModelPackageGroupName": mpg_name,
"ModelPackageDescription": "Factorization Machine Model to create personalized retail recommendations",
"ModelApprovalStatus": "PendingManualApproval",
"ModelMetrics": model_metrics,
}
mp_input_dict.update(mp_inference_spec)
mp_response = sagemaker_boto_client.create_model_package(**mp_input_dict)
###Output
_____no_output_____
###Markdown
Wait until model package is completed
###Code
mp_info = sagemaker_boto_client.describe_model_package(
ModelPackageName=mp_response["ModelPackageArn"]
)
mp_status = mp_info["ModelPackageStatus"]
while mp_status not in ["Completed", "Failed"]:
time.sleep(5)
mp_info = sagemaker_boto_client.describe_model_package(
ModelPackageName=mp_response["ModelPackageArn"]
)
mp_status = mp_info["ModelPackageStatus"]
print(f"model package status: {mp_status}")
print(f"model package status: {mp_status}")
model_package = sagemaker_boto_client.list_model_packages(ModelPackageGroupName=mpg_name)[
"ModelPackageSummaryList"
][0]
model_package_update = {
"ModelPackageArn": model_package["ModelPackageArn"],
"ModelApprovalStatus": "Approved",
}
update_response = sagemaker_boto_client.update_model_package(**model_package_update)
from sagemaker.lineage.visualizer import LineageTableVisualizer
viz = LineageTableVisualizer(sagemaker_session)
display(viz.show(training_job_name=training_job_name))
###Output
_____no_output_____
###Markdown
Make Predictions

Now that we've trained our model, we can deploy it behind an Amazon SageMaker real-time hosted endpoint. This will allow us to make predictions (or inference) from the model dynamically.

Note, Amazon SageMaker allows you the flexibility of importing models trained elsewhere, as well as the choice of not importing models if the target of model creation is AWS Lambda, AWS Greengrass, Amazon Redshift, Amazon Athena, or another deployment target.

Here we will take the top customer, the customer who spent the most money, and try to find which items to recommend to them.
###Code
from sagemaker.deserializers import JSONDeserializer
from sagemaker.serializers import JSONSerializer
class FMSerializer(JSONSerializer):
def serialize(self, data):
js = {"instances": []}
for row in data:
js["instances"].append({"features": row.tolist()})
return json.dumps(js)
fm_predictor = fm.deploy(
initial_instance_count=1,
instance_type="ml.m4.xlarge",
serializer=FMSerializer(),
deserializer=JSONDeserializer(),
)
# find customer who spent the most money
df = pd.read_csv("data/online_retail_preprocessed.csv")
df["invoice_amount"] = df["Quantity"] * df["UnitPrice"]
top_customer = (
df.groupby("CustomerID").sum()["invoice_amount"].sort_values(ascending=False).index[0]
)
def get_recommendations(df, customer_id, n_recommendations, n_ranks=100):
popular_items = (
df.groupby(["StockCode", "UnitPrice"])
.nunique()["CustomerID"]
.sort_values(ascending=False)
.reset_index()
)
top_n_items = popular_items["StockCode"].iloc[:n_ranks].values
top_n_prices = popular_items["UnitPrice"].iloc[:n_ranks].values
# stock codes can have multiple descriptions, so we will choose whichever description is most common
item_map = df.groupby("StockCode").agg(lambda x: x.value_counts().index[0])["Description"]
# find customer's country
df_subset = df.loc[df["CustomerID"] == customer_id]
country = df_subset["Country"].value_counts().index[0]
data = {
"StockCode": top_n_items,
"Description": [item_map[i] for i in top_n_items],
"CustomerID": customer_id,
"Country": country,
"UnitPrice": top_n_prices,
}
df_inference = pd.DataFrame(data)
# we need to build the data set similar to how we built it for training
# it should have the same number of features as the training data
enc = OneHotEncoder(handle_unknown="ignore")
onehot_cols = ["StockCode", "CustomerID", "Country"]
enc.fit(df[onehot_cols])
onehot_output = enc.transform(df_inference[onehot_cols])
vectorizer = TfidfVectorizer(min_df=2)
unique_descriptions = df["Description"].unique()
vectorizer.fit(unique_descriptions)
tfidf_output = vectorizer.transform(df_inference["Description"])
row = range(len(df_inference))
col = [0] * len(df_inference)
unit_price = csr_matrix((df_inference["UnitPrice"].values, (row, col)), dtype="float32")
X_inference = hstack([onehot_output, tfidf_output, unit_price], format="csr")
result = fm_predictor.predict(X_inference.toarray())
preds = [i["score"] for i in result["predictions"]]
index_array = np.array(preds).argsort()
items = enc.inverse_transform(onehot_output)[:, 0]
top_recs = np.take_along_axis(items, index_array, axis=0)[: -n_recommendations - 1 : -1]
recommendations = [[i, item_map[i]] for i in top_recs]
return recommendations
print("Top 5 recommended products:")
get_recommendations(df, top_customer, n_recommendations=5, n_ranks=100)
###Output
Top 5 recommended products:
Make PredictionsNow that we've trained our model, we can deploy it behind an Amazon SageMaker real-time hosted endpoint. This will allow us to make predictions (or inference) from the model dynamically.Note, Amazon SageMaker allows you the flexibility of importing models trained elsewhere, as well as the choice of not importing models if the target of model creation is AWS Lambda, AWS Greengrass, Amazon Redshift, Amazon Athena, or another deployment target.Here we will take the top customer, the customer who spent the most money, and try to find which items to recommend to them.
###Code
from sagemaker.deserializers import JSONDeserializer
from sagemaker.serializers import JSONSerializer
class FMSerializer(JSONSerializer):
def serialize(self, data):
js = {'instances': []}
for row in data:
js['instances'].append({'features': row.tolist()})
return json.dumps(js)
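# Quick sanity check of the serializer defined above: it should emit the
# {"instances": [{"features": [...]}]} JSON layout, and the prediction loop further down
# assumes the endpoint answers with {"predictions": [{"score": ...}]}.
print(FMSerializer().serialize(np.zeros((2, 3))))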
fm_predictor = fm.deploy(
initial_instance_count=1,
instance_type="ml.m4.xlarge",
serializer=FMSerializer(),
deserializer= JSONDeserializer()
)
# find customer who spent the most money
df = pd.read_csv('data/online_retail_preprocessed.csv')
df['invoice_amount'] = df['Quantity'] * df['UnitPrice']
top_customer = df.groupby('CustomerID').sum()['invoice_amount'].sort_values(ascending=False).index[0]
def get_recommendations(df, customer_id, n_recommendations, n_ranks = 100):
popular_items = df.groupby(['StockCode', 'UnitPrice']).nunique()['CustomerID'].sort_values(ascending=False).reset_index()
top_n_items = popular_items['StockCode'].iloc[:n_ranks].values
top_n_prices = popular_items['UnitPrice'].iloc[:n_ranks].values
# stock codes can have multiple descriptions, so we will choose whichever description is most common
item_map = df.groupby('StockCode').agg(lambda x: x.value_counts().index[0])['Description']
# find customer's country
df_subset = df.loc[df['CustomerID'] == customer_id]
country = df_subset['Country'].value_counts().index[0]
data = {'StockCode': top_n_items,
'Description': [item_map[i] for i in top_n_items],
'CustomerID': customer_id,
'Country': country,
'UnitPrice': top_n_prices
}
df_inference = pd.DataFrame(data)
# we need to build the data set similar to how we built it for training
# it should have the same number of features as the training data
enc = OneHotEncoder(handle_unknown='ignore')
onehot_cols = ['StockCode', 'CustomerID', 'Country']
enc.fit(df[onehot_cols])
onehot_output = enc.transform(df_inference[onehot_cols])
vectorizer = TfidfVectorizer(min_df=2)
unique_descriptions = df['Description'].unique()
vectorizer.fit(unique_descriptions)
tfidf_output = vectorizer.transform(df_inference['Description'])
row = range(len(df_inference))
col = [0] * len(df_inference)
unit_price = csr_matrix((df_inference['UnitPrice'].values, (row, col)), dtype='float32')
X_inference = hstack([onehot_output, tfidf_output, unit_price], format='csr')
result = fm_predictor.predict(X_inference.toarray())
preds = [i['score'] for i in result['predictions']]
index_array = np.array(preds).argsort()
items = enc.inverse_transform(onehot_output)[:,0]
top_recs = np.take_along_axis(items, index_array, axis=0)[:-n_recommendations-1:-1]
recommendations = [[i, item_map[i]] for i in top_recs]
return recommendations
print('Top 5 recommended products:')
get_recommendations(df, top_customer, n_recommendations=5, n_ranks=100)
###Output
Top 5 recommended products:
###Markdown
Recommendation Engine for E-Commerce Sales: Part 2. Train and Make PredictionsThis notebook gives an overview of techniques and services offered by SageMaker to build and deploy a personalized recommendation engine. DatasetThe dataset for this demo comes from the [UCI Machine Learning Repository](https://archive.ics.uci.edu/ml/datasets/Online+Retail). It contains all the transactions occurring between 01/12/2010 and 09/12/2011 for a UK-based and registered non-store online retailer. The company mainly sells unique all-occasion gifts. The following attributes are included in our dataset:+ InvoiceNo: Invoice number. Nominal, a 6-digit integral number uniquely assigned to each transaction. If this code starts with letter 'c', it indicates a cancellation.+ StockCode: Product (item) code. Nominal, a 5-digit integral number uniquely assigned to each distinct product.+ Description: Product (item) name. Nominal.+ Quantity: The quantities of each product (item) per transaction. Numeric.+ InvoiceDate: Invoice Date and time. Numeric, the day and time when each transaction was generated.+ UnitPrice: Unit price. Numeric, Product price per unit in sterling.+ CustomerID: Customer number. Nominal, a 5-digit integral number uniquely assigned to each customer.+ Country: Country name. Nominal, the name of the country where each customer resides. Citation: Daqing Chen, Sai Liang Sain, and Kun Guo, Data mining for the online retail industry: A case study of RFM model-based customer segmentation using data mining, Journal of Database Marketing and Customer Strategy Management, Vol. 19, No. 3, pp. 197–208, 2012 (Published online before print: 27 August 2012. doi: 10.1057/dbm.2012.17) Solution Architecture----
###Code
!pip install -Uq sagemaker boto3
%store -r
%store
import sagemaker
from sagemaker.lineage import context, artifact, association, action
import boto3
from model_package_src.inference_specification import InferenceSpecification
import json
import numpy as np
import pandas as pd
import datetime
import time
from scipy.sparse import csr_matrix, hstack, load_npz
from sklearn.preprocessing import OneHotEncoder
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split
assert sagemaker.__version__ >= "2.21.0"
region = boto3.Session().region_name
boto3.setup_default_session(region_name=region)
boto_session = boto3.Session(region_name=region)
s3_client = boto3.client("s3", region_name=region)
sagemaker_boto_client = boto_session.client("sagemaker")
sagemaker_session = sagemaker.session.Session(
boto_session=boto_session, sagemaker_client=sagemaker_boto_client
)
sagemaker_role = sagemaker.get_execution_role()
bucket = sagemaker_session.default_bucket()
prefix = "personalization"
output_prefix = f"s3://{bucket}/{prefix}/output"
###Output
_____no_output_____
###Markdown
Read the data Prepare Data For Modeling+ Split the data into training and testing sets+ Write the data to protobuf recordIO format for Pipe mode. [Read more](https://docs.aws.amazon.com/sagemaker/latest/dg/cdf-training.html) about protobuf recordIO format.
###Code
# load array
X_train = load_npz("./data/X_train.npz")
X_test = load_npz("./data/X_test.npz")
y_train_npzfile = np.load("./data/y_train.npz")
y_test_npzfile = np.load("./data/y_test.npz")
y_train = y_train_npzfile.f.arr_0
y_test = y_test_npzfile.f.arr_0
X_train.shape, X_test.shape, y_train.shape, y_test.shape
input_dims = X_train.shape[1]
%store input_dims
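# train_data_location and test_data_location (used by fm.fit below) are restored by
# %store -r at the top of this notebook. A minimal sketch, assuming Part 1 produced them
# with the SDK's recordIO-protobuf writer (the S3 key names here are assumptions):
import io
import sagemaker.amazon.common as smac
buf = io.BytesIO()
smac.write_spmatrix_to_sparse_tensor(buf, X_train, y_train.astype("float32"))
buf.seek(0)
boto3.resource("s3").Object(bucket, f"{prefix}/train/train.protobuf").upload_fileobj(buf)
# train_data_location = f"s3://{bucket}/{prefix}/train/train.protobuf"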
###Output
Stored 'input_dims' (int)
###Markdown
Train the factorization machine modelOnce we have the data preprocessed and available in the correct format for training, the next step is to actually train the model using the data. We'll use the Amazon SageMaker Python SDK to kick off training and monitor status until it is completed. In this example that takes only a few minutes. Although the model itself needs only 1-2 minutes to train, there is some extra time required upfront to provision hardware and load the algorithm container.First, let's specify our containers. To find the right container, we'll create a small lookup. More details on algorithm containers can be found in the [AWS documentation](https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-algo-docker-registry-paths.html).
###Code
container = sagemaker.image_uris.retrieve("factorization-machines", region=boto_session.region_name)
fm = sagemaker.estimator.Estimator(
container,
sagemaker_role,
instance_count=1,
instance_type="ml.c5.xlarge",
output_path=output_prefix,
sagemaker_session=sagemaker_session,
)
fm.set_hyperparameters(
feature_dim=input_dims,
predictor_type="regressor",
mini_batch_size=1000,
num_factors=64,
epochs=20,
)
if 'training_job_name' not in locals():
fm.fit({'train': train_data_location, 'test': test_data_location})
training_job_name = fm.latest_training_job.job_name
%store training_job_name
else:
print(f'Using previous training job: {training_job_name}')
training_job_info = sagemaker_boto_client.describe_training_job(TrainingJobName=training_job_name)
###Output
_____no_output_____
###Markdown
Training data artifact
###Code
training_data_s3_uri = training_job_info["InputDataConfig"][0]["DataSource"]["S3DataSource"][
"S3Uri"
]
matching_artifacts = list(
artifact.Artifact.list(source_uri=training_data_s3_uri, sagemaker_session=sagemaker_session)
)
if matching_artifacts:
training_data_artifact = matching_artifacts[0]
print(f"Using existing artifact: {training_data_artifact.artifact_arn}")
else:
training_data_artifact = artifact.Artifact.create(
artifact_name="TrainingData",
source_uri=training_data_s3_uri,
artifact_type="Dataset",
sagemaker_session=sagemaker_session,
)
print(f"Create artifact {training_data_artifact.artifact_arn}: SUCCESSFUL")
###Output
Using existing artifact: arn:aws:sagemaker:us-east-2:645431112437:artifact/cdd7fbecb4eefa22c43b2ad48140acc2
###Markdown
Code ArtifactWe do not need a code artifact because we are using a built-in SageMaker Algorithm called Factorization Machines. The Factorization Machines container contains all of the code and, by default, our model training stores the Factorization Machines image for tracking purposes. Model artifact
###Code
trained_model_s3_uri = training_job_info["ModelArtifacts"]["S3ModelArtifacts"]
matching_artifacts = list(
artifact.Artifact.list(source_uri=trained_model_s3_uri, sagemaker_session=sagemaker_session)
)
if matching_artifacts:
model_artifact = matching_artifacts[0]
print(f"Using existing artifact: {model_artifact.artifact_arn}")
else:
model_artifact = artifact.Artifact.create(
artifact_name="TrainedModel",
source_uri=trained_model_s3_uri,
artifact_type="Model",
sagemaker_session=sagemaker_session,
)
print(f"Create artifact {model_artifact.artifact_arn}: SUCCESSFUL")
###Output
Using existing artifact: arn:aws:sagemaker:us-east-2:645431112437:artifact/3acde2fc029adeff9c767be68feac3a7
###Markdown
Set artifact associations
###Code
trial_component = sagemaker_boto_client.describe_trial_component(
TrialComponentName=training_job_name + "-aws-training-job"
)
trial_component_arn = trial_component["TrialComponentArn"]
###Output
_____no_output_____
###Markdown
Store artifacts
###Code
artifact_list = [[training_data_artifact, "ContributedTo"], [model_artifact, "Produced"]]
for art, assoc in artifact_list:
try:
association.Association.create(
source_arn=art.artifact_arn,
destination_arn=trial_component_arn,
association_type=assoc,
sagemaker_session=sagemaker_session,
)
print(f"Association with {art.artifact_type}: SUCCEESFUL")
except:
print(f"Association already exists with {art.artifact_type}")
model_name = "retail-recommendations"
model_matches = sagemaker_boto_client.list_models(NameContains=model_name)["Models"]
if not model_matches:
print(f"Creating model {model_name}")
model = sagemaker_session.create_model_from_job(
name=model_name,
training_job_name=training_job_info["TrainingJobName"],
role=sagemaker_role,
image_uri=training_job_info["AlgorithmSpecification"]["TrainingImage"],
)
else:
print(f"Model {model_name} already exists.")
###Output
_____no_output_____
###Markdown
SageMaker Model RegistryOnce a useful model has been trained and its artifacts properly associated, the next step is to register the model for future reference and possible deployment. Create Model Package GroupA Model Package Group holds multiple versions or iterations of a model. Though it is not required to create one for every model in the registry, they help organize various models which all have the same purpose and provide automatic versioning.
###Code
if 'mpg_name' not in locals():
timestamp = datetime.datetime.now().strftime('%Y-%m-%d-%H-%M')
mpg_name = f'retail-recommendation-{timestamp}'
%store mpg_name
print(f'Model Package Group name: {mpg_name}')
mpg_input_dict = {
"ModelPackageGroupName": mpg_name,
"ModelPackageGroupDescription": "Recommendation for Online Retail Sales",
}
matching_mpg = sagemaker_boto_client.list_model_package_groups(NameContains=mpg_name)[
"ModelPackageGroupSummaryList"
]
if matching_mpg:
print(f"Using existing Model Package Group: {mpg_name}")
else:
mpg_response = sagemaker_boto_client.create_model_package_group(**mpg_input_dict)
print(f"Create Model Package Group {mpg_name}: SUCCESSFUL")
model_metrics_report = {"regression_metrics": {}}
for metric in training_job_info["FinalMetricDataList"]:
stat = {metric["MetricName"]: {"value": metric["Value"]}}
model_metrics_report["regression_metrics"].update(stat)
with open("training_metrics.json", "w") as f:
json.dump(model_metrics_report, f)
metrics_s3_key = f"training_jobs/{training_job_info['TrainingJobName']}/training_metrics.json"
s3_client.upload_file(Filename="training_metrics.json", Bucket=bucket, Key=metrics_s3_key)
###Output
_____no_output_____
###Markdown
Define the inference spec
###Code
mp_inference_spec = InferenceSpecification().get_inference_specification_dict(
ecr_image=training_job_info["AlgorithmSpecification"]["TrainingImage"],
supports_gpu=False,
supported_content_types=["application/x-recordio-protobuf", "application/json"],
supported_mime_types=["text/csv"],
)
mp_inference_spec["InferenceSpecification"]["Containers"][0]["ModelDataUrl"] = training_job_info[
"ModelArtifacts"
]["S3ModelArtifacts"]
###Output
_____no_output_____
###Markdown
Define model metricsMetrics other than model quality can be defined. See the Boto3 documentation for [creating a model package](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/sagemaker.htmlSageMaker.Client.create_model_package).
###Code
model_metrics = {
"ModelQuality": {
"Statistics": {
"ContentType": "application/json",
"S3Uri": f"s3://{bucket}/{metrics_s3_key}",
}
}
}
mp_input_dict = {
"ModelPackageGroupName": mpg_name,
"ModelPackageDescription": "Factorization Machine Model to create personalized retail recommendations",
"ModelApprovalStatus": "PendingManualApproval",
"ModelMetrics": model_metrics,
}
mp_input_dict.update(mp_inference_spec)
mp_response = sagemaker_boto_client.create_model_package(**mp_input_dict)
###Output
_____no_output_____
###Markdown
Wait until model package is completed
###Code
mp_info = sagemaker_boto_client.describe_model_package(
ModelPackageName=mp_response["ModelPackageArn"]
)
mp_status = mp_info["ModelPackageStatus"]
while mp_status not in ["Completed", "Failed"]:
time.sleep(5)
mp_info = sagemaker_boto_client.describe_model_package(
ModelPackageName=mp_response["ModelPackageArn"]
)
mp_status = mp_info["ModelPackageStatus"]
print(f"model package status: {mp_status}")
print(f"model package status: {mp_status}")
model_package = sagemaker_boto_client.list_model_packages(ModelPackageGroupName=mpg_name)[
"ModelPackageSummaryList"
][0]
model_package_update = {
"ModelPackageArn": model_package["ModelPackageArn"],
"ModelApprovalStatus": "Approved",
}
update_response = sagemaker_boto_client.update_model_package(**model_package_update)
from sagemaker.lineage.visualizer import LineageTableVisualizer
viz = LineageTableVisualizer(sagemaker_session)
display(viz.show(training_job_name=training_job_name))
###Output
_____no_output_____
###Markdown
Make PredictionsNow that we've trained our model, we can deploy it behind an Amazon SageMaker real-time hosted endpoint. This will allow us to make predictions (or inference) from the model dynamically.Note, Amazon SageMaker allows you the flexibility of importing models trained elsewhere, as well as the choice of not importing models if the target of model creation is AWS Lambda, AWS Greengrass, Amazon Redshift, Amazon Athena, or another deployment target.Here we will take the top customer, the customer who spent the most money, and try to find which items to recommend to them.
###Code
from sagemaker.deserializers import JSONDeserializer
from sagemaker.serializers import JSONSerializer
class FMSerializer(JSONSerializer):
def serialize(self, data):
js = {"instances": []}
for row in data:
js["instances"].append({"features": row.tolist()})
return json.dumps(js)
fm_predictor = fm.deploy(
initial_instance_count=1,
instance_type="ml.m4.xlarge",
serializer=FMSerializer(),
deserializer=JSONDeserializer(),
)
# find customer who spent the most money
df = pd.read_csv("data/online_retail_preprocessed.csv")
df["invoice_amount"] = df["Quantity"] * df["UnitPrice"]
top_customer = (
df.groupby("CustomerID").sum()["invoice_amount"].sort_values(ascending=False).index[0]
)
def get_recommendations(df, customer_id, n_recommendations, n_ranks=100):
popular_items = (
df.groupby(["StockCode", "UnitPrice"])
.nunique()["CustomerID"]
.sort_values(ascending=False)
.reset_index()
)
top_n_items = popular_items["StockCode"].iloc[:n_ranks].values
top_n_prices = popular_items["UnitPrice"].iloc[:n_ranks].values
# stock codes can have multiple descriptions, so we will choose whichever description is most common
item_map = df.groupby("StockCode").agg(lambda x: x.value_counts().index[0])["Description"]
# find customer's country
df_subset = df.loc[df["CustomerID"] == customer_id]
country = df_subset["Country"].value_counts().index[0]
data = {
"StockCode": top_n_items,
"Description": [item_map[i] for i in top_n_items],
"CustomerID": customer_id,
"Country": country,
"UnitPrice": top_n_prices,
}
df_inference = pd.DataFrame(data)
# we need to build the data set similar to how we built it for training
# it should have the same number of features as the training data
enc = OneHotEncoder(handle_unknown="ignore")
onehot_cols = ["StockCode", "CustomerID", "Country"]
enc.fit(df[onehot_cols])
onehot_output = enc.transform(df_inference[onehot_cols])
vectorizer = TfidfVectorizer(min_df=2)
unique_descriptions = df["Description"].unique()
vectorizer.fit(unique_descriptions)
tfidf_output = vectorizer.transform(df_inference["Description"])
row = range(len(df_inference))
col = [0] * len(df_inference)
unit_price = csr_matrix((df_inference["UnitPrice"].values, (row, col)), dtype="float32")
X_inference = hstack([onehot_output, tfidf_output, unit_price], format="csr")
result = fm_predictor.predict(X_inference.toarray())
preds = [i["score"] for i in result["predictions"]]
index_array = np.array(preds).argsort()
items = enc.inverse_transform(onehot_output)[:, 0]
top_recs = np.take_along_axis(items, index_array, axis=0)[: -n_recommendations - 1 : -1]
recommendations = [[i, item_map[i]] for i in top_recs]
return recommendations
print("Top 5 recommended products:")
get_recommendations(df, top_customer, n_recommendations=5, n_ranks=100)
###Output
Top 5 recommended products:
###Markdown
Recommendation Engine for E-Commerce Sales: Part 2. Train and Make PredictionsThis notebook gives an overview of techniques and services offered by SageMaker to build and deploy a personalized recommendation engine. DatasetThe dataset for this demo comes from the [UCI Machine Learning Repository](https://archive.ics.uci.edu/ml/datasets/Online+Retail). It contains all the transactions occurring between 01/12/2010 and 09/12/2011 for a UK-based and registered non-store online retailer. The company mainly sells unique all-occasion gifts. The following attributes are included in our dataset:+ InvoiceNo: Invoice number. Nominal, a 6-digit integral number uniquely assigned to each transaction. If this code starts with letter 'c', it indicates a cancellation.+ StockCode: Product (item) code. Nominal, a 5-digit integral number uniquely assigned to each distinct product.+ Description: Product (item) name. Nominal.+ Quantity: The quantities of each product (item) per transaction. Numeric.+ InvoiceDate: Invoice Date and time. Numeric, the day and time when each transaction was generated.+ UnitPrice: Unit price. Numeric, Product price per unit in sterling.+ CustomerID: Customer number. Nominal, a 5-digit integral number uniquely assigned to each customer.+ Country: Country name. Nominal, the name of the country where each customer resides. Citation: Daqing Chen, Sai Liang Sain, and Kun Guo, Data mining for the online retail industry: A case study of RFM model-based customer segmentation using data mining, Journal of Database Marketing and Customer Strategy Management, Vol. 19, No. 3, pp. 197–208, 2012 (Published online before print: 27 August 2012. doi: 10.1057/dbm.2012.17)
###Code
!pip install sagemaker==2.21.0 boto3==1.16.40
%store -r
%store
import sagemaker
from sagemaker.lineage import context, artifact, association, action
import boto3
from model_package_src.inference_specification import InferenceSpecification
import json
import numpy as np
import pandas as pd
import datetime
import time
from scipy.sparse import csr_matrix, hstack, load_npz
from sklearn.preprocessing import OneHotEncoder
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split
assert sagemaker.__version__ >= '2.21.0'
region = boto3.Session().region_name
boto3.setup_default_session(region_name=region)
boto_session = boto3.Session(region_name=region)
s3_client = boto3.client('s3', region_name=region)
sagemaker_boto_client = boto_session.client('sagemaker')
sagemaker_session = sagemaker.session.Session(
boto_session=boto_session,
sagemaker_client=sagemaker_boto_client)
sagemaker_role = sagemaker.get_execution_role()
bucket = sagemaker_session.default_bucket()
prefix = 'personalization'
output_prefix = f's3://{bucket}/{prefix}/output'
###Output
_____no_output_____
###Markdown
Read the data Prepare Data For Modeling+ Split the data into training and testing sets+ Write the data to protobuf recordIO format for Pipe mode. [Read more](https://docs.aws.amazon.com/sagemaker/latest/dg/cdf-training.html) about protobuf recordIO format.
###Code
# load array
X_train = load_npz('./data/X_train.npz')
X_test = load_npz('./data/X_test.npz')
y_train_npzfile = np.load('./data/y_train.npz')
y_test_npzfile = np.load('./data/y_test.npz')
y_train = y_train_npzfile.f.arr_0
y_test = y_test_npzfile.f.arr_0
X_train.shape, X_test.shape,y_train.shape, y_test.shape
input_dims = X_train.shape[1]
%store input_dims
###Output
Stored 'input_dims' (int)
###Markdown
Train the factorization machine modelOnce we have the data preprocessed and available in the correct format for training, the next step is to actually train the model using the data. We'll use the Amazon SageMaker Python SDK to kick off training and monitor status until it is completed. In this example that takes only a few minutes. Although the model itself needs only 1-2 minutes to train, there is some extra time required upfront to provision hardware and load the algorithm container.First, let's specify our containers. To find the right container, we'll create a small lookup. More details on algorithm containers can be found in the [AWS documentation](https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-algo-docker-registry-paths.html).
###Code
container = sagemaker.image_uris.retrieve("factorization-machines", region=boto_session.region_name)
fm = sagemaker.estimator.Estimator(container,
sagemaker_role,
instance_count=1,
instance_type='ml.c5.xlarge',
output_path=output_prefix,
sagemaker_session=sagemaker_session)
fm.set_hyperparameters(feature_dim=input_dims,
predictor_type='regressor',
mini_batch_size=1000,
num_factors=64,
epochs=20)
if 'training_job_name' not in locals():
fm.fit({'train': train_data_location, 'test': test_data_location})
training_job_name = fm.latest_training_job.job_name
%store training_job_name
else:
print(f'Using previous training job: {training_job_name}')
training_job_info = sagemaker_boto_client.describe_training_job(TrainingJobName=training_job_name)
###Output
_____no_output_____
###Markdown
Training data artifact
###Code
training_data_s3_uri = training_job_info['InputDataConfig'][0]['DataSource']['S3DataSource']['S3Uri']
matching_artifacts = list(artifact.Artifact.list(
source_uri=training_data_s3_uri,
sagemaker_session=sagemaker_session))
if matching_artifacts:
training_data_artifact = matching_artifacts[0]
print(f'Using existing artifact: {training_data_artifact.artifact_arn}')
else:
training_data_artifact = artifact.Artifact.create(
artifact_name='TrainingData',
source_uri=training_data_s3_uri,
artifact_type='Dataset',
sagemaker_session=sagemaker_session)
print(f'Create artifact {training_data_artifact.artifact_arn}: SUCCESSFUL')
###Output
Using existing artifact: arn:aws:sagemaker:us-east-2:645431112437:artifact/cdd7fbecb4eefa22c43b2ad48140acc2
###Markdown
Code ArtifactWe do not need a code artifact because we are using a built-in SageMaker Algorithm called Factorization Machines. The Factorization Machines container contains all of the code and, by default, our model training stores the Factorization Machines image for tracking purposes. Model artifact
###Code
trained_model_s3_uri = training_job_info['ModelArtifacts']['S3ModelArtifacts']
matching_artifacts = list(artifact.Artifact.list(
source_uri=trained_model_s3_uri,
sagemaker_session=sagemaker_session))
if matching_artifacts:
model_artifact = matching_artifacts[0]
print(f'Using existing artifact: {model_artifact.artifact_arn}')
else:
model_artifact = artifact.Artifact.create(
artifact_name='TrainedModel',
source_uri=trained_model_s3_uri,
artifact_type='Model',
sagemaker_session=sagemaker_session)
print(f'Create artifact {model_artifact.artifact_arn}: SUCCESSFUL')
###Output
Using existing artifact: arn:aws:sagemaker:us-east-2:645431112437:artifact/3acde2fc029adeff9c767be68feac3a7
###Markdown
Set artifact associations
###Code
trial_component = sagemaker_boto_client.describe_trial_component(TrialComponentName=training_job_name+'-aws-training-job')
trial_component_arn = trial_component['TrialComponentArn']
###Output
_____no_output_____ |
.ipynb_checkpoints/database_processing-checkpoint.ipynb | ###Markdown
SETUP
###Code
#import libraries
import pandas as pd
import sqlite3
#df = pd.DataFrame(#data here :/)
df = pd.DataFrame({'name': ['Juan', 'Victoria', 'Mary'], \
'age': [23, 34, 43], 'city': ['Miami', 'Buenos Aries', 'Santiago']})
df
#We will sqlite3 library and create a connection
cnn = sqlite3.connect('jupyter_sql_tutorial.db')
df.to_sql('people', cnn)
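# Note: re-running the cell above raises an error because the 'people' table already
# exists; pandas' to_sql accepts if_exists to handle that case:
df.to_sql('people', cnn, if_exists='replace')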
%load_ext sql
%sql sqlite:///jupyter_sql_tutorial.db
%%sql
SELECT *
FROM people
%%sql
SELECT count(*)
FROM people
%%sql
SELECT sum(age) as 'age_sum'
FROM people
###Output
* sqlite:///jupyter_sql_tutorial.db
Done.
###Markdown
Parameters walkthrough
###Code
#create dummy dataframe
df = pd.DataFrame({'transaction_id': ['9', '8', '7', '6', '5', '4', '3'], \
'user_id': ['rafa', 'roy', 'kenny', 'brendan', 'jurgen', 'roy', 'roy'],\
'transaction_date': ['2021-12-21', '2020-12-21', '2019-12-21',\
'2018-11-21', '2017-10-21', '2019-03-02', '2010-01-01'],\
'amount': ['10', '15', '20', '24', '25', '31', '42']})
df
#We will sqlite3 library and create a connection
cnn = sqlite3.connect('dummy.db')
df.to_sql('managers1', cnn)
%reload_ext sql
%sql sqlite:///dummy.db
%%sql
SELECT *
FROM managers1
%%sql
SELECT sum(amount) as 'spend_sum'
FROM managers1
%%sql
SELECT
user_id
, count(*) as num_transactions
, sum(amount) as total_amount
FROM
managers1
WHERE
user_id = 'roy'
and transaction_date = '2019-03-02'
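# The filter values above are hard-coded; the same query with bound parameters, using
# pandas and sqlite3 '?' placeholders instead of the %%sql magic (a minimal sketch):
query = "SELECT user_id, count(*) AS num_transactions, sum(amount) AS total_amount FROM managers1 WHERE user_id = ? AND transaction_date = ?"
pd.read_sql_query(query, cnn, params=('roy', '2019-03-02'))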
###Output
* sqlite:///dummy.db
Done.
###Markdown
Pandas solution The cell at the bottom writes the correct data for one player. The next step is to clean this up. Notes: ovechal01, ovi81228 * 11,
###Code
#import libraries
import csv
from datetime import timedelta
from dateutil.parser import parse
import pandas as pd
import sqlite3
import time
count = 0
def find_dates(player):
d = []
stat_handle = player
stat_sheet = f'./data/{stat_handle}_stats.csv'
stat_df = pd.read_csv(stat_sheet, sep= '\t', header= 0, index_col= None)
for row in stat_df.iterrows():
values = row[1]
pd_date = parse(values[0])
#time delta to 1 day prior,
#worth investigating the effect of using time delta to travel to midnight on gameday
start_date = pd_date - timedelta(days = 1)
time_range = (start_date, pd_date)
#append to dates list
#print(time_range)
d.append(time_range)
return d
with open('lehner_data_points.csv', 'w', encoding='utf-8') as w:
writer = csv.writer(w, delimiter = '\t')
point_list = []
twitter_handle = 'robinlehner'
tweet_sheet = f'./data/{twitter_handle}.csv'
twitter_df = pd.read_csv(tweet_sheet, sep= '\t', header= 0, index_col= None)
windows = []
#print(find_dates('ovechal01'))
dates = find_dates('lehnero01')
#print(dates)
for date in dates:
start = date[0]
end = date[1]
window = (start, end)
windows.append(window)
for i in range(len(twitter_df)):
tweet_time = twitter_df.loc[i, 'date time']
tweet_time = parse(tweet_time)
content = twitter_df.loc[i, 'content']
#print(twitter_df.loc[i, 'date time'], twitter_df.loc[i, 'content'])
#if row >= start and row <= end:
for window in windows:
start = window[0]
end = window[1]
if tweet_time >= start and tweet_time <= end:
count += 1
row = [tweet_time, content]
writer.writerow(row)
#print(tweet_time, content)
print(f'{count} points found')
#import libraries
import csv
from datetime import timedelta
from dateutil.parser import parse
import pandas as pd
import sqlite3
import time
count = 0
def find_dates(player):
d = []
stat_handle = player
stat_sheet = f'./data/{stat_handle}_stats.csv'
stat_df = pd.read_csv(stat_sheet, sep= '\t', header= 0, index_col= None)
for row in stat_df.iterrows():
values = row[1]
pd_date = parse(values[0])
#time delta to 1 day prior,
#worth investigating the effect of using time delta to travel to midnight on gameday
start_date = pd_date - timedelta(days = 1)
time_range = (start_date, pd_date)
#append to dates list
#print(time_range)
d.append(time_range)
return d
with open('./data/lehner_data_points.csv', 'w', encoding='utf-8') as w:
writer = csv.writer(w, delimiter = '\t')
point_list = []
twitter_handle = 'robinlehner'
tweet_sheet = f'./data/{twitter_handle}.csv'
twitter_df = pd.read_csv(tweet_sheet, sep= '\t', header= 0, index_col= None)
windows = []
#print(find_dates('ovechal01'))
dates = find_dates('lehnero01')
#print(dates)
for date in dates:
start = date[0]
end = date[1]
window = (start, end)
windows.append(window)
for i in range(len(twitter_df)):
tweet_time = twitter_df.loc[i, 'date time']
tweet_time = parse(tweet_time)
content = twitter_df.loc[i, 'content']
#print(twitter_df.loc[i, 'date time'], twitter_df.loc[i, 'content'])
#if row >= start and row <= end:
for window in windows:
start = window[0]
end = window[1]
if tweet_time >= start and tweet_time <= end:
count += 1
row = [tweet_time, content]
writer.writerow(row)
#print(tweet_time, content)
print(f'{count} points found')
###Output
_____no_output_____ |
assets/Application of Skills-ml/.ipynb_checkpoints/01_indeed_scrape-checkpoint.ipynb | ###Markdown
House-keeping
###Code
import requests
import bs4
from bs4 import BeautifulSoup
import pandas as pd
import time
from IPython.display import Audio, display
import pickle
###Output
_____no_output_____
###Markdown
Handy functions
###Code
def allDone():
'''this function outputs a short audio when called.
Typically this is used to signal a task completion'''
display(Audio(url='https://sound.peal.io/ps/audios/000/000/537/original/woo_vu_luvub_dub_dub.wav', autoplay=True))
###Output
_____no_output_____
###Markdown
collect job postings with predefined category
###Code
def indeedScrape(tSearch = 'data scientist', nMax = 50):
columns = ['location', 'company_name', 'job_title', 'summary', 'full_info', 'ref']
df = pd.DataFrame(columns = columns)
metaUrl = 'https://www.indeed.co.uk/jobs?q=%(search)s&l=United+Kingdom&radius=100&start=' % {'search':tSearch.replace(' ', '+')}
for start in range(0, nMax, 10):
try:
url = metaUrl + str(start)
page = requests.get(url)
# print('retriving url: ', url)
except:
print('---Failed to retrieve---')
print('url:', url)
continue
soup = BeautifulSoup(page.text, 'html.parser')
# time.sleep(1)
## metadata from mainpage
# extract info from class:row
# company name and job title
companies, jobs = [], []
for div in soup.find_all(name = 'div', attrs = {'class':'row'}):
company = div.find_all(name = 'span', attrs = {'class':'company'})
if len(company) > 0:
for b in company:
companies.append(b.text.strip())
else:
sec_try = div.find_all(name = 'span', attrs = {'class':'result-link-source'})
for span in sec_try:
companies.append(span.text.strip())
for a in div.find_all(name = 'a', attrs = {'data-tn-element':"jobTitle"}):
jobs.append(a['title'])
# extract location
locations = []
spans = soup.find_all('span', attrs={'class':'location'})
for span in spans:
locations.append(span.text)
# extract summaries
summaries = []
divs = soup.find_all('div', attrs = {'class':'summary'})
for i, div in enumerate(divs):
summaries.append(div.text.strip())
## crawl to subpages
descriptions = []
link_list = []
for adlink in soup.select('a[onmousedown*="return rclk(this,jobmap["]'):
suburl = "https://www.indeed.com" + adlink['href']
link_list.append(suburl)
try:
subpage = requests.get(suburl)
subsoup = BeautifulSoup(subpage.text)
except:
print('--- Failed to retrieve sub-URL ---')
print('url: ', suburl)
descriptions.append('')  # keep descriptions aligned with link_list
continue  # skip parsing, so a missing or stale subsoup is not reused below
# extract descriptions
for des in subsoup.select('div[class*="jobsearch-JobComponent-description"]'):
descriptions.append(des.get_text())
df_temp = list(zip(locations, companies, jobs, summaries, descriptions, link_list))
df_temp = pd.DataFrame(df_temp, columns = columns)
df = df.append(df_temp).reset_index(drop = True)
return df
# Financial part - Done!
# # 13-2099.01
# Financial_Quantitative_Analysts = indeedScrape(tSearch='Financial Quantitative Analysts', nMax=300)
# Financial_Quantitative_Analysts['soc'] = '13-2099.01'
# # 13-2051.00
# Financial_Analysts = indeedScrape(tSearch='Financial Analysts', nMax=1000)
# Financial_Analysts['soc'] = '13-2051.00'
# # 13-2052.00
# Financial_Advisors = indeedScrape(tSearch='Financial Advisors', nMax=2000)
# Financial_Advisors['soc'] = '13-2052.00'
# df = pd.concat([Financial_Analysts, Financial_Advisors, Financial_Quantitative_Analysts], ignore_index=True)
# dirPData = '../data/'
# f_name = dirPData + 'financial_jobs_with_soc.pickle'
# with open(f_name, "wb") as f:
# pickle.dump(df, f)
# allDone()
# Telecommunication
# 17-2071.00
Electrical_Engineers = indeedScrape(tSearch='Electrical Engineers', nMax = 1000)
Electrical_Engineers['soc'] = '17-2071.00'
# 17-2141.00
Mechanical_Engineers = indeedScrape(tSearch='Mechanical Engineers', nMax = 1000)
Mechanical_Engineers['soc'] = '17-2141.00'
df = pd.concat([Electrical_Engineers, Mechanical_Engineers], ignore_index=True)
dirPData = '../data/'
f_name = dirPData + 'telecom_jobs_with_soc.pickle'
with open(f_name, "wb") as f:
pickle.dump(df, f)
###Output
_____no_output_____ |
doc/source/analyzing/units/6)_Unit_Equivalencies.ipynb | ###Markdown
Some physical quantities are directly related to other unitful quantities by a constant, but otherwise do not have the same units. To facilitate conversions between these quantities, `yt` implements a system of unit equivalencies (inspired by the [AstroPy implementation](http://docs.astropy.org/en/latest/units/equivalencies.html)). The possible unit equivalencies are:* `"thermal"`: conversions between temperature and energy ($E = k_BT$)* `"spectral"`: conversions between wavelength, frequency, and energy for photons ($E = h\nu = hc/\lambda, c = \lambda\nu$)* `"mass_energy"`: conversions between mass and energy ($E = mc^2$)* `"lorentz"`: conversions between velocity and Lorentz factor ($\gamma = 1/\sqrt{1-(v/c)^2}$)* `"schwarzschild"`: conversions between mass and Schwarzschild radius ($R_S = 2GM/c^2$)* `"compton"`: conversions between mass and Compton wavelength ($\lambda = h/mc$)The following unit equivalencies only apply under conditions applicable for an ideal gas with a constant mean molecular weight $\mu$ and ratio of specific heats $\gamma$:* `"number_density"`: conversions between density and number density ($n = \rho/\mu{m_p}$)* `"sound_speed"`: conversions between temperature and sound speed for an ideal gas ($c_s^2 = \gamma{k_BT}/\mu{m_p}$)A `YTArray` or `YTQuantity` can be converted to an equivalent using `in_units` (previously described in [Fields and Unit Conversion](fields_and_unit_conversion.html)), where the unit and the equivalence name are provided as additional arguments:
###Code
import yt
from yt import YTQuantity
import numpy as np
ds = yt.load('IsolatedGalaxy/galaxy0030/galaxy0030')
dd = ds.all_data()
print (dd["temperature"].in_units("erg", equivalence="thermal"))
print (dd["temperature"].in_units("eV", equivalence="thermal"))
# Rest energy of the proton
from yt.units import mp
E_p = mp.in_units("GeV", equivalence="mass_energy")
print (E_p)
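# Cross-check the "thermal" equivalence against its defining relation E = k_B*T
# (kboltz is yt's Boltzmann constant; the temperature value is an arbitrary example):
from yt.units import kboltz
T = YTQuantity(1.0e6, "K")
print (T.in_units("erg", equivalence="thermal"), (kboltz*T).in_units("erg"))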
###Output
_____no_output_____
###Markdown
Most equivalencies can go in both directions, without any information required other than the unit you want to convert to (this is not the case for the electromagnetic equivalencies, which we'll discuss later):
###Code
from yt.units import clight
v = 0.1*clight
g = v.in_units("dimensionless", equivalence="lorentz")
print (g)
print (g.in_units("c", equivalence="lorentz"))
###Output
_____no_output_____
###Markdown
The previously described `to_value` method, which works like `in_units` except that it returns a bare NumPy array or floating-point number, also accepts equivalencies:
###Code
print (dd["temperature"].to_value("erg", equivalence="thermal"))
print (mp.to_value("GeV", equivalence="mass_energy"))
###Output
_____no_output_____
###Markdown
Special Equivalencies Some equivalencies can take supplemental information. The `"number_density"` equivalence can take a custom mean molecular weight (default is $\mu = 0.6$):
###Code
print (dd["density"].max())
print (dd["density"].in_units("cm**-3", equivalence="number_density").max())
print (dd["density"].in_units("cm**-3", equivalence="number_density", mu=0.75).max())
###Output
_____no_output_____
###Markdown
The `"sound_speed"` equivalence optionally takes the ratio of specific heats $\gamma$ and the mean molecular weight $\mu$ (defaults are $\gamma$ = 5/3, $\mu = 0.6$):
###Code
print (dd["temperature"].in_units("km/s", equivalence="sound_speed").mean())
print (dd["temperature"].in_units("km/s", equivalence="sound_speed", gamma=4./3., mu=0.5).mean())
###Output
_____no_output_____
###Markdown
These options must be used with caution, and only if you know the underlying data adheres to these assumptions! Electromagnetic Equivalencies Special, one-way equivalencies exist for converting between electromagnetic units in the cgs and SI unit systems. These exist since in the cgs system, electromagnetic units are comprised of the base units of seconds, grams and centimeters, whereas in the SI system Ampere is a base unit. For example, the dimensions of charge are completely different in the two systems:
###Code
Q1 = YTQuantity(1.0,"C")
Q2 = YTQuantity(1.0,"esu")
print ("Q1 dims =", Q1.units.dimensions)
print ("Q2 dims =", Q2.units.dimensions)
print ("Q1 base units =", Q1.in_mks())
print ("Q2 base units =", Q2.in_cgs())
###Output
_____no_output_____
###Markdown
To convert from a cgs unit to an SI unit, use the "SI" equivalency:
###Code
from yt.units import qp # the elementary charge in esu
qp_SI = qp.in_units("C", equivalence="SI") # convert to Coulombs
print (qp)
print (qp_SI)
###Output
_____no_output_____
###Markdown
To convert from an SI unit to a cgs unit, use the "CGS" equivalency:
###Code
B = YTQuantity(1.0,"T") # magnetic field in Tesla
print (B, B.in_units("gauss", equivalence="CGS")) # convert to Gauss
###Output
_____no_output_____
###Markdown
Equivalencies exist between the SI and cgs dimensions of charge, current, magnetic field, electric potential, and resistance. As a neat example, we can convert current in Amperes and resistance in Ohms to their cgs equivalents, and then use them to calculate the "Joule heating" of a conductor with resistance $R$ and current $I$:
###Code
I = YTQuantity(1.0,"A")
I_cgs = I.in_units("statA", equivalence="CGS")
R = YTQuantity(1.0,"ohm")
R_cgs = R.in_units("statohm", equivalence="CGS")
P = I**2*R
P_cgs = I_cgs**2*R_cgs
###Output
_____no_output_____
###Markdown
The dimensions of current and resistance in the two systems are completely different, but the formula gives us the power dissipated dimensions of energy per time, so the dimensions and the result should be the same, which we can check:
###Code
print (P_cgs.units.dimensions == P.units.dimensions)
print (P.in_units("W"), P_cgs.in_units("W"))
###Output
_____no_output_____
###Markdown
Determining Valid Equivalencies If a certain equivalence does not exist for a particular unit, then an error will be thrown:
###Code
from yt.utilities.exceptions import YTInvalidUnitEquivalence
try:
x = v.in_units("angstrom", equivalence="spectral")
except YTInvalidUnitEquivalence as e:
print (e)
###Output
_____no_output_____
###Markdown
You can check if a `YTArray` has a given equivalence with `has_equivalent`:
###Code
print (mp.has_equivalent("compton"))
print (mp.has_equivalent("thermal"))
###Output
_____no_output_____
###Markdown
To list the equivalencies available for a given `YTArray` or `YTQuantity`, use the `list_equivalencies` method:
###Code
E_p.list_equivalencies()
###Output
_____no_output_____
###Markdown
Some physical quantities are directly related to other unitful quantities by a constant, but otherwise do not have the same units. To facilitate conversions between these quantities, `yt` implements a system of unit equivalencies (inspired by the [AstroPy implementation](http://docs.astropy.org/en/latest/units/equivalencies.html)). The possible unit equivalencies are:* `"thermal"`: conversions between temperature and energy ($E = k_BT$)* `"spectral"`: conversions between wavelength, frequency, and energy for photons ($E = h\nu = hc/\lambda, c = \lambda\nu$)* `"mass_energy"`: conversions between mass and energy ($E = mc^2$)* `"lorentz"`: conversions between velocity and Lorentz factor ($\gamma = 1/\sqrt{1-(v/c)^2}$)* `"schwarzschild"`: conversions between mass and Schwarzschild radius ($R_S = 2GM/c^2$)* `"compton"`: conversions between mass and Compton wavelength ($\lambda = h/mc$)The following unit equivalencies only apply under conditions applicable for an ideal gas with a constant mean molecular weight $\mu$ and ratio of specific heats $\gamma$:* `"number_density"`: conversions between density and number density ($n = \rho/\mu{m_p}$)* `"sound_speed"`: conversions between temperature and sound speed for an ideal gas ($c_s^2 = \gamma{k_BT}/\mu{m_p}$)A `YTArray` or `YTQuantity` can be converted to an equivalent using `to_equivalent`, where the unit and the equivalence name are provided as arguments:
###Code
import yt
from yt import YTQuantity
import numpy as np
ds = yt.load('IsolatedGalaxy/galaxy0030/galaxy0030')
dd = ds.all_data()
print (dd["temperature"].to_equivalent("erg", "thermal"))
print (dd["temperature"].to_equivalent("eV", "thermal"))
# Rest energy of the proton
from yt.units import mp
E_p = mp.to_equivalent("GeV", "mass_energy")
print (E_p)
###Output
_____no_output_____
###Markdown
Most equivalencies can go in both directions, without any information required other than the unit you want to convert to (this is not the case for the electromagnetic equivalencies, which we'll discuss later):
###Code
from yt.units import clight
v = 0.1*clight
g = v.to_equivalent("dimensionless", "lorentz")
print (g)
print (g.to_equivalent("c", "lorentz"))
###Output
_____no_output_____
###Markdown
Special Equivalencies Some equivalencies can take supplemental information. The `"number_density"` equivalence can take a custom mean molecular weight (default is $\mu = 0.6$):
###Code
print (dd["density"].max())
print (dd["density"].to_equivalent("cm**-3", "number_density").max())
print (dd["density"].to_equivalent("cm**-3", "number_density", mu=0.75).max())
###Output
_____no_output_____
###Markdown
The `"sound_speed"` equivalence optionally takes the ratio of specific heats $\gamma$ and the mean molecular weight $\mu$ (defaults are $\gamma$ = 5/3, $\mu = 0.6$):
###Code
print (dd["temperature"].to_equivalent("km/s", "sound_speed").mean())
print (dd["temperature"].to_equivalent("km/s", "sound_speed", gamma=4./3., mu=0.5).mean())
###Output
_____no_output_____
###Markdown
These options must be used with caution, and only if you know the underlying data adheres to these assumptions! Electromagnetic Equivalencies Special, one-way equivalencies exist for converting between electromagnetic units in the cgs and SI unit systems. These exist since in the cgs system, electromagnetic units are comprised of the base units of seconds, grams and centimeters, whereas in the SI system Ampere is a base unit. For example, the dimensions of charge are completely different in the two systems:
###Code
Q1 = YTQuantity(1.0,"C")
Q2 = YTQuantity(1.0,"esu")
print ("Q1 dims =", Q1.units.dimensions)
print ("Q2 dims =", Q2.units.dimensions)
print ("Q1 base units =", Q1.in_mks())
print ("Q2 base units =", Q2.in_cgs())
###Output
_____no_output_____
###Markdown
To convert from a cgs unit to an SI unit, use the "SI" equivalency:
###Code
from yt.units import qp # the elementary charge in esu
qp_SI = qp.to_equivalent("C","SI") # convert to Coulombs
print (qp)
print (qp_SI)
###Output
_____no_output_____
###Markdown
To convert from an SI unit to a cgs unit, use the "CGS" equivalency:
###Code
B = YTQuantity(1.0,"T") # magnetic field in Tesla
print (B, B.to_equivalent("gauss","CGS")) # convert to Gauss
###Output
_____no_output_____
###Markdown
Equivalencies exist between the SI and cgs dimensions of charge, current, magnetic field, electric potential, and resistance. As a neat example, we can convert current in Amperes and resistance in Ohms to their cgs equivalents, and then use them to calculate the "Joule heating" of a conductor with resistance $R$ and current $I$:
###Code
I = YTQuantity(1.0,"A")
I_cgs = I.to_equivalent("statA","CGS")
R = YTQuantity(1.0,"ohm")
R_cgs = R.to_equivalent("statohm","CGS")
P = I**2*R
P_cgs = I_cgs**2*R_cgs
###Output
_____no_output_____
###Markdown
The dimensions of current and resistance in the two systems are completely different, but the formula gives us the power dissipated dimensions of energy per time, so the dimensions and the result should be the same, which we can check:
###Code
print (P_cgs.units.dimensions == P.units.dimensions)
print (P.in_units("W"), P_cgs.in_units("W"))
###Output
_____no_output_____
###Markdown
Determining Valid Equivalencies If a certain equivalence does not exist for a particular unit, then an error will be thrown:
###Code
from yt.utilities.exceptions import YTInvalidUnitEquivalence
try:
x = v.to_equivalent("angstrom", "spectral")
except YTInvalidUnitEquivalence as e:
print (e)
###Output
_____no_output_____
###Markdown
You can check if a `YTArray` has a given equivalence with `has_equivalent`:
###Code
print (mp.has_equivalent("compton"))
print (mp.has_equivalent("thermal"))
###Output
_____no_output_____
###Markdown
To list the equivalencies available for a given `YTArray` or `YTQuantity`, use the `list_equivalencies` method:
###Code
E_p.list_equivalencies()
###Output
_____no_output_____ |
pandas/3_summary.ipynb | ###Markdown
Topics- describe- info- value_counts(), unique()- map vs reduce- apply()
###Code
import pandas as pd
df = pd.read_csv('titanic.csv')
df.describe()[['Age', 'Fare']]
df.info()
list(df)
df['Sex'].unique()
df['Pclass'].unique()
df['Sex'].value_counts()
[i for i in range(1, 11)]
# i perform some operation here...
# result 1: 55
# result 2: 1, 4, 9, 16, 25, 36, 49, 64, 81, 100
df.Name
sample_name = "Braund, Mr. Owen Harris"
sample_name.split()[1]
def honorifics(name):
return name.split()[1]
honorifics("Subedi, Mr. Ayush")
df.head()
df['Honor'] = df.Name.apply(honorifics)
df.head()
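# The topic list above mentions "map vs reduce": for a Series, map is an alternative to
# apply that also accepts a dict-like lookup, while functools.reduce folds a sequence to
# a single value (the 'Sex_short' column name is just for illustration).
df['Sex_short'] = df['Sex'].map({'male': 'M', 'female': 'F'})
df[['Sex', 'Sex_short']].head()
from functools import reduce
reduce(lambda a, b: a + b, range(1, 11)) # 55, same result as the running-sum loop below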
df.Embarked.value_counts()
df.shape[0] - df.Embarked.value_counts().sum()
df.Embarked.isnull().sum()
df.Age.isnull().sum()
df.shape[0]
df.Fare.mean()
def mahango_ki_sasto(value):
mean = 32.2042
if (value > mean):
return 'mahango'
return 'sasto'
df['m_ki_s'] = df.Fare.apply(mahango_ki_sasto)
df
sum = 0
for i in range (1, 11):
sum = sum + i
print (sum)
###Output
55
|
ensemble_network.ipynb | ###Markdown
Dataset Import
###Code
from __future__ import print_function, division
import os
import torch
import pandas as pd
from skimage import io, transform
import numpy as np
import matplotlib.pyplot as plt
from torch.utils.data import Dataset, DataLoader
from torchvision import transforms, utils
import pickle
import cv2
# Ignore warnings
import warnings
warnings.filterwarnings("ignore")
plt.ion() # interactive mode
class EnsembleDataset(Dataset):
"""Ensemble dataset."""
def __init__(self, root_dir, inc_img=False, transform=None):
self.root_dir = root_dir
self.inc_img = inc_img
self.transform = transform
def __len__(self):
return 10533 #13167
def __getitem__(self, idx):
if torch.is_tensor(idx):
idx = idx.tolist()
inp = cv2.imread(self.root_dir+"/img/"+str(idx)+".png", cv2.IMREAD_UNCHANGED)
n1 = io.imread(self.root_dir+"/net1/"+str(idx)+".png", cv2.IMREAD_UNCHANGED)[:,:,np.newaxis]
n2 = io.imread(self.root_dir+"/net2/"+str(idx)+".png", cv2.IMREAD_UNCHANGED)[:,:,np.newaxis]
n3 = io.imread(self.root_dir+"/net3/"+str(idx)+".png", cv2.IMREAD_UNCHANGED)[:,:,np.newaxis]
n4 = io.imread(self.root_dir+"/net4/"+str(idx)+".png", cv2.IMREAD_UNCHANGED)[:,:,np.newaxis]
n5 = io.imread(self.root_dir+"/net5/"+str(idx)+".png", cv2.IMREAD_UNCHANGED)[:,:,np.newaxis]
res = np.dstack((inp,n1,n2,n3,n4,n5))/255
gt = io.imread(self.root_dir+"/gt/"+str(idx)+".png", cv2.IMREAD_UNCHANGED)[:,:,np.newaxis]/255
sample = {'name': idx, 'inp': res, 'gt': gt}
if self.transform:
sample = self.transform(sample)
return sample
from skimage.transform import resize
from torchvision import transforms, utils
class Resize(object):
def __init__(self, size, n_channels):
self.size = size
self.n_channels = n_channels
def __call__(self,sample):
name,inp,gt = sample["name"],sample["inp"],sample["gt"]
return {"name": name, "inp": resize(inp,(self.size,self.size,self.n_channels),preserve_range=True), "gt": resize(gt,(self.size,self.size,1),preserve_range=True)}
class ToTensor(object):
"""Convert ndarrays in sample to Tensors."""
def __call__(self, sample):
name,inp,gt = sample["name"],sample["inp"],sample["gt"]
# swap color axis because
# numpy image: H x W x C
# torch image: C x H x W
inp = inp.transpose((2, 0, 1))
gt = gt.transpose((2, 0, 1))
return {"name": name,
"inp": torch.from_numpy(inp),
"gt": torch.from_numpy(gt)}
trainset = EnsembleDataset(root_dir='data/coco_bitwise_or_reduced_ensemble_results',
inc_img=True,
transform=transforms.Compose([Resize(512,N_CHANNELS),
ToTensor()]))
trainloader = torch.utils.data.DataLoader(trainset, batch_size=BATCH_SIZE,
shuffle=True, num_workers=6)
len(trainset)
###Output
_____no_output_____
###Markdown
Training
###Code
from torch.utils.tensorboard import SummaryWriter
#PATH = "work_dirs/simplenet_1/"
def train(net, trainloader, criterion, optimizer, save_path, tensorboard_path, checkpoint=None):
EPOCH = 0
writer = SummaryWriter(log_dir=tensorboard_path)
if checkpoint != None:
checkpoint = torch.load(checkpoint)
net.load_state_dict(checkpoint['model_state_dict'])
optimizer.load_state_dict(checkpoint['optimizer_state_dict'])
EPOCH = checkpoint['epoch']
loss = checkpoint['loss']
net.train()
for epoch in range(EPOCH,100): # loop over the dataset multiple times
running_loss = 0.0
for i, data in enumerate(trainloader, 0):
# get the inputs; data is a list of [inputs, labels]
im_seg = data["inp"].to(device, dtype=torch.float)
im_res = data["gt"].to(device, dtype=torch.float)
# zero the parameter gradients
optimizer.zero_grad()
# forward + backward + optimize
output = net(im_seg.float())
loss = criterion(output.float(), im_res.float())
loss.backward()
optimizer.step()
# print statistics
running_loss += loss.item()
if i % 2000 == 1999: # print every 2000 mini-batches
"""print('[%d, %5d] segm loss: %.6f class loss: %.6f loss: %.6f' %
(epoch + 1, i + 1, running_loss_segm / 50, running_loss_class / 50, running_loss / 50))"""
print('[%d, %5d] loss: %.6f' %
(epoch + 1, i + 1, running_loss / 1999))
running_loss = 0.0
input_ = im_seg.cpu().detach()
output_ = output.cpu().detach()
gt_output_ = im_res.cpu().detach()
#output_ = torch.argmax(output_,1)
#print(output_.shape)
input_ = input_.numpy()[0].transpose((1,2,0))
output_ = output_.numpy()[0].transpose((1,2,0))
gt_output_ = gt_output_.numpy()[0].transpose((1,2,0)).squeeze(axis=2)
fig, ax = plt.subplots(nrows=1, ncols=9, figsize=(15,15))
ax=ax.flat
ax[0].set_title("Original Image")
ax[0].imshow(input_[:,:,0:3])
for i in range(0,5):
#ax.append(fig.add_subplot(2, 4, i+1))
ax[i+1].set_title("Input "+str(i+1)) # set title
ax[i+1].imshow(input_[:,:,i+3],cmap='gray',vmin=0,vmax=1)
ax[6].set_title("Output") # set title
ax[6].imshow(output_,cmap='gray',vmin=0,vmax=1)
ax[7].set_title("Output Rounded") # set title
ax[7].imshow(np.around(output_),cmap='gray',vmin=0,vmax=1)
#ax.append(fig.add_subplot(2, 4, 7))
ax[8].set_title("Ground Truth") # set title
ax[8].imshow(gt_output_,cmap='gray',vmin=0,vmax=1)
fig.tight_layout()
plt.show()
writer.add_scalar('Loss', loss, epoch)
if epoch % 2 == 1:
torch.save({
'epoch': epoch,
'model_state_dict': net.state_dict(),
'optimizer_state_dict': optimizer.state_dict(),
'loss': loss,
}, save_path+"epoch_"+str(epoch+1)+".pt")
writer.close()
print('Finished Training')
import torch.optim as optim
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
OPTIMIZER = "SGD"
ACTIVATION = "lrelu"
LOSS = "BCELoss"
layers = [""]
#for layers in #[[(3,8,16),(3,16,32),(5,32,64),(5,64,32),(3,32,16),(3,16,2)]]:
print("Starting training on network ",layers)
net = UNet2(N_CHANNELS,1)#layers,activation=ACTIVATION)
net = net.to(device).float()
if LOSS == "BCELoss":
criterion = nn.BCELoss()
elif LOSS == "CrossEntropyLoss":
criterion = nn.CrossEntropyLoss()
if OPTIMIZER == "SGD":
optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)
elif OPTIMIZER == "Adam":
optimizer = optim.Adam(net.parameters(), lr=0.001)
checkpoint_path = "work_dirs/unet2_bitwise_or_img_ensemble_reduced_do_40"
for layer in layers:
checkpoint_path += "_"+str(layer)
checkpoint_path += "/" + OPTIMIZER + "_" + ACTIVATION + "_" + LOSS + "/"
tensorboard_path = checkpoint_path+"tb/"
os.makedirs(tensorboard_path,exist_ok=True)
train(net,trainloader,criterion,optimizer, checkpoint_path, tensorboard_path)#, checkpoint="work_dirs/simplenet_1/epoch_25.pt")
from torchinfo import summary
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = SimpleNet([(3,8,16),(3,16,32),(3,32,64),(3,64,32),(3,32,16),(3,16,2)],activation="lrelu").float().to(device)
summary(model, (1,8,572,572))
from tensorboard.backend.event_processing.event_accumulator import EventAccumulator
event_acc = EventAccumulator('work_dirs/simplenet_1_1_1/sigmoid_BCELoss/tb')
event_acc.Reload()
# Show all tags in the log file
print(event_acc.Tags())
# E. g. get wall clock, number of steps and value for a scalar 'Accuracy'
w_times, step_nums, vals = zip(*event_acc.Scalars('Loss'))
###Output
_____no_output_____
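The scalars pulled out of the event file above are never actually plotted. As an optional follow-up, here is a small sketch of a loss curve built from those values; it assumes `matplotlib.pyplot` is already available as `plt`, as it is elsewhere in this notebook.

```python
# Plot the 'Loss' scalar series extracted from the TensorBoard event file.
fig, ax = plt.subplots(figsize=(8, 4))
ax.plot(step_nums, vals, marker='o')
ax.set_xlabel('epoch (logged step)')
ax.set_ylabel('loss')
ax.set_title("Training loss logged under the 'Loss' tag")
plt.show()
```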
###Markdown
Network Summary
###Code
# visualize a single training sample (the keys follow the Resize/ToTensor transforms above)
i = 0
data = trainset[i]
im_seg = data['inp']
im_res = data['gt']
res = im_seg[0:3,:,:].numpy().transpose((1,2,0))
fig = plt.figure()
plt.imshow(res)
###Output
_____no_output_____ |
2.training.your.first.neural.network.ipynb | ###Markdown
Deep Learning with PyTorchAuthor: [Anand Saha](http://teleported.in/) 2. Building a simple neural network
###Code
import torch
import torch.nn as nn
import matplotlib.pyplot as plt
from torch.autograd import Variable
# Custom DataSet
from data import iris
###Output
_____no_output_____
###Markdown
The Dataset and the challengeThe **Iris** flower, image source: [Wikimedia](https://en.wikipedia.org/wiki/Iris_(plant))| sepal_length_cm | sepal_width_cm | petal_length_cm | petal_width_cm | class ||-----------------|----------------|-----------------|----------------|-----------------|| 5.1 | 3.5 | 1.4 | 0.2 | Iris-setosa || 7.0 | 3.2 | 4.7 | 1.4 | Iris-versicolor || 6.3 | 3.3 | 6.0 | 2.5 | Iris-virginica |* Total instances: 150 (we have separated 20% into validation set, rest into training set)* Download: [Data Source](https://archive.ics.uci.edu/ml/datasets/iris) Let's do a head on the raw file
###Code
!head data/iris.data.txt
###Output
_____no_output_____
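The `iris.get_datasets` call used below comes from a custom `data/iris.py` helper that is not shown in this notebook. Purely as a hedged sketch (the real module may differ), here is what such a loader might look like, assuming it returns dataset objects that expose the underlying frame as `.data`, the attribute used later when evaluating on the test set.

```python
# Hypothetical sketch of data/iris.py -- names and details are assumptions.
import pandas as pd
import torch
from torch.utils.data import Dataset

class IrisDataset(Dataset):
    def __init__(self, frame):
        self.data = frame.reset_index(drop=True)   # exposed as .data, as used later

    def __len__(self):
        return len(self.data)

    def __getitem__(self, idx):
        row = self.data.iloc[idx]
        features = torch.tensor(row.iloc[0:4].values.astype('float32'))
        label = int(row.iloc[4])
        return features, label

def get_datasets(path, train_frac=0.8, seed=0):
    cols = ['sepal_length_cm', 'sepal_width_cm',
            'petal_length_cm', 'petal_width_cm', 'class']
    df = pd.read_csv(path, names=cols)
    # encode the three species as integer labels 0, 1, 2
    labels = {'Iris-setosa': 0, 'Iris-versicolor': 1, 'Iris-virginica': 2}
    df['class'] = df['class'].map(labels)
    df = df.sample(frac=1, random_state=seed)        # shuffle before splitting
    n_train = int(len(df) * train_frac)
    return IrisDataset(df.iloc[:n_train]), IrisDataset(df.iloc[n_train:])
```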
###Markdown
Create the Fully Connected Feed Forward Neural Network **Create the module**
###Code
class IrisNet(nn.Module):
def __init__(self, input_size, hidden1_size, hidden2_size, num_classes):
super(IrisNet, self).__init__()
self.fc1 = nn.Linear(input_size, hidden1_size)
self.relu1 = nn.ReLU()
self.fc2 = nn.Linear(hidden1_size, hidden2_size)
self.relu2 = nn.ReLU()
self.fc3 = nn.Linear(hidden2_size, num_classes)
def forward(self, x):
out = self.fc1(x)
out = self.relu1(out)
out = self.fc2(out)
out = self.relu2(out)
out = self.fc3(out)
return out
###Output
_____no_output_____
###Markdown
**Print the module**
###Code
model = IrisNet(4, 100, 50, 3)
print(model)
###Output
_____no_output_____
###Markdown
Create the DataLoader
###Code
batch_size = 60
iris_data_file = 'data/iris.data.txt'
# Get the datasets
train_ds, test_ds = iris.get_datasets(iris_data_file)
# How many instances have we got?
print('# instances in training set: ', len(train_ds))
print('# instances in testing/validation set: ', len(test_ds))
# Create the dataloaders - for training and validation/testing
# We will be using the term validation and testing data interchangably
train_loader = torch.utils.data.DataLoader(dataset=train_ds, batch_size=batch_size, shuffle=True)
test_loader = torch.utils.data.DataLoader(dataset=test_ds, batch_size=batch_size, shuffle=True)
###Output
_____no_output_____
###Markdown
Instantiate the network, the loss function and the optimizer
###Code
# Our model
net = IrisNet(4, 100, 50, 3)
# Our loss function
criterion = nn.CrossEntropyLoss()
# Our optimizer
learning_rate = 0.001
optimizer = torch.optim.SGD(net.parameters(), lr=learning_rate, nesterov=True, momentum=0.9, dampening=0)
###Output
_____no_output_____
###Markdown
Train it!
###Code
num_epochs = 500
train_loss = []
test_loss = []
train_accuracy = []
test_accuracy = []
for epoch in range(num_epochs):
train_correct = 0
train_total = 0
for i, (items, classes) in enumerate(train_loader):
# Convert torch tensor to Variable
items = Variable(items)
classes = Variable(classes)
net.train() # Put the network into training mode
optimizer.zero_grad() # Clear off the gradients from any past operation
outputs = net(items) # Do the forward pass
loss = criterion(outputs, classes) # Calculate the loss
loss.backward() # Calculate the gradients with help of back propagation
optimizer.step() # Ask the optimizer to adjust the parameters based on the gradients
# Record the correct predictions for training data
train_total += classes.size(0)
_, predicted = torch.max(outputs.data, 1)
train_correct += (predicted == classes.data).sum()
print ('Epoch %d/%d, Iteration %d/%d, Loss: %.4f'
%(epoch+1, num_epochs, i+1, len(train_ds)//batch_size, loss.data[0]))
net.eval() # Put the network into evaluation mode
# Book keeping
# Record the loss
train_loss.append(loss.data[0])
# What was our train accuracy?
train_accuracy.append((100 * train_correct / train_total))
# How did we do on the test set (the unseen set)
# Record the correct predictions for test data
test_items = torch.FloatTensor(test_ds.data.values[:, 0:4])
test_classes = torch.LongTensor(test_ds.data.values[:, 4])
outputs = net(Variable(test_items))
loss = criterion(outputs, Variable(test_classes))
test_loss.append(loss.data[0])
_, predicted = torch.max(outputs.data, 1)
total = test_classes.size(0)
correct = (predicted == test_classes).sum()
test_accuracy.append((100 * correct / total))
###Output
_____no_output_____
###Markdown
Plot loss vs iterations
###Code
fig = plt.figure(figsize=(12, 8))
plt.plot(train_loss, label='train loss')
plt.plot(test_loss, label='test loss')
plt.title("Train and Test Loss")
plt.legend()
plt.show()
fig = plt.figure(figsize=(12, 8))
plt.plot(train_accuracy, label='train accuracy')
plt.plot(test_accuracy, label='test accuracy')
plt.title("Train and Test Accuracy")
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Saving the model to disk, and loading it back
###Code
torch.save(net.state_dict(), "./2.model.pth")
net2 = IrisNet(4, 100, 50, 3)
net2.load_state_dict(torch.load("./2.model.pth"))
output = net2(Variable(torch.FloatTensor([[5.1, 3.5, 1.4, 0.2]])))
_, predicted_class = torch.max(output.data, 1)
print('Predicted class: ', predicted_class.numpy()[0])
print('Expected class: ', 0 )
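# Added sketch: map the numeric class back to a species name for readability.
# The 0/1/2 -> setosa/versicolor/virginica ordering is an assumption about how
# data/iris.py encodes the labels; adjust it if the helper uses another order.
class_names = {0: 'Iris-setosa', 1: 'Iris-versicolor', 2: 'Iris-virginica'}
print('Predicted species: ', class_names[predicted_class.numpy()[0]])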
###Output
_____no_output_____ |
server/color_type/ml/train/ml_steps.ipynb | ###Markdown
Train a model to predict a color type
###Code
# eliminate library warnings
import warnings
warnings.simplefilter('ignore')
import pandas as pd
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
import matplotlib.lines as mlines
# set matplotlib outputs/plots to be embeded inline to notebook cells
%matplotlib inline
# read data
df = pd.read_csv('data/warm_cold_colors.csv')
df.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 136 entries, 0 to 135
Data columns (total 4 columns):
r 136 non-null int64
g 136 non-null int64
b 136 non-null int64
is_warm 136 non-null int64
dtypes: int64(4)
memory usage: 4.3 KB
###Markdown
It is very useful to downcast the data types from the defaults to reduce memory consumption and, consequently, to speed up the computation process
###Code
# cast the data types
df[['r', 'g', 'b']] = df[['r', 'g', 'b']].astype(np.int16)
df['is_warm'] = df['is_warm'].astype(np.int8)
df.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 136 entries, 0 to 135
Data columns (total 4 columns):
r 136 non-null int16
g 136 non-null int16
b 136 non-null int16
is_warm 136 non-null int8
dtypes: int16(3), int8(1)
memory usage: 1.0 KB
###Markdown
In our case, we reduced the memory consumption by a factor of **4**(!). That wasn't critical here since we have a tiny data set, but with 1M+ records it is a reasonable first step prior to building a machine learning model. In fact, type downcasting is used in TensorFlow Lite as part of post-training quantization (see details here).
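As a quick sanity check on that factor-of-4 figure, pandas can report the frame's actual in-memory size; the sketch below simply re-reads the raw file with default dtypes and compares totals.

```python
# Compare total in-memory size before and after the downcast (values in bytes).
raw = pd.read_csv('data/warm_cold_colors.csv')
print('default dtypes :', raw.memory_usage(deep=True).sum(), 'bytes')
print('downcast dtypes:', df.memory_usage(deep=True).sum(), 'bytes')
```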
###Code
# check the classes distribution skewness
df[['is_warm', 'r']].groupby(['is_warm'])\
.count()\
.reset_index()
###Output
_____no_output_____
###Markdown
Explorative Data Analysis - EDA
###Code
axes_combinations = [('r', 'g'),
('r', 'b'),
('g', 'b')]
color_scale = {'Cool': 'blue',
'Warm':'red'}
fig, axes = plt.subplots(nrows=1, ncols=3, figsize=(20, 10))
for i, ax in enumerate(axes.flat):
x, y = axes_combinations[i]
im = ax.scatter(x=df[x],
y=df[y],
c=df['is_warm'],
cmap=matplotlib.colors.ListedColormap(color_scale.values()))
ax.set_xlabel(x)
ax.set_ylabel(y)
ax.add_line(mlines.Line2D([0,255], [0,255],
color='k'))
cb = fig.colorbar(im,
ax=axes.ravel().tolist(),
orientation='horizontal',
shrink=.2)
cb.ax.set_title('Color Type', pad=10)
cb.set_ticks([.25,.75])
cb.set_ticklabels(list(color_scale.keys()))
plt.show()
###Output
_____no_output_____
###Markdown
Modelling Metrics How close to reality can a model's prediction be? For a classification problem, a confusion matrix is usually used.
###Code
%%html
<img src="https://upload.wikimedia.org/wikipedia/commons/2/26/Precisionrecall.svg" alt="Precisionrecall.svg" height="480" width="264">
<div style="text-align:center">
By <a href="//commons.wikimedia.org/wiki/User:Walber" title="User:Walber">Walber</a> - <span class="int-own-work" lang="en">Own work</span>,
<a href="https://creativecommons.org/licenses/by-sa/4.0" title="Creative Commons Attribution-Share Alike 4.0">CC BY-SA 4.0</a>, <a href="https://commons.wikimedia.org/w/index.php?curid=36926283">Link</a>
</div>
###Output
_____no_output_____
###Markdown
In order to reduce the false negative and false positive rates, the F1 score will be used as the model validation metric: $F_{1} = \frac{2}{\frac{1}{Recall} + \frac{1}{Precision}}$ $Recall = \frac{TP}{TP+FN}$ describes what fraction of all true "positive" points a model predicted correctly $Precision = \frac{TP}{TP+FP}$ describes what fraction of all predicted "positive" points a model predicted correctly Another widely used (and often abused) metric is accuracy: $Accuracy = \frac{TP+TN}{TP+FP+TN+FN}$
###Code
def eval_metrics(y_true: np.array,
y_pred: np.array) -> dict:
"""
Function to calculate classification metrics
Args:
y_true: real labels
y_pred: predicitons
Returs:
dict with the keys:
confusion_matrix - confusion matrix
accuracy = (TP+TN)/(TP+TN+FP+FN)
precision = TP/(TP+FP)
recall = TP/(TP+FN)
f1_score = 2/(1/recall+1/precision)
"""
assert len(y_true) != 0, "Empty array, please check input"
true_positives = np.where(y_true==1)[0]
TP = sum(y_pred[true_positives]==1)
FN = sum(y_pred[true_positives]==0)
true_negatives = np.where(y_true==0)[0]
FP = sum(y_pred[true_negatives]==1)
TN = sum(y_pred[true_negatives]==0)
accuracy = (TP + TN) / len(y_true)
recall = precision = f1_score = None
if FN > 0 or TP > 0:
recall = float(TP / (TP + FN))
if FP > 0 or TP > 0:
precision = float(TP / (TP + FP))
if recall and precision:
f1_score = 2/(1/recall+1/precision)
return {
"confusion_matrix":
{
"TP": TP, "FP": FP,
"FN": FN, "TN": TN
},
"accuracy": accuracy,
"recall": recall,
"precision": precision,
"f1_score": f1_score
}
###Output
_____no_output_____
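As a sanity check on `eval_metrics`, its output can be compared against scikit-learn on a tiny synthetic example. This is only a sketch; `sklearn` is assumed to be available, as it is imported later for `train_test_split`.

```python
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0])

ours = eval_metrics(y_true, y_pred)
print('custom :', ours['accuracy'], ours['precision'], ours['recall'], ours['f1_score'])
print('sklearn:', accuracy_score(y_true, y_pred), precision_score(y_true, y_pred),
      recall_score(y_true, y_pred), f1_score(y_true, y_pred))
```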
###Markdown
Baseline As a baseline, we can set the condition: ```python
if r > g > b:
    color_type = 'Warm'
else:
    color_type = 'Cool'
```
###Code
class model_baseline:
""" Baseline model """
import pandas as pd
def __init__(self):
pass
@classmethod
def _rule(cls, row: pd.DataFrame) -> int:
""" Model rule
Args:
Row: pd.DataFrame row
Return:
int
"""
if row['r'] > row['g'] > row['b']:
return 1
return 0
def predict(cls, X: pd.DataFrame) -> pd.core.series.Series:
""" Function to run a base line prediction
Args:
X: input data
Returns:
array
"""
return X.apply(lambda row: cls._rule(row), axis=1)
# baseline model
model_v1 = model_baseline()
y_predict_baseline = model_v1.predict(df[['r', 'g', 'b']])
# baseline model evaluation
eval_metrics(df['is_warm'], y_predict_baseline)
###Output
_____no_output_____
###Markdown
So without any machine learning, we managed to get an F1 score of 0.74 :) Let's assess our "model" using a random data point:
###Code
# test point -> a color of the "Cool" type/class 0
test_point = pd.DataFrame({'r': [8], 'g': [103], 'b': [203]})
model_v1.predict(test_point).squeeze()
###Output
_____no_output_____
###Markdown
Let's deploy the baseline model as a POC. We can improve our ML service by improving our model's accuracy. Data preparation
###Code
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler
seed = 2019
df = df.sample(frac=1, random_state=seed)\
.reset_index(drop=True)
scale = MinMaxScaler()
X = df.drop('is_warm', axis=1)
X_scaled = scale.fit_transform(X)
x_train, x_test, y_train, y_test = train_test_split(X_scaled, df['is_warm'],
test_size=0.2,
random_state=seed)
###Output
_____no_output_____
###Markdown
XGBoost
###Code
from xgboost import XGBClassifier as xgb_class
x_train, x_test, y_train, y_test = train_test_split(df.drop('is_warm', axis=1), df['is_warm'],
test_size=0.2,
random_state=seed)
params = {
"objective": 'binary:logistic',
"learning_rate": 0.5,
"n_estimators": 100,
"max_depth": 3,
"n_jobs": 4,
"silent": False,
"subsample": 0.8,
"random_state": seed
}
model_v2 = xgb_class(**params)
model_v2.fit(x_train, y_train, verbose=True)
y_predict_xgb = model_v2.predict(df.drop('is_warm', axis=1))
eval_metrics(df['is_warm'], y_predict_xgb)
# point test
model_v2.predict(test_point).squeeze()
###Output
_____no_output_____
###Markdown
0.97 is quite an improvement, so let's dump the model and re-deploy the service
###Code
import pickle
def save_object(obj, filename):
"""
Function to save/pickle python object
Args:
filename: str path to pickle file
"""
with open(filename, 'wb') as output:
pickle.dump(obj, output, -1)
save_object(model_v2, '../model/v2/model_v2.sav')
###Output
_____no_output_____ |
project-2-image-classification/dlnd_image_classification.ipynb | ###Markdown
Image ClassificationIn this project, you'll classify images from the [CIFAR-10 dataset](https://www.cs.toronto.edu/~kriz/cifar.html). The dataset consists of airplanes, dogs, cats, and other objects. You'll preprocess the images, then train a convolutional neural network on all the samples. The images need to be normalized and the labels need to be one-hot encoded. You'll get to apply what you learned and build a convolutional, max pooling, dropout, and fully connected layers. At the end, you'll get to see your neural network's predictions on the sample images. Get the DataRun the following cell to download the [CIFAR-10 dataset for python](https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz).
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
import problem_unittests as tests
import tarfile
cifar10_dataset_folder_path = 'cifar-10-batches-py'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile('cifar-10-python.tar.gz'):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='CIFAR-10 Dataset') as pbar:
urlretrieve(
'https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz',
'cifar-10-python.tar.gz',
pbar.hook)
if not isdir(cifar10_dataset_folder_path):
with tarfile.open('cifar-10-python.tar.gz') as tar:
tar.extractall()
tar.close()
tests.test_folder_path(cifar10_dataset_folder_path)
###Output
All files found!
###Markdown
Explore the DataThe dataset is broken into batches to prevent your machine from running out of memory. The CIFAR-10 dataset consists of 5 batches, named `data_batch_1`, `data_batch_2`, etc.. Each batch contains the labels and images that are one of the following:* airplane* automobile* bird* cat* deer* dog* frog* horse* ship* truckUnderstanding a dataset is part of making predictions on the data. Play around with the code cell below by changing the `batch_id` and `sample_id`. The `batch_id` is the id for a batch (1-5). The `sample_id` is the id for a image and label pair in the batch.Ask yourself "What are all possible labels?", "What is the range of values for the image data?", "Are the labels in order or random?". Answers to questions like these will help you preprocess the data and end up with better predictions.
###Code
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import helper
import numpy as np
# Explore the dataset
batch_id = 1
sample_id = 5
helper.display_stats(cifar10_dataset_folder_path, batch_id, sample_id)
###Output
Stats of batch 1:
Samples: 10000
Label Counts: {0: 1005, 1: 974, 2: 1032, 3: 1016, 4: 999, 5: 937, 6: 1030, 7: 1001, 8: 1025, 9: 981}
First 20 Labels: [6, 9, 9, 4, 1, 1, 2, 7, 8, 3, 4, 7, 7, 2, 9, 9, 9, 3, 2, 6]
Example of Image 5:
Image - Min Value: 0 Max Value: 252
Image - Shape: (32, 32, 3)
Label - Label Id: 1 Name: automobile
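Two of the exploration questions above ("are the labels in order or random?", "what is the range of values for the image data?") can also be checked directly on a raw batch. A small sketch, assuming the standard CIFAR-10 python pickle layout with 'labels' and 'data' keys:

```python
import pickle
import numpy as np

with open(cifar10_dataset_folder_path + '/data_batch_1', mode='rb') as f:
    batch = pickle.load(f, encoding='latin1')

print('first 30 labels  :', batch['labels'][:30])        # not sorted -> random order
print('pixel value range:', np.min(batch['data']), '-', np.max(batch['data']))
```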
###Markdown
Implement Preprocess Functions NormalizeIn the cell below, implement the `normalize` function to take in image data, `x`, and return it as a normalized Numpy array. The values should be in the range of 0 to 1, inclusive. The return object should be the same shape as `x`.
###Code
def normalize(x):
"""
Normalize a list of sample image data in the range of 0 to 1
: x: List of image data. The image shape is (32, 32, 3)
: return: Numpy array of normalize data
"""
# TODO: Implement Function
return (x - np.min(x)) / (np.max(x) - np.min(x))
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_normalize(normalize)
###Output
Tests Passed
###Markdown
One-hot encodeJust like the previous code cell, you'll be implementing a function for preprocessing. This time, you'll implement the `one_hot_encode` function. The input, `x`, are a list of labels. Implement the function to return the list of labels as One-Hot encoded Numpy array. The possible values for labels are 0 to 9. The one-hot encoding function should return the same encoding for each value between each call to `one_hot_encode`. Make sure to save the map of encodings outside the function.Hint: Don't reinvent the wheel.
###Code
n_labels = 10
def one_hot_encode(x):
"""
One hot encode a list of sample labels. Return a one-hot encoded vector for each label.
: x: List of sample Labels
: return: Numpy array of one-hot encoded labels
"""
# TODO: Implement Function
return np.eye(n_labels)[x]
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_one_hot_encode(one_hot_encode)
###Output
Tests Passed
###Markdown
Randomize DataAs you saw from exploring the data above, the order of the samples are randomized. It doesn't hurt to randomize it again, but you don't need to for this dataset. Preprocess all the data and save itRunning the code cell below will preprocess all the CIFAR-10 data and save it to file. The code below also uses 10% of the training data for validation.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(cifar10_dataset_folder_path, normalize, one_hot_encode)
###Output
_____no_output_____
###Markdown
Check PointThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import pickle
import problem_unittests as tests
import helper
# Load the Preprocessed Validation data
valid_features, valid_labels = pickle.load(open('preprocess_validation.p', mode='rb'))
###Output
_____no_output_____
###Markdown
Build the networkFor the neural network, you'll build each layer into a function. Most of the code you've seen has been outside of functions. To test your code more thoroughly, we require that you put each layer in a function. This allows us to give you better feedback and test for simple mistakes using our unittests before you submit your project.>**Note:** If you're finding it hard to dedicate enough time for this course each week, we've provided a small shortcut to this part of the project. In the next couple of problems, you'll have the option to use classes from the [TensorFlow Layers](https://www.tensorflow.org/api_docs/python/tf/layers) or [TensorFlow Layers (contrib)](https://www.tensorflow.org/api_guides/python/contrib.layers) packages to build each layer, except the layers you build in the "Convolutional and Max Pooling Layer" section. TF Layers is similar to Keras's and TFLearn's abstraction to layers, so it's easy to pickup.>However, if you would like to get the most out of this course, try to solve all the problems _without_ using anything from the TF Layers packages. You **can** still use classes from other packages that happen to have the same name as ones you find in TF Layers! For example, instead of using the TF Layers version of the `conv2d` class, [tf.layers.conv2d](https://www.tensorflow.org/api_docs/python/tf/layers/conv2d), you would want to use the TF Neural Network version of `conv2d`, [tf.nn.conv2d](https://www.tensorflow.org/api_docs/python/tf/nn/conv2d). Let's begin! InputThe neural network needs to read the image data, one-hot encoded labels, and dropout keep probability. Implement the following functions* Implement `neural_net_image_input` * Return a [TF Placeholder](https://www.tensorflow.org/api_docs/python/tf/placeholder) * Set the shape using `image_shape` with batch size set to `None`. * Name the TensorFlow placeholder "x" using the TensorFlow `name` parameter in the [TF Placeholder](https://www.tensorflow.org/api_docs/python/tf/placeholder).* Implement `neural_net_label_input` * Return a [TF Placeholder](https://www.tensorflow.org/api_docs/python/tf/placeholder) * Set the shape using `n_classes` with batch size set to `None`. * Name the TensorFlow placeholder "y" using the TensorFlow `name` parameter in the [TF Placeholder](https://www.tensorflow.org/api_docs/python/tf/placeholder).* Implement `neural_net_keep_prob_input` * Return a [TF Placeholder](https://www.tensorflow.org/api_docs/python/tf/placeholder) for dropout keep probability. * Name the TensorFlow placeholder "keep_prob" using the TensorFlow `name` parameter in the [TF Placeholder](https://www.tensorflow.org/api_docs/python/tf/placeholder).These names will be used at the end of the project to load your saved model.Note: `None` for shapes in TensorFlow allow for a dynamic size.
###Code
import tensorflow as tf
def neural_net_image_input(image_shape):
"""
Return a Tensor for a batch of image input
: image_shape: Shape of the images
: return: Tensor for image input.
"""
# TODO: Implement Function
return tf.placeholder(tf.float32,
shape=[None, *image_shape],
name="x")
def neural_net_label_input(n_classes):
"""
Return a Tensor for a batch of label input
: n_classes: Number of classes
: return: Tensor for label input.
"""
# TODO: Implement Function
return tf.placeholder(tf.float32,
shape=(None,n_classes),
name="y")
def neural_net_keep_prob_input():
"""
Return a Tensor for keep probability
: return: Tensor for keep probability.
"""
# TODO: Implement Function
return tf.placeholder(tf.float32,
name="keep_prob")
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tf.reset_default_graph()
tests.test_nn_image_inputs(neural_net_image_input)
tests.test_nn_label_inputs(neural_net_label_input)
tests.test_nn_keep_prob_inputs(neural_net_keep_prob_input)
###Output
Image Input Tests Passed.
Label Input Tests Passed.
Keep Prob Tests Passed.
###Markdown
Convolution and Max Pooling LayerConvolution layers have a lot of success with images. For this code cell, you should implement the function `conv2d_maxpool` to apply convolution then max pooling:* Create the weight and bias using `conv_ksize`, `conv_num_outputs` and the shape of `x_tensor`.* Apply a convolution to `x_tensor` using weight and `conv_strides`. * We recommend you use same padding, but you're welcome to use any padding.* Add bias* Add a nonlinear activation to the convolution.* Apply Max Pooling using `pool_ksize` and `pool_strides`. * We recommend you use same padding, but you're welcome to use any padding.**Note:** You **can't** use [TensorFlow Layers](https://www.tensorflow.org/api_docs/python/tf/layers) or [TensorFlow Layers (contrib)](https://www.tensorflow.org/api_guides/python/contrib.layers) for **this** layer, but you can still use TensorFlow's [Neural Network](https://www.tensorflow.org/api_docs/python/tf/nn) package. You may still use the shortcut option for all the **other** layers.
###Code
def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides):
"""
Apply convolution then max pooling to x_tensor
:param x_tensor: TensorFlow Tensor
:param conv_num_outputs: Number of outputs for the convolutional layer
:param conv_ksize: kernal size 2-D Tuple for the convolutional layer
:param conv_strides: Stride 2-D Tuple for convolution
:param pool_ksize: kernal size 2-D Tuple for pool
:param pool_strides: Stride 2-D Tuple for pool
: return: A tensor that represents convolution and max pooling of x_tensor
"""
# TODO: Implement Function
in_depth = int(x_tensor.shape[3])
out_depth = conv_num_outputs
# weight = tf.Variable(tf.truncated_normal([*conv_ksize, in_depth, out_depth]))
weight = tf.Variable(tf.random_normal([*conv_ksize, in_depth, out_depth], stddev=0.1))
bias = tf.Variable(tf.zeros(conv_num_outputs))
# convolution
conv_layer = tf.nn.conv2d(x_tensor,
weight,
strides=[1, *conv_strides, 1],
padding='SAME')
conv_layer = tf.nn.bias_add(conv_layer, bias)
conv_layer = tf.nn.relu(conv_layer)
# pooling
conv_layer = tf.nn.max_pool(conv_layer,
ksize=[1, *pool_ksize, 1],
strides=[1, *pool_strides, 1],
padding='SAME')
return conv_layer
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_con_pool(conv2d_maxpool)
###Output
Tests Passed
###Markdown
Flatten LayerImplement the `flatten` function to change the dimension of `x_tensor` from a 4-D tensor to a 2-D tensor. The output should be the shape (*Batch Size*, *Flattened Image Size*). Shortcut option: you can use classes from the [TensorFlow Layers](https://www.tensorflow.org/api_docs/python/tf/layers) or [TensorFlow Layers (contrib)](https://www.tensorflow.org/api_guides/python/contrib.layers) packages for this layer. For more of a challenge, only use other TensorFlow packages.
###Code
from functools import reduce
def flatten(x_tensor):
"""
Flatten x_tensor to (Batch Size, Flattened Image Size)
: x_tensor: A tensor of size (Batch Size, ...), where ... are the image dimensions.
: return: A tensor of size (Batch Size, Flattened Image Size).
"""
# TODO: Implement Function
batch_size, *flatten = x_tensor.get_shape().as_list()
flatten = reduce(lambda x,y: x*y, flatten)
return tf.reshape(x_tensor, [-1, flatten])
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_flatten(flatten)
###Output
Tests Passed
###Markdown
Fully-Connected LayerImplement the `fully_conn` function to apply a fully connected layer to `x_tensor` with the shape (*Batch Size*, *num_outputs*). Shortcut option: you can use classes from the [TensorFlow Layers](https://www.tensorflow.org/api_docs/python/tf/layers) or [TensorFlow Layers (contrib)](https://www.tensorflow.org/api_guides/python/contrib.layers) packages for this layer. For more of a challenge, only use other TensorFlow packages.
###Code
def fully_conn(x_tensor, num_outputs):
"""
Apply a fully connected layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
: num_outputs: The number of output that the new tensor should be.
: return: A 2-D tensor where the second dimension is num_outputs.
"""
# TODO: Implement Function
_, x = x_tensor.get_shape().as_list()
# weights = tf.Variable(tf.truncated_normal((int(x), num_outputs)))
weights = tf.Variable(tf.random_normal((int(x), num_outputs), stddev=0.1))
bias = tf.Variable(tf.zeros(num_outputs))
full_layer = tf.add(tf.matmul(x_tensor, weights), bias)
full_layer = tf.nn.relu(full_layer)
return full_layer
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_fully_conn(fully_conn)
###Output
Tests Passed
###Markdown
Output LayerImplement the `output` function to apply a fully connected layer to `x_tensor` with the shape (*Batch Size*, *num_outputs*). Shortcut option: you can use classes from the [TensorFlow Layers](https://www.tensorflow.org/api_docs/python/tf/layers) or [TensorFlow Layers (contrib)](https://www.tensorflow.org/api_guides/python/contrib.layers) packages for this layer. For more of a challenge, only use other TensorFlow packages.**Note:** Activation, softmax, or cross entropy should **not** be applied to this.
###Code
def output(x_tensor, num_outputs):
"""
Apply a output layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
: num_outputs: The number of output that the new tensor should be.
: return: A 2-D tensor where the second dimension is num_outputs.
"""
# TODO: Implement Function
_, x = x_tensor.get_shape().as_list()
weights = tf.Variable(tf.random_normal((int(x), num_outputs), stddev=0.1))
bias = tf.Variable(tf.zeros(num_outputs))
full_layer = tf.add(tf.matmul(x_tensor, weights), bias)
return full_layer
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_output(output)
###Output
Tests Passed
###Markdown
Create Convolutional ModelImplement the function `conv_net` to create a convolutional neural network model. The function takes in a batch of images, `x`, and outputs logits. Use the layers you created above to create this model:* Apply 1, 2, or 3 Convolution and Max Pool layers* Apply a Flatten Layer* Apply 1, 2, or 3 Fully Connected Layers* Apply an Output Layer* Return the output* Apply [TensorFlow's Dropout](https://www.tensorflow.org/api_docs/python/tf/nn/dropout) to one or more layers in the model using `keep_prob`.
###Code
def conv_net(x, keep_prob):
"""
Create a convolutional neural network model
: x: Placeholder tensor that holds image data.
: keep_prob: Placeholder tensor that hold dropout keep probability.
: return: Tensor that represents logits
"""
# TODO: Apply 1, 2, or 3 Convolution and Max Pool layers
# Play around with different number of outputs, kernel size and stride
# Function Definition from Above:
# conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides)
x = conv2d_maxpool(x,
conv_num_outputs=32,
conv_ksize=(2, 2),
conv_strides=(1, 1),
pool_ksize=(2, 2),
pool_strides=(1, 1))
x = conv2d_maxpool(x,
conv_num_outputs=64,
conv_ksize=(2, 2),
conv_strides=(2, 2),
pool_ksize=(2, 2),
pool_strides=(2, 2))
# TODO: Apply a Flatten Layer
# Function Definition from Above:
# flatten(x_tensor)
x = flatten(x)
# TODO: Apply 1, 2, or 3 Fully Connected Layers
# Play around with different number of outputs
# Function Definition from Above:
# fully_conn(x_tensor, num_outputs)
x = fully_conn(x, 512)
x = tf.nn.dropout(x, keep_prob)
x = fully_conn(x, 128)
x = tf.nn.dropout(x, keep_prob)
# TODO: Apply an Output Layer
# Set this to the number of classes
# Function Definition from Above:
# output(x_tensor, num_outputs)
out = output(x, 10)
# TODO: return output
return out
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
##############################
## Build the Neural Network ##
##############################
# Remove previous weights, bias, inputs, etc..
tf.reset_default_graph()
# Inputs
x = neural_net_image_input((32, 32, 3))
y = neural_net_label_input(10)
keep_prob = neural_net_keep_prob_input()
# Model
logits = conv_net(x, keep_prob)
# Name logits Tensor, so that is can be loaded from disk after training
logits = tf.identity(logits, name='logits')
# Loss and Optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))
optimizer = tf.train.AdamOptimizer().minimize(cost)
# Accuracy
correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy')
tests.test_conv_net(conv_net)
###Output
Neural Network Built!
###Markdown
Train the Neural Network Single OptimizationImplement the function `train_neural_network` to do a single optimization. The optimization should use `optimizer` to optimize in `session` with a `feed_dict` of the following:* `x` for image input* `y` for labels* `keep_prob` for keep probability for dropoutThis function will be called for each batch, so `tf.global_variables_initializer()` has already been called.Note: Nothing needs to be returned. This function is only optimizing the neural network.
###Code
def train_neural_network(session, optimizer, keep_probability, feature_batch, label_batch):
"""
Optimize the session on a batch of images and labels
: session: Current TensorFlow session
: optimizer: TensorFlow optimizer function
: keep_probability: keep probability
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
"""
# TODO: Implement Function
session.run(optimizer, feed_dict={
x: feature_batch,
y: label_batch,
keep_prob: keep_probability
})
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_train_nn(train_neural_network)
###Output
Tests Passed
###Markdown
Show StatsImplement the function `print_stats` to print loss and validation accuracy. Use the global variables `valid_features` and `valid_labels` to calculate validation accuracy. Use a keep probability of `1.0` to calculate the loss and validation accuracy.
###Code
def print_stats(session, feature_batch, label_batch, cost, accuracy):
"""
Print information about loss and validation accuracy
: session: Current TensorFlow session
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
: cost: TensorFlow cost function
: accuracy: TensorFlow accuracy function
"""
# TODO: Implement Function
loss = session.run(cost, feed_dict={
x: feature_batch,
y: label_batch,
keep_prob: 1.
})
valid_accuracy = session.run(accuracy, feed_dict={
x: valid_features,
y: valid_labels,
keep_prob: 1.
})
print('Loss: {:>10.4f} - Accuracy: {:.6f}'.format(loss, valid_accuracy))
###Output
_____no_output_____
###Markdown
HyperparametersTune the following parameters:* Set `epochs` to the number of iterations until the network stops learning or start overfitting* Set `batch_size` to the highest number that your machine has memory for. Most people set them to common sizes of memory: * 64 * 128 * 256 * ...* Set `keep_probability` to the probability of keeping a node using dropout
###Code
# TODO: Tune Parameters
epochs = 32
batch_size = 128
keep_probability = 0.5
###Output
_____no_output_____
###Markdown
Train on a Single CIFAR-10 BatchInstead of training the neural network on all the CIFAR-10 batches of data, let's use a single batch. This should save time while you iterate on the model to get a better accuracy. Once the final validation accuracy is 50% or greater, run the model on all the data in the next section.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
print('Checking the Training on a Single Batch...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
batch_i = 1
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
###Output
Checking the Training on a Single Batch...
Epoch 1, CIFAR-10 Batch 1: Loss: 2.0292 - Accuracy: 0.308400
Epoch 2, CIFAR-10 Batch 1: Loss: 1.8137 - Accuracy: 0.405200
Epoch 3, CIFAR-10 Batch 1: Loss: 1.5444 - Accuracy: 0.458200
Epoch 4, CIFAR-10 Batch 1: Loss: 1.3623 - Accuracy: 0.492800
Epoch 5, CIFAR-10 Batch 1: Loss: 1.2525 - Accuracy: 0.495200
Epoch 6, CIFAR-10 Batch 1: Loss: 1.0797 - Accuracy: 0.515200
Epoch 7, CIFAR-10 Batch 1: Loss: 0.9489 - Accuracy: 0.548600
Epoch 8, CIFAR-10 Batch 1: Loss: 0.8353 - Accuracy: 0.547600
Epoch 9, CIFAR-10 Batch 1: Loss: 0.7314 - Accuracy: 0.564400
Epoch 10, CIFAR-10 Batch 1: Loss: 0.6599 - Accuracy: 0.572800
Epoch 11, CIFAR-10 Batch 1: Loss: 0.6052 - Accuracy: 0.579200
Epoch 12, CIFAR-10 Batch 1: Loss: 0.4706 - Accuracy: 0.581000
Epoch 13, CIFAR-10 Batch 1: Loss: 0.4368 - Accuracy: 0.576800
Epoch 14, CIFAR-10 Batch 1: Loss: 0.3869 - Accuracy: 0.596800
Epoch 15, CIFAR-10 Batch 1: Loss: 0.3408 - Accuracy: 0.595600
Epoch 16, CIFAR-10 Batch 1: Loss: 0.2867 - Accuracy: 0.605800
Epoch 17, CIFAR-10 Batch 1: Loss: 0.2357 - Accuracy: 0.601400
Epoch 18, CIFAR-10 Batch 1: Loss: 0.2328 - Accuracy: 0.593400
Epoch 19, CIFAR-10 Batch 1: Loss: 0.1663 - Accuracy: 0.608800
Epoch 20, CIFAR-10 Batch 1: Loss: 0.1318 - Accuracy: 0.595200
Epoch 21, CIFAR-10 Batch 1: Loss: 0.1057 - Accuracy: 0.612400
Epoch 22, CIFAR-10 Batch 1: Loss: 0.1013 - Accuracy: 0.597600
Epoch 23, CIFAR-10 Batch 1: Loss: 0.0828 - Accuracy: 0.607200
Epoch 24, CIFAR-10 Batch 1: Loss: 0.0601 - Accuracy: 0.605800
Epoch 25, CIFAR-10 Batch 1: Loss: 0.0399 - Accuracy: 0.606400
Epoch 26, CIFAR-10 Batch 1: Loss: 0.0339 - Accuracy: 0.613000
Epoch 27, CIFAR-10 Batch 1: Loss: 0.0240 - Accuracy: 0.603400
Epoch 28, CIFAR-10 Batch 1: Loss: 0.0268 - Accuracy: 0.602800
Epoch 29, CIFAR-10 Batch 1: Loss: 0.0233 - Accuracy: 0.599600
Epoch 30, CIFAR-10 Batch 1: Loss: 0.0156 - Accuracy: 0.588000
Epoch 31, CIFAR-10 Batch 1: Loss: 0.0114 - Accuracy: 0.598400
Epoch 32, CIFAR-10 Batch 1: Loss: 0.0129 - Accuracy: 0.612600
###Markdown
Fully Train the ModelNow that you got a good accuracy with a single CIFAR-10 batch, try it with all five batches.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
save_model_path = './image_classification'
print('Training...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
# Loop over all batches
n_batches = 5
for batch_i in range(1, n_batches + 1):
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
# Save Model
saver = tf.train.Saver()
save_path = saver.save(sess, save_model_path)
###Output
Training...
Epoch 1, CIFAR-10 Batch 1: Loss: 2.0677 - Accuracy: 0.311400
Epoch 1, CIFAR-10 Batch 2: Loss: 1.7245 - Accuracy: 0.402600
Epoch 1, CIFAR-10 Batch 3: Loss: 1.4318 - Accuracy: 0.420800
Epoch 1, CIFAR-10 Batch 4: Loss: 1.5511 - Accuracy: 0.464200
Epoch 1, CIFAR-10 Batch 5: Loss: 1.4719 - Accuracy: 0.496600
Epoch 2, CIFAR-10 Batch 1: Loss: 1.4694 - Accuracy: 0.518600
Epoch 2, CIFAR-10 Batch 2: Loss: 1.2601 - Accuracy: 0.535600
Epoch 2, CIFAR-10 Batch 3: Loss: 1.1251 - Accuracy: 0.533000
Epoch 2, CIFAR-10 Batch 4: Loss: 1.1841 - Accuracy: 0.553200
Epoch 2, CIFAR-10 Batch 5: Loss: 1.1747 - Accuracy: 0.547600
Epoch 3, CIFAR-10 Batch 1: Loss: 1.2612 - Accuracy: 0.571800
Epoch 3, CIFAR-10 Batch 2: Loss: 0.9819 - Accuracy: 0.587200
Epoch 3, CIFAR-10 Batch 3: Loss: 0.9724 - Accuracy: 0.588000
Epoch 3, CIFAR-10 Batch 4: Loss: 0.9866 - Accuracy: 0.601000
Epoch 3, CIFAR-10 Batch 5: Loss: 0.9802 - Accuracy: 0.596200
Epoch 4, CIFAR-10 Batch 1: Loss: 0.9956 - Accuracy: 0.610400
Epoch 4, CIFAR-10 Batch 2: Loss: 0.8450 - Accuracy: 0.620600
Epoch 4, CIFAR-10 Batch 3: Loss: 0.7585 - Accuracy: 0.621600
Epoch 4, CIFAR-10 Batch 4: Loss: 0.8542 - Accuracy: 0.630800
Epoch 4, CIFAR-10 Batch 5: Loss: 0.8406 - Accuracy: 0.620800
Epoch 5, CIFAR-10 Batch 1: Loss: 0.8496 - Accuracy: 0.635000
Epoch 5, CIFAR-10 Batch 2: Loss: 0.7355 - Accuracy: 0.642000
Epoch 5, CIFAR-10 Batch 3: Loss: 0.6310 - Accuracy: 0.646200
Epoch 5, CIFAR-10 Batch 4: Loss: 0.7697 - Accuracy: 0.641400
Epoch 5, CIFAR-10 Batch 5: Loss: 0.7102 - Accuracy: 0.647000
Epoch 6, CIFAR-10 Batch 1: Loss: 0.7060 - Accuracy: 0.651000
Epoch 6, CIFAR-10 Batch 2: Loss: 0.6259 - Accuracy: 0.660200
Epoch 6, CIFAR-10 Batch 3: Loss: 0.5222 - Accuracy: 0.661000
Epoch 6, CIFAR-10 Batch 4: Loss: 0.5999 - Accuracy: 0.667200
Epoch 6, CIFAR-10 Batch 5: Loss: 0.5839 - Accuracy: 0.662200
Epoch 7, CIFAR-10 Batch 1: Loss: 0.5550 - Accuracy: 0.652200
Epoch 7, CIFAR-10 Batch 2: Loss: 0.4778 - Accuracy: 0.677000
Epoch 7, CIFAR-10 Batch 3: Loss: 0.4647 - Accuracy: 0.674200
Epoch 7, CIFAR-10 Batch 4: Loss: 0.5362 - Accuracy: 0.678400
Epoch 7, CIFAR-10 Batch 5: Loss: 0.4623 - Accuracy: 0.680800
Epoch 8, CIFAR-10 Batch 1: Loss: 0.5157 - Accuracy: 0.681400
Epoch 8, CIFAR-10 Batch 2: Loss: 0.4495 - Accuracy: 0.683200
Epoch 8, CIFAR-10 Batch 3: Loss: 0.4124 - Accuracy: 0.679400
Epoch 8, CIFAR-10 Batch 4: Loss: 0.4186 - Accuracy: 0.685200
Epoch 8, CIFAR-10 Batch 5: Loss: 0.3829 - Accuracy: 0.682200
Epoch 9, CIFAR-10 Batch 1: Loss: 0.4372 - Accuracy: 0.679600
Epoch 9, CIFAR-10 Batch 2: Loss: 0.3635 - Accuracy: 0.696800
Epoch 9, CIFAR-10 Batch 3: Loss: 0.2975 - Accuracy: 0.700800
Epoch 9, CIFAR-10 Batch 4: Loss: 0.3321 - Accuracy: 0.686800
Epoch 9, CIFAR-10 Batch 5: Loss: 0.3316 - Accuracy: 0.691600
Epoch 10, CIFAR-10 Batch 1: Loss: 0.3428 - Accuracy: 0.691000
Epoch 10, CIFAR-10 Batch 2: Loss: 0.2786 - Accuracy: 0.706000
Epoch 10, CIFAR-10 Batch 3: Loss: 0.2630 - Accuracy: 0.694800
Epoch 10, CIFAR-10 Batch 4: Loss: 0.2695 - Accuracy: 0.699400
Epoch 10, CIFAR-10 Batch 5: Loss: 0.2607 - Accuracy: 0.700800
Epoch 11, CIFAR-10 Batch 1: Loss: 0.3601 - Accuracy: 0.696400
Epoch 11, CIFAR-10 Batch 2: Loss: 0.2282 - Accuracy: 0.711400
Epoch 11, CIFAR-10 Batch 3: Loss: 0.2286 - Accuracy: 0.696600
Epoch 11, CIFAR-10 Batch 4: Loss: 0.2239 - Accuracy: 0.704200
Epoch 11, CIFAR-10 Batch 5: Loss: 0.2043 - Accuracy: 0.708600
Epoch 12, CIFAR-10 Batch 1: Loss: 0.2466 - Accuracy: 0.706000
Epoch 12, CIFAR-10 Batch 2: Loss: 0.1740 - Accuracy: 0.714200
Epoch 12, CIFAR-10 Batch 3: Loss: 0.1767 - Accuracy: 0.710200
Epoch 12, CIFAR-10 Batch 4: Loss: 0.1905 - Accuracy: 0.706800
Epoch 12, CIFAR-10 Batch 5: Loss: 0.1731 - Accuracy: 0.699000
Epoch 13, CIFAR-10 Batch 1: Loss: 0.1994 - Accuracy: 0.712400
Epoch 13, CIFAR-10 Batch 2: Loss: 0.1500 - Accuracy: 0.715800
Epoch 13, CIFAR-10 Batch 3: Loss: 0.1488 - Accuracy: 0.705000
Epoch 13, CIFAR-10 Batch 4: Loss: 0.1454 - Accuracy: 0.718000
Epoch 13, CIFAR-10 Batch 5: Loss: 0.1340 - Accuracy: 0.711800
Epoch 14, CIFAR-10 Batch 1: Loss: 0.2105 - Accuracy: 0.704400
Epoch 14, CIFAR-10 Batch 2: Loss: 0.1362 - Accuracy: 0.717600
Epoch 14, CIFAR-10 Batch 3: Loss: 0.1576 - Accuracy: 0.689400
Epoch 14, CIFAR-10 Batch 4: Loss: 0.1753 - Accuracy: 0.704600
Epoch 14, CIFAR-10 Batch 5: Loss: 0.1139 - Accuracy: 0.711200
Epoch 15, CIFAR-10 Batch 1: Loss: 0.1758 - Accuracy: 0.713800
Epoch 15, CIFAR-10 Batch 2: Loss: 0.1225 - Accuracy: 0.717000
Epoch 15, CIFAR-10 Batch 3: Loss: 0.1146 - Accuracy: 0.715000
Epoch 15, CIFAR-10 Batch 4: Loss: 0.1204 - Accuracy: 0.713800
Epoch 15, CIFAR-10 Batch 5: Loss: 0.0880 - Accuracy: 0.710600
Epoch 16, CIFAR-10 Batch 1: Loss: 0.1384 - Accuracy: 0.717600
Epoch 16, CIFAR-10 Batch 2: Loss: 0.0878 - Accuracy: 0.721000
Epoch 16, CIFAR-10 Batch 3: Loss: 0.1028 - Accuracy: 0.694800
Epoch 16, CIFAR-10 Batch 4: Loss: 0.0907 - Accuracy: 0.720600
Epoch 16, CIFAR-10 Batch 5: Loss: 0.0521 - Accuracy: 0.710600
Epoch 17, CIFAR-10 Batch 1: Loss: 0.0953 - Accuracy: 0.712200
Epoch 17, CIFAR-10 Batch 2: Loss: 0.0874 - Accuracy: 0.712000
Epoch 17, CIFAR-10 Batch 3: Loss: 0.0696 - Accuracy: 0.694000
Epoch 17, CIFAR-10 Batch 4: Loss: 0.0716 - Accuracy: 0.720200
Epoch 17, CIFAR-10 Batch 5: Loss: 0.0440 - Accuracy: 0.710800
Epoch 18, CIFAR-10 Batch 1: Loss: 0.0784 - Accuracy: 0.711200
Epoch 18, CIFAR-10 Batch 2: Loss: 0.0705 - Accuracy: 0.716600
Epoch 18, CIFAR-10 Batch 3: Loss: 0.0509 - Accuracy: 0.701800
Epoch 18, CIFAR-10 Batch 4: Loss: 0.0506 - Accuracy: 0.721600
Epoch 18, CIFAR-10 Batch 5: Loss: 0.0443 - Accuracy: 0.710200
Epoch 19, CIFAR-10 Batch 1: Loss: 0.0765 - Accuracy: 0.704600
Epoch 19, CIFAR-10 Batch 2: Loss: 0.0550 - Accuracy: 0.711000
Epoch 19, CIFAR-10 Batch 3: Loss: 0.0413 - Accuracy: 0.710400
Epoch 19, CIFAR-10 Batch 4: Loss: 0.0439 - Accuracy: 0.708600
Epoch 19, CIFAR-10 Batch 5: Loss: 0.0328 - Accuracy: 0.718200
Epoch 20, CIFAR-10 Batch 1: Loss: 0.0691 - Accuracy: 0.703600
Epoch 20, CIFAR-10 Batch 2: Loss: 0.0870 - Accuracy: 0.707800
Epoch 20, CIFAR-10 Batch 3: Loss: 0.0321 - Accuracy: 0.712600
Epoch 20, CIFAR-10 Batch 4: Loss: 0.0466 - Accuracy: 0.710600
Epoch 20, CIFAR-10 Batch 5: Loss: 0.0228 - Accuracy: 0.717600
Epoch 21, CIFAR-10 Batch 1: Loss: 0.0619 - Accuracy: 0.717200
Epoch 21, CIFAR-10 Batch 2: Loss: 0.0357 - Accuracy: 0.722400
Epoch 21, CIFAR-10 Batch 3: Loss: 0.0323 - Accuracy: 0.704800
Epoch 21, CIFAR-10 Batch 4: Loss: 0.0329 - Accuracy: 0.712200
Epoch 21, CIFAR-10 Batch 5: Loss: 0.0181 - Accuracy: 0.716400
Epoch 22, CIFAR-10 Batch 1: Loss: 0.0419 - Accuracy: 0.716400
Epoch 22, CIFAR-10 Batch 2: Loss: 0.0306 - Accuracy: 0.725000
Epoch 22, CIFAR-10 Batch 3: Loss: 0.0265 - Accuracy: 0.712800
Epoch 22, CIFAR-10 Batch 4: Loss: 0.0239 - Accuracy: 0.712200
Epoch 22, CIFAR-10 Batch 5: Loss: 0.0133 - Accuracy: 0.714000
Epoch 23, CIFAR-10 Batch 1: Loss: 0.0388 - Accuracy: 0.713200
Epoch 23, CIFAR-10 Batch 2: Loss: 0.0222 - Accuracy: 0.710000
Epoch 23, CIFAR-10 Batch 3: Loss: 0.0185 - Accuracy: 0.724200
Epoch 23, CIFAR-10 Batch 4: Loss: 0.0201 - Accuracy: 0.708400
Epoch 23, CIFAR-10 Batch 5: Loss: 0.0162 - Accuracy: 0.714200
Epoch 24, CIFAR-10 Batch 1: Loss: 0.0400 - Accuracy: 0.715400
Epoch 24, CIFAR-10 Batch 2: Loss: 0.0163 - Accuracy: 0.713400
Epoch 24, CIFAR-10 Batch 3: Loss: 0.0142 - Accuracy: 0.705800
Epoch 24, CIFAR-10 Batch 4: Loss: 0.0239 - Accuracy: 0.709000
Epoch 24, CIFAR-10 Batch 5: Loss: 0.0099 - Accuracy: 0.715400
Epoch 25, CIFAR-10 Batch 1: Loss: 0.0307 - Accuracy: 0.706000
Epoch 25, CIFAR-10 Batch 2: Loss: 0.0231 - Accuracy: 0.713800
Epoch 25, CIFAR-10 Batch 3: Loss: 0.0136 - Accuracy: 0.717000
Epoch 25, CIFAR-10 Batch 4: Loss: 0.0186 - Accuracy: 0.711200
Epoch 25, CIFAR-10 Batch 5: Loss: 0.0120 - Accuracy: 0.717400
Epoch 26, CIFAR-10 Batch 1: Loss: 0.0252 - Accuracy: 0.712600
Epoch 26, CIFAR-10 Batch 2: Loss: 0.0206 - Accuracy: 0.719600
Epoch 26, CIFAR-10 Batch 3: Loss: 0.0122 - Accuracy: 0.725200
Epoch 26, CIFAR-10 Batch 4: Loss: 0.0137 - Accuracy: 0.710200
Epoch 26, CIFAR-10 Batch 5: Loss: 0.0066 - Accuracy: 0.712000
Epoch 27, CIFAR-10 Batch 1: Loss: 0.0321 - Accuracy: 0.702800
Epoch 27, CIFAR-10 Batch 2: Loss: 0.0094 - Accuracy: 0.714200
Epoch 27, CIFAR-10 Batch 3: Loss: 0.0090 - Accuracy: 0.712200
Epoch 27, CIFAR-10 Batch 4: Loss: 0.0119 - Accuracy: 0.711200
Epoch 27, CIFAR-10 Batch 5: Loss: 0.0068 - Accuracy: 0.721600
Epoch 28, CIFAR-10 Batch 1: Loss: 0.0247 - Accuracy: 0.704400
Epoch 28, CIFAR-10 Batch 2: Loss: 0.0082 - Accuracy: 0.721000
Epoch 28, CIFAR-10 Batch 3: Loss: 0.0076 - Accuracy: 0.711200
Epoch 28, CIFAR-10 Batch 4: Loss: 0.0092 - Accuracy: 0.711200
Epoch 28, CIFAR-10 Batch 5: Loss: 0.0033 - Accuracy: 0.718000
Epoch 29, CIFAR-10 Batch 1: Loss: 0.0161 - Accuracy: 0.718600
Epoch 29, CIFAR-10 Batch 2: Loss: 0.0109 - Accuracy: 0.715600
Epoch 29, CIFAR-10 Batch 3: Loss: 0.0066 - Accuracy: 0.722600
Epoch 29, CIFAR-10 Batch 4: Loss: 0.0061 - Accuracy: 0.722200
Epoch 29, CIFAR-10 Batch 5: Loss: 0.0040 - Accuracy: 0.735000
Epoch 30, CIFAR-10 Batch 1: Loss: 0.0156 - Accuracy: 0.717400
Epoch 30, CIFAR-10 Batch 2: Loss: 0.0058 - Accuracy: 0.718000
Epoch 30, CIFAR-10 Batch 3: Loss: 0.0055 - Accuracy: 0.716800
Epoch 30, CIFAR-10 Batch 4: Loss: 0.0096 - Accuracy: 0.722800
Epoch 30, CIFAR-10 Batch 5: Loss: 0.0032 - Accuracy: 0.721800
Epoch 31, CIFAR-10 Batch 1: Loss: 0.0084 - Accuracy: 0.717000
Epoch 31, CIFAR-10 Batch 2: Loss: 0.0066 - Accuracy: 0.728600
Epoch 31, CIFAR-10 Batch 3: Loss: 0.0073 - Accuracy: 0.719800
Epoch 31, CIFAR-10 Batch 4: Loss: 0.0080 - Accuracy: 0.714600
Epoch 31, CIFAR-10 Batch 5: Loss: 0.0030 - Accuracy: 0.718600
Epoch 32, CIFAR-10 Batch 1: Loss: 0.0117 - Accuracy: 0.712000
Epoch 32, CIFAR-10 Batch 2: Loss: 0.0041 - Accuracy: 0.722000
Epoch 32, CIFAR-10 Batch 3: Loss: 0.0067 - Accuracy: 0.716000
Epoch 32, CIFAR-10 Batch 4: Loss: 0.0059 - Accuracy: 0.718400
Epoch 32, CIFAR-10 Batch 5: Loss: 0.0014 - Accuracy: 0.715400
###Markdown
CheckpointThe model has been saved to disk. Test ModelTest your model against the test dataset. This will be your final accuracy. You should have an accuracy greater than 50%. If you don't, keep tweaking the model architecture and parameters.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import tensorflow as tf
import pickle
import helper
import random
# Set batch size if not already set
try:
if batch_size:
pass
except NameError:
batch_size = 64
save_model_path = './image_classification'
n_samples = 4
top_n_predictions = 3
def test_model():
"""
Test the saved model against the test dataset
"""
test_features, test_labels = pickle.load(open('preprocess_training.p', mode='rb'))
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load model
loader = tf.train.import_meta_graph(save_model_path + '.meta')
loader.restore(sess, save_model_path)
# Get Tensors from loaded model
loaded_x = loaded_graph.get_tensor_by_name('x:0')
loaded_y = loaded_graph.get_tensor_by_name('y:0')
loaded_keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
loaded_logits = loaded_graph.get_tensor_by_name('logits:0')
loaded_acc = loaded_graph.get_tensor_by_name('accuracy:0')
# Get accuracy in batches for memory limitations
test_batch_acc_total = 0
test_batch_count = 0
for train_feature_batch, train_label_batch in helper.batch_features_labels(test_features, test_labels, batch_size):
test_batch_acc_total += sess.run(
loaded_acc,
feed_dict={loaded_x: train_feature_batch, loaded_y: train_label_batch, loaded_keep_prob: 1.0})
test_batch_count += 1
print('Testing Accuracy: {}\n'.format(test_batch_acc_total/test_batch_count))
# Print Random Samples
random_test_features, random_test_labels = tuple(zip(*random.sample(list(zip(test_features, test_labels)), n_samples)))
random_test_predictions = sess.run(
tf.nn.top_k(tf.nn.softmax(loaded_logits), top_n_predictions),
feed_dict={loaded_x: random_test_features, loaded_y: random_test_labels, loaded_keep_prob: 1.0})
helper.display_image_predictions(random_test_features, random_test_labels, random_test_predictions)
test_model()
###Output
Testing Accuracy: 0.7155854430379747
|
activities/class-notes/2019-03-07 Class Workbook - CSVs & oswalk.ipynb | ###Markdown
Creating a File Inventory
###Code
import os
from os.path import join, getsize, getctime
import csv
walk_this_directory = os.path.join('..','assets','Bundle-web-files-small')
print(walk_this_directory)
dir_list = os.listdir(walk_this_directory)
print(dir_list)
###Output
['.DS_Store', 'audio', 'image', 'pdf', 'presentation', 'video', 'web-files-small-metadata.csv']
###Markdown
Using os.walk()
###Code
for folderName, subfolders, filenames in os.walk(walk_this_directory):
# see what's here
print('the filenames are', filenames)
# get the information about each of the files:
for folderName, subfolders, filenames in os.walk(walk_this_directory):
for filename in filenames:
filename = filename
folder = folderName
path = os.path.join(folderName, filename)
size = os.path.getsize(path)
print('Found:',filename, size)
# let's set up a list for information about the file,
# and a list for the manifest
fileInfo = list()
manifestInfo = list()
for folderName, subfolders, filenames in os.walk(walk_this_directory):
for filename in filenames:
filename = filename
folder = folderName
path = os.path.join(folderName, filename)
size = os.path.getsize(path)
fileInfo = [filename, folder, path, size]
manifestInfo.append(fileInfo)
print(manifestInfo)
# let's set up a list for information about the file,
# and a list for the manifest
fileInfo = list()
manifestInfo = list()
# set up the csv header info
headers = ['filename', 'folder', 'path', 'size']
for folderName, subfolders, filenames in os.walk(walk_this_directory):
for filename in filenames:
filename = filename
folder = folderName
path = os.path.join(folderName, filename)
size = os.path.getsize(path)
fileInfo = [filename, folder, path, size]
manifestInfo.append(fileInfo)
#write out csv file
with open('file-manifest.csv', 'w') as fout:
writer = csv.writer(fout)
writer.writerow(headers)
for file in manifestInfo:
print(file)
writer.writerow(file)
print('Success!!!!!!')
###Output
['.DS_Store', '../assets/Bundle-web-files-small', '../assets/Bundle-web-files-small/.DS_Store', 6148]
['web-files-small-metadata.csv', '../assets/Bundle-web-files-small', '../assets/Bundle-web-files-small/web-files-small-metadata.csv', 9069]
['000727.ram', '../assets/Bundle-web-files-small/audio', '../assets/Bundle-web-files-small/audio/000727.ram', 79]
['11-3250JohnsonvFolinoEtAl.wma', '../assets/Bundle-web-files-small/audio', '../assets/Bundle-web-files-small/audio/11-3250JohnsonvFolinoEtAl.wma', 21423499]
['mj_telework_exchange_final_100710.mp3', '../assets/Bundle-web-files-small/audio', '../assets/Bundle-web-files-small/audio/mj_telework_exchange_final_100710.mp3', 3471488]
['NEWSLINE_802AF71F439D401585C6FCB02F358307.mp3', '../assets/Bundle-web-files-small/audio', '../assets/Bundle-web-files-small/audio/NEWSLINE_802AF71F439D401585C6FCB02F358307.mp3', 961195]
['1005107061.tif', '../assets/Bundle-web-files-small/image', '../assets/Bundle-web-files-small/image/1005107061.tif', 395734]
['13080t.jpg', '../assets/Bundle-web-files-small/image', '../assets/Bundle-web-files-small/image/13080t.jpg', 3764]
['k7989-7x.jpg', '../assets/Bundle-web-files-small/image', '../assets/Bundle-web-files-small/image/k7989-7x.jpg', 7864]
['m237a2f.gif', '../assets/Bundle-web-files-small/image', '../assets/Bundle-web-files-small/image/m237a2f.gif', 7376]
['orca.via_.moc_.noaa_.jpg', '../assets/Bundle-web-files-small/image', '../assets/Bundle-web-files-small/image/orca.via_.moc_.noaa_.jpg', 82546]
['01-1480.pdf', '../assets/Bundle-web-files-small/pdf', '../assets/Bundle-web-files-small/pdf/01-1480.pdf', 49088]
['Chapter03.pdf', '../assets/Bundle-web-files-small/pdf', '../assets/Bundle-web-files-small/pdf/Chapter03.pdf', 51919]
['file.pdf', '../assets/Bundle-web-files-small/pdf', '../assets/Bundle-web-files-small/pdf/file.pdf', 1538]
['HR2021 commtext.pdf', '../assets/Bundle-web-files-small/pdf', '../assets/Bundle-web-files-small/pdf/HR2021 commtext.pdf', 36305]
['PFCHEJ.pdf', '../assets/Bundle-web-files-small/pdf', '../assets/Bundle-web-files-small/pdf/PFCHEJ.pdf', 10577]
['ADAEMPLOYMENTTaxIncentives.ppt', '../assets/Bundle-web-files-small/presentation', '../assets/Bundle-web-files-small/presentation/ADAEMPLOYMENTTaxIncentives.ppt', 137216]
['BudgetandGrants012710.ppt', '../assets/Bundle-web-files-small/presentation', '../assets/Bundle-web-files-small/presentation/BudgetandGrants012710.ppt', 85504]
['Non-FTE-Trainee-Activities-060109.ppt', '../assets/Bundle-web-files-small/presentation', '../assets/Bundle-web-files-small/presentation/Non-FTE-Trainee-Activities-060109.ppt', 67072]
['04-04-21full.asf', '../assets/Bundle-web-files-small/video', '../assets/Bundle-web-files-small/video/04-04-21full.asf', 101]
['glmp_cig.EQ.wm.p20.t12z', '../assets/Bundle-web-files-small/video', '../assets/Bundle-web-files-small/video/glmp_cig.EQ.wm.p20.t12z', 8296]
['oct17cc.asx', '../assets/Bundle-web-files-small/video', '../assets/Bundle-web-files-small/video/oct17cc.asx', 106945]
['vlwhcssc.asx', '../assets/Bundle-web-files-small/video', '../assets/Bundle-web-files-small/video/vlwhcssc.asx', 364]
Success!!!!!!
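A small variation on the same manifest, shown only as a design alternative: `csv.DictWriter` writes each row from a dictionary, so the header and the values stay tied together by name rather than by position. The output filename here is hypothetical so it does not overwrite the file created above.

```python
# Alternative sketch: build the same manifest with csv.DictWriter.
import os
import csv

rows = []
for folderName, subfolders, filenames in os.walk(walk_this_directory):
    for filename in filenames:
        path = os.path.join(folderName, filename)
        rows.append({'filename': filename,
                     'folder': folderName,
                     'path': path,
                     'size': os.path.getsize(path)})

with open('file-manifest-dictwriter.csv', 'w', newline='') as fout:
    writer = csv.DictWriter(fout, fieldnames=['filename', 'folder', 'path', 'size'])
    writer.writeheader()
    writer.writerows(rows)
```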
|
SuperStore.ipynb | ###Markdown
Global SuperStore - 2016Source: https://www.kaggle.com/shekpaul/global-superstoreThe dataset is not clean, so first we need to inspect it and clean the columns and/or rows as necessary. After cleaning the data, let's work through the questions below. Questions1. In total, how many orders have crossed a shipping cost of 500?2. Count the number of segments, countries, regions, markets, categories, and sub-categories present in the global_superstore_2016 data.3. Get the list of Order IDs where Indian customers have bought items under the category 'Technology' after paying a Shipping Cost of more than 500. 4. Get the list of Order IDs where Indian customers have bought items under the category 'Technology' with Sales greater than 500.5. How many people from the State 'Karnataka' have bought items under the category 'Technology'?6. Get the list of countries where the 'Profit' and 'Shipping Cost' are greater than or equal to 2000 and 300, respectively.7. Find the list of Indian states where people have purchased items under the category 'Technology'.8. Find the overall rank of 'India' where the 'Profit' is maximum under the category 'Technology'.9. Display the min, max, average and std of 'Profit' & 'Sales' for each Sub-Category under each Category.
###Code
import numpy as np
import pandas as pd
import re
###Output
_____no_output_____
###Markdown
Data extraction
###Code
df = pd.read_csv('Files/global_superstore_2016.csv')
df.info()
df.head(3)
###Output
_____no_output_____
###Markdown
Data cleansingThe date columns are not stored in one consistent format, so we need to convert them to datetime type. The Sales and Profit columns contain special symbols that must be removed before converting them to float type
###Code
# when we use parse_dates in read_csv/read_excel the conversion is automatic -> parse_dates=['Ship Date','Order Date']
# converting each column to datetime
df['Ship Date'] = pd.to_datetime(df['Ship Date'])
df['Order Date'] = pd.to_datetime(df['Order Date'])
# check if it is ok
df[['Ship Date','Order Date']]
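# An equivalent one-step sketch (same file path and column names as above): read_csv can parse the
# date columns directly via parse_dates; if the formats are mixed, dayfirst/format tweaks may be needed.
df_dates_check = pd.read_csv('Files/global_superstore_2016.csv', parse_dates=['Ship Date', 'Order Date'])
df_dates_check[['Ship Date', 'Order Date']].dtypes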
print(df['Sales'])
# remove the symbols on column Sales before casting to float
# note: '$', '(' and ')' are regex metacharacters, so we replace them literally with regex=False
df['Sales'] = df['Sales'].str.replace('$', '', regex=False)
df['Sales'] = df['Sales'].str.replace(',', '', regex=False)
df['Sales'] = df['Sales'].str.replace(')', '', regex=False)
df['Sales'] = df['Sales'].str.replace('(', '-', regex=False)
df['Sales'] = df['Sales'].astype(np.dtype('float64'))
# remove the same symbols on column Profit before casting to float
df['Profit'] = df['Profit'].str.replace('$', '', regex=False)
df['Profit'] = df['Profit'].str.replace(',', '', regex=False)
df['Profit'] = df['Profit'].str.replace(')', '', regex=False)
# if the Profit is between ( ) it is negative
df['Profit'] = df['Profit'].str.replace('(', '-', regex=False)
df['Profit'] = df['Profit'].astype(np.dtype('float64'))
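# A reusable sketch of the same cleanup: a helper that strips '$', ',' and ')' and turns a leading
# '(' into a minus sign. It would be applied to a freshly loaded frame, e.g. parse_currency(raw['Sales']).
def parse_currency(series):
    cleaned = (series.astype(str)
                     .str.replace(r'[$,)]', '', regex=True)
                     .str.replace('(', '-', regex=False))
    return cleaned.astype('float64')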
df.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 51290 entries, 0 to 51289
Data columns (total 24 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 Row ID 51290 non-null int64
1 Order ID 51290 non-null object
2 Order Date 51290 non-null datetime64[ns]
3 Ship Date 51290 non-null datetime64[ns]
4 Ship Mode 51290 non-null object
5 Customer ID 51290 non-null object
6 Customer Name 51290 non-null object
7 Segment 51290 non-null object
8 Postal Code 9994 non-null float64
9 City 51290 non-null object
10 State 51290 non-null object
11 Country 51290 non-null object
12 Region 51290 non-null object
13 Market 51290 non-null object
14 Product ID 51290 non-null object
15 Category 51290 non-null object
16 Sub-Category 51290 non-null object
17 Product Name 51290 non-null object
18 Sales 51290 non-null float64
19 Quantity 51290 non-null int64
20 Discount 51290 non-null float64
21 Profit 51290 non-null float64
22 Shipping Cost 51290 non-null float64
23 Order Priority 51290 non-null object
dtypes: datetime64[ns](2), float64(5), int64(2), object(15)
memory usage: 9.4+ MB
###Markdown
Answering... 1. In total, how many orders have crossed the shipping cost of 500?
###Code
df['Order ID'][(df['Shipping Cost'] > 500)].count()
###Output
_____no_output_____
###Markdown
2. Count the number of segments, countries, regions, markets, categories, and sub-categories present in the global_superstore_2016 data.
###Code
# shows per group
df[['Segment', 'Country', 'Region', 'Market', 'Category', 'Sub-Category']].value_counts()
# shows individually
len(df['Country'].value_counts())
# or
len(df['Country'].unique())
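# A compact sketch for question 2: nunique() returns the distinct count for several columns at once.
df[['Segment', 'Country', 'Region', 'Market', 'Category', 'Sub-Category']].nunique()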
###Output
_____no_output_____
###Markdown
3. Get the list of Order IDs where the Indian customers have bought the things under the category 'Technology' after paying a Shipping Cost of more than 500.
###Code
list(df['Order ID'][(df['Country'] == 'India') & (df['Category'] == 'Technology') & (df['Shipping Cost'] > 500)])
###Output
_____no_output_____
###Markdown
4. Get the list of Order IDs where the Indian customers have bought the things under the category 'Technology' with Sales greater than 500.
###Code
# let's show the first 5...
list(df['Order ID'][(df['Country'] == 'India') & (df['Category'] == 'Technology') & (df['Sales'] > 500)].head(5))
###Output
_____no_output_____
###Markdown
5. How many people from the State 'Karnataka' have bought the things under the category 'Technology'?
###Code
len(df[(df['State'] == 'Karnataka') & (df['Category'] == 'Technology')])
###Output
_____no_output_____
###Markdown
6. Get the list of countries where the 'Profit' and 'Shipping Cost' are greater than or equal to 2000 and 300 respectively.
###Code
list(df['Country'][(df['Profit'] >= 2000) & (df['Shipping Cost'] >= 300)].unique())
###Output
_____no_output_____
###Markdown
7. Find the list of Indian states where the people have purchased the things under the category Technology.
###Code
list(df['State'][(df['Country'] == 'India') & (df['Category'] == 'Technology')].unique())
###Output
_____no_output_____
###Markdown
8. Find the overall rank of "India" where the 'Profit' is maximum under the category 'Technology'.
###Code
# check the sum of profit per Country and show the top 5 ranking
df_rank = df[['Country','Profit']][df['Category'] == 'Technology'].groupby('Country').sum()
df_rank.sort_values(by='Profit', ascending=False).head(5)
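# A sketch that returns India's overall position directly (assuming 'India' appears in the data):
# rank the summed Technology profit in descending order and look up India in the result.
profit_rank = df_rank['Profit'].rank(ascending=False)
print('Overall rank of India:', int(profit_rank.loc['India']))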
###Output
_____no_output_____
###Markdown
9. Display the data with min, max, average and std of 'Profit' & 'Sales' for each Sub-Category under each Category
###Code
df_min_max = df[['Category','Sub-Category','Profit','Sales']].groupby(['Category','Sub-Category']).agg(['min','max','mean','std'])
df_min_max
###Output
_____no_output_____
###Markdown
Visualisation
###Code
import matplotlib.pyplot as plt
plt.figure(figsize=(20,8))
# shows the top countries by profit
plt.title('Top 10 Countries per Profit', fontsize=20)
df_top_profit = df[['Country','Profit']].groupby(['Country']).sum()
df_top_profit.sort_values(by='Profit', ascending=False, inplace=True)
df_top_profit = df_top_profit.head(10)
plt.bar(df_top_profit.index, df_top_profit['Profit'])
plt.xticks(rotation=45)
plt.show()
# shows the top category per sales and profit
df_top_profit = df[['Category','Profit','Sales']].groupby(['Category']).sum()
df_top_profit.sort_values(by='Sales', ascending=False, inplace=True)
df_top_profit = df_top_profit
x = df_top_profit.index
# to put the bars side by side we need to specify the width of each bar
x_indexes = np.arange(len(x))
width = 0.3
y1 = df_top_profit['Profit']
y2 = df_top_profit['Sales']
fig, ax = plt.subplots(nrows=1, ncols=1, sharex=True, figsize=(20,8))  # with nrows=ncols=1 this returns a single Axes, not a list
ax.ticklabel_format(style='plain')
ax.set_title('Total Sales and Profit per Category', fontsize=20)
ax.bar(x_indexes+width,y2, color='green',label='Sales',width=width)
ax.bar(x_indexes,y1, color="blue",label='Profit',width=width)
ax.legend(fontsize=14)
plt.xticks(ticks=x_indexes, labels=x, fontsize=14)
plt.show()
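# A shorter sketch of the same grouped-bar chart: DataFrame.plot() places the two columns side by
# side automatically, so the manual x_indexes/width bookkeeping above is optional.
ax_alt = df_top_profit[['Profit', 'Sales']].plot(kind='bar', figsize=(20, 8), color=['blue', 'green'])
ax_alt.ticklabel_format(style='plain', axis='y')
ax_alt.set_title('Total Sales and Profit per Category', fontsize=20)
plt.show()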
###Output
_____no_output_____ |
assignment_01_solutions.ipynb | ###Markdown
Encoding a text Let's start our first small project. Imagine that you want to send a secret message. Therefore, you want to use a very simple encoding method called [Caesar Cipher](https://en.wikipedia.org/wiki/Caesar_cipher). Let's decide which characters we are allowed to use and which message we want to send.
###Code
alphabet = list("abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ .") # all characters we want to use
text_clear = "Mission accomplished. Meeting point is Mailand. I will wear a black coat." # message we want to send
###Output
_____no_output_____
###Markdown
Now, to get an encoded alphabet we need to shift our list by a certain number of characters. For this we can use a combination of __pop()__ and __append()__.Run the cell below and try to understand what is happening. Right now the list is shifted by 3 characters. To increase the security level, modify the cell so that it shifts the list by the length of the text you want to encode.
###Code
alphabet_encoded = list("abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ .")
for i in range(3):
alphabet_encoded.append(alphabet_encoded.pop(0))
print(alphabet_encoded)
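# A sketch of the suggested modification: shift by the length of the message instead of 3.
# Slicing does the rotation in one step; the modulo keeps the shift inside the alphabet length.
shift = len(text_clear) % len(alphabet)
alphabet_encoded_by_length = alphabet[shift:] + alphabet[:shift]
print(alphabet_encoded_by_length)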
###Output
['d', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm', 'n', 'o', 'p', 'q', 'r', 's', 't', 'u', 'v', 'w', 'x', 'y', 'z', 'A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J', 'K', 'L', 'M', 'N', 'O', 'P', 'Q', 'R', 'S', 'T', 'U', 'V', 'W', 'X', 'Y', 'Z', ' ', '.', 'a', 'b', 'c']
###Markdown
In the next step, we need to map the alphabet to its encoded version, by using a dictionary.
###Code
encoder = dict(zip(alphabet, alphabet_encoded))
encoder
###Output
_____no_output_____
###Markdown
Finally, we write our secret message. Starting with an empty string, we go through our message and add an encoded letter to it.
###Code
text_encoded = ""
for letter in text_clear:
text_encoded += encoder[letter]
print(text_clear)
print(text_encoded)
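# An equivalent sketch using str.translate: build one translation table from the two alphabets
# and let Python do the per-character lookup (same output as the encoder dictionary above).
trans_table = str.maketrans(''.join(alphabet), ''.join(alphabet_encoded))
print(text_clear.translate(trans_table))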
###Output
Mission accomplished. Meeting point is Mailand. I will wear a black coat.
PlvvlrqbdffrpsolvkhgcbPhhwlqjbsrlqwblvbPdlodqgcbLbzloobzhdubdbeodfnbfrdwc
###Markdown
Decode the message.
###Code
#@solution
decoder = dict(zip(alphabet_encoded, alphabet))
#@solution
text_decoded = ""
for letter in text_encoded:
text_decoded += decoder[letter]
#@solution
print(text_decoded)
###Output
Mission accomplished. Meeting point is Mailand. I will wear a black coat.
|
lessons/ETLPipelines/18_final_exercise/18_final_exercise.ipynb | ###Markdown
Final Exercise - Putting it All TogetherIn this last exercise, you'll write a full ETL pipeline for the GDP data. That means you'll extract the World Bank data, transform the data, and load the data all in one go. In other words, you'll want one Python script that can do the entire process.Why would you want to do this? Imagine working for a company that creates new data every day. As new data comes in, you'll want to write software that periodically and automatically extracts, transforms, and loads the data.To give you a sense for what this is like, you'll extract the GDP data one line at a time. You'll then transform that line of data and load the results into a SQLite database. The code in this exercise is somewhat tricky.Here is an explanation of how this Jupyter notebook is organized:1. The first cell connects to a SQLite database called worldbank.db and creates a table to hold the gdp data. You do not need to do anything in this code cell other than executing the cell.2. The second cell has a function called extract_lines(). You don't need to do anything in this code cell either besides executing the cell. This function is a [Python generator](https://wiki.python.org/moin/Generators). You don't need to understand how this works in order to complete the exercise. Essentially, a generator is like a regular function except instead of a return statement, a generator has a yield statement. Generators allow you to use functions in a for loop. In essence, this function will allow you to read in a data file one line at a time, run a transformation on that row of data, and then move on to the next row in the file.3. The third cell contains a function called transform_indicator_data(). This function receives a line from the csv file and transforms the data in preparation for a load step.4. The fourth cell contains a function called load_indicator_data(), which loads the transformed data into the gdp table in the worldbank.db database.5. The fifth cell runs the ETL pipeline.6. The sixth cell runs a query against the database to make sure everything worked correctly.You'll need to modify the third and fourth cells.
###Code
# run this cell to create a database and a table, called gdp, to hold the gdp data
# You do not need to change anything in this code cell
import sqlite3
# connect to the database
# the database file will be worldbank.db
# note that sqlite3 will create this database file if it does not exist already
conn = sqlite3.connect('worldbank.db')
# get a cursor
cur = conn.cursor()
# drop the test table in case it already exists
cur.execute("DROP TABLE IF EXISTS gdp")
# create the test table including project_id as a primary key
cur.execute("CREATE TABLE gdp (countryname TEXT, countrycode TEXT, year INTEGER, gdp REAL, PRIMARY KEY (countrycode, year));")
conn.commit()
conn.close()
# Generator for reading in one line at a time
# generators are useful for data sets that are too large to fit in RAM
# You do not need to change anything in this code cell
def extract_lines(file):
while True:
line = file.readline()
if not line:
break
yield line
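# Side note (a sketch, not required for the exercise): pandas offers a similar streaming pattern with
# read_csv(chunksize=...). skiprows=4 is an assumption based on the "first four lines of the csv file
# are not data" check used later in this notebook.
import pandas as pd
def extract_chunks(path, chunksize=1000):
    # yields DataFrames of up to `chunksize` rows at a time instead of raw text lines
    for chunk in pd.read_csv(path, skiprows=4, chunksize=chunksize):
        yield chunk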
# TODO: fill out the code wherever you find a TODO in this cell
# This function has two inputs:
# data, which is a row of data from the gdp csv file
# colnames, which is a list of column names from the csv file
# The output should be a list of [countryname, countrycode, year, gdp] values
# In other words, the output would look like:
# [[Aruba, ABW, 1994, 1.330168e+09], [Aruba, ABW, 1995, 1.320670e+09], ...]
#
import pandas as pd
import numpy as np
import sqlite3
# transform the indicator data
def transform_indicator_data(data, colnames):
# get rid of quote marks
for i, datum in enumerate(data):
data[i] = datum.replace('"','')
# TODO: the data variable contains a list of data in the form [countryname, countrycode, 1960, 1961, 1962,...]
# since this is the format of the data in the csv file. Extract the countryname from the list
# and put the result in the country variable
country = data[0]
# these are "countryname" values that are not actually countries
non_countries = ['World',
'High income',
'OECD members',
'Post-demographic dividend',
'IDA & IBRD total',
'Low & middle income',
'Middle income',
'IBRD only',
'East Asia & Pacific',
'Europe & Central Asia',
'North America',
'Upper middle income',
'Late-demographic dividend',
'European Union',
'East Asia & Pacific (excluding high income)',
'East Asia & Pacific (IDA & IBRD countries)',
'Euro area',
'Early-demographic dividend',
'Lower middle income',
'Latin America & Caribbean',
'Latin America & the Caribbean (IDA & IBRD countries)',
'Latin America & Caribbean (excluding high income)',
'Europe & Central Asia (IDA & IBRD countries)',
'Middle East & North Africa',
'Europe & Central Asia (excluding high income)',
'South Asia (IDA & IBRD)',
'South Asia',
'Arab World',
'IDA total',
'Sub-Saharan Africa',
'Sub-Saharan Africa (IDA & IBRD countries)',
'Sub-Saharan Africa (excluding high income)',
'Middle East & North Africa (excluding high income)',
'Middle East & North Africa (IDA & IBRD countries)',
'Central Europe and the Baltics',
'Pre-demographic dividend',
'IDA only',
'Least developed countries: UN classification',
'IDA blend',
'Fragile and conflict affected situations',
'Heavily indebted poor countries (HIPC)',
'Low income',
'Small states',
'Other small states',
'Not classified',
'Caribbean small states',
'Pacific island small states']
# filter out country name values that are in the above list
if country not in non_countries:
# In this section, you'll convert the single row of data into a data frame
# The advantage of converting a single row of data into a data frame is that you can
# re-use code from earlier in the lesson to clean the data
# TODO: convert the data variable into a numpy array
# Use the ndmin=2 option
data_array = np.array(data, ndmin=2)
# TODO: reshape the data_array so that it is one row and 63 columns
data_array = data_array.reshape((1, -1))
# TODO: convert the data_array variable into a pandas dataframe
# Note that you can specify the column names as well using the colnames variable
# Also, replace all empty strings in the dataframe with nan (HINT: Use the replace module and np.nan)
df = pd.DataFrame(data=data_array, columns=colnames).replace('', np.nan)
# TODO: Drop the 'Indicator Name' and 'Indicator Code' columns
df = df.drop(['Indicator Name', 'Indicator Code'], axis=1)
# TODO: Reshape the data sets so that they are in long format
# The id_vars should be Country Name and Country Code
# You can name the variable column year and the value column gdp
# HINT: Use the pandas melt() method
# HINT: This was already done in a previous exercise
df_melt = df.melt(id_vars=['Country Name', 'Country Code'], var_name='year', value_name='gdp')
# TODO: Iterate through the rows in df_melt
# For each row, extract the country, countrycode, year, and gdp values into a list like this:
# [country, countrycode, year, gdp]
# If the gdp value is not null, append the row (in the form of a list) to the results variable
# Finally, return the results list after iterating through the df_melt data
# HINT: the iterrows() method would be useful
# HINT: to check if gdp is equal to nan, you might want to convert gdp to a string and compare to the
        #    string 'nan'
results = []
for _, row in df_melt.iterrows():
            try:
                # use float() (not int()) so values like '1.330168e+09' are kept;
                # empty cells became NaN above and are filtered out by the isnan check
                gdp_value = float(row['gdp'])
                if not np.isnan(gdp_value):
                    results.append(row.to_list())
            except (ValueError, TypeError):
                pass
return results
# TODO: fill out the code wherever you find a TODO in this cell
# This function loads data into the gdp table of the worldbank.db database
# The input is a list of data outputted from the transformation step that looks like this:
# [[Aruba, ABW, 1994, 1.330168e+09], [Aruba, ABW, 1995, 1.320670e+09], ...]
# The function does not return anything. Instead, the function iterates through the input and inserts each
# value into the gdp data set.
def load_indicator_data(results):
# TODO: connect to the worldbank.db database using the sqlite3 library
conn = sqlite3.connect('worldbank.db')
# TODO: create a cursor object
cur = conn.cursor()
if results:
# iterate through the results variable and insert each result into the gdp table
for result in results:
# TODO: extract the countryname, countrycode, year, and gdp from each iteration
countryname, countrycode, year, gdp = result
# TODO: prepare a query to insert a countryname, countrycode, year, gdp value
sql_string = f"""INSERT INTO gdp VALUES ("{countryname}", '{countrycode}', '{year}', '{gdp}');"""
# connect to database and execute query
try:
cur.execute(sql_string)
# print out any errors (like if the primary key constraint is violated)
except Exception as e:
print('error occurred:', e, result)
# commit changes and close the connection
conn.commit()
conn.close()
return None
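# A sketch of the same insert using sqlite3's parameterized queries instead of string formatting.
# The '?' placeholders take care of quoting (e.g. country names containing apostrophes) and avoid
# SQL injection; everything else mirrors load_indicator_data above.
def load_indicator_data_params(results):
    conn = sqlite3.connect('worldbank.db')
    cur = conn.cursor()
    for result in results or []:
        try:
            cur.execute("INSERT INTO gdp (countryname, countrycode, year, gdp) VALUES (?, ?, ?, ?)", result)
        except Exception as e:
            print('error occurred:', e, result)
    conn.commit()
    conn.close()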
# Execute this code cell to run the ETL pipeline
# You do not need to change anything in this cell
# open the data file
with open('../data/gdp_data.csv') as f:
# execute the generator to read in the file line by line
for line in extract_lines(f):
# split the comma separated values
data = line.split(',')
# check the length of the line because the first four lines of the csv file are not data
if len(data) == 63:
# check if the line represents column names
if data[0] == '"Country Name"':
colnames = []
# get rid of quote marks in the results to make the data easier to work with
for i, datum in enumerate(data):
colnames.append(datum.replace('"',''))
else:
# transform and load the line of indicator data
results = transform_indicator_data(data, colnames)
load_indicator_data(results)
# Execute this code cell to output the values in the gdp table
# You do not need to change anything in this cell
# connect to the database
# the database file will be worldbank.db
# note that sqlite3 will create this database file if it does not exist already
conn = sqlite3.connect('worldbank.db')
# get a cursor
cur = conn.cursor()
# create the test table including project_id as a primary key
df = pd.read_sql("SELECT * FROM gdp", con=conn)
conn.commit()
conn.close()
df
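# A quick follow-up sketch (it opens its own connection, since the one above is already closed):
# read_sql accepts parameters, so a single country can be pulled out for spot-checking, e.g. 'IND'.
with sqlite3.connect('worldbank.db') as conn_check:
    df_india = pd.read_sql("SELECT * FROM gdp WHERE countrycode = ?", conn_check, params=('IND',))
df_india.head()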
###Output
_____no_output_____
###Markdown
Final Exercise - Putting it All TogetherIn this last exercise, you'll write a full ETL pipeline for the GDP data. That means you'll extract the World Bank data, transform the data, and load the data all in one go. In other words, you'll want one Python script that can do the entire process.Why would you want to do this? Imagine working for a company that creates new data every day. As new data comes in, you'll want to write software that periodically and automatically extracts, transforms, and loads the data.To give you a sense for what this is like, you'll extract the GDP data one line at a time. You'll then transform that line of data and load the results into a SQLite database. The code in this exercise is somewhat tricky.Here is an explanation of how this Jupyter notebook is organized:1. The first cell connects to a SQLite database called worldbank.db and creates a table to hold the gdp data. You do not need to do anything in this code cell other than executing the cell.2. The second cell has a function called extract_lines(). You don't need to do anything in this code cell either besides executing the cell. This function is a [Python generator](https://wiki.python.org/moin/Generators). You don't need to understand how this works in order to complete the exercise. Essentially, a generator is like a regular function except instead of a return statement, a generator has a yield statement. Generators allow you to use functions in a for loop. In essence, this function will allow you to read in a data file one line at a time, run a transformation on that row of data, and then move on to the next row in the file.3. The third cell contains a function called transform_indicator_data(). This function receives a line from the csv file and transforms the data in preparation for a load step.4. The fourth cell contains a function called load_indicator_data(), which loads the transformed data into the gdp table in the worldbank.db database.5. The fifth cell runs the ETL pipeline.6. The sixth cell runs a query against the database to make sure everything worked correctly.You'll need to modify the third and fourth cells.
###Code
# run this cell to create a database and a table, called gdp, to hold the gdp data
# You do not need to change anything in this code cell
import sqlite3
# connect to the database
# the database file will be worldbank.db
# note that sqlite3 will create this database file if it does not exist already
conn = sqlite3.connect('worldbank.db')
# get a cursor
cur = conn.cursor()
# drop the test table in case it already exists
cur.execute("DROP TABLE IF EXISTS gdp")
# create the test table including project_id as a primary key
cur.execute("CREATE TABLE gdp (countryname TEXT, countrycode TEXT, year INTEGER, gdp REAL, PRIMARY KEY (countrycode, year));")
conn.commit()
conn.close()
# Generator for reading in one line at a time
# generators are useful for data sets that are too large to fit in RAM
# You do not need to change anything in this code cell
def extract_lines(file):
while True:
line = file.readline()
if not line:
break
yield line
# TODO: fill out the code wherever you find a TODO in this cell
# This function has two inputs:
# data, which is a row of data from the gdp csv file
# colnames, which is a list of column names from the csv file
# The output should be a list of [countryname, countrycode, year, gdp] values
# In other words, the output would look like:
# [[Aruba, ABW, 1994, 1.330168e+09], [Aruba, ABW, 1995, 1.320670e+09], ...]
#
import pandas as pd
import numpy as np
import sqlite3
# transform the indicator data
def transform_indicator_data(data, colnames):
# get rid of quote marks
for i, datum in enumerate(data):
data[i] = datum.replace('"','')
# TODO: the data variable contains a list of data in the form
# [countryname, countrycode, 1960, 1961, 1962,...]
# since this is the format of the data in the csv file. Extract the countryname from the list
# and put the result in the country variable
country = data[0]
# these are "countryname" values that are not actually countries
non_countries = ['World',
'High income',
'OECD members',
'Post-demographic dividend',
'IDA & IBRD total',
'Low & middle income',
'Middle income',
'IBRD only',
'East Asia & Pacific',
'Europe & Central Asia',
'North America',
'Upper middle income',
'Late-demographic dividend',
'European Union',
'East Asia & Pacific (excluding high income)',
'East Asia & Pacific (IDA & IBRD countries)',
'Euro area',
'Early-demographic dividend',
'Lower middle income',
'Latin America & Caribbean',
'Latin America & the Caribbean (IDA & IBRD countries)',
'Latin America & Caribbean (excluding high income)',
'Europe & Central Asia (IDA & IBRD countries)',
'Middle East & North Africa',
'Europe & Central Asia (excluding high income)',
'South Asia (IDA & IBRD)',
'South Asia',
'Arab World',
'IDA total',
'Sub-Saharan Africa',
'Sub-Saharan Africa (IDA & IBRD countries)',
'Sub-Saharan Africa (excluding high income)',
'Middle East & North Africa (excluding high income)',
'Middle East & North Africa (IDA & IBRD countries)',
'Central Europe and the Baltics',
'Pre-demographic dividend',
'IDA only',
'Least developed countries: UN classification',
'IDA blend',
'Fragile and conflict affected situations',
'Heavily indebted poor countries (HIPC)',
'Low income',
'Small states',
'Other small states',
'Not classified',
'Caribbean small states',
'Pacific island small states']
# filter out country name values that are in the above list
if country not in non_countries:
# In this section, you'll convert the single row of data into a data frame
# The advantage of converting a single row of data into a data frame is that you can
# re-use code from earlier in the lesson to clean the data
# TODO: convert the data variable into a numpy array
# Use the ndmin=2 option
data_array = np.array(data,ndmin=2)
# TODO: reshape the data_array so that it is one row and 63 columns
data_array = np.reshape(data_array,(1,63))
# TODO: convert the data_array variable into a pandas dataframe
# Note that you can specify the column names as well using the colnames variable
# Also, replace all empty strings in the dataframe with nan
# (HINT: Use the replace module and np.nan)
        df = pd.DataFrame(data=data_array, columns=colnames).replace('', np.nan)
# TODO: Drop the 'Indicator Name' and 'Indicator Code' columns
df.drop(columns=['Indicator Name','Indicator Code','\n'],inplace=True,axis=1)
# TODO: Reshape the data sets so that they are in long format
# The id_vars should be Country Name and Country Code
# You can name the variable column year and the value column gdp
# HINT: Use the pandas melt() method
# HINT: This was already done in a previous exercise
df_melt = pd.melt(df,id_vars=['Country Name','Country Code'],\
var_name='year', value_name='value')
# TODO: Iterate through the rows in df_melt
# For each row, extract the country, countrycode, year, and gdp values into a list like this:
# [country, countrycode, year, gdp]
# If the gdp value is not null, append the row (in the form of a list) to the results variable
# Finally, return the results list after iterating through the df_melt data
# HINT: the iterrows() method would be useful
# HINT: to check if gdp is equal to nan, you might want to convert gdp to a string and compare to the
        #    string 'nan'
results = []
for index, row in df_melt.iterrows():
country, countrycode, year, gdp = row
if str(gdp) != 'nan':
results.append([country, countrycode, year, gdp])
return results
# TODO: fill out the code wherever you find a TODO in this cell
# This function loads data into the gdp table of the worldbank.db database
# The input is a list of data outputted from the transformation step that looks like this:
# [[Aruba, ABW, 1994, 1.330168e+09], [Aruba, ABW, 1995, 1.320670e+09], ...]
# The function does not return anything. Instead, the function iterates through the input and inserts each
# value into the gdp data set.
def load_indicator_data(results):
# TODO: connect to the worldbank.db database using the sqlite3 library
conn = sqlite3.connect('worldbank.db')
# TODO: create a cursor object
cur = conn.cursor()
if results:
# iterate through the results variable and insert each result into the gdp table
for result in results:
# TODO: extract the countryname, countrycode, year, and gdp from each iteration
countryname, countrycode, year, gdp = result[0], result[1], result[2], result[3]
# TODO: prepare a query to insert a countryname, countrycode, year, gdp value
sql_string = 'INSERT INTO gdp (countryname, countrycode, year, gdp) VALUES \
("{}", "{}", {}, {});'.format(countryname, countrycode, year,gdp)
# connect to database and execute query
try:
cur.execute(sql_string)
# print out any errors (like if the primary key constraint is violated)
except Exception as e:
print('error occurred:', e, result)
# commit changes and close the connection
conn.commit()
conn.close()
return None
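# A bulk-loading sketch for comparison: executemany() sends the whole results list in one call, and
# INSERT OR IGNORE makes re-runs skip rows that would violate the (countrycode, year) primary key.
def load_indicator_data_bulk(results):
    conn = sqlite3.connect('worldbank.db')
    cur = conn.cursor()
    if results:
        cur.executemany("INSERT OR IGNORE INTO gdp VALUES (?, ?, ?, ?)", results)
    conn.commit()
    conn.close()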
# Execute this code cell to run the ETL pipeline
# You do not need to change anything in this cell
# open the data file
with open('../data/gdp_data.csv') as f:
# execute the generator to read in the file line by line
for line in extract_lines(f):
# split the comma separated values
data = line.split(',')
# check the length of the line because the first four lines of the csv file are not data
if len(data) == 63:
# check if the line represents column names
if data[0] == '"Country Name"':
colnames = []
# get rid of quote marks in the results to make the data easier to work with
for i, datum in enumerate(data):
colnames.append(datum.replace('"',''))
else:
# transform and load the line of indicator data
results = transform_indicator_data(data, colnames)
load_indicator_data(results)
# Execute this code cell to output the values in the gdp table
# You do not need to change anything in this cell
# connect to the database
# the database file will be worldbank.db
# note that sqlite3 will create this database file if it does not exist already
conn = sqlite3.connect('worldbank.db')
# get a cursor
cur = conn.cursor()
# create the test table including project_id as a primary key
df = pd.read_sql("SELECT * FROM gdp", con=conn)
conn.commit()
conn.close()
df
###Output
_____no_output_____
###Markdown
Final Exercise - Putting it All TogetherIn this last exercise, you'll write a full ETL pipeline for the GDP data. That means you'll extract the World Bank data, transform the data, and load the data all in one go. In other words, you'll want one Python script that can do the entire process.Why would you want to do this? Imagine working for a company that creates new data every day. As new data comes in, you'll want to write software that periodically and automatically extracts, transforms, and loads the data.To give you a sense for what this is like, you'll extract the GDP data one line at a time. You'll then transform that line of data and load the results into a SQLite database. The code in this exercise is somewhat tricky.Here is an explanation of how this Jupyter notebook is organized:1. The first cell connects to a SQLite database called worldbank.db and creates a table to hold the gdp data. You do not need to do anything in this code cell other than executing the cell.2. The second cell has a function called extract_lines(). You don't need to do anything in this code cell either besides executing the cell. This function is a [Python generator](https://wiki.python.org/moin/Generators). You don't need to understand how this works in order to complete the exercise. Essentially, a generator is like a regular function except instead of a return statement, a generator has a yield statement. Generators allow you to use functions in a for loop. In essence, this function will allow you to read in a data file one line at a time, run a transformation on that row of data, and then move on to the next row in the file.3. The third cell contains a function called transform_indicator_data(). This function receives a line from the csv file and transforms the data in preparation for a load step.4. The fourth cell contains a function called load_indicator_data(), which loads the transformed data into the gdp table in the worldbank.db database.5. The fifth cell runs the ETL pipeline.6. The sixth cell runs a query against the database to make sure everything worked correctly.You'll need to modify the third and fourth cells.
###Code
# run this cell to create a database and a table, called gdp, to hold the gdp data
# You do not need to change anything in this code cell
import sqlite3
# connect to the database
# the database file will be worldbank.db
# note that sqlite3 will create this database file if it does not exist already
conn = sqlite3.connect('worldbank.db')
# get a cursor
cur = conn.cursor()
# drop the test table in case it already exists
cur.execute("DROP TABLE IF EXISTS gdp")
# create the test table including project_id as a primary key
cur.execute("CREATE TABLE gdp (countryname TEXT, countrycode TEXT, year INTEGER, gdp REAL, PRIMARY KEY (countrycode, year));")
conn.commit()
conn.close()
# Generator for reading in one line at a time
# generators are useful for data sets that are too large to fit in RAM
# You do not need to change anything in this code cell
def extract_lines(file):
while True:
line = file.readline()
if not line:
break
yield line
# TODO: fill out the code wherever you find a TODO in this cell
# This function has two inputs:
# data, which is a row of data from the gdp csv file
# colnames, which is a list of column names from the csv file
# The output should be a list of [countryname, countrycode, year, gdp] values
# In other words, the output would look like:
# [[Aruba, ABW, 1994, 1.330168e+09], [Aruba, ABW, 1995, 1.320670e+09], ...]
#
import pandas as pd
import numpy as np
import sqlite3
# transform the indicator data
def transform_indicator_data(data, colnames):
# get rid of quote marks
for i, datum in enumerate(data):
data[i] = datum.replace('"','')
# TODO: the data variable contains a list of data in the form [countryname, countrycode, 1960, 1961, 1962,...]
# since this is the format of the data in the csv file. Extract the countryname from the list
# and put the result in the country variable
country = data[0]
# these are "countryname" values that are not actually countries
non_countries = ['World',
'High income',
'OECD members',
'Post-demographic dividend',
'IDA & IBRD total',
'Low & middle income',
'Middle income',
'IBRD only',
'East Asia & Pacific',
'Europe & Central Asia',
'North America',
'Upper middle income',
'Late-demographic dividend',
'European Union',
'East Asia & Pacific (excluding high income)',
'East Asia & Pacific (IDA & IBRD countries)',
'Euro area',
'Early-demographic dividend',
'Lower middle income',
'Latin America & Caribbean',
'Latin America & the Caribbean (IDA & IBRD countries)',
'Latin America & Caribbean (excluding high income)',
'Europe & Central Asia (IDA & IBRD countries)',
'Middle East & North Africa',
'Europe & Central Asia (excluding high income)',
'South Asia (IDA & IBRD)',
'South Asia',
'Arab World',
'IDA total',
'Sub-Saharan Africa',
'Sub-Saharan Africa (IDA & IBRD countries)',
'Sub-Saharan Africa (excluding high income)',
'Middle East & North Africa (excluding high income)',
'Middle East & North Africa (IDA & IBRD countries)',
'Central Europe and the Baltics',
'Pre-demographic dividend',
'IDA only',
'Least developed countries: UN classification',
'IDA blend',
'Fragile and conflict affected situations',
'Heavily indebted poor countries (HIPC)',
'Low income',
'Small states',
'Other small states',
'Not classified',
'Caribbean small states',
'Pacific island small states']
# filter out country name values that are in the above list
if country not in non_countries:
# In this section, you'll convert the single row of data into a data frame
# The advantage of converting a single row of data into a data frame is that you can
# re-use code from earlier in the lesson to clean the data
# TODO: convert the data variable into a numpy array
# Use the ndmin=2 option
data_array = np.array(data, ndmin=2)
# TODO: reshape the data_array so that it is one row and 63 columns
data_array = data_array.reshape(1, 63)
# TODO: convert the data_array variable into a pandas dataframe
# Note that you can specify the column names as well using the colnames variable
# Also, replace all empty strings in the dataframe with nan (HINT: Use the replace module and np.nan)
df = pd.DataFrame(data=data_array, columns=colnames).replace({'': np.nan})
# TODO: Drop the 'Indicator Name' and 'Indicator Code' columns
df.drop(columns=['\n', 'Indicator Name', 'Indicator Code'], inplace=True)
# TODO: Reshape the data sets so that they are in long format
# The id_vars should be Country Name and Country Code
# You can name the variable column year and the value column gdp
# HINT: Use the pandas melt() method
# HINT: This was already done in a previous exercise
df_melt = df.melt(id_vars=['Country Name', 'Country Code'], var_name='year', value_name='gdp')
# TODO: Iterate through the rows in df_melt
# For each row, extract the country, countrycode, year, and gdp values into a list like this:
# [country, countrycode, year, gdp]
# If the gdp value is not null, append the row (in the form of a list) to the results variable
# Finally, return the results list after iterating through the df_melt data
# HINT: the iterrows() method would be useful
# HINT: to check if gdp is equal to nan, you might want to convert gdp to a string and compare to the
# string 'nan'
results = []
        # replace() returns a new DataFrame, so assign the result (otherwise this line is a no-op)
        df_melt = df_melt.replace({np.nan: 'nan'})
        for idx, row in df_melt.iterrows():
            country, countrycode, year, gdp = row
            if str(gdp) != 'nan':
                results.append([country, countrycode, year, gdp])
return results
# TODO: fill out the code wherever you find a TODO in this cell
# This function loads data into the gdp table of the worldbank.db database
# The input is a list of data outputted from the transformation step that looks like this:
# [[Aruba, ABW, 1994, 1.330168e+09], [Aruba, ABW, 1995, 1.320670e+09], ...]
# The function does not return anything. Instead, the function iterates through the input and inserts each
# value into the gdp data set.
def load_indicator_data(results):
# TODO: connect to the worldbank.db database using the sqlite3 library
conn = sqlite3.connect('worldbank.db')
# TODO: create a cursor object
cur = conn.cursor()
if results:
# iterate through the results variable and insert each result into the gdp table
for result in results:
# TODO: extract the countryname, countrycode, year, and gdp from each iteration
countryname, countrycode, year, gdp = result
# TODO: prepare a query to insert a countryname, countrycode, year, gdp value
sql_string = f"""
INSERT INTO gdp (countryname, countrycode, year, gdp)
VALUES ("{countryname}", "{countrycode}", {year}, {gdp});
"""
# connect to database and execute query
try:
cur.execute(sql_string)
# print out any errors (like if the primary key constraint is violated)
except Exception as e:
print('error occurred:', e, result)
# commit changes and close the connection
conn.commit()
conn.close()
return None
%%time
# Execute this code cell to run the ETL pipeline
# You do not need to change anything in this cell
# open the data file
with open('../data/gdp_data.csv') as f:
# execute the generator to read in the file line by line
for line in extract_lines(f):
# split the comma separated values
data = line.split(',')
# check the length of the line because the first four lines of the csv file are not data
if len(data) == 63:
# check if the line represents column names
if data[0] == '"Country Name"':
colnames = []
# get rid of quote marks in the results to make the data easier to work with
for i, datum in enumerate(data):
colnames.append(datum.replace('"',''))
else:
# transform and load the line of indicator data
results = transform_indicator_data(data, colnames)
load_indicator_data(results)
# Execute this code cell to output the values in the gdp table
# You do not need to change anything in this cell
# connect to the database
# the database file will be worldbank.db
# note that sqlite3 will create this database file if it does not exist already
conn = sqlite3.connect('worldbank.db')
# get a cursor
cur = conn.cursor()
# create the test table including project_id as a primary key
df = pd.read_sql("SELECT * FROM gdp", con=conn)
conn.commit()
conn.close()
df
###Output
_____no_output_____
###Markdown
Final Exercise - Putting it All TogetherIn this last exercise, you'll write a full ETL pipeline for the GDP data. That means you'll extract the World Bank data, transform the data, and load the data all in one go. In other words, you'll want one Python script that can do the entire process.Why would you want to do this? Imagine working for a company that creates new data every day. As new data comes in, you'll want to write software that periodically and automatically extracts, transforms, and loads the data.To give you a sense for what this is like, you'll extract the GDP data one line at a time. You'll then transform that line of data and load the results into a SQLite database. The code in this exercise is somewhat tricky.Here is an explanation of how this Jupyter notebook is organized:1. The first cell connects to a SQLite database called worldbank.db and creates a table to hold the gdp data. You do not need to do anything in this code cell other than executing the cell.2. The second cell has a function called extract_lines(). You don't need to do anything in this code cell either besides executing the cell. This function is a [Python generator](https://wiki.python.org/moin/Generators). You don't need to understand how this works in order to complete the exercise. Essentially, a generator is like a regular function except instead of a return statement, a generator has a yield statement. Generators allow you to use functions in a for loop. In essence, this function will allow you to read in a data file one line at a time, run a transformation on that row of data, and then move on to the next row in the file.3. The third cell contains a function called transform_indicator_data(). This function receives a line from the csv file and transforms the data in preparation for a load step.4. The fourth cell contains a function called load_indicator_data(), which loads the transformed data into the gdp table in the worldbank.db database.5. The fifth cell runs the ETL pipeline.6. The sixth cell runs a query against the database to make sure everything worked correctly.You'll need to modify the third and fourth cells.
###Code
# run this cell to create a database and a table, called gdp, to hold the gdp data
# You do not need to change anything in this code cell
import sqlite3
# connect to the database
# the database file will be worldbank.db
# note that sqlite3 will create this database file if it does not exist already
conn = sqlite3.connect('worldbank.db')
# get a cursor
cur = conn.cursor()
# drop the test table in case it already exists
cur.execute("DROP TABLE IF EXISTS gdp")
# create the test table including project_id as a primary key
cur.execute("CREATE TABLE gdp (countryname TEXT, countrycode TEXT, year INTEGER, gdp REAL, PRIMARY KEY (countrycode, year));")
conn.commit()
conn.close()
# Generator for reading in one line at a time
# generators are useful for data sets that are too large to fit in RAM
# You do not need to change anything in this code cell
def extract_lines(file):
while True:
line = file.readline()
if not line:
break
yield line
# TODO: fill out the code wherever you find a TODO in this cell
# This function has two inputs:
# data, which is a row of data from the gdp csv file
# colnames, which is a list of column names from the csv file
# The output should be a list of [countryname, countrycode, year, gdp] values
# In other words, the output would look like:
# [[Aruba, ABW, 1994, 1.330168e+09], [Aruba, ABW, 1995, 1.320670e+09], ...]
#
import pandas as pd
import numpy as np
import sqlite3
# transform the indicator data
def transform_indicator_data(data, colnames):
# get rid of quote marks
for i, datum in enumerate(data):
data[i] = datum.replace('"','')
# TODO: the data variable contains a list of data in the form [countryname, countrycode, 1960, 1961, 1962,...]
# since this is the format of the data in the csv file. Extract the countryname from the list
# and put the result in the country variable
country = None
# these are "countryname" values that are not actually countries
non_countries = ['World',
'High income',
'OECD members',
'Post-demographic dividend',
'IDA & IBRD total',
'Low & middle income',
'Middle income',
'IBRD only',
'East Asia & Pacific',
'Europe & Central Asia',
'North America',
'Upper middle income',
'Late-demographic dividend',
'European Union',
'East Asia & Pacific (excluding high income)',
'East Asia & Pacific (IDA & IBRD countries)',
'Euro area',
'Early-demographic dividend',
'Lower middle income',
'Latin America & Caribbean',
'Latin America & the Caribbean (IDA & IBRD countries)',
'Latin America & Caribbean (excluding high income)',
'Europe & Central Asia (IDA & IBRD countries)',
'Middle East & North Africa',
'Europe & Central Asia (excluding high income)',
'South Asia (IDA & IBRD)',
'South Asia',
'Arab World',
'IDA total',
'Sub-Saharan Africa',
'Sub-Saharan Africa (IDA & IBRD countries)',
'Sub-Saharan Africa (excluding high income)',
'Middle East & North Africa (excluding high income)',
'Middle East & North Africa (IDA & IBRD countries)',
'Central Europe and the Baltics',
'Pre-demographic dividend',
'IDA only',
'Least developed countries: UN classification',
'IDA blend',
'Fragile and conflict affected situations',
'Heavily indebted poor countries (HIPC)',
'Low income',
'Small states',
'Other small states',
'Not classified',
'Caribbean small states',
'Pacific island small states']
# filter out country name values that are in the above list
if country not in non_countries:
# In this section, you'll convert the single row of data into a data frame
# The advantage of converting a single row of data into a data frame is that you can
# re-use code from earlier in the lesson to clean the data
# TODO: convert the data variable into a numpy array
# Use the ndmin=2 option
data_array = None
# TODO: reshape the data_array so that it is one row and 63 columns
data_array = None
# TODO: convert the data_array variable into a pandas dataframe
# Note that you can specify the column names as well using the colnames variable
# Also, replace all empty strings in the dataframe with nan (HINT: Use the replace module and np.nan)
df = None
# TODO: Drop the 'Indicator Name' and 'Indicator Code' columns
# TODO: Reshape the data sets so that they are in long format
# The id_vars should be Country Name and Country Code
# You can name the variable column year and the value column gdp
# HINT: Use the pandas melt() method
# HINT: This was already done in a previous exercise
df_melt = None
# TODO: Iterate through the rows in df_melt
# For each row, extract the country, countrycode, year, and gdp values into a list like this:
# [country, countrycode, year, gdp]
# If the gdp value is not null, append the row (in the form of a list) to the results variable
# Finally, return the results list after iterating through the df_melt data
# HINT: the iterrows() method would be useful
# HINT: to check if gdp is equal to nan, you might want to convert gdp to a string and compare to the
        #    string 'nan'
results = []
return results
# TODO: fill out the code wherever you find a TODO in this cell
# This function loads data into the gdp table of the worldbank.db database
# The input is a list of data outputted from the transformation step that looks like this:
# [[Aruba, ABW, 1994, 1.330168e+09], [Aruba, ABW, 1995, 1.320670e+09], ...]
# The function does not return anything. Instead, the function iterates through the input and inserts each
# value into the gdp data set.
def load_indicator_data(results):
# TODO: connect to the worldbank.db database using the sqlite3 library
conn = None
# TODO: create a cursor object
cur = None
if results:
# iterate through the results variable and insert each result into the gdp table
for result in results:
# TODO: extract the countryname, countrycode, year, and gdp from each iteration
countryname, countrycode, year, gdp = None
# TODO: prepare a query to insert a countryname, countrycode, year, gdp value
sql_string = None
# connect to database and execute query
try:
cur.execute(sql_string)
# print out any errors (like if the primary key constraint is violated)
except Exception as e:
print('error occurred:', e, result)
# commit changes and close the connection
conn.commit()
conn.close()
return None
# Execute this code cell to run the ETL pipeline
# You do not need to change anything in this cell
# open the data file
with open('../data/gdp_data.csv') as f:
# execute the generator to read in the file line by line
for line in extract_lines(f):
# split the comma separated values
data = line.split(',')
# check the length of the line because the first four lines of the csv file are not data
if len(data) == 63:
# check if the line represents column names
if data[0] == '"Country Name"':
colnames = []
# get rid of quote marks in the results to make the data easier to work with
for i, datum in enumerate(data):
colnames.append(datum.replace('"',''))
else:
# transform and load the line of indicator data
results = transform_indicator_data(data, colnames)
load_indicator_data(results)
# Execute this code cell to output the values in the gdp table
# You do not need to change anything in this cell
# connect to the database
# the database file will be worldbank.db
# note that sqlite3 will create this database file if it does not exist already
conn = sqlite3.connect('worldbank.db')
# get a cursor
cur = conn.cursor()
# create the test table including project_id as a primary key
df = pd.read_sql("SELECT * FROM gdp", con=conn)
conn.commit()
conn.close()
df
###Output
_____no_output_____
###Markdown
Final Exercise - Putting it All TogetherIn this last exercise, you'll write a full ETL pipeline for the GDP data. That means you'll extract the World Bank data, transform the data, and load the data all in one go. In other words, you'll want one Python script that can do the entire process.Why would you want to do this? Imagine working for a company that creates new data every day. As new data comes in, you'll want to write software that periodically and automatically extracts, transforms, and loads the data.To give you a sense for what this is like, you'll extract the GDP data one line at a time. You'll then transform that line of data and load the results into a SQLite database. The code in this exercise is somewhat tricky.Here is an explanation of how this Jupyter notebook is organized:1. The first cell connects to a SQLite database called worldbank.db and creates a table to hold the gdp data. You do not need to do anything in this code cell other than executing the cell.2. The second cell has a function called extract_lines(). You don't need to do anything in this code cell either besides executing the cell. This function is a [Python generator](https://wiki.python.org/moin/Generators). You don't need to understand how this works in order to complete the exercise. Essentially, a generator is like a regular function except instead of a return statement, a generator has a yield statement. Generators allow you to use functions in a for loop. In essence, this function will allow you to read in a data file one line at a time, run a transformation on that row of data, and then move on to the next row in the file.3. The third cell contains a function called transform_indicator_data(). This function receives a line from the csv file and transforms the data in preparation for a load step.4. The fourth cell contains a function called load_indicator_data(), which loads the transformed data into the gdp table in the worldbank.db database.5. The fifth cell runs the ETL pipeline.6. The sixth cell runs a query against the database to make sure everything worked correctly.You'll need to modify the third and fourth cells.
###Code
# run this cell to create a database and a table, called gdp, to hold the gdp data
# You do not need to change anything in this code cell
import sqlite3
# connect to the database
# the database file will be worldbank.db
# note that sqlite3 will create this database file if it does not exist already
conn = sqlite3.connect('worldbank.db')
# get a cursor
cur = conn.cursor()
# drop the test table in case it already exists
cur.execute("DROP TABLE IF EXISTS gdp")
# create the test table including project_id as a primary key
cur.execute("CREATE TABLE gdp (countryname TEXT, countrycode TEXT, year INTEGER, gdp REAL, PRIMARY KEY (countrycode, year));")
conn.commit()
conn.close()
# Generator for reading in one line at a time
# generators are useful for data sets that are too large to fit in RAM
# You do not need to change anything in this code cell
def extract_lines(file):
while True:
line = file.readline()
if not line:
break
yield line
# TODO: fill out the code wherever you find a TODO in this cell
# This function has two inputs:
# data, which is a row of data from the gdp csv file
# colnames, which is a list of column names from the csv file
# The output should be a list of [countryname, countrycode, year, gdp] values
# In other words, the output would look like:
# [[Aruba, ABW, 1994, 1.330168e+09], [Aruba, ABW, 1995, 1.320670e+09], ...]
#
import pandas as pd
import numpy as np
import sqlite3
# transform the indicator data
def transform_indicator_data(data, colnames):
# get rid of quote marks
for i, datum in enumerate(data):
data[i] = datum.replace('"','')
# TODO: the data variable contains a list of data in the form [countryname, countrycode, 1960, 1961, 1962,...]
# since this is the format of the data in the csv file. Extract the countryname from the list
# and put the result in the country variable
country = data[0]
# these are "countryname" values that are not actually countries
non_countries = ['World',
'High income',
'OECD members',
'Post-demographic dividend',
'IDA & IBRD total',
'Low & middle income',
'Middle income',
'IBRD only',
'East Asia & Pacific',
'Europe & Central Asia',
'North America',
'Upper middle income',
'Late-demographic dividend',
'European Union',
'East Asia & Pacific (excluding high income)',
'East Asia & Pacific (IDA & IBRD countries)',
'Euro area',
'Early-demographic dividend',
'Lower middle income',
'Latin America & Caribbean',
'Latin America & the Caribbean (IDA & IBRD countries)',
'Latin America & Caribbean (excluding high income)',
'Europe & Central Asia (IDA & IBRD countries)',
'Middle East & North Africa',
'Europe & Central Asia (excluding high income)',
'South Asia (IDA & IBRD)',
'South Asia',
'Arab World',
'IDA total',
'Sub-Saharan Africa',
'Sub-Saharan Africa (IDA & IBRD countries)',
'Sub-Saharan Africa (excluding high income)',
'Middle East & North Africa (excluding high income)',
'Middle East & North Africa (IDA & IBRD countries)',
'Central Europe and the Baltics',
'Pre-demographic dividend',
'IDA only',
'Least developed countries: UN classification',
'IDA blend',
'Fragile and conflict affected situations',
'Heavily indebted poor countries (HIPC)',
'Low income',
'Small states',
'Other small states',
'Not classified',
'Caribbean small states',
'Pacific island small states']
# filter out country name values that are in the above list
if country not in non_countries:
# In this section, you'll convert the single row of data into a data frame
# The advantage of converting a single row of data into a data frame is that you can
# re-use code from earlier in the lesson to clean the data
# TODO: convert the data variable into a numpy array
# Use the ndmin=2 option
data_array = np.array(data, ndmin=2)
# TODO: reshape the data_array so that it is one row and 63 columns
data_array = data_array.reshape(1, 63)
# TODO: convert the data_array variable into a pandas dataframe
# Note that you can specify the column names as well using the colnames variable
# Also, replace all empty strings in the dataframe with nan (HINT: Use the replace module and np.nan)
df = pd.DataFrame(data_array, columns=colnames).replace('', np.nan)
# TODO: Drop the 'Indicator Name' and 'Indicator Code' columns
        # also drop the stray trailing column (named '\n') created by the csv's trailing comma, if present
        df.drop(['Indicator Name', 'Indicator Code', '\n'], inplace=True, axis=1, errors='ignore')
# TODO: Reshape the data sets so that they are in long format
# The id_vars should be Country Name and Country Code
# You can name the variable column year and the value column gdp
# HINT: Use the pandas melt() method
# HINT: This was already done in a previous exercise
df_melt = df.melt(id_vars=['Country Name', 'Country Code'], var_name='year', value_name='gdp')
# TODO: Iterate through the rows in df_melt
# For each row, extract the country, countrycode, year, and gdp values into a list like this:
# [country, countrycode, year, gdp]
# If the gdp value is not null, append the row (in the form of a list) to the results variable
# Finally, return the results list after iterating through the df_melt data
# HINT: the iterrows() method would be useful
# HINT: to check if gdp is equal to nan, you might want to convert gdp to a string and compare to the
        #    string 'nan'
results = []
for index, row in df_melt.iterrows():
countryname, countrycode, year, gdp = row
if str(gdp) != 'nan':
results.append([countryname, countrycode, year, gdp])
return results
# TODO: fill out the code wherever you find a TODO in this cell
# This function loads data into the gdp table of the worldbank.db database
# The input is a list of data outputted from the transformation step that looks like this:
# [[Aruba, ABW, 1994, 1.330168e+09], [Aruba, ABW, 1995, 1.320670e+09], ...]
# The function does not return anything. Instead, the function iterates through the input and inserts each
# value into the gdp data set.
def load_indicator_data(results):
# TODO: connect to the worldbank.db database using the sqlite3 library
conn = sqlite3.connect('worldbank.db')
# TODO: create a cursor object
cur = conn.cursor()
if results:
# iterate through the results variable and insert each result into the gdp table
for result in results:
# TODO: extract the countryname, countrycode, year, and gdp from each iteration
countryname, countrycode, year, gdp = result
# TODO: prepare a query to insert a countryname, countrycode, year, gdp value
sql_string = f'INSERT INTO gdp (countryname, countrycode, year, gdp) VALUES ("{countryname}", "{countrycode}", {year}, {gdp});'
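# note (added): building SQL by string formatting is fragile if a value ever contains quote characters;
# sqlite3 also supports parameterized queries, e.g.
#   cur.execute("INSERT INTO gdp (countryname, countrycode, year, gdp) VALUES (?, ?, ?, ?);", (countryname, countrycode, year, gdp))
# a later version of this function below uses that form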
# connect to database and execute query
try:
cur.execute(sql_string)
# print out any errors (like if the primary key constraint is violated)
except Exception as e:
print('error occurred:', e, result)
# commit changes and close the connection
conn.commit()
conn.close()
return None
# Execute this code cell to run the ETL pipeline
# You do not need to change anything in this cell
# open the data file
with open('../data/gdp_data.csv') as f:
# execute the generator to read in the file line by line
for line in extract_lines(f):
# split the comma separated values
data = line.split(',')
# check the length of the line because the first four lines of the csv file are not data
if len(data) == 63:
# check if the line represents column names
if data[0] == '"Country Name"':
colnames = []
# get rid of quote marks in the results to make the data easier to work with
for i, datum in enumerate(data):
colnames.append(datum.replace('"',''))
else:
# transform and load the line of indicator data
results = transform_indicator_data(data, colnames)
load_indicator_data(results)
# Execute this code cell to output the values in the gdp table
# You do not need to change anything in this cell
# connect to the database
# the database file will be worldbank.db
# note that sqlite3 will create this database file if it does not exist already
conn = sqlite3.connect('worldbank.db')
# get a cursor
cur = conn.cursor()
# read the gdp table into a pandas dataframe
df = pd.read_sql("SELECT * FROM gdp", con=conn)
conn.commit()
conn.close()
df
###Output
_____no_output_____
###Markdown
Final Exercise - Putting it All TogetherIn this last exercise, you'll write a full ETL pipeline for the GDP data. That means you'll extract the World Bank data, transform the data, and load the data all in one go. In other words, you'll want one Python script that can do the entire process.Why would you want to do this? Imagine working for a company that creates new data every day. As new data comes in, you'll want to write software that periodically and automatically extracts, transforms, and loads the data.To give you a sense for what this is like, you'll extract the GDP data one line at a time. You'll then transform that line of data and load the results into a SQLite database. The code in this exercise is somewhat tricky.Here is an explanation of how this Jupyter notebook is organized:1. The first cell connects to a SQLite database called worldbank.db and creates a table to hold the gdp data. You do not need to do anything in this code cell other than executing the cell.2. The second cell has a function called extract_lines(). You don't need to do anything in this code cell either besides executing the cell. This function is a [Python generator](https://wiki.python.org/moin/Generators). You don't need to understand how this works in order to complete the exercise. Essentially, a generator is like a regular function except instead of a return statement, a generator has a yield statement. Generators allow you to use functions in a for loop. In essence, this function will allow you to read in a data file one line at a time, run a transformation on that row of data, and then move on to the next row in the file.3. The third cell contains a function called transform_indicator_data(). This function receives a line from the csv file and transforms the data in preparation for a load step.4. The fourth cell contains a function called load_indicator_data(), which loads the transformed data into the gdp table in the worldbank.db database.5. The fifth cell runs the ETL pipeline.6. The sixth cell runs a query against the database to make sure everything worked correctly.You'll need to modify the third and fourth cells.
###Code
# run this cell to create a database and a table, called gdp, to hold the gdp data
# You do not need to change anything in this code cell
import sqlite3
# connect to the database
# the database file will be worldbank.db
# note that sqlite3 will create this database file if it does not exist already
conn = sqlite3.connect('worldbank.db')
# get a cursor
cur = conn.cursor()
# drop the test table in case it already exists
cur.execute("DROP TABLE IF EXISTS gdp")
# create the gdp table, with (countrycode, year) as a composite primary key
cur.execute("CREATE TABLE gdp (countryname TEXT, countrycode TEXT, year INTEGER, gdp REAL, PRIMARY KEY (countrycode, year));")
conn.commit()
conn.close()
# Generator for reading in one line at a time
# generators are useful for data sets that are too large to fit in RAM
# You do not need to change anything in this code cell
def extract_lines(file):
while True:
line = file.readline()
if not line:
break
yield line
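# (added sketch, not part of the exercise code) a minimal generator example to illustrate the idea
# from the markdown above: a function with a yield statement can be consumed directly in a for loop,
# producing one value at a time instead of building the whole sequence in memory
def squares(n):
    for i in range(n):
        yield i * i

for value in squares(4):
    print(value)  # prints 0, 1, 4, 9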
# TODO: fill out the code wherever you find a TODO in this cell
# This function has two inputs:
# data, which is a row of data from the gdp csv file
# colnames, which is a list of column names from the csv file
# The output should be a list of [countryname, countrycode, year, gdp] values
# In other words, the output would look like:
# [[Aruba, ABW, 1994, 1.330168e+09], [Aruba, ABW, 1995, 1.320670e+09], ...]
#
import pandas as pd
import numpy as np
import sqlite3
# transform the indicator data
def transform_indicator_data(data, colnames):
# get rid of quote marks
for i, datum in enumerate(data):
data[i] = datum.replace('"','')
# TODO: the data variable contains a list of data in the form [countryname, countrycode, 1960, 1961, 1962,...]
# since this is the format of the data in the csv file. Extract the countryname from the list
# and put the result in the country variable
country = data[0]
# these are "countryname" values that are not actually countries
non_countries = ['World',
'High income',
'OECD members',
'Post-demographic dividend',
'IDA & IBRD total',
'Low & middle income',
'Middle income',
'IBRD only',
'East Asia & Pacific',
'Europe & Central Asia',
'North America',
'Upper middle income',
'Late-demographic dividend',
'European Union',
'East Asia & Pacific (excluding high income)',
'East Asia & Pacific (IDA & IBRD countries)',
'Euro area',
'Early-demographic dividend',
'Lower middle income',
'Latin America & Caribbean',
'Latin America & the Caribbean (IDA & IBRD countries)',
'Latin America & Caribbean (excluding high income)',
'Europe & Central Asia (IDA & IBRD countries)',
'Middle East & North Africa',
'Europe & Central Asia (excluding high income)',
'South Asia (IDA & IBRD)',
'South Asia',
'Arab World',
'IDA total',
'Sub-Saharan Africa',
'Sub-Saharan Africa (IDA & IBRD countries)',
'Sub-Saharan Africa (excluding high income)',
'Middle East & North Africa (excluding high income)',
'Middle East & North Africa (IDA & IBRD countries)',
'Central Europe and the Baltics',
'Pre-demographic dividend',
'IDA only',
'Least developed countries: UN classification',
'IDA blend',
'Fragile and conflict affected situations',
'Heavily indebted poor countries (HIPC)',
'Low income',
'Small states',
'Other small states',
'Not classified',
'Caribbean small states',
'Pacific island small states']
# filter out country name values that are in the above list
if country not in non_countries:
data_array = np.array(data, ndmin=2)
data_array = data_array.reshape(1, 63)
df = pd.DataFrame(data_array, columns=colnames).replace('', np.nan)
df.drop(['\n', 'Indicator Name', 'Indicator Code'], inplace=True, axis=1)
# In this section, you'll convert the single row of data into a data frame
# The advantage of converting a single row of data into a data frame is that you can
# re-use code from earlier in the lesson to clean the data
df_melt = df.melt(id_vars=['Country Name', 'Country Code'], var_name='year', value_name='gdp')
results = []
for index, row in df_melt.iterrows():
country, countrycode, year, gdp = row
if str(gdp) != 'nan':
results.append([country, countrycode, year, gdp])
return results
def load_indicator_data(results):
conn = sqlite3.connect('worldbank.db')
cur = conn.cursor()
if results:
for result in results:
countryname, countrycode, year, gdp = result
sql_string = 'INSERT INTO gdp (countryname, countrycode, year, gdp) VALUES ("{}", "{}", {}, {});'.format(countryname, countrycode, year, gdp)
# connect to database and execute query
try:
cur.execute(sql_string)
except Exception as e:
print('error occurred:', e, result)
conn.commit()
conn.close()
return None
# Execute this code cell to run the ETL pipeline
# You do not need to change anything in this cell
# open the data file
with open('../data/gdp_data.csv') as f:
# execute the generator to read in the file line by line
for line in extract_lines(f):
# split the comma separated values
data = line.split(',')
# check the length of the line because the first four lines of the csv file are not data
if len(data) == 63:
# check if the line represents column names
if data[0] == '"Country Name"':
colnames = []
# get rid of quote marks in the results to make the data easier to work with
for i, datum in enumerate(data):
colnames.append(datum.replace('"',''))
else:
# transform and load the line of indicator data
results = transform_indicator_data(data, colnames)
load_indicator_data(results)
# Execute this code cell to output the values in the gdp table
# You do not need to change anything in this cell
# connect to the database
# the database file will be worldbank.db
# note that sqlite3 will create this database file if it does not exist already
conn = sqlite3.connect('worldbank.db')
# get a cursor
cur = conn.cursor()
# read the gdp table into a pandas dataframe
df = pd.read_sql("SELECT * FROM gdp", con=conn)
conn.commit()
conn.close()
df
###Output
_____no_output_____
###Markdown
Final Exercise - Putting it All TogetherIn this last exercise, you'll write a full ETL pipeline for the GDP data. That means you'll extract the World Bank data, transform the data, and load the data all in one go. In other words, you'll want one Python script that can do the entire process.Why would you want to do this? Imagine working for a company that creates new data every day. As new data comes in, you'll want to write software that periodically and automatically extracts, transforms, and loads the data.To give you a sense for what this is like, you'll extract the GDP data one line at a time. You'll then transform that line of data and load the results into a SQLite database. The code in this exercise is somewhat tricky.Here is an explanation of how this Jupyter notebook is organized:1. The first cell connects to a SQLite database called worldbank.db and creates a table to hold the gdp data. You do not need to do anything in this code cell other than executing the cell.2. The second cell has a function called extract_lines(). You don't need to do anything in this code cell either besides executing the cell. This function is a [Python generator](https://wiki.python.org/moin/Generators). You don't need to understand how this works in order to complete the exercise. Essentially, a generator is like a regular function except instead of a return statement, a generator has a yield statement. Generators allow you to use functions in a for loop. In essence, this function will allow you to read in a data file one line at a time, run a transformation on that row of data, and then move on to the next row in the file.3. The third cell contains a function called transform_indicator_data(). This function receives a line from the csv file and transforms the data in preparation for a load step.4. The fourth cell contains a function called load_indicator_data(), which loads the transformed data into the gdp table in the worldbank.db database.5. The fifth cell runs the ETL pipeline.6. The sixth cell runs a query against the database to make sure everything worked correctly.You'll need to modify the third and fourth cells.
###Code
import sqlite3
conn = sqlite3.connect('worldbank.db')
cur = conn.cursor()
# drop the test table in case it already exists
cur.execute("DROP TABLE IF EXISTS gdp")
# create the gdp table, with (countrycode, year) as a composite primary key
cur.execute("CREATE TABLE gdp (countryname TEXT, countrycode TEXT, year INTEGER, gdp REAL, PRIMARY KEY (countrycode, year));")
conn.commit()
conn.close()
def extract_lines(file):
while True:
line = file.readline()
if not line:
break
yield line
import pandas as pd
import numpy as np
import sqlite3
def transform_indicator_data(data, colnames):
# get rid of quote marks
for i, datum in enumerate(data):
data[i] = datum.replace('"','')
country = data[0]
# these are "countryname" values that are not actually countries
non_countries = ['World',
'High income',
'OECD members',
'Post-demographic dividend',
'IDA & IBRD total',
'Low & middle income',
'Middle income',
'IBRD only',
'East Asia & Pacific',
'Europe & Central Asia',
'North America',
'Upper middle income',
'Late-demographic dividend',
'European Union',
'East Asia & Pacific (excluding high income)',
'East Asia & Pacific (IDA & IBRD countries)',
'Euro area',
'Early-demographic dividend',
'Lower middle income',
'Latin America & Caribbean',
'Latin America & the Caribbean (IDA & IBRD countries)',
'Latin America & Caribbean (excluding high income)',
'Europe & Central Asia (IDA & IBRD countries)',
'Middle East & North Africa',
'Europe & Central Asia (excluding high income)',
'South Asia (IDA & IBRD)',
'South Asia',
'Arab World',
'IDA total',
'Sub-Saharan Africa',
'Sub-Saharan Africa (IDA & IBRD countries)',
'Sub-Saharan Africa (excluding high income)',
'Middle East & North Africa (excluding high income)',
'Middle East & North Africa (IDA & IBRD countries)',
'Central Europe and the Baltics',
'Pre-demographic dividend',
'IDA only',
'Least developed countries: UN classification',
'IDA blend',
'Fragile and conflict affected situations',
'Heavily indebted poor countries (HIPC)',
'Low income',
'Small states',
'Other small states',
'Not classified',
'Caribbean small states',
'Pacific island small states']
if country not in non_countries:
data_array = np.array(data, ndmin=2)
data_array = data_array.reshape(1, 63)
df = pd.DataFrame(data_array, columns=colnames).replace('', np.nan)
df.drop(['\r\n','Indicator Name', 'Indicator Code'], inplace=True, axis=1)
df_melt = df.melt(id_vars=['Country Name', 'Country Code'],
var_name='year',
value_name='gdp')
results = []
for index, row in df_melt.iterrows():
country, countrycode, year, gdp = row
if str(gdp) != 'nan':
results.append([country, countrycode, year, gdp])
return results
def load_indicator_data(results):
conn = sqlite3.connect('worldbank.db')
cur = conn.cursor()
if results:
for result in results:
country, countrycode, year, gdp = result
sql_string = 'INSERT INTO gdp (countryname, countrycode, year, gdp) VALUES ("{}", "{}", {}, {});'.format(country, countrycode, year, gdp)
try:
cur.execute(sql_string)
except Exception as e:
print('error occurred:', e, result)
conn.commit()
conn.close()
return None
# Execute ETL pipeline
with open('../data/gdp_data.csv') as f:
for line in extract_lines(f):
# split the comma separated values
data = line.split(',')
# check the length of the line because the first four lines of the csv file are not data
if len(data) == 63:
# check if the line represents column names
if data[0] == '"Country Name"':
colnames = []
# get rid of quote marks in the results to make the data easier to work with
for i, datum in enumerate(data):
colnames.append(datum.replace('"',''))
else:
# transform and load the line of indicator data
results = transform_indicator_data(data, colnames)
load_indicator_data(results)
conn = sqlite3.connect('worldbank.db')
# get a cursor
cur = conn.cursor()
# read the gdp table into a pandas dataframe
df = pd.read_sql("SELECT * FROM gdp", con=conn)
conn.commit()
conn.close()
df
###Output
_____no_output_____
###Markdown
Final Exercise - Putting it All TogetherIn this last exercise, you'll write a full ETL pipeline for the GDP data. That means you'll extract the World Bank data, transform the data, and load the data all in one go. In other words, you'll want one Python script that can do the entire process.Why would you want to do this? Imagine working for a company that creates new data every day. As new data comes in, you'll want to write software that periodically and automatically extracts, transforms, and loads the data.To give you a sense for what this is like, you'll extract the GDP data one line at a time. You'll then transform that line of data and load the results into a SQLite database. The code in this exercise is somewhat tricky.Here is an explanation of how this Jupyter notebook is organized:1. The first cell connects to a SQLite database called worldbank.db and creates a table to hold the gdp data. You do not need to do anything in this code cell other than executing the cell.2. The second cell has a function called extract_lines(). You don't need to do anything in this code cell either besides executing the cell. This function is a [Python generator](https://wiki.python.org/moin/Generators). You don't need to understand how this works in order to complete the exercise. Essentially, a generator is like a regular function except instead of a return statement, a generator has a yield statement. Generators allow you to use functions in a for loop. In essence, this function will allow you to read in a data file one line at a time, run a transformation on that row of data, and then move on to the next row in the file.3. The third cell contains a function called transform_indicator_data(). This function receives a line from the csv file and transforms the data in preparation for a load step.4. The fourth cell contains a function called load_indicator_data(), which loads the transformed data into the gdp table in the worldbank.db database.5. The fifth cell runs the ETL pipeline.6. The sixth cell runs a query against the database to make sure everything worked correctly.You'll need to modify the third and fourth cells. 1. Connect to SQLite
###Code
# run this cell to create a database and a table, called gdp, to hold the gdp data
# You do not need to change anything in this code cell
import sqlite3
# connect to the database
# the database file will be worldbank.db
# note that sqlite3 will create this database file if it does not exist already
conn = sqlite3.connect('worldbank.db')
# get a cursor
cur = conn.cursor()
# drop the test table in case it already exists
cur.execute("DROP TABLE IF EXISTS gdp")
# create the gdp table, with (countrycode, year) as a composite primary key
cur.execute("CREATE TABLE gdp (countryname TEXT, countrycode TEXT, year INTEGER, \
gdp REAL, PRIMARY KEY (countrycode, year));")
conn.commit()
conn.close()
###Output
_____no_output_____
###Markdown
2. Extract line
###Code
# Generator for reading in one line at a time
# generators are useful for data sets that are too large to fit in RAM
# You do not need to change anything in this code cell
def extract_lines(file):
while True:
line = file.readline()
if not line:
break
yield line
###Output
_____no_output_____
###Markdown
3. Transform data
###Code
# TODO: fill out the code wherever you find a TODO in this cell
import pandas as pd
import numpy as np
import sqlite3
# transform the indicator data
def transform_indicator_data(data, colnames):
# get rid of quote marks
for i, datum in enumerate(data):
data[i] = datum.replace('"','')
country = data[0]
# filter out values that are not countries
non_countries = ['World',
'High income',
'OECD members',
'Post-demographic dividend',
'IDA & IBRD total',
'Low & middle income',
'Middle income',
'IBRD only',
'East Asia & Pacific',
'Europe & Central Asia',
'North America',
'Upper middle income',
'Late-demographic dividend',
'European Union',
'East Asia & Pacific (excluding high income)',
'East Asia & Pacific (IDA & IBRD countries)',
'Euro area',
'Early-demographic dividend',
'Lower middle income',
'Latin America & Caribbean',
'Latin America & the Caribbean (IDA & IBRD countries)',
'Latin America & Caribbean (excluding high income)',
'Europe & Central Asia (IDA & IBRD countries)',
'Middle East & North Africa',
'Europe & Central Asia (excluding high income)',
'South Asia (IDA & IBRD)',
'South Asia',
'Arab World',
'IDA total',
'Sub-Saharan Africa',
'Sub-Saharan Africa (IDA & IBRD countries)',
'Sub-Saharan Africa (excluding high income)',
'Middle East & North Africa (excluding high income)',
'Middle East & North Africa (IDA & IBRD countries)',
'Central Europe and the Baltics',
'Pre-demographic dividend',
'IDA only',
'Least developed countries: UN classification',
'IDA blend',
'Fragile and conflict affected situations',
'Heavily indebted poor countries (HIPC)',
'Low income',
'Small states',
'Other small states',
'Not classified',
'Caribbean small states',
'Pacific island small states']
if country not in non_countries:
data_array = np.array(data, ndmin=2)
data_array = data_array.reshape(1, 63)
df = pd.DataFrame(data_array, columns=colnames).replace('', np.nan)
df.drop(['\n', 'Indicator Name', 'Indicator Code'], inplace=True, axis=1)
# Reshape the data sets so that they are in long format
df_melt = df.melt(id_vars=['Country Name', 'Country Code'],
var_name='year',
value_name='gdp')
results = []
for index, row in df_melt.iterrows():
country, countrycode, year, gdp = row
if str(gdp) != 'nan':
results.append([country, countrycode, year, gdp])
return results
###Output
_____no_output_____
###Markdown
4. Load data
###Code
# TODO: fill out the code wherever you find a TODO in this cell
# This function loads data into the gdp table of the worldbank.db database
# The input is a list of data outputted from the transformation step that looks like this:
# [[Aruba, ABW, 1994, 1.330168e+09], [Aruba, ABW, 1995, 1.320670e+09], ...]
# The function does not return anything. Instead, the function iterates through the input and inserts each
# value into the gdp data set.
def load_indicator_data(results):
# TODO: connect to the worldbank.db database using the sqlite3 library
conn = sqlite3.connect('worldbank.db')
# TODO: create a cursor object
cur = conn.cursor()
if results:
# iterate through the results variable and insert each result into the gdp table
for result in results:
# TODO: extract the countryname, countrycode, year, and gdp from each iteration
countryname, countrycode, year, gdp = result
# TODO: prepare a query to insert a countryname, countrycode, year, gdp value
sql_string = 'INSERT INTO gdp (countryname, countrycode, year, gdp) \
VALUES ("{}", "{}", {}, {});'.format(countryname, countrycode, year, gdp)
# connect to database and execute query
try:
cur.execute(sql_string)
# print out any errors (like if the primary key constraint is violated)
except Exception as e:
print('error occurred:', e, result)
# commit changes and close the connection
conn.commit()
conn.close()
return None
###Output
_____no_output_____
###Markdown
5. ETL pipeline
###Code
# Execute this code cell to run the ETL pipeline
# You do not need to change anything in this cell
# open the data file
with open('../data/gdp_data.csv') as f:
# execute the generator to read in the file line by line
for line in extract_lines(f):
# split the comma separated values
data = line.split(',')
# check the length of the line because the first four lines of the csv file are not data
if len(data) == 63:
# check if the line represents column names
if data[0] == '"Country Name"':
colnames = []
# get rid of quote marks in the results to make the data easier to work with
for i, datum in enumerate(data):
colnames.append(datum.replace('"',''))
else:
# transform and load the line of indicator data
results = transform_indicator_data(data, colnames)
load_indicator_data(results)
###Output
_____no_output_____
###Markdown
6. Check
###Code
# Execute this code cell to output the values in the gdp table
# You do not need to change anything in this cell
# connect to the database
# the database file will be worldbank.db
# note that sqlite3 will create this database file if it does not exist already
conn = sqlite3.connect('worldbank.db')
# get a cursor
cur = conn.cursor()
# read the gdp table into a pandas dataframe
df = pd.read_sql("SELECT * FROM gdp", con=conn)
conn.commit()
conn.close()
df
###Output
_____no_output_____
###Markdown
Final Exercise - Putting it All TogetherIn this last exercise, you'll write a full ETL pipeline for the GDP data. That means you'll extract the World Bank data, transform the data, and load the data all in one go. In other words, you'll want one Python script that can do the entire process.Why would you want to do this? Imagine working for a company that creates new data every day. As new data comes in, you'll want to write software that periodically and automatically extracts, transforms, and loads the data.To give you a sense for what this is like, you'll extract the GDP data one line at a time. You'll then transform that line of data and load the results into a SQLite database. The code in this exercise is somewhat tricky.Here is an explanation of how this Jupyter notebook is organized:1. The first cell connects to a SQLite database called worldbank.db and creates a table to hold the gdp data. You do not need to do anything in this code cell other than executing the cell.2. The second cell has a function called extract_lines(). You don't need to do anything in this code cell either besides executing the cell. This function is a [Python generator](https://wiki.python.org/moin/Generators). You don't need to understand how this works in order to complete the exercise. Essentially, a generator is like a regular function except instead of a return statement, a generator has a yield statement. Generators allow you to use functions in a for loop. In essence, this function will allow you to read in a data file one line at a time, run a transformation on that row of data, and then move on to the next row in the file.3. The third cell contains a function called transform_indicator_data(). This function receives a line from the csv file and transforms the data in preparation for a load step.4. The fourth cell contains a function called load_indicator_data(), which loads the transformed data into the gdp table in the worldbank.db database.5. The fifth cell runs the ETL pipeline.6. The sixth cell runs a query against the database to make sure everything worked correctly.You'll need to modify the third and fourth cells.
###Code
# run this cell to create a database and a table, called gdp, to hold the gdp data
# You do not need to change anything in this code cell
import sqlite3
# connect to the database
# the database file will be worldbank.db
# note that sqlite3 will create this database file if it does not exist already
conn = sqlite3.connect('worldbank.db')
# get a cursor
cur = conn.cursor()
# drop the test table in case it already exists
cur.execute("DROP TABLE IF EXISTS gdp")
# create the gdp table, with (countrycode, year) as a composite primary key
cur.execute("CREATE TABLE gdp (countryname TEXT, countrycode TEXT, year INTEGER, gdp REAL, PRIMARY KEY (countrycode, year));")
conn.commit()
conn.close()
# Generator for reading in one line at a time
# generators are useful for data sets that are too large to fit in RAM
# You do not need to change anything in this code cell
def extract_lines(file):
while True:
line = file.readline()
if not line:
break
yield line
# TODO: fill out the code wherever you find a TODO in this cell
# This function has two inputs:
# data, which is a row of data from the gdp csv file
# colnames, which is a list of column names from the csv file
# The output should be a list of [countryname, countrycode, year, gdp] values
# In other words, the output would look like:
# [[Aruba, ABW, 1994, 1.330168e+09], [Aruba, ABW, 1995, 1.320670e+09], ...]
#
import pandas as pd
import numpy as np
import sqlite3
# transform the indicator data
def transform_indicator_data(data, colnames):
# get rid of quote marks
for i, datum in enumerate(data):
data[i] = datum.replace('"','')
# TODO: the data variable contains a list of data in the form [countryname, countrycode, 1960, 1961, 1962,...]
# since this is the format of the data in the csv file. Extract the countryname from the list
# and put the result in the country variable
country = data[0]
# these are "countryname" values that are not actually countries
non_countries = ['World',
'High income',
'OECD members',
'Post-demographic dividend',
'IDA & IBRD total',
'Low & middle income',
'Middle income',
'IBRD only',
'East Asia & Pacific',
'Europe & Central Asia',
'North America',
'Upper middle income',
'Late-demographic dividend',
'European Union',
'East Asia & Pacific (excluding high income)',
'East Asia & Pacific (IDA & IBRD countries)',
'Euro area',
'Early-demographic dividend',
'Lower middle income',
'Latin America & Caribbean',
'Latin America & the Caribbean (IDA & IBRD countries)',
'Latin America & Caribbean (excluding high income)',
'Europe & Central Asia (IDA & IBRD countries)',
'Middle East & North Africa',
'Europe & Central Asia (excluding high income)',
'South Asia (IDA & IBRD)',
'South Asia',
'Arab World',
'IDA total',
'Sub-Saharan Africa',
'Sub-Saharan Africa (IDA & IBRD countries)',
'Sub-Saharan Africa (excluding high income)',
'Middle East & North Africa (excluding high income)',
'Middle East & North Africa (IDA & IBRD countries)',
'Central Europe and the Baltics',
'Pre-demographic dividend',
'IDA only',
'Least developed countries: UN classification',
'IDA blend',
'Fragile and conflict affected situations',
'Heavily indebted poor countries (HIPC)',
'Low income',
'Small states',
'Other small states',
'Not classified',
'Caribbean small states',
'Pacific island small states']
# filter out country name values that are in the above list
if country not in non_countries:
# In this section, you'll convert the single row of data into a data frame
# The advantage of converting a single row of data into a data frame is that you can
# re-use code from earlier in the lesson to clean the data
# TODO: convert the data variable into a numpy array
# Use the ndmin=2 option
data_array = np.array(data, ndmin=2)
# TODO: reshape the data_array so that it is one row and 63 columns
data_array = data_array.reshape(1, 63)
# TODO: convert the data_array variable into a pandas dataframe
# Note that you can specify the column names as well using the colnames variable
# Also, replace all empty strings in the dataframe with nan (HINT: Use the replace module and np.nan)
df = pd.DataFrame(data_array, columns=colnames).replace('', np.nan)
# TODO: Drop the 'Indicator Name' and 'Indicator Code' columns
df.drop(['Indicator Name', 'Indicator Code'], inplace=True, axis=1)
# TODO: Reshape the data sets so that they are in long format
# The id_vars should be Country Name and Country Code
# You can name the variable column year and the value column gdp
# HINT: Use the pandas melt() method
# HINT: This was already done in a previous exercise
df_melt = df.melt(id_vars=['Country Name', 'Country Code'], var_name='year', value_name='gdp')
# TODO: Iterate through the rows in df_melt
# For each row, extract the country, countrycode, year, and gdp values into a list like this:
# [country, countrycode, year, gdp]
# If the gdp value is not null, append the row (in the form of a list) to the results variable
# Finally, return the results list after iterating through the df_melt data
# HINT: the iterrows() method would be useful
# HINT: to check if gdp is equal to nan, you might want to convert gdp to a string and compare to the
# string 'nan'
results = []
for index, row in df_melt.iterrows():
country, countrycode, year, gdp = row
if str(gdp) != 'nan':
results.append([country, countrycode, year, gdp])
return results
# TODO: fill out the code wherever you find a TODO in this cell
# This function loads data into the gdp table of the worldbank.db database
# The input is a list of data outputted from the transformation step that looks like this:
# [[Aruba, ABW, 1994, 1.330168e+09], [Aruba, ABW, 1995, 1.320670e+09], ...]
# The function does not return anything. Instead, the function iterates through the input and inserts each
# value into the gdp data set.
def load_indicator_data(results):
# TODO: connect to the worldbank.db database using the sqlite3 library
conn = sqlite3.connect('worldbank.db')
# TODO: create a cursor object
cur = conn.cursor()
if results:
# iterate through the results variable and insert each result into the gdp table
for result in results:
# TODO: extract the countryname, countrycode, year, and gdp from each iteration
countryname, countrycode, year, gdp = result
# TODO: prepare a query to insert a countryname, countrycode, year, gdp value
sql_string = 'INSERT INTO gdp (countryname, countrycode, year, gdp) VALUES ("{}", "{}", "{}", "{}");'.format(countryname, countrycode, year, gdp)
# connect to database and execute query
try:
cur.execute(sql_string)
# print out any errors (like if the primary key constraint is violated)
except Exception as e:
print('error occurred:', e, result)
# commit changes and close the connection
conn.commit()
conn.close()
return None
# Execute this code cell to run the ETL pipeline
# You do not need to change anything in this cell
# open the data file
with open('../data/gdp_data.csv') as f:
# execute the generator to read in the file line by line
for line in extract_lines(f):
# split the comma separated values
data = line.split(',')
# check the length of the line because the first four lines of the csv file are not data
if len(data) == 63:
# check if the line represents column names
if data[0] == '"Country Name"':
colnames = []
# get rid of quote marks in the results to make the data easier to work with
for i, datum in enumerate(data):
colnames.append(datum.replace('"',''))
else:
# transform and load the line of indicator data
results = transform_indicator_data(data, colnames)
load_indicator_data(results)
# Execute this code cell to output the values in the gdp table
# You do not need to change anything in this cell
# connect to the database
# the database file will be worldbank.db
# note that sqlite3 will create this database file if it does not exist already
conn = sqlite3.connect('worldbank.db')
# get a cursor
cur = conn.cursor()
# read the gdp table into a pandas dataframe
df = pd.read_sql("SELECT * FROM gdp", con=conn)
conn.commit()
conn.close()
df
###Output
_____no_output_____
###Markdown
Final Exercise - Putting it All TogetherIn this last exercise, you'll write a full ETL pipeline for the GDP data. That means you'll extract the World Bank data, transform the data, and load the data all in one go. In other words, you'll want one Python script that can do the entire process.Why would you want to do this? Imagine working for a company that creates new data every day. As new data comes in, you'll want to write software that periodically and automatically extracts, transforms, and loads the data.To give you a sense for what this is like, you'll extract the GDP data one line at a time. You'll then transform that line of data and load the results into a SQLite database. The code in this exercise is somewhat tricky.Here is an explanation of how this Jupyter notebook is organized:1. The first cell connects to a SQLite database called worldbank.db and creates a table to hold the gdp data. You do not need to do anything in this code cell other than executing the cell.2. The second cell has a function called extract_lines(). You don't need to do anything in this code cell either besides executing the cell. This function is a [Python generator](https://wiki.python.org/moin/Generators). You don't need to understand how this works in order to complete the exercise. Essentially, a generator is like a regular function except instead of a return statement, a generator has a yield statement. Generators allow you to use functions in a for loop. In essence, this function will allow you to read in a data file one line at a time, run a transformation on that row of data, and then move on to the next row in the file.3. The third cell contains a function called transform_indicator_data(). This function receives a line from the csv file and transforms the data in preparation for a load step.4. The fourth cell contains a function called load_indicator_data(), which loads the transformed data into the gdp table in the worldbank.db database.5. The fifth cell runs the ETL pipeline.6. The sixth cell runs a query against the database to make sure everything worked correctly.You'll need to modify the third and fourth cells.
###Code
# run this cell to create a database and a table, called gdp, to hold the gdp data
# You do not need to change anything in this code cell
import sqlite3
# connect to the database
# the database file will be worldbank.db
# note that sqlite3 will create this database file if it does not exist already
conn = sqlite3.connect('worldbank.db')
# get a cursor
cur = conn.cursor()
# drop the test table in case it already exists
cur.execute("DROP TABLE IF EXISTS gdp")
# create the gdp table, with (countrycode, year) as a composite primary key
cur.execute("CREATE TABLE gdp (countryname TEXT, countrycode TEXT, year INTEGER, gdp REAL, PRIMARY KEY (countrycode, year));")
conn.commit()
conn.close()
# Generator for reading in one line at a time
# generators are useful for data sets that are too large to fit in RAM
# You do not need to change anything in this code cell
def extract_lines(file):
while True:
line = file.readline()
if not line:
break
yield line
# TODO: fill out the code wherever you find a TODO in this cell
# This function has two inputs:
# data, which is a row of data from the gdp csv file
# colnames, which is a list of column names from the csv file
# The output should be a list of [countryname, countrycode, year, gdp] values
# In other words, the output would look like:
# [[Aruba, ABW, 1994, 1.330168e+09], [Aruba, ABW, 1995, 1.320670e+09], ...]
#
import pandas as pd
import numpy as np
import sqlite3
# transform the indicator data
def transform_indicator_data(data, colnames):
# get rid of quote marks
for i, datum in enumerate(data):
data[i] = datum.replace('"','')
# TODO: the data variable contains a list of data in the form [countryname, countrycode, 1960, 1961, 1962,...]
# since this is the format of the data in the csv file. Extract the countryname from the list
# and put the result in the country variable
country = data[0]
# these are "countryname" values that are not actually countries
non_countries = ['World',
'High income',
'OECD members',
'Post-demographic dividend',
'IDA & IBRD total',
'Low & middle income',
'Middle income',
'IBRD only',
'East Asia & Pacific',
'Europe & Central Asia',
'North America',
'Upper middle income',
'Late-demographic dividend',
'European Union',
'East Asia & Pacific (excluding high income)',
'East Asia & Pacific (IDA & IBRD countries)',
'Euro area',
'Early-demographic dividend',
'Lower middle income',
'Latin America & Caribbean',
'Latin America & the Caribbean (IDA & IBRD countries)',
'Latin America & Caribbean (excluding high income)',
'Europe & Central Asia (IDA & IBRD countries)',
'Middle East & North Africa',
'Europe & Central Asia (excluding high income)',
'South Asia (IDA & IBRD)',
'South Asia',
'Arab World',
'IDA total',
'Sub-Saharan Africa',
'Sub-Saharan Africa (IDA & IBRD countries)',
'Sub-Saharan Africa (excluding high income)',
'Middle East & North Africa (excluding high income)',
'Middle East & North Africa (IDA & IBRD countries)',
'Central Europe and the Baltics',
'Pre-demographic dividend',
'IDA only',
'Least developed countries: UN classification',
'IDA blend',
'Fragile and conflict affected situations',
'Heavily indebted poor countries (HIPC)',
'Low income',
'Small states',
'Other small states',
'Not classified',
'Caribbean small states',
'Pacific island small states']
# filter out country name values that are in the above list
if country not in non_countries:
# In this section, you'll convert the single row of data into a data frame
# The advantage of converting a single row of data into a data frame is that you can
# re-use code from earlier in the lesson to clean the data
# TODO: convert the data variable into a numpy array
# Use the ndmin=2 option
data_array = np.array(data, ndmin=2)
# TODO: reshape the data_array so that it is one row and 63 columns
data_array = data_array.reshape(1, 63)
# TODO: convert the data_array variable into a pandas dataframe
# Note that you can specify the column names as well using the colnames variable
# Also, replace all empty strings in the dataframe with nan (HINT: Use the replace module and np.nan)
df = pd.DataFrame(data_array, columns=colnames).replace("", np.nan).drop(columns=['Indicator Name', 'Indicator Code', "\n"])
# TODO: Drop the 'Indicator Name' and 'Indicator Code' columns
# TODO: Reshape the data sets so that they are in long format
# The id_vars should be Country Name and Country Code
# You can name the variable column year and the value column gdp
# HINT: Use the pandas melt() method
# HINT: This was already done in a previous exercise
df_melt = df.melt(id_vars=["Country Name", "Country Code"], var_name="year", value_name="gdp")
# TODO: Iterate through the rows in df_melt
# For each row, extract the country, countrycode, year, and gdp values into a list like this:
# [country, countrycode, year, gdp]
# If the gdp value is not null, append the row (in the form of a list) to the results variable
# Finally, return the results list after iterating through the df_melt data
# HINT: the iterrows() method would be useful
# HINT: to check if gdp is equal to nan, you might want to convert gdp to a string and compare to the
# string 'nan'
results = []
for _, (country, countrycode, year, gdp) in df_melt.loc[:, ["Country Name", "Country Code", "year", "gdp"]].iterrows():
if str(gdp) != 'nan':
results.append([country, countrycode, year, gdp])
return results
# TODO: fill out the code wherever you find a TODO in this cell
# This function loads data into the gdp table of the worldbank.db database
# The input is a list of data outputted from the transformation step that looks like this:
# [[Aruba, ABW, 1994, 1.330168e+09], [Aruba, ABW, 1995, 1.320670e+09], ...]
# The function does not return anything. Instead, the function iterates through the input and inserts each
# value into the gdp data set.
def load_indicator_data(results):
# TODO: connect to the worldbank.db database using the sqlite3 library
conn = sqlite3.connect('worldbank.db')
# TODO: create a cursor object
cur = conn.cursor()
if results:
# iterate through the results variable and insert each result into the gdp table
for result in results:
# TODO: extract the countryname, countrycode, year, and gdp from each iteration
countryname, countrycode, year, gdp = result
# TODO: prepare a query to insert a countryname, countrycode, year, gdp value
sql_string = 'INSERT INTO gdp (countryname, countrycode, year, gdp) VALUES (?,?,?, ?)'
# connect to database and execute query
try:
cur.execute(sql_string, (countryname, countrycode, year, gdp))
# print out any errors (like if the primary key constraint is violated)
except Exception as e:
print('error occurred:', e, result)
# commit changes and close the connection
conn.commit()
conn.close()
return None
# Execute this code cell to run the ETL pipeline
# You do not need to change anything in this cell
# open the data file
with open('../data/gdp_data.csv') as f:
# execute the generator to read in the file line by line
for line in extract_lines(f):
# split the comma separated values
data = line.split(',')
# check the length of the line because the first four lines of the csv file are not data
if len(data) == 63:
# check if the line represents column names
if data[0] == '"Country Name"':
colnames = []
# get rid of quote marks in the results to make the data easier to work with
for i, datum in enumerate(data):
colnames.append(datum.replace('"',''))
else:
# transform and load the line of indicator data
results = transform_indicator_data(data, colnames)
load_indicator_data(results)
# Execute this code cell to output the values in the gdp table
# You do not need to change anything in this cell
# connect to the database
# the database file will be worldbank.db
# note that sqlite3 will create this database file if it does not exist already
conn = sqlite3.connect('worldbank.db')
# get a cursor
cur = conn.cursor()
# read the gdp table into a pandas dataframe
df = pd.read_sql("SELECT * FROM gdp", con=conn)
conn.commit()
conn.close()
df
###Output
_____no_output_____
###Markdown
Final Exercise - Putting it All TogetherIn this last exercise, you'll write a full ETL pipeline for the GDP data. That means you'll extract the World Bank data, transform the data, and load the data all in one go. In other words, you'll want one Python script that can do the entire process.Why would you want to do this? Imagine working for a company that creates new data every day. As new data comes in, you'll want to write software that periodically and automatically extracts, transforms, and loads the data.To give you a sense for what this is like, you'll extract the GDP data one line at a time. You'll then transform that line of data and load the results into a SQLite database. The code in this exercise is somewhat tricky.Here is an explanation of how this Jupyter notebook is organized:1. The first cell connects to a SQLite database called worldbank.db and creates a table to hold the gdp data. You do not need to do anything in this code cell other than executing the cell.2. The second cell has a function called extract_lines(). You don't need to do anything in this code cell either besides executing the cell. This function is a [Python generator](https://wiki.python.org/moin/Generators). You don't need to understand how this works in order to complete the exercise. Essentially, a generator is like a regular function except instead of a return statement, a generator has a yield statement. Generators allow you to use functions in a for loop. In essence, this function will allow you to read in a data file one line at a time, run a transformation on that row of data, and then move on to the next row in the file.3. The third cell contains a function called transform_indicator_data(). This function receives a line from the csv file and transforms the data in preparation for a load step.4. The fourth cell contains a function called load_indicator_data(), which loads the transformed data into the gdp table in the worldbank.db database.5. The fifth cell runs the ETL pipeline.6. The sixth cell runs a query against the database to make sure everything worked correctly.You'll need to modify the third and fourth cells.
###Code
# run this cell to create a database and a table, called gdp, to hold the gdp data
# You do not need to change anything in this code cell
import sqlite3
# connect to the database
# the database file will be worldbank.db
# note that sqlite3 will create this database file if it does not exist already
conn = sqlite3.connect('worldbank.db')
# get a cursor
cur = conn.cursor()
# drop the test table in case it already exists
cur.execute("DROP TABLE IF EXISTS gdp")
# create the gdp table, with (countrycode, year) as a composite primary key
cur.execute("CREATE TABLE gdp (countryname TEXT, countrycode TEXT, year INTEGER, gdp REAL, PRIMARY KEY (countrycode, year));")
conn.commit()
conn.close()
# Generator for reading in one line at a time
# generators are useful for data sets that are too large to fit in RAM
# You do not need to change anything in this code cell
def extract_lines(file):
while True:
line = file.readline()
if not line:
break
yield line
# TODO: fill out the code wherever you find a TODO in this cell
# This function has two inputs:
# data, which is a row of data from the gdp csv file
# colnames, which is a list of column names from the csv file
# The output should be a list of [countryname, countrycode, year, gdp] values
# In other words, the output would look like:
# [[Aruba, ABW, 1994, 1.330168e+09], [Aruba, ABW, 1995, 1.320670e+09], ...]
#
import pandas as pd
import numpy as np
import sqlite3
import math
# transform the indicator data
def transform_indicator_data(data, colnames):
# get rid of quote marks
for i, datum in enumerate(data):
data[i] = datum.replace('"','')
# TODO: the data variable contains a list of data in the form [countryname, countrycode, 1960, 1961, 1962,...]
# since this is the format of the data in the csv file. Extract the countryname from the list
# and put the result in the country variable
country = data[0]
# these are "countryname" values that are not actually countries
non_countries = ['World',
'High income',
'OECD members',
'Post-demographic dividend',
'IDA & IBRD total',
'Low & middle income',
'Middle income',
'IBRD only',
'East Asia & Pacific',
'Europe & Central Asia',
'North America',
'Upper middle income',
'Late-demographic dividend',
'European Union',
'East Asia & Pacific (excluding high income)',
'East Asia & Pacific (IDA & IBRD countries)',
'Euro area',
'Early-demographic dividend',
'Lower middle income',
'Latin America & Caribbean',
'Latin America & the Caribbean (IDA & IBRD countries)',
'Latin America & Caribbean (excluding high income)',
'Europe & Central Asia (IDA & IBRD countries)',
'Middle East & North Africa',
'Europe & Central Asia (excluding high income)',
'South Asia (IDA & IBRD)',
'South Asia',
'Arab World',
'IDA total',
'Sub-Saharan Africa',
'Sub-Saharan Africa (IDA & IBRD countries)',
'Sub-Saharan Africa (excluding high income)',
'Middle East & North Africa (excluding high income)',
'Middle East & North Africa (IDA & IBRD countries)',
'Central Europe and the Baltics',
'Pre-demographic dividend',
'IDA only',
'Least developed countries: UN classification',
'IDA blend',
'Fragile and conflict affected situations',
'Heavily indebted poor countries (HIPC)',
'Low income',
'Small states',
'Other small states',
'Not classified',
'Caribbean small states',
'Pacific island small states']
# filter out country name values that are in the above list
if country not in non_countries:
# In this section, you'll convert the single row of data into a data frame
# The advantage of converting a single row of data into a data frame is that you can
# re-use code from earlier in the lesson to clean the data
# TODO: convert the data variable into a numpy array
# Use the ndmin=2 option
data_array = np.array(data, ndmin=2)
# TODO: reshape the data_array so that it is one row and 63 columns
data_array = data_array.reshape(1, 63)
# TODO: convert the data_array variable into a pandas dataframe
# Note that you can specify the column names as well using the colnames variable
# Also, replace all empty strings in the dataframe with nan (HINT: Use the replace module and np.nan)
df = pd.DataFrame(data_array, columns=colnames).replace('', np.nan)
# TODO: Drop the 'Indicator Name' and 'Indicator Code' columns
# TODO: Reshape the data sets so that they are in long format
# The id_vars should be Country Name and Country Code
# You can name the variable column year and the value column gdp
# HINT: Use the pandas melt() method
# HINT: This was already done in a previous exercise
df_melt = pd.melt(df, id_vars=['Country Name', 'Country Code'], var_name='year', value_name='gdp')
# TODO: Iterate through the rows in df_melt
# For each row, extract the country, countrycode, year, and gdp values into a list like this:
# [country, countrycode, year, gdp]
# If the gdp value is not null, append the row (in the form of a list) to the results variable
# Finally, return the results list after iterating through the df_melt data
# HINT: the iterrows() method would be useful
# HINT: to check if gdp is equal to nan, you might want to convert gdp to a string and compare to the
# string 'nan'
results = []
for _, var in df_melt.iterrows():
country, countrycode, year, gdp = tuple(var)
try:
gdp = float(gdp)
if not math.isnan(gdp):
results.append(list(var))
except ValueError:
pass
return results
# TODO: fill out the code wherever you find a TODO in this cell
# This function loads data into the gdp table of the worldbank.db database
# The input is a list of data outputted from the transformation step that looks like this:
# [[Aruba, ABW, 1994, 1.330168e+09], [Aruba, ABW, 1995, 1.320670e+09], ...]
# The function does not return anything. Instead, the function iterates through the input and inserts each
# value into the gdp data set.
def load_indicator_data(results):
# TODO: connect to the worldbank.db database using the sqlite3 library
conn = sqlite3.connect('worldbank.db')
# TODO: create a cursor object
cur = conn.cursor()
if results:
# iterate through the results variable and insert each result into the gdp table
for result in results:
# TODO: extract the countryname, countrycode, year, and gdp from each iteration
countryname, countrycode, year, gdp = tuple(result)
# TODO: prepare a query to insert a countryname, countrycode, year, gdp value
sql_string = \
"INSERT INTO gdp (countryname, countrycode, year, gdp) VALUES (?, ?, ?, ?);"
# connect to database and execute query
try:
cur.execute(sql_string,
(str(countryname), str(countrycode), str(year), float(gdp)))
# print out any errors (like if the primary key constraint is violated)
except Exception as e:
print('error occurred:', e, result)
# commit changes and close the connection
conn.commit()
conn.close()
return None
# Execute this code cell to run the ETL pipeline
# You do not need to change anything in this cell
# open the data file
with open('../data/gdp_data.csv') as f:
# execute the generator to read in the file line by line
for line in extract_lines(f):
# split the comma separated values
data = line.split(',')
# check the length of the line because the first four lines of the csv file are not data
if len(data) == 63:
# check if the line represents column names
if data[0] == '"Country Name"':
colnames = []
# get rid of quote marks in the results to make the data easier to work with
for i, datum in enumerate(data):
colnames.append(datum.replace('"',''))
else:
# transform and load the line of indicator data
results = transform_indicator_data(data, colnames)
load_indicator_data(results)
# Execute this code cell to output the values in the gdp table
# You do not need to change anything in this cell
# connect to the database
# the database file will be worldbank.db
# note that sqlite3 will create this database file if it does not exist already
conn = sqlite3.connect('worldbank.db')
# get a cursor
cur = conn.cursor()
# read the gdp table into a pandas dataframe
df = pd.read_sql("SELECT * FROM gdp", con=conn)
conn.commit()
conn.close()
df
###Output
_____no_output_____ |
Garden Walk Real Estates Model.ipynb | ###Markdown
Train Test Splitting
###Code
# Manual train/test split, written here for learning; sklearn's train_test_split below does the same thing
import numpy as np
def split_train_test(data, test_ratio):
np.random.seed(42)
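# (added note) fixing the seed makes the permutation reproducible, so the same rows land in the test set on every run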
shuffled = np.random.permutation(len(data))
test_set_size = int(len(data) * test_ratio)
test_indices = shuffled[:test_set_size]
train_indices = shuffled[test_set_size:]
return data.iloc[train_indices], data.iloc[test_indices]
# train_set, test_set = split_train_test(housingdata, 0.2)
# print(f"Rows in train set: {len(train_set)}\nRows in test set: {len(test_set)}\n")
from sklearn.model_selection import train_test_split
train_set, test_set = train_test_split(housingdata, test_size=0.2, random_state=42)
print(f"Rows in train set: {len(train_set)}\nRows in test set: {len(test_set)}\n")
from sklearn.model_selection import StratifiedShuffleSplit
split = StratifiedShuffleSplit(n_splits=1, test_size=0.2, random_state=42)
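# (added note) stratifying on the CHAS column (a binary feature) keeps the proportion of its values
# the same in the train and test splits; the value_counts() calls below confirm this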
for train_index, test_index in split.split(housingdata, housingdata['CHAS ']):
strat_train_set = housingdata.loc[train_index]
strat_test_set = housingdata.loc[test_index]
strat_test_set.describe()
strat_test_set['CHAS '].value_counts()
strat_train_set['CHAS '].value_counts()
housingdata = strat_train_set.copy()
###Output
_____no_output_____
###Markdown
Looking for Correlations
###Code
corr_matrix = housingdata.corr()
corr_matrix['MEDV'].sort_values(ascending=False)
from pandas.plotting import scatter_matrix
attributes = ["MEDV", "RM", "ZN", "LSTAT"]
scatter_matrix(housingdata[attributes], figsize=(16,12))
housingdata.plot(kind="scatter", x="RM", y="MEDV", alpha=0.8)
###Output
_____no_output_____
###Markdown
Trying out Attribute Combinations
###Code
housingdata["TAXPRM"] = housingdata["TAX"]/housingdata["RM"]
housingdata["TAXPRM"]
housingdata.head()
corr_matrix = housingdata.corr()
corr_matrix['MEDV'].sort_values(ascending=False)
housingdata.plot(kind="scatter", x="TAXPRM", y="MEDV", alpha=0.8)
housingdata = strat_train_set.drop("MEDV", axis=1)
housingdata_labels = strat_train_set["MEDV"].copy()
###Output
_____no_output_____
###Markdown
Missing Attributes
###Code
# To take care of missing attributes, we have three options:
# 1> Get rid of the missing data points
# 2> Get rid of the whole attribute
# 3> Set the value to some value (0, mean or median)
# Option 1
a = housingdata.dropna(subset=["RM"])
a.shape
# it removes the rows with null values, but the original housing data remains unchanged and we receive a copy in a
# Option 2
housingdata.drop("RM", axis=1).shape
# it removes the RM attribute, but the original dataframe is unchanged
# Option 3
# We're fitting the median in place of missing data points
median = housingdata["RM"].median()
median
housingdata["RM"].fillna(median)
# Note: The original data frame is unchanged
housingdata.shape
# Before imputing, the housing data looks like this
# (i.e. before we started filling in missing attributes)
housingdata.describe()
# Here RM's count is: 400 before imputer
# Doing the above with sklearn
from sklearn.impute import SimpleImputer
imputer = SimpleImputer(strategy="median")
imputer.fit(housingdata)
imputer.statistics_
imputer.statistics_.shape
X = imputer.transform(housingdata)
housingdata_tr = pd.DataFrame(X, columns=housingdata.columns)
housingdata_tr.describe()
# Here RM's count is 404; after the imputer, the missing attributes have been filled
###Output
_____no_output_____
###Markdown
Scikit-learn Design
This library primarily provides three types of objects:
1. Estimators - estimate some parameters based on the dataset, e.g. the imputer. An estimator has fit() and transform() methods; the fit method fits the dataset and calculates internal parameters.
2. Transformers - take an input and return an output based on what was learned by fit(). They also provide a convenience function called fit_transform(), which fits and then transforms.
3. Predictors - the LinearRegression model and KNN are examples of predictors. A predictor has fit() and predict() methods, and also provides a score() function which evaluates the predictions.
Feature Scaling
Feature scaling is used when we want all of our features in the same numerical range, e.g. our MEDV goes from 10 to 50 while ZN goes from 0 to 100. Some ML models perform much better when the features share the same numerical scale.
There are primarily two feature scaling methods:
1. Min-max scaling, also called Normalization: (value - min) / (max - min). Sklearn provides a class called MinMaxScaler for this.
2. Standardization: (value - mean) / std. Sklearn provides a class called StandardScaler for this.
Creating a Pipeline
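Before assembling this dataset's actual pipeline in the next cell, here is a minimal, purely illustrative sketch of the two scalers described above (the toy values are made up and are not taken from the housing data):

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler, StandardScaler

# toy column with made-up values, purely for illustration
col = np.array([[10.0], [20.0], [35.0], [50.0]])

# min-max scaling (normalization): (value - min) / (max - min) -> values in [0, 1]
print(MinMaxScaler().fit_transform(col).ravel())

# standardization: (value - mean) / std -> zero mean, unit variance
print(StandardScaler().fit_transform(col).ravel())
```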
###Code
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
my_pipeline = Pipeline([
('imputer', SimpleImputer(strategy="median")),
# ...... we can add as many as we want in out pipeline
('std_scaler', StandardScaler())
])
housingdata_num_tr = my_pipeline.fit_transform(housingdata)
housingdata_num_tr
housingdata_num_tr.shape
###Output
_____no_output_____
###Markdown
Selecting a desired model for Garden Walk Real Estates
###Code
from sklearn.linear_model import LinearRegression
# Trying another one
from sklearn.tree import DecisionTreeRegressor
# Trying another one
from sklearn.ensemble import RandomForestRegressor
# model = LinearRegression()
# model = DecisionTreeRegressor()
model = RandomForestRegressor()
model.fit(housingdata_num_tr, housingdata_labels)
some_data = housingdata.iloc[:5]
some_labels = housingdata_labels.iloc[:5]
prepared_data = my_pipeline.transform(some_data)
model.predict(prepared_data)
list(some_labels)
###Output
_____no_output_____
###Markdown
Evaluating the model
###Code
from sklearn.metrics import mean_squared_error
housing_predictions = model.predict(housingdata_num_tr)
mse = mean_squared_error(housingdata_labels, housing_predictions)
rmse = np.sqrt(mse)
rmse
# Here the error for the LinearRegression model is very high, 23.28628016032237, so that's why we won't use it
# The DecisionTree overfits the data, which is why we're getting an MSE of 0.0
###Output
_____no_output_____
###Markdown
Using a better evaluation technique - Cross Validation
###Code
from sklearn.model_selection import cross_val_score
scores = cross_val_score(model, housingdata_num_tr, housingdata_labels, scoring="neg_mean_squared_error", cv=10)
rmse_score = np.sqrt(-scores)
rmse_score
def print_scores(scores):
print("Scores: ", scores)
print("Mean: ", scores.mean())
print("Standard Deviation", scores.std())
print_scores(rmse_score)
# Decision Tree
# Mean: 3.969124107936868
# Standard Deviation 0.5567267457675251
# Linear Regression
# Mean: 5.025326263156095
# Standard Deviation 1.0631151808849306
# RandomForest Regressor
# Mean: 3.2661950722010182
# Standard Deviation 0.6999361385245859
###Output
_____no_output_____
###Markdown
Saving the model
###Code
from joblib import dump, load
dump(model, 'Graden_Walk.joblib')
###Output
_____no_output_____
###Markdown
Testing the model on test data
###Code
X_test = strat_test_set.drop("MEDV", axis=1)
y_test = strat_test_set["MEDV"].copy()
X_test_prepared = my_pipeline.transform(X_test)
final_predictions = model.predict(X_test_prepared)
final_mse = mean_squared_error(y_test, final_predictions)
final_rmse = np.sqrt(final_mse)
print(final_predictions, list(y_test))
final_rmse
###Output
_____no_output_____
###Markdown
Using the model
###Code
from joblib import dump, load
model = load('Graden_Walk.joblib')
features = np.array([[-0.43942006, 1.12628155, -3.12165014, -0.27288841, -1.42262747,
0.8323753721, -3.31238772, 2.62111401, -1.0016859 , -0.232778192 ,
-0.17491834, 0.41164221, -0.1122091034]])
model.predict(features)
###Output
_____no_output_____ |
DTreeRealRoad.ipynb | ###Markdown
Connecting Google Drive to Google Colab
###Code
from google.colab import drive
drive.mount('/content/drive')
###Output
Drive already mounted at /content/drive; to attempt to forcibly remount, call drive.mount("/content/drive", force_remount=True).
###Markdown
Required libraries
###Code
import sklearn.tree as tree
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from sklearn.tree import DecisionTreeClassifier, DecisionTreeRegressor,plot_tree
import pydotplus
from sklearn.externals.six import StringIO
from IPython.display import Image
###Output
_____no_output_____
###Markdown
Reading the data
###Code
#file_table="/content/drive/My Drive/melanch/data _Ulitca 17.04.xlsx"
file_table="/content/drive/My Drive/Диссертация/data/data _Ulitca 17.04.xlsx"
# Read the input data, which is stored in Sheet 1
NList=1
DataListID=(NList-1)*2
df=pd.read_excel(file_table,sheet_name=DataListID)
# Read the attribute names, which are stored in Sheet_1
AttrListID=1
dft=pd.read_excel(file_table,sheet_name=AttrListID)
###Output
_____no_output_____
###Markdown
Looking at the street's location on the map
###Code
print("Name: ",df.iloc[0]["Name"])
print("Map: ",df.iloc[0].GPS)
###Output
Name: ул. Промышленная
Map: "https://yandex.ru/maps/2/saint-petersburg/?ll=30.273518%2C59.899136&mode=usermaps&source=constructorLink&um=constructor%3A536dd7a9b4f7fb9a8a1e80fccbac7dd471fdee9fc4e273d30f1a731f20a8b4ac&z=16"
###Markdown
Using all the tables
###Code
df_from_each_file=(pd.read_excel(file_table,sheet_name=i) for i in range(0,12,2))
dfvec=pd.concat(df_from_each_file, ignore_index=True).fillna(0,downcast='infer')
df=dfvec
# We need to reformat the attribute table
dft
###Output
_____no_output_____
###Markdown
Data preparation
###Code
# Меняем формат таблиц атрибутов, чтобы попроще добраться до названия атрибутов
# Например: dft90.Attr1
dft90=dft.T.copy()
dft90.columns=dft["Reference"]
dft90=dft90.drop(index="Reference")
attr_cols = [c for c in df if c.startswith('Attr')]
target_names=dft90[attr_cols].iloc[0].values
feature_names=df["PC"].values
# Split the data into features and targets
# feature names
X=df[attr_cols]
# choose the column for classification, the target
Y=df["PC"]
###Output
_____no_output_____
###Markdown
Checking the data
###Code
dft90
df
###Output
_____no_output_____
###Markdown
Building the decision tree
###Code
# Build the classification tree from X and Y
clf = tree.DecisionTreeClassifier()
clf = clf.fit(X,Y)
attr_cols.append("PC")
###Output
_____no_output_____
###Markdown
Visualizing the decision tree
###Code
dot_data = StringIO()
tree.export_graphviz(clf,
out_file=dot_data,
class_names=target_names, # the target names.
filled=True, # Whether to fill in the boxes with colours.
rounded=True, # Whether to round the corners of the boxes.
special_characters=True)
graph = pydotplus.graph_from_dot_data(dot_data.getvalue())
Image(graph.create_png())
###Output
_____no_output_____
###Markdown
Testing the decision tree
###Code
# Test the tree on data that was already used to build it
# The test input data is the same as the data used to train the tree
# This shows that the libraries were used correctly and the tree was trained successfully
def TestSec2PC(SecN):
PC=clf.predict([X.iloc[SecN].values])
print("Мы тестируем Sec = "+str(SecN)+" и получаем результат: нам нужен пешеходный переход на "+df['Name'].iloc[SecN]+": "+str(PC), " До этого было "+df.iloc[SecN].PC)
for SecN in range(0,10):
TestSec2PC(SecN)
# Read new input data for testing, which contains no information about crossings
file_table="/content/drive/My Drive/Диссертация/data/data _Ulitca 17.04.xlsx"
#file_table="/content/drive/My Drive/melanch/data _Ulitca 17.04.xlsx"
# Read the input data, which is stored in Sheet 7
NList=7
DataListID=(NList-1)*2
dfTest=pd.read_excel(file_table,sheet_name=DataListID)
print("Name: ",dfTest.iloc[0]["Name"])
print("Map: ",dfTest.iloc[0].GPS)
dfTest
# Prepare the input data for testing
# Since our tree was trained on 19 input attributes, we need to prepare an input table with the same number of attributes
attr_cols_test = [c for c in dfTest if c.startswith('Attr')]
dfTestX=X.copy(deep=True)
dfTestX.drop(dfTestX.index, inplace=True)
dfTestX[attr_cols_test]=dfTest[attr_cols_test]
dfTestX.fillna(0,inplace=True,downcast='infer')# Set unknown values to 0, since they do not affect the decision result
# Test the decision tree on the test input data
for SecN in range(0,10):
PC=clf.predict([dfTestX.iloc[SecN].values])
print("Мы тестируем Sec = "+str(SecN)+" и получаем результат: нам нужен пешеходный переход на "+dfTest['Name'].iloc[SecN]+": "+str(PC), " До этого было "+dfTest.iloc[SecN].PC)
###Output
Мы тестируем Sec = 0 и получаем результат: нам нужен пешеходный переход на пер. Челиева: ['NO'] До этого было NO
Мы тестируем Sec = 1 и получаем результат: нам нужен пешеходный переход на пер. Челиева: ['NO'] До этого было NO
Мы тестируем Sec = 2 и получаем результат: нам нужен пешеходный переход на пер. Челиева: ['NO'] До этого было NO
Мы тестируем Sec = 3 и получаем результат: нам нужен пешеходный переход на пер. Челиева: ['NO'] До этого было NO
Мы тестируем Sec = 4 и получаем результат: нам нужен пешеходный переход на пер. Челиева: ['NO'] До этого было NO
Мы тестируем Sec = 5 и получаем результат: нам нужен пешеходный переход на пер. Челиева: ['NO'] До этого было NO
Мы тестируем Sec = 6 и получаем результат: нам нужен пешеходный переход на пер. Челиева: ['NO'] До этого было NO
Мы тестируем Sec = 7 и получаем результат: нам нужен пешеходный переход на пер. Челиева: ['YES'] До этого было NO
Мы тестируем Sec = 8 и получаем результат: нам нужен пешеходный переход на пер. Челиева: ['YES'] До этого было NO
Мы тестируем Sec = 9 и получаем результат: нам нужен пешеходный переход на пер. Челиева: ['YES'] До этого было NO
|
Python/Square/1MiceProtein/UFS_10.ipynb | ###Markdown
1. Import libraries
###Code
#----------------------------Reproducible----------------------------------------------------------------------------------------
import numpy as np
import tensorflow as tf
import random as rn
import os
seed=0
os.environ['PYTHONHASHSEED'] = str(seed)
np.random.seed(seed)
rn.seed(seed)
#session_conf = tf.ConfigProto(intra_op_parallelism_threads=1, inter_op_parallelism_threads=1)
session_conf =tf.compat.v1.ConfigProto(intra_op_parallelism_threads=1, inter_op_parallelism_threads=1)
from keras import backend as K
#tf.set_random_seed(seed)
tf.compat.v1.set_random_seed(seed)
#sess = tf.Session(graph=tf.get_default_graph(), config=session_conf)
sess = tf.compat.v1.Session(graph=tf.compat.v1.get_default_graph(), config=session_conf)
K.set_session(sess)
#----------------------------Reproducible----------------------------------------------------------------------------------------
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
#--------------------------------------------------------------------------------------------------------------------------------
from keras.datasets import mnist
from keras.models import Model
from keras.layers import Dense, Input, Flatten, Activation, Dropout, Layer
from keras.layers.normalization import BatchNormalization
from keras.utils import to_categorical
from keras import optimizers,initializers,constraints,regularizers
from keras import backend as K
from keras.callbacks import LambdaCallback,ModelCheckpoint
from keras.utils import plot_model
from sklearn.model_selection import StratifiedKFold
from sklearn.ensemble import ExtraTreesClassifier
from sklearn import svm
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import ShuffleSplit
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from sklearn.svm import SVC
import h5py
import math
import matplotlib
import matplotlib.pyplot as plt
import matplotlib.cm as cm
%matplotlib inline
matplotlib.style.use('ggplot')
import pandas as pd
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import train_test_split
import scipy.sparse as sparse
#--------------------------------------------------------------------------------------------------------------------------------
#Import our self-defined methods
import sys
sys.path.append(r"./Defined")
import Functions as F
# The following code should be added before the keras model
#np.random.seed(seed)
###Output
/usr/local/lib/python3.7/site-packages/tensorflow/python/framework/dtypes.py:516: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint8 = np.dtype([("qint8", np.int8, 1)])
/usr/local/lib/python3.7/site-packages/tensorflow/python/framework/dtypes.py:517: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint8 = np.dtype([("quint8", np.uint8, 1)])
/usr/local/lib/python3.7/site-packages/tensorflow/python/framework/dtypes.py:518: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint16 = np.dtype([("qint16", np.int16, 1)])
/usr/local/lib/python3.7/site-packages/tensorflow/python/framework/dtypes.py:519: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint16 = np.dtype([("quint16", np.uint16, 1)])
/usr/local/lib/python3.7/site-packages/tensorflow/python/framework/dtypes.py:520: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint32 = np.dtype([("qint32", np.int32, 1)])
/usr/local/lib/python3.7/site-packages/tensorflow/python/framework/dtypes.py:525: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
np_resource = np.dtype([("resource", np.ubyte, 1)])
/usr/local/lib/python3.7/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:541: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint8 = np.dtype([("qint8", np.int8, 1)])
/usr/local/lib/python3.7/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:542: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint8 = np.dtype([("quint8", np.uint8, 1)])
/usr/local/lib/python3.7/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:543: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint16 = np.dtype([("qint16", np.int16, 1)])
/usr/local/lib/python3.7/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:544: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint16 = np.dtype([("quint16", np.uint16, 1)])
/usr/local/lib/python3.7/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:545: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint32 = np.dtype([("qint32", np.int32, 1)])
/usr/local/lib/python3.7/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:550: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
np_resource = np.dtype([("resource", np.ubyte, 1)])
Using TensorFlow backend.
###Markdown
2. Loading data
###Code
data_frame=pd.read_excel('./Dataset/Data_Cortex_Nuclear.xls',sheet_name='Hoja1')
data_arr=(np.array(data_frame)[:,1:78]).copy()
label_arr=(np.array(data_frame)[:,81]).copy()
for index_i in np.arange(len(label_arr)):
if label_arr[index_i]=='c-CS-s':
label_arr[index_i]='0'
if label_arr[index_i]=='c-CS-m':
label_arr[index_i]='1'
if label_arr[index_i]=='c-SC-s':
label_arr[index_i]='2'
if label_arr[index_i]=='c-SC-m':
label_arr[index_i]='3'
if label_arr[index_i]=='t-CS-s':
label_arr[index_i]='4'
if label_arr[index_i]=='t-CS-m':
label_arr[index_i]='5'
if label_arr[index_i]=='t-SC-s':
label_arr[index_i]='6'
if label_arr[index_i]=='t-SC-m':
label_arr[index_i]='7'
label_arr_onehot=label_arr#to_categorical(label_arr)
# Show before Imputer
#print(data_arr[558])
imp_mean = SimpleImputer(missing_values=np.nan, strategy='mean')
imp_mean.fit(data_arr)
data_arr=imp_mean.transform(data_arr)
# Show after Imputer
#print(data_arr[558])
data_arr=MinMaxScaler(feature_range=(0,1)).fit_transform(data_arr)
C_train_x,C_test_x,C_train_y,C_test_y= train_test_split(data_arr,label_arr_onehot,test_size=0.2,random_state=seed)
x_train,x_validate,y_train_onehot,y_validate_onehot= train_test_split(C_train_x,C_train_y,test_size=0.1,random_state=seed)
x_test=C_test_x
y_test_onehot=C_test_y
print('Shape of x_train: ' + str(x_train.shape))
print('Shape of x_validate: ' + str(x_validate.shape))
print('Shape of x_test: ' + str(x_test.shape))
print('Shape of y_train: ' + str(y_train_onehot.shape))
print('Shape of y_validate: ' + str(y_validate_onehot.shape))
print('Shape of y_test: ' + str(y_test_onehot.shape))
print('Shape of C_train_x: ' + str(C_train_x.shape))
print('Shape of C_train_y: ' + str(C_train_y.shape))
print('Shape of C_test_x: ' + str(C_test_x.shape))
print('Shape of C_test_y: ' + str(C_test_y.shape))
key_feture_number=10
###Output
_____no_output_____
###Markdown
3. Model
###Code
np.random.seed(seed)
#--------------------------------------------------------------------------------------------------------------------------------
class Feature_Select_Layer(Layer):
def __init__(self, output_dim, **kwargs):
super(Feature_Select_Layer, self).__init__(**kwargs)
self.output_dim = output_dim
def build(self, input_shape):
self.kernel = self.add_weight(name='kernel',
shape=(input_shape[1],),
initializer=initializers.RandomUniform(minval=0.999999, maxval=0.9999999, seed=seed),
trainable=True)
super(Feature_Select_Layer, self).build(input_shape)
def call(self, x, selection=False,k=key_feture_number):
kernel=K.pow(self.kernel,2)
if selection:
kernel_=K.transpose(kernel)
kth_largest = tf.math.top_k(kernel_, k=k)[0][-1]
kernel = tf.where(condition=K.less(kernel,kth_largest),x=K.zeros_like(kernel),y=kernel)
return K.dot(x, tf.linalg.tensor_diag(kernel))
def compute_output_shape(self, input_shape):
return (input_shape[0], self.output_dim)
#--------------------------------------------------------------------------------------------------------------------------------
def Autoencoder(p_data_feature=x_train.shape[1],\
p_encoding_dim=key_feture_number,\
p_learning_rate= 1E-3):
input_img = Input(shape=(p_data_feature,), name='input_img')
encoded = Dense(p_encoding_dim, activation='linear',kernel_initializer=initializers.glorot_uniform(seed))(input_img)
bottleneck=encoded
decoded = Dense(p_data_feature, activation='linear',kernel_initializer=initializers.glorot_uniform(seed))(encoded)
latent_encoder = Model(input_img, bottleneck)
autoencoder = Model(input_img, decoded)
autoencoder.compile(loss='mean_squared_error', optimizer=optimizers.Adam(lr=p_learning_rate))
print('Autoencoder Structure-------------------------------------')
autoencoder.summary()
#print('Latent Encoder Structure-------------------------------------')
#latent_encoder.summary()
return autoencoder,latent_encoder
#--------------------------------------------------------------------------------------------------------------------------------
def Identity_Autoencoder(p_data_feature=x_train.shape[1],\
p_encoding_dim=key_feture_number,\
p_learning_rate= 1E-3):
input_img = Input(shape=(p_data_feature,), name='autoencoder_input')
feature_selection = Feature_Select_Layer(output_dim=p_data_feature,\
input_shape=(p_data_feature,),\
name='feature_selection')
feature_selection_score=feature_selection(input_img)
encoded = Dense(p_encoding_dim,\
activation='linear',\
kernel_initializer=initializers.glorot_uniform(seed),\
name='autoencoder_hidden_layer')
encoded_score=encoded(feature_selection_score)
bottleneck_score=encoded_score
decoded = Dense(p_data_feature,\
activation='linear',\
kernel_initializer=initializers.glorot_uniform(seed),\
name='autoencoder_output')
decoded_score =decoded(bottleneck_score)
latent_encoder_score = Model(input_img, bottleneck_score)
autoencoder = Model(input_img, decoded_score)
autoencoder.compile(loss='mean_squared_error',\
optimizer=optimizers.Adam(lr=p_learning_rate))
print('Autoencoder Structure-------------------------------------')
autoencoder.summary()
return autoencoder,latent_encoder_score
#--------------------------------------------------------------------------------------------------------------------------------
def Fractal_Autoencoder(p_data_feature=x_train.shape[1],\
p_feture_number=key_feture_number,\
p_encoding_dim=key_feture_number,\
p_learning_rate=1E-3,\
p_loss_weight_1=1,\
p_loss_weight_2=2):
input_img = Input(shape=(p_data_feature,), name='autoencoder_input')
feature_selection = Feature_Select_Layer(output_dim=p_data_feature,\
input_shape=(p_data_feature,),\
name='feature_selection')
feature_selection_score=feature_selection(input_img)
feature_selection_choose=feature_selection(input_img,selection=True,k=p_feture_number)
encoded = Dense(p_encoding_dim,\
activation='linear',\
kernel_initializer=initializers.glorot_uniform(seed),\
name='autoencoder_hidden_layer')
encoded_score=encoded(feature_selection_score)
encoded_choose=encoded(feature_selection_choose)
bottleneck_score=encoded_score
bottleneck_choose=encoded_choose
decoded = Dense(p_data_feature,\
activation='linear',\
kernel_initializer=initializers.glorot_uniform(seed),\
name='autoencoder_output')
decoded_score =decoded(bottleneck_score)
decoded_choose =decoded(bottleneck_choose)
latent_encoder_score = Model(input_img, bottleneck_score)
latent_encoder_choose = Model(input_img, bottleneck_choose)
feature_selection_output=Model(input_img,feature_selection_choose)
autoencoder = Model(input_img, [decoded_score,decoded_choose])
autoencoder.compile(loss=['mean_squared_error','mean_squared_error'],\
loss_weights=[p_loss_weight_1, p_loss_weight_2],\
optimizer=optimizers.Adam(lr=p_learning_rate))
print('Autoencoder Structure-------------------------------------')
autoencoder.summary()
return autoencoder,feature_selection_output,latent_encoder_score,latent_encoder_choose
###Output
_____no_output_____
###Markdown
3.1 Structure and parameter testing
###Code
epochs_number=200
batch_size_value=128
###Output
_____no_output_____
###Markdown
--- 3.1.1 Fractal Autoencoder---
###Code
loss_weight_1=0.0078125
F_AE,\
feature_selection_output,\
latent_encoder_score_F_AE,\
latent_encoder_choose_F_AE=Fractal_Autoencoder(p_data_feature=x_train.shape[1],\
p_feture_number=key_feture_number,\
p_encoding_dim=key_feture_number,\
p_learning_rate= 1E-3,\
p_loss_weight_1=loss_weight_1,\
p_loss_weight_2=1)
file_name="./log/F_AE_"+str(key_feture_number)+".png"
plot_model(F_AE, to_file=file_name,show_shapes=True)
model_checkpoint=ModelCheckpoint('./log_weights/F_AE_'+str(key_feture_number)+'_weights_'+str(loss_weight_1)+'.{epoch:04d}.hdf5',period=100,save_weights_only=True,verbose=1)
#print_weights = LambdaCallback(on_epoch_end=lambda batch, logs: print(F_AE.layers[1].get_weights()))
F_AE_history = F_AE.fit(x_train, [x_train,x_train],\
epochs=epochs_number,\
batch_size=batch_size_value,\
shuffle=True,\
validation_data=(x_validate, [x_validate,x_validate]),\
callbacks=[model_checkpoint])
loss = F_AE_history.history['loss']
val_loss = F_AE_history.history['val_loss']
epochs = range(epochs_number)
plt.plot(epochs, loss, 'bo', label='Training Loss')
plt.plot(epochs, val_loss, 'r', label='Validation Loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
plt.plot(epochs[250:], loss[250:], 'bo', label='Training Loss')
plt.plot(epochs[250:], val_loss[250:], 'r', label='Validation Loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
p_data=F_AE.predict(x_test)
numbers=x_test.shape[0]*x_test.shape[1]
print("MSE for one-to-one map layer",np.sum(np.power(np.array(p_data)[0]-x_test,2))/numbers)
print("MSE for feature selection layer",np.sum(np.power(np.array(p_data)[1]-x_test,2))/numbers)
###Output
MSE for one-to-one map layer 0.00956202675485365
MSE for feature selection layer 0.01022934131585331
###Markdown
--- 3.1.2 Feature selection layer output---
###Code
FS_layer_output=feature_selection_output.predict(x_test)
print(np.sum(FS_layer_output[0]>0))
###Output
10
###Markdown
--- 3.1.3 Key features show---
###Code
key_features=F.top_k_keepWeights_1(F_AE.get_layer(index=1).get_weights()[0],key_feture_number)
print(np.sum(F_AE.get_layer(index=1).get_weights()[0]>0))
###Output
77
###Markdown
4 Classifying 4.1 Extra Trees
###Code
train_feature=C_train_x
train_label=C_train_y
test_feature=C_test_x
test_label=C_test_y
p_seed=seed
F.ETree(train_feature,train_label,test_feature,test_label,p_seed)
selected_position_list=np.where(key_features>0)[0]
###Output
_____no_output_____
###Markdown
--- 4.1.1. On Identity Selection layer---a) with zeros
###Code
train_feature=feature_selection_output.predict(C_train_x)
print("train_feature>0: ",np.sum(train_feature[0]>0))
print(train_feature.shape)
train_label=C_train_y
test_feature=feature_selection_output.predict(C_test_x)
print("test_feature>0: ",np.sum(test_feature[0]>0))
print(test_feature.shape)
test_label=C_test_y
p_seed=seed
F.ETree(train_feature,train_label,test_feature,test_label,p_seed)
###Output
train_feature>0: 10
(864, 77)
test_feature>0: 10
(216, 77)
Training accuracy: 1.0
Training accuracy: 1.0
Testing accuracy: 0.9583333333333334
Testing accuracy: 0.9583333333333334
###Markdown
---b) Sparse matrix
###Code
train_feature=feature_selection_output.predict(C_train_x)
print(train_feature.shape)
train_label=C_train_y
test_feature=feature_selection_output.predict(C_test_x)
print(test_feature.shape)
test_label=C_test_y
train_feature_sparse=sparse.coo_matrix(train_feature)
test_feature_sparse=sparse.coo_matrix(test_feature)
p_seed=seed
F.ETree(train_feature_sparse,train_label,test_feature_sparse,test_label,p_seed)
###Output
(864, 77)
(216, 77)
Training accuracy: 1.0
Training accuracy: 1.0
Testing accuracy: 0.9583333333333334
Testing accuracy: 0.9583333333333334
###Markdown
---c) Compression
###Code
train_feature_=feature_selection_output.predict(C_train_x)
train_feature=F.compress_zero(train_feature_,key_feture_number)
print(train_feature.shape)
train_label=C_train_y
test_feature_=feature_selection_output.predict(C_test_x)
test_feature=F.compress_zero(test_feature_,key_feture_number)
print(test_feature.shape)
test_label=C_test_y
p_seed=seed
F.ETree(train_feature,train_label,test_feature,test_label,p_seed)
###Output
(864, 10)
(216, 10)
Training accuracy: 1.0
Training accuracy: 1.0
Testing accuracy: 0.9583333333333334
Testing accuracy: 0.9583333333333334
###Markdown
---d) Compression with structure
###Code
train_feature_=feature_selection_output.predict(C_train_x)
train_feature=F.compress_zero_withkeystructure(train_feature_,selected_position_list)
print(train_feature.shape)
train_label=C_train_y
test_feature_=feature_selection_output.predict(C_test_x)
test_feature=F.compress_zero_withkeystructure(test_feature_,selected_position_list)
print(test_feature.shape)
test_label=C_test_y
p_seed=seed
F.ETree(train_feature,train_label,test_feature,test_label,p_seed)
###Output
(864, 10)
(216, 10)
Training accuracy: 1.0
Training accuracy: 1.0
Testing accuracy: 0.9907407407407407
Testing accuracy: 0.9907407407407407
###Markdown
--- 4.1.2. On Original Selection---a) with zeros
###Code
train_feature=np.multiply(C_train_x, key_features)
print("train_feature>0: ",np.sum(train_feature[0]>0))
print(train_feature.shape)
train_label=C_train_y
test_feature=np.multiply(C_test_x, key_features)
print("test_feature>0: ",np.sum(test_feature[0]>0))
print(test_feature.shape)
test_label=C_test_y
p_seed=seed
F.ETree(train_feature,train_label,test_feature,test_label,p_seed)
###Output
train_feature>0: 10
(864, 77)
test_feature>0: 10
(216, 77)
Training accuracy: 1.0
Training accuracy: 1.0
Testing accuracy: 0.9583333333333334
Testing accuracy: 0.9583333333333334
###Markdown
---b) Sparse matrix
###Code
train_feature=np.multiply(C_train_x, key_features)
print(train_feature.shape)
train_label=C_train_y
test_feature=np.multiply(C_test_x, key_features)
print(test_feature.shape)
test_label=C_test_y
train_feature_sparse=sparse.coo_matrix(train_feature)
test_feature_sparse=sparse.coo_matrix(test_feature)
p_seed=seed
F.ETree(train_feature_sparse,train_label,test_feature_sparse,test_label,p_seed)
###Output
(864, 77)
(216, 77)
Training accuracy: 1.0
Training accuracy: 1.0
Testing accuracy: 0.9583333333333334
Testing accuracy: 0.9583333333333334
###Markdown
---c) Compression
###Code
train_feature_=np.multiply(C_train_x, key_features)
train_feature=F.compress_zero(train_feature_,key_feture_number)
print(train_feature.shape)
train_label=C_train_y
test_feature_=np.multiply(C_test_x, key_features)
test_feature=F.compress_zero(test_feature_,key_feture_number)
print(test_feature.shape)
test_label=C_test_y
p_seed=seed
F.ETree(train_feature,train_label,test_feature,test_label,p_seed)
###Output
(864, 10)
(216, 10)
Training accuracy: 1.0
Training accuracy: 1.0
Testing accuracy: 0.9722222222222222
Testing accuracy: 0.9722222222222222
###Markdown
---d) Compression with structure
###Code
train_feature_=np.multiply(C_train_x, key_features)
train_feature=F.compress_zero_withkeystructure(train_feature_,selected_position_list)
print(train_feature.shape)
train_label=C_train_y
test_feature_=np.multiply(C_test_x, key_features)
test_feature=F.compress_zero_withkeystructure(test_feature_,selected_position_list)
print(test_feature.shape)
test_label=C_test_y
p_seed=seed
F.ETree(train_feature,train_label,test_feature,test_label,p_seed)
###Output
(864, 10)
(216, 10)
Training accuracy: 1.0
Training accuracy: 1.0
Testing accuracy: 0.9907407407407407
Testing accuracy: 0.9907407407407407
###Markdown
--- 4.1.3. Latent space---
###Code
train_feature=latent_encoder_score_F_AE.predict(C_train_x)
print(train_feature.shape)
train_label=C_train_y
test_feature=latent_encoder_score_F_AE.predict(C_test_x)
print(test_feature.shape)
test_label=C_test_y
p_seed=seed
F.ETree(train_feature,train_label,test_feature,test_label,p_seed)
train_feature=latent_encoder_choose_F_AE.predict(C_train_x)
print(train_feature.shape)
train_label=C_train_y
test_feature=latent_encoder_choose_F_AE.predict(C_test_x)
print(test_feature.shape)
test_label=C_test_y
p_seed=seed
F.ETree(train_feature,train_label,test_feature,test_label,p_seed)
###Output
(864, 10)
(216, 10)
Training accuracy: 1.0
Training accuracy: 1.0
Testing accuracy: 0.8888888888888888
Testing accuracy: 0.8888888888888888
###Markdown
--- 6 Feature group compare---
###Code
Selected_Weights=F.top_k_keep(F_AE.get_layer(index=1).get_weights()[0],key_feture_number)
selected_position_group=F.k_index_argsort_1d(Selected_Weights,key_feture_number)
train_feature_=np.multiply(C_train_x, key_features)
train_feature=F.compress_zero_withkeystructure(train_feature_,selected_position_group)
print(train_feature.shape)
train_label=C_train_y
test_feature_=np.multiply(C_test_x, key_features)
test_feature=F.compress_zero_withkeystructure(test_feature_,selected_position_group)
print(test_feature.shape)
test_label=C_test_y
p_seed=seed
F.ETree(train_feature[:,0:5],train_label,test_feature[:,0:5],test_label,p_seed)
p_seed=seed
F.ETree(train_feature[:,5:],train_label,test_feature[:,5:],test_label,p_seed)
p_seed=seed
F.ETree(train_feature[:,0:6],train_label,test_feature[:,0:6],test_label,p_seed)
p_seed=seed
F.ETree(train_feature[:,6:],train_label,test_feature[:,6:],test_label,p_seed)
###Output
Training accuracy: 1.0
Training accuracy: 1.0
Testing accuracy: 0.6435185185185185
Testing accuracy: 0.6435185185185185
###Markdown
6. Reconstruction loss
###Code
from sklearn.linear_model import LinearRegression
def mse_check(train, test):
LR = LinearRegression(n_jobs = -1)
LR.fit(train[0], train[1])
MSELR = ((LR.predict(test[0]) - test[1]) ** 2).mean()
return MSELR
train_feature_=np.multiply(C_train_x, key_features)
C_train_selected_x=F.compress_zero_withkeystructure(train_feature_,selected_position_list)
print(C_train_selected_x.shape)
test_feature_=np.multiply(C_test_x, key_features)
C_test_selected_x=F.compress_zero_withkeystructure(test_feature_,selected_position_list)
print(C_test_selected_x.shape)
train_feature_tuple=(C_train_selected_x,C_train_x)
test_feature_tuple=(C_test_selected_x,C_test_x)
reconstruction_loss=mse_check(train_feature_tuple, test_feature_tuple)
print(reconstruction_loss)
###Output
(864, 10)
(216, 10)
0.00812428850046984
|
Projects/Real Time Face Mask Detector/Face-Mask-Model.ipynb | ###Markdown
Let's have a look at our Data
###Code
# Path to the folders containing images
data_path = 'data'
mask_path = 'data/with_mask/'
nomask_path = 'data/without_mask/'
test_path = 'data/test/'
# function to show images from the input path
def view(path):
images = list()
for img in random.sample(os.listdir(path),9):
images.append(img)
i = 0
fig,ax = plt.subplots(nrows=3, ncols=3, figsize=(20,10))
for row in range(3):
for col in range(3):
ax[row,col].imshow(cv2.imread(os.path.join(path,images[i])))
i+=1
# sample images of people wearing masks
view(mask_path)
#sample images of people NOT wearing masks
view(nomask_path)
###Output
_____no_output_____
###Markdown
Splitting of Data
- Mask: 755
- No Mask: 753
Since the images are already augmented, I have used sklearn to split the dataset into training and test sets. 10% of the images are taken as the test set and the rest are further distributed into training and validation sets.
###Code
# Initializing labels for masked and non-masked image output
categories=os.listdir(data_path)
labels=[i for i in range(len(categories))]
label_dict=dict(zip(categories,labels)) #empty dictionary
print(label_dict)
print(categories)
print(labels)
# Storing the size of the input images and the path to feature and label for the model (via opencv).
img_size=224
data=[]
target=[]
for category in categories:
folder_path=os.path.join(data_path,category)
img_names=os.listdir(folder_path)
for img_name in img_names:
img_path=os.path.join(folder_path,img_name)
img=cv2.imread(img_path)
try:
gray=cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
            #Converting the image to grayscale
resized=cv2.resize(gray,(img_size,img_size))
            #resizing the grayscale image to 224x224, since we need a fixed common size for all the images in the dataset
data.append(resized)
target.append(label_dict[category])
#appending the image and the label(categorized) into the list (dataset)
except Exception as e:
print('Exception:',e)
# Normalization of the data is done and converted into numpy arrays
data=np.array(data)/255.0
data=np.reshape(data,(data.shape[0],img_size,img_size,1))
target=np.array(target)
# Using 'utils' from keras, the target is converted into one-hot encoder for binary output
from keras.utils import np_utils
target=np_utils.to_categorical(target)
np.save('data',data)
np.save('target',target)
###Output
_____no_output_____
###Markdown
Preparation of Data Pipelining
###Code
batch_size = 32
# Splitting of training data into training and test set
# 10% of the images are sent for testing.
from sklearn.model_selection import train_test_split
train_data,test_data,train_target,test_target=train_test_split(data,target,test_size=0.1)
###Output
_____no_output_____
###Markdown
Building the Model
- In the next step, we build our Sequential CNN model with various layers such as Conv2D, MaxPooling2D, Flatten, Dropout and Dense.
- In the last Dense layer, we output a score for each of the two classes (the model below uses a ‘**sigmoid**’ activation on 2 units).
- Regularization is done to prevent overfitting of the data. It is necessary since our dataset is not very large, at just around 5000 images in total.
###Code
model=Sequential()
model.add(Conv2D(224,(3,3), activation ='relu', input_shape=data.shape[1:], kernel_regularizer=regularizers.l2(0.003)))
model.add(MaxPooling2D() )
model.add(Conv2D(100,(3,3), activation ='relu', kernel_regularizer=regularizers.l2(0.003)))
model.add(MaxPooling2D() )
model.add(Conv2D(100,(3,3), activation ='relu', kernel_regularizer=regularizers.l2(0.003)))
model.add(MaxPooling2D() )
model.add(Conv2D(50,(3,3), activation ='relu', kernel_regularizer=regularizers.l2(0.003)))
model.add(MaxPooling2D() )
model.add(Conv2D(30,(3,3), activation ='relu', kernel_regularizer=regularizers.l2(0.003)))
model.add(MaxPooling2D())
model.add(Flatten())
model.add(Dropout(0.5))
model.add(Dense(90, activation ='relu'))
model.add(Dense(30, activation = 'relu'))
model.add(Dense(2, activation ='sigmoid'))
model.summary()
# Optimization of the model is done via Adam optimizer
# Loss is measures in the form of Binary Categorical Cross Entropy as our output contains 2 classes, with_mask and without_mask
model.compile(optimizer=Adam(), loss='binary_crossentropy', metrics=['accuracy'])
#Model Checkpoint to save the model after training, so that it can be re-used while detecting faces
# Include the epoch in the file name (uses `str.format`)
checkpoint_path = "/content/drive/My Drive/Colab Notebooks/Face Mask Detector/cp-{epoch:04d}.ckpt"
checkpoint_dir = os.path.dirname(checkpoint_path)
checkpoint = ModelCheckpoint(
filepath = checkpoint_path,
monitor='val_loss',
verbose=0,
save_best_only=True,
save_weights_only=True,
mode='auto'
)
# Save the weights using the `checkpoint_path` format
model.save_weights(checkpoint_path.format(epoch=0))
# Training of the Model is done
history=model.fit(train_data, train_target, epochs=50, batch_size = batch_size, validation_split=0.15)
plt.plot(history.history['loss'],'r',label='Training Loss')
plt.plot(history.history['val_loss'],label='Validation Loss')
plt.xlabel('No. of Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
plt.plot(history.history['accuracy'],'r',label='Training Accuracy')
plt.plot(history.history['val_accuracy'],label='Validation Accuracy')
plt.xlabel('No. of Epochs')
plt.ylabel('Accuracy')
plt.legend()
plt.show()
print(model.evaluate(test_data,test_target))
!pip install pyyaml h5py # Required to save models in HDF5 format
###Output
Requirement already satisfied: pyyaml in /usr/local/lib/python3.6/dist-packages (3.13)
Requirement already satisfied: h5py in /usr/local/lib/python3.6/dist-packages (2.10.0)
Requirement already satisfied: numpy>=1.7 in /usr/local/lib/python3.6/dist-packages (from h5py) (1.18.5)
Requirement already satisfied: six in /usr/local/lib/python3.6/dist-packages (from h5py) (1.15.0)
###Markdown
Now, look at the resulting checkpoints and choose the latest one:
###Code
model.save('/content/drive/My Drive/Colab Notebooks/Face Mask Detector/model.h5')
# Importing the saved model from the IPython notebook
# model = load_model('/content/drive/My Drive/Colab Notebooks/Face Mask Detector/train_model.h5')
# Importing the Face Classifier XML file containing all features of the face
face_classifier=cv2.CascadeClassifier('/content/drive/My Drive/Colab Notebooks/Face Mask Detector/haarcascade_frontalface_default.xml')
# To open a video via link to be inserted in the () of VideoCapture()
# To open the web cam connected to your laptop/PC, write '0' (without quotes) in the () of VideoCapture()
src_cap = cv2.VideoCapture(0)
labels_dict = {
0 : 'MASK ON',
1 : 'NO MASK!'
}
colorMap = {
0 : (0,255,0),
1 : (0,0,255)
}
while(True):
_, img = src_cap.read()
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
# detect MultiScale / faces
faces = face_classifier.detectMultiScale(gray, 1.3, 5)
# Draw rectangles around each face
for (x, y, w, h) in faces:
#Save just the rectangle faces in SubRecFaces
face_img = gray[y:y+w, x:x+w]
resized = cv2.resize(face_img, (224,224))
normalized = resized/255.0
reshaped = np.reshape(normalized, (1,224,224,1))
result = model.predict(reshaped)
print(result)
label = np.argmax(result, axis=1)[0]
cv2.rectangle(img, (x,y), (x+w,y+h), colorMap[label],2)
cv2.rectangle(img, (x,y-40), (x+w,y), colorMap[label],-1)
cv2.putText(img, labels_dict[label], (x, y-10), cv2.FONT_HERSHEY_SIMPLEX, 0.8, (255,255,255), 2)
# Show the image
cv2.imshow('LIVE DETECTION', img)
key = cv2.waitKey(1)
# if Esc key (27) is press then break out of the loop
if(key==27):
break
# Stop video
src_cap.release()
# Close all started windows
cv2.destroyAllWindows()
###Output
_____no_output_____ |
LAB03-Nonlinear-optimization-pt-2/NelinearnaOptimizacija2.ipynb | ###Markdown
Box
###Code
def box(x0, xd, xg, func, ogranicenja, alfa=2.0, epsilon=1e-6, ispis=False):
for i in range(x0.br_stup):
if not (x0[0][i] >= xd and x0[0][i] <= xg):
raise Exception("Pocetna tocka je krivo zadana. Ne postuje eksplicitna ogranicenja.")
return
if not provjeri(x0, ogranicenja):
raise Exception("Pocetna tocka je krivo zadana. Ne postuje implicitna ogranicenja.")
return
n = x0.br_stup
xc = x0.copy()
tocke = [x0]
    # generate the set of 2n points
for t in range(2*n):
xi = []
for i in range(n):
r = randint(0, 1)
xi.append(xd + r * (xg - xd))
tocke.append(Matrica(1, n, [xi]))
while True:
if provjeri(tocke[t], ogranicenja): break
tocke[t] = 0.5 * (tocke[t] + xc)
    # new centroid
xc = centroid(tocke, n)
# print("xc: ", xc, "tocke: ")
# for t in tocke: print(t)
cnt = 0
fxc_tmp = func.vrijednost(xc)
while True:
        # find h and h2
h = no.index_max(tocke, func, ispis)
tocke_bez_h = tocke.copy()
tocke_bez_h.pop(h)
h2 = no.index_max(tocke_bez_h, func, ispis)
xc = centroid(tocke_bez_h, n)
xr = (1 + alfa) * xc - alfa * tocke[h]
for i in range(n):
if xr[0][i] < xd: xr[0][i] = xd
if xr[0][i] > xg: xr[0][i] = xg
while True:
if provjeri(xr, ogranicenja): break
xr = 0.5 * (xr + xc)
if func.vrijednost(xr) > func.vrijednost(tocke[h2]):
xr = 0.5 * (xr + xc)
tocke[h] = xr
if uvjet_zaustavljanja(tocke[h], xc, epsilon):
print("Broj iteracija: ", func.br_poziva)
return xc
fxc = func.vrijednost(xc)
if fxc != fxc_tmp:
cnt = 0
else:
cnt += 1
if cnt >= 100:
print("Ne konvergira.")
print("Broj iteracija: ", func.br_poziva)
return xc
fxc_tmp = fxc
def provjeri(x, ogranicenja):
for o in ogranicenja:
if o(x) < 0: return False
return True
def centroid(tocke, n):
xc = Matrica(1, n)
for xi in tocke:
xc += xi
for i in range(n):
xc[0][i] /= len(tocke)
return xc
def uvjet_zaustavljanja(xh, xc, e):
for i in range(xh.br_stup):
if abs(xh[0][i] - xc[0][i]) < e: return True
return False
###Output
_____no_output_____
###Markdown
Transformation into an unconstrained problem using the mixed approach
###Code
def transformacija(x0, func, ogranicenja, ogr_jednakosti=None, t=1., epsilon=1e-6):
x = x0.copy()
transformirana_fja = Transformirana_fja(func, ogranicenja, ogr_jednakosti, t, epsilon)
while True:
# print(x)
optimized_value = no.hooke_jeeves(x, transformirana_fja)
# print("optimized value: ", optimized_value)
transformirana_fja.t *= 10
if uvjet_zaustavljanja_transf(x, optimized_value, epsilon):
trenutni_t = transformirana_fja.t
cnt = int(math.log10(t)) + 1
return optimized_value
x = optimized_value
def uvjet_zaustavljanja_transf(x, optimized_value, epsilon=1e-6):
for i in range(x.br_stup):
if abs(x[0][i] - optimized_value[0][i]) > epsilon:
return False
return True
class Transformirana_fja:
def __init__(self, func, ogranicenja, ogr_jednakosti, t=1.0, epsilon=1e-6):
self.func = func
self.ogranicenja = ogranicenja
self.ogr_jednakosti = ogr_jednakosti
self.t = t
self.epsilon = epsilon
def vrijednost(self, x0):
x = x0.copy()
fx = self.func.vrijednost(x)
        # inequality constraints
gx = 0
for o in self.ogranicenja:
# print(o)
ox = o(x)
if ox <= 0: return 1234567890
else:
# print("ox: ", ox)
gx += math.log(ox)
gx /= self.t
        # equality constraints
if self.ogr_jednakosti != None:
hx = self.t * (x[0][1] - 1)**2
else:
hx = 0
# print(fx, hx, fx - gx + hx)
return fx - gx + hx
###Output
_____no_output_____
###Markdown
Newton-Raphson
###Code
def newton_raphson(x0, func, grad, hess, zl_rez=False, epsilon=1e-6, ispis=False):
n = x0.br_stup
x = x0.copy()
cnt = 0
f_tmp = func.vrijednost(x)
while True:
gradijent = Matrica(1, n, [grad.vrijednost(x)])
if ispis: print("gradijent: ", gradijent)
hessian = Matrica(1, n, hess.vrijednost(x))
if ispis: print("hessian: ", hessian)
v = hessian * (gradijent.transponiraj())
if norm(v) <= epsilon:
break
if cnt >= 100:
print("Ne konvergira.")
print("Broj iteracija: ", func.br_poziva)
print("Broj poziva gradijentne fje: ", grad.br_poziva)
print("Broj izracuna Hessove matrice: ", hess.br_poziva)
return x
if zl_rez:
l_func = no.Funkcija(lambda l: func.vrijednost(x + l * gradijent))
faktor = no.minimum(Matrica(1, 1, [[0.]]), l_func, gradijent, ispis)
else:
faktor = -1
x += faktor * gradijent
fx = func.vrijednost(x)
if fx != f_tmp: cnt=0
else: cnt += 1
f_tmp = fx
print("Broj iteracija: ", func.br_poziva)
print("Broj poziva gradijentne fje: ", grad.br_poziva)
print("Broj izracuna Hessove matrice: ", hess.br_poziva)
return x
###Output
_____no_output_____
###Markdown
Gradient descent
###Code
def grad_desc(x0, func, grad, zl_rez=False, epsilon=1e-6, ispis=False):
n = x0.br_stup
x = x0.copy()
cnt = 0
f_tmp = func.vrijednost(x)
while True:
gradijent = Matrica(1, n, [grad.vrijednost(x)])
if norm(gradijent) < epsilon:
print("Broj iteracija: ", func.br_poziva)
print("Broj poziva gradijentne fje: ", grad.br_poziva)
break
if cnt >= 100:
print("Ne konvergira.")
print("Broj iteracija: ", func.br_poziva)
print("Broj poziva gradijentne fje: ", grad.br_poziva)
return x
if zl_rez:
l_func = no.Funkcija(lambda l: func.vrijednost(x + l * gradijent))
faktor = no.minimum(Matrica(1, 1, [[0.]]), l_func, gradijent, ispis)
else:
faktor = -1
x += faktor * gradijent
fx = func.vrijednost(x)
# print("cnt1: ", cnt)
if fx != f_tmp: cnt=0
else: cnt += 1
# print("cnt2: ", cnt)
f_tmp = fx
return x
def norm(x):
squared = [xi**2 for xi in x[0]]
sum_of_squares = sum(squared)
return math.sqrt(sum_of_squares)
###Output
_____no_output_____
###Markdown
Tasks 1. Apply the gradient descent procedure to function 3, with and without determining the optimal step size. What can you conclude from the results?
###Code
x0 = Matrica(1, 2, [[0, 0]])
print(grad_desc(x0, no.Funkcija(no.f3), no.gf3, zl_rez=False))
print()
print(grad_desc(x0, no.Funkcija(no.f3), no.gf3, zl_rez=True))
###Output
_____no_output_____
###Markdown
2. Apply the gradient descent and Newton-Raphson procedures to functions 1 and 2 with determination of the optimal step size. How does the Newton-Raphson procedure behave on these functions? Print the number of function evaluations, gradient evaluations, and Hessian matrix computations.
###Code
x1 = Matrica(1, 2, [[-1.9, 2]])
print(grad_desc(x1, no.Funkcija(no.f1), no.Funkcija(no.gf1), zl_rez=True), "\n")
print(newton_raphson(x1, no.Funkcija(no.f1), no.Funkcija(no.gf1), no.Funkcija(no.hf1), zl_rez=True), "\n")
x2 = Matrica(1, 2, [[0.1, 0.3]])
print(grad_desc(x2, no.Funkcija(no.f2), no.Funkcija(no.gf2), zl_rez=True), "\n")
print(newton_raphson(x2, no.Funkcija(no.f2), no.Funkcija(no.gf2), no.Funkcija(no.hf2), zl_rez=False))
###Output
Ne konvergira.
Broj iteracija: 150821
Broj poziva gradijentne fje: 4077
[[1.00000348 1.00000697]]
Ne konvergira.
Broj iteracija: 150821
Broj poziva gradijentne fje: 4077
Broj izracuna Hessove matrice: 4077
[[1.00000348 1.00000697]]
Broj iteracija: 1000
Broj poziva gradijentne fje: 28
[[3.99999967 2.00000005]]
Ne konvergira.
Broj iteracija: 283
Broj poziva gradijentne fje: 283
Broj izracuna Hessove matrice: 283
[[ 1.00000000e-001 -3.53261416e+238]]
###Markdown
3. Apply the Box procedure to functions 1 and 2 with the implicit constraints (x2-x1 >= 0), (2-x1 >= 0) and explicit constraints under which all variables lie in the interval [-100, 100]. Does the position of the optimum change under the imposed constraints?
###Code
x1 = Matrica(1, 2, [[-1.9, 2]])
x2 = Matrica(1, 2, [[0.1, 0.3]])
print(box(x1, -100, 100, no.Funkcija(no.f1), [no.o1, no.o2, no.o31, no.o32, no.o41, no.o42]))
print(box(x2, -100, 100, no.Funkcija(no.f2), [no.o1, no.o2, no.o31, no.o32, no.o41, no.o42]))
###Output
Ne konvergira.
Broj iteracija: 2458
[[0.18091964 3.06077117]]
Broj iteracija: 1617
[[1.99999877 2.00216542]]
###Markdown
4. Apply the procedure of transformation into an unconstrained problem to functions 1 and 2 with the constraints from the previous task (ignore the explicit constraints). Minimize the resulting unconstrained optimization problem using the Hooke-Jeeves procedure or the Nelder-Mead simplex procedure. Can the optimal solution of the constrained problem be found from the given starting point? If not, try to choose a starting point from which a solution can be found.
###Code
x1 = Matrica(1, 2, [[-1.9, 2]])
x2 = Matrica(1, 2, [[0.1, 0.3]])
func1 = no.Funkcija(no.f1)
func2 = no.Funkcija(no.f2)
print(transformacija(x1, func1, [no.o1, no.o2, no.o31, no.o32, no.o41, no.o42]))
print("Broj poziva f1: ", func1.br_poziva)
print(transformacija(x2, func2, [no.o1, no.o2, no.o31, no.o32, no.o41, no.o42]))
print("Broj poziva f2: ", func2.br_poziva)
###Output
[[0.01019058 0.01019096]]
Broj poziva f1: 2235
[[1.99999962 2.00000076]]
Broj poziva f2: 2812
###Markdown
5. For function 4 with the constraints (3-x1-x2>=0), (3+1.5*x1-x2>=0) and (x2-1=0), try to find the minimum using the procedure of transformation into an unconstrained problem (again using Hooke-Jeeves or the Nelder-Mead simplex procedure for the minimization). Try setting as the starting point a point that does not satisfy the inequality constraints (for example, the point (5,5)), then use the inner-point finding procedure to determine another point that does satisfy the inequality constraints and use it as the starting point for the minimization procedure.
###Code
x4 = Matrica(1, 2, [[0, 0]])
x5 = Matrica(1, 2, [[5, 5]])
print(transformacija(x4, no.Funkcija(no.f4), [no.o5, no.o6], 1))
print(transformacija(x5, no.Funkcija(no.f4), [no.o5, no.o6], 1))
###Output
[[2.00002289 0.9999752 ]]
[[5. 5.]]
###Markdown
Checks
###Code
hello = 1
if hello != None: print("yes")
tf = Transformirana_fja(3, 4, 5)
tf.t *= 10
print(tf.t)
p = 1.
print(p)
g = [5, 6]
d = Matrica(1, 2, [g])
d1 = Matrica(1, 2, [[3, 4]])
d1 += 2*d
print(d1)
norm(d)
x = [1, 2, 3]
norm(x)
n = 2
tocke = [
Matrica(1, n, [[10, 13]]),
Matrica(1, n, [[12, 16]]),
Matrica(1, n, [[14, 13]])
]
xc = Matrica(1, n)
for xi in tocke:
xc += xi
for i in range(n):
xc[0][i] /= len(tocke)
xcopy = xc.copy()
print(xcopy)
np.zeros((2, 3))
###Output
_____no_output_____ |
codigo/Live071/P001.ipynb | ###Markdown
Problem 1
If we list all the natural numbers below 10 that are multiples of 3 or 5, we get 3, 5, 6 and 9. The sum of these multiples is 23.
Find the sum of all the multiples of 3 or 5 below 1000.
Solutions sent by a colleague:
###Code
def euler35(n):
lis3 = [i for i in range(3,n,3)]
lis5 = [i for i in range(5,n,5)]
lis15 = [i for i in range(15,n,15)]
ans = sum(lis3) + sum (lis5) - sum(lis15)
return ans
def euler35_v2(n):
l = (i for i in range(3,n) if i%3==0 or i%5==0)
return sum(l)
euler35(10), euler35(1000)
euler35_v2(10), euler35_v2(1000)
###Output
_____no_output_____
###Markdown
Concepts
* Comprehensions
Optimization
Use the sum of an arithmetic progression:
$$ 1 + 2 + 3 + .. + 100 = (1 + 100) + (2 + 99) + ... + (50 + 51) = 50 * 101 $$
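Spelling out the closed form used by sum_multiples_below in the code cell below: there are $i = \lfloor (L-1)/n \rfloor$ positive multiples of $n$ below a limit $L$, and their sum is
$$ n + 2n + \dots + i\,n = n\,\frac{i(i+1)}{2}. $$
Multiples of both 3 and 5 (i.e. of 15) would be counted twice, so their sum is subtracted once (inclusion-exclusion).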
###Code
assert sum(range(101)) == 50 * 101
def sum_multiples_below(n, limit):
i = (limit - 1) // n
return (i * n * (1 + i)) // 2
def solve(limit=1000):
return sum_multiples_below(3, limit) + sum_multiples_below(5, 1000) - sum_multiples_below(15, 1000)
solve()
###Output
_____no_output_____ |
The k-Scheme.ipynb | ###Markdown
Advection using the k-Scheme CH EN 6355 - Computational Fluid Dynamics**Prof. Tony Saad (www.tsaad.net) slides at: www.tsaad.netDepartment of Chemical Engineering University of Utah** Here, we will implement the k-scheme or kappa-schemes for advection. It is easiest to implement this scheme since for different values of k, we recover all sorts of high-order flux approximations. We will assume a positive advecting velocity for illustration purposes.We are solving the constant speed advection equation given by\begin{equation}u_t = - c u_x = - F_x;\quad F = cu\end{equation}We will use a simple Forward Euler explicit method. Using a finite volume integration, we get\begin{equation}u_i^{n+1} = u_i^n - \frac{\Delta t}{\Delta x} (F_{i+\tfrac{1}{2}}^n - F_{i-\tfrac{1}{2}}^n)\end{equation}For constant grid spacing, the k-Scheme is given by\begin{equation}{\phi _f} = {\phi _{\rm{C}}} + \frac{{1 - k}}{4}({\phi _{\rm{C}}} - {\phi _{\rm{U}}}) + \frac{{1 + k}}{4}({\phi _{\rm{D}}} - {\phi _{\rm{C}}})\end{equation}which, for a positive advecting velocity, gives us\begin{equation}F_{i + {\textstyle{1 \over 2}}}^n = c\phi _{i + {\textstyle{1 \over 2}}}^n = c{\phi _i} + c\frac{{1 - k}}{4}({\phi _i} - {\phi _{i - 1}}) + c\frac{{1 + k}}{4}({\phi _{i + 1}} - {\phi _i})\end{equation}
###Code
import numpy as np
%matplotlib inline
%config InlineBackend.figure_format = 'svg'
import matplotlib.pyplot as plt
import matplotlib.animation as animation
plt.rcParams['animation.html'] = 'html5'
from matplotlib import cm
def step(x,x0):
x0 = 0.6
x1 = 0.8
result = x - x0
result[x-x1<x1] = 1.0
result[x<x0] = 0.0
result[x>x1] = 0.0
return result
def gaussian(x,x0):
s = 0.08
s = s*s
result = np.exp( -(x-x0)**2/s)
return result
L = 1.0
n = 128 # cells
dx = L/n # n intervals
x = np.linspace(-3*dx/2, L + 3*dx/2, n+4) # include ghost cells - we will include 2 ghost cells on each side for high order schemes
# create arrays
phi = np.zeros(n+4) # cell centered quantity
f = np.zeros(n+4+1) # flux
u = np.ones(n+4+1) # velocity field - assumed to live on faces same as flux
x0 = 0.3
# u0 = np.zeros(N + 2)
# u0[1:-1] = np.sin(2*np.pi*x)
# u0 = np.zeros(N)
# phi0 = np.sin(np.pi*x)
phi0 = gaussian(x,x0) + step(x,x0)
# u0 = triangle(x,0.5,0.75,1)
# u0[0:N//2] = 1.0
plt.plot(x,phi0)
cfl =0.5
c = 1.0 # use a negative value for left traveling waves
dt = cfl*dx/abs(c)
print('dt=',dt)
print('dx=',dx)
# the k scheme
k = 0.5
# finite volume implementation with arrays for fluxes
t = 0
tend= L/abs(c)
sol = []
sol.append(phi0)
ims = []
fig = plt.figure(figsize=[5,3],dpi=200)
plt.rcParams["font.family"] = "serif"
plt.rcParams["font.size"] = 10
plt.rc('text', usetex=True)
# plt.grid()
plt.xlim([0.,L])
plt.ylim([-0.25,1.25])
plt.xlabel('$x$')
plt.ylabel('$\phi$')
plt.tight_layout()
# plot initial condition
plt.plot(x,phi0,'darkred',animated=True)
i = 0
while t < tend:
phin = sol[-1]
# if (i%16==0):
# shift = int(np.ceil(c*(t-dt)/dx))
# im = plt.plot(x[2:-2], np.roll(phin[2:-2], -shift) ,'k-o',markevery=2,markersize=3.5,markerfacecolor='deepskyblue',
# markeredgewidth=0.25, markeredgecolor='k',linewidth=0.45, animated=True)
# ims.append(im)
# impose periodic conditions
phin[-2] = phin[2]
phin[-1] = phin[3]
phin[0] = phin[-4]
phin[1] = phin[-3]
phi = np.zeros_like(phi0)
# predictor - take half a step and use upwind
# du/dt = -c*du/dx
if c >= 0:
ϕc = phin[1:-2] # phi upwind
else:
ϕc = phin[2:-1] # phi upwind
f[2:-2] = c*ϕc
phi[2:-2] = phin[2:-2] - dt/2.0/dx*(f[3:-2] - f[2:-3])
phi[-2] = phi[2]
phi[-1] = phi[3]
phi[0] = phi[-4]
phi[1] = phi[-3]
# du/dt = -c*du/dx
if c >= 0:
ϕc = phi[1:-2] # phi upwind
ϕu = phi[:-3] # phi far upwind
ϕd = phi[2:-1] # phi downwind
else:
ϕc = phi[2:-1] # phi upwind
ϕu = phi[3:] # phi far upwind
ϕd = phi[1:-2] # phi downwind
f[2:-2] = ϕc + (1-k)/4.0*(ϕc - ϕu) + (1+k)/4.0*(ϕd - ϕc)
f = c*f # multiply the flux by the velocity
# advect
    phi[2:-2] = phin[2:-2] - dt/dx*(f[3:-2] - f[2:-3]) #+ dt/dx/dx*diffusion  (f already includes the advecting velocity c)
t += dt
i+=1
sol.append(phi)
# plt.annotate('k = '+ str(k), xy=(0.5, 0.8), xytext=(0.015, 0.9),fontsize=8)
# plt.legend(('exact','numerical'),loc='upper left',fontsize=7)
# ani = animation.ArtistAnimation(fig, ims, interval=100, blit=True,
# repeat_delay=1000)
# ani.save('k-scheme-'+str(k)+'.mp4',dpi=300,fps=24)
plt.plot(sol[0], label='initial condition')
plt.plot(sol[-1], label='one residence time')
plt.legend()
plt.grid()
###Output
_____no_output_____
###Markdown
Create Animation in Moving Reference Frame
###Code
"""
Create Animation in Moving Reference Frame
"""
import matplotlib
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.animation as animation
matplotlib.use("Agg")
fig, ax = plt.subplots(figsize=(4,3),dpi=150)
ax.grid(True)
f0 = sol[0]
line0, = ax.plot(x[2:-2], f0[2:-2] ,'r-',linewidth=0.75, animated=True)
line1, = ax.plot(x[2:-2], f0[2:-2] ,'k-o',markevery=2,markersize=3.5,markerfacecolor='deepskyblue',
markeredgewidth=0.25, markeredgecolor='k',linewidth=0.45, animated=True)
ann = ax.annotate('time ='+str(round(t,3))+' s', xy=(2, 1), xytext=(40, 200),xycoords='figure points')
ax.annotate('k ='+str(k) + ' (k-scheme)', xy=(2, 1), xytext=(40, 190),xycoords='figure points')
plt.tight_layout()
def animate_moving(i):
print('time=',i*dt)
t = i*dt
xt = x + i*1.1*c*dt
line0.set_xdata(xt[2:-2])
line1.set_xdata(xt[2:-2])
ax.axes.set_xlim(xt[0],0.0*dx + xt[-1])
f = sol[i]
ax.axes.set_ylim(1.1*min(f) - 0.1,1.1*max(f))
ann.set_text('time ='+str(round(t,4))+'s (' + str(i)+ ').')
shift =int(np.ceil(i*c*dt/dx))
line1.set_ydata(np.roll(f[2:-2], -shift))
f0 = sol[0]
line0.set_ydata(f0[2:-2])
return line0,line1
# Init only required for blitting to give a clean slate.
def init():
line0.set_ydata(np.ma.array(x[2:-2], mask=True))
line1.set_ydata(np.ma.array(x[2:-2], mask=True))
return line0,line1
ani = animation.FuncAnimation(fig, animate_moving, np.arange(0,len(sol),2*int(1/cfl)), init_func=init,
interval=20, blit=False)
print('done!')
ani.save('__k-scheme_' + str(k)+'.mp4',fps=24,dpi=300)
import urllib
import requests
from IPython.core.display import HTML
def css_styling():
styles = requests.get("https://raw.githubusercontent.com/saadtony/NumericalMethods/master/styles/custom.css")
return HTML(styles.text)
css_styling()
###Output
_____no_output_____ |
Image Classification/CGIAR Wheat Growth Stage Challenge/neurofitting/zindi_cgiar_wheat_growth_stage_challenge/train_lq2_only_effnet_b4_step1/train_lq2_only_effnet_b4_step1_fold4.ipynb | ###Markdown
This Colab notebook must be run on a **P100** GPU instance, otherwise it will crash. Use Cell-1 to ensure that it has a **P100** GPU instance. Cell-1: Ensure the required GPU instance (P100)
###Code
#no.of sockets i.e available slots for physical processors
!lscpu | grep 'Socket(s):'
#no.of cores each processor is having
!lscpu | grep 'Core(s) per socket:'
#no.of threads each core is having
!lscpu | grep 'Thread(s) per core'
#GPU count and name
!nvidia-smi -L
#use 'nvidia-smi' to see GPU activity while doing deep learning tasks; for this command and the one above to work, go to 'Runtime > Change runtime type > Hardware accelerator > GPU'
!nvidia-smi
###Output
_____no_output_____
###Markdown
Cell-2: Add Google Drive
###Code
from google.colab import drive
drive.mount('/content/gdrive')
###Output
_____no_output_____
###Markdown
Cell-3: Install Required Dependencies
###Code
!pip install efficientnet_pytorch==0.7.0
!pip install albumentations==0.4.5
!pip install torch==1.6.0+cu101 torchvision==0.7.0+cu101 -f https://download.pytorch.org/whl/torch_stable.html -q
###Output
_____no_output_____
###Markdown
Cell-4: Run this cell to generate current fold weight ( Estimated Time for training this fold is around 2 hours 48 minutes )
###Code
import sys
sys.path.insert(0, "/content/gdrive/My Drive/zindi_cgiar_wheat_growth_stage_challenge/src_lq2")
from dataset import *
from model import *
from trainer import *
from utils import *
import numpy as np
from sklearn.model_selection import StratifiedKFold
from torch.utils.data import DataLoader
config = {
'n_folds': 5,
'random_seed': 7200,
'run_fold': 4,
'model_name': 'efficientnet-b4',
'global_dim': 1792,
'batch_size': 48,
'n_core': 2,
'weight_saving_path': '/content/gdrive/My Drive/zindi_cgiar_wheat_growth_stage_challenge/train_lq2_only_effnet_b4_step1/weights/',
'resume_checkpoint_path': None,
'lr': 0.01,
'total_epochs': 100,
}
if __name__ == '__main__':
set_random_state(config['random_seed'])
imgs = np.load('/content/gdrive/My Drive/zindi_cgiar_wheat_growth_stage_challenge/zindi_npy_data/train_imgs.npy')
labels = np.load('/content/gdrive/My Drive/zindi_cgiar_wheat_growth_stage_challenge/zindi_npy_data/train_labels.npy')
labels_quality = np.load('/content/gdrive/My Drive/zindi_cgiar_wheat_growth_stage_challenge/zindi_npy_data/train_labels_quality.npy')
imgs = imgs[labels_quality == 2]
labels = labels[labels_quality == 2]
labels = labels - 1
skf = StratifiedKFold(n_splits=config['n_folds'], shuffle=True, random_state=config['random_seed'])
for fold_number, (train_index, val_index) in enumerate(skf.split(X=imgs, y=labels)):
if fold_number != config['run_fold']:
continue
train_dataset = ZCDataset(
imgs[train_index],
labels[train_index],
transform=get_train_transforms(),
test=False,
)
train_loader = DataLoader(
train_dataset,
batch_size=config['batch_size'],
shuffle=True,
num_workers=config['n_core'],
drop_last=True,
pin_memory=True,
)
val_dataset = ZCDataset(
imgs[val_index],
labels[val_index],
transform=get_val_transforms(),
test=True,
)
val_loader = DataLoader(
val_dataset,
batch_size=config['batch_size'],
shuffle=False,
num_workers=config['n_core'],
pin_memory=True,
)
del imgs, labels
model = CNN_Model(config['model_name'], config['global_dim'])
args = {
'model': model,
'Loaders': [train_loader,val_loader],
'metrics': {'Loss':AverageMeter, 'f1_score':PrintMeter, 'rmse':PrintMeter},
'checkpoint_saving_path': config['weight_saving_path'],
'resume_train_from_checkpoint': False,
'resume_checkpoint_path': config['resume_checkpoint_path'],
'lr': config['lr'],
'fold': fold_number,
'epochsTorun': config['total_epochs'],
'batch_size': config['batch_size'],
'test_run_for_error': False,
'problem_name': 'zindi_cigar',
}
Trainer = ModelTrainer(**args)
Trainer.fit()
###Output
_____no_output_____ |
notebooks/Storage API sample.ipynb | ###Markdown
Storage API sample
###Code
import gcp
import gcp.storage as storage
from gcp.context import Context
import random
import pandas as pd
from StringIO import StringIO
project = Context.default().project_id
bucket_name = "yukoga-kaggle"
bucket_path = "gs://" + bucket_name
test_sample_size = 1000
train_sample_size = 1000
sample_submission_sample_size = 1000
%%bash
curl --silent -H "Metadata-Flavor: Google" http://metadata/computeMetadata/v1/instance/service-accounts/default/email
# get skiprows for pandas.DataFrame
def get_skiprows(sample_size, num_records):
return sorted(random.sample(range(1,num_records),num_records - sample_size))
# test data
%storage read --object "gs://yukoga-kaggle/facebook-checkin/test.csv" --variable tmp_table
num_records = len(tmp_table.split('\n'))
test = pd.read_csv(StringIO(tmp_table), skiprows=get_skiprows(test_sample_size, num_records))
del tmp_table
test.head()
# get sampled records from cloud storage
def read_sampled_lines(item, sample_size):
    """Reads the content of this item as text, and returns a list of sampled lines.
    Args:
      item: item object from Google Cloud Storage.
      sample_size: max number of lines to return. If None, return all lines.
    Returns:
      The text content of the item as a list of lines.
    Raises:
      Exception if there was an error requesting the item's content.
    """
    def read_specific_lines(item, offset, num_records):
        # Heuristic: assume roughly 100 bytes per record for the initial chunk size.
        start_to_read = 100 * (0 if offset is None else offset)
        max_to_read = item.metadata.size
        num_records = max_to_read if num_records is None else num_records
        bytes_to_read = min(100 * num_records, item.metadata.size)
        lines = []
        while True:
            # Read a chunk of the object (start offset and byte count passed positionally).
            content = item.read_from(start_to_read, bytes_to_read)
            lines = content.split('\n')
            if len(lines) > num_records or bytes_to_read >= max_to_read:
                break
            # Not enough lines yet: read a larger chunk and try again.
            bytes_to_read = min(2 * bytes_to_read, max_to_read)
        del lines[-1]  # drop the trailing (possibly partial) line
        return lines[0:num_records]
    return read_specific_lines(item, 0, sample_size)
mybucket = storage.Bucket(bucket_name)
for item in mybucket.items():
print item.metadata.name + " : " + str(item.metadata.size)
help(item.read_lines)
import inspect
print inspect.getsource(item._api.object_download)
print inspect.getsource(item.read_from)
print inspect.getsource(item.read_lines)
print inspect.getsource(gcp._util.Http.request)
# test data
%storage read --object "gs://yukoga-kaggle/facebook-checkin/test.csv" --variable tmp_table
num_records = len(tmp_table.split('\n'))
test = pd.read_csv(StringIO(tmp_table), skiprows=get_skiprows(test_sample_size, num_records))
del tmp_table
# sample submission data
%storage read --object "gs://yukoga-kaggle/facebook-checkin/sample_submission.csv" --variable tmp_table
num_records = len(tmp_table.split('\n'))
sample_submission = pd.read_csv(StringIO(tmp_table), skiprows=get_skiprows(sample_submission_sample_size, num_records))
del tmp_table
# train data
%storage read --object "gs://yukoga-kaggle/facebook-checkin/train.csv" --variable tmp_table
num_records = len(tmp_table.split('\n'))
train = pd.read_csv(StringIO(tmp_table), skiprows=get_skiprows(train_sample_size, num_records))
del tmp_table
###Output
_____no_output_____ |
Ch03/03_01/03_01.ipynb | ###Markdown
___ Chapter 3 - Basic Math and Statistics Segment 1 - Using NumPy to perform arithmetic operations on data
###Code
import numpy as np
from numpy.random import randn
np.set_printoptions(precision=2)
###Output
_____no_output_____
###Markdown
Creating arrays Creating arrays using a list
###Code
a = np.array([1,2,3,4,5,6])
a
b = np.array([[10,20,30], [40,50,60]])
b
###Output
_____no_output_____
###Markdown
Creating arrays via assignment
###Code
np.random.seed(25)
c = 36*np.random.randn(6)
c
d = np.arange(1,35)
d
###Output
_____no_output_____
###Markdown
Performing arithmetic on arrays
###Code
a * 10
c + a
c - a
c * a
c / a
###Output
_____no_output_____
###Markdown
Multiplying matrices and basic linear algebra
###Code
aa = np.array([[2.,4.,6.], [1.,3.,5.], [10.,20.,30.]])
aa
bb = np.array([[0.,1.,2.], [3.,4.,5.], [6.,7.,8.]])
bb
aa*bb
np.dot(aa,bb)
###Output
_____no_output_____ |
DSVC-mod/machinelearning/Lesson2/SavingModel.ipynb | ###Markdown
Saving Model Lab notes: saving and loading a trained model
###Code
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler, PolynomialFeatures
from sklearn.pipeline import Pipeline
from sklearn.metrics import accuracy_score
import os
from sklearn.externals import joblib
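# Note (not in the original notebook): sklearn.externals.joblib was removed in scikit-learn 0.23;
# on newer versions install the standalone package and use "import joblib" instead.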
data = pd.read_csv('iris.data', header=None)
x = data[[0, 1]]
y = pd.Categorical(data[4]).codes
if os.path.exists('iris.model'):
print('Load Model...')
lr = joblib.load('iris.model')
else:
print('Train Model...')
lr = Pipeline([('sc', StandardScaler()),
('poly', PolynomialFeatures(degree=3)),
('clf', LogisticRegression()) ])
lr.fit(x, y.ravel())
y_hat = lr.predict(x)
joblib.dump(lr, 'iris.model')
print('y_hat = \n', y_hat)
print('accuracy = %.3f%%' % (100*accuracy_score(y, y_hat)))
###Output
Load Model...
y_hat =
[0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 2 2 2 1 2 1 2 1 2 1 1 1 1 1 1 2 1 1 1 1 1 1 1 1
2 2 2 2 1 1 1 1 1 1 1 2 2 1 1 1 1 1 1 1 1 1 1 1 1 1 2 1 2 1 2 2 1 2 2 2 2
2 2 1 1 2 2 2 2 1 2 1 2 1 2 2 1 1 2 2 2 2 2 1 1 2 2 2 1 2 2 2 1 2 2 2 1 2
2 1]
accuracy = 80.667%
|
analysis/biorxiv_1/summary_stats.ipynb | ###Markdown
Summary stats
###Code
import anndata
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import matplotlib as mpl
import matplotlib.patches as mpatches
import scanpy as sc
from scipy.stats import ks_2samp, ttest_ind
import ast
from scipy.sparse import csr_matrix
import warnings
warnings.filterwarnings('ignore')
def nd(arr):
return np.asarray(arr).reshape(-1)
fsize=20
plt.rcParams.update({'font.size': fsize})
%config InlineBackend.figure_format = 'retina'
ss = anndata.read_h5ad("../cell_ranger_annotation/no_filter_gene.h5ad")
tenx = anndata.read_h5ad("../cell_ranger_annotation/10xv3_gene.h5ad")
mfish = anndata.read_h5ad("../cell_ranger_annotation/merfish.h5ad")
###Output
Transforming to str index.
###Markdown
Number of cells
###Code
print("SMART-Seq {:,}".format(ss.shape[0]))
print("10xv3 {:,}".format(tenx.shape[0]))
print("MERFISH {:,}".format(mfish.shape[0]))
###Output
SMART-Seq 6,295
10xv3 94,162
MERFISH 243,799
###Markdown
Number of Genes
###Code
print("SMART-Seq {:,}".format(ss.shape[1]))
print("10xv3 {:,}".format(tenx.shape[1]))
print("MERFISH {:,}".format(mfish.shape[1]))
###Output
SMART-Seq 31,053
10xv3 31,053
MERFISH 254
###Markdown
Number of detected genes per cell (average)
###Code
print("SMART-Seq {:,.0f}".format((ss.layers["X"]>0).sum(axis=1).mean()))
print("10xv3 {:,.0f}".format((tenx.X>0).sum(axis=1).mean()))
print("MERFISH {:,.0f}".format((mfish.layers["X"]>0).sum(axis=1).mean()))
###Output
SMART-Seq 10,333
10xv3 5,891
MERFISH 74
###Markdown
Number of clusters
###Code
print("SMART-Seq {:,}".format(ss.obs.cluster_label.nunique()))
print("10xv3 {:,}".format(tenx.obs.cluster_label.nunique()))
print("MERFISH {:,}".format(mfish.obs.label.nunique()))
###Output
SMART-Seq 62
10xv3 147
MERFISH 93
###Markdown
Reads Processed
###Code
"SMART-Seq {:,} reads".format(15229289828)
tenx_reads = [1048408446,
1466307916,
2941873323,
1152751524,
1708764205,
1926459540,
1600417861,
1897698358,
1919010597,
2247342604,
2465213703,
2321988388]
"10x: {:,} reads".format(np.sum(tenx_reads))
###Output
_____no_output_____
###Markdown
Reads per cell
###Code
"SMART-Seq {:,.0f} reads per cell".format(15229289828/ss.shape[0])
"10x {:,.0f} reads per cell".format(np.sum(tenx_reads)/tenx.shape[0])
"SMART-Seq was sequenced {:,.0f}x deeper per cell than 10xv3.".format(15229289828/ss.shape[0]/(np.sum(tenx_reads)/tenx.shape[0]))
###Output
_____no_output_____
###Markdown
Isoform
###Code
ss_iso = anndata.read_h5ad("../cell_ranger_annotation/no_filter_isoform.h5ad")
print("SMART-Seq {:,.0f}".format((ss_iso.layers["X"]>0).sum(axis=1).mean()))
###Output
SMART-Seq 20,319
|
tutorials/sample_vqe_program/qiskit_runtime_vqe_program.ipynb | ###Markdown
Creating Custom Programs for the Qiskit RuntimePaul NationIBM Quantum Partners Technical Enablement TeamHere we will demonstrate how to create, upload, and use a custom Program for the Qiskit Runtime. As the utility of the Runtime execution engine lies in its ability to execute many quantum circuits with low latencies, this tutorial will show how to create your own Variational Quantum Eigensolver (VQE) program from scratch. Prerequisites- You must have Qiskit 0.30+ installed.- You must have an IBM Quantum Experience account with the ability to upload a Runtime program. **Currently there is no way to know if you have Runtime upload ability outside of an email from IBM**. Current limitationsThe Runtime execution engine currently has the following limitations that must be kept in mind:- The Docker images used by the runtime include only Qiskit and its dependencies, with few exceptions. One exception is the inclusion of the `mthree` measurement mitigation package.- For security reasons, the runtime cannot make internet calls outside of the environment.- Your Runtime program name must not contain an underscore`_`, otherwise it will cause an error when you try to execute it.As the Runtime matures, these limitations will be removed. Simple VQEVQE is a hybrid quantum-classical optimization procedure that finds the lowest eigenstate and eigenenergy of a linear system defined by a given Hamiltonian of Pauli operators. For example, consider the following two-qubit Hamiltonian:$$H = A X_{1}\otimes X_{0} + A Y_{1}\otimes Y_{0} + A Z_{1}\otimes Z_{0},$$where $A$ is a numerical coefficient and the subscripts label the qubits on which the operators act. The zero index being farthest right is the ordering used in Qiskit. The Pauli operators tell us which measurement basis to use when measuring each of the qubits.We want to find the ground state (lowest energy state) of this Hamiltonian, and the associated eigenvector. To do this we must start at a given initial state and iteratively vary the parameters that define this state using a classical optimizer such that the computed energies of subsequent steps are nominally lower than those previously. The parameterized state of the system is defined by an ansatz quantum circuit that should have non-zero support in the direction of the ground state. Because in general we do not know the solution, the choice of ansatz circuit can be highly problem-specific with a form dictated by additional information. For further information about variational algorithms, we point the reader to [Nature Reviews Physics volume 3, 625 (2021)](https://doi.org/10.1038/s42254-021-00348-9).Thus we need at least the following inputs to create our VQE quantum program:1. A representation of the Hamiltonian that specifies the problem.2. A choice of parameterized ansatz circuit, and the ability to pass configuration options, if any.However, the following are also beneficial inputs that users might want to have:3. Add the ability to pass an initial state.4. Vary the number of shots that are taken.5. Ability to select which classical optimizer is used, and set configuration values, if any. 6. Ability to turn on and off measurement mitigation. Specifying the form of the input valuesAll inputs to Runtime programs must be serializable objects. That is to say, whatever you pass into a Runtime program must be able to be converted to JSON format.
Thus it is beneficial to keep inputs limited to basic data types and structures unless you have experience with custom object serialization, or they are common Qiskit types such as QuantumCircuit etc. Fortunately, the VQE program described above can be made out of simple Python components.First, it is possible to represent any Hamiltonian using a list of values with each containing the numerical coefficient for each term and the string representation for the Pauli operators. For the above example, the ground state energy with $A=1$ is $-3$ and we can write it as:
###Code
H = [(1, 'XX'), (1, 'YY'), (1, 'ZZ')]
###Output
_____no_output_____
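###Markdown
As a quick numerical check (not part of the original tutorial), we can build the $4\times 4$ matrix for this Hamiltonian with NumPy and confirm that its lowest eigenvalue is indeed $-3$:
###Code
import numpy as np
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H_check = np.kron(X, X) + np.kron(Y, Y) + np.kron(Z, Z)
np.linalg.eigvalsh(H_check)  # eigenvalues in ascending order; the smallest is -3
###Output
_____no_output_____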
###Markdown
Next we have to provide the ability to specify the parameterized Ansatz circuit. Here we will take advantage of the fact that many ansatz circuits are pre-defined in the Qiskit Circuit Library. Examples can be found in the [N-local circuits section](https://qiskit.org/documentation/apidoc/circuit_library.html#n-local-circuits).We would like the user to be able to select between ansatz options such as: `NLocal`, `TwoLocal`, and `EfficientSU2`. We could have the user pass the whole ansatz circuit to the program; however, in order to reduce the size of the upload we will pass the ansatz by name. In the runtime program, we can take this name and get the class that it corresponds to from the library using, for example
###Code
import qiskit.circuit.library.n_local as lib_local
ansatz = getattr(lib_local, 'EfficientSU2')
###Output
_____no_output_____
###Markdown
For the ansatz configuration, we will pass a simple `dict` of values. Optionals - If we want to add the ability to pass an initial state, then we will need to add the ability to pass a 1D list/ NumPy array. Because the number of parameters depends on the ansatz and its configuration, the user would have to know what ansatz they are using ahead of time.- Selecting a number of shots requires simply passing an integer value.- Here we will allow selecting a classical optimizer by name from those in SciPy, and a `dict` of configuration parameters. Note that for execution on an actual system, the noise inherent in today's quantum systems makes having a stochastic optimizer crucial to success. SciPy does not have such a choice, and the one built into Qiskit is wrapped in such a manner as to make it difficult to use elsewhere. As such, here we will use an SPSA optimizer written to match the style of those in SciPy. This function is given in [Appendix A](Appendix-A). - Finally, for measurement error mitigation we can simply pass a boolean (True/False) value. Main programWe are now in a position to start building our main program. However, before doing so we point out that it makes the code cleaner to make a separate function that takes strings of Pauli operators that define our Hamiltonian and converts them to a list of circuits with single-qubit gates that change the measurement basis for each qubit, if needed. This function is given in [Appendix B](Appendix-B). Required signatureEvery runtime program is defined via the `main` function, and must have the following input signature:```main(backend, user_message, *args, **kwargs)```where `backend` is the backend that the Program is to be executed on, and `user_message` is the class by which interim (and possibly final) results are communicated back to the user. After these two items, we add our program-specific arguments and keyword arguments. The main VQE programHere is the main program for our sample VQE. What each element of the function does is written in the comments before the element appears.
###Code
# Grab functions and modules from dependencies
import numpy as np
import scipy.optimize as opt
from scipy.optimize import OptimizeResult
import mthree
# Grab functions and modules from Qiskit needed
from qiskit import QuantumCircuit, transpile
import qiskit.circuit.library.n_local as lib_local
# The entrypoint for our Runtime Program
def main(backend, user_messenger,
hamiltonian,
ansatz='EfficientSU2',
ansatz_config={},
x0=None,
optimizer='SPSA',
optimizer_config={'maxiter': 100},
shots = 8192,
use_measurement_mitigation=False
):
"""
The main sample VQE program.
Parameters:
backend (ProgramBackend): Qiskit backend instance.
user_messenger (UserMessenger): Used to communicate with the
program user.
hamiltonian (list): Hamiltonian whose ground state we want to find.
ansatz (str): Optional, name of ansatz quantum circuit to use,
default='EfficientSU2'
ansatz_config (dict): Optional, configuration parameters for the
ansatz circuit.
x0 (array_like): Optional, initial vector of parameters.
optimizer (str): Optional, string specifying classical optimizer,
default='SPSA'.
optimizer_config (dict): Optional, configuration parameters for the
optimizer.
shots (int): Optional, number of shots to take per circuit.
use_measurement_mitigation (bool): Optional, use measurement mitigation,
default=False.
Returns:
OptimizeResult: The result in SciPy optimization format.
"""
# Split the Hamiltonian into two arrays, one for coefficients, the other for
# operator strings
coeffs = np.array([item[0] for item in hamiltonian], dtype=complex)
op_strings = [item[1] for item in hamiltonian]
# The number of qubits needed is given by the number of elements in the strings
    # that define the Hamiltonian. Here we grab this data from the first element.
num_qubits = len(op_strings[0])
# We grab the requested ansatz circuit class from the Qiskit circuit library
# n_local module and configure it using the number of qubits and options
# passed in the ansatz_config.
ansatz_instance = getattr(lib_local, ansatz)
ansatz_circuit = ansatz_instance(num_qubits, **ansatz_config)
    # Here we use our convenience function from Appendix B to get measurement circuits
# with the correct single-qubit rotation gates.
meas_circs = opstr_to_meas_circ(op_strings)
# When computing the expectation value for the energy, we need to know if we
    # evaluate a Z measurement or an identity measurement. Here we take any X and Y
    # operators in the strings and convert them to Z since we added the rotations
# with the meas_circs.
meas_strings = [string.replace('X', 'Z').replace('Y', 'Z') for string in op_strings]
# Take the ansatz circuits, add the single-qubit measurement basis rotations from
# meas_circs, and finally append the measurements themselves.
full_circs = [ansatz_circuit.compose(mcirc).measure_all(inplace=False) for mcirc in meas_circs]
# Get the number of parameters in the ansatz circuit.
num_params = ansatz_circuit.num_parameters
# Use a given initial state, if any, or do random initial state.
if x0:
x0 = np.asarray(x0, dtype=float)
if x0.shape[0] != num_params:
raise ValueError('Number of params in x0 ({}) does not match number \
of ansatz parameters ({})'. format(x0.shape[0],
num_params))
else:
x0 = 2*np.pi*np.random.rand(num_params)
# Because we are in general targeting a real quantum system, our circuits must be transpiled
# to match the system topology and, hopefully, optimize them.
# Here we will set the transpiler to the most optimal settings where 'sabre' layout and
# routing are used, along with full O3 optimization.
# This works around a bug in Qiskit where Sabre routing fails for simulators (Issue #7098)
trans_dict = {}
if not backend.configuration().simulator:
trans_dict = {'layout_method': 'sabre', 'routing_method': 'sabre'}
trans_circs = transpile(full_circs, backend, optimization_level=3, **trans_dict)
# If using measurement mitigation we need to find out which physical qubits our transpiled
# circuits actually measure, construct a mitigation object targeting our backend, and
    # finally calibrate our mitigation by running calibration circuits on the backend.
if use_measurement_mitigation:
maps = mthree.utils.final_measurement_mapping(trans_circs)
mit = mthree.M3Mitigation(backend)
mit.cals_from_system(maps)
# Here we define a callback function that will stream the optimizer parameter vector
# back to the user after each iteration. This uses the `user_messenger` object.
# Here we convert to a list so that the return is user readable locally, but
# this is not required.
def callback(xk):
user_messenger.publish(list(xk))
# This is the primary VQE function executed by the optimizer. This function takes the
# parameter vector as input and returns the energy evaluated using an ansatz circuit
# bound with those parameters.
def vqe_func(params):
# Attach (bind) parameters in params vector to the transpiled circuits.
bound_circs = [circ.bind_parameters(params) for circ in trans_circs]
# Submit the job and get the resultant counts back
counts = backend.run(bound_circs, shots=shots).result().get_counts()
# If using measurement mitigation apply the correction and
# compute expectation values from the resultant quasiprobabilities
# using the measurement strings.
if use_measurement_mitigation:
quasi_collection = mit.apply_correction(counts, maps)
expvals = quasi_collection.expval(meas_strings)
# If not doing any mitigation just compute expectation values
# from the raw counts using the measurement strings.
    # Since Qiskit does not have such functionality we use the convenience
# function from the mthree mitigation module.
else:
expvals = mthree.utils.expval(counts, meas_strings)
# The energy is computed by simply taking the product of the coefficients
# and the computed expectation values and summing them. Here we also
# take just the real part as the coefficients can possibly be complex,
# but the energy (eigenvalue) of a Hamiltonian is always real.
energy = np.sum(coeffs*expvals).real
return energy
# Here is where we actually perform the computation. We begin by seeing what
    # optimization routine the user has requested, e.g. SPSA versus SciPy ones,
    # and dispatch to the correct optimizer. The selected optimizer starts at
    # x0 and calls 'vqe_func' every time the optimizer needs to evaluate the cost
# function. The result is returned as a SciPy OptimizerResult object.
# Additionally, after every iteration, we use the 'callback' function to
    # publish the interim results back to the user. This is important to do
# so that if the Program terminates unexpectedly, the user can start where they
# left off.
# Since SPSA is not in SciPy need if statement
if optimizer == 'SPSA':
res = fmin_spsa(vqe_func, x0, args=(), **optimizer_config,
callback=callback)
# All other SciPy optimizers here
else:
res = opt.minimize(vqe_func, x0, method=optimizer,
options=optimizer_config, callback=callback)
# Return result. OptimizeResult is a subclass of dict.
return res
###Output
_____no_output_____
###Markdown
Local testingImportant: You need to execute the code blocks in Appendices A and B before continuing.We can test whether our routine works by simply calling the `main` function with a backend instance, a `UserMessenger`, and sample arguments.
###Code
from qiskit.providers.ibmq.runtime import UserMessenger
msg = UserMessenger()
# Use the local Aer simulator
from qiskit import Aer
backend = Aer.get_backend('qasm_simulator')
# Execute the main routine for our simple two-qubit Hamiltonian H, and perform 5 iterations of the SPSA solver.
main(backend, msg, H, optimizer_config={'maxiter': 5})
###Output
[1.3866438513555424, 2.061101094147009, 2.710143598453931, 1.458760090093447, 2.3058208994643126, 1.0733073295503854, 0.9668603895188339, 1.2860160155170703, 1.14379618119804, 3.7817924045673936, 3.6661096501688366, 5.08966796207572, 2.2474981078982554, 1.8422666234402352, 5.473605998866756, 0.08161955255295296]
[1.451401822349035, 1.9963431231535165, 2.7749015694474233, 1.5235180610869397, 2.370578870457805, 1.1380653005438781, 0.9021024185253412, 1.350773986510563, 1.0790382102045473, 3.846550375560886, 3.601351679175344, 5.024909991082227, 2.312256078891748, 1.7775086524467425, 5.5383639698602485, 0.14637752354644562]
[1.5726151521761795, 2.117556452980661, 2.896114899274568, 1.6447313909140842, 2.2493655406306603, 1.0168519707167336, 1.0233157483524857, 1.2295606566834185, 1.2002515400316918, 3.967763705388031, 3.722565009002489, 5.146123320909371, 2.191042749064603, 1.656295322619598, 5.417150640033104, 0.26759085337359023]
[1.8221631813867472, 2.3671044821912286, 2.6465668700640004, 1.3951833617035165, 1.9998175114200927, 0.767303941506166, 1.2728637775630534, 0.9800126274728509, 1.4497995692422594, 3.718215676177463, 3.4730169797919213, 4.896575291698803, 2.440590778275171, 1.4067472934090304, 5.666698669243672, 0.01804282416302258]
[2.023489393498135, 2.1657782700798407, 2.8478930821753883, 1.1938571495921286, 1.7984912993087048, 0.968630153617554, 1.4741899896744413, 0.7786864153614629, 1.2484733571308715, 3.5168894640660753, 3.2716907676805334, 4.695249079587415, 2.239264566163783, 1.6080735055204183, 5.465372457132284, 0.21936903627441057]
###Markdown
Having executed the above, we see that there are 5 parameter arrays returned, one for each callback, along with the final optimization result. The parameter arrays are the interim results, and the `UserMessenger` object prints these values to the cell output. The output itself is the answer we obtained, expressed as a SciPy `OptimizerResult` object. Program metadataProgram metadata is essentially the docstring for a runtime program. It describes overall program information such as the program `name`, `description`, `version`, and the `max_execution_time` the program is allowed to run, as well as detailing the inputs and outputs the program expects. At a bare minimum the values described above are required:
###Code
meta = {
"name": "sample-vqe",
"description": "A sample VQE program.",
"max_execution_time": 100000,
"version": "1.0",
}
###Output
_____no_output_____
###Markdown
It is important to set the `max_execution_time` high enough so that your Program does not get terminated unexpectedly. Additionally, one should make sure that interim results are sent back to the user so that, if something does happen, the user can start where they left off.It is, however, good form to detail the parameters and return types, as well as interim results. That being said, if making a runtime intended to be used by others, this information would also likely be mirrored in the signature of a function or class that the user would interact with directly; end users should not directly call runtime programs. We will see why below. Nevertheless, let us add to our metadata. First, the `parameters` section details the inputs the user is able to pass:
###Code
meta["parameters"] = [
{"name": "hamiltonian", "description": "Hamiltonian whose ground state we want to find.", "type": "list", "required": True},
{"name": "ansatz", "description": "Name of ansatz quantum circuit to use, default='EfficientSU2'", "type": "str", "required": False},
{"name": "ansatz_config", "description": "Configuration parameters for the ansatz circuit.", "type": "dict", "required": False},
{"name": "x0", "description": "Initial vector of parameters.", "type": "ndarray", "required": False},
{"name": "optimizer", "description": "Classical optimizer to use, default='SPSA'.", "type": "str", "required": False},
{"name": "optimizer_config", "description": "Configuration parameters for the optimizer.", "type": "dict", "required": False},
{"name": "shots", "description": "Number of shots to take per circuit.", "type": "int", "required": False},
{"name": "use_measurement_mitigation", "description": "Use measurement mitigation, default=False.", "type": "bool", "required": False}
]
###Output
_____no_output_____
###Markdown
Next, the `return_values` section describes the return types:
###Code
meta['return_values'] = [
{"name": "result", "description": "Final result in SciPy optimizer format.", "type": "OptimizeResult"}
]
###Output
_____no_output_____
###Markdown
and finally let us specify what comes back when an interim result is returned:
###Code
meta["interim_results"] = [
{"name": "params", "description": "Parameter vector at current optimization step", "type": "ndarray"},
]
###Output
_____no_output_____
###Markdown
Uploading the programWe now have all the ingredients needed to upload our program. To do so we need to collect all of our code in one file, here called `sample_vqe.py` for uploading. This limitation will be removed in later versions of the Runtime. Alternatively, if the entire code is contained within a single Jupyter notebook cell, then this can be done using the magic function```%%writefile my_program.py```To actually upload the program we need to get a Provider from our IBM Quantum account:
###Code
from qiskit import IBMQ
IBMQ.load_account()
provider = IBMQ.get_provider(group='deployed')
###Output
_____no_output_____
###Markdown
Program uploadThe call to `upload_program` takes the target Python file as `data` and the metadata as inputs. **If you have already uploaded the program this will raise an error and you must delete it first to continue**.
###Code
program_id = provider.runtime.upload_program(data='sample_vqe.py', metadata=meta)
program_id
###Output
_____no_output_____
###Markdown
Here the returned `program_id` is the same as the program `name` given in the metadata. However, this need not be the case if there are multiple programs with the same name. In that case, `program_id` is the unique identifier that needs to be used in calling the program later. Program informationWe can query the program for information and see that our metadata is correctly being attached:
###Code
prog = provider.runtime.program(program_id)
print(prog)
###Output
sample-vqe:
Name: sample-vqe
Description: A sample VQE program.
Version: 1.0
Creation date: 2021-10-06T13:38:19.000000
Max execution time: 100000
Input parameters:
- hamiltonian:
Description: Hamiltonian whose ground state we want to find.
Type: list
Required: True
- ansatz:
Description: Name of ansatz quantum circuit to use, default='EfficientSU2'
Type: str
Required: False
- ansatz_config:
Description: Configuration parameters for the ansatz circuit.
Type: dict
Required: False
- x0:
Description: Initial vector of parameters.
Type: ndarray
Required: False
- optimizer:
Description: Classical optimizer to use, default='SPSA'.
Type: str
Required: False
- optimizer_config:
Description: Configuration parameters for the optimizer.
Type: dict
Required: False
- shots:
Description: Number of shots to take per circuit.
Type: int
Required: False
- use_measurement_mitigation:
Description: Use measurement mitigation, default=False.
Type: bool
Required: False
Interim results:
- params:
Description: Parameter vector at current optimization step
Type: ndarray
Returns:
- result:
Description: Final result in SciPy optimizer format.
Type: OptimizeResult
###Markdown
Deleting a programIf you make a mistake and need to delete and/or re-upload the program you can run the following, passing the `program_id`:
###Code
#provider.runtime.delete_program(program_id)
###Output
_____no_output_____
###Markdown
Running the program Specify parametersTo run the program we need to specify the `options` that are used in the runtime environment (not the program variables). At present, only the `backend_name` is required.
###Code
backend = provider.backend.ibmq_qasm_simulator
options = {'backend_name': backend.name()}
###Output
_____no_output_____
###Markdown
The `inputs` dictionary is used to pass arguments to the `main` function itself. For example:
###Code
inputs = {}
inputs['hamiltonian'] = H
inputs['optimizer_config']={'maxiter': 10}
###Output
_____no_output_____
###Markdown
Execute the programWe can now execute the program and grab the result.
###Code
job = provider.runtime.run(program_id, options=options, inputs=inputs)
job.result()
###Output
_____no_output_____
###Markdown
A few things need to be pointed out. First, we did not get back any interim results, and second, the return object is a plain dictionary. This is because we did not listen for the interim results, and we did not tell the job how to format the returned result. Listening for interim resultsTo listen for interim results we need to pass a callback function to `provider.runtime.run` that stores the results. The callback takes two arguments, `job_id` and the returned data:
###Code
interm_results = []
def vqe_callback(job_id, data):
interm_results.append(data)
###Output
_____no_output_____
###Markdown
Executing again we get:
###Code
job2 = provider.runtime.run(program_id, options=options, inputs=inputs, callback=vqe_callback)
job2.result()
print(interm_results)
###Output
[[6.242814925001226, 5.046288794393892, 1.343121114475193, 2.6379574923082076, 6.634801396657214, 2.2371025934312705, 1.3494123213893983, 4.706980812960231, -0.08498930038430019, 2.238011315792888, 5.678058655479549, 1.7252317954712644, 0.3277004890910993, 3.9902383499582776, 3.7536593566165557, 5.7438449084199155], [5.952836869002238, 4.7563107383949035, 1.053143058476205, 2.927935548307196, 6.344823340658226, 2.5270806494302587, 1.6393903773883862, 4.9969588689592195, -0.37496735638328815, 2.527989371791876, 5.388080599480561, 2.015209851470252, 0.03772243309211132, 3.70026029395929, 3.4636813006175675, 5.453866852420927], [5.901206983578188, 4.807940623818953, 1.0015131730521551, 2.876305662883146, 6.293193455234176, 2.475450764006209, 1.5877604919643364, 4.94532898353517, -0.3233374709592383, 2.4763594863678264, 5.4397104849046105, 1.9635799660462023, -0.013907452331938498, 3.7518901793833397, 3.5153111860416173, 5.4022369669968775], [5.99632820340434, 4.903061843645105, 0.9063919532260031, 2.9714268827092982, 6.198072235408024, 2.570571983832361, 1.4926392721381843, 4.850207763709018, -0.41845869078539044, 2.3812382665416743, 5.534831704730762, 2.0587011858723545, 0.0812137674942136, 3.847011399209492, 3.6104324058677695, 5.497358186823029], [5.77362611243404, 5.125763934615405, 0.6836898622557036, 2.748724791738999, 6.420774326378324, 2.347869892862062, 1.2699371811678848, 5.072909854679318, -0.19575659981509097, 2.6039403575119735, 5.3121296137604626, 2.2814032768426538, 0.3039158584645131, 4.069713490179791, 3.8331344968380687, 5.720060277793329], [5.796528867306504, 5.10286117974294, 0.6607871073832394, 2.7716275466114633, 6.443677081250788, 2.370772647734526, 1.2928399360403489, 5.050007099806853, -0.17285384494262684, 2.626843112384438, 5.335032368632927, 2.2585005219701895, 0.3268186133369772, 4.046810735307327, 3.856037251710533, 5.742963032665793], [6.018330604319341, 4.881059442730104, 0.438985370370403, 2.9934292836242995, 6.665478818263625, 2.5925743847473623, 1.0710381990275124, 4.828205362794017, -0.39465558195546324, 2.4050413753716016, 5.11323063162009, 2.4803022589830257, 0.1050168763241408, 4.2686124723201635, 4.07783898872337, 5.96476476967863], [6.0069596791809685, 4.892430367868476, 0.45035629550877526, 2.982058358485927, 6.654107893125253, 2.58120345960899, 1.0596672738891402, 4.839576287932389, -0.383284656817091, 2.393670450233229, 5.124601556758463, 2.4689313338446532, 0.09364595118576854, 4.279983397458536, 4.089209913861742, 5.953393844540257], [6.199502817535771, 4.699887229513673, 0.6428994338635784, 3.1746014968407303, 6.46156475477045, 2.3886603212541866, 1.2522104122439432, 4.647033149577586, -0.5758277951718942, 2.5862135885880324, 5.3171446951132655, 2.6614744721994565, -0.09889718716903459, 4.087440259103733, 3.896666775506939, 5.760850706185455], [6.2289809057745344, 4.67040914127491, 0.6134213456248151, 3.2040795850794934, 6.432086666531687, 2.4181384094929497, 1.22273232400518, 4.617555061338823, -0.6053058834106575, 2.6156916768267955, 5.287666606874502, 2.6319963839606935, -0.1283752754077979, 4.116918347342496, 3.8671886872681753, 5.7313726179466915]]
###Markdown
Formatting the returned resultsIn order to convert the returned results into the desired format, we need to specify a decoder. This decoder must have a `decode` method that gets called to do the actual conversion. In our case `OptimizeResult` is a simple subclass of `dict`, so the formatting is simple.
###Code
from qiskit.providers.ibmq.runtime import ResultDecoder
from scipy.optimize import OptimizeResult
class VQEResultDecoder(ResultDecoder):
@classmethod
def decode(cls, data):
data = super().decode(data) # This is required to preformat the data returned.
return OptimizeResult(data)
###Output
_____no_output_____
###Markdown
We can then use this when returning the job result:
###Code
job3 = provider.runtime.run(program_id, options=options, inputs=inputs)
job3.result(decoder=VQEResultDecoder)
###Output
_____no_output_____
###Markdown
Simplifying program execution with wrapping functionsWhile runtime programs are powerful and flexible, they are not the most friendly things to interact with. Therefore, if your program is intended to be used by others it is best to make wrapper functions and/or classes that simplify the user experience. Moreover, such wrappers allow for validation of user inputs client-side, which can quickly find errors that would otherwise be raised later during the execution process; something that might have taken hours of waiting in the queue to get to.Here we will make two helper routines. First, a job wrapper that allows us to attach and retrieve the interim results directly from the job object itself, as well as doing the decoding for us so that the end user need not worry about formatting the results themselves.
###Code
class RuntimeJobWrapper():
"""A simple Job wrapper that attaches interm results directly to the job object itself
in the `interm_results attribute` via the `_callback` function.
"""
def __init__(self):
self._job = None
self._decoder = VQEResultDecoder
self.interm_results = []
def _callback(self, job_id, xk):
"""The callback function that attaches interm results:
Parameters:
job_id (str): The job ID.
xk (array_like): A list or NumPy array to attach.
"""
self.interm_results.append(xk)
def __getattr__(self, attr):
if attr == 'result':
return self.result
else:
if attr in dir(self._job):
return getattr(self._job, attr)
raise AttributeError("Class does not have {}.".format(attr))
def result(self):
"""Get the result of the job as a SciPy OptimizerResult object.
This blocks until job is done, cancelled, or errors.
Returns:
OptimizerResult: A SciPy optimizer result object.
"""
return self._job.result(decoder=self._decoder)
###Output
_____no_output_____
###Markdown
Next, we create the actual function we want users to call to execute our program. To this function we will add a series of simple validation checks (not all checks will be done, for simplicity), as well as use the Job wrapper defined above to simplify the output.
###Code
import qiskit.circuit.library.n_local as lib_local
def vqe_runner(backend, hamiltonian,
ansatz='EfficientSU2', ansatz_config={},
x0=None, optimizer='SPSA',
optimizer_config={'maxiter': 100},
shots = 8192,
use_measurement_mitigation=False):
"""Routine that executes a given VQE problem via the sample-vqe program on the target backend.
Parameters:
backend (ProgramBackend): Qiskit backend instance.
hamiltonian (list): Hamiltonian whose ground state we want to find.
ansatz (str): Optional, name of ansatz quantum circuit to use, default='EfficientSU2'
ansatz_config (dict): Optional, configuration parameters for the ansatz circuit.
x0 (array_like): Optional, initial vector of parameters.
optimizer (str): Optional, string specifying classical optimizer, default='SPSA'.
optimizer_config (dict): Optional, configuration parameters for the optimizer.
shots (int): Optional, number of shots to take per circuit.
use_measurement_mitigation (bool): Optional, use measurement mitigation, default=False.
Returns:
OptimizeResult: The result in SciPy optimization format.
"""
options = {'backend_name': backend.name()}
inputs = {}
# Validate Hamiltonian is correct
    num_qubits = len(hamiltonian[0][1])
for idx, ham in enumerate(hamiltonian):
if len(ham[1]) != num_qubits:
raise ValueError('Number of qubits in Hamiltonian term {} does not match {}'.format(idx,
num_qubits))
inputs['hamiltonian'] = hamiltonian
# Validate ansatz is in the module
ansatz_circ = getattr(lib_local, ansatz, None)
if not ansatz_circ:
raise ValueError('Ansatz {} not in n_local circuit library.'.format(ansatz))
inputs['ansatz'] = ansatz
inputs['ansatz_config'] = ansatz_config
# If given x0, validate its length against num_params in ansatz:
if x0:
x0 = np.asarray(x0)
ansatz_circ = ansatz_circ(num_qubits, **ansatz_config)
num_params = ansatz_circ.num_parameters
if x0.shape[0] != num_params:
raise ValueError('Length of x0 {} does not match number of params in ansatz {}'.format(x0.shape[0],
num_params))
inputs['x0'] = x0
# Set the rest of the inputs
inputs['optimizer'] = optimizer
inputs['optimizer_config'] = optimizer_config
inputs['shots'] = shots
inputs['use_measurement_mitigation'] = use_measurement_mitigation
rt_job = RuntimeJobWrapper()
job = provider.runtime.run('sample-vqe', options=options, inputs=inputs, callback=rt_job._callback)
rt_job._job = job
return rt_job
###Output
_____no_output_____
###Markdown
We can now execute our runtime program via this runner function:
###Code
job4 = vqe_runner(backend, H, optimizer_config={'maxiter': 15})
job4.result()
###Output
_____no_output_____
###Markdown
The interim results are now attached to the job's `interm_results` attribute and, as expected, we see that the length matches the number of iterations performed.
###Code
len(job4.interm_results)
###Output
_____no_output_____
###Markdown
ConclusionWe have demonstrated how to create, upload, and use a custom Qiskit Runtime program by creating our own VQE solver from scratch. This tutorial was meant to touch upon every aspect of the process for a real-world example. Within the current limitations of the Runtime environment, this example should enable readers to develop their own single-file runtime program. This program is also a good starting point for exploring additional flavours of VQE runtime. For example, it is straightforward to vary the number of shots per iteration, increasing shots as the number of iterations increases. Those looking to go deeper can consider implementing an [adaptive VQE](https://doi.org/10.1038/s41467-019-10988-2), where the ansatz is not fixed at initialization. Appendix AHere we code a simple simultaneous perturbation stochastic approximation (SPSA) optimizer for use on noisy quantum systems. Most optimizers do not handle fluctuating cost functions well, so this is a needed addition for executing on real quantum hardware.
###Code
import numpy as np
from scipy.optimize import OptimizeResult
def fmin_spsa(func, x0, args=(), maxiter=100,
a=1.0, alpha=0.602, c=1.0, gamma=0.101,
callback=None):
"""
Minimization of scalar function of one or more variables using simultaneous
perturbation stochastic approximation (SPSA).
Parameters:
func (callable): The objective function to be minimized.
``fun(x, *args) -> float``
where x is an 1-D array with shape (n,) and args is a
tuple of the fixed parameters needed to completely
specify the function.
x0 (ndarray): Initial guess. Array of real elements of size (n,),
where ‘n’ is the number of independent variables.
maxiter (int): Maximum number of iterations. The number of function
evaluations is twice as many. Optional.
a (float): SPSA gradient scaling parameter. Optional.
alpha (float): SPSA gradient scaling exponent. Optional.
c (float): SPSA step size scaling parameter. Optional.
gamma (float): SPSA step size scaling exponent. Optional.
callback (callable): Function that accepts the current parameter vector
as input.
Returns:
OptimizeResult: Solution in SciPy Optimization format.
Notes:
See the `SPSA homepage <https://www.jhuapl.edu/SPSA/>`_ for usage and
        additional extensions to the basic version implemented here.
"""
A = 0.01 * maxiter
x0 = np.asarray(x0)
x = x0
for kk in range(maxiter):
ak = a*(kk+1.0+A)**-alpha
ck = c*(kk+1.0)**-gamma
# Bernoulli distribution for randoms
deltak = 2*np.random.randint(2, size=x.shape[0])-1
grad = (func(x + ck*deltak, *args) - func(x - ck*deltak, *args))/(2*ck*deltak)
x -= ak*grad
if callback is not None:
callback(x)
return OptimizeResult(fun=func(x, *args), x=x, nit=maxiter, nfev=2*maxiter,
message='Optimization terminated successfully.',
success=True)
###Output
_____no_output_____
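###Markdown
As a quick usage check (a sketch, not part of the original tutorial), `fmin_spsa` should drive a simple noisy quadratic toward its minimum at the origin:
###Code
# Minimize a noisy quadratic with the SPSA routine defined above.
noisy_quadratic = lambda x: np.sum(x**2) + 0.01*np.random.randn()
res_spsa = fmin_spsa(noisy_quadratic, np.array([1.5, -2.0]), maxiter=200, a=0.2, c=0.1)
res_spsa.x  # should land reasonably close to [0, 0]
###Output
_____no_output_____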
###Markdown
Appendix BThis is a helper function that converts the Pauli operators in the strings that define the Hamiltonian operators into the appropriate measurements at the end of the circuits. For $X$ operators this involves adding an $H$ gate to the qubits to be measured, whereas a $Y$ operator needs $S^{\dagger}$ followed by an $H$. Other choices of Pauli operators require no additional gates prior to measurement.
###Code
def opstr_to_meas_circ(op_str):
"""Takes a list of operator strings and makes circuit with the correct post-rotations for measurements.
Parameters:
op_str (list): List of strings representing the operators needed for measurements.
Returns:
list: List of circuits for measurement post-rotations
"""
num_qubits = len(op_str[0])
circs = []
for op in op_str:
qc = QuantumCircuit(num_qubits)
for idx, item in enumerate(op):
if item == 'X':
qc.h(num_qubits-idx-1)
elif item == 'Y':
qc.sdg(num_qubits-idx-1)
qc.h(num_qubits-idx-1)
circs.append(qc)
return circs
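# Quick sanity check (not part of the original tutorial): for the operator string 'XY',
# the helper should add an H on the qubit measured in X and an Sdg followed by H on the
# qubit measured in Y.
opstr_to_meas_circ(['XY'])[0].draw()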
from qiskit.tools.jupyter import *
%qiskit_copyright
###Output
_____no_output_____
###Markdown
Creating Custom Programs for Qiskit RuntimePaul NationIBM Quantum Partners Technical Enablement TeamHere we will demonstrate how to create, upload, and use a custom Program for Qiskit Runtime. As the utility of the Runtime execution engine lies in its ability to execute many quantum circuits with low latencies, this tutorial will show how to create your own Variational Quantum Eigensolver (VQE) program from scratch. Prerequisites- You must have Qiskit 0.30+ installed.- You must have an IBM Quantum account with the ability to upload a runtime program. You have this ability if you belong to more than one provider. Current limitationsThe runtime execution engine currently has the following limitations that must be kept in mind:- The Docker images used by the runtime include only Qiskit and its dependencies, with few exceptions. One exception is the inclusion of the `mthree` measurement mitigation package.- For security reasons, the runtime cannot make internet calls outside of the environment.- Your runtime program name must not contain an underscore`_`, otherwise it will cause an error when you try to execute it.As Qiskit Runtime matures, these limitations will be removed. Simple VQEVQE is a hybrid quantum-classical optimization procedure that finds the lowest eigenstate and eigenenergy of a linear system defined by a given Hamiltonian of Pauli operators. For example, consider the following two-qubit Hamiltonian:$$H = A X_{1}\otimes X_{0} + A Y_{1}\otimes Y_{0} + A Z_{1}\otimes Z_{0},$$where $A$ is a numerical coefficient and the subscripts label the qubits on which the operators act. The zero index being farthest right is the ordering used in Qiskit. The Pauli operators tell us which measurement basis to use when measuring each of the qubits.We want to find the ground state (lowest energy state) of this Hamiltonian, and the associated eigenvector. To do this we must start at a given initial state and iteratively vary the parameters that define this state using a classical optimizer, such that the computed energies of subsequent steps are nominally lower than those previously. The parameterized state of the system is defined by an ansatz quantum circuit that should have non-zero support in the direction of the ground state. Because in general we do not know the solution, the choice of ansatz circuit can be highly problem-specific with a form dictated by additional information. For further information about variational algorithms, we point the reader to [Nature Reviews Physics volume 3, 625 (2021)](https://doi.org/10.1038/s42254-021-00348-9).Thus we need at least the following inputs to create our VQE quantum program:1. A representation of the Hamiltonian that specifies the problem.2. A choice of parameterized ansatz circuit, and the ability to pass configuration options, if any.However, the following are also beneficial inputs that users might want to have:3. Add the ability to pass an initial state.4. Vary the number of shots that are taken.5. Ability to select which classical optimizer is used, and set configuration values, if any. 6. Ability to turn on and off measurement mitigation. Specifying the form of the input valuesAll inputs to runtime programs must be serializable objects. That is to say, whatever you pass into a runtime program must be able to be converted to JSON format.
It is thus beneficial to keep inputs limited to basic data types and structures unless you have experience with custom object serialization, or they are common Qiskit types such as ``QuantumCircuit`` etc. that the built-in `RuntimeEncoder` can handle. Fortunately, the VQE program described above can be made out of simple Python components.First, it is possible to represent any Hamiltonian using a list of values with each containing the numerical coefficient for each term and the string representation for the Pauli operators. For the above example, the ground state energy with $A=1$ is $-3$ and we can write it as:
###Code
H = [(1, 'XX'), (1, 'YY'), (1, 'ZZ')]
###Output
_____no_output_____
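###Markdown
As a quick sanity check (a sketch using plain NumPy, not part of the runtime program itself), we can build the 4x4 matrix for this small Hamiltonian and confirm that its lowest eigenvalue is indeed $-3$:
###Code
# Build the Hamiltonian matrix from the (coefficient, Pauli-string) pairs in H
# and diagonalize it classically. This only works for very small systems.
import numpy as np
paulis = {'I': np.eye(2), 'X': np.array([[0, 1], [1, 0]]),
          'Y': np.array([[0, -1j], [1j, 0]]), 'Z': np.array([[1, 0], [0, -1]])}
H_mat = sum(coeff * np.kron(paulis[label[0]], paulis[label[1]])
            for coeff, label in H)
print(np.linalg.eigvalsh(H_mat).min())  # expected: -3.0
###Output
_____no_output_____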
###Markdown
Next we have to provide the ability to specify the parameterized ansatz circuit. Here we will take advantage of the fact that many ansatz circuits are pre-defined in the Qiskit Circuit Library. Examples can be found in the [N-local circuits section](https://qiskit.org/documentation/apidoc/circuit_library.htmln-local-circuits).We would like the user to be able to select between ansatz options such as: `NLocal`, `TwoLocal`, and `EfficientSU2`. We could have the user pass the whole ansatz circuit to the program; however, in order to reduce the size of the upload we will pass the ansatz by name. In the runtime program, we can take this name and get the class that it corresponds to from the library using, for example,
###Code
import qiskit.circuit.library.n_local as lib_local
ansatz = getattr(lib_local, 'EfficientSU2')
###Output
_____no_output_____
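###Markdown
For example (a sketch), instantiating the retrieved class for two qubits lets us inspect how many free parameters the optimizer will eventually have to vary:
###Code
# Hypothetical quick check: build a two-qubit EfficientSU2 ansatz and count its
# free parameters; `reps` is one of the options a user could pass via ansatz_config.
example_ansatz = ansatz(2, reps=1)
print(example_ansatz.num_parameters)
###Output
_____no_output_____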
###Markdown
For the ansatz configuration, we will pass a simple `dict` of values. Optionals - If we want to add the ability to pass an initial state, then we will need to add the ability to pass a 1D list or NumPy array. Because the number of parameters depends on the ansatz and its configuration, the user would have to know which ansatz they are using ahead of time.- Selecting a number of shots requires simply passing an integer value.- Here we will allow selecting a classical optimizer by name from those in SciPy, and a `dict` of configuration parameters. Note that for execution on an actual system, the noise inherent in today's quantum systems makes having a stochastic optimizer crucial to success. SciPy does not offer such an optimizer, and the one built into Qiskit is wrapped in such a manner as to make it difficult to use elsewhere. As such, here we will use an SPSA optimizer written to match the style of those in SciPy. This function is given in [Appendix A](Appendix-A). - Finally, for measurement error mitigation we can simply pass a boolean (True/False) value. Main programWe are now in a position to start building our main program. However, before doing so we point out that it makes the code cleaner to make a separate function that takes the strings of Pauli operators that define our Hamiltonian and converts them into a list of circuits with single-qubit gates that change the measurement basis for each qubit, if needed. This function is given in [Appendix B](Appendix-B). Required signatureEvery runtime program is defined via the `main` function, and must have the following input signature:```main(backend, user_messenger, *args, **kwargs)```where `backend` is the backend that the program is to be executed on, and `user_messenger` is the object by which interim (and possibly final) results are communicated back to the user. After these two items, we add our program-specific arguments and keyword arguments. The main VQE programHere is the main program for our sample VQE. What each element of the function does is written in the comments before the element appears.
###Code
# Grab functions and modules from dependencies
import numpy as np
import scipy.optimize as opt
from scipy.optimize import OptimizeResult
import mthree
# Grab functions and modules from Qiskit needed
from qiskit import QuantumCircuit, transpile
import qiskit.circuit.library.n_local as lib_local
# The entrypoint for our Runtime Program
def main(backend, user_messenger,
hamiltonian,
ansatz='EfficientSU2',
ansatz_config={},
x0=None,
optimizer='SPSA',
optimizer_config={'maxiter': 100},
shots = 8192,
use_measurement_mitigation=False
):
"""
The main sample VQE program.
Parameters:
backend (ProgramBackend): Qiskit backend instance.
user_messenger (UserMessenger): Used to communicate with the
program user.
hamiltonian (list): Hamiltonian whose ground state we want to find.
ansatz (str): Optional, name of ansatz quantum circuit to use,
default='EfficientSU2'
ansatz_config (dict): Optional, configuration parameters for the
ansatz circuit.
x0 (array_like): Optional, initial vector of parameters.
optimizer (str): Optional, string specifying classical optimizer,
default='SPSA'.
optimizer_config (dict): Optional, configuration parameters for the
optimizer.
shots (int): Optional, number of shots to take per circuit.
use_measurement_mitigation (bool): Optional, use measurement mitigation,
default=False.
Returns:
OptimizeResult: The result in SciPy optimization format.
"""
# Split the Hamiltonian into two arrays, one for coefficients, the other for
# operator strings
coeffs = np.array([item[0] for item in hamiltonian], dtype=complex)
op_strings = [item[1] for item in hamiltonian]
# The number of qubits needed is given by the number of elements in the strings
# that define the Hamiltonian. Here we grab this data from the first element.
num_qubits = len(op_strings[0])
# We grab the requested ansatz circuit class from the Qiskit circuit library
# n_local module and configure it using the number of qubits and options
# passed in the ansatz_config.
ansatz_instance = getattr(lib_local, ansatz)
ansatz_circuit = ansatz_instance(num_qubits, **ansatz_config)
# Here we use our convenience function from Appendix B to get measurement circuits
# with the correct single-qubit rotation gates.
meas_circs = opstr_to_meas_circ(op_strings)
# When computing the expectation value for the energy, we need to know if we
# evaluate a Z measurement or an identity measurement. Here we take any X and Y
# operators in the strings and convert them to Z since we added the rotations
# with the meas_circs.
meas_strings = [string.replace('X', 'Z').replace('Y', 'Z') for string in op_strings]
# Take the ansatz circuits, add the single-qubit measurement basis rotations from
# meas_circs, and finally append the measurements themselves.
full_circs = [ansatz_circuit.compose(mcirc).measure_all(inplace=False) for mcirc in meas_circs]
# Get the number of parameters in the ansatz circuit.
num_params = ansatz_circuit.num_parameters
# Use a given initial state, if any, or do random initial state.
if x0:
x0 = np.asarray(x0, dtype=float)
if x0.shape[0] != num_params:
raise ValueError('Number of params in x0 ({}) does not match number \
of ansatz parameters ({})'. format(x0.shape[0],
num_params))
else:
x0 = 2*np.pi*np.random.rand(num_params)
# Because we are in general targeting a real quantum system, our circuits must be transpiled
# to match the system topology and, hopefully, optimize them.
# Here we will set the transpiler to the most optimal settings where 'sabre' layout and
# routing are used, along with full O3 optimization.
# This works around a bug in Qiskit where Sabre routing fails for simulators (Issue #7098)
trans_dict = {}
if not backend.configuration().simulator:
trans_dict = {'layout_method': 'sabre', 'routing_method': 'sabre'}
trans_circs = transpile(full_circs, backend, optimization_level=3, **trans_dict)
# If using measurement mitigation we need to find out which physical qubits our transpiled
# circuits actually measure, construct a mitigation object targeting our backend, and
# finally calibrate our mitigation by running calibration circuits on the backend.
if use_measurement_mitigation:
maps = mthree.utils.final_measurement_mapping(trans_circs)
mit = mthree.M3Mitigation(backend)
mit.cals_from_system(maps)
# Here we define a callback function that will stream the optimizer parameter vector
# back to the user after each iteration. This uses the `user_messenger` object.
# Here we convert to a list so that the return is user readable locally, but
# this is not required.
def callback(xk):
user_messenger.publish(list(xk))
# This is the primary VQE function executed by the optimizer. This function takes the
# parameter vector as input and returns the energy evaluated using an ansatz circuit
# bound with those parameters.
def vqe_func(params):
# Attach (bind) parameters in params vector to the transpiled circuits.
bound_circs = [circ.bind_parameters(params) for circ in trans_circs]
# Submit the job and get the resultant counts back
counts = backend.run(bound_circs, shots=shots).result().get_counts()
# If using measurement mitigation apply the correction and
# compute expectation values from the resultant quasiprobabilities
# using the measurement strings.
if use_measurement_mitigation:
quasi_collection = mit.apply_correction(counts, maps)
expvals = quasi_collection.expval(meas_strings)
# If not doing any mitigation just compute expectation values
# from the raw counts using the measurement strings.
# Since Qiskit does not have such functionality we use the convenience
# function from the mthree mitigation module.
else:
expvals = mthree.utils.expval(counts, meas_strings)
# The energy is computed by simply taking the product of the coefficients
# and the computed expectation values and summing them. Here we also
# take just the real part as the coefficients can possibly be complex,
# but the energy (eigenvalue) of a Hamiltonian is always real.
energy = np.sum(coeffs*expvals).real
return energy
# Here is where we actually perform the computation. We begin by seeing what
# optimization routine the user has requested, e.g. SPSA versus the SciPy ones,
# and dispatch to the correct optimizer. The selected optimizer starts at
# x0 and calls 'vqe_func' every time the optimizer needs to evaluate the cost
# function. The result is returned as a SciPy OptimizerResult object.
# Additionally, after every iteration, we use the 'callback' function to
# publish the interim results back to the user. This is important to do
# so that if the Program terminates unexpectedly, the user can start where they
# left off.
# Since SPSA is not in SciPy need if statement
if optimizer == 'SPSA':
res = fmin_spsa(vqe_func, x0, args=(), **optimizer_config,
callback=callback)
# All other SciPy optimizers here
else:
res = opt.minimize(vqe_func, x0, method=optimizer,
options=optimizer_config, callback=callback)
# Return result. OptimizeResult is a subclass of dict.
return res
###Output
_____no_output_____
###Markdown
Local testingImportant: You need to execute the code blocks in Appendices A and B before continuing.We can test whether our routine works by simply calling the `main` function with a backend instance, a `UserMessenger`, and sample arguments.
###Code
from qiskit.providers.ibmq.runtime import UserMessenger
msg = UserMessenger()
# Use the local Aer simulator
from qiskit import Aer
backend = Aer.get_backend('qasm_simulator')
# Execute the main routine for our simple two-qubit Hamiltonian H, and perform 5 iterations of the SPSA solver.
main(backend, msg, H, optimizer_config={'maxiter': 5})
###Output
[1.419780432710152, 2.3984284215892018, 1.1306533554149105, 1.8357672762510684, 5.414120644000338, 6.107301966755861, -0.013391355872252708, 5.615586607539193, 4.211781149943555, 1.792388243059789, 4.203949657158362, 0.1038271369149637, 2.4220098073658884, 4.617958787629208, 2.9969591661895865, 1.5490655190231735]
[2.1084925021737537, 3.0871404910528035, 0.4419412859513089, 2.52447934571467, 4.725408574536736, 5.418589897292259, -0.7021034253358543, 6.3042986770027944, 3.523069080479953, 1.1036761735961873, 3.5152375876947604, 0.7925392063785653, 3.11072187682949, 5.30667085709281, 3.685671235653188, 0.8603534495595718]
[1.7365578685005831, 3.459075124725974, 0.8138759196244794, 2.8964139793878405, 4.353473940863566, 5.046655263619089, -1.0740380590090248, 5.932364043329624, 3.1511344468067826, 1.475610807269358, 3.8871722213679307, 1.1644738400517358, 2.73878724315632, 4.934736223419639, 4.057605869326359, 1.2322880832327423]
[1.7839871181735734, 3.4116458750529834, 0.766446669951489, 2.84898472971485, 4.306044691190576, 5.094084513292079, -1.0266088093360346, 5.884934793656634, 3.198563696479773, 1.5230400569423481, 3.8397429716949403, 1.1170445903787456, 2.6913579934833294, 4.887306973746649, 4.105035118999349, 1.2797173329057325]
[1.122687940285629, 4.072945052940928, 1.4277458478394336, 2.1876855518269056, 3.6447455133026314, 5.755383691180024, -1.687907987223979, 6.546233971544579, 2.5372645185918286, 2.1843392348302926, 4.501042149582885, 1.7783437682666903, 3.352657171371274, 4.226007795858704, 4.766334296887294, 0.618418155017788]
###Markdown
Having executed the above, we see that there are 5 parameter arrays returned, one for each callback, along with the final optimization result. The parameter arrays are the interim results, and the `UserMessenger` object prints these values to the cell output. The output itself is the answer we obtained, expressed as a SciPy `OptimizeResult` object. Program metadataProgram metadata is essentially the docstring for a runtime program. It describes overall program information such as the program `name`, `description`, and the `max_execution_time` the program is allowed to run, as well as detailing the inputs and outputs the program expects. At a bare minimum, the values described above are required.
###Code
meta = {
"name": "sample-vqe",
"description": "A sample VQE program.",
"max_execution_time": 100000,
"spec": {}
}
###Output
_____no_output_____
###Markdown
It is important to set the `max_execution_time` high enough so that your program does not get terminated unexpectedly. Additionally, one should make sure that interim results are sent back to the user so that, if something does happen, the user can start where they left off.It is, however, good form to detail the parameters and return types, as well as interim results. That being said, if making a runtime intended to be used by others, this information would also likely be mirrored in the signature of a function or class that the user would interact with directly; end users should not directly call runtime programs. We will see why below. Nevertheless, let us add to our metadata. First, the `parameters` section details the inputs the user is able to pass:
###Code
meta["spec"]["parameters"] = {
"$schema": "https://json-schema.org/draft/2019-09/schema",
"properties": {
"hamiltonian": {
"description": "Hamiltonian whose ground state we want to find.",
"type": "array"
},
"ansatz": {
"description": "Name of ansatz quantum circuit to use, default='EfficientSU2'",
"type": "string",
"default": "EfficientSU2"
},
"ansatz_config": {
"description": "Configuration parameters for the ansatz circuit.",
"type": "object"
},
"optimizer": {
"description": "Classical optimizer to use, default='SPSA'.",
"type": "string",
"default": "SPSA"
},
"x0": {
"description": "Initial vector of parameters. This is a numpy array.",
"type": "array"
},
"optimizer_config": {
"description": "Configuration parameters for the optimizer.",
"type": "object"
},
"shots": {
"description": "The number of shots used for each circuit evaluation.",
"type": "integer"
},
"use_measurement_mitigation": {
"description": "Use measurement mitigation, default=False.",
"type": "boolean",
"default": False
}
},
"required": [
"hamiltonian"
]
}
###Output
_____no_output_____
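###Markdown
Because the `parameters` block above is an ordinary JSON Schema, it can also be used client-side. As a sketch (assuming the third-party `jsonschema` package is installed locally), we can validate a candidate set of inputs against it before ever submitting a job:
###Code
# Hypothetical local validation of user inputs against the schema defined above;
# validate() raises a ValidationError if the inputs do not conform.
from jsonschema import validate
candidate_inputs = {"hamiltonian": [[1, "XX"], [1, "YY"], [1, "ZZ"]]}
validate(instance=candidate_inputs, schema=meta["spec"]["parameters"])
###Output
_____no_output_____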
###Markdown
Next, the `return_values` section tells about the return types:
###Code
meta["spec"]["return_values"] = {
"$schema": "https://json-schema.org/draft/2019-09/schema",
"description": "Final result in SciPy optimizer format",
"type": "object"
}
###Output
_____no_output_____
###Markdown
and finally let us specify what comes back when an interim result is returned:
###Code
meta["spec"]["interim_results"] = {
"$schema": "https://json-schema.org/draft/2019-09/schema",
"description": "Parameter vector at current optimization step. This is a numpy array.",
"type": "array"
}
###Output
_____no_output_____
###Markdown
Uploading the programWe now have all the ingredients needed to upload our program. To do so we need to collect all of our code in one file, here called `sample_vqe.py` for uploading. This limitation will be removed in later versions of Qiskit Runtime. Alternatively, if the entire code is contained within a single Jupyter notebook cell, this can be done using the magic function```%%writefile my_program.py```To actually upload the program we need to get a Provider from our IBM Quantum account:
###Code
from qiskit import IBMQ
IBMQ.load_account()
provider = IBMQ.get_provider(group='deployed')
###Output
_____no_output_____
###Markdown
Program uploadThe call to `upload_program()` takes the target Python file as `data` and the metadata as inputs.
###Code
program_id = provider.runtime.upload_program(data='sample_vqe.py', metadata=meta)
program_id
###Output
_____no_output_____
###Markdown
Here the `upload_program()` method returns a `program_id`, which is how you should reference your program. Program informationWe can query the program for information and see that our metadata is correctly being attached:
###Code
prog = provider.runtime.program(program_id)
print(prog)
###Output
sample-vqe-G3YBjmvlPr:
Name: sample-vqe
Description: A sample VQE program.
Creation date: 2021-11-10T17:10:18.903742Z
Update date: 2021-11-10T17:10:18.903742Z
Max execution time: 100000
Input parameters:
Properties:
- ansatz:
Default: EfficientSU2
Description: Name of ansatz quantum circuit to use, default='EfficientSU2'
Type: string
Required: False
- ansatz_config:
Description: Configuration parameters for the ansatz circuit.
Type: object
Required: False
- hamiltonian:
Description: Hamiltonian whose ground state we want to find.
Type: array
Required: True
- optimizer:
Default: SPSA
Description: Classical optimizer to use, default='SPSA'.
Type: string
Required: False
- optimizer_config:
Description: Configuration parameters for the optimizer.
Type: object
Required: False
- shots:
Description: The number of shots used for each circuit evaluation.
Type: integer
Required: False
- use_measurement_mitigation:
Default: False
Description: Use measurement mitigation, default=False.
Type: boolean
Required: False
- x0:
Description: Initial vector of parameters. This is a numpy array.
Type: array
Required: False
Interim results:
Description: Parameter vector at current optimization step. This is a numpy array.
Type: array
Returns:
Description: Final result in SciPy optimizer format
Type: object
###Markdown
Deleting a programIf you make a mistake and need to delete and/or re-upload the program, you can run the following, passing the `program_id`:
###Code
#provider.runtime.delete_program(program_id)
###Output
_____no_output_____
###Markdown
Running the program Specify parametersTo run the program we need to specify the `options` that are used in the runtime environment (not the program variables). At present, only the `backend_name` is required.
###Code
backend = provider.backend.ibmq_qasm_simulator
options = {'backend_name': backend.name()}
###Output
_____no_output_____
###Markdown
The `inputs` dictionary is used to pass arguments to the `main` function itself. For example:
###Code
inputs = {}
inputs['hamiltonian'] = H
inputs['optimizer_config']={'maxiter': 10}
###Output
_____no_output_____
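###Markdown
Since everything sent to the runtime must be JSON serializable, it can be useful (a sketch) to round-trip the inputs through the same encoder the service uses before submitting:
###Code
# Optional sanity check: serialize the inputs with RuntimeEncoder and decode them
# again. Any non-serializable entry would raise a TypeError here rather than
# after the job has been queued.
import json
from qiskit.providers.ibmq.runtime import RuntimeEncoder, RuntimeDecoder
round_trip = json.loads(json.dumps(inputs, cls=RuntimeEncoder), cls=RuntimeDecoder)
###Output
_____no_output_____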
###Markdown
Execute the programWe now can execute the program and grab the result.
###Code
job = provider.runtime.run(program_id, options=options, inputs=inputs)
job.result()
###Output
_____no_output_____
###Markdown
A few things need to be pointed out. First, we did not get back any interim results, and second, the return object is a plain dictionary. This is because we did not listen for the return results, and we did not tell the job how to format the return result. Listening for interim resultsTo listen for interim results we need to pass a callback function to `provider.runtime.run` that stores the results. The callback takes two arguments, the `job_id` and the returned data:
###Code
interm_results = []
def vqe_callback(job_id, data):
interm_results.append(data)
###Output
_____no_output_____
###Markdown
Executing again we get:
###Code
job2 = provider.runtime.run(program_id, options=options, inputs=inputs, callback=vqe_callback)
job2.result()
print(interm_results)
###Output
[[1.1839280526666394, 2.391820224610454, 2.7491281736833244, 0.5771768054969294, 2.349087960882593, 0.20251406828095217, 5.3527505036344865, 1.80726551800796, 2.8686317344166947, 2.4545878612072003, -0.04047464122825306, 4.2780676963333795, 3.27599724292225, 3.5527489679560844, 2.1472927005219273, 3.1637626657075555], [1.1855194978035488, 2.3902287794735444, 2.750719618820234, 0.5755853603600198, 2.3506794060195024, 0.20092262314404263, 5.351159058497577, 1.8088569631448694, 2.870223179553604, 2.452996416070291, -0.04206608636516258, 4.27647625119647, 3.2775886880591596, 3.554340413092994, 2.148884145658837, 3.165354110844465], [1.0411904999135912, 2.534557777363502, 2.8950486167101914, 0.7199143582499773, 2.206350408129545, 0.05659362525408518, 5.206830060607619, 1.664527965254912, 3.0145521774435617, 2.5973254139602484, 0.10226291152479487, 4.420805249086427, 3.133259690169202, 3.6986694109829514, 2.004555147768879, 3.0210251129545074], [1.005580093753927, 2.5701681835231662, 2.9306590228698557, 0.7555247644096416, 2.241960814289209, 0.020983219094420913, 5.242440466767284, 1.7001383714145764, 3.050162583603226, 2.561715007800584, 0.13787331768445915, 4.456415655246091, 3.0976492840095378, 3.663059004823287, 2.0401655539285435, 3.0566355191141716], [1.07047876838977, 2.6350668581590093, 2.8657603482340126, 0.8204234390454845, 2.177062139653366, 0.08588189373026392, 5.307339141403126, 1.6352396967787333, 2.985263908967383, 2.496816333164741, 0.20277199232030216, 4.521314329881934, 3.162547958645381, 3.7279576794591303, 1.9752668792927004, 2.9917368444783285], [1.3994411335364108, 2.96402922330565, 3.1947227133806533, 0.4914610738988439, 2.5060245048000067, -0.2430804714163767, 5.636301506549767, 1.3062773316320926, 3.3142262741140236, 2.8257786983113817, -0.12619037282633846, 4.192351964735293, 3.4915103237920215, 3.3989953143124896, 2.304229244439341, 3.3206992096249692], [1.325020213130704, 3.0384501437113567, 3.1203017929749466, 0.5658819943045507, 2.5804454252057134, -0.16865955101066996, 5.710722426955474, 1.231856411226386, 3.3886471945197303, 2.751357777905675, -0.2006112932320452, 4.117931044329586, 3.417089403386315, 3.4734162347181963, 2.2298083240336344, 3.395120130030676], [1.031941029864989, 2.7453709604456416, 2.8272226097092314, 0.2728028110388356, 2.2873662419399983, 0.12441963225504513, 6.003801610221189, 1.524935594492101, 3.6817263777854454, 2.45827859463996, 0.09246789003366987, 3.8248518610638707, 3.71016858665203, 3.7664954179839114, 1.9367291407679192, 3.102040946764961], [1.4127118235825624, 3.126141754163215, 2.446451815991658, -0.10796798267873797, 1.9065954482224248, 0.5051904259726187, 5.623030816503616, 1.1441648007745275, 4.062497171503019, 2.8390493883575334, 0.47323868375124345, 3.444081067346297, 4.090939380369604, 4.147266211701485, 1.5559583470503457, 3.4828117404825343], [1.3962500340466297, 3.1096799646272824, 2.4629136055275906, -0.09150619314280523, 1.890133658686492, 0.4887286364366859, 5.606569026967683, 1.1277030112385948, 4.046035381967086, 2.855511177893466, 0.4567768942153107, 3.46054285688223, 4.107401169905537, 4.163728001237418, 1.539496557514413, 3.4663499509466016]]
###Markdown
Formatting the returned resultsIn order to format the return results into the desired format, we need to specify a decoder. This decoder must have a `decode` method that gets called to do the actual conversion. In our case `OptimizeResult` is a simple sub-class of `dict` so the formatting is simple.
###Code
from qiskit.providers.ibmq.runtime import ResultDecoder
from scipy.optimize import OptimizeResult
class VQEResultDecoder(ResultDecoder):
@classmethod
def decode(cls, data):
data = super().decode(data) # This is required to preformat the data returned.
return OptimizeResult(data)
###Output
_____no_output_____
###Markdown
We can then use this when returning the job result:
###Code
job3 = provider.runtime.run(program_id, options=options, inputs=inputs)
job3.result(decoder=VQEResultDecoder)
###Output
_____no_output_____
###Markdown
Simplifying program execution with wrapping functionsWhile runtime programs are powerful and flexible, they are not the most friendly things to interact with. Therefore, if your program is intended to be used by others, it is best to make wrapper functions and/or classes that simplify the user experience. Moreover, such wrappers allow for validation of user inputs on the client side, which can quickly find errors that would otherwise be raised later during the execution process - something that might have taken hours waiting in queue to get to.Here we will make two helper routines. First, a job wrapper that allows us to attach and retrieve the interim results directly from the job object itself, as well as decodes the results for us so that the end user need not worry about formatting them.
###Code
class RuntimeJobWrapper():
"""A simple Job wrapper that attaches interm results directly to the job object itself
in the `interm_results attribute` via the `_callback` function.
"""
def __init__(self):
self._job = None
self._decoder = VQEResultDecoder
self.interm_results = []
def _callback(self, job_id, xk):
"""The callback function that attaches interm results:
Parameters:
job_id (str): The job ID.
xk (array_like): A list or NumPy array to attach.
"""
self.interm_results.append(xk)
def __getattr__(self, attr):
if attr == 'result':
return self.result
else:
if attr in dir(self._job):
return getattr(self._job, attr)
raise AttributeError("Class does not have {}.".format(attr))
def result(self):
"""Get the result of the job as a SciPy OptimizerResult object.
This blocks until job is done, cancelled, or errors.
Returns:
OptimizerResult: A SciPy optimizer result object.
"""
return self._job.result(decoder=self._decoder)
###Output
_____no_output_____
###Markdown
Next, we create the actual function we want users to call to execute our program. To this function we will add a series of simple validation checks (for simplicity, not every possible check is performed), as well as use the job wrapper defined above to simplify the output.
###Code
import qiskit.circuit.library.n_local as lib_local
def vqe_runner(backend, hamiltonian,
ansatz='EfficientSU2', ansatz_config={},
x0=None, optimizer='SPSA',
optimizer_config={'maxiter': 100},
shots = 8192,
use_measurement_mitigation=False):
"""Routine that executes a given VQE problem via the sample-vqe program on the target backend.
Parameters:
backend (ProgramBackend): Qiskit backend instance.
hamiltonian (list): Hamiltonian whose ground state we want to find.
ansatz (str): Optional, name of ansatz quantum circuit to use, default='EfficientSU2'
ansatz_config (dict): Optional, configuration parameters for the ansatz circuit.
x0 (array_like): Optional, initial vector of parameters.
optimizer (str): Optional, string specifying classical optimizer, default='SPSA'.
optimizer_config (dict): Optional, configuration parameters for the optimizer.
shots (int): Optional, number of shots to take per circuit.
use_measurement_mitigation (bool): Optional, use measurement mitigation, default=False.
Returns:
OptimizeResult: The result in SciPy optimization format.
"""
options = {'backend_name': backend.name()}
inputs = {}
# Validate Hamiltonian is correct
num_qubits = len(hamiltonian[0][1])
for idx, ham in enumerate(hamiltonian):
if len(ham[1]) != num_qubits:
raise ValueError('Number of qubits in Hamiltonian term {} does not match {}'.format(idx,
num_qubits))
inputs['hamiltonian'] = hamiltonian
# Validate ansatz is in the module
ansatz_circ = getattr(lib_local, ansatz, None)
if not ansatz_circ:
raise ValueError('Ansatz {} not in n_local circuit library.'.format(ansatz))
inputs['ansatz'] = ansatz
inputs['ansatz_config'] = ansatz_config
# If given x0, validate its length against num_params in ansatz:
if x0:
x0 = np.asarray(x0)
ansatz_circ = ansatz_circ(num_qubits, **ansatz_config)
num_params = ansatz_circ.num_parameters
if x0.shape[0] != num_params:
raise ValueError('Length of x0 {} does not match number of params in ansatz {}'.format(x0.shape[0],
num_params))
inputs['x0'] = x0
# Set the rest of the inputs
inputs['optimizer'] = optimizer
inputs['optimizer_config'] = optimizer_config
inputs['shots'] = shots
inputs['use_measurement_mitigation'] = use_measurement_mitigation
rt_job = RuntimeJobWrapper()
job = provider.runtime.run(program_id, options=options, inputs=inputs, callback=rt_job._callback)
rt_job._job = job
return rt_job
###Output
_____no_output_____
###Markdown
We can now execute our runtime program via this runner function:
###Code
job4 = vqe_runner(backend, H, optimizer_config={'maxiter': 15})
job4.result()
###Output
_____no_output_____
###Markdown
The interim results are now attached to the job's `interm_results` attribute and, as expected, we see that the length matches the number of iterations performed.
###Code
len(job4.interm_results)
###Output
_____no_output_____
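###Markdown
As an optional post-processing step (a sketch), the stored interim parameter vectors can be inspected directly, for example to see how much the parameters move between iterations:
###Code
# Hypothetical inspection: print the first few interim parameter vectors,
# rounded for readability. Uses the numpy import from the main program cell.
for step, params in enumerate(job4.interm_results[:3]):
    print(step, np.round(params, 3))
###Output
_____no_output_____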
###Markdown
ConclusionWe have demonstrated how to create, upload, and use a custom Qiskit Runtime by creating our own VQE solver from scratch. This tutorial was meant to touch upon every aspect of the process for a real-world example. Within the current limitations of the runtime environment, this example should enable readers to develop their own single-file runtime program. This program is also a good starting point for exploring additional flavors of VQE runtime. For example, it is straightforward to vary the number of shots per iteration, increasing shots as the number of iterations increases. Those looking to go deeper can consider implementing an [adaptive VQE](https://doi.org/10.1038/s41467-019-10988-2), where the ansatz is not fixed at initialization. Appendix AHere we code a simple simultaneous perturbation stochastic approximation (SPSA) optimizer for use on noisy quantum systems. Most optimizers do not handle fluctuating cost functions well, so this is a needed addition for executing on real quantum hardware.
###Code
import numpy as np
from scipy.optimize import OptimizeResult
def fmin_spsa(func, x0, args=(), maxiter=100,
a=1.0, alpha=0.602, c=1.0, gamma=0.101,
callback=None):
"""
Minimization of scalar function of one or more variables using simultaneous
perturbation stochastic approximation (SPSA).
Parameters:
func (callable): The objective function to be minimized.
``fun(x, *args) -> float``
where x is an 1-D array with shape (n,) and args is a
tuple of the fixed parameters needed to completely
specify the function.
x0 (ndarray): Initial guess. Array of real elements of size (n,),
where ‘n’ is the number of independent variables.
maxiter (int): Maximum number of iterations. The number of function
evaluations is twice as many. Optional.
a (float): SPSA gradient scaling parameter. Optional.
alpha (float): SPSA gradient scaling exponent. Optional.
c (float): SPSA step size scaling parameter. Optional.
gamma (float): SPSA step size scaling exponent. Optional.
callback (callable): Function that accepts the current parameter vector
as input.
Returns:
OptimizeResult: Solution in SciPy Optimization format.
Notes:
See the `SPSA homepage <https://www.jhuapl.edu/SPSA/>`_ for usage and
additional extensions to the basic version implemented here.
"""
A = 0.01 * maxiter
# Work on a float copy so the caller's x0 array is not modified in place.
x = np.array(x0, dtype=float)
for kk in range(maxiter):
ak = a*(kk+1.0+A)**-alpha
ck = c*(kk+1.0)**-gamma
# Bernoulli distribution for randoms
deltak = 2*np.random.randint(2, size=x.shape[0])-1
grad = (func(x + ck*deltak, *args) - func(x - ck*deltak, *args))/(2*ck*deltak)
x -= ak*grad
if callback is not None:
callback(x)
return OptimizeResult(fun=func(x, *args), x=x, nit=maxiter, nfev=2*maxiter,
message='Optimization terminated successfully.',
success=True)
###Output
_____no_output_____
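###Markdown
As a quick standalone check of the optimizer (a sketch that does not involve any quantum backend), we can minimize a simple quadratic function whose minimum sits at the origin:
###Code
# Minimize f(x) = sum(x**2) with SPSA; the result should approach the origin.
res_test = fmin_spsa(lambda x: np.sum(x**2), x0=np.array([1.5, -2.0]), maxiter=200)
print(res_test.x, res_test.fun)
###Output
_____no_output_____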
###Markdown
Appendix BThis is a helper function that converts the Pauli operators in the strings defining the Hamiltonian into the appropriate measurement basis rotations at the end of the circuits. For $X$ operators this involves adding an $H$ gate to the qubits to be measured, whereas a $Y$ operator needs an $S^{\dagger}$ followed by an $H$. Other choices of Pauli operators require no additional gates prior to measurement.
###Code
def opstr_to_meas_circ(op_str):
"""Takes a list of operator strings and makes circuit with the correct post-rotations for measurements.
Parameters:
op_str (list): List of strings representing the operators needed for measurements.
Returns:
list: List of circuits for measurement post-rotations
"""
num_qubits = len(op_str[0])
circs = []
for op in op_str:
qc = QuantumCircuit(num_qubits)
for idx, item in enumerate(op):
if item == 'X':
qc.h(num_qubits-idx-1)
elif item == 'Y':
qc.sdg(num_qubits-idx-1)
qc.h(num_qubits-idx-1)
circs.append(qc)
return circs
from qiskit.tools.jupyter import *
%qiskit_copyright
###Output
_____no_output_____
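###Markdown
For example (a sketch), applying this helper to the three operator strings of our Hamiltonian produces one small pre-measurement rotation circuit per term; the `ZZ` term needs no extra gates at all:
###Code
# Hypothetical usage of the Appendix B helper on the Hamiltonian terms.
# Assumes QuantumCircuit was already imported in the main program cell.
for op, circ in zip(['XX', 'YY', 'ZZ'], opstr_to_meas_circ(['XX', 'YY', 'ZZ'])):
    print(op)
    print(circ)
###Output
_____no_output_____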
###Markdown
Creating Custom Programs for Qiskit RuntimePaul NationIBM Quantum Partners Technical Enablement TeamHere we will demonstrate how to create, upload, and use a custom Program for Qiskit Runtime. As the utility of the Runtime execution engine lies in its ability to execute many quantum circuits with low latencies, this tutorial will show how to create your own Variational Quantum Eigensolver (VQE) program from scratch. Prerequisites- You must have Qiskit 0.30+ installed.- You must have an IBM Quantum account with the ability to upload a runtime program. You have this ability if you belong to more than one provider. Current limitationsThe runtime execution engine currently has the following limitations that must be kept in mind:- The Docker images used by the runtime include only Qiskit and its dependencies, with few exceptions. One exception is the inclusion of the `mthree` measurement mitigation package.- For security reasons, the runtime cannot make internet calls outside of the environment.- Your runtime program name must not contain an underscore `_`, otherwise it will cause an error when you try to execute it.As Qiskit Runtime matures, these limitations will be removed. Simple VQEVQE is a hybrid quantum-classical optimization procedure that finds the lowest eigenstate and eigenenergy of a linear system defined by a given Hamiltonian of Pauli operators. For example, consider the following two-qubit Hamiltonian:$$H = A X_{1}\otimes X_{0} + A Y_{1}\otimes Y_{0} + A Z_{1}\otimes Z_{0},$$where $A$ is a numerical coefficient and the subscripts label the qubits on which the operators act. The zero index being farthest right is the ordering used in Qiskit. The Pauli operators tell us which measurement basis to use when measuring each of the qubits.We want to find the ground state (lowest energy state) of this Hamiltonian, and the associated eigenvector. To do this we must start at a given initial state and iteratively vary the parameters that define this state using a classical optimizer, such that the computed energies of subsequent steps are nominally lower than those computed previously. The parameterized state of the system is defined by an ansatz quantum circuit that should have non-zero support in the direction of the ground state. Because in general we do not know the solution, the choice of ansatz circuit can be highly problem-specific with a form dictated by additional information. For further information about variational algorithms, we point the reader to [Nature Reviews Physics volume 3, 625 (2021)](https://doi.org/10.1038/s42254-021-00348-9).Thus we need at least the following inputs to create our VQE quantum program:1. A representation of the Hamiltonian that specifies the problem.2. A choice of parameterized ansatz circuit, and the ability to pass configuration options, if any.However, the following are also beneficial inputs that users might want to have:3. The ability to pass an initial state.4. The ability to vary the number of shots that are taken.5. The ability to select which classical optimizer is used, and to set configuration values, if any. 6. The ability to turn measurement mitigation on and off. Specifying the form of the input valuesAll inputs to runtime programs must be serializable objects. That is to say, whatever you pass into a runtime program must be able to be converted to JSON format. 
It is thus beneficial to keep inputs limited to basic data types and structures unless you have experience with custom object serialization, or they are common Qiskit types such as ``QuantumCircuit`` that the built-in `RuntimeEncoder` can handle. Fortunately, the VQE program described above can be made out of simple Python components.First, it is possible to represent any Hamiltonian using a list of terms, each containing the numerical coefficient for the term and the string representation of its Pauli operators. For the above example, the ground state energy with $A=1$ is $-3$ and we can write it as:
###Code
H = [(1, 'XX'), (1, 'YY'), (1, 'ZZ')]
###Output
_____no_output_____
###Markdown
Next we have to provide the ability to specify the parameterized ansatz circuit. Here we will take advantage of the fact that many ansatz circuits are pre-defined in the Qiskit Circuit Library. Examples can be found in the [N-local circuits section](https://qiskit.org/documentation/apidoc/circuit_library.htmln-local-circuits).We would like the user to be able to select between ansatz options such as: `NLocal`, `TwoLocal`, and `EfficientSU2`. We could have the user pass the whole ansatz circuit to the program; however, in order to reduce the size of the upload we will pass the ansatz by name. In the runtime program, we can take this name and get the class that it corresponds to from the library using, for example,
###Code
import qiskit.circuit.library.n_local as lib_local
ansatz = getattr(lib_local, 'EfficientSU2')
###Output
_____no_output_____
###Markdown
For the ansatz configuration, we will pass a simple `dict` of values. Optionals - If we want to add the ability to pass an initial state, then we will need to add the ability to pass a 1D list or NumPy array. Because the number of parameters depends on the ansatz and its configuration, the user would have to know which ansatz they are using ahead of time.- Selecting a number of shots requires simply passing an integer value.- Here we will allow selecting a classical optimizer by name from those in SciPy, and a `dict` of configuration parameters. Note that for execution on an actual system, the noise inherent in today's quantum systems makes having a stochastic optimizer crucial to success. SciPy does not offer such an optimizer, and the one built into Qiskit is wrapped in such a manner as to make it difficult to use elsewhere. As such, here we will use an SPSA optimizer written to match the style of those in SciPy. This function is given in [Appendix A](Appendix-A). - Finally, for measurement error mitigation we can simply pass a boolean (True/False) value. Main programWe are now in a position to start building our main program. However, before doing so we point out that it makes the code cleaner to make a separate function that takes the strings of Pauli operators that define our Hamiltonian and converts them into a list of circuits with single-qubit gates that change the measurement basis for each qubit, if needed. This function is given in [Appendix B](Appendix-B). Required signatureEvery runtime program is defined via the `main` function, and must have the following input signature:```main(backend, user_messenger, *args, **kwargs)```where `backend` is the backend that the program is to be executed on, and `user_messenger` is the object by which interim (and possibly final) results are communicated back to the user. After these two items, we add our program-specific arguments and keyword arguments. The main VQE programHere is the main program for our sample VQE. What each element of the function does is written in the comments before the element appears.
###Code
# Grab functions and modules from dependencies
import numpy as np
import scipy.optimize as opt
from scipy.optimize import OptimizeResult
import mthree
# Grab functions and modules from Qiskit needed
from qiskit import QuantumCircuit, transpile
import qiskit.circuit.library.n_local as lib_local
# The entrypoint for our Runtime Program
def main(backend, user_messenger,
hamiltonian,
ansatz='EfficientSU2',
ansatz_config={},
x0=None,
optimizer='SPSA',
optimizer_config={'maxiter': 100},
shots = 8192,
use_measurement_mitigation=False
):
"""
The main sample VQE program.
Parameters:
backend (ProgramBackend): Qiskit backend instance.
user_messenger (UserMessenger): Used to communicate with the
program user.
hamiltonian (list): Hamiltonian whose ground state we want to find.
ansatz (str): Optional, name of ansatz quantum circuit to use,
default='EfficientSU2'
ansatz_config (dict): Optional, configuration parameters for the
ansatz circuit.
x0 (array_like): Optional, initial vector of parameters.
optimizer (str): Optional, string specifying classical optimizer,
default='SPSA'.
optimizer_config (dict): Optional, configuration parameters for the
optimizer.
shots (int): Optional, number of shots to take per circuit.
use_measurement_mitigation (bool): Optional, use measurement mitigation,
default=False.
Returns:
OptimizeResult: The result in SciPy optimization format.
"""
# Split the Hamiltonian into two arrays, one for coefficients, the other for
# operator strings
coeffs = np.array([item[0] for item in hamiltonian], dtype=complex)
op_strings = [item[1] for item in hamiltonian]
# The number of qubits needed is given by the number of elements in the strings
# that define the Hamiltonian. Here we grab this data from the first element.
num_qubits = len(op_strings[0])
# We grab the requested ansatz circuit class from the Qiskit circuit library
# n_local module and configure it using the number of qubits and options
# passed in the ansatz_config.
ansatz_instance = getattr(lib_local, ansatz)
ansatz_circuit = ansatz_instance(num_qubits, **ansatz_config)
# Here we use our convenience function from Appendix B to get measurement circuits
# with the correct single-qubit rotation gates.
meas_circs = opstr_to_meas_circ(op_strings)
# When computing the expectation value for the energy, we need to know if we
# evaluate a Z measurement or an identity measurement. Here we take any X and Y
# operators in the strings and convert them to Z since we added the rotations
# with the meas_circs.
meas_strings = [string.replace('X', 'Z').replace('Y', 'Z') for string in op_strings]
# Take the ansatz circuits, add the single-qubit measurement basis rotations from
# meas_circs, and finally append the measurements themselves.
full_circs = [ansatz_circuit.compose(mcirc).measure_all(inplace=False) for mcirc in meas_circs]
# Get the number of parameters in the ansatz circuit.
num_params = ansatz_circuit.num_parameters
# Use a given initial state, if any, or do random initial state.
if x0:
x0 = np.asarray(x0, dtype=float)
if x0.shape[0] != num_params:
raise ValueError('Number of params in x0 ({}) does not match number \
of ansatz parameters ({})'. format(x0.shape[0],
num_params))
else:
x0 = 2*np.pi*np.random.rand(num_params)
# Because we are in general targeting a real quantum system, our circuits must be transpiled
# to match the system topology and, hopefully, optimize them.
# Here we will set the transpiler to the most optimal settings where 'sabre' layout and
# routing are used, along with full O3 optimization.
# This works around a bug in Qiskit where Sabre routing fails for simulators (Issue #7098)
trans_dict = {}
if not backend.configuration().simulator:
trans_dict = {'layout_method': 'sabre', 'routing_method': 'sabre'}
trans_circs = transpile(full_circs, backend, optimization_level=3, **trans_dict)
# If using measurement mitigation we need to find out which physical qubits our transpiled
# circuits actually measure, construct a mitigation object targeting our backend, and
# finally calibrate our mitigation by running calibration circuits on the backend.
if use_measurement_mitigation:
maps = mthree.utils.final_measurement_mapping(trans_circs)
mit = mthree.M3Mitigation(backend)
mit.cals_from_system(maps)
# Here we define a callback function that will stream the optimizer parameter vector
# back to the user after each iteration. This uses the `user_messenger` object.
# Here we convert to a list so that the return is user readable locally, but
# this is not required.
def callback(xk):
user_messenger.publish(list(xk))
# This is the primary VQE function executed by the optimizer. This function takes the
# parameter vector as input and returns the energy evaluated using an ansatz circuit
# bound with those parameters.
def vqe_func(params):
# Attach (bind) parameters in params vector to the transpiled circuits.
bound_circs = [circ.bind_parameters(params) for circ in trans_circs]
# Submit the job and get the resultant counts back
counts = backend.run(bound_circs, shots=shots).result().get_counts()
# If using measurement mitigation apply the correction and
# compute expectation values from the resultant quasiprobabilities
# using the measurement strings.
if use_measurement_mitigation:
quasi_collection = mit.apply_correction(counts, maps)
expvals = quasi_collection.expval(meas_strings)
# If not doing any mitigation just compute expectation values
# from the raw counts using the measurement strings.
# Since Qiskit does not have such functionality we use the convenience
# function from the mthree mitigation module.
else:
expvals = mthree.utils.expval(counts, meas_strings)
# The energy is computed by simply taking the product of the coefficients
# and the computed expectation values and summing them. Here we also
# take just the real part as the coefficients can possibly be complex,
# but the energy (eigenvalue) of a Hamiltonian is always real.
energy = np.sum(coeffs*expvals).real
return energy
# Here is where we actually perform the computation. We begin by seeing what
# optimization routine the user has requested, e.g. SPSA versus the SciPy ones,
# and dispatch to the correct optimizer. The selected optimizer starts at
# x0 and calls 'vqe_func' every time the optimizer needs to evaluate the cost
# function. The result is returned as a SciPy OptimizerResult object.
# Additionally, after every iteration, we use the 'callback' function to
# publish the interim results back to the user. This is important to do
# so that if the Program terminates unexpectedly, the user can start where they
# left off.
# Since SPSA is not in SciPy need if statement
if optimizer == 'SPSA':
res = fmin_spsa(vqe_func, x0, args=(), **optimizer_config,
callback=callback)
# All other SciPy optimizers here
else:
res = opt.minimize(vqe_func, x0, method=optimizer,
options=optimizer_config, callback=callback)
# Return result. OptimizeResult is a subclass of dict.
return res
###Output
_____no_output_____
###Markdown
Local testingImportant: You need to execute the code blocks in Appendices A and B before continuing.We can test whether our routine works by simply calling the `main` function with a backend instance, a `UserMessenger`, and sample arguments.
###Code
from qiskit.providers.ibmq.runtime import UserMessenger
msg = UserMessenger()
# Use the local Aer simulator
from qiskit import Aer
backend = Aer.get_backend('qasm_simulator')
# Execute the main routine for our simple two-qubit Hamiltonian H, and perform 5 iterations of the SPSA solver.
main(backend, msg, H, optimizer_config={'maxiter': 5})
###Output
[1.419780432710152, 2.3984284215892018, 1.1306533554149105, 1.8357672762510684, 5.414120644000338, 6.107301966755861, -0.013391355872252708, 5.615586607539193, 4.211781149943555, 1.792388243059789, 4.203949657158362, 0.1038271369149637, 2.4220098073658884, 4.617958787629208, 2.9969591661895865, 1.5490655190231735]
[2.1084925021737537, 3.0871404910528035, 0.4419412859513089, 2.52447934571467, 4.725408574536736, 5.418589897292259, -0.7021034253358543, 6.3042986770027944, 3.523069080479953, 1.1036761735961873, 3.5152375876947604, 0.7925392063785653, 3.11072187682949, 5.30667085709281, 3.685671235653188, 0.8603534495595718]
[1.7365578685005831, 3.459075124725974, 0.8138759196244794, 2.8964139793878405, 4.353473940863566, 5.046655263619089, -1.0740380590090248, 5.932364043329624, 3.1511344468067826, 1.475610807269358, 3.8871722213679307, 1.1644738400517358, 2.73878724315632, 4.934736223419639, 4.057605869326359, 1.2322880832327423]
[1.7839871181735734, 3.4116458750529834, 0.766446669951489, 2.84898472971485, 4.306044691190576, 5.094084513292079, -1.0266088093360346, 5.884934793656634, 3.198563696479773, 1.5230400569423481, 3.8397429716949403, 1.1170445903787456, 2.6913579934833294, 4.887306973746649, 4.105035118999349, 1.2797173329057325]
[1.122687940285629, 4.072945052940928, 1.4277458478394336, 2.1876855518269056, 3.6447455133026314, 5.755383691180024, -1.687907987223979, 6.546233971544579, 2.5372645185918286, 2.1843392348302926, 4.501042149582885, 1.7783437682666903, 3.352657171371274, 4.226007795858704, 4.766334296887294, 0.618418155017788]
###Markdown
Having executed the above, we see that there are 5 parameter arrays returned, one for each callback, along with the final optimization result. The parameter arrays are the interim results, and the `UserMessenger` object prints these values to the cell output. The output itself is the answer we obtained, expressed as a SciPy `OptimizeResult` object. Program metadataProgram metadata is essentially the docstring for a runtime program. It describes overall program information such as the program `name`, `description`, `version`, and the `max_execution_time` the program is allowed to run, as well as detailing the inputs and outputs the program expects. At a bare minimum, the values described above are required. Important: As of the time of writing, runtime names must be unique amongst all users. Because of this, we will add a unique ID (UUID) to the program name. This limitation will be removed in a future release.
###Code
import uuid
meta = {
"name": "sample-vqe-{}".format(uuid.uuid4()),
"description": "A sample VQE program.",
"max_execution_time": 100000,
"version": "1.0",
}
###Output
_____no_output_____
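###Markdown
Because the program name now embeds a random UUID, it is worth (a sketch) printing it once so the program is easy to recognize later in listings:
###Code
# The generated name has the form sample-vqe-<UUID>.
print(meta["name"])
###Output
_____no_output_____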
###Markdown
It is important to set the `max_execution_time` high enough so that your program does not get terminated unexpectedly. Additionally, one should make sure that interim results are sent back to the user so that, if something does happen, the user can start where they left off.It is, however, good form to detail the parameters and return types, as well as interim results. That being said, if making a runtime intended to be used by others, this information would also likely be mirrored in the signature of a function or class that the user would interact with directly; end users should not directly call runtime programs. We will see why below. Nevertheless, let us add to our metadata. First, the `parameters` section details the inputs the user is able to pass:
###Code
meta["parameters"] = [
{"name": "hamiltonian", "description": "Hamiltonian whose ground state we want to find.", "type": "list", "required": True},
{"name": "ansatz", "description": "Name of ansatz quantum circuit to use, default='EfficientSU2'", "type": "str", "required": False},
{"name": "ansatz_config", "description": "Configuration parameters for the ansatz circuit.", "type": "dict", "required": False},
{"name": "x0", "description": "Initial vector of parameters.", "type": "ndarray", "required": False},
{"name": "optimizer", "description": "Classical optimizer to use, default='SPSA'.", "type": "str", "required": False},
{"name": "optimizer_config", "description": "Configuration parameters for the optimizer.", "type": "dict", "required": False},
{"name": "shots", "description": "Number of shots to take per circuit.", "type": "int", "required": False},
{"name": "use_measurement_mitigation", "description": "Use measurement mitigation, default=False.", "type": "bool", "required": False}
]
###Output
_____no_output_____
###Markdown
Next, the `return_values` section tells about the return types:
###Code
meta['return_values'] = [
{"name": "result", "description": "Final result in SciPy optimizer format.", "type": "OptimizeResult"}
]
###Output
_____no_output_____
###Markdown
and finally let us specify what comes back when an interim result is returned:
###Code
meta["interim_results"] = [
{"name": "params", "description": "Parameter vector at current optimization step", "type": "ndarray"},
]
###Output
_____no_output_____
###Markdown
Uploading the programWe now have all the ingredients needed to upload our program. To do so we need to collect all of our code in one file, here called `sample_vqe.py` for uploading. This limitation will be removed in later versions of Qiskit Runtime. Alternatively, if the entire code is contained within a single Jupyter notebook cell, this can be done using the magic function```%%writefile my_program.py```To actually upload the program we need to get a Provider from our IBM Quantum account:
###Code
from qiskit import IBMQ
IBMQ.load_account()
provider = IBMQ.get_provider(group='deployed')
###Output
_____no_output_____
###Markdown
Program uploadThe call to `upload_program()` takes the target Python file as `data` and the metadata as inputs. **If you have already uploaded the program this will raise an error and you must delete it first to continue**.
###Code
program_id = provider.runtime.upload_program(data='sample_vqe.py', metadata=meta)
program_id
###Output
_____no_output_____
###Markdown
Here the returned `program_id` is the same as the program `name` given in the metadata. You cannot have more than one program with the same `name` and `program_id`. The `program_id` is how you should reference your program. Program informationWe can query the program for information and see that our metadata is correctly being attached:
###Code
prog = provider.runtime.program(program_id)
print(prog)
###Output
sample-vqe-1c65cfd4-9551-42fc-bfbe-099e7fd2574f:
Name: sample-vqe-1c65cfd4-9551-42fc-bfbe-099e7fd2574f
Description: A sample VQE program.
Version: 1.0
Creation date: 2021-10-06T22:21:05.000000
Max execution time: 100000
Input parameters:
- hamiltonian:
Description: Hamiltonian whose ground state we want to find.
Type: list
Required: True
- ansatz:
Description: Name of ansatz quantum circuit to use, default='EfficientSU2'
Type: str
Required: False
- ansatz_config:
Description: Configuration parameters for the ansatz circuit.
Type: dict
Required: False
- x0:
Description: Initial vector of parameters.
Type: ndarray
Required: False
- optimizer:
Description: Classical optimizer to use, default='SPSA'.
Type: str
Required: False
- optimizer_config:
Description: Configuration parameters for the optimizer.
Type: dict
Required: False
- shots:
Description: Number of shots to take per circuit.
Type: int
Required: False
- use_measurement_mitigation:
Description: Use measurement mitigation, default=False.
Type: bool
Required: False
Interim results:
- params:
Description: Parameter vector at current optimization step
Type: ndarray
Returns:
- result:
Description: Final result in SciPy optimizer format.
Type: OptimizeResult
###Markdown
Deleting a programIf you make a mistake and need to delete and/or re-upload the program, you can run the following, passing the `program_id`:
###Code
#provider.runtime.delete_program(program_id)
###Output
_____no_output_____
###Markdown
Running the program Specify parametersTo run the program we need to specify the `options` that are used in the runtime environment (not the program variables). At present, only the `backend_name` is required.
###Code
backend = provider.backend.ibmq_qasm_simulator
options = {'backend_name': backend.name()}
###Output
_____no_output_____
###Markdown
The `inputs` dictionary is used to pass arguments to the `main` function itself. For example:
###Code
inputs = {}
inputs['hamiltonian'] = H
inputs['optimizer_config']={'maxiter': 10}
###Output
_____no_output_____
###Markdown
Execute the programWe now can execute the program and grab the result.
###Code
job = provider.runtime.run(program_id, options=options, inputs=inputs)
job.result()
###Output
_____no_output_____
###Markdown
A few things need to be pointed out. First, we did not get back any interim results, and second, the return object is a plain dictionary. This is because we did not listen for the return results, and we did not tell the job how to format the return result. Listening for interim resultsTo listen for interim results we need to pass a callback function to `provider.runtime.run` that stores the results. The callback takes two arguments, the `job_id` and the returned data:
###Code
interm_results = []
def vqe_callback(job_id, data):
interm_results.append(data)
###Output
_____no_output_____
###Markdown
Executing again we get:
###Code
job2 = provider.runtime.run(program_id, options=options, inputs=inputs, callback=vqe_callback)
job2.result()
print(interm_results)
###Output
[[1.1839280526666394, 2.391820224610454, 2.7491281736833244, 0.5771768054969294, 2.349087960882593, 0.20251406828095217, 5.3527505036344865, 1.80726551800796, 2.8686317344166947, 2.4545878612072003, -0.04047464122825306, 4.2780676963333795, 3.27599724292225, 3.5527489679560844, 2.1472927005219273, 3.1637626657075555], [1.1855194978035488, 2.3902287794735444, 2.750719618820234, 0.5755853603600198, 2.3506794060195024, 0.20092262314404263, 5.351159058497577, 1.8088569631448694, 2.870223179553604, 2.452996416070291, -0.04206608636516258, 4.27647625119647, 3.2775886880591596, 3.554340413092994, 2.148884145658837, 3.165354110844465], [1.0411904999135912, 2.534557777363502, 2.8950486167101914, 0.7199143582499773, 2.206350408129545, 0.05659362525408518, 5.206830060607619, 1.664527965254912, 3.0145521774435617, 2.5973254139602484, 0.10226291152479487, 4.420805249086427, 3.133259690169202, 3.6986694109829514, 2.004555147768879, 3.0210251129545074], [1.005580093753927, 2.5701681835231662, 2.9306590228698557, 0.7555247644096416, 2.241960814289209, 0.020983219094420913, 5.242440466767284, 1.7001383714145764, 3.050162583603226, 2.561715007800584, 0.13787331768445915, 4.456415655246091, 3.0976492840095378, 3.663059004823287, 2.0401655539285435, 3.0566355191141716], [1.07047876838977, 2.6350668581590093, 2.8657603482340126, 0.8204234390454845, 2.177062139653366, 0.08588189373026392, 5.307339141403126, 1.6352396967787333, 2.985263908967383, 2.496816333164741, 0.20277199232030216, 4.521314329881934, 3.162547958645381, 3.7279576794591303, 1.9752668792927004, 2.9917368444783285], [1.3994411335364108, 2.96402922330565, 3.1947227133806533, 0.4914610738988439, 2.5060245048000067, -0.2430804714163767, 5.636301506549767, 1.3062773316320926, 3.3142262741140236, 2.8257786983113817, -0.12619037282633846, 4.192351964735293, 3.4915103237920215, 3.3989953143124896, 2.304229244439341, 3.3206992096249692], [1.325020213130704, 3.0384501437113567, 3.1203017929749466, 0.5658819943045507, 2.5804454252057134, -0.16865955101066996, 5.710722426955474, 1.231856411226386, 3.3886471945197303, 2.751357777905675, -0.2006112932320452, 4.117931044329586, 3.417089403386315, 3.4734162347181963, 2.2298083240336344, 3.395120130030676], [1.031941029864989, 2.7453709604456416, 2.8272226097092314, 0.2728028110388356, 2.2873662419399983, 0.12441963225504513, 6.003801610221189, 1.524935594492101, 3.6817263777854454, 2.45827859463996, 0.09246789003366987, 3.8248518610638707, 3.71016858665203, 3.7664954179839114, 1.9367291407679192, 3.102040946764961], [1.4127118235825624, 3.126141754163215, 2.446451815991658, -0.10796798267873797, 1.9065954482224248, 0.5051904259726187, 5.623030816503616, 1.1441648007745275, 4.062497171503019, 2.8390493883575334, 0.47323868375124345, 3.444081067346297, 4.090939380369604, 4.147266211701485, 1.5559583470503457, 3.4828117404825343], [1.3962500340466297, 3.1096799646272824, 2.4629136055275906, -0.09150619314280523, 1.890133658686492, 0.4887286364366859, 5.606569026967683, 1.1277030112385948, 4.046035381967086, 2.855511177893466, 0.4567768942153107, 3.46054285688223, 4.107401169905537, 4.163728001237418, 1.539496557514413, 3.4663499509466016]]
###Markdown
Formatting the returned resultsTo return the results in the desired format, we need to specify a decoder. This decoder must have a `decode` method that gets called to do the actual conversion. In our case `OptimizeResult` is a simple sub-class of `dict`, so the formatting is straightforward.
###Code
from qiskit.providers.ibmq.runtime import ResultDecoder
from scipy.optimize import OptimizeResult
class VQEResultDecoder(ResultDecoder):
@classmethod
def decode(cls, data):
data = super().decode(data) # This is required to preformat the data returned.
return OptimizeResult(data)
###Output
_____no_output_____
###Markdown
We can then use this when returning the job result:
###Code
job3 = provider.runtime.run(program_id, options=options, inputs=inputs)
job3.result(decoder=VQEResultDecoder)
###Output
_____no_output_____
###Markdown
Simplifying program execution with wrapping functionsWhile runtime programs are powerful and flexible, they are not the most friendly things to interact with. Therefore, if your program is intended to be used by others, it is best to make wrapper functions and/or classes that simplify the user experience. Moreover, such wrappers allow user inputs to be validated on the client side, which can quickly catch errors that would otherwise be raised later during execution - possibly after hours of waiting in the queue.Here we will make two helper routines. First, a job wrapper that attaches the interim results directly to the job object and lets us retrieve them from it, and that also decodes the final result so that the end user need not worry about formatting it.
###Code
class RuntimeJobWrapper():
"""A simple Job wrapper that attaches interm results directly to the job object itself
in the `interm_results attribute` via the `_callback` function.
"""
def __init__(self):
self._job = None
self._decoder = VQEResultDecoder
self.interm_results = []
def _callback(self, job_id, xk):
"""The callback function that attaches interm results:
Parameters:
job_id (str): The job ID.
xk (array_like): A list or NumPy array to attach.
"""
self.interm_results.append(xk)
def __getattr__(self, attr):
if attr == 'result':
return self.result
else:
if attr in dir(self._job):
return getattr(self._job, attr)
raise AttributeError("Class does not have {}.".format(attr))
def result(self):
"""Get the result of the job as a SciPy OptimizerResult object.
This blocks until job is done, cancelled, or errors.
Returns:
OptimizeResult: A SciPy optimizer result object.
"""
return self._job.result(decoder=self._decoder)
###Output
_____no_output_____
###Markdown
Next, we create the actual function we want users to call to execute our program. To this function we will add a series of simple validation checks (not every possible check, for simplicity), and we will use the job wrapper defined above to simplify the output.
###Code
import qiskit.circuit.library.n_local as lib_local
def vqe_runner(backend, hamiltonian,
ansatz='EfficientSU2', ansatz_config={},
x0=None, optimizer='SPSA',
optimizer_config={'maxiter': 100},
shots = 8192,
use_measurement_mitigation=False):
"""Routine that executes a given VQE problem via the sample-vqe program on the target backend.
Parameters:
backend (ProgramBackend): Qiskit backend instance.
hamiltonian (list): Hamiltonian whose ground state we want to find.
ansatz (str): Optional, name of ansatz quantum circuit to use, default='EfficientSU2'
ansatz_config (dict): Optional, configuration parameters for the ansatz circuit.
x0 (array_like): Optional, initial vector of parameters.
optimizer (str): Optional, string specifying classical optimizer, default='SPSA'.
optimizer_config (dict): Optional, configuration parameters for the optimizer.
shots (int): Optional, number of shots to take per circuit.
use_measurement_mitigation (bool): Optional, use measurement mitigation, default=False.
Returns:
OptimizeResult: The result in SciPy optimization format.
"""
options = {'backend_name': backend.name()}
inputs = {}
# Validate Hamiltonian is correct
num_qubits = len(hamiltonian[0][1])
for idx, ham in enumerate(hamiltonian):
if len(ham[1]) != num_qubits:
raise ValueError('Number of qubits in Hamiltonian term {} does not match {}'.format(idx,
num_qubits))
inputs['hamiltonian'] = hamiltonian
# Validate ansatz is in the module
ansatz_circ = getattr(lib_local, ansatz, None)
if not ansatz_circ:
raise ValueError('Ansatz {} not in n_local circuit library.'.format(ansatz))
inputs['ansatz'] = ansatz
inputs['ansatz_config'] = ansatz_config
# If given x0, validate its length against num_params in ansatz:
if x0 is not None:
x0 = np.asarray(x0)
ansatz_circ = ansatz_circ(num_qubits, **ansatz_config)
num_params = ansatz_circ.num_parameters
if x0.shape[0] != num_params:
raise ValueError('Length of x0 {} does not match number of params in ansatz {}'.format(x0.shape[0],
num_params))
inputs['x0'] = x0
# Set the rest of the inputs
inputs['optimizer'] = optimizer
inputs['optimizer_config'] = optimizer_config
inputs['shots'] = shots
inputs['use_measurement_mitigation'] = use_measurement_mitigation
rt_job = RuntimeJobWrapper()
job = provider.runtime.run(program_id, options=options, inputs=inputs, callback=rt_job._callback)
rt_job._job = job
return rt_job
###Output
_____no_output_____
###Markdown
We can now execute our runtime program via this runner function:
###Code
job4 = vqe_runner(backend, H, optimizer_config={'maxiter': 15})
job4.result()
###Output
_____no_output_____
###Markdown
The interim results are now attached to the job's `interm_results` attribute and, as expected, we see that the length matches the number of iterations performed.
###Code
len(job4.interm_results)
###Output
_____no_output_____
###Markdown
ConclusionWe have demonstrated how to create, upload, and use a custom Qiskit Runtime program by creating our own VQE solver from scratch. This tutorial was meant to touch upon every aspect of the process for a real-world example. Within the current limitations of the runtime environment, this example should enable readers to develop their own single-file runtime program. This program is also a good starting point for exploring additional flavors of VQE runtime. For example, it is straightforward to vary the number of shots per iteration, increasing the shots as the number of iterations increases. Those looking to go deeper can consider implementing an [adaptive VQE](https://doi.org/10.1038/s41467-019-10988-2), where the ansatz is not fixed at initialization. Appendix AHere we code a simple simultaneous perturbation stochastic approximation (SPSA) optimizer for use on noisy quantum systems. Most optimizers do not handle fluctuating cost functions well, so this is a needed addition for executing on real quantum hardware.
###Code
import numpy as np
from scipy.optimize import OptimizeResult
def fmin_spsa(func, x0, args=(), maxiter=100,
a=1.0, alpha=0.602, c=1.0, gamma=0.101,
callback=None):
"""
Minimization of scalar function of one or more variables using simultaneous
perturbation stochastic approximation (SPSA).
Parameters:
func (callable): The objective function to be minimized.
``fun(x, *args) -> float``
where x is an 1-D array with shape (n,) and args is a
tuple of the fixed parameters needed to completely
specify the function.
x0 (ndarray): Initial guess. Array of real elements of size (n,),
where ‘n’ is the number of independent variables.
maxiter (int): Maximum number of iterations. The number of function
evaluations is twice as many. Optional.
a (float): SPSA gradient scaling parameter. Optional.
alpha (float): SPSA gradient scaling exponent. Optional.
c (float): SPSA step size scaling parameter. Optional.
gamma (float): SPSA step size scaling exponent. Optional.
callback (callable): Function that accepts the current parameter vector
as input.
Returns:
OptimizeResult: Solution in SciPy Optimization format.
Notes:
See the `SPSA homepage <https://www.jhuapl.edu/SPSA/>`_ for usage and
additional extensions to the basic version implemented here.
"""
A = 0.01 * maxiter
x0 = np.asarray(x0)
x = x0
for kk in range(maxiter):
ak = a*(kk+1.0+A)**-alpha
ck = c*(kk+1.0)**-gamma
# Bernoulli distribution for randoms
deltak = 2*np.random.randint(2, size=x.shape[0])-1
grad = (func(x + ck*deltak, *args) - func(x - ck*deltak, *args))/(2*ck*deltak)
x -= ak*grad
if callback is not None:
callback(x)
return OptimizeResult(fun=func(x, *args), x=x, nit=maxiter, nfev=2*maxiter,
message='Optimization terminated successfully.',
success=True)
###Output
_____no_output_____
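###Markdown
As a quick, illustrative smoke test (not part of the runtime program, and with arbitrarily chosen gain parameters), `fmin_spsa` can be run on a noisy quadratic:
###Code
# Minimal sketch: minimize sum((x - 1)^2) plus a small amount of noise with SPSA
rng = np.random.default_rng(seed=0)
noisy_quadratic = lambda x: np.sum((x - 1.0)**2) + 0.01 * rng.standard_normal()
res = fmin_spsa(noisy_quadratic, x0=np.zeros(3), maxiter=300, a=0.2, c=0.1)
print(res.x)  # should end up close to [1, 1, 1]
###Output
_____no_output_____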
###Markdown
Appendix BThis is a helper function that converts the Pauli operators in the strings that define the Hamiltonian into the appropriate measurements at the end of the circuits. For $X$ operators this involves adding an $H$ gate to the qubits to be measured, whereas a $Y$ operator needs $S^{\dagger}$ followed by an $H$. Other choices of Pauli operators require no additional gates prior to measurement.
###Code
def opstr_to_meas_circ(op_str):
"""Takes a list of operator strings and makes circuit with the correct post-rotations for measurements.
Parameters:
op_str (list): List of strings representing the operators needed for measurements.
Returns:
list: List of circuits for measurement post-rotations
"""
num_qubits = len(op_str[0])
circs = []
for op in op_str:
qc = QuantumCircuit(num_qubits)
for idx, item in enumerate(op):
if item == 'X':
qc.h(num_qubits-idx-1)
elif item == 'Y':
qc.sdg(num_qubits-idx-1)
qc.h(num_qubits-idx-1)
circs.append(qc)
return circs
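# Illustrative sanity check: each operator string should yield one measurement-basis circuit
from qiskit import QuantumCircuit  # make sure QuantumCircuit is available for the check
example_circs = opstr_to_meas_circ(['XY', 'ZI'])
print(len(example_circs))  # expected: 2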
from qiskit.tools.jupyter import *
%qiskit_copyright
###Output
_____no_output_____ |
part1_introduction_to_computer_vision/1_2_Convolutional_Filters_and_Edge_Detection/3. Gaussian Blur.ipynb | ###Markdown
Gaussian Blur, Medical Images Import resources and display image
###Code
import numpy as np
import matplotlib.pyplot as plt
import cv2
%matplotlib inline
# Read in the image
image = cv2.imread('images/brain_MR.jpg')
# Make a copy of the image
image_copy = np.copy(image)
# Change color to RGB
image_copy = cv2.cvtColor(image_copy, cv2.COLOR_BGR2RGB)
plt.imshow(image_copy)
###Output
_____no_output_____
###Markdown
Gaussian blur the image
###Code
# Convert to grayscale for filtering
gray = cv2.cvtColor(image_copy, cv2.COLOR_RGB2GRAY)
# Create a Gaussian blurred image
gray_blur = cv2.GaussianBlur(gray, (3,3), 0)
f, (ax1, ax2) = plt.subplots(1, 2, figsize=(20,10))
ax1.set_title('original gray')
ax1.imshow(gray, cmap='gray')
ax2.set_title('blurred gray')
ax2.imshow(gray_blur, cmap='gray')
###Output
_____no_output_____
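###Markdown
As an aside, the amount of smoothing is controlled by the kernel size (and by sigma, which `cv2.GaussianBlur` derives automatically from the kernel size when it is set to 0). A minimal sketch comparing a few arbitrarily chosen kernel sizes:
###Code
# Larger (odd) kernel sizes blur more aggressively
f, axes = plt.subplots(1, 3, figsize=(20,10))
for ax, k in zip(axes, [3, 9, 15]):
    blurred = cv2.GaussianBlur(gray, (k, k), 0)
    ax.set_title('kernel {}x{}'.format(k, k))
    ax.imshow(blurred, cmap='gray')
###Output
_____no_output_____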
###Markdown
Test performance with a high-pass filter
###Code
# High-pass filter
# 3x3 sobel filters for edge detection
sobel_x = np.array([[ -1, 0, 1],
[ -2, 0, 2],
[ -1, 0, 1]])
sobel_y = np.array([[-1, -2, -1],
[0, 0, 0],
[1, 2, 1]])
# Filter the original and blurred grayscale images
filtered = cv2.filter2D(gray, -1, sobel_y)
filtered_blurred = cv2.filter2D(gray_blur, -1, sobel_y)
f, (ax1, ax2) = plt.subplots(1, 2, figsize=(20,10))
ax1.set_title('original gray')
ax1.imshow(filtered, cmap='gray')
ax2.set_title('blurred image')
ax2.imshow(filtered_blurred, cmap='gray')
# Create threshold that sets all the filtered pixels
# to white above a certain threshold
retval, binary_image = cv2.threshold(
filtered_blurred,
50,
255,
cv2.THRESH_BINARY)
plt.figure(figsize = (10,10))
plt.imshow(binary_image, cmap='gray')
###Output
_____no_output_____ |
MSE_with_Motion.ipynb | ###Markdown
###Code
import numpy as np
import cv2
import matplotlib.pyplot as plt
# utility function(s)
def imshow(image, *args, **kwargs):
"""A replacement for cv2.imshow() for use in Jupyter notebooks using matplotlib.
Args:
image : np.ndarray. shape (N, M) or (N, M, 1) is an NxM grayscale image. shape
(N, M, 3) is an NxM BGR color image.
"""
if len(image.shape) == 3:
# Height, width, channels
# Assume BGR, do a conversion
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
# Draw the image
im = plt.imshow(image, *args, **kwargs)
# We'll also disable drawing the axes and tick marks in the plot, since it's actually an image
plt.axis('off')
# Make sure it outputs
# plt.show()
return im
###Output
_____no_output_____
###Markdown
A single binary view (area: $w \times w$) is composed of zero background and of a target of size $a \times a$. The mean and variance of such a 2D view are $$\mu = \frac{a^2}{w^2} \\\sigma = \mu(\mu-1)^2 + (1-\mu)(\mu-0)^2 = \mu - \mu^2$$, where the mean ($\mu$) also describes the size relation of the target with respect to the view. The term $(\mu-1)^2$ is the variance computation in the target and $(\mu-0)^2 = \mu^2$ the variance for the background.The code below shows such an example:
###Code
w = 300
a = 50
sx,sy = int(w/3), int(w/3)
single = np.zeros((300,300))
single[sx:sx+a,sy:sy+a] = 1.0
imshow(single)
plt.title( 'mean: {}, var: {}'.format(np.mean(single), np.var(single)))
plt.show()
###Output
_____no_output_____
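###Markdown
As a quick sanity check, the closed-form mean and variance match the values NumPy computes for this view:
###Code
# Theoretical statistics of the single binary view
mu = a**2 / w**2
var = mu - mu**2
print('theoretical mean: {}, var: {}'.format(mu, var))
print('empirical mean: {}, var: {}'.format(np.mean(single), np.var(single)))
###Output
_____no_output_____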
###Markdown
If multiple such single views are averaged (as typically done in AOS) it has no effect on the variance and mean as long as the target is perfectly registered. If the target, however, is not registered (e.g., a moving target or by defocus) the statistics change. Let's first look at the extreme case, where the averaged targets are not overlapping anymore. This is showcased below by introducing a shift $d$ for $N$ images.The mean ($\mu$) is not altered, but the variance changes:$$\sigma = N\mu(\mu-\frac{1}{N})^2 + (1-N\mu)(\mu-0)^2 = \frac{\mu}{N} - \mu^2 \text{.}$$The change of $\sigma$ is inverse proportional to $N$. Note that $N\mu$ describes the area covered by the non-overlapping instances of the target.
###Code
w = 300
a = 20
r = a*a / w**2 # ratio/mean
print(r)
d = 25
N = 10
sx,sy = int(a), int(a)
sum = np.zeros((w,w))
for i in range(N):
single = np.zeros((w,w))
x,y = sx + i*d, sy
single[x:x+a,y:y+a] = 1.0
sum += single
sum = sum/N
imshow(sum, vmin=0.0, vmax=1.0)
plt.title( 'mean: {}, var: {}'.format(np.mean(sum), np.var(sum)))
plt.show()
# variance computation
v_overlap = r - r**2
v_nonoverlap = r/N - r**2
print('var (non-overlap): {}'.format( v_nonoverlap ) )
###Output
0.0044444444444444444
###Markdown
If the shift $d$ is less then the target size $a$ $(d < a)$ the targets will overlap in the integral image. For simplicity we will just look at the problem in 1D now. The area (normalized by the area size) that is covered by the non-overlapping targets can be expressed by $$ g = \frac{d(N-1)+a}{w}$$and the number of overlaps is expressed by $$ M = \frac{a}{d}$$, where it has to be ensured that $M$ does not exceed $N$. Furthermore, there will be different regions with a varying amount of overlap. For example a target with $a=5$ a shift of $d=2$ and $N=7$ results in 4 regions without overlap in 8 regions where two target instances overlap and 5 regions with an overlap of three targets. Note that this is illustrated in the example below. Furthermore, in this simulation a region is a pixel or array cell. To compute the variance the different overlaps have to be considered. We introduce this as a count $c_i$, where $i$ is the number of overlapping targets. In the example this results in $c_1=4, c_2=8$, and $c_3=5$.The equation to compute the variance thus expands to$$ \sigma = (1-g)\mu^2 + \frac{1}{w} \sum_i c_i (\mu - \frac{i}{N})^2 \\ = \mu^2 - \frac{2\mu}{Nw} \sum_i c_i i + \frac{1}{N^2w} \sum_i c_i i^2 \text{.}$$By subsituting $\mu = a/w$ (in 1D) it further simplifies to$$ \sigma = \frac{a^2}{w^2} - \frac{2a}{Nw^2} \sum_i{ c_i i }+ \frac{1}{N^2w} \sum_i c_i i^2 \text{.}$$Note that it is propably impractical to always compute $c_i$ so it might be possible to simplify or approximate these terms. A first attempt would be to approximate the terms by$$ \sum_i{ c_i i } \approx M (d (N-1-M)+a) \\ \sum_i{ c_i i^2 } \approx M^2 (d (N-1-M)+a) \text{.}$$This, however does not allways lead to close results (see below).
###Code
w = 30
a = 5
r = a / w # ratio/mean
print(r)
d = 2
N = 7
sx,sy = int(a), int(a)
sum = np.zeros((1,w))
for i in range(N):
single = np.zeros_like(sum)
x,y = sx + i*d, sy
single[:,x:x+a] = 1.0
sum += single
count,bins=np.histogram(sum, bins=np.arange(np.max(sum)+2))
print(count)
print(np.asarray(bins[:-1],dtype=np.int16))
sum = sum/N
imshow(sum, vmin=0.0, vmax=1.0)
plt.title( 'mean: {}, var: {}'.format(np.mean(sum), np.var(sum)))
plt.show()
# variance computation
if d<=0:
M = N
else:
M = max(min(a/d,N),1)
v_overlap = r - r**2 # assuming everything is overlapping
v_nonoverlap = r/N - r**2 # assuming nothing is overlapping in the integral
term1 = np.sum(bins[:-1] * count)
term2 = np.sum(bins[:-1]**2 * count)
v = a**2/w**2 - 2*a/(N*w**2)*term1 + 1/(N**2*w)*term2
print('var (new): {}'.format( v ) )
# approximate term1 and term2
term1_ = M * (d*(N-1-M)+a)
term2_ = term1_ * M
v_ = a**2/w**2 - 2*a/(N*w**2)*term1_ + 1/(N**2*w)*term2_
print('var (approx): {}'.format( v_ ) )
###Output
0.16666666666666666
[13 4 8 5]
[0 1 2 3]
###Markdown
In this section, we discuss the statistical model of the $MSE$ between an integral image $X$ and a hypothetical occlusion-free reference $S$. $$MSE = E[(X- S)^2] = E[X^2] -2E[XS] +E[S^2]$$ Below, a theoretical MSE calculation is provided (signal mean, signal variance, occluder mean, occluder variance, occluder density, number of integrated images, number of overlapping images).--- The current equation works when the number of overlapping images is an integer (i.e., the image size is a multiple of the shift).
###Code
def theoritcal_MSE(signalmean,signalvar,occlmean,occlvar,occldens,noofintegratedimage,numofover):
MSE = ((1-(numofover*(1-occldens)/noofintegratedimage))**2 + (numofover*occldens*(1-occldens)/(noofintegratedimage**2)))*(signalvar+(occlmean-signalmean)**2) + (((numofover*occldens+noofintegratedimage-numofover)/(noofintegratedimage**2))*occlvar)
return MSE
def theoritcal_MSE_Parallel_Sequential(signalmean,signalvar,occlmean,occlvar,occldens,noofintegratedimage,numofover, numofparallel):
numofover = numofover * numofparallel
MSE = ((1-(numofover*(1-occldens)/noofintegratedimage))**2 + (numofover*occldens*(1-occldens)/(noofintegratedimage**2)))*(signalvar+(occlmean-signalmean)**2) + (((numofover*occldens+noofintegratedimage-numofover)/(noofintegratedimage**2))*occlvar)
return MSE
###Output
_____no_output_____
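###Markdown
A few quick evaluations of the closed form (signal mean 1, occluder mean 0, zero variances, full overlap, i.e. numofover = N) show that it reduces to $D^2 + D(1-D)/N$, giving 0 for $D=0$ and 1 for $D=1$:
###Code
# Illustrative values: N = 10 integrated images, all overlapping the signal
for D in [0.0, 0.5, 1.0]:
    print(D, theoritcal_MSE(1.0, 0, 0.0, 0, D, 10, 10))
###Output
_____no_output_____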
###Markdown
Below we measure the MSE when no occlusion is present (1D case).
###Code
w = 100
a = 15
r = a / w # ratio/mean
print(r)
d = 5
N = 10
signalmean = 0.5
sx,sy = int(a), int(a)
sum = np.zeros((1,w))
singleimagearray =[]
for i in range(N):
single = np.zeros_like(sum)
x,y = sx + i*d, sy
single[:,x:x+a] = signalmean
imshow(single, vmin=0.0, vmax=1.0)
plt.show()
singleimagearray.append(single)
sum += single
sum = sum/N
imshow(sum, vmin=0.0, vmax=1.0)
plt.title( 'mean: {}, var: {}'.format(np.mean(sum), np.var(sum)))
plt.show()
#####Calculate Mean square error#############
noofpix = d
x,y = sx + 0*d, sy
endx,endy = sx + (N-2)*d, sy
imshow(sum[:,x+a-noofpix:endx+a-noofpix], vmin=0.0, vmax=1.0)
plt.title( 'Integrated Signal mean: {}, var: {}'.format(np.mean(sum[:,x+a-noofpix:endx+a-noofpix]), np.var(sum[:,x+a-noofpix:endx+a-noofpix])))
plt.show()
nopa = sum[:,x+a-noofpix:endx+a-noofpix]
nop = len(nopa)
sourcesingle = np.zeros((1,w))
sourcesingle[:,x+a-noofpix:endx+a-noofpix] = signalmean
imshow(sourcesingle[:,x+a-noofpix:endx+a-noofpix], vmin=0.0, vmax=1.0)
plt.title( 'Source Signal mean: {}, var: {}'.format(np.mean(sourcesingle[:,x+a-noofpix:endx+a-noofpix]), np.var(sourcesingle[:,x+a-noofpix:endx+a-noofpix])))
plt.show()
calcmse = np.mean((sum[:,x+a-noofpix:endx+a-noofpix] - sourcesingle[:,x+a-noofpix:endx+a-noofpix])**2)
print("calculated mse",calcmse)
theoriticalmse = theoritcal_MSE(0.5,0,0,0,0,N,int(a/d))
print("theoritical mse",theoriticalmse)
###Output
0.15
###Markdown
Below we measure the MSE when occluders are randomly present with density $D$.
###Code
w = 100
a = 10
r = a / w # ratio/mean
print(r)
d = 2
o_size = 2
o_shift = 2
o_dens = 0.2
N = 20
sx,sy = int(a), int(a)
sum = np.zeros((1,w))
singleimagearray =[]
occlimage = np.random.choice([0, 1], size=(1,50), p=[o_dens, 1-o_dens]) #np.random.binomial(n=1, p=1-o_dens, size=(1,50))
print(np.mean(occlimage), np.count_nonzero(occlimage), np.count_nonzero(occlimage)/50)
imshow(occlimage, vmin=0.0, vmax=1.0)
plt.show()
occlimage = occlimage.repeat(o_size, axis=1)
meas_dens = 1 - np.count_nonzero(occlimage)/100
print(np.mean(occlimage), np.count_nonzero(occlimage),np.count_nonzero(occlimage)/100)
imshow(occlimage, vmin=0.0, vmax=1.0)
plt.show()
shiftedocclimage = np.roll(occlimage, 2)
imshow(shiftedocclimage, vmin=0.0, vmax=1.0)
plt.show()
for i in range(N):
single = np.zeros_like(sum)
x,y = sx + i*d, sy
single[:,x:x+a] = 0.5
shiftedocclimage = np.roll(occlimage, i*o_shift)
imshow(shiftedocclimage, vmin=0.0, vmax=1.0)
plt.show()
single = single * shiftedocclimage
imshow(single, vmin=0.0, vmax=1.0)
plt.show()
singleimagearray.append(single)
sum += single
sum = sum/N
imshow(sum, vmin=0.0, vmax=1.0)
plt.title( 'mean: {}, var: {}'.format(np.mean(sum), np.var(sum)))
plt.show()
#####Calculate Mean square error#############
noofpix = d
x,y = sx + 0*d, sy
endx,endy = sx + (N-4)*d, sy
imshow(sum[:,x+a-noofpix:endx+a-noofpix], vmin=0.0, vmax=1.0)
plt.title( 'Integrated Signal mean: {}, var: {}'.format(np.mean(sum[:,x+a-noofpix:endx+a-noofpix]), np.var(sum[:,x+a-noofpix:endx+a-noofpix])))
plt.show()
nopa = sum[:,x+a-noofpix:endx+a-noofpix]
nop = len(nopa)
sourcesingle = np.zeros((1,w))
sourcesingle[:,x+a-noofpix:endx+a-noofpix] = signalmean
imshow(sourcesingle[:,x+a-noofpix:endx+a-noofpix], vmin=0.0, vmax=1.0)
plt.title( 'Source Signal mean: {}, var: {}'.format(np.mean(sourcesingle[:,x+a-noofpix:endx+a-noofpix]), np.var(sourcesingle[:,x+a-noofpix:endx+a-noofpix])))
plt.show()
calcmse = np.mean((sum[:,x+a-noofpix:endx+a-noofpix] - sourcesingle[:,x+a-noofpix:endx+a-noofpix])**2)
print("calculated mse",calcmse)
theoriticalmse = theoritcal_MSE(0.5,0,0,0,meas_dens,N,int(a/d))
print("theoritical mse",theoriticalmse)
nums = np.random.choice([0, 1], size=1000, p=[.1, .9])
print(nums)
print(np.mean(nums), np.count_nonzero(nums)/1000)
import time
w = 100 # simualtion width / fov on the ground
a = 1 # size of the target on the ground
r = a / w # ratio / mean between target and
d = 1 # movement of the target
o_size = 1 # occluder size
o_shift = 1 # occluder shift every image
o_dens = 0.0 # occlusion density
s_mean = 1.0 # signal mean
o_mean = 0.1 # noise mean
N = 20 # number of images recorded
M = N if d<=0 else max(min(a/d,N),1) # number of overlaps with moving [1 ... N].
sim_trails = 100 # number of simulation trails
def simulate(w,N,a,d,s_mean,o_mean,o_dens,o_size,o_shift):
# create occlusion with a certain density
occlimage = np.random.choice([0, 1], size=(1,int(w/o_size)), p=[o_dens, 1-o_dens]) #np.random.binomial(n=1, p=1-o_dens, size=(1,50))
occlimage = occlimage.repeat(o_size, axis=1)
# create signal
sx,sy = int(a), int(a)
single = np.ones((1,w)) * o_mean
x,y = sx + d, sy
single[:,x:x+a] = s_mean
# move occlusion by o_shift and signal by d N times.
def mov(img,dist,iter):
return np.roll(img, iter*dist)
images = np.stack( list( map(lambda i : mov(single,d,i) ,range(N)) ), axis=2 )
occls = np.stack( list( map(lambda i : mov(occlimage,-o_shift,i) ,range(N)) ), axis=2 )
# GT average image:
gt_avg = (np.mean(images,axis=2))
# non-noise count: print(np.sum(occls,axis=2))
# integral image:
X = images
X[occls==0] = o_mean
integral = np.mean(X,axis=2)
# compute MSE
if d==0:
mask = single > 0
else:
mask = np.zeros((1,w),dtype=np.bool8)
mask[:,sx+a-d+1:sx+N*d] = True
#print(single)
#print(mask)
#print(gt_avg[mask])
z_mean,z_var = np.mean(gt_avg[mask]), np.var(gt_avg[mask])
assert( np.isclose(z_var,0) )
#assert( np.isclose(z_mean, s_mean * M / N ) )
# simulate MSE
mse = np.square(integral[mask] - s_mean).mean(axis=None)
return mse
tic = time.perf_counter()
print( np.mean( [ simulate(w,N,a,d,s_mean,o_mean,o_dens,o_size,o_shift) for i in range(sim_trails) ] ) )
toc = time.perf_counter()
print(f"simualtion took {toc - tic:0.4f} seconds")
print( theoritcal_MSE(s_mean,0,o_mean,0,o_dens,N,int(M)) )
# run over multiple Ms and Ds
w = 1000
N = 100
a = N
s_mean = .88 # signal mean
o_mean = .12 # noise mean
sim_trails = 100 # number of simulation trails
Ds = list(np.linspace(0.0, 1.0, 10))
Ms = list(range(1,N,5)) # M < N, causes problems with the simulation otherwise
sim_mses = np.zeros((len(Ds),len(Ms)))
clc_mses = np.zeros_like(sim_mses)
for D in Ds:
for M in Ms:
try:
sim_mses[Ds.index(D), Ms.index(M)] = np.mean( [ simulate(w,N,M,1,s_mean,o_mean,D,o_size,o_shift) for i in range(sim_trails) ] )
except:
sim_mses[Ds.index(D), Ms.index(M)] = np.nan
clc_mses[Ds.index(D), Ms.index(M)] = theoritcal_MSE(s_mean,0,o_mean,0,D,N,int(M))
# takes a few seconds to compute ... ⌛
# display nicely
plt.figure(figsize=(20,10))
plt.subplot(131)
im = imshow(clc_mses, vmin=0, vmax=np.max(clc_mses,axis=None))
plt.yticks(range(len(Ds)),[ '{:0.4f}'.format(d) for d in Ds]), plt.xticks(range(len(Ms)),Ms)
plt.ylabel('D'), plt.xlabel('M')
plt.title( 'MSE (equation)' )
plt.axis('on')
plt.colorbar(im)
plt.subplot(132)
im = imshow(sim_mses, vmin=0, vmax=np.max(clc_mses,axis=None))
plt.yticks(range(len(Ds)),[ '{:0.4f}'.format(d) for d in Ds]), plt.xticks(range(len(Ms)),Ms)
plt.ylabel('D'), plt.xlabel('M')
plt.title( 'MSE (simulation)' )
plt.axis('on')
plt.colorbar(im)
plt.subplot(133)
im = imshow(np.abs(sim_mses-clc_mses), cmap='inferno')
plt.yticks(range(len(Ds)),[ '{:0.4f}'.format(d) for d in Ds]), plt.xticks(range(len(Ms)),Ms)
plt.ylabel('D'), plt.xlabel('M')
plt.title( 'MSE (diff)' )
plt.axis('on')
plt.colorbar(im)
plt.show()
N = 50
a = N
sim_trails = 100 # number of simulation trails
Ds = list(np.linspace(0.0, 1.0, 30))
Ms = list(range(1,N)) # M < N, causes problems with the simulation otherwise
sim_mses = np.zeros((len(Ds),len(Ms)))
clc_mses = np.zeros_like(sim_mses)
for D in Ds:
for M in Ms:
try:
sim_mses[Ds.index(D), Ms.index(M)] = np.mean( [ simulate(w,N,M,1,s_mean,o_mean,D,o_size,o_shift) for i in range(sim_trails) ] )
except:
sim_mses[Ds.index(D), Ms.index(M)] = np.nan
clc_mses[Ds.index(D), Ms.index(M)] = theoritcal_MSE(s_mean,0,o_mean,0,D,N,int(M))
# takes a few seconds to compute ... ⌛
# display nicely
plt.figure(figsize=(20,10))
plt.subplot(131)
im = imshow(clc_mses, vmin=0, vmax=np.max(clc_mses,axis=None))
plt.yticks(range(len(Ds)),[ '{:0.4f}'.format(d) for d in Ds]), plt.xticks(range(len(Ms)),Ms)
plt.ylabel('D'), plt.xlabel('M')
plt.title( 'MSE (equation)' )
plt.axis('on')
plt.colorbar(im)
plt.subplot(132)
im = imshow(sim_mses, vmin=0, vmax=np.max(clc_mses,axis=None))
plt.yticks(range(len(Ds)),[ '{:0.4f}'.format(d) for d in Ds]), plt.xticks(range(len(Ms)),Ms)
plt.ylabel('D'), plt.xlabel('M')
plt.title( 'MSE (simulation)' )
plt.axis('on')
plt.colorbar(im)
plt.subplot(133)
im = imshow(np.abs(sim_mses-clc_mses), cmap='inferno')
plt.yticks(range(len(Ds)),[ '{:0.4f}'.format(d) for d in Ds]), plt.xticks(range(len(Ms)),Ms)
plt.ylabel('D'), plt.xlabel('M')
plt.title( 'MSE (diff)' )
plt.axis('on')
plt.colorbar(im)
plt.show()
#theoritcal_MSE(signalmean,signalvar,occlmean,occlvar,occldens,noofintegratedimage,numofover)
s_mean = 1.0
N = 30
a = N
sim_trails = 100 # number of simulation trails
Ds = list(np.linspace(0.0, 1.0, 30))
Ms = list(range(1,N,3)) # M < N, causes problems with the simulation otherwise
pure_mses = np.zeros((len(Ds),len(Ms)))
over_mses = np.zeros_like(pure_mses)
for D in Ds:
for M in Ms:
pure_mses[Ds.index(D), Ms.index(M)] = theoritcal_MSE(s_mean,0,0,0,D,N,N)
over_mses[Ds.index(D), Ms.index(M)] = theoritcal_MSE(s_mean,0,0,0,D,N,int(M))
plt.figure()
plt.plot(Ds, pure_mses)
plt.plot(Ds, over_mses)
plt.show()
###Output
_____no_output_____
###Markdown
Correction Factors Trying to find a correction factor that relates the previous equation $D^2 + \frac{D (1-D)}{N}$ to the new one $(1-\frac{A (1-D)}{N})^2 + \frac{A D (1-D)}{N^2}$Note, we are assuming signal mean of 1, occluder mean of 0, and 0 variances (signal and occluders).By applying a correction term $\gamma$ we can relate the two equations$$ D'^2 + \frac{D' (1-D')}{N'} = (1-\frac{A (1-D)}{N})^2 + \frac{A D (1-D)}{N^2} \text{,}$$where $D' = \gamma D$, $N' = \gamma N$, and $$\gamma = \frac{A (D-1) + N}{D N}$$. By expressing the term $A/N$ as $\alpha$ the equation simplifies to $$ \gamma = \frac{\alpha(D-1)+1}{D} \text{.}$$[See Wolfram Alpha for derivation.](https://www.wolframalpha.com/input/?i=solve+%28x*D%29%5E2+%2B+%28x*D%29*%281-x*D%29%2F%28x*N%29%3D%281-A*%281-D%29%2FN%29%5E2%2BA*D*%281-D%29%2FN%5E2+for+x)Note that $\gamma$ is not defined and not applicable for $D=0$. For $D=0$ the equation $(1-\frac{A (1-D)}{N})^2 + \frac{A D (1-D)}{N^2}$ simplifies to $(\frac{A}{N}-1)^2$, which cannot be expressed by a multiplication.
###Code
def correction_factor(D,N,A):
if D==0:
return np.nan # if D is zero there is no correction term!
else:
return (A *(-1 + D) + N)/(D * N)
corr_mses = np.zeros_like(pure_mses)
corr_factor = np.zeros_like(pure_mses)
for D in Ds:
for M in Ms:
cf = correction_factor(D,N,int(M))
corr_mses[Ds.index(D), Ms.index(M)] = theoritcal_MSE(s_mean,0,0,0,D*cf,N*cf,N*cf)
corr_factor[Ds.index(D), Ms.index(M)] = cf
plt.figure(figsize=(16,10))
plt.plot(Ds, pure_mses, 'k'), plt.annotate('a=1.00',(0,0),ha='right', va='top')
#plt.plot(Ds, over_mses)
plt.plot(Ds, corr_mses, ':')
for M in Ms:
plt.annotate(f'a={M/N:.2f}',(Ds[1],corr_mses[1,Ms.index(M)]),ha='right',va='bottom')
plt.xlabel('D'), plt.ylabel('MSE'), plt.title( 'MSEs')
plt.show()
plt.figure(figsize=(16,10))
plt.plot(Ds, corr_factor)
for M in Ms:
plt.annotate(f'a={M/N:.2f}',(Ds[1],corr_factor[1,Ms.index(M)]),ha='right',va='bottom')
plt.xlabel('D'), plt.ylabel('correction factor ($\gamma$)'), plt.title( 'correction')
plt.show()
###Output
_____no_output_____
###Markdown
An alternative might be to express the difference as an overall offset:[See Alpha](https://www.wolframalpha.com/input/?i=solve+%28D%29%5E2+%2B+%28D%29*%281-D%29%2F%28N%29%2Bo%3D%281-a*%281-D%29%29%5E2%2Ba*D*%281-D%29%2FN+for+o)The problem, however, is that the resulting equation is rather large.
###Code
def correction_offset(D,N,A):
a = A/N
return (a-1)*(D-1)*(D*(a*N+N-1)-a*N+N) / N
corr_mses = np.zeros_like(pure_mses)
corr_offs = np.zeros_like(pure_mses)
for D in Ds:
for M in Ms:
co = correction_offset(D,N,int(M))
corr_mses[Ds.index(D), Ms.index(M)] = theoritcal_MSE(s_mean,0,0,0,D,N,N) + co
corr_offs[Ds.index(D), Ms.index(M)] = co
plt.figure(figsize=(16,10))
plt.plot(Ds, pure_mses, 'k'), plt.annotate('a=1.00',(0,0),ha='right', va='top')
#plt.plot(Ds, over_mses)
plt.plot(Ds, corr_mses, ':')
for M in Ms:
plt.annotate(f'a={M/N:.2f}',(0,corr_mses[0,Ms.index(M)]),ha='right',va='bottom')
plt.xlabel('D'), plt.ylabel('MSE'), plt.title( 'MSEs')
plt.show()
plt.figure(figsize=(16,10))
plt.plot(Ds, corr_offs)
for M in Ms:
plt.annotate(f'a={M/N:.2f}',(0,corr_offs[0,Ms.index(M)]),ha='right',va='bottom')
plt.xlabel('D'), plt.ylabel('MSE offset'), plt.title( 'correction offsets')
plt.show()
###Output
_____no_output_____
###Markdown
Moving Camera Array
If multiple cameras are mounted on the drone, the equations change slightly.
We use the equation $$
(1-\frac{A (1-D)}{N})^2 + \frac{A D (1-D)}{N^2} \text{,}
$$ as basis.
If the camera array has $C$ cameras that record and move simultaneously, then $\hat{N} = N C$ and $\hat{A} = A C$, which means that the overall ratio $A/N$ remains the same. The number of integrated images, however, is increased by the factor $C$.
Thus, the previous equation reformulates to
$$
(1-\frac{\hat A (1-D)}{\hat N})^2 + \frac{\hat A D (1-D)}{\hat{N}^2} = (1-\frac{AC (1-D)}{NC})^2 + \frac{AC D (1-D)}{N^2C^2} \text{,}
$$
which further simplifies to
$$
\left(1-\frac{A (1-D)}{N}\right)^2 + \frac{A D (1-D)}{N^2C} \text{.}
$$
With $\alpha = A/N$ this yields
$$
\left(1-{\alpha (1-D)}\right)^2 + \frac{\alpha D (1-D)}{N C} \text{,}
$$
We could keep $N$ as the total number of integrated images, which would not put $NC$ in the denominator, where
$A$ = min(number of sequential steps whose images are integrated, number of images overlapping the signal due to motion) and
$C$ = number of cameras in the camera array.
Thus, the previous equation reformulates to
$$
MSE = (1-\frac{AC (1-D)}{N})^2 + \frac{AC D (1-D)}{N^2} \text{,}
$$
**verified by simpler simulation.**
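As a quick numerical check (with illustrative numbers), multiplying the overlap count $A$ by $C$ in the single-camera formula gives the same value as the parallel/sequential variant defined above:
###Code
# Illustrative check: A=2 overlapping images, C=5 cameras, N=50 integrated images, D=0.3
A_chk, C_chk, N_chk, D_chk = 2, 5, 50, 0.3
print(theoritcal_MSE_Parallel_Sequential(1.0, 0, 0.0, 0, D_chk, N_chk, A_chk, C_chk))
print(theoritcal_MSE(1.0, 0, 0.0, 0, D_chk, N_chk, A_chk * C_chk))
###Output
_____no_output_____
###Markdown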
###Code
max_no_integrated_images = 201
occl_density = 0.5
occl_size = 5
occl_disparity = 5
motion_shift = 50
num_parallel_camera = 5
imgSize = (2048,2048)
integral_image = np.zeros(imgSize)
mixType = 'replace'
signalType = 'binarymotion'
signalMean = 0.0
signalSigma = 0
signalsize = (400,400)
noiseType = 'binary'
noiseSigma = 0
noiseMean = 1
showimages = False
N = 10
if signalType == 'binarymotion':
signal = np.ones(imgSize) # create signal filled with ones and create a signal region in the image
signal[int(np.floor(imgSize[0]/2-signalsize[0]/2)):int(np.ceil(imgSize[0]/2+signalsize[0]/2)),int(np.floor((signalsize[1]+1)-signalsize[1]/2)):int(np.ceil((signalsize[1]+1)+signalsize[1]/2))] = signalMean
imshow(signal, vmin=0.0, vmax=1.0)
plt.show()
## To Check Moving Signal
#for i in range(1,max_no_integrated_images):
# rotsignal = np.roll(signal, i*motion_shift, axis=1)
# imshow(rotsignal, vmin=0.0, vmax=1.0)
# plt.show()
mse = []
theo_mse = []
singleimage_stack = []
noiseImgSize = (int(np.ceil( imgSize[0]/occl_size + (max_no_integrated_images*occl_disparity))),int(np.ceil( imgSize[1]/occl_size + max_no_integrated_images*occl_disparity)))
print(noiseImgSize)
#create uniformly distributed random image filled with ones
uniform_rand_img = (np.random.uniform(low = 0.0, high = 1.0,size = noiseImgSize) <= occl_density).astype(int) * noiseMean
#resize the image to create occluders of size occl_size
shiftImg = cv2.resize(src = uniform_rand_img, dsize = (noiseImgSize[0]*occl_size,noiseImgSize[1]*occl_size), interpolation = cv2.INTER_NEAREST)
#To check if occluders are binary with noise mean
nonzerosimg = shiftImg[np.nonzero(shiftImg)]
print('min: {}, max: {}'.format(min(nonzerosimg), max(nonzerosimg)))
#imshow(shiftImg, vmin=0.0, vmax=1.0)
#plt.title( 'mean: {}, var: {}'.format(np.mean(shiftImg), np.var(shiftImg)))
#plt.show()
summedimage = np.zeros(imgSize)
numberofmotionshift = 1
for i in range(1,max_no_integrated_images):
pixShift = i * occl_disparity
tmp = np.zeros(imgSize)
#Take a shifted portion of the noise image
tmp = shiftImg[0:imgSize[0],pixShift+0:pixShift+imgSize[1]]
print('i',i)
if (i-1) % num_parallel_camera == 0:
print('shifted signal')
#shift the signal to show the motion
signal = np.roll(signal, motion_shift, axis=1)
numberofmotionshift = numberofmotionshift + 1
#replace where noise is zero
combimg = tmp.copy()
combimg[tmp == 0] = signal[tmp == 0]
#combimg = tmp + signal
#combimg[combimg>=noiseMean] = noiseMean
#imshow(combimg, vmin=0.0, vmax=1.0)
#plt.show()
#imshow(signal, vmin=0.0, vmax=1.0)
#plt.show()
#add to the sum image
summedimage = summedimage + combimg
# divide by i to get the mean integral image
integral_image = summedimage / i
if i % num_parallel_camera == 0:
#calculate start and end pos of area for which mse is calculated ---
# For N < A we take image regions where N signal images are integrated
# For N > A we take image regions where A signal images are integrated
startpos = np.floor((signalsize[1]+1)-signalsize[1]/2) + (min(numberofmotionshift, np.ceil(signalsize[1]/motion_shift))) * motion_shift
endpos = np.floor((signalsize[1]+1)+signalsize[1]/2) + (max(numberofmotionshift-np.ceil(signalsize[1]/motion_shift),1)) * motion_shift
# Copy the selected region from the integral image
projimg = integral_image[int(np.floor(imgSize[0]/2-signalsize[0]/2)):int(np.ceil(imgSize[0]/2+signalsize[0]/2)),int(startpos):int(endpos)]
# create a binary signal image of same region
sigimg = np.ones(projimg.shape)*signalMean
print('Startpos',startpos,'endpos',endpos)
if showimages:
imshow(integral_image, vmin=0.0, vmax=1.0)
plt.show()
imshow(projimg, vmin=0.0, vmax=1.0)
plt.show()
imshow(sigimg, vmin=0.0, vmax=1.0)
plt.show()
# calculate mse
squared_subtractimg = np.square(np.subtract(projimg,sigimg))
avg = np.mean(squared_subtractimg)
# For N < A we take A = N
# For N >= A we take A
noofimageoverlap = min((numberofmotionshift-1), np.ceil(signalsize[1]/motion_shift))
# Calculate theoritical mse
theoriticalmse = theoritcal_MSE_Parallel_Sequential(signalMean,signalSigma,noiseMean,noiseSigma,occl_density,i,noofimageoverlap,num_parallel_camera)
print('measured mse: {}, theoritical mse: {}'.format(avg, theoriticalmse))
mse.append(avg)
theo_mse.append(theoriticalmse)
plt.plot(range(1,numberofmotionshift), mse, 'g--', linewidth=2, markersize=12 , label = 'measured mse')
plt.plot(range(1,numberofmotionshift), theo_mse, 'r', linewidth=2, markersize=12 , label = 'theoritical mse')
plt.legend()
plt.show()
###Output
_____no_output_____ |
2_gym_wrappers_saving_loading.ipynb | ###Markdown
Stable Baselines Tutorial - Gym wrappers, saving and loading modelsGithub repo: https://github.com/araffin/rl-tutorial-jnrr19Stable-Baselines: https://github.com/hill-a/stable-baselinesDocumentation: https://stable-baselines.readthedocs.io/en/master/RL Baselines zoo: https://github.com/araffin/rl-baselines-zoo IntroductionIn this notebook, you will learn how to use *Gym Wrappers*, which allow you to monitor training, normalize observations and actions, limit the number of steps per episode, augment features, and more.You will also see the *loading* and *saving* functions, and how to read the output files for possible exporting. Install Dependencies and Stable Baselines Using Pip
###Code
# Stable Baselines only supports tensorflow 1.x for now
%tensorflow_version 1.x
!apt install swig
!pip install stable-baselines[mpi]==2.10.0
import gym
from stable_baselines import A2C, SAC, PPO2, TD3
###Output
_____no_output_____
###Markdown
Saving and loadingSaving and loading stable-baselines models is straightforward: you can directly call `.save()` and `.load()` on the models.
###Code
import os
# Create save dir
save_dir = "/tmp/gym/"
os.makedirs(save_dir, exist_ok=True)
model = PPO2('MlpPolicy', 'Pendulum-v0', verbose=0).learn(8000)
# The model will be saved under PPO2_tutorial.zip
model.save(save_dir + "/PPO2_tutorial")
# sample an observation from the environment
obs = model.env.observation_space.sample()
# Check prediction before saving
print("pre saved", model.predict(obs, deterministic=True))
del model # delete trained model to demonstrate loading
loaded_model = PPO2.load(save_dir + "/PPO2_tutorial")
# Check that the prediction is the same after loading (for the same observation)
print("loaded", loaded_model.predict(obs, deterministic=True))
###Output
_____no_output_____
###Markdown
Saving in stable-baselines is quite powerful, as you save the training hyperparameters along with the current weights. In practice, this means you can simply load a custom model, without redefining the parameters, and continue learning.The loading function can also update the model's class variables when loading.
###Code
import os
from stable_baselines.common.vec_env import DummyVecEnv
# Create save dir
save_dir = "/tmp/gym/"
os.makedirs(save_dir, exist_ok=True)
model = A2C('MlpPolicy', 'Pendulum-v0', verbose=0, gamma=0.9, n_steps=20).learn(8000)
# The model will be saved under A2C_tutorial.zip
model.save(save_dir + "/A2C_tutorial")
del model # delete trained model to demonstrate loading
# load the model, and when loading set verbose to 1
loaded_model = A2C.load(save_dir + "/A2C_tutorial", verbose=1)
# show the save hyperparameters
print("loaded:", "gamma =", loaded_model.gamma, "n_steps =", loaded_model.n_steps)
# as the environment is not serializable, we need to set a new instance of the environment
loaded_model.set_env(DummyVecEnv([lambda: gym.make('Pendulum-v0')]))
# and continue training
loaded_model.learn(8000)
###Output
_____no_output_____
###Markdown
Gym and VecEnv wrappers Anatomy of a gym wrapper A gym wrapper follows the [gym](https://stable-baselines.readthedocs.io/en/master/guide/custom_env.html) interface: it has a `reset()` and `step()` method.Because a wrapper is *around* an environment, we can access it with `self.env`, which allows us to easily interact with it without modifying the original env.There are many predefined wrappers; for a complete list, refer to the [gym documentation](https://github.com/openai/gym/tree/master/gym/wrappers)
###Code
class CustomWrapper(gym.Wrapper):
"""
:param env: (gym.Env) Gym environment that will be wrapped
"""
def __init__(self, env):
# Call the parent constructor, so we can access self.env later
super(CustomWrapper, self).__init__(env)
def reset(self):
"""
Reset the environment
"""
obs = self.env.reset()
return obs
def step(self, action):
"""
:param action: ([float] or int) Action taken by the agent
:return: (np.ndarray, float, bool, dict) observation, reward, is the episode over?, additional informations
"""
obs, reward, done, info = self.env.step(action)
return obs, reward, done, info
###Output
_____no_output_____
###Markdown
First example: limit the episode lengthOne practical use case of a wrapper is when you want to limit the number of steps per episode; for that, you will need to overwrite the `done` signal when the limit is reached. It is also good practice to pass that information in the `info` dictionary.
###Code
class TimeLimitWrapper(gym.Wrapper):
"""
:param env: (gym.Env) Gym environment that will be wrapped
:param max_steps: (int) Max number of steps per episode
"""
def __init__(self, env, max_steps=100):
# Call the parent constructor, so we can access self.env later
super(TimeLimitWrapper, self).__init__(env)
self.max_steps = max_steps
# Counter of steps per episode
self.current_step = 0
def reset(self):
"""
Reset the environment
"""
# Reset the counter
self.current_step = 0
return self.env.reset()
def step(self, action):
"""
:param action: ([float] or int) Action taken by the agent
:return: (np.ndarray, float, bool, dict) observation, reward, is the episode over?, additional informations
"""
self.current_step += 1
obs, reward, done, info = self.env.step(action)
# Overwrite the done signal when the step limit is reached
if self.current_step >= self.max_steps:
done = True
# Update the info dict to signal that the limit was exceeded
info['time_limit_reached'] = True
return obs, reward, done, info
###Output
_____no_output_____
###Markdown
Test the wrapper
###Code
from gym.envs.classic_control.pendulum import PendulumEnv
# Here we create the environment directly because gym.make() would otherwise already wrap the environment in a TimeLimit wrapper
env = PendulumEnv()
# Wrap the environment
env = TimeLimitWrapper(env, max_steps=100)
obs = env.reset()
done = False
n_steps = 0
while not done:
# Take random actions
random_action = env.action_space.sample()
obs, reward, done, info = env.step(random_action)
n_steps += 1
print(n_steps, info)
###Output
_____no_output_____
###Markdown
In practice, `gym` already has a wrapper for that, named `TimeLimit` (`gym.wrappers.TimeLimit`), which is used by most environments. Second example: normalize actionsIt is usually a good idea to normalize observations and actions before giving them to the agent; this prevents [hard-to-debug issues](https://github.com/hill-a/stable-baselines/issues/473).In this example, we are going to normalize the action space of *Pendulum-v0* so it lies in [-1, 1] instead of [-2, 2].Note: here we are dealing with continuous actions, hence the `gym.Box` space
###Code
import numpy as np
class NormalizeActionWrapper(gym.Wrapper):
"""
:param env: (gym.Env) Gym environment that will be wrapped
"""
def __init__(self, env):
# Retrieve the action space
action_space = env.action_space
assert isinstance(action_space, gym.spaces.Box), "This wrapper only works with continuous action space (spaces.Box)"
# Retrieve the max/min values
self.low, self.high = action_space.low, action_space.high
# We modify the action space, so all actions will lie in [-1, 1]
env.action_space = gym.spaces.Box(low=-1, high=1, shape=action_space.shape, dtype=np.float32)
# Call the parent constructor, so we can access self.env later
super(NormalizeActionWrapper, self).__init__(env)
def rescale_action(self, scaled_action):
"""
Rescale the action from [-1, 1] to [low, high]
(no need for symmetric action space)
:param scaled_action: (np.ndarray)
:return: (np.ndarray)
"""
return self.low + (0.5 * (scaled_action + 1.0) * (self.high - self.low))
def reset(self):
"""
Reset the environment
"""
# Reset the counter
return self.env.reset()
def step(self, action):
"""
:param action: ([float] or int) Action taken by the agent
:return: (np.ndarray, float, bool, dict) observation, reward, is the episode over?, additional informations
"""
# Rescale action from [-1, 1] to original [low, high] interval
rescaled_action = self.rescale_action(action)
obs, reward, done, info = self.env.step(rescaled_action)
return obs, reward, done, info
###Output
_____no_output_____
###Markdown
Test before rescaling actions
###Code
original_env = gym.make("Pendulum-v0")
print(original_env.action_space.low)
for _ in range(10):
print(original_env.action_space.sample())
###Output
_____no_output_____
###Markdown
Test the NormalizeAction wrapper
###Code
env = NormalizeActionWrapper(gym.make("Pendulum-v0"))
print(env.action_space.low)
for _ in range(10):
print(env.action_space.sample())
###Output
_____no_output_____
###Markdown
Test with an RL algorithmWe are going to use the Monitor wrapper of stable baselines, which allows us to monitor training stats (mean episode reward, mean episode length)
###Code
from stable_baselines.bench import Monitor
from stable_baselines.common.vec_env import DummyVecEnv
env = Monitor(gym.make('Pendulum-v0'), filename=None, allow_early_resets=True)
env = DummyVecEnv([lambda: env])
model = A2C("MlpPolicy", env, verbose=1).learn(int(1000))
###Output
_____no_output_____
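###Markdown
The `Monitor` wrapper keeps the per-episode statistics in memory; a minimal sketch of how to read them back (the Monitor instance sits inside the `DummyVecEnv`):
###Code
monitor_env = env.envs[0]
# Number of completed episodes and their lengths so far
print(len(monitor_env.get_episode_rewards()), monitor_env.get_episode_lengths())
###Output
_____no_output_____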
###Markdown
With the action wrapper
###Code
normalized_env = Monitor(gym.make('Pendulum-v0'), filename=None, allow_early_resets=True)
# Note that we can use multiple wrappers
normalized_env = NormalizeActionWrapper(normalized_env)
normalized_env = DummyVecEnv([lambda: normalized_env])
model_2 = A2C("MlpPolicy", normalized_env, verbose=1).learn(int(1000))
###Output
_____no_output_____
###Markdown
Additional wrappers: VecEnvWrappersIn the same vein as gym wrappers, stable baselines provides wrappers for `VecEnv`. Among the different ones that exist (and you can create your own), you should know: - VecNormalize: it computes a running mean and standard deviation to normalize observations and returns- VecFrameStack: it stacks several consecutive observations (useful to integrate time in the observation, e.g. successive frames of an Atari game); a short example is shown after the VecNormalize demo below.More info in the [documentation](https://stable-baselines.readthedocs.io/en/master/guide/vec_envs.html#wrappers)Note: when using the `VecNormalize` wrapper, you must save the running mean and std along with the model, otherwise you will not get proper results when loading the agent again. If you use the [rl zoo](https://github.com/araffin/rl-baselines-zoo), this is done automatically
###Code
from stable_baselines.common.vec_env import VecNormalize, VecFrameStack
env = DummyVecEnv([lambda: gym.make("Pendulum-v0")])
normalized_vec_env = VecNormalize(env)
obs = normalized_vec_env.reset()
for _ in range(10):
action = [normalized_vec_env.action_space.sample()]
obs, reward, _, _ = normalized_vec_env.step(action)
print(obs, reward)
###Output
_____no_output_____
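###Markdown
`VecFrameStack` is used the same way; a minimal sketch stacking the last 4 observations of Pendulum (the number of stacked frames is an arbitrary choice):
###Code
env = DummyVecEnv([lambda: gym.make("Pendulum-v0")])
stacked_env = VecFrameStack(env, n_stack=4)
obs = stacked_env.reset()
# Pendulum observations have shape (3,); stacking 4 frames along the last
# axis should give a vectorized observation of shape (1, 12)
print(obs.shape)
###Output
_____no_output_____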
###Markdown
Exercise: code your own monitor wrapperNow that you know how a wrapper works and what you can do with it, it's time to experiment.The goal here is to create a wrapper that will monitor the training progress, storing both the episode reward (sum of rewards for one episode) and the episode length (number of steps of the last episode).You will return those values using the `info` dict after each end of episode.
###Code
class MyMonitorWrapper(gym.Wrapper):
"""
:param env: (gym.Env) Gym environment that will be wrapped
"""
def __init__(self, env):
# Call the parent constructor, so we can access self.env later
super(MyMonitorWrapper, self).__init__(env)
# === YOUR CODE HERE ===#
# Initialize the variables that will be used
# to store the episode length and episode reward
# ====================== #
def reset(self):
"""
Reset the environment
"""
obs = self.env.reset()
# === YOUR CODE HERE ===#
# Reset the variables
# ====================== #
return obs
def step(self, action):
"""
:param action: ([float] or int) Action taken by the agent
:return: (np.ndarray, float, bool, dict) observation, reward, is the episode over?, additional informations
"""
obs, reward, done, info = self.env.step(action)
# === YOUR CODE HERE ===#
# Update the current episode reward and episode length
# ====================== #
if done:
# === YOUR CODE HERE ===#
# Store the episode length and episode reward in the info dict
# ====================== #
return obs, reward, done, info
###Output
_____no_output_____
###Markdown
Test your wrapper
###Code
# To use LunarLander, you need to install box2d box2d-kengz (pip) and swig (apt-get)
!pip install box2d box2d-kengz
env = gym.make("LunarLander-v2")
# === YOUR CODE HERE ===#
# Wrap the environment
# Reset the environment
# Take random actions in the enviromnent and check
# that it returns the correct values after the end of each episode
# ====================== #
###Output
_____no_output_____
###Markdown
Conclusion In this notebook, we have seen: - how to easily save and load a model - what a wrapper is and what we can do with it - how to create your own wrapper Wrapper Bonus: changing the observation space: a wrapper for episodes of fixed length
###Code
from gym.wrappers import TimeLimit
class TimeFeatureWrapper(gym.Wrapper):
"""
Add remaining time to observation space for fixed length episodes.
See https://arxiv.org/abs/1712.00378 and https://github.com/aravindr93/mjrl/issues/13.
:param env: (gym.Env)
:param max_steps: (int) Max number of steps of an episode
if it is not wrapped in a TimeLimit object.
:param test_mode: (bool) In test mode, the time feature is constant,
equal to 1.0. This allows checking that the agent did not overfit this feature,
learning a deterministic pre-defined sequence of actions.
"""
def __init__(self, env, max_steps=1000, test_mode=False):
assert isinstance(env.observation_space, gym.spaces.Box)
# Add a time feature to the observation
low, high = env.observation_space.low, env.observation_space.high
low, high= np.concatenate((low, [0])), np.concatenate((high, [1.]))
env.observation_space = gym.spaces.Box(low=low, high=high, dtype=np.float32)
super(TimeFeatureWrapper, self).__init__(env)
if isinstance(env, TimeLimit):
self._max_steps = env._max_episode_steps
else:
self._max_steps = max_steps
self._current_step = 0
self._test_mode = test_mode
def reset(self):
self._current_step = 0
return self._get_obs(self.env.reset())
def step(self, action):
self._current_step += 1
obs, reward, done, info = self.env.step(action)
return self._get_obs(obs), reward, done, info
def _get_obs(self, obs):
"""
Concatenate the time feature to the current observation.
:param obs: (np.ndarray)
:return: (np.ndarray)
"""
# Remaining time is more general
time_feature = 1 - (self._current_step / self._max_steps)
if self._test_mode:
time_feature = 1.0
# Optionally: concatenate [time_feature, time_feature ** 2]
return np.concatenate((obs, [time_feature]))
###Output
_____no_output_____
###Markdown
Going further - Saving format The format for saving and loading models has been recently revamped as of Stable-Baselines (>2.7.0).It is a zip-archived JSON dump and NumPy zip archive of the arrays:```saved_model.zip/├── data JSON file of class-parameters (dictionary)├── parameter_list JSON file of model parameters and their ordering (list)├── parameters Bytes from numpy.savez (a zip file of the numpy arrays). ... ├── ... Being a zip-archive itself, this object can also be opened ... ├── ... as a zip-archive and browsed.``` Save and find
###Code
# Create save dir
save_dir = "/tmp/gym/"
os.makedirs(save_dir, exist_ok=True)
model = PPO2('MlpPolicy', 'Pendulum-v0', verbose=0).learn(8000)
model.save(save_dir + "/PPO2_tutorial")
!ls /tmp/gym/PPO2_tutorial*
import zipfile
archive = zipfile.ZipFile("/tmp/gym/PPO2_tutorial.zip", 'r')
for f in archive.filelist:
print(f.filename)
###Output
_____no_output_____
###Markdown
Stable Baselines3 Tutorial - Gym wrappers, saving and loading modelsGithub repo: https://github.com/araffin/rl-tutorial-jnrr19/tree/sb3/Stable-Baselines3: https://github.com/DLR-RM/stable-baselines3Documentation: https://stable-baselines3.readthedocs.io/en/master/RL Baselines3 zoo: https://github.com/DLR-RM/rl-baselines3-zoo IntroductionIn this notebook, you will learn how to use *Gym Wrappers*, which allow monitoring, normalization, limiting the number of steps, feature augmentation, ...You will also see the *loading* and *saving* functions, and how to read the output files for possible exporting. Install Dependencies and Stable Baselines3 Using Pip
###Code
!apt install swig
!pip install stable-baselines3[extra]
import gym
from stable_baselines3 import A2C, SAC, PPO, TD3
###Output
_____no_output_____
###Markdown
Saving and loadingSaving and loading stable-baselines models is straightforward: you can directly call `.save()` and `.load()` on the models.
###Code
import os
# Create save dir
save_dir = "/tmp/gym/"
os.makedirs(save_dir, exist_ok=True)
model = PPO('MlpPolicy', 'Pendulum-v0', verbose=0).learn(8000)
# The model will be saved under PPO_tutorial.zip
model.save(save_dir + "/PPO_tutorial")
# sample an observation from the environment
obs = model.env.observation_space.sample()
# Check prediction before saving
print("pre saved", model.predict(obs, deterministic=True))
del model # delete trained model to demonstrate loading
loaded_model = PPO.load(save_dir + "/PPO_tutorial")
# Check that the prediction is the same after loading (for the same observation)
print("loaded", loaded_model.predict(obs, deterministic=True))
###Output
pre saved (array([-0.01057339], dtype=float32), None)
loaded (array([-0.01057339], dtype=float32), None)
###Markdown
Saving in stable-baselines is quite powerful, as you save the training hyperparameters, with the current weights. This means in practice, you can simply load a custom model, without redefining the parameters, and continue learning.The loading function can also update the model's class variables when loading.
###Code
import os
from stable_baselines3.common.vec_env import DummyVecEnv
# Create save dir
save_dir = "/tmp/gym/"
os.makedirs(save_dir, exist_ok=True)
model = A2C('MlpPolicy', 'Pendulum-v0', verbose=0, gamma=0.9, n_steps=20).learn(8000)
# The model will be saved under A2C_tutorial.zip
model.save(save_dir + "/A2C_tutorial")
del model # delete trained model to demonstrate loading
# load the model, and when loading set verbose to 1
loaded_model = A2C.load(save_dir + "/A2C_tutorial", verbose=1)
# show the save hyperparameters
print("loaded:", "gamma =", loaded_model.gamma, "n_steps =", loaded_model.n_steps)
# as the environment is not serializable, we need to set a new instance of the environment
loaded_model.set_env(DummyVecEnv([lambda: gym.make('Pendulum-v0')]))
# and continue training
loaded_model.learn(8000)
###Output
loaded: gamma = 0.9 n_steps = 20
###Markdown
Gym and VecEnv wrappers Anatomy of a gym wrapper A gym wrapper follows the [gym](https://stable-baselines.readthedocs.io/en/master/guide/custom_env.html) interface: it has a `reset()` and `step()` method.Because a wrapper is *around* an environment, we can access it with `self.env`; this allows us to easily interact with it without modifying the original env.There are many predefined wrappers; for a complete list, refer to the [gym documentation](https://github.com/openai/gym/tree/master/gym/wrappers)
###Code
class CustomWrapper(gym.Wrapper):
"""
:param env: (gym.Env) Gym environment that will be wrapped
"""
def __init__(self, env):
# Call the parent constructor, so we can access self.env later
super(CustomWrapper, self).__init__(env)
def reset(self):
"""
Reset the environment
"""
obs = self.env.reset()
return obs
def step(self, action):
"""
:param action: ([float] or int) Action taken by the agent
:return: (np.ndarray, float, bool, dict) observation, reward, is the episode over?, additional information
"""
obs, reward, done, info = self.env.step(action)
return obs, reward, done, info
###Output
_____no_output_____
###Markdown
First example: limit the episode lengthOne practical use case of a wrapper is when you want to limit the number of steps per episode; to do that, you will need to overwrite the `done` signal when the limit is reached. It is also good practice to pass that information in the `info` dictionary.
###Code
class TimeLimitWrapper(gym.Wrapper):
"""
:param env: (gym.Env) Gym environment that will be wrapped
:param max_steps: (int) Max number of steps per episode
"""
def __init__(self, env, max_steps=100):
# Call the parent constructor, so we can access self.env later
super(TimeLimitWrapper, self).__init__(env)
self.max_steps = max_steps
# Counter of steps per episode
self.current_step = 0
def reset(self):
"""
Reset the environment
"""
# Reset the counter
self.current_step = 0
return self.env.reset()
def step(self, action):
"""
:param action: ([float] or int) Action taken by the agent
:return: (np.ndarray, float, bool, dict) observation, reward, is the episode over?, additional information
"""
self.current_step += 1
obs, reward, done, info = self.env.step(action)
# Overwrite the done signal when
if self.current_step >= self.max_steps:
done = True
# Update the info dict to signal that the limit was exceeded
info['time_limit_reached'] = True
return obs, reward, done, info
###Output
_____no_output_____
###Markdown
Test the wrapper
###Code
from gym.envs.classic_control.pendulum import PendulumEnv
# Here we create the environment directly because gym.make() would otherwise already wrap the environment in a TimeLimit wrapper
env = PendulumEnv()
# Wrap the environment
env = TimeLimitWrapper(env, max_steps=100)
obs = env.reset()
done = False
n_steps = 0
while not done:
# Take random actions
random_action = env.action_space.sample()
obs, reward, done, info = env.step(random_action)
n_steps += 1
print(n_steps, info)
###Output
100 {'time_limit_reached': True}
###Markdown
In practice, `gym` already has a wrapper for that, named `TimeLimit` (`gym.wrappers.TimeLimit`), which is used by most environments. Second example: normalize actionsIt is usually a good idea to normalize observations and actions before giving them to the agent, as this prevents [hard-to-debug issues](https://github.com/hill-a/stable-baselines/issues/473).In this example, we are going to normalize the action space of *Pendulum-v0* so it lies in [-1, 1] instead of [-2, 2].Note: here we are dealing with continuous actions, hence the `gym.Box` space
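A minimal usage sketch of that built-in `TimeLimit` wrapper (illustration only; `max_episode_steps` is the parameter name in the gym API):

```python
from gym.wrappers import TimeLimit
from gym.envs.classic_control.pendulum import PendulumEnv

env = TimeLimit(PendulumEnv(), max_episode_steps=100)
```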
###Code
import numpy as np
class NormalizeActionWrapper(gym.Wrapper):
"""
:param env: (gym.Env) Gym environment that will be wrapped
"""
def __init__(self, env):
# Retrieve the action space
action_space = env.action_space
assert isinstance(action_space, gym.spaces.Box), "This wrapper only works with continuous action space (spaces.Box)"
# Retrieve the max/min values
self.low, self.high = action_space.low, action_space.high
# We modify the action space, so all actions will lie in [-1, 1]
env.action_space = gym.spaces.Box(low=-1, high=1, shape=action_space.shape, dtype=np.float32)
# Call the parent constructor, so we can access self.env later
super(NormalizeActionWrapper, self).__init__(env)
def rescale_action(self, scaled_action):
"""
Rescale the action from [-1, 1] to [low, high]
(no need for symmetric action space)
:param scaled_action: (np.ndarray)
:return: (np.ndarray)
"""
return self.low + (0.5 * (scaled_action + 1.0) * (self.high - self.low))
def reset(self):
"""
Reset the environment
"""
# Reset the counter
return self.env.reset()
def step(self, action):
"""
:param action: ([float] or int) Action taken by the agent
:return: (np.ndarray, float, bool, dict) observation, reward, is the episode over?, additional information
"""
# Rescale action from [-1, 1] to original [low, high] interval
rescaled_action = self.rescale_action(action)
obs, reward, done, info = self.env.step(rescaled_action)
return obs, reward, done, info
###Output
_____no_output_____
###Markdown
Test before rescaling actions
###Code
original_env = gym.make("Pendulum-v0")
print(original_env.action_space.low)
for _ in range(10):
print(original_env.action_space.sample())
###Output
[-2.]
[0.07473034]
[-1.2432275]
[0.53824383]
[-0.48907268]
[0.3432211]
[-0.95533466]
[-0.5442549]
[1.8221357]
[1.4915677]
[-1.9463363]
###Markdown
Test the NormalizeAction wrapper
###Code
env = NormalizeActionWrapper(gym.make("Pendulum-v0"))
print(env.action_space.low)
for _ in range(10):
print(env.action_space.sample())
###Output
[-1.]
[-0.86028206]
[-0.63513726]
[0.565501]
[0.53458834]
[0.92259634]
[-0.40233672]
[-0.41188562]
[-0.42891768]
[-0.26115742]
[-0.37986052]
###Markdown
Test with an RL algorithmWe are going to use the Monitor wrapper of stable baselines, which allows monitoring training stats (mean episode reward, mean episode length)
###Code
from stable_baselines3.common.monitor import Monitor
from stable_baselines3.common.vec_env import DummyVecEnv
env = Monitor(gym.make('Pendulum-v0'))
env = DummyVecEnv([lambda: env])
model = A2C("MlpPolicy", env, verbose=1).learn(int(1000))
###Output
Using cpu device
###Markdown
With the action wrapper
###Code
normalized_env = Monitor(gym.make('Pendulum-v0'))
# Note that we can use multiple wrappers
normalized_env = NormalizeActionWrapper(normalized_env)
normalized_env = DummyVecEnv([lambda: normalized_env])
model_2 = A2C("MlpPolicy", normalized_env, verbose=1).learn(int(1000))
###Output
Using cpu device
###Markdown
Additional wrappers: VecEnvWrappersIn the same vein as gym wrappers, stable baselines provides wrappers for `VecEnv`. Among the different ones that exist (and you can create your own), you should know: - VecNormalize: it computes a running mean and standard deviation to normalize observations and returns- VecFrameStack: it stacks several consecutive observations (useful to integrate time in the observation, e.g. successive frames of an Atari game)More info in the [documentation](https://stable-baselines3.readthedocs.io/en/master/guide/vec_envs.html#wrappers)Note: when using the `VecNormalize` wrapper, you must save the running mean and std along with the model, otherwise you will not get proper results when loading the agent again. If you use the [rl zoo](https://github.com/DLR-RM/rl-baselines3-zoo), this is done automatically
###Code
from stable_baselines3.common.vec_env import VecNormalize, VecFrameStack
env = DummyVecEnv([lambda: gym.make("Pendulum-v0")])
normalized_vec_env = VecNormalize(env)
obs = normalized_vec_env.reset()
for _ in range(10):
action = [normalized_vec_env.action_space.sample()]
obs, reward, _, _ = normalized_vec_env.step(action)
print(obs, reward)
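# --- Added illustration: persisting VecNormalize statistics ---
# As noted above, the running mean/std must be saved along with the model.
# This is a minimal sketch using VecNormalize.save / VecNormalize.load;
# the file path is an arbitrary choice.
normalized_vec_env.save("/tmp/vec_normalize.pkl")
eval_env = DummyVecEnv([lambda: gym.make("Pendulum-v0")])
eval_env = VecNormalize.load("/tmp/vec_normalize.pkl", eval_env)
# At test time, do not update the statistics and do not normalize the reward
eval_env.training = False
eval_env.norm_reward = False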
###Output
[[ 0.00432401 -0.00659526 -0.00301316]] [-1.9998431]
[[-0.93556964 -0.7536059 -0.99976414]] [-1.2453712]
[[-1.2780777 -1.1824399 -1.1202489]] [-1.0047954]
[[-1.5215688 -1.4151894 -1.4574584]] [-0.8740486]
[[-1.6586064 -1.4429885 -1.4815722]] [-0.8773711]
[[-1.7601846 -1.2510948 -1.5574012]] [-0.8724717]
[[-1.8062371 -0.53193015 -1.4936209 ]] [-0.88744575]
[[-1.8342572 1.2594657 -1.56056 ]] [-0.85494155]
[[-1.8177049 2.395693 -1.5368611]] [-0.85144466]
[[-1.742472 2.5590906 -1.4060296]] [-0.8232367]
###Markdown
Exercise: code your own monitor wrapperNow that you know how a wrapper works and what you can do with it, it's time to experiment.The goal here is to create a wrapper that will monitor the training progress, storing both the episode reward (sum of rewards for one episode) and the episode length (number of steps for the last episode).You will return those values using the `info` dict at the end of each episode.
###Code
class MyMonitorWrapper(gym.Wrapper):
"""
:param env: (gym.Env) Gym environment that will be wrapped
"""
def __init__(self, env, max_steps=100):
# Call the parent constructor, so we can access self.env later
action_space = env.action_space
assert isinstance(action_space, gym.spaces.Box), "This wrapper only works with continuous action space (spaces.Box)"
# Retrieve the max/min values
self.low, self.high = action_space.low, action_space.high
# We modify the action space, so all actions will lie in [-1, 1]
env.action_space = gym.spaces.Box(low=-1, high=1, shape=action_space.shape, dtype=np.float32)
super(MyMonitorWrapper, self).__init__(env)
self.max_steps = max_steps
# Counter of steps per episode
self.current_step = 0
def rescale_action(self, scaled_action):
"""
Rescale the action from [-1, 1] to [low, high]
(no need for symmetric action space)
:param scaled_action: (np.ndarray)
:return: (np.ndarray)
"""
return self.low + (0.5 * (scaled_action + 1.0) * (self.high - self.low))
def reset(self):
"""
Reset the environment
"""
# Reset the counter
self.current_step = 0
return self.env.reset()
def step(self, action):
"""
:param action: ([float] or int) Action taken by the agent
:return: (np.ndarray, float, bool, dict) observation, reward, is the episode over?, additional information
"""
# Rescale action from [-1, 1] to original [low, high] interval
rescaled_action = self.rescale_action(action)
self.current_step += 1
obs, reward, done, info = self.env.step(rescaled_action)
# Overwrite the done signal when
if self.current_step >= self.max_steps:
done = True
# Update the info dict to signal that the limit was exceeded
info['time_limit_reached'] = True
return obs, reward, done, info
###Output
_____no_output_____
###Markdown
Test your wrapper
###Code
# To use LunarLander, you need to install box2d box2d-kengz (pip) and swig (apt-get)
!pip install box2d box2d-kengz
env = gym.make("Pendulum-v0")
# Wrap the environment
# Wrap the environment
env = MyMonitorWrapper(env, max_steps=500)
# Reset the environment
# Take random actions in the environment and check
# that it returns the correct values after the end of each episode
# ====================== #
obs = env.reset()
done = False
n_steps = 0
while not done:
# Take random actions
random_action = env.action_space.sample()
obs, reward, done, info = env.step(random_action)
n_steps += 1
print(n_steps, info)
###Output
200 {'TimeLimit.truncated': True}
###Markdown
Conclusion In this notebook, we have seen: - how to easily save and load a model - what a wrapper is and what we can do with it - how to create your own wrapper Wrapper Bonus: changing the observation space: a wrapper for episodes of fixed length
###Code
from gym.wrappers import TimeLimit
class TimeFeatureWrapper(gym.Wrapper):
"""
Add remaining time to observation space for fixed length episodes.
See https://arxiv.org/abs/1712.00378 and https://github.com/aravindr93/mjrl/issues/13.
:param env: (gym.Env)
:param max_steps: (int) Max number of steps of an episode
if it is not wrapped in a TimeLimit object.
:param test_mode: (bool) In test mode, the time feature is constant,
equal to zero. This allows checking that the agent did not overfit this feature,
learning a deterministic pre-defined sequence of actions.
"""
def __init__(self, env, max_steps=1000, test_mode=False):
assert isinstance(env.observation_space, gym.spaces.Box)
# Add a time feature to the observation
low, high = env.observation_space.low, env.observation_space.high
low, high= np.concatenate((low, [0])), np.concatenate((high, [1.]))
env.observation_space = gym.spaces.Box(low=low, high=high, dtype=np.float32)
super(TimeFeatureWrapper, self).__init__(env)
if isinstance(env, TimeLimit):
self._max_steps = env._max_episode_steps
else:
self._max_steps = max_steps
self._current_step = 0
self._test_mode = test_mode
def reset(self):
self._current_step = 0
return self._get_obs(self.env.reset())
def step(self, action):
self._current_step += 1
obs, reward, done, info = self.env.step(action)
return self._get_obs(obs), reward, done, info
def _get_obs(self, obs):
"""
Concatenate the time feature to the current observation.
:param obs: (np.ndarray)
:return: (np.ndarray)
"""
# Remaining time is more general
time_feature = 1 - (self._current_step / self._max_steps)
if self._test_mode:
time_feature = 1.0
# Optionally: concatenate [time_feature, time_feature ** 2]
return np.concatenate((obs, [time_feature]))
env = gym.make("Pendulum-v0")
# Wrap the environment
# Wrap the environment
env = TimeFeatureWrapper(env, max_steps=500)
obs = env.reset()
done = False
n_steps = 0
while not done:
# Take random actions
random_action = env.action_space.sample()
obs, reward, done, info = env.step(random_action)
n_steps += 1
print(n_steps, info)
###Output
200 {'TimeLimit.truncated': True}
###Markdown
Going further - Saving format The format for saving and loading models is a zip-archived JSON dump and NumPy zip archive of the arrays:```saved_model.zip/├── data JSON file of class-parameters (dictionary)├── parameter_list JSON file of model parameters and their ordering (list)├── parameters Bytes from numpy.savez (a zip file of the numpy arrays). ... ├── ... Being a zip-archive itself, this object can also be opened ... ├── ... as a zip-archive and browsed.``` Save and find
###Code
# Create save dir
save_dir = "/tmp/gym/"
os.makedirs(save_dir, exist_ok=True)
model = PPO('MlpPolicy', 'Pendulum-v0', verbose=0).learn(8000)
model.save(save_dir + "/PPO_tutorial")
!ls /tmp/gym/PPO_tutorial*
import zipfile
archive = zipfile.ZipFile("/tmp/gym/PPO_tutorial.zip", 'r')
for f in archive.filelist:
print(f.filename)
###Output
data
pytorch_variables.pth
policy.pth
policy.optimizer.pth
_stable_baselines3_version
###Markdown
Stable Baselines3 Tutorial - Gym wrappers, saving and loading modelsGithub repo: https://github.com/araffin/rl-tutorial-jnrr19/tree/sb3/Stable-Baselines3: https://github.com/DLR-RM/stable-baselines3Documentation: https://stable-baselines3.readthedocs.io/en/master/RL Baselines3 zoo: https://github.com/DLR-RM/rl-baselines3-zoo IntroductionIn this notebook, you will learn how to use *Gym Wrappers*, which allow monitoring, normalization, limiting the number of steps, feature augmentation, ...You will also see the *loading* and *saving* functions, and how to read the output files for possible exporting. Install Dependencies and Stable Baselines3 Using Pip
###Code
!apt install swig
!pip install stable-baselines3[extra]
import gym
from stable_baselines3 import A2C, SAC, PPO, TD3
###Output
_____no_output_____
###Markdown
Saving and loadingSaving and loading stable-baselines models is straightforward: you can directly call `.save()` and `.load()` on the models.
###Code
import os
# Create save dir
save_dir = "/tmp/gym/"
os.makedirs(save_dir, exist_ok=True)
model = PPO('MlpPolicy', 'Pendulum-v0', verbose=0).learn(8000)
# The model will be saved under PPO_tutorial.zip
model.save(save_dir + "/PPO_tutorial")
# sample an observation from the environment
obs = model.env.observation_space.sample()
# Check prediction before saving
print("pre saved", model.predict(obs, deterministic=True))
del model # delete trained model to demonstrate loading
loaded_model = PPO.load(save_dir + "/PPO_tutorial")
# Check that the prediction is the same after loading (for the same observation)
print("loaded", loaded_model.predict(obs, deterministic=True))
###Output
_____no_output_____
###Markdown
Saving in stable-baselines is quite powerful, as you save the training hyperparameters, with the current weights. This means in practice, you can simply load a custom model, without redefining the parameters, and continue learning.The loading function can also update the model's class variables when loading.
###Code
import os
from stable_baselines3.common.vec_env import DummyVecEnv
# Create save dir
save_dir = "/tmp/gym/"
os.makedirs(save_dir, exist_ok=True)
model = A2C('MlpPolicy', 'Pendulum-v0', verbose=0, gamma=0.9, n_steps=20).learn(8000)
# The model will be saved under A2C_tutorial.zip
model.save(save_dir + "/A2C_tutorial")
del model # delete trained model to demonstrate loading
# load the model, and when loading set verbose to 1
loaded_model = A2C.load(save_dir + "/A2C_tutorial", verbose=1)
# show the save hyperparameters
print("loaded:", "gamma =", loaded_model.gamma, "n_steps =", loaded_model.n_steps)
# as the environment is not serializable, we need to set a new instance of the environment
loaded_model.set_env(DummyVecEnv([lambda: gym.make('Pendulum-v0')]))
# and continue training
loaded_model.learn(8000)
###Output
_____no_output_____
###Markdown
Gym and VecEnv wrappers Anatomy of a gym wrapper A gym wrapper follows the [gym](https://stable-baselines.readthedocs.io/en/master/guide/custom_env.html) interface: it has a `reset()` and `step()` method.Because a wrapper is *around* an environment, we can access it with `self.env`; this allows us to easily interact with it without modifying the original env.There are many predefined wrappers; for a complete list, refer to the [gym documentation](https://github.com/openai/gym/tree/master/gym/wrappers)
###Code
class CustomWrapper(gym.Wrapper):
"""
:param env: (gym.Env) Gym environment that will be wrapped
"""
def __init__(self, env):
# Call the parent constructor, so we can access self.env later
super(CustomWrapper, self).__init__(env)
def reset(self):
"""
Reset the environment
"""
obs = self.env.reset()
return obs
def step(self, action):
"""
:param action: ([float] or int) Action taken by the agent
:return: (np.ndarray, float, bool, dict) observation, reward, is the episode over?, additional information
"""
obs, reward, done, info = self.env.step(action)
return obs, reward, done, info
###Output
_____no_output_____
###Markdown
First example: limit the episode lengthOne practical use case of a wrapper is when you want to limit the number of steps per episode; to do that, you will need to overwrite the `done` signal when the limit is reached. It is also good practice to pass that information in the `info` dictionary.
###Code
class TimeLimitWrapper(gym.Wrapper):
"""
:param env: (gym.Env) Gym environment that will be wrapped
:param max_steps: (int) Max number of steps per episode
"""
def __init__(self, env, max_steps=100):
# Call the parent constructor, so we can access self.env later
super(TimeLimitWrapper, self).__init__(env)
self.max_steps = max_steps
# Counter of steps per episode
self.current_step = 0
def reset(self):
"""
Reset the environment
"""
# Reset the counter
self.current_step = 0
return self.env.reset()
def step(self, action):
"""
:param action: ([float] or int) Action taken by the agent
:return: (np.ndarray, float, bool, dict) observation, reward, is the episode over?, additional information
"""
self.current_step += 1
obs, reward, done, info = self.env.step(action)
# Overwrite the done signal when
if self.current_step >= self.max_steps:
done = True
# Update the info dict to signal that the limit was exceeded
info['time_limit_reached'] = True
return obs, reward, done, info
###Output
_____no_output_____
###Markdown
Test the wrapper
###Code
from gym.envs.classic_control.pendulum import PendulumEnv
# Here we create the environment directly because gym.make() would otherwise already wrap the environment in a TimeLimit wrapper
env = PendulumEnv()
# Wrap the environment
env = TimeLimitWrapper(env, max_steps=100)
obs = env.reset()
done = False
n_steps = 0
while not done:
# Take random actions
random_action = env.action_space.sample()
obs, reward, done, info = env.step(random_action)
n_steps += 1
print(n_steps, info)
###Output
_____no_output_____
###Markdown
In practice, `gym` already has a wrapper for that, named `TimeLimit` (`gym.wrappers.TimeLimit`), which is used by most environments. Second example: normalize actionsIt is usually a good idea to normalize observations and actions before giving them to the agent, as this prevents [hard-to-debug issues](https://github.com/hill-a/stable-baselines/issues/473).In this example, we are going to normalize the action space of *Pendulum-v0* so it lies in [-1, 1] instead of [-2, 2].Note: here we are dealing with continuous actions, hence the `gym.Box` space
###Code
import numpy as np
class NormalizeActionWrapper(gym.Wrapper):
"""
:param env: (gym.Env) Gym environment that will be wrapped
"""
def __init__(self, env):
# Retrieve the action space
action_space = env.action_space
assert isinstance(action_space, gym.spaces.Box), "This wrapper only works with continuous action space (spaces.Box)"
# Retrieve the max/min values
self.low, self.high = action_space.low, action_space.high
# We modify the action space, so all actions will lie in [-1, 1]
env.action_space = gym.spaces.Box(low=-1, high=1, shape=action_space.shape, dtype=np.float32)
# Call the parent constructor, so we can access self.env later
super(NormalizeActionWrapper, self).__init__(env)
def rescale_action(self, scaled_action):
"""
Rescale the action from [-1, 1] to [low, high]
(no need for symmetric action space)
:param scaled_action: (np.ndarray)
:return: (np.ndarray)
"""
return self.low + (0.5 * (scaled_action + 1.0) * (self.high - self.low))
def reset(self):
"""
Reset the environment
"""
# Reset the counter
return self.env.reset()
def step(self, action):
"""
:param action: ([float] or int) Action taken by the agent
:return: (np.ndarray, float, bool, dict) observation, reward, is the episode over?, additional information
"""
# Rescale action from [-1, 1] to original [low, high] interval
rescaled_action = self.rescale_action(action)
obs, reward, done, info = self.env.step(rescaled_action)
return obs, reward, done, info
###Output
_____no_output_____
###Markdown
Test before rescaling actions
###Code
original_env = gym.make("Pendulum-v0")
print(original_env.action_space.low)
for _ in range(10):
print(original_env.action_space.sample())
###Output
_____no_output_____
###Markdown
Test the NormalizeAction wrapper
###Code
env = NormalizeActionWrapper(gym.make("Pendulum-v0"))
print(env.action_space.low)
for _ in range(10):
print(env.action_space.sample())
###Output
_____no_output_____
###Markdown
Test with an RL algorithmWe are going to use the Monitor wrapper of stable baselines, which allows monitoring training stats (mean episode reward, mean episode length)
###Code
from stable_baselines3.common.monitor import Monitor
from stable_baselines3.common.vec_env import DummyVecEnv
env = Monitor(gym.make('Pendulum-v0'))
env = DummyVecEnv([lambda: env])
model = A2C("MlpPolicy", env, verbose=1).learn(int(1000))
###Output
_____no_output_____
###Markdown
With the action wrapper
###Code
normalized_env = Monitor(gym.make('Pendulum-v0'))
# Note that we can use multiple wrappers
normalized_env = NormalizeActionWrapper(normalized_env)
normalized_env = DummyVecEnv([lambda: normalized_env])
model_2 = A2C("MlpPolicy", normalized_env, verbose=1).learn(int(1000))
###Output
_____no_output_____
###Markdown
Additional wrappers: VecEnvWrappersIn the same vein as gym wrappers, stable baselines provides wrappers for `VecEnv`. Among the different ones that exist (and you can create your own), you should know: - VecNormalize: it computes a running mean and standard deviation to normalize observations and returns- VecFrameStack: it stacks several consecutive observations (useful to integrate time in the observation, e.g. successive frames of an Atari game)More info in the [documentation](https://stable-baselines3.readthedocs.io/en/master/guide/vec_envs.html#wrappers)Note: when using the `VecNormalize` wrapper, you must save the running mean and std along with the model, otherwise you will not get proper results when loading the agent again. If you use the [rl zoo](https://github.com/DLR-RM/rl-baselines3-zoo), this is done automatically
###Code
from stable_baselines3.common.vec_env import VecNormalize, VecFrameStack
env = DummyVecEnv([lambda: gym.make("Pendulum-v0")])
normalized_vec_env = VecNormalize(env)
obs = normalized_vec_env.reset()
for _ in range(10):
action = [normalized_vec_env.action_space.sample()]
obs, reward, _, _ = normalized_vec_env.step(action)
print(obs, reward)
###Output
_____no_output_____
###Markdown
Exercise: code your own monitor wrapperNow that you know how a wrapper works and what you can do with it, it's time to experiment.The goal here is to create a wrapper that will monitor the training progress, storing both the episode reward (sum of rewards for one episode) and the episode length (number of steps for the last episode).You will return those values using the `info` dict at the end of each episode.
###Code
class MyMonitorWrapper(gym.Wrapper):
"""
:param env: (gym.Env) Gym environment that will be wrapped
"""
def __init__(self, env):
# Call the parent constructor, so we can access self.env later
super(MyMonitorWrapper, self).__init__(env)
# === YOUR CODE HERE ===#
# Initialize the variables that will be used
# to store the episode length and episode reward
# ====================== #
def reset(self):
"""
Reset the environment
"""
obs = self.env.reset()
# === YOUR CODE HERE ===#
# Reset the variables
# ====================== #
return obs
def step(self, action):
"""
:param action: ([float] or int) Action taken by the agent
:return: (np.ndarray, float, bool, dict) observation, reward, is the episode over?, additional information
"""
obs, reward, done, info = self.env.step(action)
# === YOUR CODE HERE ===#
# Update the current episode reward and episode length
# ====================== #
if done:
# === YOUR CODE HERE ===#
# Store the episode length and episode reward in the info dict
# ====================== #
return obs, reward, done, info
###Output
_____no_output_____
###Markdown
Test your wrapper
###Code
# To use LunarLander, you need to install box2d box2d-kengz (pip) and swig (apt-get)
!pip install box2d box2d-kengz
env = gym.make("LunarLander-v2")
# === YOUR CODE HERE ===#
# Wrap the environment
# Reset the environment
# Take random actions in the environment and check
# that it returns the correct values after the end of each episode
# ====================== #
###Output
_____no_output_____
###Markdown
Conclusion In this notebook, we have seen: - how to easily save and load a model - what a wrapper is and what we can do with it - how to create your own wrapper Wrapper Bonus: changing the observation space: a wrapper for episodes of fixed length
###Code
from gym.wrappers import TimeLimit
class TimeFeatureWrapper(gym.Wrapper):
"""
Add remaining time to observation space for fixed length episodes.
See https://arxiv.org/abs/1712.00378 and https://github.com/aravindr93/mjrl/issues/13.
:param env: (gym.Env)
:param max_steps: (int) Max number of steps of an episode
if it is not wrapped in a TimeLimit object.
:param test_mode: (bool) In test mode, the time feature is constant,
equal to zero. This allows checking that the agent did not overfit this feature,
learning a deterministic pre-defined sequence of actions.
"""
def __init__(self, env, max_steps=1000, test_mode=False):
assert isinstance(env.observation_space, gym.spaces.Box)
# Add a time feature to the observation
low, high = env.observation_space.low, env.observation_space.high
low, high= np.concatenate((low, [0])), np.concatenate((high, [1.]))
env.observation_space = gym.spaces.Box(low=low, high=high, dtype=np.float32)
super(TimeFeatureWrapper, self).__init__(env)
if isinstance(env, TimeLimit):
self._max_steps = env._max_episode_steps
else:
self._max_steps = max_steps
self._current_step = 0
self._test_mode = test_mode
def reset(self):
self._current_step = 0
return self._get_obs(self.env.reset())
def step(self, action):
self._current_step += 1
obs, reward, done, info = self.env.step(action)
return self._get_obs(obs), reward, done, info
def _get_obs(self, obs):
"""
Concatenate the time feature to the current observation.
:param obs: (np.ndarray)
:return: (np.ndarray)
"""
# Remaining time is more general
time_feature = 1 - (self._current_step / self._max_steps)
if self._test_mode:
time_feature = 1.0
# Optionally: concatenate [time_feature, time_feature ** 2]
return np.concatenate((obs, [time_feature]))
###Output
_____no_output_____
###Markdown
Going further - Saving format The format for saving and loading models is a zip-archived JSON dump and NumPy zip archive of the arrays:```saved_model.zip/├── data JSON file of class-parameters (dictionary)├── parameter_list JSON file of model parameters and their ordering (list)├── parameters Bytes from numpy.savez (a zip file of the numpy arrays). ... ├── ... Being a zip-archive itself, this object can also be opened ... ├── ... as a zip-archive and browsed.``` Save and find
###Code
# Create save dir
save_dir = "/tmp/gym/"
os.makedirs(save_dir, exist_ok=True)
model = PPO('MlpPolicy', 'Pendulum-v0', verbose=0).learn(8000)
model.save(save_dir + "/PPO_tutorial")
!ls /tmp/gym/PPO_tutorial*
import zipfile
archive = zipfile.ZipFile("/tmp/gym/PPO_tutorial.zip", 'r')
for f in archive.filelist:
print(f.filename)
###Output
_____no_output_____
###Markdown
Stable Baselines3 Tutorial - Gym wrappers, saving and loading modelsGithub repo: https://github.com/araffin/rl-tutorial-jnrr19/tree/sb3/Stable-Baselines3: https://github.com/DLR-RM/stable-baselines3Documentation: https://stable-baselines3.readthedocs.io/en/master/RL Baselines3 zoo: https://github.com/DLR-RM/rl-baselines3-zoo IntroductionIn this notebook, you will learn how to use *Gym Wrappers*, which allow monitoring, normalization, limiting the number of steps, feature augmentation, ...You will also see the *loading* and *saving* functions, and how to read the output files for possible exporting. Install Dependencies and Stable Baselines3 Using Pip
###Code
!apt install swig
!pip install stable-baselines3[extra]
import gym
from stable_baselines3 import A2C, SAC, PPO, TD3
###Output
_____no_output_____
###Markdown
Saving and loadingSaving and loading stable-baselines models is straightforward: you can directly call `.save()` and `.load()` on the models.
###Code
import os
# Create save dir
save_dir = "/tmp/gym/"
os.makedirs(save_dir, exist_ok=True)
model = PPO('MlpPolicy', 'Pendulum-v1', verbose=0).learn(8000)
# The model will be saved under PPO_tutorial.zip
model.save(save_dir + "/PPO_tutorial")
# sample an observation from the environment
obs = model.env.observation_space.sample()
# Check prediction before saving
print("pre saved", model.predict(obs, deterministic=True))
del model # delete trained model to demonstrate loading
loaded_model = PPO.load(save_dir + "/PPO_tutorial")
# Check that the prediction is the same after loading (for the same observation)
print("loaded", loaded_model.predict(obs, deterministic=True))
###Output
_____no_output_____
###Markdown
Saving in stable-baselines is quite powerful, as you save the training hyperparameters, with the current weights. This means in practice, you can simply load a custom model, without redefining the parameters, and continue learning.The loading function can also update the model's class variables when loading.
###Code
import os
from stable_baselines3.common.vec_env import DummyVecEnv
# Create save dir
save_dir = "/tmp/gym/"
os.makedirs(save_dir, exist_ok=True)
model = A2C('MlpPolicy', 'Pendulum-v1', verbose=0, gamma=0.9, n_steps=20).learn(8000)
# The model will be saved under A2C_tutorial.zip
model.save(save_dir + "/A2C_tutorial")
del model # delete trained model to demonstrate loading
# load the model, and when loading set verbose to 1
loaded_model = A2C.load(save_dir + "/A2C_tutorial", verbose=1)
# show the save hyperparameters
print("loaded:", "gamma =", loaded_model.gamma, "n_steps =", loaded_model.n_steps)
# as the environment is not serializable, we need to set a new instance of the environment
loaded_model.set_env(DummyVecEnv([lambda: gym.make('Pendulum-v1')]))
# and continue training
loaded_model.learn(8000)
###Output
_____no_output_____
###Markdown
Gym and VecEnv wrappers Anatomy of a gym wrapper A gym wrapper follows the [gym](https://stable-baselines.readthedocs.io/en/master/guide/custom_env.html) interface: it has a `reset()` and `step()` method.Because a wrapper is *around* an environment, we can access it with `self.env`; this allows us to easily interact with it without modifying the original env.There are many predefined wrappers; for a complete list, refer to the [gym documentation](https://github.com/openai/gym/tree/master/gym/wrappers)
###Code
class CustomWrapper(gym.Wrapper):
"""
:param env: (gym.Env) Gym environment that will be wrapped
"""
def __init__(self, env):
# Call the parent constructor, so we can access self.env later
super(CustomWrapper, self).__init__(env)
def reset(self):
"""
Reset the environment
"""
obs = self.env.reset()
return obs
def step(self, action):
"""
:param action: ([float] or int) Action taken by the agent
:return: (np.ndarray, float, bool, dict) observation, reward, is the episode over?, additional information
"""
obs, reward, done, info = self.env.step(action)
return obs, reward, done, info
###Output
_____no_output_____
###Markdown
First example: limit the episode lengthOne practical use case of a wrapper is when you want to limit the number of steps per episode; to do that, you will need to overwrite the `done` signal when the limit is reached. It is also good practice to pass that information in the `info` dictionary.
###Code
class TimeLimitWrapper(gym.Wrapper):
"""
:param env: (gym.Env) Gym environment that will be wrapped
:param max_steps: (int) Max number of steps per episode
"""
def __init__(self, env, max_steps=100):
# Call the parent constructor, so we can access self.env later
super(TimeLimitWrapper, self).__init__(env)
self.max_steps = max_steps
# Counter of steps per episode
self.current_step = 0
def reset(self):
"""
Reset the environment
"""
# Reset the counter
self.current_step = 0
return self.env.reset()
def step(self, action):
"""
:param action: ([float] or int) Action taken by the agent
:return: (np.ndarray, float, bool, dict) observation, reward, is the episode over?, additional information
"""
self.current_step += 1
obs, reward, done, info = self.env.step(action)
# Overwrite the done signal when
if self.current_step >= self.max_steps:
done = True
# Update the info dict to signal that the limit was exceeded
info['time_limit_reached'] = True
return obs, reward, done, info
###Output
_____no_output_____
###Markdown
Test the wrapper
###Code
from gym.envs.classic_control.pendulum import PendulumEnv
# Here we create the environment directly because gym.make() would otherwise already wrap the environment in a TimeLimit wrapper
env = PendulumEnv()
# Wrap the environment
env = TimeLimitWrapper(env, max_steps=100)
obs = env.reset()
done = False
n_steps = 0
while not done:
# Take random actions
random_action = env.action_space.sample()
obs, reward, done, info = env.step(random_action)
n_steps += 1
print(n_steps, info)
###Output
_____no_output_____
###Markdown
In practice, `gym` already has a wrapper for that, named `TimeLimit` (`gym.wrappers.TimeLimit`), which is used by most environments. Second example: normalize actionsIt is usually a good idea to normalize observations and actions before giving them to the agent, as this prevents [hard-to-debug issues](https://github.com/hill-a/stable-baselines/issues/473).In this example, we are going to normalize the action space of *Pendulum-v1* so it lies in [-1, 1] instead of [-2, 2].Note: here we are dealing with continuous actions, hence the `gym.Box` space
###Code
import numpy as np
class NormalizeActionWrapper(gym.Wrapper):
"""
:param env: (gym.Env) Gym environment that will be wrapped
"""
def __init__(self, env):
# Retrieve the action space
action_space = env.action_space
assert isinstance(action_space, gym.spaces.Box), "This wrapper only works with continuous action space (spaces.Box)"
# Retrieve the max/min values
self.low, self.high = action_space.low, action_space.high
# We modify the action space, so all actions will lie in [-1, 1]
env.action_space = gym.spaces.Box(low=-1, high=1, shape=action_space.shape, dtype=np.float32)
# Call the parent constructor, so we can access self.env later
super(NormalizeActionWrapper, self).__init__(env)
def rescale_action(self, scaled_action):
"""
Rescale the action from [-1, 1] to [low, high]
(no need for symmetric action space)
:param scaled_action: (np.ndarray)
:return: (np.ndarray)
"""
return self.low + (0.5 * (scaled_action + 1.0) * (self.high - self.low))
def reset(self):
"""
Reset the environment
"""
# Reset the counter
return self.env.reset()
def step(self, action):
"""
:param action: ([float] or int) Action taken by the agent
:return: (np.ndarray, float, bool, dict) observation, reward, is the episode over?, additional information
"""
# Rescale action from [-1, 1] to original [low, high] interval
rescaled_action = self.rescale_action(action)
obs, reward, done, info = self.env.step(rescaled_action)
return obs, reward, done, info
###Output
_____no_output_____
###Markdown
Test before rescaling actions
###Code
original_env = gym.make("Pendulum-v1")
print(original_env.action_space.low)
for _ in range(10):
print(original_env.action_space.sample())
###Output
_____no_output_____
###Markdown
Test the NormalizeAction wrapper
###Code
env = NormalizeActionWrapper(gym.make("Pendulum-v1"))
print(env.action_space.low)
for _ in range(10):
print(env.action_space.sample())
###Output
_____no_output_____
###Markdown
Test with an RL algorithmWe are going to use the Monitor wrapper of stable baselines, which allows monitoring training stats (mean episode reward, mean episode length)
###Code
from stable_baselines3.common.monitor import Monitor
from stable_baselines3.common.vec_env import DummyVecEnv
env = Monitor(gym.make('Pendulum-v1'))
env = DummyVecEnv([lambda: env])
model = A2C("MlpPolicy", env, verbose=1).learn(int(1000))
###Output
_____no_output_____
###Markdown
With the action wrapper
###Code
normalized_env = Monitor(gym.make('Pendulum-v1'))
# Note that we can use multiple wrappers
normalized_env = NormalizeActionWrapper(normalized_env)
normalized_env = DummyVecEnv([lambda: normalized_env])
model_2 = A2C("MlpPolicy", normalized_env, verbose=1).learn(int(1000))
###Output
_____no_output_____
###Markdown
Additional wrappers: VecEnvWrappersIn the same vein as gym wrappers, stable baselines provides wrappers for `VecEnv`. Among the different ones that exist (and you can create your own), you should know: - VecNormalize: it computes a running mean and standard deviation to normalize observations and returns- VecFrameStack: it stacks several consecutive observations (useful to integrate time in the observation, e.g. successive frames of an Atari game)More info in the [documentation](https://stable-baselines3.readthedocs.io/en/master/guide/vec_envs.html#wrappers)Note: when using the `VecNormalize` wrapper, you must save the running mean and std along with the model, otherwise you will not get proper results when loading the agent again. If you use the [rl zoo](https://github.com/DLR-RM/rl-baselines3-zoo), this is done automatically
###Code
from stable_baselines3.common.vec_env import VecNormalize, VecFrameStack
env = DummyVecEnv([lambda: gym.make("Pendulum-v1")])
normalized_vec_env = VecNormalize(env)
obs = normalized_vec_env.reset()
for _ in range(10):
action = [normalized_vec_env.action_space.sample()]
obs, reward, _, _ = normalized_vec_env.step(action)
print(obs, reward)
###Output
_____no_output_____
###Markdown
Exercise: code your own monitor wrapperNow that you know how a wrapper works and what you can do with it, it's time to experiment.The goal here is to create a wrapper that will monitor the training progress, storing both the episode reward (sum of rewards for one episode) and the episode length (number of steps for the last episode).You will return those values using the `info` dict at the end of each episode.
###Code
class MyMonitorWrapper(gym.Wrapper):
"""
:param env: (gym.Env) Gym environment that will be wrapped
"""
def __init__(self, env):
# Call the parent constructor, so we can access self.env later
super(MyMonitorWrapper, self).__init__(env)
# === YOUR CODE HERE ===#
# Initialize the variables that will be used
# to store the episode length and episode reward
# ====================== #
def reset(self):
"""
Reset the environment
"""
obs = self.env.reset()
# === YOUR CODE HERE ===#
# Reset the variables
# ====================== #
return obs
def step(self, action):
"""
:param action: ([float] or int) Action taken by the agent
:return: (np.ndarray, float, bool, dict) observation, reward, is the episode over?, additional information
"""
obs, reward, done, info = self.env.step(action)
# === YOUR CODE HERE ===#
# Update the current episode reward and episode length
# ====================== #
if done:
# === YOUR CODE HERE ===#
# Store the episode length and episode reward in the info dict
# ====================== #
return obs, reward, done, info
###Output
_____no_output_____
###Markdown
Test your wrapper
###Code
# To use LunarLander, you need to install box2d box2d-kengz (pip) and swig (apt-get)
!pip install box2d box2d-kengz
env = gym.make("LunarLander-v2")
# === YOUR CODE HERE ===#
# Wrap the environment
# Reset the environment
# Take random actions in the environment and check
# that it returns the correct values after the end of each episode
# ====================== #
###Output
_____no_output_____
###Markdown
Conclusion In this notebook, we have seen: - how to easily save and load a model - what a wrapper is and what we can do with it - how to create your own wrapper Wrapper Bonus: changing the observation space: a wrapper for episodes of fixed length
###Code
from gym.wrappers import TimeLimit
class TimeFeatureWrapper(gym.Wrapper):
"""
Add remaining time to observation space for fixed length episodes.
See https://arxiv.org/abs/1712.00378 and https://github.com/aravindr93/mjrl/issues/13.
:param env: (gym.Env)
:param max_steps: (int) Max number of steps of an episode
if it is not wrapped in a TimeLimit object.
:param test_mode: (bool) In test mode, the time feature is constant,
equal to zero. This allows checking that the agent did not overfit this feature,
learning a deterministic pre-defined sequence of actions.
"""
def __init__(self, env, max_steps=1000, test_mode=False):
assert isinstance(env.observation_space, gym.spaces.Box)
# Add a time feature to the observation
low, high = env.observation_space.low, env.observation_space.high
low, high= np.concatenate((low, [0])), np.concatenate((high, [1.]))
env.observation_space = gym.spaces.Box(low=low, high=high, dtype=np.float32)
super(TimeFeatureWrapper, self).__init__(env)
if isinstance(env, TimeLimit):
self._max_steps = env._max_episode_steps
else:
self._max_steps = max_steps
self._current_step = 0
self._test_mode = test_mode
def reset(self):
self._current_step = 0
return self._get_obs(self.env.reset())
def step(self, action):
self._current_step += 1
obs, reward, done, info = self.env.step(action)
return self._get_obs(obs), reward, done, info
def _get_obs(self, obs):
"""
Concatenate the time feature to the current observation.
:param obs: (np.ndarray)
:return: (np.ndarray)
"""
# Remaining time is more general
time_feature = 1 - (self._current_step / self._max_steps)
if self._test_mode:
time_feature = 1.0
# Optionally: concatenate [time_feature, time_feature ** 2]
return np.concatenate((obs, [time_feature]))
###Output
_____no_output_____
###Markdown
Going further - Saving format The format for saving and loading models is a zip-archived JSON dump and NumPy zip archive of the arrays:```saved_model.zip/├── data JSON file of class-parameters (dictionary)├── parameter_list JSON file of model parameters and their ordering (list)├── parameters Bytes from numpy.savez (a zip file of the numpy arrays). ... ├── ... Being a zip-archive itself, this object can also be opened ... ├── ... as a zip-archive and browsed.``` Save and find
###Code
# Create save dir
save_dir = "/tmp/gym/"
os.makedirs(save_dir, exist_ok=True)
model = PPO('MlpPolicy', 'Pendulum-v1', verbose=0).learn(8000)
model.save(save_dir + "/PPO_tutorial")
!ls /tmp/gym/PPO_tutorial*
import zipfile
archive = zipfile.ZipFile("/tmp/gym/PPO_tutorial.zip", 'r')
for f in archive.filelist:
print(f.filename)
###Output
_____no_output_____ |
M1_Python/d6_Numpy/100_Numpy_exercises_with_hint.ipynb | ###Markdown
100 numpy exercises with hintThis is a collection of exercises that have been collected in the numpy mailing list, on stack overflow and in the numpy documentation. The goal of this collection is to offer a quick reference for both old and new users but also to provide a set of exercises for those who teach.If you find an error or think you've a better way to solve some of them, feel free to open an issue at 1. Import the numpy package under the name `np` (★☆☆) (**hint**: import … as …) 2. Print the numpy version and the configuration (★☆☆) (**hint**: np.\_\_version\_\_, np.show\_config) 3. Create a null vector of size 10 (★☆☆) (**hint**: np.zeros) 4. How to find the memory size of any array (★☆☆) (**hint**: size, itemsize) 5. How to get the documentation of the numpy add function from the command line? (★☆☆) (**hint**: np.info) 6. Create a null vector of size 10 but the fifth value which is 1 (★☆☆) (**hint**: array\[4\])
###Code
# np.full creates an array of the given shape (first argument), filled with the given value (second argument)
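# Illustrative sketch for exercise 6 (variable names are arbitrary):
import numpy as np
v = np.zeros(10)
v[4] = 1
# or, using np.full as described above:
v2 = np.full(10, 0)
v2[4] = 1
v2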
###Output
_____no_output_____
###Markdown
7. Create a vector with values ranging from 10 to 49 (★☆☆) (**hint**: np.arange) 8. Reverse a vector (first element becomes last) (★☆☆) (**hint**: array\[::-1\]) 9. Create a 3x3 matrix with values ranging from 0 to 8 (★☆☆) (**hint**: reshape) 10. Find indices of non-zero elements from \[1,2,0,0,4,0\] (★☆☆) (**hint**: np.nonzero) 11. Create a 3x3 identity matrix (★☆☆) (**hint**: np.eye) 12. Create a 3x3x3 array with random values (★☆☆) (**hint**: np.random.random) 13. Create a 10x10 array with random values and find the minimum and maximum values (★☆☆) (**hint**: min, max) 14. Create a random vector of size 30 and find the mean value (★☆☆) (**hint**: mean) 15. Create a 2d array with 1 on the border and 0 inside (★☆☆) (**hint**: array\[1:-1, 1:-1\]) 16. How to add a border (filled with 0's) around an existing array? (★☆☆) (**hint**: np.pad) 17. What is the result of the following expression? (★☆☆) (**hint**: NaN = not a number, inf = infinity) ```python0 * np.nannp.nan == np.nannp.inf > np.nannp.nan - np.nannp.nan in set([np.nan])0.3 == 3 * 0.1``` nanFalseFalsenantruefalse 18. Create a 5x5 matrix with values 1,2,3,4 just below the diagonal (★☆☆) (**hint**: np.diag) 19. Create a 8x8 matrix and fill it with a checkerboard pattern (★☆☆) (**hint**: array\[::2\]) 20. Consider a (6,7,8) shape array, what is the index (x,y,z) of the 100th element? (**hint**: np.unravel_index) 21. Create a checkerboard 8x8 matrix using the tile function (★☆☆) (**hint**: np.tile)
###Code
import numpy as np
arr = np.array([50-100,100-100,150-100])
arr2 = np.array([100-100,100-100,100-100])
arr2
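# Illustrative sketch for exercise 21 (8x8 checkerboard via np.tile):
checkerboard = np.tile(np.array([[0, 1], [1, 0]]), (4, 4))
checkerboard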
###Output
_____no_output_____
###Markdown
22. Normalize a 5x5 random matrix (★☆☆) (**hint**: (x - mean) / std)
###Code
np.random.seed(42)
z = np.random.random((5,5))
z
z.mean()
z.std()
norm_z = (z-np.mean(z))/(np.std(z))
norm_z
###Output
_____no_output_____ |
trading_with_momentum.ipynb | ###Markdown
Udacity Artificial Intelligence for Trading Nanodegree - Project: Trading with Momentum IntroductionThis project implements a trading strategy and tests it to see if it has the potential to be profitable. InstructionsFollow the instructions in the [README](README.md) to set up the environment and install the requirements. Load Packages
###Code
import pandas as pd
import numpy as np
import helper
import project_helper
import project_tests
###Output
_____no_output_____
###Markdown
Market Data Load DataThe data we use for most of the projects is end of day data. This contains data for many stocks, but we'll be looking at stocks in the S&P 500. We also made things a little easier to run by narrowing down the time period instead of using all of the data.
###Code
df = pd.read_csv('./data/eod-quotemedia.csv', parse_dates=['date'], index_col=False)
close = df.reset_index().pivot(index='date', columns='ticker', values='adj_close')
print('Loaded Data')
###Output
Loaded Data
###Markdown
View DataRun the cell below to see what the data looks like for `close`.
###Code
project_helper.print_dataframe(close)
###Output
_____no_output_____
###Markdown
Stock ExampleLet's see what a single stock looks like from the closing prices. For this example and future display examples in this project, we'll use Apple's stock (AAPL). If we tried to graph all the stocks, it would be too much information.
###Code
apple_ticker = 'AAPL'
project_helper.plot_stock(close[apple_ticker], f'{apple_ticker} Stock')
###Output
_____no_output_____
###Markdown
Resample Adjusted PricesThe trading signal you'll develop in this project does not need to be based on daily prices; for instance, you can use month-end prices to perform trading once a month. To do this, you must first resample the daily adjusted closing prices into monthly buckets, and select the last observation of each month.Implement the `resample_prices` function to resample `close_prices` at the sampling frequency of `freq`.
###Code
def resample_prices(close_prices, freq='M'):
"""
Resample close prices for each ticker at specified frequency.
Parameters
----------
close_prices : DataFrame
Close prices for each ticker and date
freq : str
What frequency to sample at
For valid freq choices, see http://pandas.pydata.org/pandas-docs/stable/timeseries.html#offset-aliases
Returns
-------
prices_resampled : DataFrame
Resampled prices for each ticker and date
"""
prices_resampled = close_prices.resample(freq).last()
return prices_resampled
project_tests.test_resample_prices(resample_prices)
###Output
Tests Passed
###Markdown
View DataLet's apply this function to `close` and view the results.
###Code
monthly_close = resample_prices(close)
project_helper.plot_resampled_prices(
monthly_close.loc[:, apple_ticker],
close.loc[:, apple_ticker],
f'{apple_ticker} Stock - Close Vs Monthly Close')
###Output
_____no_output_____
###Markdown
Compute Log ReturnsCompute log returns ($R_t$) from prices ($P_t$) as your primary momentum indicator:$$R_t = log_e(P_t) - log_e(P_{t-1})$$Implement the `compute_log_returns` function below, such that it accepts a dataframe (like one returned by `resample_prices`), and produces a similar dataframe of log returns. Use Numpy's [log function](https://docs.scipy.org/doc/numpy/reference/generated/numpy.log.html) to help you calculate the log returns.
###Code
def compute_log_returns(prices):
"""
Compute log returns for each ticker.
Parameters
----------
prices : DataFrame
Prices for each ticker and date
Returns
-------
log_returns : DataFrame
Log returns for each ticker and date
"""
return np.log(prices) - np.log(prices.shift(periods=1))
project_tests.test_compute_log_returns(compute_log_returns)
###Output
Tests Passed
###Markdown
View DataUsing the same data returned from `resample_prices`, we'll generate the log returns.
###Code
monthly_close_returns = compute_log_returns(monthly_close)
project_helper.plot_returns(
monthly_close_returns.loc[:, apple_ticker],
f'Log Returns of {apple_ticker} Stock (Monthly)')
###Output
_____no_output_____
###Markdown
Shift ReturnsImplement the `shift_returns` function to shift the log returns to the previous or future returns in the time series. For example, the parameter `shift_n` is 2 and `returns` is the following:``` Returns A B C D2013-07-08 0.015 0.082 0.096 0.020 ...2013-07-09 0.037 0.095 0.027 0.063 ...2013-07-10 0.094 0.001 0.093 0.019 ...2013-07-11 0.092 0.057 0.069 0.087 ...... ... ... ... ...```the output of the `shift_returns` function would be:``` Shift Returns A B C D2013-07-08 NaN NaN NaN NaN ...2013-07-09 NaN NaN NaN NaN ...2013-07-10 0.015 0.082 0.096 0.020 ...2013-07-11 0.037 0.095 0.027 0.063 ...... ... ... ... ...```Using the same `returns` data as above, the `shift_returns` function should generate the following with `shift_n` as -2:``` Shift Returns A B C D2013-07-08 0.094 0.001 0.093 0.019 ...2013-07-09 0.092 0.057 0.069 0.087 ...... ... ... ... ... ...... ... ... ... ... ...... NaN NaN NaN NaN ...... NaN NaN NaN NaN ...```_Note: The "..." represents data points we're not showing._
###Code
def shift_returns(returns, shift_n):
"""
Generate shifted returns
Parameters
----------
returns : DataFrame
Returns for each ticker and date
shift_n : int
Number of periods to move, can be positive or negative
Returns
-------
shifted_returns : DataFrame
Shifted returns for each ticker and date
"""
return returns.shift(periods=shift_n)
project_tests.test_shift_returns(shift_returns)
###Output
Tests Passed
###Markdown
View DataLet's get the previous month's and next month's returns.
###Code
prev_returns = shift_returns(monthly_close_returns, 1)
lookahead_returns = shift_returns(monthly_close_returns, -1)
project_helper.plot_shifted_returns(
prev_returns.loc[:, apple_ticker],
monthly_close_returns.loc[:, apple_ticker],
'Previous Returns of {} Stock'.format(apple_ticker))
project_helper.plot_shifted_returns(
lookahead_returns.loc[:, apple_ticker],
monthly_close_returns.loc[:, apple_ticker],
'Lookahead Returns of {} Stock'.format(apple_ticker))
###Output
_____no_output_____
###Markdown
Generate Trading SignalA trading signal is a sequence of trading actions, or results that can be used to take trading actions. A common form is to produce a "long" and "short" portfolio of stocks on each date (e.g. end of each month, or whatever frequency you desire to trade at). This signal can be interpreted as rebalancing your portfolio on each of those dates, entering long ("buy") and short ("sell") positions as indicated.Here's a strategy that we will try:> For each month-end observation period, rank the stocks by _previous_ returns, from the highest to the lowest. Select the top performing stocks for the long portfolio, and the bottom performing stocks for the short portfolio.Implement the `get_top_n` function to get the top performing stocks for each month. Get the top performing stocks from `prev_returns` by assigning them a value of 1. For all other stocks, give them a value of 0. For example, using the following `prev_returns`:``` Previous Returns A B C D E F G2013-07-08 0.015 0.082 0.096 0.020 0.075 0.043 0.0742013-07-09 0.037 0.095 0.027 0.063 0.024 0.086 0.025... ... ... ... ... ... ... ...```The function `get_top_n` with `top_n` set to 3 should return the following:``` Previous Returns A B C D E F G2013-07-08 0 1 1 0 1 0 02013-07-09 0 1 0 1 0 1 0... ... ... ... ... ... ... ...```*Note: You may have to use Pandas' [`DataFrame.iterrows`](https://pandas.pydata.org/pandas-docs/version/0.21/generated/pandas.DataFrame.iterrows.html) with [`Series.nlargest`](https://pandas.pydata.org/pandas-docs/version/0.21/generated/pandas.Series.nlargest.html) in order to implement the function. This is one of those cases where creating a vectorized solution is too difficult.*
###Code
def get_top_n(prev_returns, top_n):
"""
Select the top performing stocks
Parameters
----------
prev_returns : DataFrame
Previous shifted returns for each ticker and date
top_n : int
The number of top performing stocks to get
Returns
-------
top_stocks : DataFrame
Top stocks for each ticker and date marked with a 1
"""
# Copy previous shifted returns as you should never modify something you are iterating over
top_stocks = prev_returns.copy()
top_stocks[:] = 0
for date, returns in prev_returns.iterrows():
top_stocks.loc[date, returns.nlargest(top_n).index] = 1
return top_stocks.astype('int')
project_tests.test_get_top_n(get_top_n)
###Output
Tests Passed
###Markdown
View DataWe want to get the best performing and worst performing stocks. To get the best performing stocks, we'll use the `get_top_n` function. To get the worst performing stocks, we'll also use the `get_top_n` function. However, we pass in `-1*prev_returns` instead of just `prev_returns`. Multiplying by negative one will flip all the positive returns to negative and negative returns to positive. Thus, it will return the worst performing stocks.
###Code
top_bottom_n = 50
df_long = get_top_n(prev_returns, top_bottom_n)
df_short = get_top_n(-1*prev_returns, top_bottom_n)
project_helper.print_top(df_long, 'Longed Stocks')
project_helper.print_top(df_short, 'Shorted Stocks')
###Output
10 Most Longed Stocks:
INCY, AMD, AVGO, NFLX, NFX, SWKS, ILMN, UAL, NVDA, MU
10 Most Shorted Stocks:
FCX, RRC, CHK, MRO, GPS, DVN, FTI, WYNN, NEM, KORS
###Markdown
Projected ReturnsIt's now time to check if your trading signal has the potential to become profitable!We'll start by computing the net returns this portfolio would return. For simplicity, we'll assume every stock gets an equal dollar amount of investment. This makes it easier to compute a portfolio's returns as the simple arithmetic average of the individual stock returns.Implement the `portfolio_returns` function to compute the expected portfolio returns. Using `df_long` to indicate which stocks to long and `df_short` to indicate which stocks to short, calculate the returns using `lookahead_returns`. To help with calculation, we've provided you with `n_stocks` as the number of stocks we're investing in a single period.
###Code
def portfolio_returns(df_long, df_short, lookahead_returns, n_stocks):
"""
Compute expected returns for the portfolio, assuming equal investment in each long/short stock.
Parameters
----------
df_long : DataFrame
Top stocks for each ticker and date marked with a 1
df_short : DataFrame
Bottom stocks for each ticker and date marked with a 1
lookahead_returns : DataFrame
Lookahead returns for each ticker and date
n_stocks: int
        The number of stocks chosen for each month
Returns
-------
portfolio_returns : DataFrame
Expected portfolio returns for each ticker and date
"""
return (df_long - df_short) * lookahead_returns / n_stocks
project_tests.test_portfolio_returns(portfolio_returns)
###Output
Tests Passed
###Markdown
View DataTime to see how the portfolio did.
###Code
expected_portfolio_returns = portfolio_returns(df_long, df_short, lookahead_returns, 2*top_bottom_n)
project_helper.plot_returns(expected_portfolio_returns.T.sum(), 'Portfolio Returns')
###Output
_____no_output_____
###Markdown
Statistical Tests Annualized Rate of Return
###Code
expected_portfolio_returns_by_date = expected_portfolio_returns.T.sum().dropna()
portfolio_ret_mean = expected_portfolio_returns_by_date.mean()
portfolio_ret_ste = expected_portfolio_returns_by_date.sem()
portfolio_ret_annual_rate = (np.exp(portfolio_ret_mean * 12) - 1) * 100
print(f"""
Mean: {portfolio_ret_mean:.6f}
Standard Error: {portfolio_ret_ste:.6f}
Annualized Rate of Return: {portfolio_ret_annual_rate:.2f}
""")
###Output
Mean: 0.003185
Standard Error: 0.002158
Annualized Rate of Return: 3.90
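###Markdown
For reference, the annualized figure printed above compounds the mean monthly log return over 12 months (a restatement of the code in the cell above, with $\bar{r}$ denoting the mean monthly log return):$$\text{annualized rate} = \left(e^{12\,\bar{r}} - 1\right) \times 100$$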
###Markdown
The annualized rate of return allows you to compare the rate of return from this strategy to other quoted rates of return, which are usually quoted on an annual basis. T-TestOur null hypothesis ($H_0$) is that the actual mean return from the signal is zero. We'll perform a one-sample, one-sided t-test on the observed mean return, to see if we can reject $H_0$.We'll need to first compute the t-statistic, and then find its corresponding p-value. The p-value will indicate the probability of observing a t-statistic equally or more extreme than the one we observed if the null hypothesis were true. A small p-value means that the chance of observing the t-statistic we observed under the null hypothesis is small, and thus casts doubt on the null hypothesis. It's good practice to set a desired level of significance or alpha ($\alpha$) _before_ computing the p-value, and then reject the null hypothesis if $p < \alpha$.For this project, we'll use $\alpha = 0.05$, since it's a common value to use.Implement the `analyze_alpha` function to perform a t-test on the sample of portfolio returns. We've imported the `scipy.stats` module for you to perform the t-test.Note: [`scipy.stats.ttest_1samp`](https://docs.scipy.org/doc/scipy-1.0.0/reference/generated/scipy.stats.ttest_1samp.html) performs a two-sided test, so divide the p-value by 2 to get 1-sided p-value
###Code
from scipy import stats
def analyze_alpha(expected_portfolio_returns_by_date):
"""
Perform a t-test with the null hypothesis being that the expected mean return is zero.
Parameters
----------
expected_portfolio_returns_by_date : Pandas Series
Expected portfolio returns for each date
Returns
-------
t_values
T-statistic from t-test
p_value
Corresponding p-value
"""
# Compute the t-statistic and p-value
t_statistic, p_value = stats.ttest_1samp(expected_portfolio_returns_by_date, popmean=0)
# As we want a one-sided t-test, we divide the p-value by 2
return t_statistic, p_value / 2.0
project_tests.test_analyze_alpha(analyze_alpha)
###Output
Tests Passed
###Markdown
View DataLet's see what values we get with our portfolio. After you run this, make sure to answer the question below.
###Code
t_value, p_value = analyze_alpha(expected_portfolio_returns_by_date)
print(f"""
Alpha analysis:
t-value: {t_value:.3f}
p-value: {p_value:.6f}
""")
###Output
Alpha analysis:
t-value: 1.476
p-value: 0.073339
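###Markdown
As a quick check against the significance level chosen earlier (a minimal sketch; `p_value` comes from the cell above and `alpha` is the 0.05 stated in the text):
###Code
alpha = 0.05
print('Reject H0' if p_value < alpha else 'Fail to reject H0 at alpha = 0.05')
###Output
_____no_output_____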
|
sagemaker/20_automatic_speech_recognition_inference/sagemaker-notebook.ipynb | ###Markdown
Automatic Speech Recognition with Hugging Face's Transformers & Amazon SageMaker Transformer models are changing the world of machine learning, starting with natural language processing, and now, with audio and computer vision. Hugging Face's mission is to democratize good machine learning and give anyone the opportunity to use these new state-of-the-art machine learning models. Together with Amazon SageMaker and AWS, we have been working on extending the functionalities of the Hugging Face Inference DLC and the Python SageMaker SDK to make it easier to use speech and vision models together with `transformers`. You can now use the Hugging Face Inference DLC to do [automatic speech recognition](https://huggingface.co/tasks/automatic-speech-recognition) using Meta AI's [wav2vec2](https://arxiv.org/abs/2006.11477) model or Microsoft's [WavLM](https://arxiv.org/abs/2110.13900), or use NVIDIA's [SegFormer](https://arxiv.org/abs/2105.15203) for [semantic segmentation](https://huggingface.co/tasks/image-segmentation).This guide will walk you through how to do [automatic speech recognition](https://huggingface.co/tasks/automatic-speech-recognition) using [wav2vec2](https://huggingface.co/facebook/wav2vec2-base-960h) and the new `DataSerializer`.In this example you will learn how to: 1. Setup a development environment and permissions for deploying Amazon SageMaker Inference Endpoints.2. Deploy a wav2vec2 model to Amazon SageMaker for automatic speech recognition3. Send requests to the endpoint to do speech recognition. Let's get started! 🚀---*If you are going to use SageMaker in a local environment (not SageMaker Studio or Notebook Instances), you need access to an IAM Role with the required permissions for SageMaker. You can find more about it [here](https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-roles.html).* 1. Setup a development environment and permissions for deploying Amazon SageMaker Inference Endpoints.Setting up the development environment and permissions needs to be done for the automatic-speech-recognition example and the semantic-segmentation example. First we update the `sagemaker` SDK to make sure we have the new `DataSerializer`.
###Code
%pip install sagemaker --upgrade
import sagemaker
assert sagemaker.__version__ >= "2.86.0"
###Output
_____no_output_____
###Markdown
After we have updated the SDK, we can set the permissions._If you are going to use SageMaker in a local environment (not SageMaker Studio or Notebook Instances), you need access to an IAM Role with the required permissions for SageMaker. You can find more about it [here](https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-roles.html)._
###Code
import sagemaker
import boto3
sess = sagemaker.Session()
# sagemaker session bucket -> used for uploading data, models and logs
# sagemaker will automatically create this bucket if it not exists
sagemaker_session_bucket=None
if sagemaker_session_bucket is None and sess is not None:
# set to default bucket if a bucket name is not given
sagemaker_session_bucket = sess.default_bucket()
try:
role = sagemaker.get_execution_role()
except ValueError:
iam = boto3.client('iam')
role = iam.get_role(RoleName='sagemaker_execution_role')['Role']['Arn']
sess = sagemaker.Session(default_bucket=sagemaker_session_bucket)
print(f"sagemaker role arn: {role}")
print(f"sagemaker bucket: {sess.default_bucket()}")
print(f"sagemaker session region: {sess.boto_region_name}")
###Output
Couldn't call 'get_role' to get Role ARN from role name philippschmid to get Role path.
###Markdown
2. Deploy a wav2vec2 model to Amazon SageMaker for automatic speech recognitionAutomatic Speech Recognition (ASR), also known as Speech to Text (STT), is the task of transcribing a given audio to text. It has many applications, such as voice user interfaces.We use the [facebook/wav2vec2-base-960h](https://huggingface.co/facebook/wav2vec2-base-960h) model to run our recognition endpoint. This model is a fine-tuned checkpoint of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base), pretrained and fine-tuned on 960 hours of Librispeech 16kHz sampled speech audio, achieving 1.8/3.3 WER on the clean/other test sets.
###Code
from sagemaker.huggingface.model import HuggingFaceModel
from sagemaker.serializers import DataSerializer
# Hub Model configuration. <https://huggingface.co/models>
hub = {
'HF_MODEL_ID':'facebook/wav2vec2-base-960h',
'HF_TASK':'automatic-speech-recognition',
}
# create Hugging Face Model Class
huggingface_model = HuggingFaceModel(
env=hub, # configuration for loading model from Hub
role=role, # iam role with permissions to create an Endpoint
transformers_version="4.17", # transformers version used
pytorch_version="1.10", # pytorch version used
py_version='py38', # python version used
)
###Output
_____no_output_____
###Markdown
Before we can deploy our `HuggingFaceModel`, we need to create a new serializer that supports our audio data. Serializers are used by the Predictor in the `predict` method to serialize our data to a specific `mime-type` before it is sent to the endpoint. The default serializer for the HuggingFacePredictor is a JSON serializer, but since we are not going to send text data to the endpoint, we will use the DataSerializer.
###Code
# create a serializer for the data
audio_serializer = DataSerializer(content_type='audio/x-audio') # using x-audio to support multiple audio formats
# deploy model to SageMaker Inference
predictor = huggingface_model.deploy(
initial_instance_count=1, # number of instances
instance_type='ml.g4dn.xlarge', # ec2 instance type
serializer=audio_serializer, # serializer for our audio data.
)
###Output
-----------!
###Markdown
3. Send requests to the endpoint to do speech recognition.The `.deploy()` call returns a `HuggingFacePredictor` object with our `DataSerializer`, which can be used to request inference. This `HuggingFacePredictor` makes it easy to send requests to your endpoint and get the results back.We will use two different methods to send requests to the endpoint:a. Provide an audio file via path to the predictor b. Provide a binary audio data object to the predictor a. Provide an audio file via path to the predictorUsing an audio file as input is as easy as providing the path to its location. The `DataSerializer` will then read it and send the bytes to the endpoint. We can use a LibriSpeech sample hosted on huggingface.co
###Code
!wget https://cdn-media.huggingface.co/speech_samples/sample1.flac
###Output
_____no_output_____
###Markdown
To send a request, we provide the path to the audio file and use the following code:
###Code
audio_path = "sample1.flac"
res = predictor.predict(data=audio_path)
print(res)
###Output
{'text': "GOING ALONG SLUSHY COUNTRY ROADS AND SPEAKING TO DAMP AUDIENCES IN DRAUGHTY SCHOOL ROOMS DAY AFTER DAY FOR A FORTNIGHT HE'LL HAVE TO PUT IN AN APPEARANCE AT SOME PLACE OF WORSHIP ON SUNDAY MORNING AND HE CAN COME TO US IMMEDIATELY AFTERWARDS"}
###Markdown
b. Provide a binary audio data object to the predictorInstead of providing a path to the audio file, we can also directly provide its bytes by reading the file in Python._make sure `sample1.flac` is in the directory_
###Code
audio_path = "sample1.flac"
with open(audio_path, "rb") as data_file:
audio_data = data_file.read()
res = predictor.predict(data=audio_data)
print(res)
###Output
{'text': "GOING ALONG SLUSHY COUNTRY ROADS AND SPEAKING TO DAMP AUDIENCES IN DRAUGHTY SCHOOL ROOMS DAY AFTER DAY FOR A FORTNIGHT HE'LL HAVE TO PUT IN AN APPEARANCE AT SOME PLACE OF WORSHIP ON SUNDAY MORNING AND HE CAN COME TO US IMMEDIATELY AFTERWARDS"}
###Markdown
Clean up
###Code
predictor.delete_model()
predictor.delete_endpoint()
###Output
_____no_output_____ |
examples/.ipynb_checkpoints/template-Copy1-checkpoint.ipynb | ###Markdown
Think BayesThis notebook presents example code and exercise solutions for Think Bayes.Copyright 2016 Allen B. DowneyMIT License: https://opensource.org/licenses/MIT
###Code
# Configure Jupyter so figures appear in the notebook
%matplotlib inline
# Configure Jupyter to display the assigned value after an assignment
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'
# import classes from thinkbayes2
from thinkbayes2 import Pmf, Suite, Joint, MakePoissonPmf, MakeNormalPmf, MakeMixture, EvalWeibullCdf, EvalWeibullPdf, MakeBinomialPmf
from scipy.stats import lognorm
import numpy as np
import thinkplot
import itertools
from math import exp,sqrt
class Weight(Suite, Joint):
def Likelihood(self, data, hypo):
mu,sigma = hypo
        return lognorm.pdf(data, sigma, scale=exp(mu))  # likelihood of the observed value 'data' under LogNormal(mu, sigma)
lognorm.pdf(1,1,scale=exp(1))
class LightBulb(Suite,Joint):
def Likelihood(self,data,hypo):
lam,k = hypo
return EvalWeibullPdf(data,lam,k)
suite = LightBulb(itertools.product(np.linspace(.1,1,10),np.linspace(.1,1,10)));
suite.Update(.75);
suite.Update(1);
suite.Update(2);
x=0
for hypo,p in suite.Items():
lam,k = hypo
x += p*EvalWeibullCdf(1,lam,k)
print(x)
%psource MakeBinomialPmf
###Output
_____no_output_____
###Markdown
Think BayesThis notebook presents example code and exercise solutions for Think Bayes.Copyright 2016 Allen B. DowneyMIT License: https://opensource.org/licenses/MIT
###Code
# Configure Jupyter so figures appear in the notebook
%matplotlib inline
# Configure Jupyter to display the assigned value after an assignment
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'
# import classes from thinkbayes2
from thinkbayes2 import Pmf, Suite
import thinkplot
class Subclass(Suite):
def Likelihood(self, data, hypo):
"""Computes the likelihood of the data under the hypothesis.
data:
hypo:
"""
like = 1
return like
prior = Subclass([1,2,3])
thinkplot.Hist(prior)
thinkplot.Config(xlabel='x', ylabel='PMF')
posterior = prior.Copy()
posterior.Update(1)
thinkplot.Hist(prior, color='gray')
thinkplot.Hist(posterior)
thinkplot.Config(xlabel='x', ylabel='PMF')
###Output
No handles with labels found to put in legend.
|
autogluon-tabular/AutoGluon_Tabular_SageMaker.ipynb | ###Markdown
Using AutoGluon-Tabular on Amazon SageMaker[AutoGluon](https://github.com/awslabs/autogluon) automates the construction of highly accurate machine learning models. With just a few lines of code, you can train and deploy accurate deep learning models on tabular, image, and text data. This notebook shows how to use AutoGluon-Tabular on Amazon SageMaker with a custom container. SetupThis tutorial uses the `conda_mxnet_p36` kernel.
###Code
# Make sure docker compose is set up properly for local mode
!./setup.sh
# Imports
import os
import boto3
import sagemaker
from time import sleep
from collections import Counter
import numpy as np
import pandas as pd
from sagemaker import get_execution_role, local, Model, utils, fw_utils, s3
from sagemaker.estimator import Estimator
from sagemaker.predictor import RealTimePredictor, csv_serializer, StringDeserializer
from sklearn.metrics import accuracy_score, classification_report
from IPython.core.display import display, HTML
from IPython.core.interactiveshell import InteractiveShell
# Print settings
InteractiveShell.ast_node_interactivity = "all"
pd.set_option('display.max_columns', 500)
pd.set_option('display.max_rows', 10)
# Account/s3 setup
session = sagemaker.Session()
local_session = local.LocalSession()
bucket = session.default_bucket()
prefix = 'sagemaker/autogluon-tabular'
region = session.boto_region_name
role = get_execution_role()
client = session.boto_session.client(
"sts", region_name=region, endpoint_url=utils.sts_regional_endpoint(region)
)
account = client.get_caller_identity()['Account']
ecr_uri_prefix = utils.get_ecr_image_uri_prefix(account, region)
registry_id = fw_utils._registry_id(region, 'mxnet', 'py3', account, '1.6.0')
registry_uri = utils.get_ecr_image_uri_prefix(registry_id, region)
###Output
_____no_output_____
###Markdown
Building the Docker images First, build the autogluon package so that it can be copied into the Docker image.
###Code
if not os.path.exists('package'):
!pip install PrettyTable -t package
!pip install --upgrade boto3 -t package
!pip install bokeh -t package
!pip install --upgrade matplotlib -t package
!pip install autogluon -t package
###Output
_____no_output_____
###Markdown
Next, build the training and inference container images and upload them to Amazon Elastic Container Registry (ECR).
###Code
training_algorithm_name = 'autogluon-sagemaker-training'
inference_algorithm_name = 'autogluon-sagemaker-inference'
!./container-training/build_push_training.sh {account} {region} {training_algorithm_name} {ecr_uri_prefix} {registry_id} {registry_uri}
!./container-inference/build_push_inference.sh {account} {region} {inference_algorithm_name} {ecr_uri_prefix} {registry_id} {registry_uri}
###Output
_____no_output_____
###Markdown
Getting the data This sample builds a binary classification model that predicts whether a direct marketing offer will be accepted. We download the data and split it into training and test sets. AutoGluon performs k-fold cross-validation automatically, so there is no need to split off a separate validation set in advance.
###Code
# Download and unzip the data
!aws s3 cp --region {region} s3://sagemaker-sample-data-{region}/autopilot/direct_marketing/bank-additional.zip .
!unzip -qq -o bank-additional.zip
!rm bank-additional.zip
local_data_path = './bank-additional/bank-additional-full.csv'
data = pd.read_csv(local_data_path)
# Split train/test data
train = data.sample(frac=0.7, random_state=42)
test = data.drop(train.index)
# Split test X/y
label = 'y'
y_test = test[label]
X_test = test.drop(columns=[label])
###Output
_____no_output_____
###Markdown
Inspecting the data
###Code
train.head(3)
train.shape
test.head(3)
test.shape
X_test.head(3)
X_test.shape
###Output
_____no_output_____
###Markdown
Upload the data to Amazon S3.
###Code
train_file = 'train.csv'
train.to_csv(train_file,index=False)
train_s3_path = session.upload_data(train_file, key_prefix='{}/data'.format(prefix))
test_file = 'test.csv'
test.to_csv(test_file,index=False)
test_s3_path = session.upload_data(test_file, key_prefix='{}/data'.format(prefix))
X_test_file = 'X_test.csv'
X_test.to_csv(X_test_file,index=False)
X_test_s3_path = session.upload_data(X_test_file, key_prefix='{}/data'.format(prefix))
###Output
_____no_output_____
###Markdown
Setting hyperparametersThe minimal configuration is to set `fit_args['label']`. Additional settings can also be passed through `fit_args` to `autogluon.task.TabularPrediction.fit`. The example below, following [Predicting Columns in a Table - In Depth](https://autogluon.mxnet.io/tutorials/tabular_prediction/tabular-indepth.htmlmodel-ensembling-with-stacking-bagging), configures AutoGluon-Tabular hyperparameters in more detail. See [fit parameters](https://autogluon.mxnet.io/api/autogluon.task.html?highlight=eval_metricautogluon.task.TabularPrediction.fit) for details. When configuring this through SageMaker, each value in `fit_args['hyperparameters']` must be passed as a string.```pythonnn_options = { 'num_epochs': "10", 'learning_rate': "ag.space.Real(1e-4, 1e-2, default=5e-4, log=True)", 'activation': "ag.space.Categorical('relu', 'softrelu', 'tanh')", 'layers': "ag.space.Categorical([100],[1000],[200,100],[300,200,100])", 'dropout_prob': "ag.space.Real(0.0, 0.5, default=0.1)"}gbm_options = { 'num_boost_round': "100", 'num_leaves': "ag.space.Int(lower=26, upper=66, default=36)"}model_hps = {'NN': nn_options, 'GBM': gbm_options} fit_args = { 'label': 'y', 'presets': ['best_quality', 'optimize_for_deployment'], 'time_limits': 60*10, 'hyperparameters': model_hps, 'hyperparameter_tune': True, 'search_strategy': 'skopt'}hyperparameters = { 'fit_args': fit_args, 'feature_importance': True}```**Note:** The choice of hyperparameters can affect the size of the model package, which may make model upload and training take longer. Adding `'optimize_for_deployment'` to `fit_args['presets']` shortens the upload time.
###Code
# Define required label and optional additional parameters
fit_args = {
'label': 'y',
# Adding 'best_quality' to presets list will result in better performance (but longer runtime)
'presets': ['optimize_for_deployment'],
}
# Pass fit_args to SageMaker estimator hyperparameters
hyperparameters = {
'fit_args': fit_args,
'feature_importance': True
}
###Output
_____no_output_____
###Markdown
TrainingTo train on the notebook instance itself, set `train_instance_type` to `local`; when using a dedicated training instance, `ml.m5.2xlarge` is recommended.**Note:** Depending on how many types of models are trained, you may need to increase `train_volume_size`.
###Code
%%time
instance_type = 'ml.m5.2xlarge'
#instance_type = 'local'
ecr_image = f'{ecr_uri_prefix}/{training_algorithm_name}:latest'
estimator = Estimator(image_name=ecr_image,
role=role,
train_instance_count=1,
train_instance_type=instance_type,
hyperparameters=hyperparameters,
train_volume_size=100)
# Set inputs. Test data is optional, but requires a label column.
inputs = {'training': train_s3_path, 'testing': test_s3_path}
estimator.fit(inputs)
###Output
_____no_output_____
###Markdown
Creating the model
###Code
# Create predictor object
class AutoGluonTabularPredictor(RealTimePredictor):
def __init__(self, *args, **kwargs):
super().__init__(*args, content_type='text/csv',
serializer=csv_serializer,
deserializer=StringDeserializer(), **kwargs)
ecr_image = f'{ecr_uri_prefix}/{inference_algorithm_name}:latest'
if instance_type == 'local':
model = estimator.create_model(image=ecr_image, role=role)
else:
model_uri = os.path.join(estimator.output_path, estimator._current_job_name, "output", "model.tar.gz")
model = Model(model_uri, ecr_image, role=role, sagemaker_session=session, predictor_cls=AutoGluonTabularPredictor)
###Output
_____no_output_____
###Markdown
Batch transform In local mode, either `s3:////output/` or `file:///` can be used as the output location. By including the label column in the test data, you can also evaluate prediction accuracy (in this example, `test_s3_path` is passed instead of `X_test_s3_path`).
###Code
output_path = f's3://{bucket}/{prefix}/output/'
# output_path = f'file://{os.getcwd()}'
transformer = model.transformer(instance_count=1,
instance_type=instance_type,
strategy='MultiRecord',
max_payload=6,
max_concurrent_transforms=1,
output_path=output_path)
transformer.transform(test_s3_path, content_type='text/csv', split_type='Line')
transformer.wait()
###Output
_____no_output_____
###Markdown
Inference endpoint Deployment in local mode
###Code
instance_type = 'ml.m5.2xlarge'
#instance_type = 'local'
predictor = model.deploy(initial_instance_count=1,
instance_type=instance_type)
###Output
_____no_output_____
###Markdown
Attaching to the endpoint
###Code
# Select standard or local session based on instance_type
if instance_type == 'local':
sess = local_session
else:
sess = session
# Attach to endpoint
predictor = AutoGluonTabularPredictor(predictor.endpoint, sagemaker_session=sess)
###Output
_____no_output_____
###Markdown
Inference on unlabeled data
###Code
results = predictor.predict(X_test.to_csv(index=False)).splitlines()
# Check output
print(Counter(results))
###Output
_____no_output_____
###Markdown
Inference on data that includes the label column Accuracy metrics are written to the endpoint logs.
###Code
results = predictor.predict(test.to_csv(index=False)).splitlines()
# Check output
print(Counter(results))
###Output
_____no_output_____
###Markdown
Checking the classification metrics
###Code
y_results = np.array(results)
print("accuracy: {}".format(accuracy_score(y_true=y_test, y_pred=y_results)))
print(classification_report(y_true=y_test, y_pred=y_results, digits=6))
###Output
_____no_output_____
###Markdown
Deleting the endpoint
###Code
predictor.delete_endpoint()
###Output
_____no_output_____ |
developer-data-analysis.ipynb | ###Markdown
Data Cleaning
###Code
# Loading file data
df = pd.read_csv("stackoverflow_data.csv")
X = pd.DataFrame()
X['OpenSource'] = df['OpenSource'].eq('Yes').mul(1)
X['Hobby'] = df['Hobby'].eq('Yes').mul(1)
X['Student'] = df['Student'].str.contains('Yes').mul(1)
# YearsCoding
YearCodingMap = {
'3-5 years':4,
'30 or more years':30,
'24-26 years':25,
'18-20 years':19,
'6-8 years':7,
'9-11 years':10,
'0-2 years':1,
'15-17 years':16,
'12-14 years':13,
'21-23 years':22,
'27-29 years':28,
}
X['YearsCoding'] = df['YearsCoding'].replace(YearCodingMap)
X['YearsCoding'].fillna(X['YearsCoding'].mean(),inplace=True)
companySizeMap = {
'20 to 99 employees':60,
'10,000 or more employees':10000,
'100 to 499 employees':300,
'10 to 19 employees':15,
'500 to 999 employees':750,
'1,000 to 4,999 employees':3000,
'5,000 to 9,999 employees':7500,
'Fewer than 10 employees':10
}
X['CompanySize'] = df['CompanySize'].replace(companySizeMap)
X.dropna(subset=["CompanySize"],inplace=True) #Droping NaN values
# Formal Education
dummy1 = pd.get_dummies(df['FormalEducation'], drop_first=True)
dummy1.drop(['I never completed any formal education','Primary/elementary school','Secondary school (e.g. American high school, German Realschule or Gymnasium, etc.)','Some college/university study without earning a degree'],axis=1,inplace=True)
X = X.join(dummy1)  # join returns a new frame, so assign it back to X
# AssessJob and Benefits Added
df.fillna(df.iloc[:,17:38].mean(),inplace=True)
for col in df.iloc[:,17:38].columns:
X[col] = df[col]
# JobSatisfaction Mapping
SatisfactionMapping = {
'Extremely satisfied':6,
'Moderately satisfied':5,
'Slightly satisfied':4,
'Neither satisfied nor dissatisfied':3,
'Moderately dissatisfied':2,
'Slightly dissatisfied':1,
'Extremely dissatisfied':0
}
df['JobSatisfaction'].replace(SatisfactionMapping,inplace=True)
df.fillna({'JobSatisfaction':3},inplace=True)
X['JobSatisfaction'] = df['JobSatisfaction']
# CareerSatisfaction
df['CareerSatisfaction'].replace(SatisfactionMapping,inplace=True)
df.fillna({'CareerSatisfaction':3},inplace=True)
X['CareerSatisfaction'] = df['CareerSatisfaction']
# HackathonReasons
X['HackathonParticipated'] = df['HackathonReasons'].notna()*1
# ConvertedSalary
X['ConvertedSalary'] = df['ConvertedSalary']
X.dropna(subset=["ConvertedSalary"],inplace=True) #Droping NaN values
###Output
_____no_output_____
###Markdown
One Hot Encoding
###Code
def CustomOneHotEncoding(data,X):
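    # Split each semicolon-delimited string into its component values and add one
    # 0/1 indicator column to X for every unique value found across the column.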
temp = data.str.split(';', expand=True)
new_columns = pd.unique(temp.values.ravel())
for col in new_columns:
if col is not None and col is not np.NaN:
X[col] = data.str.contains(col, regex=False).fillna(False)*1
# LanguageWorkedWith
CustomOneHotEncoding(df['LanguageWorkedWith'],X)
CustomOneHotEncoding(df['DevType'],X)
CustomOneHotEncoding(df['DatabaseWorkedWith'],X)
CustomOneHotEncoding(df['PlatformWorkedWith'],X)
CustomOneHotEncoding(df['FrameworkWorkedWith'],X)
CustomOneHotEncoding(df['IDE'],X)
# Methodology
CustomOneHotEncoding(df['Methodology'],X)
# RaceEthnicity
CustomOneHotEncoding(df['RaceEthnicity'],X)
# CheckInCode
CheckInCodeMapping = {
'Multiple times per day':730,
'A few times per week':156,
'Weekly or a few times per month':52,
'Never':0,
'Less than once per month':12,
'Once a day':365
}
X['CheckInCode'] = df['CheckInCode'].replace(CheckInCodeMapping)
X['CheckInCode'].fillna(X['CheckInCode'].mean(),inplace=True)
AgeMapping = {
'25 - 34 years old':29.5,
'35 - 44 years old':39.5,
'18 - 24 years old':21,
'45 - 54 years old':49.5,
'55 - 64 years old':59.5,
'Under 18 years old':18,
'65 years or older':65
}
X['Age'] = df['Age'].replace(AgeMapping)
X.dropna(subset=["Age"],inplace=True) #Droping NaN values
X['MilitaryUS'] = (df['MilitaryUS']=='Yes')*1
X['Dependents'] = (df['Dependents']=='Yes')*1
X['Gender'] = (df['Gender']=='Female')*1
# Exercise
ExerciseFreqMap = {
'3 - 4 times per week':((3+4)/2)*52,
'Daily or almost every day':365,
"I don't typically exercise":0,
'1 - 2 times per week':52
}
X['Exercise'] = df['Exercise'].replace(ExerciseFreqMap)
X['Exercise'].fillna(X['Exercise'].mean(),inplace=True)
# HoursCompMap
HoursCompMap = {
'9 - 12 hours':10.5,
'5 - 8 hours':6.5,
'Over 12 hours':12,
'1 - 4 hours':2.5,
'Less than 1 hour':1
}
X['HoursComputer'] = df['HoursComputer'].replace(HoursCompMap)
X['HoursComputer'].fillna(X['HoursComputer'].mean(),inplace=True)
# HypotheticalTools1-5
HypoToolMap = {
'Extremely interested':5,
'Very interested':4,
'Somewhat interested':3,
'A little bit interested':2,
'Not at all interested':1
}
hypotheticalToolsList = ['HypotheticalTools1','HypotheticalTools2','HypotheticalTools3','HypotheticalTools4','HypotheticalTools5']
for col in hypotheticalToolsList:
X[col] = df[col].replace(HypoToolMap)
X[col].fillna(X[col].median(),inplace=True)
# EducationParents -> Higher Educated Parents
EducatedParentsMap = {
"Bachelor’s degree (BA, BS, B.Eng., etc.)":1,
'Some college/university study without earning a degree':0,
'Secondary school (e.g. American high school, German Realschule or Gymnasium, etc.)':0,
"Master’s degree (MA, MS, M.Eng., MBA, etc.)":1,
'Primary/elementary school':0,
'Associate degree':1,
'They never completed any formal education':0,
'Other doctoral degree (Ph.D, Ed.D., etc.)':1,
'Professional degree (JD, MD, etc.)':1
}
X['ParentsWithHighEducation'] = df['EducationParents'].replace(EducatedParentsMap)
X.dropna(subset=["ParentsWithHighEducation"],inplace=True) #Droping NaN values
SelfTaughtValues = ['Some college/university study without earning a degree','I never completed any formal education','Primary/elementary school','Secondary school (e.g. American high school, German Realschule or Gymnasium, etc.)']
X['SelfTaught'] = df['FormalEducation'].isin(SelfTaughtValues)*1
InferiorityMap = {
'Neither Agree nor Disagree':0,
'Strongly disagree':0,
'Strongly agree':1,
'Disagree':0,
'Agree':1
}
X['FeelingInferior'] = df['AgreeDisagree3'].replace(InferiorityMap)
X['FeelingInferior'].fillna(0,inplace=True)
###Output
_____no_output_____
###Markdown
Hypothesis Testing
###Code
X_dummy=X
#Age and Job and Career Satisfaction
#Null Hypothesis Career Satisfaction remains same at all Age level.
careerSatisfied = X_dummy[X_dummy['CareerSatisfaction']==1][['Age']]
careerNotSatisfied = X_dummy[X_dummy['CareerSatisfaction']==0][['Age']]
u,p=stats.mannwhitneyu(careerSatisfied,careerNotSatisfied)
print(p)
# Null hypothesis not rejected, hence we cannot say that career satisfaction differs between older and younger people
# Null hypothesis: self-taught people are no more likely to feel inferior than people with traditional degrees
self_inferior =X_dummy[X_dummy['SelfTaught']==1][['FeelingInferior']].sum()
self_non_inferior = X_dummy[X_dummy['SelfTaught']==1][['FeelingInferior']].shape[0] - self_inferior
not_self_inferior =X_dummy[X_dummy['SelfTaught']==0][['FeelingInferior']].sum()
not_self_non_inferior = X_dummy[X_dummy['SelfTaught']==0][['FeelingInferior']].shape[0] - not_self_inferior
cat_matrix = [
[self_inferior,self_non_inferior],
[not_self_inferior,not_self_non_inferior]
]
chi2, pchi, dof, ex = stats.chi2_contingency(cat_matrix)
pchi
# The p-value is less than the significance level (0.05), so we reject the null hypothesis,
# which suggests that a considerable number of self-taught people feel they are not as good as their peers.
#Null Hypothesis Career Satisfaction remains same at all Age level.
M1=X_dummy['Age']
M2=X_dummy['CareerSatisfaction']
u,p=stats.mannwhitneyu(M1,M2)
print(p)
#%% Hypothesis: US male and female developers are equally paid
# The code is written so that any country can be compared.
# Performing a t-test, assuming the sample is representative of the actual population.
df.dropna(subset=['ConvertedSalary'], inplace=True)
df_allgender=df.groupby(['Gender']).count()
df_allgender.sort_values(by=['Respondent'], ascending=False, inplace=True)
df_allgender.iloc[0:3, 0].plot.bar()
df_allgender=df_allgender.T
df_gender= df_allgender[['Female', 'Male']]
df_gender=df_gender.iloc[0, :]
df_female= df[df['Gender']== 'Female']
df_male= df[df['Gender']== 'Male']
femaleSalaries_df= df_female[['ConvertedSalary']]
maleSalaries_df= df_male[['ConvertedSalary']]
t,p= stats.ttest_ind(femaleSalaries_df, maleSalaries_df)
print(p)
# As p < 0.05, we reject the null hypothesis of equal pay; the data do not support the claim that male and female developers are equally paid.
###Output
[0.04068129]
###Markdown
PCA
###Code
# Calcuating zscore for normalizing the dataset
zscoredData = stats.zscore(X)
from sklearn.decomposition import PCA
pca = PCA().fit(zscoredData)
eigValues = pca.explained_variance_
loadings_v = pca.components_
u = pca.fit_transform(zscoredData)
covarExplained = (sum(eigValues[:10])/sum(eigValues))*100
covarExplained
eigValues>1 #50 columns
X_transformed = u[:,0:50]
X_transformed.shape
import matplotlib.pyplot as plt
numPredictors = X.shape[1]
plt.bar(np.linspace(1,numPredictors,numPredictors),eigValues)
plt.axhline(y=1, color='r', linestyle='-')
plt.title('Scree plot')
plt.xlabel('Factors')
plt.ylabel('Eigenvalues')
maxFeat=[]
for i in range(47):
maxFeat.append(np.argmax(loadings_v[i,:]*-1))
set(maxFeat)
###Output
_____no_output_____
###Markdown
Clustering
###Code
X.loc[X['JobSatisfaction'] <= 3.0, 'JobSatisfaction'] = 0
X.loc[X['JobSatisfaction'] > 3.0, 'JobSatisfaction'] = 1
X['JobSatisfaction'].value_counts()
col=X.columns
for i in range(10):
plt.figure()
f1=col[i]
f2=col[i+1]
plt.scatter(X_transformed[:,i], X_transformed[:,i+1])
plt.xlabel(f1)
plt.ylabel(f2)
plt.show()
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_samples
numClusters = 9 # how many clusters are we looping over? (from 2 to 10)
Q = np.empty([numClusters,1]) # init container to store sums
Q[:] = np.NaN # convert to NaN
ans=[]
plt.figure()
# Compute kMeans:
for ii in range(2, 11): # Loop through each cluster (from 2 to 10!)
kMeans = KMeans(n_clusters = int(ii)).fit(X_transformed) # compute kmeans
cId = kMeans.labels_ # vector of cluster IDs that the row belongs to
cCoords = kMeans.cluster_centers_ # coordinate location for center of each cluster
my_dict = {cCoords[i, 0]: np.where(cId== i)[0] for i in range(kMeans.n_clusters)}
ans.append(my_dict)
s = silhouette_samples(X_transformed,cId) # compute the mean silhouette coefficient of all samples
# print(s.shape)
Q[ii-2] = sum(s) # take sum
# Plot data:
plt.subplot(3,3,ii-1)
plt.hist(s,bins=100)
plt.xlim(-0.2,1)
plt.ylim(0,500)
plt.xlabel('Silhouette score')
plt.ylabel('Count')
plt.title('Sum: {}'.format(int(Q[ii-2])))
plt.figure()
plt.plot(np.linspace(2,10,numClusters),Q)
plt.xlabel('Number of clusters')
plt.ylabel('Sum of silhouette scores')
plt.show()
c= (np.argmax(Q)+2)
#Which two features do you want to visualize
v=[0,2]
plt.figure()
indexVector = np.linspace(1,c,c)
for ii in indexVector:
plotIndex = np.argwhere(cId == int(ii-1))
plt.plot(u[plotIndex,v[0]],u[plotIndex,v[1]],'o',markersize=1)
plt.plot(cCoords[int(ii-1),v[0]],cCoords[int(ii-1),v[1]],'o',markersize=5,color='black')
plt.xlabel('Questions')
plt.ylabel('Loadings')
kMeans = KMeans(n_clusters = 2).fit(X_transformed)
y_pred=kMeans.fit_predict(X_transformed)
y_pred
###Output
_____no_output_____
###Markdown
Classification
###Code
X_transformed
df=pd.DataFrame(X_transformed)
df['cluster']=y_pred
df['cluster'].value_counts()
Y=X['JobSatisfaction']
Y
from sklearn.ensemble import RandomForestClassifier
X_train,X_test,y_train,y_test=train_test_split(X_transformed,Y)
clf=RandomForestClassifier()
clf.fit(X_train,y_train)
# y_hat=clf.predict(X_test,y_test)
print(clf.score(X_test, y_test))
from sklearn.linear_model import LogisticRegression
clf=LogisticRegression()
clf.fit(X_train,y_train)
print(clf.score(X_test, y_test))
from sklearn.svm import SVC
kernels = ['linear','rbf','poly']
for kernel in kernels:
clf=SVC(kernel=kernel)
clf.fit(X_train,y_train)
print(kernel,clf.score(X_test, y_test))
###Output
linear 0.8988944939358162
rbf 0.8940646130728775
poly 0.8850488354620586
###Markdown
Regression
###Code
X_new=X
X_new.shape
Y=X['ConvertedSalary']
Y
sc=StandardScaler()
sc.fit(X_new)
X_new=sc.transform(X_new)
X_train,X_test,y_train,y_test=train_test_split(X_new,Y)
alphas = [0.0, 1e-8, 1e-5, 0.1, 1, 10]
alphaErrMap = {}
for alpha in alphas:
reg = Ridge(alpha=alpha)
reg.fit(X_train,y_train)
df_Y_test_pred = reg.predict(X_test)
testing_error = mean_squared_error(y_test, df_Y_test_pred)
# iI) testing error
print("testing error",alpha, testing_error)
alphaErrMap[alpha] = testing_error
optimal_alpha = min(alphaErrMap, key=alphaErrMap.get)
print("optimal_alpha",optimal_alpha,alphaErrMap[optimal_alpha])
pd.DataFrame(df_Y_test_pred, y_test)
alphas = [1e-3, 1e-2, 1e-1, 1]
for alpha in alphas:
est=make_pipeline(Lasso(alpha=alpha))
est.fit(X_train, y_train)
Y_hat=est.predict(X_test)
print(est.score(X_test, y_test))
from sklearn.ensemble import RandomForestRegressor
regr = RandomForestRegressor()
regr.fit(X_train, y_train)
Y_hat=est.predict(X_test)
print(regr.score(X_test, y_test))
pd.DataFrame(Y_hat, y_test)
import xgboost as xgb
from sklearn.model_selection import KFold
from sklearn.model_selection import RepeatedKFold
from sklearn.model_selection import cross_val_score
from sklearn.metrics import auc, accuracy_score, confusion_matrix, mean_squared_error
xg_reg = xgb.XGBRegressor(objective ='reg:linear', colsample_bytree = 0.3, learning_rate = 0.1,
max_depth = 10, alpha = 10, n_estimators = 10)
xg_reg.fit(X_train,y_train)
cv = RepeatedKFold(n_splits=10, n_repeats=3, random_state=1)
results = cross_val_score(xg_reg, X_train, y_train, scoring='neg_mean_absolute_error', cv=cv, n_jobs=-1)
scores = np.absolute(results)
print('Mean MAE: %.3f (%.3f)' % (scores.mean(), scores.std()) )
# y_test_pred = xg_reg.predict(X_test)
# mse = mean_squared_error(y_test_pred, y_test)
# print(results, mse)
###Output
_____no_output_____
###Markdown
Summary and Conclusions(EDA)
###Code
df = pd.read_csv("stackoverflow_data.csv")
# Some facts about developers who code as a hobby
codeforHobby_df= df[df['Hobby']== 'Yes']
# Which countries code as a hobby the most?
codeforHobbyCountry_df= codeforHobby_df.groupby('Country').count()
codeforHobbyCountry_df.sort_values(by=['Respondent'], ascending=False, inplace=True)
codeforHobbyCountry_df.iloc[0:10, 0].plot.bar()
plt.title('Code for hobby based on Country')
plt.show()
# How many years of coding experience do hobby coders most commonly have?
codeforHobbyYearsCoding_df= codeforHobby_df.groupby('YearsCoding').count()
codeforHobbyYearsCoding_df.sort_values(by=['Respondent'], ascending=False, inplace=True)
codeforHobbyYearsCoding_df.iloc[0:10, 0].plot.bar()
plt.title('Code for hobby based on Years of Coding')
plt.show()
#How many Developers contribute to opensource
opensource_df= df['OpenSource'].value_counts()
opensource_df.plot.pie()
plt.title('How many Developers contribute to opensource')
# Top programming languages that most developers have worked with
language_df= df['LanguageWorkedWith'].value_counts()
language_df.iloc[0:5].plot.bar()
plt.title('Top Programming languages on which most developers have worked on')
# Top desired databases that most developers want to work with
desiredDatabase_df=df['DatabaseDesireNextYear'].value_counts()
desiredDatabase_df.iloc[0:5].plot.bar()
plt.title('Top Desired Databases on which most developers want to work on')
#Something about AI
# What do developers think about 'AI is the future'?
AI_df= df['AIFuture'].value_counts()
AI_df.plot.pie()
plt.title('Developers opinion on AI Is Future')
# Top countries with Female developers
female_df= df.groupby('Country')['Gender', 'Respondent'].count()
female_df.sort_values(by=['Gender'], ascending=False, inplace=True)
female_df.iloc[0:10, 0].plot.bar()
plt.title('Top countries with Female developers ')
plt.show()
###Output
_____no_output_____ |
attalos/imgtxt_algorithms/TestLR-WordSpace.ipynb | ###Markdown
Test Linear RegressionThis notebook is an example that will test the generalization capability of a regression to word vectors. There are three corpora involved. Required Data1. The _word vector corpora_ * Examples: New York Times, Wikipedia Text8 * Data: Pretrained word vectors (word2vec, etc.)2. The _training corpora_ * Examples: IAPR-TC12, MSCOCO, Visual Genome * Data: - Training image features and text labels - Testing image features and text labels <-- Used as validation data3. A _testing corpora_ with a different vocabulary * Examples: MSCOCO, Visual Genome, etc. * Data: Training and testing image features Imports
###Code
import numpy as np
import matplotlib.pylab as plt
import sys
## Should probably update to PYTHONPATH
sys.path.append('/work/attalos/karllab41-attalos/')
## Import word vector load in
import attalos.imgtxt_algorithms.util.readw2v as rw2v
## Import linear regression
import attalos.imgtxt_algorithms.linearregression.LinearRegression as linreg
## Import evaluation code (right now, using Octave soft evaluation)
# from attalos.evaluation.evaluation import Eval
from oct2py import octave
octave.addpath('../evaluation/')
reload(linreg)
%matplotlib inline
###Output
_____no_output_____
###Markdown
Load the word vectors in
###Code
import pickle
import numpy as np
from scipy.special import expit
def save_centroids(centroids, target_path):
with open(target_path, "wb") as f:
pickle.dump(centroids, f)
def load_centroids(target_path):
with open(target_path, "rb") as f:
centroids = pickle.load(f)
return centroids
def compute_centroid_projection(basis, v):
projection = []
for dim in xrange(0, len(basis.keys())):
if dim not in basis:
projection.append(0)
continue
similarity = np.dot(v, basis[dim])
similarity = expit(similarity)
projection.append(similarity)
return np.asarray(projection)
###Output
_____no_output_____
###Markdown
Load training corpora in
###Code
data = np.load('linearregression/data/iaprtc_alexfc7.npz')
D = open('linearregression/data/iaprtc_dictionary.txt').read().splitlines()
train_ims = [ im.split('/')[-1] for im in open('linearregression/data/iaprtc_trainlist.txt').read().splitlines() ]
xTr = data['xTr'].T
yTr = data['yTr'].T
xTe = data['xTe'].T
yTe = data['yTe'].T
test_ims_full = [ im for im in open('linearregression/data/iaprtc_testlist.txt').read().splitlines() ]
train_ims_full = [ im for im in open('linearregression/data/iaprtc_trainlist.txt').read().splitlines() ]
###Output
_____no_output_____
###Markdown
Load testing corpora in ------------------------------- Train and validate
###Code
mp_solution = linreg.LinearRegression(normX = True)
mp_solution.train(xTr, yTr)
yHat = mp_solution.predict(xTe)
###Output
Building W matrix = Y \ X = Y^T X (X X^T)^-1
###Markdown
Test Evaluate the regression
###Code
[precision, recall, f1score] = octave.evaluate(yTe.T, yHat.T, 5)
print precision
print recall
print f1score
###Output
0.390792064489
0.213105627141
0.275808267324
###Markdown
Visualize
###Code
# Randomly select an image
i=np.random.randint(0, yTe.shape[1])
# Run example
imname='linearregression/images/'+test_ims_full[i]+'.jpg';
print "Looking at the "+str(i)+"th image: "+imname
im=plt.imread(imname)
# Prediction
ypwords = [D[j] for j in yHat[i].argsort()[::-1] [ 0:(yHat[i]>0.2).sum() ] ]
# Truth
ytwords = [D[j] for j in np.where(yTe[i] > 0.5)[0] ]
plt.imshow(im)
print 'Predicted: '+ ', '.join(ypwords)
print 'Truth: '+ ', '.join(ytwords)
plt.figure()
plt.stem( yHat[i] )
###Output
_____no_output_____ |
Ejercicios-numpy-SOLUCIONES.ipynb | ###Markdown
Create a vector with values in the range 10 to 49
###Code
a = np.arange(10,50)
a
###Output
_____no_output_____
###Markdown
Reverse the vector
###Code
a[::-1]
###Output
_____no_output_____
###Markdown
Create a 3x3 matrix with values from 0 to 8
###Code
np.arange(0,9).reshape(3,3)
###Output
_____no_output_____
###Markdown
Find the indices of the non-zero values in the array [1,2,4,2,4,0,1,0,0,0,12,4,5,6,7,0]
###Code
a = np.array([1,2,4,2,4,0,1,0,0,0,12,4,5,6,7,0])
np.argwhere( a!=0 )
###Output
_____no_output_____
###Markdown
Create a 6x6 identity matrix
###Code
np.identity(6)
###Output
_____no_output_____
###Markdown
Create a matrix of random values with shape 3x3x3
###Code
r = np.random.random((3,3,3))
r
###Output
_____no_output_____
###Markdown
Find the indices of the minimum and maximum values of the previous matrix
###Code
print( r.argmax() )
print( r.ravel()[r.argmax()] )
print(r)
print(np.unravel_index(r.argmax(), r.shape))
r[np.unravel_index(r.argmax(), r.shape)]
###Output
9
0.966876800689
[[[ 0.76397724 0.44106772 0.61463619]
[ 0.78438605 0.27900196 0.37379515]
[ 0.13011713 0.25126419 0.08604375]]
[[ 0.9668768 0.90837987 0.958807 ]
[ 0.72465688 0.2603027 0.05378951]
[ 0.84580746 0.64123084 0.17195369]]
[[ 0.56064236 0.13471053 0.60909779]
[ 0.95955252 0.06483762 0.16022758]
[ 0.72095279 0.74251344 0.11964072]]]
(1, 0, 0)
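###Markdown
The exercise also asks for the minimum; a minimal sketch using `argmin` analogously (assuming the same `r` as above):
###Code
print(np.unravel_index(r.argmin(), r.shape))  # index of the minimum value
r[np.unravel_index(r.argmin(), r.shape)]      # the minimum value itself
###Output
_____no_output_____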
###Markdown
Create a 10x10 matrix with 1's on the borders and 0's inside (using index ranges)
###Code
z = np.ones((10,10))
z[1:-1,1:-1] = 0
z
###Output
_____no_output_____
###Markdown
Create a 5x5 matrix where the values in the rows go from 0 to 4
###Code
np.tile( np.arange(0,5) , 5).reshape(5,5)
###Output
_____no_output_____
###Markdown
Create two random arrays A and B and check whether they are equal
###Code
a = np.random.random((3,3))
b = np.random.random((3,3))
a == b
np.allclose(a,b) # With tolerance, given by the rtol and atol arguments
np.array_equal(a,b)
###Output
_____no_output_____ |
programming/pandas/quiz_2.ipynb | ###Markdown
Quiz_2- Load the Titanic data, compute the survival rate by age group, and plot it
###Code
# Load the Titanic data
# Build a titanic_df DataFrame with the ["Survived","Age"] columns and drop rows where Age is NaN
# Create an Ages column and fill it with the age group of each passenger
# Compute the survival rate by age group
# Rename the columns and plot survivors, deaths, and survival rate by age group
import matplotlib.pyplot as plt
import seaborn as sns
sns.set()
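# A minimal sketch of one possible solution (an assumption, not the official answer):
# it uses seaborn's built-in "titanic" dataset, so the lowercase "survived"/"age" column
# names and the 10-year age bins are choices made here rather than given by the quiz.
titanic = sns.load_dataset("titanic")
titanic_df = titanic[["survived", "age"]].dropna(subset=["age"]).copy()
titanic_df["Ages"] = (titanic_df["age"] // 10 * 10).astype(int)            # age group: 0, 10, 20, ...
rate_df = titanic_df.groupby("Ages")["survived"].agg(["sum", "count"])
rate_df.columns = ["survived", "total"]
rate_df["died"] = rate_df["total"] - rate_df["survived"]
rate_df["survival_rate"] = rate_df["survived"] / rate_df["total"]
rate_df[["survived", "died", "survival_rate"]].plot(kind="bar", subplots=True, figsize=(8, 8))
plt.show()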
###Output
_____no_output_____ |
opt:comparison/Comparison.ipynb | ###Markdown
Example 1, pyhmc
###Code
def logprob(theta):
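    # Log of the unnormalized target density exp(2*theta**2 - theta**4) and its gradient,
    # which is the (logp, grad) pair that pyhmc's hmc() expects this callable to return.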
logp = 2 * theta**2 - theta**4
grad= 4 * theta - 4 * theta**3
return logp, grad
theta0=np.array([0])
samp=hmc(logprob,x0=theta0,n_samples=10000)
plt1 = sns.distplot(samp, kde = True, hist = False)
plt.suptitle("density plot, pyhmc")
fig1 = plt1.get_figure()
fig1.savefig("ex1-pyhmc.png")
###Output
_____no_output_____
###Markdown
Example 1, pystan
###Code
model_ex1 = '''
functions {
# log probability density function
real ex1_lpdf(real theta){return -1*(-2*theta^2+theta^4);}
}
data {
}
parameters {
real theta;
}
model{
theta ~ ex1_lpdf();
}
'''
ex1_data = {}
sim_ex1 = pystan.StanModel(model_code = model_ex1)
fit = sim_ex1.sampling(data = ex1_data, iter = 10000, chains = 4)
stanplt1 = sns.kdeplot(fit["theta"])
plt.suptitle("kernel density plot, pystan")
fig2 = stanplt1.get_figure()
fig2.savefig("ex1-pystan.png")
###Output
_____no_output_____
###Markdown
Example 2, pyhmc
###Code
import autograd.numpy as np
def lprior(theta):
return (-1/(2*10))*theta.T@theta
def ldatap(theta, x):
return np.log(0.5 * np.exp(-0.5*(theta[0]-x)**2) + 0.5* np.exp(-0.5*(theta[1]-x)**2))
def U(theta, x, n, batch_size):
return -lprior(theta) - (n/batch_size)*sum(ldatap(theta, x))
gradU = jacobian(U, argnum = 0)
def logprob(theta):
logp = np.sum(U(theta, x=x, n=n, batch_size=n))
gradu = gradU(theta, x=x, n=n, batch_size=n).reshape((-1,))
return logp, gradu
mu = np.array([-3,3])
np.random.seed(123)
n = 200
x = np.r_[
np.random.normal(mu[0], 1, n),
np.random.normal(mu[1], 1, n)].reshape(-1,1)
eps = 0.01
sim_hmc = hmc(logprob, x0=mu.reshape(-1), n_samples=100, epsilon=0.01)
plt2 = sns.kdeplot(sim_hmc[:,0], sim_hmc[:,1])
plt.suptitle("kernel density plot, pyhmc")
fig3 = plt2.get_figure()
fig3.savefig("ex2-pyhmc.png")
###Output
_____no_output_____
###Markdown
Example 2, pystan
###Code
np.random.seed(1234)
model_ex2 = '''
data {
int N;
vector[N] y; #number of observations
int n_groups; #number of mixture models
vector<lower = 0>[n_groups] sigma;
vector<lower=0>[n_groups] weights;
}
parameters {
vector[n_groups] mu; #unknown mu
}
model {
vector[n_groups] contributions;
mu ~ normal(0, 10);
# log likelihood
for(i in 1:N) {
for(k in 1:n_groups) {
contributions[k] = log(weights[k]) + normal_lpdf(y[i] | mu[k], sigma[k]);
}
target += log_sum_exp(contributions);
}
}
'''
#set up data
p = 2
mu = np.array([-3,3]).reshape(2,1)
n = 200
x = np.r_[
np.random.normal(mu[0], 1, n),
np.random.normal(mu[1], 1, n)].flatten()
sigma = np.array([1,1])
weights = np.array([0.5,0.5])
ex2_data = {
'N': len(x),
'y': x,
'n_groups': p,
'sigma': sigma,
'weights': weights
}
sim_stan = pystan.StanModel(model_code=model_ex2)
fit_mn = sim_stan.sampling(data = ex2_data, iter = 3000, chains = 4)
stanplt2 = sns.kdeplot(fit_mn["mu"])
plt.suptitle("kernel density plot, pystan")
fig4 = stanplt2.get_figure()
fig4.savefig("ex2-pystan.png")
###Output
/opt/conda/lib/python3.6/site-packages/seaborn/distributions.py:679: UserWarning: Passing a 2D dataset for a bivariate plot is deprecated in favor of kdeplot(x, y), and it will cause an error in future versions. Please update your code.
warnings.warn(warn_msg, UserWarning)
|
Notebooks/logistic_vif.ipynb | ###Markdown
logml = sm.GLM(y_train, sm.add_constant(x_train), family=sm.families.Binomial())
###Code
logml=sm.GLM(y_train,(sm.add_constant(x_train)),family=sm.families.Binomial())
logml.fit().summary()
df.head()
df1_exp=df
plt.figure(figsize=(30,10))
sns.heatmap(df1_exp["Baseline Features"].corr(),annot=True)
round(df1_exp['Baseline Features'].corr(),3)
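# A hedged sketch of an explicit VIF check to complement the correlation heatmap above
# (the notebook's name references VIF); it assumes df1_exp["Baseline Features"] holds
# only numeric columns, which the .corr() call above also requires.
from statsmodels.stats.outliers_influence import variance_inflation_factor
baseline = df1_exp["Baseline Features"].dropna()
vif = pd.Series(
    [variance_inflation_factor(baseline.values, i) for i in range(baseline.shape[1])],
    index=baseline.columns, name="VIF")
print(vif.sort_values(ascending=False))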
form_data=pd.read_csv("pd_speech_features2.csv")
# Drop identifier/demographic and highly correlated columns (assign back, since drop is not in-place)
form_data = form_data.drop(['id','gender'], axis=1)
form_data = form_data.drop(columns=['locAbsJitter','rapJitter','ddpJitter','ppq5Jitter','locShimmer','apq5Shimmer','meanNoiseToHarmHarmonicity','numPeriodsPulses','meanPeriodPulses','apq11Shimmer'])
form_data = form_data.drop(columns=['GQ_std_cycle_closed','tqwt_medianValue_dec_5','tqwt_medianValue_dec_16','tqwt_medianValue_dec_20',
                                    'tqwt_medianValue_dec_24','tqwt_medianValue_dec_25','tqwt_meanValue_dec_5',
                                    'tqwt_meanValue_dec_30','tqwt_meanValue_dec_35','tqwt_meanValue_dec_23','tqwt_meanValue_dec_27',
                                    'tqwt_meanValue_dec_31',
                                    'tqwt_meanValue_dec_16','tqwt_meanValue_dec_11'])
plt.figure(figsize=(30,10))
sns.heatmap(df1_exp["Intensity Parameters"].corr(),annot=True)
def normalize_data(x):
    return (x - np.min(x)) / (np.max(x) - np.min(x))  # min-max scaling; the denominator must be (max - min)
form_data.apply(normalize_data)
data = form_data.to_numpy(dtype=np.float32)
features, labels = data[:, :-1], data[:, -1]
x_train,x_test,y_train,y_test=train_test_split(features,labels,test_size=0.3,random_state=0)
x_train.shape,y_train.shape
from sklearn.linear_model import LogisticRegression
clf=LogisticRegression(tol=0.1) # tolerance is 0.1
clf.fit(x_train,y_train)
y_pred=clf.predict(x_test)
a=cross_val_score(clf,x_train,y_train,cv=2,scoring="accuracy")#cross_validation
a.mean()*100
print(classification_report(y_test,y_pred))
print(f"confusion_matrix:\n{confusion_matrix(y_test,y_pred)}")
logistic_fpr,logistic_tpr,threshold=roc_curve(y_test,y_pred)
auc_logistic=auc(logistic_fpr,logistic_tpr)
plt.figure(figsize=(5,5),dpi=100)
plt.plot(logistic_fpr,logistic_tpr,marker=".",label="Logistic(auc=%0.3f)"%auc_logistic)
plt.xlabel("False Positive Rate-->")
plt.ylabel("True Positive Rate-->")
plt.legend()
plt.show()
###Output
_____no_output_____ |
NLP_sentiment_classification/CBOW/CBOW.ipynb | ###Markdown
The data is the Naver movie rating dataset shared on GitHub by e9t (Lucy Park): https://github.com/e9t/nsmc Loading the data
###Code
from collections import defaultdict
import numpy as np
def read_txt(path_to_file):
txt_ls = []
label_ls = []
with open(path_to_file) as f:
for i, line in enumerate(f.readlines()[1:]):
id_num, txt, label = line.split('\t')
txt_ls.append(txt)
label_ls.append(int(label.replace('\n','')))
return txt_ls, label_ls
# Load the data
x_train, y_train = read_txt('../ratings_train.txt')
x_test, y_test = read_txt('../ratings_test.txt')
x_train[0]
###Output
_____no_output_____
###Markdown
Removing empty reviews
###Code
def remove_empty_review(X, Y):
empty_idx_ls = []
for idx, review in enumerate(X):
if len(review) == 0:
empty_idx_ls.append(idx)
    # Delete from the largest index first (so earlier indices are not shifted)
empty_idx_ls = sorted(empty_idx_ls, reverse = True)
for empty_idx in empty_idx_ls:
del X[empty_idx], Y[empty_idx]
return X, Y
x_train, y_train = remove_empty_review(x_train, y_train)
x_test, y_test = remove_empty_review(x_test, y_test)
x_train[0]
len(x_train), len(x_test)
###Output
_____no_output_____
###Markdown
Token indexing (token2idx)
###Code
# Assign an index to each token
def convert_token_to_idx(token_ls):
for tokens in token_ls:
yield [token2idx[token] for token in tokens.split(' ')]
return
token2idx = defaultdict(lambda : len(token2idx)) # dictionary mapping each token to an index
pad = token2idx['<PAD>'] # padding token used to equalize sentence lengths before converting to a PyTorch Variable
x_train = list(convert_token_to_idx(x_train))
x_test = list(convert_token_to_idx(x_test))
idx2token = {val : key for key,val in token2idx.items()}
###Output
_____no_output_____
###Markdown
Checking the indexing result
###Code
x_train[0]
###Output
_____no_output_____
###Markdown
Checking the conversion back to the original text
###Code
[idx2token[x] for x in x_train[0]]
###Output
_____no_output_____
###Markdown
Add Padding
###Code
# To convert to a PyTorch Variable, every example must have the same length.
# Movie reviews have varying lengths, so we equalize them:
# short sentences get padding (dummy tokens used to fill the space),
# and long sentences are truncated.
# Padding to equalize sequence length
def add_padding(token_ls, max_len):
for i, tokens in enumerate(token_ls):
n_token = len(tokens)
# if the sentence is too short, add padding
if n_token < max_len:
token_ls[i] += [pad] * (max_len - n_token) # append as many padding tokens as needed
# if the sentence is too long, truncate it at max_len
elif n_token > max_len:
token_ls[i] = tokens[:max_len]
return token_ls
max_len = 30
x_train = add_padding(x_train, max_len)
x_test = add_padding(x_test, max_len)
###Output
_____no_output_____
###Markdown
Checking the padding result
###Code
' '.join([idx2token[x] for x in x_train[0]])
###Output
_____no_output_____
###Markdown
Converting the data to PyTorch Variables for model training
###Code
import torch.nn as nn
import torch
from torch.autograd import Variable
import torch.nn.functional as F
# convert to torch Variables
def convert_to_long_variable(w2i_ls):
return Variable(torch.LongTensor(w2i_ls))
x_train = convert_to_long_variable(x_train)
x_test = convert_to_long_variable(x_test)
y_train = convert_to_long_variable(y_train)
y_test = convert_to_long_variable(y_test)
x_train[0]
###Output
_____no_output_____
###Markdown
CBOW with Pytorch
###Code
class CBOW(nn.Module):
def __init__(self, n_words, embed_size, pad_index, hid_size, dropout, n_class):
super(CBOW, self).__init__()
self.n_words = n_words # number of unique tokens
self.embed_size = embed_size # size of the embedding dimension
self.pad_index = pad_index # index of the padding token, excluded from the embedding
self.embed = nn.Embedding(n_words, embed_size, padding_idx=pad_index) # non-static embedding with Pytorch
self.hid_size = hid_size # size of the hidden layer in the fully-connected block
self.dropout = dropout # dropout rate
self.n_class = n_class # number of classes
# to use a pre-trained embedding instead:
# self.embed.weight = pre_trained_weight_matrix
# self.embed.weight.requires_grad = False # freeze the embedding weights : static
self.lin = nn.Sequential(
nn.Linear(embed_size, hid_size), nn.ReLU(), nn.Dropout(dropout), # use the dropout rate passed to the model
nn.Linear(hid_size, n_class)
)
def forward(self, x):
x_embeded = self.embed(x) # batch_size x sequence_length x embed_size
# model the sentence by summing the embedding vectors of all of its tokens
x_cbow = x_embeded.sum(dim=1) # batch_size x embed_size (dim 1 is summed out)
x_cbow = x_cbow.squeeze(1) # a no-op here, since sum() already removed dim 1; kept from the original
logit = self.lin(x_cbow)
return logit
params = {
'n_words' : len(token2idx), # number of unique tokens
'embed_size' : 32, # size of the embedding dimension
'pad_index' : token2idx['<PAD>'], # padding token to exclude from the embedding
'hid_size' : 32, # size of the hidden layer
'dropout' : 0.5, # dropout rate
'n_class' : 2, # number of classes (positive / negative)
}
model = CBOW(**params)
model
###Output
_____no_output_____
###Markdown
Train
###Code
import random
def adjust_learning_rate(optimizer, epoch, init_lr=0.001, lr_decay_epoch=10):
"""Decay learning rate by a factor of 0.1 every lr_decay_epoch epochs."""
lr = init_lr * (0.1**(epoch // lr_decay_epoch))
if epoch % lr_decay_epoch == 0:
print('LR is set to %s'%(lr))
for param_group in optimizer.param_groups:
param_group['lr'] = lr
return optimizer
epochs = 20
lr = 0.003
batch_size = 10000
train_idx = np.arange(x_train.size(0))
test_idx = np.arange(x_test.size(0))
optimizer = torch.optim.Adam(model.parameters(),lr) # use the Adam optimizer
criterion = nn.CrossEntropyLoss(reduction='sum') # the model returns logits, so use cross-entropy loss,
# which is log_softmax + NLL_loss
loss_ls = []
for epoch in range(1, epochs+1):
model.train()
# shuffle the order of the training data
random.shuffle(train_idx)
x_train = x_train[train_idx]
y_train = y_train[train_idx]
train_loss = 0
for start_idx, end_idx in zip(range(0, x_train.size(0), batch_size),
range(batch_size, x_train.size(0)+1, batch_size)):
x_batch = x_train[start_idx : end_idx]
y_batch = y_train[start_idx : end_idx].long()
scores = model(x_batch)
predict = F.softmax(scores, dim=1).argmax(dim=1)
acc = (predict == y_batch).sum().item() / batch_size
loss = criterion(scores, y_batch)
train_loss += loss.item()
optimizer.zero_grad()
loss.backward()
optimizer.step()
print('Train epoch : %s, loss : %s, accuracy :%.3f'%(epoch, train_loss / batch_size, acc))
print('=================================================================================================')
loss_ls.append(train_loss)
optimizer = adjust_learning_rate(optimizer, epoch, lr, 10) # adjust learning_rate while training
if (epoch+1) % 10 == 0:
model.eval()
scores = model(x_test)
predict = F.softmax(scores, dim=1).argmax(dim = 1)
acc = (predict == y_test).sum().item() / y_test.size(0)
loss = criterion(scores, y_test.long())
print('*************************************************************************************************')
print('*************************************************************************************************')
print('Test Epoch : %s, Test Loss : %.03f , Test Accuracy : %.03f'%(epoch, loss.item()/y_test.size(0), acc))
print('*************************************************************************************************')
print('*************************************************************************************************')
###Output
Train epoch : 1, loss : 10.597307177734375, accuracy :0.519
=================================================================================================
Train epoch : 2, loss : 9.757400048828124, accuracy :0.544
=================================================================================================
Train epoch : 3, loss : 9.49492783203125, accuracy :0.583
=================================================================================================
Train epoch : 4, loss : 9.2565888671875, accuracy :0.615
=================================================================================================
Train epoch : 5, loss : 8.845070458984376, accuracy :0.660
=================================================================================================
Train epoch : 6, loss : 8.246880810546875, accuracy :0.701
=================================================================================================
Train epoch : 7, loss : 7.528774365234375, accuracy :0.739
=================================================================================================
Train epoch : 8, loss : 6.74151396484375, accuracy :0.773
=================================================================================================
Train epoch : 9, loss : 5.959237329101563, accuracy :0.821
=================================================================================================
*************************************************************************************************
*************************************************************************************************
Test Epoch : 9, Test Loss : 0.604 , Test Accuracy : 0.714
*************************************************************************************************
*************************************************************************************************
Train epoch : 10, loss : 5.225964819335937, accuracy :0.841
=================================================================================================
LR is set to 0.00030000000000000003
Train epoch : 11, loss : 4.737377661132813, accuracy :0.857
=================================================================================================
Train epoch : 12, loss : 4.6755966796875, accuracy :0.864
=================================================================================================
Train epoch : 13, loss : 4.616915454101562, accuracy :0.861
=================================================================================================
Train epoch : 14, loss : 4.55448466796875, accuracy :0.867
=================================================================================================
Train epoch : 15, loss : 4.4762162109375, accuracy :0.865
=================================================================================================
Train epoch : 16, loss : 4.429692041015625, accuracy :0.867
=================================================================================================
Train epoch : 17, loss : 4.3804462890625, accuracy :0.875
=================================================================================================
Train epoch : 18, loss : 4.323177685546875, accuracy :0.875
=================================================================================================
Train epoch : 19, loss : 4.250740942382812, accuracy :0.877
=================================================================================================
*************************************************************************************************
*************************************************************************************************
Test Epoch : 19, Test Loss : 0.643 , Test Accuracy : 0.730
*************************************************************************************************
*************************************************************************************************
Train epoch : 20, loss : 4.207217700195312, accuracy :0.881
=================================================================================================
LR is set to 3.0000000000000008e-05
|
Basics/Operators.ipynb | ###Markdown
Operators in Python - Operators enable the creation of expressions. - Executing an expression gives us a result. - An expression refers to an operation on variables and values. There are the following categories of operators: - Arithmetic operators - Assignment operators - Comparison operators - Logical operators - Identity operators - Membership operators - Bitwise operators
###Code
# Arithmetic Operators
#Addition
print(1+2)
#subtraction
print(1-2)
#Multiplication
print(9*7)
#Division
print(10/2)
#Modulus
print(5%2)
#Exponentiation
print(2**10)
#Floor Division
print(17//5)
###Output
3
-1
63
5.0
1
1024
3
###Markdown
Assignment operators> The basic assignment operator is the equals sign ( = ). In addition, an arithmetic or bitwise operator can be combined with = so that the operation and the assignment happen in a single step (a few more of these compound operators are sketched below).
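A few more compound assignment operators, combining arithmetic and bitwise operators with `=` (the values here are just illustrative):

```python
n = 7
n -= 2    # n = n - 2   -> 5
n *= 3    # n = n * 3   -> 15
n //= 4   # n = n // 4  -> 3
n **= 2   # n = n ** 2  -> 9
n &= 5    # n = n & 5   -> 0b1001 & 0b0101 = 0b0001 -> 1
print(n)  # 1
```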
###Code
#Normal Assignment operation
a = 10
b = 20
#addition and assignment operation
a += b
#a = a + b
print(b)
print(a)
###Output
20
30
###Markdown
**Operators** **Topics Covered** > Arithmetic > Relational > Logical > Identity > Membership -------- 1. **Arithmetic operators** 
###Code
1 + 1 # Addition
5 - 3 # Subtraction
2 * 1 # Multiplication
4 / 2 # Division
5 % 3 # Modulo: returns the remainder of the division
2 ** 2 # Power
14.5 // 2 # The floor division rounds the result down to the nearest whole number
14.5 / 2
###Output
_____no_output_____
###Markdown
**Using Variables**
###Code
a = 20
b = 30
c = a + b
c
d = b - a
d
e = a * b
e
f = b / a
f
g = b // a
g
h = b % a
h
i = b ** a
i
###Output
_____no_output_____
###Markdown
-----------2. **Relational Operators** * Relational (comparison) operators are used to compare two operands. * Python evaluates the comparison and returns either `True` or `False`. 
###Code
x = 2 # Assignment : We are assigning value 2 to x
myWallet = 20
samWallet = 30
myWallet < samWallet # Less than
myWallet > samWallet # Greater than
myWallet = 30
myWallet >= samWallet # Greater than or Equal to
myWallet <= samWallet # Less than or Equal to
myWallet = 20
myWallet == samWallet # Equals to
myWallet != samWallet # Not Equals to
myWallet !== samWallet
###Output
_____no_output_____
###Markdown
> The above error is encountered because Python's not-equal operator is `!=`; `!==` is not valid Python syntax.
###Code
k = 0.5
k >= 0 # Greater than or equal to
# -0.5 0 0.5
###Output
_____no_output_____
###Markdown
 3. **Logical Operators** : * Logical operators are mainly used to control program flow. Usually, you will find them as part of an `if`, a `while`, or some other control statement. * They allow a program to make a decision based on multiple conditions. Each operand is treated as a condition that evaluates to True or False, and these values determine the overall value of the expression `op1 operator op2` (or `not op1`). * The 3 logical operators are `and`, `or`, `not`
###Code
# If the value of X and Y are the following.
x = 10
y = 20
###Output
_____no_output_____
###Markdown
------* **and** Operator The `and` operator is used to determine whether both operands or conditions are True.-------
###Code
x < 5 and y < 10 # and operator
###Output
_____no_output_____
###Markdown
------* **or** Operator The `or` operator is used to determine whether either of the conditions is True. If the first operand of the `or` operator evaluates to True, the second operand will not be evaluated. -----
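Short-circuiting can be seen directly when the second operand is a function call that only runs if it is actually needed (a small illustrative sketch):

```python
def noisy():
    print("evaluated!")
    return True

print(True or noisy())    # True  -- noisy() is never called
print(False and noisy())  # False -- noisy() is never called
print(False or noisy())   # prints "evaluated!" and then True
```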
###Code
x < 5 or y > 5 # or operator
###Output
_____no_output_____
###Markdown
------* **not** Operator : negation The `not` operator is used to convert True values to False and False values to True.-------
###Code
y = False
print(y)
print(not(y))
not(x < 5 and y < 10) # not operator
###Output
_____no_output_____
###Markdown
 4. **Identity Operators** : The identity operators (“is” and “is not”) compare the memory locations of objects. When an object is created in memory, a unique memory address is allocated to it; the sketch just below makes this explicit with `id()`. > is : Returns True if both variables refer to the same object. > is not : Returns True if both variables do not refer to the same object
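A small sketch using the built-in `id()` function, which returns an object's identity (its memory address in CPython):

```python
a = ["Black", "White"]
b = ["Black", "White"]
c = a

print(id(a), id(b), id(c))  # a and c share the same id, b has a different one
print(a is c, a is b)       # True False
```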
###Code
x = ["Black", "White"]
y = ["Black", "White"]
z = x
# returns True because z is the same object as x
print(x is z)
# returns False because x is not the same object as y, even if they have the same content
print(x is y)
# to demonstrate the difference betweeen "is" and "==": this comparison returns True because x is equal to y
print(x == y)
###Output
True
False
True
###Markdown
 -------**Q** What is the difference between the `==` and `is` operators? * ‘==’ checks whether the values of the two objects are identical. * ‘is’ checks whether both names refer to the same object in memory. ----------- 5. **Membership Operators** They are used to check whether an element is present in a sequence (or other container). > `in` : Returns True if the specified value is present in the object. > `not in` : Returns True if the specified value is not present in the object. A couple of container examples are sketched below, in addition to the string examples that follow.
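Membership tests work on containers other than strings as well — for lists they check the elements, and for dictionaries they check the keys (illustrative values):

```python
colors = ["red", "green", "blue"]
ages = {"sam": 30, "alex": 25}

print("red" in colors)          # True
print("yellow" not in colors)   # True
print("sam" in ages)            # True  (membership checks the keys)
print(30 in ages)               # False (30 is a value, not a key)
```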
###Code
x = 'Hello world'
'H' in x
'z' in x
'hello' not in x
'Hello' not in x
###Output
_____no_output_____ |
02_Convolutional_Neural_Network_zh_CN.ipynb | ###Markdown
TensorFlow 教程 02 卷积神经网络by [Magnus Erik Hvass Pedersen](http://www.hvass-labs.org/)/ [GitHub](https://github.com/Hvass-Labs/TensorFlow-Tutorials) / [Videos on YouTube](https://www.youtube.com/playlist?list=PL9Hr9sNUjfsmEu1ZniY0XpHSzl5uihcXZ)中文翻译 [thrillerist](https://zhuanlan.zhihu.com/insight-pixel)/[Github](https://github.com/thrillerist/TensorFlow-Tutorials) 介绍先前的教程展示了一个简单的线性模型,对MNIST数据集中手写数字的识别率达到了91%。在这个教程中,我们会在TensorFlow中实现一个简单的卷积神经网络,它能达到大约99%的分类准确率,如果你做了一些建议的练习,准确率还可能更高。卷积神经网络在一张输入图片上移动一个小的滤波器。这意味着在遍历整张图像来识别模式时,要重复使用这些滤波器。这让卷积神经网络在拥有相同数量的变量时比全连接网络(Fully-Connected)更强大,也让卷积神经网络训练得更快。你应该熟悉基本的线性代数、Python和Jupyter Notebook编辑器。如果你是TensorFlow新手,在本教程之前应该先学习第一篇教程。 流程图 下面的图表直接显示了之后实现的卷积神经网络中数据的传递。
###Code
from IPython.display import Image
Image('images/02_network_flowchart.png')
###Output
_____no_output_____
###Markdown
输入图像在第一层卷积层里使用权重过滤器处理。结果在16张新图里,每张代表了卷积层里一个过滤器(的处理结果)。图像经过降采样,分辨率从28x28减少到14x14。16张小图在第二个卷积层中处理。这16个通道以及这层输出的每个通道都需要一个过滤权重。总共有36个输出,所以在第二个卷积层有16 x 36 = 576个滤波器。输出图再一次降采样到7x7个像素。第二个卷积层的输出是36张7x7像素的图像。它们被转换到一个长为7 x 7 x 36 = 1764的向量中去,它作为一个有128个神经元(或元素)的全连接网络的输入。这些又输入到另一个有10个神经元的全连接层中,每个神经元代表一个类别,用来确定图像的类别,即图像上的数字。卷积滤波一开始是随机挑选的,因此分类也是随机完成的。根据交叉熵(cross-entropy)来测量输入图预测值和真实类别间的错误。然后优化器用链式法则自动地将这个误差在卷积网络中传递,更新滤波权重来提升分类质量。这个过程迭代了几千次,直到分类误差足够低。这些特定的滤波权重和中间图像是一个优化结果,和你执行代码所看到的可能会有所不同。注意,这些在TensorFlow上的计算是在一部分图像上执行,而非单独的一张图,这使得计算更有效。也意味着在TensorFlow上实现时,这个流程图实际上会有更多的数据维度。 卷积层 下面的图片展示了在第一个卷积层中处理图像的基本思想。输入图片描绘了数字7,这里显示了它的四张拷贝,我们可以很清晰的看到滤波器是如何在图像的不同位置移动。在滤波器的每个位置上,计算滤波器以及滤波器下方图像像素的点乘,得到输出图像的一个像素。因此,在整张输入图像上移动时,会有一张新的图像生成。红色的滤波权重表示滤波器对输入图的黑色像素有正响应,蓝色的代表有负响应。在这个例子中,很明显这个滤波器识别数字7的水平线段,在输出图中可以看到它对线段的强烈响应。
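下面用几行简单的 Python 算术验证一下上文提到的尺寸(这只是示意性的小例子,不属于教程本身的实现):

```python
img_size = 28
size_after_pool1 = img_size // 2           # 第一次 2x2 池化:28 -> 14
size_after_pool2 = size_after_pool1 // 2   # 第二次 2x2 池化:14 -> 7

num_filters1, num_filters2 = 16, 36
filters_in_conv2 = num_filters1 * num_filters2            # 16 x 36 = 576 个滤波器
flattened_length = size_after_pool2 ** 2 * num_filters2   # 7 x 7 x 36 = 1764
print(size_after_pool1, size_after_pool2, filters_in_conv2, flattened_length)
```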
###Code
Image('images/02_convolution.png')
###Output
_____no_output_____
###Markdown
滤波器遍历输入图的移动步长称为stride。在水平和竖直方向各有一个stride。在下面的源码中,两个方向的stride都设为1,这说明滤波器从输入图像的左上角开始,下一步移动到右边1个像素去。当滤波器到达图像的右边时,它会返回最左边,然后向下移动1个像素。持续这个过程,直到滤波器到达输入图像的右下角,同时,也生成了整张输出图片。当滤波器到达输入图的右端或底部时,它会用零(白色像素)来填充。因为输出图要和输入图一样大。此外,卷积层的输出可能会传递给修正线性单元(ReLU),它用来保证输出是正值,将负值置为零。输出还会用最大池化(max-pooling)进行降采样,它使用了2x2的小窗口,只保留像素中的最大值。这让输入图分辨率减小一半,比如从28x28到14x14。第二个卷积层更加复杂,因为它有16个输入通道。我们想给每个通道一个单独的滤波,因此需要16个。另外,我们想从第二个卷积层得到36个输出,因此总共需要16 x 36 = 576个滤波器。要理解这些如何工作可能有些困难。 导入
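为了更直观地理解 2x2 最大池化如何把分辨率减半,下面用 NumPy 做一个很小的示意(与教程后面的 TensorFlow 实现无关):

```python
import numpy as np

x = np.arange(16).reshape(4, 4)                   # 一张 4x4 的“图像”
pooled = x.reshape(2, 2, 2, 2).max(axis=(1, 3))   # 每个 2x2 窗口取最大值
print(pooled)                                     # 输出为 2x2,分辨率减半
```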
###Code
%matplotlib inline
import matplotlib.pyplot as plt
import tensorflow as tf
import numpy as np
from sklearn.metrics import confusion_matrix
import time
from datetime import timedelta
import math
###Output
_____no_output_____
###Markdown
使用Python3.5.2(Anaconda)开发,TensorFlow版本是:
###Code
tf.__version__
###Output
_____no_output_____
###Markdown
神经网络的配置方便起见,在这里定义神经网络的配置,你可以很容易找到或改变这些数值,然后重新运行Notebook。
###Code
# Convolutional Layer 1.
filter_size1 = 5 # Convolution filters are 5 x 5 pixels.
num_filters1 = 16 # There are 16 of these filters.
# Convolutional Layer 2.
filter_size2 = 5 # Convolution filters are 5 x 5 pixels.
num_filters2 = 36 # There are 36 of these filters.
# Fully-connected layer.
fc_size = 128 # Number of neurons in fully-connected layer.
###Output
_____no_output_____
###Markdown
载入数据 MNIST数据集大约12MB,如果没在文件夹中找到就会自动下载。
###Code
from tensorflow.examples.tutorials.mnist import input_data
data = input_data.read_data_sets('data/MNIST/', one_hot=True)
###Output
Extracting data/MNIST/train-images-idx3-ubyte.gz
Extracting data/MNIST/train-labels-idx1-ubyte.gz
Extracting data/MNIST/t10k-images-idx3-ubyte.gz
Extracting data/MNIST/t10k-labels-idx1-ubyte.gz
###Markdown
现在已经载入了MNIST数据集,它由70,000张图像和对应的标签(比如图像的类别)组成。数据集分成三份互相独立的子集。我们在教程中只用训练集和测试集。
###Code
print("Size of:")
print("- Training-set:\t\t{}".format(len(data.train.labels)))
print("- Test-set:\t\t{}".format(len(data.test.labels)))
print("- Validation-set:\t{}".format(len(data.validation.labels)))
###Output
Size of:
- Training-set: 55000
- Test-set: 10000
- Validation-set: 5000
###Markdown
类型标签使用One-Hot编码,这意味着每个标签是长为10的向量,除了一个元素之外,其他的都为零。这个元素的索引就是类别的数字,即相应图片中画的数字。我们也需要测试数据集类别数字的整型值,用下面的方法来计算。
###Code
data.test.cls = np.argmax(data.test.labels, axis=1)
###Output
_____no_output_____
###Markdown
数据维度 在下面的源码中,有很多地方用到了数据维度。它们只在一个地方定义,因此我们可以在代码中使用这些数字而不是直接写数字。
###Code
# We know that MNIST images are 28 pixels in each dimension.
img_size = 28
# Images are stored in one-dimensional arrays of this length.
img_size_flat = img_size * img_size
# Tuple with height and width of images used to reshape arrays.
img_shape = (img_size, img_size)
# Number of colour channels for the images: 1 channel for gray-scale.
num_channels = 1
# Number of classes, one class for each of 10 digits.
num_classes = 10
###Output
_____no_output_____
###Markdown
用来绘制图片的帮助函数 这个函数用来在3x3的栅格中画9张图像,然后在每张图像下面写出真实类别和预测类别。
###Code
def plot_images(images, cls_true, cls_pred=None):
assert len(images) == len(cls_true) == 9
# Create figure with 3x3 sub-plots.
fig, axes = plt.subplots(3, 3)
fig.subplots_adjust(hspace=0.3, wspace=0.3)
for i, ax in enumerate(axes.flat):
# Plot image.
ax.imshow(images[i].reshape(img_shape), cmap='binary')
# Show true and predicted classes.
if cls_pred is None:
xlabel = "True: {0}".format(cls_true[i])
else:
xlabel = "True: {0}, Pred: {1}".format(cls_true[i], cls_pred[i])
# Show the classes as the label on the x-axis.
ax.set_xlabel(xlabel)
# Remove ticks from the plot.
ax.set_xticks([])
ax.set_yticks([])
# Ensure the plot is shown correctly with multiple plots
# in a single Notebook cell.
plt.show()
###Output
_____no_output_____
###Markdown
绘制几张图像来看看数据是否正确
###Code
# Get the first images from the test-set.
images = data.test.images[0:9]
# Get the true classes for those images.
cls_true = data.test.cls[0:9]
# Plot the images and labels using our helper-function above.
plot_images(images=images, cls_true=cls_true)
###Output
_____no_output_____
###Markdown
TensorFlow图TensorFlow的全部目的就是使用一个称之为计算图(computational graph)的东西,它会比直接在Python中进行相同计算量要高效得多。TensorFlow比Numpy更高效,因为TensorFlow了解整个需要运行的计算图,然而Numpy只知道某个时间点上唯一的数学运算。TensorFlow也能够自动地计算需要优化的变量的梯度,使得模型有更好的表现。这是由于图是简单数学表达式的结合,因此整个图的梯度可以用链式法则推导出来。TensorFlow还能利用多核CPU和GPU,Google也为TensorFlow制造了称为TPUs(Tensor Processing Units)的特殊芯片,它比GPU更快。一个TensorFlow图由下面几个部分组成,后面会详细描述:* 占位符变量(Placeholder)用来改变图的输入。* 模型变量(Model)将会被优化,使得模型表现得更好。* 模型本质上就是一些数学函数,它根据Placeholder和模型的输入变量来计算一些输出。* 一个cost度量用来指导变量的优化。* 一个优化策略会更新模型的变量。另外,TensorFlow图也包含了一些调试状态,比如用TensorBoard打印log数据,本教程不涉及这些。 Helper-functions for creating new variables 创建新变量的帮助函数 函数用来根据给定大小创建TensorFlow变量,并将它们用随机值初始化。需注意的是在此时并未完成初始化工作,仅仅是在TensorFlow图里定义它们。
###Code
def new_weights(shape):
return tf.Variable(tf.truncated_normal(shape, stddev=0.05))
def new_biases(length):
return tf.Variable(tf.constant(0.05, shape=[length]))
###Output
_____no_output_____
###Markdown
创建卷积层的帮助函数 这个函数为TensorFlow在计算图里创建了新的卷积层。这里并没有执行什么计算,只是在TensorFlow图里添加了数学公式。假设输入的是四维的张量,各个维度如下:1. 图像数量2. 每张图像的Y轴3. 每张图像的X轴4. 每张图像的通道数输入通道可能是彩色通道,当输入是前面的卷积层生成的时候,它也可能是滤波通道。输出是另外一个4通道的张量,如下:1. 图像数量,与输入相同2. 每张图像的Y轴。如果用到了2x2的池化,是输入图像宽高的一半。3. 每张图像的X轴。同上。4. 卷积滤波生成的通道数。
###Code
def new_conv_layer(input, # The previous layer.
num_input_channels, # Num. channels in prev. layer.
filter_size, # Width and height of each filter.
num_filters, # Number of filters.
use_pooling=True): # Use 2x2 max-pooling.
# Shape of the filter-weights for the convolution.
# This format is determined by the TensorFlow API.
shape = [filter_size, filter_size, num_input_channels, num_filters]
# Create new weights aka. filters with the given shape.
weights = new_weights(shape=shape)
# Create new biases, one for each filter.
biases = new_biases(length=num_filters)
# Create the TensorFlow operation for convolution.
# Note the strides are set to 1 in all dimensions.
# The first and last stride must always be 1,
# because the first is for the image-number and
# the last is for the input-channel.
# But e.g. strides=[1, 2, 2, 1] would mean that the filter
# is moved 2 pixels across the x- and y-axis of the image.
# The padding is set to 'SAME' which means the input image
# is padded with zeroes so the size of the output is the same.
layer = tf.nn.conv2d(input=input,
filter=weights,
strides=[1, 1, 1, 1],
padding='SAME')
# Add the biases to the results of the convolution.
# A bias-value is added to each filter-channel.
layer += biases
# Use pooling to down-sample the image resolution?
if use_pooling:
# This is 2x2 max-pooling, which means that we
# consider 2x2 windows and select the largest value
# in each window. Then we move 2 pixels to the next window.
layer = tf.nn.max_pool(value=layer,
ksize=[1, 2, 2, 1],
strides=[1, 2, 2, 1],
padding='SAME')
# Rectified Linear Unit (ReLU).
# It calculates max(x, 0) for each input pixel x.
# This adds some non-linearity to the formula and allows us
# to learn more complicated functions.
layer = tf.nn.relu(layer)
# Note that ReLU is normally executed before the pooling,
# but since relu(max_pool(x)) == max_pool(relu(x)) we can
# save 75% of the relu-operations by max-pooling first.
# We return both the resulting layer and the filter-weights
# because we will plot the weights later.
return layer, weights
###Output
_____no_output_____
###Markdown
转换一个层的帮助函数卷积层生成了4维的张量。我们会在卷积层之后添加一个全连接层,因此我们需要将这个4维的张量转换成可被全连接层使用的2维张量。
###Code
def flatten_layer(layer):
# Get the shape of the input layer.
layer_shape = layer.get_shape()
# The shape of the input layer is assumed to be:
# layer_shape == [num_images, img_height, img_width, num_channels]
# The number of features is: img_height * img_width * num_channels
# We can use a function from TensorFlow to calculate this.
num_features = layer_shape[1:4].num_elements()
# Reshape the layer to [num_images, num_features].
# Note that we just set the size of the second dimension
# to num_features and the size of the first dimension to -1
# which means the size in that dimension is calculated
# so the total size of the tensor is unchanged from the reshaping.
layer_flat = tf.reshape(layer, [-1, num_features])
# The shape of the flattened layer is now:
# [num_images, img_height * img_width * num_channels]
# Return both the flattened layer and the number of features.
return layer_flat, num_features
###Output
_____no_output_____
###Markdown
创建一个全连接层的帮助函数 这个函数为TensorFlow在计算图中创建了一个全连接层。这里也不进行任何计算,只是往TensorFlow图中添加数学公式。输入是大小为`[num_images, num_inputs]`的二维张量。输出是大小为`[num_images, num_outputs]`的2维张量。
###Code
def new_fc_layer(input, # The previous layer.
num_inputs, # Num. inputs from prev. layer.
num_outputs, # Num. outputs.
use_relu=True): # Use Rectified Linear Unit (ReLU)?
# Create new weights and biases.
weights = new_weights(shape=[num_inputs, num_outputs])
biases = new_biases(length=num_outputs)
# Calculate the layer as the matrix multiplication of
# the input and weights, and then add the bias-values.
layer = tf.matmul(input, weights) + biases
# Use ReLU?
if use_relu:
layer = tf.nn.relu(layer)
return layer
###Output
_____no_output_____
###Markdown
占位符 (Placeholder)变量 Placeholder是作为图的输入,每次我们运行图的时候都可能会改变它们。将这个过程称为feeding placeholder变量,后面将会描述它。首先我们为输入图像定义placeholder变量。这让我们可以改变输入到TensorFlow图中的图像。这也是一个张量(tensor),代表一个多维向量或矩阵。数据类型设置为float32,形状设为`[None, img_size_flat]`,`None`代表tensor可能保存着任意数量的图像,每张图象是一个长度为`img_size_flat`的向量。
###Code
x = tf.placeholder(tf.float32, shape=[None, img_size_flat], name='x')
###Output
_____no_output_____
###Markdown
卷积层希望`x`被编码为4维张量,因此我们需要将它的形状转换至`[num_images, img_height, img_width, num_channels]`。注意`img_height == img_width == img_size`,如果第一维的大小设为-1, `num_images`的大小也会被自动推导出来。转换运算如下:
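reshape 中 -1 的自动推导可以用一个很小的 NumPy 例子说明(示意性代码,与上面的计算图无关):

```python
import numpy as np

flat = np.zeros((5, 784))               # 5 张展平的 28x28 图像
imgs = flat.reshape(-1, 28, 28, 1)      # -1 被自动推导为 5
print(imgs.shape)                       # (5, 28, 28, 1)
```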
###Code
x_image = tf.reshape(x, [-1, img_size, img_size, num_channels])
###Output
_____no_output_____
###Markdown
接下来我们为输入变量`x`中的图像所对应的真实标签定义placeholder变量。变量的形状是`[None, num_classes]`,这代表着它保存了任意数量的标签,每个标签是长度为`num_classes`的向量,本例中长度为10。
###Code
y_true = tf.placeholder(tf.float32, shape=[None, 10], name='y_true')
###Output
_____no_output_____
###Markdown
我们也可以为class-number提供一个placeholder,但这里用argmax来计算它。这里只是TensorFlow中的一些操作,没有执行什么运算。
###Code
y_true_cls = tf.argmax(y_true, dimension=1)
###Output
_____no_output_____
###Markdown
卷积层 1创建第一个卷积层。将`x_image`当作输入,创建`num_filters1`个不同的滤波器,每个滤波器的宽高都与 `filter_size1`相等。最终我们会用2x2的max-pooling将图像降采样,使它的尺寸减半。
###Code
layer_conv1, weights_conv1 = \
new_conv_layer(input=x_image,
num_input_channels=num_channels,
filter_size=filter_size1,
num_filters=num_filters1,
use_pooling=True)
###Output
_____no_output_____
###Markdown
检查卷积层输出张量的大小。它是(?,14, 14, 16),这代表着有任意数量的图像(?代表数量),每张图像有14个像素的宽和高,有16个不同的通道,每个滤波器各有一个通道。
###Code
layer_conv1
###Output
_____no_output_____
###Markdown
卷积层 2创建第二个卷积层,它将第一个卷积层的输出作为输入。输入通道的数量对应着第一个卷积层的滤波数。
###Code
layer_conv2, weights_conv2 = \
new_conv_layer(input=layer_conv1,
num_input_channels=num_filters1,
filter_size=filter_size2,
num_filters=num_filters2,
use_pooling=True)
###Output
_____no_output_____
###Markdown
核对一下这个卷积层输出张量的大小。它的大小是(?, 7, 7, 36),其中?也代表着任意数量的图像,每张图有7像素的宽高,每个滤波器有36个通道。
###Code
layer_conv2
###Output
_____no_output_____
###Markdown
转换层这个卷积层输出一个4维张量。现在我们想将它作为一个全连接网络的输入,这就需要将它转换成2维张量。
###Code
layer_flat, num_features = flatten_layer(layer_conv2)
###Output
_____no_output_____
###Markdown
这个张量的大小是(?, 1764),意味着共有一定数量的图像,每张图像被转换成长为1764的向量。其中1764 = 7 x 7 x 36。
###Code
layer_flat
num_features
###Output
_____no_output_____
###Markdown
全连接层 1往网络中添加一个全连接层。输入是一个前面卷积得到的被转换过的层。全连接层中的神经元或节点数为`fc_size`。我们可以用ReLU来学习非线性关系。
###Code
layer_fc1 = new_fc_layer(input=layer_flat,
num_inputs=num_features,
num_outputs=fc_size,
use_relu=True)
###Output
_____no_output_____
###Markdown
全连接层的输出是一个大小为(?,128)的张量,?代表着一定数量的图像,并且`fc_size` == 128。
###Code
layer_fc1
###Output
_____no_output_____
###Markdown
全连接层 2添加另外一个全连接层,它的输出是一个长度为10的向量,它确定了输入图是属于哪个类别。这层并没有用到ReLU。
###Code
layer_fc2 = new_fc_layer(input=layer_fc1,
num_inputs=fc_size,
num_outputs=num_classes,
use_relu=False)
layer_fc2
###Output
_____no_output_____
###Markdown
预测类别 第二个全连接层估算了输入图有多大的可能属于10个类别中的其中一个。然而,这是很粗略的估计并且很难解释,因为数值可能很小或很大,因此我们会对它们做归一化,将每个元素限制在0到1之间,并且相加为1。这用一个称为softmax的函数来计算的,结果保存在`y_pred`中。
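softmax 的效果可以用 NumPy 简单演示(示意性代码):每个输出都落在 0 到 1 之间,并且相加为 1:

```python
import numpy as np

logits = np.array([2.0, 1.0, 0.1])
probs = np.exp(logits) / np.exp(logits).sum()
print(probs)        # 约 [0.659, 0.242, 0.099]
print(probs.sum())  # 在浮点精度内等于 1.0
```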
###Code
y_pred = tf.nn.softmax(layer_fc2)
###Output
_____no_output_____
###Markdown
类别数字是最大元素的索引。
###Code
y_pred_cls = tf.argmax(y_pred, dimension=1)
###Output
_____no_output_____
###Markdown
优化损失函数 为了使模型更好地对输入图像进行分类,我们必须改变`weights`和`biases`变量。首先我们需要对比模型`y_pred`的预测输出和期望输出的`y_true`,来了解目前模型的性能如何。交叉熵(cross-entropy)是在分类中使用的性能度量。交叉熵是一个常为正值的连续函数,如果模型的预测值精准地符合期望的输出,它就等于零。因此,优化的目的就是通过改变网络层的变量来最小化交叉熵。TensorFlow有一个内置的计算交叉熵的函数。这个函数内部计算了softmax,所以我们要用`layer_fc2`的输出而非直接用`y_pred`,因为`y_pred`上已经计算了softmax。
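交叉熵本身的计算很简单:取真实类别所对应预测概率的负对数;预测越准确,交叉熵越接近零(示意性代码):

```python
import numpy as np

y_true_onehot = np.array([0.0, 0.0, 1.0])   # one-hot 真实标签
good = np.array([0.05, 0.05, 0.90])         # 比较准确的预测
bad = np.array([0.60, 0.30, 0.10])          # 比较差的预测

def cross_entropy(p):
    return -np.sum(y_true_onehot * np.log(p))

print(cross_entropy(good), cross_entropy(bad))   # 约 0.105 和 2.303
```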
###Code
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(logits=layer_fc2,
labels=y_true)
###Output
_____no_output_____
###Markdown
我们为每个图像分类计算了交叉熵,所以有一个当前模型在每张图上表现的度量。但是为了用交叉熵来指导模型变量的优化,我们需要一个额外的标量值,因此简单地利用所有图像分类交叉熵的均值。
###Code
cost = tf.reduce_mean(cross_entropy)
###Output
_____no_output_____
###Markdown
优化方法 既然我们有一个需要被最小化的损失度量,接着就可以创建一个优化器。这个例子中,我们使用的是梯度下降的变体`AdamOptimizer`。优化过程并不是在这里执行。实际上,还没计算任何东西,我们只是往TensorFlow图中添加了优化器,以便之后的操作。
###Code
optimizer = tf.train.AdamOptimizer(learning_rate=1e-4).minimize(cost)
###Output
_____no_output_____
###Markdown
性能度量 我们需要另外一些性能度量,来向用户展示这个过程。这是一个布尔值向量,代表预测类型是否等于每张图片的真实类型。
###Code
correct_prediction = tf.equal(y_pred_cls, y_true_cls)
###Output
_____no_output_____
###Markdown
上面的计算先将布尔值向量类型转换成浮点型向量,这样子False就变成0,True变成1,然后计算这些值的平均数,以此来计算分类的准确度。
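例如(示意性代码),把布尔数组转换成浮点数再取平均:

```python
import numpy as np

correct = np.array([True, True, False, True])
print(correct.astype(np.float32).mean())   # 0.75
```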
###Code
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
###Output
_____no_output_____
###Markdown
运行TensorFlow 创建TensorFlow会话(session)一旦创建了TensorFlow图,我们需要创建一个TensorFlow会话,用来运行图。
###Code
session = tf.Session()
###Output
_____no_output_____
###Markdown
初始化变量我们需要在开始优化weights和biases变量之前对它们进行初始化。
###Code
session.run(tf.global_variables_initializer())
###Output
_____no_output_____
###Markdown
用来优化迭代的帮助函数 在训练集中有50,000张图。用这些图像计算模型的梯度会花很多时间。因此我们利用随机梯度下降的方法,它在优化器的每次迭代里只用到了一小部分的图像。如果内存耗尽导致电脑死机或变得很慢,你应该试着减少这些数量,但同时可能还需要更优化的迭代。
###Code
train_batch_size = 64
###Output
_____no_output_____
###Markdown
函数执行了多次的优化迭代来逐步地提升网络层的变量。在每次迭代中,从训练集中选择一批新的数据,然后TensorFlow用这些训练样本来执行优化器。每100次迭代会打印出相关信息。
###Code
# Counter for total number of iterations performed so far.
total_iterations = 0
def optimize(num_iterations):
# Ensure we update the global variable rather than a local copy.
global total_iterations
# Start-time used for printing time-usage below.
start_time = time.time()
for i in range(total_iterations,
total_iterations + num_iterations):
# Get a batch of training examples.
# x_batch now holds a batch of images and
# y_true_batch are the true labels for those images.
x_batch, y_true_batch = data.train.next_batch(train_batch_size)
# Put the batch into a dict with the proper names
# for placeholder variables in the TensorFlow graph.
feed_dict_train = {x: x_batch,
y_true: y_true_batch}
# Run the optimizer using this batch of training data.
# TensorFlow assigns the variables in feed_dict_train
# to the placeholder variables and then runs the optimizer.
session.run(optimizer, feed_dict=feed_dict_train)
# Print status every 100 iterations.
if i % 100 == 0:
# Calculate the accuracy on the training-set.
acc = session.run(accuracy, feed_dict=feed_dict_train)
# Message for printing.
msg = "Optimization Iteration: {0:>6}, Training Accuracy: {1:>6.1%}"
# Print it.
print(msg.format(i + 1, acc))
# Update the total number of iterations performed.
total_iterations += num_iterations
# Ending time.
end_time = time.time()
# Difference between start and end-times.
time_dif = end_time - start_time
# Print the time-usage.
print("Time usage: " + str(timedelta(seconds=int(round(time_dif)))))
###Output
_____no_output_____
###Markdown
用来绘制错误样本的帮助函数 函数用来绘制测试集中被误分类的样本。
###Code
def plot_example_errors(cls_pred, correct):
# This function is called from print_test_accuracy() below.
# cls_pred is an array of the predicted class-number for
# all images in the test-set.
# correct is a boolean array whether the predicted class
# is equal to the true class for each image in the test-set.
# Negate the boolean array.
incorrect = (correct == False)
# Get the images from the test-set that have been
# incorrectly classified.
images = data.test.images[incorrect]
# Get the predicted classes for those images.
cls_pred = cls_pred[incorrect]
# Get the true classes for those images.
cls_true = data.test.cls[incorrect]
# Plot the first 9 images.
plot_images(images=images[0:9],
cls_true=cls_true[0:9],
cls_pred=cls_pred[0:9])
###Output
_____no_output_____
###Markdown
绘制混淆(confusion)矩阵的帮助函数
###Code
def plot_confusion_matrix(cls_pred):
# This is called from print_test_accuracy() below.
# cls_pred is an array of the predicted class-number for
# all images in the test-set.
# Get the true classifications for the test-set.
cls_true = data.test.cls
# Get the confusion matrix using sklearn.
cm = confusion_matrix(y_true=cls_true,
y_pred=cls_pred)
# Print the confusion matrix as text.
print(cm)
# Plot the confusion matrix as an image.
plt.matshow(cm)
# Make various adjustments to the plot.
plt.colorbar()
tick_marks = np.arange(num_classes)
plt.xticks(tick_marks, range(num_classes))
plt.yticks(tick_marks, range(num_classes))
plt.xlabel('Predicted')
plt.ylabel('True')
# Ensure the plot is shown correctly with multiple plots
# in a single Notebook cell.
plt.show()
###Output
_____no_output_____
###Markdown
展示性能的帮助函数 函数用来打印测试集上的分类准确度。为测试集上的所有图片计算分类会花费一段时间,因此我们直接用这个函数来调用上面的结果,这样就不用每次都重新计算了。这个函数可能会占用很多电脑内存,这也是为什么将测试集分成更小的几个部分。如果你的电脑内存比较小或死机了,就要试着降低batch-size。
###Code
# Split the test-set into smaller batches of this size.
test_batch_size = 256
def print_test_accuracy(show_example_errors=False,
show_confusion_matrix=False):
# Number of images in the test-set.
num_test = len(data.test.images)
# Allocate an array for the predicted classes which
# will be calculated in batches and filled into this array.
cls_pred = np.zeros(shape=num_test, dtype=np.int)
# Now calculate the predicted classes for the batches.
# We will just iterate through all the batches.
# There might be a more clever and Pythonic way of doing this.
# The starting index for the next batch is denoted i.
i = 0
while i < num_test:
# The ending index for the next batch is denoted j.
j = min(i + test_batch_size, num_test)
# Get the images from the test-set between index i and j.
images = data.test.images[i:j, :]
# Get the associated labels.
labels = data.test.labels[i:j, :]
# Create a feed-dict with these images and labels.
feed_dict = {x: images,
y_true: labels}
# Calculate the predicted class using TensorFlow.
cls_pred[i:j] = session.run(y_pred_cls, feed_dict=feed_dict)
# Set the start-index for the next batch to the
# end-index of the current batch.
i = j
# Convenience variable for the true class-numbers of the test-set.
cls_true = data.test.cls
# Create a boolean array whether each image is correctly classified.
correct = (cls_true == cls_pred)
# Calculate the number of correctly classified images.
# When summing a boolean array, False means 0 and True means 1.
correct_sum = correct.sum()
# Classification accuracy is the number of correctly classified
# images divided by the total number of images in the test-set.
acc = float(correct_sum) / num_test
# Print the accuracy.
msg = "Accuracy on Test-Set: {0:.1%} ({1} / {2})"
print(msg.format(acc, correct_sum, num_test))
# Plot some examples of mis-classifications, if desired.
if show_example_errors:
print("Example errors:")
plot_example_errors(cls_pred=cls_pred, correct=correct)
# Plot the confusion matrix, if desired.
if show_confusion_matrix:
print("Confusion Matrix:")
plot_confusion_matrix(cls_pred=cls_pred)
###Output
_____no_output_____
###Markdown
优化之前的性能测试集上的准确度很低,这是由于模型只做了初始化,并没做任何优化,所以它只是对图像做随机分类。
###Code
print_test_accuracy()
###Output
Accuracy on Test-Set: 10.9% (1093 / 10000)
###Markdown
1次迭代后的性能做了一次优化后,此时优化器的学习率很低,性能其实并没有多大提升。
###Code
optimize(num_iterations=1)
print_test_accuracy()
###Output
Accuracy on Test-Set: 13.0% (1296 / 10000)
###Markdown
100次迭代优化后的性能100次优化迭代之后,模型显著地提升了分类的准确度。
###Code
optimize(num_iterations=99) # We already performed 1 iteration above.
print_test_accuracy(show_example_errors=True)
###Output
Accuracy on Test-Set: 66.6% (6656 / 10000)
Example errors:
###Markdown
1000次优化迭代后的性能1000次优化迭代之后,模型在测试集上的准确度超过了90%。
###Code
optimize(num_iterations=900) # We performed 100 iterations above.
print_test_accuracy(show_example_errors=True)
###Output
Accuracy on Test-Set: 93.1% (9308 / 10000)
Example errors:
###Markdown
10,000次优化迭代后的性能经过10,000次优化迭代后,测试集上的分类准确率高达99%。
###Code
optimize(num_iterations=9000) # We performed 1000 iterations above.
print_test_accuracy(show_example_errors=True,
show_confusion_matrix=True)
###Output
Accuracy on Test-Set: 98.8% (9880 / 10000)
Example errors:
###Markdown
权重和层的可视化为了理解为什么卷积神经网络可以识别手写数字,我们将会对卷积滤波和输出图像进行可视化。 绘制卷积权重的帮助函数
###Code
def plot_conv_weights(weights, input_channel=0):
# Assume weights are TensorFlow ops for 4-dim variables
# e.g. weights_conv1 or weights_conv2.
# Retrieve the values of the weight-variables from TensorFlow.
# A feed-dict is not necessary because nothing is calculated.
w = session.run(weights)
# Get the lowest and highest values for the weights.
# This is used to correct the colour intensity across
# the images so they can be compared with each other.
w_min = np.min(w)
w_max = np.max(w)
# Number of filters used in the conv. layer.
num_filters = w.shape[3]
# Number of grids to plot.
# Rounded-up, square-root of the number of filters.
num_grids = math.ceil(math.sqrt(num_filters))
# Create figure with a grid of sub-plots.
fig, axes = plt.subplots(num_grids, num_grids)
# Plot all the filter-weights.
for i, ax in enumerate(axes.flat):
# Only plot the valid filter-weights.
if i<num_filters:
# Get the weights for the i'th filter of the input channel.
# See new_conv_layer() for details on the format
# of this 4-dim tensor.
img = w[:, :, input_channel, i]
# Plot image.
ax.imshow(img, vmin=w_min, vmax=w_max,
interpolation='nearest', cmap='seismic')
# Remove ticks from the plot.
ax.set_xticks([])
ax.set_yticks([])
# Ensure the plot is shown correctly with multiple plots
# in a single Notebook cell.
plt.show()
###Output
_____no_output_____
###Markdown
绘制卷积层输出的帮助函数
###Code
def plot_conv_layer(layer, image):
# Assume layer is a TensorFlow op that outputs a 4-dim tensor
# which is the output of a convolutional layer,
# e.g. layer_conv1 or layer_conv2.
# Create a feed-dict containing just one image.
# Note that we don't need to feed y_true because it is
# not used in this calculation.
feed_dict = {x: [image]}
# Calculate and retrieve the output values of the layer
# when inputting that image.
values = session.run(layer, feed_dict=feed_dict)
# Number of filters used in the conv. layer.
num_filters = values.shape[3]
# Number of grids to plot.
# Rounded-up, square-root of the number of filters.
num_grids = math.ceil(math.sqrt(num_filters))
# Create figure with a grid of sub-plots.
fig, axes = plt.subplots(num_grids, num_grids)
# Plot the output images of all the filters.
for i, ax in enumerate(axes.flat):
# Only plot the images for valid filters.
if i<num_filters:
# Get the output image of using the i'th filter.
# See new_conv_layer() for details on the format
# of this 4-dim tensor.
img = values[0, :, :, i]
# Plot image.
ax.imshow(img, interpolation='nearest', cmap='binary')
# Remove ticks from the plot.
ax.set_xticks([])
ax.set_yticks([])
# Ensure the plot is shown correctly with multiple plots
# in a single Notebook cell.
plt.show()
###Output
_____no_output_____
###Markdown
输入图像 绘制图像的帮助函数
###Code
def plot_image(image):
plt.imshow(image.reshape(img_shape),
interpolation='nearest',
cmap='binary')
plt.show()
###Output
_____no_output_____
###Markdown
如下所示,绘制一张测试集中的图像。
###Code
image1 = data.test.images[0]
plot_image(image1)
###Output
_____no_output_____
###Markdown
绘制测试集里的另一张图像。
###Code
image2 = data.test.images[13]
plot_image(image2)
###Output
_____no_output_____
###Markdown
卷积层 1 现在绘制第一个卷积层的滤波权重。其中正值权重是红色的,负值为蓝色。
###Code
plot_conv_weights(weights=weights_conv1)
###Output
_____no_output_____
###Markdown
将这些卷积滤波添加到第一张输入图像,得到以下输出,它们也作为第二个卷积层的输入。注意这些图像被降采样到14 x 14像素,即原始输入图分辨率的一半。
###Code
plot_conv_layer(layer=layer_conv1, image=image1)
###Output
_____no_output_____
###Markdown
下面是将卷积滤波添加到第二张图像的结果。
###Code
plot_conv_layer(layer=layer_conv1, image=image2)
###Output
_____no_output_____
###Markdown
从这些图像很难看出卷积滤波的作用是什么。显然,它们生成了输入图像的一些变体,就像光线从不同角度打到图像上并产生阴影一样。 卷积层 2 现在绘制第二个卷积层的滤波权重。第一个卷积层有16个输出通道,代表着第二个卷基层有16个输入。第二个卷积层的每个输入通道也有一些权重滤波。我们先绘制第一个通道的权重滤波。同样的,正值是红色,负值是蓝色。
###Code
plot_conv_weights(weights=weights_conv2, input_channel=0)
###Output
_____no_output_____
###Markdown
第二个卷积层共有16个输入通道,我们可以同样地画出其他图像。这里我们画出第二个通道的图像。
###Code
plot_conv_weights(weights=weights_conv2, input_channel=1)
###Output
_____no_output_____
###Markdown
由于这些滤波是高维度的,很难理解它们是如何应用的。给第一个卷积层的输出加上这些滤波,得到下面的图像。这些图像被降采样至7 x 7的像素,即上一个卷积层输出的一半。
###Code
plot_conv_layer(layer=layer_conv2, image=image1)
###Output
_____no_output_____
###Markdown
这是给第二张图像加上滤波权重的结果。
###Code
plot_conv_layer(layer=layer_conv2, image=image2)
###Output
_____no_output_____
###Markdown
从这些图像来看,似乎第二个卷积层会检测输入图像中的线段和模式,这对输入图中的局部变化不那么敏感。 关闭TensorFlow会话 现在我们已经用TensorFlow完成了任务,关闭session,释放资源。
###Code
# This has been commented out in case you want to modify and experiment
# with the Notebook without having to restart it.
# session.close()
###Output
_____no_output_____ |
Handling_imbalanced_dataset.ipynb | ###Markdown
###Code
import pandas as pd
from matplotlib import pyplot as plt
import numpy as np
%matplotlib inline
import warnings
warnings.filterwarnings('ignore')
df = pd.read_csv("https://raw.githubusercontent.com/nahin333/DL-practice-codes/main/customer_churn.csv")
df.sample(5)
df.Churn.value_counts()
df.drop('customerID',axis='columns',inplace=True)
df.dtypes
df[pd.to_numeric(df.TotalCharges,errors='coerce').isnull()]
df1 = df[df.TotalCharges!=' ']
df1.shape
df1.TotalCharges = pd.to_numeric(df1.TotalCharges)
df1.TotalCharges.values
tenure_churn_no = df1[df1.Churn=='No'].tenure
tenure_churn_yes = df1[df1.Churn=='Yes'].tenure
plt.xlabel("tenure")
plt.ylabel("Number Of Customers")
plt.title("Customer Churn Prediction Visualiztion")
plt.hist([tenure_churn_yes, tenure_churn_no], rwidth=0.95, color=['green','red'],label=['Churn=Yes','Churn=No'])
plt.legend()
mc_churn_no = df1[df1.Churn=='No'].MonthlyCharges
mc_churn_yes = df1[df1.Churn=='Yes'].MonthlyCharges
plt.xlabel("Monthly Charges")
plt.ylabel("Number Of Customers")
plt.title("Customer Churn Prediction Visualiztion")
plt.hist([mc_churn_yes, mc_churn_no], rwidth=0.95, color=['green','red'],label=['Churn=Yes','Churn=No'])
plt.legend()
def print_unique_col_values(df):
for column in df:
if df[column].dtypes=='object':
print(f'{column}: {df[column].unique()}')
print_unique_col_values(df1)
df1.replace('No internet service','No',inplace=True)
df1.replace('No phone service','No',inplace=True)
print_unique_col_values(df1)
yes_no_columns = ['Partner','Dependents','PhoneService','MultipleLines','OnlineSecurity','OnlineBackup',
'DeviceProtection','TechSupport','StreamingTV','StreamingMovies','PaperlessBilling','Churn']
for col in yes_no_columns:
df1[col].replace({'Yes': 1,'No': 0},inplace=True)
for col in df1:
print(f'{col}: {df1[col].unique()}')
df1['gender'].replace({'Female':1,'Male':0},inplace=True)
df2 = pd.get_dummies(data=df1, columns=['InternetService','Contract','PaymentMethod'])
df2.columns
df2.dtypes
cols_to_scale = ['tenure','MonthlyCharges','TotalCharges']
from sklearn.preprocessing import MinMaxScaler
scaler = MinMaxScaler()
df2[cols_to_scale] = scaler.fit_transform(df2[cols_to_scale])
for col in df2:
print(f'{col}: {df2[col].unique()}')
X = df2.drop('Churn',axis='columns')
y = testLabels = df2.Churn.astype(np.float32)
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=15, stratify=y)
y_train.value_counts()
y.value_counts()
y_test.value_counts()
print(X_train.shape,X_test.shape)
pip install tensorflow-addons==0.15.0
import tensorflow as tf
from tensorflow import keras
from sklearn.metrics import confusion_matrix , classification_report
from tensorflow_addons import losses
def ANN(X_train, y_train, X_test, y_test, loss, weights):
model = keras.Sequential([
keras.layers.Dense(26, input_dim=26, activation='relu'),
keras.layers.Dense(15, activation='relu'),
keras.layers.Dense(1, activation='sigmoid')
])
model.compile(optimizer='adam', loss=loss, metrics=['accuracy'])
if weights == -1:
model.fit(X_train, y_train, epochs=100)
else:
model.fit(X_train, y_train, epochs=100, class_weight = weights)
print(model.evaluate(X_test, y_test))
y_preds = model.predict(X_test)
y_preds = np.round(y_preds)
print("Classification Report: \n", classification_report(y_test, y_preds))
return y_preds
y_preds = ANN(X_train, y_train, X_test, y_test, 'binary_crossentropy', -1)
###Output
Epoch 1/100
176/176 [==============================] - 1s 1ms/step - loss: 0.4946 - accuracy: 0.7493
Epoch 2/100
176/176 [==============================] - 0s 1ms/step - loss: 0.4320 - accuracy: 0.7852
Epoch 3/100
176/176 [==============================] - 0s 1ms/step - loss: 0.4217 - accuracy: 0.7934
Epoch 4/100
176/176 [==============================] - 0s 1ms/step - loss: 0.4177 - accuracy: 0.7945
Epoch 5/100
176/176 [==============================] - 0s 1ms/step - loss: 0.4133 - accuracy: 0.8023
Epoch 6/100
176/176 [==============================] - 0s 1ms/step - loss: 0.4117 - accuracy: 0.8023
Epoch 7/100
176/176 [==============================] - 0s 1ms/step - loss: 0.4098 - accuracy: 0.8025
Epoch 8/100
176/176 [==============================] - 0s 1ms/step - loss: 0.4074 - accuracy: 0.8050
Epoch 9/100
176/176 [==============================] - 0s 1ms/step - loss: 0.4059 - accuracy: 0.8105
Epoch 10/100
176/176 [==============================] - 0s 1ms/step - loss: 0.4058 - accuracy: 0.8082
Epoch 11/100
176/176 [==============================] - 0s 1ms/step - loss: 0.4033 - accuracy: 0.8091
Epoch 12/100
176/176 [==============================] - 0s 1ms/step - loss: 0.4016 - accuracy: 0.8082
Epoch 13/100
176/176 [==============================] - 0s 1ms/step - loss: 0.4030 - accuracy: 0.8117
Epoch 14/100
176/176 [==============================] - 0s 1ms/step - loss: 0.3994 - accuracy: 0.8064
Epoch 15/100
176/176 [==============================] - 0s 1ms/step - loss: 0.3978 - accuracy: 0.8082
Epoch 16/100
176/176 [==============================] - 0s 1ms/step - loss: 0.3972 - accuracy: 0.8114
Epoch 17/100
176/176 [==============================] - 0s 1ms/step - loss: 0.3968 - accuracy: 0.8084
Epoch 18/100
176/176 [==============================] - 0s 1ms/step - loss: 0.3950 - accuracy: 0.8117
Epoch 19/100
176/176 [==============================] - 0s 1ms/step - loss: 0.3942 - accuracy: 0.8121
Epoch 20/100
176/176 [==============================] - 0s 1ms/step - loss: 0.3940 - accuracy: 0.8114
Epoch 21/100
176/176 [==============================] - 0s 1ms/step - loss: 0.3926 - accuracy: 0.8107
Epoch 22/100
176/176 [==============================] - 0s 1ms/step - loss: 0.3925 - accuracy: 0.8146
Epoch 23/100
176/176 [==============================] - 0s 1ms/step - loss: 0.3905 - accuracy: 0.8153
Epoch 24/100
176/176 [==============================] - 0s 1ms/step - loss: 0.3901 - accuracy: 0.8133
Epoch 25/100
176/176 [==============================] - 0s 1ms/step - loss: 0.3891 - accuracy: 0.8162
Epoch 26/100
176/176 [==============================] - 0s 1ms/step - loss: 0.3889 - accuracy: 0.8146
Epoch 27/100
176/176 [==============================] - 0s 1ms/step - loss: 0.3880 - accuracy: 0.8121
Epoch 28/100
176/176 [==============================] - 0s 1ms/step - loss: 0.3878 - accuracy: 0.8176
Epoch 29/100
176/176 [==============================] - 0s 1ms/step - loss: 0.3865 - accuracy: 0.8183
Epoch 30/100
176/176 [==============================] - 0s 1ms/step - loss: 0.3855 - accuracy: 0.8162
Epoch 31/100
176/176 [==============================] - 0s 1ms/step - loss: 0.3846 - accuracy: 0.8132
Epoch 32/100
176/176 [==============================] - 0s 1ms/step - loss: 0.3852 - accuracy: 0.8165
Epoch 33/100
176/176 [==============================] - 0s 1ms/step - loss: 0.3847 - accuracy: 0.8148
Epoch 34/100
176/176 [==============================] - 0s 1ms/step - loss: 0.3835 - accuracy: 0.8183
Epoch 35/100
176/176 [==============================] - 0s 1ms/step - loss: 0.3829 - accuracy: 0.8169
Epoch 36/100
176/176 [==============================] - 0s 1ms/step - loss: 0.3834 - accuracy: 0.8156
Epoch 37/100
176/176 [==============================] - 0s 1ms/step - loss: 0.3812 - accuracy: 0.8222
Epoch 38/100
176/176 [==============================] - 0s 1ms/step - loss: 0.3813 - accuracy: 0.8160
Epoch 39/100
176/176 [==============================] - 0s 1ms/step - loss: 0.3814 - accuracy: 0.8183
Epoch 40/100
176/176 [==============================] - 0s 1ms/step - loss: 0.3785 - accuracy: 0.8192
Epoch 41/100
176/176 [==============================] - 0s 1ms/step - loss: 0.3797 - accuracy: 0.8215
Epoch 42/100
176/176 [==============================] - 0s 1ms/step - loss: 0.3776 - accuracy: 0.8181
Epoch 43/100
176/176 [==============================] - 0s 1ms/step - loss: 0.3784 - accuracy: 0.8178
Epoch 44/100
176/176 [==============================] - 0s 1ms/step - loss: 0.3775 - accuracy: 0.8219
Epoch 45/100
176/176 [==============================] - 0s 1ms/step - loss: 0.3754 - accuracy: 0.8251
Epoch 46/100
176/176 [==============================] - 0s 1ms/step - loss: 0.3762 - accuracy: 0.8178
Epoch 47/100
176/176 [==============================] - 0s 1ms/step - loss: 0.3758 - accuracy: 0.8222
Epoch 48/100
176/176 [==============================] - 0s 1ms/step - loss: 0.3749 - accuracy: 0.8181
Epoch 49/100
176/176 [==============================] - 0s 1ms/step - loss: 0.3741 - accuracy: 0.8192
Epoch 50/100
176/176 [==============================] - 0s 1ms/step - loss: 0.3720 - accuracy: 0.8236
Epoch 51/100
176/176 [==============================] - 0s 1ms/step - loss: 0.3739 - accuracy: 0.8196
Epoch 52/100
176/176 [==============================] - 0s 1ms/step - loss: 0.3724 - accuracy: 0.8226
Epoch 53/100
176/176 [==============================] - 0s 1ms/step - loss: 0.3725 - accuracy: 0.8256
Epoch 54/100
176/176 [==============================] - 0s 1ms/step - loss: 0.3711 - accuracy: 0.8238
Epoch 55/100
176/176 [==============================] - 0s 1ms/step - loss: 0.3719 - accuracy: 0.8208
Epoch 56/100
176/176 [==============================] - 0s 1ms/step - loss: 0.3718 - accuracy: 0.8215
Epoch 57/100
176/176 [==============================] - 0s 1ms/step - loss: 0.3699 - accuracy: 0.8283
Epoch 58/100
176/176 [==============================] - 0s 1ms/step - loss: 0.3694 - accuracy: 0.8228
Epoch 59/100
176/176 [==============================] - 0s 1ms/step - loss: 0.3675 - accuracy: 0.8251
Epoch 60/100
176/176 [==============================] - 0s 1ms/step - loss: 0.3675 - accuracy: 0.8274
Epoch 61/100
176/176 [==============================] - 0s 1ms/step - loss: 0.3672 - accuracy: 0.8233
Epoch 62/100
176/176 [==============================] - 0s 1ms/step - loss: 0.3674 - accuracy: 0.8263
Epoch 63/100
176/176 [==============================] - 0s 1ms/step - loss: 0.3668 - accuracy: 0.8283
Epoch 64/100
176/176 [==============================] - 0s 1ms/step - loss: 0.3661 - accuracy: 0.8252
Epoch 65/100
176/176 [==============================] - 0s 1ms/step - loss: 0.3652 - accuracy: 0.8267
Epoch 66/100
176/176 [==============================] - 0s 1ms/step - loss: 0.3663 - accuracy: 0.8274
Epoch 67/100
176/176 [==============================] - 0s 1ms/step - loss: 0.3631 - accuracy: 0.8281
Epoch 68/100
176/176 [==============================] - 0s 1ms/step - loss: 0.3637 - accuracy: 0.8292
Epoch 69/100
176/176 [==============================] - 0s 1ms/step - loss: 0.3640 - accuracy: 0.8283
Epoch 70/100
176/176 [==============================] - 0s 1ms/step - loss: 0.3623 - accuracy: 0.8265
Epoch 71/100
176/176 [==============================] - 0s 1ms/step - loss: 0.3632 - accuracy: 0.8274
Epoch 72/100
176/176 [==============================] - 0s 1ms/step - loss: 0.3623 - accuracy: 0.8276
Epoch 73/100
176/176 [==============================] - 0s 1ms/step - loss: 0.3625 - accuracy: 0.8247
Epoch 74/100
176/176 [==============================] - 0s 1ms/step - loss: 0.3612 - accuracy: 0.8300
Epoch 75/100
176/176 [==============================] - 0s 1ms/step - loss: 0.3602 - accuracy: 0.8318
Epoch 76/100
176/176 [==============================] - 0s 1ms/step - loss: 0.3595 - accuracy: 0.8308
Epoch 77/100
176/176 [==============================] - 0s 1ms/step - loss: 0.3598 - accuracy: 0.8302
Epoch 78/100
176/176 [==============================] - 0s 1ms/step - loss: 0.3602 - accuracy: 0.8306
Epoch 79/100
176/176 [==============================] - 0s 1ms/step - loss: 0.3586 - accuracy: 0.8329
Epoch 80/100
176/176 [==============================] - 0s 1ms/step - loss: 0.3593 - accuracy: 0.8306
Epoch 81/100
176/176 [==============================] - 0s 1ms/step - loss: 0.3572 - accuracy: 0.8324
Epoch 82/100
176/176 [==============================] - 0s 1ms/step - loss: 0.3575 - accuracy: 0.8297
Epoch 83/100
176/176 [==============================] - 0s 1ms/step - loss: 0.3563 - accuracy: 0.8327
Epoch 84/100
176/176 [==============================] - 0s 1ms/step - loss: 0.3575 - accuracy: 0.8322
Epoch 85/100
176/176 [==============================] - 0s 1ms/step - loss: 0.3568 - accuracy: 0.8316
Epoch 86/100
176/176 [==============================] - 0s 1ms/step - loss: 0.3553 - accuracy: 0.8315
Epoch 87/100
176/176 [==============================] - 0s 1ms/step - loss: 0.3565 - accuracy: 0.8302
Epoch 88/100
176/176 [==============================] - 0s 1ms/step - loss: 0.3560 - accuracy: 0.8332
Epoch 89/100
176/176 [==============================] - 0s 1ms/step - loss: 0.3550 - accuracy: 0.8327
Epoch 90/100
176/176 [==============================] - 0s 1ms/step - loss: 0.3544 - accuracy: 0.8320
Epoch 91/100
176/176 [==============================] - 0s 1ms/step - loss: 0.3545 - accuracy: 0.8311
Epoch 92/100
176/176 [==============================] - 0s 1ms/step - loss: 0.3530 - accuracy: 0.8343
Epoch 93/100
176/176 [==============================] - 0s 1ms/step - loss: 0.3532 - accuracy: 0.8327
Epoch 94/100
176/176 [==============================] - 0s 1ms/step - loss: 0.3535 - accuracy: 0.8327
Epoch 95/100
176/176 [==============================] - 0s 1ms/step - loss: 0.3521 - accuracy: 0.8354
Epoch 96/100
176/176 [==============================] - 0s 1ms/step - loss: 0.3521 - accuracy: 0.8332
Epoch 97/100
176/176 [==============================] - 0s 1ms/step - loss: 0.3525 - accuracy: 0.8345
Epoch 98/100
176/176 [==============================] - 0s 1ms/step - loss: 0.3516 - accuracy: 0.8347
Epoch 99/100
176/176 [==============================] - 0s 1ms/step - loss: 0.3519 - accuracy: 0.8345
Epoch 100/100
176/176 [==============================] - 0s 1ms/step - loss: 0.3499 - accuracy: 0.8364
44/44 [==============================] - 0s 1ms/step - loss: 0.4683 - accuracy: 0.7775
[0.46832484006881714, 0.7775408625602722]
Classification Report:
precision recall f1-score support
0.0 0.82 0.89 0.85 1033
1.0 0.60 0.48 0.53 374
accuracy 0.78 1407
macro avg 0.71 0.68 0.69 1407
weighted avg 0.77 0.78 0.77 1407
###Markdown
Method 1: Undersampling
###Code
# Class count
count_class_0, count_class_1 = df1.Churn.value_counts()
# Divide by class
df_class_0 = df2[df2['Churn'] == 0]
df_class_1 = df2[df2['Churn'] == 1]
df_class_0_under = df_class_0.sample(count_class_1)
df_test_under = pd.concat([df_class_0_under, df_class_1], axis =0)
print('Random under-sampling')
print(df_test_under.Churn.value_counts())
X = df_test_under.drop('Churn', axis='columns')
y = df_test_under['Churn']
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=15, stratify=y)
y_train.value_counts()
y_preds = ANN(X_train, y_train, X_test, y_test, 'binary_crossentropy', -1)
###Output
Epoch 1/100
94/94 [==============================] - 1s 1ms/step - loss: 0.6369 - accuracy: 0.6502
Epoch 2/100
94/94 [==============================] - 0s 1ms/step - loss: 0.5158 - accuracy: 0.7548
Epoch 3/100
94/94 [==============================] - 0s 1ms/step - loss: 0.4978 - accuracy: 0.7569
Epoch 4/100
94/94 [==============================] - 0s 1ms/step - loss: 0.4894 - accuracy: 0.7619
Epoch 5/100
94/94 [==============================] - 0s 2ms/step - loss: 0.4862 - accuracy: 0.7642
Epoch 6/100
94/94 [==============================] - 0s 1ms/step - loss: 0.4826 - accuracy: 0.7635
Epoch 7/100
94/94 [==============================] - 0s 1ms/step - loss: 0.4801 - accuracy: 0.7722
Epoch 8/100
94/94 [==============================] - 0s 1ms/step - loss: 0.4778 - accuracy: 0.7682
Epoch 9/100
94/94 [==============================] - 0s 1ms/step - loss: 0.4773 - accuracy: 0.7709
Epoch 10/100
94/94 [==============================] - 0s 1ms/step - loss: 0.4744 - accuracy: 0.7719
Epoch 11/100
94/94 [==============================] - 0s 1ms/step - loss: 0.4727 - accuracy: 0.7722
Epoch 12/100
94/94 [==============================] - 0s 1ms/step - loss: 0.4717 - accuracy: 0.7716
Epoch 13/100
94/94 [==============================] - 0s 1ms/step - loss: 0.4708 - accuracy: 0.7719
Epoch 14/100
94/94 [==============================] - 0s 1ms/step - loss: 0.4697 - accuracy: 0.7722
Epoch 15/100
94/94 [==============================] - 0s 2ms/step - loss: 0.4674 - accuracy: 0.7786
Epoch 16/100
94/94 [==============================] - 0s 1ms/step - loss: 0.4665 - accuracy: 0.7759
Epoch 17/100
94/94 [==============================] - 0s 1ms/step - loss: 0.4641 - accuracy: 0.7829
Epoch 18/100
94/94 [==============================] - 0s 1ms/step - loss: 0.4638 - accuracy: 0.7766
Epoch 19/100
94/94 [==============================] - 0s 1ms/step - loss: 0.4615 - accuracy: 0.7799
Epoch 20/100
94/94 [==============================] - 0s 1ms/step - loss: 0.4605 - accuracy: 0.7849
Epoch 21/100
94/94 [==============================] - 0s 1ms/step - loss: 0.4591 - accuracy: 0.7773
Epoch 22/100
94/94 [==============================] - 0s 1ms/step - loss: 0.4571 - accuracy: 0.7793
Epoch 23/100
94/94 [==============================] - 0s 1ms/step - loss: 0.4574 - accuracy: 0.7816
Epoch 24/100
94/94 [==============================] - 0s 1ms/step - loss: 0.4557 - accuracy: 0.7846
Epoch 25/100
94/94 [==============================] - 0s 1ms/step - loss: 0.4534 - accuracy: 0.7829
Epoch 26/100
94/94 [==============================] - 0s 1ms/step - loss: 0.4551 - accuracy: 0.7813
Epoch 27/100
94/94 [==============================] - 0s 1ms/step - loss: 0.4528 - accuracy: 0.7866
Epoch 28/100
94/94 [==============================] - 0s 1ms/step - loss: 0.4496 - accuracy: 0.7870
Epoch 29/100
94/94 [==============================] - 0s 1ms/step - loss: 0.4488 - accuracy: 0.7896
Epoch 30/100
94/94 [==============================] - 0s 1ms/step - loss: 0.4495 - accuracy: 0.7866
Epoch 31/100
94/94 [==============================] - 0s 1ms/step - loss: 0.4464 - accuracy: 0.7886
Epoch 32/100
94/94 [==============================] - 0s 1ms/step - loss: 0.4449 - accuracy: 0.7866
Epoch 33/100
94/94 [==============================] - 0s 1ms/step - loss: 0.4442 - accuracy: 0.7883
Epoch 34/100
94/94 [==============================] - 0s 1ms/step - loss: 0.4413 - accuracy: 0.7906
Epoch 35/100
94/94 [==============================] - 0s 1ms/step - loss: 0.4416 - accuracy: 0.7890
Epoch 36/100
94/94 [==============================] - 0s 1ms/step - loss: 0.4409 - accuracy: 0.7930
Epoch 37/100
94/94 [==============================] - 0s 1ms/step - loss: 0.4378 - accuracy: 0.7920
Epoch 38/100
94/94 [==============================] - 0s 1ms/step - loss: 0.4378 - accuracy: 0.7913
Epoch 39/100
94/94 [==============================] - 0s 1ms/step - loss: 0.4359 - accuracy: 0.7950
Epoch 40/100
94/94 [==============================] - 0s 1ms/step - loss: 0.4351 - accuracy: 0.7983
Epoch 41/100
94/94 [==============================] - 0s 1ms/step - loss: 0.4343 - accuracy: 0.7933
Epoch 42/100
94/94 [==============================] - 0s 1ms/step - loss: 0.4337 - accuracy: 0.7926
Epoch 43/100
94/94 [==============================] - 0s 1ms/step - loss: 0.4361 - accuracy: 0.7933
Epoch 44/100
94/94 [==============================] - 0s 1ms/step - loss: 0.4301 - accuracy: 0.7987
Epoch 45/100
94/94 [==============================] - 0s 1ms/step - loss: 0.4310 - accuracy: 0.7957
Epoch 46/100
94/94 [==============================] - 0s 1ms/step - loss: 0.4291 - accuracy: 0.7960
Epoch 47/100
94/94 [==============================] - 0s 1ms/step - loss: 0.4282 - accuracy: 0.7990
Epoch 48/100
94/94 [==============================] - 0s 1ms/step - loss: 0.4270 - accuracy: 0.8010
Epoch 49/100
94/94 [==============================] - 0s 1ms/step - loss: 0.4255 - accuracy: 0.8013
Epoch 50/100
94/94 [==============================] - 0s 1ms/step - loss: 0.4234 - accuracy: 0.8027
Epoch 51/100
94/94 [==============================] - 0s 1ms/step - loss: 0.4242 - accuracy: 0.8030
Epoch 52/100
94/94 [==============================] - 0s 1ms/step - loss: 0.4247 - accuracy: 0.7997
Epoch 53/100
94/94 [==============================] - 0s 1ms/step - loss: 0.4222 - accuracy: 0.8037
Epoch 54/100
94/94 [==============================] - 0s 1ms/step - loss: 0.4206 - accuracy: 0.8050
Epoch 55/100
94/94 [==============================] - 0s 1ms/step - loss: 0.4193 - accuracy: 0.8007
Epoch 56/100
94/94 [==============================] - 0s 1ms/step - loss: 0.4200 - accuracy: 0.8074
Epoch 57/100
94/94 [==============================] - 0s 2ms/step - loss: 0.4202 - accuracy: 0.8094
Epoch 58/100
94/94 [==============================] - 0s 1ms/step - loss: 0.4217 - accuracy: 0.8017
Epoch 59/100
94/94 [==============================] - 0s 1ms/step - loss: 0.4183 - accuracy: 0.8047
Epoch 60/100
94/94 [==============================] - 0s 1ms/step - loss: 0.4157 - accuracy: 0.8077
Epoch 61/100
94/94 [==============================] - 0s 1ms/step - loss: 0.4145 - accuracy: 0.8064
Epoch 62/100
94/94 [==============================] - 0s 1ms/step - loss: 0.4167 - accuracy: 0.8020
Epoch 63/100
94/94 [==============================] - 0s 1ms/step - loss: 0.4140 - accuracy: 0.8057
Epoch 64/100
94/94 [==============================] - 0s 1ms/step - loss: 0.4133 - accuracy: 0.8117
Epoch 65/100
94/94 [==============================] - 0s 1ms/step - loss: 0.4119 - accuracy: 0.8090
Epoch 66/100
94/94 [==============================] - 0s 1ms/step - loss: 0.4119 - accuracy: 0.8134
Epoch 67/100
94/94 [==============================] - 0s 1ms/step - loss: 0.4092 - accuracy: 0.8107
Epoch 68/100
94/94 [==============================] - 0s 1ms/step - loss: 0.4099 - accuracy: 0.8097
Epoch 69/100
94/94 [==============================] - 0s 1ms/step - loss: 0.4075 - accuracy: 0.8057
Epoch 70/100
94/94 [==============================] - 0s 1ms/step - loss: 0.4083 - accuracy: 0.8080
Epoch 71/100
94/94 [==============================] - 0s 1ms/step - loss: 0.4086 - accuracy: 0.8130
Epoch 72/100
94/94 [==============================] - 0s 1ms/step - loss: 0.4046 - accuracy: 0.8100
Epoch 73/100
94/94 [==============================] - 0s 1ms/step - loss: 0.4063 - accuracy: 0.8100
Epoch 74/100
94/94 [==============================] - 0s 1ms/step - loss: 0.4060 - accuracy: 0.8137
Epoch 75/100
94/94 [==============================] - 0s 1ms/step - loss: 0.4047 - accuracy: 0.8074
Epoch 76/100
94/94 [==============================] - 0s 1ms/step - loss: 0.4010 - accuracy: 0.8127
Epoch 77/100
94/94 [==============================] - 0s 1ms/step - loss: 0.4047 - accuracy: 0.8087
Epoch 78/100
94/94 [==============================] - 0s 1ms/step - loss: 0.4034 - accuracy: 0.8120
Epoch 79/100
94/94 [==============================] - 0s 1ms/step - loss: 0.4006 - accuracy: 0.8144
Epoch 80/100
94/94 [==============================] - 0s 1ms/step - loss: 0.4017 - accuracy: 0.8120
Epoch 81/100
94/94 [==============================] - 0s 1ms/step - loss: 0.4011 - accuracy: 0.8174
Epoch 82/100
94/94 [==============================] - 0s 1ms/step - loss: 0.4000 - accuracy: 0.8154
Epoch 83/100
94/94 [==============================] - 0s 2ms/step - loss: 0.4020 - accuracy: 0.8137
Epoch 84/100
94/94 [==============================] - 0s 1ms/step - loss: 0.3970 - accuracy: 0.8197
Epoch 85/100
94/94 [==============================] - 0s 1ms/step - loss: 0.3997 - accuracy: 0.8191
Epoch 86/100
94/94 [==============================] - 0s 1ms/step - loss: 0.3959 - accuracy: 0.8197
Epoch 87/100
94/94 [==============================] - 0s 2ms/step - loss: 0.3972 - accuracy: 0.8231
Epoch 88/100
94/94 [==============================] - 0s 1ms/step - loss: 0.3968 - accuracy: 0.8181
Epoch 89/100
94/94 [==============================] - 0s 1ms/step - loss: 0.3956 - accuracy: 0.8204
Epoch 90/100
94/94 [==============================] - 0s 1ms/step - loss: 0.3937 - accuracy: 0.8194
Epoch 91/100
94/94 [==============================] - 0s 1ms/step - loss: 0.3938 - accuracy: 0.8164
Epoch 92/100
94/94 [==============================] - 0s 1ms/step - loss: 0.3957 - accuracy: 0.8181
Epoch 93/100
94/94 [==============================] - 0s 1ms/step - loss: 0.3945 - accuracy: 0.8171
Epoch 94/100
94/94 [==============================] - 0s 1ms/step - loss: 0.3942 - accuracy: 0.8204
Epoch 95/100
94/94 [==============================] - 0s 1ms/step - loss: 0.3920 - accuracy: 0.8211
Epoch 96/100
94/94 [==============================] - 0s 1ms/step - loss: 0.3928 - accuracy: 0.8217
Epoch 97/100
94/94 [==============================] - 0s 1ms/step - loss: 0.3895 - accuracy: 0.8187
Epoch 98/100
94/94 [==============================] - 0s 1ms/step - loss: 0.3917 - accuracy: 0.8214
Epoch 99/100
94/94 [==============================] - 0s 1ms/step - loss: 0.3912 - accuracy: 0.8241
Epoch 100/100
94/94 [==============================] - 0s 1ms/step - loss: 0.3897 - accuracy: 0.8258
24/24 [==============================] - 0s 1ms/step - loss: 0.5567 - accuracy: 0.7406
[0.556694746017456, 0.740641713142395]
Classification Report:
precision recall f1-score support
0 0.73 0.76 0.75 374
1 0.75 0.72 0.74 374
accuracy 0.74 748
macro avg 0.74 0.74 0.74 748
weighted avg 0.74 0.74 0.74 748
###Markdown
Method 2: Oversampling
###Code
count_class_0, count_class_1
df_class_1_over = df_class_1.sample(count_class_0, replace=True)
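# Sample the minority class with replacement until it matches the majority-class count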
df_test_over = pd.concat([df_class_0, df_class_1_over], axis= 0)
print("Random over-sampling")
print(df_test_over.Churn.value_counts())
X = df_test_over.drop('Churn', axis='columns')
y = df_test_over['Churn']
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=15, stratify=y)
y_preds = ANN(X_train, y_train, X_test, y_test, 'binary_crossentropy', -1)
###Output
Epoch 1/100
259/259 [==============================] - 1s 1ms/step - loss: 0.5435 - accuracy: 0.7271
Epoch 2/100
259/259 [==============================] - 0s 1ms/step - loss: 0.4894 - accuracy: 0.7656
Epoch 3/100
259/259 [==============================] - 0s 1ms/step - loss: 0.4828 - accuracy: 0.7677
Epoch 4/100
259/259 [==============================] - 0s 1ms/step - loss: 0.4802 - accuracy: 0.7692
Epoch 5/100
259/259 [==============================] - 0s 1ms/step - loss: 0.4761 - accuracy: 0.7724
Epoch 6/100
259/259 [==============================] - 0s 1ms/step - loss: 0.4720 - accuracy: 0.7709
Epoch 7/100
259/259 [==============================] - 0s 1ms/step - loss: 0.4697 - accuracy: 0.7755
Epoch 8/100
259/259 [==============================] - 0s 1ms/step - loss: 0.4656 - accuracy: 0.7749
Epoch 9/100
259/259 [==============================] - 0s 1ms/step - loss: 0.4634 - accuracy: 0.7793
Epoch 10/100
259/259 [==============================] - 0s 1ms/step - loss: 0.4604 - accuracy: 0.7817
Epoch 11/100
259/259 [==============================] - 0s 1ms/step - loss: 0.4581 - accuracy: 0.7835
Epoch 12/100
259/259 [==============================] - 0s 1ms/step - loss: 0.4549 - accuracy: 0.7849
Epoch 13/100
259/259 [==============================] - 0s 1ms/step - loss: 0.4541 - accuracy: 0.7850
Epoch 14/100
259/259 [==============================] - 0s 1ms/step - loss: 0.4516 - accuracy: 0.7849
Epoch 15/100
259/259 [==============================] - 0s 1ms/step - loss: 0.4499 - accuracy: 0.7866
Epoch 16/100
259/259 [==============================] - 0s 1ms/step - loss: 0.4479 - accuracy: 0.7850
Epoch 17/100
259/259 [==============================] - 0s 1ms/step - loss: 0.4453 - accuracy: 0.7879
Epoch 18/100
259/259 [==============================] - 0s 1ms/step - loss: 0.4433 - accuracy: 0.7890
Epoch 19/100
259/259 [==============================] - 0s 1ms/step - loss: 0.4419 - accuracy: 0.7909
Epoch 20/100
259/259 [==============================] - 0s 1ms/step - loss: 0.4414 - accuracy: 0.7916
Epoch 21/100
259/259 [==============================] - 0s 1ms/step - loss: 0.4385 - accuracy: 0.7942
Epoch 22/100
259/259 [==============================] - 0s 1ms/step - loss: 0.4363 - accuracy: 0.7943
Epoch 23/100
259/259 [==============================] - 0s 1ms/step - loss: 0.4358 - accuracy: 0.7912
Epoch 24/100
259/259 [==============================] - 0s 1ms/step - loss: 0.4344 - accuracy: 0.7943
Epoch 25/100
259/259 [==============================] - 0s 1ms/step - loss: 0.4323 - accuracy: 0.7949
Epoch 26/100
259/259 [==============================] - 0s 1ms/step - loss: 0.4321 - accuracy: 0.7948
Epoch 27/100
259/259 [==============================] - 0s 1ms/step - loss: 0.4296 - accuracy: 0.7995
Epoch 28/100
259/259 [==============================] - 0s 1ms/step - loss: 0.4293 - accuracy: 0.7988
Epoch 29/100
259/259 [==============================] - 0s 1ms/step - loss: 0.4277 - accuracy: 0.7996
Epoch 30/100
259/259 [==============================] - 0s 1ms/step - loss: 0.4262 - accuracy: 0.8011
Epoch 31/100
259/259 [==============================] - 0s 1ms/step - loss: 0.4253 - accuracy: 0.8024
Epoch 32/100
259/259 [==============================] - 0s 1ms/step - loss: 0.4244 - accuracy: 0.8023
Epoch 33/100
259/259 [==============================] - 0s 1ms/step - loss: 0.4231 - accuracy: 0.8039
Epoch 34/100
259/259 [==============================] - 0s 1ms/step - loss: 0.4214 - accuracy: 0.8056
Epoch 35/100
259/259 [==============================] - 0s 1ms/step - loss: 0.4201 - accuracy: 0.8018
Epoch 36/100
259/259 [==============================] - 0s 1ms/step - loss: 0.4184 - accuracy: 0.8065
Epoch 37/100
259/259 [==============================] - 0s 1ms/step - loss: 0.4177 - accuracy: 0.8062
Epoch 38/100
259/259 [==============================] - 0s 1ms/step - loss: 0.4165 - accuracy: 0.8075
Epoch 39/100
259/259 [==============================] - 0s 1ms/step - loss: 0.4145 - accuracy: 0.8052
Epoch 40/100
259/259 [==============================] - 0s 1ms/step - loss: 0.4125 - accuracy: 0.8082
Epoch 41/100
259/259 [==============================] - 0s 1ms/step - loss: 0.4121 - accuracy: 0.8092
Epoch 42/100
259/259 [==============================] - 0s 1ms/step - loss: 0.4112 - accuracy: 0.8087
Epoch 43/100
259/259 [==============================] - 0s 1ms/step - loss: 0.4098 - accuracy: 0.8091
Epoch 44/100
259/259 [==============================] - 0s 1ms/step - loss: 0.4077 - accuracy: 0.8125
Epoch 45/100
259/259 [==============================] - 0s 1ms/step - loss: 0.4069 - accuracy: 0.8126
Epoch 46/100
259/259 [==============================] - 0s 1ms/step - loss: 0.4061 - accuracy: 0.8130
Epoch 47/100
259/259 [==============================] - 0s 1ms/step - loss: 0.4048 - accuracy: 0.8148
Epoch 48/100
259/259 [==============================] - 0s 1ms/step - loss: 0.4039 - accuracy: 0.8133
Epoch 49/100
259/259 [==============================] - 0s 1ms/step - loss: 0.4016 - accuracy: 0.8160
Epoch 50/100
259/259 [==============================] - 0s 1ms/step - loss: 0.4005 - accuracy: 0.8156
Epoch 51/100
259/259 [==============================] - 0s 1ms/step - loss: 0.3996 - accuracy: 0.8159
Epoch 52/100
259/259 [==============================] - 0s 1ms/step - loss: 0.3985 - accuracy: 0.8172
Epoch 53/100
259/259 [==============================] - 0s 1ms/step - loss: 0.3977 - accuracy: 0.8191
Epoch 54/100
259/259 [==============================] - 0s 1ms/step - loss: 0.3966 - accuracy: 0.8195
Epoch 55/100
259/259 [==============================] - 0s 1ms/step - loss: 0.3956 - accuracy: 0.8200
Epoch 56/100
259/259 [==============================] - 0s 1ms/step - loss: 0.3932 - accuracy: 0.8190
Epoch 57/100
259/259 [==============================] - 0s 1ms/step - loss: 0.3939 - accuracy: 0.8196
Epoch 58/100
259/259 [==============================] - 0s 1ms/step - loss: 0.3923 - accuracy: 0.8196
Epoch 59/100
259/259 [==============================] - 0s 1ms/step - loss: 0.3916 - accuracy: 0.8228
Epoch 60/100
259/259 [==============================] - 0s 1ms/step - loss: 0.3909 - accuracy: 0.8217
Epoch 61/100
259/259 [==============================] - 0s 1ms/step - loss: 0.3886 - accuracy: 0.8232
Epoch 62/100
259/259 [==============================] - 0s 1ms/step - loss: 0.3879 - accuracy: 0.8246
Epoch 63/100
259/259 [==============================] - 0s 1ms/step - loss: 0.3878 - accuracy: 0.8260
Epoch 64/100
259/259 [==============================] - 0s 1ms/step - loss: 0.3876 - accuracy: 0.8262
Epoch 65/100
259/259 [==============================] - 0s 1ms/step - loss: 0.3864 - accuracy: 0.8252
Epoch 66/100
259/259 [==============================] - 0s 1ms/step - loss: 0.3849 - accuracy: 0.8264
Epoch 67/100
259/259 [==============================] - 0s 1ms/step - loss: 0.3834 - accuracy: 0.8269
Epoch 68/100
259/259 [==============================] - 0s 1ms/step - loss: 0.3830 - accuracy: 0.8281
Epoch 69/100
259/259 [==============================] - 0s 1ms/step - loss: 0.3820 - accuracy: 0.8264
Epoch 70/100
259/259 [==============================] - 0s 1ms/step - loss: 0.3815 - accuracy: 0.8318
Epoch 71/100
259/259 [==============================] - 0s 1ms/step - loss: 0.3822 - accuracy: 0.8260
Epoch 72/100
259/259 [==============================] - 0s 1ms/step - loss: 0.3797 - accuracy: 0.8274
Epoch 73/100
259/259 [==============================] - 0s 1ms/step - loss: 0.3785 - accuracy: 0.8331
Epoch 74/100
259/259 [==============================] - 0s 1ms/step - loss: 0.3790 - accuracy: 0.8272
Epoch 75/100
259/259 [==============================] - 0s 1ms/step - loss: 0.3774 - accuracy: 0.8321
Epoch 76/100
259/259 [==============================] - 0s 1ms/step - loss: 0.3763 - accuracy: 0.8315
Epoch 77/100
259/259 [==============================] - 0s 1ms/step - loss: 0.3748 - accuracy: 0.8310
Epoch 78/100
259/259 [==============================] - 0s 1ms/step - loss: 0.3749 - accuracy: 0.8335
Epoch 79/100
259/259 [==============================] - 0s 1ms/step - loss: 0.3740 - accuracy: 0.8355
Epoch 80/100
259/259 [==============================] - 0s 1ms/step - loss: 0.3734 - accuracy: 0.8345
Epoch 81/100
259/259 [==============================] - 0s 1ms/step - loss: 0.3719 - accuracy: 0.8350
Epoch 82/100
259/259 [==============================] - 0s 1ms/step - loss: 0.3710 - accuracy: 0.8357
Epoch 83/100
259/259 [==============================] - 0s 1ms/step - loss: 0.3710 - accuracy: 0.8362
Epoch 84/100
259/259 [==============================] - 0s 1ms/step - loss: 0.3714 - accuracy: 0.8358
Epoch 85/100
259/259 [==============================] - 0s 1ms/step - loss: 0.3686 - accuracy: 0.8384
Epoch 86/100
259/259 [==============================] - 0s 1ms/step - loss: 0.3689 - accuracy: 0.8395
Epoch 87/100
259/259 [==============================] - 0s 1ms/step - loss: 0.3676 - accuracy: 0.8386
Epoch 88/100
259/259 [==============================] - 0s 1ms/step - loss: 0.3667 - accuracy: 0.8398
Epoch 89/100
259/259 [==============================] - 0s 1ms/step - loss: 0.3660 - accuracy: 0.8407
Epoch 90/100
259/259 [==============================] - 0s 1ms/step - loss: 0.3659 - accuracy: 0.8392
Epoch 91/100
259/259 [==============================] - 0s 1ms/step - loss: 0.3651 - accuracy: 0.8413
Epoch 92/100
259/259 [==============================] - 0s 1ms/step - loss: 0.3647 - accuracy: 0.8421
Epoch 93/100
259/259 [==============================] - 0s 1ms/step - loss: 0.3639 - accuracy: 0.8438
Epoch 94/100
259/259 [==============================] - 0s 1ms/step - loss: 0.3649 - accuracy: 0.8414
Epoch 95/100
259/259 [==============================] - 0s 1ms/step - loss: 0.3625 - accuracy: 0.8429
Epoch 96/100
259/259 [==============================] - 0s 2ms/step - loss: 0.3622 - accuracy: 0.8426
Epoch 97/100
259/259 [==============================] - 0s 1ms/step - loss: 0.3617 - accuracy: 0.8456
Epoch 98/100
259/259 [==============================] - 0s 1ms/step - loss: 0.3600 - accuracy: 0.8427
Epoch 99/100
259/259 [==============================] - 0s 1ms/step - loss: 0.3604 - accuracy: 0.8436
Epoch 100/100
259/259 [==============================] - 0s 1ms/step - loss: 0.3602 - accuracy: 0.8455
65/65 [==============================] - 0s 940us/step - loss: 0.4594 - accuracy: 0.7928
[0.4593632221221924, 0.7928364276885986]
Classification Report:
precision recall f1-score support
0 0.83 0.74 0.78 1033
1 0.77 0.85 0.80 1033
accuracy 0.79 2066
macro avg 0.80 0.79 0.79 2066
weighted avg 0.80 0.79 0.79 2066
###Markdown
Method 3: SMOTE (Synthetic Minority Oversampling Technique)
###Code
X = df2.drop('Churn', axis='columns')
y = df2['Churn']
!pip install imbalanced-learn
from imblearn.over_sampling import SMOTE
smote = SMOTE(sampling_strategy='minority')
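# 'minority' resamples only the minority class, generating synthetic points until the classes are balanced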
X_sm, y_sm = smote.fit_resample(X, y)
y_sm.value_counts()
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X_sm, y_sm, test_size=0.2, random_state=15, stratify=y_sm)
y_train.value_counts()
y_test.value_counts()
y_preds = ANN(X_train, y_train, X_test, y_test, 'binary_crossentropy', -1)
###Output
Epoch 1/100
259/259 [==============================] - 1s 1ms/step - loss: 0.5521 - accuracy: 0.7285
Epoch 2/100
259/259 [==============================] - 0s 1ms/step - loss: 0.4810 - accuracy: 0.7712
Epoch 3/100
259/259 [==============================] - 0s 1ms/step - loss: 0.4695 - accuracy: 0.7736
Epoch 4/100
259/259 [==============================] - 0s 1ms/step - loss: 0.4646 - accuracy: 0.7785
Epoch 5/100
259/259 [==============================] - 0s 1ms/step - loss: 0.4591 - accuracy: 0.7791
Epoch 6/100
259/259 [==============================] - 0s 1ms/step - loss: 0.4540 - accuracy: 0.7851
Epoch 7/100
259/259 [==============================] - 0s 1ms/step - loss: 0.4494 - accuracy: 0.7866
Epoch 8/100
259/259 [==============================] - 0s 1ms/step - loss: 0.4452 - accuracy: 0.7881
Epoch 9/100
259/259 [==============================] - 0s 1ms/step - loss: 0.4400 - accuracy: 0.7944
Epoch 10/100
259/259 [==============================] - 0s 1ms/step - loss: 0.4362 - accuracy: 0.7956
Epoch 11/100
259/259 [==============================] - 0s 1ms/step - loss: 0.4319 - accuracy: 0.7989
Epoch 12/100
259/259 [==============================] - 0s 1ms/step - loss: 0.4286 - accuracy: 0.7972
Epoch 13/100
259/259 [==============================] - 0s 1ms/step - loss: 0.4252 - accuracy: 0.8010
Epoch 14/100
259/259 [==============================] - 0s 1ms/step - loss: 0.4210 - accuracy: 0.8048
Epoch 15/100
259/259 [==============================] - 0s 1ms/step - loss: 0.4168 - accuracy: 0.8068
Epoch 16/100
259/259 [==============================] - 0s 1ms/step - loss: 0.4138 - accuracy: 0.8128
Epoch 17/100
259/259 [==============================] - 0s 1ms/step - loss: 0.4109 - accuracy: 0.8128
Epoch 18/100
259/259 [==============================] - 0s 1ms/step - loss: 0.4076 - accuracy: 0.8150
Epoch 19/100
259/259 [==============================] - 0s 1ms/step - loss: 0.4044 - accuracy: 0.8177
Epoch 20/100
259/259 [==============================] - 0s 1ms/step - loss: 0.4022 - accuracy: 0.8191
Epoch 21/100
259/259 [==============================] - 0s 1ms/step - loss: 0.4005 - accuracy: 0.8191
Epoch 22/100
259/259 [==============================] - 0s 1ms/step - loss: 0.4025 - accuracy: 0.8214
Epoch 23/100
259/259 [==============================] - 0s 1ms/step - loss: 0.3934 - accuracy: 0.8234
Epoch 24/100
259/259 [==============================] - 0s 1ms/step - loss: 0.3922 - accuracy: 0.8258
Epoch 25/100
259/259 [==============================] - 0s 1ms/step - loss: 0.3900 - accuracy: 0.8272
Epoch 26/100
259/259 [==============================] - 0s 1ms/step - loss: 0.3898 - accuracy: 0.8255
Epoch 27/100
259/259 [==============================] - 0s 1ms/step - loss: 0.3895 - accuracy: 0.8242
Epoch 28/100
259/259 [==============================] - 0s 1ms/step - loss: 0.3856 - accuracy: 0.8274
Epoch 29/100
259/259 [==============================] - 0s 1ms/step - loss: 0.3842 - accuracy: 0.8295
Epoch 30/100
259/259 [==============================] - 0s 1ms/step - loss: 0.3829 - accuracy: 0.8320
Epoch 31/100
259/259 [==============================] - 0s 1ms/step - loss: 0.3807 - accuracy: 0.8327
Epoch 32/100
259/259 [==============================] - 0s 1ms/step - loss: 0.3795 - accuracy: 0.8280
Epoch 33/100
259/259 [==============================] - 0s 1ms/step - loss: 0.3771 - accuracy: 0.8328
Epoch 34/100
259/259 [==============================] - 0s 1ms/step - loss: 0.3751 - accuracy: 0.8322
Epoch 35/100
259/259 [==============================] - 0s 1ms/step - loss: 0.3767 - accuracy: 0.8343
Epoch 36/100
259/259 [==============================] - 0s 1ms/step - loss: 0.3770 - accuracy: 0.8324
Epoch 37/100
259/259 [==============================] - 0s 1ms/step - loss: 0.3755 - accuracy: 0.8335
Epoch 38/100
259/259 [==============================] - 0s 1ms/step - loss: 0.3721 - accuracy: 0.8352
Epoch 39/100
259/259 [==============================] - 0s 1ms/step - loss: 0.3725 - accuracy: 0.8344
Epoch 40/100
259/259 [==============================] - 0s 1ms/step - loss: 0.3707 - accuracy: 0.8356
Epoch 41/100
259/259 [==============================] - 0s 1ms/step - loss: 0.3713 - accuracy: 0.8361
Epoch 42/100
259/259 [==============================] - 0s 1ms/step - loss: 0.3682 - accuracy: 0.8346
Epoch 43/100
259/259 [==============================] - 0s 1ms/step - loss: 0.3694 - accuracy: 0.8361
Epoch 44/100
259/259 [==============================] - 0s 1ms/step - loss: 0.3671 - accuracy: 0.8400
Epoch 45/100
259/259 [==============================] - 0s 1ms/step - loss: 0.3651 - accuracy: 0.8368
Epoch 46/100
259/259 [==============================] - 0s 1ms/step - loss: 0.3648 - accuracy: 0.8385
Epoch 47/100
259/259 [==============================] - 0s 1ms/step - loss: 0.3654 - accuracy: 0.8404
Epoch 48/100
259/259 [==============================] - 0s 1ms/step - loss: 0.3632 - accuracy: 0.8398
Epoch 49/100
259/259 [==============================] - 0s 1ms/step - loss: 0.3623 - accuracy: 0.8397
Epoch 50/100
259/259 [==============================] - 0s 1ms/step - loss: 0.3612 - accuracy: 0.8425
Epoch 51/100
259/259 [==============================] - 0s 1ms/step - loss: 0.3614 - accuracy: 0.8414
Epoch 52/100
259/259 [==============================] - 0s 1ms/step - loss: 0.3593 - accuracy: 0.8432
Epoch 53/100
259/259 [==============================] - 0s 1ms/step - loss: 0.3583 - accuracy: 0.8439
Epoch 54/100
259/259 [==============================] - 0s 1ms/step - loss: 0.3609 - accuracy: 0.8427
Epoch 55/100
259/259 [==============================] - 0s 1ms/step - loss: 0.3593 - accuracy: 0.8416
Epoch 56/100
259/259 [==============================] - 0s 1ms/step - loss: 0.3571 - accuracy: 0.8439
Epoch 57/100
259/259 [==============================] - 0s 1ms/step - loss: 0.3595 - accuracy: 0.8400
Epoch 58/100
259/259 [==============================] - 0s 1ms/step - loss: 0.3567 - accuracy: 0.8435
Epoch 59/100
259/259 [==============================] - 0s 1ms/step - loss: 0.3556 - accuracy: 0.8449
Epoch 60/100
259/259 [==============================] - 0s 1ms/step - loss: 0.3560 - accuracy: 0.8441
Epoch 61/100
259/259 [==============================] - 0s 1ms/step - loss: 0.3569 - accuracy: 0.8448
Epoch 62/100
259/259 [==============================] - 0s 1ms/step - loss: 0.3549 - accuracy: 0.8450
Epoch 63/100
259/259 [==============================] - 0s 1ms/step - loss: 0.3551 - accuracy: 0.8446
Epoch 64/100
259/259 [==============================] - 0s 1ms/step - loss: 0.3523 - accuracy: 0.8455
Epoch 65/100
259/259 [==============================] - 0s 1ms/step - loss: 0.3522 - accuracy: 0.8464
Epoch 66/100
259/259 [==============================] - 0s 1ms/step - loss: 0.3526 - accuracy: 0.8492
Epoch 67/100
259/259 [==============================] - 0s 1ms/step - loss: 0.3506 - accuracy: 0.8492
Epoch 68/100
259/259 [==============================] - 0s 1ms/step - loss: 0.3535 - accuracy: 0.8456
Epoch 69/100
259/259 [==============================] - 0s 1ms/step - loss: 0.3503 - accuracy: 0.8489
Epoch 70/100
259/259 [==============================] - 0s 1ms/step - loss: 0.3501 - accuracy: 0.8487
Epoch 71/100
259/259 [==============================] - 0s 1ms/step - loss: 0.3505 - accuracy: 0.8477
Epoch 72/100
259/259 [==============================] - 0s 1ms/step - loss: 0.3471 - accuracy: 0.8485
Epoch 73/100
259/259 [==============================] - 0s 1ms/step - loss: 0.3492 - accuracy: 0.8521
Epoch 74/100
259/259 [==============================] - 0s 1ms/step - loss: 0.3477 - accuracy: 0.8472
Epoch 75/100
259/259 [==============================] - 0s 1ms/step - loss: 0.3476 - accuracy: 0.8496
Epoch 76/100
259/259 [==============================] - 0s 1ms/step - loss: 0.3489 - accuracy: 0.8476
Epoch 77/100
259/259 [==============================] - 0s 1ms/step - loss: 0.3464 - accuracy: 0.8492
Epoch 78/100
259/259 [==============================] - 0s 1ms/step - loss: 0.3464 - accuracy: 0.8471
Epoch 79/100
259/259 [==============================] - 0s 1ms/step - loss: 0.3454 - accuracy: 0.8504
Epoch 80/100
259/259 [==============================] - 0s 1ms/step - loss: 0.3442 - accuracy: 0.8496
Epoch 81/100
259/259 [==============================] - 0s 1ms/step - loss: 0.3450 - accuracy: 0.8492
Epoch 82/100
259/259 [==============================] - 0s 1ms/step - loss: 0.3438 - accuracy: 0.8513
Epoch 83/100
259/259 [==============================] - 0s 1ms/step - loss: 0.3432 - accuracy: 0.8515
Epoch 84/100
259/259 [==============================] - 0s 1ms/step - loss: 0.3442 - accuracy: 0.8511
Epoch 85/100
259/259 [==============================] - 0s 1ms/step - loss: 0.3435 - accuracy: 0.8479
Epoch 86/100
259/259 [==============================] - 0s 1ms/step - loss: 0.3412 - accuracy: 0.8511
Epoch 87/100
259/259 [==============================] - 0s 1ms/step - loss: 0.3443 - accuracy: 0.8496
Epoch 88/100
259/259 [==============================] - 0s 1ms/step - loss: 0.3422 - accuracy: 0.8493
Epoch 89/100
259/259 [==============================] - 0s 1ms/step - loss: 0.3423 - accuracy: 0.8475
Epoch 90/100
259/259 [==============================] - 0s 1ms/step - loss: 0.3441 - accuracy: 0.8485
Epoch 91/100
259/259 [==============================] - 0s 1ms/step - loss: 0.3407 - accuracy: 0.8547
Epoch 92/100
259/259 [==============================] - 0s 1ms/step - loss: 0.3418 - accuracy: 0.8495
Epoch 93/100
259/259 [==============================] - 0s 1ms/step - loss: 0.3387 - accuracy: 0.8541
Epoch 94/100
259/259 [==============================] - 0s 1ms/step - loss: 0.3401 - accuracy: 0.8534
Epoch 95/100
259/259 [==============================] - 0s 1ms/step - loss: 0.3390 - accuracy: 0.8513
Epoch 96/100
259/259 [==============================] - 0s 1ms/step - loss: 0.3392 - accuracy: 0.8545
Epoch 97/100
259/259 [==============================] - 0s 1ms/step - loss: 0.3381 - accuracy: 0.8542
Epoch 98/100
259/259 [==============================] - 0s 1ms/step - loss: 0.3372 - accuracy: 0.8533
Epoch 99/100
259/259 [==============================] - 0s 1ms/step - loss: 0.3379 - accuracy: 0.8528
Epoch 100/100
259/259 [==============================] - 0s 1ms/step - loss: 0.3367 - accuracy: 0.8551
65/65 [==============================] - 0s 943us/step - loss: 0.4388 - accuracy: 0.8025
[0.43879789113998413, 0.8025169372558594]
Classification Report:
precision recall f1-score support
0 0.82 0.77 0.80 1033
1 0.79 0.83 0.81 1033
accuracy 0.80 2066
macro avg 0.80 0.80 0.80 2066
weighted avg 0.80 0.80 0.80 2066
###Markdown
Method 4: Use of Ensemble with undersampling
###Code
df2.Churn.value_counts()
X = df2.drop('Churn', axis='columns')
y = df2['Churn']
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=15, stratify=y)
y_train.value_counts()
df3 = X_train.copy()
df3['Churn'] = y_train
df3_class0 = df3[df3.Churn==0]
df3_class1 = df3[df3.Churn==1]
def get_train_batch(df_majority, df_minority, start, end):
df_train = pd.concat([df_majority[start:end], df_minority], axis=0)
X_train = df_train.drop('Churn', axis = 'columns')
y_train = df_train.Churn
return X_train, y_train
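# Train three models, each on a different slice of the majority class combined with the full minority class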
X_train, y_train = get_train_batch(df3_class0, df3_class1, 0, 1495)
y_pred1 = ANN(X_train, y_train, X_test, y_test, 'binary_crossentropy', -1)
X_train, y_train = get_train_batch(df3_class0, df3_class1, 1495, 2990)
y_pred2 = ANN(X_train, y_train, X_test, y_test, 'binary_crossentropy', -1)
X_train, y_train = get_train_batch(df3_class0, df3_class1, 2990, 4130)
y_pred3 = ANN(X_train, y_train, X_test, y_test, 'binary_crossentropy', -1)
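# Combine the three sets of predictions by majority vote: predict churn (1) only when at least two models agree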
y_pred_final = y_pred1.copy()
for i in range(len(y_pred1)):
n_ones = y_pred1[i] + y_pred2[i] + y_pred3[i]
if n_ones > 1:
y_pred_final[i] = 1
else:
y_pred_final[i] = 0
print(classification_report(y_test, y_pred_final))
###Output
precision recall f1-score support
0 0.91 0.64 0.75 1033
1 0.45 0.82 0.58 374
accuracy 0.69 1407
macro avg 0.68 0.73 0.67 1407
weighted avg 0.79 0.69 0.71 1407
|
Sample/Day_33_Sample.ipynb | ###Markdown
* Key points:
  * Know the different types of hypothesis tests, so you can pin down the question you actually want to answer
  * Understand the more advanced side of hypothesis testing, in particular the types of error a test can make

Types of hypothesis tests

* Classified by the range specified in $H_1$:
  * Right-tailed test: the store manager believes the brand's market share is at least 12%, $H_1: \mu < 0.12$
  * Two-tailed test: the store manager believes the brand's market share is exactly 12%, $H_1: \mu \neq 0.12$
  * Left-tailed test: the store manager believes the brand's market share is at most 12%, $H_1: \mu > 0.12$
* Classified by the structure of the samples (illustrated with tests of means):

| Type | Data characteristics | Example situation | When to use | Hypotheses |
|------|----------------------|-------------------|-------------|------------|
| Single-sample test | The experiment has only one group | A store manager believes the brand's market share is at most 12%; 300 consumers are surveyed and 31 say they like the brand | Check whether the statistic computed from the collected sample is higher than, lower than, or equal to a specific value | $H_0: \mu \leq 0.12$, $H_1: \mu > 0.12$ |
| Two-sample test (independent) | The experiment is split into two groups | Is there a difference in salary between men and women? Is there a difference between the time fathers and mothers spend with their children each day? | Compare two groups or two choices to see which is better | $H_0: \mu_{\text{female}} = \mu_{\text{male}}$, $H_1: \mu_{\text{female}} \neq \mu_{\text{male}}$ |
| Paired-sample test (dependent) | Two groups, but the two are matched or measured before/after | Paired samples: do husbands and wives differ in annual income? Repeated measures: for people in a weight-loss trial, does weight differ before the trial and after 3 months of regular exercise? | Compare related measurements taken on the same (or matched) subjects | $H_0: D \geq 0$, $H_1: D > 0$, where $D = X_{\text{husband}} - X_{\text{wife}}$ |

* Classified by the purpose of the test (two-sample case as an example):

| Type | Example situation | Hypotheses |
|------|-------------------|------------|
| Test of means | Is the average waist circumference of Taiwanese men larger than that of women? | $H_0: \mu_{\text{male}} - \mu_{\text{female}} \leq 0$, $H_1: \mu_{\text{male}} - \mu_{\text{female}} > 0$ |
| Test of proportions | Two different email subject lines are sent: 50 emails with a generic opening and 50 with a personalized opening. Is the open rate of the personalized emails higher than that of the generic ones? | $H_0: p_{\text{personalized}} - p_{\text{generic}} \leq 0$, $H_1: p_{\text{personalized}} - p_{\text{generic}} > 0$ |

Important sampling distributions

* z distribution
  * If $X$ is a normal random variable with mean $\mu$ and standard deviation $\sigma$, and $X_1, \ldots, X_n$ are drawn from $X \sim N(\mu, \sigma^2)$, then $Z = \frac{\bar{X}_n - \mu}{\sigma / \sqrt{n}}$
* t distribution
  * Student's t-distribution, usually just called the t distribution
  * Used when the population standard deviation $\sigma$ is unknown; whether the sample is large or small, it can be used to estimate the mean of a normally distributed population with unknown variance
  * If $X$ is normal but $\sigma$ is unknown, estimate $\sigma$ with $S_n$, giving $T = \frac{\bar{X}_n - \mu}{S_n / \sqrt{n}}$
  * At 30 degrees of freedom the t distribution is already very close to the normal distribution
  * The larger the sample, the higher the degrees of freedom
* Chi-square distribution: a transformation of the standard normal
  * If $Z$ is standard normal, then $Z^2$ is chi-square with 1 degree of freedom: $Z \sim N(0,1),\ Y = Z^2 \to Y \sim \chi^2(1)$
  * Properties
    * The sum of $n$ chi-square variables, each with $\nu$ degrees of freedom, is chi-square with $n\nu$ degrees of freedom: $Y_i \sim \chi^2(\nu) \to \sum_{i=1}^{n} Y_i \sim \chi^2(n\nu)$
    * The sum of the squares of $n$ standard normal variables is chi-square with $n$ degrees of freedom: $Z_i \sim N(0,1) \to \sum_{i=1}^{n} Z_i^2 \sim \chi^2(n)$
* F distribution
* Which distribution to use:

| Population std. dev. | Sample size | Distribution |
|----------------------|:-----------:|:------------:|
| Known | Small sample | z distribution |
| Known | Large sample | z distribution |
| Unknown | Small sample (fewer than 30) | t distribution |
| Unknown | Large sample | t or z distribution (either works) |

Types of error in hypothesis testing

| | Truly positive | Truly negative | Total |
|---|---|---|---|
| Test positive | True positive | False positive (Type I, $\alpha$) | Total testing positive |
| Test negative | False negative (Type II, $\beta$) | True negative | Total testing negative |
| Total | Total truly positive | Total truly negative | Total |

* $\alpha$: Type I error, also called a false positive: $H_0$ is true, but after running the experiment we reject $H_0$. It is also the significance level; the smaller $\alpha$ is set, the lower the probability of such a false rejection (i.e. the more accurate we want the test to be).
* $\beta$: Type II error, also called a false negative: $H_0$ is false, but after running the experiment we find no evidence to reject $H_0$.
* $1-\beta$: the power of the test, i.e. the ability to reject $H_0$ when $H_0$ is in fact false.
* Examples:
  * If a pregnancy test is applied to a woman who is not pregnant and the result is "pregnant" (positive), that is a Type I error.
  * If a pregnancy test is applied to a woman who is pregnant and the result is "not pregnant" (negative), that is a Type II error.
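As a concrete illustration of the single-sample proportion test in the table above, the short Python sketch below works through the store-manager example (31 of 300 surveyed consumers, testing $H_0: p \leq 0.12$ against $H_1: p > 0.12$ with a normal approximation). The 0.05 significance level and the use of `scipy` are choices made only for this illustration and are not part of the original notes.

```python
# One-sample proportion z-test (normal approximation), store-manager example.
# Illustrative sketch only: alpha = 0.05 and the use of scipy are assumptions of this example.
import numpy as np
from scipy import stats

count, nobs, p0 = 31, 300, 0.12        # observed likes, sample size, hypothesized proportion
p_hat = count / nobs                   # sample proportion
se = np.sqrt(p0 * (1 - p0) / nobs)     # standard error under H0
z = (p_hat - p0) / se                  # test statistic
p_value = 1 - stats.norm.cdf(z)        # right-tailed p-value for H1: p > 0.12

alpha = 0.05
print(f"p_hat = {p_hat:.4f}, z = {z:.3f}, p-value = {p_value:.3f}")
print("Reject H0" if p_value < alpha else "Fail to reject H0")
```

Here the sample proportion is about 0.103, below 0.12, so the right-tailed p-value is large and the test fails to reject $H_0$ at the 5% level.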
###Code
```maple
with(plots):
g1 := animatecurve((1/((2*Pi)^0.5))*exp((-0.5)*x^2), x = -5..5, frames = 80, color = red):
g2 := animatecurve((GAMMA(15.5)/GAMMA(15))*(1/Pi^0.5)*(1/30^0.5)*(1+x^2/30)^(-15.5), x = -5..5, frames = 80, color = black):
g3 := animatecurve((GAMMA(25.5)/GAMMA(25))*(1/Pi^0.5)*(1/50^0.5)*(1+x^2/50)^(-25.5), x = -5..5, frames = 80, color = green):
g4 := animatecurve((GAMMA(3)/GAMMA(2.5))*(1/Pi^0.5)*(1/5^0.5)*(1+x^2/5)^(-3), x = -5..5, frames = 80, color = blue):
display(g1, g2, g3, g4)
```
###Output
_____no_output_____ |
BCC_Calculations/Fe-Cu_Ni_Si/.ipynb_checkpoints/FeCu_simulation-checkpoint.ipynb | ###Markdown
Cu calculations
###Code
vu0 = 4.4447
vu2 = 2.6848
Dconv=1e-2
# Need to change the way we are dealing with mdbs to be able to change the pre-factors with consistent results.
predb0, enedb0 = np.ones(1)*np.exp(0.05), np.array([E_f_pdb])
# We'll measure every formation energy relative to the solute formation energy.
preS, eneS = np.ones(1), np.array([0.0])
# Next, interaction or the excess energies and pre-factors for solutes and dumbbells.
preSdb, eneSdb = np.ones(onsagercalculator.thermo.mixedstartindex), \
np.zeros(onsagercalculator.thermo.mixedstartindex)
# Now, we go over the necessary stars and assign interaction energies
for (key, index) in name_to_themo_star.items():
eneSdb[index] = name_to_Ef[key] - E_f_pdb
predb2, enedb2 = np.ones(1), np.array([E_f_mdb])
# Transition state energies - For omega0, omega2 and omega43, the first type is the Johnson jump,
# and the second one is the Rigid jump.
# Omega0 TS eneriges
preT0, eneT0 = Dconv*vu0*np.ones(1), np.array([E_f_pdb+0.335115123, E_f_pdb + 0.61091396, E_f_pdb+0.784315123])
# Omega2 TS energies
Nj2 = len(onsagercalculator.jnet2)
preT2, eneT2 = Dconv*vu2*np.ones(Nj2), np.array([ef_ts_2, ef_ts_2_rigid])
# Omega43 TS energies
preT43, eneT43 = Dconv*vu0*np.ones(1), np.array([ef_ts_43])
# Omega1 TS energies - need to be careful here
preT1 = Dconv*vu0*np.ones(len(onsagercalculator.jnet1))
eneT1 = np.array([eneT0[i] for i in onsagercalculator.om1types])
# Now, we go over the jumps that are provided and make the necessary changes
for (key, index) in jmpdict.items():
eneT1[index] = Jname_2_ef_ts[key]
eneT1[0] = 0.0
# print(eneT1)
data_Cu = {"puredb_data":(predb0, enedb0), "mixed_db_data":(predb2, enedb2), "omega0_data":(preT0, eneT0),
"omega2_data":(preT2, eneT2),"omega43_data":(preT43, eneT43), "omega1_data":(preT1, eneT1),
"S-db_interaction_data":(preSdb, eneSdb)}
# Then we calculate the transport coefficients
# Now, we set the temperatures
T_arr = temp
# 1b. Now get the beta*free energy values.
diff_aa_Cu = np.zeros(len(T_arr))
diff_ab_Cu = np.zeros(len(T_arr))
diff_bb_Cu = np.zeros(len(T_arr))
diff_bb_non_loc = np.zeros(len(T_arr))
D0_bb = {}
start = time.time()
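# For each temperature: convert the (pre-factor, energy) pairs to beta*free energies,
# then assemble the Onsager transport coefficients (uncorrelated + correlated parts)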
for i in tqdm(range(len(T_arr)), position=0, leave=True):
T = T_arr[i]
kT = kB*T
bFdb0, bFdb2, bFS, bFSdb, bFT0, bFT1, bFT2, bFT3, bFT4 = \
onsagercalculator.preene2betafree(kT, predb0, enedb0, preS, eneS, preSdb, eneSdb, predb2, enedb2,
preT0, eneT0, preT2, eneT2, preT1, eneT1, preT43, eneT43)
# bFdicts[i] = [bFdb0, bFdb2, bFS, bFSdb, bFT0, bFT1, bFT2, bFT3, bFT4]
# get the probabilities and other data from L_ij
L0bb, (L_uc_aa,L_c_aa), (L_uc_bb,L_c_bb), (L_uc_ab,L_c_ab), GF_total, GF20, betaFs, del_om, part_func,\
probs, omegas, stateprobs =\
onsagercalculator.L_ij(bFdb0, bFT0, bFdb2, bFT2, bFS, bFSdb, bFT1, bFT3, bFT4)
L_aa = L_uc_aa + L_c_aa
L_bb = L_uc_bb + L_c_bb
L_ab = L_uc_ab + L_c_ab
diff_aa_Cu[i] = L_aa[0][0]
diff_ab_Cu[i] = L_ab[0][0]
diff_bb_Cu[i] = L_bb[0][0]
diff_bb_non_loc[i] = L0bb[0][0]
D0_bb[i] = L0bb
print(time.time() - start)
plt.figure(figsize=(7,8))
plt.semilogy(1/T_arr, diff_ab_Cu/(diff_bb_non_loc), label="Calculated",
linewidth=4, ms=10)
plt.semilogy(np.array(1/np.array(temp)),np.array(dat), linewidth=2,
label="Schuler et. al.")
plt.xlabel(r"1/T (K$^{-1}$)", fontsize=18)
plt.ylabel(r"$\frac{L_{Cu-Fe_i}}{L_{Fe_i-Fe_i}}$", fontsize=30, rotation = 0, labelpad=50)
# plt.legend(loc="best", fontsize=16)
# plt.ticklabel_format(style='sci', axis='x', scilimits=(0,0))
plt.xticks(fontsize=16, rotation = 30)
plt.yticks(fontsize=16)
# plt.xlim(400, 1301)
plt.tight_layout()
plt.legend(loc='upper right',fontsize=14)
# plt.savefig("pdcr_Cu_Fe_log.png")
import h5py
with h5py.File("Cu_data.h5","w") as fl:
fl.create_dataset("diff_aa", data=diff_aa_Cu)
fl.create_dataset("diff_ab", data=diff_ab_Cu)
fl.create_dataset("diff_bb_nl", data=diff_bb_non_loc)
fl.create_dataset("diff_bb", data=diff_bb_Cu)
fl.create_dataset("Temp", data=temp)
# Now let's do the infinite temeperature limit
kT = np.inf
bFdb0, bFdb2, bFS, bFSdb, bFT0, bFT1, bFT2, bFT3, bFT4 = \
onsagercalculator.preene2betafree(kT, predb0, enedb0, preS, eneS, preSdb, eneSdb, predb2, enedb2,
preT0, eneT0, preT2, eneT2, preT1, eneT1, preT43, eneT43)
# bFdicts[i] = [bFdb0, bFdb2, bFS, bFSdb, bFT0, bFT1, bFT2, bFT3, bFT4]
# get the probabilities and other data from L_ij
L0bb, (L_uc_aa,L_c_aa), (L_uc_bb,L_c_bb), (L_uc_ab,L_c_ab), GF_total, GF20, betaFs, del_om, part_func,\
probs, omegas, stateprobs =\
onsagercalculator.L_ij(bFdb0, bFT0, bFdb2, bFT2, bFS, bFSdb, bFT1, bFT3, bFT4)
L_aa = L_uc_aa + L_c_aa
L_bb = L_uc_bb + L_c_bb
L_ab = L_uc_ab + L_c_ab
L_ab[0][0]/L_aa[0][0]
###Output
_____no_output_____ |
courses/machine_learning/deepdive2/image_classification/solutions/4_tpu_training.ipynb | ###Markdown
Transfer Learning on TPUsIn the previous notebook, we learned how to do transfer learning with [TensorFlow Hub](https://www.tensorflow.org/hub). In this notebook, we're going to kick up our training speed with [TPUs](https://www.tensorflow.org/guide/tpu). Learning Objectives1. Know how to set up a [TPU strategy](https://www.tensorflow.org/api_docs/python/tf/distribute/experimental/TPUStrategy?version=nightly) for training2. Know how to use a TensorFlow Hub Module when training on a TPU3. Know how to create and specify a TPU for trainingFirst things first. Configure the parameters below to match your own Google Cloud project details.
###Code
import os
os.environ["BUCKET"] = "your-bucket-here"
###Output
_____no_output_____
###Markdown
Packaging the ModelIn order to train on a TPU, we'll need to set up a python module for training. The skeleton for this has already been built out in `tpu_models` with the data processing functions from the previous lab copied into util.py.Similarly, the model building and training functions are pulled into model.py. This is almost entirely the same as before, except the hub module path is now a variable to be provided by the user. We'll get into why in a bit, but first, let's take a look at the new `task.py` file.We've added five command line arguments which are standard for cloud training of a TensorFlow model: `epochs`, `steps_per_epoch`, `train_path`, `eval_path`, and `job-dir`. There are two new arguments for TPU training: `tpu_address` and `hub_path``tpu_address` is going to be our TPU name as it appears in [Compute Engine Instances](console.cloud.google.com/compute/instances). We can specify this name with the [ctpu up](https://cloud.google.com/tpu/docs/ctpu-referenceup) command.`hub_path` is going to be a Google Cloud Storage path to a downloaded TensorFlow Hub module.The other big difference is some code to deploy our model on a TPU. To begin, we'll set up a [TPU Cluster Resolver](https://www.tensorflow.org/api_docs/python/tf/distribute/cluster_resolver/TPUClusterResolver), which will help tensorflow communicate with the hardware to set up workers for training ([more on TensorFlow Cluster Resolvers](https://www.tensorflow.org/api_docs/python/tf/distribute/cluster_resolver/ClusterResolver)). Once the resolver [connects to](https://www.tensorflow.org/api_docs/python/tf/config/experimental_connect_to_cluster) and [initializes](https://www.tensorflow.org/api_docs/python/tf/tpu/experimental/initialize_tpu_system) the TPU system, our Tensorflow Graphs can be initialized within a [TPU distribution strategy](https://www.tensorflow.org/api_docs/python/tf/distribute/experimental/TPUStrategy), allowing our TensorFlow code to take full advantage of the TPU hardware capabilities.**TODO 1: Set up a TPU strategy**
###Code
%%writefile tpu_models/trainer/task.py
import argparse
import json
import os
import sys
import tensorflow as tf
from . import model
from . import util
def _parse_arguments(argv):
"""Parses command-line arguments."""
parser = argparse.ArgumentParser()
parser.add_argument(
'--epochs',
help='The number of epochs to train',
type=int, default=5)
parser.add_argument(
'--steps_per_epoch',
help='The number of steps per epoch to train',
type=int, default=500)
parser.add_argument(
'--train_path',
help='The path to the training data',
type=str, default="gs://cloud-ml-data/img/flower_photos/train_set.csv")
parser.add_argument(
'--eval_path',
help='The path to the evaluation data',
type=str, default="gs://cloud-ml-data/img/flower_photos/eval_set.csv")
parser.add_argument(
'--tpu_address',
help='The path to the TPUs we will use in training',
type=str, required=True)
parser.add_argument(
'--hub_path',
help='The path to TF Hub module to use in GCS',
type=str, required=True)
parser.add_argument(
'--job-dir',
help='Directory where to save the given model',
type=str, required=True)
return parser.parse_known_args(argv)
def main():
"""Parses command line arguments and kicks off model training."""
args = _parse_arguments(sys.argv[1:])[0]
# TODO: define a TPU strategy
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(
tpu=args.tpu_address)
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.experimental.TPUStrategy(resolver)
with strategy.scope():
train_data = util.load_dataset(args.train_path)
eval_data = util.load_dataset(args.eval_path, training=False)
image_model = model.build_model(args.job_dir, args.hub_path)
model_history = model.train_and_evaluate(
image_model, args.epochs, args.steps_per_epoch,
train_data, eval_data, args.job_dir)
if __name__ == '__main__':
main()
###Output
_____no_output_____
###Markdown
The TPU serverBefore we can start training with this code, we need a way to pull in [MobileNet](https://tfhub.dev/google/imagenet/mobilenet_v2_035_224/feature_vector/4). When working with TPUs in the cloud, the TPU will [not have access to the VM's local file directory](https://cloud.google.com/tpu/docs/troubleshootingcannot_use_local_filesystem) since the TPU worker acts as a server. Because of this **all data used by our model must be hosted on an outside storage system** such as Google Cloud Storage. This makes [caching](https://www.tensorflow.org/api_docs/python/tf/data/Datasetcache) our dataset especially critical in order to speed up training time.To access MobileNet with these restrictions, we can download a compressed [saved version](https://www.tensorflow.org/hub/tf2_saved_model) of the model by using the [wget](https://www.gnu.org/software/wget/manual/wget.html) command. Adding `?tf-hub-format=compressed` at the end of our module handle gives us a download URL.
###Code
!wget https://tfhub.dev/google/imagenet/mobilenet_v2_100_224/feature_vector/4?tf-hub-format=compressed
###Output
_____no_output_____
###Markdown
This model is still compressed, so lets uncompress it with the `tar` command below and place it in our `tpu_models` directory.
###Code
%%bash
rm -r tpu_models/hub
mkdir tpu_models/hub
tar xvzf 4?tf-hub-format=compressed -C tpu_models/hub/
###Output
_____no_output_____
###Markdown
Finally, we need to transfer our materials to the TPU. We'll use GCS as a go-between, using [gsutil cp](https://cloud.google.com/storage/docs/gsutil/commands/cp) to copy everything.
###Code
!gsutil rm -r gs://$BUCKET/tpu_models
!gsutil cp -r tpu_models gs://$BUCKET/tpu_models
###Output
_____no_output_____
###Markdown
Spinning up a TPU

Time to wake up a TPU! Open the [Google Cloud Shell](https://console.cloud.google.com/home/dashboard?cloudshell=true) and copy the [ctpu up](https://cloud.google.com/tpu/docs/ctpu-referenceup) command below. Say 'Yes' to the prompts to spin up the TPU.

`ctpu up --zone=us-central1-b --tf-version=2.1 --name=my-tpu`

It will take about five minutes to wake up. Then, it should automatically SSH into the TPU, but alternatively the [Compute Engine Interface](https://console.cloud.google.com/compute/instances) can be used to SSH in. You'll know you're running on a TPU when the command line starts with `your-username@your-tpu-name`.

This is a fresh TPU and still needs our code. Run the below cell and copy the output into your TPU terminal to copy your model from your GCS bucket. Don't forget to include the `.` at the end, as it tells gsutil to copy data into the current directory.
###Code
!echo "gsutil cp -r gs://$BUCKET/tpu_models ."
###Output
_____no_output_____
###Markdown
Time to shine, TPU! Run the below cell and copy the output into your TPU terminal. Training will be slow at first, but it will pick up speed after a few minutes once the Tensorflow graph has been built out. **TODO 2 and 3: Specify the `tpu_address` and `hub_path`**
###Code
!echo "python3 -m tpu_models.trainer.task \
--tpu_address=my-tpu \
--hub_path=gs://$BUCKET/tpu_models/hub/ \
--job-dir=gs://$BUCKET/flowers_tpu_$(date -u +%y%m%d_%H%M%S)"
###Output
_____no_output_____
###Markdown
Transfer Learning on TPUsIn the previous notebook, we learned how to do transfer learning with [TensorFlow Hub](https://www.tensorflow.org/hub). In this notebook, we're going to kick up our training speed with [TPUs](https://www.tensorflow.org/guide/tpu). Learning Objectives1. Know how to set up a [TPU strategy](https://www.tensorflow.org/api_docs/python/tf/distribute/experimental/TPUStrategy?version=nightly) for training2. Know how to use a TensorFlow Hub Module when training on a TPU3. Know how to create and specify a TPU for trainingFirst things first. Configure the parameters below to match your own Google Cloud project details.
###Code
import os
os.environ["BUCKET"] = "your-bucket-here"
###Output
_____no_output_____
###Markdown
Packaging the ModelIn order to train on a TPU, we'll need to set up a python module for training. The skeleton for this has already been built out in `tpu_models` with the data processing functions from the previous lab copied into util.py.Similarly, the model building and training functions are pulled into model.py. This is almost entirely the same as before, except the hub module path is now a variable to be provided by the user. We'll get into why in a bit, but first, let's take a look at the new `task.py` file.We've added five command line arguments which are standard for cloud training of a TensorFlow model: `epochs`, `steps_per_epoch`, `train_path`, `eval_path`, and `job-dir`. There are two new arguments for TPU training: `tpu_address` and `hub_path``tpu_address` is going to be our TPU name as it appears in [Compute Engine Instances](console.cloud.google.com/compute/instances). We can specify this name with the [ctpu up](https://cloud.google.com/tpu/docs/ctpu-referenceup) command.`hub_path` is going to be a Google Cloud Storage path to a downloaded TensorFlow Hub module.The other big difference is some code to deploy our model on a TPU. To begin, we'll set up a [TPU Cluster Resolver](https://www.tensorflow.org/api_docs/python/tf/distribute/cluster_resolver/TPUClusterResolver), which will help tensorflow communicate with the hardware to set up workers for training ([more on TensorFlow Cluster Resolvers](https://www.tensorflow.org/api_docs/python/tf/distribute/cluster_resolver/ClusterResolver)). Once the resolver [connects to](https://www.tensorflow.org/api_docs/python/tf/config/experimental_connect_to_cluster) and [initializes](https://www.tensorflow.org/api_docs/python/tf/tpu/experimental/initialize_tpu_system) the TPU system, our Tensorflow Graphs can be initialized within a [TPU distribution strategy](https://www.tensorflow.org/api_docs/python/tf/distribute/experimental/TPUStrategy), allowing our TensorFlow code to take full advantage of the TPU hardware capabilities.**TODO 1: Set up a TPU strategy**
###Code
%%writefile tpu_models/trainer/task.py
import argparse
import json
import os
import sys
import tensorflow as tf
from . import model
from . import util
def _parse_arguments(argv):
"""Parses command-line arguments."""
parser = argparse.ArgumentParser()
parser.add_argument(
'--epochs',
help='The number of epochs to train',
type=int, default=5)
parser.add_argument(
'--steps_per_epoch',
help='The number of steps per epoch to train',
type=int, default=500)
parser.add_argument(
'--train_path',
help='The path to the training data',
type=str, default="gs://cloud-ml-data/img/flower_photos/train_set.csv")
parser.add_argument(
'--eval_path',
help='The path to the evaluation data',
type=str, default="gs://cloud-ml-data/img/flower_photos/eval_set.csv")
parser.add_argument(
'--tpu_address',
help='The path to the TPUs we will use in training',
type=str, required=True)
parser.add_argument(
'--hub_path',
help='The path to TF Hub module to use in GCS',
type=str, required=True)
parser.add_argument(
'--job-dir',
help='Directory where to save the given model',
type=str, required=True)
return parser.parse_known_args(argv)
def main():
"""Parses command line arguments and kicks off model training."""
args = _parse_arguments(sys.argv[1:])[0]
# TODO: define a TPU strategy
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(
tpu=args.tpu_address)
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.TPUStrategy(resolver)
with strategy.scope():
train_data = util.load_dataset(args.train_path)
eval_data = util.load_dataset(args.eval_path, training=False)
image_model = model.build_model(args.job_dir, args.hub_path)
model_history = model.train_and_evaluate(
image_model, args.epochs, args.steps_per_epoch,
train_data, eval_data, args.job_dir)
if __name__ == '__main__':
main()
###Output
_____no_output_____
###Markdown
The TPU serverBefore we can start training with this code, we need a way to pull in [MobileNet](https://tfhub.dev/google/imagenet/mobilenet_v2_035_224/feature_vector/4). When working with TPUs in the cloud, the TPU will [not have access to the VM's local file directory](https://cloud.google.com/tpu/docs/troubleshootingcannot_use_local_filesystem) since the TPU worker acts as a server. Because of this **all data used by our model must be hosted on an outside storage system** such as Google Cloud Storage. This makes [caching](https://www.tensorflow.org/api_docs/python/tf/data/Datasetcache) our dataset especially critical in order to speed up training time.To access MobileNet with these restrictions, we can download a compressed [saved version](https://www.tensorflow.org/hub/tf2_saved_model) of the model by using the [wget](https://www.gnu.org/software/wget/manual/wget.html) command. Adding `?tf-hub-format=compressed` at the end of our module handle gives us a download URL.
###Code
!wget https://tfhub.dev/google/imagenet/mobilenet_v2_100_224/feature_vector/4?tf-hub-format=compressed
###Output
_____no_output_____
###Markdown
This model is still compressed, so lets uncompress it with the `tar` command below and place it in our `tpu_models` directory.
###Code
%%bash
rm -r tpu_models/hub
mkdir tpu_models/hub
tar xvzf 4?tf-hub-format=compressed -C tpu_models/hub/
###Output
_____no_output_____
###Markdown
Finally, we need to transfer our materials to the TPU. We'll use GCS as a go-between, using [gsutil cp](https://cloud.google.com/storage/docs/gsutil/commands/cp) to copy everything.
###Code
!gsutil rm -r gs://$BUCKET/tpu_models
!gsutil cp -r tpu_models gs://$BUCKET/tpu_models
###Output
_____no_output_____
###Markdown
Spinning up a TPU

Time to wake up a TPU! Open the [Google Cloud Shell](https://console.cloud.google.com/home/dashboard?cloudshell=true) and copy the [gcloud compute](https://cloud.google.com/sdk/gcloud/reference/compute/tpus/execution-groups/create) command below. Say 'Yes' to the prompts to spin up the TPU.

`gcloud compute tpus execution-groups create \ --name=my-tpu \ --zone=us-central1-b \ --tf-version=2.3.2 \ --machine-type=n1-standard-1 \ --accelerator-type=v3-8`

It will take about five minutes to wake up. Then, it should automatically SSH into the TPU, but alternatively the [Compute Engine Interface](https://console.cloud.google.com/compute/instances) can be used to SSH in. You'll know you're running on a TPU when the command line starts with `your-username@your-tpu-name`.

This is a fresh TPU and still needs our code. Run the below cell and copy the output into your TPU terminal to copy your model from your GCS bucket. Don't forget to include the `.` at the end, as it tells gsutil to copy data into the current directory.
###Code
!echo "gsutil cp -r gs://$BUCKET/tpu_models ."
###Output
_____no_output_____
###Markdown
Time to shine, TPU! Run the below cell and copy the output into your TPU terminal. Training will be slow at first, but it will pick up speed after a few minutes once the Tensorflow graph has been built out. **TODO 2 and 3: Specify the `tpu_address` and `hub_path`**
###Code
%%bash
export TPU_NAME=my-tpu
echo "export TPU_NAME="$TPU_NAME
echo "python3 -m tpu_models.trainer.task \
--tpu_address=\$TPU_NAME \
--hub_path=gs://$BUCKET/tpu_models/hub/ \
--job-dir=gs://$BUCKET/flowers_tpu_$(date -u +%y%m%d_%H%M%S)"
###Output
_____no_output_____
###Markdown
Transfer Learning on TPUsIn the previous notebook, we learned how to do transfer learning with [TensorFlow Hub](https://www.tensorflow.org/hub). In this notebook, we're going to kick up our training speed with [TPUs](https://www.tensorflow.org/guide/tpu). Learning Objectives1. Know how to set up a [TPU strategy](https://www.tensorflow.org/api_docs/python/tf/distribute/experimental/TPUStrategy?version=nightly) for training2. Know how to use a TensorFlow Hub Module when training on a TPU3. Know how to create and specify a TPU for trainingFirst things first. Configure the parameters below to match your own Google Cloud project details.
###Code
import os
os.environ["BUCKET"] = "your-bucket-here"
###Output
_____no_output_____
###Markdown
Packaging the ModelIn order to train on a TPU, we'll need to set up a Python module for training. The skeleton for this has already been built out in `tpu_models` with the data processing functions from the previous lab copied into util.py.Similarly, the model building and training functions are pulled into model.py. This is almost entirely the same as before, except the hub module path is now a variable to be provided by the user. We'll get into why in a bit, but first, let's take a look at the new `task.py` file.We've added five command line arguments which are standard for cloud training of a TensorFlow model: `epochs`, `steps_per_epoch`, `train_path`, `eval_path`, and `job-dir`. There are two new arguments for TPU training: `tpu_address` and `hub_path`.`tpu_address` is going to be our TPU name as it appears in [Compute Engine Instances](https://console.cloud.google.com/compute/instances). We can specify this name with the [ctpu up](https://cloud.google.com/tpu/docs/ctpu-reference#up) command.`hub_path` is going to be a Google Cloud Storage path to a downloaded TensorFlow Hub module.The other big difference is some code to deploy our model on a TPU. To begin, we'll set up a [TPU Cluster Resolver](https://www.tensorflow.org/api_docs/python/tf/distribute/cluster_resolver/TPUClusterResolver), which will help TensorFlow communicate with the hardware to set up workers for training ([more on TensorFlow Cluster Resolvers](https://www.tensorflow.org/api_docs/python/tf/distribute/cluster_resolver/ClusterResolver)). Once the resolver [connects to](https://www.tensorflow.org/api_docs/python/tf/config/experimental_connect_to_cluster) and [initializes](https://www.tensorflow.org/api_docs/python/tf/tpu/experimental/initialize_tpu_system) the TPU system, our TensorFlow graphs can be initialized within a [TPU distribution strategy](https://www.tensorflow.org/api_docs/python/tf/distribute/experimental/TPUStrategy), allowing our TensorFlow code to take full advantage of the TPU hardware capabilities.**TODO 1: Set up a TPU strategy**
###Code
%%writefile tpu_models/trainer/task.py
import argparse
import json
import os
import sys
import tensorflow as tf
from . import model
from . import util
def _parse_arguments(argv):
"""Parses command-line arguments."""
parser = argparse.ArgumentParser()
parser.add_argument(
'--epochs',
help='The number of epochs to train',
type=int, default=5)
parser.add_argument(
'--steps_per_epoch',
help='The number of steps per epoch to train',
type=int, default=500)
parser.add_argument(
'--train_path',
help='The path to the training data',
type=str, default="gs://cloud-ml-data/img/flower_photos/train_set.csv")
parser.add_argument(
'--eval_path',
help='The path to the evaluation data',
type=str, default="gs://cloud-ml-data/img/flower_photos/eval_set.csv")
parser.add_argument(
'--tpu_address',
help='The path to the TPUs we will use in training',
type=str, required=True)
parser.add_argument(
'--hub_path',
help='The path to TF Hub module to use in GCS',
type=str, required=True)
parser.add_argument(
'--job-dir',
help='Directory where to save the given model',
type=str, required=True)
return parser.parse_known_args(argv)
def main():
"""Parses command line arguments and kicks off model training."""
args = _parse_arguments(sys.argv[1:])[0]
# TODO: define a TPU strategy
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(
tpu=args.tpu_address)
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.TPUStrategy(resolver)
with strategy.scope():
train_data = util.load_dataset(args.train_path)
eval_data = util.load_dataset(args.eval_path, training=False)
image_model = model.build_model(args.job_dir, args.hub_path)
model_history = model.train_and_evaluate(
image_model, args.epochs, args.steps_per_epoch,
train_data, eval_data, args.job_dir)
if __name__ == '__main__':
main()
###Output
_____no_output_____
###Markdown
The TPU serverBefore we can start training with this code, we need a way to pull in [MobileNet](https://tfhub.dev/google/imagenet/mobilenet_v2_035_224/feature_vector/4). When working with TPUs in the cloud, the TPU will [not have access to the VM's local file directory](https://cloud.google.com/tpu/docs/troubleshooting#cannot_use_local_filesystem) since the TPU worker acts as a server. Because of this **all data used by our model must be hosted on an outside storage system** such as Google Cloud Storage. This makes [caching](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#cache) our dataset especially critical in order to speed up training time.To access MobileNet with these restrictions, we can download a compressed [saved version](https://www.tensorflow.org/hub/tf2_saved_model) of the model by using the [wget](https://www.gnu.org/software/wget/manual/wget.html) command. Adding `?tf-hub-format=compressed` at the end of our module handle gives us a download URL.
###Code
!wget https://tfhub.dev/google/imagenet/mobilenet_v2_100_224/feature_vector/4?tf-hub-format=compressed
###Output
_____no_output_____
###Markdown
This model is still compressed, so let's uncompress it with the `tar` command below and place it in our `tpu_models` directory.
###Code
%%bash
rm -r tpu_models/hub
mkdir tpu_models/hub
tar xvzf 4?tf-hub-format=compressed -C tpu_models/hub/
###Output
_____no_output_____
###Markdown
Finally, we need to transfer our materials to the TPU. We'll use GCS as a go-between, using [gsutil cp](https://cloud.google.com/storage/docs/gsutil/commands/cp) to copy everything.
###Code
!gsutil rm -r gs://$BUCKET/tpu_models
!gsutil cp -r tpu_models gs://$BUCKET/tpu_models
###Output
_____no_output_____
###Markdown
Spinning up a TPUTime to wake up a TPU! Open the [Google Cloud Shell](https://console.cloud.google.com/home/dashboard?cloudshell=true) and copy the [gcloud compute](https://cloud.google.com/sdk/gcloud/reference/compute/tpus/execution-groups/create) command below. Say 'Yes' to the prompts to spin up the TPU.`gcloud compute tpus execution-groups create \ --name=my-tpu \ --zone=us-central1-b \ --tf-version=2.3.2 \ --machine-type=n1-standard-1 \ --accelerator-type=v3-8`It will take about five minutes to wake up. Then, it should automatically SSH into the TPU, but alternatively [Compute Engine Interface](https://console.cloud.google.com/compute/instances) can be used to SSH in. You'll know you're running on a TPU when the command line starts with `your-username@your-tpu-name`.This is a fresh TPU and still needs our code. Run the below cell and copy the output into your TPU terminal to copy your model from your GCS bucket. Don't forget to include the `.` at the end as it tells gsutil to copy data into the correct directory.
###Code
!echo "gsutil cp -r gs://$BUCKET/tpu_models ."
###Output
_____no_output_____
###Markdown
Time to shine, TPU! Run the below cell and copy the output into your TPU terminal. Training will be slow at first, but it will pick up speed after a few minutes once the Tensorflow graph has been built out. **TODO 2 and 3: Specify the `tpu_address` and `hub_path`**
###Code
%%bash
export TPU_NAME=my-tpu
echo "export TPU_NAME="$TPU_NAME
echo "python3 -m tpu_models.trainer.task \
--tpu_address=\$TPU_NAME \
--hub_path=gs://$BUCKET/tpu_models/hub/ \
--job-dir=gs://$BUCKET/flowers_tpu_$(date -u +%y%m%d_%H%M%S)"
###Output
_____no_output_____
###Markdown
Transfer Learning on TPUsIn the previous notebook, we learned how to do transfer learning with [TensorFlow Hub](https://www.tensorflow.org/hub). In this notebook, we're going to kick up our training speed with [TPUs](https://www.tensorflow.org/guide/tpu). Learning Objectives1. Know how to set up a [TPU strategy](https://www.tensorflow.org/api_docs/python/tf/distribute/experimental/TPUStrategy?version=nightly) for training2. Know how to use a TensorFlow Hub Module when training on a TPU3. Know how to create and specify a TPU for trainingFirst things first. Configure the parameters below to match your own Google Cloud project details.
###Code
import os
os.environ["BUCKET"] = "your-bucket-here"
###Output
_____no_output_____
###Markdown
Packaging the ModelIn order to train on a TPU, we'll need to set up a Python module for training. The skeleton for this has already been built out in `tpu_models` with the data processing functions from the previous lab copied into util.py.Similarly, the model building and training functions are pulled into model.py. This is almost entirely the same as before, except the hub module path is now a variable to be provided by the user. We'll get into why in a bit, but first, let's take a look at the new `task.py` file.We've added five command line arguments which are standard for cloud training of a TensorFlow model: `epochs`, `steps_per_epoch`, `train_path`, `eval_path`, and `job-dir`. There are two new arguments for TPU training: `tpu_address` and `hub_path`.`tpu_address` is going to be our TPU name as it appears in [Compute Engine Instances](https://console.cloud.google.com/compute/instances). We can specify this name with the [ctpu up](https://cloud.google.com/tpu/docs/ctpu-reference#up) command.`hub_path` is going to be a Google Cloud Storage path to a downloaded TensorFlow Hub module.The other big difference is some code to deploy our model on a TPU. To begin, we'll set up a [TPU Cluster Resolver](https://www.tensorflow.org/api_docs/python/tf/distribute/cluster_resolver/TPUClusterResolver), which will help TensorFlow communicate with the hardware to set up workers for training ([more on TensorFlow Cluster Resolvers](https://www.tensorflow.org/api_docs/python/tf/distribute/cluster_resolver/ClusterResolver)). Once the resolver [connects to](https://www.tensorflow.org/api_docs/python/tf/config/experimental_connect_to_cluster) and [initializes](https://www.tensorflow.org/api_docs/python/tf/tpu/experimental/initialize_tpu_system) the TPU system, our TensorFlow graphs can be initialized within a [TPU distribution strategy](https://www.tensorflow.org/api_docs/python/tf/distribute/experimental/TPUStrategy), allowing our TensorFlow code to take full advantage of the TPU hardware capabilities.**TODO 1: Set up a TPU strategy**
###Code
%%writefile tpu_models/trainer/task.py
import argparse
import json
import os
import sys
import tensorflow as tf
from . import model
from . import util
def _parse_arguments(argv):
"""Parses command-line arguments."""
parser = argparse.ArgumentParser()
parser.add_argument(
'--epochs',
help='The number of epochs to train',
type=int, default=5)
parser.add_argument(
'--steps_per_epoch',
help='The number of steps per epoch to train',
type=int, default=500)
parser.add_argument(
'--train_path',
help='The path to the training data',
type=str, default="gs://cloud-ml-data/img/flower_photos/train_set.csv")
parser.add_argument(
'--eval_path',
help='The path to the evaluation data',
type=str, default="gs://cloud-ml-data/img/flower_photos/eval_set.csv")
parser.add_argument(
'--tpu_address',
help='The path to the TPUs we will use in training',
type=str, required=True)
parser.add_argument(
'--hub_path',
help='The path to TF Hub module to use in GCS',
type=str, required=True)
parser.add_argument(
'--job-dir',
help='Directory where to save the given model',
type=str, required=True)
return parser.parse_known_args(argv)
def main():
"""Parses command line arguments and kicks off model training."""
args = _parse_arguments(sys.argv[1:])[0]
# TODO: define a TPU strategy
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(
tpu=args.tpu_address)
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.experimental.TPUStrategy(resolver)
with strategy.scope():
train_data = util.load_dataset(args.train_path)
eval_data = util.load_dataset(args.eval_path, training=False)
image_model = model.build_model(args.job_dir, args.hub_path)
model_history = model.train_and_evaluate(
image_model, args.epochs, args.steps_per_epoch,
train_data, eval_data, args.job_dir)
if __name__ == '__main__':
main()
###Output
_____no_output_____
###Markdown
The TPU serverBefore we can start training with this code, we need a way to pull in [MobileNet](https://tfhub.dev/google/imagenet/mobilenet_v2_035_224/feature_vector/4). When working with TPUs in the cloud, the TPU will [not have access to the VM's local file directory](https://cloud.google.com/tpu/docs/troubleshooting#cannot_use_local_filesystem) since the TPU worker acts as a server. Because of this **all data used by our model must be hosted on an outside storage system** such as Google Cloud Storage. This makes [caching](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#cache) our dataset especially critical in order to speed up training time.To access MobileNet with these restrictions, we can download a compressed [saved version](https://www.tensorflow.org/hub/tf2_saved_model) of the model by using the [wget](https://www.gnu.org/software/wget/manual/wget.html) command. Adding `?tf-hub-format=compressed` at the end of our module handle gives us a download URL.
###Code
!wget https://tfhub.dev/google/imagenet/mobilenet_v2_100_224/feature_vector/4?tf-hub-format=compressed
###Output
_____no_output_____
###Markdown
This model is still compressed, so let's uncompress it with the `tar` command below and place it in our `tpu_models` directory.
###Code
%%bash
rm -r tpu_models/hub
mkdir tpu_models/hub
tar xvzf 4?tf-hub-format=compressed -C tpu_models/hub/
###Output
_____no_output_____
###Markdown
Finally, we need to transfer our materials to the TPU. We'll use GCS as a go-between, using [gsutil cp](https://cloud.google.com/storage/docs/gsutil/commands/cp) to copy everything.
###Code
!gsutil rm -r gs://$BUCKET/tpu_models
!gsutil cp -r tpu_models gs://$BUCKET/tpu_models
###Output
_____no_output_____
###Markdown
Spinning up a TPUTime to wake up a TPU! Open the [Google Cloud Shell](https://console.cloud.google.com/home/dashboard?cloudshell=true) and copy the [ctpu up](https://cloud.google.com/tpu/docs/ctpu-reference#up) command below. Say 'Yes' to the prompts to spin up the TPU.`ctpu up --zone=us-central1-b --tf-version=2.1 --name=my-tpu`It will take about five minutes to wake up. Then, it should automatically SSH into the TPU, but alternatively [Compute Engine Interface](https://console.cloud.google.com/compute/instances) can be used to SSH in. You'll know you're running on a TPU when the command line starts with `your-username@your-tpu-name`.This is a fresh TPU and still needs our code. Run the below cell and copy the output into your TPU terminal to copy your model from your GCS bucket. Don't forget to include the `.` at the end as it tells gsutil to copy data into the correct directory.
###Code
!echo "gsutil cp -r gs://$BUCKET/tpu_models ."
###Output
_____no_output_____
###Markdown
Time to shine, TPU! Run the below cell and copy the output into your TPU terminal. Training will be slow at first, but it will pick up speed after a few minutes once the Tensorflow graph has been built out. **TODO 2 and 3: Specify the `tpu_address` and `hub_path`**
###Code
!echo "python3 -m tpu_models.trainer.task \
--tpu_address=my-tpu \
--hub_path=gs://$BUCKET/tpu_models/hub/ \
--job-dir=gs://$BUCKET/flowers_tpu_$(date -u +%y%m%d_%H%M%S)"
###Output
_____no_output_____
###Markdown
Transfer Learning on TPUsIn the previous notebook, we learned how to do transfer learning with [TensorFlow Hub](https://www.tensorflow.org/hub). In this notebook, we're going to kick up our training speed with [TPUs](https://www.tensorflow.org/guide/tpu). Learning Objectives1. Know how to set up a [TPU strategy](https://www.tensorflow.org/api_docs/python/tf/distribute/experimental/TPUStrategy?version=nightly) for training2. Know how to use a TensorFlow Hub Module when training on a TPU3. Know how to create and specify a TPU for trainingFirst things first. Configure the parameters below to match your own Google Cloud project details.Each learning objective will correspond to a __TODO__ in the [student lab notebook](../labs/4_tpu_training.ipynb) -- try to complete that notebook first before reviewing this solution notebook.
###Code
import os
os.environ["BUCKET"] = "your-bucket-here"
###Output
_____no_output_____
###Markdown
Packaging the ModelIn order to train on a TPU, we'll need to set up a Python module for training. The skeleton for this has already been built out in `tpu_models` with the data processing functions from the previous lab copied into util.py.Similarly, the model building and training functions are pulled into model.py. This is almost entirely the same as before, except the hub module path is now a variable to be provided by the user. We'll get into why in a bit, but first, let's take a look at the new `task.py` file.We've added five command line arguments which are standard for cloud training of a TensorFlow model: `epochs`, `steps_per_epoch`, `train_path`, `eval_path`, and `job-dir`. There are two new arguments for TPU training: `tpu_address` and `hub_path`.`tpu_address` is going to be our TPU name as it appears in [Compute Engine Instances](https://console.cloud.google.com/compute/instances). We can specify this name with the [ctpu up](https://cloud.google.com/tpu/docs/ctpu-reference#up) command.`hub_path` is going to be a Google Cloud Storage path to a downloaded TensorFlow Hub module.The other big difference is some code to deploy our model on a TPU. To begin, we'll set up a [TPU Cluster Resolver](https://www.tensorflow.org/api_docs/python/tf/distribute/cluster_resolver/TPUClusterResolver), which will help TensorFlow communicate with the hardware to set up workers for training ([more on TensorFlow Cluster Resolvers](https://www.tensorflow.org/api_docs/python/tf/distribute/cluster_resolver/ClusterResolver)). Once the resolver [connects to](https://www.tensorflow.org/api_docs/python/tf/config/experimental_connect_to_cluster) and [initializes](https://www.tensorflow.org/api_docs/python/tf/tpu/experimental/initialize_tpu_system) the TPU system, our TensorFlow graphs can be initialized within a [TPU distribution strategy](https://www.tensorflow.org/api_docs/python/tf/distribute/experimental/TPUStrategy), allowing our TensorFlow code to take full advantage of the TPU hardware capabilities.**TODO 1: Set up a TPU strategy**
###Code
%%writefile tpu_models/trainer/task.py
import argparse
import json
import os
import sys
import tensorflow as tf
from . import model
from . import util
def _parse_arguments(argv):
"""Parses command-line arguments."""
parser = argparse.ArgumentParser()
parser.add_argument(
'--epochs',
help='The number of epochs to train',
type=int, default=5)
parser.add_argument(
'--steps_per_epoch',
help='The number of steps per epoch to train',
type=int, default=500)
parser.add_argument(
'--train_path',
help='The path to the training data',
type=str, default="gs://cloud-ml-data/img/flower_photos/train_set.csv")
parser.add_argument(
'--eval_path',
help='The path to the evaluation data',
type=str, default="gs://cloud-ml-data/img/flower_photos/eval_set.csv")
parser.add_argument(
'--tpu_address',
help='The path to the TPUs we will use in training',
type=str, required=True)
parser.add_argument(
'--hub_path',
help='The path to TF Hub module to use in GCS',
type=str, required=True)
parser.add_argument(
'--job-dir',
help='Directory where to save the given model',
type=str, required=True)
return parser.parse_known_args(argv)
def main():
"""Parses command line arguments and kicks off model training."""
args = _parse_arguments(sys.argv[1:])[0]
# TODO: define a TPU strategy
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(
tpu=args.tpu_address)
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.TPUStrategy(resolver)
with strategy.scope():
train_data = util.load_dataset(args.train_path)
eval_data = util.load_dataset(args.eval_path, training=False)
image_model = model.build_model(args.job_dir, args.hub_path)
model_history = model.train_and_evaluate(
image_model, args.epochs, args.steps_per_epoch,
train_data, eval_data, args.job_dir)
if __name__ == '__main__':
main()
###Output
_____no_output_____
###Markdown
The TPU serverBefore we can start training with this code, we need a way to pull in [MobileNet](https://tfhub.dev/google/imagenet/mobilenet_v2_035_224/feature_vector/4). When working with TPUs in the cloud, the TPU will [not have access to the VM's local file directory](https://cloud.google.com/tpu/docs/troubleshooting#cannot_use_local_filesystem) since the TPU worker acts as a server. Because of this **all data used by our model must be hosted on an outside storage system** such as Google Cloud Storage. This makes [caching](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#cache) our dataset especially critical in order to speed up training time.To access MobileNet with these restrictions, we can download a compressed [saved version](https://www.tensorflow.org/hub/tf2_saved_model) of the model by using the [wget](https://www.gnu.org/software/wget/manual/wget.html) command. Adding `?tf-hub-format=compressed` at the end of our module handle gives us a download URL.
###Code
!wget https://tfhub.dev/google/imagenet/mobilenet_v2_100_224/feature_vector/4?tf-hub-format=compressed
###Output
_____no_output_____
###Markdown
This model is still compressed, so let's uncompress it with the `tar` command below and place it in our `tpu_models` directory.
###Code
%%bash
rm -r tpu_models/hub
mkdir tpu_models/hub
tar xvzf 4?tf-hub-format=compressed -C tpu_models/hub/
###Output
_____no_output_____
###Markdown
Finally, we need to transfer our materials to the TPU. We'll use GCS as a go-between, using [gsutil cp](https://cloud.google.com/storage/docs/gsutil/commands/cp) to copy everything.
###Code
!gsutil rm -r gs://$BUCKET/tpu_models
!gsutil cp -r tpu_models gs://$BUCKET/tpu_models
###Output
_____no_output_____
###Markdown
Spinning up a TPUTime to wake up a TPU! Open the [Google Cloud Shell](https://console.cloud.google.com/home/dashboard?cloudshell=true) and copy the [gcloud compute](https://cloud.google.com/sdk/gcloud/reference/compute/tpus/execution-groups/create) command below. Say 'Yes' to the prompts to spin up the TPU.`gcloud compute tpus execution-groups create \ --name=my-tpu \ --zone=us-central1-b \ --tf-version=2.3.2 \ --machine-type=n1-standard-1 \ --accelerator-type=v3-8`It will take about five minutes to wake up. Then, it should automatically SSH into the TPU, but alternatively [Compute Engine Interface](https://console.cloud.google.com/compute/instances) can be used to SSH in. You'll know you're running on a TPU when the command line starts with `your-username@your-tpu-name`.This is a fresh TPU and still needs our code. Run the below cell and copy the output into your TPU terminal to copy your model from your GCS bucket. Don't forget to include the `.` at the end as it tells gsutil to copy data into the correct directory.
###Code
!echo "gsutil cp -r gs://$BUCKET/tpu_models ."
###Output
_____no_output_____
###Markdown
Time to shine, TPU! Run the below cell and copy the output into your TPU terminal. Training will be slow at first, but it will pick up speed after a few minutes once the Tensorflow graph has been built out. **TODO 2 and 3: Specify the `tpu_address` and `hub_path`**
###Code
%%bash
export TPU_NAME=my-tpu
echo "export TPU_NAME="$TPU_NAME
echo "python3 -m tpu_models.trainer.task \
--tpu_address=\$TPU_NAME \
--hub_path=gs://$BUCKET/tpu_models/hub/ \
--job-dir=gs://$BUCKET/flowers_tpu_$(date -u +%y%m%d_%H%M%S)"
###Output
_____no_output_____ |
Paper-1/VGG16.ipynb | ###Markdown
**SIMPLE VGG ARCHITECTURE **
###Code
import torch
import torch.nn as nn
import torchvision
import torchvision.transforms as transforms

# VGG-16 configuration: numbers are conv output channels, -1 marks a 2x2 max-pooling layer
architecture = [64, 64, -1, 128, 128, -1, 256, 256, 256, -1, 512, 512, 512, -1, 512, 512, 512, -1]
class VGG(nn.Module):
def __init__(self,in_channels=3,num_classes=10):
super(VGG,self).__init__()
self.in_channels = in_channels
self.mai = self.main_architecture(architecture)
self.fcs = nn.Sequential(nn.Linear(512,4096),
nn.ReLU(),
nn.Dropout(p=0.5),
nn.Linear(4096,4096),
nn.ReLU(),
nn.Dropout(p=0.5),
nn.Linear(4096,num_classes))
def forward(self,x):
x = self.mai(x)
x = x.reshape(x.shape[0], -1)
x = self.fcs(x)
return x
def main_architecture(self,architecture):
blocks = []
in_channels = self.in_channels
for layer in architecture:
if layer != -1 :
out_channels = layer
blocks+=[nn.Conv2d(in_channels=in_channels,out_channels = out_channels , kernel_size = (3,3) , stride =(1,1) ,padding = (1,1) ),nn.BatchNorm2d(layer),nn.ReLU()]
in_channels = layer
elif layer == -1 :
blocks+=[(nn.MaxPool2d(kernel_size=(2,2),stride= (2,2)))]
return nn.Sequential(*blocks)
transform = transforms.Compose([transforms.Pad(4),transforms.RandomHorizontalFlip(),transforms.RandomCrop(32),transforms.ToTensor(), transforms.Normalize((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010))])
test_transform = transforms.Compose([transforms.ToTensor(),transforms.Normalize(mean=(0.5,0.5,0.5),std=(0.5,0.5,0.5))])
train_dataset = torchvision.datasets.CIFAR10(root='../../data/', train=True,transform=transform, download=True)
test_dataset = torchvision.datasets.CIFAR10(root='../../data/', train=False,transform=test_transform)
train_loader = torch.utils.data.DataLoader(dataset=train_dataset, batch_size=64, shuffle=True)
test_loader = torch.utils.data.DataLoader(dataset=test_dataset,batch_size=64, shuffle=False)
depth = 2
epochs = 5
batch_size = 256
base_lr = 0.001
lr_decay = 0.1
milestones = '[80, 120]'
device = "cuda"
num_workers = 3
model = VGG(in_channels = 3, num_classes = 10).to(device)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=base_lr)
for epoch in range(epochs):
for i, (images, labels) in enumerate(train_loader):
images = images.to(device)
labels = labels.to(device)
outputs = model(images)
loss = criterion(outputs, labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
if (i+1) % 50 == 0:
print ("Epoch {}, Step {} Loss: {:.4f}".format(epoch+1, i+1, loss.item()))
model.eval()  # switch to evaluation mode (affects BatchNorm and Dropout)
with torch.no_grad():
correct = 0
total = 0
for images, labels in test_loader:
images = images.to(device)
labels = labels.to(device)
outputs = model(images)
_, predicted = torch.max(outputs.data, 1)
total += labels.size(0)
correct += (predicted == labels).sum().item()
print('Accuracy ( test images ) : {} %'.format(100 * correct / total))
###Output
Accuracy ( test images ) : 55.59 %
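###Markdown
As a quick sanity check (a minimal sketch added for illustration, not part of the original run): every -1 entry in `architecture` becomes a 2x2 max-pool, so a 32x32 CIFAR-10 image is downsampled five times to 1x1x512 before the fully connected head, which should therefore emit one logit per class.
###Code
# Hypothetical shape check on a dummy batch; assumes the VGG model defined above is in scope.
dummy = torch.randn(4, 3, 32, 32).to(device)
with torch.no_grad():
    logits = model(dummy)
print(logits.shape)  # expected: torch.Size([4, 10])
###Output
_____no_output_____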
|
practice_zameen_com.ipynb | ###Markdown
###Code
# Importing all required libraries
import pandas as pd
from bs4 import BeautifulSoup
import requests
url="https://www.zameen.com/Houses_Property/Lahore_Defence_(DHA)_Phase_6-1448-{}.html"
price=[]
location=[]
detail=[]
added=[]
article=[]
description=[]
for i in range(1,10):
response=requests.get("https://www.zameen.com/Houses_Property/Lahore_Defence_(DHA)_Phase_6-1448-{}.html".format(i))
soup=BeautifulSoup(response.content,"html.parser")
for i in soup.findAll("span",class_="f343d9ce"):
price.append(i.text)
for i in soup.findAll("div",class_="_162e6469"):
location.append(i.string)
for i in soup.findAll("span",class_="b6a29bc0"):
detail.append(i.string)
for i in soup.findAll("div",class_="_08b01580"):
added.append(i.text)
for i in soup.findAll("h2",class_="c0df3811"):
article.append(i.string)
for i in soup.findAll("div",class_="ee550b27"):
description.append(i.string)
soup.title.string
area=detail[2::3]
bed=detail[0::3]
bath=detail[1::3]
data={"Bed":bed,"Price":price,"Location":location,"Added_time":added,"Features":article,"Description":description}
df=pd.DataFrame(data)
df["Bath"]=pd.Series(bath)
df["Area"]=pd.Series(area)
df.reindex(columns=["Bed","Bath","Area","Location","Features","Description","Added_time","Price"])
df.to_excel("zameen_data.xlsx")
###Output
_____no_output_____ |
notebooks/recommender_mvp.ipynb | ###Markdown
Create Restaurant Matrix from mile_from_galvanize_db
###Code
V = mile_from_galvanize_db.drop(columns=['id','image_url', 'location', 'rating', 'review_count',
'transactions', 'url', 'dist_from_galvanize', 'cats', 'popularity'])
V
#V.reset_index(drop=True, inplace=True)
#V.set_index(keys='alias', inplace=True)
V.to_pickle('restaurant_matrix.pkl')
###Output
_____no_output_____
###Markdown
Save restaurants list to a file
###Code
restaurants = V.index.tolist()
restaurants
with open('restaurant_aliases.txt', 'w') as file:
for listitem in restaurants:
file.write('%s\n' % listitem)
###Output
_____no_output_____
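###Markdown
For completeness, an illustrative way to read the saved aliases back into a list (a minimal sketch, not part of the original notebook flow):
###Code
# Read the aliases back, stripping the trailing newlines written above.
with open('restaurant_aliases.txt') as file:
    loaded_aliases = [line.strip() for line in file]
loaded_aliases[:5]
###Output
_____no_output_____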
###Markdown
Create User Matrix
###Code
bum = BuildUserMatrix(V, survey_results, usernames)
U = bum.compile()
U
###Output
_____no_output_____
###Markdown
Create Ratings Matrix
###Code
rr = RestaurantRecommender(mile_from_galvanize_db, U, V)
R = rr.build_ratings_matrix()
R
R
rr.find_individual_recs('jonny', R, 30)
recs = rr.max_sat_recs('gabe', 'jonny')
recs
formatted_mile_from_galvanize_db = mile_from_galvanize_db.reset_index(drop=True)
formatted_mile_from_galvanize_db.set_index('alias', inplace=True)
formatted_mile_from_galvanize_db
popularity_weight = np.log(np.log(formatted_mile_from_galvanize_db['popularity'] + 1)+1)
popularity_weight.min()
average_rating_weight = np.log(formatted_mile_from_galvanize_db['rating'])
average_rating_weight.min()
import json
with open ('/Users/gnishimura/galvanize/dsi-week-6/dsi-spark/data/yelp_academic_dataset_business.json') as f:
yelp_df = pd.DataFrame([json.loads(line) for line in f])
a = yelp_df['city'].unique()
a.sort()
yelp_df[yelp_df['city'] == 'Seattle']
(7+7+6.57)/3
from survey_results import survey_results
restaurant_ids = set()
for survey in survey_results:
restaurant_ids.update(survey.keys())
restaurant_ids
rest_mask = np.array([x in restaurant_ids for x in V.index])
catcounts = V[rest_mask].sum(axis=0)
catcounts[catcounts>0]
###Output
_____no_output_____ |
examples/pandas_helper/.ipynb_checkpoints/pandas_helper_example-checkpoint.ipynb | ###Markdown
Exploring the new describe methods
###Code
df.helper.describe() # Prints both Numeric and Categorical Variable Summaries
df.helper.describe_categorical() # Description of all categorical variables
df.helper.level_counts()
# Same can also be achieved by using verbose option in describe_categorical
# df.helper.describe_categorical(verbose=True)
###Output
----------------------------------------------------------------------
Printing number of observations in each level of categorical variables
----------------------------------------------------------------------
Name
Jarvis, Mr. John Denzil 1
Harper, Miss. Annie Jessie "Nina" 1
Olsen, Mr. Henry Margido 1
Guggenheim, Mr. Benjamin 1
Hippach, Mrs. Louis Albert (Ida Sophia Fischer) 1
Baclini, Miss. Marie Catherine 1
Futrelle, Mr. Jacques Heath 1
Rosblom, Mrs. Viktor (Helena Wilhelmina) 1
Dakic, Mr. Branko 1
de Mulder, Mr. Theodore 1
Bailey, Mr. Percy Andrew 1
Leitch, Miss. Jessie Wills 1
Jonkoff, Mr. Lalio 1
Bengtsson, Mr. John Viktor 1
Goodwin, Miss. Lillian Amy 1
Rice, Master. George Hugh 1
Sagesser, Mlle. Emma 1
Campbell, Mr. William 1
Toufik, Mr. Nakli 1
Fortune, Mr. Mark 1
Johannesen-Bratthammer, Mr. Bernt 1
Moore, Mr. Leonard Charles 1
Barkworth, Mr. Algernon Henry Wilson 1
Youseff, Mr. Gerious 1
Mamee, Mr. Hanna 1
Nosworthy, Mr. Richard Cater 1
Taussig, Mr. Emil 1
Karun, Miss. Manca 1
Abbing, Mr. Anthony 1
Lindahl, Miss. Agda Thorilda Viktoria 1
..
Givard, Mr. Hans Kristensen 1
Baclini, Mrs. Solomon (Latifa Qurban) 1
Chibnall, Mrs. (Edith Martha Bowerman) 1
Nicholson, Mr. Arthur Ernest 1
Caldwell, Master. Alden Gates 1
Endres, Miss. Caroline Louise 1
Slemen, Mr. Richard James 1
Boulos, Miss. Nourelain 1
Rogers, Mr. William John 1
Perkin, Mr. John Henry 1
Navratil, Mr. Michel ("Louis M Hoffman") 1
Asplund, Master. Edvin Rojj Felix 1
Robbins, Mr. Victor 1
Lang, Mr. Fang 1
Keefe, Mr. Arthur 1
Salkjelsvik, Miss. Anna Kristine 1
Jerwan, Mrs. Amin S (Marie Marthe Thuillard) 1
Zabour, Miss. Hileni 1
Farrell, Mr. James 1
Natsch, Mr. Charles H 1
Strom, Mrs. Wilhelm (Elna Matilda Persson) 1
Oreskovic, Miss. Marija 1
Ryan, Mr. Patrick 1
Heininen, Miss. Wendla Maria 1
Farthing, Mr. John 1
Vander Cruyssen, Mr. Victor 1
Petterson, Mr. Johan Emil 1
Porter, Mr. Walter Chamberlain 1
Dahlberg, Miss. Gerda Ulrika 1
McCoy, Miss. Agnes 1
Name: Name, Length: 891, dtype: int64
Sex
male 577
female 314
Name: Sex, dtype: int64
Ticket
1601 7
347082 7
CA. 2343 7
CA 2144 6
3101295 6
347088 6
S.O.C. 14879 5
382652 5
PC 17757 4
2666 4
347077 4
LINE 4
113781 4
113760 4
17421 4
4133 4
19950 4
349909 4
W./C. 6608 4
230080 3
110152 3
PC 17582 3
PC 17755 3
248727 3
SC/Paris 2123 3
29106 3
24160 3
347742 3
363291 3
PC 17572 3
..
323951 1
2690 1
350406 1
343275 1
315088 1
113804 1
244358 1
113510 1
29104 1
349212 1
7267 1
A/5. 851 1
C.A. 17248 1
SOTON/OQ 392086 1
C.A. 6212 1
11752 1
STON/O 2. 3101269 1
349224 1
STON/O 2. 3101293 1
A/5. 3337 1
PC 17590 1
13213 1
347089 1
347076 1
342826 1
315086 1
SOTON/OQ 392089 1
PC 17474 1
349247 1
364512 1
Name: Ticket, Length: 681, dtype: int64
Cabin
G6 4
C23 C25 C27 4
B96 B98 4
F2 3
F33 3
E101 3
C22 C26 3
D 3
B49 2
D26 2
B18 2
C52 2
B57 B59 B63 B66 2
F G73 2
D33 2
E67 2
B51 B53 B55 2
C83 2
E44 2
B22 2
B77 2
B58 B60 2
E24 2
D35 2
D17 2
C93 2
C68 2
B20 2
B5 2
C125 2
..
D10 D12 1
B86 1
E31 1
B30 1
D11 1
C54 1
C128 1
C110 1
C70 1
C45 1
A19 1
F E69 1
D56 1
D15 1
D9 1
C103 1
C85 1
E40 1
D37 1
E10 1
C104 1
C30 1
C86 1
B41 1
C95 1
C32 1
D45 1
A16 1
T 1
A23 1
Name: Cabin, Length: 147, dtype: int64
Embarked
S 644
C 168
Q 77
Name: Embarked, dtype: int64
###Markdown
Dropping Columns with helper extension
###Code
df.info()
df.drop(['Cabin'],inplace=True, axis=1)
df.info() # Column 'Cabin' has been dropped
# Deleting it a second time gives an error with default drop behavior (hence commenting out for now)
# df.drop(['Cabin'],inplace=True, axis=1)
# Deleting it again with helper extension only gives a warning
df.helper.drop_columns(['Cabin'],inplace=True)
# Inplace = False drop with helper extension (similar functionality to default drop pandas behavior)
df.helper.drop_columns(['Embarked']).head()
# Inplace = False drop with helper extension for already deleted column
df.helper.drop_columns(['Cabin']).head()
# No issues if some columns are present while others are not
df.helper.drop_columns(['col1','col2','Age'], inplace=True)
df.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 891 entries, 0 to 890
Data columns (total 10 columns):
PassengerId 891 non-null int64
Survived 891 non-null int64
Pclass 891 non-null int64
Name 891 non-null object
Sex 891 non-null object
SibSp 891 non-null int64
Parch 891 non-null int64
Ticket 891 non-null object
Fare 891 non-null float64
Embarked 889 non-null object
dtypes: float64(1), int64(5), object(4)
memory usage: 69.7+ KB
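###Markdown
For reference, a custom accessor like `df.helper` can be registered through pandas' public extension API. The cell below is a minimal sketch of how a tolerant `drop_columns` could be implemented; it is an illustrative assumption, registered under the name `helper_demo` so it does not clash with the actual pandas_helper package, whose internals may differ.
###Code
# Minimal sketch of a DataFrame accessor with a tolerant drop_columns (illustrative only).
import warnings
import pandas as pd

@pd.api.extensions.register_dataframe_accessor("helper_demo")
class HelperDemo:
    def __init__(self, pandas_obj):
        self._obj = pandas_obj

    def drop_columns(self, columns, inplace=False):
        present = [c for c in columns if c in self._obj.columns]
        missing = [c for c in columns if c not in self._obj.columns]
        if missing:
            warnings.warn("Skipping columns not found in DataFrame: {}".format(missing))
        return self._obj.drop(columns=present, inplace=inplace)

df.helper_demo.drop_columns(['Cabin', 'Fare'])  # 'Cabin' was already dropped above, so only a warning is issued
###Output
_____no_output_____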
###Markdown
Checking the functionality of new describe methods after inplace column deletions
###Code
df.helper.describe_categorical()
# Cabin is absent now
df.helper.describe_numeric()
# Age is absent now
###Output
----------------------------------------
Summary Statictics for Numeric Variables
----------------------------------------
PassengerId Survived Pclass SibSp Parch Fare
count 891.000000 891.000000 891.000000 891.000000 891.000000 891.000000
mean 446.000000 0.383838 2.308642 0.523008 0.381594 32.204208
std 257.353842 0.486592 0.836071 1.102743 0.806057 49.693429
min 1.000000 0.000000 1.000000 0.000000 0.000000 0.000000
25% 223.500000 0.000000 2.000000 0.000000 0.000000 7.910400
50% 446.000000 0.000000 3.000000 0.000000 0.000000 14.454200
75% 668.500000 1.000000 3.000000 1.000000 0.000000 31.000000
max 891.000000 1.000000 3.000000 8.000000 6.000000 512.329200
|
notebooks/0.eda_test.ipynb | ###Markdown
Set-up
###Code
# DATA MANIPULATION
import numpy as np # linear algebra
import random as rd # generating random numbers
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
import datetime # manipulating date formats
# VIZUALIZATION
import matplotlib.pyplot as plt # basic plotting
%matplotlib inline
# Read data
train = pd.read_csv('../input/processed/train_from2017.csv', parse_dates=['date'])
test = pd.read_csv('../input/test.csv', parse_dates=['date'])
test[test.date=='2017-08-16'].groupby(['store_nbr']).item_nbr.count()
items_train = train.item_nbr.unique()
items_test = test.item_nbr.unique()
print("Number of unique items in ")
print("Training dataset : ", len(items_train))
print("Test dataset : ", len(items_test))
import pickle
pickle.dump(items_test, open('../input/processed/items_test.pickle', 'wb'))
###Output
_____no_output_____
###Markdown
Explore
###Code
train.info()
train.describe()
###Output
_____no_output_____
###Markdown
Target variable
###Code
## There are negative values???
train[train.unit_sales<0].unit_sales.hist()
train[train.unit_sales<0].head(10)
## Transform target variable
train.loc[train.unit_sales < 0., 'unit_sales'] = 0.
train['unit_sales_log1p'] = np.log1p(train.unit_sales)
# Histograms
plt.figure(figsize=(15,5))
train.unit_sales.hist(ax=plt.subplot(1,2,1))
train.unit_sales_log1p.hist(ax=plt.subplot(1,2,2))
###Output
_____no_output_____
###Markdown
EDA
###Code
plt.figure(figsize=(15,7))
for item in sorted(train.item_nbr.unique())[:10]:
#print(item)
df = train[(train.item_nbr==item) & (train.date>='2017-01-01')]
ts = pd.Series(df['unit_sales_log1p'].values, index = df.date)
plt.plot(ts, label='Item %s'%(item))
plt.show()
###Output
_____no_output_____ |
Part 2 - Python Notebook/recipes/Caffe2/Caffe2-GPU-Distributed/Caffe2-GPU-Distributed.ipynb | ###Markdown
Caffe2 GPU Distributed IntroductionThis example demonstrates how to run standard Caffe2 [resnet50_trainer.py](https://github.com/caffe2/caffe2/blob/master/caffe2/python/examples/resnet50_trainer.py) example using Batch AI. You can run it on a single or multiple compute nodes. Details- Standard Caffe2 sample script [resnet50_trainer.py](https://github.com/caffe2/caffe2/blob/master/caffe2/python/examples/resnet50_trainer.py) is used;- MNIST Dataset has been translated into a lmdb database, and can be obtained at http://download.caffe2.ai/databases/mnist-lmdb.zip;- Automatically created NFS folder will be used for rendezvous temp files to coordinate between each shard/node - Standard output of the job will be stored on Azure File Share. Instructions Install Dependencies and Create Configuration file.Follow [instructions](/recipes) to install all dependencies and create configuration file. Read Configuration and Create Batch AI client
###Code
from __future__ import print_function
from datetime import datetime
import os
import sys
import zipfile
from azure.storage.file import FileService
from azure.storage.blob import BlockBlobService
import azure.mgmt.batchai.models as models
# The BatchAI/utilities folder contains helper functions used by different notebooks
sys.path.append('../../../')
import utilities as utils
cfg = utils.config.Configuration('../../configuration.json')
client = utils.config.create_batchai_client(cfg)
###Output
_____no_output_____
###Markdown
Create Resource Group and Batch AI workspace if they do not exist:
###Code
utils.config.create_resource_group(cfg)
_ = client.workspaces.create(cfg.resource_group, cfg.workspace, cfg.location).result()
###Output
_____no_output_____
###Markdown
1. Prepare Training Dataset and Script in Azure Storage Create Azure Blob ContainerWe will create a new Blob Container with name `batchaisample` under your storage account. This will be used to store the *input training dataset***Note** You don't need to create new blob Container for every cluster. We are doing this in this sample to simplify resource management for you.
###Code
azure_blob_container_name = 'batchaisample'
blob_service = BlockBlobService(cfg.storage_account_name, cfg.storage_account_key)
blob_service.create_container(azure_blob_container_name, fail_on_exist=False)
###Output
_____no_output_____
###Markdown
Upload MNIST Dataset to Azure Blob ContainerFor demonstration purposes, we will download the preprocessed MNIST dataset to the current directory and upload it to an Azure Blob Container directory named `mnist_dataset`.There are multiple ways to create folders and upload files into an Azure Blob Container - you can use the [Azure Portal](https://ms.portal.azure.com), [Storage Explorer](http://storageexplorer.com/), [Azure CLI 2.0](/azure-cli-extension) or the Azure SDK for your preferred programming language.In this example we will use the Azure SDK for Python to copy files into the Blob Container.
###Code
mnist_dataset_directory = 'mnist_dataset'
utils.dataset.download_and_upload_mnist_dataset_to_blob(
blob_service, azure_blob_container_name, mnist_dataset_directory)
###Output
_____no_output_____
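###Markdown
The helper above wraps both the download and the upload. For reference, a single local file could be uploaded with the same legacy `azure-storage-blob` client as follows; the file names here are hypothetical and the helper's actual internals may differ.
###Code
# Minimal sketch (illustrative): upload one local file into the container.
# Assumes a local file at 'mnist_train_lmdb/data.mdb' exists after unzipping the dataset.
blob_service.create_blob_from_path(
    container_name=azure_blob_container_name,
    blob_name=mnist_dataset_directory + '/mnist_train_lmdb/data.mdb',
    file_path='mnist_train_lmdb/data.mdb')
###Output
_____no_output_____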
###Markdown
Create Azure File ShareFor this example we will create a new File Share with name `batchaisample` under your storage account. This will be used to share the *training script file* and *output file*.**Note** You don't need to create new file share for every cluster. We are doing this in this sample to simplify resource management for you.
###Code
azure_file_share_name = 'batchaisample'
file_service = FileService(cfg.storage_account_name, cfg.storage_account_key)
file_service.create_share(azure_file_share_name, fail_on_exist=False)
###Output
_____no_output_____
###Markdown
Deploy Sample Script to Azure File ShareDownload original sample script
###Code
script_to_deploy = 'resnet50_trainer.py'
utils.dataset.download_file('https://raw.githubusercontent.com/caffe2/caffe2/v0.6.0/caffe2/python/examples/resnet50_trainer.py', script_to_deploy)
###Output
_____no_output_____
###Markdown
We will create a folder on Azure File Share containing a copy of original sample script
###Code
script_directory = 'Caffe2Samples'
file_service.create_directory(
azure_file_share_name, script_directory, fail_on_exist=False)
file_service.create_file_from_path(
azure_file_share_name, script_directory, script_to_deploy, script_to_deploy)
###Output
_____no_output_____
###Markdown
2. Create Azure Batch AI Compute Cluster Configure Compute Cluster- For this example we will use a GPU cluster of `STANDARD_NC6` nodes. Number of nodes in the cluster is configured with `nodes_count` variable;- We will call the cluster `nc6`;So, the cluster will have the following parameters:
###Code
nodes_count = 2
cluster_name = 'nc6'
parameters = models.ClusterCreateParameters(
location=cfg.location,
vm_size='STANDARD_NC6',
scale_settings=models.ScaleSettings(
manual=models.ManualScaleSettings(target_node_count=nodes_count)
),
user_account_settings=models.UserAccountSettings(
admin_user_name=cfg.admin,
admin_user_password=cfg.admin_password or None,
admin_user_ssh_public_key=cfg.admin_ssh_key or None,
)
)
###Output
_____no_output_____
###Markdown
Create Compute Cluster
###Code
_ = client.clusters.create(cfg.resource_group, cfg.workspace, cluster_name, parameters).result()
###Output
_____no_output_____
###Markdown
Monitor Cluster CreationGet the newly created cluster. The `utilities` module contains a helper function to print out the count of each kind of node in the cluster.
###Code
cluster = client.clusters.get(cfg.resource_group, cfg.workspace, cluster_name)
utils.cluster.print_cluster_status(cluster)
###Output
_____no_output_____
###Markdown
3. Run Azure Batch AI Training Job Configure Job- The job will use the `caffe2ai/caffe2` container.- Will run `resnet50_trainer.py` from the SCRIPT input directory;- Will output standard output and error streams to the file share;- Will mount the file share at a folder named `afs`. The full path of this folder on a compute node will be `$AZ_BATCHAI_JOB_MOUNT_ROOT/afs`;- Will mount the Azure Blob Container at a folder named `bfs`. The full path of this folder on a compute node will be `$AZ_BATCHAI_JOB_MOUNT_ROOT/bfs`;- The job needs to know where to find resnet50_trainer.py and the input MNIST dataset. We will create two input directories for this. The job will be able to reference those directories using environment variables: - ```AZ_BATCHAI_INPUT_SCRIPT``` : refers to the directory containing the scripts on the mounted Azure File Share - ```AZ_BATCHAI_INPUT_DATASET``` : refers to the directory containing the training data on the mounted Azure Blob Container- Will use the $AZ_BATCHAI_SHARED_JOB_TEMP shared directory created by Batch AI to coordinate execution between nodes;- For demonstration purposes, we will only run 5 epochs with an epoch size of 2000.
###Code
azure_file_share = 'afs'
azure_blob = 'bfs'
parameters = models.JobCreateParameters(
location=cfg.location,
cluster=models.ResourceId(id=cluster.id),
node_count=2,
mount_volumes=models.MountVolumes(
azure_file_shares=[
models.AzureFileShareReference(
account_name=cfg.storage_account_name,
credentials=models.AzureStorageCredentialsInfo(
account_key=cfg.storage_account_key),
azure_file_url='https://{0}.file.core.windows.net/{1}'.format(
cfg.storage_account_name, azure_file_share_name),
relative_mount_path=azure_file_share)
],
azure_blob_file_systems=[
models.AzureBlobFileSystemReference(
account_name=cfg.storage_account_name,
credentials=models.AzureStorageCredentialsInfo(
account_key=cfg.storage_account_key),
container_name=azure_blob_container_name,
relative_mount_path=azure_blob)
]
),
input_directories = [
models.InputDirectory(
id='SCRIPT',
path='$AZ_BATCHAI_JOB_MOUNT_ROOT/{0}/{1}'.format(azure_file_share, script_directory)),
models.InputDirectory(
id='DATASET',
path='$AZ_BATCHAI_JOB_MOUNT_ROOT/{0}/{1}'.format(azure_blob, mnist_dataset_directory))
],
std_out_err_path_prefix='$AZ_BATCHAI_JOB_MOUNT_ROOT/{0}'.format(azure_file_share),
container_settings=models.ContainerSettings(
image_source_registry=models.ImageSourceRegistry(image='caffe2ai/caffe2')),
caffe2_settings = models.Caffe2Settings(
python_script_file_path='$AZ_BATCHAI_INPUT_SCRIPT/'+script_to_deploy,
command_line_args='--num_shards 2 --shard_id $AZ_BATCHAI_TASK_INDEX --run_id 0 --epoch_size 2000 --num_epochs 5 --train_data $AZ_BATCHAI_INPUT_DATASET/mnist_train_lmdb --file_store_path $AZ_BATCHAI_SHARED_JOB_TEMP'))
###Output
_____no_output_____
###Markdown
Create a training Job and wait for Job completion
###Code
experiment_name = 'caffe2_experiment'
experiment = client.experiments.create(cfg.resource_group, cfg.workspace, experiment_name).result()
job_name = datetime.utcnow().strftime('caffe2_%m_%d_%Y_%H%M%S')
job = client.jobs.create(cfg.resource_group, cfg.workspace, experiment_name, job_name, parameters).result()
print('Created Job {0} in Experiment {1}'.format(job.name, experiment.name))
###Output
_____no_output_____
###Markdown
Wait for Job to FinishThe job will start running when the cluster has enough idle nodes. The following code waits for the job to start running, printing the cluster state as it waits. While the job runs, the code prints the current content of stdout.txt.**Note** Execution may take several minutes to complete.
###Code
utils.job.wait_for_job_completion(client, cfg.resource_group, cfg.workspace,
experiment_name, job_name, cluster_name, 'stdouterr', 'stderr-1.txt')
###Output
_____no_output_____
###Markdown
List stdout.txt and stderr.txt files for the Job
###Code
files = client.jobs.list_output_files(cfg.resource_group, cfg.workspace, experiment_name, job_name,
models.JobsListOutputFilesOptions(outputdirectoryid='stdouterr'))
for f in list(files):
print(f.name, f.download_url or 'directory')
###Output
_____no_output_____
###Markdown
4. Clean Up (Optional) Delete the Job
###Code
_ = client.jobs.delete(cfg.resource_group, cfg.workspace, experiment_name, job_name)
###Output
_____no_output_____
###Markdown
Delete the ClusterWhen you are finished with the sample and don't want to submit any more jobs you can delete the cluster using the following code.
###Code
_ = client.clusters.delete(cfg.resource_group, cfg.workspace, cluster_name)
###Output
_____no_output_____
###Markdown
Delete File ShareWhen you are finished with the sample and don't want to submit any more jobs you can delete the file share completely with all files using the following code.
###Code
service = FileService(cfg.storage_account_name, cfg.storage_account_key)
service.delete_share(azure_file_share_name)
###Output
_____no_output_____ |
Notebooks/ProjectQ_first_program.ipynb | ###Markdown
ProjectQ First Program This exercise is based on the ProjectQ compiler tutorial. See https://github.com/ProjectQ-Framework/ProjectQ/blob/develop/examples/compiler_tutorial.ipynb for the original version.Please check out the [ProjectQ paper](http://arxiv.org/abs/1612.08091) for an introduction to the basic concepts behind this compiler.This exercise will create the program that implements [superdense coding](https://en.wikipedia.org/wiki/Superdense_coding). Load the modules* projectq, includes the main functionality* projectq.backends, includes the backends for the execution of the program (in this notebook you will load the Simulator; the CommandPrinter backend can instead print the final gate sequence generated by the compilers)* projectq.ops, includes the main predefined operations, such as common quantum gates (H, X, etc.) and full quantum subroutines, such as QFT.
###Code
import projectq
from projectq.backends import Simulator
from projectq.ops import CNOT, X, Y, Z, H, Measure,All
###Output
_____no_output_____
###Markdown
**The first step is to create an Engine. The syntax to create this object is using [MainEngine](http://projectq.readthedocs.io/en/latest/projectq.cengines.html#projectq.cengines.MainEngine):*****MainEngine(backend, engine_list, setups, verbose)*****In this case, the selected backend is the [Simulator](http://projectq.readthedocs.io/en/latest/projectq.backends.html#projectq.backends.Simulator), which will simulate the final sequence of gates.**
###Code
# create the compiler and specify the backend:
eng = projectq.MainEngine(backend=Simulator())
###Output
_____no_output_____
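###Markdown
As a side note, the CommandPrinter backend mentioned in the introduction can be used in place of the Simulator to print the compiled gate sequence instead of executing it. A minimal sketch (not used in the rest of this exercise):
###Code
# Optional: an engine whose backend prints the final gate sequence instead of simulating it.
from projectq.backends import CommandPrinter
printer_eng = projectq.MainEngine(backend=CommandPrinter())
###Output
_____no_output_____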
###Markdown
**On this Engine, you must first allocate space for the qubits. You will allocate a register with two qubits.**
###Code
qureg = eng.allocate_qureg(2)
###Output
_____no_output_____
###Markdown
**First, you (Bob) must create the [Bell state](https://en.wikipedia.org/wiki/Bell_state):**$$|\psi\rangle = \frac{1}{\sqrt{2}} (|00\rangle+|11\rangle)$$**To do it, apply a Hadamard gate (H) to qubit 0 and, afterwards, a CNOT gate on qubit 1 using qubit 0 as the control qubit.**qubit 1 = qureg[1]qubit 0 = qureg[0]**To apply operations, ProjectQ uses the syntax:*****Operation | registers*****In the case of CNOT, the first qubit is the control qubit and the second is the target qubit.**
###Code
H | qureg[0]
CNOT | (qureg[0],qureg[1])
###Output
_____no_output_____
###Markdown
In ProjectQ, nothing is computed until you flush the set of gates. At anytime, because you are using the simulator, you can get the state of the Quantum Register using the cheat backend operation. The first part shows how the qubits have been mapped and the second the current quantum state.
###Code
eng.flush()
eng.backend.cheat()
###Output
_____no_output_____
###Markdown
**Now, after you (Bob) sent the qubit 1 to Alice, she applies one gate to transfer the information. The agreed protocol is:*** 00, I* 01, X* 10, Z* 11, Y**Select one option for Alice!**
###Code
Y| qureg[1]
###Output
_____no_output_____
###Markdown
**Now, Alice sends her qubit to Bob, who uncomputes the entanglement (apply the inverse gates in reversed order. CNOT and H are their own inverse)**
###Code
CNOT | (qureg[0],qureg[1])
H | qureg[0]
###Output
_____no_output_____
###Markdown
**And, now, measure the results. In ProjectQ, to get the results, first you must flush the program content, so compilers and backends make their work. In this case, the Simulator.**
###Code
All(Measure) | qureg
eng.flush()
print("Message from Alice: {}{}".format(int(qureg[0]),int(qureg[1])))
###Output
Message from Alice: 11
###Markdown
**You can explore the sequence of engines that have been applied before the Simulator.**
###Code
engines=eng
while engines.next_engine!=None:
print("Engine {}".format(engines.next_engine.__class__.__name__))
engines=engines.next_engine
###Output
Engine TagRemover
Engine LocalOptimizer
Engine AutoReplacer
Engine TagRemover
Engine LocalOptimizer
Engine Simulator
###Markdown
ProjectQ First Program This exercise is based on the ProjectQ compiler tutorial. See https://github.com/ProjectQ-Framework/ProjectQ/blob/develop/examples/compiler_tutorial.ipynb for the original version.Please check out [ProjectQ paper](http://arxiv.org/abs/1612.08091) for an introduction to the basic concepts behind this compiler.This exercise will create the program to make the [superdense coding] (https://en.wikipedia.org/wiki/Superdense_coding). Load the modules* projectq, includes main functionalities* projectq.backends, includes the backends for the execution of the program. Initially, you will load the CommandPrinter, which prints the final gate sequence generated by the compilers.* projectq.operation, includes the main defined operations, as common quantum gates (H,X,etc), or quantum full subroutines, as QFT.
###Code
import projectq
from projectq.backends import Simulator
from projectq.ops import CNOT, X, Y, Z, H, Measure,All
###Output
_____no_output_____
###Markdown
**The first step is to create an Engine. The sintax to create this object is using [MainEngine](http://projectq.readthedocs.io/en/latest/projectq.cengines.htmlprojectq.cengines.MainEngine):*****MainEngine(backend, engine_list, setups, verbose)*****In this case, the selected backend is [Simulator](http://projectq.readthedocs.io/en/latest/projectq.backends.htmlprojectq.backends.Simulator) which will simulate the final sequence of gates.**
###Code
# create the compiler and specify the backend:
eng = projectq.MainEngine(backend=Simulator())
###Output
_____no_output_____
###Markdown
**On this Engine, you must first allocate space for the qubits. You will allocate a register with two qubits.**
###Code
qureg = eng.allocate_qureg(2)
###Output
_____no_output_____
###Markdown
**First, you (Bob) must create the [Bell's state](https://en.wikipedia.org/wiki/Bell_state):**$$|\psi\rangle = \frac{1}{\sqrt{2}} (|00\rangle+|11\rangle)$$**To do it, apply a Hadamard gate (H) to the first qubit and, afterwards, a CNOT gate on qubit 1 using the qubit 0 as control bit.**qubit 1 = qureg[1]qubit 0 = qureg[0]**To apply operations, ProjectQ uses the sintax:*****Operation | registers*****In the case of CNOT, the first qubit is the control qubit, the second the controlled qubit.**
###Code
H | qureg[0]
CNOT | (qureg[0],qureg[1])
###Output
_____no_output_____
###Markdown
In ProjectQ, nothing is computed until you flush the set of gates. At anytime, because you are using the simulator, you can get the state of the Quantum Register using the cheat backend operation. The first part shows how the qubits have been mapped and the second the current quantum state.
###Code
eng.flush()
eng.backend.cheat()
###Output
_____no_output_____
###Markdown
**Now, after you (Bob) sent the qubit 1 to Alice, she applies one gate to transfer the information. The agreed protocol is:*** 00, I* 01, X* 10, Z* 11, Y**Select one option for Alice!**
###Code
Y| qureg[1]
###Output
_____no_output_____
###Markdown
**Now, Alice sends her qubit to Bob, who uncomputes the entanglement (apply the inverse gates in reversed order. CNOT and H are their own inverse)**
###Code
CNOT | (qureg[0],qureg[1])
H | qureg[0]
###Output
_____no_output_____
###Markdown
**And now, measure the results. In ProjectQ, to get the results you must first flush the program content, so the compilers and backends can do their work; in this case, the Simulator.**
###Code
All(Measure) | qureg
eng.flush()
print("Message from Alice: {}{}".format(int(qureg[0]),int(qureg[1])))
###Output
Message from Alice: 11
###Markdown
**You can explore the sequence of engines that have been applied before the Simulator.**
###Code
engines = eng
while engines.next_engine is not None:
    print("Engine {}".format(engines.next_engine.__class__.__name__))
    engines = engines.next_engine
###Output
Engine TagRemover
Engine LocalOptimizer
Engine AutoReplacer
Engine TagRemover
Engine LocalOptimizer
Engine Simulator
|
notebooks/06-Resampling/resampling_notebook.ipynb | ###Markdown
Introduction to Bootstrapping
=======

Estimating errors is something that on the surface seems extremely straightforward, but can in reality be a significant headache. In your labs, you've probably learned the theory behind error propagation, and while that works in a lot of cases, it assumes that you understand the errors in your measurements fully and that your errors are distributed normally. This is almost never the case in astronomy!

In the previous notebook, we found the errors in our measurements of $H_0$ by propagating the uncertainties in our measurements of $\mu$ in our polynomial fitting. In this notebook, we'll instead use bootstrapping. Before we get to the slopes, let's deal with a simpler example: combining tons of measurements of the Hubble constant that have already been made. To start, let's read in those measurements:
###Code
# imports used throughout this notebook
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline

data = pd.read_fwf('hubble_trim.dat',widths=[4,5,5,9,3,80],comment='#',
names=['h0','ep','em','date','method','source'],skiprows=1)
data
###Output
_____no_output_____
###Markdown
In the cell below, plot the measurements of H0, with errors, as a function of time. Plot with a log scale on the y-axis.
###Code
plt.errorbar(data.date, data.h0, yerr=[np.abs(data.em), data.ep], fmt='b.')
plt.semilogy()
plt.xlabel('Date')
plt.ylabel(r'$H_0 (\frac{km/s}{Mpc})$')
plt.show()
###Output
_____no_output_____
###Markdown
We can see that the measurements have become significantly more consistent over time, narrowing down to around 70. Let's do a cut so that we plot just the data from 2001 onward and make the same plot, this time with a linear scale.
###Code
data_modern = data.query('date>2001')
plt.errorbar(data_modern.date, data_modern.h0, yerr=[np.abs(data_modern.em), data_modern.ep], fmt='b.')
plt.xlabel('Date')
plt.ylabel(r'$H_0 (\frac{km/s}{Mpc})$')
plt.show()
###Output
_____no_output_____
###Markdown
As we can see, the errors aren't symmetric. Let's try plotting a histogram of the measurements, with the mean plotted on top as a dotted line.
###Code
hdelta_mean = np.mean(data_modern.h0)
plt.hist(data_modern.h0, bins=np.arange(30,100, 2))
plt.axvline(hdelta_mean, ls='--', color='black')
plt.xlabel(r'$H_0 (\frac{km/s}{Mpc})$')
plt.show()
###Output
_____no_output_____
###Markdown
The bootstrap method is pretty simple: it's sampling with replacement. What we do is draw a random sample of size n from our sample m times, and then we compute the statistic we're after m times. These realizations let us get an idea of what it might look like if we had actually measured the sample m times, and as a result we can get a measure of the statistic that we would have measured. Let's do this for 10000 samples, drawing a sample that is the same length as our measured sample. To do this, we can use np.random.choice()
###Code
ndata=len(data_modern.h0)
nbootstraps=int(1e4)
hboot=np.random.choice(data_modern.h0, size=(nbootstraps,ndata), replace=True)
np.shape(hboot)
###Output
_____no_output_____
###Markdown
hboot is now a 10000x134 array, with each row as a random draw from our distribution. The total distribution of this should match very closely with our original one for this many draws. Let's check that in the below cell, using density=True to normalize the histograms.
###Code
plt.hist(data_modern.h0, density=True, histtype='step', bins=np.arange(30,100, 2), alpha=0.5)
plt.hist(np.ravel(hboot), density=True, histtype='step', bins=np.arange(30,100, 2), alpha=0.5)
plt.xlabel(r'$H_0 (\frac{km/s}{Mpc})$')
plt.show()
###Output
_____no_output_____
###Markdown
This matches perfectly, which is what we expect. Now, let's look at the first 5 draws instead:
###Code
for i in range(0,5):
plt.hist(hboot[i,:], density=True, histtype='step', bins=np.arange(30,100, 2), alpha=0.5)
plt.xlabel(r'$H_0 (\frac{km/s}{Mpc})$')
plt.show()
###Output
_____no_output_____
###Markdown
From these draws, we get different distributions that will measure different statistics. By measuring something like the mean multiple times, we can get a distribution of that statistic which will inform the bounds that we can put on that statistic.
###Code
h0_mean = np.mean(hboot, axis=1)
print('Mean of the original distribution: ', np.mean(data_modern.h0))
print('Median of our bootstrap realizations: ', np.median(h0_mean))
plt.hist(h0_mean, bins=np.arange(64,70, 0.2))
plt.axvline(np.mean(data_modern.h0), ls='--', color='black')
plt.show()
###Output
_____no_output_____
###Markdown
This mean distribution seems to be relatively symmetric, but that doesn't necessarily need to be the case. One of the advantages of bootstrapping is that we can get asymmetric distributions of statistics and quantify confidence levels with percentiles. We can do that with np.percentile()
###Code
print('2.5%, 16%, 50%, 84%, 97.5%: ', np.percentile(h0_mean, [2.5,16,50,84,97.5]))
###Output
_____no_output_____
###Markdown
In the cell below, find the distribution of the median using a bootstrapping method, plot a histogram of the values, and get the 2-sigma upper and lower bounds on that measurement.
###Code
# one possible completion: the median of each bootstrap realization
h0_median = np.median(hboot, axis=1)
plt.hist(h0_median, bins=np.arange(64,73,0.2))
plt.show()
print('Upper and Lower 2-sigma values: ', np.percentile(h0_median, [2.5, 97.5]))
###Output
_____no_output_____
###Markdown
As you can see, the problem with the median here is that because our data is discrete, we don't really get a very evenly sampled set of measurements. One way to handle this is to add in noise to smooth out the measurements. Let's add in randomly sampled noise from a normal distribution. The magnitude of this noise is more of an art than a science; we just want it to be significantly smaller than the actual variance of the data. In this case, drawing from a normal distribution with sigma=1 is a reasonable thing to do.
###Code
sboot = hboot + np.random.randn(nbootstraps,ndata)
h0_median_smoothed = np.median(sboot, axis=1)
plt.hist(h0_median_smoothed, bins=np.arange(64,73,0.2))
plt.show()
print('Upper and Lower 2-sigma values: ', np.percentile(h0_median_smoothed, [2.5, 97.5]))
###Output
_____no_output_____
###Markdown
Simple statistics are very easy to recover with a method like this and are not very computationally expensive. Now, let's return to the slope example from the previous notebook and fit $H_0$ with errors.

Constraining Errors in Our Measurement from Last Session w/ Bootstrapping
======
###Code
# CHANGE THE BELOW LINE TO POINT TO THE DIRECTORY CONTAINING SNDATA.TXT
path = ''
# the pandas way: the file is in "fixed-width format" so we use read_fwf
data=pd.read_fwf(path+'sndata.txt')
cz=data['cz'] #already in km/s
logv = np.log10(data['cz'])
mu=data['mu']
sigma_mu=data['sigma_mu']
weight = 1/sigma_mu**2
coeffs, covar = np.polyfit(logv,mu,1,w=weight,cov=True)
slope_best = coeffs[0]
intercept_best = coeffs[1]
intercept_err_best = np.sqrt(covar[1,1])
def int_to_H0(b):
return(10**(-0.2*b) * 10**5)
#fit for the best fitting H0 value and the symmetric error
h0_best = int_to_H0(intercept_best)
h0_best_err = (int_to_H0(intercept_best-intercept_err_best)-int_to_H0(intercept_best+intercept_err_best))/2
###Output
_____no_output_____
###Markdown
Now, let's synthesize everything we've learned. In the cell below, I want you to generate 10000 bootstrap samples of logv, mu, and sigma_mu. Assume a linear model for Hubble's Law and fit the slope and intercept for each of those realizations. Plot them against one another. Do you see any dependence between them? Where does the best fit value lie?
###Code
nsims = int(1e4)
ndata = len(mu)
#generate 10000 samples, with replacement, of logv, mu, and weight. There are lots of ways to do this
rand_indices = np.random.randint(0, ndata, (nsims, ndata))
#initialize an array zeros for the h0 from the bootstrap methods
h0_boot = np.zeros(nsims)
intercept_arr = np.zeros(nsims)
slope_arr = np.zeros(nsims)
for i in range(0,nsims):
    #in this loop, populate h0_boot by finding the intercepts of our samples
    #(one possible completion: refit the line to each resampled dataset)
    idx = rand_indices[i]
    coeffs_i = np.polyfit(logv.values[idx], mu.values[idx], 1, w=weight.values[idx])
    slope_arr[i] = coeffs_i[0]
    intercept_arr[i] = coeffs_i[1]
    h0_boot[i] = int_to_H0(coeffs_i[1])
plt.plot(intercept_arr, slope_arr, '.', color='blue')
plt.plot(intercept_best, slope_best, '*', color='red')
plt.xlabel('Intercept')
plt.ylabel('Slope')
plt.show()
###Output
_____no_output_____
###Markdown
Once you have the measurements of the intercept, convert them into a measurement of $H_0$ and plot a histogram of the values with the best fitting value as a dashed line. Are our measurements symmetric?
###Code
#plot a histogram of the h0 values (one possible completion)
plt.hist(h0_boot, bins=50)
plt.axvline(h0_best, ls='--', color='black')
plt.xlabel(r'$H_0 (\frac{km/s}{Mpc})$')
plt.show()
###Output
_____no_output_____
###Markdown
Now, let's compare to the error we get from our polynomial fit. Print the best fitting value +/- the error we got from polyfit for that value, as well as the 16, 50, and 84th percentile values from our bootstrap realizations.
###Code
print('Polyfit Values: ',[int_to_H0(intercept_best+intercept_err_best),
h0_best, int_to_H0(intercept_best-intercept_err_best)])
print('Bootstrap Values: ', np.percentile(h0_boot, [16,50,84]))
###Output
_____no_output_____
###Markdown
As you should see, our median is very close to the value we get from our best fit, but the errors from our bootstrap samples capture the asymmetry in the distribution and are larger than the original values. This type of error analysis is really useful for allowing us to capture those types of asymmetries and to account for covariances.

Jackknife Resampling
=======

Jackknife resampling is an older technique that's less widely used now because we have enough computational power to just bootstrap, but we can still go over it briefly. The way it works is that you remove one data point, make the measurement that you're going to make, replace the data point, remove another point, make the measurement, etc., until you've tested the data with every data point removed. The variance of the statistic can be estimated from there as:

$$\sigma_{jack}^2 = (n-1)\,\sigma_{sample}^2$$

where $\sigma_{sample}^2$ is the variance of the measurements we make on the data with one point removed and $\sigma_{jack}$ is an estimate of the true variance. Jackknife methods are generally only useful when you don't have a lot of data points, and they don't really give us any way to estimate confidence intervals the way that bootstrap methods do, so they're not particularly useful for a data set like this. Nevertheless, let's compute them in the cell below.
###Code
ndata = len(mu)
logv_jack = np.zeros((ndata, ndata-1))
mu_jack = np.zeros((ndata, ndata-1))
weight_jack = np.zeros((ndata, ndata-1))
for i in range(0,ndata):
logv_jack[i] = np.concatenate((logv[:i], logv[i+1:]))
mu_jack[i] = np.concatenate((mu[:i], mu[i+1:]))
weight_jack[i] = np.concatenate((weight[:i], weight[i+1:]))
h0_jack = np.zeros(ndata)
for i in range(0,ndata):
    #do the same fitting as above in this loop (one possible completion)
    coeffs_i = np.polyfit(logv_jack[i], mu_jack[i], 1, w=weight_jack[i])
    h0_jack[i] = int_to_H0(coeffs_i[1])
plt.hist(h0_jack, bins=20)
plt.show()
h0_sigma_jack = np.std(h0_jack)*np.sqrt(ndata-1)
print('sigma jackknife, sigma_polyfit')
print(h0_sigma_jack, h0_best_err)
###Output
_____no_output_____ |
Implementations/unsupervised/.ipynb_checkpoints/K-means - Complete-checkpoint.ipynb | ###Markdown
K-means clustering

When working with large datasets it can be helpful to group similar observations together. This process, known as clustering, is one of the most widely used techniques in Machine Learning and is often applied when our dataset comes without pre-existing labels. In this notebook we're going to implement the classic K-means algorithm, the simplest and most widely used clustering method. Once we've implemented it we'll use it to split a dataset into groups and see how our clustering compares to the 'true' labelling.

Import Modules
###Code
import numpy as np
import random
import pandas as pd
import matplotlib.pyplot as plt
from scipy.stats import multivariate_normal
###Output
_____no_output_____
###Markdown
Generate Dataset
###Code
modelParameters = {'mu':[[-2,1], [0.5, -1], [0,1]],
'pi':[0.2, 0.35, 0.45],
'sigma':0.4,
'n':200}
#Check that pi sums to 1
if not np.isclose(np.sum(modelParameters['pi']), 1):  # use isclose to avoid floating-point comparison issues
print('Mixture weights must sum to 1!')
data = []
#determine which mixture each point belongs to
def generateLabels(n, pi):
#Generate n realisations of a categorical distribution given the parameters pi
unif = np.random.uniform(size = n) #Generate uniform random variables
labels = [(u < np.cumsum(pi)).argmax() for u in unif] #assign cluster
return labels
#Given the labels, generate from the corresponding normal distribution
def generateMixture(labels, params):
normalSamples = []
for label in labels:
#Select Parameters
mu = params['mu'][label]
Sigma = np.diag([params['sigma']**2]*len(mu))
#sample from multivariate normal
samp = np.random.multivariate_normal(mean = mu, cov = Sigma, size = 1)
normalSamples.append(samp)
normalSamples = np.reshape(normalSamples, (len(labels), len(params['mu'][0])))
return normalSamples
labels = generateLabels(100, modelParameters['pi']) #labels - (in practice we don't actually know what these are!)
X = generateMixture(labels, modelParameters) #features - (we do know what these are)
###Output
_____no_output_____
###Markdown
Quickly plot the data so we know what it looks like
###Code
plt.figure(figsize=(10,6))
plt.scatter(X[:,0], X[:,1],c = labels)
plt.show()
###Output
_____no_output_____
###Markdown
When doing K-means clustering, our goal is to sort the data into 3 clusters using the data $X$. When we're doing clustering we don't have access to the colour (label) of each point, so the data we're actually given would look like this:
###Code
plt.figure(figsize=(10,6))
plt.scatter(X[:,0], X[:,1])
plt.title('Example data - no labels')
plt.show()
###Output
_____no_output_____
###Markdown
If we inspect the data we can still see that the data are roughly made up of 3 groups: one in the top left corner, one in the top right corner and one in the bottom right corner.

How does K-means work?

The K in K-means represents the number of clusters, K, that we will sort the data into.

Let's imagine we had already sorted the data into K clusters (like in the first plot above) and were trying to decide what the label of a new point should be. It would make sense to assign it to the cluster which it is closest to.

But how do we define 'closest to'? One way would be to give it the same label as the point that is closest to it (a 'nearest neighbour' approach), but a more robust way would be to determine where the 'middle' of each cluster was and assign the new point to the cluster with the closest middle. We call this 'middle' the Cluster Centroid and we calculate it by taking the average of all the points in the cluster. That's all very well and good if we already have the clusters in place, but the whole point of the algorithm is to find out what the clusters are!

To find the clusters, we do the following:

1. Randomly initialise K Cluster Centroids
2. Assign each point to the Cluster Centroid that it is closest to.
3. Update the Cluster Centroids as the average of all points currently assigned to that centroid
4. Repeat steps 2-3 until convergence

Why does K-means work?

Our aim is to find K Cluster Centroids such that the overall distance between each datapoint and its Cluster Centroid is minimised. That is, we want to choose cluster centroids $C = \{C_1,...,C_K\}$ such that the error function:

$$E(C) = \sum_{i=1}^n ||x_i-C_{x_i}||^2$$

is minimised, where $C_{x_i}$ is the Cluster Centroid associated with the ith observation and $||x_i-C_{x_i}||$ is the Euclidean distance between the ith observation and its associated Cluster Centroid.

Now assume that after $m$ iterations of the algorithm, the current value of $E(C)$ was $\alpha$. By carrying out step 2, we make sure that each point is assigned to the nearest cluster centroid - by doing this, either $\alpha$ stays the same (every point was already assigned to the closest centroid) or $\alpha$ gets smaller (one or more points is moved to a nearer centroid and hence the total distance is reduced). Similarly with step 3, by changing the centroid to be the average of all points in the cluster, we minimise the total distance associated with that cluster, meaning $\alpha$ can either stay the same or go down.

In this way we see that as we run the algorithm $E(C)$ is non-increasing, so by continuing to run the algorithm our results can't get worse - hopefully if we run it for long enough then the results will be sensible!
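To make the convergence argument concrete, here is a minimal sketch of how you could evaluate $E(C)$ for a given set of centroids and assignments (the function name and the toy arrays are purely illustrative; they aren't part of the class that follows):

```python
import numpy as np

def kmeans_error(data, centroids, assignments):
    # E(C): sum of squared distances from each point to its assigned centroid
    return sum(np.linalg.norm(x - centroids[c])**2
               for x, c in zip(data, assignments))

# toy example: two 2-d points, each assigned to its own centroid
data = np.array([[0.0, 0.0], [2.0, 2.0]])
centroids = np.array([[0.0, 1.0], [2.0, 1.0]])
print(kmeans_error(data, centroids, [0, 1]))  # 1.0 + 1.0 = 2.0
```

Tracking this quantity after every iteration (for example inside `runKMeans` below) is an easy sanity check that it never increases.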
###Code
class KMeans:
def __init__(self, data, K):
self.data = data #dataset with no labels
self.K = K #Number of clusters to sort the data into
#Randomly initialise Centroids
self.Centroids = np.random.normal(0,1,(self.K, self.data.shape[1])) #If the data has p features then should be a K x p array
def closestCentroid(self, x):
#Takes a single example and returns the index of the closest centroid
distancetoCentroids = [np.linalg.norm(x - centroid) for centroid in self.Centroids]
return np.argmin(distancetoCentroids)
def assignToCentroid(self):
#Want to assign each observation to a centroid by passing each observation to the function closestCentroid
self.assignments = [self.closestCentroid(x) for x in self.data]
def updateCentroids(self):
        #Now, based on the current cluster assignments (stored in self.assignments), update each centroid to be the mean of the points assigned to it
for i in range(self.K):
#For each cluster
observationsInCluster = []
for j, observation in enumerate(self.data):
if self.assignments[j] == i: #If that observation is in the cluster
observationsInCluster.append(observation)
observationsInCluster = np.array(observationsInCluster) #Convert to a numpy array, instead of list of arrays
self.xx = observationsInCluster
self.Centroids[i] = np.mean(observationsInCluster, axis = 0) #Take the means (each observation is a 2d vector) of all observations in cluster
def runKMeans(self, tolerance = 0.00001):
#When the improvement between two successive evaluations of our error function is less than tolerance, we stop
change = 1000 #Initialise change to be a big number
numIterations = 0
self.CentroidStore = [np.copy(self.Centroids)] #We want to be able to keep track of how the centroids evolved over time
while change > tolerance:
#Now we run the algorithm:
#Save current centroid values - we'll need them to check for convergence
self.OldCentroids = np.copy(self.Centroids) #Make sure to use copy otherwise OldCentroid will change alongside Centroids!
#Assign points to closest centroid
self.assignToCentroid()
#Update Cluster Centroids
self.updateCentroids()
self.CentroidStore.append(np.copy(self.Centroids))#Store Centroid values
#Calculate change in cluster centroids
change = np.linalg.norm(self.Centroids - self.OldCentroids)
#Increment iteration count
numIterations += 1
print(f'K-means Algorithm converged in {numIterations} steps')
myKM = KMeans(X,3)
myKM.runKMeans()
###Output
K-means Algorithm converged in 4 steps
###Markdown
Let's plot the results
###Code
c = [0,1,2]*len(myKM.CentroidStore)
plt.figure(figsize=(10,6))
plt.scatter(np.array(myKM.CentroidStore).reshape(-1,2)[:,0], np.array(myKM.CentroidStore).reshape(-1,2)[:,1],c=np.array(c), s = 200, marker = '*')
plt.scatter(X[:,0], X[:,1], s = 12)
plt.title('Example data from a mixture of Gaussians - Cluster Centroid traces')
plt.show()
###Output
_____no_output_____
###Markdown
The stars of each colour above represent the trajectory of each cluster centroid as the algorithm progressed. Starting from a random initialisation, the centroids rapidly converged, each to a separate cluster, which is encouraging.

Now let's plot the data with the associated labels that we've assigned to them.
###Code
plt.figure(figsize=(10,6))
plt.scatter(X[:,0], X[:,1], s = 20, c = myKM.assignments)
plt.scatter(np.array(myKM.Centroids).reshape(-1,2)[:,0], np.array(myKM.Centroids).reshape(-1,2)[:,1], s = 200, marker = '*', c = 'red')
plt.title('Example data from a mixture of Gaussians - Including Cluster Centroids')
plt.show()
###Output
_____no_output_____
###Markdown
The plot above shows the final clusters (with red Cluster Centroids) assigned by the model, which should be pretty close to the 'true' clusters at the top of the page.

Note: It's possible that although the clusters are the same, the labels might be different - remember that K-means isn't supposed to identify the correct label, it's supposed to group the data into clusters which in reality share the same labels.

The data we've worked with in this notebook had an underlying structure that made it easy for K-means to identify distinct clusters. However, let's look at an example where K-means doesn't perform so well.

The sting in the tail - A more complex data structure
###Code
theta = np.linspace(0, 2*np.pi, 100)
r = 15
x1 = r*np.cos(theta)
x2 = r*np.sin(theta)
#Perturb the values in the circle
x1 = x1 + np.random.normal(0,2,x1.shape[0])
x2 = x2 + np.random.normal(0,2,x2.shape[0])
z1 = np.random.normal(0,3,x1.shape[0])
z2 = np.random.normal(0,3,x2.shape[0])
x1 = np.array([x1,z1]).reshape(-1)
x2 = np.array([x2,z2]).reshape(-1)
plt.scatter(x1,x2)
plt.show()
###Output
_____no_output_____
###Markdown
It might be the case that the underlying generative structure that we want to capture is that the 'outer ring' in the plot corresponds to a certain kind of process and the 'inner circle' corresponds to another.
###Code
#Get data in the format we want
newX = []
for i in range(x1.shape[0]):
newX.append([x1[i], x2[i]])
newX = np.array(newX)
#Run KMeans
myNewKM = KMeans(newX,2)
myNewKM.runKMeans()
plt.figure(figsize=(10,6))
plt.scatter(newX[:,0], newX[:,1], s = 20, c = np.array(myNewKM.assignments))
plt.scatter(np.array(myNewKM.Centroids).reshape(-1,2)[:,0], np.array(myNewKM.Centroids).reshape(-1,2)[:,1], s = 200, marker = '*', c = 'red')
plt.title('Assigned K-Means labels for Ring data ')
plt.show()
###Output
_____no_output_____ |
jupyter_notebooks/beta_sample_size_dependency.ipynb | ###Markdown
$\beta$-sample size dependency

$\beta$ is defined as $\beta = \frac{\langle ( \delta m)^2 \rangle_j}{\langle m \rangle_j^2}$, where $m$ is the mean cloud mass flux. Since for a perfect exponential distribution the variance is equal to the square of the mean, $\beta$ is an indicator of whether a distribution is narrower or broader than an exponential distribution.

In this notebook, we test how sensitive this parameter is to the size of the sample drawn from the exponential distribution. This allows us to define a minimum sample size for which we trust our statistics. To do this we draw `n_sample` numbers from an exponential distribution and compute $\beta$. We repeat this `n_iter` times for each sample size and then look at the mean $\beta$. Since our original distribution is perfectly exponential, we expect to get $\beta = 1$.
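As a quick sanity check of that statement: for an exponential distribution with rate $\lambda$,

$$\langle m \rangle = \frac{1}{\lambda}, \qquad \langle (\delta m)^2 \rangle = \frac{1}{\lambda^2}, \qquad \text{so} \qquad \beta = \frac{1/\lambda^2}{(1/\lambda)^2} = 1.$$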
###Code
from __future__ import print_function
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
# Define function to compute beta
def calc_beta(sample):
return np.var(sample, ddof = 1)/np.mean(sample)**2
# Define settings
mean_m = 5.07e7 # Mean m from paper
n_iter = 100000 # Number of iterations
# Define artificial sample sizes
n_sample = (list(range(2, 100, 1)) + list(range(100, 1000, 20)) + list(range(1000, 2000, 200)))  # wrap in list() so the ranges can be concatenated in Python 3
# Loop over sample sizes and compute mean beta over all iterations
beta_list = []
for ns in n_sample:
tmplist_beta = []
for ni in range(n_iter):
sample_beta = np.random.exponential(mean_m, ns)
tmplist_beta.append(calc_beta(sample_beta))
beta_list.append(np.mean(tmplist_beta))
# Plot result
fig, ax = plt.subplots(1,1, figsize=(10, 5))
ax.plot(n_sample, beta_list, linewidth = 2, c = 'k')
ax.set_xlabel('Sample size')
ax.set_ylabel(r'$\beta$')
ax.set_xscale('log')
ax.set_title(r'Dependency of $\beta$ on sample size')
ax.axhline(1, color='gray', zorder=0.1)
plt.tight_layout()
###Output
_____no_output_____ |
.ipynb_checkpoints/multi_class_sentiment_analysis-checkpoint.ipynb | ###Markdown
ROADMAP FOR MULTI-CLASS SENTIMENT ANALYSIS WITH DEEP LEARNING

A practical guide to create increasingly accurate models

(This blog assumes some familiarity with deep learning)

Sentiment analysis quickly gets difficult as we increase the number of classes. For this blog, we'll have a look at what difficulties you might face and how to get around them when you try to solve such a problem. Instead of prioritizing theoretical rigor, I'll focus on how to practically apply some ideas on a toy dataset and how to edge yourself out of a rut. I'll be using **Keras** throughout.

As a disclaimer, I'd say it's unwise to throw the most powerful model at your problem at first glance. Traditional natural language processing methods work surprisingly well on most problems and your initial analysis of the dataset can be built upon with deep learning. However, this blog aims to be a refresher for deep learning techniques _exclusively_ and an implementational baseline or a general flowchart for hackathons or competitions. Theory throughout this post will either be oversimplified or absent, to avoid losing the attention of the casual reader.

The problem
---

We'll analyze a fairly simple dataset I recently came across, which can be downloaded [from here](https://github.com/ad71/multi-class-sentiment-analysis/blob/master/data/data.zip).

About 50 thousand people were asked to respond to a single question,

> "What is one recent incident that made you happy?"

Their responses were tabulated and their reason of happiness was categorized into seven broad classes like 'affection', 'bonding', 'leisure', etc. Additionally, we also know whether the incident happened within 24 hours of the interview or not.

This problem is quite different from your regular positive-negative classification because even though there are seven classes, all the responses are inherently happy and differentiating between them might be quite difficult even for humans.

Before we start, [this is where](https://github.com/ad71/multi-class-sentiment-analysis) you'll find the complete notebook for this blog as well as all the discussed architectures in separate files if you want to tinker with them yourself. You are free to use whatever you find there, however you like, no strings attached.
###Code
import numpy as np
import pandas as pd
import nltk
import gensim
from gensim.models.doc2vec import TaggedDocument
from gensim.models.word2vec import Word2Vec
from gensim.scripts.glove2word2vec import glove2word2vec
from sklearn.manifold import TSNE
from sklearn.decomposition import PCA
from sklearn.utils import class_weight
from sklearn.preprocessing import scale
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import TfidfVectorizer
import tensorflow as tf
import tensorflow_hub as hub
from keras import backend as K
from keras.engine import Layer
from keras.models import Sequential, Model, load_model
from keras.layers import Input, Dense, LSTM, GRU, LeakyReLU, Dropout
from keras.layers import CuDNNLSTM, CuDNNGRU, Embedding, Bidirectional
from keras.callbacks import ModelCheckpoint, TensorBoard, EarlyStopping
from keras.optimizers import Adam
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
import bokeh.plotting as bp
from bokeh.models import HoverTool, BoxSelectTool
from bokeh.plotting import figure, show, output_notebook
import matplotlib.pyplot as plt
%matplotlib inline
###Output
C:\Users\Aman Deep Singh\Anaconda3\envs\tf-gpu\lib\site-packages\gensim\utils.py:1212: UserWarning: detected Windows; aliasing chunkize to chunkize_serial
warnings.warn("detected Windows; aliasing chunkize to chunkize_serial")
Using TensorFlow backend.
###Markdown
The dataset
---

Let's see what we're working with.
###Code
df = pd.read_csv('D:/Datasets/mc-sent/p_train.csv', low_memory=False)
df.head()
###Output
_____no_output_____
###Markdown
Here's what each column means:

- `id` is just a unique id for each sentence
- `period` is the period during which the interviewee had their experience, which can be either during the last 24 hours (`24h`) or the last 3 months (`3m`)
- `response` is the response of the interviewee and the most important independent variable
- `n` is the number of sentences in the response, and
- `sentiment` is our target variable
###Code
labels = df[['id', 'sentiment']]
classes = sorted(labels.sentiment.unique())
classes
###Output
_____no_output_____
###Markdown
Preprocessing
---

To keep the first model simple, we'll go ahead and drop the `n` column. We'll see soon that it doesn't matter anyway.

We'll also drop the `id` column because that's just a random number...or is it? (cue vsauce music)

Assuming anything about the data beforehand will almost always mislead our model. For example, it might be possible that while collecting the data, the ids were assigned serially and it just so happened that every fifth observation was taken in a park full of people, where the predominant cause of happiness was `exercise` or `nature`. This is probably useless in the real world, but insights like these might win you a hackathon. We'll keep it to track if our shuffles are working correctly but we won't be using it for training our models.

And we'll obviously drop the `sentiment` column as it is the target variable.
###Code
df.drop(['n', 'sentiment'], axis=1, inplace=True)
###Output
_____no_output_____
###Markdown
Usually with these problems, the classes are not always balanced, but we'll worry about that later. First, we want to get a simple model up and running to compare our future models with.

Let's quickly convert our categories into one-hot arrays before proceeding further.
###Code
label_to_cat = dict()
for i in range(len(classes)):
dummy = np.zeros((len(classes),), dtype='int8')
dummy[i] = 1
label_to_cat[classes[i]] = dummy
cat_to_label = dict()
for k, v in label_to_cat.items():
cat_to_label[tuple(v)] = k
y = np.array([label_to_cat[label] for label in labels.sentiment])
y[:5]
###Output
_____no_output_____
###Markdown
Converting the response column to lowercase.
###Code
df.response = df.response.apply(str.lower)
df.head()
###Output
_____no_output_____
###Markdown
All the steps up to here are dataset-independent. We would have to go through the same preprocessing steps for our test set as well as all the other models we'll try, regardless of architecture.

Postprocessing
---

Our first few models will follow the traditional approach of doing a lot of work ourselves and gradually move on to higher and higher levels of abstraction. However, the _preprocessing_ step will be common across all pipelines.

Neural networks cannot process strings, let alone strings of arbitrary size, so we first split each response at punctuation and spaces after lowercasing the sentence. This is called tokenization (well...it's a bit more complicated than what I just said).

We'll use the `word_tokenize` function from `nltk` to help us with this.
###Code
def tokenize(df):
df['tokens'] = df['response'].map(lambda x: nltk.word_tokenize(x))
tokenize(df)
df.head()
###Output
_____no_output_____
###Markdown
Stopwords are words that appear way too frequently in the English language to be actually meaningful, like 'a', 'an', 'the', 'there', etc. `nltk.corpus` has a handy `stopwords` function that enumerates these. We could do a stopword removal pass during tokenization, but I decided against it as it might affect the context. The stopword corpus includes 'not', a negation that can flip the emotion of the passage. Moreover, phrases like 'To be or not to be' would be entirely removed. We could make our own corpus of stopwords, but the performance would hardly improve as our dataset is pretty small already. So we drop the idea and move on.

Once we have the tokens, we don't need the original responses, because our model can't make any sense of them anyway.
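As an aside, if you did want to try stopword removal, a minimal sketch would look something like this (it assumes the stopword corpus has already been fetched with `nltk.download('stopwords')`; the `tokens_no_stop` column is purely illustrative and isn't used anywhere else in this notebook):

```python
from nltk.corpus import stopwords

stop_words = set(stopwords.words('english'))
df['tokens_no_stop'] = df['tokens'].map(
    lambda tokens: [tok for tok in tokens if tok not in stop_words])
```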
###Code
df.drop(['response'], axis=1, inplace=True)
###Output
_____no_output_____
###Markdown
It's a great time now to separate a part of the training set into the validation set, to make sure we aren't cheating. As the data is unstructured, a random shuffle will work just fine.
###Code
df_train, df_val, y_train, y_val = train_test_split(df, y, test_size=0.15, random_state=42)
###Output
_____no_output_____
###Markdown
Remove the random-seed parameter if you want a new permutation every run.
###Code
print(df_train.shape, y_train.shape)
print(df_val.shape, y_val.shape)
###Output
(46172, 3) (46172, 7)
(8149, 3) (8149, 7)
###Markdown
Embeddings
---

There is just one more problem. Neural networks work on strictly numerical data and still can't make sense of the tokens in our dataset. We need to find a way to represent each word as a vector, somehow.

Let's take a little detour. Suppose we want to differentiate between _pop_ and _metal_. What are some properties we can use to describe these genres? Let's use percussion, electric guitar, acoustic guitar, synth, happiness, sadness, anger and complexity as the features to describe each genre.

The vector for _pop_ might look something like

$$ (0.5\ \ 0.2\ \ 0.5\ \ 1.0\ \ 0.8\ \ 0.5\ \ 0.2\ \ 0.3) $$

and the one for _metal_ might look like

$$ (0.9\ \ 0.9\ \ 0.3\ \ 0.1\ \ 0.4\ \ 0.5\ \ 0.8\ \ 0.7) $$

So if we want to classify _heavy-metal_, its vector might be

$$ (1.0\ \ 1.0\ \ 0.0\ \ 0.1\ \ 0.1\ \ 0.5\ \ 1.0\ \ 0.9) $$

These vectors can be plotted in an 8-dimensional space, and the euclidean distance (`np.linalg.norm`) between _metal_ and _heavy-metal_ (0.529) is smaller than the euclidean distance between _pop_ and _metal_ (1.476), for example.

Similarly, we can encode every single word in our corpus in some way, to form a vector. We have algorithms that can train a model to generate an n-dimensional vector for each word. We have no way of interpreting (that I know of) what features were selected or what the numbers in the vectors actually mean, but we'll see that they work anyway and similar words huddle up together.

`gensim` provides a handy tool that can train a set of embeddings according to your corpus, but we have to 'Tag' them first as the model accepts a vector of `TaggedDocument` objects.
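Before we tag the documents, here's a quick numpy check of the genre-vector distances from the detour above (the arrays are just the made-up feature vectors, nothing more):

```python
import numpy as np

pop = np.array([0.5, 0.2, 0.5, 1.0, 0.8, 0.5, 0.2, 0.3])
metal = np.array([0.9, 0.9, 0.3, 0.1, 0.4, 0.5, 0.8, 0.7])
heavy_metal = np.array([1.0, 1.0, 0.0, 0.1, 0.1, 0.5, 1.0, 0.9])

print(np.linalg.norm(metal - heavy_metal))  # ~0.529
print(np.linalg.norm(pop - metal))          # ~1.476
```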
###Code
def tag_sentences(sentences, label):
tagged = []
for index, sentence in enumerate(sentences):
label = f'{label}_{index}'
tagged.append(TaggedDocument(sentence, [label]))
return tagged
vector_train_corpus = tag_sentences(df_train.tokens, 'TRAIN')
vector_val_corpus = tag_sentences(df_val.tokens, 'TEST')
###Output
_____no_output_____
###Markdown
A tagged vector looks like this.
###Code
vector_train_corpus[1]
###Output
_____no_output_____
###Markdown
The `Word2Vec` module can train a dictionary of embeddings, given a vector of `TaggedDocument` objects.
###Code
embeddings = Word2Vec(size=200, min_count=3)
embeddings.build_vocab([sentence.words for sentence in vector_train_corpus])
embeddings.train([sentence.words for sentence in vector_train_corpus],
total_examples=embeddings.corpus_count,
epochs=embeddings.epochs)
###Output
_____no_output_____
###Markdown
Let's see if our embeddings are any good.
###Code
embeddings.wv.most_similar('exercise')
###Output
C:\Users\Aman Deep Singh\Anaconda3\envs\tf-gpu\lib\site-packages\gensim\matutils.py:737: FutureWarning: Conversion of the second argument of issubdtype from `int` to `np.signedinteger` is deprecated. In future, it will be treated as `np.int32 == np.dtype(int).type`.
if np.issubdtype(vec.dtype, np.int):
###Markdown
We learnt some good correlations to 'exercise' like 'cardio' and 'workout', but the rest aren't good enough. Anyway, this will do for now.

Visualizing the embeddings

We cannot directly visualize high-dimensional data. To see if our embeddings actually carry useful information, we need to reduce the dimensionality to 2 somehow.

There are two extremely useful techniques, **PCA** (principal component analysis) and **t-SNE** (t-distributed stochastic neighbor embedding), that do just this: flatten high-dimensional data into the best possible representation in the specified number of lower dimensions.

t-SNE is a probabilistic method and takes a while to run, but we'll try both methods for the 2000 most common words in our embeddings.

PCA
###Code
vectors = [embeddings[word] for word in list(embeddings.wv.vocab.keys())[:2000]]
pca = PCA(n_components=2, random_state=42)
pca_vectors = pca.fit_transform(vectors)
reduced_df = pd.DataFrame(pca_vectors, columns=['dim_1', 'dim_2'])
reduced_df['words'] = list(embeddings.wv.vocab.keys())[:2000]
###Output
_____no_output_____
###Markdown
Bokeh is an extremely useful library for interactive plots which has flown under the radar of quite a lot of people for a long time.
###Code
output_notebook()
b_figure = bp.figure(plot_width=700, plot_height=600,
tools='pan, wheel_zoom, box_zoom, reset, hover, previewsave')
b_figure.scatter(x='dim_1', y='dim_2', source=reduced_df)
hovertool = b_figure.select(dict(type=HoverTool))
hovertool.tooltips={'word': '@words'}
show(b_figure)
###Output
_____no_output_____
###Markdown
T-SNE
###Code
tsne = TSNE(n_components=2, n_iter=300, verbose=1, random_state=42)
tsne_vectors = tsne.fit_transform(vectors)
reduced_df = pd.DataFrame(tsne_vectors, columns=['dim_1', 'dim_2'])
reduced_df['words'] = list(embeddings.wv.vocab.keys())[:2000]
output_notebook()
b_figure = bp.figure(plot_width=700, plot_height=600,
tools='pan, wheel_zoom, box_zoom, reset, hover, previewsave')
b_figure.scatter(x='dim_1', y='dim_2', source=reduced_df)
hovertool = b_figure.select(dict(type=HoverTool))
hovertool.tooltips={'word': '@words'}
show(b_figure)
###Output
_____no_output_____
###Markdown
t-SNE usually does a better job showing more separated clusters, while PCA just bunched everything up in the middle in this example. However, performance is dataset-dependent and it never hurts to try both.

Dense networks
---

For our first model, we'll try a very common approach to binary sentiment classification, for which we first need to calculate the `Tf-Idf` score of each word in our corpus. Tf-idf stands for 'Term frequency - inverse document frequency'. If you haven't heard of it, all it does is assign a weight to each word based on the frequency of its appearance in a corpus. Words that appear often, like 'the', 'when' and 'very', will have a low score and the rarer ones, like 'tremendous', 'undergraduate' and 'publication', which might actually help us classify a sentence, will have a higher score.

This is a simple heuristic in order to better understand our data. It is corpus-specific and we can train one for the embedding vectors we generated. The `TfidfVectorizer` class from `sklearn` makes quick work of it and we can fit one to our vectors as follows.
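(For reference, the idf weight that `TfidfVectorizer` assigns with its default smoothing is roughly

$$idf(t) = \ln\frac{1 + N}{1 + df(t)} + 1$$

where $N$ is the number of documents and $df(t)$ is the number of documents containing the term $t$, so frequent words end up with weights close to 1 and rare words with larger ones.)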
###Code
gen_tfidf = TfidfVectorizer(analyzer=lambda x: x, min_df=3)
matrix = gen_tfidf.fit_transform([sentence.words for sentence in vector_train_corpus])
tfidf_map = dict(zip(gen_tfidf.get_feature_names(), gen_tfidf.idf_))
len(tfidf_map)
###Output
_____no_output_____
###Markdown
The `min_df` parameter is a threshold for the minimum frequency. In this case, we do not want to track the `tf-idf` score of a word that appears less than thrice in our corpus.

Now, for every `response` object, we will create a vector of size 200 (the same dimension as our embedding vector). This is our sentence-level embedding. We will take the average of the embedding vectors of each token in each response and weight it by the `tf-idf` score of each word. The embedding for the sentence "I went out for dinner" can be calculated as follows.

The `encode_sentence` function adds up the vector of each token in a sentence, weighted by the tf-idf score, and generates a vector of length 200 for each response.
###Code
def encode_sentence(tokens, emb_size):
    # weighted average of a sentence's word vectors, using tf-idf scores as weights
    _vector = np.zeros((1, emb_size))
    length = 0
    for word in tokens:
        try:
            _vector += embeddings.wv[word].reshape((1, emb_size)) * tfidf_map[word]
            length += 1
        except KeyError:
            # word not in the embedding vocabulary or the tf-idf map; skip it
            continue
    if length > 0:
        _vector /= length
    return _vector
x_train = scale(np.concatenate([encode_sentence(ele, 200) for ele in map(lambda x: x.words, vector_train_corpus)]))
x_val = scale(np.concatenate([encode_sentence(ele, 200) for ele in map(lambda x: x.words, vector_val_corpus)]))
print(x_train.shape, x_val.shape)
###Output
(46172, 200) (8149, 200)
###Markdown
Let's build a simple two-layer dense net. This is just to check if we have done everything correctly up to this point.

Let's call this our zeroth model. A dense net on sequential data without transformations is a joke anyway, right?
###Code
model = Sequential()
model.add(Dense(32, activation='relu', input_dim=200))
model.add(Dense(7, activation='softmax'))
model.compile(optimizer=Adam(lr=1e-3, decay=1e-6),
loss='categorical_crossentropy',
metrics=['accuracy'])
model.summary()
model.fit(x_train, y_train, epochs=10, verbose=1)
score = model.evaluate(x_val, y_val, verbose=1)
score
###Output
8149/8149 [==============================] - 0s 54us/step
###Markdown
We get a loss of 1.41 and a validation accuracy of 0.46. This exact same model manages to get a validation score of about 0.8 on binary sentiment analysis, but given the difference in complexity, hopefully you weren't expecting much.

Throwing in another dense layer doesn't help either.
###Code
model = Sequential()
model.add(Dense(256, activation='relu', input_dim=200))
model.add(Dense(64, activation='relu'))
model.add(Dense(7, activation='softmax'))
model.compile(optimizer=Adam(lr=1e-3),
loss='categorical_crossentropy',
metrics=['accuracy'])
model.summary()
model.fit(x_train, y_train, epochs=10, verbose=1)
score = model.evaluate(x_val, y_val, verbose=1)
score
###Output
8149/8149 [==============================] - 1s 61us/step
###Markdown
Unsurprisingly, the results are still pretty bad, as dense layers cannot capture temporal correlations.

Recurrent networks
---

A recurrent network using LSTM or GRU cells will surely solve the problem, but upon reading the documentation of `keras.layers.LSTM` you'll realize it expects an input batch shape of `(batch_size, timesteps, data_dim)`. Obviously it would want some data along the dimension of time as well, but our encoded vectors have a shape of `(batch_size, data_dim)`.

For our case, `timesteps` refers to the tokens. Instead of averaging out the vectors of each response, we want to keep them as they are. To fit our RNN, we can create a new way of encoding our tokens. We will ignore the tf-idf scores altogether and expect the LSTM to find out whatever useful features it needs for itself over the epochs.

There is just _one_ more problem. LSTMs expect same-sized inputs for each sample, i.e. they want all the sentences to have exactly the same number of words, which we will call the _sequence length_.

To see what we're working with, here's a scatter-plot of the distribution of token lengths in our training set.
###Code
lengths = [len(token) for token in df_train.tokens]
plt.scatter(lengths, range(len(lengths)), alpha=0.2);
print(np.mean(lengths), np.max(lengths))
###Output
20.543121372260245 1349
###Markdown
The longest response was found to be 1349 words long, but the mean length was about 21 words.

You can do broadly two things here: set the sequence length equal to the number of words in the longest response you have found (but you don't know how long the longest response in the test set might be and you might have to truncate anyway), or keep your sequence length close to the mean, but just large enough to not lose much data. We'll see better ways of handling long responses later.

Once we decide our sequence length, longer responses will be truncated and shorter responses will be padded with a vector of zeros (or a vector of the means along the transverse axis, but zeros work just fine).

For now, I'll use a sequence length of 80. No specific reason.
###Code
def encode_sentence_lstm(tokens, emb_size):
vec = np.zeros((80, 200))
for i, word in enumerate(tokens):
if i > 79:
break
try:
vec[i] = embeddings.wv[word].reshape((1, emb_size))
except KeyError:
continue
return vec
x_train = np.array([encode_sentence_lstm(ele, 200) for ele in map(lambda x: x.words, vector_train_corpus)])
x_train.shape
x_val = np.array([encode_sentence_lstm(ele, 200) for ele in map(lambda x: x.words, vector_val_corpus)])
x_val.shape
###Output
_____no_output_____
###Markdown
We're done here. Finally, we can build our first recurrent neural network. I'll use the `CuDNNLSTM` class, which is astronomically faster than the `LSTM` class if you're on a GPU. `LSTM` is so much slower that I don't have the patience to benchmark it for you.

Additionally, let's use the functional API of keras instead of the `.add` syntax for a change. It is a lot more flexible. This is our __actual__ baseline model.
###Code
input_tensor = Input(shape=(80, 200))
x = CuDNNLSTM(256, return_sequences=False)(input_tensor)
x = Dense(64, activation='relu')(x)
output_tensor = Dense(7, activation='softmax')(x)
model = Model(inputs=[input_tensor], outputs=[output_tensor])
model.compile(optimizer=Adam(lr=1e-3),
loss='categorical_crossentropy',
metrics=['accuracy'])
model.summary()
model.fit(x_train, y_train, epochs=10, verbose=1)
score = model.evaluate(x_val, y_val, verbose=1)
score
###Output
8149/8149 [==============================] - 2s 261us/step
###Markdown
The loss now is 0.57 and the validation accuracy is 0.855, which is a great improvement, just as we expected.

keras.layers.Bidirectional

In the current state, our model can just remember the past. It might benefit from a bit of context, maybe read a full phrase before sending an output to the next layer. For example, "It was hilarious to see" and "It was hilarious to see how bad it was" mean very different things.

A bidirectional recurrent neural network (BRNN) overcomes this difficulty by propagating once in the forward direction and once in the backward direction and weighting them appropriately. I don't expect the score to increase much, as sentiment analysis doesn't really need this structure. Machine translation or handwriting recognition can make better use of bidirectional layers, but it never hurts to try. In keras, you can just call `Bidirectional` with your existing layer.

However, Bidirectional LSTMs tend to overfit a bit, so I'll validate after each epoch, just to measure how much impact a bidirectional layer can potentially have. It's a bit unfair to the previous models, but there won't be much improvement anyway.
###Code
input_tensor = Input(shape=(80, 200))
x = Bidirectional(CuDNNLSTM(256, return_sequences=False))(input_tensor)
x = Dense(64, activation='relu')(x)
output_tensor = Dense(7, activation='softmax')(x)
model = Model(inputs=[input_tensor], outputs=[output_tensor])
model.compile(optimizer=Adam(lr=1e-3),
loss='categorical_crossentropy',
metrics=['accuracy'])
model.summary()
model.fit(x_train, y_train, validation_data=(x_val, y_val), epochs=10, verbose=1)
###Output
Train on 46172 samples, validate on 8149 samples
Epoch 1/10
46172/46172 [==============================] - 32s 683us/step - loss: 0.5449 - acc: 0.8107 - val_loss: 0.4574 - val_acc: 0.8346
Epoch 2/10
46172/46172 [==============================] - 31s 667us/step - loss: 0.3964 - acc: 0.8574 - val_loss: 0.4080 - val_acc: 0.8537
Epoch 3/10
46172/46172 [==============================] - 31s 669us/step - loss: 0.3319 - acc: 0.8791 - val_loss: 0.4106 - val_acc: 0.8515
Epoch 4/10
46172/46172 [==============================] - 31s 668us/step - loss: 0.2779 - acc: 0.8982 - val_loss: 0.4111 - val_acc: 0.8604
Epoch 5/10
46172/46172 [==============================] - 31s 672us/step - loss: 0.2406 - acc: 0.9100 - val_loss: 0.4262 - val_acc: 0.8640
Epoch 6/10
46172/46172 [==============================] - 33s 704us/step - loss: 0.1720 - acc: 0.9350 - val_loss: 0.4547 - val_acc: 0.8569
Epoch 7/10
46172/46172 [==============================] - 31s 682us/step - loss: 0.1349 - acc: 0.9505 - val_loss: 0.5029 - val_acc: 0.8604
Epoch 8/10
46172/46172 [==============================] - 31s 670us/step - loss: 0.1018 - acc: 0.9624 - val_loss: 0.5760 - val_acc: 0.8562
Epoch 9/10
46172/46172 [==============================] - 31s 669us/step - loss: 0.0800 - acc: 0.9713 - val_loss: 0.6163 - val_acc: 0.8578
Epoch 10/10
46172/46172 [==============================] - 31s 668us/step - loss: 0.0654 - acc: 0.9763 - val_loss: 0.6862 - val_acc: 0.8573
###Markdown
The best validation accuracy was 0.8640 at the end of epoch 5, a 1% improvement. It's not much, but we'll take it.

keras.layers.Embedding
---

There is a slightly less stupid way of doing this. We can just add a keras `Embedding` layer and skip dealing with gensim altogether. All the document-tagging, vector-building and training will be taken care of by keras. We can skip tokenization as well, as the `Tokenizer` class in keras tokenizes everything in the way `Embedding` likes. You can rerun this notebook up to the preprocessing section, so that your dataframe looks like this:
###Code
df.head()
###Output
_____no_output_____
###Markdown
Shuffle the data
###Code
df_train, df_val, y_train, y_val = train_test_split(df, y, test_size=0.15, random_state=42)
t = Tokenizer()
t.fit_on_texts(df_train.response)
vocab_size = len(t.word_index) + 1
vocab_size
encoded_train_set = t.texts_to_sequences(df_train.response)
len(encoded_train_set)
df_train['tokens'] = encoded_train_set
df_train.drop(['response'], axis=1, inplace=True)
df_train.head()
y_train[:5]
###Output
_____no_output_____
###Markdown
These are our new tokens, which are obviously not all the same length, so we'll quickly pad them with zeros. `pad_sequences` is a handy function to do just this.
###Code
SEQ_LEN = 80
padded_train = pad_sequences(encoded_train_set, maxlen=SEQ_LEN, padding='post')
train_docs = [list(doc) for doc in padded_train]
df_train['tokens'] = train_docs
df_train.head()
###Output
C:\Users\Aman Deep Singh\Anaconda3\envs\tf-gpu\lib\site-packages\ipykernel_launcher.py:2: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy
###Markdown
We'll be using this two-layer RNN extensively to benchmark different approaches. The `Embedding` layer takes in a vocabulary size, the length of each word-vector, the input sequence length and a boolean that tells it whether it should train itself. We set this to False if we're using embeddings from someone else, unless we're transfer-learning or training from scratch.
###Code
input_tensor = Input(shape=(SEQ_LEN,), dtype='int32')
e = Embedding(vocab_size, 300, input_length=SEQ_LEN, trainable=True)(input_tensor)
x = Bidirectional(CuDNNLSTM(128, return_sequences=True))(e)
x = Bidirectional(CuDNNLSTM(64, return_sequences=False))(x)
x = Dense(64, activation='relu')(x)
output_tensor = Dense(7, activation='softmax')(x)
model = Model(input_tensor, output_tensor)
model.compile(optimizer=Adam(lr=1e-3),
loss='categorical_crossentropy',
metrics=['accuracy'])
model.summary()
###Output
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_1 (InputLayer) (None, 80) 0
_________________________________________________________________
embedding_1 (Embedding) (None, 80, 300) 5742600
_________________________________________________________________
bidirectional_1 (Bidirection (None, 80, 256) 440320
_________________________________________________________________
bidirectional_2 (Bidirection (None, 128) 164864
_________________________________________________________________
dense_1 (Dense) (None, 64) 8256
_________________________________________________________________
dense_2 (Dense) (None, 7) 455
=================================================================
Total params: 6,356,495
Trainable params: 6,356,495
Non-trainable params: 0
_________________________________________________________________
###Markdown
`df_train.tokens` returns a list, but we need a numpy array of numpy arrays as our training set
###Code
x_train = np.array([np.array(token) for token in df_train.tokens])
x_train.shape
model.fit(x_train, y_train, epochs=10, verbose=1)
encoded_val_set = t.texts_to_sequences(df_val.response)
len(encoded_val_set)
df_val['tokens'] = encoded_val_set
padded_val = pad_sequences(encoded_val_set, maxlen=SEQ_LEN, padding='post')
val_vectors = [list(doc) for doc in padded_val]
df_val.tokens = val_vectors
df_val.head()
x_val = np.array([np.array(token) for token in df_val.tokens])
print(x_val.shape, y_val.shape)
score = model.evaluate(x_val, y_val, verbose=1)
score
###Output
8149/8149 [==============================] - 3s 341us/step
###Markdown
Our validation score is good. With half the work, we managed to get a slightly better model than the previous one, or is it because we have two LSTM layers this time? The influences are compounded and it might not work out so well for the test set.

There is just one problem. If you train your own embeddings on a dataset this small, you're likely to not generalize well on the test set. Your real-world accuracy might plummet further if you plan to use that model in production. To prevent this, you need to train on a larger dataset, but the 6 million parameters will soon be 6 billion parameters. Besides, it might not be easy to collect more data if you're solving a problem for a company.

Pre-trained embeddings
---

Let's face it. Nobody trains their own embeddings nowadays, unless your model needs to understand domain-specific language. If you take somebody's model, tweak it and call it your own, you'll have better results in less time. Using pre-trained models is part of transfer learning, where you try to create a ripoff of a great model to suit your dataset.

More specifically, there are two very commonly used open-source embeddings that will outperform self-trained embeddings 95 out of 100 times. There's nothing special about them, they're just high-dimensional vectors trained on huge datasets, on hardware more powerful than anything you'll ever own, and they give _the best_ results for most NLP tasks. (Spoiler: No they don't. _Even_ better embeddings were released last year. We'll get to that.)

GloVe

**Glo**bal **Ve**ctors for word representation is a suite of word embeddings trained on a billion tokens with a vocabulary of 400 thousand words. These embeddings can be downloaded [here](https://nlp.stanford.edu/projects/glove/).

From here onwards, we will use the keras `Embedding` layer as it is easier to work with. We'll also use the keras `Tokenizer` class as it works well with `Embedding`. There is a major difference between `keras.preprocessing.text.Tokenizer` and `nltk.word_tokenize`. `Tokenizer` returns a list of numbers, assigned according to frequency, instead of a list of words, and internally maintains a vocabulary dictionary that maps words to numbers.

Restart your kernel and rerun up to the preprocessing section if you're running out of memory. Now is a good time to split our dataset into training and validation sets. We shouldn't be training the tokenizer on data we aren't allowed to see.
###Code
df_train, df_val, y_train, y_val = train_test_split(df, y, test_size=0.15, random_state=42)
t = Tokenizer()
t.fit_on_texts(df_train.response)
vocab_size = len(t.word_index) + 1
vocab_size
encoded_train_set = t.texts_to_sequences(df_train.response)
len(encoded_train_set)
df_train['tokens'] = encoded_train_set
df_train.drop(['response'], axis=1, inplace=True)
df_train.head()
y_train[:5]
###Output
_____no_output_____
###Markdown
We'll `pad_sequences` just like last time.
###Code
SEQ_LEN = 80
padded_train = pad_sequences(encoded_train_set, maxlen=SEQ_LEN, padding='post')
train_docs = [list(doc) for doc in padded_train]
df_train['tokens'] = train_docs
df_train.head()
###Output
C:\Users\Aman Deep Singh\Anaconda3\envs\tf-gpu\lib\site-packages\ipykernel_launcher.py:2: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy
###Markdown
We'll use `gensim` to generate a dictionary of embeddings from the downloaded data; however, the file you downloaded isn't in the format `gensim` likes. Thankfully, there's a workaround for this by gensim themselves. The `glove2word2vec` function converts the file into a set of word2vec-format vectors. We'll save this file in the same directory as the original.
###Code
glove_input = 'D:/Datasets/embeddings/GloVe-6B/glove.6B.300d.txt'
word2vec_output = 'D:/Datasets/embeddings/GloVe-6B/glove.6B.300d.txt.word2vec'
glove2word2vec(glove_input, word2vec_output)
embedding_index = gensim.models.KeyedVectors.load_word2vec_format('D:/Datasets/embeddings/GloVe-6B/glove.6B.300d.txt.word2vec', binary=False)
###Output
_____no_output_____
###Markdown
We just want embeddings for words that are actually in our corpus. Filter out the unwanted words and count the number of words that we don't have embeddings for.
###Code
embedding_matrix = np.zeros((vocab_size, 300))
count = 0
for word, i in t.word_index.items():
try:
embedding_vector = embedding_index[word]
embedding_matrix[i] = embedding_vector
except KeyError:
count += 1
count
embedding_matrix.shape
###Output
_____no_output_____
###Markdown
We still don't have everything we need. For multi-class classification, tracking the accuracy alone is often misleading, especially if you have a class imbalance. You can trivially get 90% accuracy on a dataset that has 90 positive samples and 10 negative samples by always predicting the majority class, but the model will be pretty useless. We should instead track the __F1 score__ as well.

If you know what _precision_ and _recall_ are, you probably know what an _F1 score_ is. __Precision__ measures how many positive-predicted samples were actually positive. __Recall__ measures how many actual positive samples were predicted to be positive. The _F1 score_ is the harmonic mean of the two, which serves as a great metric for tracking your model's progress.

Unfortunately, the native F1 score metric of keras was removed in version 2.0, so we have to write our own. Keras accuracy metrics expect vectors of target classes and predicted classes.
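For reference, writing $TP$, $FP$ and $FN$ for the true positive, false positive and false negative counts in a batch, the quantities computed by the functions below are:

$$\text{precision} = \frac{TP}{TP + FP}, \qquad \text{recall} = \frac{TP}{TP + FN}, \qquad F_1 = \frac{2 \cdot \text{precision} \cdot \text{recall}}{\text{precision} + \text{recall}}$$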
###Code
def recall(y_true, y_pred):
    # round predictions to hard 0/1 and compare true positives against all actual positives
    true_pos = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
    possible_pos = K.sum(K.round(K.clip(y_true, 0, 1)))
    _recall = true_pos / (possible_pos + K.epsilon())
    return _recall

def precision(y_true, y_pred):
    # compare true positives against all predicted positives
    true_pos = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
    predicted_pos = K.sum(K.round(K.clip(y_pred, 0, 1)))
    _precision = true_pos / (predicted_pos + K.epsilon())
    return _precision

def f1(y_true, y_pred):
    # harmonic mean of precision and recall; epsilon avoids division by zero
    p = precision(y_true, y_pred)
    r = recall(y_true, y_pred)
    return 2 * ((p * r) / (p + r + K.epsilon()))
###Output
_____no_output_____
###Markdown
We can finally build our model using the `Embedding` class. The weights will be initialized from `embedding_matrix` and `trainable` will be set to False. Setting `trainable` to True usually gives slightly better results at the expense of ~6 million more trainable variables (corpus dependent). Suit yourself.

As an aside, I will intentionally leave out GRUs throughout this notebook, as LSTMs almost always work better in practice. But you can try them out yourself: just replace `LSTM` with `GRU`, or `CuDNNLSTM` with `CuDNNGRU` if you're on a GPU.
###Code
input_tensor = Input(shape=(SEQ_LEN,), dtype='int32')
e = Embedding(vocab_size, 300, weights=[embedding_matrix], input_length=SEQ_LEN, trainable=False)(input_tensor)
x = Bidirectional(CuDNNLSTM(128, return_sequences=True))(e)
x = Bidirectional(CuDNNLSTM(64, return_sequences=False))(x)
x = Dense(64, activation='relu')(x)
output_tensor = Dense(7, activation='softmax')(x)
model = Model(input_tensor, output_tensor)
model.compile(optimizer=Adam(lr=1e-3),
loss='categorical_crossentropy',
metrics=['accuracy', f1])
model.summary()
###Output
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_1 (InputLayer) (None, 80) 0
_________________________________________________________________
embedding_1 (Embedding) (None, 80, 300) 5742600
_________________________________________________________________
bidirectional_1 (Bidirection (None, 80, 256) 440320
_________________________________________________________________
bidirectional_2 (Bidirection (None, 128) 164864
_________________________________________________________________
dense_1 (Dense) (None, 64) 8256
_________________________________________________________________
dense_2 (Dense) (None, 7) 455
=================================================================
Total params: 6,356,495
Trainable params: 613,895
Non-trainable params: 5,742,600
_________________________________________________________________
###Markdown
`df_train.tokens` returns a list, but we need a numpy array of numpy arrays as our training set.
###Code
x_train = np.array([np.array(token) for token in df_train.tokens])
x_train.shape
model.fit(x_train, y_train, epochs=10, verbose=1)
###Output
Epoch 1/10
46172/46172 [==============================] - 46s 992us/step - loss: 0.5379 - acc: 0.8149 - f1: 0.8076
Epoch 2/10
46172/46172 [==============================] - 41s 889us/step - loss: 0.3644 - acc: 0.8686 - f1: 0.8674
Epoch 3/10
46172/46172 [==============================] - 42s 911us/step - loss: 0.2963 - acc: 0.8905 - f1: 0.8908
Epoch 4/10
46172/46172 [==============================] - 44s 953us/step - loss: 0.2369 - acc: 0.9129 - f1: 0.9121
Epoch 5/10
46172/46172 [==============================] - 42s 914us/step - loss: 0.1790 - acc: 0.9338 - f1: 0.9343
Epoch 6/10
46172/46172 [==============================] - 41s 878us/step - loss: 0.1295 - acc: 0.9527 - f1: 0.9530
Epoch 7/10
46172/46172 [==============================] - 42s 912us/step - loss: 0.0890 - acc: 0.9671 - f1: 0.9672
Epoch 8/10
46172/46172 [==============================] - 40s 871us/step - loss: 0.0649 - acc: 0.9773 - f1: 0.9774
Epoch 9/10
46172/46172 [==============================] - 41s 894us/step - loss: 0.0486 - acc: 0.9820 - f1: 0.9820
Epoch 10/10
46172/46172 [==============================] - 41s 897us/step - loss: 0.0378 - acc: 0.9868 - f1: 0.9868
###Markdown
Let's validate our model. We'll go through the exact same preprocessing steps as our training set.
###Code
encoded_val_set = t.texts_to_sequences(df_val.response)
len(encoded_val_set)
df_val['tokens'] = encoded_val_set
padded_val = pad_sequences(encoded_val_set, maxlen=SEQ_LEN, padding='post')
val_vectors = [list(doc) for doc in padded_val]
df_val.tokens = val_vectors
df_val.head()
x_val = np.array([np.array(token) for token in df_val.tokens])
print(x_val.shape, y_val.shape)
score = model.evaluate(x_val, y_val, verbose=1)
score
###Output
8149/8149 [==============================] - 3s 388us/step
###Markdown
The validation score this time is 0.88, but pre-trained embeddings will almost certainly generalize better to the test set or real-world data, and handle anomalies more effectively.

Word2Vec
Google released their pre-trained Word2Vec embeddings a few years ago, trained on the __Google News__ corpus of roughly 100 billion words; the released model covers about 3 million words and phrases. You can download the vectors [here](https://drive.google.com/file/d/0B7XkCwpI5KDYNlNUTTlSS21pQmM/edit?usp=sharing).

Let's split the dataset.
###Code
df_train, df_val, y_train, y_val = train_test_split(df, y, test_size=0.15, random_state=42)
t = Tokenizer()
t.fit_on_texts(df_train.response)
vocab_size = len(t.word_index) + 1
vocab_size
encoded_train_set = t.texts_to_sequences(df_train.response)
len(encoded_train_set)
df_train['tokens'] = encoded_train_set
df_train.drop(['response'], axis=1, inplace=True)
df_train.head()
y_train[:5]
SEQ_LEN = 80
padded_train = pad_sequences(encoded_train_set, maxlen=SEQ_LEN, padding='post')
train_vectors = [list(doc) for doc in padded_train]
df_train.tokens = train_vectors
lengths = [len(doc) for doc in train_vectors]
np.mean(lengths)
###Output
_____no_output_____
###Markdown
This time, the downloaded file is in a format `gensim` can import directly. The model and everything else is exactly the same as above, and we'll still be tracking the F1 score.
###Code
embeddings_index = gensim.models.KeyedVectors.load_word2vec_format('D:/Datasets/embeddings/Word2Vec/GoogleNews-vectors-negative300.bin', binary=True)
embedding_matrix = np.zeros((vocab_size, 300))
count = 0
for word, i in t.word_index.items():
try:
embedding_vector = embeddings_index[word]
embedding_matrix[i] = embedding_vector
except KeyError:
count += 1
count
embedding_matrix.shape
input_tensor = Input(shape=(SEQ_LEN,), dtype='int32')
e = Embedding(vocab_size, 300, weights=[embedding_matrix], input_length=SEQ_LEN, trainable=False)(input_tensor)
x = Bidirectional(CuDNNLSTM(128, return_sequences=True))(e)
x = Bidirectional(CuDNNLSTM(64, return_sequences=False))(x)
x = Dense(128, activation='relu')(x)
output = Dense(7, activation='softmax')(x)
model = Model(input_tensor, output)
model.compile(optimizer=Adam(lr=1e-3), loss='categorical_crossentropy', metrics=['accuracy', f1])
model.summary()
x_train = np.array([np.array(token) for token in df_train.tokens])
x_train.shape
model.fit(x_train, y_train, epochs=10, verbose=1)
encoded_val_set = t.texts_to_sequences(df_val.response)
len(encoded_val_set)
df_val['tokens'] = encoded_val_set
df_val.head()
padded_val = pad_sequences(encoded_val_set, maxlen=SEQ_LEN, padding='post')
val_vectors = [list(doc) for doc in padded_val]
df_val.tokens = val_vectors
df_val.head()
lengths = [len(doc) for doc in val_vectors]
np.mean(lengths)
x_val = np.array([np.array(token) for token in df_val.tokens])
print(x_val.shape, y_val.shape)
score = model.evaluate(x_val, y_val, verbose=1)
score
###Output
8149/8149 [==============================] - 3s 429us/step
###Markdown
The validation score is 0.879 this time, which is a very small difference from the previous model, so we can't objectively say which model is better. Word2Vec is usually slightly better than GloVe on most NLP applications, but this time it wasn't.

Debugging
Over the last few models, our validation score has parked itself at about 0.88, which leads us to wonder: is this the best accuracy we can reach? Our training accuracies have almost always surpassed 95%, so are we overfitting? Or are we underfitting? Maybe adding more layers interspersed with Dropout layers or other regularization will help?

For multi-class classification, if you have flatlined like this, the answer to these questions is almost always no. This is where you should have a look at your dataset. Plot all the charts you think might be useful and try to gain some insights. Maybe plotting the confusion matrix for our last model will help.
###Code
y_pred = model.predict(x_val, verbose=1)
print(y_pred.shape, y_val.shape)
###Output
(8149, 7) (8149, 7)
###Markdown
The confusion matrix cannot handle one-hot vectors, so let's convert them into integer classes.
###Code
y_pred_class = np.array([np.argmax(x) for x in y_pred])
y_val_class = np.array([np.argmax(x) for x in y_val])
print(y_pred_class.shape, y_val_class.shape)
c = confusion_matrix(y_val_class, y_pred_class)
classes = [v for k, v in cat_to_label.items()]
plt.figure(figsize=(20, 20))
plt.imshow(c, interpolation='nearest', cmap='jet')
plt.colorbar()
ticks = np.arange(len(classes))
plt.xticks(ticks, classes, rotation=45)
plt.yticks(ticks, classes)
plt.ylabel('True class')
plt.xlabel('Predicted class')
plt.tight_layout()
###Output
_____no_output_____
###Markdown
It classified 'achievement' and 'affection' pretty accurately, was horrible at classifying 'nature' and 'exercise', and was pretty bad at everything else. Our model was also somewhat confused between 'achievement' and 'enjoy_the_moment', which, if you think about it, would sometimes be the case even for a human.

Right now, our model is basically an affection classifier.

The large discrepancy between the accuracies of different classes is what stands out, and it only means one thing: class imbalance. Let's plot a pie chart to see how bad it is.
###Code
plt.figure(figsize=(7, 7))
class_counts = labels.sentiment.value_counts()
plt.pie(class_counts, labels=class_counts.index);  # label each wedge with its own class name
###Output
_____no_output_____
###Markdown
Turns out, it's pretty bad!
###Code
labels.sentiment.value_counts()
###Output
_____no_output_____
###Markdown
The smallest class, 'exercise', has only about 3.5% as many samples as the largest class, 'achievement'. Ideally you would want the exact same number of samples for every class in your training set; in practice, a little variance doesn't hurt.

Sampling
To overcome this problem, there are a few things we can do, the first being sampling. To balance our datasets, we can __oversample__ instances of the minority classes or __undersample__ instances of the majority classes. Both come with their disadvantages, however, which are more prominent in datasets with a greater imbalance, like ours. __Oversampling__ the minority tends to overfit the model because of the heavy duplication, while __undersampling__ might leave crucial information out. A more powerful sampling method, __SMOTE__, artificially generates new instances of the minority class by interpolating between existing minority samples and their nearest neighbors, but even this doesn't eliminate overfitting.

We won't try undersampling, as it would leave our training set with about 4500 samples, which is too small even for binary classification. Let's try oversampling. We won't make the number of samples exactly equal, but we'll bring them within the same ballpark. We'll start afresh.
###Code
df = pd.read_csv('D:/Datasets/mc-sent/p_train.csv', low_memory=False)
df.head()
###Output
_____no_output_____
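###Markdown
As an aside, here is a minimal sketch of what SMOTE usage looks like with the `imbalanced-learn` package. SMOTE synthesizes new minority samples by interpolating between a real minority sample and one of its nearest minority-class neighbors, so it expects continuous feature vectors; since our inputs are integer token sequences, we stick to plain replication in this notebook. The toy data below is made up purely for illustration.
###Code
try:
    from imblearn.over_sampling import SMOTE
    X_toy = np.random.rand(60, 5)            # 60 samples, 5 continuous features
    y_toy = np.array([0] * 50 + [1] * 10)    # imbalanced binary labels
    X_res, y_res = SMOTE(k_neighbors=5, random_state=42).fit_resample(X_toy, y_toy)
    print(np.bincount(y_toy), '->', np.bincount(y_res))  # the minority class gets upsampled to match
except ImportError:
    print('imbalanced-learn is not installed; skipping the SMOTE illustration')
###Output
_____no_output_____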
###Markdown
We need to first split our training and validation sets. Since we normally wouldn't augment our test set, we shouldn't augment our validation set either.
###Code
df, df_val = train_test_split(df, test_size=0.15, random_state=42)
labels = df[['id', 'sentiment']]
classes = sorted(labels.sentiment.unique())
classes
###Output
_____no_output_____
###Markdown
Let's separate the dataframes by sentiment.
###Code
dfs = []
for sentiment in classes:
df_temp = df.where(df.sentiment == sentiment)
df_temp.dropna(axis=0, inplace=True)
dfs.append(df_temp)
ls = [len(df) for df in dfs]
dfs[0].head()
ls
###Output
_____no_output_____
###Markdown
`pd.concat([df] * int(max(ls) / len(df)))` generates a new dataframe with `df` replicated the required number of times. We can write a one-liner to generate a list of augmented dataframes.
###Code
new_dfs = [pd.concat([df]*int(max(ls)/len(df)), ignore_index=True)
for df in dfs]
new_ls = [len(df) for df in new_dfs]
new_ls
###Output
_____no_output_____
###Markdown
The new classes look pretty balanced. Let's concatenate everything into one large dataframe.
###Code
df = pd.concat(new_dfs, ignore_index=True)
labels = df[['id', 'sentiment']]
print(df.shape, len(labels))
classes = sorted(labels.sentiment.unique())
classes
plt.figure(figsize=(7, 7))
class_counts = labels.sentiment.value_counts()
plt.pie(class_counts, labels=class_counts.index);  # label each wedge with its own class name
###Output
_____no_output_____
###Markdown
Looks good. We just have to try preventing overfitting.
###Code
df.drop(['n', 'sentiment'], axis=1, inplace=True)
label_to_cat = dict()
for i in range(len(classes)):
dummy = np.zeros((len(classes),), dtype='int8')
dummy[i] = 1
label_to_cat[classes[i]] = dummy
cat_to_label = dict()
for k, v in label_to_cat.items():
cat_to_label[tuple(v)] = k
y = np.array([label_to_cat[label] for label in labels.sentiment])
y[:5]
df.response = df.response.apply(str.lower)
df.head()
###Output
_____no_output_____
###Markdown
Let's shuffle the dataset.
###Code
df_train, _, y_train, _ = train_test_split(df, y, test_size=0, random_state=42)
print(df_train.shape, y_train.shape)
print(df_val.shape)
###Output
(105576, 3) (105576, 7)
(8149, 5)
###Markdown
We'll use the GoogleNews Word2Vec model to train on this set. All the steps are exactly the same.
###Code
t = Tokenizer()
t.fit_on_texts(df_train.response)
vocab_size = len(t.word_index) + 1
vocab_size
encoded_train_set = t.texts_to_sequences(df_train.response)
len(encoded_train_set)
df_train['tokens'] = encoded_train_set
df_train.drop(['response'], axis=1, inplace=True)
df_train.head()
y_train[:5]
SEQ_LEN = 80
padded_train = pad_sequences(encoded_train_set, maxlen=SEQ_LEN, padding='post')
train_vectors = [list(doc) for doc in padded_train]
df_train.tokens = train_vectors
np.mean([len(doc) for doc in train_vectors])
embeddings_index = gensim.models.KeyedVectors.load_word2vec_format('D:/Datasets/embeddings/Word2Vec/GoogleNews-vectors-negative300.bin', binary=True)
embedding_matrix = np.zeros((vocab_size, 300))
count = 0
for word, i in t.word_index.items():
try:
embedding_vector = embeddings_index[word]
embedding_matrix[i] = embedding_vector
except KeyError:
count += 1
count
embedding_matrix.shape
input_tensor = Input(shape=(SEQ_LEN,), dtype='int32')
e = Embedding(vocab_size, 300, weights=[embedding_matrix], input_length=SEQ_LEN, trainable=False)(input_tensor)
x = Bidirectional(CuDNNLSTM(128, return_sequences=True))(e)
x = Bidirectional(CuDNNLSTM(64, return_sequences=False))(x)
x = Dense(128, activation='relu')(x)
output = Dense(7, activation='softmax')(x)
model = Model(input_tensor, output)
###Output
_____no_output_____
###Markdown
This will take longer to train, so let's validate after each epoch and save a checkpoint each time our validation score improves. We just need to prepare our validation set.
###Code
df_val.head()
val_labels = df_val[['id', 'sentiment']]
df_val.drop(['n', 'sentiment'], axis=1, inplace=True)
df_val.response = df_val.response.str.lower()
df_val.head()
encoded_val_set = t.texts_to_sequences(df_val.response)
np.mean([len(doc) for doc in encoded_val_set])
df_val['tokens'] = encoded_val_set
df_val.drop(['response'], axis=1, inplace=True)
padded_val = pad_sequences(encoded_val_set, maxlen=SEQ_LEN, padding='post')
val_vectors = [list(doc) for doc in padded_val]
df_val.tokens = val_vectors
df_val.head()
np.mean([len(doc) for doc in val_vectors])
x_val = np.array([np.array(token) for token in df_val.tokens])
x_val.shape
y_val = np.array([np.array(label_to_cat[label]) for label in val_labels.sentiment])
y_val.shape
y_val[:5]
###Output
_____no_output_____
###Markdown
The `ModelCheckpoint` callback expects a file path and a metric to monitor. `save_best_only` is set to True to save us some disk space. Additionally, I have set the optimizer's learning-rate decay to $10^{-6}$, so the learning rate shrinks gradually over the course of training, as our model will overfit pretty quickly.
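(In this version of keras the `decay` argument is applied per parameter update rather than per epoch, roughly $lr_t = \frac{lr_0}{1 + \text{decay} \cdot t}$ with $t$ the update count, so its effect accumulates over the many batches within an epoch.)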
###Code
checkpoint = ModelCheckpoint('D:/Datasets/mc-sent/models/w2v_balanced_v1.hdf5',
monitor='val_acc',
save_best_only=True,
mode='max',
verbose=1)
model.compile(optimizer=Adam(lr=1e-3, decay=1e-6),
loss='categorical_crossentropy',
metrics=['accuracy', f1])
model.summary()
x_train = np.array([np.array(token) for token in df_train.tokens])
x_train.shape
model.fit(x_train, y_train,
validation_data=[x_val, y_val],
callbacks=[checkpoint],
epochs=10,
verbose=1)
###Output
Train on 105576 samples, validate on 8149 samples
Epoch 1/10
105576/105576 [==============================] - 79s 749us/step - loss: 0.4303 - acc: 0.8509 - f1: 0.8461 - val_loss: 0.4505 - val_acc: 0.8347 - val_f1: 0.8349
Epoch 00001: val_acc improved from -inf to 0.83470, saving model to D:/Datasets/mc-sent/models/w2v_balanced_v1.hdf5
Epoch 2/10
105576/105576 [==============================] - 77s 731us/step - loss: 0.2673 - acc: 0.9056 - f1: 0.9055 - val_loss: 0.3595 - val_acc: 0.8687 - val_f1: 0.8697
Epoch 00002: val_acc improved from 0.83470 to 0.86870, saving model to D:/Datasets/mc-sent/models/w2v_balanced_v1.hdf5
Epoch 3/10
105576/105576 [==============================] - 79s 750us/step - loss: 0.1873 - acc: 0.9331 - f1: 0.9330 - val_loss: 0.3844 - val_acc: 0.8671 - val_f1: 0.8670
Epoch 00003: val_acc did not improve from 0.86870
Epoch 4/10
105576/105576 [==============================] - 76s 717us/step - loss: 0.1294 - acc: 0.9548 - f1: 0.9547 - val_loss: 0.3857 - val_acc: 0.8833 - val_f1: 0.8840
Epoch 00004: val_acc improved from 0.86870 to 0.88330, saving model to D:/Datasets/mc-sent/models/w2v_balanced_v1.hdf5
Epoch 5/10
105576/105576 [==============================] - 88s 830us/step - loss: 0.0901 - acc: 0.9685 - f1: 0.9686 - val_loss: 0.4271 - val_acc: 0.8806 - val_f1: 0.8812
Epoch 00005: val_acc did not improve from 0.88330
Epoch 6/10
105576/105576 [==============================] - 93s 883us/step - loss: 0.0660 - acc: 0.9773 - f1: 0.9774 - val_loss: 0.4792 - val_acc: 0.8812 - val_f1: 0.8817
Epoch 00006: val_acc did not improve from 0.88330
Epoch 7/10
105576/105576 [==============================] - 89s 845us/step - loss: 0.0479 - acc: 0.9833 - f1: 0.9834 - val_loss: 0.5615 - val_acc: 0.8842 - val_f1: 0.8852
Epoch 00007: val_acc improved from 0.88330 to 0.88416, saving model to D:/Datasets/mc-sent/models/w2v_balanced_v1.hdf5
Epoch 8/10
105576/105576 [==============================] - 81s 772us/step - loss: 0.0392 - acc: 0.9873 - f1: 0.9873 - val_loss: 0.5830 - val_acc: 0.8880 - val_f1: 0.8884
Epoch 00008: val_acc improved from 0.88416 to 0.88796, saving model to D:/Datasets/mc-sent/models/w2v_balanced_v1.hdf5
Epoch 9/10
105576/105576 [==============================] - 78s 737us/step - loss: 0.0288 - acc: 0.9907 - f1: 0.9907 - val_loss: 0.6150 - val_acc: 0.8929 - val_f1: 0.8929
Epoch 00009: val_acc improved from 0.88796 to 0.89287, saving model to D:/Datasets/mc-sent/models/w2v_balanced_v1.hdf5
Epoch 10/10
105576/105576 [==============================] - 76s 719us/step - loss: 0.0255 - acc: 0.9916 - f1: 0.9916 - val_loss: 0.6297 - val_acc: 0.8899 - val_f1: 0.8898
Epoch 00010: val_acc did not improve from 0.89287
###Markdown
Training accuracy reached 99.16%, but validation accuracy didn't cross 90%. Though this is the best result we've got so far, we definitely did overfit. Using the same dataset, we'll now try a bigger model, but with more regularization, in an attempt to reduce overfitting. Additionally, let's use `LeakyReLU` activations.

If you use `LeakyReLU` as the activation function of a layer in keras, calling `model.save()` later will give you this error (at the time of writing this blog): `AttributeError: 'LeakyReLU' object has no attribute '__name__'`. To fix this, you have to add `LeakyReLU` as a separate layer.

We'll use LeakyReLU with `alpha = 0.1`, and additionally, `Dropout` will be used for regularization.
###Code
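# Workaround from above in action: instead of passing the activation to Dense
# (e.g. Dense(256, activation=LeakyReLU(alpha=0.1)), which triggers the model.save() error),
# each Dense layer below is followed by a separate LeakyReLU layer.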
input_tensor = Input(shape=(SEQ_LEN,), dtype='int32')
e = Embedding(vocab_size, 300, weights=[embedding_matrix], input_length=SEQ_LEN, trainable=False)(input_tensor)
x = Bidirectional(CuDNNLSTM(128, return_sequences=True))(e)
x = Bidirectional(CuDNNLSTM(64, return_sequences=False))(x)
x = Dense(256)(x)
x = LeakyReLU(alpha=0.1)(x)
x = Dropout(0.6)(x)
x = Dense(128)(x)
x = LeakyReLU(alpha=0.1)(x)
x = Dropout(0.5)(x)
x = Dense(64)(x)
x = LeakyReLU(alpha=0.1)(x)
x = Dropout(0.4)(x)
output_tensor = Dense(7, activation='softmax')(x)
model = Model(input_tensor, output_tensor)
checkpoint = ModelCheckpoint('D:/Datasets/mc-sent/models/w2v_balanced_v1.hdf5',
monitor='val_acc',
save_best_only=True,
mode='max',
verbose=1)
model.compile(optimizer=Adam(lr=1e-3, decay=1e-6),
loss='categorical_crossentropy',
metrics=['accuracy', f1])
model.summary()
model.fit(x_train, y_train,
validation_data=[x_val, y_val],
callbacks=[checkpoint],
epochs=10,
verbose=1)
###Output
Train on 105576 samples, validate on 8149 samples
Epoch 1/10
105576/105576 [==============================] - 89s 840us/step - loss: 0.5632 - acc: 0.8177 - f1: 0.8021 - val_loss: 0.4719 - val_acc: 0.8386 - val_f1: 0.8368
Epoch 00001: val_acc improved from -inf to 0.83863, saving model to D:/Datasets/mc-sent/models/w2v_balanced_v1.hdf5
Epoch 2/10
105576/105576 [==============================] - 91s 859us/step - loss: 0.3406 - acc: 0.8905 - f1: 0.8880 - val_loss: 0.4258 - val_acc: 0.8394 - val_f1: 0.8388
Epoch 00002: val_acc improved from 0.83863 to 0.83937, saving model to D:/Datasets/mc-sent/models/w2v_balanced_v1.hdf5
Epoch 3/10
105576/105576 [==============================] - 93s 882us/step - loss: 0.2582 - acc: 0.9148 - f1: 0.9143 - val_loss: 0.4269 - val_acc: 0.8494 - val_f1: 0.8493
Epoch 00003: val_acc improved from 0.83937 to 0.84943, saving model to D:/Datasets/mc-sent/models/w2v_balanced_v1.hdf5
Epoch 4/10
105576/105576 [==============================] - 87s 824us/step - loss: 0.1981 - acc: 0.9350 - f1: 0.9344 - val_loss: 0.3708 - val_acc: 0.8850 - val_f1: 0.8854
Epoch 00004: val_acc improved from 0.84943 to 0.88502, saving model to D:/Datasets/mc-sent/models/w2v_balanced_v1.hdf5
Epoch 5/10
105576/105576 [==============================] - 85s 803us/step - loss: 0.1471 - acc: 0.9507 - f1: 0.9508 - val_loss: 0.4497 - val_acc: 0.8645 - val_f1: 0.8647
Epoch 00005: val_acc did not improve from 0.88502
Epoch 6/10
105576/105576 [==============================] - 85s 807us/step - loss: 0.1135 - acc: 0.9639 - f1: 0.9636 - val_loss: 0.4745 - val_acc: 0.8806 - val_f1: 0.8806
Epoch 00006: val_acc did not improve from 0.88502
Epoch 7/10
105576/105576 [==============================] - 84s 799us/step - loss: 0.0883 - acc: 0.9721 - f1: 0.9721 - val_loss: 0.5491 - val_acc: 0.8856 - val_f1: 0.8856
Epoch 00007: val_acc improved from 0.88502 to 0.88563, saving model to D:/Datasets/mc-sent/models/w2v_balanced_v1.hdf5
Epoch 8/10
105576/105576 [==============================] - 84s 797us/step - loss: 0.0727 - acc: 0.9770 - f1: 0.9771 - val_loss: 0.5372 - val_acc: 0.8871 - val_f1: 0.8885
Epoch 00008: val_acc improved from 0.88563 to 0.88710, saving model to D:/Datasets/mc-sent/models/w2v_balanced_v1.hdf5
Epoch 9/10
105576/105576 [==============================] - 82s 778us/step - loss: 0.0584 - acc: 0.9820 - f1: 0.9820 - val_loss: 0.6433 - val_acc: 0.8873 - val_f1: 0.8875
Epoch 00009: val_acc improved from 0.88710 to 0.88735, saving model to D:/Datasets/mc-sent/models/w2v_balanced_v1.hdf5
Epoch 10/10
105576/105576 [==============================] - 81s 771us/step - loss: 0.0518 - acc: 0.9842 - f1: 0.9842 - val_loss: 0.6485 - val_acc: 0.8828 - val_f1: 0.8834
Epoch 00010: val_acc did not improve from 0.88735
###Markdown
Our validation accuracy did not change much even though training accuracy crossed 98%. The regularized model isn't doing any better either; we overfit again, since duplicating the minority samples doesn't add any new information. Let's plot the confusion matrix for this model to see if anything changed.

If we run `model.predict` now, we'll use the `model` object that was trained for the complete 10 epochs, not the one that gave us the highest validation accuracy. To use the best one, we need to load it from our last checkpoint file. We also have to declare the custom objects we used; for example, `load_model` doesn't know what `f1` means.
###Code
model = load_model('D:/Datasets/mc-sent/models/w2v_balanced_v1.hdf5',
custom_objects={'f1': f1})
y_pred = model.predict(x_val, verbose=1)
print(y_pred.shape, y_val.shape)
y_pred_class = np.array([np.argmax(x) for x in y_pred])
y_val_class = np.array([np.argmax(x) for x in y_val])
c = confusion_matrix(y_val_class, y_pred_class)
classes = [v for k, v in cat_to_label.items()]
plt.figure(figsize=(20, 20))
plt.imshow(c, interpolation='nearest', cmap='jet')
plt.colorbar()
ticks = np.arange(len(classes))
plt.xticks(ticks, classes, rotation=45)
plt.yticks(ticks, classes)
plt.ylabel('True class')
plt.xlabel('Predicted class')
plt.tight_layout()
###Output
_____no_output_____
###Markdown
The confusion matrix is hardly any different, so our model overfit after all. The imbalance in this dataset is proving to be too difficult to combat. But there's another, perhaps less stupid way of dealing with imbalance that we haven't tried yet.

Cost-sensitive learning
In this method, we penalize misclassifications differently: misclassifications of the minority classes are penalized more heavily than those of the majority classes, which means the loss is weighted differently for each class. Such a penalty system may induce the model to pay more attention to the minority classes.

Concretely, we calculate a class weight dictionary and feed it to the `.fit` method during training, and keras weights the loss accordingly. Scikit-learn has a handy function to calculate class weights.
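Scikit-learn's `'balanced'` heuristic, used below, gives each class $c$ a weight inversely proportional to its frequency,

$$w_c = \frac{n_{\text{samples}}}{n_{\text{classes}} \cdot n_c},$$

so the rarest classes get the largest weights.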
###Code
df = pd.read_csv('D:/Datasets/mc-sent/p_train.csv', low_memory=False)
df.head()
df, df_val = train_test_split(df, test_size=0.15, random_state=42)
labels = df[['id', 'sentiment']]
classes = sorted(labels.sentiment.unique())
classes
class_weights = class_weight.compute_class_weight('balanced', np.unique(sorted(labels.sentiment)), labels.sentiment)
class_weights
###Output
_____no_output_____
###Markdown
We need to convert this into an enumerated dictionary for keras to be able to parse it.
###Code
class_weight_dict = dict(enumerate(class_weights))
class_weight_dict
###Output
_____no_output_____
###Markdown
We can pass this dictionary to keras to change its loss function accordingly.
###Code
print(df.shape, labels.shape)
print(df_val.shape)
df.drop(['n', 'sentiment'], axis=1, inplace=True)
label_to_cat = dict()
for i in range(len(classes)):
dummy = np.zeros((len(classes),), dtype='int8')
dummy[i] = 1
label_to_cat[classes[i]] = dummy
cat_to_label = dict()
for k, v in label_to_cat.items():
cat_to_label[tuple(v)] = k
y = np.array([label_to_cat[label] for label in labels.sentiment])
df.response = df.response.apply(str.lower)
df.head()
df_train = df.copy()
y_train = y.copy()
print(df_train.shape, y_train.shape)
print(df_val.shape)
t = Tokenizer()
t.fit_on_texts(df_train.response)
vocab_size = len(t.word_index) + 1
vocab_size
encoded_train_set = t.texts_to_sequences(df_train.response)
len(encoded_train_set)
df_train['tokens'] = encoded_train_set
df_train.drop(['response'], axis=1, inplace=True)
df_train.head()
y_train[:5]
SEQ_LEN = 80
padded_train = pad_sequences(encoded_train_set, maxlen=SEQ_LEN, padding='post')
train_docs = [list(doc) for doc in padded_train]
df_train['tokens'] = train_docs
df_train.head()
embeddings_index = gensim.models.KeyedVectors.load_word2vec_format('D:/Datasets/embeddings/Word2Vec/GoogleNews-vectors-negative300.bin', binary=True)
embedding_matrix = np.zeros((vocab_size, 300))
count = 0
for word, i in t.word_index.items():
try:
embedding_vector = embeddings_index[word]
embedding_matrix[i] = embedding_vector
except KeyError:
count += 1
count
embedding_matrix.shape
###Output
_____no_output_____
###Markdown
We'll use the same model as above, but this time, we'll set trainable to True in the embedding layer.
###Code
input_tensor = Input(shape=(SEQ_LEN,), dtype='int32')
e = Embedding(vocab_size, 300, weights=[embedding_matrix], input_length=SEQ_LEN, trainable=True)(input_tensor)
x = Bidirectional(CuDNNLSTM(128, return_sequences=True))(e)
x = Bidirectional(CuDNNLSTM(64, return_sequences=False))(x)
x = Dense(256)(x)
x = LeakyReLU(alpha=0.1)(x)
x = Dropout(0.6)(x)
x = Dense(128)(x)
x = LeakyReLU(alpha=0.1)(x)
x = Dropout(0.5)(x)
x = Dense(64)(x)
x = LeakyReLU(alpha=0.1)(x)
x = Dropout(0.4)(x)
output_tensor = Dense(7, activation='softmax')(x)
model = Model(input_tensor, output_tensor)
df_val.head()
val_labels = df_val[['id', 'sentiment']]
df_val.drop(['n', 'sentiment'], axis=1, inplace=True)
df_val.response = df_val.response.str.lower()
df_val.head()
encoded_val_set = t.texts_to_sequences(df_val.response)
np.mean([len(doc) for doc in encoded_val_set])
df_val['tokens'] = encoded_val_set
df_val.drop(['response'], axis=1, inplace=True)
padded_val = pad_sequences(encoded_val_set, maxlen=SEQ_LEN, padding='post')
val_vectors = [list(doc) for doc in padded_val]
df_val.tokens = val_vectors
df_val.head()
np.mean([len(doc) for doc in val_vectors])
x_val = np.array([np.array(token) for token in df_val.tokens])
x_val.shape
y_val = np.array([np.array(label_to_cat[label]) for label in val_labels.sentiment])
y_val.shape
y_val[:5]
checkpoint = ModelCheckpoint('D:/Datasets/mc-sent/models/w2v_balanced_v3.hdf5',
monitor='val_acc',
save_best_only=True,
mode='max',
verbose=1)
model.compile(optimizer=Adam(lr=1e-3),
loss='categorical_crossentropy',
metrics=['accuracy', f1])
model.summary()
x_train = np.array([np.array(token) for token in df_train.tokens])
x_train.shape
###Output
_____no_output_____
###Markdown
Set the `class_weight` parameter when calling `fit`.
###Code
model.fit(x_train, y_train,
validation_data=[x_val, y_val],
callbacks=[checkpoint],
          class_weight=class_weight_dict,
epochs=15,
verbose=1)
###Output
Train on 46172 samples, validate on 8149 samples
Epoch 1/15
46172/46172 [==============================] - 48s 1ms/step - loss: 0.5927 - acc: 0.8086 - f1: 0.7944 - val_loss: 0.3753 - val_acc: 0.8704 - val_f1: 0.8670
Epoch 00001: val_acc improved from -inf to 0.87041, saving model to D:/Datasets/mc-sent/models/w2v_balanced_v3.hdf5
Epoch 2/15
46172/46172 [==============================] - 44s 952us/step - loss: 0.2836 - acc: 0.9089 - f1: 0.9067 - val_loss: 0.2758 - val_acc: 0.9022 - val_f1: 0.9032
Epoch 00002: val_acc improved from 0.87041 to 0.90220, saving model to D:/Datasets/mc-sent/models/w2v_balanced_v3.hdf5
Epoch 3/15
46172/46172 [==============================] - 44s 952us/step - loss: 0.1664 - acc: 0.9478 - f1: 0.9479 - val_loss: 0.2979 - val_acc: 0.9016 - val_f1: 0.9030
Epoch 00003: val_acc did not improve from 0.90220
Epoch 4/15
46172/46172 [==============================] - 44s 949us/step - loss: 0.1082 - acc: 0.9674 - f1: 0.9673 - val_loss: 0.3726 - val_acc: 0.9059 - val_f1: 0.9064
Epoch 00004: val_acc improved from 0.90220 to 0.90588, saving model to D:/Datasets/mc-sent/models/w2v_balanced_v3.hdf5
Epoch 5/15
46172/46172 [==============================] - 44s 952us/step - loss: 0.0758 - acc: 0.9762 - f1: 0.9761 - val_loss: 0.3933 - val_acc: 0.9098 - val_f1: 0.9097
Epoch 00005: val_acc improved from 0.90588 to 0.90980, saving model to D:/Datasets/mc-sent/models/w2v_balanced_v3.hdf5
Epoch 6/15
46172/46172 [==============================] - 44s 951us/step - loss: 0.0576 - acc: 0.9828 - f1: 0.9829 - val_loss: 0.4271 - val_acc: 0.9058 - val_f1: 0.9077
Epoch 00006: val_acc did not improve from 0.90980
Epoch 7/15
46172/46172 [==============================] - 44s 956us/step - loss: 0.0473 - acc: 0.9863 - f1: 0.9862 - val_loss: 0.4686 - val_acc: 0.9072 - val_f1: 0.9086
Epoch 00007: val_acc did not improve from 0.90980
Epoch 8/15
46172/46172 [==============================] - 44s 949us/step - loss: 0.0338 - acc: 0.9899 - f1: 0.9899 - val_loss: 0.5735 - val_acc: 0.9075 - val_f1: 0.9082
Epoch 00008: val_acc did not improve from 0.90980
Epoch 9/15
46172/46172 [==============================] - 44s 952us/step - loss: 0.0326 - acc: 0.9909 - f1: 0.9909 - val_loss: 0.5111 - val_acc: 0.9016 - val_f1: 0.9006
Epoch 00009: val_acc did not improve from 0.90980
Epoch 10/15
46172/46172 [==============================] - 47s 1ms/step - loss: 0.0285 - acc: 0.9914 - f1: 0.9916 - val_loss: 0.5419 - val_acc: 0.9049 - val_f1: 0.9051
Epoch 00010: val_acc did not improve from 0.90980
Epoch 11/15
46172/46172 [==============================] - 45s 977us/step - loss: 0.0236 - acc: 0.9937 - f1: 0.9936 - val_loss: 0.6981 - val_acc: 0.8993 - val_f1: 0.8998
Epoch 00011: val_acc did not improve from 0.90980
Epoch 12/15
46172/46172 [==============================] - 44s 960us/step - loss: 0.0232 - acc: 0.9939 - f1: 0.9939 - val_loss: 0.6500 - val_acc: 0.9006 - val_f1: 0.9013
Epoch 00012: val_acc did not improve from 0.90980
Epoch 13/15
46172/46172 [==============================] - 45s 979us/step - loss: 0.0198 - acc: 0.9946 - f1: 0.9946 - val_loss: 0.7691 - val_acc: 0.8966 - val_f1: 0.8974
Epoch 00013: val_acc did not improve from 0.90980
Epoch 14/15
46172/46172 [==============================] - 47s 1ms/step - loss: 0.0175 - acc: 0.9953 - f1: 0.9952 - val_loss: 0.7831 - val_acc: 0.8988 - val_f1: 0.8996
Epoch 00014: val_acc did not improve from 0.90980
Epoch 15/15
46172/46172 [==============================] - 46s 990us/step - loss: 0.0188 - acc: 0.9954 - f1: 0.9952 - val_loss: 0.7357 - val_acc: 0.8961 - val_f1: 0.8967
Epoch 00015: val_acc did not improve from 0.90980
###Markdown
We've finally hit almost 91% validation accuracy! There's one last thing I want us to try.

ELMo Embeddings
These embeddings were released by [Allen NLP](https://allennlp.org/elmo) last year; we'll use them here as sentence-level embeddings. As per the inventors,

> ELMo is a deep contextualized word representation that models both complex characteristics of word use, and how these uses vary across linguistic contexts. The word vectors are learned functions of the internal states of a deep bidirectional language model (biLM), which is pre-trained on a large text corpus.

These embeddings are available through the tensorflow hub API. Since we consume them at the sentence level, we don't need to tokenize anything. Make sure your dataframe has the following columns.
###Code
df_train.head()
x_train = np.array([np.array(sentence) for sentence in df_train.response])
x_train[:5]
y_train[:5]
###Output
_____no_output_____
###Markdown
We'll have to write our own class inheriting keras' `Layer` class and define a few of its methods: `build`, `call` and `compute_output_shape`, plus `compute_mask` for masking.
###Code
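# Note on the hub module used below (an assumption based on the published TF-Hub ELMo v2 module):
# its 'default' signature accepts untokenized sentences and returns one fixed 1024-dimensional
# vector per sentence (a mean-pool over the contextualized word representations), which is why
# self.dimensions is 1024 and no tokenization or padding is needed here.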
class ELMo(Layer):
def __init__(self, **kwargs):
self.dimensions = 1024
self.trainable = False # set trainable to False
super(ELMo, self).__init__(**kwargs)
def build(self, input_shape):
self.elmo = hub.Module('https://tfhub.dev/google/elmo/2', trainable=self.trainable, name='{}_module'.format(self.name))
self.trainable_weights += K.tf.trainable_variables(scope="^{}_module/.*".format(self.name))
super(ELMo, self).build(input_shape)
def call(self, x, mask=None):
result = self.elmo(K.squeeze(K.cast(x, tf.string), axis=1), as_dict=True, signature='default',)['default']
return result
def compute_mask(self, inputs, mask=None):
return K.not_equal(inputs, '--PAD--')
def compute_output_shape(self, input_shape):
return (input_shape[0], self.dimensions)
df_val.head()
x_val = np.array([np.array(sentence) for sentence in df_val.response])
x_val.shape
val_labels = df_val[['id', 'sentiment']]
y_val = np.array([label_to_cat[label] for label in val_labels.sentiment])
x_val[:5]
y_val[:5]
###Output
_____no_output_____
###Markdown
We'll use a similar stack of dense layers as before on top of the ELMo output, but let's drop the fancy activation function this time.
###Code
input_tensor = Input(shape=(1,), dtype='string')
e = ELMo()(input_tensor)
x = Dense(256, activation='relu')(e)
x = Dropout(0.6)(x)
x = Dense(128, activation='relu')(x)
x = Dropout(0.5)(x)
x = Dense(64, activation='relu')(x)
x = Dropout(0.4)(x)
output_tensor = Dense(7, activation='softmax')(x)
model = Model(input_tensor, output_tensor)
checkpoint = ModelCheckpoint('D:/Datasets/mc-sent/models/w2v_balanced_elmo_v1.hdf5',
monitor='val_acc',
save_best_only=True,
mode='max',
verbose=1)
model.compile(optimizer=Adam(lr=1e-3),
loss='categorical_crossentropy',
metrics=['accuracy', f1])
model.summary()
model.fit(x_train, y_train,
batch_size=8,
validation_data=[x_val, y_val],
callbacks=[checkpoint],
          class_weight=class_weight_dict,
epochs=5,
verbose=1)
###Output
_____no_output_____ |
ComputerScience/SimpleMovingAverageStockTradingStrategy/SimpleMovingAverageStockTradingStrategy.ipynb | ###Markdown
Simple Moving Average Stock Trading Strategy

Based on [Simple Moving Average Stock Trading Strategy Using Python](https://www.youtube.com/watch?v=PUk5E8G1r44) from [Computer Science](https://www.youtube.com/channel/UCbmb5IoBtHZTpYZCDBOC1CA)

**Disclaimer:** _Investing in the stock market involves risk and can lead to monetary loss. This material is purely for educational purposes and should not be taken as professional investment advice. Invest at your own discretion._
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
plt.style.use('fivethirtyeight')
###Output
_____no_output_____
###Markdown
Load the stock price data (HDFC)
###Code
df = pd.read_csv('HDFC.csv')
###Output
_____no_output_____
###Markdown
Show the data
###Code
df
###Output
_____no_output_____
###Markdown
Visually show the close price
###Code
plt.figure(figsize=(16,8))
plt.title('Close Price History', fontsize=18)
plt.plot(df['Close'])
plt.xlabel('Date', fontsize=18)
plt.ylabel('Close Price', fontsize=18)
plt.show()
###Output
_____no_output_____
###Markdown
Create a function to get the simple moving average (SMA)
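For an $n$-day window, the SMA on day $t$ is just the mean of the last $n$ closing prices: $\text{SMA}_t = \frac{1}{n}\sum_{i=0}^{n-1} \text{Close}_{t-i}$.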
###Code
def SMA(data, period=30, column='Close'):
return data[column].rolling(window=period).mean()
###Output
_____no_output_____
###Markdown
Create new columns to store the 20 day SMA and 50 day SMA
###Code
df['SMA20'] = SMA(df, 20)
df['SMA50'] = SMA(df, 50)
###Output
_____no_output_____
###Markdown
Get the buy and sell signals
###Code
# Signal = 1 when the 20-day SMA is above the 50-day SMA, 0 otherwise
df['Signal'] = np.where(df['SMA20'] > df['SMA50'], 1, 0)
# Position flips to 1 on a golden cross (SMA20 crosses above SMA50) and -1 on a death cross
df['Position'] = df['Signal'].diff()
df['Buy'] = np.where(df['Position'] == 1, df['Close'], np.NAN)
df['Sell'] = np.where(df['Position'] == -1, df['Close'], np.NAN)
###Output
_____no_output_____
###Markdown
Visually show the close price with the SMAs and Buy & Sell signals
###Code
plt.figure(figsize=(16,8))
plt.title('Close Price History w/ Buy & Sell Signals', fontsize=18)
plt.plot(df['Close'], alpha=0.5, label='Close')
plt.plot(df['SMA20'], alpha=0.5, label='SMA20')
plt.plot(df['SMA50'], alpha=0.5, label='SMA50')
plt.scatter(df.index, df['Buy'], alpha=1, label='Buy Signal', marker='^', color='green')
plt.scatter(df.index, df['Sell'], alpha=1, label='Sell Signal', marker='v', color='red')
plt.xlabel('Date', fontsize=18)
plt.ylabel('Close Price', fontsize=18)
plt.show()
###Output
_____no_output_____
###Markdown
Visually show the close price with just the buy and sell signals, i.e. the dates of each golden cross and death cross in the dataset
###Code
plt.figure(figsize=(16,8))
plt.title('Close Price History w/ Buy & Sell Signals', fontsize=18)
plt.plot(df['Close'], alpha=0.5, label='Close')
plt.scatter(df.index, df['Buy'], alpha=1, label='Buy Signal', marker='^', color='green')
plt.scatter(df.index, df['Sell'], alpha=1, label='Sell Signal', marker='v', color='red')
plt.xlabel('Date', fontsize=18)
plt.ylabel('Close Price', fontsize=18)
plt.show()
###Output
_____no_output_____ |
Figure2_RBFOX1_analysis.ipynb | ###Markdown
Plots for RBFOX1 analysis
###Code
import os
import numpy as np
import pandas as pd
import logomaker
from tensorflow import keras
import matplotlib.pyplot as plt
%matplotlib inline
from residualbind import ResidualBind
import helper, explain
normalization = 'log_norm' # 'log_norm' or 'clip_norm'
ss_type = 'seq' # 'seq', 'pu', or 'struct'
data_path = '../data/RNAcompete_2013/rnacompete2013.h5'
results_path = os.path.join('../results', 'rnacompete_2013')
save_path = os.path.join(results_path, normalization+'_'+ss_type)
plot_path = helper.make_directory(save_path, 'FINAL')
experiment = 'RNCMPT00168'
rbp_index = helper.find_experiment_index(data_path, experiment)
# load rbp dataset
train, valid, test = helper.load_rnacompete_data(data_path,
ss_type=ss_type,
normalization=normalization,
rbp_index=rbp_index)
# load residualbind model
input_shape = list(train['inputs'].shape)[1:]
weights_path = os.path.join(save_path, experiment + '_weights.hdf5')
model = ResidualBind(input_shape, num_class=1, weights_path=weights_path)
# load pretrained weights
model.load_weights()
# get predictions for test sequences
predictions = model.predict(test['inputs'])
# motif scan test sequences
motif = 'UGCAUG'
M = len(motif)
motif_onehot = np.zeros((M, 4))
for i, m in enumerate(motif):
motif_onehot[i, "ACGU".index(m)] = 1
max_scan = []
for x in test['inputs']:
scan = []
for l in range(41-M):
scan.append(np.sum(x[range(l,l+M),:]*motif_onehot))
max_scan.append(np.max(scan))
index = [29849, 105952]
X = test['inputs'][index]
attr_map = explain.mutagenesis(model.model, X, class_index=0, layer=-1)
#scores = np.sum(attr_map * X, axis=2, keepdims=True)
scores = np.sum(attr_map**2, axis=2, keepdims=True)*X
fig = plt.figure()
plt.plot([-3,11], [-3,11], '--k')
plt.scatter(predictions[:,0], test['targets'][:,0], c=max_scan, cmap='viridis', alpha=0.5, rasterized=True)
plt.scatter(predictions[index,0], test['targets'][index,0], marker='x', c='r', s=80)
plt.xlabel('Predicted binding scores', fontsize=12)
plt.ylabel('Experimental binding scores', fontsize=12)
plt.xticks([-2, 0, 2, 4, 6, 8, 10], fontsize=12)
plt.yticks([-2, 0, 2, 4, 6, 8, 10], fontsize=12)
outfile = os.path.join(plot_path, 'rbfox1_scatter.pdf')
fig.savefig(outfile, format='pdf', dpi=200, bbox_inches='tight')
fig = plt.figure()
plt.plot([-3,11], [-3,11], '--k')
plt.scatter(predictions[:,0], test['targets'][:,0], c=max_scan, cmap='viridis', alpha=0.5)
plt.scatter(predictions[index,0], test['targets'][index,0], marker='x', c='r', s=80)
plt.xlabel('Predicted binding scores', fontsize=12)
plt.ylabel('Experimental binding scores', fontsize=12)
plt.xticks([-2, 0, 2, 4, 6, 8, 10], fontsize=12)
plt.yticks([-2, 0, 2, 4, 6, 8, 10], fontsize=12)
outfile = os.path.join(plot_path, 'rbfox1_scatter_hires.pdf')
fig.savefig(outfile, format='pdf', dpi=200, bbox_inches='tight')
###Output
_____no_output_____
###Markdown
Plot mutagenesis logo
###Code
N, L, A = X.shape
for k in range(len(X)):
counts_df = pd.DataFrame(data=0.0, columns=list('ACGU'), index=list(range(L)))
for a in range(A):
for l in range(L):
counts_df.iloc[l,a] = scores[k,l,a]
fig = plt.figure(figsize=(25,3))
ax = plt.subplot(1,1,1)
logomaker.Logo(counts_df, ax=ax)
ax.yaxis.set_ticks_position('none')
ax.xaxis.set_ticks_position('none')
plt.xticks([])
plt.yticks([])
fig = plt.gcf()
ax2 = ax.twinx()
#plt.title(index[k], fontsize=16)
#plt.ylabel(np.round(pr_score[k],4), fontsize=16)
plt.yticks([])
outfile = os.path.join(plot_path, str(index[k])+'_rbfox1_saliency.pdf')
fig.savefig(outfile, format='pdf', dpi=200, bbox_inches='tight')
###Output
_____no_output_____
###Markdown
GIA for multiple binding sites
###Code
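# Global importance analysis (GIA), as applied below: embed a motif (or several) at fixed
# positions in a population of background sequences drawn from the null model, then measure
# the average change in the model's predicted binding score relative to those backgrounds.
# (This summary is an assumption inferred from how the GlobalImportance helper is used here.)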
from residualbind import GlobalImportance
alphabet = 'ACGU'
# instantiate global importance
gi = GlobalImportance(model, alphabet)
# set null sequence model
gi.set_null_model(null_model='profile', base_sequence=test['inputs'], num_sample=1000, binding_scores=test['targets'])
# GIA for optimal binding site
motif = 'UGCAUG'
positions = [4, 12, 20]
all_scores = gi.multiple_sites(motif, positions, class_index=0)
fig = plt.figure(figsize=(4,5))
flierprops = dict(marker='^', markerfacecolor='green', markersize=14,linestyle='none')
box = plt.boxplot(all_scores.T, showfliers=False, showmeans=True, meanprops=flierprops);
plt.xticks(range(1,len(positions)+1), [motif+' (x1)', motif+' (x2)', motif+' (x3)'], rotation=40, fontsize=14, ha='right');
ax = plt.gca();
plt.setp(ax.get_yticklabels(),fontsize=14)
plt.ylabel('Importance', fontsize=14);
x = np.linspace(1,3,3)
p = np.polyfit(x, np.mean(all_scores, axis=1), 1)
determination = np.corrcoef(x, np.mean(all_scores, axis=1))[0,1]**2
x = np.linspace(0.5,3.5,10)
plt.plot(x, x*p[0] + p[1], '--k', alpha=0.5)
MAX = 0
for w in box['whiskers']:
MAX = np.maximum(MAX, np.max(w.get_ydata()))
scale = (np.percentile(all_scores, 90) - np.percentile(all_scores,10))/10
plt.text(0.6, MAX-scale, "$R^2$ = %.3f"%(determination), fontsize=14)
outfile = os.path.join(plot_path, 'rbfox1_multiple_binding_sites.pdf')
fig.savefig(outfile, format='pdf', dpi=200, bbox_inches='tight')
# GIA for sub-optimal binding site
motif = 'AGAAUG'
positions = [4, 12, 20]
all_scores = gi.multiple_sites(motif, positions, class_index=0)
fig = plt.figure(figsize=(4,5))
flierprops = dict(marker='^', markerfacecolor='green', markersize=14,linestyle='none')
box = plt.boxplot(all_scores.T, showfliers=False, showmeans=True, meanprops=flierprops);
plt.xticks(range(1,len(positions)+1), [motif+' (x1)', motif+' (x2)', motif+' (x3)'], rotation=40, fontsize=14, ha='right');
ax = plt.gca();
plt.setp(ax.get_yticklabels(),fontsize=14)
plt.ylabel('Importance', fontsize=14);
x = np.linspace(1,3,3)
p = np.polyfit(x, np.mean(all_scores, axis=1), 1)
determination = np.corrcoef(x, np.mean(all_scores, axis=1))[0,1]**2
x = np.linspace(0.5,3.5,10)
plt.plot(x, x*p[0] + p[1], '--k', alpha=0.5)
MAX = 0
for w in box['whiskers']:
MAX = np.maximum(MAX, np.max(w.get_ydata()))
scale = (np.percentile(all_scores, 90) - np.percentile(all_scores,10))/10
plt.text(0.6, MAX-scale, "$R^2$ = %.3f"%(determination), fontsize=14)
plt.plot([0.5,3.5],[5.18859, 5.18859], '--r')
outfile = os.path.join(plot_path, 'rbfox1_mutated_multiple_binding_sites.pdf')
fig.savefig(outfile, format='pdf', dpi=200, bbox_inches='tight')
###Output
_____no_output_____
###Markdown
GIA for motif spacing
###Code
names = ['UGCAUG', 'UGCAUGCAUG', 'UGCAUGUGCAUG', 'UGCAUGNNNUGCAUG']
motif = 'UGCAUG'
positions = [[17,17], [16, 20], [15, 21], [13, 22]]
class_index = 0  # single-output model, so score the first (and only) output
all_scores = []
for position in positions:
interventions = []
for pos in position:
interventions.append((motif, pos))
all_scores.append(gi.embed_predict_effect(interventions, class_index))
all_scores = np.array(all_scores)
fig = plt.figure(figsize=(5,5))
flierprops = dict(marker='^', markerfacecolor='green', markersize=14,linestyle='none')
box = plt.boxplot(all_scores.T, showfliers=False, showmeans=True, meanprops=flierprops);
plt.xticks(range(1,len(positions)+1), names, rotation=40, fontsize=14, ha='right');
ax = plt.gca();
plt.setp(ax.get_yticklabels(),fontsize=14)
plt.ylabel('Importance', fontsize=14);
outfile = os.path.join(plot_path, 'rbfox1_separation.pdf')
fig.savefig(outfile, format='pdf', dpi=200, bbox_inches='tight')
###Output
_____no_output_____
###Markdown
Compare to Experimental KD measurements
###Code
# measurements from Auweter et al. "Molecular basis of RNA recognition by the
# human alternative splicing factor Fox-1" EMBO 2006
patterns = ['UGCAUGU', 'AGCAUGU', 'CGCAUGU', 'UGUAUGU', 'UACAUGU', 'UGCACGU', 'UGCAUAU']
exp_scores = np.array([0.83, 4.8, 6.1, 280, 350, 4.9, 1830])
# GIA analysis for same mutations with experimental measurements
position = 17
class_index = 0
all_scores = []
for pattern in patterns:
scores = gi.embed_predict_effect((pattern, position), class_index)
all_scores.append(np.mean(scores))
all_scores = np.array(all_scores)
# perform linear regression and get statistics
from scipy import stats
results = stats.linregress(all_scores, np.log(exp_scores))
results
fig = plt.figure(figsize=(6,4))
plt.scatter(all_scores, np.log(exp_scores), s=80)
p = np.polyfit(all_scores, np.log(exp_scores), 1)
x = np.linspace(0,4.5,20)
y = results.slope*x + results.intercept
plt.plot(x,y,'--r')
plt.text(2.3, 7, '$R^2$ = %.3f'%(results.rvalue**2), fontsize=14)
plt.text(2.3, 6, '$p$-value = %.4f'%(results.pvalue), fontsize=14)
ax = plt.gca();
plt.setp(ax.get_yticklabels(),fontsize=14)
plt.setp(ax.get_xticklabels(),fontsize=14)
plt.ylabel('Experimental $\ln~{K_D}$ ratio', fontsize=14);
plt.xlabel('Global Importance', fontsize=14);
#exp_scores = np.array([0.83, 4.8, 6.1, 280, 350, 4.9, 1830])
plt.text(all_scores[0]-1.1, np.log(exp_scores[0])+-.1, 'UGCAUGU', fontsize=12)
plt.text(all_scores[1]+.15, np.log(exp_scores[1])+.0, 'AGCAUGU', fontsize=12)
plt.text(all_scores[2]-.4, np.log(exp_scores[2])+.45, 'CGCAUGU', fontsize=12)
plt.text(all_scores[5]-.9, np.log(exp_scores[5])-.7, 'UGCACGU', fontsize=12)
plt.text(all_scores[3]+.2, np.log(exp_scores[3])-0, 'UGUAUGU', fontsize=12)
plt.text(all_scores[4]-.23, np.log(exp_scores[4])-.8, 'UACAUGU', fontsize=12)
plt.text(all_scores[6]-.25, np.log(exp_scores[6])-.8, 'UGCAUAU', fontsize=12)
outfile = os.path.join(plot_path, 'rbfox1_GIA_vs_experimentalKD.pdf')
fig.savefig(outfile, format='pdf', dpi=200, bbox_inches='tight')
###Output
_____no_output_____ |
Model backlog/Deep Learning/VGG16/[15th] - Bottleneck VGG16 img256.ipynb | ###Markdown
Bottleneck features using VGG16
###Code
# Model parameters
BATCH_SIZE = 64
EPOCHS = 100
LEARNING_RATE = 0.1
HEIGHT = 256
WIDTH = 256
CANAL = 3
N_CLASSES = labels.shape[0]
classes = list(map(str, range(N_CLASSES)))
def f2_score_thr(threshold=0.5):
def f2_score(y_true, y_pred):
beta = 2
y_pred = K.cast(K.greater(K.clip(y_pred, 0, 1), threshold), K.floatx())
true_positives = K.sum(K.clip(y_true * y_pred, 0, 1), axis=1)
predicted_positives = K.sum(K.clip(y_pred, 0, 1), axis=1)
possible_positives = K.sum(K.clip(y_true, 0, 1), axis=1)
precision = true_positives / (predicted_positives + K.epsilon())
recall = true_positives / (possible_positives + K.epsilon())
return K.mean(((1+beta**2)*precision*recall) / ((beta**2)*precision+recall+K.epsilon()))
return f2_score
def step_decay(epoch):
initial_lrate = LEARNING_RATE
drop = 0.5
epochs_drop = 10
lrate = initial_lrate * math.pow(drop, math.floor((1+epoch)/epochs_drop))
return lrate
train_datagen = ImageDataGenerator(rescale=1./255)
train_generator = train_datagen.flow_from_dataframe(
dataframe=train,
directory="../input/imet-2019-fgvc6/train",
x_col="id",
y_col="attribute_ids",
batch_size=BATCH_SIZE,
shuffle=False,
class_mode=None,
target_size=(HEIGHT, WIDTH))
test_datagen = ImageDataGenerator(rescale=1./255)
test_generator = test_datagen.flow_from_dataframe(
dataframe=test,
directory = "../input/imet-2019-fgvc6/test",
x_col="id",
target_size=(HEIGHT, WIDTH),
batch_size=1,
shuffle=False,
class_mode=None)
# Build the VGG16 network
model_vgg = VGG16(weights=None, include_top=False)
model_vgg.load_weights('../input/vgg16/vgg16_weights_tf_dim_ordering_tf_kernels_notop.h5')
STEP_SIZE_TRAIN = train_generator.n // train_generator.batch_size
train_data = model_vgg.predict_generator(train_generator, STEP_SIZE_TRAIN)
train_labels = []
for label in train['attribute_ids'][:train_data.shape[0]].values:
zeros = np.zeros(N_CLASSES)
for label_i in label:
zeros[int(label_i)] = 1
train_labels.append(zeros)
train_labels = np.asarray(train_labels)
X_train, X_val, Y_train, Y_val = train_test_split(train_data, train_labels, test_size=0.2, random_state=0)
###Output
_____no_output_____
###Markdown
Model
###Code
model = Sequential()
model.add(Flatten(input_shape=train_data.shape[1:]))
model.add(Dense(1024, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(N_CLASSES, activation="sigmoid"))
optimizer = optimizers.SGD(lr=LEARNING_RATE, momentum=0.8, decay=0.0, nesterov=False)
thresholds = [0.15, 0.2, 0.25, 0.3, 0.4, 0.5]
metrics = ["accuracy", "categorical_accuracy", f2_score_thr(0.15), f2_score_thr(0.2),
f2_score_thr(0.25), f2_score_thr(0.3), f2_score_thr(0.4), f2_score_thr(0.5)]
lrate = LearningRateScheduler(step_decay)
es = EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=10)
callbacks = [lrate, es]
model.compile(optimizer=optimizer, loss="binary_crossentropy", metrics=metrics)
history = model.fit(x=X_train, y=Y_train,
validation_data=(X_val, Y_val),
epochs=EPOCHS,
batch_size=BATCH_SIZE,
callbacks=callbacks,
verbose=2)
###Output
_____no_output_____
###Markdown
Model graph loss
###Code
sns.set_style("whitegrid")
fig, (ax1, ax2, ax3) = plt.subplots(1, 3, sharex='col', figsize=(20,7))
ax1.plot(history.history['loss'], label='Train loss')
ax1.plot(history.history['val_loss'], label='Validation loss')
ax1.legend(loc='best')
ax1.set_title('Loss')
ax2.plot(history.history['acc'], label='Train Accuracy')
ax2.plot(history.history['val_acc'], label='Validation accuracy')
ax2.legend(loc='best')
ax2.set_title('Accuracy')
ax3.plot(history.history['categorical_accuracy'], label='Train Cat Accuracy')
ax3.plot(history.history['val_categorical_accuracy'], label='Validation Cat Accuracy')
ax3.legend(loc='best')
ax3.set_title('Cat Accuracy')
plt.xlabel('Epochs')
sns.despine()
plt.show()
fig, axes = plt.subplots(3, 2, sharex='col', figsize=(20,7))
axes[0][0].plot(history.history['f2_score'], label='Train F2 Score')
axes[0][0].plot(history.history['val_f2_score'], label='Validation F2 Score')
axes[0][0].legend(loc='best')
axes[0][0].set_title('F2 Score threshold 0.15')
axes[0][1].plot(history.history['f2_score_1'], label='Train F2 Score')
axes[0][1].plot(history.history['val_f2_score_1'], label='Validation F2 Score')
axes[0][1].legend(loc='best')
axes[0][1].set_title('F2 Score threshold 0.2')
axes[1][0].plot(history.history['f2_score_2'], label='Train F2 Score')
axes[1][0].plot(history.history['val_f2_score_2'], label='Validation F2 Score')
axes[1][0].legend(loc='best')
axes[1][0].set_title('F2 Score threshold 0.25')
axes[1][1].plot(history.history['f2_score_3'], label='Train F2 Score')
axes[1][1].plot(history.history['val_f2_score_3'], label='Validation F2 Score')
axes[1][1].legend(loc='best')
axes[1][1].set_title('F2 Score threshold 0.3')
axes[2][0].plot(history.history['f2_score_4'], label='Train F2 Score')
axes[2][0].plot(history.history['val_f2_score_4'], label='Validation F2 Score')
axes[2][0].legend(loc='best')
axes[2][0].set_title('F2 Score threshold 0.4')
axes[2][1].plot(history.history['f2_score_5'], label='Train F2 Score')
axes[2][1].plot(history.history['val_f2_score_5'], label='Validation F2 Score')
axes[2][1].legend(loc='best')
axes[2][1].set_title('F2 Score threshold 0.5')
plt.xlabel('Epochs')
sns.despine()
plt.show()
###Output
_____no_output_____
###Markdown
Find best threshold value
###Code
best_thr = 0
best_thr_val = history.history['val_f2_score'][-1]
for i in range(1, len(metrics)-2):
if best_thr_val < history.history['val_f2_score_%s' % i][-1]:
best_thr_val = history.history['val_f2_score_%s' % i][-1]
best_thr = i
threshold = thresholds[best_thr]
print('Best threshold is: %s' % threshold)
###Output
_____no_output_____
###Markdown
Apply model to test set and output predictions
###Code
test_generator.reset()
STEP_SIZE_TEST = test_generator.n//test_generator.batch_size
bottleneck_preds = model_vgg.predict_generator(test_generator, steps=STEP_SIZE_TEST)
preds = model.predict(bottleneck_preds)
predictions = []
for pred_ar in preds:
valid = ''
for idx, pred in enumerate(pred_ar):
if pred > threshold:
if len(valid) == 0:
valid += str(idx)
else:
valid += (' %s' % idx)
if len(valid) == 0:
valid = np.argmax(pred_ar)
predictions.append(valid)
filenames = test_generator.filenames
results = pd.DataFrame({'id':filenames, 'attribute_ids':predictions})
results['id'] = results['id'].map(lambda x: str(x)[:-4])
results.to_csv('submission.csv',index=False)
results.head(10)
###Output
_____no_output_____ |
AlphabetSoupCharity_Optimization3.2.ipynb | ###Markdown
Deliverable 2: Compile, Train and Evaluate the Model
###Code
# Define the model - deep neural net, i.e., the number of input features and hidden nodes for each layer.
number_input_features = len(X_train[0])
hidden_nodes_layer1 = 180
hidden_nodes_layer2 = 90
hidden_nodes_layer3 = 60
nn = tf.keras.models.Sequential()
# First hidden layer
nn.add(tf.keras.layers.Dense(units=hidden_nodes_layer1, input_dim=number_input_features, activation="tanh"))
# Second hidden layer
nn.add(tf.keras.layers.Dense(units=hidden_nodes_layer2, activation="tanh"))
# Third hidden layer
nn.add(tf.keras.layers.Dense(units=hidden_nodes_layer3, activation="tanh"))
# Output layer
nn.add(tf.keras.layers.Dense(units=1, activation="relu"))
# Check the structure of the model
nn.summary()
# Compile the model
nn.compile(loss="binary_crossentropy", optimizer="adam", metrics=["accuracy"])
# Import checkpoint dependencies
import os
from tensorflow.keras.callbacks import ModelCheckpoint
# Define the checkpoint path and filenames
os.makedirs("checkpoints/",exist_ok=True)
checkpoint_path = "checkpoints/weights.{epoch:02d}.hdf5"
# Create a callback that saves the model's weights every 5 epochs
cp_callback = ModelCheckpoint(
filepath=checkpoint_path,
verbose=1,
save_weights_only=True,
save_freq="epoch",
period=5)
# Train the model
fit_model = nn.fit(X_train_scaled,y_train,epochs=100,callbacks=[cp_callback])
# Evaluate the model using the test data
model_loss, model_accuracy = nn.evaluate(X_test_scaled,y_test,verbose=2)
print(f"Loss: {model_loss}, Accuracy: {model_accuracy}")
# Export our model to HDF5 file
nn.save("AlphabetSoupCharity_Optimization3.2.h5")
###Output
_____no_output_____ |
Modulo2/ClaseRepaso.ipynb | ###Markdown
Review class> The goal of this class is to work through a series of theoretical and practical exercises related to the contents of modules 1 and 2, in preparation for the exam.> You are welcome to bring up your own questions on the topics covered in these modules, or exercises from previous classes, past homework and/or quizzes that are still unclear.> The main recommendation for the exam is that you UNDERSTAND every one of the quiz and homework exercises. If all of that is clear, the exam will be a mere formality.___ Assorted quiz-style exercises.Part of the exam consists of exercises similar to those from the quizzes taken in modules 1 and 2. The difference from the quizzes is that, in addition to selecting the answer, you must justify why you chose it.Let's review some exercises similar to those from past quizzes. **Question 1.** Consider the following distribution of returns for assets A, B and C:| Probability | Return A | Return B | Return C || ---------------- | ------------------ | ------------------- | ------------------ || 30% | -0.20 | -0.05 | 0.05 || 40% | 0.05 | 0.10 | 0.03 || 30% | 0.40 | 0.15 | 0.02 |Which of the following statements are correct?A. $E[r_A] = 25.00\%$, $E[r_B] = 20.00\%$, $E[r_C] = 10.00\%$.B. $E[r_A] = 8.00\%$, $E[r_B] = 20.00\%$, $E[r_C] = 3.30\%$.C. $E[r_A] = 25.00\%$, $E[r_B] = 7.00\%$, $E[r_C] = 10.00\%$.D. $E[r_A] = 8.00\%$, $E[r_B] = 7.00\%$, $E[r_C] = 3.30\%$. The correct answer is (4%): D
###Code
# The justification for this question consists of the calculations needed to reach the result (4%)
import pandas as pd
tabla = pd.DataFrame(
{'Prob': [0.3, 0.4, 0.3],
'A': [-0.2, 0.05, 0.4],
'B': [-0.05, 0.1, 0.15],
'C': [0.05, 0.03, 0.02]
}
)
tabla
ErA = (tabla['Prob'] * tabla['A']).sum()
ErB = (tabla['Prob'] * tabla['B']).sum()
ErC = (tabla['Prob'] * tabla['C']).sum()
ErA, ErB, ErC
###Output
_____no_output_____
###Markdown
**Question 2.** Consider the following distribution of returns for assets A, B and C:| Probability | Return A | Return B | Return C || ---------------- | ------------------ | ------------------- | ------------------ || 30% | -0.20 | -0.05 | 0.05 || 40% | 0.05 | 0.10 | 0.03 || 30% | 0.40 | 0.15 | 0.02 |Which of the following statements are correct?A. $\sigma_A = 27.33\%$, $\sigma_B = 8.12\%$, $\sigma_C = 1.91\%$.B. $\sigma_A = 23.37\%$, $\sigma_B = 8.12\%$, $\sigma_C = 1.19\%$.C. $\sigma_A = 23.37\%$, $\sigma_B = 12.08\%$, $\sigma_C = 1.91\%$.D. $\sigma_A = 27.33\%$, $\sigma_B = 12.08\%$, $\sigma_C = 1.19\%$. The correct answer is (4%): B
###Code
# The justification for this question consists of the calculations needed to reach the result (4%)
sA = (tabla['Prob'] * (tabla['A'] - ErA)**2).sum()**0.5
sB = (tabla['Prob'] * (tabla['B'] - ErB)**2).sum()**0.5
sC = (tabla['Prob'] * (tabla['C'] - ErC)**2).sum()**0.5
sA, sB, sC
###Output
_____no_output_____
###Markdown
**Question 3.** Consider the following distribution of returns for assets A, B and C:| Probability | Return A | Return B | Return C || ---------------- | ------------------ | ------------------- | ------------------ || 30% | -0.20 | -0.05 | 0.05 || 40% | 0.05 | 0.10 | 0.03 || 30% | 0.40 | 0.15 | 0.02 |Which of the following statements are correct?A. $\sigma_{AB} = 0.0174$, $\sigma_{AC} = 0.00264$, $\sigma_{BC} = 0.00096$.B. $\sigma_{AB} = -0.0174$, $\sigma_{AC} = -0.00264$, $\sigma_{BC} = 0.00096$.C. $\sigma_{AB} = 0.0174$, $\sigma_{AC} = -0.00264$, $\sigma_{BC} = -0.00096$.D. $\sigma_{AB} = -0.0174$, $\sigma_{AC} = 0.00264$, $\sigma_{BC} = -0.00096$. The correct answer is (4%): C
###Code
# The justification for this question consists of the calculations needed to reach the result (4%)
sAB = (tabla['Prob'] * (tabla['A'] - ErA) * (tabla['B'] - ErB)).sum()
sAC = (tabla['Prob'] * (tabla['A'] - ErA) * (tabla['C'] - ErC)).sum()
sBC = (tabla['Prob'] * (tabla['B'] - ErB) * (tabla['C'] - ErC)).sum()
sAB, sAC, sBC
###Output
_____no_output_____
###Markdown
**Question 4.** Consider the following distribution of returns for assets A, B and C:| Probability | Return A | Return B | Return C || ---------------- | ------------------ | ------------------- | ------------------ || 30% | -0.20 | -0.05 | 0.05 || 40% | 0.05 | 0.10 | 0.03 || 30% | 0.40 | 0.15 | 0.02 |What are the expected return and volatility of a portfolio made up of 20% of asset A, 30% of asset B and 50% of asset C?A. $E[r_P] = 5.53\%$, $\sigma_P=6.39\%$.B. $E[r_P] = 5.53\%$, $\sigma_P=7.71\%$.C. $E[r_P] = 3.55\%$, $\sigma_P=7.71\%$.D. $E[r_P] = 5.35\%$, $\sigma_P=6.39\%$. The correct answer is (4%): D
###Code
import numpy as np
# The justification for this question consists of the calculations needed to reach the result (4%)
ErP = 0.2 * ErA + 0.3 * ErB + 0.5 * ErC
cov = np.array([[sA**2, sAB, sAC],
[sAB, sB**2, sBC],
[sAC, sBC, sC**2]
])
w = np.array([0.2, 0.3, 0.5])
sP = (w.T.dot(cov).dot(w))**0.5
ErP, sP
###Output
_____no_output_____
###Markdown
Review class> The goal of this class is to work through a series of theoretical and practical exercises related to the contents of modules 1 and 2, in preparation for the exam.> You are welcome to bring up your own questions on the topics covered in these modules, or exercises from previous classes, past homework and/or quizzes that are still unclear.> The main recommendation for the exam is that you UNDERSTAND every one of the quiz and homework exercises. If all of that is clear, the exam will be a mere formality.___ Assorted quiz-style exercises.Part of the exam consists of exercises similar to those from the quizzes taken in modules 1 and 2. The difference from the quizzes is that, in addition to selecting the answer, you must justify why you chose it.Let's review some exercises similar to those from past quizzes. **Question 1.** Consider the following distribution of returns for assets A, B and C:| Probability | Return A | Return B | Return C || ---------------- | ------------------ | ------------------- | ------------------ || 30% | -0.20 | -0.05 | 0.05 || 40% | 0.05 | 0.10 | 0.03 || 30% | 0.40 | 0.15 | 0.02 |Which of the following statements are correct?A. $E[r_A] = 25.00\%$, $E[r_B] = 20.00\%$, $E[r_C] = 10.00\%$.B. $E[r_A] = 8.00\%$, $E[r_B] = 20.00\%$, $E[r_C] = 3.30\%$.C. $E[r_A] = 25.00\%$, $E[r_B] = 7.00\%$, $E[r_C] = 10.00\%$.D. $E[r_A] = 8.00\%$, $E[r_B] = 7.00\%$, $E[r_C] = 3.30\%$. The correct answer is (4%):
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
# The justification for this question consists of the calculations needed to reach the result (4%)
tabla=pd.DataFrame(columns=['prob','A','B','C'])
tabla['prob']=[0.30,0.4,0.3]
tabla['A']=[-0.20,0.05,0.40]
tabla['B']=[-0.05,0.10,0.15]
tabla['C']=[0.05,0.03,0.02]
tabla
EA=(tabla['prob']*tabla['A']).sum()
EB=(tabla['prob']*tabla['B']).sum()
EC=(tabla['prob']*tabla['C']).sum()
EA,EB,EC
###Output
_____no_output_____
###Markdown
The correct answer is D. $E[r_A] = 8.00\%$, $E[r_B] = 7.00\%$, $E[r_C] = 3.30\%$. **Question 2.** Consider the following distribution of returns for assets A, B and C:| Probability | Return A | Return B | Return C || ---------------- | ------------------ | ------------------- | ------------------ || 30% | -0.20 | -0.05 | 0.05 || 40% | 0.05 | 0.10 | 0.03 || 30% | 0.40 | 0.15 | 0.02 |Which of the following statements are correct?A. $\sigma_A = 27.33\%$, $\sigma_B = 8.12\%$, $\sigma_C = 1.91\%$.B. $\sigma_A = 23.37\%$, $\sigma_B = 8.12\%$, $\sigma_C = 1.19\%$.C. $\sigma_A = 23.37\%$, $\sigma_B = 12.08\%$, $\sigma_C = 1.91\%$.D. $\sigma_A = 27.33\%$, $\sigma_B = 12.08\%$, $\sigma_C = 1.19\%$. The correct answer is (4%):
###Code
# The justification for this question consists of the calculations needed to reach the result (4%)
sA=((tabla['A']-EA)**2*tabla['prob']).sum()**0.5
sB=((tabla['B']-EB)**2*tabla['prob']).sum()**0.5
sC=((tabla['C']-EC)**2*tabla['prob']).sum()**0.5
sA,sB,sC
###Output
_____no_output_____
###Markdown
The correct answer is B. $\sigma_A = 23.37\%$, $\sigma_B = 8.12\%$, $\sigma_C = 1.19\%$. **Question 3.** Consider the following distribution of returns for assets A, B and C:| Probability | Return A | Return B | Return C || ---------------- | ------------------ | ------------------- | ------------------ || 30% | -0.20 | -0.05 | 0.05 || 40% | 0.05 | 0.10 | 0.03 || 30% | 0.40 | 0.15 | 0.02 |Which of the following statements are correct?A. $\sigma_{AB} = 0.0174$, $\sigma_{AC} = 0.00264$, $\sigma_{BC} = 0.00096$.B. $\sigma_{AB} = -0.0174$, $\sigma_{AC} = -0.00264$, $\sigma_{BC} = 0.00096$.C. $\sigma_{AB} = 0.0174$, $\sigma_{AC} = -0.00264$, $\sigma_{BC} = -0.00096$.D. $\sigma_{AB} = -0.0174$, $\sigma_{AC} = 0.00264$, $\sigma_{BC} = -0.00096$.
###Code
# The justification for this question consists of the calculations needed to reach the result (4%)
# we need to obtain the covariances
sAB =(tabla['prob']*(tabla['A']-EA)*(tabla['B']-EB)).sum()
sAC =(tabla['prob']*(tabla['A']-EA)*(tabla['C']-EC)).sum()
sBC =(tabla['prob']*(tabla['B']-EB)*(tabla['C']-EC)).sum()
sAB,sAC,sBC
###Output
_____no_output_____
###Markdown
The correct answer is C. $\sigma_{AB} = 0.0174$, $\sigma_{AC} = -0.00264$, $\sigma_{BC} = -0.00096$. **Question 4.** Consider the following distribution of returns for assets A, B and C:| Probability | Return A | Return B | Return C || ---------------- | ------------------ | ------------------- | ------------------ || 30% | -0.20 | -0.05 | 0.05 || 40% | 0.05 | 0.10 | 0.03 || 30% | 0.40 | 0.15 | 0.02 |What are the expected return and volatility of a portfolio made up of 20% of asset A, 30% of asset B and 50% of asset C?A. $E[r_P] = 5.53\%$, $\sigma_P=6.39\%$.B. $E[r_P] = 5.53\%$, $\sigma_P=7.71\%$.C. $E[r_P] = 3.55\%$, $\sigma_P=7.71\%$.D. $E[r_P] = 5.35\%$, $\sigma_P=6.39\%$. The correct answer is (4%):
###Code
# The justification for this question consists of the calculations needed to reach the result (4%)
# Method 1
tabla['port']=0.2*tabla['A']+0.3*tabla['B']+0.5*tabla['C']
Eport=(tabla['prob']*tabla['port']).sum()
s_port= ((tabla['port']-Eport)**2*tabla['prob']).sum()**0.5
Eport, s_port
###Output
_____no_output_____
###Markdown
Review class> The goal of this class is to work through a series of theoretical and practical exercises related to the contents of modules 1 and 2, in preparation for the exam.> You are welcome to bring up your own questions on the topics covered in these modules, or exercises from previous classes, past homework and/or quizzes that are still unclear.> The main recommendation for the exam is that you UNDERSTAND every one of the quiz and homework exercises. If all of that is clear, the exam will be a mere formality.___ Assorted quiz-style exercises.Part of the exam consists of exercises similar to those from the quizzes taken in modules 1 and 2. The difference from the quizzes is that, in addition to selecting the answer, you must justify why you chose it.Let's review some exercises similar to those from past quizzes. **Question 1.** Consider the following distribution of returns for assets A, B and C:| Probability | Return A | Return B | Return C || ---------------- | ------------------ | ------------------- | ------------------ || 30% | -0.20 | -0.05 | 0.05 || 40% | 0.05 | 0.10 | 0.03 || 30% | 0.40 | 0.15 | 0.02 |Which of the following statements are correct?A. $E[r_A] = 25.00\%$, $E[r_B] = 20.00\%$, $E[r_C] = 10.00\%$.B. $E[r_A] = 8.00\%$, $E[r_B] = 20.00\%$, $E[r_C] = 3.30\%$.C. $E[r_A] = 25.00\%$, $E[r_B] = 7.00\%$, $E[r_C] = 10.00\%$.D. $E[r_A] = 8.00\%$, $E[r_B] = 7.00\%$, $E[r_C] = 3.30\%$. The correct answer is (4%): D
###Code
# The justification for this question consists of the calculations needed to reach the result (4%)
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
tabla = pd.DataFrame(columns=['prob', 'A', 'B', 'C'])
tabla['prob'] = [0.3, 0.4, 0.3]
tabla['A'] = [-0.2, 0.05, 0.4]
tabla['B'] = [-0.05, 0.1, 0.15]
tabla['C'] = [0.05, 0.03, 0.02]
tabla
EA = (tabla['prob'] * tabla['A']).sum()
EB = (tabla['prob'] * tabla['B']).sum()
EC = (tabla['prob'] * tabla['C']).sum()
EA, EB, EC
###Output
_____no_output_____
###Markdown
**Question 2.** Consider the following distribution of returns for assets A, B and C:| Probability | Return A | Return B | Return C || ---------------- | ------------------ | ------------------- | ------------------ || 30% | -0.20 | -0.05 | 0.05 || 40% | 0.05 | 0.10 | 0.03 || 30% | 0.40 | 0.15 | 0.02 |Which of the following statements are correct?A. $\sigma_A = 27.33\%$, $\sigma_B = 8.12\%$, $\sigma_C = 1.91\%$.B. $\sigma_A = 23.37\%$, $\sigma_B = 8.12\%$, $\sigma_C = 1.19\%$.C. $\sigma_A = 23.37\%$, $\sigma_B = 12.08\%$, $\sigma_C = 1.91\%$.D. $\sigma_A = 27.33\%$, $\sigma_B = 12.08\%$, $\sigma_C = 1.19\%$. The correct answer is (4%): B
###Code
# The justification for this question consists of the calculations needed to reach the result (4%)
sA = ((tabla['A'] - EA)**2 * tabla['prob']).sum()**0.5
sB = ((tabla['B'] - EB)**2 * tabla['prob']).sum()**0.5
sC = ((tabla['C'] - EC)**2 * tabla['prob']).sum()**0.5
sA, sB, sC
###Output
_____no_output_____
###Markdown
**Question 3.** Consider the following distribution of returns for assets A, B and C:| Probability | Return A | Return B | Return C || ---------------- | ------------------ | ------------------- | ------------------ || 30% | -0.20 | -0.05 | 0.05 || 40% | 0.05 | 0.10 | 0.03 || 30% | 0.40 | 0.15 | 0.02 |Which of the following statements are correct?A. $\sigma_{AB} = 0.0174$, $\sigma_{AC} = 0.00264$, $\sigma_{BC} = 0.00096$.B. $\sigma_{AB} = -0.0174$, $\sigma_{AC} = -0.00264$, $\sigma_{BC} = 0.00096$.C. $\sigma_{AB} = 0.0174$, $\sigma_{AC} = -0.00264$, $\sigma_{BC} = -0.00096$.D. $\sigma_{AB} = -0.0174$, $\sigma_{AC} = 0.00264$, $\sigma_{BC} = -0.00096$. The correct answer is (4%): C
###Code
# The justification for this question consists of the calculations needed to reach the result (4%)
sAB = (tabla['prob'] * (tabla['A'] - EA) * (tabla['B'] - EB)).sum()
sAC = (tabla['prob'] * (tabla['A'] - EA) * (tabla['C'] - EC)).sum()
sBC = (tabla['prob'] * (tabla['B'] - EB) * (tabla['C'] - EC)).sum()
sAB, sAC, sBC
###Output
_____no_output_____
###Markdown
**Question 4.** Consider the following distribution of returns for assets A, B and C:| Probability | Return A | Return B | Return C || ---------------- | ------------------ | ------------------- | ------------------ || 30% | -0.20 | -0.05 | 0.05 || 40% | 0.05 | 0.10 | 0.03 || 30% | 0.40 | 0.15 | 0.02 |What are the expected return and volatility of a portfolio made up of 20% of asset A, 30% of asset B and 50% of asset C?A. $E[r_P] = 5.53\%$, $\sigma_P=6.39\%$.B. $E[r_P] = 5.53\%$, $\sigma_P=7.71\%$.C. $E[r_P] = 3.55\%$, $\sigma_P=7.71\%$.D. $E[r_P] = 5.35\%$, $\sigma_P=6.39\%$. The correct answer is (4%): D
###Code
# The justification for this question consists of the calculations needed to reach the result (4%)
# Method 1
tabla['port'] = 0.2 * tabla['A'] + 0.3 * tabla['B'] + 0.5 * tabla['C']
Eport = (tabla['prob'] * tabla['port']).sum()
sport = ((tabla['port'] - Eport)**2 * tabla['prob']).sum()**0.5
Eport, sport
# Another way (matrix form)
E = np.array([EA, EB, EC])
Sigma = np.array([[sA**2, sAB, sAC],
[sAB, sB**2, sBC],
[sAC, sBC, sC**2]])
w = np.array([0.2, 0.3, 0.5])
Eport = E.T.dot(w)
sport = (w.T.dot(Sigma).dot(w))**0.5
Eport, sport
###Output
_____no_output_____
###Markdown
Review class> The goal of this class is to work through a series of theoretical and practical exercises related to the contents of modules 1 and 2, in preparation for the exam.> You are welcome to bring up your own questions on the topics covered in these modules, or exercises from previous classes, past homework and/or quizzes that are still unclear.> The main recommendation for the exam is that you UNDERSTAND every one of the quiz and homework exercises. If all of that is clear, the exam will be a mere formality.___ Assorted quiz-style exercises.Part of the exam consists of exercises similar to those from the quizzes taken in modules 1 and 2. The difference from the quizzes is that, in addition to selecting the answer, you must justify why you chose it.Let's review some exercises similar to those from past quizzes. **Question 1.** Consider the following distribution of returns for assets A, B and C:| Probability | Return A | Return B | Return C || ---------------- | ------------------ | ------------------- | ------------------ || 30% | -0.20 | -0.05 | 0.05 || 40% | 0.05 | 0.10 | 0.03 || 30% | 0.40 | 0.15 | 0.02 |Which of the following statements are correct?A. $E[r_A] = 25.00\%$, $E[r_B] = 20.00\%$, $E[r_C] = 10.00\%$.B. $E[r_A] = 8.00\%$, $E[r_B] = 20.00\%$, $E[r_C] = 3.30\%$.C. $E[r_A] = 25.00\%$, $E[r_B] = 7.00\%$, $E[r_C] = 10.00\%$.D. $E[r_A] = 8.00\%$, $E[r_B] = 7.00\%$, $E[r_C] = 3.30\%$. The correct answer is (4%):
###Code
# The justification for this question consists of the calculations needed to reach the result (4%)
###Output
_____no_output_____
###Markdown
**Question 2.** Consider the following distribution of returns for assets A, B and C:| Probability | Return A | Return B | Return C || ---------------- | ------------------ | ------------------- | ------------------ || 30% | -0.20 | -0.05 | 0.05 || 40% | 0.05 | 0.10 | 0.03 || 30% | 0.40 | 0.15 | 0.02 |Which of the following statements are correct?A. $\sigma_A = 27.33\%$, $\sigma_B = 8.12\%$, $\sigma_C = 1.91\%$.B. $\sigma_A = 23.37\%$, $\sigma_B = 8.12\%$, $\sigma_C = 1.19\%$.C. $\sigma_A = 23.37\%$, $\sigma_B = 12.08\%$, $\sigma_C = 1.91\%$.D. $\sigma_A = 27.33\%$, $\sigma_B = 12.08\%$, $\sigma_C = 1.19\%$. The correct answer is (4%):
###Code
# The justification for this question consists of the calculations needed to reach the result (4%)
###Output
_____no_output_____
###Markdown
**Question 3.** Consider the following distribution of returns for assets A, B and C:| Probability | Return A | Return B | Return C || ---------------- | ------------------ | ------------------- | ------------------ || 30% | -0.20 | -0.05 | 0.05 || 40% | 0.05 | 0.10 | 0.03 || 30% | 0.40 | 0.15 | 0.02 |Which of the following statements are correct?A. $\sigma_{AB} = 0.0174$, $\sigma_{AC} = 0.00264$, $\sigma_{BC} = 0.00096$.B. $\sigma_{AB} = -0.0174$, $\sigma_{AC} = -0.00264$, $\sigma_{BC} = 0.00096$.C. $\sigma_{AB} = 0.0174$, $\sigma_{AC} = -0.00264$, $\sigma_{BC} = -0.00096$.D. $\sigma_{AB} = -0.0174$, $\sigma_{AC} = 0.00264$, $\sigma_{BC} = -0.00096$.
###Code
# The justification for this question consists of the calculations needed to reach the result (4%)
###Output
_____no_output_____
###Markdown
**Question 4.** Consider the following distribution of returns for assets A, B and C:| Probability | Return A | Return B | Return C || ---------------- | ------------------ | ------------------- | ------------------ || 30% | -0.20 | -0.05 | 0.05 || 40% | 0.05 | 0.10 | 0.03 || 30% | 0.40 | 0.15 | 0.02 |What are the expected return and volatility of a portfolio made up of 20% of asset A, 30% of asset B and 50% of asset C?A. $E[r_P] = 5.53\%$, $\sigma_P=6.39\%$.B. $E[r_P] = 5.53\%$, $\sigma_P=7.71\%$.C. $E[r_P] = 3.55\%$, $\sigma_P=7.71\%$.D. $E[r_P] = 5.35\%$, $\sigma_P=6.39\%$. The correct answer is (4%):
###Code
# The justification for this question consists of the calculations needed to reach the result (4%)
###Output
_____no_output_____
###Markdown
Review class> The goal of this class is to work through a series of theoretical and practical exercises related to the contents of modules 1 and 2, in preparation for the exam.> You are welcome to bring up your own questions on the topics covered in these modules, or exercises from previous classes, past homework and/or quizzes that are still unclear.> The main recommendation for the exam is that you UNDERSTAND every one of the quiz and homework exercises. If all of that is clear, the exam will be a mere formality.___ Assorted quiz-style exercises.Part of the exam consists of exercises similar to those from the quizzes taken in modules 1 and 2. The difference from the quizzes is that, in addition to selecting the answer, you must justify why you chose it.Let's review some exercises similar to those from past quizzes. **Question 1.** Consider the following distribution of returns for assets A, B and C:| Probability | Return A | Return B | Return C || ---------------- | ------------------ | ------------------- | ------------------ || 30% | -0.20 | -0.05 | 0.05 || 40% | 0.05 | 0.10 | 0.03 || 30% | 0.40 | 0.15 | 0.02 |Which of the following statements are correct?A. $E[r_A] = 25.00\%$, $E[r_B] = 20.00\%$, $E[r_C] = 10.00\%$.B. $E[r_A] = 8.00\%$, $E[r_B] = 20.00\%$, $E[r_C] = 3.30\%$.C. $E[r_A] = 25.00\%$, $E[r_B] = 7.00\%$, $E[r_C] = 10.00\%$.D. $E[r_A] = 8.00\%$, $E[r_B] = 7.00\%$, $E[r_C] = 3.30\%$. The correct answer is (4%): D
###Code
# The justification for this question consists of the calculations needed to reach the result (4%)
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
tabla = pd.DataFrame(columns=['prob', 'A', 'B', 'C'])
tabla['prob'] = [0.3, 0.4, 0.3]
tabla['A'] = [-0.2, 0.05, 0.4]
tabla['B'] = [-0.05, 0.1, 0.15]
tabla['C'] = [0.05, 0.03, 0.02]
tabla
EA = (tabla['prob'] * tabla['A']).sum()
EB = (tabla['prob'] * tabla['B']).sum()
EC = (tabla['prob'] * tabla['C']).sum()
EA, EB, EC
###Output
_____no_output_____
###Markdown
**Question 2.** Consider the following distribution of returns for assets A, B and C:| Probability | Return A | Return B | Return C || ---------------- | ------------------ | ------------------- | ------------------ || 30% | -0.20 | -0.05 | 0.05 || 40% | 0.05 | 0.10 | 0.03 || 30% | 0.40 | 0.15 | 0.02 |Which of the following statements are correct?A. $\sigma_A = 27.33\%$, $\sigma_B = 8.12\%$, $\sigma_C = 1.91\%$.B. $\sigma_A = 23.37\%$, $\sigma_B = 8.12\%$, $\sigma_C = 1.19\%$.C. $\sigma_A = 23.37\%$, $\sigma_B = 12.08\%$, $\sigma_C = 1.91\%$.D. $\sigma_A = 27.33\%$, $\sigma_B = 12.08\%$, $\sigma_C = 1.19\%$. The correct answer is (4%): B
###Code
# The justification for this question consists of the calculations needed to reach the result (4%)
sA = ((tabla['A'] - EA)**2 * tabla['prob']).sum()**0.5
sB = ((tabla['B'] - EB)**2 * tabla['prob']).sum()**0.5
sC = ((tabla['C'] - EC)**2 * tabla['prob']).sum()**0.5
sA, sB, sC
###Output
_____no_output_____
###Markdown
**Question 3.** Consider the following distribution of returns for assets A, B and C:| Probability | Return A | Return B | Return C || ---------------- | ------------------ | ------------------- | ------------------ || 30% | -0.20 | -0.05 | 0.05 || 40% | 0.05 | 0.10 | 0.03 || 30% | 0.40 | 0.15 | 0.02 |Which of the following statements are correct?A. $\sigma_{AB} = 0.0174$, $\sigma_{AC} = 0.00264$, $\sigma_{BC} = 0.00096$.B. $\sigma_{AB} = -0.0174$, $\sigma_{AC} = -0.00264$, $\sigma_{BC} = 0.00096$.C. $\sigma_{AB} = 0.0174$, $\sigma_{AC} = -0.00264$, $\sigma_{BC} = -0.00096$.D. $\sigma_{AB} = -0.0174$, $\sigma_{AC} = 0.00264$, $\sigma_{BC} = -0.00096$. The correct answer is (4%): C
###Code
# The justification for this question consists of the calculations needed to reach the result (4%)
sAB = (tabla['prob'] * (tabla['A'] - EA) * (tabla['B'] - EB)).sum()
sAC = (tabla['prob'] * (tabla['A'] - EA) * (tabla['C'] - EC)).sum()
sBC = (tabla['prob'] * (tabla['B'] - EB) * (tabla['C'] - EC)).sum()
sAB, sAC, sBC
###Output
_____no_output_____
###Markdown
**Question 4.** Consider the following distribution of returns for assets A, B and C:| Probability | Return A | Return B | Return C || ---------------- | ------------------ | ------------------- | ------------------ || 30% | -0.20 | -0.05 | 0.05 || 40% | 0.05 | 0.10 | 0.03 || 30% | 0.40 | 0.15 | 0.02 |What are the expected return and volatility of a portfolio made up of 20% of asset A, 30% of asset B and 50% of asset C?A. $E[r_P] = 5.53\%$, $\sigma_P=6.39\%$.B. $E[r_P] = 5.53\%$, $\sigma_P=7.71\%$.C. $E[r_P] = 3.55\%$, $\sigma_P=7.71\%$.D. $E[r_P] = 5.35\%$, $\sigma_P=6.39\%$. The correct answer is (4%): D
###Code
# The justification for this question consists of the calculations needed to reach the result (4%)
# Method 1
tabla['port'] = 0.2 * tabla['A'] + 0.3 * tabla['B'] + 0.5 * tabla['C']
Eport = (tabla['prob'] * tabla['port']).sum()
sport = ((tabla['port'] - Eport)**2 * tabla['prob']).sum()**0.5
Eport, sport
# Another way (matrix form)
E = np.array([EA, EB, EC])
Sigma = np.array([[sA**2, sAB, sAC],
[sAB, sB**2, sBC],
[sAC, sBC, sC**2]])
w = np.array([0.2, 0.3, 0.5])
Eport = E.T.dot(w)
sport = (w.T.dot(Sigma).dot(w))**0.5
Eport, sport
###Output
_____no_output_____ |
training-efficientdet.ipynb | ###Markdown
Really good training pipeline for pytorch EfficientDet Hi everyone!My name is Alex Shonenkov, I am a DL/NLP/CV/TS research engineer. I am especially in love with NLP & DL.Recently I created a kernel for this competition about Weighted Boxes Fusion:- [WBF approach for ensemble](https://www.kaggle.com/shonenkov/wbf-approach-for-ensemble)I hope it is useful for you, my friends! If you didn't read this kernel, don't forget to do it! :)Today I would like to share a really good training pipeline for this competition using the SOTA [EfficientDet: Scalable and Efficient Object Detection](https://arxiv.org/pdf/1911.09070.pdf) Main Idea I read [all public kernels about EfficientDet in the kaggle community](https://www.kaggle.com/search?q=efficientdet+in%3Anotebooks) and realized that kaggle doesn't have really good working public kernels with a good score. Why? You can see in the picture below the COCO AP for different architectures; I think everyone should be able to use such a strong tool as EfficientDet for their own research, so let's do it! Dependencies and imports
###Code
import sys
sys.path.insert(0, "timm-efficientdet-pytorch")
sys.path.insert(0, "omegaconf")
import torch
import os
from datetime import datetime
import time
import random
import cv2
import pandas as pd
import numpy as np
import albumentations as A
import matplotlib.pyplot as plt
from albumentations.pytorch.transforms import ToTensorV2
from sklearn.model_selection import StratifiedKFold
from torch.utils.data import Dataset,DataLoader
from torch.utils.data.sampler import SequentialSampler, RandomSampler
from glob import glob
SEED = 42
def seed_everything(seed):
random.seed(seed)
os.environ['PYTHONHASHSEED'] = str(seed)
np.random.seed(seed)
torch.manual_seed(seed)
torch.cuda.manual_seed(seed)
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = True
seed_everything(SEED)
marking = pd.read_csv('/home/hy/dataset/gwd/train.csv')
bboxs = np.stack(marking['bbox'].apply(lambda x: np.fromstring(x[1:-1], sep=',')))
for i, column in enumerate(['x', 'y', 'w', 'h']):
marking[column] = bboxs[:,i]
marking.drop(columns=['bbox'], inplace=True)
###Output
_____no_output_____
###Markdown
About data splitting you can read [here](https://www.kaggle.com/shonenkov/wbf-approach-for-ensemble):
###Code
skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
df_folds = marking[['image_id']].copy()
df_folds.loc[:, 'bbox_count'] = 1
df_folds = df_folds.groupby('image_id').count()
df_folds.loc[:, 'source'] = marking[['image_id', 'source']].groupby('image_id').min()['source']
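# Stratification key below: combine the image source with the bbox count binned in steps of 15, so every fold gets a similar mix of sources and box densities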
df_folds.loc[:, 'stratify_group'] = np.char.add(
df_folds['source'].values.astype(str),
df_folds['bbox_count'].apply(lambda x: f'_{x // 15}').values.astype(str)
)
df_folds.loc[:, 'fold'] = 0
for fold_number, (train_index, val_index) in enumerate(skf.split(X=df_folds.index, y=df_folds['stratify_group'])):
df_folds.loc[df_folds.iloc[val_index].index, 'fold'] = fold_number
###Output
/home/hy/anaconda3/envs/badeda/lib/python3.7/site-packages/sklearn/model_selection/_split.py:667: UserWarning: The least populated class in y has only 1 members, which is less than n_splits=5.
% (min_groups, self.n_splits)), UserWarning)
###Markdown
Albumentations
###Code
def get_train_transforms():
return A.Compose(
[
A.RandomSizedCrop(min_max_height=(800, 800), height=1024, width=1024, p=0.5),
A.OneOf([
A.HueSaturationValue(hue_shift_limit=0.2, sat_shift_limit= 0.2,
val_shift_limit=0.2, p=0.9),
A.RandomBrightnessContrast(brightness_limit=0.2,
contrast_limit=0.2, p=0.9),
],p=0.9),
A.ToGray(p=0.01),
A.HorizontalFlip(p=0.5),
A.VerticalFlip(p=0.5),
A.Resize(height=512, width=512, p=1),
A.Cutout(num_holes=8, max_h_size=64, max_w_size=64, fill_value=0, p=0.5),
ToTensorV2(p=1.0),
],
p=1.0,
bbox_params=A.BboxParams(
format='pascal_voc',
min_area=0,
min_visibility=0,
label_fields=['labels']
)
)
def get_valid_transforms():
return A.Compose(
[
A.Resize(height=512, width=512, p=1.0),
ToTensorV2(p=1.0),
],
p=1.0,
bbox_params=A.BboxParams(
format='pascal_voc',
min_area=0,
min_visibility=0,
label_fields=['labels']
)
)
###Output
_____no_output_____
###Markdown
Dataset
###Code
TRAIN_ROOT_PATH = '/home/hy/dataset/gwd/train'
class DatasetRetriever(Dataset):
def __init__(self, marking, image_ids, transforms=None, test=False):
super().__init__()
self.image_ids = image_ids
self.marking = marking
self.transforms = transforms
self.test = test
def __getitem__(self, index: int):
image_id = self.image_ids[index]
if self.test or random.random() > 0.5:
image, boxes = self.load_image_and_boxes(index)
else:
image, boxes = self.load_cutmix_image_and_boxes(index)
# there is only one class
labels = torch.ones((boxes.shape[0],), dtype=torch.int64)
target = {}
target['boxes'] = boxes
target['labels'] = labels
target['image_id'] = torch.tensor([index])
if self.transforms:
for i in range(10):
sample = self.transforms(**{
'image': image,
'bboxes': target['boxes'],
'labels': labels
})
if len(sample['bboxes']) > 0:
image = sample['image']
target['boxes'] = torch.stack(tuple(map(torch.tensor, zip(*sample['bboxes'])))).permute(1, 0)
target['boxes'][:,[0,1,2,3]] = target['boxes'][:,[1,0,3,2]] #yxyx: be warning
break
return image, target, image_id
def __len__(self) -> int:
return self.image_ids.shape[0]
def load_image_and_boxes(self, index):
image_id = self.image_ids[index]
image = cv2.imread(f'{TRAIN_ROOT_PATH}/{image_id}.jpg', cv2.IMREAD_COLOR)
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB).astype(np.float32)
image /= 255.0
records = self.marking[self.marking['image_id'] == image_id]
boxes = records[['x', 'y', 'w', 'h']].values
boxes[:, 2] = boxes[:, 0] + boxes[:, 2]
boxes[:, 3] = boxes[:, 1] + boxes[:, 3]
return image, boxes
def load_cutmix_image_and_boxes(self, index, imsize=1024):
"""
Original cutmix implementation author: https://www.kaggle.com/nvnnghia
Refactoring and adaptation: https://www.kaggle.com/shonenkov
"""
w, h = imsize, imsize
s = imsize // 2
xc, yc = [int(random.uniform(imsize * 0.25, imsize * 0.75)) for _ in range(2)] # center x, y
indexes = [index] + [random.randint(0, self.image_ids.shape[0] - 1) for _ in range(3)]
result_image = np.full((imsize, imsize, 3), 1, dtype=np.float32)
result_boxes = []
for i, index in enumerate(indexes):
image, boxes = self.load_image_and_boxes(index)
if i == 0:
x1a, y1a, x2a, y2a = max(xc - w, 0), max(yc - h, 0), xc, yc # xmin, ymin, xmax, ymax (large image)
x1b, y1b, x2b, y2b = w - (x2a - x1a), h - (y2a - y1a), w, h # xmin, ymin, xmax, ymax (small image)
elif i == 1: # top right
x1a, y1a, x2a, y2a = xc, max(yc - h, 0), min(xc + w, s * 2), yc
x1b, y1b, x2b, y2b = 0, h - (y2a - y1a), min(w, x2a - x1a), h
elif i == 2: # bottom left
x1a, y1a, x2a, y2a = max(xc - w, 0), yc, xc, min(s * 2, yc + h)
x1b, y1b, x2b, y2b = w - (x2a - x1a), 0, max(xc, w), min(y2a - y1a, h)
elif i == 3: # bottom right
x1a, y1a, x2a, y2a = xc, yc, min(xc + w, s * 2), min(s * 2, yc + h)
x1b, y1b, x2b, y2b = 0, 0, min(w, x2a - x1a), min(y2a - y1a, h)
result_image[y1a:y2a, x1a:x2a] = image[y1b:y2b, x1b:x2b]
padw = x1a - x1b
padh = y1a - y1b
boxes[:, 0] += padw
boxes[:, 1] += padh
boxes[:, 2] += padw
boxes[:, 3] += padh
result_boxes.append(boxes)
result_boxes = np.concatenate(result_boxes, 0)
np.clip(result_boxes[:, 0:], 0, 2 * s, out=result_boxes[:, 0:])
result_boxes = result_boxes.astype(np.int32)
result_boxes = result_boxes[np.where((result_boxes[:,2]-result_boxes[:,0])*(result_boxes[:,3]-result_boxes[:,1]) > 0)]
return result_image, result_boxes
fold_number = 2
print("fold_number:",fold_number)
train_dataset = DatasetRetriever(
image_ids=df_folds[df_folds['fold'] != fold_number].index.values,
marking=marking,
transforms=get_train_transforms(),
test=False,
)
validation_dataset = DatasetRetriever(
image_ids=df_folds[df_folds['fold'] == fold_number].index.values,
marking=marking,
transforms=get_valid_transforms(),
test=True,
)
image, target, image_id = train_dataset[1]
boxes = target['boxes'].cpu().numpy().astype(np.int32)
numpy_image = image.permute(1,2,0).cpu().numpy()
fig, ax = plt.subplots(1, 1, figsize=(16, 8))
for box in boxes:
cv2.rectangle(numpy_image, (box[1], box[0]), (box[3], box[2]), (0, 1, 0), 2)
ax.set_axis_off()
ax.imshow(numpy_image);
###Output
_____no_output_____
###Markdown
Fitter
###Code
class AverageMeter(object):
"""Computes and stores the average and current value"""
def __init__(self):
self.reset()
def reset(self):
self.val = 0
self.avg = 0
self.sum = 0
self.count = 0
def update(self, val, n=1):
self.val = val
self.sum += val * n
self.count += n
self.avg = self.sum / self.count
import torch
from torch.optim.optimizer import Optimizer
class QHAdamW(Optimizer):
r"""
Combines the weight decay decoupling from AdamW (Decoupled Weight Decay Regularization. Loshchilov and Hutter, 2019)
with QHAdam (Quasi-hyperbolic momentum and Adam for deep learning. Ma and Yarats, 2019).
Args:
params (iterable):
iterable of parameters to optimize or dicts defining parameter
groups
lr (float, optional): learning rate (:math:`\alpha` from the paper)
(default: 1e-3)
betas (Tuple[float, float], optional): coefficients used for computing
running averages of the gradient and its square
(default: (0.995, 0.999))
nus (Tuple[float, float], optional): immediate discount factors used to
estimate the gradient and its square
(default: (0.7, 1.0))
eps (float, optional): term added to the denominator to improve
numerical stability
(default: 1e-8)
weight_decay (float, optional): weight decay
(L2 regularization coefficient, times two)
(default: 0.0)
Example:
>>> optimizer = QHAdamW(
... model.parameters(),
... lr=3e-4, nus=(0.8, 1.0), betas=(0.99, 0.999))
>>> optimizer.zero_grad()
>>> loss_fn(model(input), target).backward()
>>> optimizer.step()
QHAdam paper:
.. _`(Ma and Yarats, 2019)`: https://arxiv.org/abs/1810.06801
AdamW paper:
.. _`(Loshchilov and Hutter, 2019)`: https://arxiv.org/abs/1711.05101
"""
def __init__(self, params, lr=1e-3, betas=(0.995, 0.999), nus=(0.7, 1.0), weight_decay=0.0, eps=1e-8):
if not 0.0 <= lr:
raise ValueError("Invalid learning rate: {}".format(lr))
if not 0.0 <= eps:
raise ValueError("Invalid epsilon value: {}".format(eps))
if not 0.0 <= betas[0] < 1.0:
raise ValueError("Invalid beta parameter at index 0: {}".format(betas[0]))
if not 0.0 <= betas[1] < 1.0:
raise ValueError("Invalid beta parameter at index 1: {}".format(betas[1]))
if weight_decay < 0.0:
raise ValueError("Invalid weight_decay value: {}".format(weight_decay))
defaults = {"lr": lr, "betas": betas, "nus": nus, "weight_decay": weight_decay, "eps": eps}
super(QHAdamW, self).__init__(params, defaults)
def step(self, closure=None):
"""Performs a single optimization step.
Args:
closure (callable, optional):
A closure that reevaluates the model and returns the loss.
"""
loss = None
if closure is not None:
loss = closure()
for group in self.param_groups:
lr = group["lr"]
beta1, beta2 = group["betas"]
nu1, nu2 = group["nus"]
weight_decay = group["weight_decay"]
eps = group["eps"]
for p in group["params"]:
if p.grad is None:
continue
d_p = p.grad.data
if d_p.is_sparse:
raise RuntimeError("QHAdamW does not support sparse gradients")
param_state = self.state[p]
# Original QHAdam implementation for weight decay:
# if weight_decay != 0:
# d_p.add_(weight_decay, p.data)
d_p_sq = d_p.mul(d_p)
if len(param_state) == 0:
param_state["beta1_weight"] = 0.0
param_state["beta2_weight"] = 0.0
param_state["exp_avg"] = torch.zeros_like(p.data)
param_state["exp_avg_sq"] = torch.zeros_like(p.data)
param_state["beta1_weight"] = 1.0 + beta1 * param_state["beta1_weight"]
param_state["beta2_weight"] = 1.0 + beta2 * param_state["beta2_weight"]
beta1_weight = param_state["beta1_weight"]
beta2_weight = param_state["beta2_weight"]
exp_avg = param_state["exp_avg"]
exp_avg_sq = param_state["exp_avg_sq"]
beta1_adj = 1.0 - (1.0 / beta1_weight)
beta2_adj = 1.0 - (1.0 / beta2_weight)
exp_avg.mul_(beta1_adj).add_(1.0 - beta1_adj, d_p)
exp_avg_sq.mul_(beta2_adj).add_(1.0 - beta2_adj, d_p_sq)
avg_grad = exp_avg.mul(nu1)
if nu1 != 1.0:
avg_grad.add_(1.0 - nu1, d_p)
avg_grad_rms = exp_avg_sq.mul(nu2)
if nu2 != 1.0:
avg_grad_rms.add_(1.0 - nu2, d_p_sq)
avg_grad_rms.sqrt_()
if eps != 0.0:
avg_grad_rms.add_(eps)
# Original QHAdam implementation:
# p.data.addcdiv_(-lr, avg_grad, avg_grad_rms)
# Implementation following AdamW paper:
p.data.add_(-weight_decay, p.data).addcdiv_(-lr, avg_grad, avg_grad_rms)
return loss
import warnings
warnings.filterwarnings("ignore")
class Fitter:
def __init__(self, model, device, config):
self.config = config
self.epoch = 0
self.base_dir = f'./{config.folder}'
if not os.path.exists(self.base_dir):
os.makedirs(self.base_dir)
self.log_path = f'{self.base_dir}/log.txt'
self.best_summary_loss = 10**5
self.model = model
self.device = device
param_optimizer = list(self.model.named_parameters())
no_decay = ['bias', 'LayerNorm.bias', 'LayerNorm.weight']
optimizer_grouped_parameters = [
{'params': [p for n, p in param_optimizer if not any(nd in n for nd in no_decay)], 'weight_decay': 0.001},
{'params': [p for n, p in param_optimizer if any(nd in n for nd in no_decay)], 'weight_decay': 0.0}
]
#self.optimizer = torch.optim.AdamW(self.model.parameters(), lr=config.lr)
self.optimizer = QHAdamW(self.model.parameters(), lr=config.lr)
self.scheduler = config.SchedulerClass(self.optimizer, **config.scheduler_params)
self.log(f'Fitter prepared. Device is {self.device}')
def fit(self, train_loader, validation_loader):
for e in range(self.config.n_epochs):
if self.config.verbose:
lr = self.optimizer.param_groups[0]['lr']
timestamp = datetime.utcnow().isoformat()
self.log(f'\n{timestamp}\nLR: {lr}')
t = time.time()
summary_loss = self.train_one_epoch(train_loader)
self.log(f'[RESULT]: Train. Epoch: {self.epoch}, summary_loss: {summary_loss.avg:.5f}, time: {(time.time() - t):.5f}')
self.save(f'{self.base_dir}/last-checkpoint.bin')
t = time.time()
summary_loss = self.validation(validation_loader)
self.log(f'[RESULT]: Val. Epoch: {self.epoch}, summary_loss: {summary_loss.avg:.5f}, time: {(time.time() - t):.5f}')
if summary_loss.avg < self.best_summary_loss:
self.best_summary_loss = summary_loss.avg
self.model.eval()
self.save(f'{self.base_dir}/best-checkpoint-{str(self.epoch).zfill(3)}epoch.bin')
for path in sorted(glob(f'{self.base_dir}/best-checkpoint-*epoch.bin'))[:-3]:
os.remove(path)
if self.config.validation_scheduler:
self.scheduler.step(metrics=summary_loss.avg)
self.epoch += 1
def validation(self, val_loader):
self.model.eval()
summary_loss = AverageMeter()
t = time.time()
for step, (images, targets, image_ids) in enumerate(val_loader):
if self.config.verbose:
if step % self.config.verbose_step == 0:
print(
f'Val Step {step}/{len(val_loader)}, ' + \
f'summary_loss: {summary_loss.avg:.5f}, ' + \
f'time: {(time.time() - t):.5f}', end='\r'
)
with torch.no_grad():
images = torch.stack(images)
batch_size = images.shape[0]
images = images.to(self.device).float()
boxes = [target['boxes'].to(self.device).float() for target in targets]
labels = [target['labels'].to(self.device).float() for target in targets]
loss, _, _ = self.model(images, boxes, labels)
summary_loss.update(loss.detach().item(), batch_size)
return summary_loss
def train_one_epoch(self, train_loader):
self.model.train()
summary_loss = AverageMeter()
t = time.time()
for step, (images, targets, image_ids) in enumerate(train_loader):
if self.config.verbose:
if step % self.config.verbose_step == 0:
print(
f'Train Step {step}/{len(train_loader)}, ' + \
f'summary_loss: {summary_loss.avg:.5f}, ' + \
f'time: {(time.time() - t):.5f}', end='\r'
)
images = torch.stack(images)
images = images.to(self.device).float()
batch_size = images.shape[0]
boxes = [target['boxes'].to(self.device).float() for target in targets]
labels = [target['labels'].to(self.device).float() for target in targets]
self.optimizer.zero_grad()
loss, _, _ = self.model(images, boxes, labels)
loss.backward()
summary_loss.update(loss.detach().item(), batch_size)
self.optimizer.step()
if self.config.step_scheduler:
self.scheduler.step()
return summary_loss
def save(self, path):
self.model.eval()
torch.save({
'model_state_dict': self.model.model.state_dict(),
'optimizer_state_dict': self.optimizer.state_dict(),
'scheduler_state_dict': self.scheduler.state_dict(),
'best_summary_loss': self.best_summary_loss,
'epoch': self.epoch,
}, path)
def load(self, path):
checkpoint = torch.load(path)
self.model.model.load_state_dict(checkpoint['model_state_dict'])
self.optimizer.load_state_dict(checkpoint['optimizer_state_dict'])
self.scheduler.load_state_dict(checkpoint['scheduler_state_dict'])
self.best_summary_loss = checkpoint['best_summary_loss']
self.epoch = checkpoint['epoch'] + 1
def log(self, message):
if self.config.verbose:
print(message)
with open(self.log_path, 'a+') as logger:
logger.write(f'{message}\n')
class TrainGlobalConfig:
num_workers = 4
batch_size = 8
#n_epochs = 40
n_epochs = 50
lr = 0.0002
folder = '0522_effdet5-cutmix-augmix_f2'
# -------------------
verbose = True
verbose_step = 1
# -------------------
# --------------------
step_scheduler = False # do scheduler.step after optimizer.step
validation_scheduler = True # do scheduler.step after validation stage loss
# SchedulerClass = torch.optim.lr_scheduler.OneCycleLR
# scheduler_params = dict(
# max_lr=0.001,
# epochs=n_epochs,
# steps_per_epoch=int(len(train_dataset) / batch_size),
# pct_start=0.1,
# anneal_strategy='cos',
# final_div_factor=10**5
# )
SchedulerClass = torch.optim.lr_scheduler.ReduceLROnPlateau
scheduler_params = dict(
mode='min',
factor=0.5,
patience=1,
verbose=False,
threshold=0.0001,
threshold_mode='abs',
cooldown=0,
min_lr=1e-8,
eps=1e-08
)
# --------------------
def collate_fn(batch):
return tuple(zip(*batch))
def run_training():
device = torch.device('cuda:0')
net.to(device)
train_loader = torch.utils.data.DataLoader(
train_dataset,
batch_size=TrainGlobalConfig.batch_size,
sampler=RandomSampler(train_dataset),
pin_memory=False,
drop_last=True,
num_workers=TrainGlobalConfig.num_workers,
collate_fn=collate_fn,
)
val_loader = torch.utils.data.DataLoader(
validation_dataset,
batch_size=TrainGlobalConfig.batch_size,
num_workers=TrainGlobalConfig.num_workers,
shuffle=False,
sampler=SequentialSampler(validation_dataset),
pin_memory=False,
collate_fn=collate_fn,
)
fitter = Fitter(model=net, device=device, config=TrainGlobalConfig)
fitter.fit(train_loader, val_loader)
from effdet import get_efficientdet_config, EfficientDet, DetBenchTrain
from effdet.efficientdet import HeadNet
def get_net():
config = get_efficientdet_config('tf_efficientdet_d5')
net = EfficientDet(config, pretrained_backbone=False)
checkpoint = torch.load('efficientdet_d5-ef44aea8.pth')
net.load_state_dict(checkpoint)
config.num_classes = 1
config.image_size = 512
net.class_net = HeadNet(config, num_outputs=config.num_classes, norm_kwargs=dict(eps=.001, momentum=.01))
return DetBenchTrain(net, config)
net = get_net()
run_training()
###Output
_____no_output_____ |
CSE20_Applied-Functions-String-Formatting.ipynb | ###Markdown
Beginning Programming in Python Applied Functions/String Formatting CSE20 - Spring 2021Interactive Slides: [https://tinyurl.com/cse20-spr21-applied-func-str](https://tinyurl.com/cse20-spr21-applied-func-str) Documenting Your Functions- To help users of your function understand how to use it, it is a good practice to document your functions using `docstrings`- These are then accessible via the `.__doc__` attribute and via `help()`A `docstring` is a triple-quoted string right below the first line of the function declaration.```pythondef some_f(arg1, arg2): """A one sentence description of what the function does args: arg1 (type of arg1): Description of what the argument is arg2 (type of arg2): Description of what the argument is returns: A description of what the function returns """``` Documenting Your Functions
###Code
def addition(num1, num2):
"""Returns the sum of `num1` and `num2`.
args:
num1 (int): The first number to be added
num2 (int): The second number to be added
returns:
The sum of `num1` and `num2`, ie num1+num2
"""
return num1 + num2
help(addition)
###Output
Help on function addition in module __main__:
addition(num1, num2)
Returns the sum of `num1` and `num2`.
args:
num1 (int): The first number to be added
num2 (int): The second number to be added
returns:
The sum of `num1` and `num2`, ie num1+num2
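###Markdown
The same docstring can also be read programmatically through the `__doc__` attribute mentioned above; a minimal sketch reusing the `addition` function defined earlier:
###Code
# Print the raw docstring string directly (help() formats this same text)
print(addition.__doc__)
###Output
_____no_output_____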
###Markdown
Slug Money: A Program To Track Your Finances Where we are now- Store any number of transactions- Infinitely validate user input What we can add using the last two weeks' techniques- Add functions- Add better output formatting
###Code
transactions = {
"w": [],
"d": []
}
print("Welcome To Slug Money")
current_balance = float(input("Enter your balance: $"))
new_balance = current_balance
t_type = input("Transaction type:(d)eposit, (w)ithdrawal, (q)uit:")
while t_type != "q":
transaction_amount = float(input("Enter a transaction amount: $"))
while transaction_amount <= 0:
transaction_amount = float(input("Please enter strictly positive values: $"))
# update balance
if t_type == "d":
new_balance = current_balance + transaction_amount
else:
new_balance = current_balance - transaction_amount
transactions[t_type].append(transaction_amount)
t_type = input("Transaction type:(d)eposit, (w)ithdrawal, (q)uit:")
print()
print("Your starting balance was: $", current_balance)
print("Deposit Transactions:")
for deposit in transactions["d"]:
print(deposit)
print("Withdrawal Transactions:")
for withdrawal in transactions["w"]:
print(withdrawal)
print("Your ending balance is: $", new_balance)
print("Thanks for using Slug Money!")
green = lambda s:"\033[32m {}\033[00m" .format(s)
def is_valid_number(num_str):
return set(num_str).issubset(set("0123456789.")) and num_str.count(".")<2
def get_decimal_input(msg):
user_input = input(msg)
while not is_valid_number(user_input):
user_input = input(msg)
return float(user_input)
def get_valid_string_input(msg, valid_values):
user_input = input(msg).lower()
while user_input not in valid_values:
user_input = input(msg).lower()
return user_input
transactions = []
print(green("$" * 50))
print("{:^50}".format("Welcome to Slug Money"))
print(green("$" * 50))
current_balance = get_decimal_input("Enter your balance: $")
new_balance = current_balance
transaction_msg = "Transaction type:(d)eposit, (w)ithdrawal, (q)uit:"
t_type = get_valid_string_input(transaction_msg, ["d", "w", "q"])
while t_type != "q":
transaction_amount = get_decimal_input("Enter transaction amount: $")
transactions.append((t_type, transaction_amount))
t_type = get_valid_string_input(transaction_msg, ["d", "w", "q"])
print("{:=^50}".format("Statement"))
print("Your starting balance was: ${:.2f}".format(current_balance))
print("{:>40}{:>10}".format("Transaction Amount", "Balance"))
for t_type, amount in transactions:
if t_type=="d":
new_balance += amount
amount_str = "{:.2f}".format(amount)
else:
new_balance -= amount
amount_str = "({:.2f})".format(amount)
print("{:>40}{:>10.2f}".format(amount_str, new_balance))
print("Your ending balance is: ${:.2f}".format(new_balance))
print("Thanks for using Slug Money!")
###Output
[32m $$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$[00m
Welcome to Slug Money
[32m $$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$[00m
Enter your balance: $12
Transaction type:(d)eposit, (w)ithdrawal, (q)uit:d
Enter transaction amount: $1
Transaction type:(d)eposit, (w)ithdrawal, (q)uit:q
====================Statement=====================
Your starting balance was: $ 12.0
Transaction Amount Balance
1.00 13.00
Your ending balance is: $13.00
Thanks for using Slug Money!
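###Markdown
The statement above relies on Python's format-spec mini-language: right after the `:` you may give an optional fill character and an alignment (`<`, `^`, `>`), then a field width, and `.2f` rounds to two decimal places. A minimal sketch with illustrative values (not tied to the Slug Money data):
###Code
# Format-spec examples (hypothetical values, for illustration only)
print("{:^20}".format("title"))      # centered in a 20-character field
print("{:=^20}".format("title"))     # centered, padded with '='
print("{:>12}".format("right"))      # right-aligned in a 12-character field
print("{:>12.2f}".format(3.14159))   # right-aligned float rounded to 2 decimals
###Output
_____no_output_____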
|
models/par_gen/test.ipynb | ###Markdown
Interactive parameter exploration for SIR modelhttps://github.com/bloomberg/bqplothttps://ipywidgets.readthedocs.io/
###Code
import math
import bqplot
import ipywidgets as widgets
import numpy as np
from scipy.integrate import odeint
import matplotlib.pyplot as plt
# Fixed parameters
N = 200 # Number of participants
Tmax = 120 # total duration of the simulation
# Tunable parameters
Tinf = 10 # infectious time
I0 = 1 # number of initial cases
Itot = 150 # total number of cases
cr = 0.005 # per capita contact rate,
# cr x N is the number of contacts per unit of time an infectious individual makes
S0 = N - I0
R0 = N - I0 - S0
Send = N - Itot
z = Itot/math.log(S0/Send)
print("gamma/beta =", z)
print("R0 =", S0/z)
Imax = N - z + z * math.log(z) - z * math.log(S0)
print("Imax =", Imax)
gamma = 1/Tinf
beta = gamma/z
print("beta =", beta)
p = beta/cr # probability that a contact with a susceptible individual results in transmission
print("Probability of infection =", p)
# https://scipython.com/book/chapter-8-scipy/additional-examples/the-sir-epidemic-model/
# S'(t) = −beta * I * S
# I'(t) = beta * I S − gamma * I
# R'(t) = gamma * I
# S(0) = S0
# I(0) = I0
# R(0) = N - S0 - I0 = 0
# A grid of time points (in minutes)
t = np.linspace(0, Tmax, Tmax)
# The SIR model differential equations.
def deriv(y, t, N, beta, gamma):
S, I, R = y
dSdt = -beta * S * I
dIdt = beta * S * I - gamma * I
dRdt = gamma * I
return dSdt, dIdt, dRdt
# Initial conditions vector
y0 = S0, I0, R0
# Integrate the SIR equations over the time grid, t.
ret = odeint(deriv, y0, t, args=(N, beta, gamma))
S, I, R = ret.T
# Plot the data on three separate curves for S(t), I(t) and R(t)
fig, ax = plt.subplots(figsize=(10, 8))
ax.plot(t, S, 'b', alpha=0.5, lw=2, label='Susceptible')
ax.plot(t, I, 'r', alpha=0.5, lw=2, label='Infected')
ax.plot(t, R, 'g', alpha=0.5, lw=2, label='Removed')
ax.set_xlabel('Minutes')
ax.set_ylabel('Number')
ax.set_ylim(0, N)
ax.yaxis.set_tick_params(length=0)
ax.xaxis.set_tick_params(length=0)
ax.grid(b=True, which='major', c='w', lw=2, ls='-')
legend = ax.legend()
legend.get_frame().set_alpha(0.5)
# for spine in ('top', 'right', 'bottom', 'left'):
# ax.spines[spine].set_visible(False)
plt.show()
%matplotlib inline
from ipywidgets import interactive
import matplotlib.pyplot as plt
import numpy as np
def f(m, b):
plt.figure(2)
x = np.linspace(-10, 10, num=1000)
plt.plot(x, m * x + b)
plt.ylim(-5, 5)
plt.show()
interactive_plot = interactive(f, m=(-2.0, 2.0), b=(-3, 3, 0.5))
output = interactive_plot.children[-1]
output.layout.height = '350px'
interactive_plot
%matplotlib inline
from ipywidgets import interactive
import matplotlib.pyplot as plt
import numpy as np
# Using Tinf as parameter
def funp(Tinf=Tmax/10, I0=1, Itot=0.8*N, cr=1/N):
S0 = N - I0
R0 = N - I0 - S0
Send = N - Itot
z = Itot/math.log(S0/Send)
BRN = S0/z
print("gamma/beta =", z)
print("Basic reproductive number =", BRN)
Imax = N - z + z * math.log(z) - z * math.log(S0)
print("Imax = ", Imax)
gamma = 1/Tinf
beta = gamma/z
print("beta =", beta)
p = beta/cr # probability that a contact with a susceptible individual results in transmission
print("Probability of infection =", p)
# Initial conditions vector
y0 = S0, I0, R0
# Integrate the SIR equations over the time grid, t.
ret = odeint(deriv, y0, t, args=(N, beta, gamma))
S, I, R = ret.T
# Plot the data on three separate curves for S(t), I(t) and R(t)
fig, ax = plt.subplots(figsize=(10, 8))
ax.plot(t, S, 'b', alpha=0.5, lw=2, label='Susceptible')
ax.plot(t, I, 'r', alpha=0.5, lw=2, label='Infected')
ax.plot(t, R, 'g', alpha=0.5, lw=2, label='Removed')
ax.set_xlabel('Minutes')
ax.set_ylabel('Number')
ax.set_ylim(0, N)
ax.yaxis.set_tick_params(length=0)
ax.xaxis.set_tick_params(length=0)
ax.grid(b=True, which='major', c='w', lw=2, ls='-')
legend = ax.legend()
legend.get_frame().set_alpha(0.5)
plt.show()
interactive_plot = interactive(funp, Tinf=(1.0, Tmax/2), I0=(1, 10), Itot=(0, N-1), cr=(0.1/N, 10/N, 0.1/N))
output = interactive_plot.children[-1]
output.layout.height = '700px'
interactive_plot
###Output
_____no_output_____ |
visualize_confs.ipynb | ###Markdown
let's look at some of the (hopefully) pretty molecules we generate!
###Code
from rdkit import Chem
import pickle
from ipywidgets import interact, fixed, IntSlider
import ipywidgets
import py3Dmol
def show_mol(mol, view, grid):
mb = Chem.MolToMolBlock(mol)
view.removeAllModels(viewer=grid)
view.addModel(mb,'sdf', viewer=grid)
view.setStyle({'model':0},{'stick': {}}, viewer=grid)
view.zoomTo(viewer=grid)
return view
def view_single(mol):
view = py3Dmol.view(width=600, height=600, linked=False, viewergrid=(1,1))
show_mol(mol, view, grid=(0, 0))
return view
def MolTo3DView(mol, size=(600, 600), style="stick", surface=False, opacity=0.5, confId=0):
"""Draw molecule in 3D
Args:
----
mol: rdMol, molecule to show
size: tuple(int, int), canvas size
style: str, type of drawing molecule
style can be 'line', 'stick', 'sphere', 'cartoon'
surface, bool, display SAS
opacity, float, opacity of surface, range 0.0-1.0
Return:
----
viewer: py3Dmol.view, a class for constructing embedded 3Dmol.js views in ipython notebooks.
"""
assert style in ('line', 'stick', 'sphere', 'cartoon')
mol[confId] = Chem.RemoveHs(mol[confId])
mblock = Chem.MolToMolBlock(mol[confId])
viewer = py3Dmol.view(width=size[0], height=size[1])
viewer.addModel(mblock, 'mol')
viewer.setStyle({style:{}})
if surface:
viewer.addSurface(py3Dmol.SAS, {'opacity': opacity})
viewer.zoomTo()
return viewer
def conf_viewer(idx, mol):
return MolTo3DView(mol, confId=idx).show()
with open('test_run/test_mols.pkl', 'rb') as f:
test_mols = pickle.load(f)
smiles = list(test_mols.keys())
test_idx = 850
smi = smiles[test_idx]
print(smi)
mol_graph = Chem.MolFromSmiles(smi)
display(mol_graph)
mols = test_mols[smi]
MolTo3DView(mols)
#interact(conf_viewer, idx=ipywidgets.IntSlider(min=0, max=len(mols)-1, step=1), mol=fixed(mols));
###Output
O=C[C@@H]1C[C@@]12OC[C@@H]2O
###Markdown
let's look at some of the (hopefully) pretty molecules we generate!
###Code
from rdkit import Chem
import pickle
from ipywidgets import interact, fixed, IntSlider
import ipywidgets
import py3Dmol
def show_mol(mol, view, grid):
mb = Chem.MolToMolBlock(mol)
view.removeAllModels(viewer=grid)
view.addModel(mb,'sdf', viewer=grid)
view.setStyle({'model':0},{'stick': {}}, viewer=grid)
view.zoomTo(viewer=grid)
return view
def view_single(mol):
view = py3Dmol.view(width=600, height=600, linked=False, viewergrid=(1,1))
show_mol(mol, view, grid=(0, 0))
return view
def MolTo3DView(mol, size=(600, 600), style="stick", surface=False, opacity=0.5, confId=0):
"""Draw molecule in 3D
Args:
----
mol: rdMol, molecule to show
size: tuple(int, int), canvas size
style: str, type of drawing molecule
style can be 'line', 'stick', 'sphere', 'cartoon'
surface, bool, display SAS
opacity, float, opacity of surface, range 0.0-1.0
Return:
----
viewer: py3Dmol.view, a class for constructing embedded 3Dmol.js views in ipython notebooks.
"""
assert style in ('line', 'stick', 'sphere', 'cartoon')
mblock = Chem.MolToMolBlock(mol[confId])
viewer = py3Dmol.view(width=size[0], height=size[1])
viewer.addModel(mblock, 'mol')
viewer.setStyle({style:{}})
if surface:
viewer.addSurface(py3Dmol.SAS, {'opacity': opacity})
viewer.zoomTo()
return viewer
def conf_viewer(idx, mol):
return MolTo3DView(mol, confId=idx).show()
with open('trained_models/qm9/test_mols.pkl', 'rb') as f:
test_mols = pickle.load(f)
smiles = list(test_mols.keys())
test_idx = 0
smi = smiles[test_idx]
print(smi)
mol_graph = Chem.MolFromSmiles(smi)
display(mol_graph)
mols = test_mols[smi]
interact(conf_viewer, idx=ipywidgets.IntSlider(min=0, max=len(mols)-1, step=1), mol=fixed(mols));
###Output
_____no_output_____
###Markdown
let's look at some of the (hopefully) pretty molecules we generate!
###Code
from rdkit import Chem
import pickle
from ipywidgets import interact, fixed, IntSlider
import ipywidgets
import py3Dmol
def show_mol(mol, view, grid):
mb = Chem.MolToMolBlock(mol)
view.removeAllModels(viewer=grid)
view.addModel(mb,'sdf', viewer=grid)
view.setStyle({'model':0},{'stick': {}}, viewer=grid)
view.zoomTo(viewer=grid)
return view
def view_single(mol):
view = py3Dmol.view(width=600, height=600, linked=False, viewergrid=(1,1))
show_mol(mol, view, grid=(0, 0))
return view
def MolTo3DView(mol, size=(600, 600), style="stick", surface=False, opacity=0.5, confId=0):
"""Draw molecule in 3D
Args:
----
mol: rdMol, molecule to show
size: tuple(int, int), canvas size
style: str, type of drawing molecule
style can be 'line', 'stick', 'sphere', 'cartoon'
surface, bool, display SAS
opacity, float, opacity of surface, range 0.0-1.0
Return:
----
viewer: py3Dmol.view, a class for constructing embedded 3Dmol.js views in ipython notebooks.
"""
assert style in ('line', 'stick', 'sphere', 'cartoon')
mblock = Chem.MolToMolBlock(mol[confId])
viewer = py3Dmol.view(width=size[0], height=size[1])
viewer.addModel(mblock, 'mol')
viewer.setStyle({style:{}})
if surface:
viewer.addSurface(py3Dmol.SAS, {'opacity': opacity})
viewer.zoomTo()
return viewer
def conf_viewer(idx, mol):
return MolTo3DView(mol, confId=idx).show()
from rdkit import Chem
import pickle
from pathlib import Path
path = Path("/pubhome/qcxia02/git-repo/AI-CONF/datasets/GeoMol/test/drugs-split0/1-4-14-20")
raw_smi = "Cc1cc(C(=O)c2cnc(/N=C/N(C)C)s2)c(F)cc1Cl"
corrected_smi = r"Cc1cc(C(=O)c2cnc(/N=C\N(C)C)s2)c(F)cc1Cl"
# raw_smi = "O=S(=O)(/N=C(/c1ccccc1)N1CCOCC1)c1ccc(Br)cc1"
# corrected_smi = "O=S(=O)(/N=C(\c1ccccc1)N1CCOCC1)c1ccc(Br)cc1"
# rep_smi = "O=S(=O)(_N=C(_c1ccccc1)N1CCOCC1)c1ccc(Br)cc1"
rep_smi = raw_smi.replace('/', '_')
with open(path / "test_ref.pickle", 'rb') as f:
ref_data = pickle.load(f)
with open(path / "test_ref_cleaned.pickle", 'rb') as f:
cleaned_ref_data = pickle.load(f)
with open(path / "test_GeoMol.pickle", 'rb') as f:
geo_data = pickle.load(f)
with open(path / "test_GeoMol_cleaned.pickle", 'rb') as f:
cleaned_geo_data = pickle.load(f)
from rdkit import Chem
from statistics import mode, StatisticsError
from collections import Counter
datapath = Path("/pubhome/qcxia02/git-repo/AI-CONF/datasets/GeoMol/data/DRUGS/drugs")
with open(datapath / (rep_smi + ".pickle"), 'rb') as f:
data = pickle.load(f)
mols = [ data['conformers'][i]['rd_mol'] for i in range(data['uniqueconfs']) ]
smis = [ Chem.MolToSmiles(Chem.RemoveHs(mol)) for mol in mols ]
# mode(smis)
print(Counter(smis))
for smi in smis:
print(smi)
interact(conf_viewer, idx=ipywidgets.IntSlider(min=0, max=len(mols)-1, step=1), mol=fixed(mols));
# mol = Chem.MolFromSmiles(mode(smis))
# for atom in mol.GetAtoms():
# atom.SetProp('atomLabel',str(atom.GetIdx()+1))
# display(mol)
# display(Chem.MolFromSmiles(mode(smis)))
# One pickle/SMILES pair must be active for this cell to run; the cis pair is uncommented here
pklfile = "/pubhome/qcxia02/git-repo/AI-CONF/GeoMol/scripts/cis1.pickle"
smi = r"Cc1cc(C(=O)c2cnc(/N=C\N(C)C)s2)c(F)cc1Cl"
# pklfile = "/pubhome/qcxia02/git-repo/AI-CONF/GeoMol/scripts/trans1.pickle"
# smi = r"Cc1cc(C(=O)c2cnc(/N=C/N(C)C)s2)c(F)cc1Cl"
with open(pklfile, 'rb') as f:
data = pickle.load(f)
mols = data[smi]
smis = [ Chem.MolToSmiles(Chem.RemoveHs(mol)) for mol in mols ]
# mode(smis)
# print(Counter(smis))
# mols[0]
interact(conf_viewer, idx=ipywidgets.IntSlider(min=0, max=len(mols)-1, step=1), mol=fixed(mols));
raw_mols = ref_data[raw_smi]
cleaned_raw_mols = cleaned_ref_data[corrected_smi]
geo_mols = geo_data[raw_smi]
corrected_mols = cleaned_geo_data[corrected_smi]
# mols = raw_mols
# mols = cleaned_raw_mols
# mols = geo_mols
mols = corrected_mols
smis = [ Chem.MolToSmiles(Chem.RemoveHs(mol)) for mol in mols ]
# mode(smis)
# print(Counter(smis))
for smi in smis:
print(smi)
interact(conf_viewer, idx=ipywidgets.IntSlider(min=0, max=len(mols)-1, step=1), mol=fixed(mols));
import rdkit.Chem.rdMolAlign as MA
import pickle
from rdkit.Chem.rdmolops import Get3DDistanceMatrix
# from rdkit.DataManip.Metric import rdMetricMatrixCalc as RMM
# with open('trained_models/qm9/test_mols.pkl', 'rb') as f:
# with open('/pubhome/qcxia02/git-repo/AI-CONF/GeoMol/scripts/test_GeoMol_qm9_demo.pickle', 'rb') as f:
with open('/pubhome/qcxia02/git-repo/AI-CONF/GeoMol/scripts/test_GeoMol_drugs_pre.pickle', 'rb') as f:
# with open('/pubhome/qcxia02/git-repo/AI-CONF/GeoMol/scripts/test_GeoMol_qm9_pre.pickle', 'rb') as f:
test_mols = pickle.load(f)
# test_mols
# smi='C1CCCCC1'
# smi='CCCC'
# smi="CC(C)=O"
# """
# smi='c1ccccc1'
smi='COC(=O)C(C(C)C)C(O)c1ccccc1'
mols = test_mols[smi]
a = Get3DDistanceMatrix(mols[0])
print((a == 0).any())
# RMM.GetEuclideanDistMat(mols[0])
# interact(conf_viewer, idx=ipywidgets.IntSlider(min=0, max=len(mols)-1, step=1), mol=fixed(mols));
# """
# test_mols
# print(MA.AlignMol(mols[0],mols[1], maxIters=10000))
# print(MA.GetBestRMS(mols[0],mols[1]))
test_idx = 0
# smi = smiles[test_idx]
smi='C1CCCCC1'
print(smi)
mol_graph = Chem.MolFromSmiles(smi)
display(mol_graph)
Chem.AddHs(mol_graph)
# mols = test_mols[smi]
interact(conf_viewer, idx=ipywidgets.IntSlider(min=0, max=len(mols)-1, step=1), mol=fixed(mols));
import pickle
geo_pkl = "/pubhome/qcxia02/git-repo/AI-CONF/datasets/GeoMol/test/drugs-split0/12-11-15-37/test_GeoMol.pickle"
ref_pkl = "/pubhome/qcxia02/git-repo/AI-CONF/datasets/GeoMol/test/drugs-split0/12-11-15-37/test_ref.pickle"
rdk_pkl = "/pubhome/qcxia02/git-repo/AI-CONF/datasets/GeoMol/test/drugs-split0/12-11-15-37/test_rdkit.pickle"
# ref_pkl = "/pubhome/qcxia02/git-repo/AI-CONF/datasets/GeoMol/test/drugs-plati/test_ref.pickle"
with open (geo_pkl, 'rb') as fb:
geo_data = pickle.load(fb)
with open (ref_pkl, 'rb') as fb:
ref_data = pickle.load(fb)
with open (rdk_pkl, 'rb') as fb:
rdk_data = pickle.load(fb)
ref_data[list(ref_data.keys())[0]]
list(ref_data.keys())
smi = 'Cn1c(=O)c2c(n3cnnc13)-c1ccccc1CC21CCCC1' #1
smi
from rdkit.Chem.rdmolops import RemoveHs # We do not remove Hs to show add-hydrogen capability
display(geo_data[smi][0])
display(RemoveHs(geo_data[smi][0]))
display(Chem.AddHs(RemoveHs(geo_data[smi][0])))
# display(ref_data[smi][0])
# interact(conf_viewer, idx=ipywidgets.IntSlider(min=0, max=len(geo_data[smi])-1, step=1), mol=fixed(geo_data[smi]));
# RemovH version
interact(conf_viewer, idx=ipywidgets.IntSlider(min=0, max=len(geo_data[smi])-1, step=1), mol=fixed(list(map(RemoveHs, geo_data[smi]))));
interact(conf_viewer, idx=ipywidgets.IntSlider(min=0, max=len(ref_data[smi])-1, step=1), mol=fixed(ref_data[smi]));
a0 = ref_data[smi][0].GetConformer().GetPositions()
a1 = ref_data[smi][1].GetConformer().GetPositions()
print(a0)
print(a1)
import numpy as np
import pickle
from rdkit.ML.Descriptors import MoleculeDescriptors
import pandas as pd
from pathlib import Path
calculator = MoleculeDescriptors.MolecularDescriptorCalculator(['NumRotatableBonds'])
dirty_rdk_smis = Path("/pubhome/qcxia02/git-repo/AI-CONF/datasets/GeoMol/test/drugs-plati/train_5epoch/rdkit_err_smiles_25.txt")
dirty_smi_list = dirty_rdk_smis.read_text().split("\n")
with open("/pubhome/qcxia02/git-repo/AI-CONF/datasets/GeoMol/test/drugs-plati/train_5epoch/test_ref.pickle", 'rb') as f:
refdata = pickle.load(f)
smis = list(refdata.keys())
# numrots = [ calculator.CalcDescriptors(mol[0])[0] for _, mol in refdata.items() ]
numrots = [ calculator.CalcDescriptors(mol)[0] for smi, mol in refdata.items() if smi not in dirty_smi_list]
# print(numrots)
indexes = [smis.index(smi)+1 for smi in smis if smi not in dirty_smi_list]
covfile = "/pubhome/qcxia02/git-repo/AI-CONF/datasets/GeoMol/test/drugs-plati/train_5epoch/test_GeoMol_50-COV_R-th0.5-woh.npy"
matfile = "/pubhome/qcxia02/git-repo/AI-CONF/datasets/GeoMol/test/drugs-plati/train_5epoch/test_GeoMol_50-MAT_R-th0.5-woh.npy"
covs_GeoMol = list(np.load(covfile))
mats_GeoMol = list(np.load(matfile))
covfile = "/pubhome/qcxia02/git-repo/AI-CONF/datasets/GeoMol/test/drugs-plati/train_5epoch/test_rdkit_50-COV_R-th0.5-woh.npy"
matfile = "/pubhome/qcxia02/git-repo/AI-CONF/datasets/GeoMol/test/drugs-plati/train_5epoch/test_rdkit_50-MAT_R-th0.5-woh.npy"
covs_rdkit = list(np.load(covfile))
mats_rdkit = list(np.load(matfile))
num_cov_mat_dict = {
'No.': indexes,
'num_rotatable': numrots,
'cov_GeoMol': covs_GeoMol,
'mat_GeoMol': mats_GeoMol,
'cov_rdkit': covs_rdkit,
'mat_rdkit': mats_rdkit
}
# covs_GeoMol
print(len(covs_GeoMol))
print(len(mats_GeoMol))
print(len(covs_rdkit))
print(len(mats_rdkit))
df = pd.DataFrame(num_cov_mat_dict)
# outfile = "/pubhome/qcxia02/git-repo/AI-CONF/datasets/GeoMol/test/drugs-plati/test_result.csv"
# df.to_csv(outfile, index=False)
# print(np.where(covs==1.))
# print(len(np.where(covs==1.)[0]))
from matplotlib import pyplot as plt
from collections import Counter
import seaborn as sns
recounted = Counter(df['num_rotatable'].values) # This df is already filtered by error smiles
dict_want = {}
for key, value in list(dict(sorted(recounted.items())).items()):
dict_want[str(key)] = value
series = pd.Series(dict_want)
series.plot.bar(color='black')
# a
# """
df_geo_cov1 = df[df['cov_GeoMol'] == 1.0]
df_rdk_cov1 = df[df['cov_rdkit'] == 1.0]
print(df_geo_cov1['No.'])
print(df_rdk_cov1['No.'])
# recounted = Counter(df_rdk_cov1['num_rotatable'].values)
# dict_want = {}
# for key, value in list(dict(sorted(recounted.items())).items()):
# # print(key)
# # print(value)
# dict_want[str(key)] = value
# series = pd.Series(dict_want)
# series.plot.bar(color='blue')
# recounted = Counter(df_geo_cov1['num_rotatable'].values)
# dict_want = {}
# for key, value in list(dict(sorted(recounted.items())).items()):
# # print(key)
# # print(value)
# dict_want[str(key)] = value
# series = pd.Series(dict_want)
# series.plot.bar(color='red')
# sns.distplot(df['num_rotatable'].values, bins=10)
# sns.histplot(df_cov1['num_rotatable'],bins=10, edgecolor="black")
# """
print(smis[1])
print(smis[2858])
print(smis[2829])
print(smis[2840])
print(smis[2843])
# plt.hist(x=df_cov1['num_rotatable'],bins=10, edgecolor="black")
import numpy as np
datapath = "/pubhome/qcxia02/git-repo/AI-CONF/datasets/GeoMol/test/drugs-split0/12-11-15-37/test_GeoMol-ingroup-rmsd-woh.npy"
geodata = np.load(datapath)
datapath = "/pubhome/qcxia02/git-repo/AI-CONF/datasets/GeoMol/test/drugs-split0/12-11-15-37/test_rdkit-ingroup-rmsd-woh.npy"
rdkdata = np.load(datapath)
print(geodata.mean())
print(rdkdata.mean())
print(np.median(geodata))
print(np.median(rdkdata))
import pandas as pd
import numpy as np
from math import sqrt
from matplotlib import pyplot
import matplotlib.pyplot as plt
import seaborn as sns
from matplotlib.pyplot import MultipleLocator
rdkfile = "/pubhome/qcxia02/git-repo/AI-CONF/datasets/GeoMol/test/drugs-split0/12-19-20-5/test_rdkit-th1.25-maxm100.csv"
geofile = "/pubhome/qcxia02/git-repo/AI-CONF/datasets/GeoMol/test/drugs-split0/12-19-20-5/test_GeoMol-th1.25-maxm100.csv"
# rdkfile = "/pubhome/qcxia02/git-repo/AI-CONF/datasets/GeoMol/test/drugs-split0/12-19-20-5/test_rdkit-th1.25-removeH-maxm100.csv"
# geofile = "/pubhome/qcxia02/git-repo/AI-CONF/datasets/GeoMol/test/drugs-split0/12-19-20-5/test_GeoMol-th1.25-removeH-maxm100.csv"
data_rdk = pd.read_csv(rdkfile)
data_rdk['method'] = "ETKDG"
data_geo = pd.read_csv(geofile)
data_geo['method'] = "GeoMol"
data_total = pd.concat([data_rdk, data_geo])
# print(data_total)
# """
# %%
plt.figure(dpi=100)
sns.set_theme(style="darkgrid")
# sns.set(rc={'figure.figsize':(16,12)})
# palette = sns.xkcd_palette(["orange", "green"])
palette = sns.xkcd_palette(["green", "blue"])
# Plot the responses for different events and regions
ax = sns.lineplot(x="num_rotatable", y="cov_R",
hue="method",
# style="",
data=data_total,
palette=palette
)
ax.set_xlim(0,13)
x_major_locator=MultipleLocator(2)
ax.xaxis.set_major_locator(x_major_locator)
# """
import pandas as pd
import numpy as np
from math import sqrt
from matplotlib import pyplot
import matplotlib.pyplot as plt
import seaborn as sns
from matplotlib.pyplot import MultipleLocator
rdk = pd.DataFrame({
'threshold': np.arange(0.1, 2.6, 0.1),
'cov-R': [0.0016, 0.0168, 0.0372, 0.0660, 0.0949, 0.1221, 0.1555, 0.1904, 0.2326, 0.2786, 0.3336, 0.3949, 0.4498, 0.4989, 0.5483, 0.5956, 0.6336, 0.6757, 0.7115, 0.7462, 0.7778, 0.8052, 0.8316, 0.8554, 0.8759],
'method':'ETKDG'
})
geo = pd.DataFrame({
'threshold': np.arange(0.1, 2.6, 0.1),
'cov-R': [0.0010, 0.0067, 0.0200, 0.0454, 0.0730, 0.1057, 0.1428, 0.1907, 0.2427, 0.2968, 0.3567, 0.4237, 0.4911, 0.5516, 0.6185, 0.6770, 0.7306, 0.7796, 0.8266, 0.8675, 0.9003, 0.9263, 0.9470, 0.9621, 0.9746],
'method':'GeoMol'
})
rdk_woh = pd.DataFrame({
'threshold': np.arange(0.1, 2.6, 0.1),
'cov-R': [0.0167, 0.0497, 0.0913, 0.1326, 0.1809, 0.2377, 0.2972, 0.3679, 0.4321, 0.4940, 0.5528, 0.6043, 0.6489, 0.6873, 0.7217, 0.7534, 0.7814, 0.8095, 0.8330, 0.8559, 0.8767, 0.8954, 0.9116, 0.9254, 0.9372],
'method':'ETKDG'
})
geo_woh = pd.DataFrame({
'threshold': np.arange(0.1, 2.6, 0.1),
'cov-R': [0.0046, 0.0287, 0.0584, 0.1078, 0.1673, 0.2406, 0.3120, 0.3939, 0.4763, 0.5642, 0.6392, 0.7086, 0.7692, 0.8217, 0.8638, 0.8973, 0.9259, 0.9471, 0.9618, 0.9725, 0.9810, 0.9870, 0.9910, 0.9934, 0.9950],
'method':'GeoMol'
})
data = pd.concat([rdk, geo])
data_woh = pd.concat([rdk_woh, geo_woh])
plt.figure(dpi=100)
# plt.plot(rdk['threshold'],rdk['cov-R'], color='green')
# plt.plot(geo['threshold'],geo['cov-R'], color='blue')
plt.plot(rdk_woh['threshold'],rdk_woh['cov-R'], color='green')
plt.plot(geo_woh['threshold'],geo_woh['cov-R'], color='blue')
plt.grid(b=None, which='major', axis='both', )
# plt.grid(color = 'r', linestyle = '--', linewidth = 0.5)
# plt.show()
plt.savefig("test.png")
"""
sns.set_theme(style="darkgrid")
# sns.set(rc={'figure.figsize':(16,12)})
# palette = sns.xkcd_palette(["orange", "green"])
palette = sns.xkcd_palette(["green", "blue"])
# Plot the responses for different events and regions
ax = sns.lineplot(x="threshold", y="cov_R",
hue="method",
# style="",
data=data,
palette=palette
)
ax.set_xlim(0,13)
x_major_locator=MultipleLocator(2)
ax.xaxis.set_major_locator(x_major_locator)
"""
import seaborn as sns
import pandas as pd
import numpy as np
from matplotlib import pyplot as plt
comp_csv = "/pubhome/qcxia02/git-repo/AI-CONF/datasets/GeoMol/test/drugs-plati/pre-train/test_GeoMol_50-test_rdkit_50-th5.0-maxm100-removeH-sumamry.csv"
data = pd.read_csv(comp_csv)
# data
# """
plt.figure(dpi=100)
ax = sns.jointplot(x = "group_rmsd_1", y="group_rmsd_2", data=data,
kind="reg",
# kind="kde",
# truncate=False,
# color="b", height=7,
xlim=(0,4), ylim=(4,0)
)
# plt.close()
plt.figure(dpi=100)
ax = sns.jointplot(x = "group_rmsd_1", y="num_rotatable-Desc", data=data,
kind="scatter",
# kind="kde",
# truncate=False,
# color="b", height=7,
# xlim=(0,4), ylim=(4,0)
)
# plt.close()
# """
import rdkit
from rdkit import Chem
mol = Chem.MolFromSmiles("O=C(Nc1ccccc1)c1cc(S(=O)(=O)Nc2cccnc2)ccc1Cl")
mol = Chem.AddHs(mol)
for atom in mol.GetAtoms():
atom.SetProp('atomLabel',str(atom.GetIdx()+1))
display(mol)
mol.GetNumHeavyAtoms()
###Output
_____no_output_____ |
notebooks/15.Pipelining_Estimators.ipynb | ###Markdown
Pipelining estimators In this section we study how different estimators may be chained. A simple example: feature extraction and selection before an estimator Feature extraction: vectorizer For some types of data, for instance text data, a feature extraction step must be applied to convert it to numerical features. To illustrate, we load the SMS spam dataset we used earlier.
###Code
import os
with open(os.path.join("datasets", "smsspam", "SMSSpamCollection")) as f:
lines = [line.strip().split("\t") for line in f.readlines()]
text = [x[1] for x in lines]
y = [x[0] == "ham" for x in lines]
from sklearn.model_selection import train_test_split
text_train, text_test, y_train, y_test = train_test_split(text, y)
###Output
_____no_output_____
###Markdown
Previously, we applied the feature extraction manually, like so:
###Code
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
vectorizer = TfidfVectorizer()
vectorizer.fit(text_train)
X_train = vectorizer.transform(text_train)
X_test = vectorizer.transform(text_test)
clf = LogisticRegression()
clf.fit(X_train, y_train)
clf.score(X_test, y_test)
###Output
_____no_output_____
###Markdown
The situation where we learn a transformation and then apply it to the test data is very common in machine learning. Therefore scikit-learn has a shortcut for this, called pipelines:
###Code
from sklearn.pipeline import make_pipeline
pipeline = make_pipeline(TfidfVectorizer(), LogisticRegression())
pipeline.fit(text_train, y_train)
pipeline.score(text_test, y_test)
###Output
_____no_output_____
###Markdown
As you can see, this makes the code much shorter and easier to handle. Behind the scenes, exactly the same as above is happening. When calling fit on the pipeline, it will call fit on each step in turn. After the first step is fit, it will use the ``transform`` method of the first step to create a new representation. This will then be fed to the ``fit`` of the next step, and so on. Finally, on the last step, only ``fit`` is called. If we call ``score``, only ``transform`` will be called on each step - this could be the test set after all! Then, on the last step, ``score`` is called with the new representation. The same goes for ``predict``. Building pipelines not only simplifies the code, it is also important for model selection. Say we want to grid-search C to tune our Logistic Regression above. Let's say we do it like this:
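To make the behind-the-scenes description concrete, here is a rough manual equivalent of ``pipeline.fit`` and ``pipeline.score`` for the two-step pipeline above (an illustrative sketch of the chaining idea, not scikit-learn's actual implementation):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

vect = TfidfVectorizer()
clf = LogisticRegression()

# pipeline.fit(text_train, y_train):
# fit + transform every intermediate step, then fit the final estimator
X_train_t = vect.fit_transform(text_train)
clf.fit(X_train_t, y_train)

# pipeline.score(text_test, y_test):
# only transform the intermediate steps, then score the final estimator
X_test_t = vect.transform(text_test)
print(clf.score(X_test_t, y_test))
```

With that picture in mind, back to the naive grid-search attempt: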
###Code
# This illustrates a common mistake. Don't use this code!
from sklearn.model_selection import GridSearchCV
vectorizer = TfidfVectorizer()
vectorizer.fit(text_train)
X_train = vectorizer.transform(text_train)
X_test = vectorizer.transform(text_test)
clf = LogisticRegression()
grid = GridSearchCV(clf, param_grid={'C': [.1, 1, 10, 100]}, cv=5)
grid.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
2.1.2 What did we do wrong? Here, we did grid-search with cross-validation on ``X_train``. However, when applying ``TfidfVectorizer``, it saw all of the ``X_train``, not only the training folds! So it could use knowledge of the frequency of the words in the test-folds. This is called "contamination" of the test set, and leads to too optimistic estimates of generalization performance, or badly selected parameters. We can fix this with the pipeline, though:
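As an aside, the same contamination-free estimate can also be obtained by passing a pipeline to ``cross_val_score``, since the vectorizer is then re-fit on each training fold only (a small sketch to illustrate the point, not part of the original exercise):

```python
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# the vectorizer's vocabulary and IDF weights are learned per training fold,
# so the held-out fold never leaks into them
scores = cross_val_score(make_pipeline(TfidfVectorizer(), LogisticRegression()),
                         text_train, y_train, cv=5)
print(scores.mean())
```

The grid-search version of the same fix follows: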
###Code
from sklearn.model_selection import GridSearchCV
pipeline = make_pipeline(TfidfVectorizer(),
LogisticRegression())
grid = GridSearchCV(pipeline,
param_grid={'logisticregression__C': [.1, 1, 10, 100]}, cv=5)
grid.fit(text_train, y_train)
grid.score(text_test, y_test)
###Output
_____no_output_____
###Markdown
Note that we need to tell the pipeline at which step we want to set the parameter ``C``. We can do this using the special ``__`` syntax. The name before the ``__`` is simply the lowercased name of the class, the part after ``__`` is the parameter we want to set with grid-search. Another benefit of using pipelines is that we can now also search over parameters of the feature extraction with ``GridSearchCV``:
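To spell out the naming rule: ``make_pipeline`` names each step after its lowercased class name (hence ``logisticregression__C`` above), while an explicit ``Pipeline`` lets you choose the step names yourself, and the grid keys change accordingly. A small sketch, where the step names ``'tfidf'`` and ``'clf'`` are just example choices:

```python
from sklearn.pipeline import Pipeline
from sklearn.model_selection import GridSearchCV
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

named_pipe = Pipeline([('tfidf', TfidfVectorizer()),
                       ('clf', LogisticRegression())])

# with explicit step names, the grid keys use those names
param_grid = {'clf__C': [.1, 1, 10, 100],
              'tfidf__ngram_range': [(1, 1), (1, 2)]}
grid = GridSearchCV(named_pipe, param_grid=param_grid, cv=5)
```

Using the auto-generated names from ``make_pipeline``, the combined search over classifier and vectorizer parameters looks like this: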
###Code
from sklearn.model_selection import GridSearchCV
pipeline = make_pipeline(TfidfVectorizer(), LogisticRegression())
params = {'logisticregression__C': [.1, 1, 10, 100],
"tfidfvectorizer__ngram_range": [(1, 1), (1, 2), (2, 2)]}
grid = GridSearchCV(pipeline, param_grid=params, cv=5)
grid.fit(text_train, y_train)
print(grid.best_params_)
grid.score(text_test, y_test)
###Output
{'logisticregression__C': 100, 'tfidfvectorizer__ngram_range': (1, 2)}
###Markdown
EXERCISE: Create a pipeline out of a StandardScaler and Ridge regression and apply it to the Boston housing dataset (load using ``sklearn.datasets.load_boston``). Try adding the ``sklearn.preprocessing.PolynomialFeatures`` transformer as a second preprocessing step, and grid-search the degree of the polynomials (try 1, 2 and 3).
###Code
# %load solutions/15A_ridge_grid.py
from sklearn.preprocessing import StandardScaler
from sklearn.datasets import load_boston
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import Ridge
boston = load_boston()
text_train, text_test, y_train, y_test = train_test_split(boston.data,
boston.target,
test_size=0.25,
random_state=123)
pipeline = make_pipeline(StandardScaler(),
PolynomialFeatures(),
Ridge())
grid = GridSearchCV(pipeline,
param_grid={'polynomialfeatures__degree': [1, 2, 3]}, cv=5)
grid.fit(text_train, y_train)
print('best parameters:', grid.best_params_)
print('best score:', grid.best_score_)
print('test score:', grid.score(text_test, y_test))
###Output
best parameters: {'polynomialfeatures__degree': 2}
best score: 0.8176389414974904
test score: 0.8313120138601886
###Markdown
Pipelining estimators In this section we study how different estimators may be chained. A simple example: feature extraction and selection before an estimator Feature extraction: vectorizer For some types of data, for instance text data, a feature extraction step must be applied to convert it to numerical features. To illustrate, we load the SMS spam dataset we used earlier.
###Code
import os
with open(os.path.join("datasets", "smsspam", "SMSSpamCollection")) as f:
lines = [line.strip().split("\t") for line in f.readlines()]
text = [x[1] for x in lines]
y = [x[0] == "ham" for x in lines]
from sklearn.model_selection import train_test_split
text_train, text_test, y_train, y_test = train_test_split(text, y)
###Output
_____no_output_____
###Markdown
Previously, we applied the feature extraction manually, like so:
###Code
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
vectorizer = TfidfVectorizer()
vectorizer.fit(text_train)
X_train = vectorizer.transform(text_train)
X_test = vectorizer.transform(text_test)
clf = LogisticRegression()
clf.fit(X_train, y_train)
clf.score(X_test, y_test)
###Output
_____no_output_____
###Markdown
The situation where we learn a transformation and then apply it to the test data is very common in machine learning. Therefore scikit-learn has a shortcut for this, called pipelines:
###Code
from sklearn.pipeline import make_pipeline
pipeline = make_pipeline(TfidfVectorizer(), LogisticRegression())
pipeline.fit(text_train, y_train)
pipeline.score(text_test, y_test)
###Output
_____no_output_____
###Markdown
As you can see, this makes the code much shorter and easier to handle. Behind the scenes, exactly the same as above is happening. When calling fit on the pipeline, it will call fit on each step in turn. After the first step is fit, it will use the ``transform`` method of the first step to create a new representation. This will then be fed to the ``fit`` of the next step, and so on. Finally, on the last step, only ``fit`` is called. If we call ``score``, only ``transform`` will be called on each step - this could be the test set after all! Then, on the last step, ``score`` is called with the new representation. The same goes for ``predict``. Building pipelines not only simplifies the code, it is also important for model selection. Say we want to grid-search C to tune our Logistic Regression above. Let's say we do it like this:
###Code
# This illustrates a common mistake. Don't use this code!
from sklearn.model_selection import GridSearchCV
vectorizer = TfidfVectorizer()
vectorizer.fit(text_train)
X_train = vectorizer.transform(text_train)
X_test = vectorizer.transform(text_test)
clf = LogisticRegression()
grid = GridSearchCV(clf, param_grid={'C': [.1, 1, 10, 100]}, cv=5)
grid.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
What did we do wrong? Here, we did grid-search with cross-validation on ``X_train``. However, when applying ``TfidfVectorizer``, it saw all of the ``X_train``, not only the training folds! So it could use knowledge of the frequency of the words in the test-folds. This is called "contamination" of the test set, and leads to too optimistic estimates of generalization performance, or badly selected parameters. We can fix this with the pipeline, though:
###Code
from sklearn.model_selection import GridSearchCV
pipeline = make_pipeline(TfidfVectorizer(),
LogisticRegression())
grid = GridSearchCV(pipeline,
param_grid={'logisticregression__C': [.1, 1, 10, 100]}, cv=5)
grid.fit(text_train, y_train)
grid.score(text_test, y_test)
###Output
_____no_output_____
###Markdown
Note that we need to tell the pipeline at which step we want to set the parameter ``C``. We can do this using the special ``__`` syntax. The name before the ``__`` is simply the lowercased name of the class, the part after ``__`` is the parameter we want to set with grid-search. Another benefit of using pipelines is that we can now also search over parameters of the feature extraction with ``GridSearchCV``:
###Code
from sklearn.model_selection import GridSearchCV
pipeline = make_pipeline(TfidfVectorizer(), LogisticRegression())
params = {'logisticregression__C': [.1, 1, 10, 100],
"tfidfvectorizer__ngram_range": [(1, 1), (1, 2), (2, 2)]}
grid = GridSearchCV(pipeline, param_grid=params, cv=5)
grid.fit(text_train, y_train)
print(grid.best_params_)
grid.score(text_test, y_test)
###Output
_____no_output_____
###Markdown
EXERCISE: Create a pipeline out of a StandardScaler and Ridge regression and apply it to the Boston housing dataset (load using ``sklearn.datasets.load_boston``). Try adding the ``sklearn.preprocessing.PolynomialFeatures`` transformer as a second preprocessing step, and grid-search the degree of the polynomials (try 1, 2 and 3).
###Code
# %load solutions/15A_ridge_grid.py
###Output
_____no_output_____
###Markdown
Pipelining estimators In this section we study how different estimators may be chained. A simple example: feature extraction and selection before an estimator Feature extraction: vectorizer For some types of data, for instance text data, a feature extraction step must be applied to convert it to numerical features. To illustrate, we load the SMS spam dataset we used earlier.
###Code
import os
with open(os.path.join("datasets", "smsspam", "SMSSpamCollection")) as f:
lines = [line.strip().split("\t") for line in f.readlines()]
text = [x[1] for x in lines]
y = [x[0] == "ham" for x in lines]
from sklearn.model_selection import train_test_split
text_train, text_test, y_train, y_test = train_test_split(text, y)
###Output
_____no_output_____
###Markdown
Previously, we applied the feature extraction manually, like so:
###Code
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
vectorizer = TfidfVectorizer()
vectorizer.fit(text_train)
X_train = vectorizer.transform(text_train)
X_test = vectorizer.transform(text_test)
clf = LogisticRegression()
clf.fit(X_train, y_train)
clf.score(X_test, y_test)
###Output
_____no_output_____
###Markdown
The situation where we learn a transformation and then apply it to the test data is very common in machine learning. Therefore scikit-learn has a shortcut for this, called pipelines:
###Code
from sklearn.pipeline import make_pipeline
pipeline = make_pipeline(TfidfVectorizer(), LogisticRegression())
pipeline.fit(text_train, y_train)
pipeline.score(text_test, y_test)
###Output
_____no_output_____
###Markdown
As you can see, this makes the code much shorter and easier to handle. Behind the scenes, exactly the same as above is happening. When calling fit on the pipeline, it will call fit on each step in turn. After the first step is fit, it will use the ``transform`` method of the first step to create a new representation. This will then be fed to the ``fit`` of the next step, and so on. Finally, on the last step, only ``fit`` is called. If we call ``score``, only ``transform`` will be called on each step - this could be the test set after all! Then, on the last step, ``score`` is called with the new representation. The same goes for ``predict``. Building pipelines not only simplifies the code, it is also important for model selection. Say we want to grid-search C to tune our Logistic Regression above. Let's say we do it like this:
###Code
# This illustrates a common mistake. Don't use this code!
from sklearn.model_selection import GridSearchCV
vectorizer = TfidfVectorizer()
vectorizer.fit(text_train)
X_train = vectorizer.transform(text_train)
X_test = vectorizer.transform(text_test)
clf = LogisticRegression()
grid = GridSearchCV(clf, param_grid={'C': [.1, 1, 10, 100]}, cv=5)
grid.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
2.1.2 What did we do wrong? Here, we did grid-search with cross-validation on ``X_train``. However, when applying ``TfidfVectorizer``, it saw all of the ``X_train``, not only the training folds! So it could use knowledge of the frequency of the words in the test-folds. This is called "contamination" of the test set, and leads to too optimistic estimates of generalization performance, or badly selected parameters. We can fix this with the pipeline, though:
###Code
from sklearn.model_selection import GridSearchCV
pipeline = make_pipeline(TfidfVectorizer(),
LogisticRegression())
grid = GridSearchCV(pipeline,
param_grid={'logisticregression__C': [.1, 1, 10, 100]}, cv=5)
grid.fit(text_train, y_train)
grid.score(text_test, y_test)
###Output
_____no_output_____
###Markdown
Note that we need to tell the pipeline at which step we want to set the parameter ``C``. We can do this using the special ``__`` syntax. The name before the ``__`` is simply the lowercased name of the class, the part after ``__`` is the parameter we want to set with grid-search. Another benefit of using pipelines is that we can now also search over parameters of the feature extraction with ``GridSearchCV``:
###Code
from sklearn.model_selection import GridSearchCV
pipeline = make_pipeline(TfidfVectorizer(), LogisticRegression())
params = {'logisticregression__C': [.1, 1, 10, 100],
"tfidfvectorizer__ngram_range": [(1, 1), (1, 2), (2, 2)]}
grid = GridSearchCV(pipeline, param_grid=params, cv=5)
grid.fit(text_train, y_train)
print(grid.best_params_)
grid.score(text_test, y_test)
###Output
_____no_output_____
###Markdown
EXERCISE: Create a pipeline out of a StandardScaler and Ridge regression and apply it to the Boston housing dataset (load using ``sklearn.datasets.load_boston``). Try adding the ``sklearn.preprocessing.PolynomialFeatures`` transformer as a second preprocessing step, and grid-search the degree of the polynomials (try 1, 2 and 3).
###Code
# %load solutions/15A_ridge_grid.py
###Output
_____no_output_____
###Markdown
Pipelining estimators In this section we study how different estimators may be chained. A simple example: feature extraction and selection before an estimator Feature extraction: vectorizer For some types of data, for instance text data, a feature extraction step must be applied to convert it to numerical features. To illustrate, we load the SMS spam dataset we used earlier.
###Code
import os
with open(os.path.join("datasets", "smsspam", "SMSSpamCollection")) as f:
lines = [line.strip().split("\t") for line in f.readlines()]
text = [x[1] for x in lines]
y = [x[0] == "ham" for x in lines]
from sklearn.model_selection import train_test_split
text_train, text_test, y_train, y_test = train_test_split(text, y)
###Output
_____no_output_____
###Markdown
Previously, we applied the feature extraction manually, like so:
###Code
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
vectorizer = TfidfVectorizer()
vectorizer.fit(text_train)
X_train = vectorizer.transform(text_train)
X_test = vectorizer.transform(text_test)
clf = LogisticRegression()
clf.fit(X_train, y_train)
clf.score(X_test, y_test)
###Output
_____no_output_____
###Markdown
The situation where we learn a transformation and then apply it to the test data is very common in machine learning. Therefore scikit-learn has a shortcut for this, called pipelines:
###Code
from sklearn.pipeline import make_pipeline
pipeline = make_pipeline(TfidfVectorizer(), LogisticRegression())
pipeline.fit(text_train, y_train)
pipeline.score(text_test, y_test)
###Output
_____no_output_____
###Markdown
As you can see, this makes the code much shorter and easier to handle. Behind the scenes, exactly the same as above is happening. When calling fit on the pipeline, it will call fit on each step in turn. After the first step is fit, it will use the ``transform`` method of the first step to create a new representation. This will then be fed to the ``fit`` of the next step, and so on. Finally, on the last step, only ``fit`` is called. If we call ``score``, only ``transform`` will be called on each step - this could be the test set after all! Then, on the last step, ``score`` is called with the new representation. The same goes for ``predict``. Building pipelines not only simplifies the code, it is also important for model selection. Say we want to grid-search C to tune our Logistic Regression above. Let's say we do it like this:
###Code
# This illustrates a common mistake. Don't use this code!
from sklearn.model_selection import GridSearchCV
vectorizer = TfidfVectorizer()
vectorizer.fit(text_train)
X_train = vectorizer.transform(text_train)
X_test = vectorizer.transform(text_test)
clf = LogisticRegression()
grid = GridSearchCV(clf, param_grid={'C': [.1, 1, 10, 100]}, cv=5)
grid.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
2.1.2 What did we do wrong? Here, we did grid-search with cross-validation on ``X_train``. However, when applying ``TfidfVectorizer``, it saw all of the ``X_train``, not only the training folds! So it could use knowledge of the frequency of the words in the test-folds. This is called "contamination" of the test set, and leads to too optimistic estimates of generalization performance, or badly selected parameters. We can fix this with the pipeline, though:
###Code
from sklearn.model_selection import GridSearchCV
pipeline = make_pipeline(TfidfVectorizer(),
LogisticRegression())
grid = GridSearchCV(pipeline,
param_grid={'logisticregression__C': [.1, 1, 10, 100]}, cv=5)
grid.fit(text_train, y_train)
grid.score(text_test, y_test)
###Output
_____no_output_____
###Markdown
Note that we need to tell the pipeline at which step we want to set the parameter ``C``. We can do this using the special ``__`` syntax. The name before the ``__`` is simply the lowercased name of the class, the part after ``__`` is the parameter we want to set with grid-search. Another benefit of using pipelines is that we can now also search over parameters of the feature extraction with ``GridSearchCV``:
###Code
from sklearn.model_selection import GridSearchCV
pipeline = make_pipeline(TfidfVectorizer(), LogisticRegression())
params = {'logisticregression__C': [.1, 1, 10, 100],
"tfidfvectorizer__ngram_range": [(1, 1), (1, 2), (2, 2)]}
grid = GridSearchCV(pipeline, param_grid=params, cv=5)
grid.fit(text_train, y_train)
print(grid.best_params_)
grid.score(text_test, y_test)
###Output
{'logisticregression__C': 100, 'tfidfvectorizer__ngram_range': (1, 2)}
###Markdown
EXERCISE: Create a pipeline out of a StandardScaler and Ridge regression and apply it to the Boston housing dataset (load using ``sklearn.datasets.load_boston``). Try adding the ``sklearn.preprocessing.PolynomialFeatures`` transformer as a second preprocessing step, and grid-search the degree of the polynomials (try 1, 2 and 3).
###Code
from sklearn.datasets import load_boston
from sklearn.preprocessing import StandardScaler
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import Ridge
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline
data = load_boston()
X, y = data.data, data.target
print(X.shape, y.shape)
X_train, X_test, y_train, y_test = train_test_split(X,y,
random_state=1234)
print(X_train.shape, X_test.shape)
pipeline = make_pipeline(StandardScaler(), PolynomialFeatures(), Ridge())
grid = {'polynomialfeatures__degree':[1,2,3]}
reg = GridSearchCV(pipeline, param_grid=grid, verbose=3, cv=5)
reg.fit(X_train, y_train)
print(reg.best_params_)
print("score: %f"%(reg.score(X_test, y_test)))
# %load solutions/15A_ridge_grid.py
from sklearn.preprocessing import StandardScaler
from sklearn.datasets import load_boston
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import Ridge
boston = load_boston()
text_train, text_test, y_train, y_test = train_test_split(boston.data,
boston.target,
test_size=0.25,
random_state=123)
pipeline = make_pipeline(StandardScaler(),
PolynomialFeatures(),
Ridge())
grid = GridSearchCV(pipeline,
param_grid={'polynomialfeatures__degree': [1, 2, 3]}, cv=5)
grid.fit(text_train, y_train)
print('best parameters:', grid.best_params_)
print('best score:', grid.best_score_)
print('test score:', grid.score(text_test, y_test))
###Output
best parameters: {'polynomialfeatures__degree': 2}
best score: 0.8176389414974885
test score: 0.8313120138601877
|
lessons/ETLPipelines/10_imputation_exercise/10_imputations_exercise.ipynb | ###Markdown
Imputing DataWhen a dataset has missing values, you can either remove those values or fill them in. In this exercise, you'll work with World Bank GDP (Gross Domestic Product) data to fill in missing values.
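As a tiny reminder of the two options on a made-up series (hypothetical data, just to show the two pandas calls):

```python
import pandas as pd

s = pd.Series([1.0, None, 3.0])   # toy series with one missing value
print(s.dropna())                 # option 1: remove the missing value
print(s.fillna(s.mean()))         # option 2: fill it in, e.g. with the mean
```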
###Code
# run this code cell to read in the data set
import pandas as pd
df = pd.read_csv('../data/gdp_data.csv', skiprows=4)
df.drop('Unnamed: 62', axis=1, inplace=True)
# run this code cell to see what the data looks like
df.head(2)
# Run this code cell to check how many null values are in the data set
df.isnull().sum()
###Output
_____no_output_____
###Markdown
There are quite a few null values. Run the code below to plot the data for a few countries in the data set.
###Code
import matplotlib.pyplot as plt
%matplotlib inline
# put the data set into long form instead of wide
df_melt = pd.melt(df, id_vars=['Country Name', 'Country Code', 'Indicator Name', 'Indicator Code'], var_name='year', value_name='GDP')
# convert year to a date time
df_melt['year'] = pd.to_datetime(df_melt['year'])
def plot_results(column_name):
# plot the results for Afghanistan, Albania, and Honduras
fig, ax = plt.subplots(figsize=(8,6))
df_melt[(df_melt['Country Name'] == 'Afghanistan') |
(df_melt['Country Name'] == 'Albania') |
(df_melt['Country Name'] == 'Honduras')].groupby('Country Name').plot('year', column_name, legend=True, ax=ax)
ax.legend(labels=['Afghanistan', 'Albania', 'Honduras'])
plot_results('GDP')
###Output
_____no_output_____
###Markdown
Afghanistan and Albania are missing data, which show up as gaps in the results. Exercise - Part 1Your first task is to calculate mean GDP for each country and fill in missing values with the country mean. This is a bit tricky to do in pandas. Here are a few links that should be helpful:* https://pandas.pydata.org/pandas-docs/version/0.23/generated/pandas.DataFrame.groupby.html* https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.transform.html* https://pandas.pydata.org/pandas-docs/version/0.22/generated/pandas.DataFrame.fillna.html
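If the ``groupby``/``transform`` combination is unfamiliar, here is the idea on a tiny made-up frame (hypothetical data; ``transform`` broadcasts each group's result back to the original rows):

```python
import pandas as pd

toy = pd.DataFrame({'country': ['A', 'A', 'B', 'B'],
                    'gdp': [1.0, None, 10.0, 30.0]})
# each country's NaN is filled with that country's own mean (A -> 1.0, B -> 20.0)
toy['gdp_filled'] = toy.groupby('country')['gdp'].transform(lambda x: x.fillna(x.mean()))
print(toy)
```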
###Code
# TODO: Use the df_melt dataframe and fill in missing values with a country's mean GDP
# If you aren't sure how to do this,
# look up something like "how to group data and fill in nan values in pandas" in a search engine
# Put the results in a new column called 'GDP_filled'.
# HINT: You can do this with these methods: groupby(), transform(), a lambda function, fillna(), and mean()
df_melt['GDP_filled'] = df_melt.groupby('Country Name')['GDP'].transform(lambda x: x.fillna(x.mean()))
# Plot the results
plot_results('GDP_filled')
###Output
_____no_output_____
###Markdown
This is somewhat of an improvement. At least there is no missing data; however, because GDP tends to increase over time, the mean GDP is probably not the best way to fill in missing values for this particular case. Next, try using forward fill to deal with any missing values. Exercise - Part 2Use the fillna forward fill method to fill in the missing data. Here is the [documentation](https://pandas.pydata.org/pandas-docs/version/0.22/generated/pandas.DataFrame.fillna.html). As explained in the course video, forward fill takes previous values to fill in nulls. The pandas fillna method has a forward fill option. For example, if you wanted to use forward fill on the GDP dataset, you could execute `df_melt['GDP'].fillna(method='ffill')`. However, there are two issues with that code: (1) you want to first make sure the data is sorted by year, and (2) you need to group the data by country name so that the forward fill stays within each country. Write code to first sort the df_melt dataframe by year, then group by 'Country Name', and finally use the forward fill method.
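A tiny made-up example of why the grouping matters for forward fill (hypothetical data; without the ``groupby``, values bleed from one country into the next):

```python
import pandas as pd

toy = pd.DataFrame({'country': ['A', 'B', 'A', 'B'],
                    'year': [1960, 1960, 1961, 1961],
                    'gdp': [1.0, None, None, 30.0]})
toy = toy.sort_values('year')
# ungrouped: country B's 1960 NaN is filled with country A's 1960 value
print(toy['gdp'].fillna(method='ffill'))
# grouped: the fill stays within each country (B's leading NaN stays NaN)
print(toy.groupby('country')['gdp'].fillna(method='ffill'))
```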
###Code
# TODO: Use forward fill to fill in missing GDP values
# HINTS: use the sort_values(), groupby(), and fillna() methods
df_melt['GDP_ffill'] = df_melt.sort_values(['year']).groupby('Country Name')['GDP'].fillna(method='ffill')
# plot the results
plot_results('GDP_ffill')
###Output
_____no_output_____
###Markdown
This looks better at least for the Afghanistan data; however, the Albania data is still missing values. You can fill in the Albania data using back fill. That is what you'll do next. Exercise - Part 3This part is similar to Part 2, but now you will use backfill. Write code that backfills the missing GDP data.
###Code
# TODO: Use back fill to fill in missing GDP values
# HINTS: use the sort_values(), groupby(), and fillna() methods
df_melt['GDP_bfill'] = df_melt.sort_values('year').groupby('Country Name')['GDP'].fillna(method='bfill')
# plot the results
plot_results('GDP_bfill')
###Output
_____no_output_____
###Markdown
Conclusion In this case, the GDP data for all three countries is now complete. Note that forward fill did not fill all the Albania data because the first data entry in 1960 was NaN. Forward fill would try to fill the 1961 value with the NaN value from 1960.To completely fill the entire GDP data for all countries, you might have to run both forward fill and back fill. Note as well that the results will be slightly different depending on if you run forward fill first or back fill first. Afghanistan, for example, is missing data in the middle of the data set. Hence forward fill and back fill will have slightly different results.Run this next code cell to see if running both forward fill and back fill end up filling all the GDP NaN values.
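A quick made-up illustration of the order effect (for a gap in the middle, whichever fill runs first decides the value):

```python
import pandas as pd

s = pd.Series([1.0, None, 3.0])   # toy series with an interior gap
print(s.fillna(method='ffill').fillna(method='bfill'))  # gap becomes 1.0
print(s.fillna(method='bfill').fillna(method='ffill'))  # gap becomes 3.0
```

The next cell applies the same forward-fill-then-back-fill chain to the full GDP data.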
###Code
# Run forward fill and backward fill on the GDP data
df_melt['GDP_ff_bf'] = df_melt.sort_values('year').groupby('Country Name')['GDP'].fillna(method='ffill').fillna(method='bfill')
# Check if any GDP values are null
df_melt['GDP_ff_bf'].isnull().sum()
###Output
_____no_output_____
###Markdown
Imputing DataWhen a dataset has missing values, you can either remove those values or fill them in. In this exercise, you'll work with World Bank GDP (Gross Domestic Product) data to fill in missing values.
###Code
# run this code cell to read in the data set
import pandas as pd
df = pd.read_csv('../data/gdp_data.csv', skiprows=4)
df.drop('Unnamed: 62', axis=1, inplace=True)
# run this code cell to see what the data looks like
df.head()
# Run this code cell to check how many null values are in the data set
df.isnull().sum()
###Output
_____no_output_____
###Markdown
There are quite a few null values. Run the code below to plot the data for a few countries in the data set.
###Code
import matplotlib.pyplot as plt
%matplotlib inline
# put the data set into long form instead of wide
df_melt = pd.melt(df, id_vars=['Country Name', 'Country Code', 'Indicator Name', 'Indicator Code'], var_name='year', value_name='GDP')
# convert year to a date time
df_melt['year'] = pd.to_datetime(df_melt['year'])
def plot_results(column_name):
# plot the results for Afghanistan, Albania, and Honduras
fig, ax = plt.subplots(figsize=(8,6))
df_melt[(df_melt['Country Name'] == 'Afghanistan') |
(df_melt['Country Name'] == 'Albania') |
(df_melt['Country Name'] == 'Honduras')].groupby('Country Name').plot('year', column_name, legend=True, ax=ax)
ax.legend(labels=['Afghanistan', 'Albania', 'Honduras'])
plot_results('GDP')
###Output
_____no_output_____
###Markdown
Afghanistan and Albania are missing data, which show up as gaps in the results. Exercise - Part 1Your first task is to calculate mean GDP for each country and fill in missing values with the country mean. This is a bit tricky to do in pandas. Here are a few links that should be helpful:* https://pandas.pydata.org/pandas-docs/version/0.23/generated/pandas.DataFrame.groupby.html* https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.transform.html* https://pandas.pydata.org/pandas-docs/version/0.22/generated/pandas.DataFrame.fillna.html
###Code
# TODO: Use the df_melt dataframe and fill in missing values with a country's mean GDP
# If you aren't sure how to do this,
# look up something like "how to group data and fill in nan values in pandas" in a search engine
# Put the results in a new column called 'GDP_filled'.
# HINT: You can do this with these methods: groupby(), transform(), a lambda function, fillna(), and mean()
mean_gdp = df_melt.groupby(['Country Name']).mean()
df_melt['GDP_filled'] = df_melt.groupby(['Country Name'])['GDP'].transform(lambda x : x.fillna(x.mean()))
df_melt.head()
# Plot the results
plot_results('GDP_filled')
###Output
_____no_output_____
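###Markdown
As a sanity check on the groupby/transform pattern used above, here is a minimal sketch on a tiny, made-up frame (the column names 'country' and 'value' are illustrative only): each NaN is replaced by the mean of its own group rather than the overall mean.
###Code
import pandas as pd
import numpy as np
toy = pd.DataFrame({'country': ['A', 'A', 'A', 'B', 'B'],
                    'value':   [1.0, np.nan, 3.0, 10.0, np.nan]})
# The NaN in group A becomes 2.0 (mean of 1 and 3); the NaN in group B becomes 10.0
toy['value_filled'] = toy.groupby('country')['value'].transform(lambda x: x.fillna(x.mean()))
toy
###Output
_____no_output_____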
###Markdown
This is somewhat of an improvement. At least there is no missing data; however, because GDP tends to increase over time, the mean GDP is probably not the best way to fill in missing values for this particular case. Next, try using forward fill to deal with any missing values. Exercise - Part 2Use the fillna forward fill method to fill in the missing data. Here is the [documentation](https://pandas.pydata.org/pandas-docs/version/0.22/generated/pandas.DataFrame.fillna.html). As explained in the course video, forward fill takes previous values to fill in nulls.The pandas fillna method has a forward fill option. For example, if you wanted to use forward fill on the GDP dataset, you could execute `df_melt['GDP'].fillna(method='ffill')`. However, there are two issues with that code. 1. You want to first make sure the data is sorted by year2. You need to group the data by country name so that the forward fill stays within each countryWrite code to first sort the df_melt dataframe by year, then group by 'Country Name', and finally use the forward fill method.
###Code
# TODO: Use forward fill to fill in missing GDP values
# HINTS: use the sort_values(), groupby(), and fillna() methods
df_melt['GDP_ffill'] = df_melt.sort_values('year').groupby('Country Name')['GDP'].fillna(method='ffill')
# plot the results
plot_results('GDP_ffill')
###Output
_____no_output_____
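###Markdown
The grouping in the forward fill above matters because an ungrouped fill can carry one country's last value into the next country's rows. A small sketch with made-up data (the names and values are illustrative only):
###Code
import pandas as pd
import numpy as np
toy = pd.DataFrame({'country': ['A', 'A', 'B', 'B'],
                    'gdp':     [1.0, 2.0, np.nan, 4.0]})
# Ungrouped forward fill leaks A's last value (2.0) into B's first row
print(toy['gdp'].fillna(method='ffill').tolist())
# Grouped forward fill leaves B's first row as NaN, since B has no earlier value
print(toy.groupby('country')['gdp'].fillna(method='ffill').tolist())
###Output
_____no_output_____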
###Markdown
This looks better at least for the Afghanistan data; however, the Albania data is still missing values. You can fill in the Albania data using back fill. That is what you'll do next. Exercise - Part 3This part is similar to Part 2, but now you will use backfill. Write code that backfills the missing GDP data.
###Code
# TODO: Use back fill to fill in missing GDP values
# HINTS: use the sort_values(), groupby(), and fillna() methods
df_melt['GDP_bfill'] = df_melt.sort_values('year').groupby('Country Name')['GDP'].fillna(method='bfill')
# plot the results
plot_results('GDP_bfill')
###Output
_____no_output_____
###Markdown
Conclusion In this case, the GDP data for all three countries is now complete. Note that forward fill did not fill all the Albania data because the first data entry in 1960 was NaN. Forward fill would try to fill the 1961 value with the NaN value from 1960.To completely fill the entire GDP data for all countries, you might have to run both forward fill and back fill. Note as well that the results will be slightly different depending on if you run forward fill first or back fill first. Afghanistan, for example, is missing data in the middle of the data set. Hence forward fill and back fill will have slightly different results.Run this next code cell to see if running both forward fill and back fill end up filling all the GDP NaN values.
###Code
# Run forward fill and backward fill on the GDP data
df_melt['GDP_ff_bf'] = df_melt.sort_values('year').groupby('Country Name')['GDP'].fillna(method='ffill').fillna(method='bfill')
# Check if any GDP values are null
df_melt['GDP_ff_bf'].isnull().sum()
###Output
_____no_output_____
###Markdown
Imputing DataWhen a dataset has missing values, you can either remove those values or fill them in. In this exercise, you'll work with World Bank GDP (Gross Domestic Product) data to fill in missing values.
###Code
# run this code cell to read in the data set
import pandas as pd
df = pd.read_csv('../data/gdp_data.csv', skiprows=4)
df.drop('Unnamed: 62', axis=1, inplace=True)
# run this code cell to see what the data looks like
df.head()
# Run this code cell to check how many null values are in the data set
df.isnull().sum()
###Output
_____no_output_____
###Markdown
There are quite a few null values. Run the code below to plot the data for a few countries in the data set.
###Code
import matplotlib.pyplot as plt
%matplotlib inline
# put the data set into long form instead of wide
df_melt = pd.melt(df, id_vars=['Country Name', 'Country Code', 'Indicator Name', 'Indicator Code'], var_name='year', value_name='GDP')
df_melt.head(20)
# convert year to a date time
df_melt['year'] = pd.to_datetime(df_melt['year'])
def plot_results(column_name):
# plot the results for Afghanistan, Albania, and Honduras
fig, ax = plt.subplots(figsize=(8,6))
df_melt[(df_melt['Country Name'] == 'Afghanistan') |
(df_melt['Country Name'] == 'Albania') |
(df_melt['Country Name'] == 'Honduras')].groupby('Country Name').plot('year', column_name, legend=True, ax=ax)
ax.legend(labels=['Afghanistan', 'Albania', 'Honduras'])
plot_results('GDP')
###Output
_____no_output_____
###Markdown
Afghanistan and Albania are missing data, which show up as gaps in the results. Exercise - Part 1Your first task is to calculate mean GDP for each country and fill in missing values with the country mean. This is a bit tricky to do in pandas. Here are a few links that should be helpful:* https://pandas.pydata.org/pandas-docs/version/0.23/generated/pandas.DataFrame.groupby.html* https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.transform.html* https://pandas.pydata.org/pandas-docs/version/0.22/generated/pandas.DataFrame.fillna.html
###Code
# TODO: Use the df_melt dataframe and fill in missing values with a country's mean GDP
# If you aren't sure how to do this,
# look up something like "how to group data and fill in nan values in pandas" in a search engine
# Put the results in a new column called 'GDP_filled'.
df_melt['GDP_filled'] = df_melt.groupby('Country Name')['GDP'].transform(lambda x: x.fillna(x.mean()))
# Plot the results
plot_results('GDP_filled')
###Output
_____no_output_____
###Markdown
This is somewhat of an improvement. At least there is no missing data; however, because GDP tends to increase over time, the mean GDP is probably not the best way to fill in missing values for this particular case. Next, try using forward fill to deal with any missing values. Exercise - Part 2Use the fillna forward fill method to fill in the missing data. Here is the [documentation](https://pandas.pydata.org/pandas-docs/version/0.22/generated/pandas.DataFrame.fillna.html). As explained in the course video, forward fill takes previous values to fill in nulls.The pandas fillna method has a forward fill option. For example, if you wanted to use forward fill on the GDP dataset, you could execute `df_melt['GDP'].fillna(method='ffill')`. However, there are two issues with that code. 1. You want to first make sure the data is sorted by year2. You need to group the data by country name so that the forward fill stays within each countryWrite code to first sort the df_melt dataframe by year, then group by 'Country Name', and finally use the forward fill method.
###Code
# TODO: Use forward fill to fill in missing GDP values
# HINTS: use the sort_values(), groupby(), and fillna() methods
df_melt['GDP_ffill'] = df_melt.sort_values(by=['year']).groupby('Country Name')['GDP'].fillna(method='ffill')
# plot the results
plot_results('GDP_ffill')
###Output
_____no_output_____
###Markdown
This looks better at least for the Afghanistan data; however, the Albania data is still missing values. You can fill in the Albania data using back fill. That is what you'll do next. Exercise - Part 3This part is similar to Part 2, but now you will use backfill. Write code that backfills the missing GDP data.
###Code
# TODO: Use back fill to fill in missing GDP values
# HINTS: use the sort_values(), groupby(), and fillna() methods
df_melt['GDP_bfill'] = df_melt.sort_values(by=['year']).groupby('Country Name')['GDP'].fillna(method='bfill')
# plot the results
plot_results('GDP_bfill')
###Output
_____no_output_____
###Markdown
Conclusion In this case, the GDP data for all three countries is now complete. Note that forward fill did not fill all the Albania data because the first data entry in 1960 was NaN. Forward fill would try to fill the 1961 value with the NaN value from 1960.To completely fill the entire GDP data for all countries, you might have to run both forward fill and back fill. Note as well that the results will be slightly different depending on if you run forward fill first or back fill first. Afghanistan, for example, is missing data in the middle of the data set. Hence forward fill and back fill will have slightly different results.Run this next code cell to see if running both forward fill and back fill end up filling all the GDP NaN values.
###Code
# Run forward fill and backward fill on the GDP data
df_melt['GDP_ff_bf'] = df_melt.sort_values('year').groupby('Country Name')['GDP'].fillna(method='ffill').fillna(method='bfill')
# Check if any GDP values are null
df_melt['GDP_ff_bf'].isnull().sum()
# plot the results
plot_results('GDP_ff_bf')
###Output
_____no_output_____
###Markdown
Imputing DataWhen a dataset has missing values, you can either remove those values or fill them in. In this exercise, you'll work with World Bank GDP (Gross Domestic Product) data to fill in missing values.
###Code
# run this code cell to read in the data set
import pandas as pd
df = pd.read_csv('../data/gdp_data.csv', skiprows=4)
df.drop('Unnamed: 62', axis=1, inplace=True)
# run this code cell to see what the data looks like
df.head()
# Run this code cell to check how many null values are in the data set
df.isnull().sum()
###Output
_____no_output_____
###Markdown
There are quite a few null values. Run the code below to plot the data for a few countries in the data set.
###Code
import matplotlib.pyplot as plt
%matplotlib inline
# put the data set into long form instead of wide
df_melt = pd.melt(df, id_vars=['Country Name', 'Country Code', 'Indicator Name', 'Indicator Code'], var_name='year', value_name='GDP')
# convert year to a date time
df_melt['year'] = pd.to_datetime(df_melt['year'])
def plot_results(column_name):
# plot the results for Afghanistan, Albania, and Honduras
fig, ax = plt.subplots(figsize=(8,6))
df_melt[(df_melt['Country Name'] == 'Afghanistan') |
(df_melt['Country Name'] == 'Albania') |
(df_melt['Country Name'] == 'Honduras')].groupby('Country Name').plot('year', column_name, legend=True, ax=ax)
ax.legend(labels=['Afghanistan', 'Albania', 'Honduras'])
plot_results('GDP')
###Output
_____no_output_____
###Markdown
Afghanistan and Albania are missing data, which show up as gaps in the results. Exercise - Part 1Your first task is to calculate mean GDP for each country and fill in missing values with the country mean. This is a bit tricky to do in pandas. Here are a few links that should be helpful:* https://pandas.pydata.org/pandas-docs/version/0.23/generated/pandas.DataFrame.groupby.html* https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.transform.html* https://pandas.pydata.org/pandas-docs/version/0.22/generated/pandas.DataFrame.fillna.html
###Code
# TODO: Use the df_melt dataframe and fill in missing values with a country's mean GDP
# If you aren't sure how to do this,
# look up something like "how to group data and fill in nan values in pandas" in a search engine
# Put the results in a new column called 'GDP_filled'.
df_melt['GDP_filled'] = df_melt.groupby('Country Name')['GDP'].transform(lambda x: x.fillna(x.mean()))
# Plot the results
plot_results('GDP_filled')
###Output
_____no_output_____
###Markdown
This is somewhat of an improvement. At least there is no missing data; however, because GDP tends to increase over time, the mean GDP is probably not the best way to fill in missing values for this particular case. Next, try using forward fill to deal with any missing values. Exercise - Part 2Use the fillna forward fill method to fill in the missing data. Here is the [documentation](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.fillna.html). As explained in the course video, forward fill takes previous values to fill in nulls.The pandas fillna method has a forward fill option. For example, if you wanted to use forward fill on the GDP dataset, you could execute `df_melt['GDP'].fillna(method='ffill')`. However, there are two issues with that code. 1. You want to first make sure the data is sorted by year2. You need to group the data by country name so that the forward fill stays within each countryWrite code to first sort the df_melt dataframe by year, then group by 'Country Name', and finally use the forward fill method.
###Code
# TODO: Use forward fill to fill in missing GDP values
# HINTS: use the sort_values(), groupby(), and fillna() methods
df_melt['GDP_ffill'] = df_melt.sort_values(by=['year']).groupby('Country Name')['GDP'].fillna(method='ffill')
# plot the results
plot_results('GDP_ffill')
###Output
_____no_output_____
###Markdown
This looks better at least for the Afghanistan data; however, the Albania data is still missing values. You can fill in the Albania data using back fill. That is what you'll do next. Exercise - Part 3This part is similar to Part 2, but now you will use backfill. Write code that backfills the missing GDP data.
###Code
# TODO: Use back fill to fill in missing GDP values
# HINTS: use the sort_values(), groupby(), and fillna() methods
df_melt['GDP_bfill'] = df_melt.sort_values(by=['year']).groupby('Country Name')['GDP'].fillna(method='backfill')
# plot the results
plot_results('GDP_bfill')
###Output
_____no_output_____
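###Markdown
A quick way to compare how much each strategy left unfilled, assuming the GDP_filled, GDP_ffill, and GDP_bfill columns created above are still present; forward fill should still show gaps such as Albania's leading NaN from 1960.
###Code
# Remaining NaN count per column, from the raw GDP through each fill strategy
df_melt[['GDP', 'GDP_filled', 'GDP_ffill', 'GDP_bfill']].isnull().sum()
###Output
_____no_output_____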
###Markdown
Conclusion In this case, the GDP data for all three countries is now complete. Note that forward fill did not fill all the Albania data because the first data entry in 1960 was NaN. Forward fill would try to fill the 1961 value with the NaN value from 1960.To completely fill the entire GDP data for all countries, you might have to run both forward fill and back fill. Note as well that the results will be slightly different depending on if you run forward fill first or back fill first. Afghanistan, for example, is missing data in the middle of the data set. Hence forward fill and back fill will have slightly different results.Run this next code cell to see if running both forward fill and back fill end up filling all the GDP NaN values.
###Code
# Run forward fill and backward fill on the GDP data
df_melt['GDP_ff_bf'] = df_melt.sort_values('year').groupby('Country Name')['GDP'].fillna(method='ffill').fillna(method='bfill')
# Check if any GDP values are null
df_melt['GDP_ff_bf'].isnull().sum()
plot_results('GDP_ff_bf')
###Output
_____no_output_____
###Markdown
Imputing DataWhen a dataset has missing values, you can either remove those values or fill them in. In this exercise, you'll work with World Bank GDP (Gross Domestic Product) data to fill in missing values.
###Code
# run this code cell to read in the data set
import pandas as pd
df = pd.read_csv('../data/gdp_data.csv', skiprows=4)
df.drop('Unnamed: 62', axis=1, inplace=True)
# run this code cell to see what the data looks like
df.head()
# Run this code cell to check how many null values are in the data set
df.isnull().sum()
###Output
_____no_output_____
###Markdown
There are quite a few null values. Run the code below to plot the data for a few countries in the data set.
###Code
pd.melt?
import matplotlib.pyplot as plt
%matplotlib inline
# put the data set into long form instead of wide
df_melt = pd.melt(df, id_vars=['Country Name', 'Country Code', 'Indicator Name', 'Indicator Code'], var_name='year', value_name='GDP')
# convert year to a date time
df_melt['year'] = pd.to_datetime(df_melt['year'])
def plot_results(column_name):
# plot the results for Afghanistan, Albania, and Honduras
fig, ax = plt.subplots(figsize=(8,6))
df_melt[(df_melt['Country Name'] == 'Afghanistan') |
(df_melt['Country Name'] == 'Albania') |
(df_melt['Country Name'] == 'Honduras')].groupby('Country Name').plot('year', column_name, legend=True, ax=ax)
ax.legend(labels=['Afghanistan', 'Albania', 'Honduras'])
plot_results('GDP')
###Output
_____no_output_____
###Markdown
Afghanistan and Albania are missing data, which show up as gaps in the results. Exercise - Part 1Your first task is to calculate mean GDP for each country and fill in missing values with the country mean. This is a bit tricky to do in pandas. Here are a few links that should be helpful:* https://pandas.pydata.org/pandas-docs/version/0.23/generated/pandas.DataFrame.groupby.html* https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.transform.html* https://pandas.pydata.org/pandas-docs/version/0.22/generated/pandas.DataFrame.fillna.html
###Code
# df_melt[df_melt['Country Name'] == 'Afghanistan'].fillna(\
# df_melt[df_melt['Country Name'] == 'Afghanistan']["GDP"].isnull().sum()
# .mean())
df_melt.groupby('Country Name')["GDP"]
# TODO: Use the df_melt dataframe and fill in missing values with a country's mean GDP
# If you aren't sure how to do this,
# look up something like "how to group data and fill in nan values in pandas" in a search engine
# Put the results in a new column called 'GDP_filled'.
df_melt['GDP_filled'] = df_melt.groupby('Country Name')['GDP'].transform(lambda x: x.fillna(x.mean()))
# Plot the results
plot_results('GDP_filled')
###Output
_____no_output_____
###Markdown
This is somewhat of an improvement. At least there is no missing data; however, because GDP tends to increase over time, the mean GDP is probably not the best way to fill in missing values for this particular case. Next, try using forward fill to deal with any missing values. Exercise - Part 2Use the fillna forward fill method to fill in the missing data. Here is the [documentation](https://pandas.pydata.org/pandas-docs/version/0.22/generated/pandas.DataFrame.fillna.html). As explained in the course video, forward fill takes previous values to fill in nulls.The pandas fillna method has a forward fill option. For example, if you wanted to use forward fill on the GDP dataset, you could execute `df_melt['GDP'].fillna(method='ffill')`. However, there are two issues with that code. 1. You want to first make sure the data is sorted by year2. You need to group the data by country name so that the forward fill stays within each countryWrite code to first sort the df_melt dataframe by year, then group by 'Country Name', and finally use the forward fill method.
###Code
df_melt.sort_values?
# TODO: Use forward fill to fill in missing GDP values
# HINTS: use the sort_values(), groupby(), and fillna() methods
df_melt['GDP_ffill'] = df_melt.sort_values(by='year').groupby('Country Name')['GDP'].fillna(method='ffill')
# plot the results
plot_results('GDP_ffill')
###Output
_____no_output_____
###Markdown
This looks better at least for the Afghanistan data; however, the Albania data is still missing values. You can fill in the Albania data using back fill. That is what you'll do next. Exercise - Part 3This part is similar to Part 2, but now you will use backfill. Write code that backfills the missing GDP data.
###Code
# TODO: Use back fill to fill in missing GDP values
# HINTS: use the sort_values(), groupby(), and fillna() methods
df_melt['GDP_bfill'] = df_melt.sort_values(by='year').groupby('Country Name')['GDP'].fillna(method='bfill')
# plot the results
plot_results('GDP_bfill')
###Output
_____no_output_____
###Markdown
Conclusion In this case, the GDP data for all three countries is now complete. Note that forward fill did not fill all the Albania data because the first data entry in 1960 was NaN. Forward fill would try to fill the 1961 value with the NaN value from 1960.To completely fill the entire GDP data for all countries, you might have to run both forward fill and back fill. Note as well that the results will be slightly different depending on if you run forward fill first or back fill first. Afghanistan, for example, is missing data in the middle of the data set. Hence forward fill and back fill will have slightly different results.Run this next code cell to see if running both forward fill and back fill end up filling all the GDP NaN values.
###Code
# Run forward fill and backward fill on the GDP data
df_melt['GDP_ff_bf'] = df_melt.sort_values('year').groupby('Country Name')['GDP'].fillna(method='ffill').fillna(method='bfill')
# Check if any GDP values are null
df_melt['GDP_ff_bf'].isnull().sum()
###Output
_____no_output_____
###Markdown
Imputing DataWhen a dataset has missing values, you can either remove those values or fill them in. In this exercise, you'll work with World Bank GDP (Gross Domestic Product) data to fill in missing values.
###Code
# run this code cell to read in the data set
import pandas as pd
df = pd.read_csv('../data/gdp_data.csv', skiprows=4)
df.drop('Unnamed: 62', axis=1, inplace=True)
# run this code cell to see what the data looks like
df.head()
# Run this code cell to check how many null values are in the data set
df.isnull().sum()
###Output
_____no_output_____
###Markdown
There are quite a few null values. Run the code below to plot the data for a few countries in the data set.
###Code
import matplotlib.pyplot as plt
%matplotlib inline
# put the data set into long form instead of wide
df_melt = pd.melt(df, id_vars=['Country Name', 'Country Code', 'Indicator Name', 'Indicator Code'], var_name='year', value_name='GDP')
# convert year to a date time
df_melt['year'] = pd.to_datetime(df_melt['year'])
def plot_results(column_name):
# plot the results for Afghanistan, Albania, and Honduras
fig, ax = plt.subplots(figsize=(8,6))
df_melt[(df_melt['Country Name'] == 'Afghanistan') |
(df_melt['Country Name'] == 'Albania') |
(df_melt['Country Name'] == 'Honduras')].groupby('Country Name').plot('year', column_name, legend=True, ax=ax)
ax.legend(labels=['Afghanistan', 'Albania', 'Honduras'])
plot_results('GDP')
###Output
_____no_output_____
###Markdown
Afghanistan and Albania are missing data, which show up as gaps in the results. Exercise - Part 1Your first task is to calculate mean GDP for each country and fill in missing values with the country mean. This is a bit tricky to do in pandas. Here are a few links that should be helpful:* https://pandas.pydata.org/pandas-docs/version/0.23/generated/pandas.DataFrame.groupby.html* https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.transform.html* https://pandas.pydata.org/pandas-docs/version/0.22/generated/pandas.DataFrame.fillna.html
###Code
# TODO: Use the df_melt dataframe and fill in missing values with a country's mean GDP
# If you aren't sure how to do this,
# look up something like "how to group data and fill in nan values in pandas" in a search engine
# Put the results in a new column called 'GDP_filled'.
df_melt['GDP_filled'] = df_melt.groupby("Country Name").GDP.transform(
lambda x: x.fillna(x.mean()))
# Plot the results
plot_results('GDP_filled')
###Output
_____no_output_____
###Markdown
This is somewhat of an improvement. At least there is no missing data; however, because GDP tends to increase over time, the mean GDP is probably not the best way to fill in missing values for this particular case. Next, try using forward fill to deal with any missing values. Exercise - Part 2Use the fillna forward fill method to fill in the missing data. Here is the [documentation](https://pandas.pydata.org/pandas-docs/version/0.22/generated/pandas.DataFrame.fillna.html). As explained in the course video, forward fill takes previous values to fill in nulls.The pandas fillna method has a forward fill option. For example, if you wanted to use forward fill on the GDP dataset, you could execute `df_melt['GDP'].fillna(method='ffill')`. However, there are two issues with that code. 1. You want to first make sure the data is sorted by year2. You need to group the data by country name so that the forward fill stays within each countryWrite code to first sort the df_melt dataframe by year, then group by 'Country Name', and finally use the forward fill method.
###Code
# TODO: Use forward fill to fill in missing GDP values
# HINTS: use the sort_values(), groupby(), and fillna() methods
df_melt['GDP_ffill'] = df_melt.sort_values('year').groupby("Country Name").GDP.transform(
lambda x: x.fillna(method='ffill'))
# plot the results
plot_results('GDP_ffill')
###Output
_____no_output_____
###Markdown
This looks better at least for the Afghanistan data; however, the Albania data is still missing values. You can fill in the Albania data using back fill. That is what you'll do next. Exercise - Part 3This part is similar to Part 2, but now you will use backfill. Write code that backfills the missing GDP data.
###Code
# TODO: Use back fill to fill in missing GDP values
# HINTS: use the sort_values(), groupby(), and fillna() methods
df_melt['GDP_bfill'] = df_melt.sort_values('year').groupby("Country Name").GDP.transform(
lambda x: x.fillna(method='bfill'))
# plot the results
plot_results('GDP_bfill')
###Output
_____no_output_____
###Markdown
Conclusion In this case, the GDP data for all three countries is now complete. Note that forward fill did not fill all the Albania data because the first data entry in 1960 was NaN. Forward fill would try to fill the 1961 value with the NaN value from 1960.To completely fill the entire GDP data for all countries, you might have to run both forward fill and back fill. Note as well that the results will be slightly different depending on if you run forward fill first or back fill first. Afghanistan, for example, is missing data in the middle of the data set. Hence forward fill and back fill will have slightly different results.Run this next code cell to see if running both forward fill and back fill end up filling all the GDP NaN values.
###Code
# Run forward fill and backward fill on the GDP data
df_melt['GDP_ff_bf'] = df_melt.sort_values('year').groupby('Country Name')['GDP'].fillna(method='ffill').fillna(method='bfill')
# Check if any GDP values are null
df_melt['GDP_ff_bf'].isnull().sum()
###Output
_____no_output_____
###Markdown
Imputing DataWhen a dataset has missing values, you can either remove those values or fill them in. In this exercise, you'll work with World Bank GDP (Gross Domestic Product) data to fill in missing values.
###Code
# run this code cell to read in the data set
import pandas as pd
df = pd.read_csv('../data/gdp_data.csv', skiprows=4)
df.drop('Unnamed: 62', axis=1, inplace=True)
# run this code cell to see what the data looks like
df.head()
# Run this code cell to check how many null values are in the data set
df.isnull().sum()
###Output
_____no_output_____
###Markdown
There are quite a few null values. Run the code below to plot the data for a few countries in the data set.
###Code
import matplotlib.pyplot as plt
%matplotlib inline
# put the data set into long form instead of wide
df_melt = pd.melt(df, id_vars=['Country Name', 'Country Code', 'Indicator Name', 'Indicator Code'], var_name='year', value_name='GDP')
# convert year to a date time
df_melt['year'] = pd.to_datetime(df_melt['year'])
def plot_results(column_name):
# plot the results for Afghanistan, Albania, and Honduras
fig, ax = plt.subplots(figsize=(8,6))
df_melt[(df_melt['Country Name'] == 'Afghanistan') |
(df_melt['Country Name'] == 'Albania') |
(df_melt['Country Name'] == 'Honduras')].groupby('Country Name').plot('year', column_name, legend=True, ax=ax)
ax.legend(labels=['Afghanistan', 'Albania', 'Honduras'])
plot_results('GDP')
df_melt = pd.melt(df, id_vars=['Country Name', 'Country Code', 'Indicator Name', 'Indicator Code'], var_name='year', value_name='GDP')
df_melt
###Output
_____no_output_____
###Markdown
Afghanistan and Albania are missing data, which show up as gaps in the results. Exercise - Part 1Your first task is to calculate mean GDP for each country and fill in missing values with the country mean. This is a bit tricky to do in pandas. Here are a few links that should be helpful:* https://pandas.pydata.org/pandas-docs/version/0.23/generated/pandas.DataFrame.groupby.html* https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.transform.html* https://pandas.pydata.org/pandas-docs/version/0.22/generated/pandas.DataFrame.fillna.html
###Code
# TODO: Use the df_melt dataframe and fill in missing values with a country's mean GDP
# If you aren't sure how to do this,
# look up something like "how to group data and fill in nan values in pandas" in a search engine
# Put the results in a new column called 'GDP_filled'.
df_melt['GDP_filled'] = None
dfGrouped = df_melt.groupby('Country Name').sum()
dfGrouped.head()
# Inspect one country's raw GDP values before filling
df_melt[df_melt['Country Name'] == 'Afghanistan']['GDP']
df_melt['GDP_filled'] = df_melt.groupby('Country Name')['GDP'].transform(lambda x: x.fillna(x.mean()))
# Plot the results
plot_results('GDP_filled')
###Output
C:\Users\cerion\AppData\Roaming\Python\Python38\site-packages\pandas\plotting\_matplotlib\core.py:1182: UserWarning: FixedFormatter should only be used together with FixedLocator
ax.set_xticklabels(xticklabels)
###Markdown
This is somewhat of an improvement. At least there is no missing data; however, because GDP tends to increase over time, the mean GDP is probably not the best way to fill in missing values for this particular case. Next, try using forward fill to deal with any missing values. Exercise - Part 2Use the fillna forward fill method to fill in the missing data. Here is the [documentation](https://pandas.pydata.org/pandas-docs/version/0.22/generated/pandas.DataFrame.fillna.html). As explained in the course video, forward fill takes previous values to fill in nulls.The pandas fillna method has a forward fill option. For example, if you wanted to use forward fill on the GDP dataset, you could execute `df_melt['GDP'].fillna(method='ffill')`. However, there are two issues with that code. 1. You want to first make sure the data is sorted by year2. You need to group the data by country name so that the forward fill stays within each countryWrite code to first sort the df_melt dataframe by year, then group by 'Country Name', and finally use the forward fill method.
###Code
# TODO: Use forward fill to fill in missing GDP values
# HINTS: use the sort_values(), groupby(), and fillna() methods
df_melt['GDP_ffill'] = df_melt.sort_values('year').groupby('Country Name')['GDP'].transform(lambda x: x.fillna(x.ffill()))
# plot the results
plot_results('GDP_ffill')
###Output
C:\Users\cerion\AppData\Roaming\Python\Python38\site-packages\pandas\plotting\_matplotlib\core.py:1182: UserWarning: FixedFormatter should only be used together with FixedLocator
ax.set_xticklabels(xticklabels)
###Markdown
This looks better at least for the Afghanistan data; however, the Albania data is still missing values. You can fill in the Albania data using back fill. That is what you'll do next. Exercise - Part 3This part is similar to Part 2, but now you will use backfill. Write code that backfills the missing GDP data.
###Code
# TODO: Use back fill to fill in missing GDP values
# HINTS: use the sort_values(), groupby(), and fillna() methods
df_melt['GDP_bfill'] = df_melt.sort_values('year').groupby('Country Name')['GDP'].transform(lambda x: x.fillna(x.bfill()))
# plot the results
plot_results('GDP_bfill')
###Output
C:\Users\cerion\AppData\Roaming\Python\Python38\site-packages\pandas\plotting\_matplotlib\core.py:1182: UserWarning: FixedFormatter should only be used together with FixedLocator
ax.set_xticklabels(xticklabels)
###Markdown
Conclusion In this case, the GDP data for all three countries is now complete. Note that forward fill did not fill all the Albania data because the first data entry in 1960 was NaN. Forward fill would try to fill the 1961 value with the NaN value from 1960.To completely fill the entire GDP data for all countries, you might have to run both forward fill and back fill. Note as well that the results will be slightly different depending on if you run forward fill first or back fill first. Afghanistan, for example, is missing data in the middle of the data set. Hence forward fill and back fill will have slightly different results.Run this next code cell to see if running both forward fill and back fill end up filling all the GDP NaN values.
###Code
# Run forward fill and backward fill on the GDP data
df_melt['GDP_ff_bf'] = df_melt.sort_values('year').groupby('Country Name')['GDP'].fillna(method='ffill').fillna(method='bfill')
# Check if any GDP values are null
df_melt['GDP_ff_bf'].isnull().sum()
###Output
_____no_output_____
###Markdown
Imputing DataWhen a dataset has missing values, you can either remove those values or fill them in. In this exercise, you'll work with World Bank GDP (Gross Domestic Product) data to fill in missing values.
###Code
# run this code cell to read in the data set
import pandas as pd
df = pd.read_csv('../data/gdp_data.csv', skiprows=4)
df.drop('Unnamed: 62', axis=1, inplace=True)
# run this code cell to see what the data looks like
df.head()
# Run this code cell to check how many null values are in the data set
df.isnull().sum()
###Output
_____no_output_____
###Markdown
There are quite a few null values. Run the code below to plot the data for a few countries in the data set.
###Code
import matplotlib.pyplot as plt
%matplotlib inline
# put the data set into long form instead of wide
df_melt = pd.melt(df, id_vars=['Country Name', 'Country Code', 'Indicator Name', 'Indicator Code'], var_name='year', value_name='GDP')
# convert year to a date time
df_melt['year'] = pd.to_datetime(df_melt['year'])
def plot_results(column_name):
# plot the results for Afghanistan, Albania, and Honduras
fig, ax = plt.subplots(figsize=(8,6))
df_melt[(df_melt['Country Name'] == 'Afghanistan') |
(df_melt['Country Name'] == 'Albania') |
(df_melt['Country Name'] == 'Honduras')].groupby('Country Name').plot('year', column_name, legend=True, ax=ax)
ax.legend(labels=['Afghanistan', 'Albania', 'Honduras'])
plot_results('GDP')
###Output
_____no_output_____
###Markdown
Afghanistan and Albania are missing data, which show up as gaps in the results. Exercise - Part 1Your first task is to calculate mean GDP for each country and fill in missing values with the country mean. This is a bit tricky to do in pandas. Here are a few links that should be helpful:* https://pandas.pydata.org/pandas-docs/version/0.23/generated/pandas.DataFrame.groupby.html* https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.transform.html* https://pandas.pydata.org/pandas-docs/version/0.22/generated/pandas.DataFrame.fillna.html
###Code
df_melt.head()
# TODO: Use the df_melt dataframe and fill in missing values with a country's mean GDP
# If you aren't sure how to do this,
# look up something like "how to group data and fill in nan values in pandas" in a search engine
# Put the results in a new column called 'GDP_filled'.
import numpy as np
# Per-row series of each country's mean GDP, then fill the GDP gaps from it by index alignment
mean_gdp = df_melt.groupby('Country Name')['GDP'].transform(np.mean)
df_melt['GDP_filled'] = df_melt['GDP'].fillna(mean_gdp)
# Plot the results
plot_results('GDP_filled')
###Output
_____no_output_____
###Markdown
This is somewhat of an improvement. At least there is no missing data; however, because GDP tends to increase over time, the mean GDP is probably not the best way to fill in missing values for this particular case. Next, try using forward fill to deal with any missing values. Exercise - Part 2Use the fillna forward fill method to fill in the missing data. Here is the [documentation](https://pandas.pydata.org/pandas-docs/version/0.22/generated/pandas.DataFrame.fillna.html). As explained in the course video, forward fill takes previous values to fill in nulls.The pandas fillna method has a forward fill option. For example, if you wanted to use forward fill on the GDP dataset, you could execute `df_melt['GDP'].fillna(method='ffill')`. However, there are two issues with that code. 1. You want to first make sure the data is sorted by year2. You need to group the data by country name so that the forward fill stays within each countryWrite code to first sort the df_melt dataframe by year, then group by 'Country Name', and finally use the forward fill method.
###Code
df_melt.head()
# TODO: Use forward fill to fill in missing GDP values
# HINTS: use the sort_values(), groupby(), and fillna() methods
df_melt['GDP_ffill'] = df_melt.sort_values(by='year').groupby('Country Name', as_index=False)['GDP'].fillna(method='ffill')
# plot the results
plot_results('GDP_ffill')
###Output
_____no_output_____
###Markdown
This looks better at least for the Afghanistan data; however, the Albania data is still missing values. You can fill in the Albania data using back fill. That is what you'll do next. Exercise - Part 3This part is similar to Part 2, but now you will use backfill. Write code that backfills the missing GDP data.
###Code
# TODO: Use back fill to fill in missing GDP values
# HINTS: use the sort_values(), groupby(), and fillna() methods
df_melt['GDP_bfill'] = df_melt.sort_values(by='year').groupby('Country Name', as_index=False)['GDP'].fillna(method='bfill')
# plot the results
plot_results('GDP_bfill')
###Output
_____no_output_____
###Markdown
Conclusion In this case, the GDP data for all three countries is now complete. Note that forward fill did not fill all the Albania data because the first data entry in 1960 was NaN. Forward fill would try to fill the 1961 value with the NaN value from 1960.To completely fill the entire GDP data for all countries, you might have to run both forward fill and back fill. Note as well that the results will be slightly different depending on if you run forward fill first or back fill first. Afghanistan, for example, is missing data in the middle of the data set. Hence forward fill and back fill will have slightly different results.Run this next code cell to see if running both forward fill and back fill end up filling all the GDP NaN values.
###Code
# Run forward fill and backward fill on the GDP data
df_melt['GDP_ff_bf'] = df_melt.sort_values('year').groupby('Country Name')['GDP'].fillna(method='ffill').fillna(method='bfill')
# Check if any GDP values are null
df_melt['GDP_ff_bf'].isnull().sum()
###Output
_____no_output_____
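###Markdown
To confirm the earlier observation that forward fill alone leaves Albania with missing values while the combined fill does not, here is a short check, assuming the GDP_ffill and GDP_ff_bf columns created above are still present.
###Code
# Null counts for Albania: raw GDP, forward fill only, and forward fill followed by back fill
albania = df_melt[df_melt['Country Name'] == 'Albania']
albania[['GDP', 'GDP_ffill', 'GDP_ff_bf']].isnull().sum()
###Output
_____no_output_____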
###Markdown
Imputing DataWhen a dataset has missing values, you can either remove those values or fill them in. In this exercise, you'll work with World Bank GDP (Gross Domestic Product) data to fill in missing values.
###Code
# run this code cell to read in the data set
import pandas as pd
df = pd.read_csv('../data/gdp_data.csv', skiprows=4)
df.drop('Unnamed: 62', axis=1, inplace=True)
# run this code cell to see what the data looks like
df.head()
# Run this code cell to check how many null values are in the data set
df.isnull().sum()
###Output
_____no_output_____
###Markdown
There are quite a few null values. Run the code below to plot the data for a few countries in the data set.
###Code
import matplotlib.pyplot as plt
%matplotlib inline
# put the data set into long form instead of wide
df_melt = pd.melt(df, id_vars=['Country Name', 'Country Code', 'Indicator Name', 'Indicator Code'], var_name='year', value_name='GDP')
# convert year to a date time
df_melt['year'] = pd.to_datetime(df_melt['year'])
def plot_results(column_name):
# plot the results for Afghanistan, Albania, and Honduras
fig, ax = plt.subplots(figsize=(8,6))
df_melt[(df_melt['Country Name'] == 'Afghanistan') |
(df_melt['Country Name'] == 'Albania') |
(df_melt['Country Name'] == 'Honduras')].groupby('Country Name').plot('year', column_name, legend=True, ax=ax)
ax.legend(labels=['Afghanistan', 'Albania', 'Honduras'])
plot_results('GDP')
###Output
_____no_output_____
###Markdown
Afghanistan and Albania are missing data, which show up as gaps in the results.
###Code
df_melt.head()
###Output
_____no_output_____
###Markdown
Exercise - Part 1Your first task is to calculate mean GDP for each country and fill in missing values with the country mean. This is a bit tricky to do in pandas. Here are a few links that should be helpful:* https://pandas.pydata.org/pandas-docs/version/0.23/generated/pandas.DataFrame.groupby.html* https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.transform.html* https://pandas.pydata.org/pandas-docs/version/0.22/generated/pandas.DataFrame.fillna.html
###Code
# TODO: Use the df_melt dataframe and fill in missing values with a country's mean GDP
# If you aren't sure how to do this,
# look up something like "how to group data and fill in nan values in pandas" in a search engine
# Put the results in a new column called 'GDP_filled'.
df_melt['GDP_filled'] = df_melt.groupby('Country Name')['GDP'].transform(lambda x: x.fillna(x.mean()))
# Plot the results
plot_results('GDP_filled')
###Output
_____no_output_____
###Markdown
This is somewhat of an improvement. At least there is no missing data; however, because GDP tends to increase over time, the mean GDP is probably not the best way to fill in missing values for this particular case. Next, try using forward fill to deal with any missing values. Exercise - Part 2Use the fillna forward fill method to fill in the missing data. Here is the [documentation](https://pandas.pydata.org/pandas-docs/version/0.22/generated/pandas.DataFrame.fillna.html). As explained in the course video, forward fill takes previous values to fill in nulls.The pandas fillna method has a forward fill option. For example, if you wanted to use forward fill on the GDP dataset, you could execute `df_melt['GDP'].fillna(method='ffill')`. However, there are two issues with that code. 1. You want to first make sure the data is sorted by year2. You need to group the data by country name so that the forward fill stays within each countryWrite code to first sort the df_melt dataframe by year, then group by 'Country Name', and finally use the forward fill method.
###Code
# TODO: Use forward fill to fill in missing GDP values
# HINTS: use the sort_values(), groupby(), and fillna() methods
df_melt['GDP_ffill'] = df_melt.sort_values('year').groupby('Country Name')['GDP'].fillna(method='ffill')
# plot the results
plot_results('GDP_ffill')
###Output
_____no_output_____
###Markdown
This looks better at least for the Afghanistan data; however, the Albania data is still missing values. You can fill in the Albania data using back fill. That is what you'll do next. Exercise - Part 3This part is similar to Part 2, but now you will use backfill. Write code that backfills the missing GDP data.
###Code
# TODO: Use back fill to fill in missing GDP values
# HINTS: use the sort_values(), groupby(), and fillna() methods
df_melt['GDP_bfill'] = df_melt.sort_values('year').groupby('Country Name')['GDP'].fillna(method='bfill')
# plot the results
plot_results('GDP_bfill')
###Output
_____no_output_____
###Markdown
Conclusion In this case, the GDP data for all three countries is now complete. Note that forward fill did not fill all the Albania data because the first data entry in 1960 was NaN. Forward fill would try to fill the 1961 value with the NaN value from 1960.To completely fill the entire GDP data for all countries, you might have to run both forward fill and back fill. Note as well that the results will be slightly different depending on if you run forward fill first or back fill first. Afghanistan, for example, is missing data in the middle of the data set. Hence forward fill and back fill will have slightly different results.Run this next code cell to see if running both forward fill and back fill end up filling all the GDP NaN values.
###Code
# Run forward fill and backward fill on the GDP data
df_melt['GDP_ff_bf'] = df_melt.sort_values('year').groupby('Country Name')['GDP'].fillna(method='ffill').fillna(method='bfill')
# Check if any GDP values are null
df_melt['GDP_ff_bf'].isnull().sum()
###Output
_____no_output_____
###Markdown
Afghanistan and Albania are missing data, which show up as gaps in the results. Exercise - Part 1Your first task is to calculate mean GDP for each country and fill in missing values with the country mean. This is a bit tricky to do in pandas. Here are a few links that should be helpful:* https://pandas.pydata.org/pandas-docs/version/0.23/generated/pandas.DataFrame.groupby.html* https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.transform.html* https://pandas.pydata.org/pandas-docs/version/0.22/generated/pandas.DataFrame.fillna.html
###Code
# TODO: Use the df_melt dataframe and fill in missing values with a country's mean GDP
# If you aren't sure how to do this,
# look up something like "how to group data and fill in nan values in pandas" in a search engine
# Put the results in a new column called 'GDP_filled'.
df_melt['GDP_filled'] = df_melt.groupby('Country Name')['GDP'].transform(lambda x: x.fillna(x.mean()))
# Plot the results
plot_results('GDP_filled')
###Output
_____no_output_____
###Markdown
This is somewhat of an improvement. At least there is no missing data; however, because GDP tends to increase over time, the mean GDP is probably not the best way to fill in missing values for this particular case. Next, try using forward fill to deal with any missing values. Exercise - Part 2Use the fillna forward fill method to fill in the missing data. Here is the [documentation](https://pandas.pydata.org/pandas-docs/version/0.22/generated/pandas.DataFrame.fillna.html). As explained in the course video, forward fill takes previous values to fill in nulls.The pandas fillna method has a forward fill option. For example, if you wanted to use forward fill on the GDP dataset, you could execute `df_melt['GDP'].fillna(method='ffill')`. However, there are two issues with that code. 1. You want to first make sure the data is sorted by year2. You need to group the data by country name so that the forward fill stays within each countryWrite code to first sort the df_melt dataframe by year, then group by 'Country Name', and finally use the forward fill method.
###Code
# TODO: Use forward fill to fill in missing GDP values
# HINTS: use the sort_values(), groupby(), and fillna() methods
df_melt['GDP_ffill'] = df_melt.sort_values('year').groupby('Country Name')['GDP'].fillna(method='ffill')
# plot the results
plot_results('GDP_ffill')
###Output
_____no_output_____
###Markdown
This looks better at least for the Afghanistan data; however, the Albania data is still missing values. You can fill in the Albania data using back fill. That is what you'll do next. Exercise - Part 3This part is similar to Part 2, but now you will use backfill. Write code that backfills the missing GDP data.
###Code
# TODO: Use back fill to fill in missing GDP values
# HINTS: use the sort_values(), groupby(), and fillna() methods
df_melt['GDP_bfill'] = df_melt.sort_values('year').groupby('Country Name')['GDP'].fillna(method='bfill')
# plot the results
plot_results('GDP_bfill')
###Output
_____no_output_____
###Markdown
Conclusion In this case, the GDP data for all three countries is now complete. Note that forward fill did not fill all the Albania data because the first data entry in 1960 was NaN. Forward fill would try to fill the 1961 value with the NaN value from 1960.To completely fill the entire GDP data for all countries, you might have to run both forward fill and back fill. Note as well that the results will be slightly different depending on if you run forward fill first or back fill first. Afghanistan, for example, is missing data in the middle of the data set. Hence forward fill and back fill will have slightly different results.Run this next code cell to see if running both forward fill and back fill end up filling all the GDP NaN values.
###Code
# Run forward fill and backward fill on the GDP data
df_melt['GDP_ff_bf'] = df_melt.sort_values('year').groupby('Country Name')['GDP'].fillna(method='ffill').fillna(method='bfill')
# Check if any GDP values are null
df_melt['GDP_ff_bf'].isnull().sum()
###Output
_____no_output_____
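###Markdown
The instructions above stress sorting by year before the forward fill. If a country's rows are out of chronological order, forward fill propagates whatever happens to come earlier in the frame rather than the earlier year. A minimal sketch with made-up rows (the years and values are illustrative only):
###Code
import pandas as pd
import numpy as np
toy = pd.DataFrame({'year': [1962, 1960, 1961],
                    'gdp':  [np.nan, 5.0, np.nan]})
# Without sorting, the 1962 row comes first and has nothing before it, so it stays NaN
print(toy['gdp'].fillna(method='ffill').tolist())
# After sorting by year, 1961 and 1962 both inherit the 1960 value as intended
print(toy.sort_values('year')['gdp'].fillna(method='ffill').tolist())
###Output
_____no_output_____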
###Markdown
Imputing DataWhen a dataset has missing values, you can either remove those values or fill them in. In this exercise, you'll work with World Bank GDP (Gross Domestic Product) data to fill in missing values.
###Code
# run this code cell to read in the data set
import pandas as pd
df = pd.read_csv('../data/gdp_data.csv', skiprows=4)
df.drop('Unnamed: 62', axis=1, inplace=True)
# run this code cell to see what the data looks like
df.head()
# Run this code cell to check how many null values are in the data set
df.isnull().sum()
###Output
_____no_output_____
###Markdown
There are quite a few null values. Run the code below to plot the data for a few countries in the data set.
###Code
import matplotlib.pyplot as plt
%matplotlib inline
# put the data set into long form instead of wide
df_melt = pd.melt(df, id_vars=['Country Name', 'Country Code', 'Indicator Name', 'Indicator Code'], var_name='year', value_name='GDP')
# convert year to a date time
df_melt['year'] = pd.to_datetime(df_melt['year'])
def plot_results(column_name):
# plot the results for Afghanistan, Albania, and Honduras
fig, ax = plt.subplots(figsize=(8,6))
df_melt[(df_melt['Country Name'] == 'Afghanistan') |
(df_melt['Country Name'] == 'Albania') |
(df_melt['Country Name'] == 'Honduras')].groupby('Country Name').plot('year', column_name, legend=True, ax=ax)
ax.legend(labels=['Afghanistan', 'Albania', 'Honduras'])
plot_results('GDP')
###Output
_____no_output_____
###Markdown
Afghanistan and Albania are missing data, which show up as gaps in the results. Exercise - Part 1Your first task is to calculate mean GDP for each country and fill in missing values with the country mean. This is a bit tricky to do in pandas. Here are a few links that should be helpful:* https://pandas.pydata.org/pandas-docs/version/0.23/generated/pandas.DataFrame.groupby.html* https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.transform.html* https://pandas.pydata.org/pandas-docs/version/0.22/generated/pandas.DataFrame.fillna.html
###Code
# TODO: Use the df_melt dataframe and fill in missing values with a country's mean GDP
# If you aren't sure how to do this,
# look up something like "how to group data and fill in nan values in pandas" in a search engine
# Put the results in a new column called 'GDP_filled'.
# Select the GDP column before transforming so the fill is not applied to non-numeric columns
df_melt['GDP_filled'] = df_melt.groupby('Country Name')["GDP"].transform(lambda x: x.fillna(x.mean()))
# Plot the results
plot_results('GDP_filled')
###Output
_____no_output_____
###Markdown
This is somewhat of an improvement. At least there is no missing data; however, because GDP tends to increase over time, the mean GDP is probably not the best way to fill in missing values for this particular case. Next, try using forward fill to deal with any missing values. Exercise - Part 2Use the fillna forward fill method to fill in the missing data. Here is the [documentation](https://pandas.pydata.org/pandas-docs/version/0.22/generated/pandas.DataFrame.fillna.html). As explained in the course video, forward fill takes previous values to fill in nulls.The pandas fillna method has a forward fill option. For example, if you wanted to use forward fill on the GDP dataset, you could execute `df_melt['GDP'].fillna(method='ffill')`. However, there are two issues with that code. 1. You want to first make sure the data is sorted by year2. You need to group the data by country name so that the forward fill stays within each countryWrite code to first sort the df_melt dataframe by year, then group by 'Country Name', and finally use the forward fill method.
###Code
# TODO: Use forward fill to fill in missing GDP values
# HINTS: use the sort_values(), groupby(), and fillna() methods
df_melt['GDP_ffill'] = df_melt.sort_values(by="year").groupby('Country Name')["GDP"].fillna(method="ffill")
# plot the results
plot_results('GDP_ffill')
###Output
_____no_output_____
###Markdown
This looks better at least for the Afghanistan data; however, the Albania data is still missing values. You can fill in the Albania data using back fill. That is what you'll do next. Exercise - Part 3This part is similar to Part 2, but now you will use backfill. Write code that backfills the missing GDP data.
###Code
# TODO: Use back fill to fill in missing GDP values
# HINTS: use the sort_values(), groupby(), and fillna() methods
df_melt['GDP_bfill'] = df_melt.sort_values(by="year").groupby('Country Name')["GDP"].fillna(method="bfill")
# plot the results
plot_results('GDP_bfill')
###Output
_____no_output_____
###Markdown
Conclusion In this case, the GDP data for all three countries is now complete. Note that forward fill did not fill all the Albania data because the first data entry in 1960 was NaN. Forward fill would try to fill the 1961 value with the NaN value from 1960.To completely fill the entire GDP data for all countries, you might have to run both forward fill and back fill. Note as well that the results will be slightly different depending on if you run forward fill first or back fill first. Afghanistan, for example, is missing data in the middle of the data set. Hence forward fill and back fill will have slightly different results.Run this next code cell to see if running both forward fill and back fill end up filling all the GDP NaN values.
###Code
# Run forward fill and backward fill on the GDP data
df_melt['GDP_ff_bf'] = df_melt.sort_values('year').groupby('Country Name')['GDP'].fillna(method='ffill').fillna(method='bfill')
# Check if any GDP values are null
df_melt['GDP_ff_bf'].isnull().sum()
###Output
_____no_output_____
###Markdown
Imputing DataWhen a dataset has missing values, you can either remove those values or fill them in. In this exercise, you'll work with World Bank GDP (Gross Domestic Product) data to fill in missing values.
###Code
# run this code cell to read in the data set
import pandas as pd
df = pd.read_csv('../data/gdp_data.csv', skiprows=4)
df.drop('Unnamed: 62', axis=1, inplace=True)
# run this code cell to see what the data looks like
df.head()
# Run this code cell to check how many null values are in the data set
df.isnull().sum()
###Output
_____no_output_____
###Markdown
There are quite a few null values. Run the code below to plot the data for a few countries in the data set.
###Code
import matplotlib.pyplot as plt
%matplotlib inline
# put the data set into long form instead of wide
df_melt = pd.melt(df, id_vars=['Country Name', 'Country Code', 'Indicator Name', 'Indicator Code'], var_name='year', value_name='GDP')
# convert year to a date time
df_melt['year'] = pd.to_datetime(df_melt['year'])
def plot_results(column_name):
# plot the results for Afghanistan, Albania, and Honduras
fig, ax = plt.subplots(figsize=(8,6))
df_melt[(df_melt['Country Name'] == 'Afghanistan') |
(df_melt['Country Name'] == 'Albania') |
(df_melt['Country Name'] == 'Honduras')].groupby('Country Name').plot('year', column_name, legend=True, ax=ax)
ax.legend(labels=['Afghanistan', 'Albania', 'Honduras'])
plot_results('GDP')
###Output
_____no_output_____
###Markdown
Imputing DataWhen a dataset has missing values, you can either remove those values or fill them in. In this exercise, you'll work with World Bank GDP (Gross Domestic Product) data to fill in missing values.
###Code
# run this code cell to read in the data set
import pandas as pd
df = pd.read_csv('../data/gdp_data.csv', skiprows=4)
df.drop('Unnamed: 62', axis=1, inplace=True)
# run this code cell to see what the data looks like
df.head()
# Run this code cell to check how many null values are in the data set
df.isnull().sum()
###Output
_____no_output_____
###Markdown
There are quite a few null values. Run the code below to plot the data for a few countries in the data set.
###Code
import matplotlib.pyplot as plt
# put the data set into long form instead of wide
df_melt = pd.melt(df, id_vars=['Country Name', 'Country Code', 'Indicator Name', 'Indicator Code'], var_name='year', value_name='GDP')
# convert year to a date time
df_melt['year'] = pd.to_datetime(df_melt['year'])
def plot_results(column_name):
# plot the results for Afghanistan, Albania, and Honduras
fig, ax = plt.subplots(figsize=(8,6))
df_melt[(df_melt['Country Name'] == 'Afghanistan') |
(df_melt['Country Name'] == 'Albania') |
(df_melt['Country Name'] == 'Honduras')].groupby('Country Name').plot('year', column_name, legend=True, ax=ax)
ax.legend(labels=['Afghanistan', 'Albania', 'Honduras'])
plot_results('GDP')
###Output
_____no_output_____
###Markdown
Afghanistan and Albania are missing data, which show up as gaps in the results. Exercise - Part 1Your first task is to calculate mean GDP for each country and fill in missing values with the country mean. This is a bit tricky to do in pandas. Here are a few links that should be helpful:* https://pandas.pydata.org/pandas-docs/version/0.23/generated/pandas.DataFrame.groupby.html* https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.transform.html* https://pandas.pydata.org/pandas-docs/version/0.22/generated/pandas.DataFrame.fillna.html
###Code
# TODO: Use the df_melt dataframe and fill in missing values with a country's mean GDP
# If you aren't sure how to do this,
# look up something like "how to group data and fill in nan values in pandas" in a search engine
# Put the results in a new column called 'GDP_filled'.
df_melt['GDP_filled'] = df_melt.groupby('Country Name')['GDP'].transform(lambda x: x.fillna(x.mean()))
# Plot the results
plot_results('GDP_filled')
###Output
_____no_output_____
###Markdown
This is somewhat of an improvement. At least there is no missing data; however, because GDP tends to increase over time, the mean GDP is probably not the best way to fill in missing values for this particular case. Next, try using forward fill to deal with any missing values. Exercise - Part 2Use the fillna forward fill method to fill in the missing data. Here is the [documentation](https://pandas.pydata.org/pandas-docs/version/0.22/generated/pandas.DataFrame.fillna.html). As explained in the course video, forward fill takes previous values to fill in nulls.The pandas fillna method has a forward fill option. For example, if you wanted to use forward fill on the GDP dataset, you could execute `df_melt['GDP'].fillna(method='ffill')`. However, there are two issues with that code. 1. You want to first make sure the data is sorted by year2. You need to group the data by country name so that the forward fill stays within each countryWrite code to first sort the df_melt dataframe by year, then group by 'Country Name', and finally use the forward fill method.
###Code
# TODO: Use forward fill to fill in missing GDP values
# HINTS: use the sort_values(), groupby(), and fillna() methods
df_melt['GDP_ffill'] = df_melt.sort_values('year').groupby('Country Name')['GDP'].fillna(method='ffill')
# plot the results
plot_results('GDP_ffill')
###Output
_____no_output_____
###Markdown
This looks better at least for the Afghanistan data; however, the Albania data is still missing values. You can fill in the Albania data using back fill. That is what you'll do next. Exercise - Part 3. This part is similar to Part 2, but now you will use backfill. Write code that backfills the missing GDP data.
###Code
# TODO: Use back fill to fill in missing GDP values
# HINTS: use the sort_values(), groupby(), and fillna() methods
df_melt['GDP_bfill'] = df_melt.sort_values('year').groupby('Country Name')['GDP'].fillna(method='ffill').fillna(method='bfill')
# plot the results
plot_results('GDP_bfill')
###Output
_____no_output_____
###Markdown
Conclusion In this case, the GDP data for all three countries is now complete. Note that forward fill did not fill all the Albania data because the first data entry in 1960 was NaN. Forward fill would try to fill the 1961 value with the NaN value from 1960. To completely fill the entire GDP data for all countries, you might have to run both forward fill and back fill. Note as well that the results will be slightly different depending on whether you run forward fill first or back fill first. Afghanistan, for example, is missing data in the middle of the data set. Hence forward fill and back fill will have slightly different results. Run this next code cell to see if running both forward fill and back fill ends up filling all the GDP NaN values.
###Code
# Run forward fill and backward fill on the GDP data
df_melt['GDP_ff_bf'] = df_melt.sort_values('year').groupby('Country Name')['GDP'].fillna(method='ffill').fillna(method='bfill')
# Check if any GDP values are null
df_melt['GDP_ff_bf'].isnull().sum()
###Output
_____no_output_____
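###Markdown
As a quick aside, a minimal sketch on a toy series of why the fill order matters when a gap sits between two known values (the expected outputs are shown as comments):
###Code
import pandas as pd

s = pd.Series([None, 1.0, None, 3.0])
print(s.fillna(method='ffill').fillna(method='bfill').tolist())  # [1.0, 1.0, 1.0, 3.0]
print(s.fillna(method='bfill').fillna(method='ffill').tolist())  # [1.0, 1.0, 3.0, 3.0]
###Output
_____no_output_____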
###Markdown
Imputing Data. When a dataset has missing values, you can either remove those values or fill them in. In this exercise, you'll work with World Bank GDP (Gross Domestic Product) data to fill in missing values.
###Code
# run this code cell to read in the data set
import pandas as pd
df = pd.read_csv('../data/gdp_data.csv', skiprows=4)
df.drop('Unnamed: 62', axis=1, inplace=True)
# run this code cell to see what the data looks like
df.head()
# Run this code cell to check how many null values are in the data set
df.isnull().sum()
###Output
_____no_output_____
###Markdown
There are quite a few null values. Run the code below to plot the data for a few countries in the data set.
###Code
import matplotlib.pyplot as plt
%matplotlib inline
# put the data set into long form instead of wide
df_melt = pd.melt(df, id_vars=['Country Name', 'Country Code', 'Indicator Name', 'Indicator Code'], var_name='year', value_name='GDP')
# convert year to a date time
df_melt['year'] = pd.to_datetime(df_melt['year'])
def plot_results(column_name):
# plot the results for Afghanistan, Albania, and Honduras
fig, ax = plt.subplots(figsize=(8,6))
df_melt[(df_melt['Country Name'] == 'Afghanistan') |
(df_melt['Country Name'] == 'Albania') |
(df_melt['Country Name'] == 'Honduras')].groupby('Country Name').plot('year', column_name, legend=True, ax=ax)
ax.legend(labels=['Afghanistan', 'Albania', 'Honduras'])
plot_results('GDP')
###Output
_____no_output_____
###Markdown
Afghanistan and Albania are missing data, which show up as gaps in the results. Exercise - Part 1. Your first task is to calculate the mean GDP for each country and fill in missing values with the country mean. This is a bit tricky to do in pandas. Here are a few links that should be helpful: * https://pandas.pydata.org/pandas-docs/version/0.23/generated/pandas.DataFrame.groupby.html * https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.transform.html * https://pandas.pydata.org/pandas-docs/version/0.22/generated/pandas.DataFrame.fillna.html
###Code
# TODO: Use the df_melt dataframe and fill in missing values with a country's mean GDP
# If you aren't sure how to do this,
# look up something like "how to group data and fill in nan values in pandas" in a search engine
# Put the results in a new column called 'GDP_filled'.
df_melt['GDP_filled'] = df_melt.groupby("Country Name")['GDP'].transform(lambda x: x.fillna(x.mean()))
# Plot the results
plot_results('GDP_filled')
###Output
_____no_output_____
###Markdown
This is somewhat of an improvement. At least there is no missing data; however, because GDP tends to increase over time, the mean GDP is probably not the best way to fill in missing values for this particular case. Next, try using forward fill to deal with any missing values. Exercise - Part 2. Use the fillna forward fill method to fill in the missing data. Here is the [documentation](https://pandas.pydata.org/pandas-docs/version/0.22/generated/pandas.DataFrame.fillna.html). As explained in the course video, forward fill takes previous values to fill in nulls. The pandas fillna method has a forward fill option. For example, if you wanted to use forward fill on the GDP dataset, you could execute `df_melt['GDP'].fillna(method='ffill')`. However, there are two issues with that code: 1. you want to first make sure the data is sorted by year, and 2. you need to group the data by country name so that the forward fill stays within each country. Write code to first sort the df_melt dataframe by year, then group by 'Country Name', and finally use the forward fill method.
###Code
# TODO: Use forward fill to fill in missing GDP values
# HINTS: use the sort_values(), groupby(), and fillna() methods
df_melt = df_melt.sort_values('year')
df_melt['GDP_ffill'] = df_melt.groupby('Country Name')['GDP'].transform(lambda x: x.fillna(method='ffill'))
# plot the results
plot_results('GDP_ffill')
###Output
_____no_output_____
###Markdown
This looks better at least for the Afghanistan data; however, the Albania data is still missing values. You can fill in the Albania data using back fill. That is what you'll do next. Exercise - Part 3. This part is similar to Part 2, but now you will use backfill. Write code that backfills the missing GDP data.
###Code
# TODO: Use back fill to fill in missing GDP values
# HINTS: use the sort_values(), groupby(), and fillna() methods
df_melt['GDP_bfill'] = df_melt.groupby('Country Name')['GDP'].transform(lambda x: x.fillna(method='bfill'))
# plot the results
plot_results('GDP_bfill')
###Output
_____no_output_____
###Markdown
Conclusion In this case, the GDP data for all three countries is now complete. Note that forward fill did not fill all the Albania data because the first data entry in 1960 was NaN. Forward fill would try to fill the 1961 value with the NaN value from 1960. To completely fill the entire GDP data for all countries, you might have to run both forward fill and back fill. Note as well that the results will be slightly different depending on whether you run forward fill first or back fill first. Afghanistan, for example, is missing data in the middle of the data set. Hence forward fill and back fill will have slightly different results. Run this next code cell to see if running both forward fill and back fill ends up filling all the GDP NaN values.
###Code
# Run forward fill and backward fill on the GDP data
df_melt['GDP_ff_bf'] = df_melt.sort_values('year').groupby('Country Name')['GDP'].fillna(method='ffill').fillna(method='bfill')
# Check if any GDP values are null
df_melt['GDP_ff_bf'].isnull().sum()
###Output
_____no_output_____ |
Dynamic Programming/1008/474. Ones and Zeroes.ipynb | ###Markdown
Problem: you are given an array strs whose strings contain only 0 and 1, plus two integers m and n. Your task is to find the maximum number of strings that can be formed with the given m 0s and n 1s; each 0 and 1 may be used at most once. Requirement: choose as many elements from the array as possible, but use at most m 0s and n 1s. Example 1: Input: strs = ["10","0001","111001","1","0"], m = 5, n = 3 Output: 4 Explanation: In total 4 strings can be formed using 5 0s and 3 1s: "10", "0001", "1", "0". Example 2: Input: strs = ["10","0","1"], m = 1, n = 1 Output: 2 Explanation: You could form "10", but then you'd have nothing left. Better form "0" and "1". Constraints: 1. 1 <= strs.length <= 600 2. 1 <= strs[i].length <= 100 3. strs[i] consists only of digits '0' and '1'. 4. 1 <= m, n <= 100
###Code
from collections import Counter
class Solution:
    def findMaxForm(self, strs, m: int, n: int) -> int:
        # first draft: 2-D 0/1 knapsack where dp[i][j] is the max number of
        # strings formable with at most i ones and j zeros
        dp = [[0] * (m + 1) for _ in range(n + 1)]
        for s in strs:
            ones = Counter(s)['1']
            zeros = Counter(s)['0']
            for z in range(n, ones - 1, -1):
                for o in range(m, zeros - 1, -1):
                    dp[z][o] = max(dp[z][o], dp[z - ones][o - zeros] + 1)
        return dp[n][m]
class Solution:
def findMaxForm(self, strs, m: int, n: int) -> int:
dp = [[0] * (m + 1) for _ in range(n + 1)]
for s in strs:
z_s = s.count('0')
o_s = s.count('1')
for i in range(n, o_s - 1, -1):
for j in range(m, z_s - 1, -1):
dp[i][j] = max(dp[i][j], dp[i - o_s][j - z_s] + 1)
print(dp, z_s, o_s)
return dp[-1][-1]
solution = Solution()
solution.findMaxForm(["10","0001","111001","1","0"], 5, 3)
# Hand trace of the dp table for Example 1 (rows = 1s capacity 0..n, cols = 0s capacity 0..m).
# Final table after processing all five strings (answer = dp[3][5] = 4):
# [[0, 1, 1, 1, 1, 1],
#  [1, 2, 2, 2, 2, 2],
#  [1, 2, 3, 3, 3, 3],
#  [1, 2, 3, 3, 3, 4]]
# Initial (empty) table, columns indexed 0..5:
#      0  1  2  3  4  5
#   0 [0, 0, 0, 0, 0, 0],
#   1 [0, 0, 0, 0, 0, 0],
#   2 [0, 0, 0, 0, 0, 0],
#   3 [0, 0, 0, 0, 0, 0]
# After processing "10":
# [[0, 0, 0, 0, 0, 0],
#  [0, 1, 1, 1, 1, 1],
#  [0, 1, 1, 1, 1, 1],
#  [0, 1, 1, 1, 1, 1]]
# After processing "0001":
# [[0, 0, 0, 0, 0, 0],
#  [0, 1, 1, 1, 1, 1],
#  [0, 1, 1, 1, 2, 2],
#  [0, 1, 1, 1, 2, 2]]
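# quick sanity check against Example 2 from the problem statement (expected result: 2)
solution.findMaxForm(["10", "0", "1"], 1, 1)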
###Output
_____no_output_____ |
notebooks/05a_prediction_notebook.ipynb | ###Markdown
Load model
###Code
artifact_path = os.path.join(TRACKING_URI.replace("file://", ""),
EXPERIMENT_ID,
RUN_ID,
'artifacts')
# Load data pkl
data_path = os.path.join(artifact_path, LOG_DATA_PKL)
with open(data_path, 'rb') as handle:
data_pkl = pickle.load(handle)
# Load model pkl
model_path = os.path.join(artifact_path, LOG_MODEL_PKL)
with open(model_path, 'rb') as handle:
model_pkl = pickle.load(handle)
model = model_pkl["model_object"]
model
###Output
_____no_output_____
###Markdown
Predict sample entry
###Code
CLUSTERS_YAML_PATH = "../data/processed/features_skills_clusters_description.yaml"
CLUSTERS_YAML_PATH
with open(CLUSTERS_YAML_PATH, "r") as stream:
clusters_config = yaml.safe_load(stream)
molten_clusters = [(cluster_name, cluster_skill)
for cluster_name, cluster_skills in clusters_config.items()
for cluster_skill in cluster_skills]
clusters_df = pd.DataFrame(molten_clusters, columns=["cluster_name", "skill"])
###Output
_____no_output_____
###Markdown
Recreate cluster features
###Code
sample_skills = ['Pandas', 'TensorFlow', 'Torch/PyTorch', 'Python', 'Keras']
sample_clusters = clusters_df.copy()
sample_clusters["sample_skills"] = sample_clusters["skill"].isin(sample_skills)
cluster_features = sample_clusters.groupby("cluster_name")["sample_skills"].sum()
cluster_features
###Output
_____no_output_____
###Markdown
Create OneHotEncoded skills
###Code
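# skill columns are the model features that are not cluster aggregates;
# the sample skills are one-hot encoded against them below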
features_names = pd.Series(data_pkl["features_names"])
skills_names = features_names[~features_names.isin(cluster_features.index)]
sample_skills
skills_names
skills_names = features_names[~features_names.isin(cluster_features.index)]
ohe_skills = pd.Series(skills_names.isin(sample_skills).astype(int).tolist(),
index=skills_names)
ohe_skills
###Output
_____no_output_____
###Markdown
Combine features
###Code
features = pd.concat([ohe_skills, cluster_features])
features = features[data_pkl["features_names"]]
features
###Output
_____no_output_____
###Markdown
Predict
###Code
predictions = model.predict_proba([features.values])
positive_probs = [prob[0][1] for prob in predictions]
pd.Series(positive_probs,
index=data_pkl["targets_names"]).sort_values(ascending=False)
###Output
_____no_output_____ |
examples/drafts/trf_mudensity.ipynb | ###Markdown
MU Density from trf logfile. PyMedPhys exposes tools to read trf logfiles into objects which can be easily passed around. In this example we will read a logfile directly from disk into a `Delivery` object, then use the values within this object to calculate an MU Density.
###Code
from glob import glob
import numpy as np
import matplotlib.pyplot as plt
import pymedphys
###Output
_____no_output_____
###Markdown
For the purpose of this exercise one of the log files used for constancy testing within PyMedPhys will be used. Any trf log file path can be provided in the string below.
###Code
logfile_path_search_string = '../../../tests/fileformats/trf/data/*/*VMAT*.trf'
example_logfile_from_tests = glob(logfile_path_search_string)[0]
example_logfile_from_tests
###Output
_____no_output_____
###Markdown
Delivery Data from a Log File. `Delivery` is an object within PyMedPhys which holds monitor units, gantry angles, collimator angles, as well as MLC and Jaw positions. It can be parameterised by control points, or by time interval. This particular object is a likely candidate for being adjusted in the future. Helper functions are provided within PyMedPhys to extract `Delivery` from Mosaiq SQL queries as well as log files. In the future DICOM RT plan files are also expected to be supported. The API for creating and interacting with `Delivery` is likely to change in the future.
###Code
delivery_data = pymedphys.Delivery.from_logfile(example_logfile_from_tests)
mu = delivery_data.monitor_units
mlc = delivery_data.mlc
jaw = delivery_data.jaw
###Output
_____no_output_____
###Markdown
Calculating and displaying the MU Density. Once the MU, MLC, and Jaw parameters are known, these can be used to calculate an MU Density.
###Code
first_10_seconds = slice(0, 10 * 25, 1)
mu_density = pymedphys.mudensity.calculate(
mu[first_10_seconds], mlc[first_10_seconds],
jaw[first_10_seconds])
grid = pymedphys.mudensity.grid()
plt.figure(figsize=(6,4))
pymedphys.mudensity.display(grid, mu_density)
plt.xlim([-60, 60])
plt.ylim([50, -60])
###Output
_____no_output_____ |
project-management/seed-contributors.ipynb | ###Markdown
Seed contributors. By Ben Welsh. Seeds a master list of California Civic Data Coalition participants with open-source contributors drawn from the GitHub API. Last harvested on Dec. 18, 2016, [using a Python script that interacts with GitHub's API](https://github.com/california-civic-data-coalition/django-calaccess-raw-data/blob/master/example/network-analysis/contributors.csv).
###Code
import pandas as pd
import numpy as np
###Output
_____no_output_____
###Markdown
Load in the data
###Code
table = pd.read_csv("./input/contributors.csv")
table.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 225 entries, 0 to 224
Data columns (total 9 columns):
repo 225 non-null object
login 225 non-null object
name 175 non-null object
email 108 non-null object
company 115 non-null object
location 145 non-null object
bio 55 non-null object
avatar_url 225 non-null object
contributions 225 non-null int64
dtypes: int64(1), object(8)
memory usage: 15.9+ KB
###Markdown
Clean up strings
###Code
table.replace(np.nan, "", inplace=True)
table.login = table.login.map(str.strip).str.lower()
table.company = table.company.map(str.strip)
table.location = table.location.map(str.strip)
table.avatar_url = table.avatar_url.map(str.strip)
###Output
_____no_output_____
###Markdown
Merge in corrections
###Code
corrections = pd.read_csv("./input/contributors-corrections.csv")
table = table.merge(corrections, on="login", how="left")
table.name = table.corrected_name.fillna(table.name)
table.company = table.corrected_company.fillna(table.company)
table.location = table.corrected_location.fillna(table.location)
table.email = table.corrected_email.fillna(table.email)
table.drop('corrected_name', axis=1, inplace=True)
table.drop('corrected_company', axis=1, inplace=True)
table.drop('corrected_location', axis=1, inplace=True)
table.drop('corrected_email', axis=1, inplace=True)
###Output
_____no_output_____
###Markdown
Merge some common variations
###Code
table.loc[table.location.isin(['Los Angeles', 'Los Angeles, California']), 'location'] = 'Los Angeles, CA'
table.loc[table.location.isin(['Washington D.C.', 'District of Columbia', 'Washington, D.C.']), 'location'] = 'Washington, DC'
table.loc[table.location == 'Chicago', 'location'] = 'Chicago, IL'
table.loc[table.location == 'San Francisco', 'location'] = 'San Francisco, CA'
table.loc[table.location == 'Palo Alto', 'location'] = 'Palo Alto, CA'
table.loc[table.location == 'Spokane, Wash.', 'location'] = 'Spokane, WA'
table.loc[table.location == 'Hackney, London', 'location'] = 'London, UK'
table.loc[table.location.isin(['Brooklyn', 'Brooklyn NY', 'Brooklyn, NY', 'NYC', 'New York']), 'location'] = 'New York, NY'
table.loc[table.location == 'Columbia, Missouri', 'location'] = 'Columbia, MO'
table.loc[table.location == 'Tucson, Arizona', 'location'] = 'Tucson, AZ'
table.loc[table.location == 'Toronto', 'location'] = 'Toronto, Canada'
table.loc[table.location == 'Salt Lake City, Utah', 'location'] = 'Salt Lake City, UT'
table.loc[table.location == 'Houston', 'location'] = 'Houston, TX'
table.loc[table.location == 'Orange County, Calif.', 'location'] = 'Houston, TX'
table.company = table.company.str.replace("The ", "")
table.loc[table.company == 'Sunnmorsposten', 'company'] = 'Sunnmørsposten'
table.loc[table.company == 'Wall Street Journal.', 'company'] = 'Wall Street Journal'
table.loc[table.company == 'Northwestern University Knight Lab', 'company'] = 'Northwestern'
table.loc[table.company == 'Investigative News Network', 'company'] = 'Institute for Nonprofit News'
table.loc[table.company == 'Stanford', 'company'] = 'Stanford University'
table.loc[table.company == 'Missouri School of Journalism', 'company'] = 'University of Missouri'
table.loc[table.company == 'University of Iowa School of Journalism', 'company'] = 'University of Iowa'
table.loc[table.company == 'Knight-Mozilla fellow 2015', 'company'] = 'Mozilla OpenNews'
table.loc[table.company == 'Knight-Mozilla Fellow', 'company'] = 'Mozilla OpenNews'
###Output
_____no_output_____
###Markdown
Output unique list
###Code
columns = [
"login",
"name",
"email",
"company",
"location",
"bio",
"avatar_url"
]
unique_contributors = table.groupby(columns, as_index=False).contributions.sum()
login_list = [
'palewire',
'gordonje',
'sahilchinoy',
'aboutaaron',
'armendariz',
'cephillips',
'jlagetz'
]
unique_contributors['in_coalition'] = unique_contributors.login.isin(login_list)
###Output
_____no_output_____
###Markdown
California v. everybody
###Code
unique_contributors['in_california'] = False
unique_contributors.loc[unique_contributors.location.str.endswith(", CA"), 'in_california'] = True
###Output
_____no_output_____
###Markdown
Count the different states and countries
###Code
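# heuristic used below: a location ending in a two-letter token (e.g. ", CA") is treated
# as a US state, while a longer trailing token (e.g. ", Canada") is treated as a country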
unique_contributors.loc[unique_contributors.location == '', 'in_usa'] = np.NaN
unique_contributors.loc[unique_contributors.location.str.contains(r", \w{2}$"), 'in_usa'] = True
unique_contributors.loc[unique_contributors.location.str.contains(r", \w{3,}$"), 'in_usa'] = False
def split_state(val):
if val == np.NaN:
return val
elif val == "":
return np.NaN
else:
try:
parent = val.split(", ")[1]
except IndexError:
return val
if len(parent) == 2:
return parent
else:
return np.NaN
unique_contributors['state'] = unique_contributors['location'].apply(split_state)
def split_country(val):
if val == np.NaN:
return val
elif val == "":
return np.NaN
else:
try:
parent = val.split(", ")[1]
except IndexError:
return val
if len(parent) == 2:
return "United States of America"
elif len(parent) > 2:
return parent
else:
return np.NaN
unique_contributors['country'] = unique_contributors['location'].apply(split_country)
###Output
_____no_output_____
###Markdown
Output data
###Code
unique_contributors.to_csv("./output/participants.csv", index=False)
###Output
_____no_output_____ |
mta_2021_cleaning.ipynb | ###Markdown
Import
###Code
#engine = create_engine('sqlite:///Data/raw/mta_data.db')
#mta = pd.read_sql('SELECT * FROM mta_data WHERE (TIME <"08" OR TIME >="20") AND (substr(DATE,1,2) =="06" OR substr(DATE,1,2) =="07" OR substr(DATE,1,2) =="08") AND (substr(DATE,9,2) =="21");', engine)
#mta.head()
mta = pd.read_csv('Data/raw/2021.csv')
zip_boro_station = pd.read_csv('Data/Processed/zip_boro_geo.csv',dtype={'ZIP':'object'})
###Output
_____no_output_____
###Markdown
Merge to filter for stations in Brooklyn and Manhattan only
###Code
mta['STATION'] = (mta.STATION.str.strip().str.replace('AVE','AV')
.str.replace('STREET','ST').str.replace('COLLEGE','CO')
.str.replace('SQUARE','SQ').replace('STS','ST').replace('/','-'))
df = (mta.merge(zip_boro_station.loc[:,['STATION','BOROUGH']], on='STATION'))
df = df[(df.BOROUGH=='Manhattan')|(df.BOROUGH=='Brooklyn')]
# Convert to datetime
df["DATE_TIME"] = pd.to_datetime(df.DATE + " " + df.TIME, format="%m/%d/%Y %H:%M:%S")
df["DATE"] = pd.to_datetime(df.DATE, format="%m/%d/%Y")
df["TIME"] = pd.to_datetime(df.TIME)
###Output
_____no_output_____
###Markdown
Drop Duplicates. It seems the RECOVR AUD entries are irregular, so we will drop them when they have a REGULAR homologue (or duplicate).
###Code
# Check for duplicates
duplicates_count = (df.groupby(["C/A", "UNIT", "SCP", "STATION", "DATE_TIME"])
.ENTRIES.count()
.reset_index()
.sort_values("ENTRIES", ascending=False))
print(duplicates_count.value_counts('ENTRIES'))
# Drop duplicates
df.sort_values(["C/A", "UNIT", "SCP", "STATION", "DATE_TIME"],
inplace=True, ascending=False)
df.drop_duplicates(subset=["C/A", "UNIT", "SCP", "STATION", "DATE_TIME"], inplace=True)
df.groupby(["C/A", "UNIT", "SCP", "STATION", "DATE_TIME"]).ENTRIES.count().value_counts()
# Drop Desc Column. To prevent errors in multiple run of cell, errors on drop is ignored
df = df.drop(["DESC","EXITS"], axis=1, errors="ignore")
###Output
ENTRIES
1 1195194
2 3681
dtype: int64
###Markdown
Get late-night entries only Look at timestamps, we want the late-night entries instead of hourly cumulative. Compare the first stamp of the evening against the last stamp of the early morning, dropping the day we don't have a comparison for (last).
###Code
evening = df[df.TIME.dt.time > dt.time(19,59)]
morning = df[df.TIME.dt.time < dt.time(4,1)]
first_stamp = (evening.groupby(["C/A", "UNIT", "SCP", "STATION", "DATE"])
.ENTRIES.first())
last_stamp = (morning.groupby(["C/A", "UNIT", "SCP", "STATION", "DATE"])
.ENTRIES.last())
timestamps = pd.merge(first_stamp, last_stamp, on=["C/A", "UNIT", "SCP", "STATION", "DATE"], suffixes=['_CUM_AM','_CUM_PM'])
timestamps.reset_index(inplace=True)
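# pair each date's evening stamp with the following date's early-morning stamp (within each turnstile)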
timestamps['ENTRIES_CUM_AM'] = (timestamps.groupby(["C/A", "UNIT", "SCP", "STATION"])
.ENTRIES_CUM_AM.shift(-1))
# Drop Sundays, where we don't have data from the next morning.
timestamps.dropna(subset=['ENTRIES_CUM_AM'], inplace=True)
timestamps.head()
###Output
_____no_output_____
###Markdown
Get evening entries instead of cumulative counts. We take the absolute value, since some of the turnstiles count backwards.
###Code
timestamps['ENTRIES'] = abs(timestamps.ENTRIES_CUM_AM - timestamps.ENTRIES_CUM_PM)
timestamps.head()
###Output
_____no_output_____
###Markdown
Get weekend data only. We are only interested in the weekends, so let's filter for those. I am doing this now so that when we filter for outliers the mean will be more accurate (weekday entries skew the data).
###Code
timestamps['DAY_WEEK'] = timestamps.DATE.dt.dayofweek
weekend = timestamps[timestamps.DAY_WEEK.isin([3,4,5])]
weekend.head()
weekend.sort_values('ENTRIES', ascending=False).head()
###Output
_____no_output_____
###Markdown
Cleaning
###Code
# Cleaning Functions
def max_counter(row, threshold):
    # flip negative counters, then cap implausibly large counts at the
    # turnstile median (or at 0 if even the median exceeds the threshold)
    counter = row['ENTRIES']
    if counter < 0:
        counter = -counter
    if counter > threshold:
        counter = row['MEDIAN']
    if counter > threshold:
        counter = 0
    return counter

def outl_to_med(x):
    # keep the original value for non-outliers, substitute the median for outliers
    res = (x['ENTRIES'] * x['~OUTLIER']) + (x['MEDIAN'] * x['OUTLIER'])
    return res
# Replace outliers with the turnstile median
weekend['MEDIAN'] = (weekend.groupby(['C/A','UNIT','SCP','STATION'])
.ENTRIES.transform(lambda x: x.median()))
weekend['OUTLIER'] = (weekend.groupby(['C/A','UNIT','SCP','STATION'])
.ENTRIES.transform(lambda x: zscore(x)>3))
weekend['~OUTLIER'] = weekend.OUTLIER.apply(lambda x: not x)
weekend['ENTRIES'] = weekend.apply(outl_to_med, axis=1)
# There are still irregular values, set them to the updated median.
# If the median is still too high, replace with 0.
weekend['MEDIAN'] = (weekend.groupby(['C/A','UNIT','SCP','STATION'])
.ENTRIES.transform(lambda x: x.median()))
weekend['ENTRIES'] = weekend.apply(max_counter, axis=1, threshold=3500)
print(weekend.MEDIAN.max())
weekend[weekend.ENTRIES>3000].shape
weekend.sort_values('ENTRIES', ascending=False).head()
###Output
_____no_output_____
###Markdown
Drop unnecessary columns
###Code
weekend.drop(['MEDIAN','OUTLIER','~OUTLIER', 'ENTRIES_CUM_AM', 'ENTRIES_CUM_PM'], axis=1, inplace=True, errors='ignore')
###Output
_____no_output_____
###Markdown
Sanity Check: visualize to check for irregularities
###Code
import matplotlib.pyplot as plt
import seaborn as sns
weekend.info()
weekend['WEEK'] = weekend.DATE.dt.week
per_week_station = weekend.groupby(['STATION','WEEK'])['ENTRIES'].sum().reset_index()
per_week_station.rename(columns={'ENTRIES':"WEEKEND_ENTRIES"}, inplace=True)
sns.relplot(x='WEEK', y='WEEKEND_ENTRIES', data=per_week_station, kind='line', hue='STATION')
plt.show()
# Something is happening on week 26
# Upon closer inspection we can see that it corresponds with 4th July weekend.
# Many New Yorkers leave the city for that date, so it makes sense.
weekend[weekend.WEEK==26].head()
###Output
_____no_output_____
###Markdown
Export
###Code
weekend.to_csv('Data/Processed/weekend_21.csv', index=False)
weekend_geo = weekend.merge(zip_boro_station, on='STATION')
weekend_geo.to_csv('Data/Processed/weekend_geo_21.csv', index=False)
# Export the total by station with its corresponding coordinates.
station_totals = weekend.groupby('STATION').ENTRIES.sum()\
.reset_index().merge(zip_boro_station, on='STATION')
station_totals.rename(columns={'ENTRIES':'TOTAL_ENTRIES'}, inplace=True)
station_totals.to_csv('Data/Processed/totals_geo_21.csv', index=False)
###Output
_____no_output_____ |
3. YABAI.ipynb | ###Markdown
Complex representations. **Note in retrospect**: here I did not (yet) load the embedding weights directly into Embedding layers, but used the most primitive preprocessing method based on embeddings. Rest assured, in the later stages everything is used as it should be. And we will continue, as usual, with pretrained embeddings. First, we use the GoogleNews W2V model.
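For reference, a minimal sketch of how pretrained vectors could later be wired into a Keras `Embedding` layer; the `vocab_size` and `embedding_matrix` below are placeholder assumptions that would normally be built from a tokenizer's word index:
###Code
import numpy as np
from tensorflow.keras.layers import Embedding

vocab_size, emb_dim = 50000, 300                    # assumed sizes
embedding_matrix = np.zeros((vocab_size, emb_dim))  # placeholder; rows would be filled from a KeyedVectors model
emb_layer = Embedding(input_dim=vocab_size, output_dim=emb_dim,
                      weights=[embedding_matrix], trainable=False)
###Output
_____no_output_____
###Markdown
With that noted, load the pretrained GoogleNews vectors: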
###Code
import numpy as np
import gensim
import matplotlib.pyplot as plt
from tqdm import tqdm
from sklearn.preprocessing import normalize
from sklearn.model_selection import train_test_split

en_w2v = gensim.models.KeyedVectors.load_word2vec_format("embeddings/GoogleNews-vectors-negative300.bin", binary=True)
###Output
_____no_output_____
###Markdown
First, let's represent each text simply as the mean of the vectors of its words.
###Code
en_w2v['dog'].shape
def vectorize(text):
vectors = []
for word in text.split():
try:
vectors.append(en_w2v[word])
except KeyError:
vectors.append(np.zeros((300,)))
return np.mean(vectors, axis=0)
X_vectors = normalize(np.array([vectorize(text) for text in tqdm(X_text)])).reshape(48000,300,1)
X_vectors.shape, y.shape
def train_dev_test(X, y, seed=42):
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.1, random_state=seed)
X_val, X_test, y_val, y_test = train_test_split(X_test, y_test, test_size=0.5, random_state=seed)
return X_train, X_val, X_test, y_train, y_val, y_test
X_train, X_val, X_test, y_train, y_val, y_test = train_dev_test(X_vectors, y, 42)
###Output
_____no_output_____
###Markdown
Let's try a simple feed-forward network on this representation.
###Code
import tensorflow as tf
from sklearn.preprocessing import LabelEncoder
import tensorflow.keras as keras
from tensorflow.keras import Model
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Input, Dense, Dropout, BatchNormalization
from tensorflow.keras.callbacks import EarlyStopping, ModelCheckpoint
seed=42
def build_ff(emb_size, dropout_rate, num_classes):
word_input = Input(shape=(emb_size,1))
z = Dense(2048, activation='relu')(word_input)
z = BatchNormalization(trainable=True)(z)
z = Dropout(dropout_rate)(z)
z = Dense(1024, activation='relu')(word_input)
z = BatchNormalization(trainable=True)(z)
z = Dropout(dropout_rate)(z)
z = Dense(512, activation='relu')(word_input)
z = BatchNormalization(trainable=True)(z)
z = Dropout(dropout_rate)(z)
z = Dense(256, activation='relu')(z)
z = BatchNormalization(trainable=True)(z)
z = Dropout(dropout_rate)(z)
y = Dense(num_classes, activation='softmax')(z)
model = Model(inputs=word_input, outputs=y)
return model
emb_size = 300
dropout_rate = 0.2
batch_size = 512
epochs = 100
num_classes = 3
model = build_ff(emb_size, dropout_rate, num_classes)
mc = ModelCheckpoint('checkpoints/best_ff.h5', monitor='val_loss', mode='auto', save_best_only=True)
earlystop = EarlyStopping(monitor='val_loss', patience=3)
model.compile(loss='sparse_categorical_crossentropy',
optimizer='adam',
metrics=['acc'])
print(model.summary())
history=model.fit(X_train, y_train,
batch_size=batch_size,
epochs=epochs,
callbacks = [mc, earlystop],
verbose=1,
validation_data=(X_val, y_val))
def plot_train_acc(history):
acc = history.history['acc']
val_acc = history.history['val_acc']
plot_epochs = range(1, len(acc) + 1)
plt.plot(plot_epochs, acc, 'r', label='Training acc')
plt.plot(plot_epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()
plt.show()
def plot_train_loss(history):
loss = history.history['loss']
val_loss = history.history['val_loss']
plot_epochs = range(1, len(loss) + 1)
plt.plot(plot_epochs, loss, 'r', label='Training loss')
plt.plot(plot_epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
model.load_weights('checkpoints/best_ff.h5')
score = model.evaluate(X_test, y_test, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])
###Output
Test loss: 0.7156716156005859
Test accuracy: 0.6933333
###Markdown
A plain feedforward network is not well suited to this task. Let's try a CNN.
###Code
from tensorflow.keras.layers import Flatten, Conv1D, MaxPooling1D
def build_cnn(emb_size, dropout_rate, num_classes):
word_input = Input(shape=(emb_size,1))
conv1 = Conv1D(26, 2, activation='relu')(word_input)
conv1 = MaxPooling1D(3)(conv1)
conv1 = Flatten()(conv1)
conv1 = BatchNormalization(trainable=True)(conv1)
z1 = Dense(256, activation='relu')(conv1)
z1 = BatchNormalization(trainable=True)(z1)
z1 = Dropout(dropout_rate)(z1)
z2 = Dense(128, activation='relu')(z1)
z2 = BatchNormalization(trainable=True)(z2)
z2 = Dropout(dropout_rate)(z2)
y = Dense(num_classes, activation='softmax')(z2)
model = Model(inputs=word_input, outputs=y)
return model
cnn = build_cnn(emb_size, dropout_rate, num_classes)
mc = ModelCheckpoint('checkpoints/best_cnn.h5', monitor='val_loss', mode='auto', save_best_only=True)
cnn.compile(loss='sparse_categorical_crossentropy',
optimizer='adam',
metrics=['acc'])
print(cnn.summary())
history=cnn.fit(X_train, y_train,
batch_size=batch_size,
epochs=epochs,
callbacks = [mc, earlystop],
verbose=1,
validation_data=(X_val, y_val))
plot_train_acc(history)
plot_train_loss(history)
cnn.load_weights('checkpoints/best_cnn.h5')
score = cnn.evaluate(X_test, y_test, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])
cnn = build_cnn(emb_size, dropout_rate, num_classes)
cnn.compile(loss='sparse_categorical_crossentropy',
optimizer='adam',
metrics=['acc'])
print(cnn.summary())
cnn.fit(X_vectors, y,
batch_size=batch_size,
epochs=10,
callbacks = [mc],
verbose=1,
validation_data=(X_val, y_val))
###Output
Train on 48000 samples, validate on 2400 samples
Epoch 1/10
48000/48000 [==============================] - 5s 101us/sample - loss: 0.9459 - acc: 0.5889 - val_loss: 1.0735 - val_acc: 0.4229
Epoch 2/10
48000/48000 [==============================] - 4s 81us/sample - loss: 0.7639 - acc: 0.6579 - val_loss: 1.0521 - val_acc: 0.6142
Epoch 3/10
48000/48000 [==============================] - 4s 79us/sample - loss: 0.7076 - acc: 0.6901 - val_loss: 1.0285 - val_acc: 0.5471
Epoch 4/10
48000/48000 [==============================] - 4s 79us/sample - loss: 0.6766 - acc: 0.7054 - val_loss: 1.0029 - val_acc: 0.4904
Epoch 5/10
48000/48000 [==============================] - 4s 80us/sample - loss: 0.6461 - acc: 0.7217 - val_loss: 0.8908 - val_acc: 0.6708
Epoch 6/10
48000/48000 [==============================] - 6s 122us/sample - loss: 0.6179 - acc: 0.7362 - val_loss: 0.7492 - val_acc: 0.7233
Epoch 7/10
48000/48000 [==============================] - 4s 82us/sample - loss: 0.5885 - acc: 0.7497 - val_loss: 0.6026 - val_acc: 0.7738
Epoch 8/10
48000/48000 [==============================] - 4s 82us/sample - loss: 0.5614 - acc: 0.7637 - val_loss: 0.4943 - val_acc: 0.7979
Epoch 9/10
48000/48000 [==============================] - 4s 83us/sample - loss: 0.5308 - acc: 0.7776 - val_loss: 0.4374 - val_acc: 0.8392
Epoch 10/10
48000/48000 [==============================] - 4s 84us/sample - loss: 0.5065 - acc: 0.7897 - val_loss: 0.4002 - val_acc: 0.8487
###Markdown
**Note in retrospect**: ignore the validation numbers here; I am not sure why I kept the validation split at all. This is simply training on all of the data in order to predict on the test set, nothing more.
###Code
X_test_vectors = normalize(np.array([vectorize(text) for text in tqdm(test['text'])])).reshape(12000,300,1)
outs = [np.argmax(j) for j in cnn.predict(X_test_vectors)]
X_id = test['id']
out_rows=list(zip(X_id, outs))
out_rows = [('Id','Predicted')] + out_rows
out_rows = [f'{t[0]},{t[1]}' for t in out_rows]
with open(f'submissions/4.w2v+cnn.csv', 'w') as a:
a.write('\n'.join(out_rows))
a.close()
###Output
_____no_output_____
###Markdown
The submission scores only 66%: not a great option, even worse than the primitive baseline. This is understandable, because applying a CNN to embeddings this way makes little sense: there are no meaningful relationships between the elements inside an embedding vector. What does make sense is to bring the texts to a fixed length, represent them as matrices of word embeddings, and take windows of size e.g. (5, 300) to capture relationships between words. That is what we will do, using the GloVe 840B Common Crawl model.
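As a point of reference, a minimal sketch of that "window over words" idea in Keras, where each convolution kernel spans the full embedding dimension; the sizes and layer choices are illustrative assumptions, not the architecture trained below:
###Code
from tensorflow.keras import layers, Model

SEQ_LEN, EMB_DIM, N_CLASSES = 50, 300, 3   # assumed sizes

inp = layers.Input(shape=(SEQ_LEN, EMB_DIM, 1))
# each filter covers 5-word windows across the whole embedding dimension
conv = layers.Conv2D(filters=100, kernel_size=(5, EMB_DIM), activation='relu')(inp)
pool = layers.GlobalMaxPooling2D()(conv)
out = layers.Dense(N_CLASSES, activation='softmax')(pool)
window_cnn_sketch = Model(inp, out)
###Output
_____no_output_____
###Markdown
First, convert the GloVe vectors to word2vec format and load them: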
###Code
from gensim.scripts.glove2word2vec import glove2word2vec
from gensim.models.keyedvectors import KeyedVectors
###Output
_____no_output_____
###Markdown
glove2word2vec(glove_input_file="embeddings/glove.840B.300d.txt", word2vec_output_file="embeddings/w2v.840B.300d.txt")
###Code
en_w2v = KeyedVectors.load_word2vec_format("embeddings/w2v.840B.300d.txt", binary=False)
text_lengths = [len(t.split()) for t in X_text]
min(text_lengths), max(text_lengths) # минимальная и максимальная длины текстов
sum(l < 10 for l in text_lengths)
sorted(text_lengths)[24000], sum(text_lengths) / 48000 # медиана и среднее
###Output
_____no_output_____
###Markdown
Outliers do not affect the mean as much as one might expect. Let's take a threshold of 50 words: anything longer will be truncated, anything shorter will be padded to 50 with zeros.
###Code
def text_to_matrix(text, size=50):
text = text.split()
l = len(text)
if len(text) < size:
text = text + ['aAaA']*(size-len(text)) # aAaA is not in vocab and will thus throw KeyError
else:
text = text[:size]
matrix = []
for word in text:
try:
matrix.append(en_w2v[word])
except KeyError:
matrix.append(np.zeros(300,))
return np.array(matrix)
X_text[510]
len(X_text[510].split()), len(X_text[511].split())
text_to_matrix(X_text[510]).shape, text_to_matrix(X_text[511]).shape
X = np.array([normalize(text_to_matrix(text)) for text in tqdm(X_text)]).reshape(48000, 300, 50, 1)
X.shape
np.save('data/X_train_vectors.npy', X)
X_test_vecs = np.array([normalize(text_to_matrix(text)) for text in tqdm(test['text'])]).reshape(12000, 300, 50, 1)
np.save('data/X_test_vectors.npy', X_test_vecs)
X_train, X_dev, X_test, y_train, y_dev, y_test = train_dev_test(X, y)
###Output
_____no_output_____
###Markdown
Let's build a 2D network.
###Code
from tensorflow.keras.layers import Flatten, Conv2D, MaxPooling2D
def build_2d_cnn(emb_size=300, text_size=50, dropout_rate=0.2, num_classes=3):
word_input = Input(shape=(emb_size, text_size, 1))
conv1 = Conv2D(filters=3, kernel_size=(7, 7), activation='relu')(word_input)
conv1 = MaxPooling2D(pool_size=(1,3), strides=None, padding='same')(conv1)
conv1 = BatchNormalization(trainable=True)(conv1)
conv2 = Conv2D(filters=5, kernel_size=(3, 3), activation='relu')(conv1)
conv2 = MaxPooling2D(pool_size=(1,3), strides=None, padding='same')(conv2)
conv2 = BatchNormalization(trainable=True)(conv2)
conv3 = Flatten()(conv2)
z1 = Dense(256, activation='relu')(conv3)
z1 = BatchNormalization(trainable=True)(z1)
z1 = Dropout(dropout_rate)(z1)
z2 = Dense(128, activation='relu')(z1)
z2 = BatchNormalization(trainable=True)(z2)
z2 = Dropout(dropout_rate)(z2)
y = Dense(num_classes, activation='softmax')(z2)
model = Model(inputs=word_input, outputs=y)
return model
###Output
_____no_output_____
###Markdown
I would be happy to build a good, complex architecture, but I do not have a powerful cluster at my disposal, my laptop has no GPU (although it has plenty of RAM), and anything more complex makes Colab fall over, even when everything is preprocessed locally and only npy files are uploaded, so unfortunately working with heavier models is impossible either way.
###Code
cnn_2d = build_2d_cnn()
mc = ModelCheckpoint('checkpoints/best_2d_cnn.h5', monitor='val_loss', mode='auto', save_best_only=True)
earlystop = EarlyStopping(monitor='val_loss', patience=3)
cnn_2d.compile(loss='sparse_categorical_crossentropy',
optimizer='adam',
metrics=['acc'])
print(cnn_2d.summary())
batch_size=512
epochs=100
history=cnn_2d.fit(X_train, y_train,
batch_size=batch_size,
epochs=epochs,
callbacks = [mc, earlystop],
verbose=1,
validation_data=(X_dev, y_dev))
plot_train_acc(history)
plot_train_loss(history)
###Output
_____no_output_____
###Markdown
LSTM. Overall, a CNN makes little sense for this task: it eats a lot of memory, crashes Google Colab, and is generally painful. What about an LSTM, which is a natural fit for text processing? Let's check, why not!
###Code
import tensorflow as tf
import numpy as np
import pandas as pd
from tqdm import tqdm
from sklearn.preprocessing import normalize
from utils import train_dev_test, plot_train_acc, plot_train_loss, classifier_out
train = pd.read_csv('data/train_texts.csv')
test = pd.read_csv('data/test_texts.csv')
# maximum number of vocabulary words to use
MAX_NB_WORDS = 50000
# limit each sequence to 250 words
MAX_SEQUENCE_LENGTH = 250
# let the embedding dimensionality be 100
EMBEDDING_DIM = 100
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
tokenizer = Tokenizer(num_words=MAX_NB_WORDS, lower=True)
tokenizer.fit_on_texts(train['text'].values)
word_index = tokenizer.word_index
print('Found %s unique tokens.' % len(word_index))
X = tokenizer.texts_to_sequences(train['text'].values)
X = pad_sequences(X, maxlen=MAX_SEQUENCE_LENGTH)
print('Shape of data tensor:', X.shape)
y = train['class']
X_train, X_dev, X_test, y_train, y_dev, y_test = train_dev_test(X, y)
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Embedding, SpatialDropout1D, Bidirectional, LSTM, Dense, Dropout, BatchNormalization
from tensorflow.keras.callbacks import EarlyStopping, ModelCheckpoint
model = Sequential()
model.add(Embedding(MAX_NB_WORDS, EMBEDDING_DIM, input_length=X.shape[1]))
model.add(SpatialDropout1D(0.2))
model.add(BatchNormalization())
model.add(Bidirectional(LSTM(units=128 , return_sequences = True , recurrent_dropout = 0.4 , dropout = 0.4)))
model.add(Bidirectional(LSTM(units=128 , recurrent_dropout = 0.2 , dropout = 0.2)))
model.add(Dense(3, activation='softmax'))
model.compile(loss='sparse_categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.summary()
epochs = 100
#epochs = 3
batch_size = 512
mc = ModelCheckpoint('checkpoints/best_lstm.h5', monitor='val_loss', mode='auto', save_best_only=True)
earlystop = EarlyStopping(monitor='val_loss', patience=3, min_delta=0.0001)
history = model.fit(X_train, y_train,
epochs=epochs,
batch_size=batch_size,
callbacks=[mc, earlystop],
validation_data=(X_dev, y_dev))
accr = model.evaluate(X_test, y_test)
print('Test set\n Loss: {:0.3f}\n Accuracy: {:0.3f}'.format(accr[0],accr[1]))
###Output
2400/2400 [==============================] - 3s 1ms/sample - loss: 0.6578 - acc: 0.7471
Test set
Loss: 0.658
Accuracy: 0.747
###Markdown
**Note in retrospect**: I interrupted this run because I was training the same architecture in Google Colab, where it was generally faster. The results really did come out like this, and this is the first model that gave me hope for a bright future. Even with the simplest LSTM the results are already better than the CNN's (the submission scores 0.742!). This is the model we will tune, adding extra features along the way.
###Code
plot_train_acc(history)
plot_train_loss(history)
###Output
_____no_output_____
###Markdown
**In retrospect**: at this stage I checked once more what results other people get on this task (in its standard version) and how they preprocess the data. I decided to stop overthinking it and simply do exactly the same, without fancier preprocessing steps such as removing markup, and see what happens.
###Code
import os
import random
import re
import time
import numpy as np
import pandas as pd
import plotly.express as px
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.naive_bayes import MultinomialNB
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression, SGDClassifier, Perceptron, RidgeClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import LinearSVC
from sklearn.tree import DecisionTreeClassifier
train = pd.read_parquet('data/train.parquet')
test = pd.read_parquet('data/test.parquet')
train.head()
train['Body'] = train['Title'] + " " + train['Body']
test['Body'] = test['Title'] + " " + test['Body']
train.head()
# Clean the data
def clean_text(text):
text = text.lower()
    text = re.sub(r'[^a-zA-Z\s]', '', text)
return text
train['Body'] = train['Body'].apply(clean_text)
test['Body'] = test['Body'].apply(clean_text)
train.head()
from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize
example_sent = "This is a sample sentence, showing off the stop words filtration."
stop_words = set(stopwords.words('english'))
def remove_stopword(words):
list_clean = [w for w in words.split(' ') if not w in stop_words]
return ' '.join(list_clean)
train['Body'] = train['Body'].apply(remove_stopword)
test['Body'] = test['Body'].apply(remove_stopword)
train.head()
X_text = train['Body']
X_test_text = test['Body']
y = train['target']
count_vect = CountVectorizer()
X_train_counts = count_vect.fit_transform(X_text)
X_train_counts.shape
X_test_counts = count_vect.transform(X_test_text)
X_test_counts.shape
tfidf_transformer = TfidfTransformer()
X_train_tfidf = tfidf_transformer.fit_transform(X_train_counts)
X_test_tfidf = tfidf_transformer.transform(X_test_counts)
X_train_tfidf.shape
from sklearn.model_selection import train_test_split
X_train, X_val, y_train, y_val = train_test_split(X_train_tfidf, y, test_size=0.05, random_state=42)
###Output
_____no_output_____
###Markdown
LinearSVC
###Code
lsvc = LinearSVC().fit(X_train,y_train)
lsvc.score(X_val,y_val)
###Output
_____no_output_____ |
Remove_Protected_Attributes/Adult.ipynb | ###Markdown
Results with protected attributes
###Code
np.random.seed(0)
## Divide into train,validation,test
dataset_orig_train, dataset_orig_test = train_test_split(dataset_orig, test_size=0.2, random_state=0,shuffle = True)
X_train, y_train = dataset_orig_train.loc[:, dataset_orig_train.columns != 'Probability'], dataset_orig_train['Probability']
X_test , y_test = dataset_orig_test.loc[:, dataset_orig_test.columns != 'Probability'], dataset_orig_test['Probability']
# --- LSR
clf = LogisticRegression(C=1.0, penalty='l2', solver='liblinear', max_iter=100)
# --- CART
# clf = tree.DecisionTreeClassifier()
# clf.fit(X_train, y_train)
# y_pred = clf.predict(X_test)
# cnf_matrix_test = confusion_matrix(y_test,y_pred)
# print(cnf_matrix_test)
# TN, FP, FN, TP = confusion_matrix(y_test,y_pred).ravel()
print("recall :", measure_final_score(dataset_orig_test, clf, X_train, y_train, X_test, y_test, 'sex', 'recall'))
print("far :",measure_final_score(dataset_orig_test, clf, X_train, y_train, X_test, y_test, 'sex', 'far'))
print("precision :", measure_final_score(dataset_orig_test, clf, X_train, y_train, X_test, y_test, 'sex', 'precision'))
print("accuracy :",measure_final_score(dataset_orig_test, clf, X_train, y_train, X_test, y_test, 'sex', 'accuracy'))
print("aod sex:",measure_final_score(dataset_orig_test, clf, X_train, y_train, X_test, y_test, 'sex', 'aod'))
print("eod sex:",measure_final_score(dataset_orig_test, clf, X_train, y_train, X_test, y_test, 'sex', 'eod'))
print("aod race:",measure_final_score(dataset_orig_test, clf, X_train, y_train, X_test, y_test, 'race', 'aod'))
print("eod race:",measure_final_score(dataset_orig_test, clf, X_train, y_train, X_test, y_test, 'race', 'eod'))
# print("TPR:",measure_final_score(dataset_orig_test, clf, X_train, y_train, X_test, y_test, 'sex', 'TPR'))
# print("FPR:",measure_final_score(dataset_orig_test, clf, X_train, y_train, X_test, y_test, 'sex', 'FPR'))
# print("Precision", metrics.precision_score(y_test,y_pred))
# print("Recall", metrics.recall_score(y_test,y_pred))
# print(X_train.columns)
# print(clf.coef_)
# import matplotlib.pyplot as plt
# y = np.arange(len(dataset_orig.columns)-1)
# plt.barh(y,clf.coef_[0])
# plt.yticks(y,dataset_orig_train.columns)
# plt.show()
###Output
_____no_output_____
###Markdown
Results without protected attributes
###Code
## Drop race and sex
dataset_orig = dataset_orig.drop(['sex','race'],axis=1)
## Divide into train,validation,test
np.random.seed(0)
## Divide into train,validation,test
dataset_orig_train, dataset_orig_test = train_test_split(dataset_orig, test_size=0.2, random_state=0,shuffle = True)
X_train, y_train = dataset_orig_train.loc[:, dataset_orig_train.columns != 'Probability'], dataset_orig_train['Probability']
X_test , y_test = dataset_orig_test.loc[:, dataset_orig_test.columns != 'Probability'], dataset_orig_test['Probability']
# --- LSR
clf = LogisticRegression(C=1.0, penalty='l2', solver='liblinear', max_iter=100)
# --- CART
# clf = tree.DecisionTreeClassifier()
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)
cnf_matrix_test = confusion_matrix(y_test,y_pred)
print(cnf_matrix_test)
TN, FP, FN, TP = confusion_matrix(y_test,y_pred).ravel()
print("recall :", calculate_recall(TP,FP,FN,TN))
print("far :",calculate_far(TP,FP,FN,TN))
print("precision :", calculate_precision(TP,FP,FN,TN))
print("accuracy :",calculate_accuracy(TP,FP,FN,TN))
print(X_train.columns)
print(clf.coef_)
###Output
_____no_output_____ |
notebooks/4.2.1_Szenarien_Ueberblick.ipynb | ###Markdown
[Table of contents](../AP4.ipynb) | [next](wohin?) 4.2.1 Scenario Overview. The following presents the scenarios used in the FLUCCO+ research project for a renewable Austria in 2040 and 2050, respectively.
###Code
# OPTIONAL: Load the "autoreload" extension so that code can change
%load_ext autoreload
# OPTIONAL: always reload modules so that as you change code in src, it gets loaded
%autoreload 2
%matplotlib inline
from FLUCCOplus.notebooks import *
from FLUCCOplus import scenarios
sc_raw = scenarios.read("szenarien_w2s.xlsx")
sc_raw
sc = (sc_raw
.pipe(scenarios.start_pipeline)
.pipe(scenarios.NaNtoZero)
.pipe(scenarios.format_df)
.pipe(scenarios.convert_PJ_to_GWH)
)
###Output
_____no_output_____
###Markdown
Electricity generation by energy carrier
###Code
all = ['Jahr', 'Strombedarf', 'Mismatch', 'Importe', 'Stromproduktion',
'Wasserkraft', 'Windkraft', 'Photovoltaik', 'Volatile EE',
'Nicht-Volatile', 'Laufkraft', 'Pumpspeicher', 'RES0', 'RES1', 'RES2']
pp_carriers = ['Laufkraft','Windkraft', 'Photovoltaik', 'Pumpspeicher', 'Nicht-Volatile']
sci = sc#.rename(index={name: i+1 for i, name in enumerate(sc.index)})
sci
fig, ax = plt.subplots(1,1)
(sci[pp_carriers]/1000).reindex(index=sci[pp_carriers].index[::-1]).plot(kind="barh", stacked=True, color=config.COLORS.values(), rot=0, ax=ax)
ax.set(xlabel="Endenergie [TWh/a]")
ax.set(ylabel="Strom-Erzeugung in Österreich (Szenarien)")
fig.savefig("../data/processed/figures/Szenarien_EndenergienNamed", dpi=config.DPI, bbox_inches = 'tight')
for name, i in enumerate(sc.index):
print(str(name+1)+": "+i, end=", ")
###Output
1: EM2018, 2: EM2019, 3: E-Control 2019, 4: Energie und Klimazukunft 2030 (Veigl17), 5: Erneuerbare Energie 2030 (UBA16), 6: WEM 2030 (UBA17), 7: Transition 2030 (UBA17), 8: Energie und Klimazukunft 2050 (Veigl17), 9: Erneuerbare Energie 2050 (UBA16), 10: WEM 2050 (UBA17), 11: Transition 2050 (UBA17), 12: 100% Erneuerbare Deckung 2050 (FLUCCO+), 13: 100% Erneuerbare Deckung 2050 inkl Methan (FLUCCO+),
###Markdown
Share of renewable electricity generation in the final energy mix
###Code
fig, ax = plt.subplots(1,1)
sc[["Volatile EE", "Nicht-Volatile"]].plot(ax=ax,kind="bar", stacked=True, color=["orange", "darkgreen"])
ax.set(ylabel="Endenergie [GWh/a]")
for i, label in enumerate(list(sc.index)):
score = sc.loc[label, "Stromproduktion"]
ax.annotate(f"{f'{score:,.0f}'.replace(',',' ')}", (i - 0.2, score))
###Output
_____no_output_____
###Markdown
Determining the annual scaling factors. In principle, any of the scenarios can of course be used as the scaling reference. Our latest scenario version, presented at [EnInnov 2020 Graz](https://www.tugraz.at/events/eninnov2020/nachlese/download-beitraege/stream-a/), is the variant "Streicher 2b". It corresponds to the original scenario [Streicher, et al. 2011] with the following adaptations: * reallocation of the energy from geothermal sources 50/50 to wind power/PV, with methanation assigned to wind power only (Streicher 2a) * final energy demand for mobility taken from UBA17, agriculture added
###Code
sc.index[-2]
s = scenarios
s.factors("EM2019",-2, sc)
###Output
_____no_output_____
###Markdown
[Table of contents](../AP4.ipynb) | [next](wohin?) 4.2.1 Scenario Overview. The following presents the scenarios used in the FLUCCO+ research project for a renewable Austria in 2040 and 2050, respectively.
###Code
# OPTIONAL: Load the "autoreload" extension so that code can change
%load_ext autoreload
# OPTIONAL: always reload modules so that as you change code in src, it gets loaded
%autoreload 2
%matplotlib inline
from FLUCCOplus.notebooks import *
from FLUCCOplus import scenarios
sc_raw = scenarios.read("szenarien_w2s.xlsx")
sc_raw.head()
sc = (sc_raw
.pipe(scenarios.start_pipeline)
.pipe(scenarios.NaNtoZero)
.pipe(scenarios.format_df)
.pipe(scenarios.convert_PJ_to_GWH)
)
###Output
_____no_output_____
###Markdown
Electricity generation by energy carrier
###Code
all = ['Jahr', 'Strombedarf', 'Mismatch', 'Importe', 'Stromproduktion',
'Wasserkraft', 'Windkraft', 'Photovoltaik', 'Volatile EE',
'Nicht-Volatile', 'Laufkraft', 'Pumpspeicher', 'RES0', 'RES1', 'RES2']
pp_carriers = ['Laufkraft','Windkraft', 'Photovoltaik', 'Pumpspeicher', 'Nicht-Volatile']
sci = sc#.rename(index={name: i+1 for i, name in enumerate(sc.index)})
sci
fig, ax = plt.subplots(1,1)
(sci[pp_carriers]/1000).reindex(index=sci[pp_carriers].index[::-1]).plot(kind="barh", stacked=True, color=config.COLORS.values(), rot=0, ax=ax)
ax.set(xlabel="Endenergie [TWh/a]")
ax.set(ylabel="Strom-Erzeugung in Österreich (Szenarien)")
fig.savefig("../data/processed/figures/Szenarien_EndenergienNamed", dpi=config.DPI, bbox_inches = 'tight')
for name, i in enumerate(sc.index):
print(str(name+1)+": "+i, end=", ")
###Output
1: EM2018, 2: EM2019, 3: E-Control 2019, 4: Energie und Klimazukunft 2030 (Veigl17), 5: Erneuerbare Energie 2030 (UBA16), 6: WEM 2030 (UBA17), 7: Transition 2030 (UBA17), 8: Energie und Klimazukunft 2050 (Veigl17), 9: Erneuerbare Energie 2050 (UBA16), 10: WEM 2050 (UBA17), 11: Transition 2050 (UBA17), 12: 100% Erneuerbare Deckung 2050 (FLUCCO+), 13: 100% Erneuerbare Deckung 2050 inkl Methan (FLUCCO+),
###Markdown
Share of renewable electricity generation in the final energy mix
###Code
fig, ax = plt.subplots(1,1)
sc[["Volatile EE", "Nicht-Volatile"]].plot(ax=ax,kind="bar", stacked=True, color=["orange", "darkgreen"])
ax.set(ylabel="Endenergie [GWh/a]")
for i, label in enumerate(list(sc.index)):
score = sc.loc[label, "Stromproduktion"]
ax.annotate(f"{f'{score:,.0f}'.replace(',',' ')}", (i - 0.2, score))
###Output
_____no_output_____
###Markdown
Determining the annual scaling factors. In principle, any of the scenarios can of course be used as the scaling reference. Our latest scenario version, presented at [EnInnov 2020 Graz](https://www.tugraz.at/events/eninnov2020/nachlese/download-beitraege/stream-a/), is the variant "Streicher 2b". It corresponds to the original scenario [Streicher, et al. 2011] with the following adaptations: * reallocation of the energy from geothermal sources 50/50 to wind power/PV, with methanation assigned to wind power only (Streicher 2a) * final energy demand for mobility taken from UBA17, agriculture added
###Code
sc.index[-2]
s = scenarios
s.factors("EM2019",-2, sc)
###Output
_____no_output_____ |
quality_embeddings/generate_latin_word_vector.ipynb | ###Markdown
Generating a Latin Word Vector. Parameter suggestions brought to you by: * Word2vec applied to Recommendation: Hyperparameters Matter - https://arxiv.org/pdf/1804.04212 * How to Generate a Good Word Embedding? - https://arxiv.org/pdf/1507.05523.pdf Guidelines/key points as quotes: * for semantic property tasks, larger dimensions will lead to better performance * For most NLP tasks a dimensionality of 50 is typically sufficient. * ... multiple iterations are necessary. The performance increases by a large margin when we iterate more than once, regardless of the task and the corpus. * Early stopping for regularization based on minimizing the validation loss isn't as useful as with other ML tasks; ideally a specific test would be implemented, but this is difficult to implement.
###Code
import json
import logging
import multiprocessing
from datetime import datetime
from pathlib import Path
import os
from gensim.models import Word2Vec
from gensim.models import KeyedVectors
from cltk.stop.latin import PERSEUS_STOPS
LOG = logging.getLogger('make_word_vec')
logging.basicConfig(format='%(levelname)s : %(message)s', level=logging.INFO)
# corpus_characteristics = 'non_lemmatized'
# corpus_filename = 'latin_library.preprocessed.cor'
# corpus_characteristics = 'lemmatized'
# corpus_filename ='latin_library.lemmatized.preprocessed.cor'
corpus_characteristics = ''
corpus_filename ='latin_library.preprocessed.cor'
STOPWORDS = set(PERSEUS_STOPS)
# additional stops
additional_stops = 'ille iste ipse haec quem illic qui sic hic haec quae '.split()
for stop in additional_stops:
STOPWORDS.add(stop)
corpus_file_wo_stopwords = 'latin_library.wostops.cor'
with open(corpus_filename, 'rt') as infile:
with open(corpus_file_wo_stopwords, 'wt') as outfile:
for line in infile:
words = [word for word in line.split() if word not in STOPWORDS]
sent = ' '.join(words).strip()
outfile.write('{}\n'.format(sent))
keyword_params = {
'size': 50,
'iter': 30,
'min_count': 3, # Ignores all words with total frequency lower than this.
'max_vocab_size': None,
'ns_exponent': 0.75, # the default, optimal for linguistic tasks; also try -0.5 for recommenders
'alpha': 0.025,
'min_alpha': 0.004,
'sg': 1, # skip gram
'window': 10, # number of surrounding words to consider
'workers': multiprocessing.cpu_count() - 1,
'negative': 15, # 15 may be best
'sample': 0 # 0.00001 # sample=1e-05 downsamples 4158 most-common words
# sample=0.001 downsamples 32 most-common words
}
LOG.info('Creating vector with parameters: %s', json.dumps(keyword_params))
latin_lib_vec = Word2Vec(corpus_file=corpus_file_wo_stopwords, **keyword_params)
LOG.info('Saving word2vec for latin library corpus')
latin_lib_vec.save('latin_library.{}.vec'.format( datetime.now().strftime('%Y.%m.%d')))
with open('latin_library.vec.{}.{}.params'.format(corpus_characteristics, datetime.now().strftime('%Y.%m.%d')), 'wt') as writer:
json.dump(keyword_params, writer)
###Output
_____no_output_____
###Markdown
Persist the word vectors to disk. They should be cross-platform and loadable from other languages.
###Code
word_vectors = latin_lib_vec.wv
the_filename = 'latin_library.{}.kv'.format(datetime.now().strftime('%Y.%m.%d'))
# word_vectors.save_word2vec_format(the_filename, binary=False)
word_vectors.save(the_filename)
###Output
INFO : saving Word2VecKeyedVectors object under latin_library.2019.06.01.kv, separately None
INFO : not storing attribute vectors_norm
INFO : saved latin_library.2019.06.01.kv
###Markdown
Some QA
###Code
latin_lib_vec.wv.most_similar('puella')
if 'haec' in latin_lib_vec.wv:
latin_lib_vec.wv.similar_by_word('haec')
latin_lib_vec.wv.similar_by_word('uiolenter')
the_filename = 'latin_library.{}.kv'.format( datetime.now().strftime('%Y.%m.%d'))
latin_word_vectors = KeyedVectors.load(the_filename, mmap='r')
latin_word_vectors.most_similar('uir')
latin_lib_vec.wv.most_similar('homo')
latin_lib_vec.wv.most_similar('canere', topn=10)
latin_lib_vec.wv.most_similar('piger', topn=10)
latin_lib_vec.wv.most_similar('scandere')
latin_lib_vec.wv.most_similar('praelucere')
latin_lib_vec.wv.similar_by_word('ciuis')
the_lemmatized_filename = 'latin_library.2019.03.07.kv'
lem_lat_wordvec = KeyedVectors.load(the_lemmatized_filename, mmap='r')
lem_lat_wordvec.most_similar('puella')
lem_lat_wordvec.most_similar('puer')
'eccum' in lem_lat_wordvec
lem_lat_wordvec.most_similar('eccum')
the_date ='2019.03.08'
#the_date =datetime.now().strftime('%Y.%m.%d')
the_filename = 'latin_library.{}.kv'.format(the_date )
latin_word_vectors = KeyedVectors.load(the_filename, mmap='r')
the_filename = 'latin_library.{}.txt'.format(the_date)
latin_word_vectors.save_word2vec_format(the_filename, binary=False)
###Output
INFO : storing 147262x600 projection weights into latin_library.2019.03.08.txt
|
02_practico_I-Copy1.ipynb | ###Markdown
Universidad Nacional de Córdoba - Facultad de Matemática, Astronomía, Física y Computación. Diplomatura en Ciencia de Datos, Aprendizaje Automático y sus Aplicaciones. Practical I - Statistics. Data Analysis and Visualization - 2019. In this practical we will work with the [Human Freedom Index 2018](https://www.cato.org/human-freedom-index-new) dataset from the Cato Institute. This index measures in detail what we understand as freedom, using 79 indicators of personal and economic freedom across different dimensions, distilled into a single tidy score from 1 to 10. We will use an [already cleaned version of the dataset](https://www.kaggle.com/gsutters/the-human-freedom-index/home) that you can download from Kaggle. The most important variables the dataset works with are: * Rule of Law * Security and Safety * Movement * Religion * Association, Assembly, and Civil Society * Expression and Information * Identity and Relationships * Size of Government * Legal System and Property Rights * Access to Sound Money * Freedom to Trade Internationally * Regulation of Credit, Labor, and Business. We will center our analysis on variables related to *Identity and Relationships* in Latin American countries, and compare them with the global statistics. The question to answer is simple: **What levels of freedom are experienced in Latin America, specifically with respect to identity freedoms?** However, to analyze the data we also need to pose these sub-questions: 1. What does a score of 4.5 mean? The region's scores must be put in context with the data from the rest of the world. 2. What is the trend across the years? Are we improving or getting worse? 3. In this study, freedom is measured with two main estimators: *hf_score*, which refers to Human Freedom, and *ef_score*, which refers to Economic Freedom. Do these two estimators relate to identity freedom in the same way? Initially, as in any data exploration, we have very little a priori information about the meaning of the data and we have to start by understanding it. We propose the following exercises as a guide to begin this exploration.
###Code
import matplotlib.pyplot as plt
import numpy
import pandas
import seaborn
seaborn.__version__
dataset = pandas.read_csv('../datasets/hfi_cc_2018.csv')
dataset.shape
dataset.columns # Way too many columns!
###Output
_____no_output_____
###Markdown
Fortunately the columns have a prefix that helps us identify which section they belong to. We keep only those that start with *pf_identity*, together with a few more general columns.
###Code
important_cols = ['year', 'ISO_code', 'countries', 'region']
important_cols += [col for col in dataset.columns if 'pf_identity' in col]
important_cols += [
'ef_score', # Economic Freedom (score)
'ef_rank', # Economic Freedom (rank)
'hf_score', # Human Freedom (score)
'hf_rank', # Human Freedom (rank)
]
dataset
dataset = dataset[important_cols]
#dataset_regions = dataset['region'].drop_duplicates()
dataset_regions = dataset['region'].unique()
dataset_regions
dataset.shape
dataset
numb_columns = dataset.iloc[:,4:]
numb_columns
for n_c in numb_columns:
print(n_c, dataset[n_c].max() - dataset[n_c].min())
for region in dataset_regions:
print('Region = ', region)
print('ef_score = ', dataset[dataset['region'] == region]['ef_score'].mean())
print('hf_score = ', dataset[dataset['region'] == region]['hf_score'].mean(), '\n')
print('Media global')
print('ef_score = ', dataset['ef_score'].mean())
print('hf_score = ', dataset['hf_score'].mean())
for region in dataset_regions:
print('Region = ', region)
print('pf_identity = ', dataset[dataset['region'] == region]['pf_identity'].mean(), '\n')
#print('pf_score = ', dataset[dataset['region'] == region]['pf_score'].mean(), '\n')
print('Media global')
print('pf_identity = ', dataset['pf_identity'].mean())
#print('pf_score = ', dataset['pf_score'].mean())
plt.figure(figsize=(10,6))
seaborn.barplot(data=dataset, x='year', y='hf_score')
plt.ylim(6.5, 7.5)
plt.title('Progreso de la variable "Human freedom" entre 2008 y 2016', fontsize=20)
seaborn.despine(left=True)
plt.figure(figsize=(10,6))
seaborn.barplot(data=dataset, x='year', y='ef_score')
plt.ylim(6.5, 7)
plt.title('Progreso de la variable "Economic freedom" entre 2008 y 2016', fontsize=20)
seaborn.despine(left=True)
plt.figure(figsize=(10,6))
seaborn.barplot(data=dataset, x='year', y='pf_identity')
plt.ylim(6, 8.5)
plt.title('Progreso de la variable "Identidad y relaciones" entre 2008 y 2016', fontsize=20)
seaborn.despine(left=True)
###Output
_____no_output_____
###Markdown
1. Descriptive statistics 1. To start with an overview of the data, compute the range of the variables. 2. Obtain the mean, median and standard deviation of the *pf_identity* and *hf_score* variables worldwide and compare them with those of Latin America and the Caribbean. Does it make sense to compute the mode? 3. Are all values of *pf_identity* and *hf_score* directly comparable? What other variable could influence them? 4. How can the missing values be cleaned up? 5. Do you find outliers in these two variables? What method do you use to detect them? Are the outliers global or per group? Would you remove them from the dataset?
2. Data aggregation 1. Plot the mean of the *pf_identity* and *hf_score* variables across the years. 2. Make the same plots, but split by region (each variable in a separate plot, otherwise nothing can be seen). Is the observed trend the same as when we do not split by region? 3. If you consider it necessary, plot some Latin American countries to try to explain the trend of *pf_identity* in the region. How did you select the countries relevant to that trend? Hint: there is a seaborn plot that does it all for you! Just out of curiosity, also plot the trend of *hf_score* and *ef_score* across the years. Do you have any hypothesis for this behaviour?
3. Distributions 1. Plot, in the same histogram, the distribution of *pf_identity* globally and in Latin America and the Caribbean. Repeat for *hf_score*. Visually, what type of distribution does each variable correspond to? Is it correct to use all the records for those zones in these plots? 2. Run a Kolmogorov-Smirnov test to check analytically whether these variables follow the distribution proposed in the previous exercise. Hint: you can use https://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.stats.kstest.html, but keep in mind that if the distribution is "norm", it will compare the data against a normal distribution with mean 0 and standard deviation 1. The test can be run over all the data or only over Latin America. 3. Make a QQ plot of the same distributions. Both distributions can be taken over all the data or only over Latin America, but not mixed. 4. Measure the skewness and kurtosis of the same distributions used for the previous plot. How do these statistics relate to the shape of the QQ plot obtained previously? Does the QQ plot provide information that is not already present in these statistics?
4. Correlations In this exercise we want to answer the questions: * Do social and economic freedoms always go hand in hand? * How do both relate to individual freedoms and those concerning personal relationships? To do so, we will analyze the correlations between the variables pf_identity, hf_score and ef_score. Since pf_identity contributes to the computation of hf_score and ef_score, we expect to find some degree of correlation; however, we want to measure how much. 1. What conclusions can you draw from a pairplot of these three variables? Is it adequate for the values of pf_identity? Why? 2. Plot the correlation between pf_identity and hf_score, and between pf_identity and ef_score. Analyze the result: can conclusions be drawn? Keep in mind that, since pf_identity is the result of an average, it only takes a few values; it is, in effect, discrete. 3. Compute a suitable correlation coefficient between the two pairs of variables, depending on the amount of data, the type of data and their distribution. Some options are: Pearson's coefficient, Spearman's coefficient, and Kendall's tau. Interpret the results and justify whether the variables are correlated or not. 4. [Optional] Analyze the correlation between region and hf_score (and/or ef_score), and between region and pf_identity. Consider that, since the *region* variable is ordinal, some kind of test must be used. Explain the requirements for applying that test. (If they are not met, some data can be added to generate more records.)
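As a starting point for the Kolmogorov-Smirnov and correlation items above, the cell below is a minimal sketch of one possible approach (not the official solution): it standardizes the variable before calling `scipy.stats.kstest` against "norm", as the hint requires, and computes a rank-based Spearman correlation.
###Code
from scipy import stats
# KS test against a standard normal: standardize first, because kstest(..., 'norm')
# compares against a normal with mean 0 and standard deviation 1.
x = dataset['hf_score'].dropna()
z = (x - x.mean()) / x.std()
print(stats.kstest(z, 'norm'))
# Rank-based correlation between pf_identity and hf_score (robust to ties and non-normality).
print(stats.spearmanr(dataset['pf_identity'], dataset['hf_score'], nan_policy='omit'))
###Output
_____no_output_____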
###Code
print(dataset['pf_identity'].mean())
print(dataset['pf_identity'].median())
#dataset[dataset]
print(dataset[dataset['region'] == 'Latin America & the Caribbean']['pf_identity'].mean())
dataset
plt.figure(figsize=(10,6))
catplot_data = dataset
seaborn.catplot(data=catplot_data, x='region', y='hf_score', kind='box')
#plt.title('Boxplot de hf_score en latam + caribe', size=20)
plt.ylim(3,9)
#seaborn.despine(left=True)
regions = dataset['region'].unique()
for region in regions:
plt.figure(figsize=(10,6))
seaborn.lineplot(data=dataset[dataset['region'] == region], y='hf_score', x='year', ci=None)
plt.title(region, size=20)
#plt.ylim(5,9.5)
seaborn.despine(left=True)
plt.figure(figsize=(10,6))
seaborn.barplot(data=dataset, y='pf_identity', x='year')
plt.title('Barplot de pf_identity por año', size=20)
plt.ylim(6,8.5)
seaborn.despine(left=True)
regions = dataset['region'].unique()
for region in regions:
plt.figure(figsize=(10,6))
seaborn.barplot(data=dataset[dataset['region'] == region], y='hf_score', x='year')
plt.title(region, size=20)
plt.ylim(5,9.5)
seaborn.despine(left=True)
plt.figure(figsize=(10,6))
seaborn.lineplot(data=dataset, y='hf_score', x='year')
plt.title(region, size=20)
plt.ylim(5,9.5)
seaborn.despine(left=True)
ds_latam = dataset[dataset['region'] == 'Latin America & the Caribbean']
ds_latam
plt.figure(figsize=(10,6))
seaborn.lineplot(data=ds_latam, y='pf_identity', x='year', hue='countries', ci=None)
countries = list(ds_latam['countries'].unique())
years = ds_latam['year'].unique()
for country in countries:
result = ds_latam[ds_latam['countries'] == country]['pf_identity']
result = pandas.Series.tolist(result)
result = result[8] - result[0]
if result > 0:
print(country)
#plt.figure(figsize=(10,6))
seaborn.lineplot(data=ds_latam[ds_latam['countries'] == country], y='pf_identity', x='year', ci=None)
#plt.legend()
import itertools
groups = itertools.groupby(countries, key = lambda x: x[2])
print(list(groups))
ds_latam
# itertools is part of the Python standard library and has no __version__ attribute
plt.figure(figsize=(10,6))
seaborn.barplot(data=dataset, y='ef_score', x='year')
plt.title('Barplot de ef_score por año', size=20)
plt.ylim(6.5,7.2)
seaborn.despine(left=True)
new_dataset = dataset.dropna()
#print(new_dataset.mean())
plt.figure(figsize=(10,6))
seaborn.distplot(new_dataset['pf_identity'])
plt.title('pf_identity en el mundo', size=20)
plt.xlim(0, 10)
seaborn.despine(left=True)
###Output
_____no_output_____ |
preprocessing2.ipynb | ###Markdown
Number of sessions for each file
###Code
num=[]
for i in range(len(data[2])-1):
if data[2][i+1]==1:
num.append(data[2][i])
num.append(data[2][len(data[2])-1])
num1=[i for i in num for j in range(40)]
len(num1)
b=[]
def search(dirname):
filenames = os.listdir(dirname)
for filename in filenames:
full_filename = os.path.join(dirname, filename)
ext = os.path.splitext(full_filename)[-1]
if ext == '.txt':
b.append(full_filename)
for root, dirs, files in os.walk("C:/Users/user/machine/워크봇운동데이터/워크봇운동데이터"):
print(root)
search(root)
len(b)
search('C:/Users/user/machine/워크봇운동데이터/워크봇운동데이터/전착한/2018년01월08일15시24분26초')
for i in b:
    i = i.replace("\\", "/", 1)  # str.replace returns a new string, so keep the result
    print(i)
a=enumerate(b)
len(list(a))
def chunks(l, n):
"""Yield successive n-sized chunks from l."""
for i in range(0, len(l), n):
yield l[i:i+n]
int(num1[0])
g=pd.DataFrame({})
ans=pd.DataFrame({})
g=pd.DataFrame({})
ans=pd.DataFrame({})
h=0
for j in range(len(b)):
h+=1
k=0
f2 = open(b[j], "r")
while True:
k=k+1
line=f2.readline()
if not line:
break
if k<100:
continue
c=[]
d=[]
e=[]
f = open(b[j], 'r')
while True:
line = f.readline()
if not line: break
line = float(line[0:-1])
c.append(line)
f.close()
d=list(chunks(c,round(len(c)/int(num1[j]))))
mean=pd.Series({i:np.mean(d[i]) for i in range(num1[j])})
var=pd.Series({i:np.var(d[i]) for i in range(num1[j])})
e=pd.DataFrame({'mean':mean,'var':var}) ## per-file summary: num1[j] session rows x 2 columns (mean, var)
g = pd.concat([g, e], axis=1) # horizontal bind
if h==39:
h=-1 #initialize
ans=pd.concat([ans, g],axis=0) ##vertical bind
g=pd.DataFrame({}) ##initialize
print(g)
print(h)
b[39][-18:]=='Right_LoadCell.txt'
b[3]
###Output
_____no_output_____
###Markdown
df_13_axis1 = pd.concat([df_1, df_3], axis=1) performs a column bind. Source: http://rfriend.tistory.com/256 [R, Python 분석과 프로그래밍 (by R Friend)]
###Code
os.chdir("C://Users//user//machine//워크봇운동데이터")
os.getcwd()
writer = pd.ExcelWriter('output.xlsx')
ans.to_excel(writer,'Sheet1')
writer.save()
ans
###Output
_____no_output_____ |
0-kde-hist-gradient-studies.ipynb | ###Markdown
First steps:- Sample some points from a gaussian mixture - Fixed sample sizes (one big, one small), 5 different random seeds- Create both kde and normal histograms for some binning and bandwidth- Value of truth dist bin is area under curve between bin endpoints (just like kde hist!)- Make these plots for a range of bandwidths Expected behavior:- stdev of count estimate across random seeds decreases with more samples- for large bandwidth, you will see a bias error, as you're smoothing out the shape of the distribution
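Concretely, for samples drawn from a unit-variance Gaussian centred at $\mu$, the truth value for a bin with edges $(a, b)$ is the area $\Phi(b-\mu) - \Phi(a-\mu)$, where $\Phi$ is the standard normal CDF; scaled by the number of samples, this is the quantity both histogram variants are estimating.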
###Code
import jax
import jax.numpy as jnp
from jax.random import normal, PRNGKey
rng = PRNGKey(7)
from matplotlib.colors import to_rgb
import matplotlib.pyplot as plt
plt.rc('figure',figsize=[7.3,5],dpi=120,facecolor='w')
from functools import partial
###Output
_____no_output_____
###Markdown
Let's generate `num_samples` points from a set of normal distributions with slowly increasing means:
###Code
lo, hi = -2, 2
grid_points = 500
mu_grid = jnp.linspace(lo, hi, grid_points)
num_samples = 100
points = jnp.tile(
normal(rng, shape = (num_samples,)),
reps = (grid_points,1)
) + mu_grid.reshape(-1,1)
points.shape
###Output
_____no_output_____
###Markdown
Each index of `points` is a set of `num_samples` samples drawn for a given $\mu$ value. We want to make histograms for these sets of points, and then focus our attention on just one bin.
###Code
bins = jnp.linspace(lo-1,hi+1,6)
make_hists = jax.vmap(partial(jnp.histogram, bins = bins))
hists, _ = make_hists(points)
###Output
_____no_output_____
###Markdown
We can start by inspecting a couple of these histograms to see the behaviour of varying $\mu$ upwards:
###Code
centers = bins[:-1] + jnp.diff(bins) / 2.0
width = (bins[-1] - bins[0])/(len(bins) - 1)
fig, axs = plt.subplots(1,3)
# first mu value
axs[0].bar(
centers,
hists[0],
width = width,
label=f'$\mu$={mu_grid[0]}'
)
axs[0].legend()
axs[0].axis('off')
# middle mu value
axs[1].bar(
centers, hists[len(hists)//2],
width = width,
label=f'$\mu$={mu_grid[len(hists)//2]:.2f}',
color = 'C1'
)
axs[1].legend()
axs[1].axis('off')
# last mu value
axs[2].bar(
centers,
hists[-1],
width = width,
label=f'$\mu$={mu_grid[-1]}',
color = 'C2'
)
axs[2].legend()
axs[2].axis('off');
###Output
_____no_output_____
###Markdown
As one may expect, shifting $\mu$ to the right subsequently skews the resulting histogram. Now, let's focus on the behavior of the middle bin by plotting its height across a large range of $\mu$ values:
###Code
# cool color scheme
from matplotlib.colors import to_rgb
def fade(c1,c2, num_points):
start = jnp.array(to_rgb(c1))
end = jnp.array(to_rgb(c2))
interp = jax.vmap(partial(jnp.linspace, num=num_points))
return interp(start,end).T
color_scheme = fade('C1', 'C3', num_points=grid_points)
middle = len(bins)//2 - 1
mu_width = mu_grid[1]-mu_grid[0]
plt.bar(mu_grid, hists[:,middle], color=color_scheme, width=mu_width, edgecolor= 'black',linewidth = 0.05,alpha=0.7)
plt.xlabel('$\mu$');
###Output
_____no_output_____
###Markdown
We can see that this bin goes up then down in value as expected, but it does so in a jagged, unfriendly way, meaning that the gradient of the bin height with respect to $\mu$ is also badly behaved. This gradient is crucial to evaluate if you want to do end-to-end optimization, since histograms are an extremely common component in high-energy physics.A solution to remedy this jaggedness can be found by changing the way we construct the histogram. In particular, we can perform a kernel density estimate for each set of samples, then discretize the result by partitioning the area under the curve with the same binning as we used to make the histogram.
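In symbols, with bandwidth $h$ and samples $x_j$, the count assigned to a bin with edges $(a, b)$ is $\sum_j \left[\Phi\left(\frac{b - x_j}{h}\right) - \Phi\left(\frac{a - x_j}{h}\right)\right]$: the summed area of each sample's Gaussian kernel that falls inside the bin, which is exactly what the function below evaluates with `jsc.stats.norm.cdf`.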
###Code
import jax.scipy as jsc
def kde_hist(events, bins, bandwidth=None, density=False):
edge_hi = bins[1:] # ending bin edges ||<-
edge_lo = bins[:-1] # starting bin edges ->||
# get cumulative counts (area under kde) for each set of bin edges
cdf_up = jsc.stats.norm.cdf(edge_hi.reshape(-1, 1), loc=events, scale=bandwidth)
cdf_dn = jsc.stats.norm.cdf(edge_lo.reshape(-1, 1), loc=events, scale=bandwidth)
# sum kde contributions in each bin
counts = (cdf_up - cdf_dn).sum(axis=1)
if density: # normalize by bin width and counts for total area = 1
db = jnp.array(jnp.diff(bins), float) # bin spacing
return counts / db / counts.sum(axis=0)
return counts
# make hists as before
bins = jnp.linspace(lo-1,hi+1,6)
make_kde_hists = jax.vmap(partial(kde_hist, bins = bins, bandwidth = .5))
kde_hists = make_kde_hists(points)
middle = len(bins)//2 - 1
mu_width = mu_grid[1]-mu_grid[0]
fig, axs = plt.subplots(2,1, sharex=True)
axs[0].bar(
mu_grid,
hists[:,middle],
# fill=False,
color = fade('C1', 'C3', num_points=grid_points),
width = mu_width,
alpha = .7,
label = 'histogram',
edgecolor= 'black',
linewidth = 0.05
)
axs[0].legend()
axs[1].bar(
mu_grid,
kde_hists[:,middle],
color = fade('C0', 'C9', num_points=grid_points),
width = mu_width,
alpha = .7,
label = 'kde',
edgecolor= 'black',
linewidth = 0.05
)
axs[1].legend()
plt.xlabel('$\mu$');
###Output
_____no_output_____
###Markdown
This envelope is much smoother than that of the original histogram, which follows from the smoothness of the (cumulative) density function defined by the kde, and allows us to get gradients! Now that we have a histogram we can differentiate, we need to study its properties (and the gradients themselves!)Two things to study:- Quality of approximation to an actual histogram (and to the true distribution)- Stability and validity of gradientsTo make this more concrete of a comparison, let's introduce a third plot to the above panel that shows the area under the true distribution:
###Code
def true_hist(bins, mu):
edge_hi = bins[1:] # ending bin edges ||<-
edge_lo = bins[:-1] # starting bin edges ->||
# get cumulative counts (area under curve) for each set of bin edges
cdf_up = jsc.stats.norm.cdf(edge_hi.reshape(-1, 1), loc=mu)
cdf_dn = jsc.stats.norm.cdf(edge_lo.reshape(-1, 1), loc=mu)
counts = (cdf_up - cdf_dn).T
return counts
truth = true_hist(bins,mu_grid)
# make hists as before (but normalize)
bins = jnp.linspace(lo-1,hi+1,6)
make_kde_hists = jax.vmap(partial(kde_hist, bins = bins, bandwidth = .5, density=True))
kde_hists = make_kde_hists(points)
make_hists = jax.vmap(partial(jnp.histogram, bins = bins, density = True))
hists, _ = make_hists(points)
middle = len(bins)//2 - 1
mu_width = mu_grid[1]-mu_grid[0]
plt.plot(
mu_grid,
truth[:,middle],
color = 'C6',
alpha = .7,
label = 'true',
)
plt.plot(
mu_grid,
hists[:,middle],
# fill=False,
color = 'C1',
alpha = .7,
label = 'histogram',
)
plt.plot(
mu_grid,
kde_hists[:,middle],
color = 'C9',
alpha = .7,
label = 'kde',
)
plt.legend()
plt.xlabel('$\mu$')
plt.suptitle("bandwidth = 0.5, #samples = 100")
###Output
_____no_output_____
###Markdown
The hyperparameter that will cause the quality of estimation to vary the most will be the *bandwidth* of the kde, which controls the width of the individual point-wise kernels. Moreover, since the kde is a data-driven estimator, the number of samples will also play a role. Let's wrap the above plot construction into functions that we can call.
###Code
def make_points(num_samples, grid_points=300, lo=-2, hi=+2):
mu_grid = jnp.linspace(lo, hi, grid_points)
rngs = [PRNGKey(i) for i in range(9)]
points = jnp.asarray(
[
jnp.tile(
normal(rng, shape = (num_samples,)),
reps = (grid_points,1)
) + mu_grid.reshape(-1,1) for rng in rngs
]
)
return points, mu_grid
def make_kdes(points, bandwidth, bins):
make_kde_hists = jax.vmap(
partial(kde_hist, bins = bins, bandwidth = bandwidth)
)
return make_kde_hists(points)
def make_mu_scan(bandwidth, num_samples, grid_points=500, lo=-2, hi=+2):
points, mu_grid = make_points(num_samples, grid_points, lo, hi)
bins = jnp.linspace(lo-3,hi+3,6)
truth = true_hist(bins,mu_grid)*num_samples
get_kde_hists = jax.vmap(partial(make_kdes, bins=bins, bandwidth=bandwidth))
kde_hists = get_kde_hists(points)
make_hists = jax.vmap(jax.vmap(partial(jnp.histogram, bins = bins)))
hists, _ = make_hists(points)
study_bin = len(bins)//2 - 1
h = jnp.array([truth[:,study_bin],
hists[:,:,study_bin].mean(axis=0),
kde_hists[:,:,study_bin].mean(axis=0)])
stds = jnp.array([hists[:,:,study_bin].std(axis=0),
kde_hists[:,:,study_bin].std(axis=0)])
return h, stds
###Output
_____no_output_____
###Markdown
Make hists in $\mu$ plane for different bws:
###Code
bws = jnp.array([0.05,0.5,0.8])
lo_samp = jax.vmap(partial(make_mu_scan, num_samples = 20))
mid_samp = jax.vmap(partial(make_mu_scan, num_samples = 100))
hi_samp = jax.vmap(partial(make_mu_scan, num_samples = 5000))
lo_hists, lo_stds = lo_samp(bws)
mid_hists, mid_stds = mid_samp(bws)
hi_hists, hi_stds = hi_samp(bws)
###Output
_____no_output_____
###Markdown
Plot!
###Code
colors = fade('C0','C9',num_points=7)
fig, axarr = plt.subplots(3,len(bws), sharex=True, sharey='row')
up, mid, down = axarr
for i,res in enumerate(zip(lo_hists, lo_stds)):
hists, stds = res
up[i].plot(mu_grid,hists[0],alpha=.4, color='C3',label="actual", linestyle=':')
up[i].fill_between(mu_grid, hists[1]+stds[0], hists[1]-stds[0], alpha=.2,color='C1',label='histogram variance')
up[i].plot(mu_grid,hists[1],alpha=.4, color='C1',label="histogram")
up[i].fill_between(mu_grid, hists[2]+stds[1], hists[2]-stds[1], alpha=.2,color='C0')
up[i].plot(mu_grid,hists[2],alpha=.6,color='C0',label="kde histogram")
up[i].set_title(f'bw={bws[i]:.2f}', color='C0')
for i,res in enumerate(zip(mid_hists, mid_stds)):
hists, stds = res
mid[i].plot(mu_grid,hists[0],alpha=.4, color='C3',label="true bin height", linestyle=':')
mid[i].fill_between(mu_grid, hists[1]+stds[0], hists[1]-stds[0], alpha=.2,color='C1',label='histogram $\pm$ std')
mid[i].plot(mu_grid,hists[1],alpha=.4,color='C1',label="histogram")
mid[i].fill_between(mu_grid, hists[2]+stds[1], hists[2]-stds[1], alpha=.2,color='C0',label='kde histogram $\pm$ std')
mid[i].plot(mu_grid,hists[2],alpha=.6,color='C0',label="kde histogram")
for i,res in enumerate(zip(hi_hists, hi_stds)):
hists, stds = res
down[i].plot(mu_grid,hists[0],alpha=.4, color='C3',label="actual", linestyle=':')
down[i].fill_between(mu_grid, hists[1]+stds[0], hists[1]-stds[0], alpha=.2,color='C1')
down[i].plot(mu_grid,hists[1],alpha=.4, color='C1',label="histogram")
down[i].fill_between(mu_grid, hists[2]+stds[1], hists[2]-stds[1], alpha=.2,color='C0')
down[i].plot(mu_grid,hists[2],alpha=.6,color='C0',label="kde histogram")
#down[0].set_ylabel('n=1e6', rotation=0, size='large')
down[1].set_xlabel("$\mu$",size='large')
mid[0].set_ylabel("frequency",size='large',labelpad=11)
mid[-1].legend(bbox_to_anchor=(1.1, 1.05), frameon=False)
fig.tight_layout();
plt.savefig('samples_vs_bw_nofancy.png', bbox_inches='tight')
###Output
_____no_output_____
###Markdown
Cool! Now, let's think about gradients. Since we know analytically that the height of a bin defined by $(a,b)$ for a given $\mu$ value is just $$bin_{\mathsf{true}}(\mu) = \mathsf{normcdf}(b;\mu) - \mathsf{normcdf}(a;\mu) $$ we can then just differentiate this with respect to $\mu$ by hand! $$\mathsf{normcdf}(x;\mu) = \frac{1}{2}\left[1+\operatorname{erf}\left(\frac{x-\mu}{\sigma \sqrt{2}}\right)\right]$$ Since $\frac{d}{d z} \operatorname{erf}(z)=\frac{2}{\sqrt{\pi}} e^{-z^{2}}$ and $\frac{\partial}{\partial\mu}\left(\frac{x-\mu}{\sigma\sqrt{2}}\right) = -\frac{1}{\sigma\sqrt{2}}$, the chain rule gives $$\Rightarrow \frac{\partial}{\partial\mu}\mathsf{normcdf}(x;\mu) = -\frac{1}{\sigma\sqrt{2\pi}} e^{-\frac{(x-\mu)^2}{2\sigma^2}}$$ We have $\sigma=1$, making this simpler: $$\Rightarrow \frac{\partial}{\partial\mu}\mathsf{normcdf}(x;\mu) = -\frac{1}{\sqrt{2\pi}} e^{-\frac{(x-\mu)^2}{2}}$$ All together: $$\Rightarrow \frac{\partial}{\partial\mu}bin_{\mathsf{true}}(\mu) = -\frac{1}{\sqrt{2\pi}}\left[\left(e^{-\frac{(b-\mu)^2}{2}}\right) - \left( e^{-\frac{(a-\mu)^2}{2}}\right)\right]$$ The histogram's gradient will be ill-defined, but we can get an estimate of it through finite differences: $$\mathsf{grad}_{\mathsf{hist}}(bin)(\mu_i) \approx \frac{bin(\mu_{i+1})-bin(\mu_i)}{\mu_{i+1}-\mu_{i}}$$ For a kde, we can just use autodiff.
###Code
def true_grad(mu,bins):
b = bins[1:] # ending bin edges ||<-
a = bins[:-1] # starting bin edges ->||
return -(1/((2*jnp.pi)**0.5))*(jnp.exp(-((b-mu)**2)/2) - jnp.exp(-((a-mu)**2)/2))
bins = jnp.linspace(-5,5,6)
mus = jnp.linspace(-2,2,300)
true_grad_many = jax.vmap(partial(true_grad, bins = bins))
grads = true_grad_many(mus)
plt.plot(mus, grads[:,2]);
###Output
_____no_output_____
###Markdown
Shape looks good!
###Code
def gen_points(mu, jrng, nsamples):
points = normal(jrng, shape = (nsamples,))+mu
return points
def bin_height(mu, jrng, bw, nsamples, bins):
points = gen_points(mu, jrng, nsamples)
return kde_hist(points, bins, bandwidth=bw)[2]
def kde_grads(bw, nsamples, lo=-2, hi=+2, grid_size=300):
bins = jnp.linspace(lo-3,hi+3,6)
mu_grid = jnp.linspace(lo,hi,grid_size)
rngs = [PRNGKey(i) for i in range(9)]
grad_fun = jax.grad(bin_height)
grads = []
for i,jrng in enumerate(rngs):
get_grads = jax.vmap(partial(
grad_fun, jrng=jrng, bw=bw, nsamples=nsamples, bins=bins
))
grads.append(get_grads(mu_grid))
return jnp.asarray(grads)
x = kde_grads(0.2,1000).mean(axis=0)
mus = jnp.linspace(-2,2,300)
plt.plot(mus,x)
bins = jnp.linspace(-5,5,6)
true_grad_many = jax.vmap(partial(true_grad, bins = bins))
grads = true_grad_many(mus)*1000
plt.plot(mus, grads[:,2]);
###Output
_____no_output_____
###Markdown
Okay, looks like the kde grads work as anticipated -- we just need to look at the hist grads now.
###Code
def get_hist(mu, jrng, nsamples, bins):
points = gen_points(mu, jrng, nsamples)
hist, _ = jnp.histogram(points, bins)
return hist[2]
def hist_grad_numerical(bin_heights, mu_width):
# in mu plane
lo = bin_heights[:-1]
hi = bin_heights[1:]
bin_width = (bins[1]-bins[0])
grad_left = -(lo-hi)/mu_width
# grad_right = -grad_left
return grad_left
def hist_grads(nsamples, lo=-2, hi=+2, grid_size=300):
bins = jnp.linspace(lo-3,hi+3,6)
mu_grid = jnp.linspace(lo,hi,grid_size)
rngs = [PRNGKey(i) for i in range(9)]
grad_fn = partial(hist_grad_numerical, mu_width=mu_grid[1]-mu_grid[0])
grads = []
for jrng in rngs:
get_heights = jax.vmap(partial(
get_hist, jrng=jrng, nsamples=nsamples, bins=bins
))
grads.append(grad_fn(get_heights(mu_grid)))
return jnp.asarray(grads)
hist_grads(1000).shape
x = kde_grads(0.2,1000).mean(axis=0)
mus = jnp.linspace(-2,2,300)
plt.plot(mus,x, label= 'kde')
bins = jnp.linspace(-5,5,6)
plt.plot(mus[:-1],hist_grads(1000).mean(axis=0), label = 'hist')
true_grad_many = jax.vmap(partial(true_grad, bins = bins))
grads = true_grad_many(mus)*1000
plt.plot(mus, grads[:,2], label='true')
plt.legend();
###Output
_____no_output_____
###Markdown
Cool! Everything is scaling properly to the number of samples, and we can see the jaggedness of the histogram gradients.Now let's combine these functions into one, and run that over the same bandwidth and sample numbers as before!~
###Code
def both_grads(bw, nsamples, lo=-2, hi=+2, grid_size=300):
bins = jnp.linspace(lo-3,hi+3,6)
mu_grid = jnp.linspace(lo,hi,grid_size)
hist_grad_fun = partial(hist_grad_numerical, mu_width=mu_grid[1]-mu_grid[0])
grad_fun = jax.grad(bin_height)
hist_grads = []
kde_grads = []
rngs = [PRNGKey(i) for i in range(9)]
for jrng in rngs:
get_heights = jax.vmap(partial(
get_hist, jrng=jrng, nsamples=nsamples, bins=bins
))
hist_grads.append(hist_grad_fun(get_heights(mu_grid)))
get_grads = jax.vmap(partial(
grad_fun, jrng=jrng, bw=bw, nsamples=nsamples, bins=bins
))
kde_grads.append(get_grads(mu_grid))
hs = jnp.array(hist_grads)
ks = jnp.array(kde_grads)
h = jnp.array([hs.mean(axis=0),hs.std(axis=0)])
k = jnp.array([ks.mean(axis=0),ks.std(axis=0)])
return h,k
bws = jnp.array([0.05,0.5,0.8])
samps = [20,100,5000]
grid_size = 60
lo_samp = jax.vmap(partial(both_grads, nsamples = samps[0],grid_size=grid_size))
mid_samp = jax.vmap(partial(both_grads, nsamples = samps[1],grid_size=grid_size))
hi_samp = jax.vmap(partial(both_grads, nsamples = samps[2],grid_size=grid_size))
lo_hist, lo_kde = lo_samp(bws)
mid_hist, mid_kde = mid_samp(bws)
hi_hist, hi_kde = hi_samp(bws)
mu_grid = jnp.linspace(-2,2,grid_size)
true = [true_grad_many(mu_grid)[:,2]*s for s in samps]
fig, axarr = plt.subplots(3,len(bws), sharex=True, sharey='row')
up, mid, down = axarr
for i,res in enumerate(zip(lo_hist, lo_kde)):
hist_grads, hist_stds = res[0]
kde_grads, kde_stds = res[1]
up[i].plot(mu_grid,true[0],alpha=.4, color='C3',label="actual", linestyle=':')
y = jnp.array(up[i].get_ylim())
up[i].plot(mu_grid[:-1], hist_grads,alpha=.3, color='C1',label="histogram",linewidth=0.5)
up[i].fill_between(mu_grid, kde_grads+kde_stds, kde_grads-kde_stds, alpha=.2,color='C0',label='kde histogram $\pm$ std')
up[i].plot(mu_grid,kde_grads,alpha=.6,color='C0',label="kde histogram")
up[i].set_title(f'bw={bws[i]:.2f}', color='C0')
up[i].set_ylim(y*1.3)
for i,res in enumerate(zip(mid_hist, mid_kde)):
hist_grads, hist_stds = res[0]
kde_grads, kde_stds = res[1]
mid[i].plot(mu_grid,true[1],alpha=.4, color='C3',label="actual", linestyle=':')
y = jnp.array(mid[i].get_ylim())
mid[i].plot(mu_grid[:-1], hist_grads,alpha=.3, color='C1',label="histogram",linewidth=0.5)
mid[i].fill_between(mu_grid, kde_grads+kde_stds, kde_grads-kde_stds, alpha=.2,color='C0',label='kde histogram $\pm$ std')
mid[i].plot(mu_grid,kde_grads,alpha=.6,color='C0',label="kde histogram")
mid[i].set_ylim(y*1.3)
for i,res in enumerate(zip(hi_hist, hi_kde)):
hist_grads, hist_stds = res[0]
kde_grads, kde_stds = res[1]
down[i].plot(mu_grid,true[2],alpha=.4, color='C3',label="actual", linestyle=':')
y = jnp.array(down[i].get_ylim())
down[i].plot(mu_grid[:-1], hist_grads,alpha=.2, color='C1',label="histogram")
down[i].fill_between(mu_grid, kde_grads+kde_stds, kde_grads-kde_stds, alpha=.2,color='C0')
down[i].plot(mu_grid,kde_grads,alpha=.6,color='C0',label="kde histogram")
down[i].set_ylim(y*1.3)
down[1].set_xlabel("$\mu$",size='large')
mid[0].set_ylabel("$\partial\,$frequency / $\partial\mu$",size='large',labelpad=11)
mid[-1].legend(bbox_to_anchor=(1.1, 1.05), frameon=False)
fig.tight_layout();
plt.savefig('samples_vs_bw_nofancy_gradients.png', bbox_inches='tight')
###Output
_____no_output_____
###Markdown
Suuuuuuper! Let's now look at metrics of quality.
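One ingredient is missing at this point: `kde_grads_mse` is called in the next cell but never defined in the notebook. The cell below is a minimal sketch of what it could look like, consistent with how it is called (vmapped over bandwidths for a fixed sample count) and with the "gradient mean relative error" colorbar label; the exact metric used originally is an assumption.
###Code
# Sketch (assumption): scalar error between the mean KDE gradient and the analytic gradient.
def kde_grads_mse(bw, nsamples, lo=-2, hi=+2, grid_size=60):
    mu_grid = jnp.linspace(lo, hi, grid_size)
    bins = jnp.linspace(lo - 3, hi + 3, 6)
    # analytic gradient of the study bin, scaled to counts
    truth = jax.vmap(partial(true_grad, bins=bins))(mu_grid)[:, 2] * nsamples
    # mean over random seeds of the KDE histogram gradient
    kde = kde_grads(bw, nsamples, lo=lo, hi=hi, grid_size=grid_size).mean(axis=0)
    return jnp.mean(jnp.abs(kde - truth) / (jnp.abs(truth) + 1e-8))
###Output
_____no_output_____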
###Code
bws = jnp.linspace(0.05,0.5,6)
samps = jnp.linspace(1000,50000,6).astype('int')
funcs = [jax.vmap(partial(kde_grads_mse, nsamples=n)) for n in samps]
mses = jnp.array([f(bws) for f in funcs])
X, Y = jnp.meshgrid(bws,samps)
p = plt.contourf(X,Y,mses)
c = plt.colorbar(p)
c.set_label('gradient mean relative error',rotation=270, labelpad=15)
mindex = jnp.argmin(mses.ravel())
plt.scatter(X.ravel(), Y.ravel(), alpha=0.8)
plt.scatter(X.ravel()[mindex], Y.ravel()[mindex], label = 'minimum error', color='C1')
# plt.scatter(X.ravel()[mindex+20], Y.ravel()[mindex+20], label = 'minimum error', color='C1')
plt.xlabel('bandwidth')
plt.ylabel('#samples')
plt.legend()
###Output
_____no_output_____ |
preprocess/preprocess_v4/preprocess_v4.ipynb | ###Markdown
Replacing missing values in DESCRIPTION_TRANSLATED
###Code
train_df.loc[train_df['DESCRIPTION_TRANSLATED'].isna(), 'DESCRIPTION_TRANSLATED'] = train_df.loc[train_df['DESCRIPTION_TRANSLATED'].isna(), 'DESCRIPTION']
train_df.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 91333 entries, 0 to 91332
Data columns (total 17 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 LOAN_ID 91333 non-null int64
1 ORIGINAL_LANGUAGE 91333 non-null object
2 DESCRIPTION 91333 non-null object
3 DESCRIPTION_TRANSLATED 91333 non-null object
4 IMAGE_ID 91333 non-null int64
5 ACTIVITY_NAME 91333 non-null object
6 SECTOR_NAME 91333 non-null object
7 LOAN_USE 91333 non-null object
8 COUNTRY_CODE 91333 non-null object
9 COUNTRY_NAME 91333 non-null object
10 TOWN_NAME 88573 non-null object
11 CURRENCY_POLICY 91333 non-null object
12 CURRENCY_EXCHANGE_COVERAGE_RATE 82061 non-null float64
13 CURRENCY 91333 non-null object
14 TAGS 73347 non-null object
15 REPAYMENT_INTERVAL 91333 non-null object
16 DISTRIBUTION_MODEL 91333 non-null object
dtypes: float64(1), int64(2), object(14)
memory usage: 11.8+ MB
###Markdown
Preprocessing DESCRIPTION and DESCRIPTION_TRANSLATED
###Code
puncts = [',', '.', '"', ':', ')', '(', '-', '!', '?', '|', ';', "'", '$', '&', '/', '[', ']', '>', '%', '=', '#', '*', '+', '\\', '•', '~', '@', '£',
'·', '_', '{', '}', '©', '^', '®', '`', '<', '→', '°', '€', '™', '›', '♥', '←', '×', '§', '″', '′', 'Â', '█', '½', 'à', '…', '\n', '\xa0', '\t',
'“', '★', '”', '–', '●', 'â', '►', '−', '¢', '²', '¬', '░', '¶', '↑', '±', '¿', '▾', '═', '¦', '║', '―', '¥', '▓', '—', '‹', '─', '\u3000', '\u202f',
'▒', ':', '¼', '⊕', '▼', '▪', '†', '■', '’', '▀', '¨', '▄', '♫', '☆', 'é', '¯', '♦', '¤', '▲', 'è', '¸', '¾', 'Ã', '⋅', '‘', '∞', '«',
'∙', ')', '↓', '、', '│', '(', '»', ',', '♪', '╩', '╚', '³', '・', '╦', '╣', '╔', '╗', '▬', '❤', 'ï', 'Ø', '¹', '≤', '‡', '√', ]
html_tags = ['<p>', '</p>', '<table>', '</table>', '<tr>', '</tr>', '<ul>', '<ol>', '<dl>', '</ul>', '</ol>',
'</dl>', '<li>', '<dd>', '<dt>', '</li>', '</dd>', '</dt>', '<h1>', '</h1>',
'<br>', '<br/>', '<br />','<strong>', '</strong>', '<span>', '</span>', '<blockquote>', '</blockquote>',
'<pre>', '</pre>', '<div>', '</div>', '<h2>', '</h2>', '<h3>', '</h3>', '<h4>', '</h4>', '<h5>', '</h5>',
'<h6>', '</h6>', '<blck>', '<pr>', '<code>', '<th>', '</th>', '<td>', '</td>', '<em>', '</em>']
empty_expressions = ['<', '>', '&', ' ',
' ', '–', '—', ' '
'"', ''']
def pre_preprocess(x):
return str(x).lower()
def rm_spaces(text):
spaces = ['\u200b', '\u200e', '\u202a', '\u2009', '\u2028', '\u202c', '\ufeff', '\uf0d8', '\u2061', '\u3000', '\x10', '\x7f', '\x9d', '\xad',
'\x97', '\x9c', '\x8b', '\x81', '\x80', '\x8c', '\x85', '\x92', '\x88', '\x8d', '\x80', '\x8e', '\x9a', '\x94', '\xa0',
'\x8f', '\x82', '\x8a', '\x93', '\x90', '\x83', '\x96', '\x9b', '\x9e', '\x99', '\x87', '\x84', '\x9f',
]
for space in spaces:
text = text.replace(space, ' ')
return text
def remove_urls(x):
x = re.sub(r'(https?://[a-zA-Z0-9.-]*)', r'', x)
# original
x = re.sub(r'(quote=\w+\s?\w+;?\w+)', r'', x)
return x
def clean_puncts(x):
for punct in puncts:
x = x.replace(punct, f' {punct} ')
return x
def clean_html_tags(x, stop_words=[]):
for r in html_tags:
x = x.replace(r, '')
for r in empty_expressions:
x = x.replace(r, ' ')
for r in stop_words:
x = x.replace(r, '')
return x
def preprocess(data):
data = data.apply(lambda x: pre_preprocess(x))
data = data.apply(lambda x: rm_spaces(x))
data = data.apply(lambda x: remove_urls(x))
data = data.apply(lambda x: clean_html_tags(x))
data = data.apply(lambda x: clean_puncts(x))
return data
train_df['clean_DESCRIPTION_TRANSLATED'] = preprocess(train_df['DESCRIPTION_TRANSLATED'])
test_df['clean_DESCRIPTION_TRANSLATED'] = preprocess(test_df['DESCRIPTION_TRANSLATED'])
train_df.loc[0, 'clean_DESCRIPTION_TRANSLATED']
###Output
_____no_output_____
###Markdown
Categorical encoding
###Code
# category list
OTHER_COUNTRY = ['EG',
'MZ',
'HT',
'MX',
'BO',
'US',
'TO',
'SB',
'AL',
'CR',
'GE',
'SL',
'ZM',
'FJ',
'BR',
'MD',
'ML',
'CM',
'MW',
'DO',
'XK',
'TR',
'TH',
'NP',
'PG',
'PA',
'PR',
'LS',
'IL',
'AM']
OTHER_SECTOR_NAME = ['Transportation',
'Construction',
'Manufacturing',
'Entertainment',
'Wholesale']
train_df['SECTOR_NAME'] = train_df['SECTOR_NAME'].apply(lambda x: x if x not in OTHER_SECTOR_NAME else 'other')
train_df['COUNTRY_CODE'] = train_df['COUNTRY_CODE'].apply(lambda x: x if x not in OTHER_COUNTRY else 'other')
test_df['SECTOR_NAME'] = test_df['SECTOR_NAME'].apply(lambda x: x if x not in OTHER_SECTOR_NAME else 'other')
test_df['COUNTRY_CODE'] = test_df['COUNTRY_CODE'].apply(lambda x: x if x not in OTHER_COUNTRY else 'other')
df = pd.concat([train_df, test_df]).reset_index(drop=True)
df.head()
label_enc_features = ['SECTOR_NAME', 'COUNTRY_CODE']
ce_label_enc = ce.OrdinalEncoder(cols=label_enc_features, handle_unknown='impute')
ce_label_enc.fit(df)
train_df = ce_label_enc.transform(train_df)
test_df = ce_label_enc.transform(test_df)
joblib.dump(ce_label_enc, 'ce_label_enc.joblib')
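# category_encoders' OrdinalEncoder assigns codes starting at 1, so shift them to 0-based ids below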
train_df['SECTOR_NAME'] = train_df['SECTOR_NAME'] - 1
train_df['COUNTRY_CODE'] = train_df['COUNTRY_CODE'] - 1
test_df['SECTOR_NAME'] = test_df['SECTOR_NAME'] - 1
test_df['COUNTRY_CODE'] = test_df['COUNTRY_CODE'] - 1
train_df['SECTOR_NAME'].value_counts()
test_df['SECTOR_NAME'].value_counts()
train_df['LOAN_AMOUNT'] = target
train_df.head()
train_df.to_csv('preprocess_train.csv', index=False)
test_df.to_csv('preprocess_test.csv', index=False)
test_df.loc[0, 'clean_DESCRIPTION_TRANSLATED']
train_df.loc[0, 'clean_DESCRIPTION_TRANSLATED']
###Output
_____no_output_____ |
Solution/Day_35_Solution.ipynb | ###Markdown
Exercise: following the course example with the training dataset, first look at the characteristics of the test data, then merge the test and training datasets and answer the questions below. Purpose: to get familiar with this kind of problem and with which functions we need to use for the computations. * Q1: What differences are there between the variables of the test dataset and the training dataset? * Q2: Does the test dataset have missing values? * Q3: Pick one variable from the merged data, try several different treatments of its missing values, and use plots or summary numbers to help judge the difference before and after imputation; for this variable, try to explain how each method differs.
###Code
#import the required libraries
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from IPython.display import display
#let figures display inside the jupyter notebook
%matplotlib inline
#helper for displaying dataframes side by side; no need to understand it yet, just use it
from IPython.display import display
from IPython.display import display_html
def display_side_by_side(*args):
html_str=''
for df in args:
html_str+=df.to_html()
display_html(html_str.replace('table','table style="display:inline"'),raw=True)
# read in the training and test datasets
df_train = pd.read_csv("Titanic_train.csv")
df_test = pd.read_csv("Titanic_test.csv")
###Output
_____no_output_____
###Markdown
Q1: Determine whether the column variables of the test dataset and the training dataset differ.
###Code
# Q1: determine whether the column variables of the test and training datasets differ
'''
Hint: which functions can we use to see the column variables of the data?
'''
print(df_test.columns)
print(df_train.columns)
###Output
Index(['PassengerId', 'Pclass', 'Name', 'Sex', 'Age', 'SibSp', 'Parch',
'Ticket', 'Fare', 'Cabin', 'Embarked'],
dtype='object')
Index(['PassengerId', 'Survived', 'Pclass', 'Name', 'Sex', 'Age', 'SibSp',
'Parch', 'Ticket', 'Fare', 'Cabin', 'Embarked'],
dtype='object')
###Markdown
A1: The test dataset has no 'Survived' column. Q2: Does the test dataset have missing values?
###Code
#characteristics of the test dataset
print("資料筆數=",df_test.shape)
# check whether the test dataset has missing values
# reports which columns contain missing values
# any: returns False only when every element of a tuple/list is empty, 0 or False; otherwise it returns True.
print(df_test.isnull().any())
# count how many variables in the data contain null values
print(df_test.isnull().any().sum())
###Output
PassengerId False
Pclass False
Name False
Sex False
Age True
SibSp False
Parch False
Ticket False
Fare True
Cabin True
Embarked False
dtype: bool
3
###Markdown
A2: The test dataset does have missing values. Q3: Pick one variable from the merged data, try several different missing-value treatments, and use plots to help judge the difference before and after imputation; for this variable, try to explain how each method differs.
###Code
#merge the datasets
data = df_train.append(df_test)
print(data.info())
print('cabin 遺失個數=',data['Cabin'].isnull().sum())
# using Cabin as an example, first look at the characteristics of its values
print(data["Cabin"].value_counts())
###Output
C23 C25 C27 6
G6 5
B57 B59 B63 B66 5
F4 4
C22 C26 4
..
A6 1
B61 1
T 1
D9 1
C47 1
Name: Cabin, Length: 186, dtype: int64
###Markdown
Cabin cannot be imputed arbitrarily; it needs further inspection and handling first. * Method 1: treat the missing values as their own category. * Method 2: check whether Cabin is related to other variables and impute from them. * Method 3: the missing rate is too high, so leave this variable out of the model for now.
###Code
#* Method 1: treat the missing values as their own category.
data['Cabin'].head(10)
data["Cabin"] = data['Cabin'].apply(lambda x : str(x)[0] if not pd.isnull(x) else 'NoCabin')
data["Cabin"].unique()
# after the adjustment, inspect how the missing category behaves
sns.countplot(data['Cabin'], hue=data['Survived'])
#conclusion: the group with missing Cabin has a higher death rate
#numeric check
data[['Cabin', 'Survived']].groupby(['Cabin'], as_index=False).mean().sort_values(by='Survived', ascending=False)
# the NoCabin rate is closest to deck T
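# Method 2 from the markdown above is not implemented in the original notebook.
# Minimal sketch (assumption: borrow the most frequent deck letter within each Pclass).
deck_mode = data.loc[data['Cabin'] != 'NoCabin'].groupby('Pclass')['Cabin'].agg(lambda s: s.mode()[0])
data['Cabin_filled'] = data['Cabin'].where(data['Cabin'] != 'NoCabin', data['Pclass'].map(deck_mode))
data[['Cabin_filled', 'Survived']].groupby(['Cabin_filled'], as_index=False).mean()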
###Output
_____no_output_____ |
Pedro_SPL_most_severe_consequence_prediction.ipynb | ###Markdown
Pedro
###Code
import re
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import pydotplus
from IPython.display import Image
from six import StringIO
import matplotlib.image as mpimg
#%pylab inline
from sklearn.tree import DecisionTreeClassifier, DecisionTreeRegressor, plot_tree, export_graphviz
from sklearn import preprocessing, metrics
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report, confusion_matrix, mean_squared_error
#!pip install biopython
from Bio import Entrez
from Bio import SeqIO
url = "https://raw.githubusercontent.com/waldeyr/Pedro_RED_SPL/main/RED_SPL_severe.csv"
df = pd.read_csv(url, sep=',' )
df[['most_severe_cons']].tail()
df.columns
temp = df.all_cons.str.split(' ', expand=True)
temp.columns = ['cons01', 'cons02', 'cons03', 'cons04', 'cons05', 'cons06', 'cons07', 'cons08', 'cons09', 'cons10', 'cons11']
df = pd.concat([df, temp], axis=1)
df.columns
def getChromossome( ncbi_id ):
if "chr" in ncbi_id:
return ncbi_id
else:
Entrez.email = "[email protected]"
with Entrez.efetch( db="nucleotide", rettype="gb", id=ncbi_id ) as handle:
record = SeqIO.read(handle, "gb")
for f in record.features:
if f.qualifiers['chromosome'][0]:
return "chr" + str(f.qualifiers['chromosome'][0])
else:
return ncbi_id
df['Region'] = df['Region'].apply(lambda x: getChromossome(x))
def setRegion(Region):
if Region == 'chrX': return 23 # chromossome X
if Region == 'chrY': return 24 # chromossome Y
if Region == 'chrM': return 25 # Mitochondrial
return re.sub('chr', '', Region)
df['Region'] = df['Region'].apply(lambda x: setRegion(str(x)))
df = df.fillna(int(0)) # all NaN values sit in string-typed columns; they will be factorized later, so the 0 simply becomes its own category
df.drop('most_severe_cons', axis=1, inplace=True)
df.drop('all_cons', axis=1, inplace=True)
df.drop('cons02', axis=1, inplace=True)
df.drop('cons03', axis=1, inplace=True)
df.drop('cons04', axis=1, inplace=True)
df.drop('cons05', axis=1, inplace=True)
df.drop('cons06', axis=1, inplace=True)
df.drop('cons07', axis=1, inplace=True)
df.drop('cons08', axis=1, inplace=True)
df.drop('cons09', axis=1, inplace=True)
df.drop('cons10', axis=1, inplace=True)
df.drop('cons11', axis=1, inplace=True)
df.Region = pd.factorize(df.Region, na_sentinel=None)[0]
df.subs = pd.factorize(df.subs, na_sentinel=None)[0]
df.defSubs = pd.factorize(df.defSubs, na_sentinel=None)[0]
df.sym = pd.factorize(df.sym, na_sentinel=None)[0]
df.ensembl_id = pd.factorize(df.ensembl_id, na_sentinel=None)[0]
df.type = pd.factorize(df.type, na_sentinel=None)[0]
df.genetic_var = pd.factorize(df.genetic_var, na_sentinel=None)[0]
df.aa_change = pd.factorize(df.aa_change, na_sentinel=None)[0]
df.codons_change = pd.factorize(df.codons_change, na_sentinel=None)[0]
df.RED_type = pd.factorize(df.RED_type, na_sentinel=None)[0]
df.cons01 = pd.factorize(df.cons01, na_sentinel=None)[0]
# df.cons02 = pd.factorize(df.cons02, na_sentinel=None)[0]
# df.cons03 = pd.factorize(df.cons03, na_sentinel=None)[0]
# df.cons04 = pd.factorize(df.cons04, na_sentinel=None)[0]
# df.cons05 = pd.factorize(df.cons05, na_sentinel=None)[0]
# df.cons06 = pd.factorize(df.cons06, na_sentinel=None)[0]
# df.cons07 = pd.factorize(df.cons07, na_sentinel=None)[0]
# df.cons08 = pd.factorize(df.cons08, na_sentinel=None)[0]
# df.cons09 = pd.factorize(df.cons09, na_sentinel=None)[0]
# df.cons10 = pd.factorize(df.cons10, na_sentinel=None)[0]
# df.cons11 = pd.factorize(df.cons11, na_sentinel=None)[0]
df.tail()
y = df['cons01'].values
y
# Removing columns that are not representative
df.drop('Position', axis=1, inplace=True)
df.drop('p', axis=1, inplace=True)
df.drop('p_adj', axis=1, inplace=True)
df.drop('ensembl_id', axis=1, inplace=True)
df.drop('sym', axis=1, inplace=True)
X = df.drop(['cons01'], axis=1)
X.columns
X.dtypes
X
X_treino, X_teste, y_treino, y_teste = train_test_split(X, y, test_size = 0.1, shuffle = True, random_state = 1)
arvore = DecisionTreeClassifier(criterion='entropy', max_depth=2, min_samples_leaf=30, random_state=0)
modelo = arvore.fit(X_treino, y_treino)
%pylab inline
previsao = arvore.predict(X_teste)
np.sqrt(mean_squared_error(y_teste, previsao))
pylab.figure(figsize=(50,40))
plot_tree(arvore, feature_names=X_treino.columns)
# Applying the trained model to the test set
y_predicoes = modelo.predict(X_teste)
# Model evaluation
print(f"Acurácia da árvore: {metrics.accuracy_score(y_teste, y_predicoes)}")
print(classification_report(y_teste, y_predicoes))
###Output
Acurácia da árvore: 0.631578947368421
precision recall f1-score support
0 0.56 1.00 0.71 10
1 0.00 0.00 0.00 14
2 0.00 0.00 0.00 1
3 0.00 0.00 0.00 1
4 0.85 0.83 0.84 53
5 0.74 0.82 0.78 17
6 0.49 0.93 0.64 43
7 0.00 0.00 0.00 17
8 0.00 0.00 0.00 4
9 0.00 0.00 0.00 7
10 0.00 0.00 0.00 4
accuracy 0.63 171
macro avg 0.24 0.33 0.27 171
weighted avg 0.49 0.63 0.54 171
###Markdown
Only annotated
###Code
df_new = df.loc[df['annoted'] == 1]
df_new.columns
y = df_new['cons01'].values
X = df_new.drop(['cons01'], axis=1)
X_treino, X_teste, y_treino, y_teste = train_test_split(X, y, test_size = 0.1, shuffle = True, random_state = 1)
arvore = DecisionTreeClassifier(criterion='entropy', max_depth=5, min_samples_leaf=30, random_state=0)
modelo = arvore.fit(X_treino, y_treino)
%pylab inline
previsao = arvore.predict(X_teste)
np.sqrt(mean_squared_error(y_teste, previsao))
pylab.figure(figsize=(50,40))
plot_tree(arvore, feature_names=X_treino.columns)
# Applying the trained model to the test set
y_predicoes = modelo.predict(X_teste)
# Model evaluation
print(f"Acurácia da árvore: {metrics.accuracy_score(y_teste, y_predicoes)}")
print(classification_report(y_teste, y_predicoes))
###Output
Acurácia da árvore: 0.6944444444444444
precision recall f1-score support
1 0.50 0.33 0.40 3
4 0.60 0.75 0.67 8
5 0.00 0.00 0.00 1
6 0.75 0.86 0.80 21
7 0.00 0.00 0.00 3
accuracy 0.69 36
macro avg 0.37 0.39 0.37 36
weighted avg 0.61 0.69 0.65 36
|
ds_book/docs/Lesson2b_prep_data_ML_segmentation.ipynb | ###Markdown
Process dataset for use with a deep learning segmentation network > A guide for processing raster data and labels into an ML-ready format for a deep-learning-based semantic segmentation model. Setup Notebook ```{admonition} **Version control** Colab updates without warning to users, which can cause notebooks to break. Therefore, we are pinning library versions.```
###Code
# install required libraries
!pip install -q rasterio==1.2.10
!pip install -q geopandas==0.10.2
!pip install -q radiant_mlhub # for dataset access, see: https://mlhub.earth/
# import required libraries
import os, glob, functools, fnmatch, io, shutil, tarfile, json
from zipfile import ZipFile
from itertools import product
from pathlib import Path
import urllib.request
import numpy as np
from fractions import Fraction
import matplotlib.pyplot as plt
import matplotlib as mpl
mpl.rcParams['axes.grid'] = False
mpl.rcParams['figure.figsize'] = (12,12)
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler
import matplotlib.image as mpimg
import pandas as pd
from PIL import Image
import rasterio
from rasterio.merge import merge
from rasterio.plot import show
from rasterio import features, mask, windows
import geopandas as gpd
from IPython.display import clear_output
import cv2
from timeit import default_timer as timer
from tqdm.notebook import tqdm
from radiant_mlhub import Dataset, client, get_session, Collection
# configure Radiant Earth MLHub access
!mlhub configure
# Mount google drive.
from google.colab import drive
drive.mount('/content/gdrive')
# set your root directory and tiled data folders
if 'google.colab' in str(get_ipython()):
# mount google drive
from google.colab import drive
drive.mount('/content/gdrive')
root_dir = '/content/gdrive/My Drive/tf-eo-devseed/'
workshop_dir = '/content/gdrive/My Drive/tf-eo-devseed-workshop'
dirs = [root_dir, workshop_dir]
for d in dirs:
if not os.path.exists(d):
os.makedirs(d)
print('Running on Colab')
else:
root_dir = os.path.abspath("./data/tf-eo-devseed")
workshop_dir = os.path.abspath('./tf-eo-devseed-workshop')
print(f'Not running on Colab, data needs to be downloaded locally at {os.path.abspath(root_dir)}')
%cd $root_dir
###Output
_____no_output_____
###Markdown
Enabling GPU```{Tip}This notebook can utilize a GPU and works better if you use one. Hopefully this notebook is using a GPU, and we can check with the following code.If it's not using a GPU you can change your session/notebook to use a GPU. See [Instructions](https://colab.research.google.com/notebooks/gpu.ipynbscrollTo=sXnDmXR7RDr2).```
###Code
%tensorflow_version 2.x
import tensorflow as tf
device_name = tf.test.gpu_device_name()
if device_name != '/device:GPU:0':
raise SystemError('GPU device not found')
print('Found GPU at: {}'.format(device_name))
###Output
_____no_output_____
###Markdown
Access the dataset We will use a crop type classification dataset from Radiant Earth MLHub: https://mlhub.earth/data/dlr_fusion_competition_germany
###Code
ds = Dataset.fetch('dlr_fusion_competition_germany')
for c in ds.collections:
print(c.id)
collections = [
'dlr_fusion_competition_germany_train_source_planet_5day',
'dlr_fusion_competition_germany_test_source_planet_5day',
'dlr_fusion_competition_germany_train_labels',
'dlr_fusion_competition_germany_test_labels'
]
def download(collection_id):
print(f'Downloading {collection_id}...')
collection = Collection.fetch(collection_id)
path = collection.download('.')
tar = tarfile.open(path, "r:gz")
tar.extractall()
tar.close()
os.remove(path)
def resolve_path(base, path):
return Path(os.path.join(base, path)).resolve()
def load_df(collection_id):
collection = json.load(open(f'{collection_id}/collection.json', 'r'))
rows = []
item_links = []
for link in collection['links']:
if link['rel'] != 'item':
continue
item_links.append(link['href'])
for item_link in item_links:
item_path = f'{collection_id}/{item_link}'
current_path = os.path.dirname(item_path)
item = json.load(open(item_path, 'r'))
tile_id = item['id'].split('_')[-1]
for asset_key, asset in item['assets'].items():
rows.append([
tile_id,
None,
None,
asset_key,
str(resolve_path(current_path, asset['href']))
])
for link in item['links']:
if link['rel'] != 'source':
continue
link_path = resolve_path(current_path, link['href'])
source_path = os.path.dirname(link_path)
try:
source_item = json.load(open(link_path, 'r'))
except FileNotFoundError:
continue
datetime = source_item['properties']['datetime']
satellite_platform = source_item['collection'].split('_')[-1]
for asset_key, asset in source_item['assets'].items():
rows.append([
tile_id,
datetime,
satellite_platform,
asset_key,
str(resolve_path(source_path, asset['href']))
])
return pd.DataFrame(rows, columns=['tile_id', 'datetime', 'satellite_platform', 'asset', 'file_path'])
for c in collections:
download(c)
train_df = load_df('dlr_fusion_competition_germany_train_labels')
test_df = load_df('dlr_fusion_competition_germany_test_labels')
###Output
_____no_output_____
###Markdown
Check out the labels Class names and identifiers extracted from the documentation provided here: https://radiantearth.blob.core.windows.net/mlhub/esa-food-security-challenge/Crops_GT_Brandenburg_Doc.pdf
###Code
# Read the classes
pd.set_option('display.max_colwidth', None)
data = {'class_names': ['Background', 'Wheat', 'Rye', 'Barley', 'Oats', 'Corn', 'Oil Seeds', 'Root Crops', 'Meadows', 'Forage Crops'],
'class_ids': [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
}
classes = pd.DataFrame(data)
print(classes)
classes.to_csv('lulc_classes.csv')
# Let's check the class labels
labels_geo = gpd.read_file('dlr_fusion_competition_germany_train_labels/dlr_fusion_competition_germany_train_labels_33N_18E_242N/labels.geojson')
classes = labels_geo.crop_id.unique()
classes.sort()
print("classes in labels geojson: ", classes)
###Output
class_names class_ids
0 Background 0
1 Wheat 1
2 Rye 2
3 Barley 3
4 Oats 4
5 Corn 5
6 Oil Seeds 6
7 Root Crops 7
8 Meadows 8
9 Forage Crops 9
classes in labels geojson: [1 2 3 4 5 6 7 8 9]
###Markdown
Raster processing ```{admonition} **IMPORTANT** This section contains helper functions for processing the raw raster composites.``` Get the Planet fusion images.
###Code
def raster_read(raster_dir):
print(raster_dir)
# Read band metadata and arrays
# metadata
rgbn = rasterio.open(os.path.join(raster_dir,'sr.tif')) #rgbn
rgbn_src = rgbn
target_crs = rgbn_src.crs
print("rgbn: ", rgbn)
# arrays
# Read and re-scale the original 16 bit image to 8 bit.
scale = True
if scale:
rgbn_norm = cv2.normalize(rgbn.read(), None, 0, 255, cv2.NORM_MINMAX)
rgbn_norm_out=rasterio.open(os.path.join(raster_dir,'sr_byte_scaled.tif'), 'w', driver='Gtiff',
width=rgbn_src.width, height=rgbn_src.height,
count=4,
crs=rgbn_src.crs,
transform=rgbn_src.transform,
dtype='uint8')
rgbn_norm_out.write(rgbn_norm)
rgbn_norm_out.close()
rgbn = rasterio.open(os.path.join(raster_dir,'sr_byte_scaled.tif')) #rgbn
else:
rgbn = rasterio.open(os.path.join(raster_dir,'sr_byte_scaled.tif')) #rgbn
print("Scaled to 8bit.")
return raster_dir, rgbn, rgbn_src, target_crs
###Output
_____no_output_____
###Markdown
Calculate relevant spectral indices ⇩ **WDRVI**: Wide Dynamic Range Vegetation Index \**NDVI**: Normalized Difference Vegetation Index \**SI**: Shadow Index
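As implemented in the helper below (a small constant is added to the NDVI denominator to avoid division by zero, and $a = 0.15$ for WDRVI):

$$\text{WDRVI} = \frac{a \cdot NIR - Red}{a \cdot NIR + Red}$$

$$\text{NDVI} = \frac{NIR - Red}{NIR + Red + \epsilon}$$

$$\text{SI} = \big( (1 - Red)(1 - Green)(1 - Blue) \big)^{1/3}$$

Each index is then rescaled to 0-255 with a min-max scaler and the three are stacked into a single 3-channel image.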
###Code
# calculate spectral indices and concatenate them into one 3 channel image
def indexnormstack(red, green, blue, nir):
def WDRVIcalc(nir, red):
a = 0.15
wdrvi = (a * nir-red)/(a * nir+red)
return wdrvi
def NPCRIcalc(red,blue):
npcri = (red-blue)/(red+blue)
return npcri
def NDVIcalc(nir, red):
ndvi = (nir - red) / (nir + red + 1e-5)
return ndvi
def SIcalc(red, green, blue):
expo = Fraction('1/3')
si = (((1-red)*(1-green)*(1-blue))**expo)
return si
def norm(arr):
scaler = MinMaxScaler(feature_range=(0, 255))
scaler = scaler.fit(arr)
arr_norm = scaler.transform(arr)
# Checking reconstruction
#arr_norm = scaler.inverse_transform(arr_norm)
return arr_norm
wdrvi = WDRVIcalc(nir,red)
#npcri = NPCRIcalc(red,blue)
ndi = NDVIcalc(nir, red)
si = SIcalc(red,green,blue)
print("wdrvi: ", wdrvi.min(), wdrvi.max(), "ndi: ", ndi.min(), ndi.max(), "si: ", si.min(), si.max())
wdrvi = norm(wdrvi)
ndi = norm(ndi)
si = norm(si)
index_stack = np.dstack((wdrvi, ndi, si))
return index_stack
###Output
_____no_output_____
###Markdown
Stack bands of interest. ⇩
###Code
def bandstack(red, green, blue, nir):
stack = np.dstack((red, green, blue))
return stack
###Output
_____no_output_____
###Markdown
(Optional) color correction for the optical composite. ⇩
###Code
# function to increase the brightness in an image
def change_brightness(img, value=30):
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
h, s, v = cv2.split(hsv)
v = cv2.add(v,value)
v[v > 255] = 255
v[v < 0] = 0
final_hsv = cv2.merge((h, s, v))
img = cv2.cvtColor(final_hsv, cv2.COLOR_HSV2BGR)
return img
###Output
_____no_output_____
###Markdown
If you are rasterizing the labels from a vector file (e.g. GeoJSON or Shapefile). ⇩ Read the label vector file into a GeoPandas dataframe, check for invalid geometries and reproject to the local CRS. Then, rasterize the labeled polygons using the metadata from one of the grayscale band images. In this function, `geo_1` is used when there are two vector files used for labeling; the latter is given preference over the former because it overwrites wherever intersections occur.
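As a minimal, self-contained illustration of the rasterization step (the geometry, grid size and class id here are toy values, not the competition data):

```python
import numpy as np
from shapely.geometry import box
from rasterio import features
from rasterio.transform import from_origin

transform = from_origin(0, 10, 1, 1)             # toy 10 x 10 grid, 1 unit per pixel
shapes = [(box(2, 2, 6, 6), 3)]                  # (geometry, integer class id) pairs
burned = features.rasterize(shapes, out_shape=(10, 10), fill=0,
                            transform=transform, all_touched=True, dtype='uint8')
print(np.unique(burned))                         # -> [0 3]
```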
###Code
def label(geos, labels_src):
geo_0 = gpd.read_file(geos[0])
# check for and remove invalid geometries
geo_0 = geo_0.loc[geo_0.is_valid]
# reproject training data into local coordinate reference system
geo_0 = geo_0.to_crs(target_crs)
#convert the class identifier column to type integer
geo_0['landcover_int'] = geo_0.crop_id.astype(int)
# pair the geometries and their integer class values
shapes_0 = ((geom,value) for geom, value in zip(geo_0.geometry, geo_0.landcover_int))
if len(geos) > 1:
geo_1 = gpd.read_file(geos[1])
geo_1 = geo_1.loc[geo_1.is_valid]
geo_1 = geo_1.to_crs(target_crs)
geo_1['landcover_int'] = geo_1.crop_id.astype(int)
shapes_1 = ((geom,value) for geom, value in zip(geo_1.geometry, geo_1.landcover_int))
else:
print("Only one source of vector labels.") #continue
# get the metadata (height, width, channels, transform, CRS) to use in constructing the labeled image array
labels_src_prf = labels_src.profile
# construct a blank array from the metadata and burn the labels in
labels = features.rasterize(shapes=shapes_0, out_shape=(labels_src_prf['height'], labels_src_prf['width']), fill=0, all_touched=True, transform=labels_src_prf['transform'], dtype=labels_src_prf['dtype'])
if len(geos) > 1:
labels = features.rasterize(shapes=shapes_1, fill=0, all_touched=True, out=labels, transform=labels_src_prf['transform'])
else:
print("Only one source of vector labels.") #continue
print("Values in labeled image: ", np.unique(labels))
return labels
###Output
_____no_output_____
###Markdown
Write the processed rasters to file. ⇩
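The write pattern used below is the standard rasterio one: open a destination with matching width, height, band count and dtype, then write a (bands, rows, cols) array. A minimal sketch with a toy array (the file name and CRS are placeholders):

```python
import numpy as np
import rasterio
from rasterio.transform import from_origin

toy = np.random.randint(0, 256, size=(3, 4, 4), dtype=np.uint8)  # (bands, rows, cols)
with rasterio.open('toy_stack.tif', 'w', driver='GTiff',
                   width=4, height=4, count=3, dtype='uint8',
                   crs='EPSG:32633', transform=from_origin(0, 0, 1, 1)) as dst:
    dst.write(toy)
```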
###Code
def save_images(raster_dir, rgb_norm, stack, index_stack, labels, rgb_src):
stack_computed = True # change to True if using the stack helper function above
if stack_computed:
stack_t = stack.transpose(2,0,1)
else:
stack_t = stack
stack_out=rasterio.open(os.path.join(raster_dir,'stack.tif'), 'w', driver='Gtiff',
width=rgb_src.width, height=rgb_src.height,
count=3,
crs=rgb_src.crs,
transform=rgb_src.transform,
dtype='uint8')
stack_out.write(stack_t)
indices_computed = True # change to True if using the index helper function above
if indices_computed:
index_stack_t = index_stack.transpose(2,0,1)
else:
index_stack_t = index_stack
index_stack_out=rasterio.open(os.path.join(raster_dir,'index_stack.tif'), 'w', driver='Gtiff',
width=rgb_src.width, height=rgb_src.height,
count=3,
crs=rgb_src.crs,
transform=rgb_src.transform,
dtype='uint8')
index_stack_out.write(index_stack_t)
#index_stack_out.close()
labels = labels.astype(np.uint8)
labels_out=rasterio.open(os.path.join(raster_dir,'labels.tif'), 'w', driver='Gtiff',
width=rgb_src.width, height=rgb_src.height,
count=1,
crs=rgb_src.crs,
transform=rgb_src.transform,
dtype='uint8')
labels_out.write(labels, 1)
#labels_out.close()
print("written")
return os.path.join(raster_dir,'stack.tif'), os.path.join(raster_dir,'index_stack.tif'), os.path.join(raster_dir,'labels.tif')
###Output
_____no_output_____
###Markdown
Now let's divide the optical/index stack and labeled image into 224x224 pixel tiles. ⇩
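The tiling helper walks a grid of column/row offsets and clips each window to the image extent, so edge tiles can be smaller than 224x224. A minimal sketch with a hypothetical 500 x 300 pixel image:

```python
from itertools import product
from rasterio import windows

nols, nrows = 500, 300                  # hypothetical image size (cols, rows)
width = height = 224
big_window = windows.Window(col_off=0, row_off=0, width=nols, height=nrows)
for col_off, row_off in product(range(0, nols, width), range(0, nrows, height)):
    tile = windows.Window(col_off=col_off, row_off=row_off,
                          width=width, height=height).intersection(big_window)
    print(tile)                         # edge windows are clipped, e.g. only 52 px wide in the last column
```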
###Code
def tile(index_stack, labels, prefix, width, height, raster_dir, output_dir, brighten=False):
tiles_dir = os.path.join(output_dir,'tiled/')
img_dir = os.path.join(output_dir,'tiled/stacks_brightened/')
label_dir = os.path.join(output_dir,'tiled/labels/')
dirs = [tiles_dir, img_dir, label_dir]
for d in dirs:
if not os.path.exists(d):
os.makedirs(d)
def get_tiles(ds):
# get number of rows and columns (pixels) in the entire input image
nols, nrows = ds.meta['width'], ds.meta['height']
# get the grid from which tiles will be made
offsets = product(range(0, nols, width), range(0, nrows, height))
# get the window of the entire input image
big_window = windows.Window(col_off=0, row_off=0, width=nols, height=nrows)
# tile the big window by mini-windows per grid cell
for col_off, row_off in offsets:
window = windows.Window(col_off=col_off, row_off=row_off, width=width, height=height).intersection(big_window)
transform = windows.transform(window, ds.transform)
yield window, transform
tile_width, tile_height = width, height
def crop(inpath, outpath, c):
# read input image
image = rasterio.open(inpath)
# get the metadata
meta = image.meta.copy()
print("meta: ", meta)
# set the number of channels to 3 or 1, depending on if its the index image or labels image
meta['count'] = int(c)
# set the tile output file format to PNG (saves spatial metadata unlike JPG)
meta['driver']='PNG'
meta['dtype']='uint8'
# tile the input image by the mini-windows
i = 0
for window, transform in get_tiles(image):
meta['transform'] = transform
meta['width'], meta['height'] = window.width, window.height
outfile = os.path.join(outpath,"tile_%s_%s.png" % (prefix, str(i)))
with rasterio.open(outfile, 'w', **meta) as outds:
if brighten:
imw = image.read(window=window)
imw = imw.transpose(1,2,0)
imwb = change_brightness(imw, value=50)
imwb = imwb.transpose(2,0,1)
outds.write(imwb)
else:
outds.write(image.read(window=window))
i = i+1
def process_tiles(index_flag):
# tile the input images, when index_flag == True, we are tiling the spectral index image,
# when False we are tiling the labels image
if index_flag==True:
inpath = os.path.join(raster_dir,'stack.tif')
outpath=img_dir
crop(inpath, outpath, 3)
else:
inpath = os.path.join(raster_dir,'labels.tif')
outpath=label_dir
crop(inpath, outpath, 1)
process_tiles(index_flag=True) # tile stack
process_tiles(index_flag=False) # tile labels
return tiles_dir, img_dir, label_dir
###Output
_____no_output_____
###Markdown
Run the image processing workflow. ⚠ Long running code ⚠ Google Drive based workflows can incur timeouts and latency issues. If this happens, try running the affected cell again. A VM with a mounted SSD would be a good start toward solving the latency problems incurred by I/O of data hosted in Google Drive. ⇩
###Code
train_images_dir = 'dlr_fusion_competition_germany_train_source_planet_5day'
%cd $train_images_dir
train_images_dirs = [f.path for f in os.scandir('./') if f.is_dir()]
train_images_dirs = [x.replace('./', '') if type(x) is str else x for x in train_images_dirs]
%cd $root_dir
process = True
if process:
raster_out_dir = os.path.join(root_dir,'rasters/')
if not os.path.exists(raster_out_dir):
os.makedirs(raster_out_dir)
# If you want to write the files out to your personal drive, set write_out = True, but I recommend trying
# that in your free time because it takes about 2 hours or more for all composites.
write_out = True #False
if write_out == True:
for train_image_dir in train_images_dirs: #[0:1]:
# read the rasters and scale to 8bit
print("reading and scaling rasters...")
raster_dir, rgbn, rgbn_src, target_crs = raster_read(os.path.join(train_images_dir,train_image_dir))
# Calculate indices and combine the indices into one single 3 channel image
print("calculating spectral indices...")
index_stack = indexnormstack(rgbn.read(3), rgbn.read(2), rgbn.read(1), rgbn.read(4))
# Stack channels of interest (RGB) into one single 3 channel image
print("Stacking channels of interest...")
stack = bandstack(rgbn.read(3), rgbn.read(2), rgbn.read(1), rgbn.read(4))
# Color correct the RGB image
print("Color correcting a RGB image...")
cc_stack = change_brightness(stack)
# Rasterize labels
labels = label([os.path.join(root_dir,'dlr_fusion_competition_germany_train_labels/dlr_fusion_competition_germany_train_labels_33N_18E_242N/labels.geojson')], rgbn_src)
# Save index stack and labels to geotiff
print("writing scaled rasters and labels to file...")
stack_file, index_stack_file, labels_file = save_images(raster_dir, rgbn, cc_stack, index_stack, labels, rgbn_src)
# Tile images into 224x224
print("tiling the indices and labels...")
tiles_dir, img_dir, label_dir = tile(stack, labels, str(train_image_dir), 224, 224, raster_dir, raster_out_dir, brighten=False)
else:
print("Not writing to file; using data in shared drive.")
else:
print("Using pre-processed dataset.")
###Output
_____no_output_____
###Markdown
Read the data into memory Getting set up with the data```{important}The tiled imagery will be available at the following path that is accessible with the google.colab `drive` module: `'/content/gdrive/My Drive/tf-eo-devseed/'````We'll be working with the following folders and files in the `tf-eo-devseed` folder:```tf-eo-devseed/├── stacks/├── stacks_brightened/├── indices/├── labels/├── background_list_train.txt├── train_list_clean.txt└── lulc_classes.csv``` Get lists of image and label tile pairs for training and testing.&8681
###Code
def get_train_test_lists(imdir, lbldir):
imgs = glob.glob(os.path.join(imdir,"*.png"))
#print(imgs[0:1])
dset_list = []
for img in imgs:
filename_split = os.path.splitext(img)
filename_zero, fileext = filename_split
basename = os.path.basename(filename_zero)
dset_list.append(basename)
x_filenames = []
y_filenames = []
for img_id in dset_list:
x_filenames.append(os.path.join(imdir, "{}.png".format(img_id)))
y_filenames.append(os.path.join(lbldir, "{}.png".format(img_id)))
print("number of images: ", len(dset_list))
return dset_list, x_filenames, y_filenames
train_list, x_train_filenames, y_train_filenames = get_train_test_lists(img_dir, label_dir)
###Output
_____no_output_____
###Markdown
Check the proportion of background tiles. This takes a while, so after running it once you can skip this step by loading the saved results. ⇩
###Code
skip = False
if not skip:
background_list_train = []
for i in train_list:
# read in each labeled images
# print(os.path.join(label_dir,"{}.png".format(i)))
img = np.array(Image.open(os.path.join(label_dir,"{}.png".format(i))))
# check if no values in image are greater than zero (background value)
if img.max()==0:
background_list_train.append(i)
print("Number of background images: ", len(background_list_train))
with open(os.path.join(root_dir,'background_list_train.txt'), 'w') as f:
for item in background_list_train:
f.write("%s\n" % item)
else:
background_list_train = [line.strip() for line in open("background_list_train.txt", 'r')]
print("Number of background images: ", len(background_list_train))
###Output
_____no_output_____
###Markdown
We will keep only 10% of the background tiles. Too many background tiles can cause a form of class imbalance. ⇩
###Code
background_removal = len(background_list_train) * 0.9
train_list_clean = [y for y in train_list if y not in background_list_train[0:int(background_removal)]]
x_train_filenames = []
y_train_filenames = []
for i, img_id in zip(tqdm(range(len(train_list_clean))), train_list_clean):
pass
x_train_filenames.append(os.path.join(img_dir, "{}.png".format(img_id)))
y_train_filenames.append(os.path.join(label_dir, "{}.png".format(img_id)))
print("Number of background tiles: ", background_removal)
print("Remaining number of tiles after 90% background removal: ", len(train_list_clean))
###Output
_____no_output_____
###Markdown
Now that we have our set of files we want to use for developing our model, we need to split them into three sets: * the training set for the model to learn from * the validation set that allows us to evaluate models and make decisions to change models * and the test set that we will use to communicate the results of the best performing model (as determined by the validation set). We will split index tiles and label tiles into train, validation and test sets: 70%, 20% and 10%, respectively. In the code below this is done in two steps: `test_size=0.3` first holds out 30% of the tiles, and a second split with `test_size=0.33` then takes roughly a third of that 30% (about 10% of the total) as the test set, leaving about 20% for validation.
###Code
x_train_filenames, x_val_filenames, y_train_filenames, y_val_filenames = train_test_split(x_train_filenames, y_train_filenames, test_size=0.3, random_state=42)
x_val_filenames, x_test_filenames, y_val_filenames, y_test_filenames = train_test_split(x_val_filenames, y_val_filenames, test_size=0.33, random_state=42)
num_train_examples = len(x_train_filenames)
num_val_examples = len(x_val_filenames)
num_test_examples = len(x_test_filenames)
print("Number of training examples: {}".format(num_train_examples))
print("Number of validation examples: {}".format(num_val_examples))
print("Number of test examples: {}".format(num_test_examples))
###Output
_____no_output_____
###Markdown
```{warning} **Long running cell** \The code below checks for values in train, val, and test partitions. We won't run this since it takes over 10 minutes on colab due to slow I/O.``` ⇩
###Code
vals_train = []
vals_val = []
vals_test = []
def get_vals_in_partition(partition_list, x_filenames, y_filenames):
for x,y,i in zip(x_filenames, y_filenames, tqdm(range(len(y_filenames)))):
pass
try:
img = np.array(Image.open(y))
vals = np.unique(img)
partition_list.append(vals)
except:
continue
def flatten(partition_list):
return [item for sublist in partition_list for item in sublist]
get_vals_in_partition(vals_train, x_train_filenames, y_train_filenames)
get_vals_in_partition(vals_val, x_val_filenames, y_val_filenames)
get_vals_in_partition(vals_test, x_test_filenames, y_test_filenames)
print("Values in training partition: ", set(flatten(vals_train)))
print("Values in validation partition: ", set(flatten(vals_val)))
print("Values in test partition: ", set(flatten(vals_test)))
###Output
Values in training partition: {0, 1, 2, 3, 4, 5, 6, 7, 8, 9}
Values in validation partition: {0, 1, 2, 3, 4, 5, 6, 7, 8, 9}
Values in test partition: {0, 1, 2, 3, 4, 5, 6, 7, 8, 9}
###Markdown
Visualize the data ```{warning} **Long running cell** \The code below loads foreground examples randomly. ``` ⇩
###Code
display_num = 3
background_list_train = [line.strip() for line in open("background_list_train.txt", 'r')]
# select only for tiles with foreground labels present
foreground_list_x = []
foreground_list_y = []
for x,y in zip(x_train_filenames, y_train_filenames):
try:
filename_split = os.path.splitext(y)
filename_zero, fileext = filename_split
basename = os.path.basename(filename_zero)
if basename not in background_list_train:
foreground_list_x.append(x)
foreground_list_y.append(y)
else:
continue
except:
continue
num_foreground_examples = len(foreground_list_y)
# randomize the choice of image and label pairs
r_choices = np.random.choice(num_foreground_examples, display_num)
plt.figure(figsize=(10, 15))
for i in range(0, display_num * 2, 2):
img_num = r_choices[i // 2]
x_pathname = foreground_list_x[img_num]
y_pathname = foreground_list_y[img_num]
plt.subplot(display_num, 2, i + 1)
plt.imshow(mpimg.imread(x_pathname))
plt.title("Original Image")
example_labels = Image.open(y_pathname)
label_vals = np.unique(np.array(example_labels))
plt.subplot(display_num, 2, i + 2)
plt.imshow(example_labels)
plt.title("Masked Image")
plt.suptitle("Examples of Images and their Masks")
plt.show()
###Output
_____no_output_____
###Markdown
Process dataset for use with deep learning segmentation network> A guide for processing raster data and labels into ML-ready format for use with a deep-learning based semantic segmentation. Setup Notebook ```{admonition} **Version control**Colab updates without warning to users, which can cause notebooks to break. Therefore, we are pinning library versions.```
###Code
# install required libraries
!pip install -q rasterio==1.2.10
!pip install -q geopandas==0.10.2
!pip install -q radiant_mlhub # for dataset access, see: https://mlhub.earth/
# import required libraries
import os, glob, functools, fnmatch, io, shutil, tarfile, json
from zipfile import ZipFile
from itertools import product
from pathlib import Path
import urllib.request
import numpy as np
from fractions import Fraction
import matplotlib.pyplot as plt
import matplotlib as mpl
mpl.rcParams['axes.grid'] = False
mpl.rcParams['figure.figsize'] = (12,12)
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler
import matplotlib.image as mpimg
import pandas as pd
from PIL import Image
import rasterio
from rasterio.merge import merge
from rasterio.plot import show
from rasterio import features, mask, windows
import geopandas as gpd
from IPython.display import clear_output
import cv2
from timeit import default_timer as timer
from tqdm.notebook import tqdm
from radiant_mlhub import Dataset, client, get_session, Collection
# configure Radiant Earth MLHub access
!mlhub configure
# set your root directory and tiled data folders
if 'google.colab' in str(get_ipython()):
# mount google drive
from google.colab import drive
drive.mount('/content/gdrive')
root_dir = '/content/gdrive/My Drive/tf-eo-devseed/'
workshop_dir = '/content/gdrive/My Drive/tf-eo-devseed-workshop'
dirs = [root_dir, workshop_dir]
for d in dirs:
if not os.path.exists(d):
os.makedirs(d)
print('Running on Colab')
else:
root_dir = os.path.abspath("./data/tf-eo-devseed")
workshop_dir = os.path.abspath('./tf-eo-devseed-workshop')
print(f'Not running on Colab, data needs to be downloaded locally at {os.path.abspath(root_dir)}')
%cd $root_dir
###Output
_____no_output_____
###Markdown
Enabling GPU```{Tip}This notebook can utilize a GPU and works better if you use one. Hopefully this notebook is using a GPU, and we can check with the following code.If it's not using a GPU you can change your session/notebook to use a GPU. See [Instructions](https://colab.research.google.com/notebooks/gpu.ipynbscrollTo=sXnDmXR7RDr2).```
###Code
%tensorflow_version 2.x
import tensorflow as tf
device_name = tf.test.gpu_device_name()
if device_name != '/device:GPU:0':
raise SystemError('GPU device not found')
print('Found GPU at: {}'.format(device_name))
###Output
_____no_output_____
###Markdown
Access the dataset We will use a crop type classification dataset from Radiant Earth MLHub: https://mlhub.earth/data/dlr_fusion_competition_germany
###Code
ds = Dataset.fetch('dlr_fusion_competition_germany')
for c in ds.collections:
print(c.id)
collections = [
'dlr_fusion_competition_germany_train_source_planet_5day',
'dlr_fusion_competition_germany_test_source_planet_5day',
'dlr_fusion_competition_germany_train_labels',
'dlr_fusion_competition_germany_test_labels'
]
def download(collection_id):
print(f'Downloading {collection_id}...')
collection = Collection.fetch(collection_id)
path = collection.download('.')
tar = tarfile.open(path, "r:gz")
tar.extractall()
tar.close()
os.remove(path)
def resolve_path(base, path):
return Path(os.path.join(base, path)).resolve()
def load_df(collection_id):
collection = json.load(open(f'{collection_id}/collection.json', 'r'))
rows = []
item_links = []
for link in collection['links']:
if link['rel'] != 'item':
continue
item_links.append(link['href'])
for item_link in item_links:
item_path = f'{collection_id}/{item_link}'
current_path = os.path.dirname(item_path)
item = json.load(open(item_path, 'r'))
tile_id = item['id'].split('_')[-1]
for asset_key, asset in item['assets'].items():
rows.append([
tile_id,
None,
None,
asset_key,
str(resolve_path(current_path, asset['href']))
])
for link in item['links']:
if link['rel'] != 'source':
continue
link_path = resolve_path(current_path, link['href'])
source_path = os.path.dirname(link_path)
try:
source_item = json.load(open(link_path, 'r'))
except FileNotFoundError:
continue
datetime = source_item['properties']['datetime']
satellite_platform = source_item['collection'].split('_')[-1]
for asset_key, asset in source_item['assets'].items():
rows.append([
tile_id,
datetime,
satellite_platform,
asset_key,
str(resolve_path(source_path, asset['href']))
])
return pd.DataFrame(rows, columns=['tile_id', 'datetime', 'satellite_platform', 'asset', 'file_path'])
for c in collections:
download(c)
train_df = load_df('dlr_fusion_competition_germany_train_labels')
test_df = load_df('dlr_fusion_competition_germany_test_labels')
###Output
_____no_output_____
###Markdown
Check out the labels Class names and identifiers extracted from the documentation provided here: https://radiantearth.blob.core.windows.net/mlhub/esa-food-security-challenge/Crops_GT_Brandenburg_Doc.pdf
###Code
# Read the classes
pd.set_option('display.max_colwidth', None)
data = {'class_names': ['Background', 'Wheat', 'Rye', 'Barley', 'Oats', 'Corn', 'Oil Seeds', 'Root Crops', 'Meadows', 'Forage Crops'],
'class_ids': [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
}
classes = pd.DataFrame(data)
print(classes)
classes.to_csv('lulc_classes.csv')
# Let's check the class labels
labels_geo = gpd.read_file('dlr_fusion_competition_germany_train_labels/dlr_fusion_competition_germany_train_labels_33N_18E_242N/labels.geojson')
classes = labels_geo.crop_id.unique()
classes.sort()
print("classes in labels geojson: ", classes)
###Output
class_names class_ids
0 Background 0
1 Wheat 1
2 Rye 2
3 Barley 3
4 Oats 4
5 Corn 5
6 Oil Seeds 6
7 Root Crops 7
8 Meadows 8
9 Forage Crops 9
classes in labels geojson: [1 2 3 4 5 6 7 8 9]
###Markdown
Raster processing ```{admonition} **IMPORTANT**This section contains helper functions for processing the raw raster composites.``` Get the Planet fusion images. ⇩
###Code
def raster_read(raster_dir):
print(raster_dir)
# Read band metadata and arrays
# metadata
rgbn = rasterio.open(os.path.join(raster_dir,'sr.tif')) #rgbn
rgbn_src = rgbn
target_crs = rgbn_src.crs
print("rgbn: ", rgbn)
# arrays
# Read and re-scale the original 16 bit image to 8 bit.
scale = False
if scale:
rgbn_norm = cv2.normalize(rgbn.read(), None, 0, 255, cv2.NORM_MINMAX)
rgbn_norm_out=rasterio.open(os.path.join(raster_dir,'sr_byte_scaled.tif'), 'w', driver='Gtiff',
width=rgbn_src.width, height=rgbn_src.height,
count=4,
crs=rgbn_src.crs,
transform=rgbn_src.transform,
dtype='uint8')
rgbn_norm_out.write(rgbn_norm)
rgbn_norm_out.close()
rgbn = rasterio.open(os.path.join(raster_dir,'sr_byte_scaled.tif')) #rgbn
else:
rgbn = rasterio.open(os.path.join(raster_dir,'sr_byte_scaled.tif')) #rgbn
print("Scaled to 8bit.")
return raster_dir, rgbn, rgbn_src, target_crs
###Output
_____no_output_____
###Markdown
Calculate relevant spectral indices ⇩ **WDRVI**: Wide Dynamic Range Vegetation Index \**NDVI**: Normalized Difference Vegetation Index \**SI**: Shadow Index
###Code
# calculate spectral indices and concatenate them into one 3 channel image
def indexnormstack(red, green, blue, nir):
def WDRVIcalc(nir, red):
a = 0.15
wdrvi = (a * nir-red)/(a * nir+red)
return wdrvi
def NPCRIcalc(red,blue):
npcri = (red-blue)/(red+blue)
return npcri
def NDVIcalc(nir, red):
ndvi = (nir - red) / (nir + red + 1e-5)
return ndvi
def SIcalc(red, green, blue):
expo = Fraction('1/3')
si = (((1-red)*(1-green)*(1-blue))**expo)
return si
def norm(arr):
scaler = MinMaxScaler(feature_range=(0, 255))
scaler = scaler.fit(arr)
arr_norm = scaler.transform(arr)
# Checking reconstruction
#arr_norm = scaler.inverse_transform(arr_norm)
return arr_norm
wdrvi = WDRVIcalc(nir,red)
#npcri = NPCRIcalc(red,blue)
ndi = NDVIcalc(nir, red)
si = SIcalc(red,green,blue)
print("wdrvi: ", wdrvi.min(), wdrvi.max(), "ndi: ", ndi.min(), ndi.max(), "si: ", si.min(), si.max())
wdrvi = norm(wdrvi)
ndi = norm(ndi)
si = norm(si)
index_stack = np.dstack((wdrvi, ndi, si))
return index_stack
###Output
_____no_output_____
###Markdown
Stack bands of interest. ⇩
###Code
def bandstack(red, green, blue, nir):  # renamed from `stack` so the call in the workflow does not shadow the result variable
stack = np.dstack((red, green, blue))
return stack
###Output
_____no_output_____
###Markdown
(Optional) color correction for the optical composite. ⇩
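A toy usage of the helper defined in the next cell, just to show the effect of lifting the value channel (the uniform gray image is made up; this assumes `change_brightness` has been defined by running that cell):

```python
import numpy as np

gray = np.full((2, 2, 3), 100, dtype=np.uint8)  # toy 2 x 2 BGR image
brighter = change_brightness(gray, value=30)    # helper from the next cell
print(gray[0, 0], brighter[0, 0])               # the brightened pixel is lighter
```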
###Code
# function to increase the brightness in an image
def change_brightness(img, value=30):
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
h, s, v = cv2.split(hsv)
v = cv2.add(v,value)
v[v > 255] = 255
v[v < 0] = 0
final_hsv = cv2.merge((h, s, v))
img = cv2.cvtColor(final_hsv, cv2.COLOR_HSV2BGR)
return img
###Output
_____no_output_____
###Markdown
If you are rasterizing the labels from a vector file (e.g. GeoJSON or Shapefile). ⇩ Read the label vector file into a GeoPandas dataframe, check for invalid geometries and reproject to the local CRS. Then, rasterize the labeled polygons using the metadata from one of the grayscale band images. In this function, `geo_1` is used when there are two vector files used for labeling; the latter is given preference over the former because it overwrites wherever intersections occur.
###Code
def label(geos, labels_src):
geo_0 = gpd.read_file(geos[0])
# check for and remove invalid geometries
geo_0 = geo_0.loc[geo_0.is_valid]
# reproject training data into local coordinate reference system
geo_0 = geo_0.to_crs(target_crs)
#convert the class identifier column to type integer
geo_0['landcover_int'] = geo_0.crop_id.astype(int)
# pair the geometries and their integer class values
shapes_0 = ((geom,value) for geom, value in zip(geo_0.geometry, geo_0.landcover_int))
if len(geos) > 1:
geo_1 = gpd.read_file(geos[1])
geo_1 = geo_1.loc[geo_1.is_valid]
geo_1 = geo_1.to_crs(target_crs)
geo_1['landcover_int'] = geo_1.crop_id.astype(int)
shapes_1 = ((geom,value) for geom, value in zip(geo_1.geometry, geo_1.landcover_int))
else:
print("Only one source of vector labels.") #continue
# get the metadata (height, width, channels, transform, CRS) to use in constructing the labeled image array
labels_src_prf = labels_src.profile
# construct a blank array from the metadata and burn the labels in
labels = features.rasterize(shapes=shapes_0, out_shape=(labels_src_prf['height'], labels_src_prf['width']), fill=0, all_touched=True, transform=labels_src_prf['transform'], dtype=labels_src_prf['dtype'])
if len(geos) > 1:
labels = features.rasterize(shapes=shapes_1, fill=0, all_touched=True, out=labels, transform=labels_src_prf['transform'])
else:
print("Only one source of vector labels.") #continue
print("Values in labeled image: ", np.unique(labels))
return labels
###Output
_____no_output_____
###Markdown
Write the processed rasters to file. ⇩
###Code
def save_images(raster_dir, rgb_norm, stack, index_stack, labels, rgb_src):
stack_computed = True # change to True if using the stack helper function above
if stack_computed:
stack_t = stack.transpose(2,0,1)
else:
stack_t = stack
stack_out=rasterio.open(os.path.join(raster_dir,'stack.tif'), 'w', driver='Gtiff',
width=rgb_src.width, height=rgb_src.height,
count=3,
crs=rgb_src.crs,
transform=rgb_src.transform,
dtype='uint8')
stack_out.write(stack_t)
indices_computed = True # change to True if using the index helper function above
if indices_computed:
index_stack_t = index_stack.transpose(2,0,1)
else:
index_stack_t = index_stack
index_stack_out=rasterio.open(os.path.join(raster_dir,'index_stack.tif'), 'w', driver='Gtiff',
width=rgb_src.width, height=rgb_src.height,
count=3,
crs=rgb_src.crs,
transform=rgb_src.transform,
dtype='uint8')
index_stack_out.write(index_stack_t)
#index_stack_out.close()
labels = labels.astype(np.uint8)
labels_out=rasterio.open(os.path.join(raster_dir,'labels.tif'), 'w', driver='Gtiff',
width=rgb_src.width, height=rgb_src.height,
count=1,
crs=rgb_src.crs,
transform=rgb_src.transform,
dtype='uint8')
labels_out.write(labels, 1)
#labels_out.close()
print("written")
return os.path.join(raster_dir,'stack.tif'), os.path.join(raster_dir,'index_stack.tif'), os.path.join(raster_dir,'labels.tif')
###Output
_____no_output_____
###Markdown
Now let's divide the optical/index stack and labeled image into 224x224 pixel tiles. ⇩
###Code
def tile(index_stack, labels, prefix, width, height, raster_dir, output_dir, brighten=False):
tiles_dir = os.path.join(output_dir,'tiled/')
img_dir = os.path.join(output_dir,'tiled/stacks_brightened/')
label_dir = os.path.join(output_dir,'tiled/labels/')
dirs = [tiles_dir, img_dir, label_dir]
for d in dirs:
if not os.path.exists(d):
os.makedirs(d)
def get_tiles(ds):
# get number of rows and columns (pixels) in the entire input image
nols, nrows = ds.meta['width'], ds.meta['height']
# get the grid from which tiles will be made
offsets = product(range(0, nols, width), range(0, nrows, height))
# get the window of the entire input image
big_window = windows.Window(col_off=0, row_off=0, width=nols, height=nrows)
# tile the big window by mini-windows per grid cell
for col_off, row_off in offsets:
window = windows.Window(col_off=col_off, row_off=row_off, width=width, height=height).intersection(big_window)
transform = windows.transform(window, ds.transform)
yield window, transform
tile_width, tile_height = width, height
def crop(inpath, outpath, c):
# read input image
image = rasterio.open(inpath)
# get the metadata
meta = image.meta.copy()
print("meta: ", meta)
# set the number of channels to 3 or 1, depending on if its the index image or labels image
meta['count'] = int(c)
# set the tile output file format to PNG (saves spatial metadata unlike JPG)
meta['driver']='PNG'
meta['dtype']='uint8'
# tile the input image by the mini-windows
i = 0
for window, transform in get_tiles(image):
meta['transform'] = transform
meta['width'], meta['height'] = window.width, window.height
outfile = os.path.join(outpath,"tile_%s_%s.png" % (prefix, str(i)))
with rasterio.open(outfile, 'w', **meta) as outds:
if brighten:
imw = image.read(window=window)
imw = imw.transpose(1,2,0)
imwb = change_brightness(imw, value=50)
imwb = imwb.transpose(2,0,1)
outds.write(imwb)
else:
outds.write(image.read(window=window))
i = i+1
def process_tiles(index_flag):
# tile the input images, when index_flag == True, we are tiling the spectral index image,
# when False we are tiling the labels image
if index_flag==True:
inpath = os.path.join(raster_dir,'stack.tif')
outpath=img_dir
crop(inpath, outpath, 3)
else:
inpath = os.path.join(raster_dir,'labels.tif')
outpath=label_dir
crop(inpath, outpath, 1)
process_tiles(index_flag=True) # tile stack
process_tiles(index_flag=False) # tile labels
return tiles_dir, img_dir, label_dir
###Output
_____no_output_____
###Markdown
Run the image processing workflow. ⚠ Long running code ⚠ Google Drive based workflows can incur timeouts and latency issues. If this happens, try running the affected cell again. A VM with a mounted SSD would be a good start toward solving the latency problems incurred by I/O of data hosted in Google Drive. ⇩
###Code
train_images_dir = 'dlr_fusion_competition_germany_train_source_planet_5day'
%cd $train_images_dir
train_images_dirs = [f.path for f in os.scandir('./') if f.is_dir()]
train_images_dirs = [x.replace('./', '') if type(x) is str else x for x in train_images_dirs]
%cd $root_dir
process = True
if process:
raster_out_dir = os.path.join(root_dir,'rasters/')
if not os.path.exists(raster_out_dir):
os.makedirs(raster_out_dir)
# If you want to write the files out to your personal drive, set write_out = True, but I recommend trying
# that in your free time because it takes about 2 hours or more for all composites.
write_out = True #False
if write_out == True:
for train_image_dir in train_images_dirs: #[0:1]:
# read the rasters and scale to 8bit
print("reading and scaling rasters...")
raster_dir, rgbn, rgbn_src, target_crs = raster_read(os.path.join(train_images_dir,train_image_dir))
# Calculate indices and combine the indices into one single 3 channel image
print("calculating spectral indices...")
index_stack = indexnormstack(rgbn.read(3), rgbn.read(2), rgbn.read(1), rgbn.read(4))
# Stack channels of interest (RGB) into one single 3 channel image
print("Stacking channels of interest...")
stack = bandstack(rgbn.read(3), rgbn.read(2), rgbn.read(1), rgbn.read(4))
# Color correct the RGB image
print("Color correcting a RGB image...")
cc_stack = change_brightness(stack)
# Rasterize labels
labels = label([os.path.join(root_dir,'dlr_fusion_competition_germany_train_labels/dlr_fusion_competition_germany_train_labels_33N_18E_242N/labels.geojson')], rgbn_src)
# Save index stack and labels to geotiff
print("writing scaled rasters and labels to file...")
stack_file, index_stack_file, labels_file = save_images(raster_dir, rgbn, cc_stack, index_stack, labels, rgbn_src)
# Tile images into 224x224
print("tiling the indices and labels...")
tiles_dir, img_dir, label_dir = tile(stack, labels, str(train_image_dir), 224, 224, raster_dir, raster_out_dir, brighten=False)
else:
print("Not writing to file; using data in shared drive.")
else:
print("Using pre-processed dataset.")
###Output
_____no_output_____
###Markdown
Read the data into memory Getting set up with the data```{important}Create drive shortcuts of the tiled imagery to your own My Drive Folder by Right-Clicking on the Shared folder `tf-eo-devseed`. Then, this folder will be available at the following path that is accessible with the google.colab `drive` module: `'/content/gdrive/My Drive/tf-eo-devseed/'````We'll be working with the following folders and files in the `tf-eo-devseed` folder:```tf-eo-devseed/├── stacks/├── stacks_brightened/├── indices/├── labels/├── background_list_train.txt├── train_list_clean.txt└── lulc_classes.csv``` Get lists of image and label tile pairs for training and testing.&8681
###Code
def get_train_test_lists(imdir, lbldir):
imgs = glob.glob(os.path.join(imdir,"*.png"))
#print(imgs[0:1])
dset_list = []
for img in imgs:
filename_split = os.path.splitext(img)
filename_zero, fileext = filename_split
basename = os.path.basename(filename_zero)
dset_list.append(basename)
x_filenames = []
y_filenames = []
for img_id in dset_list:
x_filenames.append(os.path.join(imdir, "{}.png".format(img_id)))
y_filenames.append(os.path.join(lbldir, "{}.png".format(img_id)))
print("number of images: ", len(dset_list))
return dset_list, x_filenames, y_filenames
train_list, x_train_filenames, y_train_filenames = get_train_test_lists(img_dir, label_dir)
###Output
_____no_output_____
###Markdown
Check the proportion of background tiles. This takes a while, so after running it once you can skip this step by loading the saved results. ⇩
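The test used below simply asks whether a label tile contains any non-zero (i.e. non-background) pixel. A minimal sketch on toy label arrays:

```python
import numpy as np

tile_a = np.zeros((2, 2), dtype=np.uint8)            # all background
tile_b = np.array([[0, 0], [0, 5]], dtype=np.uint8)  # contains class 5
print(tile_a.max() == 0, tile_b.max() == 0)          # -> True False
```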
###Code
skip = False
if not skip:
background_list_train = []
for i in train_list:
# read in each labeled images
# print(os.path.join(label_dir,"{}.png".format(i)))
img = np.array(Image.open(os.path.join(label_dir,"{}.png".format(i))))
# check if no values in image are greater than zero (background value)
if img.max()==0:
background_list_train.append(i)
print("Number of background images: ", len(background_list_train))
with open(os.path.join(root_dir,'background_list_train.txt'), 'w') as f:
for item in background_list_train:
f.write("%s\n" % item)
else:
background_list_train = [line.strip() for line in open("background_list_train.txt", 'r')]
print("Number of background images: ", len(background_list_train))
###Output
_____no_output_____
###Markdown
We will keep only 10% of the background tiles. Too many background tiles can cause a form of class imbalance. ⇩
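A toy illustration of the background thinning performed below (the counts are hypothetical, not the real dataset):

```python
background = 900                  # hypothetical number of background-only tiles
foreground = 100                  # hypothetical number of tiles with labels
removed = int(background * 0.9)   # drop 90% of the background tiles
remaining = foreground + (background - removed)
print(removed, remaining)         # -> 810 190
```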
###Code
background_removal = len(background_list_train) * 0.9
train_list_clean = [y for y in train_list if y not in background_list_train[0:int(background_removal)]]
x_train_filenames = []
y_train_filenames = []
for i, img_id in zip(tqdm(range(len(train_list_clean))), train_list_clean):
pass
x_train_filenames.append(os.path.join(img_dir, "{}.png".format(img_id)))
y_train_filenames.append(os.path.join(label_dir, "{}.png".format(img_id)))
print("Number of background tiles: ", background_removal)
print("Remaining number of tiles after 90% background removal: ", len(train_list_clean))
###Output
_____no_output_____
###Markdown
Now that we have our set of files we want to use for developing our model, we need to split them into three sets: * the training set for the model to learn from* the validation set that allows us to evaluate models and make decisions to change models* and the test set that we will use to communicate the results of the best performing model (as determined by the validation set)We will split index tiles and label tiles into train, validation and test sets: 70%, 20% and 10%, respectively.
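The two-step split in the next cell produces roughly those proportions: the first call holds out 30% of the tiles, and the second call takes about a third of that 30% (roughly 10% of the whole) as the test set. A minimal sketch with 100 hypothetical file names:

```python
from sklearn.model_selection import train_test_split

files = [f"tile_{i}.png" for i in range(100)]  # hypothetical file list
train, rest = train_test_split(files, test_size=0.3, random_state=42)
val, test = train_test_split(rest, test_size=0.33, random_state=42)
print(len(train), len(val), len(test))         # -> 70 20 10
```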
###Code
x_train_filenames, x_val_filenames, y_train_filenames, y_val_filenames = train_test_split(x_train_filenames, y_train_filenames, test_size=0.3, random_state=42)
x_val_filenames, x_test_filenames, y_val_filenames, y_test_filenames = train_test_split(x_val_filenames, y_val_filenames, test_size=0.33, random_state=42)
num_train_examples = len(x_train_filenames)
num_val_examples = len(x_val_filenames)
num_test_examples = len(x_test_filenames)
print("Number of training examples: {}".format(num_train_examples))
print("Number of validation examples: {}".format(num_val_examples))
print("Number of test examples: {}".format(num_test_examples))
###Output
_____no_output_____
###Markdown
```{warning} **Long running cell** \The code below checks for values in train, val, and test partitions. We won't run this since it takes over 10 minutes on colab due to slow I/O.``` ⇩
###Code
vals_train = []
vals_val = []
vals_test = []
def get_vals_in_partition(partition_list, x_filenames, y_filenames):
for x,y,i in zip(x_filenames, y_filenames, tqdm(range(len(y_filenames)))):
pass
try:
img = np.array(Image.open(y))
vals = np.unique(img)
partition_list.append(vals)
except:
continue
def flatten(partition_list):
return [item for sublist in partition_list for item in sublist]
get_vals_in_partition(vals_train, x_train_filenames, y_train_filenames)
get_vals_in_partition(vals_val, x_val_filenames, y_val_filenames)
get_vals_in_partition(vals_test, x_test_filenames, y_test_filenames)
print("Values in training partition: ", set(flatten(vals_train)))
print("Values in validation partition: ", set(flatten(vals_val)))
print("Values in test partition: ", set(flatten(vals_test)))
###Output
Values in training partition: {0, 1, 2, 3, 4, 5, 6, 7, 8, 9}
Values in validation partition: {0, 1, 2, 3, 4, 5, 6, 7, 8, 9}
Values in test partition: {0, 1, 2, 3, 4, 5, 6, 7, 8, 9}
###Markdown
Visualize the data ```{warning} **Long running cell** \The code below loads foreground examples randomly. ``` ⇩
###Code
display_num = 3
background_list_train = [line.strip() for line in open("background_list_train.txt", 'r')]
# select only for tiles with foreground labels present
foreground_list_x = []
foreground_list_y = []
for x,y in zip(x_train_filenames, y_train_filenames):
try:
filename_split = os.path.splitext(y)
filename_zero, fileext = filename_split
basename = os.path.basename(filename_zero)
if basename not in background_list_train:
foreground_list_x.append(x)
foreground_list_y.append(y)
else:
continue
except:
continue
num_foreground_examples = len(foreground_list_y)
# randomize the choice of image and label pairs
r_choices = np.random.choice(num_foreground_examples, display_num)
plt.figure(figsize=(10, 15))
for i in range(0, display_num * 2, 2):
img_num = r_choices[i // 2]
x_pathname = foreground_list_x[img_num]
y_pathname = foreground_list_y[img_num]
plt.subplot(display_num, 2, i + 1)
plt.imshow(mpimg.imread(x_pathname))
plt.title("Original Image")
example_labels = Image.open(y_pathname)
label_vals = np.unique(np.array(example_labels))
plt.subplot(display_num, 2, i + 2)
plt.imshow(example_labels)
plt.title("Masked Image")
plt.suptitle("Examples of Images and their Masks")
plt.show()
###Output
_____no_output_____
###Markdown
Process dataset for use with deep learning segmentation network> A guide for processing raster data and labels into ML-ready format for use with a deep-learning based semantic segmentation. Setup Notebook ```{admonition} **Version control**Colab updates without warning to users, which can cause notebooks to break. Therefore, we are pinning library versions.```
###Code
# install required libraries
!pip install -q rasterio==1.2.10
!pip install -q geopandas==0.10.2
# import required libraries
import os, glob, functools, fnmatch, json, requests
from zipfile import ZipFile
from itertools import product
import urllib.request
import numpy as np
import matplotlib.pyplot as plt
import matplotlib as mpl
mpl.rcParams['axes.grid'] = False
mpl.rcParams['figure.figsize'] = (12,12)
from sklearn.model_selection import train_test_split
import matplotlib.image as mpimg
import pandas as pd
from PIL import Image
import rasterio
from rasterio.merge import merge
from rasterio.plot import show
from rasterio import features, mask, windows
import geopandas as gpd
from IPython.display import clear_output
import cv2
from timeit import default_timer as timer
from tqdm.notebook import tqdm
# Mount google drive.
from google.colab import drive
drive.mount('/content/gdrive')
# set your root directory and tiled data folders
if 'google.colab' in str(get_ipython()):
root_dir = '/content/gdrive/My Drive/servir-tf-devseed/'
print('Running on Colab')
else:
root_dir = os.path.abspath("./data/servir-tf-devseed")
print(f'Not running on Colab, data needs to be downloaded locally at {os.path.abspath(root_dir)}')
img_dir = os.path.join(root_dir,'indices/') # or os.path.join(root_dir,'images_bright/') if using the optical tiles
label_dir = os.path.join(root_dir,'labels/')
%cd $root_dir
###Output
/content/gdrive/MyDrive/servir-tf/data/servir-tf-devseed
###Markdown
Enabling GPU```{Tip}This notebook can utilize a GPU and works better if you use one. Hopefully this notebook is using a GPU, and we can check with the following code.If it's not using a GPU you can change your session/notebook to use a GPU. See [Instructions](https://colab.research.google.com/notebooks/gpu.ipynbscrollTo=sXnDmXR7RDr2).```
###Code
%tensorflow_version 2.x
import tensorflow as tf
device_name = tf.test.gpu_device_name()
if device_name != '/device:GPU:0':
raise SystemError('GPU device not found')
print('Found GPU at: {}'.format(device_name))
###Output
_____no_output_____
###Markdown
Raster processing &9888 WARNING &9888 This section contains helper functions for processing the raw raster composites and is optional yet not recommended, as the ML-ready tiled dataset is written to a shared drive folder, as you’ll see in the section titled Read the data into memory. Any cells with markdown containing an &8681 just above them are to be skipped during the workshop. Get the optical, spectral index and label mask images. &8681 ```pythondef raster_read(raster_dir): print(raster_dir) rasters = glob.glob(os.path.join(raster_dir,'/**/*.tif'),recursive=True) print(rasters) Read band metadata and arrays metadata rgb = rasterio.open(os.path.join(raster_dir,'/*rgb.tif*')) rgb rgbn = rasterio.open(os.path.join(raster_dir,'/*rgbn.tif*')) rgbn indices = rasterio.open(os.path.join(raster_dir,'/*indices.tif*')) spectral labels = rasterio.open(os.path.join(raster_dir,'/*label.tif*')) labels rgb_src = rgb labels_src = labels target_crs = rgb_src.crs print("rgb: ", rgb) arrays Read and re-scale the original 16 bit image to 8 bit. rgb = cv2.normalize(rgb.read(), None, 0, 255, cv2.NORM_MINMAX) rgbn = cv2.normalize(rgbn.read(), None, 0, 255, cv2.NORM_MINMAX) indices = cv2.normalize(indices.read(), None, 0, 255, cv2.NORM_MINMAX) labels = labels.read() Check the label mask values. print("values in labels array: ", np.unique(labels)) return raster_dir, rgb, rgbn, indices, labels, rgb_src, labels_src, target_crs``` Color correction for the optical composite. &8681 ```python function to increase the brightness in an imagedef change_brightness(img, value=30): hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV) h, s, v = cv2.split(hsv) v = cv2.add(v,value) v[v > 255] = 255 v[v < 0] = 0 final_hsv = cv2.merge((h, s, v)) img = cv2.cvtColor(final_hsv, cv2.COLOR_HSV2BGR) return img``` Calculate relevant spectral indices&8681 **WDRVI**: Wide Dynamic Range Vegetation Index \**NPCRI**: Normalized Pigment Chlorophyll Ratio Index \**SI**: Shadow Index```python calculate spectral indices and concatenate them into one 3 channel imagedef indexnormstack(red, green, blue, nir): def WDRVIcalc(nir, red): a = 0.15 wdrvi = (a * nir-red)/(a * nir+red) return wdrvi def NPCRIcalc(red,blue): npcri = (red-blue)/(red+blue) return npcri def SIcalc(red, green, blue): si = ((1-red)*(1-green)*(1-blue))^(1/3) return si def norm(arr): arr_norm = (255*(arr - np.min(arr))/np.ptp(arr)) return arr_norm wdrvi = WDRVIcalc(nir,red) npcri = WRDVIcalc(red,blue) si = SIcalc(red,green,blue) wdrvi = wdrvi.transpose(1,2,0) npcri = npcri.transpose(1,2,0) si = si.transpose(1,2,0) index_stack = np.dstack((wdrvi, npcri, si)) return index_stack``` If you are rasterizing the labels from a vector file (e.g. GeoJSON or Shapefile).&8681 Read label shapefile into geopandas dataframe, check for invalid geometries and set to local CRS. Then, rasterize the labeled polygons using the metadata from one of the grayscale band images. In this fucntion, `geo_1` is used when there are two vector files used for labeling, e.g. Imaflora and Para. 
The latter is given preference over the former because it overwrites when intersections occur.```pythondef label(geos, labels_src): geo_0 = gpd.read_file(geos[0]) check for and remove invalid geometries geo_0 = geo_0.loc[geo_0.is_valid] reproject training data into local coordinate reference system geo_0 = geo_0.to_crs(crs={'init': target_crs}) convert the class identifier column to type integer geo_0['landcover_int'] = geo_0.landcover.astype(int) pair the geometries and their integer class values shapes_0 = ((geom,value) for geom, value in zip(geo_0.geometry, geo_0.landcover_int)) if len(geos) > 1: geo_1 = gpd.read_file(geos[1]) geo_1 = geo_1.loc[geo_1.is_valid] geo_1 = geo_1.to_crs(crs={'init': target_crs}) geo_1['landcover_int'] = geo_1.landcover.astype(int) shapes_1 = ((geom,value) for geom, value in zip(geo_1.geometry, geo_1.landcover_int)) else: continue get the metadata (height, width, channels, transform, CRS) to use in constructing the labeled image array labels_src_prf = labels_src.profile construct a blank array from the metadata and burn the labels in labels = features.rasterize(shapes=shapes, out_shape=(labels_src_prf['height'], labels_src_prf['width']), fill=0, all_touched=True, transform=labels_src_prf['transform'], dtype=labels_src_prf['dtype']) if geo_1: labels = features.rasterize(shapes=shapes_0, fill=0, all_touched=True, out=labels, transform=labels_src_prf['transform']) else: continue print("Values in labeled image: ", np.unique(labels)) return labels``` Write the processed rasters to file.&8681 ```pythondef save_images(raster_dir, rgb_norm, index_stack, labels, rgb_src, labels_src): rgb_norm_out=rasterio.open(os.path.join(raster_dir,'/rgb_byte_scaled.tif'), 'w', driver='Gtiff', width=rgb_src.width, height=rgb_src.height, count=3, crs=rgb_src.crs, transform=rgb_src.transform, dtype='uint8') rgb_norm_out.write(rgb_norm) rgb_norm_out.close() indices_computed = False change to True if using the index helper function above if indices_computed: index_stack = (index_stack * 255).astype(np.uint8) index_stack_t = index_stack.transpose(2,0,1) else: index_stack_t = index_stack index_stack_out=rasterio.open(os.path.join(raster_dir,'/index_stack.tif'), 'w', driver='Gtiff', width=rgb_src.width, height=rgb_src.height, count=3, crs=rgb_src.crs, transform=rgb_src.transform, dtype='uint8') index_stack_out.write(index_stack_t) index_stack_out.close() labels = labels.astype(np.uint8) labels_out=rasterio.open(os.path.join(raster_dir,'/labels.tif'), 'w', driver='Gtiff', width=labels_src.width, height=labels_src.height, count=1, crs=labels_src.crs, transform=labels_src.transform, dtype='uint8') labels_out.write(labels, 1) labels_out.close() return os.path.join(raster_dir,'/index_stack.tif'), os.path.join(raster_dir,'/labels.tif') ``` Now let's divide the optical/index stack and labeled image into 224x224 pixel tiles.&8681 ```pythondef tile(index_stack, labels, prefix, width, height, output_dir, brighten=False): tiles_dir = os.path.join(output_dir,'tiled/') img_dir = os.path.join(output_dir,'tiled/indices/') label_dir = os.path.join(output_dir,'tiled/labels/') dirs = [tiles_dir, img_dir, label_dir] for d in dirs: if not os.path.exists(d): os.makedirs(d) def get_tiles(ds): get number of rows and columns (pixels) in the entire input image nols, nrows = ds.meta['width'], ds.meta['height'] get the grid from which tiles will be made offsets = product(range(0, nols, width), range(0, nrows, height)) get the window of the entire input image big_window = windows.Window(col_off=0, row_off=0, 
width=nols, height=nrows) tile the big window by mini-windows per grid cell for col_off, row_off in offsets: window = windows.Window(col_off=col_off, row_off=row_off, width=width, height=height).intersection(big_window) transform = windows.transform(window, ds.transform) yield window, transform tile_width, tile_height = width, height def crop(inpath, outpath, c): read input image image = rasterio.open(inpath) get the metadata meta = image.meta.copy() print("meta: ", meta) set the number of channels to 3 or 1, depending on if its the index image or labels image meta['count'] = int(c) set the tile output file format to PNG (saves spatial metadata unlike JPG) meta['driver']='PNG' meta['dtype']='uint8' tile the input image by the mini-windows i = 0 for window, transform in get_tiles(image): meta['transform'] = transform meta['width'], meta['height'] = window.width, window.height outfile = os.path.join(outpath,"tile_%s_%s.png" % (prefix, str(i))) with rasterio.open(outfile, 'w', **meta) as outds: if brighten: imw = image.read(window=window) imw = imw.transpose(1,2,0) imwb = change_brightness(imw, value=50) imwb = imwb.transpose(2,0,1) outds.write(imwb) else: outds.write(image.read(window=window)) i = i+1 def process_tiles(index_flag): tile the input images, when index_flag == True, we are tiling the spectral index image, when False we are tiling the labels image if index_flag==True: inpath = os.path.join(raster_dir,'/*indices_byte_scaled.tif') outpath=img_dir crop(inpath, outpath, 3) else: inpath = os.path.join(raster_dir,'/*label.tif') outpath=label_dir crop(inpath, outpath, 1) process_tiles(index_flag=True) tile index stack process_tiles(index_flag=False) tile labels return tiles_dir, img_dir, label_dir``` Run the image processing workflow.&9888 Pain point &9888 The reason for not running this raster processing code in this workshop is due to a limitation of a Google Drive based workflow. Having a VM with a mounted SSD would be a good start to solving these associated latency problems incurred from I/O of data hosted in Google Drive.&8681 ```pythonprocess = Falseif process: raster_dir = os.path.join(root_dir,'/rasters/') If you want to write the files out to your personal drive, set write_out = True, but I recommend trying that in your free time because it takes about 1 hour for all composites. 
write_out = True False if write_out == True: read the rasters and scale to 8bit print("reading and scaling rasters...") raster_dir, rgb, rgbn, indices, labels, rgb_src, labels_src, target_crs = raster_read(raster_dir) Calculate indices and combine the indices into one single 3 channel image print("calculating spectral indices...") index_stack = indexnormstack(rgbn.read(1), rgbn.read(2), rgbn.read(3), rgbn.read(4)) Rasterize labels labels = label([os.path.join(root_dir,'TerraBio_Imaflora.geojson'), os.path.join(root_dir,'TerraBio_Para.geojson')], labels_src) Save index stack and labels to geotiff print("writing scaled rasters and labels to file...") index_stack_file, labels_file = save_images(personal_dir, rgb, index_stack, labels, rgb_src, labels_src) Tile images into 224x224 print("tiling the indices and labels...") tiles_dir, img_dir, label_dir = tile(index_stack, labels, 'terrabio', 224, 224, raster_dir, brighten=False) else: print("Not writing to file; using data in shared drive.")else: print("Using pre-processed dataset.")``` Read the data into memory Getting set up with the data```{important}Create drive shortcuts of the tiled imagery to your own My Drive Folder by Right-Clicking on the Shared folder `servir-tf-devseed`. Then, this folder will be available at the following path that is accessible with the google.colab `drive` module: `'/content/gdrive/My Drive/servir-tf-devseed/'````We'll be working witht he following folders in the `servir-tf-devseed` folder:```servir-tf-devseed/├── images/├── images_bright/├── indices/├── indices_800/├── labels/├── labels_800/├── background_list_train.txt├── train_list_clean.txt└── terrabio_classes.csv```
###Code
# Read the classes
class_index = pd.read_csv(os.path.join(root_dir,'terrabio_classes.csv'))
class_names = class_index.class_name.unique()
print(class_index)
###Output
class_id class_name
0 0 Background
1 1 Bushland
2 2 Pasture
3 3 Roads
4 4 Cocoa
5 5 Tree cover
6 6 Developed
7 7 Water
8 8 Agriculture
###Markdown
```{important}Normally we would read the image files from the directories and then process forward from there with background removal with the next **three** illustrated functions, however, due to slow I/O in Google Colab we will read the images list with 90% background removal already performed from a pre-saved list in the shared drive.```Get lists of image and label tile pairs for training and testing.&8681 ```pythondef get_train_test_lists(imdir, lbldir): imgs = glob.glob(os.path.join(imdir,"/*.png")) print(imgs[0:1]) dset_list = [] for img in imgs: filename_split = os.path.splitext(img) filename_zero, fileext = filename_split basename = os.path.basename(filename_zero) dset_list.append(basename) x_filenames = [] y_filenames = [] for img_id in dset_list: x_filenames.append(os.path.join(imdir, "{}.png".format(img_id))) y_filenames.append(os.path.join(lbldir, "{}.png".format(img_id))) print("number of images: ", len(dset_list)) return dset_list, x_filenames, y_filenamestrain_list, x_train_filenames, y_train_filenames = get_train_test_lists(img_dir, label_dir)```number of images: 37350Check for the proportion of background tiles. This takes a while. So we can skip by loading from saved results.&8681 ```pythonskip = Trueif not skip: background_list_train = [] for i in train_list: read in each labeled images print(os.path.join(label_dir,"{}.png".format(i))) img = np.array(Image.open(os.path.join(label_dir,"{}.png".format(i)))) check if no values in image are greater than zero (background value) if img.max()==0: background_list_train.append(i) print("Number of background images: ", len(background_list_train)) with open(os.path.join(root_dir,'background_list_train.txt'), 'w') as f: for item in background_list_train: f.write("%s\n" % item)else: background_list_train = [line.strip() for line in open("background_list_train.txt", 'r')] print("Number of background images: ", len(background_list_train))```Number of background images: 36489We will keep only 10% of the total. Too many background tiles can cause a form of class imbalance.&8681 ```pythonbackground_removal = len(background_list_train) * 0.9train_list_clean = [y for y in train_list if y not in background_list_train[0:int(background_removal)]]x_train_filenames = []y_train_filenames = []for i, img_id in zip(tqdm(range(len(train_list_clean))), train_list_clean): pass x_train_filenames.append(os.path.join(img_dir, "{}.png".format(img_id))) y_train_filenames.append(os.path.join(label_dir, "{}.png".format(img_id)))print("Number of background tiles: ", background_removal)print("Remaining number of tiles after 90% background removal: ", len(train_list_clean))```Number of background tiles: 32840Remaining number of tiles after 90% background removal: 4510 ```{important}The cell below contains the shortcut read of prepped training image list. ```
###Code
def get_train_test_lists(imdir, lbldir):
train_list = [line.strip() for line in open("train_list_clean.txt", 'r')]
x_filenames = []
y_filenames = []
for img_id in train_list:
x_filenames.append(os.path.join(imdir, "{}.png".format(img_id)))
y_filenames.append(os.path.join(lbldir, "{}.png".format(img_id)))
print("Number of images: ", len(train_list))
return train_list, x_filenames, y_filenames
train_list, x_train_filenames, y_train_filenames = get_train_test_lists(img_dir, label_dir)
###Output
Number of images: 4510
###Markdown
Now that we have our set of files we want to use for developing our model, we need to split them into three sets:

* the training set for the model to learn from
* the validation set that allows us to evaluate models and make decisions to change models
* and the test set that we will use to communicate the results of the best performing model (as determined by the validation set)

We will split the index tiles and label tiles into train, validation and test sets: 70%, 20% and 10%, respectively.
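The next cell produces these proportions with two successive calls to `train_test_split`. As a rough check of the arithmetic (an added illustration, not part of the original notebook; it assumes scikit-learn rounds a fractional `test_size` up to a whole number of samples):

```python
# Rough arithmetic behind the 70/20/10 split (added illustration).
import math

n = 4510                            # tiles remaining after background removal
holdout = math.ceil(n * 0.3)        # 1353 tiles held out of training
test = math.ceil(holdout * 0.33)    # 447 tiles -> test set (~10% of the total)
val = holdout - test                # 906 tiles -> validation set (~20%)
train = n - holdout                 # 3157 tiles -> training set (~70%)
print(train, val, test)             # -> 3157 906 447
```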
###Code
x_train_filenames, x_val_filenames, y_train_filenames, y_val_filenames = train_test_split(x_train_filenames, y_train_filenames, test_size=0.3, random_state=42)
x_val_filenames, x_test_filenames, y_val_filenames, y_test_filenames = train_test_split(x_val_filenames, y_val_filenames, test_size=0.33, random_state=42)
num_train_examples = len(x_train_filenames)
num_val_examples = len(x_val_filenames)
num_test_examples = len(x_test_filenames)
print("Number of training examples: {}".format(num_train_examples))
print("Number of validation examples: {}".format(num_val_examples))
print("Number of test examples: {}".format(num_test_examples))
###Output
Number of training examples: 3157
Number of validation examples: 906
Number of test examples: 447
###Markdown
```{warning}
**Long running cell** \
The code below checks the label values present in the train, val, and test partitions. We won't run this since it takes over 10 minutes on Colab due to slow I/O.
```

```python
vals_train = []
vals_val = []
vals_test = []

def get_vals_in_partition(partition_list, x_filenames, y_filenames):
    for x, y, i in zip(x_filenames, y_filenames, tqdm(range(len(y_filenames)))):
        try:
            img = np.array(Image.open(y))
            vals = np.unique(img)
            partition_list.append(vals)
        except:
            continue

def flatten(partition_list):
    return [item for sublist in partition_list for item in sublist]

get_vals_in_partition(vals_train, x_train_filenames, y_train_filenames)
get_vals_in_partition(vals_val, x_val_filenames, y_val_filenames)
get_vals_in_partition(vals_test, x_test_filenames, y_test_filenames)
```

```python
print("Values in training partition: ", set(flatten(vals_train)))
print("Values in validation partition: ", set(flatten(vals_val)))
print("Values in test partition: ", set(flatten(vals_test)))
```

Values in training partition:  {0, 1, 2, 3, 4, 5, 6, 7, 8}
Values in validation partition:  {0, 1, 2, 3, 4, 5, 6, 7, 8}
Values in test partition:  {0, 1, 2, 3, 4, 5, 6, 7, 8}

Visualize the data

```{warning}
**Long running cell** \
The code below selects and loads foreground examples randomly. We won't run this since it takes a while on Colab due to slow I/O.
```

```python
display_num = 3

background_list_train = [line.strip() for line in open("background_list_train.txt", 'r')]

# select only tiles with foreground labels present
foreground_list_x = []
foreground_list_y = []
for x, y in zip(x_train_filenames, y_train_filenames):
    try:
        filename_split = os.path.splitext(y)
        filename_zero, fileext = filename_split
        basename = os.path.basename(filename_zero)
        if basename not in background_list_train:
            foreground_list_x.append(x)
            foreground_list_y.append(y)
        else:
            continue
    except:
        continue

num_foreground_examples = len(foreground_list_y)

# randomize the choice of image and label pairs
r_choices = np.random.choice(num_foreground_examples, display_num)
```

```{important}
Instead, we will read and plot a few sample foreground training images and labels from their pathnames. Note: this may still take a few execution tries to work. In practice, Google Colab takes some time to connect to data in Google Drive, so this sometimes returns an error on the first (few) attempt(s).
```
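One small optional tweak to the skipped filtering loop above (an added suggestion, not in the original code): membership tests against a Python list are linear-time, so converting the background list to a set makes the foreground filtering much faster over tens of thousands of tiles:

```python
# Optional variation (not in the original notebook): use a set for O(1) membership
# checks when filtering out background tiles.
import os

background_set = set(line.strip() for line in open("background_list_train.txt", 'r'))

foreground_pairs = [
    (x, y) for x, y in zip(x_train_filenames, y_train_filenames)
    if os.path.basename(os.path.splitext(y)[0]) not in background_set
]
```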
###Code
display_num = 3
background_list_train = [line.strip() for line in open("background_list_train.txt", 'r')]
foreground_list_x = [
f'{img_dir}/tile_terrabio_15684.png',
f'{img_dir}/tile_terrabio_23056.png',
f'{img_dir}/tile_terrabio_21877.png'
]
foreground_list_y = [
f'{label_dir}/tile_terrabio_15684.png',
f'{label_dir}/tile_terrabio_23056.png',
f'{label_dir}/tile_terrabio_21877.png'
]
# confirm files exist
for fx, fy in zip(foreground_list_x, foreground_list_y):
if os.path.isfile(fx) and os.path.isfile(fy):
print(fx, " and ", fy, " exist.")
else:
print(fx, " and ", fy, " don't exist.")
num_foreground_examples = len(foreground_list_y)
# randomize the choice of image and label pairs
#r_choices = np.random.choice(num_foreground_examples, display_num)
plt.figure(figsize=(10, 15))
for i in range(0, display_num * 2, 2):
#img_num = r_choices[i // 2]
img_num = i // 2
x_pathname = foreground_list_x[img_num]
y_pathname = foreground_list_y[img_num]
plt.subplot(display_num, 2, i + 1)
plt.imshow(mpimg.imread(x_pathname))
plt.title("Original Image")
example_labels = Image.open(y_pathname)
label_vals = np.unique(np.array(example_labels))
plt.subplot(display_num, 2, i + 2)
plt.imshow(example_labels)
plt.title("Masked Image")
plt.suptitle("Examples of Images and their Masks")
plt.show()
###Output
/content/gdrive/My Drive/servir-tf-devseed/indices/tile_terrabio_15684.png and /content/gdrive/My Drive/servir-tf-devseed/labels/tile_terrabio_15684.png don't exist.
/content/gdrive/My Drive/servir-tf-devseed/indices/tile_terrabio_23056.png and /content/gdrive/My Drive/servir-tf-devseed/labels/tile_terrabio_23056.png exist.
/content/gdrive/My Drive/servir-tf-devseed/indices/tile_terrabio_21877.png and /content/gdrive/My Drive/servir-tf-devseed/labels/tile_terrabio_21877.png exist.