# Backpropagation Through Time
:label:`chapter_bptt`
So far we have repeatedly alluded to things like *exploding gradients*,
*vanishing gradients*, *truncating backprop*, and the need to
*detach the computational graph*. For instance, in the previous
section we invoked `s.detach()` on the sequence. None of this was fully
explained, in the interest of building a model quickly and
seeing how it works. In this section we delve a bit more deeply
into the details of backpropagation for sequence models and why (and
how) the math works. For a more detailed discussion, e.g. about
randomization and backpropagation, see the paper by
[Tallec and Ollivier, 2017](https://arxiv.org/abs/1705.08209).
We encountered some of the effects of gradient explosion when we first
implemented recurrent neural networks (:numref:`chapter_rnn_scratch`). In
particular, if you solved the problems in the problem set, you would
have seen that gradient clipping is vital to ensure proper
convergence. To provide a better understanding of this issue, this
section reviews how gradients are computed for sequences. Note
that there is nothing conceptually new here: we are still merely applying the chain rule to compute gradients. Nonetheless it is
worthwhile reviewing backpropagation (:numref:`chapter_backprop`) once
more.
Forward propagation in a recurrent neural network is relatively
straightforward. Backpropagation through time is a specific
application of backpropagation to recurrent neural networks. It
requires us to unroll the recurrent neural network one time step at a time to
obtain the dependencies between model variables and parameters. Then,
based on the chain rule, we apply backpropagation to compute and
store gradients. Since sequences can be rather long, the dependency chains can be rather long too. For a sequence of 1000 characters, for instance, the first symbol could potentially have significant influence on the symbol at position 1000. Computing the full gradient is not really feasible: it takes too long, requires too much memory, and involves over 1000 matrix-vector products before we arrive at that very elusive gradient. This is a process fraught with computational and statistical uncertainty. In the following we explain what happens and how to address it in practice.
## A Simplified Recurrent Network
We start with a simplified model of how an RNN works. This model ignores details about the specifics of the hidden state and how it is being updated. These details are immaterial to the analysis and would only serve to clutter the notation and make it look more intimidating.
$$h_t = f(x_t, h_{t-1}, w) \text{ and } o_t = g(h_t, w)$$
Here $h_t$ denotes the hidden state, $x_t$ the input and $o_t$ the output. We have a chain of values $\{\ldots (h_{t-1}, x_{t-1}, o_{t-1}), (h_{t}, x_{t}, o_t), \ldots\}$ that depend on each other via recursive computation. The forward pass is fairly straightforward. All we need is to loop through the $(x_t, h_t, o_t)$ triples one step at a time. This is then evaluated by an objective function measuring the discrepancy between outputs $o_t$ and some desired target $y_t$
$$L(x,y, w) = \sum_{t=1}^T l(y_t, o_t).$$
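To make the notation concrete, here is a minimal sketch of this forward pass in Python, with a scalar hidden state and made-up stand-ins for $f$, $g$ and $l$ (the function names and the toy choices of `f`, `g`, `l` are illustrative assumptions, not part of any particular library):
```
import numpy as np

def forward(xs, ys, w, f, g, l, h0=0.0):
    """Run the simplified RNN forward and accumulate L = sum_t l(y_t, o_t)."""
    h, L = h0, 0.0
    for x_t, y_t in zip(xs, ys):
        h = f(x_t, h, w)   # hidden state update: h_t = f(x_t, h_{t-1}, w)
        o = g(h, w)        # output: o_t = g(h_t, w)
        L += l(y_t, o)     # accumulate the per-step loss
    return L

# toy instantiations, purely for illustration
f = lambda x, h, w: np.tanh(w * x + w * h)
g = lambda h, w: w * h
l = lambda y, o: 0.5 * (y - o) ** 2
xs, ys = np.random.randn(10), np.random.randn(10)
print(forward(xs, ys, w=0.5, f=f, g=g, l=l))
```
Backpropagation then has to differentiate this loop with respect to `w`, which is where the recursion below comes from.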
For backpropagation, matters are a bit trickier. Let's compute the gradients of the objective function $L$ with respect to the parameters $w$. We get
$$\begin{aligned}
\partial_{w} L & = \sum_{t=1}^T \partial_w l(y_t, o_t) \\
& = \sum_{t=1}^T \partial_{o_t} l(y_t, o_t) \left[\partial_w g(h_t, w) + \partial_{h_t} g(h_t,w) \partial_w h_t\right]
\end{aligned}$$
The first part of the derivative is easy to compute (this is after all the instantaneous loss gradient at time $t$). The second part is where things get tricky, since we need to compute the effect of the parameters on $h_t$. For each term we have the recursion:
$$\begin{aligned}
\partial_w h_t & = \partial_w f(x_t, h_{t-1}, w) + \partial_h f(x_t, h_{t-1}, w) \partial_w h_{t-1} \\
& = \sum_{i=1}^{t} \left[\prod_{j=i+1}^{t} \partial_h f(x_j, h_{j-1}, w) \right] \partial_w f(x_{i}, h_{i-1}, w)
\end{aligned}$$
This chain can get *very* long whenever $t$ is large. While we can use the chain rule to compute $\partial_w h_t$ recursively, this might not be ideal. Let's discuss a number of strategies for dealing with this problem:
**Compute the full sum.** This is very slow, and gradients can blow up,
since subtle changes in the initial conditions can potentially affect
the outcome a lot. That is, we could see something similar to the
butterfly effect, where minimal changes in the initial conditions lead to disproportionate changes in the outcome. This is undesirable for the model we want to estimate: after all, we are looking for robust estimators that generalize well. Hence this strategy is almost never used in practice.
**Truncate the sum after $\tau$ steps.** This is what we have been
discussing so far. It leads to an *approximation* of the true
gradient, obtained simply by terminating the sum above at $\partial_w
h_{t-\tau}$. The approximation error is the dropped remainder, namely
$\partial_w h$ at the truncation point multiplied by a product of
gradients involving $\partial_h f$. In practice this works quite
well. It is what is commonly referred to as truncated BPTT
(backpropagation through time). One of its consequences is that
the model focuses primarily on short-term influence rather than
long-term consequences. This is actually *desirable*, since it biases
the estimate towards simpler and more stable models.
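To illustrate the mechanics, here is a minimal sketch of the recursion for $\partial_w h_t$ with optional truncation, using the same toy $f(x, h, w) = \tanh(wx + wh)$ as in the sketch above (the closed-form partials below are specific to that made-up choice and only serve to show where the truncation enters):
```
import numpy as np

def grad_w_h(xs, w, tau=None):
    """Accumulate dh_t/dw via the recursion, optionally truncating every tau steps."""
    h, dh_dw = 0.0, 0.0
    for t, x in enumerate(xs):
        a = w * x + w * h                          # pre-activation of f
        df_dw = (1 - np.tanh(a) ** 2) * (x + h)    # direct dependence of f on w
        df_dh = (1 - np.tanh(a) ** 2) * w          # dependence of f on h_{t-1}
        if tau is not None and t % tau == 0:
            dh_dw = 0.0                            # "detach": discard the carried history
        dh_dw = df_dw + df_dh * dh_dw              # the recursion for dh_t/dw
        h = np.tanh(a)
    return dh_dw

xs = np.random.randn(1000)
print(grad_w_h(xs, w=0.5), grad_w_h(xs, w=0.5, tau=35))
```
In a deep learning framework, the same effect is obtained by detaching the hidden state from the computational graph every $\tau$ steps, which is what `s.detach()` accomplished in the previous section.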
**Randomized Truncation.** Lastly, we can replace $\partial_w h_t$ by a
random variable which is correct in expectation but which truncates
the sequence. This is achieved by using a sequence of random variables $\xi_t$ with
$\mathbf{E}[\xi_t] = 1$, where $\Pr(\xi_t = 0) = 1-\pi$ and
$\Pr(\xi_t = \pi^{-1}) = \pi$. We use this to replace the gradient recursion:
$$z_t = \partial_w f(x_t, h_{t-1}, w) + \xi_t \partial_h f(x_t, h_{t-1}, w) \partial_w h_{t-1}$$
It follows from the definition of $\xi_t$ that $\mathbf{E}[z_t] = \partial_w h_t$. Whenever $\xi_t = 0$ the expansion terminates at that point. This leads to a weighted sum of sequences of varying lengths, where long sequences are rare but appropriately overweighted. [Tallec and Ollivier, 2017](https://arxiv.org/abs/1705.08209) proposed this approach. Unfortunately, while appealing in theory, it does not work much better than simple truncation, most likely for several reasons. Firstly, the effect of an observation after a moderate number of backpropagation steps into the past is quite sufficient to capture dependencies in practice. Secondly, the increased variance counteracts the fact that the gradient is more accurate in expectation. Thirdly, we actually *want* models that have only a short range of interaction, so truncated BPTT has a slight regularizing effect, which can be desirable.
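A small numerical sketch of the $\xi_t$ distribution follows (just a sanity check of the unbiasedness, not a training implementation; the value of $\pi$ is arbitrary):
```
import numpy as np

def sample_xi(pi, size, rng):
    """xi_t = 1/pi with probability pi, else 0, so that E[xi_t] = 1."""
    return np.where(rng.random(size) < pi, 1.0 / pi, 0.0)

rng = np.random.default_rng(0)
xi = sample_xi(pi=0.2, size=100_000, rng=rng)
print(xi.mean())          # close to 1: unbiased in expectation
print((xi == 0).mean())   # close to 1 - pi = 0.8: most steps terminate the expansion
```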

The picture above illustrates the three cases when analyzing the first few words of *The Time Machine*: randomized truncation partitions the text into segments of varying length. Regular truncated BPTT breaks it into sequences of the same length, and full BPTT leads to a computationally infeasible expression.
## The Computational Graph
In order to visualize the dependencies between model variables and parameters during computation in a recurrent neural network, we can draw a computational graph for the model, as shown below. For example, the computation of the hidden state of time step 3, $\mathbf{h}_3$, depends on the model parameters $\mathbf{W}_{hx}$ and $\mathbf{W}_{hh}$, the hidden state of the previous time step, $\mathbf{h}_2$, and the input of the current time step, $\mathbf{x}_3$.

## BPTT in Detail
Now that we have discussed the general principle, let's look at BPTT in detail, distinguishing between the different sets of weight matrices ($\mathbf{W}_{hx}$, $\mathbf{W}_{hh}$, and $\mathbf{W}_{oh}$) in a simple linear latent variable model:
$$\mathbf{h}_t = \mathbf{W}_{hx} \mathbf{x}_t + \mathbf{W}_{hh} \mathbf{h}_{t-1} \text{ and }
\mathbf{o}_t = \mathbf{W}_{oh} \mathbf{h}_t$$
Following the discussion in :numref:`chapter_backprop` we compute gradients $\partial L/\partial \mathbf{W}_{hx}$, $\partial L/\partial \mathbf{W}_{hh}$, and $\partial L/\partial \mathbf{W}_{oh}$ for
$L(\mathbf{x}, \mathbf{y}, \mathbf{W}) = \sum_{t=1}^T l(\mathbf{o}_t, y_t)$.
Taking the derivatives with respect to $\mathbf{W}_{oh}$ is fairly straightforward, and we obtain
$$\partial_{\mathbf{W}_{oh}} L = \sum_{t=1}^T \mathrm{prod}
\left(\partial_{\mathbf{o}_t} l(\mathbf{o}_t, y_t), \mathbf{h}_t\right)$$
The dependency on $\mathbf{W}_{hx}$ and $\mathbf{W}_{hh}$ is a bit more tricky since it involves a chain of derivatives. We begin with
$$\begin{aligned}
\partial_{\mathbf{W}_{hh}} L & = \sum_{t=1}^T \mathrm{prod}
\left(\partial_{\mathbf{o}_t} l(\mathbf{o}_t, y_t), \mathbf{W}_{oh}, \partial_{\mathbf{W}_{hh}} \mathbf{h}_t\right) \\
\partial_{\mathbf{W}_{hx}} L & = \sum_{t=1}^T \mathrm{prod}
\left(\partial_{\mathbf{o}_t} l(\mathbf{o}_t, y_t), \mathbf{W}_{oh}, \partial_{\mathbf{W}_{hx}} \mathbf{h}_t\right)
\end{aligned}$$
After all, hidden states depend on each other and on past inputs. The key quantity is how past hidden states affect future hidden states.
$$\partial_{\mathbf{h}_t} \mathbf{h}_{t+1} = \mathbf{W}_{hh}^\top
\text{ and thus }
\partial_{\mathbf{h}_t} \mathbf{h}_T = \left(\mathbf{W}_{hh}^\top\right)^{T-t}$$
Chaining terms together yields
$$\begin{aligned}
\partial_{\mathbf{W}_{hh}} \mathbf{h}_t & = \sum_{j=1}^t \left(\mathbf{W}_{hh}^\top\right)^{t-j} \mathbf{h}_j \\
\partial_{\mathbf{W}_{hx}} \mathbf{h}_t & = \sum_{j=1}^t \left(\mathbf{W}_{hh}^\top\right)^{t-j} \mathbf{x}_j.
\end{aligned}$$
A number of things follow from this potentially very intimidating expression. Firstly, it pays to store intermediate results, i.e. powers of $\mathbf{W}_{hh}$, as we work our way through the terms of the loss function $L$. Secondly, this simple *linear* example already exhibits some key problems of long sequence models: it involves potentially very large powers $\mathbf{W}_{hh}^j$. In these powers, eigenvalues smaller than $1$ vanish for large $j$ and eigenvalues larger than $1$ diverge. This is numerically unstable and gives undue importance to potentially irrelevant past detail. One way to address this is to truncate the sum at a computationally convenient size. Later in this chapter we will see how more sophisticated sequence models such as LSTMs can alleviate this further. In code, this truncation is effected by *detaching* the gradient after a given number of steps.
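The effect of large powers of $\mathbf{W}_{hh}$ is easy to check numerically. The sketch below uses a made-up matrix whose eigenvalues straddle $1$: the component along the eigenvalue-$0.9$ direction vanishes while the eigenvalue-$1.1$ component diverges, which is exactly the vanishing/exploding behavior described above.
```
import numpy as np

W = np.array([[0.9, 0.0],
              [0.0, 1.1]])   # eigenvalues 0.9 and 1.1, chosen for illustration
v = np.ones(2)
for j in [1, 10, 50, 100]:
    print(j, np.linalg.matrix_power(W, j) @ v)
# the first coordinate shrinks towards 0 (vanishing), the second grows without bound (exploding)
```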
## Summary
* Backpropagation through time is merely an application of backpropagation to sequence models with a hidden state.
* Truncation is needed for computational convenience and numerical stability.
* High powers of matrices have diverging or vanishing eigenvalues. This manifests itself in the form of exploding or vanishing gradients.
* For efficient computation, intermediate values are cached.
## Exercises
1. Assume that we have a symmetric matrix $\mathbf{M} \in \mathbb{R}^{n \times n}$ with eigenvalues $\lambda_i$. Without loss of generality assume that they are ordered in ascending order $\lambda_i \leq \lambda_{i+1}$. Show that $\mathbf{M}^k$ has eigenvalues $\lambda_i^k$.
1. Prove that for a random vector $\mathbf{x} \in \mathbb{R}^n$, with high probability $\mathbf{M}^k \mathbf{x}$ will be very closely aligned with the eigenvector $\mathbf{v}_n$ of $\mathbf{M}$ corresponding to its largest eigenvalue. Formalize this statement.
1. What does the above result mean for gradients in a recurrent neural network?
1. Besides gradient clipping, can you think of any other methods to cope with gradient explosion in recurrent neural networks?
## Scan the QR Code to [Discuss](https://discuss.mxnet.io/t/2366)

```
# toy example: principal components of a small 6x2 data matrix A
import numpy as np
A = np.array([
[ 3, 7],
[-4, -6],
[ 7, 8],
[ 1, -1],
[-4, -1],
[-3, -7]
])
A
A.shape
import pandas as pd
df = pd.DataFrame(A ,columns=['a0' , 'a1'])
df
A
a0= A[:,0]
a1 = A[:, 1]
a0
a1
np.cov(a0,a1)
np.sum(a0*a1)/5
A
A.T
sigma = A.T @ A/5               # sample covariance matrix of A (both columns have zero mean, n-1 = 5)
l , x = np.linalg.eig(sigma)    # eigenvalues l and eigenvectors x (columns of x)
l
x
sigma
sigma@x[:,0]
sigma@x[:,1]
x
print("first principal component")
x[:,1]
print("second principal component")
x[:, 0]
A
pc1_arr = A @ x[:,1]
pc1_arr
pc2_arr = A@x[:,0]
pc2_arr
# repeat the analysis on the glass dataset
df = pd.read_csv('glass.data')
df
df = df.drop(columns=['index' , 'Class'] , axis=1)
df
df.describe()
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
df_scaled = scaler.fit_transform(df)
df1 = pd.DataFrame(df_scaled)
df_scaled.T @df_scaled/213
df1.describe()
sigma = np.cov(df1)   # note: np.cov defaults to rowvar=True, so this treats rows as variables; sigma is not used below
l , x = np.linalg.eig(df_scaled.T @df_scaled/213)   # eigen-decomposition of the covariance of the standardized data
l
x
pc1_data = df_scaled @ x[:,0]
pc1_data.shape
pc2_data = df_scaled @ x[:,1]
pc2_data
from sklearn.decomposition import PCA
pca = PCA(n_components=2)
pc1_data
pca.fit_transform(df1)
df_scaled
df1
import matplotlib.pyplot as plt
pca = PCA()
principal_component = pca.fit_transform(df1)
plt.figure()
plt.plot(np.cumsum(pca.explained_variance_ratio_))
plt.xlabel("number of required component")
plt.ylabel("eVR")
pca.explained_variance_ratio_
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
data = pd.read_csv('glass.data')
data.isna().sum()
data
data = data.drop(labels= ['index' , 'Class'] , axis = 1)
data
data.describe()
from sklearn.preprocessing import StandardScaler
scalar = StandardScaler()
scaled_data = scalar.fit_transform(data)
df = pd.DataFrame(scaled_data, columns=data.columns)
df.describe()
from sklearn.decomposition import PCA
pca = PCA()
pca.fit_transform(df)
plt.figure()
plt.plot(np.cumsum(pca.explained_variance_ratio_))
plt.xlabel('number of column')
plt.ylabel('EVR')
plt.show()
pd.DataFrame(pca.fit_transform(df))
pca.explained_variance_ratio_
pca1 = PCA(n_components=5)
new_data = pca1.fit_transform(df)
new_data
x = pd.DataFrame(new_data , columns=['PC1' , 'PC2','PC3' , 'PC4','PC5'])
data
data1 = pd.read_csv('glass.data')
y = data1.Class
x
y
from sklearn.tree import DecisionTreeClassifier
from sklearn import tree
dt_model = DecisionTreeClassifier()
dt_model.fit(x,y)
data1
dt_model.predict(pca1.transform(scalar.transform([[1.52101,13.64,4.49,1.10,71.78,0.06,8.75,0.00,0.0]])))
def pc_calculation(x, no):
    """Project x onto the first `no` eigenvectors of its covariance matrix (np.linalg.eig does not sort eigenvalues)."""
    pca = {}
    scaler = StandardScaler()
    x_scaled = scaler.fit_transform(x)
    # eigen-decomposition of the sample covariance matrix of the standardized data
    l, evecs = np.linalg.eig((x_scaled.T @ x_scaled) / (x_scaled.shape[0] - 1))
    for i in range(no):
        pc = x_scaled @ evecs[:, i]  # project the data onto the i-th eigenvector
        pca[i] = pc
    pca_df = pd.DataFrame(pca)
    return pca_df
```
## Dependencies
```
import warnings, json, random, os
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.model_selection import KFold, StratifiedKFold
from sklearn.metrics import mean_squared_error
import tensorflow as tf
import tensorflow.keras.layers as L
import tensorflow.keras.backend as K
from tensorflow.keras import optimizers, losses, Model
from tensorflow.keras.callbacks import EarlyStopping, ReduceLROnPlateau
def seed_everything(seed=0):
random.seed(seed)
np.random.seed(seed)
tf.random.set_seed(seed)
os.environ['PYTHONHASHSEED'] = str(seed)
os.environ['TF_DETERMINISTIC_OPS'] = '1'
SEED = 0
seed_everything(SEED)
warnings.filterwarnings('ignore')
```
# Model parameters
```
config = {
"BATCH_SIZE": 64,
"EPOCHS": 100,
"LEARNING_RATE": 1e-3,
"ES_PATIENCE": 10,
"N_FOLDS": 5,
"N_USED_FOLDS": 5,
}
with open('config.json', 'w') as json_file:
json.dump(json.loads(json.dumps(config)), json_file)
config
```
# Load data
```
database_base_path = '/kaggle/input/stanford-covid-vaccine/'
train = pd.read_json(database_base_path + 'train.json', lines=True)
test = pd.read_json(database_base_path + 'test.json', lines=True)
print('Train samples: %d' % len(train))
display(train.head())
print(f'Test samples: {len(test)}')
display(test.head())
```
## Auxiliary functions
```
token2int = {x:i for i, x in enumerate('().ACGUBEHIMSX')}
token2int_seq = {x:i for i, x in enumerate('ACGU')}
token2int_struct = {x:i for i, x in enumerate('().')}
token2int_loop = {x:i for i, x in enumerate('BEHIMSX')}
def plot_metrics(history):
    metric_list = [m for m in history.keys() if m != 'lr']
size = len(metric_list)//2
fig, axes = plt.subplots(size, 1, sharex='col', figsize=(20, size * 5))
if size > 1:
axes = axes.flatten()
else:
axes = [axes]
for index in range(len(metric_list)//2):
metric_name = metric_list[index]
val_metric_name = metric_list[index+size]
axes[index].plot(history[metric_name], label='Train %s' % metric_name)
axes[index].plot(history[val_metric_name], label='Validation %s' % metric_name)
axes[index].legend(loc='best', fontsize=16)
axes[index].set_title(metric_name)
axes[index].axvline(np.argmin(history[metric_name]), linestyle='dashed')
axes[index].axvline(np.argmin(history[val_metric_name]), linestyle='dashed', color='orange')
plt.xlabel('Epochs', fontsize=16)
sns.despine()
plt.show()
def preprocess_inputs(df, encoder, cols=['sequence', 'structure', 'predicted_loop_type']):
return np.transpose(
np.array(
df[cols]
.applymap(lambda seq: [encoder[x] for x in seq])
.values
.tolist()
),
(0, 2, 1)
)
def evaluate_model(df, y_true, y_pred, target_cols):
# Complete data
metrics = []
metrics_clean = []
metrics_noisy = []
for idx, col in enumerate(pred_cols):
metrics.append(np.sqrt(np.mean((y_true[:, :, idx] - y_pred[:, :, idx])**2)))
target_cols = ['Overall'] + target_cols
metrics = [np.mean(metrics)] + metrics
# SN_filter = 1
idxs = train[train['SN_filter'] == 1].index
for idx, col in enumerate(pred_cols):
metrics_clean.append(np.sqrt(np.mean((y_true[idxs, :, idx] - y_pred[idxs, :, idx])**2)))
metrics_clean = [np.mean(metrics_clean)] + metrics_clean
# SN_filter = 0
idxs = train[train['SN_filter'] == 0].index
for idx, col in enumerate(pred_cols):
metrics_noisy.append(np.sqrt(np.mean((y_true[idxs, :, idx] - y_pred[idxs, :, idx])**2)))
metrics_noisy = [np.mean(metrics_noisy)] + metrics_noisy
metrics_df = pd.DataFrame({'Metric': target_cols, 'MCRMSE': metrics, 'MCRMSE (clean)': metrics_clean,
'MCRMSE (noisy)': metrics_noisy})
return metrics_df
def get_dataset(x, y=None, labeled=True, shuffled=True, batch_size=32, buffer_size=-1, seed=0):
if labeled:
dataset = tf.data.Dataset.from_tensor_slices(({'inputs_seq': x[:, 0, :, :],
'inputs_struct': x[:, 1, :, :],
'inputs_loop': x[:, 2, :, :],},
{'outputs': y}))
else:
dataset = tf.data.Dataset.from_tensor_slices(({'inputs_seq': x[:, 0, :, :],
'inputs_struct': x[:, 1, :, :],
'inputs_loop': x[:, 2, :, :],}))
if shuffled:
dataset = dataset.shuffle(2048, seed=seed)
dataset = dataset.batch(batch_size)
dataset = dataset.prefetch(AUTO)
return dataset
def get_dataset_sampling(x, y=None, shuffled=True, seed=0):
dataset = tf.data.Dataset.from_tensor_slices(({'inputs_seq': x[:, 0, :, :],
'inputs_struct': x[:, 1, :, :],
'inputs_loop': x[:, 2, :, :],},
{'outputs': y}))
if shuffled:
dataset = dataset.shuffle(2048, seed=seed)
return dataset
```
# Model
```
def model_fn(embed_dim=100, hidden_dim=128, dropout=0.5, pred_len=68):
inputs_seq = L.Input(shape=(None, 1), name='inputs_seq')
inputs_struct = L.Input(shape=(None, 1), name='inputs_struct')
inputs_loop = L.Input(shape=(None, 1), name='inputs_loop')
shared_embed = L.Embedding(input_dim=len(token2int), output_dim=embed_dim, name='shared_embedding')
embed_seq = shared_embed(inputs_seq)
embed_struct = shared_embed(inputs_struct)
embed_loop = shared_embed(inputs_loop)
x_concat = L.concatenate([embed_seq, embed_struct, embed_loop], axis=2, name='embedding_concatenate')
x_reshaped = L.Reshape((-1, x_concat.shape[2]*x_concat.shape[3]))(x_concat)
x = L.Bidirectional(L.GRU(hidden_dim, dropout=dropout, return_sequences=True))(x_reshaped)
x = L.Bidirectional(L.GRU(hidden_dim, dropout=dropout, return_sequences=True))(x)
# Since we are only making predictions on the first part of each sequence, we have to truncate it
x_truncated = x[:, :pred_len]
outputs = L.Dense(5, activation='linear', name='outputs')(x_truncated)
model = Model(inputs=[inputs_seq, inputs_struct, inputs_loop], outputs=outputs)
opt = optimizers.Adam(learning_rate=config['LEARNING_RATE'])
model.compile(optimizer=opt, loss=losses.MeanSquaredError())
return model
model = model_fn()
model.summary()
```
# Pre-process
```
feature_cols = ['sequence', 'structure', 'predicted_loop_type']
pred_cols = ['reactivity', 'deg_Mg_pH10', 'deg_pH10', 'deg_Mg_50C', 'deg_50C']
encoder_list = [token2int, token2int, token2int]
train_features = np.array([preprocess_inputs(train, encoder_list[idx], [col]) for idx, col in enumerate(feature_cols)]).transpose((1, 0, 2, 3))
train_labels = np.array(train[pred_cols].values.tolist()).transpose((0, 2, 1))
public_test = test.query("seq_length == 107").copy()
private_test = test.query("seq_length == 130").copy()
x_test_public = np.array([preprocess_inputs(public_test, encoder_list[idx], [col]) for idx, col in enumerate(feature_cols)]).transpose((1, 0, 2, 3))
x_test_private = np.array([preprocess_inputs(private_test, encoder_list[idx], [col]) for idx, col in enumerate(feature_cols)]).transpose((1, 0, 2, 3))
```
# Training
```
AUTO = tf.data.experimental.AUTOTUNE
skf = KFold(n_splits=config['N_USED_FOLDS'], shuffle=True, random_state=SEED)
history_list = []
oof = train[['id']].copy()
oof_preds = np.zeros(train_labels.shape)
test_public_preds = np.zeros((x_test_public.shape[0], x_test_public.shape[2], len(pred_cols)))
test_private_preds = np.zeros((x_test_private.shape[0], x_test_private.shape[2], len(pred_cols)))
for fold,(train_idx, valid_idx) in enumerate(skf.split(train_labels)):
if fold >= config['N_USED_FOLDS']:
break
print(f'\nFOLD: {fold+1}')
### Create datasets
x_train = train_features[train_idx]
y_train = train_labels[train_idx]
x_valid = train_features[valid_idx]
y_valid = train_labels[valid_idx]
# train_ds = get_dataset(x_train, y_train, labeled=True, shuffled=True, batch_size=config['BATCH_SIZE'], buffer_size=AUTO, seed=SEED)
valid_ds = get_dataset(x_valid, y_valid, labeled=True, shuffled=False, batch_size=config['BATCH_SIZE'], buffer_size=AUTO, seed=SEED)
test_public_ds = get_dataset(x_test_public, labeled=False, shuffled=False, batch_size=config['BATCH_SIZE'], buffer_size=AUTO, seed=SEED)
test_private_ds = get_dataset(x_test_private, labeled=False, shuffled=False, batch_size=config['BATCH_SIZE'], buffer_size=AUTO, seed=SEED)
# Create clean and noisy datasets
clean_idxs = np.intersect1d(train[train['SN_filter'] == 1].index, train_idx)
noisy_idxs = np.intersect1d(train[train['SN_filter'] == 0].index, train_idx)
clean_ds = get_dataset_sampling(train_features[clean_idxs], train_labels[clean_idxs], shuffled=True, seed=SEED)
noisy_ds = get_dataset_sampling(train_features[noisy_idxs], train_labels[noisy_idxs], shuffled=True, seed=SEED)
# Resampled TF Dataset
resampled_ds = tf.data.experimental.sample_from_datasets([clean_ds, noisy_ds], weights=[0.75, 0.25])
resampled_ds = resampled_ds.batch(config['BATCH_SIZE']).prefetch(AUTO)
### Model
K.clear_session()
model = model_fn()
model_path = f'model_{fold}.h5'
es = EarlyStopping(monitor='val_loss', mode='min', patience=config['ES_PATIENCE'],
restore_best_weights=True, verbose=1)
rlrp = ReduceLROnPlateau(monitor='val_loss', mode='min', factor=0.1, patience=5, verbose=1)
### Train
history = model.fit(resampled_ds,
validation_data=valid_ds,
callbacks=[es, rlrp],
epochs=config['EPOCHS'],
batch_size=config['BATCH_SIZE'],
verbose=2).history
history_list.append(history)
# Save last model weights
model.save_weights(model_path)
### Inference
valid_preds = model.predict(valid_ds)
# Short sequence (public test)
model = model_fn(pred_len=107)
model.load_weights(model_path)
test_public_preds += model.predict(test_public_ds) * (1 / config['N_USED_FOLDS'])
# Long sequence (private test)
model = model_fn(pred_len=130)
model.load_weights(model_path)
test_private_preds += model.predict(test_private_ds) * (1 / config['N_USED_FOLDS'])
oof_preds[valid_idx] = valid_preds
```
## Model loss graph
```
for fold, history in enumerate(history_list):
print(f'\nFOLD: {fold+1}')
print(f"Train {np.array(history['loss']).min():.5f} Validation {np.array(history['val_loss']).min():.5f}")
plot_metrics(history)
```
# Post-processing
```
# Assign values to OOF set
# Assign labels
for idx, col in enumerate(pred_cols):
val = train_labels[:, :, idx]
oof = oof.assign(**{col: list(val)})
# Assign preds
for idx, col in enumerate(pred_cols):
val = oof_preds[:, :, idx]
oof = oof.assign(**{f'{col}_pred': list(val)})
# Assign values to test set
preds_ls = []
for df, preds in [(public_test, test_public_preds), (private_test, test_private_preds)]:
for i, uid in enumerate(df.id):
single_pred = preds[i]
single_df = pd.DataFrame(single_pred, columns=pred_cols)
single_df['id_seqpos'] = [f'{uid}_{x}' for x in range(single_df.shape[0])]
preds_ls.append(single_df)
preds_df = pd.concat(preds_ls)
```
# Model evaluation
```
display(evaluate_model(train, train_labels, oof_preds, pred_cols))
```
# Visualize test predictions
```
submission = pd.read_csv(database_base_path + 'sample_submission.csv')
submission = submission[['id_seqpos']].merge(preds_df, on=['id_seqpos'])
```
# Test set predictions
```
display(submission.head(10))
display(submission.describe())
submission.to_csv('submission.csv', index=False)
```
# (Re)Introduction to Image Processing
**Version 0.1**
During Session 1 of the DSFP, Robert Lupton provided a problem that brilliantly introduced some of the basic challenges associated with measuring the flux of a point source. As such, we will revisit that problem as a review/introduction to the remainder of the week.
* * *
By AA Miller (CIERA/Northwestern & Adler) <br>
[But please note that this is essentially a copy of Robert's lecture.]
```
import numpy as np
import matplotlib.pyplot as plt
%matplotlib notebook
```
## Problem 1) An (oversimplified) 1-D Model
For this introductory problem we are going to simulate a 1 dimensional detector (the more complex issues associated with real stars on 2D detectors will be covered tomorrow by Dora). We will generate stars as Gaussians $N(\mu, \sigma^2)$, with mean $\mu$ and variance $\sigma^2$.
As observed by LSST, all stars are point sources that reflect the point spread function (PSF), which is produced by a combination of the atmosphere, telescope, and detector. A standard measure of the PSF's width is the Full Width Half Maximum (FWHM).
There is also a smooth background of light from several sources that I previously mentioned (the atmosphere, the detector, etc). We will refer to this background simply as "The Sky".
**Problem 1a**
Write a function `phi()` to simulate a (noise-free) 1D Gaussian PSF. The function should take `mu` and `fwhm` as arguments, and evaluate the PSF along a user-supplied array `x`.
*Hint* - for a Gaussian $N(0, \sigma^2)$, the FWHM is $2\sqrt{2\ln(2)}\,\sigma \approx 2.3548\sigma$.
```
def phi(x, mu, fwhm):
"""Evalute the 1d PSF N(mu, sigma^2) along x"""
sigma = fwhm/2.3548
flux = 1/np.sqrt(2*np.pi*sigma**2)*np.exp(-(x - mu)**2/(2*sigma**2))
return flux
```
**Problem 1b**
Plot the noise-free PSF for a star with $\mu = 10$ and $\mathrm{FWHM} = 3$. What is the flux of this star?
```
x = np.linspace(0,20,21)
plt.plot(x, phi(x, 10, 3))
print("The flux of the star is: {:.3f}".format(sum(phi(x, 10, 3))))
```
**Problem 1c**
Add Sky noise (a constant in this case) to your model. Define the sky as `S`, with total stellar flux `F`.
Plot the model for `S` = 100 and `F` = 500.
```
S = 100 * np.ones_like(x)
F = 500
plt.plot(x, S + F*phi(x, 10, 3))
```
## Problem 2) Add Noise
We will add noise to this simulation assuming that photon counting is the only source of uncertainty (this assumption is far from sufficient in real life). Within each pixel, $n$ photons are detected with an uncertainty that follows a [Poisson distribution](https://en.wikipedia.org/wiki/Poisson_distribution), whose mean and variance are both equal to $\mu$. If $n \gg 1$ then $P(\mu) \approx N(\mu, \mu)$ [you can safely assume we will be in this regime for the remainder of this problem].
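As a quick, purely illustrative check of the $P(\mu) \approx N(\mu, \mu)$ approximation (the choice of $\mu$ below is arbitrary):
```
import numpy as np

mu = 200  # large mean, well inside the n >> 1 regime
poisson_draws = np.random.poisson(mu, size=100000)
normal_draws = np.random.normal(mu, np.sqrt(mu), size=100000)
print(poisson_draws.mean(), poisson_draws.var())  # both close to mu
print(normal_draws.mean(), normal_draws.var())    # both close to mu
```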
**Problem 2a**
Calculate the noisy flux for the simulated star in Problem 1c.
*Hint* - you may find the function [`np.random.normal()`](https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.random.normal.html) helpful.
```
no_noise = S + F*phi(x, 10, 3)
noise = np.random.normal(no_noise, np.sqrt(no_noise))
```
**Problem 2b**
Overplot the noisy signal, with the associated uncertainties, on top of the noise-free signal.
```
plt.plot(x, no_noise)
plt.errorbar(x, noise, noise - no_noise, fmt = 'o')
```
## Problem 3) Flux Measurement
We will now attempt to measure the flux from a simulated star.
**Problem 3a**
Write a function `simulate()` to simulate the noisy flux measurements of a star with centroid `mu`, FWHM `fwhm`, sky background `S`, and flux `F`.
*Hint* - it may be helpful to plot the output of your function.
```
def simulate(x, mu, fwhm, S, F):
source = F * phi(x, mu, fwhm)
sky_plus_source = S * np.ones_like(x) + source
noisy_flux = np.random.normal(sky_plus_source, np.sqrt(sky_plus_source))
return noisy_flux
```
**Problem 3b**
Using an aperture with radius of 5 pixels centered on the source, measure the flux from a star centered at `mu` = 0, with `fwhm` = 5, `S` = 100, and `F` = 1000.
*Hint* - assume you can perfectly measure the background, and subtract this prior to the measurement.
```
x = np.linspace(-20,20,41)
sim_star = simulate(x, 0, 5, 100, 1000)
ap_flux = np.sum((sim_star - 100)[np.where(np.abs(x) <= 5)])
print("The star has flux = {:.3f}".format(ap_flux))
```
**Problem 3c**
Write a Monte Carlo simulator to estimate the mean and standard deviation of the flux from the simulated star.
*Food for thought* - what do you notice if you run your simulator many times?
```
sim_fluxes = np.empty(1000)
for sim_num, dummy in enumerate(sim_fluxes):
sim_star = simulate(x, 0, 5, 100, 1000)
ap_flux = np.sum((sim_star - 100)[np.where(np.abs(x) <= 5)])
sim_fluxes[sim_num] = ap_flux
print("The mean flux = {:.3f} with variance = {:.3f}".format(np.mean(sim_fluxes),
np.var(sim_fluxes, ddof=1)))
```
## Problem 4) PSF Flux measurement
In this problem we are going to use our knowledge of the PSF to estimate the flux of the star. We will compare these measurements to the aperture flux measurements above.
**Problem 4a**
Create the psf model, `psf`, which is equivalent to a noise-free star with `fwhm` = 5.
```
psf = phi(x, 0, 5)
psf /= sum(psf**2) # normalize so that summing psf against a noise-free star of flux F returns F
```
**Problem 4b**
Using the same parameters as problem 3, simulate a star and measure its PSF flux.
```
sim_star = simulate(x, 0, 5, 100, 1000)
psf_flux = np.sum((sim_star-100)*psf)
print("The PSF flux is {:.3f}".format(psf_flux))
```
**Problem 4c**
As before, write a Monte Carlo simulator to estimate the PSF flux of the star. How do your results compare to above?
```
sim_fluxes = np.empty(1000)
for sim_num, dummy in enumerate(sim_fluxes):
sim_star = simulate(x, 0, 5, 100, 1000)
psf_flux = np.sum((sim_star-100)*psf)
sim_fluxes[sim_num] = psf_flux
print("The mean flux = {:.3f} with variance = {:.3f}".format(np.mean(sim_fluxes),
np.var(sim_fluxes, ddof=1)))
```
## Challenge Problem
**Problem C1**
Simulate several stars with flux `F` and measure their flux to determine the "detection limit" relative to a sky brightness `S`.
**Problem C2**
1. Simulate multiple stars and determine the minimum separation for which multiple stars can be detected as a function of FWHM. Is this only a function of FWHM?
```
import numpy as np
import matplotlib.pyplot as plt
from numpy import linalg as LA
%matplotlib inline
t_hundredth,mean_hundredth,var_hundredth,kappa_hundredth=np.loadtxt("ramp_0.01.dat",skiprows=2,unpack=True)
t_tenth,mean_tenth,var_tenth,kappa_tenth=np.loadtxt("ramp_0.1.dat",skiprows=2,unpack=True)
t_1,mean_1,var_1,kappa_1=np.loadtxt("ramp_1.dat",skiprows=2,unpack=True)
t_5,mean_5,var_5,kappa_5=np.loadtxt("ramp_5.dat",skiprows=2,unpack=True)
t_10,mean_10,var_10,kappa_10=np.loadtxt("ramp_10.dat",skiprows=2,unpack=True)
t_c,mean_c,var_c=np.loadtxt("k_c.dat",skiprows=2,unpack=True)
plt.ylabel(r"$<x>$", fontsize=14)
plt.xlabel("time t")
plt.plot(t_tenth,5*np.exp(-t_tenth), 'k-', label=r"$5e^{-t}$")
plt.plot(t_c,mean_c, 'm--', label=r"$\kappa=1.0$")
plt.plot(t_10,mean_10, 'g--', label="$t_f=10$")
plt.plot(t_5,mean_5, 'r--', label="$t_f=5$")
plt.plot(t_1,mean_1, 'b--', label="$t_f=1$")
plt.plot(t_tenth,mean_tenth, 'c--', label="$t_f=0.1$")
#plt.plot(t_10,10*np.zeros(len(t_10)), 'k--',label='$<x>=0$')
#plt.plot(t,5*np.exp(-5*t**2), 'r--', label='$5e^{-5t^2}$')
plt.legend(shadow=True, fontsize=15)
plt.savefig("ramp_mean.eps")
plt.ylabel(r"$<x^2>$", fontsize=14)
plt.xlabel("time t")
plt.ylim(0,25)
plt.plot(t_c,var_c, 'k', label=r"$\kappa=1.0$")
#plt.plot(t_c,fn(t_c), 'r', label="$<x^2>_{anal1}$")
#plt.plot(t,fn1(t), 'r', label="$<x^2>_{anal2}$")
plt.plot(t_10,var_10, 'g--', label="$t_f=10$")
plt.plot(t_5,var_5, 'r--', label="$t_f=5$")
plt.plot(t_1,var_1, 'b--', label="$t_f=1$")
plt.plot(t_tenth,var_tenth, 'm--', label="$t_f=0.1$")
#plt.plot(t_hundredth,var_hundredth, 'c--', label="$t_f=0.01$")
#plt.plot(t,x_var_mediumk, 'c', label="$\kappa=t$")
#plt.plot(t,x_var_highk, 'g', label="$\kappa=5t$")
#plt.plot(t_10,10*np.ones(len(t_10)), 'c--',label='$<x^2>=10$')
#plt.plot(t_10,5*np.ones(len(t_10)), 'k--',label='$<x^2>=5$')
art=[]
lgd=plt.legend(loc=9, bbox_to_anchor=(1.25, 1.0), ncol=1, shadow=True, fontsize=14)
art.append(lgd)
plt.savefig("ramp_sigma.eps", additional_artists=art,bbox_inches="tight")
plt.ylabel(r"$<x^2>$", fontsize=14)
plt.xlabel("time t", fontsize=14)
#plt.ylim(4,12)
#plt.plot(t_c,var_c, 'k', label="$\kappa=1.0$")
#plt.plot(t,fn(t), 'r', label="$<x^2>_{anal1}$")
#plt.plot(t,fn1(t), 'r', label="$<x^2>_{anal2}$")
#plt.plot(t_10,var_10, 'g--', label="$t_f=10$")
#plt.plot(t_5,var_5, 'r--', label="$t_f=5$")
plt.plot(t_1[8000:15000],var_1[8000:15000], 'b--', label="$t_f=1$")
plt.plot(t_tenth[8000:15000],var_tenth[8000:15000], 'm--', label="$t_f=0.1$")
plt.plot(t_hundredth[8000:15000],var_hundredth[8000:15000], 'c--', label="$t_f=0.01$")
#plt.plot(t,x_var_mediumk, 'c', label="$\kappa=t$")
#plt.plot(t,x_var_highk, 'g', label="$\kappa=5t$")
#plt.plot(t_10,10*np.ones(len(t_10)), 'c--',label='$<x^2>=10$')
#plt.plot(t_10,5*np.ones(len(t_10)), 'k--',label='$<x^2>=5$')
art=[]
lgd=plt.legend(loc=9, bbox_to_anchor=(1.25, 1.0), ncol=1, shadow=True, fontsize=14)
art.append(lgd)
plt.savefig("fast_ramp.eps", additional_artists=art,bbox_inches="tight")
def fn(t):
D=10.0
omega=1.0
return 25*np.exp(-2*omega*t)+ (D/omega)*(1- np.exp(-2*omega*t) )
def fn1(t):
D=10.0
omega=2.0
return 25.00*np.exp(-2*omega*(t))+ (D/omega)*(1- np.exp(-2*omega*(t)))
plt.ylabel(r"$<x^2>$", fontsize=14)
plt.xlabel("time t")
plt.ylim(0,25)
#plt.plot(t_c,var_c, 'k', label="$\kappa=1.0$")
plt.plot(t_c,fn(t_c), 'g', label="anal1")
plt.plot(t_c[0:-1],fn1(t_c[0:-1]), 'k', label="anal2")
#plt.plot(t_10,var_10, 'b--', label="$t_f=10$")
plt.plot(t_5,var_5, 'r--', label="$t_f=5$")
#plt.plot(t_1,var_1, 'b--', label="$t_f=1$")
#plt.plot(t_tenth,var_tenth, 'm--', label="$t_f=0.1$")
#plt.plot(t_hundredth,var_hundredth, 'c--', label="$t_f=0.01$")
#plt.plot(t,x_var_mediumk, 'c', label="$\kappa=t$")
#plt.plot(t,x_var_highk, 'g', label="$\kappa=5t$")
#plt.plot(t_10,10*np.ones(len(t_10)), 'c--',label='$<x^2>=10$')
#plt.plot(t_10,5*np.ones(len(t_10)), 'k--',label='$<x^2>=5$')
plt.text(32,5, r'$k_B T/\kappa(T_f)$', color='k', fontweight='medium' ,fontsize=20)
plt.text(32,10, r'$k_B T/\kappa(0)$', color='g', fontweight='medium' ,fontsize=20)
art=[]
lgd=plt.legend(loc=9, bbox_to_anchor=(1.25, 1.0), ncol=1, shadow=True, fontsize=14)
art.append(lgd)
plt.savefig("ramp_anal.eps", additional_artists=art,bbox_inches="tight")
plt.title("Linear ramp")
plt.ylabel(r"$\kappa(t)$", fontsize=15)
plt.xlabel("time t")
plt.ylim(0.0,5)
#plt.plot(t_10,np.ones(len(t_10)),'k-')
#plt.plot(t_10,2*np.ones(len(t_10)),'k-')
plt.plot(t_hundredth,kappa_hundredth,'k--', label="$t_f=0.01$")
#plt.plot(t_tenth,kappa_tenth,'m--', label="$t_f=0.1$")
plt.plot(t_1,kappa_1,'r--', label="$t_f=1$")
plt.plot(t_5,kappa_5,'b--', label="$t_f=5$")
plt.plot(t_10,kappa_10,'g--', label="$t_f=10$")
plt.legend(shadow=True, fontsize=14)
plt.savefig("ramp_protocol.eps")
```
## Task 16 - Motor Control
### Introduction to modeling and simulation of human movement
https://github.com/BMClab/bmc/blob/master/courses/ModSim2018.md
Desiree Miraldo
* Task (for Lecture 16):
Write a Python Class to implement the muscle model developed during the course.
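The class itself is imported from `Muscle.py` in the next cell. As a reference, here is a minimal, hypothetical skeleton of the interface this notebook relies on (the attribute and method names are taken from how the class is used below; the internal force computations are placeholders only, not the model developed in the course):
```
import numpy as np

class MuscleSketch:
    """Hypothetical skeleton of the interface used below; the real Muscle.py
    implements the Hill-type force relations developed during the course."""
    def __init__(self, Lce_o, Lslack, alpha, Fmax, dt):
        self.Lce_o, self.Lslack, self.alpha = Lce_o, Lslack, alpha
        self.Fmax, self.dt = Fmax, dt
        self.a = 0.01             # activation state
        self.Lnorm_ce = 1.0       # normalized contractile-element length
        self.Lnorm_cedot = 0.0    # normalized contraction velocity
        self.Lnorm_see = 0.0      # normalized series-elastic (tendon) length
        self.Fnorm_tendon = 0.0   # normalized tendon force
        self.Fnorm_kpe = 0.0      # normalized parallel-elastic force

    def updateMuscle(self, Lm, u):
        # 1) activation dynamics: first-order filter from excitation u to activation a
        tau = 0.015 if u > self.a else 0.05   # placeholder time constants
        self.a += self.dt * (u - self.a) / tau
        # 2) tendon length from total muscle-tendon length and pennated fiber length
        self.Lnorm_see = (Lm - self.Lnorm_ce * self.Lce_o * np.cos(self.alpha)) / self.Lce_o
        # 3) force-length/velocity relations and fiber-velocity integration go here
        #    (placeholders only; see the course notebooks for the actual equations)
        self.Fnorm_tendon = 0.0
        self.Fnorm_kpe = 0.0
        self.Lnorm_ce += self.dt * self.Lnorm_cedot
```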
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import math
from Muscle import Muscle
%matplotlib notebook
```
### Subject's anthropometrics
From Thelen(2003) - dorsiflexors:
Using BSP values from Dempster's model adapted by Winter (2009), available in [BodySegmentParameters.ipynb](https://github.com/BMClab/bmc/blob/master/notebooks/BodySegmentParameters.ipynb)
```
m = 75 #kg
#g = 9.81 #(m/s^2)
Lfoot = 26e-2 #(m)
mfoot = 0.0145*m #0.0145 from column Mass (kg)
Rcm = 0.5* Lfoot #0.5 from column CM prox (m)
Ifoot = mfoot*(.69*Lfoot)**2 #0.690 from column Rg prox (kg*m^2)
```
### Initial conditions
```
phi = 0 #start as 0 degree flexion (rad)
phid = 0 #zero velocity
t0 = 0 #Initial time
tf = 12 #Final Time
h = 1e-4 #integration step size and step counter
t = np.arange(t0,tf,h)
F = np.empty([2,t.shape[-1]])
Fkpe = np.empty([2,t.shape[-1]])
FiberLen = np.empty([2,t.shape[-1]])
TendonLen = np.empty([2,t.shape[-1]])
fiberVelocity = np.empty([2,t.shape[-1]])
a_dynamics = np.empty([2,t.shape[-1]])
phi_dynamics = np.empty(t.shape)
moment = np.empty(t.shape)
```
#### Activation dynamics parameters
```
# defining u (Initial conditional for Brain's activation)
form = 'step'
def createinput_u(form,t,h=.01,plot=True):
if (form == 'sinusoid'):
u = .2*np.sin(np.pi*t) +.7
elif (form == 'step'):
        u = np.ones([2, t.shape[-1]])*h
u[:int(1/h)] = 0
u[int(1/h):int(3/h)] = 1
elif (form == 'pulse'):
        u = np.ones([2, t.shape[-1]])*h
u[int(1/h):int(3/h)] = 1
if plot:
plt.figure()
plt.plot(u)
plt.title('u wave form')
return u
#u = createinput_u(form,h)
u = np.ones(t.shape[-1])/1
```
#### Coefficients from Elias(2014)
```
#parameters from Elias(2014) (meter/deg^ind)
A_TA = np.array([30.60,-7.44e-2,-1.41e-4,2.42e-6,1.50e-8])*1e-2
B_TA = np.array([4.30,1.66e-2,-3.89e-4,-4.45e-6,-4.34e-8])*1e-2
A_SO = np.array([32.30, 7.22e-2, -2.24e-4, -3.15e-6, 9.27e-9])*1e-2
B_SO = np.array([-4.10, 2.57e-2, 5.45e-4, -2.22e-6, -5.50e-9])*1e-2
A_MG = np.array([46.40, 7.48e-2, -1.13e-4, -3.50e-6, 7.35e-9])*1e-2
B_MG = np.array([24.30, 1.30e-2, 6.08e-4, -1.87e-6, -1.02e-8])*1e-2
A_LG = np.array([45.50, 7.62e-2, -1.25e-4, -3.55e-6, 7.65e-9])*1e-2
B_LG = np.array([24.40, 1.44e-2, 6.18e-4, -1.94e-6, -1.02e-8])*1e-2
# Using muscle specific parameters from Thelen(2003) - Table 2
dorsiflexor = Muscle(Lce_o=.09, Lslack=2.4, alpha=7*np.pi/180, Fmax=1400, dt=h)
soleus = Muscle(Lce_o=.049, Lslack=5.9, alpha=25*np.pi/180, Fmax=3150, dt=h)
gastroc = Muscle(Lce_o=.05, Lslack=8.3, alpha=14*np.pi/180, Fmax=1750, dt=h)
plantarflexor = Muscle(Lce_o=.031, Lslack=10, alpha=12*np.pi/180, Fmax=3150, dt=h)
soleus.Fmax = soleus.Fmax + gastroc.Fmax + plantarflexor.Fmax
soleus.Fmax
#dorsiflexor.Lnorm_ce = .087/dorsiflexor.Lce_o
#soleus.Lnorm_ce = .087/soleus.Lce_o
dorsiflexor.Lnorm_ce = ((A_TA[0]-dorsiflexor.Lslack)/np.cos(dorsiflexor.alpha))/dorsiflexor.Lce_o
soleus.Lnorm_ce = ((A_SO[0]-soleus.Lslack)/np.cos(soleus.alpha))/soleus.Lce_o
```
## Functions
```
def totalMuscleLength(phi,A):
'''
Compute length Muscle+tendon - Eq. 8 from Elias(2014)
Inputs:
A = parameters from Elias(2014) (meter/deg^ind)
        phi = ankle angle (rad)
Output:
Lm = length Muscle+tendon
'''
phi = phi*180/np.pi
Lm = 0
for i in range(len(A)):
Lm += A[i]*(phi**i)
return Lm
def momentArm(phi,B):
'''
Compute moment arm of muscle - Eq. 9 from Elias(2014)
Inputs:
        phi = ankle angle (rad)
B = parameters from Elias(2014) (meter/deg^ind)
Output:
        Rf = moment arm of the muscle (m)
'''
phi = phi*180/np.pi
Rf = 0
for i in range(len(B)):
Rf += B[i]*(phi**i) #Eq. 9 from Elias(2014)
return Rf
def momentJoint(Rf_TA, Fnorm_tendon_TA, Fmax_TA, Rf_SOL, Fnorm_tendon_SOL, Fmax_SOL, m, phi):
    '''
    Inputs:
        Rf = moment arm
        Fnorm_tendon = normalized tendon force
        m = segment mass
        g = acceleration of gravity
        Fmax = maximal isometric force
        phi = ankle angle (rad)
    Output:
        M = total moment with respect to the joint
    '''
    g = 9.81 #(m/s^2)
M = Rf_TA*Fnorm_tendon_TA*Fmax_TA + Rf_SOL*Fnorm_tendon_SOL*Fmax_SOL - m*g*Rcm*np.sin(np.pi/2 - phi)
return M
def angularAcelerationJoint(M,I):
'''
Inputs:
M = Total moment with respect to joint
I = Moment of Inertia
Output:
        phidd = angular acceleration of the joint
'''
phidd = M/I
return phidd
```
### Check initial conditions
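A quick sanity check of the muscle-tendon geometry at the initial ankle angle (a sketch using the functions and parameters defined above; the values it prints are only for visual inspection):
```
# Sanity check: muscle-tendon lengths and moment arms at the initial angle phi = 0 rad
print("TA  length (m): {:.4f}, moment arm (m): {:.4f}".format(
    totalMuscleLength(0, A_TA), momentArm(0, B_TA)))
print("SOL length (m): {:.4f}, moment arm (m): {:.4f}".format(
    totalMuscleLength(0, A_SO), momentArm(0, B_SO)))
print("Initial normalized fiber lengths: TA = {:.3f}, SOL = {:.3f}".format(
    dorsiflexor.Lnorm_ce, soleus.Lnorm_ce))
```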
## Simulation - Parallel
```
# Simulation loop (Euler integration of the ankle joint dynamics)
for i in range(len(t)):
LmTA = totalMuscleLength(phi,A_TA)
LmSOL = totalMuscleLength(phi,A_SO)
dorsiflexor.updateMuscle(Lm=LmTA, u=u[i])
soleus.updateMuscle(Lm=LmSOL, u=u[i]*.246)
#Compute MomentJoint
RfTA = momentArm(phi,B_TA) #moment arm
RfSOL = momentArm(phi,B_SO) #moment arm
M = momentJoint(RfTA,dorsiflexor.Fnorm_tendon,dorsiflexor.Fmax,RfSOL,soleus.Fnorm_tendon,soleus.Fmax,mfoot,phi)
    #Compute angular acceleration of the joint
phidd = angularAcelerationJoint(M,Ifoot)
#Euler integration steps
phid = phid + h*phidd
phi = phi +h*phid
# Store variables in vectors - dorsiflexor (Tibialis Anterior)
F[0,i] = dorsiflexor.Fnorm_tendon*dorsiflexor.Fmax
Fkpe[0,i] = dorsiflexor.Fnorm_kpe*dorsiflexor.Fmax
FiberLen[0,i] = dorsiflexor.Lnorm_ce*dorsiflexor.Lce_o
TendonLen[0,i] = dorsiflexor.Lnorm_see*dorsiflexor.Lce_o
a_dynamics[0,i] = dorsiflexor.a
fiberVelocity[0,i] = dorsiflexor.Lnorm_cedot*dorsiflexor.Lce_o
# Store variables in vectors - soleus
F[1,i] = soleus.Fnorm_tendon*soleus.Fmax
Fkpe[1,i] = soleus.Fnorm_kpe*soleus.Fmax
FiberLen[1,i] = soleus.Lnorm_ce*soleus.Lce_o
TendonLen[1,i] = soleus.Lnorm_see*soleus.Lce_o
a_dynamics[1,i] = soleus.a
fiberVelocity[1,i] = soleus.Lnorm_cedot*soleus.Lce_o
#joint
phi_dynamics[i] = phi
moment[i] = M
```
## Plots
```
fig, ax = plt.subplots(1, 1, figsize=(6,6), sharex=True)
ax.plot(t,a_dynamics[0,:],'r',lw=2.5,label='dorsiflexor')
ax.plot(t,a_dynamics[1,:],'--b',lw=2,label='soleus')
plt.grid()
plt.xlabel('time (s)')
plt.ylabel('Activation dynamics');
ax.legend(loc='best')
fig, ax = plt.subplots(1, 1, figsize=(6,6), sharex=True)
ax.plot(t,phi_dynamics*180/np.pi,'purple',lw=2,label='ankle')
plt.grid()
plt.xlabel('time (s)')
plt.ylabel('Joint angle (deg)');
ax.legend(loc='best')
fig, ax = plt.subplots(1, 1, figsize=(6,6), sharex=True)
ax.plot(t,F[0,:],'r',lw=2,label='dorsiflexor')
ax.plot(t,F[1,:],'--b',lw=2,label='soleus')
plt.grid()
plt.xlabel('time (s)')
plt.ylabel('Force (N)');
ax.legend(loc='best')
fig, ax = plt.subplots(1, 1, figsize=(6,6), sharex=True)
ax.plot(t,moment,'purple',lw=2,label='ankle')
plt.grid()
plt.xlabel('time (s)')
plt.ylabel('Moment joint');
ax.legend(loc='best')
fig, ax = plt.subplots(1, 1, figsize=(6,6), sharex=True)
ax.plot(t,fiberVelocity[0,:],'r',lw=2,label='dorsiflexor')
ax.plot(t,fiberVelocity[1,:],'--b',lw=2,label='soleus')
plt.grid()
plt.xlabel('time (s)')
plt.ylabel('fiber Velocity (m/s)');
ax.legend(loc='best')
fig, ax = plt.subplots(1, 1, figsize=(6,6), sharex=True)
ax.plot(t,FiberLen[0,:],'r',lw=2,label='dorsiflexor fiber')
ax.plot(t,TendonLen[0,:],'-.r',lw=2,label='dorsiflexor tendon')
ax.plot(t,FiberLen[1,:],'--b',lw=2,label='soleus fiber')
ax.plot(t,TendonLen[1,:],':b',lw=2,label='soleus tendon')
plt.grid()
plt.xlabel('time (s)')
plt.ylabel('Length (m)')
ax.legend(loc='best')
fig, ax = plt.subplots(1, 3, figsize=(9,4), sharex=True, sharey=True)
plt.subplots_adjust(wspace=.12, hspace=None,left=None, bottom=None, right=None, top=None,)
ax[0].plot(t,FiberLen[0,:],'r',lw=2,label='dorsiflexor')
ax[1].plot(t,TendonLen[0,:],'r',lw=2,label='dorsiflexor')
ax[2].plot(t,FiberLen[0,:] + TendonLen[0,:],'r',lw=2,label='dorsiflexor')
ax[0].plot(t,FiberLen[1,:],'--b',lw=2,label='soleus')
ax[1].plot(t,TendonLen[1,:],'--b',lw=2,label='soleus')
ax[2].plot(t,FiberLen[1,:] + TendonLen[1,:],'--b',lw=2,label='soleus')
ax[1].set_xlabel('time (s)')
ax[0].set_ylabel('Length (m)');
ax[0].set_title('fiber')
ax[1].set_title('tendon')
ax[2].set_title('fiber + tendon')
ax[0].legend(loc='best')
```
# Object Segmentation on Azure Stack Hub Clusters
For this tutorial, we will fine tune a pre-trained [Mask R-CNN](https://arxiv.org/abs/1703.06870) model in the [Penn-Fudan Database for Pedestrian Detection and Segmentation](https://www.cis.upenn.edu/~jshi/ped_html/). It contains 170 images with 345 instances of pedestrians, and we will use it to train an instance segmentation model on a custom dataset.
You will use [Azure Machine Learning Pipelines](https://aka.ms/aml-pipelines) to define two pipeline steps: a data processing step that splits the data into training and test sets, and a training step that trains and evaluates the model. The trained model is then registered to your AML workspace.
After the model is registered, you deploy it for serving or testing. You will deploy the model to two different compute platforms: 1) an Azure Kubernetes Service (AKS) cluster, 2) your local computer.
This notebook uses ASH storage and an ASH cluster (ARC compute) for training and serving, so please make sure the following prerequisites are met.
## Prerequisite
* A Kubernetes cluster deployed on Azure Stack Hub, connected to Azure through ARC.
To create a Kubernetes cluster on Azure Stack Hub, please see [here](https://docs.microsoft.com/en-us/azure-stack/user/azure-stack-kubernetes-aks-engine-overview?view=azs-2008).
* Connect Azure Stack Hub's Kubernetes cluster to Azure via [Azure ARC](https://docs.microsoft.com/en-us/azure/azure-arc/kubernetes/connect-cluster). (For installation, please read the note below first.)
Important Note: This notebook requires the az extensions k8s-extension >= 0.1 and connectedk8s >= 0.3.2 installed on the cluster's master node. The current version of connectedk8s in the public preview release via [Azure ARC](https://docs.microsoft.com/en-us/azure/azure-arc/kubernetes/connect-cluster) is 0.2.8, so you need to install the [private preview](https://github.com/Azure/azure-arc-kubernetes-preview/blob/master/docs/k8s-extensions.md). For your convenience, we have included the wheel files, and you just need to run:
<pre>
az extension add --source connectedk8s-0.3.8-py2.py3-none-any.whl --yes
az extension add --source k8s_extension-0.1PP.13-py2.py3-none-any.whl --yes
</pre>
* A storage account deployed on Azure Stack Hub.
* Setup Azure Machine Learning workspace on Azure.
Please make sure the following [Prerequisites](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-manage-workspace?tabs=python#prerequisites) are met (we recommend using the Python SDK when communicating with Azure Machine Learning, so make sure the SDK is properly installed). We strongly recommend learning more about the [inner workings and concepts in Azure Machine Learning](https://docs.microsoft.com/en-us/azure/machine-learning/concept-azure-machine-learning-architecture) before continuing with the rest of this article (optional).
* Last but not least, you need to be able to run a Notebook.
If you are using an Azure Machine Learning Notebook VM, you are all set. Otherwise, make sure you go through the configuration Notebook located at [here](https://github.com/Azure/MachineLearningNotebooks) first if you haven't. This sets you up with a working config file that has information on your workspace, subscription id, etc.
```
import os
from azureml.core import Workspace,Environment, Experiment, Datastore
from azureml.pipeline.core import Pipeline
from azureml.pipeline.steps import PythonScriptStep
from azureml.core.runconfig import RunConfiguration
```
## Preparation
In the preparation stage, you will create an AML workspace first, then:
* Create a compute target for the AML workspace by attaching the cluster deployed on Azure Stack Hub (ASH)
* Create a datastore for the AML workspace backed by a storage account deployed on ASH.
### Create Workspace
Initialize a [Workspace](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture#workspace) object from the existing workspace you created in the Prerequisites step. `Workspace.from_config()` creates a workspace object from the details stored in `config.json`.
If you haven't done already please go to `config.json` file and fill in your workspace information.
```
ws = Workspace.from_config()
```
### Create Compute Target by attaching cluster deployed on ASH
The attach code here depends on the Python package azureml-contrib-k8s, which is currently in private preview. Install the private preview branch of the AzureML SDK by running the following command:
<pre>
pip install --disable-pip-version-check --extra-index-url https://azuremlsdktestpypi.azureedge.net/azureml-contrib-k8s-preview/D58E86006C65 azureml-contrib-k8s
</pre>
Attaching the ASH cluster for the first time may take about 7 minutes; subsequent attachments are much faster.
```
from azureml.contrib.core.compute.kubernetescompute import KubernetesCompute
from azureml.core import ComputeTarget
from azureml.core.compute_target import ComputeTargetException
resource_id = "<resource_id>"
attach_config = KubernetesCompute.attach_configuration(
resource_id= resource_id,
)
try:
attach_name = "attachedarc"
arcK_target_result = KubernetesCompute.attach(ws, attach_name, attach_config)
arcK_target_result.wait_for_completion(show_output=True)
print('arc attach success')
except ComputeTargetException as e:
print(e)
print('arc attach failed')
arcK_target = ComputeTarget(ws, attach_name)
```
### Create datastore for AML workspace backed by storage account deployed on ASH
Here is the [instruction](https://github.com/Azure/AML-Kubernetes/blob/master/docs/ASH/Train-AzureArc.md)
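A minimal sketch of what the registration might look like (all names, the key, and the endpoint below are placeholders for your ASH storage account; follow the linked instructions for the exact values):
```
from azureml.core import Datastore

# Placeholder values - replace with your ASH storage account details
datastore = Datastore.register_azure_blob_container(
    workspace=ws,
    datastore_name="ashstore",
    container_name="<container_name>",
    account_name="<storage_account_name>",
    account_key="<storage_account_key>",
    endpoint="<ash_blob_endpoint_suffix>")  # ASH uses a different endpoint suffix than public Azure
```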
## Register Dataset
After downloading and extracting the zip file from [Penn-Fudan Database for Pedestrian Detection and Segmentation](https://www.cis.upenn.edu/~jshi/ped_html/) to your local machine, you will have the following folder structure:
<pre>
PennFudanPed/
PedMasks/
FudanPed00001_mask.png
FudanPed00002_mask.png
FudanPed00003_mask.png
FudanPed00004_mask.png
...
PNGImages/
FudanPed00001.png
FudanPed00002.png
FudanPed00003.png
FudanPed00004.png
</pre>
```
from azureml.core import Workspace, Dataset, Datastore
dataset_name = "pennfudan_2"
datastore_name = "ashstore"
datastore = Datastore.get(ws, datastore_name)
if dataset_name not in ws.datasets:
src_dir, target_path = 'PennFudanPed', 'PennFudanPed'
datastore.upload(src_dir, target_path)
# register data uploaded as AML dataset
datastore_paths = [(datastore, target_path)]
pd_ds = Dataset.File.from_files(path=datastore_paths)
pd_ds.register(ws, dataset_name, "for Pedestrian Detection and Segmentation")
dataset = ws.datasets[dataset_name]
```
## Create a Training-Test split data process Step
For this pipeline run, you will use two pipeline steps. The first step splits the dataset into training and test sets.
```
# create run_config first
env = Environment.from_dockerfile(
name='pytorch-obj-seg',
dockerfile='./aml_src/Dockerfile.gpu',
conda_specification='./aml_src/conda-env.yaml')
aml_run_config = RunConfiguration()
aml_run_config.target = arcK_target
aml_run_config.environment = env
source_directory = './aml_src'
# add a data process step
from azureml.data import OutputFileDatasetConfig
dest = (datastore, None)
train_split_data = OutputFileDatasetConfig(name="train_split_data", destination=dest).as_upload(overwrite=False)
test_split_data = OutputFileDatasetConfig(name="test_split_data", destination=dest).as_upload(overwrite=False)
split_step = PythonScriptStep(
name="Train Test Split",
script_name="obj_segment_step_data_process.py",
arguments=["--data-path", dataset.as_named_input('pennfudan_data').as_mount(),
"--train-split", train_split_data, "--test-split", test_split_data,
"--test-size", 50],
compute_target=arcK_target,
runconfig=aml_run_config,
source_directory=source_directory,
allow_reuse=False
)
```
## Create Training Step
```
train_step = PythonScriptStep(
name="training_step",
script_name="obj_segment_step_training.py",
arguments=[
"--train-split", train_split_data.as_input(), "--test-split", test_split_data.as_input(),
'--epochs', 1, # 80
],
compute_target=arcK_target,
runconfig=aml_run_config,
source_directory=source_directory,
allow_reuse=True
)
```
## Create Experiment and Submit Pipeline Run
```
experiment_name = 'obj_seg_step'
experiment = Experiment(workspace=ws, name=experiment_name)
pipeline_steps = [train_step]
pipeline = Pipeline(workspace=ws, steps=pipeline_steps)
print("Pipeline is built.")
pipeline_run = experiment.submit(pipeline, regenerate_outputs=False)
pipeline_run.wait_for_completion()
```
## Register Model
Note: Here we save and register two models. The model saved at 'outputs/obj_segmentation.pkl' is registered as 'obj_seg_model_aml'; it contains both the model parameters and the network and is used for AML deployment and serving. The model 'obj_seg_model_kf_torch' contains only the parameter values, which may be used for KFServing as shown in [this notebook](object_segmentation_kfserving.ipynb). If you are not interested in KFServing, you can safely ignore it.
```
train_step_run = pipeline_run.find_step_run(train_step.name)[0]
model_name = 'obj_seg_model_aml' # model for AML serving
train_step_run.register_model(model_name=model_name, model_path='outputs/obj_segmentation.pkl')
model_name = 'obj_seg_model_kf_torch' # model for KFServing using pytorchserver
train_step_run.register_model(model_name=model_name, model_path='outputs/model.pt')
```
# Deploy the Model
Here we give a few examples of deploying the model in different situations:
* Deploy to Azure Kubernetes Cluster
* [Deploy to local computer as a Local Docker Web Service](#local_deployment)
* [Deploy to ASH clusters with KFServing](object_segmentation_kfserving.ipynb)
```
from azureml.core import Environment, Workspace, Model, ComputeTarget
from azureml.core.compute import AksCompute
from azureml.core.model import InferenceConfig
from azureml.core.webservice import Webservice, AksWebservice
from azureml.core.compute_target import ComputeTargetException
from PIL import Image
from torchvision.transforms import functional as F
import numpy as np
import json
```
## Deploy to Azure Kubernetes Cluster
There are two steps:
1. Create (or use an existing) AKS cluster for serving the model
2. Deploy the model
### Provision the AKS Cluster
This is a one-time setup. You can reuse this cluster for multiple deployments after it has been created. If you delete the cluster or the resource group that contains it, you will have to recreate it. It may take about 5 minutes to create a new AKS cluster.
```
ws = Workspace.from_config()
# Choose a name for your AKS cluster
aks_name = 'aks-service-2'
# Verify that cluster does not exist already
try:
aks_target = ComputeTarget(workspace=ws, name=aks_name)
is_new_compute = False
print('Found existing cluster, use it.')
except ComputeTargetException:
# Use the default configuration (can also provide parameters to customize)
prov_config = AksCompute.provisioning_configuration()
# Create the cluster
aks_target = ComputeTarget.create(workspace = ws,
name = aks_name,
provisioning_configuration = prov_config)
is_new_compute = True
print("using compute target: ", aks_target.name)
```
### Deploy the model
```
env = Environment.from_dockerfile(
name='pytorch-obj-seg',
dockerfile='./aml_src/Dockerfile.gpu',
conda_specification='./aml_src/conda-env.yaml')
env.inferencing_stack_version='latest'
inference_config = InferenceConfig(entry_script='score.py', environment=env)
deploy_config = AksWebservice.deploy_configuration()
deployed_model = 'obj_seg_model_aml'  # the pickled model registered above for AML serving
model = ws.models[deployed_model]
service_name = 'objservice10'
service = Model.deploy(workspace=ws,
name=service_name,
models=[model],
inference_config=inference_config,
deployment_config=deploy_config,
deployment_target=aks_target,
overwrite=True)
service.wait_for_deployment(show_output=True)
```
### Test Service
You can
* test the service directly with the service object you deployed.
* test using the RESTful endpoint.
For testing purposes, you may take the first image, FudanPed00001.png, as an example.
```
img_nums = ["00001"]
image_paths = ["PennFudanPed\\PNGImages\\FudanPed{}.png".format(item) for item in img_nums]
image_np_list = []
for image_path in image_paths:
img = Image.open(image_path)
img.show("input_image")
img_rgb = img.convert("RGB")
img_tensor = F.to_tensor(img_rgb)
img_np = img_tensor.numpy()
image_np_list.append(img_np.tolist())
inputs = json.dumps({"instances": image_np_list})
resp = service.run(inputs)
predicts = resp["predictions"]
for instance_pred in predicts:
print("labels", instance_pred["labels"])
print("boxes", instance_pred["boxes"])
print("scores", instance_pred["scores"])
image_data = instance_pred["masks"]
img_np = np.array(image_data)
output = Image.fromarray(img_np)
output.show()
```
### Create a function to call the url end point
Create a simple helper function to wrap the RESTful endpoint call:
```
import urllib.request
import json
from PIL import Image
from torchvision.transforms import functional as F
import numpy as np
def service_infer(url, body, api_key):
headers = {'Content-Type':'application/json', 'Authorization':('Bearer '+ api_key)}
req = urllib.request.Request(url, body, headers)
try:
response = urllib.request.urlopen(req)
result = response.read()
return result
except urllib.error.HTTPError as error:
print("The request failed with status code: " + str(error.code))
        # Print the headers - they include the request ID and the timestamp, which are useful for debugging the failure
print(error.info())
print(json.loads(error.read().decode("utf8", 'ignore')))
```
### Test using restful end point
Go to the Endpoints section of your Azure Machine Learning workspace, where you will find the service you deployed. Click the service name, then click "Consume"; you will see the RESTful endpoint (the URI) and the API key of your service.
```
url = "<endpoints url>"
api_key = '<api_key>' # Replace this with the API key for the web service
img_nums = ["00001"]
image_paths = ["PennFudanPed\\PNGImages\\FudanPed{}.png".format(item) for item in img_nums]
image_np_list = []
for image_path in image_paths:
img = Image.open(image_path)
img.show("input_image")
img_rgb = img.convert("RGB")
img_tensor = F.to_tensor(img_rgb)
img_np = img_tensor.numpy()
image_np_list.append(img_np.tolist())
request = {"instances": image_np_list}
inputs = json.dumps(request)
body = str.encode(inputs)
resp = service_infer(url, body, api_key)
p_obj = json.loads(resp)
predicts = p_obj["predictions"]
for instance_pred in predicts:
print("labels", instance_pred["labels"])
print("boxes", instance_pred["boxes"])
print("scores", instance_pred["scores"])
image_data = instance_pred["masks"]
img_np = np.array(image_data)
output = Image.fromarray(img_np)
output.show()
```
<a id='local_deployment'></a>
## Deploy model to local computer as a Local Docker Web Service
Make sure you have Docker installed and running. Note that the service creation can take a few minutes.
NOTE:
The Docker image runs as a Linux container. If you are running Docker for Windows, you need to ensure the Linux Engine is running.
The PowerShell command to switch to the Linux engine is:
<pre>
& 'C:\Program Files\Docker\Docker\DockerCli.exe' -SwitchLinuxEngine
</pre>
Also, you need to log in to az and ACR:
<pre>
az login
az acr login --name your_acr_name
</pre>
For more details, please see [this notebook](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/deployment/deploy-to-local/register-model-deploy-local.ipynb)
```
from azureml.core.webservice import LocalWebservice
ws = Workspace.from_config()
# This is optional, if not provided Docker will choose a random unused port.
deployment_config = LocalWebservice.deploy_configuration(port=6789)
deployed_model = "obj_seg_model_aml" # model_name
model = ws.models[deployed_model]
local_service = Model.deploy(ws, "localtest", [model], inference_config, deployment_config)
local_service.wait_for_deployment()
print('Local service port: {}'.format(local_service.port))
```
### Check Status and Get Container Logs
```
print(local_service.get_logs())
```
### Test local deployment
You can use `local_service.run` to test your deployment, as shown above for the deployment to the Kubernetes cluster. You can also use the endpoint to test your inference service; in this local deployment, the endpoint is http://localhost:{port}/score
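For example, a minimal sketch reusing the `inputs` payload built in the AKS test above (the `requests` package is assumed to be available):
```
# Test via the service object
resp = local_service.run(inputs)
print(resp["predictions"][0]["labels"])

# Or test via the local HTTP endpoint
import requests
uri = "http://localhost:{}/score".format(local_service.port)
r = requests.post(uri, data=inputs, headers={"Content-Type": "application/json"})
print(r.status_code)
```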
## Next Steps:
[Deploy to ASH clusters with KFServing](object_segmentation_kfserving.ipynb)
First we need to train a parametric UMAP network for each dataset... (5 datasets x 2 dimensions)
For umap-learn, UMAP AE, Param. UMAP, and PCA (a helper capturing the shared metric computation is sketched after this list):
- load dataset
- load network
- compute reconstruction MSE
- count time
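The per-model cells below repeat the same metric computation; a small helper capturing that pattern might look like this (a sketch, not part of the original notebook):
```
import numpy as np
from sklearn.metrics import mean_squared_error, mean_absolute_error, median_absolute_error, r2_score

def reconstruction_metrics(x_real, x_recon):
    """Return (MSE, MAE, MedAE, R2) for flattened original vs. reconstructed data."""
    x_real = np.asarray(x_real).reshape(len(x_real), -1)
    x_recon = np.asarray(x_recon).reshape(len(x_recon), -1)
    return (mean_squared_error(x_real, x_recon),
            mean_absolute_error(x_real, x_recon),
            median_absolute_error(x_real, x_recon),
            r2_score(x_real, x_recon))
```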
```
# reload packages
%load_ext autoreload
%autoreload 2
```
### Choose GPU (this may not be needed on your computer)
```
%env CUDA_DEVICE_ORDER=PCI_BUS_ID
%env CUDA_VISIBLE_DEVICES=2
import tensorflow as tf
gpu_devices = tf.config.experimental.list_physical_devices('GPU')
if len(gpu_devices)>0:
tf.config.experimental.set_memory_growth(gpu_devices[0], True)
gpu_devices
import numpy as np
import pickle
import pandas as pd
import time
from umap import UMAP
from tfumap.umap import tfUMAP
import tensorflow as tf
from sklearn.decomposition import PCA
from sklearn.metrics import mean_squared_error, mean_absolute_error, median_absolute_error, r2_score
from tqdm.autonotebook import tqdm
from tfumap.paths import ensure_dir, MODEL_DIR, DATA_DIR
output_dir = MODEL_DIR/'projections'
reconstruction_acc_df = pd.DataFrame(columns = ['method_', 'dimensions', 'dataset', 'MSE', 'MAE', 'MedAE', 'R2'])
reconstruction_speed_df = pd.DataFrame(columns = ['method_', 'dimensions', 'dataset', 'embed_time', 'recon_time', 'speed', 'nex'])
```
### Fashion-MNIST
```
dataset = 'fmnist'
dims = (28,28,1)
```
##### load dataset
```
from tensorflow.keras.datasets import fashion_mnist
# load dataset
(train_images, Y_train), (test_images, Y_test) = fashion_mnist.load_data()
X_train = (train_images/255.).astype('float32')
X_test = (test_images/255.).astype('float32')
X_train = X_train.reshape((len(X_train), np.product(np.shape(X_train)[1:])))
X_test = X_test.reshape((len(X_test), np.product(np.shape(X_test)[1:])))
# subset a validation set
n_valid = 10000
X_valid = X_train[-n_valid:]
Y_valid = Y_train[-n_valid:]
X_train = X_train[:-n_valid]
Y_train = Y_train[:-n_valid]
# flatten X
X_train_flat = X_train.reshape((len(X_train), np.product(np.shape(X_train)[1:])))
X_test_flat = X_test.reshape((len(X_test), np.product(np.shape(X_test)[1:])))
X_valid_flat= X_valid.reshape((len(X_valid), np.product(np.shape(X_valid)[1:])))
X_test = X_test.reshape((10000, 28,28,1))
print(len(X_train), len(X_valid), len(X_test))
```
### AE
##### 2 dims
```
load_loc = output_dir / dataset / 'autoencoder'
embedder = tfUMAP(
direct_embedding=False,
verbose=True,
negative_sample_rate=5,
training_epochs=5,
decoding_method = "autoencoder",
batch_size = 100,
dims = dims
)
encoder = tf.keras.models.load_model((load_loc / 'encoder').as_posix())
embedder.encoder = encoder
decoder = tf.keras.models.load_model((load_loc / 'decoder').as_posix())
embedder.decoder = decoder
X_recon = tf.nn.relu(decoder(encoder(X_test))).numpy()
x_real = X_test.reshape((len(X_test), np.product(np.shape(X_test)[1:])))
x_recon = X_recon.reshape((len(X_test), np.product(np.shape(X_test)[1:])))
MSE = mean_squared_error(
x_real,
x_recon
)
MAE = mean_absolute_error(
x_real,
x_recon
)
MedAE = median_absolute_error(
x_real,
x_recon
)
R2 = r2_score(
x_real,
x_recon
)
reconstruction_acc_df.loc[len(reconstruction_acc_df)] = ['AE', 2, dataset, MSE, MAE, MedAE, R2]
reconstruction_acc_df
```
##### 64 dims
```
load_loc = output_dir / dataset / '64' / 'autoencoder'
embedder = tfUMAP(
direct_embedding=False,
verbose=True,
negative_sample_rate=5,
training_epochs=5,
decoding_method = "autoencoder",
batch_size = 100,
dims = dims
)
encoder = tf.keras.models.load_model((load_loc / 'encoder').as_posix())
embedder.encoder = encoder
decoder = tf.keras.models.load_model((load_loc / 'decoder').as_posix())
embedder.decoder = decoder
X_recon = tf.nn.relu(decoder(encoder(X_test))).numpy()
x_real = X_test.reshape((len(X_test), np.product(np.shape(X_test)[1:])))
x_recon = X_recon.reshape((len(X_test), np.product(np.shape(X_test)[1:])))
MSE = mean_squared_error(
x_real,
x_recon
)
MAE = mean_absolute_error(
x_real,
x_recon
)
MedAE = median_absolute_error(
x_real,
x_recon
)
R2 = r2_score(
x_real,
x_recon
)
reconstruction_acc_df.loc[len(reconstruction_acc_df)] = ['AE', 64, dataset, MSE, MAE, MedAE, R2]
reconstruction_acc_df
```
### Network
##### 2 dims
```
load_loc = output_dir / dataset / 'recon-network'
embedder = tfUMAP(
direct_embedding=False,
verbose=True,
negative_sample_rate=5,
training_epochs=5,
decoding_method = "network",
batch_size = 100,
dims = dims
)
encoder = tf.keras.models.load_model((load_loc / 'encoder').as_posix())
embedder.encoder = encoder
decoder = tf.keras.models.load_model((load_loc / 'decoder').as_posix())
embedder.decoder = decoder
n_repeats = 10
for i in tqdm(range(n_repeats)):
start_time = time.monotonic()
z_test = encoder(X_test)
end_time = time.monotonic()
print("seconds: ", end_time - start_time)
embed_time = end_time - start_time
start_time = time.monotonic()
x_test_recon = decoder(z_test)
end_time = time.monotonic()
print("seconds: ", end_time - start_time)
recon_time = end_time - start_time
reconstruction_speed_df.loc[len(reconstruction_speed_df)] = [
"network",
2,
dataset,
embed_time,
recon_time,
embed_time + recon_time,
len(X_test_flat)
]
with tf.device('/CPU:0'):
n_repeats = 10
for i in tqdm(range(n_repeats)):
start_time = time.monotonic()
z_test = encoder(X_test)
end_time = time.monotonic()
print("seconds: ", end_time - start_time)
embed_time = end_time - start_time
start_time = time.monotonic()
x_test_recon = decoder(z_test)
end_time = time.monotonic()
print("seconds: ", end_time - start_time)
recon_time = end_time - start_time
reconstruction_speed_df.loc[len(reconstruction_speed_df)] = [
"network-cpu",
2,
dataset,
embed_time,
recon_time,
embed_time + recon_time,
len(X_test_flat)
]
X_recon = tf.nn.relu(decoder(encoder(X_test))).numpy()
x_real = X_test.reshape((len(X_test), np.product(np.shape(X_test)[1:])))
x_recon = X_recon.reshape((len(X_test), np.product(np.shape(X_test)[1:])))
MSE = mean_squared_error(
x_real,
x_recon
)
MAE = mean_absolute_error(
x_real,
x_recon
)
MedAE = median_absolute_error(
x_real,
x_recon
)
R2 = r2_score(
x_real,
x_recon
)
reconstruction_acc_df.loc[len(reconstruction_acc_df)] = ['network', 2, dataset, MSE, MAE, MedAE, R2]
reconstruction_acc_df
```
##### 64 dims
```
load_loc = output_dir / dataset / '64' / 'recon-network'
embedder = tfUMAP(
direct_embedding=False,
verbose=True,
negative_sample_rate=5,
training_epochs=5,
decoding_method = "autoencoder",
batch_size = 100,
dims = dims
)
encoder = tf.keras.models.load_model((load_loc / 'encoder').as_posix())
embedder.encoder = encoder
decoder = tf.keras.models.load_model((load_loc / 'decoder').as_posix())
embedder.decoder = decoder
n_repeats = 10
for i in tqdm(range(n_repeats)):
start_time = time.monotonic()
z_test = encoder(X_test)
end_time = time.monotonic()
print("seconds: ", end_time - start_time)
embed_time = end_time - start_time
start_time = time.monotonic()
x_test_recon = decoder(z_test)
end_time = time.monotonic()
print("seconds: ", end_time - start_time)
recon_time = end_time - start_time
reconstruction_speed_df.loc[len(reconstruction_speed_df)] = [
"network",
64,
dataset,
embed_time,
recon_time,
embed_time + recon_time,
len(X_test_flat)
]
with tf.device("/CPU:0"):
n_repeats = 10
for i in tqdm(range(n_repeats)):
start_time = time.monotonic()
z_test = encoder(X_test)
end_time = time.monotonic()
print("seconds: ", end_time - start_time)
embed_time = end_time - start_time
start_time = time.monotonic()
x_test_recon = decoder(z_test)
end_time = time.monotonic()
print("seconds: ", end_time - start_time)
recon_time = end_time - start_time
reconstruction_speed_df.loc[len(reconstruction_speed_df)] = [
"network-cpu",
64,
dataset,
embed_time,
recon_time,
embed_time + recon_time,
len(X_test_flat)
]
reconstruction_speed_df
X_recon = tf.nn.relu(decoder(encoder(X_test))).numpy()
x_real = X_test.reshape((len(X_test), np.product(np.shape(X_test)[1:])))
x_recon = X_recon.reshape((len(X_test), np.product(np.shape(X_test)[1:])))
MSE = mean_squared_error(
x_real,
x_recon
)
MAE = mean_absolute_error(
x_real,
x_recon
)
MedAE = median_absolute_error(
x_real,
x_recon
)
R2 = r2_score(
x_real,
x_recon
)
reconstruction_acc_df.loc[len(reconstruction_acc_df)] = ['network', 64, dataset, MSE, MAE, MedAE, R2]
reconstruction_acc_df
```
### UMAP-learn
##### 2 dims
```
embedder = UMAP(n_components = 2, verbose=True)
z_umap = embedder.fit_transform(X_train_flat)
x_test_samples= []
x_test_recon_samples= []
n_repeats = 10
for i in tqdm(range(n_repeats)):
start_time = time.monotonic()
z_test = embedder.transform(X_test_flat);
end_time = time.monotonic()
print('seconds: ', end_time - start_time)
embed_time = end_time - start_time
nex = 10 # it would take far too long to reconstruct the entire dataset
samp_idx = np.random.randint(len(z_test), size= nex)
sample = np.array(z_test)[samp_idx]
x_test_samples.append(samp_idx)
start_time = time.monotonic()
x_test_recon = embedder.inverse_transform(sample);
end_time = time.monotonic()
print('seconds: ', end_time - start_time)
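    # extrapolate: scale the per-sample inverse_transform time (measured on nex samples)
    # up to the full test set so it is comparable with the other methods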
recon_time = (end_time - start_time)*len(z_test)/nex
reconstruction_speed_df.loc[len(reconstruction_speed_df)] = [
"umap-learn",
2,
dataset,
embed_time,
recon_time,
embed_time + recon_time,
len(X_test_flat)
]
x_test_recon_samples.append(x_test_recon)
x_recon = np.concatenate(x_test_recon_samples)
x_real = np.array(X_test_flat)[np.concatenate(x_test_samples)]
MSE = mean_squared_error(
x_real,
x_recon
)
MAE = mean_absolute_error(
x_real,
x_recon
)
MedAE = median_absolute_error(
x_real,
x_recon
)
R2 = r2_score(
x_real,
x_recon
)
reconstruction_acc_df.loc[len(reconstruction_acc_df)] = ['umap-learn', 2, dataset, MSE, MAE, MedAE, R2]
reconstruction_acc_df
```
### PCA
##### 2 dims
```
pca = PCA(n_components=2)
z = pca.fit_transform(X_train_flat)
n_repeats = 10
for i in tqdm(range(n_repeats)):
start_time = time.monotonic()
z_test = pca.transform(X_test_flat);
end_time = time.monotonic()
print('seconds: ', end_time - start_time)
embed_time = end_time - start_time
start_time = time.monotonic()
x_test_recon = pca.inverse_transform(z_test);
end_time = time.monotonic()
print('seconds: ', end_time - start_time)
recon_time = (end_time - start_time)
reconstruction_speed_df.loc[len(reconstruction_speed_df)] = [
"pca",
2,
dataset,
embed_time,
recon_time,
embed_time + recon_time,
len(X_test_flat)
]
X_recon = pca.inverse_transform(pca.transform(X_test_flat))
x_real = X_test.reshape((len(X_test), np.product(np.shape(X_test)[1:])))
x_recon = X_recon.reshape((len(X_test), np.product(np.shape(X_test)[1:])))
MSE = mean_squared_error(
x_real,
x_recon
)
MAE = mean_absolute_error(
x_real,
x_recon
)
MedAE = median_absolute_error(
x_real,
x_recon
)
R2 = r2_score(
x_real,
x_recon
)
reconstruction_acc_df.loc[len(reconstruction_acc_df)] = ['pca', 2, dataset, MSE, MAE, MedAE, R2]
reconstruction_acc_df
```
##### 64 dims
```
pca = PCA(n_components=64)
z = pca.fit_transform(X_train_flat)
n_repeats = 10
for i in tqdm(range(n_repeats)):
start_time = time.monotonic()
z_test = pca.transform(X_test_flat);
end_time = time.monotonic()
print('seconds: ', end_time - start_time)
embed_time = end_time - start_time
start_time = time.monotonic()
x_test_recon = pca.inverse_transform(z_test);
end_time = time.monotonic()
print('seconds: ', end_time - start_time)
recon_time = (end_time - start_time)
reconstruction_speed_df.loc[len(reconstruction_speed_df)] = [
"pca",
64,
dataset,
embed_time,
recon_time,
embed_time + recon_time,
len(X_test_flat)
]
X_recon = pca.inverse_transform(pca.transform(X_test_flat))
x_real = X_test.reshape((len(X_test), np.product(np.shape(X_test)[1:])))
x_recon = X_recon.reshape((len(X_test), np.product(np.shape(X_test)[1:])))
MSE = mean_squared_error(
x_real,
x_recon
)
MAE = mean_absolute_error(
x_real,
x_recon
)
MedAE = median_absolute_error(
x_real,
x_recon
)
R2 = r2_score(
x_real,
x_recon
)
reconstruction_acc_df.loc[len(reconstruction_acc_df)] = ['pca', 64, dataset, MSE, MAE, MedAE, R2]
reconstruction_acc_df
```
### Save
```
#save_loc = DATA_DIR / 'reconstruction_speed' / (dataset + '.pickle')
#ensure_dir(save_loc)
#reconstruction_speed_df.to_pickle(save_loc)
save_loc = DATA_DIR / 'reconstruction_acc' / (dataset + '.pickle')
ensure_dir(save_loc)
reconstruction_acc_df.to_pickle(save_loc)
```
|
github_jupyter
|
# reload packages
%load_ext autoreload
%autoreload 2
%env CUDA_DEVICE_ORDER=PCI_BUS_ID
%env CUDA_VISIBLE_DEVICES=2
import tensorflow as tf
gpu_devices = tf.config.experimental.list_physical_devices('GPU')
if len(gpu_devices)>0:
tf.config.experimental.set_memory_growth(gpu_devices[0], True)
gpu_devices
import numpy as np
import pickle
import pandas as pd
import time
from umap import UMAP
from tfumap.umap import tfUMAP
import tensorflow as tf
from sklearn.decomposition import PCA
from sklearn.metrics import mean_squared_error, mean_absolute_error, median_absolute_error, r2_score
from tqdm.autonotebook import tqdm
from tfumap.paths import ensure_dir, MODEL_DIR, DATA_DIR
output_dir = MODEL_DIR/'projections'
reconstruction_acc_df = pd.DataFrame(columns = ['method_', 'dimensions', 'dataset', 'MSE', 'MAE', 'MedAE', 'R2'])
reconstruction_speed_df = pd.DataFrame(columns = ['method_', 'dimensions', 'dataset', 'embed_time', 'recon_time', 'speed', 'nex'])
dataset = 'fmnist'
dims = (28,28,1)
from tensorflow.keras.datasets import fashion_mnist
# load dataset
(train_images, Y_train), (test_images, Y_test) = fashion_mnist.load_data()
X_train = (train_images/255.).astype('float32')
X_test = (test_images/255.).astype('float32')
X_train = X_train.reshape((len(X_train), np.product(np.shape(X_train)[1:])))
X_test = X_test.reshape((len(X_test), np.product(np.shape(X_test)[1:])))
# subset a validation set
n_valid = 10000
X_valid = X_train[-n_valid:]
Y_valid = Y_train[-n_valid:]
X_train = X_train[:-n_valid]
Y_train = Y_train[:-n_valid]
# flatten X
X_train_flat = X_train.reshape((len(X_train), np.product(np.shape(X_train)[1:])))
X_test_flat = X_test.reshape((len(X_test), np.product(np.shape(X_test)[1:])))
X_valid_flat= X_valid.reshape((len(X_valid), np.product(np.shape(X_valid)[1:])))
X_test = X_test.reshape((10000, 28,28,1))
print(len(X_train), len(X_valid), len(X_test))
load_loc = output_dir / dataset / 'autoencoder'
embedder = tfUMAP(
direct_embedding=False,
verbose=True,
negative_sample_rate=5,
training_epochs=5,
decoding_method = "autoencoder",
batch_size = 100,
dims = dims
)
encoder = tf.keras.models.load_model((load_loc / 'encoder').as_posix())
embedder.encoder = encoder
decoder = tf.keras.models.load_model((load_loc / 'decoder').as_posix())
embedder.decoder = decoder
X_recon = tf.nn.relu(decoder(encoder(X_test))).numpy()
x_real = X_test.reshape((len(X_test), np.product(np.shape(X_test)[1:])))
x_recon = X_recon.reshape((len(X_test), np.product(np.shape(X_test)[1:])))
MSE = mean_squared_error(
x_real,
x_recon
)
MAE = mean_absolute_error(
x_real,
x_recon
)
MedAE = median_absolute_error(
x_real,
x_recon
)
R2 = r2_score(
x_real,
x_recon
)
reconstruction_acc_df.loc[len(reconstruction_acc_df)] = ['AE', 2, dataset, MSE, MAE, MedAE, R2]
reconstruction_acc_df
load_loc = output_dir / dataset / '64' / 'autoencoder'
embedder = tfUMAP(
direct_embedding=False,
verbose=True,
negative_sample_rate=5,
training_epochs=5,
decoding_method = "autoencoder",
batch_size = 100,
dims = dims
)
encoder = tf.keras.models.load_model((load_loc / 'encoder').as_posix())
embedder.encoder = encoder
decoder = tf.keras.models.load_model((load_loc / 'decoder').as_posix())
embedder.decoder = decoder
X_recon = tf.nn.relu(decoder(encoder(X_test))).numpy()
x_real = X_test.reshape((len(X_test), np.product(np.shape(X_test)[1:])))
x_recon = X_recon.reshape((len(X_test), np.product(np.shape(X_test)[1:])))
MSE = mean_squared_error(
x_real,
x_recon
)
MAE = mean_absolute_error(
x_real,
x_recon
)
MedAE = median_absolute_error(
x_real,
x_recon
)
R2 = r2_score(
x_real,
x_recon
)
reconstruction_acc_df.loc[len(reconstruction_acc_df)] = ['AE', 64, dataset, MSE, MAE, MedAE, R2]
reconstruction_acc_df
load_loc = output_dir / dataset / 'recon-network'
embedder = tfUMAP(
direct_embedding=False,
verbose=True,
negative_sample_rate=5,
training_epochs=5,
decoding_method = "network",
batch_size = 100,
dims = dims
)
encoder = tf.keras.models.load_model((load_loc / 'encoder').as_posix())
embedder.encoder = encoder
decoder = tf.keras.models.load_model((load_loc / 'decoder').as_posix())
embedder.decoder = decoder
n_repeats = 10
for i in tqdm(range(n_repeats)):
start_time = time.monotonic()
z_test = encoder(X_test)
end_time = time.monotonic()
print("seconds: ", end_time - start_time)
embed_time = end_time - start_time
start_time = time.monotonic()
x_test_recon = decoder(z_test)
end_time = time.monotonic()
print("seconds: ", end_time - start_time)
recon_time = end_time - start_time
reconstruction_speed_df.loc[len(reconstruction_speed_df)] = [
"network",
2,
dataset,
embed_time,
recon_time,
embed_time + recon_time,
len(X_test_flat)
]
with tf.device('/CPU:0'):
n_repeats = 10
for i in tqdm(range(n_repeats)):
start_time = time.monotonic()
z_test = encoder(X_test)
end_time = time.monotonic()
print("seconds: ", end_time - start_time)
embed_time = end_time - start_time
start_time = time.monotonic()
x_test_recon = decoder(z_test)
end_time = time.monotonic()
print("seconds: ", end_time - start_time)
recon_time = end_time - start_time
reconstruction_speed_df.loc[len(reconstruction_speed_df)] = [
"network-cpu",
2,
dataset,
embed_time,
recon_time,
embed_time + recon_time,
len(X_test_flat)
]
X_recon = tf.nn.relu(decoder(encoder(X_test))).numpy()
x_real = X_test.reshape((len(X_test), np.product(np.shape(X_test)[1:])))
x_recon = X_recon.reshape((len(X_test), np.product(np.shape(X_test)[1:])))
MSE = mean_squared_error(
x_real,
x_recon
)
MAE = mean_absolute_error(
x_real,
x_recon
)
MedAE = median_absolute_error(
x_real,
x_recon
)
R2 = r2_score(
x_real,
x_recon
)
reconstruction_acc_df.loc[len(reconstruction_acc_df)] = ['network', 2, dataset, MSE, MAE, MedAE, R2]
reconstruction_acc_df
load_loc = output_dir / dataset / '64' / 'recon-network'
embedder = tfUMAP(
direct_embedding=False,
verbose=True,
negative_sample_rate=5,
training_epochs=5,
decoding_method = "autoencoder",
batch_size = 100,
dims = dims
)
encoder = tf.keras.models.load_model((load_loc / 'encoder').as_posix())
embedder.encoder = encoder
decoder = tf.keras.models.load_model((load_loc / 'decoder').as_posix())
embedder.decoder = decoder
n_repeats = 10
for i in tqdm(range(n_repeats)):
start_time = time.monotonic()
z_test = encoder(X_test)
end_time = time.monotonic()
print("seconds: ", end_time - start_time)
embed_time = end_time - start_time
start_time = time.monotonic()
x_test_recon = decoder(z_test)
end_time = time.monotonic()
print("seconds: ", end_time - start_time)
recon_time = end_time - start_time
reconstruction_speed_df.loc[len(reconstruction_speed_df)] = [
"network",
64,
dataset,
embed_time,
recon_time,
embed_time + recon_time,
len(X_test_flat)
]
with tf.device("/CPU:0"):
n_repeats = 10
for i in tqdm(range(n_repeats)):
start_time = time.monotonic()
z_test = encoder(X_test)
end_time = time.monotonic()
print("seconds: ", end_time - start_time)
embed_time = end_time - start_time
start_time = time.monotonic()
x_test_recon = decoder(z_test)
end_time = time.monotonic()
print("seconds: ", end_time - start_time)
recon_time = end_time - start_time
reconstruction_speed_df.loc[len(reconstruction_speed_df)] = [
"network-cpu",
64,
dataset,
embed_time,
recon_time,
embed_time + recon_time,
len(X_test_flat)
]
reconstruction_speed_df
X_recon = tf.nn.relu(decoder(encoder(X_test))).numpy()
x_real = X_test.reshape((len(X_test), np.product(np.shape(X_test)[1:])))
x_recon = X_recon.reshape((len(X_test), np.product(np.shape(X_test)[1:])))
MSE = mean_squared_error(
x_real,
x_recon
)
MAE = mean_absolute_error(
x_real,
x_recon
)
MedAE = median_absolute_error(
x_real,
x_recon
)
R2 = r2_score(
x_real,
x_recon
)
reconstruction_acc_df.loc[len(reconstruction_acc_df)] = ['network', 64, dataset, MSE, MAE, MedAE, R2]
reconstruction_acc_df
embedder = UMAP(n_components = 2, verbose=True)
z_umap = embedder.fit_transform(X_train_flat)
x_test_samples= []
x_test_recon_samples= []
n_repeats = 10
for i in tqdm(range(n_repeats)):
start_time = time.monotonic()
z_test = embedder.transform(X_test_flat);
end_time = time.monotonic()
print('seconds: ', end_time - start_time)
embed_time = end_time - start_time
nex = 10 # it would take far too long to reconstruct the entire dataset
samp_idx = np.random.randint(len(z_test), size= nex)
sample = np.array(z_test)[samp_idx]
x_test_samples.append(samp_idx)
start_time = time.monotonic()
x_test_recon = embedder.inverse_transform(sample);
end_time = time.monotonic()
print('seconds: ', end_time - start_time)
recon_time = (end_time - start_time)*len(z_test)/nex
reconstruction_speed_df.loc[len(reconstruction_speed_df)] = [
"umap-learn",
2,
dataset,
embed_time,
recon_time,
embed_time + recon_time,
len(X_test_flat)
]
x_test_recon_samples.append(x_test_recon)
x_recon = np.concatenate(x_test_recon_samples)
x_real = np.array(X_test_flat)[np.concatenate(x_test_samples)]
MSE = mean_squared_error(
x_real,
x_recon
)
MAE = mean_absolute_error(
x_real,
x_recon
)
MedAE = median_absolute_error(
x_real,
x_recon
)
R2 = r2_score(
x_real,
x_recon
)
reconstruction_acc_df.loc[len(reconstruction_acc_df)] = ['umap-learn', 2, dataset, MSE, MAE, MedAE, R2]
reconstruction_acc_df
pca = PCA(n_components=2)
z = pca.fit_transform(X_train_flat)
n_repeats = 10
for i in tqdm(range(n_repeats)):
start_time = time.monotonic()
z_test = pca.transform(X_test_flat);
end_time = time.monotonic()
print('seconds: ', end_time - start_time)
embed_time = end_time - start_time
start_time = time.monotonic()
x_test_recon = pca.inverse_transform(z_test);
end_time = time.monotonic()
print('seconds: ', end_time - start_time)
recon_time = (end_time - start_time)
reconstruction_speed_df.loc[len(reconstruction_speed_df)] = [
"pca",
2,
dataset,
embed_time,
recon_time,
embed_time + recon_time,
len(X_test_flat)
]
X_recon = pca.inverse_transform(pca.transform(X_test_flat))
x_real = X_test.reshape((len(X_test), np.product(np.shape(X_test)[1:])))
x_recon = X_recon.reshape((len(X_test), np.product(np.shape(X_test)[1:])))
MSE = mean_squared_error(
x_real,
x_recon
)
MAE = mean_absolute_error(
x_real,
x_recon
)
MedAE = median_absolute_error(
x_real,
x_recon
)
R2 = r2_score(
x_real,
x_recon
)
reconstruction_acc_df.loc[len(reconstruction_acc_df)] = ['pca', 2, dataset, MSE, MAE, MedAE, R2]
reconstruction_acc_df
pca = PCA(n_components=64)
z = pca.fit_transform(X_train_flat)
n_repeats = 10
for i in tqdm(range(n_repeats)):
start_time = time.monotonic()
z_test = pca.transform(X_test_flat);
end_time = time.monotonic()
print('seconds: ', end_time - start_time)
embed_time = end_time - start_time
start_time = time.monotonic()
x_test_recon = pca.inverse_transform(z_test);
end_time = time.monotonic()
print('seconds: ', end_time - start_time)
recon_time = (end_time - start_time)
reconstruction_speed_df.loc[len(reconstruction_speed_df)] = [
"pca",
64,
dataset,
embed_time,
recon_time,
embed_time + recon_time,
len(X_test_flat)
]
X_recon = pca.inverse_transform(pca.transform(X_test_flat))
x_real = X_test.reshape((len(X_test), np.product(np.shape(X_test)[1:])))
x_recon = X_recon.reshape((len(X_test), np.product(np.shape(X_test)[1:])))
MSE = mean_squared_error(
x_real,
x_recon
)
MAE = mean_absolute_error(
x_real,
x_recon
)
MedAE = median_absolute_error(
x_real,
x_recon
)
R2 = r2_score(
x_real,
x_recon
)
reconstruction_acc_df.loc[len(reconstruction_acc_df)] = ['pca', 64, dataset, MSE, MAE, MedAE, R2]
reconstruction_acc_df
#save_loc = DATA_DIR / 'reconstruction_speed' / (dataset + '.pickle')
#ensure_dir(save_loc)
#reconstruction_speed_df.to_pickle(save_loc)
save_loc = DATA_DIR / 'reconstruction_acc' / (dataset + '.pickle')
ensure_dir(save_loc)
reconstruction_acc_df.to_pickle(save_loc)
| 0.514156 | 0.829354 |
# Now You Code 1: Exam Stats
Write a program to input exam scores (out of 100) until you enter 'quit'. After you finish entering exam scores the program will print out the average of all the exam scores.
**HINTS:**
- To figure out the average you must keep a running total of exam scores and a count of the number entered.
- Try to make the program work once, then figure out the exit condition and use a `while True:` loop (a minimal sketch of this pattern is shown below).
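A minimal, hedged sketch of the running-total / sentinel-loop pattern these hints describe (variable names are illustrative and not part of the assignment):
```
# sketch: sentinel-controlled loop with a running total (illustrative only)
total = 0.0
count = 0
while True:
    entry = input("Enter exam score or type 'quit': ")
    if entry == 'quit':
        break               # sentinel value ends the loop
    total += float(entry)   # running total of the scores
    count += 1              # number of scores entered
if count > 0:
    print("Number of scores %d. Average is %.2f" % (count, total / count))
```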
Example Run:
```
Enter exam score or type 'quit': 100
Enter exam score or type 'quit': 100
Enter exam score or type 'quit': 50
Enter exam score or type 'quit': quit
Number of scores 3. Average is 83.33
```
## Step 1: Problem Analysis
Inputs:
1. exam scores out of 100, entered until the user types 'quit'
Outputs:
1. the average of the scores entered (and the number of scores)
Algorithm (Steps in Program):
1. prompt with input("Enter exam score or type 'quit': ")
2. break out of the loop when the user types 'quit'
3. accept the score only when 0 <= user_input <= 100
4. total_scores = int(user_input) + total_scores
5. count_scores = count_scores + 1
6. print("Number of scores %d. Average is %.2f" % (count_scores, average_score))
7. warn the user when user_input < 0 or user_input > 100
```
# Write Code here:
count_scores = 0
total_scores = 0
try:
while True:
user_input = input("Enter exam score or type 'quit': ")
if user_input == 'quit':
break
elif int(user_input) < 0:
print("You cannot put exam score below 0")
elif int(user_input) <= 100:
total_scores = int(user_input) + total_scores
count_scores = count_scores + 1
elif int(user_input) > 100:
print("You cannot put exam score over 100")
average_score = float(total_scores) / count_scores
print("Number of scores %d. Average is %.2f" % (count_scores, average_score))
except ZeroDivisionError:
print("You cannot divide score by 0")
```
## Reminder of Evaluation Criteria
1. Was the problem attempted (analysis, code, and answered questions)?
2. Was the problem analysis thought out? (does the program match the plan?)
3. Does the code execute without syntax error?
4. Does the code solve the intended problem?
5. Is the code well written? (easy to understand, modular, and self-documenting, handles errors)
## Step 3: Questions
1. What is the loop control variable in this program?
1. `True` is the loop control condition in this program: the loop runs indefinitely because the condition is always true, and only a `break` statement can end it.
2. What is the exit or terminating condition of the loop in this program?
1. The exit (terminating) condition of the loop is the `break` statement, which executes when the user enters 'quit'.
3. What happens when you enter an exam score outside the range of 0 to 100? Re-write your code to not accept exam scores outside this range.
1. The program still runs, but it no longer serves its original purpose: out-of-range scores get included and distort the average. The code is re-written below with `elif` branches so that scores outside 0 to 100 are rejected.
```
# Write Code here:
count_scores = 0
total_scores = 0
while True:
user_input = input("Enter exam score or type 'quit': ")
if user_input == 'quit':
break
elif int(user_input) < 0:
print("You cannot put exam score below 0")
elif int(user_input) > 100:
print("You cannot put exam score over 100")
elif int(user_input) <= 100:
total_scores = int(user_input) + total_scores
count_scores = count_scores + 1
average_score = float(total_scores) / count_scores
print("Number of scores %d. Average is %.2f" % (count_scores, average_score))
```
|
github_jupyter
|
Enter exam score or type 'quit': 100
Enter exam score or type 'quit': 100
Enter exam score or type 'quit': 50
Enter exam score or type 'quit': quit
Number of scores 3. Average is 83.33
# Write Code here:
count_scores = 0
total_scores = 0
try:
while True:
user_input = input("Enter exam score or type 'quit': ")
if user_input == 'quit':
break
elif int(user_input) < 0:
print("You cannot put exam score below 0")
elif int(user_input) <= 100:
total_scores = int(user_input) + total_scores
count_scores = count_scores + 1
elif int(user_input) > 100:
print("You cannot put exam score over 100")
average_score = float(total_scores) / count_scores
print("Number of scores %d. Average is %.2f" % (count_scores, average_score))
except ZeroDivisionError:
print("You cannot divide score by 0")
# Write Code here:
count_scores = 0
total_scores = 0
while True:
user_input = input("Enter exam score or type 'quit': ")
if user_input == 'quit':
break
elif int(user_input) < 0:
print("You cannot put exam score below 0")
elif int(user_input) > 100:
print("You cannot put exam score over 100")
elif int(user_input) <= 100:
total_scores = int(user_input) + total_scores
count_scores = count_scores + 1
average_score = float(total_scores) / count_scores
print("Number of scores %d. Average is %.2f" % (count_scores, average_score))
| 0.25488 | 0.748467 |
Univariate analysis of block design, one condition versus rest, single subject
==============================================================================
Authors: Bertrand Thirion, Elvis Dohmatob, Christophe Pallier, 2015--2017
Modified: Ralf Schmaelzle, 2019
In this tutorial, we compare the fMRI signal during periods of auditory stimulation
versus periods of rest, using a General Linear Model (GLM). We will
use a univariate approach in which independent tests are performed at
each single voxel.
The dataset comes from an experiment conducted at the FIL by Geraint Rees
under the direction of Karl Friston. It is provided by the FIL methods
group, which develops the SPM software.
According to SPM documentation, 96 acquisitions were made (RT=7s), in
blocks of 6, giving 16 42s blocks. The condition for successive blocks
alternated between rest and auditory stimulation, starting with rest.
Auditory stimulation was bi-syllabic words presented binaurally at a
rate of 60 per minute. The functional data starts at acquisition 4,
image fM00223_004.
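As a quick, illustrative sanity check of those timing numbers (this cell just spells out the arithmetic from the description above; it is not part of the original tutorial):
```
# block-design timing implied by the description above
tr = 7.0                  # repetition time (seconds)
scans_per_block = 6
n_scans = 96
block_duration = scans_per_block * tr   # 42.0 s per block
n_blocks = n_scans // scans_per_block   # 16 alternating rest/active blocks
total_duration = n_scans * tr           # 672.0 s for the whole run
print(block_duration, n_blocks, total_duration)
```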
The whole brain BOLD/EPI images were acquired on a modified 2T Siemens
MAGNETOM Vision system. Each acquisition consisted of 64 contiguous
slices (64x64x64 3mm x 3mm x 3mm voxels). Acquisition took 6.05s, with
the scan to scan repeat time (RT) set arbitrarily to 7s.
The analysis described here is performed in native space, on the
original EPI scans, without any spatial or temporal preprocessing.
(More sensitive results would likely be obtained on the corrected,
spatially normalized and smoothed images.)
## Import modules
```
import os, sys, nibabel
!pip install nistats
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from os.path import join
import seaborn as sns
from nilearn import plotting, datasets, image
from nilearn.image import concat_imgs
from nilearn.input_data import NiftiSpheresMasker
from nistats.first_level_model import FirstLevelModel
from nistats.datasets import fetch_spm_auditory
from nistats.reporting import plot_design_matrix
from nilearn.plotting import plot_stat_map, plot_anat, plot_img
from nibabel.affines import apply_affine
```
Retrieving the data
-------------------
We can list the filenames of the functional images
```
subject_data = fetch_spm_auditory()
subject_data
```
Display the first functional image:
RalfNote: there may be some ugly red error output. Just re-run the cell and it should be gone
```
%matplotlib inline
plot_img(subject_data.func[0]);
```
Display the subject's anatomical image:
```
plot_anat(subject_data.anat);
#plotting.view_img(subject_data.anat)
```
Next, we concatenate all the 3D EPI images into a single 4D image.
```
fmri_img = concat_imgs(subject_data.func)
print(fmri_img.shape)
```
Plot the data from one voxel:
```
data_from_one_voxel = fmri_img.get_data()[22,30,26,:] #22,30,26
plt.figure(figsize = (10,2))
plt.plot(data_from_one_voxel);
plt.xlabel('Time (volumes)');
plt.ylabel('fMRI Signal');
```
And we average all the EPI images in order to create a background image that will be used to display the activations:
```
mean_img = image.mean_img(fmri_img)
plot_anat(mean_img);
```
Specifying the experimental paradigm
------------------------------------
We must provide a description of the experiment, that is, define the
timing of the auditory stimulation and rest periods. According to
the documentation of the dataset, there were 16 42s blocks --- in
which 6 scans were acquired --- alternating between rest and
auditory stimulation, starting with rest. We use standard python
functions to create a pandas.DataFrame object that specifies the
timings:
```
tr = 7.
slice_time_ref = 0.
n_scans = 96
epoch_duration = 6 * tr # duration in seconds
conditions = ['rest', 'active'] * 8
n_blocks = len(conditions)
duration = epoch_duration * np.ones(n_blocks)
onset = np.linspace(0, (n_blocks - 1) * epoch_duration, n_blocks)
events = pd.DataFrame({'onset': onset, 'duration': duration, 'trial_type': conditions})
```
The ``events`` object contains the information for the design:
```
print(events)
```
Performing the GLM analysis
---------------------------
We need to construct a *design matrix* using the timing information
provided by the ``events`` object. The design matrix contains
regressors of interest as well as regressors of non-interest
modeling temporal drifts:
```
frame_times = np.linspace(0, (n_scans - 1) * tr, n_scans)
drift_model = 'Cosine'
period_cut = 4. * epoch_duration
hrf_model = 'glover + derivative'
```
It is now time to create a ``FirstLevelModel`` object
and fit it to the 4D dataset:
```
fmri_glm = FirstLevelModel(tr, slice_time_ref, noise_model='ar1',
standardize=False, hrf_model=hrf_model,
drift_model=drift_model, period_cut=period_cut)
fmri_glm = fmri_glm.fit(fmri_img, events)
```
One can inspect the design matrix (rows represent time, and
columns contain the predictors):
```
design_matrix = fmri_glm.design_matrices_[0]
fig, ax1 = plt.subplots(figsize=(6, 8), nrows=1, ncols=1)
plot_design_matrix(design_matrix, ax= ax1, rescale= True);
```
The first column contains the expected response profile of regions that are
sensitive to the auditory stimulation.
```
plt.plot(design_matrix['active'])
plt.xlabel('scan')
plt.title('Expected Auditory Response')
plt.show()
```
Detecting voxels with significant effects
-----------------------------------------
To access the estimated coefficients (the betas of the GLM model), we
create contrasts with a single '1' in each of the columns:
```
contrast_matrix = np.eye(design_matrix.shape[1])
contrasts = dict([(column, contrast_matrix[i])
for i, column in enumerate(design_matrix.columns)])
"""
contrasts::
{
'active': array([ 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]),
'active_derivative': array([ 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]),
'constant': array([ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1.]),
'drift_1': array([ 0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0.]),
'drift_2': array([ 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0.]),
'drift_3': array([ 0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0.]),
'drift_4': array([ 0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0.]),
'drift_5': array([ 0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 0.]),
'drift_6': array([ 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0.]),
'drift_7': array([ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 0.]),
'rest': array([ 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0.]),
'rest_derivative': array([ 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0.])}
"""
```
We can then compare the two conditions 'active' and 'rest' by
generating the relevant contrast:
```
active_minus_rest = contrasts['active'] - contrasts['rest']
eff_map = fmri_glm.compute_contrast(active_minus_rest,
output_type='effect_size')
z_map = fmri_glm.compute_contrast(active_minus_rest,
output_type='z_score')
```
Plot the thresholded z-score map
```
plot_stat_map(z_map, bg_img=mean_img, threshold=3.0,
display_mode='z', cut_coords=3, black_bg=True,
title='Active minus Rest (Z>3)');
plotting.view_img(z_map, bg_img=mean_img, threshold=3., title="Active vs. Rest contrast")
```
We can use ``nibabel.save`` to save the effect and z-score maps to disk
```
outdir = 'results'
if not os.path.exists(outdir):
os.mkdir(outdir)
nibabel.save(z_map, join('results', 'active_vs_rest_z_map.nii'))
nibabel.save(eff_map, join('results', 'active_vs_rest_eff_map.nii'))
```
Extract the signal from a voxel
--------------------------------
We search for the voxel with the largest z-score and plot the signal
(warning: double dipping!)
```
# Find the coordinates of the peak
values = z_map.get_data()
coord_peaks = np.dstack(np.unravel_index(np.argsort(values.ravel()),
values.shape))[0, 0, :]
coord_mm = apply_affine(z_map.affine, coord_peaks)
```
We create a masker for the voxel (allowing us to detrend the signal)
and extract the time course
```
mask = NiftiSpheresMasker([coord_mm], radius=3,
detrend=True, standardize=True,
high_pass=None, low_pass=None, t_r=7.)
sig = mask.fit_transform(fmri_img)
```
Let's plot the signal and the theoretical response
```
plt.plot(frame_times, sig, label='voxel %d %d %d' % tuple(coord_mm))
plt.plot(design_matrix['active'], color='red', label='model')
plt.xlabel('scan')
plt.legend()
plt.show()
plt.figure(figsize = (3,4));
sns.regplot(design_matrix['active'], np.squeeze(sig));
plt.xlim([-0.5, 1.5]);
plt.xlabel('Predicted response')
plt.ylabel('Measured response')
#plt.axis('equal')
np.corrcoef(np.squeeze(sig),design_matrix['active'] )[0,1]
```
|
github_jupyter
|
import os, sys, nibabel
!pip install nistats
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from os.path import join
import seaborn as sns
from nilearn import plotting, datasets, image
from nilearn.image import concat_imgs
from nilearn.input_data import NiftiSpheresMasker
from nistats.first_level_model import FirstLevelModel
from nistats.datasets import fetch_spm_auditory
from nistats.reporting import plot_design_matrix
from nilearn.plotting import plot_stat_map, plot_anat, plot_img
from nibabel.affines import apply_affine
subject_data = fetch_spm_auditory()
subject_data
%matplotlib inline
plot_img(subject_data.func[0]);
plot_anat(subject_data.anat);
#plotting.view_img(subject_data.anat)
fmri_img = concat_imgs(subject_data.func)
print(fmri_img.shape)
data_from_one_voxel = fmri_img.get_data()[22,30,26,:] #22,30,26
plt.figure(figsize = (10,2))
plt.plot(data_from_one_voxel);
plt.xlabel('Time (volumes)');
plt.ylabel('fMRI Signal');
mean_img = image.mean_img(fmri_img)
plot_anat(mean_img);
tr = 7.
slice_time_ref = 0.
n_scans = 96
epoch_duration = 6 * tr # duration in seconds
conditions = ['rest', 'active'] * 8
n_blocks = len(conditions)
duration = epoch_duration * np.ones(n_blocks)
onset = np.linspace(0, (n_blocks - 1) * epoch_duration, n_blocks)
events = pd.DataFrame({'onset': onset, 'duration': duration, 'trial_type': conditions})
print(events)
frame_times = np.linspace(0, (n_scans - 1) * tr, n_scans)
drift_model = 'Cosine'
period_cut = 4. * epoch_duration
hrf_model = 'glover + derivative'
fmri_glm = FirstLevelModel(tr, slice_time_ref, noise_model='ar1',
standardize=False, hrf_model=hrf_model,
drift_model=drift_model, period_cut=period_cut)
fmri_glm = fmri_glm.fit(fmri_img, events)
design_matrix = fmri_glm.design_matrices_[0]
fig, ax1 = plt.subplots(figsize=(6, 8), nrows=1, ncols=1)
plot_design_matrix(design_matrix, ax= ax1, rescale= True);
plt.plot(design_matrix['active'])
plt.xlabel('scan')
plt.title('Expected Auditory Response')
plt.show()
contrast_matrix = np.eye(design_matrix.shape[1])
contrasts = dict([(column, contrast_matrix[i])
for i, column in enumerate(design_matrix.columns)])
"""
contrasts::
{
'active': array([ 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]),
'active_derivative': array([ 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]),
'constant': array([ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1.]),
'drift_1': array([ 0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0.]),
'drift_2': array([ 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0.]),
'drift_3': array([ 0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0.]),
'drift_4': array([ 0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0.]),
'drift_5': array([ 0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 0.]),
'drift_6': array([ 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0.]),
'drift_7': array([ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 0.]),
'rest': array([ 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0.]),
'rest_derivative': array([ 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0.])}
"""
active_minus_rest = contrasts['active'] - contrasts['rest']
eff_map = fmri_glm.compute_contrast(active_minus_rest,
output_type='effect_size')
z_map = fmri_glm.compute_contrast(active_minus_rest,
output_type='z_score')
plot_stat_map(z_map, bg_img=mean_img, threshold=3.0,
display_mode='z', cut_coords=3, black_bg=True,
title='Active minus Rest (Z>3)');
plotting.view_img(z_map, bg_img=mean_img, threshold=3., title="Active vs. Rest contrast")
outdir = 'results'
if not os.path.exists(outdir):
os.mkdir(outdir)
nibabel.save(z_map, join('results', 'active_vs_rest_z_map.nii'))
nibabel.save(eff_map, join('results', 'active_vs_rest_eff_map.nii'))
# Find the coordinates of the peak
values = z_map.get_data()
coord_peaks = np.dstack(np.unravel_index(np.argsort(values.ravel()),
values.shape))[0, 0, :]
coord_mm = apply_affine(z_map.affine, coord_peaks)
mask = NiftiSpheresMasker([coord_mm], radius=3,
detrend=True, standardize=True,
high_pass=None, low_pass=None, t_r=7.)
sig = mask.fit_transform(fmri_img)
plt.plot(frame_times, sig, label='voxel %d %d %d' % tuple(coord_mm))
plt.plot(design_matrix['active'], color='red', label='model')
plt.xlabel('scan')
plt.legend()
plt.show()
plt.figure(figsize = (3,4));
sns.regplot(design_matrix['active'], np.squeeze(sig));
plt.xlim([-0.5, 1.5]);
plt.xlabel('Predicted response')
plt.ylabel('Measured response')
#plt.axis('equal')
np.corrcoef(np.squeeze(sig),design_matrix['active'] )[0,1]
| 0.461988 | 0.974725 |

> **Copyright (c) 2021 CertifAI Sdn. Bhd.**<br>
<br>
This program is part of OSRFramework. You can redistribute it and/or modify
<br>it under the terms of the GNU Affero General Public License as published by
<br>the Free Software Foundation, either version 3 of the License, or
<br>(at your option) any later version.
<br>
<br>This program is distributed in the hope that it will be useful
<br>but WITHOUT ANY WARRANTY; without even the implied warranty of
<br>MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
<br>GNU Affero General Public License for more details.
<br>
<br>You should have received a copy of the GNU Affero General Public License
<br>along with this program. If not, see <http://www.gnu.org/licenses/>.
<br>
# Introduction
In the last tutorial, you learned about Python data types and variables. Now you will be introduced to Python sequences and related operations.
# Notebook Content
* [Python Sequence](#Python-Sequence)
* [Index Operator](#Index-Operator)
* [Sequence Length](#Length)
* [Slice Operator](#Slice-Operator)
* [Concatenation and Repetition](#Concatenation-and-Repetition)
* [Count](#Count)
* [Split vs Join](#Split-vs-Join)
* [Challenges](#Challenge-1)
# Python Sequence
1) **Strings**
- Strings are sequential collections of characters
- *Empty string* is a string that contains no characters
2) **Lists**
- A list is a **mutable** sequential collection of Python data values
- Values inside a list are called elements
- Each element can be of any data type
3) **Tuples**
- A tuple is an **immutable** sequence of items of any type
- Comma-separated sequence of values, enclosed in parentheses
**Each type of sequence can be accessed by using index**
```
val_1 = "I love programming"
val_2 = [1, 3.142, False, "apple", val_1]
val_3 = (1, 3.142, False, "apple", val_1)
print(type(val_1))
print(type(val_2))
print(type(val_3))
print(val_1)
print(val_2)
print(val_3)
```
# Index Operator
- Select a single character / element / item from a string / list / tuple
- Use index value
> 
- Indexed left to right using **positive numbers** from position 0 to position 13
- Indexed right to left using **negative numbers** where -1 is the rightmost index
```
school = "Luther College"
m = school[2]
print("m:", m)
lastchar = school[-1]
print("last character:", lastchar)
numbers = [17, 123, 87, 34, 66, 8398, 44]
print(numbers[2])
print(numbers[9-8])
print(numbers[-2])
prices = (1.99, 2.00, 5.50, 20.95, 100.98)
print(prices[0])
print(prices[-1])
print(prices[3-5])
```
# Length
- To determine the number of elements in a sequence (string, list, tuple)
- Use *len()* function
```
fruit = "Banana"
length = len(fruit)
print(length)
a_list = [3, 67, "cat", 3.14, False]
print(len(a_list))
a_tuple = ("hi", "morning", "dog", "506", "caterpillar", "balloons", 106, "yo-yo")
print(len(a_tuple))
```
# Slice Operator
[n:m]
where n = starting index (inclusive)
m = ending index (exclusive)
- If you omit the first index (n), the slice starts at the beginning of the string
- If you omit the second index (m), the slice goes to the end of the string
```
title = "CSDISCOVERY"
print(title[3:6])
print(title[:2])
print(title[2:])
a_list = ['a', 'b', 'c', 'd', 'e', 'f']
print(a_list[1:3])
print(a_list[:4])
print(a_list[3:])
print(a_list[:])
julia = ("Julia", "Roberts", 1967, "Duplicity", 2009, "Actress", "Atlanta, Georgia")
print(julia[2:6])
julia = julia[:3] + ("Eat Pray Love", 2010) + julia[5:]
print(julia)
```
# Concatenation and Repetition
- Using + operator in concatenation
- str + str
- list + list
- tuple + tuple
- Using * operator in repetition
```
a_list = [1,3,5]
b_list = [2,4,6]
print(a_list + b_list)
alist = [1,3,5]
print(alist * 3)
```
Why does the following code run into an error?
print(['first'] + "second")
How can you resolve it? (One possible fix is sketched below.)
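One possible way to resolve it (a sketch): make both operands the same sequence type before concatenating.
```
# concatenation requires both operands to be the same sequence type
print(['first'] + ['second'])      # list + list -> ['first', 'second']
print('first' + 'second')          # str + str  -> 'firstsecond'
print(['first'] + list('second'))  # list + list of characters
```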
# Count
- Built-in method of sequence data types
- Returns the number of times that the argument occurred
- Matched elements must have the same type and value as the argument
**Important**: When you use *count()* on a string, the argument can only be a string
string = "2322322"
print(string.count(2)) # Error!
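A hedged fix for the snippet above: pass the argument as a string (or convert it first).
```
string = "2322322"
# string.count(2)            # TypeError: the argument must be a string
print(string.count("2"))     # 5 -- count occurrences of the character "2"
print(string.count(str(2)))  # equivalent: convert the number to a string first
```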
```
a = "I have had an apple on my desk before!"
print(a.count("e"))
print(a.count("ha"))
z = ['atoms', 4, 'neutron', 6, 'proton', 4, 'electron', 4, 'electron', 'atoms']
print(z.count("4"))
print(z.count(4))
print(z.count("a"))
print(z.count("electron"))
```
# Index
- Built-in method of sequence data types
- Returns the leftmost index where the argument is found
- An error will occur if the argument cannot be found
seasons = ["winter", "spring", "summer", "fall"]
print(seasons.index("autumn")) # Error!
**Important**: It takes only a string argument when *index()* is used on strings
```
music = "Pull out your music and dancing can begin"
print(music.index("m"))
print(music.index("your"))
bio = ["Metatarsal", "Metatarsal", "Fibula", [], "Tibia", "Tibia", 43, "Femur", "Occipital", "Metatarsal"]
print(bio.index("Metatarsal")) # There are two Metatarsal in the bio list
print(bio.index([])) # Even we can find the index of empty list
print(bio.index(43))
```
# Split vs Join
## - *split()*
- Breaks a string into a list of words
str.split(sep=None, maxsplit=-1)
where sep = delimiter string
      maxsplit = maximum number of splits to perform
- By default, **whitespace** is the delimiter and maxsplit is -1 (**no limit**)
<img src="https://fopp.umsi.education/books/published/fopp/_images/split_default.gif">
- Delimiter can be used to **specify characters** to use as word boundaries
<img src="https://fopp.umsi.education/books/published/fopp/_images/split_on_e.jpeg">
## - *join()*
- The reverse of *split()*
- Join the list with the **separator (glue)** between each of the elements
str.join(iterable)
iterable = sequence of strings
<img src=https://fopp.umsi.education/books/published/fopp/_images/join.gif>
- **TypeError** will be raised if the iterable contains non-string values (see the short example below)
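A small sketch illustrating that note: joining non-string items raises a TypeError unless they are converted to strings first.
```
nums = [1, 2, 3]
# ';'.join(nums)                        # TypeError: sequence item 0: expected str instance
print(';'.join(str(n) for n in nums))   # '1;2;3'
```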
```
song = "The rain in Spain..."
word_list_1 = song.split()
print(word_list_1)
word_list_2 = song.split('ai')
print(word_list_2)
colours = ["red", "blue", "green"]
glue = ';'
joining = glue.join(colours)
print(joining)
print(" ".join(colours))
```
# Challenge 1
Write a program that extracts the **last three items** in the list sports and assigns it to the variable **last**.
sports = ['cricket', 'football', 'volleyball', 'baseball', 'softball', 'track and field', 'curling', 'ping pong', 'hockey']
Make sure to write your code so that it works no matter how many items are in the list.
```
sports = ['cricket', 'football', 'volleyball', 'baseball', 'softball', 'track and field', 'curling', 'ping pong', 'hockey']
last = sports[-3:]
print(last)
```
# Challenge 2
Write code to determine **how many 9's** are in the list nums and assign that value to the variable **how_many**.
nums = [4, 2, 23.4, 9, 545, 9, 1, 234.001, 5, 49, 8, 9 , 34, 52, 1, -2, 9.1, 4]
Do not use a for loop to do this.
```
nums = [4, 2, 23.4, 9, 545, 9, 1, 234.001, 5, 49, 8, 9 , 34, 52, 1, -2, 9.1, 4]
how_many = nums.count(9)
print(how_many)
```
# Challenge 3
Write code that **uses slicing** to **get rid of the second 8** so that there are only two 8's in the list bound to the variable **nums**.
nums = [4, 2, 8, 23.4, 8, 9, 545, 9, 1, 234.001, 5, 49, 8, 9 , 34, 52, 1, -2, 9.1, 4]
```
nums = [4, 2, 8, 23.4, 8, 9, 545, 9, 1, 234.001, 5, 49, 8, 9 , 34, 52, 1, -2, 9.1, 4]
nums = nums[:4] + nums[5:]
print(nums)
```
# Challenge 4
Create a variable called **wrds** and assign to it a list whose elements are the **words in the string sent**.
sent = "The bicentennial for our university was in 2017"
Assign the number of elements in wrds to the variable **num_lst**.
```
sent = "The bicentennial for our university was in 2017"
wrds = sent.split()
num_lst = len(wrds)
print(wrds)
print(num_lst)
```
# Contributors
**Author**
<br>Chee Lam
# References
1. [Python Documentation](https://docs.python.org/3/)
2. [Coursera Python Specialization Course](https://www.coursera.org/specializations/python-3-programming)
|
github_jupyter
|
val_1 = "I love programming"
val_2 = [1, 3.142, False, "apple", val_1]
val_3 = (1, 3.142, False, "apple", val_1)
print(type(val_1))
print(type(val_2))
print(type(val_3))
print(val_1)
print(val_2)
print(val_3)
school = "Luther College"
m = school[2]
print("m:", m)
lastchar = school[-1]
print("last character:", lastchar)
numbers = [17, 123, 87, 34, 66, 8398, 44]
print(numbers[2])
print(numbers[9-8])
print(numbers[-2])
prices = (1.99, 2.00, 5.50, 20.95, 100.98)
print(prices[0])
print(prices[-1])
print(prices[3-5])
fruit = "Banana"
length = len(fruit)
print(length)
a_list = [3, 67, "cat", 3.14, False]
print(len(a_list))
a_tuple = ("hi", "morning", "dog", "506", "caterpillar", "balloons", 106, "yo-yo")
print(len(a_tuple))
title = "CSDISCOVERY"
print(title[3:6])
print(title[:2])
print(title[2:])
a_list = ['a', 'b', 'c', 'd', 'e', 'f']
print(a_list[1:3])
print(a_list[:4])
print(a_list[3:])
print(a_list[:])
julia = ("Julia", "Roberts", 1967, "Duplicity", 2009, "Actress", "Atlanta, Georgia")
print(julia[2:6])
julia = julia[:3] + ("Eat Pray Love", 2010) + julia[5:]
print(julia)
a_list = [1,3,5]
b_list = [2,4,6]
print(a_list + b_list)
alist = [1,3,5]
print(alist * 3)
a = "I have had an apple on my desk before!"
print(a.count("e"))
print(a.count("ha"))
z = ['atoms', 4, 'neutron', 6, 'proton', 4, 'electron', 4, 'electron', 'atoms']
print(z.count("4"))
print(z.count(4))
print(z.count("a"))
print(z.count("electron"))
music = "Pull out your music and dancing can begin"
print(music.index("m"))
print(music.index("your"))
bio = ["Metatarsal", "Metatarsal", "Fibula", [], "Tibia", "Tibia", 43, "Femur", "Occipital", "Metatarsal"]
print(bio.index("Metatarsal")) # There are two Metatarsal in the bio list
print(bio.index([])) # Even we can find the index of empty list
print(bio.index(43))
song = "The rain in Spain..."
word_list_1 = song.split()
print(word_list_1)
word_list_2 = song.split('ai')
print(word_list_2)
colours = ["red", "blue", "green"]
glue = ';'
joining = glue.join(colours)
print(joining)
print(" ".join(colours))
sports = ['cricket', 'football', 'volleyball', 'baseball', 'softball', 'track and field', 'curling', 'ping pong', 'hockey']
last = sports[-3:]
print(last)
nums = [4, 2, 23.4, 9, 545, 9, 1, 234.001, 5, 49, 8, 9 , 34, 52, 1, -2, 9.1, 4]
how_many = nums.count(9)
print(how_many)
nums = [4, 2, 8, 23.4, 8, 9, 545, 9, 1, 234.001, 5, 49, 8, 9 , 34, 52, 1, -2, 9.1, 4]
nums = nums[:4] + nums[5:]
print(nums)
sent = "The bicentennial for our university was in 2017"
wrds = sent.split()
num_lst = len(wrds)
print(wrds)
print(num_lst)
| 0.111555 | 0.723871 |
# Mask R-CNN Demo
A quick intro to using the pre-trained model to detect and segment objects.
```
import os
import sys
import random
import math
import numpy as np
import skimage.io
import matplotlib
import matplotlib.pyplot as plt
# Root directory of the project
ROOT_DIR = os.path.abspath("../../")
# Import Mask RCNN
sys.path.append(ROOT_DIR) # To find local version of the library
from mrcnn import utils
import mrcnn.model as modellib
from mrcnn import visualize
# Import COCO config
sys.path.append(os.path.join(ROOT_DIR, "samples/balloon/")) # To find local version
from samples.balloon import balloon1
%matplotlib inline
from mrcnn import utils
from mrcnn import visualize
from mrcnn.visualize import display_images
import mrcnn.model as modellib
from mrcnn.model import log
config = balloon1.BalloonConfig()
class InferenceConfig(config.__class__):
# Set batch size to 1 since we'll be running inference on
# one image at a time. Batch size = GPU_COUNT * IMAGES_PER_GPU
GPU_COUNT = 1
IMAGES_PER_GPU = 1
config = InferenceConfig()
config.display()
import tensorflow as tf
# Device to load the neural network on.
# Useful if you're training a model on the same
# machine, in which case use CPU and leave the
# GPU for training.
DEVICE = "/cpu:0" # /cpu:0 or /gpu:0
# Inspect the model in training or inference modes
# values: 'inference' or 'training'
# TODO: code for 'training' test mode not ready yet
#TEST_MODE = "inference"
TEST_MODE = "inference"
# MODEL_DIR was not defined earlier in this cell; MaskRCNN requires a model/logs
# directory even in inference mode. The path below is an assumption that follows
# the standard Mask R-CNN demo layout.
MODEL_DIR = os.path.join(ROOT_DIR, "logs")

with tf.device(DEVICE):
    model = modellib.MaskRCNN(mode="inference", model_dir=MODEL_DIR,
                              config=config)
# Load weights trained on MS-COCO
model.load_weights("D:/logs/ModelMask/LogosWeight.h5", by_name=True)
class_names = ['BG','cocacola','heineken','leo','pepsi','singha','starbucks']
def get_ax(rows=1, cols=1, size=16):
"""Return a Matplotlib Axes array to be used in
all visualizations in the notebook. Provide a
central point to control graph sizes.
Adjust the size attribute to control how big to render images
"""
_, ax = plt.subplots(rows, cols, figsize=(size*cols, size*rows))
return ax
# Load a random image from the images folder
#file_names = next(os.walk(IMAGE_DIR))[2]
image = skimage.io.imread('G:/LOGO_BD/Test/Singha176.jpeg')
# Run detection
results = model.detect([image], verbose=1)
# Visualize results
r = results[0]
print(''.join(str(r['class_ids'])))
visualize.display_instances(image, r['rois'], r['masks'], r['class_ids'],
class_names, r['scores'])
```
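As an optional follow-up (not part of the original demo), the results dictionary can be filtered by detection confidence before display; the 0.9 threshold below is an arbitrary, illustrative value.
```
# keep only detections whose confidence exceeds a chosen threshold (illustrative)
keep = r['scores'] > 0.9
filtered_rois = r['rois'][keep]
filtered_class_ids = r['class_ids'][keep]
filtered_scores = r['scores'][keep]
filtered_masks = r['masks'][:, :, keep]   # masks are stacked along the last axis
visualize.display_instances(image, filtered_rois, filtered_masks,
                            filtered_class_ids, class_names, filtered_scores)
```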
|
github_jupyter
|
import os
import sys
import random
import math
import numpy as np
import skimage.io
import matplotlib
import matplotlib.pyplot as plt
# Root directory of the project
ROOT_DIR = os.path.abspath("../../")
# Import Mask RCNN
sys.path.append(ROOT_DIR) # To find local version of the library
from mrcnn import utils
import mrcnn.model as modellib
from mrcnn import visualize
# Import COCO config
sys.path.append(os.path.join(ROOT_DIR, "samples/balloon/")) # To find local version
from samples.balloon import balloon1
%matplotlib inline
from mrcnn import utils
from mrcnn import visualize
from mrcnn.visualize import display_images
import mrcnn.model as modellib
from mrcnn.model import log
config = balloon1.BalloonConfig()
class InferenceConfig(config.__class__):
# Set batch size to 1 since we'll be running inference on
# one image at a time. Batch size = GPU_COUNT * IMAGES_PER_GPU
GPU_COUNT = 1
IMAGES_PER_GPU = 1
config = InferenceConfig()
config.display()
import tensorflow as tf
# Device to load the neural network on.
# Useful if you're training a model on the same
# machine, in which case use CPU and leave the
# GPU for training.
DEVICE = "/cpu:0" # /cpu:0 or /gpu:0
# Inspect the model in training or inference modes
# values: 'inference' or 'training'
# TODO: code for 'training' test mode not ready yet
#TEST_MODE = "inference"
TEST_MODE = "inference"
with tf.device(DEVICE):
model = modellib.MaskRCNN(mode="inference", model_dir=MODEL_DIR,
config=config)
# Load weights trained on MS-COCO
model.load_weights("D:/logs/ModelMask/LogosWeight.h5", by_name=True)
class_names = ['BG','cocacola','heineken','leo','pepsi','singha','starbucks']
def get_ax(rows=1, cols=1, size=16):
"""Return a Matplotlib Axes array to be used in
all visualizations in the notebook. Provide a
central point to control graph sizes.
Adjust the size attribute to control how big to render images
"""
_, ax = plt.subplots(rows, cols, figsize=(size*cols, size*rows))
return ax
# Load a random image from the images folder
#file_names = next(os.walk(IMAGE_DIR))[2]
image = skimage.io.imread('G:/LOGO_BD/Test/Singha176.jpeg')
# Run detection
results = model.detect([image], verbose=1)
# Visualize results
r = results[0]
print(''.join(str(r['class_ids'])))
visualize.display_instances(image, r['rois'], r['masks'], r['class_ids'],
class_names, r['scores'])
| 0.347205 | 0.71857 |
```
from pathlib import Path
import pickle
import numpy as np
import numpy.random as nr
import matplotlib
import matplotlib.pyplot as plt
import pandas as pd
import sklearn
from sklearn.linear_model import LogisticRegression
from sklearn import preprocessing
import sklearn.model_selection as ms
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import accuracy_score, f1_score
import xgboost as xgb
from xgboost import XGBClassifier
# Print out packages versions
print(f'pandas version is: {pd.__version__}')
print(f'numpy version is: {np.__version__}')
print(f'matplotlib version is: {matplotlib.__version__}')
print(f'sklearn version is: {sklearn.__version__}')
print(f'xgboost version is: {xgb.__version__}')
```
# Helper functions
```
def replace_nan_inf(df, value=None):
"""
Replace missing and infinity values.
Parameters
----------
df : pandas.DataFrame
Dataframe with values to be replaced.
value : int, float
Value to replace any missing or numpy.inf values. Defaults to numpy.nan
Returns
-------
pandas.DataFrame
        Dataframe with missing and infinity values replaced with `value`.
"""
if value is None:
value = np.nan
return df.replace(to_replace=[np.nan, np.inf, -np.inf],
value=value)
def shift_concat(df, periods=1, fill_value=None):
"""
Build dataframe of shifted index.
Parameters
----------
df : pandas.DataFrame
Dataframe with columns to be shifted.
periods : int
Number of periods to shift. Should be positive.
fill_value : object, optional
The scalar value to use for newly introduced missing values. Defaults
to numpy.nan.
Returns
-------
pandas.DataFrame
Shifted dataframes concatenated along columns axis.
Notes
-------
Based on Paulo Bestagini's augment_features_window from SEG 2016 ML
competition.
https://github.com/seg/2016-ml-contest/blob/master/ispl/facies_classification_try01.ipynb
Example
-------
Shift df by one period and concatenate.
>>> df = pd.DataFrame({'gr': [1.1, 2.1], 'den': [2.1, 2.2]})
>>> shift_concat(df)
gr_shifted_1 den_shifted_1 gr den gr_shifted_-1 den_shifted_-1
0 NaN NaN 1.1 2.1 2.1 2.2
1 1.1 2.1 2.1 2.2 NaN NaN
"""
if fill_value is None:
fill_value = np.nan
dfs = []
for period in range(periods, -1*periods - 1, -1):
if period == 0:
dfs.append(df)
continue
df_shifted = df.shift(period, fill_value=fill_value)
df_shifted.columns = [f'{col}_shifted_{str(period)}'
for col in df_shifted.columns]
dfs.append(df_shifted)
return pd.concat(dfs, axis=1)
def gradient(df, depth_col):
"""
Calculate the gradient for all features along the provided `depth_col`
column.
Parameters
----------
df : pandas.DataFrame
Dataframe with columns to be used in the gradient calculation.
depth_col : str
Dataframe column name to be used as depth reference.
Returns
-------
pandas.DataFrame
Gradient of `df` along `depth_col` column. The depth column is not in
the output dataframe.
Notes
-------
Based on Paulo Bestagini's augment_features_window from SEG 2016 ML
competition.
https://github.com/seg/2016-ml-contest/blob/master/ispl/facies_classification_try01.ipynb
Example
-------
Calculate gradient of columns along `md`.
>>> df = pd.DataFrame({'gr': [100.1, 100.2, 100.3],
'den': [2.1, 2.2, 2.3],
'md': [500, 500.5, 501]})
>>> gradient(df, 'md')
gr den
0 NaN NaN
1 0.2 0.2
2 0.2 0.2
"""
depth_diff = df[depth_col].diff()
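    # guard against division by zero: where consecutive rows share (nearly) the same
    # depth, substitute a small non-zero spacing before dividing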
denom_zeros = np.isclose(depth_diff, 0)
depth_diff[denom_zeros] = 0.001
df_diff = df.drop(depth_col, axis=1)
df_diff = df_diff.diff()
# Add suffix to column names
df_diff.columns = [f'{col}_gradient' for col in df_diff.columns]
return df_diff.divide(depth_diff, axis=0)
def shift_concat_gradient(df, depth_col, well_col, cat_cols, periods=1, fill_value=None):
"""
Augment features using `shif_concat` and `gradient`.
Parameters
----------
df : pandas.DataFrame
Dataframe with columns to be augmented.
depth_col : str
Dataframe column name to be used as depth reference.
well_col : str
Dataframe column name to be used as well reference.
cat_cols: list of str
Encoded column names. The gradient calculation is not applied to these
columns.
periods : int
Number of periods to shift. Should be positive.
fill_value : object, optional
The scalar value to use for newly introduced missing values. Defaults
to numpy.nan.
Returns
-------
pandas.DataFrame
Augmented dataframe.
Notes
-------
Based on Paulo Bestagini's augment_features_window from SEG 2016 ML
competition.
https://github.com/seg/2016-ml-contest/blob/master/ispl/facies_classification_try01.ipynb
Example
-------
Augment features of `df` by shifting and taking the gradient.
>>> df = pd.DataFrame({'gr': [100.1, 100.2, 100.3, 20.1, 20.2, 20.3],
'den': [2.1, 2.2, 2.3, 1.7, 1.8, 1.9],
'md': [500, 500.5, 501, 1000, 1000.05, 1001],
'well': [1, 1, 1, 2, 2, 2]})
>>> shift_concat_gradient(df, 'md', 'well', periods=1, fill_value=None)
gr_shifted_1 den_shifted_1 gr den ... well md gr_gradient den_gradient
0 NaN NaN 100.1 2.1 ... 1 500.00 NaN NaN
1 100.1 2.1 100.2 2.2 ... 1 500.50 0.200000 0.200000
2 100.2 2.2 100.3 2.3 ... 1 501.00 0.200000 0.200000
3 NaN NaN 20.1 1.7 ... 2 1000.00 NaN NaN
4 20.1 1.7 20.2 1.8 ... 2 1000.05 2.000000 2.000000
5 20.2 1.8 20.3 1.9 ... 2 1001.00 0.105263 0.105263
"""
# TODO 'Consider filling missing values created here with DataFrame.fillna'
# Columns to apply gradient operation
cat_cols.append(well_col)
gradient_cols = [col for col in df.columns if col not in cat_cols]
# Don't shift depth
depth = df.loc[:, depth_col]
grouped = df.groupby(well_col, sort=False)
df_aug_groups = []
for name, group in grouped:
shift_cols_df = group.drop([well_col, depth_col], axis=1)
group_shift = shift_concat(shift_cols_df,
periods=periods,
fill_value=fill_value)
# Add back the well name and depth
group_shift[well_col] = name
group_shift[depth_col] = depth
group_gradient = group.loc[:, gradient_cols]
group_gradient = gradient(group_gradient, depth_col)
group_aug = pd.concat([group_shift, group_gradient], axis=1)
df_aug_groups.append(group_aug)
return pd.concat(df_aug_groups)
def score(y_true, y_pred, scoring_matrix):
"""
Competition scoring function.
Parameters
----------
y_true : pandas.Series
Ground truth (correct) target values.
y_pred : pandas.Series
Estimated targets as returned by a classifier.
scoring_matrix : numpy.array
Competition scoring matrix.
Returns
----------
float
        2020 FORCE ML lithology competition custom score.
"""
S = 0.0
for true_val, pred_val in zip(y_true, y_pred):
S -= scoring_matrix[true_val, pred_val]
return S/y_true.shape[0]
def show_evaluation(y_true, y_pred):
"""
Print model performance and evaluation.
Parameters
----------
y_true : pandas.Series
Ground truth (correct) target values.
y_pred: pandas.Series
Estimated targets as returned by a classifier.
"""
    # score() requires the competition penalty matrix, loaded in the Import data section
    print(f'Competition score: {score(y_true, y_pred, penalty_matrix)}')
print(f'Accuracy: {accuracy_score(y_true, y_pred)}')
print(f'F1: {f1_score(y_true, y_pred, average="weighted")}')
def build_encoding_map(series):
"""
Build dictionary with the mapping of series unique values to encoded
values.
Parameters
----------
series : pandas.Series
Series with categories to be encoded.
Returns
-------
mapping : dict
Dictionary mapping unique categories in series to encoded values.
See Also
--------
label_encode_columns : Label encode a dataframe categorical columns.
"""
unique_values = series.unique()
mapping = {original: encoded
for encoded, original in enumerate(unique_values)
if original is not np.nan}
return mapping
def label_encode_columns(df, cat_cols, mappings):
"""
Label encode a dataframe categorical columns.
Parameters
----------
df : pandas.DataFrame
Dataframe with columns to be encoded.
cat_cols: list of str
Column names to be encoded.
mappings: dict of dict
Dictionary containing a key-value mapping for each column to be
encoded.
Returns
-------
df : pandas.DataFrame
Dataframe with the encoded columns added and the `cat_cols` removed.
encoded_col_names: list of str
Encoded column names.
See Also
--------
build_encoding_map : Build a series encoding mapping.
"""
df = df.copy()
encoded_col_names = []
for col in cat_cols:
new_col = f'{col}_encoded'
encoded_col_names.append(new_col)
df[new_col] = df[col].map(mappings[col])
df.drop(col, axis=1, inplace=True)
return df, encoded_col_names
```
# Target maps
```
KEYS_TO_ORDINAL = {
30000: 0,
65030: 1,
65000: 2,
80000: 3,
74000: 4,
70000: 5,
70032: 6,
88000: 7,
86000: 8,
99000: 9,
90000: 10,
93000: 11
}
KEYS_TO_LITHOLOGY = {30000: 'Sandstone',
65030: 'Sandstone/Shale',
65000: 'Shale',
80000: 'Marl',
74000: 'Dolomite',
70000: 'Limestone',
70032: 'Chalk',
88000: 'Halite',
86000: 'Anhydrite',
99000: 'Tuff',
90000: 'Coal',
93000: 'Basement'}
ORDINAL_TO_KEYS = {value: key for key, value in KEYS_TO_ORDINAL.items()}
ORDINAL_TO_LITHOLOGY = {}
for ordinal_key, key in ORDINAL_TO_KEYS.items():
ORDINAL_TO_LITHOLOGY[ordinal_key] = KEYS_TO_LITHOLOGY[key]
```
# Import data
First add a shortcut from the [google drive competition data location](https://drive.google.com/drive/folders/1GIkjq4fwgwbiqVQxYwoJnOJWVobZ91pL) to your own google drive. We will mount this drive, and access the data from it.
We will save the results to a different folder, where we have write access.
```
from google.colab import drive
drive.mount('/content/drive', force_remount=True)
#should be edited to the present working directory of the user
data_source = '/content/drive/My Drive/FORCE 2020 lithofacies prediction from well logs competition/'
penalty_matrix = np.load(data_source + 'penalty_matrix.npy')
train = pd.read_csv(data_source + 'CSV_train.csv', sep=';')
test = pd.read_csv(data_source + 'CSV_test.csv', sep=';')
# Destination folder
out_data_dir = Path('/content/drive/My Drive/lith_pred/')
```
# Train model
```
class Model():
'''
    Class for lithology prediction.
'''
def preprocess(self, df, cat_columns, mappings):
# # Drop model features
# drop_cols = [
# 'FORCE_2020_LITHOFACIES_CONFIDENCE',
# 'SGR',
# 'DTS',
# 'DCAL',
# 'RMIC',
# 'ROPA',
# 'RXO',
# ]
# # Confirm drop columns are in df
# drop_cols = [col for col in drop_cols if col in df.columns]
# df.drop(drop_cols, axis=1, inplace=True)
# Label encode
df, encoded_col_names = label_encode_columns(df, cat_columns, mappings)
# Augment using Bestagini's functions
df_preprocesed = shift_concat_gradient(df,
'DEPTH_MD',
'WELL',
encoded_col_names,
periods=1,
fill_value=None)
return df_preprocesed
def fit(self, X, y):
split = 5
skf = StratifiedKFold(n_splits=split, shuffle=True)
model = XGBClassifier(n_estimators=100, max_depth=10, booster='gbtree',
objective='multi:softprob', learning_rate=0.1, random_state=0,
subsample=0.9, colsample_bytree=0.9, tree_method='gpu_hist',
eval_metric='mlogloss', verbose=2020, reg_lambda=1500)
models = []
for fold_number, indices in enumerate(skf.split(X, y)):
print(f'Fitting fold: {fold_number}')
train_index, test_index = indices
X_train, X_test = X.iloc[train_index], X.iloc[test_index]
y_train, y_test = y.iloc[train_index], y.iloc[test_index]
model.fit(X_train,
y_train,
early_stopping_rounds=100,
eval_set=[(X_test, y_test)],
verbose=100)
models.append(model)
return models
def fit_predict(self, X_train, y_train, X_pred, pred_wells, save_filename):
# Fit
models = self.fit(X_train, y_train)
# Get lithologies probabilities for each model
models_proba = []
for model_num, model in enumerate(models):
model_proba = model.predict_proba(X_pred)
model_classes = [ORDINAL_TO_LITHOLOGY[lith] for lith in model.classes_]
model_proba_df = pd.DataFrame(model_proba, columns=model_classes)
model_proba_df['MODEL'] = model_num
# Set sample index, well, and MD
pred_wells_df = pred_wells.reset_index()
model_proba_df['index'] = pred_wells_df['index']
model_proba_df['WELL'] = pred_wells_df['WELL']
md = X_pred['DEPTH_MD']
md.reset_index(inplace=True, drop=True)
model_proba_df['DEPTH_MD'] = md
models_proba.append(model_proba_df)
models_proba = pd.concat(models_proba, ignore_index=True)
# Create save directory if it doesn't exists
if not save_filename.parent.is_dir():
save_filename.parent.mkdir(parents=True)
# Save models_proba to CSV
models_proba.to_csv(save_filename, index=False)
return models, models_proba
```
# Prepare train data
```
# Build group of groups
group_of_groups = {
'VIKING GP.': 'VTB GP.',
'BOKNFJORD GP.': 'VTB GP.',
'TYNE GP.': 'VTB GP.',
'ROTLIEGENDES GP.': 'PERMIAN GP.',
'ZECHSTEIN GP.': 'PERMIAN GP.',
}
train['GROUPED'] = train['GROUP']
train['GROUPED'].replace(group_of_groups, inplace=True)
train.drop('GROUP', axis=1, inplace=True)
train['FORCE_2020_LITHOFACIES_LITHOLOGY'] = train['FORCE_2020_LITHOFACIES_LITHOLOGY'].map(KEYS_TO_ORDINAL)
cat_columns = ['FORMATION']
train_mappings = {col: build_encoding_map(train[col]) for col in cat_columns}
# Drop columns with high percent of missing values
drop_cols = [
'FORCE_2020_LITHOFACIES_CONFIDENCE',
'SGR',
'DTS',
'DCAL',
'RMIC',
'ROPA',
'RXO',
]
# Confirm drop columns are in df
drop_cols = [col for col in drop_cols if col in train.columns]
train.drop(drop_cols, axis=1, inplace=True)
# Use different logs per group
limit = 0.68
keep_logs_per_group = {}
for group_data in train.groupby('GROUPED'):
group_name, group = group_data
group_log_coverage = (~group.isna()).sum() / group.shape[0]
cond_more_than_limit = group_log_coverage > limit
keep_logs = [log for log, val in cond_more_than_limit.items() if val]
if 'FORMATION' not in keep_logs:
keep_logs.append('FORMATION')
keep_logs_per_group[group_name] = keep_logs
```
# Prepare predict data
```
test['GROUPED'] = test['GROUP']
test['GROUPED'].replace(group_of_groups, inplace=True)
test.drop('GROUP', axis=1, inplace=True)
# Confirm drop columns are in df
test_drop_cols = [col for col in drop_cols if col in test.columns]
test.drop(test_drop_cols, axis=1, inplace=True)
test.columns
```
# Fit predict groups
```
for group_data in test.groupby('GROUPED'):
group_name, group = group_data
if group_name in train['GROUPED'].unique():
# Select train group features
keep_cols = keep_logs_per_group[group_name]
group_train = train.loc[train['GROUPED']==group_name, keep_cols]
group_train.drop(['GROUPED'], axis=1, inplace=True)
# Drop lithofacies with less than n_split samples
train_group_vc = group_train['FORCE_2020_LITHOFACIES_LITHOLOGY'].value_counts()
for lith, count in train_group_vc.items():
if count <= 5:
cond = group_train['FORCE_2020_LITHOFACIES_LITHOLOGY'] != lith
group_train = group_train.loc[cond, :]
model = Model()
# Define train target and features
y_train = group_train['FORCE_2020_LITHOFACIES_LITHOLOGY']
X = group_train.drop('FORCE_2020_LITHOFACIES_LITHOLOGY', axis=1)
# Augment train features
X_train = model.preprocess(X, cat_columns, train_mappings)
X_train.drop('WELL', axis=1, inplace=True)
# Define predict features
pred_keep_cols = [col for col in keep_cols if col != 'FORCE_2020_LITHOFACIES_LITHOLOGY']
X_pred = group.loc[:, pred_keep_cols]
X_pred.drop(['GROUPED'], axis=1, inplace=True)
# Augment predict features
X_pred = model.preprocess(X_pred, cat_columns, train_mappings)
X_pred.drop('WELL', axis=1, inplace=True)
fn = '_'.join(group_name.lower().split())
        save_filename = out_data_dir / f'model_proba/grouped/00/models_proba_grouped_{fn}.csv'
predict_group_wells = group['WELL']
print(f'Fitting group: {group_name}')
models, models_proba = model.fit_predict(X_train,
y_train,
X_pred,
predict_group_wells,
save_filename)
# print(X_pred.shape)
# print()
# print(predict_group_wells.shape)
# print(predict_group_wells.head())
# print(predict_group_wells.tail())
# print()
else:
        print(f'Group {group_name} is in the prediction set')
print('but not in the train set')
print('This functionality is currently not supported')
print()
# TODO: What happens when there is a group in the predict set but not in the train set?
```
# POC on using Sklearn pipeline with RandomizedSearchCV
```
import os
import sys
import pandas as pd
import numpy as np
import sklearn
from joblib import dump, load
import xgboost
from xgboost import XGBClassifier
import tempfile
from sklearn.model_selection import RandomizedSearchCV
from scipy.stats import randint as sp_randint
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.impute import SimpleImputer
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder
from sklearn.pipeline import Pipeline, make_pipeline
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
print("NumPy", np.__version__)
print("Pandas", pd.__version__)
print("Scikit-Learn", sklearn.__version__)
print("XGBoost", xgboost.__version__)
np.random.seed(432)
obs = 10000
population_df = (
# Make columns of noise
pd.DataFrame(np.random.rand(obs, 3), columns=[f"f{i}" for i in range(3)])
.assign(target_class=np.random.rand(obs, 1) > 0.5)
# Make columns that can help predict the target
.assign(f3=lambda p_df: p_df.target_class.apply(lambda tc: tc + np.random.standard_normal()))
.assign(c1=['category1', 'category2', 'category3', 'category4'] * int(obs/4))
.assign(c2=['cat_type_2', 'cat_type_2'] * int(obs/2))
)
print(population_df.head(2))
categorical_features = ['c1', 'c2']
numeric_features = ['f0', 'f1', 'f2', 'f3']
X_train, X_test, y_train, y_test = train_test_split(
population_df.drop(columns=['target_class']), population_df.target_class, test_size=0.1, random_state=43223)
class MultiplierTransformer(BaseEstimator, TransformerMixin):
    """Multiply the numeric cols by the mean of those columns. Just a test."""
    def __init__(self, numeric_cols):
        self.numeric_cols = numeric_cols
    def fit(self, X_df, y=None):
        # Learn the multiplier from the data passed in, not from the global frame
        self.multiple = X_df[self.numeric_cols].mean().mean()
        return self
    def transform(self, X_df):
        # Work on a copy so the caller's frame is not mutated (avoids SettingWithCopyWarning)
        X_df = X_df.copy()
        X_df[self.numeric_cols] = X_df[self.numeric_cols] * self.multiple
        return X_df
```
# RandomForest
```
# Setup the pipeline and the RandomizedSearch
pipeline_test_1 = make_pipeline(
MultiplierTransformer(numeric_features[2:]),
ColumnTransformer(
remainder='passthrough',
transformers=[
('impute', Pipeline(steps=[('input', SimpleImputer(strategy='mean'))]), numeric_features[:2]),
('cat', Pipeline(steps=[('onehot', OneHotEncoder(handle_unknown='ignore'))]), categorical_features),
]),
RandomForestClassifier(n_estimators=100, n_jobs=-1, random_state=4431)
)
param_dist = {
"randomforestclassifier__max_depth": sp_randint(1, 21),
"randomforestclassifier__max_features": sp_randint(1, X_train.shape[1] + 1),
"randomforestclassifier__min_samples_split": sp_randint(1, 1000),
"randomforestclassifier__bootstrap": [True, False]
}
pipeline_test_1_search_model = RandomizedSearchCV(
pipeline_test_1,
param_distributions=param_dist,
n_iter=3,
cv=3,
iid=False
)
%%time
%%capture
# Not sure where the SettingWithCopyWarning is coming from so use capture
out_file = tempfile.NamedTemporaryFile()
print("Fitting Model")
pipeline_test_1_search_model.fit(X_train, y_train)
print(f"Saving model to file {out_file.name}")
dump(pipeline_test_1_search_model, out_file.name)
print(f"Pulling model from file {out_file.name}")
pipeline_test_1_search_model = load(out_file.name)
print("Calculating Predictions")
predictions = pipeline_test_1_search_model.predict(X_test)
```
# XGBoost
```
# Setup the pipeline and the RandomizedSearch
pipeline_test_1 = make_pipeline(
MultiplierTransformer(numeric_features[2:]),
ColumnTransformer(
remainder='passthrough',
transformers=[
('impute', Pipeline(steps=[('input', SimpleImputer(strategy='mean'))]), numeric_features[:2]),
('cat', Pipeline(steps=[('onehot', OneHotEncoder(handle_unknown='ignore'))]), categorical_features),
]),
XGBClassifier(n_estimators=10, random_state=432, n_jobs=-1)
)
param_dist = {
'xgbclassifier__min_child_weight': sp_randint(1, 10),
'xgbclassifier__gamma': [0.5, 1, 1.5, 2, 5],
'xgbclassifier__subsample': [0.6, 0.8, 1.0],
'xgbclassifier__colsample_bytree': [0.6, 0.8, 1.0],
'xgbclassifier__max_depth': sp_randint(1, 21),
'xgbclassifier__num_feature': sp_randint(1, X_train.shape[1] + 1)
}
print("Configuring RandomSearch")
pipeline_test_1_search_model = RandomizedSearchCV(
pipeline_test_1,
param_distributions=param_dist,
n_iter=3,
cv=3,
iid=False
)
%%time
%%capture
# Not sure where the SettingWithCopyWarning is coming from so use capture
out_file = tempfile.NamedTemporaryFile()
print("Fitting Model")
pipeline_test_1_search_model.fit(X_train, y_train)
print(f"Saving model to file {out_file.name}")
dump(pipeline_test_1_search_model, out_file.name)
print(f"Pulling model from file {out_file.name}")
pipeline_test_1_search_model = load(out_file.name)
print("Calculating Predictions")
predictions = pipeline_test_1_search_model.predict(X_test)
```
<a href="https://colab.research.google.com/github/altaga/Pytorch-Driving-Guardian/blob/main/Hardware%20Code/Jetson%20Code/YoloV3/YoloV3.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Blind Spot (YoloV3) Monitor: Testing Module
In this section we download all the files needed to test the model directly from the repository.
```
# We create the temporary folders of the images and utilities.
!mkdir utils
!mkdir temp-img
# Model and utilities to carry out the conversion and application of the model.
!wget https://raw.githubusercontent.com/altaga/Pytorch-Driving-Guardian/main/Hardware%20Code/Jetson%20Code/YoloV3/utils/model.py
!wget https://raw.githubusercontent.com/altaga/Pytorch-Driving-Guardian/main/Hardware%20Code/Jetson%20Code/YoloV3/utils/datasets.py -O utils/datasets.py
!wget https://raw.githubusercontent.com/altaga/Pytorch-Driving-Guardian/main/Hardware%20Code/Jetson%20Code/YoloV3/utils/utils.py -O utils/utils.py
!wget https://raw.githubusercontent.com/altaga/Pytorch-Driving-Guardian/main/Hardware%20Code/Jetson%20Code/YoloV3/utils/parse_config.py -O utils/parse_config.py
!wget https://raw.githubusercontent.com/altaga/Pytorch-Driving-Guardian/main/Hardware%20Code/Jetson%20Code/YoloV3/utils/augmentations.py -O utils/augmentations.py
# YoloV3 Original Model Weights, configurations and labels
!wget -c https://pjreddie.com/media/files/yolov3.weights
!wget https://raw.githubusercontent.com/altaga/Pytorch-Driving-Guardian/main/Hardware%20Code/Jetson%20Code/YoloV3/data/coco.names
!wget https://raw.githubusercontent.com/altaga/Pytorch-Driving-Guardian/main/Hardware%20Code/Jetson%20Code/YoloV3/config/yolov3.cfg
# Test image.
!wget https://i.ibb.co/CmKjFdt/yolo.jpg -O temp-img/c1.png
```
Importing Libraries
```
import matplotlib.pyplot as plt
import torch
from torch.utils.data import DataLoader
from torch.autograd import Variable
from model import *
from utils.utils import *
from utils.datasets import *
import os
from PIL import Image
import cv2
import math
from random import randrange
%matplotlib inline
plt.rcParams['figure.dpi'] = 100
```
Pytorch Darknet Neural Network Architecture.
<img src="https://i.stack.imgur.com/js9wN.png">
Layers:
- Input Layer: takes an input of shape (416, 416, 3), i.e. a 416 px high by 416 px wide color image.
- Convolution Downsampling: these layers pool the image and start generating the image filters.
- Dense Connection: a layer of connected neurons, like any dense layer in a neural network.
- Spatial Pyramid Pooling: given a 2D input tensor, pyramid pooling divides the input into x stripes which extend through the height of the image with a width of roughly (input_width / x). Each stripe is then pooled with max or average pooling to calculate the output.
- https://github.com/revidee/pytorch-pyramid-pooling
- Object Detection: this layer finishes determining which objects are present in the image.
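If you want to sanity-check this structure yourself, one option is to build the network and print it. This is only a sketch; it assumes the `model.py` and `yolov3.cfg` files downloaded in the cell above are already in the working directory (the same `Darknet` call is used again below), and the variable name `net` is just for illustration.
```
# Optional sketch: inspect the architecture parsed from yolov3.cfg
from model import Darknet

net = Darknet("yolov3.cfg", img_size=416)
print(net)  # lists the torch modules built from the config file
print(f"Trainable parameters: {sum(p.numel() for p in net.parameters()):,}")
```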
Set variables to get the distance of objects from the camera
```
# Value 240 in person, only for testing, the real value for the streets is 70
objects=[240,220,120,50] #person:70, cars:220, motorcycle:120, dogs:50
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# Set up model and classes
model = Darknet("yolov3.cfg", img_size=416).to(device)
model.load_darknet_weights("yolov3.weights")
model.eval() # Set in evaluation mode
classes = load_classes("coco.names") # Extracts class labels from file
# Global variables
distance=100000 #Seed distance
distancemem=100000 #Seed memory distance
labelmem=""
labelmod=""
pos=""
imag=""
imgs = [] # Stores image paths
img_detections = [] # Stores detections for each image index
i = randrange(10)
```
Set up the data loader and run the model once.
Distance formula:

```
# Pytorch Data loader for the model
dataloader = DataLoader(
ImageFolder("temp-img", img_size=416),
batch_size=1,
shuffle=False,
num_workers=0,
)
Tensor = torch.cuda.FloatTensor if torch.cuda.is_available() else torch.FloatTensor
for batch_i, (img_paths, input_imgs) in enumerate(dataloader):
# Configure input
input_imgs = Variable(input_imgs.type(Tensor))
# Get detections
with torch.no_grad():
detections = model(input_imgs)
detections = non_max_suppression(detections, 0.8, 0.4)
imgs.extend(img_paths)
img_detections.extend(detections)
# Iterate through images and save plot of detections
for img_i, (path, detections) in enumerate(zip(imgs, img_detections)):
img = np.array(Image.open(path))
imag = cv2.imread(path)
(H, W) = imag.shape[:2]
# Draw bounding boxes and labels of detections
if detections is not None:
# Rescale boxes to original image
detections = rescale_boxes(detections, 416, img.shape[:2])
for x1, y1, x2, y2, conf, cls_conf, cls_pred in detections:
if(x1>5000 or y2>5000 or y1>5000 or x2>5000):
# False Detection Low-Pass Filter
break
add=" "
# Check if the object is Left or Right
if((W/2)<(x1+((x2-x1)/2)).item()):
pos="1"
add=add+"left "
else:
pos="0"
add=add+"right "
i=0
# Setup the label depend on detection.
if(classes[int(cls_pred)]=="motorbike"):
i=i+1
check=objects[2]
labelmem="m"+pos
elif(classes[int(cls_pred)]=="dog"):
i=i+2
check=objects[3]
labelmem="d"+pos
elif(classes[int(cls_pred)]=="person"):
i=i+3
check=objects[0]
labelmem="p"+pos
elif(classes[int(cls_pred)]=="car"):
i=i+4
check=objects[1]
labelmem="c"+pos
else:
i=i+5
check = 1000000
# Setup the label color
COLORS1 = int(254 * abs(math.sin(i)))
COLORS2 = int(254 * abs(math.sin(i+1)))
COLORS3 = int(254 * abs(math.sin(i+2)))
color= (COLORS1,COLORS2,COLORS3)
# Calculate Distance formula.
distance=(check*16)/(19*((x2.item()-x1.item())/W))
if(distancemem>distance):
if(300>distance):
distancemem=distance
labelmod = labelmem
add=add+"close "
# Create a Rectangle patch
cv2.rectangle(imag, (int(x1), int(y1)), (int(x2), int(y2)), color, 2)
cv2.putText(imag, classes[int(cls_pred)]+add,(int(x1), int(y1)-20), cv2.FONT_HERSHEY_SIMPLEX, 1, color, 1, cv2.LINE_AA)
cv2.imwrite("display.png",imag)
# Display the result
image = cv2.imread("display.png")
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
plt.imshow(image)
```
```
import random
import numpy as np
import pandas as pd
```
# Optimizing Stardew Valley Farming
## Background
When I am not busy with schoolwork, I like to play a game called Stardew Valley. The premise of the game is that your character inherits a farm from your late grandfather and uses the land to grow crops in order to earn money. While there are other features, such as fishing, slaying monsters, and making friends with the villagers, this analysis focuses on optimising the farming component. This is because crops are usually the most lucrative part of the game while also requiring the most foresight, since all crops take multiple in-game days to grow.
The crops available during the time period in game are as follows. All prices are in gold (the in-game currency)<sup>[1]</sup>:
| Crop | Seed Price (Pierre's) | Seed Price (JojaMart) | Grow Time | Sell Price |
|-------------|------------------|------------------|-----------|------------|
| Blue Jazz | 30 | 37 | 7 | 50 |
| Cauliflower | 80 | 100 | 12 | 175 |
| Green Bean | 60 | 75 | 7 (3) | 40 |
| Kale | 70 | 87 | 6 | 110 |
| Parsnip | 20 | 25 | 4 | 35 |
| Potato | 50 | 62 | 6 | 100 |
| Strawberry | 100 | - | 8 (4) | 120 |
| Tulip | 20 | 25 | 6 | 30 |
| Rice | 40 | - | 6 | 30 |
Some crops (like Parsnips) are cheap to buy seeds for and grow quickly, but sell for less than more expensive crops, like Cauliflower. Green Beans and Strawberries can regrow, meaning that these plants keep producing throughout the season (at the rate in parentheses) after the initial growing time; for example, a Green Bean takes 7 days to mature and then yields again every 3 days.
In terms of seed cost, Pierre's is always the cheapest place to buy seeds, but his shop is closed on Wednesdays. So the player must either forego buying seeds that day or buy them at a steeper price at the (implied Walmart equivalent) JojaMart. To complicate seed buying further, all stores are closed on two of the in-game days for festivals, and Strawberry seeds can only be bought during the Egg Festival (Day 13) and at no other time.
## Aim
The goal is to optimise total gold earned by the end of the in-game month of Year 1 Spring (i.e. the beginning of the game). We are constraining the problem to Spring since this is the first month in the game, and optimising this time period could make future playthroughs somewhat easier.
The main constraints are as follows:
* The player starts with 500 gold on Day 1, and has until Day 28 to grow crops.
* The player also starts with 5 Parsnip seeds, which we assume they will start growing on Day 1.
* The gold can be exchanged for seeds of varying costs, usually depending on how much the resulting crop will sell for.
* Crop growing times increment at the end of each day, meaning that a Parsnip that is planted on Day 1 and takes 4 days to grow will be ready to harvest on Day 5, not Day 4.
* Seeds whose grow times exceed the time available left in the season cannot be bought or grown. (This is possible in game, but results in "wasting" the seeds, and thus is suboptimal.)
* It is assumed the player will spend all gold earned from growing crops to buy more seeds for the remaining viable crops at the soonest possible opportunity.
## Model
To represent the maximization of profit by Day 28, the function can roughly be described as follows, where gold represents the variable we wish to maximize:
$$
\text{Gold at time t} = (\text{Produce sold by t}) - (\text{Seeds bought by t})
$$
The relationship between buying a crop and selling it after $k$ days can be represented in the following way, where $s_{Crop}(t)$ and $b_{Crop}(t)$ represent the number of crops of a specific type sold or bought at time $t$:
$$
s_{Crop}(t+k) = -b_{Crop}(t)
$$
The number of seeds that can be bought at any time depends on the gold available on that day, which in turn depends on the cash flow between days. Therefore, the algorithm operates in the following manner (a simplified sketch follows this list):
* The algorithm starts at $t=1$ with $g(0)=500$, where $t$ and $g(t)$ correspond to the time in (in-game) days and the gold at time $t$.
* If on day $t$ there is gold to spend and there are seeds that can still mature before Day 28, those seeds are bought (usually at Pierre's) and planted. If the current day is 3, 10, or 17, seeds are bought from JojaMart at a higher price instead (since Pierre's is closed on those days). If the day is 13 or 24, the buy process is skipped entirely, as these are "Festival Days" in game where items cannot be bought or sold.
* If any crops mature by time $t$, they are sold and the gold becomes available the same day or (on days 3, 10, 13, 17, and 24) the following day. The latter constraint exists because the shopkeeper is not available on those days.
* After all the daily processes take place, the day increases by 1 and the loop repeats until Day 29 is reached (i.e. the end of the month).
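As a rough illustration of this loop, here is a deliberately stripped-down, Parsnip-only sketch. It is not the real model (the full implementation below handles every crop, regrowables, JojaMart prices, and delayed sales), and the function name is only for illustration, but it shows the buy/sell/age cycle plus the festival-day and end-of-season checks in runnable form.
```
# Simplified, Parsnip-only sketch of the daily loop described above.
# Parsnip numbers come from the table: 20g seed, 4 days to grow, 35g sale price.
def simulate_parsnips_only(gold=500, last_day=28):
    plots = [["Parsnip", 5, 0]]               # [crop, amount, age]; the 5 starting seeds
    for day in range(1, last_day + 1):
        if day not in (13, 24):               # festival days: nothing bought or sold
            # Sell anything that has reached its 4-day grow time
            for plot in list(plots):
                if plot[2] >= 4:
                    gold += 35 * plot[1]
                    plots.remove(plot)
            # Spend all remaining gold on new Parsnips if they can still mature in time
            if last_day - day >= 4:
                n = gold // 20
                if n:
                    gold -= 20 * n
                    plots.append(["Parsnip", n, 0])
        for plot in plots:                    # every plant ages by one day
            plot[2] += 1
    return gold

print(simulate_parsnips_only())
```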
## Optimisation Methods
The (theoretically) "best" solution would iterate through all combinations of seeds that could be bought on each day. However, this is not practical because every future day depends on the buying decisions of the current day. Combine this with the fact that several kinds of seeds can be bought on almost every day, and it becomes challenging to optimise these decisions over time.
### Method 1: Profit per Day Model
The first approach is one where, at every opportunity, the algorithm spends as much gold as possible on the crops that yield the highest profit per day of growing. The opportunity cost of buying seeds on one day is that there is less gold available to buy seeds on all subsequent days (at least until the crop grows and is sold). Regrowables, which renew their crops after harvest, must therefore have their profit per day recalculated depending on what day it is.
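As a quick sanity check of this metric, the values below are computed from the Pierre's prices in the table above (single-harvest crops only; the small `crops` dictionary is just for this check):
```
# Profit per day = (sell price - seed price) / days to grow, from the table above
crops = {  # crop: (seed price, sell price, days to grow)
    'Parsnip':     (20,  35,  4),
    'Potato':      (50, 100,  6),
    'Kale':        (70, 110,  6),
    'Cauliflower': (80, 175, 12),
}
for name, (seed, sell, days) in crops.items():
    print(f'{name:<12} {(sell - seed) / days:5.2f} gold/day')
# Potato comes out on top of these (~8.33 gold/day), ahead of Cauliflower (~7.92)
```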
### Method 2: Profit per Day with Randomization
This approach is identical to Method 1 except that, at a randomized rate, the algorithm sometimes holds on to its gold or buys a crop other than the highest "profit per day" one. This is randomized because it is difficult to foresee the cases where saving gold or buying "suboptimal" crops provides a better result than always following the greedy rule.
### Methods 3 and 4: Quickest Turnaround (without and with Randomization)
Since time is a constraint, these methods are similar to Methods 1 and 2 except that the seeds prioritized are the ones with the shortest grow times. While profit per day may be the best way to generate gold in the long run, time is a limiting factor for our problem.
### Method 5: Highest "Interest"
This method prioritizes buying seeds based on the highest ROI adjusted for time. Specifically, the formula for this "interest" is as follows (for each crop):
$$
\text{Interest} = \left(\frac{\text{Sell Price}}{\text{Seed Price}}\right)^{1/\text{Days to Harvest}}
$$
This is modified for the Green Bean and Strawberry crops in the following way, depending on the number of times these crops regenerate more crops after the initial harvest:
$$
\text{Interest} = \left(\frac{\text{Sell Price}}{\text{Seed Price}}\right)^{1/\text{Days to Harvest}} + \left(\frac{\text{Sell Price}}{\text{Seed Price}}\right)^{1/(\text{Days to Harvest}+\text{Days to Regrow})} + \dots
$$
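For the single-harvest crops this works out as follows, using the table values above and the same formula the code below applies when building the `Interest` column (the inline dictionary is just for this check):
```
# "Interest" = (sell price / seed price) ** (1 / days to grow), from the table above
for name, (seed, sell, days) in {
    'Parsnip':     (20,  35,  4),
    'Potato':      (50, 100,  6),
    'Kale':        (70, 110,  6),
    'Cauliflower': (80, 175, 12),
}.items():
    print(f'{name:<12} {(sell / seed) ** (1 / days):.3f}')
# Parsnip (~1.150) now edges out Potato (~1.122), even though Potato wins on profit per day
```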
### Method 6: Mix and Match
This method switches between the above (non-random) methods when calculating the best seeds to buy during the "season". There is likely a trade-off in mid-term profitability between at least two of the above methods, so switching strategies at some point in the algorithm is likely a good choice.
```
# Import all csv data
seed_list = pd.read_csv('sd_opt_initial.csv')
joja_seed_list = pd.read_csv('sd_opt_joja.csv')
regrow = pd.read_csv('sd_opt_regrowth.csv')
# Remove crops not available in Year 1
seed_list = seed_list[seed_list['Year 2?'] == 0]
joja_seed_list = joja_seed_list[joja_seed_list['Year 2?'] == 0]
# Get lists of crops
crop_list = joja_seed_list[joja_seed_list['Seed Price'] < 1000]['Crop'].values.tolist()
regrow_list = regrow[joja_seed_list['Seed Price'] < 1000]['Crop'].values.tolist()
# Adjust Strawberry prices
egg_festival = seed_list.loc[8]
seed_list.loc[8,'Seed Price'] = 999
joja_seed_list.loc[8,'Seed Price'] = 999
egg_festival.loc['Profit per Day'] = 11.67
# Add profit per day to Joja data
joja_seed_list['Profit per Day'] = (joja_seed_list['Sell Price']-joja_seed_list['Seed Price'])/joja_seed_list['Days to Grow']
# Generate "interest" for crops
joja_seed_list['Interest'] = (joja_seed_list['Sell Price']/joja_seed_list['Seed Price']) ** (1/joja_seed_list['Days to Grow'])
seed_list['Interest'] = (seed_list['Sell Price']/seed_list['Seed Price']) ** (1/seed_list['Days to Grow'])
# Generate net profit for crops
joja_seed_list['Net Profit'] = joja_seed_list['Sell Price']-joja_seed_list['Seed Price']
seed_list['Net Profit'] = seed_list['Sell Price']-seed_list['Seed Price']
# Display seed data (high seed prices are used in place of hard limits against selecting those crops)
seed_list
# Initalize starting parameters
def init():
# Starting gold
gold = 500
# First parsnips
farm_plots = [['Parsnip', 5, 0]]
# First day
day = 1
# Initialize dictionary of crop shopping list per day
buy_order = {}
return gold, day, farm_plots, buy_order
# Recalculates profit per day for green beans for optimising buying choices
def green_beans(day):
if 29-day < 7:
return 0
else:
i = day + 7
amt = 40
while i < 28:
amt += 40
i += 3
return amt
# Recalculates "interest" for green beans based on the day
def green_beans_int(day, seed_list):
if 29-day < 7:
return 0
else:
i = day + 7
amt = (seed_list.loc[4,'Sell Price']/seed_list.loc[4,'Seed Price'])**(1/(i-day))
i += 3
while i < 28:
amt += (seed_list.loc[4,'Sell Price']/seed_list.loc[4,'Seed Price'])**(1/(i-day))
i += 3
return amt
# Sells crops to increase gold if harvest day has arrived
def sell_crops(gold, day, farm_plots):
# Iterate through all "plots" on farm
i = 0
while i < len(farm_plots):
crop_name = farm_plots[i][0]
crop_amt = farm_plots[i][1]
crop_day = farm_plots[i][2]
# If crop is of harvest age, either harvest and remove or regrow if the crop is regrowable
if crop_day >= int(seed_list[seed_list['Crop'] == crop_name]['Days to Grow']):
if crop_name in regrow_list:
farm_plots[i] = [crop_name,crop_amt,0,'r']
gold += int(seed_list[seed_list['Crop'] == crop_name]['Sell Price'] * crop_amt)
i += 1
else:
gold += int(seed_list[seed_list['Crop'] == crop_name]['Sell Price'] * crop_amt)
del farm_plots[i]
# Else if the crop is a regrowable, is tagged with 'r', and of regrow harvest age, harvest and reset age
elif crop_name in regrow_list:
if crop_day >= int(regrow[regrow['Crop'] == crop_name]['Days to Grow']) and len(farm_plots[i]) > 3:
farm_plots[i] = [crop_name,crop_amt,0,'r']
gold += int(seed_list[seed_list['Crop'] == crop_name]['Sell Price'] * crop_amt)
i += 1
# But if the crop still needs to grow, skip to next item
else:
i += 1
return gold, day, farm_plots
# Chooses crops available based on day and available gold
def cycle_buyable_crops(gold,day,limit=28):
buyables = pd.Series(data=None, dtype=object)
if day not in [3,10,13,17,24]:
buyables = seed_list[(seed_list['Seed Price'] <= gold)]
buyables = buyables[buyables['Days to Grow'] <= limit-day]
elif day not in [13, 24]:
buyables = joja_seed_list[(joja_seed_list['Seed Price'] <= gold)]
buyables = buyables[buyables['Days to Grow'] <= limit-day]
elif day == 13 and gold >= 100:
buyables = egg_festival
return buyables
# Add a given crop to the buy order dictionary for that day (or increment amount if already present)
def append_to_record(buy_order, crop_name, crop_amt,day):
if buy_order.get(day):
if buy_order[day].get(crop_name):
buy_order[day][crop_name] += crop_amt
else:
buy_order[day][crop_name] = crop_amt
else:
buy_order[day] = {crop_name: crop_amt}
return buy_order
# Cycle through buyable crop list to buy crop, usually of highest profit per day
def buy_crops(gold, day, farm_plots, buy_order,diversion_rate=0.05,selection='ppd',limit=28):
# Initalize buyable crop list
buyables = cycle_buyable_crops(gold,day,limit)
# Initalize size of farm plot list (to break while loop if no purchase can be made)
initial_length = len(farm_plots)
# Buy either the best profit per day crop or select randomly based on diversion_rate
# Note: if it's the 13th day, no other choice exists except for the Strawberry crop
while len(buyables) > 0:
if random.uniform(0,1) > diversion_rate:
if day == 13:
crop_name,crop_amt,crop_day = buyables['Crop'], 1, 0
gold -= buyables['Seed Price']
farm_plots.append([crop_name,crop_amt,crop_day])
buy_order = append_to_record(buy_order, crop_name, crop_amt, day)
buyables = cycle_buyable_crops(gold,day,limit)
elif random.uniform(0,1) > diversion_rate:
# Choose method of choosing crop at 1-diversion_rate
if selection == 'ppd':
buy_index = int(buyables[['Profit per Day']].idxmax())
elif selection == 'turnover':
buy_index = int(buyables[['Days to Grow']].idxmin())
elif selection == 'interest':
buy_index = int(buyables[['Interest']].idxmax())
elif selection == 'tp':
buy_index = int(buyables[['Net Profit']].idxmax())
else:
buy_index = int(buyables[['Profit per Day']].idxmax())
crop_name,crop_amt,crop_day = buyables.loc[buy_index]['Crop'], 1, 0
gold -= buyables.loc[buy_index]['Seed Price']
farm_plots.append([crop_name,crop_amt,crop_day])
buy_order = append_to_record(buy_order, crop_name, crop_amt, day)
buyables = cycle_buyable_crops(gold,day,limit)
else:
# Choose crop randomly from available choices
buy_index = random.choice(buyables.index.tolist())
crop_name,crop_amt,crop_day = buyables.loc[buy_index]['Crop'], 1, 0
gold -= buyables.loc[buy_index]['Seed Price']
farm_plots.append([crop_name,crop_amt,crop_day])
buy_order = append_to_record(buy_order, crop_name, crop_amt, day)
buyables = cycle_buyable_crops(gold,day,limit)
else:
break
return gold, day, farm_plots, buy_order
# Cycles through buy/sell processes and updates green bean prices; returns results with day incremented
def tick_over(gold,day,farm_plots,buy_order,diversion_rate=0.05,selection='ppd',limit=28):
# Green bean profitability mod
g = green_beans(day)
joja_seed_list.loc[4,'Profit per Day'] = (g-joja_seed_list.loc[4,'Seed Price']) / (limit+1-day)
seed_list.loc[4,'Profit per Day'] = (g-seed_list.loc[4,'Seed Price']) / (limit+1-day)
#Interest
joja_seed_list.loc[4,'Interest'] = green_beans_int(day,joja_seed_list)
seed_list.loc[4,'Interest'] = green_beans_int(day,seed_list)
# Selling and buying processes
gold,day,farm_plots = sell_crops(gold,day,farm_plots)
gold,day,farm_plots,buy_order = buy_crops(gold,day,farm_plots,buy_order,diversion_rate,selection,limit)
# Increase age of all plants
for i in range(len(farm_plots)):
farm_plots[i][2] += 1
return gold, day+1, farm_plots,buy_order
# Combine all above functions to iterate over days and return all main parameters
def generate_run(days=28,diversion_rate=0.05,selection='ppd',starting_params=()):
if not starting_params:
gold, day, farm_plots, buy_order = init()
starting_params = (gold, day, farm_plots, buy_order)
else:
gold, day, farm_plots, buy_order = starting_params
for i in range(days-starting_params[1]):
gold, day, farm_plots, buy_order = tick_over(gold, day, farm_plots, buy_order,diversion_rate,selection,limit=28)
return gold, day+1, farm_plots, buy_order
```
## Results
### Method 1
Using Method 1, the purchase order for crop seeds is as follows:
* Day 1: 6 Green Beans, 2 Parsnips
* Day 5: 5 Green Beans, 1 Parsnip
* Day 9: 2 Parsnips
* Day 11: 3 Green Beans
* Day 14: 5 Potatoes
* Day 15: 3 Potatoes, 1 Parsnip
* Day 17: 4 Potatoes
* Day 18: 1 Potato, 2 Parsnips
* Day 19: 1 Parsnip
* Day 20: 10 Potatoes, 1 Parsnip
* Day 21: 7 Potatoes
* Day 22: 1 Potato, 1 Parsnip
* Day 23: 27 Parsnips
By the end of Day 28, this results in the player having 3537 gold.
```
baseline_run = generate_run(days=28,diversion_rate=0)
print(baseline_run[0],baseline_run[3])
```
### Method 2
Modifying Method 1 to include some randomness and running the algorithm en masse provides somewhat better results. However, this improvement comes at the cost of a much higher runtime. The absolute best path found (using a diversion rate of 0.1) is the following:
* Day 1: 6 Green Beans, 2 Parsnips
* Day 5: 3 Green Beans
* Day 12: 4 Potatoes
* Day 14: 4 Potatoes
* Day 15: 2 Potatoes
* Day 17: 3 Potatoes, 2 Parsnips
* Day 18: 8 Potatoes, 1 Parsnip
* Day 20: 10 Potatoes
* Day 21: 6 Potatoes, 1 Parsnip
* Day 22: 2 Parsnips
* Day 23: 21 Parsnips
This order of buying seeds yields 3736 gold by the end of Day 28 (an improvement of 199 gold over the non-randomized algorithm).
```
div5 = max([generate_run(diversion_rate=0.05) for i in range(200)], key=lambda x: x[0])
div10 = max([generate_run(diversion_rate=0.1) for i in range(200)], key=lambda x: x[0])
print('Max Profit (Diversion rate 0.05)')
print('Gold: {}'.format(div5[0]))
print('Purchase Order: {}'.format(div5[-1]))
print('Max Profit (Diversion rate 0.10)')
print('Gold: {}'.format(div10[0]))
print('Purchase Order: {}'.format(div10[-1]))
```
### Method 3
The general buying function is the same, except that instead of prioritizing profit per day, the function now looks for the crop with the lowest time to harvest (which means the algorithm will always buy Parsnips, except on the 13th).
```
baseline_run = generate_run(days=28,diversion_rate=0,selection='turnover')
print(baseline_run[0],baseline_run[3])
```
The result is fairly boring for Method 3. Essentially, it amounts to buying Parsnips (or Strawberries) at every possible opportunity, and the player ends up with 4915 gold by the end of the month.
### Method 4
This method adds some randomness to the Method 3 buying decisions. However, since the point of these last two methods is to optimize crop output (in the hope that this yields greater profits in the short term), the buy function has been modified so that the algorithm keeps spending gold on growable seeds until not enough gold remains to buy more.
The best buy order below, generated at a diversion rate of 0.05, yields 4420 gold and replaces some of the Parsnip crops with the occasional other crop. However, it does not outperform Method 3.
```
div5 = max([generate_run(diversion_rate=0.05,selection='turnover') for i in range(200)], key=lambda x: x[0])
div10 = max([generate_run(diversion_rate=0.1,selection='turnover') for i in range(200)], key=lambda x: x[0])
print('Max Profit (Diversion rate 0.05)')
print('Gold: {}'.format(div5[0]))
print('Purchase Order: {}'.format(div5[-1]))
print('Max Profit (Diversion rate 0.10)')
print('Gold: {}'.format(div10[0]))
print('Purchase Order: {}'.format(div10[-1]))
```
### Method 5
Choosing crops based on delayed "interest" greatly underperforms the other methods. This approach prefers buying green beans early, but the profit apparently does not compound quickly enough within the 28-day timescale to outperform the other methods. The optimal route for this method yields only 2505 gold, nearly half that of the previous method.
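To make the "interest" metric concrete, here is a quick illustration with made-up prices (these numbers are purely hypothetical, not taken from the game data): a crop bought for 20 and sold for 35 after 4 days compounds at roughly 15% per day, while one bought for 100 and sold for 160 after 10 days compounds at only about 5% per day, even though its flat net profit is larger.
```
# Illustrative only: hypothetical seed/sell prices, not values from the game data
fast_interest = (35 / 20) ** (1 / 4)     # ~1.15, i.e. roughly 15% per day
slow_interest = (160 / 100) ** (1 / 10)  # ~1.05, i.e. roughly 5% per day
print(round(fast_interest, 3), round(slow_interest, 3))
```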
```
baseline_run = generate_run(days=28,diversion_rate=0,selection='interest')
print(baseline_run[0],baseline_run[3])
```
### Method 6
Combining selection methods does not improve on the results from Method 3. The code below only "changes" methods once (though the best result never changes methods at all). The same loop was also run with three "changes" in selection; that run was done in a separate notebook, so its results were not saved here, but the outcome was the same buy order produced by the algorithm that prioritizes crop turnover (i.e. all Parsnips).
```
# Try all permutations
np.seterr(divide='ignore')
runs = []
selections = ['ppd','turnover','interest','tp']
for i in range(5,28):
for j in selections:
for k in selections:
r1 = generate_run(days=i, diversion_rate=0, selection=j)
r2 = generate_run(days=28, diversion_rate=0, selection=k, starting_params=r1)
runs.append(r2 + (j,k,i))
best = max(runs, key=lambda x: x[0])
print(best[0],best[3],best[4:])
# This did not produce a different buy order compared to the above
np.seterr(divide='ignore')
runs = []
selections = ['ppd','turnover','interest','tp']
for i in range(5,28):
for l in range(i,28):
for j in selections:
for k in selections:
for m in selections:
r1 = generate_run(days=i, diversion_rate=0, selection=j)
r2 = generate_run(days=l, diversion_rate=0, selection=k, starting_params=r1)
r3 = generate_run(days=28, diversion_rate=0, selection=m, starting_params=r2)
runs.append(r3 + (j,k,i))
best = max(runs, key=lambda x: x[0])
print(best[0],best[3],best[4:])
```
## Conclusion
The results from our algorithm suggest that crop turnover is the most important factor in maximizing the gold received by the end of the season. The overall profitability of crops appears to depend most on how quickly they can be sold after planting, rather than on their net profit. As can be seen below, even when the season is inflated to be longer than it actually is in game, crops with high turnover consistently outperform all other selection methods, while flat net profit is the worst valuation method; these two methods yield 70,225 and 19,282 gold respectively.
```
np.seterr(divide='ignore')
# Generate runs over 50 days instead of 28
def generate_run(days=28,diversion_rate=0.05,selection='ppd',starting_params=()):
if not starting_params:
gold, day, farm_plots, buy_order = init()
starting_params = (gold, day, farm_plots, buy_order)
else:
gold, day, farm_plots, buy_order = starting_params
for i in range(days-starting_params[1]):
gold, day, farm_plots, buy_order = tick_over(gold, day, farm_plots, buy_order,diversion_rate,selection,limit=50)
return gold, day+1, farm_plots, buy_order
selections = ['ppd','turnover','interest','tp']
x = [(generate_run(days=50, diversion_rate=0, selection=m), m) for m in selections]
best_long = max(x, key=lambda x: x[0][0])
print(best_long[0][0],best_long[0][-1],best_long[1])
# Print gold output for each selection strategy
for i in x:
print(i[0][0],i[1])
```
## Model Critique
The model is useful in that it illustrates how powerful crops with low growing times are if the goal is to grow the most profitable crops. While it is hard not to make money farming in the game, it is difficult to work out how to optimise seed buying decisions to maximize profitability (at least after planting enough of the other crops to earn certain in-game rewards). Most guides for Stardew Valley list "Profit per Day" when describing the crops of each season. However, the results of the model clearly show that the opportunity cost of planting slow-growing crops, compared to ones that can be harvested quickly, is significant and should be taken into account by anyone who wants to optimise their playthrough. (Such brutal optimisation is admittedly somewhat antithetical to the ethos the game attempts to portray, but that is beside the point.)
The model might be improved further if it could evaluate the tradeoff between spending gold immediately and saving it on certain days to buy Strawberry seeds on day 13. This cannot be accounted for in the algorithm as it currently exists, since it assumes that the player will always spend gold at the earliest opportunity. The *diversion_rate* component of the algorithm might also be improved if the random choice were limited to the next-best candidates under the selection criterion rather than to all available seeds.
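As a minimal sketch of that last idea (the column mapping and helper below are only illustrative and mirror, rather than replace, the existing `buy_crops` logic), the random diversion could be restricted to the top two candidates under the chosen criterion instead of the full buyable list:
```
# Hypothetical sketch: divert only among the k best candidates for the chosen criterion
CRITERION_COLS = {'ppd': 'Profit per Day', 'turnover': 'Days to Grow',
                  'interest': 'Interest', 'tp': 'Net Profit'}

def pick_diverted_index(buyables, selection='ppd', k=2):
    col = CRITERION_COLS.get(selection, 'Profit per Day')
    ascending = (selection == 'turnover')  # a shorter growing time is better
    top_k = buyables.sort_values(col, ascending=ascending).head(k)
    return random.choice(top_k.index.tolist())
```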
The model also does not account for the following game mechanics, which would influence seed buying decisions over time:
* **Fatigue**: The player must use energy to plow fields to plant seeds, then use more energy to water the seeds and crops during the growing period. Realistically, fatigue can therefore limit the player's ability to keep crops growing over time. This would most affect crops like Parsnips, which the current model suggests the player should buy in high numbers. Conversely, crops like Cauliflower, which are more expensive to buy but pay out more per seed, may be favored more by a model that factors in the upper limits of fatigue. Furthermore, energy use for plowing and watering decreases as the player harvests more crops, which would also influence such a model. (A rough sketch of a plot cap along these lines appears after this list.)
* **Other Income and Expenses**: The player can also earn gold by other means, such as fishing, mining, and foraging. The model accounts neither for the income from these activities nor for the player spending gold on items, whether related to these activities or otherwise. The model could be extended either to allow gold to be deducted from the farm budget (for example, to buy certain items) or to account for income from other sources that could then be used to buy more seeds.
* **Crop Quality**: Crop sell prices actually vary considerably depending on the "quality" of the crop, indicated by silver, gold, and iridium (a fictional rare element) stars when it is above base quality. The chance of harvesting a higher-quality crop depends on whether certain items are applied to the plowed land before planting and on the farmer's "farming level". Since the current model assumes no item use and treats all harvested crops as regular quality, its results understate the average gold earned per crop and are better read as a lower bound on what is possible.
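As a rough sketch of how the fatigue point above could be approximated (the `max_plots` value is an arbitrary assumption, not a number taken from the game), the buying step could simply be skipped once a cap on simultaneously growing plots is reached:
```
# Hypothetical sketch: skip buying once an assumed plot cap (standing in for energy limits) is reached
def total_plots(farm_plots):
    return sum(plot[1] for plot in farm_plots)

def buy_crops_capped(gold, day, farm_plots, buy_order, max_plots=60, **kwargs):
    if total_plots(farm_plots) >= max_plots:
        return gold, day, farm_plots, buy_order
    return buy_crops(gold, day, farm_plots, buy_order, **kwargs)
```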
# Works Cited
[1] Concerned Ape, Seattle, WA, USA, *Stardew Valley*, 2016 [Steam Version]
```
# default_exp catalog
```
# Data Catalog
> API details.
```
#hide
from nbdev.showdoc import *
import sys
#export
from kedro.config import ConfigLoader
from kedro.io import DataCatalog
from fastcore.meta import delegates
#export
from functools import wraps
from contextlib import contextmanager
@contextmanager
def cd(newdir):
prevdir = os.getcwd()
os.chdir(os.path.expanduser(newdir))
try:
yield
finally:
os.chdir(prevdir)
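# Decorator factory: run the wrapped function with the working directory temporarily set to new_dir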
def change_cwd_dir(new_dir):
def decorator(func):
@wraps(func)
def wrapped_func(*args, **kwargs):
with cd(new_dir):
func_result = func(*args, **kwargs)
return func_result
return wrapped_func
return decorator
#export
from kedro.config import ConfigLoader
from kedro.io import DataCatalog
import os
from pathlib import Path
package_outer_folder = Path(__file__).parents[1]
catalog_folder = Path(__file__).parents[0]
@change_cwd_dir(new_dir = catalog_folder)
def get_config(env="base", patterns=['catalog*', 'catalog*/*/','catalog*/*/*']):
# Initialise a ConfigLoader
conf_loader = ConfigLoader(f"conf/{env}")
yaml_patterns = [f"{pattern}.yaml" for pattern in patterns]
yml_patterns = [f"{pattern}.yml" for pattern in patterns]
all_patterns = yml_patterns + yaml_patterns
# Load the data catalog configuration from catalog.yml
conf= conf_loader.get(*all_patterns)
return conf
@change_cwd_dir(new_dir = catalog_folder)
def get_catalog(env="base", patterns=['catalog*', 'catalog*/*/','catalog*/*/*']):
conf_catalog = get_config(env=env, patterns = patterns)
# Create the DataCatalog instance from the configuration
catalog = DataCatalog.from_config(conf_catalog)
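# Wrap load() so that relative dataset paths in the catalog resolve against this package's catalog folder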
catalog.load = change_cwd_dir(catalog_folder)(catalog.load)
catalog.env = env
catalog.patterns = patterns
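# Bind reload() as a method so catalog.reload() rebuilds the catalog from its stored env and patterns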
catalog.reload = reload.__get__(catalog)
return catalog
def reload_catalog(catalog):
return get_catalog(catalog.env, catalog.patterns)
def reload(self):
return reload_catalog(self)
test_patterns = ["test_catalog*/*", "test_catalog"]
conf_test_data_catalog = get_config(patterns = test_patterns)
test_data_catalog = get_catalog(patterns = test_patterns)
#test_parameters = get_config("base", ["parameters*.yml","parameters*.yaml", "parameters*/*.yml", "parameters*/*.yaml"])
#export
test_patterns = ["test_catalog*/*", "test_catalog*"]
conf_test_data_catalog = get_config(patterns = test_patterns)
test_data_catalog = get_catalog(patterns = test_patterns)
#test_data_catalog_cluster = get_catalog(patterns = test_patterns, env = "cluster")
```
The following datasets are available to test this package
```
test_data_catalog.list()
```
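As a usage sketch, an individual dataset can then be loaded by name with the standard `DataCatalog.load` call (the dataset name below is hypothetical; substitute one of the names returned by `test_data_catalog.list()`):
```
# Hypothetical dataset name for illustration; use a name returned by test_data_catalog.list()
df = test_data_catalog.load("example_dataset")
df.head()
```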
```
# This Python 3 environment comes with many helpful analytics libraries installed
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
sns.set(style="darkgrid")
# helper function for printing unique values in columns
def print_unique(df, col):
print(f"The {col} column contains:")
u_vals = df[col].unique()
for i, val in enumerate(u_vals):
print(f"{i+1}. {val}")
```
## About the Dataset
I found this dataset [here](https://www.kaggle.com/benroshan/factors-affecting-campus-placement) on Kaggle.
Campus Recruitment is an obstacle that almost all Engineering students face at some point in their lives. (Except of course for those special snowflakes that decide to pursue higher studies instead). As a final year Computer Science Student, I was instantly drawn to this dataset in hopes of not only understanding the general trend in the industry but also of reassuring myself that I was not a lost cause. Although this dataset is from an MBA college, I think it can still be used to extract valuable information about how one's academic choices can impact their placements.
A quick glance at the dataset description on Kaggle tells us that it has the following columns:
1. `sl_no` : Serial Number
2. `gender` : Gender- Male='M',Female='F'
3. `ssc_p` : Secondary Education percentage- 10th Grade
4. `ssc_b` : Board of Education- Central/ Others
5. `hsc_p` : Higher Secondary Education percentage- 12th Grade
6. `hsc_b` : Board of Education- Central/ Others
7. `hsc_s` : Specialization in Higher Secondary Education
8. `degree_p` : Degree Percentage
9. `degree_t` : Under Graduation(Degree type)- Field of degree education
10. `workex` : Work Experience
11. `etest_p` : Employability test percentage ( conducted by college)
12. `specialisation` : Post Graduation(MBA)- Specialization
13. `mba_p` : MBA percentage
14. `status` : Status of placement- Placed/Not placed
15. `salary` : Salary offered by corporate to candidates
Let us quickly view the first few rows and extract some preliminary information about the dataset.
```
# Loading in the dataset and viewing first few rows
df = pd.read_csv("Placement_Data_Full_Class.csv")
df.head()
df.describe()
```
### Observations:
It seems that we have some big outliers in the salary column. The mean is about 2.9 lakhs with a standard deviation of about 90,000, but there is at least one salary over 9 lakhs, which is more than 6 standard deviations away from the mean!
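As a quick, illustrative check (the three-standard-deviation threshold here is an arbitrary choice), we can flag those extreme salaries directly:
```
# Flag salaries more than 3 standard deviations above the mean (threshold chosen arbitrarily)
salary_cutoff = df["salary"].mean() + 3 * df["salary"].std()
df[df["salary"] > salary_cutoff][["gender", "degree_t", "specialisation", "salary"]]
```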
It also looks like we have some missing values in the "salary" column. These are probably the students who were not placed, but let us check to make sure that the students who were placed all have their salary listed.
```
# Check for missing salaries
df.query("status == 'Placed'")["salary"].isnull().any()
```
Looks like all the placed students have their salaries listed, so we could try to predict salary down the line. However, we would only have 148 samples to work with in that case. Let's go ahead with some analysis for now and cross that bridge when we get to it.
# Business Understanding
The first step of the CRISP-DM process is to develop a **Business Understanding**. This means asking relevant questions that we want the data to answer. After examining the data, I am particularly interested in `status` (whether or not the student was placed) and `salary` (how much the student is paid by the recruiting company). The data on hand makes it possible to ask questions such as:
* Does the board of education influence placements?
* Is there a gender bias when it comes to salaries offered to new recruits?
* Do companies prefer candidates with prior work experience?
* Which combination of degree/specialization has the highest salary?
# Data Understanding
Let us break down these questions and others and try to answer them by analysing and visualizing our data.
```
# Checking for correlations among columns
plt.subplots(figsize=(10,8))
sns.heatmap(df.corr(), annot=True);
```
From the above heatmap, there do not seem to be any significant correlations present in the data.
We do, however, see some weak correlation between the `hsc_p`, `ssc_p`, and `degree_p` columns. This is to be expected, as a student who performed well in their 10th is likely to continue to work hard and perform well in their subsequent academic endeavours. The inverse may also be true.
We also see some slight correlation between salary and the `mba_p` and `etest_p` columns. It makes intuitive sense that, since the recruitment is for MBA graduates, their MBA scores and Employability Test (E-Test) scores might correlate with their chances of getting placed or with their salary, but the data on hand does not provide strong enough evidence to confirm or deny this.
## Question 1
### Does the board of education affect placements?
Reasoning: I find this question particularly interesting because the general trend I have observed in new parents these days is an increasing preference to enrol their kids in the Central board rather than the state board. This dataset allows us to investigate the following two questions:
a. Are kids from the Central board (CBSE) more likely to get placed than those who are not?
b. Are these kids more likely to get a higher salary?
```
# Check the number of unique values
print_unique(df, "ssc_b")
print_unique(df, "hsc_b")
# Create filter for SSC Central board
ssc_central = df.ssc_b == "Central"
# Create filter for HSC Central board
hsc_central = df.hsc_b == "Central"
both = ssc_central & hsc_central
neither = ~ssc_central & ~hsc_central
df.groupby(["ssc_b", "hsc_b"]).status.value_counts().reset_index(name="count")
# Visualization of board vs placement rate
grouped_data = df.groupby(["ssc_b", "hsc_b"]).status.value_counts(normalize=True).unstack(2)
grouped_data.plot.bar(title = "Placements by Board of Education", rot = 45).set_xlabel("(SSC, HSC)");
```
From the above plot, it might seem that those who switched between Central and Others between their 10th and 12th grades were less likely to get placed than those who stayed with Central or Others consistently. However, looking at the number of samples in each of these groupings, we see that (Others, Central) and (Central, Others) have significantly fewer samples than the other two. For our analysis, we will focus on (Central, Central) and (Others, Others) and see whether one has any noticeable advantage over the other.
### Exploring Board vs Salary
From the above graph, the choice of board does not seem to particularly increase one's chances of getting placed, but maybe it has a role to play in determining one's salary? Let's check!
```
# Plotting Board vs Salary
salary_by_board = df[both | neither].groupby(["ssc_b", "hsc_b"]).salary.mean().sort_values()
salary_by_board.plot.bar(rot=0);
```
These two groups seem to have about the same average salary. If the board did indeed matter, we would expect one group to have a significantly higher salary than the other. In terms of placements themselves, the (Others, Others) group does seem to have a slightly higher placement rate, but this could be attributed to the difference in sample counts.
Now that we have checked potential reasons for parents to choose Central over Others, let us analyse things from a student's perspective. The general opinion is that the Central board can be much harder than Others (i.e. the state boards). Does this mean that students in one group have to work harder to get placed than in the other? Let us check.
```
# Create a filter for placed students
placed = df["status"] == "Placed"
# Plotting distribution of scores for placed students in Central and Others
fig, axes = plt.subplots(nrows = 1, ncols =2, figsize = (15,5))
axes[0].hist(df[both & placed].ssc_p, alpha = 0.5, label = "Central", color = "red");
axes[0].hist(df[neither & placed].ssc_p, alpha = 0.5, label = "Others", color = "green");
axes[0].set_xlabel("10th Grade")
axes[0].set_ylabel("Number of Students")
axes[1].hist(df[both & placed].hsc_p, alpha = 0.5, label = "Central", color="red");
axes[1].hist(df[neither & placed].hsc_p, alpha = 0.5, label = "Others", color="green");
axes[1].set_xlabel("12th Grade")
axes[1].set_ylabel("Number of Students")
handles, labels = axes[0].get_legend_handles_labels()
fig.legend(handles, labels, loc='upper right')
fig.suptitle("Scores Obtained by Placed Students", fontsize=16)
plt.show()
```
It seems that, among the placed students, those from the Central board scored lower in the 10th grade than their `Others` counterparts. This could be an indication that examinations in the Central board tend to be more difficult, resulting in a lower average score. However, the opposite seems to be true for the 12th grade. Maybe the more rigorous process these students faced in their secondary schooling better prepared them for their 12th?
You know what they say: "*Some questions are better left ~~unanswered~~ for a different dataset.*"
## Question 2
### Does it really matter how much you score in your school days?
Growing up with Indian parents, I was always told throughout my school days that I should work hard and do well in my exams so that one day I would be able to *reap its benefits*. This led me to always try to score as highly as I could on my examinations. This dataset makes me want to ask, "Does it really matter how much you score?" or, to ask a more appropriate question, "Is there evidence that scoring higher in your school days helps you get placed?"
```
# 2D KDE plots of 12th vs 10th grade scores for placed vs non-placed students
f, ax = plt.subplots(figsize=(8, 8))
ax.set_aspect("equal")
sns.kdeplot(df[placed].hsc_p, df[placed].ssc_p, cmap="Reds",shade_lowest=False)
sns.kdeplot(df[~placed].hsc_p, df[~placed].ssc_p, cmap ="Blues", shade_lowest=False)
plt.show();
```
Interesting! There does seem to be a visible trend here. Students who got placed (red) seem, on average, to have scored better in both their 10th and 12th than their non-placed counterparts (blue). It could be that the scores were indeed helpful in impressing potential employers, but it is also possible that the students who worked hard in their school days developed the right mindset to keep working and built other skills that increased their employability.
## Question 3
### Is one stream inherently better than the other?
When we complete our 10th grade education, we are met with arguably the most important crossroads of our lives. What next: Science, Commerce or Arts?
Unfortunately, during my 10th grade I was faced with a number of health issues and had to rely on my parents and relatives to choose my path for me. They insisted that I take up science, citing reasons such as "there's a lot of scope for science" and "you'll get a job easily".
Looking back on my journey so far, I do not regret having taken up science, because Computer Science is eventually where I found my calling. But I am curious to see whether students in one stream are more likely to get a job than another. Is my relatives' line of reasoning correct?
```
# Plotting placement rate by Stream
df.groupby("hsc_s").status.value_counts(normalize=True).unstack(1).plot.bar(figsize=(7,7), title = "Placement rate by HSC Stream", rot=0);
```
Arts does seem to be lagging behind in placements, but then again it has significantly fewer samples than Commerce and Science. For our analysis, we will focus only on Science and Commerce, which, from the looks of it, have near-identical placement rates.
Maybe one stream is more likely to be offered a higher salary?
```
df.groupby("hsc_s").salary.mean().plot.bar(rot=0);
```
```
# Create filter for science
science = df["hsc_s"] == "Science"
# Create a filter for commerce
commerce = df["hsc_s"] == "Commerce"
# Plot Salary distribution by Stream
plt.subplots(figsize=(8,8))
sns.boxplot(x="salary", y="hsc_s", data =df[science | commerce]);
```
Looks like both streams have about the same spread of salaries. There are some outliers in both cases, and Science does seem to have an ever-so-slightly higher third quartile, but not enough to be significant.
***Conclusion***: *My relatives are lunatics.*
## Question 4
### Which degree and MBA specialization has the highest Salary?
We finally come to what I think might be the most relevant information to an employer. A student's MBA specialization is closely tied to the kind of skills that a company would be looking for and consequently, the salary being offered. The next closest qualification of importance would be the student's undergraduate degree. Let us look at how these two features affect the salaries being offered.
```
# Plotting Salary vs specialization and degree
plt.subplots(figsize=(8,8))
sns.violinplot(x="degree_t", y="salary", hue="specialisation", data=df, scale="count", palette="Set3");
```
It seems that the highest salaries offered went to students who pursued the Marketing & Finance specialisation after obtaining a UG degree in Commerce and Management. However, these are clearly major outliers. In general, students with a Science and Technology UG degree pursuing the Mkt&Fin specialisation were more likely to get higher-paying jobs.
## Question 5
### Does gender bias exist in campus recruitment?
The last question that I would like to answer is an important one. Do companies seem to discriminate between men and women when it comes to placements and salaries?
```
# Plotting salaries vs gender by specialization
plt.subplots(figsize=(8,8))
sns.violinplot(y = "salary", x="specialisation", hue = "gender", palette="Set2", data=df, scale="count");
```
There does seem to be a general trend in this dataset of men being paid more than women. But we can also see that the dataset contains fewer samples of female students than of male students. Regardless, I think this is slightly concerning, and the dataset creator might want to take it up with his college!
# Data Preparation
This dataset has very few samples, and even fewer if we eliminate the rows without a specified salary. So instead of predicting salary I would like to predict whether a student will be placed or not. I start by creating a feature matrix `X` containing all the features from the dataset except for `salary` and `status`. I also drop the serial number, as it adds no useful information and could potentially lead to overfitting.
I then one-hot encoded the categorical variables and dropped the original columns. Since most of the categorical columns contained only 2 or 3 unique values, the final dataset was not bloated too much and had 14 columns for the model to work with.
However, when I tried to train the model, I ran into an error that prevented the Linear SVM from converging. [This](https://stackoverflow.com/questions/52670012/convergencewarning-liblinear-failed-to-converge-increase-the-number-of-iterati) post on Stack Overflow explained that one way to solve it is to use a StandardScaler to normalize the data in order to speed up convergence, and doing so helped me fit the model correctly.
```
from sklearn.preprocessing import StandardScaler
# Data Preparation
X = df.drop(["salary", "status", "sl_no"], axis = 1)
X = pd.get_dummies(X, drop_first=True)
scaler = StandardScaler()
cols = X.columns
X = pd.DataFrame(scaler.fit_transform(X))
X.columns = cols
y = df["status"]
# Checking the final dataset
X.head(5)
```
# Modelling & Evaluation
Based on the cheat sheet provided [here](https://scikit-learn.org/stable/tutorial/machine_learning_map/index.html), I will try a Linear SVC.
Since the number of samples in the dataset is quite small, using a traditional train-test split means we would be losing out on data, which is already a precious commodity, and we would not be able to trust the final score as much, since it would be biased by whichever split we happened to choose.
I therefore decided to use KFold cross-validation so that the final score would be a better representation of real-world performance.
```
from sklearn.svm import LinearSVC
from sklearn.model_selection import KFold
scores = []
kfold = KFold(n_splits=5, random_state = 42, shuffle=True)
for train_index, test_index in kfold.split(X):
model = LinearSVC()
X_train, X_test = X.iloc[train_index], X.iloc[test_index]
y_train, y_test = y[train_index], y[test_index]
model.fit(X_train, y_train);
scores.append(model.score(X_test, y_test))
print(f"CV score is: {sum(scores)/len(scores)}")
```
Not bad! We were able to achieve a cross-validation score of about 0.865.
Considering that we only had about 200 samples to work with, this score seems pretty decent. Even with such a small dataset, the Linear SVM model was able to find patterns within it and can predict whether a student will be placed, based on the data provided!
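For reference, scikit-learn's `cross_val_score` helper gives an equivalent, more concise way to compute the same cross-validated estimate; this is only an alternative to the manual loop above, not a change in method.
```
from sklearn.model_selection import KFold, cross_val_score
from sklearn.svm import LinearSVC

# Same folds as above, driven by the same random_state
cv = KFold(n_splits=5, random_state=42, shuffle=True)
cv_scores = cross_val_score(LinearSVC(), X, y, cv=cv)
print(f"CV score is: {cv_scores.mean():.3f}")
```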
# Conclusion
## Quick Recap
In this notebook, we explored a dataset containing campus recruitment data for students from an MBA college. Through our analysis, we found that:
1. The board of school education does not appear to matter when it comes to placements or even salaries.
2. Placed students seem to have performed better in their 10th and 12th exams than non-placed students.
3. Among the various specializations, students with a Sci&Tech UG degree and Mkt&Fin MBA specialization appear to have slightly higher salaries.
4. The salaries for female students seem to be generally slightly lower than those for male students.
We then fitted a linear SVM to predict whether or not a student would be placed and achieved reasonably good results, with a cross-validation accuracy of about **86.5**% after some data preprocessing.
## What does this mean for you?
We must first understand that the results and observations made in this notebook need to be taken with a grain of salt. After all, this dataset is not representative of the full spectrum of students that one comes across during a campus recruitment program. The dataset itself is quite limited, and the number of samples varies between groups, making it difficult to report the observed trends with confidence.
Instead, I urge you to simply follow your heart and make your own choices. Regardless of what a dataset might indicate as being the *best stream* or *best specialization*, ultimately, the *best job* is the one that makes you happy!
Good luck!
# Title: Windows Host Explorer
**Notebook Version:** 1.0<br>
**Python Version:** Python 3.6 (including Python 3.6 - AzureML)<br>
**Required Packages**: kqlmagic, msticpy, pandas, numpy, matplotlib, networkx, ipywidgets, ipython, scikit_learn, dnspython, ipwhois, folium, maxminddb_geolite2, holoviews<br>
**Platforms Supported**:
- Azure Notebooks Free Compute
- Azure Notebooks DSVM
- OS Independent
**Data Sources Required**:
- Log Analytics - SecurityAlert, SecurityEvent (EventIDs 4688 and 4624/25), AzureNetworkAnalytics_CL, Heartbeat
- (Optional) - VirusTotal (with API key)
## Description:
Brings together a series of queries and visualizations to help you determine the security state of the Windows host or virtual machine that you are investigating.
<a id='toc'></a>
# Table of Contents
- [Setup and Authenticate](#setup)
- [Get Host Name](#get_hostname)
- [Related Alerts](#related_alerts)
- [Host Logons](#host_logons)
- [Failed Logons](#failed_logons)
- [Session Processes](#examine_win_logon_sess)
- [Check for IOCs in Commandline](#cmdlineiocs)
- [VirusTotal lookup](#virustotallookup)
- [Network Data](#comms_to_other_hosts)
- [Appendices](#appendices)
- [Saving data to Excel](#appendices)
<a id='setup'></a>[Contents](#toc)
# Setup
Make sure that you have installed packages specified in the setup (uncomment the lines to execute)
## Install Packages
The first time this cell runs for a new Azure Notebooks project or local Python environment it will take several minutes to download and install the packages. In subsequent runs it should run quickly and confirm that package dependencies are already installed. Unless you want to upgrade the packages you can feel free to skip execution of the next cell.
If you see any import failures (```ImportError```) in the notebook, please re-run this cell and answer 'y', then re-run the cell where the failure occurred.
Note: you may see some warnings about incompatibility between certain packages. This does not affect the functionality of this notebook, but you may need to upgrade the packages producing the warnings to more recent versions.
```
import sys
import warnings
warnings.filterwarnings("ignore",category=DeprecationWarning)
MIN_REQ_PYTHON = (3,6)
if sys.version_info < MIN_REQ_PYTHON:
print('Check the Kernel->Change Kernel menu and ensure that Python 3.6')
print('or later is selected as the active kernel.')
sys.exit("Python %s.%s or later is required.\n" % MIN_REQ_PYTHON)
# Package Installs - try to avoid if they are already installed
try:
import msticpy.sectools as sectools
import Kqlmagic
from dns import reversename, resolver
from ipwhois import IPWhois
import folium
print('If you answer "n" this cell will exit with an error in order to avoid the pip install calls,')
print('This error can safely be ignored.')
resp = input('msticpy and Kqlmagic packages are already loaded. Do you want to re-install? (y/n)')
if resp.strip().lower() != 'y':
sys.exit('pip install aborted - you may skip this error and continue.')
else:
print('After installation has completed, restart the current kernel and run '
'the notebook again skipping this cell.')
except ImportError:
pass
print('\nPlease wait. Installing required packages. This may take a few minutes...')
!pip install git+https://github.com/microsoft/msticpy --upgrade --user
!pip install Kqlmagic --no-cache-dir --upgrade --user
!pip install holoviews
!pip install dnspython --upgrade
!pip install ipwhois --upgrade
!pip install folium --upgrade
# Uncomment to refresh the maxminddb database
# !pip install maxminddb-geolite2 --upgrade
print('To ensure that the latest versions of the installed libraries '
'are used, please restart the current kernel and run '
'the notebook again skipping this cell.')
# Imports
import sys
import warnings
MIN_REQ_PYTHON = (3,6)
if sys.version_info < MIN_REQ_PYTHON:
print('Check the Kernel->Change Kernel menu and ensure that Python 3.6')
print('or later is selected as the active kernel.')
sys.exit("Python %s.%s or later is required.\n" % MIN_REQ_PYTHON)
import numpy as np
from IPython import get_ipython
from IPython.display import display, HTML, Markdown
import ipywidgets as widgets
import matplotlib.pyplot as plt
import seaborn as sns
sns.set()
import networkx as nx
import pandas as pd
pd.set_option('display.max_rows', 100)
pd.set_option('display.max_columns', 50)
pd.set_option('display.max_colwidth', 100)
import msticpy.sectools as sectools
import msticpy.nbtools as mas
import msticpy.nbtools.kql as qry
import msticpy.nbtools.nbdisplay as nbdisp
WIDGET_DEFAULTS = {'layout': widgets.Layout(width='95%'),
'style': {'description_width': 'initial'}}
# Some of our dependencies (networkx) still use deprecated Matplotlib
# APIs - we can't do anything about it so suppress them from view
from matplotlib import MatplotlibDeprecationWarning
warnings.simplefilter("ignore", category=MatplotlibDeprecationWarning)
display(HTML(mas.util._TOGGLE_CODE_PREPARE_STR))
HTML('''
<script type="text/javascript">
IPython.notebook.kernel.execute("nb_query_string='".concat(window.location.search).concat("'"));
</script>
''');
```
### Get WorkspaceId
To find your Workspace Id go to [Log Analytics](https://ms.portal.azure.com/#blade/HubsExtension/Resources/resourceType/Microsoft.OperationalInsights%2Fworkspaces). Look at the workspace properties to find the ID.
```
import os
from msticpy.nbtools.wsconfig import WorkspaceConfig
ws_config_file = 'config.json'
try:
ws_config = WorkspaceConfig(ws_config_file)
display(Markdown(f'Read Workspace configuration from local config.json for workspace **{ws_config["workspace_name"]}**'))
for cf_item in ['tenant_id', 'subscription_id', 'resource_group', 'workspace_id', 'workspace_name']:
display(Markdown(f'**{cf_item.upper()}**: {ws_config[cf_item]}'))
WORKSPACE_ID = ws_config['workspace_id']
except:
WORKSPACE_ID = None
display(Markdown('**Workspace configuration not found.**\n\n'
'Please go to your Log Analytics workspace, copy the workspace ID and paste here. '
'Or read the workspace_id from the config.json in your Azure Notebooks project.'))
ws_config = None
ws_id = mas.GetEnvironmentKey(env_var='WORKSPACE_ID',
prompt='Please enter your Log Analytics Workspace Id:')
ws_id.display()
```
### Authenticate to Log Analytics
If you are using user/device authentication, run the following cell.
- Click the 'Copy code to clipboard and authenticate' button.
- This will pop up an Azure Active Directory authentication dialog (in a new tab or browser window). The device code will have been copied to the clipboard.
- Select the text box and paste (Ctrl-V/Cmd-V) the copied value.
- You should then be redirected to a user authentication page where you should authenticate with a user account that has permission to query your Log Analytics workspace.
Use the following syntax if you are authenticating using an Azure Active Directory AppId and Secret:
```
%kql loganalytics://tenant(aad_tenant).workspace(WORKSPACE_ID).clientid(client_id).clientsecret(client_secret)
```
instead of
```
%kql loganalytics://code().workspace(WORKSPACE_ID)
```
Note: you may occasionally see a JavaScript error displayed at the end of the authentication - you can safely ignore this.<br>
On successful authentication you should see a ```popup schema``` button.
```
if not WORKSPACE_ID:
try:
WORKSPACE_ID = ws_id.value
except NameError:
raise ValueError('No workspace Id.')
mas.kql.load_kql_magic()
%kql loganalytics://code().workspace(WORKSPACE_ID)
%kql search * | summarize RowCount=count() by Type | project-rename Table=Type
la_table_set = _kql_raw_result_.to_dataframe()
table_index = la_table_set.set_index('Table')['RowCount'].to_dict()
display(Markdown('Current data in workspace'))
display(la_table_set.T)
```
<a id='get_hostname'></a>[Contents](#toc)
# Enter the host name and query time window
```
host_text = widgets.Text(description='Enter the Host name to search for:', **WIDGET_DEFAULTS)
display(host_text)
query_times = mas.QueryTime(units='day', max_before=20, before=5, max_after=1)
query_times.display()
from msticpy.nbtools.entityschema import GeoLocation
from msticpy.sectools.geoip import GeoLiteLookup
iplocation = GeoLiteLookup()
# Get single event - try process creation
if 'SecurityEvent' not in table_index:
raise ValueError('No Windows event log data available in the workspace')
start = f'\'{query_times.start}\''
hostname = host_text.value
find_host_event_query = r'''
SecurityEvent
| where TimeGenerated >= datetime({start})
| where TimeGenerated <= datetime({end})
| where Computer has '{hostname}'
| top 1 by TimeGenerated desc nulls last
'''.format(start=query_times.start,
end=query_times.end,
hostname=hostname)
print('Checking for event data...')
# Get heartbeat event if available
%kql -query find_host_event_query
if _kql_raw_result_.completion_query_info['StatusCode'] == 0:
host_event_df = _kql_raw_result_.to_dataframe()
host_event = None
host_entity = None
if host_event_df.shape[0] > 0:
host_entity = mas.Host(src_event=host_event_df.iloc[0])
if not host_entity:
    raise LookupError(f'Could not find Windows events for host name {hostname}')
# Try to get an OMS Heartbeat for this computer
if 'Heartbeat' in table_index:
heartbeat_query = '''
Heartbeat
| where Computer == \'{computer}\'
| where TimeGenerated >= datetime({start})
| where TimeGenerated <= datetime({end})
| top 1 by TimeGenerated desc nulls last
'''.format(start=query_times.start,
end=query_times.end,
computer=host_entity.computer)
print('Getting heartbeat data...')
%kql -query heartbeat_query
if _kql_raw_result_.completion_query_info['StatusCode'] == 0:
host_hb = _kql_raw_result_.to_dataframe().iloc[0]
host_entity.SourceComputerId = host_hb['SourceComputerId']
host_entity.OSType = host_hb['OSType']
host_entity.OSMajorVersion = host_hb['OSMajorVersion']
host_entity.OSMinorVersion = host_hb['OSMinorVersion']
host_entity.ComputerEnvironment = host_hb['ComputerEnvironment']
host_entity.OmsSolutions = [sol.strip() for sol in host_hb['Solutions'].split(',')]
host_entity.VMUUID = host_hb['VMUUID']
ip_entity = mas.IpAddress()
ip_entity.Address = host_hb['ComputerIP']
geoloc_entity = GeoLocation()
geoloc_entity.CountryName = host_hb['RemoteIPCountry']
geoloc_entity.Longitude = host_hb['RemoteIPLongitude']
geoloc_entity.Latitude = host_hb['RemoteIPLatitude']
ip_entity.Location = geoloc_entity
host_entity.IPAddress = ip_entity # TODO change to graph edge
if 'AzureNetworkAnalytics_CL' in table_index:
print('Looking for IP addresses in network flows...')
aznet_query = '''
AzureNetworkAnalytics_CL
| where TimeGenerated >= datetime({start})
| where TimeGenerated <= datetime({end})
| where VirtualMachine_s has \'{host}\'
| where ResourceType == 'NetworkInterface'
| top 1 by TimeGenerated desc
| project PrivateIPAddresses = PrivateIPAddresses_s,
PublicIPAddresses = PublicIPAddresses_s
'''.format(start=query_times.start,
end=query_times.end,
host=host_entity.HostName)
%kql -query aznet_query
az_net_df = _kql_raw_result_.to_dataframe()
def convert_to_ip_entities(ip_str):
ip_entities = []
if ip_str:
if ',' in ip_str:
addrs = ip_str.split(',')
elif ' ' in ip_str:
addrs = ip_str.split(' ')
else:
addrs = [ip_str]
for addr in addrs:
ip_entity = mas.IpAddress()
ip_entity.Address = addr.strip()
iplocation.lookup_ip(ip_entity=ip_entity)
ip_entities.append(ip_entity)
return ip_entities
    # Add this information to our host_entity
retrieved_address=[]
if len(az_net_df) == 1:
priv_addr_str = az_net_df['PrivateIPAddresses'].loc[0]
host_entity.properties['private_ips'] = convert_to_ip_entities(priv_addr_str)
pub_addr_str = az_net_df['PublicIPAddresses'].loc[0]
host_entity.properties['public_ips'] = convert_to_ip_entities(pub_addr_str)
retrieved_address = [ip.Address for ip in host_entity.properties['public_ips']]
else:
if 'private_ips' not in host_entity.properties:
host_entity.properties['private_ips'] = []
if 'public_ips' not in host_entity.properties:
host_entity.properties['public_ips'] = []
print(host_entity)
```
<a id='related_alerts'></a>[Contents](#toc)
# Related Alerts
```
# set the origin time to the time of our alert
query_times = mas.QueryTime(units='day',
max_before=28, max_after=1, before=5)
query_times.display()
related_alerts_query = r'''
SecurityAlert
| where TimeGenerated >= datetime({start})
| where TimeGenerated <= datetime({end})
| extend StartTimeUtc = TimeGenerated
| extend AlertDisplayName = DisplayName
| extend Computer = '{host}'
| extend simple_hostname = tostring(split(Computer, '.')[0])
| where Entities has Computer or Entities has simple_hostname
or ExtendedProperties has Computer
or ExtendedProperties has simple_hostname
'''.format(start=query_times.start,
end=query_times.end,
host=host_entity.HostName)
%kql -query related_alerts_query
related_alerts = _kql_raw_result_.to_dataframe()
if related_alerts is not None and not related_alerts.empty:
host_alert_items = (related_alerts[['AlertName', 'TimeGenerated']]\
.groupby('AlertName').TimeGenerated.agg('count').to_dict())
# acct_alert_items = related_alerts\
# .query('acct_match == @True')[['AlertType', 'StartTimeUtc']]\
# .groupby('AlertType').StartTimeUtc.agg('count').to_dict()
# proc_alert_items = related_alerts\
# .query('proc_match == @True')[['AlertType', 'StartTimeUtc']]\
# .groupby('AlertType').StartTimeUtc.agg('count').to_dict()
def print_related_alerts(alertDict, entityType, entityName):
if len(alertDict) > 0:
display(Markdown('### Found {} different alert types related to this {} (\'{}\')'.format(len(alertDict), entityType, entityName)))
for (k,v) in alertDict.items():
print('- {}, Count of alerts: {}'.format(k, v))
else:
print('No alerts for {} entity \'{}\''.format(entityType, entityName))
print_related_alerts(host_alert_items, 'host', host_entity.HostName)
nbdisp.display_timeline(data=related_alerts, title="Alerts", source_columns=['AlertName'], height=200)
else:
display(Markdown('No related alerts found.'))
```
## Browse List of Related Alerts
Select an Alert to view details
```
def disp_full_alert(alert):
global related_alert
related_alert = mas.SecurityAlert(alert)
nbdisp.display_alert(related_alert, show_entities=True)
if related_alerts is not None and not related_alerts.empty:
related_alerts['CompromisedEntity'] = related_alerts['Computer']
display(Markdown('### Click on alert to view details.'))
rel_alert_select = mas.AlertSelector(alerts=related_alerts,
# columns=['TimeGenerated', 'AlertName', 'CompromisedEntity', 'SystemAlertId'],
action=disp_full_alert)
rel_alert_select.display()
```
<a id='host_logons'></a>[Contents](#toc)
# Host Logons
```
from msticpy.nbtools.query_defns import DataFamily, DataEnvironment
params_dict = {}
params_dict['host_filter_eq'] = f'Computer has \'{host_entity.HostName}\''
params_dict['host_filter_neq'] = f'Computer !has \'{host_entity.HostName}\''
params_dict['host_name'] = host_entity.HostName
params_dict['subscription_filter'] = 'true'
if host_entity.OSFamily == 'Linux':
params_dict['data_family'] = DataFamily.LinuxSecurity
params_dict['path_separator'] = '/'
else:
params_dict['data_family'] = DataFamily.WindowsSecurity
params_dict['path_separator'] = '\\'
# set the origin time to the time of our alert
logon_query_times = mas.QueryTime(units='day',
before=1, after=1, max_before=20, max_after=20)
logon_query_times.display()
from msticpy.sectools.eventcluster import dbcluster_events, add_process_features, _string_score
host_logons = qry.list_host_logons(provs=[logon_query_times], **params_dict)
%matplotlib inline
if host_logons is not None and not host_logons.empty:
logon_features = host_logons.copy()
logon_features['AccountNum'] = host_logons.apply(lambda x: _string_score(x.Account), axis=1)
logon_features['LogonIdNum'] = host_logons.apply(lambda x: _string_score(x.TargetLogonId), axis=1)
logon_features['LogonHour'] = host_logons.apply(lambda x: x.TimeGenerated.hour, axis=1)
# you might need to play around with the max_cluster_distance parameter.
# decreasing this gives more clusters.
(clus_logons, _, _) = dbcluster_events(data=logon_features, time_column='TimeGenerated',
cluster_columns=['AccountNum',
'LogonType'],
max_cluster_distance=0.0001)
display(Markdown(f'Number of input events: {len(host_logons)}'))
display(Markdown(f'Number of clustered events: {len(clus_logons)}'))
display(Markdown('### Distinct host logon patterns'))
clus_logons.sort_values('TimeGenerated')
nbdisp.display_logon_data(clus_logons)
display(Markdown('### Logon timeline.'))
tooltip_cols = ['TargetUserName', 'TargetDomainName', 'SubjectUserName',
'SubjectDomainName', 'LogonType', 'IpAddress']
nbdisp.display_timeline(data=host_logons.query('TargetLogonId != "0x3e7"'),
overlay_data=host_logons.query('TargetLogonId == "0x3e7"'),
title="Logons (blue=user, green=system)",
source_columns=tooltip_cols, height=200)
display(Markdown('### Counts of logon events by logon type.'))
display(Markdown('Min counts for each logon type highlighted.'))
logon_by_type = (host_logons[['Account', 'LogonType', 'EventID']]
.groupby(['Account','LogonType']).count().unstack()
.fillna(0)
.style
.background_gradient(cmap='viridis', low=.5, high=0)
.format("{0:0>3.0f}"))
display(logon_by_type)
key = 'logon type key = {}'.format('; '.join([f'{k}: {v}' for k,v in mas.nbdisplay._WIN_LOGON_TYPE_MAP.items()]))
display(Markdown(key))
display(Markdown('### Relative frequencies by account'))
plt.rcParams['figure.figsize'] = (12, 4)
clus_logons.plot.barh(x="Account", y="ClusterSize")
else:
display(Markdown('No logon events found for host.'))
```
<a id='failed logons'></a>[Contents](#toc)
### Failed Logons
```
failedLogons = qry.list_host_logon_failures(provs=[query_times], **params_dict)
if failedLogons.shape[0] == 0:
    print(f'No logon failures recorded for this host between {query_times.start} and {query_times.end}')
failedLogons
```
<a id='examine_win_logon_sess'></a>[Contents](#toc)
## Examine a Logon Session
### Select a Logon ID To Examine
```
import re
dist_logons = clus_logons.sort_values('TimeGenerated')[['TargetUserName', 'TimeGenerated',
'LastEventTime', 'LogonType',
'ClusterSize']]
items = dist_logons.apply(lambda x: (f'{x.TargetUserName}: '
f'(logontype={x.LogonType}) '
f'timerange={x.TimeGenerated} - {x.LastEventTime} '
f'count={x.ClusterSize}'),
axis=1).values.tolist()
def get_selected_logon_cluster(selected_item):
acct_match = re.search(r'(?P<acct>[^:]+):\s+\(logontype=(?P<l_type>[^)]+)', selected_item)
if acct_match:
acct = acct_match['acct']
l_type = int(acct_match['l_type'])
return host_logons.query('TargetUserName == @acct and LogonType == @l_type')
logon_list_regex = r'''
(?P<acct>[^:]+):\s+
\(logontype=(?P<logon_type>[^)]+)\)\s+
\(timestamp=(?P<time>[^)]+)\)\s+
logonid=(?P<logonid>[0-9a-fx)]+)
'''
def get_selected_logon(selected_item):
acct_match = re.search(logon_list_regex, selected_item, re.VERBOSE)
if acct_match:
acct = acct_match['acct']
logon_type = int(acct_match['logon_type'])
time_stamp = pd.to_datetime(acct_match['time'])
logon_id = acct_match['logonid']
return host_logons.query('TargetUserName == @acct and LogonType == @logon_type'
' and TargetLogonId == @logon_id')
logon_wgt = mas.SelectString(description='Select logon cluster to examine',
item_list=items, height='200px', width='100%', auto_display=True)
# Calculate time range based on the logons from previous section
selected_logon_cluster = get_selected_logon_cluster(logon_wgt.value)
if len(selected_logon_cluster) > 20:
    display(Markdown('<h3><p style="color:red">Warning: the selected '
                     'cluster has a high number of logons.</p></h3><br>'
                     'Processes for these logons may be very slow '
                     'to retrieve and result in high memory usage.<br>'
                     'You may wish to narrow the time range and sample '
                     'the data before running the query for the full range.'))
logon_time = selected_logon_cluster['TimeGenerated'].min()
last_logon_time = selected_logon_cluster['TimeGenerated'].max()
time_diff = int((last_logon_time - logon_time).total_seconds() / (60 * 60) + 2)
# set the origin time to the time of our alert
proc_query_times = mas.QueryTime(units='hours', origin_time=logon_time,
before=1, after=time_diff, max_before=60, max_after=120)
proc_query_times.display()
from msticpy.sectools.eventcluster import dbcluster_events, add_process_features
print('Getting process events...', end='')
processes_on_host = qry.list_processes(provs=[proc_query_times], **params_dict)
print('done')
print('Clustering...', end='')
feature_procs = add_process_features(input_frame=processes_on_host,
path_separator=params_dict['path_separator'])
feature_procs['accountNum'] = feature_procs.apply(lambda x: _string_score(x.Account), axis=1)
# you might need to play around with the max_cluster_distance parameter.
# decreasing this gives more clusters.
(clus_events, dbcluster, x_data) = dbcluster_events(data=feature_procs,
cluster_columns=['commandlineTokensFull',
'pathScore',
'accountNum',
'isSystemSession'],
max_cluster_distance=0.0001)
print('done')
print('Number of input events:', len(feature_procs))
print('Number of clustered events:', len(clus_events))
def view_logon_sess(x=''):
global selected_logon
selected_logon = get_selected_logon(x)
logonId = selected_logon['TargetLogonId'].iloc[0]
sess_procs = (processes_on_host.query('TargetLogonId == @logonId | SubjectLogonId == @logonId')
[['NewProcessName', 'CommandLine', 'TargetLogonId']]
.drop_duplicates())
display(sess_procs)
selected_logon_cluster = get_selected_logon_cluster(logon_wgt.value)
selected_tgt_logon = selected_logon_cluster['TargetUserName'].iat[0]
system_logon = selected_tgt_logon.lower() == 'system' or selected_tgt_logon.endswith('$')
if system_logon:
    display(Markdown('<h3><p style="color:red">Warning: the selected '
                     'account name appears to be a system account.</p></h3><br>'
                     '<i>It is difficult to accurately associate processes '
                     'with the specific logon sessions.</i><br>'
                     'Showing clustered events for entire time selection.'))
display(clus_events.sort_values('TimeGenerated')[['TimeGenerated', 'LastEventTime',
'NewProcessName', 'CommandLine',
'ClusterSize', 'commandlineTokensFull',
'pathScore', 'isSystemSession']])
# Display a pick list for logon instances
items = (selected_logon_cluster
.sort_values('TimeGenerated')
.apply(lambda x: (f'{x.TargetUserName}: '
f'(logontype={x.LogonType}) '
f'(timestamp={x.TimeGenerated}) '
f'logonid={x.TargetLogonId}'),
axis=1).values.tolist())
sess_w = widgets.Select(options=items, description='Select logon instance to examine', **WIDGET_DEFAULTS)
widgets.interactive(view_logon_sess, x=sess_w)
```
<a id='cmdlineiocs'></a>[Contents](#toc)
# Check for IOCs in Commandline for current session
This section looks for Indicators of Compromise (IoC) within the data sets passed to it.
The first section looks at the command lines of the processes in the selected session. It also looks for base64 encoded strings within the data - a common way of hiding attacker intent - and attempts to decode any strings that look like base64. Additionally, if the decoded output contains anything that looks like a further base64 encoded string or file, a gzipped binary sequence, or a zipped or tar archive, it will attempt to extract the contents before searching the decoded data for potentially interesting IoC observables.
```
if not system_logon:
logonId = selected_logon['TargetLogonId'].iloc[0]
sess_procs = (processes_on_host.query('TargetLogonId == @logonId | SubjectLogonId == @logonId'))
else:
sess_procs = clus_events
ioc_extractor = sectools.IoCExtract()
os_family = host_entity.OSType if host_entity.OSType else 'Windows'
ioc_df = ioc_extractor.extract(data=sess_procs,
columns=['CommandLine'],
os_family=os_family,
ioc_types=['ipv4', 'ipv6', 'dns', 'url',
'md5_hash', 'sha1_hash', 'sha256_hash'])
if len(ioc_df):
display(Markdown("### IoC patterns found in process set."))
display(ioc_df)
else:
display(Markdown("### No IoC patterns found in process tree."))
```
### If any Base64 encoded strings, decode and search for IoCs in the results.
For simple strings the Base64 decoded output is straightforward. However for nested encodings this can get a little complex and difficult to represent in a tabular format.
**Columns**
- reference - The index of the row item in dotted notation, as depth.seq pairs (e.g. 1.2.2.3 would be the 3rd item at depth 3 that is a child of the 2nd item found at depth 1). This may not always be an accurate notation - it is mainly used to allow you to associate an individual row with the reference value contained in the full_decoded_string column of the topmost item.
- original_string - the original string before decoding.
- file_name - filename, if any (only if this is an item in a zip or tar file).
- file_type - a guess at the file type (this is currently elementary and only includes a few file types).
- input_bytes - the decoded bytes as a Python bytes string.
- decoded_string - the decoded string if it can be decoded as a UTF-8 or UTF-16 string. Note: binary sequences may often successfully decode as UTF-16 strings but, in these cases, the decodings are meaningless.
- encoding_type - encoding type (UTF-8 or UTF-16) if a decoding was possible, otherwise 'binary'.
- file_hashes - collection of file hashes for any decoded item.
- md5 - md5 hash as a separate column.
- sha1 - sha1 hash as a separate column.
- sha256 - sha256 hash as a separate column.
- printable_bytes - printable version of input_bytes as a string of \xNN values
- src_index - the index of the row in the input dataframe from which the data came.
- full_decoded_string - the full decoded string with any decoded replacements. This is only really useful for top-level items, since nested items will only show the 'full' string representing the child fragment.
```
dec_df = sectools.b64.unpack_items(data=sess_procs, column='CommandLine')
if len(dec_df) > 0:
display(HTML("<h3>Decoded base 64 command lines</h3>"))
display(HTML("Decoded values and hashes of decoded values shown below."))
display(HTML('Warning - some binary patterns may be decodable as unicode strings. '
'In these cases you should ignore the "decoded_string" column '
'and treat the encoded item as a binary - using the "printable_bytes" '
'column or treat the decoded_string as a binary (bytes) value.'))
display(dec_df[['full_decoded_string', 'decoded_string', 'original_string', 'printable_bytes', 'file_hashes']])
ioc_dec_df = ioc_extractor.extract(data=dec_df, columns=['full_decoded_string'])
if len(ioc_dec_df):
display(HTML("<h3>IoC patterns found in events with base64 decoded data</h3>"))
display(ioc_dec_df)
        ioc_df = ioc_df.append(ioc_dec_df, ignore_index=True)
else:
print("No base64 encodings found.")
```
<a id='virustotallookup'></a>[Contents](#toc)
## Virus Total Lookup
This section uses the popular Virus Total service to check any recovered IoCs against VT's database.
To use this you need an API key from Virus Total, which you can obtain here: https://www.virustotal.com/.
Note that VT throttles requests for free API keys to 4/minute. If you are unable to process the entire data set, try splitting it and submitting smaller chunks.
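If you do need to split the data, a minimal sketch of one way to submit the IoCs in chunks is shown below. It assumes the `ioc_df` DataFrame and `vt_lookup` object created in the code cell later in this section; the chunk size and pause length are illustrative values only, not VT-mandated settings.
```
import time
import pandas as pd

def lookup_iocs_in_chunks(vt_lookup, ioc_df, chunk_size=4, pause_secs=60):
    # Submit the IoC DataFrame to VirusTotal a few rows at a time,
    # pausing between chunks to stay under the free-tier rate limit.
    # chunk_size and pause_secs are illustrative values only.
    results = []
    for start in range(0, len(ioc_df), chunk_size):
        chunk = ioc_df.iloc[start:start + chunk_size]
        results.append(vt_lookup.lookup_iocs(data=chunk,
                                             type_col='IoCType',
                                             src_col='Observable'))
        if start + chunk_size < len(ioc_df):
            time.sleep(pause_secs)
    return pd.concat(results, ignore_index=True)
```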
**Things to note:**
- Virus Total lookups include file hashes, domains, IP addresses and URLs.
- The returned data is slightly different depending on the input type
- The VTLookup class tries to screen input data to prevent pointless lookups. E.g.:
- Only public IP Addresses will be submitted (no loopback, private address space, etc.)
- URLs with only local (unqualified) host parts will not be submitted.
- Domain names that are unqualified will not be submitted.
- Hash-like strings (e.g 'AAAAAAAAAAAAAAAAAA') that do not appear to have enough entropy to be a hash will not be submitted.
**Output Columns**
- Observable - The IoC observable submitted
- IoCType - the IoC type
- Status - the status of the submission request
- ResponseCode - the VT response code
- RawResponse - the entire raw json response
- Resource - VT Resource
- SourceIndex - The index of the Observable in the source DataFrame. You can use this to rejoin to your original data.
- VerboseMsg - VT Verbose Message
- ScanId - VT Scan ID if any
- Permalink - VT Permanent URL describing the resource
- Positives - If this is not zero, it indicates the number of malicious reports that VT holds for this observable.
- MD5 - The MD5 hash, if any
- SHA1 - The SHA1 hash, if any
- SHA256 - The SHA256 hash, if any
- ResolvedDomains - In the case of IP Addresses, this contains a list of all domains that resolve to this IP address
- ResolvedIPs - In the case of domains, this contains a list of all IP addresses resolved from the domain.
- DetectedUrls - Any malicious URLs associated with the observable.
```
vt_key = mas.GetEnvironmentKey(env_var='VT_API_KEY',
help_str='To obtain an API key sign up here https://www.virustotal.com/',
prompt='Virus Total API key:')
vt_key.display()
if vt_key.value and ioc_df is not None and not ioc_df.empty:
vt_lookup = sectools.VTLookup(vt_key.value, verbosity=2)
print(f'{len(ioc_df)} items in input frame')
supported_counts = {}
for ioc_type in vt_lookup.supported_ioc_types:
supported_counts[ioc_type] = len(ioc_df[ioc_df['IoCType'] == ioc_type])
print('Items in each category to be submitted to VirusTotal')
    print('(Note: items have pre-filtering to remove obvious erroneous '
          'data and false positives, such as private IP addresses)')
print(supported_counts)
print('-' * 80)
vt_results = vt_lookup.lookup_iocs(data=ioc_df, type_col='IoCType', src_col='Observable')
display(vt_results)
```
<a id='comms_to_other_hosts'></a>[Contents](#toc)
# Network Check Communications with Other Hosts
```
# Azure Network Analytics Base Query
az_net_analytics_query = r'''
AzureNetworkAnalytics_CL
| where SubType_s == 'FlowLog'
| where FlowStartTime_t >= datetime({start})
| where FlowEndTime_t <= datetime({end})
| project TenantId, TimeGenerated,
FlowStartTime = FlowStartTime_t,
FlowEndTime = FlowEndTime_t,
FlowIntervalEndTime = FlowIntervalEndTime_t,
FlowType = FlowType_s,
ResourceGroup = split(VM_s, '/')[0],
VMName = split(VM_s, '/')[1],
VMIPAddress = VMIP_s,
PublicIPs = extractall(@"([\d\.]+)[|\d]+", dynamic([1]), PublicIPs_s),
SrcIP = SrcIP_s,
DestIP = DestIP_s,
ExtIP = iif(FlowDirection_s == 'I', SrcIP_s, DestIP_s),
L4Protocol = L4Protocol_s,
L7Protocol = L7Protocol_s,
DestPort = DestPort_d,
FlowDirection = FlowDirection_s,
AllowedOutFlows = AllowedOutFlows_d,
AllowedInFlows = AllowedInFlows_d,
DeniedInFlows = DeniedInFlows_d,
DeniedOutFlows = DeniedOutFlows_d,
RemoteRegion = AzureRegion_s,
VMRegion = Region_s
| extend AllExtIPs = iif(isempty(PublicIPs), pack_array(ExtIP),
iif(isempty(ExtIP), PublicIPs, array_concat(PublicIPs, pack_array(ExtIP)))
)
| project-away ExtIP
| mvexpand AllExtIPs
{where_clause}
'''
ip_q_times = mas.QueryTime(label='Set time bounds for network queries',
units='hour', max_before=48, before=10, after=5,
max_after=24)
ip_q_times.display()
```
### Query Flows by Host IP Addresses
```
if 'AzureNetworkAnalytics_CL' not in table_index:
print('No network flow data available.')
az_net_comms_df = None
else:
all_host_ips = host_entity.private_ips + host_entity.public_ips + [host_entity.IPAddress]
host_ips = {'\'{}\''.format(i.Address) for i in all_host_ips}
host_ip_list = ','.join(host_ips)
if not host_ip_list:
raise ValueError('No IP Addresses for host. Cannot lookup network data')
az_ip_where = f'''
| where (VMIPAddress in ({host_ip_list})
or SrcIP in ({host_ip_list})
or DestIP in ({host_ip_list})
) and
(AllowedOutFlows > 0 or AllowedInFlows > 0)'''
print('getting data...')
az_net_query_byip = az_net_analytics_query.format(where_clause=az_ip_where,
start = ip_q_times.start,
end = ip_q_times.end)
net_default_cols = ['FlowStartTime', 'FlowEndTime', 'VMName', 'VMIPAddress',
'PublicIPs', 'SrcIP', 'DestIP', 'L4Protocol', 'L7Protocol',
'DestPort', 'FlowDirection', 'AllowedOutFlows',
'AllowedInFlows']
%kql -query az_net_query_byip
az_net_comms_df = _kql_raw_result_.to_dataframe()
az_net_comms_df[net_default_cols]
if len(az_net_comms_df) > 0:
import warnings
with warnings.catch_warnings():
warnings.simplefilter("ignore")
az_net_comms_df['TotalAllowedFlows'] = az_net_comms_df['AllowedOutFlows'] + az_net_comms_df['AllowedInFlows']
sns.catplot(x="L7Protocol", y="TotalAllowedFlows", col="FlowDirection", data=az_net_comms_df)
sns.relplot(x="FlowStartTime", y="TotalAllowedFlows",
col="FlowDirection", kind="line",
hue="L7Protocol", data=az_net_comms_df).set_xticklabels(rotation=50)
nbdisp.display_timeline(data=az_net_comms_df.query('AllowedOutFlows > 0'),
overlay_data=az_net_comms_df.query('AllowedInFlows > 0'),
title='Network Flows (out=blue, in=green)',
time_column='FlowStartTime',
source_columns=['FlowType', 'AllExtIPs', 'L7Protocol', 'FlowDirection'],
height=300)
else:
print('No network data for specified time range.')
```
### Flow Summary
```
if az_net_comms_df is not None and not az_net_comms_df.empty:
cm = sns.light_palette("green", as_cmap=True)
cols = ['VMName', 'VMIPAddress', 'PublicIPs', 'SrcIP', 'DestIP', 'L4Protocol',
'L7Protocol', 'DestPort', 'FlowDirection', 'AllExtIPs', 'TotalAllowedFlows']
flow_index = az_net_comms_df[cols].copy()
def get_source_ip(row):
if row.FlowDirection == 'O':
return row.VMIPAddress if row.VMIPAddress else row.SrcIP
else:
return row.AllExtIPs if row.AllExtIPs else row.DestIP
def get_dest_ip(row):
if row.FlowDirection == 'O':
return row.AllExtIPs if row.AllExtIPs else row.DestIP
else:
return row.VMIPAddress if row.VMIPAddress else row.SrcIP
flow_index['source'] = flow_index.apply(get_source_ip, axis=1)
flow_index['target'] = flow_index.apply(get_dest_ip, axis=1)
flow_index['value'] = flow_index['L7Protocol']
(flow_index[['source', 'target', 'value', 'L7Protocol', 'FlowDirection', 'TotalAllowedFlows']]
.groupby(['source', 'target', 'value', 'L7Protocol', 'FlowDirection'])
.sum().unstack().style.background_gradient(cmap=cm))
```
## GeoIP Map of External IPs
```
from msticpy.nbtools.foliummap import FoliumMap
folium_map = FoliumMap()
if az_net_comms_df is None or az_net_comms_df.empty:
print('No network flow data available.')
else:
ip_locs_in = set()
ip_locs_out = set()
for _, row in az_net_comms_df.iterrows():
ip = row.AllExtIPs
if ip in ip_locs_in or ip in ip_locs_out or not ip:
continue
ip_entity = mas.IpAddress(Address=ip)
iplocation.lookup_ip(ip_entity=ip_entity)
if not ip_entity.Location:
continue
ip_entity.AdditionalData['protocol'] = row.L7Protocol
if row.FlowDirection == 'I':
ip_locs_in.add(ip_entity)
else:
ip_locs_out.add(ip_entity)
display(HTML('<h3>External IP Addresses communicating with host</h3>'))
display(HTML('Numbered circles indicate multiple items - click to expand'))
display(HTML('Location markers: Blue = outbound, Purple = inbound, Green = Host'))
icon_props = {'color': 'green'}
folium_map.add_ip_cluster(ip_entities=host_entity.public_ips,
**icon_props)
icon_props = {'color': 'blue'}
folium_map.add_ip_cluster(ip_entities=ip_locs_out,
**icon_props)
icon_props = {'color': 'purple'}
folium_map.add_ip_cluster(ip_entities=ip_locs_in,
**icon_props)
display(folium_map.folium_map)
display(Markdown('<p style="color:red">Warning: the folium mapping library '
'does not display correctly in some browsers.</p><br>'
'If you see a blank image please retry with a different browser.'))
```
<a id='appendices'></a>[Contents](#toc)
# Appendices
## Available DataFrames
```
print('List of current DataFrames in Notebook')
print('-' * 50)
current_vars = list(locals().keys())
for var_name in current_vars:
if isinstance(locals()[var_name], pd.DataFrame) and not var_name.startswith('_'):
print(var_name)
```
## Saving Data to Excel
To save the contents of a pandas DataFrame to an Excel spreadsheet
use the following syntax
```
writer = pd.ExcelWriter('myWorksheet.xlsx')
my_data_frame.to_excel(writer,'Sheet1')
writer.save()
```
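In more recent pandas versions you can also use `ExcelWriter` as a context manager, which saves and closes the workbook automatically (this requires an Excel engine such as openpyxl to be installed). A minimal sketch, using placeholder names in place of your own DataFrame:
```
import pandas as pd

# Placeholder DataFrame standing in for my_data_frame above
my_data_frame = pd.DataFrame({'col_a': [1, 2], 'col_b': ['x', 'y']})

# The context manager saves and closes the workbook on exit,
# so no explicit writer.save() call is needed
with pd.ExcelWriter('myWorksheet.xlsx') as writer:
    my_data_frame.to_excel(writer, sheet_name='Sheet1')
```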
|
github_jupyter
|
import sys
import warnings
warnings.filterwarnings("ignore",category=DeprecationWarning)
MIN_REQ_PYTHON = (3,6)
if sys.version_info < MIN_REQ_PYTHON:
print('Check the Kernel->Change Kernel menu and ensure that Python 3.6')
print('or later is selected as the active kernel.')
sys.exit("Python %s.%s or later is required.\n" % MIN_REQ_PYTHON)
# Package Installs - try to avoid if they are already installed
try:
import msticpy.sectools as sectools
import Kqlmagic
from dns import reversename, resolver
from ipwhois import IPWhois
import folium
print('If you answer "n" this cell will exit with an error in order to avoid the pip install calls,')
print('This error can safely be ignored.')
resp = input('msticpy and Kqlmagic packages are already loaded. Do you want to re-install? (y/n)')
if resp.strip().lower() != 'y':
sys.exit('pip install aborted - you may skip this error and continue.')
else:
print('After installation has completed, restart the current kernel and run '
'the notebook again skipping this cell.')
except ImportError:
pass
print('\nPlease wait. Installing required packages. This may take a few minutes...')
!pip install git+https://github.com/microsoft/msticpy --upgrade --user
!pip install Kqlmagic --no-cache-dir --upgrade --user
!pip install holoviews
!pip install dnspython --upgrade
!pip install ipwhois --upgrade
!pip install folium --upgrade
# Uncomment to refresh the maxminddb database
# !pip install maxminddb-geolite2 --upgrade
print('To ensure that the latest versions of the installed libraries '
'are used, please restart the current kernel and run '
'the notebook again skipping this cell.')
# Imports
import sys
import warnings
MIN_REQ_PYTHON = (3,6)
if sys.version_info < MIN_REQ_PYTHON:
print('Check the Kernel->Change Kernel menu and ensure that Python 3.6')
print('or later is selected as the active kernel.')
sys.exit("Python %s.%s or later is required.\n" % MIN_REQ_PYTHON)
import numpy as np
from IPython import get_ipython
from IPython.display import display, HTML, Markdown
import ipywidgets as widgets
import matplotlib.pyplot as plt
import seaborn as sns
sns.set()
import networkx as nx
import pandas as pd
pd.set_option('display.max_rows', 100)
pd.set_option('display.max_columns', 50)
pd.set_option('display.max_colwidth', 100)
import msticpy.sectools as sectools
import msticpy.nbtools as mas
import msticpy.nbtools.kql as qry
import msticpy.nbtools.nbdisplay as nbdisp
WIDGET_DEFAULTS = {'layout': widgets.Layout(width='95%'),
'style': {'description_width': 'initial'}}
# Some of our dependencies (networkx) still use deprecated Matplotlib
# APIs - we can't do anything about it so suppress them from view
from matplotlib import MatplotlibDeprecationWarning
warnings.simplefilter("ignore", category=MatplotlibDeprecationWarning)
display(HTML(mas.util._TOGGLE_CODE_PREPARE_STR))
HTML('''
<script type="text/javascript">
IPython.notebook.kernel.execute("nb_query_string='".concat(window.location.search).concat("'"));
</script>
''');
import os
from msticpy.nbtools.wsconfig import WorkspaceConfig
ws_config_file = 'config.json'
try:
ws_config = WorkspaceConfig(ws_config_file)
display(Markdown(f'Read Workspace configuration from local config.json for workspace **{ws_config["workspace_name"]}**'))
for cf_item in ['tenant_id', 'subscription_id', 'resource_group', 'workspace_id', 'workspace_name']:
display(Markdown(f'**{cf_item.upper()}**: {ws_config[cf_item]}'))
WORKSPACE_ID = ws_config['workspace_id']
except:
WORKSPACE_ID = None
display(Markdown('**Workspace configuration not found.**\n\n'
'Please go to your Log Analytics workspace, copy the workspace ID and paste here. '
'Or read the workspace_id from the config.json in your Azure Notebooks project.'))
ws_config = None
ws_id = mas.GetEnvironmentKey(env_var='WORKSPACE_ID',
prompt='Please enter your Log Analytics Workspace Id:')
ws_id.display()
%kql loganalytics://tenant(aad_tenant).workspace(WORKSPACE_ID).clientid(client_id).clientsecret(client_secret)
%kql loganalytics://code().workspace(WORKSPACE_ID)
if not WORKSPACE_ID:
try:
WORKSPACE_ID = ws_id.value
except NameError:
raise ValueError('No workspace Id.')
mas.kql.load_kql_magic()
%kql loganalytics://code().workspace(WORKSPACE_ID)
%kql search * | summarize RowCount=count() by Type | project-rename Table=Type
la_table_set = _kql_raw_result_.to_dataframe()
table_index = la_table_set.set_index('Table')['RowCount'].to_dict()
display(Markdown('Current data in workspace'))
display(la_table_set.T)
host_text = widgets.Text(description='Enter the Host name to search for:', **WIDGET_DEFAULTS)
display(host_text)
query_times = mas.QueryTime(units='day', max_before=20, before=5, max_after=1)
query_times.display()
from msticpy.nbtools.entityschema import GeoLocation
from msticpy.sectools.geoip import GeoLiteLookup
iplocation = GeoLiteLookup()
# Get single event - try process creation
if 'SecurityEvent' not in table_index:
raise ValueError('No Windows event log data available in the workspace')
start = f'\'{query_times.start}\''
hostname = host_text.value
find_host_event_query = r'''
SecurityEvent
| where TimeGenerated >= datetime({start})
| where TimeGenerated <= datetime({end})
| where Computer has '{hostname}'
| top 1 by TimeGenerated desc nulls last
'''.format(start=query_times.start,
end=query_times.end,
hostname=hostname)
print('Checking for event data...')
# Get heartbeat event if available
%kql -query find_host_event_query
if _kql_raw_result_.completion_query_info['StatusCode'] == 0:
host_event_df = _kql_raw_result_.to_dataframe()
host_event = None
host_entity = None
if host_event_df.shape[0] > 0:
host_entity = mas.Host(src_event=host_event_df.iloc[0])
if not host_entity:
raise LookupError(f'Could not find Windows events the name {hostname}')
# Try to get an OMS Heartbeat for this computer
if 'Heartbeat' in table_index:
heartbeat_query = '''
Heartbeat
| where Computer == \'{computer}\'
| where TimeGenerated >= datetime({start})
| where TimeGenerated <= datetime({end})
| top 1 by TimeGenerated desc nulls last
'''.format(start=query_times.start,
end=query_times.end,
computer=host_entity.computer)
print('Getting heartbeat data...')
%kql -query heartbeat_query
if _kql_raw_result_.completion_query_info['StatusCode'] == 0:
host_hb = _kql_raw_result_.to_dataframe().iloc[0]
host_entity.SourceComputerId = host_hb['SourceComputerId']
host_entity.OSType = host_hb['OSType']
host_entity.OSMajorVersion = host_hb['OSMajorVersion']
host_entity.OSMinorVersion = host_hb['OSMinorVersion']
host_entity.ComputerEnvironment = host_hb['ComputerEnvironment']
host_entity.OmsSolutions = [sol.strip() for sol in host_hb['Solutions'].split(',')]
host_entity.VMUUID = host_hb['VMUUID']
ip_entity = mas.IpAddress()
ip_entity.Address = host_hb['ComputerIP']
geoloc_entity = GeoLocation()
geoloc_entity.CountryName = host_hb['RemoteIPCountry']
geoloc_entity.Longitude = host_hb['RemoteIPLongitude']
geoloc_entity.Latitude = host_hb['RemoteIPLatitude']
ip_entity.Location = geoloc_entity
host_entity.IPAddress = ip_entity # TODO change to graph edge
if 'AzureNetworkAnalytics_CL' in table_index:
print('Looking for IP addresses in network flows...')
aznet_query = '''
AzureNetworkAnalytics_CL
| where TimeGenerated >= datetime({start})
| where TimeGenerated <= datetime({end})
| where VirtualMachine_s has \'{host}\'
| where ResourceType == 'NetworkInterface'
| top 1 by TimeGenerated desc
| project PrivateIPAddresses = PrivateIPAddresses_s,
PublicIPAddresses = PublicIPAddresses_s
'''.format(start=query_times.start,
end=query_times.end,
host=host_entity.HostName)
%kql -query aznet_query
az_net_df = _kql_raw_result_.to_dataframe()
def convert_to_ip_entities(ip_str):
ip_entities = []
if ip_str:
if ',' in ip_str:
addrs = ip_str.split(',')
elif ' ' in ip_str:
addrs = ip_str.split(' ')
else:
addrs = [ip_str]
for addr in addrs:
ip_entity = mas.IpAddress()
ip_entity.Address = addr.strip()
iplocation.lookup_ip(ip_entity=ip_entity)
ip_entities.append(ip_entity)
return ip_entities
# Add this information to our inv_host_entity
retrieved_address=[]
if len(az_net_df) == 1:
priv_addr_str = az_net_df['PrivateIPAddresses'].loc[0]
host_entity.properties['private_ips'] = convert_to_ip_entities(priv_addr_str)
pub_addr_str = az_net_df['PublicIPAddresses'].loc[0]
host_entity.properties['public_ips'] = convert_to_ip_entities(pub_addr_str)
retrieved_address = [ip.Address for ip in host_entity.properties['public_ips']]
else:
if 'private_ips' not in host_entity.properties:
host_entity.properties['private_ips'] = []
if 'public_ips' not in host_entity.properties:
host_entity.properties['public_ips'] = []
print(host_entity)
# set the origin time to the time of our alert
query_times = mas.QueryTime(units='day',
max_before=28, max_after=1, before=5)
query_times.display()
related_alerts_query = r'''
SecurityAlert
| where TimeGenerated >= datetime({start})
| where TimeGenerated <= datetime({end})
| extend StartTimeUtc = TimeGenerated
| extend AlertDisplayName = DisplayName
| extend Computer = '{host}'
| extend simple_hostname = tostring(split(Computer, '.')[0])
| where Entities has Computer or Entities has simple_hostname
or ExtendedProperties has Computer
or ExtendedProperties has simple_hostname
'''.format(start=query_times.start,
end=query_times.end,
host=host_entity.HostName)
%kql -query related_alerts_query
related_alerts = _kql_raw_result_.to_dataframe()
if related_alerts is not None and not related_alerts.empty:
host_alert_items = (related_alerts[['AlertName', 'TimeGenerated']]\
.groupby('AlertName').TimeGenerated.agg('count').to_dict())
# acct_alert_items = related_alerts\
# .query('acct_match == @True')[['AlertType', 'StartTimeUtc']]\
# .groupby('AlertType').StartTimeUtc.agg('count').to_dict()
# proc_alert_items = related_alerts\
# .query('proc_match == @True')[['AlertType', 'StartTimeUtc']]\
# .groupby('AlertType').StartTimeUtc.agg('count').to_dict()
def print_related_alerts(alertDict, entityType, entityName):
if len(alertDict) > 0:
display(Markdown('### Found {} different alert types related to this {} (\'{}\')'.format(len(alertDict), entityType, entityName)))
for (k,v) in alertDict.items():
print('- {}, Count of alerts: {}'.format(k, v))
else:
print('No alerts for {} entity \'{}\''.format(entityType, entityName))
print_related_alerts(host_alert_items, 'host', host_entity.HostName)
nbdisp.display_timeline(data=related_alerts, title="Alerts", source_columns=['AlertName'], height=200)
else:
display(Markdown('No related alerts found.'))
def disp_full_alert(alert):
global related_alert
related_alert = mas.SecurityAlert(alert)
nbdisp.display_alert(related_alert, show_entities=True)
if related_alerts is not None and not related_alerts.empty:
related_alerts['CompromisedEntity'] = related_alerts['Computer']
display(Markdown('### Click on alert to view details.'))
rel_alert_select = mas.AlertSelector(alerts=related_alerts,
# columns=['TimeGenerated', 'AlertName', 'CompromisedEntity', 'SystemAlertId'],
action=disp_full_alert)
rel_alert_select.display()
from msticpy.nbtools.query_defns import DataFamily, DataEnvironment
params_dict = {}
params_dict['host_filter_eq'] = f'Computer has \'{host_entity.HostName}\''
params_dict['host_filter_neq'] = f'Computer !has \'{host_entity.HostName}\''
params_dict['host_name'] = host_entity.HostName
params_dict['subscription_filter'] = 'true'
if host_entity.OSFamily == 'Linux':
params_dict['data_family'] = DataFamily.LinuxSecurity
params_dict['path_separator'] = '/'
else:
params_dict['data_family'] = DataFamily.WindowsSecurity
params_dict['path_separator'] = '\\'
# set the origin time to the time of our alert
logon_query_times = mas.QueryTime(units='day',
before=1, after=1, max_before=20, max_after=20)
logon_query_times.display()
from msticpy.sectools.eventcluster import dbcluster_events, add_process_features, _string_score
host_logons = qry.list_host_logons(provs=[logon_query_times], **params_dict)
%matplotlib inline
if host_logons is not None and not host_logons.empty:
logon_features = host_logons.copy()
logon_features['AccountNum'] = host_logons.apply(lambda x: _string_score(x.Account), axis=1)
logon_features['LogonIdNum'] = host_logons.apply(lambda x: _string_score(x.TargetLogonId), axis=1)
logon_features['LogonHour'] = host_logons.apply(lambda x: x.TimeGenerated.hour, axis=1)
# you might need to play around with the max_cluster_distance parameter.
# decreasing this gives more clusters.
(clus_logons, _, _) = dbcluster_events(data=logon_features, time_column='TimeGenerated',
cluster_columns=['AccountNum',
'LogonType'],
max_cluster_distance=0.0001)
display(Markdown(f'Number of input events: {len(host_logons)}'))
display(Markdown(f'Number of clustered events: {len(clus_logons)}'))
display(Markdown('### Distinct host logon patterns'))
clus_logons.sort_values('TimeGenerated')
nbdisp.display_logon_data(clus_logons)
display(Markdown('### Logon timeline.'))
tooltip_cols = ['TargetUserName', 'TargetDomainName', 'SubjectUserName',
'SubjectDomainName', 'LogonType', 'IpAddress']
nbdisp.display_timeline(data=host_logons.query('TargetLogonId != "0x3e7"'),
overlay_data=host_logons.query('TargetLogonId == "0x3e7"'),
title="Logons (blue=user, green=system)",
source_columns=tooltip_cols, height=200)
display(Markdown('### Counts of logon events by logon type.'))
display(Markdown('Min counts for each logon type highlighted.'))
logon_by_type = (host_logons[['Account', 'LogonType', 'EventID']]
.groupby(['Account','LogonType']).count().unstack()
.fillna(0)
.style
.background_gradient(cmap='viridis', low=.5, high=0)
.format("{0:0>3.0f}"))
display(logon_by_type)
key = 'logon type key = {}'.format('; '.join([f'{k}: {v}' for k,v in mas.nbdisplay._WIN_LOGON_TYPE_MAP.items()]))
display(Markdown(key))
display(Markdown('### Relative frequencies by account'))
plt.rcParams['figure.figsize'] = (12, 4)
clus_logons.plot.barh(x="Account", y="ClusterSize")
else:
display(Markdown('No logon events found for host.'))
failedLogons = qry.list_host_logon_failures(provs=[query_times], **params_dict)
if failedLogons.shape[0] == 0:
display(print('No logon failures recorded for this host between {security_alert.start} and {security_alert.start}'))
failedLogons
import re
dist_logons = clus_logons.sort_values('TimeGenerated')[['TargetUserName', 'TimeGenerated',
'LastEventTime', 'LogonType',
'ClusterSize']]
items = dist_logons.apply(lambda x: (f'{x.TargetUserName}: '
f'(logontype={x.LogonType}) '
f'timerange={x.TimeGenerated} - {x.LastEventTime} '
f'count={x.ClusterSize}'),
axis=1).values.tolist()
def get_selected_logon_cluster(selected_item):
acct_match = re.search(r'(?P<acct>[^:]+):\s+\(logontype=(?P<l_type>[^)]+)', selected_item)
if acct_match:
acct = acct_match['acct']
l_type = int(acct_match['l_type'])
return host_logons.query('TargetUserName == @acct and LogonType == @l_type')
logon_list_regex = r'''
(?P<acct>[^:]+):\s+
\(logontype=(?P<logon_type>[^)]+)\)\s+
\(timestamp=(?P<time>[^)]+)\)\s+
logonid=(?P<logonid>[0-9a-fx)]+)
'''
def get_selected_logon(selected_item):
acct_match = re.search(logon_list_regex, selected_item, re.VERBOSE)
if acct_match:
acct = acct_match['acct']
logon_type = int(acct_match['logon_type'])
time_stamp = pd.to_datetime(acct_match['time'])
logon_id = acct_match['logonid']
return host_logons.query('TargetUserName == @acct and LogonType == @logon_type'
' and TargetLogonId == @logon_id')
logon_wgt = mas.SelectString(description='Select logon cluster to examine',
item_list=items, height='200px', width='100%', auto_display=True)
# Calculate time range based on the logons from previous section
selected_logon_cluster = get_selected_logon_cluster(logon_wgt.value)
if len(selected_logon_cluster) > 20:
display(Markdown('<h3><p style="color:red">Warning: the selected '
'cluster has a high number of logons.</p></h1><br>'
'Processes for these logons may be very slow '
'to retrieve and result in high memory usage.<br>'
'You may wish to narrow the time range and sample'
'the data before running the query for the full range.'))
logon_time = selected_logon_cluster['TimeGenerated'].min()
last_logon_time = selected_logon_cluster['TimeGenerated'].max()
time_diff = int((last_logon_time - logon_time).total_seconds() / (60 * 60) + 2)
# set the origin time to the time of our alert
proc_query_times = mas.QueryTime(units='hours', origin_time=logon_time,
before=1, after=time_diff, max_before=60, max_after=120)
proc_query_times.display()
from msticpy.sectools.eventcluster import dbcluster_events, add_process_features
print('Getting process events...', end='')
processes_on_host = qry.list_processes(provs=[proc_query_times], **params_dict)
print('done')
print('Clustering...', end='')
feature_procs = add_process_features(input_frame=processes_on_host,
path_separator=params_dict['path_separator'])
feature_procs['accountNum'] = feature_procs.apply(lambda x: _string_score(x.Account), axis=1)
# you might need to play around with the max_cluster_distance parameter.
# decreasing this gives more clusters.
(clus_events, dbcluster, x_data) = dbcluster_events(data=feature_procs,
cluster_columns=['commandlineTokensFull',
'pathScore',
'accountNum',
'isSystemSession'],
max_cluster_distance=0.0001)
print('done')
print('Number of input events:', len(feature_procs))
print('Number of clustered events:', len(clus_events))
def view_logon_sess(x=''):
global selected_logon
selected_logon = get_selected_logon(x)
logonId = selected_logon['TargetLogonId'].iloc[0]
sess_procs = (processes_on_host.query('TargetLogonId == @logonId | SubjectLogonId == @logonId')
[['NewProcessName', 'CommandLine', 'TargetLogonId']]
.drop_duplicates())
display(sess_procs)
selected_logon_cluster = get_selected_logon_cluster(logon_wgt.value)
selected_tgt_logon = selected_logon_cluster['TargetUserName'].iat[0]
system_logon = selected_tgt_logon.lower() == 'system' or selected_tgt_logon.endswith('$')
if system_logon:
display(Markdown('<h3><p style="color:red">Warning: the selected '
'account name appears to be a system account.</p></h1><br>'
'<i>It is difficult to accurately associate processes '
'with the specific logon sessions.<br>'
'Showing clustered events for entire time selection.'))
display(clus_events.sort_values('TimeGenerated')[['TimeGenerated', 'LastEventTime',
'NewProcessName', 'CommandLine',
'ClusterSize', 'commandlineTokensFull',
'pathScore', 'isSystemSession']])
# Display a pick list for logon instances
items = (selected_logon_cluster
.sort_values('TimeGenerated')
.apply(lambda x: (f'{x.TargetUserName}: '
f'(logontype={x.LogonType}) '
f'(timestamp={x.TimeGenerated}) '
f'logonid={x.TargetLogonId}'),
axis=1).values.tolist())
sess_w = widgets.Select(options=items, description='Select logon instance to examine', **WIDGET_DEFAULTS)
widgets.interactive(view_logon_sess, x=sess_w)
if not system_logon:
logonId = selected_logon['TargetLogonId'].iloc[0]
sess_procs = (processes_on_host.query('TargetLogonId == @logonId | SubjectLogonId == @logonId'))
else:
sess_procs = clus_events
ioc_extractor = sectools.IoCExtract()
os_family = host_entity.OSType if host_entity.OSType else 'Windows'
ioc_df = ioc_extractor.extract(data=sess_procs,
columns=['CommandLine'],
os_family=os_family,
ioc_types=['ipv4', 'ipv6', 'dns', 'url',
'md5_hash', 'sha1_hash', 'sha256_hash'])
if len(ioc_df):
display(Markdown("### IoC patterns found in process set."))
display(ioc_df)
else:
display(Markdown("### No IoC patterns found in process tree."))
dec_df = sectools.b64.unpack_items(data=sess_procs, column='CommandLine')
if len(dec_df) > 0:
display(HTML("<h3>Decoded base 64 command lines</h3>"))
display(HTML("Decoded values and hashes of decoded values shown below."))
display(HTML('Warning - some binary patterns may be decodable as unicode strings. '
'In these cases you should ignore the "decoded_string" column '
'and treat the encoded item as a binary - using the "printable_bytes" '
'column or treat the decoded_string as a binary (bytes) value.'))
display(dec_df[['full_decoded_string', 'decoded_string', 'original_string', 'printable_bytes', 'file_hashes']])
ioc_dec_df = ioc_extractor.extract(data=dec_df, columns=['full_decoded_string'])
if len(ioc_dec_df):
display(HTML("<h3>IoC patterns found in events with base64 decoded data</h3>"))
display(ioc_dec_df)
ioc_df = ioc_df.append(ioc_dec_df ,ignore_index=True)
else:
print("No base64 encodings found.")
vt_key = mas.GetEnvironmentKey(env_var='VT_API_KEY',
help_str='To obtain an API key sign up here https://www.virustotal.com/',
prompt='Virus Total API key:')
vt_key.display()
if vt_key.value and ioc_df is not None and not ioc_df.empty:
vt_lookup = sectools.VTLookup(vt_key.value, verbosity=2)
print(f'{len(ioc_df)} items in input frame')
supported_counts = {}
for ioc_type in vt_lookup.supported_ioc_types:
supported_counts[ioc_type] = len(ioc_df[ioc_df['IoCType'] == ioc_type])
print('Items in each category to be submitted to VirusTotal')
print('(Note: items have pre-filtering to remove obvious erroneous '
'data and false positives, such as private IPaddresses)')
print(supported_counts)
print('-' * 80)
vt_results = vt_lookup.lookup_iocs(data=ioc_df, type_col='IoCType', src_col='Observable')
display(vt_results)
# Azure Network Analytics Base Query
az_net_analytics_query =r'''
AzureNetworkAnalytics_CL
| where SubType_s == 'FlowLog'
| where FlowStartTime_t >= datetime({start})
| where FlowEndTime_t <= datetime({end})
| project TenantId, TimeGenerated,
FlowStartTime = FlowStartTime_t,
FlowEndTime = FlowEndTime_t,
FlowIntervalEndTime = FlowIntervalEndTime_t,
FlowType = FlowType_s,
ResourceGroup = split(VM_s, '/')[0],
VMName = split(VM_s, '/')[1],
VMIPAddress = VMIP_s,
PublicIPs = extractall(@"([\d\.]+)[|\d]+", dynamic([1]), PublicIPs_s),
SrcIP = SrcIP_s,
DestIP = DestIP_s,
ExtIP = iif(FlowDirection_s == 'I', SrcIP_s, DestIP_s),
L4Protocol = L4Protocol_s,
L7Protocol = L7Protocol_s,
DestPort = DestPort_d,
FlowDirection = FlowDirection_s,
AllowedOutFlows = AllowedOutFlows_d,
AllowedInFlows = AllowedInFlows_d,
DeniedInFlows = DeniedInFlows_d,
DeniedOutFlows = DeniedOutFlows_d,
RemoteRegion = AzureRegion_s,
VMRegion = Region_s
| extend AllExtIPs = iif(isempty(PublicIPs), pack_array(ExtIP),
iif(isempty(ExtIP), PublicIPs, array_concat(PublicIPs, pack_array(ExtIP)))
)
| project-away ExtIP
| mvexpand AllExtIPs
{where_clause}
'''
ip_q_times = mas.QueryTime(label='Set time bounds for network queries',
units='hour', max_before=48, before=10, after=5,
max_after=24)
ip_q_times.display()
if 'AzureNetworkAnalytics_CL' not in table_index:
print('No network flow data available.')
az_net_comms_df = None
else:
all_host_ips = host_entity.private_ips + host_entity.public_ips + [host_entity.IPAddress]
host_ips = {'\'{}\''.format(i.Address) for i in all_host_ips}
host_ip_list = ','.join(host_ips)
if not host_ip_list:
raise ValueError('No IP Addresses for host. Cannot lookup network data')
az_ip_where = f'''
| where (VMIPAddress in ({host_ip_list})
or SrcIP in ({host_ip_list})
or DestIP in ({host_ip_list})
) and
(AllowedOutFlows > 0 or AllowedInFlows > 0)'''
print('getting data...')
az_net_query_byip = az_net_analytics_query.format(where_clause=az_ip_where,
start = ip_q_times.start,
end = ip_q_times.end)
net_default_cols = ['FlowStartTime', 'FlowEndTime', 'VMName', 'VMIPAddress',
'PublicIPs', 'SrcIP', 'DestIP', 'L4Protocol', 'L7Protocol',
'DestPort', 'FlowDirection', 'AllowedOutFlows',
'AllowedInFlows']
%kql -query az_net_query_byip
az_net_comms_df = _kql_raw_result_.to_dataframe()
az_net_comms_df[net_default_cols]
if len(az_net_comms_df) > 0:
import warnings
with warnings.catch_warnings():
warnings.simplefilter("ignore")
az_net_comms_df['TotalAllowedFlows'] = az_net_comms_df['AllowedOutFlows'] + az_net_comms_df['AllowedInFlows']
sns.catplot(x="L7Protocol", y="TotalAllowedFlows", col="FlowDirection", data=az_net_comms_df)
sns.relplot(x="FlowStartTime", y="TotalAllowedFlows",
col="FlowDirection", kind="line",
hue="L7Protocol", data=az_net_comms_df).set_xticklabels(rotation=50)
nbdisp.display_timeline(data=az_net_comms_df.query('AllowedOutFlows > 0'),
overlay_data=az_net_comms_df.query('AllowedInFlows > 0'),
title='Network Flows (out=blue, in=green)',
time_column='FlowStartTime',
source_columns=['FlowType', 'AllExtIPs', 'L7Protocol', 'FlowDirection'],
height=300)
else:
print('No network data for specified time range.')
if az_net_comms_df is not None and not az_net_comms_df.empty:
cm = sns.light_palette("green", as_cmap=True)
cols = ['VMName', 'VMIPAddress', 'PublicIPs', 'SrcIP', 'DestIP', 'L4Protocol',
'L7Protocol', 'DestPort', 'FlowDirection', 'AllExtIPs', 'TotalAllowedFlows']
flow_index = az_net_comms_df[cols].copy()
def get_source_ip(row):
if row.FlowDirection == 'O':
return row.VMIPAddress if row.VMIPAddress else row.SrcIP
else:
return row.AllExtIPs if row.AllExtIPs else row.DestIP
def get_dest_ip(row):
if row.FlowDirection == 'O':
return row.AllExtIPs if row.AllExtIPs else row.DestIP
else:
return row.VMIPAddress if row.VMIPAddress else row.SrcIP
flow_index['source'] = flow_index.apply(get_source_ip, axis=1)
flow_index['target'] = flow_index.apply(get_dest_ip, axis=1)
flow_index['value'] = flow_index['L7Protocol']
(flow_index[['source', 'target', 'value', 'L7Protocol', 'FlowDirection', 'TotalAllowedFlows']]
.groupby(['source', 'target', 'value', 'L7Protocol', 'FlowDirection'])
.sum().unstack().style.background_gradient(cmap=cm))
from msticpy.nbtools.foliummap import FoliumMap
folium_map = FoliumMap()
if az_net_comms_df is None or az_net_comms_df.empty:
print('No network flow data available.')
else:
ip_locs_in = set()
ip_locs_out = set()
for _, row in az_net_comms_df.iterrows():
ip = row.AllExtIPs
if ip in ip_locs_in or ip in ip_locs_out or not ip:
continue
ip_entity = mas.IpAddress(Address=ip)
iplocation.lookup_ip(ip_entity=ip_entity)
if not ip_entity.Location:
continue
ip_entity.AdditionalData['protocol'] = row.L7Protocol
if row.FlowDirection == 'I':
ip_locs_in.add(ip_entity)
else:
ip_locs_out.add(ip_entity)
display(HTML('<h3>External IP Addresses communicating with host</h3>'))
display(HTML('Numbered circles indicate multiple items - click to expand'))
display(HTML('Location markers: Blue = outbound, Purple = inbound, Green = Host'))
icon_props = {'color': 'green'}
folium_map.add_ip_cluster(ip_entities=host_entity.public_ips,
**icon_props)
icon_props = {'color': 'blue'}
folium_map.add_ip_cluster(ip_entities=ip_locs_out,
**icon_props)
icon_props = {'color': 'purple'}
folium_map.add_ip_cluster(ip_entities=ip_locs_in,
**icon_props)
display(folium_map.folium_map)
display(Markdown('<p style="color:red">Warning: the folium mapping library '
'does not display correctly in some browsers.</p><br>'
'If you see a blank image please retry with a different browser.'))
print('List of current DataFrames in Notebook')
print('-' * 50)
current_vars = list(locals().keys())
for var_name in current_vars:
if isinstance(locals()[var_name], pd.DataFrame) and not var_name.startswith('_'):
print(var_name)
# Export a DataFrame to an Excel workbook ('my_data_frame' is a placeholder - substitute the DataFrame you want to save)
writer = pd.ExcelWriter('myWorksheet.xlsx')
my_data_frame.to_excel(writer, sheet_name='Sheet1')
writer.save()
| 0.266453 | 0.840913 |
# SIT742: Modern Data Science
**(Week 03: Data Wrangling)**
---
- Materials in this module include resources collected from various open-source online repositories.
- You are free to use, change and distribute this package.
- If you found any issue/bug for this document, please submit an issue at [tulip-lab/sit742](https://github.com/tulip-lab/sit742/issues)
Prepared by **SIT742 Teaching Team**
---
# Session 3A - Data Wrangling with Pandas
## Table of Content
* Part 1. Scraping data from the web
* Part 2. States and Territories of Australia
* Part 3. Parsing XML files with BeautifulSoup
---
## Part 1. Scraping data from the web
Many of you will probably be interested in scraping data from the web for your projects. For example, what if we were interested in working with some historical Canadian weather data? Well, we can get that from: http://climate.weather.gc.ca using their API. Requests are going to be formatted like this:
```
import pandas as pd
url_template = "http://climate.weather.gc.ca/climate_data/bulk_data_e.html?format=csv&stationID=5415&Year={year}&Month={month}&timeframe=1&submit=Download+Data"
```
Note that we've requested the data be returned as a csv, and that we're going to supply the month and year as inputs when we fire off the query. To get the data for March 2012, we need to format it with month=3, year=2012:
```
url = url_template.format(month=3, year=2012)
url
```
This is great! We can just use the same read_csv function as before, and just give it a URL as a filename. Awesome.
Upon inspection, we find that there is a block of metadata rows at the top of this CSV, but pandas knows CSVs can be messy, so there's a skiprows option to skip over them. We parse the dates again, and set 'Date/Time' to be the index column. Here's the resulting dataframe.
```
weather_mar2012 = pd.read_csv(url, skiprows=15, index_col='Date/Time', parse_dates=True, encoding='latin1')
weather_mar2012.head()
```
As before, we can get rid of any columns that don't contain real data using ${\tt .dropna()}$
```
weather_mar2012 = weather_mar2012.dropna(axis=1, how='any')
weather_mar2012.head()
```
Getting better! The Year/Month/Day/Time columns are redundant, though, and the Data Quality column doesn't look too useful. Let's get rid of those.
```
weather_mar2012 = weather_mar2012.drop(['Year', 'Month', 'Day', 'Time'], axis=1)
weather_mar2012[:5]
```
Great! Now, how do we download the whole year? It would be nice if we could just send that as a single request, but like many APIs this one is rate-limited to prevent people from hogging bandwidth. No problem: we can write a function!
```
def download_weather_month(year, month):
url = url_template.format(year=year, month=month)
weather_data = pd.read_csv(url, skiprows=15, index_col='Date/Time', parse_dates=True)
weather_data = weather_data.dropna(axis=1)
weather_data.columns = [col.replace('\xb0', '') for col in weather_data.columns]
weather_data = weather_data.drop(['Year', 'Day', 'Month', 'Time'], axis=1)
return weather_data
```
Now to test that this function does the right thing:
```
download_weather_month(2012, 1).head()
```
Woohoo! Now we can iteratively request all the months using a single line. This will take a little while to run.
```
data_by_month = [download_weather_month(2012, i) for i in range(1, 13)]  # months 1 through 12
```
Once that's done, it's easy to concatenate all the dataframes together into one big dataframe using ${\tt pandas.concat()}$. And now we have the whole year's data!
```
weather_2012 = pd.concat(data_by_month)
```
This thing is long, so instead of printing out the whole thing, I'm just going to print a quick summary of the ${\tt DataFrame}$ by calling ${\tt .info()}$:
```
weather_2012.info()
```
And a quick reminder, if we wanted to save that data to a file:
```
weather_2012.to_csv('weather_2012.csv')
!ls
```
And finally, something you should do early on in the wrangling process: plot the data.
```
# plot that data
import matplotlib.pyplot as plt
# so now 'plt' means matplotlib.pyplot
df = pd.read_csv('weather_2012.csv', low_memory=False)
df.plot(kind='scatter',x='Dew Point Temp (C)',y='Rel Hum (%)',color='red')
df.plot(kind='scatter',x='Temp (C)',y='Wind Spd (km/h)',color='yellow')
df
#plt.plot(df)
# nothing to see... in iPython you need to specify where the chart will display, usually it's in a new window
# to see them 'inline' use:
%matplotlib inline
# that's better, try other plots, scatter is popular, also boxplot
```
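For example, a quick way to see the seasonal pattern is a boxplot of temperature by month. A small sketch, assuming the column names produced by the download function above:
```
# Boxplot of temperature grouped by month (column names assumed from the CSV above)
df['Date/Time'] = pd.to_datetime(df['Date/Time'])
df['Month'] = df['Date/Time'].dt.month
df.boxplot(column='Temp (C)', by='Month', figsize=(10, 5))
```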
## Part 2. States and Territories of Australia
We are interested in getting State and Territory information from Wikipedia, however we do not want to copy and paste the table : )
Here is the URL
https://en.wikipedia.org/wiki/States_and_territories_of_Australia
We need two libraries to do the task:
Check documentations here:
* [urllib](https://docs.python.org/2/library/urllib.html)
* [BeautifulSoup](https://www.crummy.com/software/BeautifulSoup/bs4/doc/)
```
import sys
if sys.version_info[0] == 3:
from urllib.request import urlopen
else:
from urllib import urlopen
from bs4 import BeautifulSoup
```
We first save the link in wiki
```
wiki = "https://en.wikipedia.org/wiki/States_and_territories_of_Australia"
```
Then use urlopen to open the page.
If you get an "SSL: CERTIFICATE_VERIFY_FAILED" error (common on macOS), locate the "Install Certificates.command" file that ships with your Python installation and run it to update the certificates. That should solve the problem.
```
page = urlopen(wiki)
if sys.version_info[0] == 3:
page = page.read()
```
You will meet BeautifulSoup later in this subject, so don't worry if you feel uncomfortable with it now. You can always revisit.
We begin by reading in the source code and creating a Beautiful Soup object with the BeautifulSoup function.
```
soup = BeautifulSoup(page, "lxml")
```
Then we print and see.
```
print (soup.prettify())
```
For those who do not know much about HTML, this might be a bit overwhelming, but essentially it contains lots of tags in angled brackets providing structural and formatting information that we do not care much about here. What we need is the table.
Let's first check the title.
```
soup.title.string
```
It looks fine, so next we would like to find the table.
Let's try to extract all content within the 'table' tags.
```
all_tables = soup.findAll('table')
print(all_tables)
```
This returns a collection of tag objects. It seems that most of the information is useless and it's getting hard to hunt for the right table. A search online turned up a helpful walkthrough here:
https://adesquared.wordpress.com/2013/06/16/using-python-beautifulsoup-to-scrape-a-wikipedia-table/
The class is "wikitable sortable"! Let's give it a try.
```
right_table=soup.find('table', class_='wikitable sortable')
print (right_table)
```
Next we need to extract the table header row by finding the first 'tr' tag.
```
head_row = right_table.find('tr')
print (head_row)
```
Then we extract the header names by iterating through each header cell and extracting its text.
The .findAll function returns a list containing all matching elements, which you can iterate through.
```
header_list = []
headers = head_row.findAll('th')
for header in headers:
#print header.find(text = True)
header_list.append(header.find(text = True))
header_list
```
We could iterate through this list and then extract the contents, but let's take a simple approach of extracting each column separately.
```
flag=[]
state=[]
abbrev = []
ISO = []
Postal =[]
Type = []
Capital = []
population = []
Area = []
for row in right_table.findAll("tr"):
cells = row.findAll('td')
    if len(cells) == 9:  # data rows have 9 'td' cells; the header row has none
flag.append(cells[0].find(text=True))
state.append(cells[1].find(text=True))
abbrev.append(cells[2].find(text=True))
ISO.append(cells[3].find(text=True))
Postal.append(cells[4].find(text=True))
Type.append(cells[5].find(text=True))
Capital.append(cells[6].find(text=True))
population.append(cells[7].find(text=True))
Area.append(cells[8].find(text=True))
```
Next we can add all the lists to a DataFrame as columns.
```
df_au = pd.DataFrame()
df_au[header_list[0]] = flag
df_au[header_list[1]] = state
df_au[header_list[2]]=abbrev
df_au[header_list[3]]=ISO
df_au[header_list[4]]=Postal
df_au[header_list[5]]=Type
df_au[header_list[6]]=Capital
df_au[header_list[7]]=population
df_au[header_list[8]]=Area
```
Done !
```
df_au
```
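As an aside, pandas can often parse such Wikipedia tables directly. A one-line sketch, assuming the lxml parser is installed and that the table of interest is the first "wikitable sortable" on the page:
```
# Parse the same Wikipedia table directly with pandas (requires lxml)
tables = pd.read_html(wiki, attrs={'class': 'wikitable sortable'})
df_au_alt = tables[0]
df_au_alt.head()
```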
## Part 3. Parsing XML files with BeautifulSoup
Now, we are going to demonstrate how to use BeautifulSoup to extract information from the XML file, called "Melbourne_bike_share.xml".
For the documentation of BeautifulSoup, please refer to it <a href="https://www.crummy.com/software/BeautifulSoup/bs4/doc/#find-all">official website</a>.
```
!pip install wget
import wget
link_to_data = 'https://github.com/tulip-lab/sit742/raw/master/Jupyter/data/Melbourne_bike_share.xml'
DataSet = wget.download(link_to_data)
!ls
from bs4 import BeautifulSoup
btree = BeautifulSoup(open("Melbourne_bike_share.xml"),"lxml-xml")
```
You can also print out the BeautifulSoup object by calling the <font color="blue">prettify()</font> function.
```
print(btree.prettify())
```
It is easy to see that the information we would like to extract is stored in the following tags:
<ul>
<li>id </li>
<li>featurename </li>
<li>terminalname </li>
<li>nbbikes </li>
<li>nbemptydoc </li>
<li>uploaddate </li>
<li>coordinates </li>
</ul>
Each record is stored in "<row> </row>". To extract information from those tags, except for "coordinates", we use the <font color="blue">find_all()</font> function. Its documentation can be found <a href="https://www.crummy.com/software/BeautifulSoup/bs4/doc/#find-all">here</a>.
```
featuretags = btree.find_all("featurename")
featuretags
```
The output shows that <font color="blue">find_all()</font> returns all 50 station names. Now, we need to exclude the tags and just keep the text stored between the tags.
```
for feature in featuretags:
print (feature.string)
```
Now, we can put all the above code together using list comprehensions.
```
featurenames = [feature.string for feature in btree.find_all("featurename")]
featurenames
```
Similarly, we can use the <font color = "blue">find_all()</font> function to extract the other information.
```
nbbikes = [feature.string for feature in btree.find_all("nbbikes")]
nbbikes
NBEmptydoc = [feature.string for feature in btree.find_all("nbemptydoc")]
NBEmptydoc
TerminalNames = [feature.string for feature in btree.find_all("terminalname")]
TerminalNames
UploadDate = [feature.string for feature in btree.find_all("uploaddate")]
UploadDate
ids = [feature.string for feature in btree.find_all("id")]
ids
```
Now, how can we extract the attribute values from the tag called "coordinates"?
```
lattitudes = [coord["latitude"] for coord in btree.find_all("coordinates")]
lattitudes
longitudes = [coord["longitude"] for coord in btree.find_all("coordinates")]
longitudes
```
After the extraction, we can put all the information in a Pandas DataFrame.
```
import pandas as pd
dataDict = {}
dataDict['Featurename'] = featurenames
dataDict['TerminalName'] = TerminalNames
dataDict['NBBikes'] = nbbikes
dataDict['NBEmptydoc'] = NBEmptydoc
dataDict['UploadDate'] = UploadDate
dataDict['lat'] = lattitudes
dataDict['lon'] = longitudes
df = pd.DataFrame(dataDict, index = ids)
df.index.name = 'ID'
df.head()
```
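Note that everything extracted from the XML is text, so the numeric columns are strings at this point. A small follow-up sketch to convert them, using the column names created above:
```
# Convert the numeric columns from strings to numbers
df[['NBBikes', 'NBEmptydoc', 'lat', 'lon']] = df[['NBBikes', 'NBEmptydoc', 'lat', 'lon']].astype(float)
df.dtypes
```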
|
github_jupyter
|
import pandas as pd
url_template = "http://climate.weather.gc.ca/climate_data/bulk_data_e.html?format=csv&stationID=5415&Year={year}&Month={month}&timeframe=1&submit=Download+Data"
url = url_template.format(month=3, year=2012)
url
weather_mar2012 = pd.read_csv(url, skiprows=15, index_col='Date/Time', parse_dates=True, encoding='latin1')
weather_mar2012.head()
weather_mar2012 = weather_mar2012.dropna(axis=1, how='any')
weather_mar2012.head()
weather_mar2012 = weather_mar2012.drop(['Year', 'Month', 'Day', 'Time'], axis=1)
weather_mar2012[:5]
def download_weather_month(year, month):
url = url_template.format(year=year, month=month)
weather_data = pd.read_csv(url, skiprows=15, index_col='Date/Time', parse_dates=True)
weather_data = weather_data.dropna(axis=1)
weather_data.columns = [col.replace('\xb0', '') for col in weather_data.columns]
weather_data = weather_data.drop(['Year', 'Day', 'Month', 'Time'], axis=1)
return weather_data
download_weather_month(2012, 1).head()
data_by_month = [download_weather_month(2012, i) for i in range(1, 13)]  # months 1 through 12
weather_2012 = pd.concat(data_by_month)
weather_2012.info()
weather_2012.to_csv('weather_2012.csv')
!ls
# plot that data
import matplotlib.pyplot as plt
# so now 'plt' means matplotlib.pyplot
df = pd.read_csv('weather_2012.csv', low_memory=False)
df.plot(kind='scatter',x='Dew Point Temp (C)',y='Rel Hum (%)',color='red')
df.plot(kind='scatter',x='Temp (C)',y='Wind Spd (km/h)',color='yellow')
df
#plt.plot(df)
# nothing to see... in iPython you need to specify where the chart will display, usually it's in a new window
# to see them 'inline' use:
%matplotlib inline
# that's better, try other plots, scatter is popular, also boxplot
import sys
if sys.version_info[0] == 3:
from urllib.request import urlopen
else:
from urllib import urlopen
from bs4 import BeautifulSoup
wiki = "https://en.wikipedia.org/wiki/States_and_territories_of_Australia"
page = urlopen(wiki)
if sys.version_info[0] == 3:
page = page.read()
soup = BeautifulSoup(page, "lxml")
print (soup.prettify())
soup.title.string
all_tables = soup.findAll('table')
print(all_tables)
right_table=soup.find('table', class_='wikitable sortable')
print (right_table)
head_row = right_table.find('tr')
print (head_row)
header_list = []
headers = head_row.findAll('th')
for header in headers:
#print header.find(text = True)
header_list.append(header.find(text = True))
header_list
flag=[]
state=[]
abbrev = []
ISO = []
Postal =[]
Type = []
Capital = []
population = []
Area = []
for row in right_table.findAll("tr"):
cells = row.findAll('td')
if len(cells) > 0 and len(cells) == 9:
flag.append(cells[0].find(text=True))
state.append(cells[1].find(text=True))
abbrev.append(cells[2].find(text=True))
ISO.append(cells[3].find(text=True))
Postal.append(cells[4].find(text=True))
Type.append(cells[5].find(text=True))
Capital.append(cells[6].find(text=True))
population.append(cells[7].find(text=True))
Area.append(cells[8].find(text=True))
df_au = pd.DataFrame()
df_au[header_list[0]] = flag
df_au[header_list[1]] = state
df_au[header_list[2]]=abbrev
df_au[header_list[3]]=ISO
df_au[header_list[4]]=Postal
df_au[header_list[5]]=Type
df_au[header_list[6]]=Capital
df_au[header_list[7]]=population
df_au[header_list[8]]=Area
df_au
!pip install wget
import wget
link_to_data = 'https://github.com/tulip-lab/sit742/raw/master/Jupyter/data/Melbourne_bike_share.xml'
DataSet = wget.download(link_to_data)
!ls
from bs4 import BeautifulSoup
btree = BeautifulSoup(open("Melbourne_bike_share.xml"),"lxml-xml")
print(btree.prettify())
featuretags = btree.find_all("featurename")
featuretags
for feature in featuretags:
print (feature.string)
featurenames = [feature.string for feature in btree.find_all("featurename")]
featurenames
nbbikes = [feature.string for feature in btree.find_all("nbbikes")]
nbbikes
NBEmptydoc = [feature.string for feature in btree.find_all("nbemptydoc")]
NBEmptydoc
TerminalNames = [feature.string for feature in btree.find_all("terminalname")]
TerminalNames
UploadDate = [feature.string for feature in btree.find_all("uploaddate")]
UploadDate
ids = [feature.string for feature in btree.find_all("id")]
ids
lattitudes = [coord["latitude"] for coord in btree.find_all("coordinates")]
lattitudes
longitudes = [coord["longitude"] for coord in btree.find_all("coordinates")]
longitudes
import pandas as pd
dataDict = {}
dataDict['Featurename'] = featurenames
dataDict['TerminalName'] = TerminalNames
dataDict['NBBikes'] = nbbikes
dataDict['NBEmptydoc'] = NBEmptydoc
dataDict['UploadDate'] = UploadDate
dataDict['lat'] = lattitudes
dataDict['lon'] = longitudes
df = pd.DataFrame(dataDict, index = ids)
df.index.name = 'ID'
df.head()
| 0.261519 | 0.94474 |
# _*Pricing Fixed-Income Assets*_
## Introduction
We seek to price a fixed-income asset knowing the distributions describing the relevant interest rates. The cash flows $c_t$ of the asset and the dates at which they occur are known. The total value $V$ of the asset is thus the expectation value of:
$$V = \sum_{t=1}^T \frac{c_t}{(1+r_t)^t}$$
Each cash flow is treated as a zero coupon bond with a corresponding interest rate $r_t$ that depends on its maturity. The user must specify the distribution modeling the uncertainty in each $r_t$ (possibly correlated) as well as the number of qubits they wish to use to sample each distribution. In this example we expand the value of the asset to first order in the interest rates $r_t$. This corresponds to studying the asset in terms of its duration.
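Concretely, with the affine map $\vec{r} = A\vec{x} + b$ introduced below, the first-order (duration) expansion of each discounted cash flow around $\vec{x} = 0$ reads
$$\frac{c_t}{(1+r_t)^t} \approx \frac{c_t}{(1+b_t)^t} - \frac{t \, c_t \, (A\vec{x})_t}{(1+b_t)^{t+1}}
\quad\Longrightarrow\quad
V \approx \sum_{t=1}^T \left[ \frac{c_t}{(1+b_t)^t} - \frac{t \, c_t \, (A\vec{x})_t}{(1+b_t)^{t+1}} \right],$$
which mirrors the linear approximation evaluated classically in the code further down.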
<br>
<br>
The approximation of the objective function follows the following paper:<br>
<a href="https://arxiv.org/abs/1806.06893">Quantum Risk Analysis. Woerner, Egger. 2018.</a>
```
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
from qiskit import Aer, QuantumCircuit
from qiskit.aqua.algorithms import IterativeAmplitudeEstimation
from qiskit.circuit.library import NormalDistribution
backend = Aer.get_backend('statevector_simulator')
```
### Uncertainty Model
We construct a circuit factory to load a multivariate normal random distribution in $d$ dimensions into a quantum state.
The distribution is truncated to a given box $\otimes_{i=1}^d [low_i, high_i]$ and discretized using $2^{n_i}$ grid points, where $n_i$ denotes the number of qubits used for dimension $i = 1,\ldots, d$.
The unitary operator corresponding to the circuit factory implements the following:
$$\big|0\rangle_{n_1}\ldots\big|0\rangle_{n_d} \mapsto \big|\psi\rangle = \sum_{i_1=0}^{2^{n_1}-1}\ldots\sum_{i_d=0}^{2^{n_d}-1} \sqrt{p_{i_1,\ldots,i_d}}\big|i_1\rangle_{n_1}\ldots\big|i_d\rangle_{n_d},$$
where $p_{i_1, ..., i_d}$ denote the probabilities corresponding to the truncated and discretized distribution and where $i_j$ is mapped to the right interval $[low_j, high_j]$ using the affine map:
$$ \{0, \ldots, 2^{n_{j}}-1\} \ni i_j \mapsto \frac{high_j - low_j}{2^{n_j} - 1} * i_j + low_j \in [low_j, high_j].$$
In addition to the uncertainty model, we can also apply an affine map, e.g. resulting from a principal component analysis. The interest rates used are then given by:
$$ \vec{r} = A * \vec{x} + b,$$
where $\vec{x} \in \otimes_{i=1}^d [low_i, high_i]$ follows the given random distribution.
```
# can be used in case a principal component analysis has been done to derive the uncertainty model, ignored in this example.
A = np.eye(2)
b = np.zeros(2)
# specify the number of qubits that are used to represent the different dimensions of the uncertainty model
num_qubits = [2, 2]
# specify the lower and upper bounds for the different dimensions
low = [0, 0]
high = [0.12, 0.24]
mu = [0.12, 0.24]
sigma = 0.01*np.eye(2)
# construct corresponding distribution
bounds = list(zip(low, high))
u = NormalDistribution(num_qubits, mu, sigma, bounds)
# plot contour of probability density function
x = np.linspace(low[0], high[0], 2**num_qubits[0])
y = np.linspace(low[1], high[1], 2**num_qubits[1])
z = u.probabilities.reshape(2**num_qubits[0], 2**num_qubits[1])
plt.contourf(x, y, z)
plt.xticks(x, size=15)
plt.yticks(y, size=15)
plt.grid()
plt.xlabel('$r_1$ (%)', size=15)
plt.ylabel('$r_2$ (%)', size=15)
plt.colorbar()
plt.show()
```
### Cash flow, payoff function, and exact expected value
In the following we define the cash flow per period, the resulting payoff function and evaluate the exact expected value.
For the payoff function we first use a first order approximation and then apply the same approximation technique as for the linear part of the payoff function of the [European Call Option](european_call_option_pricing.ipynb).
```
# specify cash flow
cf = [1.0, 2.0]
periods = range(1, len(cf) + 1)
# plot cash flow
plt.bar(periods, cf)
plt.xticks(periods, size=15)
plt.yticks(size=15)
plt.grid()
plt.xlabel('periods', size=15)
plt.ylabel('cashflow ($)', size=15)
plt.show()
# estimate real value
cnt = 0
exact_value = 0.0
for x1 in np.linspace(low[0], high[0], pow(2, num_qubits[0])):
for x2 in np.linspace(low[1], high[1], pow(2, num_qubits[1])):
prob = u.probabilities[cnt]
for t in range(len(cf)):
# evaluate linear approximation of real value w.r.t. interest rates
exact_value += prob * (cf[t]/pow(1 + b[t], t+1) - (t+1)*cf[t]*np.dot(A[:, t], np.asarray([x1, x2]))/pow(1 + b[t], t+2))
cnt += 1
print('Exact value: \t%.4f' % exact_value)
# specify approximation factor
c_approx = 0.125
# get fixed income circuit appfactory
from qiskit.finance.applications import FixedIncomeExpectedValue
fixed_income = FixedIncomeExpectedValue(num_qubits, A, b, cf, c_approx, bounds)
fixed_income.draw()
state_preparation = QuantumCircuit(fixed_income.num_qubits)
# load probability distribution
state_preparation.append(u, range(u.num_qubits))
# apply function
state_preparation.append(fixed_income, range(fixed_income.num_qubits))
state_preparation.draw()
# set target precision and confidence level
epsilon = 0.01
alpha = 0.05
# set objective qubit
objective = u.num_qubits
# construct amplitude estimation
ae = IterativeAmplitudeEstimation(epsilon=epsilon, alpha=alpha,
state_preparation=state_preparation,
objective_qubits=[objective],
post_processing=fixed_income.post_processing)
result = ae.run(quantum_instance=Aer.get_backend('qasm_simulator'), shots=100)
conf_int = np.array(result['confidence_interval'])
print('Exact value: \t%.4f' % exact_value)
print('Estimated value: \t%.4f' % (result['estimation']))
print('Confidence interval:\t[%.4f, %.4f]' % tuple(conf_int))
import qiskit.tools.jupyter
%qiskit_version_table
%qiskit_copyright
```
|
github_jupyter
|
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
from qiskit import Aer, QuantumCircuit
from qiskit.aqua.algorithms import IterativeAmplitudeEstimation
from qiskit.circuit.library import NormalDistribution
backend = Aer.get_backend('statevector_simulator')
# can be used in case a principal component analysis has been done to derive the uncertainty model, ignored in this example.
A = np.eye(2)
b = np.zeros(2)
# specify the number of qubits that are used to represent the different dimenions of the uncertainty model
num_qubits = [2, 2]
# specify the lower and upper bounds for the different dimension
low = [0, 0]
high = [0.12, 0.24]
mu = [0.12, 0.24]
sigma = 0.01*np.eye(2)
# construct corresponding distribution
bounds = list(zip(low, high))
u = NormalDistribution(num_qubits, mu, sigma, bounds)
# plot contour of probability density function
x = np.linspace(low[0], high[0], 2**num_qubits[0])
y = np.linspace(low[1], high[1], 2**num_qubits[1])
z = u.probabilities.reshape(2**num_qubits[0], 2**num_qubits[1])
plt.contourf(x, y, z)
plt.xticks(x, size=15)
plt.yticks(y, size=15)
plt.grid()
plt.xlabel('$r_1$ (%)', size=15)
plt.ylabel('$r_2$ (%)', size=15)
plt.colorbar()
plt.show()
# specify cash flow
cf = [1.0, 2.0]
periods = range(1, len(cf) + 1)
# plot cash flow
plt.bar(periods, cf)
plt.xticks(periods, size=15)
plt.yticks(size=15)
plt.grid()
plt.xlabel('periods', size=15)
plt.ylabel('cashflow ($)', size=15)
plt.show()
# estimate real value
cnt = 0
exact_value = 0.0
for x1 in np.linspace(low[0], high[0], pow(2, num_qubits[0])):
for x2 in np.linspace(low[1], high[1], pow(2, num_qubits[1])):
prob = u.probabilities[cnt]
for t in range(len(cf)):
# evaluate linear approximation of real value w.r.t. interest rates
exact_value += prob * (cf[t]/pow(1 + b[t], t+1) - (t+1)*cf[t]*np.dot(A[:, t], np.asarray([x1, x2]))/pow(1 + b[t], t+2))
cnt += 1
print('Exact value: \t%.4f' % exact_value)
# specify approximation factor
c_approx = 0.125
# get fixed income circuit appfactory
from qiskit.finance.applications import FixedIncomeExpectedValue
fixed_income = FixedIncomeExpectedValue(num_qubits, A, b, cf, c_approx, bounds)
fixed_income.draw()
state_preparation = QuantumCircuit(fixed_income.num_qubits)
# load probability distribution
state_preparation.append(u, range(u.num_qubits))
# apply function
state_preparation.append(fixed_income, range(fixed_income.num_qubits))
state_preparation.draw()
# set target precision and confidence level
epsilon = 0.01
alpha = 0.05
# set objective qubit
objective = u.num_qubits
# construct amplitude estimation
ae = IterativeAmplitudeEstimation(epsilon=epsilon, alpha=alpha,
state_preparation=state_preparation,
objective_qubits=[objective],
post_processing=fixed_income.post_processing)
result = ae.run(quantum_instance=Aer.get_backend('qasm_simulator'), shots=100)
conf_int = np.array(result['confidence_interval'])
print('Exact value: \t%.4f' % exact_value)
print('Estimated value: \t%.4f' % (result['estimation']))
print('Confidence interval:\t[%.4f, %.4f]' % tuple(conf_int))
import qiskit.tools.jupyter
%qiskit_version_table
%qiskit_copyright
| 0.698227 | 0.988039 |
# Lesson 0 (Aula 0)
This lesson covers some basic programming concepts. There are also several support resources available on the internet, such as:<br/>
https://docs.python.org/3/tutorial/index.html (official tutorials)<br/> https://docs.python.org/3/library/index.html (official function reference)<br/>
https://www.learnpython.org/ (interactive tutorial)<br/>
www.urionlinejudge.com.br (exercises and challenges)
## Variables
Variables are programming entities that store values. Their names are defined by the user and their type depends on how they are used (the type is deduced dynamically). To declare a variable named X holding the value 5, simply write:
```
X = 5
```
To find out the type of a variable, we can inspect it with the $type$ function:
```
type(X)
```
"int" quer dizer que o tipo รฉ $integer$, ou seja, inteiro. Se quisermos que a variรกvel aceite numeros reais, temos:
```
Y = 5.0
type(Y)
```
"float" quer dizer ponto flutuante. Eles podem representa nรบmeros fracionรกrios.
Podemos tambรฉm forรงar o tipo de uma variรกvel
```
A = int(4)
B = int(5.3)
C = float(3)
D = float(4.3)
E = str('sou uma lista de caracteres')
F = str(5.35153)
print ('O valor de A eh: ' + str(A) + ' e seu tipo eh: ' + str(type(A)))
print ('O valor de B eh: ' + str(B) + ' e seu tipo eh: ' + str(type(B)) +
' note que seu valor foi arredondado para o inteiro mais proximo')
print ('O valor de C eh: ' + str(C) + ' e seu tipo eh: ' + str(type(C)))
print ('O valor de D eh: ' + str(D) + ' e seu tipo eh: ' + str(type(D)))
print ('O valor de D eh: ' + str(E) + ' e seu tipo eh: ' + str(type(E)))
print ('O valor de D eh: ' + str(F) + ' e seu tipo eh: ' + str(type(F)))
```
Tambรฉm podemos representar nรบmeros complexos:
```
num = complex('4+3j')
print (num)
```
Representamos um nรบmero complexo com parte real igual a 4 e parte imaginรกria igual a 3, sendo "j" variรกvel complexa $\sqrt(-1)$.
### Operaรงรตes Bรกsicas
Podemos realizar operaรงรตes algรฉbricas e lรณgicas sobre as variรกveis.
```
A = 5+2 # soma
B = 5-2 # subtracao
C = 5*2 # multiplicacao
D = 5/2 # divisao
E = 5%2 # resto
G = 5**2# exponencial. CUIDADO: 5^2 em python significa a operaรงรฃo binรกrio XOR, e nรฃo exponencial
```
Operaรงรตes de comparaรงรฃo:
```
print(A == B) # igualdade (note que o = serve para atribuir valores)
print(A != B) # nao igual
print(A > B) # maior
print(A < B) # menor
print(A >= B) # maior igual
print(A <= B) # menor igual
```
The True and False values above are the result of the comparison operations based on the values of A and B defined earlier.
Boolean logic operations are also supported.
```
A = True
B = False
print(A and B)
print(A and not B)
print(A or B)
```
Para mais informaรงรตes sobre operadores veja o link: https://www.tutorialspoint.com/python/python_basic_operators.htm
### Containers
Python suporta diversas estruturas para armazenamento de variรกveis de forma estruturada. A mais comum รฉ a lista, que pode ser definida das seguintes formas:
```
lista = list([1,2,3,47,3]) # com o comando list explicito
lista2 = [1,4214,1242,124,6] # forma implicita do comando
print('A lista contem ' + str(len(lista)) + ' elementos, sendo eles: ' + str(lista))
print('O primeiro item da lista e ' + str(lista[0]) + ' e o ultimo e: ' + str(lista[-1]))
lista[3] = 77 # modifica o elemento na posicao 3 (o quarto elemento a partir do 0)
print('O quarto elemento foi modificado para: ' + str(lista[3]))
lista.append(99) # insere um elemento no fim da lista
print('O novo ultimo elemento agora e: '+ str(lista[-1]))
lista.insert(2, 101) # insere o valor 101 na posicao 2
print('A lista agora tem os seguintes valores: ' + str(lista))
```
Tambรฉm podemos criar listas que nรฃo podem ser modificadas. Sรฃo chamadas de tuplas. Sรฃo รบteis para quando queremos garantir que a estrutura nรฃo serรก modificada.
```
tupla = (32,1,23,4)
#tupla[2] = 7 # essa linha causaria um erro
```
ร considerado boa prรกtica utilizar listas para itens de mesmo tipo e tuplas para conjuntos heterogรชneos.
```
dia = (25, 'janeiro', 2018)
```
### Vectors and matrices
We could try to represent vectors and matrices with lists. For example:
```
A = [1,2,3,4]
B = [1,2,3,4]
print(A+B)
```
We see that lists do not behave the way we expect vectors to. Instead of adding the vectors element by element, Python concatenates the two lists. Let's try multiplying by a scalar value:
```
print(A*3)
```
Again, the expected behavior was not obtained: Python simply replicated the list 3 times.
The mathematical notions of vectors and matrices are not natively supported by Python. So we either need to program Python to do what we expect, or simply import data structures for vectors and matrices from somewhere external, called a library in programming. In Python, the way to bring in external components is through $import$ statements:
```
import numpy as np
```
This command imports the NumPy library, which defines matrix and vector structures, and includes it in our program under the name "np". To define a vector we can now simply write:
```
vetorlin = np.array([1,2,3,4]) # Vetor linha: 1 linha e 4 colunas
print(vetorlin)
```
Repare que a funรงรฃo $array$ recebe uma lista de nรบmeros e cria uma matriz ou vetor. Para o caso de vetorlin, onde temos uma linha, utilizamos apenas uma lista.
Para criar uma matriz com mais de uma linha, devemos fornecer mรบltiplas listas para a funรงรฃo $array$. Por exemplo, para criar o vetor $vetorlin^T$, devemos passar 4 listas com um elemento em casa. Repare que o conjunto de listas deve estar contido dentro de uma tupla (por isso temos dois parรชntesis de cada lado):
```
vetorcol = np.array(([1],[2],[3],[4])) # Vetor coluna: 4 linhas e 1 coluna
print(vetorcol)
```
And for the case of a matrix:
```
matriz = np.array(([1,2,3,4],[5,6,7,8])) # Matrix 2x4: 2 linhas e 4 colunas
print(matriz)
print('O vetor linha tem dimensao ' + str(vetorlin.shape) +' e tipo '+ str(vetorlin.dtype))
```
Note that the $shape$ attribute of the variable $vetorlin$ is a tuple where the first element is the number of rows and the second the number of columns (since it has only 1 column, that parameter is omitted). The type $int64$ means it is an integer represented using 64 bits.
```
print('O item na posicao linha,coluna:1,3 na matriz eh: ' + str(matriz[1,3]))
```
Lists, tuples, matrices and all other structures holding multiple values are indexed starting from 0.
When matrices have the same number of elements, they also support algebraic and logical operations:
```
A = np.array(([1,2],[3,4]))
B = np.array(([4,3],[2,1]))
print(A)
print(B)
```
Now check whether the matrices A and B behave as expected. Also look up how to perform the dot product with the NumPy library (a sketch of what to try is shown below).
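For reference, a minimal sketch of the element-wise operations and the dot product, using the A and B defined above:
```
print(A + B)        # element-wise sum
print(A * B)        # element-wise product
print(np.dot(A, B)) # matrix (dot) product; A @ B is equivalent in Python 3.5+
```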
## Loops
Programs generally perform a large number of repetitive operations on large data sets. For example, if we wanted to print all the elements of the variable "lista", we would have to:
```
print(lista[0])
print(lista[1])
print(lista[2])
print(lista[3])
print(lista[4])
print(lista[5])
```
To avoid repeating code (what if the list had a thousand elements?), we use loop structures. Their form is:
```
for elemento in lista:
print(elemento)
```
Similarly, we can obtain the current index and element for each "round" of the loop through the $enumerate$ function.
```
for indice, elemento in enumerate(lista):
print('O elemento na posicao ' + str(indice) + ' tem valor: ' + str(elemento))
```
We can use loops to perform operations that depend on all the elements of a list. For example, to sum all the items in a list:
```
soma = 0
for elemento in lista:
soma = soma + elemento
print(soma)
```
## Conditionals
We can also execute a certain piece of code only if a condition is met:
```
if (matriz[1,1] > 3):
print('O valor ' + str(matriz[1,1]) + ' eh maior que 3')
elif (matriz[1,1] < 3):
print('O valor ' + str(matriz[1,1]) + ' eh menor que 3')
else:
print('O valor ' + str(matriz[1,1]) + ' soh pode ser 3')
```
## Funรงรตes
ร comum querermos utilizar uma mesma funcionalidade em diversos pontos do cรณdigo. Funรงรตes permitem que isso aconteรงa. Elas sรฃo entidades que recebem um conjunto de dados ou parรขmetros de entrada e retorna zero ou mais valores de saรญda. Por exemplo a funรงรฃo $print$ que temos utilizado recebe um vetor de caracteres de entrada e sua saรญda รฉ texto formatado na tela.
Para se definir uma funรงรฃo basta:
```
def soma(parametro1, parametro2, parametro3):
# Faz alguma coisa com os parametros
return parametro1+parametro2+parametro3
```
A funรงรฃo acima realiza a soma de 3 paramรขmetros. Para usรก-la:
```
print(soma(3,5,6))
```
## Exercรญcio 1
Faรงa uma funรงรฃo que calcula seu RSG para um dado semestre. A funรงรฃo deve receber valores de crรฉdito e nota final correspondentes e retornar um valor de RSG de 0 a 5. Considere que o semestre terรก apenas 3 disciplinas. Dica: crie primeiro uma funรงรฃo que recebe uma nota de 0 a 100 e gera um conceito (use 5 para A, 4 para B e assim por diante).
## Exercรญcio 2
Adapte a funรงรฃo do Ex. 1 para aceitar como entrada um nรบmero qualquer de disciplinas. Dica: vocรช pode passar uma matriz onde cada linha representa uma disciplina, a primeira coluna representa o nรบmero de crรฉditos e a segunda coluna representa a nota final. Vocรช tambรฉm pode passar uma lista de nรบmero de crรฉditos e outra lista com as notas.
Abaixo listas de notas e crรฉditos para testar suas funรงรตes:
Grades = 90, 50, 40, 30
Credits = 12, 8, 6, 4
RSG = 2.2666
-----------------
Grades = 43
Credits = 4
RSG = 0
-----------------
Grades = 100, 98, 97, 95, 94, 80, 74, 68, 49
Credits = 1, 4, 6, 8, 12, 3, 6, 8, 12
RSG = 3.35
----------------
Grades = 91, 90, 89
Credits = 1, 2, 3
RSG = 4.5
---------------
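For reference, one possible sketch of a solution to Exercise 2. The grade bands used here (90-100 = 5, 80-89 = 4, 70-79 = 3, 60-69 = 2, 50-59 = 1, below 50 = 0) are an assumption inferred from the test cases above:
```
def grade_points(score):
    # Map a 0-100 grade to a 0-5 letter-grade value (band cut-offs assumed from the test cases)
    bands = [(90, 5), (80, 4), (70, 3), (60, 2), (50, 1)]
    for cutoff, points in bands:
        if score >= cutoff:
            return points
    return 0

def rsg(grades, credits):
    # Credit-weighted average of the letter-grade values
    total_points = sum(grade_points(g) * c for g, c in zip(grades, credits))
    return total_points / sum(credits)

print(rsg([90, 50, 40, 30], [12, 8, 6, 4]))  # expected roughly 2.2666
```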
|
github_jupyter
|
X = 5
type(X)
Y = 5.0
type(Y)
A = int(4)
B = int(5.3)
C = float(3)
D = float(4.3)
E = str('sou uma lista de caracteres')
F = str(5.35153)
print ('O valor de A eh: ' + str(A) + ' e seu tipo eh: ' + str(type(A)))
print ('O valor de B eh: ' + str(B) + ' e seu tipo eh: ' + str(type(B)) +
' note que seu valor foi arredondado para o inteiro mais proximo')
print ('O valor de C eh: ' + str(C) + ' e seu tipo eh: ' + str(type(C)))
print ('O valor de D eh: ' + str(D) + ' e seu tipo eh: ' + str(type(D)))
print ('O valor de D eh: ' + str(E) + ' e seu tipo eh: ' + str(type(E)))
print ('O valor de D eh: ' + str(F) + ' e seu tipo eh: ' + str(type(F)))
num = complex('4+3j')
print (num)
A = 5+2 # soma
B = 5-2 # subtracao
C = 5*2 # multiplicacao
D = 5/2 # divisao
E = 5%2 # resto
G = 5**2# exponencial. CUIDADO: 5^2 em python significa a operaรงรฃo binรกrio XOR, e nรฃo exponencial
print(A == B) # igualdade (note que o = serve para atribuir valores)
print(A != B) # nao igual
print(A > B) # maior
print(A < B) # menor
print(A >= B) # maior igual
print(A <= B) # menor igual
A = True
B = False
print(A and B)
print(A and not B)
print(A or B)
lista = list([1,2,3,47,3]) # com o comando list explicito
lista2 = [1,4214,1242,124,6] # forma implicita do comando
print('A lista contem ' + str(len(lista)) + ' elementos, sendo eles: ' + str(lista))
print('O primeiro item da lista e ' + str(lista[0]) + ' e o ultimo e: ' + str(lista[-1]))
lista[3] = 77 # modifica o elemento na posicao 3 (o quarto elemento a partir do 0)
print('O quarto elemento foi modificado para: ' + str(lista[3]))
lista.append(99) # insere um elemento no fim da lista
print('O novo ultimo elemento agora e: '+ str(lista[-1]))
lista.insert(2, 101) # insere o valor 101 na posicao 2
print('A lista agora tem os seguintes valores: ' + str(lista))
tupla = (32,1,23,4)
#tupla[2] = 7 # essa linha causaria um erro
dia = (25, 'janeiro', 2018)
A = [1,2,3,4]
B = [1,2,3,4]
print(A+B)
print(A*3)
import numpy as np
vetorlin = np.array([1,2,3,4]) # Vetor linha: 1 linha e 4 colunas
print(vetorlin)
vetorcol = np.array(([1],[2],[3],[4])) # Vetor coluna: 4 linhas e 1 coluna
print(vetorcol)
matriz = np.array(([1,2,3,4],[5,6,7,8])) # Matrix 2x4: 2 linhas e 4 colunas
print(matriz)
print('O vetor linha tem dimensao ' + str(vetorlin.shape) +' e tipo '+ str(vetorlin.dtype))
print('O item na posicao linha,coluna:1,3 na matriz eh: ' + str(matriz[1,3]))
A = np.array(([1,2],[3,4]))
B = np.array(([4,3],[2,1]))
print(A)
print(B)
print(lista[0])
print(lista[1])
print(lista[2])
print(lista[3])
print(lista[4])
print(lista[5])
for elemento in lista:
print(elemento)
for indice, elemento in enumerate(lista):
print('O elemento na posicao ' + str(indice) + ' tem valor: ' + str(elemento))
soma = 0
for elemento in lista:
soma = soma + elemento
print(soma)
if (matriz[1,1] > 3):
print('O valor ' + str(matriz[1,1]) + ' eh maior que 3')
elif (matriz[1,1] < 3):
print('O valor ' + str(matriz[1,1]) + ' eh menor que 3')
else:
print('O valor ' + str(matriz[1,1]) + ' soh pode ser 3')
def soma(parametro1, parametro2, parametro3):
# Faz alguma coisa com os parametros
return parametro1+parametro2+parametro3
print(soma(3,5,6))
| 0.220762 | 0.974869 |
# Software Installation Tester
This notebook simply tests if external programs have been installed correctly and can run in a Binder environment.
```
import os.path as op
def check_path(path):
"""Check if the specified program is in the PATH and can be run in a shell."""
import shutil
checker = shutil.which(path)
if checker:
print('SUCCESS: {} found!'.format(path))
return checker
else:
raise OSError('FAILURE: unable to find {}'.format(path))
def check_path_run(cmd):
"""Check if a command can be run and print out the stdout and stderr."""
import shlex
import subprocess
program_and_args = shlex.split(cmd)
try:
command = subprocess.Popen(program_and_args, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = command.communicate()
except FileNotFoundError:
raise OSError('FAILURE: unable to run {}'.format(program_and_args[0]))
print('*****STDOUT*****')
print(out.decode("utf-8") )
print('--------------\n')
print('*****STDERR*****')
print(err.decode("utf-8") )
```
## DSSP
```
dssp_exec = 'dssp'
check_path(dssp_exec)
check_path_run(dssp_exec)
```
## MSMS
```
msms_exec = 'msms'
check_path(msms_exec)
check_path_run(msms_exec)
```
## STRIDE
```
stride_exec = 'stride'
check_path(stride_exec)
check_path_run(stride_exec)
```
## FreeSASA
```
freesasa_exec = 'freesasa'
check_path(freesasa_exec)
check_path_run(freesasa_exec)
```
## FATCAT
```
fatcat_exec = 'fatcat'
check_path(fatcat_exec)
check_path_run(fatcat_exec)
```
## SCRATCH
```
scratch_exec = 'scratch'
check_path(scratch_exec)
check_path_run(scratch_exec)
```
## EMBOSS
### pepstats
```
pepstats_exec = 'pepstats'
check_path(pepstats_exec)
check_path_run(pepstats_exec)
```
### needle
```
needle_exec = 'needle'
check_path(needle_exec)
check_path_run(needle_exec)
```
## nglview
```
import nglview
view = nglview.show_structure_file("../../ssbio/test/test_files/structures/1kf6.pdb")
view
```
## TMHMM
Please see [I-TASSER and TMHMM Install Guide](I-TASSER and TMHMM Install Guide.ipynb) for instructions on how to get this program installed.
```
tmhmm_exec = 'tmhmm'
check_path(tmhmm_exec)
check_path_run(tmhmm_exec)
```
## I-TASSER
Please see [I-TASSER and TMHMM Install Guide](I-TASSER and TMHMM Install Guide.ipynb) for instructions on how to get this program installed.
```
itasser_version = '5.1'
itasser_dir = op.expanduser('~/software/itasser/I-TASSER{}/'.format(itasser_version))
itasser_downloadlib = op.join(itasser_dir, 'download_lib.pl')
itasser_exec = op.join(itasser_dir, 'I-TASSERmod/runI-TASSER.pl')
check_path(itasser_downloadlib)
check_path(itasser_exec)
check_path_run(itasser_exec)
```
|
github_jupyter
|
import os.path as op
def check_path(path):
"""Check if the specified program is in the PATH and can be run in a shell."""
import shutil
checker = shutil.which(path)
if checker:
print('SUCCESS: {} found!'.format(path))
return checker
else:
raise OSError('FAILURE: unable to find {}'.format(path))
def check_path_run(cmd):
"""Check if a command can be run and print out the stdout and stderr."""
import shlex
import subprocess
program_and_args = shlex.split(cmd)
try:
command = subprocess.Popen(program_and_args, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = command.communicate()
except FileNotFoundError:
raise OSError('FAILURE: unable to run {}'.format(program_and_args[0]))
print('*****STDOUT*****')
print(out.decode("utf-8") )
print('--------------\n')
print('*****STDERR*****')
print(err.decode("utf-8") )
dssp_exec = 'dssp'
check_path(dssp_exec)
check_path_run(dssp_exec)
msms_exec = 'msms'
check_path(msms_exec)
check_path_run(msms_exec)
stride_exec = 'stride'
check_path(stride_exec)
check_path_run(stride_exec)
freesasa_exec = 'freesasa'
check_path(freesasa_exec)
check_path_run(freesasa_exec)
fatcat_exec = 'fatcat'
check_path(fatcat_exec)
check_path_run(fatcat_exec)
scratch_exec = 'scratch'
check_path(scratch_exec)
check_path_run(scratch_exec)
pepstats_exec = 'pepstats'
check_path(pepstats_exec)
check_path_run(pepstats_exec)
needle_exec = 'needle'
check_path(needle_exec)
check_path_run(needle_exec)
import nglview
view = nglview.show_structure_file("../../ssbio/test/test_files/structures/1kf6.pdb")
view
tmhmm_exec = 'tmhmm'
check_path(tmhmm_exec)
check_path_run(tmhmm_exec)
itasser_version = '5.1'
itasser_dir = op.expanduser('~/software/itasser/I-TASSER{}/'.format(itasser_version))
itasser_downloadlib = op.join(itasser_dir, 'download_lib.pl')
itasser_exec = op.join(itasser_dir, 'I-TASSERmod/runI-TASSER.pl')
check_path(itasser_downloadlib)
check_path(itasser_exec)
check_path_run(itasser_exec)
| 0.296451 | 0.801081 |
# Deploying image classification as REST API
This notebook shows how to publish a trained image classification model as a Rest API service. We will start with local deployment (which uses docker), and then illustrate how easy it is using the same approach to instead publish to an Azure Container Service (ACS) with Kubernetes container management.
## Prerequisites
- All numbered scripts until `5_evaluate.py` have to be executed as is described in part 1 of the documentation. This trains the DNN/SVM model which will get deployed.
- We assume the reader is familiar with the excellent deployment section of the [IRIS tutorial](https://docs.microsoft.com/en-us/azure/machine-learning/preview/tutorial-classifying-iris-part-3) and the [Model management setup how-to guide](https://docs.microsoft.com/en-us/azure/machine-learning/preview/model-management-configuration).
- Local deployment requires a Docker server to be installed and running on the local machine. See the [Docker homepage](https://www.docker.com) or the AML Workbench [Troubleshooting guide](https://docs.microsoft.com/en-us/azure/machine-learning/preview/known-issues-and-troubleshooting-guide) for installation instructions.
Note that, at the time of writing, Docker supports the Windows 10 operating system but not Windows Server 2016, which is what runs on the Windows Deep Learning Virtual Machines.
## Rest API Implementation
The Rest API is implemented in the `deploymain.py` script using these three functions (a minimal skeleton is sketched after this list):
- Init(): loads the trained DNN/SVM model into memory after the Rest API is deployed.
- Run(): takes an image (in base64 encoding) as input, runs the full image classification pipeline including evaluating the DNN/SVM model, and returns the classification scores as json encoded string.
- Main(): can be used locally to test and debug the init() and run() functions before deployment. It creates a random 5x5 pixel RGB image, converts it to a base64 encoded string, which is then used as input to the run() function.
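For orientation, a minimal skeleton of such a scoring script might look like the sketch below. This is illustrative only: the actual `deploymain.py` additionally loads the CNTK/SVM model files copied to the *tmp* folder and runs the full image classification pipeline, and the helper names `load_model_from_tmp_folder` and `score_image` are placeholders.
```python
import base64, json

trained_model = None

def init():
    # Runs once when the service starts: load the trained model into memory.
    global trained_model
    trained_model = load_model_from_tmp_folder()   # placeholder for the real model-loading code

def run(input_df):
    # Called per request: decode the base64 image, score it, return JSON.
    try:
        img_bytes = base64.b64decode(input_df['image base64 string'][0])
        scores = score_image(trained_model, img_bytes)  # placeholder for the real pipeline
        return json.dumps({'scores': scores})
    except Exception as e:
        return json.dumps({'error': str(e)})

if __name__ == '__main__':
    # Local test: build a small base64-encoded input and call init()/run() directly.
    init()
    print(run({'image base64 string': ['<base64 string of a small test image>']}))
```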
During deployment, a [swagger specification](https://en.wikipedia.org/wiki/OpenAPI_Specification) file can optionally be specified which defines how to describe and consume the web service. This swagger file is called `deployserviceschema.json` and was automatically created running `deploymain.py` (see the call to `generate_schema()`).
## Initialization
Various files are required by the web-service, including, among others, the trained DNN or SVM models. These files are copied in the code below to a local folder called *tmp*. During deployment, this folder is re-created on the node which runs the Rest API: note how the script `deploymain.py` loads files from that *tmp* folder.
```
import sys, os
sys.path.append(".")
sys.path.append("..")
sys.path.append("libraries")
sys.path.append("../libraries")
from helpers import *
from PARAMETERS import procDir
amlLogger = getAmlLogger()
if amlLogger != []:
amlLogger.log("amlrealworld.ImageClassificationUsingCntk.deploy", "true")
# Set files source and destination information
model_files_to_copy = ["cntk_fixed.model", "cntk_refined.model", "lutId2Label.pickle", "svm.np"]
code_files_to_copy = ["deploymain.py", "deployserviceschema.json"]
library_files_to_copy = ["helpers.py", "helpers_cntk.py", "utilities_CVbasic_v2.py", "utilities_general_v2.py"]
model_folder = procDir
code_folder = "../scripts"
library_folder = "../libraries"
deploy_folder = os.path.join(code_folder, "tmp")
# Copy files
makeDirectory(deploy_folder)
copyFiles(model_files_to_copy, model_folder, deploy_folder)
copyFiles(library_files_to_copy, library_folder, deploy_folder)
copyFiles(code_files_to_copy, code_folder, deploy_folder)
print("Files copied to deployment folder: " + deploy_folder)
```
## Local deployment
We now describe how to deploy the trained model as a Rest API which runs inside a docker container on the local machine.
All commands shown below need to be executed from the AML Workbench command prompt which can be opened via **File**->**Open Command Prompt**.
### Steps:
1. Change to the directory which contains the scripts:
```sh
cd scripts
```
2. Set deployment target to local. Note that the Workbench might prompt the user to login first using the command `az login`.
```sh
az ml env local
```
3. Create Rest API service (this can take 10-20 minutes). Use the command below as-is if `classifier=svm` in `deploymain.py`, but if `classifier=dnn` then change *cntk_fixed.model* to *cntk_refined.model*.
```sh
az ml service create realtime -c ../aml_config/conda_dependencies.yml -f deploymain.py -s deployserviceschema.json -n imgclassapi1 -v -r python -d tmp/helpers.py -d tmp/helpers_cntk.py -d tmp/utilities_CVbasic_v2.py -d tmp/utilities_general_v2.py -d tmp/svm.np -d tmp/lutId2Label.pickle --model-file tmp/cntk_fixed.model
```
4. Test the Rest API directly from the command prompt:
```sh
az ml service run realtime -i imgclassapi1 -d "{\"input_df\": [{\"image base64 string\": \"iVBORw0KGgoAAAANSUhEUgAAAAUAAAAFCAIAAAACDbGyAAAAFElEQVR4nGP8//8/AxJgYkAFpPIB6vYDBxf2tWQAAAAASUVORK5CYII=\"}]}"
```
5. Obtain information of the Rest API by running:
```sh
az ml service usage realtime -i imgclassapi1
```
Note that this also outputs a *scoring url* which looks like (but is not identical) to *http://127.0.0.1:32773/score*. This url can be used in script `6_callWebservice.py` which shows how to call the REST API from python rather than the command line.
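For reference, a minimal sketch of such a call from Python using the requests library. The scoring URL and image file below are placeholders; use the URL returned by the command above, and for a cluster deployment also add an `Authorization: Bearer <service key>` header.
```python
import base64, json, requests

scoring_url = 'http://127.0.0.1:32773/score'   # placeholder - use the url from 'az ml service usage'
with open('test_image.jpg', 'rb') as f:        # placeholder image file
    img_b64 = base64.b64encode(f.read()).decode('utf-8')

payload = {'input_df': [{'image base64 string': img_b64}]}
headers = {'Content-Type': 'application/json'}
response = requests.post(scoring_url, data=json.dumps(payload), headers=headers)
print(response.json())
```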
## Cloud deployment
Deploying the image classification service to Azure is very similar compared to the local deployment described above. The main difference is that an Azure Container Service needs to be created first. The actual deployment steps are then identical:
### Steps
- Simply follow the all the instructions in the [model management setup how-to guide](https://docs.microsoft.com/en-us/azure/machine-learning/preview/model-management-configuration) to set up the cloud deployment environment. Be careful when creating an Azure Container Service to delete it again once not needed anymore.
- Then run all steps except for step 2 as explained in the *local deployment* section.
**Example:**
1. Register the environment provider:
```sh
az provider register -n Microsoft.MachineLearningCompute
az provider register -n Microsoft.ContainerRegistry
```
2. Create an ACS cluster (may take 10-20 minutes to be completely provisioned):
```sh
az ml env setup --cluster -l westcentralus -n acsdeployment
```
3. Find resource group and cluster name in the list of compute targets (look for entry with "current mode: cluster"):
```sh
az ml env list
```
4. Specify which compute target to use for 'cloud' deployment (this also returns the url for the Kubernetes dashboard):
```sh
az ml env set -n pabuehledeployvienna -g pabuehledeployviennarg
```
5. Switch from local to cluster deployment:
```sh
az ml env cluster
```
6. Repeat all steps in the *local deployment* section except for step 2 which sets the local deployment target. The Rest API name should resemble *imgclassapi1.pabuehledeployvienna-bf329684.westcentralus*.
7. Update script `6_callWebservice.py` with the new scoring url and with the service key obtained by running:
```sh
az ml service keys realtime -i imgclassapi1.pabuehledeployvienna-bf327884.westcentralus
```
## Clean-up
Finally, we need to delete all files in the local (temporary) deployment folder *tmp*. If this is not done, the project directory will exceed the size limit of 25 MBytes.
```
deleteFiles(model_files_to_copy, deploy_folder)
deleteFiles(library_files_to_copy, deploy_folder)
deleteFiles(code_files_to_copy, deploy_folder)
print("Files deleted from deployment folder : " + deploy_folder)
```
## Debugging
These commands or guidelines can be helpful for understanding e.g. what caused a deployment error or why the deployed Rest API is not working:
- Inspect the docker log for errors:
```bash
docker ps -a # this shows all docker containers with their respective ID
docker logs <containerid>
```
- List all deployed services:
```bash
az ml service list realtime
```
- Wrap all code running in the Rest API within *try...except* statements. See the run() and init() functions in `deploymain.py` as an example. This way, should an error occur during an API call, a description of the error is returned to the user.
|
github_jupyter
|
import sys, os
sys.path.append(".")
sys.path.append("..")
sys.path.append("libraries")
sys.path.append("../libraries")
from helpers import *
from PARAMETERS import procDir
amlLogger = getAmlLogger()
if amlLogger != []:
amlLogger.log("amlrealworld.ImageClassificationUsingCntk.deploy", "true")
# Set files source and destination information
model_files_to_copy = ["cntk_fixed.model", "cntk_refined.model", "lutId2Label.pickle", "svm.np"]
code_files_to_copy = ["deploymain.py", "deployserviceschema.json"]
library_files_to_copy = ["helpers.py", "helpers_cntk.py", "utilities_CVbasic_v2.py", "utilities_general_v2.py"]
model_folder = procDir
code_folder = "../scripts"
library_folder = "../libraries"
deploy_folder = os.path.join(code_folder, "tmp")
# Copy files
makeDirectory(deploy_folder)
copyFiles(model_files_to_copy, model_folder, deploy_folder)
copyFiles(library_files_to_copy, library_folder, deploy_folder)
copyFiles(code_files_to_copy, code_folder, deploy_folder)
print("Files copied to deployment folder: " + deploy_folder)
2. Set deployment target to local. Note that the Workbench might prompt the user to login first using the command `az login`.
```sh
az ml env local
```
3. Create Rest API service (this can take 10-20 minutes). Use the command below as-is if `classifier=svm` in `deploymain.py`, but if `classifier=dnn` then change *cntk_fixed.model* to *cntk_refined.model*.
```sh
az ml service create realtime -c ../aml_config/conda_dependencies.yml -f deploymain.py -s deployserviceschema.json -n imgclassapi1 -v -r python -d tmp/helpers.py -d tmp/helpers_cntk.py -d tmp/utilities_CVbasic_v2.py -d tmp/utilities_general_v2.py -d tmp/svm.np -d tmp/lutId2Label.pickle --model-file tmp/cntk_fixed.model
```
4. Test the Rest API directly from the command prompt:
```sh
az ml service run realtime -i imgclassapi1 -d "{\"input_df\": [{\"image base64 string\": \"iVBORw0KGgoAAAANSUhEUgAAAAUAAAAFCAIAAAACDbGyAAAAFElEQVR4nGP8//8/AxJgYkAFpPIB6vYDBxf2tWQAAAAASUVORK5CYII=\"}]}"
```
5. Obtain information of the Rest API by running:
```sh
az ml service usage realtime -i imgclassapi1
```
Note that this also outputs a *scoring url* which looks like (but is not identical) to *http://127.0.0.1:32773/score*. This url can be used in script `6_callWebservice.py` which shows how to call the REST API from python rather than the command line.
## Cloud deployment
Deploying the image classification service to Azure is very similar compared to the local deployment described above. The main difference is that an Azure Container Service needs to be created first. The actual deployment steps are then identical:
### Steps
- Simply follow the all the instructions in the [model management setup how-to guide](https://docs.microsoft.com/en-us/azure/machine-learning/preview/model-management-configuration) to set up the cloud deployment environment. Be careful when creating an Azure Container Service to delete it again once not needed anymore.
- Then run all steps except for step 2 as explained in the *local deployment* section.
**Example:**
1. Register the environment provider:
```sh
az provider register -n Microsoft.MachineLearningCompute
az provider register -n Microsoft.ContainerRegistry
```
2. Create an ACS cluster (may take 10-20 minutes to be completely provisioned):
```sh
az ml env setup --cluster -l westcentralus -n acsdeployment
```
3. Find resource group and cluster name in the list of compute targets (look for entry with "current mode: cluster"):
```sh
az ml env list to see compute targes
```
4. Specify which compute target to use for 'cloud' deployment (this also returns the url for the Kubernetes dashboard):
```sh
az ml env set -n pabuehledeployvienna -g pabuehledeployviennarg
```
5. Switch from local to cluster deployment:
```sh
az ml env cluster
```
6. Repeat all steps in the *local deployment* section except for step 2 which sets the local deployment target. The Rest API name should resemble *imgclassapi1.pabuehledeployvienna-bf329684.westcentralus*.
7. Update script `6_` with the new scoring url and with the service key obtained by running:
```sh
az ml service keys realtime -i imgclassapi1.pabuehledeployvienna-bf327884.westcentralus
```
## Clean-up
Finally, we need delete all files in the local (temporary) deployment folder *tmp*. If this is not done, then the project directory will be above the size-limit of 25 MBytes.
| 0.612889 | 0.97441 |
```
# Data Source:
# https://www.citibikenyc.com/system-data
# https://s3.amazonaws.com/tripdata/index.html
import os
import pandas as pd
import numpy as np
import pickle
from collections import Counter
PATH = "./NYC-bike-share/2017/"
for dirpath, dirs, files in os.walk(PATH):
print('load files')
csv_files = [i for i in files if i.endswith('csv')]
# csv_files = csv_files[::-1]
csv_files
%%time
""" read all csv, concat them and remove necessary rows
"""
df = pd.DataFrame()
list_ = []
for f in csv_files:
if '2017' in f:
df_tmp = pd.read_csv('./NYC-bike-share/2017/' + f, index_col=None, header=None, low_memory=False)
list_.append(df_tmp)
df = pd.concat(list_)
df.columns = ['tripduration', 'starttime', 'stoptime', 'start station id',
'start station name', 'start station latitude',
'start station longitude', 'end station id', 'end station name',
'end station latitude', 'end station longitude', 'bikeid', 'usertype',
'birth year', 'gender']
%%time
df = df.reset_index()
%%time
idx = df[(df['tripduration'] == 'tripduration') | (df['tripduration'] == 'Trip Duration')].index
df.drop(idx, 0, inplace=True)
df.shape
df.head()
df[df['end station id']==3215].head(1)
df[df['end station id']==3478]
def dt2hrs(dt):
time = dt.split()[1]
if int(time[3:5]) < 30:
return int(time[:2])
else:
return int(time[:2]) + .5
%%time
df['starthour'] = df['starttime'].map(lambda x: dt2hrs(x))
%%time
# format="%Y-%m-%d" makes the process at least 10 times faster
df['starttime'] = pd.to_datetime(df['starttime'], format="%Y-%m-%d")
df['stoptime'] = pd.to_datetime(df['stoptime'], format="%Y-%m-%d")
%%time
df = df.sort_values('starttime').reset_index()
df.drop(['index'], 1, inplace=True)
df.isnull().sum()
# imputing missing values
df.loc[(df['usertype'].isnull()) & (df['birth year'].notnull()), 'usertype'] = 'Subscriber'
df.loc[(df['usertype'].isnull()) & (df['birth year'].isnull()), 'usertype'] = 'Customer'
df.loc[df['birth year'].isnull(), 'birth year'] = -1
%%time
int_cols = ['tripduration', 'start station id', 'end station id', 'bikeid', 'birth year', 'gender']
float_cols = ['start station latitude', 'start station longitude',
'end station latitude', 'end station longitude']
for c in int_cols:
df[c] = df[c].astype(int)
for c in float_cols:
df[c] = df[c].astype(float)
%%time
a = list(set(df['start station id']))
a = set([int(i) for i in a])
b = list(set(df['end station id']))
b = set([int(i) for i in b])
print(len(a), len(b), len(a.union(b)))
# station id: name dictionary
stations_df = df.groupby('end station id')['end station name'].first().reset_index()
stations_dict = dict(zip(stations_df['end station id'], stations_df['end station name']))
tmp = df.groupby(['end station id']).\
agg({'end station latitude': 'median',
'end station longitude': 'median'}).reset_index()
stations_latlng = dict()
for i in range(len(tmp)):
stations_latlng[tmp.loc[i, 'end station id']] = (tmp.loc[i, 'end station latitude'],
tmp.loc[i, 'end station longitude'])
# dictionary
pickle.dump(stations_dict, open('stations_dict.p', 'wb'))
pickle.dump(stations_latlng, open('stations_latlng.p', 'wb'))
# stations_dict = pickle.load(open('stations_dict.p', "rb"))
df.dtypes
```
## Save df to feather
```
%%time
df.drop('level_0', 1, inplace=True)
df.drop(['start station name', 'end station name'], 1, inplace=True)
df.drop(['start station latitude', 'start station longitude',
'end station latitude', 'end station longitude'], 1, inplace=True)
df = df.reset_index(drop=True)
df.to_feather('2017_all')  # to_feather has no index argument; it requires a default RangeIndex
```
## Add temperature/rain for each date in 2017
[Temperature Data Source](https://www.wunderground.com/history/airport/KNYC/2017/1/1/CustomHistory.html?dayend=31&monthend=12&yearend=2017&req_city=&req_state=&req_statename=&reqdb.zip=&reqdb.magic=&reqdb.wmo=)
```
import datetime
temp_2017_all = pd.read_csv('nyc_temp_2017_all.csv',)
temp_2017_all['Events'].fillna('None', inplace=True)
temp_2017_all.head()
# replace Trace with a small number
temp_2017_all.loc[temp_2017_all['Precip'].str.contains('T'), 'Events'] = 'None'
temp_2017_all.replace('T', 0.001, inplace=True)
temp_2017_all['Precip'] = temp_2017_all['Precip'].astype(float)
temp_2017_all['Rain'] = 0
temp_2017_all.loc[temp_2017_all['Events'].str.contains('Rain'), 'Rain'] = 1
temp_2017_all['Snow'] = 0
temp_2017_all.loc[temp_2017_all['Events'].str.contains('Snow'), 'Snow'] = 1
temp_2017_all['Fog'] = 0
temp_2017_all.loc[temp_2017_all['Events'].str.contains('Fog'), 'Fog'] = 1
temp_2017_all.drop(['Events'], 1, inplace=True)
# proper dates
temp_2017_all['2017'] = np.arange(datetime.date(2017, 1, 1), datetime.date(2018, 1, 1))
temp_2017_all.sample(5)
temp_2017_all.to_csv('nyc_temp_2017.csv', index=False)
```
```
from vpython import *
# Stars interacting gravitationally
# Bruce Sherwood
scene.width = scene.height = 600
# Display text below the 3D graphics:
scene.title = "Stars interacting gravitationally"
scene.caption = """Right button drag or Ctrl-drag to rotate "camera" to view scene.
To zoom, drag with middle button or Alt/Option depressed, or use scroll wheel.
On a two-button mouse, middle is left + right.
Touch screen: pinch/extend to zoom, swipe or two-finger rotate."""
Nstars = 20 # change this to have more or fewer stars
G = 6.7e-11 # Universal gravitational constant
# Typical values
Msun = 2E30
Rsun = 2E9
L = 4e10
vsun = 0.8*sqrt(G*Msun/Rsun)
scene.range = 2*L
scene.forward = vec(-1,-1,-1)
xaxis = curve(color=color.gray(0.5), radius=3e8)
xaxis.append(vec(0,0,0))
xaxis.append(vec(L,0,0))
yaxis = curve(color=color.gray(0.5), radius=3e8)
yaxis.append(vec(0,0,0))
yaxis.append(vec(0,L,0))
zaxis = curve(color=color.gray(0.5), radius=3e8)
zaxis.append(vec(0,0,0))
zaxis.append(vec(0,0,L))
Stars = []
star_colors = [color.red, color.green, color.blue,
color.yellow, color.cyan, color.magenta]
psum = vec(0,0,0)
for i in range(Nstars):
star = sphere(pos=L*vec.random(), make_trail=True, retain=150, trail_radius=3e8)
R = Rsun/2+Rsun*random()
star.radius = R
star.mass = Msun*(R/Rsun)**3
star.momentum = vec.random()*vsun*star.mass
star.color = star.trail_color = star_colors[i % 6]
Stars.append( star )
psum = psum + star.momentum
#make total initial momentum equal zero
for i in range(Nstars):
Stars[i].momentum = Stars[i].momentum - psum/Nstars
dt = 1000
hitlist = []
def computeForces():
global hitlist, Stars
hitlist = []
N = len(Stars)
for i in range(N):
si = Stars[i]
if si is None: continue
F = vec(0,0,0)
pos1 = si.pos
m1 = si.mass
radius = si.radius
for j in range(N):
if i == j: continue
sj = Stars[j]
if sj is None: continue
r = sj.pos - pos1
rmag2 = mag2(r)
if rmag2 <= (radius+sj.radius)**2: hitlist.append([i,j])
F = F + (G*m1*sj.mass/(rmag2**1.5))*r
si.momentum = si.momentum + F*dt
while True:
rate(100)
# Compute all forces on all stars
computeForces()
# Having updated all momenta, now update all positions
for star in Stars:
if star is None: continue
star.pos = star.pos + star.momentum*(dt/star.mass)
# If any collisions took place, merge those stars
hit = len(hitlist)-1
while hit > 0:
s1 = Stars[hitlist[hit][0]]
s2 = Stars[hitlist[hit][1]]
        if (s1 is None) or (s2 is None):
            # one of the two stars was already merged earlier this frame; skip this pair
            hit -= 1
            continue
mass = s1.mass + s2.mass
momentum = s1.momentum + s2.momentum
pos = (s1.mass*s1.pos + s2.mass*s2.pos) / mass
s1.color = s1.trail_color = (s1.mass*s1.color + s2.mass*s2.color) / mass
R = Rsun*(mass / Msun)**(1/3)
s1.clear_trail()
s2.clear_trail()
s2.visible = False
s1.mass = mass
s1.momentum = momentum
s1.pos = pos
s1.radius = R
Stars[hitlist[hit][1]] = None
hit -= 1
```
# Insurance cost prediction using linear regression
Make a submission here: https://jovian.ai/learn/deep-learning-with-pytorch-zero-to-gans/assignment/assignment-2-train-your-first-model
In this assignment we're going to use information like a person's age, sex, BMI, no. of children and smoking habit to predict the price of yearly medical bills. This kind of model is useful for insurance companies to determine the yearly insurance premium for a person. The dataset for this problem is taken from [Kaggle](https://www.kaggle.com/mirichoi0218/insurance).
We will create a model with the following steps:
1. Download and explore the dataset
2. Prepare the dataset for training
3. Create a linear regression model
4. Train the model to fit the data
5. Make predictions using the trained model
This assignment builds upon the concepts from the first 2 lessons. It will help to review these Jupyter notebooks:
- PyTorch basics: https://jovian.ai/aakashns/01-pytorch-basics
- Linear Regression: https://jovian.ai/aakashns/02-linear-regression
- Logistic Regression: https://jovian.ai/aakashns/03-logistic-regression
- Linear regression (minimal): https://jovian.ai/aakashns/housing-linear-minimal
- Logistic regression (minimal): https://jovian.ai/aakashns/mnist-logistic-minimal
As you go through this notebook, you will find a **???** in certain places. Your job is to replace the **???** with appropriate code or values, to ensure that the notebook runs properly end-to-end. In some cases, you'll be required to choose some hyperparameters (learning rate, batch size etc.). Try to experiment with the hyperparameters to get the lowest loss.
```
# Uncomment and run the appropriate command for your operating system, if required
# Linux / Binder
# !pip install numpy matplotlib pandas torch==1.7.0+cpu torchvision==0.8.1+cpu torchaudio==0.7.0 -f https://download.pytorch.org/whl/torch_stable.html
# Windows
# !pip install numpy matplotlib pandas torch==1.7.0+cpu torchvision==0.8.1+cpu torchaudio==0.7.0 -f https://download.pytorch.org/whl/torch_stable.html
# MacOS
# !pip install numpy matplotlib pandas torch torchvision torchaudio
import torch
import jovian
import torchvision
import torch.nn as nn
import pandas as pd
import matplotlib.pyplot as plt
import torch.nn.functional as F
from torchvision.datasets.utils import download_url
from torch.utils.data import DataLoader, TensorDataset, random_split
project_name='02-insurance-linear-regression' # will be used by jovian.commit
file_name="02-insurance-linear.ipynb"
```
## Step 1: Download and explore the data
Let us begin by downloading the data. We'll use the `download_url` function from PyTorch to get the data as a CSV (comma-separated values) file.
```
DATASET_URL = "https://hub.jovian.ml/wp-content/uploads/2020/05/insurance.csv"
DATA_FILENAME = "insurance.csv"
download_url(DATASET_URL, '.')
```
To load the dataset into memory, we'll use the `read_csv` function from the `pandas` library. The data will be loaded as a Pandas dataframe. See this short tutorial to learn more: https://data36.com/pandas-tutorial-1-basics-reading-data-files-dataframes-data-selection/
```
dataframe_raw = pd.read_csv(DATA_FILENAME)
dataframe_raw.head()
```
We're going to do a slight customization of the data, so that every participant receives a slightly different version of the dataset. Fill in your name below as a string (enter at least 5 characters).
```
your_name = "Faaizz" # at least 5 characters
```
The `customize_dataset` function will customize the dataset slightly using your name as a source of random numbers.
```
def customize_dataset(dataframe_raw, rand_str):
dataframe = dataframe_raw.copy(deep=True)
# drop some rows
dataframe = dataframe.sample(int(0.95*len(dataframe)), random_state=int(ord(rand_str[0])))
# scale input
dataframe.bmi = dataframe.bmi * ord(rand_str[1])/100.
# scale target
dataframe.charges = dataframe.charges * ord(rand_str[2])/100.
# drop column
if ord(rand_str[3]) % 2 == 1:
dataframe = dataframe.drop(['region'], axis=1)
return dataframe
dataframe = customize_dataset(dataframe_raw, your_name)
dataframe.head()
```
Let us answer some basic questions about the dataset.
**Q: How many rows does the dataset have?**
```
dataframe.info()
dataframe.describe()
num_rows = 1271
print(num_rows)
```
**Q: How many columns does the dataset have?**
```
dataframe.columns
num_cols = 6
print(num_cols)
```
**Q: What are the column titles of the input variables?**
```
list(dataframe.columns)
input_cols = [col for col in dataframe.columns if col != "charges"]  # exclude the target column from the inputs
```
**Q: Which of the input columns are non-numeric or categorical variables?**
Hint: `sex` is one of them. List the columns that are not numbers.
```
dataframe.info()
categorical_cols = ["sex", "smoker"]
```
**Q: What are the column titles of output/target variable(s)?**
```
output_cols = ["charges"]
```
**Q: (Optional) What is the minimum, maximum and average value of the `charges` column? Can you show the distribution of values in a graph?**
Use this data visualization cheatsheet for reference: https://jovian.ml/aakashns/dataviz-cheatsheet
```
dataframe.describe()
dataframe["charges"].max()
dataframe["charges"].min()
# Write your answer here
dataframe["charges"].mean()
# Import plotting libraries
import matplotlib
import matplotlib.pyplot as plt
import seaborn as sns
# Configure plot style
sns.set_style("darkgrid")
matplotlib.rcParams['figure.figsize'] = (9, 5)
# Make histogram plot of charges
plt.title("Distribution of Mdeical Charges")
sns.distplot(dataframe["charges"], kde=False)
```
Remember to commit your notebook to Jovian after every step, so that you don't lose your work.
```
!pip install jovian --upgrade -q
import jovian
jovian.commit(filename=file_name)
```
## Step 2: Prepare the dataset for training
We need to convert the data from the Pandas dataframe into PyTorch tensors for training. To do this, the first step is to convert it to numpy arrays. If you've filled out `input_cols`, `categorical_cols` and `output_cols` correctly, the following function will perform the conversion to numpy arrays.
```
def dataframe_to_arrays(dataframe):
# Make a copy of the original dataframe
dataframe1 = dataframe.copy(deep=True)
# Convert non-numeric categorical columns to numbers
for col in categorical_cols:
dataframe1[col] = dataframe1[col].astype('category').cat.codes
# Extract input & outupts as numpy arrays
inputs_array = dataframe1[input_cols].to_numpy()
targets_array = dataframe1[output_cols].to_numpy()
return inputs_array, targets_array
```
Read through the [Pandas documentation](https://pandas.pydata.org/pandas-docs/stable/user_guide/categorical.html) to understand how we're converting categorical variables into numbers.
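As a toy illustration (not part of the assignment) of what `.astype('category').cat.codes` produces:
```python
# Toy example: pandas maps each string category to an integer code.
import pandas as pd

s = pd.Series(["male", "female", "female", "male"])
codes = s.astype("category").cat.codes
print(codes.tolist())  # [1, 0, 0, 1]; codes follow the alphabetical order of the categories
```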
```
inputs_array, targets_array = dataframe_to_arrays(dataframe)
inputs_array, targets_array
```
**Q: Convert the numpy arrays `inputs_array` and `targets_array` into PyTorch tensors. Make sure that the data type is `torch.float32`.**
```
inputs = torch.from_numpy(inputs_array).float()    # float32 input tensor
targets = torch.from_numpy(targets_array).float()  # float32 target tensor
inputs.dtype, targets.dtype
```
Next, we need to create PyTorch datasets & data loaders for training & validation. We'll start by creating a `TensorDataset`.
```
inputs.shape, targets.shape
dataset = TensorDataset(inputs, targets)
```
**Q: Pick a number between `0.1` and `0.2` to determine the fraction of data that will be used for creating the validation set. Then use `random_split` to create training & validation datasets.**
```
val_percent = 0.125 # between 0.1 and 0.2
val_size = int(num_rows * val_percent)
train_size = num_rows - val_size
train_ds, val_ds = torch.utils.data.random_split(dataset, [train_size, val_size]) # Use the random_split function to split dataset into 2 parts of the desired length
```
Finally, we can create data loaders for training & validation.
**Q: Pick a batch size for the data loader.**
```
train_size
batch_size = int(train_size/5)
train_loader = DataLoader(train_ds, batch_size, shuffle=True)
val_loader = DataLoader(val_ds, batch_size)
```
Let's look at a batch of data to verify everything is working fine so far.
```
for xb, yb in train_loader:
print("inputs:", xb)
print("targets:", yb)
break
```
Let's save our work by committing to Jovian.
```
jovian.commit(project=project_name, filename=file_name, environment=None)
```
## Step 3: Create a Linear Regression Model
Our model itself is a fairly straightforward linear regression (we'll build more complex models in the next assignment).
```
input_size = len(input_cols)
output_size = len(output_cols)
```
**Q: Complete the class definition below by filling out the constructor (`__init__`), `forward`, `training_step` and `validation_step` methods.**
Hint: Think carefully about picking a good loss function (it's not cross entropy). Maybe try 2-3 of them and see which one works best. See https://pytorch.org/docs/stable/nn.functional.html#loss-functions
```
class InsuranceModel(nn.Module):
def __init__(self):
super().__init__()
self.linear = nn.Linear(input_size, output_size) # fill this (hint: use input_size & output_size defined above)
def forward(self, xb):
out = self.linear(xb) # fill this
return out
def training_step(self, batch):
inputs, targets = batch
# Generate predictions
out = self(inputs)
# Calcuate loss
loss = F.l1_loss(out, targets) # fill this
return loss
def validation_step(self, batch):
inputs, targets = batch
# Generate predictions
out = self(inputs)
# Calculate loss
loss = F.l1_loss(out, targets) # fill this
return {'val_loss': loss.detach()}
def validation_epoch_end(self, outputs):
batch_losses = [x['val_loss'] for x in outputs]
epoch_loss = torch.stack(batch_losses).mean() # Combine losses
return {'val_loss': epoch_loss.item()}
def epoch_end(self, epoch, result, num_epochs):
# Print result every 20th epoch
if epoch == 0 or (epoch+1) % 20 == 0 or epoch == num_epochs-1:
print("Epoch [{}], val_loss: {:.4f}".format(epoch+1, result['val_loss']))
```
Let us create a model using the `InsuranceModel` class. You may need to come back later and re-run the next cell to reinitialize the model, in case the loss becomes `nan` or `infinity`.
```
model = InsuranceModel()
```
Let's check out the weights and biases of the model using `model.parameters`.
```
list(model.parameters())
```
One final commit before we train the model.
```
jovian.commit(project=project_name, filename=file_name, environment=None)
```
## Step 4: Train the model to fit the data
To train our model, we'll use the same `fit` function explained in the lecture. That's the benefit of defining a generic training loop - you can use it for any problem.
```
def evaluate(model, val_loader):
outputs = [model.validation_step(batch) for batch in val_loader]
return model.validation_epoch_end(outputs)
def fit(epochs, lr, model, train_loader, val_loader, opt_func=torch.optim.SGD):
history = []
optimizer = opt_func(model.parameters(), lr)
for epoch in range(epochs):
# Training Phase
for batch in train_loader:
loss = model.training_step(batch)
loss.backward()
optimizer.step()
optimizer.zero_grad()
# Validation phase
result = evaluate(model, val_loader)
model.epoch_end(epoch, result, epochs)
history.append(result)
return history
```
**Q: Use the `evaluate` function to calculate the loss on the validation set before training.**
```
result = evaluate(model, val_loader)  # Use the evaluate function
print(result)
```
We are now ready to train the model. You may need to run the training loop many times, for different numbers of epochs and with different learning rates, to get a good result. Also, if your loss becomes too large (or `nan`), you may have to re-initialize the model by running the cell `model = InsuranceModel()`. Experiment with this for a while, and try to get to as low a loss as possible.
**Q: Train the model 4-5 times with different learning rates & for different number of epochs.**
Hint: Vary learning rates by orders of 10 (e.g. `1e-2`, `1e-3`, `1e-4`, `1e-5`, `1e-6`) to figure out what works.
```
model = InsuranceModel()
result = evaluate(model, val_loader)  # Use the evaluate function
print(result)
epochs = 1000
lr = 1e-6
history1 = fit(epochs, lr, model, train_loader, val_loader)
epochs = 1000
lr = 1e-3
history2 = fit(epochs, lr, model, train_loader, val_loader)
epochs = 2000
lr = 1e-7
history3 = fit(epochs, lr, model, train_loader, val_loader)
epochs = 2000
lr = 1e-9
history4 = fit(epochs, lr, model, train_loader, val_loader)
epochs = 1000
lr = 1e-10
history5 = fit(epochs, lr, model, train_loader, val_loader)
```
**Q: What is the final validation loss of your model?**
```
history5[len(history5)-1]["val_loss"]
val_loss = history5[len(history5)-1]["val_loss"]
```
Let's log the final validation loss to Jovian and commit the notebook
```
jovian.log_metrics(val_loss=val_loss)
jovian.commit(project=project_name, filename=file_name, environment=None)
```
Now scroll back up, re-initialize the model, and try different sets of values for the batch size, number of epochs, learning rate etc. Commit each experiment and use the "Compare" and "View Diff" options on Jovian to compare the different results. One possible way to organize such a sweep is sketched below.
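This is only a sketch; it reuses the `InsuranceModel`, `fit`, `evaluate`, `train_loader` and `val_loader` defined above, and the candidate settings are arbitrary examples:
```python
# Sketch: try a few (learning rate, epochs) combinations and keep track of the best one.
best_loss, best_settings = float("inf"), None
for lr, epochs in [(1e-2, 200), (1e-3, 500), (1e-4, 1000)]:
    model = InsuranceModel()                          # re-initialize the model for every run
    fit(epochs, lr, model, train_loader, val_loader)  # train with this setting
    loss = evaluate(model, val_loader)["val_loss"]    # validation loss after training
    print(f"lr={lr}, epochs={epochs} -> val_loss={loss:.2f}")
    if loss < best_loss:
        best_loss, best_settings = loss, (lr, epochs)
print("Best setting:", best_settings, "with val_loss", best_loss)
```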
## Step 5: Make predictions using the trained model
**Q: Complete the following function definition to make predictions on a single input**
```
def predict_single(input, target, model):
inputs = input.unsqueeze(0)
    predictions = model(inputs)            # run the model on the batched input
prediction = predictions[0].detach()
print("Input:", input)
print("Target:", target)
print("Prediction:", prediction)
input, target = val_ds[0]
predict_single(input, target, model)
input, target = val_ds[10]
predict_single(input, target, model)
input, target = val_ds[23]
predict_single(input, target, model)
```
Are you happy with your model's predictions? Try to improve them further.
## (Optional) Step 6: Try another dataset & blog about it
While this last step is optional for the submission of your assignment, we highly recommend that you do it. Try to replicate this notebook for a different linear regression or logistic regression problem. This will help solidify your understanding, and give you a chance to differentiate the generic patterns in machine learning from problem-specific details. You can use one of these starter notebooks (just change the dataset):
- Linear regression (minimal): https://jovian.ai/aakashns/housing-linear-minimal
- Logistic regression (minimal): https://jovian.ai/aakashns/mnist-logistic-minimal
Here are some sources to find good datasets:
- https://lionbridge.ai/datasets/10-open-datasets-for-linear-regression/
- https://www.kaggle.com/rtatman/datasets-for-regression-analysis
- https://archive.ics.uci.edu/ml/datasets.php?format=&task=reg&att=&area=&numAtt=&numIns=&type=&sort=nameUp&view=table
- https://people.sc.fsu.edu/~jburkardt/datasets/regression/regression.html
- https://archive.ics.uci.edu/ml/datasets/wine+quality
- https://pytorch.org/docs/stable/torchvision/datasets.html
We also recommend that you write a blog about your approach to the problem. Here is a suggested structure for your post (feel free to experiment with it):
- Interesting title & subtitle
- Overview of what the blog covers (which dataset, linear regression or logistic regression, intro to PyTorch)
- Downloading & exploring the data
- Preparing the data for training
- Creating a model using PyTorch
- Training the model to fit the data
- Your thoughts on how to experiment with different hyperparameters to reduce loss
- Making predictions using the model
As with the previous assignment, you can [embed Jupyter notebook cells & outputs from Jovian](https://medium.com/jovianml/share-and-embed-jupyter-notebooks-online-with-jovian-ml-df709a03064e) into your blog.
Don't forget to share your work on the forum: https://jovian.ai/forum/t/linear-regression-and-logistic-regression-notebooks-and-blog-posts/14039
```
jovian.commit(project=project_name, filename=file_name, environment=None)
```
# Navigation
---
In this notebook, you will learn how to use the Unity ML-Agents environment for the first project of the [Deep Reinforcement Learning Nanodegree](https://www.udacity.com/course/deep-reinforcement-learning-nanodegree--nd893).
### 1. Start the Environment
We begin by importing some necessary packages. If the code cell below returns an error, please revisit the project instructions to double-check that you have installed [Unity ML-Agents](https://github.com/Unity-Technologies/ml-agents/blob/master/docs/Installation.md) and [NumPy](http://www.numpy.org/).
```
from unityagents import UnityEnvironment
import numpy as np
```
Next, we will start the environment! **_Before running the code cell below_**, change the `file_name` parameter to match the location of the Unity environment that you downloaded.
- **Mac**: `"path/to/Banana.app"`
- **Windows** (x86): `"path/to/Banana_Windows_x86/Banana.exe"`
- **Windows** (x86_64): `"path/to/Banana_Windows_x86_64/Banana.exe"`
- **Linux** (x86): `"path/to/Banana_Linux/Banana.x86"`
- **Linux** (x86_64): `"path/to/Banana_Linux/Banana.x86_64"`
- **Linux** (x86, headless): `"path/to/Banana_Linux_NoVis/Banana.x86"`
- **Linux** (x86_64, headless): `"path/to/Banana_Linux_NoVis/Banana.x86_64"`
For instance, if you are using a Mac, then you downloaded `Banana.app`. If this file is in the same folder as the notebook, then the line below should appear as follows:
```
env = UnityEnvironment(file_name="Banana.app")
```
```
env = UnityEnvironment(file_name="Banana.app")
```
Environments contain **_brains_** which are responsible for deciding the actions of their associated agents. Here we check for the first brain available, and set it as the default brain we will be controlling from Python.
```
# get the default brain
brain_name = env.brain_names[0]
brain = env.brains[brain_name]
brain
```
### 2. Examine the State and Action Spaces
The simulation contains a single agent that navigates a large environment. At each time step, it has four actions at its disposal:
- `0` - walk forward
- `1` - walk backward
- `2` - turn left
- `3` - turn right
The state space has `37` dimensions and contains the agent's velocity, along with ray-based perception of objects around agent's forward direction. A reward of `+1` is provided for collecting a yellow banana, and a reward of `-1` is provided for collecting a blue banana.
Run the code cell below to print some information about the environment.
```
# reset the environment
env_info = env.reset(train_mode=True)[brain_name]
# number of agents in the environment
print('Number of agents:', len(env_info.agents))
# number of actions
action_size = brain.vector_action_space_size
print('Number of actions:', action_size)
# examine the state space
state = env_info.vector_observations[0]
print('States look like:', state)
state_size = len(state)
print('States have length:', state_size)
```
### 3. Take Random Actions in the Environment
In the next code cell, you will learn how to use the Python API to control the agent and receive feedback from the environment.
Once this cell is executed, you will watch the agent's performance as it selects an action (uniformly) at random at each time step. A window should pop up that allows you to observe the agent as it moves through the environment.
Of course, as part of the project, you'll have to change the code so that the agent is able to use its experience to gradually choose better actions when interacting with the environment!
```
'''
env_info = env.reset(train_mode=False)[brain_name] # reset the environment
state = env_info.vector_observations[0] # get the current state
score = 0 # initialize the score
while True:
action = np.random.randint(action_size) # select an action
env_info = env.step(action)[brain_name] # send the action to the environment
next_state = env_info.vector_observations[0] # get the next state
reward = env_info.rewards[0] # get the reward
done = env_info.local_done[0] # see if episode has finished
score += reward # update the score
state = next_state # roll over the state to next time step
if done: # exit loop if episode finished
break
print("Score: {}".format(score))
'''
```
When finished, you can close the environment.
```
import gym
import random
import torch
import numpy as np
from collections import deque
import matplotlib.pyplot as plt
%matplotlib inline
from dqn_agent import Agent
agent = Agent(state_size, action_size, seed=0)
def dqn(n_episodes=20000, max_t=1000, eps_start=1.0, eps_end=0.01, eps_decay=0.995):
"""Deep Q-Learning.
Params
======
n_episodes (int): maximum number of training episodes
max_t (int): maximum number of timesteps per episode
eps_start (float): starting value of epsilon, for epsilon-greedy action selection
eps_end (float): minimum value of epsilon
eps_decay (float): multiplicative factor (per episode) for decreasing epsilon
"""
scores = [] # list containing scores from each episode
scores_window = deque(maxlen=100) # last 100 scores
eps = eps_start # initialize epsilon
for i_episode in range(1, n_episodes+1):
env_info = env.reset(train_mode=True)[brain_name] # reset the environment
state = env_info.vector_observations[0]
score = 0
for t in range(max_t):
action = agent.act(state, eps)
env_info = env.step(action)[brain_name]
next_state = env_info.vector_observations[0] # get the next state
reward = env_info.rewards[0] # get the reward
done = env_info.local_done[0]
agent.step(state, action, reward, next_state, done)
state = next_state
score += reward
if done:
break
scores_window.append(score) # save most recent score
scores.append(score) # save most recent score
eps = max(eps_end, eps_decay*eps) # decrease epsilon
print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window)), end="")
if i_episode % 100 == 0:
print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window)))
if np.mean(scores_window)>=13.0:
print('\nEnvironment solved in {:d} episodes!\tAverage Score: {:.2f}'.format(i_episode-100, np.mean(scores_window)))
torch.save(agent.qnetwork_local.state_dict(), 'checkpoint.pth')
break
return scores
scores = dqn()
# plot the scores
fig = plt.figure()
ax = fig.add_subplot(111)
plt.plot(np.arange(len(scores)), scores)
plt.ylabel('Score')
plt.xlabel('Episode #')
plt.show()
env_info = env.reset(train_mode=False)[brain_name] # reset the environment
state = env_info.vector_observations[0] # get the current state
score = 0
reward_i=[]# initialize the score
while True:
action = agent.act(state, 0.01) # select an action
env_info = env.step(action)[brain_name] # send the action to the environment
next_state = env_info.vector_observations[0] # get the next state
reward = env_info.rewards[0] # get the reward
done = env_info.local_done[0] # see if episode has finished
reward_i.append(reward)
score += reward # update the score
state = next_state # roll over the state to next time step
if done: # exit loop if episode finished
break
print("Score: {}".format(score))
print("Rewards: {}".format(reward_i))
env.close()
```
### 4. It's Your Turn!
Now it's your turn to train your own agent to solve the environment! When training the environment, set `train_mode=True`, so that the line for resetting the environment looks like the following:
```python
env_info = env.reset(train_mode=True)[brain_name]
```
```
import keras
keras.__version__
```
# Classifying newswires: a multi-class classification example
This notebook contains the code example from Chapter 3, Section 5 of [Deep Learning with Python (케라스 창시자에게 배우는 딥러닝)](https://tensorflow.blog/케라스-창시자에게-배우는-딥러닝/). The book contains far more content and figures; this notebook only includes the source code and the explanations that go with it. The explanations were written against Keras version 2.2.2; because the notebook is re-tested whenever a new Keras version is released, the text and the code output may differ slightly.
----
In the previous section we saw how to classify vector inputs into two classes using a fully connected network. What should we do when there are more than two classes?
In this section we will build a network that classifies Reuters newswires into 46 mutually exclusive topics. Because there are many classes, this is a multi-class classification problem, and because each data point should be classified into exactly one category, it is more precisely a single-label, multi-class classification problem. If each data point could belong to several categories (for instance, several topics), it would instead be a multi-label, multi-class classification problem.
## The Reuters dataset
We will use the Reuters dataset, a collection of short newswires and their topics published by Reuters in 1986. It is a simple dataset that is widely used for text classification. There are 46 topics; some topics have more data than others, but each topic has at least 10 examples in the training set.
Like IMDB and MNIST, the Reuters dataset comes packaged with Keras. Let's take a look:
```
from keras.datasets import reuters
(train_data, train_labels), (test_data, test_labels) = reuters.load_data(num_words=10000)
```
As in the IMDB dataset, the `num_words=10000` argument restricts the data to the 10,000 most frequently occurring words.
There are 8,982 training examples and 2,246 test examples:
```
len(train_data)
len(test_data)
```
As with the IMDB reviews, each example is a list of integers (word indices):
```
train_data[10]
```
In case you are curious, here is how to decode it back to words:
```
word_index = reuters.get_word_index()
reverse_word_index = dict([(value, key) for (key, value) in word_index.items()])
# The indices 0, 1, and 2 are reserved for "padding", "start of sequence", and "unknown", so we subtract 3
decoded_newswire = ' '.join([reverse_word_index.get(i - 3, '?') for i in train_data[0]])
decoded_newswire
```
The label associated with an example is an integer between 0 and 45: a topic index.
```
train_labels[10]
```
## Preparing the data
We can vectorize the data with the exact same code as in the previous example:
```
import numpy as np
def vectorize_sequences(sequences, dimension=10000):
results = np.zeros((len(sequences), dimension))
for i, sequence in enumerate(sequences):
results[i, sequence] = 1.
return results
# Vectorize the training data
x_train = vectorize_sequences(train_data)
# Vectorize the test data
x_test = vectorize_sequences(test_data)
```
There are two ways to vectorize the labels: cast the label list as an integer tensor, or use one-hot encoding. One-hot encoding is widely used for categorical data and is also called categorical encoding (see section 6.1 for a more detailed explanation of one-hot encoding). In this case, the one-hot encoding of a label is a vector that is 1 at the index of the label and 0 everywhere else:
```
def to_one_hot(labels, dimension=46):
results = np.zeros((len(labels), dimension))
for i, label in enumerate(labels):
results[i, label] = 1.
return results
# Vectorize the training labels
one_hot_train_labels = to_one_hot(train_labels)
# Vectorize the test labels
one_hot_test_labels = to_one_hot(test_labels)
```
Note that Keras has a built-in function for this, which you have already seen in the MNIST example:
```
from keras.utils.np_utils import to_categorical
one_hot_train_labels = to_categorical(train_labels)
one_hot_test_labels = to_categorical(test_labels)
```
## Building our model
This topic classification problem looks similar to the previous movie review classification problem: in both cases we are classifying short snippets of text. There is, however, a new constraint: the number of output classes has grown from 2 to 46, so the dimensionality of the output space is much larger.
In a stack of `Dense` layers like the one we used before, each layer can only access the information present in the output of the previous layer. If one layer drops some information relevant to the classification problem, that information can never be recovered by later layers: each layer can potentially become an information bottleneck. In the previous example we used 16-dimensional intermediate layers, but a 16-dimensional space is probably too limited to separate 46 classes: such small layers can act as information bottlenecks that permanently drop useful information.
For this reason we will use larger layers. Let's go with 64 units:
```
from keras import models
from keras import layers
model = models.Sequential()
model.add(layers.Dense(64, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(64, activation='relu'))
model.add(layers.Dense(46, activation='softmax'))
```
There are two things to note about this architecture:
* The final `Dense` layer has size 46. This means that for each input sample the network outputs a 46-dimensional vector, where each element (each dimension) encodes a different output class.
* The last layer uses a `softmax` activation. You saw this pattern in the MNIST example: for every input sample the network produces a probability distribution over the 46 output classes, i.e. a 46-dimensional vector where `output[i]` is the probability that the sample belongs to class `i`. The 46 values sum to 1.
The best loss function for this problem is `categorical_crossentropy`. It measures the distance between two probability distributions: here, between the probability distribution output by the network and the true distribution of the labels. By minimizing the distance between these two distributions, we train the model to output something as close as possible to the true labels.
```
model.compile(optimizer='rmsprop',
loss='categorical_crossentropy',
metrics=['accuracy'])
```
## Validating our approach
Let's set apart 1,000 samples from the training data to use as a validation set:
```
x_val = x_train[:1000]
partial_x_train = x_train[1000:]
y_val = one_hot_train_labels[:1000]
partial_y_train = one_hot_train_labels[1000:]
```
Now let's train the model for 20 epochs:
```
history = model.fit(partial_x_train,
partial_y_train,
epochs=20,
batch_size=512,
validation_data=(x_val, y_val))
```
Let's plot the loss and accuracy curves:
```
import matplotlib.pyplot as plt
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(1, len(loss) + 1)
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
plt.clf()   # clear the figure
acc = history.history['acc']
val_acc = history.history['val_acc']
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()
plt.show()
```
The model starts overfitting after 9 epochs. Let's train a new model from scratch for 9 epochs and then evaluate it on the test set:
```
model = models.Sequential()
model.add(layers.Dense(64, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(64, activation='relu'))
model.add(layers.Dense(46, activation='softmax'))
model.compile(optimizer='rmsprop',
loss='categorical_crossentropy',
metrics=['accuracy'])
model.fit(partial_x_train,
partial_y_train,
epochs=9,
batch_size=512,
validation_data=(x_val, y_val))
results = model.evaluate(x_test, one_hot_test_labels)
results
```
We reach an accuracy of roughly 78%. In a balanced binary classification problem, a purely random classifier would reach 50% accuracy. This problem uses an imbalanced dataset, so a random classifier reaches only about 19%. Compared to that baseline, our result looks quite good:
```
import copy
test_labels_copy = copy.copy(test_labels)
np.random.shuffle(test_labels_copy)
float(np.sum(np.array(test_labels) == np.array(test_labels_copy))) / len(test_labels)
```
## Generating predictions on new data
The `predict` method of the model instance returns a probability distribution over the 46 topics. Let's generate topic predictions for all of the test data:
```
predictions = model.predict(x_test)
```
Each entry in `predictions` is a vector of length 46:
```
predictions[0].shape
```
The elements of this vector sum to 1:
```
np.sum(predictions[0])
```
The largest entry is the predicted class, i.e. the class with the highest probability:
```
np.argmax(predictions[0])
```
## A different way to handle the labels and the loss
As mentioned earlier, another way to encode the labels is to cast them as an integer tensor, like this:
```
y_train = np.array(train_labels)
y_test = np.array(test_labels)
```
The only thing this approach changes is the choice of the loss function. The loss function used in listing 3-21, `categorical_crossentropy`, expects the labels to follow a categorical encoding. With integer labels, we should use `sparse_categorical_crossentropy` instead:
```
model.compile(optimizer='rmsprop', loss='sparse_categorical_crossentropy', metrics=['acc'])
```
This loss function is mathematically the same as `categorical_crossentropy`; only its interface is different.
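A quick toy check of this equivalence (not part of the original notebook):
```python
# Toy check: the sparse (integer-label) and categorical (one-hot) forms give the same loss value.
import numpy as np

y_pred = np.array([0.1, 0.7, 0.2])        # predicted distribution over 3 classes
label = 1                                 # integer label
one_hot = np.eye(3)[label]                # [0., 1., 0.]
print(-np.log(y_pred[label]))             # sparse form: -log p[label]
print(-np.sum(one_hot * np.log(y_pred)))  # categorical form: same number
```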
## The importance of having sufficiently large intermediate layers
As mentioned earlier, because the final output is 46-dimensional, intermediate layers should not have many fewer than 46 hidden units. Let's see what the information bottleneck looks like when we use an intermediate layer that is much smaller than 46 dimensions, e.g. 4-dimensional:
```
model = models.Sequential()
model.add(layers.Dense(64, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(4, activation='relu'))
model.add(layers.Dense(46, activation='softmax'))
model.compile(optimizer='rmsprop',
loss='categorical_crossentropy',
metrics=['accuracy'])
model.fit(partial_x_train,
partial_y_train,
epochs=20,
batch_size=128,
validation_data=(x_val, y_val))
```
The peak validation accuracy is now about 71%, a drop of roughly 8 percentage points. Most of this drop comes from trying to compress a lot of information (enough information to recover the separating hyperplanes of 46 classes) into an intermediate representation space that is too low-dimensional. The network manages to cram most of the necessary information into these 4-dimensional representations, but not all of it.
## Further experiments
* Try using larger or smaller layers: 32 units, 128 units, and so on.
* We used two hidden layers here. Try using a single hidden layer, or three hidden layers.
## Wrapping up
Here's what you should take away from this example:
* If you are trying to classify data points among N classes, the network's final `Dense` layer should have size N.
* In a single-label, multi-class classification problem, the network should end with a `softmax` activation so that it outputs a probability distribution over the N classes.
* Categorical cross-entropy is almost always the loss function you should use for such problems; it minimizes the distance between the probability distribution output by the model and the target distribution.
* There are two ways to handle labels in multi-class classification:
  * Encode the labels via categorical (one-hot) encoding and use `categorical_crossentropy` as the loss function.
  * Encode the labels as integers and use the `sparse_categorical_crossentropy` loss function.
* When classifying among a large number of categories, avoid creating an information bottleneck by making the intermediate layers too small.
# Data Connector for DBLP
In this example, we will be going over how to use Data Connector with DBLP.
## Preprocessing
data_connector is a component in the dataprep library that aims to simplify data access by providing a standard set of APIs. The goal is to help users skip complex API configuration. In this tutorial, we demonstrate how to use the data_connector library with DBLP.
If you haven't installed dataprep, run command `pip install dataprep` or execute the following cell.
```
# Run me if you'd like to install
!pip install dataprep
```
## Download and store the configuration files in dataprep.
The configuration files are used to configure the parameters and initial setup for the API. The available configuration files can be manually downloaded here: [Configuration Files](https://github.com/sfu-db/DataConnectorConfigs) or automatically downloaded at usage.
Store the configuration file in the dataprep folder.
## Initialize data_connector
To initialize the connector, run the following code. Unlike Yelp and Spotify, no tokens or client information are needed for DBLP.
```
from dataprep.data_connector import Connector
dc = Connector("./DataConnectorConfigs/DBLP")
```
## Functionalities
Data connector provides several functions you can use to gain insight into the data downloaded from DBLP.
### Connector.info
The info method gives information and guidelines for using the connector. There are three sections in the response: table, parameters, and examples.
>1. Table - The table(s) being accessed.
>2. Parameters - Identifies which parameters can be used to call the method. For DBLP, there is no required **parameter**.
>3. Examples - Shows how you can call the methods in the Connector class.
```
dc.info()
```
### Connector.show_schema
The show_schema method returns the schema of the website data as a DataFrame. There are two columns in the response: the first is the column name and the second is the data type.
As an example, let's see what is in the publication table.
```
dc.show_schema("publication")
```
### Connector.query
The query method downloads the website data and returns it as a DataFrame. The parameters must meet the requirements indicated by Connector.info for the operation to run.
When the data is received from the server, it will be in either JSON or XML format. The data_connector reformats it into a pandas DataFrame for the convenience of downstream operations.
As an example, let's query the "publication" table, searching for "lee".
```
df = dc.query("publication", q="lee")  # "q" assumed to be the search parameter; check dc.info() if your config names it differently
```
From the query results, you can see how easy it is to download publication data from DBLP into a pandas DataFrame.
Now that you have an understanding of how data connector operates, you can easily accomplish the task with two lines of code.
>1. dc = Connector(...)
>2. dc.query(...)
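Putting those two lines together, a minimal end-to-end sketch might look like this (the `q` search parameter is an assumption; consult `dc.info()` for the exact parameter names in the DBLP configuration):
```
from dataprep.data_connector import Connector

dc = Connector("./DataConnectorConfigs/DBLP")    # path to the downloaded DBLP config
df = dc.query("publication", q="data cleaning")  # "q" assumed to be the search parameter
df.head()
```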
## That's all for now.
If you are interested in writing your own configuration file or modifying an existing one, refer to the [Configuration Files](https://github.com/sfu-db/DataConnectorConfigs) repository.
```
import matplotlib as mpl
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
import sklearn
import pandas as pd
import os
import sys
import time
import tensorflow as tf
from tensorflow import keras
print(tf.__version__)
print(sys.version_info)
for module in mpl, np, pd, sklearn, tf, keras:
print(module.__name__, module.__version__)
fashion_mnist = keras.datasets.fashion_mnist
(x_train_all, y_train_all), (x_test, y_test) = fashion_mnist.load_data()
x_valid, x_train = x_train_all[:5000], x_train_all[5000:]
y_valid, y_train = y_train_all[:5000], y_train_all[5000:]
print(x_valid.shape, y_valid.shape)
print(x_train.shape, y_train.shape)
print(x_test.shape, y_test.shape)
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
x_train_scaled = scaler.fit_transform(
x_train.astype(np.float32).reshape(-1, 1)).reshape(-1, 28, 28, 1)
x_valid_scaled = scaler.transform(
x_valid.astype(np.float32).reshape(-1, 1)).reshape(-1, 28, 28, 1)
x_test_scaled = scaler.transform(
x_test.astype(np.float32).reshape(-1, 1)).reshape(-1, 28, 28, 1)
model = keras.models.Sequential()
# convolutional layer
model.add(keras.layers.Conv2D(filters=32, kernel_size=3,
padding='same',
activation='selu',
input_shape=(28, 28, 1)))
# convolutional layer
model.add(keras.layers.Conv2D(filters=32, kernel_size=3,
padding='same',
activation='selu'))
# pooling layer
model.add(keras.layers.MaxPool2D(pool_size=2))
# convolutional layer
model.add(keras.layers.Conv2D(filters=64, kernel_size=3,
padding='same',
activation='selu'))
# convolutional layer
model.add(keras.layers.Conv2D(filters=64, kernel_size=3,
padding='same',
activation='selu'))
# pooling layer
model.add(keras.layers.MaxPool2D(pool_size=2))
# convolutional layer
model.add(keras.layers.Conv2D(filters=128, kernel_size=3,
padding='same',
activation='selu'))
# convolutional layer
model.add(keras.layers.Conv2D(filters=128, kernel_size=3,
padding='same',
activation='selu'))
# pooling layer
model.add(keras.layers.MaxPool2D(pool_size=2))
# flatten
model.add(keras.layers.Flatten())
# fully connected layer
model.add(keras.layers.Dense(128, activation='selu'))
model.add(keras.layers.Dense(10, activation="softmax"))
model.compile(loss="sparse_categorical_crossentropy",
optimizer = "sgd",
metrics = ["accuracy"])
model.summary()
logdir = './cnn-selu-callbacks'
if not os.path.exists(logdir):
os.mkdir(logdir)
output_model_file = os.path.join(logdir,
"fashion_mnist_model.h5")
callbacks = [
keras.callbacks.TensorBoard(logdir),
keras.callbacks.ModelCheckpoint(output_model_file,
save_best_only = True),
keras.callbacks.EarlyStopping(patience=5, min_delta=1e-3)
]
history = model.fit(x_train_scaled, y_train, epochs=10,
validation_data=(x_valid_scaled, y_valid),
callbacks = callbacks)
def plot_learning_curves(history):
pd.DataFrame(history.history).plot(figsize=(8, 5))
plt.grid(True)
plt.gca().set_ylim(0, 1)
plot_learning_curves(history)
model.evaluate(x_test_scaled, y_test)
```
[source](../api/alibi_detect.od.vaegmm.rst)
# Variational Auto-Encoding Gaussian Mixture Model
## Overview
The Variational Auto-Encoding Gaussian Mixture Model (VAEGMM) Outlier Detector follows the [Deep Autoencoding Gaussian Mixture Model for Unsupervised Anomaly Detection](https://openreview.net/forum?id=BJJLHbb0-) paper but with a [VAE](https://arxiv.org/abs/1312.6114) instead of a regular Auto-Encoder. The encoder compresses the data while the reconstructed instances generated by the decoder are used to create additional features based on the reconstruction error between the input and the reconstructions. These features are combined with encodings and fed into a Gaussian Mixture Model ([GMM](https://en.wikipedia.org/wiki/Mixture_model#Gaussian_mixture_model)). The VAEGMM outlier detector is first trained on a batch of unlabeled, but normal (*inlier*) data. Unsupervised or semi-supervised training is desirable since labeled data is often scarce. The sample energy of the GMM can then be used to determine whether an instance is an outlier (high sample energy) or not (low sample energy). The algorithm is suitable for tabular and image data.
## Usage
### Initialize
Parameters:
* `threshold`: threshold value for the sample energy above which the instance is flagged as an outlier.
* `latent_dim`: latent dimension of the VAE.
* `n_gmm`: number of components in the GMM.
* `encoder_net`: `tf.keras.Sequential` instance containing the encoder network. Example:
```python
encoder_net = tf.keras.Sequential(
[
InputLayer(input_shape=(n_features,)),
Dense(60, activation=tf.nn.tanh),
Dense(30, activation=tf.nn.tanh),
Dense(10, activation=tf.nn.tanh),
Dense(latent_dim, activation=None)
])
```
* `decoder_net`: `tf.keras.Sequential` instance containing the decoder network. Example:
```python
decoder_net = tf.keras.Sequential(
[
InputLayer(input_shape=(latent_dim,)),
Dense(10, activation=tf.nn.tanh),
Dense(30, activation=tf.nn.tanh),
Dense(60, activation=tf.nn.tanh),
Dense(n_features, activation=None)
])
```
* `gmm_density_net`: layers for the GMM network wrapped in a `tf.keras.Sequential` class. Example:
```python
gmm_density_net = tf.keras.Sequential(
[
InputLayer(input_shape=(latent_dim + 2,)),
Dense(10, activation=tf.nn.tanh),
Dense(n_gmm, activation=tf.nn.softmax)
])
```
* `vaegmm`: instead of using a separate encoder, decoder and GMM density net, the VAEGMM can also be passed as a `tf.keras.Model`.
* `samples`: number of samples drawn for each instance during detection.
* `beta`: weight on the KL-divergence loss term following the $\beta$-[VAE](https://openreview.net/forum?id=Sy2fzU9gl) framework. Default equals 1.
* `recon_features`: function to extract features from the instances reconstructed by the decoder. Defaults to a combination of the mean squared reconstruction error and the cosine similarity between the original and reconstructed instances (a sketch of such a function follows this parameter list).
* `data_type`: can specify data type added to metadata. E.g. *'tabular'* or *'image'*.
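For intuition, a `recon_features`-style function might compute something like the following. This is only an illustrative sketch of combining reconstruction error and cosine similarity, not the library's actual default implementation:
```python
import tensorflow as tf

def recon_features_sketch(x, x_recon):
    # flatten each instance to a vector
    x = tf.reshape(x, [tf.shape(x)[0], -1])
    x_recon = tf.reshape(x_recon, [tf.shape(x_recon)[0], -1])
    # per-instance mean squared reconstruction error
    mse = tf.reduce_mean(tf.square(x - x_recon), axis=1, keepdims=True)
    # per-instance cosine similarity between original and reconstruction
    cosim = tf.reduce_sum(
        tf.nn.l2_normalize(x, axis=1) * tf.nn.l2_normalize(x_recon, axis=1),
        axis=1, keepdims=True)
    # two extra features, matching the latent_dim + 2 input of gmm_density_net
    return tf.concat([mse, cosim], axis=1)
```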
Initialized outlier detector example:
```python
from alibi_detect.od import OutlierVAEGMM
od = OutlierVAEGMM(
threshold=7.5,
encoder_net=encoder_net,
decoder_net=decoder_net,
gmm_density_net=gmm_density_net,
latent_dim=4,
n_gmm=2,
samples=10
)
```
### Fit
We then need to train the outlier detector. The following parameters can be specified:
* `X`: training batch as a numpy array of preferably normal data.
* `loss_fn`: loss function used for training. Defaults to the custom VAEGMM loss which is a combination of the [elbo](https://en.wikipedia.org/wiki/Evidence_lower_bound) loss, sample energy of the GMM and a loss term penalizing small values on the diagonals of the covariance matrices in the GMM to avoid trivial solutions. It is important to balance the loss weights below so no single loss term dominates during the optimization.
* `w_recon`: weight on elbo loss term. Defaults to 1e-7.
* `w_energy`: weight on sample energy loss term. Defaults to 0.1.
* `w_cov_diag`: weight on covariance diagonals. Defaults to 0.005.
* `optimizer`: optimizer used for training. Defaults to [Adam](https://arxiv.org/abs/1412.6980) with learning rate 1e-4.
* `cov_elbo`: dictionary with covariance matrix options in case the elbo loss function is used. Either use the full covariance matrix inferred from X (*dict(cov_full=None)*), only the variance (*dict(cov_diag=None)*) or a float representing the same standard deviation for each feature (e.g. *dict(sim=.05)*) which is the default.
* `epochs`: number of training epochs.
* `batch_size`: batch size used during training.
* `verbose`: boolean whether to print training progress.
* `log_metric`: additional metrics whose progress will be displayed if verbose equals True.
```python
od.fit(
X_train,
epochs=10,
batch_size=1024
)
```
It is often hard to find a good threshold value. If we have a batch of normal and outlier data and we know approximately the percentage of normal data in the batch, we can infer a suitable threshold:
```python
od.infer_threshold(
X,
threshold_perc=95
)
```
### Detect
We detect outliers by simply calling `predict` on a batch of instances `X` to compute the instance level sample energies. We can also return the instance level outlier score by setting `return_instance_score` to True.
The prediction takes the form of a dictionary with `meta` and `data` keys. `meta` contains the detector's metadata, while `data` is itself a dictionary containing the actual predictions, stored under the following keys:
* `is_outlier`: boolean whether instances are above the threshold and therefore outlier instances. The array is of shape *(batch size,)*.
* `instance_score`: contains instance level scores if `return_instance_score` equals True.
```python
preds = od.predict(
X,
return_instance_score=True
)
```
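For example, a minimal way to inspect the returned dictionary (a sketch using only the keys documented above):
```python
import numpy as np

is_outlier = preds['data']['is_outlier']     # array of shape (batch size,)
scores = preds['data']['instance_score']     # present because return_instance_score=True
print(f"{int(np.sum(is_outlier))} of {len(is_outlier)} instances flagged as outliers")
```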
## Examples
### Tabular
[Outlier detection on KDD Cup 99](../examples/od_aegmm_kddcup.nblink)
```
import warnings
warnings.filterwarnings('ignore')
import numpy as np
from chainerrl import wrappers
from pathlib import Path
import pandas as pd
import seaborn
seaborn.set_context("talk")
from dacbench.benchmarks import LubyBenchmark, SigmoidBenchmark
from dacbench.wrappers import PerformanceTrackingWrapper, RewardNoiseWrapper
from dacbench.logger import Logger, load_logs, log2dataframe
from dacbench.plotting import plot_performance, plot_performance_per_instance
from examples.example_utils import make_chainer_dqn, train_chainer
```
# First steps: running an episode
### Creating a benchmark object
Benchmarks are environments created by a benchmark object.
First, we take a look at that object and the configuration it holds:
```
bench = LubyBenchmark()
for k in bench.config.keys():
print(f"{k}: {bench.config[k]}")
```
### Getting the benchmark environment
Now we can either get the default benchmark setting like this:
```
env = bench.get_benchmark(seed=1)
```
Or we can first modify the benchmark, e.g. by setting a different seed.
In that case, we use the get_environment() method so that the environment is built from our modified configuration instead of the defaults.
```
bench.config.seed = 9
env = bench.get_environment()
```
To then save our modification as a config file to share with others, run:
```
bench.save_config("test_config.json")
```
We can also load an existing benchmark configuration (e.g. from our challenge benchmarks):
```
bench = LubyBenchmark("dacbench/challenge_benchmarks/reward_quality_challenge/level_1/luby.json")
env = bench.get_environment()
```
### Running the benchmark
To execute a run, first reset the environment. It will return an initial state:
```
state = env.reset()
print(state)
```
Then we can run steps until the algorithm run is done:
```
done = False
cum_reward = 0
while not done:
state, reward, done, info = env.step(1)
cum_reward += reward
print(f"Episode 1/1...........................................Reward: {cum_reward}")
```
# A more complex example: multiple seeds & logging
### Creating env and logger object
Using a Logger object, we can track the performance on the environment over time.
To make the benchmark more difficult, we use a wrapper to add noise to the reward signal.
```
# Automatically setting up env and logger
def setup_env(seed):
# Get benchmark env
bench = SigmoidBenchmark()
env = bench.get_benchmark(seed=seed)
# Make logger to write results to file
logger = Logger(experiment_name=f"Sigmoid_s{seed}", output_path=Path("example"))
perf_logger = logger.add_module(PerformanceTrackingWrapper)
# Wrap the environment to add noise and track the reward
env = RewardNoiseWrapper(env, noise_dist="normal", dist_args=[0, 0.3])
env = PerformanceTrackingWrapper(env, logger=perf_logger)
logger.set_env(env)
logger.set_additional_info(seed=seed)
return env, logger
```
Now we run the environment on 5 different seeds, logging the results of each one.
In this example, we use a simple RL agent.
```
for seed in [0, 1, 2, 3, 4]:
env, logger = setup_env(seed)
# This could be any optimization or learning method
rl_agent = make_chainer_dqn(env.observation_space.low.size, env.action_space)
env = wrappers.CastObservationToFloat32(env)
train_chainer(rl_agent, env, num_episodes=10, logger=logger)
```
After we're done training, we can load the results into a pandas DataFrame for further inspection:
```
results = []
for seed in [0, 1, 2, 3, 4]:
logs = load_logs(f"example/Sigmoid_s{seed}/PerformanceTrackingWrapper.jsonl")
results.append(log2dataframe(logs, wide=True))
results = pd.concat(results)
```
DACBench includes plotting functionality for commonly used plots like performance over time:
```
# Plot performance over time
plot = plot_performance(results, title="Reward on Sigmoid Benchmark (with 95% CI)", aspect=2)
```
This plot can also be split into the different seeded runs:
```
plot = plot_performance(results, title="Reward per seed on Sigmoid Benchmark (with 95% CI)", hue="seed", aspect=2)
```
And as we're training on an instance set, we can also see how the mean performance per instance looks:
```
plot_performance_per_instance(results, title="Performance per training instance (Sigmoid)", aspect=2)
```
## Los Alamos National Lab Data Preparation
*Source:* [LANL dataset](https://csr.lanl.gov/data/cyber1/)
This data set represents 58 consecutive days of de-identified event data collected from five sources within Los Alamos National Laboratoryโs corporate, internal computer network.
Only the auth.txt file is used in our current work, as all red team activity appearing in the data correspond exclusively to authentication events. Future work includes utilizing additional data streams (namely; proc.txt.gz). We perform a pre-processing step on the file redteam.txt.gz so that its log lines are expanded to match the full log line which they correspond to in the auth.txt.gz file. This adds general convenience, and speeds up the process of querying to find out if a given log line is malicious.
This notebook outlines methods used for translating log lines into integer vectors which can be acted on by event level models. Note that the scripts in **/safekit/features/lanl** can also be used standalone to accomplish the same translation, but only operate on the auth.txt data file.
### Character Level
----
*note: /safekit/features/lanl/char_feats.py*
At the character level, the ascii value for each character in a log line is used as a token in the input sequence for the model.
The translation used is
```
def translate_line(string, pad_len):
return "0 " + " ".join([str(ord(c) - 30) for c in string]) + " 1 " + " ".join(["0"] * pad_len) + "\n"
```
Where **string** is a log line to be translated, and **pad_len** is the number of 0's to append so that the length of the translated string has the same number of characters as the longest log line in the dataset (character-wise). **0** and **1** are used to describe start and end of the translated sentence.
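For instance, on a short hypothetical log line (chosen only to illustrate the encoding):
```
print(translate_line("U12,C1,C2", pad_len=3))
# prints: 0 55 19 20 14 37 19 14 37 20 1 0 0 0
```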
The length of the max log line can be obtained using
```
max_len = 0
with open("auth_proc.txt", "r") as infile:
    for line in infile:
        tmp = line.strip().split(',')
        line_minus_time = tmp[0] + ',' + ','.join(tmp[2:])  # drop the time field
        if len(line_minus_time) > max_len:
            max_len = len(line_minus_time)
print(max_len)
```
We chose to filter out weekends for this dataset, as they capture little activity. These days are
```
weekend_days = [3, 4, 10, 11, 17, 18, 24, 25, 31, 32, 38, 39, 45, 46, 47, 52, 53]
```
It is also convenient to keep track of which lines are in fact red events. Note that labels are not used during unsupervised training, this step is included to simplify the evaluation process later on.
```
with open("redevents.txt", 'r') as red:
redevents = set(red.readlines())
```
**redevents.txt** contains all of the red team log lines verbatim from auth.txt.
It is now possible to parse the data file, reading in (raw) and writing out (translated) log lines.
```
import math
with open("auth_proc.txt", 'r') as infile, open("ap_char_feats.txt", 'w') as outfile:
outfile.write('line_number second day user red seq_len start_sentence\n') # header
infile.readline()
for line_num, line in enumerate(infile):
tmp = line.strip().split(',')
        line_minus_time = tmp[0] + ',' + ','.join(tmp[2:])
diff = max_len - len(line_minus_time)
raw_line = line.split(",")
sec = raw_line[1]
user = raw_line[2].strip().split('@')[0]
day = math.floor(int(sec)/86400)
red = 0
line_minus_event = ",".join(raw_line[1:])
red += int(line_minus_event in redevents) # 1 if line is red event
if user.startswith('U') and day not in weekend_days:
            translated_line = translate_line(line_minus_time, diff)
outfile.write("%s %s %s %s %s %s %s" % (line_num, sec, day,
user.replace("U", ""),
red, len(line_minus_time) + 1, translated_line))
```
The final preprocessing step is to split the translated data into multiple files; one for each day.
```
import os
os.mkdir('./char_feats')
with open('./ap_char_feats.txt', 'r') as data:
current_day = '0'
outfile = open('./char_feats/' + current_day + '.txt', 'w')
for line in data:
larray = line.strip().split(' ')
if int(larray[2]) == int(current_day):
outfile.write(line)
else:
outfile.close()
current_day = larray[2]
outfile = open('./char_feats/' + current_day + '.txt', 'w')
outfile.write(line)
outfile.close()
```
The char_feats folder can now be passed to the tiered or simple language model.
The config (data spec) for this experiment is shown below (write it as a string to a json file).
>{**"sentence_length"**: 129,
**"token_set_size"**: 96,
**"num_days"**: 30,
**"test_files"**: ["0head.txt", "1head.txt", "2head.txt"],
**"weekend_days"**: [3, 4, 10, 11, 17, 18, 24, 25]}
### Word Level
----
Instead of using the ascii values of individual characters, this approach operates on the features of a log line as if they were a sequence. In other words, each log line is split on "," and each index in the resulting array corresponds to a timestep in the input sequence of the event level model.
To map the token strings to integer values, a vocabulary is constructed for the dataset. Any tokens encountered during evaluation which were not present in the initial data are mapped to a common "out of vocabulary" (OOV) value during translation. If the number of unique tokens within the data is known to be prohibitively large, a count dictionary can be used to infer an arbitrary cutoff which maps the least likely tokens in the data to the OOV integer. Eg; all tokens which appear less than 5 times in the data map to the OOV token during translation.
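A minimal sketch of such a cutoff might look like the following (illustrative only: `token_counts` is a hypothetical Counter built in a first pass over the tokenized data, and this notebook does not apply a cutoff):
```
from collections import Counter

# token_counts = Counter(tok for line in tokenized_lines for tok in line)  # hypothetical first pass
def lookup_with_cutoff(token, token_counts, vocab, min_count=5):
    if token_counts[token] < min_count:
        return vocab["OOV"]              # rare token -> OOV integer
    return vocab.get(token, vocab["OOV"])
```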
Note that in our AICS paper we have separate OOV tokens for user, pc, and domain tokens.
In the following code, every unique token is included in the vocabulary.
```
index = 4 # <sos> : 1, <eos>: 2, auth_event: 3, proc_event: 4
vocab = {"OOV": "0", "<sos>": "1", "<eos>": "2","auth_event": "3", "proc_event": "4"}
def lookup(key):
global index, vocab
if key in vocab:
return vocab[key]
else:
index += 1
vocab[key] = str(index)
return str(index)
```
Translated log lines should be padded out to the length of the longest fields list. In the case of LANL, the proc events are the longer ones and contain 11 tokens.
```
def translate(fields_list):
translated_list = list(map(lookup, fields_list))
while len(translated_list) < 11:
translated_list.append("0")
translated_list.insert(0, "1") # <sos>
translated_list.append("2") # <eos>
translated_list.insert(0, str(len(translated_list))) # sentence len
return translated_list
```
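For example, with a fresh vocabulary, a hypothetical 10-token auth event translates as follows (length prefix, `<sos>`, tokens, padding, `<eos>`):
```
print(translate(["auth_event", "U12", "DOM1", "U12", "DOM1", "C1", "C2", "kerberos", "network", "success"]))
# ['13', '1', '3', '5', '6', '5', '6', '7', '8', '9', '10', '11', '0', '2']
```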
We chose to filter out weekends for this dataset, as they capture little activity. It is also convenient to keep track of which lines are in fact red events. Note that labels are not used during unsupervised training, this step is included to simplify the evaluation process later on.
```
weekend_days = [3, 4, 10, 11, 17, 18, 24, 25, 31, 32, 38, 39, 45, 46, 47, 52, 53]
with open("redevents.txt", 'r') as red:
redevents = set(red.readlines())
```
Since OOV cutoffs are not being considered, the translation can be done in a single pass over the data.
```
import math
with open("auth_proc.txt", 'r') as infile, open("ap_word_feats.txt", 'w') as outfile:
outfile.write('line_number second day user red sentence_len translated_line padding \n')
for line_num, line in enumerate(infile):
line = line.strip()
fields = line.replace("@", ",").replace("$", "").split(",")
sec = fields[1]
translated = translate(fields[0:1] + fields[2:])
user = fields[2]
day = math.floor(int(sec)/86400)
red = 0
red += int(line in redevents)
if user.startswith('U') and day not in weekend_days:
outfile.write("%s %s %s %s %s %s" % (line_num, sec, day, user.replace("U", ""),
red, " ".join(translated)))
print(len(vocab))
```
The final preprocessing step is to split the translated data into multiple files; one for each day.
```
import os
os.mkdir('./word_feats')
with open('./ap_word_feats.txt', 'r') as data:
current_day = '0'
outfile = open('./word_feats/' + current_day + '.txt', 'w')
for line in data:
larray = line.strip().split(' ')
if int(larray[2]) == int(current_day):
outfile.write(line)
else:
outfile.close()
current_day = larray[2]
outfile = open('./word_feats/' + current_day + '.txt', 'w')
outfile.write(line)
outfile.close()
```
The word_feats directory can now be passed to the tiered or simple language model.
The config json (data spec) for the experiment is shown below (write it as a string to a json file).
>{**"sentence_length"**: 12,
**"token_set_size"**: len(vocab),
**"num_days"**: 30,
**"test_files"**: ["0head.txt", "1head.txt", "2head.txt"],
**"weekend_days"**: [3, 4, 10, 11, 17, 18, 24, 25]}
```
import re
import numpy as np
import collections
from sklearn import metrics
from sklearn.model_selection import train_test_split
import tensorflow as tf
import pandas as pd
from unidecode import unidecode
from sklearn.preprocessing import LabelEncoder
from tqdm import tqdm
import time
import malaya
tokenizer = malaya.preprocessing._SocialTokenizer().tokenize
rules_normalizer = malaya.texts._tatabahasa.rules_normalizer
def is_number_regex(s):
if re.match("^\d+?\.\d+?$", s) is None:
return s.isdigit()
return True
def detect_money(word):
if word[:2] == 'rm' and is_number_regex(word[2:]):
return True
else:
return False
def preprocessing(string):
tokenized = tokenizer(unidecode(string))
tokenized = [malaya.stem.naive(w) for w in tokenized]
tokenized = [w.lower() for w in tokenized if len(w) > 1]
tokenized = [rules_normalizer.get(w, w) for w in tokenized]
tokenized = ['<NUM>' if is_number_regex(w) else w for w in tokenized]
tokenized = ['<MONEY>' if detect_money(w) else w for w in tokenized]
return tokenized
def build_dataset(words, n_words):
count = [['GO', 0], ['PAD', 1], ['EOS', 2], ['UNK', 3]]
counter = collections.Counter(words).most_common(n_words)
count.extend(counter)
dictionary = dict()
for word, _ in count:
dictionary[word] = len(dictionary)
data = list()
unk_count = 0
for word in words:
index = dictionary.get(word, 3)
        if index == 3:  # word fell back to UNK
unk_count += 1
data.append(index)
    count[3][1] = unk_count
reversed_dictionary = dict(zip(dictionary.values(), dictionary.keys()))
return data, count, dictionary, reversed_dictionary
def str_idx(corpus, dic, maxlen, UNK = 3):
X = np.zeros((len(corpus), maxlen))
for i in range(len(corpus)):
for no, k in enumerate(corpus[i][:maxlen][::-1]):
X[i, -1 - no] = dic.get(k, UNK)
return X
import json
with open('tokenized.json') as fopen:
dataset = json.load(fopen)
texts = dataset['x']
labels = dataset['y']
del dataset
import itertools
concat = list(itertools.chain(*texts))
vocabulary_size = len(list(set(concat)))
data, count, dictionary, rev_dictionary = build_dataset(concat, vocabulary_size)
print('vocab from size: %d'%(vocabulary_size))
print('Most common words', count[4:10])
print('Sample data', data[:10], [rev_dictionary[i] for i in data[:10]])
import json
with open('toxicity-dictionary.json','w') as fopen:
fopen.write(json.dumps({'dictionary':dictionary,'reverse_dictionary':rev_dictionary}))
def position_encoding(inputs):
T = tf.shape(inputs)[1]
repr_dim = inputs.get_shape()[-1].value
pos = tf.reshape(tf.range(0.0, tf.to_float(T), dtype=tf.float32), [-1, 1])
i = np.arange(0, repr_dim, 2, np.float32)
denom = np.reshape(np.power(10000.0, i / repr_dim), [1, -1])
enc = tf.expand_dims(tf.concat([tf.sin(pos / denom), tf.cos(pos / denom)], 1), 0)
return tf.tile(enc, [tf.shape(inputs)[0], 1, 1])
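# Model: word embeddings plus additive position encoding, fed to a multi-layer LSTM
# wrapped with Bahdanau attention. Per-timestep dense logits are produced; the last
# timestep's logits serve as the sentence-level prediction, trained with sigmoid
# cross-entropy for multi-label toxicity classification. The attention alignments
# are exposed under the tensor name 'alphas'.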
class Model:
def __init__(
self,
size_layer,
num_layers,
dimension_output,
learning_rate,
dropout,
dict_size,
):
def cells(size, reuse = False):
return tf.contrib.rnn.DropoutWrapper(
tf.nn.rnn_cell.LSTMCell(
size,
initializer = tf.orthogonal_initializer(),
reuse = reuse,
),
state_keep_prob = dropout,
output_keep_prob = dropout,
)
self.X = tf.placeholder(tf.int32, [None, None])
self.Y = tf.placeholder(tf.float32, [None, dimension_output])
encoder_embeddings = tf.Variable(
tf.random_uniform([dict_size, size_layer], -1, 1)
)
encoder_embedded = tf.nn.embedding_lookup(encoder_embeddings, self.X)
encoder_embedded += position_encoding(encoder_embedded)
attention_mechanism = tf.contrib.seq2seq.BahdanauAttention(
num_units = size_layer, memory = encoder_embedded
)
rnn_cells = tf.contrib.seq2seq.AttentionWrapper(
cell = tf.nn.rnn_cell.MultiRNNCell(
[cells(size_layer) for _ in range(num_layers)]
),
attention_mechanism = attention_mechanism,
attention_layer_size = size_layer,
alignment_history = True,
)
outputs, last_state = tf.nn.dynamic_rnn(
rnn_cells, encoder_embedded, dtype = tf.float32
)
self.alignments = tf.transpose(
last_state.alignment_history.stack(), [1, 2, 0]
)
self.logits_seq = tf.layers.dense(outputs, dimension_output)
self.logits_seq = tf.identity(self.logits_seq, name = 'logits_seq')
self.logits = self.logits_seq[:, -1]
self.logits = tf.identity(self.logits, name = 'logits')
self.cost = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(
logits = self.logits, labels = self.Y
)
)
self.optimizer = tf.train.AdamOptimizer(
learning_rate = learning_rate
).minimize(self.cost)
correct_prediction = tf.equal(tf.round(tf.nn.sigmoid(self.logits)), tf.round(self.Y))
all_labels_true = tf.reduce_min(tf.cast(correct_prediction, tf.float32), 1)
self.accuracy = tf.reduce_mean(all_labels_true)
self.attention = tf.nn.softmax(
tf.reduce_sum(self.alignments[0], 1), name = 'alphas'
)
size_layer = 256
num_layers = 2
dimension_output = len(labels[0])
learning_rate = 1e-4
batch_size = 32
dropout = 0.8
maxlen = 100
tf.reset_default_graph()
sess = tf.InteractiveSession()
model = Model(
size_layer,
num_layers,
dimension_output,
learning_rate,
dropout,
len(dictionary),
)
sess.run(tf.global_variables_initializer())
saver = tf.train.Saver(tf.trainable_variables())
saver.save(sess, 'bahdanau/model.ckpt')
strings = ','.join(
[
n.name
for n in tf.get_default_graph().as_graph_def().node
if ('Variable' in n.op
or 'Placeholder' in n.name
or 'logits' in n.name
or 'alphas' in n.name)
and 'Adam' not in n.name
and 'beta' not in n.name
]
)
strings.split(',')
train_X, test_X, train_Y, test_Y = train_test_split(
texts, labels, test_size = 0.2
)
from tqdm import tqdm
import time
EARLY_STOPPING, CURRENT_CHECKPOINT, CURRENT_ACC, EPOCH = 3, 0, 0, 0
while True:
lasttime = time.time()
if CURRENT_CHECKPOINT == EARLY_STOPPING:
print('break epoch:%d\n' % (EPOCH))
break
train_acc, train_loss, test_acc, test_loss = 0, 0, 0, 0
pbar = tqdm(
range(0, len(train_X), batch_size), desc = 'train minibatch loop'
)
for i in pbar:
batch_x = str_idx(train_X[i : min(i + batch_size, len(train_X))], dictionary, maxlen)
batch_y = train_Y[i : min(i + batch_size, len(train_X))]
batch_x_expand = np.expand_dims(batch_x,axis = 1)
acc, cost, _ = sess.run(
[model.accuracy, model.cost, model.optimizer],
feed_dict = {
model.Y: batch_y,
model.X: batch_x
},
)
assert not np.isnan(cost)
train_loss += cost
train_acc += acc
pbar.set_postfix(cost = cost, accuracy = acc)
pbar = tqdm(range(0, len(test_X), batch_size), desc = 'test minibatch loop')
for i in pbar:
batch_x = str_idx(test_X[i : min(i + batch_size, len(test_X))], dictionary, maxlen)
batch_y = test_Y[i : min(i + batch_size, len(test_X))]
batch_x_expand = np.expand_dims(batch_x,axis = 1)
acc, cost = sess.run(
[model.accuracy, model.cost],
feed_dict = {
model.Y: batch_y,
model.X: batch_x
},
)
test_loss += cost
test_acc += acc
pbar.set_postfix(cost = cost, accuracy = acc)
train_loss /= len(train_X) / batch_size
train_acc /= len(train_X) / batch_size
test_loss /= len(test_X) / batch_size
test_acc /= len(test_X) / batch_size
if test_acc > CURRENT_ACC:
print(
'epoch: %d, pass acc: %f, current acc: %f'
% (EPOCH, CURRENT_ACC, test_acc)
)
CURRENT_ACC = test_acc
CURRENT_CHECKPOINT = 0
else:
CURRENT_CHECKPOINT += 1
print('time taken:', time.time() - lasttime)
print(
'epoch: %d, training loss: %f, training acc: %f, valid loss: %f, valid acc: %f\n'
% (EPOCH, train_loss, train_acc, test_loss, test_acc)
)
EPOCH += 1
saver.save(sess, 'bahdanau/model.ckpt')
text = preprocessing('kerajaan sebenarnya sangat bencikan rakyatnya, minyak naik dan segalanya, tapi gay bodoh')
new_vector = str_idx([text], dictionary, len(text))
sess.run(tf.nn.sigmoid(model.logits), feed_dict={model.X:new_vector})
sess.run(tf.nn.sigmoid(model.logits_seq), feed_dict={model.X:new_vector})
stack = []
pbar = range(0, len(test_X), batch_size)
for i in pbar:
batch_x = str_idx(test_X[i : min(i + batch_size, len(test_X))], dictionary, maxlen)
batch_y = test_Y[i : min(i + batch_size, len(test_X))]
stack.append(sess.run(tf.nn.sigmoid(model.logits),
feed_dict = {model.X: batch_x}))
np.around(np.concatenate(stack,axis=0)).shape
np.array(test_Y).shape
print(metrics.classification_report(np.array(test_Y),np.around(np.concatenate(stack,axis=0)),
target_names=["toxic", "severe_toxic", "obscene",
"threat", "insult", "identity_hate"]))
def freeze_graph(model_dir, output_node_names):
if not tf.gfile.Exists(model_dir):
raise AssertionError(
"Export directory doesn't exists. Please specify an export "
'directory: %s' % model_dir
)
checkpoint = tf.train.get_checkpoint_state(model_dir)
input_checkpoint = checkpoint.model_checkpoint_path
absolute_model_dir = '/'.join(input_checkpoint.split('/')[:-1])
output_graph = absolute_model_dir + '/frozen_model.pb'
clear_devices = True
with tf.Session(graph = tf.Graph()) as sess:
saver = tf.train.import_meta_graph(
input_checkpoint + '.meta', clear_devices = clear_devices
)
saver.restore(sess, input_checkpoint)
output_graph_def = tf.graph_util.convert_variables_to_constants(
sess,
tf.get_default_graph().as_graph_def(),
output_node_names.split(','),
)
with tf.gfile.GFile(output_graph, 'wb') as f:
f.write(output_graph_def.SerializeToString())
print('%d ops in the final graph.' % len(output_graph_def.node))
freeze_graph('bahdanau', strings)
def load_graph(frozen_graph_filename):
with tf.gfile.GFile(frozen_graph_filename, 'rb') as f:
graph_def = tf.GraphDef()
graph_def.ParseFromString(f.read())
with tf.Graph().as_default() as graph:
tf.import_graph_def(graph_def)
return graph
g = load_graph('bahdanau/frozen_model.pb')
x = g.get_tensor_by_name('import/Placeholder:0')
logits_seq = g.get_tensor_by_name('import/logits_seq:0')
logits = g.get_tensor_by_name('import/logits:0')
alphas = g.get_tensor_by_name('import/alphas:0')
test_sess = tf.InteractiveSession(graph = g)
import matplotlib.pyplot as plt
import seaborn as sns
sns.set()
news_string = 'Kerajaan juga perlu prihatin dan peka terhadap nasib para nelayan yang bergantung rezeki sepenuhnya kepada sumber hasil laut. Malah, projek ini memberikan kesan buruk yang berpanjangan kepada alam sekitar selain menjejaskan mata pencarian para nelayan'
text = preprocessing(news_string)
new_vector = str_idx([text], dictionary, len(text))
result = test_sess.run([tf.nn.sigmoid(logits), alphas, tf.nn.sigmoid(logits_seq)], feed_dict = {x: new_vector})
plt.figure(figsize = (15, 7))
labels = [word for word in text]
val = [val for val in result[1]]
plt.bar(np.arange(len(labels)), val)
plt.xticks(np.arange(len(labels)), labels, rotation = 'vertical')
plt.show()
result[2]
```
|
github_jupyter
|
import re
import numpy as np
import collections
from sklearn import metrics
from sklearn.cross_validation import train_test_split
import tensorflow as tf
import pandas as pd
from unidecode import unidecode
from sklearn.preprocessing import LabelEncoder
from tqdm import tqdm
import time
import malaya
tokenizer = malaya.preprocessing._SocialTokenizer().tokenize
rules_normalizer = malaya.texts._tatabahasa.rules_normalizer
def is_number_regex(s):
if re.match("^\d+?\.\d+?$", s) is None:
return s.isdigit()
return True
def detect_money(word):
if word[:2] == 'rm' and is_number_regex(word[2:]):
return True
else:
return False
def preprocessing(string):
tokenized = tokenizer(unidecode(string))
tokenized = [malaya.stem.naive(w) for w in tokenized]
tokenized = [w.lower() for w in tokenized if len(w) > 1]
tokenized = [rules_normalizer.get(w, w) for w in tokenized]
tokenized = ['<NUM>' if is_number_regex(w) else w for w in tokenized]
tokenized = ['<MONEY>' if detect_money(w) else w for w in tokenized]
return tokenized
def build_dataset(words, n_words):
count = [['GO', 0], ['PAD', 1], ['EOS', 2], ['UNK', 3]]
counter = collections.Counter(words).most_common(n_words)
count.extend(counter)
dictionary = dict()
for word, _ in count:
dictionary[word] = len(dictionary)
data = list()
unk_count = 0
for word in words:
index = dictionary.get(word, 3)
if index == 0:
unk_count += 1
data.append(index)
count[0][1] = unk_count
reversed_dictionary = dict(zip(dictionary.values(), dictionary.keys()))
return data, count, dictionary, reversed_dictionary
def str_idx(corpus, dic, maxlen, UNK = 3):
X = np.zeros((len(corpus), maxlen))
for i in range(len(corpus)):
for no, k in enumerate(corpus[i][:maxlen][::-1]):
X[i, -1 - no] = dic.get(k, UNK)
return X
import json
with open('tokenized.json') as fopen:
dataset = json.load(fopen)
texts = dataset['x']
labels = dataset['y']
del dataset
import itertools
concat = list(itertools.chain(*texts))
vocabulary_size = len(list(set(concat)))
data, count, dictionary, rev_dictionary = build_dataset(concat, vocabulary_size)
print('vocab from size: %d'%(vocabulary_size))
print('Most common words', count[4:10])
print('Sample data', data[:10], [rev_dictionary[i] for i in data[:10]])
import json
with open('toxicity-dictionary.json','w') as fopen:
fopen.write(json.dumps({'dictionary':dictionary,'reverse_dictionary':rev_dictionary}))
def position_encoding(inputs):
T = tf.shape(inputs)[1]
repr_dim = inputs.get_shape()[-1].value
pos = tf.reshape(tf.range(0.0, tf.to_float(T), dtype=tf.float32), [-1, 1])
i = np.arange(0, repr_dim, 2, np.float32)
denom = np.reshape(np.power(10000.0, i / repr_dim), [1, -1])
enc = tf.expand_dims(tf.concat([tf.sin(pos / denom), tf.cos(pos / denom)], 1), 0)
return tf.tile(enc, [tf.shape(inputs)[0], 1, 1])
class Model:
def __init__(
self,
size_layer,
num_layers,
dimension_output,
learning_rate,
dropout,
dict_size,
):
def cells(size, reuse = False):
return tf.contrib.rnn.DropoutWrapper(
tf.nn.rnn_cell.LSTMCell(
size,
initializer = tf.orthogonal_initializer(),
reuse = reuse,
),
state_keep_prob = dropout,
output_keep_prob = dropout,
)
self.X = tf.placeholder(tf.int32, [None, None])
self.Y = tf.placeholder(tf.float32, [None, dimension_output])
encoder_embeddings = tf.Variable(
tf.random_uniform([dict_size, size_layer], -1, 1)
)
encoder_embedded = tf.nn.embedding_lookup(encoder_embeddings, self.X)
encoder_embedded += position_encoding(encoder_embedded)
attention_mechanism = tf.contrib.seq2seq.BahdanauAttention(
num_units = size_layer, memory = encoder_embedded
)
rnn_cells = tf.contrib.seq2seq.AttentionWrapper(
cell = tf.nn.rnn_cell.MultiRNNCell(
[cells(size_layer) for _ in range(num_layers)]
),
attention_mechanism = attention_mechanism,
attention_layer_size = size_layer,
alignment_history = True,
)
outputs, last_state = tf.nn.dynamic_rnn(
rnn_cells, encoder_embedded, dtype = tf.float32
)
self.alignments = tf.transpose(
last_state.alignment_history.stack(), [1, 2, 0]
)
self.logits_seq = tf.layers.dense(outputs, dimension_output)
self.logits_seq = tf.identity(self.logits_seq, name = 'logits_seq')
self.logits = self.logits_seq[:, -1]
self.logits = tf.identity(self.logits, name = 'logits')
self.cost = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(
logits = self.logits, labels = self.Y
)
)
self.optimizer = tf.train.AdamOptimizer(
learning_rate = learning_rate
).minimize(self.cost)
correct_prediction = tf.equal(tf.round(tf.nn.sigmoid(self.logits)), tf.round(self.Y))
all_labels_true = tf.reduce_min(tf.cast(correct_prediction, tf.float32), 1)
self.accuracy = tf.reduce_mean(all_labels_true)
self.attention = tf.nn.softmax(
tf.reduce_sum(self.alignments[0], 1), name = 'alphas'
)
size_layer = 256
num_layers = 2
dimension_output = len(labels[0])
learning_rate = 1e-4
batch_size = 32
dropout = 0.8
maxlen = 100
tf.reset_default_graph()
sess = tf.InteractiveSession()
model = Model(
size_layer,
num_layers,
dimension_output,
learning_rate,
dropout,
len(dictionary),
)
sess.run(tf.global_variables_initializer())
saver = tf.train.Saver(tf.trainable_variables())
saver.save(sess, 'bahdanau/model.ckpt')
strings = ','.join(
[
n.name
for n in tf.get_default_graph().as_graph_def().node
if ('Variable' in n.op
or 'Placeholder' in n.name
or 'logits' in n.name
or 'alphas' in n.name)
and 'Adam' not in n.name
and 'beta' not in n.name
]
)
strings.split(',')
train_X, test_X, train_Y, test_Y = train_test_split(
texts, labels, test_size = 0.2
)
from tqdm import tqdm
import time
EARLY_STOPPING, CURRENT_CHECKPOINT, CURRENT_ACC, EPOCH = 3, 0, 0, 0
while True:
lasttime = time.time()
if CURRENT_CHECKPOINT == EARLY_STOPPING:
print('break epoch:%d\n' % (EPOCH))
break
train_acc, train_loss, test_acc, test_loss = 0, 0, 0, 0
pbar = tqdm(
range(0, len(train_X), batch_size), desc = 'train minibatch loop'
)
for i in pbar:
batch_x = str_idx(train_X[i : min(i + batch_size, len(train_X))], dictionary, maxlen)
batch_y = train_Y[i : min(i + batch_size, len(train_X))]
batch_x_expand = np.expand_dims(batch_x,axis = 1)
acc, cost, _ = sess.run(
[model.accuracy, model.cost, model.optimizer],
feed_dict = {
model.Y: batch_y,
model.X: batch_x
},
)
assert not np.isnan(cost)
train_loss += cost
train_acc += acc
pbar.set_postfix(cost = cost, accuracy = acc)
pbar = tqdm(range(0, len(test_X), batch_size), desc = 'test minibatch loop')
for i in pbar:
batch_x = str_idx(test_X[i : min(i + batch_size, len(test_X))], dictionary, maxlen)
batch_y = test_Y[i : min(i + batch_size, len(test_X))]
batch_x_expand = np.expand_dims(batch_x,axis = 1)
acc, cost = sess.run(
[model.accuracy, model.cost],
feed_dict = {
model.Y: batch_y,
model.X: batch_x
},
)
test_loss += cost
test_acc += acc
pbar.set_postfix(cost = cost, accuracy = acc)
train_loss /= len(train_X) / batch_size
train_acc /= len(train_X) / batch_size
test_loss /= len(test_X) / batch_size
test_acc /= len(test_X) / batch_size
if test_acc > CURRENT_ACC:
print(
'epoch: %d, pass acc: %f, current acc: %f'
% (EPOCH, CURRENT_ACC, test_acc)
)
CURRENT_ACC = test_acc
CURRENT_CHECKPOINT = 0
else:
CURRENT_CHECKPOINT += 1
print('time taken:', time.time() - lasttime)
print(
'epoch: %d, training loss: %f, training acc: %f, valid loss: %f, valid acc: %f\n'
% (EPOCH, train_loss, train_acc, test_loss, test_acc)
)
EPOCH += 1
saver.save(sess, 'bahdanau/model.ckpt')
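# quick sanity check: score a single preprocessed example with the trained model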
text = preprocessing('kerajaan sebenarnya sangat bencikan rakyatnya, minyak naik dan segalanya, tapi gay bodoh')
new_vector = str_idx([text], dictionary, len(text))
sess.run(tf.nn.sigmoid(model.logits), feed_dict={model.X:new_vector})
sess.run(tf.nn.sigmoid(model.logits_seq), feed_dict={model.X:new_vector})
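# collect sigmoid probabilities over the whole test set and print a per-label classification report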
stack = []
pbar = range(0, len(test_X), batch_size)
for i in pbar:
batch_x = str_idx(test_X[i : min(i + batch_size, len(test_X))], dictionary, maxlen)
batch_y = test_Y[i : min(i + batch_size, len(test_X))]
stack.append(sess.run(tf.nn.sigmoid(model.logits),
feed_dict = {model.X: batch_x}))
np.around(np.concatenate(stack,axis=0)).shape
np.array(test_Y).shape
print(metrics.classification_report(np.array(test_Y),np.around(np.concatenate(stack,axis=0)),
target_names=["toxic", "severe_toxic", "obscene",
"threat", "insult", "identity_hate"]))
def freeze_graph(model_dir, output_node_names):
if not tf.gfile.Exists(model_dir):
raise AssertionError(
"Export directory doesn't exists. Please specify an export "
'directory: %s' % model_dir
)
checkpoint = tf.train.get_checkpoint_state(model_dir)
input_checkpoint = checkpoint.model_checkpoint_path
absolute_model_dir = '/'.join(input_checkpoint.split('/')[:-1])
output_graph = absolute_model_dir + '/frozen_model.pb'
clear_devices = True
with tf.Session(graph = tf.Graph()) as sess:
saver = tf.train.import_meta_graph(
input_checkpoint + '.meta', clear_devices = clear_devices
)
saver.restore(sess, input_checkpoint)
output_graph_def = tf.graph_util.convert_variables_to_constants(
sess,
tf.get_default_graph().as_graph_def(),
output_node_names.split(','),
)
with tf.gfile.GFile(output_graph, 'wb') as f:
f.write(output_graph_def.SerializeToString())
print('%d ops in the final graph.' % len(output_graph_def.node))
freeze_graph('bahdanau', strings)
def load_graph(frozen_graph_filename):
with tf.gfile.GFile(frozen_graph_filename, 'rb') as f:
graph_def = tf.GraphDef()
graph_def.ParseFromString(f.read())
with tf.Graph().as_default() as graph:
tf.import_graph_def(graph_def)
return graph
g = load_graph('bahdanau/frozen_model.pb')
x = g.get_tensor_by_name('import/Placeholder:0')
logits_seq = g.get_tensor_by_name('import/logits_seq:0')
logits = g.get_tensor_by_name('import/logits:0')
alphas = g.get_tensor_by_name('import/alphas:0')
test_sess = tf.InteractiveSession(graph = g)
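# visualize the attention weights (alphas) over the input tokens for one example sentence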
import matplotlib.pyplot as plt
import seaborn as sns
sns.set()
news_string = 'Kerajaan juga perlu prihatin dan peka terhadap nasib para nelayan yang bergantung rezeki sepenuhnya kepada sumber hasil laut. Malah, projek ini memberikan kesan buruk yang berpanjangan kepada alam sekitar selain menjejaskan mata pencarian para nelayan'
text = preprocessing(news_string)
new_vector = str_idx([text], dictionary, len(text))
result = test_sess.run([tf.nn.sigmoid(logits), alphas, tf.nn.sigmoid(logits_seq)], feed_dict = {x: new_vector})
plt.figure(figsize = (15, 7))
labels = [word for word in text]
val = [val for val in result[1]]
plt.bar(np.arange(len(labels)), val)
plt.xticks(np.arange(len(labels)), labels, rotation = 'vertical')
plt.show()
result[2]
# Regression with Amazon SageMaker XGBoost algorithm
_**Single machine training for regression with Amazon SageMaker XGBoost algorithm**_
---
---
## Contents
1. [Introduction](#Introduction)
2. [Setup](#Setup)
1. [Fetching the dataset](#Fetching-the-dataset)
2. [Data Ingestion](#Data-ingestion)
3. [Training the XGBoost model](#Training-the-XGBoost-model)
1. [Plotting evaluation metrics](#Plotting-evaluation-metrics)
4. [Set up hosting for the model](#Set-up-hosting-for-the-model)
1. [Import model into hosting](#Import-model-into-hosting)
2. [Create endpoint configuration](#Create-endpoint-configuration)
3. [Create endpoint](#Create-endpoint)
5. [Validate the model for use](#Validate-the-model-for-use)
---
## Introduction
This notebook demonstrates the use of Amazon SageMaker's implementation of the XGBoost algorithm to train and host a regression model. We use the [Abalone data](https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/regression.html) originally from the [UCI data repository](https://archive.ics.uci.edu/ml/datasets/abalone). More details about the original dataset can be found [here](https://archive.ics.uci.edu/ml/machine-learning-databases/abalone/abalone.names). In the libsvm converted [version](https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/regression.html), the nominal feature (Male/Female/Infant) has been converted into a real-valued feature. Age of abalone is to be predicted from eight physical measurements.
## Prerequisites
Ensure that the latest SageMaker SDK is installed. Note that a major version upgrade of the SDK may deprecate some APIs.
```
!pip install -qU awscli boto3 sagemaker
```
## Setup
This notebook was created and tested on an ml.m4.4xlarge notebook instance.
Let's start by specifying:
1. The S3 bucket and prefix that you want to use for training and model data. This should be within the same region as the Notebook Instance, training, and hosting.
1. The IAM role ARN used to give training and hosting access to your data. See the documentation for how to create these. Note: if more than one role is required for notebook instances, training, and/or hosting, please replace the boto regexp with the appropriate full IAM role ARN string(s).
```
%%time
import os
import boto3
import re
import sagemaker
role = sagemaker.get_execution_role()
region = boto3.Session().region_name
# S3 bucket for saving code and model artifacts.
# Feel free to specify a different bucket and prefix
bucket = sagemaker.Session().default_bucket()
prefix = 'sagemaker/DEMO-xgboost-abalone-default'
# customize to your bucket where you have stored the data
bucket_path = 'https://s3-{}.amazonaws.com/{}'.format(region, bucket)
```
### Fetching the dataset
The following methods split the data into train/validation/test datasets and upload the files to S3.
```
%%time
import io
import boto3
import random
def data_split(FILE_DATA, FILE_TRAIN, FILE_VALIDATION, FILE_TEST, PERCENT_TRAIN, PERCENT_VALIDATION, PERCENT_TEST):
data = [l for l in open(FILE_DATA, 'r')]
train_file = open(FILE_TRAIN, 'w')
valid_file = open(FILE_VALIDATION, 'w')
tests_file = open(FILE_TEST, 'w')
num_of_data = len(data)
num_train = int((PERCENT_TRAIN/100.0)*num_of_data)
num_valid = int((PERCENT_VALIDATION/100.0)*num_of_data)
num_tests = int((PERCENT_TEST/100.0)*num_of_data)
data_fractions = [num_train, num_valid, num_tests]
split_data = [[],[],[]]
rand_data_ind = 0
for split_ind, fraction in enumerate(data_fractions):
for i in range(fraction):
rand_data_ind = random.randint(0, len(data)-1)
split_data[split_ind].append(data[rand_data_ind])
data.pop(rand_data_ind)
for l in split_data[0]:
train_file.write(l)
for l in split_data[1]:
valid_file.write(l)
for l in split_data[2]:
tests_file.write(l)
train_file.close()
valid_file.close()
tests_file.close()
def write_to_s3(fobj, bucket, key):
return boto3.Session(region_name=region).resource('s3').Bucket(bucket).Object(key).upload_fileobj(fobj)
def upload_to_s3(bucket, channel, filename):
fobj=open(filename, 'rb')
key = prefix+'/'+channel
url = 's3://{}/{}/{}'.format(bucket, key, filename)
print('Writing to {}'.format(url))
write_to_s3(fobj, bucket, key)
```
### Data ingestion
Next, we read the dataset from the existing repository into memory, for preprocessing prior to training. This processing could be done *in situ* by Amazon Athena, Apache Spark in Amazon EMR, Amazon Redshift, etc., assuming the dataset is present in the appropriate location. Then, the next step would be to transfer the data to S3 for use in training. For small datasets, such as this one, reading into memory isn't onerous, though it would be for larger datasets.
```
%%time
import urllib.request
# Load the dataset
FILE_DATA = 'abalone'
urllib.request.urlretrieve("https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/regression/abalone", FILE_DATA)
#split the downloaded data into train/test/validation files
FILE_TRAIN = 'abalone.train'
FILE_VALIDATION = 'abalone.validation'
FILE_TEST = 'abalone.test'
PERCENT_TRAIN = 70
PERCENT_VALIDATION = 15
PERCENT_TEST = 15
data_split(FILE_DATA, FILE_TRAIN, FILE_VALIDATION, FILE_TEST, PERCENT_TRAIN, PERCENT_VALIDATION, PERCENT_TEST)
#upload the files to the S3 bucket
upload_to_s3(bucket, 'train', FILE_TRAIN)
upload_to_s3(bucket, 'validation', FILE_VALIDATION)
upload_to_s3(bucket, 'test', FILE_TEST)
```
## Training the XGBoost model
After setting the training parameters, we kick off training and poll for status until training is completed, which in this example takes between 5 and 6 minutes.
```
from sagemaker.amazon.amazon_estimator import get_image_uri
container = get_image_uri(region, 'xgboost', '1.0-1')
%%time
import boto3
from time import gmtime, strftime
job_name = 'DEMO-xgboost-regression-' + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
print("Training job", job_name)
#Ensure that the training and validation data folders generated above are reflected in the "InputDataConfig" parameter below.
create_training_params = \
{
"AlgorithmSpecification": {
"TrainingImage": container,
"TrainingInputMode": "File"
},
"RoleArn": role,
"OutputDataConfig": {
"S3OutputPath": bucket_path + "/" + prefix + "/single-xgboost"
},
"ResourceConfig": {
"InstanceCount": 1,
"InstanceType": "ml.m5.2xlarge",
"VolumeSizeInGB": 5
},
"TrainingJobName": job_name,
"HyperParameters": {
"max_depth":"5",
"eta":"0.2",
"gamma":"4",
"min_child_weight":"6",
"subsample":"0.7",
"silent":"0",
"objective":"reg:linear",
"num_round":"50"
},
"StoppingCondition": {
"MaxRuntimeInSeconds": 3600
},
"InputDataConfig": [
{
"ChannelName": "train",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": bucket_path + "/" + prefix + '/train',
"S3DataDistributionType": "FullyReplicated"
}
},
"ContentType": "libsvm",
"CompressionType": "None"
},
{
"ChannelName": "validation",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": bucket_path + "/" + prefix + '/validation',
"S3DataDistributionType": "FullyReplicated"
}
},
"ContentType": "libsvm",
"CompressionType": "None"
}
]
}
client = boto3.client('sagemaker', region_name=region)
client.create_training_job(**create_training_params)
import time
status = client.describe_training_job(TrainingJobName=job_name)['TrainingJobStatus']
print(status)
while status !='Completed' and status!='Failed':
time.sleep(60)
status = client.describe_training_job(TrainingJobName=job_name)['TrainingJobStatus']
print(status)
```
Note that the "validation" channel has been initialized too. The SageMaker XGBoost algorithm calculates RMSE on the data passed to the "validation" channel and writes it to the CloudWatch logs.
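If you prefer to check that metric programmatically instead of opening the CloudWatch console, one optional sketch (not part of the original walkthrough; it assumes the training job above has completed) is to read the final metrics reported by `describe_training_job`:
```
# Optional: print the final metrics (e.g. train:rmse / validation:rmse) reported by the training job
job_info = client.describe_training_job(TrainingJobName=job_name)
for metric in job_info.get('FinalMetricDataList', []):
    print(metric['MetricName'], metric['Value'])
```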
## Set up hosting for the model
In order to set up hosting, we have to import the model from training to hosting.
### Import model into hosting
Register the model with hosting. This allows the flexibility of importing models trained elsewhere.
```
%%time
import boto3
from time import gmtime, strftime
model_name=job_name + '-model'
print(model_name)
info = client.describe_training_job(TrainingJobName=job_name)
model_data = info['ModelArtifacts']['S3ModelArtifacts']
print(model_data)
primary_container = {
'Image': container,
'ModelDataUrl': model_data
}
create_model_response = client.create_model(
ModelName = model_name,
ExecutionRoleArn = role,
PrimaryContainer = primary_container)
print(create_model_response['ModelArn'])
```
### Create endpoint configuration
SageMaker supports configuring REST endpoints in hosting with multiple models, e.g. for A/B testing purposes. To support this, customers create an endpoint configuration that describes how traffic is distributed across the models, whether split, shadowed, or sampled in some way. In addition, the endpoint configuration describes the instance type required for model deployment.
```
from time import gmtime, strftime
endpoint_config_name = 'DEMO-XGBoostEndpointConfig-' + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
print(endpoint_config_name)
create_endpoint_config_response = client.create_endpoint_config(
EndpointConfigName = endpoint_config_name,
ProductionVariants=[{
'InstanceType':'ml.m5.xlarge',
'InitialVariantWeight':1,
'InitialInstanceCount':1,
'ModelName':model_name,
'VariantName':'AllTraffic'}])
print("Endpoint Config Arn: " + create_endpoint_config_response['EndpointConfigArn'])
```
### Create endpoint
Lastly, the customer creates the endpoint that serves up the model, by specifying the name and the configuration defined above. The end result is an endpoint that can be validated and incorporated into production applications. This takes 9-11 minutes to complete.
```
%%time
import time
endpoint_name = 'DEMO-XGBoostEndpoint-' + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
print(endpoint_name)
create_endpoint_response = client.create_endpoint(
EndpointName=endpoint_name,
EndpointConfigName=endpoint_config_name)
print(create_endpoint_response['EndpointArn'])
resp = client.describe_endpoint(EndpointName=endpoint_name)
status = resp['EndpointStatus']
while status=='Creating':
print("Status: " + status)
time.sleep(60)
resp = client.describe_endpoint(EndpointName=endpoint_name)
status = resp['EndpointStatus']
print("Arn: " + resp['EndpointArn'])
print("Status: " + status)
```
## Validate the model for use
Finally, the customer can now validate the model for use. They can obtain the endpoint from the client library using the results of the previous operations, and generate predictions from the trained model using that endpoint.
```
runtime_client = boto3.client('runtime.sagemaker', region_name=region)
```
Start with a single prediction.
```
!head -1 abalone.test > abalone.single.test
%%time
import json
from itertools import islice
import math
import struct
file_name = 'abalone.single.test' #customize to your test file
with open(file_name, 'r') as f:
payload = f.read().strip()
response = runtime_client.invoke_endpoint(EndpointName=endpoint_name,
ContentType='text/x-libsvm',
Body=payload)
result = response['Body'].read()
result = result.decode("utf-8")
result = result.split(',')
result = [math.ceil(float(i)) for i in result]
label = payload.strip(' ').split()[0]
print ('Label: ',label,'\nPrediction: ', result[0])
```
OK, a single prediction works. Let's run a whole batch to see how accurate the predictions are.
```
import sys
import math
def do_predict(data, endpoint_name, content_type):
payload = '\n'.join(data)
response = runtime_client.invoke_endpoint(EndpointName=endpoint_name,
ContentType=content_type,
Body=payload)
result = response['Body'].read()
result = result.decode("utf-8")
result = result.split(',')
preds = [float((num)) for num in result]
preds = [math.ceil(num) for num in preds]
return preds
def batch_predict(data, batch_size, endpoint_name, content_type):
items = len(data)
arrs = []
for offset in range(0, items, batch_size):
if offset+batch_size < items:
results = do_predict(data[offset:(offset+batch_size)], endpoint_name, content_type)
arrs.extend(results)
else:
arrs.extend(do_predict(data[offset:items], endpoint_name, content_type))
sys.stdout.write('.')
return(arrs)
```
The following helps us calculate the Median Absolute Percent Error (MdAPE) on the batch dataset.
```
%%time
import json
import numpy as np
with open(FILE_TEST, 'r') as f:
payload = f.read().strip()
labels = [int(line.split(' ')[0]) for line in payload.split('\n')]
test_data = [line for line in payload.split('\n')]
preds = batch_predict(test_data, 100, endpoint_name, 'text/x-libsvm')
print('\n Median Absolute Percent Error (MdAPE) = ', np.median(np.abs(np.array(labels) - np.array(preds)) / np.array(labels)))
```
### Delete Endpoint
Once you are done using the endpoint, you can use the following to delete it.
```
client.delete_endpoint(EndpointName=endpoint_name)
```
This example demonstrates the resolution of a regression problem using k-Nearest Neighbors, interpolating the target with both barycenter (distance) and constant (uniform) weights.
#### New to Plotly?
Plotly's Python library is free and open source! [Get started](https://plot.ly/python/getting-started/) by downloading the client and [reading the primer](https://plot.ly/python/getting-started/).
<br>You can set up Plotly to work in [online](https://plot.ly/python/getting-started/#initialization-for-online-plotting) or [offline](https://plot.ly/python/getting-started/#initialization-for-offline-plotting) mode, or in [jupyter notebooks](https://plot.ly/python/getting-started/#start-plotting-online).
<br>We also have a quick-reference [cheatsheet](https://images.plot.ly/plotly-documentation/images/python_cheat_sheet.pdf) (new!) to help you get started!
### Version
```
import sklearn
sklearn.__version__
```
### Imports
```
import plotly.plotly as py
import plotly.graph_objs as go
from plotly import tools
import numpy as np
from sklearn import neighbors
```
### Calculations
```
np.random.seed(0)
X = np.sort(5 * np.random.rand(40, 1), axis=0)
T = np.linspace(0, 5, 500)[:, np.newaxis]
y = np.sin(X).ravel()
# Add noise to targets
y[::5] += 1 * (0.5 - np.random.rand(8))
def data_to_plotly(x):
k = []
for i in range(0, len(x)):
k.append(x[i][0])
return k
```
### Plot Results
```
data = [[], []]
titles = []
n_neighbors = 5
for i, weights in enumerate(['uniform', 'distance']):
knn = neighbors.KNeighborsRegressor(n_neighbors, weights=weights)
y_ = knn.fit(X, y).predict(T)
if(i==0):
leg=True
else:
leg=False
p1 = go.Scatter(x=data_to_plotly(X), y=y,
mode='markers', showlegend=leg,
marker=dict(color='black'),
name='data')
p2 = go.Scatter(x=data_to_plotly(T), y=y_,
mode='lines', showlegend=leg,
line=dict(color='green'),
name='prediction')
data[i].append(p1)
data[i].append(p2)
titles.append("KNeighborsRegressor (k = %i, weights = '%s')" % (n_neighbors,
weights))
fig = tools.make_subplots(rows=2, cols=1,
subplot_titles=tuple(titles),
print_grid=False)
for i in range(0, len(data)):
for j in range(0, len(data[i])):
fig.append_trace(data[i][j], i+1, 1)
fig['layout'].update(height=700, hovermode='closest')
for i in map(str, range(1, 3)):
x = 'xaxis' + i
y = 'yaxis' + i
fig['layout'][x].update(showgrid=False, zeroline=False)
fig['layout'][y].update(showgrid=False, zeroline=False)
py.iplot(fig)
from IPython.display import display, HTML
display(HTML('<link href="//fonts.googleapis.com/css?family=Open+Sans:600,400,300,200|Inconsolata|Ubuntu+Mono:400,700" rel="stylesheet" type="text/css" />'))
display(HTML('<link rel="stylesheet" type="text/css" href="http://help.plot.ly/documentation/all_static/css/ipython-notebook-custom.css">'))
! pip install git+https://github.com/plotly/publisher.git --upgrade
import publisher
publisher.publish(
'Nearest Neighbors Regression.ipynb', 'scikit-learn/plot-regression/', 'Nearest Neighbors Regression | plotly',
' ',
title = 'Nearest Neighbors Regression | plotly',
name = 'Nearest Neighbors Regression',
has_thumbnail='true', thumbnail='thumbnail/nearest-regression.jpg',
language='scikit-learn', page_type='example_index',
display_as='nearest_neighbors', order=1,
ipynb= '~Diksha_Gabha/3450')
```
# Hail workshop
This notebook will introduce the following concepts:
- Using Jupyter notebooks effectively
- Loading genetic data into Hail
- General-purpose data exploration functionality
- Plotting functionality
- Quality control of sequencing data
- Running a Genome-Wide Association Study (GWAS)
- Rare variant burden tests
## Hail on Jupyter
From https://jupyter.org:
"The Jupyter Notebook is an open-source web application that allows you to create and share documents that contain live code, equations, visualizations and narrative text. Uses include: data cleaning and transformation, numerical simulation, statistical modeling, data visualization, machine learning, and much more."
In the last year, the Jupyter development team [released Jupyter Lab](https://blog.jupyter.org/jupyterlab-is-ready-for-users-5a6f039b8906), an integrated environment for data, code, and visualizations. If you've used R Studio, this is the closest thing that works in Python (and many other languages!).
### Why notebooks?
Part of what we think is so exciting about Hail is that it has coincided with a larger shift in the data science community.
Three years ago, most computational biologists at Broad analyzed genetic data using command-line tools, and took advantage of research compute clusters by explicitly using scheduling frameworks like LSF or Sun Grid Engine.
Now, they have the option to use Hail in interactive Python notebooks backed by thousands of cores on public compute clouds like [Google Cloud](https://cloud.google.com/), [Amazon Web Services](https://aws.amazon.com/), or [Microsoft Azure](https://azure.microsoft.com/).
# Using Jupyter
### Running cells
Evaluate cells using `SHIFT + ENTER`. Select the next cell and run it.
```
print('Hello, world')
```
### Modes
Jupyter has two modes, a **navigation mode** and an **editor mode**.
#### Navigation mode:
- <font color="blue"><strong>BLUE</strong></font> cell borders
- `UP` / `DOWN` move between cells
- `ENTER` while a cell is selected will move to **editing mode**.
- Many letters are keyboard shortcuts! This is a common trap.
#### Editor mode:
- <font color="green"><strong>GREEN</strong></font> cell borders
- `UP` / `DOWN`/ move within cells before moving between cells.
- `ESC` will return to **navigation mode**.
- `SHIFT + ENTER` will evaluate a cell and return to **navigation mode**.
### Cell types
There are several types of cells in Jupyter notebooks. The two you will see here are **Markdown** (text) and **Code**.
```
# This is a code cell
my_variable = 5
```
**This is a markdown cell**, so even if something looks like code (as below), it won't get executed!
my_variable += 1
```
print(my_variable)
```
### Common gotcha: a code cell turns into markdown
This can happen if you are in **navigation mode** and hit the keyboard shortcut `m` while selecting a code cell.
You can either navigate to `Cell > Cell Type > Code` through the top menu, or use the keyboard shortcut `y` to turn it back to code.
### Tips and tricks
Keyboard shortcuts:
- `SHIFT + ENTER` to evaluate a cell
- `ESC` to return to navigation mode
- `y` to turn a markdown cell into code
- `m` to turn a code cell into markdown
- `a` to add a new cell **above** the currently selected cell
- `b` to add a new cell **below** the currently selected cell
- `d, d` (repeated) to delete the currently selected cell
- `TAB` to activate code completion
To try this out, create a new cell below this one using `b`, and print `my_variable` by starting with `print(my` and pressing `TAB`!
### Common gotcha: the state of your code seems wrong
Jupyter makes it easy to get yourself into trouble by executing cells out-of-order, or multiple times.
For example, if I declare `x`:
```
x = 5
```
Then have a cell that reads:
```
x += 1
```
And finally:
```
print(x)
```
If you execute these cells in order and once, I'll see the notebook print `6`. However, there is **nothing stopping you** from executing the middle cell ten times, printing `16`!
### Solution
If you get yourself into trouble in this way, the solution is to clear the kernel (Python process) and start again from the top.
First, `Kernel > Restart & Clear Output > (accept dialog)`.
Second, `Cell > Run all above`.
# Set up our Python environment
In addition to Hail, we import a few methods from the [bokeh](https://bokeh.pydata.org/en/latest/) plotting library. We'll see examples soon!
```
import hail as hl
from bokeh.io import output_notebook, show
```
Now we initialize Hail and set up Bokeh to display inline in the notebook.
```
hl.init()
output_notebook()
```
# Download public 1000 Genomes data
The workshop materials are designed to work on a small (~20MB) downsampled chunk of the public 1000 Genomes dataset.
You can run these same functions on your computer or on the cloud!
```
hl.utils.get_1kg('data/')
```
It is possible to call command-line utilities from Jupyter by prefixing a line with a `!`:
```
! ls -1 data/
```
# Part 1: Explore genetic data with Hail
### Import data from VCF
The [Variant Call Format (VCF)](https://en.wikipedia.org/wiki/Variant_Call_Format) is a common file format for representing genetic data collected on multiple individuals (samples).
Hail's [import_vcf](https://hail.is/docs/0.2/methods/impex.html#hail.methods.import_vcf) function can read this format.
However, VCF is a text format that is easy for humans but very bad for computers. The first thing we do is `write` to a Hail native file format, which is much faster!
```
hl.import_vcf('data/1kg.vcf.bgz').write('data/1kg.mt', overwrite=True)
```
### Read 1KG into Hail
We represent genetic data as a Hail [MatrixTable](https://hail.is/docs/0.2/hailpedia/matrix_table.html), and name our variable `mt` to indicate this.
```
mt = hl.read_matrix_table('data/1kg.mt')
```
### What is a `MatrixTable`?
Let's describe it!
The `describe` method prints the **schema**, that is, the fields in the dataset and their types.
You can see:
- **numeric** types:
- integers (`int32`, `int64`), e.g. `5`
- floating point numbers (`float32`, `float64`), e.g. `5.5` or `3e-8`
- **strings** (`str`), e.g. `"Foo"`
- **boolean** values (`bool`) e.g. `True`
- **collections**:
- arrays (`array`), e.g. `[1,1,2,3]`
- sets (`set`), e.g. `{1,3}`
- dictionaries (`dict`), e.g. `{'Foo': 5, 'Bar': 10}`
- **genetic data types**:
- loci (`locus`), e.g. `[GRCh37] 1:10000` or `[GRCh38] chr1:10024`
- genotype calls (`call`), e.g. `0/2` or `1|0`
```
mt.describe()
```
#### `count`
`MatrixTable.count` returns a tuple with the number of rows (variants) and number of columns (samples).
```
mt.count()
```
#### `show`
There is no `mt.show()` method, but you can show individual fields like the sample ID, `s`, or the locus.
```
mt.s.show(5)
mt.locus.show(5)
```
### <font color="brightred"><strong>Exercise: </strong></font> show other fields
You can see the names of fields above. `show()` the first few values for a few of them, making sure to include at least one **row field** and **at least one entry field**. Capitalization is important.
To print fields inside the `info` structure, you must add another dot, e.g. `mt.info.AN`.
What do you notice being printed alongside some of the fields?
### Hail has functions built for genetics
For example, `hl.summarize_variants` prints useful statistics about the genetic variants in the dataset.
```
hl.summarize_variants(mt)
```
### Most of Hail's functionality is totally general-purpose!
Functions like `summarize_variants` are built out of Hail's general-purpose data manipulation functionality. We can use Hail to ask arbitrary questions about the data:
```
mt.aggregate_rows(hl.agg.count_where(mt.alleles == ['A', 'T']))
```
Or if we had flight data:
```
flight_data.aggregate(
hl.agg.count_where(flight_data.departure_city == 'Boston')
)
```
The `counter` aggregator makes it possible to see distributions of categorical data, like alleles:
```
snp_counts = mt.aggregate_rows(
hl.array(hl.agg.counter(mt.alleles)))
snp_counts
```
By sorting the result in Python, we can recover an interesting bit of biology...
```
sorted(snp_counts,
key=lambda x: x[1])
```
### <font color="brightred"><strong>Question: </strong></font> What is interesting about this distribution?
### <font color="brightred"><strong>Question: </strong></font> Why do the counts come in pairs?
## A closer look at GQ
The GQ field in our dataset is an interesting one to explore further, and we can use various pieces of Hail's functionality to do so.
**GQ** stands for **Genotype Quality**, and reflects confidence in a genotype call. It is a non-negative **integer** truncated at 99, and is the **phred-scaled** probability of the second-most-likely genotype call.
Phred-scaling a value is the following transformation:
$\quad Phred(x) = -10 * log_{10}(x)$
#### Example:
$\quad p_{0/0} = 0.9899$
$\quad p_{0/1} = 0.01$
$\quad p_{1/1} = 0.001$
In this case,
$\quad GQ = -10 * log_{10} (0.01) = 20$
Higher GQ values indicate higher confidence. $GQ=10$ is 90% confidence, $GQ=20$ is 99% confidence, $GQ=30$ is 99.9% confidence, and so on.
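To make the arithmetic concrete, here is a tiny pure-Python sketch (not part of the original workshop) of the phred transformation:
```
import math

def phred(p):
    # phred scale: -10 * log10(p)
    return -10 * math.log10(p)

phred(0.01)   # 20.0, i.e. 99% confidence
phred(0.001)  # 30.0, i.e. 99.9% confidence
```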
```
mt.aggregate_entries(hl.agg.stats(mt.GQ))
```
Using our equation above, the mean value indicates about 99.9% confidence. But it's not generally a good idea to draw conclusions just based on a mean and standard deviation...
It is possible to build more complicated queries using small pieces. We can use `hl.agg.filter` to compute conditional statistics:
```
mt.aggregate_entries(hl.agg.filter(mt.GT.is_het(),
hl.agg.stats(mt.GQ)))
```
To look at GQ at genotypes that are **not heterozygous**, we need to add only one character (`~`):
```
mt.aggregate_entries(hl.agg.filter(~mt.GT.is_het(),
hl.agg.stats(mt.GQ)))
```
There are often many ways to accomplish something in Hail. We could have done both of these together (and more efficiently!) using `hl.agg.group_by`:
```
mt.aggregate_entries(hl.agg.group_by(mt.GT,
hl.agg.stats(mt.GQ)))
```
Of course, the best way to understand a distribution is to **look** at it!
```
p = hl.plot.histogram(
mt.GQ,
bins=100)
show(p)
```
### <font color="brightred"><strong>Exercise: </strong></font> What's going on here? Investigate!
*Hint: try copying some of the cells above and looking at DP, the sequencing depth, as well as GQ. The ratio between the two may also be interesting...*
*Hint: if you want to plot a filtered GQ distribution, you can use something like:*
p = hl.plot.histogram(mt.filter_entries(mt.GT.is_het()).GQ, bins=100)
Remember that you can create a new cell using keyboard shortcuts `A` or `B` in **navigation mode**.
# Part 2: Annotation and quality control
## Integrate sample information
We're building toward a genome-wide association test in part 3, but we don't just need genetic data to do a GWAS -- we also need phenotype data! Luckily, our `hl.utils.get_1kg` function also downloaded some simulated phenotype data.
This is a text file:
```
! head data/1kg_annotations.txt
```
We can import it as a [Hail Table](https://hail.is/docs/0.2/hailpedia/table.html) with [hl.import_table](https://hail.is/docs/0.2/methods/impex.html?highlight=import_table#hail.methods.import_table).
We call it "sa" for "sample annotations".
```
sa = hl.import_table('data/1kg_annotations.txt',
impute=True,
key='Sample')
```
While we can see the names and types of fields in the logging messages, we can also `describe` and `show` this table:
```
sa.describe()
sa.show()
```
## Add sample metadata into our 1KG `MatrixTable`
It's short and easy:
```
mt = mt.annotate_cols(pheno = sa[mt.s])
```
### What's going on here?
Understanding what's going on here is a bit more difficult. We need to break it down into a few pieces:
#### 1. `annotate` methods
In Hail, `annotate` methods refer to **adding new fields**.
- `MatrixTable`'s `annotate_cols` adds new column fields.
- `MatrixTable`'s `annotate_rows` adds new row fields.
- `MatrixTable`'s `annotate_entries` adds new entry fields.
- `Table`'s `annotate` adds new row fields.
In the above cell, we are adding a new column field called "pheno". This field should be the values in our table `sa` associated with the sample ID `s` in our `MatrixTable` - that is, this is performing a **join**.
Python uses square brackets to look up values in dictionaries:
d = {'foo': 5, 'bar': 10}
d['foo']
You should think of this in much the same way - for each column of `mt`, we are looking up the fields in `sa` using the sample ID `s`.
```
mt.describe()
```
### <font color="brightred"><strong>Exercise: </strong></font> Query some of these column fields using `mt.aggregate_cols`.
Some of the aggregators we used earlier:
- `hl.agg.counter`
- `hl.agg.stats`
- `hl.agg.count_where`
## Sample QC
We'll start with examples of sample QC.
Hail has the function [hl.sample_qc](https://hail.is/docs/0.2/methods/genetics.html#hail.methods.sample_qc) to compute a list of useful statistics about samples from sequencing data.
**Click the link** above to see the documentation, which lists the fields and their descriptions.
```
mt = hl.sample_qc(mt)
mt.sample_qc.describe()
p = hl.plot.scatter(x=mt.sample_qc.r_het_hom_var,
y=mt.sample_qc.call_rate)
show(p)
```
### <font color="brightred"><strong>Exercise: </strong></font> Plot some other fields!
Modify the cell above. Remember `hl.plot.histogram` as well!
If you want to start getting fancy, you can plot more complicated expressions -- the ratio between two fields, for instance.
### Filter columns using generated QC statistics
```
mt = mt.filter_cols(mt.sample_qc.dp_stats.mean >= 4)
mt = mt.filter_cols(mt.sample_qc.call_rate >= 0.97)
```
## Entry QC
We explored GQ above, and analysts often set thresholds for GQ to filter entries (genotypes). Another useful metric is **allele read balance**.
This value is defined by:
$\quad AB = \dfrac{N_{alt}}{{N_{ref} + N_{alt}}}$
Where $N_{ref}$ is the number of reference reads and $N_{alt}$ is the number of alternate reads.
We want to keep only those genotype calls whose allele balance is consistent with the called genotype:
```
# call rate before filtering
mt.aggregate_entries(hl.agg.fraction(hl.is_defined(mt.GT)))
ab = mt.AD[1] / hl.sum(mt.AD)
filter_condition_ab = (
hl.case()
.when(mt.GT.is_hom_ref(), ab <= 0.1)
.when(mt.GT.is_het(), (ab >= 0.25) & (ab <= 0.75))
.default(ab >= 0.9) # hom-var
)
mt = mt.filter_entries(filter_condition_ab)
# call rate after filtering
mt.aggregate_entries(hl.agg.fraction(hl.is_defined(mt.GT)))
```
## Variant QC
Hail has the function [hl.variant_qc](https://hail.is/docs/0.2/methods/genetics.html#hail.methods.variant_qc) to compute a list of useful statistics about **variants** from sequencing data.
Once again, **Click the link** above to see the documentation!
```
mt = hl.variant_qc(mt)
mt.variant_qc.describe()
mt.variant_qc.AF.show()
```
### Remove rare sites:
```
mt = mt.filter_rows(hl.min(mt.variant_qc.AF) > 1e-6)
```
### Remove sites far from [Hardy-Weinberg equilbrium](https://en.wikipedia.org/wiki/Hardy%E2%80%93Weinberg_principle):
```
mt = mt.filter_rows(mt.variant_qc.p_value_hwe > 0.005)
# final variant and sample count
mt.count()
```
# Part 3: GWAS!
A GWAS is an independent association test performed per variant of a genetic dataset. We use the same phenotype and covariates, but test the genotypes for each variant separately.
In Hail, the method we use is [hl.linear_regression_rows](https://hail.is/docs/0.2/methods/stats.html#hail.methods.linear_regression_rows).
We use the phenotype `CaffeineConsumption` as our dependent variable, the number of alternate alleles as our independent variable, and no covariates besides an intercept term (that's the `1.0`).
```
gwas = hl.linear_regression_rows(y=mt.pheno.CaffeineConsumption,
x=mt.GT.n_alt_alleles(),
covariates=[1.0])
gwas.describe()
```
Two of the plots that analysts generally produce are a [Manhattan plot](https://en.wikipedia.org/wiki/Manhattan_plot) and a [Q-Q plot](https://en.wikipedia.org/wiki/Q%E2%80%93Q_plot).
We'll start with the Manhattan plot:
```
p = hl.plot.manhattan(gwas.p_value)
show(p)
p = hl.plot.qq(gwas.p_value)
show(p)
```
## Confounded!
The Q-Q plot indicates **extreme** inflation of p-values.
If you've done a GWAS before, you've probably included a few other covariates -- age, sex, and principal components.
Principal components are a measure of genetic ancestry, and can be used to control for [population stratification](https://en.wikipedia.org/wiki/Population_stratification).
We can compute principal components with Hail:
```
pca_eigenvalues, pca_scores, pca_loadings = hl.hwe_normalized_pca(mt.GT, compute_loadings=True)
```
The **eigenvalues** reflect the amount of variance explained by each principal component:
```
pca_eigenvalues
```
The **scores** are the principal components themselves, computed per sample.
```
pca_scores.describe()
pca_scores.show()
```
The **loadings** are the contributions to each component for each variant.
```
pca_loadings.describe()
```
We can **annotate** the principal components back onto `mt`:
```
mt = mt.annotate_cols(pca = pca_scores[mt.s])
```
## Principal components measure ancestry
```
p = hl.plot.scatter(mt.pca.scores[0],
mt.pca.scores[1],
label=mt.pheno.SuperPopulation)
show(p)
```
### <font color="brightred"><strong>Question: </strong></font> Does your plot match your neighbors'?
If not, how is it different?
## Control confounders and run another GWAS
```
gwas = hl.linear_regression_rows(
y=mt.pheno.CaffeineConsumption,
x=mt.GT.n_alt_alleles(),
covariates=[1.0, mt.pheno.isFemale, mt.pca.scores[0], mt.pca.scores[1], mt.pca.scores[2]])
p = hl.plot.qq(gwas.p_value)
show(p)
p = hl.plot.manhattan(gwas.p_value)
show(p)
```
# Part 4: Burden tests
GWAS is a great tool for finding associations between **common variants** and disease, but a GWAS can't hope to find associations between rare variants and disease. Even if we have sequencing data for 1,000,000 people, we won't have the statistical power to link a mutation found in only a few people to any disease.
But rare variation has lots of information - especially because statistical genetic theory dictates that rarer variants have, on average, stronger effects on disease per allele.
One possible strategy is to **group together rare variants with similar predicted consequence**. For example, we can group all variants that are predicted to knock out the function of each gene and test the variants for each gene as a group.
We will be running a burden test on our common variant dataset to demonstrate the technical side, but we shouldn't hope to find anything here -- especially because we've only got 10,000 variants!
### Import gene data
We start by importing data about genes.
First, we need to download it:
```
! wget https://storage.googleapis.com/hail-tutorial/ensembl_gene_annotations.txt -O data/ensembl_gene_annotations.txt
gene_ht = hl.import_table('data/ensembl_gene_annotations.txt', impute=True)
gene_ht.show()
gene_ht.count()
```
### Create an interval key
```
gene_ht = gene_ht.transmute(interval = hl.locus_interval(gene_ht['Chromosome'],
gene_ht['Gene start'],
gene_ht['Gene end']))
gene_ht = gene_ht.key_by('interval')
```
### Annotate variants using these intervals
```
mt = mt.annotate_rows(gene_info = gene_ht[mt.locus])
mt.gene_info.show()
```
### Aggregate genotypes per gene
There is no `hl.burden_test` function -- instead, a burden test is the composition of two modular pieces of Hail functionality:
- `group_rows_by / aggregate`
- `hl.linear_regression_rows`.
While this might be a few more lines of code to write than `hl.burden_test`, it means that you can flexibly specify the genotype aggregation however you like. Using other tools, you may have a few ways to aggregate, but if you want to do something different, you are out of luck!
```
burden_mt = (
mt
.group_rows_by(gene = mt.gene_info['Gene name'])
.aggregate(n_variants = hl.agg.count_where(mt.GT.n_alt_alleles() > 0))
)
burden_mt.describe()
```
### What is `burden_mt`?
It is a **gene-by-sample** matrix (compare to `mt`, a **variant-by-sample** matrix).
It has one row field, the `gene`.
It has one entry field, `n_variants`.
It has all the column fields from `mt`.
### Run linear regression per gene
This should look familiar!
```
burden_results = hl.linear_regression_rows(
y=burden_mt.pheno.CaffeineConsumption,
x=burden_mt.n_variants,
covariates=[1.0,
burden_mt.pheno.isFemale,
burden_mt.pca.scores[0],
burden_mt.pca.scores[1],
burden_mt.pca.scores[2]])
```
### Sorry, no `hl.plot.manhattan` for genes!
Instead, we can sort by p-value and print:
```
burden_results.order_by(burden_results.p_value).show()
```
### <font color="brightred"><strong>Exercise: </strong></font> Where along the genome can we find the top gene?
### <font color="brightred"><strong>Statistics question: </strong></font> What is the significance threshold for a burden test?
Is this top gene genome-wide significant?
_This notebook contains code and comments from Section 3.2 of the book [Ensemble Methods for Machine Learning](https://www.manning.com/books/ensemble-methods-for-machine-learning). Please see the book for additional details on this topic. This notebook and code are released under the [MIT license](https://github.com/gkunapuli/ensemble-methods-notebooks/blob/master/LICENSE)._
---
## 3.2 Combining predictions by weighting
First, we copy over the data generation stuff and the functions ``fit`` and ``predict_individual`` exactly as is from Section 3.1. (An alternate way to do this would be to import the functions from the notebook using the ``ipynb`` package).
```
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
X, y = make_moons(600, noise=0.25, random_state=13)
X, Xval, y, yval = train_test_split(X, y, test_size=0.25) # Set aside 25% of data for validation
Xtrn, Xtst, ytrn, ytst = train_test_split(X, y, test_size=0.25) # Set aside a further 25% of data for hold-out test
# --- Some code to suppress warnings generated due to versioning changes in sklearn and scipy
import warnings
warnings.filterwarnings("ignore", message="numpy.dtype size changed")
warnings.filterwarnings("ignore", message="numpy.ufunc size changed")
# --- Can be removed at a future date
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import GaussianNB
estimators = [('dt', DecisionTreeClassifier(max_depth=5)),
('svm', SVC(gamma=1.0, C=1.0, probability=True)),
('gp', GaussianProcessClassifier(RBF(1.0))),
('3nn', KNeighborsClassifier(n_neighbors=3)),
('rf',RandomForestClassifier(max_depth=3, n_estimators=25)),
('gnb', GaussianNB())]
def fit(estimators, X, y):
for model, estimator in estimators:
estimator.fit(X, y)
return estimators
import numpy as np
def predict_individual(X, estimators, proba=False):
n_estimators = len(estimators)
n_samples = X.shape[0]
y = np.zeros((n_samples, n_estimators))
for i, (model, estimator) in enumerate(estimators):
if proba:
y[:, i] = estimator.predict_proba(X)[:, 1]
else:
y[:, i] = estimator.predict(X)
return y
```
Fit the estimators to the synthetic data set.
```
estimators = fit(estimators, Xtrn, ytrn)
```
---
### 3.2.1 Majority Vote
**Listing 3.3**: Combine predictions using majority vote.
The listing below combines the individual predictions `y_individual` from a heterogeneous set of base estimators using majority voting. Note that since the weights of the base estimators are all equal, we do not explicitly compute them.
```
from scipy.stats import mode
def combine_using_majority_vote(X, estimators):
y_individual = predict_individual(X, estimators, proba=False)
y_final = mode(y_individual, axis=1)
return y_final[0].reshape(-1, )
from sklearn.metrics import accuracy_score
ypred = combine_using_majority_vote(Xtst, estimators)
tst_err = 1 - accuracy_score(ytst, ypred)
tst_err
```
---
### 3.2.2 Accuracy weighting
**Listing 3.4**: Combine using accuracy weighting
Accuracy weighting is a performance-based weighting strategy, where higher-performing base estimators in the ensemble are assigned higher weights, so that they contribute more to the final prediction.
Once we have trained each base classifier, we evaluate its performance on a validation set. Let $\alpha_t$ be the validation accuracy of the $t$-th classifier, $H_t$. The weight of each base classifier is then computed as $w_t = \frac{\alpha_t}{\sum_{t=1}^m \, \alpha_t}$.
The denominator is a normalization term: the sum of all the individual validation accuracies. This computation ensures that a classifier's weight is proportional to its accuracy and all the weights sum to 1.
```
def combine_using_accuracy_weighting(X, estimators, Xval, yval):
n_estimators = len(estimators)
yval_individual = predict_individual(Xval, estimators, proba=False)
wts = [accuracy_score(yval, yval_individual[:, i])
for i in range(n_estimators)]
wts /= np.sum(wts)
ypred_individual = predict_individual(X, estimators, proba=False)
y_final = np.dot(ypred_individual, wts)
return np.round(y_final)
ypred = combine_using_accuracy_weighting(Xtst, estimators, Xval, yval)
tst_err = 1 - accuracy_score(ytst, ypred)
tst_err
```
---
### 3.2.3 Entropy weighting
The entropy weighting approach is another performance-based weighting approach, except that it uses entropy as the evaluation metric to judge the value of each base estimator. [Entropy](https://en.wikipedia.org/wiki/Entropy_(information_theory)) is a measure of uncertainty or impurity in a set; a more disorderly set will have higher entropy.
**Listing 3.5** Computing entropy.
```
def entropy(y):
_, counts = np.unique(y, return_counts=True)
p = np.array(counts.astype('float') / len(y))
ent = -p.T @ np.log2(p)
return ent
```
**Listing 3.6**: Combine using entropy weighting
Let $E_t$ be the validation entropy of the $t$-th classifier, $H_t$. The weight of each base classifier is then computed as $w_t = \frac{(1/E_t)}{\sum_{t=1}^m \, (1/E_t)}$. Contrast this with accuracy weighting, and observe that the entropies are inverted. This is because lower entropies are desirable, much like higher accuracies are desirable in a model.
```
def combine_using_entropy_weighting(X, estimators, Xval, yval):
n_estimators = len(estimators)
yval_individual = predict_individual(Xval, estimators, proba=False)
wts = [1/entropy(yval_individual[:, i])
for i in range(n_estimators)]
wts /= np.sum(wts)
ypred_individual = predict_individual(X, estimators, proba=False)
y_final = np.dot(ypred_individual, wts)
return np.round(y_final)
ypred = combine_using_entropy_weighting(Xtst, estimators, Xval, yval)
tst_err = 1 - accuracy_score(ytst, ypred)
tst_err
```
---
### 3.2.4 Dempster-Shafer combination
[Dempster-Shafer Theory](https://en.wikipedia.org/wiki/Dempster%E2%80%93Shafer_theory) is a generalization of probability theory that supports reasoning under uncertainty and with incomplete knowledge. While the foundations of DST are beyond the scope of this book, the theory itself provides a way to fuse beliefs and evidence from multiple sources into one single belief.
DST uses a number between 0 and 1 to indicate belief in a proposition, such as "the test example x belongs to Class 1". This number is known as a basic probability assignment (BPA) and expresses the certainty that the test example x belongs to Class 1. BPA values closer to 1 characterize decisions made with more certainty. The BPA allows us to translate an estimator's confidence to a belief over what the true label is.
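As a rough illustration (not from the book), the snippet below applies the same BPA-style fusion used in the listing that follows to two hypothetical classifiers that report class-1 probabilities of 0.8 and 0.6 for a single test example; since both lean towards Class 1, the fused belief does too.
```
# Toy illustration (not from the book): fuse the class-1 probabilities of two
# hypothetical classifiers, mirroring the arithmetic of Listing 3.7 below.
import numpy as np

p_individual = np.array([[0.8, 0.6]])   # one test example, two classifiers' P(class 1)
bpa0 = 1.0 - np.prod(p_individual, axis=1) - 1e-6        # evidence for Class 0
bpa1 = 1.0 - np.prod(1.0 - p_individual, axis=1) - 1e-6  # evidence for Class 1
belief = np.vstack([bpa0 / (1 - bpa0), bpa1 / (1 - bpa1)]).T
print(belief)                     # belief in Class 1 dominates
print(np.argmax(belief, axis=1))  # -> [1]
```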
**Listing 3.7**: Combining using Dempster-Shafer
```
def combine_using_Dempster_Schafer(X, estimators):
p_individual = predict_individual(X, estimators, proba=True)
bpa0 = 1.0 - np.prod(p_individual, axis=1) - 1e-6
bpa1 = 1.0 - np.prod(1.0 - p_individual, axis=1) - 1e-6
belief = np.vstack([bpa0 / (1 - bpa0), bpa1 / (1 - bpa1)]).T
y_final = np.argmax(belief, axis=1)
return y_final
ypred = combine_using_Dempster_Schafer(Xtst, estimators)
tst_err = 1 - accuracy_score(ytst, ypred)
tst_err
```
---
**Unlisted**: Logarithmic opinion pooling (LOP), first described in [Heskes98](https://papers.nips.cc/paper/1413-selecting-weighting-factors-in-logarithmic-opinion-pools.pdf).
```
def combine_using_LOP(X, estimators, Xval, yval):
n_estimators = len(estimators)
yval_individual = predict_individual(Xval, estimators, proba=False)
wts = [1 - accuracy_score(yval, yval_individual[:, i]) for i in range(n_estimators)]
wts /= np.sum(wts)
p_individual = predict_individual(X, estimators, proba=True)
p_LOP = np.exp(np.dot(np.log(p_individual), wts))
return np.round(p_LOP)
```
---
### Visualizing the decision boundaries of all 4 weighting methods
```
%matplotlib inline
import matplotlib.pyplot as plt
from visualization import plot_2d_classifier, get_colors
combination_methods = [('Majority Vote', combine_using_majority_vote),
('Dempster-Shafer', combine_using_Dempster_Schafer),
('Accuracy Weighting', combine_using_accuracy_weighting),
('Entropy Weighting', combine_using_entropy_weighting)]
# ('Logarithmic Opinion Pool', combine_using_LOP)]
nrows, ncols = 2, 2
cm = get_colors(colormap='RdBu')
# Init plotting
xMin, xMax = Xtrn[:, 0].min() - 0.25, Xtrn[:, 0].max() + 0.25
yMin, yMax = Xtrn[:, 1].min() - 0.25, Xtrn[:, 1].max() + 0.25
xMesh, yMesh = np.meshgrid(np.arange(xMin, xMax, 0.05),
np.arange(yMin, yMax, 0.05))
fig, ax = plt.subplots(nrows=nrows, ncols=ncols, figsize=(6, 6))
for i, (method, combiner) in enumerate(combination_methods):
c, r = divmod(i, 2)
if i < 2:
zMesh = combiner(np.c_[xMesh.ravel(), yMesh.ravel()], estimators)
ypred = combiner(Xtst, estimators)
else:
zMesh = combiner(np.c_[xMesh.ravel(), yMesh.ravel()], estimators, Xval, yval)
ypred = combiner(Xtst, estimators, Xval, yval)
zMesh = zMesh.reshape(xMesh.shape)
ax[r, c].contourf(xMesh, yMesh, zMesh, cmap='RdBu', alpha=0.65)
ax[r, c].contour(xMesh, yMesh, zMesh, [0.5], colors='k', linewidths=2.5)
ax[r, c].scatter(Xtrn[ytrn == 0, 0], Xtrn[ytrn == 0, 1], marker='o', c=cm[0], edgecolors='k')
ax[r, c].scatter(Xtrn[ytrn == 1, 0], Xtrn[ytrn == 1, 1], marker='s', c=cm[1], edgecolors='k')
# tst_err = 1 - accuracy_score(ytst, ypred)
# title = '{0} (err = {1:4.2f}%)'.format(method, tst_err*100)
title = '{0}'.format(method)
ax[r, c].set_xticks([])
ax[r, c].set_yticks([])
ax[r, c].set_title(title)
fig.tight_layout()
plt.savefig('./figures/CH03_F10_Kunapuli.png', dpi=300, bbox_inches='tight');
```
<a href="https://colab.research.google.com/github/comHack/Mammography_DL_Classification/blob/master/Models/model_3.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# **Import Libraries**
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sn
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Flatten, Dropout, Activation, Conv2D, MaxPooling2D
from tensorflow.keras.optimizers import RMSprop, Adam
from tensorflow.keras.layers import Input
from tensorflow.keras.models import Model
from tensorflow.keras.preprocessing.image import img_to_array, load_img
from sklearn.metrics import confusion_matrix
from sklearn.metrics import accuracy_score
from sklearn.metrics import recall_score
from sklearn.metrics import precision_score
import pickle
import os
import random
from google.colab import drive
```
# **Mounting Drive**
```
drive.mount('/gdrive')
os.symlink('/gdrive/My Drive', '/content/gdrive')
```
# **Import Data**
## **Process data**
Create our own data distribution (with our own shuffle)
```
# setting the path to the pickle files saved (grayscale images)
benign_link = "/content/gdrive/Copy of BCD/Breast Cancer Detection/Pickle files of images/GRAYSCALE IMAGES ARRAY/benign.pickle"
malign_link = "/content/gdrive/Copy of BCD/Breast Cancer Detection/Pickle files of images/GRAYSCALE IMAGES ARRAY/malign.pickle"
# opening pickle files (grayscale images)
pickle_in = open(benign_link, "rb")
benign_data = pickle.load(pickle_in)
pickle_in = open(malign_link, "rb")
malign_data = pickle.load(pickle_in)
# shuffle the data
random.shuffle(benign_data)
random.shuffle(malign_data)
# splitting and merging the data from benign and malign arrays
# split eg. train_per = 0.7 --> 70% train data, 30% test data
train_per = 0.7
trn_b = int(len(benign_data) * train_per)
trn_m = int(len(malign_data) * train_per)
train_data = benign_data[: trn_b].copy() + malign_data[: trn_m].copy()
test_data = benign_data[trn_b :].copy() + malign_data[trn_m :].copy()
# shuffle train and test data
random.shuffle(train_data)
random.shuffle(test_data)
assert len(train_data + test_data) == len(benign_data + malign_data)
# separating the features and labels
X_train = []
y_train = []
X_test = []
y_test = []
for X, y in train_data:
X_train.append(X)
y_train.append(y)
for X, y in test_data:
X_test.append(X)
y_test.append(y)
# reshaping
num_channels = 1 # depends on whether you're using RGB (3) or grayscale (1) images
IMG_SIZE = len(X_train[0])
X_train = np.array(X_train).reshape(-1, IMG_SIZE, IMG_SIZE, num_channels)
y_train = np.array(y_train).reshape(-1)
IMG_SIZE = len(X_test[0])
X_test = np.array(X_test).reshape(-1, IMG_SIZE, IMG_SIZE, num_channels)
y_test = np.array(y_test).reshape(-1)
print(X_train.shape)
print(X_test.shape)
```
**Scaling**
```
X_train = X_train / 255
X_test = X_test / 255
```
## **Import processed data**
Use the same data distribution
```
# setting the path to the pickle files saved (grayscale images)
X_train_link = "/content/gdrive/Copy of BCD/Breast Cancer Detection/Pickle files of images/GRAYSCALE IMAGES ARRAY/X_train.pickle"
y_train_link = "/content/gdrive/Copy of BCD/Breast Cancer Detection/Pickle files of images/GRAYSCALE IMAGES ARRAY/y_train.pickle"
X_test_link = "/content/gdrive/Copy of BCD/Breast Cancer Detection/Pickle files of images/GRAYSCALE IMAGES ARRAY/X_test.pickle"
y_test_link = "/content/gdrive/Copy of BCD/Breast Cancer Detection/Pickle files of images/GRAYSCALE IMAGES ARRAY/y_test.pickle"
pickle_in = open(X_train_link, "rb")
X_train = pickle.load(pickle_in)
pickle_in = open(y_train_link, "rb")
y_train = pickle.load(pickle_in)
pickle_in = open(X_test_link, "rb")
X_test = pickle.load(pickle_in)
pickle_in = open(y_test_link, "rb")
y_test = pickle.load(pickle_in)
```
# **Setting up models**
```
num_channels = 1
```
### **model**
```
model = tf.keras.models.Sequential(
[
Conv2D(8, (3,3), padding='same', activation='relu', input_shape=(100,100,num_channels)),
MaxPooling2D(2,2),
Conv2D(16, (3,3), padding='same', activation='relu'),
MaxPooling2D(2,2),
Flatten(),
Dense(128, activation='relu'),
Dropout(0.5),
Dense(1, activation='sigmoid')
]
)
model.summary()
model.compile(
loss = 'binary_crossentropy',
optimizer = Adam(),
metrics = ['acc']
)
result = model.fit(
x = X_train,
y = y_train,
batch_size = 32,
epochs = 100,
validation_split = 0.3,
verbose = 2
)
outputs = [layer.output for layer in model.layers[1:]]
model_vis = tf.keras.models.Model(inputs = model.input, outputs = outputs)
x = random.choice(X_test)
x = x.reshape((1,) + x.shape)
feature_maps = model_vis.predict(x)
layer_names = [layer.name for layer in model.layers[1:]]  # align names with the outputs list above (which skips the first layer)
for layer_name, feature_map in zip(layer_names, feature_maps):
if len(feature_map.shape) == 4:
n_features = feature_map.shape[-1]
size = feature_map.shape[ 1]
display_grid = np.zeros((size, size * n_features))
for i in range(n_features):
x = feature_map[0, :, :, i]
x -= x.mean()
x /= x.std ()
x *= 64
x += 128
x = np.clip(x, 0, 255).astype('uint8')
display_grid[:, i * size : (i + 1) * size] = x
scale = 20. / n_features
plt.figure( figsize=(scale * n_features, scale) )
plt.title ( layer_name )
plt.grid ( False )
plt.imshow( display_grid, aspect='auto', cmap='viridis' )
resEv_model = model.evaluate(X_test, y_test, 20)
print('test loss, test acc:', resEv_model)
y_Pred = model.predict(X_test)
print(confusion_matrix(y_test, np.rint(y_Pred)))
print(accuracy_score(y_test, np.rint(y_Pred)))
print(recall_score(y_test, np.rint(y_Pred), average=None))
print(precision_score(y_test, np.rint(y_Pred), average=None))
class_names = ['benign', 'malign']
cm = pd.DataFrame(confusion_matrix(y_test, np.rint(y_Pred)),
index = class_names,
columns = class_names)
metrics_values = [ [confusion_matrix(y_test, np.rint(y_Pred))],
[accuracy_score(y_test, np.rint(y_Pred))],
[recall_score(y_test, np.rint(y_Pred), average=None)],
[precision_score(y_test, np.rint(y_Pred), average=None)]]
fig, ax = plt.subplots(3, 1, figsize = (15, 25))
ax[0].plot(result.history['acc'])
ax[0].plot(result.history['val_acc'])
ax[0].set_title('model accuracy')
ax[0].set_ylabel('accuracy')
ax[0].set_xlabel('epoch')
ax[0].legend(['train', 'validation'], loc='upper left')
ax[1].plot(result.history['loss'])
ax[1].plot(result.history['val_loss'])
ax[1].set_title('model loss')
ax[1].set_ylabel('loss')
ax[1].set_xlabel('epoch')
ax[1].legend(['train', 'validation'], loc='upper left')
hm = sn.heatmap(cm, annot=True, fmt='.4g', cmap='BuGn', ax=ax[2])
hm.set_title("confusion matrix")
plt.show()
path_to_save_model = ""  # set this to the desired output path before calling model.save()
model.save(path_to_save_model)
```
```
%run base_test.ipynb
import numpy as np
from base import ml
from base import plot
def test_regressors(type_, average=False, poly=False, bilinear=False,
gen_one_data=None, test_size=10000, average_top=5):
if type_ == 'point':
dir_ = '../saved_models/point'
if gen_one_data is None:
gen_one_data = ml.GenSolutionPoint(fenics_from_save=True)
PolyInterp = ml.PolyInterpPoint
BilinearInterp = ml.BilinearInterpPoint
elif type_ == 'grid':
dir_ = '../saved_models/grid'
if gen_one_data is None:
gen_one_data = ml.GenSolutionGrid(fenics_from_save=True)
PolyInterp = ml.PolyInterpGrid
BilinearInterp = ml.BilinearInterpGrid
else:
raise RuntimeError('Unknown type_ {}'.format(type_))
dnn_factories, names = ml.dnn_factories_from_dir(dir_)
extra_facs = []
extra_names = []
if average:
average_reg = ml.RegressorAverager(regressor_factories=dnn_factories)
if average_top is not None:
average_reg.auto_mask(gen_one_data=gen_one_data, top=average_top,
batch_size=test_size)
average_reg_fac = ml.RegressorFactory(regressor=average_reg)
extra_facs.append(average_reg_fac)
extra_names.append('AverageReg')
if poly:
# Warning: the polynomial interpolation takes a while.
for poly_deg in (3, 5, 7, 9):
poly_fac = ml.RegressorFactory(regressor=PolyInterp(poly_deg=poly_deg))
extra_facs.append(poly_fac)
extra_names.append('Poly{}Reg'.format(poly_deg))
if bilinear:
bilin_fac = ml.RegressorFactory(regressor=BilinearInterp())
extra_facs.append(bilin_fac)
extra_names.append('BilinearReg')
results = ml.eval_regressors([*dnn_factories,
*extra_facs],
gen_one_data,
batch_size=test_size)
bests = []
for stat in ('average_loss', 'max_deviation'):
print(stat)
print('-' * len(stat))
results_names = sorted(zip(results, [*names, *extra_names]),
key=lambda x: x[0][stat])
bests.append([name for _, (result, name) in zip(range(6), results_names)])
for result, name in results_names:
try:
num_neurons = eval(name.split('_')[0].replace('x', '*'))
except Exception:
num_neurons = ''
print('{:>4} | {:>34} | {}'.format(num_neurons,
name,
result[stat]))
print('')
best = [x for x in bests[0] if all(x in best_ for best_ in bests[1:])]
print('common best')
print('-----------')
for b in best:
print(b)
if not best:
print('No common best')
return results
```
# Course - DQF10648 Electromagnetism I
## Class on 24/06/2021 - Semester 2021/1 EARTE
### [DQF - CCENS](http://alegre.ufes.br/ccens/departamento-de-quimica-e-fisica) - [UFES/Alegre](http://alegre.ufes.br/)
# Showing the Classes in the Electromagnetism I Repository
See the [previous class, of 22/06/2021](https://github.com/rcolistete/Eletromagnetismo_I_UFES_Alegre/tree/master/Aulas/Aula_20210622), which explains the versions:
- preliminary/planned (before the class);
- revised (after the class).
# Curvilinear Coordinates (quick recap of the previous class)
On Cartesian, polar, cylindrical and spherical coordinates, see:
- the [references](https://github.com/rcolistete/Eletromagnetismo_I_UFES_Alegre/blob/master/Bibliografia.md) [Griffiths], [Barcarena3], [Marques] and [Silveira];
- free graphical demonstrations on the [Wolfram Demonstrations](https://demonstrations.wolfram.com/) site.
Note that the [reference](https://github.com/rcolistete/Eletromagnetismo_I_UFES_Alegre/blob/master/Bibliografia.md):
- [Barcarena3] uses generalized coordinates (as a synonym for curvilinear) $q_i$.
## Scale factors $h_i$ and unit base vectors $\hat{e}_i$
To compute the scale (or scalar) factors $h_i$ and unit base vectors $\hat{e}_i$ of a curvilinear coordinate system $u_i$, one must:
- first know how to transform from the curvilinear coordinates $u_i$ to the Cartesian coordinate system;
- then compute partial derivatives with respect to the curvilinear coordinates $u_i$.
Considering a 3-dimensional space, $i$ takes the integer values $1$, $2$ and $3$.
In 5 steps, in full detail, considering a 3-dimensional space:
1) list the transformation equations from curvilinear coordinates $u_i$ to Cartesian coordinates:
$$ x = m(u_1, u_2, u_3)$$
$$ y = n(u_1, u_2, u_3)$$
$$ z = p(u_1, u_2, u_3)$$
2) write the position vector $\vec{r}$ in terms of Cartesian coordinates and then substitute the components with expressions in the curvilinear coordinates $u_i$:
$$ \vec{r} = \vec{r}(x, y, z) = x\,\hat{i} + y\,\hat{j} + z\,\hat{k} \,\,\,\rightarrow$$
$$ \rightarrow \,\,\,\vec{r} = m(u_1, u_2, u_3)\,\hat{i} + n(u_1, u_2, u_3)\,\hat{j} + p(u_1, u_2, u_3)\,\hat{k}$$
3) compute the (first) partial derivative of $\vec{r}$ with respect to each curvilinear coordinate $u_i$:
$$ \frac{\partial\,\vec{r}}{\partial u_i} \,\,\,\rightarrow$$
$$ \rightarrow \,\,\, \frac{\partial \vec{r}}{\partial u_i} = \frac{\partial m(u_1, u_2, u_3)}{\partial u_i}\,\hat{i} + \frac{\partial n(u_1, u_2, u_3)}{\partial u_i}\,\hat{j} + \frac{\partial p(u_1, u_2, u_3)}{\partial u_i}\,\hat{k} $$
that is, there are $3$ partial derivatives of $\vec{r}$ in $3$-dimensional space, since there are $N$ partial derivatives of $\vec{r}$ in $N$-dimensional space.
4) the scale factor $h_i$ corresponding to the curvilinear coordinate $u_i$ is the modulus of the (first) partial derivative of $\vec{r}$ with respect to the curvilinear coordinate $u_i$:
$$ h_i = \left| \frac{\partial \vec{r}}{\partial u_i}\right| \,\,\,\rightarrow $$
$$ \rightarrow \,\,\, h_i =
\left|\, \frac{\partial x}{\partial u_i}\,\hat{i} + \frac{\partial y}{\partial u_i}\,\hat{j} + \frac{\partial z}{\partial u_i}\,\hat{k}\, \right| =
\sqrt{ \left[\frac{\partial x}{\partial u_i}\right]^2 + \left[\frac{\partial y}{\partial u_i}\right]^2 + \left[\frac{\partial z}{\partial u_i}\right]^2 } $$
$$ \rightarrow \,\,\, h_i = \sqrt{ \left[ \frac{\partial m(u_1, u_2, u_3)}{\partial u_i} \right]^2 + \left[ \frac{\partial n(u_1, u_2, u_3)}{\partial u_i}\right]^2 + \left[\frac{\partial p(u_1, u_2, u_3)}{\partial u_i}\right]^2 } $$
See the [reference](https://github.com/rcolistete/Eletromagnetismo_I_UFES_Alegre/blob/master/Bibliografia.md) [Barcarena3], section "5.1.3 Coordenadas Ortogonais" and the preceding ones, for the origin of the expression for $h_i$.
The scale factor $h_i$ is used to compute, in curvilinear coordinates:
- the differential line element $dl$ (in line integrals, etc.);
- the differential area element $dA$ (in surface integrals, etc.);
- the differential volume element $dV$ (in volume integrals, etc.);
- the Jacobian;
- the gradient;
- the divergence;
- the curl;
- the Laplacian;
- etc.
5) the unit base vector $\hat{e}_i$ is the (first) partial derivative of $\vec{r}$ with respect to the curvilinear coordinate $u_i$, normalized, i.e., divided by its modulus (= the scale factor $h_i$):
$$ \frac{\partial \vec{r}}{\partial u_i} = h_i\,\hat{e}_i $$
which is a vector tangent to the $u_i$ curve. Later we will see plots of the $u_i$ curves for certain curvilinear coordinate systems; for now see the [reference](https://github.com/rcolistete/Eletromagnetismo_I_UFES_Alegre/blob/master/Bibliografia.md) [Silveira], page 19 on cylindrical coordinates and page 26 on spherical coordinates.
Isolating the unit base vector $\hat{e}_i$, we then have:
$$ \hat{e}_i = \frac{1}{h_i} \frac{\partial \vec{r}}{\partial u_i} $$
## Example: computing $h_i$ and $\hat{e}_i$ for polar coordinates $(\rho, \theta)$
1) list the transformation equations from the polar curvilinear coordinates $u_i$, i.e. $(\rho, \theta)$, to Cartesian coordinates $(x, y)$:
$$ x = m(u_1, u_2) = m(\rho, \theta) = \rho \cos \theta $$
$$ y = n(u_1, u_2) = n(\rho, \theta) = \rho \sin \theta $$
2) write the position vector $\vec{r}$ in terms of Cartesian coordinates $(x, y)$ and then substitute the components with expressions in the polar curvilinear coordinates $u_i$, i.e. $(\rho, \theta)$:
$$ \vec{r} = \vec{r}(x, y) = x\,\hat{i} + y\,\hat{j}$$
$$ \vec{r} = m(u_1, u_2)\,\hat{i} + n(u_1, u_2)\,\hat{j} = m(\rho, \theta)\,\hat{i} + n(\rho, \theta)\,\hat{j} $$
$$ \vec{r} = \rho \cos \theta\,\hat{i} + \rho \sin \theta\,\hat{j} $$
3) compute the (first) partial derivative of $\vec{r}$ with respect to each polar curvilinear coordinate $u_i$, i.e. $(\rho, \theta)$:
$$ \frac{\partial \vec{r}}{\partial u_i} = \frac{\partial m(u_1, u_2)}{\partial u_i}\,\hat{i} + \frac{\partial n(u_1, u_2)}{\partial u_i}\,\hat{j} = \frac{\partial m(\rho, \theta)}{\partial u_i}\,\hat{i} + \frac{\partial n(\rho, \theta)}{\partial u_i}\,\hat{j} $$
$$ \frac{\partial \vec{r}}{\partial u_i} = \frac{\partial \left(\rho \cos \theta\right)}{\partial u_i}\,\hat{i} + \frac{\partial \left(\rho \sin \theta\right)}{\partial u_i}\,\hat{j} $$
applying the partial derivative for each polar curvilinear coordinate $(\rho, \theta)$:
$$ \frac{\partial \vec{r}}{\partial \rho} = \frac{\partial \left(\rho \cos \theta\right)}{\partial \rho}\,\hat{i} + \frac{\partial \left(\rho \sin \theta\right)}{\partial \rho}\,\hat{j} = \cos \theta\,\hat{i} + \sin \theta\,\hat{j} $$
$$ \frac{\partial \vec{r}}{\partial \theta} = \frac{\partial \left(\rho \cos \theta\right)}{\partial \theta}\,\hat{i} + \frac{\partial \left(\rho \sin \theta\right)}{\partial \theta}\,\hat{j} = - \rho \sin \theta\,\hat{i} + \rho \cos \theta\,\hat{j} $$
4) the scale factor $h_i$ is the modulus of the (first) partial derivative of $\vec{r}$ with respect to the polar curvilinear coordinate $(\rho, \theta)$:
$$ h_i = \left| \frac{\partial \vec{r}}{\partial u_i} \right| = \left|\, \frac{\partial \left(\rho \cos \theta\right)}{\partial u_i}\,\hat{i} + \frac{\partial \left(\rho \sin \theta\right)}{\partial u_i}\,\hat{j}\, \right| $$
$h_i$ for each polar curvilinear coordinate $(\rho, \theta)$:
$$ h_\rho = \left| \frac{\partial \vec{r}}{\partial \rho} \right| = \left|\, \frac{\partial \left(\rho \cos \theta\right)}{\partial \rho}\,\hat{i} + \frac{\partial \left(\rho \sin \theta\right)}{\partial \rho}\,\hat{j}\, \right| $$
$$ h_\rho = \left|\, \cos \theta\,\hat{i} + \sin \theta\,\hat{j}\, \right| = \sqrt{(\cos \theta)^2 + (\sin \theta)^2} $$
$$ h_\rho = 1 $$
$$ h_\theta = \left| \frac{\partial \vec{r}}{\partial \theta} \right| = \left|\, \frac{\partial \left(\rho \cos \theta\right)}{\partial \theta}\,\hat{i} + \frac{\partial \left(\rho \sin \theta\right)}{\partial \theta}\,\hat{j}\, \right| $$
$$ h_\theta = \left|\, - \rho \sin \theta\,\hat{i} + \rho \cos \theta\,\hat{j}\, \right| = \sqrt{(- \rho \sin \theta)^2 + (\rho \cos \theta)^2} $$
$$ h_\theta = \sqrt{\rho^2 (\sin \theta)^2 + \rho^2 (\cos \theta)^2} = \sqrt{\rho^2} = \left|\, \rho\, \right| $$
Considering $\rho \geq 0\,m$, then:
$$ h_\theta = \rho $$
5) the unit base vector $\hat{e}_i$ is the (first) partial derivative of $\vec{r}$ with respect to the curvilinear coordinate $u_i$, normalized, i.e., divided by its modulus, that is, by the scale factor $h_i$:
$$ \hat{e}_\rho = \frac{1}{h_\rho} \frac{\partial\vec{r}}{\partial \rho} = \frac{1}{1} \left[\cos \theta\,\hat{i} + \sin \theta\,\hat{j} \right] $$
$$ \hat{e}_\rho = \hat{\rho} = \cos \theta\,\hat{i} + \sin \theta\,\hat{j} $$
$$ \hat{e}_\theta = \frac{1}{h_\theta} \frac{\partial \vec{r}}{\partial \theta} = \frac{1}{\rho} \left[- \rho \sin \theta\,\hat{i} + \rho \cos \theta\,\hat{j} \right] $$
$$ \hat{e}_\theta = \hat{\theta} = - \sin \theta\,\hat{i} + \cos \theta\,\hat{j} $$
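As a quick sanity check (not part of the original lecture notes), the polar-coordinate results above can be reproduced symbolically; the snippet below is a minimal sketch, assuming SymPy is installed.
```
# Minimal SymPy sketch (assumption: SymPy is installed) reproducing the polar
# scale factors and unit base vectors derived above.
import sympy as sp

rho, theta = sp.symbols('rho theta', positive=True)
r = sp.Matrix([rho * sp.cos(theta), rho * sp.sin(theta)])  # position vector in Cartesian components

for u in (rho, theta):
    dr = r.diff(u)                        # partial derivative of r with respect to u_i
    h = sp.simplify(dr.norm())            # scale factor h_i = |dr/du_i|
    e = (dr / h).applyfunc(sp.simplify)   # unit base vector e_i
    print(u, h, e.T)
# Expected: h_rho = 1, h_theta = rho,
#           e_rho = (cos(theta), sin(theta)), e_theta = (-sin(theta), cos(theta))
```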
## (TO DO) Exercise: compute $h_i$ and $\hat{e}_i$ for spherical coordinates $(r, \theta, \phi)$
Exercise for the students to present the full solution of the calculations and to clear up doubts.
# Vector Differential Operators
On the vector differential operators gradient, divergence, curl and Laplacian, see:
- the [references](https://github.com/rcolistete/Eletromagnetismo_I_UFES_Alegre/blob/master/Bibliografia.md) [Griffiths], [Barcarena2] and [Barcarena3];
- free graphical demonstrations on the [Wolfram Demonstrations](https://demonstrations.wolfram.com/) site.
## Gradient: $\vec{\nabla} f$
### Theory
The gradient is a vector differential operator that acts on a scalar function and produces a vector function.
In curvilinear coordinates $u_i$:
$$ \vec{\nabla} f = \left[ \frac{1}{h_1}\frac{\partial}{\partial u_1}\hat{e}_1, \frac{1}{h_2}\frac{\partial}{\partial u_2}\hat{e}_2, \frac{1}{h_3}\frac{\partial}{\partial u_3}\hat{e}_3 \right] f
= \frac{1}{h_1}\frac{\partial f}{\partial u_1}\hat{e}_1 + \frac{1}{h_2}\frac{\partial f}{\partial u_2}\hat{e}_2 + \frac{1}{h_3}\frac{\partial f}{\partial u_3}\hat{e}_3 $$
The vector differential operator gradient (grad) in 2D Cartesian coordinates $(x, y)$:
$$\vec{\nabla} = \left\langle \frac{\partial}{\partial x}, \frac{\partial}{\partial y}\right\rangle = \frac{\partial}{\partial x}\hat{i} + \frac{\partial}{\partial y}\hat{j} $$
in polar coordinates $(\rho, \theta)$:
$$\vec{\nabla} = \left\langle \frac{\partial }{\partial \rho},\frac{1}{\rho}\frac{\partial }{\partial \theta }\right\rangle = \frac{\partial }{\partial \rho}\hat{\rho} + \frac{1}{\rho}\frac{\partial }{\partial \theta }\hat{\theta}$$
in 3D Cartesian coordinates $(x, y, z)$:
$$\vec{\nabla}=\left\langle \frac{\partial }{\partial x}, \frac{\partial }{\partial y}, \frac{\partial }{\partial z}\right\rangle = \frac{\partial}{\partial x}\hat{i} + \frac{\partial}{\partial y}\hat{j} + \frac{\partial}{\partial z}\hat{k}$$
In cylindrical coordinates $(\rho, \theta, z)$:
$$\vec{\nabla}=\left\langle \frac{\partial }{\partial \rho},\frac{1}{\rho}\frac{\partial }{\partial \theta},\frac{\partial }{\partial z}\right\rangle = \frac{\partial }{\partial \rho}\hat{\rho} + \frac{1}{\rho}\frac{\partial }{\partial \theta }\hat{\theta} + \frac{\partial }{\partial z}\hat{k} $$
In spherical coordinates $(r, \theta, \phi)$:
$$\vec{\nabla}=\left\langle \frac{\partial }{\partial r},\frac{1}{r}\frac{\partial }{\partial \theta},\frac{1}{r \sin \theta}\frac{\partial }{\partial \phi}\right\rangle = \frac{\partial }{\partial r}\hat{r} + \frac{1}{r}\frac{\partial }{\partial \theta}\hat{\theta} + \frac{1}{r \sin \theta}\frac{\partial }{\partial \phi}\hat{\phi}$$
## Divergence: $\vec{\nabla} \cdot \vec{F}$
### Theory
The divergence is a vector differential operator that acts on a vector function and produces a scalar function.
In curvilinear coordinates $u_i$:
$$ \vec{\nabla} \cdot \vec{F} = \frac{1}{h_1 h_2 h_3}\left[ \frac{\partial\,(F_1 h_2 h_3)}{\partial u_1} + \frac{\partial\,(F_2 h_1 h_3)}{\partial u_2} + \frac{\partial\,(F_3 h_1 h_2)}{\partial u_3} \right] $$
In 3D Cartesian coordinates $(x, y, z)$:
$$\vec{\nabla} \cdot \vec{F} = \frac{\partial F_x}{\partial x} + \frac{\partial F_y}{\partial y} + \frac{\partial F_z}{\partial z} $$
in cylindrical coordinates $(\rho, \theta, z)$:
$$\vec{\nabla} \cdot \vec{F} = \frac{1}{\rho}\frac{\partial\,(\rho F_{\rho})}{\partial \rho} + \frac{1}{\rho}\frac{\partial F_{\theta}}{\partial \theta} + \frac{\partial F_z}{\partial z} $$
in spherical coordinates $(r, \theta, \phi)$:
$$\vec{\nabla} \cdot \vec{F} = \frac{1}{r^2} \frac{\partial\,(r^2 F_r)}{\partial r} + \frac{1}{r \sin \theta}\frac{\partial\,(\sin \theta \, F_{\theta})}{\partial \theta} + \frac{1}{r \sin \theta}\frac{\partial F_\phi}{\partial \phi} $$
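As an illustrative check (not part of the original notes), the general curvilinear formula above can be applied symbolically to the spherical-coordinate field $\vec{F} = r\,\hat{r}$, whose divergence is known to be $3$; the snippet below is a minimal sketch, assuming SymPy is installed.
```
# Minimal SymPy sketch (assumption: SymPy is installed): apply the general
# curvilinear divergence formula in spherical coordinates to F = r r_hat.
import sympy as sp

r, theta, phi = sp.symbols('r theta phi', positive=True)
h = (1, r, r * sp.sin(theta))   # scale factors h_r, h_theta, h_phi
F = (r, 0, 0)                   # components F_r, F_theta, F_phi of F = r r_hat
u = (r, theta, phi)

div = (sp.diff(F[0] * h[1] * h[2], u[0]) +
       sp.diff(F[1] * h[0] * h[2], u[1]) +
       sp.diff(F[2] * h[0] * h[1], u[2])) / (h[0] * h[1] * h[2])
print(sp.simplify(div))         # -> 3
```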
# Sentiment Analysis with an RNN
In this notebook, you'll implement a recurrent neural network that performs sentiment analysis. Using an RNN rather than a feedforward network is more accurate since we can include information about the *sequence* of words. Here we'll use a dataset of movie reviews, accompanied by labels.
The architecture for this network is shown below.
<img src="assets/network_diagram.png" width=400px>
Here, we'll pass in words to an embedding layer. We need an embedding layer because we have tens of thousands of words, so we'll need a more efficient representation for our input data than one-hot encoded vectors. You should have seen this before from the word2vec lesson. You can actually train up an embedding with word2vec and use it here. But it's good enough to just have an embedding layer and let the network learn the embedding table on its own.
From the embedding layer, the new representations will be passed to LSTM cells. These will add recurrent connections to the network so we can include information about the sequence of words in the data. Finally, the LSTM cells will go to a sigmoid output layer here. We're using the sigmoid because we're trying to predict if this text has positive or negative sentiment. The output layer will just be a single unit then, with a sigmoid activation function.
We don't care about the sigmoid outputs except for the very last one; we can ignore the rest. We'll calculate the cost from the output of the last step and the training label.
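Before loading the data, here is a rough sketch of the kind of graph described above, written with the TensorFlow 1.x API this notebook imports. The layer sizes and variable names below are illustrative placeholders, not the values used later in the notebook.
```
# Rough sketch only (TF 1.x API; sizes and names are illustrative placeholders):
# embedding lookup -> LSTM -> single sigmoid unit on the last time step.
import tensorflow as tf

n_words, embed_size, lstm_size = 10000, 300, 256

inputs_ = tf.placeholder(tf.int32, [None, None], name='inputs')
labels_ = tf.placeholder(tf.float32, [None, 1], name='labels')

embedding = tf.Variable(tf.random_uniform((n_words, embed_size), -1, 1))
embed = tf.nn.embedding_lookup(embedding, inputs_)        # (batch, steps, embed_size)

lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)
outputs, final_state = tf.nn.dynamic_rnn(lstm, embed, dtype=tf.float32)

# Only the output of the last time step feeds the sigmoid output unit.
predictions = tf.contrib.layers.fully_connected(outputs[:, -1], 1,
                                                activation_fn=tf.sigmoid)
cost = tf.losses.mean_squared_error(labels_, predictions)
```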
```
import numpy as np
import tensorflow as tf
with open('../sentiment-network/reviews.txt', 'r') as f:
reviews = f.read()
with open('../sentiment-network/labels.txt', 'r') as f:
labels = f.read()
reviews[:2000]
```
## Data preprocessing
The first step when building a neural network model is getting your data into the proper form to feed into the network. Since we're using embedding layers, we'll need to encode each word with an integer. We'll also want to clean it up a bit.
You can see an example of the reviews data above. We'll want to get rid of those periods. Also, you might notice that the reviews are delimited with newlines `\n`. To deal with those, I'm going to split the text into each review using `\n` as the delimiter. Then I can combine all the reviews back together into one big string.
First, let's remove all punctuation. Then get all the text without the newlines and split it into individual words.
```
from string import punctuation
all_text = ''.join([c for c in reviews if c not in punctuation])
reviews = all_text.split('\n')
all_text = ' '.join(reviews)
words = all_text.split()
all_text[:2000]
words[:100]
```
### Encoding the words
The embedding lookup requires that we pass in integers to our network. The easiest way to do this is to create dictionaries that map the words in the vocabulary to integers. Then we can convert each of our reviews into integers so they can be passed into the network.
> **Exercise:** Now you're going to encode the words with integers. Build a dictionary that maps words to integers. Later we're going to pad our input vectors with zeros, so make sure the integers **start at 1, not 0**.
> Also, convert the reviews to integers and store the reviews in a new list called `reviews_ints`.
```
from collections import Counter
counts = Counter(words)
vocab = sorted(counts, key=counts.get, reverse=True)
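# start the integer mapping at 1 so that 0 stays free for the padding added later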
vocab_to_int = {word: ii for ii, word in enumerate(vocab, 1)}
reviews_ints = []
for each in reviews:
reviews_ints.append([vocab_to_int[word] for word in each.split()])
```
### Encoding the labels
Our labels are "positive" or "negative". To use these labels in our network, we need to convert them to 0 and 1.
> **Exercise:** Convert labels from `positive` and `negative` to 1 and 0, respectively.
```
labels = labels.split('\n')
labels = np.array([1 if each == 'positive' else 0 for each in labels])
review_lens = Counter([len(x) for x in reviews_ints])
print("Zero-length reviews: {}".format(review_lens[0]))
print("Maximum review length: {}".format(max(review_lens)))
```
Okay, a couple issues here. We seem to have one review with zero length. And, the maximum review length is way too many steps for our RNN. Let's truncate to 200 steps. For reviews shorter than 200, we'll pad with 0s. For reviews longer than 200, we can truncate them to the first 200 words.
> **Exercise:** First, remove the review with zero length from the `reviews_ints` list.
```
non_zero_idx = [ii for ii, review in enumerate(reviews_ints) if len(review) != 0]
len(non_zero_idx)
reviews_ints[-1]
```
Turns out it's the final review that has zero length. But that might not always be the case, so let's make it more general.
```
reviews_ints = [reviews_ints[ii] for ii in non_zero_idx]
labels = np.array([labels[ii] for ii in non_zero_idx])
```
> **Exercise:** Now, create an array `features` that contains the data we'll pass to the network. The data should come from `reviews_ints`, since we want to feed integers to the network. Each row should be 200 elements long. For reviews shorter than 200 words, left pad with 0s. That is, if the review is `['best', 'movie', 'ever']`, `[117, 18, 128]` as integers, the row will look like `[0, 0, 0, ..., 0, 117, 18, 128]`. For reviews longer than 200, use only the first 200 words as the feature vector.
This isn't trivial and there are a bunch of ways to do this. But, if you're going to be building your own deep learning networks, you're going to have to get used to preparing your data.
```
seq_len = 200
features = np.zeros((len(reviews_ints), seq_len), dtype=int)
for i, row in enumerate(reviews_ints):
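    # left-pad shorter reviews with zeros and keep only the first seq_len word indices of longer ones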
features[i, -len(row):] = np.array(row)[:seq_len]
features[:10,:100]
```
## Training, Validation, Test
With our data in nice shape, we'll split it into training, validation, and test sets.
> **Exercise:** Create the training, validation, and test sets here. You'll need to create sets for the features and the labels, `train_x` and `train_y` for example. Define a split fraction, `split_frac` as the fraction of data to keep in the training set. Usually this is set to 0.8 or 0.9. The rest of the data will be split in half to create the validation and testing data.
```
split_frac = 0.8
split_idx = int(len(features)*split_frac)
train_x, val_x = features[:split_idx], features[split_idx:]
train_y, val_y = labels[:split_idx], labels[split_idx:]
test_idx = int(len(val_x)*0.5)
val_x, test_x = val_x[:test_idx], val_x[test_idx:]
val_y, test_y = val_y[:test_idx], val_y[test_idx:]
print("\t\t\tFeature Shapes:")
print("Train set: \t\t{}".format(train_x.shape),
"\nValidation set: \t{}".format(val_x.shape),
"\nTest set: \t\t{}".format(test_x.shape))
```
With train, validation, and test fractions of 0.8, 0.1, 0.1, the final shapes should look like:
```
Feature Shapes:
Train set: (20000, 200)
Validation set: (2500, 200)
Test set: (2500, 200)
```
## Build the graph
Here, we'll build the graph. First up, defining the hyperparameters.
* `lstm_size`: Number of units in the hidden layers in the LSTM cells. Usually larger is better performance wise. Common values are 128, 256, 512, etc.
* `lstm_layers`: Number of LSTM layers in the network. I'd start with 1, then add more if I'm underfitting.
* `batch_size`: The number of reviews to feed the network in one training pass. Typically this should be set as high as you can go without running out of memory.
* `learning_rate`: Learning rate
```
lstm_size = 256
lstm_layers = 1
batch_size = 500
learning_rate = 0.001
```
For the network itself, we'll be passing in our 200 element long review vectors. Each batch will be `batch_size` vectors. We'll also be using dropout on the LSTM layer, so we'll make a placeholder for the keep probability.
> **Exercise:** Create the `inputs_`, `labels_`, and dropout `keep_prob` placeholders using `tf.placeholder`. `labels_` needs to be two-dimensional to work with some functions later. Since `keep_prob` is a scalar (a 0-dimensional tensor), you shouldn't provide a size to `tf.placeholder`.
```
n_words = len(vocab_to_int)
# Create the graph object
graph = tf.Graph()
# Add nodes to the graph
with graph.as_default():
inputs_ = tf.placeholder(tf.int32, [None, None], name='inputs')
labels_ = tf.placeholder(tf.int32, [None, None], name='labels')
keep_prob = tf.placeholder(tf.float32, name='keep_prob')
```
### Embedding
Now we'll add an embedding layer. We need to do this because there are 74000 words in our vocabulary. It is massively inefficient to one-hot encode our classes here. You should remember dealing with this problem from the word2vec lesson. Instead of one-hot encoding, we can have an embedding layer and use that layer as a lookup table. You could train an embedding layer using word2vec, then load it here. But, it's fine to just make a new layer and let the network learn the weights.
> **Exercise:** Create the embedding lookup matrix as a `tf.Variable`. Use that embedding matrix to get the embedded vectors to pass to the LSTM cell with [`tf.nn.embedding_lookup`](https://www.tensorflow.org/api_docs/python/tf/nn/embedding_lookup). This function takes the embedding matrix and an input tensor, such as the review vectors. Then, it'll return another tensor with the embedded vectors. So, if the embedding layer has 200 units, the function will return a tensor with size [batch_size, seq_len, 200].
```
# Size of the embedding vectors (number of units in the embedding layer)
embed_size = 300
with graph.as_default():
embedding = tf.Variable(tf.random_uniform((n_words, embed_size), -1, 1))
embed = tf.nn.embedding_lookup(embedding, inputs_)
```
### LSTM cell
<img src="assets/network_diagram.png" width=400px>
Next, we'll create our LSTM cells to use in the recurrent network ([TensorFlow documentation](https://www.tensorflow.org/api_docs/python/tf/contrib/rnn)). Here we are just defining what the cells look like. This isn't actually building the graph, just defining the type of cells we want in our graph.
To create a basic LSTM cell for the graph, you'll want to use `tf.contrib.rnn.BasicLSTMCell`. Looking at the function documentation:
```
tf.contrib.rnn.BasicLSTMCell(num_units, forget_bias=1.0, input_size=None, state_is_tuple=True, activation=<function tanh at 0x109f1ef28>)
```
you can see it takes a parameter called `num_units`, the number of units in the cell, called `lstm_size` in this code. So then, you can write something like
```
lstm = tf.contrib.rnn.BasicLSTMCell(num_units)
```
to create an LSTM cell with `num_units`. Next, you can add dropout to the cell with `tf.contrib.rnn.DropoutWrapper`. This just wraps the cell in another cell, but with dropout added to the inputs and/or outputs. It's a really convenient way to make your network better with almost no effort! So you'd do something like
```
drop = tf.contrib.rnn.DropoutWrapper(cell, output_keep_prob=keep_prob)
```
Most of the time, your network will have better performance with more layers. That's sort of the magic of deep learning, adding more layers allows the network to learn really complex relationships. Again, there is a simple way to create multiple layers of LSTM cells with `tf.contrib.rnn.MultiRNNCell`:
```
cell = tf.contrib.rnn.MultiRNNCell([drop] * lstm_layers)
```
Here, `[drop] * lstm_layers` creates a list of cells (`drop`) that is `lstm_layers` long. The `MultiRNNCell` wrapper builds this into multiple layers of RNN cells, one for each cell in the list.
So the final cell you're using in the network is actually multiple (or just one) LSTM cells with dropout. But it all works the same from an architectural viewpoint, just a more complicated graph in the cell.
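One caveat worth flagging: on some newer TensorFlow 1.x releases, passing the *same* wrapped cell object to `MultiRNNCell` several times can trigger a variable-scope error. A common workaround, shown below as a sketch that reuses `lstm_size`, `keep_prob`, and `lstm_layers` from this notebook, is to build a fresh cell per layer:
```
def build_cell(lstm_size, keep_prob):
    # one independent LSTM cell per layer, each wrapped with its own dropout
    lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)
    return tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)

cell = tf.contrib.rnn.MultiRNNCell([build_cell(lstm_size, keep_prob) for _ in range(lstm_layers)])
```
With `lstm_layers = 1` the two approaches behave the same, so the exercise below follows the simpler pattern.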
> **Exercise:** Below, use `tf.contrib.rnn.BasicLSTMCell` to create an LSTM cell. Then, add drop out to it with `tf.contrib.rnn.DropoutWrapper`. Finally, create multiple LSTM layers with `tf.contrib.rnn.MultiRNNCell`.
Here is [a tutorial on building RNNs](https://www.tensorflow.org/tutorials/recurrent) that will help you out.
```
with graph.as_default():
# Your basic LSTM cell
lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)
# Add dropout to the cell
drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
# Stack up multiple LSTM layers, for deep learning
cell = tf.contrib.rnn.MultiRNNCell([drop] * lstm_layers)
# Getting an initial state of all zeros
initial_state = cell.zero_state(batch_size, tf.float32)
```
### RNN forward pass
<img src="assets/network_diagram.png" width=400px>
Now we need to actually run the data through the RNN nodes. You can use [`tf.nn.dynamic_rnn`](https://www.tensorflow.org/api_docs/python/tf/nn/dynamic_rnn) to do this. You'd pass in the RNN cell you created (our multiple layered LSTM `cell` for instance), and the inputs to the network.
```
outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, initial_state=initial_state)
```
Above I created an initial state, `initial_state`, to pass to the RNN. This is the cell state that is passed between the hidden layers in successive time steps. `tf.nn.dynamic_rnn` takes care of most of the work for us. We pass in our cell and the input to the cell, then it does the unrolling and everything else for us. It returns outputs for each time step and the final_state of the hidden layer.
> **Exercise:** Use `tf.nn.dynamic_rnn` to add the forward pass through the RNN. Remember that we're actually passing in vectors from the embedding layer, `embed`.
```
with graph.as_default():
outputs, final_state = tf.nn.dynamic_rnn(cell, embed,
initial_state=initial_state)
```
### Output
We only care about the final output, we'll be using that as our sentiment prediction. So we need to grab the last output with `outputs[:, -1]`, then calculate the cost from that and `labels_`.
```
with graph.as_default():
predictions = tf.contrib.layers.fully_connected(outputs[:, -1], 1, activation_fn=tf.sigmoid)
cost = tf.losses.mean_squared_error(labels_, predictions)
optimizer = tf.train.AdamOptimizer(learning_rate).minimize(cost)
```
### Validation accuracy
Here we can add a few nodes to calculate the accuracy which we'll use in the validation pass.
```
with graph.as_default():
correct_pred = tf.equal(tf.cast(tf.round(predictions), tf.int32), labels_)
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
```
### Batching
This is a simple function for returning batches from our data. First it removes data such that we only have full batches. Then it iterates through the `x` and `y` arrays and returns slices out of those arrays with size `[batch_size]`.
```
def get_batches(x, y, batch_size=100):
n_batches = len(x)//batch_size
x, y = x[:n_batches*batch_size], y[:n_batches*batch_size]
for ii in range(0, len(x), batch_size):
yield x[ii:ii+batch_size], y[ii:ii+batch_size]
```
## Training
Below is the typical training code. If you want to do this yourself, feel free to delete all this code and implement it yourself. Before you run this, make sure the `checkpoints` directory exists.
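If the `checkpoints` directory doesn't exist yet, one quick way to create it (assuming you want it next to this notebook) is:
```
import os
os.makedirs('checkpoints', exist_ok=True)
```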
```
epochs = 10
with graph.as_default():
saver = tf.train.Saver()
with tf.Session(graph=graph) as sess:
sess.run(tf.global_variables_initializer())
iteration = 1
for e in range(epochs):
state = sess.run(initial_state)
for ii, (x, y) in enumerate(get_batches(train_x, train_y, batch_size), 1):
feed = {inputs_: x,
labels_: y[:, None],
keep_prob: 0.5,
initial_state: state}
loss, state, _ = sess.run([cost, final_state, optimizer], feed_dict=feed)
if iteration%5==0:
print("Epoch: {}/{}".format(e, epochs),
"Iteration: {}".format(iteration),
"Train loss: {:.3f}".format(loss))
if iteration%25==0:
val_acc = []
val_state = sess.run(cell.zero_state(batch_size, tf.float32))
for x, y in get_batches(val_x, val_y, batch_size):
feed = {inputs_: x,
labels_: y[:, None],
keep_prob: 1,
initial_state: val_state}
batch_acc, val_state = sess.run([accuracy, final_state], feed_dict=feed)
val_acc.append(batch_acc)
print("Val acc: {:.3f}".format(np.mean(val_acc)))
iteration +=1
saver.save(sess, "checkpoints/sentiment.ckpt")
```
## Testing
```
test_acc = []
with tf.Session(graph=graph) as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
test_state = sess.run(cell.zero_state(batch_size, tf.float32))
for ii, (x, y) in enumerate(get_batches(test_x, test_y, batch_size), 1):
feed = {inputs_: x,
labels_: y[:, None],
keep_prob: 1,
initial_state: test_state}
batch_acc, test_state = sess.run([accuracy, final_state], feed_dict=feed)
test_acc.append(batch_acc)
print("Test accuracy: {:.3f}".format(np.mean(test_acc)))
```
# Imports
```
import torch
from torch.utils.data import Dataset, DataLoader
import random
import time
from datetime import datetime
from functools import partial
import json
```
# Sampler
## Basic sampler
```
def basic_sampler(seq, sample_len):
"""
Basic text sampler.
Returns the first sample_len items.
If sample_len is greater than the length of the seq, the seq is returned.
"""
seq_len = len(seq)
if seq_len > sample_len:
return seq[:sample_len]
else:
return seq
text = "ABC DEF GHI JKL!"
[basic_sampler(text, 5) for i in range(3)]
[basic_sampler(text, 8) for i in range(6)]
[basic_sampler(text, 100) for i in range(3)]
```
## Basic random sampler
```
def basic_rand_sampler(seq, sample_len):
"""
Basic random text sampler.
If sample_len is greater than the length of the seq, the seq is returned.
"""
seq_len = len(seq)
if seq_len > sample_len:
start_idx = random.randint(0, min(seq_len,seq_len - sample_len))
end_idx = start_idx+sample_len
return seq[start_idx:end_idx]
else:
return seq
text = "ABC DEF GHI JKL!"
[basic_rand_sampler(text, 5) for i in range(3)]
[basic_rand_sampler(text, 8) for i in range(6)]
[basic_rand_sampler(text, 100) for i in range(3)]
```
## Identity sampler
```
identity_sampler = lambda x: x
assert text == identity_sampler(text)
```
# Tokenizer
## Basic aminoacid tokenizer
```
AA_VOCAB = "ACDEFGHIKLMNOPQRSTUVWY"
AA_DICT = {a: i for i, a in enumerate(AA_VOCAB)}
def basic_aa_tokenizer(seq, context_length, return_mask=True):
"""
Maps each of the 22 proteinogenic amino acids in AA_VOCAB to an integer between 0 and 21.
Unknown characters are mapped to 22.
"""
seq_len = len(seq)
seq_wrap = torch.zeros(context_length, dtype=torch.long)
seq_wrap[:seq_len] = torch.tensor([AA_DICT[a] if a in AA_VOCAB else 22 for a in seq], dtype=torch.long)
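    # positions beyond seq_len stay 0 (padding); the mask below marks which entries are real residues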
if return_mask:
mask = torch.zeros_like(seq_wrap, dtype=torch.bool)
mask[0:seq_len] = True
return seq_wrap, mask
else:
return seq_wrap
aa_seq = "ACDEFGHIKLMNOPQRSTUVWYZZZ"
tokens, mask = basic_aa_tokenizer(aa_seq, 30)
tokens, mask
assert len(tokens[mask]) == len(aa_seq) # because we have a 1:1 relationship, unlike with the text
assert len(tokens[mask]) == mask.sum()
assert tokens[~mask].sum() == 0.
```
## Text tokenizer
```
from simple_tokenizer import tokenize
tokens, mask = tokenize(text, context_length=30, return_mask=True)
tokens, mask
assert len(tokens[mask]) == mask.sum()
assert tokens[~mask].sum() == 0.
```
# Dataset
```
class CLASPDataset(Dataset):
"""
Basic CLASP dataset that loads the preprocessed csv file into RAM.
path: path to the csv file
"""
def __init__(self, path, text_sampler, bioseq_sampler, text_tok, bioseq_tok):
super().__init__()
self.path = path
tp = time.time()
with open(path, "r") as reader:
self.data = reader.readlines()
print(f"Load data time: {time.time() - tp:.3f} s")
self.cols = self.data.pop(0).split(",")
self.len = len(self.data)
self.text_sampler = text_sampler
self.bioseq_sampler = bioseq_sampler
self.text_tok = text_tok
self.bioseq_tok = bioseq_tok
def __len__(self):
return self.len
def __getitem__(self, idx):
sample = self.data[idx][:-2] # without "\n"
sample = sample.split(",")
sample = [x for x in sample if len(x) > 0]
text = " ".join(sample[:-2])
bioseq = sample[-1]
text = self.text_sampler(text)
bioseq = self.bioseq_sampler(bioseq)
print(text, len(text))
print(bioseq, len(bioseq))
text, text_mask = self.text_tok(text)
bioseq, bioseq_mask = self.bioseq_tok(bioseq)
return text, text_mask, bioseq, bioseq_mask
str_sampler = partial(basic_rand_sampler, sample_len=100)
text_tok = partial(tokenize, context_length=120, return_mask=True)
bioseq_tok = partial(basic_aa_tokenizer, context_length=120, return_mask=True)
ds = CLASPDataset(path="uniprot_100_reduced.csv",
text_sampler=str_sampler,
bioseq_sampler=str_sampler,
text_tok=text_tok,
bioseq_tok=bioseq_tok)
ds[1]
```
# Dataloader
```
dl = DataLoader(ds, 32)
batch = next(iter(dl))
[b.shape for b in batch]
```
# RankSplitDataset
For details see notebook `RankSplitDataset.ipynb`.
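That notebook is not reproduced here, but conceptually the offset dictionary maps each line index of the csv to the byte offset at which that line starts, so every rank can `seek` straight to its share of the file. A rough sketch of how such a dictionary could be built (the function name and output path are illustrative, not taken from the project):
```
import json

def build_offset_dict(csv_path, out_path):
    # hypothetical helper: record the byte offset at which each line starts
    offsets = {}
    with open(csv_path, "rb") as f:
        idx = 0
        pos = f.tell()
        line = f.readline()
        while line:
            offsets[idx] = pos
            idx += 1
            pos = f.tell()
            line = f.readline()
    with open(out_path, "w", encoding="utf-8") as out:
        json.dump(offsets, out)  # json turns the integer keys into strings
    return offsets
```
This matches the lookup `self.offset_dict[str(self.rank_line_offset)]` used below, where keys come back as strings after the JSON round trip.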
```
path_offset_dict = '../data/uniprot_sprot_offset_dict.json'
with open(path_offset_dict, "r", encoding='utf-8') as data_file:
offset_dict = json.load(data_file)
len(offset_dict.keys())
file_path = "../data/uniprot_sprot.csv"
class RankSplitDataset(Dataset):
def __init__(self, file_path, offset_dict, rank, world_size, logger=None):
self.file_path = file_path
self.offset_dict = offset_dict
self.total_len = len(offset_dict.keys())
self.rank_len = self.total_len // world_size
self.rank_line_offset = self.rank_len * rank
self.rank_byte_offset = self.offset_dict[str(self.rank_line_offset)] # because json keys are strings after it is saved
if logger:
logger.info(f"{datetime.now()} rank: {rank} dataset information:\n{'total len':>20}: {self.total_len}\n{'rank len':>20}: {self.rank_len}\n{'rank line offset':>20}: {self.rank_line_offset}\n{'rank byte offset':>20}: {self.rank_byte_offset}")
else:
print(f"{datetime.now()} rank: {rank} dataset information:\n{'total len':>20}: {self.total_len}\n{'rank len':>20}: {self.rank_len}\n{'rank line offset':>20}: {self.rank_line_offset}\n{'rank byte offset':>20}: {self.rank_byte_offset}")
tp = time.time()
with open(self.file_path, 'r', encoding='utf-8') as f:
f.seek(self.rank_byte_offset) # move to the line for the specific rank
lines = []
for i in range(self.rank_len): # load all the lines for the rank
line = f.readline()
if line != "":
lines.append(line)
self.data = lines
if logger:
logger.info(f"{datetime.now()} rank: {rank} dataset load data time: {time.time() - tp:.3f} s")
logger.info(f"{datetime.now()} rank: {rank} dataset len: {len(self.data)}")
else:
print(f"{datetime.now()} rank: {rank} dataset load data time: {time.time() - tp:.3f} s")
print(f"{datetime.now()} rank: {rank} dataset len: {len(self.data)}")
def __len__(self):
return len(self.data)
def __getitem__(self, idx):
return self.data[idx]
```
# CLASPRankSplitDataset
```
class CLASPRankSplitDataset(RankSplitDataset):
"""
CLASP rank split dataset that loads equally sized pieces for each rank
of the preprocessed csv file into RAM.
path: path to the csv file
"""
def __init__(self, file_path, offset_dict, rank, world_size, logger,
text_sampler, bioseq_sampler, text_tok, bioseq_tok):
super().__init__(file_path, offset_dict, rank, world_size, logger)
self.text_sampler = text_sampler
self.bioseq_sampler = bioseq_sampler
self.text_tok = text_tok
self.bioseq_tok = bioseq_tok
def __getitem__(self, idx):
sample = self.data[idx][:-1] # without "\n"
sample = sample.split(",")
sample = [x for x in sample if len(x) > 0]
text = " ".join(sample[:-1])
bioseq = sample[-1]
text = self.text_sampler(text)
bioseq = self.bioseq_sampler(bioseq)
print(text, len(text))
print(bioseq, len(bioseq))
text, text_mask = self.text_tok(text)
bioseq, bioseq_mask = self.bioseq_tok(bioseq)
return text, text_mask, bioseq, bioseq_mask
str_sampler = partial(basic_rand_sampler, sample_len=100)
text_tok = partial(tokenize, context_length=120, return_mask=True)
bioseq_tok = partial(basic_aa_tokenizer, context_length=120, return_mask=True)
!free -h
ds1 = CLASPRankSplitDataset(file_path=file_path,
offset_dict=offset_dict,
rank=0,
world_size=2,
text_sampler=str_sampler,
bioseq_sampler=str_sampler,
text_tok=text_tok,
bioseq_tok=bioseq_tok,
logger=None)
!free -h
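# rough RAM delta in GB for ds1, read off manually from the two free -h snapshots above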
5.5 - 4.1
ds2 = CLASPRankSplitDataset(file_path=file_path,
offset_dict=offset_dict,
rank=1,
world_size=2,
text_sampler=str_sampler,
bioseq_sampler=str_sampler,
text_tok=text_tok,
bioseq_tok=bioseq_tok,
logger=None)
!free -h
6.7 - 5.5
6.7 - 4.1
assert not(torch.equal(ds1[0][0], ds2[0][0]))
assert not(torch.equal(ds1[0][1], ds2[0][1]))
ds1[0]
ds2[0]
```
# Real parameter dataset setup
```
text_sampler = partial(basic_rand_sampler, sample_len=1024)
text_tok = partial(tokenize, context_length=1024, return_mask=True)
bioseq_sampler = partial(basic_rand_sampler, sample_len=512)
bioseq_tok = partial(basic_aa_tokenizer, context_length=512, return_mask=True)
ds_real = CLASPRankSplitDataset(file_path=file_path,
offset_dict=offset_dict,
rank=0,
world_size=2,
text_sampler=text_sampler,
bioseq_sampler=bioseq_sampler,
text_tok=text_tok,
bioseq_tok=bioseq_tok,
logger=None)
```
## Small sample
```
# find some smaller length test cases
[(i,len(ds_real.data[i])) for i in range(10000) if len(ds_real.data[i]) < 1024]
idx = 246
ds_real.data[idx]
ds_real[idx]
sample = ds_real.data[idx][:-1].split(","); sample
" ".join(sample[:-1])
sample[-1]
```
## Long sample
```
idx = 0
ds_real.data[idx]
ds_real[idx]
sample = ds_real.data[idx][:-1].split(","); sample
" ".join(sample[:-1])
sample[-1]
```
# End
# Exploration and Modeling of 2013 NYC Taxi Trip and Fare Dataset
Here we show key features and capabilities of Spark's MLlib toolkit using the NYC taxi trip and fare data-set from 2013 (about 40 GB uncompressed). We take roughly a 10% sample of this data (from the month of December 2013, about 3.6 GB) to predict the amount of tip paid for each taxi trip in NYC based on features such as trip distance, trip time, and number of passengers. We have shown relevant plots in Python.
We have sampled the data in order for the runs to finish quickly. The same code (with minor modifications) can be run on the full 2013 data-set.
### The learning task is to predict the amount of tip (in dollars) paid for each taxi trip in December 2013
#### Pipeline:
1. Data ingestion, joining, and wrangling.
2. Data exploration and plotting.
3. Data preparation (featurizing/transformation).
4. Modeling (including hyper-parameter tuning with cross-validation), prediction, and model persistence.
5. Model evaluation on an independent validation data-set.
Through the above steps we highlight Spark SQL as well as MLlib's modeling and transformation functions.
### Data
* [NYC 2013 Taxi data](http://www.andresmh.com/nyctaxitrips/)
Pre-processed data has been made available from a public facing Azure Blob Storage Account.
## Set directory paths and location of training, validation files, as well as model location in blob storage
NOTE: The blob storage attached to the HDI cluster is referenced as: wasb:/// (Windows Azure Storage Blob). Other blob storage accounts are referenced as: wasb://
```
# 1. Location of training data: contains Dec 2013 trip and fare data from NYC
trip_file_loc = "wasb://[email protected]/NYCTaxi/KDD2016/trip_data_12.csv"
fare_file_loc = "wasb://[email protected]/NYCTaxi/KDD2016/trip_fare_12.csv"
# 2. Location of the joined taxi+fare training file
taxi_valid_file_loc = "wasb://[email protected]/Data/NYCTaxi/JoinedTaxiTripFare.Point1Pct.Valid.csv"
# 3. Set model storage directory path. This is where models will be saved.
modelDir = "/user/sshuser/NYCTaxi/Models/"; # The last backslash is needed;
# 4. Set data storage path. This is where data is stored on the blob attached to the cluster.
dataDir = "/HdiSamples/HdiSamples/NYCTaxi/"; # The last backslash is needed;
```
## Set SQL context and import necessary libraries
```
from pyspark import SparkConf
from pyspark import SparkContext
from pyspark.sql import SQLContext
from pyspark.sql.functions import UserDefinedFunction
from pyspark.sql.types import *
import matplotlib.pyplot as plt
import numpy as np
import datetime
sqlContext = SQLContext(sc)
```
## Data ingestion and wrangling using Spark SQL
#### Data import and registering as tables
```
## READ IN TRIP DATA FRAME FROM CSV
trip = spark.read.csv(path=trip_file_loc, header=True, inferSchema=True)
## READ IN FARE DATA FRAME FROM CSV
fare = spark.read.csv(path=fare_file_loc, header=True, inferSchema=True)
## CHECK SCHEMA OF TRIP AND FARE TABLES
trip.printSchema()
fare.printSchema()
## REGISTER DATA-FRAMEs AS A TEMP-TABLEs IN SQL-CONTEXT
trip.createOrReplaceTempView("trip")
fare.createOrReplaceTempView("fare")
```
#### Using Spark SQL to join, clean and featurize data
```
## USING SQL: MERGE TRIP AND FARE DATA-SETS TO CREATE A JOINED DATA-FRAME
## ELIMINATE SOME COLUMNS, AND FILTER ROWS WITH VALUES OF SOME COLUMNS
sqlStatement = """SELECT t.medallion, t.hack_license,
f.total_amount, f.tolls_amount,
hour(f.pickup_datetime) as pickup_hour, f.vendor_id, f.fare_amount,
f.surcharge, f.tip_amount, f.payment_type, t.rate_code,
t.passenger_count, t.trip_distance, t.trip_time_in_secs
FROM trip t, fare f
WHERE t.medallion = f.medallion AND t.hack_license = f.hack_license
AND t.pickup_datetime = f.pickup_datetime
AND t.passenger_count > 0 and t.passenger_count < 8
AND f.tip_amount >= 0 AND f.tip_amount <= 25
AND f.fare_amount >= 1 AND f.fare_amount <= 250
AND f.tip_amount < f.fare_amount AND t.trip_distance > 0
AND t.trip_distance <= 100 AND t.trip_time_in_secs >= 30
AND t.trip_time_in_secs <= 7200 AND t.rate_code <= 5
AND f.payment_type in ('CSH','CRD')"""
trip_fareDF = spark.sql(sqlStatement)
# REGISTER JOINED TRIP-FARE DF IN SQL-CONTEXT
trip_fareDF.createOrReplaceTempView("trip_fare")
## SHOW WHICH TABLES ARE REGISTERED IN SQL-CONTEXT
spark.sql("show tables").show()
```
#### Sample data for creating and evaluating model & save in blob
```
# SAMPLE 10% OF DATA AND SAVE IT IN BLOB AS THE TRAINING DATA
trip_fare_featSampled = trip_fareDF.sample(False, 0.1, seed=1234)
trainfilename = dataDir + "TrainData";
trip_fare_featSampled.repartition(10).write.mode("overwrite").parquet(trainfilename)
```
## Data ingestion & cleanup: Read in the joined taxi trip and fare training file (as parquet), format and clean data, and create data-frame
The taxi trip and fare files were joined based on the instructions provided in:
"https://azure.microsoft.com/en-us/documentation/articles/machine-learning-data-science-process-hive-walkthrough/"
```
## READ IN DATA FRAME FROM CSV
taxi_train_df = spark.read.parquet(trainfilename)
## CREATE A CLEANED DATA-FRAME BY DROPPING SOME UNNECESSARY COLUMNS & FILTERING FOR UNDESIRED VALUES OR OUTLIERS
taxi_df_train_cleaned = taxi_train_df.drop('medallion').drop('hack_license').drop('total_amount').drop('tolls_amount')\
.filter("passenger_count > 0 and passenger_count < 8 AND tip_amount >= 0 AND tip_amount < 15 AND \
fare_amount >= 1 AND fare_amount < 150 AND trip_distance > 0 AND trip_distance < 100 AND \
trip_time_in_secs > 30 AND trip_time_in_secs < 7200" )
## PERSIST AND MATERIALIZE DF IN MEMORY
taxi_df_train_cleaned.persist()
taxi_df_train_cleaned.count()
## REGISTER DATA-FRAME AS A TEMP-TABLE IN SQL-CONTEXT
taxi_df_train_cleaned.createOrReplaceTempView("taxi_train")
taxi_df_train_cleaned.show()
taxi_df_train_cleaned.printSchema()
```
<a name="exploration"></a>
## Data exploration & visualization: Plotting of target variables and features
#### First, summarize data using SQL, this outputs a Spark data frame. If the data-set is too large, it can be sampled
```
%%sql -q -o sqlResultsPD
SELECT fare_amount, passenger_count, tip_amount FROM taxi_train WHERE passenger_count > 0 AND passenger_count < 7 AND fare_amount > 0 AND fare_amount < 100 AND tip_amount > 0 AND tip_amount < 15
%%local
sqlResultsPD.head()
```
#### Plot histogram of tip amount, relationship between tip amount vs. other features
```
%%local
%matplotlib inline
import matplotlib.pyplot as plt
## %%local creates a pandas data-frame on the head node memory, from spark data-frame,
## which can then be used for plotting. Here, sampling data is a good idea, depending on the memory of the head node
# TIP BY PAYMENT TYPE AND PASSENGER COUNT
ax1 = sqlResultsPD[['tip_amount']].plot(kind='hist', bins=25, facecolor='lightblue')
ax1.set_title('Tip amount distribution')
ax1.set_xlabel('Tip Amount ($)'); ax1.set_ylabel('Counts');
plt.figure(figsize=(4,4)); plt.suptitle(''); plt.show()
# TIP BY PASSENGER COUNT
ax2 = sqlResultsPD.boxplot(column=['tip_amount'], by=['passenger_count'])
ax2.set_title('Tip amount by Passenger count')
ax2.set_xlabel('Passenger count'); ax2.set_ylabel('Tip Amount ($)');
plt.figure(figsize=(4,4)); plt.suptitle(''); plt.show()
# TIP AMOUNT BY FARE AMOUNT, POINTS ARE SCALED BY PASSENGER COUNT
ax = sqlResultsPD.plot(kind='scatter', x= 'fare_amount', y = 'tip_amount', c='blue', alpha = 0.10, s=2.5*(sqlResultsPD.passenger_count))
ax.set_title('Tip amount by Fare amount')
ax.set_xlabel('Fare Amount ($)'); ax.set_ylabel('Tip Amount ($)');
plt.axis([-2, 80, -2, 20])
plt.figure(figsize=(4,4)); plt.suptitle(''); plt.show()
```
<a name="transformation"></a>
## Feature engineering, transformation and data preparation for modeling
#### Create a new feature by binning hours into traffic time buckets using Spark SQL
```
### CREATE FOUR BUCKETS FOR TRAFFIC TIMES
sqlStatement = """SELECT payment_type, pickup_hour, fare_amount, tip_amount,
vendor_id, rate_code, passenger_count, trip_distance, trip_time_in_secs,
CASE
WHEN (pickup_hour <= 6 OR pickup_hour >= 20) THEN 'Night'
WHEN (pickup_hour >= 7 AND pickup_hour <= 10) THEN 'AMRush'
WHEN (pickup_hour >= 11 AND pickup_hour <= 15) THEN 'Afternoon'
WHEN (pickup_hour >= 16 AND pickup_hour <= 19) THEN 'PMRush'
END as TrafficTimeBins,
CASE
WHEN (tip_amount > 0) THEN 1
WHEN (tip_amount <= 0) THEN 0
END as tipped
FROM taxi_train"""
taxi_df_train_with_newFeatures = spark.sql(sqlStatement)
```
#### Indexing of categorical features
Here we only transform a few of the variables as examples, namely those that are character strings. Other variables, such as week-day, which are represented by numerical values, can also be indexed as categorical variables.
For indexing, we used [StringIndexer](https://spark.apache.org/docs/2.1.0/ml-features.html#stringindexer) function from SparkML. We'll then use the [Pipeline](https://spark.apache.org/docs/2.1.0/ml-pipeline.html) API to consolidate all of our feature engineering operations into a single high-level functional.
```
from pyspark.ml import Pipeline
from pyspark.ml.feature import OneHotEncoder, StringIndexer, VectorIndexer
# DEFINE THE TRANSFORMATIONS THAT NEEDS TO BE APPLIED TO SOME OF THE FEATURES
sI1 = StringIndexer(inputCol="vendor_id", outputCol="vendorIndex");
sI2 = StringIndexer(inputCol="rate_code", outputCol="rateIndex");
sI3 = StringIndexer(inputCol="payment_type", outputCol="paymentIndex");
sI4 = StringIndexer(inputCol="TrafficTimeBins", outputCol="TrafficTimeBinsIndex");
# APPLY TRANSFORMATIONS
encodedFinal = Pipeline(stages=[sI1, sI2, sI3, sI4]).fit(taxi_df_train_with_newFeatures).transform(taxi_df_train_with_newFeatures);
```
#### Split data into train/test. Training fraction will be used to create model, and testing fraction will be used to evaluate model.
```
trainingFraction = 0.75; testingFraction = (1-trainingFraction);
seed = 1234;
# SPLIT SAMPLED DATA-FRAME INTO TRAIN/TEST SETS
trainData, testData = encodedFinal.randomSplit([trainingFraction, testingFraction], seed=seed);
# CACHE DATA FRAMES IN MEMORY
trainData.persist(); trainData.count()
testData.persist(); testData.count()
```
## Train a regression model: Predict the amount of tip paid for taxi trips
### Train Elastic Net regression model, and evaluate performance on test data
```
from pyspark.ml.feature import RFormula
from pyspark.ml.regression import LinearRegression
from pyspark.mllib.evaluation import RegressionMetrics
## DEFINE REGRESSION FORMULA
regFormula = RFormula(formula="tip_amount ~ paymentIndex + vendorIndex + rateIndex + TrafficTimeBinsIndex + pickup_hour + passenger_count + trip_time_in_secs + trip_distance + fare_amount")
## DEFINE INDEXER FOR CATEGORIAL VARIABLES
featureIndexer = VectorIndexer(inputCol="features", outputCol="indexedFeatures", maxCategories=32)
## DEFINE ELASTIC NET REGRESSOR
eNet = LinearRegression(featuresCol="indexedFeatures", maxIter=25, regParam=0.01, elasticNetParam=0.5)
## Fit model, with formula and other transformations
model = Pipeline(stages=[regFormula, featureIndexer, eNet]).fit(trainData)
## PREDICT ON TEST DATA AND EVALUATE
predictions = model.transform(testData)
predictionAndLabels = predictions.select("label","prediction").rdd
testMetrics = RegressionMetrics(predictionAndLabels)
print("RMSE = %s" % testMetrics.rootMeanSquaredError)
print("R-sqr = %s" % testMetrics.r2)
## PLOT ACTUALS VS. PREDICTIONS
predictions.select("label","prediction").createOrReplaceTempView("tmp_results");
```
### Train Gradient Boosting Tree regression model, and evaluate performance on test data
```
from pyspark.ml.regression import GBTRegressor
## DEFINE REGRESSION FURMULA
regFormula = RFormula(formula="tip_amount ~ paymentIndex + vendorIndex + rateIndex + TrafficTimeBinsIndex + pickup_hour + passenger_count + trip_time_in_secs + trip_distance + fare_amount")
## DEFINE INDEXER FOR CATEGORIAL VARIABLES
featureIndexer = VectorIndexer(inputCol="features", outputCol="indexedFeatures", maxCategories=32)
## DEFINE GRADIENT BOOSTING TREE REGRESSOR
gBT = GBTRegressor(featuresCol="indexedFeatures", maxIter=10)
## Fit model, with formula and other transformations
model = Pipeline(stages=[regFormula, featureIndexer, gBT]).fit(trainData)
## PREDICT ON TEST DATA AND EVALUATE
predictions = model.transform(testData)
predictionAndLabels = predictions.select("label","prediction").rdd
testMetrics = RegressionMetrics(predictionAndLabels)
print("RMSE = %s" % testMetrics.rootMeanSquaredError)
print("R-sqr = %s" % testMetrics.r2)
## PLOT ACTUALS VS. PREDICTIONS
predictions.select("label","prediction").createOrReplaceTempView("tmp_results");
```
### Train a random forest regression model using the Pipeline function, save, and evaluate on test data set
```
from pyspark.ml.feature import RFormula
from sklearn.metrics import roc_curve,auc
from pyspark.ml.regression import RandomForestRegressor
from pyspark.mllib.evaluation import RegressionMetrics
## DEFINE REGRESSION FORMULA
regFormula = RFormula(formula="tip_amount ~ paymentIndex + vendorIndex + rateIndex + TrafficTimeBinsIndex + pickup_hour + passenger_count + trip_time_in_secs + trip_distance + fare_amount")
## DEFINE INDEXER FOR CATEGORIAL VARIABLES
featureIndexer = VectorIndexer(inputCol="features", outputCol="indexedFeatures", maxCategories=32)
## DEFINE RANDOM FOREST ESTIMATOR
randForest = RandomForestRegressor(featuresCol = 'indexedFeatures', labelCol = 'label', numTrees=20,
featureSubsetStrategy="auto",impurity='variance', maxDepth=6, maxBins=100)
## Fit model, with formula and other transformations
model = Pipeline(stages=[regFormula, featureIndexer, randForest]).fit(trainData)
## SAVE MODEL
datestamp = datetime.datetime.now().strftime('%m-%d-%Y-%s');
fileName = "RandomForestRegressionModel_" + datestamp;
randForestDirfilename = modelDir + fileName;
model.save(randForestDirfilename)
## PREDICT ON TEST DATA AND EVALUATE
predictions = model.transform(testData)
predictionAndLabels = predictions.select("label","prediction").rdd
testMetrics = RegressionMetrics(predictionAndLabels)
print("RMSE = %s" % testMetrics.rootMeanSquaredError)
print("R-sqr = %s" % testMetrics.r2)
## PLOT ACTUALS VS. PREDICTIONS
predictions.select("label","prediction").createOrReplaceTempView("tmp_results");
%%sql -q -o predictionsPD
SELECT * from tmp_results
%%local
import numpy as np
ax = predictionsPD.plot(kind='scatter', figsize = (5,5), x='label', y='prediction', color='blue', alpha = 0.25, label='Actual vs. predicted');
fit = np.polyfit(predictionsPD['label'], predictionsPD['prediction'], deg=1)
ax.set_title('Actual vs. Predicted Tip Amounts ($)')
ax.set_xlabel("Actual"); ax.set_ylabel("Predicted");
ax.plot(predictionsPD['label'], fit[0] * predictionsPD['label'] + fit[1], color='magenta')
plt.axis([-1, 15, -1, 15])
plt.show(ax)
```
## Hyper-parameter tuning: Train a random forest model using cross-validation
```
from pyspark.ml.tuning import CrossValidator, ParamGridBuilder
from pyspark.ml.evaluation import RegressionEvaluator
## DEFINE RANDOM FOREST MODELS
randForest = RandomForestRegressor(featuresCol = 'indexedFeatures', labelCol = 'label',
featureSubsetStrategy="auto",impurity='variance', maxBins=100)
## DEFINE MODELING PIPELINE, INCLUDING FORMULA, FEATURE TRANSFORMATIONS, AND ESTIMATOR
pipeline = Pipeline(stages=[regFormula, featureIndexer, randForest])
## DEFINE PARAMETER GRID FOR RANDOM FOREST
paramGrid = ParamGridBuilder() \
.addGrid(randForest.numTrees, [10, 25, 50]) \
.addGrid(randForest.maxDepth, [3, 5, 7]) \
.build()
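# 3 x 3 = 9 parameter combinations; with numFolds=3 below, 27 models are fit before the best setting is refit on the full training set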
## DEFINE CROSS VALIDATION
crossval = CrossValidator(estimator=pipeline,
estimatorParamMaps=paramGrid,
evaluator=RegressionEvaluator(metricName="rmse"),
numFolds=3)
## TRAIN MODEL USING CV
cvModel = crossval.fit(trainData)
## PREDICT AND EVALUATE TEST DATA SET
predictions = cvModel.transform(testData)
evaluator = RegressionEvaluator(labelCol="label", predictionCol="prediction", metricName="r2")
r2 = evaluator.evaluate(predictions)
print("R-squared on test data = %g" % r2)
## SAVE THE BEST MODEL
datestamp = datetime.datetime.now().strftime('%m-%d-%Y-%s');
fileName = "CV_RandomForestRegressionModel_" + datestamp;
CVDirfilename = modelDir + fileName;
cvModel.bestModel.save(CVDirfilename);
```
## Load a saved pipeline model and evaluate it on test data set
```
from pyspark.ml import PipelineModel
savedModel = PipelineModel.load(randForestDirfilename)
predictions = savedModel.transform(testData)
predictionAndLabels = predictions.select("label","prediction").rdd
testMetrics = RegressionMetrics(predictionAndLabels)
print("RMSE = %s" % testMetrics.rootMeanSquaredError)
print("R-sqr = %s" % testMetrics.r2)
```
## Load and transform an independent validation data-set, and evaluate the saved pipeline model
### Note that this validation data, by design, has a different format than the original training data. By wrangling and transformation, we make the data format the same as the training data for the purpose of scoring.
```
## READ IN DATA FRAME FROM CSV
taxi_valid_df = spark.read.csv(path=taxi_valid_file_loc, header=True, inferSchema=True)
taxi_valid_df.printSchema()
## READ IN DATA FRAME FROM CSV
taxi_valid_df = spark.read.csv(path=taxi_valid_file_loc, header=True, inferSchema=True)
## CREATE A CLEANED DATA-FRAME BY DROPPING SOME UNNECESSARY COLUMNS & FILTERING FOR UNDESIRED VALUES OR OUTLIERS
taxi_df_valid_cleaned = taxi_valid_df.drop('medallion').drop('hack_license').drop('store_and_fwd_flag').drop('pickup_datetime')\
.drop('dropoff_datetime').drop('pickup_longitude').drop('pickup_latitude').drop('dropoff_latitude')\
.drop('dropoff_longitude').drop('tip_class').drop('total_amount').drop('tolls_amount').drop('mta_tax')\
.drop('direct_distance').drop('surcharge')\
.filter("passenger_count > 0 and passenger_count < 8 AND payment_type in ('CSH', 'CRD') \
AND tip_amount >= 0 AND tip_amount < 30 AND fare_amount >= 1 AND fare_amount < 150 AND trip_distance > 0 \
AND trip_distance < 100 AND trip_time_in_secs > 30 AND trip_time_in_secs < 7200" )
## REGISTER DATA-FRAME AS A TEMP-TABLE IN SQL-CONTEXT
taxi_df_valid_cleaned.createOrReplaceTempView("taxi_valid")
### CREATE FOUR BUCKETS FOR TRAFFIC TIMES
sqlStatement = """ SELECT *, CASE
WHEN (pickup_hour <= 6 OR pickup_hour >= 20) THEN "Night"
WHEN (pickup_hour >= 7 AND pickup_hour <= 10) THEN "AMRush"
WHEN (pickup_hour >= 11 AND pickup_hour <= 15) THEN "Afternoon"
WHEN (pickup_hour >= 16 AND pickup_hour <= 19) THEN "PMRush"
END as TrafficTimeBins
FROM taxi_valid
"""
taxi_df_valid_with_newFeatures = spark.sql(sqlStatement)
## APPLY THE SAME TRANSFORMATION ON THIS DATA AS THE ORIGINAL TRAINING DATA
encodedFinalValid = Pipeline(stages=[sI1, sI2, sI3, sI4]).fit(taxi_df_train_with_newFeatures).transform(taxi_df_valid_with_newFeatures)
## LOAD SAVED MODEL, SCORE VALIDATION DATA, AND EVALUATE
savedModel = PipelineModel.load(CVDirfilename)
predictions = savedModel.transform(encodedFinalValid)
r2 = evaluator.evaluate(predictions)
print("R-squared on validation data = %g" % r2)
```
#### Save predictions to a file in HDFS
```
datestamp = datetime.datetime.now().strftime('%m-%d-%Y-%s');
fileName = "Predictions_CV_" + datestamp;
predictionfile = dataDir + fileName;
predictions.select("label","prediction").write.mode("overwrite").csv(predictionfile)
```
|
github_jupyter
|
# 1. Location of training data: contains Dec 2013 trip and fare data from NYC
trip_file_loc = "wasb://[email protected]/NYCTaxi/KDD2016/trip_data_12.csv"
fare_file_loc = "wasb://[email protected]/NYCTaxi/KDD2016/trip_fare_12.csv"
# 2. Location of the joined taxi+fare training file
taxi_valid_file_loc = "wasb://[email protected]/Data/NYCTaxi/JoinedTaxiTripFare.Point1Pct.Valid.csv"
# 3. Set model storage directory path. This is where models will be saved.
modelDir = "/user/sshuser/NYCTaxi/Models/"; # The trailing slash is needed;
# 4. Set data storage path. This is where data is stored on the blob attached to the cluster.
dataDir = "/HdiSamples/HdiSamples/NYCTaxi/"; # The trailing slash is needed;
from pyspark import SparkConf
from pyspark import SparkContext
from pyspark.sql import SQLContext
from pyspark.sql.functions import UserDefinedFunction
from pyspark.sql.types import *
import matplotlib.pyplot as plt
import numpy as np
import datetime
sqlContext = SQLContext(sc)
## READ IN TRIP DATA FRAME FROM CSV
trip = spark.read.csv(path=trip_file_loc, header=True, inferSchema=True)
## READ IN FARE DATA FRAME FROM CSV
fare = spark.read.csv(path=fare_file_loc, header=True, inferSchema=True)
## CHECK SCHEMA OF TRIP AND FARE TABLES
trip.printSchema()
fare.printSchema()
## REGISTER DATA-FRAMEs AS A TEMP-TABLEs IN SQL-CONTEXT
trip.createOrReplaceTempView("trip")
fare.createOrReplaceTempView("fare")
## USING SQL: MERGE TRIP AND FARE DATA-SETS TO CREATE A JOINED DATA-FRAME
## ELIMINATE SOME COLUMNS, AND FILTER ROWS ON THE VALUES OF SOME COLUMNS
sqlStatement = """SELECT t.medallion, t.hack_license,
f.total_amount, f.tolls_amount,
hour(f.pickup_datetime) as pickup_hour, f.vendor_id, f.fare_amount,
f.surcharge, f.tip_amount, f.payment_type, t.rate_code,
t.passenger_count, t.trip_distance, t.trip_time_in_secs
FROM trip t, fare f
WHERE t.medallion = f.medallion AND t.hack_license = f.hack_license
AND t.pickup_datetime = f.pickup_datetime
AND t.passenger_count > 0 and t.passenger_count < 8
AND f.tip_amount >= 0 AND f.tip_amount <= 25
AND f.fare_amount >= 1 AND f.fare_amount <= 250
AND f.tip_amount < f.fare_amount AND t.trip_distance > 0
AND t.trip_distance <= 100 AND t.trip_time_in_secs >= 30
AND t.trip_time_in_secs <= 7200 AND t.rate_code <= 5
AND f.payment_type in ('CSH','CRD')"""
trip_fareDF = spark.sql(sqlStatement)
# REGISTER JOINED TRIP-FARE DF IN SQL-CONTEXT
trip_fareDF.createOrReplaceTempView("trip_fare")
## SHOW WHICH TABLES ARE REGISTERED IN SQL-CONTEXT
spark.sql("show tables").show()
# SAMPLE 10% OF DATA, SPLIT INTO TRAINING AND VALIDATION AND SAVE IN BLOB
trip_fare_featSampled = trip_fareDF.sample(False, 0.1, seed=1234)
trainfilename = dataDir + "TrainData";
trip_fare_featSampled.repartition(10).write.mode("overwrite").parquet(trainfilename)
## READ IN DATA FRAME FROM PARQUET
taxi_train_df = spark.read.parquet(trainfilename)
## CREATE A CLEANED DATA-FRAME BY DROPPING SOME UN-NECESSARY COLUMNS & FILTERING FOR UNDESIRED VALUES OR OUTLIERS
taxi_df_train_cleaned = taxi_train_df.drop('medallion').drop('hack_license').drop('total_amount').drop('tolls_amount')\
.filter("passenger_count > 0 and passenger_count < 8 AND tip_amount >= 0 AND tip_amount < 15 AND \
fare_amount >= 1 AND fare_amount < 150 AND trip_distance > 0 AND trip_distance < 100 AND \
trip_time_in_secs > 30 AND trip_time_in_secs < 7200" )
## PERSIST AND MATERIALIZE DF IN MEMORY
taxi_df_train_cleaned.persist()
taxi_df_train_cleaned.count()
## REGISTER DATA-FRAME AS A TEMP-TABLE IN SQL-CONTEXT
taxi_df_train_cleaned.createOrReplaceTempView("taxi_train")
taxi_df_train_cleaned.show()
taxi_df_train_cleaned.printSchema()
%%sql -q -o sqlResultsPD
SELECT fare_amount, passenger_count, tip_amount FROM taxi_train WHERE passenger_count > 0 AND passenger_count < 7 AND fare_amount > 0 AND fare_amount < 100 AND tip_amount > 0 AND tip_amount < 15
%%local
sqlResultsPD.head()
%%local
%matplotlib inline
import matplotlib.pyplot as plt
## %%local creates a pandas data-frame on the head node memory, from spark data-frame,
## which can then be used for plotting. Here, sampling data is a good idea, depending on the memory of the head node
# TIP BY PAYMENT TYPE AND PASSENGER COUNT
ax1 = sqlResultsPD[['tip_amount']].plot(kind='hist', bins=25, facecolor='lightblue')
ax1.set_title('Tip amount distribution')
ax1.set_xlabel('Tip Amount ($)'); ax1.set_ylabel('Counts');
plt.figure(figsize=(4,4)); plt.suptitle(''); plt.show()
# TIP BY PASSENGER COUNT
ax2 = sqlResultsPD.boxplot(column=['tip_amount'], by=['passenger_count'])
ax2.set_title('Tip amount by Passenger count')
ax2.set_xlabel('Passenger count'); ax2.set_ylabel('Tip Amount ($)');
plt.figure(figsize=(4,4)); plt.suptitle(''); plt.show()
# TIP AMOUNT BY FARE AMOUNT, POINTS ARE SCALED BY PASSENGER COUNT
ax = sqlResultsPD.plot(kind='scatter', x= 'fare_amount', y = 'tip_amount', c='blue', alpha = 0.10, s=2.5*(sqlResultsPD.passenger_count))
ax.set_title('Tip amount by Fare amount')
ax.set_xlabel('Fare Amount ($)'); ax.set_ylabel('Tip Amount ($)');
plt.axis([-2, 80, -2, 20])
plt.figure(figsize=(4,4)); plt.suptitle(''); plt.show()
### CREATE FOUR BUCKETS FOR TRAFFIC TIMES
sqlStatement = """SELECT payment_type, pickup_hour, fare_amount, tip_amount,
vendor_id, rate_code, passenger_count, trip_distance, trip_time_in_secs,
CASE
WHEN (pickup_hour <= 6 OR pickup_hour >= 20) THEN 'Night'
WHEN (pickup_hour >= 7 AND pickup_hour <= 10) THEN 'AMRush'
WHEN (pickup_hour >= 11 AND pickup_hour <= 15) THEN 'Afternoon'
WHEN (pickup_hour >= 16 AND pickup_hour <= 19) THEN 'PMRush'
END as TrafficTimeBins,
CASE
WHEN (tip_amount > 0) THEN 1
WHEN (tip_amount <= 0) THEN 0
END as tipped
FROM taxi_train"""
taxi_df_train_with_newFeatures = spark.sql(sqlStatement)
from pyspark.ml import Pipeline
from pyspark.ml.feature import OneHotEncoder, StringIndexer, VectorIndexer
# DEFINE THE TRANSFORMATIONS THAT NEED TO BE APPLIED TO SOME OF THE FEATURES
sI1 = StringIndexer(inputCol="vendor_id", outputCol="vendorIndex");
sI2 = StringIndexer(inputCol="rate_code", outputCol="rateIndex");
sI3 = StringIndexer(inputCol="payment_type", outputCol="paymentIndex");
sI4 = StringIndexer(inputCol="TrafficTimeBins", outputCol="TrafficTimeBinsIndex");
# APPLY TRANSFORMATIONS
encodedFinal = Pipeline(stages=[sI1, sI2, sI3, sI4]).fit(taxi_df_train_with_newFeatures).transform(taxi_df_train_with_newFeatures);
trainingFraction = 0.75; testingFraction = (1-trainingFraction);
seed = 1234;
# SPLIT SAMPLED DATA-FRAME INTO TRAIN/TEST, WITH A RANDOM COLUMN ADDED FOR DOING CV (SHOWN LATER)
trainData, testData = encodedFinal.randomSplit([trainingFraction, testingFraction], seed=seed);
# CACHE DATA FRAMES IN MEMORY
trainData.persist(); trainData.count()
testData.persist(); testData.count()
from pyspark.ml.feature import RFormula
from pyspark.ml.regression import LinearRegression
from pyspark.mllib.evaluation import RegressionMetrics
## DEFINE REGRESSION FORMULA
regFormula = RFormula(formula="tip_amount ~ paymentIndex + vendorIndex + rateIndex + TrafficTimeBinsIndex + pickup_hour + passenger_count + trip_time_in_secs + trip_distance + fare_amount")
## DEFINE INDEXER FOR CATEGORICAL VARIABLES
featureIndexer = VectorIndexer(inputCol="features", outputCol="indexedFeatures", maxCategories=32)
## DEFINE ELASTIC NET REGRESSOR
eNet = LinearRegression(featuresCol="indexedFeatures", maxIter=25, regParam=0.01, elasticNetParam=0.5)
## Fit model, with formula and other transformations
model = Pipeline(stages=[regFormula, featureIndexer, eNet]).fit(trainData)
## PREDICT ON TEST DATA AND EVALUATE
predictions = model.transform(testData)
predictionAndLabels = predictions.select("label","prediction").rdd
testMetrics = RegressionMetrics(predictionAndLabels)
print("RMSE = %s" % testMetrics.rootMeanSquaredError)
print("R-sqr = %s" % testMetrics.r2)
## PLOT ACTUALS VS. PREDICTIONS
predictions.select("label","prediction").createOrReplaceTempView("tmp_results");
from pyspark.ml.regression import GBTRegressor
## DEFINE REGRESSION FORMULA
regFormula = RFormula(formula="tip_amount ~ paymentIndex + vendorIndex + rateIndex + TrafficTimeBinsIndex + pickup_hour + passenger_count + trip_time_in_secs + trip_distance + fare_amount")
## DEFINE INDEXER FOR CATEGORICAL VARIABLES
featureIndexer = VectorIndexer(inputCol="features", outputCol="indexedFeatures", maxCategories=32)
## DEFINE GRADIENT BOOSTING TREE REGRESSOR
gBT = GBTRegressor(featuresCol="indexedFeatures", maxIter=10)
## Fit model, with formula and other transformations
model = Pipeline(stages=[regFormula, featureIndexer, gBT]).fit(trainData)
## PREDICT ON TEST DATA AND EVALUATE
predictions = model.transform(testData)
predictionAndLabels = predictions.select("label","prediction").rdd
testMetrics = RegressionMetrics(predictionAndLabels)
print("RMSE = %s" % testMetrics.rootMeanSquaredError)
print("R-sqr = %s" % testMetrics.r2)
## PLOT ACTUALS VS. PREDICTIONS
predictions.select("label","prediction").createOrReplaceTempView("tmp_results");
from pyspark.ml.feature import RFormula
from sklearn.metrics import roc_curve,auc
from pyspark.ml.regression import RandomForestRegressor
from pyspark.mllib.evaluation import RegressionMetrics
## DEFINE REGRESSION FORMULA
regFormula = RFormula(formula="tip_amount ~ paymentIndex + vendorIndex + rateIndex + TrafficTimeBinsIndex + pickup_hour + passenger_count + trip_time_in_secs + trip_distance + fare_amount")
## DEFINE INDEXER FOR CATEGORICAL VARIABLES
featureIndexer = VectorIndexer(inputCol="features", outputCol="indexedFeatures", maxCategories=32)
## DEFINE RANDOM FOREST ESTIMATOR
randForest = RandomForestRegressor(featuresCol = 'indexedFeatures', labelCol = 'label', numTrees=20,
featureSubsetStrategy="auto",impurity='variance', maxDepth=6, maxBins=100)
## Fit model, with formula and other transformations
model = Pipeline(stages=[regFormula, featureIndexer, randForest]).fit(trainData)
## SAVE MODEL
datestamp = datetime.datetime.now().strftime('%m-%d-%Y-%s');
fileName = "RandomForestRegressionModel_" + datestamp;
randForestDirfilename = modelDir + fileName;
model.save(randForestDirfilename)
## PREDICT ON TEST DATA AND EVALUATE
predictions = model.transform(testData)
predictionAndLabels = predictions.select("label","prediction").rdd
testMetrics = RegressionMetrics(predictionAndLabels)
print("RMSE = %s" % testMetrics.rootMeanSquaredError)
print("R-sqr = %s" % testMetrics.r2)
## PLOT ACTUALS VS. PREDICTIONS
predictions.select("label","prediction").createOrReplaceTempView("tmp_results");
%%sql -q -o predictionsPD
SELECT * from tmp_results
%%local
import numpy as np
ax = predictionsPD.plot(kind='scatter', figsize = (5,5), x='label', y='prediction', color='blue', alpha = 0.25, label='Actual vs. predicted');
fit = np.polyfit(predictionsPD['label'], predictionsPD['prediction'], deg=1)
ax.set_title('Actual vs. Predicted Tip Amounts ($)')
ax.set_xlabel("Actual"); ax.set_ylabel("Predicted");
ax.plot(predictionsPD['label'], fit[0] * predictionsPD['label'] + fit[1], color='magenta')
plt.axis([-1, 15, -1, 15])
plt.show(ax)
from pyspark.ml.tuning import CrossValidator, ParamGridBuilder
from pyspark.ml.evaluation import RegressionEvaluator
## DEFINE RANDOM FOREST MODELS
randForest = RandomForestRegressor(featuresCol = 'indexedFeatures', labelCol = 'label',
featureSubsetStrategy="auto",impurity='variance', maxBins=100)
## DEFINE MODELING PIPELINE, INCLUDING FORMULA, FEATURE TRANSFORMATIONS, AND ESTIMATOR
pipeline = Pipeline(stages=[regFormula, featureIndexer, randForest])
## DEFINE PARAMETER GRID FOR RANDOM FOREST
paramGrid = ParamGridBuilder() \
.addGrid(randForest.numTrees, [10, 25, 50]) \
.addGrid(randForest.maxDepth, [3, 5, 7]) \
.build()
## DEFINE CROSS VALIDATION
crossval = CrossValidator(estimator=pipeline,
estimatorParamMaps=paramGrid,
evaluator=RegressionEvaluator(metricName="rmse"),
numFolds=3)
## TRAIN MODEL USING CV
cvModel = crossval.fit(trainData)
## PREDICT AND EVALUATE TEST DATA SET
predictions = cvModel.transform(testData)
evaluator = RegressionEvaluator(labelCol="label", predictionCol="prediction", metricName="r2")
r2 = evaluator.evaluate(predictions)
print("R-squared on test data = %g" % r2)
## SAVE THE BEST MODEL
datestamp = datetime.datetime.now().strftime('%m-%d-%Y-%s');
fileName = "CV_RandomForestRegressionModel_" + datestamp;
CVDirfilename = modelDir + fileName;
cvModel.bestModel.save(CVDirfilename);
from pyspark.ml import PipelineModel
savedModel = PipelineModel.load(randForestDirfilename)
predictions = savedModel.transform(testData)
predictionAndLabels = predictions.select("label","prediction").rdd
testMetrics = RegressionMetrics(predictionAndLabels)
print("RMSE = %s" % testMetrics.rootMeanSquaredError)
print("R-sqr = %s" % testMetrics.r2)
## READ IN DATA FRAME FROM CSV
taxi_valid_df = spark.read.csv(path=taxi_valid_file_loc, header=True, inferSchema=True)
taxi_valid_df.printSchema()
## READ IN DATA FRAME FROM CSV
taxi_valid_df = spark.read.csv(path=taxi_valid_file_loc, header=True, inferSchema=True)
## CREATE A CLEANED DATA-FRAME BY DROPPING SOME UN-NECESSARY COLUMNS & FILTERING FOR UNDESIRED VALUES OR OUTLIERS
taxi_df_valid_cleaned = taxi_valid_df.drop('medallion').drop('hack_license').drop('store_and_fwd_flag').drop('pickup_datetime')\
.drop('dropoff_datetime').drop('pickup_longitude').drop('pickup_latitude').drop('dropoff_latitude')\
.drop('dropoff_longitude').drop('tip_class').drop('total_amount').drop('tolls_amount').drop('mta_tax')\
.drop('direct_distance').drop('surcharge')\
.filter("passenger_count > 0 and passenger_count < 8 AND payment_type in ('CSH', 'CRD') \
AND tip_amount >= 0 AND tip_amount < 30 AND fare_amount >= 1 AND fare_amount < 150 AND trip_distance > 0 \
AND trip_distance < 100 AND trip_time_in_secs > 30 AND trip_time_in_secs < 7200" )
## REGISTER DATA-FRAME AS A TEMP-TABLE IN SQL-CONTEXT
taxi_df_valid_cleaned.createOrReplaceTempView("taxi_valid")
### CREATE FOUR BUCKETS FOR TRAFFIC TIMES
sqlStatement = """ SELECT *, CASE
WHEN (pickup_hour <= 6 OR pickup_hour >= 20) THEN "Night"
WHEN (pickup_hour >= 7 AND pickup_hour <= 10) THEN "AMRush"
WHEN (pickup_hour >= 11 AND pickup_hour <= 15) THEN "Afternoon"
WHEN (pickup_hour >= 16 AND pickup_hour <= 19) THEN "PMRush"
END as TrafficTimeBins
FROM taxi_valid
"""
taxi_df_valid_with_newFeatures = spark.sql(sqlStatement)
## APPLY THE SAME TRANSFORMATION ON THIS DATA AS THE ORIGINAL TRAINING DATA
encodedFinalValid = Pipeline(stages=[sI1, sI2, sI3, sI4]).fit(taxi_df_train_with_newFeatures).transform(taxi_df_valid_with_newFeatures)
## LOAD SAVED MODEL, SCORE VALIDATION DATA, AND EVALUATE
savedModel = PipelineModel.load(CVDirfilename)
predictions = savedModel.transform(encodedFinalValid)
r2 = evaluator.evaluate(predictions)
print("R-squared on validation data = %g" % r2)
datestamp = datetime.datetime.now().strftime('%m-%d-%Y-%s');
fileName = "Predictions_CV_" + datestamp;
predictionfile = dataDir + fileName;
predictions.select("label","prediction").write.mode("overwrite").csv(predictionfile)
| 0.51562 | 0.966124 |
The [previous notebook](2-Pipeline.ipynb) showed all the steps required to get a Datashader rendering of your dataset, yielding raster images displayed using [Jupyter](http://jupyter.org)'s "rich display" support. However, these bare images do not show the data ranges or axis labels, making them difficult to interpret. Moreover, they are only static images, and datasets often need to be explored at multiple scales, which is much easier to do in an interactive program.
To get axes and interactivity, the images generated by Datashader need to be embedded into a plot using an external library like [Matplotlib](http://matplotlib.org) or [Bokeh](http://bokeh.pydata.org). As we illustrate below, the most convenient way to make Datashader plots using these libraries is via the [HoloViews](http://holoviews.org) high-level data-science API. Plotly also includes Datashader support, and native Datashader support for Matplotlib has been [sketched](https://github.com/bokeh/datashader/pull/200) but is not yet released.
In this notebook, we will first look at datashader's native Bokeh support, because it uses the same API introduced in the previous examples. We'll start with the same example from the [previous notebook](2-Pipeline.ipynb):
```
import pandas as pd
import numpy as np
import datashader as ds
import datashader.transfer_functions as tf
from collections import OrderedDict as odict
num=100000
np.random.seed(1)
dists = {cat: pd.DataFrame(odict([('x',np.random.normal(x,s,num)),
('y',np.random.normal(y,s,num)),
('val',val),
('cat',cat)]))
for x, y, s, val, cat in
[( 2, 2, 0.03, 10, "d1"),
( 2, -2, 0.10, 20, "d2"),
( -2, -2, 0.50, 30, "d3"),
( -2, 2, 1.00, 40, "d4"),
( 0, 0, 3.00, 50, "d5")] }
df = pd.concat(dists,ignore_index=True)
df["cat"]=df["cat"].astype("category")
```
Bokeh provides interactive plotting in a web browser. To make an interactive datashader plot when working with Bokeh directly, we'll first need to write a "callback" that wraps up the plotting steps shown in the previous notebook. A callback is a function that will render an image of the dataframe above when given some parameters:
```
def image_callback(x_range, y_range, w, h, name=None):
cvs = ds.Canvas(plot_width=w, plot_height=h, x_range=x_range, y_range=y_range)
agg = cvs.points(df, 'x', 'y', ds.count_cat('cat'))
img = tf.shade(agg)
return tf.dynspread(img, threshold=0.50, name=name)
```
As you can see, this callback is a function that lets us generate a Datashader image covering any range of data space that we want to examine:
```
tf.Images(image_callback(None, None, 300, 300, name="Original"),
image_callback(( 0, 4 ), ( 0, 4 ), 300, 300, name="Zoom 1"),
image_callback((1.9, 2.1), (1.9, 2.1), 300, 300, name="Zoom 2"))
```
You can now see that the single apparent "red dot" from the original image is actually a large collection of overlapping points (100,000, to be exact). However, you can also see that it would be awkward to explore a dataset using static images in this way, having to guess at numerical ranges as in the code above. Instead, let's make an interactive Bokeh plot using a convenience utility from Datashader called ``InteractiveImage``:
```
from datashader.bokeh_ext import InteractiveImage
import bokeh.plotting as bp
bp.output_notebook()
p = bp.figure(tools='pan,wheel_zoom,reset', x_range=(-5,5), y_range=(-5,5), plot_width=500, plot_height=500)
InteractiveImage(p, image_callback)
```
``InteractiveImage`` accepts any Bokeh figure and a callback that returns an image when given the range and pixel size. Now we can see the full axes corresponding to this data, and we can also zoom in using a scroll wheel (as long as the "wheel zoom" tool is enabled on the right) or pan by clicking and dragging (as long as the "pan" tool is enabled on the right). Each time you zoom or pan, the callback will be given the new viewport that's now visible, and datashader will render a new image to update the display. The result makes it look as if all of the data is available in the web browser interactively, while only ever storing a single image at any one time. In this way, full interactivity can be provided even for data that is far too large to display in a web browser directly. (Most web browsers can handle tens of thousands or hundreds of thousands of data points, but not millions or billions!)
***Note that you'll only see an updated image on zooming in if there is a live Python process running.*** Bokeh works by taking a Python specification for a plot and generating a corresponding JavaScript-based visualization in the browser. Whatever data has been given to the browser can be viewed interactively, but in this case only a single image of the data is given at a time, and so you will not be able to see more detail when zooming in unless the Python (and thus Datashader) process is running. In a static HTML export of this notebook, such as those on a website, you'll only see the original pixels getting larger, not a zoomed-in rendering as in the callback plots above.
``InteractiveImage`` lets you explore any Datashader pipeline you like, but unfortunately it only works in a Jupyter notebook (not a deployed Bokeh server), and it is not typically possible to combine such a plot with other Bokeh figures. The [dashboard.py](https://github.com/bokeh/datashader/blob/cb2f49f9/examples/dashboard/dashboard.py) from datashader 0.6 gives an example of building Bokeh+Datashader visualizations from the ground up, but this approach is quite difficult and is not recommended for most users. For these reasons, we do not recommend using InteractiveImage in new projects. Luckily, a much more practical approach to embedding and interactivity is available using HoloViews, as shown in the rest of this guide.
# Embedding Datashader with HoloViews
[HoloViews](http://holoviews.org) (1.7 and later) is a high-level data analysis and visualization library that makes it simple to generate interactive [Datashader](https://github.com/bokeh/datashader)-based plots. Here's an illustration of how this all fits together when using HoloViews+[Bokeh](http://bokeh.pydata.org):

HoloViews offers a data-centered approach for analysis, where the same tool can be used with small data (anything that fits in a web browser's memory, which can be visualized with Bokeh directly) and large data (which is first sent through Datashader to make it tractable), and with several different plotting frontends. A developer willing to do more programming can do all the same things separately, using Bokeh, Matplotlib, and Datashader's APIs directly, but with HoloViews it is much simpler to explore and analyze data. Of course, the [previous notebook](2-Pipeline.ipynb) showed that you can also use datashader without any plotting library at all (the light gray pathways above), but then you wouldn't have interactivity, axes, and so on.
Most of this notebook will focus on HoloViews+Bokeh to support full interactive plots in web browsers, but we will also briefly illustrate the non-interactive HoloViews+Matplotlib approach. Let's start by importing some parts of HoloViews and setting some defaults:
```
import holoviews as hv
import holoviews.operation.datashader as hd
hd.shade.cmap=["lightblue", "darkblue"]
hv.extension("bokeh", "matplotlib")
```
### HoloViews+Bokeh
Rather than starting out by specifying a figure or plot, in HoloViews you specify an [``Element``](http://holoviews.org/reference/index.html#elements) object to contain your data, such as `Points` for a collection of 2D x,y points. To start, let's define a Points object wrapping around a small dataframe with 10,000 random samples from the ``df`` above:
```
points = hv.Points(df.sample(10000))
points
```
As you can see, the ``points`` object visualizes itself as a Bokeh plot, where you can already see many of the [problems that motivate datashader](https://anaconda.org/jbednar/plotting_pitfalls) (overplotting of points, being unable to detect the closely spaced dense collections of points shown in red above, and so on). But this visualization is just the default representation of ``points``, using Jupyter's [rich display](https://anaconda.org/jbednar/rich_display) support; the actual ``points`` object itself is merely a data container:
```
points.data.head()
```
### HoloViews+Datashader+Matplotlib
The default visualizations in HoloViews work well for small datasets, but larger ones will have overplotting issues as are already visible above, and will eventually either overwhelm the web browser (for the Bokeh frontend) or take many minutes to plot (for the Matplotlib backend). Luckily, HoloViews provides support for using Datashader to handle both of these problems:
```
%%output backend="matplotlib"
agg = ds.Canvas().points(df,'x','y')
hd.datashade(points) + hd.shade(hv.Image(agg)) + hv.RGB(np.array(tf.shade(agg).to_pil()))
```
Here we asked HoloViews to plot ``df`` using Datashader+Matplotlib, in three different ways:
- **A**: HoloViews aggregates and shades an image directly from the ``points`` object using its own datashader support, then passes the image to Matplotlib to embed into an appropriate set of axes.
- **B**: HoloViews accepts a pre-computed datashader aggregate, reads out the metadata about the plot ranges that is stored in the aggregate array, and passes it to Matplotlib for colormapping and then embedding.
- **C**: HoloViews accepts a PIL image computed beforehand and passes it to Matplotlib for embedding.
As you can see, option A is the most convenient; you can simply wrap your HoloViews element with ``datashade`` and the rest will be taken care of. But if you want to have more control by computing the aggregate or the full RGB image yourself using the API from the [previous notebook](2-Pipeline.ipynb) you are welcome to do so while using HoloViews+Matplotlib (or HoloViews+Bokeh, below) to embed the result into labelled axes.
### HoloViews+Datashader+Bokeh
The Matplotlib interface only produces a static plot, i.e., a PNG or SVG image, but the [Bokeh](http://bokeh.pydata.org) interface of HoloViews adds the dynamic zooming and panning necessary to understand datasets across scales:
```
hd.datashade(points)
```
Here, ``hd.datashade`` is not just a function call; it is an "operation" that dynamically calls datashader every time a new plot is needed by Bokeh, without the need for any explicit callback functions. The above plot will automatically be interactive when using the Bokeh frontend to HoloViews, and datashader will be called on each zoom or pan event if you have a live Python process running.
The powerful feature of operations is that you can chain them to make expressions for complex interactive visualizations. For instance, here is a Bokeh plot that works like the one created by ``InteractiveImage`` at the start of this notebook:
```
datashaded = hd.datashade(points, aggregator=ds.count_cat('cat')).redim.range(x=(-5,5),y=(-5,5))
hd.dynspread(datashaded, threshold=0.50, how='over').opts(plot=dict(height=500,width=500))
```
Compared to using ``InteractiveImage``, the HoloViews approach is simpler for the most basic plots (e.g. ``hd.datashade(hv.Points(df))``) while allowing plots to be overlaid and laid out together very flexibly. You can read more about HoloViews support for Datashader at [holoviews.org](http://holoviews.org/user_guide/Large_Data.html).
### HoloViews+Datashader+Bokeh Legends
Because the underlying plotting library only ever sees an image when using Datashader, providing legends and keys has to be handled separately from any underlying support for those features in the plotting library. We are working to simplify this process, but for now you can show a categorical legend by adding a suitable collection of labeled dummy points:
```
from datashader.colors import Sets1to3
datashaded = hd.datashade(points, aggregator=ds.count_cat('cat'), color_key=Sets1to3)
gaussspread = hd.dynspread(datashaded, threshold=0.50, how='over').opts(plot=dict(height=400,width=400))
color_key = [(name,color) for name,color in zip(["d1","d2","d3","d4","d5"], Sets1to3)]
color_points = hv.NdOverlay({n: hv.Points([0,0], label=str(n)).opts(style=dict(color=c)) for n,c in color_key})
color_points * gaussspread
```
### HoloViews+Datashader+Bokeh Hover info
As you can see, converting the data to an image using Datashader makes it feasible to work with even very large datasets interactively. One unfortunate side effect is that the original datapoints and line segments can no longer be used to support "tooltips" or "hover" information directly; that data simply is not present at the browser level, and so the browser cannot unambiguously report information about any specific datapoint. Luckily, you can still provide hover information that reports properties of a subset of the data in a separate layer, or you can provide information for a spatial region of the plot rather than for specific datapoints. For instance, in some small rectangle you can provide statistics such as the mean, count, standard deviation, etc. E.g. here let's calculate the count for each small square region:
```
%%opts QuadMesh [tools=['hover']] (alpha=0 hover_alpha=0.2)
from holoviews.streams import RangeXY
pts = hd.datashade(points, width=400, height=400)
(pts * hv.QuadMesh(hd.aggregate(points, width=10, height=10, dynamic=False))).relabel("Fixed hover") + \
\
(pts * hv.util.Dynamic(hd.aggregate(points, width=10, height=10, streams=[RangeXY]),
operation=hv.QuadMesh)).relabel("Dynamic hover")
```
In the above examples, the plot on the left provides hover information at a fixed spatial scale, while the one on the right reports on an area that scales with the zoom level so that arbitrarily small regions of data space can be examined, which is generally more useful.
As you can see, HoloViews makes it just about as simple to work with Datashader-based plots as regular Bokeh plots (at least if you don't need hover or color keys!), letting you visualize data of any size interactively in a browser using just a few lines of code. Because Datashader-based HoloViews plots are just one or two extra steps added on to regular HoloViews plots, they support all of the same features as regular HoloViews objects, and can freely be laid out, overlaid, and nested together with them. See [holoviews.org](http://holoviews.org) for examples and documentation for how to control the appearance of these plots and how to work with them in general.
## HoloViews+Datashader+Panel
To interactively explore data in a dashboard, you can combine Panel with HoloViews and Datashader to create an interactive visualization that lets you toggle aggregation methods, edit colormaps, and generally interact with the data through widgets.
[](../dashboard.ipynb)
|
github_jupyter
|
import pandas as pd
import numpy as np
import datashader as ds
import datashader.transfer_functions as tf
from collections import OrderedDict as odict
num=100000
np.random.seed(1)
dists = {cat: pd.DataFrame(odict([('x',np.random.normal(x,s,num)),
('y',np.random.normal(y,s,num)),
('val',val),
('cat',cat)]))
for x, y, s, val, cat in
[( 2, 2, 0.03, 10, "d1"),
( 2, -2, 0.10, 20, "d2"),
( -2, -2, 0.50, 30, "d3"),
( -2, 2, 1.00, 40, "d4"),
( 0, 0, 3.00, 50, "d5")] }
df = pd.concat(dists,ignore_index=True)
df["cat"]=df["cat"].astype("category")
def image_callback(x_range, y_range, w, h, name=None):
cvs = ds.Canvas(plot_width=w, plot_height=h, x_range=x_range, y_range=y_range)
agg = cvs.points(df, 'x', 'y', ds.count_cat('cat'))
img = tf.shade(agg)
return tf.dynspread(img, threshold=0.50, name=name)
tf.Images(image_callback(None, None, 300, 300, name="Original"),
image_callback(( 0, 4 ), ( 0, 4 ), 300, 300, name="Zoom 1"),
image_callback((1.9, 2.1), (1.9, 2.1), 300, 300, name="Zoom 2"))
from datashader.bokeh_ext import InteractiveImage
import bokeh.plotting as bp
bp.output_notebook()
p = bp.figure(tools='pan,wheel_zoom,reset', x_range=(-5,5), y_range=(-5,5), plot_width=500, plot_height=500)
InteractiveImage(p, image_callback)
import holoviews as hv
import holoviews.operation.datashader as hd
hd.shade.cmap=["lightblue", "darkblue"]
hv.extension("bokeh", "matplotlib")
points = hv.Points(df.sample(10000))
points
points.data.head()
%%output backend="matplotlib"
agg = ds.Canvas().points(df,'x','y')
hd.datashade(points) + hd.shade(hv.Image(agg)) + hv.RGB(np.array(tf.shade(agg).to_pil()))
hd.datashade(points)
datashaded = hd.datashade(points, aggregator=ds.count_cat('cat')).redim.range(x=(-5,5),y=(-5,5))
hd.dynspread(datashaded, threshold=0.50, how='over').opts(plot=dict(height=500,width=500))
from datashader.colors import Sets1to3
datashaded = hd.datashade(points, aggregator=ds.count_cat('cat'), color_key=Sets1to3)
gaussspread = hd.dynspread(datashaded, threshold=0.50, how='over').opts(plot=dict(height=400,width=400))
color_key = [(name,color) for name,color in zip(["d1","d2","d3","d4","d5"], Sets1to3)]
color_points = hv.NdOverlay({n: hv.Points([0,0], label=str(n)).opts(style=dict(color=c)) for n,c in color_key})
color_points * gaussspread
%%opts QuadMesh [tools=['hover']] (alpha=0 hover_alpha=0.2)
from holoviews.streams import RangeXY
pts = hd.datashade(points, width=400, height=400)
(pts * hv.QuadMesh(hd.aggregate(points, width=10, height=10, dynamic=False))).relabel("Fixed hover") + \
\
(pts * hv.util.Dynamic(hd.aggregate(points, width=10, height=10, streams=[RangeXY]),
operation=hv.QuadMesh)).relabel("Dynamic hover")
| 0.614972 | 0.982288 |
# Tutorial
Import necessary packages
```
import parallelPermutationTest as ppt
import numpy as np
import pandas as pd
```
# Synthetic data: Integer data
The permutation test only works on integer data, so when one has an integer dataset, one does not have to pre-process the data with a binning procedure. Hence, one can use the ppt.GreenIntCuda method directly.
Let's construct a synthetic dataset with integers ranging from 0 to 500, and with sample sizes of 500 elements.
```
n_samples = 1
n = m = 500
data = lambda n,n_samples : np.asarray([np.random.randint(0,n,n,dtype=np.int32) for _ in range(n_samples)])
np.random.seed(1)
A,B = data(n,1), data(n,1)
```
The shift algorithm implemented in the R package Coin (https://cran.r-project.org/web/packages/coin/index.html) is probably the fastest version of the permutation test available today. We have implemented a slightly sped-up version of their algorithm in Python.
```
%time p_shift = ppt.CoinShiftInt(A,B)
```
Green's algorithm is a slight variation of the shift algorithm. Unfortunately, on a single thread it is slower than the shift algorithm. However, it has the perk of being parallelizable. Let's first check the single-threaded version.
```
%time p_green = ppt.GreenInt(A,B)
```
So when only a single thread is available, we would recommend using ppt.CoinShiftInt rather than ppt.GreenInt. However, when one has several threads, Green's algorithm starts to shine. Let us take a look at a multithreaded version of Green's algorithm. Here we are using an Intel(R) Core(TM) i7-9700K CPU @ 3.60GHz, which has eight threads.
```
%time p_green_mt = ppt.GreenIntMultiThread(A,B)
```
Quite a large speed-up, but still only on par with ppt.CoinShiftInt. Let us use more threads! We can use a GPU for this. Here we use a GeForce RTX 2070.
```
%time p_green_gpu = ppt.GreenIntCuda(A,B)
```
Great! We have improved the run-time roughly fivefold compared to ppt.CoinShiftInt.
Let us ensure that they all yield the same result.
```
np.allclose(p_shift, p_green, p_green_mt, p_green_gpu)
```
# Real data: Finance
This example is based on https://www.datacamp.com/community/tutorials/stocks-significance-testing-p-hacking.
### The claim is: "Over the past 32 years, October has been the most volatile month on average for the S&P500 and December the least volatile".
### Let's check if this is statistically significant.
We have to download and pre-process the data.
```
#Daily S&P500 data from 1986==>
url = "https://raw.githubusercontent.com/Patrick-David/Stocks_Significance_PHacking/master/spx.csv"
df = pd.read_csv(url,index_col='date', parse_dates=True)
#To model returns we will use daily % change
daily_ret = df['close'].pct_change()
#drop the 1st value - nan
daily_ret.dropna(inplace=True)
mnthly_annu = daily_ret.resample('M').std()* np.sqrt(12)
dec_vol = mnthly_annu[mnthly_annu.index.month==12]
rest_vol = mnthly_annu[mnthly_annu.index.month!=12]
dec_vol.head(2)
(dec_vol.values.shape, rest_vol.values.shape)
```
Here we have float data, i.e., real values, so we cannot use ppt.GreenIntCuda. We have to pre-process the data with a binning procedure. Let us take 500 bins: this procedure maps every value into one of 500 integer bins.
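Conceptually, the binning step maps each floating-point value to an integer bin index. A rough illustration of such a mapping (this is not the library's internal code; the shared bin edges and the use of `np.digitize` are assumptions) looks like this:
```
import numpy as np

def to_integer_bins(a, b, n_bins):
    # Shared bin edges over both samples keep the two groups on the same integer scale
    edges = np.linspace(min(a.min(), b.min()), max(a.max(), b.max()), n_bins + 1)
    return np.digitize(a, edges[1:-1]), np.digitize(b, edges[1:-1])

dec_int, rest_int = to_integer_bins(dec_vol.values, rest_vol.values, 500)
print(dec_int.min(), dec_int.max())  # integer bin indices in the range [0, 499]
```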
```
n_bins = 500
%time p = ppt.GreenFloatCuda(dec_vol.values, rest_vol.values, n_bins)
p
```
The claim that December is the least volatile month does not appear to be statistically significant.
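The other half of the claim concerns October being the most volatile month; a sketch of the analogous test (not in the original tutorial) would be:
```
# Compare October volatility against all other months, using the same binning settings as above.
# Note this tests whether October differs from the rest, as was done for December.
oct_vol = mnthly_annu[mnthly_annu.index.month == 10]
rest_oct = mnthly_annu[mnthly_annu.index.month != 10]
p_oct = ppt.GreenFloatCuda(oct_vol.values, rest_oct.values, 500)
print(p_oct)
```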
# Real data: Biomedical data
### We want to see if there are any significant genes in breast cancer patients.
Let us import the pre-processed data from the Experiment 6 notebook.
```
NotTNP_df = pd.read_csv("experiment_data/experiment6/notTNPdf")
TNP_df = pd.read_csv("experiment_data/experiment6/TNPdf")
(TNP_df.shape, NotTNP_df.shape)
TNP_df.head(2)
```
This dataset is quite large (8051 experiments) with floating-point values. Let us take 100 bins. However, we have to be careful not to overload the GPU, so let us make a memory check first.
```
n_bins = 100
ppt.GreenFloatCuda_memcheck(TNP_df.values, NotTNP_df.values, n_bins)
```
We need to divide our data into batches.
```
batch_size = int(TNP_df.shape[0] / 4)
%time p_values = ppt.GreenFloatCuda(TNP_df.values, NotTNP_df.values, 100, batch_size=batch_size)
```
|
github_jupyter
|
import parallelPermutationTest as ppt
import numpy as np
import pandas as pd
n_samples = 1
n = m = 500
data = lambda n,n_samples : np.asarray([np.random.randint(0,n,n,dtype=np.int32) for _ in range(n_samples)])
np.random.seed(1)
A,B = data(n,1), data(n,1)
%time p_shift = ppt.CoinShiftInt(A,B)
%time p_green = ppt.GreenInt(A,B)
%time p_green_mt = ppt.GreenIntMultiThread(A,B)
%time p_green_gpu = ppt.GreenIntCuda(A,B)
np.allclose(p_shift, p_green, p_green_mt, p_green_gpu)
#Daily S&P500 data from 1986==>
url = "https://raw.githubusercontent.com/Patrick-David/Stocks_Significance_PHacking/master/spx.csv"
df = pd.read_csv(url,index_col='date', parse_dates=True)
#To model returns we will use daily % change
daily_ret = df['close'].pct_change()
#drop the 1st value - nan
daily_ret.dropna(inplace=True)
mnthly_annu = daily_ret.resample('M').std()* np.sqrt(12)
dec_vol = mnthly_annu[mnthly_annu.index.month==12]
rest_vol = mnthly_annu[mnthly_annu.index.month!=12]
dec_vol.head(2)
(dec_vol.values.shape, rest_vol.values.shape)
n_bins = 500
%time p = ppt.GreenFloatCuda(dec_vol.values, rest_vol.values, n_bins)
p
NotTNP_df = pd.read_csv("experiment_data/experiment6/notTNPdf")
TNP_df = pd.read_csv("experiment_data/experiment6/TNPdf")
(TNP_df.shape, NotTNP_df.shape)
TNP_df.head(2)
n_bins = 100
ppt.GreenFloatCuda_memcheck(TNP_df.values, NotTNP_df.values, n_bins)
batch_size = int(TNP_df.shape[0] / 4)
%time p_values = ppt.GreenFloatCuda(TNP_df.values, NotTNP_df.values, 100, batch_size=batch_size)
| 0.21158 | 0.974653 |
<a href="https://colab.research.google.com/github/maiormarso/DS-Unit-2-Linear-Models/blob/master/module3-ridge-regression/DS_213_assignment_ridge_regression_classification_3.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Lambda School Data Science
*Unit 2, Sprint 1, Module 3*
---
# Ridge Regression
## Assignment
We're going back to our other **New York City** real estate dataset. Instead of predicting apartment rents, you'll predict property sales prices.
But not just for condos in Tribeca...
Instead, predict property sales prices for **One Family Dwellings** (`BUILDING_CLASS_CATEGORY` == `'01 ONE FAMILY DWELLINGS'`).
Use a subset of the data where the **sale price was more than \\$100 thousand and less than $2 million.**
The [NYC Department of Finance](https://www1.nyc.gov/site/finance/taxes/property-rolling-sales-data.page) has a glossary of property sales terms and NYC Building Class Code Descriptions. The data comes from the [NYC OpenData](https://data.cityofnewyork.us/browse?q=NYC%20calendar%20sales) portal.
- [ ] Do train/test split. Use data from January – March 2019 to train. Use data from April 2019 to test.
- [ ] Do one-hot encoding of categorical features.
- [ ] Do feature selection with `SelectKBest`.
- [ ] Do [feature scaling](https://scikit-learn.org/stable/modules/preprocessing.html).
- [ ] Fit a ridge regression model with multiple features.
- [ ] Get mean absolute error for the test set.
- [ ] As always, commit your notebook to your fork of the GitHub repo.
## Stretch Goals
- [ ] Add your own stretch goal(s) !
- [ ] Instead of `Ridge`, try `LinearRegression`. Depending on how many features you select, your errors will probably blow up!
- [ ] Instead of `Ridge`, try [`RidgeCV`](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.RidgeCV.html).
- [ ] Learn more about feature selection:
- ["Permutation importance"](https://www.kaggle.com/dansbecker/permutation-importance)
- [scikit-learn's User Guide for Feature Selection](https://scikit-learn.org/stable/modules/feature_selection.html)
- [mlxtend](http://rasbt.github.io/mlxtend/) library
- scikit-learn-contrib libraries: [boruta_py](https://github.com/scikit-learn-contrib/boruta_py) & [stability-selection](https://github.com/scikit-learn-contrib/stability-selection)
- [_Feature Engineering and Selection_](http://www.feat.engineering/) by Kuhn & Johnson.
- [ ] Try [statsmodels](https://www.statsmodels.org/stable/index.html) if you're interested in a more inferential statistical approach to linear regression and feature selection, looking at p values and 95% confidence intervals for the coefficients.
- [ ] Read [_An Introduction to Statistical Learning_](http://faculty.marshall.usc.edu/gareth-james/ISL/ISLR%20Seventh%20Printing.pdf), Chapters 1-3, for more math & theory, but in an accessible, readable way.
- [ ] Try [scikit-learn pipelines](https://scikit-learn.org/stable/modules/compose.html).
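For the scikit-learn pipelines stretch goal, a minimal sketch (not required for the assignment; the choices of `k` and `alpha` here are arbitrary) could chain the encoder, scaler, selector, and model into one estimator:
```
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_regression
from sklearn.linear_model import Ridge
import category_encoders as ce

# One estimator that encodes, scales, selects features, and fits Ridge in a single fit/predict call
pipeline = make_pipeline(
    ce.OneHotEncoder(use_cat_names=True),
    StandardScaler(),
    SelectKBest(f_regression, k=15),   # k must not exceed the number of encoded features
    Ridge(alpha=1.0)
)
# After the train/test split below: pipeline.fit(X_train, y_train); pipeline.predict(X_test)
```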
```
%%capture
import sys
# If you're on Colab:
if 'google.colab' in sys.modules:
DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Applied-Modeling/master/data/'
!pip install category_encoders==2.*
# If you're working locally:
else:
DATA_PATH = '../data/'
# Ignore this Numpy warning when using Plotly Express:
# FutureWarning: Method .ptp is deprecated and will be removed in a future version. Use numpy.ptp instead.
import warnings
warnings.filterwarnings(action='ignore', category=FutureWarning, module='numpy')
import pandas as pd
import pandas_profiling
# Read New York City property sales data
df1 = pd.read_csv(DATA_PATH+'condos/NYC_Citywide_Rolling_Calendar_Sales.csv')
# Change column names: replace spaces with underscores
df1.columns = [col.replace(' ', '_') for col in df1]
# SALE_PRICE was read as strings.
# Remove symbols, convert to integer
df1['SALE_PRICE'] = (
df1['SALE_PRICE']
.str.replace('$','')
.str.replace('-','')
.str.replace(',','')
.astype(int)
)
# BOROUGH is a numeric column, but arguably should be a categorical feature,
# so convert it from a number to a string
df1['BOROUGH'] = df1['BOROUGH'].astype(str)
# Reduce cardinality for NEIGHBORHOOD feature
# Get a list of the top 10 neighborhoods
top10 = df1['NEIGHBORHOOD'].value_counts()[:10].index
# At locations where the neighborhood is NOT in the top 10,
# replace the neighborhood with 'OTHER'
df1.loc[~df1['NEIGHBORHOOD'].isin(top10), 'NEIGHBORHOOD'] = 'OTHER'
df1.head(1)
# Subset to One Family Dwellings with sale price between $100 thousand and $2 million
df = df1.loc[df1['BUILDING_CLASS_CATEGORY'] == '01 ONE FAMILY DWELLINGS']
df = df.loc[(df['SALE_PRICE'] >= 100000) & (df['SALE_PRICE'] <= 2000000)]
# Do train/test split
# Use data from January - March 2019 to train
# Use data from April 2019 to test
df['SALE_DATE'] = pd.to_datetime(df['SALE_DATE'], infer_datetime_format=True)
cutoff = pd.to_datetime('04-01-2019')
train = df[df.SALE_DATE < cutoff]
test = df[df.SALE_DATE >= cutoff]
df.head(1)
```
#Do train/test split. Use data from January – March 2019 to train. Use data from April 2019 to test.
```
train.shape, test.shape
train.head()
%matplotlib inline
import matplotlib.pyplot as plt
import matplotlib
matplotlib.style.use('ggplot')
import seaborn as sns
for col in sorted(train.columns):
if train[col].nunique() < 10:
try:
sns.catplot(x=col, y='SALE_PRICE', data=train, kind='bar', color='grey')
plt.show()
except:
pass
numeric = train.select_dtypes('number')
for col in sorted(numeric.columns):
sns.lmplot(x=col, y='SALE_PRICE', data=train, scatter_kws=dict(alpha=0.05))
plt.show()
train.describe(exclude='number').T.sort_values(by='unique')
target = 'SALE_PRICE'
numerics = train.select_dtypes(include='number').columns.drop(target).tolist()
categoricals = train.select_dtypes(exclude='number').columns.tolist()
low_cardinality_categoricals = [col for col in categoricals
if train[col].nunique() < 10]
features = numerics + low_cardinality_categoricals
X_train = train[features]
y_train = train[target]
X_test = test[features]
y_test = test[target]
```
# Do one-hot encoding of categorical features.
```
import category_encoders as ce
encoder = ce.OneHotEncoder(use_cat_names=True)
X_train_encoded = encoder.fit_transform(X_train)
X_test_encoded = encoder.transform(X_test)
X_train_encoded.head(1)
X_train_encoded = X_train_encoded.drop(columns = 'EASE-MENT')
X_test_encoded = X_test_encoded.drop(columns = 'EASE-MENT')
train.head()
```
##Do feature selection with SelectKBest.
##Do feature scaling.
##Fit a ridge regression model with multiple features.
##Get mean absolute error for the test set.
```
from sklearn.feature_selection import f_regression, SelectKBest
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_absolute_error
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train_encoded)
X_test_scaled = scaler.transform(X_test_encoded)
for k in range(1, len(X_train_encoded.columns)+1):
print(f'{k} features')
selector = SelectKBest(score_func=f_regression, k=k)
X_train_selected = selector.fit_transform(X_train_scaled, y_train)
X_test_selected = selector.transform(X_test_scaled)
model = Ridge()
model.fit(X_train_selected, y_train)
y_pred = model.predict(X_test_selected)
mae = mean_absolute_error(y_test, y_pred)
print(f'Test MAE: ${mae:,.0f} \n')
```
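As suggested in the stretch goals, `RidgeCV` can choose the regularization strength by cross-validation. A minimal sketch using the scaled features from above (the alpha grid is an arbitrary assumption):
```
from sklearn.linear_model import RidgeCV
from sklearn.metrics import mean_absolute_error

ridge_cv = RidgeCV(alphas=[0.01, 0.1, 1.0, 10.0, 100.0])  # alpha grid is an assumption
ridge_cv.fit(X_train_scaled, y_train)
y_pred = ridge_cv.predict(X_test_scaled)
print('Chosen alpha:', ridge_cv.alpha_)
print(f'Test MAE: ${mean_absolute_error(y_test, y_pred):,.0f}')
```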
|
github_jupyter
|
%%capture
import sys
# If you're on Colab:
if 'google.colab' in sys.modules:
DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Applied-Modeling/master/data/'
!pip install category_encoders==2.*
# If you're working locally:
else:
DATA_PATH = '../data/'
# Ignore this Numpy warning when using Plotly Express:
# FutureWarning: Method .ptp is deprecated and will be removed in a future version. Use numpy.ptp instead.
import warnings
warnings.filterwarnings(action='ignore', category=FutureWarning, module='numpy')
import pandas as pd
import pandas_profiling
# Read New York City property sales data
df1 = pd.read_csv(DATA_PATH+'condos/NYC_Citywide_Rolling_Calendar_Sales.csv')
# Change column names: replace spaces with underscores
df1.columns = [col.replace(' ', '_') for col in df1]
# SALE_PRICE was read as strings.
# Remove symbols, convert to integer
df1['SALE_PRICE'] = (
df1['SALE_PRICE']
.str.replace('$','')
.str.replace('-','')
.str.replace(',','')
.astype(int)
)
# BOROUGH is a numeric column, but arguably should be a categorical feature,
# so convert it from a number to a string
df1['BOROUGH'] = df1['BOROUGH'].astype(str)
# Reduce cardinality for NEIGHBORHOOD feature
# Get a list of the top 10 neighborhoods
top10 = df1['NEIGHBORHOOD'].value_counts()[:10].index
# At locations where the neighborhood is NOT in the top 10,
# replace the neighborhood with 'OTHER'
df1.loc[~df1['NEIGHBORHOOD'].isin(top10), 'NEIGHBORHOOD'] = 'OTHER'
df1.head(1)
# Subset to One Family Dwellings with sale price between $100 thousand and $2 million
df = df1.loc[df1['BUILDING_CLASS_CATEGORY'] == '01 ONE FAMILY DWELLINGS']
df = df.loc[(df['SALE_PRICE'] >= 100000) & (df['SALE_PRICE'] <= 2000000)]
# Do train/test split
# Use data from January - March 2019 to train
# Use data from April 2019 to test
df['SALE_DATE'] = pd.to_datetime(df['SALE_DATE'], infer_datetime_format=True)
cutoff = pd.to_datetime('04-01-2019')
train = df[df.SALE_DATE < cutoff]
test = df[df.SALE_DATE >= cutoff]
df.head(1)
train.shape, test.shape
train.head()
%matplotlib inline
import matplotlib.pyplot as plt
import matplotlib
matplotlib.style.use('ggplot')
import seaborn as sns
for col in sorted(train.columns):
if train[col].nunique() < 10:
try:
sns.catplot(x=col, y='SALE_PRICE', data=train, kind='bar', color='grey')
plt.show()
except:
pass
numeric = train.select_dtypes('number')
for col in sorted(numeric.columns):
sns.lmplot(x=col, y='SALE_PRICE', data=train, scatter_kws=dict(alpha=0.05))
plt.show()
train.describe(exclude='number').T.sort_values(by='unique')
target = 'SALE_PRICE'
numerics = train.select_dtypes(include='number').columns.drop(target).tolist()
categoricals = train.select_dtypes(exclude='number').columns.tolist()
low_cardinality_categoricals = [col for col in categoricals
if train[col].nunique() < 10]
features = numerics + low_cardinality_categoricals
X_train = train[features]
y_train = train[target]
X_test = test[features]
y_test = test[target]
import category_encoders as ce
encoder = ce.OneHotEncoder(use_cat_names=True)
X_train_encoded = encoder.fit_transform(X_train)
X_test_encoded = encoder.transform(X_test)
X_train_encoded.head(1)
X_train_encoded = X_train_encoded.drop(columns = 'EASE-MENT')
X_test_encoded = X_test_encoded.drop(columns = 'EASE-MENT')
train.head()
from sklearn.feature_selection import f_regression, SelectKBest
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_absolute_error
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train_encoded)
X_test_scaled = scaler.transform(X_test_encoded)
for k in range(1, len(X_train_encoded.columns)+1):
print(f'{k} features')
selector = SelectKBest(score_func=f_regression, k=k)
X_train_selected = selector.fit_transform(X_train_scaled, y_train)
X_test_selected = selector.transform(X_test_scaled)
model = Ridge()
model.fit(X_train_selected, y_train)
y_pred = model.predict(X_test_selected)
mae = mean_absolute_error(y_test, y_pred)
print(f'Test MAE: ${mae:,.0f} \n')
| 0.455199 | 0.943764 |
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import jax.numpy as jnp
```
Here, we use the mock spectrum generated in the "Forward modeling" tutorial.
```
dat=pd.read_csv("spectrum.txt",delimiter=",",names=("wav","flux"))
```
We add Gaussian noise to the data. nusd is the observing wavenumber grid.
```
wavd=dat["wav"].values
flux=dat["flux"].values
nusd=jnp.array(1.e8/wavd[::-1])
sigmain=0.05
norm=40000
nflux=flux/norm+np.random.normal(0,sigmain,len(wavd))
plt.plot(wavd[::-1],nflux)
from exojax.spec.lpf import xsmatrix
from exojax.spec.exomol import gamma_exomol
from exojax.spec.hitran import SijT, doppler_sigma, gamma_natural, gamma_hitran
from exojax.spec.hitrancia import read_cia, logacia
from exojax.spec.rtransfer import rtrun, dtauM, dtauCIA, nugrid
from exojax.spec import planck, response,xsvector
from exojax.spec import molinfo
from exojax.utils.constants import RJ, pc, Rs, c
```
The model is almost the same as in the forward modeling, but here we will infer Rp, RV, MMR_CO, T0, alpha, and Vsini.
```
from exojax.spec import rtransfer as rt
NP=100
Parr, dParr, k=rt.pressure_layer(NP=NP)
Nx=1500
nus,wav,res=nugrid(np.min(wavd)-5.0,np.max(wavd)+5.0,Nx,unit="AA")
R=100000.
beta=c/(2.0*np.sqrt(2.0*np.log(2.0))*R)
molmassCO=molinfo.molmass("CO")
mmw=2.33 #mean molecular weight
mmrH2=0.74
molmassH2=molinfo.molmass("H2")
vmrH2=(mmrH2*mmw/molmassH2) #VMR
Mp = 33.2 #fixing mass...
```
Loading the molecular database of CO and the CIA
```
from exojax.spec import moldb, contdb
mdbCO=moldb.MdbExomol('.database/CO/12C-16O/Li2015',nus,crit=1.e-46)
cdbH2H2=contdb.CdbCIA('.database/H2-H2_2011.cia',nus)
```
We have only 39 CO lines.
```
plt.plot(mdbCO.nu_lines,mdbCO.Sij0,".")
```
Again, numatrix should be precomputed prior to HMC-NUTS.
```
from exojax.spec import make_numatrix0
numatrix_CO=make_numatrix0(nus,mdbCO.nu_lines)
#reference pressure for a T-P model
Pref=1.0 #bar
ONEARR=np.ones_like(Parr)
ONEWAV=jnp.ones_like(nflux)
import jax.numpy as jnp
from jax import random
from jax import vmap, jit
import numpyro.distributions as dist
import numpyro
from numpyro.infer import MCMC, NUTS
from numpyro.infer import Predictive
from numpyro.diagnostics import hpdi
```
Now we write the model, which is used in HMC-NUTS.
```
def model_c(nu1,y1):
Rp = numpyro.sample('Rp', dist.Uniform(0.4,1.2))
RV = numpyro.sample('RV', dist.Uniform(5.0,15.0))
MMR_CO = numpyro.sample('MMR_CO', dist.Uniform(0.0,0.015))
T0 = numpyro.sample('T0', dist.Uniform(1000.0,1500.0))
alpha=numpyro.sample('alpha', dist.Uniform(0.05,0.2))
vsini = numpyro.sample('vsini', dist.Uniform(15.0,25.0))
g=2478.57730044555*Mp/Rp**2 #gravity
u1=0.0
u2=0.0
#T-P model//
Tarr = T0*(Parr/Pref)**alpha
#line computation CO
qt_CO=vmap(mdbCO.qr_interp)(Tarr)
def obyo(y,tag,nusd,nus,numatrix_CO,mdbCO,cdbH2H2):
#CO
SijM_CO=jit(vmap(SijT,(0,None,None,None,0)))\
(Tarr,mdbCO.logsij0,mdbCO.dev_nu_lines,mdbCO.elower,qt_CO)
gammaLMP_CO = jit(vmap(gamma_exomol,(0,0,None,None)))\
(Parr,Tarr,mdbCO.n_Texp,mdbCO.alpha_ref)
gammaLMN_CO=gamma_natural(mdbCO.A)
gammaLM_CO=gammaLMP_CO+gammaLMN_CO[None,:]
sigmaDM_CO=jit(vmap(doppler_sigma,(None,0,None)))\
(mdbCO.dev_nu_lines,Tarr,molmassCO)
xsm_CO=xsmatrix(numatrix_CO,sigmaDM_CO,gammaLM_CO,SijM_CO)
dtaumCO=dtauM(dParr,xsm_CO,MMR_CO*ONEARR,molmassCO,g)
#CIA
dtaucH2H2=dtauCIA(nus,Tarr,Parr,dParr,vmrH2,vmrH2,\
mmw,g,cdbH2H2.nucia,cdbH2H2.tcia,cdbH2H2.logac)
dtau=dtaumCO+dtaucH2H2
sourcef = planck.piBarr(Tarr,nus)
F0=rtrun(dtau,sourcef)/norm
Frot=response.rigidrot(nus,F0,vsini,u1,u2)
mu=response.ipgauss_sampling(nusd,nus,Frot,beta,RV)
numpyro.sample(tag, dist.Normal(mu, sigmain), obs=y)
obyo(y1,"y1",nu1,nus,numatrix_CO,mdbCO,cdbH2H2)
```
Run HMC-NUTS. It took ~30 min using my gaming laptop (GTX 1080 Max-Q). Here the number of warmup steps is only 300 and the number of samples only 600, because the draft will be released on arXiv very soon (the morning of June 1st 2021 in JST!).
```
rng_key = random.PRNGKey(0)
rng_key, rng_key_ = random.split(rng_key)
num_warmup, num_samples = 300, 600
kernel = NUTS(model_c,forward_mode_differentiation=True)
mcmc = MCMC(kernel, num_warmup, num_samples)
mcmc.run(rng_key_, nu1=nusd, y1=nflux)
```
Plotting a prediction and 90% area with the data... looks good.
```
posterior_sample = mcmc.get_samples()
pred = Predictive(model_c,posterior_sample,return_sites=["y1"])
predictions = pred(rng_key_,nu1=nusd,y1=None)
median_mu1 = jnp.median(predictions["y1"],axis=0)
hpdi_mu1 = hpdi(predictions["y1"], 0.9)
fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(20,6.0))
ax.plot(wavd[::-1],median_mu1,color="C0")
ax.plot(wavd[::-1],nflux,"+",color="black",label="data")
ax.fill_between(wavd[::-1], hpdi_mu1[0], hpdi_mu1[1], alpha=0.3, interpolate=True,color="C0",label="90% area")
plt.xlabel("wavelength ($\AA$)",fontsize=16)
plt.legend(fontsize=16)
plt.tick_params(labelsize=16)
```
For the above reasons, I have not checked these results as carefully as I should.
ArviZ is useful for visualizing the corner plot. Ah, the prior ranges look too narrow for some parameters, but I have no time to rerun it. Try changing the priors and running HMC-NUTS again! The rest is up to you.
```
import arviz
pararr=["Rp","T0","alpha","MMR_CO","vsini","RV"]
arviz.plot_pair(arviz.from_numpyro(mcmc),kind='kde',divergences=False,marginals=True)
plt.show()
```
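A quick numerical check of the chains (not in the original notebook) is to print the posterior summary, which reports the posterior means, credible intervals, effective sample sizes, and R-hat diagnostics for each parameter:
```
# Posterior summary from NumPyro's MCMC object
mcmc.print_summary()
```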
To fit a real spectrum, we may need a more carefully considered model and a better GPU, such as a V100 or A100. Read the next section for details.
# Stateful Functors
Often one will want to perform transformations that keep track of some state. Here we show two ways of doing this. The first (and preferred) method uses a Python class to define the functor object, which is initialized on the local process. The second uses the PipeSystem API to define a graph with a looping topology, where the state is stored in the Streams instead of in the functor object.
## Simple Moving Average
Here we demonstrate how to use Class functors. All pipe segments may use Class functors as long as the Class has a `run` method implemented. The `run` method plays the role of the standard functor, operating on each data chunk in the stream. Additionally the user may define two other methods `local_init` and `local_term`, which are executed once on the local process on initialization and termination respectively.
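Before the full example below, here is a minimal sketch of that interface (the class name and the counting logic are purely illustrative, not part of minipipe itself):
```
class chunk_counter:
    def local_init(self):
        # Runs once on the local process when the segment starts
        self.count = 0
    def run(self, data):
        # Called for every chunk arriving on the upstream Stream
        self.count += 1
        return data
    def local_term(self):
        # Runs once on the local process at termination
        print('Processed %d chunks' % self.count)
```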
The following example shows how to use a Source and Sink with Class functors. First a random walk is generated from random data, where the location of the previous step is persisted on the local process. Then a moving average is calculated with a queue persisted on the Sink process. Normally only the data in the queue needs to be persisted but for demonstration purposes we persist all values and moving averages to be plotted after the pipeline has terminated.
```
import minipipe as mp
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
class generate_rand_walk:
def __init__(self, n_steps=100):
        self.n_steps = n_steps
def local_init(self):
self.last_step = 0
def run(self):
for _ in range(self.n_steps):
step = np.random.randn() + self.last_step
self.last_step = step
yield step
class simple_moving_average:
def __init__(self, window_size = 10):
self.window_size = window_size
def local_init(self):
# This method initializes on the local process
self.queue = []
self.values = []
self.means = []
def run(self, data):
# Enqueue data
self.queue.append(data)
# Dequeue data once window size has been reached
if len(self.queue) > self.window_size:
self.queue.pop(0)
        # Calculate moving average
ma = sum(self.queue)/len(self.queue)
        # Save values and moving averages for plotting at termination
        # Normally you wouldn't save these values because the data volume may cause an OOM error
self.values.append(data)
self.means.append(ma)
def local_term(self):
# This method runs once on the local process at termination
# Here we simply plot the results
steps = range(len(self.values))
plt.plot(steps, self.values, label='Random Walk')
plt.plot(steps, self.means, label='Smoothed Walk')
plt.title('Simple Moving Average')
plt.xlabel('Steps')
plt.legend()
        plt.show() # This is necessary since the plot is on the local process
rw = generate_rand_walk(100)
sma = simple_moving_average(10)
pline = mp.PipeLine()
pline.add(mp.Source(rw, name='random_walk'))
pline.add(mp.Sink(sma, name='moving_avg'))
pline.build()
pline.diagram()
pline.run()
pline.close()
```
## Fibonacci Sequence with Loops
With the PipeSystem API it's possible to build graphs with loops. Loops can be used to store state in a Stream by passing data back to an upstream. Here's a fun (and admittedly useless) example: calculating the Fibonacci sequence with a stateless functor.
```
from multiprocessing import Event
# minipipe uses multiprocessing Events for termination flags
term_flag = Event()
n = 1000 # max fib number
def fib(x_1, x_2):
print(x_1)
# terminate when n is reached
if x_2 >= n:
term_flag.set()
return x_2, x_1 + x_2
# initialize streams
s1 = mp.Stream()
s2 = mp.Stream()
# initialize streams instead of using a Source
s1.q.put(0)
s2.q.put(1)
p = mp.Transform(fib, 'fib', upstreams=[s1, s2], downstreams=[s1, s2])
p.set_term_flag(term_flag) # term flag needs to be set explicitly
psys = mp.PipeSystem([p])
psys.build()
psys.diagram(draw_streams=True)
psys.run()
psys.close()
```
# Deploying Various MNIST Models on Kubernetes
Using:
* kubeflow
* seldon-core
Follow the main README to set up kubeflow and seldon-core. This notebook will show various rolling deployments of the trained models:
* Single model
* AB Test between 2 models
* Multi-Armed Bandit over 3 models
### Dependencies
* Tensorflow
* grpcio package
# Setup
Set kubectl to use the namespace where you installed kubeflow and seldon. In the README it is kubeflow.
```
!kubectl config set-context $(kubectl config current-context) --namespace=kubeflow
!make create_protos
!python -m grpc.tools.protoc -I. --python_out=. --grpc_python_out=. ./proto/prediction.proto
%matplotlib inline
import utils
from visualizer import get_graph
mnist = utils.download_mnist()
```
**Ensure you have port forwarded the ambassador reverse proxy**
```bash
kubectl port-forward $(kubectl get pods -n kubeflow -l service=ambassador -o jsonpath='{.items[0].metadata.name}') -n kubeflow 8002:80
```
# Deploy Single Tensorflow Model
```
get_graph("../k8s_serving/serving_model.json",'r')
!pygmentize ../k8s_serving/serving_model.json
!kubectl apply -f ../k8s_serving/serving_model.json
!kubectl get seldondeployments mnist-classifier -o jsonpath='{.status}'
utils.predict_rest_mnist(mnist)
utils.predict_grpc_mnist(mnist)
```
# Start load test
```
!kubectl label nodes $(kubectl get nodes -o jsonpath='{.items[0].metadata.name}') role=locust
!helm install seldon-core-loadtesting --name loadtest \
--namespace kubeflow \
--repo https://storage.googleapis.com/seldon-charts \
--set locust.script=mnist_rest_locust.py \
--set locust.host=http://mnist-classifier:8000 \
--set oauth.enabled=false \
--set oauth.key=oauth-key \
--set oauth.secret=oauth-secret \
--set locust.hatchRate=1 \
--set locust.clients=1 \
--set loadtest.sendFeedback=1 \
--set locust.minWait=0 \
--set locust.maxWait=0 \
--set replicaCount=1 \
--set data.size=784
```
# Rolling update to AB Test
Run an AB Test between 2 models:
* Tensorflow neural network model
* Scikit-learn random forest.
```
get_graph("../k8s_serving/ab_test_sklearn_tensorflow.json",'r')
!pygmentize ../k8s_serving/ab_test_sklearn_tensorflow.json
!kubectl apply -f ../k8s_serving/ab_test_sklearn_tensorflow.json
!kubectl get seldondeployments mnist-classifier -o jsonpath='{.status}'
utils.predict_rest_mnist(mnist)
utils.evaluate_abtest(mnist,100)
```
# Rolling Update to Multi-Armed Bandit
Run an epsilon-greedy multi-armed bandit over 3 models:
* Tensorflow neural network model
* Scikit-learn random forest model
* R least-squares model
```
get_graph("../k8s_serving/epsilon_greedy_3way.json",'r')
!pygmentize ../k8s_serving/epsilon_greedy_3way.json
!kubectl apply -f ../k8s_serving/epsilon_greedy_3way.json
!kubectl get seldondeployments mnist-classifier -o jsonpath='{.status}'
utils.predict_rest_mnist(mnist)
utils.evaluate_egreedy(mnist,100)
```
```
import pandas as pd
import numpy as np
df = pd.read_csv("titanic.csv")
df.head()
dados = df.sample(500)
dados = dados.reset_index(drop=True)
dados
dados.rename(columns = {"PassengerId":"ID_passageiro", "Survived":"Sobreviveu", "Name":"Nome",\
"Sex":"Sexo", "Age":"Idade", "Ticket":"Ticket", "Fare":"Passagem", "Cabin":"Cabine",\
"Embarked":"Embarcou"}, inplace=True)
dados.columns
# The describe function gives a brief summary of the data:
dados.describe()
#Note: try this function on a dataframe that only has object-type columns!
# The minimum age is 0.42. Let's look at the unique values of this variable:
dados.Idade.unique()
dados.head()
#Imagine we are building an ML model and want to predict who will die and who will survive.
#In this DF that information is in the "Sobreviveu" column, where survivors are 1 and those who died are 0.
#We want to look at the distribution of our target variable (as it is called in ML), because if the data
#is heavily imbalanced we need specific techniques to balance it.
dados.Sobreviveu.value_counts()
#we could also look at it as a percentage:
dados.Sobreviveu.value_counts()/dados.shape[0] #number of people on board
#Beyond that, we may want to know, among those who died, how many were men, for example.
#First we filter only those who died: (FILTER: df[df.column==value])
morreram = dados[dados.Sobreviveu==0]
morreram
#now let's select the Sexo column and check how many are men
morreram[morreram.Sexo=='male'].shape
#We could also have done it like this:
dados[dados.Sobreviveu==0].Sexo.value_counts()
#Of those who died and were over 50 years old, how many were men and how many were women?
dados[(dados.Sobreviveu==0) & (dados.Idade>50)].Sexo.value_counts()
#Still on slicing, two widely used functions are LOC and ILOC.
#loc: used to filter information in the DF and works as follows: df.loc[rows, columns]. Columns are
#selected by name.
#iloc: works in a similar way: df.iloc[rows, columns], but iloc selects rows and columns by numbers
#(indices).
# all rows and only two columns
dados.loc[:, ["Sobreviveu","Idade"]]
#with iloc we look columns up by index rather than by name
dados.iloc[:,[0,3]]
dados.head()
#row whose index is 0 and two columns
dados.loc[0,["Sobreviveu", "Idade"]]
#Row at position 4 in the DF:
dados.iloc[4]
dados
#we can use loc to filter the dataframe.
#Here we take the rows where age is greater than or equal to 70, and all columns:
dados.loc[dados.Idade >= 70,:]
#we can also build exclusion filters:
df_filtrado = dados.loc[~dados.Cabine.isin(["B22","C104"])]
df_filtrado.Cabine.unique()
#Did you notice that the Cabine variable has NaN (Not a Number) values? Other variables have them too.
#In a data scientist's day-to-day work it is very important to know the variables, understand what these NaN
#values might be, know the % of missing values in each variable, and know how to handle them:
dados.isnull()
#For our study we will drop the observations (rows) that have missing values:
print("Null count before drop:\n", dados.isnull().sum())
dados = dados.dropna()
print('----------------------------------')
print("Null count after drop:\n", dados.isnull().sum())
# The map function
# To build ML models we transform all string fields into numeric fields. See what
#we can do, for example, for the Sexo variable.
print(dados.Sexo[:5])
dados["sexo"] = dados.Sexo.map({"male":0,"female":1}) # note: dados.sexo = ... would only set an attribute, not a real column
```
```
#We can do groupings here, just like we did in SQL. Remember the group by clause there?
dados.groupby(by = "Sexo")
#This groupby does something like this:
dados[dados.sexo==1][:5]
dados[dados.sexo==0][:5]
#Even when saving it to a variable, we cannot see the grouping Python performed:
dff = dados.groupby(by = "Sexo")
dff
#A lambda expression lets us write anonymous, inline functions using a single line of code,
#where x1, x2... are variables representing the function's arguments and expr is any valid Python expression
#involving those variables -- lambda x,y: np.sum(x,y)
dados.groupby(by = "Sexo").apply(lambda x: x.Idade.mean() )
dados.groupby(by = "Sexo").apply(lambda x: x.Sobreviveu.value_counts() )
#We could also apply a custom function of our own:
def f(x):
return np.max(x) - np.min(x)
dados.groupby(by = "Sexo").apply(lambda x: f(x.Idade) )
#We can also use apply without groupby, for example applying it directly to the series:
dados[["Passagem","Idade"]].apply(lambda s: s.mean())
#Another widely used function, besides apply, is aggregate. Here we use agg to apply the min() function to
#all columns:
dados.groupby("Cabine").agg('min')
#We can use agg to apply a specific function to each column of a data frame
def moda(s):
ss = s.value_counts()
return ss.idxmax()
dados.groupby('Cabine').agg({"Sobreviveu": 'sum', "Idade":['min', 'max'], "Embarcou":moda})
#Creating Dataframes
#Creating dataframes from dictionaries:
df_criado = pd.DataFrame({"nome": ['João', 'Pedro', 'Paulo'],
"Idades":[12,15,18],
"Cidade":['SP','Rio','Fortaleza']})
df_criado
#Creating a dataframe from an array:
data = np.array([[1,2,3],["A","B","C"],[12.3,4.5,10.8]])
print(data)
data = data.T
print('-------------------------')
print(data)
df_criado1 = pd.DataFrame(data,columns=["id","letra","valor"])
df_criado1
type(df_criado)
#Visualization
#We can create simple visualizations with pandas; more advanced ones will come with Matplotlib in the next module
dados.boxplot(column= "Idade")
```
<a href="https://colab.research.google.com/github/mirjunaid26/uvadlc_notebooks/blob/master/docs/tutorial_notebooks/tutorial7/GNN_overview.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Tutorial 7: Graph Neural Networks

**Filled notebook:**
[](https://github.com/phlippe/uvadlc_notebooks/blob/master/docs/tutorial_notebooks/tutorial7/GNN_overview.ipynb)
[](https://colab.research.google.com/github/phlippe/uvadlc_notebooks/blob/master/docs/tutorial_notebooks/tutorial7/GNN_overview.ipynb)
**Pre-trained models:**
[](https://github.com/phlippe/saved_models/tree/main/tutorial7)
[](https://drive.google.com/drive/folders/1DOTV_oYt5boa-MElbc2izat4VMSc1gob?usp=sharing)
**Recordings:**
[](https://youtu.be/fK7d56Ly9q8)
[](https://youtu.be/ZCNSUWe4a_Q)
In this tutorial, we will discuss the application of neural networks on graphs. Graph Neural Networks (GNNs) have recently gained increasing popularity in both applications and research, including domains such as social networks, knowledge graphs, recommender systems, and bioinformatics. While the theory and math behind GNNs might first seem complicated, the implementation of those models is quite simple and helps in understanding the methodology. Therefore, we will discuss the implementation of basic network layers of a GNN, namely graph convolutions, and attention layers. Finally, we will apply a GNN on a node-level, edge-level, and graph-level tasks.
Below, we will start by importing our standard libraries. We will use PyTorch Lightning as already done in Tutorial 5 and 6.
```
## Standard libraries
import os
import json
import math
import numpy as np
import time
## Imports for plotting
import matplotlib.pyplot as plt
%matplotlib inline
from IPython.display import set_matplotlib_formats
set_matplotlib_formats('svg', 'pdf') # For export
from matplotlib.colors import to_rgb
import matplotlib
matplotlib.rcParams['lines.linewidth'] = 2.0
import seaborn as sns
sns.reset_orig()
sns.set()
## Progress bar
from tqdm.notebook import tqdm
## PyTorch
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.utils.data as data
import torch.optim as optim
# Torchvision
import torchvision
from torchvision.datasets import CIFAR10
from torchvision import transforms
# PyTorch Lightning
try:
import pytorch_lightning as pl
except ModuleNotFoundError: # Google Colab does not have PyTorch Lightning installed by default. Hence, we do it here if necessary
!pip install --quiet pytorch-lightning>=1.4
import pytorch_lightning as pl
from pytorch_lightning.callbacks import LearningRateMonitor, ModelCheckpoint
# Path to the folder where the datasets are/should be downloaded (e.g. CIFAR10)
DATASET_PATH = "../data"
# Path to the folder where the pretrained models are saved
CHECKPOINT_PATH = "../saved_models/tutorial7"
# Setting the seed
pl.seed_everything(42)
# Ensure that all operations are deterministic on GPU (if used) for reproducibility
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False
device = torch.device("cuda:0") if torch.cuda.is_available() else torch.device("cpu")
print(device)
```
We also have a few pre-trained models we download below.
```
import urllib.request
from urllib.error import HTTPError
# Github URL where saved models are stored for this tutorial
base_url = "https://raw.githubusercontent.com/phlippe/saved_models/main/tutorial7/"
# Files to download
pretrained_files = ["NodeLevelMLP.ckpt", "NodeLevelGNN.ckpt", "GraphLevelGraphConv.ckpt"]
# Create checkpoint path if it doesn't exist yet
os.makedirs(CHECKPOINT_PATH, exist_ok=True)
# For each file, check whether it already exists. If not, try downloading it.
for file_name in pretrained_files:
file_path = os.path.join(CHECKPOINT_PATH, file_name)
if "/" in file_name:
os.makedirs(file_path.rsplit("/",1)[0], exist_ok=True)
if not os.path.isfile(file_path):
file_url = base_url + file_name
print(f"Downloading {file_url}...")
try:
urllib.request.urlretrieve(file_url, file_path)
except HTTPError as e:
print("Something went wrong. Please try to download the file from the GDrive fll output including the following error:\n", e)
```
## Graph Neural Networks
### Graph representation
Before starting the discussion of specific neural network operations on graphs, we should consider how to represent a graph. Mathematically, a graph $\mathcal{G}$ is defined as a tuple of a set of nodes/vertices $V$, and a set of edges/links $E$: $\mathcal{G}=(V,E)$. Each edge is a pair of two vertices, and represents a connection between them. For instance, let's look at the following graph:
<center width="100%" style="padding:10px"><img src="https://github.com/phlippe/uvadlc_notebooks/blob/master/docs/tutorial_notebooks/tutorial7/example_graph.svg?raw=1" width="250px"></center>
The vertices are $V=\{1,2,3,4\}$, and edges $E=\{(1,2), (2,3), (2,4), (3,4)\}$. Note that for simplicity, we assume the graph to be undirected and hence don't add mirrored pairs like $(2,1)$. In applications, vertices and edges can often have specific attributes, and edges can even be directed. The question is how we can represent this diversity in an efficient way for matrix operations. Usually, for the edges, we decide between two variants: an adjacency matrix, or a list of paired vertex indices.
The **adjacency matrix** $A$ is a square matrix whose elements indicate whether pairs of vertices are adjacent, i.e. connected, or not. In the simplest case, $A_{ij}$ is 1 if there is a connection from node $i$ to $j$, and otherwise 0. If we have edge attributes or different categories of edges in a graph, this information can be added to the matrix as well. For an undirected graph, keep in mind that $A$ is a symmetric matrix ($A_{ij}=A_{ji}$). For the example graph above, we have the following adjacency matrix:
$$
A = \begin{bmatrix}
0 & 1 & 0 & 0\\
1 & 0 & 1 & 1\\
0 & 1 & 0 & 1\\
0 & 1 & 1 & 0
\end{bmatrix}
$$
While expressing a graph as a list of edges is more efficient in terms of memory and (possibly) computation, using an adjacency matrix is more intuitive and simpler to implement. In our implementations below, we will rely on the adjacency matrix to keep the code simple. However, common libraries use edge lists, which we will discuss later more.
Alternatively, we could also use the list of edges to define a sparse adjacency matrix with which we can work as if it was a dense matrix, but allows more memory-efficient operations. PyTorch supports this with the sub-package `torch.sparse` ([documentation](https://pytorch.org/docs/stable/sparse.html)) which is however still in a beta-stage (API might change in future).
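To make this concrete, here is a small sketch (not used in the later code) showing the example graph stored as a dense adjacency matrix, converted to a list of edge index pairs, and wrapped into a sparse COO tensor:
```
import torch

# Dense adjacency matrix of the example graph (nodes 1-4 mapped to indices 0-3)
adj = torch.Tensor([[0, 1, 0, 0],
                    [1, 0, 1, 1],
                    [0, 1, 0, 1],
                    [0, 1, 1, 0]])

# Equivalent list of edge index pairs, shape [2, num_edges] (both directions of each undirected edge)
edge_index = adj.nonzero(as_tuple=False).t()
print(edge_index)

# Sparse COO representation built from the edge list
adj_sparse = torch.sparse_coo_tensor(edge_index, torch.ones(edge_index.size(1)), size=(4, 4))
print(adj_sparse.to_dense())
```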
### Graph Convolutions
Graph Convolutional Networks have been introduced by [Kipf et al.](https://openreview.net/pdf?id=SJU4ayYgl) in 2016 at the University of Amsterdam. Thomas Kipf also wrote a great [blog post](https://tkipf.github.io/graph-convolutional-networks/) about this topic, which is recommended if you want to read about GCNs from a different perspective. GCNs are similar to convolutions in images in the sense that the "filter" parameters are typically shared over all locations in the graph. At the same time, GCNs rely on message passing methods, which means that vertices exchange information with their neighbors, and send "messages" to each other. Before looking at the math, we can try to visually understand how GCNs work. The first step is that each node creates a feature vector that represents the message it wants to send to all its neighbors. In the second step, the messages are sent to the neighbors, so that a node receives one message per adjacent node. Below we have visualized the two steps for our example graph.
<center width="100%" style="padding:10px"><img src="https://github.com/phlippe/uvadlc_notebooks/blob/master/docs/tutorial_notebooks/tutorial7/graph_message_passing.svg?raw=1" width="700px"></center>
If we want to formulate that in more mathematical terms, we need to first decide how to combine all the messages a node receives. As the number of messages vary across nodes, we need an operation that works for any number. Hence, the usual way to go is to sum or take the mean. Given the previous features of nodes $H^{(l)}$, the GCN layer is defined as follows:
$$H^{(l+1)} = \sigma\left(\hat{D}^{-1/2}\hat{A}\hat{D}^{-1/2}H^{(l)}W^{(l)}\right)$$
$W^{(l)}$ is the weight matrix with which we transform the input features into messages ($H^{(l)}W^{(l)}$). To the adjacency matrix $A$ we add the identity matrix so that each node sends its own message also to itself: $\hat{A}=A+I$. Finally, to take the average instead of summing, we calculate the matrix $\hat{D}$ which is a diagonal matrix with $\hat{D}_{ii}$ denoting the number of neighbors node $i$ has. $\sigma$ represents an arbitrary activation function, and not necessarily the sigmoid (usually a ReLU-based activation function is used in GNNs).
When implementing the GCN layer in PyTorch, we can take advantage of the flexible operations on tensors. Instead of defining a matrix $\hat{D}$, we can simply divide the summed messages by the number of neighbors afterward. Additionally, we replace the weight matrix with a linear layer, which additionally allows us to add a bias. Written as a PyTorch module, the GCN layer is defined as follows:
```
class GCNLayer(nn.Module):
def __init__(self, c_in, c_out):
super().__init__()
self.projection = nn.Linear(c_in, c_out)
def forward(self, node_feats, adj_matrix):
"""
Inputs:
node_feats - Tensor with node features of shape [batch_size, num_nodes, c_in]
adj_matrix - Batch of adjacency matrices of the graph. If there is an edge from i to j, adj_matrix[b,i,j]=1 else 0.
Supports directed edges by non-symmetric matrices. Assumes to already have added the identity connections.
Shape: [batch_size, num_nodes, num_nodes]
"""
# Num neighbours = number of incoming edges
num_neighbours = adj_matrix.sum(dim=-1, keepdims=True)
node_feats = self.projection(node_feats)
node_feats = torch.bmm(adj_matrix, node_feats)
node_feats = node_feats / num_neighbours
return node_feats
```
To further understand the GCN layer, we can apply it to our example graph above. First, let's specify some node features and the adjacency matrix with added self-connections:
```
node_feats = torch.arange(8, dtype=torch.float32).view(1, 4, 2)
adj_matrix = torch.Tensor([[[1, 1, 0, 0],
[1, 1, 1, 1],
[0, 1, 1, 1],
[0, 1, 1, 1]]])
print("Node features:\n", node_feats)
print("\nAdjacency matrix:\n", adj_matrix)
```
Next, let's apply a GCN layer to it. For simplicity, we initialize the linear weight matrix as an identity matrix so that the input features are equal to the messages. This makes it easier for us to verify the message passing operation.
```
layer = GCNLayer(c_in=2, c_out=2)
layer.projection.weight.data = torch.Tensor([[1., 0.], [0., 1.]])
layer.projection.bias.data = torch.Tensor([0., 0.])
with torch.no_grad():
out_feats = layer(node_feats, adj_matrix)
print("Adjacency matrix", adj_matrix)
print("Input features", node_feats)
print("Output features", out_feats)
```
As we can see, the first node's output values are the average of itself and the second node. Similarly, we can verify all other nodes. However, in a GNN, we would also want to allow feature exchange between nodes beyond their immediate neighbors. This can be achieved by applying multiple GCN layers, which gives us the final layout of a GNN. The GNN can be built up from a sequence of GCN layers and non-linearities such as ReLU. For a visualization, see below (figure credit - [Thomas Kipf, 2016](https://tkipf.github.io/graph-convolutional-networks/)).
<center width="100%" style="padding: 10px"><img src="https://github.com/phlippe/uvadlc_notebooks/blob/master/docs/tutorial_notebooks/tutorial7/gcn_network.png?raw=1" width="600px"></center>
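As a minimal sketch (reusing the `GCNLayer`, `node_feats`, and `adj_matrix` defined above), such a stack of graph layers and non-linearities can be written as:
```
import torch
import torch.nn.functional as F

# Two GCN layers with a ReLU in between; each layer needs both the features and the adjacency matrix
layer1 = GCNLayer(c_in=2, c_out=4)
layer2 = GCNLayer(c_in=4, c_out=2)

with torch.no_grad():
    h = F.relu(layer1(node_feats, adj_matrix))
    out = layer2(h, adj_matrix)
print(out.shape)  # [1, 4, 2] - one graph, 4 nodes, 2 output features per node
```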
However, one issue we can see from looking at the example above is that the output features for nodes 3 and 4 are the same because they have the same adjacent nodes (including itself). Therefore, GCN layers can make the network forget node-specific information if we just take a mean over all messages. Multiple possible improvements have been proposed. While the simplest option might be using residual connections, the more common approach is to either weigh the self-connections higher or define a separate weight matrix for the self-connections. Alternatively, we can re-visit a concept from the last tutorial: attention.
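As a hedged sketch of the last idea, a variant of our `GCNLayer` with a separate weight matrix for the self-connection could look as follows (this is close in spirit to the `GraphConv` layer used later, which sums rather than averages the neighbor messages):
```
import torch
import torch.nn as nn

class GCNLayerSelf(nn.Module):
    """Sketch: separate linear transforms for neighbor messages and the node itself."""
    def __init__(self, c_in, c_out):
        super().__init__()
        self.proj_neigh = nn.Linear(c_in, c_out)  # applied to the aggregated neighbor messages
        self.proj_self = nn.Linear(c_in, c_out)   # applied to the node's own features

    def forward(self, node_feats, adj_matrix):
        # adj_matrix is assumed here to contain only the true edges (no self-connections)
        num_neighbours = adj_matrix.sum(dim=-1, keepdims=True).clamp(min=1)
        neigh_msgs = torch.bmm(adj_matrix, self.proj_neigh(node_feats)) / num_neighbours
        return neigh_msgs + self.proj_self(node_feats)
```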
### Graph Attention
If you remember from the last tutorial, attention describes a weighted average of multiple elements with the weights dynamically computed based on an input query and elements' keys (if you haven't read Tutorial 6 yet, it is recommended to at least go through the very first section called [What is Attention?](https://uvadlc-notebooks.readthedocs.io/en/latest/tutorial_notebooks/tutorial6/Transformers_and_MHAttention.html#What-is-Attention?)). This concept can be similarly applied to graphs, one example of which is the Graph Attention Network (called GAT, proposed by [Velickovic et al., 2017](https://arxiv.org/abs/1710.10903)). Similarly to the GCN, the graph attention layer creates a message for each node using a linear layer/weight matrix. For the attention part, it uses the message from the node itself as a query, and the messages to average as both keys and values (note that this also includes the message to itself). The score function $f_{attn}$ is implemented as a one-layer MLP which maps the query and key to a single value. The MLP looks as follows (figure credit - [Velickovic et al.](https://arxiv.org/abs/1710.10903)):
<center width="100%" style="padding:10px"><img src="https://github.com/phlippe/uvadlc_notebooks/blob/master/docs/tutorial_notebooks/tutorial7/graph_attention_MLP.svg?raw=1" width="250px"></center>
$h_i$ and $h_j$ are the original features from node $i$ and $j$ respectively, and represent the messages of the layer with $\mathbf{W}$ as weight matrix. $\mathbf{a}$ is the weight matrix of the MLP, which has the shape $[1,2\times d_{\text{message}}]$, and $\alpha_{ij}$ the final attention weight from node $i$ to $j$. The calculation can be described as follows:
$$\alpha_{ij} = \frac{\exp\left(\text{LeakyReLU}\left(\mathbf{a}\left[\mathbf{W}h_i||\mathbf{W}h_j\right]\right)\right)}{\sum_{k\in\mathcal{N}_i} \exp\left(\text{LeakyReLU}\left(\mathbf{a}\left[\mathbf{W}h_i||\mathbf{W}h_k\right]\right)\right)}$$
The operator $||$ represents the concatenation, and $\mathcal{N}_i$ the indices of the neighbors of node $i$. Note that in contrast to usual practice, we apply a non-linearity (here LeakyReLU) before the softmax over elements. Although it seems like a minor change at first, it is crucial for the attention to depend on the original input. Specifically, let's remove the non-linearity for a second, and try to simplify the expression:
$$
\begin{split}
\alpha_{ij} & = \frac{\exp\left(\mathbf{a}\left[\mathbf{W}h_i||\mathbf{W}h_j\right]\right)}{\sum_{k\in\mathcal{N}_i} \exp\left(\mathbf{a}\left[\mathbf{W}h_i||\mathbf{W}h_k\right]\right)}\\[5pt]
& = \frac{\exp\left(\mathbf{a}_{:,:d/2}\mathbf{W}h_i+\mathbf{a}_{:,d/2:}\mathbf{W}h_j\right)}{\sum_{k\in\mathcal{N}_i} \exp\left(\mathbf{a}_{:,:d/2}\mathbf{W}h_i+\mathbf{a}_{:,d/2:}\mathbf{W}h_k\right)}\\[5pt]
& = \frac{\exp\left(\mathbf{a}_{:,:d/2}\mathbf{W}h_i\right)\cdot\exp\left(\mathbf{a}_{:,d/2:}\mathbf{W}h_j\right)}{\sum_{k\in\mathcal{N}_i} \exp\left(\mathbf{a}_{:,:d/2}\mathbf{W}h_i\right)\cdot\exp\left(\mathbf{a}_{:,d/2:}\mathbf{W}h_k\right)}\\[5pt]
& = \frac{\exp\left(\mathbf{a}_{:,d/2:}\mathbf{W}h_j\right)}{\sum_{k\in\mathcal{N}_i} \exp\left(\mathbf{a}_{:,d/2:}\mathbf{W}h_k\right)}\\
\end{split}
$$
We can see that without the non-linearity, the attention term with $h_i$ actually cancels itself out, resulting in the attention being independent of the node itself. Hence, we would have the same issue as the GCN of creating the same output features for nodes with the same neighbors. This is why the LeakyReLU is crucial and adds some dependency on $h_i$ to the attention.
Once we obtain all attention factors, we can calculate the output features for each node by performing the weighted average:
$$h_i'=\sigma\left(\sum_{j\in\mathcal{N}_i}\alpha_{ij}\mathbf{W}h_j\right)$$
$\sigma$ is yet another non-linearity, as in the GCN layer. Visually, we can represent the full message passing in an attention layer as follows (figure credit - [Velickovic et al.](https://arxiv.org/abs/1710.10903)):
<center width="100%"><img src="https://github.com/phlippe/uvadlc_notebooks/blob/master/docs/tutorial_notebooks/tutorial7/graph_attention.jpeg?raw=1" width="400px"></center>
To increase the expressiveness of the graph attention network, [Velickovic et al.](https://arxiv.org/abs/1710.10903) proposed to extend it to multiple heads similar to the Multi-Head Attention block in Transformers. This results in $N$ attention layers being applied in parallel. In the image above, it is visualized as three different colors of arrows (green, blue, and purple) that are afterward concatenated. The average is only applied for the very final prediction layer in a network.
After having discussed the graph attention layer in detail, we can implement it below:
```
class GATLayer(nn.Module):
def __init__(self, c_in, c_out, num_heads=1, concat_heads=True, alpha=0.2):
"""
Inputs:
c_in - Dimensionality of input features
c_out - Dimensionality of output features
num_heads - Number of heads, i.e. attention mechanisms to apply in parallel. The
output features are equally split up over the heads if concat_heads=True.
concat_heads - If True, the output of the different heads is concatenated instead of averaged.
alpha - Negative slope of the LeakyReLU activation.
"""
super().__init__()
self.num_heads = num_heads
self.concat_heads = concat_heads
if self.concat_heads:
assert c_out % num_heads == 0, "Number of output features must be a multiple of the count of heads."
c_out = c_out // num_heads
# Sub-modules and parameters needed in the layer
self.projection = nn.Linear(c_in, c_out * num_heads)
self.a = nn.Parameter(torch.Tensor(num_heads, 2 * c_out)) # One per head
self.leakyrelu = nn.LeakyReLU(alpha)
# Initialization from the original implementation
nn.init.xavier_uniform_(self.projection.weight.data, gain=1.414)
nn.init.xavier_uniform_(self.a.data, gain=1.414)
def forward(self, node_feats, adj_matrix, print_attn_probs=False):
"""
Inputs:
node_feats - Input features of the node. Shape: [batch_size, c_in]
adj_matrix - Adjacency matrix including self-connections. Shape: [batch_size, num_nodes, num_nodes]
print_attn_probs - If True, the attention weights are printed during the forward pass (for debugging purposes)
"""
batch_size, num_nodes = node_feats.size(0), node_feats.size(1)
# Apply linear layer and sort nodes by head
node_feats = self.projection(node_feats)
node_feats = node_feats.view(batch_size, num_nodes, self.num_heads, -1)
# We need to calculate the attention logits for every edge in the adjacency matrix
# Doing this on all possible combinations of nodes is very expensive
# => Create a tensor of [W*h_i||W*h_j] with i and j being the indices of all edges
edges = adj_matrix.nonzero(as_tuple=False) # Returns indices where the adjacency matrix is not 0 => edges
node_feats_flat = node_feats.view(batch_size * num_nodes, self.num_heads, -1)
edge_indices_row = edges[:,0] * num_nodes + edges[:,1]
edge_indices_col = edges[:,0] * num_nodes + edges[:,2]
a_input = torch.cat([
torch.index_select(input=node_feats_flat, index=edge_indices_row, dim=0),
torch.index_select(input=node_feats_flat, index=edge_indices_col, dim=0)
], dim=-1) # Index select returns a tensor with node_feats_flat being indexed at the desired positions along dim=0
# Calculate attention MLP output (independent for each head)
attn_logits = torch.einsum('bhc,hc->bh', a_input, self.a)
attn_logits = self.leakyrelu(attn_logits)
# Map list of attention values back into a matrix
attn_matrix = attn_logits.new_zeros(adj_matrix.shape+(self.num_heads,)).fill_(-9e15)
attn_matrix[adj_matrix[...,None].repeat(1,1,1,self.num_heads) == 1] = attn_logits.reshape(-1)
# Weighted average of attention
attn_probs = F.softmax(attn_matrix, dim=2)
if print_attn_probs:
print("Attention probs\n", attn_probs.permute(0, 3, 1, 2))
node_feats = torch.einsum('bijh,bjhc->bihc', attn_probs, node_feats)
# If heads should be concatenated, we can do this by reshaping. Otherwise, take mean
if self.concat_heads:
node_feats = node_feats.reshape(batch_size, num_nodes, -1)
else:
node_feats = node_feats.mean(dim=2)
return node_feats
```
Again, we can apply the graph attention layer on our example graph above to understand the dynamics better. As before, the input layer is initialized as an identity matrix, but we set $\mathbf{a}$ to be a vector of arbitrary numbers to obtain different attention values. We use two heads to show the parallel, independent attention mechanisms working in the layer.
```
layer = GATLayer(2, 2, num_heads=2)
layer.projection.weight.data = torch.Tensor([[1., 0.], [0., 1.]])
layer.projection.bias.data = torch.Tensor([0., 0.])
layer.a.data = torch.Tensor([[-0.2, 0.3], [0.1, -0.1]])
with torch.no_grad():
out_feats = layer(node_feats, adj_matrix, print_attn_probs=True)
print("Adjacency matrix", adj_matrix)
print("Input features", node_feats)
print("Output features", out_feats)
```
We recommend that you try to calculate the attention matrix at least for one head and one node for yourself. The entries are 0 where there does not exist an edge between $i$ and $j$. For the others, we see a diverse set of attention probabilities. Moreover, the output features of node 3 and 4 are now different although they have the same neighbors.
## PyTorch Geometric
We had mentioned before that implementing graph networks with adjacency matrix is simple and straight-forward but can be computationally expensive for large graphs. Many real-world graphs can reach over 200k nodes, for which adjacency matrix-based implementations fail. There are a lot of optimizations possible when implementing GNNs, and luckily, there exist packages that provide such layers. The most popular packages for PyTorch are [PyTorch Geometric](https://pytorch-geometric.readthedocs.io/en/latest/) and the [Deep Graph Library](https://www.dgl.ai/) (the latter being actually framework agnostic). Which one to use depends on the project you are planning to do and personal taste. In this tutorial, we will look at PyTorch Geometric as part of the PyTorch family. Similar to PyTorch Lightning, PyTorch Geometric is not installed by default on GoogleColab (and actually also not in our `dl2021` environment due to many dependencies that would be unnecessary for the practicals). Hence, let's import and/or install it below:
```
# torch geometric
try:
import torch_geometric
except ModuleNotFoundError:
# Installing torch geometric packages with specific CUDA+PyTorch version.
# See https://pytorch-geometric.readthedocs.io/en/latest/notes/installation.html for details
TORCH = torch.__version__.split('+')[0]
CUDA = 'cu' + torch.version.cuda.replace('.','')
!pip install torch-scatter -f https://pytorch-geometric.com/whl/torch-{TORCH}+{CUDA}.html
!pip install torch-sparse -f https://pytorch-geometric.com/whl/torch-{TORCH}+{CUDA}.html
!pip install torch-cluster -f https://pytorch-geometric.com/whl/torch-{TORCH}+{CUDA}.html
!pip install torch-spline-conv -f https://pytorch-geometric.com/whl/torch-{TORCH}+{CUDA}.html
!pip install torch-geometric
import torch_geometric
import torch_geometric.nn as geom_nn
import torch_geometric.data as geom_data
```
PyTorch Geometric provides us a set of common graph layers, including the GCN and GAT layer we implemented above. Additionally, similar to PyTorch's torchvision, it provides the common graph datasets and transformations on those to simplify training. Compared to our implementation above, PyTorch Geometric uses a list of index pairs to represent the edges. The details of this library will be explored further in our experiments.
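Purely as an illustration (the variable names below are ours, not part of the tutorial code), the small example graph from the beginning can be wrapped into such an edge-list-based `Data` object:
```
import torch

# Edge list of the example graph (0-indexed nodes, both directions of each undirected edge)
example_edge_index = torch.tensor([[0, 1, 1, 1, 2, 2, 3, 3],
                                   [1, 0, 2, 3, 1, 3, 1, 2]], dtype=torch.long)
example_x = torch.arange(8, dtype=torch.float32).view(4, 2)  # 4 nodes with 2 features each
example_graph = geom_data.Data(x=example_x, edge_index=example_edge_index)
print(example_graph)  # Data(x=[4, 2], edge_index=[2, 8])
```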
In our tasks below, we want to allow us to pick from a multitude of graph layers. Thus, we define again below a dictionary to access those using a string:
```
gnn_layer_by_name = {
"GCN": geom_nn.GCNConv,
"GAT": geom_nn.GATConv,
"GraphConv": geom_nn.GraphConv
}
```
Additionally to GCN and GAT, we added the layer `geom_nn.GraphConv` ([documentation](https://pytorch-geometric.readthedocs.io/en/latest/modules/nn.html#torch_geometric.nn.conv.GraphConv)). GraphConv is a GCN with a separate weight matrix for the self-connections. Mathematically, this would be:
$$
\mathbf{x}_i^{(l+1)} = \mathbf{W}^{(l + 1)}_1 \mathbf{x}_i^{(l)} + \mathbf{W}^{(\ell + 1)}_2 \sum_{j \in \mathcal{N}_i} \mathbf{x}_j^{(l)}
$$
In this formula, the neighbor's messages are added instead of averaged. However, PyTorch Geometric provides the argument `aggr` to switch between summing, averaging, and max pooling.
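As a small sketch with arbitrary values, switching the aggregation of `GraphConv` from summing to averaging looks like this:
```
import torch

# GraphConv with mean aggregation instead of the default summation
conv = geom_nn.GraphConv(in_channels=2, out_channels=4, aggr='mean')
x = torch.randn(4, 2)                                    # 4 nodes with 2 features each
edge_index = torch.tensor([[0, 1, 1, 2], [1, 0, 2, 1]])  # a small undirected edge list
print(conv(x, edge_index).shape)                         # torch.Size([4, 4])
```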
## Experiments on graph structures
Tasks on graph-structured data can be grouped into three groups: node-level, edge-level and graph-level. The different levels describe on which level we want to perform classification/regression. We will discuss all three types in more detail below.
### Node-level tasks: Semi-supervised node classification
Node-level tasks aim to classify the nodes in a graph. Usually, we are given a single, large graph with >1000 nodes, of which a certain fraction are labeled. We learn to classify those labeled examples during training and try to generalize to the unlabeled nodes.
A popular example that we will use in this tutorial is the Cora dataset, a citation network among papers. The Cora dataset consists of 2708 scientific publications with links between them representing the citation of one paper by another. The task is to classify each publication into one of seven classes. Each publication is represented by a bag-of-words vector. This means that we have a vector of 1433 elements for each publication, where a 1 at feature $i$ indicates that the $i$-th word of a pre-defined dictionary is in the article. Binary bag-of-words representations are commonly used when we need very simple encodings, and already have an intuition of what words to expect in a network. There exist much better approaches, but we will leave this to the NLP courses to discuss.
We will load the dataset below:
```
cora_dataset = torch_geometric.datasets.Planetoid(root=DATASET_PATH, name="Cora")
```
Let's look at how PyTorch Geometric represents the graph data. Note that although we have a single graph, PyTorch Geometric returns a dataset for compatibility to other datasets.
```
cora_dataset[0]
```
The graph is represented by a `Data` object ([documentation](https://pytorch-geometric.readthedocs.io/en/latest/modules/data.html#torch_geometric.data.Data)) which we can access as a standard Python namespace. The edge index tensor is the list of edges in the graph and contains the mirrored version of each edge for undirected graphs. The `train_mask`, `val_mask`, and `test_mask` are boolean masks that indicate which nodes we should use for training, validation, and testing. The `x` tensor is the feature tensor of our 2708 publications, and `y` the labels for all nodes.
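For example, the individual tensors can be inspected directly (a small sketch; the shapes follow the description above):
```
data = cora_dataset[0]
print(data.x.shape)           # node features: torch.Size([2708, 1433])
print(data.edge_index.shape)  # edge list, containing both directions of every edge
print(data.y.shape)           # one class label per node
print(data.train_mask.sum())  # number of nodes used for training
```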
After having seen the data, we can implement a simple graph neural network. The GNN applies a sequence of graph layers (GCN, GAT, or GraphConv), ReLU as activation function, and dropout for regularization. See below for the specific implementation.
```
class GNNModel(nn.Module):
def __init__(self, c_in, c_hidden, c_out, num_layers=2, layer_name="GCN", dp_rate=0.1, **kwargs):
"""
Inputs:
c_in - Dimension of input features
c_hidden - Dimension of hidden features
c_out - Dimension of the output features. Usually number of classes in classification
num_layers - Number of "hidden" graph layers
layer_name - String of the graph layer to use
dp_rate - Dropout rate to apply throughout the network
kwargs - Additional arguments for the graph layer (e.g. number of heads for GAT)
"""
super().__init__()
gnn_layer = gnn_layer_by_name[layer_name]
layers = []
in_channels, out_channels = c_in, c_hidden
for l_idx in range(num_layers-1):
layers += [
gnn_layer(in_channels=in_channels,
out_channels=out_channels,
**kwargs),
nn.ReLU(inplace=True),
nn.Dropout(dp_rate)
]
in_channels = c_hidden
layers += [gnn_layer(in_channels=in_channels,
out_channels=c_out,
**kwargs)]
self.layers = nn.ModuleList(layers)
def forward(self, x, edge_index):
"""
Inputs:
x - Input features per node
edge_index - List of vertex index pairs representing the edges in the graph (PyTorch geometric notation)
"""
for l in self.layers:
# For graph layers, we need to add the "edge_index" tensor as additional input
# All PyTorch Geometric graph layer inherit the class "MessagePassing", hence
# we can simply check the class type.
if isinstance(l, geom_nn.MessagePassing):
x = l(x, edge_index)
else:
x = l(x)
return x
```
Good practice in node-level tasks is to create an MLP baseline that is applied to each node independently. This way we can verify whether adding the graph information to the model indeed improves the prediction, or not. It might also be that the features per node are already expressive enough to clearly point towards a specific class. To check this, we implement a simple MLP below.
```
class MLPModel(nn.Module):
def __init__(self, c_in, c_hidden, c_out, num_layers=2, dp_rate=0.1):
"""
Inputs:
c_in - Dimension of input features
c_hidden - Dimension of hidden features
c_out - Dimension of the output features. Usually number of classes in classification
num_layers - Number of hidden layers
dp_rate - Dropout rate to apply throughout the network
"""
super().__init__()
layers = []
in_channels, out_channels = c_in, c_hidden
for l_idx in range(num_layers-1):
layers += [
nn.Linear(in_channels, out_channels),
nn.ReLU(inplace=True),
nn.Dropout(dp_rate)
]
in_channels = c_hidden
layers += [nn.Linear(in_channels, c_out)]
self.layers = nn.Sequential(*layers)
def forward(self, x, *args, **kwargs):
"""
Inputs:
x - Input features per node
"""
return self.layers(x)
```
Finally, we can merge the models into a PyTorch Lightning module which handles the training, validation, and testing for us.
```
class NodeLevelGNN(pl.LightningModule):
def __init__(self, model_name, **model_kwargs):
super().__init__()
# Saving hyperparameters
self.save_hyperparameters()
if model_name == "MLP":
self.model = MLPModel(**model_kwargs)
else:
self.model = GNNModel(**model_kwargs)
self.loss_module = nn.CrossEntropyLoss()
def forward(self, data, mode="train"):
x, edge_index = data.x, data.edge_index
x = self.model(x, edge_index)
# Only calculate the loss on the nodes corresponding to the mask
if mode == "train":
mask = data.train_mask
elif mode == "val":
mask = data.val_mask
elif mode == "test":
mask = data.test_mask
else:
assert False, f"Unknown forward mode: {mode}"
loss = self.loss_module(x[mask], data.y[mask])
acc = (x[mask].argmax(dim=-1) == data.y[mask]).sum().float() / mask.sum()
return loss, acc
def configure_optimizers(self):
# We use SGD here, but Adam works as well
optimizer = optim.SGD(self.parameters(), lr=0.1, momentum=0.9, weight_decay=2e-3)
return optimizer
def training_step(self, batch, batch_idx):
loss, acc = self.forward(batch, mode="train")
self.log('train_loss', loss)
self.log('train_acc', acc)
return loss
def validation_step(self, batch, batch_idx):
_, acc = self.forward(batch, mode="val")
self.log('val_acc', acc)
def test_step(self, batch, batch_idx):
_, acc = self.forward(batch, mode="test")
self.log('test_acc', acc)
```
In addition to the Lightning module, we define a training function below. Since we have a single graph, we use a batch size of 1 for the data loader and share the same data loader for the train, validation, and test set (the mask is picked inside the Lightning module). We also set the argument `progress_bar_refresh_rate` to zero: the progress bar usually shows the progress per epoch, but here an epoch consists of only a single step. The rest of the code is very similar to what we have seen in Tutorials 5 and 6 already.
```
def train_node_classifier(model_name, dataset, **model_kwargs):
pl.seed_everything(42)
node_data_loader = geom_data.DataLoader(dataset, batch_size=1)
# Create a PyTorch Lightning trainer with the generation callback
root_dir = os.path.join(CHECKPOINT_PATH, "NodeLevel" + model_name)
os.makedirs(root_dir, exist_ok=True)
trainer = pl.Trainer(default_root_dir=root_dir,
callbacks=[ModelCheckpoint(save_weights_only=True, mode="max", monitor="val_acc")],
gpus=1 if str(device).startswith("cuda") else 0,
max_epochs=200,
progress_bar_refresh_rate=0) # 0 because epoch size is 1
trainer.logger._default_hp_metric = None # Optional logging argument that we don't need
# Check whether pretrained model exists. If yes, load it and skip training
pretrained_filename = os.path.join(CHECKPOINT_PATH, f"NodeLevel{model_name}.ckpt")
if os.path.isfile(pretrained_filename):
print("Found pretrained model, loading...")
model = NodeLevelGNN.load_from_checkpoint(pretrained_filename)
else:
pl.seed_everything()
model = NodeLevelGNN(model_name=model_name, c_in=dataset.num_node_features, c_out=dataset.num_classes, **model_kwargs)
trainer.fit(model, node_data_loader, node_data_loader)
model = NodeLevelGNN.load_from_checkpoint(trainer.checkpoint_callback.best_model_path)
# Test best model on the test set
test_result = trainer.test(model, test_dataloaders=node_data_loader, verbose=False)
batch = next(iter(node_data_loader))
batch = batch.to(model.device)
_, train_acc = model.forward(batch, mode="train")
_, val_acc = model.forward(batch, mode="val")
result = {"train": train_acc,
"val": val_acc,
"test": test_result[0]['test_acc']}
return model, result
```
Finally, we can train our models. First, let's train the simple MLP:
```
# Small function for printing the test scores
def print_results(result_dict):
if "train" in result_dict:
print(f"Train accuracy: {(100.0*result_dict['train']):4.2f}%")
if "val" in result_dict:
print(f"Val accuracy: {(100.0*result_dict['val']):4.2f}%")
print(f"Test accuracy: {(100.0*result_dict['test']):4.2f}%")
node_mlp_model, node_mlp_result = train_node_classifier(model_name="MLP",
dataset=cora_dataset,
c_hidden=16,
num_layers=2,
dp_rate=0.1)
print_results(node_mlp_result)
```
Although the MLP can overfit the training dataset because of the high-dimensional input features, it does not perform well on the test set. Let's see if we can beat this score with our graph networks:
```
node_gnn_model, node_gnn_result = train_node_classifier(model_name="GNN",
layer_name="GCN",
dataset=cora_dataset,
c_hidden=16,
num_layers=2,
dp_rate=0.1)
print_results(node_gnn_result)
```
As we would have hoped, the GNN model outperforms the MLP by quite a margin. This shows that using the graph information indeed improves our predictions and lets us generalize better.
The hyperparameters of the model have been chosen to create a relatively small network. This is because the first layer, with an input dimension of 1433, can be relatively expensive to compute for large graphs. In general, GNNs can become quite expensive for very big graphs. This is why such GNNs either use a small hidden size or rely on a special batching strategy in which we sample a connected subgraph of the big, original graph.
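As a rough sketch of such a subgraph-sampling strategy (not used anywhere else in this tutorial, and assuming a PyTorch Geometric version that provides `NeighborLoader`, i.e. PyG 2.x):
```
# Hedged sketch: mini-batch a large graph by sampling fixed-size node neighborhoods.
from torch_geometric.loader import NeighborLoader

data = cora_dataset[0]
loader = NeighborLoader(
    data,
    num_neighbors=[10, 10],       # sample up to 10 neighbors per node, for 2 GNN layers
    batch_size=128,               # number of seed nodes per mini-batch
    input_nodes=data.train_mask,  # start the sampling from training nodes only
)
for subgraph in loader:
    print(subgraph)  # a small, connected subgraph that fits in memory
    break
```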
### Edge-level tasks: Link prediction
In some applications, we might have to predict at the edge level instead of the node level. The most common edge-level task for GNNs is link prediction: given a graph, we want to predict whether there will be/should be an edge between two nodes. For example, in a social network, this is used by Facebook and co to propose new friends to you. Again, the graph information can be crucial for performing this task. The output prediction is usually obtained by computing a similarity score on a pair of node features, which should be close to 1 if there should be a link and close to 0 otherwise. To keep the tutorial short, we will not implement this task ourselves. Nevertheless, there are many good resources out there if you are interested in looking closer at this task.
Tutorials and papers for this topic include:
* [PyTorch Geometric example](https://github.com/rusty1s/pytorch_geometric/blob/master/examples/link_pred.py)
* [Graph Neural Networks: A Review of Methods and Applications](https://arxiv.org/pdf/1812.08434.pdf), Zhou et al. 2019
* [Link Prediction Based on Graph Neural Networks](https://papers.nips.cc/paper/2018/file/53f0d7c537d99b3824f0f99d62ea2428-Paper.pdf), Zhang and Chen, 2018.
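Although we do not implement link prediction in this tutorial, the scoring idea can be sketched in a few lines. The snippet below is a hypothetical example: the `gnn` encoder and `data` object are assumptions, and a real setup would additionally need negative sampling and proper edge splits.
```
import torch

def score_links(gnn, data, node_pairs):
    """Score candidate edges with a simple dot-product decoder.

    node_pairs: LongTensor of shape [2, num_pairs] with the node indices to score.
    """
    z = gnn(data.x, data.edge_index)        # node embeddings from any GNN encoder
    src, dst = node_pairs
    logits = (z[src] * z[dst]).sum(dim=-1)  # dot-product similarity per pair
    return torch.sigmoid(logits)            # close to 1 -> likely edge
```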
### Graph-level tasks: Graph classification
Finally, in this part of the tutorial, we will have a closer look at how to apply GNNs to the task of graph classification. The goal is to classify an entire graph instead of single nodes or edges. Therefore, we are also given a dataset of multiple graphs that we need to classify based on some structural graph properties. The most common task for graph classification is molecular property prediction, in which molecules are represented as graphs. Each atom is linked to a node, and edges in the graph are the bonds between atoms. For example, look at the figure below.
<center width="100%"><img src="https://github.com/phlippe/uvadlc_notebooks/blob/master/docs/tutorial_notebooks/tutorial7/molecule_graph.svg?raw=1" width="600px"></center>
On the left, we have an arbitrary, small molecule with different atoms, whereas the right part of the image shows the graph representation. The atom types are abstracted as node features (e.g. a one-hot vector), and the different bond types are used as edge features. For simplicity, we will neglect the edge attributes in this tutorial, but you can include them by using methods like the [Relational Graph Convolution](https://arxiv.org/abs/1703.06103), which uses a different weight matrix for each edge type.
The dataset we will use below is called the MUTAG dataset. It is a common small benchmark for graph classification algorithms, and contains 188 graphs with 18 nodes and 20 edges on average per graph. The graph nodes have 7 different labels/atom types, and the binary graph labels represent "their mutagenic effect on a specific gram negative bacterium" (the specific meaning of the labels is not too important here). The dataset is part of a large collection of different graph classification datasets, known as the [TUDatasets](https://chrsmrrs.github.io/datasets/), which is directly accessible via `torch_geometric.datasets.TUDataset` ([documentation](https://pytorch-geometric.readthedocs.io/en/latest/modules/datasets.html#torch_geometric.datasets.TUDataset)) in PyTorch Geometric. We can load the dataset below.
```
tu_dataset = torch_geometric.datasets.TUDataset(root=DATASET_PATH, name="MUTAG")
```
Let's look at some statistics for the dataset:
```
print("Data object:", tu_dataset.data)
print("Length:", len(tu_dataset))
print(f"Average label: {tu_dataset.data.y.float().mean().item():4.2f}")
```
The first line shows how the dataset stores different graphs. The nodes, edges, and labels of each graph are concatenated into one tensor, and the dataset stores the indices at which to split the tensors correspondingly. The length of the dataset is the number of graphs, and the "average label" denotes the fraction of graphs with label 1. As this value is close to 0.5, we have a relatively balanced dataset. Graph datasets are quite often very imbalanced, hence checking the class balance is always a good thing to do.
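For instance, one quick way to check the class balance is to count the graph labels directly (a small sketch using the dataset loaded above):
```
# Count how many graphs fall into each of the two classes.
print(torch.bincount(tu_dataset.data.y))
```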
Next, we will split our dataset into a training and a test part. Note that we do not use a validation set this time because of the small size of the dataset. Therefore, our model selection might overfit slightly to the test set due to the noise of the evaluation, but we still get an estimate of the performance on untrained data.
```
torch.manual_seed(42)
tu_dataset.shuffle()
train_dataset = tu_dataset[:150]
test_dataset = tu_dataset[150:]
```
When using a data loader, we encounter a problem with batching $N$ graphs. Each graph in the batch can have a different number of nodes and edges, and hence we would require a lot of padding to obtain a single tensor. PyTorch Geometric uses a different, more efficient approach: we can view the $N$ graphs in a batch as a single large graph with concatenated node and edge lists. As there are no edges between the $N$ graphs, running GNN layers on the large graph gives us the same output as running the GNN on each graph separately. This batching strategy is visualized below (figure credit - PyTorch Geometric team, [tutorial here](https://colab.research.google.com/drive/1I8a0DfQ3fI7Njc62__mVXUlcAleUclnb?usp=sharing#scrollTo=2owRWKcuoALo)).
<center width="100%"><img src="https://github.com/phlippe/uvadlc_notebooks/blob/master/docs/tutorial_notebooks/tutorial7/torch_geometric_stacking_graphs.png?raw=1" width="600px"></center>
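As a small illustration of this stacking (a toy sketch with two hypothetical graphs, not part of the MUTAG dataset):
```
import torch
from torch_geometric.data import Data, Batch

# Two toy graphs with 2 and 3 nodes
g1 = Data(x=torch.randn(2, 4), edge_index=torch.tensor([[0, 1], [1, 0]]))
g2 = Data(x=torch.randn(3, 4), edge_index=torch.tensor([[0, 1, 2], [1, 2, 0]]))

big = Batch.from_data_list([g1, g2])
print(big.num_nodes)    # 5 = 2 + 3
print(big.batch)        # tensor([0, 0, 1, 1, 1]) -> which graph each node belongs to
print(big.edge_index)   # g2's node indices are shifted by 2, so the graphs stay disconnected
```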
The adjacency matrix is zero for any pair of nodes that come from two different graphs, and otherwise follows the adjacency matrix of the individual graph. Luckily, this strategy is already implemented in PyTorch Geometric, and hence we can use the corresponding data loader:
```
graph_train_loader = geom_data.DataLoader(train_dataset, batch_size=64, shuffle=True)
graph_val_loader = geom_data.DataLoader(test_dataset, batch_size=64) # Additional loader if you want to change to a larger dataset
graph_test_loader = geom_data.DataLoader(test_dataset, batch_size=64)
```
Let's load a batch below to see the batching in action:
```
batch = next(iter(graph_test_loader))
print("Batch:", batch)
print("Labels:", batch.y[:10])
print("Batch indices:", batch.batch[:40])
```
We have 38 graphs stacked together for the test dataset. The batch indices, stored in `batch`, show that the first 12 nodes belong to the first graph, the next 22 to the second graph, and so on. These indices are important for performing the final prediction. To perform a prediction over a whole graph, we usually apply a pooling operation over all nodes after running the GNN model; here we use average pooling. Hence, we need to know which nodes should be included in which average pool. With this pooling defined, we can create our graph network below. Specifically, we re-use our class `GNNModel` from before, and simply add an average pool and a single linear layer for the graph prediction task.
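As a quick check of what the pooling produces (a small sketch using the test batch loaded above), the pooled tensor has one row per graph:
```
# Average-pool the raw node features per graph using the batch assignment vector.
pooled = geom_nn.global_mean_pool(batch.x, batch.batch)
print(pooled.shape)  # [number of graphs in the batch, number of node features]
```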
```
class GraphGNNModel(nn.Module):
def __init__(self, c_in, c_hidden, c_out, dp_rate_linear=0.5, **kwargs):
"""
Inputs:
c_in - Dimension of input features
c_hidden - Dimension of hidden features
c_out - Dimension of output features (usually number of classes)
dp_rate_linear - Dropout rate before the linear layer (usually much higher than inside the GNN)
kwargs - Additional arguments for the GNNModel object
"""
super().__init__()
self.GNN = GNNModel(c_in=c_in,
c_hidden=c_hidden,
c_out=c_hidden, # Not our prediction output yet!
**kwargs)
self.head = nn.Sequential(
nn.Dropout(dp_rate_linear),
nn.Linear(c_hidden, c_out)
)
def forward(self, x, edge_index, batch_idx):
"""
Inputs:
x - Input features per node
edge_index - List of vertex index pairs representing the edges in the graph (PyTorch geometric notation)
batch_idx - Index of batch element for each node
"""
x = self.GNN(x, edge_index)
x = geom_nn.global_mean_pool(x, batch_idx) # Average pooling
x = self.head(x)
return x
```
Finally, we can create a PyTorch Lightning module to handle the training. It is similar to the modules we have seen before and does nothing surprising in terms of training. As we have a binary classification task, we use the Binary Cross Entropy loss.
```
class GraphLevelGNN(pl.LightningModule):
def __init__(self, **model_kwargs):
super().__init__()
# Saving hyperparameters
self.save_hyperparameters()
self.model = GraphGNNModel(**model_kwargs)
self.loss_module = nn.BCEWithLogitsLoss() if self.hparams.c_out == 1 else nn.CrossEntropyLoss()
def forward(self, data, mode="train"):
x, edge_index, batch_idx = data.x, data.edge_index, data.batch
x = self.model(x, edge_index, batch_idx)
x = x.squeeze(dim=-1)
if self.hparams.c_out == 1:
preds = (x > 0).float()
data.y = data.y.float()
else:
preds = x.argmax(dim=-1)
loss = self.loss_module(x, data.y)
acc = (preds == data.y).sum().float() / preds.shape[0]
return loss, acc
def configure_optimizers(self):
optimizer = optim.AdamW(self.parameters(), lr=1e-2, weight_decay=0.0) # High lr because of small dataset and small model
return optimizer
def training_step(self, batch, batch_idx):
loss, acc = self.forward(batch, mode="train")
self.log('train_loss', loss)
self.log('train_acc', acc)
return loss
def validation_step(self, batch, batch_idx):
_, acc = self.forward(batch, mode="val")
self.log('val_acc', acc)
def test_step(self, batch, batch_idx):
_, acc = self.forward(batch, mode="test")
self.log('test_acc', acc)
```
Below we train the model on our dataset. It resembles the typical training functions we have seen so far.
```
def train_graph_classifier(model_name, **model_kwargs):
pl.seed_everything(42)
# Create a PyTorch Lightning trainer with the generation callback
root_dir = os.path.join(CHECKPOINT_PATH, "GraphLevel" + model_name)
os.makedirs(root_dir, exist_ok=True)
trainer = pl.Trainer(default_root_dir=root_dir,
callbacks=[ModelCheckpoint(save_weights_only=True, mode="max", monitor="val_acc")],
gpus=1 if str(device).startswith("cuda") else 0,
max_epochs=500,
progress_bar_refresh_rate=0)
trainer.logger._default_hp_metric = None # Optional logging argument that we don't need
# Check whether pretrained model exists. If yes, load it and skip training
pretrained_filename = os.path.join(CHECKPOINT_PATH, f"GraphLevel{model_name}.ckpt")
if os.path.isfile(pretrained_filename):
print("Found pretrained model, loading...")
model = GraphLevelGNN.load_from_checkpoint(pretrained_filename)
else:
pl.seed_everything(42)
model = GraphLevelGNN(c_in=tu_dataset.num_node_features,
c_out=1 if tu_dataset.num_classes==2 else tu_dataset.num_classes,
**model_kwargs)
trainer.fit(model, graph_train_loader, graph_val_loader)
model = GraphLevelGNN.load_from_checkpoint(trainer.checkpoint_callback.best_model_path)
# Test best model on validation and test set
train_result = trainer.test(model, test_dataloaders=graph_train_loader, verbose=False)
test_result = trainer.test(model, test_dataloaders=graph_test_loader, verbose=False)
result = {"test": test_result[0]['test_acc'], "train": train_result[0]['test_acc']}
return model, result
```
Finally, let's perform the training and testing. Feel free to experiment with different GNN layers, hyperparameters, etc.
```
model, result = train_graph_classifier(model_name="GraphConv",
c_hidden=256,
layer_name="GraphConv",
num_layers=3,
dp_rate_linear=0.5,
dp_rate=0.0)
print(f"Train performance: {100.0*result['train']:4.2f}%")
print(f"Test performance: {100.0*result['test']:4.2f}%")
```
The test performance shows that we obtain quite good scores on an unseen part of the dataset. It should be noted that, since we have been using the test set for validation as well, we might have overfitted slightly to this set. Nevertheless, the experiment shows us that GNNs can indeed be powerful for predicting the properties of graphs and/or molecules.
## Conclusion
In this tutorial, we have seen the application of neural networks to graph structures. We looked at how a graph can be represented (adjacency matrix or edge list), and discussed the implementation of common graph layers: GCN and GAT. The implementations showed the practical side of the layers, which is often easier than the theory. Finally, we experimented with different tasks at the node, edge, and graph level. Overall, we have seen that including graph information in the predictions can be crucial for achieving high performance. There are a lot of applications that benefit from GNNs, and the importance of these networks will likely increase in the coming years.
```
from tensorflow import keras
from tensorflow.keras import *
from tensorflow.keras.models import *
from tensorflow.keras.layers import *
from tensorflow.keras.regularizers import l2  # L2 regularization
import tensorflow as tf
import numpy as np
import pandas as pd
# 12-0.2
# 13-2.4
# 18-12.14
import pandas as pd
import numpy as np
normal = np.loadtxt(r'F:\ๅผ ่ๅธ่ฏพ้ขๅญฆไน ๅๅฎน\code\ๆฐๆฎ้\่ฏ้ชๆฐๆฎ(ๅๆฌๅๅ่ๅจๅๆฏๅจ)\2013.9.12-ๆชๅ็็ผ ็ปๅ\2013-9.12ๆฏๅจ\2013-9-12ๆฏๅจ-1250rmin-mat\1250rnormalvib4.txt', delimiter=',')
chanrao = np.loadtxt(r'F:\ๅผ ่ๅธ่ฏพ้ขๅญฆไน ๅๅฎน\code\ๆฐๆฎ้\่ฏ้ชๆฐๆฎ(ๅๆฌๅๅ่ๅจๅๆฏๅจ)\2013.9.17-ๅ็็ผ ็ปๅ\ๆฏๅจ\9-18ไธๅๆฏๅจ1250rmin-mat\1250r_chanraovib4.txt', delimiter=',')
print(normal.shape,chanrao.shape,"***************************************************")
data_normal=normal[18:20]    # take rows 18:20 (two rows of the signal)
data_chanrao=chanrao[18:20]  # take rows 18:20 (two rows of the signal)
print(data_normal.shape,data_chanrao.shape)
print(data_normal,"\r\n",data_chanrao,"***************************************************")
data_normal=data_normal.reshape(1,-1)
data_chanrao=data_chanrao.reshape(1,-1)
print(data_normal.shape,data_chanrao.shape)
print(data_normal,"\r\n",data_chanrao,"***************************************************")
# two signal classes: normal and chanrao (fault)
data_normal=data_normal.reshape(-1, 512)   # (1, 65536) -> (128, 512)
data_chanrao=data_chanrao.reshape(-1,512)
print(data_normal.shape,data_chanrao.shape)
import numpy as np
def yuchuli(data,label):#(4:1)(51:13)
    # shuffle the order of the samples
np.random.shuffle(data)
train = data[0:102,:]
test = data[102:128,:]
label_train = np.array([label for i in range(0,102)])
label_test =np.array([label for i in range(0,26)])
return train,test ,label_train ,label_test
def stackkk(a,b,c,d,e,f,g,h):
aa = np.vstack((a, e))
bb = np.vstack((b, f))
cc = np.hstack((c, g))
dd = np.hstack((d, h))
return aa,bb,cc,dd
x_tra0,x_tes0,y_tra0,y_tes0 = yuchuli(data_normal,0)
x_tra1,x_tes1,y_tra1,y_tes1 = yuchuli(data_chanrao,1)
tr1,te1,yr1,ye1=stackkk(x_tra0,x_tes0,y_tra0,y_tes0 ,x_tra1,x_tes1,y_tra1,y_tes1)
x_train=tr1
x_test=te1
y_train = yr1
y_test = ye1
# shuffle features and labels with the same permutation
state = np.random.get_state()
np.random.shuffle(x_train)
np.random.set_state(state)
np.random.shuffle(y_train)
state = np.random.get_state()
np.random.shuffle(x_test)
np.random.set_state(state)
np.random.shuffle(y_test)
# standardize the training and test sets (z-score)
def ZscoreNormalization(x):
"""Z-score normaliaztion"""
x = (x - np.mean(x)) / np.std(x)
return x
x_train=ZscoreNormalization(x_train)
x_test=ZscoreNormalization(x_test)
# print(x_test[0])
# reshape to the 4-D input (batch, 512, 1, 1) expected by the Conv2D model defined below
x_train = x_train.reshape(-1,512,1,1)
x_test = x_test.reshape(-1,512,1,1)
print(x_train.shape,x_test.shape)
def to_one_hot(labels,dimension=2):
results = np.zeros((len(labels),dimension))
for i,label in enumerate(labels):
results[i,label] = 1
return results
one_hot_train_labels = to_one_hot(y_train)
one_hot_test_labels = to_one_hot(y_test)
x = layers.Input(shape=[512,1,1])
# convolution layer
conv1 = layers.Conv2D(filters=16, kernel_size=(2, 1), activation='relu',padding='valid',name='conv1')(x)
# max-pooling layer
POOL1 = MaxPooling2D((2,1))(conv1)
# convolution layer
conv2 = layers.Conv2D(filters=32, kernel_size=(2, 1), activation='relu',padding='valid',name='conv2')(POOL1)
# max-pooling layer
POOL2 = MaxPooling2D((2,1))(conv2)
# dropout layer
Dropout=layers.Dropout(0.1)(POOL2 )
Flatten=layers.Flatten()(Dropout)
# fully connected layers
Dense1=layers.Dense(50, activation='relu')(Flatten)
Dense2=layers.Dense(2, activation='softmax')(Dense1)
model = keras.Model(x, Dense2)
model.summary()
# compile the model with loss, optimizer, and metrics
model.compile(loss='categorical_crossentropy',
optimizer='adam',metrics=['accuracy'])
import time
time_begin = time.time()
history = model.fit(x_train,one_hot_train_labels,
validation_split=0.1,
epochs=50,batch_size=10,
shuffle=True)
time_end = time.time()
time = time_end - time_begin
print('time:', time)
import time
time_begin = time.time()
score = model.evaluate(x_test,one_hot_test_labels, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])
time_end = time.time()
time = time_end - time_begin
print('time:', time)
# plot the loss and accuracy curves
import matplotlib.pyplot as plt
plt.plot(history.history['loss'],color='r')
plt.plot(history.history['val_loss'],color='g')
plt.plot(history.history['accuracy'],color='b')
plt.plot(history.history['val_accuracy'],color='k')
plt.title('model loss and accuracy')
plt.ylabel('loss / accuracy')
plt.xlabel('epoch')
plt.legend(['train_loss', 'test_loss','train_acc', 'test_acc'], loc='center right')
# plt.legend(['train_loss','train_acc'], loc='upper left')
#plt.savefig('1.png')
plt.show()
import matplotlib.pyplot as plt
plt.plot(history.history['loss'],color='r')
plt.plot(history.history['accuracy'],color='b')
plt.title('model loss and accuracy')
plt.ylabel('loss / accuracy')
plt.xlabel('epoch')
plt.legend(['train_loss', 'train_accuracy'], loc='center right')
plt.show()
```
```
import numpy as np
import matplotlib.pyplot as plt
import glob
from scipy.io import loadmat
from scipy.cluster.hierarchy import dendrogram, linkage
from sklearn.decomposition import PCA
# load fingerprints, and then write into a single matrix.
path2fings = '/home/ben/DATA_tmp/Geysers_3yrs_fingers/FNGRPRNTS/*'
finglist = glob.glob(path2fings)
N = len(finglist)
print(N)
f0 = loadmat(finglist[0])['A2']
print(type(f0))
fingflat = f0.flatten()
n_fingflat = len(fingflat)
print(n_fingflat)
Nsub = 300
inds_rand = np.random.randint(0,N,Nsub)
fingarray = np.zeros((Nsub,n_fingflat))
for i,irand in enumerate(inds_rand):
f0 = loadmat(finglist[irand])['A2']
fingflat = f0.flatten()
fingarray[i,:] = fingflat
#plt.plot(fingflat)
# PCA first:
X = fingarray
pca = PCA(n_components=10)
pca.fit(X)
print(pca.explained_variance_ratio_)
plt.plot(pca.explained_variance_ratio_)
plt.title('Explained variance ratio for PCs')
Xpcs = pca.fit_transform(X)
print(np.shape(Xpcs))
print(type(Xpcs))
```
# Linkage matrix
https://docs.scipy.org/doc/scipy/reference/generated/scipy.cluster.hierarchy.linkage.html#scipy.cluster.hierarchy.linkage <br>
<br>
A (n-1) by 4 matrix Z is returned. At the i-th iteration, clusters with indices Z[i, 0] and Z[i, 1] are combined to form cluster n+i. A cluster with an index less than n corresponds to one of the n original observations. The distance between clusters Z[i, 0] and Z[i, 1] is given by Z[i, 2]. The fourth value Z[i, 3] represents the number of original observations in the newly formed cluster.
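As a tiny worked example (toy 1-D data, independent of the fingerprints), a row of Z can be read as follows:
```
# Minimal sketch: linkage on four 1-D points and how to interpret the rows of Z.
import numpy as np
from scipy.cluster.hierarchy import linkage

pts = np.array([[0.0], [0.1], [5.0], [5.2]])
Z = linkage(pts, 'single')
print(Z)
# Each row: [idx_a, idx_b, distance, n_observations_in_new_cluster].
# Indices >= 4 refer to clusters formed by earlier rows (row 0 creates cluster 4, etc.).
```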
```
# # do the hierarchical clustering:
# Xarray-like of shape (n_samples, n_features)
print(np.shape(Xpcs))
fig, axes = plt.subplots(4,1, figsize=(6,18))
linkage_list = ['average','complete','ward','single']
for i_lk, linktype in enumerate(linkage_list):
linkage_mat = linkage(Xpcs, linktype)
print(np.shape(linkage_mat))
this_ax = axes[i_lk]
dendrogram(linkage_mat, ax=this_ax, p=20, truncate_mode='level', orientation='top', distance_sort='descending')
this_ax.set_title(linktype)
print(linkage_mat[0:6,0:4])
print(linkage_mat[-6:-1,0:4])
fig, axes = plt.subplots(3,1,figsize=(8,12))
axes[0].plot(linkage_mat[:,0],'b-')
axes[0].plot(linkage_mat[:,1], 'r-')
axes[0].set_title('Cols 1,2: indexes of the data samples ')
axes[1].plot(linkage_mat[:,2])
axes[1].set_title('Col 3: distance between the two samples')
axes[2].plot(linkage_mat[:,3])
axes[2].set_title('Col 4: number of original observations ')
#plt.figure(figsize=(15, 8))
#,show_leaf_counts=True)
# labels=labelList,
# distance_sort='descending',
# show_leaf_counts=True)
#plt.show()
```
# Agglomerative Clustering
model = AgglomerativeClustering(distance_threshold=0, n_clusters=None) <br>
https://scikit-learn.org/stable/auto_examples/cluster/plot_agglomerative_dendrogram.html <br>
https://jbhender.github.io/Stats506/F18/GP/Group10.html <br>
```
from sklearn.cluster import AgglomerativeClustering
X = Xpcs
cluster = AgglomerativeClustering(n_clusters=6, affinity='euclidean', linkage='ward')
cluster.fit_predict(X)
cluster_list = cluster.labels_
print(cluster_list)
print(len(cluster_list))
```
# SSD300 Training Tutorial
This tutorial explains how to train an SSD300 on the Pascal VOC datasets. The preset parameters reproduce the training of the original SSD300 "07+12" model. Training SSD512 works similarly, so there's no extra tutorial for that. The same goes for training on other datasets.
You can find a summary of a full training here to get an impression of what it should look like:
[SSD300 "07+12" training summary](https://github.com/pierluigiferrari/ssd_keras/blob/master/training_summaries/ssd300_pascal_07%2B12_training_summary.md)
```
from keras.optimizers import Adam, SGD
from keras.callbacks import Callback, ModelCheckpoint, LearningRateScheduler, TerminateOnNaN, CSVLogger, EarlyStopping, TensorBoard
from keras import backend as K
from keras.models import load_model
from math import ceil
import numpy as np
from matplotlib import pyplot as plt
from keras.models import Model
from matplotlib import pyplot as plt
from keras.preprocessing import image
from imageio import imread
from models.keras_ssd300_center import ssd_300
from keras_loss_function.keras_ssd_loss_mod import SSDLoss
from keras_loss_function.keras_ssd_loss_center import SSDLoss_center
from keras_layers.keras_layer_AnchorBoxes import AnchorBoxes
from keras_layers.keras_layer_DecodeDetections import DecodeDetections
from keras_layers.keras_layer_DecodeDetectionsFast import DecodeDetectionsFast
from keras_layers.keras_layer_L2Normalization import L2Normalization
from ssd_encoder_decoder.ssd_input_encoder_mod import SSDInputEncoder
from ssd_encoder_decoder.ssd_output_decoder import decode_detections, decode_detections_fast
from data_generator.object_detection_2d_data_generator import DataGenerator
from data_generator.object_detection_2d_geometric_ops import Resize_Modified
from data_generator.object_detection_2d_photometric_ops import ConvertTo3Channels_Modified
from data_generator.data_augmentation_chain_original_ssd import SSDDataAugmentation_modified
from data_generator.object_detection_2d_geometric_ops import Resize
from data_generator.object_detection_2d_photometric_ops import ConvertTo3Channels
from data_generator.data_augmentation_chain_original_ssd import SSDDataAugmentation
from data_generator.object_detection_2d_misc_utils import apply_inverse_transforms
from bounding_box_utils.bounding_box_utils import iou, convert_coordinates
from ssd_encoder_decoder.matching_utils import match_bipartite_greedy, match_multi
import random
np.set_printoptions(precision=20)
import tensorflow as tf
np.random.seed(1337)
%matplotlib inline
```
## 0. Preliminary note
All places in the code where you need to make any changes are marked `TODO` and explained accordingly. All code cells that don't contain `TODO` markers just need to be executed.
## 1. Set the model configuration parameters
```
img_height = 300 # Height of the model input images
img_width = 600 # Width of the model input images
img_channels = 3 # Number of color channels of the model input images
mean_color = [123, 117, 104] # The per-channel mean of the images in the dataset. Do not change this value if you're using any of the pre-trained weights.
swap_channels = [2, 1, 0] # The color channel order in the original SSD is BGR, so we'll have the model reverse the color channel order of the input images.
n_classes = 1 # Number of positive classes, e.g. 20 for Pascal VOC, 80 for MS COCO
scales_pascal = [0.1, 0.2, 0.37, 0.54, 0.71, 0.88, 1.05] # The anchor box scaling factors used in the original SSD300 for the Pascal VOC datasets
scales_coco = [0.07, 0.15, 0.33, 0.51, 0.69, 0.87, 1.05] # The anchor box scaling factors used in the original SSD300 for the MS COCO datasets
scales = scales_pascal
aspect_ratios = [[1.0, 2.0, 0.5],
[1.0, 2.0, 0.5, 3.0, 1.0/3.0],
[1.0, 2.0, 0.5, 3.0, 1.0/3.0],
[1.0, 2.0, 0.5, 3.0, 1.0/3.0],
[1.0, 2.0, 0.5],
[1.0, 2.0, 0.5]] # The anchor box aspect ratios used in the original SSD300; the order matters
two_boxes_for_ar1 = True # Whether or not to generate two anchor boxes for aspect ratio 1
steps = [8, 16, 32, 64, 100, 300] # The space between two adjacent anchor box center points for each predictor layer.
offsets = [0.5, 0.5, 0.5, 0.5, 0.5, 0.5] # The offsets of the first anchor box center points from the top and left borders of the image as a fraction of the step size for each predictor layer.
clip_boxes = False # Whether or not to clip the anchor boxes to lie entirely within the image boundaries
variances = [0.1, 0.1, 0.2, 0.2] # The variances by which the encoded target coordinates are divided as in the original implementation
normalize_coords = True
```
## 2. Build or load the model
You will want to execute either of the two code cells in the subsequent two sub-sections, not both.
```
# 1: Build the Keras model.
K.clear_session() # Clear previous models from memory.
model = ssd_300(image_size=(img_height, img_width, img_channels),
n_classes=n_classes,
mode='training',
l2_regularization=0.0005,
scales=scales,
aspect_ratios_per_layer=aspect_ratios,
two_boxes_for_ar1=two_boxes_for_ar1,
steps=steps,
offsets=offsets,
clip_boxes=clip_boxes,
variances=variances,
normalize_coords=normalize_coords,
subtract_mean=mean_color,
swap_channels=swap_channels)
# 2: Load some weights into the model.
# TODO: Set the path to the weights you want to load.
weights_path = 'weights/VGG_ILSVRC_16_layers_fc_reduced.h5'
model.load_weights(weights_path, by_name=True)
# 3: Instantiate an optimizer and the SSD loss functions and compile the model.
# The original Caffe implementation uses SGD; this notebook uses the Adam
# optimizer instantiated below.
model.summary()
def gt_rem(pred, gt):
val = tf.subtract(tf.shape(pred)[1], tf.shape(gt)[1],name="gt_rem_subtract")
gt = tf.slice(gt, [0, 0, 0], [1, tf.shape(pred)[1], 18],name="rem_slice")
return gt
def gt_add(pred, gt):
#add to gt
val = tf.subtract(tf.shape(pred)[1], tf.shape(gt)[1],name="gt_add_subtract")
ext = tf.slice(gt, [0, 0, 0], [1, val, 18], name="add_slice")
gt = K.concatenate([ext,gt], axis=1)
return gt
def equalalready(gt, pred): return pred
def make_equal(pred, gt):
equal_tensor = tf.cond(tf.shape(pred)[1] < tf.shape(gt)[1], lambda: gt_rem(pred, gt), lambda: gt_add(pred, gt), name="make_equal_cond")
return equal_tensor
def matcher(y_true_1,y_true_2,y_pred_1,y_pred_2, bsz):
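    # For each sample in the batch: drop ground-truth rows flagged with the padding
    # value 99, match predictions and ground truth via IoU-based greedy bipartite
    # matching, gather the matched rows from the second prediction tensor, and pad or
    # trim the second ground-truth tensor so both end up with the same number of boxes.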
pred = 0
gt = 0
for i in range(bsz):
filterer = tf.where(tf.not_equal(y_true_1[i,:,-4],99))
y_true_new = tf.gather_nd(y_true_1[i,:,:],filterer)
y_true_new = tf.expand_dims(y_true_new, 0)
iou_out = tf.py_func(iou, [y_pred_1[i,:,-16:-12],tf.convert_to_tensor(y_true_new[i,:,-16:-12])], tf.float64, name="iou_out")
bipartite_matches = tf.py_func(match_bipartite_greedy, [iou_out], tf.int64, name="bipartite_matches")
out = tf.gather(y_pred_2[i,:,:], [bipartite_matches], axis=0, name="out")
filterer_2 = tf.where(tf.not_equal(y_true_2[i,:,-4],99))
y_true_2_new = tf.gather_nd(y_true_2[i,:,:],filterer_2)
y_true_2_new = tf.expand_dims(y_true_2_new, 0)
box_comparer = tf.reduce_all(tf.equal(tf.shape(out)[1], tf.shape(y_true_2_new)[1]), name="box_comparer")
y_true_2_equal = tf.cond(box_comparer, lambda: equalalready(out, y_true_2_new), lambda: make_equal(out, y_true_2_new), name="y_true_cond")
if i != 0:
pred = K.concatenate([pred,out], axis=-1)
gt = K.concatenate([gt,y_true_2_equal], axis=0)
else:
pred = out
gt = y_true_2_equal
return pred, gt
# ssd_loss3 = SSDLoss_proj(neg_pos_ratio=3, alpha=1.0)
# ssd_loss4 = SSDLoss_proj(neg_pos_ratio=3, alpha=1.0)
def Accuracy(y_true, y_pred):
'''Calculates the mean accuracy rate across all predictions for
multiclass classification problems.
'''
print("y_pred: ",y_pred)
print("y_true: ",y_true)
y_true = y_true[:,:,:18]
y_pred = y_pred[:,:,:18]
return K.mean(K.equal(K.argmax(y_true[:,:,:-4], axis=-1),
K.argmax(y_pred[:,:,:-4], axis=-1)))
def Accuracy_Proj(y_pred, y_true):
#add to gt
y_true_1 = y_true[:,:,:18]
y_pred_1 = y_pred[:,:,:18]
y_true_2 = y_true[:,:,18:]
y_pred_2 = y_pred[:,:,18:]
acc = tf.constant(0)
y_pred, y_true = matcher(y_true_1,y_pred_1,y_true_2,y_pred_2,1)
return K.mean(K.equal(K.argmax(y_true[:,:,:-4], axis=-1),
K.argmax(y_pred[:,:,:-4], axis=-1)))
adam = Adam(lr=0.0001, beta_1=0.9, beta_2=0.999, epsilon=1e-08, decay=0.0)
ssd_loss1 = SSDLoss(neg_pos_ratio=3, alpha=1.0)
ssd_loss2 = SSDLoss(neg_pos_ratio=3, alpha=1.0)
ssd_loss3 = SSDLoss_center(neg_pos_ratio=3, alpha=1.0)
ssd_loss4 = SSDLoss_center(neg_pos_ratio=3, alpha=1.0)
losses = {
"predictions_1": ssd_loss1.compute_loss,
"predictions_2": ssd_loss2.compute_loss,
"predictions_1_proj": ssd_loss3.compute_loss,
"predictions_2_proj": ssd_loss4.compute_loss
}
lossWeights = {"predictions_1": 1.0,"predictions_2": 1.0,"predictions_1_proj": 1.0,"predictions_2_proj": 1.0}
# MetricstDict = {"predictions_1": Accuracy,"predictions_2": Accuracy, "predictions_1_proj": Accuracy_Proj,"predictions_2_proj": Accuracy_Proj}
# lossWeights = {"predictions_1": 1.0,"predictions_2": 1.0}
MetricstDict = {"predictions_1": Accuracy,"predictions_2": Accuracy}
model.compile(optimizer=adam, loss=losses, loss_weights=lossWeights, metrics=MetricstDict)
# model.compile(optimizer=adam, loss=losses, loss_weights=lossWeights)
model.summary()
```
### 2.2 Load a previously created model
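If you saved a full model in an earlier run (for example via the `ModelCheckpoint` callback defined further below), you can load it here instead of building a new one. The following is only a minimal sketch with a hypothetical checkpoint path; it assumes `load_model` and the custom layers (`AnchorBoxes`, `L2Normalization`) are imported as in the set-up above, and that after loading with `compile=False` you re-compile the model exactly as in section 2.1.
```
# Minimal sketch for loading a previously saved model (hypothetical path).
K.clear_session() # Clear previous models from memory.
# TODO: Set the path to the checkpoint you want to load.
model_path = 'checkpoints/some_previous_checkpoint.h5'
# Load architecture and weights only; pass the custom layers so Keras can
# deserialize them, then re-compile with the same losses/metrics as in 2.1.
model = load_model(model_path,
                   compile=False,
                   custom_objects={'AnchorBoxes': AnchorBoxes,
                                   'L2Normalization': L2Normalization})
```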
## 3. Set up the data generators for the training
```
# train_dataset = DataGenerator(load_images_into_memory=False, hdf5_dataset_path='dataset_pascal_voc_07+12_trainval.h5')
# val_dataset = DataGenerator(load_images_into_memory=False, hdf5_dataset_path='dataset_pascal_voc_07_test.h5')
train_dataset = DataGenerator(load_images_into_memory=False, hdf5_dataset_path=None)
val_dataset = DataGenerator(load_images_into_memory=False, hdf5_dataset_path=None)
val_dataset_1 = DataGenerator(load_images_into_memory=False, hdf5_dataset_path=None)
VOC_2007_images_dir = '../datasets/Images/'
# The directories that contain the annotations.
VOC_2007_annotations_dir = '../datasets/VOC/Pasadena/Annotations_Multi/'
# VOC_2007_trainval_image_set_filename = '../datasets/VOC/Pasadena/ImageSets/Main/reid/train.txt'
# VOC_2007_val_image_set_filename = '../datasets/VOC/Pasadena/ImageSets/Main/reid/val.txt'
# VOC_2007_test_image_set_filename = '../datasets/VOC/Pasadena/ImageSets/Main/reid/test.txt'
VOC_2007_trainval_image_set_filename = '../datasets/VOC/Pasadena/ImageSets/Main/reid_neu/train_few.txt'
VOC_2007_val_image_set_filename = '../datasets/VOC/Pasadena/ImageSets/Main/reid_neu/val_few.txt'
VOC_2007_test_image_set_filename = '../datasets/VOC/Pasadena/ImageSets/Main/reid_neu/test_few.txt'
# The paths to the image sets.
# VOC_2007_trainval_image_set_filename = '../datasets/VOC/Pasadena/ImageSets/Main/siamese/trainval_sia.txt'
# VOC_2007_val_image_set_filename = '../datasets/VOC/Pasadena/ImageSets/Main/siamese/val_sia.txt'
# VOC_2007_test_image_set_filename = '../datasets/VOC/Pasadena/ImageSets/Main/siamese/test_sia.txt'
# VOC_2007_trainval_image_set_filename = '../datasets/VOC/Pasadena/ImageSets/Main/siamese/trainval_sia_same.txt'
# VOC_2007_val_image_set_filename = '../datasets/VOC/Pasadena/ImageSets/Main/siamese/val_sia_same.txt'
# VOC_2007_test_image_set_filename = '../datasets/VOC/Pasadena/ImageSets/Main/siamese/test_sia_same.txt'
# VOC_2007_trainval_image_set_filename = '../datasets/VOC/Pasadena/ImageSets/Main/siamese/trainval_sia_sub.txt'
# VOC_2007_val_image_set_filename = '../datasets/VOC/Pasadena/ImageSets/Main/siamese/val_sia_sub.txt'
# VOC_2007_test_image_set_filename = '../datasets/VOC/Pasadena/ImageSets/Main/siamese/test_sia_sub.txt'
# VOC_2007_trainval_image_set_filename = '../datasets/VOC/Pasadena/ImageSets/Main/siamese/trainval_one.txt'
# VOC_2007_val_image_set_filename = '../datasets/VOC/Pasadena/ImageSets/Main/siamese/val_one.txt'
# VOC_2007_test_image_set_filename = '../datasets/VOC/Pasadena/ImageSets/Main/siamese/test_one.txt'
# The XML parser needs to know what object class names to look for and in which order to map them to integers.
classes = ['background',
'tree']
train_dataset.parse_xml(images_dirs=[VOC_2007_images_dir],
image_set_filenames=[VOC_2007_trainval_image_set_filename],
annotations_dirs=[VOC_2007_annotations_dir],
classes=classes,
include_classes='all',
exclude_truncated=False,
exclude_difficult=False,
ret=False)
val_dataset.parse_xml(images_dirs=[VOC_2007_images_dir],
image_set_filenames=[VOC_2007_val_image_set_filename],
annotations_dirs=[VOC_2007_annotations_dir],
classes=classes,
include_classes='all',
exclude_truncated=False,
exclude_difficult=True,
ret=False)
batch_size = 4
ssd_data_augmentation = SSDDataAugmentation_modified(img_height=img_height,
img_width=img_width,
background=mean_color)
# For the validation generator:
convert_to_3_channels = ConvertTo3Channels_Modified()
resize = Resize_Modified(height=img_height, width=img_width)
# 5: Instantiate an encoder that can encode ground truth labels into the format needed by the SSD loss function.
# The encoder constructor needs the spatial dimensions of the model's predictor layers to create the anchor boxes.
predictor_sizes = [model.get_layer('conv4_3_norm_mbox_conf__1').output_shape[1:3],
model.get_layer('fc7_mbox_conf__1').output_shape[1:3],
model.get_layer('conv6_2_mbox_conf__1').output_shape[1:3],
model.get_layer('conv7_2_mbox_conf__1').output_shape[1:3],
model.get_layer('conv8_2_mbox_conf__1').output_shape[1:3],
model.get_layer('conv9_2_mbox_conf__1').output_shape[1:3]]
ssd_input_encoder = SSDInputEncoder(img_height=img_height,
img_width=img_width,
n_classes=n_classes,
predictor_sizes=predictor_sizes,
scales=scales,
aspect_ratios_per_layer=aspect_ratios,
two_boxes_for_ar1=two_boxes_for_ar1,
steps=steps,
offsets=offsets,
clip_boxes=clip_boxes,
variances=variances,
matching_type='multi',
pos_iou_threshold=0.5,
neg_iou_limit=0.5,
normalize_coords=normalize_coords)
# 6: Create the generator handles that will be passed to Keras' `fit_generator()` function.
train_generator = train_dataset.generate(batch_size=batch_size,
shuffle=False,
transformations=[ssd_data_augmentation],
label_encoder=ssd_input_encoder,
returns={'processed_images',
'encoded_labels'},
keep_images_without_gt=False)
val_generator = val_dataset.generate(batch_size=batch_size,
shuffle=False,
transformations=[convert_to_3_channels,
resize],
label_encoder=ssd_input_encoder,
returns={'processed_images',
'encoded_labels'},
keep_images_without_gt=False)
# Get the number of samples in the training and validations datasets.
train_dataset_size = train_dataset.get_dataset_size()
val_dataset_size = val_dataset.get_dataset_size()
print("Number of images in the training dataset:\t{:>6}".format(train_dataset_size))
print("Number of images in the validation dataset:\t{:>6}".format(val_dataset_size))
```
## 4. Set the remaining training parameters
We've already chosen an optimizer and set the batch size above; now let's set the remaining training parameters. I'll set one epoch to consist of 1,000 training steps. The next code cell defines a learning rate schedule that replicates the schedule of the original Caffe implementation for the training of the SSD300 Pascal VOC "07+12" model: that model was trained for 120,000 steps with a learning rate of 0.001 for the first 80,000 steps, 0.0001 for the next 20,000 steps, and 0.00001 for the last 20,000 steps. If you're training on a different dataset, define the learning rate schedule however you see fit.
I'll set only a few essential Keras callbacks below; feel free to add more, e.g. for TensorBoard summaries. We obviously need the learning rate scheduler, and we want to save the best models during training. It also makes sense to continuously stream the training history to a CSV log file after every epoch: if the training terminated with an exception, or the kernel of this Jupyter notebook died, we would otherwise lose the entire history of the trained epochs. Finally, we'll add a callback that terminates training if the loss becomes `NaN`. Depending on the optimizer you use, the loss can become `NaN` during the first iterations of training; in later iterations it's less of a risk. For example, I've never seen a `NaN` loss when training SSD with the Adam optimizer, but I have seen one a couple of times during the very first few hundred training steps when training a new model with an SGD optimizer.
```
# Define a learning rate schedule.
def lr_schedule(epoch):
if epoch < 80:
return 0.001
elif epoch < 100:
return 0.0001
else:
return 0.00001
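# Note: with about 1,000 training steps per epoch, the epoch-based schedule above
# mirrors the 80k/20k/20k-step schedule (0.001 / 0.0001 / 0.00001) described earlier.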
class prediction_history(Callback):
def __init__(self):
print("Predictor")
def on_epoch_end(self, epoch, logs={}):
ssd_loss1 = SSDLoss(neg_pos_ratio=3, alpha=1.0)
predder = np.load('outputs/predder.npy')
bX = predder[0][0]
bZ = predder[0][1]
gX = predder[0][2]
gZ = predder[0][3]
intermediate_layer_model = Model(inputs=model.input,
outputs=model.get_layer("predictions_1").output)
intermediate_layer_model_1 = Model(inputs=model.input,
outputs=model.get_layer("predictions_1_proj").output)
intermediate_layer_model_2 = Model(inputs=model.input,
outputs=model.get_layer("predictions_2").output)
intermediate_layer_model_3 = Model(inputs=model.input,
outputs=model.get_layer("predictions_2_proj").output)
intermediate_output = intermediate_layer_model.predict([bX,bZ,gX,gZ])
intermediate_output_1 = intermediate_layer_model_1.predict([bX,bZ,gX,gZ])
intermediate_output_2 = intermediate_layer_model_2.predict([bX,bZ,gX,gZ])
intermediate_output_3 = intermediate_layer_model_3.predict([bX,bZ,gX,gZ])
np.save('outputs/predictions_1_'+str(epoch)+'.npy',intermediate_output)
np.save('outputs/predictions_1_proj_'+str(epoch)+'.npy',intermediate_output_1)
np.save('outputs/predictions_2_'+str(epoch)+'.npy',intermediate_output_2)
np.save('outputs/predictions_2_proj_'+str(epoch)+'.npy',intermediate_output_3)
# Define model callbacks.
# TODO: Set the filepath under which you want to save the model.
model_checkpoint = ModelCheckpoint(filepath='checkpoints/double_ssd300_pascal_07+12_epoch-{epoch:02d}_loss-{loss:.4f}_val_loss-{val_loss:.4f}.h5',
monitor='val_loss',
verbose=1,
save_best_only=True,
save_weights_only=False,
mode='auto',
period=1)
#model_checkpoint.best =
tbCallBack = TensorBoard(log_dir='./Graph', histogram_freq=0, write_graph=True, write_images=True)
csv_logger = CSVLogger(filename='ssd300_pascal_07+12_training_log.csv',
separator=',',
append=True)
learning_rate_scheduler = LearningRateScheduler(schedule=lr_schedule)
early_stopping = EarlyStopping(monitor='val_loss',
min_delta=0,
patience=1,
verbose=0, mode='auto')
terminate_on_nan = TerminateOnNaN()
# printer_callback = prediction_history()
# custom_los = custom_loss()
callbacks = [
model_checkpoint,
# csv_logger,
# custom_los,
learning_rate_scheduler,
early_stopping,
terminate_on_nan,
# printer_callback,
tbCallBack
]
```
## 5. Train
```
# If you're resuming a previous training, set `initial_epoch` and `final_epoch` accordingly.
initial_epoch = 0
final_epoch = 500
steps_per_epoch = 1000
# history = model.fit_generator(generator=train_generator,
# steps_per_epoch=ceil(train_dataset_size/batch_size),
# epochs=final_epoch,
# callbacks=callbacks,
# verbose=1,
# validation_data=val_generator,
# validation_steps=ceil(val_dataset_size/batch_size),
# initial_epoch=initial_epoch)
history = model.fit_generator(generator=train_generator,
steps_per_epoch=ceil(train_dataset_size/batch_size),
epochs=final_epoch,
callbacks=callbacks,
verbose=1,
validation_data=val_generator,
validation_steps=ceil(val_dataset_size/batch_size),
initial_epoch=initial_epoch)
```
```
import re
import numpy as np
import pandas as pd
non_decimal = re.compile(r'[^\d.]+')
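# Matches runs of characters that are not digits or a decimal point; used below
# to strip stray characters before casting fields to int/float.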
data_dir = '../data/'
columns = ['District', 'Route', 'County', 'Postmile', 'Description',
'Back_Peak_Hourly', 'Back_Peak_Monthly', 'Back_AADT',
'Ahead_Peak_Hourly', 'Ahead_Peak_Monthly', 'Ahead_AADT',
'Back_Latitude', 'Back_Longitude', 'Ahead_Latitude', 'Ahead_Longitude', 'Year']
df_aadt = pd.DataFrame(columns=columns)
for year in range(2010, 2017):
df_temp = pd.DataFrame.from_csv(data_dir + 'aadt/AADT%d.csv' % year,
index_col=None, header=0)
df_temp.drop_duplicates(inplace=True)
df_temp['Year'] = year
drop_cols = [u'AbsPM_S_W,N,13,11', u'AbsPM_N_E,N,13,11']
for col in drop_cols:
if col in df_temp.columns:
df_temp.drop([col], axis=1, inplace=True)
df_temp.columns = columns
df_aadt = df_aadt.append(df_temp)
df_aadt.Description.drop('BREAK IN ROUTE', inplace=True)
df_aadt.fillna(0, inplace=True)
df_aadt.rename_axis({'Ahead_Latitude': 'Latitude', 'Ahead_Longitude': 'Longitude'}, axis=1, inplace=True)
df_aadt.drop(['Back_Latitude', 'Back_Longitude'], axis=1, inplace=True)
df_aadt['Segment_ID'] = range(len(df_aadt))
seg_def_year = df_aadt.Year.max()
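# The most recent year defines the reference segments: rows from earlier years get a
# placeholder Segment_Num of 0 here and are matched to a latest-year segment later
# in get_nearest_segment().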
c1 = np.zeros((len(df_aadt[df_aadt.Year < seg_def_year]), 1))
c2 = np.array(range(1, len(df_aadt[df_aadt.Year == seg_def_year]) + 1))[:, np.newaxis]
df_aadt['Segment_Num'] = np.concatenate((c1, c2)).astype(np.int32)
print len(df_aadt)
df_aadt.head()
def convert_to_int(x):
try:
return int(x)
except:
s = non_decimal.sub('', str(x))
return int(s) if len(s) > 0 else 0
def convert_to_float(x):
try:
return float(x)
except:
s = non_decimal.sub('', str(x))
return float(s) if len(s) > 0 else 0.0
df_aadt.District = df_aadt.District.apply(convert_to_int)
df_aadt.Year = df_aadt.Year.apply(convert_to_int)
df_aadt.Route = df_aadt.Route.apply(convert_to_int)
df_aadt.Postmile = df_aadt.Postmile.apply(convert_to_float)
df_aadt.Back_Peak_Hourly = df_aadt.Back_Peak_Hourly.apply(convert_to_int)
cols = [
'Back_Peak_Hourly', 'Back_Peak_Monthly', 'Back_AADT',
'Ahead_Peak_Hourly', 'Ahead_Peak_Monthly', 'Ahead_AADT',
]
for col in cols:
df_aadt[col] = df_aadt[col].astype(np.int64)
print len(df_aadt)
df_aadt.head()
def get_df_segments(df):
df_segments = {}
# Start by looking only at segments for a given year
for year in df.Year.unique():
df_segments[year] = {}
df_year = df[df.Year == year]
# Then filter the collisions to their appropriate routes
for route in df_year.Route.unique():
df_segments[year][route] = {}
df_route = df_year[df_year.Route == route]
# Then filter them to each county to match with the Postmile
for county in df_route.County.unique():
df_segments[year][route][county] = df_route[df_route.County == county]
return df_segments
df_segments = get_df_segments(df_aadt)
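# For each row, the boundary is the start postmile of the next segment on the same
# year/route/county; 1000 is used as a sentinel when there is no segment further along.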
def get_postmile_boundary(row):
# Grab identifying information to locate corresponding segment
year = row.Year
route = row.Route
county = row.County
df_segment = df_segments[year][route][county]
df_border = df_segment[df_segment.Postmile > row.Postmile]
return df_border.Postmile.min() if len(df_border) > 0 else 1000
df_aadt['Postmile_Boundary'] = df_aadt.apply(get_postmile_boundary, axis=1)
def get_postmile_distance(row):
p1 = row.Postmile
p2 = row.Postmile_Boundary
if p2 < 1000:
return p2 - p1
year = row.Year
route = row.Route
county = row.County
df_seg = df_aadt[(df_aadt.Year == year)
& (df_aadt.Route == route)
& (df_aadt.County == county)
& (df_aadt.Postmile_Boundary < 1000)]
return (df_seg.Postmile_Boundary - df_seg.Postmile).mean()
df_aadt['Postmile_Distance'] = df_aadt.apply(get_postmile_distance, axis=1)
def get_nearest_segment(row):
year = row.Year
route = row.Route
county = row.County
if year == seg_def_year:
return row.Segment_Num
if not route in df_segments[seg_def_year] or not county in df_segments[seg_def_year][route]:
return 0
df_gps = df_segments[seg_def_year][route][county]
df_postmile = df_gps[df_gps.Postmile == row.Postmile]
if len(df_postmile) == 0:
return 0
gps = zip(df_postmile.Latitude - row.Latitude, df_postmile.Longitude - row.Longitude)
gps_norm = np.linalg.norm(gps, axis=1)
if gps_norm.min() > 1e-3:
return 0
i_min = gps_norm.argmin()
return df_postmile.iloc[i_min].Segment_Num
df_aadt['Segment_Num'] = df_aadt.apply(get_nearest_segment, axis=1)
print df_aadt.columns
cols = [
u'Segment_ID', u'Segment_Num',
u'Year', u'Route', u'County', u'District',
u'Postmile', u'Postmile_Boundary', u'Postmile_Distance',
u'Latitude', u'Longitude',
u'Back_Peak_Hourly', u'Back_Peak_Monthly', u'Back_AADT',
u'Ahead_Peak_Hourly', u'Ahead_Peak_Monthly', u'Ahead_AADT',
]
df_aadt = df_aadt[cols]
df_aadt.head()
df_aadt.to_csv(data_dir + 'df_aadt.csv')
```
# United States COVID-19
This notebook needs to be reworked now that https://covidtracking.com/ no longer provides US COVID data.
**Table of Contents**
1. Python set-up
2. Get US population data
3. Get the COVID data
4. Semilog plot of US States
5. Plot of new vs cumulative
6. Regional per capita
7. Growth factor
8. Plot new cases: raw and smoothed
9. Bring it all together
10. Finished
## Python set-up
```
# imports
import sys
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import datetime
from pathlib import Path
import math
#pandas
pd.options.display.max_rows = 999
pd.options.display.max_columns = 999
# scraping
from selenium.webdriver import Chrome
import re
# local imports
sys.path.append(r'../bin')
import plotstuff as ps
# plotting
plt.style.use('ggplot')
%matplotlib inline
# save location
CHART_DIRECTORY = '../charts'
Path(CHART_DIRECTORY).mkdir(parents=True, exist_ok=True)
CHART_DIRECTORY += '/zzUS-'
```
## Get US population data
```
wiki = 'https://simple.wikipedia.org/wiki/List_of_U.S._states_by_population'
browser = Chrome('../Chrome/chromedriver')
browser.get(wiki)
html = browser.find_element_by_xpath('//table')
html = '<table>' + html.get_attribute('innerHTML') + '</table>'
html = re.sub('<span[^>]*>Sort Table[^/]*/span>', '', html)
population = pd.read_html(html)[0]
population = population[['State', 'Population estimate, July 1, 2019[2]']]
population = population.set_index('State')
population = population[population.columns[0]]
population = population[:-4] # drop various totals
population = population.rename({'U.S. Virgin Islands': 'Virgin Islands'})
browser.quit()
```
## Get the COVID data
```
source = 'https://covidtracking.com/'
url = source + 'api/v1/states/daily.json'
data = pd.read_json(url)
source = 'Source: ' + source
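# Pivot the long daily JSON into wide per-state tables of cumulative positives
# and cumulative deaths, indexed by date.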
cases = data.pivot(index='date', columns='state', values='positive').astype(float)
cases.index = pd.DatetimeIndex(cases.index.astype(str))
deaths = data.pivot(index='date', columns='state', values='death').astype(float)
deaths.index = pd.DatetimeIndex(deaths.index.astype(str))
```
## Semilog plot of US States
```
def plot_semi_log_trajectory(data, mode, threshold, source):
styles = ['-'] #, '--', '-.', ':'] # 4 lines
markers = list('PXo^v<>D*pH.d') # 13 markers
colours = ['maroon', 'brown', 'olive', 'red',
'darkorange', 'darkgoldenrod', 'green',
'blue', 'purple', 'black', 'teal'] # 11 colours
ax = plt.subplot(111)
ax.set_title(f'COVID-19 US States: Number of {mode}')
ax.set_xlabel('Days from the notional ' +
f'{int(threshold)}th {mode[:-1]}')
ax.set_ylabel(f'Cumulative {mode} (log scale)')
ax.set_yscale('log')
fig = ax.figure
endpoints = {}
color_legend = {}
for i, name in enumerate(data.columns):
# Get x and y data for nation
# - where two sequential days have the same
# value let's assume the second day is
# because of no reporting, and remove the
# second/subsequent data points.
y = data[name].dropna()
#print(f'{name}: \n{y}')
y = y.drop_duplicates(keep='first')
x = y.index.values
y = y.values
# let's not worry about the very short runs
if len(y) <= 2:
continue
# adjust the x data to start at the start_threshold at the y intercept
if y[0] == threshold:
adjust = 0
else:
span = y[1] - y[0]
adjust = (threshold - y[0]) / span
x = x - adjust
endpoints[name] = [x[-1], y[-1]]
# and plot
s = styles[i % len(styles)]
m = markers[i % len(markers)]
c = colours[i % len(colours)]
lw = 1
ax.plot(x, y, label=f'{name} ({int(y[-1])})',
#marker=m,
linewidth=lw, color=c, linestyle=s)
color_legend[name] = c
# label each end-point
min, max = ax.get_xlim()
ax.set_xlim(min, max+(max*0.02))
for label in endpoints:
x, y = endpoints[label]
ax.text(x=x+(max*0.01), y=y, s=f'{label}',
size='small', color=color_legend[label],
bbox={'alpha':0.5, 'facecolor':'white'})
# etc.
ax.legend(loc='upper left', ncol=4, fontsize='7')
fig.set_size_inches(8, 8)
fig.text(0.99, 0.005, source,
ha='right', va='bottom',
fontsize=9, fontstyle='italic',
color='#999999')
fig.tight_layout(pad=1.2)
fig.savefig(f'{CHART_DIRECTORY}!semilog-comparison-{mode}', dpi=125)
plt.show()
plt.close()
def prepare_comparative_data(data, threshold):
# focus on data at/above threshold (and just before)
mask = data >= threshold
for i in mask.columns:
ilocate = mask.index.get_loc(mask[i].idxmax()) - 1
if data[i].iloc[ilocate+1] > threshold:
mask[i].iloc[ilocate] = True
data = data.where(mask, other=np.nan)
# Rebase the data in terms of days starting at
# day at or immediately before the threshold
nans_in_col = data.isna().sum()
for i in nans_in_col.index:
data[i] = data[i].shift(-nans_in_col[i])
data.index = range(len(data))
return data
def semilog(data, mode, threshold, source):
x = prepare_comparative_data(data, threshold)
plot_semi_log_trajectory(x, mode, threshold, source)
```
## Plot of new vs cumulative
```
us_state_abbrev = {
'Alabama': 'AL',
'Alaska': 'AK',
'American Samoa': 'AS',
'Arizona': 'AZ',
'Arkansas': 'AR',
'California': 'CA',
'Colorado': 'CO',
'Connecticut': 'CT',
'Delaware': 'DE',
'District of Columbia': 'DC',
'Florida': 'FL',
'Georgia': 'GA',
'Guam': 'GU',
'Hawaii': 'HI',
'Idaho': 'ID',
'Illinois': 'IL',
'Indiana': 'IN',
'Iowa': 'IA',
'Kansas': 'KS',
'Kentucky': 'KY',
'Louisiana': 'LA',
'Maine': 'ME',
'Maryland': 'MD',
'Massachusetts': 'MA',
'Michigan': 'MI',
'Minnesota': 'MN',
'Mississippi': 'MS',
'Missouri': 'MO',
'Montana': 'MT',
'Nebraska': 'NE',
'Nevada': 'NV',
'New Hampshire': 'NH',
'New Jersey': 'NJ',
'New Mexico': 'NM',
'New York': 'NY',
'North Carolina': 'NC',
'North Dakota': 'ND',
'Northern Mariana Islands':'MP',
'Ohio': 'OH',
'Oklahoma': 'OK',
'Oregon': 'OR',
'Pennsylvania': 'PA',
'Puerto Rico': 'PR',
'Rhode Island': 'RI',
'South Carolina': 'SC',
'South Dakota': 'SD',
'Tennessee': 'TN',
'Texas': 'TX',
'Utah': 'UT',
'Vermont': 'VT',
'Virgin Islands': 'VI',
'Virginia': 'VA',
'Washington': 'WA',
'West Virginia': 'WV',
'Wisconsin': 'WI',
'Wyoming': 'WY'
}
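# Reverse mapping (abbreviation -> full state name); used later to relabel the
# COVID columns so they line up with the population Series, which is indexed by
# full state names.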
abbrev_us_state = dict(map(reversed, us_state_abbrev.items()))
def plot_new_and_cum_cases(states_new, states_cum, mode, lfooter=''):
for name in states_cum.columns:
total = states_cum[name].iloc[-1]
if math.isnan(total):
total = 0
if total == 0:
continue
total = int(total)
ps.plot_new_cum(
states_new[name], states_cum[name], mode, name,
'day',
title=f'COVID-19 {mode.title()}: {name}',
rfooter=source,
lfooter=f'{lfooter}; Total={total:,}'.strip(),
save_as=f'{CHART_DIRECTORY}'+
f'{name}-new-vs-cum-{mode}-{lfooter}.png',
show=True,
)
def joint(cases, deaths, mode):
cases = cases.sort_index().fillna(0).diff().rolling(7).mean().copy()
cases = ps.negative_correct(cases)
deaths = deaths.sort_index().fillna(0).diff().rolling(7).mean().copy()
deaths = ps.negative_correct(deaths)
for state in cases.columns:
name = state
# plot cases
ax = plt.subplot(111)
labels = [f'{p.day}/{p.month}' for p in cases.index]
ax.plot(labels, cases[state].values,
color='darkorange', label=f'New cases (left)')
ax.set_title(f'COVID-19 in {name} {mode}')
ax.set_ylabel(f'Num. per Day {mode}\n7-day rolling average')
#plot deaths
axr = ax.twinx()
axr.plot(labels, deaths[state],
lw=2.0, color='royalblue', label=f'New deaths (right)')
axr.set_ylabel(None)
axr.grid(False)
# manually label the x-axis
MAX_LABELS = 9
ticks = ax.xaxis.get_major_ticks()
if len(ticks):
modulus = int(np.floor(len(ticks) / MAX_LABELS) + 1)
for i in range(len(ticks)):
if i % modulus:
ticks[i].label1.set_visible(False)
# put in a legend
ax.legend(loc='upper left')
axr.legend(loc='center left')
# wrap-up
fig = ax.figure
fig.set_size_inches(8, 4)
fig.tight_layout(pad=11.2)
fig.savefig(f'{CHART_DIRECTORY}{state}-cases-v-deaths-{mode}.png', dpi=125)
plt.show()
plt.close()
```
## Regional per capita
```
def regional(df, mode):
regions = {
'Far West': ['Alaska', 'California', 'Hawaii', 'Nevada', 'Oregon', 'Washington'],
'Rocky Mountains': ['Colorado', 'Idaho', 'Montana', 'Utah', 'Wyoming'],
'Southwest': ['Arizona', 'New Mexico', 'Oklahoma', 'Texas'],
'South': ['Alabama', 'Arkansas', 'Kentucky', 'Louisiana', 'Mississippi', 'Tennessee'],
'Southeast': ['Florida', 'Georgia', 'North Carolina', 'South Carolina', 'Virginia', 'West Virginia'],
'Plains': ['Iowa', 'Kansas', 'Minnesota', 'Missouri', 'Nebraska', 'North Dakota', 'South Dakota'],
'Great Lakes': ['Illinois', 'Indiana', 'Michigan', 'Ohio', 'Wisconsin'],
'Mideast': ['Delaware', 'District of Columbia', 'Maryland', 'New Jersey', 'New York', 'Pennsylvania'],
'New England': ['Connecticut', 'Maine', 'Massachusetts', 'New Hampshire', 'Rhode Island', 'Vermont'],
'Other': ['American Samoa', 'Guam', 'Northern Mariana Islands', 'Puerto Rico', 'Virgin Islands'],
}
ps.plot_regional_per_captia(
df, mode, regions, population,
rfooter=source,
chart_directory=CHART_DIRECTORY + '!',
show=True,
)
```
## Growth factor
```
def plot_growth_factor(states_new, mode):
for name in states_new.columns:
if states_new[name].sum() == 0:
continue
ps.plot_growth_factor(states_new[name],
title=f'{name} Week on Week Growth - COVID-19 {mode.title()}',
set_size_inches=(8, 4),
save_as=f'{CHART_DIRECTORY}{name}-growth-chart-{name}-{mode}.png',
rfooter=source,
mode=mode.lower(),
show=True,
)
```
## Plot new cases: raw and smoothed
```
def plot_new_original_smoothed(states_new, mode):
HMA = 15
ROLLING_PERIOD = 7
rolling_all = pd.DataFrame()
for name in states_new.columns:
if states_new[name].sum() == 0:
continue
title = f'{name} (new COVID-19 {mode} per day)'
ps.plot_orig_smooth(states_new[name].copy(),
HMA,
mode,
'Australia', # this is used to get starting point for series
title=title,
ylabel=f'New {mode} per day',
xlabel=None,
rfooter=source,
save_as=f'{CHART_DIRECTORY}{title}.png',
show=True,
)
# gross numbers per state
for name in states_new.columns:
rolling_all[name] = states_new[name].rolling(ROLLING_PERIOD).mean()
rolling_all = rolling_all.iloc[-1].sort_values() # latest
title = f'COVID19 Daily New {mode.title()} ({ROLLING_PERIOD} day average)'
ps.plot_barh(rolling_all.round(2),
title=title,
set_size_inches=(8,8),
save_as=f'{CHART_DIRECTORY}!bar-chart-{title}.png',
rfooter=source,
show=True,
)
# latest per-captia comparison
power = 6
pop_factor = int(10 ** power)
title = f"COVID19 Daily New {mode.title()} ({ROLLING_PERIOD} day average per $10^{power}$ pop'n)"
rolling_all = rolling_all[population.index] # same order as population
rolling_all = ((rolling_all / population) * pop_factor).round(2)
ps.plot_barh(rolling_all.sort_values(),
title=title,
set_size_inches=(8,8),
save_as=f'{CHART_DIRECTORY}!bar-chart-{title}.png',
rfooter=source,
show=True,
)
```
## Bring it all together
```
cases.columns = cases.columns.map(abbrev_us_state)
cases_pc = cases.div(population / 1_000_000, axis=1)
deaths.columns = deaths.columns.map(abbrev_us_state)
deaths_pc = deaths.div(population / 1_000_000, axis=1)
def main():
modes = ['cases', 'deaths']
frames = [cases.copy().fillna(0), deaths.copy().fillna(0)]
for mode, uncorrected_cumulative in zip(modes, frames):
# data transformation - correct for data glitches
(uncorrected_daily_new,
corrected_daily_new,
corrected_cumulative) = (
ps.dataframe_correction(uncorrected_cumulative, verbose=False))
#print(uncorrected_daily_new.tail(7))
# New cases original and smoothed
#plot_new_original_smoothed(corrected_daily_new.copy(), mode)
# regional plots
regional(corrected_daily_new.copy(), mode)
# new v cum plots
plot_new_and_cum_cases(corrected_daily_new.copy(), corrected_cumulative.copy(), mode,
lfooter='Extreme outliers have been adjusted')
# Growth rates
plot_growth_factor(corrected_daily_new.copy(), mode)
#joint(cases.copy(), deaths.copy(), '')
#joint(cases_pc, deaths_pc, 'per million pop')
main()
```
## Finished
```
print('Finished')
```
|
github_jupyter
|
# imports
import sys
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import datetime
from pathlib import Path
import math
#pandas
pd.options.display.max_rows = 999
pd.options.display.max_columns = 999
# scraping
from selenium.webdriver import Chrome
import re
# local imports
sys.path.append(r'../bin')
import plotstuff as ps
# plotting
plt.style.use('ggplot')
%matplotlib inline
# save location
CHART_DIRECTORY = '../charts'
Path(CHART_DIRECTORY).mkdir(parents=True, exist_ok=True)
CHART_DIRECTORY += '/zzUS-'
wiki = 'https://simple.wikipedia.org/wiki/List_of_U.S._states_by_population'
browser = Chrome('../Chrome/chromedriver')
browser.get(wiki)
html = browser.find_element_by_xpath('//table')
html = '<table>' + html.get_attribute('innerHTML') + '</table>'
html = re.sub('<span[^>]*>Sort Table[^/]*/span>', '', html)
population = pd.read_html(html)[0]
population = population[['State', 'Population estimate, July 1, 2019[2]']]
population = population.set_index('State')
population = population[population.columns[0]]
population = population[:-4] # drop vsrious totals
population = population.rename({'U.S. Virgin Islands': 'Virgin Islands'})
browser.quit()
source = 'https://covidtracking.com/'
url = source + 'api/v1/states/daily.json'
data = pd.read_json(url)
source = 'Source: ' + source
cases = data.pivot(index='date', columns='state', values='positive').astype(float)
cases.index = pd.DatetimeIndex(cases.index.astype(str))
deaths = data.pivot(index='date', columns='state', values='death').astype(float)
deaths.index = pd.DatetimeIndex(deaths.index.astype(str))
def plot_semi_log_trajectory(data, mode, threshold, source):
styles = ['-'] #, '--', '-.', ':'] # 4 lines
markers = list('PXo^v<>D*pH.d') # 13 markers
colours = ['maroon', 'brown', 'olive', 'red',
'darkorange', 'darkgoldenrod', 'green',
'blue', 'purple', 'black', 'teal'] # 11 colours
ax = plt.subplot(111)
ax.set_title(f'COVID-19 US States: Number of {mode}')
ax.set_xlabel('Days from the notional ' +
f'{int(threshold)}th {mode[:-1]}')
ax.set_ylabel(f'Cumulative {mode} (log scale)')
ax.set_yscale('log')
fig = ax.figure
endpoints = {}
color_legend = {}
for i, name in enumerate(data.columns):
# Get x and y data for nation
# - where two sequential days have the same
# value let's assume the second day is
# because of no reporting, and remove the
# second/subsequent data points.
y = data[name].dropna()
#print(f'{name}: \n{y}')
y = y.drop_duplicates(keep='first')
x = y.index.values
y = y.values
# let's not worry about the very short runs
if len(y) <= 2:
continue
# adjust the x data to start at the start_threshold at the y intercept
if y[0] == threshold:
adjust = 0
else:
span = y[1] - y[0]
adjust = (threshold - y[0]) / span
x = x - adjust
endpoints[name] = [x[-1], y[-1]]
# and plot
s = styles[i % len(styles)]
m = markers[i % len(markers)]
c = colours[i % len(colours)]
lw = 1
ax.plot(x, y, label=f'{name} ({int(y[-1])})',
#marker=m,
linewidth=lw, color=c, linestyle=s)
color_legend[name] = c
# label each end-point
min, max = ax.get_xlim()
ax.set_xlim(min, max+(max*0.02))
for label in endpoints:
x, y = endpoints[label]
ax.text(x=x+(max*0.01), y=y, s=f'{label}',
size='small', color=color_legend[label],
bbox={'alpha':0.5, 'facecolor':'white'})
# etc.
ax.legend(loc='upper left', ncol=4, fontsize='7')
fig.set_size_inches(8, 8)
fig.text(0.99, 0.005, source,
ha='right', va='bottom',
fontsize=9, fontstyle='italic',
color='#999999')
fig.tight_layout(pad=1.2)
fig.savefig(f'{CHART_DIRECTORY}!semilog-comparison-{mode}', dpi=125)
plt.show()
plt.close()
def prepare_comparative_data(data, threshold):
# focus on data at/above threshold (and just before)
mask = data >= threshold
for i in mask.columns:
ilocate = mask.index.get_loc(mask[i].idxmax()) - 1
if data[i].iloc[ilocate+1] > threshold:
mask[i].iloc[ilocate] = True
data = data.where(mask, other=np.nan)
# Rebase the data in terms of days starting at
# day at or immediately before the threshold
nans_in_col = data.isna().sum()
for i in nans_in_col.index:
data[i] = data[i].shift(-nans_in_col[i])
data.index = range(len(data))
return data
def semilog(data, mode, threshold, source):
x = prepare_comparative_data(data, threshold)
plot_semi_log_trajectory(x, mode, threshold, source)
us_state_abbrev = {
'Alabama': 'AL',
'Alaska': 'AK',
'American Samoa': 'AS',
'Arizona': 'AZ',
'Arkansas': 'AR',
'California': 'CA',
'Colorado': 'CO',
'Connecticut': 'CT',
'Delaware': 'DE',
'District of Columbia': 'DC',
'Florida': 'FL',
'Georgia': 'GA',
'Guam': 'GU',
'Hawaii': 'HI',
'Idaho': 'ID',
'Illinois': 'IL',
'Indiana': 'IN',
'Iowa': 'IA',
'Kansas': 'KS',
'Kentucky': 'KY',
'Louisiana': 'LA',
'Maine': 'ME',
'Maryland': 'MD',
'Massachusetts': 'MA',
'Michigan': 'MI',
'Minnesota': 'MN',
'Mississippi': 'MS',
'Missouri': 'MO',
'Montana': 'MT',
'Nebraska': 'NE',
'Nevada': 'NV',
'New Hampshire': 'NH',
'New Jersey': 'NJ',
'New Mexico': 'NM',
'New York': 'NY',
'North Carolina': 'NC',
'North Dakota': 'ND',
'Northern Mariana Islands':'MP',
'Ohio': 'OH',
'Oklahoma': 'OK',
'Oregon': 'OR',
'Pennsylvania': 'PA',
'Puerto Rico': 'PR',
'Rhode Island': 'RI',
'South Carolina': 'SC',
'South Dakota': 'SD',
'Tennessee': 'TN',
'Texas': 'TX',
'Utah': 'UT',
'Vermont': 'VT',
'Virgin Islands': 'VI',
'Virginia': 'VA',
'Washington': 'WA',
'West Virginia': 'WV',
'Wisconsin': 'WI',
'Wyoming': 'WY'
}
abbrev_us_state = dict(map(reversed, us_state_abbrev.items()))
def plot_new_and_cum_cases(states_new, states_cum, mode, lfooter=''):
for name in states_cum.columns:
total = states_cum[name].iloc[-1]
if math.isnan(total):
total = 0
if total == 0:
continue
total = int(total)
ps.plot_new_cum(
states_new[name], states_cum[name], mode, name,
'day',
title=f'COVID-19 {mode.title()}: {name}',
rfooter=source,
lfooter=f'{lfooter}; Total={total:,}'.strip(),
save_as=f'{CHART_DIRECTORY}'+
f'{name}-new-vs-cum-{mode}-{lfooter}.png',
show=True,
)
def joint(cases, deaths, mode):
cases = cases.sort_index().fillna(0).diff().rolling(7).mean().copy()
cases = ps.negative_correct(cases)
deaths = deaths.sort_index().fillna(0).diff().rolling(7).mean().copy()
deaths = ps.negative_correct(deaths)
for state in cases.columns:
name = state
# plot cases
ax = plt.subplot(111)
labels = [f'{p.day}/{p.month}' for p in cases.index]
ax.plot(labels, cases[state].values,
color='darkorange', label=f'New cases (left)')
ax.set_title(f'COVID-19 in {name} {mode}')
ax.set_ylabel(f'Num. per Day {mode}\n7-day rolling average')
#plot deaths
axr = ax.twinx()
axr.plot(labels, deaths[state],
lw=2.0, color='royalblue', label=f'New deaths (right)')
axr.set_ylabel(None)
axr.grid(False)
# manually label the x-axis
MAX_LABELS = 9
ticks = ax.xaxis.get_major_ticks()
if len(ticks):
modulus = int(np.floor(len(ticks) / MAX_LABELS) + 1)
for i in range(len(ticks)):
if i % modulus:
ticks[i].label1.set_visible(False)
# put in a legend
ax.legend(loc='upper left')
axr.legend(loc='center left')
# wrap-up
fig = ax.figure
fig.set_size_inches(8, 4)
        fig.tight_layout(pad=1.2)
fig.savefig(f'{CHART_DIRECTORY}{state}-cases-v-deaths-{mode}.png', dpi=125)
plt.show()
plt.close()
def regional(df, mode):
regions = {
'Far West': ['Alaska', 'California', 'Hawaii', 'Nevada', 'Oregon', 'Washington'],
'Rocky Mountains': ['Colorado', 'Idaho', 'Montana', 'Utah', 'Wyoming'],
'Southwest': ['Arizona', 'New Mexico', 'Oklahoma', 'Texas'],
'South': ['Alabama', 'Arkansas', 'Kentucky', 'Louisiana', 'Mississippi', 'Tennessee'],
'Southeast': ['Florida', 'Georgia', 'North Carolina', 'South Carolina', 'Virginia', 'West Virginia'],
'Plains': ['Iowa', 'Kansas', 'Minnesota', 'Missouri', 'Nebraska', 'North Dakota', 'South Dakota'],
'Great Lakes': ['Illinois', 'Indiana', 'Michigan', 'Ohio', 'Wisconsin'],
'Mideast': ['Delaware', 'District of Columbia', 'Maryland', 'New Jersey', 'New York', 'Pennsylvania'],
'New England': ['Connecticut', 'Maine', 'Massachusetts', 'New Hampshire', 'Rhode Island', 'Vermont'],
'Other': ['American Samoa', 'Guam', 'Northern Mariana Islands', 'Puerto Rico', 'Virgin Islands'],
}
ps.plot_regional_per_captia(
df, mode, regions, population,
rfooter=source,
chart_directory=CHART_DIRECTORY + '!',
show=True,
)
def plot_growth_factor(states_new, mode):
for name in states_new.columns:
if states_new[name].sum() == 0:
continue
ps.plot_growth_factor(states_new[name],
title=f'{name} Week on Week Growth - COVID-19 {mode.title()}',
set_size_inches=(8, 4),
save_as=f'{CHART_DIRECTORY}{name}-growth-chart-{name}-{mode}.png',
rfooter=source,
mode=mode.lower(),
show=True,
)
def plot_new_original_smoothed(states_new, mode):
HMA = 15
ROLLING_PERIOD = 7
rolling_all = pd.DataFrame()
for name in states_new.columns:
if states_new[name].sum() == 0:
continue
title = f'{name} (new COVID-19 {mode} per day)'
ps.plot_orig_smooth(states_new[name].copy(),
HMA,
mode,
'Australia', # this is used to get starting point for series
title=title,
ylabel=f'New {mode} per day',
xlabel=None,
rfooter=source,
save_as=f'{CHART_DIRECTORY}{title}.png',
show=True,
)
# gross numbers per state
for name in states_new.columns:
rolling_all[name] = states_new[name].rolling(ROLLING_PERIOD).mean()
rolling_all = rolling_all.iloc[-1].sort_values() # latest
title = f'COVID19 Daily New {mode.title()} ({ROLLING_PERIOD} day average)'
ps.plot_barh(rolling_all.round(2),
title=title,
set_size_inches=(8,8),
save_as=f'{CHART_DIRECTORY}!bar-chart-{title}.png',
rfooter=source,
show=True,
)
# latest per-captia comparison
power = 6
pop_factor = int(10 ** power)
title = f"COVID19 Daily New {mode.title()} ({ROLLING_PERIOD} day average per $10^{power}$ pop'n)"
rolling_all = rolling_all[population.index] # same order as population
rolling_all = ((rolling_all / population) * pop_factor).round(2)
ps.plot_barh(rolling_all.sort_values(),
title=title,
set_size_inches=(8,8),
save_as=f'{CHART_DIRECTORY}!bar-chart-{title}.png',
rfooter=source,
show=True,
)
cases.columns = cases.columns.map(abbrev_us_state)
cases_pc = cases.div(population / 1_000_000, axis=1)
deaths.columns = deaths.columns.map(abbrev_us_state)
deaths_pc = deaths.div(population / 1_000_000, axis=1)
def main():
modes = ['cases', 'deaths']
frames = [cases.copy().fillna(0), deaths.copy().fillna(0)]
for mode, uncorrected_cumulative in zip(modes, frames):
# data transformation - correct for data glitches
(uncorrected_daily_new,
corrected_daily_new,
corrected_cumulative) = (
ps.dataframe_correction(uncorrected_cumulative, verbose=False))
#print(uncorrected_daily_new.tail(7))
# New cases original and smoothed
#plot_new_original_smoothed(corrected_daily_new.copy(), mode)
# regional plots
regional(corrected_daily_new.copy(), mode)
# new v cum plots
plot_new_and_cum_cases(corrected_daily_new.copy(), corrected_cumulative.copy(), mode,
lfooter='Extreme outliers have been adjusted')
# Growth rates
plot_growth_factor(corrected_daily_new.copy(), mode)
#joint(cases.copy(), deaths.copy(), '')
#joint(cases_pc, deaths_pc, 'per million pop')
main()
print('Finished')
```
%matplotlib inline
# Dependencies and Setup
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
```
### Load and Read Datasets
```
# Files loaded
city_data_to_load = "data/city_data.csv"
ride_data_to_load = "data/ride_data.csv"
# City and Ride Data read
city = pd.read_csv(city_data_to_load)
ride = pd.read_csv(ride_data_to_load)
# Above datasets combined into a single dataset
data = pd.merge(ride, city, how="left", on="city")
# Datatable displayed for preview
data.head()
```
### Data clean-up by city type
```
#City Type
urban = data[data["type"] == "Urban"]
suburban = data[data["type"] == "Suburban"]
rural = data [data["type"] == "Rural"]
#Avg Fare by city
urban_avgfare = urban.groupby(["city"]).mean()["fare"]
suburban_avgfare = suburban.groupby(["city"]).mean()["fare"]
rural_avgfare = rural.groupby(["city"]).mean()["fare"]
#Rides Per City
urban_rides = urban.groupby(["city"]).count()["ride_id"]
suburban_rides = suburban.groupby(["city"]).count()["ride_id"]
rural_rides = rural.groupby(["city"]).count()["ride_id"]
#Driver Count Per City
urban_drivercount = urban.groupby(["city"]).mean()["driver_count"]
suburban_drivercount = suburban.groupby(["city"]).mean()["driver_count"]
rural_drivercount = rural.groupby(["city"]).mean()["driver_count"]
```
# Bubble Plot of Ride Sharing Data
```
# Scatter plot: Total Number of Rides vs Average Fares by City Type
plt.scatter(urban_rides, urban_avgfare, edgecolor = "black",
marker = "o", c = "lightgreen", linewidths = 1, alpha = .8,
s = 10 * urban_drivercount, label = "Urban")
plt.scatter(suburban_rides, suburban_avgfare, edgecolor = "black",
marker = "o", c = "skyblue", linewidths = 1, alpha = .8,
s = 10 * suburban_drivercount, label = "Suburban")
plt.scatter(rural_rides, rural_avgfare, edgecolor = "black",
marker = "o", c = "orange", linewidths = 1, alpha = .8,
s = 10 * rural_drivercount, label = "Rural")
# Graph Properties - Labels
plt.title("Pyber Ride Sharing Data (2016)")
plt.xlabel("Total Number of Ride (Per City)")
plt.ylabel("Average Fare ($)")
plt.grid(True)
# Graph Properties - Legend
legend = plt.legend(title = "City Types", fontsize = "small")
legend.legendHandles[0]._sizes = [30]
legend.legendHandles[1]._sizes = [30]
legend.legendHandles[2]._sizes = [30]
plt.show()
```
## Total Fares by City Type
```
# Analysis on % Total Fares by City Type
fare_by_citytype = data.groupby("type").sum()["fare"]/data.sum()["fare"]
#Pie Chart
plt.pie(fare_by_citytype,
labels = ["Rural", "Suburban", "Urban"],
colors = ["orange", "skyblue", "lightgreen"],
explode = [0, 0, .1],
autopct = "%1.1f%%",
shadow = True,
startangle = 150)
plt.title("% of Total Fares by City Type")
plt.show()
```
## Total Rides by City Type
```
# Analysis on % of Total Rides by City Type
rides_by_citytype = data.groupby("type").count()["driver_count"] / data.count()["driver_count"]
# Pie Chart
plt.pie(rides_by_citytype,
labels = ["Rural","Suburban","Urban"],
colors = ["orange", "skyblue", "lightgreen"],
autopct = "%1.1f%%",
explode = [.1,.1,.1],
startangle = 150 )
plt.title ("% of Total Rides by City Type")
plt.show()
```
## Total Drivers by City Type
```
# Analysis on % of Total Drivers by City Type
drivers_by_citytype = city.groupby("type").sum()["driver_count"] / city.sum()["driver_count"]
# Pie Chart
plt.pie(drivers_by_citytype,
labels = ["Rural", "Suburban", "Urban"],
colors = ["orange", "skyblue", "lightgreen"],
autopct = "%1.1f%%",
explode = [.1,.1,.1],
startangle = 150)
plt.title("% of Total Drivers by City Type")
plt.show()
```
```
import pandas as pd
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
import seaborn as sns
import pymongo
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import precision_score
from sklearn.metrics import classification_report
from sklearn.metrics import recall_score
from sklearn.metrics import confusion_matrix
from sklearn.metrics import plot_confusion_matrix
from sklearn.metrics import r2_score
from sklearn import metrics
client = pymongo.MongoClient('mongo', 27017)
# Database name connection
db = client["movies_database"]
# Collection Name
col = db["data"]
x = col.find()
df = pd.DataFrame(list(x))
#df = pd.read_csv('movies_with_sentiment_analysis.csv')
df_results = pd.DataFrame(columns=['algorithm_name', 'score', 'target'])
df.head(10)
```
# Pre-analysis
```
original_df = df
final_df=original_df
df = df.drop(columns=['release_date', 'movie_title', 'id'], axis=0)
```
## Different values
```
df['max_theaters'].describe()
df['score'].describe()
df['running_time_min'].describe()
for col in df.columns.to_list():
if col.startswith('genre_'):
print(f'{col}: {df[col].value_counts()[1]}/{len(df)}')
```
## Outliers
```
plt.figure(figsize=(20, 10))
ax = sns.boxplot(data=df)
_ = ax.set_xticklabels(df.keys(), rotation=90)
```
## Correlation
```
df.head(50)
plt.figure(figsize=(15, 8))
sns.heatmap(round(df.corr(method='spearman'), 2), annot=True, mask=None)
plt.show()
```
# Models
1. Predict Score (without total gross)
2. Total gross (without score)
3. Opening weekend gross (without total gross and score)
```
def prediccion(y_test, y_pred):
    # count how many predictions match the true labels after casting to int
    validos = 0
    no_validos = 0
    for i in range(len(y_pred)):
        if y_test.iloc[i] == y_pred[i].astype(int):
            validos = validos + 1
        else:
            no_validos = no_validos + 1
    return print("Valid: ", validos, " / Invalid: ", no_validos)
```
## 1. Predict Score
```
notas=["Bad/<6","Good/6-7","Very Good/7-8","Excelent/+8"]
df_classification = df
df_classification.loc[df_classification["score"] < 4.8, "score"] = 0
df_classification.loc[(df_classification["score"] >=4.8) & (df_classification["score"] < 5.5), "score"] = 1
df_classification.loc[(df_classification["score"] >=5.5) & (df_classification["score"] < 6.3), "score"] = 2
df_classification.loc[(df_classification["score"] >=6.3), "score"] = 3
final_df.loc[final_df["score"] < 4.8, "score"] = 0
final_df.loc[(final_df["score"] >=4.8) & (final_df["score"] < 5.5), "score"] = 1
final_df.loc[(final_df["score"] >=5.5) & (final_df["score"] < 6.3), "score"] = 2
final_df.loc[(final_df["score"] >=6.3), "score"] = 3
y=df_classification["score"]
cols = list(df_classification.columns)
scaler = MinMaxScaler()
df_classification = scaler.fit_transform(df_classification)
df_classification = pd.DataFrame(df_classification, columns=cols)
df_classification = df_classification.drop(columns=['score', 'gross_total'])
X = df_classification
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
rfc = RandomForestClassifier(n_estimators=2000,criterion="gini")
rfc.fit(X_train, y_train)
y_pred_rfc = rfc.predict(X_test)
score = rfc.score(X_test, y_test)
print(score)
df_results = df_results.append({'algorithm_name':'random_forest_classifier', 'score': score, 'target': 'score'}, ignore_index=True)
y_pred_df=rfc.predict(X)
final_df.insert(6,'predicted_score', y_pred_df)
final_df["score"]=final_df["score"].astype(int)
final_df["predicted_score"]=final_df["predicted_score"].astype(int)
def mostrar_resultados(y_test, pred_y):
    # plot the confusion matrix and print a classification report
    conf_matrix = confusion_matrix(y_test, pred_y)
    plt.figure(figsize=(12, 12))
    sns.heatmap(conf_matrix, annot=True, fmt="d")
    plt.title("Confusion matrix")
    plt.ylabel('Actual')
    plt.xlabel('Predicted')
    plt.show()
    print()
    print(classification_report(y_test, pred_y))
print("Accuracy:",metrics.accuracy_score(y_test, y_pred_rfc))
prediccion(y_test,y_pred_rfc)
mostrar_resultados(y_test,y_pred_rfc)
```
## 2. Gross total (Linear Regression)
```
cols = list(df.columns)
scaler = MinMaxScaler()
scaler_y = MinMaxScaler()
df = scaler.fit_transform(df)
df = pd.DataFrame(df, columns=cols)
df_y = scaler_y.fit_transform(pd.DataFrame(original_df["gross_total"].values.tolist(),columns=["gross_total"]))
y = pd.DataFrame(df_y, columns=["gross_total"])
df_regresion = df.drop(columns=['score', 'gross_total', 'max_theaters'])
X = df_regresion
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
from sklearn.linear_model import LinearRegression
model = LinearRegression()
model.fit(X_train, y_train)
score = model.score(X_test, y_test)
print(score)
y_pred = model.predict(X_test)
r2 = r2_score(y_test,y_pred)
df_results = df_results.append({'algorithm_name':'linear_regression', 'score': r2, 'target': 'gross_total'}, ignore_index=True)
results = pd.DataFrame(columns=['movie_title', 'gross_total', 'predicted_gross_total'])
results['movie_title']=original_df['movie_title']
results['gross_total']=original_df['gross_total']
results['predicted_gross_total']=model.predict(X)
results.head()
y_pred_transformed=model.predict(X)
y_pred_reversed=scaler_y.inverse_transform(y_pred_transformed)
y_pred_reversed
df_scaled=pd.DataFrame(y_pred_reversed,columns=["predicted_gross_total"])
final_df.insert(2,'predicted_gross_total', y_pred_reversed)
final_df["predicted_gross_total"]=final_df["predicted_gross_total"].astype(int)
```
## 3. Opening weekend gross (Random Forest Regressor)
```
cols = list(df.columns)
del scaler_y
del scaler
scaler = MinMaxScaler()
scaler_y = MinMaxScaler()
df = scaler.fit_transform(df)
df = pd.DataFrame(df, columns=cols)
df_y = scaler_y.fit_transform(pd.DataFrame(original_df["opening_weekend_gross"].values.tolist(),columns=["opening_weekend_gross"]))
y = pd.DataFrame(df_y, columns=["opening_weekend_gross"])
df_regresion= df.drop(columns=['opening_weekend_gross','score', 'gross_total'])
X = df_regresion
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
from sklearn.ensemble import RandomForestRegressor
model = RandomForestRegressor(n_estimators=700)
model.fit(X_train, y_train)
y_pred = model.predict(X_test)
score = model.score(X_test, y_test)
print(score)
y_pred = model.predict(X_test)
r2 = r2_score(y_test,y_pred)
df_results = df_results.append({'algorithm_name':'random_forest_regressor', 'score': r2, 'target': 'opening_weekend_gross'}, ignore_index=True)
results = pd.DataFrame(columns=['movie_title', 'opening_weekend_gross', 'predicted_opening_weekend_gross'])
results['movie_title']=original_df['movie_title']
results['opening_weekend_gross']=original_df['opening_weekend_gross']
results['predicted_opening_weekend_gross']=model.predict(X)
results.head()
y_pred_transformed=model.predict(X)
y_pred_transformed
y_pred_reversed=scaler_y.inverse_transform(y_pred_transformed.reshape(1,-1))
y_pred_reversed
y_pred_reversed=y_pred_reversed.reshape(-1,1)
df_scaled=pd.DataFrame(y_pred_reversed,columns=["predicted_opening_weekend_gross"])
final_df.insert(5,'predicted_opening_weekend_gross', y_pred_reversed)
final_df["predicted_opening_weekend_gross"]=final_df["predicted_opening_weekend_gross"].astype(int)
final_df
```
# Results
```
df_results['score'] = df_results['score']*100
df_results = df_results.sort_values(by=['score'], ascending=False)
df_score = df_results.loc[df_results['target'] == 'score']
df_gross_total = df_results.loc[df_results['target'] == 'gross_total']
df_opening_weekend_gross = df_results.loc[df_results['target'] == 'opening_weekend_gross']
df_results = df_results.sort_values(by=['score'])
df_final_results = df_score.head(1)
df_final_results = df_final_results.append(df_gross_total.head(1))
df_final_results = df_final_results.append(df_opening_weekend_gross.head(1))
```
## Final conclusion
```
df_final_results
df_final_results.to_csv("accuracy_predictions_ML-NLP.csv",index=0,columns=df_final_results.columns.values.tolist())
final_df.to_csv("movies_list_2012-2020_predictions_ML-NLP.csv",index=0,columns=final_df.columns.values.tolist())
```
* Please set the environment variable `DB_URI` to point to the database
```
import os
import numpy as np
import pandas as pd
from cvxpy import *
from cvxopt import *
from alphamind.api import *
from alphamind.cython.optimizers import QPOptimizer
```
# Data Preparing
--------------------------
```
risk_penlty = 0.5
ref_date = '2018-02-08'
engine = SqlEngine(os.environ['DB_URI'])
universe = Universe('custom', ['ashare_ex'])
codes = engine.fetch_codes(ref_date, universe)
risk_cov, risk_exposure = engine.fetch_risk_model(ref_date, codes)
factor = engine.fetch_factor(ref_date, 'EPS', codes)
total_data = pd.merge(factor, risk_exposure, on='code').dropna()
all_styles = risk_styles + industry_styles + macro_styles
risk_exposure_values = total_data[all_styles].values.astype(float)
special_risk_values = total_data['srisk'].values.astype(float)
risk_cov_values = risk_cov[all_styles].values
sec_cov_values_full = risk_exposure_values @ risk_cov_values @ risk_exposure_values.T / 10000 + np.diag(special_risk_values ** 2) / 10000
signal_full = total_data['EPS'].values
n = 200
sec_cov_values = sec_cov_values_full[:n, :n]
signal = signal_full[:n]
```
# Optimizing Weights
-------------------------------------
```
%%time
w = Variable(n)
lbound = 0.
ubound = 1. / n * 20
objective = Minimize(risk_penlty * quad_form(w, sec_cov_values) - signal * w)
constraints = [w >= lbound,
w <= ubound,
sum_entries(w) == 1,]
prob = Problem(objective, constraints)
%%time
prob.solve(verbose=True)
prob.status, prob.value
%%time
prob.solve(verbose=True, solver='CVXOPT')
prob.status, prob.value
%%time
P = matrix(sec_cov_values)
q = -matrix(signal)
G = np.zeros((2*n, n))
h = np.zeros(2*n)
for i in range(n):
G[i, i] = 1.
h[i] = 1. / n * 20
G[i+n, i] = -1.
h[i+n] = 0.
G = matrix(G)
h = matrix(h)
A = np.ones((1, n))
b = np.ones(1)
A = matrix(A)
b = matrix(b)
sol = solvers.qp(P, q, G, h, A, b)
%%time
lbound = np.zeros(n)
ubound = np.ones(n) * 20 / n
cons_matrix = np.ones((1, n))
clb = np.ones(1)
cub = np.ones(1)
qpopt = QPOptimizer(signal, sec_cov_values, lbound, ubound, cons_matrix, clb, cub, 1.)
qpopt.feval()
qpopt.status()
```
# Performance Timing
-------------------------
```
import datetime as dt
def time_function(py_callable, n):
start = dt.datetime.now()
val = py_callable(n)
return (dt.datetime.now() - start).total_seconds(), val
def cvxpy(n):
w = Variable(n)
lbound = 0.
ubound = 0.01
objective = Minimize(risk_penlty * quad_form(w, sec_cov_values) - signal * w)
constraints = [w >= lbound,
w <= ubound,
sum_entries(w) == 1,]
prob = Problem(objective, constraints)
prob.solve(verbose=False, solver='CVXOPT', display=False)
return prob.value
def cvxopt(n):
P = matrix(sec_cov_values)
q = -matrix(signal)
G = np.zeros((2*n, n))
h = np.zeros(2*n)
for i in range(n):
G[i, i] = 1.
h[i] = 0.01
G[i+n, i] = -1.
h[i+n] = 0.
G = matrix(G)
h = matrix(h)
A = np.ones((1, n))
b = np.ones(1)
A = matrix(A)
b = matrix(b)
solvers.options['show_progress'] = False
sol = solvers.qp(P, q, G, h, A, b)
return sol['primal objective']
def ipopt(n):
lbound = np.zeros(n)
ubound = np.ones(n) * 0.01
cons_matrix = np.ones((1, n))
clb = np.ones(1)
cub = np.ones(1)
qpopt = QPOptimizer(signal, sec_cov_values, lbound, ubound, cons_matrix, clb, cub, 1.)
return qpopt.feval()
n_steps = list(range(200, 3201, 200))
cvxpy_times = [None] * len(n_steps)
cvxopt_times = [None] * len(n_steps)
ipopt_times = [None] * len(n_steps)
print("{0:<8}{1:>12}{2:>12}{3:>12}".format('Scale(n)', 'cvxpy', 'cvxopt', 'ipopt'))
for i, n in enumerate(n_steps):
sec_cov_values = sec_cov_values_full[:n, :n]
signal = signal_full[:n]
cvxpy_times[i], val1 = time_function(cvxpy, n)
cvxopt_times[i], val2 = time_function(cvxopt, n)
ipopt_times[i], val3 = time_function(ipopt, n)
np.testing.assert_almost_equal(val1, val2, 4)
np.testing.assert_almost_equal(val2, val3, 4)
print("{0:<8}{1:>12.4f}{2:>12.4f}{3:>12.4f}".format(n, cvxpy_times[i], cvxopt_times[i], ipopt_times[i]))
```
# Mยฒ and Beam Quality Parameters
**Scott Prahl**
**Mar 2021**
In this notebook, the basic definitions of the beam waist, beam divergence, beam product, and Mยฒ are introduced.
As Ross points out in his book, *Laser Beam Quality Metrics*, describing a laser beam by a few numbers is an approximation that discards quite a lot of information.
> Any attempt to reduce the behavior of a seven-dimensional object to a single number inevitably results in loss of information.
where the seven dimensions consist of three amplitude dimensions, three phase dimensions, and time. Nevertheless, Mยฒ is a simple, widely used metric for characterizing laser beams.
---
*If* `` laserbeamsize `` *is not installed, uncomment the following cell (i.e., delete the initial #) and execute it with* `` shift-enter ``. *Afterwards, you may need to restart the kernel/runtime before the module will import successfully.*
```
#!pip install --user laserbeamsize
import numpy as np
import matplotlib.pyplot as plt
try:
import laserbeamsize as lbs
except ModuleNotFoundError:
print('laserbeamsize is not installed. To install, uncomment and run the cell above.')
print('Once installation is successful, rerun this cell again.')
```
## Minimum Beam Radius
The minimum beam radius $w_0$ (and its location $z_0$) tell us a lot about a laser beam. We know that a laser beam must have a minimum beam radius somewhere. If we assume that the beam obeys Gaussian beam propagation rules then we can make a few observations.
> It would seem that $w$ should stand for *width*, but it doesn't. This means that $w$ is not the diameter but the radius. Go figure.
A laser cavity with a flat mirror will have its minimum beam radius at that mirror. For diode lasers, the beam exits through a cleaved flat surface. Since the gain medium in a diode laser usually has a rectangular cross section, there are two different minimum beam radii associated with the exit aperture. These are often assumed to correspond to the dimensions of the gain medium.
In general, though, the beam waist happens somewhere inside the laser and both its location and radius are unknown. To determine the beam waist, an aberration-free focusing lens is used to create a new beam waist external to the cavity that can be measured.
## Gaussian Beam Radius
The parameter $w(z)$ represents the beam radius at an axial location $z$. When $z = z_0$, the beam reaches its minimum radius $w_0$,
$$
w^2(z)=w_0^2\left[1+\left(\frac{z-z_0}{z_R}\right)^2\right]
$$
where $z_R=\pi w_0^2/(\lambda M^2)$.
Therefore, for a simple Gaussian beam (Mยฒ=1), the minimum radius $w_0$ and its location $z_0$ determine the beam size everywhere (assuming, of course, that the wavelength is known).
As can be seen in the plot below, the beam reaches a minimum and then expands symmetrically about the axial location of the minimum.
```
w0=0.1e-3 # radius of beam waist [m]
lambda0=632.8e-9 # wavelength again in m
zR=lbs.z_rayleigh(w0,lambda0) # Rayleigh Distance in m
z = np.linspace(-5*zR,5*zR,100)
r = lbs.beam_radius(w0,lambda0,z)
plt.fill_between(z,-r,r,color='pink')
plt.axhline(0,color='black',lw=1)
plt.vlines(0,0,w0,color='black',lw=1)
plt.text(0,w0/2,' $w_0$', va='center')
plt.xlabel("Position on optical axis, $z$")
plt.ylabel("Beam radius, $w(z)$")
plt.xticks([])
plt.yticks([])
plt.show()
```
## Beam Divergence $\Theta$
All beams diverge or spread out as the beam propagates along the $z$ direction. The far-field divergence is defined as the half-angle
$$
\theta=\lim_{z\rightarrow\infty}\frac{w(z)}{z} = \frac{w_0}{z_R}
$$
where $w(z)$ is the beam radius at a distance $z$. The full angle $\Theta$ is
$$
\Theta=\lim_{z\rightarrow\infty}\frac{d(z)}{z}= \frac{2 w_0}{z_R}
$$
```
w0=0.1e-3 # radius of beam waist [m]
lambda0=632.8e-9 # wavelength again in m
zR=lbs.z_rayleigh(w0,lambda0) # Rayleigh Distance in m
theta = w0/zR
z = np.linspace(-5*zR,5*zR,100)
r = lbs.beam_radius(w0,lambda0,z)
plt.fill_between(z,-r,r,color='pink')
plt.plot(z,theta*z,'--b')
plt.plot(z,-theta*z,'--b')
plt.xlabel("Position on optical axis, $z$")
plt.ylabel("Beam radius, $w(z)$")
plt.title("Beam Divergence")
plt.text(120e-3, 0e-3, r'$\Theta=2\theta$', fontsize=14, va='center',color='blue')
plt.annotate('',xy=(100e-3,0.2e-3),xytext=(100e-3,-0.2e-3),arrowprops=dict(connectionstyle="arc3,rad=0.2", arrowstyle="<->",color='blue'))
#plt.xticks([])
#plt.yticks([])
plt.show()
```
For a perfect Gaussian beam, the beam divergence is completely determined by its minimum beam radius $w_{00}$
$$
\Theta_{00} = \frac{\lambda}{\pi w_{00}}
$$
where the 00 subscript indicates that these only apply to the TEM$_{00}$ or fundamental gaussian mode.
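As a quick numerical check of this relation (a sketch using the same assumed HeNe-style numbers as the plots above, not a real measurement), a perfect Gaussian beam with a 0.1 mm waist diverges by only about 2 mrad:
```
import numpy as np

w00 = 0.1e-3          # waist radius of a perfect Gaussian beam [m] (assumed value)
lambda0 = 632.8e-9    # HeNe wavelength [m]

theta00 = lambda0 / (np.pi * w00)     # divergence of the fundamental mode [rad]
print(f"Theta_00 = {theta00*1e3:.2f} mrad")
```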
## Beam Parameter Product
Laser beam quality can be described by combining the previous two metrics into a single beam parameter product (BPP) or
$$
\mathrm{BPP} = w \cdot \Theta
$$
where $w$ is the radius of the beam (at its waist/narrowest point) and $\Theta$ is the far-field beam divergence defined above.
This is not unlike the throughput parameter (area $\times$ solid angle) from radiometry which captures both the angular expansion of light and focusing into a single variable. The BPP represents, for instance, the amount of light that can be coupled into a fiber. For practical use of the BPP, see
Wang, [Fiber coupled diode laser beam parameter product calculation and rules for optimized design](https://www.researchgate.net/publication/253527159_Fiber_Coupled_Diode_Laser_Beam_Parameter_Product_Calculation_and_Rules_for_Optimized_Design), *Proc. SPIE*, **7918**, 9 (2011)
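For a rough sense of scale (again using the assumed numbers above rather than measured data), the BPP of a perfect Gaussian beam works out to exactly $\lambda/\pi$:
```
import numpy as np

lambda0 = 632.8e-9                  # wavelength [m]
w0 = 0.1e-3                         # waist radius [m] (assumed value)
Theta = lambda0 / (np.pi * w0)      # divergence of a perfect Gaussian with this waist [rad]

BPP = w0 * Theta                    # beam parameter product [m rad]
print(f"BPP       = {BPP*1e6:.3f} mm mrad")
print(f"lambda/pi = {lambda0/np.pi*1e6:.3f} mm mrad")
```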
## Mยฒ or the beam propagation factor
It turns out that real beams differ from perfect Gaussian beams. Specifically, they diverge more quickly or don't focus to the same size spot.
The beam propagation factor Mยฒ is a measure of how close a beam is to Gaussian (TEM$_{00}$ mode).
Johnston and Sasnett write in their chapter "Characterization of Laser Beams: The Mยฒ Model" in the *Handbook of Optical and Laser Scanning*, Marcel Dekker, (2004):
> Unlike the fundamental mode beam where the 1/e$^2$-diameter definition is universally understood and applied, for mixed modes a number of different diameter definitions have been employed. The different definitions have in common that they all reduce to the 1/e$^2$-diameter when applied to an $M^2=1$ fundamental mode beam, but when applied to a mixed mode with higher order mode content, they in general give different numerical values. As Mยฒ always depends on a product of two measured diameters, its numerical value changes also as the square of that for diameters. It is all the same beam, but different methods provide results in different currencies; one has to specify what currency is in use and know the exchange rate.
Mยฒ is defined as the ratio of the beam parameter product (BPP)
$$
M^2 = \frac{\mathrm{BPP}}{\mathrm{BPP}_{00}} = \frac{\Theta \cdot w_0}{\Theta_{00}\cdot w_{00}}
$$
where $\Theta$ is the far-field beam divergence and $w_0$ is the minimum beam radius of the real beam. The beam divergence of a perfect Gaussian is
$$
\Theta_{00} = \frac{\lambda}{\pi w_{00}}
$$
and therefore the beam quality factor becomes
$$
M^2 = \frac{\pi \, \Theta \cdot w_0}{\lambda}
$$
where radius $w_0$ is the minimum radius for the real beam.
A Gaussian beam has Mยฒ=1, while all other beams will have Mยฒ>1. Moreover,
* for a given *beam radius*, the Gaussian beam has the smallest possible beam divergence
* for a given *beam divergence*, the Gaussian beam has the smallest possible beam radius.
A multimode beam has a beam waist which is Mยฒ times larger than a fundamental Gaussian beam with the same beam divergence, or a beam divergence which is Mยฒ times larger than that of a fundamental Gaussian beam with the same beam waist.
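To make the definition concrete, here is a small sketch that turns a waist radius and a far-field divergence into Mยฒ using the formulas above. The two measured values are made up purely for illustration:
```
import numpy as np

lambda0 = 632.8e-9   # wavelength [m]
w0 = 0.15e-3         # measured waist radius [m] (made-up value)
Theta = 3.0e-3       # measured far-field divergence [rad] (made-up value)

M2 = np.pi * Theta * w0 / lambda0    # beam propagation factor
print(f"M2 = {M2:.2f}")              # greater than 1, so not a perfect Gaussian
```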
### Astigmatic or Elliptical Beams
A simple stigmatic beam has rotational symmetry: any cross section will display a circular profile. However, a simple astigmatic beam will have elliptical cross-sections. It is *simple* because the major and minor axes of the ellipse remain in the same plane (a general astigmatic beam will have elliptical cross-sections that rotate with propagation distance).
For an elliptical beam, the beam waist radius, beam waist location, and Rayleigh distance will differ on the semi-major and semi-minor axes. Unsurprisingly, the Mยฒ values may differ as well
$$
w_x^2(z) = w_{0x}^2\left[1 + \left(\frac{z-z_0}{z_{Rx}} \right)^2\right]
$$
and
$$
w_y^2(z) = w_{0y}^2\left[1 + \left(\frac{z-z_0}{z_{Ry}} \right)^2\right]
$$
Two different Mยฒ values for the major and minor axes of the elliptical beam shape arise from the two Rayleigh distances
$$
z_{Rx} = \frac{\pi w_{0x}^2}{\lambda M_x^2} \qquad\mbox{and}\qquad z_{Ry} = \frac{\pi w_{0y}^2}{\lambda M_y^2}
$$
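A minimal sketch of these two propagation equations, using made-up waists and Mยฒ values for the two axes, might look like this:
```
import numpy as np

lambda0 = 632.8e-9           # wavelength [m]
w0x, M2x = 0.10e-3, 1.2      # assumed waist radius and M-squared along x
w0y, M2y = 0.07e-3, 1.8      # assumed waist radius and M-squared along y
z0 = 0.0                     # common waist location [m]

zRx = np.pi * w0x**2 / (lambda0 * M2x)   # Rayleigh distance along x
zRy = np.pi * w0y**2 / (lambda0 * M2y)   # Rayleigh distance along y

z = np.linspace(-0.2, 0.2, 5)            # positions along the optical axis [m]
wx = w0x * np.sqrt(1 + ((z - z0)/zRx)**2)
wy = w0y * np.sqrt(1 + ((z - z0)/zRy)**2)
print(np.round(wx*1e3, 3), np.round(wy*1e3, 3))   # radii in mm
```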
## Rayleigh Distance $z_R$
The Rayleigh distance $z_R$ is the distance from the beam waist to the point where the beam area has doubled. This means that the irradiance (power/area) has dropped 50% or that beam radius has increased by a factor of $\sqrt{2}$.
Interestingly, the wavefront of the beam is most strongly curved (the radius of curvature is smallest) at one Rayleigh distance from the beam waist.
The Rayleigh distance for a real beam defined as
$$
z_R=\frac{\pi w_0^2}{\lambda M^2}
$$
where $w_0$ is the minimum beam radius of the beam.
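A quick numeric check (same assumed waist as in the plots) shows how a larger Mยฒ shortens the Rayleigh distance:
```
import numpy as np

w0 = 0.1e-3          # waist radius [m] (assumed value)
lambda0 = 632.8e-9   # wavelength [m]

for M2 in (1.0, 2.0):
    zR = np.pi * w0**2 / (lambda0 * M2)
    print(f"M2 = {M2:.0f}: z_R = {zR*1e3:.1f} mm")
```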
```
w0=0.1 # radius of beam waist [mm]
lambda0=0.6328/1000 # again in mm
zR=lbs.z_rayleigh(w0,lambda0) # Rayleigh Distance
theta = w0/zR
z = np.linspace(-3*zR,3*zR,100)
r = lbs.beam_radius(w0,lambda0,z)
plt.fill_between(z,-r,r,color='pink')
plt.axhline(0,color='black',lw=1)
#plt.axvline(z0,color='black',lw=1)
plt.axvline(zR,color='blue', linestyle=':')
plt.axvline(-zR,color='blue', linestyle=':')
plt.text(zR, -3*w0, ' $z_R$')
plt.text(-zR, -3*w0, '$-z_R$ ', ha='right')
plt.text(0,w0/2,' $w_0$', va='center')
plt.text(zR,w0/2,' $\sqrt{2}w_0$', va='center')
plt.vlines(0,0,w0,color='black',lw=2)
plt.vlines(zR,0,np.sqrt(2)*w0,color='black',lw=2)
plt.xlabel("Position on optical axis, $z$")
plt.ylabel("Beam radius, $w(z)$")
plt.title("Rayleigh Distance")
plt.xticks([])
plt.yticks([])
plt.show()
```
# Blog Post 2: Spectral Clustering
In this blog post, you'll write a tutorial on a simple version of the *spectral clustering* algorithm for clustering data points. Each of the below parts will pose to you one or more specific tasks. You should plan to both:
- Achieve these tasks using clean, efficient, and well-documented Python and
- Write, in your own words, about how to understand what's going on.
> Remember, your aim is not just to write and understand the algorithm, but to explain to someone else how they could do the same.
***Note***: your blog post doesn't have to contain a lot of math. It's ok for you to give explanations like "this function is an approximation of this other function according to the math in the assignment."
### Notation
In all the math below:
- Boldface capital letters like $\mathbf{A}$ refer to matrices (2d arrays of numbers).
- Boldface lowercase letters like $\mathbf{v}$ refer to vectors (1d arrays of numbers).
- $\mathbf{A}\mathbf{B}$ refers to a matrix-matrix product (`A@B`). $\mathbf{A}\mathbf{v}$ refers to a matrix-vector product (`A@v`).
### Comments and Docstrings
You should plan to comment all of your code. Docstrings are not required except in Part G.
## Introduction
In this problem, we'll study *spectral clustering*. Spectral clustering is an important tool for identifying meaningful parts of data sets with complex structure. To start, let's look at an example where we *don't* need spectral clustering.
```
import numpy as np
from sklearn import datasets
from matplotlib import pyplot as plt
n = 200
np.random.seed(1111)
X, y = datasets.make_blobs(n_samples=n, shuffle=True, random_state=None, centers = 2, cluster_std = 2.0)
plt.scatter(X[:,0], X[:,1])
```
*Clustering* refers to the task of separating this data set into the two natural "blobs." K-means is a very common way to achieve this task, which has good performance on circular-ish blobs like these:
```
from sklearn.cluster import KMeans
km = KMeans(n_clusters = 2)
km.fit(X)
plt.scatter(X[:,0], X[:,1], c = km.predict(X))
```
### Harder Clustering
That was all well and good, but what if our data is "shaped weird"?
```
np.random.seed(1234)
n = 200
X, y = datasets.make_moons(n_samples=n, shuffle=True, noise=0.05, random_state=None)
plt.scatter(X[:,0], X[:,1])
```
We can still make out two meaningful clusters in the data, but now they aren't blobs but crescents. As before, the Euclidean coordinates of the data points are contained in the matrix `X`, while the labels of each point are contained in `y`. Now k-means won't work so well, because k-means is, by design, looking for circular clusters.
```
km = KMeans(n_clusters = 2)
km.fit(X)
plt.scatter(X[:,0], X[:,1], c = km.predict(X))
```
Whoops! That's not right!
As we'll see, spectral clustering is able to correctly cluster the two crescents. In the following problems, you will derive and implement spectral clustering.
## Part A
Construct the *similarity matrix* $\mathbf{A}$. $\mathbf{A}$ should be a matrix (2d `np.ndarray`) with shape `(n, n)` (recall that `n` is the number of data points).
When constructing the similarity matrix, use a parameter `epsilon`. Entry `A[i,j]` should be equal to `1` if `X[i]` (the coordinates of data point `i`) is within distance `epsilon` of `X[j]` (the coordinates of data point `j`).
**The diagonal entries `A[i,i]` should all be equal to zero.** The function `np.fill_diagonal()` is a good way to set the values of the diagonal of a matrix.
#### Note
It is possible to do this manually in a `for`-loop, by testing whether `np.sum((X[i] - X[j])**2) < epsilon**2` for each choice of `i` and `j`. This is not recommended! Instead, see if you can find a solution built into `sklearn`. Can you find a function that will compute all the pairwise distances and collect them into an appropriate matrix for you?
For this part, use `epsilon = 0.4`.
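Here is a sketch of what one solution might look like (there are other reasonable approaches), assuming `X` and `n` are defined as above and using `pairwise_distances` from `sklearn`:
```
import numpy as np
from sklearn.metrics import pairwise_distances

epsilon = 0.4

# entry is 1 when two points lie within distance epsilon of each other
A = (pairwise_distances(X) < epsilon).astype(float)
np.fill_diagonal(A, 0)   # no point is considered "near" itself
```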
## Part B
The matrix `A` now contains information about which points are near (within distance `epsilon`) which other points. We now pose the task of clustering the data points in `X` as the task of partitioning the rows and columns of `A`.
Let $d_i = \sum_{j = 1}^n a_{ij}$ be the $i$th row-sum of $\mathbf{A}$, which is also called the *degree* of $i$. Let $C_0$ and $C_1$ be two clusters of the data points. We assume that every data point is in either $C_0$ or $C_1$, with the cluster membership specified by `y`. We think of `y[i]` as being the label of point `i`. So, if `y[i] = 1`, then point `i` (and therefore row $i$ of $\mathbf{A}$) is an element of cluster $C_1$.
The *binary norm cut objective* of a matrix $\mathbf{A}$ is the function
$$N_{\mathbf{A}}(C_0, C_1)\equiv \mathbf{cut}(C_0, C_1)\left(\frac{1}{\mathbf{vol}(C_0)} + \frac{1}{\mathbf{vol}(C_1)}\right)\;.$$
In this expression,
- $\mathbf{cut}(C_0, C_1) \equiv \sum_{i \in C_0, j \in C_1} a_{ij}$ is the *cut* of the clusters $C_0$ and $C_1$.
- $\mathbf{vol}(C_0) \equiv \sum_{i \in C_0}d_i$, where $d_i = \sum_{j = 1}^n a_{ij}$ is the *degree* of row $i$ (the total number of all other rows related to row $i$ through $A$). The *volume* of cluster $C_0$ is a measure of the size of the cluster.
A pair of clusters $C_0$ and $C_1$ is considered to be a "good" partition of the data when $N_{\mathbf{A}}(C_0, C_1)$ is small. To see why, let's look at each of the two factors in this objective function separately.
#### B.1 The Cut Term
First, the cut term $\mathbf{cut}(C_0, C_1)$ is the number of nonzero entries in $\mathbf{A}$ that relate points in cluster $C_0$ to points in cluster $C_1$. Saying that this term should be small is the same as saying that points in $C_0$ shouldn't usually be very close to points in $C_1$.
Write a function called `cut(A,y)` to compute the cut term. You can compute it by summing up the entries `A[i,j]` for each pair of points `(i,j)` in different clusters.
It's ok if you use `for`-loops in this function -- we are going to see a more efficient view of this problem soon.
Compute the cut objective for the true clusters `y`. Then, generate a random vector of random labels of length `n`, with each label equal to either 0 or 1. Check the cut objective for the random labels. You should find that the cut objective for the true labels is *much* smaller than the cut objective for the random labels.
This shows that this part of the cut objective indeed favors the true clusters over the random ones.
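One possible sketch (assuming `A`, `y`, and `n` from above); here the sum runs over both orderings of each pair, which is the convention that matches the matrix formula in Part C:
```
def cut(A, y):
    # add up the entries of A that join points in different clusters
    # (both orderings of each pair are counted, since A is symmetric)
    total = 0
    for i in range(len(y)):
        for j in range(len(y)):
            if y[i] != y[j]:
                total += A[i, j]
    return total

# the true labels should give a much smaller cut than random labels
y_rand = np.random.randint(0, 2, size=n)
cut(A, y), cut(A, y_rand)
```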
#### B.2 The Volume Term
Now take a look at the second factor in the norm cut objective. This is the *volume term*. As mentioned above, the *volume* of cluster $C_0$ is a measure of how "big" cluster $C_0$ is. If we choose cluster $C_0$ to be small, then $\mathbf{vol}(C_0)$ will be small and $\frac{1}{\mathbf{vol}(C_0)}$ will be large, leading to an undesirable higher objective value.
Synthesizing, the binary normcut objective asks us to find clusters $C_0$ and $C_1$ such that:
1. There are relatively few entries of $\mathbf{A}$ that join $C_0$ and $C_1$.
2. Neither $C_0$ and $C_1$ are too small.
Write a function called `vols(A,y)` which computes the volumes of $C_0$ and $C_1$, returning them as a tuple. For example, `v0, v1 = vols(A,y)` should result in `v0` holding the volume of cluster `0` and `v1` holding the volume of cluster `1`. Then, write a function called `normcut(A,y)` which uses `cut(A,y)` and `vols(A,y)` to compute the binary normalized cut objective of a matrix `A` with clustering vector `y`.
***Note***: No for-loops in this part. Each of these functions should be implemented in five lines or less.
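A compact sketch of these two functions, assuming `y` is a NumPy array of 0s and 1s, could look like this:
```
def vols(A, y):
    d = A.sum(axis=1)                        # degrees (row-sums) of A
    return d[y == 0].sum(), d[y == 1].sum()

def normcut(A, y):
    v0, v1 = vols(A, y)
    return cut(A, y) * (1 / v0 + 1 / v1)
```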
Now, compare the `normcut` objective using both the true labels `y` and the fake labels you generated above. What do you observe about the normcut for the true labels when compared to the normcut for the fake labels?
## Part C
We have now defined a normalized cut objective which takes small values when the input clusters are (a) joined by relatively few entries in $A$ and (b) not too small. One approach to clustering is to try to find a cluster vector `y` such that `normcut(A,y)` is small. However, this is an NP-hard combinatorial optimization problem, which means that it may not be possible to find the best clustering in a practical amount of time, even for relatively small data sets. We need a math trick!
Here's the trick: define a new vector $\mathbf{z} \in \mathbb{R}^n$ such that:
$$
z_i =
\begin{cases}
\frac{1}{\mathbf{vol}(C_0)} &\quad \text{if } y_i = 0 \\
-\frac{1}{\mathbf{vol}(C_1)} &\quad \text{if } y_i = 1 \\
\end{cases}
$$
Note that the signs of the elements of $\mathbf{z}$ contain all the information from $\mathbf{y}$: if $i$ is in cluster $C_0$, then $y_i = 0$ and $z_i > 0$.
Next, if you like linear algebra, you can show that
$$\mathbf{N}_{\mathbf{A}}(C_0, C_1) = 2\frac{\mathbf{z}^T (\mathbf{D} - \mathbf{A})\mathbf{z}}{\mathbf{z}^T\mathbf{D}\mathbf{z}}\;,$$
where $\mathbf{D}$ is the diagonal matrix with nonzero entries $d_{ii} = d_i$, and where $d_i = \sum_{j = 1}^n a_{ij}$ is the degree (row-sum) from before.
1. Write a function called `transform(A,y)` to compute the appropriate $\mathbf{z}$ vector given `A` and `y`, using the formula above.
2. Then, check the equation above that relates the matrix product to the normcut objective, by computing each side separately and checking that they are equal.
3. While you're here, also check the identity $\mathbf{z}^T\mathbf{D}\mathbb{1} = 0$, where $\mathbb{1}$ is the vector of `n` ones (i.e. `np.ones(n)`). This identity effectively says that $\mathbf{z}$ should contain roughly as many positive as negative entries.
#### Programming Note
You can compute $\mathbf{z}^T\mathbf{D}\mathbf{z}$ as `z@D@z`, provided that you have constructed these objects correctly.
#### Note
The equation above is exact, but computer arithmetic is not! `np.isclose(a,b)` is a good way to check whether `a` is "close" to `b`, in the sense that they differ by less than a small numerical tolerance.
Also, still no for-loops.
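Here is a sketch of how these three steps might look, again assuming `y` is a NumPy array of 0s and 1s and reusing the `vols` and `normcut` functions from Part B:
```
import numpy as np

def transform(A, y):
    # z holds 1/vol(C0) for points in cluster 0 and -1/vol(C1) for points in cluster 1.
    v0, v1 = vols(A, y)
    return np.where(y == 0, 1 / v0, -1 / v1)

z = transform(A, y)
D = np.diag(A.sum(axis=1))

# Check the matrix formula against the normcut objective.
print(np.isclose(2 * (z @ (D - A) @ z) / (z @ D @ z), normcut(A, y)))

# Check the identity z^T D 1 = 0.
print(np.isclose(z @ D @ np.ones(len(y)), 0))
```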
## Part D
In the last part, we saw that the problem of minimizing the normcut objective is mathematically related to the problem of minimizing the function
$$ R_\mathbf{A}(\mathbf{z})\equiv \frac{\mathbf{z}^T (\mathbf{D} - \mathbf{A})\mathbf{z}}{\mathbf{z}^T\mathbf{D}\mathbf{z}} $$
subject to the condition $\mathbf{z}^T\mathbf{D}\mathbb{1} = 0$. It's actually possible to bake this condition into the optimization, by substituting for $\mathbf{z}$ the orthogonal complement of $\mathbf{z}$ relative to $\mathbf{D}\mathbf{1}$. In the code below, I define an `orth_obj` function which handles this for you.
Use the `minimize` function from `scipy.optimize` to minimize the function `orth_obj` with respect to $\mathbf{z}$. Note that this computation might take a little while. Explicit optimization can be pretty slow! Give the minimizing vector the name `z_min`.
```
def orth(u, v):
return (u @ v) / (v @ v) * v
e = np.ones(n)
d = D @ e
def orth_obj(z):
z_o = z - orth(z, d)
return (z_o @ (D - A) @ z_o)/(z_o @ D @ z_o)
```
**Note**: there's a cheat going on here! We originally specified that the entries of $\mathbf{z}$ should take only one of two values (back in Part C), whereas now we're allowing the entries to have *any* value! This means that we are no longer exactly optimizing the normcut objective, but rather an approximation. This cheat is so common that it deserves a name: it is called the *continuous relaxation* of the normcut problem.
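For reference, one way to run the optimization (a sketch, assuming `orth_obj` and `n` are defined as above and that a random starting point is acceptable) is:
```
from scipy.optimize import minimize
import numpy as np

result = minimize(orth_obj, np.random.rand(n))  # this can take a while
z_min = result.x                                # the minimizing vector
```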
## Part E
Recall that, by design, only the sign of `z_min[i]` actually contains information about the cluster label of data point `i`. Plot the original data, using one color for points such that `z_min[i] < 0` and another color for points such that `z_min[i] >= 0`.
Does it look like we came close to correctly clustering the data?
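One minimal way to make this plot, assuming `z_min` is the vector found in Part D and that `matplotlib.pyplot` is imported as `plt` as in the earlier cells, is:
```
plt.scatter(X[:, 0], X[:, 1], c = (z_min < 0))
```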
## Part F
Explicitly optimizing the orthogonal objective is *way* too slow to be practical. If spectral clustering required that we do this each time, no one would use it.
The reason that spectral clustering actually matters, and indeed the reason that spectral clustering is called *spectral* clustering, is that we can actually solve the problem from Part E using eigenvalues and eigenvectors of matrices.
Recall that what we would like to do is minimize the function
$$ R_\mathbf{A}(\mathbf{z})\equiv \frac{\mathbf{z}^T (\mathbf{D} - \mathbf{A})\mathbf{z}}{\mathbf{z}^T\mathbf{D}\mathbf{z}} $$
with respect to $\mathbf{z}$, subject to the condition $\mathbf{z}^T\mathbf{D}\mathbb{1} = 0$.
The Rayleigh-Ritz Theorem states that the minimizing $\mathbf{z}$ must be the solution with smallest eigenvalue of the generalized eigenvalue problem
$$ (\mathbf{D} - \mathbf{A}) \mathbf{z} = \lambda \mathbf{D}\mathbf{z}\;, \quad \mathbf{z}^T\mathbf{D}\mathbb{1} = 0$$
which is equivalent to the standard eigenvalue problem
$$ \mathbf{D}^{-1}(\mathbf{D} - \mathbf{A}) \mathbf{z} = \lambda \mathbf{z}\;, \quad \mathbf{z}^T\mathbb{1} = 0\;.$$
Why is this helpful? Well, $\mathbb{1}$ is actually the eigenvector with smallest eigenvalue of the matrix $\mathbf{D}^{-1}(\mathbf{D} - \mathbf{A})$.
> So, the vector $\mathbf{z}$ that we want must be the eigenvector with the *second*-smallest eigenvalue.
Construct the matrix $\mathbf{L} = \mathbf{D}^{-1}(\mathbf{D} - \mathbf{A})$, which is often called the (normalized) *Laplacian* matrix of the similarity matrix $\mathbf{A}$. Find the eigenvector corresponding to its second-smallest eigenvalue, and call it `z_eig`. Then, plot the data again, using the sign of `z_eig` as the color. How did we do?
In fact, `z_eig` should be proportional to `z_min`, although the match won't be exact because the numerical minimization only has limited precision.
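A sketch of this computation, using plain NumPy and assuming `A` is the similarity matrix from Part A, might look like:
```
import numpy as np

D = np.diag(A.sum(axis=1))
L = np.linalg.inv(D) @ (D - A)          # normalized Laplacian

eigvals, eigvecs = np.linalg.eig(L)
order = eigvals.real.argsort()          # L is not symmetric, so drop any tiny imaginary parts
z_eig = eigvecs[:, order[1]].real       # eigenvector of the second-smallest eigenvalue

plt.scatter(X[:, 0], X[:, 1], c = (z_eig < 0))
```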
## Part G
Synthesize your results from the previous parts. In particular, write a function called `spectral_clustering(X, epsilon)` which takes in the input data `X` (in the same format as Part A) and the distance threshold `epsilon` and performs spectral clustering, returning an array of binary labels indicating whether data point `i` is in group `0` or group `1`. Demonstrate your function using the supplied data from the beginning of the problem.
#### Notes
Despite the fact that this has been a long journey, the final function should be quite short. You should definitely aim to keep your solution under 10 very compact lines.
**In this part only, please supply an informative docstring!**
#### Outline
Given data, you need to:
1. Construct the similarity matrix.
2. Construct the Laplacian matrix.
3. Compute the eigenvector with second-smallest eigenvalue of the Laplacian matrix.
4. Return labels based on this eigenvector.
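Following this outline, a minimal sketch of such a function (under the same assumptions as the earlier parts) could be:
```
import numpy as np
from sklearn.metrics import pairwise_distances

def spectral_clustering(X, epsilon):
    """
    Perform binary spectral clustering on the data X.

    X is an (n, 2) array of data points and epsilon is the distance
    threshold used to build the similarity matrix. Returns an array
    of n binary labels (0 or 1), one per data point.
    """
    A = (pairwise_distances(X) < epsilon).astype(float)   # similarity matrix
    np.fill_diagonal(A, 0)
    D = np.diag(A.sum(axis=1))
    L = np.linalg.inv(D) @ (D - A)                        # normalized Laplacian
    eigvals, eigvecs = np.linalg.eig(L)
    z_eig = eigvecs[:, eigvals.real.argsort()[1]].real    # second-smallest eigenvalue
    return (z_eig < 0).astype(int)
```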
## Part H
Run a few experiments using your function, by generating different data sets using `make_moons`. What happens when you increase the `noise`? Does spectral clustering still find the two half-moon clusters? For these experiments, you may find it useful to increase `n` to `1000` or so -- we can do this now, because of our fast algorithm!
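A sketch of such an experiment, assuming the `spectral_clustering` function from Part G and keeping `epsilon = 0.4`, might be:
```
for noise in [0.05, 0.1, 0.15, 0.2]:
    X, y = datasets.make_moons(n_samples=1000, shuffle=True, noise=noise, random_state=None)
    labels = spectral_clustering(X, epsilon=0.4)
    plt.figure()
    plt.scatter(X[:, 0], X[:, 1], c=labels)
    plt.title(f"noise = {noise}")
```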
## Part I
Now try your spectral clustering function on another data set -- the bull's eye!
```
n = 1000
X, y = datasets.make_circles(n_samples=n, shuffle=True, noise=0.05, random_state=None, factor = 0.4)
plt.scatter(X[:,0], X[:,1])
```
There are two concentric circles. As before, k-means will not do well here at all.
```
km = KMeans(n_clusters = 2)
km.fit(X)
plt.scatter(X[:,0], X[:,1], c = km.predict(X))
```
Can your function successfully separate the two circles? Some experimentation here with the value of `epsilon` is likely to be required. Try values of `epsilon` between `0` and `1.0` and describe your findings. For roughly what values of `epsilon` are you able to correctly separate the two rings?
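One way to organize this experiment, again assuming the `spectral_clustering` function from Part G, is a simple sweep over candidate values:
```
for epsilon in [0.2, 0.3, 0.4, 0.5, 0.6]:
    labels = spectral_clustering(X, epsilon)
    plt.figure()
    plt.scatter(X[:, 0], X[:, 1], c=labels)
    plt.title(f"epsilon = {epsilon}")
```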
## Part J
Great work! Turn this notebook into a blog post with plenty of helpful explanation for your reader.
```
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
import LeNet5_infernece
import os
import numpy as np
```
#### 1. Define the parameters of the neural network
```
BATCH_SIZE = 100
LEARNING_RATE_BASE = 0.01
LEARNING_RATE_DECAY = 0.99
REGULARIZATION_RATE = 0.0001
TRAINING_STEPS = 6000
MOVING_AVERAGE_DECAY = 0.99
```
#### 2. Define the training process
```
def train(mnist):
    # Define the placeholder for the input data as a 4-dimensional matrix
x = tf.placeholder(tf.float32, [
BATCH_SIZE,
LeNet5_infernece.IMAGE_SIZE,
LeNet5_infernece.IMAGE_SIZE,
LeNet5_infernece.NUM_CHANNELS],
name='x-input')
y_ = tf.placeholder(tf.float32, [None, LeNet5_infernece.OUTPUT_NODE], name='y-input')
regularizer = tf.contrib.layers.l2_regularizer(REGULARIZATION_RATE)
y = LeNet5_infernece.inference(x,False,regularizer)
global_step = tf.Variable(0, trainable=False)
    # Define the loss function, learning rate, moving average operation, and training process.
variable_averages = tf.train.ExponentialMovingAverage(MOVING_AVERAGE_DECAY, global_step)
variables_averages_op = variable_averages.apply(tf.trainable_variables())
cross_entropy = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=y, labels=tf.argmax(y_, 1))
cross_entropy_mean = tf.reduce_mean(cross_entropy)
loss = cross_entropy_mean + tf.add_n(tf.get_collection('losses'))
learning_rate = tf.train.exponential_decay(
LEARNING_RATE_BASE,
global_step,
mnist.train.num_examples / BATCH_SIZE, LEARNING_RATE_DECAY,
staircase=True)
train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss, global_step=global_step)
with tf.control_dependencies([train_step, variables_averages_op]):
train_op = tf.no_op(name='train')
    # Initialize the TensorFlow persistence class (Saver).
saver = tf.train.Saver()
with tf.Session() as sess:
tf.global_variables_initializer().run()
for i in range(TRAINING_STEPS):
xs, ys = mnist.train.next_batch(BATCH_SIZE)
reshaped_xs = np.reshape(xs, (
BATCH_SIZE,
LeNet5_infernece.IMAGE_SIZE,
LeNet5_infernece.IMAGE_SIZE,
LeNet5_infernece.NUM_CHANNELS))
_, loss_value, step = sess.run([train_op, loss, global_step], feed_dict={x: reshaped_xs, y_: ys})
if i % 50 == 0:
print("After %d training step(s), loss on training batch is %g." % (step, loss_value))
```
#### 3. Main program entry point
```
def main(argv=None):
mnist = input_data.read_data_sets("../../datasets/MNIST_data", one_hot=True)
train(mnist)
if __name__ == '__main__':
main()
```
# Basic guide to styling Jupyter Notebook text cells using Markdown
This guide is meant to be a quick reference for styling text in Jupyter Notebook. You can find more details on [John Gruber's](https://daringfireball.net/projects/markdown/) page, on the [Github](https://docs.github.com/en/github/writing-on-github) page, or in this [Cheatsheet](https://github.com/adam-p/markdown-here/wiki/Markdown-Cheatsheet).
**Disclaimer**: this notebook was written using *Google Colab* in the *Brave* browser. Different behavior may occur when it is used in other browsers.
# Headings
We can split cells into six different heading levels, which helps organize notebooks. A heading is rendered from a hash symbol (\#) at the beginning of a line.
<br>
<br>
With just 1 hash (\#), we get a level 1 heading.
With 2 hashes (\#\#), we get a level 2 heading.
With 3 hashes (\#\#\#), we get a level 3 heading.
With 4 hashes (\#\#\#\#), we get a level 4 heading.
With 5 hashes (\#\#\#\#\#), we get a level 5 heading.
With 6 hashes (\#\#\#\#\#\#), we get a level 6 heading.
Below are examples of how each type of heading is rendered.
# Level 1 heading
\# Level 1 heading
## Level 2 heading
\## Level 2 heading
### Level 3 heading
\### Level 3 heading
#### Level 4 heading
\#### Level 4 heading
##### Level 5 heading
\##### Level 5 heading
###### Level 6 heading
\###### Level 6 heading
# Emphasis
## Italics
To italicize text, use an asterisk (\*) or an underscore (\_):
This text \*in italics\* using an asterisk: *in italics*
This text \_in italics\_ using an underscore: _in italics_
## Bold
To make text bold, use two asterisks (\*\*) or two underscores (\_\_):
This text \*\*in bold\*\* using asterisks: **in bold**
This text \_\_in bold\_\_ using underscores: __in bold__
## Italics and bold
To make text both italic and bold at the same time, use three asterisks (\*\*\*) or three underscores (\_\_\_):
This text \*\*\*in italics and bold\*\*\* using asterisks: ***in italics and bold***
This text \_\_\_in italics and bold\_\_\_ using underscores: ___in italics and bold___
## Strikethrough
To strike through text, use two tildes (\~\~):
This text \~\~struck through\~\~: ~~struck through~~
## Underline
There is no specific Markdown syntax for underlined text, but we can use `HTML` syntax via the `<ins></ins>` tag.
This text \<ins>underlined\</ins>: <ins>underlined</ins>
# Lists
## Item lists (unordered lists)
To create an unordered list, we can use an asterisk (\*), a minus sign (\-), and/or a plus sign (\+) to add each item of the list.
For example:
\- Papaya
\* Lemon
\+ Apple
will be rendered like this:
- Papaya
* Lemon
+ Apple
To insert a list inside another list, just add an indent (tab) from the left margin. For example:
\* Apple
  \* Gala
  \* Fuji
It will be rendered as follows:
+ Apple
    + Gala
    + Fuji
And to add further lists within lists (nested lists), just add one more indent (tab). For example:
\- Apple
  \- Gala
  \- Fuji
    \- green
    \- ripe
It will be rendered as follows:
+ Apple
    + Gala
    + Fuji
        + green
        + ripe
## Numbered lists (ordered lists)
To insert a numbered list, just start the line with any number followed by a period; it does not matter which number it is. For example:
1\. Item 1
2\. Item 2
3\. Item 3
It will be rendered like this:
1. Item 1
2. Item 2
3. Item 3
And the following form
1\. Item 1
20\. Item 2
3100\. Item 3
will be rendered exactly like the previous one:
1. Item 1
20. Item 2
3100. Item 3
To start at a number other than 1, just change the first number of the list. For example:
10\. Item 1
2\. Item 2
3100\. Item 3
It will be rendered starting at the number 10:
10. Item 1
2. Item 2
3100. Item 3
**Note**: This item behaves differently depending on where it is rendered. In Google Colab, the behavior is as described above. On GitHub, the numbering will always start at 1. Always check whether the described behavior matches what you actually get.
And to add a list inside this list, just use a tab. For example:
1\. Item 1
2\. Item 2
3\. Item 3
  1\. Item 1
  2\. Item 2
  3\. Item 3
It will be rendered as follows:
1. Item 1
2. Item 2
3. Item 3
    1. Item 1
    2. Item 2
    3. Item 3
Ordered and unordered lists can be nested within one another. For example:
1\. Item 1
2\. Item 2
3\. Item 3
  \- Item 1
  \- Item 2
  \- Item 3
It will be rendered as follows:
1. Item 1
2. Item 2
3. Item 3
    - Item 1
    - Item 2
    - Item 3
# Tables
To create a table we use the vertical bar (|) to mark the beginning and the end of each cell of the table. The first row of the table will always be interpreted as the table header, and the header and the body of the table must be separated by at least one dash (-).
For example,
\| Header 1 | Header 2 | Header 3 |
\| - | - | - |
\| Row 1 | Row 2 | Row 3 |
It will be rendered as follows:
| Header 1 | Header 2 | Header 3 |
| - | - | - |
| Row 1 | Row 2 | Row 3 |
By default, the header is centered while the rows are left-aligned. To change the alignment of a column's rows to the right, just place a colon (:) after the dash that separates the header from the rows of the table. For example:
\| Header 1 | Header 2 | Header 3 |
\| - | - | -: |
\| Row 1 | Row 2 | Right |
It will be rendered like this:
| Header 1 | Header 2 | Header 3 |
| - | - | -: |
| Row 1 | Row 2 | Right |
To center the content of a column's rows, just place a colon before and after the dash. For example,
\| Header 1 | Header 2 | Header 3 |
\| - | :-: | - |
\| Row 1 | Center | Row 3 |
It will be rendered like this:
| Header 1 | Header 2 | Header 3 |
| - | :-: | - |
| Row 1 | Center | Row 3 |
<br>
**Note**: The cell alignment mentioned above does not work on GitHub.
# Code
It is possible to render text so that it looks like code. To insert code inline, just place the code between two backticks (\`).
For example, to render x = 10 as code, add one backtick before and another after the code: \`x = 10\`.
Which will be rendered like this: `x = 10`
To insert a code block, we add three backticks (```) before the block and another three after the block.
For example, to insert a code block containing a function that adds two numbers, we do the following:
\```
def soma(x, y):
  return x + y
\```
Which will be rendered like this:
```
def soma(x, y):
    return x + y
```
For the code style of a specific language to be applied, just place the name of the language after the first line of backticks. For example:
\```python
def soma(x, y):
  return x + y
\```
It will be rendered with the Python code elements highlighted:
```python
def soma(x, y):
    return x + y
```
# Other
## Blockquote
We can insert a highlighted paragraph by using a greater-than symbol (\>) at the beginning of the paragraph. For example,
\> This will be highlighted as a block quotation
It will be rendered like this:
> This will be highlighted as a block quotation
## Links
We can add links throughout the text. To do this, we use the structure \[name that will appear](full link).
For example, to insert a link that leads to the Google website, we use the following notation:
Click \[here](https://www.google.com/) to access Google.
Which will be rendered as follows:
Click [here](https://www.google.com/) to access Google.
## Line breaks
To separate two paragraphs, just leave a whole blank line between them, or insert two blank spaces after the end of the first paragraph. We can also use the `HTML` tag `<br>` to break a line, adding this tag at the end of the paragraph.
## Horizontal rule
To add a horizontal rule, just add three dashes (\---) on an empty line.
For example,
\---
It will be rendered like this:
---
## Images
To insert an image, just use the following notation:
\!\[alt text]\(link to the image "title text shown when hovering the mouse over the image").
For example,
\!\[alt text]\(https://raw.githubusercontent.com/andersonmdcanteli/matplotlib-course/main/logo/marca_puzzle.png "title text shown when hovering the mouse over the image").
It will be rendered like this:
.
Hover the mouse over the image and observe the text that appears. To change this text to something more suitable, just change the text placed between the double quotes:
.
## Equations
It is possible to render equations in notebooks using LaTeX code. However, the equation code must be wrapped in dollar signs ($).
To insert an equation inside a paragraph, we use one (1) dollar sign before and another after the equation. For example:
<br>
The volume of a sphere refers to its interior space, and is calculated with the formula \\$V_{esfera} = \frac{4}{3}\pi r^{3}\\$, where \\$r\\$ is the radius of the sphere.
<br>
The text above will be rendered as follows:
The volume of a sphere refers to its interior space, and is calculated with the formula $V_{esfera} = \frac{4}{3}\pi r^{3}$, where $r$ is the radius of the sphere.
<br>
If there is no text accompanying the equation, it will be rendered left-justified. For example:
<br>
\\$V_{esfera} = \frac{4}{3}\pi r^{3}\$
<br>
It will be rendered like this:
$V_{esfera} = \frac{4}{3}\pi r^{3}$
<br>
For the equation to be rendered at the center of the notebook, use two dollar signs (\$\$) instead of just one. For example:
<br>
\\$\\$V_{esfera} = \frac{4}{3}\pi r^{3}\$\$
<br>
It will be rendered like this:
$$V_{esfera} = \frac{4}{3}\pi r^{3}$$
<br>
**Note**: the extra backslashes (\\) will be rendered on GitHub in the equations that are intentionally left unrendered above. Remove them for the equations to render correctly.
<br>
To generate the LaTeX code for equations you can use the site [mathurl.com](http://mathurl.com/), where you can compose equations quite simply.
# Final remarks
Since notebooks run in web browsers, they can also be edited using `HTML` tags and `css` code. But the idea of notebooks is to be something simple and quick to use, so it is more appropriate to use Markdown to style your text efficiently.
# About
**Author:** Anderson Marcos Dias Canteli, *PhD in Food Engineering*
**Last updated on:** 13/06/2021
### Interesting links:
- [GitPage](http://andersonmdcanteli.github.io/)
- [Blog](https://andersoncanteli.wordpress.com/)
- [YouTube channel](https://www.youtube.com/c/AndersonCanteli/)
- [Curriculum lattes](http://lattes.cnpq.br/6961242234529344)
<br>
<img style="float: right" src="https://raw.githubusercontent.com/andersonmdcanteli/matplotlib-course/main/logo/marca_puzzle.png" alt="logo Puzzle in a Mug project" width="400">
# 100 Years of Baby Names in British Columbia
## Data Wrangling
Data wrangling of baby names in British Columbia from 1915 to 2014. The data includes every first name that was chosen five or more times in a given year, and is published by the British Columbia Vital Statistics Agency. Raw data was downloaded from:
- https://catalogue.data.gov.bc.ca/dataset/most-popular-girl-names-for-the-past-100-years
- https://catalogue.data.gov.bc.ca/dataset/most-popular-boys-names-for-the-past-100-years
```
import pandas as pd
%matplotlib inline
```
## Girls Names
```
group = 'girls'
datafile = f'data/raw/bc-popular-{group}-names.csv'
savefile = f'data/processed/bc-popular-{group}-names.csv'
names_in = pd.read_csv(datafile, index_col=0)
names_in.head()
# Stack the data and wrangle with indexes and labels to get the data into a
# convenient format
names = names_in.drop('Total', axis=1).stack().reset_index().set_index('Name')
names.columns = ['Year', 'Count']
# Convert the years from strings to integers
names['Year'] = names['Year'].astype(int)
# Remove all the rows with Count of 0 (we don't need them)
names = names[names['Count'] > 0]
print(names.shape)
names.head()
# Compute the total number of names for each year
yearly_totals = names.groupby('Year').sum()
yearly_totals.columns = ['Yearly Total']
yearly_totals.plot.line()
yearly_totals.head()
# Compute each name's fraction of the total in each year
data = names.merge(yearly_totals, left_on='Year', right_index=True)
data['Fraction'] = data['Count'] / data['Yearly Total']
# Drop the Yearly Total column since we don't need it anymore
data = data.drop('Yearly Total', axis=1)
# Sort by name and then by year
data = data.reset_index().sort_values(['Name', 'Year']).set_index('Name')
data.head()
# Save to CSV file
print(f'Saving to {savefile}')
data.to_csv(savefile)
```
## Boys Names
Consolidate some of the above steps into a function and apply to the boys names data.
```
def process_names_data(names_in):
# Stack the data and wrangle with indexes and labels to get the data into a
# convenient format
names = names_in.drop('Total', axis=1).stack().reset_index().set_index('Name')
names.columns = ['Year', 'Count']
# Convert the years from strings to integers
names['Year'] = names['Year'].astype(int)
# Remove all the rows with Count of 0 (we don't need them)
names = names[names['Count'] > 0]
# Compute the total number of names for each year
yearly_totals = names.groupby('Year').sum()
yearly_totals.columns = ['Yearly Total']
# Compute each name's fraction of the total in each year
data = names.merge(yearly_totals, left_on='Year', right_index=True)
data['Fraction'] = data['Count'] / data['Yearly Total']
# Drop the Yearly Total column since we don't need it anymore
data = data.drop('Yearly Total', axis=1)
# Sort by name and then by year
data = data.reset_index().sort_values(['Name', 'Year']).set_index('Name')
return data
group = 'boys'
datafile = f'data/raw/bc-popular-{group}-names.csv'
savefile = f'data/processed/bc-popular-{group}-names.csv'
boys_names = pd.read_csv(datafile, index_col=0)
boys_data = process_names_data(boys_names)
print(f'Saving to {savefile}')
boys_data.to_csv(savefile)
boys_data.head()
```
```
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
con = [1093,1134,1293,1428,1477,1481,1557,1591,1316,1389]
cas = [1119,910,873,880,1282,1584,875,1009,1489,989]
all_file = [1119,910,873,880,1282,1584,875,1009,1489,989,1093,1134,1293,1428,1477,1481,1557,1591,1316,1389]
col = ['chr','start','ref','alt','ref_gene','func_refgene','avsnp147','clinvar_clinvar','intervar_intervar_and_evidence']
rename_col= ['CHR','POS','REF','ALT','Ref_Gene','Func_RefGene','avsnp147','Clinvar','Intervar']
```
### Loop For Case
```
case_merged = pd.DataFrame(columns=rename_col)
case_append = pd.DataFrame(columns=rename_col)
for file in cas:
case = pd.read_excel('../intervar_res.xlsx',sheet_name=str(file)+"_case",usecols=col)
case.columns = rename_col
case.Clinvar = case.Clinvar.map(lambda x : x[9:])
case.Intervar = case.Intervar.map(lambda x : x[10:])
case_append = case_append.append(case)
case_merged = case_merged.merge(case,how = 'outer', on = rename_col)
case_append.shape, case_merged.shape
plt.figure(figsize=(15,8))
sns.countplot(case_append.Ref_Gene,
order = case_append.Ref_Gene.value_counts().index[:20])
```
##### HYDIN
* https://www.genecards.org/cgi-bin/carddisp.pl?gene=HYDIN
* https://www.malacards.org/card/kartagener_syndrome
* https://www.malacards.org/card/primary_ciliary_dyskinesia This link shows details about infertility
##### ESPN
* https://ghr.nlm.nih.gov/gene/ESPN#conditions `USHER SYNDROME-- TYPE 1M, Nonsyndromic hearing loss , profound hearing loss, and vestibular areflexia in some patients. [MIM:609006]`
##### AHDC1
* https://ghr.nlm.nih.gov/condition/xia-gibbs-syndrome ` global developmental delay, hypotonia, obstructive sleep apnea, intellectual disability and seizures. `
##### ZC3H4
* https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5587837/
##### ZNF462
* https://www.genecards.org/cgi-bin/carddisp.pl?gene=ZNF462 `Developmental anomalies during embryogenesis, Rare eye diseases, Rare bone diseases `
##### KMT2B
* https://www.genecards.org/cgi-bin/carddisp.pl?gene=KMT2B `Plays a central role in beta-globin locus transcription regulation by being recruited by NFE2 (PubMed:17707229). Plays an important role in controlling bulk H3K4me during oocyte growth and preimplantation development (By similarity). Required during the transcriptionally active period of oocyte growth for the establishment and/or maintenance of bulk H3K4 trimethylation (H3K4me3), global transcriptional silencing that preceeds resumption of meiosis, oocyte survival and normal zygotic genome activation (By similarity). `
##### ZMYM3
* https://www.genecards.org/cgi-bin/carddisp.pl?gene=ZMYM3 `Diseases associated with ZMYM3 include Dystonia 3, Torsion, X-Linked and Myasthenic Syndrome, Congenital, 6, Presynaptic. An important paralog of this gene is ZMYM2.` `From UniProt:A chromosomal aberration involving ZMYM3 may be a cause of X-linked mental retardation in Xq13.1. Translocation t(X;13)(q13.1;?).`
##### RUSC1
##### ALDH2
* https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4175174/ `Related to pregnancy`
#### TRRAP
`DEVELOPMENTAL DELAY WITH OR WITHOUT DYSMORPHIC FACIES AND AUTISM`
#### TTN
* https://synapse.koreamed.org/Synapse/Data/PDFData/0044KJP/kjp-25-284.pdf
* https://ghr.nlm.nih.gov/gene/TTN
* https://www.malacards.org/card/spinocerebellar_ataxia_autosomal_recessive_16
##### ADCY2
*
## Loop for Control
```
control_merged = pd.DataFrame(columns=rename_col)
control_append = pd.DataFrame(columns=rename_col)
for file in con:
control = pd.read_excel('../intervar_res.xlsx',sheet_name=str(file)+"_control",usecols=col)
control.columns = rename_col
control.Clinvar = control.Clinvar.map(lambda x : x[9:])
control.Intervar = control.Intervar.map(lambda x : x[10:])
control_append = control_append.append(control)
control_merged = control_merged.merge(control,how = 'outer', on = rename_col)
control_append.shape, control_merged.shape
plt.figure(figsize=(10,8))
sns.countplot(control_append.Ref_Gene,
order = control_append.Ref_Gene.value_counts().index[:10])
#case_append.Ref_Gene.value_counts().index[:20]
case_gene = case_append.Ref_Gene.value_counts()
#display(case_append.Ref_Gene.value_counts()[5:25])
#control_append.Ref_Gene.value_counts().index[:20]
#display(control_append.Ref_Gene.value_counts()[5:25])
control_gene = control_append.Ref_Gene.value_counts()
a=pd.DataFrame([case_gene,control_gene])
#We are Counting Repeating genes on Case and Control
pd.set_option('display.max_columns', 1000)
a = a.T.fillna(0)
a.reset_index(level = 0,inplace=True)
a.columns = ["GENE","Case_Gene","Control_Gene"]
a.to_excel("../case_con_gene_table.xlsx",index = False)
a.head()
gene_gestational = ["WNT4", 'TGFBR3', 'BOLA3', 'EEFSEC' , 'ADCY5','SFTA2', 'BNC2','SEC61B','MPP7','AGTR2','RAP2C']
gene_preterm = ['EEFSEC','EBF1','TEKT3','TGFB1','AGTR2']
#a[a.columns[a.isnull().any()]].T
#a.columns.isin(gene_gestational)
#a.columns.isin(gene_preterm)
```
## We mainly find two gene positions, ALDH2 and ZFHX3, in cases but not in controls
```
our_pos = [139540103,108612304,75534704,75526563,75526563,75526780,75506172,75522394,220876353,90002548,75498490,75531681,\
96753149,75498490,75531681,96753149,75499347,75517725,96738585,6960776,148550156,75533452,155292758,27878534]
#last 2 digits are sample data from each Case and Control
case_append[case_append.POS.isin(our_pos)]
control_append[control_append.POS.isin(our_pos)]
case_append.shape,control_append.shape
#case_merged.head()
```
## Plotting All in a Graph
```
case_merged = pd.DataFrame(columns=rename_col)
case_append = pd.DataFrame(columns=rename_col)
for file in cas:
case = pd.read_excel('../intervar_res.xlsx',sheet_name=str(file)+"_case",usecols=col)
case.columns = rename_col
case.Clinvar = case.Clinvar.map(lambda x : x[9:])
case.Intervar = case.Intervar.map(lambda x : x[10:])
case_append = case_append.append(case)
case_merged = case_merged.merge(case,how = 'outer', on = rename_col)
case_append.shape, case_merged.shape
```
### Only Pathegonic from Case & Control
```
#case_append.Intervar = case_append.Intervar.apply(lambda x : x.split()[0]))
#case_append.Intervar = case_append.Intervar.apply(lambda x : x[0])
control_append.Intervar = control_append.Intervar.apply(lambda x: x.split()[0] )
case_path = case_append[case_append.Intervar=="Pathogenic"]
control_path = control_append[control_append.Intervar=="Pathogenic"]
```
### Pathogenic variants in cases but not in controls
None found according to this data!
```
case_path[~(case_path.Intervar.isin(control_path.Intervar))]
case_path.Ref_Gene.nunique() , control_path.Ref_Gene.nunique()
case_path.shape, control_path.shape
```
# Domain Specific Ranking Using Word2Vec
**PROTIP!** Check the other notebook first and then return here :)
Now, let's see some other useful features of Word2Vec.
## Before starting...
As usual, let's import the tools we'll need.
```
%matplotlib inline
from helpers import *
import inspect
import matplotlib.pyplot as plt
from IPython.core.pylabtools import figsize
import geopandas as gpd
figsize(20, 20)
```
Of course, let's load our pre-trained model into `gensim`:
```
# First, let's download the model.
unzipped = download_google_news_model()
# Then, let's create a Word2Vec model using the downloaded parameters.
gnews_model = gensim.models.KeyedVectors.load_word2vec_format(unzipped, binary=True)
```
## Finding entity classes in embeddings
In high-dimensional spaces there are often subspaces that contain only entities of a single class. How can we find them? Well, we can train a classifier that learns to separate the true from the false positives. Our choice is an SVM.
Let's try to find countries inside the Google News Word2Vec space.
Let's begin by finding things similar to Germany:
```
get_most_similar_terms(gnews_model, 'Germany')
```
As expected there are a number of countries nearby, but also other things that definitely aren't countries. What this tells us is that the concept of country isn't a point, but a region in Word2Vec space. Hence, we need a good classifier to find such a region.
Let's build a dataset. First, we need some positive examples (i.e. countries). In `resources/countries.csv` we have data on some countries. Let's load them and show a few. We'll use the following function for that purpose:
```
print(inspect.getsource(load_countries))
countries = load_countries()
for country in countries[:10]:
print(country)
```
Great. Now we need negative examples. This one is easier. There are around 200 countries, which represent 200 words in a corpus of millions, so if we sample words randomly from our Word2Vec model, the chance of getting false negatives (countries ending up in our not_countries set) is really, really, **really** small.
Let's use the `random_sample_words` for this:
```
print(inspect.getsource(random_sample_words))
NEGATIVE_SAMPLE_SIZE = 5000
not_countries = random_sample_words(gnews_model, NEGATIVE_SAMPLE_SIZE)
print(not_countries[:10])
```
Let's now create our labeled set.
```
print(inspect.getsource(create_training_set))
labeled, X, y = create_training_set(gnews_model, countries, not_countries)
print(f'# labeled: {len(labeled)}')
print(labeled[:20])
print(f'# X: {len(X)}')
print(f'# y: {len(y)}')
```
Given our data comes already shuffled, we can split it into train and test sets manually, like this:
```
TRAINING_FRACTION = 0.7
split_point = int(TRAINING_FRACTION * len(labeled))
X_train = X[:split_point]
y_train = y[:split_point]
X_test = X[split_point:]
y_test = y[split_point:]
```
Now, time to train!
```
classifier = SVC(kernel='linear')
classifier.fit(X_train, y_train)
```
How **precise** is our model?
```
predictions = classifier.predict(X_test)
missed = [country
for (prediction, truth, country) in
zip(predictions, y_test, labeled[split_point:])
if prediction != truth]
precision = 100 - 100 * float(len(missed)) / len(predictions)
print(f'# missed: {len(missed)}')
print(f'Precision: {precision}')
```
We can use our model to make predictions on **all** the words in the pretrained model:
```
print(inspect.getsource(get_all_predictions))
correct, not_correct = get_all_predictions(classifier, gnews_model)
print('Correct sample: ')
print(random.sample(correct, 20))
print('------')
print('Incorrect sample: ')
print(random.sample(not_correct, 20))
```
## Calculating Semantic Distances Inside a Class
Another useful application of Word2Vec embeddings is that we can find words related to a particular topic, concept or class. For instance, we can see what countries are closely related to coffee (maybe because they consume it a lot or because they produce it, like Colombia).
We can perform this task by ranking the members of a class (countries in this case) against the criterion (coffee) based on their relative distance to it.
```
# Links countries to an index.
country_to_index = {country['name']: index for index, country in enumerate(countries)}
# Get the vector associate to each country.
country_vectors = np.asarray([gnews_model[country['name']]
for country
in countries
if country['name'] in gnews_model])
print(f'country_vectors shape: {country_vectors.shape}')
```
Just to verify we're on the right path, let's see what's similar to Canada:
```
distances = np.dot(country_vectors, country_vectors[country_to_index['Canada']])
for index in reversed(np.argsort(distances)[-10:]):
print(countries[index]['name'], distances[index])
```
Well, it makes sense. Most countries related to Canada speak English, and others like Sweden or Slovakia are pretty keen on hockey.
We'll use the function `rank_countries` to determine which countries are more relevant given a particular term or criterion:
```
print(inspect.getsource(rank_countries))
rank_countries(gnews_model, 'coffee', countries, country_vectors)
```
Given the model we are using was trained on Google News, these results are biased by the appearance of these countries and the word 'coffee' in the same context. Nevertheless, they look quite good, don't you think?
## Visualizing Country Data on a Map
We can use all the tools we defined in the cells above to visualize countries that rank higher on a certain criterion on a world map. We'll leverage the power of `GeoPandas` for this task.
Let's start small. We'll just load the entire world :)
```
world = gpd.read_file(gpd.datasets.get_path('naturalearth_lowres'))
world.head()
```
We can use the `map_term` function to highlight countries that rank higher on a certain criterion like, for instance, consuming or producing lots of coffee.
```
print(inspect.getsource(map_term))
map_term(gnews_model, 'coffee', countries, country_vectors, world)
```
Visualizing our data is a crucial part of machine learning. Whether on a map or a more traditional plot, seeing what's happening in our data can lead us to powerful insights with the potential to translate into performant models.
It also tells us if something's not right. For instance, do Greenlandics drink or produce that much coffee? Or is it an effect of the existence of a variation of Irish coffee called Greenlandic coffee?
## Sentiment Classifier
This script trains a sentiment classifier using either the student or the teacher embeddings generated from the tweets.
### Setup
Needs latest version of sklearn
```
!pip uninstall scikit-learn -y
!pip install -U scikit-learn
# Imports
import numpy as np
import torch
from transformers import AutoTokenizer, AutoModel
import sklearn.linear_model
import pickle
from utils import load_csv, read_torch
from distil_funcs import *
from utils import load_pickle
import pickle
import random
from random import shuffle
import pandas as pd
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.linear_model import LogisticRegressionCV
from tqdm import tqdm
# Load Teacher Model for Evaluation
DEVICE = torch.device('cpu')
teacher_model = load_teacher(device=DEVICE)
# Load Student Model for Evaluation
student_config = {
'd_model': 768, # hidden dim of model
'heads': 12, # attention heads
'dropout':0.1, # dropout in network except ffn
'dropout_ffn':0.4, # dropout in ffn
'd_ff': 96, # num features in FFN hidden layer
'n_layers': 2, # num of transformer layers
'n_experts': 40, # number of FFN experts
'load_balancing_loss_ceof': 0.01, # load balancing co-eff, encourages expert diversity
'is_scale_prob': True, # whether to scale the selected expert outputs by routing probability
'drop_tokens': False, # whether to drop tokens
'capacity_factor':1.25, # capacity factor - seemed to work best in Switch Transformer
}
# 3. Create student model
word_embeddings = deepcopy(teacher_model.get_input_embeddings())
compressed_word_embeddings = word_embedding_compression(word_embeddings, student_config['d_model'])
student_model = LaBSE_Switch(config=student_config, word_embeddings_module=compressed_word_embeddings)
# 4. Load state_dict() of trained student
path = 's3://eu1-sagemaker-bucket/borisbubla/experiments/10000.0k/switch/LR0.0005LAY2EXP40D_FF96TEMP9TIME-20210609-174240/Distil_LaBSE_2L_40E_96D'
file = read_torch(path)
student_model.load_state_dict(file)
student_model.eval()
def create_sentence_embeddings(model, tokenizer, sentences, max_length):
encoded_input = tokenizer(sentences, padding=True, truncation=True, max_length=max_length, return_tensors='pt')
with torch.no_grad():
model_output = model(**encoded_input)
embeddings = model_output['pooler_output']
embeddings = torch.nn.functional.normalize(embeddings)
return embeddings.numpy()
def shuffle_lists_together(lst1, lst2):
# Shuffle two lists with same order
temp = list(zip(lst1,lst2))
random.shuffle(temp)
lst1, lst2 = zip(*temp)
return list(lst1), list(lst2)
```
### Sentiment Classification Task
```
# load data
sentiment_train_data = pd.read_csv('data/twitter-2016train-A.txt', sep='\t', header=None)
sentiment_dev_data = pd.read_csv('data/twitter-2016dev-A.txt', sep='\t', header=None)
sentiment_test_data = pd.read_csv('data/twitter-2016test-A.txt', sep='\t', header=None)
train_sentences = sentiment_train_data[2].to_list()
train_labels = sentiment_train_data[1].to_list()
test_sentences = sentiment_test_data[2].to_list()
test_labels = sentiment_test_data[1].to_list()
# shuffle data
train_sentences, train_labels = shuffle_lists_together(train_sentences, train_labels)
test_sentences, test_labels = shuffle_lists_together(test_sentences, test_labels)
# create train embeddings
tokenizer = AutoTokenizer.from_pretrained("sentence-transformers/LaBSE")
embeddings_s = create_sentence_embeddings(model=student_model, tokenizer=tokenizer, sentences=train_sentences, max_length=64)
embeddings_t = create_sentence_embeddings(model=teacher_model, tokenizer=tokenizer, sentences=train_sentences, max_length=64)
print('Average CosSim for these embeddings: ',np.diag(cosine_similarity(embeddings_t, embeddings_s)).mean())
# convert test data to embeddings
test_embeddings_s = create_sentence_embeddings(model=student_model, tokenizer=tokenizer, sentences=test_sentences, max_length=64)
test_embeddings_t = create_sentence_embeddings(model=teacher_model, tokenizer=tokenizer, sentences=test_sentences, max_length=64)
# train model with CV - LaBSE
sentiment_model_labse = sklearn.linear_model.LogisticRegressionCV(cv=5, max_iter=10000)
sentiment_model_labse.fit(embeddings_t, train_labels)
# train model with CV - DistilLaBSE
sentiment_model_student = sklearn.linear_model.LogisticRegressionCV(cv=5, max_iter=10000)
sentiment_model_student.fit(embeddings_s, train_labels)
# make predictions
predictions_labse = sentiment_model_labse.predict(test_embeddings_t)
predictions_student = sentiment_model_student.predict(test_embeddings_s)
predictions_labse
# eval
from sklearn.metrics import classification_report
print(classification_report(test_labels, predictions_labse))
print(classification_report(test_labels, predictions_student))
# create csv files for latex report
dict_report_labse = classification_report(test_labels, predictions_labse, output_dict=True)
dict_report_distil_labse = classification_report(test_labels, predictions_student, output_dict=True)
df = pd.DataFrame.from_dict(dict_report_labse).T.round(2)
df.to_csv('classification_report_sentiment_{}.csv'.format('labse'), index = True)
df = pd.DataFrame.from_dict(dict_report_distil_labse).T.round(2)
df.to_csv('classification_report_sentiment_{}.csv'.format('distil_labse_2L_40E_96D'), index = True)
```
# NLT practical for Chapter 4: Time differences in HiSPARC
This Jupyter notebook accompanies the NLT module "Kosmische straling" (cosmic radiation) and replaces the practical of Chapter 4.
A HiSPARC setup has (at least) two detectors that measure particles. A coincidence occurs when both detectors produce a pulse within 1500 ns of each other. We call this 1500 ns time difference the 'trigger window'.
The goal of this experiment is to investigate how the time differences between the pulses (within the 1500 ns trigger window) are distributed.
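As a minimal sketch of that trigger criterion (pulse times in nanoseconds; the 1500 ns window is the station setting described above):
```
TRIGGER_WINDOW_NS = 1500

def is_coincidence(t1, t2, window=TRIGGER_WINDOW_NS):
    """Two pulses form a coincidence if they arrive within the trigger window."""
    return abs(t2 - t1) <= window

is_coincidence(120.0, 830.0)  # True: |dt| = 710 ns, well inside the 1500 ns window
```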
```
import numpy as np
import tables
import matplotlib.pyplot as plt
from sapphire import download_data
from datetime import datetime
```
# Download the data
```
start = datetime(2019, 11, 12)
end = datetime(2019, 11, 13)
STATION = 104
# download data to HDF5 and read into memory. Close HDF5 file.
station_group = '/s%s' % STATION
with tables.open_file('data.h5', 'w') as data:
download_data(data, station_group, STATION, start, end)
event_tabel = data.get_node(station_group, 'events').read()
```
## Event table
`event_tabel` is an array (table) of events recorded by the HiSPARC station. An event is a coincidence in which each detector measured at least one pulse (particle).
Each row is an event. Every event has a timestamp (`timestamp`).
The remaining columns contain the measured quantities, such as the pulse heights and arrival times.
```
event_tabel
```
## Time differences
The columns `t1` and `t2` contain the (relative) times of the pulses in the event. These times can be interpreted as the arrival times of the detected particles.
We define `dt`: the time difference between the pulses.
`dt` is a list of time differences: one time difference for each event.
```
t1 = event_tabel['t1']
t2 = event_tabel['t2']
dt = t2 - t1
dt
len(dt) # length of the list dt (number of elements in the list)
```
## Histogram of (arrival) time differences
We can plot the time differences `dt` in a histogram.
The maximum time difference that our HiSPARC station records is 1500 ns (the trigger window).
```
plt.figure(figsize=(10,4))
plt.hist(dt, bins=np.arange(-1600, 1600., 2.5), histtype='step')
plt.title('Station %d: tijdverschil t2 - t1' % STATION)
plt.xlabel('dt (ns)')
plt.ylabel('aantal')
plt.show()
```
## A closer look at the peak
We make a histogram of the time differences `dt` between -200 ns and 200 ns.
In this plot the peak of small time differences is much easier to see.
For the time difference between detected pulses (particles), the following holds:
* Large time differences correspond to **accidental coincidences between particles that do NOT originate from the same shower**.
* Small time differences correspond to **coincidences of particles that originate from the same shower**.
```
plt.figure(figsize=(10,4))
plt.hist(dt, bins=np.arange(-200, 200., 2.5), histtype='step')
plt.title('Station %d: tijdverschil t2 - t1' % STATION)
plt.xlabel('dt (ns)')
plt.ylabel('counts')
plt.show()
```
# Exercise 1
Estimate the number of events in which the particles originate from the same shower. Use the histogram.
```
# answer:
```
## Accidental coincidences
By using a logarithmic scale we can examine the accidental coincidences more closely:
```
plt.figure(figsize=(10,4))
plt.hist(dt, bins=np.arange(-1600, 1600., 2.5), histtype='step', log=True)
plt.title('Station %d: tijdverschil t2 - t1' % STATION)
plt.xlabel('dt (ns)')
plt.ylabel('counts')
plt.show()
```
# Exercise 2
Estimate the number of accidental coincidences. Use the histogram with the logarithmic scale.
```
# answer
```
## Selecting time differences
We can select time differences from the list `dt` based on the value of `dt`:
```
dt[dt < 100.] # all elements of the list dt that are smaller than 100
```
# Exercise 3
- Make a list of all elements of `dt` that are NOT accidental coincidences; call the list `dt_shower`.
- Make a list of all elements of `dt` that ARE accidental coincidences; call the list `dt_toevallig`.
- Calculate the percentage of accidental coincidences in this dataset.
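One possible approach is sketched below, assuming a 200 ns boundary between shower coincidences and accidental coincidences (the same cut used in the extra analysis at the end of this notebook); you may choose a different boundary based on your histograms:
```
# Sketch of one possible answer, using a 200 ns boundary (an assumption, not the only valid choice)
dt_shower = dt[abs(dt) < 200.]       # NOT accidental: particles from the same shower
dt_toevallig = dt[abs(dt) >= 200.]   # accidental coincidences
percentage_toevallig = 100. * len(dt_toevallig) / len(dt)
print(percentage_toevallig)
```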
```
# answer
## Extra: pulse heights of accidental coincidences
shower_events = event_tabel[abs(dt) < 200.]
toevallige_coinc = event_tabel[abs(dt) >= 200.]
ph0_shower = shower_events['pulseheights'][:, 0]
ph1_shower = shower_events['pulseheights'][:, 1]
ph_coinc0 = toevallige_coinc['pulseheights'][:, 0]
ph_coinc1 = toevallige_coinc['pulseheights'][:, 1]
plt.figure(figsize=(10,4))
plt.hist(ph0_shower, bins=np.arange(0, 2000., 20), histtype='step', color='black', log=True)
plt.hist(ph1_shower, bins=np.arange(0, 2000., 20), histtype='step', color='red', log=True)
plt.hist(ph_coinc0, bins=np.arange(0, 2000., 20), histtype='step', color='blue', log=True)
#plt.hist(ph_coinc1, bins=np.arange(0, 2000., 20), histtype='step', color='blue', log=True)
plt.xlabel('pulshoogte (ADC)')
plt.title('Station %d: Pulshoogte' % STATION)
plt.ylabel('aantal')
plt.legend(['shower events ch0', 'shower events ch1', 'toevallige coincidenties ch0'])
plt.ylim([1, 5000])
plt.show()
```
```
"""
Interpreting generic statements with RSA models of pragmatics.
Taken from:
[0] http://forestdb.org/models/generics.html
[1] https://gscontras.github.io/probLang/chapters/07-generics.html
"""
import argparse
import collections
import numbers
import torch
from search_inference import HashingMarginal, Search, memoize
import pyro
import pyro.distributions as dist
import pyro.poutine as poutine
torch.set_default_dtype(torch.float64) # double precision for numerical stability
```
# Models
```
def Marginal(fn):
return memoize(lambda *args: HashingMarginal(Search(fn).run(*args)))
# hashable params
Params = collections.namedtuple("Params", ["theta", "gamma", "delta"])
def discretize_beta_pdf(bins, gamma, delta):
"""
discretized version of the Beta pdf used for approximately integrating via Search
"""
shape_alpha = gamma * delta
shape_beta = (1.0 - gamma) * delta
return torch.tensor(
list(
map(
lambda x: (x ** (shape_alpha - 1)) * ((1.0 - x) ** (shape_beta - 1)),
bins,
)
)
)
@Marginal
def structured_prior_model(params):
propertyIsPresent = (
pyro.sample("propertyIsPresent", dist.Bernoulli(params.theta)).item() == 1
)
if propertyIsPresent:
# approximately integrate over a beta by enumerating over bins
beta_bins = [0.01, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 0.99]
ix = pyro.sample(
"bin",
dist.Categorical(
probs=discretize_beta_pdf(beta_bins, params.gamma, params.delta)
),
)
return beta_bins[ix]
return 0
def threshold_prior():
threshold_bins = [0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]
ix = pyro.sample(
"threshold", dist.Categorical(logits=torch.zeros(len(threshold_bins)))
)
return threshold_bins[ix]
def utterance_prior():
utterances = ["generic is true", "mu"]
ix = pyro.sample("utterance", dist.Categorical(logits=torch.zeros(len(utterances))))
return utterances[ix]
def meaning(utterance, state, threshold):
if isinstance(utterance, numbers.Number):
return state == utterance
if utterance == "generic is true":
return state > threshold
if utterance == "generic is false":
return state <= threshold
if utterance == "mu":
return True
if utterance == "some":
return state > 0
if utterance == "most":
return state >= 0.5
if utterance == "all":
return state >= 0.99
return True
```
## Listener0
```
@Marginal
def listener0(utterance, threshold, prior):
state = pyro.sample("state", prior)
m = meaning(utterance, state, threshold)
pyro.factor("listener0_true", 0.0 if m else -99999.0)
return state
```
## Speaker1
```
@Marginal
def speaker1(state, threshold, prior):
s1Optimality = 5.0
utterance = utterance_prior()
L0 = listener0(utterance, threshold, prior)
with poutine.scale(scale=torch.tensor(s1Optimality)):
pyro.sample("L0_score", L0, obs=state)
return utterance
```
## Listener1
```
@Marginal
def listener1(utterance, prior):
state = pyro.sample("state", prior)
threshold = threshold_prior()
S1 = speaker1(state, threshold, prior)
pyro.sample("S1_score", S1, obs=utterance)
return state
```
## Speaker2
```
@Marginal
def speaker2(prevalence, prior):
utterance = utterance_prior()
wL1 = listener1(utterance, prior)
pyro.sample("wL1_score", wL1, obs=prevalence)
return utterance
```
# Playground
```
def inspect_support(model: HashingMarginal, name: str) -> None:
print(name)
for support in model.enumerate_support():
print(" ", support, model.log_prob(support).exp().item())
return None
hasWingsERP = structured_prior_model(Params(0.5, 0.99, 10.0))
inspect_support(hasWingsERP, "hasWingsERP")
wingsPosterior = listener1("generic is true", hasWingsERP)
inspect_support(wingsPosterior, "wingsPosterior")
laysEggsERP = structured_prior_model(Params(0.5, 0.5, 10.0))
inspect_support(laysEggsERP, "laysEggsERP")
eggsPosterior = listener1("generic is true", laysEggsERP)
inspect_support(eggsPosterior, "eggsPosterior")
carriesMalariaERP = structured_prior_model(Params(0.1, 0.01, 2.0))
inspect_support(carriesMalariaERP, "carriesMalariaERP")
malariaPosterior = listener1("generic is true", carriesMalariaERP)
inspect_support(malariaPosterior, "malariaPosterior")
areFemaleERP = structured_prior_model(Params(0.99, 0.5, 50.0))
inspect_support(areFemaleERP, "areFemaleERP")
femalePosterior = listener1("generic is true", areFemaleERP)
inspect_support(femalePosterior, "femalePosterior")
```
# Ex2 - Getting and Knowing your Data
Check out [Chipotle Exercises Video Tutorial](https://www.youtube.com/watch?v=lpuYZ5EUyS8&list=PLgJhDSE2ZLxaY_DigHeiIDC1cD09rXgJv&index=2) to watch a data scientist go through the exercises
This time we are going to pull data directly from the internet.
Special thanks to: https://github.com/justmarkham for sharing the dataset and materials.
### Step 1. Import the necessary libraries
```
import pandas as pd
import numpy as np
```
### Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/justmarkham/DAT8/master/data/chipotle.tsv).
### Step 3. Assign it to a variable called chipo.
```
url = 'https://raw.githubusercontent.com/justmarkham/DAT8/master/data/chipotle.tsv'
chipo = pd.read_csv(url, sep = '\t')
```
### Step 4. See the first 10 entries
```
chipo.head(10)
```
### Step 5. What is the number of observations in the dataset?
```
# Solution 1
chipo.shape[0] # entries <= 4622 observations
# Solution 2
chipo.info() # entries <= 4622 observations
```
### Step 6. What is the number of columns in the dataset?
```
chipo.shape[1]
```
### Step 7. Print the name of all the columns.
```
chipo.columns
```
### Step 8. How is the dataset indexed?
```
chipo.index
```
### Step 9. Which was the most-ordered item?
```
c = chipo.groupby('item_name')
c = c.sum()
c = c.sort_values(['quantity'], ascending=False)
c.head(1)
```
### Step 10. For the most-ordered item, how many items were ordered?
```
c = chipo.groupby('item_name')
c = c.sum()
c = c.sort_values(['quantity'], ascending=False)
c.head(1)
```
### Step 11. What was the most ordered item in the choice_description column?
```
c = chipo.groupby('choice_description').sum()
c = c.sort_values(['quantity'], ascending=False)
c.head(1)
# Diet Coke 159
```
### Step 12. How many items were ordered in total?
```
total_items_orders = chipo.quantity.sum()
total_items_orders
```
### Step 13. Turn the item price into a float
#### Step 13.a. Check the item price type
```
chipo.item_price.dtype
```
#### Step 13.b. Create a lambda function and change the type of item price
```
dollarizer = lambda x: float(x[1:-1])
chipo.item_price = chipo.item_price.apply(dollarizer)
```
#### Step 13.c. Check the item price type
```
chipo.item_price.dtype
```
### Step 14. How much was the revenue for the period in the dataset?
```
revenue = (chipo['quantity']* chipo['item_price']).sum()
print('Revenue was: $' + str(np.round(revenue,2)))
```
### Step 15. How many orders were made in the period?
```
orders = chipo.order_id.value_counts().count()
orders
```
### Step 16. What is the average revenue amount per order?
```
# Solution 1
chipo['revenue'] = chipo['quantity'] * chipo['item_price']
order_grouped = chipo.groupby(by=['order_id']).sum()
order_grouped.mean()['revenue']
# Solution 2
chipo.groupby(by=['order_id']).sum().mean()['revenue']
```
### Step 17. How many different items are sold?
```
chipo.item_name.value_counts().count()
```
<a href="https://colab.research.google.com/github/anshupandey/Deep-Learning-for-structured-Data/blob/main/28082021_time_series_forecasting_LSTM.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from numpy.random import seed
import tensorflow
import random
import os
seed(1)
tensorflow.random.set_seed(1)
random.seed(1)
os.environ['PYTHONHASHSEED'] = '0'
#load dataset
df = pd.read_csv(r"E:\MLIoT\ML\dataset\civil-aircraft-arrivals-departures-passengers-and-mail-changi-airport-monthly (1)\civil-aircraft-arrivals-departures-and-passengers-changi-airport-monthly.csv")
df.shape
df.head()
df.tail()
df = df[df.level_1=='Total Passengers']
df.shape
df.head()
df.month = pd.to_datetime(df.month)
df.index = df.month
df = df[["value"]]
df.head()
plt.figure(figsize=(18,6))
plt.plot(df)
plt.show()
```
# With sequence length = 1
```
#taking data from 2005 to 2016 - 12 years of data
df2 = df[pd.to_datetime("01-01-2005"):pd.to_datetime("12-01-2016")]
df2.shape
df2.head()
df2.tail()
plt.figure(figsize=(18,6))
plt.plot(df2)
plt.show()
df2['feature'] = df2.value.shift(1)
df2 = df2[['feature','value']]
df2.head(10)
df2.dropna(inplace=True)
x = df2.feature
y = df2.value
x = np.array(x).reshape(-1,1,1) # reshaping input data into - samples,timestamps,features
print(x.shape)
print(y.shape)
from tensorflow.keras import models,layers
input_layer = layers.Input(shape=(1,1)) # shape = (timestamps,features)
lstm_layer = layers.LSTM(20,activation='relu',return_sequences=False)(input_layer)
dense = layers.Dense(30,activation='relu')(lstm_layer)
op_layer = layers.Dense(1)(dense)
model = models.Model(inputs=input_layer,outputs=op_layer)
model.summary()
```
### How parameters are calculated for the LSTM layer
#### Understanding the number of parameters in an RNN layer
Consider:
x = number of input features
h = number of neurons in the RNN layer
Note: the number of parameters (weights and biases) does not depend on the length of the sequence, i.e. the number of timestamps.
1. number of weights between the input and the hidden (RNN) layer = x * h
2. The previous output of each RNN-layer neuron is also fed back as an input (recurrent input), so the total number of inputs to the hidden (RNN) layer is x + h
3. Statement 1 above can now be rewritten as: total number of weights between the input layer and the hidden (RNN) layer = total inputs (real inputs (x) + recurrent inputs (h)) * number of hidden neurons (h) = (x+h) * h
4. total biases, one per computational neuron = h
5. total parameters = weights + biases = ((x+h)*h) + h
#### Understanding the number of parameters in an LSTM layer
<img src="https://i.stack.imgur.com/aTDpS.png" width="500" height="200">
As can be seen in the image above, there are 4 computation steps in an LSTM cell:
1. for the forget gate - ft
2. for the input gate - it
3. for the current cell state value - ct
4. for the output gate - ot
so total parameters in an LSTM layer = 4 * parameters of an RNN layer
= 4 * [ ((x+h)*h) + h ]
In this case, input features (x) = 1
number of LSTM cells taken (h) = 20
so total parameters = 4 * [ ((x+h)*h) + h ]
= 4 * [ ((1+20)*20) + 20 ]
= 4 * [420 + 20]
= 4 * 440
= 1760
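As a quick check on the arithmetic above, a minimal standalone sketch (using the same `tensorflow.keras` functional API as this notebook) reproduces the 1,760 figure:
```
from tensorflow.keras import models, layers

# Standalone check: 1 input feature per timestep, 20 LSTM units
inp = layers.Input(shape=(1, 1))    # (timestamps, features)
out = layers.LSTM(20)(inp)
check = models.Model(inputs=inp, outputs=out)
check.summary()                     # the LSTM layer reports 1,760 parameters
print(4 * (((1 + 20) * 20) + 20))   # 1760, matching the formula above
```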
```
model.compile(loss='mae',optimizer='adam')
model.fit(x,y,batch_size=10,shuffle=False,epochs=100)
df2.tail(1)
ip = np.array(df2.tail(1).value).reshape(1,1,1)
model.predict(ip)
forecast = []
for i in range(12):
pred = model.predict(ip)
forecast.append(pred)
ip = pred.reshape(1,1,1)
forecast = np.array(forecast).reshape(-1,1)
forecast
actuals = df[pd.to_datetime("01-01-2017"):pd.to_datetime("12-01-2017")]
actuals
forecast = pd.DataFrame(forecast,index=actuals.index,columns=['value'])
forecast
plt.figure(figsize=(12,5))
plt.plot(actuals,c='g')
plt.plot(forecast,c='r')
plt.show()
```
# With sequence length = 12
```
#taking data from 2004 to 2018 - 15 years of data
df2 = df[pd.to_datetime("01-01-2004"):pd.to_datetime("12-01-2018")]
df2.shape
df2.head()
df2.tail()
from sklearn.preprocessing import MinMaxScaler
mm = MinMaxScaler()
mm.fit(df2)
df2.value = mm.transform(df2)
df2.head()
plt.figure(figsize=(12,5))
plt.plot(df2)
plt.show()
# split a univariate sequence into samples
def split_sequence(sequence, n_steps):
X, y = list(), list()
for i in range(len(sequence)):
# find the end of this pattern
end_ix = i + n_steps
# check if we are beyond the sequence
if end_ix > len(sequence)-1:
break
# gather input and output parts of the pattern
seq_x, seq_y = sequence[i:end_ix], sequence[end_ix]
X.append(seq_x)
y.append(seq_y)
return np.array(X), np.array(y)
x,y = split_sequence(df2.value,n_steps=12)
print(x.shape)
x = x.reshape(-1,12,1)
x.shape
from tensorflow.keras import models,layers
input_layer = layers.Input(shape=(12,1)) # shape = (timestamps,features)
lstm_layer = layers.LSTM(50,activation='relu',return_sequences=True)(input_layer)
lstm_layer = layers.LSTM(50,activation='relu',return_sequences=False)(lstm_layer)
#dense = layers.Dense(40,activation='relu')(lstm_layer)
op_layer = layers.Dense(1)(lstm_layer)
model = models.Model(inputs=input_layer,outputs=op_layer)
model.summary()
model.compile(loss='mae',optimizer='adam')
model.fit(x,y,batch_size=24,shuffle=False,epochs=500)
model.fit(x,y,batch_size=36,shuffle=False,epochs=500)
model.fit(x,y,batch_size=48,shuffle=False,epochs=500)
df2.tail(1)
ip = df2[pd.to_datetime("01-01-2018"):pd.to_datetime("12-01-2018")]
ip = np.array(ip).reshape(1,12,1)
ip.shape
ip
model.predict(ip)
forecast = []
for i in range(12):
pred = model.predict(ip)
forecast.append(pred)
ip = ip.tolist()[0]
ip.pop(0)
ip.append(pred[0])
ip = np.array(ip).reshape(1,12,1)
forecast = np.array(forecast).reshape(-1,1)
actuals = df[pd.to_datetime("01-01-2019"):pd.to_datetime("12-01-2019")]
actuals
forecast = pd.DataFrame(forecast,index=actuals.index,columns=['value'])
forecast
forecast.value = mm.inverse_transform(forecast)
forecast
plt.figure(figsize=(12,5))
plt.plot(actuals,c='g')
plt.plot(forecast,c='r')
plt.show()
```
# Generate Performance Plot
## Import Packages
```
%matplotlib inline
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import matplotlib as mpl
from matplotlib import cm
```
## Retrieve and Process Data
```
df = pd.read_html('https://github.com/TDAmeritrade/stumpy', match='STUMPED.256')[0]
df = df.rename(columns={'n = 2i': 'n'})
df['GPU-STOMP'] = pd.to_timedelta(df['GPU-STOMP'])
df['STUMP.2'] = pd.to_timedelta(df['STUMP.2'])
df['STUMP.16'] = pd.to_timedelta(df['STUMP.16'])
df['STUMPED.128'] = pd.to_timedelta(df['STUMPED.128'])
df['STUMPED.256'] = pd.to_timedelta(df['STUMPED.256'])
df['GPU-STUMP.1'] = pd.to_timedelta(df['GPU-STUMP.1'])
df['GPU-STUMP.2'] = pd.to_timedelta(df['GPU-STUMP.2'])
df['GPU-STUMP.DGX1'] = pd.to_timedelta(df['GPU-STUMP.DGX1'])
df['GPU-STUMP.DGX2'] = pd.to_timedelta(df['GPU-STUMP.DGX2'])
df.head()
dfs = {
'GPU-STOMP': df[['n', 'GPU-STOMP']],
'STUMP.2': df[['n', 'STUMP.2']],
'STUMP.16': df[['n', 'STUMP.16']],
'STUMPED.128': df[['n', 'STUMPED.128']],
'STUMPED.256': df[['n', 'STUMPED.256']],
'GPU-STUMP.1': df[['n', 'GPU-STUMP.1']],
'GPU-STUMP.2': df[['n', 'GPU-STUMP.2']],
'GPU-STUMP.DGX1': df[['n', 'GPU-STUMP.DGX1']],
'GPU-STUMP.DGX2': df[['n', 'GPU-STUMP.DGX2']],
}
line_dashes = {
'GPU-STOMP': 'solid',
'STUMP.2': '20 20',
'STUMP.16': '16 16',
'STUMPED.128': '12 12',
'STUMPED.256': '8 8',
'GPU-STUMP.1': '16 16',
'GPU-STUMP.2': 'solid',
'GPU-STUMP.DGX1': '12 12',
'GPU-STUMP.DGX2': 'solid',
# "^(\d+(\s+\d+)*)?$"
}
for k in dfs.keys():
dfs[k] = dfs[k].dropna()
```
## Plot Performance Results
```
space = 1
line_dashes = {
'GPU-STOMP': (1, 0),
'STUMP.2': (1, space, 1, space),
'STUMP.16': (5, space, 5, space),
'STUMPED.128': (10, space, 10, space),
'STUMPED.256': (1, 0),
'GPU-STUMP.1': (10, space, 10, space),
'GPU-STUMP.2': (1, 0),
'GPU-STUMP.DGX1': (10, space, 10, space),
'GPU-STUMP.DGX2': (1, 0),
}
# viridis = cm.get_cmap('viridis', len(dfs.keys()))
viridis = cm.get_cmap('viridis', 4)
line_colors = {
'GPU-STOMP': viridis(4),
'STUMP.2': viridis(0),
'STUMP.16': viridis(0),
'STUMPED.128': viridis(0),
'STUMPED.256': viridis(0),
'GPU-STUMP.1': viridis(1),
'GPU-STUMP.2': viridis(1),
'GPU-STUMP.DGX1': viridis(2),
'GPU-STUMP.DGX2': viridis(2),
}
fig, ax = plt.subplots(figsize=[20, 10])
fig.suptitle('Performance Comparison of Matrix Profile Implementations', fontsize=30, y=0.95)
ax.set_yticks(range(13), minor=False)
ax.grid()
ax.get_xaxis().set_major_formatter(
mpl.ticker.FuncFormatter(lambda x, p: format(int(x), ',')))
plt.xlabel('Time Series Length (n)', fontsize=20, labelpad=20)
plt.ylabel('Days', fontsize=20)
ax.xaxis.set_tick_params(labelsize=16)
ax.yaxis.set_tick_params(labelsize=16)
legend_lines = []
legend_names = []
for i, k in enumerate(dfs.keys()):
legend_line, = ax.plot(dfs[k].iloc[:, 0],
dfs[k].iloc[:, 1]/pd.Timedelta('1 days'),
linewidth=4,
dashes=line_dashes[k],
c=line_colors[k],
label=k,
)
legend_lines.append(legend_line)
legend_names.append(k)
if k in ['GPU-STOMP', 'STUMPED.256', 'GPU-STUMP.2']:
legend_lines.append(mpl.lines.Line2D([],[],linestyle=''))
legend_names.append('')
ax.legend(legend_lines,
legend_names,
loc="upper left",
handlelength=6,
fontsize=16)
ax.text(100000000,
4,
'Lower is "Better"',
fontsize=25,
verticalalignment='top',
)
ax.text(80000000,
12.5,
'Benchmark',
fontsize=25,
verticalalignment='top',
rotation=34,
)
fig.savefig("performance.png", dpi=200, bbox_inches='tight', pad_inches=0.1,)
```
### Import packages
Import of the packages that will be needed for the project. This includes packages for data manipulation, sklearn modules and custom functions.
```
import pandas as pd
import numpy as np
import pickle
import sklearn
from sklearn.metrics import mean_squared_error, mean_absolute_error, r2_score
from sklearn.impute import SimpleImputer
from sklearn.pipeline import Pipeline
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder, RobustScaler, StandardScaler, MinMaxScaler, FunctionTransformer, PowerTransformer
from sklearn.base import BaseEstimator, TransformerMixin
from Project_Functions import offensive_contribution, trailing_stats_mean, tier_maker, get_tiers, get_yards, get_contribution, get_touchdowns, LogShift, stats_for_trailing
import warnings
from pandas.core.common import SettingWithCopyWarning
warnings.simplefilter(action = 'ignore',
category = SettingWithCopyWarning)
```
### Import Data
Let's import the dataframe that we will be using for modelling
```
data = pd.read_csv('Data/weekly_data.csv')
data.head()
data.info()
data['Week'] = data['Week'].astype(str)
```
Before the train test split, we have to calculate the trailing average fantasy points for each observation, as we cannot incorporate this step into the pipeline without causing data leakage.
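The key detail below is `closed='left'` on the rolling window: the current week is excluded from its own trailing average, which is what prevents the leakage. A tiny illustration of that behaviour (assuming pandas ≥ 1.2, where `closed=` is supported for fixed-size integer windows):
```
import pandas as pd

s = pd.Series([10, 20, 30, 40, 50])
# Each window ends just before the current row, so a value never contributes
# to its own trailing mean; rows without a full 3-value window are NaN
# (they get backfilled in trailing_stats below).
print(s.rolling(window=3, closed='left').mean())
# index 3 (value 40) -> mean(10, 20, 30) = 20.0
```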
```
def trailing_stats(df):
"""
    Function to add trailing aggregate mean columns (3- and 7-game windows)
    as new features for prediction.
    Inputs:
        - df: The dataframe on which the function will be applied; the columns
          to aggregate are taken from the global list stats_for_trailing
    Output:
        - The dataframe with the trailing average columns added, sorted by the original index
"""
#Access the column names in stats_for_trailing
global stats_for_trailing
# Get all unique players in the DataFrame
players = df['Name'].unique().tolist()
# Define a DataFrame to hold our values
df_out = pd.DataFrame()
# Loop through the unique players
for player in players:
# Create a temporary dataframe for each player
temp_df = df[(df['Name'] == player) & (df['InjuryStatus'] != 'Out')]
# Calculate the n game trailing average for all players. Set closed parameter to 'left'
# so that the current value for fantasy points is not included in the calculation.
# Backfill the two resulting NaN values
for column in stats_for_trailing:
temp_df[f'TA3{column}'] = temp_df.loc[:,column].rolling(window = 3,
closed = 'left').mean().fillna(method = 'bfill')
temp_df[f'TA7{column}'] = temp_df.loc[:,column].rolling(window = 7,
closed = 'left').mean().fillna(method = 'bfill')
# Append the temporary dataframe to the output
df_out = df_out.append(temp_df)
# Return a dataframe with the values sorted by the original index
df_out.sort_index(inplace = True)
return df_out
stats_for_trailing = ['FantasyPointsPPR']
# Prepare the trailing average fantasy points column
data = trailing_stats(data)
data[data['Name'] == 'Tom Brady'][['FantasyPointsPPR', 'TA3FantasyPointsPPR', 'TA7FantasyPointsPPR']].head(35)
```
### Train Test Split
```
data.isna().sum().sort_values()
# Separate data from the target
# y = data['FantasyPointsPPR']
# Apply a log transform to the target to see how it impacts prediction
y = LogShift(data['FantasyPointsPPR'])
data.drop(columns = ['FantasyPointsPPR'],
inplace = True)
X = data
# Execute the train, test split
X_train, X_test, y_train, y_test = train_test_split(X, y,
test_size = 0.2,
shuffle = False,
random_state = 13)
X_train.head()
```
### Feature Engineering
The main features we will engineer to predict a player's fantasy output are the 3- and 7-game trailing averages of various statistics, as well as the binning of players into tiers based on recent performance.
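The tier feature relies on `tier_maker` (imported from `Project_Functions` and not shown in this notebook). Purely as a hypothetical illustration of what a position-aware binning rule could look like — the function name and thresholds below are invented for this sketch, not the project's actual cut-offs:
```
def example_tier_maker(position, trailing_ppr):
    """Hypothetical sketch only: bin a player into a tier from recent PPR production."""
    # Invented thresholds for illustration; the real tier_maker in Project_Functions may differ.
    thresholds = {'QB': (20, 15), 'RB': (17, 11), 'WR': (17, 11), 'TE': (12, 8)}
    high, mid = thresholds.get(position, (15, 10))
    if trailing_ppr >= high:
        return f'{position}1'   # top tier for the position
    elif trailing_ppr >= mid:
        return f'{position}2'
    return f'{position}3'

example_tier_maker('RB', 18.4)  # -> 'RB1'
```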
```
# Define the columns for which we want trailing averages.
stats_for_trailing = ['TotalTouchdowns','RushingYards','PassingInterceptions','PassingTouchdowns','PassingRating','PassingYards',
'PassingCompletionPercentage', 'PassingLong','RushingYards', 'RushingTouchdowns', 'RushingLong',
'RushingYardsPerAttempt', 'ReceivingYardsPerReception', 'PuntReturns', 'PuntReturnTouchdowns',
'Receptions','ReceivingYards','ReceivingTargets', 'ReceivingTouchdowns', 'ExtraPointsMade', 'FieldGoalsMade',
'FieldGoalsMade40to49','FieldGoalsMade50Plus','Fumbles','FumblesLost', 'TeamPoints', 'OpponentPoints', 'YardsFor', 'YardsAgainst']
trailing_stats = []
for col in stats_for_trailing:
trailing_stats.append('TA7' + col)
trailing_stats.append('TA3' + col)
trailing_stats.append('TA3FantasyPointsPPR')
trailing_stats.append('TA7FantasyPointsPPR')
# Instantiate the function transformers for the feature engineering pipeline
touchdown_transformer = FunctionTransformer(get_touchdowns) # Get total touchdowns per week per player
yard_transformer = FunctionTransformer(get_yards) # Get total yardage per week per player
trailing_transformer = FunctionTransformer(trailing_stats_mean) # Get the 5 game trailing averages of appropriate statistics
tier_transformer = FunctionTransformer(get_tiers) # Bin players into the appropriate tiers based on recent performance
contribution_transformer = FunctionTransformer(get_contribution) # Calculate the offensive contribution of a given player relative to the team's offense
# Instantiate the pipeline for the necessary transformations
engineering = Pipeline([('touchdown', touchdown_transformer),
('yards', yard_transformer),
('trailing', trailing_transformer),
('tier', tier_transformer),
('contribution', contribution_transformer)])
```
<br>
### Preprocessing
As shown above, the bulk of the null values fall into one of two categories. They are either:
* In the InjuryStatus column
    * Here we can impute a value of 'Healthy', since the column only carries a value for players who had an injury designation that week.
* In the TA (trailing average) columns we created
    * No player with a null value played more than 5 games, therefore we cannot calculate the trailing average for them. These rows represent players who likely did not have much impact; if they had an impact, they likely would have played in more games. We will impute a default value for these columns (the baseline pipeline below uses the mean), and I will explore imputing the median value as well through a grid search.
```
# Define the groups of columns for preprocessing steps.
categorical_columns = ['Week',
'Team',
'Opponent',
'PlayerTier',
'InjuryStatus']
numerical_columns = trailing_stats
# Create a custom function to generate a log-transformed version of continuous data with a constant 5 added prior to the transform
LogShiftTransformer = FunctionTransformer(LogShift)
# Define the preprocessing steps for categorical features
categorical_transform = Pipeline([('impute_cat',SimpleImputer(strategy = 'constant',
fill_value = 'Healthy')),
('one_hot_encoder', OneHotEncoder(handle_unknown = 'ignore'))])
# Define the preprocessing steps for numerical features
numerical_transform = Pipeline([('impute_num', SimpleImputer(strategy = 'mean')),
('scaler', LogShiftTransformer)])
# Instantiate the column transformer object for the preprocessing steps
preprocessing = ColumnTransformer([('num', numerical_transform, numerical_columns),
('cat', categorical_transform, categorical_columns)])
# Instantiate a pipeline with a linear regression model as a baseline
pipeline = Pipeline([('engineering', engineering),
('prep', preprocessing),
('model', Ridge())])
# Set param grid values, parameters for grid search
param_grid = {'model__alpha': [1]}
grid_search = GridSearchCV(pipeline,
param_grid = param_grid,
scoring = 'neg_mean_squared_error',
cv = 5,
verbose = 3)
# Fit the grid search to X_train and y_train
grid_search.fit(X_train, y_train)
grid_search.best_score_
grid_search.best_params_
preds = grid_search.predict(X_test)
mean_squared_error(y_test, preds)
mean_absolute_error(y_test, preds)
r2_score(y_test, preds)
```
Let's dig into the predictions a little bit
```
df_preds = pd.DataFrame(y_test)
df_preds['Predicted'] = preds
df_preds.sort_values(by = 'Predicted',
ascending = False)
```
### Save model
```
import pickle
pickle.dump(grid_search, open('Pickles/log_transform_linear.pickle', 'wb'))
```
# Time to dig into the results
```
df_review = X_test
df_review['Predicted'] = df_preds['Predicted']
df_review['Target'] = df_preds['FantasyPointsPPR']
df_review['Error'] = df_review['Target'] - df_review['Predicted']
df_review.head()
import seaborn as sns
import matplotlib.pyplot as plt
# Let's visualize how the prediction error varies by player position.
plt.figure(figsize = (12, 9))
sns.violinplot(x = df_review['Position'],
y = df_review['Error'])
# Let's apply our engineered features to df_review and keep on digging
df_review = trailing_stats_mean(df_review)
df_review['PlayerTier'] = df_review.apply(lambda x: tier_maker(x['Position'], x['TA7FantasyPointsPPR']), axis = 1)
df_review['AbsoluteError'] = abs(df_review['Error'])
mean_tier_error = df_review[['PlayerTier', 'Error', 'AbsoluteError']].groupby('PlayerTier').mean()
sum_tier_error = df_review[['PlayerTier', 'Error','AbsoluteError']].groupby('PlayerTier').sum()
count_tier_error = df_review[['PlayerTier', 'Error']].groupby('PlayerTier').count()
mean_tier_error['Total Absolute Error'] = sum_tier_error['AbsoluteError']
mean_tier_error['Total Error'] = sum_tier_error['Error']
mean_tier_error['Error Count'] = count_tier_error['Error']
mean_tier_error.rename({'Error': 'Mean Error'},
axis = 1)
```
|
github_jupyter
|
import pandas as pd
import numpy as np
import pickle
import sklearn
from sklearn.metrics import mean_squared_error, mean_absolute_error, r2_score
from sklearn.impute import SimpleImputer
from sklearn.pipeline import Pipeline
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder, RobustScaler, StandardScaler, MinMaxScaler, FunctionTransformer, PowerTransformer
from sklearn.base import BaseEstimator, TransformerMixin
from Project_Functions import offensive_contribution, trailing_stats_mean, tier_maker, get_tiers, get_yards, get_contribution, get_touchdowns, LogShift, stats_for_trailing
import warnings
from pandas.core.common import SettingWithCopyWarning
warnings.simplefilter(action = 'ignore',
category = SettingWithCopyWarning)
data = pd.read_csv('Data/weekly_data.csv')
data.head()
data.info()
data['Week'] = data['Week'].astype(str)
def trailing_stats(df):
"""
Function to create a new column with a trailing aggregate mean
as a new feature for prediction.
Inputs:
- df: The dataframe on which the function will be applied
- Column: The column on which to apply the function
- Window: The number of past values to consider when apply the function
Output:
- An aggregate value
"""
#Access the column names in stats_for_trailing
global stats_for_trailing
# Get all unique players in the DataFrame
players = df['Name'].unique().tolist()
# Define a DataFrame to hold our values
df_out = pd.DataFrame()
# Loop through the unique players
for player in players:
# Create a temporary dataframe for each player
temp_df = df[(df['Name'] == player) & (df['InjuryStatus'] != 'Out')]
# Calculate the n game trailing average for all players. Set closed parameter to 'left'
# so that the current value for fantasy points is not included in the calculation.
# Backfill the two resulting NaN values
for column in stats_for_trailing:
temp_df[f'TA3{column}'] = temp_df.loc[:,column].rolling(window = 3,
closed = 'left').mean().fillna(method = 'bfill')
temp_df[f'TA7{column}'] = temp_df.loc[:,column].rolling(window = 7,
closed = 'left').mean().fillna(method = 'bfill')
# Append the temporary dataframe to the output
df_out = df_out.append(temp_df)
# Return a dataframe with the values sorted by the original index
df_out.sort_index(inplace = True)
return df_out
stats_for_trailing = ['FantasyPointsPPR']
# Prepare the trailing average fantasy points column
data = trailing_stats(data)
data[data['Name'] == 'Tom Brady'][['FantasyPointsPPR', 'TA3FantasyPointsPPR', 'TA7FantasyPointsPPR']].head(35)
data.isna().sum().sort_values()
# Separate data from the target
# y = data['FantasyPointsPPR']
# Apply a log transform to the target to see how it impacts prediction
y = LogShift(data['FantasyPointsPPR'])
data.drop(columns = ['FantasyPointsPPR'],
inplace = True)
X = data
# Execute the train, test split
X_train, X_test, y_train, y_test = train_test_split(X, y,
test_size = 0.2,
shuffle = False,
random_state = 13)
X_train.head()
# Define the columns for which we want a 5 game trailing average.
stats_for_trailing = ['TotalTouchdowns','RushingYards','PassingInterceptions','PassingTouchdowns','PassingRating','PassingYards',
'PassingCompletionPercentage', 'PassingLong','RushingYards', 'RushingTouchdowns', 'RushingLong',
'RushingYardsPerAttempt', 'ReceivingYardsPerReception', 'PuntReturns', 'PuntReturnTouchdowns',
'Receptions','ReceivingYards','ReceivingTargets', 'ReceivingTouchdowns', 'ExtraPointsMade', 'FieldGoalsMade',
'FieldGoalsMade40to49','FieldGoalsMade50Plus','Fumbles','FumblesLost', 'TeamPoints', 'OpponentPoints', 'YardsFor', 'YardsAgainst']
trailing_stats = []
for col in stats_for_trailing:
trailing_stats.append('TA7' + col)
trailing_stats.append('TA3' + col)
trailing_stats.append('TA3FantasyPointsPPR')
trailing_stats.append('TA7FantasyPointsPPR')
# Instantiate the function transformers for the feature engineering pipeline
touchdown_transformer = FunctionTransformer(get_touchdowns) # Get total touchdowns per week per player
yard_transformer = FunctionTransformer(get_yards) # Get total yardage per week per player
trailing_transformer = FunctionTransformer(trailing_stats_mean) # Get the 5 game trailing averages of appropriate statistics
tier_transformer = FunctionTransformer(get_tiers) # Bin players into the appropriate tiers based on recent performance
contribution_transformer = FunctionTransformer(get_contribution) # Calculate the offensive contribution of a given player relative to the team's offense
# Instantiate the pipeline for the necessary transformations
engineering = Pipeline([('touchdown', touchdown_transformer),
('yards', yard_transformer),
('trailing', trailing_transformer),
('tier', tier_transformer),
('contribution', contribution_transformer)])
# Define the groups of columns for preprocessing steps.
categorical_columns = ['Week',
'Team',
'Opponent',
'PlayerTier',
'InjuryStatus']
numerical_columns = trailing_stats
# Create a custom function to generate a log-transformed version of continuous data with a constant 5 added prior to the transform
LogShiftTransformer = FunctionTransformer(LogShift)
# Define the preprocessing steps for categorical features
categorical_transform = Pipeline([('impute_cat',SimpleImputer(strategy = 'constant',
fill_value = 'Healthy')),
('one_hot_encoder', OneHotEncoder(handle_unknown = 'ignore'))])
# Define the preprocessing steps for numerical features
numerical_transform = Pipeline([('impute_num', SimpleImputer(strategy = 'mean')),
('scaler', LogShiftTransformer)])
# Instantiate the column transformer object for the preprocessing steps
preprocessing = ColumnTransformer([('num', numerical_transform, numerical_columns),
('cat', categorical_transform, categorical_columns)])
# Instantiate a pipeline with a linear regression model as a baseline
pipeline = Pipeline([('engineering', engineering),
('prep', preprocessing),
('model', Ridge())])
# Set param grid values, parameters for grid search
param_grid = {'model__alpha': [1]}
grid_search = GridSearchCV(pipeline,
param_grid = param_grid,
scoring = 'neg_mean_squared_error',
cv = 5,
verbose = 3)
# Fit the grid search to X_train and y_train
grid_search.fit(X_train, y_train)
grid_search.best_score_
grid_search.best_params_
preds = grid_search.predict(X_test)
mean_squared_error(y_test, preds)
mean_absolute_error(y_test, preds)
r2_score(y_test, preds)
df_preds = pd.DataFrame(y_test)
df_preds['Predicted'] = preds
df_preds.sort_values(by = 'Predicted',
ascending = False)
import pickle
pickle.dump(grid_search, open('Pickles/log_transform_linear.pickle', 'wb'))
df_review = X_test.copy()
df_review['Predicted'] = df_preds['Predicted']
df_review['Target'] = df_preds['FantasyPointsPPR']
df_review['Error'] = df_review['Target'] - df_review['Predicted']
df_review.head()
import seaborn as sns
import matplotlib.pyplot as plt
# Let's visualize how the prediction error varies by player position.
plt.figure(figsize = (12, 9))
sns.violinplot(x = df_review['Position'],
y = df_review['Error'])
# Let's apply our engineered features to df_review and keep on digging
df_review = trailing_stats_mean(df_review)
df_review['PlayerTier'] = df_review.apply(lambda x: tier_maker(x['Position'], x['TA7FantasyPointsPPR']), axis = 1)
df_review['AbsoluteError'] = abs(df_review['Error'])
mean_tier_error = df_review[['PlayerTier', 'Error', 'AbsoluteError']].groupby('PlayerTier').mean()
sum_tier_error = df_review[['PlayerTier', 'Error','AbsoluteError']].groupby('PlayerTier').sum()
count_tier_error = df_review[['PlayerTier', 'Error']].groupby('PlayerTier').count()
mean_tier_error['Total Absolute Error'] = sum_tier_error['AbsoluteError']
mean_tier_error['Total Error'] = sum_tier_error['Error']
mean_tier_error['Error Count'] = count_tier_error['Error']
mean_tier_error.rename({'Error': 'Mean Error'},
axis = 1)
```
import pandas as pd
import boto3
import json
import configparser
from enum import Enum, unique
from time import sleep
@unique
class TableType(Enum):
FACT = 'fact'
DIM = 'dim'
STAGE = 'staging'
@unique
class Table(Enum):
BUSINESS = 'business'
CITY = 'city'
REVIEW = 'review'
TIP = 'tip'
USERS = 'users'
STOCK = 'stock'
def get_table_name(self, table_type: TableType):
return f"{self.name}_{table_type.value}"
def get_partitions(self):
return {
self.USERS: {'YEAR': 2004, 'MONTH': 10, 'DAY': 12},
self.REVIEW: {'YEAR': 2005, 'MONTH': 3, 'DAY': 3},
self.TIP: {'YEAR': 2009, 'MONTH': 12, 'DAY': 15}
}.get(self)
def get_s3_path(self):
if self == self.BUSINESS:
return "s3://yelp-customer-reviews/processed/business/"
elif self == self.STOCK:
return "s3://yelp-customer-reviews/stock-data/cmg.us.txt"
else:
path = f"s3://yelp-customer-reviews/data-lake/{self.value}".replace('users', 'user')
path = path + "/pyear={YEAR}/pmonth={MONTH}/pday={DAY}"
return path.format(**self.get_partitions())
Table.BUSINESS.get_table_name(TableType.STAGE)
Table.USERS.get_s3_path()
from enum import Enum, unique
@unique
class SqlQueries(Enum):
setup_foreign_keys = ("""
ALTER TABLE "tip_fact" ADD FOREIGN KEY ("business_id") REFERENCES "business_fact" ("business_id");
ALTER TABLE "tip_fact" ADD FOREIGN KEY ("user_id") REFERENCES "users_fact" ("user_id");
ALTER TABLE "business_fact" ADD FOREIGN KEY ("city_id") REFERENCES "city_fact" ("city_id");
ALTER TABLE "review_dim" ADD FOREIGN KEY ("business_id") REFERENCES "business_fact" ("business_id");
ALTER TABLE "review_dim" ADD FOREIGN KEY ("user_id") REFERENCES "users_fact" ("user_id");
ALTER TABLE "review_fact" ADD FOREIGN KEY ("review_id") REFERENCES "review_dim" ("review_id");
ALTER TABLE "stock_fact" ADD FOREIGN KEY ("business_name") REFERENCES "business_fact" ("name");
""")
business_fact_create = ("""
CREATE TABLE IF NOT EXISTS "business_fact" (
"business_id" varchar PRIMARY KEY,
"name" varchar,
"categories" varchar,
"review_count" bigint,
"stars" count,
"city_id" varchar,
"address" varchar,
"postal_code" varchar
)
DISTSTYLE EVEN;
""")
city_fact_create = ("""
CREATE TABLE IF NOT EXISTS "city_fact" (
"city_id" varchar PRIMARY KEY,
"state" varchar,
"city" varchar
);
""")
users_fact_create = ("""
CREATE TABLE IF NOT EXISTS "users_fact" (
"user_id" varchar PRIMARY KEY,
"yelping_since" timestamp,
"name" varchar,
"average_stars" int,
"review_count" bigint
)
DISTSTYLE EVEN;
""")
review_dim_create = ("""
CREATE TABLE IF NOT EXISTS "review_dim" (
"review_id" varchar PRIMARY KEY,
"review_date" timestamp,
"business_id" varchar,
"user_id" varchar
)
DISTSTYLE EVEN;
""")
review_fact_create = ("""
CREATE TABLE IF NOT EXISTS "review_fact" (
"review_id" varchar PRIMARY KEY,
"stars" int,
"text" varchar
)
DISTSTYLE EVEN;
""")
stock_fact_create = ("""
CREATE TABLE IF NOT EXISTS "stock_fact" (
"stock_id" varchar PRIMARY KEY,
"business_name" varchar,
"date" timestamp,
"close_value" float
);
""")
tip_fact_create = ("""
CREATE TABLE IF NOT EXISTS "tip_fact" (
"tip_id" varchar PRIMARY KEY,
"business_id" varchar,
"user_id" varchar,
"text" varchar,
"tip_date" timestamp,
"compliment_count" bigint
)
DISTSTYLE EVEN;
""")
review_stage_create = ("""
CREATE TABLE IF NOT EXISTS "review_staging" (
"business_id" varchar
"cool" bigint,
"funny" bigint,
"review_id" varchar,
"stars" double,
"text" varchar,
"useful" bigint,
"user_id" string,
"dt" varchar
);
""")
business_stage_create = ("""
CREATE TABLE IF NOT EXISTS "business_staging" (
"business_id" varchar,
"categories" varchar,
"state" varchar,
"city" varchar,
"address" varchar,
"postal_code" string,
"review_count" bigint,
"stars" double
);
""")
tip_stage_create = ("""
CREATE TABLE IF NOT EXISTS "tip_staging" (
"business_id" varchar,
"compliment_count" bigint,
"text" varchar,
"user_id" varchar,
"dt" varchar
);
""")
users_stage_create = ("""
CREATE TABLE IF NOT EXISTS "users_staging" (
"average_stars" varchar
"compliment_cool" bigint,
"compliment_cute" bigint,
"compliment_funny" bigint,
"compliment_hot" bigint,
"compliment_list" bigint,
"compliment_more" bigint,
"compliment_note" bigint,
"compliment_photos" bigint,
"compliment_plain" bigint,
"compliment_profile" bigint,
"compliment_writer" bigint,
"cool" bigint,
"elite" varchar,
"fans" bigint,
"friends" varchar,
"funny" bigint,
"name" varchar,
"review_count" bigint,
"useful" bigint,
"user_id" varchar,
"yelping_since" varchar
);
""")
stock_stage_create = ("""
CREATE TABLE IF NOT EXISTS "stock_staging" (
"Date" varchar,
"Open" double,
"High" double,
"Low" double,
"Close" double,
"Volume" bigint,
"OpenInt" bigint
);
""")
users_fact_insert = ("""
INSERT INTO users_fact (
user_id,
yelping_since,
name,
average_stars,
review_count
)
SELECT distinct
user_id,
CAST(yelping_since as timestamp) AS yelping_since,
name,
average_stars,
review_count
FROM users_staging
""")
business_fact_insert = ("""
INSERT INTO business_fact (
business_id,
name,
categories,
review_count,
stars,
city_id,
address,
postal_code
)
SELECT distinct
business_id,
name,
categories,
review_count,
stars,
b.city_id,
address,
postal_code
FROM business_staging a
LEFT JOIN city_fact b ON a.city = b.city AND a.state = b.state
""")
city_fact_insert = ("""
INSERT INTO city_fact (
city_id,
state,
city
)
SELECT distinct
md5(state || city) city_id,
state,
city
FROM business_staging
""")
review_dim_insert = ("""
INSERT INTO review_dim (
review_id,
review_date,
business_id,
user_id
)
SELECT distinct
review_id,
CAST(dt as timestamp) AS review_date,
business_id,
user_id
FROM review_staging
""")
review_fact_insert = ("""
INSERT INTO review_fact (
review_id,
stars,
text
)
SELECT distinct
review_id,
stars,
text
FROM review_staging
""")
tip_fact_insert = ("""
INSERT INTO tip_fact (
tip_id,
business_id,
user_id,
text,
tip_date,
compliment_count
)
SELECT distinct
md5(business_id || user_id || tip_date) tip_id,
business_id,
user_id,
text,
CAST(dt as timestamp) AS tip_date,
compliment_count
FROM tip_staging
""")
stock_fact_insert = ("""
INSERT INTO stock_fact (
stock_id,
business_name,
date,
close_value
)
SELECT distinct
md5('cmg' || date ) stock_id,
'chipotle' AS business_name,
Date,
Close
FROM stock_staging
""")
setup_database_dict = {
query.name: query.value for query in SqlQueries if ('create' in query.name)
}
setup_database_dict[SqlQueries.setup_foreign_keys.name]= SqlQueries.setup_foreign_keys.value
setup_database_dict.keys()
!pip install pandas
```
## Load Config Parameters
The file `dwh.cfg` contains all parameters necessary to create the cluster.
In addition, the AWS credentials and IAM role parameters are also defined in this file.
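For reference, a minimal `dwh.cfg` could be structured as sketched below; the section names and keys mirror the `config.get` calls in the next cell, while every value shown here is only a placeholder to be replaced with your own settings.
```
[AWS]
KEY=<your-aws-access-key-id>
SECRET=<your-aws-secret-access-key>

[DWH]
DWH_CLUSTER_TYPE=multi-node
DWH_NUM_NODES=4
DWH_NODE_TYPE=dc2.large
DWH_CLUSTER_IDENTIFIER=dwhCluster
DWH_DB=dwh
DWH_DB_USER=dwhuser
DWH_DB_PASSWORD=<choose-a-strong-password>
DWH_PORT=5439
DWH_IAM_ROLE_NAME=dwhRole
```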
```
config = configparser.ConfigParser()
config.read_file(open('dwh.cfg'))
KEY = config.get('AWS','KEY')
SECRET = config.get('AWS','SECRET')
DWH_CLUSTER_TYPE = config.get("DWH","DWH_CLUSTER_TYPE")
DWH_NUM_NODES = config.get("DWH","DWH_NUM_NODES")
DWH_NODE_TYPE = config.get("DWH","DWH_NODE_TYPE")
DWH_CLUSTER_IDENTIFIER = config.get("DWH","DWH_CLUSTER_IDENTIFIER")
DWH_DB = config.get("DWH","DWH_DB")
DWH_DB_USER = config.get("DWH","DWH_DB_USER")
DWH_DB_PASSWORD = config.get("DWH","DWH_DB_PASSWORD")
DWH_PORT = config.get("DWH","DWH_PORT")
DWH_IAM_ROLE_NAME = config.get("DWH", "DWH_IAM_ROLE_NAME")
(DWH_DB_USER, DWH_DB_PASSWORD, DWH_DB)
pd.DataFrame({"Param":
["DWH_CLUSTER_TYPE", "DWH_NUM_NODES", "DWH_NODE_TYPE", "DWH_CLUSTER_IDENTIFIER", "DWH_DB", "DWH_DB_USER", "DWH_DB_PASSWORD", "DWH_PORT", "DWH_IAM_ROLE_NAME"],
"Value":
[DWH_CLUSTER_TYPE, DWH_NUM_NODES, DWH_NODE_TYPE, DWH_CLUSTER_IDENTIFIER, DWH_DB, DWH_DB_USER, DWH_DB_PASSWORD, DWH_PORT, DWH_IAM_ROLE_NAME]
})
```
## Create Clients
```
ec2 = boto3.resource('ec2',
region_name="us-west-2",
aws_access_key_id=KEY,
aws_secret_access_key=SECRET
)
s3 = boto3.resource('s3',
region_name="us-west-2",
aws_access_key_id=KEY,
aws_secret_access_key=SECRET
)
iam = boto3.client('iam',aws_access_key_id=KEY,
aws_secret_access_key=SECRET,
region_name='us-west-2'
)
redshift = boto3.client('redshift',
region_name="us-west-2",
aws_access_key_id=KEY,
aws_secret_access_key=SECRET
)
from botocore.exceptions import ClientError
try:
print("Creating a new IAM Role")
dwhRole = iam.create_role(
Path='/',
RoleName=DWH_IAM_ROLE_NAME,
Description = "Allows Redshift clusters to call AWS services on your behalf.",
AssumeRolePolicyDocument=json.dumps(
{'Statement': [{'Action': 'sts:AssumeRole',
'Effect': 'Allow',
'Principal': {'Service': 'redshift.amazonaws.com'}}],
'Version': '2012-10-17'})
)
except Exception as e:
print(e)
print("Attaching Policy")
iam.attach_role_policy(RoleName=DWH_IAM_ROLE_NAME,
PolicyArn="arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess"
)['ResponseMetadata']['HTTPStatusCode']
print("Get the IAM role ARN")
roleArn = iam.get_role(RoleName=DWH_IAM_ROLE_NAME)['Role']['Arn']
print(roleArn)
```
## Creating a Redshift Cluster
```
try:
response = redshift.create_cluster(
#HW
ClusterType=DWH_CLUSTER_TYPE,
NodeType=DWH_NODE_TYPE,
NumberOfNodes=int(DWH_NUM_NODES),
#Identifiers & Credentials
DBName=DWH_DB,
ClusterIdentifier=DWH_CLUSTER_IDENTIFIER,
MasterUsername=DWH_DB_USER,
MasterUserPassword=DWH_DB_PASSWORD,
#Roles (for s3 access)
IamRoles=[roleArn]
)
except Exception as e:
print(e)
def prettyRedshiftProps(props):
    pd.set_option('display.max_colwidth', None)
keysToShow = ["ClusterIdentifier", "NodeType", "ClusterStatus", "MasterUsername", "DBName", "Endpoint", "NumberOfNodes", 'VpcId']
x = [(k, v) for k,v in props.items() if k in keysToShow]
return pd.DataFrame(data=x, columns=["Key", "Value"])
cluster_status = 'Undefined'
while cluster_status != 'available':
sleep(30)
myClusterProps = redshift.describe_clusters(ClusterIdentifier=DWH_CLUSTER_IDENTIFIER)['Clusters'][0]
cluster_status = myClusterProps['ClusterStatus']
prettyRedshiftProps(myClusterProps)
DWH_ENDPOINT = myClusterProps['Endpoint']['Address']
DWH_ROLE_ARN = myClusterProps['IamRoles'][0]['IamRoleArn']
print("DWH_ENDPOINT :: ", DWH_ENDPOINT)
print("DWH_ROLE_ARN :: ", DWH_ROLE_ARN)
try:
vpc = ec2.Vpc(id=myClusterProps['VpcId'])
defaultSg = list(vpc.security_groups.all())[0]
print(defaultSg)
defaultSg.authorize_ingress(
GroupName=defaultSg.group_name,
CidrIp='0.0.0.0/0',
IpProtocol='TCP',
FromPort=int(DWH_PORT),
ToPort=int(DWH_PORT)
)
except Exception as e:
print(e)
print("host={} dbname={} user={} password={} port={}".format(DWH_ENDPOINT, DWH_DB, DWH_DB_USER, DWH_DB_PASSWORD, DWH_PORT))
```
## Delete Cluster
```
#### CAREFUL!!
#-- Uncomment & run to delete the created resources
# redshift.delete_cluster(ClusterIdentifier=DWH_CLUSTER_IDENTIFIER, SkipFinalClusterSnapshot=True)
#### CAREFUL!!
```
```
import re
import nltk
import tokenize
import numpy as np
reviews_train = []
for line in open('amazon_motog5_500_review.txt','r'):
reviews_train.append(line.strip())
# reviews_train
reviews_test = []
for line in open('motog5_flipkart_review.txt','r'):
reviews_test.append(line.strip())
REPLACE_NO_SPACE = re.compile(r"(\.)|(\;)|(\:)|(\!)|(\')|(\?)|(\,)|(\")|(\()|(\))|(\[)|(\])")
REPLACE_WITH_SPACE = re.compile(r"(<br\s*/><br\s*/>)|(\-)|(\/)")
def preprocess_reviews(reviews):
reviews = [REPLACE_NO_SPACE.sub("", line.lower()) for line in reviews]
reviews = [REPLACE_WITH_SPACE.sub(" ", line) for line in reviews]
return reviews
reviews_test_clean = preprocess_reviews(reviews_test)
reviews_train_clean = preprocess_reviews(reviews_train)
#reviews_test_clean
token_train = []
for i in reviews_train_clean:
tokenizer = nltk.tokenize.TreebankWordTokenizer()
tokens = token_train.append(tokenizer.tokenize(i))
token_test = []
for i in reviews_test_clean:
tokenizer = nltk.tokenize.TreebankWordTokenizer()
tokens = token_test.append(tokenizer.tokenize(i))
token_train_stem = []
stemmer = nltk.stem.PorterStemmer()
for j in token_train:
tokens = token_train_stem.append(" ".join(stemmer.stem(i) for i in j))
#token_train_stem
token_test_stem = []
for j in token_test:
tokens = token_test_stem.append(" ".join(stemmer.stem(i) for i in j))
#token_test_stem
# Lemmatize the tokenized training and test reviews
lemmatizer = nltk.stem.WordNetLemmatizer()
token_train_lemm = []
for j in token_train:
    token_train_lemm.append(" ".join(lemmatizer.lemmatize(i) for i in j))
#token_train_lemm
token_test_lemm = []
for j in token_test:
    token_test_lemm.append(" ".join(lemmatizer.lemmatize(i) for i in j))
#token_test_lemm
from sklearn.feature_extraction.text import CountVectorizer
cv = CountVectorizer(binary=True)
cv.fit(token_train_lemm)
X_test= cv.transform(token_test_lemm)
X_train = cv.transform(token_train_lemm)
X_train.size
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
t1 = np.zeros(231)[np.newaxis]
t1= np.transpose(t1)
t2 = np.ones(231)[np.newaxis]
t2= np.transpose(t2)
target=np.concatenate((t1, t2), axis=0)
target.size
X_train, X_val, y_train, y_val = train_test_split(
    X_test, target, train_size = 0.75
)
final_model = LogisticRegression(C=0.05)
final_model.fit(X_train, y_train)
print("Final Accuracy: %s"
      % accuracy_score(y_val, final_model.predict(X_val)))
```
```
%matplotlib inline
from pprint import pprint
import plot
from simulate import simulate_withdrawals
from harvesting import N_60_RebalanceHarvesting, N_100_RebalanceHarvesting
import harvesting
import itertools
from decimal import Decimal
from montecarlo import conservative
import metrics
import math
from market import Returns_US_1871
import withdrawal
START_YEAR = 1966
def CAPEPercentage(p, h):
return withdrawal.CAPEPercentage(p, h, start_year=START_YEAR, a=.0208, b=0.4)
def VPW(p, h):
return withdrawal.VPW(p, h, years_left=40)
def ERN_325(p, h):
return withdrawal.ConstantDollar(p, h, rate=Decimal('0.0325'))
def compare_em_vs_vpw(series, years=40, title=''):
(r1, r2) = itertools.tee(series)
portfolio = (800000, 200000)
# x = simulate_withdrawals(r1, years=years, harvesting=harvesting.make_rebalancer(0.8), withdraw=CAPEPercentage, portfolio=portfolio)
x = simulate_withdrawals(r1, years=years, harvesting=harvesting.make_rebalancer(0.8), withdraw=withdrawal.EM, portfolio=portfolio)
# x = simulate_withdrawals(r1, years=years, harvesting=harvesting.make_rebalancer(0.8), withdraw=ERN_325, portfolio=portfolio)
y = simulate_withdrawals(r2, years=years, harvesting=harvesting.make_rebalancer(0.8), withdraw=VPW, portfolio=portfolio)
plot.plot_n({'ERN' : [n.withdraw_r for n in x], 'VPW' : [n.withdraw_r for n in y]}, 'Year of Retirement', title)
plot.plot_n({'ERN' : [n.portfolio_r for n in x], 'VPW' : [n.portfolio_r for n in y]}, 'Year of Retirement', title)
print()
print ('cew')
print ('-' * 10)
print('ern', metrics.cew([n.withdraw_r for n in x]))
print('vpw', metrics.cew([n.withdraw_r for n in y]))
# print ('hreff-4')
# print ('-' * 10)
# print('ern', metrics.hreff([n.withdraw_pct_orig for n in x], [n.returns for n in x]))
# print('vpw', metrics.hreff([n.withdraw_pct_orig for n in y], [n.returns for n in y]))
print()
print ('ulcer - income (real)')
print ('-' * 10)
print('ern', metrics.ulcer([n.withdraw_r for n in x]))
print('vpw', metrics.ulcer([n.withdraw_r for n in y]))
print()
print ('ulcer - income (nominal)')
print ('-' * 10)
print('ern', metrics.ulcer([n.withdraw_n for n in x]))
print('vpw', metrics.ulcer([n.withdraw_n for n in y]))
print()
print ('ulcer - portfolio (real)')
print ('-' * 10)
print('ern', metrics.ulcer([n.portfolio_r for n in x]))
print('vpw', metrics.ulcer([n.portfolio_r for n in y]))
print()
print ('ulcer - portfolio (nominal)')
print ('-' * 10)
print('ern', metrics.ulcer([n.portfolio_n for n in x]))
print('vpw', metrics.ulcer([n.portfolio_n for n in y]))
def ern_vs_vpw(year, years=40):
compare_em_vs_vpw(Returns_US_1871().iter_from(year), title='%d: ERN vs VPW' % year, years=years)
ern_vs_vpw(START_YEAR, years=30)
```
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn import preprocessing
from sklearn.ensemble import IsolationForest
from sklearn.metrics import f1_score
import glob
import os
%matplotlib inline
```
## Model
```
def conf_mat(pred,truth):
    res = [0, 0, 0, 0]  # [TP, TN, FP, FN], matching how the summary code below reads it
    a = 0
    for i in range(len(truth)):
        if truth[i] == 1:
            if truth[i] == pred[i]:
                a = 0  # true positive: anomaly correctly flagged
            else:
                a = 3  # false negative: anomaly missed
        else:
            if truth[i] == pred[i]:
                a = 1  # true negative
            else:
                a = 2  # false positive: normal point flagged as anomaly
        res[a] = res[a] + 1
print(res)
return res
def map_pred(pred):
res = np.zeros(len(pred))
for i in range(len(pred)):
if pred[i] == -1:
res[i] = 1
return res
def naive_classifier_mean(x_test,mu,std,u_term,l_term):
print(mu - l_term * std,mu + u_term * std)
predictions = np.where( (x_test > (mu + u_term * std)) | (x_test < (mu - l_term * std)), 1, 0)
return predictions
def naive_classifier_mean_2(x_test,mu,std,u_term,l_term):
print(mu - l_term * std,mu + u_term * std)
predictions = np.where( (x_test < (mu + u_term * std)) & (x_test > (mu - l_term * std)), 1, 0)
return predictions
```
## Multi-dataset Model
```
input_dir = './../../train/KPI/'
summary = pd.DataFrame(columns=['KPI', 'TP', 'TN', 'FP', 'FN', 'PRECISION', 'RECALL', 'F1_SCORE'])
for fname in os.listdir(input_dir):
df = pd.read_csv(os.path.join(input_dir, fname), index_col='timestamp')
kpi_name = df['KPI ID'].values[0]
print(kpi_name)
df = df.drop(['KPI ID'], axis=1)
# Normalize Values
normalized_df=(df-df.min())/(df.max()-df.min())
normalized_df = normalized_df.astype({'label': 'int64'})
# Split to Train and Test
train_set, test_set= np.split(normalized_df, [int(.75 *len(normalized_df))])
# Format Train and Test
X = np.array(train_set['value']).reshape(-1, 1)
y = np.array(train_set['label'])
x_test = np.array(test_set['value']).reshape(-1,1)
y_test = np.array(test_set['label'])
# Check Valid Train Dataset
if len(np.unique(y)) > 1:
# Train Model
model = IsolationForest(n_estimators=100,contamination=float(0.005))
model.fit(df.value.values.reshape(-1, 1))
# Make Predictions
predictions = map_pred(model.predict(df.value.values.reshape(-1, 1)))
# Compute Confusion Matrix
cf = conf_mat(predictions,df.label.values)
# F1-Score
prec = 0
rec = 0
f1 = 0
if (cf[0] + cf[2]) != 0:
prec = cf[0] / (cf[0] + cf[2])
if (cf[0] + cf[3]) != 0:
rec = cf[0] / (cf[0] + cf[3])
if (prec + rec) != 0:
f1 = 2 * (prec * rec / (prec+rec))
# print(f1_score(predictions,y_test))
summary = summary.append({'KPI': kpi_name,
'TP': cf[0],
'TN': cf[1],
'FP': cf[2],
'FN': cf[3],
'PRECISION': prec,
'RECALL': rec,
'F1_SCORE': f1 }, ignore_index=True)
else:
summary = summary.append({'KPI': kpi_name,
'TP': None,
'TN': None,
'FP': None,
'FN': None,
'PRECISION': None,
'RECALL': None,
'F1_SCORE': None }, ignore_index=True)
# summary.to_csv('DT_Result.csv')
```
## Single Class
```
input_dir = './../../test/KPI/'
fname = 'test_88cf3a776ba00e7c.csv'
input_dir = './../../train/KPI/'
fname = 'train_88cf3a776ba00e7c.csv'
summary = pd.DataFrame(columns=['KPI', 'TP', 'TN', 'FP', 'FN', 'PRECISION', 'RECALL', 'F1_SCORE'])
df = pd.read_csv(os.path.join(input_dir, fname), index_col='timestamp')
kpi_name = df['KPI ID'].values[0]
print(kpi_name)
df = df.drop(['KPI ID'], axis=1)
# Format Train and Test
# X = np.array(train_set['value']).reshape(-1, 1)
# x_test = np.array(test_set['value']).reshape(-1,1)
model = IsolationForest(n_estimators=100,contamination=float(0.1))
model.fit(df.value.values.reshape(-1, 1))
# Make Predictions
predictions = map_pred(model.predict(df.value.values.reshape(-1, 1)))
# Compute Confusion Matrix
# cf = conf_mat(predictions,df.label.values)
f1_score(predictions,df.label.values)
unique, counts = np.unique(predictions, return_counts=True)
dict(zip(unique, counts))
df['pred'] = predictions
df[df['pred'] == 1]
```
## Plot
```
df.label = df.label * 0.1
df.head(1440*10).plot(kind='line',figsize=(12,8))
plt.plot(predictions*0.1,alpha=.5)
# plt.plot(np.squeeze(df_pred.values.T)*1000)
```
```
import pandas as pd
import numpy as np
import seaborn as sns
import os
os.chdir(r'C:\Users\Shyam Adsul\Python codes SSPU\Data sets')
df = pd.read_csv('horse.csv')
df.head()
df.isnull().sum()
print(df['rectal_temp'].mean())
print(df['rectal_temp'].median())
print(df['rectal_temp'].mode())
df['rectal_temp'].fillna(df['rectal_temp'].mean(),inplace = True)
df
print(df['pulse'].mean())
print(df['pulse'].median())
print(df['pulse'].mode())
df['pulse'].fillna(df['pulse'].mean(),inplace = True)
print(df['respiratory_rate'].mean())
print(df['respiratory_rate'].median())
print(df['respiratory_rate'].mode())
df['respiratory_rate'].fillna(df['respiratory_rate'].mean(),inplace = True)
df['temp_of_extremities'].fillna(df['temp_of_extremities'].mode()[0],inplace = True)
df['peripheral_pulse'].fillna(df['peripheral_pulse'].mode()[0],inplace = True)
df['mucous_membrane'].fillna(df['mucous_membrane'].mode()[0],inplace = True)
df['capillary_refill_time'].fillna(df['capillary_refill_time'].mode()[0],inplace = True)
df['pain'].fillna(df['pain'].mode()[0],inplace = True)
df['peristalsis'].fillna(df['peristalsis'].mode()[0],inplace = True)
df['abdominal_distention'].fillna(df['abdominal_distention'].mode()[0],inplace = True)
df.isnull().sum()
type(df['nasogastric_tube'])
df['nasogastric_tube'].fillna(df['nasogastric_tube'].mode()[0],inplace = True)
df['nasogastric_reflux'].fillna(df['nasogastric_reflux'].mode()[0],inplace = True)
df['nasogastric_reflux_ph'].fillna(df['nasogastric_reflux_ph'].mode()[0],inplace = True)
df['rectal_exam_feces'].fillna(df['rectal_exam_feces'].mode()[0],inplace = True)
df['abdomen'].fillna(df['abdomen'].mode()[0],inplace = True)
df['packed_cell_volume'].fillna(df['packed_cell_volume'].mode()[0],inplace = True)
df['total_protein'].fillna(df['total_protein'].mode()[0],inplace = True)
df['abdomo_appearance'].fillna(df['abdomo_appearance'].mode()[0],inplace = True)
df['abdomo_protein'].fillna(df['abdomo_protein'].mode()[0],inplace = True)
# Import label encoder
from sklearn import preprocessing
# label_encoder object knows how to understand word labels.
label_encoder = preprocessing.LabelEncoder()
df['outcome']= label_encoder.fit_transform(df['outcome'])
#df['outcome'] = number.fit_transform(df['outcome'].astype('int'))
df['outcome'].unique
df.head()
df.info()
df['surgery']= label_encoder.fit_transform(df['surgery'])
df['age']= label_encoder.fit_transform(df['age'])
df['temp_of_extremities']= label_encoder.fit_transform(df['temp_of_extremities'])
df['peripheral_pulse']= label_encoder.fit_transform(df['peripheral_pulse'])
df['mucous_membrane']= label_encoder.fit_transform(df['mucous_membrane'])
df['capillary_refill_time']= label_encoder.fit_transform(df['capillary_refill_time'])
df['peristalsis']= label_encoder.fit_transform(df['peristalsis'])
df['abdominal_distention']= label_encoder.fit_transform(df['abdominal_distention'])
df['nasogastric_tube']= label_encoder.fit_transform(df['nasogastric_tube'])
df['nasogastric_reflux']= label_encoder.fit_transform(df['nasogastric_reflux'])
df['rectal_exam_feces']= label_encoder.fit_transform(df['rectal_exam_feces'])
df['abdomen']= label_encoder.fit_transform(df['abdomen'])
df['abdomo_appearance']= label_encoder.fit_transform(df['abdomo_appearance'])
df['surgical_lesion']= label_encoder.fit_transform(df['surgical_lesion'])
df['cp_data']=label_encoder.fit_transform(df['cp_data'])
df['pain']=label_encoder.fit_transform(df['pain'])
df.head()
df.isnull().sum()
df.head()
df.columns
df1 = df[['hospital_number', 'rectal_temp', 'pulse',
'respiratory_rate',
'nasogastric_reflux_ph',
'packed_cell_volume', 'total_protein',
'abdomo_protein', 'lesion_1', 'lesion_2',
'lesion_3']]
df1.head()
from sklearn.preprocessing import StandardScaler
scaler= StandardScaler()
new_data=scaler.fit_transform(df1)
df1 = pd.DataFrame(new_data, columns = ['hospital_number','rectal_temp','pulse','respiratory_rate','nasogastric_reflux_ph','packed_cell_volume','total_protein','abdomo_protein','lesion_1','lesion_2','lesion_3'])
df1
df2 = df.drop(['hospital_number','rectal_temp','pulse','respiratory_rate','nasogastric_reflux_ph','packed_cell_volume','total_protein','abdomo_protein','lesion_1','lesion_2','lesion_3'],axis = 1)
df2
df3 = pd.concat([df1,df2],axis = 1)
df3.head()
x = df3.drop('outcome',axis = 1)
y = df3['outcome']
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.32)
from sklearn.naive_bayes import GaussianNB
from sklearn import metrics
gnb = GaussianNB()
gnb.fit(x_train, y_train)
y_pred = gnb.predict(x_test)
print('Accuracy {:.2f}'.format(gnb.score(x_test, y_test)))
```
# Titanic -- Tutorial
Let's play the Titanic tutorial and see who survives! In this notebook, we collect some of the code used in the Kaggle tutorials https://www.kaggle.com/c/titanic/details/getting-started-with-python-ii and https://www.kaggle.com/c/titanic/details/getting-started-with-random-forests.
```
#seaborn generates pretty plots
import seaborn as sns
#label-encoders turn categories into numbers
from sklearn.preprocessing import LabelEncoder
#Random forest is a basic general-purpose classifier
from sklearn.ensemble import RandomForestClassifier
#magic command to show plots in notebook
%matplotlib inline
```
## Load train and test data and have a look
```
#pandas makes it easy to organize our data in data frames
import pandas as pd
train = pd.read_csv('./train.csv', index_col = 0)
train.head()
test = pd.read_csv('./test.csv', index_col = 0)
test.head()
```
We merge the train and test data in order to perform feature transformations on both datasets simultaneously.
```
train_test = pd.concat([train.drop('Survived', axis = 1), test])
```
## Play with data aka exploratory data analysis
What is the proportion of passengers who survived the disaster?
```
train['Survived'].sum()/train.shape[0]
```
Let's look at more general statistics for numeric fields
```
train.describe()
```
How does the survival rate differ from women to men?
```
#filter males and females
train_female = train[train['Sex'] == 'female']
train_male = train[train['Sex'] == 'male']
surv_rate_f = train_female['Survived'].sum()/train_female.shape[0]
surv_rate_m = train_male['Survived'].sum()/train_male.shape[0]
print('Survival rate for females {:0.2f}'.format(surv_rate_f))
print('Survival rate for males {:0.2f}'.format(surv_rate_m))
```
What is the age distribution?
```
train_test['Age'].plot('hist', bins=16, title = 'Age distribution')
```
## Cleaning the data
We observe that the age column has missing values.
```
train_test['Age'].shape[0] - train_test['Age'].count()
```
We fill the missing values by the median age. The same is done for the fare.
```
train_test['Age'] = train_test['Age'].fillna(train_test['Age'].median())
train_test['Fare'] = train_test['Fare'].fillna(train_test['Fare'].median())
```
Furthermore, the gender should be encoded as a binary numeric value.
```
train_test['Sex'] = LabelEncoder().fit_transform(train_test['Sex'])
```
## Feature Engineering
Feature engineering is the process of generating new features of high predictive potential from the given data. For instance, the family size could be of interest.
```
train_test['Fam_size'] = train_test['SibSp'] + train_test['Parch']
```
Furthermore, we add a feature capturing the two-way interaction between age and passenger class.
```
train_test['Age * Class'] = train_test['Age'] * train_test['Pclass']
```
## Training of machine-learning classifier
After having completed the feature engineering, a machine-learning classifier can be fitted to the data. We choose a *random-forest model*, an off-the-shelf classifier providing solid performance for many data sets.
```
rf_clf = RandomForestClassifier(n_estimators = 100)
df_num = train_test[['Pclass', 'Sex', 'Age', 'SibSp', 'Parch', 'Fare', 'Fam_size', 'Age * Class']]
rf_clf = rf_clf.fit(df_num.iloc[:train.shape[0], :], train['Survived'])
```
After fitting to the numeric columns, we can generate predictions on the test set.
```
predicted = rf_clf.predict(df_num.iloc[train.shape[0]:, :])
predicted_series = pd.Series(predicted, index = df_num.iloc[train.shape[0]:, :].index, name = 'Survived')
```
Finally, we write the predictions to a submission file.
```
predicted_series.to_csv('./submission.csv', index = True, header = True)
```
# A Quick and Dirty Introduction to Deep Learning and PyTorch
## Deep Learning
Deep learning is a family of machine learning techniques built on neural networks. In this case, the word *deep* refers to the fact that these networks are organized into many **layers**.
This short tutorial aims to be a practical introduction to deep learning using [PyTorch](https://pytorch.org/). Parts of this tutorial are taken from PyTorch's [Introduction to PyTorch](https://pytorch.org/tutorials/beginner/introyt/introyt1_tutorial.html).
### Tensors
Most computations in neural networks are linear algebra operations on **tensors**. Tensors can be thought of as a generalization of matrices (the proper mathematical definition of tensors is a [bit more complicated](https://en.wikipedia.org/wiki/Tensor)).
A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor, and so on. PyTorch is built around tensors!
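As a small sketch of this terminology (the particular values are arbitrary), the snippet below builds a 1-, 2-, and 3-dimensional tensor and prints the number of dimensions and the shape of each:
```
import torch

vector = torch.tensor([1.0, 2.0, 3.0])            # 1-dimensional tensor
matrix = torch.tensor([[1.0, 2.0], [3.0, 4.0]])   # 2-dimensional tensor
cube = torch.zeros(2, 3, 4)                       # 3-dimensional tensor

for t in (vector, matrix, cube):
    # ndim is the number of dimensions, shape the size along each of them
    print(t.ndim, tuple(t.shape))
```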
##### Creating Tensors
We'll start with a few basic tensor manipulations. We start with ways of creating tensors
```
import torch
import numpy as np
import matplotlib.pyplot as plt
z = torch.zeros(5, 3)
print(z)
print(z.dtype)
```
It is possible to override the default datatype
```
i = torch.ones((5, 3), dtype=torch.int16)
print(i)
```
We can also initialize tensors with random values.
```
torch.manual_seed(1729)
r1 = torch.rand(2, 2)
print('A random tensor:')
print(r1)
r2 = torch.rand(2, 2)
print('\nA different random tensor:')
print(r2) # new values
torch.manual_seed(1729)
r3 = torch.rand(2, 2)
print('\nShould match r1:')
print(r3) # repeats values of r1 because of re-seed
```
We can initialize a tensor from a numpy array
```
n = np.ones(5)
t = torch.from_numpy(n)
np.add(n, 1, out=n)
print(f"t: {t}")
print(f"n: {n}")
```
And convert the tensors back to numpy
```
numpy_tensor = t.numpy()
print(f'numpy_tensor type {type(numpy_tensor)}')
```
##### Basic Arithmetic and Elementwise Operations
```
ones = torch.ones(2, 3)
print(ones)
twos = torch.ones(2, 3) * 2 # every element is multiplied by 2
print(twos)
threes = ones + twos # addition is allowed because the shapes are the same
print(threes) # tensors are added element-wise
print(threes.shape) # this has the same dimensions as input tensors
```
**Exercise**: The following snippet results in a runtime error. Why is that? (you have to uncomment the line)
```
r1 = torch.rand(2, 3)
r2 = torch.rand(3, 2)
# r3 = r1 + r2
```
Elementwise operations and aggregate operations
```
r = (torch.rand(2, 2) - 0.5) * 2 # values between -1 and 1
print('A random matrix, r:')
print(r)
# Common mathematical operations are supported:
print('\nAbsolute value of r:')
print(torch.abs(r))
# ...as are trigonometric functions:
print('\nInverse sine of r:')
print(torch.asin(r))
# ...and statistical and aggregate operations:
print('\nAverage and standard deviation of r:')
print(torch.std_mean(r))
print('\nMaximum value of r:')
print(torch.max(r))
```
##### Linear Algebra
```
r1 = (torch.rand(2, 2) - 0.5) * 2 # values between -1 and 1
print('A random matrix, r1:')
print(r1)
r2 = (torch.rand(2, 2) - 0.5) * 3 # values between -1.5 and 1.5
print('A random matrix, r2:')
print(r2)
print('\nMatrix Multiplication of r1 and r2')
print(torch.matmul(r1, r2))
print('\nDeterminant of r1:')
print(torch.det(r1))
print('\nSingular value decomposition of r1:')
print(torch.svd(r1))
print('\nPseudo-inverse of r1:')
print(torch.pinverse(r1))
```
### Artificial Neural Networks
Neural networks have their origins in early work that tried to model biological networks of neurons in the brain [(McCulloch and Pitts, 1943)](https://homes.luddy.indiana.edu/jbollen/I501F13/readings/mccullochpitts1943.pdf). For this reason, these methods are called neural networks, although the resemblance to real neural cells is only superficial.
We can understand artificial neural networks (or simply neural networks) as **complex compositions of simpler functions**. Each node within a network is called a **unit**, and each of these units calculates a weighted sum of the inputs from predecessor nodes and then applies a nonlinear function.
Let us consider the following simple case:
Let $a_j$ denote the output of unit $j$; it can be computed as
$$a_j = g_j\left(\sum_{i} w_{ij}a_i + b_j\right)$$
where
* $g_j(\cdot)$ is a nonlinear **activation function** associated with unit $j$
* $w_{ij}$ is the weight attached to the link from unit $i$ to unit $j$
* $b_j$ is a scalar bias
with this convention, we can write the above equation in vector form as
$$a_j = g_j\left(\mathbf{w}_j^T\mathbf{x} + b_j\right)$$
where $\mathbf{w}_j^T$ is the vector of weights leading into unit $j$.
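As a small illustration (the input, weights, and bias below are made-up numbers, not taken from any dataset), this computation can be written directly with PyTorch tensor operations:
```
import torch

# Made-up outputs of the predecessor units, weights, and bias for a single unit j
x = torch.tensor([0.5, -1.0, 2.0])    # inputs a_i coming into unit j
w_j = torch.tensor([0.1, 0.4, -0.3])  # weights w_ij leading into unit j
b_j = torch.tensor(0.2)               # scalar bias b_j

# Weighted sum of the inputs followed by a nonlinear activation (here a sigmoid)
a_j = torch.sigmoid(torch.dot(w_j, x) + b_j)
print(a_j)
```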
##### Activation Functions
Some of the most common activation functions are
* **Sigmoid** function
$$ \sigma(x) = \frac{1}{1 + \exp(-x)}$$
```
def sigmoid_numpy(x):
output = 1 / (1 + np.exp(-x))
return output
x = np.linspace(-3, 3)
sig_numpy = sigmoid_numpy(x)
plt.plot(x, sig_numpy)
plt.ylim((0, 1))
plt.show()
```
* **ReLU** (rectified linear units)
$$\text{ReLU}(x) = \max(0, x)$$
```
def relu_numpy(x):
return np.maximum(0, x)
x = np.linspace(-3, 3)
relu = relu_numpy(x)
plt.plot(x, relu)
plt.show()
```
* **Hyperbolic tangent**
$$\tanh(x) = \frac{\exp(2x) - 1}{\exp(2x) + 1}$$
```
x = np.linspace(-3, 3)
tanh = np.tanh(x)
plt.plot(x, tanh)
plt.show()
```
### Computation Graphs
In the following we will consider $\mathbf{x}$ an input (training or test) example, $\hat{\mathbf{y}}$ the outputs of the network, and $\mathbf{y}$ the *true values* used to derive a learning signal
* **Input Encoding**: It depends on the problem we want to model. Assume that we have $n$ input nodes
* If we have Boolean inputs, *false* is usually mapped to $0$ and *true* to $1$, although sometimes $-1$ and $1$ are used
* If we have real valued inputs, we can just use the actual values, although it is common to scale the inputs to fit a fixed range, or use a transformation like a log scale if the magnitudes of the different examples vary a lot.
* If we have categorical encodings, we can use a *one-hot* encoding
* **Output Layers and Loss Function**: On the output side of the network, the problem of encoding raw data values into actual values $\mathbf{y}$ is very similar to the input encoding: we can use a numerical mapping for Boolean outputs, (scaled/transformed) real values for real-valued outputs, and one-hot encodings for categorical data. This can be achieved by choosing an appropriate output nonlinearity:
* For Boolean outputs, we can use the sigmoid function (if we are mapping *false* and *true* to 0 and 1, respectively), or tanh (if we are mapping to -1 and 1).
* For categorical problems, we can use a **softmax** layer:
$$\text{softmax}(\mathbf{in})_k = \frac{\exp(in_k)}{\sum_{l=1}^{d} \exp(in_l)}$$
where $\mathbf{in} = (in_1, \dots, in_d)$ are the input values.
* For regression problems, it is usual to use the identity function $g(x) = x$
* And many more, depending on the problem (e.g., mixture density layers).
* **Hidden Layers**: We can think of the hidden layers as learning different *representations* for the input $\mathbf{x}$. In many cases, the $l$-th hidden layer will be given as a function of the previous layer:
$$\mathbf{h}_l(\mathbf{h}_{l-1}) = g_l(\mathbf{W}_l\mathbf{h}_{l-1} + \mathbf{b}_l)$$
although this form depends on the particular neural architecture (the above example would be for a fully connected feedforward neural network). With this notation, we can write the inputs as the $0$-th layer $\mathbf{h}_0 = \mathbf{x}$ and the outputs as the $L$-th layer, i.e., $\hat{\mathbf{y}} = \mathbf{h}_L(\mathbf{h}_{L-1})$. A minimal code sketch of this forward pass is given right after this list.
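As a minimal sketch of these ideas (the layer sizes and random values below are arbitrary and only for illustration), a fully connected hidden layer followed by a softmax output layer can be written with the tensor operations introduced earlier:
```
import torch

torch.manual_seed(0)

x = torch.rand(4)      # h_0 = x: a made-up input with 4 features
W1 = torch.rand(3, 4)  # weights of a hidden layer with 3 units
b1 = torch.rand(3)
W2 = torch.rand(2, 3)  # weights of an output layer with 2 classes
b2 = torch.rand(2)

h1 = torch.relu(W1 @ x + b1)                # hidden layer: h_1 = g_1(W_1 h_0 + b_1)
y_hat = torch.softmax(W2 @ h1 + b2, dim=0)  # output layer: softmax over the classes
print(y_hat, y_hat.sum())                   # class probabilities that sum to 1
```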
##### Training the network
The **loss function** is a measure of how good the predictions of the network are, i.e., how closely the predictions of the network approximate the expected values $\mathbf{y}$. We can use this loss function to learn the parameters of the network (the sets of weights and biases) as those which minimize the loss function
$$\hat{\mathbf{\theta}} = \arg \min_{\mathbf{\theta}} \mathcal{L}(\mathbf{Y}, \hat{\mathbf{Y}})$$
For regression problems, it is common to use the mean squared error
$$\text{mse}(\mathbf{Y}, \hat{\mathbf{Y}}) = \frac{1}{N} \sum_{i}||\mathbf{y}_i - \hat{\mathbf{y}}_i||^2$$
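For example (with made-up target and prediction tensors), the mean squared error can be computed by hand or with PyTorch's built-in loss function:
```
import torch

# Made-up true values and network predictions
y = torch.tensor([1.0, 2.0, 3.0])
y_hat = torch.tensor([1.1, 1.8, 3.3])

mse_manual = ((y - y_hat) ** 2).mean()
mse_builtin = torch.nn.functional.mse_loss(y_hat, y)
print(mse_manual, mse_builtin)
```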
```
import pandas as pd
import numpy as np
import math
import tensorflow as tf
import matplotlib.pyplot as plt
print(pd.__version__)
import progressbar
from tensorflow import keras
```
### Loading the test data
```
from process import loaddata
data = loaddata('../data/classifier/250.csv')
np.random.shuffle(data)
y = data[:,-7:-4]
x = data[:,1:7]
print('Classification data shape:', x.shape)
```
### Model Load
```
model = keras.models.load_model('../models/classificationandregression/large_mse250.h5')
```
### Test of the Classification & Regression NN
```
model.evaluate(x, y)  # evaluate the loaded model on the test data (model.fit would retrain it)
```
### Test spectrum
```
def energy_spectrum(energy_array, bins):
energy_array = np.array(energy_array)
plt.hist(energy_array, bins, histtype=u'step')
plt.yscale("log")
plt.show()
final_e = []
for y_ in y:
final_e.append(np.linalg.norm(y_))
y.shape
energy_spectrum(final_e, 75)
prediction = model.predict(x)
from tensorflow import keras
final_e_nn = []
bar = progressbar.ProgressBar(maxval=len(prediction),
widgets=[progressbar.Bar('=', '[', ']'), ' ',
progressbar.Percentage(),
" of {0}".format(len(prediction))])
bar.start()
for i, pred in enumerate(prediction):
final_e_nn.append(np.linalg.norm(pred))
bar.update(i+1)
bar.finish()
from scipy.stats import norm
import matplotlib.mlab as mlab
import matplotlib.pyplot as plt
import scipy.stats as stats
from scipy.stats import chisquare
mean,std=norm.fit(final_e)
plt.hist(final_e, bins=100, alpha = 0.5, color = 'mediumslateblue', label='Electrons spectrum', density = True)
xmin, xmax = plt.xlim()
x_e = np.linspace(xmin, xmax, 100)
y_e = norm.pdf(x_e, mean, std)
plt.plot(x_e, y_e,'g--', linewidth=2)
plt.legend(loc='upper right')
plt.savefig('../plots/onenetwork/250/electronspectrum.png')
plt.savefig('../plots/onenetwork/250/electronspectrum.pdf')
plt.show()
print('mean = ', mean)
print('std = ', std)
print("chi square = ", stats.chisquare(final_e))
from scipy.stats import norm
import matplotlib.mlab as mlab
import matplotlib.pyplot as plt
import scipy.stats as stats
from scipy.stats import chisquare
mean_nn,std_nn=norm.fit(final_e_nn)
plt.hist(final_e_nn, bins=100, alpha = 0.5, color = 'coral', label='NN prediction', density = True)
xmin, xmax = plt.xlim()
x_nn = np.linspace(xmin, xmax, 100)
y_nn = norm.pdf(x_nn, mean_nn, std_nn)
plt.plot(x_nn, y_nn,'r--', linewidth=2)
plt.legend(loc='upper right')
plt.savefig('../plots/onenetwork/250/NNprediction.png')
plt.savefig('../plots/onenetwork/250/NNprediction.pdf')
plt.show()
plt.hist(final_e_nn, bins=100, alpha = 0.5, color = 'mediumslateblue', label='NN prediction', density = True)
plt.hist(final_e, bins=100, alpha = 0.5, color = 'coral', label='Electron Momentum from simulations', density = True)
x_nn = np.linspace(xmin, xmax, 100)
y_nn = norm.pdf(x_nn, mean_nn, std_nn)
plt.plot(x_nn, y_nn,'g--', label = 'fit NN', linewidth = 2)
plt.legend(loc='upper right')
x_e = np.linspace(xmin, xmax, 100)
y_e = norm.pdf(x_e, mean, std)
plt.plot(x_e, y_e, 'r:', label = 'fit Electron Momentum Simulations', linewidth = 2)
plt.legend(loc = 'upper right')
plt.ylim((0, 32.5))
plt.savefig('../plots/onenetwork/250/comparison.png')
plt.savefig('../plots/onenetwork/250/comparison.pdf')
plt.show()
```
```
import os
import pandas as pd
import requests
```
# Training the LSTM model with latest data
```
# Main source for the training data
DATA_URL = 'https://raw.githubusercontent.com/OxCGRT/covid-policy-tracker/master/data/OxCGRT_latest.csv'
DATA_FILE = 'data/OxCGRT_latest.csv'
# Download the data set
data = requests.get(DATA_URL)
# Persist the data set locally in order to use it after submission to make predictions,
# as the sandbox won't have access to the internet anymore.
if not os.path.exists('data'):
os.mkdir('data')
open(DATA_FILE, 'wb').write(data.content)
# Reload the module to get the latest changes
import xprize_predictor
from importlib import reload
reload(xprize_predictor)
from xprize_predictor import XPrizePredictor
predictor = XPrizePredictor(None, DATA_FILE)
%%time
predictor_model = predictor.train()
if not os.path.exists('models'):
os.mkdir('models')
predictor_model.save_weights("models/trained_model_weights.h5")
```
# Predicting a 4-day gap using the model trained on the latest data
## Load candidate model
```
model_weights_file = "models/trained_model_weights.h5"
predictor = XPrizePredictor(model_weights_file, DATA_FILE)
```
## Make prediction
```
NPIS_INPUT_FILE = "../../../validation/data/2020-09-30_historical_ip.csv"
start_date = "2020-08-01"
end_date = "2020-08-04"
output_file_path = "predictions/2020-08-01_2020-08-04_latest_data.csv"
%%time
preds_df = predictor.predict(start_date, end_date, NPIS_INPUT_FILE)
# Create the output path
os.makedirs(os.path.dirname(output_file_path), exist_ok=True)
# Save to a csv file
preds_df.to_csv(output_file_path, index=False)
print(f"Saved predictions to {output_file_path}")
preds_df.head()
```
# Predicting the August 2021 COVID wave with the NPI-LSTM trained on data up to July 2021
## Training the LSTM with data up to 31 July 2021
### Filtering and saving the data up to 31 July 2021
```
latest_df = pd.read_csv(DATA_FILE,
parse_dates=['Date'],
dtype={"RegionName": str},
encoding="ISO-8859-1")
latest_july_df = latest_df[(latest_df.Date < '2021-08-01')]
output_file_path = "data/OxCGRT_2021_07_31.csv"
# Create the output path
os.makedirs(os.path.dirname(output_file_path), exist_ok=True)
# Save to a csv file
latest_july_df.to_csv(output_file_path, index=False)
print(f"Saved dataframe to {output_file_path}")
```
### Training
The training data is randomly split into 90% for training and 10% for validation, while the latest 14 days of the global dataframe are held out for testing.
```
DATA_FILE = output_file_path
predictor = XPrizePredictor(None, DATA_FILE)
```
We set the number of trials to 10 in order to obtain the LSTM model that minimizes the validation MAE.
```
%%time
predictor_model = predictor.train()
```
The training run reports the validation loss (MAE). We then save the trained weights:
```
if not os.path.exists('models'):
os.mkdir('models')
predictor_model.save_weights("models/trained_model_weights_2021_07_31.h5")
```
## Predicting the August 2021 wave
### Preparing the historical IP file
```
DATA_FILE = "data/OxCGRT_latest.csv"
latest_df = pd.read_csv(DATA_FILE,
parse_dates=['Date'],
dtype={"RegionName": str},
encoding="ISO-8859-1")
latest_historical_ip_df = latest_df[["CountryName", "RegionName",
"Date","C1_School closing",
"C2_Workplace closing","C3_Cancel public events",
"C4_Restrictions on gatherings","C5_Close public transport",
"C6_Stay at home requirements","C7_Restrictions on internal movement",
"C8_International travel controls","H1_Public information campaigns",
"H2_Testing policy","H3_Contact tracing","H6_Facial Coverings"]]
output_file_path = "data/latest_historical_ip.csv"
# Create the output path
os.makedirs(os.path.dirname(output_file_path), exist_ok=True)
# Save to a csv file
latest_historical_ip_df.to_csv(output_file_path, index=False)
print(f"Saved dataframe to {output_file_path}")
```
### Prediction
```
NPIS_INPUT_FILE = "data/latest_historical_ip.csv"
start_date = "2021-08-01"
end_date = "2021-08-31"
output_file_path = "predictions/2021-08-01_2021-08-31_latest_data.csv"
DATA_FILE = "data/OxCGRT_2021_07_31.csv"
model_weights_file = "models/trained_model_weights_2021_07_31.h5"
predictor = XPrizePredictor(model_weights_file, DATA_FILE)
%%time
preds_df = predictor.predict(start_date, end_date, NPIS_INPUT_FILE)
# Create the output path
os.makedirs(os.path.dirname(output_file_path), exist_ok=True)
# Save to a csv file
preds_df.to_csv(output_file_path, index=False)
print(f"Saved predictions to {output_file_path}")
preds_df.head()
```
```
import tensorflow as tf
import os
import numpy as np
import re
from PIL import Image
import matplotlib.pyplot as plt


class NodeLookup:
    def __init__(self):
        label_lookup_path = "C:/learn/inception_model/imagenet_2012_challenge_label_map_proto.pbtxt"
        uid_lookup_path = "C:/learn/inception_model/imagenet_synset_to_human_label_map.txt"
        self.node_lookup = self.load(label_lookup_path, uid_lookup_path)

    def load(self, label_lookup_path, uid_lookup_path):
        # Load the file that maps uid strings (n********) to human-readable class names
        proto_as_ascii_lines = tf.gfile.GFile(uid_lookup_path).readlines()
        uid_to_human = {}
        for line in proto_as_ascii_lines:
            # Read the data line by line and strip the newline character
            line = line.strip("\n")
            # Split on the tab character
            parsed_items = line.split("\t")
            # Get the class uid
            uid = parsed_items[0]
            # Get the class name
            human_string = parsed_items[1]
            # Store the mapping from uid string (n******) to class name
            uid_to_human[uid] = human_string
        # Load the file that maps class uid strings to the class numbers 1-1000
        proto_as_ascii = tf.gfile.GFile(label_lookup_path).readlines()
        node_id_to_uid = {}
        for line in proto_as_ascii:
            if line.startswith(" target_class:"):
                # Get the class number (1-1000)
                target_class = int(line.split(": ")[1])
            if line.startswith(" target_class_string"):
                # Store the mapping from class number to uid string (n****)
                target_class_string = line.split(": ")[1]
                node_id_to_uid[target_class] = target_class_string[1:-2]
        # Build the mapping from class number (1-1000) to class name
        node_id_to_name = {}
        for key, val in node_id_to_uid.items():
            # Get the class name for this uid
            name = uid_to_human[val]
            # Map the class number (1-1000) to the class name
            node_id_to_name[key] = name
        return node_id_to_name

    # Given a class number (1-1000), return the class name
    def id_to_string(self, node_id):
        if node_id not in self.node_lookup:
            return ""
        return self.node_lookup[node_id]


# Create a graph to hold the Google-trained Inception model
with tf.gfile.GFile("C:/learn/inception_model/classify_image_graph_def.pb", "rb") as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())
    tf.import_graph_def(graph_def, name="")

with tf.Session() as sess:
    softmax_tensor = sess.graph.get_tensor_by_name("softmax:0")
    # Walk the image directory
    for root, dirs, files in os.walk("C:/learn/images"):
        for file in files:
            # Load the image
            image_data = tf.gfile.FastGFile(os.path.join(root, file), "rb").read()
            predictions = sess.run(softmax_tensor, {"DecodeJpeg/contents:0": image_data})
            predictions = np.squeeze(predictions)  # flatten the result to a 1-D array
            # Print the image path and name
            image_path = os.path.join(root, file)
            print(image_path)
            # Display the image
            img = Image.open(image_path)
            plt.imshow(img)
            plt.axis("off")
            plt.show()
            # Sort the predictions and take the top 5
            top_k = predictions.argsort()[-5:][::-1]
            node_lookup = NodeLookup()
            for node_id in top_k:
                # Get the class name
                human_string = node_lookup.id_to_string(node_id)
                # Get the confidence score for the class
                score = predictions[node_id]
                print("%s (score = %.5f)" % (human_string, score))
            print()
```
## Using latent semantic indexing on labor categories
This is an attempt to use gensim's [Latent Semantic Indexing](https://radimrehurek.com/gensim/models/lsimodel.html) functionality with contract data, providing us with a way to find contract rows whose labor categories are similar to one we're looking at. Then we'll combine that data with some other dimensions from the contract rows, like price and minimum experience, and finally use a [K Nearest Neighbors](https://en.wikipedia.org/wiki/K-nearest_neighbors_algorithm) algorithm to help us find comparables for any given criteria.
Code is largely based off the [Making an Impact with Python Natural Language Processing Tools](https://www.youtube.com/watch?v=jSdkFSg9oW8) Pycon 2016 tutorial, specifically its [LSI with Gensim](https://github.com/totalgood/twip/blob/master/docs/notebooks/09%20Features%20--%20LSI%20with%20Gensim.ipynb) notebook.
```
import pandas as pd
import warnings
warnings.filterwarnings('ignore')
rows = pd.read_csv('../data/hourly_prices.csv', index_col=False, thousands=',')
from gensim.models import LsiModel, TfidfModel
from gensim.corpora import Dictionary
```
We'll build a vocabulary off the labor categories in the contract rows, and then build a [term frequency–inverse document frequency](https://en.wikipedia.org/wiki/Tf%E2%80%93idf) matrix off it.
```
vocab = Dictionary(rows['Labor Category'].str.split())
tfidf = TfidfModel(id2word=vocab, dictionary=vocab)
bows = rows['Labor Category'].apply(lambda x: vocab.doc2bow(x.split()))
vocab.token2id['engineer']
vocab[0]
dict([(vocab[i], round(freq, 2)) for i, freq in tfidf[bows[0]]])
```
Here we'll build an LSI model that maps each labor category to a 5-dimensional vector.
```
lsi = LsiModel(tfidf[bows], num_topics=5, id2word=vocab, extra_samples=100, power_iters=2)
len(vocab)
topics = lsi[bows]
df_topics = pd.DataFrame([dict(d) for d in topics], index=bows.index, columns=range(5))
lsi.print_topic(1, topn=5)
```
This part is a bit weird: we're extending our vectors with information about the price and minimum experience of each contract row, semi-normalizing the data so that they don't "overwhelm" the importance of the LSI dimensions when calculating distances between points.
I have no idea if this is actually legit.
```
PRICE_COEFF = 1 / 500.0
XP_COEFF = 1 / 10.0
df_topics['Price'] = (rows['Year 1/base'] * PRICE_COEFF).fillna(0)
df_topics['Experience'] = (rows['MinExpAct'] * XP_COEFF).fillna(0)
```
Now we'll use a [K Nearest Neighbors](https://en.wikipedia.org/wiki/K-nearest_neighbors_algorithm) algorithm to make it easy for us to find vectors that are nearby.
```
from sklearn.neighbors import NearestNeighbors
df_topics = df_topics.fillna(0)
neigh = NearestNeighbors(n_neighbors=5)
neigh.fit(df_topics)
neigh.kneighbors(df_topics.iloc[0].values.reshape(1, -1), return_distance=False)
```
Here's where things potentially become useful: we'll create a function that takes a labor category, price, and experience, and returns a list of comparables from our vector space.
```
def get_neighbors(labor_category, price, experience):
vector = []
topic_values = lsi[tfidf[vocab.doc2bow(labor_category.split())]]
vector.extend([v[1] for v in topic_values])
vector.extend([price * PRICE_COEFF, experience * XP_COEFF])
neighbors = list(neigh.kneighbors([vector], return_distance=False)[0])
return pd.DataFrame([rows.loc[i] for i in neighbors], index=neighbors)
get_neighbors('Awesome Engineer', 80, 5)
```
# Data Mining Overview (Cont.)
As we discussed earlier, there are two broad categories of data mining methods: **supervised** and **unsupervised**.
In a **supervised** problem, the goal is to predict an outcome variable. It is labeled as supervised because there is a target that can supervise the learning and it is used to train the algorithm.
In an **unsupervised** learning problem, there is no outcome variable to guide the learning. In this case, the algorithm must extract meaningful information from the dataset without an output specified beforehand.
We will again use the dataset about Gies students as an example with column variables being major, year, and the type of internship they got.
Suppose your goal is to find โsimilarโ students by finding patterns in the data. If you do not have a particular outcome variable in your dataset that you are aiming to predict, this would be considered **unsupervised learning**.
Now suppose you are given a dataset of students that are currently enrolled in or have been enrolled in BADM 211. You want to use information about who dropped the course in previous semesters to predict who is at risk of dropping the course this semester. In this case, we have a particular outcome that we are trying to predict.
At least for some of the data, we have information about the outcome. The remaining variables (the variables other than the one that indicates if a student dropped the course) are the predictors, or the independent variables, in the model. In this example, because we have a specific target weโre interested in predicting, it is a **supervised learning** problem.
In data mining, there are many synonyms for common terms. For example, the outcome variable is the one that we would like to predict in a supervised learning problem. The outcome variable can be referred to as the response, predicted, target, label, or dependent variable.
For a supervised learning problem, we usually have a regression or a classification problem.
In a **regression** problem, the outcome variable is continuous. For example, if you want to predict how much a customer will spend, that is a regression problem. It should be noted that regression problems are also called prediction problems, but we will stick to using the term regression.
On the other hand, for a **classification** problem, the outcome variable is a categorical variable. For example, predicting whether or not a customer will spend more than $500 is a classification problem.
*Fig. 1 - Summary of Supervised vs. Unsupervised Learning*
For unsupervised learning problems, we may want to use methods like association rules, clustering, and data reduction and exploration.
**Association rules** allow us to determine if different variables go together. Amazon uses association rules to suggest items to you when you are making a purchase.
**Data reduction** is an unsupervised method that allows us to reduce the complexity of our data sets.
**Clustering** is useful for grouping like records. For example, we might use clusters to develop customer segments. In a healthcare setting, we could use this to group patients based on genetic signatures, allowing for more targeted treatments.
Now that we know the different types of data mining models, we must determine how to evaluate and select these models. For now, we will focus on supervised models, where there is a specific outcome variable we are trying to predict.
**Overfitting** is a fundamental concept in data science. When we want to learn from a given dataset, our goal is to glean enough information to make accurate predictions on new data, while ensuring that we do not learn the idiosyncrasies of the data (which are not generalizable to new data). If we learn these idiosyncrasies, then we are overfitting.
*Fig. 2*
Suppose we have a graph of two types of dots, blue and red. We want to use this data to learn where the boundary between the red and blue dots should be. In the figure, we see two models: the green line and the black lines.
Think about which fits the sample better and which generalizes better to new data.
The green model suggests that the two new dots on the right should be red and the new dot on the left should be blue. However, we must determine if this is reasonable.
Both of the supposedly red dots (as predicted by the green line model) are mostly surrounded by blue dots and the supposedly blue dot is mostly surrounded by red dots.
With the green model, we've ended up with a model that's really complex and the boundary isn't smooth at all. This is an example of overfitting because the model learned too much from the underlying data and attempts to capture every minor fluctuation. Though we may think we're fitting a better model, we're learning noise that won't generalize: if the data changes, the green model will likely not perform well.
**Addressing Overfitting:**
We've learned that overfitting is a problem because it means our model won't generalize to new data. We will simulate what will happen when we have to apply our model to real-world data by splitting our dataset into two or three parts.
The first partition is called the training partition. We take a subset of our original dataset and use it to train - or develop - the model. From the remaining records, we create a second partition which we call the validation partition.
Remember, we trained our models using the training partition; we then apply each of those models to the data in the validation partition.
In particular, for each model, we make predictions about the outcome variable for the validation set. We can then compare those predictions to what we know to be the actual outcomes. Doing so allows us to assess predictive performances for each of our models.
*Fig. 3 - Data Partitioning Summary*
If there is enough data or if many models are being tested, we will have a third partition: the test partition.
This can be useful if we are comparing thousands of models. In this case, it is likely that a bad model will randomly perform well on the validation partition. Testing the final model on a third partition allows us to avoid this scenario. Data scientists use the terms validation partition and test partition interchangeably.
*Fig. 4 - Sweet Spot of Model Complexity*
Partitioning allows us to find the **sweet spot** of **model complexity.** In this figure, the blue line represents how well a model performs on a training dataset and the green line represents how well it generalizes: how well it performs on the validation and test sets.
We see that if our model isn't complex enough, we will likely underfit our data. As the model increases in complexity, our model's accuracy will improve both for the training data and for test sets.
Eventually, we'll hit a sweet spot where the model is pretty accurate for both the training data and the test data.
If we continue tweaking the model beyond this point, it will become too fitted to the training data and may not fit the test data as accurately. **Partitioning** helps us avoid this.
A couple of notes: 1) The terminology here can get confusing: we sometimes refer to the validation set as a test set or, as we used in the header, a holdout set. 2) We'll be talking about a more sophisticated version of this called cross-validation. But let's start here to get our bearings.
Below, we randomly sample 60% of the dataset into a training set, where train_X holds the predictors and train_y holds the outcome variable. The remaining 40% serves as a validation set, where test_X holds the predictors and test_y holds the outcome variable.
```
from sklearn.model_selection import train_test_split
train_X, test_X, train_y, test_y = train_test_split(X, y, test_size=0.40, random_state=12)
print('Training: ', train_X.shape)
print('Validation: ', test_X.shape)
```
Notice that we also set the random_state equal to 12. This ensures that we are not randomly selecting a new training and test set every time we run that cell. If we didn't specify the random_state, we'd never be able to replicate our work.
Another method, aside from partitioning, is what's called **K-fold cross validation**. Cross validation is more stable and works with less data than partitioning requires. In this method, we create k subsets (folds) of the data.
We fit each model k times: each time we set aside one of the k folds as a test set and use the remaining folds as the training set. We then end up with k measures of accuracy. We can combine these by taking the average to assess a given model's accuracy.
Let's look at an example of this with k=5.
*Fig. 5 - Train-Test Split with k = 5*
We've split up our dataset so 20% is in the test set and the remaining 80% is in the training set. We will first fit the model on this training set, test it on the test set, and then evaluate accuracy.
Now, we'll set aside a different 20% to use as the test set and use the remaining 80% as our training set. Again, we'll fit the model on this new training set and assess the accuracy on the new test set.
We'll do this 3 more times, having in total fit the model 5 times with 5 different accuracy scores. We can combine these to understand how accurate the model will be on average. If we use this method on multiple models, we can compare those average accuracies to pick the best fitting model.
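As a sketch of this procedure (assuming the same X and y used in the partitioning code above, and using a random forest purely as a placeholder model), scikit-learn's cross_val_score performs the k fits and returns the k accuracy scores:
```
from sklearn.model_selection import cross_val_score
from sklearn.ensemble import RandomForestClassifier

# Placeholder model; any scikit-learn estimator could be used here
model = RandomForestClassifier(n_estimators=100, random_state=12)

# k = 5: the model is fit 5 times, each time holding out a different fold as the test set
scores = cross_val_score(model, X, y, cv=5)

print('Accuracy for each fold:', scores)
print('Average accuracy:', scores.mean())
```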
Next, we will look at an example of how to measure the accuracy of your model.
We can then take the model which was trained using the training set and use it to make predictions on this holdout set, a final estimate of the accuracy of the model. Those predictions are in the greyed out column.
The actual outcome will be known as y and the predicted outcome will be known as y-hat.
*Fig. 6 - Actual vs. Predicted Outcomes*
We can use the difference between the actual and predicted outcomes to calculate the error for each record. We can then combine the errors across all of the records in the test set to determine the predictive accuracy.
There are five common ways we can combine the errors.
*Fig. 7 - Types of Errors*
The simplest way to combine errors is to take the average. This is called the **average** (or mean) **error**. The benefit is that it is easy to understand; the downside is that errors may cancel out. For example, assume that half of the records had an error of 100 and half had an error of -100. The mean error would be zero even though none of the records were perfectly predicted. Therefore, a small mean error does NOT indicate good performance. However, this measure can tell us whether the model consistently overestimates or underestimates the outcome.
If you wanted to instead look at the magnitude of the error, we can take the square or the absolute value of the error term.
The **root mean squared error** (RMSE) sums the squared error terms then takes the average and the square root. RMSE is more sensitive to large errors because errors are squared before being averaged. Use this measure if large errors are undesirable.
The **mean absolute error** (MAE) averages the absolute value of the error.
Both the root mean squared error and mean absolute error give us a sense of the absolute error, and both are used to assess performance of *predictive* models.
To make sense of the relative error, we can use either the **mean percentage error** (MPE) or the **mean absolute percentage error** (MAPE). This will not work if any of our actual outcome values are equal to 0, since we divide by y.
MPE and MAPE give us a sense of the relative error because each error is divided by the corresponding actual value. Again, this won't work on your data if any of the actual values equal 0.
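All five measures are easy to compute directly. The snippet below is a sketch using small made-up arrays for the actual (y) and predicted (y_hat) outcomes:
```
import numpy as np

# Made-up actual and predicted outcomes
y = np.array([100.0, 150.0, 200.0, 250.0])
y_hat = np.array([110.0, 140.0, 210.0, 230.0])

errors = y - y_hat

me = errors.mean()                      # mean (average) error: positive and negative errors can cancel
rmse = np.sqrt((errors ** 2).mean())    # root mean squared error
mae = np.abs(errors).mean()             # mean absolute error
mpe = (errors / y).mean() * 100         # mean percentage error (requires y != 0)
mape = np.abs(errors / y).mean() * 100  # mean absolute percentage error

print(me, rmse, mae, mpe, mape)
```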
We will now summarize and share some of the key-takeaways.
* First, before going into data mining algorithms, it is important to take necessary steps to prepare before the model stage by exploring and pre-processing the data.
* Second, data mining consists of two classes of methods: supervised (Classification & Prediction) and unsupervised (Association Rules, Data Reduction, Data Exploration & Visualization). In supervised data mining we have an outcome variable we are trying to predict while in unsupervised data mining we do not.
* We use partitioning to evaluate how well our model performs and to avoid overfitting to our sample.
* With partitioning, we fit the model to the training partition and assess the accuracy on the validation and test partitions.
* Data mining methods are usually applied to a sample from a large database and then the best model is used to score the entire database.
The following Python packages/modules will be used throughout:
| Name | Description |
|--------------|---------------------------------------------------------------|
| NumPy | Numerical computation with python using arrays and matrices |
| SciPy | Scientific computation with python for optimization and stats |
| Matplotlib | Creates comprehensive visualizations |
| Seaborn | Creates comprehensive visualizations with improved graphics |
| Pandas | Data manipulation and analysis |
| Scikit_learn | Machine learning tools |
## References:
[NumPy](http://www.numpy.org/)
[SciPy](http://www.scipy.org/scipylib/index.html)
[Matplotlib](http://matplotlib.org/)
[Seaborn](http://web.stanford.edu/~mwaskom/software/seaborn/index.html)
[Pandas](http://pandas.pydata.org/)
[Scikit_learn](http://scikit-learn.org/stable/index.html)
## Glossary:
**Supervised Learning:** Data mining methods in which there is a specific outcome (target) variable that is used to train the algorithm.
**Unsupervised Learning:** Methods in which there is no outcome variable; the algorithm extracts patterns from the data without a pre-specified output.
**Regression:** A supervised learning problem in which the outcome variable is continuous.
**Classification:** A supervised learning problem in which the outcome variable is categorical.
**Association Rules:** An unsupervised method for determining which variables (items) tend to go together, e.g., for product suggestions.
**Data Reduction:** An unsupervised method for reducing the complexity of a dataset.
**Clustering:** An unsupervised method for grouping similar records, e.g., customer segments.
**Overfitting:** Learning the idiosyncrasies (noise) of the training data so that the model does not generalize to new data.
**Partitioning:** Splitting the data into training, validation, and (optionally) test sets so that model performance can be assessed on data not used for training.
**K-fold Cross Validation:** Splitting the data into k folds and fitting the model k times, each time holding out one fold as the test set, then averaging the k accuracy measures.
**Mean Error:** The average of the errors; positive and negative errors can cancel out.
**Root Mean Squared Error (RMSE):** The square root of the average squared error; more sensitive to large errors.
**Mean Absolute Error (MAE):** The average of the absolute values of the errors.
**Mean Percentage Error (MPE):** The average of the errors divided by the actual values, expressed as a percentage.
**Mean Absolute Percentage Error (MAPE):** The average of the absolute errors divided by the actual values, expressed as a percentage.
```{toctree}
:hidden:
:titlesonly:
EDA & Data Viz. Chapter 3 Draft v2
LASSO
Logistic Regression Chapter
MLR
Similarity & Clustering Chapter 4 Draft v2
```
# Angular kinematics in a plane (2D)
Marcos Duarte
Human motion is a combination of linear and angular movement and occurs in three-dimensional (3D) space. For certain movements, and depending on the desired or needed degree of detail for the motion analysis, it's possible to perform a two-dimensional (2D, planar) analysis in the main plane of movement. Such a simplification is welcome because the instrumentation and analysis needed to measure 3D motion are much more complicated than for the 2D case.
For the planar case, the calculation of angles is reduced to the application of trigonometry to the kinematic data. For instance, given the coordinates in a plane of markers on a segment as shown in the figure below, the angle of the segment can be calculated using the inverse function of <span class="notranslate"> $\sin$, $\cos$ </span>, or <span class="notranslate"> $\tan$ </span>.
<div class='center-align'><figure><img src="./../images/segment.png" width=250/><figcaption><figcaption><center><i>Figure. A segment in a plane and its coordinates.</i></center></figcaption> </figure></div>
For better numerical accuracy (and also to distinguish values in the whole quadrant), the inverse function of <span class="notranslate">$\tan$</span> is preferred.
For the data shown in the previous figure:
<span class="notranslate">
$$ \theta = arctan\left(\frac{y_2-y_1}{x_2-x_1}\right) $$
</span>
In computer programming (here, Python/Numpy) this is calculated using: `numpy.arctan((y2-y1)/(x2-x1))`. However, the function `arctan` cannot distinguish whether the segment in the previous figure is at 45$^o$ or at 225$^o$; `arctan` will return the same value for both. Because of this, the function `numpy.arctan2(y, x)` is used, but be aware that `arctan2` returns angles in $[-\pi,\pi]$:
```
# Import the necessary libraries
import numpy as np
from IPython.display import display, Latex
import matplotlib
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
sns.set_context("notebook", font_scale=1.2, rc={"lines.linewidth": 2, "lines.markersize": 8})
x1, y1 = 0, 0
x2, y2 = 1, 1
display(Latex('Segment at 45$^o$:'))
angs = [ np.arctan((1-0)/(1-0))*180/np.pi, np.arctan2(1-0, 1-0)*180/np.pi ]
display(Latex('Using arctan: '+str(angs[0])+'$\quad$'+'Using arctan2: '+str(angs[1])))
display(Latex('Segment at 225$^o$:'))
angs = [ np.arctan((-1-0)/(-1-0))*180/np.pi, np.arctan2(-1-0, -1-0)*180/np.pi ]
display(Latex('Using arctan: '+str(angs[0])+'$\quad$'+'Using arctan2: '+str(angs[1])))
```
And Numpy has a function to convert an angle in rad to degrees and the other way around:
```
print('np.rad2deg(np.pi) =', np.rad2deg(np.pi))
print('np.deg2rad(180) =', np.deg2rad(180))
```
Let's simulate a 2D motion of the arm performing two complete turns around the shoulder to exemplify the use of `arctan2`:
```
t = np.arange(0, 2, 0.01)
x = np.cos(2*np.pi*t)
y = np.sin(2*np.pi*t)
ang = np.arctan2(y, x)*180/np.pi
plt.figure(figsize=(12, 4))
hax1 = plt.subplot2grid((2, 3), (0, 0), rowspan=2)
hax1.plot(x, y, 'go')
hax1.plot(0, 0, 'go')
hax1.set_xlabel('x [m]')
hax1.set_ylabel('y [m]')
hax1.set_xlim([-1.1, 1.1])
hax1.set_ylim([-1.1, 1.1])
hax2 = plt.subplot2grid((2, 3), (0, 1), colspan=2)
hax2.plot(t, x, 'bo', label='x')
hax2.plot(t, y, 'ro', label='y')
hax2.legend(numpoints=1, frameon=True, framealpha=.8)
hax2.set_ylabel('Position [m]')
hax2.set_ylim([-1.1, 1.1])
hax3 = plt.subplot2grid((2, 3), (1, 1), colspan=2)
hax3.plot(t, ang, 'go')
hax3.set_yticks(np.arange(-180, 181, 90))
hax3.set_xlabel('Time [s]')
hax3.set_ylabel('Angle [ $^o$]')
plt.tight_layout()
```
Because the output of the `arctan2` is bounded to $[-\pi,\pi]$, the angle measured appears chopped in the figure. This problem can be solved using the function `numpy.unwrap`, which detects sudden jumps in the angle and corrects that:
```
ang = np.unwrap(np.arctan2(y, x))*180/np.pi
hfig, hax = plt.subplots(1,1, figsize=(8,3))
hax.plot(t, ang, 'go')
hax.set_yticks(np.arange(start=0, stop=721, step=90))
hax.set_xlabel('Time [s]')
hax.set_ylabel('Angle [ $^o$]')
plt.tight_layout()
```
If now we want to measure the angle of a joint (i.e., the angle of a segment in relation to another segment) we just have to subtract the two segment angles (but this is correct only if the angles are in the same plane):
```
x1, y1 = 0.0, 0.0
x2, y2 = 1.0, 1.0
x3, y3 = 1.1, 1.0
x4, y4 = 2.1, 0.0
hfig, hax = plt.subplots(1,1, figsize=(8,3))
hax.plot((x1,x2), (y1,y2), 'b-', (x1,x2), (y1,y2), 'ro', linewidth=3, markersize=12)
hax.add_patch(matplotlib.patches.FancyArrowPatch(posA=(x1+np.sqrt(2)/3, y1),
posB=(x2/3, y2/3),\
arrowstyle='->,head_length=10,head_width=5', connectionstyle='arc3,rad=0.3'))
plt.text(1/2, 1/5, '$\\theta_1$', fontsize=24)
hax.plot((x3,x4), (y3,y4), 'b-', (x3,x4), (y3,y4), 'ro', linewidth=3, markersize=12)
hax.add_patch(matplotlib.patches.FancyArrowPatch(posA=(x4+np.sqrt(2)/3, y4),
posB=(x4-1/3, y4+1/3),\
arrowstyle='->,head_length=10,head_width=5', connectionstyle='arc3,rad=0.3'))
hax.xaxis.set_ticks((x1,x2,x3,x4))
hax.yaxis.set_ticks((y1,y2,y3,y4))
hax.xaxis.set_ticklabels(('$x_1$','$x_2$','$x_3$','$x_4$'), fontsize=20)
hax.yaxis.set_ticklabels(('$y_1,\,y_4$','$y_2,\,y_3$'), fontsize=20)
plt.text(x4+.2,y4+.3,'$\\theta_2$', fontsize=24)
hax.add_patch(matplotlib.patches.FancyArrowPatch(posA=(x2-1/3, y2-1/3),
posB=(x3+1/3, y3-1/3),\
arrowstyle='->,head_length=10,head_width=5', connectionstyle='arc3,rad=0.3'))
plt.text(x1+.8,y1+.35,'$\\theta_J=\\theta_2-\\theta_1$', fontsize=24)
hax.set_xlim(min([x1,x2,x3,x4])-0.1, max([x1,x2,x3,x4])+0.5)
hax.set_ylim(min([y1,y2,y3,y4])-0.1, max([y1,y2,y3,y4])+0.1)
hax.grid(xdata=(0,1), ydata=(0,1))
plt.tight_layout()
```
The joint angle shown above is simply the difference between the adjacent segment angles:
```
x1, y1, x2, y2 = 0, 0, 1, 1
x3, y3, x4, y4 = 1.1, 1, 2.1, 0
ang1 = np.arctan2(y2-y1, x2-x1)*180/np.pi
ang2 = np.arctan2(y3-y4, x3-x4)*180/np.pi
#print('Angle 1:', ang1, '\nAngle 2:', ang2, '\nJoint angle:', ang2-ang1)
display(Latex('$\\theta_1=\;$' + str(ang1) + '$^o$'))
display(Latex('$\\theta_2=\;$' + str(ang2) + '$^o$'))
display(Latex('$\\theta_J=\;$' + str(ang2-ang1) + '$^o$'))
```
The following convention is commonly used to describe the knee and ankle joint angles at the sagittal plane (figure from Winter 2005):
<div class='center-align'><figure><img src='./../images/jointangles.png' width=350 alt='Joint angle convention'/> <figcaption><center><i>Figure. Convention for the sagital joint angles of the lower limb (from Winter, 2009).</i></center></figcaption></figure></div>
## Angle between two 3D vectors
In certain cases, we have access to the 3D coordinates of markers but we only care about the angle between segments in the plane defined by these segments (if there is considerable movement in different planes, this simple 2D angle might give unexpected results).
Consider that `p1` and `p2` are the 3D coordinates of markers placed on segment 1 and `p3` and `p4` are the 3D coordinates of the markers on segment 2.
To determine the 2D angle between the segments, one can use the definition of the dot product:
<span class="notranslate">
$$ \mathbf{a} \cdot \mathbf{b} = ||\mathbf{a}||\:||\mathbf{b}||\:cos(\theta)\;\;\; \Rightarrow \;\;\; angle = arccos\left(\frac{dot(p2-p1,\;p4-p3)}{norm(p2-p1)*norm(p4-p3)} \right) $$
</span>
Or using the magnitude of the cross product:
<span class="notranslate">
$$ ||\mathbf{a} \times \mathbf{b}|| = ||\mathbf{a}||\:||\mathbf{b}||\:sin(\theta) \;\; \Rightarrow \;\; angle = arcsin\left(\frac{norm(cross(p2-p1,\;p4-p3))}{norm(p2-p1)*norm(p4-p3)} \right) $$
</span>
But because `arctan2` has better numerical accuracy, we can combine the dot and cross products; in Python notation:
```python
angle = np.arctan2(np.linalg.norm(np.cross(p1-p2, p4-p3)), np.dot(p1-p2, p4-p3))
```
See [this notebook](http://nbviewer.ipython.org/github/demotu/BMC/blob/master/notebooks/ScalarVector.ipynb) for a review on the mathematical functions cross product and scalar product.
We can use the formula above for the angle between two 3D vectors to calculate the joint angle even with the 2D vectors we calculated before:
```
p1, p2 = np.array([0, 0]), np.array([1, 1]) # segment 1
p3, p4 = np.array([1.1, 1]), np.array([2.1, 0]) # segment 2
angle = np.arctan2(np.linalg.norm(np.cross(p1-p2, p4-p3)), np.dot(p1-p2, p4-p3))*180/np.pi
print('Joint angle:', '{0:.1f}'.format(angle))
```
As expected, the same result.
In NumPy, if the third components of the vectors are zero, we don't even need to type them; NumPy takes care of adding a zero third component for the cross product.
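For instance, a quick check of this behavior (the vectors below are just illustrative values; recent NumPy versions may emit a deprecation warning for the 2-element shortcut):
```python
import numpy as np

a2d, b2d = np.array([1, 1]), np.array([1, -1])        # 2D vectors, z component omitted
a3d, b3d = np.array([1, 1, 0]), np.array([1, -1, 0])  # same vectors written explicitly in 3D

print(np.cross(a2d, b2d))   # -2: the scalar z component of the cross product
print(np.cross(a3d, b3d))   # [ 0  0 -2]: the full 3D cross product
```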
## Angular position, velocity, and acceleration
Angular position is a vector: its direction is given by the axis perpendicular to the plane in which the angular position is described, and the motion, if it occurs, is said to occur around this axis.
Angular velocity is the rate of change (with respect to time) of the angular position; over a finite interval, the mean angular velocity is:
<span class="notranslate">
$$ \mathbf{\omega}(t) = \frac{\mathbf{\theta}(t_2)-\mathbf{\theta}(t_1)}{t_2-t_1} = \frac{\Delta \mathbf{\theta}}{\Delta t}$$
</span>
And the instantaneous angular velocity is the derivative of the angular position, the limit of the ratio above as $\Delta t$ approaches zero:
<span class="notranslate">
$$ \mathbf{\omega}(t) = \frac{d\mathbf{\theta}(t)}{dt} $$
</span>
Angular acceleration is the rate (with respect to time) of change of the angular velocity, which can also be given by the second-order rate of change of the angular position:
<span class="notranslate">
$$ \mathbf{\alpha}(t) = \frac{\mathbf{\omega}(t_2)-\mathbf{\omega}(t_1)}{t_2-t_1} = \frac{\Delta \mathbf{\omega}}{\Delta t}$$
</span>
Likewise, angular acceleration is the first-order derivative of the angular velocity or the second-order derivative of the angular position vector:
<span class="notranslate">
$$ \mathbf{\alpha}(t) = \frac{d\mathbf{\omega}(t)}{dt} = \frac{d^2\mathbf{\theta}(t)}{dt^2} $$
</span>
The direction of the angular velocity and acceleration vectors is the same as the angular position (perpendicular to the plane of rotation) and the sense is given by the right-hand rule.
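As a minimal numerical sketch of these definitions (the signal below recreates the circular motion from the earlier cells), the derivatives can be approximated with `np.gradient`:
```python
import numpy as np
import matplotlib.pyplot as plt

# same circular motion as before: one revolution per second for 2 s
t = np.arange(0, 2, 0.01)
ang = np.unwrap(np.arctan2(np.sin(2*np.pi*t), np.cos(2*np.pi*t)))*180/np.pi  # angle [deg]

omega = np.gradient(ang, t)    # angular velocity [deg/s]
alpha = np.gradient(omega, t)  # angular acceleration [deg/s^2]

hfig, (hax1, hax2) = plt.subplots(2, 1, sharex=True, figsize=(8, 4))
hax1.plot(t, omega, 'b.')
hax1.set_ylabel(r'$\omega$ [ $^o/s$]')
hax2.plot(t, alpha, 'r.')
hax2.set_xlabel('Time [s]')
hax2.set_ylabel(r'$\alpha$ [ $^o/s^2$]')
plt.tight_layout()
```
For this uniform rotation the angular velocity should be roughly constant (about 360 $^o$/s, two revolutions in 2 s) and the angular acceleration close to zero, apart from small numerical errors at the end points.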
### The antiderivative
As the angular acceleration is the derivative of the angular velocity which is the derivative of angular position, the inverse mathematical operation is the [antiderivative](http://en.wikipedia.org/wiki/Antiderivative) (or integral):
<span class="notranslate">
$$ \mathbf{\theta}(t) = \mathbf{\theta}_0 + \int \mathbf{\omega}(t) dt $$
$$ \mathbf{\omega}(t) = \mathbf{\omega}_0 + \int \mathbf{\alpha}(t) dt $$
</span>
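As a minimal numerical sketch of the antiderivative (the constant angular acceleration and the initial conditions below are arbitrary values, chosen only for illustration), the integrals can be approximated by cumulative sums of the samples multiplied by the sampling interval:
```python
import numpy as np

dt = 0.01
t = np.arange(0, 2, dt)
alpha = np.full_like(t, 10.0)            # constant angular acceleration [rad/s^2]
omega0, theta0 = 0.0, 0.0                # initial angular velocity and position

omega = omega0 + np.cumsum(alpha) * dt   # angular velocity [rad/s]
theta = theta0 + np.cumsum(omega) * dt   # angular position [rad]

# ~20 rad/s and ~20.1 rad; the exact values are 20 rad/s and 20 rad (0.5*10*2**2)
print(omega[-1], theta[-1])
```
The small excess in the angular position comes from the rectangle-rule approximation of the integral; a finer sampling interval (or the trapezoidal rule) reduces it.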
## Relationship between linear and angular kinematics
Consider a particle rotating around a point at a fixed distance `r` (circular motion); as the particle moves along the circle, it travels an arc of length `s`.
The angular position of the particle is:
<span class="notranslate">
$$ \theta = \frac{s}{r} $$
</span>
This is in fact the definition of the radian, the unit of angular measure:
<div class='center-align'><figure><img src='http://upload.wikimedia.org/wikipedia/commons/thumb/3/3d/Radian_cropped_color.svg/220px-Radian_cropped_color.svg.png' width=200/><figcaption><center><i>Figure. An arc of a circle with the same length as the radius of that circle corresponds to an angle of 1 radian (<a href="https://en.wikipedia.org/wiki/Radian">image from Wikipedia</a>).</i></center></figcaption></figure></div>
Then, the distance travelled by the particle is the arc length:
<span class="notranslate">
$$ s = r\theta $$
</span>
As the radius is constant, the relation between linear and angular velocity and acceleration is straightforward:
<span class="notranslate">
$$ v = \frac{ds}{dt} = r\frac{d\theta}{dt} = r\omega $$
$$ a = \frac{dv}{dt} = r\frac{d\omega}{dt} = r\alpha $$
</span>
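A small numerical illustration (the radius, rotation rate, and angular acceleration below are arbitrary values, not taken from the text):
```python
import numpy as np

r = 0.5            # radius [m]
omega = 4*np.pi    # two revolutions per second [rad/s]
alpha = 2.0        # angular acceleration [rad/s^2]

v = r*omega        # linear (tangential) velocity [m/s]
a = r*alpha        # tangential acceleration [m/s^2]
print('v = {:.2f} m/s, a = {:.2f} m/s2'.format(v, a))
```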
## Problems
1. A gymnast performs giant circles around the horizontal bar (with the official dimensions for Artistic Gymnastics) at a constant rate of one circle every 2 s; consider that his center of mass is 1 m away from the bar. At the lowest point (exactly beneath the high bar), the gymnast releases the bar, moves forward, and lands standing on the ground.
    a) Calculate the angular and linear velocity of the gymnast's center of mass at the point of release.
b) Calculate the horizontal distance travelled by the gymnast's center of mass.
2. With the data from Table A1 of Winter (2009) and the convention for the sagittal joint angles of the lower limb:
a. Calculate and plot the angles of the foot, leg, and thigh segments.
b. Calculate and plot the angles of the ankle, knee, and hip joint.
c. Calculate and plot the velocities and accelerations for the angles calculated in B.
d. Compare the ankle angle using the two different conventions described by Winter (2009), that is, defining the foot segment with the MT5 or the TOE marker.
    e. Knowing that a stride period corresponds to the data between frames 1 and 70 (two subsequent toe-offs by the right foot), can you suggest a possible candidate for automatic determination of a stride? Hint: look at the vertical displacement and acceleration of the heel marker.
[Click here for the data from Table A.1 (Winter, 2009)](./../data/WinterTableA1.txt) from [Winter's book student site](http://bcs.wiley.com/he-bcs/Books?action=index&bcsId=5453&itemId=0470398183).
Example: load data and plot the markers' positions:
```
filename = './../data/WinterTableA1.txt'
data = np.loadtxt(filename, skiprows=2, unpack=False)
markers = ['RIB CAGE', 'HIP', 'KNEE', 'FIBULA', 'ANKLE', 'HEEL', 'MT5', 'TOE']
fig = plt.figure(figsize=(10, 6))
ax = plt.subplot2grid((2,2),(0, 0))
ax.plot(data[: ,1], data[:, 2::2])
ax.set_xlabel('Time [s]', fontsize=14)
ax.set_ylabel('Horizontal [cm]', fontsize=14)
ax = plt.subplot2grid((2, 2),(0, 1))
ax.plot(data[: ,1], data[:, 3::2])
ax.set_xlabel('Time [s]', fontsize=14)
ax.set_ylabel('Vertical [cm]', fontsize=14)
ax = plt.subplot2grid((2, 2), (1, 0), colspan=2)
ax.plot(data[:, 2::2], data[:, 3::2])
ax.set_xlabel('Horizontal [cm]', fontsize=14)
ax.set_ylabel('Vertical [cm]', fontsize=14)
plt.suptitle('Table A.1 (Winter, 2009): female, 22 yrs, 55.7 kg, 156 cm, ' \
'fast cadence (115 steps/min)', y=1.02, fontsize=14)
ax.legend(markers, loc="upper right", bbox_to_anchor=(1.21, 2), title='Markers')
plt.tight_layout()
```
## References
- Winter DA (2009) [Biomechanics and motor control of human movement](http://books.google.com.br/books?id=_bFHL08IWfwC). 4th edition. Hoboken, EUA: Wiley.
# Latent Semantic Analysis (LSA)
**LSA** is an unsupervised learning method used mainly for topic analysis of documents. Its characteristic is that it discovers topic-based semantic relationships between documents and words through matrix factorization. It is also known as latent semantic indexing (LSI).
LSA uses a non-probabilistic topic model. The document collection is represented as a **word-document matrix**, and a **singular value decomposition** of this matrix yields the topic vector space as well as the representation of the documents in that topic vector space.
**Non-negative matrix factorization** (NMF) is another matrix factorization method, whose characteristic is that the factor matrices are non-negative.
## Word vector space
word vector space model
Given a document, its "semantics" is represented by a vector in which **each dimension corresponds to a word**, and the value is the frequency or weight of that word in the document. The basic assumption is that the occurrences of all the words in a document represent the semantic content of the document. Each document of a collection is represented as a vector in a vector space, and a measure on that space, such as the **inner product** or the normalized inner product, expresses the **similarity** between documents.
Given a collection of $n$ documents $D=({d_{1}, d_{2},...,d_{n}})$ and the set of $m$ words appearing in all the documents $W=({w_{1},w_{2},...,w_{m}})$, the occurrences of words in documents can be represented by a word-document matrix, denoted $X$:
$$
X = \begin{bmatrix}
x_{11} & x_{12} & \cdots & x_{1n} \\
x_{21} & x_{22} & \cdots & x_{2n} \\
\vdots & \vdots &        & \vdots \\
x_{m1} & x_{m2} & \cdots & x_{mn}
\end{bmatrix}
$$
This is an $m \times n$ matrix whose element $x_{ij}$ is the frequency or weight of word $w_{i}$ in document $d_{j}$. Since the vocabulary is large while each document contains only a small subset of the words, the word-document matrix is a sparse matrix.
The weight is usually the **term frequency-inverse document frequency** (TF-IDF):
$\text{TF-IDF}(t, d) = TF(t, d) * IDF(t)$,
where $TF(t,d)$ is the frequency of word $t$ in document $d$, and $IDF(t)$ is the inverse document frequency, which measures how important word $t$ is for expressing semantics:
$IDF(t) = \log\left(\frac{len(D)}{len(t \in D) + 1}\right)$.
The advantage of the word vector space model is that it is **simple and computationally efficient**. Because word vectors are usually sparse, the model also has limitations: the inner-product similarity may not accurately express the semantic similarity between two documents, since words in natural language exhibit polysemy and synonymy.
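As a minimal sketch of these definitions (the tiny word-document count matrix below is made up for illustration; the computation follows the TF and IDF formulas given above):
```python
import numpy as np

# toy word-document count matrix: rows = words, columns = documents (illustrative values)
counts = np.array([[2, 0, 0, 0],
                   [0, 3, 1, 0],
                   [1, 1, 0, 1]], dtype=float)

tf = counts / counts.sum(axis=0, keepdims=True)   # TF(t, d): frequency of word t in document d
n_docs = counts.shape[1]
df = (counts > 0).sum(axis=1)                     # number of documents containing each word
idf = np.log(n_docs / (df + 1))                   # IDF(t) as defined above
tfidf = tf * idf[:, None]                         # TF-IDF weight matrix

print(np.round(tfidf, 3))
```
With the IDF definition used here, a word that appears in nearly all documents gets a weight close to (or below) zero, which is exactly the down-weighting effect TF-IDF is meant to provide.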
## Topic vector space
**1. Topic vector space**:
Given a collection of $n$ documents $D=({d_{1}, d_{2},...,d_{n}})$ and the set of $m$ words appearing in all the documents $W=({w_{1},w_{2},...,w_{m}})$, we can obtain the word-document matrix $X$:
$$
X = \begin{bmatrix}
x_{11} & x_{12} & \cdots & x_{1n} \\
x_{21} & x_{22} & \cdots & x_{2n} \\
\vdots & \vdots &        & \vdots \\
x_{m1} & x_{m2} & \cdots & x_{mn}
\end{bmatrix}
$$
Assume that the documents collectively contain $k$ topics, and that each topic is represented by an $m$-dimensional vector defined over the word set $W$, called a topic vector:
$$t_{l} = \begin{bmatrix}
t_{1l}\\
t_{2l}\\
\vdots \\
t_{ml}\end{bmatrix}, \quad l=1,2,...,k$$
where $t_{il}$ is the weight of word $w_{i}$ in topic $t_{l}$, $i=1,2,...,m$; the larger the weight, the more important the word is in that topic. The $k$ topic vectors $t_{1},t_{2},...,t_{k}$ span a topic vector space (topic vector space) of dimension $k$. **The topic vector space is a subspace of the word vector space.**
The topic vector space $T$:
$$
T = \begin{bmatrix}
t_{11} & t_{12} & \cdots & t_{1k} \\
t_{21} & t_{22} & \cdots & t_{2k} \\
\vdots & \vdots &        & \vdots \\
t_{m1} & t_{m2} & \cdots & t_{mk}
\end{bmatrix}
$$
The matrix $T$ is called the **word-topic matrix**, $T = [t_{1}, t_{2}, ..., t_{k}]$.
**2. Representation of documents in the topic vector space**:
Consider a document $d_{j}$ of the collection $D$, represented in the word vector space by a vector $x_{j}$. Projecting $x_{j}$ onto the topic vector space $T$ gives a vector $y_{j}$ of the topic vector space; $y_{j}$ is a $k$-dimensional vector:
$$y_{j} = \begin{bmatrix}
y_{1j}\\
y_{2j}\\
\vdots \\
y_{kj}\end{bmatrix}, \quad j=1,2,...,n$$
where $y_{lj}$ is the weight of topic $t_{l}$ in document $d_{j}$, $l = 1,2,..., k$; the larger the weight, the more important the topic is in that document.
The matrix $Y$ describes the occurrence of topics in documents and is called the topic-document matrix (topic-document matrix), denoted:
$$
Y = \begin{bmatrix}
y_{11} & y_{12} & \cdots & y_{1n} \\
y_{21} & y_{22} & \cdots & y_{2n} \\
\vdots & \vdots &        & \vdots \\
y_{k1} & y_{k2} & \cdots & y_{kn}
\end{bmatrix}
$$
It can also be written as $Y = [y_{1}, y_{2}, ..., y_{n}]$.
**3. Linear transformation from the word vector space to the topic vector space**:
In this way, a document vector $x_{j}$ of the word vector space can be approximately represented by its vector $y_{j}$ in the topic space, specifically as a linear combination of the $k$ topic vectors with the components of $y_{j}$ as coefficients:
$$x_{j} \approx y_{1j}t_{1} + y_{2j}t_{2} + ... + y_{kj}t_{k}, \quad j = 1,2,..., n$$
Therefore, the word-document matrix $X$ can be approximately expressed as the product of the word-topic matrix $T$ and the topic-document matrix $Y$:
$$X \approx TY$$
Intuitively, latent semantic analysis converts the representation of documents in the word vector space into their representation in the topic vector space by means of a linear transformation; this linear transformation is realized as a matrix factorization.
### Latent semantic analysis algorithm
Latent semantic analysis uses the truncated singular value decomposition of a matrix. Specifically, the word-document matrix is decomposed by SVD; the left matrix is taken as the topic vector space, and the product of the diagonal matrix and the right matrix is taken as the representation of the documents in the topic vector space.
Given a collection of $n$ documents $D=({d_{1}, d_{2},...,d_{n}})$ and the set of $m$ words appearing in all the documents $W=({w_{1},w_{2},...,w_{m}})$, we can obtain the word-document matrix $X$:
$$
X = \begin{bmatrix}
x_{11} & x_{12} & \cdots & x_{1n} \\
x_{21} & x_{22} & \cdots & x_{2n} \\
\vdots & \vdots &        & \vdots \\
x_{m1} & x_{m2} & \cdots & x_{mn}
\end{bmatrix}
$$
**Truncated singular value decomposition**:
Latent semantic analysis performs a truncated singular value decomposition of the word-document matrix $X$ according to the chosen number of topics $k$:
$$
X \approx U_{k}\Sigma_{k}V_{k}^{T} = \begin{bmatrix}
u_{1} & u_{2} & \cdots & u_{k}
\end{bmatrix}\begin{bmatrix}
\sigma_{1} & 0 & 0 & 0\\
0 & \sigma_{2} & 0 & 0\\
0 & 0 & \ddots & 0\\
0 & 0 & 0 & \sigma_{k}
\end{bmatrix}\begin{bmatrix}
v_{1}^{T}\\
v_{2}^{T}\\
\vdots \\
v_{k}^{T}
\end{bmatrix}
$$
Each column vector $u_{1}, u_{2},..., u_{k}$ of the matrix $U_{k}$ represents a topic and is called a **topic vector**. These $k$ topic vectors span a subspace
$$
U_{k} = \begin{bmatrix}
u_{1} & u_{2} & \cdots & u_{k}
\end{bmatrix}
$$
called the **topic vector space**.
In summary, latent semantic analysis can be carried out through the singular value decomposition of the word-document matrix:
$$ X \approx U_{k} \Sigma_{k} V_{k}^{T} = U_{k}(\Sigma_{k}V_{k}^{T})$$
which gives the topic space $U_{k}$ and the representation of the documents in the topic space, $\Sigma_{k}V_{k}^{T}$.
### Non-negative matrix factorization algorithm
Non-negative matrix factorization can also be used for topic analysis. A non-negative matrix factorization is applied to the word-document matrix, taking **the left factor matrix as the topic vector space** and **the right factor matrix as the representation of the documents in the topic vector space**.
#### Non-negative matrix factorization
If all elements of a matrix are non-negative, the matrix is called non-negative. If $X$ is a non-negative matrix, we write $X \geq 0$.
Given a non-negative matrix $X$, the goal is to find two non-negative matrices $W \geq 0$ and $H \geq 0$ such that
$$ X \approx WH$$
that is, the non-negative matrix $X$ is factored into the product of two non-negative matrices $W$ and $H$; this is non-negative matrix factorization. Since exact equality $WH = X$ is hard to achieve, only approximate equality is required.
Assume the non-negative matrix $X$ is an $m\times n$ matrix, and the non-negative matrices $W$ and $H$ are $m\times k$ and $k\times n$ matrices, respectively. Assume $k < \min(m, n)$, i.e., $W$ and $H$ are smaller than the original matrix $X$, so non-negative matrix factorization is a compression of the original data.
$W$ is called the basis matrix and $H$ the coefficient matrix. Non-negative matrix factorization aims to represent a larger data matrix by a smaller number of basis vectors and coefficient vectors.
Let $W = \begin{bmatrix}
w_{1} & w_{2}& \cdots& w_{k}
\end{bmatrix}$
be the topic vector space, where $w_{1}, w_{2}, ..., w_{k}$ represent the $k$ topics of the document collection, and let $H = \begin{bmatrix}
h_{1} & h_{2}& \cdots& h_{n}
\end{bmatrix}$
be the representation of the documents in the topic vector space, where $h_{1}, h_{2},..., h_{n}$ represent the $n$ documents of the collection.
##### Algorithm
Non-negative matrix factorization can be formalized and solved as an optimization problem. The squared loss or the divergence can be used as the loss function.
The objective function $|| X - WH ||^{2}$ is minimized with respect to $W$ and $H$, subject to the constraint $W, H \geq 0$, i.e.:
$\underset{W,H}{\min} || X - WH ||^{2}$
$s.t. \;\; W, H \geq 0$
Multiplicative update rules:
$W_{il} \leftarrow W_{il}\frac{(XH^{T})_{il}}{(WHH^{T})_{il}}$ (17.33)
$H_{lj} \leftarrow H_{lj}\frac{(W^{T}X)_{lj}}{(W^{T}WH)_{lj}}$ (17.34)
Choosing the initial matrices $W$ and $H$ to be non-negative guarantees that the matrices obtained during and after the iterations remain non-negative.
**Algorithm 17.1 (iterative algorithm for non-negative matrix factorization)**
Input: word-document matrix $X \geq 0$, number of topics $k$ of the document collection, maximum number of iterations $t$;
Output: topic matrix $W$, document representation matrix $H$.
**1)**. Initialization:
$W \geq 0$, and normalize each column of $W$;
$H \geq 0$;
**2)**. Iteration:
For the iteration count from 1 to $t$, execute the following steps:
a. Update the elements of $W$: for $l$ from 1 to $k$ and $i$ from 1 to $m$, update $W_{il}$ by (17.33);
b. Update the elements of $H$: for $l$ from 1 to $k$ and $j$ from 1 to $n$, update $H_{lj}$ by (17.34).
### Example 17.1
```
import numpy as np
from sklearn.decomposition import TruncatedSVD
X = [[2, 0, 0, 0], [0, 2, 0, 0], [0, 0, 1, 0], [0, 0, 2, 3], [0, 0, 0, 1], [1, 2, 2, 1]]
X = np.asarray(X);X
# singular value decomposition
U,sigma,VT=np.linalg.svd(X)
U
sigma
VT
# truncated singular value decomposition
svd = TruncatedSVD(n_components=3, n_iter=7, random_state=42)
svd.fit(X)
print(svd.explained_variance_ratio_)
print(svd.explained_variance_ratio_.sum())
print(svd.singular_values_)
```
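To relate the `TruncatedSVD` output above to the notation $X \approx U_{k}\Sigma_{k}V_{k}^{T}$, here is a minimal sketch (assuming the cell above has been run, so `X` and `svd` exist, and that the singular values are nonzero):
```python
# svd.transform(X) returns approximately U_k * Sigma_k   (shape: n_words x k)
# svd.components_  is V_k^T                               (shape: k x n_documents)
U_Sigma = svd.transform(X)
U_k = U_Sigma / svd.singular_values_                       # topic vector space (columns are topic vectors)
doc_topic = np.diag(svd.singular_values_) @ svd.components_  # Sigma_k V_k^T: documents in the topic space

print(U_k.shape, doc_topic.shape)
print(np.round(U_k @ doc_topic, 2))                        # should approximately reconstruct X
```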
#### Non-negative matrix factorization
```
def inverse_transform(W, H):
    # reconstruct X from W and H
return W.dot(H)
def loss(X, X_):
    # compute the reconstruction error
return ((X - X_) * (X - X_)).sum()
# Algorithm 17.1
class MyNMF:
def fit(self, X, k, t):
m, n = X.shape
W = np.random.rand(m, k)
W = W/W.sum(axis=0)
H = np.random.rand(k, n)
i = 1
while i < t:
W = W * X.dot(H.T) / W.dot(H).dot(H.T)
H = H * (W.T).dot(X) / (W.T).dot(W).dot(H)
i += 1
return W, H
model = MyNMF()
W, H = model.fit(X, 3, 200)
W
H
# reconstruction of X
X_ = inverse_transform(W, H);X_
# reconstruction error
loss(X, X_)
```
### Computing with sklearn
```
from sklearn.decomposition import NMF
model = NMF(n_components=3, init='random', max_iter=200, random_state=0)
W = model.fit_transform(X)
H = model.components_
W
H
X__ = inverse_transform(W, H);X__
loss(X, X__)
```
# Build LSOA data for England and Wales
### Other potential data sources
* postcode to various hierarchies
* https://geoportal.statistics.gov.uk/datasets/postcode-to-output-area-hierarchy-with-classifications-august-2020-lookup-in-the-uk
* oa to lsoa / msoa / region etc
* https://geoportal.statistics.gov.uk/datasets/output-area-to-lower-layer-super-output-area-to-middle-layer-super-output-area-to-local-authority-district-december-2020-lookup-in-england-and-wales/data
## Relationships within the tables
* postcodes have a many-to-one relationship with each of the other fields
* wz (work zones) and oa (output areas) have a many-to-many relationship
* oa and lsoa (lower layer super output areas) have a many-to-one relationship
* lsoa and msoa (middle layer super output areas) have a many-to-one relationship
* msoa and lad (local authority districts) have a many-to-one relationship
* lad and rgn (regions) have a many-to-one relationship

A quick way to verify these relationships is sketched after the diagram below; the full check is in the ancillary code at the end of this notebook.
```
pc --> oa11cd --> ladcd
| |\> lsoa11cd --> msoa11cd
| | |\> soac11cd
| | |\> laccd
| | \> rgn20cd
| \> oac11cd
|\> wz11cd
|\> wzc11cd
\> lsoa11cd (duplicated for convenience)
```
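As a minimal sketch (it assumes `pc_df` has been loaded as in Step 2 below; the helper name `is_many_to_one` is ours, not part of the original pipeline), a candidate mapping `a -> b` is many-to-one when every distinct value of `a` maps to exactly one value of `b`:
```python
def is_many_to_one(df, a, b):
    # a -> b is many-to-one if every distinct value of column a maps to exactly one value of column b
    counts = df[[a, b]].dropna().drop_duplicates().groupby(a)[b].nunique()
    return bool((counts == 1).all())

# expected: True for oa11cd -> lsoa11cd, False for oa11cd -> wz11cd (many-to-many)
# print(is_many_to_one(pc_df, 'oa11cd', 'lsoa11cd'), is_many_to_one(pc_df, 'oa11cd', 'wz11cd'))
```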
## Step 0: Fetch the source datasets from their respective URLs
```
! wget -O imd.xlsx https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/833973/File_2_-_IoD2019_Domains_of_Deprivation.xlsx
! wget -O rural_urban.csv http://geoportal1-ons.opendata.arcgis.com/datasets/276d973d30134c339eaecfc3c49770b3_0.csv
! wget -O townsend.csv http://s3-eu-west-1.amazonaws.com/statistics.digitalresources.jisc.ac.uk/dkan/files/Townsend_Deprivation_Scores/Scores/Scores-%202011%20UK%20LSOA.csv
! wget -O oa_to_rgn.csv https://opendata.arcgis.com/datasets/65664b00231444edb3f6f83c9d40591f_0.csv
! wget -O pc_to_lsoa.zip https://www.arcgis.com/sharing/rest/content/items/83300a9b0e63465fabee3fddd8fbd30e/data
! unzip -o pc_to_lsoa.zip
! wget -O lsoa_to_utla.csv https://opendata.arcgis.com/datasets/9f4c270148014f20bf24abff9a7aef62_0.csv
! wget -O density.zip https://www.ons.gov.uk/file?uri=%2fpeoplepopulationandcommunity%2fpopulationandmigration%2fpopulationestimates%2fdatasets%2flowersuperoutputareapopulationdensity%2fmid2019sape22dt11/sape22dt11mid2019lsoapopulationdensity.zip
! unzip -o density.zip
```
## Step 1: specify the output filenames
```
postcode_csv_name = 'postcode_lookups.csv'
oa11cd_csv_name = 'oa11cd_lookups.csv'
lsoa11cd_csv_name = 'lsoa11cd_lookups.csv'
output_dataset_name = 'uk_geocodes.hdf5'
```
## Step 2: load the various source tables
```
import numpy as np
import pandas as pd
pc_df = pd.read_csv('NSPCL_MAY20_UK_LU.csv')
oa_to_rgn_df = pd.read_csv('oa_to_rgn.csv')
oa_to_rgn_df = oa_to_rgn_df.rename(columns={k:k.lower() for k in oa_to_rgn_df.columns})
imd_df = pd.read_excel('imd.xlsx', sheet_name=1)
imd_df = imd_df.rename(columns={"LSOA code (2011)": "lsoa11cd",
"LSOA name (2011)": "lsoa11nm",
"Local Authority District code (2019)": "lad19cd",
"Local Authority District name (2019)": "lad19nm",
"Index of Multiple Deprivation (IMD) Rank (where 1 is most deprived)": "imd_rank",
"Index of Multiple Deprivation (IMD) Decile (where 1 is most deprived 10% of LSOAs)": "imd_decile",
"Income Rank (where 1 is most deprived)": "imd_income_rank",
"Income Decile (where 1 is most deprived 10% of LSOAs)": "imd_income_decile",
"Employment Rank (where 1 is most deprived)": "imd_employment_rank",
"Employment Decile (where 1 is most deprived 10% of LSOAs)": "imd_employment_decile",
"Education, Skills and Training Rank (where 1 is most deprived)": "imd_education_rank",
"Education, Skills and Training Decile (where 1 is most deprived 10% of LSOAs)": "imd_education_decile",
"Health Deprivation and Disability Rank (where 1 is most deprived)": "imd_health_rank",
"Health Deprivation and Disability Decile (where 1 is most deprived 10% of LSOAs)": "imd_health_decile",
"Crime Rank (where 1 is most deprived)": "imd_crime_rank",
"Crime Decile (where 1 is most deprived 10% of LSOAs)": "imd_crime_decile",
"Barriers to Housing and Services Rank (where 1 is most deprived)": "imd_housing_rank",
"Barriers to Housing and Services Decile (where 1 is most deprived 10% of LSOAs)": "imd_housing_decile",
"Living Environment Rank (where 1 is most deprived)": "imd_living_rank",
"Living Environment Decile (where 1 is most deprived 10% of LSOAs)": "imd_living_decile"} )
tsend_df = pd.read_csv('./townsend.csv')
tsend_df = tsend_df.rename(columns={"GEO_CODE": "lsoa11cd", "TDS": "townsend_score", "quintile": "townsend_quintile"})
tsend_df = tsend_df.drop(columns=[c for c in tsend_df.columns if c not in ("lsoa11cd", "townsend_score", "townsend_quintile")])
ruc_df = pd.read_csv('./rural_urban.csv')
ruc_df = ruc_df.rename(columns={"LSOA11CD": "lsoa11cd", "RUC11CD": "ruc11cd", "RUC11": "ruc11nm"})
ruc_df = ruc_df.drop(columns=[c for c in ruc_df.columns if c not in ("lsoa11cd", "ruc11cd", "ruc11nm")])
lsoa_to_utla_df = pd.read_csv('lsoa_to_utla.csv')
lsoa_to_utla_df = lsoa_to_utla_df.rename(columns={k:k.lower() for k in lsoa_to_utla_df.columns})
density_df = pd.read_excel('SAPE22DT11-mid-2019-lsoa-population-density.xlsx', sheet_name=3, skiprows=4)
density_df = density_df.rename(columns={"LSOA Code": "lsoa11cd", "LSOA Name": "lsoa11nm",
"Mid-2019 population": "population", "Area Sq Km": "area_sq_km", "People per Sq Km": "density"})
print(density_df.columns)
print("done!")
```
## Step 3: build tables at the postcode, oa11cd and lsoa11cd levels
```
# generate sub-tables from pc_df
# ------------------------------
# pc --> oa11cd --> ladcd
# | |\> lsoa11cd --> msoa11cd (--> rgn20cd)
# | | |\> soac11cd
# | | |\> laccd
# | | \> rgn20cd
# | \> oac11cd
# |\> wz11cd
# |\> wzc11cd
# \> lsoa11cd (duplicated for convenience)
# create a postcode to oa11cd / lsoa11cd lookup table from pc to oacd table
print("pc_df")
print(pc_df.columns)
pc_lookup_df = pc_df[['pcd7', 'pcd8', 'pcds', 'oa11cd', 'lsoa11cd', 'lsoa11nm', 'wz11cd', 'wzc11cd', 'wzc11nm']]
print(" pc_lookup_df")
pc_lookup_df = pc_lookup_df[~pc_lookup_df['pcd7'].isna()]
pc_lookup_df = pc_lookup_df[~pc_lookup_df['pcd8'].isna()]
pc_lookup_df = pc_lookup_df[~pc_lookup_df['pcds'].isna()]
# create an oa11cd to ladcd / lsoa11cd lookup table from pc to oacd table
print(" pc_oa11cd_lookup_df")
pc_oa11cd_lookup_df = pc_df[['oa11cd', 'lsoa11cd', 'ladcd', 'ladnm', 'ladnmw', 'oac11cd', 'oac11nm']]
pc_oa11cd_lookup_df = pc_oa11cd_lookup_df[~pc_oa11cd_lookup_df['oa11cd'].isna()]
pc_oa11cd_lookup_df = pc_oa11cd_lookup_df.drop_duplicates()
# create an lsoa11cd to msoa11cd / soac11cd lookup table from pc to oacd table
print(" pc_lsoa11cd_lookup_df")
pc_lsoa11cd_lookup_df = pc_df[['lsoa11cd', 'lsoa11nm', 'msoa11cd', 'msoa11nm', 'soac11cd', 'soac11nm', 'laccd', 'lacnm']]
pc_lsoa11cd_lookup_df = pc_lsoa11cd_lookup_df[~pc_lsoa11cd_lookup_df['lsoa11cd'].isna()]
pc_lsoa11cd_lookup_df = pc_lsoa11cd_lookup_df.drop_duplicates()
# generate sub-tables from oa_to_rgn_df
# -------------------------------------
# create an oa11cd to lsoa11cd / lad20cd lookup table from the oa to rgn table
print("oa_to_rgn_df")
print(oa_to_rgn_df.columns)
print(" oa_oa11cd_lookup_df")
oa_oa11cd_lookup_df = oa_to_rgn_df[['oa11cd', 'lsoa11cd', 'lad20cd', 'lad20nm']]
oa_oa11cd_lookup_df = oa_oa11cd_lookup_df[~oa_oa11cd_lookup_df['oa11cd'].isna()]
oa_oa11cd_lookup_df = oa_oa11cd_lookup_df.drop_duplicates()
# create an lsoa11cd to msoa11cd / rgn20cd lookup table from the oa to rgn table
print(" oa_lsoa11cd_lookup_df")
oa_lsoa11cd_lookup_df = oa_to_rgn_df[['lsoa11cd', 'lsoa11nm', 'msoa11cd', 'msoa11nm', 'rgn20cd', 'rgn20nm']]
oa_lsoa11cd_lookup_df = oa_lsoa11cd_lookup_df[~oa_lsoa11cd_lookup_df['lsoa11cd'].isna()]
oa_lsoa11cd_lookup_df = oa_lsoa11cd_lookup_df.drop_duplicates()
# generate sub-tables for imd_df
# ------------------------------
print("imd_df")
print(imd_df.columns)
print(" imd_lsoa11cd_lookup_df")
imd_lsoa11cd_lookup_df = imd_df[['lsoa11cd', 'lsoa11nm', 'lad19cd', 'lad19nm',
'imd_rank', 'imd_decile',
'imd_income_rank', 'imd_income_decile', 'imd_employment_rank',
'imd_employment_decile', 'imd_education_rank', 'imd_education_decile',
'imd_health_rank', 'imd_health_decile', 'imd_crime_rank',
'imd_crime_decile', 'imd_housing_rank', 'imd_housing_decile',
'imd_living_rank', 'imd_living_decile']]
imd_lsoa11cd_lookup_df = imd_lsoa11cd_lookup_df[~imd_lsoa11cd_lookup_df['lsoa11cd'].isna()]
imd_lsoa11cd_lookup_df = imd_lsoa11cd_lookup_df.drop_duplicates()
# generate sub-tables for tsend_df
# --------------------------------
# ['lsoa11cd', 'townsend_score', 'townsend_quintile']
print("townsend_df")
print(tsend_df.columns)
print(" townsend_lsoa11cd_lookup_df")
tsend_lsoa11cd_lookup_df = tsend_df[['lsoa11cd', 'townsend_score', 'townsend_quintile']]
tsend_lsoa11cd_lookup_df = tsend_lsoa11cd_lookup_df[~tsend_lsoa11cd_lookup_df['lsoa11cd'].isna()]
tsend_lsoa11cd_lookup_df = tsend_lsoa11cd_lookup_df.drop_duplicates()
# generate sub-tables for ruc_df
# ------------------------------
# ['lsoa11cd', 'ruc11cd', 'ruc11nm']
print("ruc_df")
print(ruc_df.columns)
print(" ruc_lsoa11cd_lookup_df")
ruc_lsoa11cd_lookup_df = ruc_df[['lsoa11cd', 'ruc11cd', 'ruc11nm']]
ruc_lsoa11cd_lookup_df = ruc_lsoa11cd_lookup_df[~ruc_lsoa11cd_lookup_df['lsoa11cd'].isna()]
ruc_lsoa11cd_lookup_df = ruc_lsoa11cd_lookup_df.drop_duplicates()
# generate sub-tables for lsoa_to_utla_df
# ---------------------------------------
print("lsoa_to_utla_df")
print(lsoa_to_utla_df.columns)
lsoa_utla_lsoa11cd_lookup_df = lsoa_to_utla_df[['lsoa11cd', 'lsoa11nm', 'utla17cd', 'utla17nm']]
lsoa_utla_lsoa11cd_lookup_df = lsoa_utla_lsoa11cd_lookup_df[~lsoa_utla_lsoa11cd_lookup_df['lsoa11cd'].isna()]
lsoa_utla_lsoa11cd_lookup_df = lsoa_utla_lsoa11cd_lookup_df.drop_duplicates()
# generate sub-tables for density_df
# ----------------------------------
print("density_df")
print(density_df.columns)
density_lsoa11cd_lookup_df = density_df[['lsoa11cd', 'lsoa11nm', 'population', 'area_sq_km', 'density']]
density_lsoa11cd_lookup_df = density_lsoa11cd_lookup_df[~density_lsoa11cd_lookup_df['lsoa11cd'].isna()]
density_lsoa11cd_lookup_df = density_lsoa11cd_lookup_df.drop_duplicates()
```
## Step 4: Merge oa tables together and lsoa tables together, checking common fields for consistency
```
def check_consistency(msg, one, two):
mismatches = 0
one = dict(sorted(one.values.tolist()))
two = dict(sorted(two.values.tolist()))
for k,v in one.items():
if k in two and two[k] != v:
mismatches += 1
print("{}: {} mismatches".format(msg, mismatches))
def consolidate_fields(df, result, first, second):
tbl = df[[first, second]]
tbl = tbl.dropna(how='any')
tbl = tbl[tbl[first] != tbl[second]]
if len(tbl) > 0:
print("Unexpected: non-null entries mismatched")
print(tbl)
df[result] = df[first].where(df[first].notna(), df[second])
df = df.drop(columns=[first, second])
return df
# double check that oa11cd -> lsoa11cd mappings are in mutual agreement
check_consistency("checking oa11cd -> lsoa11cd consistency",
pc_oa11cd_lookup_df[['oa11cd', 'lsoa11cd']],
oa_oa11cd_lookup_df[['oa11cd', 'lsoa11cd']])
check_consistency("checking lsoa11cd consistency (pc, oa)",
pc_lsoa11cd_lookup_df[['lsoa11cd', 'lsoa11nm']],
oa_lsoa11cd_lookup_df[['lsoa11cd', 'lsoa11nm']])
check_consistency("checking lsoa11cd consistency (pc, imd)",
pc_lsoa11cd_lookup_df[['lsoa11cd', 'lsoa11nm']],
imd_lsoa11cd_lookup_df[['lsoa11cd', 'lsoa11nm']])
check_consistency("checking lsoa11cd consistency (pc, utla)",
pc_lsoa11cd_lookup_df[['lsoa11cd', 'lsoa11nm']],
lsoa_utla_lsoa11cd_lookup_df[['lsoa11cd', 'lsoa11nm']])
print('\noa11cd_lookup_df')
oa11cd_lookup_df = pd.merge(left_on='oa11cd', right_on='oa11cd',
left=pc_oa11cd_lookup_df, right=oa_oa11cd_lookup_df,
how='outer')
oa11cd_lookup_df = consolidate_fields(oa11cd_lookup_df, 'lsoa11cd', 'lsoa11cd_x', 'lsoa11cd_y')
print(oa11cd_lookup_df.columns)
print('\nlsoa11cd_lookup_df')
lsoa11cd_lookup_df = pd.merge(left_on='lsoa11cd', right_on='lsoa11cd',
left=pc_lsoa11cd_lookup_df, right=oa_lsoa11cd_lookup_df,
how='outer')
lsoa11cd_lookup_df = consolidate_fields(lsoa11cd_lookup_df, 'lsoa11nm', 'lsoa11nm_x', 'lsoa11nm_y')
lsoa11cd_lookup_df = consolidate_fields(lsoa11cd_lookup_df, 'msoa11cd', 'msoa11cd_x', 'msoa11cd_y')
lsoa11cd_lookup_df = consolidate_fields(lsoa11cd_lookup_df, 'msoa11nm', 'msoa11nm_x', 'msoa11nm_y')
print(lsoa11cd_lookup_df.columns)
print('\n merge lsoa11cd_lookup with imd')
lsoa11cd_lookup_df = pd.merge(left_on='lsoa11cd', right_on='lsoa11cd',
left=lsoa11cd_lookup_df, right=imd_lsoa11cd_lookup_df,
how='outer')
lsoa11cd_lookup_df = consolidate_fields(lsoa11cd_lookup_df, 'lsoa11nm', 'lsoa11nm_x', 'lsoa11nm_y')
print(lsoa11cd_lookup_df.columns)
print('\nmerge lsoa11cd_lookup with townsend')
lsoa11cd_lookup_df = pd.merge(left_on='lsoa11cd', right_on='lsoa11cd',
left=lsoa11cd_lookup_df, right=tsend_lsoa11cd_lookup_df,
how='outer')
print(lsoa11cd_lookup_df.columns)
print('\nmerge lsoa11cd_lookup_with_ruc')
lsoa11cd_lookup_df = pd.merge(left_on='lsoa11cd', right_on='lsoa11cd',
left=lsoa11cd_lookup_df, right=ruc_lsoa11cd_lookup_df,
how='outer')
print(lsoa11cd_lookup_df.columns)
print('\nmerge lsoa11cd_lookup with lsoa_to_utla')
lsoa11cd_lookup_df = pd.merge(left_on='lsoa11cd', right_on='lsoa11cd',
left=lsoa11cd_lookup_df, right=lsoa_utla_lsoa11cd_lookup_df,
how='outer')
lsoa11cd_lookup_df = consolidate_fields(lsoa11cd_lookup_df, 'lsoa11nm', 'lsoa11nm_x', 'lsoa11nm_y')
print(lsoa11cd_lookup_df.columns)
print('\nmerge lsoa11cd_lookup with density')
lsoa11cd_lookup_df = pd.merge(left_on='lsoa11cd', right_on='lsoa11cd',
left=lsoa11cd_lookup_df, right=density_lsoa11cd_lookup_df.drop(columns=['lsoa11nm']),
how='outer')
print(lsoa11cd_lookup_df.columns)
```
## Step 6a: Export csv files for the consolidated postcode, oa11cd and lsoa11cd tables
```
pc_lookup_df.to_csv(postcode_csv_name)
oa11cd_lookup_df.to_csv(oa11cd_csv_name)
lsoa11cd_lookup_df.to_csv(lsoa11cd_csv_name)
print("done!")
```
## Step 6b: Export the consolidated postcode, oa11cd and lsoa11cd tables as an ExeTera datastore
```
from exetera.core.session import Session
from exetera.core.persistence import try_str_to_int, try_str_to_float
def fixed_string_fn(s, g, n, df):
print('fixed_string:', n)
values = df[n].replace(np.nan, '').str.encode('utf-8')
s.create_fixed_string(g, n, values.map(len).max()).data.write(values.to_list())
def indexed_string_fn(s, g, n, df):
print('indexed_string:', n)
values = df[n].replace(np.nan, '').astype('str')
s.create_indexed_string(g, n).data.write(values.to_list())
def rank_and_decile_fn(s, g, nr, nd, nv, df):
print("rank_and_decile:", nr, nd)
valid = list()
rank = list()
decile = list()
fvalid = s.create_numeric(g, nv, 'bool')
    frank = s.create_numeric(g, nr, 'int32')   # ranks go up to ~32k LSOAs, so int32
    fdecile = s.create_numeric(g, nd, 'int8')  # deciles are 1-10, so int8 is enough
for i in range(len(df[nr])):
f, vr = try_str_to_int(df[nr][i])
_, vd = try_str_to_int(df[nd][i])
valid.append(f)
rank.append(vr)
decile.append(vd)
fvalid.data.write(valid)
frank.data.write(rank)
fdecile.data.write(decile)
def townsend_fn(s, g, ns, nq, nv, df):
print("townsend:", ns, nq)
valid = list()
score = list()
quintile = list()
fvalid = s.create_numeric(g, nv, 'bool')
fscore = s.create_numeric(g, ns, 'float32')
fquintile = s.create_numeric(g, nq, 'int8')
for i in range(len(df[ns])):
f, vs = try_str_to_float(df[ns][i])
_, vq = try_str_to_int(df[nq][i])
valid.append(f)
score.append(vs)
quintile.append(vq)
fvalid.data.write(valid)
fscore.data.write(score)
fquintile.data.write(quintile)
def numeric_fn(s, g, n, nv, df, map_fn, dtype):
print('numeric:', n)
valid = list()
data = list()
fvalid = s.create_numeric(g, nv, 'bool')
fdata = s.create_numeric(g, n, dtype)
for i in range(len(df[n])):
f, v = map_fn(df[n][i])
valid.append(f)
data.append(v)
fvalid.data.write(valid)
fdata.data.write(data)
def categorical_fn(s, g, n, df, dvals):
print('categorical:', n)
values = df[n].replace(np.nan, '').astype('str')
values = [dvals[v] for v in values]
s.create_categorical(g, n, 'int8', dvals).data.write(values)
def create_categorical_dicts(df, n_cd, n_nm):
codes = df[[n_cd, n_nm]].replace(np.nan, '').drop_duplicates().sort_values(by=n_cd).reset_index()
cddict = {}
nmdict = {}
for i, r in codes.iterrows():
cddict[r[n_cd]] = i
nmdict[r[n_nm]] = i
return cddict, nmdict
with Session() as s:
dest = s.open_dataset(output_dataset_name, 'w', 'dest')
print('postcode')
print(pc_lookup_df.columns)
d_pc = dest.create_group('postcode')
fixed_string_fn(s, d_pc, 'pcd7', pc_lookup_df)
fixed_string_fn(s, d_pc, 'pcd8', pc_lookup_df)
fixed_string_fn(s, d_pc, 'pcds', pc_lookup_df)
fixed_string_fn(s, d_pc, 'oa11cd', pc_lookup_df)
fixed_string_fn(s, d_pc, 'lsoa11cd', pc_lookup_df)
indexed_string_fn(s, d_pc, 'lsoa11nm', pc_lookup_df)
fixed_string_fn(s, d_pc, 'wz11cd', pc_lookup_df)
cddict, nmdict = create_categorical_dicts(pc_lookup_df, 'wzc11cd', 'wzc11nm')
categorical_fn(s, d_pc, 'wzc11cd', pc_lookup_df, cddict)
categorical_fn(s, d_pc, 'wzc11nm', pc_lookup_df, nmdict)
print('\nosa11cd')
print(oa11cd_lookup_df.columns)
d_oa = dest.create_group('osa11cd')
fixed_string_fn(s, d_oa, 'oa11cd', oa11cd_lookup_df)
fixed_string_fn(s, d_oa, 'lsoa11cd', oa11cd_lookup_df)
fixed_string_fn(s, d_oa, 'ladcd', oa11cd_lookup_df)
indexed_string_fn(s, d_oa, 'ladnm', oa11cd_lookup_df)
indexed_string_fn(s, d_oa, 'ladnmw', oa11cd_lookup_df)
cddict, nmdict = create_categorical_dicts(oa11cd_lookup_df, 'oac11cd', 'oac11nm')
categorical_fn(s, d_oa, 'oac11cd', oa11cd_lookup_df, cddict)
categorical_fn(s, d_oa, 'oac11nm', oa11cd_lookup_df, nmdict)
print('\nlsoa11cd')
print(lsoa11cd_lookup_df.columns)
d_lsoa = dest.create_group('lsoa11cd')
fixed_string_fn(s, d_lsoa, 'lsoa11cd', lsoa11cd_lookup_df)
indexed_string_fn(s, d_lsoa, 'lsoa11nm', lsoa11cd_lookup_df)
cddict, nmdict = create_categorical_dicts(lsoa11cd_lookup_df, 'soac11cd', 'soac11nm')
categorical_fn(s, d_lsoa, 'soac11cd', lsoa11cd_lookup_df, cddict)
categorical_fn(s, d_lsoa, 'soac11nm', lsoa11cd_lookup_df, nmdict)
fixed_string_fn(s, d_lsoa, 'msoa11cd', lsoa11cd_lookup_df)
indexed_string_fn(s, d_lsoa, 'msoa11nm', lsoa11cd_lookup_df)
fixed_string_fn(s, d_lsoa, 'lad19cd', lsoa11cd_lookup_df)
indexed_string_fn(s, d_lsoa, 'lad19nm', lsoa11cd_lookup_df)
fixed_string_fn(s, d_lsoa, 'utla17cd', lsoa11cd_lookup_df)
indexed_string_fn(s, d_lsoa, 'utla17nm', lsoa11cd_lookup_df)
fixed_string_fn(s, d_lsoa, 'rgn20cd', lsoa11cd_lookup_df)
indexed_string_fn(s, d_lsoa, 'rgn20nm', lsoa11cd_lookup_df)
rank_and_decile_fn(s, d_lsoa, 'imd_rank', 'imd_decile', 'imd_valid', lsoa11cd_lookup_df)
rank_and_decile_fn(s, d_lsoa, 'imd_education_rank', 'imd_education_decile', 'imd_education_valid', lsoa11cd_lookup_df)
rank_and_decile_fn(s, d_lsoa, 'imd_income_rank', 'imd_income_decile', 'imd_income_valid', lsoa11cd_lookup_df)
rank_and_decile_fn(s, d_lsoa, 'imd_employment_rank', 'imd_employment_decile', 'imd_employment_valid', lsoa11cd_lookup_df)
rank_and_decile_fn(s, d_lsoa, 'imd_health_rank', 'imd_health_decile', 'imd_health_valid', lsoa11cd_lookup_df)
rank_and_decile_fn(s, d_lsoa, 'imd_crime_rank', 'imd_crime_decile', 'imd_crime_valid', lsoa11cd_lookup_df)
rank_and_decile_fn(s, d_lsoa, 'imd_housing_rank', 'imd_housing_decile', 'imd_housing_valid', lsoa11cd_lookup_df)
rank_and_decile_fn(s, d_lsoa, 'imd_living_rank', 'imd_living_decile', 'imd_living_valid', lsoa11cd_lookup_df)
cddict, nmdict = create_categorical_dicts(lsoa11cd_lookup_df, 'ruc11cd', 'ruc11nm')
categorical_fn(s, d_lsoa, 'ruc11cd', lsoa11cd_lookup_df, cddict)
categorical_fn(s, d_lsoa, 'ruc11nm', lsoa11cd_lookup_df, nmdict)
townsend_fn(s, d_lsoa, 'townsend_score', 'townsend_quintile', 'townsend_valid', lsoa11cd_lookup_df)
numeric_fn(s, d_lsoa, 'population', 'population_valid', lsoa11cd_lookup_df, try_str_to_int, 'int32')
numeric_fn(s, d_lsoa, 'area_sq_km', 'area_sq_km_valid', lsoa11cd_lookup_df, try_str_to_float, 'float32')
numeric_fn(s, d_lsoa, 'density', 'density_valid', lsoa11cd_lookup_df, try_str_to_float, 'float32')
print('\ndone!')
```
### Ancillary code to check relationships
```
def list_relationships(df, key_pairs):
# oa -> lsoa
for kp in key_pairs:
print(kp)
sdf = df[kp]
sdf = sdf.drop_duplicates()
print(len(sdf))
print(len(sdf[kp[0]].unique()))
print(len(sdf[kp[1]].unique()))
print(sdf[kp[0]].value_counts().unique())
print('pc_df')
list_relationships(pc_df,
(['oa11cd', 'oac11cd'], ['oa11cd', 'lsoa11cd'], ['oa11cd', 'msoa11cd'], ['oa11cd', 'ladcd'], ['oa11cd', 'soac11cd'],
['oa11cd', 'laccd'], ['oa11cd', 'wz11cd'], ['oa11cd', 'wzc11cd'],
['lsoa11cd', 'oac11cd'], ['lsoa11cd', 'msoa11cd'], ['lsoa11cd', 'ladcd'], ['lsoa11cd', 'soac11cd'],
['lsoa11cd', 'laccd'], ['lsoa11cd', 'wz11cd'], ['lsoa11cd', 'wzc11cd'],
['msoa11cd', 'ladcd'], ['msoa11cd', 'soac11cd'], ['ladcd', 'soac11cd']))
print('oa_to_rgn')
list_relationships(oa_to_rgn_df,
(['oa11cd', 'lsoa11cd'], ['oa11cd', 'lad20cd']))
```
|
github_jupyter
|
pc --> oa11cd --> ladcd
| |\> lsoa11cd --> msoa11cd
| | |\> soac11cd
| | |\> laccd
| | \> rgn20cd
| \> oac11cd
|\> wz11cd
|\> wzc11cd
\> lsoa11cd (duplicated for convenience)
! wget -O imd.xlsx https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/833973/File_2_-_IoD2019_Domains_of_Deprivation.xlsx
! wget -O rural_urban.csv http://geoportal1-ons.opendata.arcgis.com/datasets/276d973d30134c339eaecfc3c49770b3_0.csv
! wget -O townsend.csv http://s3-eu-west-1.amazonaws.com/statistics.digitalresources.jisc.ac.uk/dkan/files/Townsend_Deprivation_Scores/Scores/Scores-%202011%20UK%20LSOA.csv
! wget -O oa_to_rgn.csv https://opendata.arcgis.com/datasets/65664b00231444edb3f6f83c9d40591f_0.csv
! wget -O pc_to_lsoa.zip https://www.arcgis.com/sharing/rest/content/items/83300a9b0e63465fabee3fddd8fbd30e/data
! unzip -o pc_to_lsoa.zip
! wget -O lsoa_to_utla.csv https://opendata.arcgis.com/datasets/9f4c270148014f20bf24abff9a7aef62_0.csv
! wget -O density.zip https://www.ons.gov.uk/file?uri=%2fpeoplepopulationandcommunity%2fpopulationandmigration%2fpopulationestimates%2fdatasets%2flowersuperoutputareapopulationdensity%2fmid2019sape22dt11/sape22dt11mid2019lsoapopulationdensity.zip
! unzip -o density.zip
postcode_csv_name = 'postcode_lookups.csv'
oa11cd_csv_name = 'oa11cd_lookups.csv'
lsoa11cd_csv_name = 'lsoa11cd_lookups.csv'
output_dataset_name = 'uk_geocodes.hdf5'
import numpy as np
import pandas as pd
pc_df = pd.read_csv('NSPCL_MAY20_UK_LU.csv')
oa_to_rgn_df = pd.read_csv('oa_to_rgn.csv')
oa_to_rgn_df = oa_to_rgn_df.rename(columns={k:k.lower() for k in oa_to_rgn_df.columns})
imd_df = pd.read_excel('imd.xlsx', sheet_name=1)
imd_df = imd_df.rename(columns={"LSOA code (2011)": "lsoa11cd",
"LSOA name (2011)": "lsoa11nm",
"Local Authority District code (2019)": "lad19cd",
"Local Authority District name (2019)": "lad19nm",
"Index of Multiple Deprivation (IMD) Rank (where 1 is most deprived)": "imd_rank",
"Index of Multiple Deprivation (IMD) Decile (where 1 is most deprived 10% of LSOAs)": "imd_decile",
"Income Rank (where 1 is most deprived)": "imd_income_rank",
"Income Decile (where 1 is most deprived 10% of LSOAs)": "imd_income_decile",
"Employment Rank (where 1 is most deprived)": "imd_employment_rank",
"Employment Decile (where 1 is most deprived 10% of LSOAs)": "imd_employment_decile",
"Education, Skills and Training Rank (where 1 is most deprived)": "imd_education_rank",
"Education, Skills and Training Decile (where 1 is most deprived 10% of LSOAs)": "imd_education_decile",
"Health Deprivation and Disability Rank (where 1 is most deprived)": "imd_health_rank",
"Health Deprivation and Disability Decile (where 1 is most deprived 10% of LSOAs)": "imd_health_decile",
"Crime Rank (where 1 is most deprived)": "imd_crime_rank",
"Crime Decile (where 1 is most deprived 10% of LSOAs)": "imd_crime_decile",
"Barriers to Housing and Services Rank (where 1 is most deprived)": "imd_housing_rank",
"Barriers to Housing and Services Decile (where 1 is most deprived 10% of LSOAs)": "imd_housing_decile",
"Living Environment Rank (where 1 is most deprived)": "imd_living_rank",
"Living Environment Decile (where 1 is most deprived 10% of LSOAs)": "imd_living_decile"} )
tsend_df = pd.read_csv('./townsend.csv')
tsend_df = tsend_df.rename(columns={"GEO_CODE": "lsoa11cd", "TDS": "townsend_score", "quintile": "townsend_quintile"})
tsend_df = tsend_df.drop(columns=[c for c in tsend_df.columns if c not in ("lsoa11cd", "townsend_score", "townsend_quintile")])
ruc_df = pd.read_csv('./rural_urban.csv')
ruc_df = ruc_df.rename(columns={"LSOA11CD": "lsoa11cd", "RUC11CD": "ruc11cd", "RUC11": "ruc11nm"})
ruc_df = ruc_df.drop(columns=[c for c in ruc_df.columns if c not in ("lsoa11cd", "ruc11cd", "ruc11nm")])
lsoa_to_utla_df = pd.read_csv('lsoa_to_utla.csv')
lsoa_to_utla_df = lsoa_to_utla_df.rename(columns={k:k.lower() for k in lsoa_to_utla_df.columns})
density_df = pd.read_excel('SAPE22DT11-mid-2019-lsoa-population-density.xlsx', sheet_name=3, skiprows=4)
density_df = density_df.rename(columns={"LSOA Code": "lsoa11cd", "LSOA Name": "lsoa11nm",
"Mid-2019 population": "population", "Area Sq Km": "area_sq_km", "People per Sq Km": "density"})
print(density_df.columns)
print("done!")
# generate sub-tables from pc_df
# ------------------------------
# pc --> oa11cd --> ladcd
# | |\> lsoa11cd --> msoa11cd (--> rgn20cd)
# | | |\> soac11cd
# | | |\> laccd
# | | \> rgn20cd
# | \> oac11cd
# |\> wz11cd
# |\> wzc11cd
# \> lsoa11cd (duplicated for convenience)
# create a postcode to oa11cd / lsoa11cd lookup table from pc to oacd table
print("pc_df")
print(pc_df.columns)
pc_lookup_df = pc_df[['pcd7', 'pcd8', 'pcds', 'oa11cd', 'lsoa11cd', 'lsoa11nm', 'wz11cd', 'wzc11cd', 'wzc11nm']]
print(" pc_lookup_df")
pc_lookup_df = pc_lookup_df[~pc_lookup_df['pcd7'].isna()]
pc_lookup_df = pc_lookup_df[~pc_lookup_df['pcd8'].isna()]
pc_lookup_df = pc_lookup_df[~pc_lookup_df['pcds'].isna()]
# create an oa11cd to ladcd / lsoa11cd lookup table from pc to oacd table
print(" pc_oa11cd_lookup_df")
pc_oa11cd_lookup_df = pc_df[['oa11cd', 'lsoa11cd', 'ladcd', 'ladnm', 'ladnmw', 'oac11cd', 'oac11nm']]
pc_oa11cd_lookup_df = pc_oa11cd_lookup_df[~pc_oa11cd_lookup_df['oa11cd'].isna()]
pc_oa11cd_lookup_df = pc_oa11cd_lookup_df.drop_duplicates()
# create an lsoa11cd to msoa11cd / soac11cd lookup table from pc to oacd table
print(" pc_lsoa11cd_lookup_df")
pc_lsoa11cd_lookup_df = pc_df[['lsoa11cd', 'lsoa11nm', 'msoa11cd', 'msoa11nm', 'soac11cd', 'soac11nm', 'laccd', 'lacnm']]
pc_lsoa11cd_lookup_df = pc_lsoa11cd_lookup_df[~pc_lsoa11cd_lookup_df['lsoa11cd'].isna()]
pc_lsoa11cd_lookup_df = pc_lsoa11cd_lookup_df.drop_duplicates()
# generate sub-tables from oa_to_rgn_df
# -------------------------------------
# create an oa11cd to lsoa11cd lookup table from pc to oacd table
print("oa_to_rgn_df")
print(oa_to_rgn_df.columns)
print(" oa_oa11cd_lookup_df")
oa_oa11cd_lookup_df = oa_to_rgn_df[['oa11cd', 'lsoa11cd', 'lad20cd', 'lad20nm']]
oa_oa11cd_lookup_df = oa_oa11cd_lookup_df[~oa_oa11cd_lookup_df['oa11cd'].isna()]
oa_oa11cd_lookup_df = oa_oa11cd_lookup_df.drop_duplicates()
# create a lsoa11cd to msoa11cd / rgn20cd
print(" oa_lsoa11cd_lookup_df")
oa_lsoa11cd_lookup_df = oa_to_rgn_df[['lsoa11cd', 'lsoa11nm', 'msoa11cd', 'msoa11nm', 'rgn20cd', 'rgn20nm']]
oa_lsoa11cd_lookup_df = oa_lsoa11cd_lookup_df[~oa_lsoa11cd_lookup_df['lsoa11cd'].isna()]
oa_lsoa11cd_lookup_df = oa_lsoa11cd_lookup_df.drop_duplicates()
# generate sub-tables for imd_df
# ------------------------------
print("imd_df")
print(imd_df.columns)
print(" imd_lsoa11cd_lookup_df")
imd_lsoa11cd_lookup_df = imd_df[['lsoa11cd', 'lsoa11nm', 'lad19cd', 'lad19nm',
'imd_rank', 'imd_decile',
'imd_income_rank', 'imd_income_decile', 'imd_employment_rank',
'imd_employment_decile', 'imd_education_rank', 'imd_education_decile',
'imd_health_rank', 'imd_health_decile', 'imd_crime_rank',
'imd_crime_decile', 'imd_housing_rank', 'imd_housing_decile',
'imd_living_rank', 'imd_living_decile']]
imd_lsoa11cd_lookup_df = imd_lsoa11cd_lookup_df[~imd_lsoa11cd_lookup_df['lsoa11cd'].isna()]
imd_lsoa11cd_lookup_df = imd_lsoa11cd_lookup_df.drop_duplicates()
# generate sub-tables for tsend_df
# --------------------------------
# ['lsoa11cd', 'townsend_score', 'townsend_quintile']
print("townsend_df")
print(tsend_df.columns)
print(" townsend_lsoa11cd_lookup_df")
tsend_lsoa11cd_lookup_df = tsend_df[['lsoa11cd', 'townsend_score', 'townsend_quintile']]
tsend_lsoa11cd_lookup_df = tsend_lsoa11cd_lookup_df[~tsend_lsoa11cd_lookup_df['lsoa11cd'].isna()]
tsend_lsoa11cd_lookup_df = tsend_lsoa11cd_lookup_df.drop_duplicates()
# generate sub-tables for ruc_df
# ------------------------------
# ['lsoa11cd', 'ruc11cd', 'ruc11nm']
print("ruc_df")
print(ruc_df.columns)
print(" ruc_lsoa11cd_lookup_df")
ruc_lsoa11cd_lookup_df = ruc_df[['lsoa11cd', 'ruc11cd', 'ruc11nm']]
ruc_lsoa11cd_lookup_df = ruc_lsoa11cd_lookup_df[~ruc_lsoa11cd_lookup_df['lsoa11cd'].isna()]
ruc_lsoa11cd_lookup_df = ruc_lsoa11cd_lookup_df.drop_duplicates()
# generate sub-tables for lsoa_to_utla_df
# ---------------------------------------
print("lsoa_to_utla_df")
print(lsoa_to_utla_df.columns)
lsoa_utla_lsoa11cd_lookup_df = lsoa_to_utla_df[['lsoa11cd', 'lsoa11nm', 'utla17cd', 'utla17nm']]
lsoa_utla_lsoa11cd_lookup_df = lsoa_utla_lsoa11cd_lookup_df[~lsoa_utla_lsoa11cd_lookup_df['lsoa11cd'].isna()]
lsoa_utla_lsoa11cd_lookup_df = lsoa_utla_lsoa11cd_lookup_df.drop_duplicates()
# generate sub-tables for density_df
# ----------------------------------
print("density_df")
print(density_df.columns)
density_lsoa11cd_lookup_df = density_df[['lsoa11cd', 'lsoa11nm', 'population', 'area_sq_km', 'density']]
density_lsoa11cd_lookup_df = density_lsoa11cd_lookup_df[~density_lsoa11cd_lookup_df['lsoa11cd'].isna()]
density_lsoa11cd_lookup_df = density_lsoa11cd_lookup_df.drop_duplicates()
def check_consistency(msg, one, two):
mismatches = 0
one = dict(sorted(one.values.tolist()))
two = dict(sorted(two.values.tolist()))
for k,v in one.items():
if k in two and two[k] != v:
mismatches += 1
print("{}: {} mismatches".format(msg, mismatches))
def consolidate_fields(df, result, first, second):
tbl = df[[first, second]]
tbl = tbl.dropna(how='any')
tbl = tbl[tbl[first] != tbl[second]]
if len(tbl) > 0:
print("Unexpected: non-null entries mismatched")
print(tbl)
df[result] = df[first].where(df[first].notna(), df[second])
df = df.drop(columns=[first, second])
return df
# double check that oa1cd -> lsoa11cd mappings are in mutual agreement
check_consistency("checking oa11cd -> lsoa11cd consistency",
pc_oa11cd_lookup_df[['oa11cd', 'lsoa11cd']],
oa_oa11cd_lookup_df[['oa11cd', 'lsoa11cd']])
check_consistency("checking lsoa11cd consistency (pc, oa)",
pc_lsoa11cd_lookup_df[['lsoa11cd', 'lsoa11nm']],
oa_lsoa11cd_lookup_df[['lsoa11cd', 'lsoa11nm']])
check_consistency("checking lsoa11cd consistency (pc, imd)",
pc_lsoa11cd_lookup_df[['lsoa11cd', 'lsoa11nm']],
imd_lsoa11cd_lookup_df[['lsoa11cd', 'lsoa11nm']])
check_consistency("checking lsoa11cd consistency (pc, utla)",
pc_lsoa11cd_lookup_df[['lsoa11cd', 'lsoa11nm']],
lsoa_utla_lsoa11cd_lookup_df[['lsoa11cd', 'lsoa11nm']])
print('\noa11cd_lookup_df')
oa11cd_lookup_df = pd.merge(left_on='oa11cd', right_on='oa11cd',
left=pc_oa11cd_lookup_df, right=oa_oa11cd_lookup_df,
how='outer')
oa11cd_lookup_df = consolidate_fields(oa11cd_lookup_df, 'lsoa11cd', 'lsoa11cd_x', 'lsoa11cd_y')
print(oa11cd_lookup_df.columns)
print('\nlsoa11cd_lookup_df')
lsoa11cd_lookup_df = pd.merge(left_on='lsoa11cd', right_on='lsoa11cd',
left=pc_lsoa11cd_lookup_df, right=oa_lsoa11cd_lookup_df,
how='outer')
lsoa11cd_lookup_df = consolidate_fields(lsoa11cd_lookup_df, 'lsoa11nm', 'lsoa11nm_x', 'lsoa11nm_y')
lsoa11cd_lookup_df = consolidate_fields(lsoa11cd_lookup_df, 'msoa11cd', 'msoa11cd_x', 'msoa11cd_y')
lsoa11cd_lookup_df = consolidate_fields(lsoa11cd_lookup_df, 'msoa11nm', 'msoa11nm_x', 'msoa11nm_y')
print(lsoa11cd_lookup_df.columns)
print('\n merge lsoa11cd_lookup with imd')
lsoa11cd_lookup_df = pd.merge(left_on='lsoa11cd', right_on='lsoa11cd',
left=lsoa11cd_lookup_df, right=imd_lsoa11cd_lookup_df,
how='outer')
lsoa11cd_lookup_df = consolidate_fields(lsoa11cd_lookup_df, 'lsoa11nm', 'lsoa11nm_x', 'lsoa11nm_y')
print(lsoa11cd_lookup_df.columns)
print('\nmerge lsoa11cd_lookup with townsend')
lsoa11cd_lookup_df = pd.merge(left_on='lsoa11cd', right_on='lsoa11cd',
left=lsoa11cd_lookup_df, right=tsend_lsoa11cd_lookup_df,
how='outer')
print(lsoa11cd_lookup_df.columns)
print('\nmerge lsoa11cd_lookup_with_ruc')
lsoa11cd_lookup_df = pd.merge(left_on='lsoa11cd', right_on='lsoa11cd',
left=lsoa11cd_lookup_df, right=ruc_lsoa11cd_lookup_df,
how='outer')
print(lsoa11cd_lookup_df.columns)
print('\nmerge lsoa11cd_lookup with lsoa_to_utla')
lsoa11cd_lookup_df = pd.merge(left_on='lsoa11cd', right_on='lsoa11cd',
left=lsoa11cd_lookup_df, right=lsoa_utla_lsoa11cd_lookup_df,
how='outer')
lsoa11cd_lookup_df = consolidate_fields(lsoa11cd_lookup_df, 'lsoa11nm', 'lsoa11nm_x', 'lsoa11nm_y')
print(lsoa11cd_lookup_df.columns)
print('\nmerge lsoa11cd_lookup with density')
lsoa11cd_lookup_df = pd.merge(left_on='lsoa11cd', right_on='lsoa11cd',
left=lsoa11cd_lookup_df, right=density_lsoa11cd_lookup_df.drop(columns=['lsoa11nm']),
how='outer')
print(lsoa11cd_lookup_df.columns)
pc_lookup_df.to_csv(postcode_csv_name)
oa11cd_lookup_df.to_csv(oa11cd_csv_name)
lsoa11cd_lookup_df.to_csv(lsoa11cd_csv_name)
print("done!")
from exetera.core.session import Session
from exetera.core.persistence import try_str_to_int, try_str_to_float
def fixed_string_fn(s, g, n, df):
print('fixed_string:', n)
values = df[n].replace(np.nan, '').str.encode('utf-8')
s.create_fixed_string(g, n, values.map(len).max()).data.write(values.to_list())
def indexed_string_fn(s, g, n, df):
print('indexed_string:', n)
values = df[n].replace(np.nan, '').astype('str')
s.create_indexed_string(g, n).data.write(values.to_list())
def rank_and_decile_fn(s, g, nr, nd, nv, df):
print("rank_and_decile:", nr, nd)
valid = list()
rank = list()
decile = list()
fvalid = s.create_numeric(g, nv, 'bool')
frank = s.create_numeric(g, nr, 'int8')
fdecile = s.create_numeric(g, nd, 'int32')
for i in range(len(df[nr])):
f, vr = try_str_to_int(df[nr][i])
_, vd = try_str_to_int(df[nd][i])
valid.append(f)
rank.append(vr)
decile.append(vd)
fvalid.data.write(valid)
frank.data.write(rank)
fdecile.data.write(decile)
def townsend_fn(s, g, ns, nq, nv, df):
print("townsend:", ns, nq)
valid = list()
score = list()
quintile = list()
fvalid = s.create_numeric(g, nv, 'bool')
fscore = s.create_numeric(g, ns, 'float32')
fquintile = s.create_numeric(g, nq, 'int8')
for i in range(len(df[ns])):
f, vs = try_str_to_float(df[ns][i])
_, vq = try_str_to_int(df[nq][i])
valid.append(f)
score.append(vs)
quintile.append(vq)
fvalid.data.write(valid)
fscore.data.write(score)
fquintile.data.write(quintile)
def numeric_fn(s, g, n, nv, df, map_fn, dtype):
print('numeric:', n)
valid = list()
data = list()
fvalid = s.create_numeric(g, nv, 'bool')
fdata = s.create_numeric(g, n, dtype)
for i in range(len(df[n])):
f, v = map_fn(df[n][i])
valid.append(f)
data.append(v)
fvalid.data.write(valid)
fdata.data.write(data)
def categorical_fn(s, g, n, df, dvals):
print('categorical:', n)
values = df[n].replace(np.nan, '').astype('str')
values = [dvals[v] for v in values]
s.create_categorical(g, n, 'int8', dvals).data.write(values)
def create_categorical_dicts(df, n_cd, n_nm):
codes = df[[n_cd, n_nm]].replace(np.nan, '').drop_duplicates().sort_values(by=n_cd).reset_index()
cddict = {}
nmdict = {}
for i, r in codes.iterrows():
cddict[r[n_cd]] = i
nmdict[r[n_nm]] = i
return cddict, nmdict
with Session() as s:
dest = s.open_dataset(output_dataset_name, 'w', 'dest')
print('postcode')
print(pc_lookup_df.columns)
d_pc = dest.create_group('postcode')
fixed_string_fn(s, d_pc, 'pcd7', pc_lookup_df)
fixed_string_fn(s, d_pc, 'pcd8', pc_lookup_df)
fixed_string_fn(s, d_pc, 'pcds', pc_lookup_df)
fixed_string_fn(s, d_pc, 'oa11cd', pc_lookup_df)
fixed_string_fn(s, d_pc, 'lsoa11cd', pc_lookup_df)
indexed_string_fn(s, d_pc, 'lsoa11nm', pc_lookup_df)
fixed_string_fn(s, d_pc, 'wz11cd', pc_lookup_df)
cddict, nmdict = create_categorical_dicts(pc_lookup_df, 'wzc11cd', 'wzc11nm')
categorical_fn(s, d_pc, 'wzc11cd', pc_lookup_df, cddict)
categorical_fn(s, d_pc, 'wzc11nm', pc_lookup_df, nmdict)
print('\nosa11cd')
print(oa11cd_lookup_df.columns)
d_oa = dest.create_group('osa11cd')
fixed_string_fn(s, d_oa, 'oa11cd', oa11cd_lookup_df)
fixed_string_fn(s, d_oa, 'lsoa11cd', oa11cd_lookup_df)
fixed_string_fn(s, d_oa, 'ladcd', oa11cd_lookup_df)
indexed_string_fn(s, d_oa, 'ladnm', oa11cd_lookup_df)
indexed_string_fn(s, d_oa, 'ladnmw', oa11cd_lookup_df)
cddict, nmdict = create_categorical_dicts(oa11cd_lookup_df, 'oac11cd', 'oac11nm')
categorical_fn(s, d_oa, 'oac11cd', oa11cd_lookup_df, cddict)
categorical_fn(s, d_oa, 'oac11nm', oa11cd_lookup_df, nmdict)
print('\nlsoa11cd')
print(lsoa11cd_lookup_df.columns)
d_lsoa = dest.create_group('lsoa11cd')
fixed_string_fn(s, d_lsoa, 'lsoa11cd', lsoa11cd_lookup_df)
indexed_string_fn(s, d_lsoa, 'lsoa11nm', lsoa11cd_lookup_df)
cddict, nmdict = create_categorical_dicts(lsoa11cd_lookup_df, 'soac11cd', 'soac11nm')
categorical_fn(s, d_lsoa, 'soac11cd', lsoa11cd_lookup_df, cddict)
categorical_fn(s, d_lsoa, 'soac11nm', lsoa11cd_lookup_df, nmdict)
fixed_string_fn(s, d_lsoa, 'msoa11cd', lsoa11cd_lookup_df)
indexed_string_fn(s, d_lsoa, 'msoa11nm', lsoa11cd_lookup_df)
fixed_string_fn(s, d_lsoa, 'lad19cd', lsoa11cd_lookup_df)
indexed_string_fn(s, d_lsoa, 'lad19nm', lsoa11cd_lookup_df)
fixed_string_fn(s, d_lsoa, 'utla17cd', lsoa11cd_lookup_df)
indexed_string_fn(s, d_lsoa, 'utla17nm', lsoa11cd_lookup_df)
fixed_string_fn(s, d_lsoa, 'rgn20cd', lsoa11cd_lookup_df)
indexed_string_fn(s, d_lsoa, 'rgn20nm', lsoa11cd_lookup_df)
rank_and_decile_fn(s, d_lsoa, 'imd_rank', 'imd_decile', 'imd_valid', lsoa11cd_lookup_df)
rank_and_decile_fn(s, d_lsoa, 'imd_education_rank', 'imd_education_decile', 'imd_education_valid', lsoa11cd_lookup_df)
rank_and_decile_fn(s, d_lsoa, 'imd_income_rank', 'imd_income_decile', 'imd_income_valid', lsoa11cd_lookup_df)
rank_and_decile_fn(s, d_lsoa, 'imd_employment_rank', 'imd_employment_decile', 'imd_employment_valid', lsoa11cd_lookup_df)
rank_and_decile_fn(s, d_lsoa, 'imd_health_rank', 'imd_health_decile', 'imd_health_valid', lsoa11cd_lookup_df)
rank_and_decile_fn(s, d_lsoa, 'imd_crime_rank', 'imd_crime_decile', 'imd_crime_valid', lsoa11cd_lookup_df)
rank_and_decile_fn(s, d_lsoa, 'imd_housing_rank', 'imd_housing_decile', 'imd_housing_valid', lsoa11cd_lookup_df)
rank_and_decile_fn(s, d_lsoa, 'imd_living_rank', 'imd_living_decile', 'imd_living_valid', lsoa11cd_lookup_df)
cddict, nmdict = create_categorical_dicts(lsoa11cd_lookup_df, 'ruc11cd', 'ruc11nm')
categorical_fn(s, d_lsoa, 'ruc11cd', lsoa11cd_lookup_df, cddict)
categorical_fn(s, d_lsoa, 'ruc11nm', lsoa11cd_lookup_df, nmdict)
townsend_fn(s, d_lsoa, 'townsend_score', 'townsend_quintile', 'townsend_valid', lsoa11cd_lookup_df)
numeric_fn(s, d_lsoa, 'population', 'population_valid', lsoa11cd_lookup_df, try_str_to_int, 'int32')
numeric_fn(s, d_lsoa, 'area_sq_km', 'area_sq_km_valid', lsoa11cd_lookup_df, try_str_to_float, 'float32')
numeric_fn(s, d_lsoa, 'density', 'density_valid', lsoa11cd_lookup_df, try_str_to_float, 'float32')
print('\ndone!')
def list_relationships(df, key_pairs):
    # For each key pair (e.g. oa -> lsoa), report how many distinct combinations
    # exist and whether the mapping looks one-to-one or one-to-many.
for kp in key_pairs:
print(kp)
sdf = df[kp]
sdf = sdf.drop_duplicates()
print(len(sdf))
print(len(sdf[kp[0]].unique()))
print(len(sdf[kp[1]].unique()))
print(sdf[kp[0]].value_counts().unique())
print('pc_df')
list_relationships(pc_df,
(['oa11cd', 'oac11cd'], ['oa11cd', 'lsoa11cd'], ['oa11cd', 'msoa11cd'], ['oa11cd', 'ladcd'], ['oa11cd', 'soac11cd'],
['oa11cd', 'laccd'], ['oa11cd', 'wz11cd'], ['oa11cd', 'wzc11cd'],
['lsoa11cd', 'oac11cd'], ['lsoa11cd', 'msoa11cd'], ['lsoa11cd', 'ladcd'], ['lsoa11cd', 'soac11cd'],
['lsoa11cd', 'laccd'], ['lsoa11cd', 'wz11cd'], ['lsoa11cd', 'wzc11cd'],
['msoa11cd', 'ladcd'], ['msoa11cd', 'soac11cd'], ['ladcd', 'soac11cd']))
print('oa_to_rgn')
list_relationships(oa_to_rgn_df,
(['oa11cd', 'lsoa11cd'], ['oa11cd', 'lad20cd']))
| 0.343892 | 0.838481 |
# SIT742: Modern Data Science
**(Week 02: A Touch of Data Science)**
---
- Materials in this module include resources collected from various open-source online repositories.
- You are free to use, change and distribute this package.
- If you found any issue/bug for this document, please submit an issue at [tulip-lab/sit742](https://github.com/tulip-lab/sit742/issues)
Prepared by **SIT742 Teaching Team**
---
# Session 2C - Operations in Pandas
# Pandas
Credits: The following are notes taken while working through [Python for Data Analysis](http://www.amazon.com/Python-Data-Analysis-Wrangling-IPython/dp/1449319793) by Wes McKinney
Pandas is a strange name, kind of an acronym: Python, Numerical, Data Analysis?
(Or so I thought; the name actually comes from *panel data*. Either way, nothing to do with Chinese bamboo-chewing bears.)
Because pandas is an external library you need to import it. There are several ways that you will see imports done (a short example follows the list):
* import pandas
* from pandas import tools
* import pandas as pd
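The conventional style, used in this notebook, is the `pd` alias (note that the old `pandas.tools` module has been removed in recent pandas versions). Printing the version is optional and shown only for illustration:
```
import pandas as pd
print(pd.__version__)
```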
## Table of Contents
1. Series
2. DataFrame
3. Reindexing
4. Dropping Entries
5. Indexing, Selecting, Filtering
6. Arithmetic and Data Alignment
7. Function Application and Mapping
8. Sorting and Ranking
9. Axis Indexes with Duplicate Values
10. Summarizing and Computing Descriptive Statistics
11. Cleaning Data
12. Input and Output
```
# You can review the Week 1 lab materials to recall what a module is and how to import one.
from pandas import Series, DataFrame
import pandas as pd
import numpy as np
```
## 1. Series
A Series is a one-dimensional array-like object containing an array of data and an associated array of data labels. The data can be any NumPy data type and the labels are the Series' index.
Create a Series:
```
ser_1 = Series([1, 1, 2, -3, -5, 8, 13])
ser_1
```
Get the array representation of a Series:
```
ser_1.values
```
Index objects are immutable and hold the axis labels and metadata such as names and axis names.
Get the index of the Series:
```
ser_1.index
```
Create a Series with a custom index:
```
ser_2 = Series([1, 1, 2, -3, -5], index=['a', 'b', 'c', 'd', 'e'])
ser_2
```
Get a value from a Series:
```
ser_2[4] == ser_2['e']
```
Get a set of values from a Series by passing in a list:
```
ser_2[['c', 'a', 'b']]
```
Get values greater than 0:
```
ser_2[ser_2 > 0]
```
Scalar multiply:
```
ser_2 * 2
```
Apply a numpy math function:
```
import numpy as np
#The np.exp() is used to calculate the exponential of all elements in the input array
np.exp(ser_2)
```
A Series is like a fixed-length, ordered dict.
Create a series by passing in a dict:
```
dict_1 = {'foo' : 100, 'bar' : 200, 'baz' : 300}
ser_3 = Series(dict_1)
ser_3
```
Re-order a Series by passing in an index (indices not found are NaN):
```
index = ['foo', 'bar', 'baz', 'qux']
ser_4 = Series(dict_1, index=index)
ser_4
```
Check for NaN with the pandas method:
```
pd.isnull(ser_4)
```
Check for NaN with the Series method:
```
ser_4.isnull()
```
Series automatically aligns differently indexed data in arithmetic operations:
```
ser_3 + ser_4
```
Name a Series:
```
ser_4.name = 'foobarbazqux'
```
Name a Series index:
```
ser_4.index.name = 'label'
ser_4
```
Rename a Series' index in place:
```
ser_4.index = ['fo', 'br', 'bz', 'qx']
ser_4
```
## 2. DataFrame
A DataFrame is a tabular data structure containing an ordered collection of columns. Each column can have a different type. DataFrames have both row and column indices and are analogous to a dict of Series. Row and column operations are treated roughly symmetrically. Columns returned when indexing a DataFrame are views of the underlying data, not a copy. To obtain a copy, use the Series' copy method.
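For example, a quick sketch of the view-versus-copy distinction (view behaviour has been tightened in recent pandas versions, so treat this as illustrative):
```
df_view = DataFrame({'a' : [1, 2, 3]})
col = df_view['a']                     # refers to the underlying data
col_independent = df_view['a'].copy()  # an independent copy, safe to modify
```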
Create a DataFrame:
```
data_1 = {'state' : ['VA', 'VA', 'VA', 'MD', 'MD'],
'year' : [2012, 2013, 2014, 2014, 2015],
'pop' : [5.0, 5.1, 5.2, 4.0, 4.1]}
df_1 = DataFrame(data_1)
df_1
```
Create a DataFrame specifying a sequence of columns:
```
df_2 = DataFrame(data_1, columns=['year', 'state', 'pop'])
df_2
```
Like Series, columns that are not present in the data are NaN:
```
df_3 = DataFrame(data_1, columns=['year', 'state', 'pop', 'unempl'])
df_3
```
Retrieve a column by key, returning a Series:
```
df_3['state']
```
Retrieve a column by attribute, returning a Series:
```
df_3.year
```
Retrieve a row by position (note: `.ix` comes from older pandas; recent versions use `.iloc` / `.loc` instead):
```
df_3.ix[0]
```
Update a column by assignment:
```
df_3['unempl'] = np.arange(5)
df_3
```
Assign a Series to a column (note if assigning a list or array, the length must match the DataFrame, unlike a Series):
```
unempl = Series([6.0, 6.0, 6.1], index=[2, 3, 4])
df_3['unempl'] = unempl
df_3
```
Assign to a column that doesn't exist to create a new column:
```
df_3['state_dup'] = df_3['state']
df_3
```
Delete a column:
```
del df_3['state_dup']
df_3
```
Create a DataFrame from a nested dict of dicts (the keys in the inner dicts are unioned and sorted to form the index in the result, unless an explicit index is specified):
```
pop = {'VA' : {2013 : 5.1, 2014 : 5.2},
'MD' : {2014 : 4.0, 2015 : 4.1}}
df_4 = DataFrame(pop)
df_4
```
Transpose the DataFrame:
```
df_4.T
```
Create a DataFrame from a dict of Series:
```
data_2 = {'VA' : df_4['VA'][1:],
'MD' : df_4['MD'][2:]}
df_5 = DataFrame(data_2)
df_5
```
Set the DataFrame index name:
```
df_5.index.name = 'year'
df_5
```
Set the DataFrame columns name:
```
df_5.columns.name = 'state'
df_5
```
Return the data contained in a DataFrame as a 2D ndarray:
```
df_5.values
```
If the columns are different dtypes, the 2D ndarray's dtype will accommodate all of the columns:
```
df_3.values
```
## 3. Reindexing
Create a new object with the data conformed to a new index. Any missing values are set to NaN.
```
df_3
```
Reindexing rows returns a new frame with the specified index:
```
df_3.reindex(list(reversed(range(0, 6))))
```
Missing values can be set to something other than NaN:
```
df_3.reindex(range(6), fill_value=0)
```
Fill in ordered data, such as a time series, using forward- or back-fill:
```
ser_5 = Series(['foo', 'bar', 'baz'], index=[0, 2, 4])
ser_5.reindex(range(5), method='ffill')
ser_5.reindex(range(5), method='bfill')
```
Reindex columns:
```
df_3.reindex(columns=['state', 'pop', 'unempl', 'year'])
```
Reindex rows and columns while filling rows:
```
df_3.reindex(index=list(reversed(range(0, 6))),
fill_value=0,
columns=['state', 'pop', 'unempl', 'year'])
```
Reindex using ix:
```
df_6 = df_3.ix[range(0, 7), ['state', 'pop', 'unempl', 'year']]
df_6
```
## 4. Dropping Entries
Drop rows from a Series or DataFrame:
```
df_7 = df_6.drop([0, 1])
df_7
```
Drop columns from a DataFrame:
```
df_7 = df_7.drop('unempl', axis=1)
df_7
```
## 5. Indexing, Selecting, Filtering
Series indexing is similar to NumPy array indexing with the added bonus of being able to use the Series' index values.
```
ser_2
```
Select a value from a Series:
```
ser_2[0] == ser_2['a']
```
Select a slice from a Series:
```
ser_2[1:4]
```
Select specific values from a Series:
```
ser_2[['b', 'c', 'd']]
```
Select from a Series based on a filter:
```
ser_2[ser_2 > 0]
```
Select a slice from a Series with labels (note the end point is inclusive):
```
ser_2['a':'b']
```
Assign to a Series slice (note the end point is inclusive):
```
ser_2['a':'b'] = 0
ser_2
```
Pandas supports indexing into a DataFrame.
```
df_6
```
Select specified columns from a DataFrame:
```
df_6[['pop', 'unempl']]
```
Select a slice from a DataFrame:
```
df_6[:2]
```
Select from a DataFrame based on a filter:
```
df_6[df_6['pop'] > 5]
```
Perform a scalar comparison on a DataFrame:
```
df_6 > 5
```
Perform a scalar comparison on a DataFrame, retain the values that pass the filter:
```
df_6[df_6 > 5]
```
Select a slice of rows from a DataFrame (note the end point is inclusive):
```
df_6.ix[2:3]
```
Select a slice of rows from a specific column of a DataFrame:
```
df_6.ix[0:2, 'pop']
```
Select rows based on a comparison involving a specific column:
```
df_6.ix[df_6.unempl > 5.0]
```
## 6. Arithmetic and Data Alignment
Adding Series objects results in the union of the two indexes if they are not the same, with NaN for index labels that do not overlap:
```
np.random.seed(0)
ser_6 = Series(np.random.randn(5),
index=['a', 'b', 'c', 'd', 'e'])
ser_6
np.random.seed(1)
ser_7 = Series(np.random.randn(5),
index=['a', 'c', 'e', 'f', 'g'])
ser_7
ser_6 + ser_7
```
Set a fill value instead of NaN for indices that do not overlap:
```
ser_6.add(ser_7, fill_value=0)
```
Adding DataFrame objects results in the union of the row and column indexes if they are not the same, with NaN for labels that do not overlap:
```
np.random.seed(0)
df_8 = DataFrame(np.random.rand(9).reshape((3, 3)),
columns=['a', 'b', 'c'])
df_8
np.random.seed(1)
df_9 = DataFrame(np.random.rand(9).reshape((3, 3)),
columns=['b', 'c', 'd'])
df_9
df_8 + df_9
```
Set a fill value instead of NaN for indices that do not overlap:
```
df_10 = df_8.add(df_9, fill_value=0)
df_10
```
Like NumPy, pandas supports arithmetic operations between DataFrames and Series.
Match the index of the Series on the DataFrame's columns, broadcasting down the rows:
```
ser_8 = df_10.ix[0]
df_11 = df_10 - ser_8
df_11
```
Match the index of the Series on the DataFrame's columns, broadcasting down the rows and union the indices that do not match:
```
ser_9 = Series(range(3), index=['a', 'd', 'e'])
ser_9
df_11 - ser_9
```
Broadcast over the columns and match the rows (axis=0) by using an arithmetic method:
```
df_10
ser_10 = Series([100, 200, 300])
ser_10
df_10.sub(ser_10, axis=0)
```
## 7. Function Application and Mapping
NumPy ufuncs (element-wise array methods) operate on pandas objects:
```
df_11 = np.abs(df_11)
df_11
```
Apply a function on 1D arrays to each column:
```
func_1 = lambda x: x.max() - x.min()
df_11.apply(func_1)
```
Apply a function on 1D arrays to each row:
```
df_11.apply(func_1, axis=1)
```
Apply a function and return a DataFrame:
```
func_2 = lambda x: Series([x.min(), x.max()], index=['min', 'max'])
df_11.apply(func_2)
```
Apply an element-wise Python function to a DataFrame:
```
func_3 = lambda x: '%.2f' %x
df_11.applymap(func_3)
```
Apply an element-wise Python function to a Series:
```
df_11['a'].map(func_3)
```
## 8. Sorting and Ranking
```
ser_4
```
Sort a Series by its index:
```
ser_4.sort_index()
```
Sort a Series by its values:
```
ser_4.sort_values()
df_12 = DataFrame(np.arange(12).reshape((3, 4)),
index=['three', 'one', 'two'],
columns=['c', 'a', 'b', 'd'])
df_12
```
Sort a DataFrame by its index:
```
df_12.sort_index()
```
Sort a DataFrame by columns in descending order:
```
df_12.sort_index(axis=1, ascending=False)
```
Sort a DataFrame's values by column:
```
df_12.sort_values(by=['d', 'c'])
```
Ranking is similar to numpy.argsort except that ties are broken by assigning each group the mean rank:
```
ser_11 = Series([7, -5, 7, 4, 2, 0, 4, 7])
ser_11 = ser_11.sort_values()
ser_11
ser_11.rank()
```
Rank a Series according to the order in which values appear in the data:
```
ser_11.rank(method='first')
```
Rank a Series in descending order, using the maximum rank for the group:
```
ser_11.rank(ascending=False, method='max')
```
DataFrames can rank over rows or columns.
```
df_13 = DataFrame({'foo' : [7, -5, 7, 4, 2, 0, 4, 7],
'bar' : [-5, 4, 2, 0, 4, 7, 7, 8],
'baz' : [-1, 2, 3, 0, 5, 9, 9, 5]})
df_13
```
Rank a DataFrame over rows:
```
df_13.rank()
```
Rank a DataFrame over columns:
```
df_13.rank(axis=1)
```
## 9. Axis Indexes with Duplicate Values
Labels do not have to be unique in Pandas:
```
ser_12 = Series(range(5), index=['foo', 'foo', 'bar', 'bar', 'baz'])
ser_12
ser_12.index.is_unique
```
Select Series elements:
```
ser_12['foo']
```
Select DataFrame elements:
```
df_14 = DataFrame(np.random.randn(5, 4),
index=['foo', 'foo', 'bar', 'bar', 'baz'])
df_14
df_14.ix['bar']
```
## 10. Summarizing and Computing Descriptive Statistics
Unlike NumPy arrays, Pandas descriptive statistics automatically exclude missing data. NaN values are excluded unless the entire row or column is NA.
```
df_6
df_6.sum()
```
Sum over the rows:
```
df_6.sum(axis=1)
```
Account for NaNs:
```
df_6.sum(axis=1, skipna=False)
```
## 11. Cleaning Data
* Replace
* Drop
* Concatenate
```
from pandas import Series, DataFrame
import pandas as pd
```
Setup a DataFrame:
```
data_1 = {'state' : ['VA', 'VA', 'VA', 'MD', 'MD'],
'year' : [2012, 2013, 2014, 2014, 2015],
'population' : [5.0, 5.1, 5.2, 4.0, 4.1]}
df_15 = DataFrame(data_1)
df_15
```
### 11.1 Replace
Replace all occurrences of a string with another string, in place (no copy):
```
df_15.replace('VA', 'VIRGINIA', inplace=True)
df_15
```
In a specified column, replace all occurrences of a string with another string, in place (no copy):
```
df_15.replace({'state' : { 'MD' : 'MARYLAND' }}, inplace=True)
df_15
```
### 11.2 Drop
Drop the 'population' column and return a copy of the DataFrame:
```
df_2 = df_15.drop('population', axis=1)
df_2
```
### 11.3 Concatenate
Concatenate two DataFrames:
```
data_2 = {'state' : ['NY', 'NY', 'NY', 'FL', 'FL'],
'year' : [2012, 2013, 2014, 2014, 2015],
'population' : [6.0, 6.1, 6.2, 3.0, 3.1]}
df_3 = DataFrame(data_2)
df_3
df_4 = pd.concat([df_15, df_3])
df_4
```
## 12. Input and Output
* Reading
* Writing
```
from pandas import Series, DataFrame
import pandas as pd
```
### 12.1 Reading
Read data from a CSV file into a DataFrame (use sep='\t' for TSV):
```
!pip install wget
import wget
link_to_data = 'https://github.com/tulip-lab/sit742/raw/master/Jupyter/data/ozone.csv'
DataSet = wget.download(link_to_data)
df_1 = pd.read_csv("ozone.csv")
```
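As a hypothetical illustration of the `sep` argument mentioned above (there is no TSV file in this dataset, so the filename here is a placeholder):
```
# Hypothetical: reading a tab-separated file; 'data.tsv' is a placeholder name.
df_tsv = pd.read_csv('data.tsv', sep='\t')
```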
Get a summary of the DataFrame:
```
df_1.describe()
```
List the first five rows of the DataFrame:
```
df_1.head()
```
### 12.2 Writing
Create a copy of the CSV file, encoded in UTF-8, omitting the index and header labels:
```
df_1.to_csv('ozone_copy.csv',
encoding='utf-8',
index=False,
header=False)
```
View the data directory:
```
!ls -l ./
```
|
github_jupyter
|
# You can review the Week 1 lab materials to recall what a module is and how to import one.
from pandas import Series, DataFrame
import pandas as pd
import numpy as np
ser_1 = Series([1, 1, 2, -3, -5, 8, 13])
ser_1
ser_1.values
ser_1.index
ser_2 = Series([1, 1, 2, -3, -5], index=['a', 'b', 'c', 'd', 'e'])
ser_2
ser_2[4] == ser_2['e']
ser_2[['c', 'a', 'b']]
ser_2[ser_2 > 0]
ser_2 * 2
import numpy as np
#The np.exp() is used to calculate the exponential of all elements in the input array
np.exp(ser_2)
dict_1 = {'foo' : 100, 'bar' : 200, 'baz' : 300}
ser_3 = Series(dict_1)
ser_3
index = ['foo', 'bar', 'baz', 'qux']
ser_4 = Series(dict_1, index=index)
ser_4
pd.isnull(ser_4)
ser_4.isnull()
ser_3 + ser_4
ser_4.name = 'foobarbazqux'
ser_4.index.name = 'label'
ser_4
ser_4.index = ['fo', 'br', 'bz', 'qx']
ser_4
data_1 = {'state' : ['VA', 'VA', 'VA', 'MD', 'MD'],
'year' : [2012, 2013, 2014, 2014, 2015],
'pop' : [5.0, 5.1, 5.2, 4.0, 4.1]}
df_1 = DataFrame(data_1)
df_1
df_2 = DataFrame(data_1, columns=['year', 'state', 'pop'])
df_2
df_3 = DataFrame(data_1, columns=['year', 'state', 'pop', 'unempl'])
df_3
df_3['state']
df_3.year
df_3.ix[0]
df_3['unempl'] = np.arange(5)
df_3
unempl = Series([6.0, 6.0, 6.1], index=[2, 3, 4])
df_3['unempl'] = unempl
df_3
df_3['state_dup'] = df_3['state']
df_3
del df_3['state_dup']
df_3
pop = {'VA' : {2013 : 5.1, 2014 : 5.2},
'MD' : {2014 : 4.0, 2015 : 4.1}}
df_4 = DataFrame(pop)
df_4
df_4.T
data_2 = {'VA' : df_4['VA'][1:],
'MD' : df_4['MD'][2:]}
df_5 = DataFrame(data_2)
df_5
df_5.index.name = 'year'
df_5
df_5.columns.name = 'state'
df_5
df_5.values
df_3.values
df_3
df_3.reindex(list(reversed(range(0, 6))))
df_3.reindex(range(6), fill_value=0)
ser_5 = Series(['foo', 'bar', 'baz'], index=[0, 2, 4])
ser_5.reindex(range(5), method='ffill')
ser_5.reindex(range(5), method='bfill')
df_3.reindex(columns=['state', 'pop', 'unempl', 'year'])
df_3.reindex(index=list(reversed(range(0, 6))),
fill_value=0,
columns=['state', 'pop', 'unempl', 'year'])
df_6 = df_3.ix[range(0, 7), ['state', 'pop', 'unempl', 'year']]
df_6
df_7 = df_6.drop([0, 1])
df_7
df_7 = df_7.drop('unempl', axis=1)
df_7
ser_2
ser_2[0] == ser_2['a']
ser_2[1:4]
ser_2[['b', 'c', 'd']]
ser_2[ser_2 > 0]
ser_2['a':'b']
ser_2['a':'b'] = 0
ser_2
df_6
df_6[['pop', 'unempl']]
df_6[:2]
df_6[df_6['pop'] > 5]
df_6 > 5
df_6[df_6 > 5]
df_6.ix[2:3]
df_6.ix[0:2, 'pop']
df_6.ix[df_6.unempl > 5.0]
np.random.seed(0)
ser_6 = Series(np.random.randn(5),
index=['a', 'b', 'c', 'd', 'e'])
ser_6
np.random.seed(1)
ser_7 = Series(np.random.randn(5),
index=['a', 'c', 'e', 'f', 'g'])
ser_7
ser_6 + ser_7
ser_6.add(ser_7, fill_value=0)
np.random.seed(0)
df_8 = DataFrame(np.random.rand(9).reshape((3, 3)),
columns=['a', 'b', 'c'])
df_8
np.random.seed(1)
df_9 = DataFrame(np.random.rand(9).reshape((3, 3)),
columns=['b', 'c', 'd'])
df_9
df_8 + df_9
df_10 = df_8.add(df_9, fill_value=0)
df_10
ser_8 = df_10.ix[0]
df_11 = df_10 - ser_8
df_11
ser_9 = Series(range(3), index=['a', 'd', 'e'])
ser_9
df_11 - ser_9
df_10
ser_10 = Series([100, 200, 300])
ser_10
df_10.sub(ser_10, axis=0)
df_11 = np.abs(df_11)
df_11
func_1 = lambda x: x.max() - x.min()
df_11.apply(func_1)
df_11.apply(func_1, axis=1)
func_2 = lambda x: Series([x.min(), x.max()], index=['min', 'max'])
df_11.apply(func_2)
func_3 = lambda x: '%.2f' %x
df_11.applymap(func_3)
df_11['a'].map(func_3)
ser_4
ser_4.sort_index()
ser_4.sort_values()
df_12 = DataFrame(np.arange(12).reshape((3, 4)),
index=['three', 'one', 'two'],
columns=['c', 'a', 'b', 'd'])
df_12
df_12.sort_index()
df_12.sort_index(axis=1, ascending=False)
df_12.sort_values(by=['d', 'c'])
ser_11 = Series([7, -5, 7, 4, 2, 0, 4, 7])
ser_11 = ser_11.sort_values()
ser_11
ser_11.rank()
ser_11.rank(method='first')
ser_11.rank(ascending=False, method='max')
df_13 = DataFrame({'foo' : [7, -5, 7, 4, 2, 0, 4, 7],
'bar' : [-5, 4, 2, 0, 4, 7, 7, 8],
'baz' : [-1, 2, 3, 0, 5, 9, 9, 5]})
df_13
df_13.rank()
df_13.rank(axis=1)
ser_12 = Series(range(5), index=['foo', 'foo', 'bar', 'bar', 'baz'])
ser_12
ser_12.index.is_unique
ser_12['foo']
df_14 = DataFrame(np.random.randn(5, 4),
index=['foo', 'foo', 'bar', 'bar', 'baz'])
df_14
df_14.ix['bar']
df_6
df_6.sum()
df_6.sum(axis=1)
df_6.sum(axis=1, skipna=False)
from pandas import Series, DataFrame
import pandas as pd
data_1 = {'state' : ['VA', 'VA', 'VA', 'MD', 'MD'],
'year' : [2012, 2013, 2014, 2014, 2015],
'population' : [5.0, 5.1, 5.2, 4.0, 4.1]}
df_15 = DataFrame(data_1)
df_15
df_15.replace('VA', 'VIRGINIA', inplace=True)
df_15
df_15.replace({'state' : { 'MD' : 'MARYLAND' }}, inplace=True)
df_15
df_2 = df_15.drop('population', axis=1)
df_2
data_2 = {'state' : ['NY', 'NY', 'NY', 'FL', 'FL'],
'year' : [2012, 2013, 2014, 2014, 2015],
'population' : [6.0, 6.1, 6.2, 3.0, 3.1]}
df_3 = DataFrame(data_2)
df_3
df_4 = pd.concat([df_15, df_3])
df_4
from pandas import Series, DataFrame
import pandas as pd
!pip install wget
import wget
link_to_data = 'https://github.com/tulip-lab/sit742/raw/master/Jupyter/data/ozone.csv'
DataSet = wget.download(link_to_data)
df_1 = pd.read_csv("ozone.csv")
df_1.describe()
df_1.head()
df_1.to_csv('ozone_copy.csv',
encoding='utf-8',
index=False,
header=False)
!ls -l ./
| 0.426322 | 0.969324 |
# README
---
## Requirements
1. Docker
1. Docker Compose
1. A copy of the code (get the [source .zip file or fork the repository](https://github.com/pr3d4t0r/trackodometer))
1. Basic Python knowledge
1. Access to the file system to put the data files
---
## Running the trackodometer locally
1. Open a terminal
1. Go to the directory where you copied the code
1. Execute this command: `./run`
1. Execute this command: `docker logs trackodometer_cime_dev` - get the identification token; it's the long string of alphanumeric characters at the bottom of the screen
1. Open this in your browser: http://localhost:8809/lab
1. Set your own access password to something you can remember; use the token from the previous step for validation
1. That's it!
From here, you can go straight to the [Tracking Competition Scoring](http://localhost:8809/notebooks/scoring.ipynb) tool and start working with FlySight data. Check out the [How to use JupyterLab video](https://www.youtube.com/watch?v=A5YyoCKxEOU) if you're a developer and want to learn the basic features of the environment.
---
## Using Git from within the environment
This environment supports using Git commands from within the running Docker container, or from the host environment, against the same repository.
### Configuration
1. Ensure that the `~/.ssh` directory exists
```
%sx if [[ -d ".ssh" ]]; then echo ".ssh directory OK - continue"; else echo "Run: ssh-keygen -t ed25519"; fi
```
2. Add your public key to GitHub under _yourAccount/Settings/SSH and GPG keys_
3. Fetch the latest branches and state from GitHub `origin`
```
%sx git fetch 2> "/dev/null" ; [[ "$?" = "0" ]] || echo "Unable to fetch - your account is not authorized to fetch or pull"
```
4. List all branches and continue using Git as usual
```
%sx git branch -a
```
---
## Vim and NERDTree support
A complete command line development environment for Python ships with this project. Vim is the default programming editor, available in any terminal. [NERDTree](https://github.com/preservim/nerdtree) simplifies file system operations.
### Check NERDTree availability
```
%sx if (test -d "$HOME/.vim/pack/vendor/start/nerdtree"); then echo "NERDTree is installed and ready"; else echo "NERDTree is NOT installed"; fi
```
### Install NERDTree
```
%sx git clone https://github.com/preservim/nerdtree.git ~/.vim/pack/vendor/start/nerdtree
%sx vim -u NONE -c "helptags ~/.vim/pack/vendor/start/nerdtree/doc" -c q
```
---
### nbstripout - strips all output from notebooks before a commit
This only needs to run once per repository. The configuration file already exists in `./work/.pre-commit-config.yaml`; run the command below to avoid having to clear all the notebooks by hand.
```
!! [[ -n $(which pre-commit) ]] && pre-commit install
```
---
© pr3d4t0r Speed Skydiving Team - [BSD-3 license](https://github.com/pr3d4t0r/trackodometer/blob/master/LICENSE)
|
github_jupyter
|
%sx if [[ -d ".ssh" ]]; then echo ".ssh directory OK - continue"; else echo "Run: ssh-keygen -t ed25519"; fi
%sx git fetch 2> "/dev/null" ; [[ "$?" = "0" ]] || echo "Unable to fetch - your account is not authorized to fetch or pull"
%sx git branch -a
%sx if (test -d "$HOME/.vim/pack/vendor/start/nerdtree"); then echo "NERDTree is installed and ready"; else echo "NERDTree is NOT installed"; fi
%sx git clone https://github.com/preservim/nerdtree.git ~/.vim/pack/vendor/start/nerdtree
%sx vim -u NONE -c "helptags ~/.vim/pack/vendor/start/nerdtree/doc" -c q
!! [[ -n $(which pre-commit) ]] && pre-commit install
| 0.114294 | 0.798029 |
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from matplotlib import style
import matplotlib.ticker as ticker
import seaborn as sns
from sklearn.datasets import load_boston
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.metrics import confusion_matrix
from sklearn.metrics import plot_confusion_matrix
from sklearn.metrics import classification_report
from sklearn.metrics import f1_score
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import train_test_split
from sklearn.model_selection import RepeatedKFold
from sklearn.model_selection import GridSearchCV
from sklearn.model_selection import ParameterGrid
from sklearn.inspection import permutation_importance
import multiprocessing
labels = pd.read_csv('../../csv/train_labels.csv')
labels.head()
values = pd.read_csv('../../csv/train_values.csv')
values.loc[values["geo_level_2_id"] == 1]
values.isnull().values.any()
labels.isnull().values.any()
values.dtypes
values["building_id"].count() == values["building_id"].drop_duplicates().count()
values.info()
to_be_categorized = ["land_surface_condition", "foundation_type", "roof_type",\
"position", "ground_floor_type", "other_floor_type",\
"plan_configuration", "legal_ownership_status"]
for row in to_be_categorized:
values[row] = values[row].astype("category")
values.info()
datatypes = dict(values.dtypes)
for row in values.columns:
if datatypes[row] != "int64" and datatypes[row] != "int32" and \
datatypes[row] != "int16" and datatypes[row] != "int8":
continue
if values[row].nlargest(1).item() > 32767 and values[row].nlargest(1).item() < 2**31:
values[row] = values[row].astype(np.int32)
elif values[row].nlargest(1).item() > 127:
values[row] = values[row].astype(np.int16)
else:
values[row] = values[row].astype(np.int8)
values.info()
labels.info()
labels["building_id"] = labels["building_id"].astype(np.int32)
labels["damage_grade"] = labels["damage_grade"].astype(np.int8)
labels.info()
```
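As a side note, the manual downcasting loop above assumes non-negative values that fit in 32 bits. A minimal alternative sketch (not what this notebook uses) lets pandas pick the smallest integer type per column:
```
# Hypothetical alternative to the manual threshold checks above.
for col in values.columns:
    if pd.api.types.is_integer_dtype(values[col]):
        values[col] = pd.to_numeric(values[col], downcast='integer')
values.info()
```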
# New Model
```
important_values = values\
.merge(labels, on="building_id")
important_values.drop(columns=["building_id"], inplace = True)
important_values["geo_level_1_id"] = important_values["geo_level_1_id"].astype("category")
important_values
important_values.shape
X_train, X_test, y_train, y_test = train_test_split(important_values.drop(columns = 'damage_grade'),
important_values['damage_grade'], test_size = 0.2, random_state = 123)
from sklearn.utils import class_weight
# Note: these sample weights are computed for reference but are not passed to
# fit(); the model below relies on class_weight='balanced' instead.
classes_weights = class_weight.compute_sample_weight(
    class_weight='balanced',
    y=y_train
)
classes_weights
#OneHotEncoding
def encode_and_bind(original_dataframe, feature_to_encode):
dummies = pd.get_dummies(original_dataframe[[feature_to_encode]])
res = pd.concat([original_dataframe, dummies], axis=1)
res = res.drop([feature_to_encode], axis=1)
return(res)
features_to_encode = ["geo_level_1_id", "land_surface_condition", "foundation_type", "roof_type",\
"position", "ground_floor_type", "other_floor_type",\
"plan_configuration", "legal_ownership_status"]
for feature in features_to_encode:
X_train = encode_and_bind(X_train, feature)
X_test = encode_and_bind(X_test, feature)
X_train
# # Search for the best values of the three parameters listed below.
# max_features = [35, 40, 45]
# min_samples_split = [10, 15, 20]
# min_impurity_decrease = [0.0, 0.01, 0.025, 0.05, 0.1]
# hyperF = {'min_impurity_decrease': min_impurity_decrease,
# 'max_features': max_features,
# 'min_samples_split': min_samples_split}
# gridF = GridSearchCV(estimator = RandomForestClassifier(random_state = 123,
# n_estimators = 150,
# max_depth = None,
# min_samples_leaf = 1,
# criterion = "gini"),
# scoring = 'f1_micro',
# param_grid = hyperF,
# cv = 3,
# verbose = 1,
# n_jobs = -1)
# bestF = gridF.fit(X_train, y_train)
# res = pd.DataFrame(bestF.cv_results_)
# res
# Use the best parameters found by the GridSearch
rf_model = RandomForestClassifier(n_estimators = 150,
max_depth = None,
max_features = 45,
min_samples_split = 15,
min_samples_leaf = 1,
criterion = "gini",
verbose=True,
class_weight = 'balanced')
rf_model.fit(X_train, y_train)
rf_model.score(X_train, y_train)
# Compute the F1 score on the held-out test split.
y_preds = rf_model.predict(X_test)
f1_score(y_test, y_preds, average='micro')
rf_model.feature_importances_
test_values = pd.read_csv('../../csv/test_values.csv', index_col = "building_id")
test_values
test_values_subset = test_values
test_values_subset["geo_level_1_id"] = test_values_subset["geo_level_1_id"].astype("category")
test_values_subset
def encode_and_bind(original_dataframe, feature_to_encode):
dummies = pd.get_dummies(original_dataframe[[feature_to_encode]])
res = pd.concat([original_dataframe, dummies], axis=1)
res = res.drop([feature_to_encode], axis=1)
return(res)
features_to_encode = ["geo_level_1_id", "land_surface_condition", "foundation_type", "roof_type",\
"position", "ground_floor_type", "other_floor_type",\
"plan_configuration", "legal_ownership_status"]
for feature in features_to_encode:
test_values_subset = encode_and_bind(test_values_subset, feature)
test_values_subset
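# Note (added suggestion, not in the original notebook): one-hot encoding the
# train and test frames separately can produce mismatched dummy columns when a
# category is missing from one of them. A common safeguard is to align the test
# frame to the training columns before predicting, e.g.:
# test_values_subset = test_values_subset.reindex(columns=X_train.columns, fill_value=0)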
# Generate predictions for the test set.
preds = rf_model.predict(test_values_subset)
submission_format = pd.read_csv('../../csv/submission_format.csv', index_col = "building_id")
my_submission = pd.DataFrame(data=preds,
columns=submission_format.columns,
index=submission_format.index)
my_submission.head()
my_submission.to_csv('../../csv/predictions/jf/9/jf-model-9-submission-all-params.csv')
!head ../../csv/predictions/jf/9/jf-model-9-submission-all-params.csv
fig, ax = plt.subplots(figsize = (20,20))
plt.bar(X_train.columns, rf_model.feature_importances_)
plt.xlabel("Features")
plt.xticks(rotation = 90)
plt.ylabel("Importance")
plt.show()
df = pd.DataFrame({"column": X_train.columns.to_list(), "importance": rf_model.feature_importances_})
df = df.sort_values("importance", ascending=False).set_index("column")
df.plot(kind='bar', figsize = (20,20), orientation='vertical')
# values['count_floors_pre_eq'].sort_values()
```
|
github_jupyter
|
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from matplotlib import style
import matplotlib.ticker as ticker
import seaborn as sns
from sklearn.datasets import load_boston
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.metrics import confusion_matrix
from sklearn.metrics import plot_confusion_matrix
from sklearn.metrics import classification_report
from sklearn.metrics import f1_score
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import train_test_split
from sklearn.model_selection import RepeatedKFold
from sklearn.model_selection import GridSearchCV
from sklearn.model_selection import ParameterGrid
from sklearn.inspection import permutation_importance
import multiprocessing
labels = pd.read_csv('../../csv/train_labels.csv')
labels.head()
values = pd.read_csv('../../csv/train_values.csv')
values.loc[values["geo_level_2_id"] == 1]
values.isnull().values.any()
labels.isnull().values.any()
values.dtypes
values["building_id"].count() == values["building_id"].drop_duplicates().count()
values.info()
to_be_categorized = ["land_surface_condition", "foundation_type", "roof_type",\
"position", "ground_floor_type", "other_floor_type",\
"plan_configuration", "legal_ownership_status"]
for row in to_be_categorized:
values[row] = values[row].astype("category")
values.info()
datatypes = dict(values.dtypes)
for row in values.columns:
if datatypes[row] != "int64" and datatypes[row] != "int32" and \
datatypes[row] != "int16" and datatypes[row] != "int8":
continue
if values[row].nlargest(1).item() > 32767 and values[row].nlargest(1).item() < 2**31:
values[row] = values[row].astype(np.int32)
elif values[row].nlargest(1).item() > 127:
values[row] = values[row].astype(np.int16)
else:
values[row] = values[row].astype(np.int8)
values.info()
labels.info()
labels["building_id"] = labels["building_id"].astype(np.int32)
labels["damage_grade"] = labels["damage_grade"].astype(np.int8)
labels.info()
important_values = values\
.merge(labels, on="building_id")
important_values.drop(columns=["building_id"], inplace = True)
important_values["geo_level_1_id"] = important_values["geo_level_1_id"].astype("category")
important_values
important_values.shape
X_train, X_test, y_train, y_test = train_test_split(important_values.drop(columns = 'damage_grade'),
important_values['damage_grade'], test_size = 0.2, random_state = 123)
from sklearn.utils import class_weight
classes_weights = class_weight.compute_sample_weight(
class_weight='balanced',
y=y_train
)
classes_weights
#OneHotEncoding
def encode_and_bind(original_dataframe, feature_to_encode):
dummies = pd.get_dummies(original_dataframe[[feature_to_encode]])
res = pd.concat([original_dataframe, dummies], axis=1)
res = res.drop([feature_to_encode], axis=1)
return(res)
features_to_encode = ["geo_level_1_id", "land_surface_condition", "foundation_type", "roof_type",\
"position", "ground_floor_type", "other_floor_type",\
"plan_configuration", "legal_ownership_status"]
for feature in features_to_encode:
X_train = encode_and_bind(X_train, feature)
X_test = encode_and_bind(X_test, feature)
X_train
# # Search for the best values of the three parameters listed below.
# max_features = [35, 40, 45]
# min_samples_split = [10, 15, 20]
# min_impurity_decrease = [0.0, 0.01, 0.025, 0.05, 0.1]
# hyperF = {'min_impurity_decrease': min_impurity_decrease,
# 'max_features': max_features,
# 'min_samples_split': min_samples_split}
# gridF = GridSearchCV(estimator = RandomForestClassifier(random_state = 123,
# n_estimators = 150,
# max_depth = None,
# min_samples_leaf = 1,
# criterion = "gini"),
# scoring = 'f1_micro',
# param_grid = hyperF,
# cv = 3,
# verbose = 1,
# n_jobs = -1)
# bestF = gridF.fit(X_train, y_train)
# res = pd.DataFrame(bestF.cv_results_)
# res
# Use the best parameters found by the GridSearch
rf_model = RandomForestClassifier(n_estimators = 150,
max_depth = None,
max_features = 45,
min_samples_split = 15,
min_samples_leaf = 1,
criterion = "gini",
verbose=True,
class_weight = 'balanced')
rf_model.fit(X_train, y_train)
rf_model.score(X_train, y_train)
# Compute the F1 score on the held-out test split.
y_preds = rf_model.predict(X_test)
f1_score(y_test, y_preds, average='micro')
rf_model.feature_importances_
test_values = pd.read_csv('../../csv/test_values.csv', index_col = "building_id")
test_values
test_values_subset = test_values
test_values_subset["geo_level_1_id"] = test_values_subset["geo_level_1_id"].astype("category")
test_values_subset
def encode_and_bind(original_dataframe, feature_to_encode):
dummies = pd.get_dummies(original_dataframe[[feature_to_encode]])
res = pd.concat([original_dataframe, dummies], axis=1)
res = res.drop([feature_to_encode], axis=1)
return(res)
features_to_encode = ["geo_level_1_id", "land_surface_condition", "foundation_type", "roof_type",\
"position", "ground_floor_type", "other_floor_type",\
"plan_configuration", "legal_ownership_status"]
for feature in features_to_encode:
test_values_subset = encode_and_bind(test_values_subset, feature)
test_values_subset
# Generate predictions for the test set.
preds = rf_model.predict(test_values_subset)
submission_format = pd.read_csv('../../csv/submission_format.csv', index_col = "building_id")
my_submission = pd.DataFrame(data=preds,
columns=submission_format.columns,
index=submission_format.index)
my_submission.head()
my_submission.to_csv('../../csv/predictions/jf/9/jf-model-9-submission-all-params.csv')
!head ../../csv/predictions/jf/9/jf-model-9-submission-all-params.csv
fig, ax = plt.subplots(figsize = (20,20))
plt.bar(X_train.columns, rf_model.feature_importances_)
plt.xlabel("Features")
plt.xticks(rotation = 90)
plt.ylabel("Importance")
plt.show()
df = pd.DataFrame({"column": X_train.columns.to_list(), "importance": rf_model.feature_importances_})
df = df.sort_values("importance", ascending=False).set_index("column")
df.plot(kind='bar', figsize = (20,20), orientation='vertical')
# values['count_floors_pre_eq'].sort_values()
| 0.451568 | 0.762844 |
```
import sys, os
everestPath = os.path.abspath('everest')
if not everestPath in sys.path:
sys.path.insert(0, everestPath)
from everest import window
%matplotlib inline
import numpy as np
import sklearn
from sklearn import preprocessing
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPClassifier
from everest.h5anchor import Reader, Fetch
F = lambda key: Fetch(f"*/{key}")
reader = Reader('obsvisc', '.')
cut = reader[
(F('temperatureField') == '_built_peaskauslu-thoesfthuec') \
& (F('aspect') == 1) & (F('f') == 1) \
# & (F('t') > 0.3) & (F('t') < 0.4)
]
paramkeys = (
'tauRef',
# 'f',
# 'aspect',
# 'etaDelta',
# 'etaRef',
# 'alpha',
# 'H',
# 'flux',
# 'kappa',
)
datakeys = (
# 't',
'dt',
'Nu',
'Nu_freq',
'Nu_min',
'Nu_range',
'VRMS',
'strainRate_outer_av',
'strainRate_outer_min',
'strainRate_outer_range',
'stressAng_outer_av',
'stressAng_outer_min',
'stressAng_outer_range',
'stressRad_outer_av',
'stressRad_outer_min',
'stressRad_outer_range',
'temp_av',
'temp_min',
'temp_range',
'velAng_outer_av',
'velAng_outer_min',
'velAng_outer_range',
'velMag_range',
'visc_av',
'visc_min',
'visc_range',
'yieldFrac'
)
params = reader[cut : paramkeys]
datas = reader[cut : datakeys]
allkeys = tuple([*paramkeys, *datakeys])
X = [[] for _ in range(len(allkeys))]
y = [[] for _ in range(len(allkeys))]
for k, subdatas in sorted(datas.items()):
subparams = params[k]
if not len(set([len(d) for d in subdatas])) == 1:
continue
length = len(subdatas[0])
if not length:
continue
ps = [np.full(length, p) for p in subparams]
allds = [*ps, *subdatas]
for i, d in enumerate(allds):
X[i].extend(d[:-2])
y[i].extend(d[1:-1])
X = np.array(X).T
y = np.array(y).T
reg = LinearRegression().fit(X, y)
def predict_future_Nu(sampleno, n = 300):
    # Roll the fitted linear map forward autoregressively: feed each predicted
    # state back in as the next input and record the Nu component at each step.
    Nuk = allkeys.index('Nu')
    frame = X[sampleno]
Nus = [frame[Nuk],]
for i in range(n):
frame = reg.predict([frame,])[0]
Nus.append(frame[Nuk])
return Nus
canvas = window.Canvas(size = (18, 12))
ax = canvas.make_ax()
for _ in range(100):
i = np.random.randint(X.shape[0])
Nu = predict_future_Nu(i)
t = np.arange(len(Nu))
ax.line(t, Nu, alpha = 0.5)
canvas.show()
# scaler = preprocessing.StandardScaler()
# tx = scaler.fit_transform(x, y)
# clf = MLPClassifier().fit(tx, y)
```
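The commented-out scaler at the end hints at standardizing the inputs; a minimal sketch of wiring that into the regression, assuming the `X` and `y` arrays built above (an illustration only, not part of the original analysis):
```
# Sketch: standardize features before the linear fit.
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LinearRegression

scaled_reg = make_pipeline(StandardScaler(), LinearRegression())
scaled_reg.fit(X, y)
print(scaled_reg.score(X, y))  # R^2 on the training data
```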
|
github_jupyter
|
import sys, os
everestPath = os.path.abspath('everest')
if not everestPath in sys.path:
sys.path.insert(0, everestPath)
from everest import window
%matplotlib inline
import numpy as np
import sklearn
from sklearn import preprocessing
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPClassifier
from everest.h5anchor import Reader, Fetch
F = lambda key: Fetch(f"*/{key}")
reader = Reader('obsvisc', '.')
cut = reader[
(F('temperatureField') == '_built_peaskauslu-thoesfthuec') \
& (F('aspect') == 1) & (F('f') == 1) \
# & (F('t') > 0.3) & (F('t') < 0.4)
]
paramkeys = (
'tauRef',
# 'f',
# 'aspect',
# 'etaDelta',
# 'etaRef',
# 'alpha',
# 'H',
# 'flux',
# 'kappa',
)
datakeys = (
# 't',
'dt',
'Nu',
'Nu_freq',
'Nu_min',
'Nu_range',
'VRMS',
'strainRate_outer_av',
'strainRate_outer_min',
'strainRate_outer_range',
'stressAng_outer_av',
'stressAng_outer_min',
'stressAng_outer_range',
'stressRad_outer_av',
'stressRad_outer_min',
'stressRad_outer_range',
'temp_av',
'temp_min',
'temp_range',
'velAng_outer_av',
'velAng_outer_min',
'velAng_outer_range',
'velMag_range',
'visc_av',
'visc_min',
'visc_range',
'yieldFrac'
)
params = reader[cut : paramkeys]
datas = reader[cut : datakeys]
allkeys = tuple([*paramkeys, *datakeys])
X = [[] for _ in range(len(allkeys))]
y = [[] for _ in range(len(allkeys))]
for k, subdatas in sorted(datas.items()):
subparams = params[k]
if not len(set([len(d) for d in subdatas])) == 1:
continue
length = len(subdatas[0])
if not length:
continue
ps = [np.full(length, p) for p in subparams]
allds = [*ps, *subdatas]
for i, d in enumerate(allds):
X[i].extend(d[:-2])
y[i].extend(d[1:-1])
X = np.array(X).T
y = np.array(y).T
reg = LinearRegression().fit(X, y)
def predict_future_Nu(sampleno, n = 300):
Nuk = allkeys.index('Nu')
frame = X[sampleno]
Nus = [frame[Nuk],]
for i in range(n):
frame = reg.predict([frame,])[0]
Nus.append(frame[Nuk])
return Nus
canvas = window.Canvas(size = (18, 12))
ax = canvas.make_ax()
for _ in range(100):
i = np.random.randint(X.shape[0])
Nu = predict_future_Nu(i)
t = np.arange(len(Nu))
ax.line(t, Nu, alpha = 0.5)
canvas.show()
# scaler = preprocessing.StandardScaler()
# tx = scaler.fit_transform(x, y)
# clf = MLPClassifier().fit(tx, y)
| 0.260954 | 0.256518 |
```
from feos.si import *
from feos.pcsaft import *
from feos.pcsaft.eos import *
import matplotlib.pyplot as plt
```
## Ideal mixture
```
params = PcSaftParameters.from_json(['hexane', 'octane'], 'parameters.json')
saft = PcSaft(params)
dia_p = PhaseDiagramBinary.new_pxy(saft, 300*KELVIN)
dia_t = PhaseDiagramBinary.new_txy(saft, BAR)
f, ax = plt.subplots(1,3,figsize=(20,5))
ax[0].plot(dia_p.liquid_molefracs, dia_p.pressure/BAR, '-k')
ax[0].plot(dia_p.vapor_molefracs, dia_p.pressure/BAR, '-k')
ax[0].set_xlim(0,1)
ax[0].set_xlabel('$x_1,y_1$')
ax[0].set_ylabel('$p$ / bar')
ax[1].plot(dia_t.liquid_molefracs, dia_t.temperature/KELVIN, '-g')
ax[1].plot(dia_t.vapor_molefracs, dia_t.temperature/KELVIN, '-g')
ax[1].set_xlim(0,1)
ax[1].set_xlabel('$x_1,y_1$')
ax[1].set_ylabel('$T$ / K')
ax[2].plot([0,1], [0,1], '--k')
ax[2].plot(dia_t.liquid_molefracs, dia_t.vapor_molefracs, '-g')
ax[2].plot(dia_p.liquid_molefracs, dia_p.vapor_molefracs, '-k')
ax[2].set_xlim(0,1)
ax[2].set_ylim(0,1)
ax[2].set_xlabel('$x_1$')
ax[2].set_ylabel('$y_1$');
```
## Azeotropic mixture
```
params = PcSaftParameters.from_json(['acetone', 'hexane'], 'parameters.json')
saft = PcSaft(params)
dia_p = PhaseDiagramBinary.new_pxy(saft, 300*KELVIN)
dia_t = PhaseDiagramBinary.new_txy(saft, 5*BAR)
f, ax = plt.subplots(1,3,figsize=(20,5))
ax[0].plot(dia_p.liquid_molefracs, dia_p.pressure/BAR, '-k')
ax[0].plot(dia_p.vapor_molefracs, dia_p.pressure/BAR, '-k')
ax[0].set_xlim(0,1)
ax[0].set_xlabel('$x_1,y_1$')
ax[0].set_ylabel('$p$ / bar')
ax[1].plot(dia_t.liquid_molefracs, dia_t.temperature/KELVIN, '-g')
ax[1].plot(dia_t.vapor_molefracs, dia_t.temperature/KELVIN, '-g')
ax[1].set_xlim(0,1)
ax[1].set_xlabel('$x_1,y_1$')
ax[1].set_ylabel('$T$ / K')
ax[2].plot([0,1], [0,1], '--k')
ax[2].plot(dia_t.liquid_molefracs, dia_t.vapor_molefracs, '-g')
ax[2].plot(dia_p.liquid_molefracs, dia_p.vapor_molefracs, '-k')
ax[2].set_xlim(0,1)
ax[2].set_ylim(0,1)
ax[2].set_xlabel('$x_1$')
ax[2].set_ylabel('$y_1$');
```
## Supercritical mixture
```
params = PcSaftParameters.from_json(['carbon dioxide', 'hexane'], 'parameters.json')
saft = PcSaft(params)
T_vec = [350*KELVIN, 450*KELVIN, 500*KELVIN]
c_vec = ['k', 'r', 'b']
dia_p = [PhaseDiagramBinary.new_pxy(saft, T) for T in T_vec]
dia_t = PhaseDiagramBinary.new_txy(saft, 50*BAR)
f, ax = plt.subplots(1,3,figsize=(20,5))
for d,c,T in zip(dia_p, c_vec, T_vec):
ax[0].plot(d.liquid_molefracs, d.pressure/BAR, color=c, label=f'{T}')
ax[0].plot(d.vapor_molefracs, d.pressure/BAR, color=c)
ax[0].set_xlim(0,1)
ax[0].set_xlabel('$x_1,y_1$')
ax[0].set_ylabel('$p$ / bar')
ax[0].legend()
ax[1].plot(dia_t.liquid_molefracs, dia_t.temperature/KELVIN, '-g')
ax[1].plot(dia_t.vapor_molefracs, dia_t.temperature/KELVIN, '-g')
ax[1].set_xlim(0,1)
ax[1].set_xlabel('$x_1,y_1$')
ax[1].set_ylabel('$T$ / K')
ax[2].plot([0,1], [0,1], '--k')
for d,c,T in zip(dia_p, c_vec, T_vec):
ax[2].plot(d.liquid_molefracs, d.vapor_molefracs, color=c, label=f'{T}')
ax[2].plot(dia_t.liquid_molefracs, dia_t.vapor_molefracs, '-g', label=f'{50*BAR}')
ax[2].set_xlim(0,1)
ax[2].set_ylim(0,1)
ax[2].set_xlabel('$x_1$')
ax[2].set_ylabel('$y_1$')
ax[2].legend();
State.critical_point_binary_t(saft, 450.*KELVIN, verbosity=Verbosity.Result)
```
## Liquid-liquid equilibrium
```
params = PcSaftParameters.from_json(['water_np', 'octane'], '20191105_pure_parameters.json')
saft = PcSaft(params)
dia_p = PhaseDiagramBinary.new_pxy_lle(saft, 500*KELVIN, 0.5, BAR, 5000*BAR)
dia_t = PhaseDiagramBinary.new_txy_lle(saft, BAR, 0.5, 300*KELVIN, 364*KELVIN)
f, ax = plt.subplots(1,3,figsize=(20,5))
ax[0].plot(dia_p.liquid_molefracs, dia_p.pressure/BAR, '-k')
ax[0].plot(dia_p.vapor_molefracs, dia_p.pressure/BAR, '-k')
ax[0].set_xlim(0,1)
ax[0].set_xlabel('$x_1,y_1$')
ax[0].set_ylabel('$p$ / bar')
ax[1].plot(dia_t.liquid_molefracs, dia_t.temperature/KELVIN, '-g')
ax[1].plot(dia_t.vapor_molefracs, dia_t.temperature/KELVIN, '-g')
ax[1].set_xlim(0,1)
ax[1].set_xlabel('$x_1,y_1$')
ax[1].set_ylabel('$T$ / K')
ax[2].plot([0,1], [0,1], '--k')
ax[2].plot(dia_t.liquid_molefracs, dia_t.vapor_molefracs, '-g')
ax[2].plot(dia_p.liquid_molefracs, dia_p.vapor_molefracs, '-k')
ax[2].set_xlim(0,1)
ax[2].set_ylim(0,1)
ax[2].set_xlabel('$x_1$')
ax[2].set_ylabel('$y_1$');
```
## Heteroazeotropic mixture
```
params = PcSaftParameters.from_json(['acetone', 'water_np'], '20191105_pure_parameters.json')
saft = PcSaft(params)
dia_p = PhaseDiagramHetero.new_pxy(saft, 450*KELVIN, (0.005, 0.9), 50*BAR, 101)
dia_t = PhaseDiagramHetero.new_txy(saft, BAR, (0.001, 0.99), 273.15*KELVIN, 101)
f, ax = plt.subplots(1,3,figsize=(20,5))
ax[0].plot(dia_p.vle.liquid_molefracs, dia_p.vle.pressure/BAR, '-k')
ax[0].plot(dia_p.vle.vapor_molefracs, dia_p.vle.pressure/BAR, '-k')
ax[0].plot(dia_p.lle.vapor_molefracs, dia_p.lle.pressure/BAR, '-k')
ax[0].plot(dia_p.lle.liquid_molefracs, dia_p.lle.pressure/BAR, '-k')
ax[0].set_xlim(0,1)
ax[0].set_xlabel('$x_1,y_1$')
ax[0].set_ylabel('$p$ / bar')
ax[1].plot(dia_t.vle.liquid_molefracs, dia_t.vle.temperature/KELVIN, '-g')
ax[1].plot(dia_t.vle.vapor_molefracs, dia_t.vle.temperature/KELVIN, '-g')
ax[1].plot(dia_t.lle.liquid_molefracs, dia_t.lle.temperature/KELVIN, '-g')
ax[1].plot(dia_t.lle.vapor_molefracs, dia_t.lle.temperature/KELVIN, '-g')
ax[1].set_xlim(0,1)
ax[1].set_xlabel('$x_1,y_1$')
ax[1].set_ylabel('$T$ / K')
ax[2].plot([0,1], [0,1], '--k')
ax[2].plot(dia_t.vle.liquid_molefracs, dia_t.vle.vapor_molefracs, '-g')
ax[2].plot(dia_p.vle.liquid_molefracs, dia_p.vle.vapor_molefracs, '-k')
ax[2].set_xlim(0,1)
ax[2].set_ylim(0,1)
ax[2].set_xlabel('$x_1$')
ax[2].set_ylabel('$y_1$');
```
## Two associating components
```
params = PcSaftParameters.from_json(['water_np', '1-butanol_n'], '20191105_pure_parameters.json')
saft = PcSaft(params)
dia_p = PhaseDiagramHetero.new_pxy(saft, 350*KELVIN, (0.55, 0.98), BAR, 101)
dia_t = PhaseDiagramHetero.new_txy(saft, BAR, (0.5,0.995), 343.15*KELVIN, 101)
f, ax = plt.subplots(1,3,figsize=(20,5))
ax[0].plot(dia_p.vle.liquid_molefracs, dia_p.vle.pressure/BAR, '-k')
ax[0].plot(dia_p.vle.vapor_molefracs, dia_p.vle.pressure/BAR, '-k')
ax[0].plot(dia_p.lle.vapor_molefracs, dia_p.lle.pressure/BAR, '-k')
ax[0].plot(dia_p.lle.liquid_molefracs, dia_p.lle.pressure/BAR, '-k')
ax[0].set_xlim(0,1)
ax[0].set_xlabel('$x_1,y_1$')
ax[0].set_ylabel('$p$ / bar')
ax[1].plot(dia_t.vle.liquid_molefracs, dia_t.vle.temperature/KELVIN, '-g')
ax[1].plot(dia_t.vle.vapor_molefracs, dia_t.vle.temperature/KELVIN, '-g')
ax[1].plot(dia_t.lle.liquid_molefracs, dia_t.lle.temperature/KELVIN, '-g')
ax[1].plot(dia_t.lle.vapor_molefracs, dia_t.lle.temperature/KELVIN, '-g')
ax[1].set_xlim(0,1)
ax[1].set_xlabel('$x_1,y_1$')
ax[1].set_ylabel('$T$ / K')
ax[2].plot([0,1], [0,1], '--k')
ax[2].plot(dia_t.vle.liquid_molefracs, dia_t.vle.vapor_molefracs, '-g')
ax[2].plot(dia_p.vle.liquid_molefracs, dia_p.vle.vapor_molefracs, '-k')
ax[2].set_xlim(0,1)
ax[2].set_ylim(0,1)
ax[2].set_xlabel('$x_1$')
ax[2].set_ylabel('$y_1$');
```
|
github_jupyter
|
from feos.si import *
from feos.pcsaft import *
from feos.pcsaft.eos import *
import matplotlib.pyplot as plt
params = PcSaftParameters.from_json(['hexane', 'octane'], 'parameters.json')
saft = PcSaft(params)
dia_p = PhaseDiagramBinary.new_pxy(saft, 300*KELVIN)
dia_t = PhaseDiagramBinary.new_txy(saft, BAR)
f, ax = plt.subplots(1,3,figsize=(20,5))
ax[0].plot(dia_p.liquid_molefracs, dia_p.pressure/BAR, '-k')
ax[0].plot(dia_p.vapor_molefracs, dia_p.pressure/BAR, '-k')
ax[0].set_xlim(0,1)
ax[0].set_xlabel('$x_1,y_1$')
ax[0].set_ylabel('$p$ / bar')
ax[1].plot(dia_t.liquid_molefracs, dia_t.temperature/KELVIN, '-g')
ax[1].plot(dia_t.vapor_molefracs, dia_t.temperature/KELVIN, '-g')
ax[1].set_xlim(0,1)
ax[1].set_xlabel('$x_1,y_1$')
ax[1].set_ylabel('$T$ / K')
ax[2].plot([0,1], [0,1], '--k')
ax[2].plot(dia_t.liquid_molefracs, dia_t.vapor_molefracs, '-g')
ax[2].plot(dia_p.liquid_molefracs, dia_p.vapor_molefracs, '-k')
ax[2].set_xlim(0,1)
ax[2].set_ylim(0,1)
ax[2].set_xlabel('$x_1$')
ax[2].set_ylabel('$y_1$');
params = PcSaftParameters.from_json(['acetone', 'hexane'], 'parameters.json')
saft = PcSaft(params)
dia_p = PhaseDiagramBinary.new_pxy(saft, 300*KELVIN)
dia_t = PhaseDiagramBinary.new_txy(saft, 5*BAR)
f, ax = plt.subplots(1,3,figsize=(20,5))
ax[0].plot(dia_p.liquid_molefracs, dia_p.pressure/BAR, '-k')
ax[0].plot(dia_p.vapor_molefracs, dia_p.pressure/BAR, '-k')
ax[0].set_xlim(0,1)
ax[0].set_xlabel('$x_1,y_1$')
ax[0].set_ylabel('$p$ / bar')
ax[1].plot(dia_t.liquid_molefracs, dia_t.temperature/KELVIN, '-g')
ax[1].plot(dia_t.vapor_molefracs, dia_t.temperature/KELVIN, '-g')
ax[1].set_xlim(0,1)
ax[1].set_xlabel('$x_1,y_1$')
ax[1].set_ylabel('$T$ / K')
ax[2].plot([0,1], [0,1], '--k')
ax[2].plot(dia_t.liquid_molefracs, dia_t.vapor_molefracs, '-g')
ax[2].plot(dia_p.liquid_molefracs, dia_p.vapor_molefracs, '-k')
ax[2].set_xlim(0,1)
ax[2].set_ylim(0,1)
ax[2].set_xlabel('$x_1$')
ax[2].set_ylabel('$y_1$');
params = PcSaftParameters.from_json(['carbon dioxide', 'hexane'], 'parameters.json')
saft = PcSaft(params)
T_vec = [350*KELVIN, 450*KELVIN, 500*KELVIN]
c_vec = ['k', 'r', 'b']
dia_p = [PhaseDiagramBinary.new_pxy(saft, T) for T in T_vec]
dia_t = PhaseDiagramBinary.new_txy(saft, 50*BAR)
f, ax = plt.subplots(1,3,figsize=(20,5))
for d,c,T in zip(dia_p, c_vec, T_vec):
ax[0].plot(d.liquid_molefracs, d.pressure/BAR, color=c, label=f'{T}')
ax[0].plot(d.vapor_molefracs, d.pressure/BAR, color=c)
ax[0].set_xlim(0,1)
ax[0].set_xlabel('$x_1,y_1$')
ax[0].set_ylabel('$p$ / bar')
ax[0].legend()
ax[1].plot(dia_t.liquid_molefracs, dia_t.temperature/KELVIN, '-g')
ax[1].plot(dia_t.vapor_molefracs, dia_t.temperature/KELVIN, '-g')
ax[1].set_xlim(0,1)
ax[1].set_xlabel('$x_1,y_1$')
ax[1].set_ylabel('$T$ / K')
ax[2].plot([0,1], [0,1], '--k')
for d,c,T in zip(dia_p, c_vec, T_vec):
ax[2].plot(d.liquid_molefracs, d.vapor_molefracs, color=c, label=f'{T}')
ax[2].plot(dia_t.liquid_molefracs, dia_t.vapor_molefracs, '-g', label=f'{50*BAR}')
ax[2].set_xlim(0,1)
ax[2].set_ylim(0,1)
ax[2].set_xlabel('$x_1$')
ax[2].set_ylabel('$y_1$')
ax[2].legend();
State.critical_point_binary_t(saft, 450.*KELVIN, verbosity=Verbosity.Result)
params = PcSaftParameters.from_json(['water_np', 'octane'], '20191105_pure_parameters.json')
saft = PcSaft(params)
dia_p = PhaseDiagramBinary.new_pxy_lle(saft, 500*KELVIN, 0.5, BAR, 5000*BAR)
dia_t = PhaseDiagramBinary.new_txy_lle(saft, BAR, 0.5, 300*KELVIN, 364*KELVIN)
f, ax = plt.subplots(1,3,figsize=(20,5))
ax[0].plot(dia_p.liquid_molefracs, dia_p.pressure/BAR, '-k')
ax[0].plot(dia_p.vapor_molefracs, dia_p.pressure/BAR, '-k')
ax[0].set_xlim(0,1)
ax[0].set_xlabel('$x_1,y_1$')
ax[0].set_ylabel('$p$ / bar')
ax[1].plot(dia_t.liquid_molefracs, dia_t.temperature/KELVIN, '-g')
ax[1].plot(dia_t.vapor_molefracs, dia_t.temperature/KELVIN, '-g')
ax[1].set_xlim(0,1)
ax[1].set_xlabel('$x_1,y_1$')
ax[1].set_ylabel('$T$ / K')
ax[2].plot([0,1], [0,1], '--k')
ax[2].plot(dia_t.liquid_molefracs, dia_t.vapor_molefracs, '-g')
ax[2].plot(dia_p.liquid_molefracs, dia_p.vapor_molefracs, '-k')
ax[2].set_xlim(0,1)
ax[2].set_ylim(0,1)
ax[2].set_xlabel('$x_1$')
ax[2].set_ylabel('$y_1$');
params = PcSaftParameters.from_json(['acetone', 'water_np'], '20191105_pure_parameters.json')
saft = PcSaft(params)
dia_p = PhaseDiagramHetero.new_pxy(saft, 450*KELVIN, (0.005, 0.9), 50*BAR, 101)
dia_t = PhaseDiagramHetero.new_txy(saft, BAR, (0.001, 0.99), 273.15*KELVIN, 101)
f, ax = plt.subplots(1,3,figsize=(20,5))
ax[0].plot(dia_p.vle.liquid_molefracs, dia_p.vle.pressure/BAR, '-k')
ax[0].plot(dia_p.vle.vapor_molefracs, dia_p.vle.pressure/BAR, '-k')
ax[0].plot(dia_p.lle.vapor_molefracs, dia_p.lle.pressure/BAR, '-k')
ax[0].plot(dia_p.lle.liquid_molefracs, dia_p.lle.pressure/BAR, '-k')
ax[0].set_xlim(0,1)
ax[0].set_xlabel('$x_1,y_1$')
ax[0].set_ylabel('$p$ / bar')
ax[1].plot(dia_t.vle.liquid_molefracs, dia_t.vle.temperature/KELVIN, '-g')
ax[1].plot(dia_t.vle.vapor_molefracs, dia_t.vle.temperature/KELVIN, '-g')
ax[1].plot(dia_t.lle.liquid_molefracs, dia_t.lle.temperature/KELVIN, '-g')
ax[1].plot(dia_t.lle.vapor_molefracs, dia_t.lle.temperature/KELVIN, '-g')
ax[1].set_xlim(0,1)
ax[1].set_xlabel('$x_1,y_1$')
ax[1].set_ylabel('$T$ / K')
ax[2].plot([0,1], [0,1], '--k')
ax[2].plot(dia_t.vle.liquid_molefracs, dia_t.vle.vapor_molefracs, '-g')
ax[2].plot(dia_p.vle.liquid_molefracs, dia_p.vle.vapor_molefracs, '-k')
ax[2].set_xlim(0,1)
ax[2].set_ylim(0,1)
ax[2].set_xlabel('$x_1$')
ax[2].set_ylabel('$y_1$');
params = PcSaftParameters.from_json(['water_np', '1-butanol_n'], '20191105_pure_parameters.json')
saft = PcSaft(params)
dia_p = PhaseDiagramHetero.new_pxy(saft, 350*KELVIN, (0.55, 0.98), BAR, 101)
dia_t = PhaseDiagramHetero.new_txy(saft, BAR, (0.5,0.995), 343.15*KELVIN, 101)
f, ax = plt.subplots(1,3,figsize=(20,5))
ax[0].plot(dia_p.vle.liquid_molefracs, dia_p.vle.pressure/BAR, '-k')
ax[0].plot(dia_p.vle.vapor_molefracs, dia_p.vle.pressure/BAR, '-k')
ax[0].plot(dia_p.lle.vapor_molefracs, dia_p.lle.pressure/BAR, '-k')
ax[0].plot(dia_p.lle.liquid_molefracs, dia_p.lle.pressure/BAR, '-k')
ax[0].set_xlim(0,1)
ax[0].set_xlabel('$x_1,y_1$')
ax[0].set_ylabel('$p$ / bar')
ax[1].plot(dia_t.vle.liquid_molefracs, dia_t.vle.temperature/KELVIN, '-g')
ax[1].plot(dia_t.vle.vapor_molefracs, dia_t.vle.temperature/KELVIN, '-g')
ax[1].plot(dia_t.lle.liquid_molefracs, dia_t.lle.temperature/KELVIN, '-g')
ax[1].plot(dia_t.lle.vapor_molefracs, dia_t.lle.temperature/KELVIN, '-g')
ax[1].set_xlim(0,1)
ax[1].set_xlabel('$x_1,y_1$')
ax[1].set_ylabel('$T$ / K')
ax[2].plot([0,1], [0,1], '--k')
ax[2].plot(dia_t.vle.liquid_molefracs, dia_t.vle.vapor_molefracs, '-g')
ax[2].plot(dia_p.vle.liquid_molefracs, dia_p.vle.vapor_molefracs, '-k')
ax[2].set_xlim(0,1)
ax[2].set_ylim(0,1)
ax[2].set_xlabel('$x_1$')
ax[2].set_ylabel('$y_1$');
| 0.504639 | 0.775753 |
# HW02: MATLAB
**Yuanxing Cheng, 810925466**
## Q1
$(\textrm{a})$ Write a MATLAB function with inputs $x$ and $y$, and output $z$. Here the inputs $x$ and $y$ are any real numbers, and the output $z$ is determined by the following formula:
$$z=\begin{cases}
1, & \text{ if } \; |x-y|\leq2 \\
0, & \text{ if } \; |x-y|>2
\end{cases}$$
$Answer$
```
function [ z ] = One_a( x,y )
%function for question one (a)
if(abs(x-y) <= 2)
z=1;
end
if(abs(x-y)>2)
z=0;
end
end
```
$(\textrm{b})$ Write another MATLAB function with input $n$, and output $A$. Here $n$ is a positive integer, and $A$ is a $n \times n$ matrix with entries $A_{i,j} = z(i, j)$, where $z(i, j)$ represents the output value $z$ of the function in Part $(\textrm{a})$ with inputs $i$ and $j$.
$Answer$
```
function [ A ] = One_b( n )
%function for question one (b)
for i=1:n
for j=1:n
A(i,j) = One_a(i,j);
end
end
end
```
$(\textrm{c})$ Print out the $10 \times 10$ matrix $A$ determined in Part $(\textrm{b})$.
```
1 1 1 0 0 0 0 0 0 0
1 1 1 1 0 0 0 0 0 0
1 1 1 1 1 0 0 0 0 0
0 1 1 1 1 1 0 0 0 0
0 0 1 1 1 1 1 0 0 0
0 0 0 1 1 1 1 1 0 0
0 0 0 0 1 1 1 1 1 0
0 0 0 0 0 1 1 1 1 1
0 0 0 0 0 0 1 1 1 1
0 0 0 0 0 0 0 1 1 1
```
## Q2
The Taylor expansion of sine function is
$$\sin \left( x \right) = \sum _{n=1} ^{\infty} \left( -1 \right)^{n-1} \frac{x^{2n-1}} {\left( 2n-1 \right)!} = x - \frac{x^3} {3!} + \frac{x^5} {5!} - \frac{x^7} {7!} + \cdots$$
Plot the sine function $y = \sin \left( x \right)$ on the interval $x \in \left[ -2\pi, 2\pi \right]$, by using the MATLAB command `plot`. In the same graph, plot the four functions containing only the first term, only the first three terms, the first five terms, and the first seven terms of the Taylor expansion, respectively.
In order to keep the plots in the same graph, you need to use the MATLAB command `hold on` between plots.
$Answer$
```
clear
clc
x = linspace(-2*pi,2*pi,2000);
y1 = sin(x);
y2 = x;
y3 = x - x.^3/factorial(3) + x.^5/factorial(5);
y4 = x - x.^3/factorial(3) + x.^5/factorial(5) - x.^7/factorial(7) + x.^9/factorial(9);
y5 = x - x.^3/factorial(3) + x.^5/factorial(5) - x.^7/factorial(7) + x.^9/factorial(9)- x.^11/factorial(11) + x.^13/factorial(13);
figure
hold on
plot(x,y1,'LineWidth',2.5)
plot(x,y2,'LineWidth',2)
plot(x,y3,'LineWidth',1.5)
plot(x,y4,'LineWidth',1)
plot(x,y5,'LineWidth',0.5)
legend('\fontname{Courier New} \bf sin(x)','\fontname{Courier New} \bf First term','\fontname{Courier New} \bf First three term','\fontname{Courier New} \bf First five term','\fontname{Courier New} \bf First seven term')
axis([-7 7 -inf inf])
```

## Q3
The derivative of a function $f(x)$ can be estimated by the average rate of change of the function, $i.e.$,
$$f'(x) \approx \frac{f(x + h) - f(x)} {h}$$
Estimate the derivative of the function $f(x) = e^x$ at $x = 1$ by using the above formula for $h = 10^{-1}, 10^{-2}, 10^{-3}, \dots, 10^{-8}$ respectively.
Plot the error of this approximation, $\left|f'(x) - \frac{f(x + h) - f(x)} {h}\right|$, with respect to $h$, by connecting dots corresponding to $h = 10^{-1}, 10^{-2}, 10^{-3}, \dots, 10^{-8}$. The horizontal axis in
the graph should correspond to $h$, and the vertical axis correspond to the error.
$Answer$
```
clear
clc
syms symx symh
f(symx) = exp(symx)
g(symx) = diff(f)
err(symx,symh) = abs(g(symx) - (f(symx+symh) - f(symx))/symh)
i = 1:8;
h = 10.^(-i);
x=1;
figure
loglog(h,err(x,h),'-s')
grid on
legend('\fontname{Courier New} \bf Err')
ylabel('\fontname{Courier New} \bf Err','Rotation',0,'FontName','Courier New','FontWeight','bold')
xlabel('\fontname{Courier New} \bf h','FontName','Courier New','FontWeight','bold')
```

# MNIST-Overfit-Dropout
```
# coding: utf-8
import sys, os
import numpy as np
import matplotlib.pyplot as plt
import math
sys.path.append(os.pardir)
from deeplink.mnist import *
from deeplink.networks import *
```
## Multilayer Neural Network Model (Six Hidden Layers) and Learning/Validation
### Multi Layer Model Class
```
class MultiLayerNetExtended(MultiLayerNet):
def __init__(self, input_size, hidden_size_list, output_size, activation='ReLU', initializer='N2',
optimizer='AdaGrad', learning_rate=0.01,
use_batch_normalization=False,
use_weight_decay=False, weight_decay_lambda=0.0,
use_dropout=False, dropout_ratio_list=None):
self.input_size = input_size
self.output_size = output_size
self.hidden_size_list = hidden_size_list
self.hidden_layer_num = len(hidden_size_list)
self.use_batch_normalization = use_batch_normalization
self.use_weight_decay = use_weight_decay
self.weight_decay_lambda = weight_decay_lambda
self.use_dropout = use_dropout
self.dropout_ratio_list = dropout_ratio_list
# Weight Initialization
self.params = {}
self.weight_initialization(initializer)
# Layering
self.layers = OrderedDict()
self.last_layer = None
self.layering(activation)
# Optimization Method
self.optimizer = optimizers[optimizer](lr=learning_rate)
def weight_initialization(self, initializer):
params_size_list = [self.input_size] + self.hidden_size_list + [self.output_size]
initializer_obj = initializers[initializer](self.params,
params_size_list,
self.use_batch_normalization)
initializer_obj.initialize_params();
def layering(self, activation):
for idx in range(1, self.hidden_layer_num + 1):
self.layers['Affine' + str(idx)] = Affine(self.params['W' + str(idx)], self.params['b' + str(idx)])
if self.use_batch_normalization:
self.layers['Batch_Normalization' + str(idx)] = BatchNormalization(self.params['gamma' + str(idx)],
self.params['beta' + str(idx)])
self.layers['Activation' + str(idx)] = activation_layers[activation]()
if self.use_dropout:
self.layers['Dropout' + str(idx)] = Dropout(self.dropout_ratio_list[idx - 1])
idx = self.hidden_layer_num + 1
self.layers['Affine' + str(idx)] = Affine(self.params['W' + str(idx)], self.params['b' + str(idx)])
self.last_layer = SoftmaxWithCrossEntropyLoss()
def predict(self, x, is_train=False):
for key, layer in self.layers.items():
if "Batch_Normalization" in key or "Dropout" in key:
x = layer.forward(x, is_train)
else:
x = layer.forward(x)
return x
def loss(self, x, t, is_train=False):
y = self.predict(x, is_train)
if self.use_weight_decay:
weight_decay = 0.0
for idx in range(1, self.hidden_layer_num + 2):
W = self.params['W' + str(idx)]
weight_decay += 0.5 * self.weight_decay_lambda * np.sum(W**2)
return self.last_layer.forward(y, t) + weight_decay
else:
return self.last_layer.forward(y, t)
def accuracy(self, x, t):
y = self.predict(x, is_train=False)
y = np.argmax(y, axis=1)
if t.ndim != 1 : t = np.argmax(t, axis=1)
accuracy = np.sum(y == t) / float(x.shape[0])
return accuracy
def backpropagation_gradient(self, x, t):
# forward
self.loss(x, t, is_train=True)
# backward
din = 1
din = self.last_layer.backward(din)
layers = list(self.layers.values())
layers.reverse()
for layer in layers:
din = layer.backward(din)
grads = {}
for idx in range(1, self.hidden_layer_num + 2):
if self.use_weight_decay:
grads['W' + str(idx)] = self.layers['Affine' + str(idx)].dW + self.weight_decay_lambda * self.params['W' + str(idx)]
else:
grads['W' + str(idx)] = self.layers['Affine' + str(idx)].dW
grads['b' + str(idx)] = self.layers['Affine' + str(idx)].db
if self.use_batch_normalization and idx <= self.hidden_layer_num:
grads['gamma' + str(idx)] = self.layers['Batch_Normalization' + str(idx)].dgamma
grads['beta' + str(idx)] = self.layers['Batch_Normalization' + str(idx)].dbeta
return grads
def learning(self, x_batch, t_batch):
grads = self.backpropagation_gradient(x_batch, t_batch)
self.optimizer.update(self.params, grads)
```
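The `Dropout` layer used above is imported from `deeplink.networks`, which is not shown in this notebook. Purely as an illustration (an assumption about that implementation, following the common textbook formulation), a dropout layer that matches the `forward(x, is_train)` / `backward(dout)` interface called by `predict` and `backpropagation_gradient` could look like this:
```
import numpy as np

class Dropout:
    """Illustrative dropout layer -- an assumption, not the actual deeplink.networks code."""
    def __init__(self, dropout_ratio=0.5):
        self.dropout_ratio = dropout_ratio
        self.mask = None

    def forward(self, x, is_train=False):
        if is_train:
            # Randomly zero each unit with probability dropout_ratio during training
            self.mask = np.random.rand(*x.shape) > self.dropout_ratio
            return x * self.mask
        # At inference, keep all units but scale by the keep probability
        return x * (1.0 - self.dropout_ratio)

    def backward(self, dout):
        # Gradients flow only through the units that were kept in the forward pass
        return dout * self.mask
```
With `dropout_ratio_list=[0.5, 0.5, 0.5, 0.5, 0.5, 0.5]`, half of the activations in each hidden layer are zeroed at every training step, which is what limits memorization of the 200-sample training set in the comparison below.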
### Training and Evaluation
```
data = mnist_data("/Users/yhhan/git/aiclass/0.Professor/data/MNIST_data/.")
(img_train, label_train), (img_validation, label_validation), (img_test, label_test) = data.load_mnist(flatten=True, normalize=True, one_hot_label=True)
# To induce overfitting, drastically reduce the number of training samples
img_train = img_train[:200]
label_train = label_train[:200]
# To induce overfitting, make the network deep and greatly increase the number of parameters
input_size=784
hidden_layer1_size=128
hidden_layer2_size=128
hidden_layer3_size=128
hidden_layer4_size=128
hidden_layer5_size=128
hidden_layer6_size=128
output_size=10
num_epochs = 200
train_size = img_train.shape[0]
batch_size = 100
learning_rate = 0.1
markers = {"N2, AdaGrad, No_Batch_Norm, No_Weight_Decay, No_dropout": "x", "N2, AdaGrad, No_Batch_Norm, No_Weight_Decay, Dropout": "o"}
networks = {}
train_errors = {}
validation_errors = {}
test_accuracy_values = {}
max_test_accuracy_epoch = {}
max_test_accuracy_value = {}
for key in markers.keys():
if key == "N2, AdaGrad, No_Batch_Norm, No_Weight_Decay, No_dropout":
networks[key] = MultiLayerNetExtended(input_size,
[hidden_layer1_size, hidden_layer2_size, hidden_layer3_size, hidden_layer4_size, hidden_layer5_size, hidden_layer6_size],
output_size,
activation='ReLU',
initializer='N2',
optimizer='AdaGrad', learning_rate=learning_rate,
use_batch_normalization=False,
use_weight_decay=False, weight_decay_lambda=0.0,
use_dropout=False, dropout_ratio_list=[0.0, 0.0, 0.0, 0.0, 0.0, 0.0])
elif key == "N2, AdaGrad, No_Batch_Norm, No_Weight_Decay, Dropout":
networks[key] = MultiLayerNetExtended(input_size,
[hidden_layer1_size, hidden_layer2_size, hidden_layer3_size, hidden_layer4_size, hidden_layer5_size, hidden_layer6_size],
output_size,
activation='ReLU',
initializer='N2',
optimizer='AdaGrad', learning_rate=learning_rate,
use_batch_normalization=False,
use_weight_decay=False, weight_decay_lambda=0.0,
use_dropout=True, dropout_ratio_list=[0.5, 0.5, 0.5, 0.5, 0.5, 0.5])
train_errors[key] = []
validation_errors[key] = []
test_accuracy_values[key] = []
max_test_accuracy_epoch[key] = 0
max_test_accuracy_value[key] = 0.0
epoch_list = []
num_batch = math.ceil(train_size / batch_size)
for i in range(num_epochs):
epoch_list.append(i)
for key in markers.keys():
for k in range(num_batch):
x_batch = img_train[k * batch_size : k * batch_size + batch_size]
t_batch = label_train[k * batch_size : k * batch_size + batch_size]
networks[key].learning(x_batch, t_batch)
train_loss = networks[key].loss(x_batch, t_batch, is_train=True)
train_errors[key].append(train_loss)
validation_loss = networks[key].loss(img_validation, label_validation, is_train=False)
validation_errors[key].append(validation_loss)
test_accuracy = networks[key].accuracy(img_test, label_test)
test_accuracy_values[key].append(test_accuracy)
if test_accuracy > max_test_accuracy_value[key]:
max_test_accuracy_epoch[key] = i
max_test_accuracy_value[key] = test_accuracy
# print("{0:26s}-Epoch:{1:3d}, Train Err.:{2:7.5f}, Validation Err.:{3:7.5f}, Test Accuracy:{4:7.5f}, Max Test Accuracy:{5:7.5f}".format(
# key,
# i,
# train_loss,
# validation_loss,
# test_accuracy,
# max_test_accuracy_value[key]
# ))
print(i, end=", ")
f, axarr = plt.subplots(2, 2, figsize=(20, 12))
for key in markers.keys():
axarr[0, 0].plot(epoch_list[1:], train_errors[key][1:], marker=markers[key], markevery=2, label=key)
axarr[0, 0].set_ylabel('Train - Total Error')
axarr[0, 0].set_xlabel('Epochs')
axarr[0, 0].grid(True)
axarr[0, 0].set_title('Train Error')
axarr[0, 0].legend(loc='upper right')
for key in markers.keys():
axarr[0, 1].plot(epoch_list[1:], validation_errors[key][1:], marker=markers[key], markevery=2, label=key)
axarr[0, 1].set_ylabel('Validation - Total Error')
axarr[0, 1].set_xlabel('Epochs')
axarr[0, 1].grid(True)
axarr[0, 1].set_title('Validation Error')
axarr[0, 1].legend(loc='upper right')
for key in markers.keys():
axarr[1, 0].plot(epoch_list[1:], train_errors[key][1:], marker=markers[key], markevery=2, label=key)
axarr[1, 0].set_ylabel('Train - Total Error')
axarr[1, 0].set_xlabel('Epochs')
axarr[1, 0].grid(True)
axarr[1, 0].set_ylim(0.8, 2.4)
axarr[1, 0].set_title('Train Error (0.8 ~ 2.4)')
axarr[1, 0].legend(loc='upper right')
for key in markers.keys():
axarr[1, 1].plot(epoch_list[1:], validation_errors[key][1:], marker=markers[key], markevery=2, label=key)
axarr[1, 1].set_ylabel('Validation - Total Error')
axarr[1, 1].set_xlabel('Epochs')
axarr[1, 1].grid(True)
axarr[1, 1].set_ylim(2.25, 2.4)
axarr[1, 1].set_title('Validation Error (2.25 ~ 2.4)')
axarr[1, 1].legend(loc='upper right')
f.subplots_adjust(hspace=0.3)
plt.show()
f, axarr = plt.subplots(2, 1, figsize=(15,10))
for key in markers.keys():
axarr[0].plot(epoch_list[1:], test_accuracy_values[key][1:], marker=markers[key], markevery=1, label=key)
axarr[0].set_ylabel('Test Accuracy')
axarr[0].set_xlabel('Epochs')
axarr[0].grid(True)
axarr[0].set_title('Test Accuracy')
axarr[0].legend(loc='lower right')
for key in markers.keys():
axarr[1].plot(epoch_list[1:], test_accuracy_values[key][1:], marker=markers[key], markevery=1, label=key)
axarr[1].set_ylabel('Test Accuracy')
axarr[1].set_xlabel('Epochs')
axarr[1].grid(True)
axarr[1].set_ylim(0.30, 0.50)
axarr[1].set_title('Test Accuracy (0.30 ~ 0.50)')
axarr[1].legend(loc='lower right')
f.subplots_adjust(hspace=0.3)
plt.show()
for key in markers.keys():
print("{0:26s} - Epoch:{1:3d}, Max Test Accuracy: {2:7.5f}".format(key, max_test_accuracy_epoch[key], max_test_accuracy_value[key]))
```
# This notebook compares the outputs from VESIcal to the Shishkina et al. (2014) Calibration dataset.
- This notebook relies on the Excel spreadsheet entitled: "S7_Testing_Shishkina_et_al_2014.xlsx"
- Test 1 compares the experimental pressures in the calibration dataset of Shishkina et al. (2014) for H$_2$O-only experiments to the saturation pressures obtained from VESIcal for the "ShishkinaWater" model.
- Test 2 compares the experimental pressures in the calibration dataset of Shishkina et al. (2014) for CO$_2$-only experiments to the saturation pressures obtained from VESIcal for the "ShishkinaCarbon" model.
- Test 3 compares the experimental pressures for mixed H$_2$O-CO$_2$ bearing fluids presented in Table 2 of the main text to the saturation pressures obtained from VESIcal for the "Shishkina" model.
- Test 4 justifies the approach used in VESIcal, where cation fractions for their Equation 9 are calculated ignoring H$_2$O and CO$_2$.
```
import VESIcal as v
import matplotlib.pyplot as plt
import numpy as np
from IPython.display import display, HTML
import pandas as pd
import matplotlib as mpl
import seaborn as sns
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
import statsmodels.api as sm
from statsmodels.sandbox.regression.predstd import wls_prediction_std
%matplotlib inline
sns.set(style="ticks", context="poster",rc={"grid.linewidth": 1,"xtick.major.width": 1,"ytick.major.width": 1, 'patch.edgecolor': 'black'})
plt.style.use("seaborn-colorblind")
plt.rcParams["font.size"] =12
plt.rcParams["mathtext.default"] = "regular"
plt.rcParams["mathtext.fontset"] = "dejavusans"
plt.rcParams['patch.linewidth'] = 1
plt.rcParams['axes.linewidth'] = 1
plt.rcParams["xtick.direction"] = "in"
plt.rcParams["ytick.direction"] = "in"
plt.rcParams["ytick.direction"] = "in"
plt.rcParams["xtick.major.size"] = 6 # Sets length of ticks
plt.rcParams["ytick.major.size"] = 4 # Sets length of ticks
plt.rcParams["ytick.labelsize"] = 12 # Sets size of numbers on tick marks
plt.rcParams["xtick.labelsize"] = 12 # Sets size of numbers on tick marks
plt.rcParams["axes.titlesize"] = 14 # Overall title
plt.rcParams["axes.labelsize"] = 14 # Axes labels
plt.rcParams["legend.fontsize"]= 14
```
## Test 1 and 2 - comparing saturation pressures to experimental pressures
```
myfile_CO2 = v.BatchFile('S7_Testing_Shishkina_et_al_2014.xlsx', sheet_name='CO2') # Loading Carbon calibration dataset
satPs_wtemps_Shish_CO2= myfile_CO2.calculate_saturation_pressure(temperature="Temp", model='ShishkinaCarbon') # Calculating saturation pressures
myfile_H2O = v.BatchFile('S7_Testing_Shishkina_et_al_2014.xlsx', sheet_name='H2O') # Loading Water calibration dataset
satPs_wtemps_Shish_H2O= myfile_H2O.calculate_saturation_pressure(temperature="Temp", model='ShishkinaWater') # Calculating Saturation pressures
######################## H2O only experiments
# This calculates a linear regression and plots experimental pressures vs. saturation pressures for the Water calibration dataset
X_Test1=satPs_wtemps_Shish_H2O['Press']
Y_Test1=satPs_wtemps_Shish_H2O['SaturationP_bars_VESIcal']
mask_Test1 = (X_Test1>-1) & (Y_Test1>-1) # This gets rid of Nans
X_Test1noNan=X_Test1[mask_Test1].values.reshape(-1, 1)
Y_Test1noNan=Y_Test1[mask_Test1].values.reshape(-1, 1)
lr=LinearRegression()
lr.fit(X_Test1noNan,Y_Test1noNan)
Y_pred_Test1=lr.predict(X_Test1noNan)
fig, (ax1, ax2) = plt.subplots(1,2, figsize=(12,5)) # adjust dimensions of figure here
ax1.plot(X_Test1noNan,Y_pred_Test1, color='red', linewidth=0.5, zorder=1) # This plots the best fit line
ax1.scatter(satPs_wtemps_Shish_H2O['Press'], satPs_wtemps_Shish_H2O['SaturationP_bars_VESIcal'], s=50, edgecolors='k', facecolors='silver', marker='o', zorder=5)
# This bit plots the regression parameters on the graph
I='Intercept= ' + str(np.round(lr.intercept_, 1))[1:-1]
G='Gradient= ' + str(np.round(lr.coef_, 3))[2:-2]
R='R$^2$= ' + str(np.round(r2_score(Y_Test1noNan, Y_pred_Test1), 3))
ax1.text(3000, 1500, R, fontsize=14)
ax1.text(3000, 1000, G, fontsize=14)
ax1.text(3000, 500, I, fontsize=14)
################### CO2 experiments
X_Test2=satPs_wtemps_Shish_CO2['Press']
Y_Test2=satPs_wtemps_Shish_CO2['SaturationP_bars_VESIcal']
mask_Test2 = (X_Test2>-1) & (Y_Test2>-1) # This gets rid of Nans
X_Test2noNan=X_Test2[mask_Test2].values.reshape(-1, 1)
Y_Test2noNan=Y_Test2[mask_Test2].values.reshape(-1, 1)
lr=LinearRegression()
lr.fit(X_Test2noNan,Y_Test2noNan)
Y_pred_Test2=lr.predict(X_Test2noNan)
ax2.plot(X_Test2noNan,Y_pred_Test2, color='red', linewidth=0.5, zorder=1) # This plots the best fit line
ax2.scatter(satPs_wtemps_Shish_CO2['Press'], satPs_wtemps_Shish_CO2['SaturationP_bars_VESIcal'], s=50, edgecolors='k', facecolors='silver', marker='o', zorder=5)
# This bit plots the regression parameters on the graph
I='Intercept= ' + str(np.round(lr.intercept_, 2))[1:-1]
G='Gradient= ' + str(np.round(lr.coef_, 3))[2:-2]
R='R$^2$= ' + str(np.round(r2_score(Y_Test2noNan, Y_pred_Test2), 2))
ax2.text(4000, 500, I, fontsize=14)
ax2.text(4000, 1000, G, fontsize=14)
ax2.text(4000, 1500, R, fontsize=14)
ax1.set_xlabel('Experimental Pressure (bar)', fontsize=14)
ax1.set_ylabel('P$_{Sat}$ VESIcal (bar)', fontsize=14)
ax2.set_xlabel('Experimental Pressure (bar)', fontsize=14)
ax2.set_ylabel('P$_{Sat}$ VESIcal (bar)', fontsize=14)
ax1.set_xticks([0, 2000, 4000, 6000, 8000, 10000])
ax1.set_yticks([0, 2000, 4000, 6000, 8000, 10000])
ax2.set_xticks([0, 2000, 4000, 6000, 8000, 10000])
ax2.set_yticks([0, 2000, 4000, 6000, 8000, 10000])
ax1.set_xlim([-200, 6500])
ax1.set_ylim([-200, 6500])
ax2.set_xlim([-200, 8000])
ax2.set_ylim([-200, 8000])
plt.subplots_adjust(left=0.125, bottom=None, right=0.9, top=None, wspace=0.3, hspace=None)
ax1.text(-150, 6200, 'a)', fontsize=14)
ax2.text(-150, 7600, 'b)', fontsize=14)
ax1.set_title('H$_{2}$O-only', fontsize=14)
ax2.set_title('CO$_2$-only', fontsize=14)
fig.savefig('Shishkina_Test1and2.png', transparent=True)
```
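The regression-and-annotation code above is repeated almost verbatim for every panel in this notebook. Purely as an illustrative refactoring (not part of the original analysis), a small helper could remove most of that duplication:
```
# Illustrative helper only -- not part of the original notebook.
# LinearRegression and r2_score are already imported at the top of this notebook.
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

def fit_and_annotate(ax, x, y, upper_limit=None):
    # Fit y ~ x (ignoring NaNs and, optionally, pressures above upper_limit),
    # draw the fit line and the data, and annotate gradient, intercept and R^2.
    mask = (x > -1) & (y > -1)
    if upper_limit is not None:
        mask = mask & (x < upper_limit)
    X = x[mask].values.reshape(-1, 1)
    Y = y[mask].values.reshape(-1, 1)
    lr = LinearRegression().fit(X, Y)
    Y_pred = lr.predict(X)
    ax.plot(X, Y_pred, color='red', linewidth=0.5, zorder=1)
    ax.scatter(x, y, s=50, edgecolors='k', facecolors='silver', marker='o', zorder=5)
    ax.text(0.6, 0.25, f"R$^2$= {r2_score(Y, Y_pred):.3f}", transform=ax.transAxes, fontsize=14)
    ax.text(0.6, 0.17, f"Gradient= {lr.coef_[0][0]:.3f}", transform=ax.transAxes, fontsize=14)
    ax.text(0.6, 0.09, f"Intercept= {lr.intercept_[0]:.1f}", transform=ax.transAxes, fontsize=14)
    return lr
```
Each panel would then reduce to a single `fit_and_annotate(ax, df['Press'], df['SaturationP_bars_VESIcal'])` call plus the axis labels and limits.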
## Test 3 - Mixed H$_2$O - CO$_2$ experiments from Table 2 in the text.
- We show the regression for experimental pressure vs. saturation pressure calculated in VESIcal for all data, and data with experimental pressures <4000 bars (to remove the most scattered datapoints).
```
myfile_Comb = v.BatchFile('S7_Testing_Shishkina_et_al_2014.xlsx', sheet_name='Table2_Text') # Loads experimental data from Table 2
satPs_wtemps_Shish_Comb= myfile_Comb.calculate_saturation_pressure(temperature="Temp", model='ShishkinaIdealMixing') # Calculates saturation pressures for these compositions + temps
######################## All mixed H2O-CO2 experiments
X_Test3b=satPs_wtemps_Shish_Comb['Press']
Y_Test3b=satPs_wtemps_Shish_Comb['SaturationP_bars_VESIcal']
mask_Test3b = (X_Test3b>-1) & (Y_Test3b>-1) # This gets rid of Nans
X_Test3bnoNan=X_Test3b[mask_Test3b].values.reshape(-1, 1)
Y_Test3bnoNan=Y_Test3b[mask_Test3b].values.reshape(-1, 1)
lr=LinearRegression()
lr.fit(X_Test3bnoNan,Y_Test3bnoNan)
Y_pred_Test3b=lr.predict(X_Test3bnoNan)
fig, (ax1, ax2) = plt.subplots(1,2, figsize=(12,5)) # adjust dimensions of figure here
ax1.plot(X_Test3bnoNan,Y_pred_Test3b, color='red', linewidth=0.5, zorder=1) # This plots the best fit line
ax1.scatter(satPs_wtemps_Shish_Comb['Press'], satPs_wtemps_Shish_Comb['SaturationP_bars_VESIcal'], s=50, edgecolors='k', facecolors='silver', marker='o', zorder=5)
# This bit plots the regression parameters on the graph
I='Intercept= ' + str(np.round(lr.intercept_, 1))[1:-1]
G='Gradient= ' + str(np.round(lr.coef_, 3))[2:-2]
R='R$^2$= ' + str(np.round(r2_score(Y_Test3bnoNan, Y_pred_Test3b), 3))
ax1.text(3000, 1500, R, fontsize=14)
ax1.text(3000, 1000, G, fontsize=14)
ax1.text(3000, 500, I, fontsize=14)
################### Experiments with pressures < 4000 bars
X_Test3=satPs_wtemps_Shish_Comb['Press']
Y_Test3=satPs_wtemps_Shish_Comb['SaturationP_bars_VESIcal']
mask_Test3 = (X_Test3>-1) & (Y_Test3>-1) &(X_Test3<4000) # This gets rid of Nans
X_Test3noNan=X_Test3[mask_Test3].values.reshape(-1, 1)
Y_Test3noNan=Y_Test3[mask_Test3].values.reshape(-1, 1)
lr=LinearRegression()
lr.fit(X_Test3noNan,Y_Test3noNan)
Y_pred_Test3=lr.predict(X_Test3noNan)
ax2.plot(X_Test3noNan,Y_pred_Test3, color='red', linewidth=0.5, zorder=1) # This plots the best fit line
ax2.scatter(satPs_wtemps_Shish_Comb['Press'], satPs_wtemps_Shish_Comb['SaturationP_bars_VESIcal'], s=50, edgecolors='k', facecolors='silver', marker='o', zorder=5)
# This bit plots the regression parameters on the graph
I='Intercept= ' + str(np.round(lr.intercept_, 2))[1:-1]
G='Gradient= ' + str(np.round(lr.coef_, 3))[2:-2]
R='R$^2$= ' + str(np.round(r2_score(Y_Test3noNan, Y_pred_Test3), 2))
ax2.text(2000, 100, I, fontsize=14)
ax2.text(2000, 400, G, fontsize=14)
ax2.text(2000, 700, R, fontsize=14)
ax1.set_xlabel('Experimental Pressure (bar)', fontsize=14)
ax1.set_ylabel('P$_{Sat}$ VESIcal (bar)', fontsize=14)
ax2.set_xlabel('Experimental Pressure (bar)', fontsize=14)
ax2.set_ylabel('P$_{Sat}$ VESIcal (bar)', fontsize=14)
ax1.set_xticks([0, 2000, 4000, 6000, 8000, 10000])
ax1.set_yticks([0, 2000, 4000, 6000, 8000, 10000])
ax2.set_xticks([0, 2000, 4000, 6000, 8000, 10000])
ax2.set_yticks([0, 2000, 4000, 6000, 8000, 10000])
ax1.set_xlim([-200, 8000])
ax1.set_ylim([-200, 8000])
ax2.set_xlim([-200, 4000])
ax2.set_ylim([-200, 4000])
plt.subplots_adjust(left=0.125, bottom=None, right=0.9, top=None, wspace=0.3, hspace=None)
ax1.text(-150, 7600, 'a)', fontsize=14)
ax2.text(-150, 3800, 'b)', fontsize=14)
ax1.set_title('All Experiments', fontsize=14)
ax2.set_title('Experimental Pressure < 4000 bars', fontsize=14)
fig.savefig('Shishkina_Test3.png', transparent=True)
```
## Test 4 - Interpretation of "atomic fractions of cations" in Equation 9
- We can only recreate the chemical data for cation fractions shown in their Fig. 7a if the "atomic fractions of cations" are calculated excluding volatiles. Calculating atomic proportions including H$_2$O and CO$_2$ results in a significantly worse fit to the experimental data for the ShishkinaWater model shown in Test 2. The choice of normalization doesn't affect the results for the CO$_2$ model, where the compositional dependence is expressed as a fraction.
```
# Removed CO2 and H2O
oxides = ['SiO2', 'TiO2', 'Al2O3', 'Fe2O3', 'Cr2O3', 'FeO', 'MnO', 'MgO', 'NiO', 'CoO', 'CaO', 'Na2O', 'K2O', 'P2O5']
oxideMass = {'SiO2': 28.085+32, 'MgO': 24.305+16, 'FeO': 55.845+16, 'CaO': 40.078+16, 'Al2O3': 2*26.982+16*3, 'Na2O': 22.99*2+16,
'K2O': 39.098*2+16, 'MnO': 54.938+16, 'TiO2': 47.867+32, 'P2O5': 2*30.974+5*16, 'Cr2O3': 51.996*2+3*16,
'NiO': 58.693+16, 'CoO': 28.01+16, 'Fe2O3': 55.845*2+16*3}
CationNum = {'SiO2': 1, 'MgO': 1, 'FeO': 1, 'CaO': 1, 'Al2O3': 2, 'Na2O': 2,
'K2O': 2, 'MnO': 1, 'TiO2': 1, 'P2O5': 2, 'Cr2O3': 2,
'NiO': 1, 'CoO': 1, 'Fe2O3': 2}
Normdata = myfile_H2O.get_data(normalization="additionalvolatiles")
for ind,row in Normdata.iterrows():
for ox in oxides:
Normdata.loc[ind, ox + 'molar']=((row[ox]*CationNum[ox])/oxideMass[ox]) # helps us get desired column name with its actual name, rather than its index. If by number, do by iloc.
#oxide_molar[ind, ox]=ox+'molar'
Normdata.loc[ind,'sum']=sum(Normdata.loc[ind, ox+'molar'] for ox in oxides)
for ox in oxides:
Normdata.loc[ind, ox + 'norm']=Normdata.loc[ind, ox+'molar']/Normdata.loc[ind, 'sum']
# helps us get desired column name with its actual name, rather than its index. If by number, do by iloc.
Normdata.head()
### Comparison of these cation fractions to those shown in their Fig. 7a
fig, ax1 = plt.subplots(figsize = (10,8)) # adjust dimensions of figure here
font = {'family': 'sans-serif',
'color': 'black',
'weight': 'normal',
'size': 20,
}
plt.xlim([0, 0.25])
plt.ylim([1, 13])
plt.title('Calculated using VESIcal')
plt.scatter(Normdata['Na2Onorm']+Normdata['K2Onorm'], Normdata['H2O'], edgecolor='k', facecolor='b', s=50, label='Normalized')
plt.xlabel('Na+K')
plt.ylabel('H$_2$O')
plt.legend()
```
# Their graph below

# Media Outlets Activity on Twitter aggregated by Media Type
The parameters in the cell below can be adjusted to explore other media outlets and time frames.
### How to explore other media types?
The ***media*** parameter can be used to aggregate media outlets by their type. The column `subcategory` in [this other notebook](../media.ipynb?autorun=true) shows the media outlets that belong to each type.
***Alternatively***, you can directly use the [organizations API](http://mediamonitoring.gesis.org/api/organizations/swagger/), or access it with the [SMM Wrapper](https://pypi.org/project/smm-wrapper/).
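As a rough illustration (not part of the original notebook), the direct API calls could also be made from Python with the `requests` package. The endpoints, query parameters, and JSON field names below are taken from the R code in section B, so the exact response shape is an assumption:
```
# Hypothetical Python sketch of the organizations API calls mirrored by the R code below.
import requests

base = "http://mediamonitoring.gesis.org/api/organizations/"

# List all organizations and keep those whose subcategory matches the chosen media type
orgs = requests.get(base + "all/").json()
newspapers = [o for o in orgs if o.get("subcategory") and "Newspaper" in o["subcategory"]]

# Weekly tweet counts for the first matching organization
params = {"from_date": "2017-09-01", "to_date": "2018-12-31", "aggregate_by": "week"}
resp = requests.get(
    base + "twitter/tweets_by/organizations/{}/".format(newspapers[0]["organization_id"]),
    params=params,
).json()
print(resp["labels"][:3], resp["values"][:3])
```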
## A. Set Up parameters
```
# Parameters:
media = 'Newspaper'
from_date = '2017-09-01'
to_date = '2018-12-31'
aggregation = 'week'
```
## B. Using the SMM Organization API
```
library("httr")
library("jsonlite")
library('dplyr', warn.conflicts = FALSE)
# prepare urls
base <- "http://mediamonitoring.gesis.org/api/organizations/"
url_all <- paste(base,"all/", sep="")
url_tweets <- paste(base, "twitter/tweets_by/organizations/", sep="")
url_replies <- paste(base, "twitter/replies_to/organizations/", sep="")
# prepare parameters
params = list(
from_date=from_date,
to_date=to_date,
aggregate_by=aggregation
)
# use the api to get the organizations and filter the parties
orgs <- as.data.frame(fromJSON(content(GET(url_all), "text", encoding="UTF-8"), flatten = TRUE))
media_df <- orgs[grepl(media, orgs$subcategory, fixed=TRUE) & !is.null(orgs$tw_ids) ,]
# query the Social Media Monitoring API for tweets and replies
tweets <- data.frame()
replies <- data.frame()
for (organization_id in media_df$organization_id) {
json_tweets <- fromJSON(content(GET(paste(url_tweets, organization_id, "/?", sep=""),query=params), "text", encoding="UTF-8"), flatten = TRUE)
json_replies <- fromJSON(content(GET(paste(url_replies, organization_id, "/?", sep=""), query=params), "text", encoding="UTF-8"), flatten = TRUE)
# concatenate
if (length(json_tweets$values) != 0) {
tweets <- rbind(tweets, as.data.frame(json_tweets))
}
if (length(json_replies$values) != 0) {
replies <- rbind(replies, as.data.frame(json_replies))
}
}
# group by day, week, or month, and then merge
tweets <- summarise(group_by(tweets, labels), tweets = sum(values), response_type = aggregation)
replies <- summarise(group_by(replies, labels), replies = sum(values), response_type = aggregation)
merged <- merge(tweets, replies, by='labels')
```
## C. Plotting
```
library("ggplot2")
#plotting
options(repr.plot.width=8, repr.plot.height=4)
ggplot(data = merged, mapping = aes(as.Date(labels))) +
geom_line(aes(y = tweets, color="Tweets", group=response_type.x)) +
geom_line(aes(y = replies, color="Replies", group=response_type.y)) +
labs(title = "Twitter (tweets)", y = "Tweets") +
theme(axis.text.x = element_text(angle = 60, hjust = 0.5, vjust = 0.5),
axis.title.x = element_blank(), legend.title = element_blank(), plot.title = element_text(size=10)) +
scale_x_date(date_breaks = "1 month")
merged
```
# Introduction to Speech Synthesis - ESPnet Tutorial
Ref:
https://colab.research.google.com/drive/1L85G7jdhsI1QKs2o0qCGEbhm5X4QV2zN?usp=sharing
Original author: Shinji Watanabe
ESPnet is an end-to-end speech processing toolkit, initially applied to end-to-end speech recognition and end-to-end text-to-speech synthesis. It has since been extended to other speech processing applications. ESPnet uses PyTorch as its main deep learning engine.
```
from google.colab import drive
drive.mount('/content/drive')
```
## Synthesis Example
- ESPnet provides several speech processing applications and ships with pretrained models
- The available models can be checked in espnet_model_zoo
```
!pip install -q espnet_model_zoo
!pip install -q pyopenjtalk==0.1.5 parallel_wavegan==0.5.3
```
Now we choose the language of our synthesizer.
```
#@title Choose English model { run: "auto" }
lang = 'English'
tag = 'kan-bayashi/ljspeech_vits' #@param ["kan-bayashi/ljspeech_tacotron2", "kan-bayashi/ljspeech_fastspeech", "kan-bayashi/ljspeech_fastspeech2", "kan-bayashi/ljspeech_conformer_fastspeech2", "kan-bayashi/ljspeech_vits"] {type:"string"}
vocoder_tag = "none" #@param ["none", "parallel_wavegan/ljspeech_parallel_wavegan.v1", "parallel_wavegan/ljspeech_full_band_melgan.v2", "parallel_wavegan/ljspeech_multi_band_melgan.v2", "parallel_wavegan/ljspeech_hifigan.v1", "parallel_wavegan/ljspeech_style_melgan.v1"] {type:"string"}
#@title Choose Japanese model { run: "auto" }
# lang = 'Japanese'
# tag = 'kan-bayashi/jsut_full_band_vits_prosody' #@param ["kan-bayashi/jsut_tacotron2", "kan-bayashi/jsut_transformer", "kan-bayashi/jsut_fastspeech", "kan-bayashi/jsut_fastspeech2", "kan-bayashi/jsut_conformer_fastspeech2", "kan-bayashi/jsut_conformer_fastspeech2_accent", "kan-bayashi/jsut_conformer_fastspeech2_accent_with_pause", "kan-bayashi/jsut_vits_accent_with_pause", "kan-bayashi/jsut_full_band_vits_accent_with_pause", "kan-bayashi/jsut_tacotron2_prosody", "kan-bayashi/jsut_transformer_prosody", "kan-bayashi/jsut_conformer_fastspeech2_tacotron2_prosody", "kan-bayashi/jsut_vits_prosody", "kan-bayashi/jsut_full_band_vits_prosody", "kan-bayashi/jvs_jvs010_vits_prosody", "kan-bayashi/tsukuyomi_full_band_vits_prosody"] {type:"string"}
# vocoder_tag = 'none' #@param ["none", "parallel_wavegan/jsut_parallel_wavegan.v1", "parallel_wavegan/jsut_multi_band_melgan.v2", "parallel_wavegan/jsut_style_melgan.v1", "parallel_wavegan/jsut_hifigan.v1"] {type:"string"}
```
## Model Initialization
```
from espnet2.bin.tts_inference import Text2Speech
from espnet2.utils.types import str_or_none
text2speech = Text2Speech.from_pretrained(
model_tag=str_or_none(tag),
vocoder_tag=str_or_none(vocoder_tag),
device="cuda",
# Only for Tacotron 2 & Transformer
threshold=0.5,
# Only for Tacotron 2
minlenratio=0.0,
maxlenratio=10.0,
use_att_constraint=False,
backward_window=1,
forward_window=3,
# Only for FastSpeech & FastSpeech2 & VITS
speed_control_alpha=1.0,
# Only for VITS
noise_scale=0.667,
noise_scale_dur=0.8,
)
```
## Synthesis
```
import time
import torch
# decide the input sentence by yourself
print(f"Input your favorite sentence in {lang}.")
x = input()
# synthesis
with torch.no_grad():
start = time.time()
wav = text2speech(x)["wav"]
rtf = (time.time() - start) / (len(wav) / text2speech.fs)
print(f"RTF = {rtf:5f}")
# let us listen to generated samples
from IPython.display import display, Audio
display(Audio(wav.view(-1).cpu().numpy(), rate=text2speech.fs))
```
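To keep the synthesized audio beyond the notebook session, the waveform can be written to a WAV file. This is a small optional sketch that assumes the `soundfile` package is installed (it is not used elsewhere in this notebook):
```
# Optional: save the generated waveform to disk (assumes `pip install soundfile`)
import soundfile as sf

sf.write("tts_output.wav", wav.view(-1).cpu().numpy(), text2speech.fs)
```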