# FAQs for Regression, MAP and MLE
* So far we have focused on regression. We began with the polynomial regression example where we have training data $\mathbf{X}$ and associated training labels $\mathbf{t}$ and we use these to estimate weights, $\mathbf{w}$ to fit a polynomial curve through the data:
\begin{equation}
y(x, \mathbf{w}) = \sum_{j=0}^M w_j x^j
\end{equation}
* We derived how to estimate the weights using both maximum likelihood estimation (MLE) and maximum a-posteriori estimation (MAP).
* Then, last class we said that we can generalize this further using basis functions (instead of only raising x to the jth power):
\begin{equation}
y(x, \mathbf{w}) = \sum_{j=0}^M w_j \phi_j(x)
\end{equation}
where $\phi_j(\cdot)$ is any basis function you choose to use on the data.
* *Why is regression useful?*
* Regression is a common type of machine learning problem where we want to map inputs to a value (instead of a class label). For example, in our first class we mapped silhouettes of individuals to their age. So regression is an important technique whenever you want to map from a data set to another value of interest. *Can you think of other examples of regression problems?*
* *Why would I want to use other basis functions?*
* We began with the polynomial curve fitting example just so we could have a concrete example to work through, but polynomial curve fitting is not the best approach for every problem. You can think of the basis functions as methods to extract useful features from your data. For example, if it is more useful to compute distances between data points (instead of raising each data point to various powers), then you should use a distance-based basis instead, such as the Gaussian radial basis function $\phi_j(x) = \exp\left(-\frac{(x-\mu_j)^2}{2s^2}\right)$ centered at some location $\mu_j$.
* *Why did we go through all the math derivations? You could've just provided the MLE and MAP solution to us since that is all we need in practice to code this up.*
* In practice, you may have unique requirements for a particular problem and will need to decide upon and set up a different data likelihood and prior for that problem. For example, we assumed Gaussian noise for our regression example with a Gaussian zero-mean prior on the weights. You may have an application in which you know the noise is Gamma distributed and have other requirements for the weights that you want to incorporate into the prior. Knowing the process used to derive the estimate for the weights in this case is a helpful guide for deriving your own solution. (Also, on a practical note for the course, stepping through the math served as a quick review of various linear algebra, calculus and statistics topics that will be useful throughout the course.)
* *What is overfitting and why is it bad?*
* The goal of a supervised machine learning algorithm is to learn a mapping from inputs to desired outputs from training data. When you overfit, you memorize your training data such that you can recreate the samples perfectly. This often comes about when you have a model that is more complex than your underlying true model and/or you do not have enough data to support such a complex model. However, you do this at the cost of generalization: when you overfit, you do very well on training data but poorly on test (or unseen) data. So, to have a useful trained machine learning model, you need to avoid overfitting. You can avoid overfitting in a number of ways; the methods we discussed in class are using *enough* data and regularization. Overfitting is related to the "bias-variance trade-off" (discussed in section 3.2 of the reading). There is a trade-off between bias and variance: complex models have low bias and high variance (which is another way of saying they fit the training data very well but may oscillate widely between training data points), whereas rigid (not-complex-enough) models have high bias and low variance (they do not oscillate widely but may not fit the training data very well either).
* *What is the goal of MLE and MAP?*
* MLE and MAP are general approaches for estimating parameter values. For example, you may have data from some unknown distribution that you would like to model as best you can with a Gaussian distribution. You can use MLE or MAP to estimate the Gaussian parameters to fit the data and determine your estimate at what the true (but unknown) distribution is.
* *Why would you use MAP over MLE (or vice versa)?*
* As we saw in class, MAP is a method to add other terms that trade off against the data likelihood during optimization. It is a mechanism to incorporate our "prior belief" about the parameters. In our example in class, we used the MAP solution for the weights in regression to help prevent overfitting by imposing the assumption that the weights should be small in magnitude. When you have enough data, the MAP and MLE solutions converge to the same solution. The amount of data you need for this to occur varies based on how strongly you impose the prior (which is controlled by the variance of the prior distribution). A short numerical sketch contrasting the two estimates follows this list.
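To make the contrast concrete, here is a minimal sketch (the toy data, variable names and the value of the regularization constant are illustrative choices, not from the lecture) of the closed-form MLE and MAP weight estimates for polynomial curve fitting, where the MAP solution adds the term $\lambda \mathbf{I}$ that arises from the zero-mean Gaussian prior on the weights:
```
import numpy as np

# Toy data: noisy samples from a sine curve (illustrative only)
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 10)
t = np.sin(2 * np.pi * x) + 0.3 * rng.standard_normal(10)

M = 9                                          # polynomial order
Phi = np.vander(x, M + 1, increasing=True)     # design matrix with phi_j(x) = x^j

# MLE (least squares): w = (Phi^T Phi)^{-1} Phi^T t
w_mle = np.linalg.lstsq(Phi, t, rcond=None)[0]

# MAP with a zero-mean Gaussian prior (ridge): w = (Phi^T Phi + lam*I)^{-1} Phi^T t
lam = 1e-3                                     # ratio of noise variance to prior variance
w_map = np.linalg.solve(Phi.T @ Phi + lam * np.eye(M + 1), Phi.T @ t)

print(np.round(w_mle, 1))   # large-magnitude weights -> prone to overfitting
print(np.round(w_map, 1))   # shrunk weights -> smoother fit
```
The only difference between the two estimates is the $\lambda \mathbf{I}$ term, which is exactly where the prior enters; as the amount of data grows, its influence (and hence the gap between MLE and MAP) shrinks.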
# Probabilistic Generative Models
* So far we have focused on regression. Today we will begin to discuss classification.
* Suppose we have training data from two classes, $C_1$ and $C_2$, and we would like to train a classifier to assign a label to incoming test points whether they belong to class 1 or 2.
* There are *many* classifiers in the machine learning literature. We will cover a few in this class. Today we will focus on probabilistic generative approaches for classification.
* A *generative* approach for classification is one in which we estimate the parameters for distributions that generate the data for each class. Then, when we have a test point, we can compute the posterior probability of that point belonging to each class and assign the point to the class with the highest posterior probability.
```
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import multivariate_normal
%matplotlib inline
mean1 = [-1.5, -1]
mean2 = [1, 1]
cov1 = [[1,0], [0,2]]
cov2 = [[2,.1],[.1,.2]]
N1 = 250
N2 = 100
def generateData(mean1, mean2, cov1, cov2, N1=100, N2=100):
# We are generating data from two Gaussians to represent two classes.
# In practice, we would not do this - we would just have data from the problem we are trying to solve.
class1X = np.random.multivariate_normal(mean1, cov1, N1)
class2X = np.random.multivariate_normal(mean2, cov2, N2)
fig = plt.figure()
ax = fig.add_subplot(*[1,1,1])
ax.scatter(class1X[:,0], class1X[:,1], c='r')
ax.scatter(class2X[:,0], class2X[:,1])
plt.show()
return class1X, class2X
class1X, class2X = generateData(mean1, mean2,cov1,cov2, N1,N2)
```
In the data we generated above, we have a "red" class and a "blue" class. When we are given a test sample, we will want to assign the label of either red or blue.
We can compute the posterior probability for class $C_1$ as follows:
\begin{eqnarray}
p(C_1 | x) &=& \frac{p(x|C_1)p(C_1)}{p(x)}\\
&=& \frac{p(x|C_1)p(C_1)}{p(x|C_1)p(C_1) + p(x|C_2)p(C_2)}\\
\end{eqnarray}
We can similarly compute the posterior probability for class $C_2$:
\begin{eqnarray}
p(C_2 | x) &=& \frac{p(x|C_2)p(C_2)}{p(x|C_1)p(C_1) + p(x|C_2)p(C_2)}\\
\end{eqnarray}
Note that $p(C_1|x) + p(C_2|x) = 1$.
So, to train the classifier, what we need is to determine the parametric forms and estimate the parameters for $p(x|C_1)$, $p(x|C_2)$, $p(C_1)$ and $p(C_2)$.
For example, we can assume that the data from both $C_1$ and $C_2$ are distributed according to Gaussian distributions. In this case,
\begin{eqnarray}
p(\mathbf{x}|C_k) = \frac{1}{(2\pi)^{D/2}}\frac{1}{|\Sigma_k|^{1/2}}\exp\left\{ - \frac{1}{2} (\mathbf{x}-\mu_k)^T\Sigma_k^{-1}(\mathbf{x}-\mu_k)\right\}
\end{eqnarray}
where $D$ is the dimensionality of $\mathbf{x}$.
Given the assumption of the Gaussian form, how would you estimate the parameters of $p(x|C_1)$ and $p(x|C_2)$? *You can use the maximum likelihood estimates for the mean and covariance!*
The MLE estimate for the mean of class $C_k$ is:
\begin{eqnarray}
\mu_{k,MLE} = \frac{1}{N_k} \sum_{n \in C_k} \mathbf{x}_n
\end{eqnarray}
where $N_k$ is the number of training data points that belong to class $C_k$
The MLE estimate for the covariance of class $C_k$ is:
\begin{eqnarray}
\Sigma_k = \frac{1}{N_k} \sum_{n \in C_k} (\mathbf{x}_n - \mu_{k,MLE})(\mathbf{x}_n - \mu_{k,MLE})^T
\end{eqnarray}
We can determine the values for $p(C_1)$ and $p(C_2)$ from the number of data points in each class:
\begin{eqnarray}
p(C_k) = \frac{N_k}{N}
\end{eqnarray}
where $N$ is the total number of data points.
```
#Estimate the mean and covariance for each class from the training data
mu1 = np.mean(class1X, axis=0)
print(mu1)
cov1 = np.cov(class1X.T)
print(cov1)
mu2 = np.mean(class2X, axis=0)
print(mu2)
cov2 = np.cov(class2X.T)
print(cov2)
# Estimate the prior for each class
pC1 = class1X.shape[0]/(class1X.shape[0] + class2X.shape[0])
print(pC1)
pC2 = class2X.shape[0]/(class1X.shape[0] + class2X.shape[0])
print(pC2)
#We now have all parameters needed and can compute values for test samples
from scipy.stats import multivariate_normal
x = np.linspace(-5, 4, 100)
y = np.linspace(-6, 6, 100)
xm,ym = np.meshgrid(x, y)
X = np.dstack([xm,ym])
#look at the pdf for class 1
y1 = multivariate_normal.pdf(X, mean=mu1, cov=cov1)
plt.imshow(y1)
plt.show()
#look at the pdf for class 2
y2 = multivariate_normal.pdf(X, mean=mu2, cov=cov2)
plt.imshow(y2)
plt.show()
#Look at the posterior for class 1
pos1 = (y1*pC1)/(y1*pC1 + y2*pC2)
plt.imshow(pos1)
plt.show()
#Look at the posterior for class 2
pos2 = (y2*pC2)/(y1*pC1 + y2*pC2)
plt.imshow(pos2)
plt.show()
#Look at the decision boundary (regions where class 1 is more probable than class 2)
plt.imshow(pos1>pos2)
plt.show()
```
*How did we come up with using the MLE solution for the mean and variance? How did we determine how to compute $p(C_1)$ and $p(C_2)$?*
* We can define a likelihood for this problem and maximize it!
\begin{eqnarray}
p(\mathbf{t}, \mathbf{X}|\pi, \mu_1, \mu_2, \Sigma_1, \Sigma_2) = \prod_{n=1}^N \left[\pi N(x_n|\mu_1, \Sigma_1)\right]^{t_n}\left[(1-\pi)N(x_n|\mu_2, \Sigma_2) \right]^{1-t_n}
\end{eqnarray}
* *How would we maximize this?* As usual, we would use our "trick" and take the log of the likelihood function. Then, we would take the derivative with respect to each parameter we are interested in, set the derivative to zero, and solve for the parameter of interest.
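* For instance, carrying out this recipe for the prior parameter $\pi$ alone (a quick sketch using the likelihood above):
\begin{eqnarray}
\frac{\partial}{\partial \pi} \ln p(\mathbf{t}, \mathbf{X}|\pi, \mu_1, \mu_2, \Sigma_1, \Sigma_2) = \sum_{n=1}^N \left( \frac{t_n}{\pi} - \frac{1-t_n}{1-\pi} \right) = 0 \quad \Rightarrow \quad \pi = \frac{1}{N}\sum_{n=1}^N t_n = \frac{N_1}{N}
\end{eqnarray}
which recovers the class prior estimate $p(C_k) = \frac{N_k}{N}$ used above; the same procedure applied to $\mu_k$ and $\Sigma_k$ yields the MLE formulas given earlier.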
## Reading Assignment: Read Section 4.2 and Section 2.5.2
## In this notebook we are going to predict the growth of Google stock using an LSTM model and CRISP-DM.
```
#importing the libraries
import math
import numpy as np
import pandas as pd
from sklearn.preprocessing import MinMaxScaler
from keras.models import Sequential
from keras.layers import Dense, LSTM
import matplotlib.pyplot as plt
plt.style.use('fivethirtyeight')
"""For LSTM model please use Numpy --version = 1.19 or lower Cause latest Tensorflow array don't accept np tensors
"""
```
# Data Understanding
The data is already processed to split-adjusted prices, so it is easy to analyze, but we create new tables to optimize our model.
```
#importing Price Split Data
data = pd.read_csv('prices-split-adjusted.csv')
data
#checking data for null values
data.isnull().sum()
```
# Data Preprocessing
Creating Table for a specific Stock
```
#Initializing the Dataset for the Stock to be Analyzed
data = data.loc[(data['symbol'] == 'GOOG')]
data = data.drop(columns=['symbol'])
data = data[['date','open','close','low','volume','high']]
data
#Number of rows and columns we are working with
data.shape
```
Plotting the closing price of the Stock
```
plt.figure(figsize=(16,8))
plt.title('Closing Price of the Stock Historically')
plt.plot(data['close'])
plt.xlabel('Year', fontsize=20)
plt.ylabel('Closing Price Historically ($)', fontsize=20)
plt.show()
```
#### Here we can see that there is Long-Term growth in this stock.
# Preparing Data for LSTM
Here we are going to use an LSTM to make a more accurate prediction of the stock value change. We are checking the accuracy on a particular stock.
First we create a separate dataframe with only the "close" column.
```
#Getting the rows and columns we need
data = data.filter(['close'])
dataset = data.values
#Find out the number of rows that are present in this dataset in order to train our model.
training_data_len = math.ceil(len(dataset)* .8)
training_data_len
```
Scaling the Data to make better Predictions
```
scaler = MinMaxScaler(feature_range=(0,1))
scaled_data = scaler.fit_transform(dataset)
scaled_data
#Creating a train test datasets
train_data = scaled_data[0:training_data_len , :]
x_train = []
y_train = []
for j in range(60, len(train_data)):
x_train.append(train_data[j-60:j,0])
y_train.append(train_data[j,0])
if j<=60:
print(x_train)
print(y_train)
print()
x_train, y_train = np.array(x_train), np.array(y_train)
x_train = np.reshape(x_train, (x_train.shape[0], x_train.shape[1], 1))
x_train.shape
```
# Building LSTM Model
```
model = Sequential()
model.add(LSTM(50, return_sequences=True, input_shape = (x_train.shape[1], 1)))
model.add(LSTM(50, return_sequences=False))
model.add(Dense(25))
model.add(Dense(1))
model.compile(optimizer='adam', loss='mean_squared_error')
```
##### Training the Model
```
model.fit(x_train, y_train, batch_size=1, epochs=1)
test_data = scaled_data[training_data_len - 60: , :]
x_test = []
y_test = dataset[training_data_len:, :]
for j in range(60, len(test_data)):
x_test.append(test_data[j-60:j, 0])
x_test = np.array(x_test)
x_test = np.reshape(x_test, (x_test.shape[0], x_test.shape[1], 1))
predictions = model.predict(x_test)
predictions = scaler.inverse_transform(predictions)
#Finding the Root Mean Squared Error for the Stock
rmse = np.sqrt(np.mean((predictions - y_test)**2))
rmse
```
# Visualization
### Plotting Actual Close Values vs Predicted Values of the LSTM Model
```
#builing close value and prediction value table for comparison
train = data[:training_data_len]
val = data[training_data_len:].copy()  # copy so adding a column does not trigger a pandas SettingWithCopyWarning
val['Predictions'] = predictions
plt.figure(figsize=(16,8))
plt.title('LSTM Model Data')
plt.xlabel('Date', fontsize=16)
plt.ylabel('Close Price', fontsize=16)
plt.plot(train['close'])
plt.plot(val[['close', 'Predictions']])
plt.legend(['Trained Dataset', 'Actual Value', 'Predictions'])
plt.show()
```
# Evaluation of the model
Making table for Actual price and Predicted Price
```
#actual close values against predictions
val
new_data = pd.read_csv('prices-split-adjusted.csv')
new_data = data.filter(['close'])
last_60_days = new_data[-60:].values
last_60_scaled = scaler.transform(last_60_days)
X_test = []
X_test.append(last_60_scaled)
X_test = np.array(X_test)
X_test = np.reshape(X_test, (X_test.shape[0], X_test.shape[1], 1))
predicted_price = model.predict(X_test)
predicted_price = scaler.inverse_transform(predicted_price)
print('The predicted price of the final value of the dataset', predicted_price)
new_data.tail(1)
```
#### The predicted price is USD 122.0, whereas the actual observed value is USD 115.82
```
#check predicted values
predictions = model.predict(x_test)
#Undo scaling
predictions = scaler.inverse_transform(predictions)
#Calculate RMSE score
rmse=np.sqrt(np.mean(((predictions- y_test)**2)))
rmse
neww_data = pd.read_csv('prices-split-adjusted.csv')
val.describe()
x = val.close.mean()
y = val.Predictions.mean()
Accuracy = x/y*100
print("The accuracy of the model is " , Accuracy)
```
The LSTM model accuracy, measured here as the ratio between the mean actual close price and the mean predicted close price over the validation period, is 99.39%.
As we can see, the predictions made by the LSTM model show a greater accuracy than the LR model, so we can conclude that the stock is likely to keep growing over the long term.
# Training a dense neural network
The handwritten digit recognition is a classification problem. We will start with the simplest possible approach for image classification - a fully-connected neural network (which is also called a *perceptron*). We use `pytorchcv` helper to load all data we have talked about in the previous unit.
```
!wget https://raw.githubusercontent.com/MicrosoftDocs/pytorchfundamentals/main/computer-vision-pytorch/pytorchcv.py
import torch
import torch.nn as nn
import torchvision
import matplotlib.pyplot as plt
import pytorchcv
pytorchcv.load_mnist()
```
## Fully-connected dense neural networks
A basic **neural network** in PyTorch consists of a number of **layers**. The simplest network would include just one fully-connected layer, which is called **Linear** layer, with 784 inputs (one input for each pixel of the input image) and 10 outputs (one output for each class).

As we discussed above, the dimension of our digit images is $1\times28\times28$. Because the input dimension of a fully-connected layer is 784, we need to insert another layer into the network, called **Flatten**, to change tensor shape from $1\times28\times28$ to $784$.
We want $n$-th output of the network to return the probability of the input digit being equal to $n$. Because the output of a fully-connected layer is not normalized to be between 0 and 1, it cannot be thought of as probability. To turn it into a probability we need to apply another layer called **Softmax**.
In PyTorch, it is easier to use **LogSoftmax** function, which will also compute logarithms of output probabilities. To turn the output vector into the actual probabilities, we need to take **torch.exp** of the output.
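For example, a minimal sketch (using a random toy tensor rather than a dataset sample) showing that exponentiating the **LogSoftmax** output recovers probabilities that sum to one:
```
import torch
import torch.nn as nn

logits = torch.randn(1, 10)                # unnormalized outputs for one sample
log_probs = nn.LogSoftmax(dim=1)(logits)   # log-probabilities over the 10 classes
probs = torch.exp(log_probs)               # actual probabilities
print(probs.sum())                         # tensor(1.) up to rounding
```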
Thus, the architecture of our network can be represented by the following sequence of layers:

It can be defined in PyTorch in the following way, using `Sequential` syntax:
```
net = nn.Sequential(
nn.Flatten(),
nn.Linear(784,10), # 784 inputs, 10 outputs
    nn.LogSoftmax(dim=1))  # normalize over the class dimension
```
## Training the network
A network defined this way can take any digit as input and produce a vector of probabilities as an output. Let's see how this network performs by giving it a digit from our dataset:
```
print('Digit to be predicted: ',data_train[0][1])
torch.exp(net(data_train[0][0]))
```
As you can see the network predicts similar probabilities for each digit. This is because it has not been trained on how to recognize the digits. We need to give it our training data to train it on our dataset.
To train the model we will need to create **batches** of our datasets of a certain size, let's say 64. PyTorch has an object called **DataLoader** that can create batches of our data for us automatically:
```
train_loader = torch.utils.data.DataLoader(data_train,batch_size=64)
test_loader = torch.utils.data.DataLoader(data_test,batch_size=64) # we can use larger batch size for testing
```
The training process steps are as follows:
1. We take a minibatch from the input dataset, which consists of input data (features) and expected result (label).
2. We calculate the predicted result for this minibatch.
3. The difference between this result and expected result is calculated using a special function called the **loss function**
4. We calculate the gradients of this loss function with respect to model weights (parameters), which are then used to adjust the weights to optimize the performance of the network. The amount of adjustment is controlled by a parameter called **learning rate**, and the details of optimization algorithm are defined in the **optimizer** object.
5. We repeat those steps until the whole dataset is processed. One complete pass through the dataset is called **an epoch**.
Here is a function that performs one epoch training:
```
def train_epoch(net,dataloader,lr=0.01,optimizer=None,loss_fn = nn.NLLLoss()):
optimizer = optimizer or torch.optim.Adam(net.parameters(),lr=lr)
net.train()
total_loss,acc,count = 0,0,0
for features,labels in dataloader:
optimizer.zero_grad()
out = net(features)
loss = loss_fn(out,labels) #cross_entropy(out,labels)
loss.backward()
optimizer.step()
total_loss+=loss
_,predicted = torch.max(out,1)
acc+=(predicted==labels).sum()
count+=len(labels)
return total_loss.item()/count, acc.item()/count
train_epoch(net,train_loader)
```
Since this function is pretty generic we will be able to use it later in our other examples. The function takes the following parameters:
* **Neural network**
* **DataLoader**, which defines the data to train on
* **Loss Function**, which is a function that measures the difference between the expected result and the one produced by the network. In most of the classification tasks `NLLLoss` is used, so we will make it a default.
* **Optimizer**, which defines an *optimization algorithm*. The most traditional algorithm is *stochastic gradient descent*, but we will use a more advanced version called **Adam** by default.
* **Learning rate** defines the speed at which the network learns. During learning, we show the same data multiple times, and each time weights are adjusted. If the learning rate is too high, new values will overwrite the knowledge from the old ones, and the network would perform badly. If the learning rate is too small it results in a very slow learning process.
Here is what we do when training:
* Switch the network to training mode (`net.train()`)
* Go over all batches in the dataset, and for each batch do the following:
- compute predictions made by the network on this batch (`out`)
- compute `loss`, which is the discrepancy between predicted and expected values
- try to minimize the loss by adjusting weights of the network (`optimizer.step()`)
- compute the number of correctly predicted cases (**accuracy**)
The function calculates and returns the average loss per data item, and training accuracy (percentage of cases guessed correctly). By observing this loss during training we can see whether the network is improving and learning from the data provided.
It is also important to control the accuracy on the test dataset (also called **validation accuracy**). A good neural network with a lot of parameters can predict with decent accuracy on any training dataset, but it may poorly generalize to other data. That's why in most cases we set aside part of our data, and then periodically check how well the model performs on them. Here is the function to evaluate the network on test dataset:
```
def validate(net, dataloader,loss_fn=nn.NLLLoss()):
net.eval()
count,acc,loss = 0,0,0
with torch.no_grad():
for features,labels in dataloader:
out = net(features)
loss += loss_fn(out,labels)
pred = torch.max(out,1)[1]
acc += (pred==labels).sum()
count += len(labels)
return loss.item()/count, acc.item()/count
validate(net,test_loader)
```
We train the model for several epochs while observing training and validation accuracy. If training accuracy increases while validation accuracy decreases, that is an indication of **overfitting**, meaning the model does well on your training data but poorly on new data.
Below is the training function that can be used to perform both training and validation. It prints the training and validation accuracy for each epoch, and also returns the history that can be used to plot the loss and accuracy on the graph.
```
def train(net,train_loader,test_loader,optimizer=None,lr=0.01,epochs=10,loss_fn=nn.NLLLoss()):
optimizer = optimizer or torch.optim.Adam(net.parameters(),lr=lr)
res = { 'train_loss' : [], 'train_acc': [], 'val_loss': [], 'val_acc': []}
for ep in range(epochs):
tl,ta = train_epoch(net,train_loader,optimizer=optimizer,lr=lr,loss_fn=loss_fn)
vl,va = validate(net,test_loader,loss_fn=loss_fn)
print(f"Epoch {ep:2}, Train acc={ta:.3f}, Val acc={va:.3f}, Train loss={tl:.3f}, Val loss={vl:.3f}")
res['train_loss'].append(tl)
res['train_acc'].append(ta)
res['val_loss'].append(vl)
res['val_acc'].append(va)
return res
# Re-initialize the network to start from scratch
net = nn.Sequential(
nn.Flatten(),
nn.Linear(784,10), # 784 inputs, 10 outputs
    nn.LogSoftmax(dim=1))  # normalize over the class dimension
hist = train(net,train_loader,test_loader,epochs=5)
```
This function logs messages with the accuracy on training and validation data from each epoch. It also returns this data as a dictionary (called **history**). We can then visualize this data to better understand our model training.
```
plt.figure(figsize=(15,5))
plt.subplot(121)
plt.plot(hist['train_acc'], label='Training acc')
plt.plot(hist['val_acc'], label='Validation acc')
plt.legend()
plt.subplot(122)
plt.plot(hist['train_loss'], label='Training loss')
plt.plot(hist['val_loss'], label='Validation loss')
plt.legend()
```
The diagram on the left shows the `training accuracy` increasing (which corresponds to the network learning to classify our training data better and better), while `validation accuracy` starts to fall. The diagram on the right shows the `training loss` and `validation loss`; you can see the `training loss` decreasing (meaning it is performing better) and the `validation loss` increasing (meaning it is performing worse). These graphs indicate that the model is **overfitted**.
## Visualizing network weights
Now let's visualize the weights of our neural network and see what they look like. When the network is more complex than just one layer, it can be difficult to visualize the results like this. However, in our case (classification of a digit), classification happens by multiplying the initial image by a weight matrix, which allows us to visualize the network weights with a bit of added logic.
Let's look at the `weight_tensor` of the linear layer, which has dimensions 10x784 (one 784-element row of weights per output class). This tensor can be obtained by calling the `net.parameters()` method. In this example, if we want to see whether our number is 0 or not, we multiply the input digit by `weight_tensor[0]` and pass the result through a softmax normalization to get the answer. This results in the weight tensor rows somewhat resembling the average shape of the digits they classify:
```
weight_tensor = next(net.parameters())
fig,ax = plt.subplots(1,10,figsize=(15,4))
for i,x in enumerate(weight_tensor):
ax[i].imshow(x.view(28,28).detach())
```
## Takeaway
Training a neural network in PyTorch can be programmed with a training loop. It may seem like a complicated process, but in real life we need to write it once, and we can then re-use this training code later without changing it.
We can see that a single-layer dense neural network shows relatively good performance, but we definitely want to get higher than 91% on accuracy! In the next unit, we will try to use multi-level perceptrons.
# Hyperparameter Optimization (HPO) of Machine Learning Models
L. Yang and A. Shami, “On hyperparameter optimization of machine learning algorithms: Theory and practice,” Neurocomputing, vol. 415, pp. 295–316, 2020, doi: https://doi.org/10.1016/j.neucom.2020.07.061.
### **Sample code for regression problems**
**Dataset used:**
Boston Housing dataset from sklearn
**Machine learning algorithms used:**
Random forest (RF), support vector machine (SVM), k-nearest neighbor (KNN), artificial neural network (ANN)
**HPO algorithms used:**
Grid search, random search, hyperband, Bayesian Optimization with Gaussian Processes (BO-GP), Bayesian Optimization with Tree-structured Parzen Estimator (BO-TPE), particle swarm optimization (PSO), genetic algorithm (GA).
**Performance metric:**
Mean square error (MSE)
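For reference, with $n$ samples, predictions $\hat{y}_i$ and true targets $y_i$, the metric is
\begin{equation}
\mathrm{MSE} = \frac{1}{n}\sum_{i=1}^{n}\left(y_i - \hat{y}_i\right)^2
\end{equation}
scikit-learn exposes it through the `neg_mean_squared_error` scorer, which is why the code below negates the cross-validation scores before printing them.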
```
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split,cross_val_score
from sklearn.ensemble import RandomForestClassifier,RandomForestRegressor
from sklearn.metrics import classification_report,confusion_matrix,accuracy_score
from sklearn.neighbors import KNeighborsClassifier,KNeighborsRegressor
from sklearn.svm import SVC,SVR
from sklearn import datasets
import scipy.stats as stats
```
## Load Boston Housing dataset
We will use the Boston Housing dataset, which contains information about different houses in Boston. There are 506 samples and 13 feature variables in this dataset. The main goal is to predict house prices using the given features.
You can read more about the data and the variables [[1]](https://www.cs.toronto.edu/~delve/data/boston/bostonDetail.html) [[2]](https://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_boston.html).
```
X, y = datasets.load_boston(return_X_y=True)
datasets.load_boston()
```
## Baseline Machine Learning models: Regressors with Default Hyperparameters
```
#Random Forest
clf = RandomForestRegressor()
scores = cross_val_score(clf, X, y, cv=3,scoring='neg_mean_squared_error') # 3-fold cross-validation
print("MSE:"+ str(-scores.mean()))
#SVM
clf = SVR(gamma='scale')
scores = cross_val_score(clf, X, y, cv=3,scoring='neg_mean_squared_error')
print("MSE:"+ str(-scores.mean()))
#KNN
clf = KNeighborsRegressor()
scores = cross_val_score(clf, X, y, cv=3,scoring='neg_mean_squared_error')
print("MSE:"+ str(-scores.mean()))
#ANN
from keras.models import Sequential, Model
from keras.layers import Dense, Input
from sklearn.model_selection import GridSearchCV
from keras.wrappers.scikit_learn import KerasRegressor
from keras.callbacks import EarlyStopping
def ANN(optimizer = 'adam',neurons=32,batch_size=32,epochs=50,activation='relu',patience=5,loss='mse'):
model = Sequential()
model.add(Dense(neurons, input_shape=(X.shape[1],), activation=activation))
model.add(Dense(neurons, activation=activation))
model.add(Dense(1))
model.compile(optimizer = optimizer, loss=loss)
early_stopping = EarlyStopping(monitor="loss", patience = patience)# early stop patience
history = model.fit(X, y,
batch_size=batch_size,
epochs=epochs,
callbacks = [early_stopping],
verbose=0) #verbose set to 1 will show the training process
return model
clf = KerasRegressor(build_fn=ANN, verbose=0)
scores = cross_val_score(clf, X, y, cv=3,scoring='neg_mean_squared_error')
print("MSE:"+ str(-scores.mean()))
```
## HPO Algorithm 1: Grid Search
Search all the given hyper-parameter configurations
**Advantages:**
* Simple implementation.
**Disadvantages:**
* Time-consuming,
* Only efficient with categorical HPs.
```
#Random Forest
from sklearn.model_selection import GridSearchCV
# Define the hyperparameter configuration space
rf_params = {
'n_estimators': [10, 20, 30],
#'max_features': ['sqrt',0.5],
'max_depth': [15,20,30,50],
#'min_samples_leaf': [1,2,4,8],
#"bootstrap":[True,False],
#"criterion":['mse','mae']
}
clf = RandomForestRegressor(random_state=0)
grid = GridSearchCV(clf, rf_params, cv=3, scoring='neg_mean_squared_error')
grid.fit(X, y)
print(grid.best_params_)
print("MSE:"+ str(-grid.best_score_))
#SVM
from sklearn.model_selection import GridSearchCV
rf_params = {
'C': [1,10, 100],
"kernel":['poly','rbf','sigmoid'],
"epsilon":[0.01,0.1,1]
}
clf = SVR(gamma='scale')
grid = GridSearchCV(clf, rf_params, cv=3, scoring='neg_mean_squared_error')
grid.fit(X, y)
print(grid.best_params_)
print("MSE:"+ str(-grid.best_score_))
#KNN
from sklearn.model_selection import GridSearchCV
rf_params = {
'n_neighbors': [2, 3, 5,7,10]
}
clf = KNeighborsRegressor()
grid = GridSearchCV(clf, rf_params, cv=3, scoring='neg_mean_squared_error')
grid.fit(X, y)
print(grid.best_params_)
print("MSE:"+ str(-grid.best_score_))
#ANN
from sklearn.model_selection import GridSearchCV
rf_params = {
'optimizer': ['adam','rmsprop'],
'activation': ['relu','tanh'],
'loss': ['mse','mae'],
'batch_size': [16,32],
'neurons':[16,32],
'epochs':[20,50],
'patience':[2,5]
}
clf = KerasRegressor(build_fn=ANN, verbose=0)
grid = GridSearchCV(clf, rf_params, cv=3,scoring='neg_mean_squared_error')
grid.fit(X, y)
print(grid.best_params_)
print("MSE:"+ str(-grid.best_score_))
```
## HPO Algorithm 2: Random Search
Randomly search hyper-parameter combinations in the search space
**Advantages:**
* More efficient than GS.
* Enable parallelization.
**Disadvantages:**
* Not consider previous results.
* Not efficient with conditional HPs.
```
#Random Forest
from scipy.stats import randint as sp_randint
from sklearn.model_selection import RandomizedSearchCV
# Define the hyperparameter configuration space
rf_params = {
'n_estimators': sp_randint(10,100),
"max_features":sp_randint(1,13),
'max_depth': sp_randint(5,50),
"min_samples_split":sp_randint(2,11),
"min_samples_leaf":sp_randint(1,11),
"criterion":['mse','mae']
}
n_iter_search=20 #number of iterations is set to 20, you can increase this number if time permits
clf = RandomForestRegressor(random_state=0)
Random = RandomizedSearchCV(clf, param_distributions=rf_params,n_iter=n_iter_search,cv=3,scoring='neg_mean_squared_error')
Random.fit(X, y)
print(Random.best_params_)
print("MSE:"+ str(-Random.best_score_))
#SVM
from scipy.stats import randint as sp_randint
from sklearn.model_selection import RandomizedSearchCV
rf_params = {
'C': stats.uniform(0,50),
"kernel":['poly','rbf','sigmoid'],
"epsilon":stats.uniform(0,1)
}
n_iter_search=20
clf = SVR(gamma='scale')
Random = RandomizedSearchCV(clf, param_distributions=rf_params,n_iter=n_iter_search,cv=3,scoring='neg_mean_squared_error')
Random.fit(X, y)
print(Random.best_params_)
print("MSE:"+ str(-Random.best_score_))
#KNN
from scipy.stats import randint as sp_randint
from sklearn.model_selection import RandomizedSearchCV
rf_params = {
'n_neighbors': sp_randint(1,20),
}
n_iter_search=10
clf = KNeighborsRegressor()
Random = RandomizedSearchCV(clf, param_distributions=rf_params,n_iter=n_iter_search,cv=3,scoring='neg_mean_squared_error')
Random.fit(X, y)
print(Random.best_params_)
print("MSE:"+ str(-Random.best_score_))
#ANN
from scipy.stats import randint as sp_randint
from random import randrange as sp_randrange
from sklearn.model_selection import RandomizedSearchCV
rf_params = {
'optimizer': ['adam','rmsprop'],
'activation': ['relu','tanh'],
'loss': ['mse','mae'],
'batch_size': [16,32,64],
'neurons':sp_randint(10,100),
'epochs':[20,50],
#'epochs':[20,50,100,200],
'patience':sp_randint(3,20)
}
n_iter_search=10
clf = KerasRegressor(build_fn=ANN, verbose=0)
Random = RandomizedSearchCV(clf, param_distributions=rf_params,n_iter=n_iter_search,cv=3,scoring='neg_mean_squared_error')
Random.fit(X, y)
print(Random.best_params_)
print("MSE:"+ str(-Random.best_score_))
```
## HPO Algorithm 3: Hyperband
Generate small-sized subsets and allocate budgets to each hyper-parameter combination based on its performance
**Advantages:**
* Enable parallelization.
**Disadvantages:**
* Not efficient with conditional HPs.
* Require subsets with small budgets to be representative.
```
#Random Forest
from hyperband import HyperbandSearchCV
from scipy.stats import randint as sp_randint
# Define the hyperparameter configuration space
rf_params = {
'n_estimators': sp_randint(10,100),
"max_features":sp_randint(1,13),
'max_depth': sp_randint(5,50),
"min_samples_split":sp_randint(2,11),
"min_samples_leaf":sp_randint(1,11),
"criterion":['mse','mae']
}
clf = RandomForestRegressor(random_state=0)
hyper = HyperbandSearchCV(clf, param_distributions =rf_params,cv=3,min_iter=10,max_iter=100,scoring='neg_mean_squared_error')
hyper.fit(X, y)
print(hyper.best_params_)
print("MSE:"+ str(-hyper.best_score_))
#SVM
from hyperband import HyperbandSearchCV
from scipy.stats import randint as sp_randint
rf_params = {
'C': stats.uniform(0,50),
"kernel":['poly','rbf','sigmoid'],
"epsilon":stats.uniform(0,1)
}
clf = SVR(gamma='scale')
hyper = HyperbandSearchCV(clf, param_distributions =rf_params,cv=3,min_iter=1,max_iter=10,scoring='neg_mean_squared_error',resource_param='C')
hyper.fit(X, y)
print(hyper.best_params_)
print("MSE:"+ str(-hyper.best_score_))
#KNN
from hyperband import HyperbandSearchCV
from scipy.stats import randint as sp_randint
rf_params = {
'n_neighbors': range(1,20),
}
clf = KNeighborsRegressor()
hyper = HyperbandSearchCV(clf, param_distributions =rf_params,cv=3,min_iter=1,max_iter=20,scoring='neg_mean_squared_error',resource_param='n_neighbors')
hyper.fit(X, y)
print(hyper.best_params_)
print("MSE:"+ str(-hyper.best_score_))
#ANN
from hyperband import HyperbandSearchCV
from scipy.stats import randint as sp_randint
rf_params = {
'optimizer': ['adam','rmsprop'],
'activation': ['relu','tanh'],
'loss': ['mse','mae'],
'batch_size': [16,32,64],
'neurons':sp_randint(10,100),
'epochs':[20,50],
#'epochs':[20,50,100,200],
'patience':sp_randint(3,20)
}
clf = KerasRegressor(build_fn=ANN, epochs=20, verbose=0)
hyper = HyperbandSearchCV(clf, param_distributions =rf_params,cv=3,min_iter=1,max_iter=10,scoring='neg_mean_squared_error',resource_param='epochs')
hyper.fit(X, y)
print(hyper.best_params_)
print("MSE:"+ str(-hyper.best_score_))
```
## HPO Algorithm 4: BO-GP
Bayesian Optimization with Gaussian Process (BO-GP)
**Advantages:**
* Fast convergence speed for continuous HPs.
**Disadvantages:**
* Poor capacity for parallelization.
* Not efficient with conditional HPs.
### Using skopt.BayesSearchCV
```
#Random Forest
from skopt import Optimizer
from skopt import BayesSearchCV
from skopt.space import Real, Categorical, Integer
# Define the hyperparameter configuration space
rf_params = {
'n_estimators': Integer(10,100),
"max_features":Integer(1,13),
'max_depth': Integer(5,50),
"min_samples_split":Integer(2,11),
"min_samples_leaf":Integer(1,11),
"criterion":['mse','mae']
}
clf = RandomForestRegressor(random_state=0)
Bayes = BayesSearchCV(clf, rf_params,cv=3,n_iter=20, scoring='neg_mean_squared_error')
#number of iterations is set to 20, you can increase this number if time permits
Bayes.fit(X, y)
print(Bayes.best_params_)
bclf = Bayes.best_estimator_
print("MSE:"+ str(-Bayes.best_score_))
#SVM
from skopt import Optimizer
from skopt import BayesSearchCV
from skopt.space import Real, Categorical, Integer
rf_params = {
'C': Real(0,50),
"kernel":['poly','rbf','sigmoid'],
'epsilon': Real(0,1)
}
clf = SVR(gamma='scale')
Bayes = BayesSearchCV(clf, rf_params,cv=3,n_iter=20, scoring='neg_mean_squared_error')
Bayes.fit(X, y)
print(Bayes.best_params_)
print("MSE:"+ str(-Bayes.best_score_))
#KNN
from skopt import Optimizer
from skopt import BayesSearchCV
from skopt.space import Real, Categorical, Integer
rf_params = {
'n_neighbors': Integer(1,20),
}
clf = KNeighborsRegressor()
Bayes = BayesSearchCV(clf, rf_params,cv=3,n_iter=10, scoring='neg_mean_squared_error')
Bayes.fit(X, y)
print(Bayes.best_params_)
print("MSE:"+ str(-Bayes.best_score_))
#ANN
from skopt import Optimizer
from skopt import BayesSearchCV
from skopt.space import Real, Categorical, Integer
rf_params = {
'optimizer': ['adam','rmsprop'],
'activation': ['relu','tanh'],
'loss': ['mse','mae'],
'batch_size': [16,32,64],
'neurons':Integer(10,100),
'epochs':[20,50],
#'epochs':[20,50,100,200],
'patience':Integer(3,20)
}
clf = KerasRegressor(build_fn=ANN, verbose=0)
Bayes = BayesSearchCV(clf, rf_params,cv=3,n_iter=10, scoring='neg_mean_squared_error')
Bayes.fit(X, y)
print(Bayes.best_params_)
print("MSE:"+ str(-Bayes.best_score_))
```
### Using skopt.gp_minimize
```
#Random Forest
from skopt.space import Real, Integer
from skopt.utils import use_named_args
reg = RandomForestRegressor()
# Define the hyperparameter configuration space
space = [Integer(10, 100, name='n_estimators'),
Integer(5, 50, name='max_depth'),
Integer(1, 13, name='max_features'),
Integer(2, 11, name='min_samples_split'),
Integer(1, 11, name='min_samples_leaf'),
Categorical(['mse', 'mae'], name='criterion')
]
# Define the objective function
@use_named_args(space)
def objective(**params):
reg.set_params(**params)
return -np.mean(cross_val_score(reg, X, y, cv=3, n_jobs=-1,
scoring="neg_mean_squared_error"))
from skopt import gp_minimize
res_gp = gp_minimize(objective, space, n_calls=20, random_state=0)
#number of iterations is set to 20, you can increase this number if time permits
print("MSE:%.4f" % res_gp.fun)
print(res_gp.x)
#SVM
from skopt.space import Real, Integer
from skopt.utils import use_named_args
reg = SVR(gamma='scale')
space = [Real(0, 50, name='C'),
Categorical(['poly','rbf','sigmoid'], name='kernel'),
Real(0, 1, name='epsilon'),
]
@use_named_args(space)
def objective(**params):
reg.set_params(**params)
return -np.mean(cross_val_score(reg, X, y, cv=3, n_jobs=-1,
scoring="neg_mean_squared_error"))
from skopt import gp_minimize
res_gp = gp_minimize(objective, space, n_calls=20, random_state=0)
print("MSE:%.4f" % res_gp.fun)
print(res_gp.x)
#KNN
from skopt.space import Real, Integer
from skopt.utils import use_named_args
reg = KNeighborsRegressor()
space = [Integer(1, 20, name='n_neighbors')]
@use_named_args(space)
def objective(**params):
reg.set_params(**params)
return -np.mean(cross_val_score(reg, X, y, cv=3, n_jobs=-1,
scoring="neg_mean_squared_error"))
from skopt import gp_minimize
res_gp = gp_minimize(objective, space, n_calls=10, random_state=0)
print("MSE:%.4f" % res_gp.fun)
print(res_gp.x)
```
## HPO Algorithm 5: BO-TPE
Bayesian Optimization with Tree-structured Parzen Estimator (TPE)
**Advantages:**
* Efficient with all types of HPs.
* Keep conditional dependencies.
**Disadvantages:**
* Poor capacity for parallelization.
```
#Random Forest
from hyperopt import hp, fmin, tpe, STATUS_OK, Trials
from sklearn.model_selection import cross_val_score, StratifiedKFold
# Define the objective function
def objective(params):
params = {
'n_estimators': int(params['n_estimators']),
'max_depth': int(params['max_depth']),
'max_features': int(params['max_features']),
"min_samples_split":int(params['min_samples_split']),
"min_samples_leaf":int(params['min_samples_leaf']),
"criterion":str(params['criterion'])
}
clf = RandomForestRegressor( **params)
score = -np.mean(cross_val_score(clf, X, y, cv=3, n_jobs=-1,
scoring="neg_mean_squared_error"))
return {'loss':score, 'status': STATUS_OK }
# Define the hyperparameter configuration space
space = {
'n_estimators': hp.quniform('n_estimators', 10, 100, 1),
'max_depth': hp.quniform('max_depth', 5, 50, 1),
"max_features":hp.quniform('max_features', 1, 13, 1),
"min_samples_split":hp.quniform('min_samples_split',2,11,1),
"min_samples_leaf":hp.quniform('min_samples_leaf',1,11,1),
"criterion":hp.choice('criterion',['mse','mae'])
}
best = fmin(fn=objective,
space=space,
algo=tpe.suggest,
max_evals=20)
print("Random Forest: Hyperopt estimated optimum {}".format(best))
#SVM
from hyperopt import hp, fmin, tpe, STATUS_OK, Trials
from sklearn.model_selection import cross_val_score, StratifiedKFold
def objective(params):
params = {
'C': abs(float(params['C'])),
"kernel":str(params['kernel']),
'epsilon': abs(float(params['epsilon'])),
}
clf = SVR(gamma='scale', **params)
score = -np.mean(cross_val_score(clf, X, y, cv=3, n_jobs=-1,
scoring="neg_mean_squared_error"))
return {'loss':score, 'status': STATUS_OK }
space = {
'C': hp.normal('C', 0, 50),
"kernel":hp.choice('kernel',['poly','rbf','sigmoid']),
'epsilon': hp.normal('epsilon', 0, 1),
}
best = fmin(fn=objective,
space=space,
algo=tpe.suggest,
max_evals=20)
print("SVM: Hyperopt estimated optimum {}".format(best))
#KNN
from hyperopt import hp, fmin, tpe, STATUS_OK, Trials
from sklearn.model_selection import cross_val_score, StratifiedKFold
def objective(params):
params = {
'n_neighbors': abs(int(params['n_neighbors']))
}
clf = KNeighborsRegressor( **params)
score = -np.mean(cross_val_score(clf, X, y, cv=3, n_jobs=-1,
scoring="neg_mean_squared_error"))
return {'loss':score, 'status': STATUS_OK }
space = {
'n_neighbors': hp.quniform('n_neighbors', 1, 20, 1),
}
best = fmin(fn=objective,
space=space,
algo=tpe.suggest,
max_evals=10)
print("KNN: Hyperopt estimated optimum {}".format(best))
#ANN
from hyperopt import hp, fmin, tpe, STATUS_OK, Trials
from sklearn.model_selection import cross_val_score, StratifiedKFold
def objective(params):
params = {
"optimizer":str(params['optimizer']),
"activation":str(params['activation']),
"loss":str(params['loss']),
'batch_size': abs(int(params['batch_size'])),
'neurons': abs(int(params['neurons'])),
'epochs': abs(int(params['epochs'])),
'patience': abs(int(params['patience']))
}
clf = KerasRegressor(build_fn=ANN,**params, verbose=0)
score = -np.mean(cross_val_score(clf, X, y, cv=3,
scoring="neg_mean_squared_error"))
return {'loss':score, 'status': STATUS_OK }
space = {
"optimizer":hp.choice('optimizer',['adam','rmsprop']),
"activation":hp.choice('activation',['relu','tanh']),
"loss":hp.choice('loss',['mse','mae']),
'batch_size': hp.quniform('batch_size', 16, 64, 16),
'neurons': hp.quniform('neurons', 10, 100, 10),
'epochs': hp.quniform('epochs', 20, 50, 10),
'patience': hp.quniform('patience', 3, 20, 3),
}
best = fmin(fn=objective,
space=space,
algo=tpe.suggest,
max_evals=10)
print("ANN: Hyperopt estimated optimum {}".format(best))
```
## HPO Algorithm 6: PSO
Particle swarm optimization (PSO): Each particle in a swarm communicates with other particles to detect and update the current global optimum in each iteration until the final optimum is detected.
**Advantages:**
* Efficient with all types of HPs.
* Enable parallelization.
**Disadvantages:**
* Require proper initialization.
```
#Random Forest
import optunity
import optunity.metrics
# Define the hyperparameter configuration space
search = {
'n_estimators': [10, 100],
'max_features': [1, 13],
'max_depth': [5,50],
"min_samples_split":[2,11],
"min_samples_leaf":[1,11],
}
# Define the objective function
@optunity.cross_validated(x=X, y=y, num_folds=3)
def performance(x_train, y_train, x_test, y_test,n_estimators=None, max_features=None,max_depth=None,min_samples_split=None,min_samples_leaf=None):
# fit the model
model = RandomForestRegressor(n_estimators=int(n_estimators),
max_features=int(max_features),
max_depth=int(max_depth),
min_samples_split=int(min_samples_split),
min_samples_leaf=int(min_samples_leaf),
)
scores=-np.mean(cross_val_score(model, X, y, cv=3, n_jobs=-1,
scoring="neg_mean_squared_error"))
return scores
optimal_configuration, info, _ = optunity.minimize(performance,
solver_name='particle swarm',
num_evals=20,
**search
)
print(optimal_configuration)
print("MSE:"+ str(info.optimum))
#SVM
import optunity
import optunity.metrics
search = {
'C': (0,50),
'kernel':[0,3],
'epsilon': (0, 1)
}
@optunity.cross_validated(x=X, y=y, num_folds=3)
def performance(x_train, y_train, x_test, y_test,C=None,kernel=None,epsilon=None):
# fit the model
if kernel<1:
ke='poly'
elif kernel<2:
ke='rbf'
else:
ke='sigmoid'
model = SVR(C=float(C),
kernel=ke,
gamma='scale',
epsilon=float(epsilon)
)
scores=-np.mean(cross_val_score(model, X, y, cv=3, n_jobs=-1,
scoring="neg_mean_squared_error"))
return scores
optimal_configuration, info, _ = optunity.minimize(performance,
solver_name='particle swarm',
num_evals=20,
**search
)
print(optimal_configuration)
print("MSE:"+ str(info.optimum))
#KNN
import optunity
import optunity.metrics
search = {
'n_neighbors': [1, 20],
}
@optunity.cross_validated(x=X, y=y, num_folds=3)
def performance(x_train, y_train, x_test, y_test,n_neighbors=None):
# fit the model
model = KNeighborsRegressor(n_neighbors=int(n_neighbors),
)
scores=-np.mean(cross_val_score(model, X, y, cv=3, n_jobs=-1,
scoring="neg_mean_squared_error"))
return scores
optimal_configuration, info, _ = optunity.minimize(performance,
solver_name='particle swarm',
num_evals=10,
**search
)
print(optimal_configuration)
print("MSE:"+ str(info.optimum))
#ANN
import optunity
import optunity.metrics
search = {
'optimizer':[0,2],
'activation':[0,2],
'loss':[0,2],
'batch_size': [0, 2],
'neurons': [10, 100],
'epochs': [20, 50],
'patience': [3, 20],
}
@optunity.cross_validated(x=X, y=y, num_folds=3)
def performance(x_train, y_train, x_test, y_test,optimizer=None,activation=None,loss=None,batch_size=None,neurons=None,epochs=None,patience=None):
# fit the model
if optimizer<1:
op='adam'
else:
op='rmsprop'
if activation<1:
ac='relu'
else:
ac='tanh'
if loss<1:
lo='mse'
else:
lo='mae'
if batch_size<1:
ba=16
else:
ba=32
model = ANN(optimizer=op,
activation=ac,
loss=lo,
batch_size=ba,
neurons=int(neurons),
epochs=int(epochs),
patience=int(patience)
)
clf = KerasRegressor(build_fn=ANN, verbose=0)
scores=-np.mean(cross_val_score(clf, X, y, cv=3,
scoring="neg_mean_squared_error"))
return scores
optimal_configuration, info, _ = optunity.minimize(performance,
solver_name='particle swarm',
num_evals=20,
**search
)
print(optimal_configuration)
print("MSE:"+ str(info.optimum))
```
## HPO Algorithm 7: Genetic Algorithm
Genetic algorithms detect well-performing hyper-parameter combinations in each generation, and pass them to the next generation until the best-performing combination is identified.
**Advantages:**
* Efficient with all types of HPs.
* Not require good initialization.
**Disadvantages:**
* Poor capacity for parallelization.
### Using DEAP
```
#Random Forest
from evolutionary_search import EvolutionaryAlgorithmSearchCV
from scipy.stats import randint as sp_randint
# Define the hyperparameter configuration space
rf_params = {
'n_estimators': range(10,100),
"max_features":range(1,13),
'max_depth': range(5,50),
"min_samples_split":range(2,11),
"min_samples_leaf":range(1,11),
"criterion":['mse','mae']
}
clf = RandomForestRegressor(random_state=0)
# Set the hyperparameters of GA
ga1 = EvolutionaryAlgorithmSearchCV(estimator=clf,
params=rf_params,
scoring="neg_mean_squared_error",
cv=3,
verbose=1,
population_size=10,
gene_mutation_prob=0.10,
gene_crossover_prob=0.5,
tournament_size=3,
generations_number=5,
n_jobs=1)
ga1.fit(X, y)
print(ga1.best_params_)
print("MSE:"+ str(-ga1.best_score_))
#SVM
from evolutionary_search import EvolutionaryAlgorithmSearchCV
rf_params = {
'C': np.random.uniform(0,50,1000),
"kernel":['poly','rbf','sigmoid'],
'epsilon': np.random.uniform(0,1,100),
}
clf = SVR(gamma='scale')
ga1 = EvolutionaryAlgorithmSearchCV(estimator=clf,
params=rf_params,
scoring="neg_mean_squared_error",
cv=3,
verbose=1,
population_size=10,
gene_mutation_prob=0.10,
gene_crossover_prob=0.5,
tournament_size=3,
generations_number=5,
n_jobs=1)
ga1.fit(X, y)
print(ga1.best_params_)
print("MSE:"+ str(-ga1.best_score_))
#KNN
from evolutionary_search import EvolutionaryAlgorithmSearchCV
rf_params = {
'n_neighbors': range(1,20),
}
clf = KNeighborsRegressor()
ga1 = EvolutionaryAlgorithmSearchCV(estimator=clf,
params=rf_params,
scoring="neg_mean_squared_error",
cv=3,
verbose=1,
population_size=10,
gene_mutation_prob=0.10,
gene_crossover_prob=0.5,
tournament_size=3,
generations_number=5,
n_jobs=1)
ga1.fit(X, y)
print(ga1.best_params_)
print("MSE:"+ str(-ga1.best_score_))
#ANN
from evolutionary_search import EvolutionaryAlgorithmSearchCV
# Define the hyperparameter configuration space
rf_params = {
'optimizer': ['adam','rmsprop'],
'activation': ['relu','tanh'],
'loss': ['mse','mae'],
'batch_size': [16,32,64],
'neurons':range(10,100),
'epochs':[20,50],
#'epochs':[20,50,100,200],
'patience':range(3,20)
}
clf = KerasRegressor(build_fn=ANN, verbose=0)
# Set the hyperparameters of GA
ga1 = EvolutionaryAlgorithmSearchCV(estimator=clf,
params=rf_params,
scoring="neg_mean_squared_error",
cv=3,
verbose=1,
population_size=10,
gene_mutation_prob=0.10,
gene_crossover_prob=0.5,
tournament_size=3,
generations_number=5,
n_jobs=1)
ga1.fit(X, y)
print(ga1.best_params_)
print("MSE:"+ str(-ga1.best_score_))
```
### Using TPOT
```
#Random Forest
from tpot import TPOTRegressor
# Define the hyperparameter configuration space
parameters = {
'n_estimators': range(20,200),
"max_features":range(1,13),
'max_depth': range(10,100),
"min_samples_split":range(2,11),
"min_samples_leaf":range(1,11),
#"criterion":['mse','mae']
}
# Set the hyperparameters of GA
ga2 = TPOTRegressor(generations= 3, population_size= 10, offspring_size= 5,
verbosity= 3, early_stop= 5,
config_dict=
{'sklearn.ensemble.RandomForestRegressor': parameters},
cv = 3, scoring = 'neg_mean_squared_error')
ga2.fit(X, y)
#SVM
from tpot import TPOTRegressor
parameters = {
'C': np.random.uniform(0,50,1000),
"kernel":['poly','rbf','sigmoid'],
'epsilon': np.random.uniform(0,1,100),
'gamma': ['scale']
}
ga2 = TPOTRegressor(generations= 3, population_size= 10, offspring_size= 5,
verbosity= 3, early_stop= 5,
config_dict=
{'sklearn.svm.SVR': parameters},
cv = 3, scoring = 'neg_mean_squared_error')
ga2.fit(X, y)
#KNN
from tpot import TPOTRegressor
parameters = {
'n_neighbors': range(1,20),
}
ga2 = TPOTRegressor(generations= 3, population_size= 10, offspring_size= 5,
verbosity= 3, early_stop= 5,
config_dict=
{'sklearn.neighbors.KNeighborsRegressor': parameters},
cv = 3, scoring = 'neg_mean_squared_error')
ga2.fit(X, y)
```
## CNN on MNIST digits classification
This example is the same as the MLP for MNIST classification. The difference is we are going to use `Conv2D` layers instead of `Dense` layers.
The model that will be constructed below is made of:
- First 2 layers - `Conv2D-ReLU-MaxPool`
- 3rd layer - `Conv2D-ReLU`
- 4th layer - `Dense(10)`
- Output Activation - `softmax`
- Optimizer - `SGD`
Let us first load the packages and perform the initial pre-processing such as loading the dataset, performing normalization and conversion of labels to one-hot.
Recall that in our `3-Dense` MLP example, we achieved ~95.3% accuracy at 269k parameters. Here, we can achieve ~98.5% using 105k parameters. CNN is more parameter efficient.
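As a rough sanity check of the ~105k figure (a sketch assuming exactly the architecture built below: three 3x3 conv layers with 64 filters and 'same' padding, two 2x2 max-pooling layers, and a 10-unit dense output), the parameter count can be tallied by hand:
```
# Conv2D parameters = kernel_h * kernel_w * in_channels * filters + filters (biases)
conv1 = 3 * 3 * 1 * 64 + 64           # 640
conv2 = 3 * 3 * 64 * 64 + 64          # 36,928
conv3 = 3 * 3 * 64 * 64 + 64          # 36,928
# Two 2x2 max-pools shrink the 28x28 maps to 7x7, so Flatten yields 7*7*64 = 3136 features
dense = 7 * 7 * 64 * 10 + 10          # 31,370
print(conv1 + conv2 + conv3 + dense)  # 105,866 -> the "~105k parameters" quoted above
```
This should match the totals reported by `model.summary()` further down.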
```
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Activation, Dense, Dropout
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten
from tensorflow.keras.utils import to_categorical, plot_model
from tensorflow.keras.datasets import mnist
# load mnist dataset
(x_train, y_train), (x_test, y_test) = mnist.load_data()
# compute the number of labels
num_labels = len(np.unique(y_train))
# convert to one-hot vector
y_train = to_categorical(y_train)
y_test = to_categorical(y_test)
# input image dimensions
image_size = x_train.shape[1]
# resize and normalize
x_train = np.reshape(x_train,[-1, image_size, image_size, 1])
x_test = np.reshape(x_test,[-1, image_size, image_size, 1])
x_train = x_train.astype('float32') / 255
x_test = x_test.astype('float32') / 255
```
### Hyper-parameters
These hyper-parameters are similar to those in our MLP example. The differences are `kernel_size = 3`, which is a typical kernel size in most CNNs, and `filters = 64`.
```
# network parameters
# image is processed as is (square grayscale)
input_shape = (image_size, image_size, 1)
batch_size = 128
kernel_size = 3
filters = 64
```
### Sequential Model Building
The model is similar to our previous example in MLP. The difference is that we use `Conv2D` instead of `Dense`. Note that, because of the mismatch in dimensions, the output of the last `Conv2D` is flattened via a `Flatten()` layer to match the input vector dimensions of the `Dense` layer. Note that although we use `Activation('softmax')` as the last layer, this could also be integrated into the `Dense` layer through the parameter `activation='softmax'`; both are equivalent.
```
# model is a stack of CNN-ReLU-MaxPooling
model = Sequential()
model.add(Conv2D(filters=filters,
kernel_size=kernel_size,
activation='relu',
padding='same',
input_shape=input_shape))
model.add(MaxPooling2D())
model.add(Conv2D(filters=filters,
kernel_size=kernel_size,
padding='same',
activation='relu'))
model.add(MaxPooling2D())
model.add(Conv2D(filters=filters,
kernel_size=kernel_size,
padding='same',
activation='relu'))
model.add(Flatten())
# dropout added as regularizer
# model.add(Dropout(dropout))
# output layer is 10-dim one-hot vector
model.add(Dense(num_labels))
model.add(Activation('softmax'))
model.summary()
```
## Model Training and Evaluation
After building the model, it is time to train and evaluate. This part is similar to MLP training and evaluation.
```
#plot_model(model, to_file='cnn-mnist.png', show_shapes=True)
# loss function for one-hot vector
# use of the sgd optimizer (as described above)
# accuracy is good metric for classification tasks
model.compile(loss='categorical_crossentropy',
optimizer='sgd',
metrics=['accuracy'])
# train the network
model.fit(x_train, y_train, epochs=20, batch_size=batch_size)
loss, acc = model.evaluate(x_test, y_test, batch_size=batch_size)
print("\nTest accuracy: %.1f%%" % (100.0 * acc))
```
<a href="https://colab.research.google.com/github/BNN-UPC/ignnition/blob/ignnition-nightly/notebooks/shortest_path.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# IGNNITION: Quick start tutorial
### **Problem**: Find the shortest path in graphs with a Graph Neural Network
Find more details on this quick-start tutorial at:
https://ignnition.net/doc/quick_tutorial/
---
# Prepare the environment
#### **Note**: Follow the instructions below to finish the installation
```
#@title Installing libraries and load resources
#@markdown ####Hit **"enter"** to complete the installation of libraries
!add-apt-repository ppa:deadsnakes/ppa
!apt-get update
!apt-get install python3.7
!python -m pip install --upgrade pip
!pip install jupyter-client==6.1.5
!pip install ignnition==1.2.2
!pip install ipython-autotime
#@title Import libraries { form-width: "30%" }
import networkx as nx
import random
import json
from networkx.readwrite import json_graph
import os
import ignnition
%load_ext tensorboard
%load_ext autotime
#@markdown #### Download the three YAML files we will need later (train_options.yaml, model_description.yaml, global_variables.yaml)
# Download YAML files for this tutorial
!curl -O https://raw.githubusercontent.com/BNN-UPC/ignnition/ignnition-nightly/examples/Shortest_Path/train_options.yaml
!curl -O https://raw.githubusercontent.com/BNN-UPC/ignnition/ignnition-nightly/examples/Shortest_Path/global_variables.yaml
!curl -O https://raw.githubusercontent.com/BNN-UPC/ignnition/ignnition-nightly/examples/Shortest_Path/model_description.yaml
#@title Generate the datasets (training and validation)
import os
def generate_random_graph(min_nodes, max_nodes, min_edge_weight, max_edge_weight, p):
while True:
# Create a random Erdos Renyi graph
G = nx.erdos_renyi_graph(random.randint(min_nodes, max_nodes), p)
complement = list(nx.k_edge_augmentation(G, k=1, partial=True))
G.add_edges_from(complement)
nx.set_node_attributes(G, 0, 'src-tgt')
nx.set_node_attributes(G, 0, 'sp')
nx.set_node_attributes(G, 'node', 'entity')
# Assign randomly weights to graph edges
for (u, v, w) in G.edges(data=True):
w['weight'] = random.randint(min_edge_weight, max_edge_weight)
# Select a source and target nodes to compute the shortest path
src, tgt = random.sample(list(G.nodes), 2)
G.nodes[src]['src-tgt'] = 1
G.nodes[tgt]['src-tgt'] = 1
# Compute all the shortest paths between source and target nodes
try:
shortest_paths = list(nx.all_shortest_paths(G, source=src, target=tgt,weight='weight'))
        except nx.NetworkXNoPath:
            shortest_paths = []
# Check if there exists only one shortest path
if len(shortest_paths) == 1:
for node in shortest_paths[0]:
G.nodes[node]['sp'] = 1
return nx.DiGraph(G)
def generate_dataset(file_name, num_samples, min_nodes=5, max_nodes=15, min_edge_weight=1, max_edge_weight=10, p=0.3):
samples = []
for _ in range(num_samples):
G = generate_random_graph(min_nodes, max_nodes, min_edge_weight, max_edge_weight, p)
G.remove_nodes_from([node for node, degree in dict(G.degree()).items() if degree == 0])
samples.append(json_graph.node_link_data(G))
with open(file_name, "w") as f:
json.dump(samples, f)
root_dir="./data"
if not os.path.exists(root_dir):
os.makedirs(root_dir)
if not os.path.exists(root_dir+"/train"):
os.makedirs(root_dir+"/train")
if not os.path.exists(root_dir + "/validation"):
os.makedirs(root_dir + "/validation")
generate_dataset("./data/train/data.json", 20000)
generate_dataset("./data/validation/data.json", 1000)
```
---
# GNN model training
```
#@title Remove all the models previously trained (CheckPoints)
#@markdown (You do not need to execute this the first time)
! rm -r ./CheckPoint
! rm -r ./computational_graphs
#@title Load TensorBoard to visualize the evolution of learning metrics along training
#@markdown **IMPORTANT NOTE**: Click on "settings" in the TensorBoard GUI and check the option "Reload data" to see the evolution in real time. Note you can set the reload time interval (in seconds).
from tensorboard import notebook
notebook.list() # View open TensorBoard instances
dir="./CheckPoint"
if not os.path.exists(dir):
os.makedirs(dir)
%tensorboard --logdir $dir
# To terminate previous TensorBoard instances
# !kill 2953
# !ps aux
#@title Run the training of your GNN model
#@markdown <u>**Note**</u>: You can stop the training whenever you want and continue making predictions below
import ignnition
model = ignnition.create_model(model_dir= './')
model.computational_graph()
model.train_and_validate()
```
---
# Make predictions
## (This can only be executed once the training is finished or stopped)
```
#@title Load functions to generate random graphs and print them
import os
import networkx as nx
import matplotlib.pyplot as plt
import json
from networkx.readwrite import json_graph
import ignnition
import numpy as np
import random
%load_ext autotime
def generate_random_graph(min_nodes, max_nodes, min_edge_weight, max_edge_weight, p):
while True:
# Create a random Erdos Renyi graph
G = nx.erdos_renyi_graph(random.randint(min_nodes, max_nodes), p)
complement = list(nx.k_edge_augmentation(G, k=1, partial=True))
G.add_edges_from(complement)
nx.set_node_attributes(G, 0, 'src-tgt')
nx.set_node_attributes(G, 0, 'sp')
nx.set_node_attributes(G, 'node', 'entity')
# Assign randomly weights to graph edges
for (u, v, w) in G.edges(data=True):
w['weight'] = random.randint(min_edge_weight, max_edge_weight)
# Select the source and target nodes to compute the shortest path
src, tgt = random.sample(list(G.nodes), 2)
G.nodes[src]['src-tgt'] = 1
G.nodes[tgt]['src-tgt'] = 1
# Compute all the shortest paths between source and target nodes
try:
shortest_paths = list(nx.all_shortest_paths(G, source=src, target=tgt,weight='weight'))
        except nx.NetworkXNoPath:
            shortest_paths = []
# Check if there exists only one shortest path
if len(shortest_paths) == 1:
if len(shortest_paths[0])>=3 and len(shortest_paths[0])<=5:
for node in shortest_paths[0]:
G.nodes[node]['sp'] = 1
return shortest_paths[0], nx.DiGraph(G)
def print_graph_predictions(G, path, predictions,ax):
predictions = np.array(predictions)
node_border_colors = []
links = []
for i in range(len(path)-1):
links.append([path[i], path[i+1]])
links.append([path[i+1], path[i]])
# Add colors to node borders for source and target nodes
for node in G.nodes(data=True):
if node[1]['src-tgt'] == 1:
node_border_colors.append('red')
else:
node_border_colors.append('white')
# Add colors for predictions [0,1]
node_colors = predictions
# Add colors for edges
edge_colors = []
for edge in G.edges(data=True):
e=[edge[0],edge[1]]
if e in links:
edge_colors.append('red')
else:
edge_colors.append('black')
pos= nx.shell_layout(G)
vmin = node_colors.min()
vmax = node_colors.max()
vmin = 0
vmax = 1
cmap = plt.cm.coolwarm
nx.draw_networkx_nodes(G, pos=pos, node_color=node_colors, cmap=cmap, vmin=vmin, vmax=vmax,
edgecolors=node_border_colors, linewidths=4, ax=ax)
nx.draw_networkx_edges(G, pos=pos, edge_color=edge_colors, arrows=False, ax=ax, width=2)
nx.draw_networkx_edge_labels(G, pos=pos, label_pos=0.5, edge_labels=nx.get_edge_attributes(G, 'weight'), ax=ax)
sm = plt.cm.ScalarMappable(cmap=cmap, norm=plt.Normalize(vmin=vmin, vmax=vmax))
sm.set_array([])
plt.colorbar(sm, ax=ax)
def print_graph_solution(G, path, predictions,ax, pred_th):
predictions = np.array(predictions)
node_colors = []
node_border_colors = []
links = []
for i in range(len(path)-1):
links.append([path[i], path[i+1]])
links.append([path[i+1], path[i]])
# Add colors on node borders for source and target nodes
for node in G.nodes(data=True):
if node[1]['src-tgt'] == 1:
node_border_colors.append('red')
else:
node_border_colors.append('white')
# Add colors for predictions Blue or Red
cmap = plt.cm.get_cmap('coolwarm')
dark_red = cmap(1.0)
for p in predictions:
if p >= pred_th:
node_colors.append(dark_red)
else:
node_colors.append('blue')
# Add colors for edges
edge_colors = []
for edge in G.edges(data=True):
e=[edge[0],edge[1]]
if e in links:
edge_colors.append('red')
else:
edge_colors.append('black')
pos= nx.shell_layout(G)
nx.draw_networkx_nodes(G, pos=pos, node_color=node_colors, edgecolors=node_border_colors, linewidths=4, ax=ax)
nx.draw_networkx_edges(G, pos=pos, edge_color=edge_colors, arrows=False, ax=ax, width=2)
nx.draw_networkx_edge_labels(G, pos=pos, label_pos=0.5, edge_labels=nx.get_edge_attributes(G, 'weight'), ax=ax)
def print_input_graph(G, ax):
node_colors = []
node_border_colors = []
# Add colors to node borders for source and target nodes
for node in G.nodes(data=True):
if node[1]['src-tgt'] == 1:
node_border_colors.append('red')
else:
node_border_colors.append('white')
pos= nx.shell_layout(G)
nx.draw_networkx_nodes(G, pos=pos, edgecolors=node_border_colors, linewidths=4, ax=ax)
nx.draw_networkx_edges(G, pos=pos, arrows=False, ax=ax, width=2)
nx.draw_networkx_edge_labels(G, pos=pos, label_pos=0.5, edge_labels=nx.get_edge_attributes(G, 'weight'), ax=ax)
#@title Make predictions on random graphs
#@markdown **NOTE**: IGNNITION will automatically load the latest trained model (CheckPoint) to make the predictions
dataset_samples = []
sh_path, G = generate_random_graph(min_nodes=8, max_nodes=12, min_edge_weight=1, max_edge_weight=10, p=0.3)
graph = G.to_undirected()
dataset_samples.append(json_graph.node_link_data(G))
# write prediction dataset
root_dir="./data"
if not os.path.exists(root_dir):
os.makedirs(root_dir)
if not os.path.exists(root_dir+"/test"):
os.makedirs(root_dir+"/test")
with open(root_dir+"/test/data.json", "w") as f:
json.dump(dataset_samples, f)
# Make predictions
predictions = model.predict()
# Print the results
fig, axes = plt.subplots(nrows=1, ncols=3)
ax = axes.flatten()
# Print input graph
ax1 = ax[0]
ax1.set_title("Input graph")
print_input_graph(graph, ax1)
# Print graph with predictions (soft values)
ax1 = ax[1]
ax1.set_title("GNN predictions (soft values)")
print_graph_predictions(graph, sh_path, predictions[0], ax1)
# Print solution of the GNN
pred_th = 0.5
ax1 = ax[2]
ax1.set_title("GNN solution (p >= "+str(pred_th)+")")
print_graph_solution(graph, sh_path, predictions[0], ax1, pred_th)
# Show plot in full screen
plt.rcParams['figure.figsize'] = [10, 4]
plt.rcParams['figure.dpi'] = 100
plt.tight_layout()
plt.show()
```
---
# Try to improve your GNN model
**Optional exercise**:
The previous training was executed with some parameters set by default, so the accuracy of the GNN model is far from optimal.
Here, we propose an alternative configuration that defines better training parameters for the GNN model.
For this, you can check and modify the following YAML files to configure your GNN model:
* /content/model_description.yaml -> GNN model description
* /content/train_options.yaml -> Configuration of training parameters
Try to define an optimizer with learning rate decay and set the number of samples and epochs by adding the following lines to the train_options.yaml file:
```
optimizer:
type: Adam
learning_rate: # define a schedule
type: ExponentialDecay
initial_learning_rate: 0.001
decay_steps: 10000
decay_rate: 0.5
...
batch_size: 1
epochs: 150
epoch_size: 200
```
Then, you can train a new model from scratch by executing all the code snippets from the section "GNN model training".
Please note that the training process may take quite a long time depending on the machine where it is executed.
In this example, there are a total of 30,000 training samples:
1 sample/step * 200 steps/epoch * 150 epochs = 30,000 samples
# Step 1) Data Preparation
```
%run data_prep.py INTC
import pandas as pd
df = pd.read_csv("../1_Data/INTC.csv",infer_datetime_format=True, parse_dates=['dt'], index_col=['dt'])
trainCount=int(len(df)*0.4)
dfTrain = df.iloc[:trainCount]
dfTest = df.iloc[trainCount:]
dfTest.to_csv('local_test/test_dir/input/data/training/data.csv')
dfTest.head()
%matplotlib notebook
dfTest["close"].plot()
```
# Step 2) Modify Strategy Configuration
In the following cell, you can adjust the parameters for the strategy.
* `user` = Name for Leaderboard (optional)
* `go_long` = Go Long for Breakout (true or false)
* `go_short` = Go Short for Breakout (true or false)
* `period` = Length of window for previous high and low
* `size` = The number of shares for a transaction
`Tip`: A good starting point for improving the strategy is to lengthen the period of the previous high and low. Equity markets tend to have a long bias, so considering only long trades might also improve performance (an example of such a configuration is sketched below).
```
%%writefile model/algo_config
{ "user" : "user",
"go_long" : true,
"go_short" : true,
"period" : 9,
"size" : 1000
}
%run update_config.py daily_breakout
```
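Following the tip above, a modified configuration that trades only long breakouts over a longer lookback window might look like this (the values are hypothetical and should be tuned against the backtest results):
```
%%writefile model/algo_config
{ "user" : "user",
  "go_long" : true,
  "go_short" : false,
  "period" : 20,
  "size" : 1000
}
```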
# Step 3) Modify Strategy Code
`Tip`: A good starting point for improving the strategy is to add additional indicators like ATR (Average True Range) before placing a trade. You want to avoid false signals if there is not enough volatility (a rough sketch of this idea follows the links below).
Here are some helpful links:
* Backtrader Documentation: https://www.backtrader.com/docu/strategy/
* TA-Lib Indicator Reference: https://www.backtrader.com/docu/talibindautoref/
* Backtrader Indicator Reference: https://www.backtrader.com/docu/indautoref/
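As a rough sketch of the ATR idea from the tip above (illustrative only and not part of the provided template: the `atr_period` and `min_atr` values are made up, and the class is written against plain `backtrader` rather than the `StrategyTemplate` base class used below):
```
# Illustrative sketch: gate breakout entries on an ATR volatility filter.
# Hypothetical parameters (atr_period, min_atr); not part of the original strategy.
import backtrader as bt

class BreakoutWithATR(bt.Strategy):
    params = dict(period=9, size=1000, atr_period=14, min_atr=0.5)

    def __init__(self):
        self.highest = bt.ind.Highest(period=self.p.period)
        self.lowest = bt.ind.Lowest(period=self.p.period)
        self.atr = bt.ind.ATR(period=self.p.atr_period)

    def next(self):
        if self.atr[0] < self.p.min_atr:
            return  # volatility too low -> ignore potential breakouts
        if not self.position:
            if self.datas[0] > self.highest[-1]:
                self.buy(size=self.p.size)   # go long on an upside breakout
            elif self.datas[0] < self.lowest[-1]:
                self.sell(size=self.p.size)  # go short on a downside breakout
        elif self.position.size > 0 and self.datas[0] < self.highest[-1]:
            self.close()
        elif self.position.size < 0 and self.datas[0] > self.lowest[-1]:
            self.close()
```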
```
%%writefile model/algo_daily_breakout.py
import backtrader as bt
from algo_base import *
import pytz
from pytz import timezone
class MyStrategy(StrategyTemplate):
def __init__(self): # Initiation
super(MyStrategy, self).__init__()
self.highest = bt.ind.Highest(period=self.config["period"])
self.lowest = bt.ind.Lowest(period=self.config["period"])
self.size = self.config["size"]
def next(self): # Processing
super(MyStrategy, self).next()
dt=self.datas[0].datetime.datetime(0)
if not self.position:
if self.config["go_long"] and self.datas[0] > self.highest[-1]:
self.buy(size=self.size) # Go long
elif self.config["go_short"] and self.datas[0] < self.lowest[-1]:
self.sell(size=self.size) # Go short
elif self.position.size>0 and self.datas[0] < self.highest[-1]:
self.close()
elif self.position.size<0 and self.datas[0] > self.lowest[-1]:
self.close()
```
# Step 4) Backtest Locally (historical data)
**Please note that the initial docker image build may take up to 5 min. Subsequent runs are fast.**
```
#Build Local Algo Image
!docker build -t algo_$(cat model/algo_name) .
!docker run -v $(pwd)/local_test/test_dir:/opt/ml --rm algo_$(cat model/algo_name) train
from IPython.display import Image
Image(filename='local_test/test_dir/model/chart.png')
```
## Refine your trading strategy (steps 2 to 4). Once you are ready to test the performance of your strategy in a forwardtest, move on to the next step.
# Step 5) Forwardtest on SageMaker (simulated data) and submit performance
**Please note that the forwardtest in SageMaker runs each time with a new simulated dataset to validate the performance of the strategy. Feel free to run it multiple times to compare performance.**
```
#Deploy Algo Image to ECS
!./build_and_push.sh
#Run Remote Forwardtest via SageMaker
import json
import sagemaker as sage
from sagemaker import get_execution_role
from sagemaker.estimator import Estimator
role = get_execution_role()
sess = sage.Session()
WORK_DIRECTORY = 'local_test/test_dir/input/data/training'
data_location = sess.upload_data(WORK_DIRECTORY, key_prefix='data')
print(data_location)
with open('model/algo_config', 'r') as f:
config = json.load(f)
algo_name=config['algo_name']
config['sim_data']=True
prefix='algo_'+algo_name
job_name=prefix.replace('_','-')
account = sess.boto_session.client('sts').get_caller_identity()['Account']
region = sess.boto_session.region_name
image = f'{account}.dkr.ecr.{region}.amazonaws.com/{prefix}:latest'
algo = sage.estimator.Estimator(
image_name=image,
role=role,
train_instance_count=1,
train_instance_type='ml.m4.xlarge',
output_path="s3://{}/output".format(sess.default_bucket()),
sagemaker_session=sess,
base_job_name=job_name,
hyperparameters=config,
metric_definitions=[
{
"Name": "algo:pnl",
"Regex": "Total PnL:(.*?)]"
},
{
"Name": "algo:sharpe_ratio",
"Regex": "Sharpe Ratio:(.*?),"
}
])
algo.fit(data_location)
#Get Algo Metrics
from sagemaker.analytics import TrainingJobAnalytics
latest_job_name = algo.latest_training_job.job_name
metrics_dataframe = TrainingJobAnalytics(training_job_name=latest_job_name).dataframe()
metrics_dataframe
#Get Algo Chart from S3
model_name=algo.model_data.replace('s3://'+sess.default_bucket()+'/','')
import boto3
s3 = boto3.resource('s3')
my_bucket = s3.Bucket(sess.default_bucket())
my_bucket.download_file(model_name,'model.tar.gz')
!tar -xzf model.tar.gz
!rm model.tar.gz
from IPython.display import Image
Image(filename='chart.png')
```
### Congratulations! You've completed this strategy. Verify your submission on the leaderboard.
```
%run leaderboard.py
```
###### Text provided under a Creative Commons Attribution license, CC-BY. All code is made available under the FSF-approved MIT license. (c) Daniel Koehn based on Jupyter notebooks by Marc Spiegelman [Dynamical Systems APMA 4101](https://github.com/mspieg/dynamical-systems) and Kyle Mandli from his course [Introduction to numerical methods](https://github.com/mandli/intro-numerical-methods), notebook style sheet by L.A. Barba, N.C. Clementi [Engineering Computations](https://github.com/engineersCode)
```
# Execute this cell to load the notebook's style sheet, then ignore it
from IPython.core.display import HTML
css_file = '../style/custom.css'
HTML(open(css_file, "r").read())
```
# Exploring the Lorenz Equations
The Lorenz Equations are a 3-D dynamical system that is a simplified model of Rayleigh-Benard thermal convection. They are derived and described in detail in Edward Lorenz' 1963 paper [Deterministic Nonperiodic Flow](http://journals.ametsoc.org/doi/pdf/10.1175/1520-0469%281963%29020%3C0130%3ADNF%3E2.0.CO%3B2) in the Journal of Atmospheric Science. In their classical form they can be written
\begin{equation}
\begin{split}
\frac{\partial X}{\partial t} &= \sigma( Y - X)\\
\frac{\partial Y}{\partial t} &= rX - Y - XZ \\
\frac{\partial Z}{\partial t} &= XY -b Z
\end{split}
\tag{1}
\end{equation}
where $\sigma$ is the "Prandtl number", $r = \mathrm{Ra}/\mathrm{Ra}_c$ is a scaled "Rayleigh number" and $b$ is a parameter that is related to the aspect ratio of a convecting cell in the original derivation.
Here, $X(t)$, $Y(t)$ and $Z(t)$ are the time dependent amplitudes of the streamfunction and temperature fields, expanded in a highly truncated Fourier Series where the streamfunction contains one cellular mode
$$
\psi(x,z,t) = X(t)\sin(a\pi x)\sin(\pi z)
$$
and temperature has two modes
$$
\theta(x,z,t) = Y(t)\cos(a\pi x)\sin(\pi z) - Z(t)\sin(2\pi z)
$$
This Jupyter notebook will provide some simple Python routines for numerical integration and visualization of the Lorenz Equations.
## Numerical solution of the Lorenz Equations
We have to solve the coupled ordinary differential equations (1) using the finite difference method introduced in [this lecture](https://nbviewer.jupyter.org/github/daniel-koehn/Differential-equations-earth-system/blob/master/02_finite_difference_intro/1_fd_intro.ipynb).
The approach is similar to the one used in [Exercise: How to sail without wind](https://nbviewer.jupyter.org/github/daniel-koehn/Differential-equations-earth-system/blob/master/02_finite_difference_intro/3_fd_ODE_example_sailing_wo_wind.ipynb), except that eqs. (1) are coupled ordinary differential equations, we have an additional differential equation, and the right-hand sides are more complex.
Approximating the temporal derivatives in eqs. (1) using the **backward FD operator**
\begin{equation}
\frac{df}{dt} = \frac{f(t)-f(t-dt)}{dt} \notag
\end{equation}
with the time sample interval $dt$ leads to
\begin{equation}
\begin{split}
\frac{X(t)-X(t-dt)}{dt} &= \sigma(Y - X)\\
\frac{Y(t)-Y(t-dt)}{dt} &= rX - Y - XZ\\
\frac{Z(t)-Z(t-dt)}{dt} &= XY -b Z\\
\end{split}
\notag
\end{equation}
After solving for $X(t), Y(t), Z(t)$, we get the **explicit time integration scheme** for the Lorenz equations:
\begin{equation}
\begin{split}
X(t) &= X(t-dt) + dt\; \sigma(Y - X)\\
Y(t) &= Y(t-dt) + dt\; (rX - Y - XZ)\\
Z(t) &= Z(t-dt) + dt\; (XY -b Z)\\
\end{split}
\notag
\end{equation}
and by introducing a temporal discretization $t^n = n\, dt$ with $n \in [0,1,...,nt]$, where $nt$ denotes the maximum number of time steps, the final FD code becomes:
\begin{equation}
\begin{split}
X^{n} &= X^{n-1} + dt\; \sigma(Y^{n-1} - X^{n-1})\\
Y^{n} &= Y^{n-1} + dt\; (rX^{n-1} - Y^{n-1} - X^{n-1}Z^{n-1})\\
Z^{n} &= Z^{n-1} + dt\; (X^{n-1}Y^{n-1} - b Z^{n-1})\\
\end{split}
\tag{2}
\end{equation}
The Python implementation is quite straightforward, because we can reuse some old code ...
##### Exercise 1
Finish the function `Lorenz`, which computes and returns the RHS of eqs. (1) for a given $X$, $Y$, $Z$.
```
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
def Lorenz(X,Y,Z,sigma,r,b):
'''
Returns the RHS of the Lorenz equations
'''
# ADD RHS OF LORENZ EQUATIONS (1) HERE!
X_dot_rhs =
Y_dot_rhs =
Z_dot_rhs =
# return the state derivatives
return X_dot_rhs, Y_dot_rhs, Z_dot_rhs
```
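For reference, one possible completion of the function above simply transcribes the right-hand sides of eqs. (1):
```
def Lorenz_reference(X, Y, Z, sigma, r, b):
    '''
    One possible completion: RHS of the Lorenz equations (1)
    '''
    X_dot_rhs = sigma * (Y - X)
    Y_dot_rhs = r * X - Y - X * Z
    Z_dot_rhs = X * Y - b * Z
    # return the state derivatives
    return X_dot_rhs, Y_dot_rhs, Z_dot_rhs
```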
Next, we write the function `SolveLorenz` to solve the Lorenz equations, based on the `sailing_boring` code from the [Exercise: How to sail without wind](https://nbviewer.jupyter.org/github/daniel-koehn/Differential-equations-earth-system/blob/master/02_finite_difference_intro/3_fd_ODE_example_sailing_wo_wind.ipynb)
##### Exercise 2
Finish the FD-code implementation `SolveLorenz`
```
def SolveLorenz(tmax, dt, X0, Y0, Z0, sigma=10.,r=28.,b=8./3.0):
'''
Integrate the Lorenz equations from initial condition (X0,Y0,Z0)^T at t=0
for parameters sigma, r, b
Returns: X, Y, Z, time
'''
# Compute number of time steps based on tmax and dt
nt = (int)(tmax/dt)
# vectors for storage of X, Y, Z positions and time t
X = np.zeros(nt + 1)
Y = np.zeros(nt + 1)
Z = np.zeros(nt + 1)
t = np.zeros(nt + 1)
# define initial position and time
X[0] = X0
Y[0] = Y0
Z[0] = Z0
# start time stepping over time samples n
for n in range(1,nt + 1):
# compute RHS of Lorenz eqs. (1) at current position (X,Y,Z)^T
X_dot_rhs, Y_dot_rhs, Z_dot_rhs = Lorenz(X[n-1],Y[n-1],Z[n-1],sigma,r,b)
# compute new position using FD approximation of time derivative
# ADD FD SCHEME OF THE LORENZ EQS. HERE!
X[n] =
Y[n] =
Z[n] =
t[n] = n * dt
return X, Y, Z, t
```
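Likewise, a compact reference version of the solver with the update step filled in according to the explicit scheme (2) could look as follows (the exercise still asks you to complete `SolveLorenz` yourself):
```
def SolveLorenz_reference(tmax, dt, X0, Y0, Z0, sigma=10., r=28., b=8./3.):
    '''
    Reference solver: explicit time integration of the Lorenz equations, cf. eqs. (2)
    '''
    nt = int(tmax / dt)
    X, Y, Z, t = (np.zeros(nt + 1) for _ in range(4))
    X[0], Y[0], Z[0] = X0, Y0, Z0
    for n in range(1, nt + 1):
        # RHS of eqs. (1) evaluated at the previous time step
        X_dot_rhs, Y_dot_rhs, Z_dot_rhs = Lorenz_reference(X[n-1], Y[n-1], Z[n-1], sigma, r, b)
        # explicit update, eqs. (2)
        X[n] = X[n-1] + dt * X_dot_rhs
        Y[n] = Y[n-1] + dt * Y_dot_rhs
        Z[n] = Z[n-1] + dt * Z_dot_rhs
        t[n] = n * dt
    return X, Y, Z, t
```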
Finally, we create a function to plot the solution $(X,Y,Z)^T$ of the Lorenz eqs. ...
```
def PlotLorenzXvT(X,Y,Z,t,sigma,r,b):
'''
Create time series plots of solutions of the Lorenz equations X(t),Y(t),Z(t)
'''
plt.figure()
ax = plt.subplot(111)
ax.plot(t,X,'r',label='X')
ax.plot(t,Y,'g',label='Y')
ax.plot(t,Z,'b',label='Z')
ax.set_xlabel('time t')
plt.title('Lorenz Equations: $\sigma=${}, $r=${}, $b=${}'.format(sigma,r,b))
# Shrink current axis's height by 10% on the bottom
box = ax.get_position()
ax.set_position([box.x0, box.y0 + box.height * 0.1,
box.width, box.height * 0.9])
# Put a legend below current axis
ax.legend(loc='upper center', bbox_to_anchor=(0.5, -0.05),ncol=3)
plt.show()
```
... and a function to plot the trajectory in the **phase space portrait**:
```
def PlotLorenz3D(X,Y,Z,sigma,r,b):
'''
Show 3-D Phase portrait using mplot3D
'''
# do some fancy 3D plotting
fig = plt.figure()
    ax = fig.add_subplot(projection='3d')
ax.plot(X,Y,Z)
ax.set_xlabel('X')
ax.set_ylabel('Y')
ax.set_zlabel('Z')
plt.title('Lorenz Equations: $\sigma=${}, $r=${}, $b=${}'.format(sigma,r,b))
plt.show()
```
##### Exercise 3
Solve the Lorenz equations for a Prandtl number $\sigma=10$, $b=8/3$ and a scaled Rayleigh number $r=0.5$, starting from the initial condition ${\bf{X_0}}=(X_0,Y_0,Z_0)^T=(2,3,4)^T$. Plot the temporal evolution and 3D phase portrait of the solution $(X(t),Y(t),Z(t))^T$. Mark the fixed points you derived in [Stationary Solutions of Time-Dependent Problems](http://nbviewer.ipython.org/urls/github.com/daniel-koehn/Differential-equations-earth-system/tree/master/03_Lorenz_equations/02_Stationary_solutions_of_DE.ipynb) in the 3D phase portrait. Describe and interpret the results.
```
# SET THE PARAMETERS HERE!
sigma=
b =
# SET THE INITIAL CONDITIONS HERE!
X0 =
Y0 =
Z0 =
# Set maximum integration time and sample interval dt
tmax = 30
dt = 0.01
# SET THE RAYLEIGH NUMBER HERE!
r =
# Solve the Equations
X, Y, Z, t = SolveLorenz(tmax, dt, X0, Y0, Z0, sigma,r,b)
# and Visualize as a time series
PlotLorenzXvT(X,Y,Z,t,sigma,r,b)
# and as a 3-D phase portrait
PlotLorenz3D(X,Y,Z,sigma,r,b)
```
##### Exercise 4
Solve the Lorenz equations for a Prandtl number $\sigma=10$, $b=8/3$ and a scaled Rayleigh number $r=10$, starting from the initial condition ${\bf{X_0}}=(X_0,Y_0,Z_0)^T=(2,3,4)^T$. Plot the temporal evolution and 3D phase portrait of the solution $(X(t),Y(t),Z(t))^T$. Mark the fixed points you derived in [Stationary Solutions of Time-Dependent Problems](http://nbviewer.ipython.org/urls/github.com/daniel-koehn/Differential-equations-earth-system/tree/master/03_Lorenz_equations/02_Stationary_solutions_of_DE.ipynb) in the 3D phase portrait. Describe and interpret the results.
```
# SET THE PARAMETERS HERE!
sigma=
b =
# SET THE INITIAL CONDITIONS HERE!
X0 =
Y0 =
Z0 =
# Set maximum integration time and sample interval dt
tmax = 30
dt = 0.01
# SET THE RAYLEIGH NUMBER HERE!
r =
# Solve the Equations
X, Y, Z, t = SolveLorenz(tmax, dt, X0, Y0, Z0, sigma,r,b)
# and Visualize as a time series
PlotLorenzXvT(X,Y,Z,t,sigma,r,b)
# and as a 3-D phase portrait
PlotLorenz3D(X,Y,Z,sigma,r,b)
```
##### Exercise 5
Solve the Lorenz equations again for a Prandtl number $\sigma=10$, $b=8/3$ and a scaled Rayleigh number $r=10$, however this time starting from the initial condition ${\bf{X_0}}=(X_0,Y_0,Z_0)^T=(-2,-3,4)^T$. Plot the temporal evolution and 3D phase portrait of the solution $(X(t),Y(t),Z(t))^T$. Mark the fixed points you derived in [Stationary Solutions of Time-Dependent Problems](http://nbviewer.ipython.org/urls/github.com/daniel-koehn/Differential-equations-earth-system/tree/master/03_Lorenz_equations/02_Stationary_solutions_of_DE.ipynb) in the 3D phase portrait. Describe and interpret the results. How does the solution change compared to exercise 4?
```
# SET THE PARAMETERS HERE!
sigma=
b =
# SET THE INITIAL CONDITIONS HERE!
X0 =
Y0 =
Z0 =
# Set maximum integration time and sample interval dt
tmax = 30
dt = 0.01
# SET THE RAYLEIGH NUMBER HERE!
r =
# Solve the Equations
X, Y, Z, t = SolveLorenz(tmax, dt, X0, Y0, Z0, sigma,r,b)
# and Visualize as a time series
PlotLorenzXvT(X,Y,Z,t,sigma,r,b)
# and as a 3-D phase portrait
PlotLorenz3D(X,Y,Z,sigma,r,b)
```
##### Exercise 6
Solve the Lorenz equations for a Prandtl number $\sigma=10$, $b=8/3$ and a scaled Rayleigh number $r=28$, starting from the initial condition ${\bf{X_0}}=(X_0,Y_0,Z_0)^T=(2,3,4)^T$. Plot the temporal evolution and 3D phase portrait of the solution $(X(t),Y(t),Z(t))^T$. Mark the fixed points you derived in [Stationary Solutions of Time-Dependent Problems](http://nbviewer.ipython.org/urls/github.com/daniel-koehn/Differential-equations-earth-system/tree/master/03_Lorenz_equations/02_Stationary_solutions_of_DE.ipynb) in the 3D phase portrait. Describe and interpret the results. Compare with the previous results.
```
# SET THE PARAMETERS HERE!
sigma=
b =
# SET THE INITIAL CONDITIONS HERE!
X0 =
Y0 =
Z0 =
# Set maximum integration time and sample interval dt
tmax = 30
dt = 5e-4
# SET THE RAYLEIGH NUMBER HERE!
r =
# Solve the Equations
X, Y, Z, t = SolveLorenz(tmax, dt, X0, Y0, Z0, sigma,r,b)
# and Visualize as a time series
PlotLorenzXvT(X,Y,Z,t,sigma,r,b)
# and as a 3-D phase portrait
PlotLorenz3D(X,Y,Z,sigma,r,b)
```
##### Exercise 7
In his 1963 paper Lorenz also investigated the influence of small changes of the initial conditions on the long-term evolution of the thermal convection problem for large Rayleigh numbers.
Solve the Lorenz equations for a Prandtl number $\sigma=10$, $b=8/3$ and a scaled Rayleigh number $r=28$, however starting from the initial condition ${\bf{X_0}}=(X_0,Y_0,Z_0)^T=(2,3.001,4)^T$. Plot the temporal evolution and compare with the solution of exercise 6. Describe and interpret the results.
Explain why Lorenz introduced the term **Butterfly effect** based on your results.
```
# SET THE PARAMETERS HERE!
sigma=
b =
# SET THE INITIAL CONDITIONS HERE!
X0 =
Y0 =
Z0 =
# Set maximum integration time and sample interval dt
tmax = 30
dt = 5e-4
# SET THE RAYLEIGH NUMBER HERE!
r =
# Solve the Equations
X1, Y1, Z1, t = SolveLorenz(tmax, dt, X0, Y0, Z0, sigma,r,b)
# and Visualize differences as a time series
PlotLorenzXvT(X-X1,Y-Y1,Z-Z1,t,sigma,r,b)
# and Visualize as a time series
PlotLorenzXvT(X1,Y1,Z1,t,sigma,r,b)
# and Visualize as a time series
PlotLorenzXvT(X,Y,Z,t,sigma,r,b)
```
##### Exercise 8
Solve the Lorenz equations for a Prandtl number $\sigma=10$, $b=8/3$ and a scaled Rayleigh number $r=350$, starting from the initial condition ${\bf{X_0}}=(X_0,Y_0,Z_0)^T=(2,3,4)^T$. Plot the temporal evolution and 3D phase portrait of the solution $(X(t),Y(t),Z(t))^T$. Mark the fixed points you derived in [Stationary Solutions of Time-Dependent Problems](http://nbviewer.ipython.org/urls/github.com/daniel-koehn/Differential-equations-earth-system/tree/master/03_Lorenz_equations/02_Stationary_solutions_of_DE.ipynb) in the 3D phase portrait. Describe and interpret the results. Compare with the previous results from exercises 6 and 7.
```
# SET THE PARAMETERS HERE!
sigma=
b =
# SET THE INITIAL CONDITIONS HERE!
X0 =
Y0 =
Z0 =
# Set maximum integration time and sample interval dt
tmax = 8.
dt = 5e-4
# SET THE RAYLEIGH NUMBER HERE!
r =
# Solve the Equations
X, Y, Z, t = SolveLorenz(tmax, dt, X0, Y0, Z0, sigma,r,b)
# and Visualize as a time series
PlotLorenzXvT(X,Y,Z,t,sigma,r,b)
# and as a 3-D phase portrait
PlotLorenz3D(X,Y,Z,sigma,r,b)
```
## What we learned:
- How to solve the Lorenz equations using a simple finite-difference scheme.
- How to visualize the solution of ordinary differential equations using the temporal evolution and phase portrait.
- Exploring the dynamics of non-linear differential equations and the sensitivity of the long-term evolution of the system to small changes in the initial conditions.
- Why physicists can only predict the time evolution of complex dynamical systems to some extent.
<i>Copyright (c) Microsoft Corporation. All rights reserved.</i>
<i>Licensed under the MIT License.</i>
# Vowpal Wabbit Deep Dive
<center>
<img src="https://github.com/VowpalWabbit/vowpal_wabbit/blob/master/logo_assets/vowpal-wabbits-github-logo.png?raw=true" height="30%" width="30%" alt="Vowpal Wabbit">
</center>
[Vowpal Wabbit](https://github.com/VowpalWabbit/vowpal_wabbit) is a fast online machine learning library that implements several algorithms relevant to the recommendation use case.
The main advantage of Vowpal Wabbit (VW) is that training is done in an online fashion typically using Stochastic Gradient Descent or similar variants, which allows it to scale well to very large datasets. Additionally, it is optimized to run very quickly and can support distributed training scenarios for extremely large datasets.
VW is best applied to problems where the dataset is too large to fit into memory but can be stored on disk in a single node, though distributed training is possible with additional setup and configuration of the nodes. The kinds of problems that VW handles well mostly fall into the supervised classification domain of machine learning (Linear Regression, Logistic Regression, Multiclass Classification, Support Vector Machines, Simple Neural Nets). It also supports Matrix Factorization approaches and Latent Dirichlet Allocation, as well as a few other algorithms (see the [wiki](https://github.com/VowpalWabbit/vowpal_wabbit/wiki) for more information).
A good example of a typical deployment use case is a Real Time Bidding scenario, where an auction to place an ad for a user is being decided in a matter of milliseconds. Feature information about the user and items must be extracted and passed into a model to predict likelihood of click (or other interaction) in short order. And if the user and context features are constantly changing (e.g. user browser and local time of day) it may be infeasible to score every possible input combination before hand. This is where VW provides value, as a platform to explore various algorithms offline to train a highly accurate model on a large set of historical data then deploy the model into production so it can generate rapid predictions in real time. Of course this isn't the only manner VW can be deployed, it is also possible to use it entirely online where the model is constantly updating, or use active learning approaches, or work completely offline in a pre-scoring mode.
<h3>Vowpal Wabbit for Recommendations</h3>
In this notebook we demonstrate how to use the VW library to generate recommendations on the [Movielens](https://grouplens.org/datasets/movielens/) dataset.
Several things are worth noting in how VW is being used in this notebook:
By leveraging an Azure Data Science Virtual Machine ([DSVM](https://azure.microsoft.com/en-us/services/virtual-machines/data-science-virtual-machines/)), VW comes pre-installed and can be used directly from the command line. If you are not using a DSVM you must install vw yourself.
There are also python bindings to allow VW use within a python environment and even a wrapper conforming to the SciKit-Learn Estimator API. However, the python bindings must be installed as an additional python package with Boost dependencies, so for simplicity's sake execution of VW is done via a subprocess call mimicking what would happen from the command line execution of the model.
VW expects a specific [input format](https://github.com/VowpalWabbit/vowpal_wabbit/wiki/Input-format), in this notebook to_vw() is a convenience function that converts the standard movielens dataset into the required data format. Datafiles are then written to disk and passed to VW for training.
The examples shown are to demonstrate functional capabilities of VW not to indicate performance advantages of different approaches. There are several hyper-parameters (e.g. learning rate and regularization terms) that can greatly impact performance of VW models which can be adjusted using [command line options](https://github.com/VowpalWabbit/vowpal_wabbit/wiki/Command-Line-Arguments). To properly compare approaches it is helpful to learn about and tune these parameters on the relevant dataset.
# 0. Global Setup
```
import sys
sys.path.append('../..')
import os
from subprocess import run
from tempfile import TemporaryDirectory
from time import process_time
import pandas as pd
import papermill as pm
from reco_utils.common.notebook_utils import is_jupyter
from reco_utils.dataset.movielens import load_pandas_df
from reco_utils.dataset.python_splitters import python_random_split
from reco_utils.evaluation.python_evaluation import (rmse, mae, exp_var, rsquared, get_top_k_items,
map_at_k, ndcg_at_k, precision_at_k, recall_at_k)
print("System version: {}".format(sys.version))
print("Pandas version: {}".format(pd.__version__))
def to_vw(df, output, logistic=False):
"""Convert Pandas DataFrame to vw input format
Args:
df (pd.DataFrame): input DataFrame
output (str): path to output file
logistic (bool): flag to convert label to logistic value
"""
with open(output, 'w') as f:
tmp = df.reset_index()
# we need to reset the rating type to an integer to simplify the vw formatting
tmp['rating'] = tmp['rating'].astype('int64')
# convert rating to binary value
if logistic:
tmp['rating'] = tmp['rating'].apply(lambda x: 1 if x >= 3 else -1)
# convert each row to VW input format (https://github.com/VowpalWabbit/vowpal_wabbit/wiki/Input-format)
# [label] [tag]|[user namespace] [user id feature] |[item namespace] [movie id feature]
# label is the true rating, tag is a unique id for the example just used to link predictions to truth
# user and item namespaces separate the features to support interaction features through command line options
for _, row in tmp.iterrows():
f.write('{rating} {index}|user {userID} |item {itemID}\n'.format_map(row))
def run_vw(train_params, test_params, test_data, prediction_path, logistic=False):
"""Convenience function to train, test, and show metrics of interest
Args:
train_params (str): vw training parameters
test_params (str): vw testing parameters
test_data (pd.dataFrame): test data
prediction_path (str): path to vw prediction output
logistic (bool): flag to convert label to logistic value
Returns:
(dict): metrics and timing information
"""
# train model
train_start = process_time()
run(train_params.split(' '), check=True)
train_stop = process_time()
# test model
test_start = process_time()
run(test_params.split(' '), check=True)
test_stop = process_time()
# read in predictions
pred_df = pd.read_csv(prediction_path, delim_whitespace=True, names=['prediction'], index_col=1).join(test_data)
test_df = test_data.copy()
if logistic:
# make the true label binary so that the metrics are captured correctly
        test_df['rating'] = test_df['rating'].apply(lambda x: 1 if x >= 3 else -1)
else:
# ensure results are integers in correct range
pred_df['prediction'] = pred_df['prediction'].apply(lambda x: int(max(1, min(5, round(x)))))
# calculate metrics
result = dict()
result['RMSE'] = rmse(test_df, pred_df)
result['MAE'] = mae(test_df, pred_df)
result['R2'] = rsquared(test_df, pred_df)
result['Explained Variance'] = exp_var(test_df, pred_df)
result['Train Time (ms)'] = (train_stop - train_start) * 1000
result['Test Time (ms)'] = (test_stop - test_start) * 1000
return result
# create temp directory to maintain data files
tmpdir = TemporaryDirectory()
model_path = os.path.join(tmpdir.name, 'vw.model')
saved_model_path = os.path.join(tmpdir.name, 'vw_saved.model')
train_path = os.path.join(tmpdir.name, 'train.dat')
test_path = os.path.join(tmpdir.name, 'test.dat')
train_logistic_path = os.path.join(tmpdir.name, 'train_logistic.dat')
test_logistic_path = os.path.join(tmpdir.name, 'test_logistic.dat')
prediction_path = os.path.join(tmpdir.name, 'prediction.dat')
all_test_path = os.path.join(tmpdir.name, 'new_test.dat')
all_prediction_path = os.path.join(tmpdir.name, 'new_prediction.dat')
```
# 1. Load & Transform Data
```
# Select Movielens data size: 100k, 1m, 10m, or 20m
MOVIELENS_DATA_SIZE = '100k'
TOP_K = 10
# load the MovieLens dataset of the size selected above
df = load_pandas_df(MOVIELENS_DATA_SIZE)
# split data to train and test sets, default values take 75% of each users ratings as train, and 25% as test
train, test = python_random_split(df, 0.75)
# save train and test data in vw format
to_vw(df=train, output=train_path)
to_vw(df=test, output=test_path)
# save data for logistic regression (requires adjusting the label)
to_vw(df=train, output=train_logistic_path, logistic=True)
to_vw(df=test, output=test_logistic_path, logistic=True)
```
# 2. Regression Based Recommendations
When considering different approaches for solving a problem with machine learning it is helpful to generate a baseline approach to understand how more complex solutions perform across dimensions of performance, time, and resource (memory or cpu) usage.
Regression based approaches are some of the simplest and fastest baselines to consider for many ML problems.
## 2.1 Linear Regression
As the data provides numerical ratings between 1 and 5, fitting those values with a linear regression model is an easy first approach. This model is trained on examples of ratings as the target variable and corresponding user ids and movie ids as independent features.
By passing each user-item rating in as an example the model will begin to learn weights based on average ratings for each user as well as average ratings per item.
This however can generate predicted ratings which are no longer integers, so some additional adjustments should be made at prediction time to convert them back to the integer scale of 1 through 5 if necessary. Here, this is done in the `run_vw` function.
```
"""
Quick description of command line parameters used
Other optional parameters can be found here: https://github.com/VowpalWabbit/vowpal_wabbit/wiki/Command-Line-Arguments
VW uses linear regression by default, so no extra command line options
-f <model_path>: indicates where the final model file will reside after training
-d <data_path>: indicates which data file to use for training or testing
--quiet: this runs vw in quiet mode silencing stdout (for debugging it's helpful to not use quiet mode)
-i <model_path>: indicates where to load the previously model file created during training
-t: this executes inference only (no learned updates to the model)
-p <prediction_path>: indicates where to store prediction output
"""
train_params = 'vw -f {model} -d {data} --quiet'.format(model=model_path, data=train_path)
# save these results for later use during top-k analysis
test_params = 'vw -i {model} -d {data} -t -p {pred} --quiet'.format(model=model_path, data=test_path, pred=prediction_path)
result = run_vw(train_params=train_params,
test_params=test_params,
test_data=test,
prediction_path=prediction_path)
comparison = pd.DataFrame(result, index=['Linear Regression'])
comparison
```
## 2.2 Linear Regression with Interaction Features
Previously we treated the user features and item features independently, but taking into account interactions between features can provide a mechanism to learn more fine grained preferences of the users.
To generate interaction features use the quadratic command line argument and specify the namespaces that should be combined: '-q ui' combines the user and item namespaces based on the first letter of each.
Currently the userIDs and itemIDs used are integers which means the feature ID is used directly, for instance when user ID 123 rates movie 456, the training example puts a 1 in the values for features 123 and 456. However when interaction is specified (or if a feature is a string) the resulting interaction feature is hashed into the available feature space. Feature hashing is a way to take a very sparse high dimensional feature space and reduce it into a lower dimensional space. This allows for reduced memory while retaining fast computation of feature and model weights.
The caveat with feature hashing is that it can lead to hash collisions, where separate features are mapped to the same location. In this case it can be beneficial to increase the size of the space to support interactions between features of high cardinality. The available feature space is dictated by the --bit_precision (-b) <N> argument, where the total available space for all features in the model is 2<sup>N</sup>.
See [Feature Hashing and Extraction](https://github.com/VowpalWabbit/vowpal_wabbit/wiki/Feature-Hashing-and-Extraction) for more details.
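To build some intuition for what hashing into a 2<sup>N</sup>-dimensional space means, here is a toy sketch in plain Python (this is not VW's actual hash function, just an illustration of how interaction features land in a fixed number of slots):
```
# Toy illustration of feature hashing (not VW's internal hash)
N = 18                  # e.g. 'vw ... -b 18' -> 2**18 available feature slots
num_slots = 2 ** N

def feature_slot(name):
    # map an arbitrary feature name (e.g. a user-item interaction) to a slot index
    return hash(name) % num_slots

print(feature_slot('user^123*item^456'))
print(feature_slot('user^789*item^456'))
# with too few bits, distinct features are more likely to collide in the same slot
```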
```
"""
Quick description of command line parameters used
-b <N>: sets the memory size to 2<sup>N</sup> entries
-q <ab>: create quadratic feature interactions between features in namespaces starting with 'a' and 'b'
"""
train_params = 'vw -b 26 -q ui -f {model} -d {data} --quiet'.format(model=saved_model_path, data=train_path)
test_params = 'vw -i {model} -d {data} -t -p {pred} --quiet'.format(model=saved_model_path, data=test_path, pred=prediction_path)
result = run_vw(train_params=train_params,
test_params=test_params,
test_data=test,
prediction_path=prediction_path)
saved_result = result
comparison = comparison.append(pd.DataFrame(result, index=['Linear Regression w/ Interaction']))
comparison
```
## 2.3 Multinomial Logistic Regression
An alternative to linear regression is to leverage multinomial logistic regression, or multiclass classification, which treats each rating value as a distinct class.
This avoids any non integer results, but also reduces the training data for each class which could lead to poorer performance if the counts of different rating levels are skewed.
Basic multiclass logistic regression can be accomplished using the One Against All approach specified by the '--oaa N' option, where N is the number of classes, and providing the logistic option for the loss function to be used.
```
"""
Quick description of command line parameters used
--loss_function logistic: sets the model loss function for logistic regression
--oaa <N>: trains N separate models using One-Against-All approach (all models are captured in the single model file)
This expects the labels to be contiguous integers starting at 1
--link logistic: converts the predicted output from logit to probability
The predicted output is the model (label) with the largest likelihood
"""
train_params = 'vw --loss_function logistic --oaa 5 -f {model} -d {data} --quiet'.format(model=model_path, data=train_path)
test_params = 'vw --link logistic -i {model} -d {data} -t -p {pred} --quiet'.format(model=model_path, data=test_path, pred=prediction_path)
result = run_vw(train_params=train_params,
test_params=test_params,
test_data=test,
prediction_path=prediction_path)
comparison = comparison.append(pd.DataFrame(result, index=['Multinomial Regression']))
comparison
```
## 2.4 Logistic Regression
Additionally, one might simply be interested in whether the user likes or dislikes an item, and we can adjust the input data to represent a binary outcome, where ratings below 3 are dislikes (negative examples) and ratings of 3 and above are likes (positive examples).
This framing allows for a simple logistic regression model to be applied. To perform logistic regression the loss_function parameter is changed to 'logistic' and the target label is switched to {-1, 1}. Also, be sure to set '--link logistic' during prediction to convert the logit output back to a probability value.
```
train_params = 'vw --loss_function logistic -f {model} -d {data} --quiet'.format(model=model_path, data=train_logistic_path)
test_params = 'vw --link logistic -i {model} -d {data} -t -p {pred} --quiet'.format(model=model_path, data=test_logistic_path, pred=prediction_path)
result = run_vw(train_params=train_params,
test_params=test_params,
test_data=test,
prediction_path=prediction_path,
logistic=True)
comparison = comparison.append(pd.DataFrame(result, index=['Logistic Regression']))
comparison
```
# 3. Matrix Factorization Based Recommendations
All of the above approaches train a regression model, but VW also supports matrix factorization with two different approaches.
As opposed to learning direct weights for specific users, items and interactions when training a regression model, matrix factorization attempts to learn latent factors that determine how a user rates an item. An example of how this might work is if you could represent user preference and item categorization by genre. Given a smaller set of genres we can associate how much each item belongs to each genre class, and we can set weights for a user's preference for each genre. Both sets of weights could be represented as a vectors where the inner product would be the user-item rating. Matrix factorization approaches learn low rank matrices for latent features of users and items such that those matrices can be combined to approximate the original user item matrix.
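As a small numeric illustration of this latent-factor idea (toy numbers only, unrelated to any trained VW model):
```
import numpy as np

# toy latent factors over 3 hypothetical "genres"
user_factors = np.array([0.9, 0.1, 0.4])   # user's affinity for each genre
item_factors = np.array([0.8, 0.0, 0.5])   # item's membership in each genre

# the predicted rating is approximated by the inner product of the two vectors
predicted_rating = user_factors @ item_factors   # 0.9*0.8 + 0.1*0.0 + 0.4*0.5 = 0.92
print(predicted_rating)
```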
## 3.1. Singular Value Decomposition Based Matrix Factorization
The first approach performs matrix factorization based on Singular Value Decomposition (SVD) to learn a low rank approximation for the user-item rating matrix. It is called using the '--rank' command line argument.
See the [Matrix Factorization Example](https://github.com/VowpalWabbit/vowpal_wabbit/wiki/Matrix-factorization-example) for more detail.
```
"""
Quick description of command line parameters used
--rank <N>: sets the number of latent factors in the reduced matrix
"""
train_params = 'vw --rank 5 -q ui -f {model} -d {data} --quiet'.format(model=model_path, data=train_path)
test_params = 'vw -i {model} -d {data} -t -p {pred} --quiet'.format(model=model_path, data=test_path, pred=prediction_path)
result = run_vw(train_params=train_params,
test_params=test_params,
test_data=test,
prediction_path=prediction_path)
comparison = comparison.append(pd.DataFrame(result, index=['Matrix Factorization (Rank)']))
comparison
```
## 3.2. Factorization Machine Based Matrix Factorization
An alternative approach based on [Rendle's factorization machines](https://cseweb.ucsd.edu/classes/fa17/cse291-b/reading/Rendle2010FM.pdf) is called using '--lrq' (low rank quadratic). More LRQ details in this [demo](https://github.com/VowpalWabbit/vowpal_wabbit/tree/master/demo/movielens).
This learns two lower rank matrices which are multiplied to generate an approximation of the user-item rating matrix. Compressing the matrix in this way leads to learning generalizable factors which avoids some of the limitations of using regression models with extremely sparse interaction features. This can lead to better convergence and smaller on-disk models.
An additional term to improve performance is --lrqdropout, which will drop out columns during training. This however tends to increase the optimal rank size. Other parameters such as L2 regularization can help avoid overfitting.
```
"""
Quick description of command line parameters used
--lrq <abN>: learns approximations of rank N for the quadratic interaction between namespaces starting with 'a' and 'b'
--lrqdropout: performs dropout during training to improve generalization
"""
train_params = 'vw --lrq ui7 -f {model} -d {data} --quiet'.format(model=model_path, data=train_path)
test_params = 'vw -i {model} -d {data} -t -p {pred} --quiet'.format(model=model_path, data=test_path, pred=prediction_path)
result = run_vw(train_params=train_params,
test_params=test_params,
test_data=test,
prediction_path=prediction_path)
comparison = comparison.append(pd.DataFrame(result, index=['Matrix Factorization (LRQ)']))
comparison
```
# 4. Conclusion
The table above shows a few of the approaches in the VW library that can be used for recommendation prediction. The relative performance can change when applied to different datasets and properly tuned, but it is useful to note the rapid speed at which all approaches are able to train (75,000 examples) and test (25,000 examples).
# 5. Scoring
After training a model with any of the above approaches, the model can be used to score potential user-item pairs in offline batch mode, or in a real-time scoring mode. The example below shows how to leverage the utilities in the reco_utils directory to generate Top-K recommendations from offline scored output.
```
# First construct a test set of all items (except those seen during training) for each user
users = df[['userID']].drop_duplicates()
users['key'] = 1
items = df[['itemID']].drop_duplicates()
items['key'] = 1
all_pairs = pd.merge(users, items, on='key').drop(columns=['key'])
# now combine with training data and filter only those entries that don't match
merged = pd.merge(train, all_pairs, on=["userID", "itemID"], how="outer")
all_user_items = merged[merged['rating'].isnull()].copy()
all_user_items['rating'] = 0
# save in vw format (this can take a while)
to_vw(df=all_user_items, output=all_test_path)
# run the saved model (linear regression with interactions) on the new dataset
test_start = process_time()
test_params = 'vw -i {model} -d {data} -t -p {pred} --quiet'.format(model=saved_model_path, data=all_test_path, pred=prediction_path)
run(test_params.split(' '), check=True)
test_stop = process_time()
test_time = test_stop - test_start
# load predictions and get top-k from previous saved results
pred_data = pd.read_csv(prediction_path, delim_whitespace=True, names=['prediction'], index_col=1).join(test)
pred_data['prediction'] = pred_data['prediction'].apply(lambda x: int(max(1, min(5, round(x)))))
top_k = get_top_k_items(pred_data, col_rating='prediction', k=TOP_K)[['prediction', 'userID', 'itemID', 'rating']]
# convert dtypes of userID and itemID columns.
for col in ['userID', 'itemID']:
top_k[col] = top_k[col].astype(int)
top_k.head()
# get ranking metrics
args = [test, top_k]
kwargs = dict(col_user='userID', col_item='itemID', col_rating='rating', col_prediction='prediction',
relevancy_method='top_k', k=TOP_K)
rank_metrics = {'MAP': map_at_k(*args, **kwargs),
'NDCG': ndcg_at_k(*args, **kwargs),
'Precision': precision_at_k(*args, **kwargs),
'Recall': recall_at_k(*args, **kwargs)}
# final results
all_results = ['{k}: {v}'.format(k=k, v=v) for k, v in saved_result.items()]
all_results += ['{k}: {v}'.format(k=k, v=v) for k, v in rank_metrics.items()]
print('\n'.join(all_results))
```
# 6. Cleanup
```
# record results for testing
if is_jupyter():
pm.record('rmse', saved_result['RMSE'])
pm.record('mae', saved_result['MAE'])
pm.record('rsquared', saved_result['R2'])
pm.record('exp_var', saved_result['Explained Variance'])
pm.record("train_time", saved_result['Train Time (ms)'])
pm.record("test_time", test_time)
pm.record('map', rank_metrics['MAP'])
pm.record('ndcg', rank_metrics['NDCG'])
pm.record('precision', rank_metrics['Precision'])
pm.record('recall', rank_metrics['Recall'])
tmpdir.cleanup()
```
## References
1. John Langford, et. al. Vowpal Wabbit Wiki. URL: https://github.com/VowpalWabbit/vowpal_wabbit/wiki
2. Steffen Rendle. Factorization Machines. 2010 IEEE International Conference on Data Mining.
3. Jake Hoffman. Matrix Factorization Example. URL: https://github.com/VowpalWabbit/vowpal_wabbit/wiki/Matrix-factorization-example
4. Paul Minero. Low Rank Quadratic Example. URL: https://github.com/VowpalWabbit/vowpal_wabbit/tree/master/demo/movielens
```
import numpy as np
import matplotlib.pyplot as plt
import cython
import timeit
import math
%load_ext cython
```
# Native code compilation
We will see how to convert Python code to native compiled code. We will use the example of calculating the pairwise distance between a set of vectors, a $O(n^2)$ operation.
For native code compilation, it is usually preferable to use explicit for loops and minimize the use of `numpy` vectorization and broadcasting because
- It makes it easier for the `numba` JIT to optimize
- It is easier to "cythonize"
- It is easier to port to C++
However, use of vectors and matrices is fine especially if you will be porting to use a C++ library such as Eigen.
## Timing code
### Manual
```
import time
def f(n=1):
start = time.time()
time.sleep(n)
elapsed = time.time() - start
return elapsed
f(1)
```
### Clock time
```
%%time
time.sleep(1)
```
### Using `timeit`
The `-r` argument says how many runs to average over, and `-n` says how many times to run the function in a loop per run.
```
%timeit time.sleep(0.01)
%timeit -r3 time.sleep(0.01)
%timeit -n10 time.sleep(0.01)
%timeit -r3 -n10 time.sleep(0.01)
```
### Time unit conversions
```
1 s = 1,000 ms
1 ms = 1,000 µs
1 µs = 1,000 ns
```
## Profiling
If you want to identify bottlenecks in a Python script, do the following:
- First make sure that the script is modular - i.e. it consists mainly of function calls
- Each function should be fairly small and only do one thing
- Then run a profiler to identify the bottleneck function(s) and optimize them
See the Python docs on [profiling Python code](https://docs.python.org/3/library/profile.html)
Profiling can be done in a notebook with %prun, with the following readouts as column headers:
- `ncalls`: the number of calls
- `tottime`: the total time spent in the given function (excluding time spent in calls to sub-functions)
- `percall`: the quotient of `tottime` divided by `ncalls`
- `cumtime`: the total time spent in this and all subfunctions (from invocation till exit); this figure is accurate even for recursive functions
- `percall`: the quotient of `cumtime` divided by primitive calls
- `filename:lineno(function)`: provides the respective data of each function
```
def foo1(n):
return np.sum(np.square(np.arange(n)))
def foo2(n):
return sum(i*i for i in range(n))
def foo3(n):
[foo1(n) for i in range(10)]
foo2(n)
def foo4(n):
return [foo2(n) for i in range(100)]
def work(n):
foo1(n)
foo2(n)
foo3(n)
foo4(n)
%%time
work(int(1e5))
%prun -q -D work.prof work(int(1e5))
import pstats
p = pstats.Stats('work.prof')
p.print_stats()
pass
p.sort_stats('time', 'cumulative').print_stats('foo')
pass
p.sort_stats('ncalls').print_stats(5)
pass
```
## Optimizing a function
Our example will be to optimize a function that calculates the pairwise distance between a set of vectors.
We first use a built-in function from `scipy` to check that our answers are right and also to benchmark how our code compares in speed to an optimized compiled routine.
```
from scipy.spatial.distance import squareform, pdist
n = 100
p = 100
xs = np.random.random((n, p))
sol = squareform(pdist(xs))
%timeit -r3 -n10 squareform(pdist(xs))
```
## Python
### Simple version
```
def pdist_py(xs):
"""Unvectorized Python."""
n, p = xs.shape
A = np.zeros((n, n))
for i in range(n):
for j in range(n):
for k in range(p):
A[i,j] += (xs[i, k] - xs[j, k])**2
A[i,j] = np.sqrt(A[i,j])
return A
```
Note that we
- first check that the output is **right**
- then check how fast the code is
```
func = pdist_py
print(np.allclose(func(xs), sol))
%timeit -r3 -n10 func(xs)
```
### Exploiting symmetry
```
def pdist_sym(xs):
"""Unvectorized Python."""
n, p = xs.shape
A = np.zeros((n, n))
for i in range(n):
for j in range(i+1, n):
for k in range(p):
A[i,j] += (xs[i, k] - xs[j, k])**2
A[i,j] = np.sqrt(A[i,j])
A += A.T
return A
func = pdist_sym
print(np.allclose(func(xs), sol))
%timeit -r3 -n10 func(xs)
```
### Vectorizing inner loop
```
def pdist_vec(xs):
"""Vectorize inner loop."""
n, p = xs.shape
A = np.zeros((n, n))
for i in range(n):
for j in range(i+1, n):
A[i,j] = np.sqrt(np.sum((xs[i] - xs[j])**2))
A += A.T
return A
func = pdist_vec
print(np.allclose(func(xs), sol))
%timeit -r3 -n10 func(xs)
```
### Broadcasting and vectorizing
Note that the broadcast version does twice as much work as it does not exploit symmetry.
```
def pdist_numpy(xs):
"""Fully vectroized version."""
return np.sqrt(np.square(xs[:, None] - xs[None, :]).sum(axis=-1))
func = pdist_numpy
print(np.allclose(func(xs), sol))
%timeit -r3 -n10 squareform(func(xs))
```
## JIT with `numba`
We use the `numba.jit` decorator which will trigger generation and execution of compiled code when the function is first called.
```
from numba import jit
```
### Using `jit` as a function
```
pdist_numba_py = jit(pdist_py, nopython=True, cache=True)
func = pdist_numba_py
print(np.allclose(func(xs), sol))
%timeit -r3 -n10 func(xs)
```
### Using `jit` as a decorator
```
@jit(nopython=True, cache=True)
def pdist_numba_py_1(xs):
"""Unvectorized Python."""
n, p = xs.shape
A = np.zeros((n, n))
for i in range(n):
for j in range(n):
for k in range(p):
A[i,j] += (xs[i, k] - xs[j, k])**2
A[i,j] = np.sqrt(A[i,j])
return A
func = pdist_numba_py_1
print(np.allclose(func(xs), sol))
%timeit -r3 -n10 func(xs)
```
### Can we make the code faster?
Note that in the inner loop, we are updating a matrix when we only need to update a scalar. Let's fix this.
```
@jit(nopython=True, cache=True)
def pdist_numba_py_2(xs):
"""Unvectorized Python."""
n, p = xs.shape
A = np.zeros((n, n))
for i in range(n):
for j in range(n):
d = 0.0
for k in range(p):
d += (xs[i, k] - xs[j, k])**2
A[i,j] = np.sqrt(d)
return A
func = pdist_numba_py_2
print(np.allclose(func(xs), sol))
%timeit -r3 -n10 func(xs)
```
### Can we make the code even faster?
We can also try to exploit symmetry.
```
@jit(nopython=True, cache=True)
def pdist_numba_py_sym(xs):
"""Unvectorized Python."""
n, p = xs.shape
A = np.zeros((n, n))
for i in range(n):
for j in range(i+1, n):
d = 0.0
for k in range(p):
d += (xs[i, k] - xs[j, k])**2
A[i,j] = np.sqrt(d)
A += A.T
return A
func = pdist_numba_py_sym
print(np.allclose(func(xs), sol))
%timeit -r3 -n10 func(xs)
```
### Does `jit` work with vectorized code?
```
pdist_numba_vec = jit(pdist_vec, nopython=True, cache=True)
%timeit -r3 -n10 pdist_vec(xs)
func = pdist_numba_vec
print(np.allclose(func(xs), sol))
%timeit -r3 -n10 func(xs)
```
### Does `jit` work with broadcasting?
```
pdist_numba_numpy = jit(pdist_numpy, nopython=True, cache=True)
%timeit -r3 -n10 pdist_numpy(xs)
func = pdist_numba_numpy
try:
print(np.allclose(func(xs), sol))
%timeit -r3 -n10 func(xs)
except Exception as e:
print(e)
```
#### We need to use `reshape` to broadcast
```
def pdist_numpy_(xs):
    """Fully vectorized version."""
    n, p = xs.shape  # get the shapes locally instead of relying on the globals n, p
    return np.sqrt(np.square(xs.reshape(n,1,p) - xs.reshape(1,n,p)).sum(axis=-1))
pdist_numba_numpy_ = jit(pdist_numpy_, nopython=True, cache=True)
%timeit -r3 -n10 pdist_numpy_(xs)
func = pdist_numba_numpy_
print(np.allclose(func(xs), sol))
%timeit -r3 -n10 func(xs)
```
### Summary
- `numba` appears to work best with converting fairly explicit Python code
- This might change in the future as the `numba` JIT compiler becomes more sophisticated
- Always check optimized code for correctness
- We can use `timeit` magic as a simple way to benchmark functions
## Cython
Cython is an Ahead Of Time (AOT) compiler. It compiles the code and replaces the function invoked with the compiled version.
In the notebook, calling `%cython -a` magic shows code colored by how many Python C API calls are being made. You want to reduce the yellow as much as possible.
```
%%cython -a
import numpy as np
def pdist_cython_1(xs):
n, p = xs.shape
A = np.zeros((n, n))
for i in range(n):
for j in range(i+1, n):
d = 0.0
for k in range(p):
d += (xs[i,k] - xs[j,k])**2
A[i,j] = np.sqrt(d)
A += A.T
return A
def pdist_base(xs):
n, p = xs.shape
A = np.zeros((n, n))
for i in range(n):
for j in range(i+1, n):
d = 0.0
for k in range(p):
d += (xs[i,k] - xs[j,k])**2
A[i,j] = np.sqrt(d)
A += A.T
return A
%timeit -r3 -n1 pdist_base(xs)
func = pdist_cython_1
print(np.allclose(func(xs), sol))
%timeit -r3 -n1 func(xs)
```
## Cython with static types
- We provide types for all variables so that Cython can optimize their compilation to C code.
- Note `numpy` functions are optimized for working with `ndarrays` and have unnecessary overhead for scalars. We therefore replace them with math functions from the C `math` library.
```
%%cython -a
import cython
import numpy as np
cimport numpy as np
from libc.math cimport sqrt, pow
@cython.boundscheck(False)
@cython.wraparound(False)
def pdist_cython_2(double[:, :] xs):
cdef int n, p
cdef int i, j, k
cdef double[:, :] A
cdef double d
n = xs.shape[0]
p = xs.shape[1]
A = np.zeros((n, n))
for i in range(n):
for j in range(i+1, n):
d = 0.0
for k in range(p):
d += pow(xs[i,k] - xs[j,k],2)
A[i,j] = sqrt(d)
for i in range(1, n):
for j in range(i):
A[i, j] = A[j, i]
return A
func = pdist_cython_2
print(np.allclose(func(xs), sol))
%timeit -r3 -n1 func(xs)
```
## Wrapping C++ code
### Function to port
```python
def pdist_base(xs):
n, p = xs.shape
A = np.zeros((n, n))
for i in range(n):
for j in range(i+1, n):
d = 0.0
for k in range(p):
d += (xs[i,k] - xs[j,k])**2
A[i,j] = np.sqrt(d)
A += A.T
return A
```
### First check that the function works as expected
```
%%file main.cpp
#include <iostream>
#include <Eigen/Dense>
#include <cmath>
using std::cout;
// takes numpy array as input and returns another numpy array
Eigen::MatrixXd pdist(Eigen::MatrixXd xs) {
int n = xs.rows() ;
int p = xs.cols();
Eigen::MatrixXd A = Eigen::MatrixXd::Zero(n, n);
for (int i=0; i<n; i++) {
for (int j=i+1; j<n; j++) {
double d = 0;
for (int k=0; k<p; k++) {
d += std::pow(xs(i,k) - xs(j,k), 2);
}
A(i, j) = std::sqrt(d);
}
}
A += A.transpose().eval();
return A;
}
int main() {
using namespace Eigen;
MatrixXd A(3,2);
A << 0, 0,
3, 4,
5, 12;
std::cout << pdist(A) << "\n";
}
%%bash
g++ -o main.exe main.cpp -I./eigen3
%%bash
./main.exe
A = np.array([
[0, 0],
[3, 4],
[5, 12]
])
squareform(pdist(A))
```
### Now use the boiler plate for wrapping
```
%%file wrap.cpp
<%
cfg['compiler_args'] = ['-std=c++11']
cfg['include_dirs'] = ['./eigen3']
setup_pybind11(cfg)
%>
#include <pybind11/pybind11.h>
#include <pybind11/eigen.h>
// takes numpy array as input and returns another numpy array
Eigen::MatrixXd pdist(Eigen::MatrixXd xs) {
int n = xs.rows() ;
int p = xs.cols();
Eigen::MatrixXd A = Eigen::MatrixXd::Zero(n, n);
for (int i=0; i<n; i++) {
for (int j=i+1; j<n; j++) {
double d = 0;
for (int k=0; k<p; k++) {
d += std::pow(xs(i,k) - xs(j,k), 2);
}
A(i, j) = std::sqrt(d);
}
}
A += A.transpose().eval();
return A;
}
PYBIND11_PLUGIN(wrap) {
pybind11::module m("wrap", "auto-compiled c++ extension");
m.def("pdist", &pdist);
return m.ptr();
}
import cppimport
import numpy as np
code = cppimport.imp("wrap")
print(code.pdist(A))
func = code.pdist
print(np.allclose(func(xs), sol))
%timeit -r3 -n1 func(xs)
```
# MSOA Mapping - England
```
import pandas as pd
import geopandas as gpd
import matplotlib.pyplot as plt
import numpy as np
from shapely.geometry import Point
from sklearn.neighbors import KNeighborsRegressor
import rasterio as rst
from rasterstats import zonal_stats
%matplotlib inline
path = r"[CHANGE THIS PATH]\England\\"
data = pd.read_csv(path + "final_data.csv", index_col = 0)
```
# Convert to GeoDataFrame
```
geo_data = gpd.GeoDataFrame(data = data,
crs = {'init':'epsg:27700'},
geometry = data.apply(lambda geom: Point(geom['oseast1m'],geom['osnrth1m']),axis=1))
geo_data.head()
f, (ax1, ax2, ax3) = plt.subplots(1,3, figsize = (16,6), sharex = True, sharey = True)
geo_data[geo_data['Year'] == 2016].plot(column = 'loneills', scheme = 'quantiles', cmap = 'Reds', marker = '.', ax = ax1);
geo_data[geo_data['Year'] == 2017].plot(column = 'loneills', scheme = 'quantiles', cmap = 'Reds', marker = '.', ax = ax2);
geo_data[geo_data['Year'] == 2018].plot(column = 'loneills', scheme = 'quantiles', cmap = 'Reds', marker = '.', ax = ax3);
```
## k-nearest neighbour interpolation
Non-parametric interpolation of loneliness based on local set of _k_ nearest neighbours for each cell in our evaluation grid.
Effectively becomes an inverse distance weighted (idw) interpolation when weights are set to be distance based.
```
def idw_model(k, p):
def _inv_distance_index(weights, index=p):
        return (weights == 0).astype(int) if np.any(weights == 0) else 1. / weights**index
return KNeighborsRegressor(k, weights=_inv_distance_index)
def grid(xmin, xmax, ymin, ymax, cellsize):
# Set x and y ranges to accommodate cellsize
xmin = (xmin // cellsize) * cellsize
xmax = -(-xmax // cellsize) * cellsize # ceiling division
ymin = (ymin // cellsize) * cellsize
ymax = -(-ymax // cellsize) * cellsize
# Make meshgrid
    x = np.linspace(xmin, xmax, int((xmax - xmin) / cellsize))
    y = np.linspace(ymin, ymax, int((ymax - ymin) / cellsize))
return np.meshgrid(x,y)
def reshape_grid(xx,yy):
return np.append(xx.ravel()[:,np.newaxis],yy.ravel()[:,np.newaxis],1)
def reshape_image(z, xx):
return np.flip(z.reshape(np.shape(xx)),0)
def idw_surface(locations, values, xmin, xmax, ymin, ymax, cellsize, k=5, p=2):
# Make and fit the idw model
idw = idw_model(k,p).fit(locations, values)
# Make the grid to estimate over
xx, yy = grid(xmin, xmax, ymin, ymax, cellsize)
# reshape the grid for estimation
xy = reshape_grid(xx,yy)
# Predict the grid values
z = idw.predict(xy)
# reshape to image array
z = reshape_image(z, xx)
return z
```
## 2016 data
```
# Get point locations and values from data
points = geo_data[geo_data['Year'] == 2016][['oseast1m','osnrth1m']].values
vals = geo_data[geo_data['Year'] == 2016]['loneills'].values
surface2016 = idw_surface(points, vals, 90000,656000,10000,654000,250,7,2)
# Look at surface
f, ax = plt.subplots(figsize = (8,10))
ax.imshow(surface2016, cmap='Reds')
ax.set_aspect('equal')
```
## 2017 Data
```
# Get point locations and values from data
points = geo_data[geo_data['Year'] == 2017][['oseast1m','osnrth1m']].values
vals = geo_data[geo_data['Year'] == 2017]['loneills'].values
surface2017 = idw_surface(points, vals, 90000,656000,10000,654000,250,7,2)
# Look at surface
f, ax = plt.subplots(figsize = (8,10))
ax.imshow(surface2017, cmap='Reds')
ax.set_aspect('equal')
```
## 2018 Data
Get minimum and maximum bounds from the data. Round these down (in case of the 'min's) and up (in case of the 'max's) to get the values for `idw_surface()`
```
print("xmin = ", geo_data['oseast1m'].min(), "\n\r",
"xmax = ", geo_data['oseast1m'].max(), "\n\r",
"ymin = ", geo_data['osnrth1m'].min(), "\n\r",
"ymax = ", geo_data['osnrth1m'].max())
xmin = 90000
xmax = 656000
ymin = 10000
ymax = 654000
# Get point locations and values from data
points = geo_data[geo_data['Year'] == 2018][['oseast1m','osnrth1m']].values
vals = geo_data[geo_data['Year'] == 2018]['loneills'].values
surface2018 = idw_surface(points, vals, xmin,xmax,ymin,ymax,250,7,2)
# Look at surface
f, ax = plt.subplots(figsize = (8,10))
ax.imshow(surface2018, cmap='Reds')
ax.set_aspect('equal')
```
# Extract Values to MSOAs
Get 2011 MSOAs from the Open Geography Portal: http://geoportal.statistics.gov.uk/
```
# Get MSOAs which we use to aggregate the loneills variable.
#filestring = './Data/MSOAs/Middle_Layer_Super_Output_Areas_December_2011_Full_Clipped_Boundaries_in_England_and_Wales.shp'
filestring = r'[CHANGE THIS PATH]\Data\Boundaries\England and Wales\Middle_Layer_Super_Output_Areas_December_2011_Super_Generalised_Clipped_Boundaries_in_England_and_Wales.shp'
msoas = gpd.read_file(filestring)
msoas = msoas.to_crs({'init':'epsg:27700'})
# drop the Wales MSOAs
msoas = msoas[msoas['msoa11cd'].str[:1] == 'E'].copy()
# Get GB countries data to use for representation
#gb = gpd.read_file('./Data/GB/Countries_December_2017_Generalised_Clipped_Boundaries_in_UK_WGS84.shp')
#gb = gb.to_crs({'init':'epsg:27700'})
# get England
#eng = gb[gb['ctry17nm'] == 'England'].copy()
# Make affine transform for raster
trans = rst.Affine.from_gdal(xmin-125,250,0,ymax+125,0,-250)
# NB This process is slooow - write bespoke method?
# 2016
#msoa_zones = zonal_stats(msoas['geometry'], surface2016, affine = trans, stats = 'mean', nodata = np.nan)
#msoas['loneills_2016'] = list(map(lambda x: x['mean'] , msoa_zones))
# 2017
#msoa_zones = zonal_stats(msoas['geometry'], surface2017, affine = trans, stats = 'mean', nodata = np.nan)
#msoas['loneills_2017'] = list(map(lambda x: x['mean'] , msoa_zones))
# 2018
msoa_zones = zonal_stats(msoas['geometry'], surface2018, affine = trans, stats = 'mean', nodata = np.nan)
msoas['loneills_2018'] = list(map(lambda x: x['mean'] , msoa_zones))
# Check out the distributions of loneills by MSOA
f, [ax1, ax2, ax3] = plt.subplots(1,3, figsize=(14,5), sharex = True, sharey=True)
#ax1.hist(msoas['loneills_2016'], bins = 30)
#ax2.hist(msoas['loneills_2017'], bins = 30)
ax3.hist(msoas['loneills_2018'], bins = 30)
ax1.set_title("2016")
ax2.set_title("2017")
ax3.set_title("2018");
bins = [-10, -5, -3, -2, -1, 1, 2, 3, 5, 10, 22]
labels = ['#01665e','#35978f', '#80cdc1','#c7eae5','#f5f5f5','#f6e8c3','#dfc27d','#bf812d','#8c510a','#543005']
#msoas['loneills_2016_class'] = pd.cut(msoas['loneills_2016'], bins, labels = labels)
#msoas['loneills_2017_class'] = pd.cut(msoas['loneills_2017'], bins, labels = labels)
msoas['loneills_2018_class'] = pd.cut(msoas['loneills_2018'], bins, labels = labels)
msoas['loneills_2018_class'] = msoas.loneills_2018_class.astype(str) # convert categorical to string
f, (ax1, ax2, ax3) = plt.subplots(1,3,figsize = (16,10))
#msoas.plot(color = msoas['loneills_2016_class'], ax=ax1)
#msoas.plot(color = msoas['loneills_2017_class'], ax=ax2)
msoas.plot(color = msoas['loneills_2018_class'], ax=ax3)
#gb.plot(edgecolor = 'k', linewidth = 0.5, facecolor='none', ax=ax1)
#gb.plot(edgecolor = 'k', linewidth = 0.5, facecolor='none', ax=ax2)
#gb.plot(edgecolor = 'k', linewidth = 0.5, facecolor='none', ax=ax3)
# restrict to England
#ax1.set_xlim([82672,656000])
#ax1.set_ylim([5342,658000])
#ax2.set_xlim([82672,656000])
#ax2.set_ylim([5342,658000])
#ax3.set_xlim([82672,656000])
#ax3.set_ylim([5342,658000])
# Make a legend
# make bespoke legend
from matplotlib.patches import Patch
handles = []
ranges = ["-10, -5","-5, -3","-3, -2","-2, -1","-1, 1","1, 2","2, 3","3, 5","5, 10","10, 22"]
for color, label in zip(labels,ranges):
handles.append(Patch(facecolor = color, label = label))
ax1.legend(handles = handles, loc = 2);
# Save out msoa data as shapefile and geojson
msoas.to_file(path + "msoa_loneliness.shp", driver = 'ESRI Shapefile')
#msoas.to_file(path + "msoa_loneliness.geojson", driver = 'GeoJSON')
# save out msoa data as csv
msoas.to_csv(path + "msoa_loneliness.csv")
```
## Problem
Given a sorted list of integers of length N, determine if an element x is in the list without performing any multiplication, division, or bit-shift operations.
Do this in `O(log N)` time.
## Solution
We can't use binary search to locate the element because it involves dividing by two to get the middle element.
We can use Fibonacci search to get around this limitation. The idea is that fibonacci numbers are used to locate indices to check in the array, and by cleverly updating these indices, we can efficiently locate our element.
Let `p` and `q` be the indices of two consecutive Fibonacci numbers in this sequence, where `fib[q]` is the smallest Fibonacci number that is **greater than or equal to** the size of the array. We compare `x` with `array[fib[p]]` and perform the following logic:
1. If `x == array[fib[p]]`, we have found the element. Return true.
2. If `x < array[fib[p]]`, move p and q down two indices each, cutting the two largest remaining Fibonacci numbers from the search.
3. If `x > array[fib[p]]`, move p and q down one index each, and set the offset to the index just checked; the next comparison index becomes `offset + fib[p]`.
If we have exhausted our list of Fibonacci numbers, we can be assured that the element is not in our array.
Let's go through an example.
First, we need a helper function to generate the Fibonacci numbers, given the length of the array => N.
```
def get_fib_sequence(n):
a, b = 0, 1
sequence = [a]
while a < n:
a, b = b, a + b
sequence.append(a)
return sequence
```
Suppose we have array
```
[2, 4, 10, 16, 25, 45, 55, 65, 80, 100]
```
Since there are 10 elements in the array, the generated sequence of Fibonacci numbers will be
```
[0, 1, 1, 2, 3, 5, 8, 13]
```
So the values of p and q are: `p == 6, q == 7` (The second last and last indices in the sequence)
Now suppose we are searching for `45`, we'll carry out the following steps:
- Compare 45 with `array[fib[p]] => array[8]`. Since 45 < 80, we move p and q down two indices each: p = 4, q = 5.
- Next, compare 45 with `array[fib[p]] => array[3]`. Since 45 > 16, we move p and q down one index each (p = 3, q = 4) and set the offset to 3, the index we just checked.
- Finally, we compare 45 with `array[offset + fib[p]] => array[3 + 2] => array[5]`. Since array[5] == 45, we have found x.
```
def fibo_search(array, x):
n = len(array)
fibs = get_fib_sequence(n)
p, q = len(fibs) - 2, len(fibs) - 1
offset = 0
while q > 0:
index = min(offset + fibs[p], n - 1)
if x == array[index]:
return True
elif x < array[index]:
p -= 2
q -= 2
else:
p -= 1
q -= 1
offset = index
return False
fibo_search([2, 4, 10, 16, 25, 45, 55, 65, 80, 100], 45)
```
# Polynomial Regression
```
import numpy as np
import matplotlib.pyplot as plt
plt.rcParams['axes.labelsize'] = 14
plt.rcParams['axes.titlesize'] = 14
plt.rcParams['legend.fontsize'] = 12
plt.rcParams['figure.figsize'] = (8, 5)
%config InlineBackend.figure_format = 'retina'
```
### Linear models
$y = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \dots + \beta_n x_n + \epsilon$
$\begin{bmatrix} \vdots \\ y \\ \vdots \end{bmatrix} = \beta_0
+ \beta_1 \begin{bmatrix} \vdots \\ x_1 \\ \vdots \end{bmatrix}
+ \beta_2 \begin{bmatrix} \vdots \\ x_2 \\ \vdots \end{bmatrix}
+ \dots
+ \beta_n \begin{bmatrix} \vdots \\ x_n \\ \vdots \end{bmatrix}
+ \begin{bmatrix} \vdots \\ \epsilon \\ \vdots \end{bmatrix}$
$X =
\begin{bmatrix}
\vdots & \vdots & & \vdots \\
x_1 & x_2 & \dots & x_n \\
\vdots & \vdots & & \vdots
\end{bmatrix}$
### A simple linear model
$y = \beta_1 x_1 + \beta_2 x_2 + \epsilon$
### Extending this to a $2^{nd}$ degree polynomial model
$y = \beta_1 x_1 + \beta_2 x_2 + \beta_3 x_1^2 + \beta_4 x_1 x_2 + \beta_5 x_2^2 + \epsilon$
$x_1 x_2$ is an interaction term between $x_1$ and $x_2$
### Reparameterize the model
$y = \beta_1 x_1 + \beta_2 x_2 + \beta_3 x_1^2 + \beta_4 x_1 x_2 + \beta_5 x_2^2 + \epsilon$
$\begin{matrix}
x_3 & \rightarrow & x_1^2 \\
x_4 & \rightarrow & x_1 x_2 \\
x_5 & \rightarrow & x_2^2
\end{matrix}$
$y = \beta_1 x_1 + \beta_2 x_2 + \beta_3 x_3 + \beta_4 x_4 + \beta_5 x_5 + \epsilon$
### !!! But that's just a linear model
### Given the matrix of measured features $X$:
$X =
\begin{bmatrix}
\vdots & \vdots \\
x_1 & x_2 \\
\vdots & \vdots
\end{bmatrix}$
### All we need to do is fit a linear model using the following feature matrix $X_{poly}$:
$X_{poly} =
\begin{bmatrix}
\vdots & \vdots & \vdots & \vdots & \vdots \\
x_1 & x_2 & x_1^2 & x_1 x_2 & x_2^2 \\
\vdots & \vdots & \vdots & \vdots & \vdots
\end{bmatrix}$
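For a quick illustration of this feature expansion, here is a minimal sketch using scikit-learn's `PolynomialFeatures` (which we will also use below); the example matrix `X_demo` is made up:
```
import numpy as np
from sklearn.preprocessing import PolynomialFeatures

# Two measured features x1 and x2 for three made-up samples
X_demo = np.array([[1.0, 2.0],
                   [3.0, 4.0],
                   [5.0, 6.0]])

# Degree-2 expansion without the constant column:
# the resulting columns are x1, x2, x1^2, x1*x2, x2^2
poly_demo = PolynomialFeatures(degree=2, include_bias=False)
print(poly_demo.fit_transform(X_demo))
```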
## Some experimental data: Temperature vs. Yield
```
temperature = np.array([50, 50, 50, 70, 70, 70, 80, 80, 80, 90, 90, 90, 100, 100, 100])
experimental_yield = np.array([3.3, 2.8, 2.9, 2.3, 2.6, 2.1, 2.5, 2.9, 2.4, 3, 3.1, 2.8, 3.3, 3.5, 3])
plt.plot(temperature, experimental_yield, 'o')
plt.xlabel('Temperature')
plt.ylabel('Experimental Yield');
```
### Rearranging the data for use with sklearn
```
X = temperature.reshape([-1,1])
y = experimental_yield
X
```
# Fit yield vs. temperature data with a linear model
```
from sklearn.linear_model import LinearRegression
ols_model = LinearRegression()
ols_model.fit(X, y)
plt.plot(temperature, experimental_yield, 'o')
plt.plot(temperature, ols_model.predict(X), '-', label='OLS')
plt.xlabel('Temperature')
plt.ylabel('Experimental Yield')
plt.legend();
```
# Fit yield vs. temperature data with a $2^{nd}$ degree polynomial model
```
from sklearn.preprocessing import PolynomialFeatures
poly2 = PolynomialFeatures(degree=2)
X_poly2 = poly2.fit_transform(X)
X.shape, X_poly2.shape
poly2_model = LinearRegression()
poly2_model.fit(X_poly2, y)
plt.plot(temperature, experimental_yield, 'o')
plt.plot(temperature, ols_model.predict(X), '-', label='OLS')
plt.plot(temperature, poly2_model.predict(X_poly2), '-', label='Poly2')
plt.xlabel('Temperature')
plt.ylabel('Experimental Yield')
plt.legend();
```
Note that you could very well use a regularization model such as Ridge or Lasso instead of the simple ordinary least squares LinearRegression model. In this case, it doesn't matter too much because we have only one feature (Temperature).
# Smoothing the plot of the model fit
```
X_fit = np.arange(50, 101).reshape([-1, 1])
X_fit_poly2 = poly2.fit_transform(X_fit)
plt.plot(temperature, experimental_yield, 'o')
plt.plot(X_fit, ols_model.predict(X_fit), '-', label='OLS')
plt.plot(X_fit, poly2_model.predict(X_fit_poly2), '-', label='Poly2')
plt.xlabel('Temperature')
plt.ylabel('Experimental Yield')
plt.legend();
```
# Fit yield vs. temperature data with a $3^{rd}$ degree polynomial model
```
poly3 = PolynomialFeatures(degree=3)
X_poly3 = poly3.fit_transform(X)
X.shape, X_poly3.shape
poly3_model = LinearRegression()
poly3_model.fit(X_poly3, y)
X_fit_poly3 = poly3.fit_transform(X_fit)
plt.plot(temperature, experimental_yield, 'o')
plt.plot(X_fit, ols_model.predict(X_fit), '-', label='OLS')
plt.plot(X_fit, poly2_model.predict(X_fit_poly2), '-', label='Poly2')
plt.plot(X_fit, poly3_model.predict(X_fit_poly3), '-', label='Poly3')
plt.xlabel('Temperature')
plt.ylabel('Experimental Yield')
plt.legend();
```
### Polynomial fit is clearly better than a linear fit, but which degree polynomial should we use?
### Why not try a range of polynomial degrees, and see which one is best?
### But how do we determine which degree is best?
### We could use cross validation to determine the degree of polynomial that is most likely to best explain new data.
### Ideally, we would:
1. Split the data into training and testing sets
2. Perform cross validation on the training set to determine the best choice of polynomial degree
3. Fit the chosen model to the training set
4. Evaluate it on the withheld testing set
However, we have such little data that doing all of these splits is likely to leave individual partitions with subsets of data that are no longer representative of the relationship between temperature and yield.
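For completeness, here is a hedged sketch of what that ideal workflow might look like on a larger dataset (it is not used in what follows, for the reason just given; variable names like `best_degree` are illustrative):
```
# Sketch only: the ideal split-then-tune-then-test workflow described above
from sklearn.model_selection import train_test_split, cross_validate
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# Cross-validate each candidate degree on the training set only
mean_mse = {}
for degree in [1, 2, 3]:
    X_train_poly = PolynomialFeatures(degree=degree).fit_transform(X_train)
    cv = cross_validate(LinearRegression(), X_train_poly, y_train,
                        cv=5, scoring='neg_mean_squared_error')
    mean_mse[degree] = -cv['test_score'].mean()
best_degree = min(mean_mse, key=mean_mse.get)

# Fit the chosen model on the training set and evaluate on the withheld test set
poly_best = PolynomialFeatures(degree=best_degree)
final_model = LinearRegression().fit(poly_best.fit_transform(X_train), y_train)
test_r2 = final_model.score(poly_best.transform(X_test), y_test)
```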
```
plt.plot(temperature, experimental_yield, 'o')
plt.xlabel('Temperature')
plt.ylabel('Experimental Yield');
```
Thus, I'll forgo splitting the data into training and testing sets, and we'll train our model on the entire dataset. This is not ideal of course, and it means we'll have to simply hope that our model generalizes to new data.
I will use 5-fold cross validation to tune the polynomial degree hyperparameter. You might also want to explore 10-fold or leave one out cross validation.
```
from sklearn.model_selection import cross_validate
cv_mse = []
for degree in [2, 3]:
poly = PolynomialFeatures(degree=degree)
X_poly = poly.fit_transform(X)
model = LinearRegression()
results = cross_validate(model, X_poly, y, cv=5, scoring='neg_mean_squared_error')
cv_mse.append(-results['test_score'])
cv_mse
np.mean(cv_mse[0]), np.mean(cv_mse[1])
```
Slightly better mean validation error for $3^{rd}$ degree polynomial.
```
plt.plot(temperature, experimental_yield, 'o')
plt.plot(X_fit, ols_model.predict(X_fit), '-', label='OLS')
plt.plot(X_fit, poly2_model.predict(X_fit_poly2), '-', label='Poly2')
plt.plot(X_fit, poly3_model.predict(X_fit_poly3), '-', label='Poly3')
plt.xlabel('Temperature')
plt.ylabel('Experimental Yield')
plt.legend();
```
Despite the lower validation error for the $3^{rd}$ degree polynomial, we might still opt to stick with a $2^{nd}$ degree polynomial model. Why might we want to do that?
Less flexible models are more likely to generalize to new data because they are less likely to overfit noise.
Another important question to ask is whether the slight difference in mean validation error between $2^{nd}$ and $3^{rd}$ degree polynomial models is enough to really distinguish between the models?
One thing we can do is look at how variable the validation errors are across the various validation partitions.
```
cv_mse
binedges = np.linspace(0, np.max(cv_mse[0]), 11)
plt.hist(cv_mse[0], binedges, alpha=0.5, label='Poly2')
plt.hist(cv_mse[1], binedges, alpha=0.5, label='Poly3')
plt.xlabel('Validation MSE')
plt.ylabel('Counts')
plt.legend();
```
Is the extra flexibility of the $3^{rd}$ degree polynomial model worth it, or is it more likely to overfit noise in our data and less likely to generalize to new measurements?
How dependent are our results on how we partitioned the data? Repeat the above using 10-fold cross validation.
Of course, more measurements, including measures at 60 degrees, would help you to better distinguish between these models.
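As a starting point for the 10-fold cross validation exercise suggested above, a minimal sketch reusing the variables defined earlier:
```
cv10_mse = []
for degree in [2, 3]:
    X_poly = PolynomialFeatures(degree=degree).fit_transform(X)
    results = cross_validate(LinearRegression(), X_poly, y,
                             cv=10, scoring='neg_mean_squared_error')
    cv10_mse.append(-results['test_score'])

np.mean(cv10_mse[0]), np.mean(cv10_mse[1])
```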
```
plt.plot(temperature, experimental_yield, 'o')
plt.plot(X_fit, ols_model.predict(X_fit), '-', label='OLS')
plt.plot(X_fit, poly2_model.predict(X_fit_poly2), '-', label='Poly2')
plt.plot(X_fit, poly3_model.predict(X_fit_poly3), '-', label='Poly3')
plt.xlabel('Temperature')
plt.ylabel('Experimental Yield')
plt.legend();
```
# Python Functions
```
import numpy as np
```
## Custom functions
### Anatomy
name, arguments, docstring, body, return statement
```
def func_name(arg1, arg2):
"""Docstring starts wtih a short description.
May have more information here.
arg1 = something
    arg2 = something
Returns something
Example usage:
func_name(1, 2)
"""
result = arg1 + arg2
return result
help(func_name)
```
### Function arguments
positional, keyword, keyword-only, defaults, mutable and immutable arguments
```
def f(a, b, c, *args, **kwargs):
return a, b, c, args, kwargs
f(1, 2, 3, 4, 5, 6, x=7, y=8, z=9)
def g(a, b, c, *, x, y, z):
return a, b, c, x, y, z
try:
g(1,2,3,4,5,6)
except TypeError as e:
print(e)
g(1,2,3,x=4,y=5,z=6)
def h(a=1, b=2, c=3):
return a, b, c
h()
h(b=9)
h(7,8,9)
```
### Default mutable argument
binding is fixed at function definition, the default=None idiom
```
def f(a, x=[]):
x.append(a)
return x
f(1)
f(2)
def f(a, x=None):
if x is None:
x = []
x.append(a)
return x
f(1)
f(2)
```
## Pure functions
deterministic, no side effects
```
def f1(x):
"""Pure."""
return x**2
def f2(x):
"""Pure if we ignore local state change.
    The x in the function behaves like a copy.
"""
x = x**2
return x
def f3(x):
"""Impure if x is mutable.
    Augmented assignment is an in-place operation for mutable structures."""
x **= 2
return x
a = 2
b = np.array([1,2,3])
f1(a), a
f1(b), b
f2(a), a
f2(b), b
f3(a), a
f3(b), b
def f4():
"""Stochastic functions are tehcnically impure
since a global seed is changed between function calls."""
import random
return random.randint(0,10)
f4(), f4(), f4()
```
## Recursive functions
Euclidean GCD algorithm
```
gcd(a, 0) = a
gcd(a, b) = gcd(b, a mod b)
```
```
def factorial(n):
"""Simple recursive funciton."""
if n == 0:
return 1
else:
return n * factorial(n-1)
factorial(4)
def factorial1(n):
"""Non-recursive version."""
s = 1
for i in range(1, n+1):
s *= i
return s
factorial1(4)
def gcd(a, b):
if b == 0:
return a
else:
return gcd(b, a % b)
gcd(16, 24)
```
## Generators
yield and laziness, infinite streams
```
def count(n=0):
while True:
yield n
n += 1
for i in count(10):
print(i)
if i >= 15:
break
from itertools import islice
list(islice(count(), 10, 15))
def updown(n):
yield from range(n)
yield from range(n, 0, -1)
updown(5)
list(updown(5))
```
## First class functions
functions as arguments, functions as return values
```
def double(x):
return x*2
def twice(x, func):
return func(func(x))
twice(3, double)
```
Example from standard library
```
xs = 'banana apple guava'.split()
xs
sorted(xs)
sorted(xs, key=lambda s: s.count('a'))
def f(n):
def g():
print("hello")
def h():
print("goodbye")
if n == 0:
return g
else:
return h
g = f(0)
g()
h = f(1)
h()
```
## Function dispatch
Poor man's switch statement
```
def add(x, y):
return x + y
def mul(x, y):
return x * y
ops = {
'a': add,
'm': mul
}
items = zip('aammaammam', range(10), range(10))
for item in items:
key, x, y = item
op = ops[key]
print(key, x, y, op(x, y))
```
## Closure
Capture of argument in enclosing scope
```
def f(x):
def g(y):
return x + y
return g
f1 = f(0)
f2 = f(10)
f1(5), f2(5)
```
## Decorators
A timing decorator
```
def timer(f):
import time
def g(*args, **kwargs):
tic = time.time()
res = f(*args, **kwargs)
toc = time.time()
return res, toc-tic
return g
def f(n):
s = 0
for i in range(n):
s += i
return s
timed_f = timer(f)
timed_f(100000)
```
Decorator syntax
```
@timer
def g(n):
s = 0
for i in range(n):
s += i
return s
g(100000)
```
## Anonymous functions
Short, one-use lambdas
```
f = lambda x: x**2
f(3)
g = lambda x, y: x+y
g(3,4)
```
## Map, filter and reduce
Functional building blocks
```
xs = range(10)
list(map(lambda x: x**2, xs))
list(filter(lambda x: x%2 == 0, xs))
from functools import reduce
reduce(lambda x, y: x+y, xs)
reduce(lambda x, y: x+y, xs, 100)
```
## Functional modules in the standard library
itertools, functools and operator
```
import operator as op
reduce(op.add, range(10))
import itertools as it
list(it.islice(it.cycle([1,2,3]), 1, 10))
list(it.permutations('abc', 2))
list(it.combinations('abc', 2))
from functools import partial, lru_cache
def f(a, b, c):
return a + b + c
g = partial(f, b = 2, c=3)
g(1)
def fib(n, trace=False):
if trace:
print("fib(%d)" % n, end=',')
if n <= 2:
return 1
else:
return fib(n-1, trace) + fib(n-2, trace)
fib(10, True)
%timeit -r1 -n100 fib(20)
@lru_cache(3)
def fib1(n, trace=False):
if trace:
print("fib(%d)" % n, end=',')
if n <= 2:
return 1
else:
return fib1(n-1, trace) + fib1(n-2, trace)
fib1(10, True)
%timeit -r1 -n100 fib1(20)
```
## Using `toolz`
functional power tools
```
import toolz as tz
import toolz.curried as c
```
Find the 5 most common sequences of length 3 in the dna variable.
```
dna = np.random.choice(list('ACTG'), (10,80), p=[.1,.2,.3,.4])
dna
tz.pipe(
dna,
c.map(lambda s: ''.join(s)),
list
)
res = tz.pipe(
dna,
c.map(lambda s: ''.join(s)),
lambda s: ''.join(s),
c.sliding_window(3),
c.map(lambda s: ''.join(s)),
tz.frequencies
)
[(k,v) for i, (k, v) in enumerate(sorted(res.items(), key=lambda x: -x[1])) if i < 5]
```
## Function annotations and type hints
Function annotations and type hints are optional and meant for 3rd party libraries (e.g. a static type checker or JIT compiler). They are NOT enforced at runtime.
Notice the type annotation, default value and return type.
```
def f(a: str = "hello") -> bool:
return a.islower()
f()
f("hello")
f("Hello")
```
Function annotations can be accessed through a special attribute.
```
f.__annotations__
```
Type and function annotations are NOT enforced. In fact, the Python interpreter essentially ignores them.
```
def f(x: int) -> int:
return x + x
f("hello")
```
For more types, import from the `typing` module
```
from typing import Sequence, TypeVar
from functools import reduce
import operator as op
T = TypeVar('T')
def f(xs: Sequence[T]) -> T:
return reduce(op.add, xs)
f([1,2,3])
f({1., 2., 3.})
f(('a', 'b', 'c'))
```
## $k$-means clustering: An example implementation in Python 3 with numpy and matplotlib.
The [$k$-means](https://en.wikipedia.org/wiki/K-means_clustering) algorithm is an unsupervised learning method for identifying clusters within a dataset. The $k$ represents the number of clusters to be identified, which is specified by the user before starting the algorithm.
The algorithm goes like this:
* Initialize the $k$ cluster centroids.
* Repeat:
1. Cluster assignment: Assign each data point to the nearest cluster centroid.
  2. Cluster updating: For each cluster centroid, average the locations of its corresponding points and re-assign the centroid to that location.
The last two steps are repeated until stopping criteria are met, such as reaching a maximum number of iterations or the centroid velocity dropping below a threshold. The results of the algorithm can be highly dependent on the cluster initialization step, especially when there are a large number of clusters and data points. Performance can be improved in a few different ways, such as running the algorithm multiple times and averaging the results, or using a different initialization method such as [$k$-means plus plus](https://en.wikipedia.org/wiki/K-means%2B%2B) (a sketch follows below). Here, we will initialize the $k$ cluster centroids by selecting $k$ random data points.
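For reference, here is a minimal sketch of a $k$-means++ style initialization (the helper below is hypothetical and not used in this post, where we simply pick $k$ random data points):
```
import numpy as np

def initialize_clusters_pp(points, k, rng=np.random):
    """Sketch of k-means++ style initialization: each new centroid is drawn
    with probability proportional to its squared distance from the nearest
    centroid chosen so far."""
    centroids = [points[rng.randint(points.shape[0])]]
    for _ in range(1, k):
        # Squared distance from every point to its nearest existing centroid
        d2 = np.min([np.sum((points - c)**2, axis=1) for c in centroids], axis=0)
        centroids.append(points[rng.choice(points.shape[0], p=d2 / d2.sum())])
    return np.array(centroids)
```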
Mathematically, the cluster assignment step can be written as:
$c^{(i)} = argmin_{k} \left\lVert x^{(i)} - \mu_k\right\rVert^2$
where $c^{(i)}$ is the centroid closest to sample $x^{(i)}$ and $\mu_k$ represents the $k$-th centroid.
Similarly, the cluster update step can be written as:
$\mu_k = \frac{1}{n}[x^{(k_1)}+x^{(k_2)}+...+x^{(k_n)}]$
where, again $\mu_k$ represents the $k$-th centroid and $x^{(k_n)}$ are the training examples assigned to that centroid.
First, some imports.
```
import numpy as np
np.random.seed(0)
import matplotlib.pyplot as plt
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
from sklearn.datasets import make_blobs
```
Next we'll define some functions based on steps in the K-means algorithm.
```
def initialize_clusters(points, k):
"""Initializes clusters as k randomly selected points from points."""
return points[np.random.randint(points.shape[0], size=k)]
# Function for calculating the distance between a centroid and all data points
def get_distances(centroid, points):
"""Returns the distance the centroid is from each data point in points."""
return np.linalg.norm(points - centroid, axis=1)
```
Here we'll generate some data using [scikit-learn](http://scikit-learn.org)'s [`make_blobs`](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.make_blobs.html#sklearn.datasets.make_blobs) function. For this example we'll generate a dataset with three clusters. Using this function will give us access to the actual class labels for each group so we can assess accuracy later if we would like to. Normally when using K-means, you won't know the cluster assignments or the number of clusters in the dataset!
```
# Generate dataset
X, y = make_blobs(centers=3, n_samples=500, random_state=1)
# Visualize
fig, ax = plt.subplots(figsize=(4,4))
ax.scatter(X[:,0], X[:,1], alpha=0.5)
ax.set_xlabel('$x_1$')
ax.set_ylabel('$x_2$');
```
Now let's implement K-means using k = 3.
```
k = 3
maxiter = 50
# Initialize our centroids by picking random data points
centroids = initialize_clusters(X, k)
# Initialize the vectors in which we will store the
# assigned classes of each data point and the
# calculated distances from each centroid
classes = np.zeros(X.shape[0], dtype=np.float64)
distances = np.zeros([X.shape[0], k], dtype=np.float64)
# Loop for the maximum number of iterations
for i in range(maxiter):
# Assign all points to the nearest centroid
    for j, c in enumerate(centroids):
        distances[:, j] = get_distances(c, X)
# Determine class membership of each point
# by picking the closest centroid
classes = np.argmin(distances, axis=1)
# Update centroid location using the newly
# assigned data point classes
for c in range(k):
centroids[c] = np.mean(X[classes == c], 0)
```
Once we've finished running the algorithm, we can visualize the classified data and our calculated centroids locations.
```
group_colors = ['skyblue', 'coral', 'lightgreen']
colors = [group_colors[j] for j in classes]
fig, ax = plt.subplots(figsize=(4,4))
ax.scatter(X[:,0], X[:,1], color=colors, alpha=0.5)
ax.scatter(centroids[:,0], centroids[:,1], color=['blue', 'darkred', 'green'], marker='o', lw=2)
ax.set_xlabel('$x_0$')
ax.set_ylabel('$x_1$');
```
Looks pretty good! In another post I'll discuss some limitations of the $k$-means algorithm and assess what happens when $k$ is chosen to be greater than or less than the actual number of clusters in your dataset.
# Using TensorNet (Basic)
This notebook will demonstrate some of the core functionalities of TensorNet:
- Creating and setting up a dataset
- Augmenting the dataset
- Creating and configuring a model and viewing its summary
- Defining an optimizer and a criterion
- Setting up callbacks
- Training and validating the model
- Displaying plots for viewing the change in accuracy during training
# Installing Packages
```
!pip install --upgrade --no-cache-dir torch-tensornet
```
# Imports
Importing necessary packages and modules
```
%matplotlib inline
import matplotlib.pyplot as plt
from tensornet.data import CIFAR10
from tensornet.models import mobilenet_v2
from tensornet.models.loss import cross_entropy_loss
from tensornet.models.optimizer import sgd
from tensornet.utils import initialize_cuda, plot_metric
from tensornet.engine.ops import ModelCheckpoint
from tensornet.engine.ops.lr_scheduler import reduce_lr_on_plateau
```
## Set Seed and Get GPU Availability
```
# Initialize CUDA and set random seed
cuda, device = initialize_cuda(1) # random seed is set to 1
```
## Setup Dataset
Downloading and initializing `CIFAR-10` dataset and applying the following augmentations:
- Horizontal Flip
- Random Rotation
- Cutout Augmentation
```
dataset = CIFAR10(
train_batch_size=64,
val_batch_size=64,
cuda=cuda,
num_workers=4,
horizontal_flip_prob=0.2,
rotate_degree=20,
cutout_prob=0.3,
cutout_dim=(8, 8),
)
```
## Data Visualization
Let's see what our data looks like. This information will help us decide which transformations can be used on the dataset.
```
# Fetch data
classes = dataset.classes
sample_data, sample_targets = dataset.data()
# Set number of images to display
num_images = 4
# Display images with labels
fig, axs = plt.subplots(1, 4, figsize=(8, 8))
fig.tight_layout()
for i in range(num_images):
axs[i].axis('off')
axs[i].set_title(f'Label: {classes[sample_targets[i]]}')
axs[i].imshow(sample_data[i])
```
## Training and Validation Dataloaders
This is the final step in data preparation. It sets the dataloader arguments and then creates the dataloader
```
# Create train data loader
train_loader = dataset.loader(train=True)
# Create val data loader
val_loader = dataset.loader(train=False)
```
# Model Architecture and Summary
We'll download a pretrained MobileNetV2 model and train it on our dataset using fine-tuning.
```
model = mobilenet_v2(pretrained=True).to(device) # Create model
model.summary(dataset.image_size) # Display model summary
```
# Model Training and Validation
- Loss Function: `Cross Entropy Loss`
- Optimizer: `SGD`
- Callbacks: `Model Checkpoint` and `Reduce LR on Plateau`
```
criterion = cross_entropy_loss() # Create loss function
optimizer = sgd(model) # Create optimizer with default learning rate
# Create callbacks
checkpoint_path = 'checkpoints'
callbacks = [
ModelCheckpoint(checkpoint_path, monitor='val_accuracy'),
reduce_lr_on_plateau(optimizer, factor=0.2, patience=2, min_lr=1e-6),
]
model.fit(
train_loader,
optimizer,
criterion,
device=device,
epochs=10,
val_loader=val_loader,
callbacks=callbacks,
metrics=['accuracy'],
)
```
## Result Analysis
Displaying the change in accuracy of the training and the validation set during training
```
plot_metric({
'Training': model.learner.train_metrics[0]['accuracy'],
'Validation': model.learner.val_metrics[0]['accuracy']
}, 'Accuracy')
```
# Exploring Datasets with Python
In this short demo we will analyse a given dataset from 1978, which contains information about politicians having affairs.
To analyse it, we will use a Jupyter Notebook, which is basically a REPL++ for Python. Entering a command with shift executes the line and prints the result.
```
4 + 4
def sum(a, b):
return a + b
sum(40, 2)
import pandas as pd
affairs = pd.read_csv('affairs.csv')
affairs.head()
affairs['sex'].head()
affairs['sex'].value_counts()
affairs['age'].describe()
affairs['age'].max()
affairs.describe()
affairs[affairs['sex'] == 'female'].head()
affairs[affairs['sex'] == 'female'].describe()
affairs['below_30'] = affairs['age'] < 30
affairs['below_30'].value_counts()
affairs.head()
rel_meanings = ['not', 'mildly', 'fairly', 'strongly']
affairs['religious'] = affairs['religious'].apply(lambda x: rel_meanings[min(x, 4)-1])
affairs.head()
```
# Visualize Data
To visualize our data, we will use Seaborn, a Python visualization library based on matplotlib. It provides a high-level interface for drawing attractive statistical graphics. Let's import it.
```
import seaborn as sns
%matplotlib inline
sns.set()
sns.set_context('talk')
```
Seaborn together with Pandas makes it pretty easy to create charts to analyze our data. We can pass our Dataframes and Series directly into Seaborn methods. We will see how in the following sections.
# Univariate Plotting
Let's start by visualizing the distribution of the ages of our people. We can achieve this with a simple method called distplot by passing our series of ages as an argument.
```
sns.distplot(affairs['age'])
sns.distplot(affairs['age'], bins=50, rug=True, kde=False)
sns.distplot(affairs['ym'], bins=10, kde=False)
```
The average age of our people is around 32, but most people have been married for more than 14 years!
# Bivariate Plotting
Numbers get even more interesting when we can compare them to other numbers! Let's start by comparing the number of years married vs. the number of affairs. Seaborn provides us with a method called jointplot for this use case.
```
sns.jointplot(affairs['ym'], affairs['nbaffairs'])
sns.jointplot(affairs['ym'], affairs['nbaffairs'], kind='reg')
sns.jointplot(affairs['ym'], affairs['age'], kind='kde', shade=True)
sns.pairplot(affairs.drop('below_30', axis=1), hue='sex', kind='reg')
sns.lmplot(x="ym", y="nbaffairs", hue="sex", col="child", row="religious", data=affairs)
sns.boxplot(x="sex", y="ym", hue="child", data=affairs);
sns.violinplot(x="religious", y="nbaffairs", hue="sex", data=affairs, split=True);
affairs.corr()
sns.heatmap(affairs.corr(), cmap='coolwarm')
```
# Deep Markov Model
## Introduction
We're going to build a deep probabilistic model for sequential data: the deep markov model. The particular dataset we want to model is composed of snippets of polyphonic music. Each time slice in a sequence spans a quarter note and is represented by an 88-dimensional binary vector that encodes the notes at that time step.
Since music is (obviously) temporally coherent, we need a model that can represent complex time dependencies in the observed data. It would not, for example, be appropriate to consider a model in which the notes at a particular time step are independent of the notes at previous time steps. One way to do this is to build a latent variable model in which the variability and temporal structure of the observations is controlled by the dynamics of the latent variables.
One particular realization of this idea is a markov model, in which we have a chain of latent variables, with each latent variable in the chain conditioned on the previous latent variable. This is a powerful approach, but if we want to represent complex data with complex (and in this case unknown) dynamics, we would like our model to be sufficiently flexible to accommodate dynamics that are potentially highly non-linear. Thus a deep markov model: we allow for the transition probabilities governing the dynamics of the latent variables as well as the emission probabilities that govern how the observations are generated by the latent dynamics to be parameterized by (non-linear) neural networks.
The specific model we're going to implement is based on the following reference:
[1] `Structured Inference Networks for Nonlinear State Space Models`,<br />
Rahul G. Krishnan, Uri Shalit, David Sontag
Please note that while we do not assume that the reader of this tutorial has read the reference, it's definitely a good place to look for a more comprehensive discussion of the deep markov model in the context of other time series models.
We've described the model, but how do we go about training it? The inference strategy we're going to use is variational inference, which requires specifying a parameterized family of distributions that can be used to approximate the posterior distribution over the latent random variables. Given the non-linearities and complex time-dependencies inherent in our model and data, we expect the exact posterior to be highly non-trivial. So we're going to need a flexible family of variational distributions if we hope to learn a good model. Happily, together PyTorch and Pyro provide all the necessary ingredients. As we will see, assembling them will be straightforward. Let's get to work.
## The Model
A convenient way to describe the high-level structure of the model is with a graphical model.
Here, we've rolled out the model assuming that the sequence of observations is of length three: $\{{\bf x}_1, {\bf x}_2, {\bf x}_3\}$. Mirroring the sequence of observations we also have a sequence of latent random variables: $\{{\bf z}_1, {\bf z}_2, {\bf z}_3\}$. The figure encodes the structure of the model. The corresponding joint distribution is
$$p({\bf x}_{123} , {\bf z}_{123})=p({\bf x}_1|{\bf z}_1)p({\bf x}_2|{\bf z}_2)p({\bf x}_3|{\bf z}_3)p({\bf z}_1)p({\bf z}_2|{\bf z}_1)p({\bf z}_3|{\bf z}_2)$$
Conditioned on ${\bf z}_t$, each observation ${\bf x}_t$ is independent of the other observations. This can be read off from the fact that each ${\bf x}_t$ only depends on the corresponding latent ${\bf z}_t$, as indicated by the downward pointing arrows. We can also read off the markov property of the model: each latent ${\bf z}_t$, when conditioned on the previous latent ${\bf z}_{t-1}$, is independent of all previous latents $\{ {\bf z}_{t-2}, {\bf z}_{t-3}, ...\}$. This effectively says that everything one needs to know about the state of the system at time $t$ is encapsulated by the latent ${\bf z}_{t}$.
We will assume that the observation likelihoods, i.e. the probability distributions $p({{\bf x}_t}|{{\bf z}_t})$ that control the observations, are given by the bernoulli distribution. This is an appropriate choice since our observations are all 0 or 1. For the probability distributions $p({\bf z}_t|{\bf z}_{t-1})$ that control the latent dynamics, we choose (conditional) gaussian distributions with diagonal covariances. This is reasonable since we assume that the latent space is continuous.
The solid black squares represent non-linear functions parameterized by neural networks. This is what makes this a _deep_ markov model. Note that the black squares appear in two different places: in between pairs of latents and in between latents and observations. The non-linear function that connects the latent variables ('Trans' in Fig. 1) controls the dynamics of the latent variables. Since we allow the conditional probability distribution of ${\bf z}_{t}$ to depend on ${\bf z}_{t-1}$ in a complex way, we will be able to capture complex dynamics in our model. Similarly, the non-linear function that connects the latent variables to the observations ('Emit' in Fig. 1) controls how the observations depend on the latent dynamics.
Some additional notes:
- we can freely choose the dimension of the latent space to suit the problem at hand: small latent spaces for simple problems and larger latent spaces for problems with complex dynamics
- note the parameter ${\bf z}_0$ in Fig. 1. as will become more apparent from the code, this is just a convenient way for us to parameterize the probability distribution $p({\bf z}_1)$ for the first time step, where there are no previous latents to condition on.
### The Gated Transition and the Emitter
Without further ado, let's start writing some code. We first define the two PyTorch Modules that correspond to the black squares in Fig. 1. First the emission function:
```python
class Emitter(nn.Module):
"""
Parameterizes the bernoulli observation likelihood p(x_t | z_t)
"""
def __init__(self, input_dim, z_dim, emission_dim):
super().__init__()
# initialize the three linear transformations used in the neural network
self.lin_z_to_hidden = nn.Linear(z_dim, emission_dim)
self.lin_hidden_to_hidden = nn.Linear(emission_dim, emission_dim)
self.lin_hidden_to_input = nn.Linear(emission_dim, input_dim)
# initialize the two non-linearities used in the neural network
self.relu = nn.ReLU()
self.sigmoid = nn.Sigmoid()
def forward(self, z_t):
"""
Given the latent z at a particular time step t we return the vector of
probabilities `ps` that parameterizes the bernoulli distribution p(x_t|z_t)
"""
h1 = self.relu(self.lin_z_to_hidden(z_t))
h2 = self.relu(self.lin_hidden_to_hidden(h1))
ps = self.sigmoid(self.lin_hidden_to_input(h2))
return ps
```
In the constructor we define the linear transformations that will be used in our emission function. Note that `emission_dim` is the number of hidden units in the neural network. We also define the non-linearities that we will be using. The forward call defines the computational flow of the function. We take in the latent ${\bf z}_{t}$ as input and do a sequence of transformations until we obtain a vector of length 88 that defines the emission probabilities of our bernoulli likelihood. Because of the sigmoid, each element of `ps` will be between 0 and 1 and will define a valid probability. Taken together the elements of `ps` encode which notes we expect to observe at time $t$ given the state of the system (as encoded in ${\bf z}_{t}$).
Now we define the gated transition function:
```python
class GatedTransition(nn.Module):
"""
Parameterizes the gaussian latent transition probability p(z_t | z_{t-1})
See section 5 in the reference for comparison.
"""
def __init__(self, z_dim, transition_dim):
super().__init__()
# initialize the six linear transformations used in the neural network
self.lin_gate_z_to_hidden = nn.Linear(z_dim, transition_dim)
self.lin_gate_hidden_to_z = nn.Linear(transition_dim, z_dim)
self.lin_proposed_mean_z_to_hidden = nn.Linear(z_dim, transition_dim)
self.lin_proposed_mean_hidden_to_z = nn.Linear(transition_dim, z_dim)
self.lin_sig = nn.Linear(z_dim, z_dim)
self.lin_z_to_loc = nn.Linear(z_dim, z_dim)
# modify the default initialization of lin_z_to_loc
        # so that it starts out as the identity function
self.lin_z_to_loc.weight.data = torch.eye(z_dim)
self.lin_z_to_loc.bias.data = torch.zeros(z_dim)
# initialize the three non-linearities used in the neural network
self.relu = nn.ReLU()
self.sigmoid = nn.Sigmoid()
self.softplus = nn.Softplus()
def forward(self, z_t_1):
"""
Given the latent z_{t-1} corresponding to the time step t-1
we return the mean and scale vectors that parameterize the
(diagonal) gaussian distribution p(z_t | z_{t-1})
"""
# compute the gating function
_gate = self.relu(self.lin_gate_z_to_hidden(z_t_1))
gate = self.sigmoid(self.lin_gate_hidden_to_z(_gate))
# compute the 'proposed mean'
_proposed_mean = self.relu(self.lin_proposed_mean_z_to_hidden(z_t_1))
proposed_mean = self.lin_proposed_mean_hidden_to_z(_proposed_mean)
# assemble the actual mean used to sample z_t, which mixes
# a linear transformation of z_{t-1} with the proposed mean
# modulated by the gating function
loc = (1 - gate) * self.lin_z_to_loc(z_t_1) + gate * proposed_mean
# compute the scale used to sample z_t, using the proposed
# mean from above as input. the softplus ensures that scale is positive
scale = self.softplus(self.lin_sig(self.relu(proposed_mean)))
# return loc, scale which can be fed into Normal
return loc, scale
```
This mirrors the structure of `Emitter` above, with the difference that the computational flow is a bit more complicated. This is for two reasons. First, the output of `GatedTransition` needs to define a valid (diagonal) gaussian distribution. So we need to output two parameters: the mean `loc`, and the (square root) covariance `scale`. These both need to have the same dimension as the latent space. Second, we don't want to _force_ the dynamics to be non-linear. Thus our mean `loc` is a sum of two terms, only one of which depends non-linearly on the input `z_t_1`. This way we can support both linear and non-linear dynamics (or indeed have the dynamics of part of the latent space be linear, while the remainder of the dynamics is non-linear).
### Model - a Pyro Stochastic Function
So far everything we've done is pure PyTorch. To finish translating our model into code we need to bring Pyro into the picture. Basically we need to implement the stochastic nodes (i.e. the circles) in Fig. 1. To do this we introduce a callable `model()` that contains the Pyro primitive `pyro.sample`. The `sample` statements will be used to specify the joint distribution over the latents ${\bf z}_{1:T}$. Additionally, the `obs` argument can be used with the `sample` statements to specify how the observations ${\bf x}_{1:T}$ depend on the latents. Before we look at the complete code for `model()`, let's look at a stripped down version that contains the main logic:
```python
def model(...):
z_prev = self.z_0
# sample the latents z and observed x's one time step at a time
for t in range(1, T_max + 1):
# the next two lines of code sample z_t ~ p(z_t | z_{t-1}).
# first compute the parameters of the diagonal gaussian
# distribution p(z_t | z_{t-1})
z_loc, z_scale = self.trans(z_prev)
# then sample z_t according to dist.Normal(z_loc, z_scale)
z_t = pyro.sample("z_%d" % t, dist.Normal(z_loc, z_scale))
# compute the probabilities that parameterize the bernoulli likelihood
emission_probs_t = self.emitter(z_t)
# the next statement instructs pyro to observe x_t according to the
# bernoulli distribution p(x_t|z_t)
pyro.sample("obs_x_%d" % t,
dist.Bernoulli(emission_probs_t),
obs=mini_batch[:, t - 1, :])
# the latent sampled at this time step will be conditioned upon
# in the next time step so keep track of it
z_prev = z_t
```
The first thing we need to do is sample ${\bf z}_1$. Once we've sampled ${\bf z}_1$, we can sample ${\bf z}_2 \sim p({\bf z}_2|{\bf z}_1)$ and so on. This is the logic implemented in the `for` loop. The parameters `z_loc` and `z_scale` that define the probability distributions $p({\bf z}_t|{\bf z}_{t-1})$ are computed using `self.trans`, which is just an instance of the `GatedTransition` module defined above. For the first time step at $t=1$ we condition on `self.z_0`, which is a (trainable) `Parameter`, while for subsequent time steps we condition on the previously drawn latent. Note that each random variable `z_t` is assigned a unique name by the user.
Once we've sampled ${\bf z}_t$ at a given time step, we need to observe the datapoint ${\bf x}_t$. So we pass `z_t` through `self.emitter`, an instance of the `Emitter` module defined above to obtain `emission_probs_t`. Together with the argument `dist.Bernoulli()` in the `sample` statement, these probabilities fully specify the observation likelihood. Finally, we also specify the slice of observed data ${\bf x}_t$: `mini_batch[:, t - 1, :]` using the `obs` argument to `sample`.
This fully specifies our model and encapsulates it in a callable that can be passed to Pyro. Before we move on let's look at the full version of `model()` and go through some of the details we glossed over in our first pass.
```python
def model(self, mini_batch, mini_batch_reversed, mini_batch_mask,
mini_batch_seq_lengths, annealing_factor=1.0):
# this is the number of time steps we need to process in the mini-batch
T_max = mini_batch.size(1)
# register all PyTorch (sub)modules with pyro
# this needs to happen in both the model and guide
pyro.module("dmm", self)
# set z_prev = z_0 to setup the recursive conditioning in p(z_t | z_{t-1})
z_prev = self.z_0.expand(mini_batch.size(0), self.z_0.size(0))
# we enclose all the sample statements in the model in a plate.
# this marks that each datapoint is conditionally independent of the others
with pyro.plate("z_minibatch", len(mini_batch)):
# sample the latents z and observed x's one time step at a time
for t in range(1, T_max + 1):
# the next chunk of code samples z_t ~ p(z_t | z_{t-1})
# note that (both here and elsewhere) we use poutine.scale to take care
# of KL annealing. we use the mask() method to deal with raggedness
# in the observed data (i.e. different sequences in the mini-batch
# have different lengths)
# first compute the parameters of the diagonal gaussian
# distribution p(z_t | z_{t-1})
z_loc, z_scale = self.trans(z_prev)
# then sample z_t according to dist.Normal(z_loc, z_scale).
# note that we use the reshape method so that the univariate
# Normal distribution is treated as a multivariate Normal
# distribution with a diagonal covariance.
with poutine.scale(None, annealing_factor):
z_t = pyro.sample("z_%d" % t,
dist.Normal(z_loc, z_scale)
.mask(mini_batch_mask[:, t - 1:t])
.to_event(1))
# compute the probabilities that parameterize the bernoulli likelihood
emission_probs_t = self.emitter(z_t)
# the next statement instructs pyro to observe x_t according to the
# bernoulli distribution p(x_t|z_t)
pyro.sample("obs_x_%d" % t,
dist.Bernoulli(emission_probs_t)
.mask(mini_batch_mask[:, t - 1:t])
.to_event(1),
obs=mini_batch[:, t - 1, :])
# the latent sampled at this time step will be conditioned upon
# in the next time step so keep track of it
z_prev = z_t
```
The first thing to note is that `model()` takes a number of arguments. For now let's just take a look at `mini_batch` and `mini_batch_mask`. `mini_batch` is a three dimensional tensor, with the first dimension being the batch dimension, the second dimension being the temporal dimension, and the final dimension being the features (88-dimensional in our case). To speed up the code, whenever we run `model` we're going to process an entire mini-batch of sequences (i.e. we're going to take advantage of vectorization).
This is sensible because our model is implicitly defined over a single observed sequence. The probability of a set of sequences is just given by the products of the individual sequence probabilities. In other words, given the parameters of the model the sequences are conditionally independent.
This vectorization introduces some complications because sequences can be of different lengths. This is where `mini_batch_mask` comes in. `mini_batch_mask` is a two dimensional 0/1 mask of dimensions `mini_batch_size` x `T_max`, where `T_max` is the maximum length of any sequence in the mini-batch. This encodes which parts of `mini_batch` are valid observations.
So the first thing we do is grab `T_max`: we have to unroll our model for at least this many time steps. Note that this will result in a lot of 'wasted' computation, since some of the sequences will be shorter than `T_max`, but this is a small price to pay for the big speed-ups that come with vectorization. We just need to make sure that none of the 'wasted' computations 'pollute' our model computation. We accomplish this by passing the mask appropriate to time step $t$ to the `mask` method (which acts on the distribution that needs masking).
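To make the role of the mask concrete, here is a minimal sketch (not the tutorial's actual data-loading code) of how such a 0/1 mask can be built from the per-sequence lengths:
```python
import torch

def build_mask(seq_lengths, T_max):
    # seq_lengths: integer tensor of shape (mini_batch_size,) with each sequence's length
    time_index = torch.arange(T_max).unsqueeze(0)            # shape (1, T_max)
    return (time_index < seq_lengths.unsqueeze(1)).float()   # shape (mini_batch_size, T_max)

# e.g. build_mask(torch.tensor([3, 1]), T_max=4) yields [[1, 1, 1, 0], [1, 0, 0, 0]]
```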
Finally, the line `pyro.module("dmm", self)` is equivalent to a bunch of `pyro.param` statements for each parameter in the model. This lets Pyro know which parameters are part of the model. Just like for the `sample` statement, we give the module a unique name. This name will be incorporated into the name of the `Parameters` in the model. We leave a discussion of the KL annealing factor for later.
## Inference
At this point we've fully specified our model. The next step is to set ourselves up for inference. As mentioned in the introduction, our inference strategy is going to be variational inference (see [SVI Part I](svi_part_i.ipynb) for an introduction). So our next task is to build a family of variational distributions appropriate to doing inference in a deep markov model. However, at this point it's worth emphasizing that nothing about the way we've implemented `model()` ties us to variational inference. In principle we could use _any_ inference strategy available in Pyro. For example, in this particular context one could imagine using some variant of Sequential Monte Carlo (although this is not currently supported in Pyro).
### Guide
The purpose of the guide (i.e. the variational distribution) is to provide a (parameterized) approximation to the exact posterior $p({\bf z}_{1:T}|{\bf x}_{1:T})$. Actually, there's an implicit assumption here which we should make explicit, so let's take a step back.
Suppose our dataset $\mathcal{D}$ consists of $N$ sequences
$\{ {\bf x}_{1:T_1}^1, {\bf x}_{1:T_2}^2, ..., {\bf x}_{1:T_N}^N \}$. Then the posterior we're actually interested in is given by
$p({\bf z}_{1:T_1}^1, {\bf z}_{1:T_2}^2, ..., {\bf z}_{1:T_N}^N | \mathcal{D})$, i.e. we want to infer the latents for _all_ $N$ sequences. Even for small $N$ this is a very high-dimensional distribution that will require a very large number of parameters to specify. In particular if we were to directly parameterize the posterior in this form, the number of parameters required would grow (at least) linearly with $N$. One way to avoid this nasty growth with the size of the dataset is *amortization* (see the analogous discussion in [SVI Part II](svi_part_ii.ipynb)).
#### Aside: Amortization
This works as follows. Instead of introducing variational parameters for each sequence in our dataset, we're going to learn a single parametric function $f({\bf x}_{1:T})$ and work with a variational distribution that has the form $\prod_{n=1}^N q({\bf z}_{1:T_n}^n | f({\bf x}_{1:T_n}^n))$. The function $f(\cdot)$—which basically maps a given observed sequence to a set of variational parameters tailored to that sequence—will need to be sufficiently rich to capture the posterior accurately, but now we can handle large datasets without having to introduce an obscene number of variational parameters.
So our task is to construct the function $f(\cdot)$. Since in our case we need to support variable-length sequences, it's only natural that $f(\cdot)$ have an RNN in the loop. Before we look at the various component parts that make up our $f(\cdot)$ in detail, let's look at a computational graph that encodes the basic structure:
At the bottom of the figure we have our sequence of three observations. These observations will be consumed by an RNN that reads the observations from right to left and outputs three hidden states $\{ {\bf h}_1, {\bf h}_2,{\bf h}_3\}$. Note that this computation is done _before_ we sample any latent variables. Next, each of the hidden states will be fed into a `Combiner` module whose job is to output the mean and covariance of the conditional distribution $q({\bf z}_t | {\bf z}_{t-1}, {\bf x}_{t:T})$, which we take to be given by a diagonal gaussian distribution. (Just like in the model, the conditional structure of ${\bf z}_{1:T}$ in the guide is such that we sample ${\bf z}_t$ forward in time.) In addition to the RNN hidden state, the `Combiner` also takes the latent random variable from the previous time step as input, except for $t=1$, where it instead takes the trainable (variational) parameter ${\bf z}_0^{\rm{q}}$.
#### Aside: Guide Structure
Why do we set up the RNN to consume the observations from right to left? Why not left to right? With this choice our conditional distribution $q({\bf z}_t |...)$ depends on two things:
- the latent ${\bf z}_{t-1}$ from the previous time step; and
- the observations ${\bf x}_{t:T}$, i.e. the current observation together with all future observations
We are free to make other choices; all that is required is that the guide is a properly normalized distribution that plays nice with autograd. This particular choice is motivated by the dependency structure of the true posterior: see reference [1] for a detailed discussion. In brief, while we could, for example, condition on the entire sequence of observations, because of the markov structure of the model everything that we need to know about the previous observations ${\bf x}_{1:t-1}$ is encapsulated by ${\bf z}_{t-1}$. We could condition on more things, but there's no need; and doing so will probably tend to dilute the learning signal. So running the RNN from right to left is the most natural choice for this particular model.
Let's look at the component parts in detail. First, the `Combiner` module:
```python
class Combiner(nn.Module):
"""
Parameterizes q(z_t | z_{t-1}, x_{t:T}), which is the basic building block
of the guide (i.e. the variational distribution). The dependence on x_{t:T} is
through the hidden state of the RNN (see the pytorch module `rnn` below)
"""
def __init__(self, z_dim, rnn_dim):
super().__init__()
# initialize the three linear transformations used in the neural network
self.lin_z_to_hidden = nn.Linear(z_dim, rnn_dim)
self.lin_hidden_to_loc = nn.Linear(rnn_dim, z_dim)
self.lin_hidden_to_scale = nn.Linear(rnn_dim, z_dim)
# initialize the two non-linearities used in the neural network
self.tanh = nn.Tanh()
self.softplus = nn.Softplus()
def forward(self, z_t_1, h_rnn):
"""
Given the latent z at at a particular time step t-1 as well as the hidden
state of the RNN h(x_{t:T}) we return the mean and scale vectors that
parameterize the (diagonal) gaussian distribution q(z_t | z_{t-1}, x_{t:T})
"""
# combine the rnn hidden state with a transformed version of z_t_1
h_combined = 0.5 * (self.tanh(self.lin_z_to_hidden(z_t_1)) + h_rnn)
# use the combined hidden state to compute the mean used to sample z_t
loc = self.lin_hidden_to_loc(h_combined)
# use the combined hidden state to compute the scale used to sample z_t
scale = self.softplus(self.lin_hidden_to_scale(h_combined))
# return loc, scale which can be fed into Normal
return loc, scale
```
This module has the same general structure as `Emitter` and `GatedTransition` in the model. The only thing of note is that because the `Combiner` needs to consume two inputs at each time step, it transforms the inputs into a single combined hidden state `h_combined` before it computes the outputs.
Apart from the RNN, we now have all the ingredients we need to construct our guide distribution.
Happily, PyTorch has great built-in RNN modules, so we don't have much work to do here. We'll see where we instantiate the RNN later. Let's instead jump right into the definition of the stochastic function `guide()`.
```python
def guide(self, mini_batch, mini_batch_reversed, mini_batch_mask,
mini_batch_seq_lengths, annealing_factor=1.0):
# this is the number of time steps we need to process in the mini-batch
T_max = mini_batch.size(1)
# register all PyTorch (sub)modules with pyro
pyro.module("dmm", self)
# if on gpu we need the fully broadcast view of the rnn initial state
# to be in contiguous gpu memory
h_0_contig = self.h_0.expand(1, mini_batch.size(0),
self.rnn.hidden_size).contiguous()
# push the observed x's through the rnn;
# rnn_output contains the hidden state at each time step
rnn_output, _ = self.rnn(mini_batch_reversed, h_0_contig)
# reverse the time-ordering in the hidden state and un-pack it
rnn_output = poly.pad_and_reverse(rnn_output, mini_batch_seq_lengths)
# set z_prev = z_q_0 to setup the recursive conditioning in q(z_t |...)
z_prev = self.z_q_0.expand(mini_batch.size(0), self.z_q_0.size(0))
# we enclose all the sample statements in the guide in a plate.
# this marks that each datapoint is conditionally independent of the others.
with pyro.plate("z_minibatch", len(mini_batch)):
# sample the latents z one time step at a time
for t in range(1, T_max + 1):
# the next two lines assemble the distribution q(z_t | z_{t-1}, x_{t:T})
z_loc, z_scale = self.combiner(z_prev, rnn_output[:, t - 1, :])
z_dist = dist.Normal(z_loc, z_scale)
# sample z_t from the distribution z_dist
with pyro.poutine.scale(None, annealing_factor):
z_t = pyro.sample("z_%d" % t,
z_dist.mask(mini_batch_mask[:, t - 1:t])
.to_event(1))
# the latent sampled at this time step will be conditioned
# upon in the next time step so keep track of it
z_prev = z_t
```
The high-level structure of `guide()` is very similar to `model()`. First note that the model and guide take the same arguments: this is a general requirement for model/guide pairs in Pyro. As in the model, there's a call to `pyro.module` that registers all the parameters with Pyro. Also, the `for` loop has the same structure as the one in `model()`, with the difference that the guide only needs to sample latents (there are no `sample` statements with the `obs` keyword). Finally, note that the names of the latent variables in the guide exactly match those in the model. This is how Pyro knows to correctly align random variables.
The RNN logic should be familiar to PyTorch users, but let's go through it quickly. First we prepare the initial state of the RNN, `h_0`. Then we invoke the RNN via its forward call; the resulting tensor `rnn_output` contains the hidden states for the entire mini-batch. Note that because we want the RNN to consume the observations from right to left, the input to the RNN is `mini_batch_reversed`, which is a copy of `mini_batch` with all the sequences running in _reverse_ temporal order. Furthermore, `mini_batch_reversed` has been wrapped in a PyTorch `rnn.pack_padded_sequence` so that the RNN can deal with variable-length sequences. Since we do our sampling in latent space in normal temporal order, we use the helper function `pad_and_reverse` to reverse the hidden state sequences in `rnn_output`, so that we can feed the `Combiner` RNN hidden states that are correctly aligned and ordered. This helper function also unpacks the `rnn_output` so that it is no longer in the form of a PyTorch `rnn.pack_padded_sequence`.
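For readers curious what a helper like `pad_and_reverse` has to accomplish, here is a rough sketch (the actual implementation lives in the polyphonic data loader `poly`; the code below is illustrative only):
```python
import torch
import torch.nn.utils.rnn as rnn_utils

def pad_and_reverse_sketch(rnn_output, seq_lengths):
    # unpack the PackedSequence into a padded tensor of shape (batch, T_max, rnn_dim)
    rnn_output, _ = rnn_utils.pad_packed_sequence(rnn_output, batch_first=True)
    reversed_output = torch.zeros_like(rnn_output)
    for b, T in enumerate(seq_lengths):
        T = int(T)
        # flip only the valid part of each sequence back into normal temporal order
        reversed_output[b, :T, :] = torch.flip(rnn_output[b, :T, :], dims=[0])
    return reversed_output
```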
## Packaging the Model and Guide as a PyTorch Module
At this juncture, we're ready to proceed to inference. But before we do so let's quickly go over how we packaged the model and guide as a single PyTorch Module. This is generally good practice, especially for larger models.
```python
class DMM(nn.Module):
"""
This PyTorch Module encapsulates the model as well as the
variational distribution (the guide) for the Deep Markov Model
"""
def __init__(self, input_dim=88, z_dim=100, emission_dim=100,
transition_dim=200, rnn_dim=600, rnn_dropout_rate=0.0,
num_iafs=0, iaf_dim=50, use_cuda=False):
super().__init__()
# instantiate pytorch modules used in the model and guide below
self.emitter = Emitter(input_dim, z_dim, emission_dim)
self.trans = GatedTransition(z_dim, transition_dim)
self.combiner = Combiner(z_dim, rnn_dim)
self.rnn = nn.RNN(input_size=input_dim, hidden_size=rnn_dim,
nonlinearity='relu', batch_first=True,
bidirectional=False, num_layers=1, dropout=rnn_dropout_rate)
        # define (trainable) parameters z_0 and z_q_0 that help define
# the probability distributions p(z_1) and q(z_1)
# (since for t = 1 there are no previous latents to condition on)
self.z_0 = nn.Parameter(torch.zeros(z_dim))
self.z_q_0 = nn.Parameter(torch.zeros(z_dim))
# define a (trainable) parameter for the initial hidden state of the rnn
self.h_0 = nn.Parameter(torch.zeros(1, 1, rnn_dim))
self.use_cuda = use_cuda
# if on gpu cuda-ize all pytorch (sub)modules
if use_cuda:
self.cuda()
# the model p(x_{1:T} | z_{1:T}) p(z_{1:T})
def model(...):
# ... as above ...
# the guide q(z_{1:T} | x_{1:T}) (i.e. the variational distribution)
def guide(...):
# ... as above ...
```
Since we've already gone over `model` and `guide`, our focus here is on the constructor. First we instantiate the four PyTorch modules that we use in our model and guide. On the model-side: `Emitter` and `GatedTransition`. On the guide-side: `Combiner` and the RNN.
Next we define PyTorch `Parameter`s for the initial state of the RNN as well as `z_0` and `z_q_0`, which are fed into `self.trans` and `self.combiner`, respectively, in lieu of the non-existent random variable $\bf z_0$.
The important point to make here is that all of these `Module`s and `Parameter`s are attributes of `DMM` (which itself inherits from `nn.Module`). This has the consequence that they are all automatically registered as belonging to the module. So, for example, when we call `parameters()` on an instance of `DMM`, PyTorch will know to return all the relevant parameters. It also means that when we invoke `pyro.module("dmm", self)` in `model()` and `guide()`, all the parameters of both the model and guide will be registered with Pyro. Finally, it means that if we're running on a GPU, the call to `cuda()` will move all the parameters into GPU memory.
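As a quick illustration of what this registration buys us (this snippet is not part of the tutorial code), we can count every trainable parameter in both the model and the guide with a single call:
```python
dmm = DMM()  # default constructor arguments
num_params = sum(p.numel() for p in dmm.parameters())
print("total number of trainable parameters: %d" % num_params)
```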
## Stochastic Variational Inference
With our model and guide at hand, we're finally ready to do inference. Before we look at the full logic that is involved in a complete experimental script, let's first see how to take a single gradient step. First we instantiate an instance of `DMM` and setup an optimizer.
```python
# instantiate the dmm
dmm = DMM(input_dim, z_dim, emission_dim, transition_dim, rnn_dim,
args.rnn_dropout_rate, args.num_iafs, args.iaf_dim, args.cuda)
# setup optimizer
adam_params = {"lr": args.learning_rate, "betas": (args.beta1, args.beta2),
"clip_norm": args.clip_norm, "lrd": args.lr_decay,
"weight_decay": args.weight_decay}
optimizer = ClippedAdam(adam_params)
```
Here we're using an implementation of the Adam optimizer that includes gradient clipping. This mitigates some of the problems that can occur when training recurrent neural networks (e.g. vanishing/exploding gradients). Next we setup the inference algorithm.
```python
# setup inference algorithm
svi = SVI(dmm.model, dmm.guide, optimizer, Trace_ELBO())
```
The inference algorithm `SVI` uses a stochastic gradient estimator to take gradient steps on an objective function, which in this case is given by the ELBO (the evidence lower bound). As the name indicates, the ELBO is a lower bound to the log evidence: $\log p(\mathcal{D})$. As we take gradient steps that maximize the ELBO, we move our guide $q(\cdot)$ closer to the exact posterior.
The argument `Trace_ELBO()` constructs a version of the gradient estimator that doesn't need access to the dependency structure of the model and guide. Since all the latent variables in our model are reparameterizable, this is the appropriate gradient estimator for our use case. (It's also the default option.)
Assuming we've prepared the various arguments of `dmm.model` and `dmm.guide`, taking a gradient step is accomplished by calling
```python
svi.step(mini_batch, ...)
```
That's all there is to it!
Well, not quite. This will be the main step in our inference algorithm, but we still need to implement a complete training loop with preparation of mini-batches, evaluation, and so on. This sort of logic will be familiar to any deep learner but let's see how it looks in PyTorch/Pyro.
## The Black Magic of Optimization
Actually, before we get to the guts of training, let's take a moment and think a bit about the optimization problem we've setup. We've traded Bayesian inference in a non-linear model with a high-dimensional latent space—a hard problem—for a particular optimization problem. Let's not kid ourselves, this optimization problem is pretty hard too. Why? Let's go through some of the reasons:
- the space of parameters we're optimizing over is very high-dimensional (it includes all the weights in all the neural networks we've defined).
- our objective function (the ELBO) cannot be computed analytically, so our parameter updates will follow noisy Monte Carlo gradient estimates
- data-subsampling serves as an additional source of stochasticity: even if we wanted to, we couldn't in general take gradient steps on the ELBO defined over the whole dataset (actually in our particular case the dataset isn't so large, but let's ignore that).
- given all the neural networks and non-linearities we have in the loop, our (stochastic) loss surface is highly non-trivial
The upshot is that if we're going to find reasonable (local) optima of the ELBO, we better take some care in deciding how to do optimization. This isn't the time or place to discuss all the different strategies that one might adopt, but it's important to emphasize how decisive a good or bad choice in learning hyperparameters (the learning rate, the mini-batch size, etc.) can be.
Before we move on, let's discuss one particular optimization strategy that we're making use of in greater detail: KL annealing. In our case the ELBO is the sum of two terms: an expected log likelihood term (which measures model fit) and a sum of KL divergence terms (which serve to regularize the approximate posterior):
$\rm{ELBO} = \mathbb{E}_{q({\bf z}_{1:T})}[\log p({\bf x}_{1:T}|{\bf z}_{1:T})] - \mathbb{E}_{q({\bf z}_{1:T})}[ \log q({\bf z}_{1:T}) - \log p({\bf z}_{1:T})]$
This latter term can be a quite strong regularizer, and in early stages of training it has a tendency to favor regions of the loss surface that contain lots of bad local optima. One strategy to avoid these bad local optima, which was also adopted in reference [1], is to anneal the KL divergence terms by multiplying them by a scalar `annealing_factor` that ranges between zero and one:
$\mathbb{E}_{q({\bf z}_{1:T})}[\log p({\bf x}_{1:T}|{\bf z}_{1:T})] - \rm{annealing\_factor} \times \mathbb{E}_{q({\bf z}_{1:T})}[ \log q({\bf z}_{1:T}) - \log p({\bf z}_{1:T})]$
The idea is that during the course of training the `annealing_factor` rises slowly from its initial value at/near zero to its final value at 1.0. The annealing schedule is arbitrary; below we will use a simple linear schedule. In terms of code, to scale the log likelihoods by the appropriate annealing factor we enclose each of the latent sample statements in the model and guide with a `pyro.poutine.scale` context.
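As a small sketch of the kind of linear schedule we have in mind (the full version, which also accounts for the mini-batch index, appears in `process_minibatch` below; the minimum value of 0.2 here is just an illustrative choice):
```python
def linear_annealing_factor(step, total_annealing_steps, min_af=0.2):
    # rises linearly from min_af to 1.0 over total_annealing_steps and then stays at 1.0
    return min(1.0, min_af + (1.0 - min_af) * step / float(total_annealing_steps))
```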
Finally, we should mention that the main difference between the DMM implementation described here and the one used in reference [1] is that they take advantage of the analytic formula for the KL divergence between two gaussian distributions (whereas we rely on Monte Carlo estimates). This leads to lower variance gradient estimates of the ELBO, which makes training a bit easier. We can still train the model without making this analytic substitution, but training probably takes somewhat longer because of the higher variance. Support for analytic KL divergences in Pyro is something we plan to add in the future.
## Data Loading, Training, and Evaluation
First we load the data. There are 229 sequences in the training dataset, each with an average length of ~60 time steps.
```python
import pickle
import numpy as np

jsb_file_loc = "./data/jsb_processed.pkl"
data = pickle.load(open(jsb_file_loc, "rb"))
training_seq_lengths = data['train']['sequence_lengths']
training_data_sequences = data['train']['sequences']
test_seq_lengths = data['test']['sequence_lengths']
test_data_sequences = data['test']['sequences']
val_seq_lengths = data['valid']['sequence_lengths']
val_data_sequences = data['valid']['sequences']
N_train_data = len(training_seq_lengths)
N_train_time_slices = np.sum(training_seq_lengths)
N_mini_batches = int(N_train_data / args.mini_batch_size +
int(N_train_data % args.mini_batch_size > 0))
```
For this dataset we will typically use a `mini_batch_size` of 20, so that there will be 12 mini-batches per epoch. Next we define the function `process_minibatch` which prepares a mini-batch for training and takes a gradient step:
```python
def process_minibatch(epoch, which_mini_batch, shuffled_indices):
if args.annealing_epochs > 0 and epoch < args.annealing_epochs:
# compute the KL annealing factor appropriate
# for the current mini-batch in the current epoch
min_af = args.minimum_annealing_factor
annealing_factor = min_af + (1.0 - min_af) * \
(float(which_mini_batch + epoch * N_mini_batches + 1) /
float(args.annealing_epochs * N_mini_batches))
else:
# by default the KL annealing factor is unity
annealing_factor = 1.0
# compute which sequences in the training set we should grab
mini_batch_start = (which_mini_batch * args.mini_batch_size)
mini_batch_end = np.min([(which_mini_batch + 1) * args.mini_batch_size,
N_train_data])
mini_batch_indices = shuffled_indices[mini_batch_start:mini_batch_end]
# grab the fully prepped mini-batch using the helper function in the data loader
mini_batch, mini_batch_reversed, mini_batch_mask, mini_batch_seq_lengths \
= poly.get_mini_batch(mini_batch_indices, training_data_sequences,
training_seq_lengths, cuda=args.cuda)
# do an actual gradient step
loss = svi.step(mini_batch, mini_batch_reversed, mini_batch_mask,
mini_batch_seq_lengths, annealing_factor)
# keep track of the training loss
return loss
```
We first compute the KL annealing factor appropriate to the mini-batch (according to a linear schedule as described earlier). We then compute the mini-batch indices, which we pass to the helper function `get_mini_batch`. This helper function takes care of a number of different things:
- it sorts each mini-batch by sequence length
- it calls another helper function to get a copy of the mini-batch in reversed temporal order
- it packs each reversed mini-batch in a `rnn.pack_padded_sequence`, which is then ready to be ingested by the RNN
- it cuda-izes all tensors if we're on a GPU
- it calls another helper function to get an appropriate 0/1 mask for the mini-batch
We then pipe all the return values of `get_mini_batch()` into `svi.step(...)`. Recall that these arguments will be further piped to `model(...)` and `guide(...)` during construction of the gradient estimator inside `Trace_ELBO`. Finally, we return a float which is a noisy estimate of the loss for that mini-batch.
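For readers less familiar with packing, here is a tiny standalone illustration of `rnn.pack_padded_sequence` (the toy shapes below are hypothetical and unrelated to the actual data loader):
```python
import torch
import torch.nn.utils.rnn as rnn_utils

padded = torch.zeros(2, 4, 3)   # (batch, T_max, features), zero-padded
lengths = [4, 2]                # true sequence lengths, sorted in decreasing order
packed = rnn_utils.pack_padded_sequence(padded, lengths, batch_first=True)
print(packed.data.shape)        # torch.Size([6, 3]): only the valid time steps are kept
```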
We now have all the ingredients required for the main bit of our training loop:
```python
times = [time.time()]
for epoch in range(args.num_epochs):
# accumulator for our estimate of the negative log likelihood
# (or rather -elbo) for this epoch
epoch_nll = 0.0
# prepare mini-batch subsampling indices for this epoch
shuffled_indices = np.arange(N_train_data)
np.random.shuffle(shuffled_indices)
# process each mini-batch; this is where we take gradient steps
for which_mini_batch in range(N_mini_batches):
epoch_nll += process_minibatch(epoch, which_mini_batch, shuffled_indices)
# report training diagnostics
times.append(time.time())
epoch_time = times[-1] - times[-2]
log("[training epoch %04d] %.4f \t\t\t\t(dt = %.3f sec)" %
(epoch, epoch_nll / N_train_time_slices, epoch_time))
```
At the beginning of each epoch we shuffle the indices pointing to the training data. We then process each mini-batch until we've gone through the entire training set, accumulating the training loss as we go. Finally we report some diagnostic info. Note that we normalize the loss by the total number of time slices in the training set (this allows us to compare to reference [1]).
## Evaluation
This training loop is still missing any kind of evaluation diagnostics. Let's fix that. First we need to prepare the validation and test data for evaluation. Since the validation and test datasets are small enough that we can easily fit them into memory, we're going to process each dataset batchwise (i.e. we will not be breaking up the dataset into mini-batches). [_Aside: at this point the reader may ask why we don't do the same thing for the training set. The reason is that additional stochasticity due to data-subsampling is often advantageous during optimization: in particular it can help us avoid local optima._] And, in fact, in order to get a less noisy estimate of the ELBO, we're going to compute a multi-sample estimate. The simplest way to do this would be as follows:
```python
val_loss = svi.evaluate_loss(val_batch, ..., num_particles=5)
```
This, however, would involve an explicit `for` loop with five iterations. For our particular model, we can do better and vectorize the whole computation. The only way to do this currently in Pyro is to explicitly replicate the data `n_eval_samples` many times. This is the strategy we follow:
```python
# package repeated copies of val/test data for faster evaluation
# (i.e. set us up for vectorization)
def rep(x):
return np.repeat(x, n_eval_samples, axis=0)
# get the validation/test data ready for the dmm: pack into sequences, etc.
val_seq_lengths = rep(val_seq_lengths)
test_seq_lengths = rep(test_seq_lengths)
val_batch, val_batch_reversed, val_batch_mask, val_seq_lengths = poly.get_mini_batch(
np.arange(n_eval_samples * val_data_sequences.shape[0]), rep(val_data_sequences),
val_seq_lengths, cuda=args.cuda)
test_batch, test_batch_reversed, test_batch_mask, test_seq_lengths = \
poly.get_mini_batch(np.arange(n_eval_samples * test_data_sequences.shape[0]),
rep(test_data_sequences),
test_seq_lengths, cuda=args.cuda)
```
With the test and validation data now fully prepped, we define the helper function that does the evaluation:
```python
def do_evaluation():
# put the RNN into evaluation mode (i.e. turn off drop-out if applicable)
dmm.rnn.eval()
# compute the validation and test loss
val_nll = svi.evaluate_loss(val_batch, val_batch_reversed, val_batch_mask,
val_seq_lengths) / np.sum(val_seq_lengths)
test_nll = svi.evaluate_loss(test_batch, test_batch_reversed, test_batch_mask,
test_seq_lengths) / np.sum(test_seq_lengths)
# put the RNN back into training mode (i.e. turn on drop-out if applicable)
dmm.rnn.train()
return val_nll, test_nll
```
We simply call the `evaluate_loss` method of `svi`, which takes the same arguments as `step()`, namely the arguments that are passed to the model and guide. Note that we have to put the RNN into and out of evaluation mode to account for dropout. We can now stick `do_evaluation()` into the training loop; see [the source code](https://github.com/pyro-ppl/pyro/blob/dev/examples/dmm/dmm.py) for details.
## Results
Let's make sure that our implementation gives reasonable results. We can use the numbers reported in reference [1] as a sanity check. For the same dataset and a similar model/guide setup (dimension of the latent space, number of hidden units in the RNN, etc.) they report a normalized negative log likelihood (NLL) of `6.93` on the test set (lower is better)$^{\S}$. This is to be compared to our result of `6.87`. These numbers are very much in the same ballpark, which is reassuring. It seems that, at least for this dataset, not using analytic expressions for the KL divergences doesn't degrade the quality of the learned model (although, as discussed above, the training probably takes somewhat longer).
In the figure we show how the test NLL progresses during training for a single sample run (one with a rather conservative learning rate). Most of the progress is during the first 3000 epochs or so, with some marginal gains if we let training go on for longer. On a GeForce GTX 1080, 5000 epochs takes about 20 hours.
| `num_iafs` | test NLL |
|---|---|
| `0` | `6.87` |
| `1` | `6.82` |
| `2` | `6.80` |
Finally, we also report results for guides with normalizing flows in the mix (details to be found in the next section).
${ \S\;}$ Actually, they seem to report two numbers—6.93 and 7.03—for the same model/guide and it's not entirely clear how the two reported numbers are different.
## Bells, whistles, and other improvements
### Inverse Autoregressive Flows
One of the great things about a probabilistic programming language is that it encourages modularity. Let's showcase an example in the context of the DMM. We're going to make our variational distribution richer by adding normalizing flows to the mix (see reference [2] for a discussion). **This will only cost us four additional lines of code!**
First, in the `DMM` constructor we add
```python
iafs = [AffineAutoregressive(AutoRegressiveNN(z_dim, [iaf_dim])) for _ in range(num_iafs)]
self.iafs = nn.ModuleList(iafs)
```
This instantiates `num_iafs` many bijective transforms of the `AffineAutoregressive` type (see references [3,4]); each normalizing flow will have `iaf_dim` many hidden units. We then bundle the normalizing flows in a `nn.ModuleList`; this is just the PyTorchy way to package a list of `nn.Module`s. Next, in the guide we add the lines
```python
if len(self.iafs) > 0:
z_dist = TransformedDistribution(z_dist, self.iafs)
```
Here we're taking the base distribution `z_dist`, which in our case is a conditional gaussian distribution, and using the `TransformedDistribution` construct we transform it into a non-gaussian distribution that is, by construction, richer than the base distribution. Voila!
### Checkpointing
If we want to recover from a catastrophic failure in our training loop, there are two kinds of state we need to keep track of. The first is the various parameters of the model and guide. The second is the state of the optimizers (e.g. in Adam this will include the running average of recent gradient estimates for each parameter).
In Pyro, the parameters can all be found in the `ParamStore`. However, PyTorch also keeps track of them for us via the `parameters()` method of `nn.Module`. So one simple way we can save the parameters of the model and guide is to make use of the `state_dict()` method of `dmm` in conjunction with `torch.save()`; see below. In the case that we have `AffineAutoregressive`'s in the loop, this is in fact the only option at our disposal. This is because the `AffineAutoregressive` module contains what are called 'persistent buffers' in PyTorch parlance. These are things that carry state but are not `Parameter`s. The `state_dict()` and `load_state_dict()` methods of `nn.Module` know how to deal with buffers correctly.
To save the state of the optimizers, we have to use functionality inside of `pyro.optim.PyroOptim`. Recall that the typical user never interacts directly with PyTorch `Optimizers` when using Pyro; since parameters can be created dynamically in an arbitrary probabilistic program, Pyro needs to manage `Optimizers` for us. In our case saving the optimizer state will be as easy as calling `optimizer.save()`. The loading logic is entirely analogous. So our entire logic for saving and loading checkpoints only takes a few lines:
```python
# saves the model and optimizer states to disk
def save_checkpoint():
log("saving model to %s..." % args.save_model)
torch.save(dmm.state_dict(), args.save_model)
log("saving optimizer states to %s..." % args.save_opt)
optimizer.save(args.save_opt)
log("done saving model and optimizer checkpoints to disk.")
# loads the model and optimizer states from disk
def load_checkpoint():
assert exists(args.load_opt) and exists(args.load_model), \
"--load-model and/or --load-opt misspecified"
log("loading model from %s..." % args.load_model)
dmm.load_state_dict(torch.load(args.load_model))
log("loading optimizer states from %s..." % args.load_opt)
optimizer.load(args.load_opt)
log("done loading model and optimizer states.")
```
## Some final comments
A deep markov model is a relatively complex model. Now that we've taken the effort to implement a version of the deep markov model tailored to the polyphonic music dataset, we should ask ourselves what else we can do. What if we're handed a different sequential dataset? Do we have to start all over?
Not at all! The beauty of probabilistic programming is that it enables—and encourages—modular approaches to modeling and inference. Adapting our polyphonic music model to a dataset with continuous observations is as simple as changing the observation likelihood. The vast majority of the code could be taken over unchanged. This means that with a little bit of extra work, the code in this tutorial could be repurposed to enable a huge variety of different models.
See the complete code on [Github](https://github.com/pyro-ppl/pyro/blob/dev/examples/dmm/dmm.py).
## References
[1] `Structured Inference Networks for Nonlinear State Space Models`,<br />
Rahul G. Krishnan, Uri Shalit, David Sontag
[2] `Variational Inference with Normalizing Flows`,
<br />
Danilo Jimenez Rezende, Shakir Mohamed
[3] `Improving Variational Inference with Inverse Autoregressive Flow`,
<br />
Diederik P. Kingma, Tim Salimans, Rafal Jozefowicz, Xi Chen, Ilya Sutskever, Max Welling
[4] `MADE: Masked Autoencoder for Distribution Estimation`,
<br />
Mathieu Germain, Karol Gregor, Iain Murray, Hugo Larochelle
[5] `Modeling Temporal Dependencies in High-Dimensional Sequences:`
<br />
`Application to Polyphonic Music Generation and Transcription`,
<br />
Boulanger-Lewandowski, N., Bengio, Y. and Vincent, P.
# Shor's algorithm, fully classical implementation
```
%matplotlib inline
import random
import math
import itertools
def period_finding_classical(a,N):
# This is an inefficient classical algorithm to find the period of f(x)=a^x (mod N)
# f(0) = a**0 (mod N) = 1, so we find the first x greater than 0 for which f(x) is also 1
for r in itertools.count(start=1):
if (a**r) % N == 1:
return r
def shors_algorithm_classical(N):
assert(N>0)
assert(int(N)==N)
while True:
a=random.randint(0,N-1)
g=math.gcd(a,N)
if g!=1 or N==1:
first_factor=g
second_factor=int(N/g)
return first_factor,second_factor
else:
r=period_finding_classical(a,N)
if r % 2 != 0:
continue
elif a**(int(r/2)) % N == -1 % N:
continue
else:
first_factor=math.gcd(a**int(r/2)+1,N)
second_factor=math.gcd(a**int(r/2)-1,N)
if first_factor==N or second_factor==N:
continue
return first_factor,second_factor
# Testing it out. Note: because of the probabilistic nature of the algorithm, different factors and different orderings are possible
shors_algorithm_classical(15)
shors_algorithm_classical(91)
```
# Shor's algorithm, working on a quantum implementation
## The following code will help give intuition for how to design a quantum circuit to do modular multiplication
```
def U_a_modN(a,N,binary=False):
"""
a and N are decimal
This algorithm returns U_a where:
U_a is a modular multiplication operator map from |x> to |ax mod N>
If binary is set to True, the mapping is given in binary instead of in decimal notation.
"""
l=[]
for i in range(1,N):
l+=[a*i%N]
res=set()
for i in range(1,N):
mp=[i]
end=i
nxt=i-1
while l[nxt]!=end:
mp+=[l[nxt]]
nxt=l[nxt]-1
res.add(tuple(mp))
final_res=[]
for item in res:
dup=False
for final_item in final_res:
if set(item) == set(final_item):
dup=True
if not dup:
final_res+=[item]
if not binary:
return final_res
else:
final_res_bin=[]
for mapping in final_res:
final_res_bin+=[tuple(['{0:06b}'.format(decimal) for decimal in mapping])]
return final_res_bin
print(U_a_modN(8,35))
print(U_a_modN(8,35,binary=True))
```
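Before moving on, it may also help to inspect the a=2, N=15 case, since that is exactly the permutation the next circuit implements. This extra check is not part of the original flow; it simply reuses the function defined above:
```
print(U_a_modN(2,15))
print(U_a_modN(2,15,binary=True))
```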
# This code implements modular multiplication by 2 mod 15
```
import qiskit
import matplotlib
from qiskit import QuantumCircuit, ClassicalRegister, QuantumRegister, QISKitError
from qiskit.tools.visualization import circuit_drawer
from qiskit.extensions.standard import cx, cswap
from qiskit import Aer
from qiskit import IBMQ
# Authenticate an account and add for use during this session. Replace string
# argument with your private token.
IBMQ.enable_account("INSERT_YOUR_API_TOKEN_HERE")
def mult_2mod15_quantum(qr,qc):
# Swap 0th qubit and 3rd qubit
qc.cx(qr[0],qr[3])
qc.cx(qr[3],qr[0])
qc.cx(qr[0],qr[3])
# Swap 0th qubit and 1st qubit
qc.cx(qr[1],qr[0])
qc.cx(qr[0],qr[1])
qc.cx(qr[1],qr[0])
# Swap 1st qubit and 2nd qubit
qc.cx(qr[1],qr[2])
qc.cx(qr[2],qr[1])
qc.cx(qr[1],qr[2])
def mult_2mod15_quantum_test(x):
qr = QuantumRegister(4)
cr = ClassicalRegister(4)
qc = QuantumCircuit(qr,cr)
# input
x_bin='{0:04b}'.format(x)
for i,b in enumerate(x_bin):
if int(b):
qc.x(qr[i])
# run circuit
mult_2mod15_quantum(qr,qc)
# measure results
for i in range(4):
qc.measure(qr[i],cr[i])
import time
from qiskit.tools.visualization import plot_histogram
backend=Aer.get_backend('qasm_simulator')
shots=50
    job_exp = qiskit.execute(qc, backend=backend, shots=shots)
result = job_exp.result()
final=result.get_counts(qc)
result_in_order=list(final.keys())[0]
dec=0
for i,b in enumerate(result_in_order):
if int(b):
dec+=2**i
return (x,dec)
def mult_2mod15_classical_test(x):
return (x,2*x%15)
# testing!
for i in range(1,15):
quantum=mult_2mod15_quantum_test(i)
classical=mult_2mod15_classical_test(i)
if quantum!=classical:
print(quantum,classical)
```
## This code makes the previous an operation controlled by a control qubit
```
def controlled_mult_2mod15_quantum(qr,qc,control_qubit):
"""
Controlled quantum circuit for multiplication by 2 mod 15.
    Note: the control qubit should have an index greater than 3,
and qubits 0,1,2,3 are reserved for circuit operations
"""
# Swap 0th qubit and 3rd qubit
qc.cswap(control_qubit,qr[0],qr[3])
# Swap 0th qubit and 1st qubit
qc.cswap(control_qubit,qr[1],qr[0])
# Swap 1st qubit and 2nd qubit
qc.cswap(control_qubit,qr[1],qr[2])
```
# This code performs the entire Shor's algorithm subroutine for multiplication by 2 mod 15
```
import math
def shors_subroutine_period_2mod15(qr,qc,cr):
qc.x(qr[0])
qc.h(qr[4])
qc.h(qr[4])
qc.measure(qr[4],cr[0])
qc.h(qr[5])
qc.cx(qr[5],qr[0])
qc.cx(qr[5],qr[2])
if cr[0] == 1:
qc.u1(math.pi/2,qr[4]) #pi/2 is 90 degrees in radians
qc.h(qr[5])
qc.measure(qr[5],cr[1])
qc.h(qr[6])
controlled_mult_2mod15_quantum(qr,qc,qr[6])
if cr[1] == 1:
qc.u1(math.pi/2,qr[6]) # pi/2 is 90 degrees in radians
if cr[0] == 1:
qc.u1(math.pi/4,qr[6]) #pi/4 is 45 degrees in radians
qc.h(qr[6])
qc.measure(qr[6],cr[2])
```
# This code will help us read out the results from our quantum Shor's subroutine. First, implementing the code to compute the period from the output of the quantum computation:
```
# see https://arxiv.org/pdf/quant-ph/0010034.pdf for more details (convergence relations on page 11)
import math
def continued_fraction(xi,max_steps=100): # max_steps is a cutoff for the algorithm, for debugging
"""
This function computes the continued fraction expansion of input xi
    per the recurrence relations on page 11 of https://arxiv.org/pdf/quant-ph/0010034.pdf
"""
#a and xi initial
all_as=[]
all_xis=[]
a_0=math.floor(xi)
xi_0=xi-a_0
all_as+=[a_0]
all_xis+=[xi_0]
# p and q initial
all_ps=[]
all_qs=[]
p_0=all_as[0]
q_0=1
all_ps+=[p_0]
all_qs+=[q_0]
xi_n=xi_0
while not numpy.isclose(xi_n,0,atol=1e-7):
if len(all_as)>=max_steps:
print("Warning: algorithm did not converge within max_steps %d steps, try increasing max_steps"%max_steps)
break
# computing a and xi
a_nplus1=math.floor(1/xi_n)
xi_nplus1=1/xi_n-a_nplus1
all_as+=[a_nplus1]
all_xis+=[xi_nplus1]
xi_n=xi_nplus1
# computing p and q
n=len(all_as)-1
if n==1:
p_1=all_as[1]*all_as[0]+1
q_1=all_as[1]
all_ps+=[p_1]
all_qs+=[q_1]
else:
p_n=all_as[n]*all_ps[n-1]+all_ps[n-2]
q_n=all_as[n]*all_qs[n-1]+all_qs[n-2]
all_ps+=[p_n]
all_qs+=[q_n]
return all_ps,all_qs,all_as,all_xis
import numpy
def test_continued_fraction():
"""
Testing the continued fraction see https://arxiv.org/pdf/quant-ph/0010034.pdf, step 2.5 chart page 20
NOTE: I believe there is a mistake in this chart at the last row, and that n should range as in my code below
their chart is missing one line. Please contact me if you find differently!
"""
xi=13453/16384
all_ps,all_qs,all_as,all_xis=continued_fraction(xi)
## step 2.5 chart in https://arxiv.org/pdf/quant-ph/0010034.pdf page 20
#n_13453_16384=[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14]
#a_n_13453_16384=[0,1,4,1,1,2,3,1,1,3,1,1,1,1,3]
#p_n_13453_16384=[0,1,4,5,9,23,78,101,179,638,817,1455,2272,3727,13453]
#q_n_13453_16384=[1,1,5,6,11,28,95,123,218,777,995,1772,2767,4539,16384]
## what I find instead:
n_13453_16384=[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15]
a_n_13453_16384=[0,1,4,1,1,2,3,1,1,3,1,1,1,1,2,1]
p_n_13453_16384=[0,1,4,5,9,23,78,101,179,638,817,1455,2272,3727,9726,13453]
q_n_13453_16384=[1,1,5,6,11,28,95,123,218,777,995,1772,2767,4539,11845,16384]
for tup in [("ns",range(len(all_ps)),range(len(n_13453_16384))),
("as",all_as,a_n_13453_16384),
("ps",all_ps,p_n_13453_16384),
("qs",all_qs,q_n_13453_16384),
]:
if not numpy.array_equal(tup[1],tup[2]):
print(tup[0])
print("act:",tup[1])
print("exp:",tup[2])
print()
from IPython.display import display, Math
def pretty_print_continued_fraction(results,raw_latex=False):
all_ps,all_qs,all_as,all_xis=results
for i,vals in enumerate(zip(all_ps,all_qs,all_as,all_xis)):
p,q,a,xi=vals
if raw_latex:
print(r'\frac{p_%d}{q_%d}=\frac{%d}{%d}'%(i,i,p,q))
else:
display(Math(r'$\frac{p_%d}{q_%d}=\frac{%d}{%d}$'%(i,i,p,q)))
test_continued_fraction()
#pretty_print_continued_fraction(continued_fraction(5/8),raw_latex=True)
#pretty_print_continued_fraction(continued_fraction(0/8))
pretty_print_continued_fraction(continued_fraction(6/8))
```
# Next we will integrate the check for whether we have found the period into the continued fraction code, so that we can stop computing the continued fraction as soon as we've found the period
```
import math
def period_from_quantum_measurement(quantum_measurement,
number_qubits,
a_shor,
N_shor,
max_steps=100): # stop_after is cutoff for algorithm, for debugging
"""
This function computes the continued fraction expansion of input xi
    per the recurrence relations on page 11 of https://arxiv.org/pdf/quant-ph/0010034.pdf
a_shor is the random number chosen as part of Shor's algorithm
N_shor is the number Shor's algorithm is trying to factor
"""
xi=quantum_measurement/2**number_qubits
#a and xi initial
all_as=[]
all_xis=[]
a_0=math.floor(xi)
xi_0=xi-a_0
all_as+=[a_0]
all_xis+=[xi_0]
# p and q initial
all_ps=[]
all_qs=[]
p_0=all_as[0]
q_0=1
all_ps+=[p_0]
all_qs+=[q_0]
xi_n=xi_0
while not numpy.isclose(xi_n,0,atol=1e-7):
if len(all_as)>=max_steps:
print("Warning: algorithm did not converge within max_steps %d steps, try increasing max_steps"%max_steps)
break
# computing a and xi
a_nplus1=math.floor(1/xi_n)
xi_nplus1=1/xi_n-a_nplus1
all_as+=[a_nplus1]
all_xis+=[xi_nplus1]
xi_n=xi_nplus1
# computing p and q
n=len(all_as)-1
if n==1:
p_1=all_as[1]*all_as[0]+1
q_1=all_as[1]
all_ps+=[p_1]
all_qs+=[q_1]
else:
p_n=all_as[n]*all_ps[n-1]+all_ps[n-2]
q_n=all_as[n]*all_qs[n-1]+all_qs[n-2]
all_ps+=[p_n]
all_qs+=[q_n]
# check the q to see if it is our answer (note with this we skip the first q, as a trivial case)
if a_shor**all_qs[-1]%N_shor == 1 % N_shor:
return all_qs[-1]
period_from_quantum_measurement(13453,14,3,91) #should return, for example 6 per page 20 of https://arxiv.org/pdf/quant-ph/0010034.pdf
# Testing this:
import qiskit
from qiskit import QuantumCircuit, ClassicalRegister, QuantumRegister
def binary_string_to_decimal(s):
    dec=0
    # the bit's position (not its value) determines the power of 2 it contributes
    for pos,bit in enumerate(s[::-1]):
        if int(bit):
            dec+=2**pos
    return dec
def run_shors_subroutine_period2_mod15():
qr = QuantumRegister(7)
cr = ClassicalRegister(3)
qc = QuantumCircuit(qr,cr)
    # initialize x to be a superposition of all possible r qubit values
#for i in range(4):
# qc.h(qr[i])
# run circuit (which includes measurement steps)
shors_subroutine_period_2mod15(qr,qc,cr)
import time
from qiskit.tools.visualization import plot_histogram
backend=Aer.get_backend('qasm_simulator')
job_exp = qiskit.execute(qc, backend=backend,shots=1)
result = job_exp.result()
final=result.get_counts(qc)
# convert final result to decimal
measurement=binary_string_to_decimal(list(final.keys())[0])
period_r=period_from_quantum_measurement(measurement,3,2,15)
return period_r
print(run_shors_subroutine_period2_mod15())
```
# The last thing to do will be to implement the full Shor's algorithm and check if the r is correct by plugging it in, getting factors and checking results. If not, rerun the algorithm.
```
def period_finding_quantum(a,N):
# for the sake of example we will not implement this algorithm in full generality
# rather, we will create an example with one specific a and one specific N
    # extension work could be done to implement the algorithm for general a and N
if a==2 and N==15:
return run_shors_subroutine_period2_mod15()
else:
raise Exception("Not implemented for N=%d, a=%d" % (N,a))
def shors_algorithm_quantum(N,fixed_a=None):
assert(N>0)
assert(int(N)==N)
while True:
if not fixed_a:
a=random.randint(0,N-1)
else:
a=fixed_a
g=math.gcd(a,N)
if g!=1 or N==1:
first_factor=g
second_factor=int(N/g)
return first_factor,second_factor
else:
r=period_finding_quantum(a,N)
if not r:
continue
if r % 2 != 0:
continue
elif a**(int(r/2)) % N == -1 % N:
continue
else:
first_factor=math.gcd(a**int(r/2)+1,N)
second_factor=math.gcd(a**int(r/2)-1,N)
if first_factor==N or second_factor==N:
continue
if first_factor*second_factor!=N:
# checking our work
continue
return first_factor,second_factor
# Here's our final result
shors_algorithm_quantum(15,fixed_a=2)
# Now trying it out to see how the algorithm would function if we let it choose a given random a:
for a in range(15):
# Here's the result for a given a:
try:
print("randomly chosen a=%d would result in %s"%(a,shors_algorithm_quantum(15,fixed_a=a)))
except:
print("FINISH IMPLEMENTING algorithm doesn't work with a randomly chosen a=%d at this stage"%a)
```
**Principal Component Analysis (PCA)** is widely used in Machine Learning pipelines as a means to compress data or help visualization. This notebook aims to walk through the basic idea of the PCA and build the algorithm from scratch in Python.
Before diving directly into the PCA, let's first talk about several important concepts - the **"eigenvectors & eigenvalues"** and **"Singular Value Decomposition (SVD)"**.
An **eigenvector** of a square matrix is a column vector that satisfies:
$$Av=\lambda v$$
Where A is an $[n\times n]$ square matrix, v is an $[n\times 1]$ **eigenvector**, and $\lambda$ is a scalar value which is also known as the **eigenvalue**.
If A is both a square and symmetric matrix (like a typical variance-covariance matrix), then we can write A as:
$$A=U\Sigma U^T$$
Here the columns of matrix U are the eigenvectors of matrix A, and $\Sigma$ is a diagonal matrix containing the corresponding eigenvalues.
This is also a special case of the well-known theorem **"Singular Value Decomposition" (SVD)**, where a rectangular matrix M can be expressed as:
$$M=U\Sigma V^T$$
#### With SVD, we can calculate the eigenvectors and eigenvalues of a square & symmetric matrix. This will be the key to solving the PCA.
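As a quick numerical sanity check (illustrative only), we can verify with numpy that the singular values of a symmetric positive semi-definite matrix coincide with its eigenvalues:
```
import numpy as np
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])            # a symmetric matrix
eig_vals, eig_vecs = np.linalg.eigh(A)  # eigen-decomposition
U, S, Vt = np.linalg.svd(A)             # singular value decomposition
print(eig_vals)  # [1. 3.]
print(S)         # [3. 1.] -- the same values, sorted in decreasing order
```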
The goal of the PCA is to find a lower-dimensional surface onto which to project the data so as to maximize the total variance of the projection, or in other words, to minimize the projection error. The entire algorithm can be summarized as the following:
1) Given a data matrix **$X$** with **$m$** rows (number of records) and **$n$** columns (number of dimensions), we should first subtract the column mean for each dimension.
2) Then we can calculate the variance-covariance matrix using the equation (X here already has zero mean for each column from step 1):
$$cov=\frac{1}{m}X^TX$$
3) We can then use SVD to compute the eigenvectors and corresponding eigenvalues of the above covariance matrix "$cov$":
$$cov=U\Sigma U^T$$
4) If our target dimension is $p$ ($p<n$), then we will select the first $p$ columns of the $U$ matrix and get matrix $U_{reduce}$.
5) To get the compressed data set, we can do the transformation as below:
$$X_{reduce}=XU_{reduce}$$
6) To appoximate the original data set given the compressed data, we can use:
$$X=X_{reduce}U_{reduce}^T$$
Note this recovers only an approximation of the original data; it relies on the fact that the columns of $U$ are orthonormal eigenvectors, so $U_{reduce}^TU_{reduce}=I$ (the reconstruction is exact only when $p=n$).
#### In practice, it is also important to choose the proper number of principal components. For data compression, we want to retain as much variation in the original data as possible while reducing the dimension. Luckily, with SVD, we can get an estimate of the retained variation by:
$$\%\ of\ variance\ retained = \frac{\sum_{i=1}^{p}S_{ii}}{\sum_{i=1}^{n}S_{ii}}$$
Where $S_{ii}$ is the $i$th diagonal element of the $\Sigma$ matrix, $p$ is the number of reduced dimensions, and $n$ is the dimension of the original data.
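For example, a small helper like the following (a sketch, assuming `S` is the vector of diagonal entries of $\Sigma$ returned by the SVD) picks the smallest $p$ that retains a desired fraction of the variance:
```
import numpy as np
def choose_num_components(S, threshold=0.99):
    retained = np.cumsum(S) * 1.0 / np.sum(S)
    # index of the first p for which the retained variance reaches the threshold
    return int(np.argmax(retained >= threshold)) + 1
```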
#### For data visualization purposes, we usually choose 2 or 3 dimensions to plot the compressed data.
#### The following class PCA() implements the idea of principal component analysis.
```
import numpy as np
class PCA():
    def __init__(self, num_components):
        self.num_components = num_components
        self.U = None
        self.S = None
        self.mean_ = None
    def fit(self, X):
        # perform pca: center the data (without modifying the caller's array),
        # then take the SVD of the covariance matrix
        m = X.shape[0]
        self.mean_ = np.mean(X, axis=0)
        X_centered = X - self.mean_
        cov = X_centered.T.dot(X_centered) * 1.0 / m
        self.U, self.S, _ = np.linalg.svd(cov)
        return self
    def project(self, X):
        # center the data and project it onto the reduced dimension
        U_reduce = self.U[:, :self.num_components]
        X_reduce = (X - self.mean_).dot(U_reduce)
        return X_reduce
    def inverse(self, X_reduce):
        # recover an approximation of the original data from the reduced form
        U_reduce = self.U[:, :self.num_components]
        X = X_reduce.dot(U_reduce.T) + self.mean_
        return X
def explained_variance(self):
# print the ratio of explained variance with the pca
explained = np.sum(self.S[:self.num_components])
total = np.sum(self.S)
return explained * 1.0 / total
```
#### Now we can use a demo data set to show dimensionality reduction and data visualization.
We will use the Iris dataset as always.
```
from sklearn.datasets import load_iris
iris = load_iris()
X = iris['data']
y = iris['target']
print(X.shape)
```
We can see that the dimension of the original $X$ matrix is 4. We can then compress it to 2 using the PCA technique with the **PCA()** class that we defined above.
```
pca = PCA(num_components=2)
pca.fit(X)
X_reduce = pca.project(X)
print(X_reduce.shape)
```
Now that the data has been compressed, we can check the retained variance.
```
print("{:.2%}".format(pca.explained_variance()))
```
We have 97.76% of the variance retained. This is okay for data visualization purposes. But if we used PCA in supervised learning pipelines, we might want to add more dimensions to keep more than 99% of the variation from the original data.
Finally, with the compressed dimension, we can plot to see the distribution of iris dataset.
```
%pylab inline
pylab.rcParams['figure.figsize'] = (10, 6)
from matplotlib import pyplot as plt
for c, marker, class_num in zip(['green', 'r', 'cyan'], ['o', '^', 's'], np.unique(y)):
plt.scatter(x=X_reduce[:, 0][y == class_num], y=X_reduce[:, 1][y == class_num], c=c, marker=marker,
label="Class {}".format(class_num), alpha=0.7, s=30)
plt.xlabel("Component 1")
plt.ylabel("Component 2")
plt.legend()
plt.show()
```
From the above example, we can see that PCA can help us visualize data with more than 3 feature dimensions. The general use of PCA is for dimensionality reduction in Machine Learning pipelines. It can speed up the learning process and save memory when running supervised and unsupervised algorithms on large datasets. However, it also throws away some information when reducing the feature dimension. Thus it is always worth testing whether adding PCA to a pipeline actually helps; since it is easy to set up, the comparison is cheap.
<a href="https://colab.research.google.com/github/ai-fast-track/icevision-gradio/blob/master/IceApp_pets.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# IceVision Deployment App: PETS Dataset
This example uses Faster RCNN weights trained on the [PETS dataset](https://airctic.github.io/icedata/pets/)
About IceVision:
- an Object-Detection Framework that connects to different libraries/frameworks such as Fastai, Pytorch Lightning, and Pytorch with more to come.
- Features a Unified Data API with out-of-the-box support for common annotation formats (COCO, VOC, etc.)
- Provides flexible model implementations with pluggable backbones
## Installing packages
```
!pip install icevision[inference]
!pip install icedata
!pip install gradio
```
## Imports
```
from icevision.all import *
import icedata
import PIL, requests
import torch
from torchvision import transforms
import gradio as gr
```
## Loading trained model
```
class_map = icedata.pets.class_map()
model = icedata.pets.trained_models.faster_rcnn_resnet50_fpn()
```
## Defining the predict() method
```
def predict(
model, image, detection_threshold: float = 0.5, mask_threshold: float = 0.5
):
tfms_ = tfms.A.Adapter([tfms.A.Normalize()])
# Whenever you have images in memory (numpy arrays) you can use `Dataset.from_images`
infer_ds = Dataset.from_images([image], tfms_)
batch, samples = faster_rcnn.build_infer_batch(infer_ds)
preds = faster_rcnn.predict(
model=model,
batch=batch,
detection_threshold=detection_threshold
)
return samples[0]["img"], preds[0]
```
## Defining the `show_preds` method: called by `gr.Interface(fn=show_preds, ...)`
```
def show_preds(input_image, display_list, detection_threshold):
display_label = ("Label" in display_list)
display_bbox = ("BBox" in display_list)
if detection_threshold==0: detection_threshold=0.5
img, pred = predict(model=model, image=input_image, detection_threshold=detection_threshold)
# print(pred)
img = draw_pred(img=img, pred=pred, class_map=class_map, denormalize_fn=denormalize_imagenet, display_label=display_label, display_bbox=display_bbox)
img = PIL.Image.fromarray(img)
# print("Output Image: ", img.size, type(img))
return img
```
## Gradio User Interface
```
display_chkbox = gr.inputs.CheckboxGroup(["Label", "BBox"], label="Display")
detection_threshold_slider = gr.inputs.Slider(minimum=0, maximum=1, step=0.1, default=0.5, label="Detection Threshold")
outputs = gr.outputs.Image(type="pil")
gr_interface = gr.Interface(fn=show_preds, inputs=["image", display_chkbox, detection_threshold_slider], outputs=outputs, title='IceApp - PETS')
gr_interface.launch(inline=False, share=True, debug=True)
```
## Enjoy!
If you have any questions, please feel free to [join us](https://discord.gg/JDBeZYK)
<div class="alert alert-info" role="alert">
This tutorial contains a lot of bokeh plots, which may take a little while to load and render.
</div>
``Element``s are the basic building blocks for any HoloViews visualization. These are the objects that can be composed together using the various [Container](Containers.ipynb) types.
Here in this overview, we show an example of how to build each of these ``Element``s directly out of Python or Numpy data structures. An even more powerful way to use them is by collecting similar ``Element``s into a HoloMap, as described in [Exploring Data](Exploring_Data.ipynb), so that you can explore, select, slice, and animate them flexibly, but here we focus on having small, self-contained examples. Complete reference material for each type can be accessed using our [documentation system](Introduction.ipynb#ParamDoc). This tutorial uses the default matplotlib plotting backend; see the [Bokeh Elements](Bokeh_Elements.ipynb) tutorial for the corresponding bokeh plots.
## Element types
This class hierarchy shows each of the ``Element`` types.
Each type is named for the default or expected way that the underlying data can be visualized. E.g., if your data is wrapped into a ``Surface`` object, it will display as a 3D surface by default, whereas the same data embedded in an ``Image`` object will display as a 2D raster image. But please note that the specification and implementation for each ``Element`` type does not actually include *any* such visualization -- the name merely serves as a semantic indication that you ordinarily think of the data as being laid out visually in that way. The actual plotting is done by a separate plotting subsystem, while the objects themselves focus on storing your data and the metadata needed to describe and use it.
This separation of data and visualization is described in detail in the [Options tutorial](Options.ipynb), which describes all about how to find out the options available for each ``Element`` type and change them if necessary, from either Python or IPython Notebook. When using this tutorial interactively in an IPython/Jupyter notebook session, we suggest adding ``%output info=True`` after the call to ``notebook_extension`` below, which will pop up a detailed list and explanation of the available options for visualizing each ``Element`` type, after that notebook cell is executed. Then, to find out all the options for any of these ``Element`` types, just press ``<Shift-Enter>`` on the corresponding cell in the live notebook.
The types available:
<dl class="dl-horizontal">
<dt><a href="#Element"><code>Element</code></a></dt><dd>The base class of all <code>Elements</code>.</dd>
</dl>
### <a id='ChartIndex'></a> <a href="#Chart Elements"><code>Charts:</code></a>
<dl class="dl-horizontal">
<dt><a href="#Curve"><code>Curve</code></a></dt><dd>A continuous relation between a dependent and an independent variable. <font color='green'>✓</font></dd>
<dt><a href="#ErrorBars"><code>ErrorBars</code></a></dt><dd>A collection of x-/y-coordinates with associated error magnitudes. <font color='green'>✓</font></dd>
<dt><a href="#Spread"><code>Spread</code></a></dt><dd>Continuous version of ErrorBars. <font color='green'>✓</font></dd>
<dt><a href="#Area"><code>Area</code></a></dt><dd>Area under the curve or between curves. <font color='green'>✓</font></dd>
<dt><a href="#Bars"><code>Bars</code></a></dt><dd>Data collected and binned into categories. <font color='green'>✓</font></dd>
<dt><a href="#Histogram"><code>Histogram</code></a></dt><dd>Data collected and binned in a continuous space using specified bin edges. <font color='green'>✓</font></dd>
<dt><a href="#BoxWhisker"><code>BoxWhisker</code></a></dt><dd>Distributions of data varying by 0-N key dimensions.<font color='green'>✓</font></dd>
<dt><a href="#Scatter"><code>Scatter</code></a></dt><dd>Discontinuous collection of points indexed over a single dimension. <font color='green'>✓</font></dd>
<dt><a href="#Points"><code>Points</code></a></dt><dd>Discontinuous collection of points indexed over two dimensions. <font color='green'>✓</font></dd>
<dt><a href="#VectorField"><code>VectorField</code></a></dt><dd>Cyclic variable (and optional auxiliary data) distributed over two-dimensional space. <font color='green'>✓</font></dd>
<dt><a href="#Spikes"><code>Spikes</code></a></dt><dd>A collection of horizontal or vertical lines at various locations with fixed height (1D) or variable height (2D). <font color='green'>✓</font></dd>
<dt><a href="#SideHistogram"><code>SideHistogram</code></a></dt><dd>Histogram binning data contained by some other <code>Element</code>. <font color='green'>✓</font></dd>
</dl>
### <a id='Chart3DIndex'></a> <a href="#Chart3D Elements"><code>Chart3D Elements:</code></a>
<dl class="dl-horizontal">
<dt><a href="#Surface"><code>Surface</code></a></dt><dd>Continuous collection of points in a three-dimensional space. <font color='red'>✗</font></dd>
<dt><a href="#Scatter3D"><code>Scatter3D</code></a></dt><dd>Discontinuous collection of points in a three-dimensional space. <font color='red'>✗</font></dd>
<dt><a href="#TriSurface"><code>TriSurface</code></a></dt><dd>Continuous but irregular collection of points interpolated into a Surface using Delaunay triangulation. <font color='red'>✗</font></dd>
</dl>
### <a id='RasterIndex'></a> <a href="#Raster Elements"><code>Raster Elements:</code></a>
<dl class="dl-horizontal">
<dt><a href="#Raster"><code>Raster</code></a></dt><dd>The base class of all rasters containing two-dimensional arrays. <font color='green'>✓</font></dd>
<dt><a href="#QuadMesh"><code>QuadMesh</code></a></dt><dd>Raster type specifying 2D bins with two-dimensional array of values. <font color='green'>✓</font></dd>
<dt><a href="#HeatMap"><code>HeatMap</code></a></dt><dd>Raster displaying sparse, discontinuous data collected in a two-dimensional space. <font color='green'>✓</font></dd>
<dt><a href="#Image"><code>Image</code></a></dt><dd>Raster containing a two-dimensional array covering a continuous space (sliceable). <font color='green'>✓</font></dd>
<dt><a href="#RGB"><code>RGB</code></a></dt><dd>Image with 3 (R,G,B) or 4 (R,G,B,Alpha) color channels. <font color='green'>✓</font></dd>
<dt><a href="#HSV"><code>HSV</code></a></dt><dd>Image with 3 (Hue, Saturation, Value) or 4 channels. <font color='green'>✓</font></dd>
</dl>
### <a id='TabularIndex'></a> <a href="#Tabular Elements"><code>Tabular Elements:</code></a>
<dl class="dl-horizontal">
<dt><a href="#ItemTable"><code>ItemTable</code></a></dt><dd>Ordered collection of key-value pairs (ordered dictionary). <font color='green'>✓</font></dd>
<dt><a href="#Table"><code>Table</code></a></dt><dd>Collection of arbitrary data with arbitrary key and value dimensions. <font color='green'>✓</font></dd>
</dl>
### <a id='AnnotationIndex'></a> <a href="#Annotation Elements"><code>Annotations:</code></a>
<dl class="dl-horizontal">
<dt><a href="#VLine"><code>VLine</code></a></dt><dd>Vertical line annotation. <font color='green'>✓</font></dd>
<dt><a href="#HLine"><code>HLine</code></a></dt><dd>Horizontal line annotation. <font color='green'>✓</font></dd>
<dt><a href="#Spline"><code>Spline</code></a></dt><dd>Bezier spline (arbitrary curves). <font color='green'>✓</font></dd>
<dt><a href="#Text"><code>Text</code></a></dt><dd>Text annotation on an <code>Element</code>. <font color='green'>✓</font></dd>
<dt><a href="#Arrow"><code>Arrow</code></a></dt><dd>Arrow on an <code>Element</code> with optional text label. <font color='red'>✗</font></dd>
</dl>
### <a id='PathIndex'></a> <a href="#Path Elements"><code>Paths:</code></a>
<dl class="dl-horizontal">
<dt><a href="#Path"><code>Path</code></a></dt><dd>Collection of paths. <font color='green'>✓</font></dd>
<dt><a href="#Contours"><code>Contours</code></a></dt><dd>Collection of paths, each with an associated value. <font color='green'>✓</font></dd>
<dt><a href="#Polygons"><code>Polygons</code></a></dt><dd>Collection of filled, closed paths with an associated value. <font color='green'>✓</font></dd>
<dt><a href="#Bounds"><code>Bounds</code></a></dt><dd>Box specified by corner positions. <font color='green'>✓</font></dd>
<dt><a href="#Box"><code>Box</code></a></dt><dd>Box specified by center position, radius, and aspect ratio. <font color='green'>✓</font></dd>
<dt><a href="#Ellipse"><code>Ellipse</code></a></dt><dd>Ellipse specified by center position, radius, and aspect ratio. <font color='green'>✓</font></dd>
</dl>
## ``Element`` <a id='Element'></a>
**The basic or fundamental types of data that can be visualized.**
``Element`` is the base class for all the other HoloViews objects shown in this section.
All ``Element`` objects accept ``data`` as the first argument to define the contents of that element. In addition to its implicit type, each element object has a ``group`` string defining its category, and a ``label`` naming this particular item, as described in the [Introduction](Introduction.ipynb#value).
When rich display is off, or if no visualization has been defined for that type of ``Element``, the ``Element`` is presented with a default textual representation:
```
import holoviews as hv
hv.notebook_extension(bokeh=True)
hv.Element(None, group='Value', label='Label')
```
In addition, ``Element`` has key dimensions (``kdims``), value dimensions (``vdims``), and constant dimensions (``cdims``) to describe the semantics of indexing within the ``Element``, the semantics of the underlying data contained by the ``Element``, and any constant parameters associated with the object, respectively.
Dimensions are described in the [Introduction](Introduction.ipynb).
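As a minimal sketch (not part of the original tutorial text) of declaring ``kdims`` and ``vdims`` explicitly, using arbitrary dimension names:
```
xs = [0.1 * i for i in range(10)]
curve = hv.Curve([(x, x ** 2) for x in xs], kdims=['x'], vdims=['y'])
# The declared dimensions are stored on the object itself
print(curve.kdims, curve.vdims)
```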
The remaining ``Element`` types each have a rich, graphical display as shown below.
## ``Chart`` Elements <a id='Chart Elements'></a>
**Visualization of a dependent variable against an independent variable**
The first large class of ``Elements`` is the ``Chart`` elements. These objects have at least one fully indexable, sliceable key dimension (typically the *x* axis in a plot), and usually have one or more value dimension(s) (often the *y* axis) that may or may not be indexable depending on the implementation. The key dimensions are normally the parameter settings for which things are measured, and the value dimensions are the data points recorded at those settings.
As described in the [Columnar Data tutorial](Columnar_Data.ipynb), the data can be stored in several different internal formats, such as a NumPy array of shape (N, D), where N is the number of samples and D the number of dimensions. A somewhat larger list of formats can be accepted, including any of the supported internal formats, or
1. As a list of length N containing tuples of length D.
2. As a tuple of length D containing iterables of length N.
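For instance, here is a small sketch showing formats 1 and 2 above producing equivalent ``Curve`` objects (a toy quadratic is used purely for illustration):
```
import numpy as np
xs = np.linspace(0, 1, 5)
ys = xs ** 2
# Format 1: a list of (x, y) tuples; format 2: a tuple of two length-N arrays
hv.Curve(list(zip(xs, ys))) + hv.Curve((xs, ys))
```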
### ``Curve`` <a id='Curve'></a>
```
import numpy as np
points = [(0.1*i, np.sin(0.1*i)) for i in range(100)]
hv.Curve(points)
```
A ``Curve`` is a set of values provided for some set of keys from a [continuously indexable 1D coordinate system](Continuous_Coordinates.ipynb), where the plotted values will be connected up because they are assumed to be samples from a continuous relation.
### ``ErrorBars`` <a id='ErrorBars'></a>
```
np.random.seed(7)
points = [(0.1*i, np.sin(0.1*i)) for i in range(100)]
errors = [(0.1*i, np.sin(0.1*i), np.random.rand()/2) for i in np.linspace(0, 100, 11)]
hv.Curve(points) * hv.ErrorBars(errors)
```
``ErrorBars`` is a set of x-/y-coordinates with associated error values. Error values may be either symmetric or asymmetric, and thus can be supplied as an Nx3 or Nx4 array (or any of the alternative constructors Chart Elements allow).
```
%%opts ErrorBars
points = [(0.1*i, np.sin(0.1*i)) for i in range(100)]
errors = [(0.1*i, np.sin(0.1*i), np.random.rand()/2, np.random.rand()/4) for i in np.linspace(0, 100, 11)]
hv.Curve(points) * hv.ErrorBars(errors, vdims=['y', 'yerrneg', 'yerrpos'])
```
### ``Area`` <a id='Area'></a>
** *Area under the curve* **
By default the Area Element draws just the area under the curve, i.e. the region between the curve and the origin.
```
xs = np.linspace(0, np.pi*4, 40)
hv.Area((xs, np.sin(xs)))
```
** * Area between curves * **
When supplied a second value dimension the area is defined as the area between two curves.
```
X = np.linspace(0,3,200)
Y = X**2 + 3
Y2 = np.exp(X) + 2
Y3 = np.cos(X)
hv.Area((X, Y, Y2), vdims=['y', 'y2']) * hv.Area((X, Y, Y3), vdims=['y', 'y3'])
```
#### Stacked areas
Areas are also useful to visualize multiple variables changing over time, but in order to be able to compare them the areas need to be stacked. Therefore the ``operation`` module provides the ``stack_area`` operation, which makes it trivial to stack multiple ``Area`` elements in an (Nd)Overlay.
In this example we will generate a set of 5 arrays representing percentages and create an Overlay of them. Then we simply call the ``stack_area`` operation on the Overlay to get a stacked area chart.
```
values = np.random.rand(5, 20)
percentages = (values/values.sum(axis=0)).T*100
overlay = hv.Overlay([hv.Area(percentages[:, i], vdims=[hv.Dimension('value', unit='%')]) for i in range(5)])
overlay + hv.Area.stack(overlay)
```
### ``Spread`` <a id='Spread'></a>
``Spread`` elements have the same data format as the ``ErrorBars`` element, namely x- and y-values with associated symmetric or asymmetric errors, but are interpreted as samples from a continuous distribution (just as ``Curve`` is the continuous version of ``Scatter``). These are often paired with an overlaid ``Curve`` to show both the mean (as a curve) and the spread of values; see the [Columnar Data tutorial](Columnar_Data.ipynb) for examples.
##### Symmetric
```
np.random.seed(42)
xs = np.linspace(0, np.pi*2, 20)
err = 0.2+np.random.rand(len(xs))
hv.Spread((xs, np.sin(xs), err))
```
##### Asymmetric
```
%%opts Spread (fill_color='indianred' fill_alpha=1)
xs = np.linspace(0, np.pi*2, 20)
hv.Spread((xs, np.sin(xs), 0.1+np.random.rand(len(xs)), 0.1+np.random.rand(len(xs))),
vdims=['y', 'yerrneg', 'yerrpos'])
```
### ``Bars`` <a id='Bars'></a>
```
data = [('one',8),('two', 10), ('three', 16), ('four', 8), ('five', 4), ('six', 1)]
bars = hv.Bars(data, kdims=[hv.Dimension('Car occupants', values='initial')], vdims=['Count'])
bars + bars[['one', 'two', 'three']]
```
``Bars`` is an ``NdElement`` type, so by default it is sorted. To preserve the initial ordering specify the ``Dimension`` with values set to 'initial', or you can supply an explicit list of valid dimension keys.
``Bars`` supports up to two key dimensions, which can be laid out as ``'group'`` and ``'stack'`` dimensions. By default the key dimensions are mapped onto the first and second ``Dimension`` of the ``Bars`` object, but this behavior can be overridden via the ``group_index`` and ``stack_index`` options.
```
%%opts Bars [group_index=0 stack_index=1]
from itertools import product
np.random.seed(3)
groups, stacks = ['A', 'B'], ['a', 'b']
keys = product(groups, stacks)
hv.Bars([k+(np.random.rand()*100.,) for k in keys],
kdims=['Group', 'Stack'], vdims=['Count'])
```
### ``BoxWhisker`` <a id='BoxWhisker'></a>
The ``BoxWhisker`` Element allows representing distributions of data varying by 0-N key dimensions. To represent the distribution of a single variable, we can create a BoxWhisker Element with no key dimensions and a single value dimension:
```
hv.BoxWhisker(np.random.randn(200), kdims=[], vdims=['Value'])
```
BoxWhisker Elements support any number of dimensions and may also be rotated. To style the boxes and whiskers, supply ``boxprops``, ``whiskerprops``, and ``flierprops``.
```
%%opts BoxWhisker [invert_axes=True width=600]
groups = [chr(65+g) for g in np.random.randint(0, 3, 200)]
hv.BoxWhisker((groups, np.random.randint(0, 5, 200), np.random.randn(200)),
kdims=['Group', 'Category'], vdims=['Value']).sort()
```
### ``Histogram`` <a id='Histogram'></a>
```
np.random.seed(1)
data = [np.random.normal() for i in range(10000)]
frequencies, edges = np.histogram(data, 20)
hv.Histogram(frequencies, edges)
```
``Histogram``s partition the `x` axis into discrete (but not necessarily regular) bins, showing counts in each as a bar.
Almost all Element types, including ``Histogram``, may be projected onto a polar axis by supplying ``projection='polar'`` as a plot option.
```
%%opts Histogram [projection='polar' show_grid=True]
data = [np.random.rand()*np.pi*2 for i in range(100)]
frequencies, edges = np.histogram(data, 20)
hv.Histogram(frequencies, edges, kdims=['Angle'])
```
### ``Scatter`` <a id='Scatter'></a>
```
%%opts Scatter (color='k', marker='s', s=10)
np.random.seed(42)
points = [(i, np.random.random()) for i in range(20)]
hv.Scatter(points) + hv.Scatter(points)[12:20]
```
Scatter is the discrete equivalent of Curve, showing *y* values for discrete *x* values selected. See [``Points``](#Points) for more information.
The marker shape specified above can be any supported by [matplotlib](http://matplotlib.org/api/markers_api.html), e.g. ``s``, ``d``, or ``o``; the other options select the color and size of the marker. For convenience with the [bokeh backend](Bokeh_Backend), the matplotlib marker options are supported using a compatibility function in HoloViews.
### ``Points`` <a id='Points'></a>
```
np.random.seed(12)
points = np.random.rand(50,2)
hv.Points(points) + hv.Points(points)[0.6:0.8,0.2:0.5]
```
As you can see, ``Points`` is very similar to ``Scatter``, and can produce some plots that look identical. However, the two ``Element``s are very different semantically. For ``Scatter``, the dots each show a dependent variable *y* for some *x*, such as in the ``Scatter`` example above where we selected regularly spaced values of *x* and then created a random number as the corresponding *y*. I.e., for ``Scatter``, the *y* values are the data; the *x*s are just where the data values are located. For ``Points``, both *x* and *y* are independent variables, known as ``key_dimensions`` in HoloViews:
```
for o in [hv.Points(points,name="Points "), hv.Scatter(points,name="Scatter")]:
for d in ['key','value']:
print("%s %s_dimensions: %s " % (o.name, d, o.dimensions(d,label=True)))
```
The ``Scatter`` object expresses a dependent relationship between *x* and *y*, making it useful for combining with other similar ``Chart`` types, while the ``Points`` object expresses the relationship of two independent keys *x* and *y* with optional ``vdims`` (zero in this case), which makes ``Points`` objects meaningful to combine with the ``Raster`` types below.
Of course, the ``vdims`` need not be empty for ``Points``; here is an example with two additional quantities for each point, as ``value_dimension``s *z* and α visualized as the color and size of the dots, respectively:
```
%%opts Points [color_index=2 size_index=3 scaling_factor=50]
np.random.seed(10)
data = np.random.rand(100,4)
points = hv.Points(data, vdims=['z', 'alpha'])
points + points[0.3:0.7, 0.3:0.7].hist()
```
Such a plot wouldn't be meaningful for ``Scatter``, but is a valid use for ``Points``, where the *x* and *y* locations are independent variables representing coordinates, and the "data" is conveyed by the size and color of the dots.
### ``Spikes`` <a id='Spikes'></a>
Spikes represent any number of horizontal or vertical line segments with fixed or variable heights. There are a number of disparate uses for this type. First of all, they may be used as a rugplot to give an overview of a one-dimensional distribution. They may also be useful in more domain-specific cases, such as visualizing spike trains for neurophysiology or spectrograms in physics and chemistry applications.
In the simplest case, a Spikes object represents coordinates in a 1D distribution:
```
%%opts Spikes (line_alpha=0.4) [spike_length=0.1]
xs = np.random.rand(50)
ys = np.random.rand(50)
hv.Points((xs, ys)) * hv.Spikes(xs)
```
When supplying two dimensions to the Spikes object, the second dimension will be mapped onto the line height. Optionally, you may also supply a cmap and color_index to map color onto one of the dimensions. This way we can, for example, plot a mass spectrogram:
```
%%opts Spikes (cmap='Reds')
hv.Spikes(np.random.rand(20, 2), kdims=['Mass'], vdims=['Intensity'])
```
Another possibility is to draw a number of spike trains as you would encounter in neuroscience. Here we generate 10 separate random spike trains and distribute them evenly across the space by setting their ``position``. By also declaring some ``yticks``, each spike train can be labeled individually:
```
%%opts Spikes [spike_length=0.1] NdOverlay [show_legend=False]
hv.NdOverlay({i: hv.Spikes(np.random.randint(0, 100, 10), kdims=['Time']).opts(plot=dict(position=0.1*i))
for i in range(10)}).opts(plot=dict(yticks=[((i+1)*0.1-0.05, i) for i in range(10)]))
```
Finally, we may use ``Spikes`` to visualize marginal distributions as adjoined plots using the ``<<`` adjoin operator:
```
%%opts Spikes (line_alpha=0.2)
points = hv.Points(np.random.randn(500, 2))
points << hv.Spikes(points['y']) << hv.Spikes(points['x'])
```
### ``VectorField`` <a id='VectorField'></a>
```
%%opts VectorField [size_index=3]
x,y = np.mgrid[-10:10,-10:10] * 0.25
sine_rings = np.sin(x**2+y**2)*np.pi+np.pi
exp_falloff = 1/np.exp((x**2+y**2)/8)
vector_data = (x,y,sine_rings, exp_falloff)
hv.VectorField(vector_data)
```
As you can see above, here the *x* and *y* positions are chosen to make a regular grid. The arrow angles follow a sinusoidal ring pattern, and the arrow lengths fall off exponentially from the center, so this plot has four dimensions of data (direction and length for each *x,y* position).
Using the IPython ``%%opts`` cell-magic (described in the [Options tutorial](Options), along with the Python equivalent), we can also use color as a redundant indicator of the direction or magnitude:
```
%%opts VectorField [size_index=3] VectorField.A [color_index=2] VectorField.M [color_index=3]
hv.VectorField(vector_data, group='A') + hv.VectorField(vector_data, group='M')
```
### ``SideHistogram`` <a id='SideHistogram'></a>
The ``.hist`` method conveniently adjoins a histogram to the side of any ``Chart``, ``Surface``, or ``Raster`` component, as well as many of the container types (though it would be reporting data from one of these underlying ``Element`` types). For a ``Raster`` using color or grayscale to show values (see ``Raster`` section below), the side histogram doubles as a color bar or key.
```
import numpy as np
np.random.seed(42)
points = [(i, np.random.normal()) for i in range(800)]
hv.Scatter(points).hist()
```
## ``Chart3D`` Elements <a id='Chart3D Elements'></a>
### ``Surface`` <a id='Surface'></a>
```
%%opts Surface (cmap='jet' rstride=20, cstride=2)
hv.Surface(np.sin(np.linspace(0,100*np.pi*2,10000)).reshape(100,100))
```
Surface is used for a set of gridded points whose associated value dimension represents samples from a continuous surface; it is the equivalent of a ``Curve`` but with two key dimensions instead of just one.
### ``Scatter3D`` <a id='Scatter3D'></a>
```
%%opts Scatter3D [azimuth=40 elevation=20]
x,y = np.mgrid[-5:5, -5:5] * 0.1
heights = np.sin(x**2+y**2)
hv.Scatter3D(zip(x.flat,y.flat,heights.flat))
```
``Scatter3D`` is the equivalent of ``Scatter`` but for two key dimensions, rather than just one.
### ``TriSurface`` <a id='TriSurface'></a>
The ``TriSurface`` Element renders any collection of 3D points as a Surface by applying Delaunay triangulation. It thus supports arbitrary, non-gridded data, but it does not support indexing to find data values, since finding the closest ones would require a search.
```
%%opts TriSurface [fig_size=200] (cmap='hot_r')
hv.TriSurface((x.flat,y.flat,heights.flat))
```
## ``Raster`` Elements <a id='Raster Elements'></a>
**A collection of raster image types**
The second large class of ``Elements`` is the raster elements. Like ``Points`` and unlike the other ``Chart`` elements, ``Raster Elements`` live in a 2D key-dimensions space. For the ``Image``, ``RGB``, and ``HSV`` elements, the coordinates of this two-dimensional key space are defined in a [continuously indexable coordinate system](Continuous_Coordinates.ipynb).
### ``Raster`` <a id='Raster'></a>
A ``Raster`` is the base class for image-like ``Elements``, but may be used directly to visualize 2D arrays using a color map. The coordinate system of a ``Raster`` is the raw indexes of the underlying array, with integer values always starting from (0,0) in the top left, with default extents corresponding to the shape of the array. The ``Image`` subclass visualizes similarly, but using a continuous Cartesian coordinate system suitable for an array that represents some underlying continuous region.
```
x,y = np.mgrid[-50:51, -50:51] * 0.1
hv.Raster(np.sin(x**2+y**2))
```
### ``QuadMesh`` <a id='QuadMesh'></a>
The basic ``QuadMesh`` is a 2D grid of bins specified by x-/y-values that define either a regular sampling or arbitrary bin edges, together with an associated 2D array containing the bin values. The coordinate system of a ``QuadMesh`` is defined by the bin edges, therefore any index falling into a binned region will return the appropriate value. Unlike ``Image`` objects, slices must be inclusive of the bin edges.
```
n = 21
xs = np.logspace(1, 3, n)
ys = np.linspace(1, 10, n)
hv.QuadMesh((xs, ys, np.random.rand(n-1, n-1)))
```
QuadMesh may also be used to represent an arbitrary mesh of quadrilaterals by supplying three separate 2D arrays representing the coordinates of each quadrilateral in a 2D space. Note that when using ``QuadMesh`` in this mode, slicing and indexing semantics and most operations will currently not work.
```
coords = np.linspace(-1.5,1.5,n)
X,Y = np.meshgrid(coords, coords);
Qx = np.cos(Y) - np.cos(X)
Qz = np.sin(Y) + np.sin(X)
Z = np.sqrt(X**2 + Y**2)
hv.QuadMesh((Qx, Qz, Z))
```
### ``HeatMap`` <a id='HeatMap'></a>
A ``HeatMap`` displays like a typical raster image, but the input is a dictionary indexed with two-dimensional keys, not a Numpy array or Pandas dataframe. As many rows and columns as required will be created to display the values in an appropriate grid format. Values unspecified are left blank, and the keys can be any Python datatype (not necessarily numeric). One typical usage is to show values from a set of experiments, such as a parameter space exploration, and many other such visualizations are shown in the [Containers](Containers.ipynb) and [Exploring Data](Exploring_Data.ipynb) tutorials. Each value in a ``HeatMap`` is labeled explicitly by default, and so this component is not meant for very large numbers of samples. With the default color map, high values (in the upper half of the range present) are colored orange and red, while low values (in the lower half of the range present) are colored shades of blue.
```
data = {(chr(65+i),chr(97+j)): i*j for i in range(5) for j in range(5) if i!=j}
hv.HeatMap(data).sort()
```
### ``Image`` <a id='Image'></a>
Like ``Raster``, a HoloViews ``Image`` allows you to view 2D arrays using an arbitrary color map. Unlike ``Raster``, an ``Image`` is associated with a [2D coordinate system in continuous space](Continuous_Coordinates.ipynb), which is appropriate for values sampled from some underlying continuous distribution (as in a photograph or other measurements from locations in real space). Slicing, sampling, etc. on an ``Image`` all use this continuous space, whereas the corresponding operations on a ``Raster`` work on the raw array coordinates.
```
x,y = np.mgrid[-50:51, -50:51] * 0.1
bounds=(-1,-1,1,1)  # Coordinate system: (left, bottom, right, top)
(hv.Image(np.sin(x**2+y**2), bounds=bounds)
+ hv.Image(np.sin(x**2+y**2), bounds=bounds)[-0.5:0.5, -0.5:0.5])
```
Notice how, because our declared coordinate system is continuous, we can slice with any floating-point value we choose. The appropriate range of the samples in the input numpy array will always be displayed, whether or not there are samples at those specific floating-point values.
It is also worth noting that the name ``Image`` can clash with other common libraries, which is one reason to avoid unqualified imports like ``from holoviews import *``. For instance, the Python Imaging Library provides an ``Image`` module, and IPython itself supplies an ``Image`` class in ``IPython.display``. Python namespaces allow you to avoid such problems, e.g. using ``from PIL import Image as PILImage`` or using ``import holoviews as hv`` and then ``hv.Image()``, as we do in these tutorials.
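A minimal sketch of the namespacing suggested above, assuming Pillow is installed in the environment:
```
from PIL import Image as PILImage
import holoviews as hv
# The two Image names now coexist without clashing
print(PILImage, hv.Image)
```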
### ``RGB`` <a id='RGB'></a>
The ``RGB`` element is an ``Image`` that supports red, green, blue channels:
```
x,y = np.mgrid[-50:51, -50:51] * 0.1
r = 0.5*np.sin(np.pi +3*x**2+y**2)+0.5
g = 0.5*np.sin(x**2+2*y**2)+0.5
b = 0.5*np.sin(np.pi/2+x**2+y**2)+0.5
hv.RGB(np.dstack([r,g,b]))
```
You can see how the RGB object is created from the original channels:
```
%%opts Image (cmap='gray')
hv.Image(r,label="R") + hv.Image(g,label="G") + hv.Image(b,label="B")
```
``RGB`` also supports an optional alpha channel, which will be used as a mask revealing or hiding any ``Element``s it is overlaid on top of:
```
%%opts Image (cmap='gray')
mask = 0.5*np.sin(0.2*(x**2+y**2))+0.5
rgba = hv.RGB(np.dstack([r,g,b,mask]))
bg = hv.Image(0.5*np.cos(x*3)+0.5, label="Background") * hv.VLine(x=0,label="Background")
overlay = bg*rgba
overlay.label="RGBA Overlay"
bg + hv.Image(mask,label="Mask") + overlay
```
### ``HSV`` <a id='HSV'></a>
HoloViews makes it trivial to work in any color space that can be converted to ``RGB`` by making a simple subclass of ``RGB`` as appropriate. For instance, we also provide the HSV (hue, saturation, value) color space, which is useful for plotting cyclic data (as the Hue) along with two additional dimensions (controlling the saturation and value of the color, respectively):
```
x,y = np.mgrid[-50:51, -50:51] * 0.1
h = 0.5 + np.sin(0.2*(x**2+y**2)) / 2.0
s = 0.5*np.cos(y*3)+0.5
v = 0.5*np.cos(x*3)+0.5
hsv = hv.HSV(np.dstack([h, s, v]))
hsv
```
You can see how this is created from the original channels:
```
%%opts Image (cmap='gray')
hv.Image(h, label="H") + hv.Image(s, label="S") + hv.Image(v, label="V")
```
# ``Tabular`` Elements <a id='Tabular Elements'></a>
**General data structures for holding arbitrary information**
## ``ItemTable`` <a id='ItemTable'></a>
An ``ItemTable`` is an ordered collection of key, value pairs. It can be used to directly visualize items in a tabular format where the items may be supplied as an ``OrderedDict`` or a list of (key,value) pairs. A standard Python dictionary can be easily visualized using a call to the ``.items()`` method, though the entries in such a dictionary are not kept in any particular order, and so you may wish to sort them before display. One typical usage for an ``ItemTable`` is to list parameter values or measurements associated with an adjacent ``Element``.
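As a small sketch of the plain-dictionary case mentioned above, sorting the items first so the display order is deterministic:
```
d = {'Age': 10, 'Weight': 15, 'Height': '0.8 meters'}
hv.ItemTable(sorted(d.items()))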
```
hv.ItemTable([('Age', 10), ('Weight',15), ('Height','0.8 meters')])
```
## ``Table`` <a id='Table'></a>
A table is more general than an ``ItemTable``, as it allows multi-dimensional keys and multidimensional values.
```
keys = [('M',10), ('M',16), ('F',12)]
values = [(15, 0.8), (18, 0.6), (10, 0.8)]
table = hv.Table(zip(keys,values),
kdims = ['Gender', 'Age'],
vdims=['Weight', 'Height'])
table
```
Note that you can use ``select`` on tables, and once you select using a full, multidimensional key, you get an ``ItemTable`` (shown on the right):
```
table.select(Gender='M') + table.select(Gender='M', Age=10)
```
The ``Table`` is used as a common data structure that may be converted to any other HoloViews data structure using the ``TableConversion`` class.
The functionality of the ``TableConversion`` class may be conveniently accessed using the ``.to`` property. For more extended usage of table conversion see the [Columnar Data](Columnar_Data.ipynb) and [Pandas Conversion](Pandas_Conversion.ipynb) Tutorials.
```
table.select(Gender='M').to.curve(kdims=["Age"], vdims=["Weight"])
```
# ``Annotation`` Elements <a id='Annotation Elements'></a>
**Useful information that can be overlaid onto other components**
Annotations are components designed to be overlaid on top of other ``Element`` objects. To demonstrate annotation and paths, we will be drawing many of our elements on top of an RGB Image:
```
scene = hv.RGB.load_image('../assets/penguins.png')
```
### ``VLine`` and ``HLine`` <a id='VLine'></a><a id='HLine'></a>
```
scene * hv.VLine(-0.05) + scene * hv.HLine(-0.05)
```
### ``Spline`` <a id='Spline'></a>
The ``Spline`` annotation is used to draw Bezier splines using the same semantics as [matplotlib splines](http://matplotlib.org/api/path_api.html). In the overlay below, the spline is in dark blue and the control points are in light blue.
```
points = [(-0.3, -0.3), (0,0), (0.25, -0.25), (0.3, 0.3)]
codes = [1,4,4,4]
scene * hv.Spline((points,codes)) * hv.Curve(points)
```
### Text and Arrow <a id='Text'></a><a id='Arrow'></a>
```
scene * hv.Text(0, 0.2, 'Adult\npenguins') + scene * hv.Arrow(0,-0.1, 'Baby penguin', 'v')
```
# Paths <a id='Path Elements'></a>
**Line-based components that can be overlaid onto other components**
Paths are a subclass of annotations that involve drawing line-based components on top of other elements. Internally, Path Element types hold a list of Nx2 arrays, specifying the x/y-coordinates along each path. The data may be supplied in a number of ways, including:
1. A list of Nx2 numpy arrays.
2. A list of lists containing x/y coordinate tuples.
3. A tuple containing an array of length N with the x-values and a second array of shape NxP, where P is the number of paths.
4. A list of tuples each containing separate x and y values.
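For example, a minimal sketch of format 3 above: a shared array of x-values plus an NxP array of y-values, giving P separate paths in a single object.
```
xs = np.linspace(0, 2 * np.pi, 50)
ys = np.column_stack([np.sin(xs), np.cos(xs)])  # two paths sharing the same x-values
hv.Path((xs, ys))
```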
## ``Path`` <a id='Path'></a>
A ``Path`` object is actually a collection of paths which can be arbitrarily specified. Although there may be multiple unconnected paths in a single ``Path`` object, they will all share the same style. Only by overlaying multiple ``Path`` objects do you iterate through the defined color cycle (or any other style options that have been defined).
```
angle = np.linspace(0, 2*np.pi, 100)
baby = list(zip(0.15*np.sin(angle), 0.2*np.cos(angle)-0.2))
adultR = [(0.25, 0.45), (0.35,0.35), (0.25, 0.25), (0.15, 0.35), (0.25, 0.45)]
adultL = [(-0.3, 0.4), (-0.3, 0.3), (-0.2, 0.3), (-0.2, 0.4),(-0.3, 0.4)]
scene * hv.Path([adultL, adultR, baby]) * hv.Path([baby])
```
## ``Contours`` <a id='Contours'></a>
A ``Contours`` object is similar to ``Path`` object except each of the path elements is associated with a numeric value, called the ``level``. Sadly, our penguins are too complicated to give a simple example so instead we will simply mark the first couple of rings of our earlier ring pattern:
```
x,y = np.mgrid[-50:51, -50:51] * 0.1
def circle(radius, x=0, y=0):
angles = np.linspace(0, 2*np.pi, 100)
return np.array( list(zip(x+radius*np.sin(angles), y+radius*np.cos(angles))))
hv.Image(np.sin(x**2+y**2)) * hv.Contours([circle(0.22)], level=0) * hv.Contours([circle(0.33)], level=1)
```
## ``Polygons`` <a id='Polygons'></a>
A ``Polygons`` object is similar to a ``Contours`` object except that each supplied path is closed and filled. Just like ``Contours``, optionally a ``level`` may be supplied; the Polygons will then be colored according to the supplied ``cmap``. Non-finite values such as ``np.NaN`` or ``np.inf`` will default to the supplied ``facecolor``.
Polygons with values can be used to build heatmaps with arbitrary shapes.
```
%%opts Polygons (cmap='hot' line_color='black' line_width=2)
np.random.seed(35)
hv.Polygons([np.random.rand(4,2)], level=0.5) *\
hv.Polygons([np.random.rand(4,2)], level=1.0) *\
hv.Polygons([np.random.rand(4,2)], level=1.5) *\
hv.Polygons([np.random.rand(4,2)], level=2.0)
```
Polygons without a value are useful as annotation, but also allow us to draw arbitrary shapes.
```
def rectangle(x=0, y=0, width=1, height=1):
return np.array([(x,y), (x+width, y), (x+width, y+height), (x, y+height)])
(hv.Polygons([rectangle(width=2), rectangle(x=6, width=2)]).opts(style={'fill_color': '#a50d0d'})
* hv.Polygons([rectangle(x=2, height=2), rectangle(x=5, height=2)]).opts(style={'fill_color': '#ffcc00'})
* hv.Polygons([rectangle(x=3, height=2, width=2)]).opts(style={'fill_color': 'cyan'}))
```
## ``Bounds`` <a id='Bounds'></a>
A ``Bounds`` is a rectangular area specified as a tuple in ``(left, bottom, right, top)`` format. It is useful for denoting a region of interest defined by some bounds, whereas ``Box`` (below) is useful for drawing a box at a specific location.
```
scene * hv.Bounds(0.2) * hv.Bounds((0.2, 0.2, 0.45, 0.45,))
```
## ``Box`` <a id='Box'></a> and ``Ellipse`` <a id='Ellipse'></a>
A ``Box`` is similar to a ``Bounds`` except you specify the box position, width, and aspect ratio instead of the coordinates of the box corners. An ``Ellipse`` is specified just as for ``Box``, but has a rounded shape.
```
scene * hv.Box( -0.25, 0.3, 0.3, aspect=0.5) * hv.Box( 0, -0.2, 0.1) + \
scene * hv.Ellipse(-0.25, 0.3, 0.3, aspect=0.5) * hv.Ellipse(0, -0.2, 0.1)
```
# Extra Trees Classifier with MinMax Scaler
### Required Packages
```
import numpy as np
import pandas as pd
import seaborn as se
import warnings
import matplotlib.pyplot as plt
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.preprocessing import LabelEncoder, MinMaxScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report,plot_confusion_matrix
warnings.filterwarnings('ignore')
```
### Initialization
Filepath of CSV file
```
#filepath
file_path= ""
```
List of features which are required for model training.
```
#x_values
features=[]
```
Target feature for prediction.
```
#y_value
target=''
```
### Data Fetching
Pandas is an open-source, BSD-licensed library providing high-performance, easy-to-use data manipulation and data analysis tools.
We will use the pandas library to read the CSV file from its storage path, and we use the head function to display the first few rows.
```
df=pd.read_csv(file_path)
df.head()
```
### Feature Selections
Feature selection is the process of reducing the number of input variables when developing a predictive model. It is used both to reduce the computational cost of modelling and, in some cases, to improve the performance of the model.
We will assign all the required input features to X and target/outcome to Y.
```
X = df[features]
Y = df[target]
```
### Data Preprocessing
Since most machine learning models in the scikit-learn library cannot handle string categories or null values directly, we have to explicitly remove or replace null values and encode categorical data. The snippet below defines functions that remove null values, if any exist, and convert string classes in the dataset by encoding them as integer classes.
```
def NullClearner(df):
    # Fill numeric columns with the mean, categorical columns with the mode.
    if(isinstance(df, pd.Series) and (df.dtype in ["float64","int64"])):
        df.fillna(df.mean(),inplace=True)
        return df
    elif(isinstance(df, pd.Series)):
        df.fillna(df.mode()[0],inplace=True)
        return df
    else:return df
def EncodeX(df):
    # One-hot encode categorical feature columns.
    return pd.get_dummies(df)
def EncodeY(df):
    # Label-encode the target only when it has more than two classes.
    if len(df.unique())<=2:
        return df
    else:
        un_EncodedT=np.sort(pd.unique(df), axis=-1, kind='mergesort')
        df=LabelEncoder().fit_transform(df)
        EncodedT=[xi for xi in range(len(un_EncodedT))]
        print("Encoded Target: {} to {}".format(un_EncodedT,EncodedT))
        return df
x=X.columns.to_list()
for i in x:
X[i]=NullClearner(X[i])
X=EncodeX(X)
Y=EncodeY(NullClearner(Y))
X.head()
```
#### Correlation Map
In order to check the correlation between the features, we will plot a correlation matrix. It is effective in summarizing a large amount of data where the goal is to see patterns.
```
f,ax = plt.subplots(figsize=(18, 18))
matrix = np.triu(X.corr())
se.heatmap(X.corr(), annot=True, linewidths=.5, fmt= '.1f',ax=ax, mask=matrix)
plt.show()
```
#### Distribution Of Target Variable
```
plt.figure(figsize = (10,6))
se.countplot(Y)
```
### Data Splitting
The train-test split is a procedure for evaluating the performance of an algorithm. The procedure involves taking a dataset and dividing it into two subsets. The first subset is utilized to fit/train the model. The second subset is used for prediction. The main motive is to estimate the performance of the model on new data.
```
X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size = 0.2, random_state = 123)
```
### Data Rescaling
This estimator scales and translates each feature individually such that it is in the given range on the training set, e.g. between zero and one.
The transformation is given by:
X_std = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))
X_scaled = X_std * (max - min) + min
where min, max = feature_range.
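As a quick numeric check of the formula above (assuming the default `feature_range=(0, 1)`), a toy column scales as expected:
```
demo = np.array([[1.0], [3.0], [5.0]])
# min=1, max=5, so the scaled values should be 0.0, 0.5 and 1.0
print(MinMaxScaler().fit_transform(demo).ravel())
```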
```
minmax_scaler = MinMaxScaler()
X_train = minmax_scaler.fit_transform(X_train)
X_test = minmax_scaler.transform(X_test)
```
### Model
ExtraTreesClassifier is an ensemble learning method fundamentally based on decision trees. ExtraTreesClassifier, like RandomForest, randomizes certain decisions and subsets of data to minimize over-learning from the data and overfitting.
#### Model Tuning Parameters
1. n_estimators: int, default=100
>The number of trees in the forest.
2. criterion: {“gini”, “entropy”}, default=”gini”
>The function to measure the quality of a split. Supported criteria are “gini” for the Gini impurity and “entropy” for the information gain.
3. max_depth: int, default=None
>The maximum depth of the tree. If None, then nodes are expanded until all leaves are pure or until all leaves contain less than min_samples_split samples.
4. max_features: {“auto”, “sqrt”, “log2”}, int or float, default=”auto”
>The number of features to consider when looking for the best split:
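For illustration only, these parameters could be set explicitly as below; the values are arbitrary examples, not tuned for any particular dataset. The model actually trained in this notebook uses the defaults:
```
tuned_model = ExtraTreesClassifier(n_estimators=200, criterion='entropy',
                                   max_depth=10, max_features='sqrt',
                                   n_jobs=-1, random_state=123)
```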
```
model=ExtraTreesClassifier(n_jobs = -1,random_state = 123)
model.fit(X_train,y_train)
```
#### Model Accuracy
The score() method returns the mean accuracy on the given test data and labels.
In multi-label classification, this is the subset accuracy, which is a harsh metric since it requires the entire label set of each sample to be predicted correctly.
```
print("Accuracy score {:.2f} %\n".format(model.score(X_test,y_test)*100))
```
#### Confusion Matrix
A confusion matrix is utilized to understand the performance of the classification model or algorithm in machine learning for a given test set where results are known.
```
plot_confusion_matrix(model,X_test,y_test,cmap=plt.cm.Blues)
```
#### Classification Report
A classification report is used to measure the quality of predictions from a classification algorithm, i.e., how many predictions are correct and how many are not.
* **where**:
    - Precision:- Accuracy of positive predictions.
    - Recall:- Fraction of positives that were correctly identified.
    - f1-score:- Harmonic mean of precision and recall.
    - support:- Support is the number of actual occurrences of the class in the specified dataset.
```
print(classification_report(y_test,model.predict(X_test)))
```
#### Feature Importances
Feature importance refers to techniques that assign a score to input features based on how useful they are for predicting the target variable.
```
plt.figure(figsize=(8,6))
n_features = len(X.columns)
plt.barh(range(n_features), model.feature_importances_, align='center')
plt.yticks(np.arange(n_features), X.columns)
plt.xlabel("Feature importance")
plt.ylabel("Feature")
plt.ylim(-1, n_features)
```
#### Creator: Ayush Gupta , Github: [Profile](https://github.com/guptayush179)
# Implementing doND using the dataset
```
from functools import partial
import numpy as np
from qcodes.dataset.database import initialise_database
from qcodes.dataset.experiment_container import new_experiment
from qcodes.tests.instrument_mocks import DummyInstrument
from qcodes.dataset.measurements import Measurement
from qcodes.dataset.plotting import plot_by_id
initialise_database() # just in case no database file exists
new_experiment("doNd-tutorial", sample_name="no sample")
```
First we borrow the dummy instruments from the contextmanager notebook to have something to measure.
```
# preparatory mocking of physical setup
dac = DummyInstrument('dac', gates=['ch1', 'ch2'])
dmm = DummyInstrument('dmm', gates=['v1', 'v2'])
# and we'll make a 2D gaussian to sample from/measure
def gauss_model(x0: float, y0: float, sigma: float, noise: float=0.0005):
"""
Returns a generator sampling a gaussian. The gaussian is
normalised such that its maximal value is simply 1
"""
while True:
(x, y) = yield
model = np.exp(-((x0-x)**2+(y0-y)**2)/2/sigma**2)*np.exp(2*sigma**2)
noise = np.random.randn()*noise
yield model + noise
# and finally wire up the dmm v1 to "measure" the gaussian
gauss = gauss_model(0.1, 0.2, 0.25)
next(gauss)
def measure_gauss(dac):
val = gauss.send((dac.ch1.get(), dac.ch2.get()))
next(gauss)
return val
dmm.v1.get = partial(measure_gauss, dac)
```
Now let's reimplement the qdev-wrapper do1d function, which can measure one or more parameters as a function of another parameter. This is more or less as simple as you would expect.
```
def do1d(param_set, start, stop, num_points, delay, *param_meas):
meas = Measurement()
meas.register_parameter(param_set) # register the first independent parameter
output = []
param_set.post_delay = delay
# do1D enforces a simple relationship between measured parameters
# and set parameters. For anything more complicated this should be reimplemented from scratch
for parameter in param_meas:
meas.register_parameter(parameter, setpoints=(param_set,))
output.append([parameter, None])
with meas.run() as datasaver:
for set_point in np.linspace(start, stop, num_points):
param_set.set(set_point)
for i, parameter in enumerate(param_meas):
output[i][1] = parameter.get()
datasaver.add_result((param_set, set_point),
*output)
dataid = datasaver.run_id # convenient to have for plotting
return dataid
dataid = do1d(dac.ch1, 0, 1, 10, 0.01, dmm.v1, dmm.v2)
axes, cbaxes = plot_by_id(dataid)
def do2d(param_set1, start1, stop1, num_points1, delay1,
param_set2, start2, stop2, num_points2, delay2,
*param_meas):
# And then run an experiment
meas = Measurement()
meas.register_parameter(param_set1)
param_set1.post_delay = delay1
meas.register_parameter(param_set2)
    param_set2.post_delay = delay2
output = []
for parameter in param_meas:
meas.register_parameter(parameter, setpoints=(param_set1,param_set2))
output.append([parameter, None])
with meas.run() as datasaver:
for set_point1 in np.linspace(start1, stop1, num_points1):
param_set1.set(set_point1)
for set_point2 in np.linspace(start2, stop2, num_points2):
param_set2.set(set_point2)
for i, parameter in enumerate(param_meas):
output[i][1] = parameter.get()
datasaver.add_result((param_set1, set_point1),
(param_set2, set_point2),
*output)
dataid = datasaver.run_id # convenient to have for plotting
return dataid
dataid = do2d(dac.ch1, -1, 1, 100, 0.01,
dac.ch2, -1, 1, 100, 0.01,
dmm.v1, dmm.v2)
axes, cbaxes = plot_by_id(dataid)
```
# Cross-asset skewness
This notebook analyses a cross-asset cross-sectional skewness strategy. The strategy takes long positions in contracts with the most negative historical skewness and short positions in those with the most positive skewness.
```
%matplotlib inline
from datetime import datetime
import logging
import warnings
warnings.simplefilter(action='ignore', category=FutureWarning)
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.ticker as mticker
plt.style.use('bmh')
from vivace.backtest import signal
from vivace.backtest import processing
from vivace.backtest.contract import all_futures_baltas2019
from vivace.backtest.engine import BacktestEngine
from vivace.backtest.enums import Strategy
from vivace.backtest.stats import Performance
```
# Data
Various futures contracts across commodities, currencies, government bonds and equity indices are tested. Some contracts are missing in this data set due to data availability.
```
all_futures_baltas2019
all_futures_baltas2019.shape
```
# Performance
## Run backtest
For each asset class, a simple portfolio is constructed using the trailing 1-year returns of each futures contract. Unlike studies in equities, the most recent month is included in the formation period. Positions are rebalanced on a monthly basis.
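For intuition, here is a rough pandas-only sketch of such a signal, independent of the `vivace` implementation used below; it assumes `returns` is a DataFrame of daily futures returns indexed by date:
```
import pandas as pd

def skewness_signal(returns: pd.DataFrame, lookback: int = 252) -> pd.DataFrame:
    """Negative trailing skewness, held constant within each month."""
    skew = returns.rolling(lookback).skew()
    signal = -skew  # long the most negative skewness, short the most positive
    # Sample the signal at month-end and hold it until the next rebalance
    return signal.resample('M').last().reindex(returns.index, method='pad')
```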
```
engine_commodity = BacktestEngine(
strategy=Strategy.DELTA_ONE.value,
instrument=all_futures_baltas2019.query('asset_class == "commodity"').index,
signal=signal.XSSkewness(lookback=252,
post_process=processing.Pipeline([
processing.Negate(),
processing.AsFreq(freq='m', method='pad')
])),
log_level=logging.WARN,
)
engine_commodity.run()
commodity_portfolio_return = (engine_commodity.calculate_equity_curve(calculate_net=False)
.rename('Commodity skewness portfolio'))
engine_equity = BacktestEngine(
strategy=Strategy.DELTA_ONE.value,
instrument=all_futures_baltas2019.query('asset_class == "equity"').index,
signal=signal.XSSkewness(lookback=252,
post_process=processing.Pipeline([
processing.Negate(),
processing.AsFreq(freq='m', method='pad')
])),
log_level=logging.WARN,
)
engine_equity.run()
equity_portfolio_return = (engine_equity.calculate_equity_curve(calculate_net=False)
.rename('Equity skewness portfolio'))
engine_fixed_income = BacktestEngine(
strategy=Strategy.DELTA_ONE.value,
instrument=all_futures_baltas2019.query('asset_class == "fixed_income"').index,
signal=signal.XSSkewness(lookback=252,
post_process=processing.Pipeline([
processing.Negate(),
processing.AsFreq(freq='m', method='pad')
])),
log_level=logging.WARN,
)
engine_fixed_income.run()
fixed_income_portfolio_return = (engine_fixed_income.calculate_equity_curve(calculate_net=False)
.rename('Fixed income skewness portfolio'))
engine_currency = BacktestEngine(
strategy=Strategy.DELTA_ONE.value,
instrument=all_futures_baltas2019.query('asset_class == "currency"').index,
signal=signal.XSSkewness(lookback=252,
post_process=processing.Pipeline([
processing.Negate(),
processing.AsFreq(freq='m', method='pad')
])),
log_level=logging.WARN,
)
engine_currency.run()
currency_portfolio_return = (engine_currency.calculate_equity_curve(calculate_net=False)
.rename('Currency skewness portfolio'))
fig, ax = plt.subplots(2, 2, figsize=(14, 8), sharex=True)
commodity_portfolio_return.plot(ax=ax[0][0], logy=True)
equity_portfolio_return.plot(ax=ax[0][1], logy=True)
fixed_income_portfolio_return.plot(ax=ax[1][0], logy=True)
currency_portfolio_return.plot(ax=ax[1][1], logy=True)
ax[0][0].set_title('Commodity skewness portfolio')
ax[0][1].set_title('Equity skewness portfolio')
ax[1][0].set_title('Fixed income skewness portfolio')
ax[1][1].set_title('Currency skewness portfolio')
ax[0][0].set_ylabel('Cumulative returns');
ax[1][0].set_ylabel('Cumulative returns');
pd.concat((
commodity_portfolio_return.pipe(Performance).summary(),
equity_portfolio_return.pipe(Performance).summary(),
fixed_income_portfolio_return.pipe(Performance).summary(),
currency_portfolio_return.pipe(Performance).summary(),
), axis=1)
```
## Performance since 1990
In the original paper, performance since 1990 is reported. The result below confirms that all skewness-based portfolios exhibited positive performance over time.
Interestingly, the equity portfolio performed somewhat weakly in this backtest. This could be due to the slightly different data set.
```
fig, ax = plt.subplots(2, 2, figsize=(14, 8), sharex=True)
commodity_portfolio_return['1990':].plot(ax=ax[0][0], logy=True)
equity_portfolio_return['1990':].plot(ax=ax[0][1], logy=True)
fixed_income_portfolio_return['1990':].plot(ax=ax[1][0], logy=True)
currency_portfolio_return['1990':].plot(ax=ax[1][1], logy=True)
ax[0][0].set_title('Commodity skewness portfolio')
ax[0][1].set_title('Equity skewness portfolio')
ax[1][0].set_title('Fixed income skewness portfolio')
ax[1][1].set_title('Currency skewness portfolio')
ax[0][0].set_ylabel('Cumulative returns');
ax[1][0].set_ylabel('Cumulative returns');
```
## GSF
The authors define the global skewness factor (GSF) by combining the 4 asset classes with equal volatility weighting. Here, the 4 backtests are simply combined using each portfolio's ex-post realised volatility.
```
def get_leverage(equity_curve: pd.Series) -> float:
return 0.1 / (equity_curve.pct_change().std() * (252 ** 0.5))
gsf = pd.concat((
commodity_portfolio_return.pct_change() * get_leverage(commodity_portfolio_return),
equity_portfolio_return.pct_change() * get_leverage(equity_portfolio_return),
fixed_income_portfolio_return.pct_change() * get_leverage(fixed_income_portfolio_return),
currency_portfolio_return.pct_change() * get_leverage(currency_portfolio_return),
), axis=1).mean(axis=1)
gsf = gsf.fillna(0).add(1).cumprod().rename('GSF')
fig, ax = plt.subplots(1, 2, figsize=(14, 4))
gsf.plot(ax=ax[0], logy=True);
gsf['1990':].plot(ax=ax[1], logy=True);
ax[0].set_title('GSF portfolio')
ax[1].set_title('Since 1990')
ax[0].set_ylabel('Cumulative returns');
pd.concat((
gsf.pipe(Performance).summary(),
gsf['1990':].pipe(Performance).summary().add_suffix(' (since 1990)')
), axis=1)
```
## Post publication
```
publication_date = datetime(2019, 12, 16)
fig, ax = plt.subplots(1, 2, figsize=(14, 4))
gsf.plot(ax=ax[0], logy=True);
ax[0].set_title('GSF portfolio')
ax[0].set_ylabel('Cumulative returns');
ax[0].axvline(publication_date, lw=1, ls='--', color='black')
ax[0].text(publication_date, 0.6, 'Publication date ', ha='right')
gsf.loc[publication_date:].plot(ax=ax[1], logy=True);
ax[1].set_title('GSF portfolio (post publication)');
```
## Recent performance
```
fig, ax = plt.subplots(figsize=(8, 4.5))
gsf.tail(252 * 2).plot(ax=ax, logy=True);
ax.set_title('GSF portfolio')
ax.set_ylabel('Cumulative returns');
```
# Reference
- Baltas, N. and Salinas, G., 2019. Cross-Asset Skew. Available at SSRN.
```
print(f'Updated: {datetime.utcnow().strftime("%d-%b-%Y %H:%M")}')
```
# GFA Zero Calibration
GFA calibrations should normally be updated in the following sequence: zeros, flats, darks.
This notebook should be run using a DESI kernel, e.g. `DESI master`.
```
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import os
import sys
import json
import collections
from pathlib import Path
import scipy.interpolate
import scipy.stats
import fitsio
```
Install / upgrade the `desietcimg` package:
```
try:
import desietcimg
print('desietcimg already installed')
except ImportError:
print('Installing desietcimg...')
!{sys.executable} -m pip install --user git+https://github.com/dkirkby/desietcimg
upgrade = False
if upgrade:
print('Upgrading desietcimg...')
!{sys.executable} -m pip install --upgrade --user git+https://github.com/dkirkby/desietcimg
import desietcimg.util
import desietcimg.plot
import desietcimg.gfa
```
NERSC configuration:
```
assert os.getenv('NERSC_HOST', False)
ROOT = Path('/project/projectdirs/desi/spectro/data/')
assert ROOT.exists()
```
Initial GFA calibration:
```
CALIB = Path('/global/cscratch1/sd/dkirkby/GFA_calib.fits')
assert CALIB.exists()
```
Directory for saving plots:
```
plotdir = Path('zerocal')
plotdir.mkdir(exist_ok=True)
```
## Process Zero Sequences
Use a sequence of 200 zeros from [20191027](http://desi-www.kpno.noao.edu:8090/nightsum/nightsum-2019-10-27/nightsum.html).
**Since this data has not yet been staged to its final location, we fetch it from the `lost+found` directory** (by overriding the definition of `ROOT` above):
```
ROOT = Path('/global/project/projectdirs/desi/spectro/staging/lost+found/')
files = desietcimg.util.find_files(ROOT / '20191027' / '{N}/gfa-{N}.fits.fz', min=21968, max=22167)
```
Build master zero images:
```
def build_master_zero():
master_zero = {}
GFA = desietcimg.gfa.GFACamera(calib_name=str(CALIB))
for k, gfa in enumerate(GFA.gfa_names):
raw, meta = desietcimg.util.load_raw(files, 'EXPTIME', hdu=gfa)
assert np.all(np.array(meta['EXPTIME']) == 0)
GFA.setraw(raw, name=gfa, subtract_master_zero=False, apply_gain=False)
master_zero[gfa] = np.median(GFA.data, axis=0)
return master_zero
%time master_zero = build_master_zero()
```
Estimate the readnoise in ADU for each amplifier, using the new master zero:
```
desietcimg.gfa.GFACamera.master_zero = master_zero
def get_readnoise(hrange=70, hbins=141, nsig=6, save=None):
GFA = desietcimg.gfa.GFACamera(calib_name=str(CALIB))
fig, axes = plt.subplots(5, 2, sharex=True, figsize=(18, 11))
bins = np.linspace(-hrange, +hrange, hbins)
noise = {}
for k, gfa in enumerate(GFA.gfa_names):
GFA.name = gfa
ax = axes[k // 2, k % 2]
raw, meta = desietcimg.util.load_raw(files, 'EXPTIME', hdu=gfa)
assert np.all(np.array(meta['EXPTIME']) == 0)
GFA.setraw(raw, name=gfa, subtract_master_zero=True, apply_gain=False)
noise[gfa] = {}
for j, amp in enumerate(GFA.amp_names):
# Extract data for this quadrant.
qdata = GFA.data[GFA.quad[amp]]
X = qdata.reshape(-1)
# Clip for std dev calculation.
Xclipped, lo, hi = scipy.stats.sigmaclip(X, low=nsig, high=nsig)
noise[gfa][amp] = np.std(Xclipped)
label = f'{amp} {noise[gfa][amp]:.2f}'
c = plt.rcParams['axes.prop_cycle'].by_key()['color'][j]
ax.hist(X, bins=bins, label=label, color=c, histtype='step')
for x in lo, hi:
ax.axvline(x, ls='-', c=c, alpha=0.5)
ax.set_yscale('log')
ax.set_yticks([])
if k in (8, 9):
ax.set_xlabel('Zero Residual [ADU]')
ax.set_xlim(bins[0], bins[-1])
ax.legend(ncol=2, title=f'{gfa}', loc='upper left')
plt.subplots_adjust(left=0.03, right=0.99, bottom=0.04, top=0.99, wspace=0.07, hspace=0.04)
if save:
plt.savefig(save)
return noise
%time readnoise = get_readnoise(save=str(plotdir / 'GFA_readnoise.png'))
repr(readnoise)
```
## Save Updated Calibrations
```
desietcimg.gfa.save_calib_data('GFA_calib_zero.fits', master_zero=master_zero, readnoise=readnoise)
```
Use this for subsequent flat and dark calibrations:
```
!cp GFA_calib_zero.fits {CALIB}
```
## Comparisons
Compare with the read noise values from the lab studies and Aaron Meisner's [independent analysis](https://desi.lbl.gov/trac/wiki/Commissioning/Planning/gfachar/bias_readnoise_20191027):
```
ameisner_rdnoise = {
'GUIDE0': { 'E': 5.56, 'F': 5.46, 'G': 5.12, 'H': 5.24},
'FOCUS1': { 'E': 5.21, 'F': 5.11, 'G': 4.88, 'H': 4.90},
'GUIDE2': { 'E': 7.11, 'F': 6.23, 'G': 5.04, 'H': 5.29},
'GUIDE3': { 'E': 5.28, 'F': 5.16, 'G': 4.89, 'H': 5.00},
'FOCUS4': { 'E': 5.23, 'F': 5.12, 'G': 5.01, 'H': 5.11},
'GUIDE5': { 'E': 5.11, 'F': 5.00, 'G': 4.80, 'H': 4.86},
'FOCUS6': { 'E': 5.12, 'F': 5.09, 'G': 4.85, 'H': 5.07},
'GUIDE7': { 'E': 5.00, 'F': 4.96, 'G': 4.63, 'H': 4.79},
'GUIDE8': { 'E': 6.51, 'F': 5.58, 'G': 5.12, 'H': 5.47},
'FOCUS9': { 'E': 6.85, 'F': 5.53, 'G': 5.07, 'H': 5.57},
}
def compare_rdnoise(label='20191027', save=None):
# Use the new calibrations written above.
desietcimg.gfa.GFACamera.calib_data = None
GFA = desietcimg.gfa.GFACamera(calib_name='GFA_calib_zero.fits')
markers = '+xo.'
fig, ax = plt.subplots(1, 2, figsize=(12, 5))
for k, gfa in enumerate(GFA.gfa_names):
color = plt.rcParams['axes.prop_cycle'].by_key()['color'][k]
ax[1].scatter([], [], marker='o', c=color, label=gfa)
for j, amp in enumerate(desietcimg.gfa.GFACamera.amp_names):
marker = markers[j]
measured = GFA.calib_data[gfa][amp]['RDNOISE']
# Lab results are given in elec so use lab gains to convert back to ADU
lab = GFA.lab_data[gfa][amp]['RDNOISE'] / GFA.lab_data[gfa][amp]['GAIN']
ax[0].scatter(lab, measured, marker=marker, c=color)
ax[1].scatter(ameisner_rdnoise[gfa][amp], measured, marker=marker, c=color)
for j, amp in enumerate(GFA.amp_names):
ax[1].scatter([], [], marker=markers[j], c='k', label=amp)
xylim = (4.3, 5.5)
for axis in ax:
axis.plot(xylim, xylim, 'k-', zorder=-10, alpha=0.25)
axis.set_ylabel(f'{label} Read Noise [ADU]')
axis.set_xlim(*xylim)
axis.set_ylim(*xylim)
ax[1].legend(ncol=3)
ax[0].set_xlabel('Lab Data Read Noise [ADU]')
ax[1].set_xlabel('ameisner Read Noise [ADU]')
plt.tight_layout()
if save:
plt.savefig(save)
compare_rdnoise(save=str(plotdir / 'rdnoise_compare.png'))
```
<table>
<tr>
<td ><h1><strong>NI SystemLink Analysis Automation</strong></h1></td>
</tr>
</table>
This notebook is an example of how you can analyze your data with NI SystemLink Analysis Automation. It forms the core of the analysis procedure, which includes the notebook, the query, and the execution parameters (parallel or comparative). The [procedure is uploaded to Analysis Automation](https://www.ni.com/documentation/en/systemlink/latest/analysis/creating-anp-with-jupyter/). The output is a report in the form of PDF documents or HTML pages.
<br>
<hr>
## Prerequisites
Before you run this example, you need to [create a DataFinder search query](https://www.ni.com/documentation/en/systemlink/latest/datanavigation/finding-data-with-advanced-search/) in Data Navigation to find the example files (e.g. 'TR_M17_QT_42-1.tdms'). Save this query on the server.
<hr>
## Summary
This example exercises the SystemLink TDMReader API to access bulk data (see `data_api`) and/or descriptive data (see `metadata_api`). When the notebook executes, Analysis Automation provides data links which the API uses to access content.
It also shows how to select channels from two channel groups and display the data in two graphs.
The channel values from each channel group populate arrays, which you can use to further analyze and visualize your data.
Furthermore, the example uses two procedure parameters that write a comment to the first graph and select a channel to display in the second graph (refer to __Plot Graph__ below).
<hr>
## Imports
This example uses the `TDMReader` API to work with the bulk data and metadata of the given files. `Matplotlib` is used for plotting the graphs. The `scrapbook` package is used to record and display the results in the analysis procedure results list.
```
import systemlink.clients.nitdmreader as tdmreader
metadata_api = tdmreader.MetadataApi()
data_api = tdmreader.DataApi()
import matplotlib.pyplot as plt
import scrapbook as sb
def get_property(element, property_name):
"""Gets a property of the given element.
The element can be a file, channel group, or channel.
Args:
element: Element to get the property from.
property_name: Name of the property to get.
Returns:
The according property of the element or ``None`` if the property doesn't exist.
"""
return next((e.value for e in element.properties.properties if e.name == property_name), None)
```
## Define Notebook Parameters
a) In a code cell (*the so-called __parameters cell__*), define the parameters and fill in the values you need, as in the example below.
**Defined parameters:**
- `comment_group_1`: Writes a comment into the box of the first group.<br>
(Default value = `Checked`)
- `shown_channel_index`: Any valid channel index of the second group. This channel is plotted in the second graph. <br>
(Default value = `2`)
Your code may look like the following:
```
comment_group_1 = "Checked"
shown_channel_index = 2
```
b) Select this code cell (*the __parameters cell__*) and open the __Property Inspector__ panel in the right sidebar to add the parameters and their default values to the __Cell Metadata__ code block. For example, your cell metadata may look like the following:
```json
{
"papermill": {
"parameters": {
"comment_group_1": "Checked",
"shown_channel_index": 2
}
},
"tags": [
"parameters"
]
}
```
You can use the variables of the __parameters__ cell content in all code cells below.
## Retrieve Metadata with a Data Link
A data link is the input for each __Analysis Automation procedure__ that uses a query to collect specific data items. A `data_link` contains a list of one or more elements that point to a list of files, channel groups, or channels (depending on the query result type).
This example shows how the Metadata API accesses the `file_info` structure from the file, through the `groups`, and down to the `channels` level.
This example calculates the absolute minimum and absolute maximum value of all channels in each group and displays these values in the report.
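If you want to experiment with the following cells outside of Analysis Automation, you can stub the injected `ni_analysis_automation` variable with the structure the code below expects. This is a minimal, hypothetical sketch; the file ID is a placeholder, and on the server the variable is provided automatically:
```
# Hypothetical stub for local testing only. When the procedure runs on the
# SystemLink server, Analysis Automation injects this variable automatically.
ni_analysis_automation = {
    "data_links": [
        {"fileId": "<FILE_ID>"}  # placeholder, not a real file ID
    ]
}
```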
```
data_links = ni_analysis_automation["data_links"]
file_ids = [d["fileId"] for d in data_links]
file_infos = await metadata_api.get_multiple_file_info(tdmreader.FileList(file_ids))
file_info = file_infos[0]
test_file_name = get_property(file_info, "name")
program_name = get_property(file_info, "Test~Procedure")
group_names = []
channels = []
formatted_properties = []
for group in file_info.groups:
group_names.append(group.name)
channels.append(group.channels)
max_values_of_group = []
min_values_of_group = []
mean_values_of_group = []
for channel in group.channels:
minimum = float(get_property(channel, "minimum") or "NaN")
maximum = float(get_property(channel, "maximum") or "NaN")
mean_values_of_group.append((minimum + maximum) / 2)
max_values_of_group.append(maximum)
min_values_of_group.append(minimum)
# Calculate statistical values from metadata
abs_max = max(max_values_of_group)
abs_min = min(min_values_of_group)
abs_mean = sum(mean_values_of_group) / float(len(mean_values_of_group))
formatted_properties.append(f"Absolute Maximum: {abs_max:.3f} °C"+
f",Absolute Minimum: {abs_min:.3f} °C"+
f",Mean Value: {abs_mean:.3f} °C")
# Populate the info box of the plot with the notebook parameters
formatted_properties[1] += f",Parameter: {comment_group_1}"
formatted_properties[0] += f",Channel #: {shown_channel_index}"
```
## Retrieve Bulk Data with a Data Link
Use the TDMReader API to work with bulk data. There are multiple ways for retrieving the data. The access path used in this example shows you how to loop over all groups and over all channels within the groups. The resulting channel specifiers (`chn_specs`) are used in the next step to `query` the bulk data and retrieve all channel `values` from the queried data.
```
bulk_data = []
file_id = data_links[0]['fileId']
for group in file_info.groups:
chn_specs = []
for channel in group.channels:
channel_specifier = tdmreader.OneChannelSpecifier(
file_id=file_id,
group_name=group.name,
channel_name=channel.name)
chn_specs.append(channel_specifier)
xy_chns = tdmreader.ChannelSpecificationsXyChannels(y_channels=chn_specs)
channel_specs = tdmreader.ChannelSpecifications(xy_channels=[xy_chns])
query = tdmreader.QueryDataSpecifier(channel_specs)
data = await data_api.query_data(query)
# get numeric y-data
y_channels = data.data[0].y
values = list(map(lambda c: c.numeric_data, y_channels))
bulk_data.append(values)
```
## Plot Graph
The next two cells plot a graph with two areas and two sub plots, using the Python `matplotlib.pyplot` module as `plt`.
```
# Helper method and constant for plotting data
curr_fontsize = 18
axis_lable_fontsize = curr_fontsize - 5
def plot_area(subplot, area_bulk_data, area_meta_data, enable_channel_selector, area_properties):
""" Plot a sub print area of a figure
:param subplot: Object of the plot print area
:param area_bulk_data: Channel bulk data to print
:param area_meta_data: Channel metadata (name, properties, ...)
:param enable_channel_selector: True, when property shown_channel_index should be used
:param area_properties: String with comma-separated parts as content for the info box area
e.g.: "Absolute Maximum: 12.6 °C,Absolute Minimum: -22.3 °C"
"""
# Place a text box below the legend
subplot.text(1.05, 0.0, area_properties.replace(",", "\n"),
transform=subplot.transAxes, ha="left", va="bottom")
subplot.grid(True)
subplot.set_xlabel('Time [s]', fontsize=axis_lable_fontsize)
unit = get_property(area_meta_data[0], "unit_string")
subplot.set_ylabel('Amplitudes ['+unit+']', fontsize=axis_lable_fontsize)
i = 0
for channel in area_meta_data:
if (enable_channel_selector):
if (i == (shown_channel_index - 1)):
subplot.plot(area_bulk_data[i], label=channel.name) # Lable => name of the curve = channel
else:
subplot.plot(area_bulk_data[i], label=channel.name) # Lable => name of the curve = channel
i += 1
# Place a legend to the right of this subplot.
subplot.legend(bbox_to_anchor=(1.05, 1), loc='upper left', borderaxespad=0., fontsize=axis_lable_fontsize)
# Create plot and print data
fig, (ax1, ax2) = plt.subplots(nrows=2, ncols=1, figsize=(15, 10))
fig.suptitle ('Temperature Monitoring File: '+ test_file_name + ' Test program: ' + program_name, fontsize=curr_fontsize, color='blue')
ax1.set_title(group_names[1], fontsize=curr_fontsize)
plot_area(ax1, bulk_data[1], channels[1], False, formatted_properties[1])
ax2.set_title(group_names[0], fontsize=curr_fontsize)
plot_area(ax2, bulk_data[0], channels[0], True, formatted_properties[0])
plt.tight_layout()
plt.show()
```
## Add Result Summary
Each Scrap recorded with `sb.glue()` is displayed for each procedure on the __History__ tab in Analysis Automation.
```
sb.glue("File", test_file_name)
sb.glue("Test", program_name)
sb.glue("Comment", comment_group_1)
sb.glue("Displayed Channel #", shown_channel_index)
```
<a href="https://colab.research.google.com/github/Serbeld/ArtificialVisionForQualityControl/blob/master/Copia_de_Yolo_Step_by_Step.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
**Outline of Steps**
+ Initialization
+ Download COCO detection data from http://cocodataset.org/#download
+ http://images.cocodataset.org/zips/train2014.zip <= train images
+ http://images.cocodataset.org/zips/val2014.zip <= validation images
+ http://images.cocodataset.org/annotations/annotations_trainval2014.zip <= train and validation annotations
+ Run this script to convert annotations in COCO format to VOC format
+ https://gist.github.com/chicham/6ed3842d0d2014987186#file-coco2pascal-py
+ Download pre-trained weights from https://pjreddie.com/darknet/yolo/
+ https://pjreddie.com/media/files/yolo.weights
+ Specify the directory of train annotations (train_annot_folder) and train images (train_image_folder)
+ Specify the directory of validation annotations (valid_annot_folder) and validation images (valid_image_folder)
+ Specify the path of pre-trained weights by setting variable *wt_path*
+ Construct equivalent network in Keras
+ Network arch from https://github.com/pjreddie/darknet/blob/master/cfg/yolo-voc.cfg
+ Load the pretrained weights
+ Perform training
+ Perform detection on an image with newly trained weights
+ Perform detection on a video with newly trained weights
# Initialization
```
!pip install h5py
import h5py
from google.colab import drive,files
drive.mount('/content/drive')
import sys
sys.path.append('/content/drive/My Drive/keras-yolo2/')
# NOTE: the custom loss below relies on TF 1.x APIs (tf.to_float, tf.Print, tf.space_to_depth, tf.assign_add)
!pip install tensorflow-gpu==2.0.0-alpha0
from keras.models import Sequential, Model
from keras.layers import Reshape, Activation, Conv2D, Input, MaxPooling2D, BatchNormalization, Flatten, Dense, Lambda
from keras.layers.advanced_activations import LeakyReLU
from keras.callbacks import EarlyStopping, ModelCheckpoint, TensorBoard
from keras.optimizers import SGD, Adam, RMSprop
from keras.layers.merge import concatenate
import matplotlib.pyplot as plt
import keras.backend as K
import tensorflow as tf
import imgaug as ia
from tqdm import tqdm
from imgaug import augmenters as iaa
import numpy as np
import pickle
import os, cv2
from preprocessing import parse_annotation, BatchGenerator
from utils import WeightReader, decode_netout, draw_boxes
os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"
os.environ["CUDA_VISIBLE_DEVICES"] = ""
LABELS = ['COLOR HDPE', 'PET', 'WHITE HDPE']
IMAGE_H, IMAGE_W = 416, 416
GRID_H, GRID_W = 13 , 13
BOX = 5
CLASS = len(LABELS)
CLASS_WEIGHTS = np.ones(CLASS, dtype='float32')
OBJ_THRESHOLD = 0.2#0.5
NMS_THRESHOLD = 0.2#0.45
ANCHORS = [0.96,4.22, 1.52,4.79, 2.30,4.30, 2.76,2.35, 3.62,6.03]
NO_OBJECT_SCALE = 1.0
OBJECT_SCALE = 5.0
COORD_SCALE = 1.0
CLASS_SCALE = 1.0
BATCH_SIZE = 16
WARM_UP_BATCHES = 0
TRUE_BOX_BUFFER = 50
wt_path = '/content/drive/My Drive/keras-yolo2/yolov2.weights'
train_image_folder = '/content/drive/My Drive/dataset/images/'
train_annot_folder = '/content/drive/My Drive/dataset/annotations/'
valid_image_folder = '/content/drive/My Drive/dataset/images_val/'
valid_annot_folder = '/content/drive/My Drive/dataset/annotattionsVAL/'
#import os
#print(os.listdir('/content/drive/My Drive/dataset/images'))
train_imgs, seen_train_labels = parse_annotation(train_annot_folder, train_image_folder, labels=LABELS)
val_imgs, seen_val_labels = parse_annotation(valid_annot_folder, valid_image_folder, labels=LABELS)
# NOTE: BatchGenerator needs generator_config and normalize, which are only defined in the
# "Parse the annotations..." section below; run that cell (or the rebuild step after it) first.
train_batch = BatchGenerator(train_imgs, generator_config, norm=normalize)
valid_batch = BatchGenerator(val_imgs, generator_config, norm=normalize)
```
**Sanity check: show a few images with ground truth boxes overlaid**
```
# NOTE: this cell also depends on generator_config from the "Parse the annotations..." section below.
batches = BatchGenerator(train_imgs, generator_config)
image = batches[0][0][0][0]
image = cv2.resize(image,(680,340))
plt.imshow(image.astype('uint8'))
```
# Construct the network
```
# the function to implement the reorganization (space-to-depth) layer (thanks to github.com/allanzelener/YAD2K)
def space_to_depth_x2(x):
return tf.space_to_depth(x, block_size=2)
input_image = Input(shape=(IMAGE_H, IMAGE_W, 3))
true_boxes = Input(shape=(1, 1, 1, TRUE_BOX_BUFFER , 4))
# Layer 1
x = Conv2D(32, (3,3), strides=(1,1), padding='same', name='conv_1', use_bias=False)(input_image)
x = BatchNormalization(name='norm_1')(x)
x = LeakyReLU(alpha=0.1)(x)
x = MaxPooling2D(pool_size=(2, 2))(x)
# Layer 2
x = Conv2D(64, (3,3), strides=(1,1), padding='same', name='conv_2', use_bias=False)(x)
x = BatchNormalization(name='norm_2')(x)
x = LeakyReLU(alpha=0.1)(x)
x = MaxPooling2D(pool_size=(2, 2))(x)
# Layer 3
x = Conv2D(128, (3,3), strides=(1,1), padding='same', name='conv_3', use_bias=False)(x)
x = BatchNormalization(name='norm_3')(x)
x = LeakyReLU(alpha=0.1)(x)
# Layer 4
x = Conv2D(64, (1,1), strides=(1,1), padding='same', name='conv_4', use_bias=False)(x)
x = BatchNormalization(name='norm_4')(x)
x = LeakyReLU(alpha=0.1)(x)
# Layer 5
x = Conv2D(128, (3,3), strides=(1,1), padding='same', name='conv_5', use_bias=False)(x)
x = BatchNormalization(name='norm_5')(x)
x = LeakyReLU(alpha=0.1)(x)
x = MaxPooling2D(pool_size=(2, 2))(x)
# Layer 6
x = Conv2D(256, (3,3), strides=(1,1), padding='same', name='conv_6', use_bias=False)(x)
x = BatchNormalization(name='norm_6')(x)
x = LeakyReLU(alpha=0.1)(x)
# Layer 7
x = Conv2D(128, (1,1), strides=(1,1), padding='same', name='conv_7', use_bias=False)(x)
x = BatchNormalization(name='norm_7')(x)
x = LeakyReLU(alpha=0.1)(x)
# Layer 8
x = Conv2D(256, (3,3), strides=(1,1), padding='same', name='conv_8', use_bias=False)(x)
x = BatchNormalization(name='norm_8')(x)
x = LeakyReLU(alpha=0.1)(x)
x = MaxPooling2D(pool_size=(2, 2))(x)
# Layer 9
x = Conv2D(512, (3,3), strides=(1,1), padding='same', name='conv_9', use_bias=False)(x)
x = BatchNormalization(name='norm_9')(x)
x = LeakyReLU(alpha=0.1)(x)
# Layer 10
x = Conv2D(256, (1,1), strides=(1,1), padding='same', name='conv_10', use_bias=False)(x)
x = BatchNormalization(name='norm_10')(x)
x = LeakyReLU(alpha=0.1)(x)
# Layer 11
x = Conv2D(512, (3,3), strides=(1,1), padding='same', name='conv_11', use_bias=False)(x)
x = BatchNormalization(name='norm_11')(x)
x = LeakyReLU(alpha=0.1)(x)
# Layer 12
x = Conv2D(256, (1,1), strides=(1,1), padding='same', name='conv_12', use_bias=False)(x)
x = BatchNormalization(name='norm_12')(x)
x = LeakyReLU(alpha=0.1)(x)
# Layer 13
x = Conv2D(512, (3,3), strides=(1,1), padding='same', name='conv_13', use_bias=False)(x)
x = BatchNormalization(name='norm_13')(x)
x = LeakyReLU(alpha=0.1)(x)
skip_connection = x
x = MaxPooling2D(pool_size=(2, 2))(x)
# Layer 14
x = Conv2D(1024, (3,3), strides=(1,1), padding='same', name='conv_14', use_bias=False)(x)
x = BatchNormalization(name='norm_14')(x)
x = LeakyReLU(alpha=0.1)(x)
# Layer 15
x = Conv2D(512, (1,1), strides=(1,1), padding='same', name='conv_15', use_bias=False)(x)
x = BatchNormalization(name='norm_15')(x)
x = LeakyReLU(alpha=0.1)(x)
# Layer 16
x = Conv2D(1024, (3,3), strides=(1,1), padding='same', name='conv_16', use_bias=False)(x)
x = BatchNormalization(name='norm_16')(x)
x = LeakyReLU(alpha=0.1)(x)
# Layer 17
x = Conv2D(512, (1,1), strides=(1,1), padding='same', name='conv_17', use_bias=False)(x)
x = BatchNormalization(name='norm_17')(x)
x = LeakyReLU(alpha=0.1)(x)
# Layer 18
x = Conv2D(1024, (3,3), strides=(1,1), padding='same', name='conv_18', use_bias=False)(x)
x = BatchNormalization(name='norm_18')(x)
x = LeakyReLU(alpha=0.1)(x)
# Layer 19
x = Conv2D(1024, (3,3), strides=(1,1), padding='same', name='conv_19', use_bias=False)(x)
x = BatchNormalization(name='norm_19')(x)
x = LeakyReLU(alpha=0.1)(x)
# Layer 20
x = Conv2D(1024, (3,3), strides=(1,1), padding='same', name='conv_20', use_bias=False)(x)
x = BatchNormalization(name='norm_20')(x)
x = LeakyReLU(alpha=0.1)(x)
# Layer 21
skip_connection = Conv2D(64, (1,1), strides=(1,1), padding='same', name='conv_21', use_bias=False)(skip_connection)
skip_connection = BatchNormalization(name='norm_21')(skip_connection)
skip_connection = LeakyReLU(alpha=0.1)(skip_connection)
skip_connection = Lambda(space_to_depth_x2)(skip_connection)
x = concatenate([skip_connection, x])
# Layer 22
x = Conv2D(1024, (3,3), strides=(1,1), padding='same', name='conv_22', use_bias=False)(x)
x = BatchNormalization(name='norm_22')(x)
x = LeakyReLU(alpha=0.1)(x)
# Layer 23
x = Conv2D(BOX * (4 + 1 + CLASS), (1,1), strides=(1,1), padding='same', name='conv_23')(x)
output = Reshape((GRID_H, GRID_W, BOX, 4 + 1 + CLASS))(x)
# small hack to allow true_boxes to be registered when Keras builds the model
# for more information: https://github.com/fchollet/keras/issues/2790
output = Lambda(lambda args: args[0])([output, true_boxes])
model = Model([input_image, true_boxes], output)
model.summary()
```
# Load pretrained weights
**Load the weights originally provided by YOLO**
```
weight_reader = WeightReader(wt_path)
weight_reader.reset()
nb_conv = 23
for i in range(1, nb_conv+1):
conv_layer = model.get_layer('conv_' + str(i))
if i < nb_conv:
norm_layer = model.get_layer('norm_' + str(i))
size = np.prod(norm_layer.get_weights()[0].shape)
beta = weight_reader.read_bytes(size)
gamma = weight_reader.read_bytes(size)
mean = weight_reader.read_bytes(size)
var = weight_reader.read_bytes(size)
weights = norm_layer.set_weights([gamma, beta, mean, var])
if len(conv_layer.get_weights()) > 1:
bias = weight_reader.read_bytes(np.prod(conv_layer.get_weights()[1].shape))
kernel = weight_reader.read_bytes(np.prod(conv_layer.get_weights()[0].shape))
kernel = kernel.reshape(list(reversed(conv_layer.get_weights()[0].shape)))
kernel = kernel.transpose([2,3,1,0])
conv_layer.set_weights([kernel, bias])
else:
kernel = weight_reader.read_bytes(np.prod(conv_layer.get_weights()[0].shape))
kernel = kernel.reshape(list(reversed(conv_layer.get_weights()[0].shape)))
kernel = kernel.transpose([2,3,1,0])
conv_layer.set_weights([kernel])
```
**Randomize weights of the last layer**
```
layer = model.layers[-4] # the last convolutional layer
weights = layer.get_weights()
new_kernel = np.random.normal(size=weights[0].shape)/(GRID_H*GRID_W)
new_bias = np.random.normal(size=weights[1].shape)/(GRID_H*GRID_W)
layer.set_weights([new_kernel, new_bias])
```
# Perform training
**Loss function**
$$\begin{multline}
\lambda_\textbf{coord}
\sum_{i = 0}^{S^2}
\sum_{j = 0}^{B}
L_{ij}^{\text{obj}}
\left[
\left(
x_i - \hat{x}_i
\right)^2 +
\left(
y_i - \hat{y}_i
\right)^2
\right]
\\
+ \lambda_\textbf{coord}
\sum_{i = 0}^{S^2}
\sum_{j = 0}^{B}
L_{ij}^{\text{obj}}
\left[
\left(
\sqrt{w_i} - \sqrt{\hat{w}_i}
\right)^2 +
\left(
\sqrt{h_i} - \sqrt{\hat{h}_i}
\right)^2
\right]
\\
+ \sum_{i = 0}^{S^2}
\sum_{j = 0}^{B}
L_{ij}^{\text{obj}}
\left(
C_i - \hat{C}_i
\right)^2
\\
+ \lambda_\textrm{noobj}
\sum_{i = 0}^{S^2}
\sum_{j = 0}^{B}
L_{ij}^{\text{noobj}}
\left(
C_i - \hat{C}_i
\right)^2
\\
+ \sum_{i = 0}^{S^2}
L_i^{\text{obj}}
\sum_{c \in \textrm{classes}}
\left(
p_i(c) - \hat{p}_i(c)
\right)^2
\end{multline}$$
```
def custom_loss(y_true, y_pred):
mask_shape = tf.shape(y_true)[:4]
cell_x = tf.to_float(tf.reshape(tf.tile(tf.range(GRID_W), [GRID_H]), (1, GRID_H, GRID_W, 1, 1)))
cell_y = tf.transpose(cell_x, (0,2,1,3,4))
cell_grid = tf.tile(tf.concat([cell_x,cell_y], -1), [BATCH_SIZE, 1, 1, 5, 1])
coord_mask = tf.zeros(mask_shape)
conf_mask = tf.zeros(mask_shape)
class_mask = tf.zeros(mask_shape)
seen = tf.Variable(0.)
total_recall = tf.Variable(0.)
"""
Adjust prediction
"""
### adjust x and y
pred_box_xy = tf.sigmoid(y_pred[..., :2]) + cell_grid
### adjust w and h
pred_box_wh = tf.exp(y_pred[..., 2:4]) * np.reshape(ANCHORS, [1,1,1,BOX,2])
### adjust confidence
pred_box_conf = tf.sigmoid(y_pred[..., 4])
### adjust class probabilities
pred_box_class = y_pred[..., 5:]
"""
Adjust ground truth
"""
### adjust x and y
true_box_xy = y_true[..., 0:2] # relative position to the containing cell
### adjust w and h
true_box_wh = y_true[..., 2:4] # number of cells across, horizontally and vertically
### adjust confidence
true_wh_half = true_box_wh / 2.
true_mins = true_box_xy - true_wh_half
true_maxes = true_box_xy + true_wh_half
pred_wh_half = pred_box_wh / 2.
pred_mins = pred_box_xy - pred_wh_half
pred_maxes = pred_box_xy + pred_wh_half
intersect_mins = tf.maximum(pred_mins, true_mins)
intersect_maxes = tf.minimum(pred_maxes, true_maxes)
intersect_wh = tf.maximum(intersect_maxes - intersect_mins, 0.)
intersect_areas = intersect_wh[..., 0] * intersect_wh[..., 1]
true_areas = true_box_wh[..., 0] * true_box_wh[..., 1]
pred_areas = pred_box_wh[..., 0] * pred_box_wh[..., 1]
union_areas = pred_areas + true_areas - intersect_areas
iou_scores = tf.truediv(intersect_areas, union_areas)
true_box_conf = iou_scores * y_true[..., 4]
### adjust class probabilities
true_box_class = tf.argmax(y_true[..., 5:], -1)
"""
Determine the masks
"""
### coordinate mask: simply the position of the ground truth boxes (the predictors)
coord_mask = tf.expand_dims(y_true[..., 4], axis=-1) * COORD_SCALE
### confidence mask: penalize predictors + penalize boxes with low IOU
# penalize the confidence of the boxes, which have IOU with some ground truth box < 0.6
true_xy = true_boxes[..., 0:2]
true_wh = true_boxes[..., 2:4]
true_wh_half = true_wh / 2.
true_mins = true_xy - true_wh_half
true_maxes = true_xy + true_wh_half
pred_xy = tf.expand_dims(pred_box_xy, 4)
pred_wh = tf.expand_dims(pred_box_wh, 4)
pred_wh_half = pred_wh / 2.
pred_mins = pred_xy - pred_wh_half
pred_maxes = pred_xy + pred_wh_half
intersect_mins = tf.maximum(pred_mins, true_mins)
intersect_maxes = tf.minimum(pred_maxes, true_maxes)
intersect_wh = tf.maximum(intersect_maxes - intersect_mins, 0.)
intersect_areas = intersect_wh[..., 0] * intersect_wh[..., 1]
true_areas = true_wh[..., 0] * true_wh[..., 1]
pred_areas = pred_wh[..., 0] * pred_wh[..., 1]
union_areas = pred_areas + true_areas - intersect_areas
iou_scores = tf.truediv(intersect_areas, union_areas)
best_ious = tf.reduce_max(iou_scores, axis=4)
conf_mask = conf_mask + tf.to_float(best_ious < 0.6) * (1 - y_true[..., 4]) * NO_OBJECT_SCALE
# penalize the confidence of the boxes, which are responsible for the corresponding ground truth box
conf_mask = conf_mask + y_true[..., 4] * OBJECT_SCALE
### class mask: simply the position of the ground truth boxes (the predictors)
class_mask = y_true[..., 4] * tf.gather(CLASS_WEIGHTS, true_box_class) * CLASS_SCALE
"""
Warm-up training
"""
no_boxes_mask = tf.to_float(coord_mask < COORD_SCALE/2.)
seen = tf.assign_add(seen, 1.)
true_box_xy, true_box_wh, coord_mask = tf.cond(tf.less(seen, WARM_UP_BATCHES),
lambda: [true_box_xy + (0.5 + cell_grid) * no_boxes_mask,
true_box_wh + tf.ones_like(true_box_wh) * np.reshape(ANCHORS, [1,1,1,BOX,2]) * no_boxes_mask,
tf.ones_like(coord_mask)],
lambda: [true_box_xy,
true_box_wh,
coord_mask])
"""
Finalize the loss
"""
nb_coord_box = tf.reduce_sum(tf.to_float(coord_mask > 0.0))
nb_conf_box = tf.reduce_sum(tf.to_float(conf_mask > 0.0))
nb_class_box = tf.reduce_sum(tf.to_float(class_mask > 0.0))
loss_xy = tf.reduce_sum(tf.square(true_box_xy-pred_box_xy) * coord_mask) / (nb_coord_box + 1e-6) / 2.
loss_wh = tf.reduce_sum(tf.square(true_box_wh-pred_box_wh) * coord_mask) / (nb_coord_box + 1e-6) / 2.
loss_conf = tf.reduce_sum(tf.square(true_box_conf-pred_box_conf) * conf_mask) / (nb_conf_box + 1e-6) / 2.
loss_class = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=true_box_class, logits=pred_box_class)
loss_class = tf.reduce_sum(loss_class * class_mask) / (nb_class_box + 1e-6)
loss = loss_xy + loss_wh + loss_conf + loss_class
nb_true_box = tf.reduce_sum(y_true[..., 4])
nb_pred_box = tf.reduce_sum(tf.to_float(true_box_conf > 0.5) * tf.to_float(pred_box_conf > 0.3))
"""
Debugging code
"""
current_recall = nb_pred_box/(nb_true_box + 1e-6)
total_recall = tf.assign_add(total_recall, current_recall)
#loss = tf.Print(loss, [tf.zeros((1))], message='Dummy Line \t', summarize=1000)
#loss = tf.Print(loss, [loss_xy], message='Loss XY \t', summarize=1000)
#loss = tf.Print(loss, [loss_wh], message='Loss WH \t', summarize=1000)
#loss = tf.Print(loss, [loss_conf], message='Loss Conf \t', summarize=1000)
#loss = tf.Print(loss, [loss_class], message='Loss Class \t', summarize=1000)
#loss = tf.Print(loss, [loss], message='Total Loss \t', summarize=1000)
#loss = tf.Print(loss, [current_recall], message='Current Recall \t', summarize=1000)
#loss = tf.Print(loss, [total_recall/seen], message='Average Recall \t', summarize=1000)
loss = tf.Print(loss, [tf.zeros((1))], message='Dummy Line \t')
loss = tf.Print(loss, [loss_xy], message='Loss XY \t')
loss = tf.Print(loss, [loss_wh], message='Loss WH \t')
loss = tf.Print(loss, [loss_conf], message='Loss Conf \t')
loss = tf.Print(loss, [loss_class], message='Loss Class \t')
loss = tf.Print(loss, [loss], message='Total Loss \t')
loss = tf.Print(loss, [current_recall], message='Current Recall \t')
loss = tf.Print(loss, [total_recall/seen], message='Average Recall \t')
return loss
```
**Parse the annotations to construct train generator and validation generator**
```
generator_config = {
'IMAGE_H' : IMAGE_H,
'IMAGE_W' : IMAGE_W,
'GRID_H' : GRID_H,
'GRID_W' : GRID_W,
'BOX' : BOX,
'LABELS' : LABELS,
'CLASS' : len(LABELS),
'ANCHORS' : ANCHORS,
'BATCH_SIZE' : BATCH_SIZE,
'TRUE_BOX_BUFFER' : 50,
}
def normalize(image):
return image / 255.
print(train_annot_folder)
```
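If you run the cells strictly from top to bottom, note that `train_batch` and `valid_batch` in the initialization cell depend on `generator_config` and `normalize`, which are only defined here. Rebuilding the generators at this point keeps the notebook runnable:
```
# Rebuild the batch generators now that generator_config and normalize are defined.
train_batch = BatchGenerator(train_imgs, generator_config, norm=normalize)
valid_batch = BatchGenerator(val_imgs, generator_config, norm=normalize)
```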
**Setup a few callbacks and start the training**
```
early_stop = EarlyStopping(monitor='val_loss',
min_delta=0.001,
patience=3,
mode='min',
verbose=1)
checkpoint = ModelCheckpoint('botellas.h5',
monitor='val_loss',
verbose=1,
save_best_only=True,
mode='min',
period=1)
#tb_counter = len([log for log in os.listdir(os.path.expanduser('~/logs/')) if 'coco_' in log]) + 1
#tensorboard = TensorBoard(log_dir=os.path.expanduser('~/logs/') + 'coco_' + '_' + str(tb_counter),
# histogram_freq=0,
# write_graph=True,
# write_images=False)
optimizer = Adam(lr=0.5e-4, beta_1=0.9, beta_2=0.999, epsilon=1e-08, decay=0.0)
#optimizer = SGD(lr=1e-4, decay=0.0005, momentum=0.9)
#optimizer = RMSprop(lr=1e-4, rho=0.9, epsilon=1e-08, decay=0.0)
model.compile(loss=custom_loss, optimizer=optimizer,metrics=['accuracy'])
#'loss_xy','loss_wh','loss_conf','loss_classloss','current_recall','total_recall/seen'
stad = model.fit_generator(generator = train_batch,
steps_per_epoch = len(train_batch),
epochs = 3,
verbose = 1,
validation_data = valid_batch,
validation_steps = len(valid_batch),
callbacks = [early_stop, checkpoint],
max_queue_size = 3)
#model.fit_generator(generator = train_batch,
# steps_per_epoch = len(train_batch),
# epochs = 100,
# verbose = 1,
# validation_data = valid_batch,
# validation_steps = len(valid_batch),
# callbacks = [early_stop, checkpoint, tensorboard],
# max_queue_size = 3)
image = batches[0][0][0][0]
plt.imshow(image.astype('uint8'))
plt.figure(0)
plt.plot(stad.history['acc'],'r')
plt.plot(stad.history['val_acc'],'g')
plt.xlabel("Num of Epochs")
plt.ylabel("Accuracy")
plt.title("Training Accuracy vs Validation Accuracy")
plt.legend(['train','validation'])
plt.savefig("Grafica_1.jpg", bbox_inches = 'tight')
plt.figure(1)
plt.plot(stad.history['loss'],'r')
plt.plot(stad.history['val_loss'],'g')
plt.xlabel("Num of Epochs")
plt.ylabel("Loss")
plt.title("Training Loss vs Validation Loss")
plt.legend(['train','validation'])
plt.savefig("Grafica_2.jpg", bbox_inches = 'tight')
plt.show()
```
# Perform detection on image
```
model.load_weights("botellas.h5")
import cv2
import matplotlib.pyplot as plt
plt.figure()
input_image = cv2.imread("/content/drive/My Drive/dataset/images/1.png")
input_image = cv2.resize(input_image, (416, 416))
imagen = input_image.copy()  # keep an unnormalized copy for drawing the detected boxes
dummy_array = np.zeros((1,1,1,1,TRUE_BOX_BUFFER,4))
input_image = input_image / 255.
input_image = input_image[:,:,::-1]
input_image = np.expand_dims(input_image, 0)
netout = model.predict([input_image, dummy_array])
boxes = decode_netout(netout[0],
obj_threshold=OBJ_THRESHOLD,
nms_threshold=NMS_THRESHOLD,
anchors=ANCHORS,
nb_class=CLASS)
imagen = draw_boxes(imagen, boxes, labels=LABELS)
imagen = cv2.resize(imagen,(640,380))
plt.imshow(imagen[:,:,::-1]); plt.show()
```
# Perform detection on video
```
#model.load_weights("weights_coco.h5")
#dummy_array = np.zeros((1,1,1,1,TRUE_BOX_BUFFER,4))
#video_inp = '../basic-yolo-keras/images/phnom_penh.mp4'
#video_out = '../basic-yolo-keras/images/phnom_penh_bbox.mp4'
#video_reader = cv2.VideoCapture(video_inp)
#nb_frames = int(video_reader.get(cv2.CAP_PROP_FRAME_COUNT))
#frame_h = int(video_reader.get(cv2.CAP_PROP_FRAME_HEIGHT))
#frame_w = int(video_reader.get(cv2.CAP_PROP_FRAME_WIDTH))
#video_writer = cv2.VideoWriter(video_out,
# cv2.VideoWriter_fourcc(*'XVID'),
# 50.0,
# (frame_w, frame_h))
#for i in tqdm(range(nb_frames)):
# ret, image = video_reader.read()
# input_image = cv2.resize(image, (416, 416))
# input_image = input_image / 255.
# input_image = input_image[:,:,::-1]
# input_image = np.expand_dims(input_image, 0)
# netout = model.predict([input_image, dummy_array])
# boxes = decode_netout(netout[0],
# obj_threshold=0.3,
# nms_threshold=NMS_THRESHOLD,
# anchors=ANCHORS,
# nb_class=CLASS)
# image = draw_boxes(image, boxes, labels=LABELS)
# video_writer.write(np.uint8(image))
#video_reader.release()
#video_writer.release()
```
... ***CURRENTLY UNDER DEVELOPMENT*** ...
## Synthetic simulation of historical TC parameters using Gaussian copulas (Rueda et al. 2016) and subsequent selection of representative cases using the Maximum Dissimilarity (MaxDiss) algorithm (Camus et al. 2011)
inputs required:
* Historical TC parameters that affect the site (output of *notebook 05*)
* number of synthetic simulations to run
* number of representative cases to be selected using MaxDiss
in this notebook:
* synthetic generation of TC tracks based on Gaussian copulas of the TC parameters
* MDA selection of a representative number of events
```
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# common
import os
import os.path as op
# pip
import xarray as xr
import numpy as np
# DEV: override installed teslakit
import sys
sys.path.insert(0, op.join(os.path.abspath(''), '..', '..', '..'))
# teslakit
from teslakit.database import Database
from teslakit.statistical import CopulaSimulation
from teslakit.mda import MaxDiss_Simplified_NoThreshold
from teslakit.plotting.storms import Plot_TCs_Params_MDAvsSIM, \
Plot_TCs_Params_HISTvsSIM, Plot_TCs_Params_HISTvsSIM_histogram
```
## Database and Site parameters
```
# --------------------------------------
# Teslakit database
p_data = r'/Users/nico/Projects/TESLA-kit/TeslaKit/data'
db = Database(p_data)
# set site
db.SetSite('ROI')
# --------------------------------------
# load data and set parameters
_, TCs_r2_params = db.Load_TCs_r2_hist() # TCs parameters inside radius 2
# TCs random generation and MDA parameters
num_sim_rnd = 100000
num_sel_mda = 1000
```
## Historical TCs - Probabilistic Simulation
```
# --------------------------------------
# Probabilistic simulation Historical TCs
# aux functions
def adjust_to_pareto(var):
'Fix data. It needs to start at 0 for Pareto adjustment '
var = var.astype(float)
var_pareto = np.amax(var) - var + 0.00001
return var_pareto
def adjust_from_pareto(var_base, var_pareto):
'Returns data from pareto adjustment'
var = np.amax(var_base) - var_pareto + 0.00001
return var
# use small radius parameters (4º)
pmean = TCs_r2_params.pressure_mean.values[:]
pmin = TCs_r2_params.pressure_min.values[:]
gamma = TCs_r2_params.gamma.values[:]
delta = TCs_r2_params.delta.values[:]
vmean = TCs_r2_params.velocity_mean.values[:]
# fix pressure for p
pmean_p = adjust_to_pareto(pmean)
pmin_p = adjust_to_pareto(pmin)
# join storm parameters for copula simulation
storm_params = np.column_stack(
(pmean_p, pmin_p, gamma, delta, vmean)
)
# statistical simulate PCs using copulas
kernels = ['GPareto', 'GPareto', 'ECDF', 'ECDF', 'ECDF']
storm_params_sim = CopulaSimulation(storm_params, kernels, num_sim_rnd)
# adjust back pressures from pareto
pmean_sim = adjust_from_pareto(pmean, storm_params_sim[:,0])
pmin_sim = adjust_from_pareto(pmin, storm_params_sim[:,1])
# store simulated storms - parameters
TCs_r2_sim_params = xr.Dataset(
{
'pressure_mean':(('storm'), pmean_sim),
'pressure_min':(('storm'), pmin_sim),
'gamma':(('storm'), storm_params_sim[:,2]),
'delta':(('storm'), storm_params_sim[:,3]),
'velocity_mean':(('storm'), storm_params_sim[:,4]),
},
coords = {
'storm':(('storm'), np.arange(num_sim_rnd))
},
)
print(TCs_r2_sim_params)
db.Save_TCs_r2_sim_params(TCs_r2_sim_params)
# Historical vs Simulated: scatter plot parameters
Plot_TCs_Params_HISTvsSIM(TCs_r2_params, TCs_r2_sim_params);
# Historical vs Simulated: histogram parameters
Plot_TCs_Params_HISTvsSIM_histogram(TCs_r2_params, TCs_r2_sim_params);
```
## Simulated TCs - MaxDiss classification
```
# --------------------------------------
# MaxDiss classification
# get simulated parameters
pmean_s = TCs_r2_sim_params.pressure_mean.values[:]
pmin_s = TCs_r2_sim_params.pressure_min.values[:]
gamma_s = TCs_r2_sim_params.gamma.values[:]
delta_s = TCs_r2_sim_params.delta.values[:]
vmean_s = TCs_r2_sim_params.velocity_mean.values[:]
# subset, scalar and directional indexes
data_mda = np.column_stack((pmean_s, pmin_s, vmean_s, delta_s, gamma_s))
ix_scalar = [0,1,2]
ix_directional = [3,4]
centroids = MaxDiss_Simplified_NoThreshold(
data_mda, num_sel_mda, ix_scalar, ix_directional
)
# store MDA storms - parameters
TCs_r2_MDA_params = xr.Dataset(
{
'pressure_mean':(('storm'), centroids[:,0]),
'pressure_min':(('storm'), centroids[:,1]),
'velocity_mean':(('storm'), centroids[:,2]),
'delta':(('storm'), centroids[:,3]),
'gamma':(('storm'), centroids[:,4]),
},
coords = {
'storm':(('storm'), np.arange(num_sel_mda))
},
)
print(TCs_r2_MDA_params)
#db.Save_TCs_r2_mda_params(TCs_r2_MDA_params)
# Historical vs Simulated: scatter plot parameters
Plot_TCs_Params_MDAvsSIM(TCs_r2_MDA_params, TCs_r2_sim_params);
```
## Historical TCs (MDA centroids) Waves Simulation
Wave data are generated by numerically simulating the selected storms.
This methodology is not included in the teslakit python library.
This step needs to be completed before continuing with notebook 07.
# Preprocessing Part
## Author: Xiaochi (George) Li
Input: "data.xlsx" provided by the professor
Output: "processed_data.pickle" with target variable "Salary" as the last column. And all the missing value should be imputed or dropped.
### Summary
In this part, we read the data from the file, did some exploratory data analysis on the data and processed the data for further analysis and synthesis.
#### Exploratory Data Analysis
* Correlation analysis
* Missing value analysis
* Unique percentage analysis
#### Process
* Removed
1. Need NLP: "MOU", "MOU Title", "Title", "Department",
2. No meaning:"Record Number",
3. \>50% missing: "POBP"
* Imputed
1. p_dep: mean
2. p_grade: add new category
3. Lump Sum Pay:0
4. benefit: add new category
5. Rate:mean
6. o_pay:median
```
import numpy as np
import pandas as pd
import sklearn
import seaborn as sns
import matplotlib.pyplot as plt
np.random.seed(42)
df = pd.read_excel("data.xlsx",thousands=",") # "," is the thousands separator in the file
df.info()
"""Correlation analysis"""
corr = df.corr()
mask = np.zeros_like(corr, dtype=np.bool)
mask[np.triu_indices_from(mask)] = True
f, ax = plt.subplots(figsize=(11, 9))
cmap = sns.diverging_palette(220, 10, as_cmap=True)
sns.heatmap(corr, mask=mask, cmap=cmap, vmax=1, vmin=-1, center=0,
square=True, linewidths=.5, cbar_kws={"shrink": .5})
corr
"""Missing rate for each feature"""
null_rate = df.isnull().sum(axis = 0).sort_values(ascending = False)/float((len(df)))
null_rate
"""Unique Rate for each feature"""
unique_rate = df.apply(lambda x: len(pd.unique(x)),axis = 0).sort_values(ascending = False) #unique rate and sort
print(unique_rate)
def column_analyse(x,df = df): # print value counts for columns that only have a few unique values
print(df[x].value_counts(),"\n",df[x].value_counts().sum() ,"\n",df[x].value_counts()/len(df[x]), "\n-----------------------")
column_analyse("e_type")
column_analyse("benefit")
column_analyse("Time")
column_analyse("p_grade")
"""Feature selection"""
categorical_features = ["e_type", "benefit", "Time", "p_grade"]
not_include_features = ["MOU", "MOU Title", "Title", "Department", "Record Number", "POBP"]
selected_features = [i for i in df.columns if i not in not_include_features]
X_selected = df.loc[:,selected_features]
X_selected["p_dep"].hist(bins=50)
X_selected["p_dep"].describe()
X_selected["Lump Sum Pay"].hist(bins=50)
X_selected["Lump Sum Pay"].describe()
X_selected["Rate"].hist(bins=50)
X_selected["Rate"].describe()
X_selected["o_pay"].hist(bins=50)
X_selected["o_pay"].describe()
```
|Feature Name|Missing Rate|Imputation Method|
|----|----|----|
|p_dep|0.189287|Mean|
|p_grade|0.189287|add new category|
|Lump Sum Pay|0.185537|0|
|benefit|0.178262|add new category|
|Rate|0.058162|mean|
|o_pay|0.003750|median|
```
"""imputation"""
X_selected["p_dep"] = X_selected["p_dep"].fillna(X_selected["p_dep"].mean())
X_selected["Lump Sum Pay"] = X_selected["Lump Sum Pay"].fillna(0)
X_selected["Rate"] = X_selected["Rate"].fillna(X_selected["Rate"].mean())
X_selected["o_pay"] = X_selected["o_pay"].fillna(X_selected["o_pay"].median())
X_selected["p_grade"] = X_selected["p_grade"].fillna(-1)
X_selected["benefit"] = X_selected["benefit"].fillna(-1)
X_selected.head()
X_selected.to_pickle("processed_data.pickle")
```
# Amazon Augmented AI(A2I) Integrated with AWS Marketplace ML Models
Sometimes, for some payloads, machine learning (ML) model predictions are just not confident enough and you want more than a machine. Furthermore, training a model can be complicated, time-consuming, and expensive. This is where [AWS Marketplace](https://aws.amazon.com/marketplace/b/6297422012?page=1&filters=FulfillmentOptionType&FulfillmentOptionType=SageMaker&ref_=sa_campaign_pbrao) and [Amazon Augmented AI](https://aws.amazon.com/augmented-ai/) (Amazon A2I) come in. By combining a pretrained ML model in AWS Marketplace with Amazon Augmented AI, you can quickly reap the benefits of pretrained models with validating and augmenting the model's accuracy with human intelligence.
AWS Marketplace contains over 400 pretrained ML models. Some models are general purpose. For example, the [GluonCV SSD Object Detector](https://aws.amazon.com/marketplace/pp/prodview-ggbuxlnrm2lh4?qid=1605041213915&sr=0-5&ref_=sa_campaign_pbrao) can detect objects in an image and place bounding boxes around the objects. AWS Marketplace also offers many purpose-built models such as a [Background Noise Classifier](https://aws.amazon.com/marketplace/pp/prodview-vpd6qdjm4d7u4?applicationId=AWS-Sagemaker-Console&ref_=sa_campaign_pbrao), a [Hard Hat Detector for Worker Safety](https://aws.amazon.com/marketplace/pp/prodview-jd5tj2egpxxum?applicationId=AWS-Sagemaker-Console&ref_=sa_campaign_pbrao), and a [Person in Water](https://aws.amazon.com/marketplace/pp/prodview-wlndemzv5pxhw?applicationId=AWS-Sagemaker-Console&ref_=sa_campaign_pbrao).
Amazon A2I provides a human-in-loop workflow to review ML predictions. Its configurable human-review workflow solution and customizable user-review console enable you to focus on ML tasks and increase the accuracy of the predictions with human input.
## Overview
In this notebook, you will use a pre-trained AWS Marketplace machine learning model with Amazon A2I to detect objects in images and to trigger a human-in-loop workflow that reviews, updates, and adds labeled objects for an individual image. Furthermore, you can specify configurable threshold criteria for triggering the human-in-loop workflow in Amazon A2I. For example, you can trigger a human-in-loop workflow if there are no objects that are detected with an accuracy of 90% or greater.
The following diagram shows the AWS services that are used in this notebook and the steps that you will perform. Here are the high level steps in this notebook:
1. Configure the human-in-loop review using Amazon A2I
1. Select, deploy, and invoke an AWS Marketplace ML model
1. Trigger the human review workflow in Amazon A2I.
1. The private workforce that was created in Amazon SageMaker Ground Truth reviews and edits the objects detected in the image.
<img style="float: center;" src="./img/a2i_diagram.png" width="700" height="500">
## Contents
* [Prerequisites](#Prerequisites)
* [Step 1 Configure Amazon A2I service](#step1)
* [Step 1.1 Creating human review Workteam or Workforce](#step1_1)
* [Step 1.2 Create Human Task UI](#step1_2)
* [Step 1.3 Create the Flow Definition](#step1_3)
* [Step 2 Deploy and invoke AWS Marketplace model](#step2)
* [Step 2.1 Create an endpoint](#step2_1)
* [Step 2.2 Create input payload](#step2_2)
* [Step 2.3 Perform real-time inference](#step2_3)
* [Step3 Starting Human Loops](#step3)
* [Step 3.1 View Task Results](#step3_1)
* [Step 4 Next steps](#step4)
* [Step 4.1 Additional resources](#step4_1)
* [Step 5 Cleanup Resources](#step5)
### Usage instructions
You can run this notebook one cell at a time (By using Shift+Enter for running a cell).
## Prerequisites <a class="anchor" id="prerequisites"></a>
This sample notebook requires a subscription to **[GluonCV SSD Object Detector](https://aws.amazon.com/marketplace/pp/prodview-ggbuxlnrm2lh4?ref_=sa_campaign_pbrao)**, a pre-trained machine learning model package from AWS Marketplace.
If your AWS account has not been subscribed to this listing, here is the process you can follow:
1. Open the [listing](https://aws.amazon.com/marketplace/pp/prodview-ggbuxlnrm2lh4?ref_=sa_campaign_pbrao) from AWS Marketplace
1. Read the Highlights section and then product overview section of the listing.
1. View usage information and then additional resources.
1. Note the supported instance types.
1. Next, click on **Continue to subscribe.**
1. Review End-user license agreement, support terms, as well as pricing information.
1. The **Accept Offer** button needs to be selected if your organization agrees with EULA, pricing information as well as support terms. If the Continue to configuration button is active, it means your account already has a subscription to this listing. Once you select the **Continue to configuration** button and then choose **region**, you will see that a Product Arn will appear. This is the **model package ARN** that you need to specify in the following cell.
```
model_package_arn = "arn:aws:sagemaker:us-east-1:865070037744:model-package/gluoncv-ssd-resnet501547760463-0f9e6796d2438a1d64bb9b15aac57bc0" # Update as needed
```
8. This notebook requires the IAM role associated with this notebook to have *AmazonSageMakerFullAccess* IAM permission.
9. Note: If you want to run this notebook on AWS SageMaker Studio, please use Classic Jupyter mode to correctly render the visualizations. Pick instance type **'ml.m4.xlarge'** or larger. Set kernel to **'Data Science'**.
<img style="float: left;" src="./img/classicjupyter.png">
### Installing Dependencies
Import the libraries that are needed for this notebook.
```
# Import necessary libraries
import boto3
import json
import pandas as pd
import pprint
import requests
import sagemaker
import shutil
import time
import uuid
import PIL.Image
from IPython.display import Image
from IPython.display import Markdown as md
from sagemaker import get_execution_role
from sagemaker import ModelPackage
```
#### Setup Variables, Bucket and Paths
```
# Setting Role to the default SageMaker Execution Role
role = get_execution_role()
# Instantiate the SageMaker session and client that will be used throughout the notebook
sagemaker_session = sagemaker.Session()
sagemaker_client = sagemaker_session.sagemaker_client
# Fetch the region
region = sagemaker_session.boto_region_name
# Create S3 and A2I clients
s3 = boto3.client("s3", region)
a2i = boto3.client("sagemaker-a2i-runtime", region)
# Retrieve the current timestamp
timestamp = time.strftime("%Y-%m-%d-%H-%M-%S", time.gmtime())
# endpoint_name = '<ENDPOINT_NAME>'
endpoint_name = "gluoncv-object-detector"
# content_type='<CONTENT_TYPE>'
content_type = "image/jpeg"
# Instance size type to be used for making real-time predictions
real_time_inference_instance_type = "ml.m4.xlarge"
# Task UI name - this value is unique per account and region. You can also provide your own value here.
# task_ui_name = '<TASK_UI_NAME>'
task_ui_name = "ui-aws-marketplace-gluon-model-" + timestamp
# Flow definition name - this value is unique per account and region. You can also provide your own value here.
flow_definition_name = "fd-aws-marketplace-gluon-model-" + timestamp
# Name of the image file that will be used in object detection
image_file_name = "image.jpg"
# Create the sub-directory in the default S3 bucket
# that will store the results of the human-in-loop A2I review
bucket = sagemaker_session.default_bucket()
key = "a2i-results"
s3.put_object(Bucket=bucket, Key=(key + "/"))
output_path = f"s3://{bucket}/a2i-results"
print(f"Results of A2I will be stored in {output_path}.")
```
## Step 1 Configure Amazon A2I service<a class="anchor" id="step1"></a>
In this section, you will create 3 resources:
1. Private workforce
2. Human-in-loop Console UI
3. Workflow definition
### Step 1.1 Creating human review Workteam or Workforce <a class="anchor" id="step1_1"></a>
If you have already created a private work team, replace <WORKTEAM_ARN> with the ARN of your work team. If you have never created a private work team, use the instructions below to create one. To learn more about using and managing private work teams, see [Use a Private Workforce](https://docs.aws.amazon.com/sagemaker/latest/dg/sms-workforce-private.html).
1. In the Amazon SageMaker console in the left sidebar under the Ground Truth heading, open the **Labeling Workforces**.
1. Choose **Private**, and then choose **Create private team**.
1. If you are a new user of SageMaker workforces, it is recommended you select **Create a private work team with AWS Cognito**.
1. For team name, enter "MyTeam".
1. To add workers by email, select **Invite new workers by email** and paste or type a list of up to 50 email addresses, separated by commas, into the email addresses box. If you are following this notebook, specify an email account that you have access to. The system sends an invitation email, which allows users to authenticate and set up their profile for performing human-in-loop review.
1. Enter an organization name - this will be used to customize emails sent to your workers.
1. For contact email, enter an email address you have access to.
1. Select **Create private team**.
This will bring you back to the Private tab under labeling workforces, where you can view and manage your private teams and workers.
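If you prefer to create the private work team programmatically instead of through the console, a minimal sketch using the SageMaker API is shown below. It assumes you already have an Amazon Cognito user pool, app client, and user group for your workers; the identifiers are placeholders:
```
# Hypothetical programmatic alternative to the console steps above.
# The Cognito identifiers are placeholders for resources you already manage.
response = sagemaker_client.create_workteam(
    WorkteamName="MyTeam",
    Description="Private team for the A2I object detection review",
    MemberDefinitions=[
        {
            "CognitoMemberDefinition": {
                "UserPool": "<COGNITO_USER_POOL_ID>",
                "UserGroup": "<COGNITO_USER_GROUP>",
                "ClientId": "<COGNITO_APP_CLIENT_ID>",
            }
        }
    ],
)
workteam_arn = response["WorkteamArn"]
```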
### **IMPORTANT: After you have created your workteam, from the Team summary section copy the value of the ARN and uncomment and replace `<WORKTEAM_ARN>` below:**
```
# workteam_arn = '<WORKTEAM_ARN>'
```
### Step 1.2 Create Human Task UI <a class="anchor" id="step1_2"></a>
Create a human task UI resource, giving a UI template in liquid HTML. This template will be rendered to the human workers whenever a human loop is required.
For additional UI templates, check out this repository: https://github.com/aws-samples/amazon-a2i-sample-task-uis.
You will be using a slightly modified version of the [object detection UI](https://github.com/aws-samples/amazon-a2i-sample-task-uis/blob/master/images/bounding-box.liquid.html) that provides support for the `initial-value` and `labels` variables in the template.
```
# Create task UI
# Read in the template from a local file
template = open("./src/worker-task-template.html").read()
human_task_ui_response = sagemaker_client.create_human_task_ui(
HumanTaskUiName=task_ui_name, UiTemplate={"Content": template}
)
human_task_ui_arn = human_task_ui_response["HumanTaskUiArn"]
print(human_task_ui_arn)
```
### Step 1.3 Create the Flow Definition <a class="anchor" id="step1_3"></a>
In this section, you will create a flow definition. Flow Definitions allow you to specify:
* The workforce that your tasks will be sent to.
* The instructions that your workforce will receive. This is called a worker task template.
* The configuration of your worker tasks, including the number of workers that receive a task and time limits to complete tasks.
* Where your output data will be stored.
For more details and instructions, see: https://docs.aws.amazon.com/sagemaker/latest/dg/a2i-create-flow-definition.html.
```
create_workflow_definition_response = sagemaker_client.create_flow_definition(
FlowDefinitionName=flow_definition_name,
RoleArn=role,
HumanLoopConfig={
"WorkteamArn": workteam_arn,
"HumanTaskUiArn": human_task_ui_arn,
"TaskCount": 1,
"TaskDescription": "Identify and locate the object in an image.",
"TaskTitle": "Object detection Amazon A2I demo",
},
OutputConfig={"S3OutputPath": output_path},
)
flow_definition_arn = create_workflow_definition_response[
"FlowDefinitionArn"
] # let's save this ARN for future use
%%time
# Describe flow definition - status should be active
for x in range(60):
describe_flow_definition_response = sagemaker_client.describe_flow_definition(
FlowDefinitionName=flow_definition_name
)
print(describe_flow_definition_response["FlowDefinitionStatus"])
if describe_flow_definition_response["FlowDefinitionStatus"] == "Active":
print("Flow Definition is active")
break
time.sleep(2)
```
## Step 2 Deploy and invoke AWS Marketplace model <a class="anchor" id="step2"></a>
In this section, you will stand up an Amazon SageMaker endpoint. Each endpoint must have a unique name which you can use for performing inference.
### Step 2.1 Create an Endpoint <a class="anchor" id="step2_1"></a>
```
%%time
# Create a deployable model from the model package.
model = ModelPackage(
role=role,
model_package_arn=model_package_arn,
sagemaker_session=sagemaker_session,
predictor_cls=sagemaker.predictor.Predictor,
)
# Deploy the model
predictor = model.deploy(
initial_instance_count=1,
instance_type=real_time_inference_instance_type,
endpoint_name=endpoint_name,
)
```
It will take between 5 and 10 minutes to create the endpoint. Once the endpoint has been created, you will be able to perform real-time inference.
### Step 2.2 Create input payload <a class="anchor" id="step2_2"></a>
In this step, you will prepare a payload to perform a prediction.
```
# Download the image file
# Open the url image, set stream to True, this will return the stream content.
r = requests.get("https://images.pexels.com/photos/763398/pexels-photo-763398.jpeg", stream=True)
# Open a local file with wb ( write binary ) permission to save it locally.
with open(image_file_name, "wb") as f:
shutil.copyfileobj(r.raw, f)
```
Resize the image and upload the file to S3 so that the image can be referenced from the worker console UI.
```
# Load the image
image = PIL.Image.open(image_file_name)
# Resize the image
resized_image = image.resize((600, 400))
# Save the resized image file locally
resized_image.save(image_file_name)
# Save file to S3
s3 = boto3.client("s3")
with open(image_file_name, "rb") as f:
s3.upload_fileobj(f, bucket, image_file_name)
# Display the image
from IPython.core.display import Image, display
Image(filename=image_file_name, width=600, height=400)
```
### Step 2.3 Perform real-time inference <a class="anchor" id="step2_3"></a>
Submit the image file to the model and it will detect the objects in the image.
```
with open(image_file_name, "rb") as f:
payload = f.read()
response = sagemaker_session.sagemaker_runtime_client.invoke_endpoint(
EndpointName=endpoint_name, ContentType=content_type, Accept="json", Body=payload
)
result = json.loads(response["Body"].read().decode())
# Convert list to JSON
json_result = json.dumps(result)
df = pd.read_json(json_result)
# Display confidence scores < 0.90
df = df[df.score < 0.90]
print(df.head())
```
## Step 3 Starting Human Loops <a class="anchor" id="step3"></a>
In a previous step, you already submitted your image to the model for prediction and stored the output in JSON format in the `result` variable. You simply need to modify the X, Y coordinates of the bounding boxes. Additionally, you can filter out all predictions that are less than 90% accurate before submitting them to your human-in-loop review. This will ensure that your model's predictions are highly accurate and that any additional detections of objects are made by a human.
```
# Helper function to update X,Y coordinates and labels for the bounding boxes
def fix_boundingboxes(prediction_results, threshold=0.8):
bounding_boxes = []
labels = set()
for data in prediction_results:
label = data["id"]
labels.add(label)
if data["score"] > threshold:
width = data["right"] - data["left"]
height = data["bottom"] - data["top"]
top = data["top"]
left = data["left"]
bounding_boxes.append(
{"height": height, "width": width, "top": top, "left": left, "label": label}
)
return bounding_boxes, list(labels)
bounding_boxes, labels = fix_boundingboxes(result, threshold=0.9)
# Define the content that is passed into the human-in-loop workflow and console
human_loop_name = str(uuid.uuid4())
input_content = {
"initialValue": bounding_boxes, # the bounding box values that have been detected by model prediction
"taskObject": f"s3://{bucket}/"
+ image_file_name, # the s3 object will be passed to the worker task UI to render
"labels": labels, # the labels that are displayed in the legend
}
# Trigger the human-in-loop workflow
start_loop_response = a2i.start_human_loop(
HumanLoopName=human_loop_name,
FlowDefinitionArn=flow_definition_arn,
HumanLoopInput={"InputContent": json.dumps(input_content)},
)
```
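The cell above starts a human loop unconditionally for the sample image. To apply the threshold criterion described in the overview (escalate only when no object is detected with 90% confidence or greater), you could gate the call with a check on the model output — a minimal sketch:
```
# Hypothetical gating logic: only escalate to human review when no detection
# reaches the confidence threshold on its own.
CONFIDENCE_THRESHOLD = 0.90
model_is_confident = any(d["score"] >= CONFIDENCE_THRESHOLD for d in result)
if not model_is_confident:
    a2i.start_human_loop(
        HumanLoopName=str(uuid.uuid4()),
        FlowDefinitionArn=flow_definition_arn,
        HumanLoopInput={"InputContent": json.dumps(input_content)},
    )
```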
Now that the human-in-loop review has been triggered, you can log into the worker console to work on the task and make edits and additions to the object detection bounding boxes from the image.
```
# Fetch the URL for the worker console UI
workteam_name = workteam_arn.split("/")[-1]
my_workteam = sagemaker_session.sagemaker_client.list_workteams(NameContains=workteam_name)
worker_console_url = "https://" + my_workteam["Workteams"][0]["SubDomain"]
md(
"### Click on the [Worker Console]({}) to begin reviewing the object detection".format(
worker_console_url
)
)
```
The image below shows the objects that your model detected for the sample image used in this notebook, as displayed in the worker console.
<img src='./img/rain_biker_bb.png' align='center' height=600 width=800/>
You can now make edits to the image to detect other objects. For example, in the image above, the model failed to detect the bicycle in the foreground with a confidence of 90% or greater. However, as a human reviewer, you can clearly see the bicycle and can draw a bounding box around it. Once you have finished with your edits, you can submit the result.
### Step 3.1 View Task Results <a class="anchor" id="step3_1"></a>
Once work is completed, Amazon A2I stores the results in your S3 bucket and sends a CloudWatch event. Your results should be available in the S3 `output_path` that you specified when all work is completed. Note that the human answer (the label and the bounding box) is returned and saved in the JSON file.
**NOTE: You must edit/submit the image in the Worker console so that its status is `Completed`.**
```
# Fetch the details about the human loop review in order to locate the JSON output on S3
resp = a2i.describe_human_loop(HumanLoopName=human_loop_name)
# Wait for the human-in-loop review to be completed
while True:
resp = a2i.describe_human_loop(HumanLoopName=human_loop_name)
print("-", sep="", end="", flush=True)
if resp["HumanLoopStatus"] == "Completed":
print("!")
break
time.sleep(2)
```
Once its status is `Completed`, you can execute the cell below to view the JSON output that is stored in S3. Under `annotatedResult`, any new bounding boxes are included along with those that the model predicted. To learn more about the output data schema, please refer to the documentation about [Output Data From Custom Task Types](https://docs.aws.amazon.com/sagemaker/latest/dg/a2i-output-data.html#sms-output-data-custom).
```
# Once the image has been submitted, display the JSON output that was sent to S3
bucket, key = resp["HumanLoopOutput"]["OutputS3Uri"].replace("s3://", "").split("/", 1)
response = s3.get_object(Bucket=bucket, Key=key)
content = response["Body"].read()
json_output = json.loads(content)
print(json.dumps(json_output, indent=1))
```
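If you want to work with the reviewed annotations programmatically, the sketch below pulls the bounding boxes out of the loaded `json_output`. The nested key names (`humanAnswers`, `answerContent`, `annotatedResult`, `boundingBoxes`) are assumptions based on the custom bounding-box task template used in this notebook, so verify them against the JSON printed above.
```
# Hedged sketch: extract the human-reviewed bounding boxes from the A2I output.
# The nested key names below are assumptions based on the custom bounding-box
# template used in this notebook -- confirm them against the printed JSON.
for answer in json_output.get("humanAnswers", []):
    annotated = answer.get("answerContent", {}).get("annotatedResult", {})
    for box in annotated.get("boundingBoxes", []):
        print(box.get("label"), box.get("left"), box.get("top"),
              box.get("width"), box.get("height"))
```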
## Step 4 Next Steps <a class="anchor" id="step4"></a>
### Step 4.1 Additional Resources <a class="anchor" id="step4_1"></a>
* You can explore additional machine learning models in [AWS Marketplace - Machine Learning](https://aws.amazon.com/marketplace/b/c3714653-8485-4e34-b35b-82c2203e81c1?page=1&filters=FulfillmentOptionType&FulfillmentOptionType=SageMaker&ref_=sa_campaign_pbrao).
* Learn more about [Amazon Augmented AI](https://aws.amazon.com/augmented-ai/)
* Other AWS blogs that may be of interest are:
* [Using AWS Marketplace for machine learning workloads](https://aws.amazon.com/blogs/awsmarketplace/using-aws-marketplace-for-machine-learning-workloads/)
* [Adding AI to your applications with ready-to-use models from AWS Marketplace](https://aws.amazon.com/blogs/machine-learning/adding-ai-to-your-applications-with-ready-to-use-models-from-aws-marketplace/)
* [Building an end-to-end intelligent document processing solution using AWS](https://aws.amazon.com/blogs/machine-learning/building-an-end-to-end-intelligent-document-processing-solution-using-aws/)
## Step 5 Clean up resources <a class="anchor" id="step5"></a>
In order to clean up the resources from this notebook, simply execute the cells below.
```
# Delete Workflow definition
sagemaker_client.delete_flow_definition(FlowDefinitionName=flow_definition_name)
# Delete Human Task UI
sagemaker_client.delete_human_task_ui(HumanTaskUiName=task_ui_name)
# Delete Endpoint
predictor.delete_endpoint()
# Delete Model
predictor.delete_model()
```
#### Cancel AWS Marketplace subscription (Optional)
Finally, if you subscribed to an AWS Marketplace model for an experiment and would like to unsubscribe, you can follow the steps below. Before you cancel the subscription, ensure that you do not have any [deployable model](https://console.aws.amazon.com/sagemaker/home#/models) created from the model package or using the algorithm. Note: you can find this information by looking at the container name associated with the model.
**Steps to unsubscribe from the product on AWS Marketplace:**
Navigate to Machine Learning tab on Your [Software subscriptions page](https://aws.amazon.com/marketplace/ai/library?productType=ml&ref_=lbr_tab_ml).
Locate the listing that you would need to cancel, and click Cancel Subscription.
# NLP - Using spaCy library
- **Created by Andrés Segura Tinoco**
- **Created on June 04, 2019**
- **Updated on October 29, 2021**
**Natural language processing (NLP):** is a discipline where computer science, artificial intelligence and cognitive science intersect, with the objective that machines can read and understand our language for decision making <a href="#link_one">[1]</a>.
**spaCy:** features fast statistical NER as well as an open-source named-entity visualizer <a href="#link_two">[2]</a>.
## Example with a document in Spanish
```
# Load Python libraries
import io
import random
from collections import Counter
# Load NLP libraries from spacy
import spacy
# Verify installed spacy version
spacy.__version__
```
### Step 1 - Read natural text from a book
```
# Util function to read a plain text file
def read_text_file(file_path, encoding='ISO-8859-1'):
text = ""
with open(file_path, 'r', encoding=encoding) as f:
text = f.read()
return text
# Get text sample
file_path = "../data/es/El Grillo del Hogar - Charles Dickens.txt"
book_text = read_text_file(file_path)
# Show first 1000 raw characters of document
book_text[:1000]
```
### Step 2 - Create a NLP model
```
# Create NLP model for spanish language
nlp = spacy.load('es_core_news_sm')
doc_es = nlp(book_text)
```
**- Vocabulary:** unique words of the document.
```
# Get vocabulary
vocabulary_es = set(str(token).lower() for token in doc_es if not token.is_stop and token.is_alpha)
len(vocabulary_es)
# Show 100 random words of the vocabulary
print(random.sample(list(vocabulary_es), 100))
```
**- Stopwords:** refers to the most common words in a language, which do not significantly affect the meaning of the text.
```
# Get unique stop-words
stop_words_es = set(str(token).lower() for token in doc_es if token.is_stop)
len(stop_words_es)
# Show unique stop-words
print(stop_words_es)
```
**- Entity:** can be any word or series of words that consistently refers to the same thing.
```
# Returns a text with data quality
def text_quality(text):
new_text = text.replace('\n', '')
return new_text.strip('\r\n')
# Print out the first 50 named entities
for ix in range(50):
ent = doc_es.ents[ix]
ent_text = text_quality(ent.text)
if len(ent_text) > 3:
print((ix + 1), '- Entity:', ent_text, ', Label:', ent.label_)
```
### Step 3 - Working with POS, NER and sentences
**- POS:** the parts of speech explain how a word is used in a sentence.
```
# Part of speech (POS) used in this document
set(token.pos_ for token in doc_es)
```
**- Sentences:** a set of words that is complete in itself and typically containing a subject and predicate.
```
# How many sentences are in this text?
sentences = [s for s in doc_es.sents]
len(sentences)
# Show the first 10 sentences
sentences[:10]
# Get the sentences in which the 'grillo' appears
pattern = 'grillo'
cricket_sent = [sent for sent in doc_es.sents if pattern in sent.text]
len(cricket_sent)
# Show the first 10 sentences in which 'grillo' appears
for sent in cricket_sent[:10]:
print('-', sent)
```
**- NER:** Named Entity Recognition.
```
# Returns the most common entities and their quantity
def find_entities(doc, ent_type, n):
entities = Counter()
for ent in doc.ents:
if ent.label_ == ent_type:
ent_name = text_quality(ent.lemma_)
entities[ent_name] += 1
return entities.most_common(n)
# Show entities of type PERSON
find_entities(doc_es, 'PER', 20)
# Returns persons adjectives
def get_person_adj(doc, person):
adjectives = []
for ent in doc.ents:
if ent.lemma_ == person:
for token in ent.subtree:
if token.pos_ == 'ADJ': # Adjective
adjectives.append(token.lemma_)
for ent in doc.ents:
if ent.lemma_ == person:
if ent.root.dep_ == 'nsubj': # Nominal subject
for child in ent.root.head.children:
if child.dep_ == 'acomp': # Adjectival complement
adjectives.append(child.lemma_)
return set(adjectives)
# Show the adjectives used for John (most common entity)
curr_person = 'John'
print(get_person_adj(doc_es, curr_person))
# Returns the people who use a certain verb
def verb_persons(doc, verb, n):
verb_count = Counter()
for ent in doc.ents:
if ent.label_ == 'PER' and ent.root.head.lemma_ == verb:
verb_count[ent.text] += 1
return verb_count.most_common(n)
# Show the people who use a certain verb
curr_verb = 'hacer'
verb_persons(doc_es, curr_verb, 10)
# Get ADJ type labels
adj_tokens = set(str(token.orth_).lower() for token in doc_es if token.pos_ == 'ADJ')
len(adj_tokens)
# Show 50 random ADJ type labels
print(random.sample(list(adj_tokens), 50))
# Get PROPN type labels
propn_tokens = set(str(token.orth_).lower() for token in doc_es if token.pos_ == 'PROPN')
len(propn_tokens)
# Show 50 random PROPN type labels
print(random.sample(list(propn_tokens), 50))
```
## Reference
<a name='link_one' href='https://en.wikipedia.org/wiki/Natural_language_processing' target='_blank' >[1]</a> Wikipedia - Natural language processing.
<a name='link_two' href='https://spacy.io/' target='_blank' >[2]</a> spaCy website.
<hr>
<p><a href="https://ansegura7.github.io/NLP/">« Home</a></p>
# Tutorial Part 11: Learning Unsupervised Embeddings for Molecules
In this example, we will use a `SeqToSeq` model to generate fingerprints for classifying molecules. This is based on the following paper, although some of the implementation details are different: Xu et al., "Seq2seq Fingerprint: An Unsupervised Deep Molecular Embedding for Drug Discovery" (https://doi.org/10.1145/3107411.3107424).
Many types of models require their inputs to have a fixed shape. Since molecules can vary widely in the numbers of atoms and bonds they contain, this makes it hard to apply those models to them. We need a way of generating a fixed length "fingerprint" for each molecule. Various ways of doing this have been designed, such as Extended-Connectivity Fingerprints (ECFPs). But in this example, instead of designing a fingerprint by hand, we will let a `SeqToSeq` model learn its own method of creating fingerprints.
A `SeqToSeq` model performs sequence to sequence translation. For example, they are often used to translate text from one language to another. It consists of two parts called the "encoder" and "decoder". The encoder is a stack of recurrent layers. The input sequence is fed into it, one token at a time, and it generates a fixed length vector called the "embedding vector". The decoder is another stack of recurrent layers that performs the inverse operation: it takes the embedding vector as input, and generates the output sequence. By training it on appropriately chosen input/output pairs, you can create a model that performs many sorts of transformations.
In this case, we will use SMILES strings describing molecules as the input sequences. We will train the model as an autoencoder, so it tries to make the output sequences identical to the input sequences. For that to work, the encoder must create embedding vectors that contain all information from the original sequence. That's exactly what we want in a fingerprint, so perhaps those embedding vectors will then be useful as a way to represent molecules in other models!
## Colab
This tutorial and the rest in this sequence are designed to be done in Google colab. If you'd like to open this notebook in colab, you can use the following link.
[](https://colab.research.google.com/github/deepchem/deepchem/blob/master/examples/tutorials/11_Learning_Unsupervised_Embeddings_for_Molecules.ipynb)
## Setup
To run DeepChem within Colab, you'll need to run the following cell of installation commands. This will take about 5 minutes to run to completion and install your environment. This notebook will take a few hours to run on a GPU machine, so we encourage you to run it on Google colab unless you have a good GPU machine available.
```
!wget -c https://repo.anaconda.com/archive/Anaconda3-2019.10-Linux-x86_64.sh
!chmod +x Anaconda3-2019.10-Linux-x86_64.sh
!bash ./Anaconda3-2019.10-Linux-x86_64.sh -b -f -p /usr/local
!conda install -y -c deepchem -c rdkit -c conda-forge -c omnia deepchem-gpu=2.3.0
import sys
sys.path.append('/usr/local/lib/python3.7/site-packages/')
import deepchem as dc
```
Let's start by loading the data. We will use the MUV dataset. It includes 74,501 molecules in the training set, and 9313 molecules in the validation set, so it gives us plenty of SMILES strings to work with.
```
import deepchem as dc
tasks, datasets, transformers = dc.molnet.load_muv()
train_dataset, valid_dataset, test_dataset = datasets
train_smiles = train_dataset.ids
valid_smiles = valid_dataset.ids
```
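As a point of comparison with the learned embeddings built below, the sketch that follows computes a hand-designed ECFP fingerprint with DeepChem's `CircularFingerprint` featurizer. This is only an illustrative aside; the parameter values (`radius=2`, `size=1024`) are common defaults rather than settings taken from this tutorial.
```
# Illustrative aside: a fixed-length hand-designed fingerprint (ECFP) for the
# first few training molecules, for comparison with the learned embeddings.
# radius=2 and size=1024 are common choices, not values from this tutorial.
ecfp_featurizer = dc.feat.CircularFingerprint(radius=2, size=1024)
ecfp = ecfp_featurizer.featurize(train_smiles[:5])
print(ecfp.shape)  # (5, 1024): one fixed-length vector per molecule
```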
We need to define the "alphabet" for our `SeqToSeq` model, the list of all tokens that can appear in sequences. (It's also possible for input and output sequences to have different alphabets, but since we're training it as an autoencoder, they're identical in this case.) Make a list of every character that appears in any training sequence.
```
tokens = set()
for s in train_smiles:
tokens = tokens.union(set(c for c in s))
tokens = sorted(list(tokens))
```
Create the model and define the optimization method to use. In this case, learning works much better if we gradually decrease the learning rate. We use an `ExponentialDecay` to multiply the learning rate by 0.9 after each epoch.
```
from deepchem.models.optimizers import Adam, ExponentialDecay
max_length = max(len(s) for s in train_smiles)
batch_size = 100
batches_per_epoch = len(train_smiles)/batch_size
model = dc.models.SeqToSeq(tokens,
tokens,
max_length,
encoder_layers=2,
decoder_layers=2,
embedding_dimension=256,
model_dir='fingerprint',
batch_size=batch_size,
learning_rate=ExponentialDecay(0.004, 0.9, batches_per_epoch))
```
Let's train it! The input to `fit_sequences()` is a generator that produces input/output pairs. On a good GPU, this should take a few hours or less.
```
def generate_sequences(epochs):
for i in range(epochs):
for s in train_smiles:
yield (s, s)
model.fit_sequences(generate_sequences(40))
```
Let's see how well it works as an autoencoder. We'll run the first 500 molecules from the validation set through it, and see how many of them are exactly reproduced.
```
predicted = model.predict_from_sequences(valid_smiles[:500])
count = 0
for s,p in zip(valid_smiles[:500], predicted):
if ''.join(p) == s:
count += 1
print('reproduced', count, 'of 500 validation SMILES strings')
```
Now we'll try using the encoder as a way to generate molecular fingerprints. We compute the embedding vectors for all molecules in the training and validation datasets, and create new datasets that have those as their feature vectors. The amount of data is small enough that we can just store everything in memory.
```
train_embeddings = model.predict_embeddings(train_smiles)
train_embeddings_dataset = dc.data.NumpyDataset(train_embeddings,
train_dataset.y,
train_dataset.w,
train_dataset.ids)
valid_embeddings = model.predict_embeddings(valid_smiles)
valid_embeddings_dataset = dc.data.NumpyDataset(valid_embeddings,
valid_dataset.y,
valid_dataset.w,
valid_dataset.ids)
```
For classification, we'll use a simple fully connected network with one hidden layer.
```
classifier = dc.models.MultitaskClassifier(n_tasks=len(tasks),
n_features=256,
layer_sizes=[512])
classifier.fit(train_embeddings_dataset, nb_epoch=10)
```
Find out how well it worked. Compute the ROC AUC for the training and validation datasets.
```
import numpy as np
metric = dc.metrics.Metric(dc.metrics.roc_auc_score, np.mean, mode="classification")
train_score = classifier.evaluate(train_embeddings_dataset, [metric], transformers)
valid_score = classifier.evaluate(valid_embeddings_dataset, [metric], transformers)
print('Training set ROC AUC:', train_score)
print('Validation set ROC AUC:', valid_score)
```
# Congratulations! Time to join the Community!
Congratulations on completing this tutorial notebook! If you enjoyed working through the tutorial, and want to continue working with DeepChem, we encourage you to finish the rest of the tutorials in this series. You can also help the DeepChem community in the following ways:
## Star DeepChem on [GitHub](https://github.com/deepchem/deepchem)
This helps build awareness of the DeepChem project and the tools for open source drug discovery that we're trying to build.
## Join the DeepChem Gitter
The DeepChem [Gitter](https://gitter.im/deepchem/Lobby) hosts a number of scientists, developers, and enthusiasts interested in deep learning for the life sciences. Join the conversation!
```
%matplotlib inline
from mpl_toolkits.mplot3d import Axes3D
import scipy.io as io
import numpy as np
import matplotlib.pyplot as plt
from math import ceil
from scipy.optimize import curve_fit
realization = 1000
import seaborn as sns
from matplotlib import cm
from array_response import *
import itertools
mat = io.loadmat('boundary.mat')
bound1_para = mat['bound1_para'][0,:]
bound2_para = mat['bound2_para'][0,:]
bound3_para = mat['bound3_para'][0,:]
bound4_1para = mat['bound4_1para'][0,:]
bound4_2para = mat['bound4_2para'][0,:]
bound4_3para = mat['bound4_3para'][0,:]
xlim_4_1 = mat['xlim_4_1'][0,0]
xlim_4_2 = mat['xlim_4_2'][0,:]
xlim_4_3 = mat['xlim_4_3'][0,0]
azi_rot = np.linspace(0,2*np.pi,50)
def func_sin(x, c, d):
return np.sin(2*np.pi*x*0.312 + c)*0.23 + d
test_1 = func_sin(azi_rot, *bound1_para)
test_2 = func_sin(azi_rot, *bound2_para)
bound3 = np.poly1d(bound3_para)
boud4_13 = np.poly1d(bound4_1para)
bound4_2 = np.poly1d(bound4_2para)
plt.plot(azi_rot,test_1)
plt.plot(azi_rot,test_2)
plt.plot(azi_rot,bound3(azi_rot))
plt.plot(azi_rot,boud4_13(azi_rot))
plt.plot(azi_rot,bound4_2(azi_rot))
plt.ylim(0,3.14)
def check_cate(_azi,_ele):
_index = ""
if ((_ele - bound3(_azi)) > 0):
if (((_azi<xlim_4_1) and ((_ele - boud4_13(_azi))<0)) or ((_azi>xlim_4_2[0]) and (_azi<xlim_4_2[1]) and ((_ele - bound4_2(_azi))<0)) or ((_azi>xlim_4_3) and ((_ele - boud4_13(_azi))<0))):
_index = "samecluster"
else:
_index = "diffclus_samepol"
else:
if ((_ele - func_sin(_azi, *bound2_para)) > 0):
_index = "diffclus_crosspol"
else:
if ((_ele - func_sin(_azi, *bound1_para)) > 0):
_index = "samecluster"
else:
_index = "diffclus_samepol"
return _index
```
### Parameters declaration
Declare parameters needed for channel realization
```
Ns = 1 # number of streams
Nc = 6 # number of cluster
Nray = 1 # number of rays in each cluster
Nt = 64 # number of transmit antennas
Nr = 16 # number of receive antennas
angle_sigma = 10/180*np.pi # standard deviation of the angles in azimuth and elevation both of Rx and Tx
gamma = np.sqrt((Nt*Nr)/(Nc*Nray))
realization = 1000 # equivalent to number of taking sample
count = 0
eps = 0.1 # 20dB isolation
sigma = np.sqrt(8/(1+eps**2))*1.37/1.14 # according to the normalization condition of H
```
### Channel Realization
Realize channel H for Dual-Polarized antenna array
```
H_pol = np.zeros((2*Nr,2*Nt,realization),dtype=complex)
At = np.zeros((Nt,Nc*Nray,realization),dtype=complex)
Ar = np.zeros((Nr,Nc*Nray,realization),dtype=complex)
alpha_hh = np.zeros((Nc*Nray,realization),dtype=complex)
alpha_hv = np.zeros((Nc*Nray,realization),dtype=complex)
alpha_vh = np.zeros((Nc*Nray,realization),dtype=complex)
alpha_vv = np.zeros((Nc*Nray,realization),dtype=complex)
AoD = np.zeros((2,Nc*Nray),dtype=complex)
AoA = np.zeros((2,Nc*Nray),dtype=complex)
H = np.zeros((2*Nr,2*Nt,realization),dtype=complex)
azi_rot = np.random.normal(1.7,0.3,realization)
ele_rot = np.random.normal(2.3,0.3,realization) # Why PI/2 ??
# azi_rot = np.random.uniform(0,2*np.pi,realization)
# ele_rot = np.random.uniform(0,np.pi,realization) # Why PI/2 ??
R = np.array([[np.cos(ele_rot)*np.cos(azi_rot),np.sin(ele_rot)],[-np.sin(ele_rot)*np.cos(azi_rot),np.cos(ele_rot)]]) # rotation matrix
for reali in range(realization):
for c in range(1,Nc+1):
AoD_azi_m = np.random.uniform(0,2*np.pi,1) # Mean Angle of Departure _ azimuth
AoD_ele_m = np.random.uniform(0,np.pi,1) # Mean Angle of Departure _ elevation
AoA_azi_m = np.random.uniform(0,2*np.pi,1) # Mean Angle of Arrival_ azimuth
AoA_ele_m = np.random.uniform(0,np.pi,1) # Mean Angle of Arrival_ elevation
AoD[0,(c-1)*Nray:Nray*c] = np.random.laplace(AoD_azi_m, angle_sigma, (1,Nray))
AoD[1,(c-1)*Nray:Nray*c] = np.random.laplace(AoD_ele_m, angle_sigma, (1,Nray))
AoA[0,(c-1)*Nray:Nray*c] = np.random.laplace(AoA_azi_m, angle_sigma, (1,Nray))
AoA[1,(c-1)*Nray:Nray*c] = np.random.laplace(AoA_ele_m, angle_sigma, (1,Nray))
for j in range(Nc*Nray):
At[:,j,reali] = array_response(AoD[0,j],AoD[1,j],Nt)/np.sqrt(2) # UPA array response
Ar[:,j,reali] = array_response(AoA[0,j],AoA[1,j],Nr)/np.sqrt(2)
var_hh = ((sigma**2)*(np.cos(AoD[0,j])**2)*(np.cos(AoA[0,j])**2)).real
var_hv = ((eps**2)*(sigma**2)*(np.cos(AoD[1,j])**2)*(np.cos(AoA[0,j])**2)).real
var_vh = ((eps**2)*(sigma**2)*(np.cos(AoD[0,j])**2)*(np.cos(AoA[1,j])**2)).real
var_vv = ((sigma**2)*(np.cos(AoD[1,j])**2)*(np.cos(AoA[1,j])**2)).real
alpha_hh[j,reali] = np.random.normal(0, np.sqrt(var_hh/2)) + 1j*np.random.normal(0, np.sqrt(var_hh/2))
alpha_hv[j,reali] = np.random.normal(0, np.sqrt(var_hv/2)) + 1j*np.random.normal(0, np.sqrt(var_hv/2))
alpha_vh[j,reali] = np.random.normal(0, np.sqrt(var_vh/2)) + 1j*np.random.normal(0, np.sqrt(var_vh/2))
alpha_vv[j,reali] = np.random.normal(0, np.sqrt(var_vv/2)) + 1j*np.random.normal(0, np.sqrt(var_vv/2))
alpha = np.vstack((np.hstack((alpha_hh[j,reali],alpha_hv[j,reali])),np.hstack((alpha_vh[j,reali],alpha_vv[j,reali]))))
H_pol[:,:,reali] = H_pol[:,:,reali] + np.kron(alpha,Ar[:,[j],reali]@At[:,[j],reali].conj().T)
H_pol[:,:,reali] = 2*gamma* H_pol[:,:,reali]
H[:,:,reali] = (np.kron(R[:,:,reali],np.eye(Nr)))@H_pol[:,:,reali]
H[:,:,reali] = np.sqrt(4/3)* H[:,:,reali]
```
### Check normalized condition
```
channel_fro_2 = np.zeros(realization)
for reali in range(realization):
channel_fro_2[reali] = np.linalg.norm(H[:,:,reali],'fro')
print("4*Nt*Nr =", 4*Nt*Nr , " Frobenius norm =", np.mean(channel_fro_2**2))
cluster = np.arange(Nc)
print(cluster)
c = list(itertools.combinations(cluster, 2))
num_path = (2*Nc-1)*Nc
path_combi = np.zeros((num_path,4),dtype=int)
print(path_combi.shape)
path_combi[0:Nc,:]=np.arange(Nc).reshape(Nc,1).repeat(4,axis=1)
count = 0
for i in range(int(Nc*(Nc-1)/2)):
path_combi[Nc+4*i,:] = np.array([c[count][0],c[count][0],c[count][1],c[count][1]])
path_combi[Nc+4*i+1,:] = np.array([c[count][1],c[count][1],c[count][0],c[count][0]])
path_combi[Nc+4*i+2,:] = np.array([c[count][0],c[count][1],c[count][1],c[count][0]])
path_combi[Nc+4*i+3,:] = np.array([c[count][1],c[count][0],c[count][0],c[count][1]])
count = count+1
cross_index = []
samepolar_index = []
count = Nc-1
while (count<num_path-4):
cross_index.extend([count+3,count+4])
samepolar_index.extend([count+1,count+2])
count = count + 4
cross_index = np.array(cross_index)
samepolar_index = np.array(samepolar_index)
sameclus_index = np.arange(0,Nc)
print(cross_index)
print(samepolar_index)
print(sameclus_index)
# print(path_combi)
path_gain = np.zeros((num_path,realization)) # 2 to save the position and maximum value
for reali in range(realization):
for combi in range(num_path):
path_gain[combi,reali] =\
(np.abs\
((np.cos(ele_rot[reali])*np.cos(azi_rot[reali])*alpha_hh[path_combi[combi,0],reali]+np.sin(ele_rot[reali])*alpha_vh[path_combi[combi,0],reali])*(path_combi[combi,0]==path_combi[combi,1])+\
(np.cos(ele_rot[reali])*np.cos(azi_rot[reali])*alpha_hv[path_combi[combi,2],reali]+np.sin(ele_rot[reali])*alpha_vv[path_combi[combi,2],reali])*(path_combi[combi,2]==path_combi[combi,1])+\
(-np.sin(ele_rot[reali])*np.cos(azi_rot[reali])*alpha_hh[path_combi[combi,0],reali]+np.cos(ele_rot[reali])*alpha_vh[path_combi[combi,0],reali])*(path_combi[combi,0]==path_combi[combi,3])+\
(-np.sin(ele_rot[reali])*np.cos(azi_rot[reali])*alpha_hv[path_combi[combi,2],reali]+np.cos(ele_rot[reali])*alpha_vv[path_combi[combi,2],reali])*(path_combi[combi,2]==path_combi[combi,3])
))**2
print(np.max(path_gain[0:Nc,2]))
print(path_gain[0:Nc,2])
print(path_gain[samepolar_index,2])
print(np.max(path_gain[samepolar_index,2]))
```
__Check maximum gain from combination of path in each realization__
```
index = np.zeros(realization,dtype=int)
for reali in range(realization):
index[reali] = np.argmax(path_gain[:,reali])
```
__Same Cluster__
```
index_sameclus = np.zeros(realization,dtype=int)
for reali in range(realization):
index_sameclus[reali] = np.argmax(path_gain[0:Nc,reali])
gain_sameclus = np.zeros(realization,dtype=float)
for reali in range(realization):
gain_sameclus[reali] = path_gain[index_sameclus[reali],reali]
```
__Chosen Category before check__
```
choosen_cate = ["" for x in range(realization)]
index_checkcate = np.zeros(realization,dtype=int)
cate = ""
temp = 0
for reali in range(realization):
cate = check_cate(azi_rot[reali],ele_rot[reali])
if (cate == "samecluster"):
index_checkcate[reali] = np.argmax(path_gain[0:Nc,reali])
if (cate == "diffclus_samepol"):
temp = np.argmax(path_gain[samepolar_index,reali])
index_checkcate[reali] = int(temp+(np.floor(temp/2))*2+Nc)
# index_checkcate[reali] = np.argmax(path_gain[samepolar_index,reali])
if (cate == "diffclus_crosspol"):
# index_checkcate[reali] = np.argmax(path_gain[cross_index,reali])
temp = np.argmax(path_gain[cross_index,reali])
index_checkcate[reali] = int(temp+(np.floor(temp/2)+1)*2+Nc)
choosen_cate[reali] = cate
temp = 0
```
### Plot Spectral Efficiency
```
SNR_dB = np.arange(-35,10,5)
SNR = 10**(SNR_dB/10)
smax = SNR.shape[0]
R_cross = np.zeros([smax, realization],dtype=complex)
# R_steer = np.zeros([smax, realization],dtype=complex)
R_samecl = np.zeros([smax, realization],dtype=complex)
R_checkcate = np.zeros([smax, realization],dtype=complex)
for reali in range(realization):
_chosen_combi_path = path_combi[index[reali]]
_chosen_checkcate_path = path_combi[index_checkcate[reali]]
# _chosen_checkcate_path = path_combi[:,reali]
_chosen_sameclus_path = path_combi[index_sameclus[reali]]
W_cross = np.vstack((Ar[:,[_chosen_combi_path[1]],reali],Ar[:,[_chosen_combi_path[3]],reali]))
F_cross = np.vstack((At[:,[_chosen_combi_path[0]],reali],At[:,[_chosen_combi_path[2]],reali]))
W_checkcate = np.vstack((Ar[:,[_chosen_checkcate_path[1]],reali],Ar[:,[_chosen_checkcate_path[3]],reali]))
F_checkcate = np.vstack((At[:,[_chosen_checkcate_path[0]],reali],At[:,[_chosen_checkcate_path[2]],reali]))
# W_steer = np.vstack((Ar[:,[_chosen_steer_path[0]],reali],Ar[:,[_chosen_steer_path[1]],reali]))
# F_steer = np.vstack((At[:,[_chosen_steer_path[0]],reali],At[:,[_chosen_steer_path[1]],reali]))
W_samecl = np.vstack((Ar[:,[_chosen_sameclus_path[1]],reali],Ar[:,[_chosen_sameclus_path[3]],reali]))
F_samecl = np.vstack((At[:,[_chosen_sameclus_path[0]],reali],At[:,[_chosen_sameclus_path[2]],reali]))
for s in range(smax):
R_cross[s,reali] = np.log2(np.linalg.det(np.eye(Ns)+(SNR[s]/Ns)*np.linalg.pinv(W_cross)@H[:,:,reali]@F_cross@F_cross.conj().T@H[:,:,reali].conj().T@W_cross))
R_checkcate[s,reali] = np.log2(np.linalg.det(np.eye(Ns)+(SNR[s]/Ns)*np.linalg.pinv(W_checkcate)@H[:,:,reali]@F_checkcate@F_checkcate.conj().T@H[:,:,reali].conj().T@W_checkcate))
R_samecl[s,reali] = np.log2(np.linalg.det(np.eye(Ns)+(SNR[s]/Ns)*np.linalg.pinv(W_samecl)@H[:,:,reali]@F_samecl@F_samecl.conj().T@H[:,:,reali].conj().T@W_samecl))
x = np.linalg.norm(F_cross,'fro')
print("Ns", Ns , " Frobenius norm FRF*FBB=", x**2)
plt.plot(SNR_dB, (np.sum(R_cross,axis=1).real)/realization, label='joint polarization beam steering')
plt.plot(SNR_dB, (np.sum(R_checkcate,axis=1).real)/realization, label='one category beam steering')
plt.plot(SNR_dB, (np.sum(R_samecl,axis=1).real)/realization, label='same ray beam steering')
plt.legend(loc='upper left',prop={'size': 9})
plt.xlabel('SNR(dB)',fontsize=11)
plt.ylabel('Spectral Efficiency (bits/s/Hz)',fontsize=11)
plt.tick_params(axis='both', which='major', labelsize=9)
plt.ylim(0,11)
plt.grid()
plt.show()
```
# Fit $k_{ij}$ and $r_c^{ABij}$ interactions parameter of Ethanol and CPME
---
Let's call $\underline{\xi}$ the optimization parameters of a mixture. In order to optimize them, you need to provide experimental phase equilibria data. This can include VLE, LLE and VLLE data. The objective function used for each equilibria type are shown below:
### Vapor-Liquid Equilibria Data
$$ OF_{VLE}(\underline{\xi}) = w_y \sum_{j=1}^{Np} \left[ \sum_{i=1}^c (y_{i,j}^{cal} - y_{i,j}^{exp})^2 \right] + w_P \sum_{j=1}^{Np} \left[ \frac{P_{j}^{cal} - P_{j}^{exp}}{P_{j}^{exp}} \right]^2$$
Here, $Np$ is the number of experimental data points, $y_i$ is the vapor molar fraction of component $i$ and $P$ is the bubble pressure. The superscripts $cal$ and $exp$ refer to the computed and experimental values, respectively. Finally, $w_y$ is the weight for the vapor composition error and $w_P$ is the weight for the bubble pressure error.
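As a rough illustration (not part of sgtpy, which builds this objective internally), the VLE objective above could be written with NumPy as sketched below, assuming you already have arrays of computed and experimental vapor compositions and bubble pressures.
```
import numpy as np

# Hedged sketch of the VLE objective function defined above.
# y_cal, y_exp: arrays of computed/experimental vapor mole fractions
# P_cal, P_exp: arrays of computed/experimental bubble pressures
def of_vle(y_cal, y_exp, P_cal, P_exp, wy=1.0, wP=1.0):
    error_y = np.sum((y_cal - y_exp)**2)
    error_P = np.sum(((P_cal - P_exp) / P_exp)**2)
    return wy * error_y + wP * error_P
```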
### Liquid-Liquid Equilibria Data
$$ OF_{LLE}(\underline{\xi}) = w_x \sum_{j=1}^{Np} \sum_{i=1}^c \left[x_{i,j}^{cal} - x_{i,j}^{exp}\right]^2 + w_w \sum_{j=1}^{Np} \sum_{i=1}^c \left[ w_{i,j}^{cal} - w_{i,j}^{exp} \right]^2 $$
Here, $Np$ is the number of experimental data points, and $x_i$ and $w_i$ are the molar fractions of component $i$ in the two liquid phases. The superscripts $cal$ and $exp$ refer to the computed and experimental values, respectively. Finally, $w_x$ and $w_w$ are the weights for the liquid 1 ($x$) and liquid 2 ($w$) composition errors.
### Vapor-Liquid-Liquid Equilibria Data
$$ OF_{VLLE}(\underline{\xi}) = w_x \sum_{j=1}^{Np} \sum_{i=1}^c \left[x_{i,j}^{cal} - x_{i,j}^{exp}\right]^2 + w_w \sum_{j=1}^{Np} \sum_{i=1}^c \left[w_{i,j}^{cal} - w_{i,j}^{exp}\right]^2 + w_y \sum_{j=1}^{Np} \sum_{i=1}^c \left[y_{i,j}^{cal} - y_{i,j}^{exp}\right]^2 + w_P \sum_{j=1}^{Np} \left[ \frac{P_{j}^{cal}}{P_{j}^{exp}} - 1\right]^2 $$
Here, $Np$ is the number of experimental data points, and $y_i$, $x_i$ and $w_i$ are the molar fractions of component $i$ in the vapor and the two liquid phases, respectively. The superscripts $cal$ and $exp$ refer to the computed and experimental values, respectively. Finally, $w_x$ and $w_w$ are the weights for the liquid 1 ($x$) and liquid 2 ($w$) composition errors, $w_y$ is the weight for the vapor composition error and $w_P$ is the weight for the three-phase equilibrium pressure error.
If there is data for more than one equilibrium type, the errors can be added accordingly, so the objective function becomes:
$$ OF(\underline{\xi}) = OF_{VLE}(\underline{\xi}) + OF_{LLE}(\underline{\xi}) + OF_{VLLE}(\underline{\xi})$$
---
This notebook has the purpose of showing how to optimize the $k_{ij}$ and $r_c^{ABij}$ for a mixture with induced association. For these mixtures the interaction parameters are shown below:
$$ \epsilon_{ij} = (1-k_{ij}) \frac{\sqrt{\sigma_i^3 \sigma_j^3}}{\sigma_{ij}^3} \sqrt{\epsilon_i \epsilon_j} ;\quad \epsilon_{ij}^{AB} = \frac{\epsilon^{AB} \text{ (self-associating)}}{2} ;\quad r^{ABij}_c \text{ (fitted)}$$
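For illustration only, the cross energy parameter above can be evaluated for a given $k_{ij}$ as sketched below; the arithmetic-mean combining rule $\sigma_{ij} = (\sigma_i + \sigma_j)/2$ is an assumption here, since it is not stated explicitly in this notebook.
```
import numpy as np

# Hedged sketch of the cross energy parameter for a given kij.
# sigma_ij = (sigma_i + sigma_j) / 2 is assumed (not stated in this notebook).
def eps_cross(kij, sigma_i, sigma_j, eps_i, eps_j):
    sigma_ij = 0.5 * (sigma_i + sigma_j)
    return (1.0 - kij) * np.sqrt(sigma_i**3 * sigma_j**3) / sigma_ij**3 * np.sqrt(eps_i * eps_j)

# Example with the ethanol/CPME parameters and initial kij guess used later in this notebook
print(eps_cross(0.01015194, 3.5592, 4.13606074, 224.50, 343.91193798))
```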
First, we need to import the necessary modules.
```
import numpy as np
from sgtpy import component, mixture, saftvrmie
from sgtpy.fit import fit_cross
```
Now that the functions are available it is necessary to create the mixture.
```
ethanol = component('ethanol2C', ms = 1.7728, sigma = 3.5592 , eps = 224.50,
lambda_r = 11.319, lambda_a = 6., eAB = 3018.05, rcAB = 0.3547,
rdAB = 0.4, sites = [1,0,1], cii= 5.3141080872882285e-20)
cpme = component('cpme', ms = 2.32521144, sigma = 4.13606074, eps = 343.91193798, lambda_r = 14.15484877,
lambda_a = 6.0, npol = 1.91990385,mupol = 1.27, sites =[0,0,1], cii = 3.5213681817448466e-19)
mix = mixture(ethanol, cpme)
```
Now the experimental equilibria data is read and a tuple is created. It includes the experimental liquid composition, vapor composition, equilibrium temperature and pressure. This is done with ```datavle = (Xexp, Yexp, Texp, Pexp)```
```
# Experimental data obtained from Mejia, Cartes, J. Chem. Eng. Data, vol. 64, no. 5, pp. 1970–1977, 2019
# Experimental temperature saturation in K
Texp = np.array([355.77, 346.42, 342.82, 340.41, 338.95, 337.78, 336.95, 336.29,
335.72, 335.3 , 334.92, 334.61, 334.35, 334.09, 333.92, 333.79,
333.72, 333.72, 333.81, 334.06, 334.58])
# Experimental pressure in Pa
Pexp = np.array([50000., 50000., 50000., 50000., 50000., 50000., 50000., 50000.,
50000., 50000., 50000., 50000., 50000., 50000., 50000., 50000.,
50000., 50000., 50000., 50000., 50000.])
# Experimental liquid composition
Xexp = np.array([[0. , 0.065, 0.11 , 0.161, 0.203, 0.253, 0.301, 0.351, 0.402,
0.446, 0.497, 0.541, 0.588, 0.643, 0.689, 0.743, 0.785, 0.837,
0.893, 0.947, 1. ],
[1. , 0.935, 0.89 , 0.839, 0.797, 0.747, 0.699, 0.649, 0.598,
0.554, 0.503, 0.459, 0.412, 0.357, 0.311, 0.257, 0.215, 0.163,
0.107, 0.053, 0. ]])
# Experimental vapor composition
Yexp = np.array([[0. , 0.302, 0.411, 0.48 , 0.527, 0.567, 0.592, 0.614, 0.642,
0.657, 0.678, 0.694, 0.71 , 0.737, 0.753, 0.781, 0.801, 0.837,
0.883, 0.929, 1. ],
[1. , 0.698, 0.589, 0.52 , 0.473, 0.433, 0.408, 0.386, 0.358,
0.343, 0.322, 0.306, 0.29 , 0.263, 0.247, 0.219, 0.199, 0.163,
0.117, 0.071, 0. ]])
datavle = (Xexp, Yexp, Texp, Pexp)
```
The function ```fit_cross``` optimizes the $k_{ij}$ correction and the $r_c^{ABij}$ distance. An initial guess is needed, as well as the mixture object, the index of the self-associating component and the equilibria data. Optionally, the ```minimize_options``` parameter allows modifying the minimizer's default settings.
```
#initial guesses for kij and rcij
x0 = [0.01015194, 2.23153033]
fit_cross(x0, mix, assoc=0, datavle=datavle)
```
If the mixture exhibits other equilibria types you can supply this experimental data to the ``datalle`` or ``datavlle`` parameters.
- ``datalle``: (Xexp, Wexp, Texp, Pexp)
- ``datavlle``: (Xexp, Wexp, Yexp, Texp, Pexp)
You can specify the weights for each objective function through the following parameters:
- ``weights_vle``: list or array_like, weights for the VLE objective function.
- weights_vle[0] = weight for Y composition error, default to 1.
- weights_vle[1] = weight for bubble pressure error, default to 1.
- ``weights_lle``: list or array_like, weights for the LLE objective function.
- weights_lle[0] = weight for X (liquid 1) composition error, default to 1.
- weights_lle[1] = weight for W (liquid 2) composition error, default to 1.
- ``weights_vlle``: list or array_like, weights for the VLLE objective function.
- weights_vlle[0] = weight for X (liquid 1) composition error, default to 1.
- weights_vlle[1] = weight for W (liquid 2) composition error, default to 1.
- weights_vlle[2] = weight for Y (vapor) composition error, default to 1.
- weights_vlle[3] = weight for equilibrium pressure error, default to 1.
Additionally, you can pass options to SciPy's ``minimize`` function using the ``minimize_options`` parameter.
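As a hypothetical example (the keyword names follow the text above, but check the exact signature with `fit_cross?` before relying on it), a call combining weights and minimizer options could look like this:
```
# Hypothetical call -- keyword names follow the description above; verify the
# exact accepted structure of minimize_options with fit_cross?.
fit_cross(x0, mix, assoc=0, datavle=datavle,
          weights_vle=[1.0, 2.0],          # vapor composition error x1, bubble pressure error x2
          minimize_options={'maxiter': 200})
```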
For more information just run:
```fit_cross?```
<table width=60%>
<tr style="background-color: white;">
<td><img src='https://www.creativedestructionlab.com/wp-content/uploads/2018/05/xanadu.jpg'></td>
</tr>
</table>
---
<img src='https://raw.githubusercontent.com/XanaduAI/strawberryfields/master/doc/_static/strawberry-fields-text.png'>
---
<br>
<center> <h1> Gaussian boson sampling tutorial </h1></center>
To get a feel for how Strawberry Fields works, let's try coding a quantum program, Gaussian boson sampling.
## Background information: Gaussian states
A Gaussian state is one that can be described by a [Gaussian function](https://en.wikipedia.org/wiki/Gaussian_function) in phase space. For example, a single mode Gaussian state, squeezed in the $x$ quadrature by the squeezing operator $S(r)$, can be described by the following [Wigner quasiprobability distribution](https://en.wikipedia.org/wiki/Wigner_quasiprobability_distribution):
$$W(x,p) = \frac{2}{\pi}e^{-2\sigma^2(x-\bar{x})^2 - 2(p-\bar{p})^2/\sigma^2}$$
where $\sigma$ represents the **squeezing**, and $\bar{x}$ and $\bar{p}$ are the mean **displacements** in position and momentum, respectively. For multimode states containing $N$ modes, this can be generalised; Gaussian states are uniquely defined by a [multivariate Gaussian function](https://en.wikipedia.org/wiki/Multivariate_normal_distribution), defined in terms of the **vector of means** $\mu$ and a **covariance matrix** $\sigma$.
### The position and momentum basis
For example, consider a single mode in the position and momentum quadrature basis (the default for Strawberry Fields). Assuming a Gaussian state with displacement $\alpha = \bar{x}+i\bar{p}$ and squeezing $\xi = r e^{i\phi}$ in the phase space, it has a vector of means and a covariance matrix given by:
$$ \mu = (\bar{x},\bar{p}),~~~~~~\sigma = SS^\dagger = R(\phi/2)\begin{bmatrix}e^{-2r} & 0 \\ 0 & e^{2r}\end{bmatrix}R(\phi/2)^T$$
where $S$ is the squeezing operator, and $R(\phi)$ is the standard two-dimensional rotation matrix. For multiple modes, in Strawberry Fields we use the convention
$$ \mu = (\bar{x}_1,\bar{x}_2,\dots,\bar{x}_N,\bar{p}_1,\bar{p}_2,\dots,\bar{p}_N)$$
and therefore, considering $\phi=0$ for convenience, the multimode covariance matrix is simply
$$\sigma = \text{diag}(e^{-2r_1},\dots,e^{-2r_N},e^{2r_1},\dots,e^{2r_N})\in\mathbb{C}^{2N\times 2N}$$
If a continuous-variable state *cannot* be represented in the above form (for example, a single photon Fock state or a cat state), then it is non-Gaussian.
### The annihilation and creation operator basis
If we are instead working in the creation and annihilation operator basis, we can use the transformation of the single mode squeezing operator
$$ S(\xi) \left[\begin{matrix}\hat{a}\\\hat{a}^\dagger\end{matrix}\right] = \left[\begin{matrix}\cosh(r)&-e^{i\phi}\sinh(r)\\-e^{-i\phi}\sinh(r)&\cosh(r)\end{matrix}\right] \left[\begin{matrix}\hat{a}\\\hat{a}^\dagger\end{matrix}\right]$$
resulting in
$$\sigma = SS^\dagger = \left[\begin{matrix}\cosh(2r)&-e^{i\phi}\sinh(2r)\\-e^{-i\phi}\sinh(2r)&\cosh(2r)\end{matrix}\right]$$
For multiple Gaussian states with non-zero squeezing, the covariance matrix in this basis simply generalises to
$$\sigma = \text{diag}(S_1S_1^\dagger,\dots,S_NS_N^\dagger)\in\mathbb{C}^{2N\times 2N}$$
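As a quick numerical sanity check (not part of the original derivation), the following snippet verifies the single-mode relation above for $\phi=0$:
```
import numpy as np

# Verify sigma = S S^dagger for a single mode with phi = 0:
# expected [[cosh(2r), -sinh(2r)], [-sinh(2r), cosh(2r)]]
r = 1.0
S = np.array([[np.cosh(r), -np.sinh(r)],
              [-np.sinh(r), np.cosh(r)]])
sigma = S @ S.conj().T
expected = np.array([[np.cosh(2*r), -np.sinh(2*r)],
                     [-np.sinh(2*r), np.cosh(2*r)]])
print(np.allclose(sigma, expected))  # True
```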
## Introduction to Gaussian boson sampling
<div class="alert alert-info">
“If you need to wait exponential time for your single photon sources to emit simultaneously, then there would seem to be no advantage over classical computation. This is the reason why so far, boson sampling has only been demonstrated with 3-4 photons. When faced with these problems, until recently, all we could do was shrug our shoulders.” - [Scott Aaronson](https://www.scottaaronson.com/blog/?p=1579)
</div>
While [boson sampling](https://en.wikipedia.org/wiki/Boson_sampling) allows the experimental implementation of a quantum sampling problem that is computationally hard classically, one of the main issues it has in experimental setups is one of **scalability**, due to its dependence on an array of simultaneously emitting single photon sources.
Currently, most physical implementations of boson sampling make use of a process known as [Spontaneous Parametric Down-Conversion](http://en.wikipedia.org/wiki/Spontaneous_parametric_down-conversion) to generate the single photon source inputs. Unfortunately, this method is non-deterministic - as the number of modes in the apparatus increases, the average time required until every photon source emits a simultaneous photon increases *exponentially*.
In order to simulate a *deterministic* single photon source array, several variations on boson sampling have been proposed; the most well known being scattershot boson sampling ([Lund, 2014](https://link.aps.org/doi/10.1103/PhysRevLett.113.100502)). However, a recent boson sampling variation by [Hamilton et al.](https://link.aps.org/doi/10.1103/PhysRevLett.119.170501) negates the need for single photon Fock states altogether, by showing that **incident Gaussian states** - in this case, single mode squeezed states - can produce problems in the same computational complexity class as boson sampling. Even more significantly, this negates the scalability problem with single photon sources, as single mode squeezed states can be easily simultaneously generated experimentally.
Aside from changing the input states from single photon Fock states to Gaussian states, the Gaussian boson sampling scheme appears quite similar to that of boson sampling:
1. $N$ single mode squeezed states $\left|{\xi_i}\right\rangle$, with squeezing parameters $\xi_i=r_ie^{i\phi_i}$, enter an $N$ mode linear interferometer with unitary $U$.
<br>
2. The output of the interferometer is denoted $\left|{\psi'}\right\rangle$. Each output mode is then measured in the Fock basis, $\bigotimes_i n_i\left|{n_i}\middle\rangle\middle\langle{n_i}\right|$.
Without loss of generality, we can absorb the squeezing parameter $\phi$ into the interferometer, and set $\phi=0$ for convenience. The covariance matrix **in the creation and annihilation operator basis** at the output of the interferometer is then given by:
$$\sigma_{out} = \frac{1}{2} \left[ \begin{matrix}U&0\\0&U^*\end{matrix} \right]\sigma_{in} \left[ \begin{matrix}U^\dagger&0\\0&U^T\end{matrix} \right]$$
Using phase space methods, [Hamilton et al.](https://link.aps.org/doi/10.1103/PhysRevLett.119.170501) showed that the probability of measuring a Fock state is given by
$$\left|\left\langle{n_1,n_2,\dots,n_N}\middle|{\psi'}\right\rangle\right|^2 = \frac{\left|\text{Haf}[(U\bigoplus_i\tanh(r_i)U^T)]_{st}\right|^2}{n_1!n_2!\cdots n_N!\sqrt{|\sigma_{out}+I/2|}},$$
i.e. the sampled single photon probability distribution is proportional to the **Hafnian** of a submatrix of $U\bigoplus_i\tanh(r_i)U^T$, dependent upon the output covariance matrix.
<div class="alert alert-success" style="border: 0px; border-left: 3px solid #119a68; color: black; background-color: #daf0e9">
<p style="color: #119a68;">**The Hafnian**</p>
The Hafnian of a matrix is defined by
<br><br>
$$\text{Haf}(A) = \frac{1}{n!2^n}\sum_{\sigma\in S_{2N}}\prod_{i=1}^N A_{\sigma(2i-1)\sigma(2i)}$$
<br>
$S_{2N}$ is the set of all permutations of $2N$ elements. In graph theory, the Hafnian calculates the number of perfect <a href="https://en.wikipedia.org/wiki/Matching_(graph_theory)">matchings</a> in an **arbitrary graph** with adjacency matrix $A$.
<br>
Compare this to the permanent, which calculates the number of perfect matchings on a *bipartite* graph - the Hafnian turns out to be a generalisation of the permanent, with the relationship
$$\begin{align}
\text{Per(A)} = \text{Haf}\left(\left[\begin{matrix}
0&A\\
A^T&0
\end{matrix}\right]\right)
\end{align}$$
As any algorithm that could calculate (or even approximate) the Hafnian could also calculate the permanent - a #P problem - it follows that calculating or approximating the Hafnian must also be a classically hard problem.
</div>
### Equally squeezed input states
In the case where all the input states are squeezed equally with squeezing factor $\xi=r$ (i.e. so $\phi=0$), we can simplify the denominator into a much nicer form. It can be easily seen that, due to the unitarity of $U$,
$$\left[ \begin{matrix}U&0\\0&U^*\end{matrix} \right] \left[ \begin{matrix}U^\dagger&0\\0&U^T\end{matrix} \right] = \left[ \begin{matrix}UU^\dagger&0\\0&U^*U^T\end{matrix} \right] =I$$
Thus, we have
$$\begin{align}
\sigma_{out} +\frac{1}{2}I &= \sigma_{out} + \frac{1}{2} \left[ \begin{matrix}U&0\\0&U^*\end{matrix} \right] \left[ \begin{matrix}U^\dagger&0\\0&U^T\end{matrix} \right] = \left[ \begin{matrix}U&0\\0&U^*\end{matrix} \right] \frac{1}{2} \left(\sigma_{in}+I\right) \left[ \begin{matrix}U^\dagger&0\\0&U^T\end{matrix} \right]
\end{align}$$
where we have substituted in the expression for $\sigma_{out}$. Taking the determinants of both sides, the two block diagonal matrices containing $U$ are unitary, and thus have determinant 1, resulting in
$$\left|\sigma_{out} +\frac{1}{2}I\right| =\left|\frac{1}{2}\left(\sigma_{in}+I\right)\right|=\left|\frac{1}{2}\left(SS^\dagger+I\right)\right| $$
By expanding out the right hand side, and using various trig identities, it is easy to see that this simply reduces to $\cosh^{2N}(r)$ where $N$ is the number of modes; thus the Gaussian boson sampling problem in the case of equally squeezed input modes reduces to
$$\left|\left\langle{n_1,n_2,\dots,n_N}\middle|{\psi'}\right\rangle\right|^2 = \frac{\left|\text{Haf}[(UU^T\tanh(r))]_{st}\right|^2}{n_1!n_2!\cdots n_N!\cosh^N(r)},$$
## The Gaussian boson sampling circuit
The multimode linear interferometer can be decomposed into two-mode beamsplitters (`BSgate`) and single-mode phase shifters (`Rgate`) (<a href="https://doi.org/10.1103/physrevlett.73.58">Reck, 1994</a>), allowing for an almost trivial translation into a continuous-variable quantum circuit.
For example, in the case of a 4 mode interferometer, with arbitrary $4\times 4$ unitary $U$, the continuous-variable quantum circuit for Gaussian boson sampling is given by
<img src="https://s3.amazonaws.com/xanadu-img/gaussian_boson_sampling.svg" width=70%/>
In the above,
* the single mode squeeze states all apply identical squeezing $\xi=r$,
* the detectors perform Fock state measurements (i.e. measuring the photon number of each mode),
* the parameters of the beamsplitters and the rotation gates determines the unitary $U$.
For $N$ input modes, we must have a minimum of $N$ columns in the beamsplitter array ([Clements, 2016](https://arxiv.org/abs/1603.08788)).
## Simulating boson sampling in Strawberry Fields
```
import strawberryfields as sf
from strawberryfields.ops import *
from strawberryfields.utils import random_interferometer
```
Strawberry Fields makes this easy; there is an `Interferometer` quantum operation, and a utility function that allows us to generate the matrix representing a random interferometer.
```
U = random_interferometer(4)
```
The lack of Fock states and non-linear operations means we can use the Gaussian backend to simulate Gaussian boson sampling. In this example program, we are using input states with squeezing parameter $\xi=1$, and the randomly chosen interferometer generated above.
```
eng = sf.Engine('gaussian')
gbs = sf.Program(4)
with gbs.context as q:
# prepare the input squeezed states
S = Sgate(1)
All(S) | q
# interferometer
Interferometer(U) | q
MeasureFock() | q
results = eng.run(gbs, run_options={"shots":10})
state = results.state
# Note: Running this cell will generate a warning. This is just the Gaussian backend of Strawberryfields telling us
# that, although it can carry out the MeasureFock operation, it will not update the state of the circuit after doing so,
# since the resulting state would be non-Gaussian. For this notebook, the warning can be safely ignored.
```
We can see the decomposed beamsplitters and rotation gates, by calling `eng.print_applied()`:
```
eng.print_applied()
```
<div class="alert alert-success" style="border: 0px; border-left: 3px solid #119a68; color: black; background-color: #daf0e9">
<p style="color: #119a68;">**Available decompositions**</p>
Check out our <a href="https://strawberryfields.readthedocs.io/en/stable/conventions/decompositions.html">documentation</a> to see the available CV decompositions available in Strawberry Fields.
</div>
We can also see some of the measurement samples from this circuit within `results.samples`. These correspond to independent runs of the Gaussian Boson Sampling circuit.
```
results.samples
```
## Analysis
Let's now verify the Gaussian boson sampling result, by comparing the output Fock state probabilities to the Hafnian, using the relationship
$$\left|\left\langle{n_1,n_2,\dots,n_N}\middle|{\psi'}\right\rangle\right|^2 = \frac{\left|\text{Haf}[(UU^T\tanh(r))]_{st}\right|^2}{n_1!n_2!\cdots n_N!\cosh^N(r)}$$
### Calculating the Hafnian
For the right hand side numerator, we first calculate the submatrix $[(UU^T\tanh(r))]_{st}$:
```
import numpy as np
B = (np.dot(U, U.T) * np.tanh(1))
```
In Gaussian boson sampling, we determine the submatrix by taking the rows and columns corresponding to the measured Fock state. For example, to calculate the submatrix in the case of the output measurement $\left|{1,1,0,0}\right\rangle$,
```
B[:,[0,1]][[0,1]]
```
To calculate the Hafnian in Python, we can use the direct definition
$$\text{Haf}(A) = \frac{1}{n!2^n} \sum_{\sigma \in S_{2n}} \prod_{j=1}^n A_{\sigma(2j - 1), \sigma(2j)}$$
Notice that this function counts each term in the definition multiple times, and renormalizes to remove the multiple counts by dividing by a factor of $n!2^n$. **This function is extremely slow!**
```
from itertools import permutations
from scipy.special import factorial
def Haf(M):
n = len(M)
m = int(n/2)
haf = 0.0
for i in permutations(range(n)):
prod = 1.0
for j in range(m):
prod *= M[i[2 * j], i[2 * j + 1]]
haf += prod
return haf / (factorial(m) * (2 ** m))
```
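As an optional sanity check of the permanent–Hafnian relation quoted earlier, the snippet below compares a brute-force permanent against `Haf` applied to the corresponding block matrix (only for tiny matrices, since both computations take exponential time):
```
import numpy as np
from itertools import permutations

# Brute-force permanent, only suitable for very small matrices.
def Per(A):
    n = len(A)
    total = 0.0
    for perm in permutations(range(n)):
        prod = 1.0
        for i in range(n):
            prod *= A[i, perm[i]]
        total += prod
    return total

A = np.array([[1.0, 2.0], [3.0, 4.0]])
block = np.block([[np.zeros((2, 2)), A], [A.T, np.zeros((2, 2))]])
print(Per(A), Haf(block))               # both 10.0
print(np.allclose(Per(A), Haf(block)))  # True
```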
## Comparing to the SF result
In Strawberry Fields, both Fock and Gaussian states have the method `fock_prob()`, which returns the probability of measuring that particular Fock state.
#### Let's compare the case of measuring at the output state $\left|0,1,0,1\right\rangle$:
```
B = (np.dot(U, U.T) * np.tanh(1))[:, [1, 3]][[1, 3]]
np.abs(Haf(B)) ** 2 / np.cosh(1) ** 4
state.fock_prob([0, 1, 0, 1])
```
#### For the measurement result $\left|2,0,0,0\right\rangle$:
```
B = (np.dot(U, U.T) * np.tanh(1))[:, [0, 0]][[0, 0]]
np.abs(Haf(B)) ** 2 / (2 * np.cosh(1) ** 4)
state.fock_prob([2, 0, 0, 0])
```
#### For the measurement result $\left|1,1,0,0\right\rangle$:
```
B = (np.dot(U, U.T) * np.tanh(1))[:, [0, 1]][[0, 1]]
np.abs(Haf(B)) ** 2 / np.cosh(1) ** 4
state.fock_prob([1, 1, 0, 0])
```
#### For the measurement result $\left|1,1,1,1\right\rangle$, this corresponds to the full matrix $B$:
```
B = (np.dot(U,U.T) * np.tanh(1))
np.abs(Haf(B)) ** 2 / np.cosh(1) ** 4
state.fock_prob([1, 1, 1, 1])
```
#### For the measurement result $\left|0,0,0,0\right\rangle$, this corresponds to a **null** submatrix, which has a Hafnian of 1:
```
1 / np.cosh(1) ** 4
state.fock_prob([0, 0, 0, 0])
```
As you can see, like in the boson sampling tutorial, they agree with almost negligible difference.
<div class="alert alert-success" style="border: 0px; border-left: 3px solid #119a68; color: black; background-color: #daf0e9">
<p style="color: #119a68;">**Exercises**</p>
Repeat this notebook with
<ol>
<li> A higher value for <tt>shots</tt> in <tt>eng.run()</tt>, and compare the relative probabilties of events with the expected values.</li>
<li> A Fock backend, such as the NumPy-based <tt>'fock'</tt> backend, instead of the Gaussian backend</li>
<li> Different beamsplitter and rotation parameters</li>
<li> Input states with *differing* squeezed values $r_i$. You will need to modify the code to take into account the fact that the output covariance matrix determinant must now be calculated!
</ol>
</div>
This notebook is part of the $\omega radlib$ documentation: https://docs.wradlib.org.
Copyright (c) $\omega radlib$ developers.
Distributed under the MIT License. See LICENSE.txt for more info.
# How to use wradlib's ipol module for interpolation tasks?
```
import wradlib.ipol as ipol
from wradlib.util import get_wradlib_data_file
from wradlib.vis import plot_ppi
import numpy as np
import matplotlib.pyplot as pl
import datetime as dt
import warnings
warnings.filterwarnings('ignore')
try:
get_ipython().magic("matplotlib inline")
except:
pl.ion()
```
### 1-dimensional example
Includes Nearest Neighbours, Inverse Distance Weighting, and Ordinary Kriging.
```
# Synthetic observations
xsrc = np.arange(10)[:, None]
vals = np.sin(xsrc).ravel()
# Define target coordinates
xtrg = np.linspace(0, 20, 100)[:, None]
# Set up interpolation objects
# IDW
idw = ipol.Idw(xsrc, xtrg)
# Nearest Neighbours
nn = ipol.Nearest(xsrc, xtrg)
# Ordinary Kriging
ok = ipol.OrdinaryKriging(xsrc, xtrg)
# Plot results
pl.figure(figsize=(10,5))
pl.plot(xsrc.ravel(), vals, 'bo', label="Observation")
pl.plot(xtrg.ravel(), idw(vals), 'r-', label="IDW interpolation")
pl.plot(xtrg.ravel(), nn(vals), 'k-', label="Nearest Neighbour interpolation")
pl.plot(xtrg.ravel(), ok(vals), 'g-', label="Ordinary Kriging")
pl.xlabel("Distance", fontsize="large")
pl.ylabel("Value", fontsize="large")
pl.legend(loc="lower right")
```
### 2-dimensional example
Includes Nearest Neighbours, Inverse Distance Weighting, Linear Interpolation, and Ordinary Kriging.
```
# Synthetic observations and source coordinates
src = np.vstack( (np.array([4, 7, 3, 15]), np.array([8, 18, 17, 3]))).transpose()
np.random.seed(1319622840)
vals = np.random.uniform(size=len(src))
# Target coordinates
xtrg = np.linspace(0, 20, 40)
ytrg = np.linspace(0, 20, 40)
trg = np.meshgrid(xtrg, ytrg)
trg = np.vstack( (trg[0].ravel(), trg[1].ravel()) ).T
# Interpolation objects
idw = ipol.Idw(src, trg)
nn = ipol.Nearest(src, trg)
linear = ipol.Linear(src, trg)
ok = ipol.OrdinaryKriging(src, trg)
# Subplot layout
def gridplot(interpolated, title=""):
pm = ax.pcolormesh(xtrg, ytrg, interpolated.reshape( (len(xtrg), len(ytrg)) ) )
pl.axis("tight")
ax.scatter(src[:, 0], src[:, 1], facecolor="None", s=50, marker='s')
pl.title(title)
pl.xlabel("x coordinate")
pl.ylabel("y coordinate")
# Plot results
fig = pl.figure(figsize=(8,8))
ax = fig.add_subplot(221, aspect="equal")
gridplot(idw(vals), "IDW")
ax = fig.add_subplot(222, aspect="equal")
gridplot(nn(vals), "Nearest Neighbours")
ax = fig.add_subplot(223, aspect="equal")
gridplot(np.ma.masked_invalid(linear(vals)), "Linear interpolation")
ax = fig.add_subplot(224, aspect="equal")
gridplot(ok(vals), "Ordinary Kriging")
pl.tight_layout()
```
### Using the convenience function ipol.interpolation in order to deal with missing values
**(1)** Exemplified for one dimension in space and two dimensions of the source value array (could e.g. be two time steps).
```
# Synthetic observations (e.g. two time steps)
src = np.arange(10)[:, None]
vals = np.hstack((1.+np.sin(src), 5. + 2.*np.sin(src)))
# Target coordinates
trg = np.linspace(0, 20, 100)[:, None]
# Here we introduce missing values in the second dimension of the source value array
vals[3:5, 1] = np.nan
# interpolation using the convenience function "interpolate"
idw_result = ipol.interpolate(src, trg, vals, ipol.Idw, nnearest=4)
nn_result = ipol.interpolate(src, trg, vals, ipol.Nearest)
# Plot results
fig = pl.figure(figsize=(10,5))
ax = fig.add_subplot(111)
pl1 = ax.plot(trg, idw_result, 'b-', label="IDW")
pl2 = ax.plot(trg, nn_result, 'k-', label="Nearest Neighbour")
pl3 = ax.plot(src, vals, 'ro', label="Observations")
```
**(2)** Exemplified for two dimensions in space and two dimensions of the source value array (e.g. time steps), containing also NaN values (here we only use IDW interpolation)
```
# Just a helper function for repeated subplots
def plotall(ax, trgx, trgy, src, interp, pts, title, vmin, vmax):
ix = np.where(np.isfinite(pts))
ax.pcolormesh(trgx, trgy, interp.reshape( (len(trgx),len(trgy) ) ), vmin=vmin, vmax=vmax )
ax.scatter(src[ix, 0].ravel(), src[ix, 1].ravel(), c=pts.ravel()[ix], s=20, marker='s',
vmin=vmin, vmax=vmax)
ax.set_title(title)
pl.axis("tight")
# Synthetic observations
src = np.vstack( (np.array([4, 7, 3, 15]), np.array([8, 18, 17, 3])) ).T
np.random.seed(1319622840 + 1)
vals = np.round(np.random.uniform(size=(len(src), 2)), 1)
# Target coordinates
trgx = np.linspace(0, 20, 100)
trgy = np.linspace(0, 20, 100)
trg = np.meshgrid(trgx, trgy)
trg = np.vstack((trg[0].ravel(), trg[1].ravel())).transpose()
result = ipol.interpolate(src, trg, vals, ipol.Idw, nnearest=4)
# Now introduce NaNs in the observations
vals_with_nan = vals.copy()
vals_with_nan[1, 0] = np.nan
vals_with_nan[1:3, 1] = np.nan
result_with_nan = ipol.interpolate(src, trg, vals_with_nan, ipol.Idw, nnearest=4)
vmin = np.concatenate((vals.ravel(), result.ravel())).min()
vmax = np.concatenate((vals.ravel(), result.ravel())).max()
fig = pl.figure(figsize=(8,8))
ax = fig.add_subplot(221)
plotall(ax, trgx, trgy, src, result[:, 0], vals[:, 0], '1st dim: no NaNs', vmin, vmax)
ax = fig.add_subplot(222)
plotall(ax, trgx, trgy, src, result[:, 1], vals[:, 1], '2nd dim: no NaNs', vmin, vmax)
ax = fig.add_subplot(223)
plotall(ax, trgx, trgy, src, result_with_nan[:, 0], vals_with_nan[:, 0], '1st dim: one NaN', vmin, vmax)
ax = fig.add_subplot(224)
plotall(ax, trgx, trgy, src, result_with_nan[:, 1], vals_with_nan[:, 1], '2nd dim: two NaN', vmin, vmax)
pl.tight_layout()
```
### How to use interpolation for gridding data in polar coordinates?
Read polar coordinates and corresponding rainfall intensity from file
```
filename = get_wradlib_data_file('misc/bin_coords_tur.gz')
src = np.loadtxt(filename)
filename = get_wradlib_data_file('misc/polar_R_tur.gz')
vals = np.loadtxt(filename)
src.shape
```
Define target grid coordinates
```
xtrg = np.linspace(src[:,0].min(), src[:,0].max(), 200)
ytrg = np.linspace(src[:,1].min(), src[:,1].max(), 200)
trg = np.meshgrid(xtrg, ytrg)
trg = np.vstack((trg[0].ravel(), trg[1].ravel())).T
```
Linear Interpolation
```
ip_lin = ipol.Linear(src, trg)
result_lin = ip_lin(vals.ravel(), fill_value=np.nan)
```
Nearest-neighbour interpolation (with a maximum search distance)
```
ip_near = ipol.Nearest(src, trg)
maxdist = trg[1,0] - trg[0,0]
result_near = ip_near(vals.ravel(), maxdist=maxdist)
```
Plot results
```
fig = pl.figure(figsize=(15, 6))
fig.subplots_adjust(wspace=0.4)
ax = fig.add_subplot(131, aspect="equal")
plot_ppi(vals, ax=ax)
ax = fig.add_subplot(132, aspect="equal")
pl.pcolormesh(xtrg, ytrg, result_lin.reshape( (len(xtrg), len(ytrg)) ) )
ax = fig.add_subplot(133, aspect="equal")
pl.pcolormesh(xtrg, ytrg, result_near.reshape( (len(xtrg), len(ytrg)) ) )
```
## Exploratory Data Analysis
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
import warnings
warnings.filterwarnings('ignore')
# read dataset
df = pd.read_csv('../datasets/winequality/winequality-red.csv',sep=';')
# check data dimensions
print(df.shape)
# check length
print(len(df))
# check number of dimensions of your DataFrame or Series
print(df.ndim)
# show the first five rows
print(df.head(5))
# show the last five rows
print(df.tail(5))
# show column names and their data types
df.dtypes
# return the number of non-missing values for each column of the DataFrame
print(df.count())
# change direction to get the count of non-missing values for each row
df.count(axis='columns')
# To print the metadata, use info()
print(df.info())
# show the columns
df.columns
```
### Sorting
A DataFrame can be sorted by the value of one of its variables (i.e. columns). For example, we can sort by alcohol content (use ascending=False to sort in descending order):
```
df.sort_values(by='alcohol', ascending=False).head()
```
Alternatively, we can also sort by multiple columns:
```
df.sort_values(by=['alcohol', 'quality'],
ascending=[True, False]).head()
```
### Indexing and retrieving data
DataFrame can be indexed in different ways.
To get a single column, you can use a DataFrame['Name'] construction. Let's use this to answer a question about that column alone: **what is the average alcohol content in our dataframe?**
```
df['alcohol'].mean()
```
### Applying Functions to Cells, Columns and Rows
**To apply functions to each column, use `apply():`**
```
df.apply(np.max)
```
The apply method can also be used to apply a function to each row. To do this, specify `axis=1`. `lambda` functions are very convenient in such scenarios. For example, if we need to select all wines with alcohol content greater than 6, we can do it like this:
```
df[df['alcohol'].apply(lambda alcohol: alcohol > 6)].head()
```
The `map` method can be used to **replace values in a column** by passing a dictionary of the form `{old_value: new_value}` as its argument:
```
d = {9.4: 100, 9.8: 200}  # keys must be numeric to match the float 'alcohol' column
df['alcohol'] = df['alcohol'].map(d)  # note: values not found in the dictionary become NaN
df.head()
```
The same thing can be done with the `replace` method:
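A minimal sketch (reusing the mapping from above on a fresh copy of the column; unlike `map`, `replace` leaves values that are not in the dictionary unchanged):
```
# Re-read the data to undo the mapping above, then apply the same mapping with replace
df_fresh = pd.read_csv('../datasets/winequality/winequality-red.csv', sep=';')
df_fresh['alcohol'] = df_fresh['alcohol'].replace({9.4: 100, 9.8: 200})
df_fresh.head()
```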
### Grouping
In general, grouping data in Pandas goes as follows:
`df.groupby(by=grouping_columns)[columns_to_show].function()`
1. First, the `groupby` method divides the grouping_columns by their values. They become a new index in the resulting dataframe.
2. Then, the columns of interest are selected (`columns_to_show`). If `columns_to_show` is not specified, all non-groupby columns are included.
3. Finally, one or several functions are applied to the obtained groups per selected columns.
Here is an example where we group the data according to the values of the `sulphates` variable and display statistics of three columns in each group:
```
columns_to_show = ['pH', 'chlorides', 'citric acid']
df.groupby(['sulphates'])[columns_to_show].describe(percentiles=[]).head()
```
Let’s do the same thing, but slightly differently by passing a list of functions to `agg()`:
```
columns_to_show = ['pH', 'chlorides', 'citric acid']
df.groupby(['sulphates'])[columns_to_show].agg([np.mean, np.std, np.min,
np.max]).head()
```
### Summary tables
Suppose we want to see how the observations in our sample are distributed in the context of two variables - `sulphates` and `quality`. To do so, we can build a contingency table using the `crosstab` method:
```
pd.crosstab(df['sulphates'], df['quality']).head()
pd.crosstab(df['sulphates'], df['quality'], normalize=True).head()
```
## First attempt on predicting wine quality
Let's see how wine quality is related to other measurements of the wine. We'll do this using a crosstab contingency table and also through visual analysis with Seaborn (visual analysis is covered more thoroughly in the following sections).
```
pd.crosstab(df['pH'], df['quality'], margins=True).head()
sns.countplot(x='density', hue='quality', data=df);
```
### Histogram
```
# create histogram
bin_edges = np.arange(0, df['residual sugar'].max() + 1, 1)
fig = plt.hist(df['residual sugar'], bins=bin_edges)
# add plot labels
plt.xlabel('residual sugar')
plt.ylabel('count')
plt.show()
```
### Scatterplot for continuous variables
```
# create scatterplot
fig = plt.scatter(df['pH'], df['residual sugar'])
# add plot labels
plt.xlabel('pH')
plt.ylabel('residual sugar')
plt.show()
```
### Scatterplot Matrix
```
# show columns
df.columns
# create scatterplot matrix
fig = sns.pairplot(data=df[['alcohol', 'pH', 'residual sugar', 'quality']],
hue='quality')
# pairplot labels its axes automatically, so no manual labels are needed
plt.show()
```
### Boxplots
- Distribution of data in terms of median and percentiles (median is the 50th percentile)
##### manual approach
```
percentiles = np.percentile(df['alcohol'], q=[25, 50, 75])
percentiles
for p in percentiles:
plt.axhline(p, color='black', linestyle='-')
plt.scatter(np.zeros(df.shape[0]) + 0.5, df['alcohol'])
iqr = percentiles[-1] - percentiles[0]
upper_whisker = min(df['alcohol'].max(), percentiles[-1] + iqr * 1.5)
lower_whisker = max(df['alcohol'].min(), percentiles[0] - iqr * 1.5)
plt.axhline(upper_whisker, color='black', linestyle='--')
plt.axhline(lower_whisker, color='black', linestyle='--')
plt.ylim([8, 16])
plt.ylabel('alcohol')
fig = plt.gca()
fig.axes.get_xaxis().set_ticks([])
plt.show()
```
#### using matplotlib.pyplot.boxplot approach
```
plt.boxplot(df['alcohol'])
plt.ylim([8, 16])
plt.ylabel('alcohol')
fig = plt.gca()
fig.axes.get_xaxis().set_ticks([])
plt.show()
# Assume density is the target variable
#descriptive statistics summary
df['density'].describe()
#histogram
sns.distplot(df['density']);
#skewness and kurtosis
print("Skewness: %f" % df['density'].skew())
print("Kurtosis: %f" % df['density'].kurt())
```
### Relationship with other continuous variables
```
# other variables are 'fixed acidity', 'volatile acidity', 'citric acid', 'residual sugar', 'chlorides', 'free sulfur dioxide', 'total sulfur dioxide', 'pH', 'sulphates', 'alcohol'
var = 'pH'
data = pd.concat([df['density'], df[var]], axis=1)
data.plot.scatter(x=var, y='density');
```
### Relationship with categorical variable
```
var = 'quality'
data = pd.concat([df['density'], df[var]], axis=1)
f, ax = plt.subplots(figsize=(8, 6))
fig = sns.boxplot(x=var, y="density", data=data)
```
#### Correlation matrix (heatmap style)
```
#correlation matrix
corrmat = df.corr()
f, ax = plt.subplots(figsize=(12, 9))
sns.heatmap(corrmat, vmax=.8, square=True);
```
#### `density` correlation matrix (zoomed heatmap style)
```
k = 10 #number of variables for heatmap
cols = corrmat.nlargest(k, 'density')['density'].index
cm = np.corrcoef(df[cols].values.T)
sns.set(font_scale=1.25)
hm = sns.heatmap(cm, cbar=True, annot=True, square=True, fmt='.2f', annot_kws={'size': 10}, yticklabels=cols.values, xticklabels=cols.values)
plt.show()
```
From the heatmap above we can see that the variable `density` is most strongly correlated with `fixed acidity`, `citric acid`, `total sulfur dioxide`, and `free sulfur dioxide`.
```
df.columns
#scatterplot
sns.set()
cols = ['fixed acidity', 'citric acid', 'total sulfur dioxide', 'free sulfur dioxide']
sns.pairplot(df[cols], height = 2.5)
plt.show();
```
### Missing data
Important questions when thinking about missing data:
- How prevalent is the missing data?
- Is missing data random or does it have a pattern?
```
#missing data
total = df.isnull().sum().sort_values(ascending=False)
percent = (df.isnull().sum()/df.isnull().count()).sort_values(ascending=False)
missing_data = pd.concat([total, percent], axis=1, keys=['Total', 'Percent'])
missing_data.head(20)
```
### Detailed Statistical Analysis
According to Hair et al. (2013), four assumptions should be tested:
**Normality** - When we talk about normality, we mean that the data should look like a normal distribution. This is important because several statistical tests rely on it (e.g. t-statistics). In this exercise we'll just check univariate normality for 'density' (which is a limited approach). Remember that univariate normality doesn't ensure multivariate normality (which is what we would like to have), but it helps. Another detail to take into account is that in big samples (>200 observations) normality is not such an issue. However, if we solve normality, we avoid a lot of other problems (e.g. heteroscedasticity), which is the main reason we are doing this analysis.
**Homoscedasticity** - Homoscedasticity refers to the 'assumption that dependent variable(s) exhibit equal levels of variance across the range of predictor variable(s)' (Hair et al., 2013). Homoscedasticity is desirable because we want the error term to be the same across all values of the independent variables.
**Linearity**- The most common way to assess linearity is to examine scatter plots and search for linear patterns. If patterns are not linear, it would be worthwhile to explore data transformations. However, we'll not get into this because most of the scatter plots we've seen appear to have linear relationships.
**Absence of correlated errors** - Correlated errors, as the name suggests, occur when one error is correlated with another. For instance, if a positive error is systematically paired with a negative error, there is a relationship between these variables. This occurs often in time series, where some patterns are time related. We'll also not get into this. However, if you detect something, try to add a variable that can explain the effect you're getting. That's the most common solution for correlated errors.
**Normality**
- Histogram - Kurtosis and skewness.
- Normal probability plot - Data distribution should closely follow the diagonal that represents the normal distribution.
```
#histogram and normal probability plot
sns.set_style('darkgrid')
sns.distplot(df['density']);
# Add labels
plt.title('Histogram of Density')
plt.xlabel('Density')
plt.ylabel('Count')
sns.distplot(df['density'], hist= True, kde=False)
help(sns.distplot)
```
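The checklist above also mentions a normal probability plot, which the cell does not draw; here is a minimal sketch using `scipy.stats.probplot` (assuming `scipy` is available):
```
from scipy import stats

# Normal probability (Q-Q) plot: points should follow the diagonal if 'density' is normally distributed
stats.probplot(df['density'], plot=plt)
plt.show()
```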
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import warnings
warnings.filterwarnings("ignore")
##### Functions
# 1st function: to graph time series based on TransactionDT vs the variable selected
def scatter(column):
    # Split the data into fraud / non-fraud transactions and plot each against TransactionDT
    fr, no_fr = (train[train['isFraud'] == 1], train[train['isFraud'] == 0])
    fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(15, 3))
    ax1.title.set_text(column + ' vs TransactionDT when isFraud == 0')
    ax1.set_ylim(train[column].min() - 1, train[column].max() + 1)
    ax1.scatter(x=no_fr['TransactionDT'], y=no_fr[column], color='blue', marker='o')
    ax2.title.set_text(column + ' vs TransactionDT when isFraud == 1')
    ax2.set_ylim(train[column].min() - 1, train[column].max() + 1)
    ax2.scatter(x=fr['TransactionDT'], y=fr[column], color='red', marker='o')
    plt.show()
# 2nd function: to show a ranking of pearson correlation with the variable selected
def corr(data,column):
print('Correlation with ' + column)
print(train[data].corrwith(train[column]).abs().sort_values(ascending = False)[1:])
# 3rd function: to reduce the groups based on NaN grouping and Pearson correlation,
# keeping from each group the column with the most unique values
def reduce(groups):
    result = list()
    for values in groups:
        maxval = 0
        val = values[0]
        for value in values:
            unique_values = train[value].nunique()
            if unique_values > maxval:
                maxval = unique_values
                val = value
        # Keep the column with the largest number of unique values in this group
        result.append(val)
    return result
# 4th function: to sort each column in ascending order based on its number
def order_finalcolumns(final_Xcolumns):
return sorted(final_Xcolumns, key=lambda x: int("".join([i for i in x if i.isdigit()])))
##### Download of files.
print('Downloading datasets...')
print(' ')
train = pd.read_pickle('/kaggle/input/1-fraud-detection-memory-reduction/train_mred.pkl')
print('Train has been downloaded... (1/2)')
test = pd.read_pickle('/kaggle/input/1-fraud-detection-memory-reduction/test_mred.pkl')
print('Test has been downloaded... (2/2)')
print(' ')
print('All files are downloaded')
##### All the columns of train dataset.
print(list(train))
```
# NaNs Exploration
We will search all the columns to determine which ones are related by the number of NaNs present. After grouping them, we keep from each group the column with the largest number of unique values (which we assume is the most informative variable).
## Transaction columns
```
# These columns are the first ones in transaction dataset.
columns= list(train.columns[:17])
columns
for col in columns:
print(f'{col} NaNs: {train[col].isna().sum()} | {train[col].isna().sum()/train.shape[0]:.2%}')
# Looking at the NaN percentages, most of these columns have little missing data. We keep all the columns where the NaN fraction is < 0.7
final_transactioncolumns = list()
for col in columns:
if train[col].isna().sum()/train.shape[0] < 0.7:
final_transactioncolumns.append(col)
print('Final Transaction columns:',final_transactioncolumns)
```
## C columns
```
##### Group the C columns to determine which columns are related by the number of NaNs present, and analyze each group independently.
columns = ['C' + str(i) for i in range(1,15)]
df_nan = train.isna()
dict_nans = dict()
for column in columns:
number_nans = df_nan[column].sum()
try:
dict_nans[number_nans].append(column)
except:
dict_nans[number_nans] = [column]
group_number = 1
for key,values in dict_nans.items():
print('Group {}'.format(group_number),'| Number of NANs =',key)
print(values)
print(' ')
group_number += 1
```
### Group 1 (single group)
```
##### Time series graph based on TransactionDT
# There is no column that does not have NaNs values so we get all the columns in the same group
group_list = ['C1', 'C2', 'C3', 'C4', 'C5', 'C6', 'C7', 'C8', 'C9', 'C10', 'C11', 'C12', 'C13', 'C14']
for column in group_list:
scatter(column)
##### Heatmap
plt.figure(figsize = (15,15))
sns.heatmap(train[group_list].corr(), cmap='RdBu_r', annot=True, center=0.0)
plt.show()
##### Ranking of pearson correlation.
for column in group_list:
corr(group_list,column)
print(' ')
##### Based on pearson correlation, we grouped together the columns with corr > 0.7
reduce_groups = [['C1','C11','C2','C6','C8','C4','C10','C14','C12','C7','C13'], ['C3'], ['C5','C9']]
result = reduce(reduce_groups)
print('Final C columns:',result)
final_ccolumns = result
```
## D columns
```
##### Group the D columns + Dachr columns to determine which columns are related by the number of NaNs present, and analyze each group independently.
columns = ['D' + str(i) for i in range(1,16)]
columns.extend(['D1achr','D2achr','D4achr','D6achr','D10achr','D11achr','D12achr','D13achr','D14achr','D15achr'])
df_nan = train.isna()
dict_nans = dict()
for column in columns:
number_nans = df_nan[column].sum()
try:
dict_nans[number_nans].append(column)
except:
dict_nans[number_nans] = [column]
group_number = 1
for key,values in dict_nans.items():
print('Group {}'.format(group_number),'| Number of NANs =',key)
print(values)
print(' ')
group_number += 1
```
### Group 1 (single group)
```
##### Time series graph based on TransactionDT.
# Despite having different numbers of NaNs, we analyze these columns as a single group. Because D1achr has few NaNs, we keep it as a final column.
group_list = ['D1achr', 'D2achr', 'D3', 'D4achr', 'D5', 'D6achr', 'D7', 'D8', 'D9', 'D10achr', 'D11achr', 'D12achr', 'D13achr', 'D14achr', 'D15achr']
for column in group_list:
scatter(column)
##### Heatmap
plt.figure(figsize = (15,15))
sns.heatmap(train[group_list].corr(), cmap='RdBu_r', annot=True, center=0.0)
plt.show()
##### Ranking of pearson correlation.
for column in group_list:
corr(group_list,column)
print(' ')
##### Based on pearson correlation, we grouped together the columns with corr > 0.7
# On the first group, D1achr vs D2achr --> we keep D1achr due to the low number of NaNs.
reduce_groups = [['D3','D7','D5'],['D4achr','D12achr','D6achr','D15achr','D10achr', 'D11achr'], ['D8'], ['D9'], ['D13achr'],['D14achr']]
result = reduce(reduce_groups)
result.append('D1achr')
print('Final D columns:',result)
final_dcolumns = result
```
## M columns
```
##### Group the M columns to determine which columns are related by the number of NaNs present, and analyze each group independently.
columns = ['M' + str(i) for i in range(1,10)]
df_nan = train.isna()
dict_nans = dict()
for column in columns:
number_nans = df_nan[column].sum()
try:
dict_nans[number_nans].append(column)
except:
dict_nans[number_nans] = [column]
group_number = 1
for key,values in dict_nans.items():
print('Group {}'.format(group_number),'| Number of NANs =',key)
print(values)
print(' ')
group_number += 1
```
### Group 1 (single group)
```
# To analyze the M columns, we need to transform strings to numbers. Instead of using LabelEncoder, we use a dictionary.
T_F_num = dict({'F': 0, 'T': 1, 'M0': 0, 'M1': 1, 'M2': 2})
for column in ['M1', 'M2', 'M3', 'M4', 'M5', 'M6', 'M7', 'M8', 'M9']:
print(f'{column}:', train[column].unique())
print('Transforming strings to numbers...')
train[column] = train[column].replace(T_F_num)
print(f'{column}:', train[column].unique())
print('')
##### Time series graph based on TransactionDT.
# Despite having different number of NaNs, we are analyzing it as a single group.
group_list = ['M1', 'M2', 'M3', 'M4', 'M5', 'M6', 'M7', 'M8', 'M9']
for column in group_list:
scatter(column)
##### Heatmap
plt.figure(figsize = (15,15))
sns.heatmap(train[group_list].corr(), cmap='RdBu_r', annot=True, center=0.0)
plt.show()
#### Ranking of pearson correlation.
for column in group_list:
corr(group_list,column)
print(' ')
##### Based on Pearson correlation we usually group together the columns with corr > 0.7, but here no correlation exceeds 0.7
# That's why, in this particular case, we group together the columns with corr > 0.5
reduce_groups = ['M1'], ['M2','M3'], ['M4'], ['M5'], ['M6'], ['M7', 'M8'], ['M9']
result = reduce(reduce_groups)
print('Final M columns:',result)
final_mcolumns = result
```
## V columns
```
##### Group the V columns to determine which columns are related by the number of NaNs present, and analyze each group independently.
columns = ['V' + str(i) for i in range(1,340)]
df_nan = train.isna()
dict_nans = dict()
for column in columns:
number_nans = df_nan[column].sum()
try:
dict_nans[number_nans].append(column)
except:
dict_nans[number_nans] = [column]
group_number = 1
for key,values in dict_nans.items():
print('Group {}'.format(group_number),'| Number of NANs =',key)
print(values)
print(' ')
group_number += 1
final_vcolumns = list()
```
### Group 1
```
##### Time series graph based on TransactionDT.
group_list = ['V1', 'V2', 'V3', 'V4', 'V5', 'V6', 'V7', 'V8', 'V9', 'V10', 'V11']
for column in group_list:
scatter(column)
##### Heatmap
plt.figure(figsize = (15,15))
sns.heatmap(train[group_list].corr(), cmap='RdBu_r', annot=True, center=0.0)
plt.show()
##### Ranking of pearson correlation.
for column in group_list:
corr(group_list,column)
print(' ')
##### Based on pearson correlation, we grouped together the columns with corr > 0.7
reduce_groups = ['V1'], ['V2','V3'], ['V4','V5'], ['V6','V7'], ['V8','V9']
result = reduce(reduce_groups)
final_vcolumns.extend(result)
print('Final V_Group1 columns:',result)
```
### Group 2
```
##### Time series graph based on TransactionDT.
group_list = ['V12', 'V13', 'V14', 'V15', 'V16', 'V17', 'V18', 'V19', 'V20', 'V21', 'V22', 'V23', 'V24', 'V25', 'V26', 'V27',
'V28', 'V29', 'V30', 'V31', 'V32', 'V33', 'V34']
for column in group_list:
scatter(column)
##### Heatmap
plt.figure(figsize = (15,15))
sns.heatmap(train[group_list].corr(), cmap='RdBu_r', annot=True, center=0.0)
plt.show()
##### Ranking of pearson correlation.
for column in group_list:
corr(group_list,column)
print(' ')
##### Based on pearson correlation, we grouped together the columns with corr > 0.7
reduce_groups = [['V12','V13'], ['V14'], ['V15','V16','V33','V34','V31','V32','V21','V22','V17','V18'], ['V19','V20'],['V23','V24'],['V25','V26'],['V27','V28'],['V29','V30']]
result = reduce(reduce_groups)
final_vcolumns.extend(result)
print('Final V_Group2 columns:',result)
```
### Group 3
```
##### Time series graph based on TransactionDT.
group_list = ['V35', 'V36', 'V37', 'V38', 'V39', 'V40', 'V41', 'V42', 'V43', 'V44', 'V45', 'V46', 'V47', 'V48', 'V49', 'V50', 'V51', 'V52']
for column in group_list:
scatter(column)
##### Heatmap
plt.figure(figsize = (15,15))
sns.heatmap(train[group_list].corr(), cmap='RdBu_r', annot=True, center=0.0)
plt.show()
##### Ranking of pearson correlation.
for column in group_list:
corr(group_list,column)
print(' ')
##### Based on pearson correlation, we grouped together the columns with corr > 0.7
reduce_groups = [['V35','V36'], ['V37','V38'], ['V39','V40','V42','V43','V50','V51','V52'], ['V41'], ['V44','V45'],['V46','V47'],['V48','V49']]
result = reduce(reduce_groups)
final_vcolumns.extend(result)
print('Final V_Group3 columns:',result)
```
### Group 4
```
##### Time series graph based on TransactionDT.
group_list = ['V53', 'V54', 'V55', 'V56', 'V57', 'V58', 'V59', 'V60', 'V61', 'V62', 'V63', 'V64', 'V65', 'V66', 'V67', 'V68',
'V69', 'V70', 'V71', 'V72', 'V73', 'V74']
for column in group_list:
scatter(column)
##### Heatmap
plt.figure(figsize = (15,15))
sns.heatmap(train[group_list].corr(), cmap='RdBu_r', annot=True, center=0.0)
plt.show()
##### Ranking of pearson correlation.
for column in group_list:
corr(group_list,column)
print(' ')
##### Based on pearson correlation, we grouped together the columns with corr > 0.7
reduce_groups = [['V53','V54'], ['V55','V56'], ['V57','V58','V71','V73','V72','V74','V63','V59','V64','V60'],['V61','V62'],['V65'],
['V66','V67'],['V68'], ['V69','V70']]
result = reduce(reduce_groups)
final_vcolumns.extend(result)
print('Final V_Group4 columns:',result)
```
### Group 5
```
##### Time series graph based on TransactionDT.
group_list = ['V75', 'V76', 'V77', 'V78', 'V79', 'V80', 'V81', 'V82', 'V83', 'V84', 'V85', 'V86', 'V87', 'V88', 'V89', 'V90', 'V91', 'V92', 'V93', 'V94']
for column in group_list:
scatter(column)
##### Heatmap
plt.figure(figsize = (15,15))
sns.heatmap(train[group_list].corr(), cmap='RdBu_r', annot=True, center=0.0)
plt.show()
##### Ranking of pearson correlation.
for column in group_list:
corr(group_list,column)
print(' ')
##### Based on pearson correlation, we grouped together the columns with corr > 0.7
reduce_groups = [['V75','V76'],['V77','V78'], ['V79', 'V94', 'V93', 'V92', 'V84', 'V85', 'V80', 'V81'],['V82','V83'],['V86','V87'],['V88'],['V89'],['V90','V91']]
result = reduce(reduce_groups)
final_vcolumns.extend(result)
print('Final V_Group5 columns:',result)
```
### Group 6
```
##### Time series graph based on TransactionDT.
group_list = ['V95', 'V96', 'V97', 'V98', 'V99', 'V100', 'V101', 'V102', 'V103', 'V104', 'V105', 'V106', 'V107', 'V108', 'V109', 'V110', 'V111', 'V112',
'V113', 'V114', 'V115', 'V116', 'V117', 'V118', 'V119', 'V120', 'V121', 'V122', 'V123', 'V124', 'V125', 'V126', 'V127', 'V128', 'V129', 'V130',
'V131', 'V132', 'V133', 'V134', 'V135', 'V136', 'V137']
for column in group_list:
scatter(column)
##### Heatmap
plt.figure(figsize = (15,15))
sns.heatmap(train[group_list].corr(), cmap='RdBu_r', annot=True, center=0.0)
plt.show()
##### Ranking of pearson correlation.
for column in group_list:
corr(group_list,column)
print(' ')
##### Based on pearson correlation, we grouped together the columns with corr > 0.7
# We omit V107 since it has only one unique value, so no correlation with the other columns can be computed.
reduce_groups = [['V95','V101'],['V96','V102','V97','V99','V100','V103'],['V98'],['V104','V106','V105'],['V108','V110','V114','V109','V111','V113','V112','V115','V116'],
['V117','V119','V118'],['V120','V122','V121'],['V123','V125','V124'],['V126','V128','V132'],['V127','V133','V134'],['V129','V131','V130'],
['V135','V137','V136']]
result = reduce(reduce_groups)
final_vcolumns.extend(result)
print('Final V_Group6 columns:',result)
```
### Group 7
```
##### Time series graph based on TransactionDT.
group_list = ['V138', 'V139', 'V140', 'V141', 'V142', 'V143', 'V144', 'V145', 'V146', 'V147', 'V148', 'V149', 'V150', 'V151', 'V152', 'V153', 'V154',
'V155', 'V156', 'V157', 'V158', 'V159', 'V160', 'V161', 'V162', 'V163', 'V164', 'V165', 'V166']
for column in group_list:
scatter(column)
##### Heatmap
plt.figure(figsize = (15,15))
sns.heatmap(train[group_list].corr(), cmap='RdBu_r', annot=True, center=0.0)
plt.show()
##### Ranking of pearson correlation.
for column in group_list:
corr(group_list,column)
print(' ')
##### Based on pearson correlation, we grouped together the columns with corr > 0.7
reduce_groups = [['V138'],['V139','V140'],['V141','V142'],['V143','V159','V150','V151','V165','V144','V145','V160','V152','V164','V166'],['V146','V147'],
['V148','V155','V149','V153','V154','V156','V157','V158'],['V161','V163','V162']]
result = reduce(reduce_groups)
final_vcolumns.extend(result)
print('Final V_Group7 columns:',result)
```
### Group 8
```
##### Time series graph based on TransactionDT.
group_list = ['V167', 'V168', 'V172', 'V173', 'V176', 'V177', 'V178', 'V179', 'V181', 'V182', 'V183', 'V186', 'V187', 'V190', 'V191', 'V192', 'V193',
'V196', 'V199', 'V202', 'V203', 'V204', 'V205', 'V206', 'V207', 'V211', 'V212', 'V213', 'V214', 'V215', 'V216']
for column in group_list:
scatter(column)
##### Heatmap
plt.figure(figsize = (15,15))
sns.heatmap(train[group_list].corr(), cmap='RdBu_r', annot=True, center=0.0)
plt.show()
##### Ranking of pearson correlation.
for column in group_list:
corr(group_list,column)
print(' ')
##### Based on pearson correlation, we grouped together the columns with corr > 0.7
reduce_groups = ['V167','V176','V199','V179','V190','V177','V186','V168','V172','V178','V196','V191','V204','V213','V207','V173'],['V181','V183','V182',
'V187','V192','V203','V215','V178','V193','V212','V204'],['V202','V216','V204','V214']
result = reduce(reduce_groups)
final_vcolumns.extend(result)
print('Final V_Group8 columns:',result)
```
### Group 9
```
##### Time series graph based on TransactionDT.
group_list = ['V169', 'V170', 'V171', 'V174', 'V175', 'V180', 'V184', 'V185', 'V188', 'V189', 'V194', 'V195', 'V197', 'V198', 'V200', 'V201', 'V208', 'V209', 'V210']
for column in group_list:
scatter(column)
##### Heatmap
plt.figure(figsize = (15,15))
sns.heatmap(train[group_list].corr(), cmap='RdBu_r', annot=True, center=0.0)
plt.show()
##### Ranking of pearson correlation.
for column in group_list:
corr(group_list,column)
print(' ')
##### Based on pearson correlation, we grouped together the columns with corr > 0.7
reduce_groups = [['V169'],['V170','V171','V200','V201'],['V174','V175'],['V180'],['V184','V185'],['V188','V189'],['V194','V197','V195','V198'],
['V208','V210','V209']]
result = reduce(reduce_groups)
final_vcolumns.extend(result)
print('Final V_Group9 columns:',result)
```
### Group 10
```
##### Time series graph based on TransactionDT.
group_list = ['V217', 'V218', 'V219', 'V223', 'V224', 'V225', 'V226', 'V228', 'V229', 'V230', 'V231', 'V232', 'V233', 'V235', 'V236', 'V237','V240',
'V241', 'V242', 'V243', 'V244', 'V246', 'V247', 'V248', 'V249', 'V252', 'V253', 'V254', 'V257', 'V258', 'V260', 'V261', 'V262', 'V263',
'V264', 'V265', 'V266', 'V267', 'V268', 'V269', 'V273', 'V274', 'V275', 'V276', 'V277', 'V278']
for column in group_list:
scatter(column)
##### Heatmap
plt.figure(figsize = (15,15))
sns.heatmap(train[group_list].corr(), cmap='RdBu_r', annot=True, center=0.0)
plt.show()
##### Ranking of pearson correlation.
for column in group_list:
corr(group_list,column)
print(' ')
##### Based on pearson correlation, we grouped together the columns with corr > 0.7
reduce_groups = [['V217','V231','V233','V228','V257','V219','V232','V246'],['V218','V229','V224','V225','V253','V243','V254','V248','V264','V261','V249','V258',
'V267','V274','V230','V236','V247','V262','V223','V252','V260'],['V226','V263','V276','V278'], ['V235','V237'],['V240','V241'],['V242','V244'],
['V265','V275','V277','V268','V273'],['V269','V266']]
result = reduce(reduce_groups)
final_vcolumns.extend(result)
print('Final V_Group10 columns:',result)
```
### Group 11
```
##### Time series graph based on TransactionDT.
group_list = ['V220', 'V221', 'V222', 'V227', 'V234', 'V238', 'V239', 'V245', 'V250', 'V251', 'V255', 'V256', 'V259', 'V270', 'V271', 'V272']
for column in group_list:
scatter(column)
##### Heatmap
plt.figure(figsize = (15,15))
sns.heatmap(train[group_list].corr(), cmap='RdBu_r', annot=True, center=0.0)
plt.show()
##### Ranking of pearson correlation.
for column in group_list:
corr(group_list,column)
print(' ')
##### Based on pearson correlation, we grouped together the columns with corr > 0.7
reduce_groups = ['V220'],['V221','V222','V259','V245','V227','V255','V256'],['V234'],['V238','V239'],['V250','V251'],['V270','V272','V271']
result = reduce(reduce_groups)
final_vcolumns.extend(result)
print('Final V_Group11 columns:',result)
```
### Group 12
```
##### Time series graph based on TransactionDT.
group_list = ['V279', 'V280', 'V284', 'V285', 'V286', 'V287', 'V290', 'V291', 'V292', 'V293', 'V294', 'V295', 'V297', 'V298', 'V299', 'V302', 'V303', 'V304',
'V305', 'V306', 'V307', 'V308', 'V309', 'V310', 'V311', 'V312', 'V316', 'V317', 'V318', 'V319', 'V320', 'V321']
for column in group_list:
scatter(column)
##### Heatmap
plt.figure(figsize = (15,15))
sns.heatmap(train[group_list].corr(), cmap='RdBu_r', annot=True, center=0.0)
plt.show()
##### Ranking of pearson correlation.
for column in group_list:
corr(group_list,column)
print(' ')
##### Based on pearson correlation, we grouped together the columns with corr > 0.7
reduce_groups = [['V279','V293','V290','V280','V295','V294','V292','V291','V317','V307','V318'],['V284'],['V285','V287'],['V286'],['V297','V299','V298'],
['V302','V304','V303'],['V305'],['V306','V308','V316','V319'],['V309','V311','V312','V310'],['V320','V321']]
result = reduce(reduce_groups)
final_vcolumns.extend(result)
print('Final V_Group12 columns:',result)
```
### Group 13
```
##### Time series graph based on TransactionDT.
group_list = ['V281', 'V282', 'V283', 'V288', 'V289', 'V296', 'V300', 'V301', 'V313', 'V314', 'V315']
for column in group_list:
scatter(column)
##### Heatmap
plt.figure(figsize = (15,15))
sns.heatmap(train[group_list].corr(), cmap='RdBu_r', annot=True, center=0.0)
plt.show()
##### Ranking of pearson correlation.
for column in group_list:
corr(group_list,column)
print(' ')
##### Based on pearson correlation, we grouped together the columns with corr > 0.7
reduce_groups = ['V281','V282','V283'],['V288','V289'],['V296'],['V300','V301'],['V313','V315','V314']
result = reduce(reduce_groups)
final_vcolumns.extend(result)
print('Final V_Group13 columns:',result)
```
### Group 14
```
##### Time series graph based on TransactionDT.
group_list = ['V322', 'V323', 'V324', 'V325', 'V326', 'V327', 'V328', 'V329', 'V330', 'V331', 'V332', 'V333', 'V334', 'V335', 'V336', 'V337', 'V338', 'V339']
for column in group_list:
scatter(column)
##### Heatmap
plt.figure(figsize = (15,15))
sns.heatmap(train[group_list].corr(), cmap='RdBu_r', annot=True, center=0.0)
plt.show()
##### Ranking of pearson correlation.
for column in group_list:
corr(group_list,column)
print(' ')
##### Based on pearson correlation, we grouped together the columns with corr > 0.7
reduce_groups = ['V322','V324'],['V323','V326','V324','V327','V326'],['V325'],['V328','V330','V329'],['V331','V333','V332','V337'],['V334','V336','V335']
result = reduce(reduce_groups)
final_vcolumns.extend(result)
print('Final V_Group14 columns:',result)
```
### Final V columns
```
print('Number of V columns:', len(final_vcolumns))
print(final_vcolumns)
```
# Conclusions
Based on the previous process, we suggest keeping the final columns described below:
```
##### 1st we sort them (ascending order) with a function
final_ccolumns = order_finalcolumns(final_ccolumns)
final_dcolumns = order_finalcolumns(final_dcolumns)
final_mcolumns = order_finalcolumns(final_mcolumns)
final_vcolumns = order_finalcolumns(final_vcolumns)
##### Final columns
print(f'Final Transaction columns ({len(final_transactioncolumns)}): {final_transactioncolumns}')
print(' ')
print(f'Final C columns ({len(final_ccolumns)}): {final_ccolumns}')
print(' ')
print(f'Final D columns ({len(final_dcolumns)}): {final_dcolumns}')
print(' ')
print(f'Final M columns ({len(final_mcolumns)}): {final_mcolumns}')
print(' ')
print(f'Final V columns ({len(final_vcolumns)}): {final_vcolumns}')
print(' ')
print('#' * 50)
final_columns = final_transactioncolumns + final_ccolumns + final_dcolumns + final_mcolumns + final_vcolumns
print(' ')
print('Final columns:', final_columns)
print(' ')
print('Length of final columns:', len(final_columns))
```
# Dropout
Dropout [1] is a technique for regularizing neural networks by randomly setting some features to zero during the forward pass. In this exercise you will implement a dropout layer and modify your fully-connected network to optionally use dropout.
[1] [Geoffrey E. Hinton et al, "Improving neural networks by preventing co-adaptation of feature detectors", arXiv 2012](https://arxiv.org/abs/1207.0580)
```
# As usual, a bit of setup
from __future__ import print_function
import time
import numpy as np
import matplotlib.pyplot as plt
from cs231n.classifiers.fc_net import *
from cs231n.data_utils import get_CIFAR10_data
from cs231n.gradient_check import eval_numerical_gradient, eval_numerical_gradient_array
from cs231n.solver import Solver
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
def rel_error(x, y):
""" returns relative error """
return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))
# Load the (preprocessed) CIFAR10 data.
data = get_CIFAR10_data()
for k, v in data.items():
print('%s: ' % k, v.shape)
```
# Dropout forward pass
In the file `cs231n/layers.py`, implement the forward pass for dropout. Since dropout behaves differently during training and testing, make sure to implement the operation for both modes.
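For orientation, here is a minimal, generic sketch of *inverted* dropout (train-time masking with rescaling by `1/p`, where `p` is assumed to be the keep probability, as in the test cell below). It is not the assignment's reference implementation, which must follow the `dropout_param` interface in `cs231n/layers.py`:
```
import numpy as np

def inverted_dropout_forward(x, p, mode, seed=None):
    # Train: zero units with probability (1 - p) and rescale survivors by 1/p,
    # so expected activations match test time. Test: identity.
    if seed is not None:
        np.random.seed(seed)
    if mode == 'train':
        mask = (np.random.rand(*x.shape) < p) / p
        out = x * mask
    else:
        mask, out = None, x
    return out, mask

def inverted_dropout_backward(dout, mask, mode):
    # Gradients flow only through the kept (and rescaled) units at train time
    return dout * mask if mode == 'train' else dout
```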
Once you have done so, run the cell below to test your implementation.
```
np.random.seed(231)
x = np.random.randn(500, 500) + 10
for p in [0.25, 0.4, 0.7]:
out, _ = dropout_forward(x, {'mode': 'train', 'p': p})
out_test, _ = dropout_forward(x, {'mode': 'test', 'p': p})
print('Running tests with p = ', p)
print('Mean of input: ', x.mean())
print('Mean of train-time output: ', out.mean())
print('Mean of test-time output: ', out_test.mean())
print('Fraction of train-time output set to zero: ', (out == 0).mean())
print('Fraction of test-time output set to zero: ', (out_test == 0).mean())
print()
```
# Dropout backward pass
In the file `cs231n/layers.py`, implement the backward pass for dropout. After doing so, run the following cell to numerically gradient-check your implementation.
```
np.random.seed(231)
x = np.random.randn(10, 10) + 10
dout = np.random.randn(*x.shape)
dropout_param = {'mode': 'train', 'p': 0.2, 'seed': 123}
out, cache = dropout_forward(x, dropout_param)
dx = dropout_backward(dout, cache)
dx_num = eval_numerical_gradient_array(lambda xx: dropout_forward(xx, dropout_param)[0], x, dout)
# Error should be around e-10 or less
print('dx relative error: ', rel_error(dx, dx_num))
```
## Inline Question 1:
What happens if we do not divide the values being passed through inverse dropout by `p` in the dropout layer? Why does that happen?
## Answer:
# Fully-connected nets with Dropout
In the file `cs231n/classifiers/fc_net.py`, modify your implementation to use dropout. Specifically, if the constructor of the net receives a value that is not 1 for the `dropout` parameter, then the net should add dropout immediately after every ReLU nonlinearity. After doing so, run the following to numerically gradient-check your implementation.
```
np.random.seed(231)
N, D, H1, H2, C = 2, 15, 20, 30, 10
X = np.random.randn(N, D)
y = np.random.randint(C, size=(N,))
for dropout in [1, 0.75, 0.5]:
print('Running check with dropout = ', dropout)
model = FullyConnectedNet([H1, H2], input_dim=D, num_classes=C,
weight_scale=5e-2, dtype=np.float64,
dropout=dropout, seed=123)
loss, grads = model.loss(X, y)
print('Initial loss: ', loss)
# Relative errors should be around e-6 or less; Note that it's fine
# if for dropout=1 you have W2 error be on the order of e-5.
for name in sorted(grads):
f = lambda _: model.loss(X, y)[0]
grad_num = eval_numerical_gradient(f, model.params[name], verbose=False, h=1e-5)
print('%s relative error: %.2e' % (name, rel_error(grad_num, grads[name])))
print()
```
# Regularization experiment
As an experiment, we will train a pair of two-layer networks on 500 training examples: one will use no dropout, and one will use a keep probability of 0.25. We will then visualize the training and validation accuracies of the two networks over time.
```
# Train two identical nets, one with dropout and one without
np.random.seed(231)
num_train = 500
small_data = {
'X_train': data['X_train'][:num_train],
'y_train': data['y_train'][:num_train],
'X_val': data['X_val'],
'y_val': data['y_val'],
}
solvers = {}
dropout_choices = [1, 0.25]
for dropout in dropout_choices:
model = FullyConnectedNet([500], dropout=dropout)
print(dropout)
solver = Solver(model, small_data,
num_epochs=25, batch_size=100,
update_rule='adam',
optim_config={
'learning_rate': 5e-4,
},
verbose=True, print_every=100)
solver.train()
solvers[dropout] = solver
# Plot train and validation accuracies of the two models
train_accs = []
val_accs = []
for dropout in dropout_choices:
solver = solvers[dropout]
train_accs.append(solver.train_acc_history[-1])
val_accs.append(solver.val_acc_history[-1])
plt.subplot(3, 1, 1)
for dropout in dropout_choices:
plt.plot(solvers[dropout].train_acc_history, 'o', label='%.2f dropout' % dropout)
plt.title('Train accuracy')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.legend(ncol=2, loc='lower right')
plt.subplot(3, 1, 2)
for dropout in dropout_choices:
plt.plot(solvers[dropout].val_acc_history, 'o', label='%.2f dropout' % dropout)
plt.title('Val accuracy')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.legend(ncol=2, loc='lower right')
plt.gcf().set_size_inches(15, 15)
plt.show()
```
## Inline Question 2:
Compare the validation and training accuracies with and without dropout -- what do your results suggest about dropout as a regularizer?
## Answer:
## Inline Question 3:
Suppose we are training a deep fully-connected network for image classification, with dropout after hidden layers (parameterized by keep probability p). How should we modify p, if at all, if we decide to decrease the size of the hidden layers (that is, the number of nodes in each layer)?
## Answer:
# HuberRegressor with StandardScaler
This code template is for regression analysis using a Huber regressor together with the StandardScaler feature-rescaling technique in a pipeline.
### Required Packages
```
import warnings
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as se
from sklearn.linear_model import HuberRegressor
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_absolute_error, mean_squared_error
warnings.filterwarnings('ignore')
```
### Initialization
Filepath of CSV file
```
#filepath
file_path= ""
```
List of the features required for model training.
```
#x_values
features= []
```
Target feature for prediction.
```
#y_value
target=''
```
### Data Fetching
Pandas is an open-source, BSD-licensed library providing high-performance, easy-to-use data manipulation and data analysis tools.
We will use the pandas library to read the CSV file from its storage path, and the head function to display the initial rows.
```
df=pd.read_csv(file_path)
df.head()
```
### Feature Selections
Feature selection is the process of reducing the number of input variables when developing a predictive model. It is used both to reduce the computational cost of modelling and, in some cases, to improve the performance of the model.
We will assign all the required input features to X and target/outcome to Y.
```
X=df[features]
Y=df[target]
```
### Data Preprocessing
Since most machine learning models in the sklearn library don't handle string categorical data or null values, we have to explicitly remove or replace them. The snippet below defines functions that fill null values, if any exist, and encode string categories as dummy (indicator) columns.
```
def NullClearner(df):
if(isinstance(df, pd.Series) and (df.dtype in ["float64","int64"])):
df.fillna(df.mean(),inplace=True)
return df
elif(isinstance(df, pd.Series)):
df.fillna(df.mode()[0],inplace=True)
return df
else:return df
def EncodeX(df):
return pd.get_dummies(df)
```
Calling preprocessing functions on the feature and target set.
```
x=X.columns.to_list()
for i in x:
X[i]=NullClearner(X[i])
X=EncodeX(X)
Y=NullClearner(Y)
X.head()
```
#### Correlation Map
In order to check the correlation between the features, we will plot a correlation matrix. It is effective in summarizing a large amount of data where the goal is to see patterns.
```
f,ax = plt.subplots(figsize=(18, 18))
matrix = np.triu(X.corr())
se.heatmap(X.corr(), annot=True, linewidths=.5, fmt= '.1f',ax=ax, mask=matrix)
plt.show()
```
### Data Splitting
The train-test split is a procedure for evaluating the performance of an algorithm. The procedure involves taking a dataset and dividing it into two subsets. The first subset is utilized to fit/train the model. The second subset is used for prediction. The main motive is to estimate the performance of the model on new data.
```
x_train,x_test,y_train,y_test=train_test_split(X,Y,test_size=0.2,random_state=123)
```
### Model
Linear regression model that is robust to outliers.
The Huber Regressor optimizes the squared loss for the samples where |(y - X'w) / sigma| < epsilon and the absolute loss for the samples where |(y - X'w) / sigma| > epsilon, where w and sigma are parameters to be optimized. The parameter sigma makes sure that if y is scaled up or down by a certain factor, one does not need to rescale epsilon to achieve the same robustness. Note that this does not take into account the fact that the different features of X may be of different scales.
This makes sure that the loss function is not heavily influenced by the outliers while not completely ignoring their effect.
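As a rough sketch of the underlying loss (the standard Huber function; scikit-learn additionally optimizes the scale $\sigma$ jointly with $w$ and adds an L2 penalty $\alpha\|w\|_2^2$):
$$
H_{\epsilon}(z) =
\begin{cases}
z^{2} & \text{if } |z| < \epsilon \\
2\epsilon|z| - \epsilon^{2} & \text{otherwise,}
\end{cases}
\qquad z = \frac{y - X'w}{\sigma}
$$
so a quadratic loss is applied to small residuals and a linear loss to large ones.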
#### Data Scaling
Used sklearn.preprocessing.StandardScaler
Standardize features by removing the mean and scaling to unit variance
The standard score of a sample x is calculated as:
z = (x - u) / s
Where u is the mean of the training samples or zero if with_mean=False, and s is the standard deviation of the training samples or one if with_std=False.
Read more at [scikit-learn.org](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.StandardScaler.html)
```
Input=[("standard",StandardScaler()),("model",HuberRegressor())]
model = Pipeline(Input)
model.fit(x_train,y_train)
```
#### Model Accuracy
We will use the trained model to make predictions on the test set, then use the predicted values to measure the performance of our model.
> **score**: The **score** function returns the coefficient of determination <code>R<sup>2</sup></code> of the prediction.
```
print("Accuracy score {:.2f} %\n".format(model.score(x_test,y_test)*100))
```
> **r2_score**: The **r2_score** function computes the proportion of the variability in the target that is explained by our model.
> **mae**: The **mean absolute error** function calculates the total amount of error (the average absolute distance between the real data and the predicted data) made by our model.
> **mse**: The **mean squared error** function computes the average squared error, which penalizes the model more heavily for large errors.
```
y_pred=model.predict(x_test)
print("R2 Score: {:.2f} %".format(r2_score(y_test,y_pred)*100))
print("Mean Absolute Error {:.2f}".format(mean_absolute_error(y_test,y_pred)))
print("Mean Squared Error {:.2f}".format(mean_squared_error(y_test,y_pred)))
```
#### Prediction Plot
First, we plot the actual test observations, with the record number on the x-axis and the target value on the y-axis.
We then overlay the model's predictions for the same test records.
```
plt.figure(figsize=(14,10))
plt.plot(range(20),y_test[0:20], color = "green")
plt.plot(range(20),model.predict(x_test[0:20]), color = "red")
plt.legend(["Actual","prediction"])
plt.title("Predicted vs True Value")
plt.xlabel("Record number")
plt.ylabel(target)
plt.show()
```
#### Creator: Snehaan Bhawal , Github: [Profile](https://github.com/Sbhawal)
# BetterReads: Optimizing GoodReads review data
This notebook explores how to achieve the best results with the BetterReads algorithm when using review data scraped from GoodReads. It is a short follow-up to the exploration performed in the `03_optimizing_reviews.ipynb` notebook.
We have two options when scraping review data from GoodReads: For any given book, we can either scrape 1,500 reviews, with 300 reviews for each star rating (1 to 5), or we can scrape just the top 300 reviews, of any rating. (This is due to some quirks in the way that reviews are displayed on the GoodReads website; for more information, see my [GoodReadsReviewsScraper script](https://github.com/williecostello/GoodReadsReviewsScraper).)
There are advantages and disadvantages to both options. If we scrape 1,500 reviews, we obviously have more review data to work with; however, the data is artifically class-balanced, such that, for example, we'll still see a good number of negative reviews even if the vast majority of the book's reviews are positive. If we scrape just the top 300 reviews, we will have a more representative dataset, but much less data to work with.
We saw in the `03_optimizing_reviews.ipynb` notebook that the BetterReads algorithm can achieve meaningful and representative results from a dataset with less than 100 reviews. So we should not dismiss the 300 review option simply because it involves less data. We should only dismiss it if its smaller dataset leads to worse results. So let's try these two options out on a particular book and see how the algorithm performs.
```
import numpy as np
import pandas as pd
import random
from sklearn.cluster import KMeans
import tensorflow_hub as hub
# Loads Universal Sentence Encoder locally, from downloaded module
embed = hub.load('../../Universal Sentence Encoder/module/')
# Loads Universal Sentence Encoder remotely, from Tensorflow Hub
# embed = hub.load("https://tfhub.dev/google/universal-sentence-encoder/4")
```
## Which set of reviews should we use?
For this notebook we'll work with a new example: Sally Rooney's *Conversations with Friends*.
<img src='https://i.gr-assets.com/images/S/compressed.photo.goodreads.com/books/1500031338l/32187419._SY475_.jpg' width=250 align=center>
We have prepared two datasets, one of 1,500 reviews and another of 300 reviews, as described above. Both datasets were scraped from GoodReads at the same time, so there is some overlap between them. (Note that the total number of reviews in both datasets is less than advertised, since non-English and very short reviews are dropped during data cleaning.)
```
# Set path for processed file
file_path_1500 = 'data/32187419_conversations_with_friends.csv'
file_path_300 = 'data/32187419_conversations_with_friends_top_300.csv'
# Read in processed file as dataframe
df_1500 = pd.read_csv(file_path_1500)
df_300 = pd.read_csv(file_path_300)
print(f'The first dataset consists of {df_1500.shape[0]} sentences from {df_1500["review_index"].nunique()} reviews')
print(f'The second dataset consists of {df_300.shape[0]} sentences from {df_300["review_index"].nunique()} reviews')
```
As we can see above, in comparison to the smaller dataset, the bigger dataset contains approximately three times the number of sentences from four times the number of reviews. And as we can see below, the bigger dataset contains approximately the same number of reviews for each star rating, while the smaller dataset is much more heavily skewed toward 5 star and 4 star reviews.
```
df_1500.groupby('review_index')['rating'].mean().value_counts().sort_index()
df_300.groupby('review_index')['rating'].mean().value_counts().sort_index()
```
On [the book's actual GoodReads page](https://www.goodreads.com/book/show/32187419-conversations-with-friends), its average review rating is listed as 3.82 stars. This is nearly the same as the average review rating of our smaller dataset. The bigger dataset's average review rating, in contrast, is just less than 3. This confirms our earlier suspicion that the smaller dataset presents a more representative sample of the book's full set of reviews.
```
df_300.groupby('review_index')['rating'].mean().mean()
df_1500.groupby('review_index')['rating'].mean().mean()
```
Let's see how these high-level differences affect the output of our algorithm.
```
def load_sentences(file_path):
'''
Function to load and embed a book's sentences
'''
# Read in processed file as dataframe
df = pd.read_csv(file_path)
# Copy sentence column to new variable
sentences = df['sentence'].copy()
# Vectorize sentences
sentence_vectors = embed(sentences)
return sentences, sentence_vectors
def get_clusters(sentences, sentence_vectors, k, n):
'''
Function to extract the n most representative sentences from k clusters, with density scores
'''
# Instantiate the model
kmeans_model = KMeans(n_clusters=k, random_state=24)
# Fit the model
kmeans_model.fit(sentence_vectors);
# Set the number of cluster centre points to look at when calculating density score
centre_points = int(len(sentences) * 0.02)
# Initialize list to store mean inner product value for each cluster
cluster_density_scores = []
# Initialize dataframe to store cluster centre sentences
df = pd.DataFrame()
# Loop through number of clusters
for i in range(k):
# Define cluster centre
centre = kmeans_model.cluster_centers_[i]
# Calculate inner product of cluster centre and sentence vectors
ips = np.inner(centre, sentence_vectors)
# Find the sentences with the highest inner products
top_indices = pd.Series(ips).nlargest(n).index
top_sentences = list(sentences[top_indices])
centre_ips = pd.Series(ips).nlargest(centre_points)
density_score = round(np.mean(centre_ips), 5)
# Append the cluster density score to master list
cluster_density_scores.append(density_score)
# Create new row with cluster's top 10 sentences and density score
new_row = pd.Series([top_sentences, density_score])
# Append new row to master dataframe
df = df.append(new_row, ignore_index=True)
# Rename dataframe columns
df.columns = ['sentences', 'density']
# Sort dataframe by density score, from highest to lowest
df = df.sort_values(by='density', ascending=False).reset_index(drop=True)
# Loop through number of clusters selected
for i in range(k):
# Save density / similarity score & sentence list to variables
sim_score = round(df.loc[i]["density"], 3)
sents = df.loc[i]['sentences'].copy()
print(f'Cluster #{i+1} sentences (density score: {sim_score}):\n')
print(*sents, sep='\n')
print('\n')
model_density_score = round(np.mean(cluster_density_scores), 5)
print(f'Model density score: {model_density_score}')
# Load and embed sentences
sentences_1500, sentence_vectors_1500 = load_sentences(file_path_1500)
sentences_300, sentence_vectors_300 = load_sentences(file_path_300)
# Get cluster sentences for bigger dataset
get_clusters(sentences_1500, sentence_vectors_1500, k=6, n=8)
# Get cluster sentences for smaller dataset
get_clusters(sentences_300, sentence_vectors_300, k=6, n=8)
```
Let's summarize our results. The bigger dataset's sentence clusters can be summed up as follows:
1. Fantastic writing
1. Reading experience (?)
1. Unlikeable characters
1. Plot synopsis
1. Not enjoyable
1. Thematic elements: relationships & emotions
The smaller dataset's clusters can be summed up like this:
1. Fantastic writing
1. Plot synopsis
1. Loved it
1. Unlikeable characters
1. Reading experience
1. Thematic elements: Relationships & emotions
As we can see, the two sets of results are broadly similar; there are no radical differences between the two sets of clusters. The only major difference is that the bigger dataset includes a cluster of sentences expressing dislike of the book, whereas the smaller dataset includes a cluster of sentences expressing love of the book. But this was to be expected, given the relative proportions of positive and negative reviews between the two datasets.
Given these results, we feel that the smaller dataset is preferable. Its clusters seem slightly more internally coherent and to better capture the general sentiment toward the book.
# 2.18 Programming for Geoscientists class test 2016
# Test instructions
* This test contains **4** questions each of which should be answered.
* Write your program in a Python cell just under each question.
* You can write an explanation of your solution as comments in your code.
* In each case your solution program must fulfil all of the instructions - please check the instructions carefully and double check that your program fulfils all of the given instructions.
* Save your work regularly.
* At the end of the test you should email your IPython notebook document (i.e. this document) to [Gerard J. Gorman](http://www.imperial.ac.uk/people/g.gorman) at [email protected]
**1.** The following cells contain at least one programming bug each. For each cell add a comment to identify and explain the bug, and correct the program.
```
# Function to calculate wave velocity.
def wave_velocity(k, mu, rho):
vp = sqrt((k+4*mu/3)/rho)
return vp
# Use the function to calculate the velocity of an
# acoustic wave in water.
vp = wave_velocity(k=0, mu=2.29e9, rho=1000)
print "Velocity of acoustic wave in water: %d", vp
data = (3.14, 2.29, 10, 12)
data.append(4)
line = "2015-12-14T06:29:15.740Z,19.4333324,-155.2906647,1.66,2.14,ml,17,248,0.0123,0.36,hv,hv61126056,2015-12-14T06:34:58.500Z,5km W of Volcano, Hawaii,earthquake"
latitude = line.split(',')[1]
longitude = line.split(',')[2]
print "longitude, latitude = (%g, %g)"%(longitude, latitude)
```
**2.** The Ricker wavelet is frequently employed to model seismic data. The amplitude of the Ricker wavelet with peak frequency $f$ at time $t$ is computed as:
$$A = (1-2 \pi^2 f^2 t^2) e^{-\pi^2 f^2 t^2}$$
* Implement a function which calculates the amplitude of the Ricker wavelet for a given peak frequency $f$ and time $t$.
* Use a *for loop* to create a python *list* for time ranging from $-0.5$ to $0.5$, using a peak frequency, $f$, of $10$.
* Using the function created above, calculate a numpy array of the Ricker wavelet amplitudes for these times.
* Plot a graph of time against Ricker wavelet.
**3.** The data file [vp.dat](data/vp.dat) (all of the data files are stored in the sub-folder *data/* of this notebook library) contains a profile of the acoustic velocity with respect to depth. Depth is measured with respect to a reference point; therefore the first few entries contain NaN's indicating that they are actually above ground.
* Write a function to read in the depth and acoustic velocity.
* Ensure you skip the entries that contain NaN's.
* Store depth and velocities in two separate numpy arrays.
* Plot depth against velocity, ensuring you label your axes.
**4.** The file [BrachiopodBiometrics.csv](data/BrachiopodBiometrics.csv) contains the biometrics of Brachiopods found in 3 different locations.
* Read the data file into a Python *dictionary*.
* You should use the samples location as the *key*.
* For each key you should form a Python *list* containing tuples of *length* and *width* of each sample.
* For each location, calculate the mean length and width of the samples.
* Print the result for each location using a formatted print statement. The mean values should only be printed to within one decimal place.
# Introduction to Machine Learning Nanodegree
## Project: Finding Donors for *CharityML*
In this project, we employ several supervised algorithms to accurately model individuals' income using data collected from the 1994 U.S. Census. The best candidate algorithm is then chosen from preliminary results and is further optimized to best model the data. The goal with this implementation is to construct a model that accurately predicts whether an individual makes more than \$50,000. This sort of task can arise in a non-profit setting, where organizations survive on donations. Understanding an individual's income can help a non-profit better understand how large a donation to request, or whether or not they should reach out to begin with. While it can be difficult to determine an individual's general income bracket directly from public sources, we can (as we will see) infer this value from other publicly available features.
The dataset for this project originates from the [UCI Machine Learning Repository](https://archive.ics.uci.edu/ml/datasets/Census+Income). The dataset was donated by Ron Kohavi and Barry Becker, after being published in the article _"Scaling Up the Accuracy of Naive-Bayes Classifiers: A Decision-Tree Hybrid"_. You can find the article by Ron Kohavi [online](https://www.aaai.org/Papers/KDD/1996/KDD96-033.pdf). The data we investigate here consists of small changes to the original dataset, such as removing the `'fnlwgt'` feature and records with missing or ill-formatted entries.
----
## Exploring the Data
Run the code cell below to load necessary Python libraries and load the census data. Note that the last column from this dataset, `'income'`, will be our target label (whether an individual makes more than, or at most, $50,000 annually). All other columns are features about each individual in the census database.
```
# Import libraries necessary for this project
import numpy as np
import pandas as pd
from time import time
from IPython.display import display # Allows the use of display() for DataFrames
# Import supplementary visualization code visuals.py
import visuals as vs
# Pretty display for notebooks
%matplotlib inline
# Load the Census dataset
data = pd.read_csv("census.csv")
# Display the first five records
display(data.head(5))
```
### Implementation: Data Exploration
A cursory investigation of the dataset will determine how many individuals fit into either group, and will tell us about the percentage of these individuals making more than \$50,000. In the code cell below, the following information is computed:
- The total number of records, `'n_records'`
- The number of individuals making more than \$50,000 annually, `'n_greater_50k'`.
- The number of individuals making at most \$50,000 annually, `'n_at_most_50k'`.
- The percentage of individuals making more than \$50,000 annually, `'greater_percent'`.
```
# Total number of records
n_records = data.shape[0]
# Number of records where individual's income is more than $50,000
n_greater_50k = data['income'].value_counts()[1]
# Number of records where individual's income is at most $50,000
n_at_most_50k = data['income'].value_counts()[0]
# Percentage of individuals whose income is more than $50,000
greater_percent = 100 * (n_greater_50k / (n_greater_50k + n_at_most_50k))
# Print the results
print("Total number of records: {}".format(n_records))
print("Individuals making more than $50,000: {}".format(n_greater_50k))
print("Individuals making at most $50,000: {}".format(n_at_most_50k))
print("Percentage of individuals making more than $50,000: {}%".format(greater_percent))
# Check whether records are consistent
if n_records == (n_greater_50k + n_at_most_50k):
print('Records are consistent!')
```
**Featureset Exploration**
* **age**: continuous.
* **workclass**: Private, Self-emp-not-inc, Self-emp-inc, Federal-gov, Local-gov, State-gov, Without-pay, Never-worked.
* **education**: Bachelors, Some-college, 11th, HS-grad, Prof-school, Assoc-acdm, Assoc-voc, 9th, 7th-8th, 12th, Masters, 1st-4th, 10th, Doctorate, 5th-6th, Preschool.
* **education-num**: continuous.
* **marital-status**: Married-civ-spouse, Divorced, Never-married, Separated, Widowed, Married-spouse-absent, Married-AF-spouse.
* **occupation**: Tech-support, Craft-repair, Other-service, Sales, Exec-managerial, Prof-specialty, Handlers-cleaners, Machine-op-inspct, Adm-clerical, Farming-fishing, Transport-moving, Priv-house-serv, Protective-serv, Armed-Forces.
* **relationship**: Wife, Own-child, Husband, Not-in-family, Other-relative, Unmarried.
* **race**: Black, White, Asian-Pac-Islander, Amer-Indian-Eskimo, Other.
* **sex**: Female, Male.
* **capital-gain**: continuous.
* **capital-loss**: continuous.
* **hours-per-week**: continuous.
* **native-country**: United-States, Cambodia, England, Puerto-Rico, Canada, Germany, Outlying-US(Guam-USVI-etc), India, Japan, Greece, South, China, Cuba, Iran, Honduras, Philippines, Italy, Poland, Jamaica, Vietnam, Mexico, Portugal, Ireland, France, Dominican-Republic, Laos, Ecuador, Taiwan, Haiti, Columbia, Hungary, Guatemala, Nicaragua, Scotland, Thailand, Yugoslavia, El-Salvador, Trinadad&Tobago, Peru, Hong, Holand-Netherlands.
----
## Preparing the Data
Before data can be used as input for machine learning algorithms, it often must be cleaned, formatted, and restructured — this is typically known as **preprocessing**. Fortunately, for this dataset, there are no invalid or missing entries we must deal with, however, there are some qualities about certain features that must be adjusted. This preprocessing can help tremendously with the outcome and predictive power of nearly all learning algorithms.
### Transforming Skewed Continuous Features
A dataset may sometimes contain at least one feature whose values tend to lie near a single number, but will also have a non-trivial number of vastly larger or smaller values than that single number. Algorithms can be sensitive to such distributions of values and can underperform if the range is not properly normalized. With the census dataset two features fit this description: `'capital-gain'` and `'capital-loss'`.
Run the code cell below to plot a histogram of these two features. Note the range of the values present and how they are distributed.
```
# Split the data into features and target label
income_raw = data['income']
features_raw = data.drop('income', axis = 1)
# Visualize skewed continuous features of original data
vs.distribution(data)
```
For highly-skewed feature distributions such as `'capital-gain'` and `'capital-loss'`, it is common practice to apply a <a href="https://en.wikipedia.org/wiki/Data_transformation_(statistics)">logarithmic transformation</a> on the data so that the very large and very small values do not negatively affect the performance of a learning algorithm. Using a logarithmic transformation significantly reduces the range of values caused by outliers. Care must be taken when applying this transformation, however: the logarithm of `0` is undefined, so we must translate the values by a small amount above `0` to apply the logarithm successfully.
Run the code cell below to perform a transformation on the data and visualize the results. Again, note the range of values and how they are distributed.
```
# Log-transform the skewed features
skewed = ['capital-gain', 'capital-loss']
features_log_transformed = pd.DataFrame(data = features_raw)
features_log_transformed[skewed] = features_raw[skewed].apply(lambda x: np.log(x + 1))
# Visualize the new log distributions
vs.distribution(features_log_transformed, transformed = True)
```
### Normalizing Numerical Features
In addition to performing transformations on features that are highly skewed, it is often good practice to perform some type of scaling on numerical features. Applying a scaling to the data does not change the shape of each feature's distribution (such as `'capital-gain'` or `'capital-loss'` above); however, normalization ensures that each feature is treated equally when applying supervised learners. Note that once scaling is applied, observing the data in its raw form will no longer have the same original meaning, as exampled below.
Run the code cell below to normalize each numerical feature. We will use [`sklearn.preprocessing.MinMaxScaler`](http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.MinMaxScaler.html) for this.
```
# Import sklearn.preprocessing.MinMaxScaler
from sklearn.preprocessing import MinMaxScaler
# Initialize a scaler, then apply it to the features
scaler = MinMaxScaler() # default=(0, 1)
numerical = ['age', 'education-num', 'capital-gain', 'capital-loss', 'hours-per-week']
features_log_minmax_transform = pd.DataFrame(data = features_log_transformed)
features_log_minmax_transform[numerical] = scaler.fit_transform(features_log_transformed[numerical])
# Show an example of a record with scaling applied
display(features_log_minmax_transform.head(n = 5))
```
### Data Preprocessing
From the table in **Exploring the Data** above, we can see there are several features for each record that are non-numeric. Typically, learning algorithms expect input to be numeric, which requires that non-numeric features (called *categorical variables*) be converted. One popular way to convert categorical variables is by using the **one-hot encoding** scheme. One-hot encoding creates a _"dummy"_ variable for each possible category of each non-numeric feature. For example, assume `someFeature` has three possible entries: `A`, `B`, or `C`. We then encode this feature into `someFeature_A`, `someFeature_B` and `someFeature_C`.
| | someFeature | | someFeature_A | someFeature_B | someFeature_C |
| :-: | :-: | | :-: | :-: | :-: |
| 0 | B | | 0 | 1 | 0 |
| 1 | C | ----> one-hot encode ----> | 0 | 0 | 1 |
| 2 | A | | 1 | 0 | 0 |
Additionally, as with the non-numeric features, we need to convert the non-numeric target label, `'income'`, to numerical values for the learning algorithm to work. Since there are only two possible categories for this label ("<=50K" and ">50K"), we can avoid using one-hot encoding and simply encode these two categories as `0` and `1`, respectively. In the code cell below, you will need to implement the following:
- Use [`pandas.get_dummies()`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.get_dummies.html?highlight=get_dummies#pandas.get_dummies) to perform one-hot encoding on the `'features_log_minmax_transform'` data.
- Convert the target label `'income_raw'` to numerical entries.
- Set records with "<=50K" to `0` and records with ">50K" to `1`.
```
# One-hot encode the 'features_log_minmax_transform' data using pandas.get_dummies()
features_final = pd.get_dummies(features_log_minmax_transform)
# Encode the 'income_raw' data to numerical values
income = income_raw.replace(to_replace = {'<=50K': 0, '>50K': 1})
# Print the number of features after one-hot encoding
encoded = list(features_final.columns)
print("{} total features after one-hot encoding.".format(len(encoded)))
# Uncomment the following line to see the encoded feature names
#print(encoded)
```
### Shuffle and Split Data
Now all _categorical variables_ have been converted into numerical features, and all numerical features have been normalized. As always, we will now split the data (both features and their labels) into training and test sets. 80% of the data will be used for training and 20% for testing.
Run the code cell below to perform this split.
```
# Import train_test_split
from sklearn.model_selection import train_test_split
# Split the 'features' and 'income' data into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(features_final,
income,
test_size = 0.2,
random_state = 0)
# Show the results of the split
print("Training set has {} samples.".format(X_train.shape[0]))
print("Testing set has {} samples.".format(X_test.shape[0]))
```
----
## Evaluating Model Performance
In this section, we will investigate four different algorithms, and determine which is best at modeling the data. Three of these algorithms will be supervised learners of our choice, and the fourth algorithm is known as a *naive predictor*.
### Metrics and the Naive Predictor
*CharityML*, equipped with their research, knows individuals that make more than \$50,000 are most likely to donate to their charity. Because of this, *CharityML* is particularly interested in predicting who makes more than \$50,000 accurately. It would seem that using **accuracy** as a metric for evaluating a particular model's performance would be appropriate. Additionally, identifying someone that *does not* make more than \$50,000 as someone who does would be detrimental to *CharityML*, since they are looking to find individuals willing to donate. Therefore, a model's ability to precisely predict those that make more than \$50,000 is *more important* than the model's ability to **recall** those individuals. We can use **F-beta score** as a metric that considers both precision and recall:
$$ F_{\beta} = (1 + \beta^2) \cdot \frac{precision \cdot recall}{\left( \beta^2 \cdot precision \right) + recall} $$
In particular, when $\beta = 0.5$, more emphasis is placed on precision. This is called the **F$_{0.5}$ score** (or F-score for simplicity).
Looking at the distribution of classes (those who make at most 50,000, and those who make more), it's clear most individuals do not make more than 50,000. This can greatly affect accuracy, since we could simply say "this person does not make more than 50,000" and generally be right, without ever looking at the data! Making such a statement would be called naive, since we have not considered any information to substantiate the claim. It is always important to consider the naive prediction for your data, to help establish a benchmark for whether a model is performing well. That being said, using that prediction would be pointless: if we predicted all people made less than 50,000, CharityML would identify no one as donors.
#### Note: Recap of accuracy, precision, recall
**Accuracy** measures how often the classifier makes the correct prediction. It’s the ratio of the number of correct predictions to the total number of predictions (the number of test data points).
**Precision** tells us what proportion of the messages we classified as spam actually were spam.
It is the ratio of true positives (messages classified as spam that really are spam) to all positives (all messages classified as spam, regardless of whether that classification was correct), in other words the ratio
`[True Positives/(True Positives + False Positives)]`
**Recall (sensitivity)** tells us what proportion of the messages that actually were spam we classified as spam.
It is the ratio of true positives (messages classified as spam that really are spam) to all the messages that actually were spam, in other words the ratio
`[True Positives/(True Positives + False Negatives)]`
For classification problems with skewed class distributions like ours (for example, if we had 100 text messages and only 2 were spam while the other 98 were not), accuracy by itself is not a very good metric. We could classify 90 messages as not spam (including the 2 that were spam, which would then be false negatives) and 10 as spam (all 10 being false positives) and still get a reasonably good accuracy score. For such cases, precision and recall come in very handy. These two metrics can be combined into the F1 score, which is the weighted average (harmonic mean) of the precision and recall scores. This score ranges from 0 to 1, with 1 being the best possible F1 score (we take the harmonic mean because we are dealing with ratios).
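To make these definitions concrete, here is a small illustrative computation using scikit-learn; the labels below are invented purely for the example.
```
from sklearn.metrics import precision_score, recall_score, fbeta_score

# Toy ground truth and predictions (1 = spam, 0 = not spam); values are made up.
y_true = [1, 1, 0, 0, 0, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 0, 0, 1, 0]

precision = precision_score(y_true, y_pred)      # TP / (TP + FP) = 3 / 4
recall = recall_score(y_true, y_pred)            # TP / (TP + FN) = 3 / 4
f_half = fbeta_score(y_true, y_pred, beta=0.5)   # weights precision more than recall

print(precision, recall, f_half)
```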
### Naive Predictor Performance
If we chose a model that always predicted an individual made more than $50,000, what would that model's accuracy and F-score be on this dataset? You must use the code cell below and assign your results to `'accuracy'` and `'fscore'` to be used later.
**Please note** that the purpose of generating a naive predictor is simply to show what a base model without any intelligence would look like. In the real world, ideally your base model would be either the results of a previous model or could be based on a research paper upon which you are looking to improve. When there is no benchmark model set, getting a result better than random choice is a place you could start from.
**Notes:**
* When we have a model that always predicts '1' (i.e. the individual makes more than 50k), the model will have no True Negatives (TN) or False Negatives (FN), as we never make any negative ('0') predictions. Therefore our Accuracy becomes the same as our Precision (True Positives/(True Positives + False Positives)), since every '1' prediction that should have been '0' becomes a False Positive; the denominator is then the total number of records.
* Our Recall score(True Positives/(True Positives + False Negatives)) in this setting becomes 1 as we have no False Negatives.
```
'''
TP = np.sum(income) # Counting the ones as this is the naive case. Note that 'income' is the 'income_raw' data
encoded to numerical values done in the data preprocessing step.
FP = income.count() - TP # Specific to the naive case
TN = 0 # No predicted negatives in the naive case
FN = 0 # No predicted negatives in the naive case
'''
# Calculate accuracy, precision and recall
TP = np.sum(income)
FP = income.count() - TP
TN, FN = 0, 0
accuracy = (TP + TN) / (TP + TN + FP + FN)
recall = TP / (TP + FN)
precision = TP / (TP + FP)
# Calculate F-score using the formula above for beta = 0.5 and correct values for precision and recall.
beta = 0.5 # Define beta
fscore = (1 + beta**2) * (precision * recall) / (beta**2 * precision + recall)
# Print the results
print("Naive Predictor: [Accuracy score: {:.4f}, F-score: {:.4f}]".format(accuracy, fscore))
```
### Supervised Learning Models
**The following are some of the supervised learning models that are currently available in** [`scikit-learn`](http://scikit-learn.org/stable/supervised_learning.html) **that you may choose from:**
- Gaussian Naive Bayes (GaussianNB)
- Decision Trees
- Ensemble Methods (Bagging, AdaBoost, Random Forest, Gradient Boosting)
- K-Nearest Neighbors (KNeighbors)
- Stochastic Gradient Descent Classifier (SGDC)
- Support Vector Machines (SVM)
- Logistic Regression
### Model Application
List three of the supervised learning models above that are appropriate for this problem that you will test on the census data. For each model chosen
- Describe one real-world application in industry where the model can be applied.
- What are the strengths of the model; when does it perform well?
- What are the weaknesses of the model; when does it perform poorly?
- What makes this model a good candidate for the problem, given what you know about the data?
### Decision Trees
**Describe one real-world application in industry where the model can be applied.**
Decision trees can be used for "Identifying Defective Products in the
Manufacturing Process". [1]
In this regard, decision trees are used as a classification algorithm that is trained on data with features of products that the company manufactures, as well as labels "Defective" and "Non-defective".
After the training process, the model should be able to group products into "Defective" and "Non-defective" categories and predict whether a manufactured product is defective or not.
**What are the strengths of the model; when does it perform well?**
1. The data pre-processing step for decision trees requires less effort compared to other algorithms (e.g. no need to normalize/scale data or impute missing values). [2]
2. The way the algorithm works is very intuitive, and thus easier to understand and explain. In addition, they can be used as a white box model. [3]
**What are the weaknesses of the model; when does it perform poorly?**
1. Because decision trees are so simple there is often a need for more complex algorithms (e.g. Random Forest) to achieve a higher accuracy. [3]
2. Decision trees have the tendency to overfit the training set. [3]
3. Decision trees are unstable. The reproducibility of a decision tree model is unreliable since the structure is sensitive even to small changes in the data. [3]
4. Decision trees can get complex and computationally expensive. [3]
**What makes this model a good candidate for the problem, given what you know about the data?**
I think this model is a good candidate here because, being a white-box model applied to well-defined features, it might provide further insights that CharityML can rely on.
For example, CharityML identified that the most relevant parameter when it comes to determining donation likelihood is individual income.
A decision tree model may find highly accurate predictors of income that can simplify the current process and help draw more valuable conclusions such as this one.
Moreover, due to the algorithm's simplicity, the charity's members will be able to intuitively understand how it works internally.
**References**
[[1]](http://www.kpubs.org/article/articleDownload.kpubs?downType=pdf&articleANo=E1CTBR_2017_v13n2_57)
[[2]](https://medium.com/@dhiraj8899/top-5-advantages-and-disadvantages-of-decision-tree-algorithm-428ebd199d9a)
[[3]](https://botbark.com/2019/12/19/top-6-advantages-and-disadvantages-of-decision-tree-algorithm/)
### Ensemble Methods (AdaBoost)
**Describe one real-world application in industry where the model can be applied.**
The AdaBoost algorithm can be applied for "Telecommunication Fraud Detection". [1]
The model is trained on features of past telecommunication messages, along with whether they ended up being fraudulent or not (the labels).
Then, the AdaBoost model should be able to predict whether future telecommunication material is fraudulent or not.
**What are the strengths of the model; when does it perform well?**
1. High flexibility. Different classification algorithms (decision trees, SVMs, etc.) can be used as weak learners to finally constitute a strong learner (final model). [2]
2. High precision. Experiments have shown AdaBoost models to achieve relatively high precision when making predictions. [3]
3. Simple preprocessing. AdaBoost algorithms are not too demanding when it comes to preprocessed data, thus more time is saved during the pre-processing step. [4]
**What are the weaknesses of the model; when does it perform poorly?**
1. Sensitive to noise data and outliers. [4]
2. Requires quality data because the boosting technique learns progressively and is prone to error. [4]
3. Low Accuracy when Data is Imbalanced. [3]
4. Training is mildly computationally expensive, and thus it can be time-consuming. [3]
**What makes this model a good candidate for the problem, given what you know about the data?**
AdaBoost will be tried as an alternative to decision trees with stronger predictive capacity.
An AdaBoost model is a good candidate because it can provide improvements over decision trees to valuable metrics such as accuracy and precision.
Since it has been shown that this algorithm can achieve relatively high precision (which is what we are looking for in this problem), this aspect of it will also benefit us.
**References**
[[1]](https://download.atlantis-press.com/article/25896505.pdf)
[[2]](https://www.educba.com/adaboost-algorithm/)
[[3]](https://easyai.tech/en/ai-definition/adaboost/#:~:text=AdaBoost%20is%20adaptive%20in%20a,problems%20than%20other%20learning%20algorithms.)
[[4]](https://blog.paperspace.com/adaboost-optimizer/)
### Support Vector Machines
**Describe one real-world application in industry where the model can be applied.**
SVMs can be applied in bioinformatics. [1]
For example, an SVM model can be trained on data involving features of cancer tumours and then be able to identify whether a tumour is benign or malignant (labels).
**What are the strengths of the model; when does it perform well?**
1. Effective in high-dimensional spaces (i.e. when there are numerous features). [2]
2. A generally good algorithm. SVMs perform well even when we have almost no information about the data. [3]
3. Relatively low risk of overfitting. This is due to its L2 Regularisation feature. [4]
4. High flexibility. Can handle linear & non-linear data due to variety added by different kernel functions. [3]
5. Stability. Since a small change to the data does not greatly affect the hyperplane. [4]
6. SVM is defined by a convex optimisation problem (i.e. no local minima) [4]
**What are the weaknesses of the model; when does it perform poorly?**
1. Training is very computationally expensive (high memory requirement) and thus it can be time-consuming, especially for large datasets [3]
2. Sensitive to noisy data, i.e. when the target classes are overlapping [2]
3. Hyperparameters (kernel, C parameter, gamma) can be difficult to tune.
For example, when choosing a kernel, if you always go with high-dimensional ones you might generate too many support vectors and reduce training speed drastically. [4]
4. Difficult to understand and interpret, particularly with high dimensional data. Also, the final model is not easy to see, so we cannot do small calibrations based on business intuition. [3]
5. Requires feature scaling. [4]
**What makes this model a good candidate for the problem, given what you know about the data?**
Given what we know about the data, SVM would be a good choice since it can handle its multiple dimensions.
It will also add variety when compared to decision trees and AdaBoost, potentially yielding better results due to its vastly different mechanism.
**References**
[[1]](https://data-flair.training/blogs/applications-of-svm/)
[[2]](https://medium.com/@dhiraj8899/top-4-advantages-and-disadvantages-of-support-vector-machine-or-svm-a3c06a2b107)
[[3]](https://statinfer.com/204-6-8-svm-advantages-disadvantages-applications/)
[[4]](http://theprofessionalspoint.blogspot.com/2019/03/advantages-and-disadvantages-of-svm.html)
### Creating a Training and Predicting Pipeline
To properly evaluate the performance of each model you've chosen, it's important that you create a training and predicting pipeline that allows you to quickly and effectively train models using various sizes of training data and perform predictions on the testing data. Your implementation here will be used in the following section.
In the code block below, you will need to implement the following:
- Import `fbeta_score` and `accuracy_score` from [`sklearn.metrics`](http://scikit-learn.org/stable/modules/classes.html#sklearn-metrics-metrics).
- Fit the learner to the sampled training data and record the training time.
- Perform predictions on the test data `X_test`, and also on the first 300 training points `X_train[:300]`.
- Record the total prediction time.
- Calculate the accuracy score for both the training subset and testing set.
- Calculate the F-score for both the training subset and testing set.
- Make sure that you set the `beta` parameter!
```
# Import two metrics from sklearn - fbeta_score and accuracy_score
from sklearn.metrics import fbeta_score, accuracy_score
def train_predict(learner, sample_size, X_train, y_train, X_test, y_test):
'''
inputs:
- learner: the learning algorithm to be trained and predicted on
- sample_size: the size of samples (number) to be drawn from training set
- X_train: features training set
- y_train: income training set
- X_test: features testing set
- y_test: income testing set
'''
results = {}
# Fit the learner to the training data using slicing with 'sample_size' using .fit(training_features[:], training_labels[:])
start = time() # Get start time
learner = learner.fit(X_train[:sample_size], y_train[:sample_size])
end = time() # Get end time
# Calculate the training time
results['train_time'] = end - start
# Get the predictions on the test set(X_test),
# then get predictions on the first 300 training samples(X_train) using .predict()
start = time() # Get start time
predictions_test = learner.predict(X_test)
predictions_train = learner.predict(X_train[:300])
end = time() # Get end time
# Calculate the total prediction time
results['pred_time'] = end - start
# Compute accuracy on the first 300 training samples
results['acc_train'] = accuracy_score(y_train[:300], predictions_train)
# Compute accuracy on test set using accuracy_score()
results['acc_test'] = accuracy_score(y_test, predictions_test)
# Compute F-score on the the first 300 training samples using fbeta_score()
results['f_train'] = fbeta_score(y_train[:300], predictions_train, beta=beta)
# Compute F-score on the test set which is y_test
results['f_test'] = fbeta_score(y_test, predictions_test, beta=beta)
# Success
print("{} trained on {} samples.".format(learner.__class__.__name__, sample_size))
# Return the results
return results
```
### Initial Model Evaluation
In the code cell, you will need to implement the following:
- Import the three supervised learning models you've discussed in the previous section.
- Initialize the three models and store them in `'clf_A'`, `'clf_B'`, and `'clf_C'`.
- Use a `'random_state'` for each model you use, if provided.
- **Note:** Use the default settings for each model — you will tune one specific model in a later section.
- Calculate the number of records equal to 1%, 10%, and 100% of the training data.
- Store those values in `'samples_1'`, `'samples_10'`, and `'samples_100'` respectively.
**Note:** Depending on which algorithms you chose, the following implementation may take some time to run!
```
# Import the three supervised learning models from sklearn
# Import Algorithms
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import AdaBoostClassifier
from sklearn.svm import SVC
# Initialize the three models
clf_A = DecisionTreeClassifier(random_state=42)
clf_B = AdaBoostClassifier(random_state=42)
clf_C = SVC(random_state=42)
# Calculate the number of samples for 1%, 10%, and 100% of the training data
samples_100 = len(y_train)
samples_10 = int(0.1*len(y_train))
samples_1 = int(0.01*len(y_train))
# Collect results on the learners
results = {}
for clf in [clf_A, clf_B, clf_C]:
clf_name = clf.__class__.__name__
results[clf_name] = {}
for i, samples in enumerate([samples_1, samples_10, samples_100]):
results[clf_name][i] = \
train_predict(clf, samples, X_train, y_train, X_test, y_test)
# Run metrics visualization for the three supervised learning models chosen
vs.evaluate(results, accuracy, fscore)
```
----
## Improving Results
In this final section, you will choose from the three supervised learning models the *best* model to use on the census data. You will then perform a grid search optimization for the model over the entire training set (`X_train` and `y_train`) by tuning at least one parameter to improve upon the untuned model's F-score.
### Choosing the Best Model
Based on the evaluation you performed earlier, in one to two paragraphs, explain to *CharityML* which of the three models you believe to be most appropriate for the task of identifying individuals that make more than \$50,000.
##### AdaBoost
According to the analysis, the most appropriate model for identifying individuals who make more than \$50,000 is the AdaBoost model. This is because of the following reasons:
- AdaBoost yields the best accuracy and F-score on the testing data, meaning that to maximise the number of true potential donors, it is the ideal model to choose.
- The 2nd best competitor (namely, SVM) has a slightly higher tendency to overfit, and is significantly more time-consuming to train.
- AdaBoost is suitable for the given dataset because it yields high precision (i.e. few false positives, which is what we want), and will allow us to interpret the result for potential calibrations more so than an SVM model would.
### Describing the Model in Layman's Terms
In one to two paragraphs, explain to *CharityML*, in layman's terms, how the final model chosen is supposed to work. Be sure that you are describing the major qualities of the model, such as how the model is trained and how the model makes a prediction. Avoid using advanced mathematical jargon, such as describing equations.
##### Introduction
AdaBoost is a model that belongs to a group of models called "Ensemble Methods".
As the name suggests, the model trains weaker models on the data (also known as "weak learners"), and then combines them into a single, more powerful model (which we call a "strong learner").
##### Training the AdaBoost Model
In our case, we feed the model the training data from our dataset, and it fits a simple "weak learner" to the data. Then, it gives extra weight to the examples the first learner got wrong, and fits a second learner to correct those mistakes. A third weak learner then does the same for the second one, and this process repeats until enough learners have been trained.
Then, the algorithm assigns a weight to each weak learner based on its performance, and combines all the weak learners into a single **Strong Learner**.
When combining the weak learners, the ones with the stronger weights (i.e. the more successful ones) will get more of a say on how the final model is structured.
##### AdaBoost Predictions
After training the model, we will be able to feed it unseen examples (i.e. new individuals), and the model will use its knowledge of the previous individuals to predict whether or not they make more than \$50,000 per year.
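To illustrate the "weighted vote" idea in code, here is a toy sketch with three hard-coded weak learners; the decision rules and weights below are completely made up and are not the actual stumps AdaBoost learns on the census data.
```
import numpy as np

# Three pretend weak learners that each output +1 or -1 for every row of x.
def stump_1(x): return np.where(x[:, 0] > 0.5, 1, -1)
def stump_2(x): return np.where(x[:, 1] > 0.3, 1, -1)
def stump_3(x): return np.where(x[:, 0] + x[:, 1] > 1.0, 1, -1)

stumps = [stump_1, stump_2, stump_3]
alphas = [1.2, 0.8, 0.5]   # per-learner weights: better learners get a bigger say

def strong_learner(x):
    # Weighted vote of the weak learners; the sign of the total decides the class.
    votes = sum(a * s(x) for a, s in zip(alphas, stumps))
    return np.where(votes >= 0, 1, 0)

X_demo = np.array([[0.9, 0.1], [0.2, 0.2], [0.6, 0.7]])
print(strong_learner(X_demo))   # prints [0 0 1]
```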
### Model Tuning
Fine tune the chosen model. Use grid search (`GridSearchCV`) with at least one important parameter tuned with at least 3 different values. You will need to use the entire training set for this. In the code cell below, you will need to implement the following:
- Import [`sklearn.grid_search.GridSearchCV`](http://scikit-learn.org/0.17/modules/generated/sklearn.grid_search.GridSearchCV.html) and [`sklearn.metrics.make_scorer`](http://scikit-learn.org/stable/modules/generated/sklearn.metrics.make_scorer.html).
- Initialize the classifier you've chosen and store it in `clf`.
- Set a `random_state` if one is available to the same state you set before.
- Create a dictionary of parameters you wish to tune for the chosen model.
- Example: `parameters = {'parameter' : [list of values]}`.
- **Note:** Avoid tuning the `max_features` parameter of your learner if that parameter is available!
- Use `make_scorer` to create an `fbeta_score` scoring object (with $\beta = 0.5$).
- Perform grid search on the classifier `clf` using the `'scorer'`, and store it in `grid_obj`.
- Fit the grid search object to the training data (`X_train`, `y_train`), and store it in `grid_fit`.
**Note:** Depending on the algorithm chosen and the parameter list, the following implementation may take some time to run!
```
# Import 'GridSearchCV', 'make_scorer', and any other necessary libraries
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import make_scorer
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import AdaBoostClassifier
# Initialize the classifier
clf = AdaBoostClassifier(random_state=42)
# Create the parameters list you wish to tune, using a dictionary if needed.
parameters = {'n_estimators': [500, 1000, 1500, 2000], 'learning_rate': np.linspace(0.001, 1, 10)}
# Make an fbeta_score scoring object using make_scorer()
scorer = make_scorer(fbeta_score, beta=beta)
# Perform grid search on the classifier using 'scorer' as the scoring method using GridSearchCV()
grid_obj = GridSearchCV(clf, parameters, scoring=scorer, n_jobs = -1)
# Fit the grid search object to the training data and find the optimal parameters using fit()
start = time()
grid_fit = grid_obj.fit(X_train, y_train)
end = time()
print('Time to tune: ', end - start)
# Get the estimator
best_clf = grid_fit.best_estimator_
# Make predictions using the unoptimized and model
predictions = (clf.fit(X_train, y_train)).predict(X_test)
best_predictions = best_clf.predict(X_test)
# Check hyperparameters
print(clf)
print(best_clf)
# Report the before-and-after scores
print("Unoptimized model\n------")
print("Accuracy score on testing data: {:.4f}".format(accuracy_score(y_test, predictions)))
print("F-score on testing data: {:.4f}".format(fbeta_score(y_test, predictions, beta = 0.5)))
print("\nOptimized Model\n------")
print("Final accuracy score on the testing data: {:.4f}".format(accuracy_score(y_test, best_predictions)))
print("Final F-score on the testing data: {:.4f}".format(fbeta_score(y_test, best_predictions, beta = 0.5)))
```
### Final Model Evaluation
* What is your optimized model's accuracy and F-score on the testing data?
* Are these scores better or worse than the unoptimized model?
* How do the results from your optimized model compare to the naive predictor benchmarks you found earlier in **Question 1**?
#### Results:
| Metric | Unoptimized Model | Optimized Model |
| :------------: | :---------------: | :-------------: |
| Accuracy Score | 0.8576 | 0.8676 |
| F-score | 0.7246 | 0.7456 |
**Discussion**
My optimised model's accuracy is 86.76% and its F-score (beta = 0.5) is 0.7456.
These scores are slightly better than the unoptimised model's: accuracy improved by ~1.2% and F-score by ~2.9%.
The scores are significantly better than the naive predictor's. Accuracy improved by ~350% (3.5+ times higher) and F-score by ~256% (2.5+ times higher).
----
## Feature Importance
An important task when performing supervised learning on a dataset like the census data we study here is determining which features provide the most predictive power. By focusing on the relationship between only a few crucial features and the target label we simplify our understanding of the phenomenon, which is almost always a useful thing to do. In the case of this project, that means we wish to identify a small number of features that most strongly predict whether an individual makes at most or more than \$50,000.
Here, we choose a scikit-learn classifier (e.g., AdaBoost, random forests) that has a `feature_importances_` attribute, which ranks the importance of features according to the chosen classifier. In the next Python cell, we fit this classifier to the training set and use this attribute to determine the top 5 most important features for the census dataset.
### Feature Relevance Observation
When **Exploring the Data**, it was shown there are thirteen available features for each individual on record in the census data. Of these thirteen features, which five do you believe to be most important for prediction, and in what order would you rank them and why?
**Answer:**
1. **Occupation**. I would expect the job that a person has to be a good predictor of income.
2. **Hours per week**. The more hours you work, the more you earn.
3. **Education Number**. Because of the positive correlation between education level and income.
4. **Age**. Usually, older people who've had longer careers have a higher income.
5. **Native Country**. Because a US worker typically earns significantly more than, say, an Argentinian one.
### Feature Importance
Choose a `scikit-learn` supervised learning algorithm that has a `feature_importances_` attribute available for it. This attribute ranks the importance of each feature when making predictions based on the chosen algorithm.
In the code cell below, you will need to implement the following:
- Import a supervised learning model from sklearn if it is different from the three used earlier.
- Train the supervised model on the entire training set.
- Extract the feature importances using `'.feature_importances_'`.
```
# Import a supervised learning model that has 'feature_importances_'
from sklearn.ensemble import AdaBoostClassifier
# Train the supervised model on the training set using .fit(X_train, y_train)
model = AdaBoostClassifier().fit(X_train, y_train)
# Extract the feature importances using .feature_importances_
importances = model.feature_importances_
# Plot
vs.feature_plot(importances, X_train, y_train)
```
### Extracting Feature Importance
Observe the visualization created above which displays the five most relevant features for predicting if an individual makes at most or above \$50,000.
* How do these five features compare to the five features you discussed in **Question 6**?
* If you were close to the same answer, how does this visualization confirm your thoughts?
* If you were not close, why do you think these features are more relevant?
**Answer:**
* *How do these five features compare to the five features you discussed in **Question 6**?*
These five features are significantly different to what I predicted in question 6. While I did mention age, hours-per-week and education-num, I failed to mention two of the most significant features: capital-loss and capital-gain, which together amount to about 37% cumulative feature weight.
* *If you were close to the same answer, how does this visualization confirm your thoughts?*
This visualisation confirms that age plays a large role and that hours-per-week and education-num are among the most relevant features.
This is because of the direct and strong correlation between these variables and individual income.
* *If you were not close, why do you think these features are more relevant?*
I was genuinely surprised that occupation did not make it into the top 5. I suppose this is because the listed occupations just do not differ that much in income, whereas capital-loss and capital-gain vary more between individuals and affect their income more directly. Similarly, regarding native-country, I suppose most people were from the US or a similarly developed country, so the feature didn't have much predictive power.
### Feature Selection
How does a model perform if we only use a subset of all the available features in the data? With fewer features required for training, the expectation is that training and prediction time is much lower — at the cost of performance metrics. From the visualization above, we see that the top five most important features contribute more than half of the importance of **all** features present in the data. This hints that we can attempt to *reduce the feature space* and simplify the information required for the model to learn. The code cell below will use the same optimized model you found earlier, and train it on the same training set *with only the top five important features*.
```
# Import functionality for cloning a model
from sklearn.base import clone
# Reduce the feature space
X_train_reduced = X_train[X_train.columns.values[(np.argsort(importances)[::-1])[:5]]]
X_test_reduced = X_test[X_test.columns.values[(np.argsort(importances)[::-1])[:5]]]
# Train on the "best" model found from grid search earlier
clf = (clone(best_clf)).fit(X_train_reduced, y_train)
# Make new predictions
reduced_predictions = clf.predict(X_test_reduced)
# Report scores from the final model using both versions of data
print("Final Model trained on full data\n------")
print("Accuracy on testing data: {:.4f}".format(accuracy_score(y_test, best_predictions)))
print("F-score on testing data: {:.4f}".format(fbeta_score(y_test, best_predictions, beta = 0.5)))
print("\nFinal Model trained on reduced data\n------")
print("Accuracy on testing data: {:.4f}".format(accuracy_score(y_test, reduced_predictions)))
print("F-score on testing data: {:.4f}".format(fbeta_score(y_test, reduced_predictions, beta = 0.5)))
```
### Effects of Feature Selection
* How does the final model's F-score and accuracy score on the reduced data using only five features compare to those same scores when all features are used?
* If training time was a factor, would you consider using the reduced data as your training set?
**Answer:**
The model trained on the reduced data gets roughly 2% more of the testing examples wrong, and its F-score is about 0.04 lower.
If training time was a factor, I would probably still not use the reduced data as my training set.
However, if more training examples yielded a significant improvement, I would recommend using lower-dimension data so that we could accommodate more training examples.
```
import numpy as np
import matplotlib.pyplot as plt
import os
import sys
from hawkes import hawkes, sampleHawkes, plotHawkes, iterative_sampling, extract_samples, sample_counterfactual_superposition, check_monotonicity_hawkes
sys.path.append(os.path.abspath('../'))
from sampling_utils import thinning_T
```
This notebook contains an example of running Algorithm 3 (in the paper) for two cases: (1) when we have both observed and unobserved events, and (2) when we have only the observed events.
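The helpers imported from `hawkes.py` and `sampling_utils.py` (e.g. `thinning_T`, `sampleHawkes`) are used below as black boxes. For orientation only, the following standalone sketch shows the general thinning idea for an exponential-kernel Hawkes intensity; it is not the notebook's actual implementation, and it assumes `lambda_max` upper-bounds the intensity on `[0, T]`.
```
import numpy as np

def hawkes_intensity(t, history, mu0, alpha, w):
    # lambda(t) = mu0 + alpha * sum_i exp(-w * (t - t_i)) over past events t_i < t
    past = np.asarray([s for s in history if s < t])
    return mu0 + alpha * np.sum(np.exp(-w * (t - past)))

def sample_hawkes_by_thinning(mu0, alpha, w, lambda_max, T, seed=None):
    # Propose events from a homogeneous Poisson process with rate lambda_max,
    # then accept each proposal with probability lambda(t) / lambda_max.
    rng = np.random.default_rng(seed)
    t, accepted = 0.0, []
    while True:
        t += rng.exponential(1.0 / lambda_max)
        if t > T:
            break
        if rng.uniform() * lambda_max <= hawkes_intensity(t, accepted, mu0, alpha, w):
            accepted.append(t)
    return np.array(accepted)

# e.g. sample_hawkes_by_thinning(mu0=1, alpha=1, w=1, lambda_max=3, T=5, seed=0)
```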
# 1. Sampling From Lambda_max
```
# required parameters
mu0 = 1
alpha = 1
w = 1
lambda_max = 3
T = 5
def constant1(x): return mu0
# sampling from hawkes using the superposition property
initial_sample, indicators = thinning_T(0, constant1, lambda_max, T)
events = {initial_sample[i]: indicators[i] for i in range(len(initial_sample))}
all_events = {}
all_events[mu0] = events
iterative_sampling(all_events, events, mu0, alpha, w, lambda_max, T)
# plotting hawkes
sampled_events = list(all_events.keys())[1:]
sampled_events.sort()
sampled_events = np.array(sampled_events)
sampled_lambdas = hawkes(sampled_events, mu0, alpha, w)
plt.figure(figsize=(10, 8))
tvec, l_t = plotHawkes(sampled_events, constant1, alpha, w, T, 10000.0, label= 'intensity', color = 'r+', legend= 'accepted')
plt.plot(sampled_events, sampled_lambdas, 'r^')
plt.legend()
plt.show()
# extract all sampled events from all_events dictionary.
all_samples, all_lambdas = extract_samples(all_events, sampled_events, mu0, alpha, w)
# plots all events, both accepted and rejected with their intensities.
plt.figure(figsize=(10, 8))
plt.plot(tvec, l_t, label = 'Original Intensity')
plt.plot(all_samples, all_lambdas, 'oy', label = 'events')
plt.plot(sampled_events,sampled_lambdas, 'r+', label = 'accepted')
plt.xlabel('time')
plt.ylabel('intensity')
plt.legend()
# sampling from the counterfactual intensity.
new_mu0 = 3
new_alpha = 0.1
real_counterfactuals = sample_counterfactual_superposition(mu0, alpha, new_mu0, new_alpha, all_events, lambda_max, w, T)
```
**The red +s are the counterfactuals.**
```
plt.figure(figsize=(15, 6))
plotHawkes(np.array(real_counterfactuals), lambda t: new_mu0, new_alpha, w, T, 10000.0, label= 'counterfactual intensity', color = 'g+', legend= 'accepted in counterfactual')
plt.plot(tvec, l_t, label = 'Original Intensity')
plt.plot(all_samples, all_lambdas, 'oy', label = 'events')
plt.plot(sampled_events,sampled_lambdas, 'r^')
plt.plot(sampled_events,np.full(len(sampled_events), -0.1), 'r+', label = 'originally accepted')
for xc in real_counterfactuals:
plt.axvline(x=xc, color = 'k', ls = '--', alpha = 0.2)
plt.xlabel('time')
plt.ylabel('intensity')
plt.legend()
```
In the following cell, we will check the monotonicity property. Note that this property should hold in **each exponential created by superposition** (please have a look at `check_monotonicity_hawkes` in `hawkes.py` for more details).
```
check_monotonicity_hawkes(mu0, alpha, new_mu0, new_alpha, all_events, sampled_events, real_counterfactuals, w)
```
# 2. Real-World Scenario
```
# First, we sample from the hawkes process using the Ogata's algorithm (or any other sampling method), but only store the accepted events.
plt.figure(figsize=(10, 8))
mu0 = 1
alpha = 1
w = 1
lambda_max = 3
T = 5
tev, tend, lambdas_original = sampleHawkes(mu0, alpha, w, T, Nev= 100)
tvec, l_t = plotHawkes(tev, lambda t: mu0, alpha, w, T, 10000.0, label = 'Original Intensity', color= 'r+', legend= 'samples')
plt.plot(tev, lambdas_original, 'r^')
plt.legend()
# this list stores functions corresponding to each exponential.
exponentials = []
all_events = {}
exponentials.append(lambda t: mu0)
all_events[mu0] = {}
for i in range(len(tev)):
exponentials.append(lambda t: alpha * np.exp(-w * (t - tev[i])))
all_events[tev[i]] = {}
# we should assign each accepted event to some exponential. (IMPORTANT)
for i in range(len(tev)):
if i == 0:
all_events[mu0][tev[i]] = True
else:
probabilities = [exponentials[j](tev[i]) for j in range(0, i + 1)]
probabilities = [float(i)/sum(probabilities) for i in probabilities]
a = np.random.choice(i + 1, 1, p = probabilities)
if a == 0:
all_events[mu0][tev[i]] = True
else:
all_events[tev[a[0] - 1]][tev[i]] = True
# using the superposition to calculate the difference between lambda_max and the exponentials, and sample from it.
differences = []
differences.append(lambda t: lambda_max - mu0)
for k in range(len(tev)):
f = lambda t: lambda_max - alpha * np.exp(-w * (t - tev[k]))
differences.append(f)
for i in range(len(differences)):
if i == 0:
rejceted, indicators = thinning_T(0, differences[i], lambda_max, T)
else:
rejceted, indicators = thinning_T(tev[i - 1], differences[i], lambda_max, T)
rejceted = {rejceted[j]: False for j in range(len(rejceted)) if indicators[j] == True}
if i == 0:
all_events[mu0].update(rejceted)
all_events[mu0] = {k:v for k,v in sorted(all_events[mu0].items())}
else:
all_events[tev[i - 1]].update(rejceted)
all_events[tev[i - 1]] = {k:v for k,v in sorted(all_events[tev[i - 1]].items())}
all_samples, all_lambdas = extract_samples(all_events, tev, mu0, alpha, w)
plt.figure(figsize=(10, 8))
plt.plot(tvec, l_t, label = 'Original Intensity')
plt.plot(all_samples, all_lambdas, 'oy', label = 'events')
plt.plot(tev,lambdas_original, 'r+', label = 'accepted')
plt.xlabel('time')
plt.ylabel('intensity')
plt.legend()
new_mu0 = 0.1
new_alpha = 1.7
real_counterfactuals = sample_counterfactual_superposition(mu0, alpha, new_mu0, new_alpha, all_events, lambda_max, w, T)
```
**The red +s are the counterfactuals.**
```
plt.figure(figsize=(15, 8))
plotHawkes(np.array(real_counterfactuals), lambda t: new_mu0, new_alpha, w, T, 10000.0, label= 'counterfactual intensity', color= 'g+', legend= 'accepted in counterfactual')
plt.plot(tvec, l_t, label = 'Original Intensity')
plt.plot(all_samples, all_lambdas, 'oy', label = 'events')
plt.plot(tev,lambdas_original, 'r^')
plt.plot(tev,np.full(len(tev), -0.1), 'r+', label = 'originally accepted')
for xc in real_counterfactuals:
plt.axvline(x=xc, color = 'k', ls = '--', alpha = 0.2)
plt.xlabel('time')
plt.ylabel('intensity')
plt.legend()
check_monotonicity_hawkes(mu0, alpha, new_mu0, new_alpha, all_events, tev, real_counterfactuals, w)
```
This notebook will show an example of text preprocessing applied to RTL-Wiki dataset.
This dataset was introduced in [1] and later recreated in [2]. You can download it in from http://139.18.2.164/mroeder/palmetto/datasets/rtl-wiki.tar.gz
--------
[1] "Reading Tea Leaves: How Humans Interpret Topic Models" (NIPS 2009)
[2] "Exploring the Space of Topic Coherence Measures" (WSDM 2015)
```
# download corpus and unpack it:
! wget http://139.18.2.164/mroeder/palmetto/datasets/rtl-wiki.tar.gz -O rtl-wiki.tar.gz
! tar xzf rtl-wiki.tar.gz
```
The corpus is a sample of 10000 articles from English Wikipedia in a MediaWiki markup format.
Hence, we need to strip specific wiki formatting. We advise using a `mwparserfromhell` fork optimized to deal with the English Wikipedia.
```
git clone --branch images_and_interwiki https://github.com/bt2901/mwparserfromhell.git
```
```
! git clone --branch images_and_interwiki https://github.com/bt2901/mwparserfromhell.git
```
The Wikipedia dataset is too heterogeneous. Building a good topic model here requires a lot of topics or a lot of documents.
To make the collection more focused, we will filter out everything that isn't about people. We will use the following criteria to distinguish people from not-people:
```
import re
# all infoboxes related to persons, according to https://en.wikipedia.org/wiki/Wikipedia:List_of_infoboxes
person_infoboxes = {'infobox magic: the gathering player', 'infobox architect', 'infobox mountaineer', 'infobox scientist', 'infobox chess biography', 'infobox racing driver', 'infobox saint', 'infobox snooker player', 'infobox figure skater', 'infobox theological work', 'infobox gaelic athletic association player', 'infobox professional wrestler', 'infobox noble', 'infobox pelotari', 'infobox native american leader', 'infobox pretender', 'infobox amateur wrestler', 'infobox college football player', 'infobox buddha', 'infobox cfl biography', 'infobox playboy playmate', 'infobox cyclist', 'infobox martial artist', 'infobox motorcycle rider', 'infobox motocross rider', 'infobox bandy biography', 'infobox video game player', 'infobox dancer', 'infobox nahua officeholder', 'infobox criminal', 'infobox squash player', 'infobox go player', 'infobox bullfighting career', 'infobox engineering career', 'infobox pirate', 'infobox latter day saint biography', 'infobox sumo wrestler', 'infobox youtube personality', 'infobox national hockey league coach', 'infobox rebbe', 'infobox football official', 'infobox aviator', 'infobox pharaoh', 'infobox classical composer', 'infobox fbi ten most wanted', 'infobox chef', 'infobox engineer', 'infobox nascar driver', 'infobox medical person', 'infobox jewish leader', 'infobox horseracing personality', 'infobox poker player', 'infobox economist', 'infobox peer', 'infobox war on terror detainee', 'infobox philosopher', 'infobox professional bowler', 'infobox champ car driver', 'infobox golfer', 'infobox le mans driver', 'infobox alpine ski racer', 'infobox boxer (amateur)', 'infobox bodybuilder', 'infobox college coach', 'infobox speedway rider', 'infobox skier', 'infobox medical details', 'infobox field hockey player', 'infobox badminton player', 'infobox sports announcer details', 'infobox academic', 'infobox f1 driver', 'infobox ncaa athlete', 'infobox biathlete', 'infobox comics creator', 'infobox rugby league biography', 'infobox fencer', 'infobox theologian', 'infobox religious biography', 'infobox egyptian dignitary', 'infobox curler', 'infobox racing driver series section', 'infobox afl biography', 'infobox speed skater', 'infobox climber', 'infobox rugby biography', 'infobox clergy', 'infobox equestrian', 'infobox member of the knesset', 'infobox pageant titleholder', 'infobox lacrosse player', 'infobox tennis biography', 'infobox gymnast', 'infobox sport wrestler', 'infobox sports announcer', 'infobox surfer', 'infobox darts player', 'infobox christian leader', 'infobox presenter', 'infobox gunpowder plotter', 'infobox table tennis player', 'infobox sailor', 'infobox astronaut', 'infobox handball biography', 'infobox volleyball biography', 'infobox spy', 'infobox wrc driver', 'infobox police officer', 'infobox swimmer', 'infobox netball biography', 'infobox model', 'infobox comedian', 'infobox boxer'}
# is page included in a category with demography information?
demography_re = re.compile("([0-9]+ (deaths|births))|(living people)")
dir_name = "persons"
! mkdir $dir_name
import glob
from bs4 import BeautifulSoup
from mwparserfromhell import mwparserfromhell
from tqdm import tqdm_notebook as tqdm
for filename in tqdm(glob.glob("documents/*.html")):
doc_id = filename.partition("/")[-1]
doc_id = doc_id.rpartition(".")[0] + ".txt"
is_about_person = False
with open(filename, "r") as f:
soup = BeautifulSoup("".join(f.readlines()))
text = soup.findAll('textarea', id="wpTextbox1")[0].contents[0]
text = text.replace("&", "&").replace('<', '<').replace('>', '>')
wikicode = mwparserfromhell.parse(text)
if dir_name == "persons":
for node in wikicode.nodes:
entry_type = str(type(node))
if "Wikilink" in entry_type:
special_link_name, _, cat_name = node.title.lower().strip().partition(":")
if special_link_name == "category":
if demography_re.match(cat_name):
is_about_person = True
if "Template" in entry_type:
name = str(node.name).lower().strip()
if name in person_infoboxes:
is_about_person = True
should_be_saved = is_about_person
else:
should_be_saved = True
if should_be_saved:
with open(f"{dir_name}/{doc_id}", "w") as f2:
stripped_text = wikicode.strip_code()
f2.write(stripped_text)
```
Now we have a folder `persons` which contains 1201 documents. Let's take a look at one of them:
```
! head $dir_name/Eusebius.txt
```
We need to lemmatize the texts, remove stopwords, and extract informative n-grams.
There's no one "correct" way to do it, but a reasonable baseline is to use the well-known `nltk` library.
```
import nltk
import string
import pandas as pd
from glob import glob
nltk.data.path.append('/home/evgenyegorov/nltk_data/')
files = glob(dir_name + '/*.txt')
data = []
for path in files:
entry = {}
entry['id'] = path.split('/')[-1].rpartition(".")[0]
with open(path, 'r') as f:
entry['raw_text'] = " ".join(line.strip() for line in f.readlines())
data.append(entry)
wiki_texts = pd.DataFrame(data)
from tqdm import tqdm
tokenized_text = []
for text in tqdm(wiki_texts['raw_text'].values):
tokens = nltk.wordpunct_tokenize(text.lower())
tokenized_text.append(nltk.pos_tag(tokens))
wiki_texts['tokenized'] = tokenized_text
from nltk.corpus import wordnet
def nltk2wn_tag(nltk_tag):
if nltk_tag.startswith('J'):
return wordnet.ADJ
elif nltk_tag.startswith('V'):
return wordnet.VERB
elif nltk_tag.startswith('N'):
return wordnet.NOUN
elif nltk_tag.startswith('R'):
return wordnet.ADV
else:
return ''
from nltk.stem import WordNetLemmatizer
from nltk.corpus import stopwords
stop = set(stopwords.words('english'))
lemmatized_text = []
wnl = WordNetLemmatizer()
for text in wiki_texts['tokenized'].values:
lemmatized = [wnl.lemmatize(word,nltk2wn_tag(pos))
if nltk2wn_tag(pos) != ''
else wnl.lemmatize(word)
for word, pos in text ]
lemmatized = [word for word in lemmatized
if word not in stop and word.isalpha()]
lemmatized_text.append(lemmatized)
wiki_texts['lemmatized'] = lemmatized_text
```
N-grams are a powerful feature, and BigARTM is able to take advantage of them (the technical term is 'multimodal topic modeling': our topic model can model many different features linked to a specific document, not just words).
```
from nltk.collocations import BigramAssocMeasures, BigramCollocationFinder
bigram_measures = BigramAssocMeasures()
finder = BigramCollocationFinder.from_documents(wiki_texts['lemmatized'])
finder.apply_freq_filter(5)
set_dict = set(finder.nbest(bigram_measures.pmi,32100)[100:])
documents = wiki_texts['lemmatized']
bigrams = []
for doc in documents:
entry = ['_'.join([word_first, word_second])
for word_first, word_second in zip(doc[:-1],doc[1:])
if (word_first, word_second) in set_dict]
bigrams.append(entry)
wiki_texts['bigram'] = bigrams
from collections import Counter
def vowpalize_sequence(sequence):
word_2_frequency = Counter(sequence)
del word_2_frequency['']
vw_string = ''
for word in word_2_frequency:
vw_string += word + ":" + str(word_2_frequency[word]) + ' '
return vw_string
vw_text = []
for index, data in wiki_texts.iterrows():
vw_string = ''
doc_id = data.id
lemmatized = '@lemmatized ' + vowpalize_sequence(data.lemmatized)
bigram = '@bigram ' + vowpalize_sequence(data.bigram)
vw_string = ' |'.join([doc_id, lemmatized, bigram])
vw_text.append(vw_string)
wiki_texts['vw_text'] = vw_text
```
Vowpal Wabbit ("vw") is a text format which is a good fit for multimodal topic modeling. Here we elected to store the dataset in a bag-of-words format (for performance reasons), but VW can also store everything as a sequence of words.
It looks like this:
```
wiki_texts['vw_text'].head().values[0]
wiki_texts[['id','raw_text', 'vw_text']].to_csv('./wiki_data.csv')
```
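For reference, a single line in this format has the document id followed by one `|`-separated block per modality; a purely hypothetical example (tokens and counts invented just to illustrate the layout) would look like:
```
Eusebius |@lemmatized bishop:3 church:5 history:2 caesarea:1 |@bigram church_history:2 early_christian:1
```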
# Bokeh Tutorial — `bokeh.models` interface
## Models
NYTimes interactive chart [Usain Bolt vs. 116 years of Olympic sprinters](http://www.nytimes.com/interactive/2012/08/05/sports/olympics/the-100-meter-dash-one-race-every-medalist-ever.html)
The first thing we need is to get the data. The data for this chart is located in the ``bokeh.sampledata`` module as a Pandas DataFrame. You can see the first ten rows below:
```
from bokeh.sampledata.sprint import sprint
sprint[:10]
```
Next we import some of the Bokeh models that need to be assembled to make a plot. At a minimum, we need to start with ``Plot``, the glyphs (``Circle`` and ``Text``) we want to display, as well as ``ColumnDataSource`` to hold the data and range objects to set the plot bounds.
```
from bokeh.io import output_notebook, show
from bokeh.models.glyphs import Circle, Text
from bokeh.models import ColumnDataSource, Range1d, DataRange1d, Plot
output_notebook()
```
## Setting up Data
```
abbrev_to_country = {
"USA": "United States",
"GBR": "Britain",
"JAM": "Jamaica",
"CAN": "Canada",
"TRI": "Trinidad and Tobago",
"AUS": "Australia",
"GER": "Germany",
"CUB": "Cuba",
"NAM": "Namibia",
"URS": "Soviet Union",
"BAR": "Barbados",
"BUL": "Bulgaria",
"HUN": "Hungary",
"NED": "Netherlands",
"NZL": "New Zealand",
"PAN": "Panama",
"POR": "Portugal",
"RSA": "South Africa",
"EUA": "United Team of Germany",
}
gold_fill = "#efcf6d"
gold_line = "#c8a850"
silver_fill = "#cccccc"
silver_line = "#b0b0b1"
bronze_fill = "#c59e8a"
bronze_line = "#98715d"
fill_color = { "gold": gold_fill, "silver": silver_fill, "bronze": bronze_fill }
line_color = { "gold": gold_line, "silver": silver_line, "bronze": bronze_line }
def selected_name(name, medal, year):
return name if medal == "gold" and year in [1988, 1968, 1936, 1896] else None
t0 = sprint.Time[0]
sprint["Abbrev"] = sprint.Country
sprint["Country"] = sprint.Abbrev.map(lambda abbr: abbrev_to_country[abbr])
sprint["Medal"] = sprint.Medal.map(lambda medal: medal.lower())
sprint["Speed"] = 100.0/sprint.Time
sprint["MetersBack"] = 100.0*(1.0 - t0/sprint.Time)
sprint["MedalFill"] = sprint.Medal.map(lambda medal: fill_color[medal])
sprint["MedalLine"] = sprint.Medal.map(lambda medal: line_color[medal])
sprint["SelectedName"] = sprint[["Name", "Medal", "Year"]].apply(tuple, axis=1).map(lambda args: selected_name(*args))
source = ColumnDataSource(sprint)
```
## Basic Plot with Glyphs
```
plot_options = dict(plot_width=800, plot_height=480, toolbar_location=None,
outline_line_color=None, title = "Usain Bolt vs. 116 years of Olympic sprinters")
radius = dict(value=5, units="screen")
medal_glyph = Circle(x="MetersBack", y="Year", radius=radius, fill_color="MedalFill",
line_color="MedalLine", fill_alpha=0.5)
athlete_glyph = Text(x="MetersBack", y="Year", x_offset=10, text="SelectedName",
text_align="left", text_baseline="middle", text_font_size="9pt")
no_olympics_glyph = Text(x=7.5, y=1942, text=["No Olympics in 1940 or 1944"],
text_align="center", text_baseline="middle",
text_font_size="9pt", text_font_style="italic", text_color="silver")
xdr = Range1d(start=sprint.MetersBack.max()+2, end=0) # +2 is for padding
ydr = DataRange1d(range_padding=0.05)
plot = Plot(x_range=xdr, y_range=ydr, **plot_options)
plot.add_glyph(source, medal_glyph)
plot.add_glyph(source, athlete_glyph)
plot.add_glyph(no_olympics_glyph)
show(plot)
```
## Adding Axes and Grids
```
from bokeh.models import Grid, LinearAxis, SingleIntervalTicker
xdr = Range1d(start=sprint.MetersBack.max()+2, end=0) # +2 is for padding
ydr = DataRange1d(range_padding=0.05)
plot = Plot(x_range=xdr, y_range=ydr, **plot_options)
plot.add_glyph(source, medal_glyph)
plot.add_glyph(source, athlete_glyph)
plot.add_glyph(no_olympics_glyph)
xticker = SingleIntervalTicker(interval=5, num_minor_ticks=0)
xaxis = LinearAxis(ticker=xticker, axis_line_color=None, major_tick_line_color=None,
axis_label="Meters behind 2012 Bolt", axis_label_text_font_size="10pt",
axis_label_text_font_style="bold")
plot.add_layout(xaxis, "below")
xgrid = Grid(dimension=0, ticker=xaxis.ticker, grid_line_dash="dashed")
plot.add_layout(xgrid)
yticker = SingleIntervalTicker(interval=12, num_minor_ticks=0)
yaxis = LinearAxis(ticker=yticker, major_tick_in=-5, major_tick_out=10)
plot.add_layout(yaxis, "right")
show(plot)
```
## Adding a Hover Tool
```
from bokeh.models import HoverTool
tooltips = """
<div>
<span style="font-size: 15px;">@Name</span>
<span style="font-size: 10px; color: #666;">(@Abbrev)</span>
</div>
<div>
<span style="font-size: 17px; font-weight: bold;">@Time{0.00}</span>
<span style="font-size: 10px; color: #666;">@Year</span>
</div>
<div style="font-size: 11px; color: #666;">@{MetersBack}{0.00} meters behind</div>
"""
xdr = Range1d(start=sprint.MetersBack.max()+2, end=0) # +2 is for padding
ydr = DataRange1d(range_padding=0.05)
plot = Plot(x_range=xdr, y_range=ydr, **plot_options)
medal = plot.add_glyph(source, medal_glyph) # we need this renderer to configure the hover tool
plot.add_glyph(source, athlete_glyph)
plot.add_glyph(no_olympics_glyph)
xticker = SingleIntervalTicker(interval=5, num_minor_ticks=0)
xaxis = LinearAxis(ticker=xticker, axis_line_color=None, major_tick_line_color=None,
axis_label="Meters behind 2012 Bolt", axis_label_text_font_size="10pt",
axis_label_text_font_style="bold")
plot.add_layout(xaxis, "below")
xgrid = Grid(dimension=0, ticker=xaxis.ticker, grid_line_dash="dashed")
plot.add_layout(xgrid)
yticker = SingleIntervalTicker(interval=12, num_minor_ticks=0)
yaxis = LinearAxis(ticker=yticker, major_tick_in=-5, major_tick_out=10)
plot.add_layout(yaxis, "right")
hover = HoverTool(tooltips=tooltips, renderers=[medal])
plot.add_tools(hover)
show(plot)
from bubble_plot import get_1964_data
def get_plot():
return Plot(
x_range=Range1d(1, 9), y_range=Range1d(20, 100),
title="", plot_width=800, plot_height=400,
outline_line_color=None, toolbar_location=None,
)
df = get_1964_data()
df.head()
# EXERCISE: Add Circles to the plot from the data in `df`.
# With `fertility` for the x coordinates, `life` for the y coordinates.
plot = get_plot()
# EXERCISE: Color the circles by region_color & change the size of the circles by population
# EXERCISE: Add axes and grid lines
# EXERCISE: Manually add a legend using Circle & Text. The color key is as follows
region_name_and_color = [
('America', '#3288bd'),
('East Asia & Pacific', '#99d594'),
('Europe & Central Asia', '#e6f598'),
('Middle East & North Africa', '#fee08b'),
('South Asia', '#fc8d59'),
('Sub-Saharan Africa', '#d53e4f')
]
```
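One possible sketch of the exercises above is shown below. It assumes the `df` returned by `get_1964_data` has `fertility`, `life`, `population` and `region_color` columns (these column names are an assumption, not verified here), so treat it as a starting point rather than a reference solution:
```
from bokeh.models import Circle, ColumnDataSource, Grid, LinearAxis, Text

df = df.copy()
# Hypothetical bubble sizing: scale screen size by the square root of population
df["size"] = (df.population / df.population.max()) ** 0.5 * 30

source = ColumnDataSource(df)
plot = get_plot()

# Circles at (fertility, life), colored by region and sized by population
plot.add_glyph(source, Circle(x="fertility", y="life", size="size",
                              fill_color="region_color", line_color="region_color",
                              fill_alpha=0.6))

# Axes and grid lines
xaxis = LinearAxis(axis_label="Children per woman (total fertility)")
yaxis = LinearAxis(axis_label="Life expectancy at birth (years)")
plot.add_layout(xaxis, "below")
plot.add_layout(yaxis, "left")
plot.add_layout(Grid(dimension=0, ticker=xaxis.ticker))
plot.add_layout(Grid(dimension=1, ticker=yaxis.ticker))

# Manual legend built from Circle and Text glyphs using the color key above
for i, (name, color) in enumerate(region_name_and_color):
    y = 95 - i * 5
    plot.add_glyph(Circle(x=1.5, y=y, fill_color=color, line_color=color,
                          radius=dict(value=5, units="screen")))
    plot.add_glyph(Text(x=1.65, y=y, text=[name], text_baseline="middle",
                        text_font_size="9pt"))

show(plot)
```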
```
import matplotlib.pyplot as plt
import numpy as np
import pymc3 as pm
import theano
from scipy.integrate import odeint
from theano import *
THEANO_FLAGS = "optimizer=fast_compile"
```
# Lotka-Volterra with manual gradients
by [Sanmitra Ghosh](https://www.mrc-bsu.cam.ac.uk/people/in-alphabetical-order/a-to-g/sanmitra-ghosh/)
Mathematical models are used ubiquitously in a variety of science and engineering domains to model the time evolution of physical variables. These mathematical models are often described as ODEs that are characterised by model structure - the functions of the dynamical variables - and model parameters. However, for the vast majority of systems of practical interest it is necessary to infer both the model parameters and an appropriate model structure from experimental observations. This experimental data is often scarce and incomplete. Furthermore, a large variety of models described as dynamical systems show traits of sloppiness (see [Gutenkunst et al., 2007](https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.0030189)) and have unidentifiable parameter combinations. The task of inferring model parameters and structure from experimental data is of paramount importance to reliably analyse the behaviour of dynamical systems and draw faithful predictions in light of the difficulties posed by their complexities. Moreover, any future model prediction should encompass and propagate variability and uncertainty in model parameters and/or structure. Thus, it is also important that the inference methods are equipped to quantify and propagate the aforementioned uncertainties from the model descriptions to model predictions. As a natural choice to handle uncertainty, at least in the parameters, Bayesian inference is increasingly used to fit ODE models to experimental data ([Mark Girolami, 2008](https://www.sciencedirect.com/science/article/pii/S030439750800501X)). However, due to some of the difficulties that I pointed out above, fitting an ODE model using Bayesian inference is a challenging task. In this tutorial I am going to take up that challenge and will show how PyMC3 could potentially be used for this purpose.
I must point out that model fitting (inference of the unknown parameters) is just one of many crucial tasks that a modeller has to complete in order to gain a deeper understanding of a physical process. However, success in this task is crucial and this is where PyMC3, and probabilistic programming (ppl) in general, is extremely useful. The modeller can take full advantage of the variety of samplers and distributions provided by PyMC3 to automate inference.
In this tutorial I will focus on the fitting exercise, that is estimating the posterior distribution of the parameters given some noisy experimental time series.
## Bayesian inference of the parameters of an ODE
I begin by first introducing the Bayesian framework for inference in a coupled non-linear ODE defined as
$$
\frac{d X(t)}{dt}=\boldsymbol{f}\big(X(t),\boldsymbol{\theta}\big),
$$
where $X(t)\in\mathbb{R}^K$ is the solution, at each time point, of the system composed of $K$ coupled ODEs - the state vector - and $\boldsymbol{\theta}\in\mathbb{R}^D$ is the parameter vector that we wish to infer. $\boldsymbol{f}(\cdot)$ is a non-linear function that describes the governing dynamics. Also, in case of an initial value problem, let the matrix $\boldsymbol{X}(\boldsymbol{\theta}, \mathbf{x_0})$ denote the solution of the above system of equations at some specified time points for the parameters $\boldsymbol{\theta}$ and initial conditions $\mathbf{x_0}$.
Consider a set of noisy experimental observations $\boldsymbol{Y} \in \mathbb{R}^{T\times K}$ observed at $T$ experimental time points for the $K$ states. We can obtain the likelihood $p(\boldsymbol{Y}|\boldsymbol{X})$, where I use the symbol $\boldsymbol{X}:=\boldsymbol{X}(\boldsymbol{\theta}, \mathbf{x_0})$, and combine that with a prior distribution $p(\boldsymbol{\theta})$ on the parameters, using the Bayes theorem, to obtain the posterior distribution as
$$
p(\boldsymbol{\theta}|\boldsymbol{Y})=\frac{1}{Z}p(\boldsymbol{Y}|\boldsymbol{X})p(\boldsymbol{\theta}),
$$
where $Z=\int p(\boldsymbol{Y}|\boldsymbol{X})p(\boldsymbol{\theta}) d\boldsymbol{\theta} $ is the intractable marginal likelihood. Due to this intractability we resort to approximate inference and apply MCMC.
For this tutorial I have chosen two ODEs:
1. The [__Lotka-Volterra predator prey model__ ](http://www.scholarpedia.org/article/Predator-prey_model)
2. The [__Fitzhugh-Nagumo action potential model__](http://www.scholarpedia.org/article/FitzHugh-Nagumo_model)
I will showcase two distinctive approaches (__NUTS__ and __SMC__ step methods), supported by PyMC3, for the estimation of unknown parameters in these models.
## Lotka-Volterra predator prey model
The Lotka Volterra model depicts an ecological system that is used to describe the interaction between a predator and prey species. This ODE given by
$$
\begin{aligned}
\frac{d x}{dt} &=\alpha x -\beta xy \\
\frac{d y}{dt} &=-\gamma y + \delta xy,
\end{aligned}
$$
shows limit cycle behaviour and has often been used for benchmarking Bayesian inference methods. $\boldsymbol{\theta}=(\alpha,\beta,\gamma,\delta, x(0),y(0))$ is the set of unknown parameters that we wish to infer from experimental observations of the state vector $X(t)=(x(t),y(t))$ comprising the concentrations of the prey and the predator species respectively. $x(0), y(0)$ are the initial values of the states needed to solve the ODE, which are also treated as unknown quantities. The predator prey model was recently used to demonstrate the applicability of the NUTS sampler, and the Stan ppl in general, for inference in ODE models. I will closely follow [this](https://mc-stan.org/users/documentation/case-studies/lotka-volterra-predator-prey.html) Stan tutorial and thus I will set up this model and the associated inference problem (including the data) exactly as was done for the Stan tutorial. Let me first write down the code to solve this ODE using SciPy's `odeint`. Note that the methods in this tutorial are not limited or tied to `odeint`. Here I have chosen `odeint` simply to stay within PyMC3's dependencies (SciPy in this case).
```
class LotkaVolterraModel:
def __init__(self, y0=None):
self._y0 = y0
def simulate(self, parameters, times):
alpha, beta, gamma, delta, Xt0, Yt0 = [x for x in parameters]
def rhs(y, t, p):
X, Y = y
dX_dt = alpha * X - beta * X * Y
dY_dt = -gamma * Y + delta * X * Y
return dX_dt, dY_dt
values = odeint(rhs, [Xt0, Yt0], times, (parameters,))
return values
ode_model = LotkaVolterraModel()
```
## Handling ODE gradients
NUTS requires the gradient of the log of the target density w.r.t. the unknown parameters, $\nabla_{\boldsymbol{\theta}}p(\boldsymbol{\theta}|\boldsymbol{Y})$, which can be evaluated using the chain rule of differentiation as
$$ \nabla_{\boldsymbol{\theta}}p(\boldsymbol{\theta}|\boldsymbol{Y}) = \frac{\partial p(\boldsymbol{\theta}|\boldsymbol{Y})}{\partial \boldsymbol{X}}^T \frac{\partial \boldsymbol{X}}{\partial \boldsymbol{\theta}}.$$
The gradient of an ODE w.r.t. its parameters, the term $\frac{\partial \boldsymbol{X}}{\partial \boldsymbol{\theta}}$, can be obtained using local sensitivity analysis, although this is not the only method to obtain gradients. However, just like solving an ODE (a non-linear one to be precise), evaluation of the gradients can only be carried out using some sort of numerical method, say for example the famous Runge-Kutta method for non-stiff ODEs. PyMC3 uses Theano as the automatic differentiation engine and thus all models are implemented by stitching together available primitive operations (Ops) supported by Theano. Even to extend PyMC3 we need to compose models that can be expressed as symbolic combinations of Theano's Ops. However, if we take a step back and think about Theano then it is apparent that neither the ODE solution nor its gradient w.r.t. the parameters can be expressed symbolically as combinations of Theano’s primitive Ops. Hence, from Theano’s perspective an ODE (and for that matter any other form of non-linear differential equation) is a non-differentiable black-box function. However, one might argue that if a numerical method is coded up in Theano (using, say, the `scan` Op), then it is possible to symbolically express the numerical method that evaluates the ODE states, and then we can easily use Theano’s automatic differentiation engine to obtain the gradients as well by differentiating through the numerical solver itself. I would like to point out that the former, obtaining the solution, is indeed possible this way, but the obtained gradient would be error-prone. Additionally, this amounts to completely ‘re-inventing the wheel’, as one would have to re-implement decades-old, sophisticated numerical algorithms from scratch in Theano.
Thus, in this tutorial I am going to present the alternative approach which consists of defining new [custom Theano Ops](http://deeplearning.net/software/theano_versions/dev/extending/extending_theano.html), extending Theano, that will wrap both the numerical solution and the vector-matrix product, $ \frac{\partial p(\boldsymbol{\theta}|\boldsymbol{Y})}{\partial \boldsymbol{X}}^T \frac{\partial \boldsymbol{X}}{\partial \boldsymbol{\theta}}$, often known as the _**vector-Jacobian product**_ (VJP) in the automatic differentiation literature. I would like to point out here that in the context of non-linear ODEs the term Jacobian is used to denote gradients of the ODE dynamics $\boldsymbol{f}$ w.r.t. the ODE states $X(t)$. Thus, to avoid confusion, from now on I will use the term _**vector-sensitivity product**_ (VSP) to denote the same quantity that the term VJP denotes.
I will start by introducing the forward sensitivity analysis.
## ODE sensitivity analysis
For a coupled ODE system $\frac{d X(t)}{dt} = \boldsymbol{f}(X(t),\boldsymbol{\theta})$, the local sensitivity of the solution to a parameter is defined by how much the solution would change with changes in the parameter, i.e. the sensitivity of the $k$-th state is, simply put, the time evolution of its gradient w.r.t. the $d$-th parameter. This quantity, denoted as $Z_{kd}(t)$, is given by
$$Z_{kd}(t)=\frac{d }{d t} \left\{\frac{\partial X_k (t)}{\partial \theta_d}\right\} = \sum_{i=1}^K \frac{\partial f_k}{\partial X_i (t)}\frac{\partial X_i (t)}{\partial \theta_d} + \frac{\partial f_k}{\partial \theta_d}.$$
Using forward sensitivity analysis we can obtain both the state $X(t)$ and its derivative w.r.t the parameters, at each time point, as the solution to an initial value problem by augmenting the original ODE system with the sensitivity equations $Z_{kd}$. The augmented ODE system $\big(X(t), Z(t)\big)$ can then be solved together using a chosen numerical method. The augmented ODE system needs the initial values for the sensitivity equations. All of these should be set to zero except the ones where the sensitivity of a state w.r.t. its own initial value is sought, that is $ \frac{\partial X_k(t)}{\partial X_k (0)} =1 $. Note that in order to solve this augmented system we have to embark in the tedious process of deriving $ \frac{\partial f_k}{\partial X_i (t)}$, also known as the Jacobian of an ODE, and $\frac{\partial f_k}{\partial \theta_d}$ terms. Thankfully, many ODE solvers calculate these terms and solve the augmented system when asked for by the user. An example would be the [SUNDIAL CVODES solver suite](https://computation.llnl.gov/projects/sundials/cvodes). A Python wrapper for CVODES can be found [here](https://jmodelica.org/assimulo/).
However, for this tutorial I will go ahead and derive the terms mentioned above manually, and solve the Lotka-Volterra ODEs along with the sensitivities in the following code block. The functions `jac` and `dfdp` below calculate $ \frac{\partial f_k}{\partial X_i (t)}$ and $\frac{\partial f_k}{\partial \theta_d}$ respectively for the Lotka-Volterra model. For convenience I have transformed the sensitivity equation into matrix form. Here I extended the solver code snippet above to include sensitivities when asked for.
```
n_states = 2
n_odeparams = 4
n_ivs = 2
class LotkaVolterraModel:
def __init__(self, n_states, n_odeparams, n_ivs, y0=None):
self._n_states = n_states
self._n_odeparams = n_odeparams
self._n_ivs = n_ivs
self._y0 = y0
def simulate(self, parameters, times):
return self._simulate(parameters, times, False)
def simulate_with_sensitivities(self, parameters, times):
return self._simulate(parameters, times, True)
def _simulate(self, parameters, times, sensitivities):
alpha, beta, gamma, delta, Xt0, Yt0 = [x for x in parameters]
def r(y, t, p):
X, Y = y
dX_dt = alpha * X - beta * X * Y
dY_dt = -gamma * Y + delta * X * Y
return dX_dt, dY_dt
if sensitivities:
def jac(y):
X, Y = y
ret = np.zeros((self._n_states, self._n_states))
ret[0, 0] = alpha - beta * Y
ret[0, 1] = -beta * X
ret[1, 0] = delta * Y
ret[1, 1] = -gamma + delta * X
return ret
def dfdp(y):
X, Y = y
ret = np.zeros(
(self._n_states, self._n_odeparams + self._n_ivs)
) # except the following entries
ret[
0, 0
] = X # \frac{\partial [\alpha X - \beta XY]}{\partial \alpha}, and so on...
ret[0, 1] = -X * Y
ret[1, 2] = -Y
ret[1, 3] = X * Y
return ret
def rhs(y_and_dydp, t, p):
y = y_and_dydp[0 : self._n_states]
dydp = y_and_dydp[self._n_states :].reshape(
(self._n_states, self._n_odeparams + self._n_ivs)
)
dydt = r(y, t, p)
d_dydp_dt = np.matmul(jac(y), dydp) + dfdp(y)
return np.concatenate((dydt, d_dydp_dt.reshape(-1)))
y0 = np.zeros((2 * (n_odeparams + n_ivs)) + n_states)
y0[6] = 1.0 # \frac{\partial [X]}{\partial Xt0} at t==0, and same below for Y
y0[13] = 1.0
y0[0:n_states] = [Xt0, Yt0]
result = odeint(rhs, y0, times, (parameters,), rtol=1e-6, atol=1e-5)
values = result[:, 0 : self._n_states]
dvalues_dp = result[:, self._n_states :].reshape(
(len(times), self._n_states, self._n_odeparams + self._n_ivs)
)
return values, dvalues_dp
else:
values = odeint(r, [Xt0, Yt0], times, (parameters,), rtol=1e-6, atol=1e-5)
return values
ode_model = LotkaVolterraModel(n_states, n_odeparams, n_ivs)
```
For this model I have set the relative and absolute tolerances to $10^{-6}$ and $10^{-5}$ respectively, as was suggested in the Stan tutorial. This will produce sufficiently accurate solutions. Further reducing the tolerances will increase accuracy but at the cost of increasing the computational time. A thorough discussion on the choice and use of a numerical method for solving the ODE is out of the scope of this tutorial. However, I must point out that the inaccuracies of the ODE solver do affect the likelihood and as a result the inference. This is more so the case for stiff systems. I would point interested readers to this nice blog article where this effect is discussed thoroughly for a [cardiac ODE model](https://mirams.wordpress.com/2018/10/17/ode-errors-and-optimisation/). There is also an emerging area of uncertainty quantification that attacks the problem of noise arising from the imprecision of numerical algorithms, [probabilistic numerics](http://probabilistic-numerics.org/). This is indeed an elegant framework to carry out inference while taking into account the errors coming from the numerical ODE solvers.
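As a quick, self-contained illustration of this point (using hypothetical parameter values chosen only for this check, not the inferred ones), one can compare the solutions returned under loose and tight tolerances:
```
# Compare Lotka-Volterra solutions under tight vs loose solver tolerances
# (the parameter values below are hypothetical, used only for illustration).
check_params = [0.55, 0.028, 0.80, 0.024]    # alpha, beta, gamma, delta
check_y0 = [33.0, 6.0]                       # X(0), Y(0)
check_times = np.arange(0, 21)

def lv_rhs(y, t, alpha, beta, gamma, delta):
    X, Y = y
    return [alpha * X - beta * X * Y, -gamma * Y + delta * X * Y]

tight = odeint(lv_rhs, check_y0, check_times, tuple(check_params), rtol=1e-8, atol=1e-8)
loose = odeint(lv_rhs, check_y0, check_times, tuple(check_params), rtol=1e-3, atol=1e-3)
print("max abs difference between the two solutions:", np.abs(tight - loose).max())
```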
## Custom ODE Op
In order to define the custom `Op` I have written down two `theano.Op` classes, `ODEGradop` and `ODEop`. `ODEop` essentially wraps the ODE solution and will be called by PyMC3. The `ODEGradop` wraps the numerical VSP and this op is then in turn used inside the `grad` method in the `ODEop` to return the VSP. Note that we pass in two functions, `state` and `numpy_vsp`, as arguments to the respective Ops. I will define these functions later. These functions act as shims through which we connect the Python code for the numerical solution of the state and the VSP to Theano, and thus to PyMC3.
```
class ODEGradop(theano.Op):
def __init__(self, numpy_vsp):
self._numpy_vsp = numpy_vsp
def make_node(self, x, g):
x = theano.tensor.as_tensor_variable(x)
g = theano.tensor.as_tensor_variable(g)
node = theano.Apply(self, [x, g], [g.type()])
return node
def perform(self, node, inputs_storage, output_storage):
x = inputs_storage[0]
g = inputs_storage[1]
out = output_storage[0]
out[0] = self._numpy_vsp(x, g) # get the numerical VSP
class ODEop(theano.Op):
def __init__(self, state, numpy_vsp):
self._state = state
self._numpy_vsp = numpy_vsp
def make_node(self, x):
x = theano.tensor.as_tensor_variable(x)
return theano.Apply(self, [x], [x.type()])
def perform(self, node, inputs_storage, output_storage):
x = inputs_storage[0]
out = output_storage[0]
out[0] = self._state(x) # get the numerical solution of ODE states
def grad(self, inputs, output_grads):
x = inputs[0]
g = output_grads[0]
grad_op = ODEGradop(self._numpy_vsp) # pass the VSP when asked for gradient
grad_op_apply = grad_op(x, g)
return [grad_op_apply]
```
I must point out that, with the way I have defined the custom ODE Ops above, there is the possibility that the ODE is solved twice for the same parameter values: once for the states and again for the VSP. To avoid this behaviour I have written a helper class which prevents this double evaluation.
```
class solveCached:
def __init__(self, times, n_params, n_outputs):
self._times = times
self._n_params = n_params
self._n_outputs = n_outputs
self._cachedParam = np.zeros(n_params)
self._cachedSens = np.zeros((len(times), n_outputs, n_params))
self._cachedState = np.zeros((len(times), n_outputs))
def __call__(self, x):
if np.all(x == self._cachedParam):
state, sens = self._cachedState, self._cachedSens
else:
state, sens = ode_model.simulate_with_sensitivities(x, times)
return state, sens
times = np.arange(0, 21) # number of measurement points (see below)
cached_solver = solveCached(times, n_odeparams + n_ivs, n_states)
```
### The ODE state & VSP evaluation
Most ODE systems of practical interest will have multiple states and thus the output of the solver, which I have denoted so far as $\boldsymbol{X}$, for a system with $K$ states solved on $T$ time points, would be a $T \times K$-dimensional matrix. For the Lotka-Volterra model the columns of this matrix represent the time evolution of the individual species concentrations. I flatten this matrix to a $TK$-dimensional vector $vec(\boldsymbol{X})$, and also rearrange the sensitivities accordingly to obtain the desired vector-matrix product. It is beneficial at this point to test the custom Op as described [here](http://deeplearning.net/software/theano_versions/dev/extending/extending_theano.html#how-to-test-it).
```
def state(x):
State, Sens = cached_solver(np.array(x, dtype=np.float64))
cached_solver._cachedState, cached_solver._cachedSens, cached_solver._cachedParam = (
State,
Sens,
x,
)
return State.reshape((2 * len(State),))
def numpy_vsp(x, g):
numpy_sens = cached_solver(np.array(x, dtype=np.float64))[1].reshape(
(n_states * len(times), len(x))
)
return numpy_sens.T.dot(g)
```
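As suggested above, it is worth sanity-checking the custom Op before using it for inference. Below is a minimal sketch of such a check (the parameter values are hypothetical): the gradient of an arbitrary scalar function of the ODE solution, obtained through the Op's VSP, is compared against a central finite-difference estimate, and the two should roughly agree up to solver and differencing error.
```
# Finite-difference sanity check of the custom Op's gradient
test_ode_op = ODEop(state, numpy_vsp)
x = theano.tensor.dvector("x")
cost = theano.tensor.sum(test_ode_op(x) ** 2)      # arbitrary scalar function of the solution
cost_fn = theano.function([x], cost)
grad_fn = theano.function([x], theano.grad(cost, x))

test_params = np.array([0.55, 0.028, 0.80, 0.024, 33.0, 6.0])  # hypothetical values
eps = 1e-4
fd_grad = np.zeros_like(test_params)
for i in range(len(test_params)):
    step = np.zeros_like(test_params)
    step[i] = eps
    fd_grad[i] = (cost_fn(test_params + step) - cost_fn(test_params - step)) / (2 * eps)

print("Op gradient      :", grad_fn(test_params))
print("Finite difference:", fd_grad)
```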
## The Hudson's Bay Company data
The Lotka-Volterra predator prey model has been used previously to successfully explain the dynamics of natural populations of predators and prey, such as the lynx and snowshoe hare data of the Hudson's Bay Company. This is the same data (that was shared [here](https://github.com/stan-dev/example-models/tree/master/knitr/lotka-volterra)) used in the Stan example and thus I will use this data-set as the experimental observations $\boldsymbol{Y}(t)$ to infer the parameters.
```
Year = np.arange(1900, 1921, 1)
# fmt: off
Lynx = np.array([4.0, 6.1, 9.8, 35.2, 59.4, 41.7, 19.0, 13.0, 8.3, 9.1, 7.4,
8.0, 12.3, 19.5, 45.7, 51.1, 29.7, 15.8, 9.7, 10.1, 8.6])
Hare = np.array([30.0, 47.2, 70.2, 77.4, 36.3, 20.6, 18.1, 21.4, 22.0, 25.4,
27.1, 40.3, 57.0, 76.6, 52.3, 19.5, 11.2, 7.6, 14.6, 16.2, 24.7])
# fmt: on
plt.figure(figsize=(15, 7.5))
plt.plot(Year, Lynx, color="b", lw=4, label="Lynx")
plt.plot(Year, Hare, color="g", lw=4, label="Hare")
plt.legend(fontsize=15)
plt.xlim([1900, 1920])
plt.xlabel("Year", fontsize=15)
plt.ylabel("Concentrations", fontsize=15)
plt.xticks(Year, rotation=45)
plt.title("Lynx (predator) - Hare (prey): oscillatory dynamics", fontsize=25);
```
## The probabilistic model
I have now got all the ingredients needed in order to define the probabilistic model in PyMC3. As I have mentioned previously I will set up the probabilistic model with the exact same likelihood and priors used in the Stan example. The observed data is defined as follows:
$$\log (\boldsymbol{Y(t)}) = \log (\boldsymbol{X(t)}) + \eta(t),$$
where $\eta(t)$ is assumed to be zero mean i.i.d Gaussian noise with an unknown standard deviation $\sigma$, that needs to be estimated. The above multiplicative (on the natural scale) noise model encodes a lognormal distribution as the likelihood:
$$\boldsymbol{Y(t)} \sim \mathcal{L}\mathcal{N}(\log (\boldsymbol{X(t)}), \sigma^2).$$
The following priors are then placed on the parameters:
$$
\begin{aligned}
x(0), y(0) &\sim \mathcal{L}\mathcal{N}(\log(10),1),\\
\alpha, \gamma &\sim \mathcal{N}(1,0.5),\\
\beta, \delta &\sim \mathcal{N}(0.05,0.05),\\
\sigma &\sim \mathcal{L}\mathcal{N}(-1,1).
\end{aligned}
$$
For an intuitive explanation, which I am omitting for brevity, regarding the choice of priors as well as the likelihood model, I would recommend the Stan example mentioned above. The above probabilistic model is defined in PyMC3 below. Note that the flattened state vector is reshaped to match the data dimensionality.
Finally, I use the `pm.sample` method to run NUTS by default and obtain $1500$ post warm-up samples from the posterior.
```
theano.config.exception_verbosity = "high"
theano.config.floatX = "float64"
# Define the data matrix
Y = np.vstack((Hare, Lynx)).T
# Now instantiate the theano custom ODE op
my_ODEop = ODEop(state, numpy_vsp)
# The probabilistic model
with pm.Model() as LV_model:
# Priors for unknown model parameters
alpha = pm.Normal("alpha", mu=1, sd=0.5)
beta = pm.Normal("beta", mu=0.05, sd=0.05)
gamma = pm.Normal("gamma", mu=1, sd=0.5)
delta = pm.Normal("delta", mu=0.05, sd=0.05)
xt0 = pm.Lognormal("xto", mu=np.log(10), sd=1)
yt0 = pm.Lognormal("yto", mu=np.log(10), sd=1)
sigma = pm.Lognormal("sigma", mu=-1, sd=1, shape=2)
# Forward model
all_params = pm.math.stack([alpha, beta, gamma, delta, xt0, yt0], axis=0)
ode_sol = my_ODEop(all_params)
forward = ode_sol.reshape(Y.shape)
# Likelihood
Y_obs = pm.Lognormal("Y_obs", mu=pm.math.log(forward), sd=sigma, observed=Y)
trace = pm.sample(1500, tune=1000, init="adapt_diag")
trace["diverging"].sum()
with LV_model:
pm.traceplot(trace);
import pandas as pd
summary = pm.summary(trace)
STAN_mus = [0.549, 0.028, 0.797, 0.024, 33.960, 5.949, 0.248, 0.252]
STAN_sds = [0.065, 0.004, 0.091, 0.004, 2.909, 0.533, 0.045, 0.044]
summary["STAN_mus"] = pd.Series(np.array(STAN_mus), index=summary.index)
summary["STAN_sds"] = pd.Series(np.array(STAN_sds), index=summary.index)
summary
```
These estimates are almost identical to those obtained in the Stan tutorial (see the last two columns above), which is what we can expect. Posterior predictives can be drawn as below.
```
ppc_samples = pm.sample_posterior_predictive(trace, samples=1000, model=LV_model)["Y_obs"]
mean_ppc = ppc_samples.mean(axis=0)
CriL_ppc = np.percentile(ppc_samples, q=2.5, axis=0)
CriU_ppc = np.percentile(ppc_samples, q=97.5, axis=0)
plt.figure(figsize=(15, 2 * (5)))
plt.subplot(2, 1, 1)
plt.plot(Year, Lynx, "o", color="b", lw=4, ms=10.5)
plt.plot(Year, mean_ppc[:, 1], color="b", lw=4)
plt.plot(Year, CriL_ppc[:, 1], "--", color="b", lw=2)
plt.plot(Year, CriU_ppc[:, 1], "--", color="b", lw=2)
plt.xlim([1900, 1920])
plt.ylabel("Lynx conc", fontsize=15)
plt.xticks(Year, rotation=45)
plt.subplot(2, 1, 2)
plt.plot(Year, Hare, "o", color="g", lw=4, ms=10.5, label="Observed")
plt.plot(Year, mean_ppc[:, 0], color="g", lw=4, label="mean of ppc")
plt.plot(Year, CriL_ppc[:, 0], "--", color="g", lw=2, label="credible intervals")
plt.plot(Year, CriU_ppc[:, 0], "--", color="g", lw=2)
plt.legend(fontsize=15)
plt.xlim([1900, 1920])
plt.xlabel("Year", fontsize=15)
plt.ylabel("Hare conc", fontsize=15)
plt.xticks(Year, rotation=45);
```
# Efficient exploration of the posterior landscape with SMC
It has been pointed out in several papers that the complex non-linear dynamics of an ODE results in a posterior landscape that is extremely difficult to navigate efficiently by many MCMC samplers. Thus, recently the curvature information of the posterior surface has been used to construct powerful geometrically aware samplers ([Mark Girolami and Ben Calderhead, 2011](https://rss.onlinelibrary.wiley.com/doi/epdf/10.1111/j.1467-9868.2010.00765.x)) that perform extremely well in ODE inference problems. Another set of ideas suggest breaking down a complex inference task into a sequence of simpler tasks. In essence the idea is to use sequential-importance-sampling to sample from an artificial sequence of increasingly complex distributions where the first in the sequence is a distribution that is easy to sample from, the prior, and the last in the sequence is the actual complex target distribution. The associated importance distribution is constructed by moving the set of particles sampled at the previous step using a Markov kernel, say for example the MH kernel.
A simple way of building the sequence of distributions is to use a temperature $\beta$, that is raised slowly from $0$ to $1$. Using this temperature variable $\beta$ we can write down the annealed intermediate distribution as
$$p_{\beta}(\boldsymbol{\theta}|\boldsymbol{y})\propto p(\boldsymbol{y}|\boldsymbol{\theta})^{\beta} p(\boldsymbol{\theta}).$$
Samplers that carry out sequential-importance-sampling from this artificial sequence of distributions, to avoid the difficult task of sampling directly from $p(\boldsymbol{\theta}|\boldsymbol{y})$, are known as Sequential Monte Carlo (SMC) samplers ([P Del Moral et al., 2006](https://rss.onlinelibrary.wiley.com/doi/full/10.1111/j.1467-9868.2006.00553.x)). The performance of these samplers is sensitive to the choice of the temperature schedule, that is, the set of user-defined increasing values of $\beta$ between $0$ and $1$. Fortunately, PyMC3 provides a version of the SMC sampler ([Jianye Ching and Yi-Chu Chen, 2007](https://ascelibrary.org/doi/10.1061/%28ASCE%290733-9399%282007%29133%3A7%28816%29)) that automatically figures out this temperature schedule. Moreover, PyMC3's SMC sampler does not require the gradient of the log target density. As a result it is extremely easy to use this sampler for inference in ODE models. In the next example I will apply this SMC sampler to estimate the parameters of the Fitzhugh-Nagumo model.
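To build some intuition for these tempered distributions, here is a small stand-alone toy example (a 1-D Gaussian prior and likelihood, unrelated to the ODE models above) showing how $p_{\beta}$ interpolates between the prior at $\beta=0$ and the posterior at $\beta=1$:
```
# Toy illustration of tempering: p_beta(theta|y) is proportional to p(y|theta)^beta * p(theta)
from scipy.stats import norm

theta_grid = np.linspace(-4, 6, 400)
prior_pdf = norm(0, 2).pdf(theta_grid)     # p(theta): a broad prior
like_vals = norm(3, 0.5).pdf(theta_grid)   # p(y|theta) viewed as a function of theta

plt.figure(figsize=(10, 4))
for beta_temp in [0.0, 0.1, 0.3, 1.0]:
    unnorm = like_vals ** beta_temp * prior_pdf
    plt.plot(theta_grid, unnorm / np.trapz(unnorm, theta_grid), lw=2, label=rf"$\beta={beta_temp}$")
plt.xlabel(r"$\theta$")
plt.ylabel("tempered density")
plt.legend();
```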
## The Fitzhugh-Nagumo model
The Fitzhugh-Nagumo model given by
$$
\begin{aligned}
\frac{dV}{dt}&=(V - \frac{V^3}{3} + R)c\\
\frac{dR}{dt}&=\frac{-(V-a+bR)}{c},
\end{aligned}
$$
consisting of a membrane voltage variable $V(t)$ and a recovery variable $R(t)$ is a two-dimensional simplification of the [Hodgkin-Huxley](http://www.scholarpedia.org/article/Conductance-based_models) model of spike (action potential) generation in squid giant axons, where $a$, $b$, $c$ are the model parameters. This model produces rich dynamics and, as a result, a complex geometry of the posterior surface that often leads to poor performance of many MCMC samplers. As a result this model was used to test the efficacy of the discussed geometric MCMC scheme and since then has been used to benchmark other novel MCMC methods. Following [Mark Girolami and Ben Calderhead, 2011](https://rss.onlinelibrary.wiley.com/doi/epdf/10.1111/j.1467-9868.2010.00765.x) I will also use artificially generated data from this model to set up the inference task for estimating $\boldsymbol{\theta}=(a,b,c)$.
```
class FitzhughNagumoModel:
def __init__(self, times, y0=None):
self._y0 = np.array([-1, 1], dtype=np.float64)
self._times = times
def _simulate(self, parameters, times):
a, b, c = [float(x) for x in parameters]
def rhs(y, t, p):
V, R = y
dV_dt = (V - V ** 3 / 3 + R) * c
dR_dt = (V - a + b * R) / -c
return dV_dt, dR_dt
values = odeint(rhs, self._y0, times, (parameters,), rtol=1e-6, atol=1e-6)
return values
def simulate(self, x):
return self._simulate(x, self._times)
```
## Simulated Data
For this example I am going to use simulated data; that is, I will generate noisy traces from the forward model defined above, with the parameters $\theta$ set to $(0.2,0.2,3)$ and corrupted by i.i.d Gaussian noise with a standard deviation $\sigma=0.5$. The initial values are set to $V(0)=-1$ and $R(0)=1$ respectively. Again following [Mark Girolami and Ben Calderhead, 2011](https://rss.onlinelibrary.wiley.com/doi/epdf/10.1111/j.1467-9868.2010.00765.x) I will assume that the initial values are known. These parameter values push the model into the oscillatory regime.
```
n_states = 2
n_times = 200
true_params = [0.2, 0.2, 3.0]
noise_sigma = 0.5
FN_solver_times = np.linspace(0, 20, n_times)
ode_model = FitzhughNagumoModel(FN_solver_times)
sim_data = ode_model.simulate(true_params)
np.random.seed(42)
Y_sim = sim_data + np.random.randn(n_times, n_states) * noise_sigma
plt.figure(figsize=(15, 7.5))
plt.plot(FN_solver_times, sim_data[:, 0], color="darkblue", lw=4, label=r"$V(t)$")
plt.plot(FN_solver_times, sim_data[:, 1], color="darkgreen", lw=4, label=r"$R(t)$")
plt.plot(FN_solver_times, Y_sim[:, 0], "o", color="darkblue", ms=4.5, label="Noisy traces")
plt.plot(FN_solver_times, Y_sim[:, 1], "o", color="darkgreen", ms=4.5)
plt.legend(fontsize=15)
plt.xlabel("Time", fontsize=15)
plt.ylabel("Values", fontsize=15)
plt.title("Fitzhugh-Nagumo Action Potential Model", fontsize=25);
```
## Define a non-differentiable black-box op using Theano @as_op
Remember that, as I mentioned, the SMC sampler does not require gradients; the same is true of other samplers supported in PyMC3, such as Metropolis-Hastings and the Slice sampler. For all these gradient-free samplers I will show a simple and quick way of wrapping the forward model, i.e. the ODE solution, in Theano. All we have to do is use the decorator `as_op`, which converts a Python function into a basic Theano Op. Using the `as_op` decorator we also tell Theano that we have three parameters, each being a Theano scalar. The output is then a Theano matrix whose columns are the state vectors.
```
import theano.tensor as tt
from theano.compile.ops import as_op
@as_op(itypes=[tt.dscalar, tt.dscalar, tt.dscalar], otypes=[tt.dmatrix])
def th_forward_model(param1, param2, param3):
param = [param1, param2, param3]
th_states = ode_model.simulate(param)
return th_states
```
## Generative model
Since I have corrupted the original traces with i.i.d. Gaussian noise, the likelihood is given by
$$p(\boldsymbol{Y}|\boldsymbol{\theta},\sigma) = \prod_{i=1}^T \mathcal{N}\big(\boldsymbol{Y}(t_i)\,\big|\,\boldsymbol{X}(t_i), \sigma^2\mathbb{I}\big),$$
where $\mathbb{I}\in \mathbb{R}^{K \times K}$. We place a Gamma, Normal, Uniform prior on $(a,b,c)$ and a HalfNormal prior on $\sigma$ as follows:
$$
\begin{aligned}
a & \sim \mathcal{Gamma}(2,1),\\
b & \sim \mathcal{N}(0,1),\\
c & \sim \mathcal{U}(0.1,10),\\
\sigma & \sim \mathcal{H}(1).
\end{aligned}
$$
Notice how I have used the `start` argument for this example. Just like `pm.sample`, `pm.sample_smc` has a number of settings, but I found the default ones good enough for simple models such as this one.
```
draws = 1000
with pm.Model() as FN_model:
a = pm.Gamma("a", alpha=2, beta=1)
b = pm.Normal("b", mu=0, sd=1)
c = pm.Uniform("c", lower=0.1, upper=10)
sigma = pm.HalfNormal("sigma", sd=1)
forward = th_forward_model(a, b, c)
cov = np.eye(2) * sigma ** 2
Y_obs = pm.MvNormal("Y_obs", mu=forward, cov=cov, observed=Y_sim)
startsmc = {v.name: np.random.uniform(1e-3, 2, size=draws) for v in FN_model.free_RVs}
trace_FN = pm.sample_smc(draws, start=startsmc)
pm.plot_posterior(trace_FN, kind="hist", bins=30, color="seagreen");
```
## Inference summary
With `pm.SMC`, do I get similar performance to geometric MCMC samplers (see [Mark Girolami and Ben Calderhead, 2011](https://rss.onlinelibrary.wiley.com/doi/epdf/10.1111/j.1467-9868.2010.00765.x))? I think so!
```
results = [
pm.summary(trace_FN, ["a"]),
pm.summary(trace_FN, ["b"]),
pm.summary(trace_FN, ["c"]),
pm.summary(trace_FN, ["sigma"]),
]
results = pd.concat(results)
true_params.append(noise_sigma)
results["True values"] = pd.Series(np.array(true_params), index=results.index)
true_params.pop()
results
```
## Reconstruction of the phase portrait
It's good to check that we can reconstruct the (famous) phase portrait for this model from the obtained samples.
```
params = np.array([trace_FN.get_values("a"), trace_FN.get_values("b"), trace_FN.get_values("c")]).T
params.shape
new_values = []
for ind in range(len(params)):
ppc_sol = ode_model.simulate(params[ind])
new_values.append(ppc_sol)
new_values = np.array(new_values)
mean_values = np.mean(new_values, axis=0)
plt.figure(figsize=(15, 7.5))
plt.plot(
mean_values[:, 0],
mean_values[:, 1],
color="black",
lw=4,
label="Inferred (mean of sampled) phase portrait",
)
plt.plot(
sim_data[:, 0], sim_data[:, 1], "--", color="#ff7f0e", lw=4, ms=6, label="True phase portrait"
)
plt.legend(fontsize=15)
plt.xlabel(r"$V(t)$", fontsize=15)
plt.ylabel(r"$R(t)$", fontsize=15);
```
# Perspectives
### Using some other ODE models
I have tried to keep everything as general as possible. So, my custom ODE Op, the state and VSP evaluator, as well as the cached solver, are not tied to a specific ODE model. Thus, to use any other ODE model, one only needs to implement a `simulate_with_sensitivities` method for that model, as sketched below.
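For example, a skeleton for such a model could look like the following (the shapes in the comments mirror the `LotkaVolterraModel` above; this is only an interface sketch, not a working solver):
```
class MyODEModel:
    """Skeleton of a user-supplied ODE model compatible with the custom Op above."""

    def __init__(self, n_states, n_odeparams, n_ivs, y0=None):
        self._n_states = n_states
        self._n_odeparams = n_odeparams
        self._n_ivs = n_ivs
        self._y0 = y0

    def simulate(self, parameters, times):
        # Should return an array of shape (len(times), n_states)
        raise NotImplementedError

    def simulate_with_sensitivities(self, parameters, times):
        # Should return (values, dvalues_dp), where values has shape
        # (len(times), n_states) and dvalues_dp has shape
        # (len(times), n_states, n_odeparams + n_ivs)
        raise NotImplementedError
```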
### Other forms of differential equation (DDE, DAE, PDE)
I hope the two examples have elucidated the applicability of PyMC3 with regard to fitting ODE models. Although ODEs are the most fundamental constituent of a mathematical model, there are indeed other forms of dynamical systems, such as a delay differential equation (DDE), a differential algebraic equation (DAE) and a partial differential equation (PDE), whose parameter estimation is equally important. The SMC, and for that matter any other non-gradient sampler supported by PyMC3, can be used to fit all these forms of differential equation, of course using the `as_op`. However, just like an ODE we can solve augmented systems of DDE/DAE along with their sensitivity equations. The sensitivity equations for a DDE and a DAE can be found in this recent paper, [C Rackauckas et al., 2018](https://arxiv.org/abs/1812.01892) (Equations 9 and 10). Thus we can easily apply the NUTS sampler to these models.
### Stan already supports ODEs
Well, there are many problems where I believe the SMC sampler would be more suitable than NUTS, and thus it's good to have that option.
### Model selection
Most ODE inference literature since [Vladislav Vyshemirsky and Mark Girolami, 2008](https://academic.oup.com/bioinformatics/article/24/6/833/192524) recommends the use of Bayes factors for the purpose of model selection/comparison. This involves the calculation of the marginal likelihood, which is a much more nuanced topic, and I will refrain from discussing it here. Fortunately, the SMC sampler calculates the marginal likelihood as a by-product, so this can be used for obtaining Bayes factors. Follow PyMC3's other tutorials for further information regarding how to obtain the marginal likelihood after running the SMC sampler.
Since we generally frame ODE inference as a regression problem (along with the i.i.d. measurement noise assumption in most cases), we can straight away use any of the supported information criteria, such as the widely applicable information criterion (WAIC), irrespective of which sampler is used for inference. See PyMC3's API for further information regarding WAIC.
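As a quick sketch (the exact call signature may differ between PyMC3 versions), WAIC for the Fitzhugh-Nagumo fit above could be computed along these lines:
```
with FN_model:
    print(pm.waic(trace_FN))
```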
### Other AD packages
Although this is a slight digression, I would still like to point out my observations on this issue. The approach that I have presented here for embedding an ODE (it also extends to DDEs/DAEs) as a custom Op can be trivially carried forward to other AD packages such as TensorFlow and PyTorch. I have been able to use TensorFlow's [py_func](https://www.tensorflow.org/api_docs/python/tf/py_func) to build a custom TensorFlow ODE Op and then use that in the [Edward](http://edwardlib.org/) ppl. I would recommend [this](https://pytorch.org/tutorials/advanced/numpy_extensions_tutorial.html) tutorial, for writing PyTorch extensions, to those who are interested in using the [Pyro](http://pyro.ai/) ppl.
```
%load_ext watermark
%watermark -n -u -v -iv -w
```
# Quantum Teleportation
This notebook demonstrates quantum teleportation. We first use Qiskit's built-in simulators to test our quantum circuit, and then try it out on a real quantum computer.
## 1. Overview <a id='overview'></a>
Alice wants to send quantum information to Bob. Specifically, suppose she wants to send the qubit state
$\vert\psi\rangle = \alpha\vert0\rangle + \beta\vert1\rangle$.
This entails passing on information about $\alpha$ and $\beta$ to Bob.
There exists a theorem in quantum mechanics which states that you cannot simply make an exact copy of an unknown quantum state. This is known as the no-cloning theorem. As a result of this we can see that Alice can't simply generate a copy of $\vert\psi\rangle$ and give the copy to Bob. We can only copy classical states (not superpositions).
However, by taking advantage of two classical bits and an entangled qubit pair, Alice can transfer her state $\vert\psi\rangle$ to Bob. We call this teleportation because, at the end, Bob will have $\vert\psi\rangle$ and Alice won't anymore.
## 2. The Quantum Teleportation Protocol <a id='how'></a>
To transfer a quantum bit, Alice and Bob must use a third party (Telamon) to send them an entangled qubit pair. Alice then performs some operations on her qubit, sends the results to Bob over a classical communication channel, and Bob then performs some operations on his end to receive Alice’s qubit.

We will describe the steps on a quantum circuit below. Here, no qubits are actually ‘sent’; you’ll just have to imagine that part!
First we set up our session:
```
# Do the necessary imports
import numpy as np
from qiskit import QuantumCircuit, QuantumRegister, ClassicalRegister
from qiskit import IBMQ, Aer, transpile
from qiskit.visualization import plot_histogram, plot_bloch_multivector, array_to_latex
from qiskit.extensions import Initialize
from qiskit.result import marginal_counts
from qiskit.quantum_info import random_statevector
```
and create our quantum circuit:
```
## SETUP
# Protocol uses 3 qubits and 2 classical bits in 2 different registers
qr = QuantumRegister(3, name="q") # Protocol uses 3 qubits
crz = ClassicalRegister(1, name="crz") # and 2 classical bits
crx = ClassicalRegister(1, name="crx") # in 2 different registers
teleportation_circuit = QuantumCircuit(qr, crz, crx)
```
#### Step 1
A third party, Telamon, creates an entangled pair of qubits and gives one to Bob and one to Alice.
The pair Telamon creates is a special pair called a Bell pair. In quantum circuit language, the way to create a Bell pair between two qubits is to first transfer one of them to the X-basis ($|+\rangle$ and $|-\rangle$) using a Hadamard gate, and then to apply a CNOT gate onto the other qubit controlled by the one in the X-basis.
```
def create_bell_pair(qc, a, b):
"""Creates a bell pair in qc using qubits a & b"""
qc.h(a) # Put qubit a into state |+>
qc.cx(a,b) # CNOT with a as control and b as target
## SETUP
# Protocol uses 3 qubits and 2 classical bits in 2 different registers
qr = QuantumRegister(3, name="q")
crz, crx = ClassicalRegister(1, name="crz"), ClassicalRegister(1, name="crx")
teleportation_circuit = QuantumCircuit(qr, crz, crx)
## STEP 1
# In our case, Telamon entangles qubits q1 and q2
# Let's apply this to our circuit:
create_bell_pair(teleportation_circuit, 1, 2)
# And view the circuit so far:
teleportation_circuit.draw()
```
Let's say Alice owns $q_1$ and Bob owns $q_2$ after they part ways.
#### Step 2
Alice applies a CNOT gate to $q_1$, controlled by $\vert\psi\rangle$ (the qubit she is trying to send Bob). Then Alice applies a Hadamard gate to $|\psi\rangle$. In our quantum circuit, the qubit ($|\psi\rangle$) Alice is trying to send is $q_0$:
```
def alice_gates(qc, psi, a):
qc.cx(psi, a)
qc.h(psi)
## SETUP
# Protocol uses 3 qubits and 2 classical bits in 2 different registers
qr = QuantumRegister(3, name="q")
crz, crx = ClassicalRegister(1, name="crz"), ClassicalRegister(1, name="crx")
teleportation_circuit = QuantumCircuit(qr, crz, crx)
## STEP 1
create_bell_pair(teleportation_circuit, 1, 2)
## STEP 2
teleportation_circuit.barrier() # Use barrier to separate steps
alice_gates(teleportation_circuit, 0, 1)
teleportation_circuit.draw()
```
#### Step 3
Next, Alice applies a measurement to both qubits that she owns, $q_1$ and $\vert\psi\rangle$, and stores this result in two classical bits. She then sends these two bits to Bob.
```
def measure_and_send(qc, a, b):
"""Measures qubits a & b and 'sends' the results to Bob"""
qc.barrier()
qc.measure(a,0)
qc.measure(b,1)
## SETUP
# Protocol uses 3 qubits and 2 classical bits in 2 different registers
qr = QuantumRegister(3, name="q")
crz, crx = ClassicalRegister(1, name="crz"), ClassicalRegister(1, name="crx")
teleportation_circuit = QuantumCircuit(qr, crz, crx)
## STEP 1
create_bell_pair(teleportation_circuit, 1, 2)
## STEP 2
teleportation_circuit.barrier() # Use barrier to separate steps
alice_gates(teleportation_circuit, 0, 1)
## STEP 3
measure_and_send(teleportation_circuit, 0 ,1)
teleportation_circuit.draw()
```
#### Step 4
Bob, who already has the qubit $q_2$, then applies the following gates depending on the state of the classical bits:
00 $\rightarrow$ Do nothing
01 $\rightarrow$ Apply $X$ gate
10 $\rightarrow$ Apply $Z$ gate
11 $\rightarrow$ Apply $ZX$ gate
(*Note that this transfer of information is purely classical*.)
```
# This function takes a QuantumCircuit (qc), integer (qubit)
# and ClassicalRegisters (crz & crx) to decide which gates to apply
def bob_gates(qc, qubit, crz, crx):
# Here we use c_if to control our gates with a classical
# bit instead of a qubit
qc.x(qubit).c_if(crx, 1) # Apply gates if the registers
qc.z(qubit).c_if(crz, 1) # are in the state '1'
## SETUP
# Protocol uses 3 qubits and 2 classical bits in 2 different registers
qr = QuantumRegister(3, name="q")
crz, crx = ClassicalRegister(1, name="crz"), ClassicalRegister(1, name="crx")
teleportation_circuit = QuantumCircuit(qr, crz, crx)
## STEP 1
create_bell_pair(teleportation_circuit, 1, 2)
## STEP 2
teleportation_circuit.barrier() # Use barrier to separate steps
alice_gates(teleportation_circuit, 0, 1)
## STEP 3
measure_and_send(teleportation_circuit, 0, 1)
## STEP 4
teleportation_circuit.barrier() # Use barrier to separate steps
bob_gates(teleportation_circuit, 2, crz, crx)
teleportation_circuit.draw()
```
And voila! At the end of this protocol, Alice's qubit has now been teleported to Bob.
## 3. Simulating the Teleportation Protocol <a id='simulating'></a>
### 3.1 How Will We Test the Protocol on a Quantum Computer? <a id='testing'></a>
In this notebook, we will initialize Alice's qubit in a random state $\vert\psi\rangle$ (`psi`). This state will be created using an `Initialize` gate on $|q_0\rangle$. In this chapter we use the function `random_statevector` to choose `psi` for us, but feel free to set `psi` to any qubit state you want.
```
# Create random 1-qubit state
psi = random_statevector(2)
# Display it nicely
display(array_to_latex(psi, prefix="|\\psi\\rangle ="))
# Show it on a Bloch sphere
plot_bloch_multivector(psi)
```
Let's create our initialization instruction to create $|\psi\rangle$ from the state $|0\rangle$:
```
init_gate = Initialize(psi)
init_gate.label = "init"
```
(`Initialize` is technically not a gate since it contains a reset operation, and so is not reversible. We call it an 'instruction' instead). If the quantum teleportation circuit works, then at the end of the circuit the qubit $|q_2\rangle$ will be in this state. We will check this using the statevector simulator.
### 3.2 Using the Simulated Statevector <a id='simulating-sv'></a>
We can use the Aer simulator to verify our qubit has been teleported.
```
## SETUP
qr = QuantumRegister(3, name="q") # Protocol uses 3 qubits
crz = ClassicalRegister(1, name="crz") # and 2 classical registers
crx = ClassicalRegister(1, name="crx")
qc = QuantumCircuit(qr, crz, crx)
## STEP 0
# First, let's initialize Alice's q0
qc.append(init_gate, [0])
qc.barrier()
## STEP 1
# Now begins the teleportation protocol
create_bell_pair(qc, 1, 2)
qc.barrier()
## STEP 2
# Send q1 to Alice and q2 to Bob
alice_gates(qc, 0, 1)
## STEP 3
# Alice then sends her classical bits to Bob
measure_and_send(qc, 0, 1)
## STEP 4
# Bob decodes qubits
bob_gates(qc, 2, crz, crx)
# Display the circuit
qc.draw()
```
We can see below, using the statevector obtained from the aer simulator, that the state of $|q_2\rangle$ is the same as the state $|\psi\rangle$ we created above, while the states of $|q_0\rangle$ and $|q_1\rangle$ have been collapsed to either $|0\rangle$ or $|1\rangle$. The state $|\psi\rangle$ has been teleported from qubit 0 to qubit 2.
```
sim = Aer.get_backend('aer_simulator')
qc.save_statevector()
out_vector = sim.run(qc).result().get_statevector()
plot_bloch_multivector(out_vector)
```
You can run this cell a few times to make sure. You may notice that the qubits 0 & 1 change states, but qubit 2 is always in the state $|\psi\rangle$.
### 3.3 Using the Simulated Counts <a id='simulating-fc'></a>
Quantum teleportation is designed to send qubits between two parties. We do not have the hardware to demonstrate this, but we can demonstrate that the gates perform the correct transformations on a single quantum chip. Here we again use the aer simulator to simulate how we might test our protocol.
On a real quantum computer, we would not be able to sample the statevector, so if we wanted to check our teleportation circuit is working, we need to do things slightly differently. The `Initialize` instruction first performs a reset, setting our qubit to the state $|0\rangle$. It then applies gates to turn our $|0\rangle$ qubit into the state $|\psi\rangle$:
$$ |0\rangle \xrightarrow{\text{Initialize gates}} |\psi\rangle $$
Since all quantum gates are reversible, we can find the inverse of these gates using:
```
inverse_init_gate = init_gate.gates_to_uncompute()
```
This operation has the property:
$$ |\psi\rangle \xrightarrow{\text{Inverse Initialize gates}} |0\rangle $$
To prove the qubit $|q_0\rangle$ has been teleported to $|q_2\rangle$, if we do this inverse initialization on $|q_2\rangle$, we expect to measure $|0\rangle$ with certainty. We do this in the circuit below:
```
## SETUP
qr = QuantumRegister(3, name="q") # Protocol uses 3 qubits
crz = ClassicalRegister(1, name="crz") # and 2 classical registers
crx = ClassicalRegister(1, name="crx")
qc = QuantumCircuit(qr, crz, crx)
## STEP 0
# First, let's initialize Alice's q0
qc.append(init_gate, [0])
qc.barrier()
## STEP 1
# Now begins the teleportation protocol
create_bell_pair(qc, 1, 2)
qc.barrier()
## STEP 2
# Send q1 to Alice and q2 to Bob
alice_gates(qc, 0, 1)
## STEP 3
# Alice then sends her classical bits to Bob
measure_and_send(qc, 0, 1)
## STEP 4
# Bob decodes qubits
bob_gates(qc, 2, crz, crx)
## STEP 5
# reverse the initialization process
qc.append(inverse_init_gate, [2])
# Display the circuit
qc.draw()
```
We can see the `inverse_init_gate` appearing, labelled 'disentangler' on the circuit diagram. Finally, we measure the third qubit and store the result in the third classical bit:
```
# Need to add a new ClassicalRegister
# to see the result
cr_result = ClassicalRegister(1)
qc.add_register(cr_result)
qc.measure(2,2)
qc.draw()
```
and we run our experiment:
```
t_qc = transpile(qc, sim)
t_qc.save_statevector()
counts = sim.run(t_qc).result().get_counts()
qubit_counts = [marginal_counts(counts, [qubit]) for qubit in range(3)]
plot_histogram(qubit_counts)
```
We can see we have a 100% chance of measuring $q_2$ (the purple bar in the histogram) in the state $|0\rangle$. This is the expected result, and indicates the teleportation protocol has worked properly.
## 4. Understanding Quantum Teleportation <a id="understanding-qt"></a>
Now that you have worked through the implementation of quantum teleportation, it is time to understand the mathematics behind the protocol.
#### Step 1
Quantum Teleportation begins with the fact that Alice needs to transmit $|\psi\rangle = \alpha|0\rangle + \beta|1\rangle$ (a random qubit) to Bob. She doesn't know the state of the qubit. For this, Alice and Bob take the help of a third party (Telamon). Telamon prepares a pair of entangled qubits for Alice and Bob. The entangled qubits could be written in Dirac Notation as:
$$ |e \rangle = \frac{1}{\sqrt{2}} (|00\rangle + |11\rangle) $$
Alice and Bob each possess one qubit of the entangled pair (denoted as A and B respectively),
$$|e\rangle = \frac{1}{\sqrt{2}} (|0\rangle_A |0\rangle_B + |1\rangle_A |1\rangle_B) $$
This creates a three qubit quantum system where Alice has the first two qubits and Bob the last one.
$$ \begin{aligned}
|\psi\rangle \otimes |e\rangle &= \frac{1}{\sqrt{2}} (\alpha |0\rangle \otimes (|00\rangle + |11\rangle) + \beta |1\rangle \otimes (|00\rangle + |11\rangle))\\
&= \frac{1}{\sqrt{2}} (\alpha|000\rangle + \alpha|011\rangle + \beta|100\rangle + \beta|111\rangle)
\end{aligned}$$
#### Step 2
Now, according to the protocol, Alice applies a CNOT gate to her two qubits, followed by a Hadamard gate on the first qubit. This results in the state:
$$
\begin{aligned} &(H \otimes I \otimes I) (CNOT \otimes I) (|\psi\rangle \otimes |e\rangle)\\
&=(H \otimes I \otimes I) (CNOT \otimes I) \frac{1}{\sqrt{2}} (\alpha|000\rangle + \alpha|011\rangle + \beta|100\rangle + \beta|111\rangle) \\
&= (H \otimes I \otimes I) \frac{1}{\sqrt{2}} (\alpha|000\rangle + \alpha|011\rangle + \beta|110\rangle + \beta|101\rangle) \\
&= \frac{1}{2} (\alpha(|000\rangle + |011\rangle + |100\rangle + |111\rangle) + \beta(|010\rangle + |001\rangle - |110\rangle - |101\rangle)) \\
\end{aligned}
$$
Which can then be separated and written as:
$$
\begin{aligned}
= \frac{1}{2}(
& \phantom{+} |00\rangle (\alpha|0\rangle + \beta|1\rangle) \hphantom{\quad )} \\
& + |01\rangle (\alpha|1\rangle + \beta|0\rangle) \hphantom{\quad )}\\[4pt]
& + |10\rangle (\alpha|0\rangle - \beta|1\rangle) \hphantom{\quad )}\\[4pt]
& + |11\rangle (\alpha|1\rangle - \beta|0\rangle) \quad )\\
\end{aligned}
$$
#### Step 3
Alice measures the first two qubits (which she owns) and sends the outcomes to Bob as two classical bits. The result she obtains is always one of the four standard basis states $|00\rangle, |01\rangle, |10\rangle,$ and $|11\rangle$, each with equal probability.
On the basis of her measurement, Bob's state will be projected to,
$$
|00\rangle \rightarrow (\alpha|0\rangle + \beta|1\rangle)\\
|01\rangle \rightarrow (\alpha|1\rangle + \beta|0\rangle)\\
|10\rangle \rightarrow (\alpha|0\rangle - \beta|1\rangle)\\
|11\rangle \rightarrow (\alpha|1\rangle - \beta|0\rangle)
$$
#### Step 4
Bob, on receiving the bits from Alice, knows he can obtain the original state $|\psi\rangle$ by applying appropriate transformations on his qubit that was once part of the entangled pair.
The transformations he needs to apply are:
$$
\begin{array}{c c c}
\mbox{Bob's State} & \mbox{Bits Received} & \mbox{Gate Applied} \\
(\alpha|0\rangle + \beta|1\rangle) & 00 & I \\
(\alpha|1\rangle + \beta|0\rangle) & 01 & X \\
(\alpha|0\rangle - \beta|1\rangle) & 10 & Z \\
(\alpha|1\rangle - \beta|0\rangle) & 11 & ZX
\end{array}
$$
After this step Bob will have successfully reconstructed Alice's state.
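For reference, this bit-to-gate table is exactly what a classically controlled decoder implements in Qiskit. The sketch below shows one plausible shape for the `bob_gates` function used in the circuits above; it is an illustrative assumption, not necessarily the exact definition from earlier in this chapter.
```python
# Illustrative sketch (assumed, not necessarily the chapter's exact definition):
# apply X and/or Z to Bob's qubit depending on Alice's classical bits.
def bob_gates_sketch(qc, qubit, crz, crx):
    qc.x(qubit).c_if(crx, 1)  # second classical bit = 1  ->  apply X
    qc.z(qubit).c_if(crz, 1)  # first classical bit = 1   ->  apply Z
```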
## 5. Teleportation on a Real Quantum Computer <a id='real_qc'></a>
### 5.1 IBM hardware and Deferred Measurement <a id='deferred-measurement'></a>
The IBM quantum computers currently do not support instructions after measurements, meaning we cannot run the quantum teleportation circuit in its current form on real hardware. Fortunately, this does not limit our ability to perform any computations, due to the _deferred measurement principle_ discussed in chapter 4.4 of [1]. The principle states that any measurement can be postponed until the end of the circuit, i.e. we can move all the measurements to the end, and we should see the same results.

Any benefits of measuring early are hardware related: If we can measure early, we may be able to reuse qubits, or reduce the amount of time our qubits are in their fragile superposition. In this example, the early measurement in quantum teleportation would have allowed us to transmit a qubit state without a direct quantum communication channel.
While moving the gates allows us to demonstrate the "teleportation" circuit on real hardware, it should be noted that the benefit of the teleportation process (transferring quantum states via classical channels) is lost.
Let us re-write the `bob_gates` function to `new_bob_gates`:
```
def new_bob_gates(qc, a, b, c):
qc.cx(b, c)
qc.cz(a, c)
```
And create our new circuit:
```
qc = QuantumCircuit(3,1)
# First, let's initialize Alice's q0
qc.append(init_gate, [0])
qc.barrier()
# Now begins the teleportation protocol
create_bell_pair(qc, 1, 2)
qc.barrier()
# Send q1 to Alice and q2 to Bob
alice_gates(qc, 0, 1)
qc.barrier()
# Alice sends classical bits to Bob
new_bob_gates(qc, 0, 1, 2)
# We undo the initialization process
qc.append(inverse_init_gate, [2])
# See the results, we only care about the state of qubit 2
qc.measure(2,0)
# View the results:
qc.draw()
```
### 5.2 Executing <a id='executing'></a>
```
# First, see what devices we are allowed to use by loading our saved accounts
IBMQ.load_account()
provider = IBMQ.get_provider(hub='ibm-q')
# get the least-busy backend at IBM and run the quantum circuit there
from qiskit.providers.ibmq import least_busy
from qiskit.tools.monitor import job_monitor
backend = least_busy(provider.backends(filters=lambda b: b.configuration().n_qubits >= 3 and
not b.configuration().simulator and b.status().operational==True))
t_qc = transpile(qc, backend, optimization_level=3)
job = backend.run(t_qc)
job_monitor(job) # displays job status under cell
# Get the results and display them
exp_result = job.result()
exp_counts = exp_result.get_counts(qc)
print(exp_counts)
plot_histogram(exp_counts)
```
As we see here, there are a few results in which we measured $|1\rangle$. These arise due to errors in the gates and the qubits. In contrast, our simulator in the earlier part of the notebook had zero errors in its gates, and allowed error-free teleportation.
```
print(f"The experimental error rate : {exp_counts['1']*100/sum(exp_counts.values()):.3f}%")
```
## 6. References <a id='references'></a>
[1] M. Nielsen and I. Chuang, Quantum Computation and Quantum Information, Cambridge Series on Information and the Natural Sciences (Cambridge University Press, Cambridge, 2000).
[2] Eleanor Rieffel and Wolfgang Polak, Quantum Computing: a Gentle Introduction (The MIT Press Cambridge England, Massachusetts, 2011).
```
import qiskit.tools.jupyter
%qiskit_version_table
```
# Train convolutional network for sentiment analysis.
Based on
"Convolutional Neural Networks for Sentence Classification" by Yoon Kim
http://arxiv.org/pdf/1408.5882v2.pdf
`CNN-non-static` gets to 82.1% after 61 epochs with the following settings:
embedding_dim = 20
filter_sizes = (3, 4)
num_filters = 3
dropout_prob = (0.7, 0.8)
hidden_dims = 100
`CNN-rand` gets to 78-79% after 7-8 epochs with the following settings:
embedding_dim = 20
filter_sizes = (3, 4)
num_filters = 150
dropout_prob = (0.25, 0.5)
hidden_dims = 150
`CNN-static` gets to 75.4% after 7 epochs with the following settings:
embedding_dim = 100
filter_sizes = (3, 4)
num_filters = 150
dropout_prob = (0.25, 0.5)
hidden_dims = 150
* it turns out that such a small data set as "Movie reviews with one
sentence per review" (Pang and Lee, 2005) requires a much smaller network
than the one introduced in the original article:
- embedding dimension is only 20 (instead of 300; 'CNN-static' still requires ~100)
- 2 filter sizes (instead of 3)
- higher dropout probabilities and
- 3 filters per filter size is enough for 'CNN-non-static' (instead of 100)
- embedding initialization does not require prebuilt Google Word2Vec data.
Training Word2Vec on the same "Movie reviews" data set is enough to
achieve performance reported in the article (81.6%)
Another distinct difference is a sliding MaxPooling window of length 2
instead of MaxPooling over the whole feature map as in the article.
```
import numpy as np
import data_helpers
from w2v import train_word2vec
from keras.models import Sequential, Model
from keras.layers import Activation, Dense, Dropout, Embedding, Flatten, Input, Merge, Convolution1D, MaxPooling1D
from sklearn.cross_validation import train_test_split
np.random.seed(2)
model_variation = 'CNN-rand' # CNN-rand | CNN-non-static | CNN-static
print('Model variation is %s' % model_variation)
# Model Hyperparameters
sequence_length = 56
embedding_dim = 20
filter_sizes = (3, 4)
num_filters = 150
dropout_prob = (0.25, 0.5)
hidden_dims = 150
# Training parameters
batch_size = 32
num_epochs = 2
# Word2Vec parameters, see train_word2vec
min_word_count = 1 # Minimum word count
context = 10 # Context window size
print("Loading data...")
x, y, vocabulary, vocabulary_inv = data_helpers.load_data()
if model_variation=='CNN-non-static' or model_variation=='CNN-static':
embedding_weights = train_word2vec(x, vocabulary_inv, embedding_dim, min_word_count, context)
if model_variation=='CNN-static':
x = embedding_weights[0][x]
elif model_variation=='CNN-rand':
embedding_weights = None
else:
raise ValueError('Unknown model variation')
data = np.append(x,y,axis = 1)
train, test = train_test_split(data, test_size = 0.15,random_state = 0)
X_test = test[:,:56]
Y_test = test[:,56:58]
X_train = train[:,:56]
Y_train = train[:,56:58]
train_rows = np.random.randint(0,X_train.shape[0],2500)
X_train = X_train[train_rows]
Y_train = Y_train[train_rows]
print("Vocabulary Size: {:d}".format(len(vocabulary)))
def initialize():
global graph_in
global convs
graph_in = Input(shape=(sequence_length, embedding_dim))
convs = []
#Building the first layer (Convolution Layer) of the network
def build_layer_1(filter_length):
conv = Convolution1D(nb_filter=num_filters,
filter_length=filter_length,
border_mode='valid',
activation='relu',
subsample_length=1)(graph_in)
return conv
#Adding a max pooling layer to the model(network)
def add_max_pooling(conv):
pool = MaxPooling1D(pool_length=2)(conv)
return pool
#Adding a flattening layer to the model(network), before adding a dense layer
def add_flatten(conv_or_pool):
flatten = Flatten()(conv_or_pool)
return flatten
def add_sequential(graph):
#main sequential model
model = Sequential()
if not model_variation=='CNN-static':
model.add(Embedding(len(vocabulary), embedding_dim, input_length=sequence_length,
weights=embedding_weights))
model.add(Dropout(dropout_prob[0], input_shape=(sequence_length, embedding_dim)))
model.add(graph)
model.add(Dense(2))
model.add(Activation('sigmoid'))
return model
#1.Convolution 2.Flatten
def one_layer_convolution():
initialize()
conv = build_layer_1(3)
flatten = add_flatten(conv)
convs.append(flatten)
out = convs[0]
graph = Model(input=graph_in, output=out)
model = add_sequential(graph)
model.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
model.fit(X_train, Y_train, batch_size=batch_size,
nb_epoch=1, validation_data=(X_test, Y_test))
score = model.evaluate(X_test, Y_test, verbose=0)
print('Test score:', score[0])
print('Test accuracy:', score[1])
#1.Convolution 2.Max Pooling 3.Flatten
def two_layer_convolution():
initialize()
conv = build_layer_1(3)
pool = add_max_pooling(conv)
flatten = add_flatten(pool)
convs.append(flatten)
out = convs[0]
graph = Model(input=graph_in, output=out)
model = add_sequential(graph)
model.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
model.fit(X_train, Y_train, batch_size=batch_size,
nb_epoch=num_epochs, validation_data=(X_test, Y_test))
score = model.evaluate(X_test, Y_test, verbose=0)
print('Test score:', score[0])
print('Test accuracy:', score[1])
#1.Convolution 2.Max Pooling 3.Flatten 4.Convolution 5.Flatten
def three_layer_convolution():
initialize()
conv = build_layer_1(3)
pool = add_max_pooling(conv)
flatten = add_flatten(pool)
convs.append(flatten)
conv = build_layer_1(4)
flatten = add_flatten(conv)
convs.append(flatten)
if len(filter_sizes)>1:
out = Merge(mode='concat')(convs)
else:
out = convs[0]
graph = Model(input=graph_in, output=out)
model = add_sequential(graph)
model.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
model.fit(X_train, Y_train, batch_size=batch_size,
nb_epoch=1, validation_data=(X_test, Y_test))
score = model.evaluate(X_test, Y_test, verbose=0)
print('Test score:', score[0])
print('Test accuracy:', score[1])
#1.Convolution 2.Max Pooling 3.Flatten 4.Convolution 5.Max Pooling 6.Flatten
def four_layer_convolution():
initialize()
conv = build_layer_1(3)
pool = add_max_pooling(conv)
flatten = add_flatten(pool)
convs.append(flatten)
conv = build_layer_1(4)
pool = add_max_pooling(conv)
flatten = add_flatten(pool)
convs.append(flatten)
if len(filter_sizes)>1:
out = Merge(mode='concat')(convs)
else:
out = convs[0]
graph = Model(input=graph_in, output=out)
model = add_sequential(graph)
model.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
model.fit(X_train, Y_train, batch_size=batch_size,
nb_epoch=num_epochs, validation_data=(X_test, Y_test))
score = model.evaluate(X_test, Y_test, verbose=0)
print('Test score:', score[0])
print('Test accuracy:', score[1])
%%time
#1.Convolution 2.Flatten
one_layer_convolution()
%%time
#1.Convolution 2.Max Pooling 3.Flatten
two_layer_convolution()
%%time
#1.Convolution 2.Max Pooling 3.Flatten 4.Convolution 5.Flatten
three_layer_convolution()
%%time
#1.Convolution 2.Max Pooling 3.Flatten 4.Convolution 5.Max Pooling 6.Flatten
four_layer_convolution()
```
## Libraries
```
### Uncomment the next two lines to,
### install tensorflow_hub and tensorflow datasets
#!pip install tensorflow_hub
#!pip install tensorflow_datasets
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
import tensorflow_hub as hub
import tensorflow_datasets as tfds
from tensorflow.keras import layers
```
### Download and Split data into Train and Validation
```
def get_data():
(train_set, validation_set), info = tfds.load(
'tf_flowers',
with_info=True,
as_supervised=True,
split=['train[:70%]', 'train[70%:]'],
)
return train_set, validation_set, info
train_set, validation_set, info = get_data()
num_examples = info.splits['train'].num_examples
num_classes = info.features['label'].num_classes
print('Total Number of Classes: {}'.format(num_classes))
print('Total Number of Training Images: {}'.format(len(train_set)))
print('Total Number of Validation Images: {} \n'.format(len(validation_set)))
img_shape = 224
batch_size = 32
def format_image(image, label):
image = tf.image.resize(image, (img_shape, img_shape))/255.0
return image, label
train_batches = train_set.shuffle(num_examples//4).map(format_image).batch(batch_size).prefetch(1)
validation_batches = validation_set.map(format_image).batch(batch_size).prefetch(1)
```
### Getting MobileNet model's learned features
```
def get_mobilenet_features():
URL = "https://tfhub.dev/google/tf2-preview/mobilenet_v2/feature_vector/4"
global img_shape
feature_extractor = hub.KerasLayer(URL, input_shape=(img_shape, img_shape,3))
return feature_extractor
### Freezing the layers of transferred model (MobileNet)
feature_extractor = get_mobilenet_features()
feature_extractor.trainable = False
```
## Deep Learning Model - Transfer Learning using MobileNet
```
def create_transfer_learned_model(feature_extractor):
global num_classes
model = tf.keras.Sequential([
feature_extractor,
layers.Dense(num_classes, activation='softmax')
])
model.compile(
optimizer='adam',
        # note: the Dense layer above already applies a softmax, so the outputs are probabilities, not logits
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False),
metrics=['accuracy'])
model.summary()
return model
```
### Training the last classification layer of the model
Achieved Validation Accuracy: 90.10% (significant improvement over simple architecture)
```
epochs = 6
model = create_transfer_learned_model(feature_extractor)
history = model.fit(train_batches,
epochs=epochs,
validation_data=validation_batches)
```
### Plotting Accuracy and Loss Curves
```
def create_plots(history):
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
global epochs
epochs_range = range(epochs)
plt.figure(figsize=(8, 8))
plt.subplot(1, 2, 1)
plt.plot(epochs_range, acc, label='Training Accuracy')
plt.plot(epochs_range, val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')
plt.subplot(1, 2, 2)
plt.plot(epochs_range, loss, label='Training Loss')
plt.plot(epochs_range, val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.show()
create_plots(history)
```
### Prediction
```
def predict():
global train_batches, info
image_batch, label_batch = next(iter(train_batches.take(1)))
image_batch = image_batch.numpy()
label_batch = label_batch.numpy()
predicted_batch = model.predict(image_batch)
predicted_batch = tf.squeeze(predicted_batch).numpy()
class_names = np.array(info.features['label'].names)
predicted_ids = np.argmax(predicted_batch, axis=-1)
predicted_class_names = class_names[predicted_ids]
return image_batch, label_batch, predicted_ids, predicted_class_names
image_batch, label_batch, predicted_ids, predicted_class_names = predict()
print("Labels: ", label_batch)
print("Predicted labels: ", predicted_ids)
def plot_figures():
global image_batch, predicted_ids, label_batch
plt.figure(figsize=(10,9))
for n in range(30):
plt.subplot(6,5,n+1)
plt.subplots_adjust(hspace = 0.3)
plt.imshow(image_batch[n])
color = "blue" if predicted_ids[n] == label_batch[n] else "red"
plt.title(predicted_class_names[n].title(), color=color)
plt.axis('off')
_ = plt.suptitle("Model predictions (blue: correct, red: incorrect)")
plot_figures()
```
# Text Data in scikit-learn
```
import matplotlib.pyplot as plt
import sklearn
sklearn.set_config(display='diagram')
from pathlib import Path
import tarfile
from urllib import request
data_path = Path("data")
extracted_path = Path("data") / "train"
imdb_path = data_path / "aclImdbmini.tar.gz"
def untar_imdb():
if extracted_path.exists():
print("imdb dataset already extracted")
return
with tarfile.open(imdb_path, "r") as tar_f:
tar_f.extractall(data_path)
# This may take some time to run since it extracts the dataset
untar_imdb()
```
## CountVectorizer
```
sample_text = ["Can we go to the hill? I finished my homework.",
"The hill is very tall. Please be careful"]
from sklearn.feature_extraction.text import CountVectorizer
vect = CountVectorizer()
vect.fit(sample_text)
vect.get_feature_names()
X = vect.transform(sample_text)
X
X.toarray()
```
### Bag of words
```
sample_text
X_inverse = vect.inverse_transform(X)
X_inverse[0]
X_inverse[1]
```
## Loading text data with scikit-learn
```
from sklearn.datasets import load_files
reviews_train = load_files(extracted_path, categories=["neg", "pos"])
raw_text_train, raw_y_train = reviews_train.data, reviews_train.target
raw_text_train = [doc.replace(b"<br />", b" ") for doc in raw_text_train]
import numpy as np
np.unique(raw_y_train)
np.bincount(raw_y_train)
len(raw_text_train)
raw_text_train[5]
```
## Split dataset
```
from sklearn.model_selection import train_test_split
text_train, text_test, y_train, y_test = train_test_split(
raw_text_train, raw_y_train, stratify=raw_y_train, random_state=0)
```
### Transform training data
```
vect = CountVectorizer()
X_train = vect.fit_transform(text_train)
len(text_train)
X_train
```
### Transform testing set
```
len(text_test)
X_test = vect.transform(text_test)
X_test
```
### Extract feature names
```
feature_names = vect.get_feature_names()
feature_names[10000:10020]
feature_names[::3000]
```
### Linear model for classification
```
from sklearn.linear_model import LogisticRegression
lr = LogisticRegression(solver='liblinear', random_state=42).fit(X_train, y_train)
lr.score(X_test, y_test)
def plot_important_features(coef, feature_names, top_n=20, ax=None, rotation=40):
if ax is None:
ax = plt.gca()
feature_names = np.asarray(feature_names)
coef = coef.reshape(-1)
inds = np.argsort(coef)
low = inds[:top_n]
high = inds[-top_n:]
important = np.hstack([low, high])
myrange = range(len(important))
colors = ['red'] * top_n + ['blue'] * top_n
ax.bar(myrange, coef[important], color=colors)
ax.set_xticks(myrange)
ax.set_xticklabels(feature_names[important], rotation=rotation, ha="right")
ax.set_xlim(-.7, 2 * top_n)
ax.set_frame_on(False)
feature_names = vect.get_feature_names()
fig, ax = plt.subplots(figsize=(15, 6))
plot_important_features(lr.coef_, feature_names, top_n=20, ax=ax)
```
## Exercise 1
1. Train a `sklearn.ensemble.RandomForestClassifier` on the training set, `X_train` and `y_train`.
2. Evaluate the accuracy on the test set.
3. What are the top 20 important features according to `feature_importances_` of the random forest?
```
# %load solutions/01-ex01-solutions.py
```
## CountVectorizer Options
```
sample_text = ["Can we go to the hill? I finished my homework.",
"The hill is very tall. Please be careful"]
vect = CountVectorizer()
vect.fit(sample_text)
vect.get_feature_names()
```
### Stop words
```
vect = CountVectorizer(stop_words='english')
vect.fit(sample_text)
vect.get_feature_names()
from sklearn.feature_extraction.text import ENGLISH_STOP_WORDS
print(list(ENGLISH_STOP_WORDS))
```
### Max features
```
vect = CountVectorizer(max_features=4, stop_words='english')
vect.fit(sample_text)
vect.get_feature_names()
```
### Min frequency on the imdb dataset
With `min_df=1` (default)
```
X_train.shape
```
With `min_df=4`
```
vect = CountVectorizer(min_df=4)
X_train_min_df_4 = vect.fit_transform(text_train)
X_train_min_df_4.shape
lr_df_4 = LogisticRegression(solver='liblinear', random_state=42).fit(X_train_min_df_4, y_train)
X_test_min_df_4 = vect.transform(text_test)
```
#### Scores with different min frequencies
```
lr_df_4.score(X_test_min_df_4, y_test)
lr.score(X_test, y_test)
```
## Pipelines and Vectorizers
```
from sklearn.pipeline import Pipeline
log_reg = Pipeline([
('vectorizer', CountVectorizer()),
('classifier', LogisticRegression(random_state=42, solver='liblinear'))
])
log_reg
text_train[:2]
log_reg.fit(text_train, y_train)
log_reg.score(text_train, y_train)
log_reg.score(text_test, y_test)
```
## Exercise 2
1. Create a pipeline with a `CountVectorizer` with `min_df=5` and `stop_words='english'` and a `RandomForestClassifier`.
2. What is the score of the random forest on the test dataset?
```
# %load solutions/01-ex02-solutions.py
```
## Bigrams
`CountVectorizer` takes a `ngram_range` parameter
```
sample_text
cv = CountVectorizer(ngram_range=(1, 1)).fit(sample_text)
print("Vocabulary size:", len(cv.vocabulary_))
print("Vocabulary:", cv.get_feature_names())
cv = CountVectorizer(ngram_range=(2, 2)).fit(sample_text)
print("Vocabulary size:", len(cv.vocabulary_))
print("Vocabulary:")
print(cv.get_feature_names())
cv = CountVectorizer(ngram_range=(1, 2)).fit(sample_text)
print("Vocabulary size:", len(cv.vocabulary_))
print("Vocabulary:")
print(cv.get_feature_names())
```
## n-grams with stop words
```
cv_n_gram = CountVectorizer(ngram_range=(1, 2), min_df=4, stop_words="english")
cv_n_gram.fit(text_train)
len(cv_n_gram.vocabulary_)
print(cv_n_gram.get_feature_names()[::2000])
pipe_cv_n_gram = Pipeline([
('vectorizer', cv_n_gram),
('classifier', LogisticRegression(random_state=42, solver='liblinear'))
])
pipe_cv_n_gram.fit(text_train, y_train)
pipe_cv_n_gram.score(text_test, y_test)
feature_names = pipe_cv_n_gram['vectorizer'].get_feature_names()
fig, ax = plt.subplots(figsize=(15, 6))
plot_important_features(pipe_cv_n_gram['classifier'].coef_.ravel(), feature_names, top_n=20, ax=ax)
```
## Tf-idf rescaling
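Before applying it to text, it helps to see what the transform computes. With scikit-learn's default settings (`smooth_idf=True`, `norm='l2'`), the weight of term $t$ in document $d$ is

$$ \text{tfidf}(t, d) = \text{tf}(t, d) \cdot \left( \ln\frac{1 + n}{1 + \text{df}(t)} + 1 \right), $$

where $n$ is the number of documents and $\text{df}(t)$ is the number of documents containing $t$; each document vector is then rescaled to unit Euclidean norm.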
```
sample_text
from sklearn.feature_extraction.text import TfidfVectorizer
tfidvect = TfidfVectorizer().fit(sample_text)
tfid_trans = tfidvect.transform(sample_text)
tfid_trans.toarray()
```
## Train on the imdb dataset
```
log_reg_tfid = Pipeline([
('vectorizer', TfidfVectorizer(ngram_range=(1, 2), min_df=4,
stop_words="english")),
('classifier', LogisticRegression(random_state=42, solver='liblinear'))
])
log_reg_tfid.fit(text_train, y_train)
log_reg_tfid.score(text_test, y_test)
```
## Exercise 3
0. Load data from `fetch_20newsgroups`:
```python
from sklearn.datasets import fetch_20newsgroups
categories = [
'alt.atheism',
'sci.space',
]
remove = ('headers', 'footers', 'quotes')
data_train = fetch_20newsgroups(subset='train', categories=categories,
remove=remove)
data_test = fetch_20newsgroups(subset='test', categories=categories,
remove=remove)
X_train, y_train = data_train.data, data_train.target
X_test, y_test = data_test.data, data_test.target
```
1. How many samples are there in the training dataset and test dataset?
1. Construct a pipeline with a `TfidfVectorizer` and `LogisticRegression`.
1. Evaluate the pipeline on the test set.
1. Plot the feature importances using `plot_important_features`.
```
# %load solutions/01-ex03-solutions.py
```
## Sparse logistic regression
$\newcommand{\n}[1]{\left\|#1 \right\|}$
$\newcommand{\R}{\mathbb R} $
$\newcommand{\N}{\mathbb N} $
$\newcommand{\Z}{\mathbb Z} $
$\newcommand{\lr}[1]{\left\langle #1\right\rangle}$
We want to minimize
$$\min_x J(x) := \sum_{i=1}^m \log\bigl(1+\exp (-b_i\lr{a_i, x})\bigr) + \gamma \n{x}_1$$
where $(a_i, b_i)\in \R^n\times \{-1,1\}$ is the training set and $\gamma >0$. We can rewrite the objective as
$J(x) = \tilde f(Kx)+g(x)$,
where $$\tilde f(y)=\sum_{i=1}^{m} \log (1+\exp(y_i)), \quad K = -b*A \in \R^{m\times n}, \quad g(x) = \gamma \n{x}_1$$
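Two quantities used by all three algorithms below can be written out explicitly. The gradient of the smooth part is

$$ \nabla \tilde f(y)_i = \frac{\exp(y_i)}{1 + \exp(y_i)}, \qquad \nabla\bigl(\tilde f \circ K\bigr)(x) = K^\top \nabla\tilde f(Kx), $$

and the proximal operator of $g(x) = \gamma \n{x}_1$ is the soft-thresholding map

$$ \operatorname{prox}_{\rho g}(x)_i = \operatorname{sign}(x_i)\,\max\bigl(|x_i| - \rho\gamma,\, 0\bigr), $$

which is exactly what the one-line `prox_g` below implements via clipping.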
```
import numpy as np
import matplotlib.pyplot as plt  # needed for the plots at the end of the notebook
import scipy.linalg as LA
import scipy.sparse as spr
import scipy.sparse.linalg as spr_LA
from time import perf_counter
from sklearn import datasets
filename = "data/a9a"
#filename = "data/real-sim.bz2"
#filename = "data/rcv1_train.binary.bz2"
#filename = "data/kdda.t.bz2"
A, b = datasets.load_svmlight_file(filename)
m, n = A.shape
print("The dataset {}. The dimensions: m={}, n={}".format(filename[5:], m, n))
# define all ingredients for sparse logistic regression
gamma = 0.005 * LA.norm(A.T.dot(b), np.inf)
K = (A.T.multiply(-b)).T.tocsr()
# find the norm of K^T K
L = spr_LA.svds(K, k=1, return_singular_vectors=False)**2
# starting point
x0 = np.zeros(n)
# stepsize
ss = 4/L
g = lambda x: gamma*LA.norm(x,1)
prox_g = lambda x, rho: x + np.clip(-x, -rho*gamma, rho*gamma)
f = lambda x: np.log(1. + np.exp(x)).sum()
def df(x):
exp_x = np.exp(x)
return exp_x/(1.+exp_x)
dh = lambda x, Kx: K.T.dot(df(Kx))
# residual
res = lambda x: LA.norm(x-prox_g(x-dh(x,K.dot(x)), 1))
# energy
J = lambda x, Kx: f(Kx)+g(x)
### Algorithms
def prox_grad(x1, s=1, numb_iter=100):
"""
Implementation of the proximal gradient method.
x1: array, a starting point
s: positive number, a stepsize
numb_iter: positive integer, number of iterations
Returns an array of energy values, computed in each iteration, and the
argument x_k after numb_iter iterations
"""
begin = perf_counter()
x = x1.copy()
Kx = K.dot(x)
values = [J(x, Kx)]
dhx = dh(x,Kx)
for i in range(numb_iter):
#x = prox_g(x - s * dh(x, Kx), s)
x = prox_g(x - s * dhx, s)
Kx = K.dot(x)
dhx = dh(x,Kx)
values.append(J(x, Kx))
end = perf_counter()
print("Time execution of prox-grad:", end - begin)
return np.array(values), x
def fista(x1, s=1, numb_iter=100):
"""
Implementation of the FISTA.
x1: array, a starting point
s: positive number, a stepsize
numb_iter: positive integer, number of iterations
Returns an array of energy values, computed in each iteration, and the
argument x_k after numb_iter iterations
"""
begin = perf_counter()
x, y = x1.copy(), x1.copy()
t = 1.
Ky = K.dot(y)
values = [J(y,Ky)]
for i in range(numb_iter):
x1 = prox_g(y - s * dh(y, Ky), s)
t1 = 0.5 * (1 + np.sqrt(1 + 4 * t**2))
y = x1 + (t - 1) / t1 * (x1 - x)
x, t = x1, t1
Ky = K.dot(y)
values.append(J(y, Ky))
end = perf_counter()
print("Time execution of FISTA:", end - begin)
return np.array(values), x
def adaptive_graal(x1, numb_iter=100):
"""
Implementation of the adaptive GRAAL.
x1: array, a starting point
numb_iter: positive integer, number of iterations
Returns an array of energy values, computed in each iteration, and the
argument x_k after numb_iter iterations
"""
begin = perf_counter()
phi = 1.5
x, x_ = x1.copy(), x1.copy()
x0 = x + np.random.randn(x.shape[0]) * 1e-9
Kx = K.dot(x)
dhx = dh(x, Kx)
la = phi / 2 * LA.norm(x - x0) / LA.norm(dhx - dh(x0, K.dot(x0)))
rho = 1. / phi + 1. / phi**2
values = [J(x, Kx)]
th = 1
for i in range(numb_iter):
x1 = prox_g(x_ - la * dhx, la)
Kx1 = K.dot(x1)
dhx1 = dh(x1, Kx1)
n1 = LA.norm(x1 - x)**2
n2 = LA.norm(dhx1 - dhx)**2
n1_div_n2 = n1/n2 if n2 != 0 else la*10
la1 = min(rho * la, 0.25 * phi * th / la * (n1_div_n2))
x_ = ((phi - 1) * x1 + x_) / phi
th = phi * la1 / la
x, la, dhx = x1, la1, dhx1
values.append(J(x1, Kx1))
end = perf_counter()
print("Time execution of aGRAAL:", end - begin)
return values, x, x_
```
Run the algorithms. This might take some time if the dataset and/or the number of iterations are large.
```
N = 10000
ans1 = prox_grad(x0, ss, numb_iter=N)
ans2 = fista(x0, ss, numb_iter=N)
ans3 = adaptive_graal(x0, numb_iter=N)
x1, x2, x3 = ans1[1], ans2[1], ans3[1]
print("Residuals:", [res(x) for x in [x1, x2, x3]])
```
Plot the results
```
values = [ans1[0], ans2[0], ans3[0]]
labels = ["PGM", "FISTA", "aGRAAL"]
linestyles = [':', "--", "-"]
colors = ['b', 'g', '#FFD700']
v_min = min([min(v) for v in values])
plt.figure(figsize=(6,4))
for i,v in enumerate(values):
plt.plot(v - v_min, color=colors[i], label=labels[i], linestyle=linestyles[i])
plt.yscale('log')
plt.xlabel(u'iterations, k')
plt.ylabel('$J(x^k)-J_{_*}$')
plt.legend()
#plt.savefig('figures/a9a.pdf', bbox_inches='tight')
plt.show()
plt.clf()
np.max(spr_LA.eigsh(K.T.dot(K))[0])
L
```
# Ibis Integration (Experimental)
The [Ibis project](https://ibis-project.org/docs/) tries to bridge the gap between local Python and [various backends](https://ibis-project.org/docs/backends/index.html) including distributed systems such as Spark and Dask. The main idea is to create a pythonic interface to express SQL semantic, so the expression is agnostic to the backends.
The design idea is very aligned with Fugue. But please notice there are a few key differences:
* **Fugue supports both pythonic APIs and SQL**, and the choice should be determined by the particular case or the user's preference. On the other hand, Ibis focuses on the pythonic expression of SQL and perfects it.
* **Fugue supports SQL and non-SQL semantics for data transformation.** Besides SQL, another important option is [Fugue Transform](introduction.html#fugue-transform). The Fugue transformers can wrap complicated Python/Pandas logic and apply it distributedly on dataframes. A typical example is distributed model inference: the inference part has to be done in Python and can easily be achieved by a transformer, but the data preparation may be done nicely in SQL or Ibis.
* **Fugue and Ibis are on different abstraction layers.** Ibis is nice for constructing single SQL statements to accomplish single tasks. Even if it involves multiple tables and multiple steps, its final step is either outputting one table or inserting one table into a database. On the other hand, a Fugue workflow orchestrates these tasks. For example, it can read a table, do the first transformation and save to a file, then do the second transformation and print. Each transformation may be done using Ibis, but the loading, saving and printing, and the orchestration, can be done by Fugue.
This is also why Ibis can be a very nice option for Fugue users to build their pipelines. People who prefer pythonic APIs can keep all the logic in Python with the help of Ibis. Although Fugue has its own functional API similar to Ibis, the programming interface of Ibis is really elegant. It usually helps users write less but more expressive code to achieve the same thing.
## Hello World
In this example, we try to achieve this SQL semantic:
```sql
SELECT a, a+1 AS b FROM
(SELECT a FROM tb1 UNION SELECT a FROM tb2)
```
```
from ibis import BaseBackend, literal
import ibis.expr.types as ir
def ibis_func(backend:BaseBackend) -> ir.TableExpr:
tb1 = backend.table("tb1")
tb2 = backend.table("tb2")
tb3 = tb1.union(tb2)
return tb3.mutate(b=tb3.a+literal(1))
```
Now let's test with the pandas backend
```
import ibis
import pandas as pd
con = ibis.pandas.connect({
"tb1": pd.DataFrame([[0]], columns=["a"]),
"tb2": pd.DataFrame([[1]], columns=["a"])
})
ibis_func(con).execute()
```
Now let's make this a part of Fugue
```
from fugue import FugueWorkflow
from fugue_ibis import run_ibis
dag = FugueWorkflow()
df1 = dag.df([[0]], "a:long")
df2 = dag.df([[1]], "a:long")
df3 = run_ibis(ibis_func, tb1=df1, tb2=df2)
df3.show()
```
Now let's run on Pandas
```
dag.run()
```
Now let's run on Dask
```
import fugue_dask
dag.run("dask")
```
Now let's run on DuckDB
```
import fugue_duckdb
dag.run("duck")
```
For each different execution engine, Ibis will also run on the correspondent backend.
## A deeper integration
The above approach needs a function taking in an Ibis backend and returning a `TableExpr`. The following is another approach that is simpler and more elegant.
```
from fugue_ibis import as_ibis, as_fugue
dag = FugueWorkflow()
tb1 = as_ibis(dag.df([[0]], "a:long"))
tb2 = as_ibis(dag.df([[1]], "a:long"))
tb3 = tb1.union(tb2)
df3 = as_fugue(tb3.mutate(b=tb3.a+literal(1)))
df3.show()
dag.run()
```
Alternatively, you can treat `as_ibis` and `as_fugue` as class methods. This is more convenient to use, but it's a bit magical. This is achieved by adding these two methods using `setattr` to the correspondent classes. This patching-like design pattern is widely used by Ibis.
```
import fugue_ibis # must import
dag = FugueWorkflow()
tb1 = dag.df([[0]], "a:long").as_ibis()
tb2 = dag.df([[1]], "a:long").as_ibis()
tb3 = tb1.union(tb2)
df3 = tb3.mutate(b=tb3.a+literal(1)).as_fugue()
df3.show()
dag.run()
```
By importing `fugue_ibis`, the two methods were automatically added.
It's up to the users which way to go. The first approach (`run_ibis`) is the best to separate Ibis logic, as you can see, it is also great for unit testing. The second approach is elegant, but you will have to unit test the code with the logic before and after the conversions. The third approach is the most intuitive, but it's a bit magical.
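For readers curious how this patching works under the hood, it is ordinary Python: at import time the extension module attaches methods to existing classes with `setattr`. The following is a self-contained toy sketch of the technique (class and method names are made up), not Fugue's actual code:
```python
# Toy illustration of the patching pattern described above (names are made up).
class ThirdPartyDataFrame:          # stand-in for a class defined elsewhere
    def __init__(self, data):
        self.data = data

def _as_ibis_demo(self):
    # a real integration would wrap `self` as an Ibis table expression
    return f"ibis-view-of({self.data!r})"

# executed once when the extension module is imported
setattr(ThirdPartyDataFrame, "as_ibis", _as_ibis_demo)

print(ThirdPartyDataFrame([0, 1]).as_ibis())  # -> "ibis-view-of([0, 1])"
```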
## Z-Score
Now, let's consider a practical example. We want to use Fugue to compute z-score of a dataframe, partitioning should be an option. The reason to implement it on Fugue level is that the compute becomes scale agnostic and framework agnostic.
```
from fugue import WorkflowDataFrame
from fugue_ibis import as_ibis, as_fugue
def z_score(df:WorkflowDataFrame, input_col:str, output_col:str) -> WorkflowDataFrame:
by = df.partition_spec.partition_by
idf = as_ibis(df)
col = idf[input_col]
if len(by)==0:
return as_fugue(idf.mutate(**{output_col:(col - col.mean())/col.std()}))
agg = idf.group_by(by).aggregate(mean_=col.mean(), std_=col.std())
j = idf.inner_join(agg, by)[idf, ((idf[input_col]-agg.mean_)/agg.std_).name(output_col)]
return as_fugue(j)
```
Now, generate testing data
```
import numpy as np
np.random.seed(0)
pdf = pd.DataFrame(dict(
a=np.random.choice(["a","b"], 100),
b=np.random.choice(["c","d"], 100),
c=np.random.rand(100),
))
pdf["expected1"] = (pdf.c - pdf.c.mean())/pdf.c.std()
pdf = pdf.groupby(["a", "b"]).apply(lambda tdf: tdf.assign(expected2=(tdf.c - tdf.c.mean())/tdf.c.std())).reset_index(drop=True)
```
And here is the final code.
```
dag = FugueWorkflow()
df = z_score(dag.df(pdf), "c", "z1")
df = z_score(df.partition_by("a", "b"), "c", "z2")
df.show()
dag.run()
```
## Consistency issues
Ibis as of 2.0.0 can have different behaviors on different backends. Here are some examples of the common discrepancies between pandas and SQL.
```
# pandas drops null keys on group (by default), SQL doesn't
dag = FugueWorkflow()
df = dag.df([["a",1],[None,2]], "a:str,b:int").as_ibis()
df.groupby(["a"]).aggregate(s=df.b.sum()).as_fugue().show()
dag.run()
dag.run("duckdb")
# pandas joins on NULLs, SQL doesn't
dag = FugueWorkflow()
df1 = dag.df([["a",1],[None,2]], "a:str,b:int").as_ibis()
df2 = dag.df([["a",1],[None,2]], "a:str,c:int").as_ibis()
df1.inner_join(df2, ["a"])[df1, df2.c].as_fugue().show()
dag.run()
dag.run("duckdb")
```
Since Ibis integration is experimental, we rely on Ibis to achieve consistent behaviors. If you have any Ibis specific question please also consider asking in [Ibis issues](https://github.com/ibis-project/ibis/issues).
# One Shot Learning with Siamese Networks
This is the jupyter notebook that accompanies
## Imports
All the imports are defined here
```
%matplotlib inline
import torchvision
import torchvision.datasets as dset
import torchvision.transforms as transforms
from torch.utils.data import DataLoader,Dataset
import matplotlib.pyplot as plt
import torchvision.utils
import numpy as np
import random
from PIL import Image
import torch
from torch.autograd import Variable
import PIL.ImageOps
import torch.nn as nn
from torch import optim
import torch.nn.functional as F
```
## Helper functions
Set of helper functions
```
def imshow(img,text=None,should_save=False):
npimg = img.numpy()
plt.axis("off")
if text:
plt.text(75, 8, text, style='italic',fontweight='bold',
bbox={'facecolor':'white', 'alpha':0.8, 'pad':10})
plt.imshow(np.transpose(npimg, (1, 2, 0)))
plt.show()
def show_plot(iteration,loss):
plt.plot(iteration,loss)
plt.show()
```
## Configuration Class
A simple class to manage configuration
```
class Config():
training_dir = "./data/faces/training/"
testing_dir = "./data/faces/testing/"
train_batch_size = 64
train_number_epochs = 100
```
## Custom Dataset Class
This dataset generates a pair of images: 0 for a genuine pair and 1 for an imposter pair.
```
class SiameseNetworkDataset(Dataset):
def __init__(self,imageFolderDataset,transform=None,should_invert=True):
self.imageFolderDataset = imageFolderDataset
self.transform = transform
self.should_invert = should_invert
def __getitem__(self,index):
img0_tuple = random.choice(self.imageFolderDataset.imgs)
#we need to make sure approx 50% of images are in the same class
should_get_same_class = random.randint(0,1)
if should_get_same_class:
while True:
#keep looping till the same class image is found
img1_tuple = random.choice(self.imageFolderDataset.imgs)
if img0_tuple[1]==img1_tuple[1]:
break
else:
while True:
#keep looping till a different class image is found
img1_tuple = random.choice(self.imageFolderDataset.imgs)
if img0_tuple[1] !=img1_tuple[1]:
break
img0 = Image.open(img0_tuple[0])
img1 = Image.open(img1_tuple[0])
img0 = img0.convert("L")
img1 = img1.convert("L")
if self.should_invert:
img0 = PIL.ImageOps.invert(img0)
img1 = PIL.ImageOps.invert(img1)
if self.transform is not None:
img0 = self.transform(img0)
img1 = self.transform(img1)
return img0, img1 , torch.from_numpy(np.array([int(img1_tuple[1]!=img0_tuple[1])],dtype=np.float32))
def __len__(self):
return len(self.imageFolderDataset.imgs)
```
## Using Image Folder Dataset
```
folder_dataset = dset.ImageFolder(root=Config.training_dir)
siamese_dataset = SiameseNetworkDataset(imageFolderDataset=folder_dataset,
transform=transforms.Compose([transforms.Resize((100,100)),
transforms.ToTensor()
])
,should_invert=False)
```
## Visualising some of the data
The top row and the bottom row of any column form one pair. The 0s and 1s correspond to the columns of the images.
1 indicates a dissimilar pair, and 0 indicates a similar pair.
```
vis_dataloader = DataLoader(siamese_dataset,
shuffle=True,
num_workers=8,
batch_size=8)
dataiter = iter(vis_dataloader)
example_batch = next(dataiter)
concatenated = torch.cat((example_batch[0],example_batch[1]),0)
imshow(torchvision.utils.make_grid(concatenated))
print(example_batch[2].numpy())
```
## Neural Net Definition
We will use a standard convolutional neural network
```
class SiameseNetwork(nn.Module):
def __init__(self):
super(SiameseNetwork, self).__init__()
self.cnn1 = nn.Sequential(
nn.ReflectionPad2d(1),
nn.Conv2d(1, 4, kernel_size=3),
nn.ReLU(inplace=True),
nn.BatchNorm2d(4),
nn.ReflectionPad2d(1),
nn.Conv2d(4, 8, kernel_size=3),
nn.ReLU(inplace=True),
nn.BatchNorm2d(8),
nn.ReflectionPad2d(1),
nn.Conv2d(8, 8, kernel_size=3),
nn.ReLU(inplace=True),
nn.BatchNorm2d(8),
)
self.fc1 = nn.Sequential(
nn.Linear(8*100*100, 500),
nn.ReLU(inplace=True),
nn.Linear(500, 500),
nn.ReLU(inplace=True),
nn.Linear(500, 5))
def forward_once(self, x):
output = self.cnn1(x)
output = output.view(output.size()[0], -1)
output = self.fc1(output)
return output
def forward(self, input1, input2):
output1 = self.forward_once(input1)
output2 = self.forward_once(input2)
return output1, output2
```
## Contrastive Loss
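The loss implemented below penalizes large embedding distances for similar pairs (label 0) and distances smaller than a margin $m$ for dissimilar pairs (label 1): with $D = \lVert f(x_1) - f(x_2) \rVert_2$,

$$ \mathcal{L}(x_1, x_2, y) = (1 - y)\, D^2 + y \, \bigl(\max(m - D,\, 0)\bigr)^2, $$

averaged over the batch, where $m$ is the margin ($2.0$ in the code).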
```
class ContrastiveLoss(torch.nn.Module):
"""
Contrastive loss function.
Based on: http://yann.lecun.com/exdb/publis/pdf/hadsell-chopra-lecun-06.pdf
"""
def __init__(self, margin=2.0):
super(ContrastiveLoss, self).__init__()
self.margin = margin
def forward(self, output1, output2, label):
euclidean_distance = F.pairwise_distance(output1, output2, keepdim = True)
loss_contrastive = torch.mean((1-label) * torch.pow(euclidean_distance, 2) +
(label) * torch.pow(torch.clamp(self.margin - euclidean_distance, min=0.0), 2))
return loss_contrastive
```
## Training Time!
```
train_dataloader = DataLoader(siamese_dataset,
shuffle=True,
num_workers=8,
batch_size=Config.train_batch_size)
net = SiameseNetwork().cuda()
criterion = ContrastiveLoss()
optimizer = optim.Adam(net.parameters(),lr = 0.0005 )
counter = []
loss_history = []
iteration_number= 0
for epoch in range(0,Config.train_number_epochs):
for i, data in enumerate(train_dataloader,0):
img0, img1 , label = data
img0, img1 , label = img0.cuda(), img1.cuda() , label.cuda()
optimizer.zero_grad()
output1,output2 = net(img0,img1)
loss_contrastive = criterion(output1,output2,label)
loss_contrastive.backward()
optimizer.step()
if i %10 == 0 :
print("Epoch number {}\n Current loss {}\n".format(epoch,loss_contrastive.item()))
iteration_number +=10
counter.append(iteration_number)
loss_history.append(loss_contrastive.item())
show_plot(counter,loss_history)
```
## Some simple testing
The last 3 subjects were held out from training and will be used for testing. The distance between each image pair denotes the degree of similarity the model found between the two images. Lower values mean the model found the pair more similar, while higher values indicate it found them dissimilar.
```
folder_dataset_test = dset.ImageFolder(root=Config.testing_dir)
siamese_dataset = SiameseNetworkDataset(imageFolderDataset=folder_dataset_test,
transform=transforms.Compose([transforms.Resize((100,100)),
transforms.ToTensor()
])
,should_invert=False)
test_dataloader = DataLoader(siamese_dataset,num_workers=6,batch_size=1,shuffle=True)
dataiter = iter(test_dataloader)
x0,_,_ = next(dataiter)
for i in range(10):
_,x1,label2 = next(dataiter)
concatenated = torch.cat((x0,x1),0)
output1,output2 = net(Variable(x0).cuda(),Variable(x1).cuda())
euclidean_distance = F.pairwise_distance(output1, output2)
imshow(torchvision.utils.make_grid(concatenated),'Dissimilarity: {:.2f}'.format(euclidean_distance.item()))
```
# Quantization of Signals
*This jupyter notebook is part of a [collection of notebooks](../index.ipynb) on various topics of Digital Signal Processing. Please direct questions and suggestions to [[email protected]](mailto:[email protected]).*
## Characteristic of a Linear Uniform Quantizer
The characteristics of a quantizer depend on the mapping functions $f(\cdot)$, $g(\cdot)$ and the rounding operation $\lfloor \cdot \rfloor$ introduced in the [previous section](introduction.ipynb). A linear quantizer bases on linear mapping functions $f(\cdot)$ and $g(\cdot)$. A uniform quantizer splits the mapped input signal into quantization steps of equal size. Quantizers can be described by their nonlinear in-/output characteristic $x_Q[k] = \mathcal{Q} \{ x[k] \}$, where $\mathcal{Q} \{ \cdot \}$ denotes the quantization process. For linear uniform quantization it is common to differentiate between two characteristic curves, the so called mid-tread and mid-rise. Both are introduced in the following.
### Mid-Tread Characteristic Curve
The in-/output relation of the mid-tread quantizer is given as
\begin{equation}
x_Q[k] = Q \cdot \underbrace{\left\lfloor \frac{x[k]}{Q} + \frac{1}{2} \right\rfloor}_{index}
\end{equation}
where $Q$ denotes the constant quantization step size and $\lfloor \cdot \rfloor$ the [floor function](https://en.wikipedia.org/wiki/Floor_and_ceiling_functions) which maps a real number to the largest integer not greater than its argument. Without restricting $x[k]$ in amplitude, the resulting quantization indexes are [countable infinite](https://en.wikipedia.org/wiki/Countable_set). For a finite number of quantization indexes, the input signal has to be restricted to a minimal/maximal amplitude $x_\text{min} < x[k] < x_\text{max}$ before quantization. The resulting quantization characteristic of a linear uniform mid-tread quantizer is shown below

The term mid-tread is due to the fact that small values $|x[k]| < \frac{Q}{2}$ are mapped to zero.
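As a quick numerical check: with $Q = 0.1$, a sample $x[k] = 0.04$ gives $x_Q[k] = 0.1 \cdot \lfloor 0.4 + 0.5 \rfloor = 0$, while $x[k] = 0.06$ gives $x_Q[k] = 0.1 \cdot \lfloor 0.6 + 0.5 \rfloor = 0.1$.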
#### Example - Mid-tread quantization of a sine signal
The quantization of one period of a sine signal $x[k] = A \cdot \sin[\Omega_0\,k]$ by a mid-tread quantizer is simulated. $A$ denotes the amplitude of the signal, $x_\text{min} = -1$ and $x_\text{max} = 1$ are the smallest and largest output values of the quantizer, respectively.
```
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
A = 1.2 # amplitude of signal
Q = 1/10 # quantization stepsize
N = 2000 # number of samples
def uniform_midtread_quantizer(x, Q):
# limiter
x = np.copy(x)
idx = np.where(np.abs(x) >= 1)
x[idx] = np.sign(x[idx])
# linear uniform quantization
xQ = Q * np.floor(x/Q + 1/2)
return xQ
def plot_signals(x, xQ):
e = xQ - x
plt.figure(figsize=(10,6))
plt.plot(x, label=r'signal $x[k]$')
plt.plot(xQ, label=r'quantized signal $x_Q[k]$')
plt.plot(e, label=r'quantization error $e[k]$')
plt.xlabel(r'$k$')
plt.axis([0, N, -1.1*A, 1.1*A])
plt.legend()
plt.grid()
# generate signal
x = A * np.sin(2*np.pi/N * np.arange(N))
# quantize signal
xQ = uniform_midtread_quantizer(x, Q)
# plot signals
plot_signals(x, xQ)
```
**Exercise**
* Change the quantization stepsize `Q` and the amplitude `A` of the signal. Which effect does this have on the quantization error?
Solution: The smaller the quantization step size, the smaller the quantization error is for $|x[k]| < 1$. Note, the quantization error is not bounded for $|x[k]| > 1$ due to the clipping of the signal $x[k]$.
### Mid-Rise Characteristic Curve
The in-/output relation of the mid-rise quantizer is given as
\begin{equation}
x_Q[k] = Q \cdot \Big( \underbrace{\left\lfloor\frac{ x[k] }{Q}\right\rfloor}_{index} + \frac{1}{2} \Big)
\end{equation}
where $\lfloor \cdot \rfloor$ denotes the floor function. The quantization characteristic of a linear uniform mid-rise quantizer is illustrated below

The term mid-rise accounts for the fact that $x[k] = 0$ is not mapped to zero. Small positive/negative values around zero are mapped to $\pm \frac{Q}{2}$.
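As a quick numerical check: with $Q = 0.1$, a sample $x[k] = 0.04$ gives $x_Q[k] = 0.1 \cdot \left( \lfloor 0.4 \rfloor + \tfrac{1}{2} \right) = 0.05$, and $x[k] = -0.04$ gives $x_Q[k] = 0.1 \cdot \left( \lfloor -0.4 \rfloor + \tfrac{1}{2} \right) = -0.05$.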
#### Example - Mid-rise quantization of a sine signal
The previous example is now reevaluated using the mid-rise characteristic
```
A = 1.2 # amplitude of signal
Q = 1/10 # quantization stepsize
N = 2000 # number of samples
def uniform_midrise_quantizer(x, Q):
# limiter
x = np.copy(x)
idx = np.where(np.abs(x) >= 1)
x[idx] = np.sign(x[idx])
# linear uniform quantization
xQ = Q * (np.floor(x/Q) + .5)
return xQ
# generate signal
x = A * np.sin(2*np.pi/N * np.arange(N))
# quantize signal
xQ = uniform_midrise_quantizer(x, Q)
# plot signals
plot_signals(x, xQ)
```
**Exercise**
* What are the differences between the mid-tread and the mid-rise characteristic curves for the given example?
Solution: The mid-tread and the mid-rise quantization of the sine signal differ for signal values smaller than half of the quantization interval. Mid-tread has a representation of $x[k] = 0$ while this is not the case for the mid-rise quantization.
**Copyright**
This notebook is provided as [Open Educational Resource](https://en.wikipedia.org/wiki/Open_educational_resources). Feel free to use the notebook for your own purposes. The text is licensed under [Creative Commons Attribution 4.0](https://creativecommons.org/licenses/by/4.0/), the code of the IPython examples under the [MIT license](https://opensource.org/licenses/MIT). Please attribute the work as follows: *Sascha Spors, Digital Signal Processing - Lecture notes featuring computational examples, 2016-2018*.
# Software Design for Scientific Computing
----
## Unit 5: Integrating high-level languages with low-level languages.
## Unit 5 Agenda
- JIT (Numba)
- Cython.
- Integrating Python with FORTRAN.
- **Integrating Python with C.**
## Recap
- We wrote the Python code.
- We moved everything to numpy.
- We profiled.
- We parallelized (joblib/dask).
- We profiled.
- We used Numba.
- We profiled.
- If we can choose the language: Cython.
- If we cannot choose the language and we are doing numerical computing: FORTRAN.
- If we cannot choose, we go with C/C++/Rust/whatever.
## Ctypes
- Lets you use existing libraries written in other languages by writing **simple** wrappers in Python.
- Ships with Python.
- Can be a bit **hard** to use.
- It is an ideal tool for breaking Python.
### Ctypes Example 1/2
The C code we will use in this tutorial is designed to be as simple as possible while demonstrating the concepts we are covering. It is more of a "toy example" and is not intended to be useful on its own. These are the functions we will use:
```c
int simple_function(void) {
static int counter = 0;
counter++;
return counter;
}
```
- `simple_function` simply returns counting numbers.
- Each time it is called, it increments the counter and returns that value.
### Ctypes Example 2/2
```c
void add_one_to_string(char *input) {
int ii = 0;
for (; ii < strlen(input); ii++) {
input[ii]++;
}
}
```
- Adds one to each character in a character array that is passed in.
- We will use this to talk about Python's immutable strings and how to work around them when necessary.
These examples are stored in `clib1.c`, and they are compiled with:
```bash
gcc -c -Wall -Werror -fpic clib1.c # create the object code
gcc -shared -o libclib1.so clib1.o # create the .so
```
## Calling a simple function
```
import ctypes
# Load the shared library into c types.
libc = ctypes.CDLL("ctypes/libclib1.so")
counter = libc.simple_function()
counter
```
## Immutable Python strings with Ctypes
```
print("Calling C function which tries to modify Python string")
original_string = "starting string"
print("Before:", original_string)
# This call does not change value, even though it tries!
libc.add_one_to_string(original_string)
print("After: ", original_string)
```
- As you will notice, this **does not work**.
- The `original_string` is not available to the C function at all when doing this.
- The C function modified some other memory, not the string.
- Not only does the C function fail to do what you want, it also modifies memory it should not, which can lead to memory-corruption problems.
- If we want the C function to have access to the string, we need to do a bit of marshalling work.
## Immutable Python strings with Ctypes
- We need to convert the original string to bytes using `str.encode`, and then pass that to the constructor of a `ctypes.create_string_buffer`.
- String buffers are mutable and are passed to C as `char *`.
```
# The ctypes string buffer IS mutable, however.
print("Calling C function with mutable buffer this time")
# Need to encode the original to get bytes for string_buffer
mutable_string = ctypes.create_string_buffer(str.encode(original_string))
print("Before:", mutable_string.value)
libc.add_one_to_string(mutable_string) # Works!
print("After: ", mutable_string.value)
```
## Specifying function signatures in ctypes
- As we saw earlier, we can specify the return type if necessary.
- We can give a similar specification for the function's parameters.
- In addition, providing a function signature lets Python check that you are passing the correct parameters when you call a C function; otherwise, **bad** things can happen.
To specify the return type of a function, get the function object and set its `restype` property:
```python
libc.func.restype = ctypes.POINTER(ctypes.c_char)
```
and to specify the argument signature:
```python
libc.func.argtypes = [ctypes.POINTER(ctypes.c_char), ]
```
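As a concrete sketch, we could declare signatures for the two toy functions from `libclib1.so`; the prototypes are taken from the C code above, but treat the exact declarations as an illustrative assumption rather than the only correct choice:
```python
import ctypes

libc = ctypes.CDLL("ctypes/libclib1.so")

# int simple_function(void)
libc.simple_function.argtypes = []
libc.simple_function.restype = ctypes.c_int

# void add_one_to_string(char *input)
libc.add_one_to_string.argtypes = [ctypes.POINTER(ctypes.c_char)]
libc.add_one_to_string.restype = None

buf = ctypes.create_string_buffer(b"abc")
libc.add_one_to_string(buf)   # a wrong argument type would now raise ctypes.ArgumentError
print(buf.value)              # b'bcd'
```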
## Writing a Python interface in C
We are going to "wrap" the C library function `fputs()`:
```C
int fputs (const char *, FILE *)
```
- This function takes two arguments:
1. `const char *` is an array of characters.
2. `FILE *` is a pointer to a file stream.
- `fputs()` writes the character array to the specified file and returns a non-negative value; on success, this value indicates the number of bytes written to the file.
- If there is an error, it returns `EOF`.
## Writing the C function for `fputs()`
This is a basic C program that uses fputs() to write a string to a file stream:
```C
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
int main() {
FILE *fp = fopen("write.txt", "w");
fputs("Real Python!", fp);
fclose(fp);
return 0;
}
```
## Wrapping `fputs()`
The following code block shows the final wrapped version of your C code:
```C
#include <Python.h>
static PyObject *method_fputs(PyObject *self, PyObject *args) {
char *str, *filename = NULL;
int bytes_copied = -1;
/* Parse arguments */
if(!PyArg_ParseTuple(args, "ss", &str, &filename)) {
return NULL;
}
FILE *fp = fopen(filename, "w");
bytes_copied = fputs(str, fp);
fclose(fp);
return PyLong_FromLong(bytes_copied);
}
```
This code snippet references three object structures that are defined in `Python.h`:
`PyObject`, `PyArg_ParseTuple()` and `PyLong_FromLong()`
## `PyObject`
- `PyObject` is an object structure used to define object types for Python.
- All other Python object types are extensions of this type.
- Setting the return type of the function above to `PyObject` defines the common fields Python requires in order to recognize it as a valid type.
Take another look at the first lines of your C code:
```C
static PyObject *method_fputs(PyObject *self, PyObject *args) {
char *str, *filename = NULL;
int bytes_copied = -1;
...
```
On line 2, you declare the argument types you want to receive from your Python code:
- `char *str` is the string you want to write to the file stream.
- `char *filename` is the name of the file to write to.
## `PyArg_ParseTuple()`
`PyArg_ParseTuple()` converts the arguments it receives from your Python program into local variables:
```C
static PyObject *method_fputs(PyObject *self, PyObject *args) {
char *str, *filename = NULL;
int bytes_copied = -1;
if(!PyArg_ParseTuple(args, "ss", &str, &filename)) {
return NULL;
}
...
```
`PyArg_ParseTuple()` takes the following arguments:
- `args`, of type `PyObject`.
- `"ss"`, which specifies the data types of the arguments to parse.
- `&str` and `&filename`, pointers to the local variables that the parsed values will be assigned to.
`PyArg_ParseTuple()` returns `false` on error.
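As a hedged illustration from the Python side (assuming the `fputs` module built later in this section has already been compiled and imported), a type mismatch makes `PyArg_ParseTuple()` fail, the wrapper returns `NULL`, and the call raises `TypeError`:
```python
import fputs

# Matches the "ss" format string: two str arguments.
print(fputs.fputs("Real Python!", "write.txt"))  # number of bytes written

# An int where "s" is expected: PyArg_ParseTuple() fails and sets TypeError.
try:
    fputs.fputs(42, "write.txt")
except TypeError as exc:
    print("Parsing failed:", exc)
```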
## `fputs()` and `PyLong_FromLong()`
```C
static PyObject *method_fputs(PyObject *self, PyObject *args) {
char *str, *filename = NULL;
int bytes_copied = -1;
if(!PyArg_ParseTuple(args, "ss", &str, &filename)) {
return NULL;
}
FILE *fp = fopen(filename, "w");
bytes_copied = fputs(str, fp);
fclose(fp);
return PyLong_FromLong(bytes_copied);
}
```
- The calls to `fputs()` were explained earlier; the only difference is that the variables used are the ones coming from `*args` and stored locally.
- Finally, `PyLong_FromLong()` returns a `PyLongObject`, which represents an integer object in Python.
## Extension module
The code that makes up the core functionality of your Python C extension module has now been written.
- However, you still need to write the definitions of the module and the methods it contains, like this:
```C
static PyMethodDef FputsMethods[] = {
{"fputs", method_fputs, METH_VARARGS, "Python interface for fputs C library function"},
{NULL, NULL, 0, NULL}
};
static struct PyModuleDef fputsmodule = {
PyModuleDef_HEAD_INIT,
"fputs",
"Python interface for the fputs C library function",
-1,
FputsMethods
};
```
## `PyMethodDef`
- `PyMethodDef` tells the Python interpreter about the methods defined in the module.
- Ideally, there will be more than one method in the table, which is why you need to define an array of these structures:
```C
static PyMethodDef FputsMethods[] = {
{"fputs", method_fputs, METH_VARARGS, "Python interface for fputs C library function"},
{NULL, NULL, 0, NULL}
};
```
Each individual member of the structure contains the following information:
- `fputs` is the name the user would type to invoke this particular function from Python.
- `method_fputs` is the name of the C function to invoke.
- `METH_VARARGS` indicates that the function will accept two arguments of type `PyObject *`:
  - `self` is the module object.
  - `args` is a tuple containing the function's arguments (unpacked with `PyArg_ParseTuple()`).
- The final string is the value used as the docstring.
### `PyModuleDef`
Defines a Python module (the equivalent of a `.py` file) in C.
```C
static struct PyModuleDef fputsmodule = {
PyModuleDef_HEAD_INIT, "fputs",
"Interface for the fputs C function", -1, FputsMethods};```
There are 9 members in this structure in total, but the code block above initializes the following five:
- `PyModuleDef_HEAD_INIT` is the "base" of the module definition (this is normally always the same).
- `"fputs"` is the name of the module.
- The string is the module's documentation.
- `-1` is the amount of memory needed to store the program state. It matters when your module is used in multiple sub-interpreters, and it can take the following values:
  - A negative value indicates that this module does not support sub-interpreters.
  - A non-negative value enables re-initialization of the module. It also specifies the memory requirement to be allocated in each sub-interpreter session.
- `FputsMethods` is the method table.
## Initializing the module
- Now that you have defined the Python C extension and method structures, it is time to put them to use.
- When a Python program imports your module for the first time, it will call `PyInit_fputs()`:
```C
PyMODINIT_FUNC PyInit_fputs(void) {
return PyModule_Create(&fputsmodule);
}
```
`PyMODINIT_FUNC` does 3 things implicitly:
- It implicitly sets the return type of the function to `PyObject *`.
- It declares any special linkage.
- It declares the function as `extern "C"`. If you are using C++, it tells the C++ compiler not to mangle the symbol names.
`PyModule_Create()` returns a new module object of type `PyObject *`.
## Putting it all together: What happens when we import the module?

## Putting it all together: What is returned when the module is imported?

## Putting it all together: What happens when we call `fputs.fputs()`?

## Packaging with `distutils`
```python
from distutils.core import setup, Extension
def main():
setup(name="fputs",
ext_modules=[Extension("fputs", ["fputsmodule.c"])],
...)
if __name__ == "__main__":
main()
```
To install:
```bash
$ python3 setup.py install
```
To build in place:
```bash
$ python setup.py build_ext --inplace
```
If you want to specify the compiler:
```bash
$ CC=gcc python3 setup.py install
```
## Using the extension
```
import sys; sys.path.insert(0, "./c_extensions")
import fputs
fputs?
fputs.fputs?
fputs.fputs("Hola mundo!", "salida.txt")
with open("salida.txt") as fp:
print(fp.read())
```
## Raising Exceptions
- If you want to raise Python exceptions from C, you can use the Python API to do so.
- Some of the functions provided by the Python API for raising exceptions are:
  - `PyErr_SetString(PyObject *type, const char *message)`
  - `PyErr_Format(PyObject *type, const char *format)`
  - `PyErr_SetObject(PyObject *type, PyObject *value)`
All of Python's built-in exceptions are defined in the API.
## Raising Exceptions
```C
static PyObject *method_fputs(PyObject *self, PyObject *args) {
char *str, *filename = NULL;
int bytes_copied = -1;
/* Parse arguments */
if(!PyArg_ParseTuple(args, "ss", &str, &fd)) {
return NULL;
}
if (strlen(str) <= 0) {
PyErr_SetString(PyExc_ValueError, "String length must be greater than 0");
return NULL;
}
FILE *fp = fopen(filename, "w");
bytes_copied = fputs(str, fp);
fclose(fp);
return PyLong_FromLong(bytes_copied);
}
```
## Raising Custom Exceptions
To create and use a custom exception, you must add it to the module instance:
```C
static PyObject *StringTooShortError = NULL;
PyMODINIT_FUNC PyInit_fputs(void) {
/* Assign module value */
PyObject *module = PyModule_Create(&fputsmodule);
/* Initialize new exception object */
StringTooShortError = PyErr_NewException("fputs.StringTooShortError", NULL, NULL);
/* Add exception object to your module */
PyModule_AddObject(module, "StringTooShortError", StringTooShortError);
return module;
}
static PyObject *method_fputs(PyObject *self, PyObject *args) {
...
if (strlen(str) < 10) {
/* Passing custom exception */
PyErr_SetString(StringTooShortError, "String length must be greater than 10");
return NULL;}
...
}
```
## References
- https://docs.python.org/3.8/library/ctypes.html
- https://dbader.org/blog/python-ctypes-tutorial
- https://realpython.com/build-python-c-extension-module/
<a href="https://colab.research.google.com/github/vndee/pytorch-vi/blob/master/chatbot_tutorial.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
## CHATBOT
**Author**: [Matthew Inkawhich](https://github.com/MatthewInkawhich)
In this tutorial we will explore a fun application of the seq2seq model. We will train a simple chatbot using movie dialogue from the [Cornell Movie-Dialogs Corpus](https://www.cs.cornell.edu/~cristian/Cornell_Movie-Dialogs_Corpus.html).
Conversational models are a very active research area in artificial intelligence. Chatbots can be found in many everyday products, such as customer-support departments or online advisory services. These bots are usually retrieval-based models, meaning their responses are predefined for each kind of question. Teaching a machine to hold a natural conversation with a human is still a hard problem and far from solved. Recently, following the deep learning boom, powerful generative models such as Google's Neural Conversational Model have made an impressive leap forward. In this tutorial we will implement this kind of generative model with PyTorch.

```
> hello?
Bot: hello .
> where am I?
Bot: you re in a hospital .
> who are you?
Bot: i m a lawyer .
> how are you doing?
Bot: i m fine .
> are you my friend?
Bot: no .
> you're under arrest
Bot: i m trying to help you !
> i'm just kidding
Bot: i m sorry .
> where are you from?
Bot: san francisco .
> it's time for me to leave
Bot: i know .
> goodbye
Bot: goodbye .
```
### Main parts:
- Load and preprocess the [Cornell Movie-Dialogs Corpus](https://www.cs.cornell.edu/~cristian/Cornell_Movie-Dialogs_Corpus.html) dataset.
- Implement a seq2seq model with Luong's attention.
- Jointly train the encoder-decoder model with mini-batches.
- Implement greedy-search decoding.
- Interact with the trained model.
### Acknowledgements:
The code in this article borrows from the following open-source projects:
- Yuan-Kuei Wu’s pytorch-chatbot implementation: https://github.com/ywk991112/pytorch-chatbot
- Sean Robertson’s practical-pytorch seq2seq-translation example: https://github.com/spro/practical-pytorch/tree/master/seq2seq-translation
- FloydHub’s Cornell Movie Corpus preprocessing code: https://github.com/floydhub/textutil-preprocess-cornell-movie-corpus
## Setup
First we need to download the data from [here](https://www.cs.cornell.edu/~cristian/Cornell_Movie-Dialogs_Corpus.html) and unzip it.
```
!wget --header 'Host: www.cs.cornell.edu' --user-agent 'Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:66.0) Gecko/20100101 Firefox/66.0' --header 'Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8' --header 'Accept-Language: en-US,en;q=0.5' --header 'Upgrade-Insecure-Requests: 1' 'http://www.cs.cornell.edu/~cristian/data/cornell_movie_dialogs_corpus.zip' --output-document 'cornell_movie_dialogs_corpus.zip'
!unzip cornell_movie_dialogs_corpus.zip
!ls cornell\ movie-dialogs\ corpus
```
Import some supporting libraries:
```
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals
import torch
from torch.jit import script, trace
import torch.nn as nn
from torch import optim
import torch.nn.functional as F
import csv
import random
import re
import os
import unicodedata
import codecs
from io import open
import itertools
import math
USE_CUDA = torch.cuda.is_available()
device = torch.device("cuda" if USE_CUDA else "cpu")
```
## Loading and preprocessing the data
The next step is to reorganize the data. The Cornell Movie-Dialogs Corpus is a large dataset of conversations between movie characters:
- 220,579 conversational exchanges between 10,292 pairs of characters.
- 9,035 characters from 617 movies.
- 304,713 total utterances.
This dataset is large and diverse in language style, time period, location, and meaning. We hope our model will be robust enough to handle many different ways of phrasing a query.
First, let's look at a few lines of the raw data to see what we are working with.
```
corpus_name = 'cornell movie-dialogs corpus'
def printLines(file, n=10):
with open(file, 'rb') as datafile:
lines = datafile.readlines()
for line in lines[:n]:
print(line)
printLines(os.path.join(corpus_name, 'movie_lines.txt'))
```
For convenience, we will reorganize the data into a format where each line of the file contains a tab-separated query sentence and response sentence.
Below we will need a few functions to parse the raw movie_lines.txt file:
- `loadLines`: Splits each line of the file into a Python dictionary of fields (lineID, characterID, movieID, character, text).
- `loadConversations`: Groups the fields from `loadLines` into conversations based on movie_conversations.txt.
- `extractSentencePairs`: Extracts pairs of sentences from the conversations.
```
# Splits each line of the file into a dictionary of fields
def loadLines(fileName, fields):
lines = {}
with open(fileName, 'r', encoding='iso-8859-1') as f:
for line in f:
values = line.split(" +++$+++ ")
# Extract fields
lineObj = {}
for i, field in enumerate(fields):
lineObj[field] = values[i]
lines[lineObj['lineID']] = lineObj
return lines
# Groups fields of lines from `loadLines` into conversations based on *movie_conversations.txt*
def loadConversations(fileName, lines, fields):
conversations = []
with open(fileName, 'r', encoding='iso-8859-1') as f:
for line in f:
values = line.split(" +++$+++ ")
# Extract fields
convObj = {}
for i, field in enumerate(fields):
convObj[field] = values[i]
# Convert string to list (convObj["utteranceIDs"] == "['L598485', 'L598486', ...]")
lineIds = eval(convObj["utteranceIDs"])
# Reassemble lines
convObj["lines"] = []
for lineId in lineIds:
convObj["lines"].append(lines[lineId])
conversations.append(convObj)
return conversations
# Extracts pairs of sentences from conversations
def extractSentencePairs(conversations):
qa_pairs = []
for conversation in conversations:
# Iterate over all the lines of the conversation
for i in range(len(conversation["lines"]) - 1): # We ignore the last line (no answer for it)
inputLine = conversation["lines"][i]["text"].strip()
targetLine = conversation["lines"][i+1]["text"].strip()
# Filter wrong samples (if one of the lists is empty)
if inputLine and targetLine:
qa_pairs.append([inputLine, targetLine])
return qa_pairs
```
Now we will call these functions to create a new data file named formatted_movie_lines.txt.
```
# Define path to new file
datafile = os.path.join(corpus_name, 'formatted_movie_lines.txt')
delimiter = '\t'
# Unescape the delimiter
delimiter = str(codecs.decode(delimiter, 'unicode_escape'))
# Initialize lines dict, conversations list, and field ids
lines = {}
conversations = []
MOVIE_LINES_FIELDS = ["lineID", "characterID", "movieID", "character", "text"]
MOVIE_CONVERSATIONS_FIELDS = ["character1ID", "character2ID", "movieID", "utteranceIDs"]
# Load lines and process conversations
print("\nProcessing corpus...")
lines = loadLines(os.path.join(corpus_name, "movie_lines.txt"), MOVIE_LINES_FIELDS)
print("\nLoading conversations...")
conversations = loadConversations(os.path.join(corpus_name, "movie_conversations.txt"),
lines, MOVIE_CONVERSATIONS_FIELDS)
# Write new csv file
print("\nWriting newly formatted file...")
with open(datafile, 'w', encoding='utf-8') as outputfile:
writer = csv.writer(outputfile, delimiter=delimiter, lineterminator='\n')
for pair in extractSentencePairs(conversations):
writer.writerow(pair)
# Print a sample of lines
print("\nSample lines from file:")
printLines(datafile)
```
### Loading and trimming the data
After reorganizing the data, we need to build a vocabulary of the words used in the dataset and load the query/response sentence pairs into memory.
Note that we treat a sentence as a sequence of **words**, with no implicit mapping to a discrete numerical space. We therefore need to create a mapping in which each distinct word corresponds to exactly one index value, namely its position in the vocabulary.
To do this we define a `Voc` class, which keeps a dictionary mapping **words** to **indexes**, a reverse dictionary mapping **indexes** to **words**, a count for each word, and a total word count. The class also provides methods for adding a word to the vocabulary (`addWord`), adding all words in a sentence (`addSentence`), and trimming infrequently seen words. More on trimming later:
```
# Default word tokens
PAD_token = 0 # Used for padding short sentences
SOS_token = 1 # Start-of-sentence token
EOS_token = 2 # End-of-sentence token
class Voc:
def __init__(self, name):
self.name = name
self.trimmed = False
self.word2index = {}
self.word2count = {}
self.index2word = {PAD_token: "PAD", SOS_token: "SOS", EOS_token: "EOS"}
self.num_words = 3 # Count SOS, EOS, PAD
def addSentence(self, sentence):
for word in sentence.split(' '):
self.addWord(word)
def addWord(self, word):
if word not in self.word2index:
self.word2index[word] = self.num_words
self.word2count[word] = 1
self.index2word[self.num_words] = word
self.num_words += 1
else:
self.word2count[word] += 1
# Remove words below a certain count threshold
def trim(self, min_count):
if self.trimmed:
return
self.trimmed = True
keep_words = []
for k, v in self.word2count.items():
if v >= min_count:
keep_words.append(k)
print('keep_words {} / {} = {:.4f}'.format(
len(keep_words), len(self.word2index), len(keep_words) / len(self.word2index)
))
# Reinitialize dictionaries
self.word2index = {}
self.word2count = {}
self.index2word = {PAD_token: "PAD", SOS_token: "SOS", EOS_token: "EOS"}
self.num_words = 3 # Count default tokens
for word in keep_words:
self.addWord(word)
```
Before training we need some preprocessing. First, we convert Unicode strings to ASCII using `unicodeToAscii`. Next, we lowercase all characters and strip everything that is not a letter, except for a few punctuation marks (`normalizeString`). Finally, to help training converge faster, we filter out sentences longer than the `MAX_LENGTH` threshold (`filterPairs`).
```
MAX_LENGTH = 10 # Maximum sentence length to consider
# Turn a Unicode string to plain ASCII, thanks to
# https://stackoverflow.com/a/518232/2809427
def unicodeToAscii(s):
return ''.join(
c for c in unicodedata.normalize('NFD', s)
if unicodedata.category(c) != 'Mn'
)
# Lowercase, trim, and remove non-letter characters
def normalizeString(s):
s = unicodeToAscii(s.lower().strip())
s = re.sub(r"([.!?])", r" \1", s)
s = re.sub(r"[^a-zA-Z.!?]+", r" ", s)
s = re.sub(r"\s+", r" ", s).strip()
return s
# Read query/response pairs and return a voc object
def readVocs(datafile, corpus_name):
print("Reading lines...")
# Read the file and split into lines
lines = open(datafile, encoding='utf-8').\
read().strip().split('\n')
# Split every line into pairs and normalize
pairs = [[normalizeString(s) for s in l.split('\t')] for l in lines]
voc = Voc(corpus_name)
return voc, pairs
# Returns True iff both sentences in a pair 'p' are under the MAX_LENGTH threshold
def filterPair(p):
# Input sequences need to preserve the last word for EOS token
return len(p[0].split(' ')) < MAX_LENGTH and len(p[1].split(' ')) < MAX_LENGTH
# Filter pairs using filterPair condition
def filterPairs(pairs):
return [pair for pair in pairs if filterPair(pair)]
# Using the functions defined above, return a populated voc object and pairs list
def loadPrepareData(corpus_name, datafile, save_dir):
print("Start preparing training data ...")
voc, pairs = readVocs(datafile, corpus_name)
print("Read {!s} sentence pairs".format(len(pairs)))
pairs = filterPairs(pairs)
print("Trimmed to {!s} sentence pairs".format(len(pairs)))
print("Counting words...")
for pair in pairs:
voc.addSentence(pair[0])
voc.addSentence(pair[1])
print("Counted words:", voc.num_words)
return voc, pairs
# Load/Assemble voc and pairs
save_dir = os.path.join("save")
voc, pairs = loadPrepareData(corpus_name, datafile, save_dir)
# Print some pairs to validate
print("\npairs:")
for pair in pairs[:10]:
print(pair)
```
Another strategy for faster convergence is to trim rarely used words from the data. This reduces the difficulty of the problem, so the model converges sooner. We do this in 2 steps:
- Trim words whose frequency is below `MIN_COUNT` using the `voc.trim` method.
- Filter out conversation pairs that contain any of the trimmed words.
```
MIN_COUNT = 3 # Minimum word count threshold for trimming
def trimRareWords(voc, pairs, MIN_COUNT):
# Trim words used under the MIN_COUNT from the voc
voc.trim(MIN_COUNT)
# Filter out pairs with trimmed words
keep_pairs = []
for pair in pairs:
input_sentence = pair[0]
output_sentence = pair[1]
keep_input = True
keep_output = True
# Check input sentence
for word in input_sentence.split(' '):
if word not in voc.word2index:
keep_input = False
break
# Check output sentence
for word in output_sentence.split(' '):
if word not in voc.word2index:
keep_output = False
break
# Only keep pairs that do not contain trimmed word(s) in their input or output sentence
if keep_input and keep_output:
keep_pairs.append(pair)
print("Trimmed from {} pairs to {}, {:.4f} of total".format(len(pairs), len(keep_pairs), len(keep_pairs) / len(pairs)))
return keep_pairs
# Trim voc and pairs
pairs = trimRareWords(voc, pairs, MIN_COUNT)
```
## Preparing data for the model
Even though we have done a lot of work to obtain a good dataset of conversation pairs and a vocabulary, our model ultimately expects numerical torch tensors as input. One way to convert the data into tensors can be found in the [seq2seq translation tutorial](https://pytorch.org/tutorials/intermediate/seq2seq_translation_tutorial.html). In that tutorial the batch size is 1, so all we would have to do is convert the words of each sentence pair to their corresponding indexes in the vocabulary and feed them to the model.
However, if we want to speed up training and take advantage of GPU parallelism, we should train with mini-batches.
Using mini-batches means that the sentences within a batch may have different lengths. We therefore fix the shape of the batch tensors to (max_length, batch_size). Sentences shorter than max_length are zero-padded after the EOS_token (end-of-sentence token).
Another issue is that if we simply stack all the words of a batch of sentence pairs into a tensor, it will have shape (batch_size, max_length), whereas we need a tensor of shape (max_length, batch_size) so that indexing along the first dimension returns one time step across all sentences. Instead of adding an explicit transpose step, we perform this transposition inside the `zeroPadding` function.

```
def indexesFromSentence(voc, sentence):
return [voc.word2index[word] for word in sentence.split(' ')] + [EOS_token]
def zeroPadding(l, fillvalue=PAD_token):
return list(itertools.zip_longest(*l, fillvalue=fillvalue))
def binaryMatrix(l, value=PAD_token):
m = []
for i, seq in enumerate(l):
m.append([])
for token in seq:
if token == PAD_token:
m[i].append(0)
else:
m[i].append(1)
return m
# Returns padded input sequence tensor and lengths
def inputVar(l, voc):
indexes_batch = [indexesFromSentence(voc, sentence) for sentence in l]
lengths = torch.tensor([len(indexes) for indexes in indexes_batch])
padList = zeroPadding(indexes_batch)
padVar = torch.LongTensor(padList)
return padVar, lengths
# Returns padded target sequence tensor, padding mask, and max target length
def outputVar(l, voc):
indexes_batch = [indexesFromSentence(voc, sentence) for sentence in l]
max_target_len = max([len(indexes) for indexes in indexes_batch])
padList = zeroPadding(indexes_batch)
mask = binaryMatrix(padList)
mask = torch.ByteTensor(mask)
padVar = torch.LongTensor(padList)
return padVar, mask, max_target_len
def batch2TrainData(voc, pair_batch):
pair_batch.sort(key=lambda x: len(x[0].split(' ')), reverse=True)
input_batch, output_batch = [], []
for pair in pair_batch:
input_batch.append(pair[0])
output_batch.append(pair[1])
inp, lengths = inputVar(input_batch, voc)
output, mask, max_target_len = outputVar(output_batch, voc)
return inp, lengths, output, mask, max_target_len
# Example for validation
small_batch_size = 5
batches = batch2TrainData(voc, [random.choice(pairs) for _ in range(small_batch_size)])
input_variable, lengths, target_variable, mask, max_target_len = batches
print('input_variable:', input_variable)
print('lengths:', lengths)
print('target_variable:', target_variable)
print('mask:', mask)
print('max_target_len:', max_target_len)
```
## Defining the model
### The Seq2Seq model
The brain of our chatbot is a sequence-to-sequence (seq2seq) model. The goal of a seq2seq model is to take a variable-length input sequence and predict a variable-length output sequence using a fixed-size model.
[Sutskever et al.](https://arxiv.org/abs/1409.3215) proposed a method based on two recurrent neural networks (RNNs) that can solve this problem. One RNN acts as the encoder, which encodes the input sequence into a context vector. In theory, the context vector (the final layer of the RNN) contains semantic information about the input sequence. The second RNN is the decoder, which uses the encoder's context vector to predict the corresponding output sequence.

*Image source: https://jeddy92.github.io/JEddy92.github.io/ts_seq2seq_intro/*
### Encoder
The encoder RNN iterates over the input sequence one token at a time, at each step producing an "output" vector and a "hidden state" vector. The hidden state vector is then used to compute the hidden state at the next time step, following the basic idea of an RNN. The encoder tries to transform everything it sees in the input sequence, including context and meaning, into a set of points in a high-dimensional space, which the decoder then uses to generate a meaningful output sequence.
At the heart of the encoder is a multi-layered Gated Recurrent Unit, proposed by [Cho et al.](https://arxiv.org/pdf/1406.1078v3.pdf) in 2014. We will use a bidirectional variant of the GRU, which means there are essentially two independent RNNs: one reads the input sequence from left to right, the other from right to left.

*Image source: https://colah.github.io/posts/2015-09-NN-Types-FP/*
Note that the `embedding` layer is used to encode each word of the input sentence as a vector in a semantic feature space.
Finally, when feeding a padded batch of sequences into the RNN, we need to pack and later "unpack" the zero padding around each sequence.
#### Computation steps
1. Convert word indexes to embedding vectors.
2. Pack the padded batch of sequences.
3. Forward the packed batch through the GRU.
4. Unpack the padding.
5. Sum the outputs of the two GRU directions.
6. Return the output and the final hidden state.
#### Inputs:
- `input_seq`: batch of input sentences, shape (max_length, batch_size)
- `input_lengths`: list of sentence lengths corresponding to each sentence in the batch, shape (batch_size)
- `hidden`: hidden state, shape (n_layers * num_directions, batch_size, hidden_size)
#### Outputs:
- `output`: output of the last layer of the GRU (sum of both directions), shape (max_length, batch_size, hidden_size)
- `hidden`: updated hidden state from the GRU, shape (n_layers * num_directions, batch_size, hidden_size)
```
class EncoderRNN(nn.Module):
def __init__(self, hidden_size, embedding, n_layers=1, dropout=0):
super(EncoderRNN, self).__init__()
self.n_layers = n_layers
self.hidden_size = hidden_size
self.embedding = embedding
# Initialize GRU; the input_size and hidden_size params are both set to
# 'hidden_size' because our input size is a word embedding with number
# of features == hidden_size
self.gru = nn.GRU(hidden_size, hidden_size, n_layers,
dropout=(0 if n_layers == 1 else dropout), bidirectional=True)
def forward(self, input_seq, input_lengths, hidden=None):
# Convert word indexes to embedding vector
embedded = self.embedding(input_seq)
# Pack padded batch of sequences for RNN module
packed = nn.utils.rnn.pack_padded_sequence(embedded, input_lengths)
# Forward pass through GRU
outputs, hidden = self.gru(packed, hidden)
# Unpack padding
outputs, _ = nn.utils.rnn.pad_packed_sequence(outputs)
# Sum bidirectional GRU outputs
outputs = outputs[:, :, :self.hidden_size] + outputs[:, :, self.hidden_size:]
# Return output and final hidden state
return outputs, hidden
```
### Decoder
The decoder RNN generates the response sequence token by token. It uses the encoder's context vector and its own hidden state to generate the next word in the sequence until it produces an EOS_token (end-of-sentence token). A problem with the vanilla seq2seq approach is that relying only on the context vector and hidden state loses information, especially for long sentences.
To cope with this, [Bahdanau](https://arxiv.org/abs/1409.0473) proposed an attention mechanism. It lets the decoder focus on certain parts of the input sentence rather than treating every word as equally important.
Attention is computed from the decoder's current hidden state and the encoder's outputs. The attention weights have the same shape as the input sequence.

[Luong](https://arxiv.org/abs/1508.04025) attention is an improved version built on the idea of "global attention". The difference is that with global attention we consider all of the encoder's hidden states, instead of only the encoder's final hidden state as in Bahdanau's version. Another difference is that global attention is computed using only the decoder's current hidden state, whereas Bahdanau's version also requires the decoder states from previous time steps.

Here $h_{t}$ is the decoder's current hidden state and $h_{s}$ is the set of all encoder hidden states.
Overall, global attention can be summarized as in the figure below.

```
# Luong attention layer
class Attn(nn.Module):
def __init__(self, method, hidden_size):
super(Attn, self).__init__()
self.method = method
if self.method not in ['dot', 'general', 'concat']:
raise ValueError(self.method, 'is not an appropriate attention method.')
self.hidden_size = hidden_size
if self.method == 'general':
self.attn = nn.Linear(self.hidden_size, hidden_size)
elif self.method == 'concat':
self.attn = nn.Linear(self.hidden_size * 2, hidden_size)
self.v = nn.Parameter(torch.FloatTensor(hidden_size))
def dot_score(self, hidden, encoder_output):
return torch.sum(hidden * encoder_output, dim=2)
def general_score(self, hidden, encoder_output):
energy = self.attn(encoder_output)
return torch.sum(hidden * energy, dim=2)
def concat_score(self, hidden, encoder_outputs):
energy = self.attn(torch.cat((hidden.expand(encoder_outputs.size(0), -1, -1),
encoder_outputs), 2)).tanh()
return torch.sum(self.v * energy, dim=2)
def forward(self, hidden, encoder_outputs):
# Calculate the attention weights (energies) based on the given method
if self.method == 'general':
attn_energies = self.general_score(hidden, encoder_outputs)
elif self.method == 'concat':
attn_energies = self.concat_score(hidden, encoder_outputs)
elif self.method == 'dot':
attn_energies = self.dot_score(hidden, encoder_outputs)
# Transpose max_length and batch_size dimensions
attn_energies = attn_energies.t()
# Return the softmax normalized probability scores (with added dimension)
return F.softmax(attn_energies, dim=1).unsqueeze(1)
```
#### Computation steps
1. Get the embedding vector of the current input word.
2. Forward it through the unidirectional GRU.
3. Compute the attention weights from the current GRU output.
4. Multiply the attention weights by the encoder outputs to get a new "weighted sum" context vector.
5. Concatenate the context vector and the GRU output as in Luong's equation 5.
6. Predict the next word using Luong's equation 6.
7. Return the output and the final hidden state.
#### Inputs:
- `input_step`: one time step (one word) of the input sequence batch, shape (1, batch_size)
- `last_hidden`: final hidden layer of the GRU, shape (n_layers * num_directions, batch_size, hidden_size)
- `encoder_outputs`: the encoder's output, shape (max_length, batch_size, hidden_size)
#### Outputs:
- `output`: softmax-normalized tensor, shape (batch_size, voc.num_words)
- `hidden`: final hidden state of the GRU, shape (n_layers * num_directions, batch_size, hidden_size)
```
class LuongAttnDecoderRNN(nn.Module):
def __init__(self, attn_model, embedding, hidden_size, output_size, n_layers=1, dropout=0.1):
super(LuongAttnDecoderRNN, self).__init__()
# Keep for reference
self.attn_model = attn_model
self.hidden_size = hidden_size
self.output_size = output_size
self.n_layers = n_layers
self.dropout = dropout
# Define layers
self.embedding = embedding
self.embedding_dropout = nn.Dropout(dropout)
self.gru = nn.GRU(hidden_size, hidden_size, n_layers, dropout=(0 if n_layers == 1 else dropout))
self.concat = nn.Linear(hidden_size * 2, hidden_size)
self.out = nn.Linear(hidden_size, output_size)
self.attn = Attn(attn_model, hidden_size)
def forward(self, input_step, last_hidden, encoder_outputs):
# Note: we run this one step (word) at a time
# Get embedding of current input word
embedded = self.embedding(input_step)
embedded = self.embedding_dropout(embedded)
# Forward through unidirectional GRU
rnn_output, hidden = self.gru(embedded, last_hidden)
# Calculate attention weights from the current GRU output
attn_weights = self.attn(rnn_output, encoder_outputs)
# Multiply attention weights to encoder outputs to get new "weighted sum" context vector
context = attn_weights.bmm(encoder_outputs.transpose(0, 1))
# Concatenate weighted context vector and GRU output using Luong eq. 5
rnn_output = rnn_output.squeeze(0)
context = context.squeeze(1)
concat_input = torch.cat((rnn_output, context), 1)
concat_output = torch.tanh(self.concat(concat_input))
# Predict next word using Luong eq. 6
output = self.out(concat_output)
output = F.softmax(output, dim=1)
# Return output and final hidden state
return output, hidden
```
## Training
### Masked loss
Since we are working with batches of padded sentences, we cannot simply compute the loss over every element of the tensor. We define the `maskNLLLoss` function to compute the loss based on the decoder's output. The result is the average negative log likelihood over the elements of the tensor that correspond to real (non-padded) tokens in the mask.
```
def maskNLLLoss(inp, target, mask):
nTotal = mask.sum()
crossEntropy = -torch.log(torch.gather(inp, 1, target.view(-1, 1)).squeeze(1))
loss = crossEntropy.masked_select(mask).mean()
loss = loss.to(device)
return loss, nTotal.item()
```
### Training
The `train` function implements the training algorithm for a single iteration (a single batch of inputs).
We will use a couple of tricks to make training go more smoothly:
- **Teacher forcing**: With a preset probability `teacher_forcing_ratio`, the decoder uses the current target word as its next input rather than the word it just predicted.
- **Gradient clipping**: A commonly used technique for dealing with "exploding gradients". It simply caps the gradient values at an upper threshold so they never become too large.

*Image source: Goodfellow et al. Deep Learning. 2016. https://www.deeplearningbook.org/*
#### Computation steps
1. Forward the whole batch through the encoder.
2. Initialize the decoder input with SOS_token and its hidden state with the encoder's final hidden state.
3. Forward the input sequence through the decoder one time step at a time.
4. If teacher forcing: set the next decoder input to the current target word; otherwise, set it to the word the decoder just predicted.
5. Compute the loss.
6. Run backpropagation.
7. Clip the gradients.
8. Update the encoder and decoder weights.
```
def train(input_variable, lengths, target_variable, mask, max_target_len, encoder, decoder, embedding,
encoder_optimizer, decoder_optimizer, batch_size, clip, max_length=MAX_LENGTH):
# Zero gradients
encoder_optimizer.zero_grad()
decoder_optimizer.zero_grad()
# Set device options
input_variable = input_variable.to(device)
lengths = lengths.to(device)
target_variable = target_variable.to(device)
mask = mask.to(device)
# Initialize variables
loss = 0
print_losses = []
n_totals = 0
# Forward pass through encoder
encoder_outputs, encoder_hidden = encoder(input_variable, lengths)
# Create initial decoder input (start with SOS tokens for each sentence)
decoder_input = torch.LongTensor([[SOS_token for _ in range(batch_size)]])
decoder_input = decoder_input.to(device)
# Set initial decoder hidden state to the encoder's final hidden state
decoder_hidden = encoder_hidden[:decoder.n_layers]
# Determine if we are using teacher forcing this iteration
use_teacher_forcing = True if random.random() < teacher_forcing_ratio else False
# Forward batch of sequences through decoder one time step at a time
if use_teacher_forcing:
for t in range(max_target_len):
decoder_output, decoder_hidden = decoder(
decoder_input, decoder_hidden, encoder_outputs
)
# Teacher forcing: next input is current target
decoder_input = target_variable[t].view(1, -1)
# Calculate and accumulate loss
mask_loss, nTotal = maskNLLLoss(decoder_output, target_variable[t], mask[t])
loss += mask_loss
print_losses.append(mask_loss.item() * nTotal)
n_totals += nTotal
else:
for t in range(max_target_len):
decoder_output, decoder_hidden = decoder(
decoder_input, decoder_hidden, encoder_outputs
)
# No teacher forcing: next input is decoder's own current output
_, topi = decoder_output.topk(1)
decoder_input = torch.LongTensor([[topi[i][0] for i in range(batch_size)]])
decoder_input = decoder_input.to(device)
# Calculate and accumulate loss
mask_loss, nTotal = maskNLLLoss(decoder_output, target_variable[t], mask[t])
loss += mask_loss
print_losses.append(mask_loss.item() * nTotal)
n_totals += nTotal
# Perform backpropagation
loss.backward()
# Clip gradients: gradients are modified in place
_ = nn.utils.clip_grad_norm_(encoder.parameters(), clip)
_ = nn.utils.clip_grad_norm_(decoder.parameters(), clip)
# Adjust model weights
encoder_optimizer.step()
decoder_optimizer.step()
return sum(print_losses) / n_totals
```
<a href="https://colab.research.google.com/github/wesleybeckner/technology_fundamentals/blob/main/C4%20Machine%20Learning%20II/SOLUTIONS/SOLUTION_Tech_Fun_C4_S2_Computer_Vision_Part_2_(Defect_Detection_Case_Study).ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Technology Fundamentals Course 4, Session 2: Computer Vision Part 2 (Defect Detection Case Study)
**Instructor**: Wesley Beckner
**Contact**: [email protected]
**Teaching Assistants**: Varsha Bang, Harsha Vardhan
**Contact**: [email protected], [email protected]
<br>
---
<br>
In this session we will continue with our exploration of CNNs. In the previous session we discussed three flagship layers for the CNN: convolution, ReLU, and maximum pooling. Here we'll discuss the sliding window, how to build your custom CNN, and data augmentation for images.
<br>
_images in this notebook borrowed from [Ryan Holbrook](https://mathformachines.com/)_
---
<br>
<a name='top'></a>
# Contents
* 4.0 [Preparing Environment and Importing Data](#x.0)
* 4.0.1 [Enabling and Testing the GPU](#x.0.1)
* 4.0.2 [Observe TensorFlow on GPU vs CPU](#x.0.2)
* 4.0.3 [Import Packages](#x.0.3)
* 4.0.4 [Load Dataset](#x.0.4)
* 4.0.4.1 [Loading Data with ImageDataGenerator](#x.0.4.1)
* 4.0.4.2 [Loading Data with image_dataset_from_directory](#x.0.4.2)
* 4.1 [Sliding Window](#x.1)
* 4.1.1 [Stride](#x.1.1)
* 4.1.2 [Padding](#x.1.2)
* 4.1.3 [Exercise: Exploring Sliding Windows](#x.1.3)
* 4.2 [Custom CNN](#x.2)
* 4.2.1 [Evaluate Model](#x.2.1)
* 4.3 [Data Augmentation](#x.3)
* 4.3.1 [Evaluate Model](#x.3.1)
* 4.3.2 [Exercise: Image Preprocessing Layers](#x.3.2)
* 4.4 [Transfer Learning](#x.4)
<br>
---
<a name='x.0'></a>
## 4.0 Preparing Environment and Importing Data
[back to top](#top)
<a name='x.0.1'></a>
### 4.0.1 Enabling and testing the GPU
[back to top](#top)
First, you'll need to enable GPUs for the notebook:
- Navigate to Edit→Notebook Settings
- select GPU from the Hardware Accelerator drop-down
Next, we'll confirm that we can connect to the GPU with tensorflow:
```
%tensorflow_version 2.x
import tensorflow as tf
device_name = tf.test.gpu_device_name()
if device_name != '/device:GPU:0':
raise SystemError('GPU device not found')
print('Found GPU at: {}'.format(device_name))
```
<a name='x.0.2'></a>
### 4.0.2 Observe TensorFlow speedup on GPU relative to CPU
[back to top](#top)
This example constructs a typical convolutional neural network layer over a
random image and manually places the resulting ops on either the CPU or the GPU
to compare execution speed.
```
%tensorflow_version 2.x
import tensorflow as tf
import timeit
device_name = tf.test.gpu_device_name()
if device_name != '/device:GPU:0':
print(
'\n\nThis error most likely means that this notebook is not '
'configured to use a GPU. Change this in Notebook Settings via the '
'command palette (cmd/ctrl-shift-P) or the Edit menu.\n\n')
raise SystemError('GPU device not found')
def cpu():
with tf.device('/cpu:0'):
random_image_cpu = tf.random.normal((100, 100, 100, 3))
net_cpu = tf.keras.layers.Conv2D(32, 7)(random_image_cpu)
return tf.math.reduce_sum(net_cpu)
def gpu():
with tf.device('/device:GPU:0'):
random_image_gpu = tf.random.normal((100, 100, 100, 3))
net_gpu = tf.keras.layers.Conv2D(32, 7)(random_image_gpu)
return tf.math.reduce_sum(net_gpu)
# We run each op once to warm up; see: https://stackoverflow.com/a/45067900
cpu()
gpu()
# Run the op several times.
print('Time (s) to convolve 32x7x7x3 filter over random 100x100x100x3 images '
'(batch x height x width x channel). Sum of ten runs.')
print('CPU (s):')
cpu_time = timeit.timeit('cpu()', number=10, setup="from __main__ import cpu")
print(cpu_time)
print('GPU (s):')
gpu_time = timeit.timeit('gpu()', number=10, setup="from __main__ import gpu")
print(gpu_time)
print('GPU speedup over CPU: {}x'.format(int(cpu_time/gpu_time)))
```
<a name='x.0.3'></a>
### 4.0.3 Import Packages
[back to top](#top)
```
import os
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow.keras.preprocessing import image_dataset_from_directory
#importing required libraries
from tensorflow.keras.preprocessing.image import ImageDataGenerator, load_img, img_to_array
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Activation, Dropout, Flatten, Dense, Conv2D, MaxPooling2D, InputLayer
from tensorflow.keras.callbacks import EarlyStopping
from sklearn.metrics import classification_report,confusion_matrix
```
<a name='x.0.4'></a>
### 4.0.4 Load Dataset
[back to top](#top)
We will actually take a beat here today. When we started building our ML frameworks, we simply wanted our data in a numpy array to feed it into our pipeline. At some point, especially when working with images, the data becomes too large to fit into memory. For this reason we need an alternative way to import our data. With the merger of keras/tf two popular frameworks became available, `ImageDataGenerator` and `image_dataset_from_directory` both under `tf.keras.preprocessing.image`. `image_dataset_from_directory` can sometimes be faster (tf origin) but `ImageDataGenerator` is a lot simpler to use and has on-the-fly data augmentation capability (keras).
For a full comparison of methods visit [this link](https://towardsdatascience.com/what-is-the-best-input-pipeline-to-train-image-classification-models-with-tf-keras-eb3fe26d3cc5)
```
# Sync your google drive folder
from google.colab import drive
drive.mount("/content/drive")
```
<a name='x.0.4.1'></a>
#### 4.0.4.1 Loading Data with `ImageDataGenerator`
[back to top](#top)
```
# full dataset can be attained from kaggle if you are interested
# https://www.kaggle.com/ravirajsinh45/real-life-industrial-dataset-of-casting-product?select=casting_data
path_to_casting_data = '/content/drive/MyDrive/courses/tech_fundamentals/TECH_FUNDAMENTALS/data/casting_data_class_practice'
image_shape = (300,300,1)
batch_size = 32
technocast_train_path = path_to_casting_data + '/train/'
technocast_test_path = path_to_casting_data + '/test/'
image_gen = ImageDataGenerator(rescale=1/255) # normalize pixels to 0-1
#we're using keras inbuilt function to ImageDataGenerator so we
# dont need to label all images into 0 and 1
print("loading training set...")
train_set = image_gen.flow_from_directory(technocast_train_path,
target_size=image_shape[:2],
color_mode="grayscale",
batch_size=batch_size,
class_mode='binary',
shuffle=True)
print("loading testing set...")
test_set = image_gen.flow_from_directory(technocast_test_path,
target_size=image_shape[:2],
color_mode="grayscale",
batch_size=batch_size,
class_mode='binary',
shuffle=False)
```
<a name='x.0.4.2'></a>
#### 4.0.4.2 Loading Data with `image_dataset_from_directory`
[back to top](#top)
This method should be approx 2x faster than `ImageDataGenerator`
```
from tensorflow.keras.preprocessing import image_dataset_from_directory
from tensorflow.data.experimental import AUTOTUNE
path_to_casting_data = '/content/drive/MyDrive/courses/tech_fundamentals/TECH_FUNDAMENTALS/data/casting_data_class_practice'
technocast_train_path = path_to_casting_data + '/train/'
technocast_test_path = path_to_casting_data + '/test/'
# Load training and validation sets
image_shape = (300,300,1)
batch_size = 32
ds_train_ = image_dataset_from_directory(
technocast_train_path,
labels='inferred',
label_mode='binary',
color_mode="grayscale",
image_size=image_shape[:2],
batch_size=batch_size,
shuffle=True,
)
ds_valid_ = image_dataset_from_directory(
technocast_test_path,
labels='inferred',
label_mode='binary',
color_mode="grayscale",
image_size=image_shape[:2],
batch_size=batch_size,
shuffle=False,
)
train_set = ds_train_.prefetch(buffer_size=AUTOTUNE)
test_set = ds_valid_.prefetch(buffer_size=AUTOTUNE)
# view some images
def_path = '/def_front/cast_def_0_1001.jpeg'
ok_path = '/ok_front/cast_ok_0_1.jpeg'
image_path = technocast_train_path + ok_path
image = tf.io.read_file(image_path)
image = tf.io.decode_jpeg(image)
plt.figure(figsize=(6, 6))
plt.imshow(tf.squeeze(image), cmap='gray')
plt.axis('off')
plt.show();
```
<a name='x.1'></a>
## 4.1 Sliding Window
[back to top](#top)
The kernels we just reviewed need to be swept or _slid_ along the preceding layer. We call this a **_sliding window_**, the window being the kernel.
<p align=center>
<img src="https://i.imgur.com/LueNK6b.gif" width=400></img>
What do you notice about the gif? One perhaps obvious observation is that you can't scoot all the way up to the border of the input layer; this is because the kernel defines operations _around_ the centered pixel, so you bang up against the margin of the input array. We can change the behavior at the boundary with a **_padding_** hyperparameter. A second observation is that the distance we move the kernel along in each step could be variable; we call this the **_stride_**. We will explore the effects of each of these.
```
from tensorflow import keras
from tensorflow.keras import layers
model = keras.Sequential([
layers.Conv2D(filters=64,
kernel_size=3,
strides=1,
padding='same',
activation='relu'),
layers.MaxPool2D(pool_size=2,
strides=1,
padding='same')
# More layers follow
])
```
<a name='x.1.1'></a>
### 4.1.1 Stride
[back to top](#top)
Stride defines the step size we take with each kernel as it passes along the input array. The stride needs to be defined in both the horizontal and vertical dimensions. This animation shows a 2x2 stride
<p align=center>
<img src="https://i.imgur.com/Tlptsvt.gif" width=400></img>
The stride will often be 1 for CNNs, where we don't want to lose any important information. Maximum pooling layers will often have strides greater than 1, to better summarize/accentuate the relevant features/activations.
If the stride is the same in both the horizontal and vertical directions, it can be set with a single number like `strides=2` within keras.
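As a quick sketch (the input shape here is arbitrary and purely illustrative), you can compare how the stride changes the spatial size of the output:
```
# A quick sketch: compare output shapes for stride 1 vs stride 2.
import tensorflow as tf
from tensorflow.keras import layers

x = tf.random.normal((1, 64, 64, 1))  # (batch, height, width, channels)

conv_s1 = layers.Conv2D(filters=8, kernel_size=3, strides=1, padding='same')
conv_s2 = layers.Conv2D(filters=8, kernel_size=3, strides=2, padding='same')

print(conv_s1(x).shape)  # (1, 64, 64, 8) -- stride 1 keeps the spatial size
print(conv_s2(x).shape)  # (1, 32, 32, 8) -- stride 2 halves height and width
```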
### 4.1.2 Padding
[back to top](#top)
Padding attempts to resolve our issue at the border: our kernel requires information surrounding the centered pixel, and at the border of the input array we don't have that information. What to do?
We have a couple of popular options within the keras framework. We can set `padding='valid'` and only slide the kernel to the edge of the input array. This has the drawback of feature maps shrinking in size as we pass through the NN. Another option is to set `padding='same'`, which pads the input array with 0's, just enough of them to allow the feature map to be the same size as the input array. This is shown in the gif below:
<p align=center>
<img src="https://i.imgur.com/RvGM2xb.gif" width=400></img>
The downside of setting the padding to same will be that features at the edges of the image will be diluted.
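A small sketch of the size difference between the two settings (shapes are illustrative):
```
# A small sketch: 'valid' shrinks the feature map, 'same' preserves its size.
import tensorflow as tf
from tensorflow.keras import layers

x = tf.random.normal((1, 64, 64, 1))

conv_valid = layers.Conv2D(filters=8, kernel_size=3, padding='valid')
conv_same = layers.Conv2D(filters=8, kernel_size=3, padding='same')

print(conv_valid(x).shape)  # (1, 62, 62, 8) -- shrinks by kernel_size - 1
print(conv_same(x).shape)   # (1, 64, 64, 8) -- zero padding preserves size
```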
<a name='x.1.3'></a>
### 4.1.3 Exercise: Exploring Sliding Windows
[back to top](#top)
```
from skimage import draw, transform
from itertools import product
# helper functions borrowed from Ryan Holbrook
# https://mathformachines.com/
def circle(size, val=None, r_shrink=0):
circle = np.zeros([size[0]+1, size[1]+1])
rr, cc = draw.circle_perimeter(
size[0]//2, size[1]//2,
radius=size[0]//2 - r_shrink,
shape=[size[0]+1, size[1]+1],
)
if val is None:
circle[rr, cc] = np.random.uniform(size=circle.shape)[rr, cc]
else:
circle[rr, cc] = val
circle = transform.resize(circle, size, order=0)
return circle
def show_kernel(kernel, label=True, digits=None, text_size=28):
# Format kernel
kernel = np.array(kernel)
if digits is not None:
kernel = kernel.round(digits)
# Plot kernel
cmap = plt.get_cmap('Blues_r')
plt.imshow(kernel, cmap=cmap)
rows, cols = kernel.shape
thresh = (kernel.max()+kernel.min())/2
# Optionally, add value labels
if label:
for i, j in product(range(rows), range(cols)):
val = kernel[i, j]
color = cmap(0) if val > thresh else cmap(255)
plt.text(j, i, val,
color=color, size=text_size,
horizontalalignment='center', verticalalignment='center')
plt.xticks([])
plt.yticks([])
def show_extraction(image,
kernel,
conv_stride=1,
conv_padding='valid',
activation='relu',
pool_size=2,
pool_stride=2,
pool_padding='same',
figsize=(10, 10),
subplot_shape=(2, 2),
ops=['Input', 'Filter', 'Detect', 'Condense'],
gamma=1.0):
# Create Layers
model = tf.keras.Sequential([
tf.keras.layers.Conv2D(
filters=1,
kernel_size=kernel.shape,
strides=conv_stride,
padding=conv_padding,
use_bias=False,
input_shape=image.shape,
),
tf.keras.layers.Activation(activation),
tf.keras.layers.MaxPool2D(
pool_size=pool_size,
strides=pool_stride,
padding=pool_padding,
),
])
layer_filter, layer_detect, layer_condense = model.layers
kernel = tf.reshape(kernel, [*kernel.shape, 1, 1])
layer_filter.set_weights([kernel])
# Format for TF
image = tf.expand_dims(image, axis=0)
image = tf.image.convert_image_dtype(image, dtype=tf.float32)
# Extract Feature
image_filter = layer_filter(image)
image_detect = layer_detect(image_filter)
image_condense = layer_condense(image_detect)
images = {}
if 'Input' in ops:
images.update({'Input': (image, 1.0)})
if 'Filter' in ops:
images.update({'Filter': (image_filter, 1.0)})
if 'Detect' in ops:
images.update({'Detect': (image_detect, gamma)})
if 'Condense' in ops:
images.update({'Condense': (image_condense, gamma)})
# Plot
plt.figure(figsize=figsize)
for i, title in enumerate(ops):
image, gamma = images[title]
plt.subplot(*subplot_shape, i+1)
plt.imshow(tf.image.adjust_gamma(tf.squeeze(image), gamma))
plt.axis('off')
plt.title(title)
```
Create an image and kernel:
```
import tensorflow as tf
import matplotlib.pyplot as plt
plt.rc('figure', autolayout=True)
plt.rc('axes', labelweight='bold', labelsize='large',
titleweight='bold', titlesize=18, titlepad=10)
plt.rc('image', cmap='magma')
image = circle([64, 64], val=1.0, r_shrink=3)
image = tf.reshape(image, [*image.shape, 1])
# Bottom sobel
kernel = tf.constant(
[[-1, -2, -1],
[0, 0, 0],
[1, 2, 1]],
)
show_kernel(kernel)
```
What do we think this kernel is meant to detect?
We will apply our kernel with a 1x1 stride and our max pooling with a 2x2 stride and pool size of 2.
```
show_extraction(
image, kernel,
# Window parameters
conv_stride=1,
pool_size=2,
pool_stride=2,
subplot_shape=(1, 4),
figsize=(14, 6),
)
```
Works OK! What about a higher conv stride?
```
show_extraction(
image, kernel,
# Window parameters
conv_stride=2,
pool_size=3,
pool_stride=4,
subplot_shape=(1, 4),
figsize=(14, 6),
)
```
Looks like we lost a bit of information!
Sometimes published models will use a larger kernel and stride in the initial layer to produce large-scale features early on in the network without losing too much information (ResNet50 uses 7x7 kernels with a stride of 2). For now, without having much experience it's safe to set conv strides to 1.
Take a moment here with the given kernel and explore different settings for applying both the kernel and the max_pool
```
conv_stride=YOUR_VALUE, # condenses pixels
pool_size=YOUR_VALUE,
pool_stride=YOUR_VALUE, # condenses pixels
```
Given a total condensation of 8 (I'm taking condensation to mean `conv_stride` x `pool_stride`), what do you think is the best combination of values for `conv_stride`, `pool_size`, and `pool_stride`?
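One possible exploration (a sketch, not the unique answer): keep `conv_stride=1` so the convolution loses nothing, and let pooling do all of the condensing:
```
# One possible combination among several: 1 (conv_stride) x 8 (pool_stride) = 8.
show_extraction(
    image, kernel,
    conv_stride=1,   # condenses pixels
    pool_size=8,
    pool_stride=8,   # condenses pixels
    subplot_shape=(1, 4),
    figsize=(14, 6),
)
```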
<a name='x.2'></a>
## 4.2 Custom CNN
[back to top](#top)
As we move through the network, small-scale features (lines, edges, etc.) turn into large-scale features (shapes, eyes, ears, etc.). We call these blocks of convolution, ReLU, and max pool **_convolutional blocks_**, and they are the low-level modular framework we work with. By this means, the CNN is able to design its own features, ones suited for the classification or regression task at hand.
We will design a custom CNN for the Casting Defect Detection Dataset.
In the following I'm going to double the filter size after the first block. This is a common pattern, as the max pooling layers force us in the opposite direction.
```
#Creating model
model = Sequential()
model.add(InputLayer(input_shape=(image_shape)))
model.add(Conv2D(filters=8, kernel_size=(3,3), activation='relu',))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(filters=16, kernel_size=(3,3), activation='relu',))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(filters=16, kernel_size=(3,3), activation='relu',))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Flatten())
model.add(Dense(224))
model.add(Activation('relu'))
# Last layer
model.add(Dense(1))
model.add(Activation('sigmoid'))
model.compile(loss='binary_crossentropy',
optimizer='adam',
metrics=['binary_accuracy'])
early_stop = EarlyStopping(monitor='val_loss',
patience=5,
restore_best_weights=True,)
# with CPU + ImageDataGenerator runs for about 40 minutes (5 epochs)
# with GPU + image_dataset_from_directory runs for about 4 minutes (16 epochs)
with tf.device('/device:GPU:0'):
results = model.fit(train_set,
epochs=20,
validation_data=test_set,
callbacks=[early_stop])
# model.save('inspection_of_casting_products.h5')
```
<a name='x.2.1'></a>
### 4.2.1 Evaluate Model
[back to top](#top)
```
# model.load_weights('inspection_of_casting_products.h5')
losses = pd.DataFrame(results.history)
# losses.to_csv('history_simple_model.csv', index=False)
fig, ax = plt.subplots(1, 2, figsize=(10,5))
losses[['loss','val_loss']].plot(ax=ax[0])
losses[['binary_accuracy','val_binary_accuracy']].plot(ax=ax[1])
# predict test set
pred_probability = model.predict(test_set)
# convert to bool
predictions = pred_probability > 0.5
# precision / recall / f1-score
# test_set.classes to get images from ImageDataGenerator
# for image_dataset_from_directory we have to do a little gymnastics
# to get the labels
labels = np.array([])
for x, y in ds_valid_:
labels = np.concatenate([labels, tf.squeeze(y.numpy()).numpy()])
print(classification_report(labels,predictions))
plt.figure(figsize=(10,6))
sns.heatmap(confusion_matrix(labels,predictions),annot=True)
```
<a name='x.3'></a>
## 4.3 Data Augmentation
[back to top](#top)
Alright, alright, alright. We've done pretty good making our CNN model. But let's see if we can make it even better. There's a last trick we'll cover here in regard to image classifiers. We're going to perturb the input images in such a way as to create a pseudo-larger dataset.
With any machine learning model, the more relevant training data we give the model, the better. The key here is _relevant_ training data. We can easily do this with images so long as we do not change the class of the image. For example, in the small plot below, we are changing contrast, hue, rotation, and doing other things to the image of a car; and this is okay because it does not change the classification from a car to, say, a truck.
<p align=center>
<img src="https://i.imgur.com/UaOm0ms.png" width=400></img>
Typically when we do data augmentation for images, we do it _online_, i.e. during training. Recall that we train in batches (or minibatches) with CNNs. An example of a minibatch, then, might be the small multiples plot below.
<p align=center>
<img src="https://i.imgur.com/MFviYoE.png" width=400></img>
By varying the images in this way, the model always sees slightly new data and becomes more robust. Remember the caveat: we can't muddle the relevant classification of the image. Sometimes the best way to see if data augmentation will be helpful is to just try it and see!
```
from tensorflow.keras.layers.experimental import preprocessing
#Creating model
model = Sequential()
model.add(preprocessing.RandomFlip('horizontal')), # flip left-to-right
model.add(preprocessing.RandomFlip('vertical')), # flip upside-down
model.add(preprocessing.RandomContrast(0.5)), # contrast change by up to 50%
model.add(Conv2D(filters=8, kernel_size=(3,3),input_shape=image_shape, activation='relu',))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(filters=16, kernel_size=(3,3), activation='relu',))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(filters=16, kernel_size=(3,3), activation='relu',))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Flatten())
model.add(Dense(224))
model.add(Activation('relu'))
# Last layer
model.add(Dense(1))
model.add(Activation('sigmoid'))
model.compile(loss='binary_crossentropy',
optimizer='adam',
metrics=['binary_accuracy'])
early_stop = EarlyStopping(monitor='val_loss',
patience=5,
restore_best_weights=True,)
results = model.fit(train_set,
epochs=30,
validation_data=test_set,
callbacks=[early_stop])
```
<a name='x.3.1'></a>
### 4.3.1 Evaluate Model
[back to top](#top)
```
losses = pd.DataFrame(results.history)
# losses.to_csv('history_augment_model.csv', index=False)
fig, ax = plt.subplots(1, 2, figsize=(10,5))
losses[['loss','val_loss']].plot(ax=ax[0])
losses[['binary_accuracy','val_binary_accuracy']].plot(ax=ax[1])
# predict test set
pred_probability = model.predict(test_set)
# convert to bool
predictions = pred_probability > 0.5
# precision / recall / f1-score
# test_set.classes to get labels from ImageDataGenerator
# for image_dataset_from_directory we have to do a little gymnastics
# to get the labels
labels = np.array([])
for x, y in ds_valid_:
labels = np.concatenate([labels, tf.squeeze(y.numpy()).numpy()])
print(classification_report(labels,predictions))
plt.figure(figsize=(10,6))
sns.heatmap(confusion_matrix(labels,predictions),annot=True)
```
<a name='x.3.2'></a>
### 4.3.2 Exercise: Image Preprocessing Layers
[back to top](#top)
These layers apply random augmentation transforms to a batch of images. They are only active during training. You can visit the documentation [here](https://keras.io/api/layers/preprocessing_layers/image_preprocessing/)
* `RandomCrop` layer
* `RandomFlip` layer
* `RandomTranslation` layer
* `RandomRotation` layer
* `RandomZoom` layer
* `RandomHeight` layer
* `RandomWidth` layer
Use any combination of random augmentation transforms and retrain your model. Can you get higher validation performance? You may need to increase the number of epochs.
```
# code cell for exercise 4.3.2
from tensorflow.keras.layers.experimental import preprocessing
#Creating model
model = Sequential()
model.add(preprocessing.RandomFlip('horizontal')), # flip left-to-right
model.add(preprocessing.RandomFlip('vertical')), # flip upside-down
model.add(preprocessing.RandomContrast(0.5)), # contrast change by up to 50%
model.add(preprocessing.RandomRotation((-1,1))), # random rotation by up to a full turn in either direction
model.add(Conv2D(filters=8, kernel_size=(3,3),input_shape=image_shape, activation='relu',))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(filters=16, kernel_size=(3,3), activation='relu',))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(filters=16, kernel_size=(3,3), activation='relu',))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Flatten())
model.add(Dense(224))
model.add(Activation('relu'))
# Last layer
model.add(Dense(1))
model.add(Activation('sigmoid'))
model.compile(loss='binary_crossentropy',
optimizer='adam',
metrics=['binary_accuracy'])
early_stop = EarlyStopping(monitor='val_loss',
patience=5,
restore_best_weights=True,)
results = model.fit(train_set,
epochs=200,
validation_data=test_set,
callbacks=[early_stop])
# predict test set
pred_probability = model.predict(test_set)
# convert to bool
predictions = pred_probability > 0.5
# precision / recall / f1-score
# test_set.classes to get labels from ImageDataGenerator
# for image_dataset_from_directory we have to do a little gymnastics
# to get the labels
labels = np.array([])
for x, y in ds_valid_:
labels = np.concatenate([labels, tf.squeeze(y.numpy()).numpy()])
print(classification_report(labels,predictions))
plt.figure(figsize=(10,6))
sns.heatmap(confusion_matrix(labels,predictions),annot=True)
```
<a name='x.4'></a>
## 4.4 Transfer Learning
[back to top](#top)
Transfer learning with [EfficientNet](https://keras.io/examples/vision/image_classification_efficientnet_fine_tuning/)
```
from tensorflow.keras.preprocessing import image_dataset_from_directory
from tensorflow.data.experimental import AUTOTUNE
path_to_casting_data = '/content/drive/MyDrive/courses/TECH_FUNDAMENTALS/data/casting_data_class_practice'
technocast_train_path = path_to_casting_data + '/train/'
technocast_test_path = path_to_casting_data + '/test/'
# Load training and validation sets
image_shape = (300,300,3)
batch_size = 32
ds_train_ = image_dataset_from_directory(
technocast_train_path,
labels='inferred',
label_mode='binary',
color_mode="rgb", # EfficientNet's ImageNet weights expect 3-channel inputs
image_size=image_shape[:2],
batch_size=batch_size,
shuffle=True,
)
ds_valid_ = image_dataset_from_directory(
technocast_test_path,
labels='inferred',
label_mode='binary',
color_mode="rgb", # EfficientNet's ImageNet weights expect 3-channel inputs
image_size=image_shape[:2],
batch_size=batch_size,
shuffle=False,
)
train_set = ds_train_.prefetch(buffer_size=AUTOTUNE)
test_set = ds_valid_.prefetch(buffer_size=AUTOTUNE)
def build_model(image_shape):
input = tf.keras.layers.Input(shape=(image_shape))
# include_top=False drops the final dense layer used for classification
model = tf.keras.applications.EfficientNetB3(include_top=False,
input_tensor=input,
weights="imagenet")
# Freeze the pretrained weights
model.trainable = False
# now we have to rebuild the top
x = tf.keras.layers.GlobalAveragePooling2D(name="avg_pool")(model.output)
x = tf.keras.layers.BatchNormalization()(x)
top_dropout_rate = 0.2
x = tf.keras.layers.Dropout(top_dropout_rate, name="top_dropout")(x)
# use 1 output node with a sigmoid for binary classification (or the class count with a softmax for multiclass)
output = tf.keras.layers.Dense(1, activation="sigmoid", name="pred")(x)
# Compile
model = tf.keras.Model(input, output, name="EfficientNet")
model.compile(optimizer='adam',
loss="binary_crossentropy",
metrics=["binary_accuracy"])
return model
model = build_model(image_shape)
with tf.device('/device:GPU:0'):
results = model.fit(train_set,
epochs=20,
validation_data=test_set,
callbacks=[early_stop])
```
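The linked Keras guide goes one step further and fine-tunes the backbone after this initial training. As a rough, hedged sketch of that idea (not part of the original notebook; the layer count and learning rate are illustrative assumptions), you could unfreeze the top of the pretrained network, keep the BatchNorm layers frozen, and retrain with a small learning rate:
```
# Sketch only: fine-tune the top of the pretrained backbone after the initial fit.
def unfreeze_top_layers(model, n_layers=20):
    # unfreeze the last n_layers, but keep BatchNormalization layers frozen,
    # since retraining their statistics tends to hurt transfer performance
    for layer in model.layers[-n_layers:]:
        if not isinstance(layer, tf.keras.layers.BatchNormalization):
            layer.trainable = True
    # recompile with a small learning rate so the pretrained weights move slowly
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
                  loss='binary_crossentropy',
                  metrics=['binary_accuracy'])

unfreeze_top_layers(model)
results_finetune = model.fit(train_set,
                             epochs=5,
                             validation_data=test_set,
                             callbacks=[early_stop])
```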
# Predicting Remaining Useful Life (advanced)
<p style="margin:30px">
<img style="display:inline; margin-right:50px" width=50% src="https://www.featuretools.com/wp-content/uploads/2017/12/FeatureLabs-Logo-Tangerine-800.png" alt="Featuretools" />
<img style="display:inline" width=15% src="https://upload.wikimedia.org/wikipedia/commons/e/e5/NASA_logo.svg" alt="NASA" />
</p>
This notebook has a more advanced workflow than [the other notebook](Simple%20Featuretools%20RUL%20Demo.ipynb) for predicting Remaining Useful Life (RUL). If you are new to either this dataset or Featuretools, I would recommend reading the other notebook first.
## Highlights
* Demonstrate how novel entityset structures improve predictive accuracy
* Use TSFresh Primitives from a featuretools [addon](https://docs.featuretools.com/getting_started/install.html#add-ons)
* Improve Mean Absolute Error by tuning hyperparameters with [BTB](https://github.com/HDI-Project/BTB)
Here is a collection of mean absolute errors from both notebooks. Though we've used averages where possible (denoted by \*), the randomness in the Random Forest Regressor and how we choose labels from the train data changes the score.
| | Train/Validation MAE| Test MAE|
|---------------------------------|---------------------|----------|
| Median Baseline | 72.06* | 50.66* |
| Simple Featuretools | 40.92* | 39.56 |
| Advanced: Custom Primitives | 35.90* | 28.84 |
| Advanced: Hyperparameter Tuning | 34.80* | 27.85 |
# Step 1: Load Data
We load in the train data using the same function we used in the previous notebook:
```
import composeml as cp
import numpy as np
import pandas as pd
import featuretools as ft
import utils
import os
from tqdm import tqdm
from sklearn.cluster import KMeans
data_path = 'data/train_FD004.txt'
data = utils.load_data(data_path)
data.head()
```
We also make cutoff times by using [Compose](https://compose.featurelabs.com) for generating labels on engines that reach at least 100 cycles. For each engine, we generate 10 labels that are spaced 10 cycles apart.
```
def remaining_useful_life(df):
return len(df) - 1
lm = cp.LabelMaker(
target_entity='engine_no',
time_index='time',
labeling_function=remaining_useful_life,
)
label_times = lm.search(
data.sort_values('time'),
num_examples_per_instance=10,
minimum_data=100,
gap=10,
verbose=True,
)
label_times.head()
```
We're going to make 5 sets of cutoff times to use for cross validation by randomly sampling the label times we created previously.
```
splits = 5
cutoff_time_list = []
for i in range(splits):
sample = label_times.sample(n=249, random_state=i)
sample.sort_index(inplace=True)
cutoff_time_list.append(sample)
cutoff_time_list[0].head()
```
We're going to do something fancy for our entityset. The values for `operational_setting` 1-3 are continuous but create an implicit relation between different engines. If two engines have a similar `operational_setting`, it could indicate that we should expect the sensor measurements to mean similar things. We make clusters of those settings using [KMeans](http://scikit-learn.org/stable/modules/generated/sklearn.cluster.KMeans.html) from scikit-learn and make a new entity from the clusters.
```
nclusters = 50
def make_entityset(data, nclusters, kmeans=None):
X = data[[
'operational_setting_1',
'operational_setting_2',
'operational_setting_3',
]]
if kmeans is None:
kmeans = KMeans(n_clusters=nclusters).fit(X)
data['settings_clusters'] = kmeans.predict(X)
es = ft.EntitySet('Dataset')
es.entity_from_dataframe(
dataframe=data,
entity_id='recordings',
index='index',
time_index='time',
)
es.normalize_entity(
base_entity_id='recordings',
new_entity_id='engines',
index='engine_no',
)
es.normalize_entity(
base_entity_id='recordings',
new_entity_id='settings_clusters',
index='settings_clusters',
)
return es, kmeans
es, kmeans = make_entityset(data, nclusters)
es
```
## Visualize EntitySet
```
es.plot()
```
# Step 2: DFS and Creating a Model
In addition to changing our `EntitySet` structure, we're also going to use the [Complexity](http://tsfresh.readthedocs.io/en/latest/api/tsfresh.feature_extraction.html#tsfresh.feature_extraction.feature_calculators.cid_ce) time series primitive from the featuretools [addon](https://docs.featuretools.com/getting_started/install.html#add-ons) of ready-to-use TSFresh Primitives.
```
from featuretools.tsfresh import CidCe
fm, features = ft.dfs(
entityset=es,
target_entity='engines',
agg_primitives=['last', 'max', CidCe(normalize=False)],
trans_primitives=[],
chunk_size=.26,
cutoff_time=cutoff_time_list[0],
max_depth=3,
verbose=True,
)
fm.to_csv('advanced_fm.csv')
fm.head()
```
We build 4 more feature matrices with the same feature set but different cutoff times. That lets us test the pipeline multiple times before using it on test data.
```
fm_list = [fm]
for i in tqdm(range(1, splits)):
es = make_entityset(data, nclusters, kmeans=kmeans)[0]
fm = ft.calculate_feature_matrix(
entityset=es,
features=features,
chunk_size=.26,
cutoff_time=cutoff_time_list[i],
)
fm_list.append(fm)
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error
from sklearn.feature_selection import RFE
def pipeline_for_test(fm_list, hyperparams=None, do_selection=False):
scores = []
regs = []
selectors = []
hyperparams = hyperparams or {
'n_estimators': 100,
'max_feats': 50,
'nfeats': 50,
}
for fm in fm_list:
X = fm.copy().fillna(0)
y = X.pop('remaining_useful_life')
n_estimators = int(hyperparams['n_estimators'])
max_features = int(hyperparams['max_feats'])
max_features = min(max_features, int(hyperparams['nfeats']))
reg = RandomForestRegressor(n_estimators=n_estimators, max_features=max_features)
X_train, X_test, y_train, y_test = train_test_split(X, y)
if do_selection:
reg2 = RandomForestRegressor(n_estimators=10, n_jobs=3)
selector = RFE(reg2, int(hyperparams['nfeats']), step=25)
selector.fit(X_train, y_train)
X_train = selector.transform(X_train)
X_test = selector.transform(X_test)
selectors.append(selector)
reg.fit(X_train, y_train)
regs.append(reg)
preds = reg.predict(X_test)
mae = mean_absolute_error(preds, y_test)
scores.append(mae)
return scores, regs, selectors
scores, regs, selectors = pipeline_for_test(fm_list)
print([float('{:.1f}'.format(score)) for score in scores])
mean, std = np.mean(scores), np.std(scores)
info = 'Average MAE: {:.1f}, Std: {:.2f}\n'
print(info.format(mean, std))
most_imp_feats = utils.feature_importances(fm_list[0], regs[0])
data_test = utils.load_data('data/test_FD004.txt')
es_test, _ = make_entityset(
data_test,
nclusters,
kmeans=kmeans,
)
fm_test = ft.calculate_feature_matrix(
entityset=es_test,
features=features,
verbose=True,
chunk_size=.26,
)
X = fm_test.copy().fillna(0)
y = pd.read_csv(
'data/RUL_FD004.txt',
sep=' ',
header=None,
names=['remaining_useful_life'],
index_col=False,
)
preds = regs[0].predict(X)
mae = mean_absolute_error(preds, y)
print('Mean Abs Error: {:.2f}'.format(mae))
```
# Step 3: Feature Selection and Scoring
Here, we'll use [Recursive Feature Elimination](http://scikit-learn.org/stable/modules/generated/sklearn.feature_selection.RFE.html). In order to set ourselves up for later optimization, we're going to write a generic `pipeline` function which takes in a set of hyperparameters and returns a score. Our pipeline will first run `RFE` and then split the remaining data for scoring by a `RandomForestRegressor`. We're going to pass in a list of hyperparameters, which we will tune later.
Lastly, we can use that selector and regressor to score the test values.
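As a quick sketch (not in the original notebook), the same cross-validation pipeline can be rerun with the RFE step switched on, reusing `pipeline_for_test` and `fm_list` defined above:
```
# Sketch: rerun the pipeline with RFE feature selection enabled
scores_sel, regs_sel, selectors_sel = pipeline_for_test(fm_list, do_selection=True)
print([float('{:.1f}'.format(s)) for s in scores_sel])
print('Average MAE: {:.1f}, Std: {:.2f}'.format(np.mean(scores_sel), np.std(scores_sel)))
```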
# Step 4: Hyperparameter Tuning
Because of the way we set up our pipeline, we can use a Gaussian Process to tune the hyperparameters. We will use [BTB](https://github.com/HDI-Project/BTB) from the [HDI Project](https://github.com/HDI-Project). This will search through the hyperparameters `n_estimators` and `max_feats` for RandomForest, and the number of features for RFE to find the hyperparameter set that has the best average score.
```
from btb import HyperParameter, ParamTypes
from btb.tuning import GP
def run_btb(fm_list, n=30, best=45):
hyperparam_ranges = [
('n_estimators', HyperParameter(ParamTypes.INT, [10, 200])),
('max_feats', HyperParameter(ParamTypes.INT, [5, 50])),
('nfeats', HyperParameter(ParamTypes.INT, [10, 70])),
]
tuner = GP(hyperparam_ranges)
shape = (n, len(hyperparam_ranges))
tested_parameters = np.zeros(shape, dtype=object)
scores = []
print('[n_est, max_feats, nfeats]')
best_hyperparams = None
best_sel = None
best_reg = None
for i in tqdm(range(n)):
hyperparams = tuner.propose()
cvscores, regs, selectors = pipeline_for_test(
fm_list,
hyperparams=hyperparams,
do_selection=True,
)
bound = np.mean(cvscores)
tested_parameters[i, :] = hyperparams
tuner.add(hyperparams, -np.mean(cvscores))
if np.mean(cvscores) + np.std(cvscores) < best:
best = np.mean(cvscores)
best_hyperparams = hyperparams
best_reg = regs[0]
best_sel = selectors[0]
info = '{}. {} -- Average MAE: {:.1f}, Std: {:.2f}'
mean, std = np.mean(cvscores), np.std(cvscores)
print(info.format(i, best_hyperparams, mean, std))
print('Raw: {}'.format([float('{:.1f}'.format(s)) for s in cvscores]))
return best_hyperparams, (best_sel, best_reg)
best_hyperparams, best_pipeline = run_btb(fm_list, n=30)
X = fm_test.copy().fillna(0)
y = pd.read_csv(
'data/RUL_FD004.txt',
sep=' ',
header=None,
names=['remaining_useful_life'],
index_col=False,
)
preds = best_pipeline[1].predict(best_pipeline[0].transform(X))
score = mean_absolute_error(preds, y)
print('Mean Abs Error on Test: {:.2f}'.format(score))
most_imp_feats = utils.feature_importances(
X.iloc[:, best_pipeline[0].support_],
best_pipeline[1],
)
```
# Appendix: Averaging old scores
To make a fair comparison between the previous notebook and this one, we should average scores where possible. The work in this section is exactly the work in the previous notebook plus some code for taking the average in the validation step.
```
from featuretools.primitives import Min
old_fm, features = ft.dfs(
entityset=es,
target_entity='engines',
agg_primitives=['last', 'max', 'min'],
trans_primitives=[],
cutoff_time=cutoff_time_list[0],
max_depth=3,
verbose=True,
)
old_fm_list = [old_fm]
for i in tqdm(range(1, splits)):
es = make_entityset(data, nclusters, kmeans=kmeans)[0]
old_fm = ft.calculate_feature_matrix(
entityset=es,
features=features,
cutoff_time=cutoff_time_list[i],
)
old_fm_list.append(old_fm)
old_scores = []
median_scores = []
for fm in old_fm_list:
X = fm.copy().fillna(0)
y = X.pop('remaining_useful_life')
X_train, X_test, y_train, y_test = train_test_split(X, y)
reg = RandomForestRegressor(n_estimators=10)
reg.fit(X_train, y_train)
preds = reg.predict(X_test)
mae = mean_absolute_error(preds, y_test)
old_scores.append(mae)
medianpredict = [np.median(y_train) for _ in y_test]
mae = mean_absolute_error(medianpredict, y_test)
median_scores.append(mae)
print([float('{:.1f}'.format(score)) for score in old_scores])
mean, std = np.mean(old_scores), np.std(old_scores)
info = 'Average MAE: {:.2f}, Std: {:.2f}\n'
print(info.format(mean, std))
print([float('{:.1f}'.format(score)) for score in median_scores])
mean, std = np.mean(median_scores), np.std(median_scores)
info = 'Baseline by Median MAE: {:.2f}, Std: {:.2f}\n'
print(info.format(mean, std))
y = pd.read_csv(
'data/RUL_FD004.txt',
sep=' ',
header=None,
names=['remaining_useful_life'],
index_col=False,
)
median_scores_2 = []
for ct in cutoff_time_list:
medianpredict2 = [np.median(ct['remaining_useful_life'].values) for _ in y.values]
mae = mean_absolute_error(medianpredict2, y)
median_scores_2.append(mae)
print([float('{:.1f}'.format(score)) for score in median_scores_2])
mean, std = np.mean(median_scores_2), np.std(median_scores_2)
info = 'Baseline by Median MAE: {:.2f}, Std: {:.2f}\n'
print(info.format(mean, std))
# Save output files
os.makedirs("output", exist_ok=True)
fm.to_csv('output/advanced_train_feature_matrix.csv')
cutoff_time_list[0].to_csv('output/advanced_train_label_times.csv')
fm_test.to_csv('output/advanced_test_feature_matrix.csv')
```
<p>
<img src="https://www.featurelabs.com/wp-content/uploads/2017/12/logo.png" alt="Featuretools" />
</p>
Featuretools was created by the developers at [Feature Labs](https://www.featurelabs.com/). If building impactful data science pipelines is important to you or your business, please [get in touch](https://www.featurelabs.com/contact).
```
#export
from local.torch_basics import *
from local.test import *
from local.layers import *
from local.data.all import *
from local.notebook.showdoc import show_doc
from local.optimizer import *
from local.learner import *
#default_exp callback.hook
```
# Model hooks
> Callback and helper function to add hooks in models
```
from local.utils.test import *
```
## What are hooks?
Hooks are functions you can attach to a particular layer in your model and that will be executed in the forward pass (for forward hooks) or backward pass (for backward hooks). Here we begin with an introduction around hooks, but you should jump to `HookCallback` if you quickly want to implement one (and read the following example `ActivationStats`).
Forward hooks are functions that take three arguments: the layer they're applied to, the input of that layer, and the output of that layer.
```
tst_model = nn.Linear(5,3)
def example_forward_hook(m,i,o): print(m,i,o)
x = torch.randn(4,5)
hook = tst_model.register_forward_hook(example_forward_hook)
y = tst_model(x)
hook.remove()
```
Backward hooks are functions that take three arguments: the layer it's applied to, the gradients of the loss with respect to the input, and the gradients with respect to the output.
```
def example_backward_hook(m,gi,go): print(m,gi,go)
hook = tst_model.register_backward_hook(example_backward_hook)
x = torch.randn(4,5)
y = tst_model(x)
loss = y.pow(2).mean()
loss.backward()
hook.remove()
```
Hooks can change the input/output of a layer, or the gradients, print values or shapes. If you want to store something related to these inputs/outputs, it's best to have your hook associated with a class so that it can put it in the state of an instance of that class.
## Hook -
```
#export
@docs
class Hook():
"Create a hook on `m` with `hook_func`."
def __init__(self, m, hook_func, is_forward=True, detach=True, cpu=False):
self.hook_func,self.detach,self.cpu,self.stored = hook_func,detach,cpu,None
f = m.register_forward_hook if is_forward else m.register_backward_hook
self.hook = f(self.hook_fn)
self.removed = False
def hook_fn(self, module, input, output):
"Applies `hook_func` to `module`, `input`, `output`."
if self.detach: input,output = to_detach(input, cpu=self.cpu),to_detach(output, cpu=self.cpu)
self.stored = self.hook_func(module, input, output)
def remove(self):
"Remove the hook from the model."
if not self.removed:
self.hook.remove()
self.removed=True
def __enter__(self, *args): return self
def __exit__(self, *args): self.remove()
_docs = dict(__enter__="Register the hook",
__exit__="Remove the hook")
```
This will be called during the forward pass if `is_forward=True`, the backward pass otherwise, and will optionally `detach` and put on the `cpu` the (gradient of the) input/output of the model before passing them to `hook_func`. The result of `hook_func` will be stored in the `stored` attribute of the `Hook`.
```
tst_model = nn.Linear(5,3)
hook = Hook(tst_model, lambda m,i,o: o)
y = tst_model(x)
test_eq(hook.stored, y)
show_doc(Hook.hook_fn)
show_doc(Hook.remove)
```
> Note: It's important to properly remove your hooks from your model when you're done, to avoid them being called again the next time your model is applied to some inputs, and to free the memory that goes with their state.
```
tst_model = nn.Linear(5,10)
x = torch.randn(4,5)
y = tst_model(x)
hook = Hook(tst_model, example_forward_hook)
test_stdout(lambda: tst_model(x), f"{tst_model} ({x},) {y.detach()}")
hook.remove()
test_stdout(lambda: tst_model(x), "")
```
### Context Manager
Since it's very important to remove your `Hook` even if your code is interrupted by some bug, `Hook` can be used as a context manager.
```
show_doc(Hook.__enter__)
show_doc(Hook.__exit__)
tst_model = nn.Linear(5,10)
x = torch.randn(4,5)
y = tst_model(x)
with Hook(tst_model, example_forward_hook) as h:
test_stdout(lambda: tst_model(x), f"{tst_model} ({x},) {y.detach()}")
test_stdout(lambda: tst_model(x), "")
#export
def _hook_inner(m,i,o): return o if isinstance(o,Tensor) or is_listy(o) else list(o)
def hook_output(module, detach=True, cpu=False, grad=False):
"Return a `Hook` that stores activations of `module` in `self.stored`"
return Hook(module, _hook_inner, detach=detach, cpu=cpu, is_forward=not grad)
```
The activations stored are the gradients if `grad=True`, otherwise the output of `module`. If `detach=True` they are detached from their history, and if `cpu=True`, they're put on the CPU.
```
tst_model = nn.Linear(5,10)
x = torch.randn(4,5)
with hook_output(tst_model) as h:
y = tst_model(x)
test_eq(y, h.stored)
assert not h.stored.requires_grad
with hook_output(tst_model, grad=True) as h:
y = tst_model(x)
loss = y.pow(2).mean()
loss.backward()
test_close(2*y / y.numel(), h.stored[0])
#cuda
with hook_output(tst_model, cpu=True) as h:
y = tst_model.cuda()(x.cuda())
test_eq(h.stored.device, torch.device('cpu'))
```
## Hooks -
```
#export
@docs
class Hooks():
"Create several hooks on the modules in `ms` with `hook_func`."
def __init__(self, ms, hook_func, is_forward=True, detach=True, cpu=False):
self.hooks = [Hook(m, hook_func, is_forward, detach, cpu) for m in ms]
def __getitem__(self,i): return self.hooks[i]
def __len__(self): return len(self.hooks)
def __iter__(self): return iter(self.hooks)
@property
def stored(self): return [o.stored for o in self]
def remove(self):
"Remove the hooks from the model."
for h in self.hooks: h.remove()
def __enter__(self, *args): return self
def __exit__ (self, *args): self.remove()
_docs = dict(stored = "The states saved in each hook.",
__enter__="Register the hooks",
__exit__="Remove the hooks")
layers = [nn.Linear(5,10), nn.ReLU(), nn.Linear(10,3)]
tst_model = nn.Sequential(*layers)
hooks = Hooks(tst_model, lambda m,i,o: o)
y = tst_model(x)
test_eq(hooks.stored[0], layers[0](x))
test_eq(hooks.stored[1], F.relu(layers[0](x)))
test_eq(hooks.stored[2], y)
hooks.remove()
show_doc(Hooks.stored, name='Hooks.stored')
show_doc(Hooks.remove)
```
### Context Manager
Like `Hook`, you can use `Hooks` as context managers.
```
show_doc(Hooks.__enter__)
show_doc(Hooks.__exit__)
layers = [nn.Linear(5,10), nn.ReLU(), nn.Linear(10,3)]
tst_model = nn.Sequential(*layers)
with Hooks(layers, lambda m,i,o: o) as h:
y = tst_model(x)
test_eq(h.stored[0], layers[0](x))
test_eq(h.stored[1], F.relu(layers[0](x)))
test_eq(h.stored[2], y)
#export
def hook_outputs(modules, detach=True, cpu=False, grad=False):
"Return `Hooks` that store activations of all `modules` in `self.stored`"
return Hooks(modules, _hook_inner, detach=detach, cpu=cpu, is_forward=not grad)
```
The activations stored are the gradients if `grad=True`, otherwise the output of `modules`. If `detach=True` they are detached from their history, and if `cpu=True`, they're put on the CPU.
```
layers = [nn.Linear(5,10), nn.ReLU(), nn.Linear(10,3)]
tst_model = nn.Sequential(*layers)
x = torch.randn(4,5)
with hook_outputs(layers) as h:
y = tst_model(x)
test_eq(h.stored[0], layers[0](x))
test_eq(h.stored[1], F.relu(layers[0](x)))
test_eq(h.stored[2], y)
for s in h.stored: assert not s.requires_grad
with hook_outputs(layers, grad=True) as h:
y = tst_model(x)
loss = y.pow(2).mean()
loss.backward()
g = 2*y / y.numel()
test_close(g, h.stored[2][0])
g = g @ layers[2].weight.data
test_close(g, h.stored[1][0])
g = g * (layers[0](x) > 0).float()
test_close(g, h.stored[0][0])
#cuda
with hook_outputs(tst_model, cpu=True) as h:
y = tst_model.cuda()(x.cuda())
for s in h.stored: test_eq(s.device, torch.device('cpu'))
```
## HookCallback -
To make hooks easy to use, we wrapped a version in a Callback where you just have to implement a `hook` function (plus any element you might need).
```
#export
def has_params(m):
"Check if `m` has at least one parameter"
return len(list(m.parameters())) > 0
assert has_params(nn.Linear(3,4))
assert has_params(nn.LSTM(4,5,2))
assert not has_params(nn.ReLU())
#export
class HookCallback(Callback):
"`Callback` that can be used to register hooks on `modules`"
def __init__(self, hook=None, modules=None, do_remove=True, is_forward=True, detach=True, cpu=False):
self.modules,self.do_remove = modules,do_remove
self.is_forward,self.detach,self.cpu = is_forward,detach,cpu
if hook is not None: setattr(self, 'hook', hook)
def begin_fit(self):
"Register the `Hooks` on `self.modules`."
if not self.modules:
self.modules = [m for m in flatten_model(self.model) if has_params(m)]
self.hooks = Hooks(self.modules, self.hook, self.is_forward, self.detach, self.cpu)
def after_fit(self):
"Remove the `Hooks`."
if self.do_remove: self._remove()
def _remove(self):
if getattr(self, 'hooks', None): self.hooks.remove()
def __del__(self): self._remove()
```
You can either subclass and implement a `hook` function (along with any event you want) or pass a `hook` function when initializing. Such a function needs to take three arguments: a layer, its input and its output (for a backward hook, the input is the gradient with respect to the inputs and the output is the gradient with respect to the output), and it can either modify them or update some state according to them.
If not provided, `modules` will default to the layers of `self.model` that have parameters. Depending on `do_remove`, the hooks will be properly removed at the end of training (or in case of error). `is_forward`, `detach` and `cpu` are passed to `Hooks`.
The function called at each forward (or backward) pass is `self.hook` and must be implemented when subclassing this callback.
```
class TstCallback(HookCallback):
def hook(self, m, i, o): return o
def after_batch(self): test_eq(self.hooks.stored[0], self.pred)
learn = synth_learner(n_trn=5, cbs = TstCallback())
learn.fit(1)
class TstCallback(HookCallback):
def __init__(self, modules=None, do_remove=True, detach=True, cpu=False):
super().__init__(None, modules, do_remove, False, detach, cpu)
def hook(self, m, i, o): return o
def after_batch(self):
if self.training:
test_eq(self.hooks.stored[0][0], 2*(self.pred-self.yb)/self.pred.shape[0])
learn = synth_learner(n_trn=5, cbs = TstCallback())
learn.fit(1)
show_doc(HookCallback.begin_fit)
show_doc(HookCallback.after_fit)
```
An example of such a `HookCallback` is the following, which stores the means and stds of the activations that go through the network.
```
#exports
@docs
class ActivationStats(HookCallback):
"Callback that record the mean and std of activations."
def begin_fit(self):
"Initialize stats."
super().begin_fit()
self.stats = []
def hook(self, m, i, o): return o.mean().item(),o.std().item()
def after_batch(self):
"Take the stored results and puts it in `self.stats`"
if self.training: self.stats.append(self.hooks.stored)
def after_fit(self):
"Polish the final result."
self.stats = tensor(self.stats).permute(2,1,0)
super().after_fit()
_docs = dict(hook="Take the mean and std of the output")
learn = synth_learner(n_trn=5, cbs = ActivationStats())
learn.fit(1)
learn.activation_stats.stats
```
The first line contains the means of the outputs of the model for each batch in the training set, the second line their standard deviations.
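If matplotlib is available in the environment (an assumption here, it is not shown in the imports above), a quick sketch to eyeball those statistics for the first module is:
```
# Sketch: plot the per-batch mean and std of the first module's activations
import matplotlib.pyplot as plt
stats = learn.activation_stats.stats
plt.plot(stats[0][0].numpy(), label='mean')  # means of the first module, one point per training batch
plt.plot(stats[1][0].numpy(), label='std')   # stds of the first module, one point per training batch
plt.legend();
```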
```
#hide
class TstCallback(HookCallback):
def hook(self, m, i, o): return o
def begin_fit(self):
super().begin_fit()
self.means,self.stds = [],[]
def after_batch(self):
if self.training:
self.means.append(self.hooks.stored[0].mean().item())
self.stds.append (self.hooks.stored[0].std() .item())
learn = synth_learner(n_trn=5, cbs = [TstCallback(), ActivationStats()])
learn.fit(1)
test_eq(learn.activation_stats.stats[0].squeeze(), tensor(learn.tst.means))
test_eq(learn.activation_stats.stats[1].squeeze(), tensor(learn.tst.stds))
```
## Model summary
```
#export
def total_params(m):
"Give the number of parameters of a module and if it's trainable or not"
params = sum([p.numel() for p in m.parameters()])
trains = [p.requires_grad for p in m.parameters()]
return params, (False if len(trains)==0 else trains[0])
test_eq(total_params(nn.Linear(10,32)), (32*10+32,True))
test_eq(total_params(nn.Linear(10,32, bias=False)), (32*10,True))
test_eq(total_params(nn.BatchNorm2d(20)), (20*2, True))
test_eq(total_params(nn.BatchNorm2d(20, affine=False)), (0,False))
test_eq(total_params(nn.Conv2d(16, 32, 3)), (16*32*3*3 + 32, True))
test_eq(total_params(nn.Conv2d(16, 32, 3, bias=False)), (16*32*3*3, True))
#First ih layer 20--10, all else 10--10. *4 for the four gates
test_eq(total_params(nn.LSTM(20, 10, 2)), (4 * (20*10 + 10) + 3 * 4 * (10*10 + 10), True))
#export
def layer_info(learn):
def _track(m, i, o):
return (m.__class__.__name__,)+total_params(m)+(apply(lambda x:x.shape, o),)
layers = [m for m in flatten_model(learn.model)]
xb,_ = learn.data.train_dl.one_batch()
with Hooks(layers, _track) as h:
_ = learn.model.eval()(apply(lambda o:o[:1], xb))
return h.stored
m = nn.Sequential(nn.Linear(1,50), nn.ReLU(), nn.BatchNorm1d(50), nn.Linear(50, 1))
learn = synth_learner()
learn.model=m
test_eq(layer_info(learn), [
('Linear', 100, True, [1, 50]),
('ReLU', 0, False, [1, 50]),
('BatchNorm1d', 100, True, [1, 50]),
('Linear', 51, True, [1, 1])
])
#export core
class PrettyString(str):
"Little hack to get strings to show properly in Jupyter."
def __repr__(self): return self
#export
def _print_shapes(o, bs):
if isinstance(o, torch.Size): return ' x '.join([str(bs)] + [str(t) for t in o[1:]])
else: return [_print_shapes(x, bs) for x in o]
#export
@patch
def summary(self:Learner):
"Print a summary of the model, optimizer and loss function."
infos = layer_info(self)
xb,_ = self.data.train_dl.one_batch()
n,bs = 64,find_bs(xb)
inp_sz = _print_shapes(apply(lambda x:x.shape, xb), bs)
res = f"{self.model.__class__.__name__} (Input shape: {inp_sz})\n"
res += "=" * n + "\n"
res += f"{'Layer (type)':<20} {'Output Shape':<20} {'Param #':<10} {'Trainable':<10}\n"
res += "=" * n + "\n"
ps,trn_ps = 0,0
for typ,np,trn,sz in infos:
if sz is None: continue
ps += np
if trn: trn_ps += np
res += f"{typ:<20} {_print_shapes(sz, bs):<20} {np:<10,} {str(trn):<10}\n"
res += "_" * n + "\n"
res += f"\nTotal params: {ps:,}\n"
res += f"Total trainable params: {trn_ps:,}\n"
res += f"Total non-trainable params: {ps - trn_ps:,}\n\n"
res += f"Optimizer used: {self.opt_func}\nLoss function: {self.loss_func}\n\nCallbacks:\n"
res += '\n'.join(f" - {cb}" for cb in sort_by_run(self.cbs))
return PrettyString(res)
m = nn.Sequential(nn.Linear(1,50), nn.ReLU(), nn.BatchNorm1d(50), nn.Linear(50, 1))
for p in m[0].parameters(): p.requires_grad_(False)
learn = synth_learner()
learn.model=m
learn.summary()
```
## Export -
```
#hide
from local.notebook.export import notebook2script
notebook2script(all_fs=True)
```
## Demo of 1D regression with an Attentive Neural Process with Recurrent Neural Network (ANP-RNN) model
This notebook will provide a simple and straightforward demonstration on how to utilize an Attentive Neural Process with a Recurrent Neural Network (ANP-RNN) to regress context and target points to a sine curve.
First, we need to import all necessary packages and modules for our task:
```
import os
import sys
import torch
from matplotlib import pyplot as plt  # needed for the plots in the training loop below
# Provide access to modules in repo.
sys.path.insert(0, os.path.abspath('neural_process_models'))
sys.path.insert(0, os.path.abspath('misc'))
from neural_process_models.anp_rnn import ANP_RNN_Model
from misc.test_sin_regression.Sin_Wave_Data import sin_wave_data, plot_functions
```
The `sin_wave_data` class, defined in `misc/test_sin_regression/Sin_Wave_Data.py`, represents the curve that we will try to regress to. From instances of this class, we are able to sample context and target points from the curve to serve as inputs for our neural process.
The default parameters of this class will produce a "ground truth" curve defined as the sum of the following:
1. A sine curve with amplitude 1, frequency 1, and phase 1.
2. A sine curve with amplitude 2, frequency 2, and phase 1.
3. A measured amount of noise (0.1).
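Putting those defaults together (and assuming each component is parameterized as `amplitude * sin(frequency * x + phase)`; the exact form lives in `Sin_Wave_Data.py`), the ground-truth curve is roughly

$$y(x) = \sin(x + 1) + 2\,\sin(2x + 1) + \epsilon, \qquad \epsilon \sim \mathcal{N}(0,\ 0.1^2).$$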
Let us create an instance of this class:
```
data = sin_wave_data()
```
Next, we need to instantiate our model. The ANP-RNN model is implemented by the `ANP_RNN_Model` class in the file `neural_process_models/anp_rnn.py`.
We will use the following parameters for our example model:
* 1 for x-dimension and y-dimension (since this is 1D regression)
* 4 hidden layers of dimension 256 for encoders and decoder
* 256 as the latent dimension for encoders and decoder
* We will utilize a self-attention process.
* We will utilize a deterministic path for the encoder.
Let us create an instance of this class, as well as set some hyperparameters for our training:
```
np_model = ANP_RNN_Model(x_dim=1,
y_dim=1,
mlp_hidden_size_list=[256, 256, 256, 256],
latent_dim=256,
use_rnn=True,
use_self_attention=True,
le_self_attention_type="laplace",
de_self_attention_type="laplace",
de_cross_attention_type="laplace",
use_deter_path=True)
optim = torch.optim.Adam(np_model.parameters(), lr=1e-4)
num_epochs = 1000
batch_size = 16
```
Now, let us train our model. For each epoch, we will print the loss at that epoch.
Additionally, every 50 epochs, an image will be generated and displayed, using `pyplot`. This will give you an opportunity to more closely analyze and/or save the images, if you would like.
```
for epoch in range(1, num_epochs + 1):
print("step = " + str(epoch))
np_model.train()
plt.clf()
optim.zero_grad()
ctt_x, ctt_y, tgt_x, tgt_y = data.query(batch_size=batch_size,
context_x_start=-6,
context_x_end=6,
context_x_num=200,
target_x_start=-6,
target_x_end=6,
target_x_num=200)
mu, sigma, log_p, kl, loss = np_model(ctt_x, ctt_y, tgt_x, tgt_y)
print('loss = ', loss)
loss.backward()
optim.step()
np_model.eval()
if epoch % 50 == 0:
plt.ion()
plot_functions(tgt_x.numpy(),
tgt_y.numpy(),
ctt_x.numpy(),
ctt_y.numpy(),
mu.detach().numpy(),
sigma.detach().numpy())
title_str = 'Training at epoch ' + str(epoch)
plt.title(title_str)
plt.pause(0.1)
plt.ioff()
plt.show()
```
#### New to Plotly?
Plotly's Python library is free and open source! [Get started](https://plotly.com/python/getting-started/) by downloading the client and [reading the primer](https://plotly.com/python/getting-started/).
<br>You can set up Plotly to work in [online](https://plotly.com/python/getting-started/#initialization-for-online-plotting) or [offline](https://plotly.com/python/getting-started/#initialization-for-offline-plotting) mode, or in [jupyter notebooks](https://plotly.com/python/getting-started/#start-plotting-online).
<br>We also have a quick-reference [cheatsheet](https://images.plot.ly/plotly-documentation/images/python_cheat_sheet.pdf) (new!) to help you get started!
#### Using a Single Slider to Set the Range
```
import plotly.plotly as py
import ipywidgets as widgets
from ipywidgets import interact, interactive, fixed
from IPython.core.display import HTML
from IPython.display import display, clear_output
from plotly.widgets import GraphWidget
styles = '''<style>.widget-hslider { width: 100%; }
.widget-hbox { width: 100% !important; }
.widget-slider { width: 100% !important; }</style>'''
HTML(styles)
#this widget will display our plotly chart
graph = GraphWidget("https://plotly.com/~jordanpeterson/889")
fig = py.get_figure("https://plotly.com/~jordanpeterson/889")
#find the range of the slider.
xmin, xmax = fig['layout']['xaxis']['range']
# use the interact decorator to tie a widget to the listener function
@interact(y=widgets.FloatRangeSlider(min=xmin, max=xmax, step=(xmax-xmin)/1000.0, continuous_update=False))
def update_plot(y):
graph.relayout({'xaxis.range[0]': y[0], 'xaxis.range[1]': y[1]})
#display the app
graph
%%html
<img src='https://cloud.githubusercontent.com/assets/12302455/16469485/42791e90-3e1f-11e6-8db4-2364bd610ce4.gif'>
```
#### Using Two Sliders to Set Range
```
import plotly.plotly as py
import ipywidgets as widgets
from ipywidgets import interact, interactive, fixed
from IPython.core.display import HTML
from IPython.display import display, clear_output
from plotly.widgets import GraphWidget
from traitlets import link
styles = '''<style>.widget-hslider { width: 100%; }
.widget-hbox { width: 100% !important; }
.widget-slider { width: 100% !important; }</style>'''
HTML(styles)
#this widget will display our plotly chart
graph = GraphWidget("https://plotly.com/~jordanpeterson/889")
fig = py.get_figure("https://plotly.com/~jordanpeterson/889")
#find the range of the slider.
xmin, xmax = fig['layout']['xaxis']['range']
# let's define our listener functions that will respond to changes in the sliders
def on_value_change_left(change):
graph.relayout({'xaxis.range[0]': change['new']})
def on_value_change_right(change):
graph.relayout({'xaxis.range[1]': change['new']})
# define the sliders
left_slider = widgets.FloatSlider(min=xmin, max=xmax, value=xmin, description="Left Slider")
right_slider = widgets.FloatSlider(min=xmin, max=xmax, value=xmax, description="Right Slider")
# put listeners on slider activity
left_slider.observe(on_value_change_left, names='value')
right_slider.observe(on_value_change_right, names='value')
# set a relationship between the left and right slider
link((left_slider, 'max'), (right_slider, 'value'))
link((left_slider, 'value'), (right_slider, 'min'))
# display our app
display(left_slider)
display(right_slider)
display(graph)
%%html
<img src='https://cloud.githubusercontent.com/assets/12302455/16469486/42891d0e-3e1f-11e6-9576-02c5f6c3d3c9.gif'>
```
#### Sliders with 3d Plots
```
import plotly.plotly as py
import ipywidgets as widgets
import numpy as np
from ipywidgets import interact, interactive, fixed
from IPython.core.display import HTML
from IPython.display import display, clear_output
from plotly.widgets import GraphWidget
g = GraphWidget('https://plotly.com/~DemoAccount/10147/')
x = y = np.arange(-5,5,0.1)
yt = x[:,np.newaxis]
# define our listener class
class z_data:
def __init__(self):
self.z = np.cos(x*yt)+np.sin(x*yt)*2
def on_z_change(self, name):
new_value = name['new']
self.z = np.cos(x*yt*(new_value+1)/100)+np.sin(x*yt*(new_value+1)/100)
self.replot()
def replot(self):
g.restyle({ 'z': [self.z], 'colorscale': 'Viridis'})
# create sliders
z_slider = widgets.FloatSlider(min=0,max=30,value=1,step=0.05, continuous_update=False)
z_slider.description = 'Frequency'
z_slider.value = 1
# initialize listener class
z_state = z_data()
# activate listener on our slider
z_slider.observe(z_state.on_z_change, 'value')
# display our app
display(z_slider)
display(g)
%%html
<img src="https://cloud.githubusercontent.com/assets/12302455/16569550/bd02e030-4205-11e6-8087-d41c9b5d3681.gif">
```
#### Reference
```
help(GraphWidget)
from IPython.display import display, HTML
display(HTML('<link href="//fonts.googleapis.com/css?family=Open+Sans:600,400,300,200|Inconsolata|Ubuntu+Mono:400,700" rel="stylesheet" type="text/css" />'))
display(HTML('<link rel="stylesheet" type="text/css" href="http://help.plot.ly/documentation/all_static/css/ipython-notebook-custom.css">'))
! pip install git+https://github.com/plotly/publisher.git --upgrade
import publisher
publisher.publish(
'slider_example.ipynb', 'python/slider-widget/', 'IPython Widgets | plotly',
'Interacting with Plotly charts using Sliders',
title = 'Slider Widget with Plotly',
name = 'Slider Widget with Plotly',
has_thumbnail='true', thumbnail='thumbnail/ipython_widgets.jpg',
language='python', page_type='example_index',
display_as='chart_events', order=20,
ipynb= '~notebook_demo/91')
```
# Activity #1: MarketMap
* another way to visualize mappable data
## 1.a : explore the dataset
```
# our usual stuff
%matplotlib inline
import pandas as pd
import numpy as np
#!pip install xlrd # JPN, might have to run this
# note: this is quering from the web! How neat is that??
df = pd.read_excel('https://query.data.world/s/ivl45pdpubos6jpsii3djsjwm2pcjv', skiprows=5)
# the above might take a while to load all the data
# what is in this dataframe? lets take a look at the top
df.head()
# this dataset is called: "Surgery Charges Across the U.S."
# and its just showing us how much different procedures
# cost from different hospitals
# what kinds of data are we working with?
df.dtypes
# lets look at some summary data
# recall: this is like R's "summary" function
df.describe()
# so, things like the mean zipcode aren't
# meaningful, same thing with provider ID
# But certainly looking at the average
# total payments, discharges, might
# be useful
# lets look at how many seperate types of surgery are
# represented in this dataset:
df["DRG Definition"].unique().size
# what about how many provider (hospital) names?
df["Provider Name"].unique().size
# how many states are represented
df["Provider State"].unique().size
# what are the state codes?
df["Provider State"].unique()
# lets figure out what the most common surgeries are via how
# many folks are discharged after each type of surgery
# (1)
most_common = df.groupby("DRG Definition")["Total Discharges"].sum()
most_common
# (2) but lets sort by the largest on top
most_common = df.groupby("DRG Definition")["Total Discharges"].sum().sort_values(ascending=False)
most_common
# (3) lets look at only the top 5, for fun
most_common[:5]
# (4) or we can only look at the names of the top 5:
most_common[:5].index.values
```
## 1.b: formatting data for MarketMap
* here we are going to practice doing some fancy things to clean this data
* this will be good practice for when you run into other datasets "in the wild"
```
# (1) lets create a little table of total discharges for
# each type of surgery & state
total_discharges = df.groupby(["DRG Definition", "Provider State"])["Total Discharges"].sum()
total_discharges
# (2) the above is not intuitive, lets prettify it
total_discharges = df.groupby(["DRG Definition", "Provider State"])["Total Discharges"].sum().unstack()
total_discharges
```
### Aside: lets quick check out what are the most frequent surgeries
```
# for our map, we are going to want to
# normalize the discharges for each surgery & state
# by the total discharges across all
# states for that particular type of surgery
# lets add this to our total_discharges DF
total_discharges["Total"] = total_discharges.sum(axis = 1)
total_discharges["Total"].head() # just look at the first few
# finally, lets check out the most often
# performed surgery across all states
# we can do this by sorting our DF by this total we just
# calculated:
total_discharges.sort_values(by = "Total",
ascending=False,
inplace = True)
# now lets just look at the first few of our
# sorted array
total_discharges.head()
# so, from this we see that joint replacement
# or reattachment of a lower extremeity is
# the most likely surgery (in number of discharges)
# followed by surgeries for sepsis and then heart failure
# neat. We won't need these for plotting, so we can remove our
# total column we just calculated
del total_discharges["Total"]
total_discharges.head()
# now we see that we are back to just states & surgeries
# *but* our sorting is still by the total that we
# previously calculated.
# spiffy!
```
## 1.c: plot data with bqplot
```
import bqplot
# by default bqplot does not import
# all packages, we have to
# explicitely import market_map
import bqplot.market_map # for access to market_map
# lets do our usual thing, but with a market map
# instead of a heat map
# scales:
x_sc, y_sc = bqplot.OrdinalScale(), bqplot.OrdinalScale() # note, just a different way to call things
c_sc = bqplot.ColorScale(scheme="Blues")
# just a color axes for now:
c_ax = bqplot.ColorAxis(scale = c_sc, orientation = 'vertical')
# lets make the market map:
# (1) what should we plot for our color? lets take a look:
total_discharges.iloc[0].values, total_discharges.columns.values
# this is the total discharges for the most
# popular surgical procedure
# the columns will be states
# (2) lets put this into a map
mmap = bqplot.market_map.MarketMap(color = total_discharges.iloc[0].values,
names = total_discharges.columns.values,
scales={'color':c_sc},
axes=[c_ax])
# (3) ok, but just clicking on things doesn't tell us too much
# lets add a little label to print out the total of the selected
import ipywidgets
label = ipywidgets.Label()
# link to market map
def get_data(change):
# (3.1)
#print(change['owner'].selected)
# (3.2) loop
v = 0.0 # to store total value
for s in change['owner'].selected:
v += total_discharges.iloc[0][total_discharges.iloc[0].index == s].values
if v > 0: # in case nothing is selected
# what are we printing?
l = 'Total discharges of ' + \
total_discharges.iloc[0].name + \
' = ' + str(v[0]) # note: v is by default an array
label.value = l
mmap.observe(get_data,'selected')
#mmap
# (3)
ipywidgets.VBox([label,mmap])
```
## Discussion:
* think back to the map we had last week: we can certainly plot this information with a more geo-realistic map
* what are the pros & cons of each style of map? What do each highlight? How are each biased?
## IF we have time: Re-do with other mapping system:
```
from us_state_abbrev import us_state_abbrev
sc_geo = bqplot.AlbersUSA()
state_data = bqplot.topo_load('map_data/USStatesMap.json')
#(1)
states_map = bqplot.Map(map_data=state_data, scales={'projection':sc_geo}) # initial map, used to grab ids/names below
#(2)
# library from last time
from states_utils import get_ids_and_names
ids, state_names = get_ids_and_names(states_map)
# color maps
import matplotlib.cm as cm
cmap = cm.Blues
# most popular surgery
popSurg = total_discharges.iloc[0]
# here, we will go through the process of getting colors to plot
# each state with its similar color to the marketmap above:
#!pip install webcolors
from webcolors import rgb_to_hex
d = {} # empty dict to store colors
for s in states_map.map_data['objects']['subunits']['geometries']:
if s['properties'] is not None:
#print(s['properties']['name'], s['id'])
# match states to abbreviations
state_abbrev = us_state_abbrev[s['properties']['name']]
#print(state_abbrev)
v = popSurg[popSurg.index == state_abbrev].values[0]
# renorm v to colors and then number of states
v = (v - popSurg.values.min())/(popSurg.values.max()-popSurg.values.min())
#print(v, int(cmap(v)[0]), int(cmap(v)[1]), int(cmap(v)[2]))
# convert to from 0-1 to 0-255 rgbs
c = [int(cmap(v)[i]*255) for i in range(3)]
#d[s['id']] = rgb_to_hex([int(cmap(v)[0]*255), int(cmap(v)[1]*255), int(cmap(v)[2]*255)])
d[s['id']] = rgb_to_hex(c)
def_tt = bqplot.Tooltip(fields=['name'])
states_map = bqplot.Map(map_data=state_data, scales={'projection':sc_geo}, colors = d, tooltip=def_tt)
# add interactions
states_map.interactions = {'click': 'select', 'hover': 'tooltip'}
# (3)
label = ipywidgets.Label()
# link to heat map
def get_data(change):
v = 0.0 # to store total value
if change['owner'].selected is not None:
for s in change['owner'].selected:
#print(s)
sn = state_names[s == ids][0]
state_abbrev = us_state_abbrev[sn]
v += popSurg[popSurg.index == state_abbrev].values[0]
if v > 0: # in case nothing is selected
# what are we printing?
l = 'Total discharges of ' + \
popSurg.name + \
' = ' + str(v) # note: v is by default an array
label.value = l
states_map.observe(get_data,'selected')
fig=bqplot.Figure(marks=[states_map],
title='US States Map Example',
fig_margin={'top': 0, 'bottom': 0, 'left': 0, 'right': 0}) # try w/o first and see
#fig
# (3)
ipywidgets.VBox([label,fig])
```
# Activity #2: Real quick ipyleaflets
* since cartopy wasn't working for folks, we'll quickly look at another option: ipyleaflets
```
#!pip install ipyleaflet
from ipyleaflet import *
# note: you might have to close and reopen you notebook
# to see the map
m = Map(center=(52, 10), zoom=8, basemap=basemaps.Hydda.Full)
#(2) street maps
strata_all = basemap_to_tiles(basemaps.Strava.All)
m.add_layer(strata_all)
m
```
### Note: more examples available here - https://github.com/jupyter-widgets/ipyleaflet/tree/master/examples
# Activity #3: Networked data - Simple example
```
# lets start with some very basic node data
# **copy paste into chat **
node_data = [
{"label": "Luke Skywalker", "media": "Star Wars", "shape": "rect"},
{"label": "Jean-Luc Picard", "media": "Star Trek", "shape": "rect"},
{"label": "Doctor Who", "media": "Doctor Who", "shape": "rect"},
{"label": "Pikachu", "media": "Detective Pikachu", "shape": "circle"},
]
# we'll use bqplot.Graph to plot these
graph = bqplot.Graph(node_data=node_data,
colors = ["red", "red", "red", "red"])
fig = bqplot.Figure(marks = [graph])
fig
# you note I can pick them up and move them around, but they aren't connected in any way
# lets make some connections
node_data = [
{"label": "Luke Skywalker", "media": "Star Wars", "shape": "rect"},
{"label": "Jean-Luc Picard", "media": "Star Trek", "shape": "rect"},
{"label": "Doctor Who", "media": "Doctor Who", "shape": "rect"},
{"label": "Pikachu", "media": "Detective Pikachu", "shape": "circle"},
]
# lets link the 0th entry (luke skywalker) to both
# jean-luc picard (1th entry) and pikachu (3rd entry)
link_data = [{'source': 0, 'target': 1}, {'source': 0, 'target': 3}]
graph = bqplot.Graph(node_data=node_data, link_data=link_data,
colors = ["red", "red", "red", "red"])
#(2) we can also play with the springiness of our links:
graph.charge = -300 # setting it to positive makes them want to overlap and is, in general, a lot of fun
# -300 is default
# (3) we can also change the link type:
graph.link_type = 'line' # arc = default, line, slant_line
# (4) highlight link direction, or not
graph.directed = False
fig = bqplot.Figure(marks = [graph])
fig
# we can do all the same things we've done with
# our previous map plots:
# for example, we can add a tooltip:
#(1)
tooltip = bqplot.Tooltip(fields=["media"])
graph = bqplot.Graph(node_data=node_data, link_data=link_data,
colors = ["red", "red", "red", "red"],
tooltip=tooltip)
# we can also do interactive things with labels
label = ipywidgets.Label()
# note here that the calling sequence
# is a little different - instead
# of "change" we have "obj" and
# "element"
def printstuff(obj, element):
# (1.1)
#print(obj)
#print(element)
label.value = 'Media = ' + element['data']['media']
graph.on_element_click(printstuff)
fig = bqplot.Figure(marks = [graph])
ipywidgets.VBox([label,fig])
```
# Activity #4: Network data - subset of facebook friends dataset
* from: https://snap.stanford.edu/data/egonets-Facebook.html
* dataset of friends lists
#### Info about this dataset:
* the original file you can read in has about 80,000 different connections
* it is ordered by the most connected person (person 0) at the top
* because this network would be computationally slow and just a hairball - we're going to be working with downsampled data
* for example, a file tagged "000090_000010" starts with the 10th most connected person, and only included connections up to the 90th most connected person
* Its worth noting that this dataset (linked here and on the webpage) also includes feature data like gender, last name, school, etc - however it is too sparse to be of visualization use to us
Check out the other social network links at the SNAP data webpage!
```
# from 10 to 150 connections, a few large nodes
#filename = 'facebook_combined_sm000150_000010.txt'
# this might be too large: one large node, up to 100 connections
#filename='facebook_combined_sm000100.txt'
# start here
filename = 'facebook_combined_sm000090_000010.txt'
# then this one
#filename = 'facebook_combined_sm000030_000000.txt'
# note how different the topologies are
network = pd.read_csv('/Users/jillnaiman1/Downloads/'+filename,
sep=' ', names=['ind1', 'ind2'])
network
# build the network
node_data = []
link_data = []
color_data = [] # all same color
# add nodes
maxNet = max([network['ind1'].max(),network['ind2'].max()])
for i in range(maxNet+1):
node_data.append({"label": str(i), 'shape_attrs': {'r': 8} }) # small circles
# now, make links
for i in range(len(network)):
# we are linking the ith object to another jth object, but we
# gotta figure out with jth object it is
source_id = network.iloc[i]['ind1']
target_id = network.iloc[i]['ind2']
link_data.append({'source': source_id, 'target': target_id})
color_data.append('blue')
#link_data,node_data
#color_data
# plot
graph = bqplot.Graph(node_data=node_data,
link_data = link_data,
colors=color_data)
# play with these for different graphs
graph.charge = -100
graph.link_type = 'line'
graph.link_distance=50
# there is no direction to links
graph.directed = False
fig = bqplot.Figure(marks = [graph])
fig.layout.min_width='1000px'
fig.layout.min_height='900px'
# note: I think this has to be the layout for this to look right
fig
# in theory, we could color this network by what school folks are in, or some such
# but while the dataset does contain some of these features, the
# answer rate is too sparse for our subset here
```
# Note: the below is just prep if you want to make your own subset datasets
```
# prep fb data by downsampling
minCon = 0
maxCon = 30
G = pd.read_csv('/Users/jillnaiman1/Downloads/facebook_combined.txt',sep=' ', names=['ind1', 'ind2'])
Gnew = np.zeros([2],dtype='int')
# loop and append
Gnew = G.loc[G['ind1']==minCon].values[0]
for i in range(G.loc[G['ind1']==minCon].index[0],len(G)):
gl = G.loc[i].values
if (gl[0] <= maxCon) and (gl[1] <= maxCon) and (gl[0] >= minCon) and (gl[1] >= minCon):
Gnew = np.vstack((Gnew,gl))
np.savetxt('/Users/jillnaiman1/spring2019online/week09/data/facebook_combined_sm' + \
str(maxCon).zfill(6) + '_' + str(minCon).zfill(6) + '.txt', Gnew,fmt='%i')
graph.link_distance
```
# Transposed Convolution
:label:`sec_transposed_conv`
The CNN layers we have seen so far, such as convolutional layers (:numref:`sec_conv_layer`) and pooling layers (:numref:`sec_pooling`), typically reduce (downsample) the spatial dimensions (height and width) of the input image.
However, in semantic segmentation, which classifies at the pixel level, it is convenient if the spatial dimensions of the input and output images are the same.
For example, the channel dimension at an output pixel can then hold the classification results for the input pixel at the same spatial position.
To achieve this, especially after the spatial dimensions have been reduced by CNN layers, we can use another type of CNN layer that increases (upsamples) the spatial dimensions of intermediate feature maps.
In this section, we introduce
*transposed convolution* :cite:`Dumoulin.Visin.2016`,
which is used to reverse the reduction in spatial size caused by downsampling.
```
import torch
from torch import nn
from d2l import torch as d2l
```
## Basic Operation
Ignoring channels for now, let's begin with the basic transposed convolution with a stride of 1 and no padding.
Suppose that we have an $n_h \times n_w$ input tensor and a $k_h \times k_w$ kernel.
Sliding the kernel window with a stride of 1, $n_w$ times in each row and $n_h$ times in each column, yields a total of $n_h n_w$ intermediate results.
Each intermediate result is an $(n_h + k_h - 1) \times (n_w + k_w - 1)$ tensor that is initialized to zero.
To compute each intermediate tensor, each element in the input tensor is multiplied by the kernel, so that the resulting $k_h \times k_w$ tensor replaces a portion of the intermediate tensor.
Note that the position of the replaced portion in each intermediate tensor corresponds to the position of the element in the input tensor.
In the end, all the intermediate results are summed to produce the final result.
As an example, :numref:`fig_trans_conv` illustrates how transposed convolution with a $2\times 2$ kernel is computed for a $2\times 2$ input tensor.

:label:`fig_trans_conv`
We can (**implement this basic transposed convolution operation**) `trans_conv` for the input matrix `X` and the kernel matrix `K`.
```
def trans_conv(X, K):
h, w = K.shape
Y = torch.zeros((X.shape[0] + h - 1, X.shape[1] + w - 1))
for i in range(X.shape[0]):
for j in range(X.shape[1]):
Y[i: i + h, j: j + w] += X[i, j] * K
return Y
```
In contrast to the regular convolution (in :numref:`sec_conv_layer`) that *reduces* input elements via the kernel, the transposed convolution *broadcasts* input elements via the kernel, thereby producing an output that is larger than the input.
We can construct the input tensor `X` and the kernel tensor `K` from :numref:`fig_trans_conv` to [**validate the output of the above implementation**].
This implementation is the basic two-dimensional transposed convolution operation.
```
X = torch.tensor([[0.0, 1.0], [2.0, 3.0]])
K = torch.tensor([[0.0, 1.0], [2.0, 3.0]])
trans_conv(X, K)
```
Alternatively, when the input `X` and the kernel `K` are both four-dimensional tensors, we can [**use high-level APIs to obtain the same results**].
```
X, K = X.reshape(1, 1, 2, 2), K.reshape(1, 1, 2, 2)
tconv = nn.ConvTranspose2d(1, 1, kernel_size=2, bias=False)
tconv.weight.data = K
tconv(X)
```
## [**Padding, Strides, and Multiple Channels**]
Different from regular convolution, in transposed convolution the padding is applied to the output (regular convolution applies padding to the input).
For example, when specifying the padding number on either side of the height and width as 1, the first and last rows and columns will be removed from the transposed convolution output.
```
tconv = nn.ConvTranspose2d(1, 1, kernel_size=2, padding=1, bias=False)
tconv.weight.data = K
tconv(X)
```
In transposed convolution, strides are specified for the intermediate results (thus the output), not for the input.
Using the same input and kernel tensors from :numref:`fig_trans_conv`, changing the stride from 1 to 2 increases both the height and width of the intermediate tensors, hence the output tensor shown in :numref:`fig_trans_conv_stride2`.

:label:`fig_trans_conv_stride2`
The following code can validate the output of the transposed convolution with a stride of 2 in :numref:`fig_trans_conv_stride2`.
```
tconv = nn.ConvTranspose2d(1, 1, kernel_size=2, stride=2, bias=False)
tconv.weight.data = K
tconv(X)
```
For multiple input and output channels, transposed convolution works in the same way as regular convolution.
Suppose that the input has $c_i$ channels, and that the transposed convolution assigns a $k_h\times k_w$ kernel tensor to each input channel.
When multiple output channels are specified, we will have a $c_i\times k_h\times k_w$ kernel for each output channel.
Likewise, if we feed $\mathsf{X}$ into a convolutional layer $f$ to output $\mathsf{Y}=f(\mathsf{X})$ and create a transposed convolutional layer $g$ with the same hyperparameters as $f$ except that the number of output channels equals the number of channels in $\mathsf{X}$, then $g(Y)$ will have the same shape as $\mathsf{X}$.
The following example illustrates this.
```
X = torch.rand(size=(1, 10, 16, 16))
conv = nn.Conv2d(10, 20, kernel_size=5, padding=2, stride=3)
tconv = nn.ConvTranspose2d(20, 10, kernel_size=5, padding=2, stride=3)
tconv(conv(X)).shape == X.shape
```
## [**Connection to Matrix Transposition**]
:label:`subsec-connection-to-mat-transposition`
Why is the transposed convolution named after matrix transposition?
Let's first see how to implement convolution using matrix multiplication.
In the example below, we define a $3\times 3$ input `X` and a $2\times 2$ convolution kernel `K`, and then use the `corr2d` function to compute the convolution output `Y`.
```
X = torch.arange(9.0).reshape(3, 3)
K = torch.tensor([[1.0, 2.0], [3.0, 4.0]])
Y = d2l.corr2d(X, K)
Y
```
Next, we rewrite the convolution kernel `K` as a sparse weight matrix `W` containing a large number of zeros.
The shape of the weight matrix is ($4$, $9$), where the nonzero elements come from the kernel `K`.
```
def kernel2matrix(K):
k, W = torch.zeros(5), torch.zeros((4, 9))
k[:2], k[3:5] = K[0, :], K[1, :]
W[0, :5], W[1, 1:6], W[2, 3:8], W[3, 4:] = k, k, k, k
return W
W = kernel2matrix(K)
W
```
Concatenating the input `X` row by row yields a vector of length 9.
Then the matrix multiplication of `W` and the vectorized `X` gives a vector of length 4.
After reshaping it, we obtain the same result `Y` as the original convolution operation above: we have just implemented convolution using matrix multiplication.
```
Y == torch.matmul(W, X.reshape(-1)).reshape(2, 2)
```
Likewise, we can implement transposed convolution using matrix multiplication.
In the following example, we take the $2 \times 2$ output `Y` of the regular convolution above as input to the transposed convolution.
To implement this via matrix multiplication, we only need to transpose the weight matrix `W` so that its shape becomes $(9, 4)$.
```
Z = trans_conv(Y, K)
Z == torch.matmul(W.T, Y.reshape(-1)).reshape(3, 3)
```
More abstractly, given an input vector $\mathbf{x}$ and a weight matrix $\mathbf{W}$, the forward propagation of convolution can be implemented by multiplying the input with the weight matrix to output the vector $\mathbf{y}=\mathbf{W}\mathbf{x}$.
Since backpropagation follows the chain rule and $\nabla_{\mathbf{x}}\mathbf{y}=\mathbf{W}^\top$, the backpropagation of convolution can be implemented by multiplying its input with the transposed weight matrix $\mathbf{W}^\top$.
Therefore, the transposed convolutional layer simply exchanges the forward propagation and backpropagation functions of the convolutional layer: its forward propagation and backpropagation functions multiply their input vector with $\mathbf{W}^\top$ and $\mathbf{W}$, respectively.
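As a quick sanity check of this statement, the following minimal sketch (not part of the original text; it reuses the `W` computed by `kernel2matrix` above) backpropagates an all-ones upstream gradient through $\mathbf{y}=\mathbf{W}\mathbf{x}$ and verifies that the gradient with respect to $\mathbf{x}$ equals $\mathbf{W}^\top$ applied to that upstream gradient.
```
x = torch.arange(9.0, requires_grad=True)
y = torch.matmul(W, x)                         # forward pass: y = W x
g = torch.ones(4)                              # an arbitrary upstream gradient
y.backward(g)                                  # backward pass
torch.allclose(x.grad, torch.matmul(W.T, g))   # True: gradient w.r.t. x is W^T g
```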
## Summary
* In contrast to the regular convolution that reduces input elements via the kernel, the transposed convolution broadcasts input elements via the kernel, thereby producing an output whose shape is larger than that of the input.
* If we feed $\mathsf{X}$ into a convolutional layer $f$ to output $\mathsf{Y}=f(\mathsf{X})$ and create a transposed convolutional layer $g$ with the same hyperparameters as $f$ except that the number of output channels equals the number of channels in $\mathsf{X}$, then $g(Y)$ will have the same shape as $\mathsf{X}$.
* We can implement convolution using matrix multiplication. The transposed convolutional layer simply exchanges the forward propagation and backpropagation functions of the convolutional layer.
## Exercises
1. In :numref:`subsec-connection-to-mat-transposition`, the convolution input `X` and the transposed convolution output `Z` have the same shape. Do they also have the same values? Why?
1. Is it efficient to use matrix multiplication to implement convolution? Why?
[Discussions](https://discuss.d2l.ai/t/3302)
Lambda School Data Science, Unit 2: Predictive Modeling
# Applied Modeling, Module 3
### Objective
- Visualize and interpret partial dependence plots
### Links
- [Kaggle / Dan Becker: Machine Learning Explainability — Partial Dependence Plots](https://www.kaggle.com/dansbecker/partial-plots)
- [Christoph Molnar: Interpretable Machine Learning — Partial Dependence Plots](https://christophm.github.io/interpretable-ml-book/pdp.html) + [animated explanation](https://twitter.com/ChristophMolnar/status/1066398522608635904)
### Three types of model explanations this unit:
#### 1. Global model explanation: all features in relation to each other _(Last Week)_
- Feature Importances: _Default, fastest, good for first estimates_
- Drop-Column Importances: _The best in theory, but much too slow in practice_
- Permutation Importances: _A good compromise!_
#### 2. Global model explanation: individual feature(s) in relation to target _(Today)_
- Partial Dependence plots
#### 3. Individual prediction explanation _(Tomorrow)_
- Shapley Values
_Note that the coefficients from a linear model give you all three types of explanations!_
### Setup
#### If you're using [Anaconda](https://www.anaconda.com/distribution/) locally
Install required Python packages, if you haven't already:
- [category_encoders](https://github.com/scikit-learn-contrib/categorical-encoding), version >= 2.0: `conda install -c conda-forge category_encoders` / `pip install category_encoders`
- [PDPbox](https://github.com/SauceCat/PDPbox): `pip install pdpbox`
- [Plotly](https://medium.com/plotly/plotly-py-4-0-is-here-offline-only-express-first-displayable-anywhere-fc444e5659ee), version >= 4.0
```
# If you're in Colab...
import os, sys
in_colab = 'google.colab' in sys.modules
if in_colab:
# Install required python package:
# category_encoders, version >= 2.0
!pip install --upgrade category_encoders pdpbox plotly
# Pull files from Github repo
os.chdir('/content')
!git init .
!git remote add origin https://github.com/LambdaSchool/DS-Unit-2-Applied-Modeling.git
!git pull origin master
# Change into directory for module
os.chdir('module3')
```
## Lending Club: Predict interest rate
```
import pandas as pd
# Stratified sample, 10% of expired Lending Club loans, grades A-D
# Source: https://www.lendingclub.com/info/download-data.action
history_location = '../data/lending-club/lending-club-subset.csv'
history = pd.read_csv(history_location)
history['issue_d'] = pd.to_datetime(history['issue_d'], infer_datetime_format=True)
# Just use 36 month loans
history = history[history.term==' 36 months']
# Index & sort by issue date
history = history.set_index('issue_d').sort_index()
# Clean data, engineer feature, & select subset of features
history = history.rename(columns=
{'annual_inc': 'Annual Income',
'fico_range_high': 'Credit Score',
'funded_amnt': 'Loan Amount',
'title': 'Loan Purpose'})
history['Interest Rate'] = history['int_rate'].str.strip('%').astype(float)
history['Monthly Debts'] = history['Annual Income'] / 12 * history['dti'] / 100
columns = ['Annual Income',
'Credit Score',
'Loan Amount',
'Loan Purpose',
'Monthly Debts',
'Interest Rate']
history = history[columns]
history = history.dropna()
# Test on the last 10,000 loans,
# Validate on the 10,000 before that,
# Train on the rest
test = history[-10000:]
val = history[-20000:-10000]
train = history[:-20000]
# Assign to X, y
target = 'Interest Rate'
features = history.columns.drop('Interest Rate')
X_train = train[features]
y_train = train[target]
X_val = val[features]
y_val = val[target]
X_test = test[features]
y_test = test[target]
# The target has some right skew.
# It's not bad, but we'll log transform anyways
%matplotlib inline
import seaborn as sns
sns.distplot(y_train);
# Log transform the target
import numpy as np
y_train_log = np.log1p(y_train)
y_val_log = np.log1p(y_val)
y_test_log = np.log1p(y_test)
# Plot the transformed target's distribution
sns.distplot(y_train_log);
```
### Fit Linear Regression model, with original target
```
import category_encoders as ce
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
lr = make_pipeline(
ce.OrdinalEncoder(), # Not ideal for Linear Regression
StandardScaler(),
LinearRegression()
)
lr.fit(X_train, y_train)
print('Linear Regression R^2', lr.score(X_val, y_val))
```
### Fit Gradient Boosting model, with log transformed target
```
from xgboost import XGBRegressor
gb = make_pipeline(
ce.OrdinalEncoder(),
XGBRegressor(n_estimators=200, objective='reg:squarederror', n_jobs=-1)
)
gb.fit(X_train, y_train_log)
# print('Gradient Boosting R^2', gb.score(X_val, y_val_log))
# Convert back away from log space
```
### Explaining Linear Regression
```
example = X_val.iloc[[0]]
example
pred = lr.predict(example)[0]
print(f'Predicted Interest Rate: {pred:.2f}%')
def predict(model, example, log=False):
print('Vary income, hold other features constant', '\n')
example = example.copy()
preds = []
for income in range(20000, 200000, 20000):
example['Annual Income'] = income
pred = model.predict(example)[0]
if log:
pred = np.expm1(pred)
print(f'Predicted Interest Rate: {pred:.3f}%')
print(example.to_string(), '\n')
preds.append(pred)
print('Difference between predictions')
print(np.diff(preds))
predict(lr, example)
example2 = X_val.iloc[[2]]
predict(lr, example2);
```
### Explaining Gradient Boosting???
```
predict(gb, example, log=True)
predict(gb, example2, log=True)
```
## Partial Dependence Plots
From [PDPbox documentation](https://pdpbox.readthedocs.io/en/latest/):
>**The common headache**: When using black box machine learning algorithms like random forest and boosting, it is hard to understand the relations between predictors and model outcome. For example, in terms of random forest, all we get is the feature importance. Although we can know which feature is significantly influencing the outcome based on the importance calculation, it really sucks that we don’t know in which direction it is influencing. And in most of the real cases, the effect is non-monotonic. We need some powerful tools to help understanding the complex relations between predictors and model prediction.
[Animation by Christoph Molnar](https://twitter.com/ChristophMolnar/status/1066398522608635904), author of [_Interpretable Machine Learning_](https://christophm.github.io/interpretable-ml-book/pdp.html#examples)
> Partial dependence plots show how a feature affects predictions of a Machine Learning model on average.
> 1. Define grid along feature
> 2. Model predictions at grid points
> 3. Line per data instance -> ICE (Individual Conditional Expectation) curve
> 4. Average curves to get a PDP (Partial Dependence Plot)
```
%matplotlib inline
import matplotlib.pyplot as plt
examples = pd.concat([example, example2])
for income in range(20000, 200000, 20000):
examples['Annual Income'] = income
preds_log = gb.predict(examples)
preds = np.expm1(preds_log)
for pred in preds:
plt.scatter(income, pred, color='grey')
plt.scatter(income, np.mean(preds), color='red')
```
## Partial Dependence Plots with 1 feature
#### PDPbox
- [Gallery](https://github.com/SauceCat/PDPbox#gallery)
- [API Reference: pdp_isolate](https://pdpbox.readthedocs.io/en/latest/pdp_isolate.html)
- [API Reference: pdp_plot](https://pdpbox.readthedocs.io/en/latest/pdp_plot.html)
```
# Later, when you save matplotlib images to include in blog posts or web apps,
# increase the dots per inch (double it), so the text isn't so fuzzy
import matplotlib.pyplot as plt
plt.rcParams['figure.dpi'] = 72
```
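For example, a single-feature partial dependence plot for the gradient boosting pipeline above might look like the sketch below. The feature choice and call pattern are assumptions based on the PDPbox docs linked above, not the notebook's filled-in solution; note the predictions are in log space because `gb` was fit on the log-transformed target.
```
from pdpbox import pdp

feature = 'Annual Income'   # hypothetical choice; any column in X_val works
isolated = pdp.pdp_isolate(
    model=gb,                              # the fitted pipeline from above
    dataset=X_val,
    model_features=X_val.columns.tolist(),
    feature=feature
)
pdp.pdp_plot(isolated, feature);
```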
#### You can customize it
PDPbox
- [API Reference: PDPIsolate](https://pdpbox.readthedocs.io/en/latest/PDPIsolate.html)
```
```
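One possible customization, sketched below under the assumption of PDPbox <= 0.2's `pdp_plot` keyword arguments (`plot_lines`, `frac_to_plot`, `plot_pts_dist`), overlays individual ICE curves behind the averaged PDP line; it reuses `isolated` from the sketch above.
```
pdp.pdp_plot(
    isolated, feature,
    plot_lines=True,      # draw individual ICE curves
    frac_to_plot=100,     # how many ICE curves to draw
    plot_pts_dist=True    # rug plot of the feature's distribution
);
```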
## Partial Dependence Plots with 2 features
See interactions!
PDPbox
- [Gallery](https://github.com/SauceCat/PDPbox#gallery)
- [API Reference: pdp_interact](https://pdpbox.readthedocs.io/en/latest/pdp_interact.html)
- [API Reference: pdp_interact_plot](https://pdpbox.readthedocs.io/en/latest/pdp_interact_plot.html)
Be aware of a bug in PDPBox version <= 0.20:
- With the `pdp_interact_plot` function, `plot_type='contour'` gets an error, but `plot_type='grid'` works
- This will be fixed in the next release of PDPbox: https://github.com/SauceCat/PDPbox/issues/40
```
```
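A sketch of a two-feature interaction plot for the same model (again an assumption rather than the notebook's filled-in solution), using `plot_type='grid'` to sidestep the contour bug noted above:
```
features_2d = ['Annual Income', 'Credit Score']
interaction = pdp.pdp_interact(
    model=gb,
    dataset=X_val,
    model_features=X_val.columns.tolist(),
    features=features_2d
)
pdp.pdp_interact_plot(interaction, feature_names=features_2d, plot_type='grid');
```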
### 3D with Plotly!
```
```
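One way to get a 3D surface (a sketch assuming Plotly >= 4 and the `interaction` object from the previous sketch; `pdp_interact` stores its grid predictions in a long-format `.pdp` dataframe with a `preds` column, the same attribute used in the Titanic example below):
```
import plotly.graph_objects as go

surface = interaction.pdp.pivot_table(
    values='preds', columns=features_2d[0], index=features_2d[1])
fig = go.Figure(data=[go.Surface(x=surface.columns, y=surface.index, z=surface.values)])
fig.update_layout(scene=dict(xaxis_title=features_2d[0],
                             yaxis_title=features_2d[1],
                             zaxis_title='Predicted (log) interest rate'))
fig.show()
```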
# Partial Dependence Plots with categorical features
1. I recommend you use Ordinal Encoder, outside of a pipeline, to encode your data first. Then use the encoded data with pdpbox.
2. There's some extra work to get readable category names on your plot, instead of integer category codes.
```
# Fit a model on Titanic data
import category_encoders as ce
import seaborn as sns
from sklearn.ensemble import RandomForestClassifier
df = sns.load_dataset('titanic')
df.age = df.age.fillna(df.age.median())
df = df.drop(columns='deck')
df = df.dropna()
target = 'survived'
features = df.columns.drop(['survived', 'alive'])
X = df[features]
y = df[target]
# Use Ordinal
encoder = ce.OrdinalEncoder()
X_encoded = encoder.fit_transform(X)
model = RandomForestClassifier(n_estimators=100, random_state=42, n_jobs=-1)
model.fit(X_encoded, y)
# Use Pdpbox
%matplotlib inline
import matplotlib.pyplot as plt
from pdpbox import pdp
feature = 'sex'
pdp_dist = pdp.pdp_isolate(model=model, dataset=X_encoded, model_features=features, feature=feature)
pdp.pdp_plot(pdp_dist, feature);
# Look at the encoder's mappings
encoder.mapping
pdp.pdp_plot(pdp_dist, feature)
# Manually change the xticks labels
plt.xticks([1, 2], ['male', 'female']);
# Let's automate it
feature = 'sex'
for item in encoder.mapping:
if item['col'] == feature:
feature_mapping = item['mapping']
feature_mapping = feature_mapping[feature_mapping.index.dropna()]
category_names = feature_mapping.index.tolist()
category_codes = feature_mapping.values.tolist()
# Use Pdpbox
%matplotlib inline
import matplotlib.pyplot as plt
from pdpbox import pdp
feature = 'sex'
pdp_dist = pdp.pdp_isolate(model=model, dataset=X_encoded, model_features=features, feature=feature)
pdp.pdp_plot(pdp_dist, feature)
# Automatically change the xticks labels
plt.xticks(category_codes, category_names);
features = ['sex', 'age']
interaction = pdp.pdp_interact(      # use the pdp module imported above
    model=model,
    dataset=X_encoded,
    model_features=X_encoded.columns,
    features=features
)
pdp.pdp_interact_plot(interaction, plot_type='grid', feature_names=features);
# use a different name so we don't shadow the pdp module
pdp_table = interaction.pdp.pivot_table(
    values='preds',
    columns=features[0],  # First feature on x axis
    index=features[1]     # Next feature on y axis
)[::-1]  # Reverse the index order so y axis is ascending
pdp_table = pdp_table.rename(columns=dict(zip(category_codes, category_names)))
plt.figure(figsize=(10,8))
sns.heatmap(pdp_table, annot=True, fmt='.2f', cmap='viridis')
plt.title('Partial Dependence of Titanic survival, on sex & age');
```
# HyperEuler on MNIST-trained Neural ODEs
```
import sys ; sys.path.append('..')
from torchdyn.models import *; from torchdyn import *
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms
import pytorch_lightning as pl
from pytorch_lightning.loggers import WandbLogger
from pytorch_lightning.metrics.functional import accuracy
from tqdm import tqdm_notebook as tqdm
from src.custom_fixed_explicit import ButcherTableau, GenericExplicitButcher
from src.hypersolver import *
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
# smaller batch_size; only needed for visualization. The classification model
# will not be retrained
batch_size=16
size=28
path_to_data='../../data/mnist_data'
all_transforms = transforms.Compose([
transforms.RandomRotation(20),
transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,)),
])
test_transforms = transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,)),
])
train_data = datasets.MNIST(path_to_data, train=True, download=True,
transform=all_transforms)
test_data = datasets.MNIST(path_to_data, train=False,
transform=test_transforms)
trainloader = DataLoader(train_data, batch_size=batch_size, shuffle=True)
testloader = DataLoader(test_data, batch_size=batch_size, shuffle=True)
```
## Loading the pretrained Neural ODE
```
func = nn.Sequential(nn.Conv2d(32, 46, 3, padding=1),
nn.Softplus(),
nn.Conv2d(46, 46, 3, padding=1),
nn.Softplus(),
nn.Conv2d(46, 32, 3, padding=1)
).to(device)
ndes = []
for i in range(1):
ndes.append(NeuralDE(func,
solver='dopri5',
sensitivity='adjoint',
atol=1e-4,
rtol=1e-4,
s_span=torch.linspace(0, 1, 2)).to(device))
#ndes.append(nn.Conv2d(32, 32, 3, padding=1)))
model = nn.Sequential(nn.BatchNorm2d(1),
Augmenter(augment_func=nn.Conv2d(1, 31, 3, padding=1)),
*ndes,
nn.AvgPool2d(28),
#nn.Conv2d(32, 1, 3, padding=1),
nn.Flatten(),
nn.Linear(32, 10)).to(device)
state_dict = torch.load('../pretrained_models/nde_mnist')
# remove state_dict keys for `torchdyn`'s Adjoint nn.Module (not used here)
copy_dict = state_dict.copy()
for key in copy_dict.keys():
if 'adjoint' in key: state_dict.pop(key)
model.load_state_dict(state_dict)
```
### Visualizing pretrained flows
```
x, y = next(iter(trainloader)); x = x.to(device)
for layer in model[:2]: x = layer(x)
model[2].nfe = 0
traj = model[2].trajectory(x, torch.linspace(0, 1, 50)).detach().cpu()
model[2].nfe
```
Pixel-flows of the Neural ODE, solved with `dopri5`
```
fig, axes = plt.subplots(nrows=5, ncols=10, figsize=(22, 10))
K = 4
for i in range(5):
for j in range(10):
im = axes[i][j].imshow(traj[i*5+j, K, 0], cmap='inferno')
fig.tight_layout(w_pad=0)
```
### Defining the HyperSolver class (-- HyperEuler version --)
```
# Butcher tableau defining the explicit Euler method
tableau = ButcherTableau([[0]], [1], [0], [])
euler_solver = GenericExplicitButcher(tableau)
hypersolv_net = nn.Sequential(
nn.Conv2d(32+32+1, 32, 3, stride=1, padding=1),
nn.PReLU(),
nn.Conv2d(32, 32, 3, padding=1),
nn.PReLU(),
nn.Conv2d(32, 32, 3, padding=1)).to(device)
#for p in hypersolv_net.parameters(): torch.nn.init.zeros_(p)
hs = HyperEuler(f=model[2].defunc, g=hypersolv_net)
x0 = torch.zeros(12, 32, 6, 6).to(device)
span = torch.linspace(0, 2, 10).to(device)
traj = model[2].trajectory(x0, span)
res_traj = hs.base_residuals(traj, span)
hyp_res_traj = hs.hypersolver_residuals(traj, span)
hyp_traj = hs.odeint(x0, span)
hyp_traj = hs.odeint(x0, span, use_residual=False).detach().cpu()
etraj = odeint(model[2].defunc, x0, span, method='euler').detach().cpu()
(hyp_traj - etraj).max()
```
### Training the Hypersolver
```
PHASE1_ITERS = 10 # num iters without swapping of the ODE initial condition (new sample)
ITERS = 15000
s_span = torch.linspace(0, 1, 10).to(device)
run_loss = 0.
# using test data for hypersolver training does not cause issues
# or task information leakage; the labels are not utilized in any way
it = iter(trainloader)
X0, Y = next(it)
Y = Y.to(device)
X0 = model[:2](X0.to(device))
model[2].solver = 'dopri5'
traj = model[2].trajectory(X0, s_span)
etraj = odeint(model[2].defunc, X0, s_span, method='euler')
opt = torch.optim.AdamW(hypersolv_net.parameters(), 1e-3, weight_decay=1e-8)
sched = torch.optim.lr_scheduler.CosineAnnealingLR(opt, T_max=ITERS, eta_min=5e-4)
for i in tqdm(range(ITERS)):
ds = s_span[1] - s_span[0]
base_traj = model[2].trajectory(X0, s_span)
residuals = hs.base_residuals(base_traj, s_span).detach()
# Let the model generalize to other ICs after PHASE1_ITERS
if i > PHASE1_ITERS:
if i % 10 == 0: # swapping IC
try:
X0, _ = next(it)
except:
it = iter(trainloader)
X0, _ = next(it)
X0 = model[:2](X0.to(device))
model[2].solver = 'dopri5'
base_traj = model[2].trajectory(X0, s_span)
residuals = hs.base_residuals(base_traj.detach(), s_span).detach()
corrections = hs.hypersolver_residuals(base_traj.detach(), s_span)
loss = torch.norm(corrections - residuals.detach(), p='fro', dim=(3, 4)).mean() * ds**2
loss.backward()
torch.nn.utils.clip_grad_norm_(hypersolv_net.parameters(), 1)
if i % 10 == 0: print(f'\rLoss: {loss}', end='')
opt.step()
sched.step()
opt.zero_grad()
it = iter(testloader)
X0, _ = next(it)
X0 = model[:2](X0.to(device))
steps = 10
s_span = torch.linspace(0, 1, steps)
# dopri traj
model[2].solver = 'dopri5'
traj = model[2].trajectory(X0, s_span).detach().cpu()
# euler traj
model[2].solver = 'euler'
etraj = model[2].trajectory(X0, s_span).detach().cpu()
#etraj = hs.odeint(X0, s_span, use_residual=False).detach().cpu()
straj = hs.odeint(X0, s_span, use_residual=True).detach().cpu()
```
Evolution of absolute error: [Above] HyperEuler, [Below] Euler
```
fig, axes = plt.subplots(nrows=2, ncols=steps-1, figsize=(10, 4))
K = 1
vmin = min(torch.abs(straj[steps-1,:]-traj[steps-1,:]).mean(1)[K].min(),
torch.abs(etraj[steps-1,:]-traj[steps-1,:]).mean(1)[K].min())
vmax = max(torch.abs(straj[steps-1,:]-traj[steps-1,:]).mean(1)[K].max(),
torch.abs(etraj[steps-1,:]-traj[steps-1,:]).mean(1)[K].max())
for i in range(steps-1):
im = axes[0][i].imshow(torch.abs(straj[i+1,:]-traj[i+1,:]).mean(1)[K], cmap='inferno', vmin=vmin, vmax=vmax)
for i in range(steps-1):
im = axes[1][i].imshow(torch.abs(etraj[i+1,:]-traj[i+1,:]).mean(1)[K], cmap='inferno', vmin=vmin, vmax=vmax)
fig.colorbar(im, ax=axes.ravel().tolist(), orientation='horizontal')
#tikz.save('MNIST_interpolation_AE_plot.tex')
```
Evolution of absolute error: HyperEuler (alone). Greater detail
```
fig, axes = plt.subplots(nrows=1, ncols=steps-1, figsize=(10, 4))
for i in range(steps-1):
im = axes[i].imshow(torch.abs(straj[i+1,:]-traj[i+1,:]).mean(1)[K], cmap='inferno')
fig.colorbar(im, ax=axes.ravel().tolist(), orientation='horizontal')
```
### Evaluating ODE solution error
```
x = []
# NOTE: high GPU mem usage for generating data below for plot (on GPU)
# consider using less batches (and iterating) or performing everything on CPU
for i in range(5):
x_b, _ = next(it)
x += [model[:2](x_b.to(device))]
x = torch.cat(x); x.shape
STEPS = range(8, 50)
euler_avg_error, euler_std_error = [], []
hyper_avg_error, hyper_std_error = [], []
midpoint_avg_error, midpoint_std_error = [], []
rk4_avg_error, rk4_std_error = [], []
for step in tqdm(STEPS):
s_span = torch.linspace(0, 1, step)
# dopri traj
model[2].solver = 'dopri5'
traj = model[2].trajectory(x, s_span).detach().cpu()
# euler traj
model[2].solver = 'euler'
etraj = model[2].trajectory(x, s_span).detach().cpu()
# hypersolver
s_span = torch.linspace(0, 1, step)
straj = hs.odeint(x, s_span, use_residual=True).detach().cpu()
#midpoint
model[2].solver = 'midpoint'
s_span = torch.linspace(0, 1, step//2)
mtraj = model[2].trajectory(x, s_span).detach().cpu()
#midpoint
model[2].solver = 'rk4'
s_span = torch.linspace(0, 1, step//4)
rtraj = model[2].trajectory(x, s_span).detach().cpu()
# errors
euler_error = torch.abs((etraj[-1].detach().cpu() - traj[-1].detach().cpu()) / traj[-1].detach().cpu()).sum(1)
hyper_error = torch.abs((straj[-1].detach().cpu() - traj[-1].detach().cpu()) / traj[-1].detach().cpu()).sum(1)
midpoint_error = torch.abs((mtraj[-1].detach().cpu() - traj[-1].detach().cpu()) / traj[-1].detach().cpu()).sum(1)
rk4_error = torch.abs((rtraj[-1].detach().cpu() - traj[-1].detach().cpu()) / traj[-1].detach().cpu()).sum(1)
# mean, stdev
euler_avg_error += [euler_error.mean().item()] ; euler_std_error += [euler_error.mean(dim=1).mean(dim=1).std(0).item()]
hyper_avg_error += [hyper_error.mean().item()] ; hyper_std_error += [hyper_error.mean(dim=1).mean(dim=1).std(0).item()]
midpoint_avg_error += [midpoint_error.mean().item()] ; midpoint_std_error += [midpoint_error.mean(dim=1).mean(dim=1).std(0).item()]
rk4_avg_error += [rk4_error.mean().item()] ; rk4_std_error += [rk4_error.mean(dim=1).mean(dim=1).std(0).item()]
euler_avg_error, euler_std_error = np.array(euler_avg_error), np.array(euler_std_error)
hyper_avg_error, hyper_std_error = np.array(hyper_avg_error), np.array(hyper_std_error)
midpoint_avg_error, midpoint_std_error = np.array(midpoint_avg_error), np.array(midpoint_std_error)
rk4_avg_error, rk4_std_error = np.array(rk4_avg_error), np.array(rk4_std_error)
range_steps = range(8, 50, 1)
fig, ax = plt.subplots(1, 1); fig.set_size_inches(8, 3)
ax.plot(range_steps, euler_avg_error, color='red', linewidth=3, alpha=0.5)
ax.fill_between(range_steps, euler_avg_error-euler_std_error, euler_avg_error+euler_std_error, alpha=0.05, color='red')
ax.plot(range_steps, hyper_avg_error, c='black', linewidth=3, alpha=0.5)
ax.fill_between(range_steps, hyper_avg_error+hyper_std_error, hyper_avg_error-hyper_std_error, alpha=0.05, color='black')
# start from 10 steps, balance the steps
mid_range_steps = range(8, 50, 2)
ax.plot(mid_range_steps, midpoint_avg_error[::2], color='green', linewidth=3, alpha=0.5)
ax.fill_between(mid_range_steps, midpoint_avg_error[::2]-midpoint_std_error[::2], midpoint_avg_error[::2]+midpoint_std_error[::2], alpha=0.1, color='green')
# start from 10 steps, balance the steps
mid_range_steps = range(8, 50, 4)
ax.plot(mid_range_steps, rk4_avg_error[::4], color='gray', linewidth=3, alpha=0.5)
ax.fill_between(mid_range_steps, rk4_avg_error[::4]-rk4_std_error[::4], rk4_avg_error[::4]+rk4_std_error[::4], alpha=0.05, color='gray')
ax.set_ylim(0, 200)
ax.set_xlim(8, 40)
ax.legend(['Euler', 'HyperEuler', 'Midpoint', 'RK4'])
ax.set_xlabel('NFEs')
ax.set_ylabel('Terminal error (MAPE)')
```
<h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#Modeling" data-toc-modified-id="Modeling-1"><span class="toc-item-num">1 </span>Modeling</a></span><ul class="toc-item"><li><span><a href="#Victims" data-toc-modified-id="Victims-1.1"><span class="toc-item-num">1.1 </span>Victims</a></span></li><li><span><a href="#Perpetrators" data-toc-modified-id="Perpetrators-1.2"><span class="toc-item-num">1.2 </span>Perpetrators</a></span></li><li><span><a href="#ViolenceEvent" data-toc-modified-id="ViolenceEvent-1.3"><span class="toc-item-num">1.3 </span>ViolenceEvent</a></span></li></ul></li></ul></div>
```
import sys
sys.version
from pathlib import Path
import pprint
%load_ext cypher
# https://ipython-cypher.readthedocs.io/en/latest/
# used for cell magic
from py2neo import Graph
NEO4J_URI="bolt://localhost:7687"
graph = Graph(NEO4J_URI)
graph
def clear_graph():
print(graph.run("MATCH (n) DETACH DELETE n").stats())
clear_graph()
graph.run("RETURN apoc.version();").data()
graph.run("call dbms.components() yield name, versions, edition unwind versions as version return name, version, edition;").data()
```
# Modeling
```
import pandas as pd
```
We are modeling data from the Pinochet dataset, available at https://github.com/danilofreire/pinochet
> Freire, D., Meadowcroft, J., Skarbek, D., & Guerrero, E.. (2019). Deaths and Disappearances in the Pinochet Regime: A New Dataset. https://doi.org/10.31235/osf.io/vqnwu.
The dataset has 59 variables with information about the victims, the perpetrators, and geographical
coordinates of each incident.
```
PINOCHET_DATA = "../pinochet/data/pinochet.csv"
pin = pd.read_csv(PINOCHET_DATA)
pin.head()
pin.age.isna().sum()
```
The dataset contains information about perpetrators, victims, violence events, and event locations. We will develop models around these concepts and establish relationships between them later.
## Victims
- victim_id*: this is not the same as in the dataset.
- individual_id
- group_id
- first_name
- last_name
- age
- minor
- male
- number_previous_arrests
- occupation
- occupation_detail
- victim_affiliation
- victim_affiliation_detail
- targeted
```
victim_attributes = [
"individual_id",
"group_id",
"first_name",
"last_name",
"age",
"minor",
"male",
"number_previous_arrests",
"occupation",
"occupation_detail",
"victim_affiliation",
"victim_affiliation_detail",
"targeted",
]
pin_victims = pin[victim_attributes]
pin_victims.head()
# https://neo4j.com/docs/labs/apoc/current/import/load-csv/
PINOCHET_CSV_GITHUB = "https://raw.githubusercontent.com/danilofreire/pinochet/master/data/pinochet.csv"
query = """
WITH $url AS url
CALL apoc.load.csv(url)
YIELD lineNo, map, list
RETURN *
LIMIT 1"""
graph.run(query, url = PINOCHET_CSV_GITHUB).data()
%%cypher
CALL apoc.load.csv('pinochet.csv')
YIELD lineNo, map, list
RETURN *
LIMIT 1
```
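A hedged sketch of one way to turn the `pin_victims` subset into `Victim` nodes. The node label and the row-wise `UNWIND ... CREATE` pattern are assumptions for illustration, not the notebook's final `load_csv.cql` query:
```
# Convert NaN (e.g. missing ages) to None so the properties can be stored in Neo4j.
rows = pin_victims.where(pd.notnull(pin_victims), None).to_dict('records')

create_victims = """
UNWIND $rows AS row
CREATE (v:Victim)
SET v += row
"""
graph.run(create_victims, rows=rows).stats()
```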
## Perpetrators
- perpetrator_affiliation
- perpetrator_affiliation_detail
- war_tribunal
```
perpetrators_attributes = [
"perpetrator_affiliation",
"perpetrator_affiliation_detail",
"war_tribunal",
]
pin_perps = pin[perpetrators_attributes]
pin_perps.head()
```
## ViolenceEvent
```
clear_graph()
query = Path("../services/graph-api/project/queries/load_csv.cql").read_text()
# pprint.pprint(query)
graph.run(query, url = PINOCHET_CSV_GITHUB).stats()
```
# Introduction to XGBoost Spark with GPU
The goal of this notebook is to show how to train an XGBoost model with the Spark RAPIDS XGBoost library on GPUs. The dataset used with this notebook is derived from Fannie Mae’s Single-Family Loan Performance Data with all rights reserved by Fannie Mae. This processed dataset is redistributed with permission and consent from Fannie Mae. This notebook uses XGBoost to train a 12-month mortgage loan delinquency prediction model.
A few libraries are required for this notebook:
1. NumPy
2. cudf jar
3. xgboost4j jar
4. xgboost4j-spark jar
5. rapids-4-spark.jar
This notebook also illustrates the ease of porting sample CPU-based Spark xgboost4j code to the GPU. Only one change is required to run Spark XGBoost on the GPU: replacing the CPU API `setFeaturesCol(feature)` with the new API `setFeaturesCols(features)`. This also eliminates the need for vectorization (assembling multiple feature columns into one column), since we can read multiple columns directly.
#### Import All Libraries
```
from ml.dmlc.xgboost4j.scala.spark import XGBoostClassificationModel, XGBoostClassifier
from pyspark.ml.evaluation import MulticlassClassificationEvaluator
from pyspark.sql import SparkSession
from pyspark.sql.types import FloatType, IntegerType, StructField, StructType
from time import time
```
In addition, the CPU version requires two extra libraries:
```Python
from pyspark.ml.feature import VectorAssembler
from pyspark.sql.functions import col
```
#### Create Spark Session and Data Reader
```
spark = SparkSession.builder.getOrCreate()
reader = spark.read
```
#### Specify the Data Schema and Load the Data
```
label = 'delinquency_12'
schema = StructType([
StructField('orig_channel', FloatType()),
StructField('first_home_buyer', FloatType()),
StructField('loan_purpose', FloatType()),
StructField('property_type', FloatType()),
StructField('occupancy_status', FloatType()),
StructField('property_state', FloatType()),
StructField('product_type', FloatType()),
StructField('relocation_mortgage_indicator', FloatType()),
StructField('seller_name', FloatType()),
StructField('mod_flag', FloatType()),
StructField('orig_interest_rate', FloatType()),
StructField('orig_upb', IntegerType()),
StructField('orig_loan_term', IntegerType()),
StructField('orig_ltv', FloatType()),
StructField('orig_cltv', FloatType()),
StructField('num_borrowers', FloatType()),
StructField('dti', FloatType()),
StructField('borrower_credit_score', FloatType()),
StructField('num_units', IntegerType()),
StructField('zip', IntegerType()),
StructField('mortgage_insurance_percent', FloatType()),
StructField('current_loan_delinquency_status', IntegerType()),
StructField('current_actual_upb', FloatType()),
StructField('interest_rate', FloatType()),
StructField('loan_age', FloatType()),
StructField('msa', FloatType()),
StructField('non_interest_bearing_upb', FloatType()),
StructField(label, IntegerType()),
])
features = [ x.name for x in schema if x.name != label ]
train_data = reader.schema(schema).option('header', True).csv('/data/mortgage/csv/train')
trans_data = reader.schema(schema).option('header', True).csv('/data/mortgage/csv/test')
```
Note that for the CPU version, vectorization is required before fitting the data to the classifier, which means you need to assemble all feature columns into one column.
```Python
def vectorize(data_frame):
to_floats = [ col(x.name).cast(FloatType()) for x in data_frame.schema ]
return (VectorAssembler()
.setInputCols(features)
.setOutputCol('features')
.transform(data_frame.select(to_floats))
.select(col('features'), col(label)))
train_data = vectorize(train_data)
trans_data = vectorize(trans_data)
```
#### Create a XGBoostClassifier
```
params = {
'eta': 0.1,
'gamma': 0.1,
'missing': 0.0,
'treeMethod': 'gpu_hist',
'maxDepth': 10,
'maxLeaves': 256,
'objective':'binary:logistic',
'growPolicy': 'depthwise',
'minChildWeight': 30.0,
'lambda_': 1.0,
'scalePosWeight': 2.0,
'subsample': 1.0,
'nthread': 1,
'numRound': 100,
'numWorkers': 1,
}
classifier = XGBoostClassifier(**params).setLabelCol(label).setFeaturesCols(features)
```
The CPU version classifier provides the API `setFeaturesCol` which only accepts a single column name, so vectorization for multiple feature columns is required.
```Python
classifier = XGBoostClassifier(**params).setLabelCol(label).setFeaturesCol('features')
```
The parameter `numWorkers` should be set to the number of GPUs in the Spark cluster for the GPU version, while for the CPU version it is usually equal to the number of CPU cores.
Concerning the tree method, the GPU version currently only supports `gpu_hist`, while `hist` is the method designed for and used in CPU training.
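For reference, a CPU run would roughly amount to changing those two entries before building the classifier (the worker count below is illustrative):
```Python
params['treeMethod'] = 'hist'   # CPU tree method
params['numWorkers'] = 12       # e.g. the number of available CPU cores
classifier = XGBoostClassifier(**params).setLabelCol(label).setFeaturesCol('features')
```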
#### Train the Data with Benchmark
```
def with_benchmark(phrase, action):
start = time()
result = action()
end = time()
print('{} takes {} seconds'.format(phrase, round(end - start, 2)))
return result
model = with_benchmark('Training', lambda: classifier.fit(train_data))
```
#### Save and Reload the Model
```
model.write().overwrite().save('/data/new-model-path')
loaded_model = XGBoostClassificationModel().load('/data/new-model-path')
```
#### Transformation and Show Result Sample
```
def transform():
result = loaded_model.transform(trans_data).cache()
result.foreachPartition(lambda _: None)
return result
result = with_benchmark('Transformation', transform)
result.select(label, 'rawPrediction', 'probability', 'prediction').show(5)
```
#### Evaluation
```
accuracy = with_benchmark(
'Evaluation',
lambda: MulticlassClassificationEvaluator().setLabelCol(label).evaluate(result))
print('Accuracy is ' + str(accuracy))
spark.stop()
```
<h1 align="center">SimpleITK Spatial Transformations</h1>
**Summary:**
1. Points are represented by vector-like data types: Tuple, Numpy array, List.
2. Matrices are represented by vector-like data types in row major order.
3. Default transformation initialization as the identity transform.
4. Angles specified in radians, distances specified in unknown but consistent units (nm,mm,m,km...).
5. All global transformations **except translation** are of the form:
$$T(\mathbf{x}) = A(\mathbf{x}-\mathbf{c}) + \mathbf{t} + \mathbf{c}$$
Nomenclature (when printing your transformation):
* Matrix: the matrix $A$
* Center: the point $\mathbf{c}$
* Translation: the vector $\mathbf{t}$
* Offset: $\mathbf{t} + \mathbf{c} - A\mathbf{c}$
6. Bounded transformations, BSplineTransform and DisplacementFieldTransform, behave as the identity transform outside the defined bounds.
7. DisplacementFieldTransform:
* Initializing the DisplacementFieldTransform using an image requires that the image's pixel type be sitk.sitkVectorFloat64.
* Initializing the DisplacementFieldTransform using an image will "clear out" your image (your alias to the image will point to an empty, zero sized, image).
8. Composite transformations are applied in stack order (first added, last applied).
## Transformation Types
SimpleITK supports the following transformation types.
<table width="100%">
<tr><td><a href="http://www.itk.org/Doxygen/html/classitk_1_1TranslationTransform.html">TranslationTransform</a></td><td>2D or 3D, translation</td></tr>
<tr><td><a href="http://www.itk.org/Doxygen/html/classitk_1_1VersorTransform.html">VersorTransform</a></td><td>3D, rotation represented by a versor</td></tr>
<tr><td><a href="http://www.itk.org/Doxygen/html/classitk_1_1VersorRigid3DTransform.html">VersorRigid3DTransform</a></td><td>3D, rigid transformation with rotation represented by a versor</td></tr>
<tr><td><a href="http://www.itk.org/Doxygen/html/classitk_1_1Euler2DTransform.html">Euler2DTransform</a></td><td>2D, rigid transformation with rotation represented by a Euler angle</td></tr>
<tr><td><a href="http://www.itk.org/Doxygen/html/classitk_1_1Euler3DTransform.html">Euler3DTransform</a></td><td>3D, rigid transformation with rotation represented by Euler angles</td></tr>
<tr><td><a href="http://www.itk.org/Doxygen/html/classitk_1_1Similarity2DTransform.html">Similarity2DTransform</a></td><td>2D, composition of isotropic scaling and rigid transformation with rotation represented by a Euler angle</td></tr>
<tr><td><a href="http://www.itk.org/Doxygen/html/classitk_1_1Similarity3DTransform.html">Similarity3DTransform</a></td><td>3D, composition of isotropic scaling and rigid transformation with rotation represented by a versor</td></tr>
<tr><td><a href="http://www.itk.org/Doxygen/html/classitk_1_1ScaleTransform.html">ScaleTransform</a></td><td>2D or 3D, anisotropic scaling</td></tr>
<tr><td><a href="http://www.itk.org/Doxygen/html/classitk_1_1ScaleVersor3DTransform.html">ScaleVersor3DTransform</a></td><td>3D, rigid transformation and anisotropic scale is <b>added</b> to the rotation matrix part (not composed as one would expect)</td></tr>
<tr><td><a href="http://www.itk.org/Doxygen/html/classitk_1_1ScaleSkewVersor3DTransform.html">ScaleSkewVersor3DTransform</a></td><td>3D, rigid transformation with anisotropic scale and skew matrices <b>added</b> to the rotation matrix part (not composed as one would expect)</td></tr>
<tr><td><a href="http://www.itk.org/Doxygen/html/classitk_1_1AffineTransform.html">AffineTransform</a></td><td>2D or 3D, affine transformation.</td></tr>
<tr><td><a href="http://www.itk.org/Doxygen/html/classitk_1_1BSplineTransform.html">BSplineTransform</a></td><td>2D or 3D, deformable transformation represented by a sparse regular grid of control points. </td></tr>
<tr><td><a href="http://www.itk.org/Doxygen/html/classitk_1_1DisplacementFieldTransform.html">DisplacementFieldTransform</a></td><td>2D or 3D, deformable transformation represented as a dense regular grid of vectors.</td></tr>
<tr><td><a href="http://www.itk.org/SimpleITKDoxygen/html/classitk_1_1simple_1_1Transform.html">Transform</a></td>
<td>A generic transformation. Can represent any of the SimpleITK transformations, and a <b>composite transformation</b> (stack of transformations concatenated via composition, last added, first applied). </td></tr>
</table>
```
import SimpleITK as sitk
import utilities as util
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from ipywidgets import interact, fixed
OUTPUT_DIR = "output"
```
We will introduce the transformation types, starting with translation and illustrating how to move from a lower to higher parameter space (e.g. translation to rigid).
We start with the global transformations. All of them <b>except translation</b> are of the form:
$$T(\mathbf{x}) = A(\mathbf{x}-\mathbf{c}) + \mathbf{t} + \mathbf{c}$$
In ITK speak (when printing your transformation):
<ul>
<li>Matrix: the matrix $A$</li>
<li>Center: the point $\mathbf{c}$</li>
<li>Translation: the vector $\mathbf{t}$</li>
<li>Offset: $\mathbf{t} + \mathbf{c} - A\mathbf{c}$</li>
</ul>
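As a small illustration of the Offset entry (a sketch, not part of the original notebook): since $T(\mathbf{0}) = -A\mathbf{c} + \mathbf{t} + \mathbf{c}$, transforming the origin should reproduce the offset computed from the matrix, center, and translation.
```
# Sketch: verify Offset = t + c - A*c by transforming the origin.
tx = sitk.Euler2DTransform()
tx.SetAngle(np.pi/4)
tx.SetCenter((3.0, 2.0))
tx.SetTranslation((1.0, -1.0))

A = np.array(tx.GetMatrix()).reshape(2, 2)
c = np.array(tx.GetCenter())
t = np.array(tx.GetTranslation())
offset = t + c - A.dot(c)
print(offset, tx.TransformPoint((0.0, 0.0)))  # the two should agree
```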
## TranslationTransform
Create a translation and then transform a point and use the inverse transformation to get the original back.
```
dimension = 2
offset = [2]*dimension # use a Python trick to create the offset list based on the dimension
translation = sitk.TranslationTransform(dimension, offset)
print(translation)
point = [10, 11] if dimension==2 else [10, 11, 12] # set point to match dimension
transformed_point = translation.TransformPoint(point)
translation_inverse = translation.GetInverse()
print('original point: ' + util.point2str(point) + '\n'
'transformed point: ' + util.point2str(transformed_point) + '\n'
'back to original: ' + util.point2str(translation_inverse.TransformPoint(transformed_point)))
```
## Euler2DTransform
Rigidly transform a 2D point using a Euler angle parameter specification.
Notice that the dimensionality of the Euler angle based rigid transformation is associated with the class, unlike the translation which is set at construction.
```
point = [10, 11]
rotation2D = sitk.Euler2DTransform()
rotation2D.SetTranslation((7.2, 8.4))
rotation2D.SetAngle(np.pi/2)
print('original point: ' + util.point2str(point) + '\n'
'transformed point: ' + util.point2str(rotation2D.TransformPoint(point)))
```
## VersorTransform (rotation in 3D)
Rotation using a versor, vector part of unit quaternion, parameterization. Quaternion defined by rotation of $\theta$ radians around axis $n$, is $q = [n*\sin(\frac{\theta}{2}), \cos(\frac{\theta}{2})]$.
```
# Use a versor:
rotation1 = sitk.VersorTransform([0,0,1,0])
# Use axis-angle:
rotation2 = sitk.VersorTransform((0,0,1), np.pi)
# Use a matrix:
rotation3 = sitk.VersorTransform()
rotation3.SetMatrix([-1, 0, 0, 0, -1, 0, 0, 0, 1]);
point = (10, 100, 1000)
p1 = rotation1.TransformPoint(point)
p2 = rotation2.TransformPoint(point)
p3 = rotation3.TransformPoint(point)
print('Points after transformation:\np1=' + str(p1) +
'\np2='+ str(p2) + '\np3='+ str(p3))
```
## Translation to Rigid [3D]
We only need to copy the translational component.
```
dimension = 3
t =(1,2,3)
translation = sitk.TranslationTransform(dimension, t)
# Copy the translational component.
rigid_euler = sitk.Euler3DTransform()
rigid_euler.SetTranslation(translation.GetOffset())
# Apply the transformations to the same set of random points and compare the results.
util.print_transformation_differences(translation, rigid_euler)
```
## Rotation to Rigid [3D]
Copy the matrix or versor and <b>center of rotation</b>.
```
rotation_center = (10, 10, 10)
rotation = sitk.VersorTransform([0,0,1,0], rotation_center)
rigid_versor = sitk.VersorRigid3DTransform()
rigid_versor.SetRotation(rotation.GetVersor())
#rigid_versor.SetCenter(rotation.GetCenter()) #intentional error, not copying center of rotation
# Apply the transformations to the same set of random points and compare the results.
util.print_transformation_differences(rotation, rigid_versor)
```
In the cell above, when we don't copy the center of rotation we have a constant error vector, $\mathbf{c}$ - A$\mathbf{c}$.
## Similarity [2D]
When the center of the similarity transformation is not at the origin the effect of the transformation is not what most of us expect. This is readily visible if we limit the transformation to scaling: $T(\mathbf{x}) = s\mathbf{x}-s\mathbf{c} + \mathbf{c}$. Changing the transformation's center results in scale + translation.
```
def display_center_effect(x, y, tx, point_list, xlim, ylim):
tx.SetCenter((x,y))
transformed_point_list = [ tx.TransformPoint(p) for p in point_list]
plt.scatter(list(np.array(transformed_point_list).T)[0],
list(np.array(transformed_point_list).T)[1],
marker='^',
color='red', label='transformed points')
plt.scatter(list(np.array(point_list).T)[0],
list(np.array(point_list).T)[1],
marker='o',
color='blue', label='original points')
plt.xlim(xlim)
plt.ylim(ylim)
plt.legend(loc=(0.25,1.01))
# 2D square centered on (0,0)
points = [np.array((-1.0,-1.0)), np.array((-1.0,1.0)), np.array((1.0,1.0)), np.array((1.0,-1.0))]
# Scale by 2
similarity = sitk.Similarity2DTransform();
similarity.SetScale(2)
interact(display_center_effect, x=(-10,10), y=(-10,10),tx = fixed(similarity), point_list = fixed(points),
xlim = fixed((-10,10)),ylim = fixed((-10,10)));
```
## Rigid to Similarity [3D]
Copy the translation, center, and matrix or versor.
```
rotation_center = (100, 100, 100)
theta_x = 0.0
theta_y = 0.0
theta_z = np.pi/2.0
translation = (1,2,3)
rigid_euler = sitk.Euler3DTransform(rotation_center, theta_x, theta_y, theta_z, translation)
similarity = sitk.Similarity3DTransform()
similarity.SetMatrix(rigid_euler.GetMatrix())
similarity.SetTranslation(rigid_euler.GetTranslation())
similarity.SetCenter(rigid_euler.GetCenter())
# Apply the transformations to the same set of random points and compare the results.
util.print_transformation_differences(rigid_euler, similarity)
```
## Similarity to Affine [3D]
Copy the translation, center and matrix.
```
rotation_center = (100, 100, 100)
axis = (0,0,1)
angle = np.pi/2.0
translation = (1,2,3)
scale_factor = 2.0
similarity = sitk.Similarity3DTransform(scale_factor, axis, angle, translation, rotation_center)
affine = sitk.AffineTransform(3)
affine.SetMatrix(similarity.GetMatrix())
affine.SetTranslation(similarity.GetTranslation())
affine.SetCenter(similarity.GetCenter())
# Apply the transformations to the same set of random points and compare the results.
util.print_transformation_differences(similarity, affine)
```
## Scale Transform
Just as the case was for the similarity transformation above, when the transformations center is not at the origin, instead of a pure anisotropic scaling we also have translation ($T(\mathbf{x}) = \mathbf{s}^T\mathbf{x}-\mathbf{s}^T\mathbf{c} + \mathbf{c}$).
```
# 2D square centered on (0,0).
points = [np.array((-1.0,-1.0)), np.array((-1.0,1.0)), np.array((1.0,1.0)), np.array((1.0,-1.0))]
# Scale by half in x and 2 in y.
scale = sitk.ScaleTransform(2, (0.5,2));
# Interactively change the location of the center.
interact(display_center_effect, x=(-10,10), y=(-10,10),tx = fixed(scale), point_list = fixed(points),
xlim = fixed((-10,10)),ylim = fixed((-10,10)));
```
## Unintentional Misnomers (originally from ITK)
Two transformation types whose names may mislead you are ScaleVersor and ScaleSkewVersor. Basing your choices on expectations without reading the documentation will surprise you.
ScaleVersor - based on name expected a composition of transformations, in practice it is:
$$T(x) = (R+S)(\mathbf{x}-\mathbf{c}) + \mathbf{t} + \mathbf{c},\;\; \textrm{where } S= \left[\begin{array}{ccc} s_0-1 & 0 & 0 \\ 0 & s_1-1 & 0 \\ 0 & 0 & s_2-1 \end{array}\right]$$
ScaleSkewVersor - based on name expected a composition of transformations, in practice it is:
$$T(x) = (R+S+K)(\mathbf{x}-\mathbf{c}) + \mathbf{t} + \mathbf{c},\;\; \textrm{where } S = \left[\begin{array}{ccc} s_0-1 & 0 & 0 \\ 0 & s_1-1 & 0 \\ 0 & 0 & s_2-1 \end{array}\right]\;\; \textrm{and } K = \left[\begin{array}{ccc} 0 & k_0 & k_1 \\ k_2 & 0 & k_3 \\ k_4 & k_5 & 0 \end{array}\right]$$
Note that ScaleSkewVersor is an over-parametrized version of the affine transform, using 15 parameters (scale, skew, versor, translation) vs. 12 parameters (matrix, translation).
## Bounded Transformations
SimpleITK supports two types of bounded non-rigid transformations, BSplineTransform (sparse representation) and DisplacementFieldTransform (dense representation).
Transforming a point that is outside the bounds will return the original point - identity transform.
## BSpline
Using a sparse set of control points to control a free form deformation. Using the cell below it is clear that the BSplineTransform allows for folding and tearing.
```
# Create the transformation (when working with images it is easier to use the BSplineTransformInitializer function
# or its object oriented counterpart BSplineTransformInitializerFilter).
dimension = 2
spline_order = 3
direction_matrix_row_major = [1.0,0.0,0.0,1.0] # identity, mesh is axis aligned
origin = [-1.0,-1.0]
domain_physical_dimensions = [2,2]
bspline = sitk.BSplineTransform(dimension, spline_order)
bspline.SetTransformDomainOrigin(origin)
bspline.SetTransformDomainDirection(direction_matrix_row_major)
bspline.SetTransformDomainPhysicalDimensions(domain_physical_dimensions)
bspline.SetTransformDomainMeshSize((4,3))
# Random displacement of the control points.
originalControlPointDisplacements = np.random.random(len(bspline.GetParameters()))
bspline.SetParameters(originalControlPointDisplacements)
# Apply the BSpline transformation to a grid of points
# starting the point set exactly at the origin of the BSpline mesh is problematic as
# these points are considered outside the transformation's domain,
# remove epsilon below and see what happens.
numSamplesX = 10
numSamplesY = 20
coordsX = np.linspace(origin[0]+np.finfo(float).eps, origin[0] + domain_physical_dimensions[0], numSamplesX)
coordsY = np.linspace(origin[1]+np.finfo(float).eps, origin[1] + domain_physical_dimensions[1], numSamplesY)
XX, YY = np.meshgrid(coordsX, coordsY)
interact(util.display_displacement_scaling_effect, s= (-1.5,1.5), original_x_mat = fixed(XX), original_y_mat = fixed(YY),
tx = fixed(bspline), original_control_point_displacements = fixed(originalControlPointDisplacements));
```
## DisplacementField
A dense set of vectors representing the displacement inside the given domain. The most generic representation of a transformation.
```
# Create the displacement field.
# When working with images the safer thing to do is use the image based constructor,
# sitk.DisplacementFieldTransform(my_image), all the fixed parameters will be set correctly and the displacement
# field is initialized using the vectors stored in the image. SimpleITK requires that the image's pixel type be
# sitk.sitkVectorFloat64.
displacement = sitk.DisplacementFieldTransform(2)
field_size = [10,20]
field_origin = [-1.0,-1.0]
field_spacing = [2.0/9.0,2.0/19.0]
field_direction = [1,0,0,1] # direction cosine matrix (row major order)
# Concatenate all the information into a single list
displacement.SetFixedParameters(field_size+field_origin+field_spacing+field_direction)
# Set the interpolator, either sitkLinear which is default or nearest neighbor
displacement.SetInterpolator(sitk.sitkNearestNeighbor)
originalDisplacements = np.random.random(len(displacement.GetParameters()))
displacement.SetParameters(originalDisplacements)
coordsX = np.linspace(field_origin[0], field_origin[0]+(field_size[0]-1)*field_spacing[0], field_size[0])
coordsY = np.linspace(field_origin[1], field_origin[1]+(field_size[1]-1)*field_spacing[1], field_size[1])
XX, YY = np.meshgrid(coordsX, coordsY)
interact(util.display_displacement_scaling_effect, s= (-1.5,1.5), original_x_mat = fixed(XX), original_y_mat = fixed(YY),
tx = fixed(displacement), original_control_point_displacements = fixed(originalDisplacements));
```
## Composite transform (Transform)
The generic SimpleITK transform class. This class can represent both a single transformation (global, local), or a composite transformation (multiple transformations applied one after the other). This is the output typed returned by the SimpleITK registration framework.
The choice of whether to use a composite transformation or compose transformations on your own has subtle differences in the registration framework.
Composite transforms enable a combination of a global transformation with multiple local/bounded transformations. This is useful if we want to apply deformations only in regions that deform, while other regions are only affected by the global transformation.
The following code illustrates this, where the whole region is translated and subregions have different deformations.
```
# Global transformation.
translation = sitk.TranslationTransform(2,(1.0,0.0))
# Displacement in region 1.
displacement1 = sitk.DisplacementFieldTransform(2)
field_size = [10,20]
field_origin = [-1.0,-1.0]
field_spacing = [2.0/9.0,2.0/19.0]
field_direction = [1,0,0,1] # direction cosine matrix (row major order)
# Concatenate all the information into a single list.
displacement1.SetFixedParameters(field_size+field_origin+field_spacing+field_direction)
displacement1.SetParameters(np.ones(len(displacement1.GetParameters())))
# Displacement in region 2.
displacement2 = sitk.DisplacementFieldTransform(2)
field_size = [10,20]
field_origin = [1.0,-3]
field_spacing = [2.0/9.0,2.0/19.0]
field_direction = [1,0,0,1] #direction cosine matrix (row major order)
# Concatenate all the information into a single list.
displacement2.SetFixedParameters(field_size+field_origin+field_spacing+field_direction)
displacement2.SetParameters(-1.0*np.ones(len(displacement2.GetParameters())))
# Composite transform which applies the global and local transformations.
composite = sitk.Transform(translation)
composite.AddTransform(displacement1)
composite.AddTransform(displacement2)
# Apply the composite transformation to points in ([-1,-3],[3,1]) and
# display the deformation using a quiver plot.
# Generate points.
numSamplesX = 10
numSamplesY = 10
coordsX = np.linspace(-1.0, 3.0, numSamplesX)
coordsY = np.linspace(-3.0, 1.0, numSamplesY)
XX, YY = np.meshgrid(coordsX, coordsY)
# Transform points and compute deformation vectors.
pointsX = np.zeros(XX.shape)
pointsY = np.zeros(XX.shape)
for index, value in np.ndenumerate(XX):
px,py = composite.TransformPoint((value, YY[index]))
pointsX[index]=px - value
pointsY[index]=py - YY[index]
plt.quiver(XX, YY, pointsX, pointsY);
```
## Writing and Reading
The SimpleITK.ReadTransform() returns a SimpleITK.Transform . The content of the file can be any of the SimpleITK transformations or a composite (set of transformations).
```
import os
# Create a 2D rigid transformation, write it to disk and read it back.
basic_transform = sitk.Euler2DTransform()
basic_transform.SetTranslation((1.0,2.0))
basic_transform.SetAngle(np.pi/2)
full_file_name = os.path.join(OUTPUT_DIR, 'euler2D.tfm')
sitk.WriteTransform(basic_transform, full_file_name)
# The ReadTransform function returns an sitk.Transform no matter the type of the transform
# found in the file (global, bounded, composite).
read_result = sitk.ReadTransform(full_file_name)
print('Different types: '+ str(type(read_result) != type(basic_transform)))
util.print_transformation_differences(basic_transform, read_result)
# Create a composite transform then write and read.
displacement = sitk.DisplacementFieldTransform(2)
field_size = [10,20]
field_origin = [-10.0,-100.0]
field_spacing = [20.0/(field_size[0]-1),200.0/(field_size[1]-1)]
field_direction = [1,0,0,1] #direction cosine matrix (row major order)
# Concatenate all the information into a single list.
displacement.SetFixedParameters(field_size+field_origin+field_spacing+field_direction)
displacement.SetParameters(np.random.random(len(displacement.GetParameters())))
composite_transform = sitk.Transform(basic_transform)
composite_transform.AddTransform(displacement)
full_file_name = os.path.join(OUTPUT_DIR, 'composite.tfm')
sitk.WriteTransform(composite_transform, full_file_name)
read_result = sitk.ReadTransform(full_file_name)
util.print_transformation_differences(composite_transform, read_result)
```
<a href="02_images_and_resampling.ipynb"><h2 align=right>Next »</h2></a>
<div class="alert alert-block alert-info">
<font size="5"><b><center> Section 5</font></center>
<br>
<font size="5"><b><center>Recurrent Neural Network in PyTorch with an Introduction to Natural Language Processing</font></center>
</div>
Credit: This example is obtained from the following book:
Subramanian, Vishnu. 2018. "*Deep Learning with PyTorch: A Practical Approach to Building Neural Network Models Using PyTorch.*" Birmingham, U.K., Packt Publishing.
# Simple Text Processing
## Typically Data Preprocessing Steps before Modeling Training for NLP Applications
* Read the data from disk
* Tokenize the text
* Create a mapping from word to a unique integer
* Convert the text into lists of integers
* Load the data in whatever format your deep learning framework requires
* Pad the text so that all the sequences are the same length, so you can process them in batch
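In plain Python, the first few of these steps might look like the following sketch (the sentences and vocabulary are made up for illustration; torchtext automates all of this in the code below):
```
corpus = ["the movie was great", "the plot was thin"]

# Tokenize the text
tokenized = [sentence.split() for sentence in corpus]

# Create a mapping from word to a unique integer
vocab = {word: idx for idx, word in enumerate(sorted({w for s in tokenized for w in s}))}

# Convert the text into lists of integers
encoded = [[vocab[word] for word in sentence] for sentence in tokenized]
print(vocab)
print(encoded)
```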
## Word Embedding
Word embedding is a very popular way of representing text data in problems that are solved by deep learning algorithms
Word embedding provides a dense representation of a word filled with floating-point numbers.
It drastically reduces the dimensionality compared with a one-hot representation over the dictionary.
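A minimal illustration of such a dense representation (the vocabulary size and embedding dimension below are hypothetical; the `IMDBRnn` model later uses `nn.Embedding` in the same way):
```
import torch
import torch.nn as nn

embedding = nn.Embedding(num_embeddings=10000, embedding_dim=300)  # 10k-word vocab, 300-d vectors
word_ids = torch.tensor([4, 21, 7])    # three integer-encoded tokens
dense = embedding(word_ids)            # shape: (3, 300), dense floating-point vectors
print(dense.shape)
```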
### `Torchtext` and Training word embedding by building a sentiment classifier
Torchtext takes a declarative approach to loading its data:
* you tell torchtext how you want the data to look like, and torchtext handles it for you
* Declaring a Field: The Field specifies how you want a certain field to be processed
The `Field` class is a fundamental component of torchtext and is what makes preprocessing very easy
### Load `torchtext.datasets`
# Use LSTM for Sentiment Classification
1. Preparing the data
2. Creating the batches
3. Creating the network
4. Training the model
```
from torchtext import data, datasets
from torchtext.vocab import GloVe,FastText,CharNGram
TEXT = data.Field(lower=True, fix_length=100,batch_first=False)
LABEL = data.Field(sequential=False,)
train, test = datasets.imdb.IMDB.splits(TEXT, LABEL)
TEXT.build_vocab(train, vectors=GloVe(name='6B', dim=300),max_size=10000,min_freq=10)
LABEL.build_vocab(train,)
len(TEXT.vocab.vectors)
train_iter, test_iter = data.BucketIterator.splits((train, test), batch_size=32, device=-1)
train_iter.repeat = False
test_iter.repeat = False
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F
from torch.autograd import Variable
class IMDBRnn(nn.Module):
def __init__(self,vocab,hidden_size,n_cat,bs=1,nl=2):
super().__init__()
self.hidden_size = hidden_size
self.bs = bs
self.nl = nl
        self.e = nn.Embedding(vocab, hidden_size)  # use the vocabulary size passed to the constructor
self.rnn = nn.LSTM(hidden_size,hidden_size,nl)
self.fc2 = nn.Linear(hidden_size,n_cat)
self.softmax = nn.LogSoftmax(dim=-1)
def forward(self,inp):
bs = inp.size()[1]
if bs != self.bs:
self.bs = bs
e_out = self.e(inp)
h0 = c0 = Variable(e_out.data.new(*(self.nl,self.bs,self.hidden_size)).zero_())
rnn_o,_ = self.rnn(e_out,(h0,c0))
rnn_o = rnn_o[-1]
        fc = F.dropout(self.fc2(rnn_o), p=0.8, training=self.training)  # only apply dropout while training
return self.softmax(fc)
n_vocab = len(TEXT.vocab)
n_hidden = 100
model = IMDBRnn(n_vocab,n_hidden,n_cat=3,bs=32)
#model = model.cuda()
optimizer = optim.Adam(model.parameters(),lr=1e-3)
def fit(epoch,model,data_loader,phase='training',volatile=False):
if phase == 'training':
model.train()
if phase == 'validation':
model.eval()
volatile=True
running_loss = 0.0
running_correct = 0
for batch_idx , batch in enumerate(data_loader):
text , target = batch.text , batch.label
# if is_cuda:
# text,target = text.cuda(),target.cuda()
if phase == 'training':
optimizer.zero_grad()
output = model(text)
loss = F.nll_loss(output,target)
#running_loss += F.nll_loss(output,target,size_average=False).data[0]
running_loss += F.nll_loss(output,target,size_average=False).data
preds = output.data.max(dim=1,keepdim=True)[1]
running_correct += preds.eq(target.data.view_as(preds)).cpu().sum()
if phase == 'training':
loss.backward()
optimizer.step()
loss = running_loss/len(data_loader.dataset)
accuracy = 100. * running_correct/len(data_loader.dataset)
print("epoch: ", epoch, "loss: ", loss, "accuracy: ", accuracy)
#print(f'{phase} loss is {loss:{5}.{2}} and {phase} accuracy is {running_correct}/{len(data_loader.dataset)}{accuracy:{10}.{4}}')
return loss,accuracy
import time
start = time.time()
train_losses , train_accuracy = [],[]
val_losses , val_accuracy = [],[]
for epoch in range(1,20):
epoch_loss, epoch_accuracy = fit(epoch,model,train_iter,phase='training')
val_epoch_loss , val_epoch_accuracy = fit(epoch,model,test_iter,phase='validation')
train_losses.append(epoch_loss)
train_accuracy.append(epoch_accuracy)
val_losses.append(val_epoch_loss)
val_accuracy.append(val_epoch_accuracy)
end = time.time()
print((end-start)/60)
print("Execution Time: ", round(((end-start)/60),1), "minutes")
import matplotlib.pyplot as plt
%matplotlib inline
plt.plot(range(1,len(train_losses)+1),train_losses,'bo',label = 'training loss')
plt.plot(range(1,len(val_losses)+1),val_losses,'r',label = 'validation loss')
plt.legend()
plt.plot(range(1,len(train_accuracy)+1),train_accuracy,'bo',label = 'train accuracy')
plt.plot(range(1,len(val_accuracy)+1),val_accuracy,'r',label = 'val accuracy')
plt.legend()
```
```
import numpy as np
import pandas as pd
import holoviews as hv
import networkx as nx
from holoviews import opts
hv.extension('bokeh')
defaults = dict(width=400, height=400)
hv.opts.defaults(
opts.EdgePaths(**defaults), opts.Graph(**defaults), opts.Nodes(**defaults))
```
Visualizing and working with network graphs is a common problem in many different disciplines. HoloViews provides the ability to represent and visualize graphs very simply and easily with facilities for interactively exploring the nodes and edges of the graph, especially using the bokeh plotting interface.
The ``Graph`` ``Element`` differs from other elements in HoloViews in that it consists of multiple sub-elements. The data of the ``Graph`` element itself are the abstract edges between the nodes. By default the element will automatically compute concrete ``x`` and ``y`` positions for the nodes and represent them using a ``Nodes`` element, which is stored on the Graph. The abstract edges and concrete node positions are sufficient to render the ``Graph`` by drawing straight-line edges between the nodes. In order to supply explicit edge paths we can also declare ``EdgePaths``, providing explicit coordinates for each edge to follow.
To summarize a ``Graph`` consists of three different components:
* The ``Graph`` itself holds the abstract edges stored as a table of node indices.
* The ``Nodes`` hold the concrete ``x`` and ``y`` positions of each node along with a node ``index``. The ``Nodes`` may also define any number of value dimensions, which can be revealed when hovering over the nodes or to color the nodes by.
* The ``EdgePaths`` can optionally be supplied to declare explicit node paths.
#### A simple Graph
Let's start by declaring a very simple graph connecting one node to all others. If we simply supply the abstract connectivity of the ``Graph``, it will automatically compute a layout for the nodes using the ``layout_nodes`` operation, which defaults to a circular layout:
```
# Declare abstract edges
N = 8
node_indices = np.arange(N, dtype=np.int32)
source = np.zeros(N, dtype=np.int32)
target = node_indices
simple_graph = hv.Graph(((source, target),))
simple_graph
```
#### Accessing the nodes and edges
We can easily access the ``Nodes`` and ``EdgePaths`` on the ``Graph`` element using the corresponding properties:
```
simple_graph.nodes + simple_graph.edgepaths
```
#### Displaying directed graphs
When specifying the graph edges, the source and target nodes are listed in order. If the graph is actually a directed graph, this ordering may be used to indicate the directionality of the graph. By setting ``directed=True`` as a plot option it is possible to indicate the directionality of each edge using an arrow:
```
simple_graph.relabel('Directed Graph').opts(directed=True, node_size=5, arrowhead_length=0.05)
```
The length of the arrows can be set as a fraction of the overall graph extent using the ``arrowhead_length`` option.
#### Supplying explicit paths
Next we will extend this example by supplying explicit edges:
```
def bezier(start, end, control, steps=np.linspace(0, 1, 100)):
return (1-steps)**2*start + 2*(1-steps)*steps*control+steps**2*end
x, y = simple_graph.nodes.array([0, 1]).T
paths = []
for node_index in node_indices:
ex, ey = x[node_index], y[node_index]
paths.append(np.column_stack([bezier(x[0], ex, 0), bezier(y[0], ey, 0)]))
bezier_graph = hv.Graph(((source, target), (x, y, node_indices), paths))
bezier_graph
```
## Interactive features
#### Hover and selection policies
Thanks to Bokeh we can reveal more about the graph by hovering over the nodes and edges. The ``Graph`` element provides an ``inspection_policy`` and a ``selection_policy``, which define whether hovering and selection highlight edges associated with the selected node or nodes associated with the selected edge. These policies can be toggled by setting the policy to ``'nodes'`` (the default) or ``'edges'``.
```
bezier_graph.relabel('Edge Inspection').opts(inspection_policy='edges')
```
In addition to changing the policy we can also change the colors used when hovering and selecting nodes:
```
bezier_graph.opts(
opts.Graph(inspection_policy='nodes', tools=['hover', 'box_select'],
edge_hover_line_color='green', node_hover_fill_color='red'))
```
#### Additional information
We can also associate additional information with the nodes and edges of a graph. By constructing the ``Nodes`` explicitly we can declare additional value dimensions, which are revealed when hovering and/or can be mapped to the color by setting the ``color`` to the dimension name ('Weight'). We can also associate additional information with each edge by supplying a value dimension to the ``Graph`` itself, which we can map to various style options, e.g. by setting the ``edge_color`` and ``edge_line_width``.
```
node_labels = ['Output']+['Input']*(N-1)
np.random.seed(7)
edge_labels = np.random.rand(8)
nodes = hv.Nodes((x, y, node_indices, node_labels), vdims='Type')
graph = hv.Graph(((source, target, edge_labels), nodes, paths), vdims='Weight')
(graph + graph.opts(inspection_policy='edges', clone=True)).opts(
opts.Graph(node_color='Type', edge_color='Weight', cmap='Set1',
edge_cmap='viridis', edge_line_width=hv.dim('Weight')*10))
```
If you want to supply additional node information without specifying explicit node positions you may pass in a ``Dataset`` object consisting of various value dimensions.
```
node_info = hv.Dataset(node_labels, vdims='Label')
hv.Graph(((source, target), node_info)).opts(node_color='Label', cmap='Set1')
```
## Working with NetworkX
NetworkX is a very useful library when working with network graphs and the Graph Element provides ways of importing a NetworkX Graph directly. Here we will load the Karate Club graph and use the ``circular_layout`` function provided by NetworkX to lay it out:
```
G = nx.karate_club_graph()
hv.Graph.from_networkx(G, nx.layout.circular_layout).opts(tools=['hover'])
```
It is also possible to pass arguments to the NetworkX layout function as keywords to ``hv.Graph.from_networkx``, e.g. we can override the k-value of the Fruchterman-Reingold layout:
```
hv.Graph.from_networkx(G, nx.layout.fruchterman_reingold_layout, k=1)
```
Finally, if we want to lay out a Graph after it has already been constructed, the ``layout_nodes`` operation may be used, which also allows applying the ``weight`` argument to graphs which have not been constructed with NetworkX:
```
from holoviews.element.graphs import layout_nodes
graph = hv.Graph([
('a', 'b', 3),
('a', 'c', 0.2),
('c', 'd', 0.1),
('c', 'e', 0.7),
('c', 'f', 5),
('a', 'd', 0.3)
], vdims='weight')
layout_nodes(graph, layout=nx.layout.fruchterman_reingold_layout, kwargs={'weight': 'weight'})
```
## Adding labels
If the ``Graph`` we have constructed has additional metadata, we can easily use those as labels: we simply get a handle on the nodes, cast them to ``hv.Labels`` and then overlay them:
```
graph = hv.Graph.from_networkx(G, nx.layout.fruchterman_reingold_layout)
labels = hv.Labels(graph.nodes, ['x', 'y'], 'club')
(graph * labels.opts(text_font_size='8pt', text_color='white', bgcolor='gray'))
```
## Animating graphs
Like all other elements ``Graph`` can be updated in a ``HoloMap`` or ``DynamicMap``. Here we animate how the Fruchterman-Reingold force-directed algorithm lays out the nodes in real time.
```
hv.HoloMap({i: hv.Graph.from_networkx(G, nx.spring_layout, iterations=i, seed=10) for i in range(5, 30, 5)},
kdims='Iterations')
```
## Real world graphs
As a final example let's look at a slightly larger graph. We will load a dataset of a Facebook network consisting of a number of friendship groups identified by their ``'circle'``. We will load the edge and node data using pandas and then color each node by their friendship group using many of the things we learned above.
```
kwargs = dict(width=800, height=800, xaxis=None, yaxis=None)
opts.defaults(opts.Nodes(**kwargs), opts.Graph(**kwargs))
colors = ['#000000']+hv.Cycle('Category20').values
edges_df = pd.read_csv('../assets/fb_edges.csv')
fb_nodes = hv.Nodes(pd.read_csv('../assets/fb_nodes.csv')).sort()
fb_graph = hv.Graph((edges_df, fb_nodes), label='Facebook Circles')
fb_graph.opts(cmap=colors, node_size=10, edge_line_width=1,
node_line_color='gray', node_color='circle')
```
## Bundling graphs
The datashader library provides algorithms for bundling the edges of a graph and HoloViews provides convenient wrappers around the libraries. Note that these operations need ``scikit-image`` which you can install using:
```
conda install scikit-image
```
or
```
pip install scikit-image
```
```
from holoviews.operation.datashader import datashade, bundle_graph
bundled = bundle_graph(fb_graph)
bundled
```
## Datashading graphs
For graphs with a large number of edges we can datashade the paths and display the nodes separately. This loses some of the interactive features but will let you visualize quite large graphs:
```
(datashade(bundled, normalization='linear', width=800, height=800) * bundled.nodes).opts(
opts.Nodes(color='circle', size=10, width=1000, cmap=colors, legend_position='right'))
```
### Applying selections
Alternatively we can select the nodes and edges by an attribute that resides on either. In this case we will select the nodes and edges for a particular circle and then overlay just the selected part of the graph on the datashaded plot. Note that selections on the ``Graph`` itself will select all nodes that connect to one of the selected nodes. In this way a smaller subgraph can be highlighted and the larger graph can be datashaded.
```
datashade(bundle_graph(fb_graph), normalization='linear', width=800, height=800) *\
bundled.select(circle='circle15').opts(node_fill_color='white')
```
To select just nodes that are in 'circle15' set the ``selection_mode='nodes'`` overriding the default of 'edges':
```
bundled.select(circle='circle15', selection_mode='nodes')
```
<a href="https://colab.research.google.com/github/google/jax-md/blob/main/notebooks/athermal_linear_elasticity.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
#@title Imports and utility code
!pip install jax-md
import numpy as onp
import jax.numpy as jnp
from jax.config import config
config.update('jax_enable_x64', True)
from jax import random
from jax import jit, lax, grad, vmap
import jax.scipy as jsp
from jax_md import space, energy, smap, minimize, util, elasticity, quantity
from jax_md.colab_tools import renderer
f32 = jnp.float32
f64 = jnp.float64
from functools import partial
import matplotlib
import matplotlib.pyplot as plt
plt.rcParams.update({'font.size': 16})
def format_plot(x, y):
plt.grid(True)
plt.xlabel(x, fontsize=20)
plt.ylabel(y, fontsize=20)
def finalize_plot(shape=(1, 0.7)):
plt.gcf().set_size_inches(
shape[0] * 1.5 * plt.gcf().get_size_inches()[1],
shape[1] * 1.5 * plt.gcf().get_size_inches()[1])
def run_minimization_while(energy_fn, R_init, shift, max_grad_thresh = 1e-12, max_num_steps=1000000, **kwargs):
init,apply=minimize.fire_descent(jit(energy_fn), shift, **kwargs)
apply = jit(apply)
@jit
def get_maxgrad(state):
return jnp.amax(jnp.abs(state.force))
@jit
def cond_fn(val):
state, i = val
return jnp.logical_and(get_maxgrad(state) > max_grad_thresh, i<max_num_steps)
@jit
def body_fn(val):
state, i = val
return apply(state), i+1
state = init(R_init)
state, num_iterations = lax.while_loop(cond_fn, body_fn, (state, 0))
return state.position, get_maxgrad(state), num_iterations
def run_minimization_while_neighbor_list(energy_fn, neighbor_fn, R_init, shift,
max_grad_thresh = 1e-12, max_num_steps = 1000000,
step_inc = 1000, verbose = False, **kwargs):
nbrs = neighbor_fn.allocate(R_init)
init,apply=minimize.fire_descent(jit(energy_fn), shift, **kwargs)
apply = jit(apply)
@jit
def get_maxgrad(state):
return jnp.amax(jnp.abs(state.force))
@jit
def body_fn(state_nbrs, t):
state, nbrs = state_nbrs
nbrs = neighbor_fn.update(state.position, nbrs)
state = apply(state, neighbor=nbrs)
return (state, nbrs), 0
state = init(R_init, neighbor=nbrs)
step = 0
while step < max_num_steps:
if verbose:
print('minimization step {}'.format(step))
rtn_state, _ = lax.scan(body_fn, (state, nbrs), step + jnp.arange(step_inc))
new_state, nbrs = rtn_state
# If the neighbor list overflowed, rebuild it and repeat part of
# the simulation.
if nbrs.did_buffer_overflow:
print('Buffer overflow.')
nbrs = neighbor_fn.allocate(state.position)
else:
state = new_state
step += step_inc
if get_maxgrad(state) <= max_grad_thresh:
break
if verbose:
print('successfully finished {} steps.'.format(step*step_inc))
return state.position, get_maxgrad(state), nbrs, step
def run_minimization_scan(energy_fn, R_init, shift, num_steps=5000, **kwargs):
init,apply=minimize.fire_descent(jit(energy_fn), shift, **kwargs)
apply = jit(apply)
@jit
def scan_fn(state, i):
return apply(state), 0.
state = init(R_init)
state, _ = lax.scan(scan_fn,state,jnp.arange(num_steps))
return state.position, jnp.amax(jnp.abs(state.force))
key = random.PRNGKey(0)
```
#Linear elasticity in athermal systems
## The elastic modulus tensor
A global affine deformation is given to lowest order by a symmetric strain tensor $\epsilon$, which transforms any vector $r$ according to
\begin{equation}
r \rightarrow (1 + \epsilon) \cdot r.
\end{equation}
Note that in $d$ dimensions, the strain tensor has $d(d + 1)/2$ independent elements. Now, when a mechanically stable system (i.e. a system at a local energy minimum where there is zero net force on every particle) is subject to an affine deformation, it usually does not remain in mechanical equilibrium. Therefore, there is a secondary, nonaffine response that returns the system to mechanical equilibrium, though usually at a different energy than the undeformed state.
The change of energy can be written to quadratic order as
\begin{equation}
\frac{ \Delta U}{V^0} = \sigma^0_{ij}\epsilon_{ji} + \frac 12 C_{ijkl} \epsilon_{ij} \epsilon_{kl} + O\left( \epsilon^3 \right)
\end{equation}
where $C_{ijkl}$ is the $d × d × d × d$ elastic modulus tensor, $\sigma^0$ is the $d × d$ symmetric stress tensor describing residual stresses in the initial state, and $V^0$ is the volume of the initial state. The symmetries of $\epsilon_{ij}$ imply the following:
\begin{equation}
C_{ijkl} = C_{jikl} = C_{ijlk} = C_{klij}
\end{equation}
When no further symmetries are assumed, the number of independent elastic constants becomes $\frac 18 d(d + 1)(d^2 + d + 2)$, which is 6 in two dimensions and 21 in three dimensions.
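As a quick sanity check of this counting (a trivial helper, not part of `jax_md`):
```
# Number of independent elastic constants given only the symmetries above.
def num_elastic_constants(d):
  return d * (d + 1) * (d**2 + d + 2) // 8

print(num_elastic_constants(2), num_elastic_constants(3))  # 6 21
```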
##Linear response to an external force
Consider a set of $N$ particles in $d$ dimensions with positions $R_0$. Using $u \equiv R - R_0$ and assuming fixed boundary conditions, we can expand the energy about $R_0$:
\begin{equation}
U = U^0 - F^0 u + \frac 12 u H^0 u + O(u^3),
\end{equation}
where $U^0$ is the energy at $R_0$, $F^0$ is the force, $F^0_\mu \equiv \left. \frac {\partial U}{\partial u_\mu} \right |_{u=0}$, and $H^0$ is the Hessian, $H^0 \equiv \left. \frac{ \partial^2 U}{\partial u_\mu \partial u_\nu}\right|_{u=0}$.
Note that here we are expanding in terms of the particle positions, whereas above we were expanding in the global strain degrees of freedom.
If we assume that $R_0$ corresponds to a local energy minimum, then $F^0=0$. Dropping higher order terms, we have a system of coupled harmonic oscillators given by
\begin{equation}
\Delta U \equiv U - U^0 = \frac 12 u H^0 u.
\end{equation}
This is independent of the form or details of $U$.
Hooke's law for this system gives the net force $f$ as a result of displacing the particles by $u$:
\begin{equation}
f = -H^0 u.
\end{equation}
Thus, if an *external* force $f_\mathrm{ext}$ is applied, the particles will respond so that the total force is zero, i.e. $f = -f_\mathrm{ext}$. This response is obtained by solving for $u$:
\begin{equation}
u = (H^0)^{-1} f_\mathrm{ext}.
\end{equation}
## Response to an affine strain
Now consider a strain tensor $\epsilon = \tilde \epsilon \gamma$, where $\gamma$ is a scalar and will be used to explicitly take the limit of small strain for fixed $\tilde \epsilon$. Importantly, the strain tensor represents a deformation of the underlying space that the particles live in and thus is a degree of freedom that is independent of the $Nd$ particle degrees of freedom. Therefore, knowing the particle positions $R$ is not sufficient to describe the energy; we also need to know $\gamma$ to specify the correct boundary conditions:
\begin{equation}
U = U(R, \gamma).
\end{equation}
We now have a system with $Nd+1$ variables $\{R, \gamma\}$ that, like before, form a set of coupled harmonic oscillators. We can describe this using the so-called "generalized Hessian" matrix of second derivatives of the energy with respect to both $R$ and $\gamma$. Specifically, Hooke's law reads
\begin{equation}
\left( \begin{array}{ ccccc|c}
&&&&&\\
&&H^0 &&& -\Xi \\
&&&&& \\ \hline
&&-\Xi^T &&&\frac{\partial ^2U}{\partial \gamma^2}
\end{array}\right)
\left( \begin{array}{ c}
\\
u \\
\\ \hline
\gamma
\end{array}\right)
=
\left( \begin{array}{ c}
\\
0 \\
\\ \hline
\tilde \sigma
\end{array}\right),
\end{equation}
where $u = R - R_0$ is the displacement of every particle, $\Xi = -\frac{ \partial^2 U}{\partial R \partial \gamma}$, and $\tilde \sigma$ is the induced stress caused by the deformation. (If there is prestress in the system, i.e. $\sigma^0 = \frac{\partial U}{\partial \gamma} \neq 0$, the total stress is $\sigma = \sigma^0 + \tilde \sigma$.) In this equation, $\gamma$ is held fixed and the zero in the top of the right-hand-side imposes force balance after the deformation and resulting non-affine displacement of every particle. The non-affine displacement itself, $u$, and the induced stress $\sigma$, are both unknown but can be solved for. First, the non-affine response is
\begin{equation}
u = (H^0)^{-1} \Xi \; \gamma,
\end{equation}
where we note that in the limit of small $\gamma$, the force induced on every particle due to the affine deformation is $\Xi \; \gamma$. Second, the induced stress is
\begin{equation}
\tilde \sigma = \frac{\partial ^2U}{\partial \gamma^2} \gamma - \Xi^T u = \left(\frac{\partial ^2U}{\partial \gamma^2} - \Xi^T (H^0)^{-1} \Xi \right) \gamma.
\end{equation}
Similarly, the change in energy is
\begin{equation}
\frac{\Delta U}{V^0} = \sigma^0 \gamma + \frac 1{2V^0} \left(\frac{\partial ^2U}{\partial \gamma^2} - \Xi^T (H^0)^{-1} \Xi \right) \gamma^2,
\end{equation}
where $\sigma^0$ is the prestress in the system per unit volume. Comparing this to the above definition of the elastic modulus tensor, we see that the elastic constant associated with the deformation $\tilde \epsilon$ is
\begin{equation}
C(\tilde \epsilon) = \frac 1{V^0} \left( \frac{\partial^2 U}{\partial \gamma^2} - \Xi^T (H^0)^{-1} \Xi \right).
\end{equation}
$C(\tilde \epsilon)$ is related to $C_{ijkl}$ by the contraction $C(\tilde \epsilon) = C_{ijkl}\tilde \epsilon_{ij} \tilde \epsilon_{kl}$ (summing over repeated indices). So, if $\tilde \epsilon_{ij} = \delta_{0i}\delta_{0j}$, then $C_{0000} = C(\tilde \epsilon)$.
The internal code in `jax_md.elasticity` repeats this calculation for different $\tilde \epsilon$ to back out the different independent elastic constants.
#First example
As a first example, let's consider a 3d system of 128 soft spheres. The elastic modulus tensor is only defined for systems that are at a local energy minimum, so we start by minimizing the energy.
```
N = 128
dimension = 3
box_size = quantity.box_size_at_number_density(N, 1.4, dimension)
displacement, shift = space.periodic(box_size)
energy_fn = energy.soft_sphere_pair(displacement)
key, split = random.split(key)
R_init = random.uniform(split, (N,dimension), minval=0.0, maxval=box_size, dtype=f64)
R, max_grad, niters = run_minimization_while(energy_fn, R_init, shift)
print('Minimized the energy in {} minimization steps and reached a final \
maximum gradient of {}'.format(niters, max_grad))
```
We can now calculate the elastic modulus tensor
```
emt_fn = jit(elasticity.athermal_moduli(energy_fn, check_convergence=True))
C, converged = emt_fn(R,box_size)
print(converged)
```
The elastic modulus tensor gives a quantitative prediction for how the energy should change if we deform the system according to a strain tensor
\begin{equation}
\frac{ \Delta U}{V^0} = \sigma^0\epsilon + \frac 12 \epsilon C \epsilon + O\left(\epsilon^3\right)
\end{equation}
To test this, we define $\epsilon = \tilde \epsilon \gamma$ for a randomly chosen strain tensor $\tilde \epsilon$ and for $\gamma \ll 1$. Ignoring terms of order $\gamma^3$ and higher, we have
\begin{equation}
\frac{ \Delta U}{V^0} - \sigma^0\epsilon = \left[\frac 12 \tilde \epsilon C \tilde \epsilon \right] \gamma^2
\end{equation}
Thus, we can test our calculation of $C$ by plotting $\frac{ \Delta U}{V^0} - \sigma^0\epsilon$ as a function of $\gamma$ for our randomly chosen $\tilde \epsilon$ and comparing it to the line $\left[\frac 12 \tilde \epsilon C \tilde \epsilon \right] \gamma^2$.
First, generate a random $\tilde \epsilon$ and calculate $U$ for different $\gamma$.
```
key, split = random.split(key)
#Pick a random (symmetric) strain tensor
strain_tensor = random.uniform(split, (dimension,dimension), minval=-1, maxval=1, dtype=f64)
strain_tensor = (strain_tensor + strain_tensor.T) / 2.0
#Define a function to calculate the energy at a given strain
def get_energy_at_strain(gamma, strain_tensor, R_init, box):
R_init = space.transform(space.inverse(box),R_init)
new_box = jnp.matmul(jnp.eye(strain_tensor.shape[0]) + gamma * strain_tensor, box)
displacement, shift = space.periodic_general(new_box, fractional_coordinates=True)
energy_fn = energy.soft_sphere_pair(displacement, sigma=1.0)
R_final, _, _ = run_minimization_while(energy_fn, R_init, shift)
return energy_fn(R_final)
gammas = jnp.logspace(-7,-4,50)
Us = vmap(get_energy_at_strain, in_axes=(0,None,None,None))(gammas, strain_tensor, R, box_size * jnp.eye(dimension))
```
Plot $\frac{ \Delta U}{V^0} - \sigma^0\epsilon$ and $\left[\frac 12 \tilde \epsilon C \tilde \epsilon \right] \gamma^2$ as functions of $\gamma$. While there may be disagreements for very small $\gamma$ due to numerical precision or at large $\gamma$ due to higher-order terms becoming relevant, there should be a region of quantitative agreement.
```
U_0 = energy_fn(R)
stress_0 = -quantity.stress(energy_fn, R, box_size)
V_0 = quantity.volume(dimension, box_size)
#Plot \Delta E/V - sigma*epsilon
y1 = (Us - U_0)/V_0 - gammas * jnp.einsum('ij,ji->',stress_0,strain_tensor)
plt.plot(jnp.abs(gammas), y1, lw=3, label=r'$\Delta U/V^0 - \sigma^0 \epsilon$')
#Plot 0.5 * epsilon*C*epsilon
y2 = 0.5 * jnp.einsum('ij,ijkl,kl->',strain_tensor, C, strain_tensor) * gammas**2
plt.plot(jnp.abs(gammas), y2, ls='--', lw=3, label=r'$(1/2) \epsilon C \epsilon$')
plt.xscale('log')
plt.yscale('log')
plt.legend()
format_plot('$\gamma$','')
finalize_plot()
```
To test the accuracy of this agreement, we first define:
\begin{equation}
T(\gamma) = \frac{ \Delta U}{V^0} - \sigma^0\epsilon - \frac 12 \epsilon C \epsilon \sim O\left(\gamma^3\right)
\end{equation}
which should be proportional to $\gamma^3$ for small $\gamma$ (note that this expected scaling should break down when the y-axis approaches machine precision). This is a prediction of scaling only, so we plot a line proportional to $\gamma^3$ to compare the slopes.
```
#Plot the difference, which should scale as gamma**3
plt.plot(jnp.abs(gammas), jnp.abs(y1-y2), label=r'$T(\gamma)$')
#Plot gamma**3 for reference
plt.plot(jnp.abs(gammas), jnp.abs(gammas**3), 'black', label=r'slope = $\gamma^3$ (for reference)')
plt.xscale('log')
plt.yscale('log')
plt.legend()
format_plot('$\gamma$','')
finalize_plot()
```
Save `C` for later testing.
```
C_3d = C
```
#Example with neighbor lists
As a second example, consider a much larger system that is implemented using neighbor lists.
```
N = 5000
dimension = 2
box_size = quantity.box_size_at_number_density(N, 1.3, dimension)
box = box_size * jnp.eye(dimension)
displacement, shift = space.periodic_general(box, fractional_coordinates=True)
sigma = jnp.array([[1.0, 1.2], [1.2, 1.4]])
N_2 = int(N / 2)
species = jnp.where(jnp.arange(N) < N_2, 0, 1)
neighbor_fn, energy_fn = energy.soft_sphere_neighbor_list(
displacement, box_size, species=species, sigma=sigma, dr_threshold = 0.1,
fractional_coordinates = True)
key, split = random.split(key)
R_init = random.uniform(split, (N,dimension), minval=0.0, maxval=1.0, dtype=f64)
R, max_grad, nbrs, niters = run_minimization_while_neighbor_list(energy_fn, neighbor_fn, R_init, shift)
print('Minimized the energy in {} minimization steps and reached a final \
maximum gradient of {}'.format(niters, max_grad))
```
We have to pass the neighbor list to `emt_fn`.
```
emt_fn = jit(elasticity.athermal_moduli(energy_fn, check_convergence=True))
C, converged = emt_fn(R,box,neighbor=nbrs)
print(converged)
```
We can time the calculation of the compiled function.
```
%timeit emt_fn(R,box,neighbor=nbrs)
```
Repeat the same tests as above. NOTE: this may take a few minutes.
```
key, split = random.split(key)
#Pick a random (symmetric) strain tensor
strain_tensor = random.uniform(split, (dimension,dimension), minval=-1, maxval=1, dtype=f64)
strain_tensor = (strain_tensor + strain_tensor.T) / 2.0
def get_energy_at_strain(gamma, strain_tensor, R_init, box):
new_box = jnp.matmul(jnp.eye(strain_tensor.shape[0]) + gamma * strain_tensor, box)
displacement, shift = space.periodic_general(new_box, fractional_coordinates=True)
neighbor_fn, energy_fn = energy.soft_sphere_neighbor_list(
displacement, box_size, species=species, sigma=sigma, dr_threshold = 0.1,
fractional_coordinates = True, capacity_multiplier = 1.5)
R_final, _, nbrs, _ = run_minimization_while_neighbor_list(energy_fn, neighbor_fn, R_init, shift)
return energy_fn(R_final, neighbor=nbrs)
gammas = jnp.logspace(-7,-3,20)
Us = jnp.array([ get_energy_at_strain(gamma, strain_tensor, R, box) for gamma in gammas])
U_0 = energy_fn(R, neighbor=nbrs)
stress_0 = -quantity.stress(energy_fn, R, box, neighbor=nbrs)
V_0 = quantity.volume(dimension, box)
#Plot \Delta E/V - sigma*epsilon
y1 = (Us - U_0)/V_0 - gammas * jnp.einsum('ij,ji->',stress_0,strain_tensor)
plt.plot(jnp.abs(gammas), y1, lw=3, label=r'$\Delta U/V^0 - \sigma^0 \epsilon$')
#Plot 0.5 * epsilon*C*epsilon
y2 = 0.5 * jnp.einsum('ij,ijkl,kl->',strain_tensor, C, strain_tensor) * gammas**2
plt.plot(jnp.abs(gammas), y2, ls='--', lw=3, label=r'$(1/2) \epsilon C \epsilon$')
plt.xscale('log')
plt.yscale('log')
plt.legend()
format_plot('$\gamma$','')
finalize_plot()
#Plot the difference, which should scale as gamma**3
plt.plot(jnp.abs(gammas), jnp.abs(y1-y2), label=r'$T(\gamma)$')
#Plot gamma**3 for reference
plt.plot(jnp.abs(gammas), jnp.abs(gammas**3), 'black', label=r'slope = $\gamma^3$ (for reference)')
plt.xscale('log')
plt.yscale('log')
plt.legend()
format_plot('$\gamma$','')
finalize_plot()
```
Save `C` for later testing.
```
C_2d = C
```
#Mandel notation
Mandel notation is a way to represent symmetric second-rank tensors and fourth-rank tensors with so-called "minor symmetries", i.e. $T_{ijkl} = T_{ijlk} = T_{jilk}$. The idea is to map pairs of indices so that $(i,i) \rightarrow i$ and $(i,j) \rightarrow K - i - j$ for $i\neq j$, where $K = d(d+1)/2$ is the number of independent pairs $(i,j)$ for tensors with $d$ elements along each axis. Thus, second-rank tensors become first-rank tensors, and fourth-rank tensors become second-rank tensors, according to:
\begin{align}
M_{m(i,j)} &= T_{ij} w(i,j) \\
M_{m(i,j),m(k,l)} &= T_{ijkl} w(i,j) w(k,l).
\end{align}
Here, $m(i,j)$ is the mapping function described above, and $w(i,j)$ is a weight that preserves summation rules and is given by
\begin{align}
w(i,j) = \delta_{ij} + \sqrt{2} \left(1-\delta_{ij}\right),
\end{align}
i.e. diagonal components keep a weight of 1 while off-diagonal components are multiplied by $\sqrt{2}$.
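The following is a hand-rolled sketch of this packing for a symmetric second-rank tensor, written directly from the index map and weights above (the library functions mentioned next do this, and the fourth-rank version, for you):
```
# Minimal sketch of the Mandel packing for a symmetric 3x3 tensor (illustration only).
import jax.numpy as jnp

def to_mandel(T):
  d = T.shape[0]
  K = d * (d + 1) // 2
  m = lambda i, j: i if i == j else K - i - j        # index map m(i,j)
  w = lambda i, j: 1.0 if i == j else jnp.sqrt(2.0)  # weight w(i,j)
  M = jnp.zeros(K)
  for i in range(d):
    for j in range(i, d):
      M = M.at[m(i, j)].set(T[i, j] * w(i, j))
  return M

T = jnp.array([[1., 2., 3.],
               [2., 4., 5.],
               [3., 5., 6.]])
print(to_mandel(T))  # compare with elasticity.tensor_to_mandel(T)
```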
We can convert strain tensors, stress tensors, and elastic modulus tensors to and from Mandel notation using the functions `elasticity.tensor_to_mandel` and `elasticity.mandel_to_tensor`.
First, let's copy one of the previously calculated elastic modulus tensors and define a random strain tensor.
```
#This can be 2 or 3 depending on which of the above solutions has been calculated
dimension = 3
if dimension == 2:
C = C_2d
else:
C = C_3d
key, split = random.split(key)
e = random.uniform(key, (dimension,dimension), minval=-1, maxval=1, dtype=f64)
e = (e + e.T)/2.
```
Convert `e` and `C` to Mandel notation:
```
e_m = jit(elasticity.tensor_to_mandel)(e)
C_m = jit(elasticity.tensor_to_mandel)(C)
print(e_m)
print(C_m)
```
Using "bar" notation to represent Mandel vectors and matrices, we have
\begin{equation}
\frac{ \Delta U}{V^0} = \bar \sigma_i^0 \bar\epsilon_i + \frac 12 \bar \epsilon_i \bar C_{ij} \bar\epsilon_j + O\left(\bar \epsilon^3\right)
\end{equation}
We can explicitly test that the sums are equivalent to the sums involving the original tensors:
```
sum_m = jnp.einsum('i,ij,j->',e_m, C_m, e_m)
sum_t = jnp.einsum('ij,ijkl,kl->',e, C, e)
print('Relative error is {}, which should be very close to 0'.format((sum_t-sum_m)/sum_t))
```
Finally, we can convert back to the full tensors and check that they are unchanged.
```
C_new = jit(elasticity.mandel_to_tensor)(C_m)
print('Max error in C is {}, which should be very close to 0.'.format(jnp.max(jnp.abs(C-C_new))))
e_new = jit(elasticity.mandel_to_tensor)(e_m)
print('Max error in e is {}, which should be very close to 0.'.format(jnp.max(jnp.abs(e-e_new))))
```
# Isotropic elastic constants
The calculation of the elastic modulus tensor does not make any assumptions about the underlying symmetries in the material. However, for isotropic systems, only two constants are needed to completely describe the elastic behavior. These are often taken to be the bulk modulus, $B$, and the shear modulus, $G$, or the Young's modulus, $E$, and the Poisson's ratio, $\nu$. The function `elasticity.extract_isotropic_moduli` extracts these values, as well as the longitudinal modulus, $M$, from an elastic modulus tensor.
Importantly, since there is no guarantee that `C` is calculated from a truly isotropic system, these are "orientation-averaged" values. For example, there are many directions in which you can shear a system, and the shear modulus that is returned represents an average over all these orientations. This can be an effective way to average over small fluctuations in an "almost isotropic" system, but the values lose their typical meaning when the system is highly anisotropic.
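For reference, in three dimensions the isotropic moduli returned here are related by the standard identities of isotropic linear elasticity (quoted for convenience, not computed by the notebook):
\begin{equation}
E = \frac{9BG}{3B+G}, \qquad \nu = \frac{3B-2G}{2(3B+G)}, \qquad M = B + \frac{4}{3}G.
\end{equation}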
```
elasticity.extract_isotropic_moduli(C)
```
# Gradients
The calculation of the elastic modulus tensor is fully differentiable:
```
def setup(N,dimension,key):
box_size = quantity.box_size_at_number_density(N, 1.4, dimension)
box = box_size * jnp.eye(dimension)
displacement, shift = space.periodic_general(box, fractional_coordinates=True)
R_init = random.uniform(key, (N,dimension), minval=0.0, maxval=1.0, dtype=f64)
def run(sigma):
energy_fn = energy.soft_sphere_pair(displacement, sigma=sigma)
R, max_grad = run_minimization_scan(energy_fn, R_init, shift, num_steps=1000)
emt_fn = jit(elasticity.athermal_moduli(energy_fn))
C = emt_fn(R,box)
return elasticity.extract_isotropic_moduli(C)['G']
return run
key, split = random.split(key)
N = 50
dimension = 2
run = setup(N, dimension, split)
sigma = jnp.linspace(1.0,1.4,N)
print(run(sigma))
print(grad(run)(sigma))
```
# RadiusNeighborsClassifier with MinMaxScaler
This Code template is for the Classification task using a simple Radius Neighbor Classifier, with data being scaled by MinMaxScaler. It implements learning based on the number of neighbors within a fixed radius r of each training point, where r is a floating-point value specified by the user.
### Required Packages
```
!pip install imblearn
import warnings
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as se
from imblearn.over_sampling import RandomOverSampler
from sklearn.preprocessing import LabelEncoder, MinMaxScaler
from sklearn.model_selection import train_test_split
from sklearn.neighbors import RadiusNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.metrics import classification_report,plot_confusion_matrix
warnings.filterwarnings('ignore')
```
### Initialization
Filepath of CSV file
```
#filepath
file_path= ""
```
List of features which are required for model training.
```
#x_values
features=[]
```
Target feature for prediction.
```
#y_value
target=''
```
### Data Fetching
Pandas is an open-source, BSD-licensed library providing high-performance, easy-to-use data manipulation and data analysis tools.
We will use the pandas library to read the CSV file using its storage path, and the `head` function to display the initial rows of the dataset.
```
df=pd.read_csv(file_path)
df.head()
```
### Feature Selections
Feature selection is the process of reducing the number of input variables when developing a predictive model. It is used both to reduce the computational cost of modelling and, in some cases, to improve the performance of the model.
We will assign all the required input features to X and target/outcome to Y.
```
X = df[features]
Y = df[target]
```
### Data Preprocessing
Since the majority of the machine learning models in the sklearn library don't handle string categorical data or null values, we have to explicitly remove or replace them. The snippet below defines functions which remove null values, if any exist, and convert string classes in the dataset by encoding them as integer classes.
```
def NullClearner(df):
if(isinstance(df, pd.Series) and (df.dtype in ["float64","int64"])):
df.fillna(df.mean(),inplace=True)
return df
elif(isinstance(df, pd.Series)):
df.fillna(df.mode()[0],inplace=True)
return df
else:return df
def EncodeX(df):
return pd.get_dummies(df)
def EncodeY(df):
if len(df.unique())<=2:
return df
else:
un_EncodedT=np.sort(pd.unique(df), axis=-1, kind='mergesort')
df=LabelEncoder().fit_transform(df)
EncodedT=[xi for xi in range(len(un_EncodedT))]
print("Encoded Target: {} to {}".format(un_EncodedT,EncodedT))
return df
x=X.columns.to_list()
for i in x:
X[i]=NullClearner(X[i])
X=EncodeX(X)
Y=EncodeY(NullClearner(Y))
X.head()
```
#### Correlation Map
In order to check the correlation between the features, we will plot a correlation matrix. It is effective in summarizing a large amount of data where the goal is to see patterns.
```
f,ax = plt.subplots(figsize=(18, 18))
matrix = np.triu(X.corr())
se.heatmap(X.corr(), annot=True, linewidths=.5, fmt= '.1f',ax=ax, mask=matrix)
plt.show()
```
#### Distribution Of Target Variable
```
plt.figure(figsize = (10,6))
se.countplot(Y)
```
### Data Splitting
The train-test split is a procedure for evaluating the performance of an algorithm. The procedure involves taking a dataset and dividing it into two subsets. The first subset is utilized to fit/train the model. The second subset is used for prediction. The main motive is to estimate the performance of the model on new data.
```
x_train,x_test,y_train,y_test=train_test_split(X,Y,test_size=0.2,random_state=123)
```
#### Handling Target Imbalance
The challenge of working with imbalanced datasets is that most machine learning techniques will ignore, and in turn have poor performance on, the minority class, although typically it is performance on the minority class that is most important.
One approach to addressing imbalanced datasets is to oversample the minority class. The simplest approach involves duplicating examples in the minority class. We will perform oversampling using the imblearn library.
```
x_train,y_train = RandomOverSampler(random_state=123).fit_resample(x_train, y_train)
```
### Model
RadiusNeighborsClassifier implements learning based on the number of neighbors within a fixed radius r of each training point, where r is a floating-point value specified by the user.
In cases where the data is not uniformly sampled, radius-based neighbors classification can be a better choice.
#### Tuning parameters
> **radius**: Range of parameter space to use by default for radius_neighbors queries.
> **algorithm**: Algorithm used to compute the nearest neighbors:
> **leaf_size**: Leaf size passed to BallTree or KDTree.
> **p**: Power parameter for the Minkowski metric.
> **metric**: the distance metric to use for the tree.
> **outlier_label**: label for outlier samples
> **weights**: weight function used in prediction.
For more information refer: [API](https://scikit-learn.org/stable/modules/generated/sklearn.neighbors.RadiusNeighborsClassifier.html)
#### Data Rescaling
MinMaxScaler subtracts the minimum value in the feature and then divides by the range, where range is the difference between the original maximum and original minimum.
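A tiny illustration on made-up numbers (not part of this template's pipeline) of what the scaler does to each feature column, x' = (x - min) / (max - min):
```
# MinMaxScaler maps each feature column to [0, 1]. Toy data only.
import numpy as np
from sklearn.preprocessing import MinMaxScaler

toy = np.array([[1.0,  200.0],
                [2.0,  400.0],
                [4.0, 1000.0]])
print(MinMaxScaler().fit_transform(toy))
# first column -> [0, 0.333, 1], second column -> [0, 0.25, 1]
```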
```
# Build Model here
model = make_pipeline(MinMaxScaler(),RadiusNeighborsClassifier(n_jobs=-1))
model.fit(x_train, y_train)
```
#### Model Accuracy
The score() method returns the mean accuracy on the given test data and labels.
In multi-label classification, this is the subset accuracy which is a harsh metric since you require for each sample that each label set be correctly predicted.
```
print("Accuracy score {:.2f} %\n".format(model.score(x_test,y_test)*100))
```
#### Confusion Matrix
A confusion matrix is utilized to understand the performance of the classification model or algorithm in machine learning for a given test set where results are known.
```
plot_confusion_matrix(model,x_test,y_test,cmap=plt.cm.Blues)
```
#### Classification Report
A classification report is used to measure the quality of predictions from a classification algorithm: how many predictions were correct and how many were not, broken down per class.
* **where**:
- Precision:- Accuracy of positive predictions.
- Recall:- Fraction of positives that were correctly identified.
    - f1-score:- Harmonic mean of precision and recall.
- support:- Support is the number of actual occurrences of the class in the specified dataset.
```
print(classification_report(y_test,model.predict(x_test)))
```
#### Creator: Viraj Jayant, Github: [Profile](https://github.com/Viraj-Jayant/)
# 03 - Stats Review: The Most Dangerous Equation
In his famous article of 2007, Howard Wainer writes about very dangerous equations:
"Some equations are dangerous if you know them, and others are dangerous if you do not. The first category may pose danger because the secrets within its bounds open doors behind which lies terrible peril. The obvious winner in this is Einstein’s ionic equation \\(E = MC^2\\), for it provides a measure of the enormous energy hidden within ordinary matter. \[...\] Instead I am interested in equations that unleash their danger not when we know about them, but rather when we do not. Kept close at hand, these equations allow us to understand things clearly, but their absence leaves us dangerously ignorant."
The equation he talks about is Moivre’s equation:
$
SE = \dfrac{\sigma}{\sqrt{n}}
$
where \\(SE\\) is the standard error of the mean, \\(\sigma\\) is the standard deviation and \\(n\\) is the sample size. Sounds like a piece of math the brave and true should master, so let's get to it.
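Before turning to real data, a quick simulation (made-up numbers, not part of the dataset below) shows the equation at work: the spread of sample means shrinks like \\(\sigma/\sqrt{n}\\).
```
import numpy as np

np.random.seed(0)
sigma, n = 10, 100
# the standard deviation of many sample means is the standard error of the mean
sample_means = np.array([np.random.normal(0, sigma, n).mean() for _ in range(10000)])
print(sample_means.std())     # empirical SE, close to 1.0
print(sigma / np.sqrt(n))     # Moivre's equation: 10 / sqrt(100) = 1.0
```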
To see why not knowing this equation is very dangerous, let's take a look at some education data. I've compiled data on ENEM scores (Brazilian standardised high school scores, similar to SAT) from different schools for a period of 3 years. I also did some cleaning on the data to keep only the information relevant to us. The original data can be downloaded in the [Inep website](http://portal.inep.gov.br/web/guest/microdados#).
If we look at the top performing schools, something catches the eye: those schools have a fairly small number of students.
```
import warnings
warnings.filterwarnings('ignore')
import pandas as pd
import numpy as np
from scipy import stats
import seaborn as sns
from matplotlib import pyplot as plt
from matplotlib import style
style.use("fivethirtyeight")
df = pd.read_csv("./data/enem_scores.csv")
df.sort_values(by="avg_score", ascending=False).head(10)
```
Looking at it from another angle, we can separate only the 1% top schools and study them. What are they like? Perhaps we can learn something from the best and replicate it elsewhere. And sure enough, if we look at the top 1% schools, we figure out they have, on average, fewer students.
```
plot_data = (df
.assign(top_school = df["avg_score"] >= np.quantile(df["avg_score"], .99))
[["top_school", "number_of_students"]]
.query(f"number_of_students<{np.quantile(df['number_of_students'], .98)}")) # remove outliers
plt.figure(figsize=(6,6))
sns.boxplot(x="top_school", y="number_of_students", data=plot_data)
plt.title("Number of Students of 1% Top Schools (Right)");
```
One natural conclusion that follows is that small schools lead to higher academic performance. This makes intuitive sense, since we believe that less students per teacher allows the teacher to give focused attention to each student. But what does this have to do with Moivre’s equation? And why is it dangerous?
Well, it becomes dangerous once people start to make important and expensive decisions based on this information. In his article, Howard continues:
"In the 1990s, it became popular to champion reductions in the size of schools. Numerous philanthropic organisations and government agencies funded the division of larger schools based on the fact that students at small schools are over represented in groups with high test scores."
What people forgot to do was to look also at the bottom 1% of schools. If we do that, lo and behold! They also have very few students!
```
q_99 = np.quantile(df["avg_score"], .99)
q_01 = np.quantile(df["avg_score"], .01)
plot_data = (df
.sample(10000)
.assign(Group = lambda d: np.select([d["avg_score"] > q_99, d["avg_score"] < q_01],
["Top", "Bottom"], "Middle")))
plt.figure(figsize=(10,5))
sns.scatterplot(y="avg_score", x="number_of_students", hue="Group", data=plot_data)
plt.title("ENEM Score by Number of Students in the School");
```
What we are seeing above is exactly what is expected according to Moivre’s equation. As the number of students grows, the average score becomes more and more precise. Schools with very few samples can have very high and very low scores simply due to chance. This is less likely to occur with large schools. Moivre’s equation talks about a fundamental fact about the reality of information and records in the form of data: it is always imprecise. The question then becomes how imprecise.
Statistics is the science that deals with these imprecisions so they don't catch us off-guard. As Taleb puts it in his book, Fooled by Randomness:
> Probability is not a mere computation of odds on the dice or more complicated variants; it is the acceptance of the lack of certainty in our knowledge and the development of methods for dealing with our ignorance.
One way to quantify our uncertainty is the **variance of our estimates**. Variance tells us how much observations deviate from their central and most probable value. As indicated by Moivre’s equation, this uncertainty shrinks as the amount of data we observe increases. This makes sense, right? If we see lots and lots of students performing excellently at a school, we can be more confident that this is indeed a good school. However, if we see a school with only 10 students and 8 of them perform well, we need to be more suspicious. It could be that, by chance, that school got some above average students.
The beautiful triangular plot we see above tells exactly this story. It shows us how our estimates of the school performance have a huge variance when the sample sizes are small. It also shows that variance shrinks as the sample size increases. This is true for the average score in a school, but it is also true about any summary statistic that we have, including the ATE we so often want to estimate.
## The Standard Error of Our Estimates
Since this is just a review on statistics, I'll take the liberty to go a bit faster now. If you are not familiar with distributions, variance and standard errors, please, do read on, but keep in mind that you might need some additional resources. I suggest you google any MIT course on introduction to statistics. They are usually quite good.
In the previous section, we estimated the average treatment effect \\(E[Y_1-Y_0]\\) as the difference in the means between the treated and the untreated \\(E[Y|T=1]-E[Y|T=0]\\). As our motivating example, we figured out the \\(ATE\\) for online classes. We also saw that it was a negative impact, that is, online classes made students perform about 5 points worse than the students with face to face classes. Now, we get to see if this impact is statistically significant.
To do so, we need to estimate the \\(SE\\). We already have \\(n\\), our sample size. To get the estimate for the standard deviation we can do the following
$
\hat{\sigma}=\sqrt{\frac{1}{N-1}\sum_{i=1}^N (x_i-\bar{x})^2}
$
where \\(\bar{x}\\) is the mean of \\(x\\). Fortunately for us, most programming software already implements this. In Pandas, we can use the method [std](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.std.html).
```
data = pd.read_csv("./data/online_classroom.csv")
online = data.query("format_ol==1")["falsexam"]
face_to_face = data.query("format_ol==0 & format_blended==0")["falsexam"]
def se(y: pd.Series):
return y.std() / np.sqrt(len(y))
print("SE for Online:", se(online))
print("SE for Face to Face:", se(face_to_face))
```
## Confidence Intervals
The standard error of our estimate is a measure of confidence. To understand exactly what it means, we need to go into turbulent and polemic statistical waters. For one view of statistics, the frequentist view, we would say that the data we have is nothing more than a manifestation of a true data generating process. This process is abstract and ideal. It is governed by true parameters that are unchanging but also unknown to us. In the context of the students test, if we could run multiple experiments and collect multiple datasets, all would resemble the true underlying data generating process, but wouldn't be exactly like it. This is very much like Plato's writing on the Forms:
> Each [of the essential forms] manifests itself in a great variety of combinations, with actions, with material things, and with one another, and each seems to be many
To better grasp this, let's suppose we have a true abstract distribution of students' test score. This is a normal distribution with true mean of 74 and true standard deviation of 2. From this distribution, we can run 10000 experiments. On each one, we collect 500 samples. Some experiment data will have a mean lower than the true one, some will be higher. If we plot them in a histogram, we can see that means of the experiments are distributed around the true mean.
```
true_std = 2
true_mean = 74
n = 500
def run_experiment():
    return np.random.normal(true_mean, true_std, n)
np.random.seed(42)
plt.figure(figsize=(8,5))
freq, bins, img = plt.hist([run_experiment().mean() for _ in range(10000)], bins=40, label="Experiment Means")
plt.vlines(true_mean, ymin=0, ymax=freq.max(), linestyles="dashed", label="True Mean", color="orange")
plt.legend();
```
Notice that we are talking about the mean of means here. So, by chance, we could have an experiment where the mean is somewhat below or above the true mean. This is to say that we can never be sure that the mean of our experiment matches the true platonic and ideal mean. However, **with the standard error, we can create an interval that will contain the true mean 95% of the time**.
In real life, we don't have the luxury of simulating the same experiment with multiple datasets. We often only have one. But we can draw on the intuition above to construct what we call **confidence intervals**. Confidence intervals come with a probability attached to them. The most common one is 95%. This probability tells us how many of the hypothetical confidence intervals we would build from different studies contain the true mean. For example, the 95% confidence intervals computed from many similar studies would contain the true mean 95% of the time.
To calculate the confidence interval, we use what is called the **central limit theorem**. This theorem states that **means of experiments are normally distributed**. From statistical theory, we know that 95% of the mass of a normal distribution is between 2 standard deviations above and below the mean. Technically, 1.96, but 2 is close enough.

The Standard Error of the mean serves as our estimate of the distribution of the experiment means. So, if we multiply it by 2 and add and subtract it from the mean of one of our experiments, we will construct a 95% confidence interval for the true mean.
```
np.random.seed(321)
exp_data = run_experiment()
exp_se = exp_data.std() / np.sqrt(len(exp_data))
exp_mu = exp_data.mean()
ci = (exp_mu - 2 * exp_se, exp_mu + 2 * exp_se)
print(ci)
x = np.linspace(exp_mu - 4*exp_se, exp_mu + 4*exp_se, 100)
y = stats.norm.pdf(x, exp_mu, exp_se)
plt.plot(x, y)
plt.vlines(ci[1], ymin=0, ymax=1)
plt.vlines(ci[0], ymin=0, ymax=1, label="95% CI")
plt.legend()
plt.show()
```
Of course, we don't need to restrict ourselves to the 95% confidence interval. We could generate the 99% interval by finding what we need to multiply the standard error by so that the interval contains 99% of the mass of a normal distribution.
The function `ppf` in python gives us the inverse of the CDF. So, `ppf(0.5)` will return 0.0, saying that 50% of the mass of the standard normal distribution is below 0.0. By the same token, if we plug 99.5%, we will have the value `z`, such that 99.5% of the distribution mass falls below this value. In other words, 0.5% of the mass falls above this value. Instead of multiplying the standard error by 2 like we did to find the 95% CI, we will multiply it by `z`, which will result in the 99% CI.
```
from scipy import stats
z = stats.norm.ppf(.995)
print(z)
ci = (exp_mu - z * exp_se, exp_mu + z * exp_se)
ci
x = np.linspace(exp_mu - 4*exp_se, exp_mu + 4*exp_se, 100)
y = stats.norm.pdf(x, exp_mu, exp_se)
plt.plot(x, y)
plt.vlines(ci[1], ymin=0, ymax=1)
plt.vlines(ci[0], ymin=0, ymax=1, label="99% CI")
plt.legend()
plt.show()
```
Back to our classroom experiment, we can construct the confidence interval for the mean exam score for both the online and face to face students' group
```
def ci(y: pd.Series):
return (y.mean() - 2 * se(y), y.mean() + 2 * se(y))
print("95% CI for Online:", ci(online))
print("95% for Face to Face:", ci(face_to_face))
```
What we can see is that the 95% CIs of the two groups don't overlap. The lower end of the CI for the Face to Face class is above the upper end of the CI for online classes. This is evidence that our result is not by chance, and that the true mean for students in face to face classes is higher than the true mean for students in online classes. In other words, there is a significant causal decrease in academic performance when switching from face to face to online classes.
As a recap, confidence intervals are a way to place uncertainty around our estimates. The smaller the sample size, the larger the standard error and the wider the confidence interval. Finally, you should always be suspicious of measurements without any uncertainty metric attached to it. Since they are super easy to compute, lack of confidence intervals signals either some bad intentions or simply lack of knowledge, which is equally concerning.

One final word of caution here. Confidence intervals are trickier to interpret than at first glance. For instance, I **shouldn't** say that this particular 95% confidence interval contains the true population mean with 95% chance. That's because in frequentist statistics, the one that uses confidence intervals, the population mean is regarded as a true population constant. So it either is or isn't in our particular confidence interval. In other words, our particular confidence interval either contains or doesn't contain the true mean. If it does, the chance of containing it would be 100%, not 95%. If it doesn't, the chance would be 0%. Rather, in confidence intervals, the 95% refers to the frequency that such confidence intervals, computed in many many studies, contain the true mean. 95% is our confidence in the algorithm used to compute the 95% CI, not on the particular interval itself.
Now, having said that, as an Economist (statisticians, please look away now), I think this purism is not very useful. In practice, you will see people saying that the particular confidence interval contains the true mean 95% of the time. Although wrong, this is not very harmful, as it still places a precise degree of uncertainty in our estimates. Moreover, if we switch to Bayesian statistics and use probable intervals instead of confidence intervals, we would be able to say that the interval contains the distribution mean 95% of the time. Also, from what I've seen in practice, with decent sample sizes, bayesian probability intervals are more similar to confidence intervals than both bayesian and frequentists would like to admit. So, if my word counts for anything, feel free to say whatever you want about your confidence interval. I don't care if you say they contain the true mean 95% of the time. Just, please, never forget to place them around your estimates, otherwise you will look silly.
## Hypothesis Testing
Another way to incorporate uncertainty is to state a hypothesis test: is the difference in means statistically different from zero (or any other value)? To do so, we will recall that the sum or difference of 2 normal distributions is also a normal distribution. The resulting mean will be the sum or difference between the two means, while the variance will always be the sum of the variances:
$
N(\mu_1, \sigma_1^2) - N(\mu_2, \sigma_2^2) = N(\mu_1 - \mu_2, \sigma_1^2 + \sigma_2^2)
$
$
N(\mu_1, \sigma_1^2) + N(\mu_2, \sigma_2^2) = N(\mu_1 + \mu_2, \sigma_1^2 + \sigma_2^2)
$
If you don't recall, it's OK. We can always use code and simulated data to check:
```
np.random.seed(123)
n1 = np.random.normal(4, 3, 30000)
n2 = np.random.normal(1, 4, 30000)
n_diff = n2 - n1
sns.distplot(n1, hist=False, label="N(4,3)")
sns.distplot(n2, hist=False, label="N(1,4)")
sns.distplot(n_diff, hist=False, label=f"N(4,3) - N(1,4) = N(-1, 5)")
plt.show()
```
If we take the distribution of the means of our 2 groups and subtract one from the other, we will have a third distribution. The mean of this final distribution will be the difference in the means, and the standard error of this distribution will be the square root of the sum of the squared standard errors (that is, the sum of the variances of the means).
$
\mu_{diff} = \mu_1 - \mu_2
$
$
SE_{diff} = \sqrt{SE_1^2 + SE_2^2} = \sqrt{\sigma_1^2/n_1 + \sigma_2^2/n_2}
$
Let's return to our classroom example. We will construct this distribution of the difference. Of course, once we have it, building the 95% CI is very easy.
```
diff_mu = online.mean() - face_to_face.mean()
diff_se = np.sqrt(face_to_face.var()/len(face_to_face) + online.var()/len(online))
ci = (diff_mu - 1.96*diff_se, diff_mu + 1.96*diff_se)
print(ci)
x = np.linspace(diff_mu - 4*diff_se, diff_mu + 4*diff_se, 100)
y = stats.norm.pdf(x, diff_mu, diff_se)
plt.plot(x, y)
plt.vlines(ci[1], ymin=0, ymax=.05)
plt.vlines(ci[0], ymin=0, ymax=.05, label="95% CI")
plt.legend()
plt.show()
```
With this at hand, we can say that we are 95% confident that the true difference between the online and face to face groups falls between -8.37 and -1.44. We can also construct a **z statistic** by dividing the difference in means by the \\(SE\\) of the difference.
$
z = \dfrac{\mu_{diff} - H_{0}}{SE_{diff}} = \dfrac{(\mu_1 - \mu_2) - H_{0}}{\sqrt{\sigma_1^2/n_1 + \sigma_2^2/n_2}}
$
Where \\(H_0\\) is the value which we want to test our difference against.
The z statistic is a measure of how extreme the observed difference is. To test our hypothesis that the difference in the means is statistically different from zero, we will use contradiction. We will assume that the opposite is true, that is, we will assume that the difference is zero. This is called a null hypothesis, or \\(H_0\\). Then, we will ask ourselves "is it likely that we would observe such a difference if the true difference were indeed zero?" In statistical math terms, we can translate this question to checking how far from zero is our z statistic.
Under \\(H_0\\), the z statistic follows a standard normal distribution. So, if the difference is indeed zero, we would see the z statistic within 2 standard deviations of the mean 95% of the time. The direct consequence of this is that if z falls above or below 2 standard deviations, we can reject the null hypothesis with 95% confidence.
Let's see what this looks like in our classroom example.
```
z = diff_mu / diff_se
print(z)
x = np.linspace(-4,4,100)
y = stats.norm.pdf(x, 0, 1)
plt.plot(x, y, label="Standard Normal")
plt.vlines(z, ymin=0, ymax=.05, label="Z statistic", color="C1")
plt.legend()
plt.show()
```
This looks like a pretty extreme value. Indeed, its absolute value is above 2, which means there is less than a 5% chance that we would see such an extreme value if there were no difference between the groups. This again leads us to conclude that switching from face-to-face to online classes causes a statistically significant drop in academic performance.
One final interesting thing about hypothesis tests is that they are less conservative than checking whether the 95% CIs of the treated and untreated groups overlap. In other words, if the confidence intervals of the two groups overlap, it can still be the case that the result is statistically significant. For example, let's pretend that the face-to-face group has an average score of 74 with a standard error of 1 and the online group has an average score of 71, also with a standard error of 1.
```
cont_mu, cont_se = (71, 1)
test_mu, test_se = (74, 1)
diff_mu = test_mu - cont_mu
diff_se = np.sqrt(cont_se**2 + test_se**2)
print("Control 95% CI:", (cont_mu-1.96*cont_se, cont_mu+1.96*cont_se))
print("Test 95% CI:", (test_mu-1.96*test_se, test_mu+1.96*test_se))
print("Diff 95% CI:", (diff_mu-1.96*diff_se, diff_mu+1.96*diff_se))
```
If we construct the confidence intervals for these groups, they overlap. The upper bound of the 95% CI for the online group is 72.96 and the lower bound for the face-to-face group is 72.04. However, once we compute the 95% confidence interval for the difference between the groups, we can see that it does not contain zero. In summary, even though the individual confidence intervals overlap, the difference can still be statistically different from zero.
## P-values
I've said previously that there is less than a 5% chance that we would observe such an extreme value if the difference between the online and face-to-face groups were actually zero. But can we estimate exactly what that chance is? How likely are we to observe such an extreme value? Enter p-values!
Just like with confidence intervals (and most frequentist statistics, as a matter of fact) the true definition of p-values can be very confusing. So, to not take any risks, I'll copy the definition from Wikipedia: "the p-value is the probability of obtaining test results at least as extreme as the results actually observed during the test, assuming that the null hypothesis is correct".
To put it more succinctly, the p-value is the probability of seeing such data, given that the null-hypothesis is true. It measures how unlikely it is that you are seeing a measurement if the null-hypothesis is true. Naturally, this often gets confused with the probability of the null-hypothesis being true. Note the difference here. The p-value is NOT \\(P(H_0|data)\\), but rather \\(P(data|H_0)\\).
But don't let this complexity fool you. In practical terms, they are pretty straightforward to use.

To get the p-value, we need to compute the area under the standard normal distribution up to (or beyond) the z statistic. Fortunately, we have a computer to do this calculation for us. Since our z statistic is negative, we can simply plug it into the CDF of the standard normal distribution.
```
print("P-value:", stats.norm.cdf(z))
```
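As a sanity check on this number, we can also approximate it by brute force: simulate a large number of z statistics under the null (a standard normal) and count how often they come out at least as extreme as the one we observed. This sketch assumes the `z` computed a few cells above is still in memory.
```
# Brute-force approximation of the one-sided p-value, assuming `z` from the earlier cell.
np.random.seed(321)
z_under_null = np.random.normal(0, 1, 100000)
print("Simulated p-value:", (z_under_null <= z).mean())
```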
This means that there is only about a 0.3% chance of observing such an extreme z statistic if the difference were zero. Notice how the p-value is interesting because it spares us from having to specify a confidence level, like 95% or 99%. But, if we wish to report one, from the p-value we know exactly at which confidence levels our test will pass or fail. For instance, with a p-value of 0.0027, we have significance at any level down to about 0.3%. So, while neither the 95% CI nor the 99% CI for the difference will contain zero, the 99.9% CI will.
```
diff_mu = online.mean() - face_to_face.mean()
diff_se = np.sqrt(face_to_face.var()/len(face_to_face) + online.var()/len(online))
print("95% CI:", (diff_mu - stats.norm.ppf(.975)*diff_se, diff_mu + stats.norm.ppf(.975)*diff_se))
print("99% CI:", (diff_mu - stats.norm.ppf(.995)*diff_se, diff_mu + stats.norm.ppf(.995)*diff_se))
print("99.9% CI:", (diff_mu - stats.norm.ppf(.9995)*diff_se, diff_mu + stats.norm.ppf(.9995)*diff_se))
```
## Key Ideas
We've seen how important it is to know Moivre’s equation, and we used it to place a degree of certainty around our estimates. Namely, we figured out that online classes cause a decrease in academic performance compared to face-to-face classes. We also saw that this was a statistically significant result. We did it by comparing the confidence intervals of the means for the 2 groups, by looking at the confidence interval for the difference, by doing a hypothesis test and by looking at the p-value. Let's wrap everything up in a single function that does an A/B test comparison like the ones we did above:
```
def AB_test(test: pd.Series, control: pd.Series, confidence=0.95, h0=0):
    mu1, mu2 = test.mean(), control.mean()
    se1, se2 = test.std() / np.sqrt(len(test)), control.std() / np.sqrt(len(control))

    diff = mu1 - mu2
    se_diff = np.sqrt(test.var()/len(test) + control.var()/len(control))

    z_stats = (diff-h0)/se_diff
    p_value = stats.norm.cdf(z_stats)

    def critical(se): return -se*stats.norm.ppf((1 - confidence)/2)

    print(f"Test {confidence*100}% CI: {mu1} +- {critical(se1)}")
    print(f"Control {confidence*100}% CI: {mu2} +- {critical(se2)}")
    print(f"Test-Control {confidence*100}% CI: {diff} +- {critical(se_diff)}")
    print(f"Z Statistic {z_stats}")
    print(f"P-Value {p_value}")
AB_test(online, face_to_face)
```
Since our function is generic enough, we can test other null hypotheses. For instance, we can try to reject the hypothesis that the difference between online and face-to-face class performance is -1. With the results we get, we can say with 95% confidence that the difference is below -1, that is, that online classes reduce performance by more than 1 point. But we can't say it with 99% confidence:
```
AB_test(online, face_to_face, h0=-1)
```
## References
I like to think of this entire book as a tribute to Joshua Angrist, Alberto Abadie and Christopher Walters for their amazing Econometrics class. Most of the ideas here are taken from their classes at the American Economic Association. Watching them is what is keeping me sane during this tough year of 2020.
* [Cross-Section Econometrics](https://www.aeaweb.org/conference/cont-ed/2017-webcasts)
* [Mastering Mostly Harmless Econometrics](https://www.aeaweb.org/conference/cont-ed/2020-webcasts)
I'd also like to reference the amazing books from Angrist. They have shown me that Econometrics, or 'Metrics as they call it, is not only extremely useful but also profoundly fun.
* [Mostly Harmless Econometrics](https://www.mostlyharmlesseconometrics.com/)
* [Mastering 'Metrics](https://www.masteringmetrics.com/)
My final reference is Miguel Hernan and Jamie Robins' book. It has been my trustworthy companion in the most thorny causal questions I had to answer.
* [Causal Inference Book](https://www.hsph.harvard.edu/miguel-hernan/causal-inference-book/)
In this particular section, I've also referenced [The Most Dangerous Equation](https://www.researchgate.net/publication/255612702_The_Most_Dangerous_Equation), by Howard Wainer.
Finally, if you are curious about the correct interpretation of the statistical concepts we've discussed here, I recommend reading the paper by Greenland et al, 2016: [Statistical tests, P values, confidence intervals, and power: a guide to misinterpretations](https://link.springer.com/content/pdf/10.1007/s10654-016-0149-3.pdf).

## Contribute
Causal Inference for the Brave and True is an open-source material on causal inference, the statistics of science. It uses only free software, based on Python. Its goal is to be accessible monetarily and intellectually.
If you found this book valuable and you want to support it, please go to [Patreon](https://www.patreon.com/causal_inference_for_the_brave_and_true). If you are not ready to contribute financially, you can also help by fixing typos, suggesting edits or giving feedback on passages you didn't understand. Just go to the book's repository and [open an issue](https://github.com/matheusfacure/python-causality-handbook/issues). Finally, if you liked this content, please share it with others who might find it useful and give it a [star on GitHub](https://github.com/matheusfacure/python-causality-handbook/stargazers).
# Gender Prediction, using Pre-trained Keras Model
Deep neural networks can be used to extract features from the input and derive higher level abstractions. This technique is used regularly in vision, speech and text analysis. In this exercise, we use a pre-trained deep learning model that identifies low level features in texts containing people's names, and is able to classify them into one of two categories - Male or Female.
## Network Architecture
The problem we are trying to solve is to predict whether a given name belongs to a male or a female. We will use supervised learning, where the character sequence making up the name is the `X` variable, and the flag indicating **Male (M)** or **Female (F)** is the `Y` variable.
We use a stacked 2-layer LSTM model and a final dense layer with softmax activation as our network architecture. We use categorical cross-entropy as the loss function, with an Adam optimizer. A 20% dropout layer is also added for regularization to avoid over-fitting.
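Since we start from a pre-trained artefact, this notebook never actually builds the network. Purely for reference, a minimal sketch of the architecture described above might look like the following; the layer width, `max_name_length` and `alphabet_size` values here are placeholder guesses, not the exact settings of the pre-trained model (those are loaded from the saved artefacts later on).
```
# Illustrative sketch only - not the exact pre-trained model definition.
from keras.models import Sequential
from keras.layers import LSTM, Dropout, Dense

max_name_length, alphabet_size = 15, 26  # placeholder values

sketch = Sequential()
sketch.add(LSTM(256, return_sequences=True, input_shape=(max_name_length, alphabet_size)))
sketch.add(Dropout(0.2))
sketch.add(LSTM(256))
sketch.add(Dropout(0.2))
sketch.add(Dense(2, activation='softmax'))  # two classes: Male / Female
sketch.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
sketch.summary()
```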
## Dependencies
* The model was built using Keras, therefore we need to include the Keras deep learning library to build the network locally, in order to be able to test it prior to hosting the model.
* While running on a SageMaker Notebook Instance, we choose the conda_tensorflow kernel, so that the Keras code uses TensorFlow as its backend.
* If you choose P2 or P3 class instances for your Notebook, using TensorFlow ensures the low level code takes advantage of all available GPUs, so no further dependencies need to be installed (a quick check of this follows the imports below).
```
import os
import time
import numpy as np
import keras
from keras.models import load_model
import boto3
```
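As a quick, optional check that the points above hold in your environment (Keras running on the TensorFlow backend, and GPUs visible when running on P2/P3 instances), you can run something like:
```
# Optional sanity check: confirm the Keras backend and list the visible devices.
from keras import backend as K
from tensorflow.python.client import device_lib

print("Keras backend:", K.backend())  # expect 'tensorflow'
print("Device types:", [d.device_type for d in device_lib.list_local_devices()])
```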
## Model testing
To test the validity of the model, we do some local testing.<p>
The model was built to process one-hot encoded data representing names, therefore we need to do the same pre-processing on our test data (one-hot encoding using the same character indices).<p>
We feed this one-hot encoded test data to the model, and `predict` generates a vector, similar to the training labels vector we used before. Except in this case, it contains what the model thinks is the gender represented by each of the test records.<p>
To present the data intuitively, we simply map it back to `Male` / `Female` from the `0` / `1` flag.
```
!tar -zxvf ../pretrained-model/model.tar.gz -C ../pretrained-model/
model = load_model('../pretrained-model/lstm-gender-classifier-model.h5')
char_indices = np.load('../pretrained-model/lstm-gender-classifier-indices.npy').item()
max_name_length = char_indices['max_name_length']
char_indices.pop('max_name_length', None)
alphabet_size = len(char_indices)
print(char_indices)
print(max_name_length)
print(alphabet_size)
names_test = ["Tom","Allie","Jim","Sophie","John","Kayla","Mike","Amanda","Andrew"]
num_test = len(names_test)
X_test = np.zeros((num_test, max_name_length, alphabet_size))
for i,name in enumerate(names_test):
    name = name.lower()
    for t, char in enumerate(name):
        X_test[i, t, char_indices[char]] = 1

predictions = model.predict(X_test)

for i,name in enumerate(names_test):
    print("{} ({})".format(names_test[i], "M" if predictions[i][0]>predictions[i][1] else "F"))
```
## Model saving
In order to deploy the model behind a hosted endpoint, we need to save the model file to an S3 location.<p>
We can obtain the name of the S3 bucket from the execution role we attached to this Notebook instance. This should work if the policy granting read permission on IAM policies was attached, as per the documentation.
If, for some reason, it fails to fetch the associated bucket name, it asks the user to enter the name of the bucket. If asked, use the bucket that you created in Module-3, such as 'smworkshop-firstname-lastname'.<p>
It is important to ensure that this is the same S3 bucket to which you provided access in the execution role used while creating this Notebook instance.
```
sts = boto3.client('sts')
iam = boto3.client('iam')
caller = sts.get_caller_identity()
account = caller['Account']
arn = caller['Arn']
role = arn[arn.find("/AmazonSageMaker")+1:arn.find("/SageMaker")]
timestamp = role[role.find("Role-")+5:]
policyarn = "arn:aws:iam::{}:policy/service-role/AmazonSageMaker-ExecutionPolicy-{}".format(account, timestamp)
s3bucketname = ""
policystatements = []
try:
    policy = iam.get_policy(
        PolicyArn=policyarn
    )['Policy']
    policyversion = policy['DefaultVersionId']
    policystatements = iam.get_policy_version(
        PolicyArn = policyarn,
        VersionId = policyversion
    )['PolicyVersion']['Document']['Statement']
except Exception as e:
    s3bucketname = input("Which S3 bucket do you want to use to host training data and model? ")

for stmt in policystatements:
    action = ""
    actions = stmt['Action']
    for act in actions:
        if act == "s3:ListBucket":
            action = act
            break
    if action == "s3:ListBucket":
        resource = stmt['Resource'][0]
        s3bucketname = resource[resource.find(":::")+3:]
print(s3bucketname)
s3 = boto3.resource('s3')
s3.meta.client.upload_file('../pretrained-model/model.tar.gz', s3bucketname, 'model/model.tar.gz')
```
# Model hosting
Amazon SageMaker provides a powerful orchestration framework that you can use to productionize any of your own machine learning algorithms, using any machine learning framework and programming language.<p>
This is possible because SageMaker, as a manager of containers, has standardized ways of interacting with your code running inside a Docker container. Since you are free to build a Docker container using whatever code and dependencies you like, this gives you the freedom to bring your own machinery.<p>
In the following steps, we'll containerize the prediction code and host the model behind an API endpoint.<p>
This will allow us to use the model from a web application, and put it into real use.<p>
The boilerplate code, which we affectionately call the `Dockerizer` framework, was made available on this Notebook instance by the Lifecycle Configuration that you used. Just look into the folder and ensure the necessary files are available as shown:<p>
<home>
|
└── container
    |
    ├── byoa
    |   |
    |   ├── train
    |   ├── predictor.py
    |   ├── serve
    |   ├── nginx.conf
    |   └── wsgi.py
    |
    ├── build_and_push.sh
    ├── Dockerfile.cpu
    └── Dockerfile.gpu
```
os.chdir('../container')
os.getcwd()
!ls -Rl
```
* `Dockerfile` describes the container image and the accompanying script `build_and_push.sh` does the heavy lifting of building the container and uploading it into an Amazon ECR repository
* The SageMaker container that we'll be building serves prediction requests using a Flask-based application. `wsgi.py` is a wrapper to invoke the Flask application, while `nginx.conf` is the configuration for the nginx front end and `serve` is the program that launches the gunicorn server. These files can be used as-is, and are required to build the webserver stack serving prediction requests, following the architecture as shown:

<details>
<summary><strong>Request serving stack (expand to view diagram)</strong></summary><p>

</p></details>
* The file named `predictor.py` is where we need to package the code for generating inference using the trained model that was saved into an S3 bucket location by the training code during the training job run.<p>
* We'll write code into this file using the Jupyter magic command - `writefile`.<p><br>
The first part of the file contains the necessary imports, as usual.
```
%%writefile byoa/predictor.py
# This is the file that implements a flask server to do inferences. It's the file that you will modify to
# implement the scoring for your own algorithm.
from __future__ import print_function
import os
import json
import pickle
from io import StringIO
import sys
import signal
import traceback
import numpy as np
import keras
from keras.models import Sequential
from keras.layers import Dense, Dropout
from keras.layers import Embedding
from keras.layers import LSTM
from keras.models import load_model
import flask
import tensorflow as tf
import pandas as pd
from os import listdir, sep
from os.path import abspath, basename, isdir
from sys import argv
```
When run within an instantiated container, SageMaker makes the trained model available locally at `/opt/ml`
```
%%writefile -a byoa/predictor.py
prefix = '/opt/ml/'
model_path = os.path.join(prefix, 'model')
```
The machinery to produce inference is wrapped in a Pythonic class structure, within a `Singleton` class, aptly named `ScoringService`.<p>
We create class variables in this class to hold the loaded model, character indices, TensorFlow graph, and anything else that needs to be referenced while generating predictions.
```
%%writefile -a byoa/predictor.py
# A singleton for holding the model. This simply loads the model and holds it.
# It has a predict function that does a prediction based on the model and the input data.
class ScoringService(object):
    model_type = None # Where we keep the model type, qualified by hyperparameters used during training
    model = None # Where we keep the model when it's loaded
    graph = None
    indices = None # Where we keep the indices of Alphabet when it's loaded
```
Generally, we have to provide class methods to load the model and related artefacts from the model path as assigned by SageMaker within the running container.<p>
Notice here that SageMaker copies the artefacts from the S3 location (as defined during model creation) into the container local file system.
```
%%writefile -a byoa/predictor.py
    @classmethod
    def get_indices(cls):
        # Get the indices for Alphabet for this instance, loading it if it's not already loaded
        if cls.indices == None:
            model_type='lstm-gender-classifier'
            index_path = os.path.join(model_path, '{}-indices.npy'.format(model_type))
            if os.path.exists(index_path):
                cls.indices = np.load(index_path).item()
            else:
                print("Character Indices not found.")
        return cls.indices

    @classmethod
    def get_model(cls):
        # Get the model object for this instance, loading it if it's not already loaded
        if cls.model == None:
            model_type='lstm-gender-classifier'
            mod_path = os.path.join(model_path, '{}-model.h5'.format(model_type))
            if os.path.exists(mod_path):
                cls.model = load_model(mod_path)
                cls.model._make_predict_function()
                cls.graph = tf.get_default_graph()
            else:
                print("LSTM Model not found.")
        return cls.model
```
Finally, inside another class method, named `predict`, we provide the code that we used earlier to generate predictions.<p>
The only difference from our previous test prediction (in the development notebook) is that in this case, the predictor grabs the data from the `input` variable, which in turn is obtained from the HTTP request payload.
```
%%writefile -a byoa/predictor.py
    @classmethod
    def predict(cls, input):
        mod = cls.get_model()
        ind = cls.get_indices()
        result = {}

        if mod == None:
            print("Model not loaded.")
        else:
            if 'max_name_length' not in ind:
                max_name_length = 15
                alphabet_size = 26
            else:
                max_name_length = ind['max_name_length']
                ind.pop('max_name_length', None)
                alphabet_size = len(ind)

            inputs_list = input.strip('\n').split(",")
            num_inputs = len(inputs_list)

            X_test = np.zeros((num_inputs, max_name_length, alphabet_size))
            for i,name in enumerate(inputs_list):
                name = name.lower().strip('\n')
                for t, char in enumerate(name):
                    if char in ind:
                        X_test[i, t, ind[char]] = 1

            with cls.graph.as_default():
                predictions = mod.predict(X_test)
                for i,name in enumerate(inputs_list):
                    result[name] = 'M' if predictions[i][0]>predictions[i][1] else 'F'
                    print("{} ({})".format(inputs_list[i], "M" if predictions[i][0]>predictions[i][1] else "F"))

        return json.dumps(result)
```
With the prediction code captured, we move on to define the Flask app and provide a `ping` route, which SageMaker uses to conduct health checks on the container instances behind the hosted prediction endpoint.<p>
Here we have the container return a healthy response, with status code `200`, when everything goes well.<p>
For simplicity, we are only validating whether the model has been loaded in this case. In practice, this is an opportunity for more extensive health checks (including any external dependency checks), as required.
```
%%writefile -a byoa/predictor.py
# The flask app for serving predictions
app = flask.Flask(__name__)
@app.route('/ping', methods=['GET'])
def ping():
    # Determine if the container is working and healthy.
    # Declare it healthy if we can load the model successfully.
    health = ScoringService.get_model() is not None and ScoringService.get_indices() is not None
    status = 200 if health else 404
    return flask.Response(response='\n', status=status, mimetype='application/json')
```
Last but not least, we define a `transformation` method that intercepts the HTTP requests coming through to the SageMaker hosted endpoint.<p>
Here we have the opportunity to decide what type of data we accept with the request. In this particular example, we are accepting only `CSV` formatted data, decoding the data, and invoking prediction.<p>
The response is similarly funneled back to the caller with a MIME type of `CSV`.<p>
You are free to choose any or multiple MIME types for your requests and responses. However, if you choose to do so, it is within this method that we have to transform the data to and from the format that is suitable to be passed for prediction.
```
%%writefile -a byoa/predictor.py
@app.route('/invocations', methods=['POST'])
def transformation():
    # Do an inference on a single batch of data
    data = None

    # Convert from CSV to pandas
    if flask.request.content_type == 'text/csv':
        data = flask.request.data.decode('utf-8')
    else:
        return flask.Response(response='This predictor only supports CSV data', status=415, mimetype='text/plain')

    print('Invoked with {} records'.format(data.count(",")+1))

    # Do the prediction
    predictions = ScoringService.predict(data)

    result = ""
    for prediction in predictions:
        result = result + prediction

    return flask.Response(response=result, status=200, mimetype='text/csv')
```
Note that in containerizing our custom LSTM algorithm, where we used `Keras` as the framework of our choice, we did not have to interact directly with the SageMaker API, even though the SageMaker API doesn't support `Keras` out of the box.<p>
This serves to show the power and flexibility offered by a containerized machine learning pipeline on SageMaker.
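Before publishing the container, it can be handy to smoke-test the serving stack locally. The sketch below assumes you have built the image locally and started it with the `serve` argument, mapping port 8080 (the port SageMaker serving containers listen on), for example with something like `docker run -p 8080:8080 <your-image-name> serve`; it also assumes the `requests` library is installed. The image name above is a placeholder.
```
# Hedged local smoke test - assumes the container is already running and listening on port 8080.
import requests

ping = requests.get("http://localhost:8080/ping")
print("Ping status:", ping.status_code)  # expect 200 if the model artefacts load correctly

resp = requests.post("http://localhost:8080/invocations",
                     data="Tom,Allie,Jim",
                     headers={"Content-Type": "text/csv"})
print(resp.status_code, resp.text)
```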
## Container publishing
In order to host and deploy the trained model using SageMaker, we need to build the `Docker` container, publish it to an `Amazon ECR` repository, and then use either the SageMaker console or API to create the endpoint configuration and deploy the endpoint.<p>
Conceptually, the steps required for publishing are:<p>
1. Make the `predictor.py` and related files executable
2. Create an ECR repository within your default region
3. Build a docker container with an identifiable name
4. Tag the image and publish it to the ECR repository
<p><br>
All of these are conveniently encapsulated inside `build_and_push` script. We simply run it with the unique name of our production run.
```
run_type='cpu'
instance_class = "p3" if run_type.lower()=='gpu' else "c4"
instance_type = "ml.{}.8xlarge".format(instance_class)
pipeline_name = 'gender-classifier'
run=input("Enter run version: ")
run_name = pipeline_name+"-"+run
if run_type == "cpu":
!cp "Dockerfile.cpu" "Dockerfile"
if run_type == "gpu":
!cp "Dockerfile.gpu" "Dockerfile"
!sh build_and_push.sh $run_name
```
## Orchestration
At this point, we could head to the ECS console, grab the ARN of the repository where we published the docker image, and use the SageMaker console to create the hosted model and endpoint.<p>
However, it is often more convenient to automate these steps. In this notebook we do exactly that using the `boto3` SageMaker API.<p>
Following are the steps:<p>
* First we create a model hosting definition, by providing the S3 location of the model artifact and the ARN of the ECR image of the container.
* Using the model hosting definition, our next step is to create the configuration of a hosted endpoint that will be used to serve prediction requests.
* Creating the endpoint is the last step in the ML cycle; it prepares your model to serve client requests from applications.
* We wait until provisioning is complete and the endpoint is in service. At this point we can send requests to this endpoint and obtain gender predictions.
```
import sagemaker
sm_role = sagemaker.get_execution_role()
print("Using Role {}".format(sm_role))
acc = boto3.client('sts').get_caller_identity().get('Account')
reg = boto3.session.Session().region_name
sagemaker = boto3.client('sagemaker')
#Check if model already exists
model_name = "{}-model".format(run_name)
models = sagemaker.list_models(NameContains=model_name)['Models']
model_exists = False
if len(models) > 0:
for model in models:
if model['ModelName'] == model_name:
model_exists = True
break
#Delete model, if chosen
if model_exists == True:
choice = input("Model already exists, do you want to delete and create a fresh one (Y/N) ? ")
if choice.upper()[0:1] == "Y":
sagemaker.delete_model(ModelName = model_name)
model_exists = False
else:
print("Model - {} already exists".format(model_name))
if model_exists == False:
model_response = sagemaker.create_model(
ModelName=model_name,
PrimaryContainer={
'Image': '{}.dkr.ecr.{}.amazonaws.com/{}:latest'.format(acc, reg, run_name),
'ModelDataUrl': 's3://{}/model/model.tar.gz'.format(s3bucketname)
},
ExecutionRoleArn=sm_role,
Tags=[
{
'Key': 'Name',
'Value': model_name
}
]
)
print("{} Created at {}".format(model_response['ModelArn'],
model_response['ResponseMetadata']['HTTPHeaders']['date']))
#Check if endpoint configuration already exists
endpoint_config_name = "{}-endpoint-config".format(run_name)
endpoint_configs = sagemaker.list_endpoint_configs(NameContains=endpoint_config_name)['EndpointConfigs']
endpoint_config_exists = False
if len(endpoint_configs) > 0:
for endpoint_config in endpoint_configs:
if endpoint_config['EndpointConfigName'] == endpoint_config_name:
endpoint_config_exists = True
break
#Delete endpoint configuration, if chosen
if endpoint_config_exists == True:
choice = input("Endpoint Configuration already exists, do you want to delete and create a fresh one (Y/N) ? ")
if choice.upper()[0:1] == "Y":
sagemaker.delete_endpoint_config(EndpointConfigName = endpoint_config_name)
endpoint_config_exists = False
else:
print("Endpoint Configuration - {} already exists".format(endpoint_config_name))
if endpoint_config_exists == False:
endpoint_config_response = sagemaker.create_endpoint_config(
EndpointConfigName=endpoint_config_name,
ProductionVariants=[
{
'VariantName': 'default',
'ModelName': model_name,
'InitialInstanceCount': 1,
'InstanceType': instance_type,
'InitialVariantWeight': 1
},
],
Tags=[
{
'Key': 'Name',
'Value': endpoint_config_name
}
]
)
print("{} Created at {}".format(endpoint_config_response['EndpointConfigArn'],
endpoint_config_response['ResponseMetadata']['HTTPHeaders']['date']))
from ipywidgets import widgets
from IPython.display import display
#Check if endpoint already exists
endpoint_name = "{}-endpoint".format(run_name)
endpoints = sagemaker.list_endpoints(NameContains=endpoint_name)['Endpoints']
endpoint_exists = False
if len(endpoints) > 0:
for endpoint in endpoints:
if endpoint['EndpointName'] == endpoint_name:
endpoint_exists = True
break
#Delete endpoint, if chosen
if endpoint_exists == True:
choice = input("Endpoint already exists, do you want to delete and create a fresh one (Y/N) ? ")
if choice.upper()[0:1] == "Y":
sagemaker.delete_endpoint(EndpointName = endpoint_name)
print("Deleting Endpoint - {} ...".format(endpoint_name))
waiter = sagemaker.get_waiter('endpoint_deleted')
waiter.wait(EndpointName=endpoint_name,
WaiterConfig = {'Delay':1,'MaxAttempts':100})
endpoint_exists = False
print("Endpoint - {} deleted".format(endpoint_name))
else:
print("Endpoint - {} already exists".format(endpoint_name))
if endpoint_exists == False:
endpoint_response = sagemaker.create_endpoint(
EndpointName=endpoint_name,
EndpointConfigName=endpoint_config_name,
Tags=[
{
'Key': 'string',
'Value': endpoint_name
}
]
)
status='Creating'
sleep = 3
print("{} Endpoint : {}".format(status,endpoint_name))
bar = widgets.FloatProgress(min=0, description="Progress") # instantiate the bar
display(bar) # display the bar
while status != 'InService' and status != 'Failed' and status != 'OutOfService':
endpoint_response = sagemaker.describe_endpoint(
EndpointName=endpoint_name
)
status = endpoint_response['EndpointStatus']
time.sleep(sleep)
bar.value = bar.value + 1
if bar.value >= bar.max-1:
bar.max = int(bar.max*1.05)
if status != 'InService' and status != 'Failed' and status != 'OutOfService':
print(".", end='')
bar.max = bar.value
html = widgets.HTML(
value="<H2>Endpoint <b><u>{}</b></u> - {}</H2>".format(endpoint_response['EndpointName'], status)
)
display(html)
```
At the end we run a quick test to validate that we are able to generate meaningful predictions using the hosted endpoint, as we did locally using the model on the Notebook instance.
```
!aws sagemaker-runtime invoke-endpoint --endpoint-name "$run_name-endpoint" --body 'Tom,Allie,Jim,Sophie,John,Kayla,Mike,Amanda,Andrew' --content-type text/csv outfile
!cat outfile
```
Head back to Module-3 of the workshop now, to the section titled `Integration`, and follow the steps described.<p>
You'll need to copy the endpoint name from the output of the cell below, to use in the Lambda function that will send requests to this hosted endpoint.
```
print(endpoint_response['EndpointName'])
```
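For reference, the call that such a Lambda function makes is the `invoke_endpoint` API of the SageMaker runtime. A minimal sketch in Python, reusing the endpoint name printed above, could look like this; inside a real Lambda function the endpoint name would typically come from an environment variable instead.
```
# Minimal sketch of invoking the hosted endpoint from Python code (e.g. from a Lambda function).
runtime = boto3.client('sagemaker-runtime')

response = runtime.invoke_endpoint(
    EndpointName=endpoint_response['EndpointName'],
    ContentType='text/csv',
    Body='Tom,Allie,Jim'
)
print(response['Body'].read().decode('utf-8'))
```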
# Siamese Neural Network with Triplet Loss trained on MNIST
## Cameron Trotter
### [email protected]
This notebook builds an SNN to determine similarity scores between MNIST digits using a triplet loss function. The use of class prototypes at inference time is also explored.
This notebook is based heavily on the approach described in [this Coursera course](https://www.coursera.org/learn/siamese-network-triplet-loss-keras/), which in turn is based on the [FaceNet](https://arxiv.org/abs/1503.03832) paper. Any uses of open-source code are linked throughout where utilised.
For an in-depth guide to understand this code, and the theory behind it, please see LINK.
### Imports
```
# TF 1.14 gives lots of warnings for deprecations ready for the switch to TF 2.0
import warnings
warnings.filterwarnings('ignore')
import tensorflow as tf
import matplotlib.pyplot as plt
import numpy as np
import random
import os
import glob
from datetime import datetime
from tensorflow.keras.models import model_from_json
from tensorflow.keras.callbacks import Callback, CSVLogger, ModelCheckpoint
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.layers import Activation, Input, concatenate
from tensorflow.keras.layers import Layer, BatchNormalization, MaxPooling2D, Concatenate, Lambda, Flatten, Dense
from tensorflow.keras.initializers import glorot_uniform, he_uniform
from tensorflow.keras.regularizers import l2
from tensorflow.keras.utils import multi_gpu_model
from sklearn.decomposition import PCA
from sklearn.metrics import roc_curve, roc_auc_score
import math
from pylab import dist
import json
from tensorflow.python.client import device_lib
import matplotlib.gridspec as gridspec
```
## Import the data and reshape for use with the SNN
The data loaded in must be in the same format as `tf.keras.datasets.mnist.load_data()`, that is `(x_train, y_train), (x_test, y_test)`
```
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
num_classes = len(np.unique(y_train))
x_train_w = x_train.shape[1] # (60000, 28, 28)
x_train_h = x_train.shape[2]
x_test_w = x_test.shape[1]
x_test_h = x_test.shape[2]
x_train_w_h = x_train_w * x_train_h # 28 * 28 = 784
x_test_w_h = x_test_w * x_test_h
x_train = np.reshape(x_train, (x_train.shape[0], x_train_w_h))/255. # (60000, 784)
x_test = np.reshape(x_test, (x_test.shape[0], x_test_w_h))/255.
```
### Plotting the triplets
```
def plot_triplets(examples):
plt.figure(figsize=(6, 2))
for i in range(3):
plt.subplot(1, 3, 1 + i)
plt.imshow(np.reshape(examples[i], (x_train_w, x_train_h)), cmap='binary')
plt.xticks([])
plt.yticks([])
plt.show()
plot_triplets([x_train[0], x_train[1], x_train[2]])
```
### Create triplet batches
Random batches are generated by `create_batch`. Semi-hard triplet batches are generated by `create_hard_batch`.
Semi-hard: dist(A, P) < dist(A, N) < dist(A, P) + margin. Using only easy triplets would lead to no learning. Hard triplets generate high loss and have a high impact on the training parameters, but mislabelled data may then cause excessively large weight updates.
```
def create_batch(batch_size=256, split = "train"):
    x_anchors = np.zeros((batch_size, x_train_w_h))
    x_positives = np.zeros((batch_size, x_train_w_h))
    x_negatives = np.zeros((batch_size, x_train_w_h))

    if split =="train":
        data = x_train
        data_y = y_train
    else:
        data = x_test
        data_y = y_test

    for i in range(0, batch_size):
        # We need to find an anchor, a positive example and a negative example
        random_index = random.randint(0, data.shape[0] - 1)
        x_anchor = data[random_index]
        y = data_y[random_index]

        indices_for_pos = np.squeeze(np.where(data_y == y))
        indices_for_neg = np.squeeze(np.where(data_y != y))

        x_positive = data[indices_for_pos[random.randint(0, len(indices_for_pos) - 1)]]
        x_negative = data[indices_for_neg[random.randint(0, len(indices_for_neg) - 1)]]

        x_anchors[i] = x_anchor
        x_positives[i] = x_positive
        x_negatives[i] = x_negative

    return [x_anchors, x_positives, x_negatives]

def create_hard_batch(batch_size, num_hard, split = "train"):
    x_anchors = np.zeros((batch_size, x_train_w_h))
    x_positives = np.zeros((batch_size, x_train_w_h))
    x_negatives = np.zeros((batch_size, x_train_w_h))

    if split =="train":
        data = x_train
        data_y = y_train
    else:
        data = x_test
        data_y = y_test

    # Generate num_hard number of hard examples:
    hard_batches = []
    batch_losses = []
    rand_batches = []

    # Get some random batches
    for i in range(0, batch_size):
        hard_batches.append(create_batch(1, split))

        A_emb = embedding_model.predict(hard_batches[i][0])
        P_emb = embedding_model.predict(hard_batches[i][1])
        N_emb = embedding_model.predict(hard_batches[i][2])

        # Compute d(A, P) - d(A, N) for each selected batch
        batch_losses.append(np.sum(np.square(A_emb-P_emb),axis=1) - np.sum(np.square(A_emb-N_emb),axis=1))

    # Sort batch_loss by distance, highest first, and keep num_hard of them
    hard_batch_selections = [x for _, x in sorted(zip(batch_losses,hard_batches), key=lambda x: x[0])]
    hard_batches = hard_batch_selections[:num_hard]

    # Get batch_size - num_hard number of random examples
    num_rand = batch_size - num_hard
    for i in range(0, num_rand):
        rand_batch = create_batch(1, split)
        rand_batches.append(rand_batch)

    selections = hard_batches + rand_batches
    for i in range(0, len(selections)):
        x_anchors[i] = selections[i][0]
        x_positives[i] = selections[i][1]
        x_negatives[i] = selections[i][2]

    return [x_anchors, x_positives, x_negatives]
```
### Create the Embedding Model
This model takes an input image and generates an `emb_size`-dimensional embedding for the image, plotted in some latent space.
The untrained model's embedding space is stored for later use when comparing clustering between the untrained and the trained model using PCA, based on [this notebook](https://github.com/AdrianUng/keras-triplet-loss-mnist/blob/master/Triplet_loss_KERAS_semi_hard_from_TF.ipynb).
```
def create_embedding_model(emb_size):
    embedding_model = tf.keras.models.Sequential([
        Dense(4096,
              activation='relu',
              kernel_regularizer=l2(1e-3),
              kernel_initializer='he_uniform',
              input_shape=(x_train_w_h,)),
        Dense(emb_size,
              activation=None,
              kernel_regularizer=l2(1e-3),
              kernel_initializer='he_uniform')
    ])
    embedding_model.summary()

    return embedding_model
```
### Create the SNN
This model takes a triplet of images as input, passes each through the embedding model, then concatenates the resulting embeddings for the loss function.
```
def create_SNN(embedding_model):
    input_anchor = tf.keras.layers.Input(shape=(x_train_w_h,))
    input_positive = tf.keras.layers.Input(shape=(x_train_w_h,))
    input_negative = tf.keras.layers.Input(shape=(x_train_w_h,))

    embedding_anchor = embedding_model(input_anchor)
    embedding_positive = embedding_model(input_positive)
    embedding_negative = embedding_model(input_negative)

    output = tf.keras.layers.concatenate([embedding_anchor, embedding_positive,
                                          embedding_negative], axis=1)

    siamese_net = tf.keras.models.Model([input_anchor, input_positive, input_negative],
                                        output)
    siamese_net.summary()

    return siamese_net
```
### Create the Triplet Loss Function
```
def triplet_loss(y_true, y_pred):
    anchor, positive, negative = y_pred[:,:emb_size], y_pred[:,emb_size:2*emb_size], y_pred[:,2*emb_size:]
    positive_dist = tf.reduce_mean(tf.square(anchor - positive), axis=1)
    negative_dist = tf.reduce_mean(tf.square(anchor - negative), axis=1)
    return tf.maximum(positive_dist - negative_dist + alpha, 0.)
```
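For reference, the code above is a direct implementation of the standard triplet loss,
$
\mathcal{L}(A, P, N) = \max\big(d(f(A), f(P)) - d(f(A), f(N)) + \alpha, \; 0\big)
$
where \\(f(\cdot)\\) is the embedding model and \\(\alpha\\) is the margin. The only cosmetic difference is that `tf.reduce_mean` averages the squared differences over the embedding dimensions rather than summing them, which simply rescales the distances.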
### Data Generator
This function creates hard batches for the network to train on. `y` is required by TF but not by our model, so just return a filler to keep TF happy.
```
def data_generator(batch_size=256, num_hard=50, split="train"):
    while True:
        x = create_hard_batch(batch_size, num_hard, split)
        y = np.zeros((batch_size, 3*emb_size))
        yield x, y
```
### Evaluation
Allows for the model's metrics to be visualised and evaluated. Based on [this Medium post](https://medium.com/@crimy/one-shot-learning-siamese-networks-and-triplet-loss-with-keras-2885ed022352) and [this GitHub notebook](https://github.com/asagar60/One-Shot-Learning/blob/master/Omniglot_data/One_shot_implementation.ipynb).
```
def compute_dist(a,b):
return np.linalg.norm(a-b)
def compute_probs(network,X,Y):
'''
Input
network : current NN to compute embeddings
X : tensor of shape (m,w,h,1) containing pics to evaluate
Y : tensor of shape (m,) containing true class
Returns
probs : array of shape (m,m) containing distances
'''
m = X.shape[0]
nbevaluation = int(m*(m-1)/2)
probs = np.zeros((nbevaluation))
y = np.zeros((nbevaluation))
#Compute all embeddings for all imgs with current embedding network
embeddings = embedding_model.predict(X)
k = 0
# For each img in the evaluation set
for i in range(m):
# Against all other images
for j in range(i+1,m):
# compute the probability of being the right decision : it should be 1 for right class, 0 for all other classes
probs[k] = -compute_dist(embeddings[i,:],embeddings[j,:])
if (Y[i]==Y[j]):
y[k] = 1
#print("{3}:{0} vs {1} : \t\t\t{2}\tSAME".format(i,j,probs[k],k, Y[i], Y[j]))
else:
y[k] = 0
#print("{3}:{0} vs {1} : {2}\tDIFF".format(i,j,probs[k],k, Y[i], Y[j]))
k += 1
return probs, y
def compute_metrics(probs,yprobs):
'''
Returns
fpr : Increasing false positive rates such that element i is the false positive rate of predictions with score >= thresholds[i]
tpr : Increasing true positive rates such that element i is the true positive rate of predictions with score >= thresholds[i].
thresholds : Decreasing thresholds on the decision function used to compute fpr and tpr. thresholds[0] represents no instances being predicted and is arbitrarily set to max(y_score) + 1
auc : Area Under the ROC Curve metric
'''
# calculate AUC
auc = roc_auc_score(yprobs, probs)
# calculate roc curve
fpr, tpr, thresholds = roc_curve(yprobs, probs)
return fpr, tpr, thresholds,auc
def draw_roc(fpr, tpr,thresholds, auc):
#find threshold
targetfpr=1e-3
_, idx = find_nearest(fpr,targetfpr)
threshold = thresholds[idx]
recall = tpr[idx]
# plot no skill
plt.plot([0, 1], [0, 1], linestyle='--')
# plot the roc curve for the model
plt.plot(fpr, tpr, marker='.')
plt.title('AUC: {0:.3f}\nSensitivity : {2:.1%} @FPR={1:.0e}\nThreshold={3})'.format(auc,targetfpr,recall,abs(threshold) ))
# show the plot
plt.show()
def find_nearest(array,value):
idx = np.searchsorted(array, value, side="left")
if idx > 0 and (idx == len(array) or math.fabs(value - array[idx-1]) < math.fabs(value - array[idx])):
return array[idx-1],idx-1
else:
return array[idx],idx
def draw_interdist(network, epochs):
interdist = compute_interdist(network)
data = []
for i in range(num_classes):
data.append(np.delete(interdist[i,:],[i]))
fig, ax = plt.subplots()
ax.set_title('Evaluating embeddings distance from each other after {0} epochs'.format(epochs))
ax.set_ylim([0,3])
plt.xlabel('Classes')
plt.ylabel('Distance')
ax.boxplot(data,showfliers=False,showbox=True)
locs, labels = plt.xticks()
plt.xticks(locs,np.arange(num_classes))
plt.show()
def compute_interdist(network):
'''
Computes sum of distances between all classes embeddings on our reference test image:
d(0,1) + d(0,2) + ... + d(0,9) + d(1,2) + d(1,3) + ... d(8,9)
A good model should have a large distance between all theses embeddings
Returns:
array of shape (num_classes,num_classes)
'''
res = np.zeros((num_classes,num_classes))
ref_images = np.zeros((num_classes, x_test_w_h))
#generates embeddings for reference images
for i in range(num_classes):
ref_images[i,:] = x_test[i]
ref_embeddings = network.predict(ref_images)
for i in range(num_classes):
for j in range(num_classes):
res[i,j] = dist(ref_embeddings[i],ref_embeddings[j])
return res
def DrawTestImage(network, images, refidx=0):
'''
Evaluate some pictures vs some samples in the test set
image must be of shape(1,w,h,c)
Returns
scores : result of the similarity scores with the basic images => (N)
'''
nbimages = images.shape[0]
#generates embedings for given images
image_embedings = network.predict(images)
#generates embedings for reference images
ref_images = np.zeros((num_classes,x_test_w_h))
for i in range(num_classes):
images_at_this_index_are_of_class_i = np.squeeze(np.where(y_test == i))
ref_images[i,:] = x_test[images_at_this_index_are_of_class_i[refidx]]
ref_embedings = network.predict(ref_images)
for i in range(nbimages):
# Prepare the figure
fig=plt.figure(figsize=(16,2))
subplot = fig.add_subplot(1,num_classes+1,1)
plt.axis("off")
plotidx = 2
# Draw this image
plt.imshow(np.reshape(images[i], (x_train_w, x_train_h)),vmin=0, vmax=1,cmap='Greys')
subplot.title.set_text("Test image")
for ref in range(num_classes):
#Compute distance between this images and references
dist = compute_dist(image_embedings[i,:],ref_embedings[ref,:])
#Draw
subplot = fig.add_subplot(1,num_classes+1,plotidx)
plt.axis("off")
plt.imshow(np.reshape(ref_images[ref, :], (x_train_w, x_train_h)),vmin=0, vmax=1,cmap='Greys')
subplot.title.set_text(("Class {0}\n{1:.3e}".format(ref,dist)))
plotidx += 1
def generate_prototypes(x_data, y_data, embedding_model):
classes = np.unique(y_data)
prototypes = {}
for c in classes:
#c = classes[0]
# Find all images of the chosen test class
locations_of_c = np.where(y_data == c)[0]
imgs_of_c = x_data[locations_of_c]
imgs_of_c_embeddings = embedding_model.predict(imgs_of_c)
# Get the median of the embeddings to generate a prototype for the class (reshaping for PCA)
prototype_for_c = np.median(imgs_of_c_embeddings, axis = 0).reshape(1, -1)
# Add it to the prototype dict
prototypes[c] = prototype_for_c
return prototypes
def test_one_shot_prototypes(network, sample_embeddings):
distances_from_img_to_test_against = []
# As the img to test against is in index 0, we compare distances between img@0 and all others
for i in range(1, len(sample_embeddings)):
distances_from_img_to_test_against.append(compute_dist(sample_embeddings[0], sample_embeddings[i]))
# As the correct img will be at distances_from_img_to_test_against index 0 (sample_imgs index 1),
# If the smallest distance in distances_from_img_to_test_against is at index 0,
# we know the one shot test got the right answer
is_min = distances_from_img_to_test_against[0] == min(distances_from_img_to_test_against)
is_max = distances_from_img_to_test_against[0] == max(distances_from_img_to_test_against)
return int(is_min and not is_max)
def n_way_accuracy_prototypes(n_val, n_way, network):
num_correct = 0
for val_step in range(n_val):
num_correct += load_one_shot_test_batch_prototypes(n_way, network)
accuracy = num_correct / n_val * 100
return accuracy
def load_one_shot_test_batch_prototypes(n_way, network):
labels = np.unique(y_test)
# Reduce the label set down from size n_classes to n_samples
labels = np.random.choice(labels, size = n_way, replace = False)
# Choose a class as the test image
label = random.choice(labels)
# Find all images of the chosen test class
imgs_of_label = np.where(y_test == label)[0]
# Randomly select a test image of the selected class, return it's index
img_of_label_idx = random.choice(imgs_of_label)
# Expand the array at the selected indexes into useable images
img_of_label = np.expand_dims(x_test[img_of_label_idx],axis=0)
sample_embeddings = []
# Get the anchor image embedding
anchor_prototype = network.predict(img_of_label)
sample_embeddings.append(anchor_prototype)
# Get the prototype embedding for the positive class
positive_prototype = prototypes[label]
sample_embeddings.append(positive_prototype)
# Get the negative prototype embeddings
# Remove the selected test class from the list of labels based on it's index
label_idx_in_labels = np.where(labels == label)[0]
other_labels = np.delete(labels, label_idx_in_labels)
# Get the embedding for each of the remaining negatives
for other_label in other_labels:
negative_prototype = prototypes[other_label]
sample_embeddings.append(negative_prototype)
correct = test_one_shot_prototypes(network, sample_embeddings)
return correct
def visualise_n_way_prototypes(n_samples, network):
labels = np.unique(y_test)
# Reduce the label set down from size n_classes to n_samples
labels = np.random.choice(labels, size = n_samples, replace = False)
# Choose a class as the test image
label = random.choice(labels)
# Find all images of the chosen test class
imgs_of_label = np.where(y_test == label)[0]
# Randomly select a test image of the selected class, return it's index
img_of_label_idx = random.choice(imgs_of_label)
# Get another image idx that we know is of the test class for the sample set
label_sample_img_idx = random.choice(imgs_of_label)
# Expand the array at the selected indexes into useable images
img_of_label = np.expand_dims(x_test[img_of_label_idx],axis=0)
label_sample_img = np.expand_dims(x_test[label_sample_img_idx],axis=0)
# Make the first img in the sample set the chosen test image, the second the other image
sample_imgs = np.empty((0, x_test_w_h))
sample_imgs = np.append(sample_imgs, img_of_label, axis=0)
sample_imgs = np.append(sample_imgs, label_sample_img, axis=0)
sample_embeddings = []
# Get the anchor embedding image
anchor_prototype = network.predict(img_of_label)
sample_embeddings.append(anchor_prototype)
# Get the prototype embedding for the positive class
positive_prototype = prototypes[label]
sample_embeddings.append(positive_prototype)
# Get the negative prototype embeddings
# Remove the selected test class from the list of labels based on it's index
label_idx_in_labels = np.where(labels == label)[0]
other_labels = np.delete(labels, label_idx_in_labels)
# Get the embedding for each of the remaining negatives
for other_label in other_labels:
negative_prototype = prototypes[other_label]
sample_embeddings.append(negative_prototype)
# Find all images of the other class
imgs_of_other_label = np.where(y_test == other_label)[0]
# Randomly select an image of the selected class, return it's index
another_sample_img_idx = random.choice(imgs_of_other_label)
# Expand the array at the selected index into useable images
another_sample_img = np.expand_dims(x_test[another_sample_img_idx],axis=0)
# Add the image to the support set
sample_imgs = np.append(sample_imgs, another_sample_img, axis=0)
distances_from_img_to_test_against = []
# As the img to test against is in index 0, we compare distances between img@0 and all others
for i in range(1, len(sample_embeddings)):
distances_from_img_to_test_against.append(compute_dist(sample_embeddings[0], sample_embeddings[i]))
# + 1 as distances_from_img_to_test_against doesn't include the test image
min_index = distances_from_img_to_test_against.index(min(distances_from_img_to_test_against)) + 1
return sample_imgs, min_index
def evaluate(embedding_model, epochs = 0):
probs,yprob = compute_probs(embedding_model, x_test[:500, :], y_test[:500])
fpr, tpr, thresholds, auc = compute_metrics(probs,yprob)
draw_roc(fpr, tpr, thresholds, auc)
draw_interdist(embedding_model, epochs)
for i in range(3):
DrawTestImage(embedding_model, np.expand_dims(x_train[i],axis=0))
```
### Model Training Setup
FaceNet, the original triplet batch paper, draws a large random sample of triplets respecting the class distribution then picks N/2 hard and N/2 random samples (N = batch size), along with an `alpha` of 0.2
Logs out to Tensorboard, callback adapted from https://stackoverflow.com/a/52581175.
Saves best model only based on a validation loss. Adapted from https://stackoverflow.com/a/58103272.
```
# Hyperparams
batch_size = 256
epochs = 100
steps_per_epoch = int(x_train.shape[0]/batch_size)
val_steps = int(x_test.shape[0]/batch_size)
alpha = 0.2
num_hard = int(batch_size * 0.5) # Number of semi-hard triplet examples in the batch
lr = 0.00006
optimiser = 'Adam'
emb_size = 10
with tf.device("/cpu:0"):
# Create the embedding model
print("Generating embedding model... \n")
embedding_model = create_embedding_model(emb_size)
print("\nGenerating SNN... \n")
# Create the SNN
siamese_net = create_SNN(embedding_model)
# Compile the SNN
optimiser_obj = Adam(lr = lr)
siamese_net.compile(loss=triplet_loss, optimizer= optimiser_obj)
# Store visualisations of the embeddings using PCA for display next to "after training" for comparisons
num_vis = 500 # Take only the first num_vis elements of the test set to visualise
embeddings_before_train = embedding_model.predict(x_test[:num_vis, :])
pca = PCA(n_components=2)
decomposed_embeddings_before = pca.fit_transform(embeddings_before_train)
# Display evaluation the untrained model
print("\nEvaluating the model without training for a baseline...\n")
evaluate(embedding_model)
# Set up logging directory
## Use date-time as logdir name:
#dt = datetime.now().strftime("%Y%m%dT%H%M")
#logdir = os.path.join("PATH/TO/LOGDIR",dt)
## Use a custom non-dt name:
name = "snn-example-run"
logdir = os.path.join("PATH/TO/LOGDIR",name)
if not os.path.exists(logdir):
os.mkdir(logdir)
## Callbacks:
# Create the TensorBoard callback
tensorboard = tf.keras.callbacks.TensorBoard(
log_dir = logdir,
histogram_freq=0,
batch_size=batch_size,
write_graph=True,
write_grads=True,
write_images = True,
update_freq = 'epoch',
profile_batch=0
)
# Training logger
csv_log = os.path.join(logdir, 'training.csv')
csv_logger = CSVLogger(csv_log, separator=',', append=True)
# Only save the best model weights based on the val_loss
checkpoint = ModelCheckpoint(os.path.join(logdir, 'snn_model-{epoch:02d}-{val_loss:.2f}.h5'),
monitor='val_loss', verbose=1,
save_best_only=True, save_weights_only=True,
mode='auto')
# Save the embedding mode weights based on the main model's val loss
# This is needed to reecreate the emebedding model should we wish to visualise
# the latent space at the saved epoch
class SaveEmbeddingModelWeights(Callback):
def __init__(self, filepath, monitor='val_loss', verbose=1):
super(Callback, self).__init__()
self.monitor = monitor
self.verbose = verbose
self.best = np.Inf
self.filepath = filepath
def on_epoch_end(self, epoch, logs={}):
current = logs.get(self.monitor)
if current is None:
warnings.warn("SaveEmbeddingModelWeights requires %s available!" % self.monitor, RuntimeWarning)
if current < self.best:
filepath = self.filepath.format(epoch=epoch + 1, **logs)
#if self.verbose == 1:
#print("Saving embedding model weights at %s" % filepath)
embedding_model.save_weights(filepath, overwrite = True)
self.best = current
# Save the embedding model weights if you save a new snn best model based on the model checkpoint above
emb_weight_saver = SaveEmbeddingModelWeights(os.path.join(logdir, 'emb_model-{epoch:02d}.h5'))
callbacks = [tensorboard, csv_logger, checkpoint, emb_weight_saver]
# Save model configs to JSON
model_json = siamese_net.to_json()
with open(os.path.join(logdir, "siamese_config.json"), "w") as json_file:
json_file.write(model_json)
json_file.close()
model_json = embedding_model.to_json()
with open(os.path.join(logdir, "embedding_config.json"), "w") as json_file:
json_file.write(model_json)
json_file.close()
hyperparams = {'batch_size' : batch_size,
'epochs' : epochs,
'steps_per_epoch' : steps_per_epoch,
'val_steps' : val_steps,
'alpha' : alpha,
'num_hard' : num_hard,
'optimiser' : optimiser,
'lr' : lr,
'emb_size' : emb_size
}
with open(os.path.join(logdir, "hyperparams.json"), "w") as json_file:
json.dump(hyperparams, json_file)
# Set the model to TB
tensorboard.set_model(siamese_net)
def delete_older_model_files(filepath):
model_dir = filepath.split("emb_model")[0]
# Get model files
model_files = os.listdir(model_dir)
# Get only the emb_model files
emb_model_files = [file for file in model_files if "emb_model" in file]
# Get the epoch nums of the emb_model_files
emb_model_files_epoch_nums = [int(file.split("-")[1].split(".h5")[0]) for file in emb_model_files]
# Find all the snn model files
snn_model_files = [file for file in model_files if "snn_model" in file]
# Sort, get highest epoch num
emb_model_files_epoch_nums.sort()
highest_epoch_num = str(emb_model_files_epoch_nums[-1]).zfill(2)
# Filter the emb_model and snn_model file lists to remove the highest epoch number ones
emb_model_files_without_highest = [file for file in emb_model_files if highest_epoch_num not in file]
snn_model_files_without_highest = [file for file in snn_model_files if ("-" + highest_epoch_num + "-") not in file]
# Delete the non-highest model files from the subdir
if len(emb_model_files_without_highest) != 0:
print("Deleting previous best model file")
for model_file_list in [emb_model_files_without_highest, snn_model_files_without_highest]:
for file in model_file_list:
os.remove(os.path.join(model_dir, file))
```
### Show example batches
Based on code found [here](https://zhangruochi.com/Create-a-Siamese-Network-with-Triplet-Loss-in-Keras/2020/08/11/).
```
# Display sample batches. This has to be performed after the embedding model is created
# as create_hard_batch utilises the model to see which batches are actually hard.
examples = create_batch(1)
print("Example triplet batch:")
plot_triplets(examples)
print("Example semi-hard triplet batch:")
ex_hard = create_hard_batch(1, 1, split="train")
plot_triplets(ex_hard)
```
### Training
Using `.fit(workers = 0)` fixes the error when using hard batches where TF can't predict on the embedding network whilst fitting the siamese network (see: https://github.com/keras-team/keras/issues/5511#issuecomment-427666222).
```
def get_num_gpus():
local_device_protos = device_lib.list_local_devices()
return len([x.name for x in local_device_protos if x.device_type == 'GPU'])
## Training:
#print("Logging out to Tensorboard at:", logdir)
print("Starting training process!")
print("-------------------------------------")
# Distribute the model over the available GPUs
num_gpus = get_num_gpus()
parallel_snn = multi_gpu_model(siamese_net, gpus = num_gpus)
batch_per_gpu = int(batch_size / num_gpus)
parallel_snn.compile(loss=triplet_loss, optimizer= optimiser_obj)
siamese_history = parallel_snn.fit(
data_generator(batch_per_gpu, num_hard),
steps_per_epoch=steps_per_epoch,
epochs=epochs,
verbose=1,
callbacks=callbacks,
workers = 0,
validation_data = data_generator(batch_per_gpu, num_hard, split="test"),
validation_steps = val_steps)
print("-------------------------------------")
print("Training complete.")
```
### Evaluate the trained network
Load the best performing models. We need to load the weights and configs separately rather than using model.load() as our custom loss function relies on the embedding length. As such, it is easier to load the weights and config separately and build a model based on them.
```
def json_to_dict(json_src):
with open(json_src, 'r') as j:
return json.loads(j.read())
## Load in best trained SNN and emb model
# The best performing model weights have the highest epoch number, since only the best weights are saved
highest_epoch = 0
dir_list = os.listdir(logdir)
for file in dir_list:
if file.endswith(".h5"):
epoch_num = int(file.split("-")[1].split(".h5")[0])
if epoch_num > highest_epoch:
highest_epoch = epoch_num
# Find the embedding and SNN weights src for the highest_epoch (best) model
for file in dir_list:
# Zfill ensure a leading 0 on number < 10
if ("-" + str(highest_epoch).zfill(2)) in file:
if file.startswith("emb"):
embedding_weights_src = os.path.join(logdir, file)
elif file.startswith("snn"):
snn_weights_src = os.path.join(logdir, file)
hyperparams = os.path.join(logdir, "hyperparams.json")
snn_config = os.path.join(logdir, "siamese_config.json")
emb_config = os.path.join(logdir, "embedding_config.json")
snn_config = json_to_dict(snn_config)
emb_config = json_to_dict(emb_config)
# json.dumps to make the dict a string, as required by model_from_json
loaded_snn_model = model_from_json(json.dumps(snn_config))
loaded_snn_model.load_weights(snn_weights_src)
loaded_emb_model = model_from_json(json.dumps(emb_config))
loaded_emb_model.load_weights(embedding_weights_src)
# Store visualisations of the embeddings using PCA for display next to "after training" for comparisons
embeddings_after_train = loaded_emb_model.predict(x_test[:num_vis, :])
pca = PCA(n_components=2)
decomposed_embeddings_after = pca.fit_transform(embeddings_after_train)
evaluate(loaded_emb_model, highest_epoch)
```
### Comparisons of the embeddings in the latent space
Based on [this notebook](https://github.com/AdrianUng/keras-triplet-loss-mnist/blob/master/Triplet_loss_KERAS_semi_hard_from_TF.ipynb).
```
step = 1 # Step = 1, take every element
dict_embeddings = {}
dict_gray = {}
test_class_labels = np.unique(np.array(y_test))
decomposed_embeddings_after = pca.fit_transform(embeddings_after_train)
fig = plt.figure(figsize=(16, 8))
for label in test_class_labels:
y_test_labels = y_test[:num_vis]
decomposed_embeddings_class_before = decomposed_embeddings_before[y_test_labels == label]
decomposed_embeddings_class_after = decomposed_embeddings_after[y_test_labels == label]
plt.subplot(1,2,1)
plt.scatter(decomposed_embeddings_class_before[::step, 1], decomposed_embeddings_class_before[::step, 0], label=str(label))
plt.title('Embedding Locations Before Training')
plt.legend()
plt.subplot(1,2,2)
plt.scatter(decomposed_embeddings_class_after[::step, 1], decomposed_embeddings_class_after[::step, 0], label=str(label))
plt.title('Embedding Locations After %d Training Epochs' % epochs)
plt.legend()
plt.show()
```
### Determine n_way_accuracy
```
prototypes = generate_prototypes(x_test, y_test, loaded_emb_model)
n_way_accuracy_prototypes(val_steps, num_classes, loaded_emb_model)
```
### Visualise support set inference
Based on code found [here](https://github.com/asagar60/One-Shot-Learning/blob/master/Omniglot_data/One_shot_implementation.ipynb).
```
n_samples = 10
sample_imgs, min_index = visualise_n_way_prototypes(n_samples, loaded_emb_model)
img_matrix = []
for index in range(1, len(sample_imgs)):
img_matrix.append(np.reshape(sample_imgs[index], (x_train_w, x_train_h)))
img_matrix = np.asarray(img_matrix)
img_matrix = np.vstack(img_matrix)
f, ax = plt.subplots(1, 3, figsize = (10, 12))
f.tight_layout()
ax[0].imshow(np.reshape(sample_imgs[0], (x_train_w, x_train_h)),vmin=0, vmax=1,cmap='Greys')
ax[0].set_title("Test Image")
ax[1].imshow(img_matrix ,vmin=0, vmax=1,cmap='Greys')
ax[1].set_title("Support Set (Img of same class shown first)")
ax[2].imshow(np.reshape(sample_imgs[min_index], (x_train_w, x_train_h)),vmin=0, vmax=1,cmap='Greys')
ax[2].set_title("Image most similar to Test Image in Support Set")
```
# Multi-class Classification and Neural Networks
## 1. Multi-class Classification
In this exercise, we will use logistic regression and neural networks to recognize handwritten digits (from 0 to 9).
### 1.1 Dataset
The dataset ex3data1.mat contains 5000 training examples of handwritten digits. Each training example is a 20 pixel by 20 pixel grayscale image of the digit. Each pixel is represented by a floating point number indicating the grayscale intensity at that location (value between -1 and 1). The 20 by 20 grid of pixels is flattened into a vector of length 400. Each training example is a single row in the data matrix X. This results in a 5000 by 400 matrix X where every row is a training example.
$$ X=\left[\matrix{-(x^{(1)})^T-\\ -(x^{(2)})^T-\\ \vdots\\ -(x^{(m)})^T-}\right]_{5000\times400} $$
The other part of the training set is a vector y of length 5000 that contains the labels for the training examples. Since the data was prepared for MATLAB, in which indexing starts from 1, the digits 0-9 have been converted to 1-10. Here, we will convert them back to 0-9 labels.
```
import numpy as np
import matplotlib.pyplot as plt
import matplotlib
%matplotlib inline
from scipy.io import loadmat
data = loadmat('ex3data1.mat')
X = data["X"] # 5000x400 np array
y = data["y"] # 5000x1 np array (2d)
y = y.flatten() # change to (5000,) 1d array and
y[y==10] = 0 # convert to 0-9 scale from 1-10 scale
```
### 1.2 Visualizing the data
```
def displayData(X):
"""displays the 100 rows of digit image data stored in X in a nice grid.
It returns the figure handle fig, ax
"""
# form the big 10 x 10 matrix containing all 100 images data
# padding between 2 images
pad = 1
# initialize matrix with -1 (black)
wholeimage = -np.ones((20*10+9, 20*10+9))
# fill values
for i in range(10):
for j in range(10):
wholeimage[j*21:j*21+20, i*21:i*21+20] = X[10*i+j, :].reshape((20, 20))
fig, ax = plt.subplots(figsize=(6, 6))
ax.imshow(wholeimage.T, cmap=plt.cm.gray, vmin=-1, vmax=1)
ax.axis('off')
return fig, ax
x = X[3200:3300, :]
fig, ax = displayData(x)
ax.axis('off')
# randomly select 100 data points to display
rand_indices = np.random.randint(0, 5000, size=100)
sel = X[rand_indices, :]
# display images
fig, ax = displayData(sel)
```
### 1.3 Vectorizing Logistic Regression
Since it has already been vectorized in assignment 2, we simply copy the functions here, renaming the cost function to lrCostFunction(). This includes regularization.
```
def sigmoid(z):
"""sigmoid(z) computes the sigmoid of z. z can be a number,
vector, or matrix.
"""
g = 1 / (1 + np.exp(-z))
return g
def lrCostFunction(theta, X, y, lmd):
"""computes the cost of using
% theta as the parameter for regularized logistic regression and the
% gradient of the cost w.r.t. to the parameters.
"""
m = len(y)
# prepare for matrix calculations
y = y[:, np.newaxis]
# to prevent error in scipy.optimize.minimize(method='CG')
# unroll theta first, make sure theta is (n+1) by 1 array
theta = theta.ravel()
theta = theta[:, np.newaxis]
# print('theta: {}'.format(theta.shape))
# print('X: {}'.format(X.shape))
# print('y: {}'.format(y.shape))
# cost
J = ([email protected](sigmoid(X@theta)))/m - ((1-y.T)@np.log(1-sigmoid(X@theta)))/m + (theta[1:].T@theta[1:])*lmd/(2*m)
# J = J[0, 0]
# gradient
grad = np.zeros(theta.shape)
# added newaxis in order to get 2d array instead of 1d array
grad[0] = X.T[0, np.newaxis, :]@(sigmoid(X@theta)-y)/m
grad[1:] = X.T[1:, :]@(sigmoid(X@theta)-y)/m + lmd*theta[1:]/m
return J, grad.flatten()
# Test lrCostFunction
theta_t = np.array([-2, -1, 1, 2])
X_t = np.concatenate((np.ones((5, 1)), np.arange(1, 16).reshape((5, 3), order='F')/10), axis=1)
y_t = np.array([1, 0, 1, 0, 1])
lambda_t = 3
J, grad = lrCostFunction(theta_t, X_t, y_t, lambda_t)
print('Cost: {:.6f}'.format(J[0, 0]))
print('Expected: 2.534819')
print('Gradients: \n{}'.format(grad))
print('Expected: \n0.146561\n -0.548558\n 0.724722\n 1.398003\n')
```
### 1.4 One-vs-all Classification
Here, we implement one-vs-all classification by training multiple regularized logistic regression classifiers, one for each of the K classes in our dataset. K=10 in this case.
```
from scipy.optimize import minimize
def oneVsAll(X, y, num_class, lmd):
"""trains num_labels logistic regression classifiers and returns each of these classifiers
% in a matrix all_theta, where the i-th row of all_theta corresponds
% to the classifier for label i
"""
# m is number of training samples, n is number of features + 1
m, n = X.shape
# store theta results
all_theta = np.zeros((num_class, n))
#print(all_theta.shape)
    # initial condition, 1d array
theta0 = np.zeros(n)
print(theta0.shape)
# train one theta at a time
for i in range(num_class):
# y should be either 0 or 1, representing true or false
ylabel = (y==i).astype(int)
# run optimization
        result = minimize(lrCostFunction, theta0, args=(X, ylabel, lmd), method='CG',
jac=True, options={'disp': True, 'maxiter':1000})
# print(result)
all_theta[i, :] = result.x
return all_theta
# prepare parameters
lmd = 0.1
m = len(y)
X_wb = np.concatenate((np.ones((m, 1)), X), axis=1)
num_class = 10 # 10 classes, digits 0 to 9
print(X_wb.shape)
print(y.shape)
# Run training
all_theta = oneVsAll(X_wb, y, num_class, lmd)
```
#### One-vs-all Prediction
```
def predictOneVsAll(all_theta, X):
"""will return a vector of predictions
% for each example in the matrix X. Note that X contains the examples in
% rows. all_theta is a matrix where the i-th row is a trained logistic
% regression theta vector for the i-th class. You should return column vector
% of values from 1..K (e.g., p = [1; 3; 1; 2] predicts classes 1, 3, 1, 2
% for 4 examples)
"""
# apply np.argmax to the output matrix to find the predicted label
# for that training sample
out = (all_theta @ X.T).T
#print(out[4000:4020, :])
return np.argmax(out, axis=1)
# prediction accuracy
pred = predictOneVsAll(all_theta, X_wb)
print(pred.shape)
accuracy = np.sum((pred==y).astype(int))/m*100
print('Training accuracy is {:.2f}%'.format(accuracy))
```
## 2. Neural Networks
In the previous part of this exercise, you implemented multi-class logistic regression to recognize handwritten digits. However, logistic regression cannot form more complex hypotheses as it is only a linear classifier.
In this part of the exercise, you will implement a neural network to recognize handwritten digits using the same training set as before. The neural network will be able to represent complex models that form non-linear hypotheses.
For this week, you will be using parameters from a neural network that we have already trained. Your goal is to implement the feedforward propagation algorithm to use our weights for prediction.
Our neural network is shown in Figure 2. It has 3 layers: an input layer, a hidden layer and an output layer. Recall that our inputs are pixel values of digit images. Since the images are of size 20x20, this gives us 400 input layer units (excluding the extra bias unit which always outputs +1). As before, the training data will be loaded into the variables X and y.
A set of pre-trained network parameters ($\Theta_{(1)},\Theta_{(2)}$) are provided and stored in ex3weights.mat. The neural network used contains 25 units in the 2nd layer and 10 output units (corresponding to 10 digit classes).

```
#from scipy.io import loadmat
data = loadmat('ex3weights.mat')
Theta1 = data["Theta1"] # 25x401 np array
Theta2 = data["Theta2"] # 10x26 np array (2d)
print(Theta1.shape, Theta2.shape)
```
### Vectorizing the forward propagation
Matrix dimensions:
- $X_{wb}$: 5000 x 401
- $\Theta^{(1)}$: 25 x 401
- $\Theta^{(2)}$: 10 x 26
- $a^{(2)}$: 5000 x 25, or 5000 x 26 after adding the intercept term
- $a^{(3)}$: 5000 x 10
$$a^{(2)} = g(X_{wb}\Theta^{(1)^T})$$
$$a^{(3)} = g(a^{(2)}_{wb}\Theta^{(2)^T})$$
```
def predict(X, Theta1, Theta2):
""" predicts output given network parameters Theta1 and Theta2 in Theta.
The prediction from the neural network will be the label that has the largest output.
"""
a2 = sigmoid(X @ Theta1.T)
# add intercept terms to a2
m, n = a2.shape
a2_wb = np.concatenate((np.ones((m, 1)), a2), axis=1)
a3 = sigmoid(a2_wb @ Theta2.T)
# print(a3[:10, :])
# apply np.argmax to the output matrix to find the predicted label
# for that training sample
# correct for indexing difference between MATLAB and Python
p = np.argmax(a3, axis=1) + 1
p[p==10] = 0
return p # this is a 1d array
# prediction accuracy
pred = predict(X_wb, Theta1, Theta2)
print(pred.shape)
accuracy = np.sum((pred==y).astype(int))/m*100
print('Training accuracy is {:.2f}%'.format(accuracy))
# randomly show 10 images and corresponding results
# randomly select 10 data points to display
rand_indices = np.random.randint(0, 5000, size=10)
sel = X[rand_indices, :]
for i in range(10):
# Display predicted digit
print("Predicted {} for this image: ".format(pred[rand_indices[i]]))
# display image
fig, ax = plt.subplots(figsize=(2, 2))
ax.imshow(sel[i, :].reshape(20, 20).T, cmap=plt.cm.gray, vmin=-1, vmax=1)
ax.axis('off')
plt.show()
```
# Plotting Categorical Data
In this section, we will:
- Plot distributions of data across categorical variables
- Plot aggregate/summary statistics across categorical variables
## Plotting Distributions Across Categories
We have seen how to plot distributions of data. Often, the distributions reveal new information when you plot them across categorical variables.
Let's see some examples.
```
# loading libraries and reading the data
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
# set seaborn theme if you prefer
sns.set(style="white")
# read data
market_df = pd.read_csv("./global_sales_data/market_fact.csv")
customer_df = pd.read_csv("./global_sales_data/cust_dimen.csv")
product_df = pd.read_csv("./global_sales_data/prod_dimen.csv")
shipping_df = pd.read_csv("./global_sales_data/shipping_dimen.csv")
orders_df = pd.read_csv("./global_sales_data/orders_dimen.csv")
```
### Boxplots
We had created simple boxplots such as the ones shown below. Now, let's plot multiple boxplots and see what they can tell us about the distribution of variables across categories.
```
# boxplot of a variable
sns.boxplot(y=market_df['Sales'])
plt.yscale('log')
plt.show()
```
Now, let's say you want to **compare the (distribution of) sales of various product categories**. Let's first merge the product data into the main dataframe.
```
# merge the dataframe to add a categorical variable
df = pd.merge(market_df, product_df, how='inner', on='Prod_id')
df.head()
# boxplot of a variable across various product categories
sns.boxplot(x='Product_Category', y='Sales', data=df)
plt.yscale('log')
plt.show()
```
So this tells you that the sales of office supplies are, on average, lower than those of the other two categories. The sales of the technology and furniture categories seem much better. Note that each order can have multiple units of products sold, so Sales being higher/lower may be due to the price per unit or the number of units.
Let's now plot the other important variable - Profit.
```
# boxplot of a variable across various product categories
sns.boxplot(x='Product_Category', y='Profit', data=df)
plt.show()
```
Profit clearly has some *outliers* due to which the boxplots are unreadable. Let's remove some extreme values from Profit (for the purpose of visualisation) and try plotting.
```
df = df[(df.Profit<1000) & (df.Profit>-1000)]
# boxplot of a variable across various product categories
sns.boxplot(x='Product_Category', y='Profit', data=df)
plt.show()
```
You can see that though the category 'Technology' has better sales numbers than others, it is also the one where the **most loss making transactions** happen. You can drill further down into this.
```
# adjust figure size
plt.figure(figsize=(10, 8))
# subplot 1: Sales
plt.subplot(1, 2, 1)
sns.boxplot(x='Product_Category', y='Sales', data=df)
plt.title("Sales")
plt.yscale('log')
# subplot 2: Profit
plt.subplot(1, 2, 2)
sns.boxplot(x='Product_Category', y='Profit', data=df)
plt.title("Profit")
plt.show()
```
Now that we've compared Sales and Profits across product categories, let's drill down further and do the same across **another categorical variable** - Customer_Segment.
We'll need to add the customer-related attributes (dimensions) to this dataframe.
```
# merging with customers df
df = pd.merge(df, customer_df, how='inner', on='Cust_id')
df.head()
# boxplot of a variable across various product categories
sns.boxplot(x='Customer_Segment', y='Profit', data=df)
plt.show()
```
You can **visualise the distribution across two categorical variables** using the ```hue= ``` argument.
```
# set figure size for larger figure
plt.figure(num=None, figsize=(12, 8), dpi=80, facecolor='w', edgecolor='k')
# specify hue="categorical_variable"
sns.boxplot(x='Customer_Segment', y='Profit', hue="Product_Category", data=df)
plt.show()
```
Across all customer segments, the product category ```Technology``` seems to be doing fairly well, though ```Furniture``` is incurring losses across all segments.
Now say you are curious to know why certain orders are making huge losses. One of your hypotheses is that the *shipping cost is too high in some orders*. You can **plot derived variables** as well, such as *shipping cost as a percentage of the sales amount*.
```
# plot shipping cost as percentage of Sales amount
sns.boxplot(x=df['Product_Category'], y=100*df['Shipping_Cost']/df['Sales'])
plt.ylabel("100*(Shipping cost/Sales)")
plt.show()
```
## Plotting Aggregated Values across Categories
### Bar Plots - Mean, Median and Count Plots
Bar plots are used to **display aggregated values** of a variable, rather than entire distributions. This is especially useful when you have a lot of data which is difficult to visualise in a single figure.
For example, say you want to visualise and *compare the average Sales across Product Categories*. The ```sns.barplot()``` function can be used to do that.
```
# bar plot with default statistic=mean
sns.barplot(x='Product_Category', y='Sales', data=df)
plt.show()
```
Note that, **by default, seaborn plots the mean value across categories**, though you can plot the count, median, sum etc. Also, barplot computes and shows the confidence interval of the mean.
```
# Create 2 subplots for mean and median respectively
# increase figure size
plt.figure(figsize=(12, 6))
# subplot 1: statistic=mean
plt.subplot(1, 2, 1)
sns.barplot(x='Product_Category', y='Sales', data=df)
plt.title("Average Sales")
# subplot 2: statistic=median
plt.subplot(1, 2, 2)
sns.barplot(x='Product_Category', y='Sales', data=df, estimator=np.median)
plt.title("Median Sales")
plt.show()
```
Look at that! The mean and median sales across the product categories tell different stories. This is because of some outliers (extreme values) in the ```Furniture``` category, distorting the value of the mean.
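If you want to verify this numerically, a quick aggregation (assuming the merged dataframe ```df``` used above) makes the skew explicit:
```
# compare mean and median Sales per product category;
# a large gap between them indicates outliers/skew
print(df.groupby('Product_Category')['Sales'].agg(['mean', 'median', 'max']))
```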
You can add another categorical variable in the plot.
```
# set figure size for larger figure
plt.figure(num=None, figsize=(12, 8), dpi=80, facecolor='w', edgecolor='k')
# specify hue="categorical_variable"
sns.barplot(x='Customer_Segment', y='Profit', hue="Product_Category", data=df, estimator=np.median)
plt.show()
```
The plot neatly shows the median profit across product categories and customer segments. It says that:
- On average, only Technology products in the Small Business and Corporate (customer) segments are profitable.
- Furniture is incurring losses across all Customer Segments
Compare this to the boxplot we had created above - though the bar plot contains less information than the boxplot, it is more revealing.
<hr>
When you have a large number of categories to visualise, it is helpful to plot them along the y-axis. Let's now *drill down into product sub-categories*.
```
# Plotting categorical variable across the y-axis
plt.figure(figsize=(10, 8))
sns.barplot(x='Profit', y="Product_Sub_Category", data=df, estimator=np.median)
plt.show()
```
The plot clearly shows which sub-categories are incurring the heaviest losses - Copiers and Fax, Tables, and Chairs and Chairmats are the most loss-making sub-categories.
You can also plot the **count of the observations** across categorical variables using ```sns.countplot()```.
```
# Plotting count across a categorical variable
plt.figure(figsize=(10, 8))
sns.countplot(y="Product_Sub_Category", data=df)
plt.show()
```
Note that the most loss-making category - Copiers and Fax - has very few orders.
In the next section, we will see how to plot Time Series data.
## Additional Stuff on Plotting Categorical Variables
1. <a href="https://seaborn.pydata.org/tutorial/categorical.html">Seaborn official tutorial on categorical variables</a>
This model will cluster a set of data, first with KMeans and then with MiniBatchKMeans, and plot the results. It will also plot the points that are labelled differently between the two algorithms.
```
import time
import numpy as np
import matplotlib.pyplot as plt
from sklearn.cluster import MiniBatchKMeans, KMeans
from sklearn.metrics.pairwise import pairwise_distances_argmin
from sklearn.datasets import make_blobs
# Generate sample data
np.random.seed(0)
batch_size = 45
centers = [[1, 1], [-1, -1], [1, -1]]
n_clusters = len(centers)
X, labels_true = make_blobs(n_samples=3000, centers=centers, cluster_std=0.7)
# Compute clustering with Means
k_means = KMeans(init="k-means++", n_clusters=3, n_init=10)
t0 = time.time()
k_means.fit(X)
t_batch = time.time() - t0
# Compute clustering with MiniBatchKMeans
mbk = MiniBatchKMeans(
init="k-means++",
n_clusters=3,
batch_size=batch_size,
n_init=10,
max_no_improvement=10,
verbose=0,
)
t0 = time.time()
mbk.fit(X)
t_mini_batch = time.time() - t0
# Plot result
fig = plt.figure(figsize=(8, 3))
fig.subplots_adjust(left=0.02, right=0.98, bottom=0.05, top=0.9)
colors = ["#4EACC5", "#FF9C34", "#4E9A06"]
# We want to have the same colors for the same cluster from the
# MiniBatchKMeans and the KMeans algorithm. Let's pair the cluster centers per
# closest one.
k_means_cluster_centers = k_means.cluster_centers_
order = pairwise_distances_argmin(k_means.cluster_centers_, mbk.cluster_centers_)
mbk_means_cluster_centers = mbk.cluster_centers_[order]
k_means_labels = pairwise_distances_argmin(X, k_means_cluster_centers)
mbk_means_labels = pairwise_distances_argmin(X, mbk_means_cluster_centers)
# KMeans
ax = fig.add_subplot(1, 3, 1)
for k, col in zip(range(n_clusters), colors):
my_members = k_means_labels == k
cluster_center = k_means_cluster_centers[k]
ax.plot(X[my_members, 0], X[my_members, 1], "w", markerfacecolor=col, marker=".")
ax.plot(
cluster_center[0],
cluster_center[1],
"o",
markerfacecolor=col,
markeredgecolor="k",
markersize=6,
)
ax.set_title("KMeans")
ax.set_xticks(())
ax.set_yticks(())
plt.text(-3.5, 1.8, "train time: %.2fs\ninertia: %f" % (t_batch, k_means.inertia_))
# MiniBatchKMeans
ax = fig.add_subplot(1, 3, 2)
for k, col in zip(range(n_clusters), colors):
my_members = mbk_means_labels == k
cluster_center = mbk_means_cluster_centers[k]
ax.plot(X[my_members, 0], X[my_members, 1], "w", markerfacecolor=col, marker=".")
ax.plot(
cluster_center[0],
cluster_center[1],
"o",
markerfacecolor=col,
markeredgecolor="k",
markersize=6,
)
ax.set_title("MiniBatchKMeans")
ax.set_xticks(())
ax.set_yticks(())
plt.text(-3.5, 1.8, "train time: %.2fs\ninertia: %f" % (t_mini_batch, mbk.inertia_))
# Initialise the different array to all False
different = mbk_means_labels == 4
ax = fig.add_subplot(1, 3, 3)
for k in range(n_clusters):
different += (k_means_labels == k) != (mbk_means_labels == k)
identic = np.logical_not(different)
ax.plot(X[identic, 0], X[identic, 1], "w", markerfacecolor="#bbbbbb", marker=".")
ax.plot(X[different, 0], X[different, 1], "w", markerfacecolor="m", marker=".")
ax.set_title("Difference")
ax.set_xticks(())
ax.set_yticks(())
plt.show()
```
# `GiRaFFE_NRPy`: Source Terms
## Author: Patrick Nelson
<a id='intro'></a>
**Notebook Status:** <font color=green><b> Validated </b></font>
**Validation Notes:** This code produces the expected results for generated functions.
## This module presents the functionality of [GiRaFFE_NRPy_Source_Terms.py](../../edit/in_progress/GiRaFFE_NRPy/GiRaFFE_NRPy_Source_Terms.py).
## Introduction:
This writes and documents the C code that `GiRaFFE_NRPy` uses to compute the source terms for the right-hand sides of the evolution equations for the unstaggered prescription.
The equations themselves are already coded up in other functions; however, for the $\tilde{S}_i$ source term, we will need derivatives of the metric. It will be most efficient and accurate to take them using the interpolated metric values that we will have calculated anyway; however, we will need to write our derivatives in a nonstandard way within NRPy+ in order to take advantage of this, writing our own code for memory access.
<a id='toc'></a>
# Table of Contents
$$\label{toc}$$
This notebook is organized as follows
1. [Step 1](#stilde_source): The $\tilde{S}_i$ source term
1. [Step 2](#code_validation): Code Validation against original C code
1. [Step 3](#latex_pdf_output): Output this notebook to $\LaTeX$-formatted PDF file
```
# Step 0: Add NRPy's directory to the path
# https://stackoverflow.com/questions/16780014/import-file-from-parent-directory
import os,sys
nrpy_dir_path = os.path.join("..")
if nrpy_dir_path not in sys.path:
sys.path.append(nrpy_dir_path)
import cmdline_helper as cmd
outdir = os.path.join("GiRaFFE_NRPy","GiRaFFE_Ccode_validation","RHSs")
cmd.mkdir(outdir)
```
<a id='stilde_source'></a>
## Step 1: The $\tilde{S}_i$ source term \[Back to [top](#toc)\]
$$\label{stilde_source}$$
We start in the usual way - import the modules we need. We will also import the Levi-Civita symbol from `indexedexp.py` and use it to set the Levi-Civita tensor $\epsilon^{ijk} = [ijk]/\sqrt{\gamma}$.
```
# Step 1: The StildeD RHS *source* term
from outputC import outputC, outCfunction # NRPy+: Core C code output module
import indexedexp as ixp # NRPy+: Symbolic indexed expression (e.g., tensors, vectors, etc.) support
import GRHD.equations as GRHD # NRPy+: Generate general relativistic hydrodynamics equations
import GRFFE.equations as GRFFE # NRPy+: Generate general relativistic force-free electrodynamics equations
thismodule = "GiRaFFE_NRPy_Source_Terms"
def generate_memory_access_code(gammaDD,betaU,alpha):
# There are several pieces of C code that we will write ourselves because we need to do things
# a little bit outside of what NRPy+ is built for.
# First, we will write general memory access. We will read in values from memory at a given point
# for each quantity we care about.
global general_access
general_access = ""
for var in ["GAMMADD00", "GAMMADD01", "GAMMADD02",
"GAMMADD11", "GAMMADD12", "GAMMADD22",
"BETAU0", "BETAU1", "BETAU2","ALPHA",
"BU0","BU1","BU2",
"VALENCIAVU0","VALENCIAVU1","VALENCIAVU2"]:
lhsvar = var.lower().replace("dd","DD").replace("u","U").replace("bU","BU").replace("valencia","Valencia")
# e.g.,
# const REAL gammaDD00dD0 = auxevol_gfs[IDX4S(GAMMA_FACEDD00GF,i0,i1,i2)];
general_access += "const REAL "+lhsvar+" = auxevol_gfs[IDX4S("+var+"GF,i0,i1,i2)];\n"
# This quick function returns a nearby point for memory access. We need this because derivatives are not local operations.
def idxp1(dirn):
if dirn==0:
return "i0+1,i1,i2"
if dirn==1:
return "i0,i1+1,i2"
if dirn==2:
return "i0,i1,i2+1"
# Next we evaluate needed derivatives of the metric, based on their values at cell faces
global metric_deriv_access
metric_deriv_access = []
# for dirn in range(3):
# metric_deriv_access.append("")
# for var in ["GAMMA_FACEDDdD00", "GAMMA_FACEDDdD01", "GAMMA_FACEDDdD02",
# "GAMMA_FACEDDdD11", "GAMMA_FACEDDdD12", "GAMMA_FACEDDdD22",
# "BETA_FACEUdD0", "BETA_FACEUdD1", "BETA_FACEUdD2","ALPHA_FACEdD"]:
# lhsvar = var.lower().replace("dddd","DDdD").replace("udd","UdD").replace("dd","dD").replace("u","U").replace("_face","")
# rhsvar = var.replace("dD","")
# # e.g.,
# # const REAL gammaDDdD000 = (auxevol_gfs[IDX4S(GAMMA_FACEDD00GF,i0+1,i1,i2)]-auxevol_gfs[IDX4S(GAMMA_FACEDD00GF,i0,i1,i2)])/dxx0;
# metric_deriv_access[dirn] += "const REAL "+lhsvar+str(dirn)+" = (auxevol_gfs[IDX4S("+rhsvar+"GF,"+idxp1(dirn)+")]-auxevol_gfs[IDX4S("+rhsvar+"GF,i0,i1,i2)])/dxx"+str(dirn)+";\n"
# metric_deriv_access[dirn] += "REAL Stilde_rhsD"+str(dirn)+";\n"
# For this workaround, instead of taking the derivative of the metric components and then building the
# four-metric, we build the four-metric and then take derivatives. Do this at i and i+1
for dirn in range(3):
metric_deriv_access.append("")
for var in ["GAMMA_FACEDD00", "GAMMA_FACEDD01", "GAMMA_FACEDD02",
"GAMMA_FACEDD11", "GAMMA_FACEDD12", "GAMMA_FACEDD22",
"BETA_FACEU0", "BETA_FACEU1", "BETA_FACEU2","ALPHA_FACE"]:
lhsvar = var.lower().replace("dd","DD").replace("u","U")
rhsvar = var
# e.g.,
# const REAL gammaDD00 = auxevol_gfs[IDX4S(GAMMA_FACEDD00GF,i0,i1,i2)];
metric_deriv_access[dirn] += "const REAL "+lhsvar+" = auxevol_gfs[IDX4S("+rhsvar+"GF,i0,i1,i2)];\n"
# Read in at the next grid point
for var in ["GAMMA_FACEDD00", "GAMMA_FACEDD01", "GAMMA_FACEDD02",
"GAMMA_FACEDD11", "GAMMA_FACEDD12", "GAMMA_FACEDD22",
"BETA_FACEU0", "BETA_FACEU1", "BETA_FACEU2","ALPHA_FACE"]:
lhsvar = var.lower().replace("dd","DD").replace("u","U").replace("_face","_facep1")
rhsvar = var
# e.g.,
# const REAL gammaDD00 = auxevol_gfs[IDX4S(GAMMA_FACEDD00GF,i0+1,i1,i2)];
metric_deriv_access[dirn] += "const REAL "+lhsvar+" = auxevol_gfs[IDX4S("+rhsvar+"GF,"+idxp1(dirn)+")];\n"
metric_deriv_access[dirn] += "REAL Stilde_rhsD"+str(dirn)+";\n"
import BSSN.ADMBSSN_tofrom_4metric as AB4m
AB4m.g4DD_ito_BSSN_or_ADM("ADM",gammaDD,betaU,alpha)
four_metric_vars = [
AB4m.g4DD[0][0],
AB4m.g4DD[0][1],
AB4m.g4DD[0][2],
AB4m.g4DD[0][3],
AB4m.g4DD[1][1],
AB4m.g4DD[1][2],
AB4m.g4DD[1][3],
AB4m.g4DD[2][2],
AB4m.g4DD[2][3],
AB4m.g4DD[3][3]
]
four_metric_names = [
"g4DD00",
"g4DD01",
"g4DD02",
"g4DD03",
"g4DD11",
"g4DD12",
"g4DD13",
"g4DD22",
"g4DD23",
"g4DD33"
]
global four_metric_C, four_metric_Cp1
four_metric_C = outputC(four_metric_vars,four_metric_names,"returnstring",params="outCverbose=False,CSE_sorting=none")
for ii in range(len(four_metric_names)):
four_metric_names[ii] += "p1"
four_metric_Cp1 = outputC(four_metric_vars,four_metric_names,"returnstring",params="outCverbose=False,CSE_sorting=none")
four_metric_C = four_metric_C.replace("gamma","gamma_face").replace("beta","beta_face").replace("alpha","alpha_face").replace("{","").replace("}","").replace("g4","const REAL g4").replace("tmp_","tmp_deriv")
four_metric_Cp1 = four_metric_Cp1.replace("gamma","gamma_facep1").replace("beta","beta_facep1").replace("alpha","alpha_facep1").replace("{","").replace("}","").replace("g4","const REAL g4").replace("tmp_","tmp_derivp")
global four_metric_deriv
four_metric_deriv = []
for dirn in range(3):
four_metric_deriv.append("")
for var in ["g4DDdD00", "g4DDdD01", "g4DDdD02", "g4DDdD03", "g4DDdD11",
"g4DDdD12", "g4DDdD13", "g4DDdD22", "g4DDdD23", "g4DDdD33"]:
lhsvar = var + str(dirn+1)
rhsvar = var.replace("dD","")
rhsvarp1 = rhsvar + "p1"
# e.g.,
# const REAL g44DDdD000 = (g4DD00p1 - g4DD00)/dxx0;
four_metric_deriv[dirn] += "const REAL "+lhsvar+" = ("+rhsvarp1+" - "+rhsvar+")/dxx"+str(dirn)+";\n"
# This creates the C code that writes to the Stilde_rhs direction specified.
global write_final_quantity
write_final_quantity = []
for dirn in range(3):
write_final_quantity.append("")
write_final_quantity[dirn] += "rhs_gfs[IDX4S(STILDED"+str(dirn)+"GF,i0,i1,i2)] += Stilde_rhsD"+str(dirn)+";"
def write_out_functions_for_StildeD_source_term(outdir,outCparams,gammaDD,betaU,alpha,ValenciavU,BU,sqrt4pi):
generate_memory_access_code(gammaDD,betaU,alpha)
# First, we declare some dummy tensors that we will use for the codegen.
gammaDDdD = ixp.declarerank3("gammaDDdD","sym01",DIM=3)
betaUdD = ixp.declarerank2("betaUdD","nosym",DIM=3)
alphadD = ixp.declarerank1("alphadD",DIM=3)
g4DDdD = ixp.declarerank3("g4DDdD","sym01",DIM=4)
# We need to rerun a few of these functions with the reset lists to make sure these functions
# don't cheat by using analytic expressions
GRHD.compute_sqrtgammaDET(gammaDD)
GRHD.u4U_in_terms_of_ValenciavU__rescale_ValenciavU_by_applying_speed_limit(alpha, betaU, gammaDD, ValenciavU)
GRFFE.compute_smallb4U(gammaDD, betaU, alpha, GRHD.u4U_ito_ValenciavU, BU, sqrt4pi)
GRFFE.compute_smallbsquared(gammaDD, betaU, alpha, GRFFE.smallb4U)
GRFFE.compute_TEM4UU(gammaDD,betaU,alpha, GRFFE.smallb4U, GRFFE.smallbsquared,GRHD.u4U_ito_ValenciavU)
# GRHD.compute_g4DD_zerotimederiv_dD(gammaDD,betaU,alpha, gammaDDdD,betaUdD,alphadD)
GRHD.compute_S_tilde_source_termD(alpha, GRHD.sqrtgammaDET,g4DDdD, GRFFE.TEM4UU)
for i in range(3):
desc = "Adds the source term to StildeD"+str(i)+"."
name = "calculate_StildeD"+str(i)+"_source_term"
outCfunction(
outfile = os.path.join(outdir,name+".h"), desc=desc, name=name,
params ="const paramstruct *params,const REAL *auxevol_gfs, REAL *rhs_gfs",
body = general_access \
+metric_deriv_access[i]\
+four_metric_C\
+four_metric_Cp1\
+four_metric_deriv[i]\
+outputC(GRHD.S_tilde_source_termD[i],"Stilde_rhsD"+str(i),"returnstring",params=outCparams).replace("IDX4","IDX4S")\
+write_final_quantity[i],
loopopts ="InteriorPoints",
rel_path_to_Cparams=os.path.join("../"))
```
<a id='code_validation'></a>
# Step 2: Code Validation against original C code \[Back to [top](#toc)\]
$$\label{code_validation}$$
To validate the code in this tutorial we check for agreement between the files
1. that were written in this tutorial and
1. those that are stored in `GiRaFFE_NRPy/GiRaFFE_Ccode_library` or generated by `GiRaFFE_NRPy_Source_Terms.py`
```
# NRPy+ modules needed below that were not imported earlier in this notebook:
import grid as gri                # NRPy+: numerical grid and gridfunction infrastructure
import NRPy_param_funcs as par    # NRPy+: parameter interface
# Declare gridfunctions necessary to generate the C code:
gammaDD = ixp.register_gridfunctions_for_single_rank2("AUXEVOL","gammaDD","sym01",DIM=3)
betaU = ixp.register_gridfunctions_for_single_rank1("AUXEVOL","betaU",DIM=3)
alpha = gri.register_gridfunctions("AUXEVOL","alpha",DIM=3)
BU = ixp.register_gridfunctions_for_single_rank1("AUXEVOL","BU",DIM=3)
ValenciavU = ixp.register_gridfunctions_for_single_rank1("AUXEVOL","ValenciavU",DIM=3)
StildeD = ixp.register_gridfunctions_for_single_rank1("EVOL","StildeD",DIM=3)
# Declare this symbol:
sqrt4pi = par.Cparameters("REAL",thismodule,"sqrt4pi","sqrt(4.0*M_PI)")
# First, we generate the file using the functions written in this notebook:
outCparams = "outCverbose=False"
write_out_functions_for_StildeD_source_term(outdir,outCparams,gammaDD,betaU,alpha,ValenciavU,BU,sqrt4pi)
# Define the directory that we wish to validate against:
valdir = os.path.join("GiRaFFE_NRPy","GiRaFFE_Ccode_library","RHSs")
cmd.mkdir(valdir)
import GiRaFFE_NRPy.GiRaFFE_NRPy_Source_Terms as source
source.write_out_functions_for_StildeD_source_term(valdir,outCparams,gammaDD,betaU,alpha,ValenciavU,BU,sqrt4pi)
import difflib
import sys
print("Printing difference between original C code and this code...")
# Open the files to compare
files = ["calculate_StildeD0_source_term.h","calculate_StildeD1_source_term.h","calculate_StildeD2_source_term.h"]
for file in files:
print("Checking file " + file)
with open(os.path.join(valdir,file)) as file1, open(os.path.join(outdir,file)) as file2:
# Read the lines of each file
file1_lines = file1.readlines()
file2_lines = file2.readlines()
num_diffs = 0
        for line in difflib.unified_diff(file1_lines, file2_lines, fromfile=os.path.join(valdir, file), tofile=os.path.join(outdir, file)):
sys.stdout.writelines(line)
num_diffs = num_diffs + 1
if num_diffs == 0:
print("No difference. TEST PASSED!")
else:
print("ERROR: Disagreement found with .py file. See differences above.")
sys.exit(1)
```
<a id='latex_pdf_output'></a>
# Step 3: Output this notebook to $\LaTeX$-formatted PDF file \[Back to [top](#toc)\]
$$\label{latex_pdf_output}$$
The following code cell converts this Jupyter notebook into a proper, clickable $\LaTeX$-formatted PDF file. After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial directory, with filename
[Tutorial-GiRaFFE_NRPy_C_code_library-Source_Terms](Tutorial-GiRaFFE_NRPy_C_code_library-Source_Terms.pdf) (Note that clicking on this link may not work; you may need to open the PDF file through another means.)
```
import cmdline_helper as cmd # NRPy+: Multi-platform Python command-line interface
cmd.output_Jupyter_notebook_to_LaTeXed_PDF("Tutorial-GiRaFFE_NRPy-Source_Terms",location_of_template_file=os.path.join(".."))
```
# Radius and mean slip of rock patches failing in micro-seismic events
When stresses in a rock surpass its shear strength, the affected rock volume will fail by shearing.
Assume that we observe a circular patch with radius $r$ on, e.g. a fault, and that this patch is affected by a slip with an average slip distance $d$.
This slip is a response to increasing shear stresses, hence it reduces shear stresses by $\Delta \tau$.
These three parameters are linked by:
$$\Delta \tau = \frac{7 \, \pi \, \mu}{16 \, r} \, d $$
where $\mu$ is the shear modulus near the fault.
The seismic moment $M_0$, the energy to offset an area $A$ by a distance $d$, is defined by:
$$M_0 = \mu \, d \, A$$
$$ d = \frac{M_0}{\mu \, A} $$
with $A = \pi r^2$.
The [USGS definition](https://earthquake.usgs.gov/learn/glossary/?term=seismic%20moment) for the seismic moments is: *The seismic moment is a measure of the size of an earthquake based on the area of fault rupture, the average amount of slip, and the force that was required to overcome the friction sticking the rocks together that were offset by faulting. Seismic moment can also be calculated from the amplitude spectra of seismic waves.*
Putting the $d = ...$ equation in the first one and solving for the radius yields:
$$r = \bigg(\frac{7 \, M_0}{16 \, \Delta \tau}\bigg)^{1/3}$$
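For reference, substituting $d = \frac{M_0}{\mu \, \pi \, r^2}$ into the stress-drop relation gives the intermediate step
$$\Delta \tau = \frac{7 \, \pi \, \mu}{16 \, r} \cdot \frac{M_0}{\mu \, \pi \, r^2} = \frac{7 \, M_0}{16 \, r^3}$$
from which the expression for $r$ above follows directly.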
The following code leads to a plot which relates the influenced radius $r$ to the average displacement $d$ for micro-earthquakes. It shows that for a small shear stress reduction $\Delta \tau$ a larger area is affected by smaller displacements, whereas for larger shear stress reductions smaller areas are affected by bigger displacements.
```
# import libraries
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
sns.set_style('ticks')
sns.set_context('talk')
def get_displacement(mu, dtau, m0):
r = ((7*m0)/(16*dtau))**(1./3.)
d = m0 / (mu*r**2 * np.pi)
# Alternatively:
# od = np.pi * mu * r * (7/(16*dtau*m0**2))**(1./3.)
# d = 1 / od
return r, d
# Parameters
dtau = np.arange(1,11)*1e6 # shear stress reduction
m0 = np.array([3.2e10, 1.0e12, 3.2e13]) # seismic moment
mu = 2.5e10 # shear modulus
# calculate displacements and radius
displacements = np.concatenate([get_displacement(mu, x, m0) for x in dtau])
# seperate arrays
disps = displacements[1::2,:]
rads = displacements[0::2,:]
# min tau and max tau
mitau = np.polyfit(disps[0,:], rads[0,:],1)
matau = np.polyfit(disps[-1,:], rads[-1,:],1)
dsim = np.linspace(0,0.033)
mirad = mitau[0]*dsim+mitau[1]
marad = matau[0]*dsim+matau[1]
# plot results
fig = plt.figure(figsize=[12,7])
plt.plot(disps[:,0]*1000, rads[:,0], '.', label='M$_w$1')
plt.plot(disps[:,1]*1000, rads[:,1], '^', label='M$_w$2')
plt.plot(disps[:,2]*1000, rads[:,2], 's', label='M$_w$3')
plt.plot(dsim*1000, mirad, '-', color='gray', alpha=.5)
plt.plot(dsim*1000, marad, '-', color='gray', alpha=.5)
plt.legend()
plt.ylim([0, 300])
plt.xlim([0, 0.033*1000])
plt.text(.8, 200, r'$\Delta \tau = 1$ MPa', fontsize=14)
plt.text(20, 55, r'$\Delta \tau = 10$ MPa', fontsize=14)
plt.xlabel('average displacement [mm]')
plt.ylabel('influenced radius [m]')
#fig.savefig('displacement_radius.png', dpi=300, bbox_inches='tight')
```
# Color extraction from images with Lithops4Ray
In this tutorial we explain how to use Lithops4Ray to extract colors and the [HSV](https://en.wikipedia.org/wiki/HSL_and_HSV) color range from images persisted in IBM Cloud Object Storage. To experiment with this tutorial, you can use any public image dataset and upload it to your bucket in IBM Cloud Object Storage. For example, follow the [Stanford Dogs Dataset](http://vision.stanford.edu/aditya86/ImageNetDogs/) link to download images. We also provide an upload [script](https://github.com/project-codeflare/data-integration/blob/main/scripts/upload_to_ibm_cos.py) that can be used to upload local images to IBM Cloud Object Storage.
Our code uses the colorthief package, which needs to be installed in the Ray cluster on both head and worker nodes. You can edit the `cluster.yaml` file and add
`- pip install colorthief`
to the `setup_commands` section. This will ensure that once the Ray cluster is started, the required package is installed automatically.
```
import lithops
import ray
```
We write a function that extracts the dominant color from a single image. Once invoked, the Lithops framework will inject a reserved parameter `obj` that points to the data stream of the image. More information on the reserved `obj` parameter can be found [here](https://github.com/lithops-cloud/lithops/blob/master/docs/data_processing.md#processing-data-from-a-cloud-object-storage-service).
```
def extract_color(obj):
from colorthief import ColorThief
body = obj.data_stream
dominant_color = ColorThief(body).get_color(quality=10)
return dominant_color, obj.key
```
We now write a Ray task that will return the image name and the HSV color range of the image. Instead of calling the extract_color function directly, Lithops is used behind the scenes (through the data object) to call it only at the right moment.
```
@ray.remote
def identify_colorspace(data):
import colorsys
color, name = data.result()
hsv = colorsys.rgb_to_hsv(color[0], color[1], color[2])
val = hsv[0] * 180
return name, val
```
Now let's tie it all together with a main method. Using Lithops allows us to remove all the boilerplate code required to list data from the object storage. It also inspects the data source by using the internal Lithops data partitioner and creates a lazy execution plan, where each entry maps an "extract_color" function to a single image. Moreover, Lithops creates a single authentication token that is used by all the tasks, instead of letting each task perform authentication. The parallelism is controlled by Ray, and once a Ray task is executed, it will call Lithops to execute the extract_color function directly in the context of the calling task. Thus, by using Lithops, we can allow code to access object storage data without requiring additional coding effort from the user.
```
if __name__ == '__main__':
ray.init(ignore_reinit_error=True)
fexec = lithops.LocalhostExecutor(log_level=None)
my_data = fexec.map(extract_color, 'cos://<bucket>/<path to images>/')
results = [identify_colorspace.remote(d) for d in my_data]
for res in results:
value = ray.get(res)
print("Image: " + value[0] + ", dominant color HSV range: " + str(value[1]))
ray.shutdown()
```
# Introduction to Deep Learning with PyTorch
In this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks.
## Neural Networks
Deep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.
<img src="assets/simple_neuron.png" width=400px>
Mathematically this looks like:
$$
\begin{align}
y &= f(w_1 x_1 + w_2 x_2 + b) \\
y &= f\left(\sum_i w_i x_i +b \right)
\end{align}
$$
With vectors this is the dot/inner product of two vectors:
$$
h = \begin{bmatrix}
x_1 \, x_2 \cdots x_n
\end{bmatrix}
\cdot
\begin{bmatrix}
w_1 \\
w_2 \\
\vdots \\
w_n
\end{bmatrix}
$$
## Tensors
It turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.
<img src="assets/tensor_examples.svg" width=600px>
With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network.
```
# First, import PyTorch
import torch
def activation(x):
""" Sigmoid activation function
Arguments
---------
x: torch.Tensor
"""
return 1/(1+torch.exp(-x))
### Generate some data
torch.manual_seed(7) # Set the random seed so things are predictable
# Features are 5 random normal variables
features = torch.randn((1, 5))
# True weights for our data, random normal variables again
weights = torch.randn_like(features)
# and a true bias term
bias = torch.randn((1, 1))
```
Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:
`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one.
`weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.
Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.
PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network.
> **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function.
```
## Calculate the output of this network using the weights and bias tensors
y_hat = activation(torch.mm(features, weights.T) + bias)
print(y_hat)
```
You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.
Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error
```python
>> torch.mm(features, weights)
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-13-15d592eb5279> in <module>()
----> 1 torch.mm(features, weights)
RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033
```
As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.
**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.
There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).
* `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`; sometimes it returns a view and sometimes a clone, i.e. it copies the data to another part of memory.
* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.
* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.
I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.
> **Exercise**: Calculate the output of our little network using matrix multiplication.
```
## Calculate the output of this network using matrix multiplication
# One possible solution: reshape weights to (5, 1) so the shapes align
y_hat = activation(torch.mm(features, weights.view(5, 1)) + bias)
print(y_hat)
```
### Stack them up!
That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.
<img src='assets/multilayer_diagram_weights.png' width=450px>
The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated
$$
\vec{h} = [h_1 \, h_2] =
\begin{bmatrix}
x_1 \, x_2 \cdots \, x_n
\end{bmatrix}
\cdot
\begin{bmatrix}
w_{11} & w_{12} \\
w_{21} &w_{22} \\
\vdots &\vdots \\
w_{n1} &w_{n2}
\end{bmatrix}
$$
The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply
$$
y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)
$$
```
### Generate some data
torch.manual_seed(7) # Set the random seed so things are predictable
# Features are 3 random normal variables
features = torch.randn((1, 3))
# Define the size of each layer in our network
n_input = features.shape[1] # Number of input units, must match number of input features
n_hidden = 2 # Number of hidden units
n_output = 1 # Number of output units
# Weights for inputs to hidden layer
W1 = torch.randn(n_input, n_hidden)
# Weights for hidden layer to output layer
W2 = torch.randn(n_hidden, n_output)
# and bias terms for hidden and output layers
B1 = torch.randn((1, n_hidden))
B2 = torch.randn((1, n_output))
```
> **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`.
```
## Your solution here
y_hat = activation(torch.mm(activation(torch.mm(features, W1) + B1), W2) + B2)
print(y_hat)
```
If you did this correctly, you should see the output `tensor([[ 0.3171]])`.
The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions.
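For example (just to illustrate the shapes involved, not part of the original exercise), changing `n_hidden` only changes the sizes of the weight and bias tensors that touch the hidden layer:
```
n_hidden = 4                            # a different hyperparameter choice
W1 = torch.randn(n_input, n_hidden)     # now shape (3, 4)
W2 = torch.randn(n_hidden, n_output)    # now shape (4, 1)
B1 = torch.randn((1, n_hidden))         # now shape (1, 4)
```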
## Numpy to Torch and back
Special bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method.
```
import numpy as np
a = np.random.rand(4,3)
a
b = torch.from_numpy(a)
b
b.numpy()
```
The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well.
```
# Multiply PyTorch Tensor by 2, in place
b.mul_(2)
# Numpy array matches new values from Tensor
a
```
# Tutorial - Evaluate DNBs additional Rules
This notebook contains a tutorial for the evaluation of DNBs additional Rules for the following Solvency II reports:
- Annual Reporting Solo (ARS); and
- Quarterly Reporting Solo (QRS)
Besides the necessary preparation, the tutorial consists of 6 steps:
1. Read possible datapoints
2. Read data
3. Clean data
4. Read additional rules
5. Evaluate rules
6. Save results
## 0. Preparation
### Import packages
```
import pandas as pd # dataframes
import numpy as np # mathematical functions, arrays and matrices
from os.path import join, isfile # some os dependent functionality
import data_patterns # evaluation of patterns
import regex as re # regular expressions
from pprint import pprint # pretty print
import logging
```
### Variables
```
# ENTRYPOINT: 'ARS' for 'Annual Reporting Solo' or 'QRS' for 'Quarterly Reporting Solo'
# INSTANCE: Name of the report you want to evaluate the additional rules for
ENTRYPOINT = 'ARS'
INSTANCE = 'ars_240_instance' # Test instances: ars_240_instance or qrs_240_instance
# DATAPOINTS_PATH: path to the excel-file containing all possible datapoints (simplified taxonomy)
# RULES_PATH: path to the excel-file with the additional rules
# INSTANCES_DATA_PATH: path to the source data
# RESULTS_PATH: path to the results
DATAPOINTS_PATH = join('..', 'data', 'datapoints')
RULES_PATH = join('..', 'solvency2-rules')
INSTANCES_DATA_PATH = join('..', 'data', 'instances', INSTANCE)
RESULTS_PATH = join('..', 'results')
# We log to rules.log in the data/instances path
logging.basicConfig(filename = join(INSTANCES_DATA_PATH, 'rules.log'),level = logging.INFO,
format='%(asctime)s - %(name)s - %(levelname)s - %(message)s')
```
## 1. Read possible datapoints
In the data/datapoints directory there is a file for both ARS and QRS in which all possible datapoints are listed (simplified taxonomy).
We will use this information to add all unreported datapoints to the imported data.
```
df_datapoints = pd.read_csv(join(DATAPOINTS_PATH, ENTRYPOINT.upper() + '.csv'), sep=";").fillna("") # load file to dataframe
df_datapoints.head()
```
## 2. Read data
We distinguish 2 types of tables:
- With a closed-axis, e.g. the balance sheet: an entity reports only 1 balance sheet per period
- With an open-axis, e.g. the list of assets: an entity reports several 'rows of data' in the relevant table
### General information
First we gather some general information:
- A list of all possible reported tables
- A list of all reported tables
- A list of all tables that have not been reported
```
tables_complete_set = df_datapoints.tabelcode.sort_values().unique().tolist()
tables_reported = [table for table in tables_complete_set if isfile(join(INSTANCES_DATA_PATH, table + '.pickle'))]
tables_not_reported = [table for table in tables_complete_set if table not in tables_reported]
```
### Closed-axis
Besides all separate tables, the 'Tutorial Convert XBRL-instance to CSV, HTML and pickles' also outputs a large dataframe with the data from all closed-axis tables combined.
We use this dataframe for evaluating the patterns on closed-axis tables.
```
df_closed_axis = pd.read_pickle(join(INSTANCES_DATA_PATH, INSTANCE + '.pickle'))
tables_closed_axis = sorted(list(set(x[:13] for x in df_closed_axis.columns)))
df_closed_axis.head()
```
### Open-axis
For open-axis tables we create a dictionary with all data per table.
Later we will evaluate the additional rules on each separate table in this dictionary.
```
dict_open_axis = {}
tables_open_axis = [table for table in tables_reported if table not in tables_closed_axis]
for table in tables_open_axis:
df = pd.read_pickle(join(INSTANCES_DATA_PATH, table + '.pickle'))
# Identify which columns within the open-axis table make a table row unique (index-columns):
index_columns_open_axis = [col for col in list(df.index.names) if col not in ['entity','period']]
# Duplicate index-columns to data columns:
df.reset_index(level=index_columns_open_axis, inplace=True)
for i in range(len(index_columns_open_axis)):
df['index_col_' + str(i)] = df[index_columns_open_axis[i]].astype(str)
df.set_index(['index_col_' + str(i)], append=True, inplace=True)
dict_open_axis[table] = df
print("Open-axis tables:")
print(list(dict_open_axis.keys()))
```
## 3. Clean data
We have to make 2 modifications on the data:
1. Add unreported datapoints
so rules (partly) pointing to unreported datapoints can still be evaluated
2. Change string values to uppercase
because the additional rules are defined using capital letters for textual comparisons
```
all_datapoints = [x.replace(',,',',') for x in
list(df_datapoints['tabelcode'] + ',' + df_datapoints['rij'] + ',' + df_datapoints['kolom'])]
all_datapoints_closed = [x for x in all_datapoints if x[:13] in tables_closed_axis]
all_datapoints_open = [x for x in all_datapoints if x[:13] in tables_open_axis]
```
### Closed-axis tables
```
# add not reported datapoints to the dataframe with data from closed axis tables:
for col in [column for column in all_datapoints_closed if column not in list(df_closed_axis.columns)]:
df_closed_axis[col] = np.nan
df_closed_axis.fillna(0, inplace = True)
# string values to uppercase
df_closed_axis = df_closed_axis.applymap(lambda s:s.upper() if type(s) == str else s)
```
### Open-axis tables
```
for table in [table for table in dict_open_axis.keys()]:
all_datapoints_table = [x for x in all_datapoints_open if x[:13] == table]
for col in [column for column in all_datapoints_table if column not in list(dict_open_axis[table].columns)]:
dict_open_axis[table][col] = np.nan
dict_open_axis[table].fillna(0, inplace = True)
dict_open_axis[table] = dict_open_axis[table].applymap(lambda s:s.upper() if type(s) == str else s)
```
## 4. Read additional rules
DNB's additional validation rules are published as an Excel file on the DNB statistics website.
We included the Excel file in the project under data/downloaded files.
The rules are already converted to a syntax Python can interpret, using the notebook: 'Convert DNBs Additional Validation Rules to Patterns'.
In the next line of code we read these converted rules (patterns).
```
df_patterns = pd.read_excel(join(RULES_PATH, ENTRYPOINT.lower() + '_patterns_additional_rules.xlsx'), engine='openpyxl').fillna("").set_index('index')
```
## 5. Evaluate rules
### Closed-axis tables
To be able to evaluate the rules for closed-axis tables, we need to filter out:
- patterns for open-axis tables; and
- patterns pointing to tables that are not reported.
```
df_patterns_closed_axis = df_patterns.copy()
df_patterns_closed_axis = df_patterns_closed_axis[df_patterns_closed_axis['pandas ex'].apply(
lambda expr: not any(table in expr for table in tables_not_reported)
and not any(table in expr for table in tables_open_axis))]
df_patterns_closed_axis.head()
```
We now have:
- the data for closed-axis tables in a dataframe;
- the patterns for closed-axis tables in a dataframe.
To evaluate the patterns we need to create a 'PatternMiner' (part of the data_patterns package), and run the analyze function.
```
miner = data_patterns.PatternMiner(df_patterns=df_patterns_closed_axis)
df_results_closed_axis = miner.analyze(df_closed_axis)
df_results_closed_axis.head()
```
### Open-axis tables
First find the patterns defined for open-axis tables
```
df_patterns_open_axis = df_patterns.copy()
df_patterns_open_axis = df_patterns_open_axis[df_patterns_open_axis['pandas ex'].apply(
lambda expr: any(table in expr for table in tables_open_axis))]
```
Patterns involving multiple open-axis tables are not yet supported
```
df_patterns_open_axis = df_patterns_open_axis[df_patterns_open_axis['pandas ex'].apply(
lambda expr: len(set(re.findall('S.\d\d.\d\d.\d\d.\d\d',expr)))) == 1]
df_patterns_open_axis.head()
```
Next we loop through the open-axis tables and evaluate the corresponding patterns on the data
```
output_open_axis = {} # dictionary with input and results per table
for table in tables_open_axis: # loop through open-axis tables
if df_patterns_open_axis['pandas ex'].apply(lambda expr: table in expr).sum() > 0: # check if there are patterns
info = {}
info['data'] = dict_open_axis[table] # select data
info['patterns'] = df_patterns_open_axis[df_patterns_open_axis['pandas ex'].apply(
lambda expr: table in expr)] # select patterns
miner = data_patterns.PatternMiner(df_patterns=info['patterns'])
info['results'] = miner.analyze(info['data']) # evaluate patterns
output_open_axis[table] = info
```
Print results for the first table (if there are rules for tables with an open axis)
```
if len(output_open_axis.keys()) > 0:
display(output_open_axis[list(output_open_axis.keys())[0]]['results'].head())
```
## 6. Save results
### Combine results for closed- and open-axis tables
To output the results in a single file, we want to combine the results for closed-axis and open-axis tables
```
# Function to transform results for open-axis tables, so it can be appended to results for closed-axis tables
# The 'extra' index columns are converted to data columns
def transform_results_open_axis(df):
if df.index.nlevels > 2:
reset_index_levels = list(range(2, df.index.nlevels))
df = df.reset_index(level=reset_index_levels)
rename_columns={}
for x in reset_index_levels:
rename_columns['level_' + str(x)] = 'id_column_' + str(x - 1)
df.rename(columns=rename_columns, inplace=True)
return df
df_results = df_results_closed_axis.copy() # results for closed axis tables
for table in list(output_open_axis.keys()): # for all open axis tables with rules -> append and sort results
df_results = transform_results_open_axis(output_open_axis[table]['results']).append(df_results, sort=False).sort_values(by=['pattern_id']).sort_index()
```
Change column order so the dataframe starts with the identifying columns:
```
list_col_order = []
for i in range(1, len([col for col in list(df_results.columns) if col[:10] == 'id_column_']) + 1):
list_col_order.append('id_column_' + str(i))
list_col_order.extend(col for col in list(df_results.columns) if col not in list_col_order)
df_results = df_results[list_col_order]
df_results.head()
```
### Save results
The dataframe df_results contains all output of the evaluation of the validation rules.
```
# To save all results use df_results
# To save all exceptions use df_results['result_type']==False
# To save all confirmations use df_results['result_type']==True
# Here we save only the exceptions to the validation rules
df_results[df_results['result_type']==False].to_excel(join(RESULTS_PATH, "results.xlsx"))
```
### Example of an error in the report
```
# Get the pandas code from the first pattern and evaluate it
s = df_patterns.loc[4, 'pandas ex'].replace('df', 'df_closed_axis')
print('Pattern:', s)
display(eval(s)[re.findall('S.\d\d.\d\d.\d\d.\d\d,R\d\d\d\d,C\d\d\d\d',s)])
```
# SST-2
# Simple Baselines using ``mean`` and ``last`` pooling
## Libraries
```
# !pip install transformers==4.8.2
# !pip install datasets==1.7.0
# !pip install ax-platform==0.1.20
import os
import sys
sys.path.insert(0, os.path.abspath("../..")) # comment this if library is pip installed
import io
import re
import pickle
from timeit import default_timer as timer
from tqdm.notebook import tqdm
import numpy as np
import torch
import torch.nn as nn
import torch.optim as optim
from datasets import load_dataset, Dataset, concatenate_datasets
from transformers import AutoTokenizer
from transformers import BertModel
from transformers.data.data_collator import DataCollatorWithPadding
from ax import optimize
from ax.plot.contour import plot_contour
from ax.plot.trace import optimization_trace_single_method
from ax.service.managed_loop import optimize
from ax.utils.notebook.plotting import render, init_notebook_plotting
import esntorch.core.reservoir as res
import esntorch.core.learning_algo as la
import esntorch.core.merging_strategy as ms
import esntorch.core.esn as esn
%config Completer.use_jedi = False
%load_ext autoreload
%autoreload 2
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
device
SEED = 42
```
## Global variables
```
CACHE_DIR = '~/Data/huggignface/' # put your path here
RESULTS_FILE = 'Results/Baselines_v2/sst-2_results_.pkl' # put your path here
```
## Dataset
```
# download dataset
# full train, mini train, and val sets
raw_datasets = load_dataset('glue', 'sst2', cache_dir=CACHE_DIR)
raw_datasets = raw_datasets.rename_column('sentence', 'text')
full_train_dataset = raw_datasets['train']
train_dataset = full_train_dataset.train_test_split(train_size=0.3, shuffle=True)['train']
val_dataset = raw_datasets['validation']
# special test set
test_dataset = load_dataset('gpt3mix/sst2', split='test', cache_dir=CACHE_DIR)
def clean(example):
example['text'] = example['text'].replace('-LRB-', '(').replace('-RRB-', ')').replace(r'\/', r'/')
example['label'] = np.abs(example['label'] - 1) # revert labels of test set
return example
test_dataset = test_dataset.map(clean)
# create dataset_d
dataset_d = {}
dataset_d = {
'full_train': full_train_dataset,
'train': train_dataset,
'val': val_dataset,
'test': test_dataset
}
dataset_d
# tokenize
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
def tokenize_function(examples):
return tokenizer(examples["text"], padding=False, truncation=True, return_length=True)
for k, v in dataset_d.items():
tmp = v.map(tokenize_function, batched=True)
tmp = tmp.rename_column('length', 'lengths')
tmp = tmp.sort("lengths")
tmp = tmp.rename_column('label', 'labels')
tmp.set_format(type='torch', columns=['input_ids', 'attention_mask', 'labels', 'lengths'])
dataset_d[k] = tmp
# dataloaders
dataloader_d = {}
for k, v in dataset_d.items():
dataloader_d[k] = torch.utils.data.DataLoader(v, batch_size=256, collate_fn=DataCollatorWithPadding(tokenizer))
dataset_d
```
## Optimization
```
baseline_params = {
'embedding_weights': 'bert-base-uncased', # TEXT.vocab.vectors,
'distribution' : 'uniform', # uniform, gaussian
'input_dim' : 768, # dim of encoding!
'reservoir_dim' : 0, # not used
'bias_scaling' : 0.0, # not used
'sparsity' : 0.0, # not used
'spectral_radius' : None,
'leaking_rate': 0.5, # not used
'activation_function' : 'tanh',
'input_scaling' : 0.1,
'mean' : 0.0,
'std' : 1.0,
'learning_algo' : None,
'criterion' : None,
'optimizer' : None,
'merging_strategy' : None,
'lexicon' : None,
'bidirectional' : False,
'mode' : 'no_layer', # simple baseline
'device' : device,
'seed' : 4
}
results_d = {}
for pooling_strategy in tqdm(['last', 'mean']):
results_d[pooling_strategy] = {}
for alpha in tqdm([0.1, 1.0, 10.0, 100.0]):
results_d[pooling_strategy][alpha] = []
# model
baseline_params['merging_strategy'] = pooling_strategy
baseline_params['mode'] = 'no_layer'
print(baseline_params)
ESN = esn.EchoStateNetwork(**baseline_params)
ESN.learning_algo = la.RidgeRegression(alpha=alpha)
ESN = ESN.to(device)
# train
t0 = timer()
LOSS = ESN.fit(dataloader_d["full_train"]) # full train set
t1 = timer()
acc = ESN.predict(dataloader_d["test"], verbose=False)[1].item() # full test set
# results
results_d[pooling_strategy][alpha].append([acc, t1 - t0])
# clean objects
del ESN.learning_algo
del ESN.criterion
del ESN.merging_strategy
del ESN
torch.cuda.empty_cache()
results_d
```
## Results
```
# save results
with open(RESULTS_FILE, 'wb') as fh:
pickle.dump(results_d, fh)
# # load results
# with open(os.path.join(RESULTS_PATH, RESULTS_FILE), 'rb') as fh:
# results_d = pickle.load(fh)
# results_d
```
# Introduction to Data Science
## From correlation to supervised segmentation and tree-structured models
Spring 2018 - Profs. Foster Provost and Josh Attenberg
Teaching Assistant: Apostolos Filippas
***
### Some general imports
```
import os
import numpy as np
import pandas as pd
import math
import matplotlib.pylab as plt
import seaborn as sns
%matplotlib inline
sns.set(style='ticks', palette='Set2')
```
Recall the automobile MPG dataset from last week? Because it's familiar, let's reuse it here.
```
url = "http://archive.ics.uci.edu/ml/machine-learning-databases/auto-mpg/auto-mpg.data-original"
column_names = ['mpg', 'cylinders', 'displacement', 'horsepower', 'weight', 'acceleration',
'model', 'origin', 'car_name']
mpg_df = pd.read_csv(url,
delim_whitespace=True,
header=None,
names=column_names).dropna()
```
Rather than attempt to predict the MPG from the other aspects of a car, let's try a simple classification problem: whether a car gets good mileage (high MPG) or not.
```
mpg_df["mpg"].hist()
```
Arbitrarily, let's say that those cars with an MPG greater than the median get good miles per gallon.
```
median_mpg = mpg_df["mpg"].median()
print ("the median MPG is: %s" % median_mpg)
def is_high_mpg(mpg):
return 1 if mpg > median_mpg else 0
mpg_df["is_high_mpg"] = mpg_df["mpg"].apply(is_high_mpg)
```
We'd like to use information contained in the other automobile quantities to predict whether or not the car is efficient. Let's take a look at how well these observables "split" our data according to our target.
```
def visualize_split(df, target_column, info_column, color_one="red", color_two="blue"):
plt.rcParams['figure.figsize'] = [15.0, 2.0]
color = ["red" if x == 0 else "blue" for x in df[target_column]]
plt.scatter(df[info_column], df[target_column], c=color, s=50)
plt.xlabel(info_column)
plt.ylabel(target_column)
plt.show()
visualize_split(mpg_df, "is_high_mpg", "weight")
```
Above we see a scatter plot of all possible car weights and a color code that represents our target variable (is good mpg).
- Blue dots correspond to fuel efficient cars, red dots are fuel inefficient cars
- The horizontal position is the weight of the car
- The vertical position separates our two classes
Clearly car weight and high MPG-ness are correlated.
Looks like cars weighing more than 3000 lbs tend to be inefficient. How effective is this decision boundary? Let's quantify it!
***
**Entropy** ($H$) and **information gain** ($IG$) are useful tools for measuring the effectiveness of a split on the data. Entropy measures how random the data is; information gain measures the reduction in randomness after performing a split.
<table style="border: 0px">
<tr style="border: 0px">
<td style="border: 0px"><img src="images/dsfb_0304.png" height=80% width=80%>
Figure 3-4. Splitting the "write-off" sample into two segments, based on splitting the Balance attribute (account balance) at 50K.</td>
<td style="border: 0px; width: 30px"></td>
<td style="border: 0px"><img src="images/dsfb_0305.png" height=75% width=75%>
Figure 3-5. A classification tree split on the three-values Residence attribute.</td>
</tr>
</table>
Given the data, it is fairly straightforward to calculate both of these quantities.
##### Functions to get the entropy and IG
```
def entropy(target_column):
"""
computes -sum_i p_i * log_2 (p_i) for each i
"""
# get the counts of each target value
target_counts = target_column.value_counts().astype(float).values
total = target_column.count()
# compute probas
probas = target_counts/total
# p_i * log_2 (p_i)
entropy_components = probas * np.log2(probas)
# return negative sum
return - entropy_components.sum()
def information_gain(df, info_column, target_column, threshold):
"""
    computes H(target) minus the count-weighted entropies H(target | info > thresh) and H(target | info <= thresh)
"""
data_above_thresh = df[df[info_column] > threshold]
data_below_thresh = df[df[info_column] <= threshold]
H = entropy(df[target_column])
entropy_above = entropy(data_above_thresh[target_column])
entropy_below = entropy(data_below_thresh[target_column])
ct_above = data_above_thresh.shape[0]
ct_below = data_below_thresh.shape[0]
tot = float(df.shape[0])
return H - entropy_above*ct_above/tot - entropy_below*ct_below/tot
```
Now that we have a way of calculating $H$ and $IG$, let's test our prior hunch, that using 3000 as a split on weight allows us to determine if a car is high MPG using $IG$.
```
threshold = 3000
prior_entropy = entropy(mpg_df["is_high_mpg"])
IG = information_gain(mpg_df, "weight", "is_high_mpg", threshold)
print ("IG of %.4f using a threshold of %.2f given a prior entropy of %.4f" % (IG, threshold, prior_entropy))
```
How good was our guess of 3000? Let's loop through all possible splits on weight and see what is the best!
```
def best_threshold(df, info_column, target_column, criteria=information_gain):
maximum_ig = 0
maximum_threshold = 0
for thresh in df[info_column]:
IG = criteria(df, info_column, target_column, thresh)
if IG > maximum_ig:
maximum_ig = IG
maximum_threshold = thresh
return (maximum_threshold, maximum_ig)
maximum_threshold, maximum_ig = best_threshold(mpg_df, "weight", "is_high_mpg")
print ("the maximum IG we can achieve splitting on weight is %.4f using a thresh of %.2f" % (maximum_ig, maximum_threshold))
```
Other observed features may also give us a strong clue about the efficiency of cars.
```
predictor_cols = ['cylinders', 'displacement', 'horsepower', 'weight', 'acceleration', 'model', 'origin']
for col in predictor_cols:
visualize_split(mpg_df, "is_high_mpg", col)
```
This now begs the question: what feature gives the most effective split?
```
def best_split(df, info_columns, target_column, criteria=information_gain):
maximum_ig = 0
maximum_threshold = 0
maximum_column = ""
for info_column in info_columns:
thresh, ig = best_threshold(df, info_column, target_column, criteria)
if ig > maximum_ig:
maximum_ig = ig
maximum_threshold = thresh
maximum_column = info_column
return maximum_column, maximum_threshold, maximum_ig
maximum_column, maximum_threshold, maximum_ig = best_split(mpg_df, predictor_cols, "is_high_mpg")
print ("The best column to split on is %s giving us a IG of %.4f using a thresh of %.2f" % (maximum_column, maximum_ig, maximum_threshold))
```
### The Classifier Tree: Recursive Splitting
Of course, splitting the data one time sometimes isn't enough to make accurate categorical predictions. However, we can continue to split the data recursively until we achieve acceptable results. This recursive splitting is the basis for a "decision tree classifier" or "classifier tree", a popular and powerful class of machine learning algorithm. In particular, this specific algorithm is known as ID3, for Iterative Dichotomiser 3.
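To make the recursion concrete, here is a minimal sketch (not part of the original notebook) of ID3-style recursive splitting that reuses the `entropy` and `best_split` functions defined above; the depth limit and the nested-dictionary tree representation are illustrative choices.
```
# Hypothetical sketch of recursive splitting, reusing entropy() and best_split() from above.
# The max_depth stopping rule and the nested-dict representation are illustrative choices.
def build_tree(df, info_columns, target_column, depth=0, max_depth=2):
    # Stop when the node is pure or the depth limit is reached: predict the majority class
    if entropy(df[target_column]) == 0 or depth == max_depth:
        return {"prediction": df[target_column].mode()[0]}
    col, thresh, ig = best_split(df, info_columns, target_column)
    if ig == 0:  # no split improves purity
        return {"prediction": df[target_column].mode()[0]}
    return {"column": col,
            "threshold": thresh,
            "left": build_tree(df[df[col] <= thresh], info_columns, target_column, depth + 1, max_depth),
            "right": build_tree(df[df[col] > thresh], info_columns, target_column, depth + 1, max_depth)}

build_tree(mpg_df, predictor_cols, "is_high_mpg")
```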
What are some other ways you might consider splitting the data?
```
def Plot_Data(df, info_col_1, info_col_2, target_column, color1="red", color2="blue"):
# Make the plot square
plt.rcParams['figure.figsize'] = [12.0, 8.0]
# Color
color = [color1 if x == 0 else color2 for x in df[target_column]]
# Plot and label
plt.scatter(df[info_col_1], df[info_col_2], c=color, s=50)
plt.xlabel(info_col_1)
plt.ylabel(info_col_2)
plt.xlim([min(df[info_col_1]) , max(df[info_col_1]) ])
plt.ylim([min(df[info_col_2]) , max(df[info_col_2]) ])
plt.show()
plt.figure(figsize=[7,5])
Plot_Data(mpg_df, "acceleration", "weight","is_high_mpg")
```
Rather than build a classifier tree from scratch (think if you could now do this!) let's use sklearn's implementation which includes some additional functionality.
```
from sklearn.tree import DecisionTreeClassifier
# Let's define the model (tree)
decision_tree = DecisionTreeClassifier(max_depth=1, criterion="entropy") # Look at those 2 arguments !!!
# Let's tell the model what is the data
decision_tree.fit(mpg_df[predictor_cols], mpg_df["is_high_mpg"])
```
We now have a classifier tree, let's visualize the results!
```
from IPython.display import Image
from sklearn.tree import export_graphviz
def visualize_tree(decision_tree, feature_names, class_names, directory="./images", name="tree",proportion=True):
# Export our decision tree to graphviz format
dot_name = "%s/%s.dot" % (directory, name)
dot_file = export_graphviz(decision_tree, out_file=dot_name,
feature_names=feature_names, class_names=class_names,proportion=proportion)
# Call graphviz to make an image file from our decision tree
image_name = "%s/%s.png" % (directory, name)
os.system("dot -Tpng %s -o %s" % (dot_name, image_name))
# to get this part to actually work, you may need to open a terminal window in Jupyter and run the following command "sudo apt install graphviz"
# Return the .png image so we can see it
return Image(filename=image_name)
visualize_tree(decision_tree, predictor_cols, ["n", "y"])
```
Let's look at `"acceleration"` and `"weight"`, including the **DECISION SURFACE!!**
More details for this graph: [sklearn decision surface](http://scikit-learn.org/stable/auto_examples/tree/plot_iris.html)
```
def Decision_Surface(data, col1, col2, target, model, probabilities=False):
# Get bounds
x_min, x_max = data[col1].min(), data[col1].max()
y_min, y_max = data[col2].min(), data[col2].max()
# Create a mesh
xx, yy = np.meshgrid(np.arange(x_min, x_max,0.5), np.arange(y_min, y_max,0.5))
meshed_data = pd.DataFrame(np.c_[xx.ravel(), yy.ravel()])
tdf = data[[col1, col2]]
model.fit(tdf, target)
if probabilities:
Z = model.predict(meshed_data).reshape(xx.shape)
else:
Z = model.predict_proba(meshed_data)[:, 1].reshape(xx.shape)
plt.figure(figsize=[12,7])
plt.title("Decision surface")
plt.ylabel(col1)
plt.xlabel(col2)
if probabilities:
# Color-scale on the contour (surface = separator)
cs = plt.contourf(xx, yy, Z,cmap=plt.cm.coolwarm, alpha=0.4)
else:
# Only a curve/line on the contour (surface = separator)
cs = plt.contourf(xx, yy, Z, levels=[-1,0,1],cmap=plt.cm.coolwarm, alpha=0.4)
color = ["blue" if t == 0 else "red" for t in target]
plt.scatter(data[col1], data[col2], color=color )
plt.show()
tree_depth=1
Decision_Surface(mpg_df[predictor_cols], "acceleration", "weight", mpg_df["is_high_mpg"], DecisionTreeClassifier(max_depth=tree_depth, criterion="entropy"), True)
```
How good is our model? Let's compute accuracy, the percent of times where we correctly identified that a car was high MPG.
```
from sklearn import metrics
print ( "Accuracy = %.3f" % (metrics.accuracy_score(decision_tree.predict(mpg_df[predictor_cols]), mpg_df["is_high_mpg"])) )
```
What are some other ways we could classify the data? Last class we used linear regression, let's take a look to see how that partitions the data
```
from sklearn import linear_model
import warnings
warnings.filterwarnings('ignore')
Decision_Surface(mpg_df[predictor_cols], "acceleration", "weight", mpg_df["is_high_mpg"], linear_model.Lasso(alpha=0.01), True)
```
## Decision Tree Regression
Recall our problem from last time, trying to predict the real-valued MPG for each car. In data science, problems where one tries to predict a real-valued number are known as regression. As with classification, much of the intuition for splitting data based on values of known observables applies:
```
from mpl_toolkits.mplot3d import Axes3D
def plot_regression_data(df, info_col_1, info_col_2, target_column):
# Make the plot square
plt.rcParams['figure.figsize'] = [12.0, 8.0]
fig = plt.figure()
ax = fig.gca(projection='3d')
ax.plot_trisurf(df[info_col_1], df[info_col_2], df[target_column], cmap=plt.cm.viridis, linewidth=0.2)
ax.set_xlabel(info_col_1)
ax.set_ylabel(info_col_2)
ax.set_zlabel(target_column);
ax.view_init(60, 45)
plt.show()
plot_regression_data(mpg_df, "acceleration", "weight", "mpg")
```
At a high level, one could imagine splitting the data recursively, assigning an estimated MPG to each side of the split. On more thoughtful reflection, some questions emerge:
- how do we predict a real number at a leaf node, given the examples that "filter" to that node?
- how do we assess the effectiveness of a particular split?
As with decision tree classification, there are many valid answers to both of these questions. A typical approach involves collecting all examples that filter to a leaf, computing their mean target value, and using this as the prediction. The effectiveness of a split can then be measured by computing the mean squared difference between the true values and this prediction.
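As a sketch of that idea (not in the original notebook), the regression analogue of the `information_gain` function above can be written as the reduction in count-weighted variance around the leaf means; the function name and the example threshold below are illustrative.
```
# Hypothetical regression analogue of information_gain():
# the reduction in count-weighted variance around the leaf means after a split.
# Assumes both sides of the split are non-empty.
def variance_reduction(df, info_column, target_column, threshold):
    above = df[df[info_column] > threshold][target_column]
    below = df[df[info_column] <= threshold][target_column]
    parent_var = df[target_column].var(ddof=0)
    child_var = (above.var(ddof=0) * above.shape[0] + below.var(ddof=0) * below.shape[0]) / float(df.shape[0])
    return parent_var - child_var

print(variance_reduction(mpg_df, "weight", "mpg", 3000))
```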
As before, we can easily experiment with decision tree regression models using sklearn:
```
from sklearn.tree import DecisionTreeRegressor
regressor = DecisionTreeRegressor(max_depth=1, criterion="mse") # note the use of mse (mean squared error) as a criterion
regressor.fit(mpg_df[predictor_cols], mpg_df["mpg"])
visualize_tree(regressor, predictor_cols, ["n", "y"])
```
As before, we can also view the "regression surface"
```
def Regression_Surface(data, col1, col2, target, model):
# Get bounds
x_min, x_max = data[col1].min(), data[col1].max()
y_min, y_max = data[col2].min(), data[col2].max()
# Create a mesh
xx, yy = np.meshgrid(np.arange(x_min, x_max,0.5), np.arange(y_min, y_max,0.5))
meshed_data = pd.DataFrame(np.c_[xx.ravel(), yy.ravel()])
tdf = data[[col1, col2]]
model.fit(tdf, target)
Z = model.predict(meshed_data).reshape(xx.shape)
plt.figure(figsize=[12,7])
plt.title("Decision surface")
plt.ylabel(col1)
plt.xlabel(col2)
cs = plt.contourf(xx, yy, Z, alpha=0.4, cmap=plt.cm.coolwarm)
plt.scatter(data[col1], data[col2], c=target, cmap=plt.cm.coolwarm)
plt.show()
tree_depth=1
Regression_Surface(mpg_df[predictor_cols], "acceleration", "weight", mpg_df["mpg"], DecisionTreeRegressor(max_depth=tree_depth, criterion="mse"))
```
Let's also take a look using linear regression!
```
Regression_Surface(mpg_df[predictor_cols], "acceleration", "weight", mpg_df["mpg"], linear_model.LinearRegression())
```
How about a more complicated model? Let's try random forest regression!
```
from sklearn.ensemble import RandomForestRegressor
Regression_Surface(mpg_df[predictor_cols], "acceleration", "weight", mpg_df["mpg"], RandomForestRegressor(n_estimators=10))
```
## Distinction of solid liquid atoms and clustering
In this example, we will take one snapshot from a molecular dynamics simulation which has a solid cluster in liquid. The task is to identify solid atoms and cluster them. More details about the method can be found [here](https://pyscal.readthedocs.io/en/latest/solidliquid.html).
The first step is, of course, importing all the necessary modules. For visualisation, we will use [Ovito](https://www.ovito.org/).

The above image shows a visualisation of the system using Ovito. Importing modules,
```
import pyscal.core as pc
```
Now we will set up a System with this input file, and calculate neighbors. Here we will use a cutoff method to find neighbors. More details about finding neighbors can be found [here](https://pyscal.readthedocs.io/en/latest/nearestneighbormethods.html#).
```
sys = pc.System()
sys.read_inputfile('cluster.dump')
sys.find_neighbors(method='cutoff', cutoff=3.63)
```
Once we compute the neighbors, the next step is to find solid atoms. This can be done using [System.find_solids](https://docs.pyscal.org/en/latest/pyscal.html#pyscal.core.System.find_solids) method. There are few parameters that can be set, which can be found in detail [here](https://docs.pyscal.org/en/latest/pyscal.html#pyscal.core.System.find_solids).
```
sys.find_solids(bonds=6, threshold=0.5, avgthreshold=0.6, cluster=False)
```
The above statement found all the solid atoms. Solid atoms can be identified by the value of the `solid` attribute. For that we first get the atom objects and select those with `solid` value as True.
```
atoms = sys.atoms
solids = [atom for atom in atoms if atom.solid]
len(solids)
```
There are 202 solid atoms in the system. In order to visualise them in Ovito, we first need to write them out to a trajectory file. This can be done with the help of the [to_file](https://docs.pyscal.org/en/latest/pyscal.html#pyscal.core.System.to_file) method of System. This method can save any attribute of the atom or any Steinhardt parameter value.
```
sys.to_file('sys.solid.dat', custom = ['solid'])
```
We can now visualise this file in Ovito. After opening the file in Ovito, the modifier [compute property](https://ovito.org/manual/particles.modifiers.compute_property.html) can be selected. The `Output property` should be `selection`, and in the expression field `solid==0` can be entered to select all the non-solid atoms. The modifier [delete selected particles](https://ovito.org/manual/particles.modifiers.delete_selected_particles.html) can then be applied to delete all the non-solid particles. The system after removing all the liquid atoms is shown below.

### Clustering algorithm
You can see that there is a cluster of atoms. The clustering functions that pyscal offers help in this regard. If `find_solids` is called with `cluster=True`, the clustering is carried out. Since we used `cluster=False` above, we will rerun the function
```
sys.find_solids(bonds=6, threshold=0.5, avgthreshold=0.6, cluster=True)
```
You can see that the above function call returned the number of atoms belonging to the largest cluster as an output. In order to extract atoms that belong to the largest cluster, we can use the `largest_cluster` attribute of the atom.
```
atoms = sys.atoms
largest_cluster = [atom for atom in atoms if atom.largest_cluster]
len(largest_cluster)
```
The value matches that given by the function. Once again we will save this information to a file and visualise it in Ovito.
```
sys.to_file('sys.cluster.dat', custom = ['solid', 'largest_cluster'])
```
The system visualised in Ovito following similar steps as above is shown below.

It is clear from the image that the largest cluster of solid atoms was successfully identified. Clustering can be done over any property. The following example with the same system will illustrate this.
## Clustering based on a custom property
In pyscal, clustering can be done based on any property. The following example illustrates this. To find the clusters based on a custom property, the [System.cluster_atoms](https://docs.pyscal.org/en/latest/pyscal.html#pyscal.core.System.cluster_atoms) method has to be used. The simulation box shown above has its centre roughly at (25, 25, 25). For the custom clustering, we will cluster all atoms within a distance of 10 from this rough centre of the box. Let us define a function that checks the above condition.
```
def check_distance(atom):
#get position of atom
pos = atom.pos
#calculate distance from (25, 25, 25)
dist = ((pos[0]-25)**2 + (pos[1]-25)**2 + (pos[2]-25)**2)**0.5
#check if dist < 10
return (dist <= 10)
```
The above function takes an Atom object as its argument and returns True or False depending on a condition; these are the two requirements such a function has to satisfy. Now we can pass this function to `cluster_atoms`. First, set up the system and find the neighbors.
```
sys = pc.System()
sys.read_inputfile('cluster.dump')
sys.find_neighbors(method='cutoff', cutoff=3.63)
```
Now cluster
```
sys.cluster_atoms(check_distance)
```
There are 242 atoms in the cluster! Once again we can check this, save it to a file and visualise it in Ovito.
```
atoms = sys.atoms
largest_cluster = [atom for atom in atoms if atom.largest_cluster]
len(largest_cluster)
sys.to_file('sys.dist.dat', custom = ['solid', 'largest_cluster'])
```

This example illustrates that any property can be used to cluster the atoms!
This script loads behavioral mice data (from `biasedChoiceWorld` protocol and, separately, the last three sessions of training) only from mice that pass a given (stricter) training criterion. For the `biasedChoiceWorld` protocol, only sessions achieving the `trained_1b` and `ready4ephysrig` training status are collected.
The data are slightly reformatted and saved as `.csv` files.
```
import datajoint as dj
dj.config['database.host'] = 'datajoint.internationalbrainlab.org'
from ibl_pipeline import subject, acquisition, action, behavior, reference, data
from ibl_pipeline.analyses.behavior import PsychResults, SessionTrainingStatus
from ibl_pipeline.utils import psychofit as psy
from ibl_pipeline.analyses import behavior as behavior_analysis
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import os
myPath = r"C:\Users\Luigi\Documents\GitHub\ibl-changepoint\data" # Write here your data path
os.chdir(myPath)
# Get list of mice that satisfy given training criteria (stringent trained_1b)
# Check query from behavioral paper:
# https://github.com/int-brain-lab/paper-behavior/blob/master/paper_behavior_functions.py
subj_query = (subject.Subject * subject.SubjectLab * reference.Lab * subject.SubjectProject
& 'subject_project = "ibl_neuropixel_brainwide_01"').aggr(
(acquisition.Session * behavior_analysis.SessionTrainingStatus())
# & 'training_status="trained_1a" OR training_status="trained_1b"',
# & 'training_status="trained_1b" OR training_status="ready4ephysrig"',
& 'training_status="trained_1b"',
'subject_nickname', 'sex', 'subject_birth_date', 'institution',
date_trained='min(date(session_start_time))')
subjects = (subj_query & 'date_trained < "2019-09-30"')
mice_names = sorted(subjects.fetch('subject_nickname'))
print(mice_names)
sess_train = ((acquisition.Session * behavior_analysis.SessionTrainingStatus) &
'task_protocol LIKE "%training%"' & 'session_start_time < "2019-09-30"')
sess_stable = ((acquisition.Session * behavior_analysis.SessionTrainingStatus) &
'task_protocol LIKE "%biased%"' & 'session_start_time < "2019-09-30"' &
('training_status="trained_1b" OR training_status="ready4ephysrig"'))
stable_mice_names = list()
# Perform at least this number of sessions
MinSessionNumber = 4
def get_mouse_data(df):
position_deg = 35. # Stimuli appear at +/- 35 degrees
# Create new dataframe
datamat = pd.DataFrame()
datamat['trial_num'] = df['trial_id']
datamat['session_num'] = np.cumsum(df['trial_id'] == 1)
datamat['stim_probability_left'] = df['trial_stim_prob_left']
signed_contrast = df['trial_stim_contrast_right'] - df['trial_stim_contrast_left']
datamat['contrast'] = np.abs(signed_contrast)
datamat['position'] = np.sign(signed_contrast)*position_deg
datamat['response_choice'] = df['trial_response_choice']
datamat.loc[df['trial_response_choice'] == 'CCW','response_choice'] = 1
datamat.loc[df['trial_response_choice'] == 'CW','response_choice'] = -1
datamat.loc[df['trial_response_choice'] == 'No Go','response_choice'] = 0
datamat['trial_correct'] = np.double(df['trial_feedback_type']==1)
datamat['reaction_time'] = df['trial_response_time'] - df['trial_stim_on_time'] # double-check
# Since some trials have zero contrast, need to compute the alleged position separately
datamat.loc[(datamat['trial_correct'] == 1) & (signed_contrast == 0),'position'] = \
datamat.loc[(datamat['trial_correct'] == 1) & (signed_contrast == 0),'response_choice']*position_deg
datamat.loc[(datamat['trial_correct'] == 0) & (signed_contrast == 0),'position'] = \
datamat.loc[(datamat['trial_correct'] == 0) & (signed_contrast == 0),'response_choice']*(-position_deg)
return datamat
# Loop over all mice
for mouse_nickname in mice_names:
mouse_subject = {'subject_nickname': mouse_nickname}
# Get mouse data for biased sessions
behavior_stable = (behavior.TrialSet.Trial & (subject.Subject & mouse_subject)) \
* sess_stable.proj('session_uuid','task_protocol','session_start_time','training_status') * subject.Subject.proj('subject_nickname') \
* subject.SubjectLab.proj('lab_name')
df = pd.DataFrame(behavior_stable.fetch(order_by='subject_nickname, session_start_time, trial_id', as_dict=True))
if len(df) > 0: # The mouse has performed in at least one stable session with biased blocks
datamat = get_mouse_data(df)
# Take mice that have performed a minimum number of sessions
if np.max(datamat['session_num']) >= MinSessionNumber:
# Should add 'N' to mice names that start with numbers?
# Save dataframe to CSV file
filename = mouse_nickname + '.csv'
datamat.to_csv(filename,index=False)
stable_mice_names.append(mouse_nickname)
# Get mouse last sessions of training data
behavior_train = (behavior.TrialSet.Trial & (subject.Subject & mouse_subject)) \
* sess_train.proj('session_uuid','task_protocol','session_start_time') * subject.Subject.proj('subject_nickname') \
* subject.SubjectLab.proj('lab_name')
df_train = pd.DataFrame(behavior_train.fetch(order_by='subject_nickname, session_start_time, trial_id', as_dict=True))
datamat_train = get_mouse_data(df_train)
Nlast = np.max(datamat_train['session_num']) - 3
datamat_final = datamat_train[datamat_train['session_num'] > Nlast]
# Save final training dataframe to CSV file
filename = mouse_nickname + '_endtrain.csv'
datamat_final.to_csv(filename,index=False)
print(stable_mice_names)
len(stable_mice_names)
```
<h1>CI Midterm<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#Q1-Simple-Linear-Regression" data-toc-modified-id="Q1-Simple-Linear-Regression-1">Q1 Simple Linear Regression</a></span></li><li><span><a href="#Q2-Fuzzy-Linear-Regression" data-toc-modified-id="Q2-Fuzzy-Linear-Regression-2">Q2 Fuzzy Linear Regression</a></span></li><li><span><a href="#Q3-Support-Vector-Regression" data-toc-modified-id="Q3-Support-Vector-Regression-3">Q3 Support Vector Regression</a></span></li><li><span><a href="#Q4-Single-layer-NN" data-toc-modified-id="Q4-Single-layer-NN-4">Q4 Single-layer NN</a></span><ul class="toc-item"><li><span><a href="#First-two-iterations-illustration" data-toc-modified-id="First-two-iterations-illustration-4.1">First two iterations illustration</a></span></li><li><span><a href="#Code" data-toc-modified-id="Code-4.2">Code</a></span></li></ul></li><li><span><a href="#Q5-Two-layer-NN" data-toc-modified-id="Q5-Two-layer-NN-5">Q5 Two-layer NN</a></span><ul class="toc-item"><li><span><a href="#First-two-iterations-illustration" data-toc-modified-id="First-two-iterations-illustration-5.1">First two iterations illustration</a></span></li><li><span><a href="#Code" data-toc-modified-id="Code-5.2">Code</a></span></li></ul></li><li><span><a href="#Q6-Re-do-Q1-Q5" data-toc-modified-id="Q6-Re-do-Q1-Q5-6">Q6 Re-do Q1-Q5</a></span><ul class="toc-item"><li><span><a href="#Simple-Linear-Regression" data-toc-modified-id="Simple-Linear-Regression-6.1">Simple Linear Regression</a></span></li><li><span><a href="#Fuzzy-Linear-Regression" data-toc-modified-id="Fuzzy-Linear-Regression-6.2">Fuzzy Linear Regression</a></span></li><li><span><a href="#Support-Vector-Regression" data-toc-modified-id="Support-Vector-Regression-6.3">Support Vector Regression</a></span></li><li><span><a href="#Single-layer-NN" data-toc-modified-id="Single-layer-NN-6.4">Single-layer NN</a></span><ul class="toc-item"><li><span><a href="#First-two-iterations-illustration" data-toc-modified-id="First-two-iterations-illustration-6.4.1">First two iterations illustration</a></span></li><li><span><a href="#Code" data-toc-modified-id="Code-6.4.2">Code</a></span></li></ul></li><li><span><a href="#Two-layer-NN" data-toc-modified-id="Two-layer-NN-6.5">Two-layer NN</a></span><ul class="toc-item"><li><span><a href="#First-two-iterations-illustration" data-toc-modified-id="First-two-iterations-illustration-6.5.1">First two iterations illustration</a></span></li><li><span><a href="#Code" data-toc-modified-id="Code-6.5.2">Code</a></span></li></ul></li></ul></li><li><span><a href="#Q7-Discussion" data-toc-modified-id="Q7-Discussion-7">Q7 Discussion</a></span><ul class="toc-item"><li><span><a href="#Discussion-of-Convergence-Issue" data-toc-modified-id="Discussion-of-Convergence-Issue-7.1">Discussion of Convergence Issue</a></span></li></ul></li><li><span><a href="#Q8-Bonus-Question" data-toc-modified-id="Q8-Bonus-Question-8">Q8 Bonus Question</a></span><ul class="toc-item"><li><span><a href="#Simple-Linear-Regression" data-toc-modified-id="Simple-Linear-Regression-8.1">Simple Linear Regression</a></span></li><li><span><a href="#Fuzzy-Linear-Regression" data-toc-modified-id="Fuzzy-Linear-Regression-8.2">Fuzzy Linear Regression</a></span></li><li><span><a href="#Support-Vector-Regression" data-toc-modified-id="Support-Vector-Regression-8.3">Support Vector Regression</a></span></li><li><span><a href="#Single-layer-NN" data-toc-modified-id="Single-layer-NN-8.4">Single-layer NN</a></span></li></ul></li></ul></div>
## Q1 Simple Linear Regression
First, the training data has been visualized as below.
```
%matplotlib inline
import numpy as np
import pandas as pd
import cvxpy as cp
import matplotlib.pyplot as plt
ar = np.array([[1, 1, 1, 1, 1, 1], # intercept
[1, 2, 3, 4, 5, 6], # x
[1, 2, 3, 4, 5, 6]]) # y
# plot the dot points
plt.scatter(x=ar[1], y=ar[2], c='blue')
plt.title('Visualization of training observations')
plt.axis('scaled')
plt.show()
```
The data has been processed and the optimization problem (least sum of squares) has been formulated. The estimate of $a$ (the slope) is very close to 1 and $b$ (intercept) is very close to 0. The fitted line has been plotted above the training set as well.
```
# Data preprocessing
X_lp = ar[[0, 1], :].T # transpose the array before modeling
y_lp = ar[2].T
# Define and solve the CVXPY problem.
beta = cp.Variable(X_lp.shape[1]) # return num of cols, 2 in total
cost = cp.sum_squares(X_lp * beta - y_lp) # define cost function
obj = cp.Minimize(cost) # define objective function
prob = cp.Problem(obj)
prob.solve(solver=cp.CPLEX, verbose=False)
# print("status:", prob.status)
print("\nThe optimal value of loss is:", prob.value)
print("\nThe estimated of a (slope) is:", beta.value[1],
"\nThe estimate of b (intercept) is:", beta.value[0])
x = np.linspace(0, 10, 100)
y = beta.value[1] * x + beta.value[0]
plt.close('all')
plt.plot(x, y, c='red', label='y = ax + b')
plt.title('Fitted line using simple LR')
plt.legend(loc='upper left')
plt.scatter(x=ar[1], y=ar[2], c='blue')
plt.axis('scaled')
plt.show()
```
## Q2 Fuzzy Linear Regression
Same as HW2, the optimization problem has been formulated as below. Here I pick the threshold $\alpha$ as $0.5$ for the spread calculation. Similar to Q1, the estimate of $A_1$ (the slope) is 1 and $A_0$ (intercept) is 0. The spreads of $A_1$ and $A_0$ have both been calculated. As expected, both spreads are 0 since the regression line fits the training data perfectly and there is no need for spreads to cover any errors between the estimate $\hat{y}$ and the true values $y$.
The fitted line has been plotted above the training set as well.
```
# Define threshold h (it has same meaning as the alpha in alpha-cut). Higher the h, wider the spread.
h = 0.5
# Define and solve the CVXPY problem.
c = cp.Variable(X_lp.shape[1]) # for spread variables, A0 and A1
alpha = cp.Variable(X_lp.shape[1]) # for center/core variables, A0 and A1
cost = cp.sum(X_lp * c) # define cost function
obj = cp.Minimize(cost) # define objective function
constraints = [c >= 0,
y_lp <= (1 - h) * abs(X_lp) * c + X_lp * alpha, # abs operate on each elements of X_lp
-y_lp <= (1 - h) * abs(X_lp) * c - X_lp * alpha]
prob = cp.Problem(obj, constraints)
prob.solve(solver=cp.CPLEX, verbose=False)
# print("status:", prob.status)
print("\nThe optimal value of loss is:", prob.value)
print("\nThe center of A1 (slope) is:", alpha.value[1],
"\nThe spread of A1 (slope) is:", c.value[1],
"\nThe center of A0 (intercept) is:", alpha.value[0],
"\nThe spread of A0 (intercept) is:", c.value[0])
x = np.linspace(0, 10, 100)
y = alpha.value[1] * x + alpha.value[0]
plt.close('all')
plt.plot(x, y, c='red', label='y = A1x + A0')
plt.title('Fitted line using Fuzzy LR')
plt.legend(loc='upper left')
plt.scatter(x=ar[1], y=ar[2], c='blue')
plt.axis('scaled')
plt.show()
```
## Q3 Support Vector Regression
In the course lecture, it was mentioned that the objective function of SVR is to ***minimize the sum of squared errors while seeking flatness of the hyperplane.*** In $\epsilon$-SV regression, our goal is to find a function $f(x)$ that has at most $\epsilon$ deviation from the actually obtained targets $y_i$ for all the training data, and at the same time is as flat as possible. Flatness in this case means that one seeks a small $w$, and the approach here is to minimize its L2-norm. The problem can be written as a convex optimization problem:

Sometimes the convex optimization problem does not yield a feasible solution. We also may want to allow for some errors. Similar to the "soft margin" loss function in SVM, we introduce slack variables $ξ_i$, $ξ_i^*$ to cope with otherwise infeasible constraints of the optimization problem:

Here the constant $C$ should be $>0$ and determines the trade-off between the flatness of $f(x)$ and the amount up to which deviations larger than $\epsilon$ are tolerated. The optimization problem is formulated with slack variables and in the program below, I defined $C$ as $\frac{1}{N}$ where $N=6$ is the # of observations in the training set. The $\epsilon$ here has been set to 0.
From the output below, the estimated $w$ is very close to 1 and $b$ is very close to 0.
```
# The constant C, defines the trade-off between the flatness of f and the amount up to which deviations larger than ε are tolerated.
# When C gets bigger, the margin get softer. Here C is defined as 1/N. N is the # of observations.
C = 1 / len(ar[1])
epsilon = 0 # For this ε-SVR problem set ε=0
# Define and solve the CVXPY problem.
bw = cp.Variable(X_lp.shape[1]) # for b and w parameters in SVR. bw[0]=b, bw[1]=w
epsilon1 = cp.Variable(X_lp.shape[0]) # for slack variables ξi
epsilon2 = cp.Variable(X_lp.shape[0]) # for slack variables ξ*i
cost = 1 / 2 * bw[1] ** 2 + C * cp.sum(epsilon1 + epsilon2) # define cost function
obj = cp.Minimize(cost) # define objective function
constraints = [epsilon1 >= 0,
epsilon2 >= 0,
y_lp <= X_lp * bw + epsilon + epsilon1,
-y_lp <= -(X_lp * bw) + epsilon + epsilon2]
prob = cp.Problem(obj, constraints)
prob.solve(solver=cp.CPLEX, verbose=False)
# print("status:", prob.status)
print("\nThe estimate of w is:", bw.value[1],
"\nThe estimate of b is:", bw.value[0], )
```
The fitted line has been plotted above the training set as well:
```
x = np.linspace(0, 10, 100)
y = bw.value[1] * x + bw.value[0]
plt.close('all')
plt.plot(x, y, c='red', label='y = wx + b')
plt.title('Fitted line using SVR')
plt.legend(loc='upper left')
plt.scatter(x=ar[1], y=ar[2], c='blue')
plt.axis('scaled')
plt.show()
```
## Q4 Single-layer NN
### First two iterations illustration
From the NN architecture on Lecture 7 page 13, the network output $a$ can be denoted as:
$$a=f(x)=f(wp+b)$$
where
$$x=wp+b\quad f(x)=5x\quad \frac{\partial f}{\partial x}=5$$
Since $b=1$,
$$a=f(x)=f(wp+b)=5(wp+1)$$
Set the loss function $E$ as:
$$ E=\sum_{i=1}^N \frac{1}{2}(T_i-a_i)^2 $$
where $T_i$ is the target value for each input $i$ and $N$ is the number of observations in the training set.
We can find the gradient for $w$ by:
$$\frac{\partial E}{\partial w}=\frac{\partial E}{\partial a}\frac{\partial a}{\partial x}\frac{\partial x}{\partial w}$$
**For the 1st iteration**, with initial value $w=10$:
$$
\frac{\partial E}{\partial a}=a-T=5(wp_i+1)-T_i\\
\frac{\partial f}{\partial x}=5$$
$$\frac{\partial x_1}{\partial w}=p_1=1$$
$$\vdots$$
$$\frac{\partial x_6}{\partial w}=p_6=6$$
For $i=1$,
$$
\frac{\partial E}{\partial a}=a_i-T_i=5(wp_i+1)-T_i=5(10*1+1)-1=54\\
\frac{\partial E}{\partial w}=54*5*1$$
For $i=2$,
$$
\frac{\partial E}{\partial a}=a_i-T_i=5(wp_i+1)-T_i=5(10*2+1)-2=103\\
\frac{\partial E}{\partial w}=103*5*2$$
For $i=3$,
$$
\frac{\partial E}{\partial a}=a_i-T_i=5(wp_i+1)-T_i=5(10*3+1)-3=152\\
\frac{\partial E}{\partial w}=152*5*3$$
For $i=4$,
$$
\frac{\partial E}{\partial a}=a_i-T_i=5(wp_i+1)-T_i=5(10*4+1)-4=201\\
\frac{\partial E}{\partial w}=201*5*4$$
For $i=5$,
$$
\frac{\partial E}{\partial a}=a_i-T_i=5(wp_i+1)-T_i=5(10*5+1)-5=250\\
\frac{\partial E}{\partial w}=250*5*5$$
For $i=6$,
$$
\frac{\partial E}{\partial a}=a_i-T_i=5(wp_i+1)-T_i=5(10*6+1)-6=299\\
\frac{\partial E}{\partial w}=299*5*6$$
The sum of gradient for the batch training is:
$$\sum_{i}(\frac{\partial E}{\partial w})=(54*1+103*2+152*3+201*4+250*5+299*6)*5=22820
$$
Average the sum of gradient by $N=6$ and the step size (learning rate=0.1) can be calculated as:
$$s_1=0.1*\frac{\sum_{i}(\frac{\partial E}{\partial w})}{N}=380.333
$$
The new $w$ and output $a$ are calculated:
$$w=10-380.333=-370.333\\
a=[-1846.667,-3698.333,-5550,-7401.667,-9253.333, -11105]
$$
**For the 2nd iteration:**
For $i=1$,
$$
\frac{\partial E}{\partial a}=a_i-T_i=5(wp_i+1)-T_i=5(-370.333*1+1)-1=-1847.667\\
\frac{\partial E}{\partial w}=-1847.667*5*1$$
For $i=2$,
$$
\frac{\partial E}{\partial a}=a_i-T_i=5(wp_i+1)-T_i=5(-370.333*2+1)-2=-3700.333\\
\frac{\partial E}{\partial w}=-3700.333*5*2$$
For $i=3$,
$$
\frac{\partial E}{\partial a}=a_i-T_i=5(wp_i+1)-T_i=5(-370.333*3+1)-3=-5553\\
\frac{\partial E}{\partial w}=-5553*5*3$$
For $i=4$,
$$
\frac{\partial E}{\partial a}=a_i-T_i=5(wp_i+1)-T_i=5(-370.333*4+1)-4=-7405.667\\
\frac{\partial E}{\partial w}=-7405.667*5*4$$
For $i=5$,
$$
\frac{\partial E}{\partial a}=a_i-T_i=5(wp_i+1)-T_i=5(-370.333*5+1)-5=-9258.333\\
\frac{\partial E}{\partial w}=-9258.333*5*5$$
For $i=6$,
$$
\frac{\partial E}{\partial a}=a_i-T_i=5(wp_i+1)-T_i=5(-370.333*6+1)-6=-11111\\
\frac{\partial E}{\partial w}=-11111*5*6$$
The sum of gradient for the batch training is:
$$\sum_{i}(\frac{\partial E}{\partial w})=(-1847.667*1+-3700.333*2+-5553*3+-7405.667*4+-9258.333*5+-11111*6)*5=-842438.333
$$
Average the sum of gradient by $N=6$ and the step size (learning rate=0.1) can be calculated as:
$$s_1=0.1*\frac{\sum_{i}(\frac{\partial E}{\partial w})}{N}=-14040.639
$$
The new $w$ and output $a$ are calculated:
$$w=-370.333-(-14040.639)=13670.306\\
a=[68356.528, 136708.056, 205059.583, 273411.111, 341762.639, 410114.167]
$$
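Before moving to the full batch-training code below, the two hand-computed steps above can be verified numerically with a short snippet (this check is only an illustration, not part of the required solution; it assumes the same setup as above: $p=T=[1,\dots,6]$, $b=1$, $f(x)=5x$, learning rate 0.1 and initial $w=10$).
```
# Quick numerical check of the two hand-computed batch-gradient steps above
p = np.arange(1, 7)
T = np.arange(1, 7)
w, lr = 10.0, 0.1
for it in (1, 2):
    grad = np.mean((5 * (w * p + 1) - T) * 5 * p)   # dE/dw averaged over the batch
    w = w - lr * grad
    print("iteration", it, ": step =", round(lr * grad, 3), ", new w =", round(w, 3))
```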
### Code
**We can tell from the above that throughout the first 2 iterations, the updated fit $a$ moves farther and farther away from the actual values. This is because the learning rate of 0.1 was set too large, which causes the result to oscillate and prevents convergence.** Further discussion has been made in Q7 to explore a proper learning rate in this case.
From the code below, after 30 iterations the loss function value keeps growing and the algorithm fails to converge, which further confirms the findings.
```
def single_layer_NN(lr, w, maxiteration):
"""lr - learning rate\n
w - initial value of w\n
maxiteration - define # of max iteration """
E0 = sum(0.5 * np.power((y_lp - 5 * (w * X_lp[:, 1] + 1)), 2)) # initialize Loss, before 1st iteration
for i in range(maxiteration):
if i > 0: # Starting 2nd iteration, E1 value give to E0
E0 = E1 # Loss before iteration
print("Iteration=", i, ",", "Loss value=", E0)
gradient = np.mean((5 * (w * X_lp[:, 1] + 1) - y_lp) * 5 * X_lp[:, 1]) # calculate gradient
step = gradient * lr # calculate step size
w = w - step # refresh the weight
E1 = sum(0.5 * np.power((5 * (w * X_lp[:, 1] + 1) - y_lp), 2)) # Loss after iteration
a = 5 * (w * X_lp[:, 1] + 1) # the refreshed output
if abs(E0 - E1) <= 0.0001:
print('Break out of the loop and end at Iteration=', i,
'\nThe value of loss is:', E1,
'\nThe value of w is:', w)
break
return w, a, gradient
w, a, gradient = single_layer_NN(lr=0.1, w=10, maxiteration=30)
```
## Q5 Two-layer NN
### First two iterations illustration

The above structure will be used to model Q5, with $b_1=b_2=1$ and initial values $w_1=w_2=10$. For $f_1$, the activation function is the sigmoid activation function. Since the sample data implies a linear relationship, for $f_2$ a linear activation function (specifically, an **identity activation function**) has been chosen. The loss function $E$ is the same as in Q4:
$$
E=\sum_{i=1}^N \frac{1}{2}(T_i-a_2)^2
$$
where $T_i$ is the target value for each input $i$ and $N$ is the number of observations in the training set.
The output $a_1$ and $a_2$ can be denoted as:
$$
a_1=f_1(w_1p+b) \quad a_2=f_2(w_2a_1+b)
$$
where
$$
f_1(x)=\frac{1}{1+e^{-x}} \quad \frac{\partial f_1}{\partial x}=f_1(1-f_1)\\
and \quad f_2(x)=x \quad \frac{\partial f_2}{\partial x}=1
$$
We can find the gradient for $w_1$ and $w_2$ by:
$$
\frac{\partial E}{\partial w_2}=\frac{\partial E}{\partial a_2}\frac{\partial a_2}{\partial n_2}\frac{\partial n_2}{\partial w_2}=(w_2a_1+b-T)*1*a_1=(w_2a_1+1-T)a_1
\\
\frac{\partial E}{\partial w_1}=\frac{\partial E}{\partial a_2}\frac{\partial a_2}{\partial a_1}\frac{\partial a_1}{\partial n_1}\frac{\partial n_1}{\partial w_1}=(w_2a_1+b-T)*w_2*a_1(1-a_1)*p\\=\frac{\partial E}{\partial w_2}*w_2*(1-a_1)*p
$$
where
$$
a_1=f_1(w_1p+b)=\frac{1}{1+e^{-(w_1p+1)}}
$$
**We can see that the gradient of $w_1$ can be calculated from the gradient of $w_2$ and the gradient of both weights ($w_1$ and $w_2$) only relate to the input and the initial values of the weights!**
**For the 1st iteration**,
$$
For\quad i=1, 2, 3, 4, 5, 6, \quad a_1=\frac{1}{1+e^{-(w_1p_i+1)}}\approx1\\
$$
For $i=1:$
$$
\frac{\partial E}{\partial a_2}=a_2-T_i=(w_2a_1+1)-T_i=(10*1+1)-1=10\\
\frac{\partial E}{\partial w_2}=10*1*1=10,\quad \frac{\partial E}{\partial w_1}=10*10*(1-1)*1=0
$$
For $i=2:$
$$
\frac{\partial E}{\partial a_2}=a_2-T_i=(w_2a_1+1)-T_i=(10*1+1)-2=9\\
\frac{\partial E}{\partial w_2}=9*1*1=9,\quad \frac{\partial E}{\partial w_1}=9*10*(1-1)*1=0
$$
For $i=3:$
$$
\frac{\partial E}{\partial a_2}=a_2-T_i=(w_2a_1+1)-T_i=(10*1+1)-3=8\\
\frac{\partial E}{\partial w_2}=8*1*1=8,\quad \frac{\partial E}{\partial w_1}=8*10*(1-1)*1=0
$$
For $i=4:$
$$
\frac{\partial E}{\partial a_2}=a_2-T_i=(w_2a_1+1)-T_i=(10*1+1)-4=7\\
\frac{\partial E}{\partial w_2}=7*1*1=7,\quad \frac{\partial E}{\partial w_1}=7*10*(1-1)*1=0
$$
For $i=5:$
$$
\frac{\partial E}{\partial a_2}=a_2-T_i=(w_2a_1+1)-T_i=(10*1+1)-5=6\\
\frac{\partial E}{\partial w_2}=6*1*1=6,\quad \frac{\partial E}{\partial w_1}=6*10*(1-1)*1=0
$$
For $i=6:$
$$
\frac{\partial E}{\partial a_2}=a_2-T_i=(w_2a_1+1)-T_i=(10*1+1)-6=5\\
\frac{\partial E}{\partial w_2}=5*1*1=5,\quad \frac{\partial E}{\partial w_1}=5*10*(1-1)*1=0
$$
The sum of gradient for the batch training is:
$$\sum_{i}(\frac{\partial E}{\partial w_1})=0$$
$$\sum_{i}(\frac{\partial E}{\partial w_2})=10+9+8+7+6+5=45$$
Average the sum of gradient by $N=6$ and the step size (learning rate=0.1) can be calculated as:
$$s_1=0.1*\frac{\sum_{i}(\frac{\partial E}{\partial w_1})}{N}=0$$
$$s_2=0.1*\frac{\sum_{i}(\frac{\partial E}{\partial w_2})}{N}=0.75$$
The new weights $w_1$, $w_2$ and outputs $a_1$ and $a_2$ can now be calculated. The values of $a_1$ and $a_2$ below apply to all 6 observations.
$$
w_1=w_1-s_1=10-0=10,\\
w_2=w_2-s_2=10-0.75=9.25\\
a_1=\frac{1}{1+e^{-(w_1p_i+1)}}\approx1, \quad i\in [1,2,3,4,5,6]\\
a_2=w_2a_1+b=9.25*1+1=10.25
$$
**For the 2nd iteration**,
$$
For\quad i=1, 2, 3, 4, 5, 6, \quad a_1=\frac{1}{1+e^{-(w_1p_i+1)}}\approx1\\
$$
For $i=1:$
$$
\frac{\partial E}{\partial a_2}=a_2-T_i=(w_2a_1+1)-T_i=(9.25*1+1)-1=9.25\\
\frac{\partial E}{\partial w_2}=9.25*1*1=9.25,\quad \frac{\partial E}{\partial w_1}=9.25*9.25*(1-1)*1=0
$$
For $i=2:$
$$
\frac{\partial E}{\partial a_2}=a_2-T_i=(w_2a_1+1)-T_i=(9.25*1+1)-2=8.25\\
\frac{\partial E}{\partial w_2}=8.25*1*1=8.25,\quad \frac{\partial E}{\partial w_1}=8.25*9.25*(1-1)*1=0
$$
For $i=3:$
$$
\frac{\partial E}{\partial a_2}=a_2-T_i=(w_2a_1+1)-T_i=(9.25*1+1)-3=7.25\\
\frac{\partial E}{\partial w_2}=7.25*1*1=7.25,\quad \frac{\partial E}{\partial w_1}=7.25*9.25*(1-1)*1=0
$$
For $i=4:$
$$
\frac{\partial E}{\partial a_2}=a_2-T_i=(w_2a_1+1)-T_i=(9.25*1+1)-4=6.25\\
\frac{\partial E}{\partial w_2}=6.25*1*1=6.25,\quad \frac{\partial E}{\partial w_1}=6.25*9.25*(1-1)*1=0
$$
For $i=5:$
$$
\frac{\partial E}{\partial a_2}=a_2-T_i=(w_2a_1+1)-T_i=(9.25*1+1)-5=5.25\\
\frac{\partial E}{\partial w_2}=5.25*1*1=5.25,\quad \frac{\partial E}{\partial w_1}=5.25*9.25*(1-1)*1=0
$$
For $i=6:$
$$
\frac{\partial E}{\partial a_2}=a_2-T_i=(w_2a_1+1)-T_i=(9.25*1+1)-6=4.25\\
\frac{\partial E}{\partial w_2}=4.25*1*1=4.25,\quad \frac{\partial E}{\partial w_1}=4.25*9.25*(1-1)*1=0
$$
The sum of gradient for the batch training is:
$$\sum_{i}(\frac{\partial E}{\partial w_1})=0$$
$$\sum_{i}(\frac{\partial E}{\partial w_2})=9.25+8.25+7.25+6.25+5.25+4.25=40.5$$
Average the sum of gradient by $N=6$ and the step size (learning rate=0.1) can be calculated as:
$$s_1=0.1*\frac{\sum_{i}(\frac{\partial E}{\partial w_1})}{N}=0$$
$$s_2=0.1*\frac{\sum_{i}(\frac{\partial E}{\partial w_2})}{N}=0.675$$
The new weights $w_1$, $w_2$ and outputs $a_1$ and $a_2$ can now be calculated; the values of $a_1$ and $a_2$ below apply to all 6 observations:
$$
w_1=w_1-s_1=10-0=10,\\
w_2=w_2-s_2=9.25-0.675=8.575\\
a_1=\frac{1}{1+e^{-(w_1p_i+1)}}\approx1, \quad i\in [1,2,3,4,5,6]\\
a_2=w_2a_1+b=8.575*1+1=9.575
$$
### Code
Below is the code to estimate all weights using batch training, with the stopping criterion being a change in the loss function of less than 0.0001. The iterations stop at Iteration 62 with $w_1=10$ and $w_2=2.511$; $w_1$ hardly changes throughout the iterations. The first 60 iteration results are not shown to keep the report concise.
```
def linear_activation_NN(C, lr, w1, w2, maxiteration):
# C - set the slope of the f2: f2(x)=Cx
# lr - learning rate
# w1 - initial value of w1
# w2 - initial value of w2
# maxiteration - define # of max iteration
a1 = 1 / (1 + np.exp(-(w1 * X_lp[:, 1] + 1))) # initialize output1 - a1
a2 = C * (w2 * a1 + 1) # initialize output2 - a2
E0 = sum(0.5 * np.power(y_lp - a2, 2)) # initialize Loss, before 1st iteration
for i in range(maxiteration):
if i > 0: # Starting 2nd iteration, E1 value will give to E0
E0 = E1 # Loss before iteration
# print("Iteration=", i, ",", "Loss value=", E0)
gradient_2 = np.mean((w2 * a1 + 1 - y_lp) * C * a1) # calculate gradient for w2
gradient_1 = np.mean(
(w2 * a1 + 1 - y_lp) * C * w2 * a1 * (1 - a1) * X_lp[:, 1]) # use BP to calculate gradient for w1
# gradient_1 = np.mean(gradient_2 * w2 * (1 - a1) * X_lp[:, 1])
step_1 = gradient_1 * lr # calculate step size
step_2 = gradient_2 * lr
w1 = w1 - step_1 # refresh w1
w2 = w2 - step_2 # refresh w2
a1 = 1 / (1 + np.exp(-(w1 * X_lp[:, 1] + 1))) # refresh a1
a2 = C * (w2 * a1 + 1) # refresh a2
E1 = sum(0.5 * np.power(y_lp - a2, 2)) # Loss after iteration
if abs(E0 - E1) <= 0.0001:
print('Break out of the loop and the iteration converge at Iteration=', i,
'\nThe value of loss is:', E1,
'\nThe value of w1 is:', w1,
'\nThe value of w2 is:', w2)
break
return w1, w2, a1, a2, gradient_1, gradient_2
w1, w2, a1, a2, gradient_1, gradient_2 = linear_activation_NN(C=1, lr=0.1, w1=10, w2=10, maxiteration=100)
```
Below is a plot of how the NN model fits the current sample data points.
```
# plot the fit
x = np.linspace(-4, 10, 100)
y = w2 * (1 / (1 + np.exp(-(w1 * x + 1)))) + 1
# plt.close('all')
plt.plot(x, y, c='red', label='y = f(w2 * a1 + b)')
plt.title('Fitted line using two-layer NN')
plt.legend(loc='upper left')
plt.scatter(x=ar[1], y=ar[2], c='blue')
plt.xlim((-5, 8))
plt.ylim((-2, 8))
plt.xlabel('x')
plt.ylabel('y')
plt.show()
```
## Q6 Re-do Q1-Q5
Two additional observations, (2, 3) and (3, 4), are added; below is a scatterplot showing what the data sample looks like.
```
ar = np.array([[1, 1, 1, 1, 1, 1, 1, 1], # intercept
[1, 2, 3, 4, 5, 6, 2, 3], # x
[1, 2, 3, 4, 5, 6, 3, 4]]) # y
# plot the dot points
plt.scatter(x=ar[1], y=ar[2], c='blue')
plt.title('Visualization of training observations')
plt.axis('scaled')
plt.show()
```
### Simple Linear Regression
A simple linear regression fit, similar to Q1, has been conducted as below. The estimated $slope=0.923$ and estimated $intercept=0.5$.
```
# Data preprocessing
X_lp = ar[[0, 1], :].T # transpose the array before modeling
y_lp = ar[2].T
# Define and solve the CVXPY problem.
beta = cp.Variable(X_lp.shape[1]) # return num of cols, 2 in total
cost = cp.sum_squares(X_lp * beta - y_lp) # define cost function
obj = cp.Minimize(cost) # define objective function
prob = cp.Problem(obj)
prob.solve(solver=cp.CPLEX, verbose=False)
# print("status:", prob.status)
print("\nThe optimal value of loss is:", prob.value)
print("\nThe estimated of a (slope) is:", beta.value[1],
"\nThe estimate of b (intercept) is:", beta.value[0])
```
The regression line has been plotted:
```
# Plot the fit
x = np.linspace(0, 10, 100)
y = beta.value[1] * x + beta.value[0]
plt.close('all')
plt.plot(x, y, c='red', label='y = ax + b')
plt.title('Fitted line using simple LR')
plt.legend(loc='upper left')
plt.scatter(x=ar[1], y=ar[2], c='blue')
plt.axis('scaled')
plt.show()
```
### Fuzzy Linear Regression
A fuzzy linear regression fit, similar to Q2, has been conducted as below. We can see that some spread was estimated for the intercept $A0$: the data can no longer be fit perfectly this time, so some spread is needed to cover the data points around the regression line.
```
# Define threshold h (it has same meaning as the alpha in alpha-cut). Higher the h, wider the spread.
h = 0.5
# Define and solve the CVXPY problem.
c = cp.Variable(X_lp.shape[1]) # for spread variables, A0 and A1
alpha = cp.Variable(X_lp.shape[1]) # for center/core variables, A0 and A1
cost = cp.sum(X_lp * c) # define cost function
obj = cp.Minimize(cost) # define objective function
constraints = [c >= 0,
y_lp <= (1 - h) * abs(X_lp) * c + X_lp * alpha, # abs operate on each elements of X_lp
-y_lp <= (1 - h) * abs(X_lp) * c - X_lp * alpha]
prob = cp.Problem(obj, constraints)
prob.solve(solver=cp.CPLEX, verbose=False)
# print("status:", prob.status)
print("\nThe optimal value of loss is:", prob.value)
print("\nThe center of A1 (slope) is:", alpha.value[1],
"\nThe spread of A1 (slope) is:", c.value[1],
"\nThe center of A0 (intercept) is:", alpha.value[0],
"\nThe spread of A0 (intercept) is:", c.value[0])
```
The regression line has been plotted, along with the fuzzy spread.
```
x = np.linspace(0, 10, 100)
y = alpha.value[1] * x + alpha.value[0]
plt.close('all')
plt.plot(x, y, c='red', label='y = A1x + A0')
y = (alpha.value[1] + c.value[1]) * x + alpha.value[0] + c.value[0]
plt.plot(x, y, '--g', label='Fuzzy Spread')
y = (alpha.value[1] - c.value[1]) * x + alpha.value[0] - c.value[0]
plt.plot(x, y, '--g')
plt.title('Fitted line using Fuzzy LR')
plt.legend(loc='upper left')
plt.scatter(x=ar[1], y=ar[2], c='blue')
plt.axis('scaled')
plt.show()
```
### Support Vector Regression
A support vector regression fit, similar to Q3, has been conducted as below. A simplified version of SVR is used, with $\epsilon$ set to 1:
$$
\text{minimize} \quad \frac{1}{2}||w||^2
$$
$$
\text{subject to}\quad\left\{
\begin{aligned}
y_i-(w \cdot x_i)-b&\le\epsilon\\
(w \cdot x_i)+b-y_i&\le\epsilon\\
\end{aligned}
\right.
$$
The fitted line and the hard margin have been plotted over the training set as well. The estimated $w=0.6$ and $b=1.4$.
```
# A simplified version without introducing the slack variables ξi and ξ*i
epsilon = 1
bw = cp.Variable(X_lp.shape[1]) # for b and w parameters in SVR. bw[0]=b, bw[1]=w
cost = 1 / 2 * bw[1] ** 2
obj = cp.Minimize(cost)
constraints = [
y_lp <= X_lp * bw + epsilon,
-y_lp <= -(X_lp * bw) + epsilon]
prob = cp.Problem(obj, constraints)
prob.solve(solver=cp.CPLEX, verbose=False)
# print("status:", prob.status)
print("\nThe estimate of w is:", bw.value[1],
"\nThe estimate of b is:", bw.value[0], )
upper = X_lp[:, 1] * bw.value[1] + bw.value[0] + epsilon # upper bound of the margin
lower = X_lp[:, 1] * bw.value[1] + bw.value[0] - epsilon # lower bound of the margin
plt.close('all')
x = np.linspace(.5, 6, 100)
y = bw.value[1] * x + bw.value[0]
plt.plot(x, y, c='red', label='y = wx + b')
x = [[min(X_lp[:, 1]), max(X_lp[:, 1])]]
y = [[min(lower), max(lower)]]
for i in range(len(x)):
plt.plot(x[i], y[i], '--g')
y = [[min(upper), max(upper)]]
for i in range(len(x)):
plt.plot(x[i], y[i], '--g', label='margin')
plt.title('Fitted line using SVR')
plt.legend(loc='upper left')
plt.scatter(x=ar[1], y=ar[2], c='blue')
plt.axis('scaled')
plt.show()
```
### Single-layer NN
#### First two iterations illustration
Similar to Q4,
**For the 1st iteration**, with initial value $w=10$:
$$
\frac{\partial E}{\partial a}=a-T=5(wp_i+1)-T_i\\
\frac{\partial f}{\partial x}=5$$
$$\frac{\partial x_1}{\partial w}=p_1=1$$
$$\vdots$$
$$\frac{\partial x_6}{\partial w}=p_6=6$$
$$\frac{\partial x_7}{\partial w}=p_7=2$$
$$\frac{\partial x_8}{\partial w}=p_8=3$$
For $i=1$,
$$
\frac{\partial E}{\partial a}=a_i-T_i=5(wp_i+1)-T_i=5(10*1+1)-1=54\\
\frac{\partial E}{\partial w}=54*5*1$$
For $i=2$,
$$
\frac{\partial E}{\partial a}=a_i-T_i=5(wp_i+1)-T_i=5(10*2+1)-2=103\\
\frac{\partial E}{\partial w}=103*5*2$$
For $i=3$,
$$
\frac{\partial E}{\partial a}=a_i-T_i=5(wp_i+1)-T_i=5(10*3+1)-3=152\\
\frac{\partial E}{\partial w}=152*5*3$$
For $i=4$,
$$
\frac{\partial E}{\partial a}=a_i-T_i=5(wp_i+1)-T_i=5(10*4+1)-4=201\\
\frac{\partial E}{\partial w}=201*5*4$$
For $i=5$,
$$
\frac{\partial E}{\partial a}=a_i-T_i=5(wp_i+1)-T_i=5(10*5+1)-5=250\\
\frac{\partial E}{\partial w}=250*5*5$$
For $i=6$,
$$
\frac{\partial E}{\partial a}=a_i-T_i=5(wp_i+1)-T_i=5(10*6+1)-6=299\\
\frac{\partial E}{\partial w}=299*5*6$$
For $i=7$,
$$
\frac{\partial E}{\partial a}=a_i-T_i=5(wp_i+1)-T_i=5(10*2+1)-3=102\\
\frac{\partial E}{\partial w}=102*5*2$$
For $i=8$,
$$
\frac{\partial E}{\partial a}=a_i-T_i=5(wp_i+1)-T_i=5(10*3+1)-4=151\\
\frac{\partial E}{\partial w}=151*5*3$$
The sum of gradient for the batch training is:
$$\sum_{i}(\frac{\partial E}{\partial w})=26105
$$
Average the sum of gradient by $N=8$ and the step size (learning rate=0.1) can be calculated as:
$$s_1=0.1*\frac{\sum_{i}(\frac{\partial E}{\partial w})}{N}=326.3125
$$
The new $w$ and output $a$ are calculated:
$$w=10-326.3125=-316.3125\\
a=[-1576.562, -3158.125, -4739.688, -6321.25 , -7902.812, -9484.375, -3158.125, -4739.688]
$$
**For the 2nd iteration,** the same steps as in the 1st iteration have been conducted, and:
The sum of gradient for the batch training is:
$$\sum_{i}(\frac{\partial E}{\partial w})=-822307.5
$$
Average the sum of gradient by $N=8$ and the step size (learning rate=0.1) can be calculated as:
$$s_1=0.1*\frac{\sum_{i}(\frac{\partial E}{\partial w})}{N}=-10278.844
$$
The new $w$ and output $a$ are calculated:
$$w=-316.3125-(−10278.844)=9962.531\\
a=[49817.656, 99630.312, 149442.969, 199255.625, 249068.281, 298880.938, 99630.312, 149442.969]
$$
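As a cross-check, here is a small sketch (not part of the original submission) that reproduces the hand-calculated 1st-iteration values above:
```
# Minimal sketch: reproduce the 1st-iteration batch gradient, step size,
# and weight update computed by hand above for the 8-point data set.
import numpy as np
p = np.array([1, 2, 3, 4, 5, 6, 2, 3], dtype=float)  # inputs, including the two added points
T = np.array([1, 2, 3, 4, 5, 6, 3, 4], dtype=float)  # targets
w, lr = 10.0, 0.1
grad = (5 * (w * p + 1) - T) * 5 * p   # per-sample dE/dw
print(grad.sum())                      # 26105.0
print(lr * grad.mean())                # 326.3125 (step size)
print(w - lr * grad.mean())            # -316.3125 (updated w)
```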
#### Code
Similar to Q4, **we can see from the above that over the first 2 iterations the updated fit $a$ moves farther and farther away from the actual values. This is because the learning rate of 0.1 is too large, which causes the result to oscillate and prevents convergence.** Further discussion is given in Q7 to explore a proper learning rate for this case.
From the code below, after 30 iterations the loss function value keeps growing and does not converge, which further confirms these findings.
```
w, a, gradient = single_layer_NN(lr=0.1, w=10, maxiteration=30)
```
### Two-layer NN
#### First two iterations illustration
The first two iterations are calculated in much the same way as in Q5.
**For the 1st iteration**,
$$
For\quad i=1, 2, \dots, 8, \quad a_1=\frac{1}{1+e^{-(w_1p_i+1)}}\approx1\\
$$
For $i=1:$
$$
\frac{\partial E}{\partial a_2}=a_2-T_i=(w_2a_1+1)-T_i=(10*1+1)-1=10\\
\frac{\partial E}{\partial w_2}=10*1*1=10,\quad \frac{\partial E}{\partial w_1}=10*10*(1-1)*1=0
$$
For $i=2:$
$$
\frac{\partial E}{\partial a_2}=a_2-T_i=(w_2a_1+1)-T_i=(10*1+1)-2=9\\
\frac{\partial E}{\partial w_2}=9*1*1=9,\quad \frac{\partial E}{\partial w_1}=9*10*(1-1)*1=0
$$
For $i=3:$
$$
\frac{\partial E}{\partial a_2}=a_2-T_i=(w_2a_1+1)-T_i=(10*1+1)-3=8\\
\frac{\partial E}{\partial w_2}=8*1*1=8,\quad \frac{\partial E}{\partial w_1}=8*10*(1-1)*1=0
$$
For $i=4:$
$$
\frac{\partial E}{\partial a_2}=a_2-T_i=(w_2a_1+1)-T_i=(10*1+1)-4=7\\
\frac{\partial E}{\partial w_2}=7*1*1=7,\quad \frac{\partial E}{\partial w_1}=7*10*(1-1)*1=0
$$
For $i=5:$
$$
\frac{\partial E}{\partial a_2}=a_2-T_i=(w_2a_1+1)-T_i=(10*1+1)-5=6\\
\frac{\partial E}{\partial w_2}=6*1*1=6,\quad \frac{\partial E}{\partial w_1}=6*10*(1-1)*1=0
$$
For $i=6:$
$$
\frac{\partial E}{\partial a_2}=a_2-T_i=(w_2a_1+1)-T_i=(10*1+1)-6=5\\
\frac{\partial E}{\partial w_2}=5*1*1=5,\quad \frac{\partial E}{\partial w_1}=5*10*(1-1)*1=0
$$
For $i=7:$
$$
\frac{\partial E}{\partial a_2}=a_2-T_i=(w_2a_1+1)-T_i=(10*1+1)-3=8\\
\frac{\partial E}{\partial w_2}=8*1*1=8,\quad \frac{\partial E}{\partial w_1}=8*10*(1-1)*2=0
$$
For $i=8:$
$$
\frac{\partial E}{\partial a_2}=a_2-T_i=(w_2a_1+1)-T_i=(10*1+1)-4=7\\
\frac{\partial E}{\partial w_2}=7*1*1=7,\quad \frac{\partial E}{\partial w_1}=7*10*(1-1)*3=0
$$
The sum of gradient for the batch training is:
$$\sum_{i}(\frac{\partial E}{\partial w_1})=0$$
$$\sum_{i}(\frac{\partial E}{\partial w_2})=10+9+8+7+6+5+8+7=60$$
Average the sum of gradient by $N=8$ and the step size (learning rate=0.1) can be calculated as:
$$s_1=0.1*\frac{\sum_{i}(\frac{\partial E}{\partial w_1})}{N}=0$$
$$s_2=0.1*\frac{\sum_{i}(\frac{\partial E}{\partial w_2})}{N}=0.75$$
The new weights $w_1$, $w_2$ and outputs $a_1$ and $a_2$ can now be calculated. The values of $a_1$ and $a_2$ below apply to all 8 observations.
$$
w_1=w_1-s_1=10-0=10,\\
w_2=w_2-s_2=10-0.75=9.25\\
a_1=\frac{1}{1+e^{-(w_1p_i+1)}}\approx1, \quad i\in [1,2,\dots,8]\\
a_2=w_2a_1+b=9.25*1+1=10.25
$$
**For the 2nd iteration**, the same steps as in the 1st iteration have been conducted, and:
The sum of gradient for the batch training is:
$$\sum_{i}(\frac{\partial E}{\partial w_1})=0$$
$$\sum_{i}(\frac{\partial E}{\partial w_2})=54$$
Average the sum of gradient by $N=8$ and the step size (learning rate=0.1) can be calculated as:
$$s_1=0.1*\frac{\sum_{i}(\frac{\partial E}{\partial w_1})}{N}=0$$
$$s_2=0.1*\frac{\sum_{i}(\frac{\partial E}{\partial w_2})}{N}=0.675$$
The new weights $w_1$, $w_2$ and outputs $a_1$ and $a_2$ can now be calculated; the values of $a_1$ and $a_2$ below apply to all 8 observations:
$$
w_1=w_1-s_1=10-0=10,\\
w_2=w_2-s_2=9.25-0.675=8.575\\
a_1=\frac{1}{1+e^{-(w_1p_i+1)}}\approx1, \quad i\in [1,2,\dots,8]\\
a_2=w_2a_1+b=8.575*1+1=9.575
$$
#### Code
Below is the code to estimate all weights using batch training, with the stopping criterion being a change in the loss function of less than 0.0001. The iterations stop at Iteration 62 with $w_1=10$ and $w_2=2.51$; $w_1$ hardly changes throughout the iterations. The first 60 iteration results are not shown to keep the report concise.
One thing we can tell is that, compared to Q5, the fitted $w_1$ and $w_2$ are almost the same even though two more points were added to the training set. A plot is also given to show how well the two-layer NN model fits the 8 sample data points; as we can see, they are not fitted well.
```
w1, w2, a1, a2, gradient_1, gradient_2 = linear_activation_NN(C=1, lr=0.1, w1=10, w2=10, maxiteration=100)
# plot the fit
x = np.linspace(-4, 10, 100)
y = w2 * (1 / (1 + np.exp(-(w1 * x + 1)))) + 1
# plt.close('all')
plt.plot(x, y, c='red', label='y = f(w2 * a1 + b)')
plt.title('Fitted line using two-layer NN')
plt.legend(loc='upper left')
plt.scatter(x=ar[1], y=ar[2], c='blue')
plt.xlim((-5, 8))
plt.ylim((-2, 8))
plt.xlabel('x')
plt.ylabel('y')
plt.show()
```
## Q7 Discussion
The detailed comments for Q1, Q2, Q3, Q5 and Q6 have been made in each section respectively. Here the convergence issue in Q4 and Q6 (the Single-layer NN) will be discussed.
### Discussion of Convergence Issue
As mentioned in Q4, over the first 2 iterations the updated fit $a$ moves farther and farther away from the actual values, and after running 30 iterations the loss function value keeps growing and does not converge. This is because the learning rate of 0.1 is too large, which causes the result to oscillate and prevents convergence. Below, the learning rate has been adjusted to 0.001 and the algorithm converged after 23 iterations with a loss function value of `14.423`.
The fit has been plotted against the sample data points.
```
ar = np.array([[1, 1, 1, 1, 1, 1], # intercept
[1, 2, 3, 4, 5, 6], # x
[1, 2, 3, 4, 5, 6]]) # y
# Data preprocessing
X_lp = ar[[0, 1], :].T # transpose the array before modeling
y_lp = ar[2].T
# Learning rate has been adjusted to 0.001
w, a, gradient = single_layer_NN(lr=0.001, w=10, maxiteration=100)
# plot the fit
x = np.linspace(0, 10, 100)
y = 5 * w * x + 5
plt.close('all')
plt.plot(x, y, c='red', label='y = f(wx + b)')
plt.title('Fitted line using single-layer NN')
plt.legend(loc='upper left')
plt.scatter(x=ar[1], y=ar[2], c='blue')
plt.show()
```
The same experiment has been conducted for the convergence issue in Q6 (single-layer NN). As mentioned in Q6, over the first 2 iterations the updated fit $a$ moves farther and farther away from the actual values, and after running 30 iterations the loss function value keeps growing and does not converge, again because the learning rate of 0.1 is too large and causes oscillation. Below, the learning rate has been adjusted to 0.001 and the algorithm converged after 26 iterations with a loss function value of `15.880`.
The fit has been plotted against the sample data points.
```
ar = np.array([[1, 1, 1, 1, 1, 1, 1, 1], # intercept
[1, 2, 3, 4, 5, 6, 2, 3], # x
[1, 2, 3, 4, 5, 6, 3, 4]]) # y
# Data preprocessing
X_lp = ar[[0, 1], :].T # transpose the array before modeling
y_lp = ar[2].T
# Learning rate has been adjusted to 0.001
w, a, gradient = single_layer_NN(lr=0.001, w=10, maxiteration=100)
# plot the fit
x = np.linspace(0, 10, 100)
y = 5 * w * x + 5
plt.close('all')
plt.plot(x, y, c='red', label='y = f(wx + b)')
plt.title('Fitted line using single-layer NN')
plt.legend(loc='upper left')
plt.scatter(x=ar[1], y=ar[2], c='blue')
plt.show()
```
## Q8 Bonus Question
I attempt to add two points aimed at balancing out the effect of the two additional points added in Q6: (2, 1) and (3, 2) have been added.
**All four models (Simple Linear Regression, Fuzzy Linear Regression, Support Vector Regression and Single-layer NN) all lead to the same fitted line and they give the same predictions for x = 1, 2, 3, 4, 5, and 6. The prediction results are y = 1, 2, 3, 4, 5, and 6 respectively.**
The training observations look like the graph below.
```
ar = np.array([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1], # intercept
[1, 2, 3, 4, 5, 6, 2, 3, 2, 3], # x
[1, 2, 3, 4, 5, 6, 3, 4, 1, 2]]) # y
# plot the dot points
plt.scatter(x=ar[1], y=ar[2], c='blue')
plt.title('Visualization of training observations')
plt.axis('scaled')
plt.show()
X_lp = ar[[0, 1], :].T # transpose the array before modeling
y_lp = ar[2].T
```
### Simple Linear Regression
For Simple Linear Regression, the same model in Q1 is used. The estimated a is 1 and b is 0:
```
# Define and solve the CVXPY problem.
beta = cp.Variable(X_lp.shape[1]) # return num of cols, 2 in total
cost = cp.sum_squares(X_lp * beta - y_lp) # define cost function
obj = cp.Minimize(cost) # define objective function
prob = cp.Problem(obj)
prob.solve(solver=cp.CPLEX, verbose=False)
# print("status:", prob.status)
print("\nThe optimal value of loss is:", prob.value)
print("\nThe estimated of a (slope) is:", beta.value[1],
"\nThe estimate of b (intercept) is:", beta.value[0])
# Plot the fit
x = np.linspace(0, 10, 100)
y = beta.value[1] * x + beta.value[0]
plt.close('all')
plt.plot(x, y, c='red', label='y = ax + b')
plt.title('Fitted line using simple LR')
plt.legend(loc='upper left')
plt.scatter(x=ar[1], y=ar[2], c='blue')
plt.axis('scaled')
plt.show()
```
### Fuzzy Linear Regression
For Fuzzy Linear Regression, the same model has been used from Q6. The estimated $A0=0$ with spread=2 and $A1=1$ with spread=0.
```
# Define threshold h (it has same meaning as the alpha in alpha-cut). Higher the h, wider the spread.
h = 0.5
# Define and solve the CVXPY problem.
c = cp.Variable(X_lp.shape[1]) # for spread variables, A0 and A1
alpha = cp.Variable(X_lp.shape[1]) # for center/core variables, A0 and A1
cost = cp.sum(X_lp * c) # define cost function
obj = cp.Minimize(cost) # define objective function
constraints = [c >= 0,
y_lp <= (1 - h) * abs(X_lp) * c + X_lp * alpha, # abs operate on each elements of X_lp
-y_lp <= (1 - h) * abs(X_lp) * c - X_lp * alpha]
prob = cp.Problem(obj, constraints)
prob.solve(solver=cp.CPLEX, verbose=False)
# print("status:", prob.status)
print("\nThe optimal value of loss is:", prob.value)
print("\nThe center of A1 (slope) is:", alpha.value[1],
"\nThe spread of A1 (slope) is:", c.value[1],
"\nThe center of A0 (intercept) is:", alpha.value[0],
"\nThe spread of A0 (intercept) is:", c.value[0])
# Plot the FR fit
x = np.linspace(0, 10, 100)
y = alpha.value[1] * x + alpha.value[0]
plt.close('all')
plt.plot(x, y, c='red', label='y = A1x + A0')
y = (alpha.value[1] + c.value[1]) * x + alpha.value[0] + c.value[0]
plt.plot(x, y, '--g', label='Fuzzy Spread')
y = (alpha.value[1] - c.value[1]) * x + alpha.value[0] - c.value[0]
plt.plot(x, y, '--g')
plt.title('Fitted line using Fuzzy LR')
plt.legend(loc='upper left')
plt.scatter(x=ar[1], y=ar[2], c='blue')
plt.axis('scaled')
plt.show()
```
### Support Vector Regression
For Support Vector Regression, the same model has been used from Q6 with $\epsilon$ been set to 1. The estimated $w$ is 1 and $b$ is 0.
```
epsilon = 1
bw = cp.Variable(X_lp.shape[1]) # for b and w parameters in SVR. bw[0]=b, bw[1]=w
cost = 1 / 2 * bw[1] ** 2
obj = cp.Minimize(cost)
constraints = [
y_lp <= X_lp * bw + epsilon,
-y_lp <= -(X_lp * bw) + epsilon]
prob = cp.Problem(obj, constraints)
prob.solve(solver=cp.CPLEX, verbose=False)
# print("status:", prob.status)
print("\nSVR result:")
print("The estimate of w is:", bw.value[1],
"\nThe estimate of b is:", bw.value[0], )
# Plot the SVR fit
upper = X_lp[:, 1] * bw.value[1] + bw.value[0] + epsilon # upper bound of the margin
lower = X_lp[:, 1] * bw.value[1] + bw.value[0] - epsilon # lower bound of the margin
x = np.linspace(.5, 6, 100)
y = bw.value[1] * x + bw.value[0]
plt.plot(x, y, c='red', label='y = wx + b')
x = [[min(X_lp[:, 1]), max(X_lp[:, 1])]]
y = [[min(lower), max(lower)]]
for i in range(len(x)):
plt.plot(x[i], y[i], '--g')
y = [[min(upper), max(upper)]]
for i in range(len(x)):
plt.plot(x[i], y[i], '--g', label='margin')
plt.title('Fitted line using SVR')
plt.legend(loc='upper left')
plt.scatter(x=ar[1], y=ar[2], c='blue')
plt.axis('scaled')
plt.show()
```
### Single-layer NN
For the single-layer NN, I use the same structure as in Q4 ***with the bias set to 0***. As discussed in Q7, I set the learning rate to 0.001 and the algorithm converges at iteration 30. The estimated $w$ is 0.2. The fitted line is plotted together with the training sample points.
```
def single_layer_NN(lr, w, maxiteration, bias=1):
"""lr - learning rate\n
w - initial value of w\n
maxiteration - define # of max iteration\n
bias - default is 1 """
E0 = sum(0.5 * np.power((y_lp - 5 * (w * X_lp[:, 1] + bias)), 2)) # initialize Loss, before 1st iteration
for i in range(maxiteration):
if i > 0: # Starting 2nd iteration, E1 value give to E0
E0 = E1 # Loss before iteration
print("Iteration=", i, ",", "Loss value=", E0)
gradient = np.mean((5 * (w * X_lp[:, 1] + bias) - y_lp) * 5 * X_lp[:, 1]) # calculate gradient
step = gradient * lr # calculate step size
w = w - step # refresh the weight
E1 = sum(0.5 * np.power((5 * (w * X_lp[:, 1] + bias) - y_lp), 2)) # Loss after iteration
a = 5 * (w * X_lp[:, 1] + bias)  # the refreshed output (use the bias parameter)
if abs(E0 - E1) <= 0.0001:
print('Break out of the loop and end at Iteration=', i,
'\nThe value of loss is:', E1,
'\nThe value of w is:', w)
break
return w, a, gradient
w, a, gradient = single_layer_NN(lr=0.001, w=10, maxiteration=40, bias=0)
# plot the NN fit
x = np.linspace(0, 10, 100)
y = 5 * w * x + 0
plt.close('all')
plt.plot(x, y, c='red', label='y = f(wx + b)')
plt.title('Fitted line using single-layer NN')
plt.legend(loc='upper left')
plt.scatter(x=ar[1], y=ar[2], c='blue')
plt.axis('scaled')
plt.show()
```
**Chapter 7 – Ensemble Learning and Random Forests**
_This notebook contains all the sample code and solutions to the exercises in chapter 7._
<table align="left">
<td>
<a target="_blank" href="https://colab.research.google.com/github/ageron/handson-ml2/blob/master/07_ensemble_learning_and_random_forests.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
</table>
# Setup
First, let's import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures. We also check that Python 3.5 or later is installed (although Python 2.x may work, it is deprecated so we strongly recommend you use Python 3 instead), as well as Scikit-Learn ≥0.20.
```
# Python ≥3.5 is required
import sys
assert sys.version_info >= (3, 5)
# Scikit-Learn ≥0.20 is required
import sklearn
assert sklearn.__version__ >= "0.20"
# Common imports
import numpy as np
import os
# to make this notebook's output stable across runs
np.random.seed(42)
# To plot pretty figures
%matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
mpl.rc('axes', labelsize=14)
mpl.rc('xtick', labelsize=12)
mpl.rc('ytick', labelsize=12)
# Where to save the figures
PROJECT_ROOT_DIR = "."
CHAPTER_ID = "ensembles"
IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID)
os.makedirs(IMAGES_PATH, exist_ok=True)
def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300):
path = os.path.join(IMAGES_PATH, fig_id + "." + fig_extension)
print("Saving figure", fig_id)
if tight_layout:
plt.tight_layout()
plt.savefig(path, format=fig_extension, dpi=resolution)
```
# Voting classifiers
```
heads_proba = 0.51
coin_tosses = (np.random.rand(10000, 10) < heads_proba).astype(np.int32)
cumulative_heads_ratio = np.cumsum(coin_tosses, axis=0) / np.arange(1, 10001).reshape(-1, 1)
plt.figure(figsize=(8,3.5))
plt.plot(cumulative_heads_ratio)
plt.plot([0, 10000], [0.51, 0.51], "k--", linewidth=2, label="51%")
plt.plot([0, 10000], [0.5, 0.5], "k-", label="50%")
plt.xlabel("Number of coin tosses")
plt.ylabel("Heads ratio")
plt.legend(loc="lower right")
plt.axis([0, 10000, 0.42, 0.58])
save_fig("law_of_large_numbers_plot")
plt.show()
from sklearn.model_selection import train_test_split
from sklearn.datasets import make_moons
X, y = make_moons(n_samples=500, noise=0.30, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)
```
**Note**: to be future-proof, we set `solver="lbfgs"`, `n_estimators=100`, and `gamma="scale"` since these will be the default values in upcoming Scikit-Learn versions.
```
from sklearn.ensemble import RandomForestClassifier
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
log_clf = LogisticRegression(solver="lbfgs", random_state=42)
rnd_clf = RandomForestClassifier(n_estimators=100, random_state=42)
svm_clf = SVC(gamma="scale", random_state=42)
voting_clf = VotingClassifier(
estimators=[('lr', log_clf), ('rf', rnd_clf), ('svc', svm_clf)],
voting='hard')
voting_clf.fit(X_train, y_train)
from sklearn.metrics import accuracy_score
for clf in (log_clf, rnd_clf, svm_clf, voting_clf):
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)
print(clf.__class__.__name__, accuracy_score(y_test, y_pred))
```
Soft voting:
```
log_clf = LogisticRegression(solver="lbfgs", random_state=42)
rnd_clf = RandomForestClassifier(n_estimators=100, random_state=42)
svm_clf = SVC(gamma="scale", probability=True, random_state=42)
voting_clf = VotingClassifier(
estimators=[('lr', log_clf), ('rf', rnd_clf), ('svc', svm_clf)],
voting='soft')
voting_clf.fit(X_train, y_train)
from sklearn.metrics import accuracy_score
for clf in (log_clf, rnd_clf, svm_clf, voting_clf):
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)
print(clf.__class__.__name__, accuracy_score(y_test, y_pred))
```
# Bagging ensembles
```
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier
bag_clf = BaggingClassifier(
DecisionTreeClassifier(random_state=42), n_estimators=500,
max_samples=100, bootstrap=True, random_state=42)
bag_clf.fit(X_train, y_train)
y_pred = bag_clf.predict(X_test)
from sklearn.metrics import accuracy_score
print(accuracy_score(y_test, y_pred))
tree_clf = DecisionTreeClassifier(random_state=42)
tree_clf.fit(X_train, y_train)
y_pred_tree = tree_clf.predict(X_test)
print(accuracy_score(y_test, y_pred_tree))
from matplotlib.colors import ListedColormap
def plot_decision_boundary(clf, X, y, axes=[-1.5, 2.45, -1, 1.5], alpha=0.5, contour=True):
x1s = np.linspace(axes[0], axes[1], 100)
x2s = np.linspace(axes[2], axes[3], 100)
x1, x2 = np.meshgrid(x1s, x2s)
X_new = np.c_[x1.ravel(), x2.ravel()]
y_pred = clf.predict(X_new).reshape(x1.shape)
custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0'])
plt.contourf(x1, x2, y_pred, alpha=0.3, cmap=custom_cmap)
if contour:
custom_cmap2 = ListedColormap(['#7d7d58','#4c4c7f','#507d50'])
plt.contour(x1, x2, y_pred, cmap=custom_cmap2, alpha=0.8)
plt.plot(X[:, 0][y==0], X[:, 1][y==0], "yo", alpha=alpha)
plt.plot(X[:, 0][y==1], X[:, 1][y==1], "bs", alpha=alpha)
plt.axis(axes)
plt.xlabel(r"$x_1$", fontsize=18)
plt.ylabel(r"$x_2$", fontsize=18, rotation=0)
fig, axes = plt.subplots(ncols=2, figsize=(10,4), sharey=True)
plt.sca(axes[0])
plot_decision_boundary(tree_clf, X, y)
plt.title("Decision Tree", fontsize=14)
plt.sca(axes[1])
plot_decision_boundary(bag_clf, X, y)
plt.title("Decision Trees with Bagging", fontsize=14)
plt.ylabel("")
save_fig("decision_tree_without_and_with_bagging_plot")
plt.show()
```
# Random Forests
```
bag_clf = BaggingClassifier(
DecisionTreeClassifier(splitter="random", max_leaf_nodes=16, random_state=42),
n_estimators=500, max_samples=1.0, bootstrap=True, random_state=42)
bag_clf.fit(X_train, y_train)
y_pred = bag_clf.predict(X_test)
from sklearn.ensemble import RandomForestClassifier
rnd_clf = RandomForestClassifier(n_estimators=500, max_leaf_nodes=16, random_state=42)
rnd_clf.fit(X_train, y_train)
y_pred_rf = rnd_clf.predict(X_test)
np.sum(y_pred == y_pred_rf) / len(y_pred) # almost identical predictions
from sklearn.datasets import load_iris
iris = load_iris()
rnd_clf = RandomForestClassifier(n_estimators=500, random_state=42)
rnd_clf.fit(iris["data"], iris["target"])
for name, score in zip(iris["feature_names"], rnd_clf.feature_importances_):
print(name, score)
rnd_clf.feature_importances_
plt.figure(figsize=(6, 4))
for i in range(15):
tree_clf = DecisionTreeClassifier(max_leaf_nodes=16, random_state=42 + i)
indices_with_replacement = np.random.randint(0, len(X_train), len(X_train))
tree_clf.fit(X[indices_with_replacement], y[indices_with_replacement])
plot_decision_boundary(tree_clf, X, y, axes=[-1.5, 2.45, -1, 1.5], alpha=0.02, contour=False)
plt.show()
```
## Out-of-Bag evaluation
```
bag_clf = BaggingClassifier(
DecisionTreeClassifier(random_state=42), n_estimators=500,
bootstrap=True, oob_score=True, random_state=40)
bag_clf.fit(X_train, y_train)
bag_clf.oob_score_
bag_clf.oob_decision_function_
from sklearn.metrics import accuracy_score
y_pred = bag_clf.predict(X_test)
accuracy_score(y_test, y_pred)
```
## Feature importance
```
from sklearn.datasets import fetch_openml
mnist = fetch_openml('mnist_784', version=1)
mnist.target = mnist.target.astype(np.uint8)
rnd_clf = RandomForestClassifier(n_estimators=100, random_state=42)
rnd_clf.fit(mnist["data"], mnist["target"])
def plot_digit(data):
image = data.reshape(28, 28)
plt.imshow(image, cmap = mpl.cm.hot,
interpolation="nearest")
plt.axis("off")
plot_digit(rnd_clf.feature_importances_)
cbar = plt.colorbar(ticks=[rnd_clf.feature_importances_.min(), rnd_clf.feature_importances_.max()])
cbar.ax.set_yticklabels(['Not important', 'Very important'])
save_fig("mnist_feature_importance_plot")
plt.show()
```
# AdaBoost
```
from sklearn.ensemble import AdaBoostClassifier
ada_clf = AdaBoostClassifier(
DecisionTreeClassifier(max_depth=1), n_estimators=200,
algorithm="SAMME.R", learning_rate=0.5, random_state=42)
ada_clf.fit(X_train, y_train)
plot_decision_boundary(ada_clf, X, y)
m = len(X_train)
fig, axes = plt.subplots(ncols=2, figsize=(10,4), sharey=True)
for subplot, learning_rate in ((0, 1), (1, 0.5)):
sample_weights = np.ones(m)
plt.sca(axes[subplot])
for i in range(5):
svm_clf = SVC(kernel="rbf", C=0.05, gamma="scale", random_state=42)
svm_clf.fit(X_train, y_train, sample_weight=sample_weights)
y_pred = svm_clf.predict(X_train)
sample_weights[y_pred != y_train] *= (1 + learning_rate)
plot_decision_boundary(svm_clf, X, y, alpha=0.2)
plt.title("learning_rate = {}".format(learning_rate), fontsize=16)
if subplot == 0:
plt.text(-0.7, -0.65, "1", fontsize=14)
plt.text(-0.6, -0.10, "2", fontsize=14)
plt.text(-0.5, 0.10, "3", fontsize=14)
plt.text(-0.4, 0.55, "4", fontsize=14)
plt.text(-0.3, 0.90, "5", fontsize=14)
else:
plt.ylabel("")
save_fig("boosting_plot")
plt.show()
list(m for m in dir(ada_clf) if not m.startswith("_") and m.endswith("_"))
```
# Gradient Boosting
```
np.random.seed(42)
X = np.random.rand(100, 1) - 0.5
y = 3*X[:, 0]**2 + 0.05 * np.random.randn(100)
from sklearn.tree import DecisionTreeRegressor
tree_reg1 = DecisionTreeRegressor(max_depth=2, random_state=42)
tree_reg1.fit(X, y)
y2 = y - tree_reg1.predict(X)
tree_reg2 = DecisionTreeRegressor(max_depth=2, random_state=42)
tree_reg2.fit(X, y2)
y3 = y2 - tree_reg2.predict(X)
tree_reg3 = DecisionTreeRegressor(max_depth=2, random_state=42)
tree_reg3.fit(X, y3)
X_new = np.array([[0.8]])
y_pred = sum(tree.predict(X_new) for tree in (tree_reg1, tree_reg2, tree_reg3))
y_pred
def plot_predictions(regressors, X, y, axes, label=None, style="r-", data_style="b.", data_label=None):
x1 = np.linspace(axes[0], axes[1], 500)
y_pred = sum(regressor.predict(x1.reshape(-1, 1)) for regressor in regressors)
plt.plot(X[:, 0], y, data_style, label=data_label)
plt.plot(x1, y_pred, style, linewidth=2, label=label)
if label or data_label:
plt.legend(loc="upper center", fontsize=16)
plt.axis(axes)
plt.figure(figsize=(11,11))
plt.subplot(321)
plot_predictions([tree_reg1], X, y, axes=[-0.5, 0.5, -0.1, 0.8], label="$h_1(x_1)$", style="g-", data_label="Training set")
plt.ylabel("$y$", fontsize=16, rotation=0)
plt.title("Residuals and tree predictions", fontsize=16)
plt.subplot(322)
plot_predictions([tree_reg1], X, y, axes=[-0.5, 0.5, -0.1, 0.8], label="$h(x_1) = h_1(x_1)$", data_label="Training set")
plt.ylabel("$y$", fontsize=16, rotation=0)
plt.title("Ensemble predictions", fontsize=16)
plt.subplot(323)
plot_predictions([tree_reg2], X, y2, axes=[-0.5, 0.5, -0.5, 0.5], label="$h_2(x_1)$", style="g-", data_style="k+", data_label="Residuals")
plt.ylabel("$y - h_1(x_1)$", fontsize=16)
plt.subplot(324)
plot_predictions([tree_reg1, tree_reg2], X, y, axes=[-0.5, 0.5, -0.1, 0.8], label="$h(x_1) = h_1(x_1) + h_2(x_1)$")
plt.ylabel("$y$", fontsize=16, rotation=0)
plt.subplot(325)
plot_predictions([tree_reg3], X, y3, axes=[-0.5, 0.5, -0.5, 0.5], label="$h_3(x_1)$", style="g-", data_style="k+")
plt.ylabel("$y - h_1(x_1) - h_2(x_1)$", fontsize=16)
plt.xlabel("$x_1$", fontsize=16)
plt.subplot(326)
plot_predictions([tree_reg1, tree_reg2, tree_reg3], X, y, axes=[-0.5, 0.5, -0.1, 0.8], label="$h(x_1) = h_1(x_1) + h_2(x_1) + h_3(x_1)$")
plt.xlabel("$x_1$", fontsize=16)
plt.ylabel("$y$", fontsize=16, rotation=0)
save_fig("gradient_boosting_plot")
plt.show()
from sklearn.ensemble import GradientBoostingRegressor
gbrt = GradientBoostingRegressor(max_depth=2, n_estimators=3, learning_rate=1.0, random_state=42)
gbrt.fit(X, y)
gbrt_slow = GradientBoostingRegressor(max_depth=2, n_estimators=200, learning_rate=0.1, random_state=42)
gbrt_slow.fit(X, y)
fig, axes = plt.subplots(ncols=2, figsize=(10,4), sharey=True)
plt.sca(axes[0])
plot_predictions([gbrt], X, y, axes=[-0.5, 0.5, -0.1, 0.8], label="Ensemble predictions")
plt.title("learning_rate={}, n_estimators={}".format(gbrt.learning_rate, gbrt.n_estimators), fontsize=14)
plt.xlabel("$x_1$", fontsize=16)
plt.ylabel("$y$", fontsize=16, rotation=0)
plt.sca(axes[1])
plot_predictions([gbrt_slow], X, y, axes=[-0.5, 0.5, -0.1, 0.8])
plt.title("learning_rate={}, n_estimators={}".format(gbrt_slow.learning_rate, gbrt_slow.n_estimators), fontsize=14)
plt.xlabel("$x_1$", fontsize=16)
save_fig("gbrt_learning_rate_plot")
plt.show()
```
## Gradient Boosting with Early stopping
```
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=49)
gbrt = GradientBoostingRegressor(max_depth=2, n_estimators=120, random_state=42)
gbrt.fit(X_train, y_train)
errors = [mean_squared_error(y_val, y_pred)
for y_pred in gbrt.staged_predict(X_val)]
bst_n_estimators = np.argmin(errors) + 1
gbrt_best = GradientBoostingRegressor(max_depth=2, n_estimators=bst_n_estimators, random_state=42)
gbrt_best.fit(X_train, y_train)
min_error = np.min(errors)
plt.figure(figsize=(10, 4))
plt.subplot(121)
plt.plot(errors, "b.-")
plt.plot([bst_n_estimators, bst_n_estimators], [0, min_error], "k--")
plt.plot([0, 120], [min_error, min_error], "k--")
plt.plot(bst_n_estimators, min_error, "ko")
plt.text(bst_n_estimators, min_error*1.2, "Minimum", ha="center", fontsize=14)
plt.axis([0, 120, 0, 0.01])
plt.xlabel("Number of trees")
plt.ylabel("Error", fontsize=16)
plt.title("Validation error", fontsize=14)
plt.subplot(122)
plot_predictions([gbrt_best], X, y, axes=[-0.5, 0.5, -0.1, 0.8])
plt.title("Best model (%d trees)" % bst_n_estimators, fontsize=14)
plt.ylabel("$y$", fontsize=16, rotation=0)
plt.xlabel("$x_1$", fontsize=16)
save_fig("early_stopping_gbrt_plot")
plt.show()
gbrt = GradientBoostingRegressor(max_depth=2, warm_start=True, random_state=42)
min_val_error = float("inf")
error_going_up = 0
for n_estimators in range(1, 120):
gbrt.n_estimators = n_estimators
gbrt.fit(X_train, y_train)
y_pred = gbrt.predict(X_val)
val_error = mean_squared_error(y_val, y_pred)
if val_error < min_val_error:
min_val_error = val_error
error_going_up = 0
else:
error_going_up += 1
if error_going_up == 5:
break # early stopping
print(gbrt.n_estimators)
print("Minimum validation MSE:", min_val_error)
```
## Using XGBoost
```
try:
import xgboost
except ImportError as ex:
print("Error: the xgboost library is not installed.")
xgboost = None
if xgboost is not None: # not shown in the book
xgb_reg = xgboost.XGBRegressor(random_state=42)
xgb_reg.fit(X_train, y_train)
y_pred = xgb_reg.predict(X_val)
val_error = mean_squared_error(y_val, y_pred) # Not shown
print("Validation MSE:", val_error) # Not shown
if xgboost is not None: # not shown in the book
xgb_reg.fit(X_train, y_train,
eval_set=[(X_val, y_val)], early_stopping_rounds=2)
y_pred = xgb_reg.predict(X_val)
val_error = mean_squared_error(y_val, y_pred) # Not shown
print("Validation MSE:", val_error) # Not shown
%timeit xgboost.XGBRegressor().fit(X_train, y_train) if xgboost is not None else None
%timeit GradientBoostingRegressor().fit(X_train, y_train)
```
# Exercise solutions
## 1. to 7.
See Appendix A.
## 8. Voting Classifier
Exercise: _Load the MNIST data and split it into a training set, a validation set, and a test set (e.g., use 50,000 instances for training, 10,000 for validation, and 10,000 for testing)._
The MNIST dataset was loaded earlier.
```
from sklearn.model_selection import train_test_split
X_train_val, X_test, y_train_val, y_test = train_test_split(
mnist.data, mnist.target, test_size=10000, random_state=42)
X_train, X_val, y_train, y_val = train_test_split(
X_train_val, y_train_val, test_size=10000, random_state=42)
```
Exercise: _Then train various classifiers, such as a Random Forest classifier, an Extra-Trees classifier, and an SVM._
```
from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier
from sklearn.svm import LinearSVC
from sklearn.neural_network import MLPClassifier
random_forest_clf = RandomForestClassifier(n_estimators=100, random_state=42)
extra_trees_clf = ExtraTreesClassifier(n_estimators=100, random_state=42)
svm_clf = LinearSVC(random_state=42)
mlp_clf = MLPClassifier(random_state=42)
estimators = [random_forest_clf, extra_trees_clf, svm_clf, mlp_clf]
for estimator in estimators:
print("Training the", estimator)
estimator.fit(X_train, y_train)
[estimator.score(X_val, y_val) for estimator in estimators]
```
The linear SVM is far outperformed by the other classifiers. However, let's keep it for now since it may improve the voting classifier's performance.
Exercise: _Next, try to combine them into an ensemble that outperforms them all on the validation set, using a soft or hard voting classifier._
```
from sklearn.ensemble import VotingClassifier
named_estimators = [
("random_forest_clf", random_forest_clf),
("extra_trees_clf", extra_trees_clf),
("svm_clf", svm_clf),
("mlp_clf", mlp_clf),
]
voting_clf = VotingClassifier(named_estimators)
voting_clf.fit(X_train, y_train)
voting_clf.score(X_val, y_val)
[estimator.score(X_val, y_val) for estimator in voting_clf.estimators_]
```
Let's remove the SVM to see if performance improves. It is possible to remove an estimator by setting it to `None` using `set_params()` like this:
```
voting_clf.set_params(svm_clf=None)
```
This updated the list of estimators:
```
voting_clf.estimators
```
However, it did not update the list of _trained_ estimators:
```
voting_clf.estimators_
```
So we can either fit the `VotingClassifier` again, or just remove the SVM from the list of trained estimators:
```
del voting_clf.estimators_[2]
```
Now let's evaluate the `VotingClassifier` again:
```
voting_clf.score(X_val, y_val)
```
A bit better! The SVM was hurting performance. Now let's try using a soft voting classifier. We do not actually need to retrain the classifier; we can just set `voting` to `"soft"`:
```
voting_clf.voting = "soft"
voting_clf.score(X_val, y_val)
```
Nope, hard voting wins in this case.
_Once you have found one, try it on the test set. How much better does it perform compared to the individual classifiers?_
```
voting_clf.voting = "hard"
voting_clf.score(X_test, y_test)
[estimator.score(X_test, y_test) for estimator in voting_clf.estimators_]
```
The voting classifier only very slightly reduced the error rate of the best model in this case.
## 9. Stacking Ensemble
Exercise: _Run the individual classifiers from the previous exercise to make predictions on the validation set, and create a new training set with the resulting predictions: each training instance is a vector containing the set of predictions from all your classifiers for an image, and the target is the image's class. Train a classifier on this new training set._
```
X_val_predictions = np.empty((len(X_val), len(estimators)), dtype=np.float32)
for index, estimator in enumerate(estimators):
X_val_predictions[:, index] = estimator.predict(X_val)
X_val_predictions
rnd_forest_blender = RandomForestClassifier(n_estimators=200, oob_score=True, random_state=42)
rnd_forest_blender.fit(X_val_predictions, y_val)
rnd_forest_blender.oob_score_
```
You could fine-tune this blender or try other types of blenders (e.g., an `MLPClassifier`), then select the best one using cross-validation, as always.
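For example, a minimal sketch (not in the original notebook) of evaluating an `MLPClassifier` blender on the same out-of-fold predictions, using cross-validation for a fairer comparison with the random forest blender's OOB score:
```
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

mlp_blender = MLPClassifier(random_state=42)
# cross-validated accuracy of the MLP blender on the validation-set predictions
cross_val_score(mlp_blender, X_val_predictions, y_val, cv=3).mean()
```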
Exercise: _Congratulations, you have just trained a blender, and together with the classifiers they form a stacking ensemble! Now let's evaluate the ensemble on the test set. For each image in the test set, make predictions with all your classifiers, then feed the predictions to the blender to get the ensemble's predictions. How does it compare to the voting classifier you trained earlier?_
```
X_test_predictions = np.empty((len(X_test), len(estimators)), dtype=np.float32)
for index, estimator in enumerate(estimators):
X_test_predictions[:, index] = estimator.predict(X_test)
y_pred = rnd_forest_blender.predict(X_test_predictions)
from sklearn.metrics import accuracy_score
accuracy_score(y_test, y_pred)
```
This stacking ensemble does not perform as well as the voting classifier we trained earlier; it's not quite as good as the best individual classifier.
# Using [vtreat](https://github.com/WinVector/pyvtreat) with Classification Problems
Nina Zumel and John Mount
November 2019
Note: this is a description of the [`Python` version of `vtreat`](https://github.com/WinVector/pyvtreat), the same example for the [`R` version of `vtreat`](https://github.com/WinVector/vtreat) can be found [here](https://github.com/WinVector/vtreat/blob/master/Examples/Classification/Classification.md).
## Preliminaries
Load modules/packages.
```
import pkg_resources
import pandas
import numpy
import numpy.random
import seaborn
import matplotlib.pyplot as plt
import vtreat
import vtreat.util
import wvpy.util
numpy.random.seed(2019)
```
Generate example data.
* `y` is a noisy sinusoidal function of the variable `x`
* `yc` is the output to be predicted: whether `y` is > 0.5.
* Input `xc` is a categorical variable that represents a discretization of `y`, along with some `NaN`s
* Input `x2` is a pure noise variable with no relationship to the output
```
def make_data(nrows):
d = pandas.DataFrame({'x': 5*numpy.random.normal(size=nrows)})
d['y'] = numpy.sin(d['x']) + 0.1*numpy.random.normal(size=nrows)
d.loc[numpy.arange(3, 10), 'x'] = numpy.nan # introduce a nan level
d['xc'] = ['level_' + str(5*numpy.round(yi/5, 1)) for yi in d['y']]
d['x2'] = numpy.random.normal(size=nrows)
d.loc[d['xc']=='level_-1.0', 'xc'] = numpy.nan # introduce a nan level
d['yc'] = d['y']>0.5
return d
d = make_data(500)
d.head()
outcome_name = 'yc' # outcome variable / column
outcome_target = True # value we consider positive
```
### Some quick data exploration
Check how many levels `xc` has, and their distribution (including `NaN`)
```
d['xc'].unique()
d['xc'].value_counts(dropna=False)
```
Find the prevalence of `yc == True` (our chosen notion of "positive").
```
numpy.mean(d[outcome_name] == outcome_target)
```
Plot of `yc` versus `x`.
```
seaborn.lineplot(x='x', y='yc', data=d)
```
## Build a transform appropriate for classification problems.
Now that we have the data, we want to treat it prior to modeling: we want training data where all the input variables are numeric and have no missing values or `NaN`s.
First create the data treatment transform object, in this case a treatment for a binomial classification problem.
```
transform = vtreat.BinomialOutcomeTreatment(
outcome_name=outcome_name, # outcome variable
outcome_target=outcome_target, # outcome of interest
cols_to_copy=['y'], # columns to "carry along" but not treat as input variables
)
```
Use the training data `d` to fit the transform and then return a treated training set: completely numeric, with no missing values.
Note that for the training data `d`: `transform.fit_transform()` is **not** the same as `transform.fit().transform()`; the second call can lead to nested model bias in some situations, and is **not** recommended.
For other, later data, not seen during transform design `transform.transform(o)` is an appropriate step.
```
d_prepared = transform.fit_transform(d, d['yc'])
```
Now examine the score frame, which gives information about each new variable, including its type, which original variable it is derived from, its (cross-validated) correlation with the outcome, and its (cross-validated) significance as a one-variable linear model for the outcome.
```
transform.score_frame_
```
Note that the variable `xc` has been converted to multiple variables:
* an indicator variable for each possible level (`xc_lev_level_*`)
* the value of a (cross-validated) one-variable model for `yc` as a function of `xc` (`xc_logit_code`)
* a variable that returns how prevalent this particular value of `xc` is in the training data (`xc_prevalence_code`)
* a variable indicating when `xc` was `NaN` in the original data (`xc_is_bad`, `x_is_bad`)
Any or all of these new variables are available for downstream modeling. `x` doesn't show as exciting a significance as `xc`, as we are only checking linear relations, and `x` is related to `y` in a very non-linear way.
The `recommended` column indicates which variables are non constant (`has_range` == True) and have a significance value smaller than `default_threshold`. See the section *Deriving the Default Thresholds* below for the reasoning behind the default thresholds. Recommended columns are intended as advice about which variables appear to be most likely to be useful in a downstream model. This advice attempts to be conservative, to reduce the possibility of mistakenly eliminating variables that may in fact be useful (although, obviously, it can still mistakenly eliminate variables that have a real but non-linear relationship to the output, as is the case with `x`, in our example).
Let's look at the variables that are and are not recommended:
```
# recommended variables
transform.score_frame_.loc[transform.score_frame_['recommended'], ['variable']]
# not recommended variables
transform.score_frame_.loc[~transform.score_frame_['recommended'], ['variable']]
```
Notice that `d_prepared` only includes recommended variables (along with `y` and `yc`):
```
d_prepared.head()
```
This is `vtreat`'s default behavior; to include all variables in the prepared data, set the parameter `filter_to_recommended` to False, as we show later, in the *Parameters for `BinomialOutcomeTreatment`* section below.
## A Closer Look at `logit_code` variables
Variables of type `logit_code` are the outputs of a one-variable hierarchical logistic regression of a categorical variable (in our example, `xc`) against the centered output on the (cross-validated) treated training data.
Let's see whether `xc_logit_code` makes a good one-variable model for `yc`. It has a large AUC:
```
wvpy.util.plot_roc(prediction=d_prepared['xc_logit_code'],
istrue=d_prepared['yc'],
title = 'performance of xc_logit_code variable')
```
This indicates that `xc_logit_code` is strongly predictive of the outcome. Negative values of `xc_logit_code` correspond strongly to negative outcomes, and positive values correspond strongly to positive outcomes.
```
wvpy.util.dual_density_plot(probs=d_prepared['xc_logit_code'],
istrue=d_prepared['yc'])
```
The values of `xc_logit_code` are in "link space". We can often visualize the relationship a little better by converting the logistic score to a probability.
```
from scipy.special import expit # sigmoid
from scipy.special import logit
offset = logit(numpy.mean(d_prepared.yc))
wvpy.util.dual_density_plot(probs=expit(d_prepared['xc_logit_code'] + offset),
istrue=d_prepared['yc'])
```
Variables of type `logit_code` are useful when dealing with categorical variables with a very large number of possible levels. For example, a categorical variable with 10,000 possible values potentially converts to 10,000 indicator variables, which may be unwieldy for some modeling methods. Using a single numerical variable of type `logit_code` may be a preferable alternative.
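As a rough illustration (a sketch with made-up data, not from the original example) of why this matters:
```
# A high-cardinality categorical column would otherwise expand into one
# indicator per level; logit coding summarizes it as a single numeric column.
import numpy
import pandas
n = 10000
zips = pandas.Series(['zip_%04d' % z for z in numpy.random.randint(0, 5000, size=n)])
print(zips.nunique())  # thousands of distinct levels -> thousands of potential indicator columns
```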
## Using the Prepared Data in a Model
Of course, what we really want to do with the prepared training data is to fit a model jointly with all the (recommended) variables.
Let's try fitting a logistic regression model to `d_prepared`.
```
import sklearn.linear_model
import seaborn
not_variables = ['y', 'yc', 'prediction']
model_vars = [v for v in d_prepared.columns if v not in set(not_variables)]
fitter = sklearn.linear_model.LogisticRegression()
fitter.fit(d_prepared[model_vars], d_prepared['yc'])
# now predict
d_prepared['prediction'] = fitter.predict_proba(d_prepared[model_vars])[:, 1]
# look at the ROC curve (on the training data)
wvpy.util.plot_roc(prediction=d_prepared['prediction'],
istrue=d_prepared['yc'],
title = 'Performance of logistic regression model on training data')
```
Now apply the model to new data.
```
# create the new data
dtest = make_data(450)
# prepare the new data with vtreat
dtest_prepared = transform.transform(dtest)
# apply the model to the prepared data
dtest_prepared['prediction'] = fitter.predict_proba(dtest_prepared[model_vars])[:, 1]
wvpy.util.plot_roc(prediction=dtest_prepared['prediction'],
istrue=dtest_prepared['yc'],
title = 'Performance of logistic regression model on test data')
```
## Parameters for `BinomialOutcomeTreatment`
We've tried to set the defaults for all parameters so that `vtreat` is usable out of the box for most applications.
```
vtreat.vtreat_parameters()
```
**use_hierarchical_estimate**: When True, uses hierarchical smoothing when estimating `logit_code` variables; when False, uses unsmoothed logistic regression.
**coders**: The types of synthetic variables that `vtreat` will (potentially) produce. See *Types of prepared variables* below.
**filter_to_recommended**: When True, prepared data only includes variables marked as "recommended" in score frame. When False, prepared data includes all variables. See the Example below.
**indicator_min_fraction**: For categorical variables, indicator variables (type `indicator_code`) are only produced for levels that are present at least `indicator_min_fraction` of the time. A consequence of this is that 1/`indicator_min_fraction` is the maximum number of indicators that will be produced for a given categorical variable. To make sure that *all* possible indicator variables are produced, set `indicator_min_fraction = 0` (a short sketch of this follows the parameter list below).
**cross_validation_plan**: The cross validation method used by `vtreat`. Most people won't have to change this.
**cross_validation_k**: The number of folds to use for cross-validation
**user_transforms**: For passing in user-defined transforms for custom data preparation. Won't be needed in most situations, but see [here](https://github.com/WinVector/pyvtreat/blob/master/Examples/UserCoders/UserCoders.ipynb) for an example of applying a GAM transform to input variables.
**sparse_indicators**: When True, use a (Pandas) sparse representation for indicator variables. This representation is compatible with `sklearn`; however, it may not be compatible with other modeling packages. When False, use a dense representation.
**missingness_imputation**: The function or value that `vtreat` uses to impute or "fill in" missing numerical values. The default is `numpy.mean()`. To change the imputation function or use different functions/values for different columns, see the [Imputation example](https://github.com/WinVector/pyvtreat/blob/master/Examples/Imputation/Imputation.ipynb).
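As an illustration of `indicator_min_fraction` (a sketch not in the original example, reusing the data frame `d` from above), setting it to 0 keeps an indicator for every observed level of `xc`:
```
transform_all_levels = vtreat.BinomialOutcomeTreatment(
    outcome_name='yc',
    outcome_target=True,
    cols_to_copy=['y'],
    params=vtreat.vtreat_parameters({
        'indicator_min_fraction': 0,  # produce an indicator for every observed level
    })
)
transform_all_levels.fit_transform(d, d['yc']).columns
```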
### Example: Use all variables to model, not just recommended
```
transform_all = vtreat.BinomialOutcomeTreatment(
outcome_name='yc', # outcome variable
outcome_target=True, # outcome of interest
cols_to_copy=['y'], # columns to "carry along" but not treat as input variables
params = vtreat.vtreat_parameters({
'filter_to_recommended': False
})
)
transform_all.fit_transform(d, d['yc']).columns
transform_all.score_frame_
```
Note that the prepared data produced by `fit_transform()` includes all the variables, including those that were not marked as "recommended".
## Types of prepared variables
**clean_copy**: Produced from numerical variables: a clean numerical variable with no `NaNs` or missing values
**indicator_code**: Produced from categorical variables, one for each (common) level: for each level of the variable, indicates if that level was "on"
**prevalence_code**: Produced from categorical variables: indicates how often each level of the variable was "on"
**logit_code**: Produced from categorical variables: score from a one-dimensional model of the centered output as a function of the variable
**missing_indicator**: Produced for both numerical and categorical variables: an indicator variable that marks when the original variable was missing or `NaN`
**deviation_code**: not used by `BinomialOutcomeTreatment`
**impact_code**: not used by `BinomialOutcomeTreatment`
### Example: Produce only a subset of variable types
In this example, suppose you only want to use indicators and continuous variables in your model;
in other words, you only want to use variables of types (`clean_copy`, `missing_indicator`, and `indicator_code`), and no `logit_code` or `prevalence_code` variables.
```
transform_thin = vtreat.BinomialOutcomeTreatment(
outcome_name='yc', # outcome variable
outcome_target=True, # outcome of interest
cols_to_copy=['y'], # columns to "carry along" but not treat as input variables
params = vtreat.vtreat_parameters({
'filter_to_recommended': False,
'coders': {'clean_copy',
'missing_indicator',
'indicator_code',
}
})
)
transform_thin.fit_transform(d, d['yc']).head()
transform_thin.score_frame_
```
## Deriving the Default Thresholds
While machine learning algorithms are generally tolerant to a reasonable number of irrelevant or noise variables, too many irrelevant variables can lead to serious overfit; see [this article](http://www.win-vector.com/blog/2014/02/bad-bayes-an-example-of-why-you-need-hold-out-testing/) for an extreme example, one we call "Bad Bayes". The default threshold is an attempt to eliminate obviously irrelevant variables early.
Imagine that you have a pure noise dataset, where none of the *n* inputs are related to the output. If you treat each variable as a one-variable model for the output and look at the significances of each model, these significance values will be uniformly distributed in the range [0:1]. You want to pick the weakest possible significance threshold that still eliminates as many noise variables as possible. A moment's thought should convince you that a threshold of *1/n* allows only one variable through, in expectation.
This leads to the general-case heuristic that a significance threshold of *1/n* on your variables should allow only one irrelevant variable through, in expectation (along with all the relevant variables). Hence, *1/n* used to be our recommended threshold, when we developed the R version of `vtreat`.
We noticed, however, that this biases the filtering against numerical variables, since there are at most two derived variables (of types *clean_copy* and *missing_indicator*) for every numerical variable in the original data. Categorical variables, on the other hand, are expanded to many derived variables: several indicators (one for every common level), plus a *logit_code* and a *prevalence_code*. So we now reweight the thresholds.
Suppose you have a (treated) data set with *ntreat* different types of `vtreat` variables (`clean_copy`, `indicator_code`, etc).
If there are *nT* variables of type *T*, then the default threshold for all the variables of type *T* is *1/(ntreat × nT)*. This reweighting helps to reduce the bias against any particular type of variable. The heuristic is still that the set of recommended variables will allow at most one noise variable into the set of candidate variables.
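As a quick sketch of that arithmetic (using made-up counts of derived variables per type, purely for illustration):
```
# Hypothetical counts of derived variables by type
counts = {'clean_copy': 4, 'missing_indicator': 2, 'indicator_code': 10,
          'prevalence_code': 3, 'logit_code': 3}
ntreat = len(counts)  # number of variable types present
thresholds = {t: 1.0 / (ntreat * n) for t, n in counts.items()}
thresholds
```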
As noted above, because `vtreat` estimates variable significances using linear methods by default, some variables with a non-linear relationship to the output may fail to pass the threshold. Setting the `filter_to_recommended` parameter to False will keep all derived variables in the treated frame, for the data scientist to filter (or not) as they will.
## Conclusion
In all cases (classification, regression, unsupervised, and multinomial classification) the intent is that `vtreat` transforms are essentially one liners.
The preparation commands are organized as follows:
* **Regression**: [`Python` regression example](https://github.com/WinVector/pyvtreat/blob/master/Examples/Regression/Regression.md), [`R` regression example, fit/prepare interface](https://github.com/WinVector/vtreat/blob/master/Examples/Regression/Regression_FP.md), [`R` regression example, design/prepare/experiment interface](https://github.com/WinVector/vtreat/blob/master/Examples/Regression/Regression.md).
* **Classification**: [`Python` classification example](https://github.com/WinVector/pyvtreat/blob/master/Examples/Classification/Classification.md), [`R` classification example, fit/prepare interface](https://github.com/WinVector/vtreat/blob/master/Examples/Classification/Classification_FP.md), [`R` classification example, design/prepare/experiment interface](https://github.com/WinVector/vtreat/blob/master/Examples/Classification/Classification.md).
* **Unsupervised tasks**: [`Python` unsupervised example](https://github.com/WinVector/pyvtreat/blob/master/Examples/Unsupervised/Unsupervised.md), [`R` unsupervised example, fit/prepare interface](https://github.com/WinVector/vtreat/blob/master/Examples/Unsupervised/Unsupervised_FP.md), [`R` unsupervised example, design/prepare/experiment interface](https://github.com/WinVector/vtreat/blob/master/Examples/Unsupervised/Unsupervised.md).
* **Multinomial classification**: [`Python` multinomial classification example](https://github.com/WinVector/pyvtreat/blob/master/Examples/Multinomial/MultinomialExample.md), [`R` multinomial classification example, fit/prepare interface](https://github.com/WinVector/vtreat/blob/master/Examples/Multinomial/MultinomialExample_FP.md), [`R` multinomial classification example, design/prepare/experiment interface](https://github.com/WinVector/vtreat/blob/master/Examples/Multinomial/MultinomialExample.md).
Some `vtreat` common capabilities are documented here:
* **Score Frame** [score_frame_](https://github.com/WinVector/pyvtreat/blob/master/Examples/ScoreFrame/ScoreFrame.md), using the `score_frame_` information.
* **Cross Validation** [Customized Cross Plans](https://github.com/WinVector/pyvtreat/blob/master/Examples/CustomizedCrossPlan/CustomizedCrossPlan.md), controlling the cross validation plan.
These current revisions of the examples are designed to be small, yet complete. So as a set they have some overlap, but the user can rely mostly on a single example for a single task type.
# Scalars
```
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
```
## Integers
### Binary representation of integers
```
format(16, '032b')
```
### Bit shifting
```
format(16 >> 2, '032b')
16 >> 2
format(16 << 2, '032b')
16 << 2
```
### Overflow
In general, the computer representation of integers has a limited range, and may overflow. The range depends on whether the integer is signed or unsigned.
For example, with 8 bits, we can represent at most $2^8 = 256$ integers.
- 0 to 255 unsigned
- -128 to 127 signed
Signed integers
```
np.arange(130, dtype=np.int8)[-5:]
```
Unsigned integers
```
np.arange(130, dtype=np.uint8)[-5:]
np.arange(260, dtype=np.uint8)[-5:]
```
### Integer division
In Python 2 or other languages such as C/C++, be very careful when dividing as the division operator `/` performs integer division when both numerator and denominator are integers. This is rarely what you want. In Python 3 the `/` always performs floating point division, and you use `//` for integer division, removing a common source of bugs in numerical calculations.
```
%%python2
import numpy as np
x = np.arange(10)
print(x/10)
```
Python 3 does the "right" thing.
```
x = np.arange(10)
x/10
```
## Real numbers
Real numbers are represented as **floating point** numbers. A floating point number is stored in 3 pieces (sign bit, exponent, mantissa), so that every float is represented as ± mantissa × 2^exponent. Because of this, the interval between consecutive numbers is smallest (highest precision) for numbers close to 0 and largest for numbers close to the lower and upper bounds.
Because exponents have to be signed to represent both small and large numbers, but it is more convenient to store an unsigned number, the exponent is stored with an offset (also known as the exponent bias). For example, if the exponent is an unsigned 8-bit number, it can represent the range (0, 255). By using an offset of 127, it can instead represent the range (-127, 128).

**Note**: Intervals between consecutive floating point numbers are not constant. In particular, the precision for small numbers is much larger than for large numbers. In fact, approximately half of all floating point numbers lie between -1 and 1 when using the `double` type in C/C++ (also the default for `numpy`).

Because of this, if you are adding many numbers, it is more accurate to first add the small numbers before the large numbers.
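A minimal sketch of why the order matters: once the partial sum is huge, the small terms fall below its spacing and are silently dropped.
```
# Adding 1.0 to 1e16 is a no-op (the gap between floats near 1e16 is 2.0),
# so summing the large number first discards all the small terms.
big = 1e16
small = [1.0] * 1000
print(sum([big] + small) - big)   # small terms lost: 0.0
print(sum(small + [big]) - big)   # small terms preserved: 1000.0
```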
#### IEEE 754 32-bit floating point representation

See [Wikipedia](https://en.wikipedia.org/wiki/Single-precision_floating-point_format) for how this binary number is evaluated to 0.15625.
```
from ctypes import c_int, c_float
s = c_int.from_buffer(c_float(0.15625)).value
s = format(s, '032b')
s
rep = {
'sign': s[:1],
'exponent' : s[1:9:],
'fraction' : s[9:]
}
rep
```
### Most base 10 real numbers are approximations
This is simply because numbers are stored in finite-precision binary format.
```
'%.20f' % (0.1 * 0.1 * 100)
```
### Never check for equality of floating point numbers
```
i = 0
loops = 0
while i != 1:
i += 0.1 * 0.1
loops += 1
if loops == 1000000:
break
i
i = 0
loops = 0
while np.abs(1 - i) > 1e-6:
i += 0.1 * 0.1
loops += 1
if loops == 1000000:
break
i
```
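The idiomatic way to express this kind of tolerance check is `math.isclose` or `numpy.isclose` rather than a hand-rolled threshold:
```
import math
# Both return True even though exact equality with == would fail
math.isclose(0.1 * 0.1 * 100, 1.0)
np.isclose(0.1 + 0.2, 0.3)
```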
### Associative law does not necessarily hold
```
6.022e23 - 6.022e23 + 1
1 + 6.022e23 - 6.022e23
```
### Distributive law does not hold
```
a = np.exp(1)
b = np.pi
c = np.sin(1)
a*(b+c)
a*b + a*c
```
### Catastrophic cancellation
Consider calculating sample variance
$$
s^2= \frac{1}{n(n-1)}\left(n\sum_{i=1}^n x_i^2 - \left(\sum_{i=1}^n x_i\right)^2\right)
$$
Be careful whenever you calculate the difference of potentially big numbers.
```
def var(x):
"""Returns variance of sample data using sum of squares formula."""
n = len(x)
return (1.0/(n*(n-1))*(n*np.sum(x**2) - (np.sum(x))**2))
```
### Underflow
```
np.warnings.filterwarnings('ignore')
np.random.seed(4)
xs = np.random.random(1000)
ys = np.random.random(1000)
np.prod(xs)/np.prod(ys)
```
#### Prevent underflow by staying in log space
```
x = np.sum(np.log(xs))
y = np.sum(np.log(ys))
np.exp(x - y)
```
### Overflow
```
np.exp(1000)
```
### Numerically stable algorithms
#### What is the sample variance for numbers from a normal distribution with variance 1?
```
np.random.seed(15)
x_ = np.random.normal(0, 1, int(1e6))
x = 1e12 + x_
var(x)
```
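The naive sum-of-squares formula fails badly on this shifted data. One numerically stable alternative (just a sketch, not part of the original example) is Welford's online algorithm, which avoids the large intermediate sums entirely:
```
def welford_var(x):
    """Numerically stable sample variance via Welford's online algorithm."""
    mean, m2, n = 0.0, 0.0, 0
    for xi in x:
        n += 1
        delta = xi - mean
        mean += delta / n            # running mean
        m2 += delta * (xi - mean)    # running sum of squared deviations
    return m2 / (n - 1)

welford_var(x)  # close to 1, unlike the sum-of-squares formula above
```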
#### Use functions from numerical libraries where available
```
np.var(x)
```
There is also a variance function in the standard library, but it is slower for large arrays.
```
import statistics
statistics.variance(x)
```
Note that `numpy` does not use the unbiased sample variance estimator by default. If you want the unbiased variance, set `ddof` to 1.
```
np.var([1,2,3,4], ddof=1)
statistics.variance([1,2,3,4])
```
### Useful numerically stable functions
Let's calculate
$$
\log(e^{1000} + e^{1000})
$$
Using basic algebra, we get the solution $\log(2) + 1000$.
\begin{align}
\log(e^{1000} + e^{1000}) &= \log(e^{0}e^{1000} + e^{0}e^{1000}) \\
&= \log(e^{1000}(e^{0} + e^{0})) \\
&= \log(e^{1000}) + \log(e^{0} + e^{0}) \\
&= 1000 + \log(2)
\end{align}
**logaddexp**
```
x = np.array([1000, 1000])
np.log(np.sum(np.exp(x)))
np.logaddexp(*x)
```
**logsumexp**
This function generalizes `logaddexp` to an arbitrary number of addends and is useful in a variety of statistical contexts.
Suppose we need to calculate a probability distribution $\pi$ parameterized by a vector $x$
$$
\pi_i = \frac{e^{x_i}}{\sum_{j=1}^n e^{x_j}}
$$
Taking logs, we get
$$
\log(\pi_i) = x_i - \log{\sum_{j=1}^n e^{x_j}}
$$
```
x = 1e6*np.random.random(100)
np.log(np.sum(np.exp(x)))
from scipy.special import logsumexp
logsumexp(x)
```
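To get the stable log-probabilities themselves, subtract `logsumexp` instead of exponentiating directly:
```
log_pi = x - logsumexp(x)   # never overflows, since logsumexp(x) >= max(x)
np.exp(log_pi).sum()        # the resulting probabilities sum to 1
```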
**log1p and expm1**
```
np.exp(np.log(1 + 1e-6)) - 1
np.expm1(np.log1p(1e-6))
```
**sinc**
Note that `np.sinc` computes the *normalized* sinc function $\sin(\pi x)/(\pi x)$, so its values differ from the unnormalized $\sin(x)/x$ computed for comparison below.
```
x = 1
np.sin(x)/x
np.sinc(x)
x = np.linspace(0.01, 2*np.pi, 100)
plt.plot(x, np.sinc(x), label='Library function')
plt.plot(x, np.sin(x)/x, label='DIY function')
plt.legend()
pass
```
# PoissonRegressor with StandardScaler & Power Transformer
This code template performs a regression analysis using a Poisson Regressor, with StandardScaler as the feature rescaling technique and PowerTransformer as the feature transformer, combined in a pipeline. The Poisson Regressor is a generalized linear model with a Poisson distribution.
### Required Packages
```
import warnings
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as se
from sklearn.linear_model import PoissonRegressor
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler, PowerTransformer
from sklearn.metrics import r2_score, mean_absolute_error, mean_squared_error
warnings.filterwarnings('ignore')
```
### Initialization
Filepath of CSV file
```
#filepath
file_path= ""
```
List of features required for model training.
```
#x_values
features=[]
```
Target feature for prediction.
```
#y_value
target=''
```
### Data Fetching
Pandas is an open-source, BSD-licensed library providing high-performance, easy-to-use data manipulation and data analysis tools.
We will use the pandas library to read the CSV file from its storage path, and the `head` function to display the first few rows.
```
df=pd.read_csv(file_path)
df.head()
```
### Feature Selections
Feature selection is the process of reducing the number of input variables when developing a predictive model. It is used both to reduce the computational cost of modelling and, in some cases, to improve the performance of the model.
We will assign all the required input features to X and target/outcome to Y.
```
X=df[features]
Y=df[target]
```
### Data Preprocessing
Since most of the machine learning models in the scikit-learn library don't handle string categories or null values, we have to explicitly remove or replace them. The snippet below defines functions that fill in null values where they exist and convert string-valued columns into dummy (indicator) variables.
```
def NullClearner(df):
if(isinstance(df, pd.Series) and (df.dtype in ["float64","int64"])):
df.fillna(df.mean(),inplace=True)
return df
elif(isinstance(df, pd.Series)):
df.fillna(df.mode()[0],inplace=True)
return df
else:return df
def EncodeX(df):
return pd.get_dummies(df)
```
Calling preprocessing functions on the feature and target set.
```
x=X.columns.to_list()
for i in x:
X[i]=NullClearner(X[i])
X=EncodeX(X)
Y=NullClearner(Y)
X.head()
```
#### Correlation Map
In order to check the correlation between the features, we will plot a correlation matrix. It is effective in summarizing a large amount of data where the goal is to see patterns.
```
f,ax = plt.subplots(figsize=(18, 18))
matrix = np.triu(X.corr())
se.heatmap(X.corr(), annot=True, linewidths=.5, fmt= '.1f',ax=ax, mask=matrix)
plt.show()
```
### Data Splitting
The train-test split is a procedure for evaluating the performance of an algorithm. The procedure involves taking a dataset and dividing it into two subsets. The first subset is utilized to fit/train the model. The second subset is used for prediction. The main motive is to estimate the performance of the model on new data.
```
x_train,x_test,y_train,y_test=train_test_split(X,Y,test_size=0.2,random_state=123)
```
### Model
Poisson regression is a generalized linear model form of regression used to model count data and contingency tables. It assumes the response variable or target variable Y has a Poisson distribution, and assumes the logarithm of its expected value can be modeled by a linear combination of unknown parameters. It is sometimes known as a log-linear model, especially when used to model contingency tables.
#### Model Tuning Parameters
> **alpha** -> Constant that multiplies the penalty term and thus determines the regularization strength. alpha = 0 is equivalent to unpenalized GLMs.
> **tol** -> Stopping criterion.
> **max_iter** -> The maximal number of iterations for the solver.
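For illustration, these parameters can be passed when constructing the estimator; the values below are hypothetical and not tuned for any particular dataset. The resulting estimator could be dropped into the pipeline built below.
```
# Hypothetical parameter values, for illustration only
PoissonRegressor(alpha=0.5, tol=1e-4, max_iter=300)
```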
#### Feature Transformation
Power Transformers are a family of parametric, monotonic transformations that are applied to make data more Gaussian-like. This is useful for modeling issues related to heteroscedasticity (non-constant variance), or other situations where normality is desired.
Currently, `PowerTransformer` supports the Box-Cox transform and the Yeo-Johnson transform. The optimal parameter for stabilizing variance and minimizing skewness is estimated through maximum likelihood.
Refer [API](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.PowerTransformer.html) for the parameters
```
model=make_pipeline(StandardScaler(),PowerTransformer(),PoissonRegressor())
model.fit(x_train,y_train)
```
#### Model Accuracy
We will use the trained model to make predictions on the test set, then use the predicted values to measure the accuracy of our model.
> **score**: The **score** function returns the coefficient of determination R² of the prediction.
```
print("Accuracy score {:.2f} %\n".format(model.score(x_test,y_test)*100))
```
> **r2_score**: The **r2_score** function computes the proportion of the variability in the target that is explained by our model.
> **mae**: The **mean absolute error** function calculates the average absolute distance between the real data and the predicted data.
> **mse**: The **mean squared error** function calculates the average squared error, which penalizes the model more heavily for large errors.
```
y_pred=model.predict(x_test)
print("R2 Score: {:.2f} %".format(r2_score(y_test,y_pred)*100))
print("Mean Absolute Error {:.2f}".format(mean_absolute_error(y_test,y_pred)))
print("Mean Squared Error {:.2f}".format(mean_squared_error(y_test,y_pred)))
```
#### Prediction Plot
First, we plot the actual observations: the first 20 test-set target values against their record number.
Then we overlay the model's predictions for the same 20 records, so the two curves can be compared directly.
```
plt.figure(figsize=(14,10))
plt.plot(range(20),y_test[0:20], color = "green")
plt.plot(range(20),model.predict(x_test[0:20]), color = "red")
plt.legend(["Actual","prediction"])
plt.title("Predicted vs True Value")
plt.xlabel("Record number")
plt.ylabel(target)
plt.show()
```
#### Creator: Viraj Jayant , Github: [Profile](https://github.com/Viraj-Jayant)
# Plots of the total distance covered by the particles as a function of their initial position
*Author: Miriam Sterl*
We plot the total distances covered by the particles during the simulation, as a function of their initial position. We do this for the FES, the GC and the GC+FES run.
```
from netCDF4 import Dataset
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.colors as colors
import cartopy.crs as ccrs
import cartopy.feature as cfeature
import cartopy.mpl.ticker as cticker
File1 = '/science/projects/oceanparcels/output_data/data_Miriam/Results_TrackingFES.nc'
dataset1 = Dataset(File1)
lat1 = dataset1.variables['lat'][:]
lon1 = dataset1.variables['lon'][:]
time1 = dataset1.variables['time'][:]
dist1 = dataset1.variables['distance'][:]
lon1[lon1>180]-=360
lon1[lon1<-180]+=360
File2 = '/science/projects/oceanparcels/output_data/data_Miriam/Results_TrackingGC.nc'
dataset2 = Dataset(File2)
lat2 = dataset2.variables['lat'][:]
lon2 = dataset2.variables['lon'][:]
time2 = dataset2.variables['time'][:]
dist2 = dataset2.variables['distance'][:]
lon2[lon2>180]-=360
lon2[lon2<-180]+=360
File3 = '/science/projects/oceanparcels/output_data/data_Miriam/Results_TrackingGCFES.nc'
dataset3 = Dataset(File3)
lat3 = dataset3.variables['lat'][:]
lon3 = dataset3.variables['lon'][:]
time3 = dataset3.variables['time'][:]
dist3 = dataset3.variables['distance'][:]
lon3[lon3>180]-=360
lon3[lon3<-180]+=360
# Initial longitudes and latitudes (on 2002-01-01)
startLons = lon1[:,0]
startLats = lat1[:,0]
# Distance travelled by the particles between 2002-01-01 and 2015-01-01
finalDist = [dist1[:,-1], dist2[:,-1], dist3[:,-1]]
titles = ['(a) FES run', '(b) GC run', '(c) GC+FES run']
def DistancePlot(lons, lats, dist, fig, ax, vmin, vmax, titlenr, titlesize, labelnr, labelsize, colormap):
"""
Function that plots the total distance covered by particles during a certain period as a function of their initial position
"""
minLat = np.min(np.round(lats)) # the minimal (rounded) latitude
maxLat = np.max(np.round(lats)) # the maximal (rounded) latitude
minLon = np.min(np.round(lons)) # the minimal (rounded) longitude
maxLon = np.max(np.round(lons)) # the maximal (rounded) longitude
allLats = np.arange(minLat, maxLat+1) # the latitudinal grid
allLons = np.arange(minLon, maxLon+1) # the longitudinal grid
distances = np.zeros((len(allLons), len(allLats)))
for i in range(len(dist)):
distances[int(np.round(lons[i]-minLon)), int(np.round(lats[i]-minLat))] = dist[i]
# shift by minLon, minLat to get positive indices
maskedDist = np.ma.masked_where(distances==0.0, distances) # mask land points
Lat, Lon = np.meshgrid(allLats, allLons)
distplot = ax.pcolormesh(Lon, Lat, maskedDist/1e4, cmap = colormap, vmin=vmin, vmax=vmax)
ax.set_title(titles[titlenr], fontsize=titlesize,fontweight='bold')
ax.coastlines()
ax.add_feature(cfeature.LAND, zorder=0, edgecolor='black', facecolor=(0.6,0.6,0.6))
ax.set_xticks([-180, -150, -120, -90, -60, -30, 0, 30, 60, 90, 120, 150, 180], crs=ccrs.PlateCarree())
ax.set_xticklabels([-180, -150, -120, -90, -60, -30, 0, 30, 60, 90, 120, 150, 180], fontsize=labelsize)
ax.set_yticks([-90, -60, - 30, 0, 30, 60, 90], crs=ccrs.PlateCarree())
ax.set_yticklabels([-90, -60, - 30, 0, 30, 60, 90], fontsize=labelsize)
lon_formatter = cticker.LongitudeFormatter()
lat_formatter = cticker.LatitudeFormatter()
ax.xaxis.set_major_formatter(lon_formatter)
ax.yaxis.set_major_formatter(lat_formatter)
ax.grid(linewidth=2, color='black', alpha=0.25, linestyle=':')
return distplot
# Compare the three different runs after 13 years
fig, axes = plt.subplots(nrows=3, ncols=1, figsize=(28,16), subplot_kw={'projection': ccrs.PlateCarree()})
i=0
for ax in axes.flat:
distance = DistancePlot(startLons, startLats, finalDist[i], fig, ax,
vmin=1, vmax=10, titlenr = i, titlesize=18, labelnr = 0, labelsize=15, colormap='YlOrRd')
i = i+1
cbar = fig.colorbar(distance, ax=axes.ravel().tolist(), shrink=0.53, extend='both', anchor=(2.2,0.5))
cbar.set_label("Distance ($10^{4}$ km)", rotation=90, fontsize=15)
cbar.ax.tick_params(labelsize=12)
fig.suptitle('Total distance covered', x=0.835, y=1.02, fontsize=21, fontweight='bold')
plt.tight_layout()
#plt.savefig('DistanceComparison', bbox_inches='tight')
```
# 16 - Regression Discontinuity Design
We don't stop to think about it much, but it is impressive how smooth nature is. You can't grow a tree without first getting a bud, you can't teleport from one place to another, a wound takes its time to heal. Even in the social realm, smoothness seems to be the norm. You can't grow a business in one day, consistency and hard work are required to build wealth and it takes years before you learn how linear regression works. Under normal circumstances, nature is very cohesive and doesn't jump around much.
> When the intelligent and animal souls are held together in one embrace, they can be kept from separating.
\- Tao Te Ching, Lao Tzu.
Which means that **when we do see jumps and spikes, they are probably artificial** and often man-made situations. These events are usually accompanied by counterfactuals to the normal way of things: if a weird thing happens, this gives us some insight into what would have happened if nature was to work in a different way. Exploring these artificial jumps is at the core of Regression Discontinuity Design.

The basic setup goes like this. Imagine that you have a treatment variable $T$ and potential outcomes $Y_0$ and $Y_1$. The treatment T is a discontinuous function of an observed running variable $R$ such that
$
T_i = \mathcal{1}\{R_i>c\}
$
In other words, this is saying that treatment is zero when $R$ is below a threshold $c$ and one otherwise. This means that we get to observe $Y_1$ when $R>c$ and $Y_0$ when $R<c$. To wrap our head around this, think about the potential outcomes as 2 functions that we can't observe entirely. Both $Y_0(R)$ and $Y_1(R)$ are there, we just can't see them. The threshold acts as a switch that allows us to see one or the other of those functions, but never both, much like in the image below:

The idea of regression discontinuity is to compare the outcome just above and just below the threshold to identify the treatment effect at the threshold. This is called a **sharp RD** design, since the probability of getting the treatment jumps from 0 to 1 at the threshold, but we could also think about a **fuzzy RD** design, where the probability also jumps, but in a less dramatic manner.
## Is Alcohol Killing You?
A very relevant public policy question is what the minimum legal drinking age should be. Most countries, Brazil included, set it at 18 years, but in the US (most states) it is currently 21. So, is it the case that the US is being overly prudent and that they should lower their minimum drinking age? Or is it the case that other countries should make their legal drinking age higher?
One way to look at this question is from a [mortality rate perspective (Carpenter and Dobkin, 2009)](https://www.aeaweb.org/articles?id=10.1257/app.1.1.164). From the public policy standpoint, one could argue that we should lower the mortality rate as much as possible. If alcohol consumption increases the mortality rate by a lot, we should avoid lowering the minimum drinking age. This would be consistent with the objective of lowering deaths caused by alcohol consumption.
To estimate the impacts of alcohol on death, we could use the fact that legal drinking age imposes a discontinuity on nature. In the US, those just under 21 years don't drink (or drink much less) while those just older than 21 do drink. This means that the probability of drinking jumps at 21 years and that is something we can explore with an RDD.
```
import warnings
warnings.filterwarnings('ignore')
import pandas as pd
import numpy as np
from matplotlib import style
from matplotlib import pyplot as plt
import seaborn as sns
import statsmodels.formula.api as smf
%matplotlib inline
style.use("fivethirtyeight")
```
To do so we can grab some mortality data aggregated by age. Each row is the average age of a group of people and the average mortality by all causes (`all`), by moving vehicle accident (`mva`) and by suicide (`suicide`).
```
drinking = pd.read_csv("./data/drinking.csv")
drinking.head()[["agecell", "all", "mva", "suicide"]]
```
Just to aid visibility (and for another important reason we will see later) we will centralize the running variable `agecell` at the threshold 21.
```
drinking["agecell"] -= 21
```
If we plot the multiple outcome variables (`all`, `mva`, `suicide`) with the running variable on the x axis, we get some visual cue about some sort of jump in mortality as we cross the legal drinking age.
```
plt.figure(figsize=(8,8))
ax = plt.subplot(3,1,1)
drinking.plot.scatter(x="agecell", y="all", ax=ax)
plt.title("Death Cause by Age (Centered at 0)")
ax = plt.subplot(3,1,2, sharex=ax)
drinking.plot.scatter(x="agecell", y="mva", ax=ax)
ax = plt.subplot(3,1,3, sharex=ax)
drinking.plot.scatter(x="agecell", y="suicide", ax=ax);
```
There are some cues, but we need more than that. What exactly is the effect of drinking on mortality at the threshold? And what is the standard error on that estimate?
## RDD Estimation
The key assumption that RDD relies on is the smoothness of the potential outcome at the threshold. Formally, the limits of the potential outcomes as the running variable approaches the threshold from the right and from the left should be the same.
$$
\lim_{r \to c^-} E[Y_{ti}|R_i=r] = \lim_{r \to c^+} E[Y_{ti}|R_i=r]
$$
If this holds true, we can find the causal effect at the threshold
$$
\begin{align}
\lim_{r \to c^+} E[Y_{ti}|R_i=r] - \lim_{r \to c^-} E[Y_{ti}|R_i=r]=&\lim_{r \to c^+} E[Y_{1i}|R_i=r] - \lim_{r \to c^-} E[Y_{0i}|R_i=r] \\
=& E[Y_{1i}|R_i=r] - E[Y_{0i}|R_i=r] \\
=& E[Y_{1i} - Y_{0i}|R_i=r]
\end{align}
$$
This is, in its own way, a sort of Local Average Treatment Effect (LATE), since we can only know it at the threshold. In this setting, we can think of RDD as a local randomized trial. For those at the threshold, the treatment could have gone either way and, by chance, some people fell below the threshold, and some people fell above. In our example, at the same point in time, some people are just above 21 years and some people are just below 21. What determines this is if someone was born some days later or not, which is pretty random. For this reason, RDD provides a very compelling causal story. It is not the golden standard of RCT, but it is close.
Now, to estimate the treatment effect at the threshold, all we need to do is estimate both of the limits in the formula above and compare them. The simplest way to do that is by running a linear regression

To make it work, we interact a dummy for being above the threshold with the running variable
$
y_i = \beta_0 + \beta_1 r_i + \beta_2 \mathcal{1}\{r_i>c\} + \beta_3 \mathcal{1}\{r_i>c\} r_i
$
Essentially, this is the same as fitting a linear regression above the threshold and another below it. The parameter $\beta_0$ is the intercept of the regression below the threshold and $\beta_0+\beta_2$ is the intercept for the regression above the threshold.
Here is where the trick of centering the running variable at the threshold comes into play. After this pre-processing step, the threshold becomes zero. This causes the intercept $\beta_0$ to be the predicted value at the threshold, for the regression below it. In other words, $\beta_0=\lim_{r \to c^-} E[Y_{ti}|R_i=r]$. By the same reasoning, $\beta_0+\beta_2$ is the limit of the outcome from above. Which means that
$
\lim_{r \to c^+} E[Y_{ti}|R_i=r] - \lim_{r \to c^-} E[Y_{ti}|R_i=r]=\beta_2=E[ATE|R=c]
$
Here is what this looks like in code for the case where we want to estimate the effect of alcohol consumption on death by all causes at 21 years.
```
rdd_df = drinking.assign(threshold=(drinking["agecell"] > 0).astype(int))
model = smf.wls("all~agecell*threshold", rdd_df).fit()
model.summary().tables[1]
```
This model is telling us that mortality increases by 7.6627 points with the consumption of alcohol. Another way of putting this is that alcohol increases the chance of death by all causes by about 8% ((93.6184+7.6627)/93.6184 ≈ 1.08). Notice that this also gives us standard errors for our causal effect estimate. In this case, the effect is statistically significant, since the p-value is below 0.01.
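A quick sketch of that arithmetic, using the fitted parameters directly:
```
# Relative increase in mortality at the threshold, in percent
100 * model.params["threshold"] / model.params["Intercept"]
```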
If we want to verify this model visually, we can show the predicted values on the data that we have. You can see that it is as though we had 2 regression models: one for those above the threshold and one for below it.
```
ax = drinking.plot.scatter(x="agecell", y="all", color="C0")
drinking.assign(predictions=model.fittedvalues).plot(x="agecell", y="predictions", ax=ax, color="C1")
plt.title("Regression Discontinuity");
```
If we do the same for the other causes, this is what we get.
```
plt.figure(figsize=(8,8))
for p, cause in enumerate(["all", "mva", "suicide"], 1):
ax = plt.subplot(3,1,p)
drinking.plot.scatter(x="agecell", y=cause, ax=ax)
m = smf.wls(f"{cause}~agecell*threshold", rdd_df).fit()
ate_pct = 100*((m.params["threshold"] + m.params["Intercept"])/m.params["Intercept"] - 1)
drinking.assign(predictions=m.fittedvalues).plot(x="agecell", y="predictions", ax=ax, color="C1")
plt.title(f"Impact of Alcohol on Death: {np.round(ate_pct, 2)}%")
plt.tight_layout()
```
RDD is telling us that alcohol increases the chance of death by suicide and car accidents by 15%, which is a pretty significant amount. These results are compelling arguments to not lower the drinking age, if we want to minimize mortality rates.
### Kernel Weighting
Regression Discontinuity relies heavily on the extrapolation properties of linear regression. Since we are looking at the values at the beginning and end of 2 regression lines, we better get those limits right. What can happen is that regression might focus too much on fitting the other data points at the cost of a poor fit at the threshold. If this happens, we might get the wrong measure of the treatment effect.
One way to solve this is to give higher weights for the points that are closer to the threshold. There are many ways to do this, but a popular one is to reweight the samples with the **triangular kernel**
$
K(R, c, h) = \mathcal{1}\{|R-c| \leq h\} * \bigg(1-\frac{|R-c|}{h}\bigg)
$
The first part of this kernel is an indicator function for whether we are close to the threshold. How close? This is determined by a bandwidth parameter $h$. The second part of this kernel is a weighting function. As we move away from the threshold, the weights get smaller and smaller. These weights are divided by the bandwidth. If the bandwidth is large, the weights get smaller at a slower rate. If the bandwidth is small, the weights quickly go to zero.
To make it easier to understand, here is what the weights look like for this kernel applied to our problem. I've set the bandwidth to be 1 here, meaning we will only consider data from people that are no older than 22 years and no younger than 20 years.
```
def kernel(R, c, h):
indicator = (np.abs(R-c) <= h).astype(float)
return indicator * (1 - np.abs(R-c)/h)
plt.plot(drinking["agecell"], kernel(drinking["agecell"], c=0, h=1))
plt.xlabel("agecell")
plt.ylabel("Weight")
plt.title("Kernel Weight by Age");
```
If we apply these weights to our original problem, the impact of alcohol gets bigger, at least for all causes. It jumps from 7.6627 to 9.7004. The result remains very significant. Also, notice that I'm using `wls` instead of `ols`.
```
model = smf.wls("all~agecell*threshold", rdd_df,
weights=kernel(drinking["agecell"], c=0, h=1)).fit()
model.summary().tables[1]
ax = drinking.plot.scatter(x="agecell", y="all", color="C0")
drinking.assign(predictions=model.fittedvalues).plot(x="agecell", y="predictions", ax=ax, color="C1")
plt.title("Regression Discontinuity (Local Regression)");
```
And here is what it looks like for the other causes of death. Notice how the regression on the right is more negatively sloped, since it disregards the rightmost points.
```
plt.figure(figsize=(8,8))
weights = kernel(drinking["agecell"], c=0, h=1)
for p, cause in enumerate(["all", "mva", "suicide"], 1):
ax = plt.subplot(3,1,p)
drinking.plot.scatter(x="agecell", y=cause, ax=ax)
m = smf.wls(f"{cause}~agecell*threshold", rdd_df, weights=weights).fit()
ate_pct = 100*((m.params["threshold"] + m.params["Intercept"])/m.params["Intercept"] - 1)
drinking.assign(predictions=m.fittedvalues).plot(x="agecell", y="predictions", ax=ax, color="C1")
plt.title(f"Impact of Alcohol on Death: {np.round(ate_pct, 2)}%")
plt.tight_layout()
```
With the exception of suicide, it looks like adding the kernel weight made the negative impact of alcohol bigger. Once again, if we want to minimize the death rate, we should NOT recommend lowering the legal drinking age, since there is a clear impact of alcohol on the death rates.
This simple case covers what happens when regression discontinuity design works perfectly. Next, we will see some diagnostics that we should run in order to check how much we can trust RDD and talk about a topic that is very dear to our heart: the effect of education on earnings.
## Sheepskin Effect and Fuzzy RDD
When it comes to the effect of education on earnings, there are two major views in economics. The first one is the widely known argument that education increases human capital, increasing productivity and thus, earnings. In this view, education actually changes you for the better. Another view is that education is simply a signaling mechanism. It just puts you through all these hard tests and academic tasks. If you can make it, it signals to the market that you are a good employee. In this way, education doesn't make you more productive. It only tells the market how productive you have always been. What matters here is the diploma. If you have it, you will be paid more. We refer to this as the **sheepskin effect**, since diplomas were printed in sheepskin in the past.
To test this hypothesis, [Clark and Martorell](https://faculty.smu.edu/millimet/classes/eco7321/papers/clark%20martorell%202014.pdf) used regression discontinuity to measure the effect of graduating 12th grade on earnings. In order to do that, they had to think about some running variable where students that fall above it graduate and those who fall below it, don't. They found such data in the Texas education system.
In order to graduate in Texas, one has to pass an exam. Testing starts at 10th grade and students can do it multiple times, but eventually, they face a last chance exam at the end of 12th grade. The idea was to get data from students who took those last chance exams and compare those that had barely failed it to those that barely passed it. These students will have very similar human capital, but different signaling credentials. Namely, those that barely passed it, will receive a diploma.
```
sheepskin = pd.read_csv("./data/sheepskin.csv")[["avgearnings", "minscore", "receivehsd", "n"]]
sheepskin.head()
```
Once again, this data is grouped by the running variable. It contains not only the running variable (minscore, already centered at zero) and the outcome (avgearnings), but also the probability of receiving a diploma in that score cell and the size of the cell (n). So, for example, out of the 12 students in the cell at -30 (30 points below the score threshold), only 5 were able to get the diploma (12 × 0.416 ≈ 5).
This means that there is some slippage in the treatment assignment. Some students that are below the passing threshold managed to get the diploma anyway. Here, the regression discontinuity is **fuzzy**, rather than sharp. Notice how the probability of getting the diploma doesn't jump from zero to one at the threshold. But it does jump from something like 50% to 90%.
```
sheepskin.plot.scatter(x="minscore", y="receivehsd", figsize=(10,5))
plt.xlabel("Test Scores Relative to Cut off")
plt.ylabel("Fraction Receiving Diplomas")
plt.title("Last-chance Exams");
```
We can think of fuzzy RD as a sort of non compliance. Passing the threshold should make everyone receive the diploma, but some students, the never takers, don’t get it. Likewise, being below the threshold should prevent you from getting a diploma, but some students, the always takers, manage to get it anyway.
Just like when we have the potential outcome, we have the potential treatment status in this situation. $T_1$ is the treatment everyone would have received had they been above the threshold. $T_0$ is the treatment everyone would have received had they been below the threshold. As you might have noticed, we can think of the **threshold as an Instrumental Variable**. Just as in IV, if we naively estimate the treatment effect, it will be biased towards zero.

The probability of treatment being less than one, even above the threshold, makes the outcome we observe less than the true potential outcome $Y_1$. By the same token, the outcome we observe below the threshold is higher than the true potential outcome $Y_0$. This makes it look like the treatment effect at the threshold is smaller than it actually is and we will have to use IV techniques to correct for that.
Just like when we've assumed smoothness on the potential outcome, we now assume it for the potential treatment. Also, we need to assume monotonicity, just like in IV. In case you don't remember, it states that $T_{i1}>T_{i0} \ \forall i$. This means that crossing the threshold from the left to the right only increases your chance of getting a diploma (or that there are no defiers). With these 2 assumptions, we have a Wald Estimator for LATE.
$$
\dfrac{\lim_{r \to c^+} E[Y_i|R_i=r] - \lim_{r \to c^-} E[Y_i|R_i=r]}{\lim_{r \to c^+} E[T_i|R_i=r] - \lim_{r \to c^-} E[T_i|R_i=r]} = E[Y_{1i} - Y_{0i} | T_{1i} > T_{0i}, R_i=c]
$$
Notice how this is a local estimate in two senses. First, it is local because it only gives the treatment effect at the threshold $c$. This is the RD locality. Second, it is local because it only estimates the treatment effect for the compliers. This is the IV locality.
To estimate this, we will use 2 linear regressions. The numerator can be estimated just like we've done before. To get the denominator, we simply replace the outcome with the treatment. But first, let's talk about a sanity check we need to run to make sure we can trust our RDD estimates.
### The McCrary Test
One thing that could break our RDD argument is if people can manipulate where they stand at the threshold. In the sheepskin example this could happen if students just below the threshold found a way around the system to increase their test score by just a bit. Another example is when you need to be below a certain income level to get a government benefit. Some families might lower their income on purpose, just to be just eligible for the program.
In these sorts of situations, we tend to see a phenomenon called bunching on the density of the running variable. This means that we will have a lot of entities just above or just below the threshold. To check for that, we can plot the density function of the running variable and see if there are any spikes around the threshold. For our case, the density is given by the `n` column in our data.
```
plt.figure(figsize=(8,8))
ax = plt.subplot(2,1,1)
sheepskin.plot.bar(x="minscore", y="n", ax=ax)
plt.title("McCrary Test")
plt.ylabel("Smoothness at the Threshold")
ax = plt.subplot(2,1,2, sharex=ax)
sheepskin.replace({1877:1977, 1874:2277}).plot.bar(x="minscore", y="n", ax=ax)
plt.xlabel("Test Scores Relative to Cut off")
plt.ylabel("Spike at the Threshold");
```
The first plot shows what our data density looks like. As we can see, there are no spikes around the threshold, meaning there is no bunching. Students are not manipulating where they fall relative to the threshold. Just for illustrative purposes, the second plot shows what bunching would look like if students could manipulate where they fall relative to the threshold. We would see a spike in the density for the cells just above the threshold, since many students would be in those cells, barely passing the exam.
Getting this out of the way, we can go back to estimate the sheepskin effect. As I've said before, the numerator of the Wald estimator can be estimated just like we did in the Sharp RD. Here, we will use as weight the kernel with a bandwidth of 15. Since we also have the cell size, we will multiply the kernel by the sample size to get a final weight for the cell.
```
sheepsking_rdd = sheepskin.assign(threshold=(sheepskin["minscore"]>0).astype(int))
model = smf.wls("avgearnings~minscore*threshold",
sheepsking_rdd,
weights=kernel(sheepsking_rdd["minscore"], c=0, h=15)*sheepsking_rdd["n"]).fit()
model.summary().tables[1]
```
This is telling us that the effect of a diploma is -97.7571, but this is not statistically significant (p-value of 0.5). If we plot these results, we get a very continuous line at the threshold. More educated people indeed make more money, but there isn't a jump at the point where they receive the 12th grade diploma. This is an argument in favor of the view that education increases earnings by making people more productive, rather than being just a signal to the market. In other words, there is no sheepskin effect.
```
ax = sheepskin.plot.scatter(x="minscore", y="avgearnings", color="C0")
sheepskin.assign(predictions=model.fittedvalues).plot(x="minscore", y="predictions", ax=ax, color="C1", figsize=(8,5))
plt.xlabel("Test Scores Relative to Cutoff")
plt.ylabel("Average Earnings")
plt.title("Last-chance Exams");
```
However, as we know from the way non compliance bias works, this result is biased towards zero. To correct for that, we need to scale it by the first stage and get the Wald estimator. Unfortunately, there isn't a good Python implementation for this, so we will have to do it manually and use bootstrap to get the standard errors.
The code below runs the numerator of the Wald estimator just like we did before and also constructs the denominator by replacing the target variable with the treatment variable `receivehsd`. The final step just divides the numerator by the denominator.
```
def wald_rdd(data):
weights=kernel(data["minscore"], c=0, h=15)*data["n"]
denominator = smf.wls("receivehsd~minscore*threshold", data, weights=weights).fit()
numerator = smf.wls("avgearnings~minscore*threshold", data, weights=weights).fit()
return numerator.params["threshold"]/denominator.params["threshold"]
from joblib import Parallel, delayed
np.random.seed(45)
bootstrap_sample = 1000
ates = Parallel(n_jobs=4)(delayed(wald_rdd)(sheepsking_rdd.sample(frac=1, replace=True))
for _ in range(bootstrap_sample))
ates = np.array(ates)
```
With the bootstrap samples, we can plot the distribution of ATEs and see where the 95% confidence interval is.
```
sns.distplot(ates, kde=False)
plt.vlines(np.percentile(ates, 2.5), 0, 100, linestyles="dotted")
plt.vlines(np.percentile(ates, 97.5), 0, 100, linestyles="dotted", label="95% CI")
plt.title("ATE Bootstrap Distribution")
plt.xlim([-10000, 10000])
plt.legend();
```
As you can see, even when we scale the effect by the first stage, it is still not statistically different from zero. This means that education doesn't increase earnings by a simple sheepskin effect, but rather by increasing one's productivity.
## Key Ideas
We learned how to take advantage of artificial discontinuities to estimate causal effects. The idea is that we will have some artificial threshold that makes the probability of treatment jump. One example that we saw was how age makes the probability of drinking jump at 21 years. We could use that to estimate the impact of drinking on mortality rate. We use the fact that very close to the threshold, we have something close to a randomized trial. Entities very close to the threshold could have gone either way and what determines where they've landed is essentially random. With this, we can compare those just above and just below to get the treatment effect. We saw how we could do that with weighted linear regression using a kernel and how this even gave us, for free, standard errors for our ATE.
Then, we looked at what would happen in the fuzzy RD design, where we have non compliance. We saw how we could approach the situation much like we did with IV.
## References
I like to think of this entire book as a tribute to Joshua Angrist, Alberto Abadie and Christopher Walters for their amazing Econometrics class. Most of the ideas here are taken from their classes at the American Economic Association. Watching them is what is keeping me sane during this tough year of 2020.
* [Cross-Section Econometrics](https://www.aeaweb.org/conference/cont-ed/2017-webcasts)
* [Mastering Mostly Harmless Econometrics](https://www.aeaweb.org/conference/cont-ed/2020-webcasts)
I'd also like to reference the amazing books from Angrist. They have shown me that Econometrics, or 'Metrics as they call it, is not only extremely useful but also profoundly fun.
* [Mostly Harmless Econometrics](https://www.mostlyharmlesseconometrics.com/)
* [Mastering 'Metrics](https://www.masteringmetrics.com/)
Another important reference is Miguel Hernan and Jamie Robins' book. It has been my trustworthy companion for the most thorny causal questions I've had to answer.
* [Causal Inference Book](https://www.hsph.harvard.edu/miguel-hernan/causal-inference-book/)

## Contribute
Causal Inference for the Brave and True is an open-source material on causal inference, the statistics of science. It uses only free software, based in Python. Its goal is to be accessible monetarily and intellectually.
If you found this book valuable and you want to support it, please go to [Patreon](https://www.patreon.com/causal_inference_for_the_brave_and_true). If you are not ready to contribute financially, you can also help by fixing typos, suggesting edits or giving feedback on passages you didn't understand. Just go to the book's repository and [open an issue](https://github.com/matheusfacure/python-causality-handbook/issues). Finally, if you liked this content, please share it with others who might find it useful and give it a [star on GitHub](https://github.com/matheusfacure/python-causality-handbook/stargazers).
# Module 3 Graded Assessment
```
"""
1.Question 1
Fill in the blanks of this code to print out the numbers 1 through 7.
"""
number = 1
while number <= 7:
print(number, end=" ")
number +=1
"""
2.Question 2
The show_letters function should print out each letter of a word on a separate line.
Fill in the blanks to make that happen.
"""
def show_letters(word):
for letter in word:
print(letter)
show_letters("Hello")
# Should print one line per letter
"""
3.Question 3
Complete the function digits(n) that returns how many digits the number has.
For example: 25 has 2 digits and 144 has 3 digits. Tip: you can figure out the digits of a number by dividing
it by 10 once per digit until there are no digits left.
"""
def digits(n):
count = str(n)
return len(count)
print(digits(25)) # Should print 2
print(digits(144)) # Should print 3
print(digits(1000)) # Should print 4
print(digits(0)) # Should print 1
"""
4.Question 4
This function prints out a multiplication table (where each number is the result of multiplying the first number of its row by the number at the top of its column). Fill in the blanks so that calling multiplication_table(1, 3) will print out:
1 2 3
2 4 6
3 6 9
"""
def multiplication_table(start, stop):
for x in range(start,stop+1):
for y in range(start,stop+1):
print(str(x*y), end=" ")
print()
multiplication_table(1, 3)
# Should print the multiplication table shown above
"""
5.Question 5
The counter function counts down from start to stop when start is bigger than stop,
and counts up from start to stop otherwise.
Fill in the blanks to make this work correctly.
"""
def counter(start, stop):
x = start
if x>stop:
return_string = "Counting down: "
while x >= stop:
return_string += str(x)
if x>stop:
return_string += ","
x = x-1
else:
return_string = "Counting up: "
while x <= stop:
return_string += str(x)
if x<stop:
return_string += ","
x = x+1
return return_string
print(counter(1, 10)) # Should be "Counting up: 1,2,3,4,5,6,7,8,9,10"
print(counter(2, 1)) # Should be "Counting down: 2,1"
print(counter(5, 5)) # Should be "Counting up: 5"
"""
6.Question 6
The loop function is similar to range(), but handles the parameters somewhat differently: it takes in 3 parameters:
the starting point, the stopping point, and the increment step. When the starting point is greater
than the stopping point, it forces the steps to be negative. When, instead, the starting point is less
than the stopping point, it forces the step to be positive. Also, if the step is 0, it changes to 1 or -1.
The result is returned as a one-line, space-separated string of numbers. For example, loop(11,2,3)
should return 11 8 5 and loop(1,5,0) should return 1 2 3 4. Fill in the missing parts to make that happen.
"""
def loop(start, stop, step):
return_string = ""
if step == 0:
step=1
if start>stop:
step = abs(step) * -1
else:
step = abs(step)
for count in range(start, stop, step):
return_string += str(count) + " "
return return_string.strip()
print(loop(11,2,3)) # Should be 11 8 5
print(loop(1,5,0)) # Should be 1 2 3 4
print(loop(-1,-2,0)) # Should be -1
print(loop(10,25,-2)) # Should be 10 12 14 16 18 20 22 24
print(loop(1,1,1)) # Should be empty
#8.Question 8
#What is the value of x at the end of the following code?
for x in range(1, 10, 3):
print(x)
#7
#9.Question 9
#What is the value of y at the end of the following code?
for x in range(10):
for y in range(x):
print(y)
#8
```
Paper<br>
https://arxiv.org/abs/2109.07161<br>
<br>
GitHub<br>
https://github.com/saic-mdal/lama<br>
<br>
<a href="https://colab.research.google.com/github/kaz12tech/ai_demos/blob/master/Lama_demo.ipynb" target="_blank"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Environment setup
## Get the source code from GitHub
## Install libraries
```
%cd /content
!git clone https://github.com/saic-mdal/lama.git
!pip install -r lama/requirements.txt --quiet
!pip install wget --quiet
!pip install --upgrade webdataset==0.1.103
!pip uninstall opencv-python-headless -y --quiet
!pip install opencv-python-headless==4.1.2.30 --quiet
# Install torch 1.7.1 (CUDA 11.0) with matching torchvision, torchaudio, and torchtext
!pip uninstall torch torchvision torchaudio torchtext -y
!pip install torch==1.7.1+cu110 torchvision==0.8.2+cu110 torchaudio==0.7.2 torchtext -f https://download.pytorch.org/whl/torch_stable.html
# avoid AttributeError: 'builtin_function_or_method' object has no attribute 'rfftn'
!sed -E -i "15i import torch.fft" /content/lama/saicinpainting/training/modules/ffc.py
```
## Set up the pretrained model
```
% cd /content/lama
!curl -L $(yadisk-direct https://disk.yandex.ru/d/ouP6l8VJ0HpMZg) -o big-lama.zip
!unzip big-lama.zip
```
## Import libraries
```
import base64, os
from IPython.display import HTML, Image
from google.colab.output import eval_js
from base64 import b64decode
import matplotlib.pyplot as plt
import numpy as np
import wget
from shutil import copyfile
import shutil
```
# Set up the canvas
```
canvas_html = """
<style>
.button {
background-color: #4CAF50;
border: none;
color: white;
padding: 15px 32px;
text-align: center;
text-decoration: none;
display: inline-block;
font-size: 16px;
margin: 4px 2px;
cursor: pointer;
}
</style>
<canvas1 width=%d height=%d>
</canvas1>
<canvas width=%d height=%d>
</canvas>
<button class="button">Finish</button>
<script>
var canvas = document.querySelector('canvas')
var ctx = canvas.getContext('2d')
var canvas1 = document.querySelector('canvas1')
var ctx1 = canvas.getContext('2d')
ctx.strokeStyle = 'red';
var img = new Image();
img.src = "data:image/%s;charset=utf-8;base64,%s";
console.log(img)
img.onload = function() {
ctx1.drawImage(img, 0, 0);
};
img.crossOrigin = 'Anonymous';
ctx.clearRect(0, 0, canvas.width, canvas.height);
ctx.lineWidth = %d
var button = document.querySelector('button')
var mouse = {x: 0, y: 0}
canvas.addEventListener('mousemove', function(e) {
mouse.x = e.pageX - this.offsetLeft
mouse.y = e.pageY - this.offsetTop
})
canvas.onmousedown = ()=>{
ctx.beginPath()
ctx.moveTo(mouse.x, mouse.y)
canvas.addEventListener('mousemove', onPaint)
}
canvas.onmouseup = ()=>{
canvas.removeEventListener('mousemove', onPaint)
}
var onPaint = ()=>{
ctx.lineTo(mouse.x, mouse.y)
ctx.stroke()
}
var data = new Promise(resolve=>{
button.onclick = ()=>{
resolve(canvas.toDataURL('image/png'))
}
})
</script>
"""
def draw(imgm, filename='drawing.png', w=400, h=200, line_width=1):
display(HTML(canvas_html % (w, h, w,h, filename.split('.')[-1], imgm, line_width)))
data = eval_js("data")
binary = b64decode(data.split(',')[1])
with open(filename, 'wb') as f:
f.write(binary)
```
# Set up the image
[Sample image 1](https://www.pakutaso.com/shared/img/thumb/PAK85_oyakudachisimasu20140830_TP_V.jpg)<br>
[Sample image 2](https://www.pakutaso.com/shared/img/thumb/TSU88_awaitoykyo_TP_V.jpg)<br>
[Sample image 3](https://www.pakutaso.com/20211208341post-37933.html)
```
% cd /content/lama
from google.colab import files
files = files.upload()
fname = list(files.keys())[0]
shutil.rmtree('./data_for_prediction', ignore_errors=True)
! mkdir data_for_prediction
copyfile(fname, f'./data_for_prediction/{fname}')
os.remove(fname)
fname = f'./data_for_prediction/{fname}'
image64 = base64.b64encode(open(fname, 'rb').read())
image64 = image64.decode('utf-8')
print(f'Will use {fname} for inpainting')
img = np.array(plt.imread(f'{fname}')[:,:,:3])
```
# inpainting
```
mask_path = f".{fname.split('.')[1]}_mask.png"
draw(image64, filename=mask_path, w=img.shape[1], h=img.shape[0], line_width=0.04*img.shape[1])
with_mask = np.array(plt.imread(mask_path)[:,:,:3])
mask = (with_mask[:,:,0]==1)*(with_mask[:,:,1]==0)*(with_mask[:,:,2]==0)
plt.imsave(mask_path,mask, cmap='gray')
%cd /content/lama
!mkdir output/
copyfile(mask_path,os.path.join("./output/", os.path.basename(mask_path)))
!PYTHONPATH=. TORCH_HOME=$(pwd) python3 bin/predict.py \
model.path=$(pwd)/big-lama \
indir=$(pwd)/data_for_prediction \
outdir=/content/lama/output \
dataset.img_suffix={suffix}
plt.rcParams['figure.dpi'] = 200
plt.imshow(plt.imread(f"/content/lama/output/{fname.split('.')[1].split('/')[2]}_mask.png"))
_=plt.axis('off')
_=plt.title('inpainting result')
plt.show()
fname = None
```
<a href="https://colab.research.google.com/github/livjab/DS-Unit-2-Sprint-4-Practicing-Understanding/blob/master/module1-hyperparameter-optimization/LS_DS_241_Hyperparameter_Optimization.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
_Lambda School Data Science — Practicing & Understanding Predictive Modeling_
# Hyperparameter Optimization
Today we'll use this process:
## "A universal workflow of machine learning"
_Excerpt from Francois Chollet, [Deep Learning with Python](https://github.com/fchollet/deep-learning-with-python-notebooks/blob/master/README.md), Chapter 4: Fundamentals of machine learning_
**1. Define the problem at hand and the data on which you’ll train.** Collect this data, or annotate it with labels if need be.
**2. Choose how you’ll measure success on your problem.** Which metrics will you monitor on your validation data?
**3. Determine your evaluation protocol:** hold-out validation? K-fold validation? Which portion of the data should you use for validation?
**4. Develop a first model that does better than a basic baseline:** a model with statistical power.
**5. Develop a model that overfits.** The universal tension in machine learning is between optimization and generalization; the ideal model is one that stands right at the border between underfitting and overfitting; between undercapacity and overcapacity. To figure out where this border lies, first you must cross it.
**6. Regularize your model and tune its hyperparameters, based on performance on the validation data.** Repeatedly modify your model, train it, evaluate on your validation data (not the test data, at this point), modify it again, and repeat, until the model is as good as it can get.
**Iterate on feature engineering: add new features, or remove features that don’t seem to be informative.**
Once you’ve developed a satisfactory model configuration, you can **train your final production model on all the available data (training and validation) and evaluate it one last time on the test set.**
## 1. Define the problem at hand and the data on which you'll train
We'll apply the workflow to a [project from _Python Data Science Handbook_](https://jakevdp.github.io/PythonDataScienceHandbook/05.06-linear-regression.html#Example:-Predicting-Bicycle-Traffic) by Jake VanderPlas:
> **Predicting Bicycle Traffic**
> As an example, let's take a look at whether we can predict the number of bicycle trips across Seattle's Fremont Bridge based on weather, season, and other factors.
> We will join the bike data with another dataset, and try to determine the extent to which weather and seasonal factors—temperature, precipitation, and daylight hours—affect the volume of bicycle traffic through this corridor. Fortunately, the NOAA makes available their daily [weather station data](http://www.ncdc.noaa.gov/cdo-web/search?datasetid=GHCND) (I used station ID USW00024233) and we can easily use Pandas to join the two data sources.
> Let's start by loading the two datasets, indexing by date:
So this is a regression problem, not a classification problem. We'll define the target, choose an evaluation metric, and choose models that are appropriate for regression problems.
### Download data
```
!curl -o FremontBridge.csv https://data.seattle.gov/api/views/65db-xm6k/rows.csv?accessType=DOWNLOAD
!wget https://raw.githubusercontent.com/jakevdp/PythonDataScienceHandbook/master/notebooks/data/BicycleWeather.csv
```
### Load data
```
# Modified from cells 15, 16, and 20, at
# https://jakevdp.github.io/PythonDataScienceHandbook/05.06-linear-regression.html#Example:-Predicting-Bicycle-Traffic
import pandas as pd
# Download and join data into a dataframe
def load():
fremont_bridge = 'https://data.seattle.gov/api/views/65db-xm6k/rows.csv?accessType=DOWNLOAD'
bicycle_weather = 'https://raw.githubusercontent.com/jakevdp/PythonDataScienceHandbook/master/notebooks/data/BicycleWeather.csv'
counts = pd.read_csv(fremont_bridge, index_col='Date', parse_dates=True,
infer_datetime_format=True)
weather = pd.read_csv(bicycle_weather, index_col='DATE', parse_dates=True,
infer_datetime_format=True)
daily = counts.resample('d').sum()
daily['Total'] = daily.sum(axis=1)
daily = daily[['Total']] # remove other columns
weather_columns = ['PRCP', 'SNOW', 'SNWD', 'TMAX', 'TMIN', 'AWND']
daily = daily.join(weather[weather_columns], how='inner')
# Make a feature for yesterday's total
daily['Total_yesterday'] = daily.Total.shift(1)
daily = daily.drop(index=daily.index[0])
return daily
daily = load()
```
### First fast look at the data
- What's the shape?
- What's the date range?
- What's the target and the features?
```
# TODO
daily.shape
daily.head()
daily.tail()
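# Date range covered by the data (answers the "date range" question above)
daily.index.min(), daily.index.max()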
```
Target
- Total : Daily total number of bicycle trips across Seattle's Fremont Bridge
Features
- Date (index) : from 2012-10-04 to 2015-09-01
- Total_yesterday : Total trips yesterday
- PRCP : Precipitation (1/10 mm)
- SNOW : Snowfall (1/10 mm)
- SNWD : Snow depth (1/10 mm)
- TMAX : Maximum temperature (1/10 Celsius)
- TMIN : Minimum temperature (1/10 Celsius)
- AWND : Average daily wind speed (1/10 meters per second)
## 2. Choose how you’ll measure success on your problem.
Which metrics will you monitor on your validation data?
This is a regression problem, so we need to choose a regression [metric](https://scikit-learn.org/stable/modules/model_evaluation.html#common-cases-predefined-values).
I'll choose mean absolute error.
```
# TODO
from sklearn.metrics import mean_absolute_error
```
## 3. Determine your evaluation protocol
We're doing model selection, hyperparameter optimization, and performance estimation. So generally we have two ideal [options](https://sebastianraschka.com/images/blog/2018/model-evaluation-selection-part4/model-eval-conclusions.jpg) to choose from:
- 3-way holdout method (train/validation/test split)
- Cross-validation with independent test set
I'll choose cross-validation with independent test set. Scikit-learn makes cross-validation convenient for us!
Specifically, I will use K-fold cross-validation on the training data to train and validate, and I will hold out an "out-of-time" test set from the last 100 days of data:
```
# TODO
test = daily[-100:]
train = daily[:-100]
train.shape, test.shape
X_train = train.drop(columns="Total")
y_train = train["Total"]
X_test = test.drop(columns="Total")
y_test = test["Total"]
X_train.shape, y_train.shape, X_test.shape, y_test.shape
```
## 4. Develop a first model that does better than a basic baseline
### Look at the target's distribution and descriptive stats
```
import matplotlib.pyplot as plt
import seaborn as sns
sns.distplot(y_train);
y_train.describe()
```
### Basic baseline 1
```
y_pred = [y_train.median()] * len(y_train)
mean_absolute_error(y_train, y_pred)
```
### Basic baseline 2
```
y_pred = X_train["Total_yesterday"]
mean_absolute_error(y_train, y_pred)
```
### First model that does better than a basic baseline
https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.cross_validate.html
```
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_validate
scores = cross_validate(LinearRegression(),
X_train,
y_train,
scoring="neg_mean_absolute_error",
cv=3,
return_train_score=True,
return_estimator=True)
pd.DataFrame(scores)
scores["test_score"].mean()
scores["estimator"][0].coef_
for i, model in enumerate(scores["estimator"]):
coefficients = model.coef_
intercept = model.intercept_
feature_names = X_train.columns
    print(f'Model from cross-validation fold #{i}')
print("Intercept", intercept)
print(pd.Series(coefficients, feature_names).to_string())
print('\n')
```
## 5. Develop a model that overfits.
"The universal tension in machine learning is between optimization and generalization; the ideal model is one that stands right at the border between underfitting and overfitting; between undercapacity and overcapacity. To figure out where this border lies, first you must cross it." —Chollet
<img src="https://jakevdp.github.io/PythonDataScienceHandbook/figures/05.03-validation-curve.png">
Diagram Source: https://jakevdp.github.io/PythonDataScienceHandbook/05.03-hyperparameters-and-model-validation.html#Validation-curves-in-Scikit-Learn
### Random Forest?
https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestRegressor.html
```
from sklearn.ensemble import RandomForestRegressor
model = RandomForestRegressor(n_estimators=100, max_depth=None, n_jobs=-1)
scores = cross_validate(model,
X_train,
y_train,
scoring="neg_mean_absolute_error",
cv=3,
return_train_score=True,
return_estimator=True)
pd.DataFrame(scores)
scores["test_score"].mean()
```
### Validation Curve
https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.validation_curve.html
> Validation curve. Determine training and test scores for varying parameter values. This is similar to grid search with one parameter.
```
import numpy as np
# Modified from cell 13 at
# https://jakevdp.github.io/PythonDataScienceHandbook/05.03-hyperparameters-and-model-validation.html#Validation-curves-in-Scikit-Learn
%matplotlib inline
import matplotlib.pyplot as plt
from sklearn.model_selection import validation_curve
model = RandomForestRegressor(n_estimators=100)
depth = [2, 3, 4, 5, 6]
train_score, val_score = validation_curve(
model, X_train, y_train,
param_name='max_depth', param_range=depth,
scoring='neg_mean_absolute_error', cv=3)
plt.plot(depth, np.median(train_score, 1), color='blue', label='training score')
plt.plot(depth, np.median(val_score, 1), color='red', label='validation score')
plt.legend(loc='best')
plt.xlabel('depth');
```
### `RandomizedSearchCV`
https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.RandomizedSearchCV.html
https://scikit-learn.org/stable/modules/grid_search.html
```
from sklearn.model_selection import RandomizedSearchCV
param_distributions = {
"n_estimators": [100, 200],
"max_depth": [4, 5],
"criterion": ["mse", "mae"]
}
gridsearch = RandomizedSearchCV(
RandomForestRegressor(n_jobs=-1, random_state=42),
param_distributions=param_distributions,
n_iter=8,
cv=3, scoring="neg_mean_absolute_error",
verbose=10,
return_train_score=True)
gridsearch.fit(X_train, y_train)
results = pd.DataFrame(gridsearch.cv_results_)
results.sort_values(by="rank_test_score")
gridsearch.best_estimator_
```
## FEATURE ENGINEERING!
Jake VanderPlas demonstrates this feature engineering:
https://jakevdp.github.io/PythonDataScienceHandbook/05.06-linear-regression.html#Example:-Predicting-Bicycle-Traffic
```
# Modified from code cells 17-21 at
# https://jakevdp.github.io/PythonDataScienceHandbook/05.06-linear-regression.html#Example:-Predicting-Bicycle-Traffic
def jake_wrangle(X):
X = X.copy()
# patterns of use generally vary from day to day;
# let's add binary columns that indicate the day of the week:
days = ['Mon', 'Tue', 'Wed', 'Thu', 'Fri', 'Sat', 'Sun']
for i, day in enumerate(days):
X[day] = (X.index.dayofweek == i).astype(float)
# we might expect riders to behave differently on holidays;
# let's add an indicator of this as well:
from pandas.tseries.holiday import USFederalHolidayCalendar
cal = USFederalHolidayCalendar()
holidays = cal.holidays('2012', '2016')
X = X.join(pd.Series(1, index=holidays, name='holiday'))
X['holiday'].fillna(0, inplace=True)
# We also might suspect that the hours of daylight would affect
# how many people ride; let's use the standard astronomical calculation
# to add this information:
def hours_of_daylight(date, axis=23.44, latitude=47.61):
"""Compute the hours of daylight for the given date"""
        days = (date - pd.Timestamp(2000, 12, 21)).days
m = (1. - np.tan(np.radians(latitude))
* np.tan(np.radians(axis) * np.cos(days * 2 * np.pi / 365.25)))
return 24. * np.degrees(np.arccos(1 - np.clip(m, 0, 2))) / 180.
X['daylight_hrs'] = list(map(hours_of_daylight, X.index))
# temperatures are in 1/10 deg C; convert to C
X['TMIN'] /= 10
X['TMAX'] /= 10
    # We can also calculate the average temperature.
X['Temp (C)'] = 0.5 * (X['TMIN'] + X['TMAX'])
# precip is in 1/10 mm; convert to inches
X['PRCP'] /= 254
# In addition to the inches of precipitation, let's add a flag that
# indicates whether a day is dry (has zero precipitation):
X['dry day'] = (X['PRCP'] == 0).astype(int)
# Let's add a counter that increases from day 1, and measures how many
# years have passed. This will let us measure any observed annual increase
# or decrease in daily crossings:
X['annual'] = (X.index - X.index[0]).days / 365.
return X
X_train = jake_wrangle(X_train)
```
### Linear Regression (with new features)
```
scores = cross_validate(LinearRegression(),
X_train,
y_train,
scoring="neg_mean_absolute_error",
cv=3,
return_train_score=True,
return_estimator=True)
pd.DataFrame(scores)
scores["test_score"].mean()
```
### Random Forest (with new features)
```
param_distributions = {
'n_estimators': [100],
'max_depth': [5, 10, 15, None],
'criterion': ["mae"]
}
gridsearch = RandomizedSearchCV(
RandomForestRegressor(n_jobs=-1, random_state=42),
param_distributions=param_distributions,
n_iter=2,
cv=3,
scoring="neg_mean_absolute_error",
verbose=10,
return_train_score=True)
gridsearch.fit(X_train, y_train)
gridsearch.best_estimator_
```
### Feature engineering, explained by Francois Chollet
> _Feature engineering_ is the process of using your own knowledge about the data and about the machine learning algorithm at hand to make the algorithm work better by applying hardcoded (nonlearned) transformations to the data before it goes into the model. In many cases, it isn’t reasonable to expect a machine-learning model to be able to learn from completely arbitrary data. The data needs to be presented to the model in a way that will make the model’s job easier.
> Let’s look at an intuitive example. Suppose you’re trying to develop a model that can take as input an image of a clock and can output the time of day.
> If you choose to use the raw pixels of the image as input data, then you have a difficult machine-learning problem on your hands. You’ll need a convolutional neural network to solve it, and you’ll have to expend quite a bit of computational resources to train the network.
> But if you already understand the problem at a high level (you understand how humans read time on a clock face), then you can come up with much better input features for a machine-learning algorithm: for instance, write a Python script to follow the black pixels of the clock hands and output the (x, y) coordinates of the tip of each hand. Then a simple machine-learning algorithm can learn to associate these coordinates with the appropriate time of day.
> You can go even further: do a coordinate change, and express the (x, y) coordinates as polar coordinates with regard to the center of the image. Your input will become the angle theta of each clock hand. At this point, your features are making the problem so easy that no machine learning is required; a simple rounding operation and dictionary lookup are enough to recover the approximate time of day.
> That’s the essence of feature engineering: making a problem easier by expressing it in a simpler way. It usually requires understanding the problem in depth.
> Before convolutional neural networks became successful on the MNIST digit-classification problem, solutions were typically based on hardcoded features such as the number of loops in a digit image, the height of each digit in an image, a histogram of pixel values, and so on.
> Neural networks are capable of automatically extracting useful features from raw data. Does this mean you don’t have to worry about feature engineering as long as you’re using deep neural networks? No, for two reasons:
> - Good features still allow you to solve problems more elegantly while using fewer resources. For instance, it would be ridiculous to solve the problem of reading a clock face using a convolutional neural network.
> - Good features let you solve a problem with far less data. The ability of deep-learning models to learn features on their own relies on having lots of training data available; if you have only a few samples, then the information value in their features becomes critical.
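To make the clock example concrete, here is a minimal sketch (not from the book) of the last step Chollet describes: given hypothetical (x, y) coordinates of the hour- and minute-hand tips relative to the clock's center (y pointing up), convert them to angles and recover an approximate time with simple arithmetic. The coordinates and function names are illustrative assumptions.
```
import numpy as np

def hand_angle(x, y):
    """Angle of a clock hand in degrees, measured clockwise from 12 o'clock."""
    # arctan2 measures counter-clockwise from the x-axis; convert to the clock convention.
    return (90 - np.degrees(np.arctan2(y, x))) % 360

def read_clock(hour_tip, minute_tip):
    """Approximate (hour, minute) from hand-tip coordinates relative to the center."""
    hour = int(hand_angle(*hour_tip) // 30) % 12            # 360 degrees / 12 hours
    minute = int(round(hand_angle(*minute_tip) / 6)) % 60   # 360 degrees / 60 minutes
    return hour, minute

# Hypothetical hand tips: hour hand pointing at 3 o'clock, minute hand pointing at 12
read_clock(hour_tip=(1.0, 0.0), minute_tip=(0.0, 1.0))  # -> (3, 0)
```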
# ASSIGNMENT
**1.** Complete the notebook cells that were originally commented **`TODO`**.
**2.** Then, focus on feature engineering to improve your cross validation scores. Collaborate with your cohort on Slack. You could start with the ideas [Jake VanderPlas suggests:](https://jakevdp.github.io/PythonDataScienceHandbook/05.06-linear-regression.html#Example:-Predicting-Bicycle-Traffic)
> Our model is almost certainly missing some relevant information. For example, nonlinear effects (such as effects of precipitation and cold temperature) and nonlinear trends within each variable (such as disinclination to ride at very cold and very hot temperatures) cannot be accounted for in this model. Additionally, we have thrown away some of the finer-grained information (such as the difference between a rainy morning and a rainy afternoon), and we have ignored correlations between days (such as the possible effect of a rainy Tuesday on Wednesday's numbers, or the effect of an unexpected sunny day after a streak of rainy days). These are all potentially interesting effects, and you now have the tools to begin exploring them if you wish!
**3.** Experiment with the Categorical Encoding notebook.
**4.** At the end of the day, take the last step in the "universal workflow of machine learning" — "You can train your final production model on all the available data (training and validation) and evaluate it one last time on the test set."
See the [`RandomizedSearchCV`](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.RandomizedSearchCV.html) documentation for the `refit` parameter, `best_estimator_` attribute, and `predict` method (a short sketch follows the quote below):
> **refit : boolean, or string, default=True**
> Refit an estimator using the best found parameters on the whole dataset.
> The refitted estimator is made available at the `best_estimator_` attribute and permits using `predict` directly on this `GridSearchCV` instance.
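A minimal sketch of that last step for this notebook, assuming the held-out test features receive exactly the same transformations as the training features (here `jake_wrangle`; if you added extra engineered columns, apply those too) and that `gridsearch` was fit with the default `refit=True`:
```
# Apply the same feature engineering to the out-of-time test set
X_test_wrangled = jake_wrangle(X_test)

# With refit=True, RandomizedSearchCV refits best_estimator_ on the full training data,
# so we can predict directly and score exactly once on the test set.
y_pred = gridsearch.best_estimator_.predict(X_test_wrangled)
print('Test MAE:', mean_absolute_error(y_test, y_pred))
```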
### STRETCH
**A.** Apply this lesson to other datasets you've worked with, like Ames Housing, Bank Marketing, or others.
**B.** In addition to `RandomizedSearchCV`, scikit-learn has [`GridSearchCV`](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html). Another library called scikit-optimize has [`BayesSearchCV`](https://scikit-optimize.github.io/notebooks/sklearn-gridsearchcv-replacement.html). Experiment with these alternatives.
**C.** _[Introduction to Machine Learning with Python](http://shop.oreilly.com/product/0636920030515.do)_ discusses options for "Grid-Searching Which Model To Use" in Chapter 6:
> You can even go further in combining GridSearchCV and Pipeline: it is also possible to search over the actual steps being performed in the pipeline (say whether to use StandardScaler or MinMaxScaler). This leads to an even bigger search space and should be considered carefully. Trying all possible solutions is usually not a viable machine learning strategy. However, here is an example comparing a RandomForestClassifier and an SVC ...
The example is shown in [the accompanying notebook](https://github.com/amueller/introduction_to_ml_with_python/blob/master/06-algorithm-chains-and-pipelines.ipynb), code cells 35-37. Could you apply this concept to your own pipelines? A sketch of the idea follows.
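A hedged sketch of that idea adapted to this regression problem (not the book's exact code): put a placeholder `model` step in a `Pipeline` and let `GridSearchCV` swap whole estimators, each with its own hyperparameters, in and out of that step.
```
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import Ridge
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV

pipe = Pipeline([('scaler', StandardScaler()), ('model', Ridge())])

# Each dict is one branch of the search space: an estimator plus its own hyperparameters.
param_grid = [
    {'model': [Ridge()],
     'model__alpha': [0.1, 1.0, 10.0]},
    {'model': [RandomForestRegressor(random_state=42, n_jobs=-1)],
     'scaler': ['passthrough'],  # tree ensembles don't need feature scaling
     'model__n_estimators': [100, 200],
     'model__max_depth': [5, None]},
]

search = GridSearchCV(pipe, param_grid, cv=3, scoring='neg_mean_absolute_error')
search.fit(X_train, y_train)
search.best_params_
```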
```
len(X_train.columns)
X_train.describe()
# Let's feature-engineer a column indicating whether it rained yesterday.
# We can use the feature engineered by Jake VanderPlas called "dry day"
# to determine if there was rain on a given day
X_train["dry day"].value_counts()
X_train["yesterday dry day"] = X_train["dry day"].shift()
X_train[["dry day", "yesterday dry day"]].head(10)
# deal with Nan and change to int type
X_train["yesterday dry day"] = X_train["yesterday dry day"].fillna(value=1).astype(int)
# Let's try to make a column for the number of days since it was last sunny
X_train['rainy day streak'] = X_train.groupby( (X_train['dry day'] !=1)
.cumsum()).cumcount() + ( (X_train['dry day'] != 0)
.cumsum() == 0).astype(int)
X_train[["dry day", "rainy day streak"]].head(10)
# Let's make a feature for extreme cold/extreme heat
# Anything above about 80 degrees (F) and below 40 degrees (F) counts as extreme temp
# 80F = 26.67C, 40F = 4.44C
def extreme_temps(row):
    if row["Temp (C)"] > 26.67:
        return 1
    elif row["Temp (C)"] < 4.44:
        return 1
    else:
        return 0
X_train["extreme temp day"] = X_train.apply(extreme_temps, axis=1)
X_train["extreme temp day"].value_counts()
X_train[["Temp (C)", "extreme temp day"]].sort_values("Temp (C)").head()
X_train[["Temp (C)", "extreme temp day"]].sort_values("Temp (C)", ascending=False).head()
# linear regression with new added features
scores = cross_validate(LinearRegression(),
X_train,
y_train,
scoring="neg_mean_absolute_error",
cv=3,
return_train_score=True,
return_estimator=True)
pd.DataFrame(scores)
scores["test_score"].mean()
# random forest regression
param_distributions = {
'n_estimators': [100, 200, 300],
'max_depth': [5, 10, 15, None],
'criterion': ["mse", "mae"]
}
gridsearch = RandomizedSearchCV(
RandomForestRegressor(n_jobs=-1, random_state=42),
param_distributions=param_distributions,
cv=3,
scoring="neg_mean_absolute_error",
verbose=10,
return_train_score=True)
gridsearch.fit(X_train, y_train)
gridsearch.best_estimator_
scores = cross_validate(RandomForestRegressor(bootstrap=True,
criterion='mse',
max_depth=None,
max_features='auto',
max_leaf_nodes=None,
min_impurity_decrease=0.0,
min_impurity_split=None,
min_samples_leaf=1,
min_samples_split=2,
min_weight_fraction_leaf=0.0,
n_estimators=300,
n_jobs=-1,
oob_score=False,
random_state=42,
verbose=0,
warm_start=False),
X_train,
y_train,
scoring="neg_mean_absolute_error",
cv=3,
return_train_score=True,
return_estimator=True)
pd.DataFrame(scores)
scores["test_score"].mean()
pd.DataFrame(gridsearch.cv_results_).sort_values(by="rank_test_score")
```
# Scroll down to get to the interesting tables...
# Construct list of properties of widgets
"Properties" here is one of:
+ `keys`
+ `traits()`
+ `class_own_traits()`
Common (i.e. uninteresting) properties are filtered out.
The dependency on astropy is for their Table. Replace it with pandas if you want...
```
import itertools
from ipywidgets import *
from IPython.display import display
from traitlets import TraitError
from astropy.table import Table, Column
```
# Function definitions
## Calculate "interesting" properties
```
def properties(widget, omit=None, source=None):
"""
Return a list of widget properties for a widget instance, omitting
common properties.
Parameters
----------
widget : ipywidgets.Widget instance
        The widget for which the list of properties is desired.
omit : list, optional
List of properties to omit in the return value. Default is
        ``['layout', 'style', 'msg_throttle']``; for ``source='traits'`` it is
        extended to add ``['keys', 'comm']``.
    source : str, one of 'keys', 'traits', 'class_own_traits', 'style_keys', optional
Source of property list for widget. Default is ``'keys'``.
"""
if source is None:
source = 'keys'
valid_sources = ('keys', 'traits', 'class_own_traits', 'style_keys')
if source not in valid_sources:
raise ValueError('source must be one of {}'.format(', '.join(valid_sources)))
if omit is None:
omit = ['layout', 'style', 'msg_throttle']
if source == 'keys':
props = widget.keys
elif source == 'traits':
props = widget.traits()
omit.extend(['keys', 'comm'])
elif source == 'class_own_traits':
props = widget.class_own_traits()
elif source == 'style_keys':
props = widget.style.keys
props = [k for k in props if not k.startswith('_')]
return [k for k in props if k not in omit]
```
## Create a table (cross-tab style) for which properties are available for which widgets
This is the only place `astropy.table.Table` is used, so delete it if you want to; a pandas-based alternative is sketched right after the function.
```
def table_for_keys(keys, keys_info, source):
unique_keys = set()
for k in keys:
unique_keys.update(keys_info[k])
unique_keys = sorted(unique_keys)
string_it = lambda x: 'X' if x else ''
colnames = ['Property ({})'.format(source)] + keys
columns = [Column(name=colnames[0], data=unique_keys)]
for c in colnames[1:]:
        column = Column(name=c, data=[string_it(k in keys_info[c]) for k in unique_keys])
columns.append(column)
return Table(columns)
```
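If you would rather avoid the astropy dependency, here is a minimal pandas-based alternative (my own sketch, not part of the original notebook) that builds the same cross-tab as a DataFrame:
```
import pandas as pd

def table_for_keys_pandas(keys, keys_info, source):
    """Same cross-tab as table_for_keys, but returned as a pandas DataFrame."""
    unique_keys = sorted(set().union(*(keys_info[k] for k in keys)))
    data = {k: ['X' if prop in keys_info[k] else '' for prop in unique_keys] for k in keys}
    return pd.DataFrame(data, index=pd.Index(unique_keys, name='Property ({})'.format(source)))

# Usage mirrors the astropy version, e.g.:
# table_for_keys_pandas(sliders, key_dict, source=property_source)
```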
## List of widget objects...
```
widget_list = [
IntSlider,
FloatSlider,
IntRangeSlider,
FloatRangeSlider,
IntProgress,
FloatProgress,
BoundedIntText,
BoundedFloatText,
IntText,
FloatText,
ToggleButton,
Checkbox,
Valid,
Dropdown,
RadioButtons,
Select,
SelectionSlider,
SelectionRangeSlider,
ToggleButtons,
SelectMultiple,
Text,
Textarea,
Label,
HTML,
HTMLMath,
Image,
Button,
Play,
DatePicker,
ColorPicker,
Box,
HBox,
VBox,
Accordion,
Tab
]
```
## ...and their names
```
names = [wd.__name__ for wd in widget_list]
```
## Figure out the properties for each widget
The `try`/`except` below is to catch a couple of classes that *require* that `options` be passed on initialization.
```
property_source = 'keys'
all_keys = []
for widget_class in widget_list:
try:
keys = properties(widget_class(), source=property_source)
except TraitError as e:
keys = properties(widget_class(options=(2,10)), source=property_source)
finally:
all_keys.append(keys)
```
Probably should have used a dict from the beginning...
```
key_dict = {k: v for k, v in zip(names, all_keys)}
```
## Define a few groups of widgets by widget interface type
This makes for nicer (i.e. more compact and readable) tables later on.
```
sliders = [k for k in key_dict.keys() if 'Slider' in k]
buttons = [k for k in key_dict.keys() if 'Button' in k]
containers = ['Box', 'VBox', 'HBox', 'Accordion', 'Tab']
texts = [k for k in names if 'text' in k or 'Text' in k] + [k for k in names if 'HTML' in k] + ['Label']
progress = [k for k in names if 'Progress' in k]
selects = ['Dropdown', 'Select', 'SelectMultiple']
all_so_far = sliders + buttons + texts + containers + progress + selects
others = [k for k in names if k not in all_so_far]
slider_keys = set()
```
# Tables of keys (synced properties)
## Sliders
```
table_for_keys(sliders, key_dict, source=property_source)
```
## Buttons
```
table_for_keys(buttons, key_dict, source=property_source)
```
## Containers
```
table_for_keys(containers, key_dict, source=property_source)
```
## Text
```
table_for_keys(texts, key_dict, source=property_source)
```
## Progress bars
```
table_for_keys(progress, key_dict, source=property_source)
```
# Select widgets
```
table_for_keys(selects, key_dict, source=property_source)
```
## Everything else
```
table_for_keys(others, key_dict, source=property_source)
property_source = 'style_keys'
style_keys = []
for widget_class in widget_list:
try:
keys = properties(widget_class(), source=property_source)
except TraitError as e:
keys = properties(widget_class(options=(2,10)), source=property_source)
except AttributeError:
keys=''
finally:
style_keys.append(keys)
for w, s in zip(names, style_keys):
print('{} has style keys: {}'.format(w, ', '.join(s)))
```