Unnamed: 0 | text_prompt | code_prompt
---|---|---|
12,500 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Physique
Mini Table of Contents
Create or refresh data files
Using Physique from a working directory not containing Physique itself
NIST Fundamental Constants
NIST Official Conversions (to metric)
Webscraping example
Step1: NIST Fundamental Constants
Step2: Find a Fundamental Constant you are interested in using the usual pandas methods
Step3: NIST Official Conversions (to metric)
This is the pandas DataFrame containing all the NIST Official Conversions to SI.
Step4: From the list of columns, search for the quantity you desire by trying out different search terms
Step5: Or we can look up the SI unit we want to convert to.
Step6: Look at what you want and see the index; it happens to be 340 in this example.
Step7: Then the attributes can be accessed by the column names.
Step8: So for example, the reusable SSME delivers a vacuum thrust of 470000 lb or
Step9: To obtain the conversion for pressure in psia, which we search for with "psi"
Step10: So for a chamber pressure of 3028 psia for the SSME,
Step11: Also, get the conversion for atmospheres (atm) | Python Code:
import os
print(os.getcwd())
os.chdir(os.getcwd() + "/Physique/") # change current working directory
print(os.getcwd())
%run -i ./Scripts/Refresh.py # this is the main, important, command to run
import Physique
import sys
sys.executable # Check which Python you are running in case you have ImportError's
print(dir(Physique))
Explanation: Physique
Mini Table of Contents
Create or refresh data files
Using Physique from a working directory not containing Physique itself
NIST Fundamental Constants
NIST Official Conversions (to metric)
Webscraping example: JPL Solar System Dynamics (JPL SSD) - Planets and Pluto
Create or refresh data files
Create or refresh data files (if using Physique for the first time, do this first!) by running this script in the root directory of the project (i.e. ./Physique/):
python3 ./Scripts/Refresh.py
For instance, you would leave this current directory, change to the directory above (or wherever it really is) for the root directory of the project, and type the above command in the command line.
Alternatively, do it in Python here:
End of explanation
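If you would rather not change the working directory into the project itself, a minimal alternative sketch (the path below is only a placeholder for wherever you keep the project, not a path taken from this repository) is to put the parent directory of the Physique package on sys.path before importing:
import sys
sys.path.insert(0, "/path/to/parent/of/Physique")  # placeholder path - adjust to your own setup
import Physique  # now importable without chdir-ing into the project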
from Physique import FundamentalPhysicalConstants as FPC
print(FPC.columns)
print(FPC)
Explanation: NIST Fundamental Constants
End of explanation
g_0pd = FPC[FPC["Quantity"].str.contains("gravity") ]
# standard acceleration of gravity as a panda DataFrame
print(g_0pd)
g_0 = g_0pd["Value"].values[0]
print(type(g_0))
print(g_0)
# access the values you're interested in
print(g_0pd.Quantity)
print(g_0pd.Value.values[0])  # .get_values() was removed in newer pandas; .values works across versions
print(g_0pd.Unit.values[0])
# you can also grab just the single entry from this DataFrame using the .loc indexer
FPC[FPC["Quantity"].str.contains("Boltzmann")].loc[49,:]
g_0pd.loc[303,:]
Explanation: Find a Fundamental Constant you are interested in using the usual pandas methods
End of explanation
from Physique import Conversions
print(Conversions.columns)
Explanation: NIST Official Conversions (to metric)
This is the pandas DataFrame containing all the NIST Official Conversions to SI.
End of explanation
Conversions[Conversions['Toconvertfrom'].str.contains("pound-force ")]
Explanation: From the list of columns, search for the quantity you desire by trying out different search terms: e.g. I'm reading Huzel and Huang's Modern Engineering for Design of Liquid-Propellant Rocket Engines and I want to know how to convert from
* lb (pound or pound-force) for thrust into force in Newton (N)
* psia (pounds per square inch absolute) for (chamber) pressure into pressure in Pascal (Pa)
We can try to look up the U.S. or Imperial units from the Toconvertfrom column.
End of explanation
Conversions[Conversions['to'].str.contains("newton ")]
Explanation: Or we can look up the SI unit we want to convert to.
End of explanation
lbf2N = Conversions.loc[340,:];
print(lbf2N)
Explanation: Look at what you want and see the index; it happens to be 340 in this example.
End of explanation
print(lbf2N.Toconvertfrom)
print(lbf2N.to)
print(lbf2N.Multiplyby)
Explanation: Then the attributes can be accessed by the column names.
End of explanation
print(470000 * lbf2N.Multiplyby, lbf2N.to)
Explanation: So for example, the reusable SSME delivers a vacuum thrust of 470000 lb or
End of explanation
Conversions[Conversions['Toconvertfrom'].str.match("psi")]
Explanation: To obtain the conversion for pressure in psia, which we search for with "psi"
End of explanation
psi2Pa = Conversions.loc[372,:]
print(3028 * psi2Pa.Multiplyby, psi2Pa.to)
Explanation: So for a chamber pressure of 3028 psia for the SSME,
End of explanation
Conversions[Conversions['Toconvertfrom'].str.match("atm")]
atm2Pa = Conversions.loc[15,:]
print(3028 * psi2Pa.Multiplyby / atm2Pa.Multiplyby, atm2Pa.Toconvertfrom)
Explanation: Also, get the conversion for atmospheres (atm):
End of explanation |
12,501 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Handwritten Number Recognition with TFLearn and MNIST
In this notebook, we'll be building a neural network that recognizes handwritten numbers 0-9.
This kind of neural network is used in a variety of real-world applications including
Step1: Retrieving training and test data
The MNIST data set already contains both training and test data. There are 55,000 data points of training data, and 10,000 points of test data.
Each MNIST data point has
Step2: Visualize the training data
Provided below is a function that will help you visualize the MNIST data. By passing in the index of a training example, the function show_digit will display that training image along with its corresponding label in the title.
Step3: Building the network
TFLearn lets you build the network by defining the layers in that network.
For this example, you'll define
Step4: Training the network
Now that we've constructed the network, saved as the variable model, we can fit it to the data. Here we use the model.fit method. You pass in the training features trainX and the training targets trainY. Below I set validation_set=0.1 which reserves 10% of the data set as the validation set. You can also set the batch size and number of epochs with the batch_size and n_epoch keywords, respectively.
Too few epochs don't effectively train your network, and too many take a long time to execute. Choose wisely!
Step5: Testing
After you're satisfied with the training output and accuracy, you can then run the network on the test data set to measure its performance! Remember, only do this after you've done the training and are satisfied with the results.
A good result will be higher than 95% accuracy. Some simple models have been known to get up to 99.7% accuracy! | Python Code:
# Import Numpy, TensorFlow, TFLearn, and MNIST data
import numpy as np
import tensorflow as tf
import tflearn
import tflearn.datasets.mnist as mnist
Explanation: Handwritten Number Recognition with TFLearn and MNIST
In this notebook, we'll be building a neural network that recognizes handwritten numbers 0-9.
This kind of neural network is used in a variety of real-world applications including: recognizing phone numbers and sorting postal mail by address. To build the network, we'll be using the MNIST data set, which consists of images of handwritten numbers and their correct labels 0-9.
We'll be using TFLearn, a high-level library built on top of TensorFlow to build the neural network. We'll start off by importing all the modules we'll need, then load the data, and finally build the network.
End of explanation
# Retrieve the training and test data
trainX, trainY, testX, testY = mnist.load_data(one_hot=True)
Explanation: Retrieving training and test data
The MNIST data set already contains both training and test data. There are 55,000 data points of training data, and 10,000 points of test data.
Each MNIST data point has:
1. an image of a handwritten digit and
2. a corresponding label (a number 0-9 that identifies the image)
We'll call the images, which will be the input to our neural network, X and their corresponding labels Y.
We're going to want our labels as one-hot vectors, which are vectors that holds mostly 0's and one 1. It's easiest to see this in a example. As a one-hot vector, the number 0 is represented as [1, 0, 0, 0, 0, 0, 0, 0, 0, 0], and 4 is represented as [0, 0, 0, 0, 1, 0, 0, 0, 0, 0].
Flattened data
For this example, we'll be using flattened data or a representation of MNIST images in one dimension rather than two. So, each handwritten number image, which is 28x28 pixels, will be represented as a one dimensional array of 784 pixel values.
Flattening the data throws away information about the 2D structure of the image, but it simplifies our data so that all of the training data can be contained in one array whose shape is [55000, 784]; the first dimension is the number of training images and the second dimension is the number of pixels in each image. This is the kind of data that is easy to analyze using a simple neural network.
End of explanation
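To make the one-hot and flattening ideas concrete, here is a small NumPy sketch (illustrative only, using a made-up image rather than the loaded MNIST arrays):
import numpy as np
label = 4
one_hot = np.eye(10)[label]           # [0. 0. 0. 0. 1. 0. 0. 0. 0. 0.]
fake_image = np.random.rand(28, 28)   # stand-in for a 28x28 handwritten digit
flat = fake_image.reshape(-1)         # length-784 vector, like each row of trainX
print(one_hot, flat.shape)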
# Visualizing the data
import matplotlib.pyplot as plt
%matplotlib inline
# Function for displaying a training image by it's index in the MNIST set
def show_digit(index):
label = trainY[index].argmax(axis=0)
# Reshape 784 array into 28x28 image
image = trainX[index].reshape([28,28])
plt.title('Training data, index: %d, Label: %d' % (index, label))
plt.imshow(image, cmap='gray_r')
plt.show()
# Display the training image at index 1000
show_digit(1000)
Explanation: Visualize the training data
Provided below is a function that will help you visualize the MNIST data. By passing in the index of a training example, the function show_digit will display that training image along with its corresponding label in the title.
End of explanation
# Define the neural network
def build_model():
# This resets all parameters and variables, leave this here
tf.reset_default_graph()
net = tflearn.input_data([None, 784])
net = tflearn.fully_connected(net, 500, activation='ELU')
net = tflearn.fully_connected(net, 200, activation='ELU')
net = tflearn.fully_connected(net, 10, activation ='ELU')
net = tflearn.fully_connected(net, 10, activation='softmax')
net = tflearn.regression(net, optimizer='adam',
learning_rate=.001,
loss='categorical_crossentropy')
model = tflearn.DNN(net)
return model
# Build the model
with tf.device('/gpu:0'):
model = build_model()
Explanation: Building the network
TFLearn lets you build the network by defining the layers in that network.
For this example, you'll define:
The input layer, which tells the network the number of inputs it should expect for each piece of MNIST data.
Hidden layers, which recognize patterns in data and connect the input to the output layer, and
The output layer, which defines how the network learns and outputs a label for a given image.
Let's start with the input layer; to define the input layer, you'll define the type of data that the network expects. For example,
net = tflearn.input_data([None, 100])
would create a network with 100 inputs. The number of inputs to your network needs to match the size of your data. For this example, we're using 784 element long vectors to encode our input data, so we need 784 input units.
Adding layers
To add new hidden layers, you use
net = tflearn.fully_connected(net, n_units, activation='ReLU')
This adds a fully connected layer where every unit (or node) in the previous layer is connected to every unit in this layer. The first argument net is the network you created in the tflearn.input_data call; it designates the input to the hidden layer. You can set the number of units in the layer with n_units, and set the activation function with the activation keyword. You can keep adding layers to your network by repeatedly calling tflearn.fully_connected(net, n_units).
Then, to set how you train the network, use:
net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')
Again, this is passing in the network you've been building. The keywords:
optimizer sets the training method, here stochastic gradient descent
learning_rate is the learning rate
loss determines how the network error is calculated. In this example, with categorical cross-entropy.
Finally, you put all this together to create the model with tflearn.DNN(net).
Exercise: Below in the build_model() function, you'll put together the network using TFLearn. You get to choose how many layers to use, how many hidden units, etc.
Hint: The final output layer must have 10 output nodes (one for each digit 0-9). It's also recommended to use a softmax activation layer as your final output layer.
End of explanation
# Training
model.fit(trainX, trainY, validation_set=0.1, show_metric=True, batch_size=100, n_epoch=15)
# Training
model.fit(trainX, trainY, validation_set=0.05, show_metric=True, batch_size=64, n_epoch=10)
Explanation: Training the network
Now that we've constructed the network, saved as the variable model, we can fit it to the data. Here we use the model.fit method. You pass in the training features trainX and the training targets trainY. Below I set validation_set=0.1 which reserves 10% of the data set as the validation set. You can also set the batch size and number of epochs with the batch_size and n_epoch keywords, respectively.
Too few epochs don't effectively train your network, and too many take a long time to execute. Choose wisely!
End of explanation
# Compare the labels that our model predicts with the actual labels
# Find the indices of the most confident prediction for each item. That tells us the predicted digit for that sample.
predictions = np.array(model.predict(testX)).argmax(axis=1)
# Calculate the accuracy, which is the percentage of times the predicted labels matched the actual labels
actual = testY.argmax(axis=1)
test_accuracy = np.mean(predictions == actual, axis=0)
# Print out the result
print("Test accuracy: ", test_accuracy)
Explanation: Testing
After you're satisfied with the training output and accuracy, you can then run the network on the test data set to measure its performance! Remember, only do this after you've done the training and are satisfied with the results.
A good result will be higher than 95% accuracy. Some simple models have been known to get up to 99.7% accuracy!
End of explanation |
12,502 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Regularization
Welcome to the second assignment of this week. Deep Learning models have so much flexibility and capacity that overfitting can be a serious problem, if the training dataset is not big enough. Sure it does well on the training set, but the learned network doesn't generalize to new examples that it has never seen!
You will learn to
Step1: Problem Statement
Step3: Each dot corresponds to a position on the football field where a football player has hit the ball with his/her head after the French goal keeper has shot the ball from the left side of the football field.
- If the dot is blue, it means the French player managed to hit the ball with his/her head
- If the dot is red, it means the other team's player hit the ball with their head
Your goal
Step4: Let's train the model without any regularization, and observe the accuracy on the train/test sets.
Step5: The train accuracy is 94.8% while the test accuracy is 91.5%. This is the baseline model (you will observe the impact of regularization on this model). Run the following code to plot the decision boundary of your model.
Step7: The non-regularized model is obviously overfitting the training set. It is fitting the noisy points! Lets now look at two techniques to reduce overfitting.
2 - L2 Regularization
The standard way to avoid overfitting is called L2 regularization. It consists of appropriately modifying your cost function, from
Step9: Expected Output
Step10: Expected Output
Step11: Congrats, the test set accuracy increased to 93%. You have saved the French football team!
You are not overfitting the training data anymore. Let's plot the decision boundary.
Step13: Observations
Step15: Expected Output
Step16: Expected Output
Step17: Dropout works great! The test accuracy has increased again (to 95%)! Your model is not overfitting the training set and does a great job on the test set. The French football team will be forever grateful to you!
Run the code below to plot the decision boundary. | Python Code:
# import packages
import numpy as np
import matplotlib.pyplot as plt
from reg_utils import sigmoid, relu, plot_decision_boundary, initialize_parameters, load_2D_dataset, predict_dec
from reg_utils import compute_cost, predict, forward_propagation, backward_propagation, update_parameters
import sklearn
import sklearn.datasets
import scipy.io
from testCases import *
%matplotlib inline
plt.rcParams['figure.figsize'] = (7.0, 4.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
Explanation: Regularization
Welcome to the second assignment of this week. Deep Learning models have so much flexibility and capacity that overfitting can be a serious problem, if the training dataset is not big enough. Sure it does well on the training set, but the learned network doesn't generalize to new examples that it has never seen!
You will learn to: Use regularization in your deep learning models.
Let's first import the packages you are going to use.
End of explanation
train_X, train_Y, test_X, test_Y = load_2D_dataset()
Explanation: Problem Statement: You have just been hired as an AI expert by the French Football Corporation. They would like you to recommend positions where France's goal keeper should kick the ball so that the French team's players can then hit it with their head.
<img src="images/field_kiank.png" style="width:600px;height:350px;">
<caption><center> <u> Figure 1 </u>: Football field<br> The goal keeper kicks the ball in the air, the players of each team are fighting to hit the ball with their head </center></caption>
They give you the following 2D dataset from France's past 10 games.
End of explanation
def model(X, Y, learning_rate = 0.3, num_iterations = 30000, print_cost = True, lambd = 0, keep_prob = 1):
Implements a three-layer neural network: LINEAR->RELU->LINEAR->RELU->LINEAR->SIGMOID.
Arguments:
X -- input data, of shape (input size, number of examples)
Y -- true "label" vector (1 for blue dot / 0 for red dot), of shape (output size, number of examples)
learning_rate -- learning rate of the optimization
num_iterations -- number of iterations of the optimization loop
print_cost -- If True, print the cost every 10000 iterations
lambd -- regularization hyperparameter, scalar
keep_prob - probability of keeping a neuron active during drop-out, scalar.
Returns:
parameters -- parameters learned by the model. They can then be used to predict.
grads = {}
costs = [] # to keep track of the cost
m = X.shape[1] # number of examples
layers_dims = [X.shape[0], 20, 3, 1]
# Initialize parameters dictionary.
parameters = initialize_parameters(layers_dims)
# Loop (gradient descent)
for i in range(0, num_iterations):
# Forward propagation: LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID.
if keep_prob == 1:
a3, cache = forward_propagation(X, parameters)
elif keep_prob < 1:
a3, cache = forward_propagation_with_dropout(X, parameters, keep_prob)
# Cost function
if lambd == 0:
cost = compute_cost(a3, Y)
else:
cost = compute_cost_with_regularization(a3, Y, parameters, lambd)
# Backward propagation.
assert(lambd==0 or keep_prob==1) # it is possible to use both L2 regularization and dropout,
# but this assignment will only explore one at a time
if lambd == 0 and keep_prob == 1:
grads = backward_propagation(X, Y, cache)
elif lambd != 0:
grads = backward_propagation_with_regularization(X, Y, cache, lambd)
elif keep_prob < 1:
grads = backward_propagation_with_dropout(X, Y, cache, keep_prob)
# Update parameters.
parameters = update_parameters(parameters, grads, learning_rate)
# Print the loss every 10000 iterations
if print_cost and i % 10000 == 0:
print("Cost after iteration {}: {}".format(i, cost))
if print_cost and i % 1000 == 0:
costs.append(cost)
# plot the cost
plt.plot(costs)
plt.ylabel('cost')
plt.xlabel('iterations (x1,000)')
plt.title("Learning rate =" + str(learning_rate))
plt.show()
return parameters
Explanation: Each dot corresponds to a position on the football field where a football player has hit the ball with his/her head after the French goal keeper has shot the ball from the left side of the football field.
- If the dot is blue, it means the French player managed to hit the ball with his/her head
- If the dot is red, it means the other team's player hit the ball with their head
Your goal: Use a deep learning model to find the positions on the field where the goalkeeper should kick the ball.
Analysis of the dataset: This dataset is a little noisy, but it looks like a diagonal line separating the upper left half (blue) from the lower right half (red) would work well.
You will first try a non-regularized model. Then you'll learn how to regularize it and decide which model you will choose to solve the French Football Corporation's problem.
1 - Non-regularized model
You will use the following neural network (already implemented for you below). This model can be used:
- in regularization mode -- by setting the lambd input to a non-zero value. We use "lambd" instead of "lambda" because "lambda" is a reserved keyword in Python.
- in dropout mode -- by setting the keep_prob to a value less than one
You will first try the model without any regularization. Then, you will implement:
- L2 regularization -- functions: "compute_cost_with_regularization()" and "backward_propagation_with_regularization()"
- Dropout -- functions: "forward_propagation_with_dropout()" and "backward_propagation_with_dropout()"
In each part, you will run this model with the correct inputs so that it calls the functions you've implemented. Take a look at the code below to familiarize yourself with the model.
End of explanation
parameters = model(train_X, train_Y)
print ("On the training set:")
predictions_train = predict(train_X, train_Y, parameters)
print ("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)
Explanation: Let's train the model without any regularization, and observe the accuracy on the train/test sets.
End of explanation
plt.title("Model without regularization")
axes = plt.gca()
axes.set_xlim([-0.75,0.40])
axes.set_ylim([-0.75,0.65])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
Explanation: The train accuracy is 94.8% while the test accuracy is 91.5%. This is the baseline model (you will observe the impact of regularization on this model). Run the following code to plot the decision boundary of your model.
End of explanation
# GRADED FUNCTION: compute_cost_with_regularization
def compute_cost_with_regularization(A3, Y, parameters, lambd):
Implement the cost function with L2 regularization. See formula (2) above.
Arguments:
A3 -- post-activation, output of forward propagation, of shape (output size, number of examples)
Y -- "true" labels vector, of shape (output size, number of examples)
parameters -- python dictionary containing parameters of the model
Returns:
cost - value of the regularized loss function (formula (2))
m = Y.shape[1]
W1 = parameters["W1"]
W2 = parameters["W2"]
W3 = parameters["W3"]
cross_entropy_cost = compute_cost(A3, Y) # This gives you the cross-entropy part of the cost
### START CODE HERE ### (approx. 1 line)
L2_regularization_cost = 1/m * lambd/2 * ( np.sum(np.square(W1)) + np.sum(np.square(W2)) + np.sum(np.square(W3)))
### END CODE HERE ###
cost = cross_entropy_cost + L2_regularization_cost
return cost
A3, Y_assess, parameters = compute_cost_with_regularization_test_case()
print("cost = " + str(compute_cost_with_regularization(A3, Y_assess, parameters, lambd = 0.1)))
Explanation: The non-regularized model is obviously overfitting the training set. It is fitting the noisy points! Lets now look at two techniques to reduce overfitting.
2 - L2 Regularization
The standard way to avoid overfitting is called L2 regularization. It consists of appropriately modifying your cost function, from:
$$J = -\frac{1}{m} \sum\limits_{i = 1}^{m} \large{(}\small y^{(i)}\log\left(a^{[L](i)}\right) + (1-y^{(i)})\log\left(1- a^{[L](i)}\right) \large{)} \tag{1}$$
To:
$$J_{regularized} = \small \underbrace{-\frac{1}{m} \sum\limits_{i = 1}^{m} \large{(}\small y^{(i)}\log\left(a^{[L](i)}\right) + (1-y^{(i)})\log\left(1- a^{[L](i)}\right) \large{)} }_\text{cross-entropy cost} + \underbrace{\frac{1}{m} \frac{\lambda}{2} \sum\limits_l\sum\limits_k\sum\limits_j W_{k,j}^{[l]2} }_\text{L2 regularization cost} \tag{2}$$
Let's modify your cost and observe the consequences.
Exercise: Implement compute_cost_with_regularization() which computes the cost given by formula (2). To calculate $\sum\limits_k\sum\limits_j W_{k,j}^{[l]2}$ , use :
python
np.sum(np.square(Wl))
Note that you have to do this for $W^{[1]}$, $W^{[2]}$ and $W^{[3]}$, then sum the three terms and multiply by $ \frac{1}{m} \frac{\lambda}{2} $.
End of explanation
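As a quick sanity check of the formula, here is a toy sketch with made-up weight matrices (not part of the graded code):
import numpy as np
W1_toy = np.array([[1., 2.], [3., 4.]])
W2_toy = np.array([[0.5, -0.5]])
lambd_toy, m_toy = 0.1, 5
l2_term = 1/m_toy * lambd_toy/2 * (np.sum(np.square(W1_toy)) + np.sum(np.square(W2_toy)))
print(l2_term)  # 0.01 * 30.5 = 0.305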
# GRADED FUNCTION: backward_propagation_with_regularization
def backward_propagation_with_regularization(X, Y, cache, lambd):
Implements the backward propagation of our baseline model to which we added an L2 regularization.
Arguments:
X -- input dataset, of shape (input size, number of examples)
Y -- "true" labels vector, of shape (output size, number of examples)
cache -- cache output from forward_propagation()
lambd -- regularization hyperparameter, scalar
Returns:
gradients -- A dictionary with the gradients with respect to each parameter, activation and pre-activation variables
m = X.shape[1]
(Z1, A1, W1, b1, Z2, A2, W2, b2, Z3, A3, W3, b3) = cache
dZ3 = A3 - Y
### START CODE HERE ### (approx. 1 line)
dW3 = 1./m * np.dot(dZ3, A2.T) + lambd/m * W3
### END CODE HERE ###
db3 = 1./m * np.sum(dZ3, axis=1, keepdims = True)
dA2 = np.dot(W3.T, dZ3)
dZ2 = np.multiply(dA2, np.int64(A2 > 0))
### START CODE HERE ### (approx. 1 line)
dW2 = 1./m * np.dot(dZ2, A1.T) + lambd/m * W2
### END CODE HERE ###
db2 = 1./m * np.sum(dZ2, axis=1, keepdims = True)
dA1 = np.dot(W2.T, dZ2)
dZ1 = np.multiply(dA1, np.int64(A1 > 0))
### START CODE HERE ### (approx. 1 line)
dW1 = 1./m * np.dot(dZ1, X.T) + lambd/m * W1
### END CODE HERE ###
db1 = 1./m * np.sum(dZ1, axis=1, keepdims = True)
gradients = {"dZ3": dZ3, "dW3": dW3, "db3": db3,"dA2": dA2,
"dZ2": dZ2, "dW2": dW2, "db2": db2, "dA1": dA1,
"dZ1": dZ1, "dW1": dW1, "db1": db1}
return gradients
X_assess, Y_assess, cache = backward_propagation_with_regularization_test_case()
grads = backward_propagation_with_regularization(X_assess, Y_assess, cache, lambd = 0.7)
print ("dW1 = "+ str(grads["dW1"]))
print ("dW2 = "+ str(grads["dW2"]))
print ("dW3 = "+ str(grads["dW3"]))
Explanation: Expected Output:
<table>
<tr>
<td>
**cost**
</td>
<td>
1.78648594516
</td>
</tr>
</table>
Of course, because you changed the cost, you have to change backward propagation as well! All the gradients have to be computed with respect to this new cost.
Exercise: Implement the changes needed in backward propagation to take into account regularization. The changes only concern dW1, dW2 and dW3. For each, you have to add the regularization term's gradient ($\frac{d}{dW} ( \frac{1}{2}\frac{\lambda}{m} W^2) = \frac{\lambda}{m} W$).
End of explanation
parameters = model(train_X, train_Y, lambd = 0.7)
print ("On the train set:")
predictions_train = predict(train_X, train_Y, parameters)
print ("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)
Explanation: Expected Output:
<table>
<tr>
<td>
**dW1**
</td>
<td>
[[-0.25604646 0.12298827 -0.28297129]
[-0.17706303 0.34536094 -0.4410571 ]]
</td>
</tr>
<tr>
<td>
**dW2**
</td>
<td>
[[ 0.79276486 0.85133918]
[-0.0957219 -0.01720463]
[-0.13100772 -0.03750433]]
</td>
</tr>
<tr>
<td>
**dW3**
</td>
<td>
[[-1.77691347 -0.11832879 -0.09397446]]
</td>
</tr>
</table>
Let's now run the model with L2 regularization $(\lambda = 0.7)$. The model() function will call:
- compute_cost_with_regularization instead of compute_cost
- backward_propagation_with_regularization instead of backward_propagation
End of explanation
plt.title("Model with L2-regularization")
axes = plt.gca()
axes.set_xlim([-0.75,0.40])
axes.set_ylim([-0.75,0.65])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
Explanation: Congrats, the test set accuracy increased to 93%. You have saved the French football team!
You are not overfitting the training data anymore. Let's plot the decision boundary.
End of explanation
# GRADED FUNCTION: forward_propagation_with_dropout
def forward_propagation_with_dropout(X, parameters, keep_prob = 0.5):
Implements the forward propagation: LINEAR -> RELU + DROPOUT -> LINEAR -> RELU + DROPOUT -> LINEAR -> SIGMOID.
Arguments:
X -- input dataset, of shape (2, number of examples)
parameters -- python dictionary containing your parameters "W1", "b1", "W2", "b2", "W3", "b3":
W1 -- weight matrix of shape (20, 2)
b1 -- bias vector of shape (20, 1)
W2 -- weight matrix of shape (3, 20)
b2 -- bias vector of shape (3, 1)
W3 -- weight matrix of shape (1, 3)
b3 -- bias vector of shape (1, 1)
keep_prob - probability of keeping a neuron active during drop-out, scalar
Returns:
A3 -- last activation value, output of the forward propagation, of shape (1,1)
cache -- tuple, information stored for computing the backward propagation
np.random.seed(1)
# retrieve parameters
W1 = parameters["W1"]
b1 = parameters["b1"]
W2 = parameters["W2"]
b2 = parameters["b2"]
W3 = parameters["W3"]
b3 = parameters["b3"]
# LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID
Z1 = np.dot(W1, X) + b1
A1 = relu(Z1)
### START CODE HERE ### (approx. 4 lines) # Steps 1-4 below correspond to the Steps 1-4 described above.
D1 = np.random.rand(A1.shape[0], A1.shape[1]) # Step 1: initialize matrix D1 = np.random.rand(..., ...)
D1 = D1 < keep_prob # Step 2: convert entries of D1 to 0 or 1 (using keep_prob as the threshold)
A1 = A1 * D1 # Step 3: shut down some neurons of A1
A1 = A1 / keep_prob # Step 4: scale the value of neurons that haven't been shut down
### END CODE HERE ###
Z2 = np.dot(W2, A1) + b2
A2 = relu(Z2)
### START CODE HERE ### (approx. 4 lines)
D2 = np.random.rand(A2.shape[0], A2.shape[1]) # Step 1: initialize matrix D2 = np.random.rand(..., ...)
D2 = D2 < keep_prob # Step 2: convert entries of D2 to 0 or 1 (using keep_prob as the threshold)
A2 = A2 * D2 # Step 3: shut down some neurons of A2
A2 = A2 / keep_prob # Step 4: scale the value of neurons that haven't been shut down
### END CODE HERE ###
Z3 = np.dot(W3, A2) + b3
A3 = sigmoid(Z3)
cache = (Z1, D1, A1, W1, b1, Z2, D2, A2, W2, b2, Z3, A3, W3, b3)
return A3, cache
X_assess, parameters = forward_propagation_with_dropout_test_case()
A3, cache = forward_propagation_with_dropout(X_assess, parameters, keep_prob = 0.7)
print ("A3 = " + str(A3))
Explanation: Observations:
- The value of $\lambda$ is a hyperparameter that you can tune using a dev set.
- L2 regularization makes your decision boundary smoother. If $\lambda$ is too large, it is also possible to "oversmooth", resulting in a model with high bias.
What is L2-regularization actually doing?:
L2-regularization relies on the assumption that a model with small weights is simpler than a model with large weights. Thus, by penalizing the square values of the weights in the cost function you drive all the weights to smaller values. It becomes too costly for the cost to have large weights! This leads to a smoother model in which the output changes more slowly as the input changes.
<font color='blue'>
What you should remember -- the implications of L2-regularization on:
- The cost computation:
- A regularization term is added to the cost
- The backpropagation function:
- There are extra terms in the gradients with respect to weight matrices
- Weights end up smaller ("weight decay"):
- Weights are pushed to smaller values.
3 - Dropout
Finally, dropout is a widely used regularization technique that is specific to deep learning.
It randomly shuts down some neurons in each iteration. Watch these two videos to see what this means!
<!--
To understand drop-out, consider this conversation with a friend:
- Friend: "Why do you need all these neurons to train your network and classify images?".
- You: "Because each neuron contains a weight and can learn specific features/details/shape of an image. The more neurons I have, the more featurse my model learns!"
- Friend: "I see, but are you sure that your neurons are learning different features and not all the same features?"
- You: "Good point... Neurons in the same layer actually don't talk to each other. It should be definitly possible that they learn the same image features/shapes/forms/details... which would be redundant. There should be a solution."
!-->
<center>
<video width="620" height="440" src="images/dropout1_kiank.mp4" type="video/mp4" controls>
</video>
</center>
<br>
<caption><center> <u> Figure 2 </u>: Drop-out on the second hidden layer. <br> At each iteration, you shut down (= set to zero) each neuron of a layer with probability $1 - keep\_prob$ or keep it with probability $keep\_prob$ (50% here). The dropped neurons don't contribute to the training in both the forward and backward propagations of the iteration. </center></caption>
<center>
<video width="620" height="440" src="images/dropout2_kiank.mp4" type="video/mp4" controls>
</video>
</center>
<caption><center> <u> Figure 3 </u>: Drop-out on the first and third hidden layers. <br> $1^{st}$ layer: we shut down on average 40% of the neurons. $3^{rd}$ layer: we shut down on average 20% of the neurons. </center></caption>
When you shut some neurons down, you actually modify your model. The idea behind drop-out is that at each iteration, you train a different model that uses only a subset of your neurons. With dropout, your neurons thus become less sensitive to the activation of one other specific neuron, because that other neuron might be shut down at any time.
3.1 - Forward propagation with dropout
Exercise: Implement the forward propagation with dropout. You are using a 3 layer neural network, and will add dropout to the first and second hidden layers. We will not apply dropout to the input layer or output layer.
Instructions:
You would like to shut down some neurons in the first and second layers. To do that, you are going to carry out 4 Steps:
1. In lecture, we discussed creating a variable $d^{[1]}$ with the same shape as $a^{[1]}$ using np.random.rand() to randomly get numbers between 0 and 1. Here, you will use a vectorized implementation, so create a random matrix $D^{[1]} = [d^{[1](1)} d^{[1](2)} ... d^{[1](m)}]$ of the same dimension as $A^{[1]}$.
2. Set each entry of $D^{[1]}$ to be 0 with probability (1-keep_prob) or 1 with probability (keep_prob), by thresholding values in $D^{[1]}$ appropriately. Hint: to set all the entries of a matrix X to 0 (if entry is less than 0.5) or 1 (if entry is more than 0.5) you would do: X = (X > 0.5). Note that 0 and 1 are respectively equivalent to False and True.
3. Set $A^{[1]}$ to $A^{[1]} * D^{[1]}$. (You are shutting down some neurons). You can think of $D^{[1]}$ as a mask, so that when it is multiplied with another matrix, it shuts down some of the values.
4. Divide $A^{[1]}$ by keep_prob. By doing this you are assuring that the result of the cost will still have the same expected value as without drop-out. (This technique is also called inverted dropout.)
End of explanation
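Here is a tiny standalone demo of the inverted-dropout mask on a made-up activation matrix, mirroring Steps 1-4 above (illustrative only, not part of the graded code):
import numpy as np
np.random.seed(0)
A_toy = np.random.rand(3, 4)                           # pretend hidden-layer activations
keep_prob_toy = 0.8
D_toy = np.random.rand(*A_toy.shape) < keep_prob_toy   # Steps 1-2: random boolean mask
A_dropped = (A_toy * D_toy) / keep_prob_toy            # Steps 3-4: shut down some units, rescale the rest
print(D_toy)
print(A_dropped)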
# GRADED FUNCTION: backward_propagation_with_dropout
def backward_propagation_with_dropout(X, Y, cache, keep_prob):
Implements the backward propagation of our baseline model to which we added dropout.
Arguments:
X -- input dataset, of shape (2, number of examples)
Y -- "true" labels vector, of shape (output size, number of examples)
cache -- cache output from forward_propagation_with_dropout()
keep_prob - probability of keeping a neuron active during drop-out, scalar
Returns:
gradients -- A dictionary with the gradients with respect to each parameter, activation and pre-activation variables
m = X.shape[1]
(Z1, D1, A1, W1, b1, Z2, D2, A2, W2, b2, Z3, A3, W3, b3) = cache
dZ3 = A3 - Y
dW3 = 1./m * np.dot(dZ3, A2.T)
db3 = 1./m * np.sum(dZ3, axis=1, keepdims = True)
dA2 = np.dot(W3.T, dZ3)
### START CODE HERE ### (≈ 2 lines of code)
dA2 = dA2 * D2 # Step 1: Apply mask D2 to shut down the same neurons as during the forward propagation
dA2 = dA2 / keep_prob # Step 2: Scale the value of neurons that haven't been shut down
### END CODE HERE ###
dZ2 = np.multiply(dA2, np.int64(A2 > 0))
dW2 = 1./m * np.dot(dZ2, A1.T)
db2 = 1./m * np.sum(dZ2, axis=1, keepdims = True)
dA1 = np.dot(W2.T, dZ2)
### START CODE HERE ### (≈ 2 lines of code)
dA1 = dA1 * D1 # Step 1: Apply mask D1 to shut down the same neurons as during the forward propagation
dA1 = dA1 / keep_prob # Step 2: Scale the value of neurons that haven't been shut down
### END CODE HERE ###
dZ1 = np.multiply(dA1, np.int64(A1 > 0))
dW1 = 1./m * np.dot(dZ1, X.T)
db1 = 1./m * np.sum(dZ1, axis=1, keepdims = True)
gradients = {"dZ3": dZ3, "dW3": dW3, "db3": db3,"dA2": dA2,
"dZ2": dZ2, "dW2": dW2, "db2": db2, "dA1": dA1,
"dZ1": dZ1, "dW1": dW1, "db1": db1}
return gradients
X_assess, Y_assess, cache = backward_propagation_with_dropout_test_case()
gradients = backward_propagation_with_dropout(X_assess, Y_assess, cache, keep_prob = 0.8)
print ("dA1 = " + str(gradients["dA1"]))
print ("dA2 = " + str(gradients["dA2"]))
Explanation: Expected Output:
<table>
<tr>
<td>
**A3**
</td>
<td>
[[ 0.36974721 0.00305176 0.04565099 0.49683389 0.36974721]]
</td>
</tr>
</table>
3.2 - Backward propagation with dropout
Exercise: Implement the backward propagation with dropout. As before, you are training a 3 layer network. Add dropout to the first and second hidden layers, using the masks $D^{[1]}$ and $D^{[2]}$ stored in the cache.
Instruction:
Backpropagation with dropout is actually quite easy. You will have to carry out 2 Steps:
1. You had previously shut down some neurons during forward propagation, by applying a mask $D^{[1]}$ to A1. In backpropagation, you will have to shut down the same neurons, by reapplying the same mask $D^{[1]}$ to dA1.
2. During forward propagation, you had divided A1 by keep_prob. In backpropagation, you'll therefore have to divide dA1 by keep_prob again (the calculus interpretation is that if $A^{[1]}$ is scaled by keep_prob, then its derivative $dA^{[1]}$ is also scaled by the same keep_prob).
End of explanation
parameters = model(train_X, train_Y, keep_prob = 0.86, learning_rate = 0.3)
print ("On the train set:")
predictions_train = predict(train_X, train_Y, parameters)
print ("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)
Explanation: Expected Output:
<table>
<tr>
<td>
**dA1**
</td>
<td>
[[ 0.36544439 0. -0.00188233 0. -0.17408748]
[ 0.65515713 0. -0.00337459 0. -0. ]]
</td>
</tr>
<tr>
<td>
**dA2**
</td>
<td>
[[ 0.58180856 0. -0.00299679 0. -0.27715731]
[ 0. 0.53159854 -0. 0.53159854 -0.34089673]
[ 0. 0. -0.00292733 0. -0. ]]
</td>
</tr>
</table>
Let's now run the model with dropout (keep_prob = 0.86). It means at every iteration you shut down each neurons of layer 1 and 2 with 14% probability. The function model() will now call:
- forward_propagation_with_dropout instead of forward_propagation.
- backward_propagation_with_dropout instead of backward_propagation.
End of explanation
plt.title("Model with dropout")
axes = plt.gca()
axes.set_xlim([-0.75,0.40])
axes.set_ylim([-0.75,0.65])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
Explanation: Dropout works great! The test accuracy has increased again (to 95%)! Your model is not overfitting the training set and does a great job on the test set. The French football team will be forever grateful to you!
Run the code below to plot the decision boundary.
End of explanation |
12,503 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Imports and data
Step1: These data (~71 million rows) were taken from https
Step2: Apply any function in the fastest available manner
When possible, the vectorized form of the function is used for 100x the speed of pandas
When the vectorized form is not available, dask parallel processing is utilized for 10x the speed of pandas
Step4: But when possible, you should still write code in a vectorized format
Step5: When you can't write code in a vectorized format, swifter still makes parallel processing easy
Step6: Multiple columns apply example
Step7: Applymap example
Step8: Rolling objects apply example
Step9: Resampler apply example
Step10: Modin apply example | Python Code:
import pandas as pd
import numpy as np
import modin.pandas as md
import swifter
Explanation: Imports and data
End of explanation
trips = pd.read_csv('trip.csv')
data = pd.read_csv('status.csv')
print(data.shape)
data.head()
Explanation: These data (~71 million rows) were taken from https://www.kaggle.com/benhamner/sf-bay-area-bike-share/data
End of explanation
def bikes_proportion(x, max_x):
return x * 1.0 / max_x
%time data['bike_prop'] = data['bikes_available'].swifter.apply(bikes_proportion, max_x=np.max(data['bikes_available']))
Explanation: Apply any function in the fastest available manner
When possible, the vectorized form of the function is used for 100x the speed of pandas
End of explanation
def gt_5_bikes(x):
if x > 5:
return True
else:
return False
%time data['gt_5_bikes'] = data['bikes_available'].swifter.apply(gt_5_bikes)
Explanation: When the vectorized form is not available, dask parallel processing is utilized for 10x the speed of pandas
End of explanation
def gt_5_bikes_vectorized(x):
return np.where(x > 5, True, False)
%time data['gt_5_bikes_vec'] = data['bikes_available'].swifter.apply(gt_5_bikes_vectorized)
data.head()
Explanation: But when possible, you should still write code in a vectorized format
End of explanation
%time data['date'] = data['time'].swifter.apply(pd.to_datetime)
def convert_to_human(datetime):
return datetime.day_name() + ', the ' + str(datetime.day) + 'th day of ' + datetime.strftime("%B") + ', ' + str(datetime.year)
%time data['readable_date'] = data['date'].swifter.apply(convert_to_human)
data.head()
Explanation: When you can't write code in a vectorized format, swifter still makes parallel processing easy
End of explanation
def bikes_per_dock_availability_ratio(bikes_avail, docks_avail):
return bikes_avail / docks_avail
%time data["bikes_available_per_dock_available"] = data[['bikes_available', 'docks_available']].swifter.apply(lambda row: bikes_per_dock_availability_ratio(row["bikes_available"], row["docks_available"]))
data.head()
Explanation: Multiple columns apply example
End of explanation
data[["bikes_available", "docks_available"]] = data[["bikes_available", "docks_available"]].swifter.applymap(float)
Explanation: Applymap example
End of explanation
data.head()
%time data["rolling_sum_bikes_available"] = data['bikes_available'].swifter.rolling(10).apply(sum)
data.iloc[10:20,:]
Explanation: Rolling objects apply example
End of explanation
data.set_index("date", inplace=True)
%time data["daily_avg_bikes_available"] = data["bikes_available"].swifter.resample("1d").apply(np.mean)
Explanation: Resampler apply example
End of explanation
modin_data = md.DataFrame(data)
%time modin_data["bikes_available_plus1"] = modin_data["bikes_available"].swifter.apply(lambda x: x+1)
modin_data.head()
Explanation: Modin apply example
End of explanation |
12,504 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Anna KaRNNa
In this notebook, I'll build a character-wise RNN trained on Anna Karenina, one of my all-time favorite books. It'll be able to generate new text based on the text from the book.
This network is based off of Andrej Karpathy's post on RNNs and implementation in Torch. Also, some information here at r2rt and from Sherjil Ozair on GitHub. Below is the general architecture of the character-wise RNN.
<img src="assets/charseq.jpeg" width="500">
Step1: First we'll load the text file and convert it into integers for our network to use.
Step3: Now I need to split up the data into batches, and into training and validation sets. I should be making a test set here, but I'm not going to worry about that. My test will be if the network can generate new text.
Here I'll make both input and target arrays. The targets are the same as the inputs, except shifted one character over. I'll also drop the last bit of data so that I'll only have completely full batches.
The idea here is to make a 2D matrix where the number of rows is equal to the batch size (the number of sequences fed through the network in parallel). Each row will be one long concatenated string from the character data. We'll split this data into a training set and validation set using the split_frac keyword. This will keep 90% of the batches in the training set, the other 10% in the validation set.
Step4: I'll write another function to grab batches out of the arrays made by split data. Here each batch will be a sliding window on these arrays with size batch_size X num_steps. For example, if we want our network to train on a sequence of 100 characters, num_steps = 100. For the next batch, we'll shift this window the next sequence of num_steps characters. In this way we can feed batches to the network and the cell states will continue through on each batch.
Step5: Hyperparameters
Here I'm defining the hyperparameters for the network. The two you probably haven't seen before are lstm_size and num_layers. These set the number of hidden units in the LSTM layers and the number of LSTM layers, respectively. Of course, making these bigger will improve the network's performance but you'll have to watch out for overfitting. If your validation loss is much larger than the training loss, you're probably overfitting. Decrease the size of the network or decrease the dropout keep probability.
Step6: Write out the graph for TensorBoard
Step7: Training
Time for training which is is pretty straightforward. Here I pass in some data, and get an LSTM state back. Then I pass that state back in to the network so the next batch can continue the state from the previous batch. And every so often (set by save_every_n) I calculate the validation loss and save a checkpoint.
Step8: Sampling
Now that the network is trained, we'll can use it to generate new text. The idea is that we pass in a character, then the network will predict the next character. We can use the new one, to predict the next one. And we keep doing this to generate all new text. I also included some functionality to prime the network with some text by passing in a string and building up a state from that.
The network gives us predictions for each character. To reduce noise and make things a little less random, I'm going to only choose a new character from the top N most likely characters. | Python Code:
import time
from collections import namedtuple
import numpy as np
import tensorflow as tf
print(tf.__version__)
Explanation: Anna KaRNNa
In this notebook, I'll build a character-wise RNN trained on Anna Karenina, one of my all-time favorite books. It'll be able to generate new text based on the text from the book.
This network is based off of Andrej Karpathy's post on RNNs and implementation in Torch. Also, some information here at r2rt and from Sherjil Ozair on GitHub. Below is the general architecture of the character-wise RNN.
<img src="assets/charseq.jpeg" width="500">
End of explanation
with open('anna.txt', 'r') as f:
text=f.read()
vocab = set(text)
vocab_to_int = {c: i for i, c in enumerate(vocab)}
int_to_vocab = dict(enumerate(vocab))
chars = np.array([vocab_to_int[c] for c in text], dtype=np.int32)
text[:100]
chars[:100]
Explanation: First we'll load the text file and convert it into integers for our network to use.
End of explanation
def split_data(chars, batch_size, num_steps, split_frac=0.9):
Split character data into training and validation sets, inputs and targets for each set.
Arguments
---------
chars: character array
batch_size: Size of examples in each of batch
num_steps: Number of sequence steps to keep in the input and pass to the network
split_frac: Fraction of batches to keep in the training set
Returns train_x, train_y, val_x, val_y
slice_size = batch_size * num_steps
n_batches = int(len(chars) / slice_size)
# Drop the last few characters to make only full batches
x = chars[: n_batches*slice_size]
y = chars[1: n_batches*slice_size + 1]
# Split the data into batch_size slices, then stack them into a 2D matrix
x = np.stack(np.split(x, batch_size))
y = np.stack(np.split(y, batch_size))
# Now x and y are arrays with dimensions batch_size x n_batches*num_steps
# Split into training and validation sets, keep the virst split_frac batches for training
split_idx = int(n_batches*split_frac)
train_x, train_y= x[:, :split_idx*num_steps], y[:, :split_idx*num_steps]
val_x, val_y = x[:, split_idx*num_steps:], y[:, split_idx*num_steps:]
return train_x, train_y, val_x, val_y
train_x, train_y, val_x, val_y = split_data(chars, 10, 200)
train_x.shape
train_x[:,:10]
Explanation: Now I need to split up the data into batches, and into training and validation sets. I should be making a test set here, but I'm not going to worry about that. My test will be if the network can generate new text.
Here I'll make both input and target arrays. The targets are the same as the inputs, except shifted one character over. I'll also drop the last bit of data so that I'll only have completely full batches.
The idea here is to make a 2D matrix where the number of rows is equal to the batch size (the number of sequences fed through the network in parallel). Each row will be one long concatenated string from the character data. We'll split this data into a training set and validation set using the split_frac keyword. This will keep 90% of the batches in the training set, the other 10% in the validation set.
End of explanation
def get_batch(arrs, num_steps):
batch_size, slice_size = arrs[0].shape
n_batches = int(slice_size/num_steps)
for b in range(n_batches):
yield [x[:, b*num_steps: (b+1)*num_steps] for x in arrs]
def build_rnn(num_classes, batch_size=50, num_steps=50, lstm_size=128, num_layers=2,
learning_rate=0.001, grad_clip=5, sampling=False):
if sampling == True:
batch_size, num_steps = 1, 1
tf.reset_default_graph()
# Declare placeholders we'll feed into the graph
inputs = tf.placeholder(tf.int32, [batch_size, num_steps], name='inputs')
x_one_hot = tf.one_hot(inputs, num_classes, name='x_one_hot')
targets = tf.placeholder(tf.int32, [batch_size, num_steps], name='targets')
y_one_hot = tf.one_hot(targets, num_classes, name='y_one_hot')
y_reshaped = tf.reshape(y_one_hot, [-1, num_classes])
keep_prob = tf.placeholder(tf.float32, name='keep_prob')
# Build the RNN layers
lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)
drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
cell = tf.contrib.rnn.MultiRNNCell([drop] * num_layers)
initial_state = cell.zero_state(batch_size, tf.float32)
# Run the data through the RNN layers
outputs, state = tf.nn.dynamic_rnn(cell, x_one_hot, initial_state=initial_state)
final_state = state
# Reshape output so it's a bunch of rows, one row for each cell output
seq_output = tf.concat(outputs, axis=1,name='seq_output')
output = tf.reshape(seq_output, [-1, lstm_size], name='graph_output')
# Now connect the RNN putputs to a softmax layer and calculate the cost
softmax_w = tf.Variable(tf.truncated_normal((lstm_size, num_classes), stddev=0.1),
name='softmax_w')
softmax_b = tf.Variable(tf.zeros(num_classes), name='softmax_b')
logits = tf.matmul(output, softmax_w) + softmax_b
preds = tf.nn.softmax(logits, name='predictions')
loss = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y_reshaped, name='loss')
cost = tf.reduce_mean(loss, name='cost')
# Optimizer for training, using gradient clipping to control exploding gradients
tvars = tf.trainable_variables()
grads, _ = tf.clip_by_global_norm(tf.gradients(cost, tvars), grad_clip)
train_op = tf.train.AdamOptimizer(learning_rate)
optimizer = train_op.apply_gradients(zip(grads, tvars))
# Export the nodes
export_nodes = ['inputs', 'targets', 'initial_state', 'final_state',
'keep_prob', 'cost', 'preds', 'optimizer']
Graph = namedtuple('Graph', export_nodes)
local_dict = locals()
graph = Graph(*[local_dict[each] for each in export_nodes])
return graph
Explanation: I'll write another function to grab batches out of the arrays made by split data. Here each batch will be a sliding window on these arrays with size batch_size X num_steps. For example, if we want our network to train on a sequence of 100 characters, num_steps = 100. For the next batch, we'll shift this window the next sequence of num_steps characters. In this way we can feed batches to the network and the cell states will continue through on each batch.
End of explanation
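A quick way to sanity-check the generator (assuming train_x and train_y from the split_data(chars, 10, 200) call above) is to pull a single batch and look at its shape:
x, y = next(get_batch([train_x, train_y], 200))
print(x.shape, y.shape)  # each should be (batch_size, num_steps), here (10, 200)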
batch_size = 100
num_steps = 100
lstm_size = 512
num_layers = 2
learning_rate = 0.001
Explanation: Hyperparameters
Here I'm defining the hyperparameters for the network. The two you probably haven't seen before are lstm_size and num_layers. These set the number of hidden units in the LSTM layers and the number of LSTM layers, respectively. Of course, making these bigger will improve the network's performance but you'll have to watch out for overfitting. If your validation loss is much larger than the training loss, you're probably overfitting. Decrease the size of the network or decrease the dropout keep probability.
End of explanation
model = build_rnn(len(vocab),
batch_size=batch_size,
num_steps=num_steps,
learning_rate=learning_rate,
lstm_size=lstm_size,
num_layers=num_layers)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
file_writer = tf.summary.FileWriter('./logs/1', sess.graph)
Explanation: Write out the graph for TensorBoard
End of explanation
!mkdir -p checkpoints/anna
epochs = 1
save_every_n = 200
train_x, train_y, val_x, val_y = split_data(chars, batch_size, num_steps)
model = build_rnn(len(vocab),
batch_size=batch_size,
num_steps=num_steps,
learning_rate=learning_rate,
lstm_size=lstm_size,
num_layers=num_layers)
saver = tf.train.Saver(max_to_keep=100)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
# Use the line below to load a checkpoint and resume training
#saver.restore(sess, 'checkpoints/anna20.ckpt')
n_batches = int(train_x.shape[1]/num_steps)
iterations = n_batches * epochs
for e in range(epochs):
# Train network
new_state = sess.run(model.initial_state)
loss = 0
for b, (x, y) in enumerate(get_batch([train_x, train_y], num_steps), 1):
iteration = e*n_batches + b
start = time.time()
feed = {model.inputs: x,
model.targets: y,
model.keep_prob: 0.5,
model.initial_state: new_state}
batch_loss, new_state, _ = sess.run([model.cost, model.final_state, model.optimizer],
feed_dict=feed)
loss += batch_loss
end = time.time()
print('Epoch {}/{} '.format(e+1, epochs),
'Iteration {}/{}'.format(iteration, iterations),
'Training loss: {:.4f}'.format(loss/b),
'{:.4f} sec/batch'.format((end-start)))
if (iteration%save_every_n == 0) or (iteration == iterations):
# Check performance, notice dropout has been set to 1
val_loss = []
new_state = sess.run(model.initial_state)
for x, y in get_batch([val_x, val_y], num_steps):
feed = {model.inputs: x,
model.targets: y,
model.keep_prob: 1.,
model.initial_state: new_state}
batch_loss, new_state = sess.run([model.cost, model.final_state], feed_dict=feed)
val_loss.append(batch_loss)
print('Validation loss:', np.mean(val_loss),
'Saving checkpoint!')
saver.save(sess, "checkpoints/anna/i{}_l{}_{:.3f}.ckpt".format(iteration, lstm_size, np.mean(val_loss)))
tf.train.get_checkpoint_state('checkpoints/anna')
Explanation: Training
Time for training which is is pretty straightforward. Here I pass in some data, and get an LSTM state back. Then I pass that state back in to the network so the next batch can continue the state from the previous batch. And every so often (set by save_every_n) I calculate the validation loss and save a checkpoint.
End of explanation
def pick_top_n(preds, vocab_size, top_n=5):
p = np.squeeze(preds)
p[np.argsort(p)[:-top_n]] = 0
p = p / np.sum(p)
c = np.random.choice(vocab_size, 1, p=p)[0]
return c
def sample(checkpoint, n_samples, lstm_size, vocab_size, prime="The "):
prime = "Far"
samples = [c for c in prime]
model = build_rnn(vocab_size, lstm_size=lstm_size, sampling=True)
saver = tf.train.Saver()
with tf.Session() as sess:
saver.restore(sess, checkpoint)
new_state = sess.run(model.initial_state)
for c in prime:
x = np.zeros((1, 1))
x[0,0] = vocab_to_int[c]
feed = {model.inputs: x,
model.keep_prob: 1.,
model.initial_state: new_state}
preds, new_state = sess.run([model.preds, model.final_state],
feed_dict=feed)
c = pick_top_n(preds, len(vocab))
samples.append(int_to_vocab[c])
for i in range(n_samples):
x[0,0] = c
feed = {model.inputs: x,
model.keep_prob: 1.,
model.initial_state: new_state}
preds, new_state = sess.run([model.preds, model.final_state],
feed_dict=feed)
c = pick_top_n(preds, len(vocab))
samples.append(int_to_vocab[c])
return ''.join(samples)
checkpoint = "checkpoints/anna/i3560_l512_1.122.ckpt"
samp = sample(checkpoint, 2000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = "checkpoints/anna/i200_l512_2.432.ckpt"
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = "checkpoints/anna/i600_l512_1.750.ckpt"
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = "checkpoints/anna/i1000_l512_1.484.ckpt"
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
Explanation: Sampling
Now that the network is trained, we can use it to generate new text. The idea is that we pass in a character, then the network will predict the next character. We can use the new one to predict the next one. And we keep doing this to generate all new text. I also included some functionality to prime the network with some text by passing in a string and building up a state from that.
The network gives us predictions for each character. To reduce noise and make things a little less random, I'm going to only choose a new character from the top N most likely characters.
End of explanation |
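To see what pick_top_n does in isolation, here is a tiny standalone example on a fake 5-character vocabulary (added purely for illustration):

```python
import numpy as np

preds = np.array([[0.02, 0.50, 0.30, 0.10, 0.08]])  # fake softmax output for 5 "characters"
p = np.squeeze(preds)
p[np.argsort(p)[:-3]] = 0           # zero out everything except the top 3 probabilities
p = p / np.sum(p)                   # renormalize so the remaining probabilities sum to 1
c = np.random.choice(5, 1, p=p)[0]  # sample an index from the reduced distribution
print(c)                            # always one of 1, 2 or 3 here
```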
12,505 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Convolutional Autoencoder
In this example we will demonstrate how you can create a convolutional autoencoder in Gluon
Step1: Data
We will use the FashionMNIST dataset, which is of a similar format to MNIST but is richer and has more variance
Step2: Network
Step3: We can see that the original image goes from 28x28 = 784 pixels to a vector of length 32. That is a ~25x information compression rate.
Then the decoder brings back this compressed information to the original shape
Step4: Training loop
Step5: Testing reconstruction
We plot 10 images and their reconstruction by the autoencoder. The results are pretty good for a ~25x compression rate!
Step6: Manipulating latent space
We now use separately the encoder that takes an image to a latent vector and the decoder that transform a latent vector into images
We get two images from the testing set
Step7: We get the latent representations of the images by passing them through the network
Step8: We see that the latent vector is made of 32 components
Step9: We interpolate the two latent representations, vectors of 32 values, to get a new intermediate latent representation, pass it through the decoder and plot the resulting decoded image | Python Code:
import random
import matplotlib.pyplot as plt
import mxnet as mx
from mxnet import autograd, gluon
Explanation: Convolutional Autoencoder
In this example we will demonstrate how you can create a convolutional autoencoder in Gluon
End of explanation
batch_size = 512
ctx = mx.gpu() if len(mx.test_utils.list_gpus()) > 0 else mx.cpu()
transform = lambda x,y: (x.transpose((2,0,1)).astype('float32')/255., y)
train_dataset = gluon.data.vision.FashionMNIST(train=True)
test_dataset = gluon.data.vision.FashionMNIST(train=False)
train_dataset_t = train_dataset.transform(transform)
test_dataset_t = test_dataset.transform(transform)
train_data = gluon.data.DataLoader(train_dataset_t, batch_size=batch_size, last_batch='rollover', shuffle=True, num_workers=5)
test_data = gluon.data.DataLoader(test_dataset_t, batch_size=batch_size, last_batch='rollover', shuffle=True, num_workers=5)
plt.figure(figsize=(20,10))
for i in range(10):
ax = plt.subplot(1, 10, i+1)
ax.imshow(train_dataset[i][0].squeeze().asnumpy(), cmap='gray')
ax.axis('off')
Explanation: Data
We will use the FashionMNIST dataset, which is of a similar format to MNIST but is richer and has more variance
End of explanation
net = gluon.nn.HybridSequential(prefix='autoencoder_')
with net.name_scope():
# Encoder 1x28x28 -> 32x1x1
encoder = gluon.nn.HybridSequential(prefix='encoder_')
with encoder.name_scope():
encoder.add(
gluon.nn.Conv2D(channels=4, kernel_size=3, padding=1, strides=(2,2), activation='relu'),
gluon.nn.BatchNorm(),
gluon.nn.Conv2D(channels=8, kernel_size=3, padding=1, strides=(2,2), activation='relu'),
gluon.nn.BatchNorm(),
gluon.nn.Conv2D(channels=16, kernel_size=3, padding=1, strides=(2,2), activation='relu'),
gluon.nn.BatchNorm(),
gluon.nn.Conv2D(channels=32, kernel_size=3, padding=0, strides=(2,2),activation='relu'),
gluon.nn.BatchNorm()
)
decoder = gluon.nn.HybridSequential(prefix='decoder_')
# Decoder 32x1x1 -> 1x28x28
with decoder.name_scope():
decoder.add(
gluon.nn.Conv2D(channels=32, kernel_size=3, padding=2, activation='relu'),
gluon.nn.HybridLambda(lambda F, x: F.UpSampling(x, scale=2, sample_type='nearest')),
gluon.nn.BatchNorm(),
gluon.nn.Conv2D(channels=16, kernel_size=3, padding=1, activation='relu'),
gluon.nn.HybridLambda(lambda F, x: F.UpSampling(x, scale=2, sample_type='nearest')),
gluon.nn.BatchNorm(),
gluon.nn.Conv2D(channels=8, kernel_size=3, padding=2, activation='relu'),
gluon.nn.HybridLambda(lambda F, x: F.UpSampling(x, scale=2, sample_type='nearest')),
gluon.nn.BatchNorm(),
gluon.nn.Conv2D(channels=4, kernel_size=3, padding=1, activation='relu'),
gluon.nn.Conv2D(channels=1, kernel_size=3, padding=1, activation='sigmoid')
)
net.add(
encoder,
decoder
)
net.initialize(ctx=ctx)
net.summary(test_dataset_t[0][0].expand_dims(axis=0).as_in_context(ctx))
Explanation: Network
End of explanation
l2_loss = gluon.loss.L2Loss()
l1_loss = gluon.loss.L1Loss()
trainer = gluon.Trainer(net.collect_params(), 'adam', {'learning_rate': 0.001, 'wd':0.001})
net.hybridize(static_shape=True, static_alloc=True)
Explanation: We can see that the original image goes from 28x28 = 784 pixels to a vector of length 32. That is a ~25x information compression rate.
Then the decoder brings back this compressed information to the original shape
End of explanation
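One quick way to confirm the 784 → 32 compression claim is to push a dummy batch through the encoder and inspect the output shape. A small sketch, reusing `encoder` and `ctx` from the cells above:

```python
dummy = mx.nd.zeros((4, 1, 28, 28), ctx=ctx)  # a batch of 4 fake 28x28 images
code = encoder(dummy)
print(code.shape)                             # expected: (4, 32, 1, 1), i.e. 32 values per image
```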
epochs = 20
for e in range(epochs):
curr_loss = 0.
for i, (data, _) in enumerate(train_data):
data = data.as_in_context(ctx)
with autograd.record():
output = net(data)
# Compute the L2 and L1 losses between the original and the generated image
l2 = l2_loss(output.flatten(), data.flatten())
l1 = l1_loss(output.flatten(), data.flatten())
l = l2 + l1
l.backward()
trainer.step(data.shape[0])
curr_loss += l.mean()
print("Epoch [{}], Loss {}".format(e, curr_loss.asscalar()/(i+1)))
Explanation: Training loop
End of explanation
plt.figure(figsize=(20,4))
for i in range(10):
idx = random.randint(0, len(test_dataset))
img, _ = test_dataset[idx]
x, _ = test_dataset_t[idx]
data = x.as_in_context(ctx).expand_dims(axis=0)
output = net(data)
ax = plt.subplot(2, 10, i+1)
ax.imshow(img.squeeze().asnumpy(), cmap='gray')
ax.axis('off')
ax = plt.subplot(2, 10, 10+i+1)
ax.imshow((output[0].asnumpy() * 255.).transpose((1,2,0)).squeeze(), cmap='gray')
_ = ax.axis('off')
Explanation: Testing reconstruction
We plot 10 images and their reconstruction by the autoencoder. The results are pretty good for a ~25x compression rate!
End of explanation
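Visual inspection can be complemented with a simple numeric score. A minimal sketch that averages the per-sample L2 loss over a single test batch, reusing `net`, `l2_loss`, `test_data` and `ctx` defined above:

```python
total, n = 0., 0
for data, _ in test_data:
    data = data.as_in_context(ctx)
    recon = net(data)
    total += l2_loss(recon.flatten(), data.flatten()).mean().asscalar()
    n += 1
    break  # one batch is enough for a quick sanity check
print('mean L2 reconstruction loss on one test batch:', total / n)
```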
idx = random.randint(0, len(test_dataset))
img1, _ = test_dataset[idx]
x, _ = test_dataset_t[idx]
data1 = x.as_in_context(ctx).expand_dims(axis=0)
idx = random.randint(0, len(test_dataset))
img2, _ = test_dataset[idx]
x, _ = test_dataset_t[idx]
data2 = x.as_in_context(ctx).expand_dims(axis=0)
plt.figure(figsize=(2,2))
plt.imshow(img1.squeeze().asnumpy(), cmap='gray')
plt.show()
plt.figure(figsize=(2,2))
plt.imshow(img2.squeeze().asnumpy(), cmap='gray')
Explanation: Manipulating latent space
We now use the encoder, which takes an image to a latent vector, and the decoder, which transforms a latent vector back into an image, separately
We get two images from the testing set
End of explanation
latent1 = encoder(data1)
latent2 = encoder(data2)
Explanation: We get the latent representations of the images by passing them through the network
End of explanation
latent1.shape
Explanation: We see that the latent vector is made of 32 components
End of explanation
num = 10
plt.figure(figsize=(20, 5))
for i in range(int(num)):
    new_latent = latent2*(i+1)/num + latent1*(num-1-i)/num  # convex combination: the two weights sum to 1
output = decoder(new_latent)
#plot result
ax = plt.subplot(1, num, i+1)
ax.imshow((output[0].asnumpy() * 255.).transpose((1,2,0)).squeeze(), cmap='gray')
_ = ax.axis('off')
Explanation: We interpolate the two latent representations, vectors of 32 values, to get a new intermediate latent representation, pass it through the decoder and plot the resulting decoded image
End of explanation |
12,506 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
1.3 NumPy - Linear Algebra
NumPy is a package that is particularly useful for computations in linear algebra. Linear algebra will play a large role in machine learning.
A column vector with dimensions $N \times 1$
$$
X =
\begin{pmatrix}
x_{1} \\
x_{2} \\
\vdots \\
x_{N}
\end{pmatrix}
$$
and its transpose $\mathbf{x}^{T} = (x_{1}, x_{2},\ldots,x_{N})$ can be expressed in Python as follows
Step1: A column matrix in NumPy.
$$X =
\begin{pmatrix}
3 \\
4 \\
5 \\
6
\end{pmatrix}$$
Step2: And a row matrix in NumPy.
$$ X =
\begin{pmatrix}
3 & 4 & 5 & 6
\end{pmatrix}$$
Step3: Objects of type matrix
We already discussed general matrices in the previous documents
Step4: Operations on matrices
The determinant
Step5: The inverse matrix
Step6: Because $AA^{-1} = A^{-1}A = I$.
Wartości i wektory własne | Python Code:
import numpy as np
x = np.array([[1,2,3]]).T
xt = x.T
x.shape
xt.shape
Explanation: 1.3 NumPy - Linear Algebra
NumPy is a package that is particularly useful for computations in linear algebra. Linear algebra will play a large role in machine learning.
A column vector with dimensions $N \times 1$
$$
X =
\begin{pmatrix}
x_{1} \\
x_{2} \\
\vdots \\
x_{N}
\end{pmatrix}
$$
and its transpose $\mathbf{x}^{T} = (x_{1}, x_{2},\ldots,x_{N})$ can be expressed in Python as follows:
End of explanation
x = np.array([[3,4,5,6]]).T
x
Explanation: A column matrix in NumPy.
$$X =
\begin{pmatrix}
3 \\
4 \\
5 \\
6
\end{pmatrix}$$
End of explanation
x = np.array([[3,4,5,6]])
x
Explanation: And a row matrix in NumPy.
$$ X =
\begin{pmatrix}
3 & 4 & 5 & 6
\end{pmatrix}$$
End of explanation
x = np.array([1,2,3,4,5,6,7,8,9]).reshape(3,3)
x
X = np.matrix(x)
X
Explanation: Objects of type matrix
We already discussed general matrices in the previous documents:
$$A_{m,n} =
\begin{pmatrix}
a_{1,1} & a_{1,2} & \cdots & a_{1,n} \\
a_{2,1} & a_{2,2} & \cdots & a_{2,n} \\
\vdots & \vdots & \ddots & \vdots \\
a_{m,1} & a_{m,2} & \cdots & a_{m,n}
\end{pmatrix}$$
Besides objects of type array there is a specialized matrix object, for which the operations * (multiplication) and **-1 (inversion) are defined in the proper matrix sense (as opposed to the elementwise operations on array objects).
End of explanation
a = np.array([[3,-9],[2,5]])
np.linalg.det(a)
Explanation: Operations on matrices
The determinant
End of explanation
A = np.array([[-4,-2],[5,5]])
A
invA = np.linalg.inv(A)
invA
np.round(np.dot(A,invA))
Explanation: The inverse matrix
End of explanation
a = np.diag((1, 2, 3))
a
w,v = np.linalg.eig(a)
w
v
Explanation: Because $AA^{-1} = A^{-1}A = I$.
Eigenvalues and eigenvectors
End of explanation |
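As a quick check (a minimal sketch added here, not part of the original notebook): for a diagonalizable matrix the decomposition satisfies $A v_i = \lambda_i v_i$, which NumPy lets us verify for all eigenpairs at once.

```python
import numpy as np

a = np.diag((1, 2, 3))
w, v = np.linalg.eig(a)

# a @ v stacks the products A v_i as columns; v @ diag(w) stacks lambda_i v_i,
# so the two matrices should agree up to floating-point error.
assert np.allclose(a @ v, v @ np.diag(w))
```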
12,507 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Read HIV Data
Step1: Read in Clinical Data
Step2: Update clinical data with new data provided by Howard Fox
Step3: Clean up diabetes across annotation files
Step4: All of the patients are white or Caucasian
Step5: Sex is not recorded for the cases but they are all HIV+ men.
Step6: Fix BMI to unified labels
Step7: Trimming the clinical dataset
None of the patients are diabetic
Step8: All of the patients are hepatitis C negative
Step9: All patients are currently using anti-retoviral therepy, but 5 patients treatment is not classified as HAART
Step10: All patients are reported as adhererent, but a few are not 100% adherent
Step11: We have a wide varienty of regimens with 81 unique combinations of 35 drugs
Step12: These can be broken down into 8 regimen types
Step13: IADL = Instrumental activities of daily living
Step14: Global imparement
Step15: Patient's Assessment of Own Functioning Inventory (PAOFI)
Step16: RPR (Rapid plasma reagin) is a diagnostic used to detect syphilis
Step17: Batches
Step18: Beck Depression Inventory is a questionarre measuring depression levels (from Wikipedia)
* 0–13
Step19: Read in lab blood work data
abstract exam date onto exam year for anonymity
Step20: Dropping five patients because they don't look Kosher
Renormalizing cell percentages because some don't sum to 100%
Step21: As can be seen above, we have two groups of patients with respect to HIV duration, we don't really have the sample size to tease apart any differences other than this main distiction so for now I am just treating duration of HIV as a categorical variable (e.g. controls, short exposure and long exposure)
Step22: Patient Selection Criteria
There are a couple of female patients in the controls, we are going to get rid of those as all of the cases are males
Most of the HIV patients are under HAART therepy, there are a few that are not and we are going to filter those out for now and possibly look at them after the primary analysis
Step23: Read in HIV Methylation data
Read in quantile-normalized data, adjusted for cellular compositions and then normalized agin using BMIQ.
Step24: Read in data processed with BMIQ using Horvath's gold standard.
Step25: Adjust this data for cellular composition. This is done after the normalization to not mess around with Horvath's pipeline too much.
Step26: Set up Probe Filters | Python Code:
import os
if os.getcwd().endswith('Setup'):
os.chdir('..')
import NotebookImport
from Setup.Imports import *
Explanation: Read HIV Data
End of explanation
c1 = pd.read_excel(ucsd_path + 'DESIGN_Fox_v2_Samples-ChipLAyout-Clinical UNMC-UCSD methylomestudy.xlsx',
'HIV- samples from OldStudy', index_col=0)
c2 = pd.read_excel(ucsd_path + 'DESIGN_Fox_v2_Samples-ChipLAyout-Clinical UNMC-UCSD methylomestudy.xlsx',
'HIV+ samples', index_col=0)
clinical = c1.append(c2)
clinical['Sentrix_Position'] = clinical['Sentrix_Position\\'].map(lambda s: s[:-1])
del clinical['Sentrix_Position\\']
Explanation: Read in Clinical Data
End of explanation
age_new = pd.read_csv(ucsd_path + 'UpdatesAges-Infection.csv', index_col=0)
age = age_new.age.combine_first(clinical.age)
age.name= 'age'
clinical['age'] = age
l = 'estimated duration hiv (months)'
clinical[l] = age_new['Estimated Duration HIV+ (months)'].combine_first(clinical[l])
Explanation: Update clinical data with new data provided by Howard Fox
End of explanation
diabetes = clinical['diabetes'].combine_first(clinical['Diabetes @ 000'])
diabetes = diabetes.replace('N','no')
clinical['diabetes'] = diabetes
del clinical['Diabetes @ 000']
diabetes.value_counts()
Explanation: Clean up diabetes across annotation files
End of explanation
ethnicity = clinical.ethnicity
ethnicity = ethnicity.replace('wht','white')
ethnicity = ethnicity.replace('Caucasian - European','white')
clinical['ethnicity'] = ethnicity
ethnicity.value_counts()
Explanation: All of the patients are white or Caucasian
End of explanation
clinical['sex'] = clinical['sex'].fillna('M')
Explanation: Sex is not recorded for the cases but they are all HIV+ men.
End of explanation
bmi = clinical['bmi'].combine_first(clinical['BMI'])
clinical['BMI'] = bmi
clinical = clinical[clinical.columns.difference(['bmi'])]
#Do not import
bmi.hist()
current_usage = ["Current 'Other' dx", 'Current Alcohol dx',
'Current Bipolar I', 'Current Bipolar II',
'Current Cannabis dx', 'Current Cocaine dx',
'Current Dysthymia', 'Current Halucinogen dx',
'Current Inhalant dx', 'Current MDD',
'Current Methamphetamine dx', 'Current Opioid dx',
'Current PCP dx', 'Current Sedative dx',
'Any Current Substance dx']
current_usage = clinical[current_usage]
current_usage.dropna(how='all').apply(pd.value_counts).fillna(0).T
past_usage = ["LT 'Other' dx", 'LT Alcohol dx', 'LT Bipolar I',
'LT Bipolar II', 'LT Cannabis dx', 'LT Cocaine dx',
'LT Dysthymia', 'LT Halucinogen dx', 'LT Inhalant dx',
'LT MDD', 'LT Methamphetamine dx', 'LT Opioid dx',
'LT PCP dx', 'LT Sedative dx', 'Any LT Substance dx']
past_usage = clinical[past_usage]
past_usage.dropna(how='all').apply(pd.value_counts).fillna(0).T
Explanation: Fix BMI to unified labels
End of explanation
clinical.diabetes.value_counts()
Explanation: Trimming the clinical dataset
None of the patients are diabetic
End of explanation
clinical['HCV'].dropna(0).value_counts(0)
Explanation: All of the patients are hepatitis C negative
End of explanation
clinical['ARV History'].value_counts()
clinical['ARV Status'].value_counts()
Explanation: All patients are currently using antiretroviral therapy, but 5 patients' treatment is not classified as HAART
End of explanation
clinical['adherent'].value_counts()
#Do not import
clinical['adherence %'].hist()
Explanation: All patients are reported as adherent, but a few are not 100% adherent
End of explanation
reg = clinical['Current Regimen'].dropna().str.split('/').map(sorted)
drugs = {r for s in reg for r in s}
drug_mat = pd.DataFrame({i: {d: d in s for d in drugs} for i,s in
reg.iteritems()}).T
drug_mat.sum().order()
Explanation: We have a wide variety of regimens with 81 unique combinations of 35 drugs
End of explanation
clinical['Regimen Type'].value_counts()
kill_list = ['zhang id', 'diabetes', 'Methylation ID',
'Sentrix_ID','Sample_Plate','Sample_Well','Sentrix_Position',
]
drugs = ['ARV History', 'ARV Status', 'Current Regimen', 'Regimen Type',
'adherence %' ,'adherent']
left = [c for c in clinical if c not in past_usage and c not in current_usage
and c not in drugs]
age = clinical.age
clinical['BDI > 17'].value_counts()
clinical['ARV History'].value_counts()
clinical['CDC stage'].value_counts()
Explanation: These can be broken down into 8 regimen types
End of explanation
iadl = clinical.IADL
iadl.value_counts()
Explanation: IADL = Instrumental activities of daily living
End of explanation
clinical['global impairment'].value_counts()
Explanation: Global impairment
End of explanation
#Do not import
paofi = clinical['paofi total']
paofi.hist()
Explanation: Patient's Assessment of Own Functioning Inventory (PAOFI)
End of explanation
clinical.RPR.value_counts()
Explanation: RPR (Rapid plasma reagin) is a diagnostic used to detect syphilis
End of explanation
site = clinical.Site
site.value_counts()
clinical.Utox.value_counts()
Explanation: Batches
End of explanation
#Do not import
beck = clinical['beck total'].dropna()
beck.hist()
#Do not import
clinical['paofi total'].hist()
Explanation: Beck Depression Inventory is a questionnaire measuring depression levels (from Wikipedia)
* 0–13: minimal depression
* 14–19: mild depression
* 20–28: moderate depression
* 29–63: severe depression.
End of explanation
labs = pd.read_excel(ucsd_path + 'fox_methylation_labdata_073014.xlsx',
index_col=0)
labs['nb exam year']= labs['nb exam date'].map(lambda s: s.year)
del labs['nb exam date']
labs = labs.dropna(axis=1, how='all')
labs = labs.ix[labs.index.intersection(clinical.index)]
Explanation: Read in lab blood work data
abstract exam date onto exam year for anonymity
End of explanation
#Do not import
fig, axs = subplots(1,2, figsize=(9,4))
clinical.age.hist(ax=axs[0])
clinical['estimated duration hiv (months)'].hist(ax=axs[1])
axs[0].set_xlabel('Age')
axs[1].set_xlabel('estimated duration hiv (months)')
for ax in axs:
ax.set_ylabel('# of Patients')
prettify_ax(ax)
Explanation: Dropping five patients because they don't look Kosher
Renormalizing cell percentages because some don't sum to 100%
End of explanation
duration = clinical['estimated duration hiv (months)']
duration = (1.*duration.notnull()) + (1.*duration > 100)
duration = duration.map({0:'Control',1:'HIV Short',2:'HIV Long'})
duration.value_counts()
Explanation: As can be seen above, we have two groups of patients with respect to HIV duration. We don't really have the sample size to tease apart any differences other than this main distinction, so for now I am just treating duration of HIV as a categorical variable (e.g. controls, short exposure and long exposure)
End of explanation
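The arithmetic trick above (notnull contributes 1, duration > 100 contributes another 1) is easy to check on synthetic values. A small sketch with made-up durations, independent of the real data:

```python
import pandas as pd
import numpy as np

fake = pd.Series([np.nan, 12, 240, np.nan, 300, 36])  # months; NaN = HIV-negative control
codes = (1. * fake.notnull()) + (1. * (fake > 100))
print(codes.map({0: 'Control', 1: 'HIV Short', 2: 'HIV Long'}))
```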
duration = duration.ix[ti(clinical['ARV Status'] != 'non-HAART')]
duration = duration.ix[ti(clinical.sex != 'F')].dropna()
duration.value_counts()
Explanation: Patient Selection Criteria
There are a couple of female patients in the controls; we are going to get rid of those, as all of the cases are males
Most of the HIV patients are under HAART therapy; there are a few that are not, and we are going to filter those out for now and possibly look at them after the primary analysis
End of explanation
df_hiv = pd.read_hdf(HDFS_DIR + 'methylation_norm.h5',
'quant_BMIQ_adj')
df_hiv = pd.read_hdf(HDFS_DIR + '/methylation_norm.h5',
'quant_BMIQ_adj')
df_hiv = df_hiv.ix[:, duration.index]
df_hiv = df_hiv.dropna(1)
Explanation: Read in HIV Methylation data
Read in quantile-normalized data, adjusted for cellular compositions and then normalized again using BMIQ.
End of explanation
df_hiv_n = pd.read_hdf(HDFS_DIR + 'methylation_norm.h5',
'BMIQ_Horvath')
df_hiv_n = df_hiv_n.ix[:, duration.index]
df_hiv_n = df_hiv_n.dropna(1)
df_hiv_n = df_hiv_n.groupby(level=0).first()
Explanation: Read in data processed with BMIQ using Horvath's gold standard.
End of explanation
flow_sorted_data = pd.read_hdf(HDFS_DIR + 'methylation_annotation.h5',
'flow_sorted_data_horvath_norm')
cell_type = pd.read_hdf(HDFS_DIR + 'methylation_annotation.h5', 'label_map')
cell_counts = pd.read_hdf(HDFS_DIR + 'dx_methylation.h5', 'cell_counts')
cell_counts = cell_counts.groupby(level=0, axis=0).first()
n2 = flow_sorted_data.groupby(cell_type, axis=1).mean()
avg = n2[cell_counts.columns].dot(cell_counts.ix[df_hiv.columns].T)
d2 = df_hiv_n.ix[avg.index, df_hiv.columns].dropna(axis=[0,1], how='all')
cc = avg.ix[:, ti(duration=='Control')].mean(1)
df_hiv_n = (d2 - avg).add(cc, axis=0).dropna(how='all')
keepers = duration.index.intersection(df_hiv.columns.intersection(df_hiv_n.columns))
duration = duration.ix[keepers]
duration = duration.groupby(level=0).first()
consent = c1['zhang id'].isin([12373001,12805001,14055003,15455001]) == False
duration = duration.ix[duration.index.difference(ti(consent == False))]
ti(c1['zhang id'].isin([12373001,12805001,14055003,15455001]))
duration.value_counts()
store = pd.HDFStore(HDFS_DIR + 'dx_methylation.h5')
study = store['study']
age = store['age']
gender = store['gender']
Explanation: Adjust this data for cellular composition. This is done after the normalization to not mess around with Horvath's pipeline too much.
End of explanation
detection_p = pd.read_hdf(HDFS_DIR + 'dx_methylation.h5', 'detection_p')
#detection_p = detection_p[detection_p[0] > 10e-5]
detection_p = detection_p[detection_p.Sample_Name.isin(duration.index)]
ff = detection_p.groupby('level_0').size() > 3
ff.value_counts()
STORE = HDFS_DIR + 'methylation_annotation.h5'
snps = pd.read_hdf(STORE, 'snps')
snp_near = (snps.Probe_SNPs != '')
snp_near.value_counts()
probe_idx = df_hiv.index.difference(ti(ff))
Explanation: Set up Probe Filters
End of explanation |
12,508 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction to NumPy
Numpy is a library that provides multi-dimensional array objects. You can think of these somewhat like normal Python lists, except they have a number of qualities that make them better for numeric computations.
Let's try adding two lists together
Step1: With Python lists the + operator appends them together. If we wanted to add these two lists elementwise we'd have to use a loop
Step2: or a for comprehension
Step3: With Numpy arrays this isn't the case
Step4: The + operator applied to Numpy arrays performs elementwise addition. -, * and / also apply elementwise. Using these operators makes it a lot easier to understand what's happening in the code.
The other advantage of Numpy arrays has to do with performance. Let's perform elementwise multiplication of the first 1 million numbers divided by 3 and the first 1 million numbers divided by 7, that is
Step5: Both of the functions perform the same operation, one using a Python for loop and the other taking advantage of Numpy arrays. | Python Code:
x = [1,2,3]
y = [4,5,6]
x + y
Explanation: Introduction to NumPy
Numpy is a library that provides multi-dimensional array objects. You can think of these somewhat like normal Python lists, except they have a number of qualities that make them better for numeric computations.
Let's try adding two lists together
End of explanation
z = [0]*len(x) # Generates a list of zeroes the same length as x.
for i in range(len(x)):
z[i] = x[i] + y[i]
z
Explanation: With Python lists the + operator appends them together. If we wanted to add these two lists elementwise we'd have to use a loop
End of explanation
[i + j for (i, j) in zip(x, y)]
Explanation: or a for comprehension
End of explanation
xNumpy = np.array([1, 2, 3])
yNumpy = np.array([4, 5, 6])
xNumpy + yNumpy
Explanation: With Numpy arrays this isn't the case
End of explanation
def normal_multiply(x, y):
return [i * j for i, j in zip(x, y)]
def numpy_multiply(x, y):
return x * y
x = [i/3. for i in range(1,1000001)]
y = [i/7. for i in range(1,1000001)]
xNumpy = np.array(x)
yNumpy = np.array(y)
Explanation: The + operator applied to Numpy arrays performs elementwise addition. -, * and / also apply elementwise. Using these operators makes it a lot easier to understand what's happening in the code.
The other advantage of Numpy arrays has to do with performance. Let's perform elementwise multiplication of the first 1 million numbers divided by 3 and the first 1 million numbers divided by 7, that is:
[1/3, 2/3, ..., 999999/3, 1000000/3] *
[1/7, 2/7, ..., 999999/7, 1000000/7]
End of explanation
%timeit normal_multiply(x, y)
%timeit numpy_multiply(xNumpy, yNumpy)
Explanation: Both of the functions perform the same operation, one using a Python for loop and the other taking advantage of Numpy arrays.
End of explanation |
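The %timeit magics above only work inside IPython. In a plain script the same comparison can be made with the standard library — a rough sketch, reusing the functions and arrays defined above:

```python
import time

start = time.perf_counter()
normal_multiply(x, y)
print('list version : %.3f s' % (time.perf_counter() - start))

start = time.perf_counter()
numpy_multiply(xNumpy, yNumpy)
print('numpy version: %.3f s' % (time.perf_counter() - start))
```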
12,509 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
How to generate batches from a dataset and work with batch components
Step1: Create a dataset
A dataset is defined by an index (a sequence of item ids) and a batch class (see the documentation for details).
In the simplest case an index is a natural sequence 0, 1, 2, 3, ...
So all you need to define the index is just a number of items in the dataset.
Step2: The dataset index
See the documentation for more info about how to create an index which fits your needs.
Here are the most frequent use cases
Step3: drop_last=True skips the last batch if it contains fewer than BATCH_SIZE items
Step4: shuffle permutes items across batches
Step5: Run the cell above multiple times to see how batches change.
Shuffle can be bool, int (seed number) or a RandomState object
Step6: Run the cell above multiple times to see that batches stay the same across runs.
Iterate with next_batch(...)
While gen_batch is a generator, next_batch is an ordinary method.
Most of the time you will use gen_batch, but for a deeper control over training and a more sophisticated finetuning next_batch might be more convenient.
If too many iterations are made, StopIteration will be raised.
Check that there are NUM_ITEMS * 3 iterations (i.e. 3 epochs) in loop, but n_epochs=2 is specified inside next_batch() call.
Step7: And finally with shuffle=True, n_epochs=None and a variable batch size
Do not forget to reset iterator to start next_batch'ing from scratch
Step8: n_epochs=None allows for infinite iterations.
Step9: To get a deeper understanding of drop_last read very important notes in the API.
Working with data
For illustrative purposes let's create a small array which will serve as a raw data source.
Step10: Load data into a batch
After loading data is available as batch.data
Step11: You can easily iterate over batch items too
Step12: Data components
Not infrequently, the batch stores a more complex data structures, e.g. features and labels or images, masks, bounding boxes and labels. To work with these you might employ data components. Just define a property as follows
Step13: Let's generate some random data
Step14: Now create a dataset (preloaded handles data loading from data stored in memory)
Step15: Since components are defined, you can address them as batch and even item attributes (they are created and loaded automatically).
Step16: You can iterate over batch items and change them on the fly
Step17: Splitting a dataset
For machine learning tasks you might need to split a dataset into train, test and validation parts.
Step18: Now the dataset is split into train / test in 80/20 ratio.
Step19: Dataset may be shuffled before splitting. | Python Code:
import sys
import numpy as np
# the following line is not required if BatchFlow is installed as a python package.
sys.path.append("../..")
from batchflow import Dataset, DatasetIndex, Batch
# number of items in the dataset
NUM_ITEMS = 10
# number of items in a batch when iterating
BATCH_SIZE = 3
Explanation: How to generate batches from a dataset and work with batch components
End of explanation
dataset = Dataset(index=NUM_ITEMS, batch_class=Batch)
Explanation: Create a dataset
A dataset is defined by an index (a sequence of item ids) and a batch class (see the documentation for details).
In the simplest case an index is a natural sequence 0, 1, 2, 3, ...
So all you need to define the index is just a number of items in the dataset.
End of explanation
for i, batch in enumerate(dataset.gen_batch(BATCH_SIZE, n_epochs=1)):
print("batch", i, " contains items", batch.indices)
for i, batch in enumerate(dataset.gen_batch(BATCH_SIZE, n_iters=5)):
print("batch", i, " contains items", batch.indices)
Explanation: The dataset index
See the documentation for more info about how to create an index which fits your needs.
Here are the most frequent use cases:
client_index = DatasetIndex(my_client_ids)
images_index = FilesIndex(path="/path/to/images/*.jpg", no_ext=True)
Iterate with gen_batch(...)
gen_batch is a python generator.
End of explanation
for i, batch in enumerate(dataset.gen_batch(BATCH_SIZE, n_epochs=1, drop_last=True)):
print("batch", i, " contains items", batch.indices)
Explanation: drop_last=True skips the last batch if it contains fewer than BATCH_SIZE items
End of explanation
for i, batch in enumerate(dataset.gen_batch(BATCH_SIZE, n_iters=4, drop_last=True, shuffle=True)):
print("batch", i, " contains items", batch.indices)
Explanation: shuffle permutes items across batches
End of explanation
for i, batch in enumerate(dataset.gen_batch(BATCH_SIZE, n_epochs=1, drop_last=True, shuffle=123)):
print("batch", i, " contains items", batch.indices)
Explanation: Run the cell above multiple times to see how batches change.
Shuffle can be bool, int (seed number) or a RandomState object
End of explanation
for i in range(NUM_ITEMS * 3):
try:
batch = dataset.next_batch(BATCH_SIZE, shuffle=True, n_epochs=2, drop_last=True)
print("batch", i + 1, "contains items", batch.indices)
except StopIteration:
print("got StopIteration")
break
Explanation: Run the cell above multiple times to see that batches stay the same across runs.
Iterate with next_batch(...)
While gen_batch is a generator, next_batch is an ordinary method.
Most of the time you will use gen_batch, but for a deeper control over training and a more sophisticated finetuning next_batch might be more convenient.
If too many iterations are made, StopIteration will be raised.
Check that there are NUM_ITEMS * 3 iterations (i.e. 3 epochs) in loop, but n_epochs=2 is specified inside next_batch() call.
End of explanation
dataset.reset('iter')
Explanation: And finally with shuffle=True, n_epochs=None and a variable batch size
Do not forget to reset iterator to start next_batch'ing from scratch
End of explanation
for i in range(int(NUM_ITEMS * 1.3)):
batch = dataset.next_batch(BATCH_SIZE + (-1)**i * i % 3, shuffle=True, n_epochs=None, drop_last=True)
print("batch", i + 1, "contains items", batch.indices)
Explanation: n_epochs=None allows for infinite iterations.
End of explanation
data = np.arange(NUM_ITEMS).reshape(-1, 1) * 100 + np.arange(3).reshape(1, -1)
data
Explanation: To get a deeper understanding of drop_last read very important notes in the API.
Working with data
For illustrative purposes let's create a small array which will serve as a raw data source.
End of explanation
for batch in dataset.gen_batch(BATCH_SIZE, n_epochs=1):
batch = batch.load(src=data)
print("batch contains items with indices", batch.indices)
print('and batch data is')
print(batch.data)
print()
Explanation: Load data into a batch
After loading data is available as batch.data
End of explanation
for batch in dataset.gen_batch(BATCH_SIZE, n_epochs=1):
batch = batch.load(src=data)
print("batch contains")
for item in batch:
print(item)
print()
Explanation: You can easily iterate over batch items too
End of explanation
class MyBatch(Batch):
components = 'features', 'labels'
Explanation: Data components
Not infrequently, the batch stores a more complex data structures, e.g. features and labels or images, masks, bounding boxes and labels. To work with these you might employ data components. Just define a property as follows:
End of explanation
features_array = np.arange(NUM_ITEMS).reshape(-1, 1) * 100 + np.arange(3).reshape(1, -1)
labels_array = np.random.choice(10, size=NUM_ITEMS)
data = features_array, labels_array
Explanation: Let's generate some random data:
End of explanation
dataset = Dataset(index=NUM_ITEMS, batch_class=MyBatch, preloaded=data)
Explanation: Now create a dataset (preloaded handles data loading from data stored in memory)
End of explanation
for i, batch in enumerate(dataset.gen_batch(BATCH_SIZE, n_epochs=1)):
print("batch", i, " contains items", batch.indices)
print("and batch data consists of features:")
print(batch.features)
print("and labels:", batch.labels)
print()
Explanation: Since components are defined, you can address them as batch and even item attributes (they are created and loaded automatically).
End of explanation
for i, batch in enumerate(dataset.gen_batch(BATCH_SIZE, n_epochs=1)):
print("Batch", i)
for item in batch:
print("item features:", item.features, " item label:", item.labels)
print()
print("You can change batch data, even scalars.")
for item in batch:
item.features = item.features + 1000
item.labels = item.labels + 100
print("New batch features:\n", batch.features)
print("and labels:", batch.labels)
print()
Explanation: You can iterate over batch items and change them on the fly
End of explanation
dataset.split(0.8)
Explanation: Splitting a dataset
For machine learning tasks you might need to split a dataset into train, test and validation parts.
End of explanation
len(dataset.train), len(dataset.test)
dataset.split([.6, .2, .2])
len(dataset.train), len(dataset.test), len(dataset.validation)
Explanation: Now the dataset is split into train / test in 80/20 ratio.
End of explanation
dataset.split(0.7, shuffle=True)
dataset.train.indices, dataset.test.indices
Explanation: Dataset may be shuffled before splitting.
End of explanation |
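Putting the pieces together, a typical end-to-end pattern (a sketch built only from the calls shown above) is: build the dataset with components, split it, then iterate over the training part.

```python
dataset = Dataset(index=NUM_ITEMS, batch_class=MyBatch, preloaded=data)
dataset.split(0.8, shuffle=True)

# iterate over the training subset only
for batch in dataset.train.gen_batch(BATCH_SIZE, n_epochs=1):
    print("train batch items:", batch.indices)
```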
12,510 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Specify sample data csv paths. See the files listed here for expected structure. Marginal tables require multi-indexed columns with category name and category value in levels 0 and 1 of the index. Sample file category columns should be labeled with corresponding category names and values in those columns should match the category value headers in the marginal table.
Step1: Load and process input marginals and samples and geography crosswalk
Step2: Iterate over all marginals in the geography crosswalk and synthesize in-line
Step3: all_persons.household_id maps person records to all_households.index
Step4: Synthesize all marginal geographies in the crosswalk using a specified or default number of cores via multiprocessing | Python Code:
hh_marginal_file = 'input_data/hh_marginals.csv'
person_marginal_file = 'input_data/person_marginals.csv'
hh_sample_file = 'input_data/household_sample.csv'
person_sample_file = 'input_data/person_sample.csv'
Explanation: Specify sample data csv paths. See the files listed here for expected structure. Marginal tables require multi-indexed columns with category name and category value in levels 0 and 1 of the index. Sample file category columns should be labeled with corresponding category names and values in those columns should match the category value headers in the marginal table.
End of explanation
hh_marg, p_marg, hh_sample, p_sample, xwalk = zs.load_data(hh_marginal_file, person_marginal_file, hh_sample_file, person_sample_file)
hh_marg.head()
p_marg.head()
p_sample.head()
Explanation: Load and process input marginals and samples and geography crosswalk
End of explanation
all_households, all_persons, all_stats = zs.synthesize_all_zones(hh_marg, p_marg, hh_sample, p_sample, xwalk)
all_households.head()
Explanation: Iterate over all marginals in the geography crosswalk and synthesize in-line
End of explanation
all_persons.head()
Explanation: all_persons.household_id maps person records to all_households.index
End of explanation
all_persons, all_households, all_stats = zs.multiprocess_synthesize(hh_marg, p_marg, hh_sample, p_sample, xwalk)
all_persons.head()
all_households.head()
all_stats
Explanation: Synthesize all marginal geographies in the crosswalk using a specified or default number of cores via multiprocessing
End of explanation |
12,511 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
PJRC's receive test
(host in C, variable buffer size, receiving in 64 Byte chunks)
Anything below 64 bytes is not a full USB packet and waits for transmission. Above, full speed is achieved.
Step1: Send arbitrary signals
Step2: Send "neural" data
Using Lynn's data set from the Klusters2 example | Python Code:
result_path = '../src/USB_Virtual_Serial_Rcv_Speed_Test/usb_serial_receive/host_software/'
print [f for f in os.listdir(result_path) if f.endswith('.txt')]
def read_result(filename):
results = {}
current_blocksize = None
with open(os.path.join(result_path, filename)) as f:
for line in f.readlines():
if line.startswith('port'):
current_blocksize = int(re.search('(?:size.)(\d*)', line).groups()[0])
results[current_blocksize] = []
else:
results[current_blocksize].append(int(line[:-4].strip())/1000.)
return results
# Example:
results = read_result('result_readbytes.txt')
for bs in sorted(results.keys()):
speeds = results[bs]
print "{bs:4d}B blocks: {avg:4.0f}±{sem:.0f} KB/s".format(bs=bs, avg=mean(speeds), sem=stats.sem(speeds))
# Standard
sizes, speeds_standard = zip(*[(k, mean(v)) for k, v in read_result('result_standard.txt').items()])
# ReadBytes
sizes, speeds_readbytes = zip(*[(k, mean(v)) for k, v in read_result('result_readbytes.txt').items()])
# Readbytes+8us overhead per transferred SPI packet (worst case scenario?)
sizes, speeds_readbytes_oh = zip(*[(k, mean(v)) for k, v in read_result('result_readbytes_overhead.txt').items()])
# ReadBytes+spi4teensy on 8 channels
sizes, speeds_readbytes_spi = zip(*[(k, mean(v)) for k, v in read_result('result_readbytes_spi4teensy.txt').items()])
fig, axes = plt.subplots(nrows=1, ncols=1, figsize=(10, 5))
axes.semilogx(sizes, speeds_standard, 'gx', basex=2, label='Standard')
axes.semilogx(sizes, speeds_readbytes, 'rx', basex=2, label='ReadBytes')
axes.semilogx(sizes, speeds_readbytes_oh, 'bx', basex=2, label='ReadBytes+OH')
axes.semilogx(sizes, speeds_readbytes_spi, 'k+', basex=2, label='ReadBytes+spi4teensy@8channels')
axes.set_xlabel('Block size [B]')
axes.set_ylabel('Transfer speed [kB/s]')
axes.legend(loc=2)
axes.set_xlim((min(sizes)/2., max(sizes)*2))
fig.tight_layout()
#TODO: use individual values, make stats + error bars
n = int(1e6)
data = data=''.join([chr(i%256) for i in range(n)])
t = %timeit -o -q transfer_test(data)
print "{:.1f} KB, {:.2f} s, {:.1f} KB/s".format(len(data)/1000., mean(t.all_runs), len(data)/1000./mean(t.all_runs))
Explanation: PJRC's receive test
(host in C, variable buffer size, receiving in 64 Byte chunks)
Anything below 64 bytes is not a full USB packet and waits for transmission. Above, full speed is achieved.
End of explanation
n_val = 4096
max_val = 4096
# cosines
cosines = ((np.cos(np.linspace(-np.pi, np.pi, num=n_val))+1)*(max_val/2)).astype('uint16')
# noise
noise = (np.random.rand(n_val)*max_val).astype('uint16')
# ramps
ramps = np.linspace(0, max_val, n_val).astype('uint16')
# squares
hi = np.ones(n_val/4, dtype='uint16')*max_val-1
lo = np.zeros_like(hi)
squares = np.tile(np.hstack((hi, lo)), 2)
# all together
arr = np.dstack((cosines, noise, ramps, squares, \
cosines, noise, ramps, squares, \
cosines, noise, ramps, squares, \
cosines, noise, ramps, squares)).flatten()
fig, axes = plt.subplots(nrows=2, ncols=1, figsize=(13, 8))
axes[0].set_xlim((0, cosines.size))
axes[0].plot(cosines, label='cosine');
axes[0].plot(noise, label='random');
axes[0].plot(ramps, label='ramp');
axes[0].plot(squares, label='square');
axes[0].legend()
axes[1].set_xlim((0, arr.size))
axes[1].plot(arr);
fig.tight_layout()
n = 500
data = np.tile(arr, n).view(np.uint8)
t = %timeit -o -q -n 1 -r 1 tx = transfer_test(data)
print "{:.1f} KB, {:.2f} s, {:.1f} KB/s".format(arr.nbytes/1000.*n, mean(t.all_runs), arr.nbytes/1000.*n/mean(t.all_runs))
t = %timeit -o -q -n 1 -r 1 tx = transfer_test(data)
print "{:.1f} KB, {:.2f} s, {:.1f} KB/s".format(arr.nbytes/1000.*n, mean(t.all_runs), arr.nbytes/1000.*n/mean(t.all_runs))
Explanation: Send arbitrary signals
End of explanation
data_path = "../data/lynn/lynn.dat"
data_float = np.fromfile(data_path, dtype='(64,)i2').astype(np.float)
# normalize the array to 12bit
data_float -= data_float.min()
data_float /= data_float.max()
data_float *= (2**12-1)
data_scaled = data_float.astype(np.uint16)
print data_scaled.min(), data_scaled.max()
fig, axes = plt.subplots(nrows=1, ncols=1, figsize=(13, 7))
for n in range(0, 64, 4):
axes.plot(data_scaled[0:20000, n]+n*70, label="Channel %d"%n);
plt.legend()
fig.tight_layout()
print "first channel :", data_scaled[0,0:3]
print "second channel:", data_scaled[8,0:3]
print "interleaved :", data_scaled[(0, 8), 0:3].transpose().flatten()
n = 5
data = np.tile(data_scaled[:, 0:64:4].transpose().flatten(), n).tobytes()
len(data)
transfer_test(data)
t = %timeit -q -o -n 1 -r 1 transfer_test(data);
print "{:.1f} KB, {:.2f} s, {:.1f} KB/s".format(data_scaled[:, 0:64:4].nbytes/1000.*n,
mean(t.all_runs),
data_scaled[:, 0:64:4].nbytes/1000.*n/mean(t.all_runs))
type(data)
data
Explanation: Send "neural" data
Using Lynn's data set from the Klusters2 example
End of explanation |
12,512 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Interact Exercise 4
Imports
Step2: Line with Gaussian noise
Write a function named random_line that creates x and y data for a line with y direction random noise that has a normal distribution $N(0,\sigma^2)$
Step5: Write a function named plot_random_line that takes the same arguments as random_line and creates a random line using random_line and then plots the x and y points using Matplotlib's scatter function
Step6: Use interact to explore the plot_random_line function using | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from IPython.html.widgets import interact, interactive, fixed
from IPython.display import display
Explanation: Interact Exercise 4
Imports
End of explanation
def random_line(m, b, sigma, size=10):
Create a line y = m*x + b + N(0,sigma**2) between x=[-1.0,1.0]
Parameters
----------
m : float
The slope of the line.
b : float
The y-intercept of the line.
sigma : float
The standard deviation of the y direction normal distribution noise.
size : int
The number of points to create for the line.
Returns
-------
x : array of floats
The array of x values for the line with `size` points.
y : array of floats
The array of y values for the lines with `size` points.
x=np.linspace(-1.0,1.0,size)
if sigma==0:
y=m*x+b
else:
y=m*x+b+np.random.normal(0.0,sigma**2,size)
return x,y
m = 0.0; b = 1.0; sigma=0.0; size=3
x, y = random_line(m, b, sigma, size)
assert len(x)==len(y)==size
assert list(x)==[-1.0,0.0,1.0]
assert list(y)==[1.0,1.0,1.0]
sigma = 1.0
m = 0.0; b = 0.0
size = 500
x, y = random_line(m, b, sigma, size)
assert np.allclose(np.mean(y-m*x-b), 0.0, rtol=0.1, atol=0.1)
assert np.allclose(np.std(y-m*x-b), sigma, rtol=0.1, atol=0.1)
Explanation: Line with Gaussian noise
Write a function named random_line that creates x and y data for a line with y direction random noise that has a normal distribution $N(0,\sigma^2)$:
$$
y = m x + b + N(0,\sigma^2)
$$
Be careful about the sigma=0.0 case.
End of explanation
def ticks_out(ax):
Move the ticks to the outside of the box.
ax.get_xaxis().set_tick_params(direction='out', width=1, which='both')
ax.get_yaxis().set_tick_params(direction='out', width=1, which='both')
def plot_random_line(m, b, sigma, size=10, color='red'):
Plot a random line with slope m, intercept b and size points.
ran_line1, ran_line2=random_line(m,b,sigma,size)
f=plt.figure(figsize=(10,6))
plt.scatter(ran_line1,ran_line2,color=color)
plt.xlim(-1.1,1.1)
plt.ylim(-10.0,10.0)
plt.grid(True)
plt.title('Line with Gaussian Noise')
plt.xlabel('X'), plt.ylabel('Y')
plt.tick_params(axis='x',direction='inout')
plt.tick_params(axis='y',direction='inout')
plot_random_line(5.0, -1.0, 2.0, 50)
assert True # use this cell to grade the plot_random_line function
Explanation: Write a function named plot_random_line that takes the same arguments as random_line and creates a random line using random_line and then plots the x and y points using Matplotlib's scatter function:
Make the marker color settable through a color keyword argument with a default of red.
Display the range $x=[-1.1,1.1]$ and $y=[-10.0,10.0]$.
Customize your plot to make it effective and beautiful.
End of explanation
interact(plot_random_line, m=(-10.0,10.0,0.1),b=(-5.0,5.0,0.1),sigma=(0.0,5.0,0.01),size=(10,100,10),color={'red':'r','green':'g','blue':'b'});
#### assert True # use this cell to grade the plot_random_line interact
Explanation: Use interact to explore the plot_random_line function using:
m: a float valued slider from -10.0 to 10.0 with steps of 0.1.
b: a float valued slider from -5.0 to 5.0 with steps of 0.1.
sigma: a float valued slider from 0.0 to 5.0 with steps of 0.01.
size: an int valued slider from 10 to 100 with steps of 10.
color: a dropdown with options for red, green and blue.
End of explanation |
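The throughput arithmetic repeated above can be wrapped in a small helper — a sketch, where `timeit_result` is the object returned by `%timeit -o` and `mean` is the same function already used in the cells above:

```python
def report_speed(n_bytes, timeit_result):
    secs = mean(timeit_result.all_runs)  # average wall time per run
    print("{:.1f} KB, {:.2f} s, {:.1f} KB/s".format(
        n_bytes / 1000., secs, n_bytes / 1000. / secs))

# e.g. report_speed(len(data), t)
```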
12,513 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
In the previous example we investigated if it was possible to query the NGDC CSW Catalog to extract records matching an IOOS RA acronym.
However, we could not trust the results.
Some RAs results in just a few records or no record at all, like AOOS and PacIOOS respectively.
We can make a more robust search using the UUID rather than the acronym.
The advantage is that all records will be associated to an UUID,
hence a more robust search.
The disadvantage is that we need to keep track of a long and unintelligible identification.
As usual let's start by instantiating the csw catalog object.
Step1: We will use the same list of all the Regional Associations as before,
but now we will match them with the corresponding UUID from the IOOS registry.
Step2: The function below is similar to the one we used before.
Note the same matching (PropertyIsEqualTo),
but different property name (sys.siteuuid rather than apiso
Step3: Compare the results above with cell [6] from before. Note that now we got 192 for PacIOOS and 74 for AOOS now!
You can see the original notebook here. | Python Code:
from owslib.csw import CatalogueServiceWeb
endpoint = 'http://www.ngdc.noaa.gov/geoportal/csw'
csw = CatalogueServiceWeb(endpoint, timeout=30)
Explanation: In the previous example we investigated if it was possible to query the NGDC CSW Catalog to extract records matching an IOOS RA acronym.
However, we could not trust the results.
Some RAs result in just a few records or no records at all, like AOOS and PacIOOS respectively.
We can make a more robust search using the UUID rather than the acronym.
The advantage is that all records will be associated with a UUID,
hence a more robust search.
The disadvantage is that we need to keep track of a long and unintelligible identification.
As usual let's start by instantiating the csw catalog object.
End of explanation
import pandas as pd
ioos_ras = ['AOOS', # Alaska
'CaRA', # Caribbean
'CeNCOOS', # Central and Northern California
'GCOOS', # Gulf of Mexico
'GLOS', # Great Lakes
'MARACOOS', # Mid-Atlantic
'NANOOS', # Pacific Northwest
'NERACOOS', # Northeast Atlantic
'PacIOOS', # Pacific Islands
'SCCOOS', # Southern California
'SECOORA'] # Southeast Atlantic
url = 'https://raw.githubusercontent.com/ioos/registry/master/uuid.csv'
df = pd.read_csv(url, index_col=0, header=0, names=['UUID'])
df['UUID'] = df['UUID'].str.strip()
Explanation: We will use the same list of all the Regional Associations as before,
but now we will match them with the corresponding UUID from the IOOS registry.
End of explanation
from owslib.fes import PropertyIsEqualTo
def query_ra(csw, uuid='B3EA8869-B726-4E39-898A-299E53ABBC98'):
q = PropertyIsEqualTo(propertyname='sys.siteuuid', literal='{%s}' % uuid)
csw.getrecords2(constraints=[q], maxrecords=2000, esn='full')
return csw
for ra in ioos_ras:
try:
uuid = df.ix[ra]['UUID'].strip('{').strip('}')
csw = query_ra(csw, uuid)
ret = csw.results['returned']
word = 'records' if ret > 1 else 'record'
print("{0:>8} has {1:>4} {2}".format(ra, ret, word))
csw.records.clear()
except KeyError:
pass
Explanation: The function below is similar to the one we used before.
Note the same matching (PropertyIsEqualTo),
but different property name (sys.siteuuid rather than apiso:Keywords).
That is the key difference for the robustness of the search:
keywords are not always defined and might return bogus matches,
while a UUID will always map to exactly one RA.
End of explanation
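If the per-RA counts are needed later rather than just printed, the same loop can collect them into a small table. A sketch using only pandas, which is already imported above:

```python
counts = {}
for ra in ioos_ras:
    try:
        uuid = df.ix[ra]['UUID'].strip('{').strip('}')
        csw = query_ra(csw, uuid)
        counts[ra] = csw.results['returned']  # number of records returned for this RA
        csw.records.clear()
    except KeyError:
        pass

pd.Series(counts, name='records')
```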
HTML(html)
Explanation: Compare the results above with cell [6] from before. Note that we now get 192 records for PacIOOS and 74 for AOOS!
You can see the original notebook here.
End of explanation |
12,514 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Data generation
Step1: Preparing data set sweep
First, we're going to define the data sets that we'll sweep over. The following cell does not need to be modified unless if you wish to change the datasets or reference databases used in the sweep.
Step2: Preparing the method/parameter combinations and generating commands
Now we set the methods and method-specific parameters that we want to sweep. Modify to sweep other methods. Note how method_parameters_combinations feeds method/parameter combinations to parameter_sweep() in the cell below.
Step3: Now enter the template of the command to sweep, and generate a list of commands with parameter_sweep().
Fields must adhere to following format
Step4: As a sanity check, we can look at the first command that was generated and the number of commands generated.
Step5: Finally, we run our commands.
Step6: Generate per-method biom tables
Modify the taxonomy_glob below to point to the taxonomy assignments that were generated above. This may be necessary if filepaths were altered in the preceding cells.
Step7: Move result files to repository
Add results to the short-read-taxa-assignment directory (e.g., to push these results to the repository or compare with other precomputed results in downstream analysis steps). The precomputed_results_dir path and methods_dirs glob below should not need to be changed unless if substantial changes were made to filepaths in the preceding cells. | Python Code:
from os.path import join, expandvars
from joblib import Parallel, delayed
from glob import glob
from os import system
from tax_credit.framework_functions import (parameter_sweep,
generate_per_method_biom_tables,
move_results_to_repository)
project_dir = expandvars("$HOME/Desktop/projects/short-read-tax-assignment")
analysis_name= "mock-community"
data_dir = join(project_dir, "data", analysis_name)
reference_database_dir = expandvars("$HOME/Desktop/ref_dbs/")
results_dir = expandvars("$HOME/Desktop/projects/mock-community/")
Explanation: Data generation: using python to sweep over methods and parameters
This notebook demonstrates taxonomy classification using blast+ followed by consensus assignment in QIIME2's q2-feature-classifier.
Environment preparation
End of explanation
dataset_reference_combinations = [
('mock-1', 'gg_13_8_otus'), # formerly S16S-1
('mock-2', 'gg_13_8_otus'), # formerly S16S-2
('mock-3', 'gg_13_8_otus'), # formerly Broad-1
('mock-4', 'gg_13_8_otus'), # formerly Broad-2
('mock-5', 'gg_13_8_otus'), # formerly Broad-3
('mock-6', 'gg_13_8_otus'), # formerly Turnbaugh-1
('mock-7', 'gg_13_8_otus'), # formerly Turnbaugh-2
('mock-8', 'gg_13_8_otus'), # formerly Turnbaugh-3
('mock-9', 'unite_20.11.2016_clean_fullITS'), # formerly ITS1
('mock-10', 'unite_20.11.2016_clean_fullITS'), # formerly ITS2-SAG
('mock-12', 'gg_13_8_otus'), # Extreme
('mock-13', 'gg_13_8_otus_full16S_clean'), # kozich-1
('mock-14', 'gg_13_8_otus_full16S_clean'), # kozich-2
('mock-15', 'gg_13_8_otus_full16S_clean'), # kozich-3
('mock-16', 'gg_13_8_otus'), # schirmer-1
('mock-18', 'gg_13_8_otus'),
('mock-19', 'gg_13_8_otus'),
('mock-20', 'gg_13_8_otus'),
('mock-21', 'gg_13_8_otus'),
('mock-22', 'gg_13_8_otus'),
('mock-23', 'gg_13_8_otus'),
('mock-24', 'unite_20.11.2016_clean_fullITS'),
('mock-25', 'unite_20.11.2016_clean_fullITS'),
('mock-26-ITS1', 'unite_20.11.2016_clean_fullITS'),
('mock-26-ITS9', 'unite_20.11.2016_clean_fullITS'),
]
reference_dbs = {'gg_13_8_otus_clean' : (join(reference_database_dir, 'gg_13_8_otus/99_otus_clean_515f-806r.qza'),
join(reference_database_dir, 'gg_13_8_otus/taxonomy/99_otu_taxonomy.qza')),
'gg_13_8_otus' : (join(reference_database_dir, 'gg_13_8_otus/rep_set/99_otus_515f-806r_trim250.qza'),
join(reference_database_dir, 'gg_13_8_otus/taxonomy/99_otu_taxonomy.qza')),
'gg_13_8_otus_full16S_clean' : (join(reference_database_dir, 'gg_13_8_otus/99_otus_clean.qza'),
join(reference_database_dir, 'gg_13_8_otus/taxonomy/99_otu_taxonomy.qza')),
'gg_13_8_otus_full16S' : (join(reference_database_dir, 'gg_13_8_otus/rep_set/99_otus.qza'),
join(reference_database_dir, 'gg_13_8_otus/taxonomy/99_otu_taxonomy.qza')),
'unite_20.11.2016_clean_fullITS' : (join(reference_database_dir, 'sh_qiime_release_20.11.2016/developer/sh_refs_qiime_ver7_99_20.11.2016_dev_clean.qza'),
join(reference_database_dir, 'sh_qiime_release_20.11.2016/developer/sh_taxonomy_qiime_ver7_99_20.11.2016_dev_clean.qza')),
'unite_20.11.2016_clean' : (join(reference_database_dir, 'sh_qiime_release_20.11.2016/developer/sh_refs_qiime_ver7_99_20.11.2016_dev_clean_ITS1Ff-ITS2r.qza'),
join(reference_database_dir, 'sh_qiime_release_20.11.2016/developer/sh_taxonomy_qiime_ver7_99_20.11.2016_dev.qza')),
'unite_20.11.2016' : (join(reference_database_dir, 'sh_qiime_release_20.11.2016/developer/sh_refs_qiime_ver7_99_20.11.2016_dev_ITS1Ff-ITS2r_trim250.qza'),
join(reference_database_dir, 'sh_qiime_release_20.11.2016/developer/sh_taxonomy_qiime_ver7_99_20.11.2016_dev.qza'))}
Explanation: Preparing data set sweep
First, we're going to define the data sets that we'll sweep over. The following cell does not need to be modified unless you wish to change the datasets or reference databases used in the sweep.
End of explanation
method_parameters_combinations = {
'blast+' : {'p-evalue': [0.001],
'p-maxaccepts': [1, 10, 100],
'p-perc-identity': [0.80, 0.97, 0.99],
'p-min-consensus': [0.51, 0.75, 0.99]}
}
Explanation: Preparing the method/parameter combinations and generating commands
Now we set the methods and method-specific parameters that we want to sweep. Modify to sweep other methods. Note how method_parameters_combinations feeds method/parameter combinations to parameter_sweep() in the cell below.
End of explanation
command_template = "mkdir -p {0}; qiime feature-classifier classify-consensus-blast --i-query {1} --o-classification {0}/rep_seqs_tax_assignments.qza --i-reference-reads {2} --i-reference-taxonomy {3} {5}; qiime tools export {0}/rep_seqs_tax_assignments.qza --output-dir {0}"
commands = parameter_sweep(data_dir, results_dir, reference_dbs,
dataset_reference_combinations,
method_parameters_combinations, command_template,
infile='rep_seqs.qza', output_name='rep_seqs_tax_assignments.qza')
Explanation: Now enter the template of the command to sweep, and generate a list of commands with parameter_sweep().
Fields must adhere to the following format (an illustrative fill-in is shown after the list):
{0} = output directory
{1} = input data
{2} = reference sequences
{3} = reference taxonomy
{4} = method name
{5} = other parameters
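Purely as an illustration of how parameter_sweep() fills this template (the output directory, method name, and parameter string below are made up; parameter_sweep generates the real ones, and note that {4} is not referenced by this particular template):
example_command = command_template.format(
    "mock-3/gg_13_8_otus/blast+/0.001:1:0.8:0.51",               # {0} output directory (hypothetical)
    "mock-3/rep_seqs.qza",                                        # {1} input data (hypothetical)
    reference_dbs['gg_13_8_otus'][0],                             # {2} reference sequences
    reference_dbs['gg_13_8_otus'][1],                             # {3} reference taxonomy
    "blast+",                                                     # {4} method name (unused in this template)
    "--p-evalue 0.001 --p-maxaccepts 1 --p-perc-identity 0.8 --p-min-consensus 0.51",  # {5} other parameters
)
print(example_command)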
End of explanation
print(len(commands))
commands[0]
Explanation: As a sanity check, we can look at the first command that was generated and the number of commands generated.
End of explanation
Parallel(n_jobs=4)(delayed(system)(command) for command in commands)
Explanation: Finally, we run our commands.
End of explanation
taxonomy_glob = join(results_dir, '*', '*', '*', '*', 'taxonomy.tsv')
generate_per_method_biom_tables(taxonomy_glob, data_dir)
Explanation: Generate per-method biom tables
Modify the taxonomy_glob below to point to the taxonomy assignments that were generated above. This may be necessary if filepaths were altered in the preceding cells.
End of explanation
precomputed_results_dir = join(project_dir, "data", "precomputed-results", analysis_name)
method_dirs = glob(join(results_dir, '*', '*', '*', '*'))
move_results_to_repository(method_dirs, precomputed_results_dir)
Explanation: Move result files to repository
Add results to the short-read-taxa-assignment directory (e.g., to push these results to the repository or compare with other precomputed results in downstream analysis steps). The precomputed_results_dir path and methods_dirs glob below should not need to be changed unless substantial changes were made to filepaths in the preceding cells.
End of explanation |
12,515 | Given the following text description, write Python code to implement the functionality described.
Description:
Given a positive integer, obtain its roman numeral equivalent as a string,
and return it in lowercase.
Restrictions: 1 <= num <= 1000
Examples:
This is how the function will work:
int_to_mini_roman(19) == 'xix'
This is how the function will work:
int_to_mini_roman(152) == 'clii'
This is how the function will work:
int_to_mini_roman(426) == 'cdxxvi'
| Python Code:
def int_to_mini_roman(number):
num = [1, 4, 5, 9, 10, 40, 50, 90,
100, 400, 500, 900, 1000]
sym = ["I", "IV", "V", "IX", "X", "XL",
"L", "XC", "C", "CD", "D", "CM", "M"]
i = 12
res = ''
while number:
div = number // num[i]
number %= num[i]
while div:
res += sym[i]
div -= 1
i -= 1
return res.lower() |
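A few quick sanity checks against the documented examples (illustrative only):

assert int_to_mini_roman(19) == 'xix'
assert int_to_mini_roman(152) == 'clii'
assert int_to_mini_roman(426) == 'cdxxvi'
print(int_to_mini_roman(1000))   # 'm'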
12,516 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Quickstart
This notebook was made with the following version of emcee
Step1: The easiest way to get started with using emcee is to use it for a project. To get you started, here’s an annotated, fully-functional example that demonstrates a standard usage pattern.
How to sample a multi-dimensional Gaussian
We’re going to demonstrate how you might draw samples from the multivariate Gaussian density given by
Step2: Then, we’ll code up a Python function that returns the density $p(\vec{x})$ for specific values of $\vec{x}$, $\vec{\mu}$ and $\Sigma^{-1}$. In fact, emcee actually requires the logarithm of $p$. We’ll call it log_prob
Step3: It is important that the first argument of the probability function is
the position of a single "walker" (a N dimensional
numpy array). The following arguments are going to be constant every
time the function is called and the values come from the args parameter
of our
Step4: and where cov is $\Sigma$.
How about we use 32 walkers? Before we go on, we need to guess a starting point for each
of the 32 walkers. This position will be a 5-dimensional vector so the
initial guess should be a 32-by-5 array.
It's not a very good guess but we'll just guess a
random number between 0 and 1 for each component
Step5: Now that we've gotten past all the bookkeeping stuff, we can move on to
the fun stuff. The main interface provided by emcee is the
Step6: Remember how our function log_prob required two extra arguments when it
was called? By setting up our sampler with the args argument, we're
saying that the probability function should be called as
Step7: If we didn't provide any
args parameter, the calling sequence would be log_prob(p0[0]) instead.
It's generally a good idea to run a few "burn-in" steps in your MCMC
chain to let the walkers explore the parameter space a bit and get
settled into the maximum of the density. We'll run a burn-in of 100
steps (yep, I just made that number up... it's hard to really know
how many steps of burn-in you'll need before you start) starting from
our initial guess p0
Step8: You'll notice that I saved the final position of the walkers (after the
100 steps) to a variable called pos. You can check out what will be
contained in the other output variables by looking at the documentation for
the
Step9: The samples can be accessed using the
Step10: Another good test of whether or not the sampling went well is to check
the mean acceptance fraction of the ensemble using the
Step11: and the integrated autocorrelation time (see the | Python Code:
import emcee
emcee.__version__
Explanation: Quickstart
This notebook was made with the following version of emcee:
End of explanation
import numpy as np
Explanation: The easiest way to get started with using emcee is to use it for a project. To get you started, here’s an annotated, fully-functional example that demonstrates a standard usage pattern.
How to sample a multi-dimensional Gaussian
We’re going to demonstrate how you might draw samples from the multivariate Gaussian density given by:
$$
p(\vec{x}) \propto \exp \left [ - \frac{1}{2} (\vec{x} -
\vec{\mu})^\mathrm{T} \, \Sigma ^{-1} \, (\vec{x} - \vec{\mu})
\right ]
$$
where $\vec{\mu}$ is an $N$-dimensional vector position of the mean of the density and $\Sigma$ is the square N-by-N covariance matrix.
The first thing that we need to do is import the necessary modules:
End of explanation
def log_prob(x, mu, cov):
diff = x - mu
return -0.5*np.dot(diff, np.linalg.solve(cov,diff))
Explanation: Then, we’ll code up a Python function that returns the density $p(\vec{x})$ for specific values of $\vec{x}$, $\vec{\mu}$ and $\Sigma^{-1}$. In fact, emcee actually requires the logarithm of $p$. We’ll call it log_prob:
End of explanation
ndim = 5
np.random.seed(42)
means = np.random.rand(ndim)
cov = 0.5 - np.random.rand(ndim ** 2).reshape((ndim, ndim))
cov = np.triu(cov)
cov += cov.T - np.diag(cov.diagonal())
cov = np.dot(cov,cov)
Explanation: It is important that the first argument of the probability function is
the position of a single "walker" (an N dimensional
numpy array). The following arguments are going to be constant every
time the function is called and the values come from the args parameter
of our :class:EnsembleSampler that we'll see soon.
Now, we'll set up the specific values of those "hyperparameters" in 5
dimensions:
End of explanation
nwalkers = 32
p0 = np.random.rand(nwalkers, ndim)
Explanation: and where cov is $\Sigma$.
How about we use 32 walkers? Before we go on, we need to guess a starting point for each
of the 32 walkers. This position will be a 5-dimensional vector so the
initial guess should be a 32-by-5 array.
It's not a very good guess but we'll just guess a
random number between 0 and 1 for each component:
End of explanation
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob, args=[means, cov])
Explanation: Now that we've gotten past all the bookkeeping stuff, we can move on to
the fun stuff. The main interface provided by emcee is the
:class:EnsembleSampler object so let's get ourselves one of those:
End of explanation
log_prob(p0[0], means, cov)
Explanation: Remember how our function log_prob required two extra arguments when it
was called? By setting up our sampler with the args argument, we're
saying that the probability function should be called as:
End of explanation
state = sampler.run_mcmc(p0, 100)
sampler.reset()
Explanation: If we didn't provide any
args parameter, the calling sequence would be log_prob(p0[0]) instead.
It's generally a good idea to run a few "burn-in" steps in your MCMC
chain to let the walkers explore the parameter space a bit and get
settled into the maximum of the density. We'll run a burn-in of 100
steps (yep, I just made that number up... it's hard to really know
how many steps of burn-in you'll need before you start) starting from
our initial guess p0:
End of explanation
sampler.run_mcmc(state, 10000);
Explanation: You'll notice that I saved the final position of the walkers (after the
100 steps) to a variable called pos. You can check out what will be
contained in the other output variables by looking at the documentation for
the :func:EnsembleSampler.run_mcmc function. The call to the
:func:EnsembleSampler.reset method clears all of the important bookkeeping
parameters in the sampler so that we get a fresh start. It also clears the
current positions of the walkers so it's a good thing that we saved them
first.
Now, we can do our production run of 10000 steps:
End of explanation
import matplotlib.pyplot as plt
samples = sampler.get_chain(flat=True)
plt.hist(samples[:, 0], 100, color="k", histtype="step")
plt.xlabel(r"$\theta_1$")
plt.ylabel(r"$p(\theta_1)$")
plt.gca().set_yticks([]);
Explanation: The samples can be accessed using the :func:EnsembleSampler.get_chain method.
This will return an array
with the shape (10000, 32, 5) giving the parameter values for each walker
at each step in the chain.
Take note of that shape and make sure that you know where each of those numbers come from.
You can make histograms of these samples to get an estimate of the density that you were sampling:
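For instance, a quick way (illustrative) to confirm those dimensions before flattening:

chain = sampler.get_chain()
print(chain.shape)   # expected: (10000, 32, 5) -> (steps, walkers, dimensions)
flat = sampler.get_chain(flat=True)
print(flat.shape)    # expected: (320000, 5) once steps and walkers are merged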
End of explanation
print("Mean acceptance fraction: {0:.3f}"
.format(np.mean(sampler.acceptance_fraction)))
Explanation: Another good test of whether or not the sampling went well is to check
the mean acceptance fraction of the ensemble using the
:func:EnsembleSampler.acceptance_fraction property:
End of explanation
print("Mean autocorrelation time: {0:.3f} steps"
.format(np.mean(sampler.get_autocorr_time())))
Explanation: and the integrated autocorrelation time (see the :ref:autocorr tutorial for more details)
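As a rough, hypothetical illustration (the discard and thin factors below are arbitrary choices, not a recommendation), the autocorrelation time is often used to choose burn-in and thinning when extracting samples:

tau = sampler.get_autocorr_time()
flat_samples = sampler.get_chain(discard=int(2 * np.max(tau)), thin=max(1, int(np.max(tau) / 2)), flat=True)
print(flat_samples.shape)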
End of explanation |
12,517 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
TensorFlow
Step1: Download the Data
Step2: Dataset Metadata
Step3: Building a TensorFlow Custom Estimator
Creating feature columns
Creating model_fn
Create estimator using the model_fn
Define data input_fn
Define Train and evaluate experiment
Run experiment with parameters
1. Create feature columns
Step4: 2. Create model_fn
Use feature columns to create input_layer
Use tf.keras.layers to define the model architecutre and output
Use binary_classification_head for create EstimatorSpec
Step5: 3. Create estimator
Step6: 4. Data Input Function
Step7: 5. Experiment Definition
Step8: 6. Run Experiment with Parameters
Step9: Building a Keras Model
Implement a data input_fn process the data for the Keras model
Create the Keras model
Create the callbacks
Run the experiment
1. Data input_fn
A typical way of feed data into Keras is to convert it to a numpy array and pass it to the model.fit() function of the model. However, in other (probably more parctical) cases, all the data may not fit into the memory of your worker. Thus, you woud need to either create a reader that reads your data chuck by chuck, and pass it to model.fit_generator(), or to use the tf.data.Dataset APIs, which are much easier to use.
In the input_fn,
1. Create a CSV dataset (similar to the one used with the TensorFlow Custom Estimator)
2. Create lookups for categorical features vocabolary to numerical index
3. Process the dataset features to
Step10: 2. Create the keras model
Create the model architecture
Step11: 3. Define callbacks
Early stopping callback
Checkpoints callback
Step12: 4. Run experiment
When using out-of-memory dataset, that is, reading data chuck by chuck from file(s) and feeding it to the model (using the tf.data.Dataset APIs), you usually do not know the size of the dataset. Thus, beside the number of epochs required to train the model for, you need to specify how many step is considered as an epoch.
This is not required when use an in-memory (numpy array) dataset, since the size of the dataset is know to the model, hence how many steps here are in the epoch.
In our experiment, we know the size of our dataset, thus we compute the steps_per_epoch as
Step13: Save and Load Keras Model for Prediction
Step14: Export Keras Model as saved_model for tf.Serving
Step15: Convert to Estimator for Distributed Training... | Python Code:
import math
import os
import pandas as pd
import numpy as np
from datetime import datetime
import tensorflow as tf
from tensorflow import data
print("TensorFlow : {}".format(tf.__version__))
SEED = 19831060
Explanation: TensorFlow: From Estimators to Keras
Building a custom TensorFlow estimator (as a reference)
Use Census classification dataset
Create feature columns from the estimator
Implement a tf.data input_fn
Create a custom estimator using tf.keras.layers
Train and evaluate the model
Building a Functional Keras model and using tf.data APIs
Modify the input_fn to process categorical features
Build a Functional Keras Model
Use the input_fn to fit the Keras model
Configure epochs and validation
Configure callbacks for early stopping and checkpoints
Save and Load Keras model
Export Keras model to saved_model
Converting Keras model to estimator
Concluding Remarks
<a href="https://colab.research.google.com/github/GoogleCloudPlatform/tf-estimator-tutorials/blob/master/00_Miscellaneous/tf_train_eval_export/Tutorial%20-%20TensorFlow%20from%20Estimators%20to%20Keras.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
End of explanation
DATA_DIR='data'
!mkdir $DATA_DIR
!gsutil cp gs://cloud-samples-data/ml-engine/census/data/adult.data.csv $DATA_DIR
!gsutil cp gs://cloud-samples-data/ml-engine/census/data/adult.test.csv $DATA_DIR
TRAIN_DATA_FILE = os.path.join(DATA_DIR, 'adult.data.csv')
EVAL_DATA_FILE = os.path.join(DATA_DIR, 'adult.test.csv')
TRAIN_DATA_SIZE = 32561
EVAL_DATA_SIZE = 16278
Explanation: Download the Data
End of explanation
HEADER = ['age', 'workclass', 'fnlwgt', 'education', 'education_num',
'marital_status', 'occupation', 'relationship', 'race', 'gender',
'capital_gain', 'capital_loss', 'hours_per_week',
'native_country', 'income_bracket']
HEADER_DEFAULTS = [[0], [''], [0], [''], [0], [''], [''], [''], [''], [''],
[0], [0], [0], [''], ['']]
NUMERIC_FEATURE_NAMES = ['age', 'education_num', 'capital_gain', 'capital_loss', 'hours_per_week']
CATEGORICAL_FEATURE_NAMES = ['gender', 'race', 'education', 'marital_status', 'relationship',
'workclass', 'occupation', 'native_country']
FEATURE_NAMES = NUMERIC_FEATURE_NAMES + CATEGORICAL_FEATURE_NAMES
TARGET_NAME = 'income_bracket'
TARGET_LABELS = [' <=50K', ' >50K']
WEIGHT_COLUMN_NAME = 'fnlwgt'
NUM_CLASSES = len(TARGET_LABELS)
def get_categorical_features_vocabolary():
data = pd.read_csv(TRAIN_DATA_FILE, names=HEADER)
return {
column: list(data[column].unique())
for column in data.columns if column in CATEGORICAL_FEATURE_NAMES
}
feature_vocabolary = get_categorical_features_vocabolary()
print(feature_vocabolary)
Explanation: Dataset Metadata
End of explanation
def create_feature_columns():
feature_columns = []
for column in NUMERIC_FEATURE_NAMES:
feature_column = tf.feature_column.numeric_column(column)
feature_columns.append(feature_column)
for column in CATEGORICAL_FEATURE_NAMES:
vocabolary = feature_vocabolary[column]
embed_size = int(math.sqrt(len(vocabolary)))
feature_column = tf.feature_column.embedding_column(
tf.feature_column.categorical_column_with_vocabulary_list(column, vocabolary),
embed_size)
feature_columns.append(feature_column)
return feature_columns
Explanation: Building a TensorFlow Custom Estimator
Creating feature columns
Creating model_fn
Create estimator using the model_fn
Define data input_fn
Define Train and evaluate experiment
Run experiment with parameters
1. Create feature columns
End of explanation
def model_fn(features, labels, mode, params):
is_training = True if mode == tf.estimator.ModeKeys.TRAIN else False
# model body
def _inference(features, mode, params):
feature_columns = create_feature_columns()
input_layer = tf.feature_column.input_layer(features=features, feature_columns=feature_columns)
dense_inputs = input_layer
for i in range(len(params.hidden_units)):
dense = tf.keras.layers.Dense(params.hidden_units[i], activation='relu')(dense_inputs)
dense_dropout = tf.keras.layers.Dropout(params.dropout_prob)(dense, training=is_training)
dense_inputs = dense_dropout
fully_connected = dense_inputs
logits = tf.keras.layers.Dense(units=1, name='logits', activation=None)(fully_connected)
return logits
# model head
head = tf.contrib.estimator.binary_classification_head(
label_vocabulary=TARGET_LABELS,
weight_column=WEIGHT_COLUMN_NAME
)
return head.create_estimator_spec(
features=features,
mode=mode,
logits=_inference(features, mode, params),
labels=labels,
optimizer=tf.train.AdamOptimizer(params.learning_rate)
)
Explanation: 2. Create model_fn
Use feature columns to create input_layer
Use tf.keras.layers to define the model architecture and output
Use binary_classification_head to create the EstimatorSpec
End of explanation
def create_estimator(params, run_config):
feature_columns = create_feature_columns()
estimator = tf.estimator.Estimator(
model_fn,
params=params,
config=run_config
)
return estimator
Explanation: 3. Create estimator
End of explanation
def make_input_fn(file_pattern, batch_size, num_epochs,
mode=tf.estimator.ModeKeys.EVAL):
def _input_fn():
dataset = tf.data.experimental.make_csv_dataset(
file_pattern=file_pattern,
batch_size=batch_size,
column_names=HEADER,
column_defaults=HEADER_DEFAULTS,
label_name=TARGET_NAME,
field_delim=',',
use_quote_delim=True,
header=False,
num_epochs=num_epochs,
shuffle= (mode==tf.estimator.ModeKeys.TRAIN)
)
return dataset
return _input_fn
Explanation: 4. Data Input Function
End of explanation
def train_and_evaluate_experiment(params, run_config):
# TrainSpec ####################################
train_input_fn = make_input_fn(
TRAIN_DATA_FILE,
batch_size=params.batch_size,
num_epochs=None,
mode=tf.estimator.ModeKeys.TRAIN
)
train_spec = tf.estimator.TrainSpec(
input_fn = train_input_fn,
max_steps=params.traning_steps
)
###############################################
# EvalSpec ####################################
eval_input_fn = make_input_fn(
EVAL_DATA_FILE,
num_epochs=1,
batch_size=params.batch_size,
)
eval_spec = tf.estimator.EvalSpec(
name=datetime.utcnow().strftime("%H%M%S"),
input_fn = eval_input_fn,
steps=None,
start_delay_secs=0,
throttle_secs=params.eval_throttle_secs
)
###############################################
tf.logging.set_verbosity(tf.logging.INFO)
if tf.gfile.Exists(run_config.model_dir):
print("Removing previous artefacts...")
tf.gfile.DeleteRecursively(run_config.model_dir)
print('')
estimator = create_estimator(params, run_config)
print('')
time_start = datetime.utcnow()
print("Experiment started at {}".format(time_start.strftime("%H:%M:%S")))
print(".......................................")
tf.estimator.train_and_evaluate(
estimator=estimator,
train_spec=train_spec,
eval_spec=eval_spec
)
time_end = datetime.utcnow()
print(".......................................")
print("Experiment finished at {}".format(time_end.strftime("%H:%M:%S")))
print("")
time_elapsed = time_end - time_start
print("Experiment elapsed time: {} seconds".format(time_elapsed.total_seconds()))
return estimator
Explanation: 5. Experiment Definition
End of explanation
MODELS_LOCATION = 'models/census'
MODEL_NAME = 'dnn_classifier'
model_dir = os.path.join(MODELS_LOCATION, MODEL_NAME)
params = tf.contrib.training.HParams(
batch_size=200,
traning_steps=1000,
hidden_units=[100, 70, 50],
learning_rate=0.01,
dropout_prob=0.2,
eval_throttle_secs=0,
)
strategy = None
num_gpus = len([device_name for device_name in tf.contrib.eager.list_devices()
if '/device:GPU' in device_name])
if num_gpus > 1:
strategy = tf.contrib.distribute.MirroredStrategy()
params.batch_size = int(math.ceil(params.batch_size / num_gpus))
run_config = tf.estimator.RunConfig(
tf_random_seed=SEED,
save_checkpoints_steps=200,
keep_checkpoint_max=3,
model_dir=model_dir,
train_distribute=strategy
)
train_and_evaluate_experiment(params, run_config)
Explanation: 6. Run Experiment with Parameters
End of explanation
def make_keras_input_fn(file_pattern, batch_size, mode=tf.estimator.ModeKeys.EVAL):
mapping_tables = {}
mapping_tables[TARGET_NAME] = tf.contrib.lookup.index_table_from_tensor(
mapping=tf.constant(TARGET_LABELS))
for feature_name in CATEGORICAL_FEATURE_NAMES:
mapping_tables[feature_name] = tf.contrib.lookup.index_table_from_tensor(
mapping=tf.constant(feature_vocabolary[feature_name]))
try:
tf.tables_initializer().run(session=tf.keras.backend.get_session())
except:
pass
def _process_features(features, target):
weight = features.pop(WEIGHT_COLUMN_NAME)
target = mapping_tables[TARGET_NAME].lookup(target)
for feature in CATEGORICAL_FEATURE_NAMES:
features[feature] = mapping_tables[feature].lookup(features[feature])
return features, target, weight
def _input_fn():
dataset = tf.data.experimental.make_csv_dataset(
file_pattern=file_pattern,
batch_size=batch_size,
column_names=HEADER,
column_defaults=HEADER_DEFAULTS,
label_name=TARGET_NAME,
field_delim=',',
use_quote_delim=True,
header=False,
shuffle= (mode==tf.estimator.ModeKeys.TRAIN)
).map(_process_features)
return dataset
return _input_fn
Explanation: Building a Keras Model
Implement a data input_fn to process the data for the Keras model
Create the Keras model
Create the callbacks
Run the experiment
1. Data input_fn
A typical way to feed data into Keras is to convert it to a numpy array and pass it to the model's fit() function. However, in other (probably more practical) cases, all the data may not fit into the memory of your worker. Thus, you would need to either create a reader that reads your data chunk by chunk and pass it to model.fit_generator(), or use the tf.data.Dataset APIs, which are much easier to work with.
In the input_fn:
1. Create a CSV dataset (similar to the one used with the TensorFlow Custom Estimator)
2. Create lookups from each categorical feature's vocabulary to a numerical index
3. Process the dataset features to:
* extract the instance weight column
* convert the categorical features to numerical indices
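As a rough sanity check (a sketch only, assuming the TF 1.x graph-mode style used throughout this notebook), you can pull a single processed batch and confirm that the categorical columns now carry integer indices rather than strings:

peek_dataset = make_keras_input_fn(TRAIN_DATA_FILE, batch_size=2)()
iterator = peek_dataset.make_initializable_iterator()
next_batch = iterator.get_next()
with tf.Session() as sess:
    sess.run([tf.tables_initializer(), iterator.initializer])
    features, target, weight = sess.run(next_batch)
    print(features['workclass'], target, weight)  # 'workclass' values should now be integer ids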
End of explanation
def create_model(params):
inputs = []
to_concat = []
for column in HEADER:
if column not in [WEIGHT_COLUMN_NAME, TARGET_NAME]:
if column in NUMERIC_FEATURE_NAMES:
numeric_input = tf.keras.layers.Input(shape=(1, ), name=column, dtype='float32')
inputs.append(numeric_input)
to_concat.append(numeric_input)
else:
categorical_input = tf.keras.layers.Input(shape=(1, ), name=column, dtype='int32')
inputs.append(categorical_input)
vocabulary_size = len(feature_vocabolary[column])
embed_size = int(math.sqrt(vocabulary_size))
embedding = tf.keras.layers.Embedding(input_dim=vocabulary_size,
output_dim=embed_size)(categorical_input)
reshape = tf.keras.layers.Reshape(target_shape=(embed_size, ))(embedding)
to_concat.append(reshape)
input_layer = tf.keras.layers.Concatenate(-1)(to_concat)
dense_inputs = input_layer
for i in range(len(params.hidden_units)):
dense = tf.keras.layers.Dense(params.hidden_units[i], activation='relu')(dense_inputs)
dense_dropout = tf.keras.layers.Dropout(params.dropout_prob)(dense)#, training=is_training)
dense_inputs = dense_dropout
fully_connected = dense_inputs
logits = tf.keras.layers.Dense(units=1, name='logits', activation=None)(fully_connected)
sigmoid = tf.keras.layers.Activation(activation='sigmoid', name='probability')(logits)
# keras model
model = tf.keras.models.Model(inputs=inputs, outputs=sigmoid)
model.compile(
loss='binary_crossentropy',
optimizer='adam',
metrics=['accuracy']
)
return model
!mkdir $model_dir/checkpoints
!ls $model_dir
Explanation: 2. Create the keras model
Create the model architecture: because Keras models do not support feature columns (yet), we need to create:
One input for each feature
Embedding layer for each categorical feature
Sigmoid output
Compile the model
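As a concrete example of the int(sqrt(vocabulary size)) rule used in create_model (and in create_feature_columns earlier): education has 16 distinct values in this dataset, so its embedding size works out to int(sqrt(16)) = 4, while gender (2 values) gets int(sqrt(2)) = 1.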
End of explanation
callbacks = [
tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=2),
tf.keras.callbacks.ModelCheckpoint(
os.path.join(model_dir,'checkpoints', 'model-{epoch:02d}.h5'),
monitor='val_loss',
period=1)
]
from keras.utils.training_utils import multi_gpu_model
model = create_model(params)
# model = multi_gpu_model(model, gpus=4) # This is to train the model with multiple GPUs
model.summary()
Explanation: 3. Define callbacks
Early stopping callback
Checkpoints callback
End of explanation
train_data = make_keras_input_fn(
TRAIN_DATA_FILE,
batch_size=params.batch_size,
mode=tf.estimator.ModeKeys.TRAIN
)()
valid_data = make_keras_input_fn(
EVAL_DATA_FILE,
batch_size=params.batch_size,
mode=tf.estimator.ModeKeys.EVAL
)()
steps_per_epoch = int(math.ceil(TRAIN_DATA_SIZE/float(params.batch_size)))
model.fit(
train_data,
epochs=5,
steps_per_epoch=steps_per_epoch,
validation_data=valid_data,
validation_steps=steps_per_epoch,
callbacks=callbacks
)
!ls $model_dir/checkpoints
Explanation: 4. Run experiment
When using an out-of-memory dataset, that is, reading data chunk by chunk from file(s) and feeding it to the model (using the tf.data.Dataset APIs), you usually do not know the size of the dataset. Thus, besides the number of epochs to train for, you need to specify how many steps count as one epoch.
This is not required when using an in-memory (numpy array) dataset, since the size of the dataset is known to the model, and hence so is the number of steps in an epoch.
In our experiment, we know the size of our dataset, so we compute steps_per_epoch as: training data size / batch size
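With the values used above (TRAIN_DATA_SIZE = 32561 and batch_size = 200), that works out to ceil(32561 / 200) = 163 steps per epoch, so each of the 5 epochs corresponds to roughly one full pass over the training CSV.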
End of explanation
keras_model_dir = os.path.join(model_dir, 'keras_classifier.h5')
model.save(keras_model_dir)
print("Keras model saved to: {}".format(keras_model_dir))
model = tf.keras.models.load_model(keras_model_dir)
print("Keras model loaded.")
predict_data = make_keras_input_fn(
EVAL_DATA_FILE,
batch_size=5,
mode=tf.estimator.ModeKeys.EVAL
)()
predictions = map(
lambda probability: TARGET_LABELS[0] if probability <0.5 else TARGET_LABELS[1],
model.predict(predict_data, steps=1)
)
print(list(predictions))
Explanation: Save and Load Keras Model for Prediction
End of explanation
os.environ['MODEL_DIR'] = model_dir
export_dir = os.path.join(model_dir, 'export')
from tensorflow.contrib.saved_model.python.saved_model import keras_saved_model
keras_saved_model.save_keras_model(model, export_dir)
%%bash
saved_models_base=${MODEL_DIR}/export/
saved_model_dir=${saved_models_base}$(ls ${saved_models_base} | tail -n 1)
echo ${saved_model_dir}
ls ${saved_model_dir}
saved_model_cli show --dir=${saved_model_dir} --all
Explanation: Export Keras Model as saved_model for tf.Serving
End of explanation
estimator = tf.keras.estimator.model_to_estimator(model)
Explanation: Convert to Estimator for Distributed Training...
End of explanation |
12,518 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Tensorflow
Step1: In it's essence, tensorflow is a generic computing framework that allows you to define compute graphs (=definitions)
without running them at the same time - essentially splitting the time of computation definition from its execution.
Here's a hello world example
Step2: Let's do a simple calculation
Step3: Use tf.placeholder and TF sessions's feed_dict parameter if you want to specify the parameter values at execution time, like so
Step4: Basic graph operations
Standard vector and matrix operations
Step5: Reduce operations can be used to reduce vectors or matrix colums into single numbers
Step6: Sessions and graphs
You can work with multiple sessions or multiple graphs
Step7: Note that multiple sessions can reference the same graph. If you don't explicitely tell the session which graph to use, it will use the default graph. The result is that if you do multiple computations in tensorflow on this default graph, that there will be multiple disjointed subgraphs as part of it, one for each computation. This is why you see this many operations printed above. | Python Code:
import tensorflow as tf
import numpy as np
Explanation: Tensorflow
End of explanation
hello = tf.constant('Hello, TensorFlow!')
# To do anything useful in TF, you have to create a session and then run it
# This is also true for just outputting the value of a TF variable/constant/placeholder.
# While this is a trivial example to output the constant `hello`,
# in real-world scenarios variables will be read from some dataset.
sess = tf.Session()
print(sess.run(hello))
Explanation: In its essence, tensorflow is a generic computing framework that allows you to define compute graphs (=definitions)
without running them at the same time - essentially splitting the time of computation definition from its execution.
Here's a hello world example:
End of explanation
num1 = tf.constant(5, name="number1")
num2 = tf.constant(3, name="number2")
summation = tf.add(num1, num2) #
sess = tf.Session()
print "Using tf.add:", sess.run(summation)
# Note that tensorflow also does operator overload, so you can do this too:
result2 = num1 + num2
print "Using + operator:", sess.run(result2)
# Under the hood, TF builds a graph and adds operations to it; you can show that graph like so:
tf.get_default_graph().get_operations()
Explanation: Let's do a simple calculation:
End of explanation
num3 = tf.placeholder(dtype=np.int32) # dtype required for placeholder, uses numpy types
num4 = tf.placeholder(dtype=np.int32)
res = sess.run(num3 + num4, feed_dict={num3: 6, num4: 5})
print "6+5 =", res
res2 = sess.run(num3 + num4, {num3: 5, num4: 7}) # `feed_dict` keyword arg is optional (=2nd positional arg)
print "5+7 =", res2
# Note that feed_dict is actually taking `num3` and `num4` as dict keys: passing the objects themselves as keys.
# TF does some magic to make that work under the hood.
Explanation: Use tf.placeholder and the TF session's feed_dict parameter if you want to specify the parameter values at execution time, like so:
End of explanation
vec1 = tf.placeholder(dtype=np.int32)
vec2 = tf.placeholder(dtype=np.int32)
print "** VECTORS " + "*"* 50
print "addition", sess.run(vec1 + vec2, {vec1: [1, 2], vec2: [3, 4]})
print "element-wise multiplication", sess.run(vec1 * vec2, {vec1: [1, 2], vec2: [3, 4]})
print "** MATRICES " + "*"* 49
mat1 = tf.placeholder(shape=(2, 2), dtype=np.float64)
mat2 = tf.placeholder(shape=(2, 2), dtype=np.float64)
print "addition\n", sess.run(mat1 + mat2, {mat1: [ [1.1, 2.2], [3.3, 4.4]], mat2: [[5.5, 6.6], [7.7, 8.8]]})
print "element-wise multiplication\n", sess.run(mat1 * mat2, {mat1: [ [1.1, 2.2], [3.3, 4.4]], mat2: [[5.5, 6.6], [7.7, 8.8]]})
print "matrix multiplication\n", sess.run(tf.matmul(mat1, mat2), {mat1: [ [1.1, 2.2], [3.3, 4.4]], mat2: [[5.5, 6.6], [7.7, 8.8]]})
# print "element-wise multiplication", sess.run(vec1 * vec2, feed_dict={vec1: [1, 2], vec2: [3, 4]})
Explanation: Basic graph operations
Standard vector and matrix operations:
End of explanation
# Python: reduce sum
print "Python reduce():", reduce(lambda x, y: x + y, [1, 2, 3])
# Tensorflow: reduce sum, reduce mean, etc
vec3 = tf.placeholder(dtype=np.int32)
print "Tensorflow tf.reduce_sum():", sess.run(tf.reduce_sum(vec3), {vec3: [1, 2, 3]})
print "Tensorflow tf.reduce_mean():", sess.run(tf.reduce_mean(vec3), {vec3: [1, 2, 3]})
# Reduce sum for matrix
mat3 = tf.placeholder(shape=(3, 2), dtype=np.int32)
print "\nMatrix tf.reduce_sum()"
# axis=0 => sum down each column: [1+3+5, 2+4+6] = [9, 12]
# axis=1 => sum across each row: [1+2, 3+4, 5+6] = [3, 7, 11]
# axis=None => reduce over axis 0, then reduce again: [1+3+5, 2+4+6] = [9, 12], [9+12] = 21
print "Tensorflow tf.reduce_sum(), axis=0:", sess.run(tf.reduce_sum(mat3, axis=0), {mat3: [[1, 2], [3, 4], [5, 6]]})
print "Tensorflow tf.reduce_sum(), axis=1:", sess.run(tf.reduce_sum(mat3, axis=1), {mat3: [[1, 2], [3, 4], [5, 6]]})
print "Tensorflow tf.reduce_sum(), axis=None:", sess.run(tf.reduce_sum(mat3, axis=None), {mat3: [[1, 2], [3, 4], [5, 6]]})
Explanation: Reduce operations can be used to reduce vectors or matrix columns into single numbers: similar to python's reduce function.
End of explanation
with tf.Session() as sess2:
foo = tf.constant(123)
print sess2.run(foo)
print sess2.graph.get_operations()
Explanation: Sessions and graphs
You can work with multiple sessions or multiple graphs
End of explanation
# Using a separate graph:
with tf.Session(graph=tf.Graph()) as sess3:
bar = tf.constant(456)
print sess3.run(bar)
print sess3.graph.get_operations()
Explanation: Note that multiple sessions can reference the same graph. If you don't explicitly tell the session which graph to use, it will use the default graph. The result is that if you do multiple computations in tensorflow on this default graph, there will be multiple disjoint subgraphs as part of it, one for each computation. This is why you see so many operations printed above.
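One way (illustrative) to start from a clean slate between experiments in TF 1.x is to reset the default graph; note that this invalidates any tensors and sessions created on the old graph:

tf.reset_default_graph()
print(len(tf.get_default_graph().get_operations()))  # 0 again until new ops are added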
End of explanation |
12,519 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Python for Everyone!<br/>Oregon Curriculum Network
VPython inside Jupyter Notebooks
The Vector, Edge and Polyhedron types
The Vector class below is but a thin wrapper around VPython's built-in vector type. One might wonder, why bother? Why not just use vpython.vector and be done with it? Also, if wanting to reimplement, why not just subclass instead of wrap? All good questions.
A primary motivation is to keep the Vector and Edge types somewhat aloof from vpython's vector and more welded to vpython's cylinder instead. We want vectors and edges to materialize as cylinders quite easily.
So whether we subclass, or wrap, we want our vectors to have the ability to self-draw.
The three basis vectors must be negated to give all six spokes of the XYZ apparatus. Here's an opportunity to test our __neg__ operator then.
The overall plan is to have an XYZ "jack" floating in space, around which two tetrahedrons will be drawn, with a common center, as twins.
Their edges will intersect as at the respective face centers of the six-faced, twelve-edged hexahedron, our "duo-tet" cube (implied, but could be hard-wired as a next Polyhedron instance, just give it the six faces).
A lot of this wrapper code is about turning vpython.vectors into lists for feeding to Vector, which expects three separate arguments. A star in front of an iterable accomplishes the feat of exploding it into the separate arguments required.
Note that vector operations, including negation, always return fresh vectors. Even color has not been made a mutable property, but maybe could be.
Step3: Even though the top code cell contains no instructions to draw, Vpython's way of integrating into Jupyter Notebook seems to be by adding a scene right after the first code cell. Look below for the code that made all of the above happen. Yes, that's a bit strange. | Python Code:
from vpython import *
class Vector:
def __init__(self, x, y, z):
self.v = vector(x, y, z)
def __add__(self, other):
v_sum = self.v + other.v
return Vector(*v_sum.value)
def __neg__(self):
return Vector(*((-self.v).value))
def __sub__(self, other):
V = (self + (-other))
return Vector(*V.v.value)
def __mul__(self, scalar):
V = scalar * self.v
return Vector(*V.value)
def norm(self):
v = norm(self.v)
return Vector(*v.value)
def length(self):
return mag(self.v)
def draw(self):
self.the_cyl = cylinder(pos=vector(0,0,0), axis=self.v, radius=0.1)
self.the_cyl.color = color.cyan
XBASIS = Vector(1,0,0)
YBASIS = Vector(0,1,0)
ZBASIS = Vector(0,0,1)
XNEG = -XBASIS
YNEG = -YBASIS
ZNEG = -ZBASIS
XYZ = [XBASIS, XNEG, YBASIS, YNEG, ZBASIS, ZNEG]
sphere(pos=vector(0,0,0), color = color.orange, radius=0.2)
for radial in XYZ:
radial.draw()
Explanation: Python for Everyone!<br/>Oregon Curriculum Network
VPython inside Jupyter Notebooks
The Vector, Edge and Polyhedron types
The Vector class below is but a thin wrapper around VPython's built-in vector type. One might wonder, why bother? Why not just use vpython.vector and be done with it? Also, if wanting to reimplement, why not just subclass instead of wrap? All good questions.
A primary motivation is to keep the Vector and Edge types somewhat aloof from vpython's vector and more welded to vpython's cylinder instead. We want vectors and edges to materialize as cylinders quite easily.
So whether we subclass, or wrap, we want our vectors to have the ability to self-draw.
The three basis vectors must be negated to give all six spokes of the XYZ apparatus. Here's an opportunity to test our __neg__ operator then.
The overall plan is to have an XYZ "jack" floating in space, around which two tetrahedrons will be drawn, with a common center, as twins.
Their edges will intersect at the respective face centers of the six-faced, twelve-edged hexahedron, our "duo-tet" cube (implied, but could be hard-wired as a next Polyhedron instance, just give it the six faces).
A lot of this wrapper code is about turning vpython.vectors into lists for feeding to Vector, which expects three separate arguments. A star in front of an iterable accomplishes the feat of exploding it into the separate arguments required.
Note that vector operations, including negation, always return fresh vectors. Even color has not been made a mutable property, but maybe could be.
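A quick way (purely illustrative, no drawing involved) to convince yourself the operators behave as described:

v = Vector(1, 2, 2)
w = Vector(0, 0, 1)
print((v + w).v)    # wrapped vpython vector sum
print((-v).v)       # negation hands back a brand new Vector
print(v.length())   # mag of <1, 2, 2> is 3.0
print((v * 2).v)    # scalar multiplication also returns a fresh Vector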
End of explanation
class Edge:
def __init__(self, v0, v1):
self.v0 = v0
self.v1 = v1
def draw(self):
        """cylinder wants a starting point, and a direction vector"""
pointer = (self.v1 - self.v0)
        direction_v = pointer.norm() * pointer.length() # normalize then stretch
self.the_cyl = cylinder(pos = self.v0.v, axis=direction_v.v, radius=0.1)
self.the_cyl.color = color.green
class Polyhedron:
def __init__(self, faces, corners):
self.faces = faces
self.corners = corners
self.edges = self._get_edges()
def _get_edges(self):
        """take a list of face-tuples and distill
        all the unique edges,
        e.g. ((1,2,3)) => ((1,2),(2,3),(1,3))
        e.g. icosahedron has 20 faces and 30 unique edges
        ( = cubocta 24 + tetra's 6 edges to squares per
        jitterbug)"""
uniqueset = set()
for f in self.faces:
edgetries = zip(f, f[1:]+ (f[0],))
for e in edgetries:
e = tuple(sorted(e)) # keeps out dupes
uniqueset.add(e)
return tuple(uniqueset)
def draw(self):
for edge in self.edges:
the_edge = Edge(Vector(*self.corners[edge[0]]),
Vector(*self.corners[edge[1]]))
the_edge.draw()
the_verts = \
{ 'A': (0.35355339059327373, 0.35355339059327373, 0.35355339059327373),
'B': (-0.35355339059327373, -0.35355339059327373, 0.35355339059327373),
'C': (-0.35355339059327373, 0.35355339059327373, -0.35355339059327373),
'D': (0.35355339059327373, -0.35355339059327373, -0.35355339059327373),
'E': (-0.35355339059327373, -0.35355339059327373, -0.35355339059327373),
'F': (0.35355339059327373, 0.35355339059327373, -0.35355339059327373),
'G': (0.35355339059327373, -0.35355339059327373, 0.35355339059327373),
'H': (-0.35355339059327373, 0.35355339059327373, 0.35355339059327373)}
the_faces = (('A','B','C'),('A','C','D'),('A','D','B'),('B','C','D'))
other_faces = (('E','F','G'), ('E','G','H'),('E','H','F'),('F','G','H'))
tetrahedron = Polyhedron(the_faces, the_verts)
inv_tetrahedron = Polyhedron(other_faces, the_verts)
print(tetrahedron._get_edges())
print(inv_tetrahedron._get_edges())
tetrahedron.draw()
inv_tetrahedron.draw()
Explanation: Even though the top code cell contains no instructions to draw, Vpython's way of integrating into Jupyter Notebook seems to be by adding a scene right after the first code cell. Look below for the code that made all of the above happen. Yes, that's a bit strange.
End of explanation |
12,520 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Weather Data
Daily ocean temperature data since 1981. http
Step1: Plots!
Step2: Computations
Step3: Note that this only builds up a graph of the computations, but doesn't actually run anything yet.
Step4: To actually run the computation, we call the compute method. We'll wrap the call to compute with some diagnostics so we can inspect the performance later.
Step5: Diagnostics | Python Code:
import zarr
import dask.array as da
a = zarr.open_array("sst.day.mean.v2.zarr/", mode='r')
data = da.from_array(a, chunks=a.chunks)
data
data.nbytes / 1e9
Explanation: Weather Data
Daily ocean temperature data since 1981. http://www.esrl.noaa.gov/psd/data/gridded/data.noaa.oisst.v2.html. This is roughly 52 GB - far more than fits in memory on my laptop.
End of explanation
from matplotlib import pyplot as plt
%matplotlib inline
plt.imshow(data[0], origin='lower', cmap='viridis')
Explanation: Plots!
End of explanation
mean_over_time = da.nanmean(data, axis=(1, 2))
Explanation: Computations
End of explanation
mean_over_time
Explanation: Note that this only builds up a graph of the computations, but doesn't actually run anything yet.
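For instance (illustrative), you can confirm nothing has been evaluated by inspecting the object itself:

print(type(mean_over_time))   # still a lazy dask.array.core.Array, not a numpy result
print(mean_over_time.shape)   # one mean per time step, but no chunks have been read yet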
End of explanation
from dask.diagnostics import (ProgressBar, Profiler, ResourceProfiler,
visualize)
with ProgressBar(), Profiler() as prof, ResourceProfiler() as rprof:
res = mean_over_time.compute()
# Plot the result
plt.plot(res)
plt.title("Average ocean temp over time")
plt.xlabel('time')
plt.ylabel('temp (C)')
Explanation: To actually run the computation, we call the compute method. We'll wrap the call to compute with some diagnostics so we can inspect the performance later.
End of explanation
# NOTE: the output cell below is best viewed in nbviewer - github's viewer
# doesn't allow javascript.
from bokeh.io import output_notebook
output_notebook()
visualize([prof, rprof]);
Explanation: Diagnostics
End of explanation |
12,521 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
arrows
Step1: Just adding some imports and setting graph display options.
Step2: Let's look at our data!
load_df loads it in as a pandas.DataFrame, excellent for statistical analysis and graphing.
Step3: We'll be looking primarily at candidate, created_at, lang, place, user_followers_count, user_time_zone, polarity, and influenced_polarity, and text.
Step4: First I'll look at sentiment, calculated with TextBlob using the text column. Sentiment is composed of two values, polarity - a measure of the positivity or negativity of a text - and subjectivity. Polarity is between -1.0 and 1.0; subjectivity between 0.0 and 1.0.
Step5: Unfortunately, it doesn't work too well on anything other than English.
Step6: TextBlob has a cool translate() function that uses Google Translate to take care of that for us, but we won't be using it here - just because tweets include a lot of slang and abbreviations that can't be translated very well.
Step7: All right - let's figure out the most (positively) polarized English tweets.
Step8: Extrema don't mean much. We might get more interesting data with mean polarities for each candidate. Let's also look at influenced polarity, which takes into account the number of retweets and followers.
Step9: So tweets about Jeb Bush, on average, aren't as positive as the other candidates, but the people tweeting about Bush get more retweets and followers.
I used the formula influence = sqrt(followers + 1) * sqrt(retweets + 1). You can experiment with different functions if you like [preprocess.py
Step10: Side note
Step11: Looks like our favorite toupéed candidate hasn't even been tweeting about anyone else!
What else can we do? We know the language each tweet was (tweeted?) in.
Step12: That's a lot of languages! Let's try plotting to get a better idea, but first, I'll remove smaller language/candidate groups.
By the way, each lang value is an IANA language tag - you can look them up at https
Step13: I'll also remove English, since it would just dwarf all the other languages.
Step14: Looks like Spanish and Portuguese speakers mostly tweet about Jeb Bush, while Francophones lean more liberal, and Clinton tweeters span the largest range of languages.
We also have the time-of-tweet information - I'll plot influenced polarity over time for each candidate. I'm also going to resample the influenced_polarity values to 1 hour intervals to get a smoother graph.
Step15: Since I only took the last 20,000 tweets for each candidate, I didn't receive as large a timespan from Clinton (a candidate with many, many tweeters) compared to Rand Paul.
But we can still analyze the data in terms of hour-of-day. I'd like to know when tweeters in each language tweet each day, and I'm going to use percentages instead of raw number of tweets so I can compare across different languages easily.
By the way, the times in the dataframe are in UTC.
Step16: Note that English, French, and Spanish are significantly flatter than the other languages - this means that there's a large spread of speakers all over the globe.
But why is Portuguese spiking at 11pm Brasilia time / 3 am Lisbon time? Let's find out!
My first guess was that maybe there's a single person making a ton of posts at that time.
Step17: So that's not it. Maybe there was a major event everyone was retweeting?
Step18: Seems to be a lot of these 'Jeb Bush diz que foi atingido...' tweets. How many? We can't just count unique ones because they all are different slightly, but we can check for a large-enough substring.
Step19: That's it!
Looks like there was a news article from a Brazilian website (http
Step20: That's our raw data
Step21: Next, I have to choose a projection and plot it (again using Cartopy). The Albers Equal-Area is good for maps of the U.S. I'll also download some featuresets from the Natural Earth dataset to display state borders.
Step22: My friend Gabriel Wang pointed out that U.S. timezones other than Pacific don't mean much since each timezone covers both blue and red states, but the data is still interesting.
As expected, midwestern states lean toward Jeb Bush. I wasn't expecting Jeb Bush's highest polarity-tweets to come from the East; this is probably Donald Trump (New York, New York) messing with our data again.
In a few months I'll look at these statistics with the latest tweets and compare.
What are tweeters outside the U.S. saying about our candidates?
Outside of the U.S., if someone is in a major city, the timezone is often that city itself. Here are the top (by number of tweets) non-American 25 timezones in our dataframe.
Step23: I also want to look at polarity, so I'll only use English tweets.
(Sorry, Central/South Americans - my very rough method of filtering out American timezones gets rid of some of your timezones too. Let me know if there's a better way to do this.)
Step24: Now we have a dataframe containing (mostly) world cities as time zones. Let's get the top cities by number of tweets for each candidate, then plot polarities.
Step25: Exercise for the reader | Python Code:
from arrows.preprocess import load_df
Explanation: arrows: Yet Another Twitter/Python Data Analysis
Geospatially, Temporally, and Linguistically Analyzing Tweets about Top U.S. Presidential Candidates with Pandas, TextBlob, Seaborn, and Cartopy
Hi, I'm Raj. For my internship this summer, I've been using data science and geospatial Python libraries like xray, numpy, rasterio, and cartopy. A week ago, I had a discussion about the relevance of Bernie Sanders among millennials - and so, I set out to get a rough idea by looking at recent tweets.
I don't explain any of the code in this document, but you can skip the code and just look at the results if you like. If you're interested in going further with this data, I've posted source code and the dataset at https://github.com/raj-kesavan/arrows.
If you have any comments or suggestions (on either code or analysis), please let me know at [email protected]. Enjoy!
First, I used Tweepy to pull down 20,000 tweets for each of Hillary Clinton, Bernie Sanders, Rand Paul, and Jeb Bush [retrieve_tweets.py].
I've also already done some calculations, specifically of polarity, subjectivity, influence, influenced polarity, and longitude and latitude (all explained later) [preprocess.py].
End of explanation
from textblob import TextBlob
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import matplotlib
import seaborn as sns
import cartopy
pd.set_option('display.max_colwidth', 200)
pd.options.display.mpl_style = 'default'
matplotlib.style.use('ggplot')
sns.set_context('talk')
sns.set_style('whitegrid')
plt.rcParams['figure.figsize'] = [12.0, 8.0]
% matplotlib inline
Explanation: Just adding some imports and setting graph display options.
End of explanation
df = load_df('arrows/data/results.csv')
df.info()
Explanation: Let's look at our data!
load_df loads it in as a pandas.DataFrame, excellent for statistical analysis and graphing.
End of explanation
df[['candidate', 'created_at', 'lang', 'place', 'user_followers_count',
'user_time_zone', 'polarity', 'influenced_polarity', 'text']].head(1)
Explanation: We'll be looking primarily at candidate, created_at, lang, place, user_followers_count, user_time_zone, polarity, and influenced_polarity, and text.
End of explanation
TextBlob("Tear down this wall!").sentiment
Explanation: First I'll look at sentiment, calculated with TextBlob using the text column. Sentiment is composed of two values, polarity - a measure of the positivity or negativity of a text - and subjectivity. Polarity is between -1.0 and 1.0; subjectivity between 0.0 and 1.0.
End of explanation
TextBlob("Radix malorum est cupiditas.").sentiment
Explanation: Unfortunately, it doesn't work too well on anything other than English.
End of explanation
sentence = TextBlob("Radix malorum est cupiditas.").translate()
print(sentence)
print(sentence.sentiment)
Explanation: TextBlob has a cool translate() function that uses Google Translate to take care of that for us, but we won't be using it here - just because tweets include a lot of slang and abbreviations that can't be translated very well.
End of explanation
english_df = df[df.lang == 'en']
english_df.sort('polarity', ascending = False).head(3)[['candidate', 'polarity', 'subjectivity', 'text']]
Explanation: All right - let's figure out the most (positively) polarized English tweets.
End of explanation
candidate_groupby = english_df.groupby('candidate')
candidate_groupby[['polarity', 'influence', 'influenced_polarity']].mean()
Explanation: Extrema don't mean much. We might get more interesting data with mean polarities for each candidate. Let's also look at influenced polarity, which takes into account the number of retweets and followers.
End of explanation
jeb = candidate_groupby.get_group('Jeb Bush')
jeb_influence = jeb.sort('influence', ascending = False)
jeb_influence[['influence', 'polarity', 'influenced_polarity', 'user_name', 'text', 'created_at']].head(5)
Explanation: So tweets about Jeb Bush, on average, aren't as positive as the other candidates, but the people tweeting about Bush get more retweets and followers.
I used the formula influence = sqrt(followers + 1) * sqrt(retweets + 1). You can experiment with different functions if you like [preprocess.py:influence].
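For a concrete sense of scale: a tweet from an account with 9,999 followers that picked up 99 retweets would score influence = sqrt(10000) * sqrt(100) = 100 * 10 = 1000.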
We can look at the most influential tweets about Jeb Bush to see what's up.
End of explanation
df[df.user_name == 'Donald J. Trump'].groupby('candidate').size()
Explanation: Side note: you can see that sentiment analysis isn't perfect - the last tweet is certainly negative toward Jeb Bush, but it was actually assigned a positive polarity. Over a large number of tweets, though, sentiment analysis is more meaningful.
As to the high influence of tweets about Bush: it looks like Donald Trump (someone with a lot of followers) has been tweeting a lot about Bush over the other candidates - one possible reason for Jeb's greater influenced_polarity.
End of explanation
language_groupby = df.groupby(['candidate', 'lang'])
language_groupby.size()
Explanation: Looks like our favorite toupéed candidate hasn't even been tweeting about anyone else!
What else can we do? We know the language each tweet was (tweeted?) in.
End of explanation
largest_languages = language_groupby.filter(lambda group: len(group) > 10)
Explanation: That's a lot of languages! Let's try plotting to get a better idea, but first, I'll remove smaller language/candidate groups.
By the way, each lang value is an IANA language tag - you can look them up at https://www.iana.org/assignments/language-subtag-registry/language-subtag-registry.
End of explanation
non_english = largest_languages[largest_languages.lang != 'en']
non_english_groupby = non_english.groupby(['lang', 'candidate'], as_index = False)
sizes = non_english_groupby.text.agg(np.size)
sizes = sizes.rename(columns={'text': 'count'})
sizes_pivot = sizes.pivot_table(index='lang', columns='candidate', values='count', fill_value=0)
plot = sns.heatmap(sizes_pivot)
plot.set_title('Number of non-English Tweets by Candidate', family='Ubuntu')
plot.set_ylabel('language code', family='Ubuntu')
plot.set_xlabel('candidate', family='Ubuntu')
plot.figure.set_size_inches(12, 7)
Explanation: I'll also remove English, since it would just dwarf all the other languages.
End of explanation
mean_polarities = df.groupby(['candidate', 'created_at']).influenced_polarity.mean()
plot = mean_polarities.unstack('candidate').resample('60min').plot()
plot.set_title('Influenced Polarity over Time by Candidate', family='Ubuntu')
plot.set_ylabel('influenced polarity', family='Ubuntu')
plot.set_xlabel('time', family='Ubuntu')
plot.figure.set_size_inches(12, 7)
Explanation: Looks like Spanish and Portuguese speakers mostly tweet about Jeb Bush, while Francophones lean more liberal, and Clinton tweeters span the largest range of languages.
We also have the time-of-tweet information - I'll plot influenced polarity over time for each candidate. I'm also going to resample the influenced_polarity values to 1 hour intervals to get a smoother graph.
End of explanation
language_sizes = df.groupby('lang').size()
threshold = language_sizes.quantile(.75)
top_languages_df = language_sizes[language_sizes > threshold]
top_languages = set(top_languages_df.index) - {'und'}
top_languages
df['hour'] = df.created_at.apply(lambda datetime: datetime.hour)
for language_code in top_languages:
lang_df = df[df.lang == language_code]
normalized = lang_df.groupby('hour').size() / lang_df.lang.count()
plot = normalized.plot(label = language_code)
plot.set_title('Tweet Frequency in non-English Languages by Hour of Day', family='Ubuntu')
plot.set_ylabel('normalized frequency', family='Ubuntu')
plot.set_xlabel('hour of day (UTC)', family='Ubuntu')
plot.legend()
plot.figure.set_size_inches(12, 7)
Explanation: Since I only took the last 20,000 tweets for each candidate, I didn't receive as large a timespan from Clinton (a candidate with many, many tweeters) compared to Rand Paul.
But we can still analyze the data in terms of hour-of-day. I'd like to know when tweeters in each language tweet each day, and I'm going to use percentages instead of raw number of tweets so I can compare across different languages easily.
By the way, the times in the dataframe are in UTC.
End of explanation
df_of_interest = df[(df.hour == 2) & (df.lang == 'pt')]
print('Number of tweets:', df_of_interest.text.count())
print('Number of unique users:', df_of_interest.user_name.unique().size)
Explanation: Note that English, French, and Spanish are significantly flatter than the other languages - this means that there's a large spread of speakers all over the globe.
But why is Portuguese spiking at 11pm Brasilia time / 3 am Lisbon time? Let's find out!
My first guess was that maybe there's a single person making a ton of posts at that time.
End of explanation
df_of_interest.text.head(25).unique()
Explanation: So that's not it. Maybe there was a major event everyone was retweeting?
End of explanation
df_of_interest[df_of_interest.text.str.contains('Jeb Bush diz que foi atingido')].text.count()
Explanation: Seems to be a lot of these 'Jeb Bush diz que foi atingido...' tweets. How many? We can't just count unique ones because they are all slightly different, but we can check for a large-enough substring.
End of explanation
tz_df = english_df.dropna(subset=['user_time_zone'])
us_tz_df = tz_df[tz_df.user_time_zone.str.contains("US & Canada")]
us_tz_candidate_groupby = us_tz_df.groupby(['candidate', 'user_time_zone'])
us_tz_candidate_groupby.influenced_polarity.mean()
Explanation: That's it!
Looks like there was a news article from a Brazilian website (http://jconline.ne10.uol.com.br/canal/mundo/internacional/noticia/2015/07/05/jeb-bush-diz-que-foi-atingido-por-criticas-de-trump-a-mexicanos-188801.php) that happened to get a lot of retweets at that time period.
A similar article in English is at http://www.nytimes.com/politics/first-draft/2015/07/04/an-angry-jeb-bush-says-he-takes-donald-trumps-remarks-personally/.
Since languages can span across different countries, we might get results if we search by location, rather than just language.
We don't have very specific geolocation information other than timezone, so let's try plotting candidate sentiment over the 4 major U.S. timezones (Los Angeles, Denver, Chicago, and New York). This is also a good opportunity to look at a geographical map.
End of explanation
tz_shapes = cartopy.io.shapereader.Reader('arrows/world/tz_world_mp.shp')
tz_records = list(tz_shapes.records())
tz_translator = {
'Eastern Time (US & Canada)': 'America/New_York',
'Central Time (US & Canada)': 'America/Chicago',
'Mountain Time (US & Canada)': 'America/Denver',
'Pacific Time (US & Canada)': 'America/Los_Angeles',
}
american_tz_records = {
tz_name: next(filter(lambda record: record.attributes['TZID'] == tz_id, tz_records))
for tz_name, tz_id
in tz_translator.items()
}
Explanation: That's our raw data: now to plot it on a map. I got the timezone Shapefile from http://efele.net/maps/tz/world/. First, I read in the Shapefile with Cartopy.
End of explanation
albers_equal_area = cartopy.crs.AlbersEqualArea(-95, 35)
plate_carree = cartopy.crs.PlateCarree()
states_and_provinces = cartopy.feature.NaturalEarthFeature(
category='cultural',
name='admin_1_states_provinces_lines',
scale='50m',
facecolor='none'
)
cmaps = [matplotlib.cm.Blues, matplotlib.cm.Greens,
matplotlib.cm.Reds, matplotlib.cm.Purples]
norm = matplotlib.colors.Normalize(vmin=0, vmax=30)
candidates = df['candidate'].unique()
plt.rcParams['figure.figsize'] = [6.0, 4.0]
for index, candidate in enumerate(candidates):
plt.figure()
plot = plt.axes(projection=albers_equal_area)
plot.set_extent((-125, -66, 20, 50))
plot.add_feature(cartopy.feature.LAND)
plot.add_feature(cartopy.feature.COASTLINE)
plot.add_feature(cartopy.feature.BORDERS)
plot.add_feature(states_and_provinces, edgecolor='gray')
plot.add_feature(cartopy.feature.LAKES, facecolor="#00BCD4")
for tz_name, record in american_tz_records.items():
tz_specific_df = us_tz_df[us_tz_df.user_time_zone == tz_name]
tz_candidate_specific_df = tz_specific_df[tz_specific_df.candidate == candidate]
mean_polarity = tz_candidate_specific_df.influenced_polarity.mean()
plot.add_geometries(
[record.geometry],
crs=plate_carree,
color=cmaps[index](norm(mean_polarity)),
alpha=.8
)
plot.set_title('Influenced Polarity toward {} by U.S. Timezone'.format(candidate), family='Ubuntu')
plot.figure.set_size_inches(6, 3.5)
plt.show()
print()
Explanation: Next, I have to choose a projection and plot it (again using Cartopy). The Albers Equal-Area is good for maps of the U.S. I'll also download some featuresets from the Natural Earth dataset to display state borders.
End of explanation
american_timezones = ('US & Canada|Canada|Arizona|America|Hawaii|Indiana|Alaska'
'|New_York|Chicago|Los_Angeles|Detroit|CST|PST|EST|MST')
foreign_tz_df = tz_df[~tz_df.user_time_zone.str.contains(american_timezones)]
foreign_tz_groupby = foreign_tz_df.groupby('user_time_zone')
foreign_tz_groupby.size().sort_values(ascending=False).head(25)
Explanation: My friend Gabriel Wang pointed out that U.S. timezones other than Pacific don't mean much since each timezone covers both blue and red states, but the data is still interesting.
As expected, midwestern states lean toward Jeb Bush. I wasn't expecting Jeb Bush's highest polarity-tweets to come from the East; this is probably Donald Trump (New York, New York) messing with our data again.
In a few months I'll look at these statistics with the latest tweets and compare.
What are tweeters outside the U.S. saying about our candidates?
Outside of the U.S., if someone is in a major city, the timezone is often that city itself. Here are the top (by number of tweets) non-American 25 timezones in our dataframe.
End of explanation
foreign_english_tz_df = foreign_tz_df[foreign_tz_df.lang == 'en']
Explanation: I also want to look at polarity, so I'll only use English tweets.
(Sorry, Central/South Americans - my very rough method of filtering out American timezones gets rid of some of your timezones too. Let me know if there's a better way to do this.)
End of explanation
foreign_tz_groupby = foreign_english_tz_df.groupby(['candidate', 'user_time_zone'])
top_foreign_tz_df = foreign_tz_groupby.filter(lambda group: len(group) > 40)
top_foreign_tz_groupby = top_foreign_tz_df.groupby(['user_time_zone', 'candidate'], as_index = False)
mean_influenced_polarities = top_foreign_tz_groupby.influenced_polarity.mean()
pivot = mean_influenced_polarities.pivot_table(
index='user_time_zone',
columns='candidate',
values='influenced_polarity',
fill_value=0
)
plot = sns.heatmap(pivot)
plot.set_title('Influenced Polarity in Major Foreign Cities by Candidate', family='Ubuntu')
plot.set_ylabel('city', family='Ubuntu')
plot.set_xlabel('candidate', family='Ubuntu')
plot.figure.set_size_inches(12, 7)
Explanation: Now we have a dataframe containing (mostly) world cities as time zones. Let's get the top cities by number of tweets for each candidate, then plot polarities.
End of explanation
df_place = df.dropna(subset=['place'])
mollweide = cartopy.crs.Mollweide()
plot = plt.axes(projection=mollweide)
plot.set_global()
plot.add_feature(cartopy.feature.LAND)
plot.add_feature(cartopy.feature.COASTLINE)
plot.add_feature(cartopy.feature.BORDERS)
plot.scatter(
list(df_place.longitude),
list(df_place.latitude),
transform=plate_carree,
zorder=2
)
plot.set_title('International Tweeters with Geolocation Enabled', family='Ubuntu')
plot.figure.set_size_inches(14, 9)
plot = plt.axes(projection=albers_equal_area)
plot.set_extent((-125, -66, 20, 50))
plot.add_feature(cartopy.feature.LAND)
plot.add_feature(cartopy.feature.COASTLINE)
plot.add_feature(cartopy.feature.BORDERS)
plot.add_feature(states_and_provinces, edgecolor='gray')
plot.add_feature(cartopy.feature.LAKES, facecolor="#00BCD4")
candidate_groupby = df_place.groupby('candidate', as_index = False)
colors = ['#1976d2', '#7cb342', '#f4511e', '#7b1fa2']
for index, (name, group) in enumerate(candidate_groupby):
longitudes = group.longitude.values
latitudes = group.latitude.values
plot.scatter(
longitudes,
latitudes,
transform=plate_carree,
color=colors[index],
label=name,
zorder=2
)
plot.set_title('U.S. Tweeters by Candidate', family='Ubuntu')
plt.legend(loc='lower left')
plot.figure.set_size_inches(12, 7)
Explanation: Exercise for the reader: why is Rand Paul disliked in Athens? You can probably guess, but the actual tweets causing this are rather amusing.
Greco-libertarian relations aside, the data shows that London and Amsterdam are among the most influential of cities, with the former leaning toward Jeb Bush and the latter about neutral.
In India, Clinton-supporters reside in New Delhi while Chennai tweeters back Rand Paul. By contrast, in 2014, New Delhi constituents voted for the conservative Bharatiya Janata Party while Chennai voted for the more liberal All India Anna Dravida Munnetra Kazhagam Party - so there seems to be some kind of cultural difference between the voters of 2014 and the tweeters of today.
Last thing I thought was interesting: Athens has the highest mean polarity for Bernie Sanders, the only city for which this is the case. Could this have anything to do with the recent economic crisis, 'no' vote for austerity, and Bernie's social democratic tendencies?
Finally, I'll look at specific geolocation (latitude and longitude) data. Since only about 750 out of 80,000 tweets had geolocation enabled, this data can't really be used for sentiment analysis, but we can still get a good idea of international spread.
First I'll plot everything on a world map, then break it up by candidate in the U.S.
End of explanation |
12,522 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
filter
The function filter(function, list) offers a convenient way to filter out all the elements of an iterable, for which the function returns True.
The function filter(function(),l) needs a function as its first argument. The function needs to return a Boolean value (either True or False). This function will be applied to every element of the iterable. Only if the function returns True will the element of the iterable be included in the result.
Lets see some examples
Step1: Now let's filter a list of numbers. Note
Step2: filter() is more commonly used with lambda functions, this because we usually use filter for a quick job where we don't want to write an entire function. Lets repeat the example above using a lambda expression | Python Code:
#First let's make a function
def even_check(num):
if num%2 ==0:
return True
Explanation: filter
The function filter(function, list) offers a convenient way to filter out all the elements of an iterable for which the function returns True.
The function filter(function, l) needs a function as its first argument, and that function needs to return a Boolean value (either True or False). It will be applied to every element of the iterable, and only the elements for which the function returns True are included in the result.
Let's see some examples:
End of explanation
lst =range(20)
filter(even_check,lst)
Explanation: Now let's filter a list of numbers. Note: putting the function into filter without any parenthesis might feel strange, but keep in mind that functions are objects as well.
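One caveat worth adding (an aside, not in the original): in Python 3 both range() and filter() are lazy, so you may need to wrap the result in list() to actually see the values.
list(filter(even_check, lst))   # [0, 2, 4, 6, 8, 10, 12, 14, 16, 18]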
End of explanation
filter(lambda x: x%2==0,lst)
Explanation: filter() is more commonly used with lambda functions, because we usually use filter for a quick job where we don't want to write an entire function. Let's repeat the example above using a lambda expression:
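For comparison (an aside), the same filtering is often written as a list comprehension, which many people find more readable:
[x for x in lst if x % 2 == 0]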
End of explanation |
12,523 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Solving an integer linear programming problem with linprog
Step1: Objective function
$$z(max) = 7x_1 + 6x_2$$
Constraints
Step2: In this case this problem is not giving Integer Variables.
Getting integer variables, add options={'maxiter'
Step3: To print the values that we need | Python Code:
from scipy.optimize import linprog
import numpy as np
Explanation: Solving an integer linear programming problem with linprog
End of explanation
z = np.array([ 7, 6])
C = np.array([
[ 2, 3], #C1
[ 6, 5] #C2
])
b = np.array([12, 30])
x1 = (0, None)
x2 = (0, None)
sol = linprog(-z, A_ub = C, b_ub = b, bounds = (x1, x2), method='simplex')
sol
Explanation: Objective function
$$\max z = 7x_1 + 6x_2$$
Constraints:
$C_1 = 2x_1 + 3x_2 \leq 12$
$C_2 = 6x_1 + 5x_2 \leq 30$
$x_1, x_2 \geq 0$
$x_1, x_2 \text{ must be integers}$
End of explanation
sol = linprog(-z, A_ub = C, b_ub = b, bounds = (x1, x2), method='simplex', options={'maxiter':True})
sol
Explanation: In this case the solution does not give integer values for the variables.
To obtain integer values here, add options={'maxiter':True} to the parameters of linprog.
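It is worth adding (an aside) that linprog only solves the continuous relaxation, and the maxiter trick above just happens to stop the simplex at an integer vertex for this particular problem. A more general route (a sketch, assuming SciPy >= 1.9 is available) is the dedicated mixed-integer solver scipy.optimize.milp:
from scipy.optimize import milp, LinearConstraint, Bounds
res = milp(c=-z,
           constraints=LinearConstraint(C, ub=b),   # 2x1 + 3x2 <= 12, 6x1 + 5x2 <= 30
           integrality=np.ones_like(z),             # require both variables to be integers
           bounds=Bounds(lb=0))                     # x1, x2 >= 0
print(res.x, -res.fun)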
End of explanation
print(f"x1 = {sol.x[0]}, x2 = {sol.x[1]}, z = {sol.fun*-1}")
Explanation: To print the values that we need:
End of explanation |
12,524 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Review of Numerical Differentiation
Q. What orders are the errors?
Step2: Integration
Step3: Test case
Step4: Q. What should we get if we integrate from 0 to 1?
We can verify this analytically
Step5: How about from $x_i=0$ to $x_f=0.2$?
Q. Before we do this, do you expect the trapezoidal sum version of the integral to be an overestimate or an underestimate? Hint
Step6: Q. How can we improve the accuracy of this numerical integral?
Step7: Q. How many grid points do you expect to reach 1% accuracy?
Q. 1% accuracy? 0.1%?
Expanding the y-axis using logarithm
Step8: Q. How could we find the number of grid points required to achieve 0.01% accuracy more precisely than the visual approach above? | Python Code:
def forward_difference(f, x, h):
return (f(x + h) - f(x)) / h
def central_difference(f, x, h):
return (f(x + h) - f(x - h)) / (2. * h)
Explanation: Review of Numerical Differentiation
Q. What orders are the errors?
End of explanation
def trapezoidal(f, x_L, x_U, n):
    """Integrate function f from x_L to x_U using n points."""
# Step size (or grid spacing):
h = (x_U - x_L) / float(n) # h is dx in our notes
# Include start and end points in summation
tot = f(x_L) + f(x_U)
# Loop over all the trapezoids:
# We don't include k = 0 (x = x_L) or k = n-1 (x = x_U) because
# they're included in the sum before the loop.
for k in range(1, n, 1):
x = x_L + (k * h)
tot += 2. * f(x)
tot *= h/2.
return tot
%matplotlib inline
import numpy as np
import matplotlib.pyplot as pl
Explanation: Integration: Chalk Board
Numerical Integration
End of explanation
tau= 2*np.pi
g = lambda x, A=1: tau*A*np.cos(tau*x)
x = np.linspace(0, 2, 100)
pl.plot(x, g(x), color='k')
pl.xlabel('x')
pl.ylabel('g(x)')
Explanation: Test case:
$$g(x) = \tau A \cos(\tau x)$$
with $\tau=2\pi$
End of explanation
from math import sin, pi
int_analytic = lambda A, x_i, x_f: A * (sin(tau*x_f) - sin(tau*x_i))
print('Integral of g(x) from x_i=%g to x_f=%g is %g ' \
% (0., 0.2, int_analytic(1., 0., 0.2)))
x_L = 0.
x_U = 0.2
n = 5
print('Approximate integral is %g' % trapezoidal(g, x_L, x_U, n))
Explanation: Q. What should we get if we integrate from 0 to 1?
We can verify this analytically:
$\int_{x_i}^{x_f} g(x) dx = \tau A \int_{x_i}^{x_f} \cos(\tau x) dx $
Substitute $u = \tau x$, in which case $du = \tau dx$,
$$ \int_{x_i}^{x_f} g(x) dx = A \int_{\tau x_i}^{\tau x_f} \cos(u) du$$
Finally,
$$ \int_{x_i}^{x_f} g(x) dx = A \left[\sin(u) \right]_{\tau x_i}^{\tau x_f} = A \left[\sin(\tau x_f) - \sin(\tau x_i) \right]$$
For $x_i = 0$, $x_f = 1$,
$$ \int_0^1 g(x) dx = A \left[\sin(\tau) - \sin(0) \right] = 0$$
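As an additional cross-check (an aside; it assumes scipy is available, which this notebook does not otherwise import), an adaptive quadrature routine should agree with both results:
from scipy.integrate import quad
val, abserr = quad(g, 0., 0.2)   # integral estimate and an error bound
print('quad gives %g (analytic %g)' % (val, int_analytic(1., 0., 0.2)))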
End of explanation
A = 1.0
pl.figure()
pl.plot(x, g(x, A=A), color='k')
pl.xlabel('x')
pl.ylabel('g(x)')
pl.xlim(0, 0.2)
print('Analytic integral is %g' % int_analytic(A, 0.0, 0.2))
print('Approximate integral is %g' \
% trapezoidal(lambda x: g(x, A=A), 0.0, 0.2, 5))
Explanation: How about from $x_i=0$ to $x_f=0.2$?
Q. Before we do this, do you expect the trapezoidal sum version of the integral to be an overestimate or an underestimate? Hint: see figure below.
End of explanation
n = 100 # Max number of grid points
x_L = 0.0 # limits of integration
x_U = 0.2
# Grid point and integral approximation arrays
n_grid = np.arange(1, n) # Starts with 1: need at least one trapezoid.
approx = np.zeros(n-1)
# Loop over all n starting with n = 1 (n_grid[0] = 1):
for i in range(n-1):
integral = trapezoidal(g, x_L, x_U, n_grid[i])
approx[i] = integral
# Compute relative error in percent
error = 100 * (approx - int_analytic(1., x_L, x_U)) / int_analytic(1., x_L, x_U)
# Plot results
pl.figure()
pl.plot(n_grid, error)
pl.xlabel('Number of Grid Points')
pl.ylabel('Percent Error')
Explanation: Q. How can we improve the accuracy of this numerical integral?
End of explanation
error_abs = np.abs(error) # absolute value of the error
pl.semilogy(n_grid, error_abs)
pl.xlabel('Number of Grid Points')
pl.ylabel('Percent Relative Error')
# Plot straight lines corresponding to 1% and 10% error
pl.axhline(y=1, color='k', ls='--')
pl.axhline(y=1e-1, color='k', ls=':')
pl.axhline(y=1e-2, color='k', ls='-.')
Explanation: Q. How many grid points do you expect to reach 1% accuracy?
Q. 1% accuracy? 0.1%?
Expanding the y-axis using logarithm
End of explanation
# location of point closest to 0.01% error
i = np.abs(error_abs - 0.01).argmin() # This yields the index of the point
print(np.abs(error_abs - 0.01).min()) # This prints the point closest to 0.01%.
pl.semilogy(np.abs(error_abs - 0.01))
print(i, n_grid[i], error_abs[i])
Explanation: Q. How could we find the number of grid points required to achieve 0.01% accuracy more precisely than the visual approach above?
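One possible answer (a sketch): instead of the closest point, ask for the first grid size at which the error drops below the target.
n_required = n_grid[np.argmax(error_abs < 0.01)]   # first n with |error| < 0.01% (argmax returns the first True)
print(n_required)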
End of explanation |
12,525 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
CHAPTER 1.1
Step1: Iterating over lists and index-based for loops
Step2: DICTIONARIES
Step3: Iterating over dictionaries
WARNING
Step4: DYI
Step5: WARNING / DANGER / EXPLOSION / ATTENTION!
What happens if I initialise lists and dictionaries in the keyword (named) parameters of a function?
Step7: Functions with positional, keyword, and arbitrary parameters
A function is defined with def <function_name>([parameters]) where the parameters can be
Step8: DYI | Python Code:
# creazione
l = [1,2,3,10,"a", -12.333, 1024, 768, "pippo"]
# concatenazione
l += ["la", "concatenazione", "della", "lista"]
# aggiunta elementi in fondo
l.append(32)
l.append(3)
print(u"la lista è {}".format(l))
l.remove(3) # rimuove la prima occorrenza
print(u"la lista è {}".format(l))
i = l.index(10) # restituisce l'indice della prima occorrenza del valore 10
print(u"l'indice di 10 è {}".format(i))
print(u"il valore all'indice 3 è {}".format(l[3]))
print(u"** vediamo come funziona lo SLICING delle liste")
print(u"Ecco i primi 3 valori della lista {}".format(l[:3]))
print(u"e poi i valori dal 3o al penultimo {}".format(l[3:-1]))
print(u"e poi i valori dal 3o al penultimo, ma ogni 2 {}".format(l[3:-1:2]))
print("\n***FUNZIONI RANGE e XRANGE***\n")
l2 = range(1000) # questi sono i primi 1000 valori da 0 a 999
print(u"ecco la lista ogni 50 elementi di n <=1000: {}".format(l2[::50]))
# LA FUNZIONE xrange è comoda per ottenere un oggetto tipo (ma non = a ) un generatore
# da cui i numeri vengono appunto generati al momento dell'accesso all'elemento stesso
# della sequenza
# Il codice di prima dà errore
try:
l2 = xrange(1000) # questi sono i primi 1000 valori da 0 a 999 ma senza occupare RAM
print(u"ecco la lista ogni 50 elementi di n <= 1000: {}".format(l2[::50]))
except Exception as e:
print("ECCEZIONE {}: {}".format(type(e), e))
# Il codice che funziona con lo slice valuta xrange in una lista quindi
# risulta inutile
l2 = list(xrange(1000)) # questi sono i primi 1000 valori da 0 a 999 ma senza occupare RAM
print(u"ecco la lista ogni 50 elementi di n <= 1000: {}\n".format(l2[::50]))
## ma si può fare direttamente con range o xrange!
print(u"[OK] lista ogni 50 elementi <= 1000: {}".format(range(0,1000,50)))
Explanation: CHAPTER 1.1: lists, dictionaries and the data model
LISTS: operations and methods
End of explanation
print("***PER FARE UN CICLO FOR CON INDICE INCREMENTALE SI USA XRANGE!")
for el in xrange(1,21):
print("numero {}".format(el))
print("***PER NUMERARE GLI ELEMENTI DI UNA LISTA SI USA ENUMERATE!")
for i, el in enumerate(l, start=10): # numero partendo da 10, se start non specificato parto da 0
print("Il contenuto {} si trova all'indice {}".format(el, i))
Explanation: Iterating over lists and index-based for loops
End of explanation
# definizione
d = {"nome": "Luca", "cognome": "Ferroni", "classe": 1980}
# aggiornamento
d.update({
"professioni" : ["docente", "lavoratore autonomo"]
})
# recupero valore per chiave certa (__getitem__)
print(u"Il nome del personaggio è {}".format(d["nome"]))
# sfrutto il mini-formato di template per le stringhe
# https://docs.python.org/2.7/library/string.html#formatspec
print(u"Il personaggio è {nome} {cognome} nato nel {classe}".format(**d))
# Recupero di un valore per una chiave opzionale
print(u"'nome' è una chiave che esiste con valore = {}, 'codiceiban' invece non esiste = {}".format(
d.get('nome'), d.get('codiceiban')))
print(u"Se avessi usato la __getitem__ avrei avuto un KeyError")
# rimozione di una chiave dal dizionario
print(u"Rimuovo il nome dal dizionario con d.pop('nome')")
d.pop('nome')
print(u"'nome' ora non esiste con valore = {}, come 'codiceiban' = {}".format(
d.get('nome'), d.get('codiceiban')))
print(u"Allora, se non trovi la chiave 'nome' allora dimmi 'Pippo'. Cosa dici?")
print(d.get('nome', 'Pippo'))
Explanation: DICTIONARIES: operations and methods
End of explanation
print("\n***PER ITERARE SU TUTTI GLI ELEMENTI DI UN DIZIONARIO SI USA .iteritems()***\n")
for key, value in d.iteritems():
print("Alla chiave {} corrisponde il valore {}".format(key,value))
print("\n***DIZIONARI E ORDINAMENTO***\n")
data_input = [('a', 1), ('b', 2), ('l', 10), ('c', 3)]
d1 = dict(data_input)
import collections
d2_ord = collections.OrderedDict(data_input)
print("input = {}".format(data_input))
print("dizionario non ordinato = {}".format(d1))
print("dizionario ordinato = {}".format(d2_ord))
print("lista di coppie da diz NON ordinato = {}".format(d1.items()))
print("lista di coppie da diz ordinato = {}".format(d2_ord.items()))
Explanation: Iterating over dictionaries
WARNING: the contents of a dictionary are not ordered! There is no guarantee about ordering. If you need a guaranteed order you must use the collections.OrderedDict class.
End of explanation
def foo(bar):
bar.append(42)
print(bar)
# >> [42]
answer_list = []
foo(answer_list)
print(answer_list)
# >> [42]
def foo(bar):
bar = 'new value'
print (bar)
# >> 'new value'
answer_list = 'old value'
foo(answer_list)
print(answer_list)
# >> 'old value'
Explanation: DYI: Fibonacci optimized
Save the intermediate results of the Fibonacci function in a dictionary used as a cache.
Run the tests to make sure you have not broken the algorithm.
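A minimal sketch of one possible solution (the function and cache names are illustrative, and the tests mentioned above are assumed to exist elsewhere):
_fib_cache = {0: 0, 1: 1}
def fib(n):
    if n not in _fib_cache:
        _fib_cache[n] = fib(n - 1) + fib(n - 2)
    return _fib_cache[n]
assert fib(10) == 55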
Characteristics of the Python data model
"Mutable" and "immutable" data types
Python Data Model
Every object has:
* an identity -> it never changes and can be thought of as the object's address in memory
* a type -> it never changes and determines the operations the object supports
* a value -> it can change if the type is mutable, it cannot if the type is immutable
Immutable data types include:
integers
strings
tuples
frozensets (note: a plain set is actually mutable)
Mutable data types include:
lists
dictionaries
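A quick illustrative sketch of the difference:
a = [1, 2]
print(id(a)); a.append(3); print(id(a))   # same identity, the value changed (mutable)
s = "hi"
print(id(s)); s = s + "!"; print(id(s))   # a new object was created (strings are immutable)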
Strong, dynamic typing
From http://stackoverflow.com/questions/11328920/is-python-strongly-typed/11328980#11328980 (see also the comments):
Python is strongly, dynamically typed.
Strong typing means that the type of a value doesn't suddenly change. A string containing only digits doesn't magically become a number, as may happen in Perl. Every change of type requires an explicit conversion.
Dynamic typing means that runtime objects (values) have a type, as opposed to static typing where variables have a type.
As for example
bob = 1
bob = "bob"
This works because the variable does not have a type; it can name any object. After bob=1, you'll find that type(bob) returns int, but after bob="bob", it returns str. (Note that type is a regular function, so it evaluates its argument, then returns the type of the value.)
Parameter passing: by value or by reference?
Neither! See https://jeffknupp.com/blog/2012/11/13/is-python-callbyvalue-or-callbyreference-neither/
Call by object, or call by object reference.
The basic idea: in Python a variable is just a name for an object (= the triple identity, type, value).
In practice the behaviour depends on whether the objects named by the variables are mutable or immutable.
Examples follow:
End of explanation
def ciao(n, l=[], d={}):
if n > 5:
return
l.append(n)
d[n] = n
print("la lista è {}".format(l))
print("il diz è {}".format(d))
ciao(1)
ciao(4)
ciao(2)
print("----")
ciao(2, l=[1])
ciao(5)
Explanation: WARNING / DANGER / EXPLOSION / ATTENTION!
What happens if we initialise lists and dictionaries as default values of a function's keyword parameters?
End of explanation
# -*- coding: utf-8 -*-
# This is hello_who_3.py
import sys # <-- importo un modulo
def compose_hello(who, force=False): # <-- valore di default
    """Get the hello message."""
try: # <-- gestione eccezioni `Duck Typing`
message = "Hello " + who + "!"
except TypeError: # <-- eccezione specifica
# except TypeError as e: # <-- eccezione specifica su parametro e
print("[WARNING] Il parametro `who` dovrebbe essere una stringa")
if force: # <-- controllo "if"
message = "Hello {}!".format(who)
else:
raise # <-- solleva eccezione originale
except Exception:
print("Verificatasi eccezione non prevista")
else:
print("nessuna eccezione")
finally:
print("Bye")
return message
def hello(who='world'): # <-- valore di default
print(compose_hello(who))
if __name__ == "__main__":
hello("mamma")
hello("pippo")
hello(1)
ret = compose_hello(1, force=True)
print("Ha composto {}".format(ret))
try:
hello(1)
except TypeError as e:
print("{}: {}".format(type(e).__name__, e))
print("Riprova")
Explanation: Functions with positional, keyword, and arbitrary parameters
A function is defined with def <function_name>([parameters]) where the parameters can be:
positional. E.g.: def hello(who)
keyword (named). E.g.: def hello(who='') or who=None or who='default'
both, but the keyword parameters must come after the positional ones. E.g.: def hello(who, say="How are you?")
arbitrary, either positional with the * symbol or keyword with **. By convention the names args and kw (or kwargs) are used. E.g.: def hello(who, say="How are you?", *args, **kw)
The * and ** symbols denote, respectively, unpacking a list into a sequence of elements and unpacking a dictionary into a sequence of <key>=<value> parameters, as sketched below.
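An illustrative sketch of all four kinds of parameter together (the names and values are made up):
def greet(who, say="How are you?", *args, **kw):
    print("Hello {}! {}".format(who, say))
    print("extra positional arguments: {}".format(args))
    print("extra keyword arguments: {}".format(kw))
greet("Anna", "Ciao!", 1, 2, lang="it")   # args = (1, 2), kw = {'lang': 'it'}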
Variable scope
http://www.saltycrane.com/blog/2008/01/python-variable-scope-notes/
and remember that:
for i in [1,2,3]:
    print(i)
print("I am outside the loop and I can still see that i={}".format(i))
Namespaces
Namespaces in Python are containers of names and can be implicit or explicit. The __builtin__ and __main__ namespaces are implicit. Classes, objects, functions and, in particular, modules are explicit namespaces.
I can import a module, which then acts as a namespace, with import <modulename>, and access all the top-level symbols defined in the module as <modulename>.<symbol>.
Importing individual symbols from a module into another namespace is done with from <modulename> import <symbol>. What you should not do is import all the symbols of one module into another with the form from <modulename> import *. Don't do it unless strictly necessary.
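A small sketch of the three forms:
import math              # the module object math acts as a namespace
print(math.pi)           # access a top-level symbol as <modulename>.<symbol>
from math import sqrt    # import a single symbol into the current namespace
print(sqrt(2))
# from math import *     # discouraged: it pollutes the current namespace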
The exception hierarchy and exception handling
The built-in exceptions, i.e. the ones already included in the Python language, are listed at https://docs.python.org/2/library/exceptions.html#exception-hierarchy
By deriving from them you can easily define your own.
Exception handling is done in blocks:
try:
    ...
except [exception] [as variable]:
    ...
else:
    ...
finally:
    ...
Practice Duck Typing!
« If it looks like a duck, swims like a duck, and quacks like a duck, then it probably is a duck. »
An example of composing the greeting with exception handling follows:
End of explanation
def main():
    # Step 1. Until the user types STOP - do everything in a while True with a break where needed
    # Step 2. The user enters their name
    # Use raw_input("Enter ...") to ask the user for the info
    # Step 3. The user enters their city
    # Step 4. The user enters their salary
    # Step 5. Append the dictionary with keys
    # 'name', 'city', 'salary', 'genfibo'
    # to the PEOPLE = [] list
    PEOPLE.append(person_d)
    # Step 6. Print PEOPLE to screen however you like
    # Step 7. Start again from Step 1
    # END
    # ---- BONUS ----
    # Step 8. When the user stops -> write the data to a file
    # if you like, Step 8.1 in json format
    # if you like, Step 8.2 in csv format
    # if you like, Step 8.3 in xml format
    # Step 9. Do it even if the user presses CTRL+C or CTRL+Z
Explanation: DYI: Personal records and Fibonacci
Knowing that user input is requested with the function raw_input(<prompt>), write a function that asks for the name, city, salary, and Fibonacci generation of one's rabbits.
End of explanation |
12,526 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
An Exclusive Guide to Exclusion Limits
In this notebook we will place exclusion limits using Python, it's the companion piece to an an introduction to exclusion limits we wrote that explains the theory and idea behind them. We will be restricting ourselves to use basic numerical packages such as numpy, scipy and matplotlib for visualisation, and the first step is to import them.
Step1: We need something to place a limit on, and in our first example we are looking for a bump in an otherwise smooth background. This could be a gamma-ray line from dark matter annihilation, or maybe a particle resonance at the LHC.
In this first toy example we keep the shape of the background, the width, and position of the signal bump are all fixed. The two parameters are then the normalisations of the signal bump and the smooth background.
Futhermore we define the data to be a measured number of events in $n=30$ energy bins, and define a data generating function with the signal spectra being a Gaussian at $E = 10$ and with a variance $\sigma^2 = 2$. The background is a power law with an index $-1$.
Step2: We define a function to visualise two realisations of this model.
Step3: On the left the signal is buried below the background and is completely invisible, and placing an upper exclusion limit on the signal normalisation $\theta_s$ would be natural. In the right panel however the signal is large enough to be visible in the data, and an upper limit would not be the thing to do.
Defining the likelihood and the test statistic
The likelihood $\mathcal{L}(\theta_s, \theta_b \,|\, D)$ is simply the product of poisson distributions for each bin. We define and work with the $\ln \mathcal{L}$ as it behaves better numerically better.
Step4: Our test statistic of choice is the logarithm of the maximum likelihood ratio which we defined as
$$ \mathrm{TS}(\theta) = -2 \ln \frac{\max_{\nu} \mathcal{L}( \theta, \nu \,|\, D)}{\max_{\theta, \nu} \mathcal{L}( \theta, \nu \,|\, D)}$$
where $\theta$ are the parameter(s) we want to constrain, and $\nu$ are the remained. So if we wanted to constrain the normalisation of the signal $\theta_s$ the test statistic to consider is
$$ \mathrm{TS}(\theta_s) = -2 \ln \frac{\max_{\theta_b} \mathcal{L}( \theta_s, \theta_b \,|\, D)}{\max_{\theta_s, \theta_b} \mathcal{L}( \theta_s, \theta_b \,|\, D)}$$
Step5: The optional bestfit arguments is to spare us the burden of finding the minima everytime we evaluate the test statistic. As always, it is useful to visualise, and as examples we take the two realisations we already plotted.
Step6: Where the black x marks the true parameter values of each of the data sets. We see that the test statistic is small in the region of the true values, not perfectly so as we have statistical noise.
We can also consider the full 2D case where
$$ \mathrm{TS}(\theta_s, \theta_b) = -2 \ln \frac{ \mathcal{L}( \theta_s, \theta_b \,|\, D)}{\max_{\theta_s, \theta_b} \mathcal{L}( \theta_s, \theta_b \,|\, D)}$$
which is simply the likelihood normalised to it's maximum.
Step7: where the white x marks the true parameter values of each of the data sets. We see that, as expected, the test statistic is smaller closer to the true value.
The white contour is simply us moving ahead of ourselves and plotting the $95.4$% confidence limit region using Wilks' theorem, i.e. by assuming that our TS is distributed as $\chi^2_{k=2}$. Ignore this for now.
Confidence Intervals
From the visualisation of the test statistics above we see that they are deep valleys close to the true values. This can be used to construct confidence intervals. The interval is simply
$${\, \theta \, | \, \mathrm{TS}(\theta) < c \, }$$
where $c$ is a threshold value such that we have the desired coverage. To determine $c$ we need to know how the test statistic is distributed under the null, i.e. when $\theta$ is the true value. If it applies, Wilks' theorem states that it's asymptotically chi-squared distributed with as many degrees of freedom $k$ as parameters of interest. So for 1D and 2D we can compute
Step8: With this we can now find the 68% CL intervals of $\theta_s$ for our two examples.
Step9: Check coverage
To check if our procedure work we will do a 100 experiments and see if the true value are covered at the correct frequency, e.g. 68ish times if we want 68% CL.
Step10: Not bad, with more experiments this should be better. Try it yourself!
Verifying that Wilk's theorem applies
We can verify that our test statistic is $\chi^2_{k=1}$ distributed by doing monte carlo experiments. We can then compare the empirical distribution of the test statistic with the chi-squared one.
Step11: As we can see, in this case the empirical distribution is well-described by a $\chi_{k=1}^2$ distribution.
Upper Limits
We've seen how the the maximum likelihood test statistic leads to two sided intervals, but our aim is to do exclusion limits. We modify our test statistic as follows.
$$\mathrm{TS}(\theta_s) =
\begin{cases}
\mathrm{TS}(\theta_s) & \quad \theta_s \geq \hat{\theta_s}\
0 & \quad \text{elsewise}\
\end{cases}
$$
This is for a upper limit on $\theta_s$.
Step12: This is now distributed as
$$\mathrm{TS}{\mathrm{ul}} \sim \frac{1}{2}\delta(0) + \frac{1}{2} \chi^2{\mathrm{df}=1}$$
from this expression we can determine the required threshold for various confidence levels. Again the threshold for a $100n$ confidence level is simply
$$ \mathrm{CDF}(x) = n $$
Step13: We can now use a root finder to find for which $\theta_s$ our new TS has this threshold value, and this is our upper limit!
Step14: Check coverage for the upper limit
We perform a multitude of 'experiments' and investigate if the upper limit covers the true value as often as it should.
Step15: Again, not bad for the number of trials.
TODO
Extend the problem by letting the bump position vary.
Make brazil plot | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from scipy.stats import poisson, norm, chi2
from scipy.optimize import minimize, brentq
import warnings; warnings.simplefilter('ignore') # ignore some numerical errors
Explanation: An Exclusive Guide to Exclusion Limits
In this notebook we will place exclusion limits using Python; it's the companion piece to an introduction to exclusion limits we wrote that explains the theory and ideas behind them. We will restrict ourselves to basic numerical packages such as numpy, scipy and matplotlib for visualisation, and the first step is to import them.
End of explanation
E = np.logspace(0,2,30+1)
E = (E[1:] + E[:-1]) / 2 # bin centers
def signal_shape(E=E, mu=10, var=2):
return norm.pdf(E, mu, var)
def background_shape(E=E, gamma=-1):
return E**gamma
def generate_data(bkg_norm, sig_norm, sig_mu=10, sig_var=2, E=E, seed=None):
np.random.seed(seed)
return np.random.poisson(sig_norm * signal_shape(E, mu=sig_mu, var=sig_var) +
bkg_norm * background_shape(E))
Explanation: We need something to place a limit on, and in our first example we are looking for a bump in an otherwise smooth background. This could be a gamma-ray line from dark matter annihilation, or maybe a particle resonance at the LHC.
In this first toy example we keep the shape of the background, the width, and position of the signal bump are all fixed. The two parameters are then the normalisations of the signal bump and the smooth background.
Futhermore we define the data to be a measured number of events in $n=30$ energy bins, and define a data generating function with the signal spectra being a Gaussian at $E = 10$ and with a variance $\sigma^2 = 2$. The background is a power law with an index $-1$.
End of explanation
def visualise_model(bkg_norm, sig_norm, sig_mu=10, ax=None, title=None):
if ax is None:
fig, ax = plt.subplots()
x = np.logspace(0,2,200)
b = bkg_norm*background_shape(x)
s = sig_norm*signal_shape(x, mu=sig_mu)
ax.plot(x, b, label='Background')
ax.plot(x, s, label='Signal')
ax.plot(x, s+b, color='black', linestyle='dotted', label='S+B')
N = generate_data(bkg_norm, sig_norm, sig_mu=sig_mu)
ax.errorbar(E, N, yerr=np.sqrt(N), fmt='o', color='grey', label='Data')
ax.set_ylim(0.4, 2*np.maximum(s.max(), b.max()))
ax.set_xscale('log')
ax.set_yscale('log')
ax.set_xlabel('E')
ax.set_ylabel('dN/dE')
ax.set_title(title)
ax.legend(frameon=False)
return N
fig, axes = plt.subplots(ncols=2, figsize=(10,4))
data_small_sig = visualise_model(bkg_norm=1000, sig_norm=10, ax=axes[0], title='Small signal');
data_large_sig = visualise_model(bkg_norm=1000, sig_norm=600, ax=axes[1], title='Large signal');
Explanation: We define a function to visualise two realisations of this model.
End of explanation
def lnLike(bkg_norm, sig_norm, data, gamma=-1, mu=10, var=2):
s = sig_norm*signal_shape(mu=mu, var=var)
b = bkg_norm*background_shape(gamma=gamma)
return np.log(poisson.pmf(data, mu=s+b)).sum()
Explanation: On the left the signal is buried below the background and is completely invisible, and placing an upper exclusion limit on the signal normalisation $\theta_s$ would be natural. In the right panel however the signal is large enough to be visible in the data, and an upper limit would not be the thing to do.
Defining the likelihood and the test statistic
The likelihood $\mathcal{L}(\theta_s, \theta_b \,|\, D)$ is simply the product of poisson distributions for each bin. We define and work with the $\ln \mathcal{L}$ as it behaves better numerically better.
End of explanation
def TS_sig(sig_norm, data, bestfit=None):
numerator = minimize(lambda b: -2*lnLike(b, sig_norm, data), 1e3)
if not bestfit:
bestfit = minimize(lambda x: -2*lnLike(x[0], x[1], data), (1e3,1e3))
return numerator.fun - bestfit.fun
Explanation: Our test statistic of choice is the logarithm of the maximum likelihood ratio which we defined as
$$ \mathrm{TS}(\theta) = -2 \ln \frac{\max_{\nu} \mathcal{L}( \theta, \nu \,|\, D)}{\max_{\theta, \nu} \mathcal{L}( \theta, \nu \,|\, D)}$$
where $\theta$ are the parameter(s) we want to constrain, and $\nu$ are the remained. So if we wanted to constrain the normalisation of the signal $\theta_s$ the test statistic to consider is
$$ \mathrm{TS}(\theta_s) = -2 \ln \frac{\max_{\theta_b} \mathcal{L}( \theta_s, \theta_b \,|\, D)}{\max_{\theta_s, \theta_b} \mathcal{L}( \theta_s, \theta_b \,|\, D)}$$
End of explanation
def visualise_TS_sig(data, siglim=(0, 1000), ax=None, title=None):
if ax is None:
fig, ax = plt.subplots()
bestfit = minimize(lambda x: -2*lnLike(x[0], x[1], data), (1e3,1e3))
x = np.linspace(*siglim, 100)
ts = np.array([TS_sig(s, data, bestfit=bestfit) for s in x])
ax.plot(x, ts)
ax.set_ylim(0,10)
ax.set_xlim(*siglim)
ax.set_title(title)
ax.set_xlabel('Signal Normalisation')
ax.set_ylabel('TS')
fig, axes = plt.subplots(ncols=2, figsize=(10,4))
visualise_TS_sig(data_small_sig, siglim=(-90,130), ax=axes[0], title='Small signal')
axes[0].scatter(10, 0.5, color='black', marker='x')
visualise_TS_sig(data_large_sig, siglim=(400,720), ax=axes[1], title='Large signal');
axes[1].scatter(600, 0.5, color='black', marker='x');
Explanation: The optional bestfit arguments is to spare us the burden of finding the minima everytime we evaluate the test statistic. As always, it is useful to visualise, and as examples we take the two realisations we already plotted.
End of explanation
def TS_2d(bkg_norm, sig_norm, data, bestfit=None):
numerator = -2*lnLike(bkg_norm, sig_norm, data)
if not bestfit:
bestfit = minimize(lambda x: -2*lnLike(x[0], x[1], data), (1e3,1e3))
return numerator - bestfit.fun
def visualise_TS_2d(data, siglim=(-100, 1000), bkglim=(800,1200), ax=None, title=None):
if ax is None:
fig, ax = plt.subplots()
bestfit = minimize(lambda x: -2*lnLike(x[0], x[1], data), (1e3,1e3))
bkg_norms = np.linspace(*bkglim, 100)
sig_norms = np.linspace(*siglim, 100)
ts = [[TS_2d(b, s, data, bestfit=bestfit) for s in sig_norms] for b in bkg_norms]
X, Y = np.meshgrid(bkg_norms, sig_norms)
Z = np.array(ts).T
r = ax.contourf(X, Y, Z, 100, cmap='Blues_r')
plt.colorbar(r, label='TS', ax=ax)
ax.contour(X, Y, Z, colors='white', levels=[5.991])
ax.set_xlim(*bkglim)
ax.set_ylim(*siglim)
ax.set_xlabel('Background Normalisation')
ax.set_ylabel('Signal Normalisation')
ax.set_title(title)
fig, axes = plt.subplots(ncols=2, figsize=(12,4))
visualise_TS_2d(data_small_sig, ax=axes[0], title='Small signal')
axes[0].scatter(1000, 10, color='white', marker='x')
visualise_TS_2d(data_large_sig, ax=axes[1], title='Large signal')
axes[1].scatter(1000, 600, color='white', marker='x');
Explanation: Where the black x marks the true parameter values of each of the data sets. We see that the test statistic is small in the region of the true values, not perfectly so as we have statistical noise.
We can also consider the full 2D case where
$$ \mathrm{TS}(\theta_s, \theta_b) = -2 \ln \frac{ \mathcal{L}( \theta_s, \theta_b \,|\, D)}{\max_{\theta_s, \theta_b} \mathcal{L}( \theta_s, \theta_b \,|\, D)}$$
which is simply the likelihood normalised to it's maximum.
End of explanation
from functools import partial
def threshold(cl, cdf):
return brentq(lambda x: cl-cdf(x), 0, 10)
threshold_1d = partial(threshold, cdf=partial(chi2.cdf, df=1))
threshold_2d = partial(threshold, cdf=partial(chi2.cdf, df=2))
print('68%% and 95%% thresholds for 1D: %.3f and %.3f'
% tuple([threshold_1d(x) for x in [0.68, 0.95]]))
print('68%% and 95%% thresholds for 2D: %.3f and %.3f'
% tuple([threshold_2d(x) for x in [0.68, 0.95]]))
Explanation: where the white x marks the true parameter values of each of the data sets. We see that, as expected, the test statistic is smaller closer to the true value.
The white contour is simply us moving ahead of ourselves and plotting the $95.4$% confidence limit region using Wilks' theorem, i.e. by assuming that our TS is distributed as $\chi^2_{k=2}$. Ignore this for now.
Confidence Intervals
From the visualisation of the test statistics above we see that they are deep valleys close to the true values. This can be used to construct confidence intervals. The interval is simply
$$\{\, \theta \, | \, \mathrm{TS}(\theta) < c \,\}$$
where $c$ is a threshold value such that we have the desired coverage. To determine $c$ we need to know how the test statistic is distributed under the null, i.e. when $\theta$ is the true value. If it applies, Wilks' theorem states that it's asymptotically chi-squared distributed with as many degrees of freedom $k$ as parameters of interest. So for 1D and 2D we can compute
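Since these thresholds are just quantiles of the chi-squared distribution, the same numbers can also be read directly from the inverse CDF (an equivalent sketch using scipy's ppf):
print(chi2.ppf(0.68, df=1), chi2.ppf(0.95, df=1))   # ~0.989 and ~3.841
print(chi2.ppf(0.68, df=2), chi2.ppf(0.95, df=2))   # ~2.279 and ~5.991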
End of explanation
def confidence_interval(data, CL=0.68, bestfit=None,
ts=TS_sig, threshold_fun=threshold_1d):
if not bestfit:
bestfit = minimize(lambda x: -2*lnLike(x[0], x[1], data), (1e3,1e3))
threshold = threshold_fun(CL)
# Simple way to find starting points for the root finder.
# We need (a, b) for which TS-ts_threshold have different sign.
step = 10+bestfit.x[1]/2
u = bestfit.x[1] + step
while ts(u, data, bestfit=bestfit) <= threshold:
u += step
# The TS tend to be symmetrical which we can use to do a better initial guess.
l = 2*bestfit.x[1] - u
while ts(l, data, bestfit=bestfit) <= threshold:
l -= step
upper_bound = brentq(lambda x: ts(x, data, bestfit=bestfit) - threshold,
bestfit.x[1], u)
lower_bound = brentq(lambda x: ts(x, data, bestfit=bestfit) - threshold,
l, bestfit.x[1])
return lower_bound, upper_bound
print(confidence_interval(data_small_sig))
print(confidence_interval(data_large_sig))
Explanation: With this we can now find the 68% CL intervals of $\theta_s$ for our two examples.
End of explanation
def coverage_check(sig_norm, CL=0.68, bkg_norm=1000, n=100):
covered = 0
for _ in range(n):
d = generate_data(bkg_norm, sig_norm)
l, u = confidence_interval(d, CL=CL)
if l < sig_norm and u > sig_norm:
covered += 1
return covered/n
print('Coverage small signal: %.3f' % coverage_check(10))
print('Coverage large signal: %.3f' % coverage_check(600))
Explanation: Check coverage
To check if our procedure work we will do a 100 experiments and see if the true value are covered at the correct frequency, e.g. 68ish times if we want 68% CL.
End of explanation
def mc(sig_norm, bkg_norm=1000, n=100):
ts = []
for _ in range(n):
d = generate_data(bkg_norm, sig_norm)
bf = minimize(lambda x: -2*lnLike(x[0], x[1], d), (1e3,1e3))
ts.append(minimize(lambda s: TS_sig(s, d, bestfit=bf), sig_norm).fun)
return np.array(ts)
mc_small_signal = mc(sig_norm=10)
x = np.linspace(np.min(mc_small_signal), np.max(mc_small_signal), 100)
plt.hist(mc_small_signal, bins=20, normed=True, alpha=0.5, label='MC')
plt.plot(x, chi2.pdf(x, df=1), lw=4, label='chi2 df=1')
plt.legend(frameon=False)
plt.xlabel('TS');
Explanation: Not bad, with more experiments this should be better. Try it yourself!
Verifying that Wilk's theorem applies
We can verify that our test statistic is $\chi^2_{k=1}$ distributed by doing monte carlo experiments. We can then compare the empirical distribution of the test statistic with the chi-squared one.
End of explanation
def TS_upper_limit(sig_norm, data, bestfit=None):
if not bestfit:
bestfit = minimize(lambda x: -2*lnLike(x[0], x[1], data), (1e3,1e3))
if sig_norm < bestfit.x[1]:
return 0.0
else:
return TS_sig(sig_norm, data, bestfit=bestfit)
bestfit = minimize(lambda x: -2*lnLike(x[0], x[1], data_small_sig), (1e3,1e3))
x = np.linspace(-100, 100, 30)
y = [TS_sig(s, data_small_sig, bestfit=bestfit) for s in x]
plt.plot(x, y);
y = [TS_upper_limit(s, data_small_sig, bestfit=bestfit) for s in x]
plt.plot(x, y)
plt.xlabel('Signal Normalisation')
plt.ylabel('TS');
Explanation: As we can see, in this case the empirical distribution is well-described by a $\chi_{k=1}^2$ distribution.
Upper Limits
We've seen how the maximum likelihood test statistic leads to two-sided intervals, but our aim is to set exclusion limits. We modify our test statistic as follows.
$$\mathrm{TS}(\theta_s) =
\begin{cases}
\mathrm{TS}(\theta_s) & \quad \theta_s \geq \hat{\theta}_s \\
0 & \quad \text{otherwise}
\end{cases}
$$
This is for an upper limit on $\theta_s$.
End of explanation
threshold_ul = partial(threshold, cdf = lambda x: 0.5 + 0.5*chi2.cdf(x, df=1))
print('Threshold for 90%% CL upper limit: %.3f' % threshold_ul(0.90))
Explanation: This is now distributed as
$$\mathrm{TS}_{\mathrm{ul}} \sim \frac{1}{2}\delta(0) + \frac{1}{2} \chi^2_{\mathrm{df}=1}$$
from this expression we can determine the required threshold for various confidence levels. Again the threshold for a $100n$ confidence level is simply
$$ \mathrm{CDF}(x) = n $$
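Equivalently (a sketch), solving 0.5 + 0.5*chi2.cdf(x, df=1) = CL for x gives x = chi2.ppf(2*CL - 1, df=1), so the 90% CL threshold can be read off directly:
print(chi2.ppf(2*0.90 - 1, df=1))   # ~1.642, matching the root finder above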
End of explanation
def upper_limit(data, bestfit=None, CL=0.90):
if not bestfit:
bestfit = minimize(lambda x: -2*lnLike(x[0], x[1], data), (1e3,1e3))
threshold = threshold_ul(CL)
return brentq(lambda x: TS_upper_limit(x, data, bestfit=bestfit)-threshold,
-1000, 1000)
print('90%% CL upper limit for small signal: %.2f' % upper_limit(data_small_sig))
print('90%% CL upper limit for large signal: %.2f' % upper_limit(data_large_sig))
Explanation: We can now use a root finder to find for which $\theta_s$ our new TS has this threshold value, and this is our upper limit!
End of explanation
def coverage_check_ul(sig_norm, CL=0.90, bkg_norm=1000, n=100):
upper_limits = []
for _ in range(n):
d = generate_data(bkg_norm, sig_norm)
upper_limits.append(upper_limit(d, CL=CL))
upper_limits = np.array(upper_limits)
not_excluded = (upper_limits >= sig_norm).sum()
return not_excluded/n
print('Coverage small signal: %.3f' % coverage_check_ul(10))
print('Coverage large signal: %.3f' % coverage_check_ul(600))
Explanation: Check coverage for the upper limit
We perform a multitude of 'experiments' and investigate if the upper limit covers the true value as often as it should.
End of explanation
def find_bestfit(data):
N = 20
bkgs = np.linspace(0, 2000, N)
sigs = np.linspace(0, 2000, N)
pos = np.linspace(0, 100, N)
points = np.array(np.meshgrid(bkgs, sigs, pos)).T.reshape(-1,3)
ts = list(map(lambda x: -2*lnLike(x[0], x[1], data, mu=x[2]), points))
start = points[np.argmin(ts),:]
return minimize(lambda x: -2*lnLike(x[0], x[1], data, mu=x[2]), start)
def TS_pos(sig_norm, sig_pos, data, bestfit=None):
numerator = minimize(lambda b: -2*lnLike(b, sig_norm, data, mu=sig_pos), 1e3)
if not bestfit:
bestfit = minimize(lambda x: -2*lnLike(x[0], x[1], data, mu=x[2]),
(1e3, 1e3, 1e1))
return numerator.fun - bestfit.fun
def visualise_TS_pos(data, signormlim=(500, 1500), sigposlim=(1,100), ax=None, title=None):
if ax is None:
fig, ax = plt.subplots()
def starting_point(data):
N = 20
bkgs = np.linspace(0, 2000, N)
sigs = np.linspace(0, 2000, N)
pos = np.linspace(0, 100, N)
points = np.array(np.meshgrid(bkgs, sigs, pos)).T.reshape(-1,3)
ts = list(map(lambda x: -2*lnLike(x[0], x[1], data, mu=x[2]), points))
return points[np.argmin(ts),:]
bestfit = minimize(lambda x: -2*lnLike(x[0], x[1], data, mu=x[2]), starting_point(data))
print(bestfit)
sig_pos = np.logspace(*np.log10(sigposlim), 50)
sig_norms = np.linspace(*signormlim, 100)
ts = [[TS_pos(n, p, data, bestfit=bestfit) for n in sig_norms] for p in sig_pos]
X, Y = np.meshgrid(sig_pos, sig_norms)
Z = np.array(ts).T
r = ax.contourf(X, Y, Z, 100, cmap='Blues_r')
plt.colorbar(r, label='TS', ax=ax)
ax.contour(X, Y, Z, colors='white', levels=[5.991])
ax.set_xlim(*sigposlim)
ax.set_ylim(*signormlim)
ax.set_xscale('log')
ax.set_xlabel('Signal Position')
ax.set_ylabel('Signal Normalisation')
ax.set_title(title)
fig, axes = plt.subplots(ncols=3, nrows=2, figsize=(14,8))
for i, p in enumerate([3, 10, 20]):
d = visualise_model(bkg_norm=1000, sig_norm=1000, sig_mu=p,
ax=axes[0,i], title='Peak at E=%i' % p)
visualise_TS_pos(d, ax=axes[1,i], title='')
d = visualise_model(bkg_norm=1000, sig_norm=1000, sig_mu=20,
ax=axes[0,i], title='Peak at E=%i' % p)
def find_starting_point(data):
N = 20
bkgs = np.linspace(0, 2000, N)
sigs = np.linspace(0, 2000, N)
pos = np.linspace(0, 100, N)
points = np.array(np.meshgrid(bkgs, sigs, pos)).T.reshape(-1,3)
ts = list(map(lambda x: -2*lnLike(x[0], x[1], data, mu=x[2]), points))
i = np.argmin(ts)
return (i, points[i,:], np.min(ts))
find_starting_point(d)
#bestfit = minimize(lambda x: -2*lnLike(x[0], x[1], d, mu=x[2]), (1e3,1e3,2e1),
# bounds=[(0, 2000)]*2+[(1,100)])
#print(bestfit)
np.argmin
Explanation: Again, not bad for the number of trials.
TODO
Extend the problem by letting the bump position vary.
Make brazil plot
End of explanation |
12,527 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
BCycle Austin stations
This notebook looks at the stations that make up the Austin BCycle network. For each station we have the following information
Step1: Plot the stations on a map of Austin
Let's plot all the stations on an Open Street Map of Austin, to see where they're concentrated. We can use the latitude and longitude of the stations to center the map. To find out the name of the station, click on the marker.
Step2: There are a total of 50 stations, which can be roughly clustered into 4 different groups
Step3: Looking at the histogram, the most popular station capacity is 13, then 11, and 9. Maybe there's an advantage to having capacity an odd number for stations ! The largest stations have a capacity of 19, and the smallest have a capacity of 9 (approximately half of the largest station).
Station bike capacity and location
Now we have an idea of the bike station capacity, we can visualize this on a map to see if there is any relationship between their capacity and location. The plot below uses their capacity as the radius of each circle marker. For proper quantitative evaluation of the stations, we should take the square root of the radius so the areas of the circles are proportional to the capacity. But not doing this helps distinguish between the narrow range of capacities.
To find out the precise capacity of the stations, click on the circle markers.
Step4: The map above shows 4 of the largest stations are along the North edge of Lady Bird Lake. There is also a large station at Congress & 11th Street, at the north of the downtown area.
The downtown area is served by a larger number of smaller stations, concentrated relatively close together. East of I-35, the stations tend to be smaller and on major roads running North-to-South. The University area and South-of-the-Lake areas are more dispersed than the downtown and East areas.
Station health
For more insight into the stations and their characteristics, we can define a metric of station 'health'. When bike stations have no bikes available, customers can't start a journey from that location. If they have no docks available, they can't end a trip at that station. In addition to the station information, we also have station bike and dock availability sampled every 5 minutes. If we count the amount of 5-minute periods a station is full or empty, this can give us a guide to its health.
Step5: Empty/Full station health
Now we have a list of all the bike measurements where the station was empty or full, let's aggregate by station_id and count the results. This will tell us for every station, how many 5-minute intervals it was either full or empty. This is a good indicator of which stations are often full or empty, and are unusable. Let's merge the station names so the graph makes sense.
Step6: Empty/full by station in April and May 2016
Now we have a list of which stations were empty or full in each 5 minute period, we can total these up by station. If a station is either empty or full, this effectively removes it from the BCycle network temporarily. Let's use a stacked barchart to show the proportion of the time the station was full or empty. Sorting by the amount of 5-minute periods the station was full or empty also helps.
Step7: The bar chart shows a large variation between the empty/full durations for each of the stations. The worst offender is the Riverside @ S. Lamar station, which was full or empty for a total of 12 days during the 61-day period of April and May 2016.
The proportion of empty vs full 5-minute periods also varies from station to station, shown in the relative height of the green and blue stacked bars.
Station empty / full percentage in April and May 2016
The barchart above shows a large variation between the 'Riverside @ S. Lamar' with ~3500 empty or full 5 minute periods, and the 'State Capitol Visitors Garage' with almost no full or empty 5 minute periods. To dig into this further, let's calculate the percentage of the time each station was neither empty nor full. This shows the percentage of the time the station was active in the BCycle system.
Step8: The barchart above shows that 12 of the 50 stations are either full or empty 10% of the time.
Table of unhealthy stations
Let's show the table of stations, with only those available 90% of the time or more included.
Step9: Stations empty / full based on their location
After checking the proportion of time each station has docks and bikes available above, we can visualize these on a map, to see if there is any correlation in their location.
In the map below, the circle markers use both colour and size as below
Step10: The map shows that stations most frequently unavailable can be grouped into 3 clusters
Step11: Correlation between station empty/full and station capacity
Perhaps the reason stations are empty or full a lot is because they have a smaller capacity. Smaller stations would quickly run out of bikes, or become more full. Let's do a hypothesis test, assuming p < 0.05 for statistical significance.
Null hypothesis
Step12: Station empty / full by Time
To break the station health down further, we can check in which 5 minute periods the station was either full or empty. By grouping the results over various time scales, we can look for periodicity in the data. | Python Code:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import folium
import seaborn as sns
from bcycle_lib.utils import *
%matplotlib inline
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
# Load the stations table, and show the first 10 entries
STATIONS = 5
stations_df = load_stations()
num_stations = stations_df.shape[0]
print('Found {} stations, showing first {}'.format(num_stations, STATIONS))
stations_df.head(STATIONS)
Explanation: BCycle Austin stations
This notebook looks at the stations that make up the Austin BCycle network. For each station we have the following information:
station_id: A unique identifier for each of the station. Used to connect the bikes.csv time-varying table to the static stations table.
name: The name of the station. This is the nearest cross street to the station, or if the station is located at a building, the name of that building.
address: The address of the station. Note that if a company sponsors the station, it will include their name, for example 'Presented by Whole Foods Market'. For this reason, its best not to geocode this field to a lat/lon pair, and use those values from the respective fields.
lat: The latitude of the station.
lon: The longitude of the station.
datetime: The date and time that the station was first reported when fetching the BCycle Station webpage.
Imports and data loading
Before getting started, let's import some useful libraries (including the bcycle_lib created for these notebooks), and load the stations CSV file.
End of explanation
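# Optional sanity check (a sketch, not part of the original notebook): confirm that
# station_id is unique and that every station has coordinates before mapping them.
assert stations_df['station_id'].is_unique, 'Duplicate station_id values found'
print('Stations missing lat/lon:',
      stations_df[['lat', 'lon']].isnull().any(axis=1).sum())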
# Calculate where the map should be centred based on station locations
min_lat = stations_df['lat'].min()
max_lat = stations_df['lat'].max()
min_lon = stations_df['lon'].min()
max_lon = stations_df['lon'].max()
center_lat = min_lat + (max_lat - min_lat) / 2.0
center_lon = min_lon + (max_lon - min_lon) / 2.0
# Plot map using the B&W Stamen Toner tiles centred on BCycle stations
map = folium.Map(location=(center_lat, center_lon),
zoom_start=14,
tiles='Stamen Toner',
control_scale=True)
# Add markers to the map for each station. Click on them to see their name
for station in stations_df.iterrows():
stat=station[1]
folium.Marker([stat['lat'], stat['lon']],
popup=stat['name'],
icon=folium.Icon(icon='info-sign')
).add_to(map)
map.save('stations.html')
map
Explanation: Plot the stations on a map of Austin
Let's plot all the stations on an Open Street Map of Austin, to see where they're concentrated. We can use the latitude and longitude of the stations to center the map. To find out the name of the station, click on the marker.
End of explanation
# Load bikes dataframe, calculate the capacity of each every 5 minutes (bikes + docks)
bikes_df = load_bikes()
bikes_df['capacity'] = bikes_df['bikes'] + bikes_df['docks']
# Now find the max capacity across all the stations at all 5 minute intervals
bikes_df = bikes_df.groupby('station_id').max().reset_index()
bikes_df = bikes_df[['station_id', 'capacity']]
# Now join with the stations dataframe using station_id
stations_cap_df = pd.merge(stations_df, bikes_df, on='station_id')
# Print the smallest and largest stations
N = 4
sorted_stations = stations_cap_df.sort_values(by='capacity', ascending=True)
print('Smallest {} stations: \n{}\n'.format(N, sorted_stations[['name', 'capacity']][:N]))
print('Largest {} stations: \n{}\n'.format(N, sorted_stations[['name', 'capacity']][-N:]))
# Show a histogram of the capacities
# fig = plt.figure()
ax1 = stations_cap_df['capacity'].plot.hist(figsize=(10,6))
ax1.set_xlabel('Station Capacity', fontsize=14)
ax1.set_ylabel('Number of stations', fontsize=14)
ax1.set_title('Histogram of station capacities', fontsize=14)
Explanation: There are a total of 50 stations, which can be roughly clustered into 4 different groups:
Stations around the University, North of 11th Street. UT Austin buildings and student housing is based in this area, so bikes could be used to get around without the expense and hassle of having a car.
The downtown stations south of 11th Street, and north of the river. Austin's downtown is a mixture of residential and business buildings, so these stations could used for commute start and end points. There are also many bars on 6th Street, especially towards I-35.
The stations east of I-35, including those on East 5th and 11th streets. This area is almost an overspill from the downtown area, with a similar amount of nightlife. There are fewer businesses in this area compared to downtown. This area also has a light rail, which connects downtown Austin with North Austin, and up to Cedar Park and Leander.
Stations south of Lady Bird Lake. South Congress is good for nightlife, making it a popular destination on weekends and evenings. It also has limited parking, which you don't need to worry about when using a bike. There is also a bike and hike trail that runs along Lady Bird Lake on the North and South banks, which a lot of people enjoy on a bike.
Station bike capacity histogram
Now we've visualized where each station in the system is, let's show how many combined bikes and docks each of the stations has (their capacity). To do this we need to load in the bikes dataframe, and calculate the maximum of bikes + docks for each of the stations across the data. We can then plot a histogram of station capacity.
End of explanation
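# Optional follow-up (not in the original notebook): exact station counts per
# capacity value, which backs up the histogram above numerically.
print(stations_cap_df['capacity'].value_counts().sort_index())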
# Now plot each station as a circle scaled by its capacity (sizes exaggerated for visibility)
map = folium.Map(location=(center_lat, center_lon),
zoom_start=14,
tiles='Stamen Toner',
control_scale=True)
# Hand-tuned values to make differences between circles larger
K = 0.5
P = 2
# Add markers whose radius grows with station capacity (scaled to exaggerate differences).
# Click on them to pop up their name and capacity
for station in stations_cap_df.iterrows():
stat=station[1]
folium.CircleMarker([stat['lat'], stat['lon']],
radius= K * (stat['capacity'] ** P), # Scale circles to show difference
popup='{} - capacity {}'.format(stat['name'], stat['capacity']),
fill_color='blue',
fill_opacity=0.8
).add_to(map)
map.save('station_capacity.html')
map
Explanation: Looking at the histogram, the most popular station capacity is 13, followed by 11 and 9. Maybe there's an advantage to giving stations an odd-numbered capacity! The largest stations have a capacity of 19, and the smallest have a capacity of 9 (approximately half of the largest station).
Station bike capacity and location
Now we have an idea of the bike station capacity, we can visualize this on a map to see if there is any relationship between their capacity and location. The plot below uses their capacity as the radius of each circle marker. For proper quantitative evaluation of the stations, we should set each radius to the square root of the capacity, so that the areas of the circles are proportional to the capacity. But not doing this helps distinguish between the narrow range of capacities.
To find out the precise capacity of the stations, click on the circle markers.
End of explanation
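# A minimal sketch (not in the original notebook) of the area-proportional scaling
# mentioned above: setting radius ~ sqrt(capacity) makes circle areas scale linearly
# with capacity. The factor of 10 is an arbitrary display scale.
area_proportional_radius = 10 * np.sqrt(stations_cap_df['capacity'])
print(area_proportional_radius.describe())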
# Load both the bikes and station dataframes
bikes_df = load_bikes()
stations_df = load_stations()
Explanation: The map above shows 4 of the largest stations are along the North edge of Lady Bird Lake. There is also a large station at Congress & 11th Street, at the north of the downtown area.
The downtown area is served by a larger number of smaller stations, concentrated relatively close together. East of I-35, the stations tend to be smaller and on major roads running North-to-South. The University area and South-of-the-Lake areas are more dispersed than the downtown and East areas.
Station health
For more insight into the stations and their characteristics, we can define a metric of station 'health'. When bike stations have no bikes available, customers can't start a journey from that location. If they have no docks available, they can't end a trip at that station. In addition to the station information, we also have station bike and dock availability sampled every 5 minutes. If we count the number of 5-minute periods a station is full or empty, this can give us a guide to its health.
End of explanation
# Using the bikes and stations dataframes, mask off so the only rows remaining
# are either empty or full cases from 6AM onwards
bike_empty_mask = bikes_df['bikes'] == 0
bike_full_mask = bikes_df['docks'] == 0
bike_empty_full_mask = bike_empty_mask | bike_full_mask
bikes_empty_full_df = bikes_df[bike_empty_full_mask].copy()
bikes_empty_full_df['empty'] = bikes_empty_full_df['bikes'] == 0
bikes_empty_full_df['full'] = bikes_empty_full_df['docks'] == 0
bikes_empty_full_df.head()
Explanation: Empty/Full station health
Now we have a list of all the bike measurements where the station was empty or full, let's aggregate by station_id and count the results. This will tell us for every station, how many 5-minute intervals it was either full or empty. This is a good indicator of which stations are often full or empty, and are unusable. Let's merge the station names so the graph makes sense.
End of explanation
# Now aggregate the remaining rows by station_id, and plot the results
bike_health_df = bikes_empty_full_df.copy()
bike_health_df = bike_health_df[['station_id', 'empty', 'full']].groupby('station_id').sum().reset_index()
bike_health_df = pd.merge(bike_health_df, stations_df, on='station_id')
bike_health_df['oos'] = bike_health_df['full'] + bike_health_df['empty']
bike_health_df = bike_health_df.sort_values('oos', ascending=False)
ax1 = (bike_health_df[['name', 'empty', 'full']]
.plot.bar(x='name', y=['empty', 'full'], stacked=True, figsize=(16,8)))
ax1.set_xlabel('Station', fontsize=14)
ax1.set_ylabel('# 5 minute periods empty or full', fontsize=14)
ax1.set_title('Empty/Full station count during April/May 2016', fontdict={'size' : 18, 'weight' : 'bold'})
ax1.tick_params(axis='x', labelsize=13)
ax1.tick_params(axis='y', labelsize=13)
ax1.legend(fontsize=13)
Explanation: Empty/full by station in April and May 2016
Now we have a list of which stations were empty or full in each 5 minute period, we can total these up by station. If a station is either empty or full, this effectively removes it from the BCycle network temporarily. Let's use a stacked barchart to show the proportion of the time the station was full or empty. Sorting by the number of 5-minute periods the station was full or empty also helps.
End of explanation
# For this plot, we don't want to mask out the time intervals where stations are neither full nor empty.
HEALTHY_RATIO = 0.9
station_ratio_df = bikes_df.copy()
station_ratio_df['empty'] = station_ratio_df['bikes'] == 0
station_ratio_df['full'] = station_ratio_df['docks'] == 0
station_ratio_df['neither'] = (station_ratio_df['bikes'] != 0) & (station_ratio_df['docks'] != 0)
station_ratio_df = station_ratio_df[['station_id', 'empty', 'full', 'neither']].groupby('station_id').sum().reset_index()
station_ratio_df['total'] = station_ratio_df['empty'] + station_ratio_df['full'] + station_ratio_df['neither']
station_ratio_df = pd.merge(station_ratio_df, stations_df, on='station_id')
station_ratio_df['full_ratio'] = station_ratio_df['full'] / station_ratio_df['total']
station_ratio_df['empty_ratio'] = station_ratio_df['empty'] / station_ratio_df['total']
station_ratio_df['oos_ratio'] = station_ratio_df['full_ratio'] + station_ratio_df['empty_ratio']
station_ratio_df['in_service_ratio'] = 1 - station_ratio_df['oos_ratio']
station_ratio_df['healthy'] = station_ratio_df['in_service_ratio'] >= HEALTHY_RATIO
station_ratio_df['color'] = np.where(station_ratio_df['healthy'], '#348ABD', '#A60628')
station_ratio_df = station_ratio_df.sort_values('in_service_ratio', ascending=False)
colors = ['b' if ratio >= 0.9 else 'r' for ratio in station_ratio_df['in_service_ratio']]
# station_ratio_df.head()
ax1 = (station_ratio_df.sort_values('in_service_ratio', ascending=False)
.plot.bar(x='name', y='in_service_ratio', figsize=(16,8), legend=None, yticks=np.linspace(0.0, 1.0, 11),
color=station_ratio_df['color']))
ax1.set_xlabel('Station', fontsize=14)
ax1.set_ylabel('%age of time neither empty nor full', fontsize=14)
ax1.set_title('In-service percentage by station during April/May 2016', fontdict={'size' : 16, 'weight' : 'bold'})
ax1.axhline(y = HEALTHY_RATIO, color = 'black')
ax1.tick_params(axis='x', labelsize=13)
ax1.tick_params(axis='y', labelsize=13)
Explanation: The bar chart shows a large variation between the empty/full durations for each of the stations. The worst offender is the Riverside @ S. Lamar station, which was full or empty for a total of 12 days during the 61-day period of April and May 2016.
The proportion of empty vs full 5-minute periods also varies from station to station, shown in the relative height of the green and blue stacked bars.
Station empty / full percentage in April and May 2016
The barchart above shows a large variation between the 'Riverside @ S. Lamar' with ~3500 empty or full 5 minute periods, and the 'State Capitol Visitors Garage' with almost no full or empty 5 minute periods. To dig into this further, let's calculate the percentage of the time each station was neither empty nor full. This shows the percentage of the time the station was active in the BCycle system.
End of explanation
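# Optional follow-up (a sketch, not in the original notebook): the network-wide
# availability, i.e. the fraction of all station/5-minute samples that were neither
# empty nor full.
network_in_service = station_ratio_df['neither'].sum() / station_ratio_df['total'].sum()
print('Network-wide in-service fraction: {:.1%}'.format(network_in_service))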
mask = station_ratio_df['healthy'] == False
unhealthy_stations_df = station_ratio_df[mask].sort_values('oos_ratio', ascending=False)
unhealthy_stations_df = pd.merge(unhealthy_stations_df, stations_cap_df[['station_id', 'capacity']], on='station_id')
unhealthy_stations_df[['name', 'oos_ratio', 'full_ratio', 'empty_ratio', 'capacity']].reset_index(drop=True).round(2)
Explanation: The barchart above shows that 12 of the 50 stations are either full or empty more than 10% of the time.
Table of unhealthy stations
Let's show the table of stations, with only those available less than 90% of the time included.
End of explanation
# Merge in the station capacity also for the popup markers
station_ratio_cap_df = pd.merge(station_ratio_df, stations_cap_df[['station_id', 'capacity']], on='station_id')
map = folium.Map(location=(center_lat, center_lon),
zoom_start=14,
tiles='Stamen Toner',
control_scale=True)
# Hand-tuned parameter to increase circle size
K = 1000
C = 5
for station in station_ratio_cap_df.iterrows():
stat = station[1]
if stat['healthy']:
colour = 'blue'
else:
colour='red'
folium.CircleMarker([stat['lat'], stat['lon']], radius=(stat['oos_ratio'] * K) + C,
popup='{}, empty {:.1f}%, full {:.1f}%, capacity {}'.format(
stat['name'], stat['empty_ratio']*100, stat['full_ratio']*100, stat['capacity']),
fill_color=colour, fill_opacity=0.8
).add_to(map)
map.save('unhealthy_stations.html')
map
Explanation: Stations empty / full based on their location
After checking the proportion of time each station has docks and bikes available above, we can visualize these on a map, to see if there is any correlation in their location.
In the map below, the circle markers use both colour and size as below:
The colour of the circle shows whether the station is available less than 90% of the time. Red stations are in the unhealthy list above, and are empty or full 10% or more of the time. Blue stations are the healthy stations available 90% or more of the time.
The size of the circle shows how frequently the station is empty or full.
To see details about the stations, you can click on the circle markers.
End of explanation
# Plot the empty/full time periods grouped by hour for the unhealthy stations
oos_stations_df = bikes_df.copy()
oos_stations_df['empty'] = oos_stations_df['bikes'] == 0
oos_stations_df['full'] = oos_stations_df['docks'] == 0
oos_stations_df['neither'] = (oos_stations_df['bikes'] != 0) & (oos_stations_df['docks'] != 0)
oos_stations_df['hour'] = oos_stations_df['datetime'].dt.hour
oos_stations_df = (oos_stations_df[['station_id', 'hour', 'empty', 'full', 'neither']]
.groupby(['station_id', 'hour']).sum().reset_index())
oos_stations_df = oos_stations_df[oos_stations_df['station_id'].isin(unhealthy_stations_df['station_id'])]
oos_stations_df['oos'] = oos_stations_df['empty'] + oos_stations_df['full']
oos_stations_df = pd.merge(stations_df, oos_stations_df, on='station_id')
oos_stations_df
g = sns.factorplot(data=oos_stations_df, x="hour", y="oos", col='name',
kind='bar', col_wrap=2, size=3.5, aspect=2.0, color='#348ABD')
Explanation: The map shows that stations most frequently unavailable can be grouped into 3 clusters:
The downtown area around East 6th Street between Congress and I-35 and Red River street. This area has a large concentration of businesses, restaurants and bars. Their capacity is around 11 - 13, and they tend to be full most of the time.
South of the river along the Town Lake hiking and cycling trail along with South Congress. The Town Lake trail is a popular cycling route, and there are many restaurants and bars on South Congress. Both Riverside @ S.Lamar and Barton Springs at Riverside have capacities of 11, and are full 15% of the time.
Stations along East 5th Street, near the downtown area. This area has a lot of bars and restaurants, people may be using BCycles to get around to other bars. Their capacity is 12 and 9, and they're full 10% or more of the time. These stations would also benefit from extra capacity.
The South Congress trio of stations is interesting. They are all only a block or so away from each other, but the South Congress and James station has a capacity of 9, is full 12% of the time, and empty 4%. The other two stations on South Congress have a capacity of 13 each, and are full for much less of the time.
End of explanation
bikes_capacity_df = bikes_df.copy()
bikes_capacity_df['capacity'] = bikes_capacity_df['bikes'] + bikes_capacity_df['docks']
# Now find the max capacity across all the stations at all 5 minute intervals
bikes_capacity_df = bikes_capacity_df.groupby('station_id').max().reset_index()
bike_merged_health_df = pd.merge(bike_health_df,
bikes_capacity_df[['station_id', 'capacity']],
on='station_id',
how='inner')
plt.rc("legend", fontsize=14)
sns.jointplot("capacity", "full", data=bike_merged_health_df, kind="reg", size=8)
plt.xlabel('Station capacity', fontsize=14)
plt.ylabel('5-minute periods that are full', fontsize=14)
plt.tick_params(axis="both", labelsize=14)
sns.jointplot("capacity", "empty", data=bike_merged_health_df, kind="reg", size=8)
plt.xlabel('Station capacity', fontsize=14)
plt.ylabel('5-minute periods that are empty', fontsize=14)
plt.tick_params(axis="both", labelsize=14)
Explanation: Correlation between station empty/full and station capacity
Perhaps the reason stations are empty or full a lot is because they have a smaller capacity. Smaller stations would quickly run out of bikes, or become more full. Let's do a hypothesis test, assuming p < 0.05 for statistical significance.
Null hypothesis: The capacity of the station is not correlated with the full count.
Alternative hypothesis: The capacity of the station is correlated with the full count.
The plot below shows a negative correlation between the capacity of a station, and how frequently it becomes full. The probability of a result this extreme is 0.0086 given the null hypothesis, so we reject the null hypothesis. Stations with larger capacities become full less frequently.
End of explanation
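# Explicit test statistics (a sketch, not in the original notebook), assuming scipy
# is available alongside seaborn: Pearson r and p-value for capacity vs the full and
# empty counts, matching the regression plots above.
from scipy import stats
r_full, p_full = stats.pearsonr(bike_merged_health_df['capacity'], bike_merged_health_df['full'])
r_empty, p_empty = stats.pearsonr(bike_merged_health_df['capacity'], bike_merged_health_df['empty'])
print('capacity vs full : r = {:.3f}, p = {:.4f}'.format(r_full, p_full))
print('capacity vs empty: r = {:.3f}, p = {:.4f}'.format(r_empty, p_empty))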
bikes_df = load_bikes()
empty_mask = bikes_df['bikes'] == 0
full_mask = bikes_df['docks'] == 0
empty_full_mask = empty_mask | full_mask
bikes_empty_full_df = bikes_df[empty_full_mask].copy()
bikes_empty_full_df['day_of_week'] = bikes_empty_full_df['datetime'].dt.dayofweek
bikes_empty_full_df['hour'] = bikes_empty_full_df['datetime'].dt.hour
fig, axes = plt.subplots(1,2, figsize=(16,8))
bikes_empty_full_df.groupby(['day_of_week']).size().plot.bar(ax=axes[0], legend=None)
axes[0].set_xlabel('Day of week (0 = Monday, 1 = Tuesday, .. ,6 = Sunday)')
axes[0].set_ylabel('Station empty/full count per 5-minute interval ')
axes[0].set_title('Station empty/full by day of week', fontsize=15)
axes[0].tick_params(axis='x', labelsize=13)
axes[0].tick_params(axis='y', labelsize=13)
bikes_empty_full_df.groupby(['hour']).size().plot.bar(ax=axes[1])
axes[1].set_xlabel('Hour of day (24H clock)')
axes[1].set_ylabel('Station empty/full count per 5-minute interval ')
axes[1].set_title('Station empty/full by hour of day', fontsize=15)
axes[1].tick_params(axis='x', labelsize=13)
axes[1].tick_params(axis='y', labelsize=13)
Explanation: Station empty / full by Time
To break the station health down further, we can check in which 5 minute periods the station was either full or empty. By grouping the results over various time scales, we can look for periodicity in the data.
End of explanation |
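# Optional extension (a sketch, not in the original notebook): cross-tabulate day of
# week against hour of day to look for weekly periodicity in a single view.
dow_hour = bikes_empty_full_df.groupby(['day_of_week', 'hour']).size().unstack(fill_value=0)
plt.figure(figsize=(14, 4))
sns.heatmap(dow_hour, cmap='Blues')
plt.title('Station empty/full counts by day of week and hour of day')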
12,528 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Question 1
Compute the average temperature by season ('season_desc'). (The temperatures are numbers between 0 and 1, but don't worry about that. Let's say that's the Shellman temperature scale.)
I get
season_desc
Fall 0.711445
Spring 0.321700
Summer 0.554557
Winter 0.419368
Which clearly looks wrong. Figure out what's wrong with the original data and fix it.
Step1: In this case, a pivot table is not really required, so a simple use of groupby and mean() will do the job.
Step2: Question 2
Various of the columns represent dates or datetimes, but out of the box pd.read_table won't treat them correctly. This makes it hard to (for example) compute the number of rentals by month. Fix the dates and compute the number of rentals by month.
Step3: Question 3
Investigate how the number of rentals varies with temperature. Is this trend constant across seasons? Across months?
Step4: Check how correlation between temp and total riders varies across months.
Step5: Check how correlation between temp and total riders varies across seasons.
Step6: Investigate total riders by month versus average monthly temp.
Step7: Investigate total riders by season versus average seasonal temp.
Step8: Question 4
There are various types of users in the usage data sets. What sorts of things can you say about how they use the bikes differently?
Investigate correlations between casual and reg riders on work days and holidays.
Step9: Investigate correlations between casual and reg riders and windspeed.
Step10: Compare average rental duration between customer types. | Python Code:
from pandas import DataFrame, Series
import pandas as pd
import numpy as np
weather_data = pd.read_table('data/daily_weather.tsv')
season_mapping = {'Spring': 'Winter', 'Winter': 'Fall', 'Fall': 'Summer', 'Summer': 'Spring'}
def fix_seasons(x):
return season_mapping[x]
weather_data['season_desc'] = weather_data['season_desc'].apply(fix_seasons)
weather_data.pivot_table(index='season_desc', values='temp', aggfunc=np.mean)
Explanation: Question 1
Compute the average temperature by season ('season_desc'). (The temperatures are numbers between 0 and 1, but don't worry about that. Let's say that's the Shellman temperature scale.)
I get
season_desc
Fall 0.711445
Spring 0.321700
Summer 0.554557
Winter 0.419368
Which clearly looks wrong. Figure out what's wrong with the original data and fix it.
End of explanation
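# Optional sanity check (a sketch, not in the original): group by calendar month and
# the corrected season label, to confirm that warm months now fall in Summer and cold
# months in Winter. The data may use astronomical season boundaries, so days near a
# boundary can legitimately sit in the neighbouring season.
month = pd.DatetimeIndex(weather_data['date']).month
print(weather_data.groupby([month, 'season_desc'])['temp'].mean())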
weather_data.groupby('season_desc')['temp'].mean()
Explanation: In this case, a pivot table is not really required, so a simple use of groupby and mean() will do the job.
End of explanation
weather_data['Month'] = pd.DatetimeIndex(weather_data.date).month
weather_data.groupby('Month')['total_riders'].sum()
Explanation: Question 2
Various of the columns represent dates or datetimes, but out of the box pd.read_table won't treat them correctly. This makes it hard to (for example) compute the number of rentals by month. Fix the dates and compute the number of rentals by month.
End of explanation
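# Equivalent approach (a sketch, not in the original): parse the date column once with
# pd.to_datetime and use the .dt accessor, rather than building a DatetimeIndex inline.
parsed_dates = pd.to_datetime(weather_data['date'])
print(weather_data.groupby(parsed_dates.dt.month)['total_riders'].sum())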
pd.concat([weather_data['temp'], weather_data['total_riders']], axis=1).corr()
Explanation: Question 3
Investigate how the number of rentals varies with temperature. Is this trend constant across seasons? Across months?
End of explanation
weather_data[['total_riders', 'temp', 'Month']].groupby('Month').corr()
Explanation: Check how correlation between temp and total riders varies across months.
End of explanation
weather_data[['total_riders', 'temp', 'season_desc']].groupby('season_desc').corr()
Explanation: Check how correlation between temp and total riders varies across seasons.
End of explanation
month_riders = weather_data.groupby('Month')['total_riders'].sum()
month_avg_temp = weather_data.groupby('Month')['temp'].mean()
pd.concat([month_riders, month_avg_temp], axis=1)
Explanation: Investigate total riders by month versus average monthly temp.
End of explanation
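# A quick visual check (sketch, not in the original): scatter total monthly riders
# against the average monthly temperature computed above.
import matplotlib.pyplot as plt
plt.scatter(month_avg_temp, month_riders)
plt.xlabel('Average monthly temp (normalised 0-1)')
plt.ylabel('Total riders per month')
plt.title('Monthly ridership vs temperature')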
season_riders = weather_data.groupby('season_desc')['total_riders'].sum()
season_temp = weather_data.groupby('season_desc')['temp'].mean()
pd.concat([season_riders, season_temp], axis=1)
Explanation: Investigate total riders by season versus average seasonal temp.
End of explanation
weather_data[['no_casual_riders', 'no_reg_riders', 'is_work_day', 'is_holiday']].corr()
Explanation: Question 4
There are various types of users in the usage data sets. What sorts of things can you say about how they use the bikes differently?
Investigate correlations between casual and reg riders on work days and holidays.
End of explanation
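# Follow-up sketch (not in the original): average casual vs registered riders split by
# work day, which makes the usage difference easier to read than a correlation matrix.
print(weather_data.groupby('is_work_day')[['no_casual_riders', 'no_reg_riders']].mean())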
weather_data[['no_casual_riders', 'no_reg_riders', 'windspeed']].corr()
usage = pd.read_table('data/usage_2012.tsv')
Explanation: Investigate correlations between casual and reg riders and windspeed.
End of explanation
usage.groupby('cust_type')['duration_mins'].mean()
Explanation: Compare average rental duration between customer types.
End of explanation |
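# Optional follow-up (a sketch, not in the original): medians are more robust here,
# since a few very long rentals can skew the mean duration per customer type.
print(usage.groupby('cust_type')['duration_mins'].median())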
12,529 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Atmoschem
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Timestep Framework --> Split Operator Order
5. Key Properties --> Tuning Applied
6. Grid
7. Grid --> Resolution
8. Transport
9. Emissions Concentrations
10. Emissions Concentrations --> Surface Emissions
11. Emissions Concentrations --> Atmospheric Emissions
12. Emissions Concentrations --> Concentrations
13. Gas Phase Chemistry
14. Stratospheric Heterogeneous Chemistry
15. Tropospheric Heterogeneous Chemistry
16. Photo Chemistry
17. Photo Chemistry --> Photolysis
1. Key Properties
Key properties of the atmospheric chemistry
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Chemistry Scheme Scope
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 1.5. Prognostic Variables Form
Is Required
Step9: 1.6. Number Of Tracers
Is Required
Step10: 1.7. Family Approach
Is Required
Step11: 1.8. Coupling With Chemical Reactivity
Is Required
Step12: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required
Step13: 2.2. Code Version
Is Required
Step14: 2.3. Code Languages
Is Required
Step15: 3. Key Properties --> Timestep Framework
Timestepping in the atmospheric chemistry model
3.1. Method
Is Required
Step16: 3.2. Split Operator Advection Timestep
Is Required
Step17: 3.3. Split Operator Physical Timestep
Is Required
Step18: 3.4. Split Operator Chemistry Timestep
Is Required
Step19: 3.5. Split Operator Alternate Order
Is Required
Step20: 3.6. Integrated Timestep
Is Required
Step21: 3.7. Integrated Scheme Type
Is Required
Step22: 4. Key Properties --> Timestep Framework --> Split Operator Order
**
4.1. Turbulence
Is Required
Step23: 4.2. Convection
Is Required
Step24: 4.3. Precipitation
Is Required
Step25: 4.4. Emissions
Is Required
Step26: 4.5. Deposition
Is Required
Step27: 4.6. Gas Phase Chemistry
Is Required
Step28: 4.7. Tropospheric Heterogeneous Phase Chemistry
Is Required
Step29: 4.8. Stratospheric Heterogeneous Phase Chemistry
Is Required
Step30: 4.9. Photo Chemistry
Is Required
Step31: 4.10. Aerosols
Is Required
Step32: 5. Key Properties --> Tuning Applied
Tuning methodology for atmospheric chemistry component
5.1. Description
Is Required
Step33: 5.2. Global Mean Metrics Used
Is Required
Step34: 5.3. Regional Metrics Used
Is Required
Step35: 5.4. Trend Metrics Used
Is Required
Step36: 6. Grid
Atmospheric chemistry grid
6.1. Overview
Is Required
Step37: 6.2. Matches Atmosphere Grid
Is Required
Step38: 7. Grid --> Resolution
Resolution in the atmospheric chemistry grid
7.1. Name
Is Required
Step39: 7.2. Canonical Horizontal Resolution
Is Required
Step40: 7.3. Number Of Horizontal Gridpoints
Is Required
Step41: 7.4. Number Of Vertical Levels
Is Required
Step42: 7.5. Is Adaptive Grid
Is Required
Step43: 8. Transport
Atmospheric chemistry transport
8.1. Overview
Is Required
Step44: 8.2. Use Atmospheric Transport
Is Required
Step45: 8.3. Transport Details
Is Required
Step46: 9. Emissions Concentrations
Atmospheric chemistry emissions
9.1. Overview
Is Required
Step47: 10. Emissions Concentrations --> Surface Emissions
**
10.1. Sources
Is Required
Step48: 10.2. Method
Is Required
Step49: 10.3. Prescribed Climatology Emitted Species
Is Required
Step50: 10.4. Prescribed Spatially Uniform Emitted Species
Is Required
Step51: 10.5. Interactive Emitted Species
Is Required
Step52: 10.6. Other Emitted Species
Is Required
Step53: 11. Emissions Concentrations --> Atmospheric Emissions
TO DO
11.1. Sources
Is Required
Step54: 11.2. Method
Is Required
Step55: 11.3. Prescribed Climatology Emitted Species
Is Required
Step56: 11.4. Prescribed Spatially Uniform Emitted Species
Is Required
Step57: 11.5. Interactive Emitted Species
Is Required
Step58: 11.6. Other Emitted Species
Is Required
Step59: 12. Emissions Concentrations --> Concentrations
TO DO
12.1. Prescribed Lower Boundary
Is Required
Step60: 12.2. Prescribed Upper Boundary
Is Required
Step61: 13. Gas Phase Chemistry
Atmospheric chemistry transport
13.1. Overview
Is Required
Step62: 13.2. Species
Is Required
Step63: 13.3. Number Of Bimolecular Reactions
Is Required
Step64: 13.4. Number Of Termolecular Reactions
Is Required
Step65: 13.5. Number Of Tropospheric Heterogenous Reactions
Is Required
Step66: 13.6. Number Of Stratospheric Heterogenous Reactions
Is Required
Step67: 13.7. Number Of Advected Species
Is Required
Step68: 13.8. Number Of Steady State Species
Is Required
Step69: 13.9. Interactive Dry Deposition
Is Required
Step70: 13.10. Wet Deposition
Is Required
Step71: 13.11. Wet Oxidation
Is Required
Step72: 14. Stratospheric Heterogeneous Chemistry
Atmospheric chemistry startospheric heterogeneous chemistry
14.1. Overview
Is Required
Step73: 14.2. Gas Phase Species
Is Required
Step74: 14.3. Aerosol Species
Is Required
Step75: 14.4. Number Of Steady State Species
Is Required
Step76: 14.5. Sedimentation
Is Required
Step77: 14.6. Coagulation
Is Required
Step78: 15. Tropospheric Heterogeneous Chemistry
Atmospheric chemistry tropospheric heterogeneous chemistry
15.1. Overview
Is Required
Step79: 15.2. Gas Phase Species
Is Required
Step80: 15.3. Aerosol Species
Is Required
Step81: 15.4. Number Of Steady State Species
Is Required
Step82: 15.5. Interactive Dry Deposition
Is Required
Step83: 15.6. Coagulation
Is Required
Step84: 16. Photo Chemistry
Atmospheric chemistry photo chemistry
16.1. Overview
Is Required
Step85: 16.2. Number Of Reactions
Is Required
Step86: 17. Photo Chemistry --> Photolysis
Photolysis scheme
17.1. Method
Is Required
Step87: 17.2. Environmental Conditions
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'nasa-giss', 'giss-e2-1h', 'atmoschem')
Explanation: ES-DOC CMIP6 Model Properties - Atmoschem
MIP Era: CMIP6
Institute: NASA-GISS
Source ID: GISS-E2-1H
Topic: Atmoschem
Sub-Topics: Transport, Emissions Concentrations, Gas Phase Chemistry, Stratospheric Heterogeneous Chemistry, Tropospheric Heterogeneous Chemistry, Photo Chemistry.
Properties: 84 (39 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:20
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Timestep Framework --> Split Operator Order
5. Key Properties --> Tuning Applied
6. Grid
7. Grid --> Resolution
8. Transport
9. Emissions Concentrations
10. Emissions Concentrations --> Surface Emissions
11. Emissions Concentrations --> Atmospheric Emissions
12. Emissions Concentrations --> Concentrations
13. Gas Phase Chemistry
14. Stratospheric Heterogeneous Chemistry
15. Tropospheric Heterogeneous Chemistry
16. Photo Chemistry
17. Photo Chemistry --> Photolysis
1. Key Properties
Key properties of the atmospheric chemistry
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmospheric chemistry model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of atmospheric chemistry model code.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.chemistry_scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Chemistry Scheme Scope
Is Required: TRUE Type: ENUM Cardinality: 1.N
Atmospheric domains covered by the atmospheric chemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: STRING Cardinality: 1.1
Basic approximations made in the atmospheric chemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/mixing ratio for gas"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.5. Prognostic Variables Form
Is Required: TRUE Type: ENUM Cardinality: 1.N
Form of prognostic variables in the atmospheric chemistry component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 1.6. Number Of Tracers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of advected tracers in the atmospheric chemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 1.7. Family Approach
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Atmospheric chemistry calculations (not advection) generalized into families of species?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.coupling_with_chemical_reactivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 1.8. Coupling With Chemical Reactivity
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Atmospheric chemistry transport scheme turbulence is coupled with chemical reactivity?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Operator splitting"
# "Integrated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestep Framework
Timestepping in the atmospheric chemistry model
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Mathematical method deployed to solve the evolution of a given variable
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Split Operator Advection Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for chemical species advection (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.3. Split Operator Physical Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for physics (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_chemistry_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.4. Split Operator Chemistry Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for chemistry (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_alternate_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 3.5. Split Operator Alternate Order
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.6. Integrated Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep for the atmospheric chemistry model (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3.7. Integrated Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the type of timestep scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.turbulence')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Timestep Framework --> Split Operator Order
**
4.1. Turbulence
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for turbulence scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.convection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.2. Convection
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for convection scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.3. Precipitation
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for precipitation scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.emissions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.4. Emissions
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for emissions scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.5. Deposition
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for deposition scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.gas_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.6. Gas Phase Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for gas phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.tropospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.7. Tropospheric Heterogeneous Phase Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for tropospheric heterogeneous phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.stratospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.8. Stratospheric Heterogeneous Phase Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for stratospheric heterogeneous phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.photo_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.9. Photo Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for photo chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.aerosols')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.10. Aerosols
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for aerosols scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Tuning Applied
Tuning methodology for atmospheric chemistry component
5.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Grid
Atmospheric chemistry grid
6.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the atmospheric chemistry grid
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.2. Matches Atmosphere Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the atmospheric chemistry grid match the atmosphere grid?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Grid --> Resolution
Resolution in the atmospheric chemistry grid
7.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Canonical Horizontal Resolution
Is Required: FALSE Type: STRING Cardinality: 0.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 7.3. Number Of Horizontal Gridpoints
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 7.4. Number Of Vertical Levels
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Number of vertical levels resolved on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 7.5. Is Adaptive Grid
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Transport
Atmospheric chemistry transport
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview of transport implementation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.use_atmospheric_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 8.2. Use Atmospheric Transport
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is transport handled by the atmosphere, rather than within atmospheric chemistry?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.transport_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Transport Details
Is Required: FALSE Type: STRING Cardinality: 0.1
If transport is handled within the atmospheric chemistry scheme, describe it.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Emissions Concentrations
Atmospheric chemistry emissions
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview atmospheric chemistry emissions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Soil"
# "Sea surface"
# "Anthropogenic"
# "Biomass burning"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10. Emissions Concentrations --> Surface Emissions
**
10.1. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of the chemical species emitted at the surface that are taken into account in the emissions scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.2. Method
Is Required: FALSE Type: ENUM Cardinality: 0.N
Methods used to define chemical species emitted directly into model layers above the surface (several methods allowed because the different species may not use the same method).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.3. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and prescribed via a climatology, and the nature of the climatology (E.g. CO (monthly), C2H6 (constant))
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.4. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and prescribed as spatially uniform
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.5. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and specified via an interactive method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.6. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and specified via any other method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Aircraft"
# "Biomass burning"
# "Lightning"
# "Volcanos"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11. Emissions Concentrations --> Atmospheric Emissions
TO DO
11.1. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of chemical species emitted in the atmosphere that are taken into account in the emissions scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.2. Method
Is Required: FALSE Type: ENUM Cardinality: 0.N
Methods used to define the chemical species emitted in the atmosphere (several methods allowed because the different species may not use the same method).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.3. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and prescribed via a climatology (E.g. CO (monthly), C2H6 (constant))
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.4. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and prescribed as spatially uniform
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.5. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and specified via an interactive method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.6. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and specified via an "other method"
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12. Emissions Concentrations --> Concentrations
TO DO
12.1. Prescribed Lower Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the lower boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.2. Prescribed Upper Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the upper boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13. Gas Phase Chemistry
Atmospheric chemistry transport
13.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview gas phase atmospheric chemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HOx"
# "NOy"
# "Ox"
# "Cly"
# "HSOx"
# "Bry"
# "VOCs"
# "isoprene"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Species included in the gas phase chemistry scheme.
End of explanation
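As an illustration only (this is an editorial addition, not part of the generated template), a completed cell for an enumerated 0.N property such as 13.2 might record one DOC.set_value call per selected choice, as the "Set as follows" comment suggests; the species families below are hypothetical picks from the valid choices listed above.
# Hypothetical example of a filled-in 0.N enumeration (illustration only, do not copy blindly):
# DOC.set_value("HOx")
# DOC.set_value("NOy")
# DOC.set_value("Ox")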
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_bimolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.3. Number Of Bimolecular Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of bi-molecular reactions in the gas phase chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_termolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.4. Number Of Termolecular Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of ter-molecular reactions in the gas phase chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_tropospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.5. Number Of Tropospheric Heterogenous Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of reactions in the tropospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_stratospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.6. Number Of Stratospheric Heterogenous Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of reactions in the stratospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_advected_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.7. Number Of Advected Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of advected species in the gas phase chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.8. Number Of Steady State Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of gas phase species for which the concentration is updated in the chemical solver assuming photochemical steady state
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.9. Interactive Dry Deposition
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.10. Wet Deposition
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is wet deposition included? Wet deposition describes the moist processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_oxidation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.11. Wet Oxidation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is wet oxidation included? Oxidation describes the loss of electrons or an increase in oxidation state by a molecule
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14. Stratospheric Heterogeneous Chemistry
Atmospheric chemistry stratospheric heterogeneous chemistry
14.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview stratospheric heterogeneous atmospheric chemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Cly"
# "Bry"
# "NOy"
# TODO - please enter value(s)
Explanation: 14.2. Gas Phase Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Gas phase species included in the stratospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule))"
# TODO - please enter value(s)
Explanation: 14.3. Aerosol Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Aerosol species included in the stratospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.4. Number Of Steady State Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of steady state species in the stratospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.sedimentation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 14.5. Sedimentation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is sedimentation included in the stratospheric heterogeneous chemistry scheme or not?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 14.6. Coagulation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is coagulation included in the stratospheric heterogeneous chemistry scheme or not?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Tropospheric Heterogeneous Chemistry
Atmospheric chemistry tropospheric heterogeneous chemistry
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview tropospheric heterogeneous atmospheric chemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Gas Phase Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of gas phase species included in the tropospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon/soot"
# "Polar stratospheric ice"
# "Secondary organic aerosols"
# "Particulate organic matter"
# TODO - please enter value(s)
Explanation: 15.3. Aerosol Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Aerosol species included in the tropospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.4. Number Of Steady State Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of steady state species in the tropospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.5. Interactive Dry Deposition
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.6. Coagulation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is coagulation included in the tropospheric heterogeneous chemistry scheme or not?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16. Photo Chemistry
Atmospheric chemistry photo chemistry
16.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview atmospheric photo chemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.number_of_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 16.2. Number Of Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of reactions in the photo-chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline (clear sky)"
# "Offline (with clouds)"
# "Online"
# TODO - please enter value(s)
Explanation: 17. Photo Chemistry --> Photolysis
Photolysis scheme
17.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Photolysis scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.environmental_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.2. Environmental Conditions
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any environmental conditions taken into account by the photolysis scheme (e.g. whether pressure- and temperature-sensitive cross-sections and quantum yields in the photolysis calculations are modified to reflect the modelled conditions.)
End of explanation |
12,530 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Le TD a commencé par un petit contrôle... que nous avons ensuite (partiellement) corrigé.
Correction de l'exercice mystère.
Rappel du sujet initial
Step1: 1
Step2: 2
Step3: Étape 2. Bien. Maintenant que l'on affiche une ligne correctement, on va s'occuper d'afficher toutes les lignes nécessaires à l'affichage de la première rangée de cases de notre damier. On reprend notre code précédent, et on le modifie un peu... Dans le code ci-dessous, seules les lignes modifiées ont été commentées.
Step4: Étape 3. Parfait. Nous avons réalisé la première ligne de cases de notre damier. Maintenant, il s'agit de répéter le processus pour le reste du damier... mais il nous faut trouver une astuce pour gérer le décalage que doivent avoir les cases d'une ligne sur l'autre. Ici, on utilise l'observation suivante. On numérote les cases de 1 à m, et les lignes de 1 à n. On a fait le choix que la case 1 de la ligne 1 était une case pleine. Ainsi, la case 2 de la ligne 1 est une case vide, mais la case 1 de la ligne 2 est vide aussi. En généralisant, on observe assez vite que lorsque la somme du rang de la case et du numéro de ligne est paire, la case est pleine, et dans l'autre cas, la case est vide. C'est cette observation que l'on va utiliser dans le code suivant. | Python Code:
def mystere(unparametre)
unevariable = True
uneautrevariable = 0
while unevariable:
truc = unparametre // 10
uneautrevariable+=1
if truc = 0:
print("truc 0")
if truc > 0:
unparametre = truc
else:
unevariable = False
return uneautrevariable
print(mystere(1))
print(mystere(22))
print(mystere(4444))
Explanation: Le TD a commencé par un petit contrôle... que nous avons ensuite (partiellement) corrigé.
Correction de l'exercice mystère.
Rappel du sujet initial :
End of explanation
def mystere(unparametre): # il manquait les deux points ici...
unevariable = True
uneautrevariable = 0
while unevariable:
truc = unparametre // 10
uneautrevariable+=1
if truc == 0: # la condition du if doit être une formule booléenne... il fallait donc un == et non un =
print("truc 0") # Il y avait un problème d'in
if truc > 0:
unparametre = truc
else:
unevariable = False
return uneautrevariable
print(mystere(1))
print(mystere(22))
print(mystere(4444))
Explanation: 1 : Correction des trois erreurs de syntaxe
End of explanation
def afficher_damier(m, n, c, d):
'''
:entree m: nombre de cases en largeur dans le damier
:entree n: nombre de cases en hauteur dans le damier
:entree c: premier caractère choisi pour l'affichage
:entree d: deuxieme caractère choisi pour l'affichage
:pre-cond: m et n sont des entiers strictement positifs
:pre-cond: c et d sont des chaînes de caractères de longueur 1 (autrement dit, des caractères)
:post-cond: un damier de m*n cases est affiché à l'écran, en utilisant les caractères c et d, comme dans l'exemple ci-dessous :
afficher_damier(2,3, "n", "z")
nnzz
nnzz
nnzz
nnzz
nnzz
nnzz
nnzz
nnzz
'''
case = 1 # On utilise une variable pour repérer sur quelle case on est.
chaine = "" # On utilise une variable pour stocker la chaine à afficher et on s'occupera de l'affichage de cette chaine à chaque fin de ligne
for compteur in range(m): # On fait une boucle pour construire une ligne entière de cases
if case % 2 == 0: # Si on est sur une case paire, on ajoute à la chaine le nombre d'espaces (2M)
chaine = chaine + (2*m* " ")
else: # Sinon (i.e. la case est impaire), on ajoute à la chaine M fois le premier caractère, puis M fois le second.
chaine = chaine + (m*c + m*d)
case += 1 # Quand on a fini d'écrire une case, on passe à la case suivante
print(chaine) # Quand on a fini d'écrire une ligne entière, on l'affiche.
afficher_damier(3, 4, "-", "6") # Hop, et on n'oublie pas de tester notre travail.
Explanation: 2 : Que fait le programme ?
Le programme retourne le nombre de chiffres dans l'écriture en base 10 d'un nombre passé en paramètre. Le premier if fait un print qui ne sert à rien.
3 : Quelles valeurs choisir pour le test
Les valeurs aux limites des conditions : par exemple, 0, 10 ou 100...
Les valeurs négatives.
Les nombres flotants.
Exercice du damier :
L'objectif de cet exercice est d'afficher un damier de M*N cases, chaque case étant soit vide (caractère espace) soit dessinée par un ensemble de lettre.
Chaque case du damier aura pour dimension 2M*N (cf. plus bas pour l'explication du 2M).
Par exemple, si l'on demande l'affichage d'un damier de 5*4, on aura 20 cases en tout, et le damier fera 5 colonnes par 4 lignes. Chaque case du damier fera quant à elle 10 caractères de large, et 4 caractères de haut.
Les cases pleines sont dessinées avec deux caractères passés en paramètres de la méthode. On affiche d'abord M fois le premier caractère, puis M fois le second. La largeur d'une case est donc de 2M. On doit répéter l'opération N fois, car la case doit avoir une hauteur de N. Les cases vides fonctionnent selon le même principe, sauf qu'au lieu d'afficher des caractères passés en paramètres, on affiche des espaces.
L'une des principales difficultés, outre les aspects algorithmiques, est que l'affichage doit se faire ligne par ligne (et non case par case), et qu'il va donc nous falloir décomposer le problème en fonction de cette contrainte.
Cet exercice est artificiellement difficile. Il a pour but de vous apprendre à décomposer un problème compliqué en plusieurs étapes "simples", et de vous faire travailler le concept de boucles imbriquées.
Nous allons donc le corriger par étapes.
Étape 1.
Tout d'abord, nous allons écrire et tester le code qui permet d'afficher une ligne (en faisant le choix arbitraire que la première case de la ligne est une case pleine).
End of explanation
def afficher_damier(m, n, c, d):
'''
idem
'''
for compteur_ligne in range(n): # On ajoute juste une boucle pour itérer sur la hauteur que doit faire une case...
case = 1
chaine = ""
for compteur in range(m):
if case % 2 == 0:
chaine = chaine + (2*m* " ")
else:
chaine = chaine + (m*c + m*d)
case += 1
print(chaine)
afficher_damier(3, 4, "-", "6") # Hop, et on n'oublie pas de tester notre travail.
Explanation: Étape 2. Bien. Maintenant que l'on affiche une ligne correctement, on va s'occuper d'afficher toutes les lignes nécessaires à l'affichage de la première rangée de cases de notre damier. On reprend notre code précédent, et on le modifie un peu... Dans le code ci-dessous, seules les lignes modifiées ont été commentées.
End of explanation
def afficher_damier(m, n, c, d):
'''
idem
'''
ligne = 1 # On introduit un compteur pour savoir quelle ligne on est en train de traiter
for compteur_lignes_damier in range(n): # On itère sur les lignes
for compteur_ligne in range(n):
case = 1
chaine = ""
for compteur in range(m):
if (case + ligne) % 2 != 0: # On modifie la condition qui nous permet de savoir ce que l'on doit écrire dans la case...
chaine = chaine + (2*m* " ")
else:
chaine = chaine + (m*c + m*d)
case += 1
print(chaine)
ligne += 1 # Quand on a fini d'écrire une ligne complète, on pense à incrémenter notre compteur de ligne...
afficher_damier(3, 4, "-", "6") # Hop, et on n'oublie pas de tester notre travail.
afficher_damier(6, 2, "g", "!")
Explanation: Étape 3. Parfait. Nous avons réalisé la première ligne de cases de notre damier. Maintenant, il s'agit de répéter le processus pour le reste du damier... mais il nous faut trouver une astuce pour gérer le décalage que doivent avoir les cases d'une ligne sur l'autre. Ici, on utilise l'observation suivante. On numérote les cases de 1 à m, et les lignes de 1 à n. On a fait le choix que la case 1 de la ligne 1 était une case pleine. Ainsi, la case 2 de la ligne 1 est une case vide, mais la case 1 de la ligne 2 est vide aussi. En généralisant, on observe assez vite que lorsque la somme du rang de la case et du numéro de ligne est paire, la case est pleine, et dans l'autre cas, la case est vide. C'est cette observation que l'on va utiliser dans le code suivant.
End of explanation |
12,531 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step3: Módulo 2
Step5: Scatter Plot
Variáveis
Step6: Plot dos datasets
scatter plot simples
Step7: Customizando formas, cores e tamanho
Step8: Adicionando mais um dataset
Step12: Facilidades no Pandas
Step15: Case
Step17: Visualização com Pandas + Matplotlib
Step19: Visualização com Pandas + Seaborn
Step22: Text Plot
Posicionando Texto na Visualização
Step23: Aumentando a fonte
Step24: Mudando a Escala
Step25: Aplicando 'fontdict' em outros textos
Step26: Juntando com Scatter Plot
Step30: Desafio
Objetivo
Step32: [ A ]
Step34: [ B ] | Python Code:
import numpy as np
import os
import pandas as pd
# habilitando plots no notebook
%matplotlib inline
# plot libs
import matplotlib.pyplot as plt
import seaborn as sns
# Configurando o Matplotlib para o modo manual
plt.interactive(False)
Explanation: Módulo 2: Scatter Plot + Text
Tutorial
Imports
End of explanation
# Dataset com Distribuição Normal 2D
d1 = pd.DataFrame(
columns=["x", "y"],
data=np.random.randn(20, 2) + np.array([5, 5])
)
d2 = pd.DataFrame(
columns=["x", "y"],
data=np.random.randn(30, 2) + np.array([1, 2])
)
Explanation: Scatter Plot
Variáveis
End of explanation
plt.figure(figsize=(12, 8))
plt.scatter(d1.x, d1.y)
plt.show()
Explanation: Plot dos datasets
scatter plot simples
End of explanation
plt.figure(figsize=(12, 8))
plt.scatter(
d1.x, d1.y, # pares de coordenadas (x, y)
c="darkorange", # cor
s=100, # tamanho em pixels
marker="s" # simbolo a ser usado
)
plt.show()
Explanation: Customizando formas, cores e tamanho
End of explanation
plt.figure(figsize=(12, 8))
plt.scatter(d1.x, d1.y, c="darkorange", s=100, marker="s", label="golden squares")
plt.scatter(d2.x, d2.y, c="purple", s=200, marker="*", label="purple stars")
plt.legend() # captura os nomes das series em 'label'
plt.show()
Explanation: Adicionando mais um dataset
End of explanation
# inicialização como antes
plt.figure(figsize=(12, 8))
# plt.gca() retorna a janela mais recente
d1.plot(
ax=plt.gca(),
kind="scatter",
x="x", y="y",
c="k",
s=100,
marker="o",
label="black circles"
)
d2.plot(
ax=plt.gca(),
kind="scatter",
x="x", y="y",
c="cyan",
s=200,
marker="v",
label="cyan triangles"
)
# visualizar com legenda
plt.legend()
plt.show()
Explanation: Facilidades no Pandas
End of explanation
df = pd.DataFrame(
columns=["S1", "S2", "S3", "S4"],
data=np.random.randn(100, 4)
)
df["S2"] += 2 * df.S1
df["S3"] -= 2 * df.S2
df.describe()
# Tabela de Correlação
df.corr().unstack().drop_duplicates()
# Visualização Simples
df.corr().unstack().drop_duplicates()
Explanation: Case: Visualização da Correlação
Correlação de Pearson:
Medida de quanto duas séries numéricas estão alinhadas, i.e., o quanto elas variam em relação uma à outra em comparação do quanto elas variam em relação a si mesmas.
Mais infos aqui: <a href="https://pt.wikipedia.org/wiki/Coeficiente_de_correla%C3%A7%C3%A3o_de_Pearson" target=_blank> https://pt.wikipedia.org/wiki/Coeficiente_de_correla%C3%A7%C3%A3o_de_Pearson <a/>
Matematicamente...
... é a covariância dividida pelo produto dos desvios padrão de ambas.
Também pode ser pensada como o cosseno do ângulo entre dois vetores multidimensionais.
A fórmula é dada por:
<img src="images/correlacao_pearson_formula.svg">
Na prática:
Número (float) entre -1 e 1 indicando (fonte: Wikipédia):
* 0.9 para mais ou para menos indica uma correlação muito forte.
* 0.7 a 0.9 positivo ou negativo indica uma correlação forte.
* 0.5 a 0.7 positivo ou negativo indica uma correlação moderada.
* 0.3 a 0.5 positivo ou negativo indica uma correlação fraca.
* 0 a 0.3 positivo ou negativo indica uma correlação desprezível.
Séries e suas Correlações
End of explanation
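To make the definition above concrete, here is a short added sketch (not part of the original tutorial) that computes Pearson's r for S1 and S2 directly from the covariance and the standard deviations, assuming the df built above is still in scope, and compares it with pandas' built-in result.
# Pearson r from the definition: covariance divided by the product of the standard deviations
s1, s2 = df["S1"], df["S2"]
cov = ((s1 - s1.mean()) * (s2 - s2.mean())).mean()
r_manual = cov / (s1.std(ddof=0) * s2.std(ddof=0))
r_pandas = s1.corr(s2)
print(r_manual, r_pandas)  # both values should agree up to floating point error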
# Visualização por Scatter Plot
pd.plotting.scatter_matrix(df, s=400, color="red", figsize=(13,13))
plt.show()
Explanation: Visualização com Pandas + Matplotlib
End of explanation
# Visualização por Scatter Plot
sns.pairplot(df)
plt.show()
Explanation: Visualização com Pandas + Seaborn
End of explanation
# Plot Simples
plt.figure(figsize=(12,8))
plt.text(x=0.5, y=0.5, s="Texto a ser mostrado")
plt.show()
# Plot Simples
plt.figure(figsize=(12,8))
plt.text(x=1, y=1, s="Texto a ser mostrado")
plt.show()
Explanation: Text Plot
Posicionando Texto na Visualização
End of explanation
plt.figure(figsize=(12,8))
font = {
'family': 'serif',
'color': 'darkred',
'weight': 'bold',
'size': 26,
}
plt.text(x=0.5, y=0.5, s="Texto a ser mostrado", fontdict=font)
plt.show()
Explanation: Aumentando a fonte
End of explanation
plt.figure(figsize=(12,8))
plt.xlim(-10, 10)
plt.ylim(-10, 10)
font = {
'family': 'serif',
'color': 'darkred',
'weight': 'bold',
'size': 26,
}
plt.text(x=-5, y=-2, s="Texto a ser mostrado", fontdict=font)
plt.show()
Explanation: Mudando a Escala
End of explanation
plt.figure(figsize=(12,8))
plt.xlim(-10, 10)
plt.ylim(-10, 10)
font = {
'family': 'serif',
'color': 'darkred',
'weight': 'bold',
'size': 26,
}
plt.text(x=-5, y=-2, s="Texto a ser mostrado", fontdict=font)
plt.xlabel("X Axis", fontdict=font)
plt.ylabel("Y Axis", fontdict=font)
plt.show()
Explanation: Aplicando 'fontdict' em outros textos
End of explanation
plt.figure(figsize=(12,8))
plt.xlim(-10, 10)
plt.ylim(-10, 10)
font = {
'family': 'serif',
'color': 'darkred',
'weight': 'bold',
'size': 26,
}
plt.text(x=-5, y=-2, s="Texto a ser mostrado", fontdict=font)
plt.scatter(x=-5, y=-2, s=400, c="darkorange", marker="o")
plt.xlabel("X Axis", fontdict=font)
plt.ylabel("Y Axis", fontdict=font)
plt.show()
Explanation: Juntando com Scatter Plot
End of explanation
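A related added aside (not in the original tutorial): matplotlib's annotate combines a text label with an arrow pointing at the annotated point, which is often handier than placing the text and the marker separately.
plt.figure(figsize=(12, 8))
plt.xlim(-10, 10)
plt.ylim(-10, 10)
plt.scatter(x=-5, y=-2, s=400, c="darkorange", marker="o")
plt.annotate("ponto de interesse", xy=(-5, -2), xytext=(0, 5),
             arrowprops=dict(arrowstyle="->", color="gray"))
plt.show()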
# Tamanho do Dataset
df = pd.read_csv(
    os.path.join("data", "produtos_ecommerce.csv"),
    sep=";",
    encoding="utf-8"
)
df[["coord_x", "coord_y"]] = df[["coord_x", "coord_y"]].astype(float)
df.head()
# Tamanho da tabela
df.shape
# Contagem de Produtos por Categoria
df.label.value_counts()
Explanation: Desafio
Objetivo:
Visualizar dados de um problema de classificação de produtos de e-commerce projetados em 2D.
Dataset:
End of explanation
# Escreva a Solução Aqui
Explanation: [ A ]: Visualizar Todo o Dataset
Plotar todos os produtos usando as coordenadas coord_x e coord_y.
Cada categoria deve ser representada por uma cor diferente.
A figura deve ter uma legenda mostrando as categorias e suas cores.
End of explanation
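A minimal added sketch of one way to start challenge [ A ] (not a reference solution); it assumes the coord_x, coord_y and label columns loaded above.
plt.figure(figsize=(12, 8))
for categoria, grupo in df.groupby("label"):
    plt.scatter(grupo["coord_x"], grupo["coord_y"], s=10, label=categoria)
plt.legend()
plt.show()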
# Escreva a Solução Aqui
Explanation: [ B ]: Visualizar uma amostra com o texto
Plotar duas categorias, apenas 10 produtos de cada, usando o campo product para plotar também o texto.
A figura deve manter a legenda mostrando apenas as duas categorias e suas cores.
End of explanation |
12,532 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Hacking for heat
In this series, I'm going to be posting about the process that goes on behind some of the blog posts we end up writing. In this first entry, I'm going to be exploring a number of datsets.
These are the ones that I'm going to be looking at
Step1: Let's take a look at unique values for some of the columns
Step2: The above table tells us that Manhattan has the lowest proportion of cases that receive judgement (about 1 in 80), whereas Staten Island has the highest (about 1 in 12). It may be something worth looking into, but it's also important to note that many cases settle out of court, and landlords in Manhattan may be more willing (or able) to do so.
Step3: The table above shows the same case judgement proportions, but conditioned on what type of case it was. Unhelpfully, the documentation does not specify what the difference between Access Warrant - Lead and Non-Lead is. It could be one of two possibilities
Step4: This dataset is less useful on its own. It doesn't tell us what the type of complaint was, only the date it was received and whether or not the complaint is still open. However, it may be useful in conjunction with the earlier dataset. For example, we might be interested in how many of these complaints end up in court (or at least, have some sort of legal action taken).
HPD violations
The following dataset tracks HPD violations.
Step5: These datasets all have different lengths, but that's not surprising, given they come from different years. One productive initial step would be to convert the date strings into something numerical.
HPD complaint problems database | Python Code:
import pandas as pd
litigation = pd.read_csv("Housing_Litigations.csv")
litigation.head()
Explanation: Hacking for heat
In this series, I'm going to be posting about the process that goes on behind some of the blog posts we end up writing. In this first entry, I'm going to be exploring a number of datasets.
These are the ones that I'm going to be looking at:
HPD (Housing Preservation and Development) housing litigations
Housing maintenance code complaints
Housing maintenance code violations
HPD complaints
(for HPD datasets, some documentation can be found here)
HPD litigation database
First, we're going to look at the smallest dataset, one that contains cases against landlords. From the documentation, this file contains "All cases commenced by HPD or by tennants (naming HPD as a party) in [housing court] since August 2006" either seeking orders for landlords to comply with regulations, or awarding HPD civil penalties (i.e., collecting on fines).
End of explanation
litigation['Boro'].unique()
litigation.groupby(by = ['Boro','CaseJudgement']).count()
Explanation: Let's take a look at unique values for some of the columns:
End of explanation
litigation['CaseType'].unique()
litigation.groupby(by = ['CaseType', 'CaseJudgement']).count()
Explanation: The above table tells us that Manhattan has the lowest proportion of cases that receive judgement (about 1 in 80), whereas Staten Island has the highest (about 1 in 12). It may be something worth looking into, but it's also important to note that many cases settle out of court, and landlords in Manhattan may be more willing (or able) to do so.
End of explanation
hpdcomp = pd.read_csv('Housing_Maintenance_Code_Complaints.csv')
hpdcomp.head()
len(hpdcomp)
Explanation: The table above shows the same case judgement proportions, but conditioned on what type of case it was. Unhelpfully, the documentation does not specify what the difference between Access Warrant - Lead and Non-Lead is. It could be one of two possibilities: The first is whether the warrants have to do with lead-based paint, which is a common problem, but perhaps still too idiosyncratic to have its own warrant type. The second, perhaps more likely possibility is whether or not HPD was the lead party in the case.
We'll probably end up using these data by aggregating it and examining how complaints change over time, perhaps as a function of what type they are. There's also the possibility of looking up specific buildings' complaints and tying them to landlords. There's probably also an easy way to join this dataset with another, by converting the address information into something standardized, like borough-block-lot (BBL; http://www1.nyc.gov/nyc-resources/service/1232/borough-block-lot-bbl-lookup)
HPD complaints
Next, we're going to look at a dataset of HPD complaints.
End of explanation
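As a rough added illustration of the joining idea above (not from the original post), one could build a shared borough-block-lot key; the column names below are guesses and must be checked against the real schemas, which is why the calls are left commented out.
def make_bbl(df, boro_col, block_col, lot_col):
    # 1-digit borough code + 5-digit block + 4-digit lot, following the city's BBL convention
    return (df[boro_col].astype(int).astype(str)
            + df[block_col].astype(int).astype(str).str.zfill(5)
            + df[lot_col].astype(int).astype(str).str.zfill(4))
# litigation["BBL"] = make_bbl(litigation, "Boro", "Block", "Lot")    # column names are assumptions
# hpdcomp["BBL"] = make_bbl(hpdcomp, "BoroughID", "Block", "Lot")     # column names are assumptions
# cases_with_complaints = litigation.merge(hpdcomp, on="BBL", how="inner")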
hpdviol = pd.read_csv('Housing_Maintenance_Code_Violations.csv')
hpdviol.head()
len(hpdviol)
Explanation: This dataset is less useful on its own. It doesn't tell us what the type of complaint was, only the date it was received and whether or not the complaint is still open. However, it may be useful in conjunction with the earlier dataset. For example, we might be interested in how many of these complaints end up in court (or at least, have some sort of legal action taken).
HPD violations
The following dataset tracks HPD violations.
End of explanation
hpdcompprob = pd.read_csv('Complaint_Problems.csv')
hpdcompprob.head()
Explanation: These datasets all have different lengths, but that's not surprising, given they come from different years. One productive initial step would be to convert the date strings into something numerical.
HPD complaint problems database
End of explanation |
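Following up on the point about converting the date strings, a small added sketch (not from the original post; the ReceivedDate column name is an assumption, so the lines are left commented out):
# hpdcomp["ReceivedDate"] = pd.to_datetime(hpdcomp["ReceivedDate"])
# complaints_per_month = hpdcomp.set_index("ReceivedDate").resample("M").size()
# complaints_per_month.plot()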
12,533 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The previous GenerateProbs and GenerateProbsPart2 focused on creating a CDF that can be indexed with a uniform random number to determine the coupon draw. But what if we went the other way? Each coupon occupies 1/N amount of space, but instead of a uniform random number, we draw a random variate from some distribution (probably beta) and that determines the likelihood that a coupon gets drawn. It's way easier to set up, but there may be a slowdown because we now draw random variates instead of uniform random numbers. Let's time a few options to generate beta distributed numbers.
Step2: It looks like using random variates will make the sim run a little slower, but not by a crazy amount if we use numpy's variate generation. Stay away from pure python and scipy's version though, they'll kill the speed.
Now that we are drawing coupons using random variates instead of generating a cdf, we can simulate the ccp with a single function | Python Code:
%matplotlib inline
import numpy as np
from numpy.random import beta as npbeta
from random import betavariate as pybeta
from scipy.stats import beta as scibeta
from matplotlib import pyplot as plt
from numpy import arange, vectorize
import timeit
start = timeit.default_timer()
for i in np.arange(1000000):
t = np.random.rand()
et = timeit.default_timer() - start
print(et)
start = timeit.default_timer()
for i in np.arange(1000000):
t = npbeta(1,1)
et = timeit.default_timer() - start
print(et)
start = timeit.default_timer()
for i in np.arange(1000000):
t = pybeta(1,1)
et = timeit.default_timer() - start
print(et)
start = timeit.default_timer()
for i in np.arange(1000000):
t = scibeta.rvs(1,1)
et = timeit.default_timer() - start
print(et)
Explanation: The previous GenerateProbs and GenerateProbsPart2 focused on creating a CDF that can be indexed with a uniform random number to determine the coupon draw. But what if we went the other way? Each coupon occupies 1/N amount of space, but instead of a uniform random number, we draw a random variate from some distribution (probably beta) and that determines the likelihood that a coupon gets drawn. It's way easier to set up, but there may be a slowdown because we now draw random variates instead of uniform random numbers. Let's time a few options to generate beta distributed numbers.
End of explanation
def single_run_looped(n, dist, alpha, beta):
    """
    This is a single run of the CCP.
    n = number of unique coupons
    m_max = max number of draws to simulate (should be much greater than n)
    dist = how are the coupon probabilities distributed (uniform, normal, exponential)
    norm_scale = how much to scale the normal distribution standard deviation (0<= norm_scale <1)
    """
m = 0 #start at zero draws
cdf = (arange(n)+1.0)/n #create the draw probability distribution
draws = [] #create our draw array
uniques = [] #create our unique array (deque is faster but may break DB inserts - research)
unique = 0
while True:
m+=1 #increment our draw counter
rv = npbeta(alpha, beta) #randomness that decides which coupon to draw
draw = (cdf>rv).sum()
if draw not in draws:
draws.append(draw)
unique+=1
uniques.append(unique) #store the info
if unique==n:#we'll stop once we have drawn all our coupons
return m #this line returns the number of draws; for testing
#return uniques #this line returns the full unique draws list; the actual data we want to record
vectorized_single_run_looped = vectorize(single_run_looped)
start = timeit.default_timer()
#test our sim with known results on uniform probs
trials = 200000
n = 10
records = vectorized_single_run_looped([n]*trials, ['beta']*trials, [1.0]*trials, [1.0]*trials)
num_fails = np.where(records==0)
average = np.mean(records)
std_error = np.std(records)/np.sqrt(trials)
z_crit = 1.96
low_ci = average - z_crit*std_error
high_ci = average + z_crit*std_error
expectation = np.asarray([1/(n+1) for n in np.arange(n)]).sum()*n
et = timeit.default_timer()-start
print ("num_fails: ", len(num_fails[0]))
print("low_ci: ", low_ci, "point_est: ", average, "high_ci: ", high_ci)
print("expected value: ", expectation)
print("elapsed_time: ", et)
Explanation: It looks like using random variates will make the sim run a little slower, but not by a crazy amount if we use numpy's variate generation. Stay away from pure python and scipy's version though, they'll kill the speed.
Now that we are drawing coupons using random variates instead of generating a cdf, we can simulate the ccp with a single function:
End of explanation |
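For reference (an added note, not in the original), the uniform-probability check above is against the classical coupon-collector expectation, which the expectation line computes as n times the n-th harmonic number:
$$ E[\text{draws}] = n \sum_{k=1}^{n} \frac{1}{k} = n H_n \approx 29.29 \quad \text{for } n = 10 $$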
12,534 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<!--BOOK_INFORMATION-->
<img align="left" style="padding-right
Step1: Visualizing a Three-Dimensional Function
We'll start by demonstrating a contour plot using a function $z = f(x, y)$, using the following particular choice for $f$ (we've seen this before in Computation on Arrays
Step2: A contour plot can be created with the plt.contour function.
It takes three arguments
Step3: Now let's look at this with a standard line-only contour plot
Step4: Notice that by default when a single color is used, negative values are represented by dashed lines, and positive values by solid lines.
Alternatively, the lines can be color-coded by specifying a colormap with the cmap argument.
Here, we'll also specify that we want more lines to be drawn—20 equally spaced intervals within the data range
Step5: Here we chose the RdGy (short for Red-Gray) colormap, which is a good choice for centered data.
Matplotlib has a wide range of colormaps available, which you can easily browse in IPython by doing a tab completion on the plt.cm module
Step6: The colorbar makes it clear that the black regions are "peaks," while the red regions are "valleys."
One potential issue with this plot is that it is a bit "splotchy." That is, the color steps are discrete rather than continuous, which is not always what is desired.
This could be remedied by setting the number of contours to a very high number, but this results in a rather inefficient plot
Step7: There are a few potential gotchas with imshow(), however | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
plt.style.use('seaborn-white')
import numpy as np
Explanation: <!--BOOK_INFORMATION-->
<img align="left" style="padding-right:10px;" src="figures/PDSH-cover-small.png">
This notebook contains an excerpt from the Python Data Science Handbook by Jake VanderPlas; the content is available on GitHub.
The text is released under the CC-BY-NC-ND license, and code is released under the MIT license. If you find this content useful, please consider supporting the work by buying the book!
No changes were made to the contents of this notebook from the original.
<!--NAVIGATION-->
< Visualizing Errors | Contents | Histograms, Binnings, and Density >
Density and Contour Plots
Sometimes it is useful to display three-dimensional data in two dimensions using contours or color-coded regions.
There are three Matplotlib functions that can be helpful for this task: plt.contour for contour plots, plt.contourf for filled contour plots, and plt.imshow for showing images.
This section looks at several examples of using these. We'll start by setting up the notebook for plotting and importing the functions we will use:
End of explanation
def f(x, y):
return np.sin(x) ** 10 + np.cos(10 + y * x) * np.cos(x)
Explanation: Visualizing a Three-Dimensional Function
We'll start by demonstrating a contour plot using a function $z = f(x, y)$, using the following particular choice for $f$ (we've seen this before in Computation on Arrays: Broadcasting, when we used it as a motivating example for array broadcasting):
End of explanation
x = np.linspace(0, 5, 50)
y = np.linspace(0, 5, 40)
X, Y = np.meshgrid(x, y)
Z = f(X, Y)
Explanation: A contour plot can be created with the plt.contour function.
It takes three arguments: a grid of x values, a grid of y values, and a grid of z values.
The x and y values represent positions on the plot, and the z values will be represented by the contour levels.
Perhaps the most straightforward way to prepare such data is to use the np.meshgrid function, which builds two-dimensional grids from one-dimensional arrays:
End of explanation
plt.contour(X, Y, Z, colors='black');
Explanation: Now let's look at this with a standard line-only contour plot:
End of explanation
plt.contour(X, Y, Z, 20, cmap='RdGy');
Explanation: Notice that by default when a single color is used, negative values are represented by dashed lines, and positive values by solid lines.
Alternatively, the lines can be color-coded by specifying a colormap with the cmap argument.
Here, we'll also specify that we want more lines to be drawn—20 equally spaced intervals within the data range:
End of explanation
plt.contourf(X, Y, Z, 20, cmap='RdGy')
plt.colorbar();
Explanation: Here we chose the RdGy (short for Red-Gray) colormap, which is a good choice for centered data.
Matplotlib has a wide range of colormaps available, which you can easily browse in IPython by doing a tab completion on the plt.cm module:
plt.cm.<TAB>
Our plot is looking nicer, but the spaces between the lines may be a bit distracting.
We can change this by switching to a filled contour plot using the plt.contourf() function (notice the f at the end), which uses largely the same syntax as plt.contour().
Additionally, we'll add a plt.colorbar() command, which automatically creates an additional axis with labeled color information for the plot:
End of explanation
plt.imshow(Z, extent=[0, 5, 0, 5], origin='lower',
cmap='RdGy')
plt.colorbar()
plt.axis(aspect='image');
Explanation: The colorbar makes it clear that the black regions are "peaks," while the red regions are "valleys."
One potential issue with this plot is that it is a bit "splotchy." That is, the color steps are discrete rather than continuous, which is not always what is desired.
This could be remedied by setting the number of contours to a very high number, but this results in a rather inefficient plot: Matplotlib must render a new polygon for each step in the level.
A better way to handle this is to use the plt.imshow() function, which interprets a two-dimensional grid of data as an image.
The following code shows this:
End of explanation
contours = plt.contour(X, Y, Z, 3, colors='black')
plt.clabel(contours, inline=True, fontsize=8)
plt.imshow(Z, extent=[0, 5, 0, 5], origin='lower',
cmap='RdGy', alpha=0.5)
plt.colorbar();
Explanation: There are a few potential gotchas with imshow(), however:
plt.imshow() doesn't accept an x and y grid, so you must manually specify the extent [xmin, xmax, ymin, ymax] of the image on the plot.
plt.imshow() by default follows the standard image array definition where the origin is in the upper left, not in the lower left as in most contour plots. This must be changed when showing gridded data.
plt.imshow() will automatically adjust the axis aspect ratio to match the input data; this can be changed by setting, for example, plt.axis(aspect='image') to make x and y units match.
Finally, it can sometimes be useful to combine contour plots and image plots.
For example, here we'll use a partially transparent background image (with transparency set via the alpha parameter) and overplot contours with labels on the contours themselves (using the plt.clabel() function):
End of explanation |
12,535 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Title
Step1: Read in the .csv data file
Use pandas pd.read_csv() function to read in the .csv file
Step2: Build the violin plot
Use seaborns built-in violin plot function to make the plot. The two fuction arguments are data=df and pallette="pastel".
Step3: Save the figure
Save the figure using matplotlib's ax.get_figure() method and fig.savefig() method. Since seaborn produces a matplotlib axis object, we can use matplotlib methods on our ax object. | Python Code:
# seaborn violin plot
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
Explanation: Title: Violin Plot using Python, matplotlib and seaborn
Date: 2017-10-21 16:00
Import the necessary packages
Pandas is used to read in the .csv data. Seaborn to build the plot and Matplotlib for displaying the plot.
End of explanation
df = pd.read_csv('data.csv')
Explanation: Read in the .csv data file
Use pandas pd.read_csv() function to read in the .csv file
End of explanation
ax = sns.violinplot(data=df, palette="pastel")
plt.show()
Explanation: Build the violin plot
Use seaborn's built-in violin plot function to make the plot. The two function arguments are data=df and palette="pastel".
End of explanation
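If more styling is wanted, seaborn's violinplot accepts further keyword arguments; for example (an added illustration, not part of the original post):
ax = sns.violinplot(data=df, palette="pastel", inner="quartile")  # draw quartile lines inside each violin
plt.show()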
fig = ax.get_figure()
fig.savefig('sns_violin_plot.png', dpi=300)
#seaborn violin plot
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
df = pd.read_csv('data.csv')
ax = sns.violinplot(data=df, palette="pastel")
fig = ax.get_figure()
fig.savefig('sns_violin_plot.png', dpi=300)
Explanation: Save the figure
Save the figure using matplotlib's ax.get_figure() method and fig.savefig() method. Since seaborn produces a matplotlib axis object, we can use matplotlib methods on our ax object.
End of explanation |
12,536 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Kalman filter for altitude estimation from GPS, sonar, baro. Input with accelerometer. Estimation of acceleometer bias and baro bias.
I) TRAJECTORY
We assume sinusoidal trajectory
Step1: II) MEASUREMENTS
Sonar
Step2: Baro
Step3: GPS
Step4: GPS velocity
Step5: Acceleration
Step6: III) PROBLEM FORMULATION
State vector
$$x_{k} = \left[ \matrix{ z \ h \ \dot z \ \zeta \ \eta} \right]
= \matrix{ \text{Altitude} \ \text{Height above ground} \ \text{Vertical speed} \ \text{Accelerometer bias} \ \text{baro bias}}$$
Input vector
$$ u_{k} = \left[ \matrix{ \ddot z } \right] = \text{Accelerometer} $$
Formal definition (Law of motion)
Step7: Initial uncertainty $P_0$
Step8: Dynamic matrix $A$
Step9: Disturbance Control Matrix $B$
Step10: Measurement Matrix $H$
Step11: Measurement noise covariance $R$
Step12: Process noise covariance $Q$
Step13: Identity Matrix
Step14: Input
Step15: V) TEST
Filter loop
Step16: VI) PLOT | Python Code:
m = 50000 # timesteps
dt = 1/ 250.0 # update loop at 250Hz
t = np.arange(m) * dt
freq = 0.05 # Hz
amplitude = 5.0 # meter
alt_true = 405 + amplitude * np.cos(2 * np.pi * freq * t)
height_true = 6 + amplitude * np.cos(2 * np.pi * freq * t)
vel_true = - amplitude * (2 * np.pi * freq) * np.sin(2 * np.pi * freq * t)
acc_true = - amplitude * (2 * np.pi * freq)**2 * np.cos(2 * np.pi * freq * t)
plt.plot(t, height_true)
plt.plot(t, vel_true)
plt.plot(t, acc_true)
plt.legend(['elevation', 'velocity', 'acceleration'], loc='best')
plt.xlabel('time')
Explanation: Kalman filter for altitude estimation from GPS, sonar, baro. Input with accelerometer. Estimation of accelerometer bias and baro bias.
I) TRAJECTORY
We assume sinusoidal trajectory
End of explanation
sonar_sampling_period = 1 / 10.0 # sonar reading at 10Hz
# Sonar noise
sigma_sonar_true = 0.05 # in meters
meas_sonar = height_true[::(sonar_sampling_period/dt)] + sigma_sonar_true * np.random.randn(m // (sonar_sampling_period/dt))
t_meas_sonar = t[::(sonar_sampling_period/dt)]
plt.plot(t_meas_sonar, meas_sonar, 'or')
plt.plot(t, height_true)
plt.legend(['Sonar measure', 'Elevation (true)'])
plt.title("Sonar measurement")
plt.xlabel('time (s)')
plt.ylabel('alt (m)')
Explanation: II) MEASUREMENTS
Sonar
End of explanation
baro_sampling_period = 1 / 10.0 # baro reading at 10Hz
# Baro noise
sigma_baro_true = 2.0 # in meters
# Baro bias
baro_bias = 20
meas_baro = baro_bias + alt_true[::(baro_sampling_period/dt)] + sigma_baro_true * np.random.randn(m // (baro_sampling_period/dt))
t_meas_baro = t[::(baro_sampling_period/dt)]
plt.plot(t_meas_baro, meas_baro, 'or')
plt.plot(t, alt_true)
plt.title("Baro measurement")
plt.xlabel('time (s)')
plt.ylabel('alt (m)')
Explanation: Baro
End of explanation
gps_sampling_period = 1 / 1.0 # gps reading at 1Hz
# GPS noise
sigma_gps_true = 5.0 # in meters
meas_gps = alt_true[::(gps_sampling_period/dt)] + sigma_gps_true * np.random.randn(m // (gps_sampling_period/dt))
t_meas_gps = t[::(gps_sampling_period/dt)]
plt.plot(t_meas_gps, meas_gps, 'or')
plt.plot(t, alt_true)
plt.title("GPS measurement")
plt.xlabel('time (s)')
plt.ylabel('alt (m)')
Explanation: GPS
End of explanation
gpsvel_sampling_period = 1 / 1.0 # gps reading at 1Hz
# GPS noise
sigma_gpsvel_true = 10.0 # in meters/s
meas_gpsvel = vel_true[::(gps_sampling_period/dt)] + sigma_gpsvel_true * np.random.randn(m // (gps_sampling_period/dt))
t_meas_gps = t[::(gps_sampling_period/dt)]
plt.plot(t_meas_gps, meas_gpsvel, 'or')
plt.plot(t, vel_true)
plt.title("GPS velocity measurement")
plt.xlabel('time (s)')
plt.ylabel('vel (m/s)')
Explanation: GPS velocity
End of explanation
sigma_acc_true = 0.2 # in m.s^-2
acc_bias = 1.5
meas_acc = acc_true + sigma_acc_true * np.random.randn(m) + acc_bias
plt.plot(t, meas_acc, '.')
plt.plot(t, acc_true)
plt.title("Accelerometer measurement")
plt.xlabel('time (s)')
plt.ylabel('acc ($m.s^{-2}$)')
Explanation: Acceleration
End of explanation
x = np.matrix([0.0, 0.0, 0.0, 0.0, 0.0]).T
print(x, x.shape)
Explanation: III) PROBLEM FORMULATION
State vector
$$x_{k} = \left[ \matrix{ z \\ h \\ \dot z \\ \zeta \\ \eta} \right]
= \matrix{ \text{Altitude} \\ \text{Height above ground} \\ \text{Vertical speed} \\ \text{Accelerometer bias} \\ \text{baro bias}}$$
Input vector
$$ u_{k} = \left[ \matrix{ \ddot z } \right] = \text{Accelerometer} $$
Formal definition (Law of motion):
$$ x_{k+1} = \textbf{A} \cdot x_{k} + \textbf{B} \cdot u_{k} $$
$$ x_{k+1} = \left[
\matrix{ 1 & 0 & \Delta t & \frac{1}{2} \Delta t^2 & 0
\\ 0 & 1 & \Delta t & \frac{1}{2} \Delta t^2 & 0
\\ 0 & 0 & 1 & \Delta t & 0
\\ 0 & 0 & 0 & 1 & 0
\\ 0 & 0 & 0 & 0 & 1} \right]
\cdot
\left[ \matrix{ z \\ h \\ \dot z \\ \zeta \\ \eta } \right]
+ \left[ \matrix{ \frac{1}{2} \Delta t^2 \\ \frac{1}{2} \Delta t^2 \\ \Delta t \\ 0 \\ 0} \right]
\cdot
\left[ \matrix{ \ddot z } \right] $$
Measurement
$$ y = H \cdot x $$
$$ \left[ \matrix{ y_{sonar} \\ y_{baro} \\ y_{gps} \\ y_{gpsvel} } \right]
= \left[ \matrix{ 0 & 1 & 0 & 0 & 0
\\ 1 & 0 & 0 & 0 & 1
\\ 1 & 0 & 0 & 0 & 0
\\ 0 & 0 & 1 & 0 & 0 } \right] \cdot \left[ \matrix{ z \\ h \\ \dot z \\ \zeta \\ \eta } \right] $$
Measures are done separately according to the refresh rate of each sensor
We measure the height from sonar
$$ y_{sonar} = H_{sonar} \cdot x $$
$$ y_{sonar} = \left[ \matrix{ 0 & 1 & 0 & 0 & 0 } \right] \cdot x $$
We measure the altitude from barometer
$$ y_{baro} = H_{baro} \cdot x $$
$$ y_{baro} = \left[ \matrix{ 1 & 0 & 0 & 0 & 1 } \right] \cdot x $$
We measure the altitude from gps
$$ y_{gps} = H_{gps} \cdot x $$
$$ y_{gps} = \left[ \matrix{ 1 & 0 & 0 & 0 & 0 } \right] \cdot x $$
We measure the velocity from gps
$$ y_{gpsvel} = H_{gpsvel} \cdot x $$
$$ y_{gpsvel} = \left[ \matrix{ 0 & 0 & 1 & 0 & 0 } \right] \cdot x $$
IV) IMPLEMENTATION
Initial state $x_0$
End of explanation
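For readability, here is an added summary (not in the original notebook) of the standard Kalman recursion that the filter loop below implements:
$$ \text{Predict:} \quad \hat x_{k|k-1} = A\,\hat x_{k-1} + B\,u_k, \qquad P_{k|k-1} = A P_{k-1} A^T + Q $$
$$ \text{Update:} \quad K = P_{k|k-1} H^T \left( H P_{k|k-1} H^T + R \right)^{-1}, \quad \hat x_k = \hat x_{k|k-1} + K\,(z_k - H\,\hat x_{k|k-1}), \quad P_k = (I - K H)\, P_{k|k-1} $$
Each sensor applies its own H and R at its own refresh rate, exactly as in the measurement update branches of the loop.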
P = np.diag([100.0, 100.0, 100.0, 100.0, 100.0])
print(P, P.shape)
Explanation: Initial uncertainty $P_0$
End of explanation
dt = 1 / 250.0 # Time step between filter steps (update loop at 250Hz)
A = np.matrix([[1.0, 0.0, dt, 0.5*dt**2, 0.0],
[0.0, 1.0, dt, 0.5*dt**2, 0.0],
[0.0, 0.0, 1.0, dt, 0.0],
[0.0, 0.0, 0.0, 1.0, 0.0],
[0.0, 0.0, 0.0, 0.0, 1.0]])
print(A, A.shape)
Explanation: Dynamic matrix $A$
End of explanation
B = np.matrix([[0.5*dt**2],
[0.5*dt**2],
[dt ],
[0.0],
[0.0]])
print(B, B.shape)
Explanation: Disturbance Control Matrix $B$
End of explanation
H_sonar = np.matrix([[0.0, 1.0, 0.0, 0.0, 0.0]])
print(H_sonar, H_sonar.shape)
H_baro = np.matrix([[1.0, 0.0, 0.0, 0.0, 1.0]])
print(H_baro, H_baro.shape)
H_gps = np.matrix([[1.0, 0.0, 0.0, 0.0, 0.0]])
print(H_gps, H_gps.shape)
H_gpsvel = np.matrix([[0.0, 0.0, 1.0, 0.0, 0.0]])
print(H_gpsvel, H_gpsvel.shape)
Explanation: Measurement Matrix $H$
End of explanation
# sonar
sigma_sonar = sigma_sonar_true # sonar noise
R_sonar = np.matrix([[sigma_sonar**2]])
print(R_sonar, R_sonar.shape)
# baro
sigma_baro = sigma_baro_true # sonar noise
R_baro = np.matrix([[sigma_baro**2]])
print(R_baro, R_baro.shape)
# gps
sigma_gps = sigma_gps_true # sonar noise
R_gps = np.matrix([[sigma_gps**2]])
print(R_gps, R_gps.shape)
# gpsvel
sigma_gpsvel = sigma_gpsvel_true # sonar noise
R_gpsvel = np.matrix([[sigma_gpsvel**2]])
print(R_gpsvel, R_gpsvel.shape)
Explanation: Measurement noise covariance $R$
End of explanation
from sympy import Symbol, Matrix, latex
from sympy.interactive import printing
import sympy
printing.init_printing()
dts = Symbol('\Delta t')
s1 = Symbol('\sigma_1') # drift of accelerometer bias
s2 = Symbol('\sigma_2') # drift of barometer bias
Q = sympy.zeros(5)
Qs = Matrix([[0.5*dts**2], [0.5*dts**2], [dts], [1.0]])
Q[:4, :4] = Qs*Qs.T*s1**2
Q[4, 4] = s2**2
Q
sigma_acc_drift = 0.0001
sigma_baro_drift = 0.0001
G = np.matrix([[0.5*dt**2],
[0.5*dt**2],
[dt],
[1.0]])
Q = np.zeros([5, 5])
Q[:4, :4] = G*G.T*sigma_acc_drift**2
Q[4, 4] = sigma_baro_drift**2
print(Q, Q.shape)
Explanation: Process noise covariance $Q$
End of explanation
I = np.eye(5)
print(I, I.shape)
Explanation: Identity Matrix
End of explanation
u = meas_acc
print(u, u.shape)
Explanation: Input
End of explanation
# Re init state
# State
x[0] = 0.0
x[1] = 0.0
x[2] = 0.0
x[3] = 0.0
x[4] = 0.0
# Estimate covariance
P[0,0] = 1000.0
P[1,1] = 100.0
P[2,2] = 100.0
P[3,3] = 100.0
P[4,4] = 100.0
# Preallocation for Plotting
# estimate
zt = []
ht = []
dzt= []
zetat=[]
etat = []
# covariance
Pz = []
Ph = []
Pdz= []
Pzeta=[]
Peta=[]
# sonar off/on
sonar_off = 10000
sonar_on = 40000
for filterstep in range(m):
# ========================
# Time Update (Prediction)
# ========================
# Project the state ahead
x = A*x + B*u[filterstep]
# Project the error covariance ahead
P = A*P*A.T + Q
# ===============================
# Measurement Update (Correction)
# ===============================
# Sonar (only at the beginning, ex take off)
if filterstep%25 == 0 and (filterstep <sonar_off or filterstep>sonar_on):
# Compute the Kalman Gain
S_sonar = H_sonar*P*H_sonar.T + R_sonar
K_sonar = (P*H_sonar.T) * np.linalg.pinv(S_sonar)
# Update the estimate via z
Z_sonar = meas_sonar[filterstep//25]
y_sonar = Z_sonar - (H_sonar*x) # Innovation or Residual
x = x + (K_sonar*y_sonar)
# Update the error covariance
P = (I - (K_sonar*H_sonar))*P
# Baro
if filterstep%25 == 0:
# Compute the Kalman Gain
S_baro = H_baro*P*H_baro.T + R_baro
K_baro = (P*H_baro.T) * np.linalg.pinv(S_baro)
# Update the estimate via z
Z_baro = meas_baro[filterstep//25]
y_baro = Z_baro - (H_baro*x) # Innovation or Residual
x = x + (K_baro*y_baro)
# Update the error covariance
P = (I - (K_baro*H_baro))*P
# GPS
if filterstep%250 == 0:
# Compute the Kalman Gain
S_gps = H_gps*P*H_gps.T + R_gps
K_gps = (P*H_gps.T) * np.linalg.pinv(S_gps)
# Update the estimate via z
Z_gps = meas_gps[filterstep//250]
y_gps = Z_gps - (H_gps*x) # Innovation or Residual
x = x + (K_gps*y_gps)
# Update the error covariance
P = (I - (K_gps*H_gps))*P
# GPSvel
if filterstep%250 == 0:
# Compute the Kalman Gain
S_gpsvel = H_gpsvel*P*H_gpsvel.T + R_gpsvel
K_gpsvel = (P*H_gpsvel.T) * np.linalg.pinv(S_gpsvel)
# Update the estimate via z
Z_gpsvel = meas_gpsvel[filterstep//250]
y_gpsvel = Z_gpsvel - (H_gpsvel*x) # Innovation or Residual
x = x + (K_gpsvel*y_gpsvel)
# Update the error covariance
P = (I - (K_gpsvel*H_gpsvel))*P
# ========================
# Save states for Plotting
# ========================
zt.append(float(x[0]))
ht.append(float(x[1]))
dzt.append(float(x[2]))
zetat.append(float(x[3]))
etat.append(float(x[4]))
Pz.append(float(P[0,0]))
Ph.append(float(P[1,1]))
Pdz.append(float(P[2,2]))
Pzeta.append(float(P[3,3]))
Peta.append(float(P[4,4]))
Explanation: V) TEST
Filter loop
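For reference, each pass of the loop below applies the standard Kalman filter predict/update equations, written in the notation already used above (a recap of what the code does, not additional functionality):

$$\hat{x}^-_k = A\,\hat{x}_{k-1} + B\,u_k, \qquad P^-_k = A\,P_{k-1}\,A^T + Q$$

$$K_k = P^-_k H^T \left(H P^-_k H^T + R\right)^{-1}, \qquad \hat{x}_k = \hat{x}^-_k + K_k\left(z_k - H\,\hat{x}^-_k\right), \qquad P_k = \left(I - K_k H\right) P^-_k$$

with a separate $H$ and $R$ per sensor (sonar, baro, GPS position, GPS velocity), each correction applied only on the time steps where that sensor actually delivers a measurement.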
End of explanation
plt.figure(figsize=(17,15))
plt.subplot(321)
plt.plot(t, zt, color='b')
plt.fill_between(t, np.array(zt) - 10* np.array(Pz), np.array(zt) + 10*np.array(Pz), alpha=0.2, color='b')
plt.plot(t, alt_true, 'g')
plt.plot(t_meas_baro, meas_baro, '.r')
plt.plot(t_meas_gps, meas_gps, 'ok')
plt.plot([t[sonar_off], t[sonar_off]], [-1000, 1000], '--k')
plt.plot([t[sonar_on], t[sonar_on]], [-1000, 1000], '--k')
#plt.ylim([1.7, 2.3])
plt.ylim([405 - 10 * amplitude, 405 + 5 * amplitude])
plt.legend(['estimate', 'true altitude', 'baro reading', 'gps reading', 'sonar switched off/on'], loc='lower right')
plt.title('Altitude')
plt.subplot(322)
plt.plot(t, ht, color='b')
plt.fill_between(t, np.array(ht) - 10* np.array(Ph), np.array(ht) + 10*np.array(Ph), alpha=0.2, color='b')
plt.plot(t, height_true, 'g')
plt.plot(t_meas_sonar, meas_sonar, '.r')
plt.plot([t[sonar_off], t[sonar_off]], [-1000, 1000], '--k')
plt.plot([t[sonar_on], t[sonar_on]], [-1000, 1000], '--k')
#plt.ylim([1.7, 2.3])
plt.ylim([5 - 2 * amplitude, 5 + 1.5 * amplitude])
#plt.ylim([5 - 1 * amplitude, 5 + 1 * amplitude])
plt.legend(['estimate', 'true height above ground', 'sonar reading', 'sonar switched off/on'], loc='lower right')
plt.title('Height')
plt.subplot(323)
plt.plot(t, dzt, color='b')
plt.fill_between(t, np.array(dzt) - 10* np.array(Pdz), np.array(dzt) + 10*np.array(Pdz), alpha=0.2, color='b')
plt.plot(t, vel_true, 'g')
plt.plot(t_meas_gps, meas_gpsvel, 'ok')
plt.plot([t[sonar_off], t[sonar_off]], [-1000, 1000], '--k')
plt.plot([t[sonar_on], t[sonar_on]], [-1000, 1000], '--k')
#plt.ylim([1.7, 2.3])
plt.ylim([0 - 10.0 * amplitude, + 10.0 * amplitude])
plt.legend(['estimate', 'true velocity', 'gps_vel reading', 'sonar switched off/on'], loc='lower right')
plt.title('Velocity')
plt.subplot(324)
plt.plot(t, zetat, color='b')
plt.fill_between(t, np.array(zetat) - 10* np.array(Pzeta), np.array(zetat) + 10*np.array(Pzeta), alpha=0.2, color='b')
plt.plot(t, -acc_bias * np.ones_like(t), 'g')
plt.plot([t[sonar_off], t[sonar_off]], [-1000, 1000], '--k')
plt.plot([t[sonar_on], t[sonar_on]], [-1000, 1000], '--k')
plt.ylim([-acc_bias-0.2, -acc_bias+0.2])
# plt.ylim([0 - 2.0 * amplitude, + 2.0 * amplitude])
plt.legend(['estimate', 'true bias', 'sonar switched off/on'])
plt.title('Acc bias')
plt.subplot(325)
plt.plot(t, etat, color='b')
plt.fill_between(t, np.array(etat) - 10* np.array(Peta), np.array(etat) + 10*np.array(Peta), alpha=0.2, color='b')
plt.plot(t, baro_bias * np.ones_like(t), 'g')
plt.plot([t[sonar_off], t[sonar_off]], [-1000, 1000], '--k')
plt.plot([t[sonar_on], t[sonar_on]], [-1000, 1000], '--k')
plt.ylim([baro_bias-10.0, baro_bias+10.0])
# plt.ylim([0 - 2.0 * amplitude, + 2.0 * amplitude])
plt.legend(['estimate', 'true bias', 'sonar switched off/on'])
plt.title('Baro bias')
plt.subplot(326)
plt.plot(t, Pz)
plt.plot(t, Ph)
plt.plot(t, Pdz)
plt.ylim([0, 1.0])
plt.plot([t[sonar_off], t[sonar_off]], [-1000, 1000], '--k')
plt.plot([t[sonar_on], t[sonar_on]], [-1000, 1000], '--k')
plt.legend(['Altitude', 'Height', 'Velocity', 'sonar switched off/on'])
plt.title('Uncertainties')
Explanation: VI) PLOT
End of explanation |
12,537 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
DICS for power mapping
In this tutorial, we'll simulate two signals originating from two
locations on the cortex. These signals will be sinusoids, so we'll be looking
at oscillatory activity (as opposed to evoked activity).
We'll use dynamic imaging of coherent sources (DICS) [1]_ to map out
spectral power along the cortex. Let's see if we can find our two simulated
sources.
Step1: Setup
We first import the required packages to run this tutorial and define a list
of filenames for various things we'll be using.
Step3: Data simulation
The following function generates a timeseries that contains an oscillator,
whose frequency fluctuates a little over time, but stays close to 10 Hz.
We'll use this function to generate our two signals.
Step4: Let's simulate two timeseries and plot some basic information about them.
Step5: Now we put the signals at two locations on the cortex. We construct a SourceEstimate object to store them in.
Step6: Before we simulate the sensor-level data, let's define a signal-to-noise
ratio. You are encouraged to play with this parameter and see the effect of
noise on our results.
Step7: Now we run the signal through the forward model to obtain simulated sensor
data. To save computation time, we'll only simulate gradiometer data. You can
try simulating other types of sensors as well.
Some noise is added based on the baseline noise covariance matrix from the
sample dataset, scaled to implement the desired SNR.
Step8: We create an mne.Epochs object containing two trials: one with both noise and signal and one with just noise.
Step9: Power mapping
With our simulated dataset ready, we can now pretend to be researchers that
have just recorded this from a real subject and are going to study what parts
of the brain communicate with each other.
First, we'll create a source estimate of the MEG data. We'll use both a
straightforward MNE-dSPM inverse solution for this, and the DICS beamformer
which is specifically designed to work with oscillatory data.
Computing the inverse using MNE-dSPM
Step10: We will now compute the cortical power map at 10 Hz using a DICS beamformer.
A beamformer will construct for each vertex a spatial filter that aims to
pass activity originating from the vertex, while dampening activity from
other sources as much as possible.
The make_dics function has many switches that offer precise control over the way the filter weights are computed. | Python Code:
# Author: Marijn van Vliet <[email protected]>
#
# License: BSD (3-clause)
Explanation: DICS for power mapping
In this tutorial, we'll simulate two signals originating from two
locations on the cortex. These signals will be sinusoids, so we'll be looking
at oscillatory activity (as opposed to evoked activity).
We'll use dynamic imaging of coherent sources (DICS) [1]_ to map out
spectral power along the cortex. Let's see if we can find our two simulated
sources.
End of explanation
import os.path as op
import numpy as np
from scipy.signal import welch, coherence
from mayavi import mlab
from matplotlib import pyplot as plt
import mne
from mne.simulation import simulate_raw
from mne.datasets import sample
from mne.minimum_norm import make_inverse_operator, apply_inverse
from mne.time_frequency import csd_morlet
from mne.beamformer import make_dics, apply_dics_csd
# We use the MEG and MRI setup from the MNE-sample dataset
data_path = sample.data_path(download=False)
subjects_dir = op.join(data_path, 'subjects')
mri_path = op.join(subjects_dir, 'sample')
# Filenames for various files we'll be using
meg_path = op.join(data_path, 'MEG', 'sample')
raw_fname = op.join(meg_path, 'sample_audvis_raw.fif')
trans_fname = op.join(meg_path, 'sample_audvis_raw-trans.fif')
src_fname = op.join(mri_path, 'bem/sample-oct-6-src.fif')
bem_fname = op.join(mri_path, 'bem/sample-5120-5120-5120-bem-sol.fif')
fwd_fname = op.join(meg_path, 'sample_audvis-meg-eeg-oct-6-fwd.fif')
cov_fname = op.join(meg_path, 'sample_audvis-cov.fif')
# Seed for the random number generator
rand = np.random.RandomState(42)
Explanation: Setup
We first import the required packages to run this tutorial and define a list
of filenames for various things we'll be using.
End of explanation
sfreq = 50. # Sampling frequency of the generated signal
times = np.arange(10. * sfreq) / sfreq # 10 seconds of signal
n_times = len(times)
def coh_signal_gen():
Generate an oscillating signal.
Returns
-------
signal : ndarray
The generated signal.
t_rand = 0.001 # Variation in the instantaneous frequency of the signal
std = 0.1 # Std-dev of the random fluctuations added to the signal
base_freq = 10. # Base frequency of the oscillators in Hertz
n_times = len(times)
# Generate an oscillator with varying frequency and phase lag.
signal = np.sin(2.0 * np.pi *
(base_freq * np.arange(n_times) / sfreq +
np.cumsum(t_rand * rand.randn(n_times))))
# Add some random fluctuations to the signal.
signal += std * rand.randn(n_times)
# Scale the signal to be in the right order of magnitude (~100 nAm)
# for MEG data.
signal *= 100e-9
return signal
Explanation: Data simulation
The following function generates a timeseries that contains an oscillator,
whose frequency fluctuates a little over time, but stays close to 10 Hz.
We'll use this function to generate our two signals.
End of explanation
signal1 = coh_signal_gen()
signal2 = coh_signal_gen()
fig, axes = plt.subplots(2, 2, figsize=(8, 4))
# Plot the timeseries
ax = axes[0][0]
ax.plot(times, 1e9 * signal1, lw=0.5)
ax.set(xlabel='Time (s)', xlim=times[[0, -1]], ylabel='Amplitude (Am)',
title='Signal 1')
ax = axes[0][1]
ax.plot(times, 1e9 * signal2, lw=0.5)
ax.set(xlabel='Time (s)', xlim=times[[0, -1]], title='Signal 2')
# Power spectrum of the first timeseries
f, p = welch(signal1, fs=sfreq, nperseg=128, nfft=256)
ax = axes[1][0]
# Only plot the first 100 frequencies
ax.plot(f[:100], 20 * np.log10(p[:100]), lw=1.)
ax.set(xlabel='Frequency (Hz)', xlim=f[[0, 99]],
ylabel='Power (dB)', title='Power spectrum of signal 1')
# Compute the coherence between the two timeseries
f, coh = coherence(signal1, signal2, fs=sfreq, nperseg=100, noverlap=64)
ax = axes[1][1]
ax.plot(f[:50], coh[:50], lw=1.)
ax.set(xlabel='Frequency (Hz)', xlim=f[[0, 49]], ylabel='Coherence',
title='Coherence between the timeseries')
fig.tight_layout()
Explanation: Let's simulate two timeseries and plot some basic information about them.
End of explanation
# The locations on the cortex where the signal will originate from. These
# locations are indicated as vertex numbers.
source_vert1 = 146374
source_vert2 = 33830
# The timeseries at each vertex: one part signal, one part silence
timeseries1 = np.hstack([signal1, np.zeros_like(signal1)])
timeseries2 = np.hstack([signal2, np.zeros_like(signal2)])
# Construct a SourceEstimate object that describes the signal at the cortical
# level.
stc = mne.SourceEstimate(
np.vstack((timeseries1, timeseries2)), # The two timeseries
vertices=[[source_vert1], [source_vert2]], # Their locations
tmin=0,
tstep=1. / sfreq,
subject='sample', # We use the brain model of the MNE-Sample dataset
)
Explanation: Now we put the signals at two locations on the cortex. We construct a
:class:mne.SourceEstimate object to store them in.
The timeseries will have a part where the signal is active and a part where
it is not. The techniques we'll be using in this tutorial depend on being
able to contrast data that contains the signal of interest versus data that
does not (i.e. it contains only noise).
End of explanation
snr = 1. # Signal-to-noise ratio. Decrease to add more noise.
Explanation: Before we simulate the sensor-level data, let's define a signal-to-noise
ratio. You are encouraged to play with this parameter and see the effect of
noise on our results.
End of explanation
# Read the info from the sample dataset. This defines the location of the
# sensors and such.
info = mne.io.read_info(raw_fname)
info.update(sfreq=sfreq, bads=[])
# Only use gradiometers
picks = mne.pick_types(info, meg='grad', stim=True, exclude=())
mne.pick_info(info, picks, copy=False)
# This is the raw object that will be used as a template for the simulation.
raw = mne.io.RawArray(np.zeros((info['nchan'], len(stc.times))), info)
# Define a covariance matrix for the simulated noise. In this tutorial, we use
# a simple diagonal matrix.
cov = mne.cov.make_ad_hoc_cov(info)
cov['data'] *= (20. / snr) ** 2 # Scale the noise to achieve the desired SNR
# Simulate the raw data, with a lowpass filter on the noise
raw = simulate_raw(raw, stc, trans_fname, src_fname, bem_fname, cov=cov,
random_state=rand, iir_filter=[4, -4, 0.8])
Explanation: Now we run the signal through the forward model to obtain simulated sensor
data. To save computation time, we'll only simulate gradiometer data. You can
try simulating other types of sensors as well.
Some noise is added based on the baseline noise covariance matrix from the
sample dataset, scaled to implement the desired SNR.
End of explanation
t0 = raw.first_samp # First sample in the data
t1 = t0 + n_times - 1 # Sample just before the second trial
epochs = mne.Epochs(
raw,
events=np.array([[t0, 0, 1], [t1, 0, 2]]),
event_id=dict(signal=1, noise=2),
tmin=0, tmax=10,
preload=True,
)
# Plot some of the channels of the simulated data that are situated above one
# of our simulated sources.
picks = mne.pick_channels(epochs.ch_names, mne.read_selection('Left-frontal'))
epochs.plot(picks=picks)
Explanation: We create an :class:mne.Epochs object containing two trials: one with
both noise and signal and one with just noise
End of explanation
# Compute the inverse operator
fwd = mne.read_forward_solution(fwd_fname)
inv = make_inverse_operator(epochs.info, fwd, cov)
# Apply the inverse model to the trial that also contains the signal.
s = apply_inverse(epochs['signal'].average(), inv)
# Take the root-mean square along the time dimension and plot the result.
s_rms = np.sqrt((s ** 2).mean())
brain = s_rms.plot('sample', subjects_dir=subjects_dir, hemi='both', figure=1,
size=600)
# Indicate the true locations of the source activity on the plot.
brain.add_foci(source_vert1, coords_as_verts=True, hemi='lh')
brain.add_foci(source_vert2, coords_as_verts=True, hemi='rh')
# Rotate the view and add a title.
mlab.view(0, 0, 550, [0, 0, 0])
mlab.title('MNE-dSPM inverse (RMS)', height=0.9)
Explanation: Power mapping
With our simulated dataset ready, we can now pretend to be researchers that
have just recorded this from a real subject and are going to study what parts
of the brain communicate with each other.
First, we'll create a source estimate of the MEG data. We'll use both a
straightforward MNE-dSPM inverse solution for this, and the DICS beamformer
which is specifically designed to work with oscillatory data.
Computing the inverse using MNE-dSPM:
End of explanation
# Estimate the cross-spectral density (CSD) matrix on the trial containing the
# signal.
csd_signal = csd_morlet(epochs['signal'], frequencies=[10])
# Compute the spatial filters for each vertex, using two approaches.
filters_approach1 = make_dics(
info, fwd, csd_signal, reg=0.05, pick_ori='max-power', normalize_fwd=True,
inversion='single', weight_norm=None)
print(filters_approach1)
filters_approach2 = make_dics(
info, fwd, csd_signal, reg=0.1, pick_ori='max-power', normalize_fwd=False,
inversion='matrix', weight_norm='unit-noise-gain')
print(filters_approach2)
# You can save these to disk with:
# filters_approach1.save('filters_1-dics.h5')
# Compute the DICS power map by applying the spatial filters to the CSD matrix.
power_approach1, f = apply_dics_csd(csd_signal, filters_approach1)
power_approach2, f = apply_dics_csd(csd_signal, filters_approach2)
# Plot the DICS power maps for both approaches.
for approach, power in enumerate([power_approach1, power_approach2], 1):
brain = power.plot('sample', subjects_dir=subjects_dir, hemi='both',
figure=approach + 1, size=600)
# Indicate the true locations of the source activity on the plot.
brain.add_foci(source_vert1, coords_as_verts=True, hemi='lh')
brain.add_foci(source_vert2, coords_as_verts=True, hemi='rh')
# Rotate the view and add a title.
mlab.view(0, 0, 550, [0, 0, 0])
mlab.title('DICS power map, approach %d' % approach, height=0.9)
Explanation: We will now compute the cortical power map at 10 Hz. using a DICS beamformer.
A beamformer will construct for each vertex a spatial filter that aims to
pass activity originating from the vertex, while dampening activity from
other sources as much as possible.
The :func:mne.beamformer.make_dics function has many switches that offer
precise control
over the way the filter weights are computed. Currently, there is no clear
consensus regarding the best approach. This is why we will demonstrate two
approaches here:
The approach as described in [2]_, which first normalizes the forward
solution and computes a vector beamformer.
The scalar beamforming approach based on [3]_, which uses weight
normalization instead of normalizing the forward solution.
End of explanation |
12,538 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step3: Theano exercises
This notebook contains Theano exercises not related to machine learning.
The exercises work in the following way
Step4: Solution
Step9: Exercise 2
This exercise requires you to create Theano variables and apply elementwise multiplication and matrix/vector dot product.
Step10: Solution
Step14: Exercise 3
This exercise requires you to create Theano tensor variables, do broadcastable addition, and to compute the maximum over part of a tensor.
Step15: Solution
Step17: Exercise 4
This exercise requires you to compile a Theano function and call it to execute "x + y".
Step18: Solution
Step22: Exercise 5
This exercise makes you use shared variables. You must create them and update them by swapping the values of two shared variables.
Step23: Solution
Step25: Exercise 6
This exercise makes you use Theano's symbolic differentiation feature, grad.
Step26: Solution | Python Code:
import numpy as np
from theano import function
raise NotImplementedError("TODO: add any other imports you need")
def make_scalar():
Returns a new Theano scalar.
raise NotImplementedError("TODO: implement this function.")
def log(x):
Returns the logarithm of a Theano scalar x.
raise NotImplementedError("TODO: implement this function.")
def add(x, y):
Adds two theano scalars together and returns the result.
raise NotImplementedError("TODO: implement this function.")
# The following code will use your code and test it.
a = make_scalar()
b = make_scalar()
c = log(b)
d = add(a, c)
f = function([a, b], d)
a = np.cast[a.dtype](1.)
b = np.cast[b.dtype](2.)
actual = f(a, b)
expected = 1. + np.log(2.)
assert np.allclose(actual, expected)
print("SUCCESS!")
Explanation: Theano exercises
This notebook contains Theano exercises not related to machine learning.
The exercises work in the following way:
You have a cell with TODOs that raise errors with a description of what is needed.
The cell contains a description at the top.
Run the cell (Ctrl-Enter) to execute it. At first, it raises an error.
Modify the cell to implement what is asked in the error.
If you correctly implement all the TODOs and run the cell, it should print "SUCCESS!" at the end (there is validation code in the cell). If not, try again.
If you want to check the solution, execute the cell that starts with "%load" after the exercise.
Exercise 1
This exercise requires you to create Theano variables and perform some computation on them.
End of explanation
%load solutions/01_scalar_soln.py
Explanation: Solution
End of explanation
import numpy as np
from theano import function
raise NotImplementedError("TODO: add any other imports you need")
def make_vector():
Returns a new Theano vector.
raise NotImplementedError("TODO: implement this function.")
def make_matrix():
Returns a new Theano matrix.
raise NotImplementedError("TODO: implement this function.")
def elemwise_mul(a, b):
a: A theano matrix
b: A theano matrix
Returns the elementwise product of a and b
raise NotImplementedError("TODO: implement this function.")
def matrix_vector_mul(a, b):
a: A theano matrix
b: A theano vector
Returns the matrix-vector product of a and b
raise NotImplementedError("TODO: implement this function.")
# The following code will use your code and test it.
a = make_vector()
b = make_vector()
c = elemwise_mul(a, b)
d = make_matrix()
e = matrix_vector_mul(d, c)
f = function([a, b, d], e)
rng = np.random.RandomState([1, 2, 3])
a_value = rng.randn(5).astype(a.dtype)
b_value = rng.rand(5).astype(b.dtype)
c_value = a_value * b_value
d_value = rng.randn(5, 5).astype(d.dtype)
expected = np.dot(d_value, c_value)
actual = f(a_value, b_value, d_value)
assert np.allclose(actual, expected)
print("SUCCESS!")
Explanation: Exercise 2
This exercise requires you to create Theano variables and apply elementwise multiplication and matrix/vector dot product.
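As a plain-NumPy reminder of the two operations being asked for (an illustration only; the exercise itself must be written with Theano variables):

```python
import numpy as np

a = np.array([1., 2., 3.])
b = np.array([4., 5., 6.])
print(a * b)      # elementwise multiplication: [ 4. 10. 18.]

M = np.eye(3)
print(M.dot(a))   # matrix/vector dot product: [ 1.  2.  3.]
```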
End of explanation
%load solutions/02_vector_mat_soln.py
Explanation: Solution
End of explanation
import numpy as np
from theano import function
raise NotImplementedError("TODO: add any other imports you need")
def make_tensor(dim):
Returns a new Theano tensor with no broadcastable dimensions.
dim: the total number of dimensions of the tensor.
(You can use any dtype you like)
raise NotImplementedError("TODO: implement this function.")
def broadcasted_add(a, b):
a: a 3D theano tensor
b: a 4D theano tensor
Returns c, a 4D theano tensor, where
c[i, j, k, l] = a[l, k, i] + b[i, j, k, l]
for all i, j, k, l
raise NotImplementedError("TODO: implement this function.")
def partial_max(a):
a: a 4D theano tensor
Returns b, a theano matrix, where
b[i, j] = max_{k,l} a[i, k, l, j]
for all i, j
raise NotImplementedError("TODO: implement this function.")
# The following code will use your code and test it.
a = make_tensor(3)
b = make_tensor(4)
c = broadcasted_add(a, b)
d = partial_max(c)
f = function([a, b], d)
rng = np.random.RandomState([1, 2, 3])
a_value = rng.randn(2, 2, 2).astype(a.dtype)
b_value = rng.rand(2, 2, 2, 2).astype(b.dtype)
c_value = np.transpose(a_value, (2, 1, 0))[:, None, :, :] + b_value
expected = c_value.max(axis=1).max(axis=1)
actual = f(a_value, b_value)
assert np.allclose(actual, expected), (actual, expected)
print("SUCCESS!")
Explanation: Exercise 3
This exercise requires you to create Theano tensor variables, do broadcastable addition, and to compute the maximum over part of a tensor.
End of explanation
%load solutions/03_tensor_soln.py
Explanation: Solution
End of explanation
from theano import tensor as T
raise NotImplementedError("TODO: add any other imports you need")
def evaluate(x, y, expr, x_value, y_value):
x: A theano variable
y: A theano variable
expr: A theano expression involving x and y
x_value: A numpy value
y_value: A numpy value
Returns the value of expr when x_value is substituted for x
and y_value is substituted for y
raise NotImplementedError("TODO: implement this function.")
# The following code will use your code and test it.
x = T.iscalar()
y = T.iscalar()
z = x + y
assert evaluate(x, y, z, 1, 2) == 3
print("SUCCESS!")
Explanation: Exercise 4
This exercise requires you to compile a Theano function and call it to execute "x + y".
End of explanation
%load solutions/04_function_soln.py
Explanation: Solution
End of explanation
import numpy as np
raise NotImplementedError("TODO: add any other imports you need")
def make_shared(shape):
Returns a theano shared variable containing a tensor of the specified
shape.
You can use any value you want.
raise NotImplementedError("TODO: implement the function")
def exchange_shared(a, b):
a: a theano shared variable
b: a theano shared variable
Uses get_value and set_value to swap the values stored in a and b
raise NotImplementedError("TODO: implement the function")
def make_exchange_func(a, b):
a: a theano shared variable
b: a theano shared variable
Returns f
where f is a theano function, that, when called, swaps the
values in a and b
f should not return anything
raise NotImplementedError("TODO: implement the function")
# The following code will use your code and test it.
a = make_shared((5, 4, 3))
assert a.get_value().shape == (5, 4, 3)
b = make_shared((5, 4, 3))
assert a.get_value().shape == (5, 4, 3)
a.set_value(np.zeros((5, 4, 3), dtype=a.dtype))
b.set_value(np.ones((5, 4, 3), dtype=b.dtype))
exchange_shared(a, b)
assert np.all(a.get_value() == 1.)
assert np.all(b.get_value() == 0.)
f = make_exchange_func(a, b)
rval = f()
assert isinstance(rval, list)
assert len(rval) == 0
assert np.all(a.get_value() == 0.)
assert np.all(b.get_value() == 1.)
print("SUCCESS!")
Explanation: Exercise 5
This exercise makes you use shared variables. You must create them and update them by swapping the values of two shared variables.
End of explanation
%load solutions/05_shared_soln.py
Explanation: Solution
End of explanation
from theano import tensor as T
def grad_sum(x, y, z):
x: A theano variable
y: A theano variable
z: A theano expression involving x and y
Returns dz / dx + dz / dy
raise NotImplementedError("TODO: implement this function.")
# The following code will use your code and test it.
x = T.scalar()
y = T.scalar()
z = x + y
s = grad_sum(x, y, z)
assert s.eval({x: 0, y: 0}) == 2
print("SUCCESS!")
Explanation: Exercise 6
This exercise makes you use Theano's symbolic differentiation feature, grad.
End of explanation
%load solutions/06_grad_soln.py
Explanation: Solution
End of explanation |
12,539 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
TensorFlow Scan Examples
By Rob DiPietro – Version 0.32 – April 28, 2016.
<a href="https
Step1: This example shows how scan is used
Step3: <a id="generating-inputs-and-targets"></a>
Generating Inputs and Targets
First let's write a function for generating input, target sequences, one pair at a time. We'll limit ourselves to inputs with independent time steps, drawn from a standard normal distribution.
Each sequence is 2-D with time on the first axis and inputs or targets on the second. (This way it'd be easy to generalize to the case of multiple inputs/targets per time step.)
Step13: <a id="defining-the-rnn-model-from-scratch"></a>
Defining the RNN Model from Scratch
Next let's define the RNN model. The code is a bit verbose because it's meant to be self explanatory, but the pieces are simple
Step16: <a id="defining-an-optimizer"></a>
Defining an Optimizer
Next let's write an optimizer class. We'll use vanilla gradient descent after gradient "clipping," according to the method described by Pascanu, Mikolov, and Bengio.
The gradient-clipping method is simple and could instead be called gradient scaling
Step18: <a id="training"></a>
Training
Next let's define and run our training function. This is where we'll run the main optimization loop and export TensorBoard summaries.
Step19: Now we can train our model
Step21: After running tensorboard --logdir ./logdir and navigating to http://localhost:6006, we can view our loss summaries. | Python Code:
from __future__ import division, print_function
import tensorflow as tf
def fn(previous_output, current_input):
return previous_output + current_input
elems = tf.Variable([1.0, 2.0, 2.0, 2.0])
elems = tf.identity(elems)
initializer = tf.constant(0.0)
out = tf.scan(fn, elems, initializer=initializer)
with tf.Session() as sess:
sess.run(tf.initialize_all_variables())
print(sess.run(out))
Explanation: TensorFlow Scan Examples
By Rob DiPietro – Version 0.32 – April 28, 2016.
<a href="https://twitter.com/rsdipietro" class="twitter-follow-button" data-size="large" data-show-screen-name="false" data-show-count="false">Follow @rsdipietro</a>
Post or Jupyter Notebook?
This work is available both as a post and as a Jupyter notebook. If you see any mistakes or have any questions, please open a GitHub issue.
Contents
Overview
Preliminaries
Hard Coding the Cumulative Sum
Learning the Cumulative Sum
Generating Inputs and Targets
Defining the RNN Model from Scratch
Defining an Optimizer
Training
Testing Qualitatively
Ideas for Playing with the Code
Some Final Thoughts
Acknowledgements
About Me
<a id="overview"></a>
Overview
scan was recently made available in TensorFlow.
scan lets us write loops inside a computation graph, allowing backpropagation and all.
We could explicitly unroll the loops ourselves, creating new graph nodes for each loop
iteration, but then the number of iterations is fixed instead of dynamic, and graph
creation can be extremely slow.
Let's go over two examples. First, we'll create a simple cumulative-sum operation using
scan. For example, [1, 2, 2, 2] as input will produce [1, 3, 5, 7] as output. Second,
we'll build a toy RNN from scratch, and we'll have it learn the cumulative-sum operation
from example input, target sequences. For example, the RNN will learn to map [1, 2, 2, 2]
to [1, 3, 5, 7] (approximately).
<a id="hard-coding-the-cumulative-sum"></a>
Hard Coding the Cumulative Sum
Let's just start with code:
End of explanation
%reset -f
from __future__ import division, print_function
%matplotlib inline
from IPython.display import set_matplotlib_formats
set_matplotlib_formats('svg')
import matplotlib.pyplot as plt
import os
import shutil
import numpy as np
import tensorflow as tf
from tensorflow.python.ops import functional_ops
Explanation: This example shows how scan is used: it loops over the first dimension of elems, at each step applying fn, which takes in the previous step's output and the current step's input. The very first step's previous output is given by initializer:
- Iteration 0: fn(0.0, 1.0) == 1.0
- Iteration 1: fn(1.0, 2.0) == 3.0
- Iteration 2: fn(3.0, 2.0) == 5.0
- Iteration 3: fn(5.0, 2.0) == 7.0
And why the elems = tf.identity(elems) line? scan is new, and this is just a temporary workaround for a bug.
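If it helps to see the same computation without TensorFlow, here is a rough plain-Python analogue of what scan does in this example (a sketch for intuition only, not part of the TensorFlow graph):

```python
def scan_like(fn, elems, initializer):
    # Mimic tf.scan: feed the previous output and the current element into fn,
    # collecting every intermediate result.
    outputs = []
    prev = initializer
    for x in elems:
        prev = fn(prev, x)
        outputs.append(prev)
    return outputs

print(scan_like(lambda prev, cur: prev + cur, [1.0, 2.0, 2.0, 2.0], 0.0))
# [1.0, 3.0, 5.0, 7.0]
```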
<a id="learning-the-cumulative-sum"></a>
Learning the Cumulative Sum
Now a more complex example: we'll build a recurrent neural network and learn the cumulative-sum function from data.
End of explanation
def input_target_generator(min_duration=5, max_duration=50):
Generate toy input, target sequences.
Each input sequence has values that are drawn from the standard normal
distribution, and each target sequence is the corresponding cumulative sum.
Sequence durations are chosen at random using a discrete uniform
distribution over `[min_duration, max_duration]`.
Args:
min_duration: A positive integer. The minimum sequence duration.
max_duration: A positive integer. The maximum sequence duration.
Yields:
A tuple,
inputs: A 2-D float32 NumPy array with shape `[duration, 1]`.
targets: A 2-D float32 NumPy array with shape `[duration, 1]`.
while True:
duration = np.random.randint(min_duration, max_duration)
inputs = np.random.randn(duration).astype(np.float32)
targets = np.cumsum(inputs).astype(np.float32)
yield inputs.reshape(-1, 1), targets.reshape(-1, 1)
Explanation: <a id="generating-inputs-and-targets"></a>
Generating Inputs and Targets
First let's write a function for generating input, target sequences, one pair at a time. We'll limit ourselves to inputs with independent time steps, drawn from a standard normal distribution.
Each sequence is 2-D with time on the first axis and inputs or targets on the second. (This way it'd be easy to generalize to the case of multiple inputs/targets per time step.)
End of explanation
class Model(object):
def __init__(self, hidden_layer_size, input_size, target_size, init_scale=0.1):
Create a vanilla RNN.
Args:
hidden_layer_size: An integer. The number of hidden units.
input_size: An integer. The number of inputs per time step.
target_size: An integer. The number of targets per time step.
init_scale: A float. All weight matrices will be initialized using
a uniform distribution over [-init_scale, init_scale].
self.hidden_layer_size = hidden_layer_size
self.input_size = input_size
self.target_size = target_size
self.init_scale = init_scale
self._inputs = tf.placeholder(tf.float32, shape=[None, input_size],
name='inputs')
self._targets = tf.placeholder(tf.float32, shape=[None, target_size],
name='targets')
initializer = tf.random_uniform_initializer(-init_scale, init_scale)
with tf.variable_scope('model', initializer=initializer):
self._states, self._predictions = self._compute_predictions()
self._loss = self._compute_loss()
def _vanilla_rnn_step(self, h_prev, x):
Vanilla RNN step.
Args:
h_prev: A 1-D float32 Tensor with shape `[hidden_layer_size]`.
x: A 1-D float32 Tensor with shape `[input_size]`.
Returns:
The updated state `h`, with the same shape as `h_prev`.
h_prev = tf.reshape(h_prev, [1, self.hidden_layer_size])
x = tf.reshape(x, [1, self.input_size])
with tf.variable_scope('rnn_block'):
W_h = tf.get_variable(
'W_h', shape=[self.hidden_layer_size, self.hidden_layer_size])
W_x = tf.get_variable(
'W_x', shape=[self.input_size, self.hidden_layer_size])
b = tf.get_variable('b', shape=[self.hidden_layer_size],
initializer=tf.constant_initializer(0.0))
h = tf.tanh( tf.matmul(h_prev, W_h) + tf.matmul(x, W_x) + b )
h = tf.reshape(h, [self.hidden_layer_size], name='h')
return h
def _compute_predictions(self):
Compute vanilla-RNN states and predictions.
with tf.variable_scope('states'):
initial_state = tf.zeros([self.hidden_layer_size],
name='initial_state')
states = tf.scan(self._vanilla_rnn_step, self.inputs,
initializer=initial_state, name='states')
with tf.variable_scope('predictions'):
W_pred = tf.get_variable(
'W_pred', shape=[self.hidden_layer_size, self.target_size])
b_pred = tf.get_variable('b_pred', shape=[self.target_size],
initializer=tf.constant_initializer(0.0))
predictions = tf.add(tf.matmul(states, W_pred), b_pred, name='predictions')
return states, predictions
def _compute_loss(self):
Compute l2 loss between targets and predictions.
with tf.variable_scope('loss'):
loss = tf.reduce_mean((self.targets - self.predictions)**2, name='loss')
return loss
@property
def inputs(self):
A 2-D float32 placeholder with shape `[dynamic_duration, input_size]`.
return self._inputs
@property
def targets(self):
A 2-D float32 placeholder with shape `[dynamic_duration, target_size]`.
return self._targets
@property
def states(self):
A 2-D float32 Tensor with shape `[dynamic_duration, hidden_layer_size]`.
return self._states
@property
def predictions(self):
A 2-D float32 Tensor with shape `[dynamic_duration, target_size]`.
return self._predictions
@property
def loss(self):
A 0-D float32 Tensor.
return self._loss
Explanation: <a id="defining-the-rnn-model-from-scratch"></a>
Defining the RNN Model from Scratch
Next let's define the RNN model. The code is a bit verbose because it's meant to be self-explanatory, but the pieces are simple:
- The update for the vanilla RNN is $h_t = \tanh( W_h h_{t-1} + W_x x_t + b )$.
- _vanilla_rnn_step is the core of the vanilla RNN: it applies this update by taking in a previous hidden state along with a current input and producing a new hidden state. (The only difference below is that both sides of the equation are transposed, and each variable is replaced with its transpose.)
- _compute_predictions applies _vanilla_rnn_step to all time steps using scan, resulting in hidden states for each time step, and then applies a final linear layer to each state to yield final predictions.
- _compute_loss just computes the mean squared Euclidean distance between the ground-truth targets and our predictions.
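As a quick shape check of that update, here is a standalone NumPy sketch with made-up sizes (independent of the Model class defined in this cell):

```python
import numpy as np

hidden_layer_size, input_size = 4, 1
W_h = np.zeros((hidden_layer_size, hidden_layer_size))
W_x = np.zeros((input_size, hidden_layer_size))
b = np.zeros(hidden_layer_size)

h_prev = np.zeros(hidden_layer_size)
x = np.array([0.5])

# One vanilla-RNN step in the transposed form used by _vanilla_rnn_step
h = np.tanh(h_prev.dot(W_h) + x.dot(W_x) + b)
print(h.shape)  # (4,), same shape as h_prev, as expected
```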
End of explanation
class Optimizer(object):
def __init__(self, loss, initial_learning_rate, num_steps_per_decay,
decay_rate, max_global_norm=1.0):
Create a simple optimizer.
This optimizer clips gradients and uses vanilla stochastic gradient
descent with a learning rate that decays exponentially.
Args:
loss: A 0-D float32 Tensor.
initial_learning_rate: A float.
num_steps_per_decay: An integer.
decay_rate: A float. The factor applied to the learning rate
every `num_steps_per_decay` steps.
max_global_norm: A float. If the global gradient norm is less than
this, do nothing. Otherwise, rescale all gradients so that
the global norm becomes `max_global_norm`.
trainables = tf.trainable_variables()
grads = tf.gradients(loss, trainables)
grads, _ = tf.clip_by_global_norm(grads, clip_norm=max_global_norm)
grad_var_pairs = zip(grads, trainables)
global_step = tf.Variable(0, trainable=False, dtype=tf.int32)
learning_rate = tf.train.exponential_decay(
initial_learning_rate, global_step, num_steps_per_decay,
decay_rate, staircase=True)
optimizer = tf.train.GradientDescentOptimizer(learning_rate)
self._optimize_op = optimizer.apply_gradients(grad_var_pairs,
global_step=global_step)
@property
def optimize_op(self):
An Operation that takes one optimization step.
return self._optimize_op
Explanation: <a id="defining-an-optimizer"></a>
Defining an Optimizer
Next let's write an optimizer class. We'll use vanilla gradient descent after gradient "clipping," according to the method described by Pascanu, Mikolov, and Bengio.
The gradient-clipping method is simple and could instead be called gradient scaling: if the global norm is smaller than max_global_norm, do nothing. Otherwise, rescale all gradients so that the global norm becomes max_global_norm.
What is the global norm? It's just the norm over all gradients, as if they were concatenated together to form one global vector.
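A minimal NumPy sketch of that scaling rule (illustrative only; the Optimizer class in this cell relies on tf.clip_by_global_norm to do the same thing):

```python
import numpy as np

def clip_by_global_norm(grads, max_global_norm):
    # Global norm: the norm of all gradients, as if concatenated into one vector
    global_norm = np.sqrt(sum(np.sum(g ** 2) for g in grads))
    if global_norm <= max_global_norm:
        return grads
    scale = max_global_norm / global_norm
    return [g * scale for g in grads]

grads = [np.array([3.0, 4.0]), np.array([12.0])]   # global norm is 13
print(clip_by_global_norm(grads, 1.0))             # every gradient scaled by 1/13
```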
End of explanation
def train(sess, model, optimizer, generator, num_optimization_steps,
logdir='./logdir'):
Train.
Args:
sess: A Session.
model: A Model.
optimizer: An Optimizer.
generator: A generator that yields `(inputs, targets)` tuples, with
`inputs` and `targets` both having shape `[dynamic_duration, 1]`.
num_optimization_steps: An integer.
logdir: A string. The log directory.
if os.path.exists(logdir):
shutil.rmtree(logdir)
tf.scalar_summary('loss', model.loss)
ema = tf.train.ExponentialMovingAverage(decay=0.99)
update_loss_ema = ema.apply([model.loss])
loss_ema = ema.average(model.loss)
tf.scalar_summary('loss_ema', loss_ema)
summary_op = tf.merge_all_summaries()
summary_writer = tf.train.SummaryWriter(logdir=logdir, graph=sess.graph)
sess.run(tf.initialize_all_variables())
for step in xrange(num_optimization_steps):
inputs, targets = generator.next()
loss_ema_, summary, _, _ = sess.run(
[loss_ema, summary_op, optimizer.optimize_op, update_loss_ema],
{model.inputs: inputs, model.targets: targets})
summary_writer.add_summary(summary, global_step=step)
print('\rStep %d. Loss EMA: %.6f.' % (step+1, loss_ema_), end='')
Explanation: <a id="training"></a>
Training
Next let's define and run our training function. This is where we'll run the main optimization loop and export TensorBoard summaries.
End of explanation
generator = input_target_generator()
model = Model(hidden_layer_size=256, input_size=1, target_size=1, init_scale=0.1)
optimizer = Optimizer(model.loss, initial_learning_rate=1e-2, num_steps_per_decay=15000,
decay_rate=0.1, max_global_norm=1.0)
sess = tf.Session()
train(sess, model, optimizer, generator, num_optimization_steps=45000)
Explanation: Now we can train our model:
End of explanation
def test_qualitatively(sess, model, generator, num_examples=5, figsize=(10, 3)):
Test qualitatively.
Args:
sess: A Session.
model: A Model.
generator: A generator that yields `(inputs, targets)` tuples, with
`inputs` and `targets` both having shape `[dynamic_duration, 1]`.
num_examples: An integer. The number of examples to plot.
figsize: A tuple `(width, height)`, the size of each example's figure.
for i in xrange(num_examples):
inputs, targets = generator.next()
predictions = sess.run(model.predictions, {model.inputs: inputs})
fig, ax = plt.subplots(nrows=2, sharex=True, figsize=figsize)
ax[0].plot(inputs.flatten(), label='inputs')
ax[0].legend()
ax[1].plot(targets.flatten(), label='targets')
ax[1].plot(predictions.flatten(), 'o', label='predictions')
ax[1].legend()
test_qualitatively(sess, model, generator, figsize=(8, 2))
Explanation: After running tensorboard --logdir ./logdir and navigating to http://localhost:6006, we can view our loss summaries. Here the exponential moving average is especially helpful because our raw losses correspond to individual sequences (and are therefore very noisy estimates).
<a id="testing-qualitatively"></a>
Testing Qualitatively
Finally let's write a function to test the trained RNN qualitatively: we'll plot the original inputs (random real numbers), the ground-truth target (the cumulative sum), and our trained RNN's predictions (hopefully matching the cumulative sum).
End of explanation |
12,540 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Language Translation
In this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French.
Get the Data
Since translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus.
Step3: Explore the Data
Play around with view_sentence_range to view different parts of the data.
Step6: Implement Preprocessing Function
Text to Word Ids
As you did with other RNNs, you must turn the text into a number so the computer can understand it. In the function text_to_ids(), you'll turn source_text and target_text from words to ids. However, you need to add the <EOS> word id at the end of each sentence from target_text. This will help the neural network predict when the sentence should end.
You can get the <EOS> word id by doing target_vocab_to_int['<EOS>'].
Step8: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
Step10: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
Step12: Check the Version of TensorFlow and Access to GPU
This will check to make sure you have the correct version of TensorFlow and access to a GPU
Step15: Build the Neural Network
You'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below
Step18: Process Decoding Input
Implement process_decoding_input using TensorFlow to remove the last word id from each batch in target_data and concatenate the GO ID to the beginning of each batch.
Step21: Encoding
Implement encoding_layer() to create an Encoder RNN layer using tf.nn.dynamic_rnn().
Step24: Decoding - Training
Create training logits using tf.contrib.seq2seq.simple_decoder_fn_train() and tf.contrib.seq2seq.dynamic_rnn_decoder(). Apply the output_fn to the tf.contrib.seq2seq.dynamic_rnn_decoder() outputs.
Step27: Decoding - Inference
Create inference logits using tf.contrib.seq2seq.simple_decoder_fn_inference() and tf.contrib.seq2seq.dynamic_rnn_decoder().
Step30: Build the Decoding Layer
Implement decoding_layer() to create a Decoder RNN layer.
Create RNN cell for decoding using rnn_size and num_layers.
Create the output function using a lambda to transform its input, logits, to class logits.
Use your decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob) function to get the training logits.
Use your decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length, vocab_size, decoding_scope, output_fn, keep_prob) function to get the inference logits.
Note: You'll need to use tf.variable_scope to share variables between training and inference.
Step33: Build the Neural Network
Apply the functions you implemented above to
Step34: Neural Network Training
Hyperparameters
Tune the following parameters
Step36: Build the Graph
Build the graph using the neural network you implemented.
Step39: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
Step41: Save Parameters
Save the batch_size and save_path parameters for inference.
Step43: Checkpoint
Step46: Sentence to Sequence
To feed a sentence into the model for translation, you first need to preprocess it. Implement the function sentence_to_seq() to preprocess new sentences.
Convert the sentence to lowercase
Convert words into ids using vocab_to_int
Convert words not in the vocabulary to the <UNK> word id.
Step48: Translate
This will translate translate_sentence from English to French. | Python Code:
DON'T MODIFY ANYTHING IN THIS CELL
import helper
import problem_unittests as tests
source_path = 'data/small_vocab_en'
target_path = 'data/small_vocab_fr'
source_text = helper.load_data(source_path)
target_text = helper.load_data(target_path)
Explanation: Language Translation
In this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French.
Get the Data
Since translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus.
End of explanation
view_sentence_range = (0, 10)
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()})))
sentences = source_text.split('\n')
word_counts = [len(sentence.split()) for sentence in sentences]
print('Number of sentences: {}'.format(len(sentences)))
print('Average number of words in a sentence: {}'.format(np.average(word_counts)))
print()
print('English sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(source_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
print()
print('French sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(target_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
Explanation: Explore the Data
Play around with view_sentence_range to view different parts of the data.
End of explanation
def get_id_text(input, vocab_to_int):
return [[vocab_to_int[word] for word in sentence.split()] for sentence in input]
def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int):
Convert source and target text to proper word ids
:param source_text: String that contains all the source text.
:param target_text: String that contains all the target text.
:param source_vocab_to_int: Dictionary to go from the source words to an id
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: A tuple of lists (source_id_text, target_id_text)
# TODO: Implement Function
source_sentences = [sentence for sentence in source_text.split('\n')]
target_sentences = [sentence + ' <EOS>' for sentence in target_text.split('\n')]
source_id_text = get_id_text(source_sentences, source_vocab_to_int)
target_id_text = get_id_text(target_sentences, target_vocab_to_int)
return source_id_text, target_id_text
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_text_to_ids(text_to_ids)
Explanation: Implement Preprocessing Function
Text to Word Ids
As you did with other RNNs, you must turn the text into a number so the computer can understand it. In the function text_to_ids(), you'll turn source_text and target_text from words to ids. However, you need to add the <EOS> word id at the end of each sentence from target_text. This will help the neural network predict when the sentence should end.
You can get the <EOS> word id by doing:
```python
target_vocab_to_int['<EOS>']
```
You can get other word ids using source_vocab_to_int and target_vocab_to_int.
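For instance, with a made-up toy vocabulary (hypothetical ids, shown only to illustrate the expected shape of the result):

```python
# Hypothetical target vocabulary for illustration; real ids come from target_vocab_to_int
toy_vocab_to_int = {'<EOS>': 1, 'il': 20, 'fait': 21, 'beau': 22}

sentence = 'il fait beau'
target_ids = [toy_vocab_to_int[word] for word in sentence.split()] + [toy_vocab_to_int['<EOS>']]
print(target_ids)  # [20, 21, 22, 1]: the sentence ids followed by the <EOS> id
```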
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
helper.preprocess_and_save_data(source_path, target_path, text_to_ids)
Explanation: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
import helper
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
Explanation: Check the Version of TensorFlow and Access to GPU
This will check to make sure you have the correct version of TensorFlow and access to a GPU
End of explanation
def model_inputs():
Create TF Placeholders for input, targets, and learning rate.
:return: Tuple (input, targets, learning rate, keep probability)
# TODO: Implement Function
input = tf.placeholder(tf.int32, (None, None), name='input')
targets = tf.placeholder(tf.int32, (None, None), name='targets')
learning_rate = tf.placeholder(tf.float32, name='learning_rate')
keep_probability = tf.placeholder(tf.float32, name='keep_prob')
return input, targets, learning_rate, keep_probability
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_model_inputs(model_inputs)
Explanation: Build the Neural Network
You'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below:
- model_inputs
- process_decoding_input
- encoding_layer
- decoding_layer_train
- decoding_layer_infer
- decoding_layer
- seq2seq_model
Input
Implement the model_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:
Input text placeholder named "input" using the TF Placeholder name parameter with rank 2.
Targets placeholder with rank 2.
Learning rate placeholder with rank 0.
Keep probability placeholder named "keep_prob" using the TF Placeholder name parameter with rank 0.
Return the placeholders in the following tuple (Input, Targets, Learning Rate, Keep Probability)
End of explanation
def process_decoding_input(target_data, target_vocab_to_int, batch_size):
Preprocess target data for decoding
:param target_data: Target Placeholder
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param batch_size: Batch Size
:return: Preprocessed target data
# TODO: Implement Function
go = target_vocab_to_int['<GO>']
ending = tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1, 1])
return tf.concat([tf.fill([batch_size, 1], go), ending], 1)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_process_decoding_input(process_decoding_input)
Explanation: Process Decoding Input
Implement process_decoding_input using TensorFlow to remove the last word id from each batch in target_data and concatenate the GO ID to the beginning of each batch.
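Concretely, the transformation looks like this on a toy batch (plain Python with a made-up <GO> id of 0, just to show what the TensorFlow ops should produce):

```python
GO = 0  # hypothetical <GO> id, for illustration only
batch = [[11, 12, 13], [21, 22, 23]]

# Drop the last word id of each sequence and prepend <GO>
decoder_input = [[GO] + sequence[:-1] for sequence in batch]
print(decoder_input)  # [[0, 11, 12], [0, 21, 22]]
```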
End of explanation
def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob):
Create encoding layer
:param rnn_inputs: Inputs for the RNN
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param keep_prob: Dropout keep probability
:return: RNN state
# TODO: Implement Function
lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size)
dropout = tf.contrib.rnn.DropoutWrapper(lstm, keep_prob)
cell = tf.contrib.rnn.MultiRNNCell([dropout] * num_layers)  # stack dropout-wrapped cells so keep_prob is actually applied
_, rnn_state = tf.nn.dynamic_rnn(cell, rnn_inputs, dtype=tf.float32)
return rnn_state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_encoding_layer(encoding_layer)
Explanation: Encoding
Implement encoding_layer() to create an Encoder RNN layer using tf.nn.dynamic_rnn().
End of explanation
def decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope,
output_fn, keep_prob):
Create a decoding layer for training
:param encoder_state: Encoder State
:param dec_cell: Decoder RNN Cell
:param dec_embed_input: Decoder embedded input
:param sequence_length: Sequence Length
:param decoding_scope: TenorFlow Variable Scope for decoding
:param output_fn: Function to apply the output layer
:param keep_prob: Dropout keep probability
:return: Train Logits
# TODO: Implement Function
dec_fn_train = tf.contrib.seq2seq.simple_decoder_fn_train(encoder_state)
output_logits, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder(
dec_cell,
dec_fn_train,
dec_embed_input,
sequence_length,
scope=decoding_scope
)
train_logits = output_fn(output_logits)
return train_logits
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer_train(decoding_layer_train)
Explanation: Decoding - Training
Create training logits using tf.contrib.seq2seq.simple_decoder_fn_train() and tf.contrib.seq2seq.dynamic_rnn_decoder(). Apply the output_fn to the tf.contrib.seq2seq.dynamic_rnn_decoder() outputs.
End of explanation
def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id,
maximum_length, vocab_size, decoding_scope, output_fn, keep_prob):
Create a decoding layer for inference
:param encoder_state: Encoder state
:param dec_cell: Decoder RNN Cell
:param dec_embeddings: Decoder embeddings
:param start_of_sequence_id: GO ID
:param end_of_sequence_id: EOS Id
:param maximum_length: Maximum length of the decoded sequence
:param vocab_size: Size of vocabulary
:param decoding_scope: TensorFlow Variable Scope for decoding
:param output_fn: Function to apply the output layer
:param keep_prob: Dropout keep probability
:return: Inference Logits
# TODO: Implement Function
infer_decoder_fn = tf.contrib.seq2seq.simple_decoder_fn_inference(
output_fn,
encoder_state,
dec_embeddings,
start_of_sequence_id,
end_of_sequence_id,
maximum_length,
vocab_size
)
infer_logits, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder(dec_cell, decoder_fn=infer_decoder_fn, scope=decoding_scope)
return infer_logits
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer_infer(decoding_layer_infer)
Explanation: Decoding - Inference
Create inference logits using tf.contrib.seq2seq.simple_decoder_fn_inference() and tf.contrib.seq2seq.dynamic_rnn_decoder().
End of explanation
def decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size,
num_layers, target_vocab_to_int, keep_prob):
Create decoding layer
:param dec_embed_input: Decoder embedded input
:param dec_embeddings: Decoder embeddings
:param encoder_state: The encoded state
:param vocab_size: Size of vocabulary
:param sequence_length: Sequence Length
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param keep_prob: Dropout keep probability
:return: Tuple of (Training Logits, Inference Logits)
# TODO: Implement Function
with tf.variable_scope('decoding') as decoding_scope:
dec_cell = tf.contrib.rnn.MultiRNNCell([tf.contrib.rnn.BasicLSTMCell(rnn_size)] * num_layers)
dec_cell = tf.contrib.rnn.DropoutWrapper(dec_cell, input_keep_prob=keep_prob, output_keep_prob=keep_prob)
output_fn = lambda x: tf.contrib.layers.fully_connected(x, vocab_size, None, scope=decoding_scope)
with tf.variable_scope('decoding') as decoding_scope:
train_logits = decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob)
with tf.variable_scope('decoding', reuse=True) as decoding_scope:
infer_logits = decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, target_vocab_to_int['<GO>'], target_vocab_to_int['<EOS>'], sequence_length - 1, vocab_size, decoding_scope, output_fn, keep_prob)
return train_logits, infer_logits
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer(decoding_layer)
Explanation: Build the Decoding Layer
Implement decoding_layer() to create a Decoder RNN layer.
Create RNN cell for decoding using rnn_size and num_layers.
Create the output function using a lambda to transform its input, logits, to class logits.
Use your decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob) function to get the training logits.
Use your decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length, vocab_size, decoding_scope, output_fn, keep_prob) function to get the inference logits.
Note: You'll need to use tf.variable_scope to share variables between training and inference.
End of explanation
def seq2seq_model(input_data, target_data, keep_prob, batch_size, sequence_length, source_vocab_size, target_vocab_size,
enc_embedding_size, dec_embedding_size, rnn_size, num_layers, target_vocab_to_int):
Build the Sequence-to-Sequence part of the neural network
:param input_data: Input placeholder
:param target_data: Target placeholder
:param keep_prob: Dropout keep probability placeholder
:param batch_size: Batch Size
:param sequence_length: Sequence Length
:param source_vocab_size: Source vocabulary size
:param target_vocab_size: Target vocabulary size
:param enc_embedding_size: Encoder embedding size
:param dec_embedding_size: Decoder embedding size
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: Tuple of (Training Logits, Inference Logits)
# TODO: Implement Function
enc_inputs = tf.contrib.layers.embed_sequence(input_data, source_vocab_size, enc_embedding_size)
enc_state = encoding_layer(enc_inputs, rnn_size, num_layers, keep_prob)
dec_inputs = process_decoding_input(target_data, target_vocab_to_int, batch_size)
dec_embeddings = tf.Variable(tf.truncated_normal([target_vocab_size, dec_embedding_size], stddev=0.01))
dec_embed_inputs = tf.nn.embedding_lookup(dec_embeddings, dec_inputs)
train_logits, infer_logits = decoding_layer(
dec_embed_inputs,
dec_embeddings,
enc_state,
target_vocab_size,
sequence_length,
rnn_size,
num_layers,
target_vocab_to_int,
keep_prob
)
return train_logits, infer_logits
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_seq2seq_model(seq2seq_model)
Explanation: Build the Neural Network
Apply the functions you implemented above to:
Apply embedding to the input data for the encoder.
Encode the input using your encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob).
Process target data using your process_decoding_input(target_data, target_vocab_to_int, batch_size) function.
Apply embedding to the target data for the decoder.
Decode the encoded input using your decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob).
End of explanation
# Number of Epochs
epochs = 7
# Batch Size
batch_size = 256
# RNN Size
rnn_size = 512
# Number of Layers
num_layers = 2
# Embedding Size
encoding_embedding_size = 10
decoding_embedding_size = 10
# Learning Rate
learning_rate = 0.001
# Dropout Keep Probability
keep_probability = 0.7
Explanation: Neural Network Training
Hyperparameters
Tune the following parameters:
Set epochs to the number of epochs.
Set batch_size to the batch size.
Set rnn_size to the size of the RNNs.
Set num_layers to the number of layers.
Set encoding_embedding_size to the size of the embedding for the encoder.
Set decoding_embedding_size to the size of the embedding for the decoder.
Set learning_rate to the learning rate.
Set keep_probability to the Dropout keep probability
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
save_path = 'checkpoints/dev'
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
max_target_sentence_length = max([len(sentence) for sentence in source_int_text])
train_graph = tf.Graph()
with train_graph.as_default():
input_data, targets, lr, keep_prob = model_inputs()
sequence_length = tf.placeholder_with_default(max_target_sentence_length, None, name='sequence_length')
input_shape = tf.shape(input_data)
train_logits, inference_logits = seq2seq_model(
tf.reverse(input_data, [-1]), targets, keep_prob, batch_size, sequence_length, len(source_vocab_to_int), len(target_vocab_to_int),
encoding_embedding_size, decoding_embedding_size, rnn_size, num_layers, target_vocab_to_int)
tf.identity(inference_logits, 'logits')
with tf.name_scope("optimization"):
# Loss function
cost = tf.contrib.seq2seq.sequence_loss(
train_logits,
targets,
tf.ones([input_shape[0], sequence_length]))
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
Explanation: Build the Graph
Build the graph using the neural network you implemented.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import time
def get_accuracy(target, logits):
Calculate accuracy
max_seq = max(target.shape[1], logits.shape[1])
if max_seq - target.shape[1]:
target = np.pad(
target,
[(0,0),(0,max_seq - target.shape[1])],
'constant')
if max_seq - logits.shape[1]:
logits = np.pad(
logits,
[(0,0),(0,max_seq - logits.shape[1]), (0,0)],
'constant')
return np.mean(np.equal(target, np.argmax(logits, 2)))
train_source = source_int_text[batch_size:]
train_target = target_int_text[batch_size:]
valid_source = helper.pad_sentence_batch(source_int_text[:batch_size])
valid_target = helper.pad_sentence_batch(target_int_text[:batch_size])
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(epochs):
for batch_i, (source_batch, target_batch) in enumerate(
helper.batch_data(train_source, train_target, batch_size)):
start_time = time.time()
_, loss = sess.run(
[train_op, cost],
{input_data: source_batch,
targets: target_batch,
lr: learning_rate,
sequence_length: target_batch.shape[1],
keep_prob: keep_probability})
batch_train_logits = sess.run(
inference_logits,
{input_data: source_batch, keep_prob: 1.0})
batch_valid_logits = sess.run(
inference_logits,
{input_data: valid_source, keep_prob: 1.0})
train_acc = get_accuracy(target_batch, batch_train_logits)
valid_acc = get_accuracy(np.array(valid_target), batch_valid_logits)
end_time = time.time()
print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.3f}, Validation Accuracy: {:>6.3f}, Loss: {:>6.3f}'
.format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_path)
print('Model Trained and Saved')
Explanation: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
# Save parameters for checkpoint
helper.save_params(save_path)
Explanation: Save Parameters
Save the batch_size and save_path parameters for inference.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess()
load_path = helper.load_params()
Explanation: Checkpoint
End of explanation
def sentence_to_seq(sentence, vocab_to_int):
Convert a sentence to a sequence of ids
:param sentence: String
:param vocab_to_int: Dictionary to go from the words to an id
:return: List of word ids
# TODO: Implement Function
return [vocab_to_int.get(word, vocab_to_int['<UNK>']) for word in sentence.lower().split()]
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_sentence_to_seq(sentence_to_seq)
Explanation: Sentence to Sequence
To feed a sentence into the model for translation, you first need to preprocess it. Implement the function sentence_to_seq() to preprocess new sentences.
Convert the sentence to lowercase
Convert words into ids using vocab_to_int
Convert words not in the vocabulary, to the <UNK> word id.
End of explanation
translate_sentence = 'he saw a old yellow truck .'
DON'T MODIFY ANYTHING IN THIS CELL
translate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int)
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_path + '.meta')
loader.restore(sess, load_path)
input_data = loaded_graph.get_tensor_by_name('input:0')
logits = loaded_graph.get_tensor_by_name('logits:0')
keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
translate_logits = sess.run(logits, {input_data: [translate_sentence], keep_prob: 1.0})[0]
print('Input')
print(' Word Ids: {}'.format([i for i in translate_sentence]))
print(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence]))
print('\nPrediction')
print(' Word Ids: {}'.format([i for i in np.argmax(translate_logits, 1)]))
print(' French Words: {}'.format([target_int_to_vocab[i] for i in np.argmax(translate_logits, 1)]))
Explanation: Translate
This will translate translate_sentence from English to French.
End of explanation |
12,541 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Get Data
Step1: Basic Heat map
Step2: Hide tick_labels and color axis using 'axes_options'
Step3: Non Uniform Heat map
Step4: Alignment of the data with respect to the grid
For a N-by-N matrix, N+1 points along the row or the column are assumed to be end points.
Step5: By default, for N points along any dimension, data aligns to the start of the rectangles in the grid.
The grid extends infinitely in the other direction. By default, the grid extends infinitely
towards the bottom and the right.
Step6: By changing the row_align and column_align properties, the grid can extend in the opposite direction
Step7: For N+1 points in any direction, the grid extends infinitely in both directions
Step8: Changing opacity and stroke
Step9: Selections on the grid map
Selection on the GridHeatMap works similar to excel. Clicking on a cell selects the cell, and deselects the previous selection. Using the Ctrl key allows multiple cells to be selected, while the Shift key selects the range from the last cell in the selection to the current cell.
Step10: The selected trait of a GridHeatMap contains a list of lists, with each sub-list containing the row and column index of a selected cell.
Step11: Registering on_element_click event handler | Python Code:
np.random.seed(0)
data = np.random.randn(10, 10)
Explanation: Get Data
End of explanation
from ipywidgets import *
fig = plt.figure(padding_y=0.0)
grid_map = plt.gridheatmap(data)
fig
grid_map.display_format = ".2f"
grid_map.font_style = {"font-size": "16px", "fill": "blue", "font-weight": "bold"}
Explanation: Basic Heat map
End of explanation
axes_options = {
"column": {"visible": False},
"row": {"visible": False},
"color": {"visible": False},
}
fig = plt.figure(padding_y=0.0)
grid_map = plt.gridheatmap(data, axes_options=axes_options)
fig
Explanation: Hide tick_labels and color axis using 'axes_options'
End of explanation
fig = plt.figure(padding_y=0.0)
plt.scales(scales={"x": LinearScale(), "y": LinearScale(reverse=True)})
## The data along the rows is not uniform. Hence the 5th row(from top) of the map
## is twice the height of the remaining rows.
row_data = np.arange(10)
row_data[5:] = np.arange(6, 11)
column_data = np.arange(10, 20)
grid_map = plt.gridheatmap(data, row=row_data, column=column_data)
fig
print(row_data.shape)
print(column_data.shape)
print(data.shape)
Explanation: Non Uniform Heat map
End of explanation
fig = plt.figure(padding_y=0.0)
plt.scales(scales={"x": LinearScale(), "y": LinearScale(reverse=True)})
row_data = np.arange(11)
column_data = np.arange(10, 21)
grid_map = plt.gridheatmap(data, row=row_data, column=column_data)
fig
Explanation: Alignment of the data with respect to the grid
For a N-by-N matrix, N+1 points along the row or the column are assumed to be end points.
End of explanation
fig = plt.figure(padding_y=0.0)
plt.scales(scales={"x": LinearScale(), "y": LinearScale(reverse=True, max=15)})
row_data = np.arange(10)
column_data = np.arange(10, 20)
grid_map = plt.gridheatmap(data, row=row_data, column=column_data)
fig
Explanation: By default, for N points along any dimension, data aligns to the start of the rectangles in the grid.
The grid extends infinitely in the other direction. By default, the grid extends infinitely
towards the bottom and the right.
End of explanation
fig = plt.figure(padding_y=0.0)
plt.scales(scales={"x": LinearScale(), "y": LinearScale(reverse=True, min=-5, max=15)})
row_data = np.arange(10)
column_data = np.arange(10, 20)
grid_map = plt.gridheatmap(data, row=row_data, column=column_data, row_align="end")
fig
Explanation: By changing the row_align and column_align properties, the grid can extend in the opposite direction
End of explanation
fig = plt.figure(padding_y=0.0)
plt.scales(scales={"x": LinearScale(), "y": LinearScale(reverse=True, min=-5, max=15)})
row_data = np.arange(9)
column_data = np.arange(10, 20)
grid_map = plt.gridheatmap(data, row=row_data, column=column_data, row_align="end")
fig
Explanation: For N+1 points in any direction, the grid extends infinitely in both directions
End of explanation
fig = plt.figure(padding_y=0.0)
grid_map = plt.gridheatmap(data, opacity=0.3, stroke="white", axes_options=axes_options)
fig
Explanation: Changing opacity and stroke
End of explanation
data = np.random.randn(10, 10)
fig = plt.figure(padding_y=0.0)
grid_map = plt.gridheatmap(
data,
interactions={"click": "select"},
selected_style={"stroke": "blue", "stroke-width": 3},
axes_options=axes_options,
)
fig
Explanation: Selections on the grid map
Selection on the GridHeatMap works similarly to Excel. Clicking on a cell selects the cell, and deselects the previous selection. Using the Ctrl key allows multiple cells to be selected, while the Shift key selects the range from the last cell in the selection to the current cell.
End of explanation
grid_map.selected
Explanation: The selected trait of a GridHeatMap contains a list of lists, with each sub-list containing the row and column index of a selected cell.
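A small sketch (assuming the grid_map and data objects defined above) that maps each selected [row, column] pair back to the underlying value:
# read the current selection and look up the corresponding data values
if grid_map.selected is not None:
    for row, col in grid_map.selected:
        print('cell ({}, {}) -> {:.3f}'.format(row, col, data[row, col]))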
End of explanation
import numpy as np
from IPython.display import display
np.random.seed(0)
data = np.random.randn(10, 10)
figure = plt.figure(padding_y=0.0)
grid_map = plt.gridheatmap(
data,
interactions={"click": "select"},
selected_style={"stroke": "blue", "stroke-width": 3},
)
from ipywidgets import Output
out = Output()
@out.capture()
def print_event(self, target):
print(target)
# test
print_event(1, "test output")
grid_map.on_element_click(print_event)
display(figure)
display(out)
Explanation: Registering on_element_click event handler
End of explanation |
12,542 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Linear regression
Linear regression in Python can be done in different ways. From coding it yourself to using a function from a statistics module.
Here we will do both.
Coding with numpy
From the Wikipedia, we see that linear regression can be expressed as
Step3: We could also implement it with the numpy covariance function. The diagonal terms represent the variance.
Step5: Coding as a least square problem
The previous methods only works for single variables. We could generalize it if we code it as a least square problem
Step6: The simple ways
numpy
As usual, for tasks as common as a linear regression, there are already implemented solutions in several packages. In numpy, we can use polyfit, which can fit polynomials of degree $N$.
Step7: scipy
scipy has a statistics module that returns the fit and the correlation coefficient
Step8: scikit-learn
The most powerful module for doing data analysis, and Machine Learning is scikit-learn. There is a good documentation on linear models
Step9: Efficiency
As an exercice test the speed of these implementation for a larger dataset. | Python Code:
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
x = np.arange(10.)
y = 5*x+3
np.random.seed(3)
y+= np.random.normal(scale=10,size=x.size)
plt.scatter(x,y);
def lin_reg(x,y):
Perform a linear regression of x vs y.
x, y are 1 dimensional numpy arrays
returns alpha and beta for the model y = alpha + beta*x
beta = np.mean(x*y)-np.mean(x)*np.mean(y)
#finish...
lin_reg(x,y)
Explanation: Linear regression
Linear regression in Python can be done in different ways. From coding it yourself to using a function from a statistics module.
Here we will do both.
Coding with numpy
From Wikipedia, we see that linear regression can be expressed as:
$$
y = \alpha + \beta x
$$
where:
$$
\beta = \frac{\overline{xy} -\bar x \bar y}{\overline{x^2} - \bar{x}^2}=\frac{\mathrm{Cov}[x,y]}{\mathrm{Var}[x]}
$$
and $\alpha=\overline y - \beta \bar x$
We first import the basic modules and generate some data with noise.
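One possible completion of the lin_reg exercise above, shown only as a sketch (it may differ from the intended solution), following the mean-based formulas directly:
import numpy as np

def lin_reg_sketch(x, y):
    # beta = (E[xy] - E[x]E[y]) / (E[x^2] - E[x]^2), alpha = E[y] - beta*E[x]
    beta = (np.mean(x * y) - np.mean(x) * np.mean(y)) / (np.mean(x ** 2) - np.mean(x) ** 2)
    alpha = np.mean(y) - beta * np.mean(x)
    return alpha, beta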
End of explanation
def lin_reg2(x,y):
Perform a linear regression of x vs y. Uses covariances.
x, y are 1 dimensional numpy arrays
returns alpha and beta for the model y = alpha + beta*x
c = np.cov(x,y)
#finish...
lin_reg2(x,y)
Explanation: We could also implement it with the numpy covariance function. The diagonal terms represent the variance.
End of explanation
def lin_reg3(x,y):
Perform a linear regression of x vs y. Uses least squares.
x, y are 1 dimensional numpy arrays
returns alpha and beta for the model y = alpha + beta*x
#finish...
lin_reg3(x,y)
Explanation: Coding as a least square problem
The previous methods only work for a single variable. We could generalize it if we code it as a least square problem:
$$
\bf y = \bf A \boldsymbol \beta
$$
Remark that $\bf A$ is $\bf X$ with an extra column to represent independent term, previously called $\alpha$, that now corresponds to $\beta_{N+1}$.
$$
\bf A = \left[\bf X , \bf 1 \right]
$$
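A sketch of this least-squares formulation with numpy (assuming x and y as generated above): build A = [x, 1] and solve y = A beta with lstsq.
import numpy as np

def lin_reg_lstsq_sketch(x, y):
    A = np.column_stack([x, np.ones_like(x)])          # [X, 1]
    coef, residuals, rank, sv = np.linalg.lstsq(A, y, rcond=None)
    beta, alpha = coef                                  # slope, intercept
    return alpha, beta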
End of explanation
#finish...
Explanation: The simple ways
numpy
As usual, for tasks as common as a linear regression, there are already implemented solutions in several packages. In numpy, we can use polyfit, which can fit polynomials of degree $N$.
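For example (a sketch, assuming the x and y arrays defined earlier and numpy imported as np), a degree-1 fit returns the slope and intercept:
beta_hat, alpha_hat = np.polyfit(x, y, 1)   # coefficients come highest degree first
print(alpha_hat, beta_hat)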
End of explanation
import scipy.stats as stats
#finish
Explanation: scipy
scipy has a statistics module that returns the fit and the correlation coefficient:
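A sketch with scipy.stats.linregress (assuming x and y from above); it returns the slope, the intercept and the correlation coefficient r, among other values:
import scipy.stats as stats

slope, intercept, r_value, p_value, std_err = stats.linregress(x, y)
print(intercept, slope, r_value ** 2)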
End of explanation
from sklearn import linear_model
#Finish
Explanation: scikit-learn
One of the most powerful modules for data analysis and machine learning is scikit-learn. There is good documentation on linear models
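A sketch with scikit-learn's LinearRegression (assuming x and y from above); sklearn expects a 2-D feature matrix, hence the reshape:
from sklearn import linear_model

reg = linear_model.LinearRegression()
reg.fit(x.reshape(-1, 1), y)
print(reg.intercept_, reg.coef_[0])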
End of explanation
x = np.arange(10.)
y = 5*x+3
np.random.seed(3)
y+= np.random.normal(scale=10,size=x.size)
plt.scatter(x,y);
Explanation: Efficiency
As an exercice test the speed of these implementation for a larger dataset.
End of explanation |
12,543 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The significance of low and high harmonic components
Rectangular (square) waveform
The amplitudes of the individual harmonic components of a rectangular waveform can be expressed by the relation
Step1: Sum of all harmonic components
Step2: Lower harmonic components -- the basic shape
The lower harmonic components determine the basic shape. If we keep the lower harmonic components
and remove the higher harmonic components,
the shape will not be exact, but it will still be recognizable.
The fewer harmonic components we include, the less accurate the shape, but its basic features remain preserved.
Also notice the steepness of the edges. The more of the higher harmonic components are kept, the steeper the edges.
Step3: Higher harmonic components -- fast changes
The higher harmonic components represent fast changes.
The previous figures show that the more of the higher harmonic components are kept, the steeper the edges.
If we filter out the lower harmonic components and keep only the higher ones, the original shape can no longer be
recognized, but exactly the fast changes remain -- a ringing appears at the location of the original steep edge.
Um=1
DCL=0.25
f=linspace(0,1000,1001)
U=2.*Um*DCL*sinc(f*DCL)
U[0]=U[0]/2.
figure(figsize=(10,7))
minorticks_on()
xlabel(r'$\rightarrow$ \\f [Hz]',fontsize=16, x=0.9 )
ylabel(r'U [V] $\uparrow$',fontsize=16, y=0.9, rotation=0)
title(u"Amplitudové frekvenční spektrum -- obdélník DCL=25\%)")
grid(True, 'major', linewidth=1)
grid(True, 'minor', linewidth=0.5)
# Amplitude spectrum
stem(f[0:40],abs(U[0:40]),'b-','ro')
# and the envelope function
x=linspace(0,40,1000)
plot(x,abs(2*Um*DCL*sinc(x*DCL)),':k')
grid(1)
Explanation: The significance of low and high harmonic components
Rectangular (square) waveform
The amplitudes of the individual harmonic components of a rectangular waveform can be expressed by the relation:
$$U_n = 2U_{max} \mathrm{DCL}\cdot \mathrm{sinc}(n \pi \mathrm{DCL})$$
where $\mathrm{DCL}$ is the duty cycle and $\mathrm{sinc}$ is the so-called
cardinal sine,
$\mathrm{sinc}(x) = \frac{\sin(x)}{x}$.
The amplitude spectrum of a rectangular voltage with $U_{max}=1 V$, $\mathrm{DCL}=25\%$, $f=1 Hz$ then looks, for example, like this:
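A quick numeric check of the formula (a sketch, assuming $U_{max}=1$ and DCL=0.25 as in the code below; numpy's sinc(x) already computes sin(pi x)/(pi x)):
from numpy import sinc

Um, DCL = 1.0, 0.25
for n in range(1, 6):
    print(n, 2 * Um * DCL * sinc(n * DCL))   # amplitude of the n-th harmonic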
End of explanation
############################################
def soucet(U,titulek,f1=1):
T1=1./f1
t = linspace(0,3*T1,1000)
u = zeros(1000)
for f,A in enumerate(U):
u+=A*cos(2*pi*f*f1*t)
figure(figsize=(10,7))
subplot(211)
title(titulek)
xlim(0,60)
stem(arange(len(U)),abs(U),'b-','r.')
xlabel(r'f [Hz] $\rightarrow$ ',fontsize=16, x=0.9 )
ylabel(r'U [V] $\uparrow$',fontsize=16, y=0.9, rotation=0)
grid(True, 'major', linewidth=1)
grid(True, 'minor', linewidth=0.5)
subplot(212)
minorticks_on()
xlabel(r'$\rightarrow$ \\t [s]',fontsize=16, x=0.9 )
ylabel(r'u [V] $\uparrow$',fontsize=16, y=0.9, rotation=0)
grid(True, 'major', linewidth=1)
grid(True, 'minor', linewidth=0.5)
plot(t,u,lw=2)
ylim( (-1.1*abs(min(u)) if abs(min(u))>0.1 else -0.2 , 1.1*max(u) ) )
###########################################
soucet(U,u"Obdélníkové napětí -- DCL=25\%")
Explanation: Sum of all harmonic components
End of explanation
import copy
Q=copy.deepcopy(U)
Q[31:]=0
soucet(Q,u'Obdélníkové napětí -- prvních 30 harmonických složek')
Q=copy.deepcopy(U)
Q[21:]=0
soucet(Q,u'Obdélníkové napětí -- prvních 20 harmonických složek')
Q=copy.deepcopy(U)
Q[11:]=0
soucet(Q,u'Obdélníkové napětí -- prvních 10 harmonických složek')
Q=copy.deepcopy(U)
Q[6:]=0
soucet(Q,u'Obdélníkové napětí -- prvních 5 harmonických složek')
Explanation: Lower harmonic components -- the basic shape
The lower harmonic components determine the basic shape. If we keep the lower harmonic components
and remove the higher harmonic components,
the shape will not be exact, but it will still be recognizable.
The fewer harmonic components we include, the less accurate the shape, but its basic features remain preserved.
Also notice the steepness of the edges. The more of the higher harmonic components are kept, the steeper the edges.
End of explanation
Q=copy.deepcopy(U)
Q[:11]=0
soucet(Q,u'Obdélníkové napětí -- Jen vyšší harmonicé složky')
Q=copy.deepcopy(U)
Q[:31]=0
soucet(Q,u'Obdélníkové napětí -- Jen vyšší harmonicé složky')
Q=copy.deepcopy(U)
Q[:51]=0
soucet(Q,u'Obdélníkové napětí -- Jen vyšší harmonicé složky')
Q=copy.deepcopy(U)
Q[:30]=0
Q[51:]=0
soucet(Q,u'Obdélníkové napětí -- Jen vyšší harmonicé složky')
Explanation: Higher harmonic components -- fast changes
The higher harmonic components represent fast changes.
The previous figures show that the more of the higher harmonic components are kept, the steeper the edges.
If we filter out the lower harmonic components and keep only the higher ones, the original shape can no longer be
recognized, but exactly the fast changes remain -- a ringing appears at the location of the original steep edge.
End of explanation |
12,544 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Ordinary Differential Equations Exercise 1
Imports
Step2: Lorenz system
The Lorenz system is one of the earliest studied examples of a system of differential equations that exhibits chaotic behavior, such as bifurcations, attractors, and sensitive dependence on initial conditions. The differential equations read
Step4: Write a function solve_lorenz that solves the Lorenz system above for a particular initial condition $[x(0),y(0),z(0)]$. Your function should return a tuple of the solution array and time array.
Step6: Write a function plot_lorentz that
Step7: Use interact to explore your plot_lorenz function with | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from scipy.integrate import odeint
from IPython.html.widgets import interact, fixed
Explanation: Ordinary Differential Equations Exercise 1
Imports
End of explanation
def lorentz_derivs(yvec, t, sigma, rho, beta):
Compute the the derivatives for the Lorentz system at yvec(t).
x = yvec[0]
y = yvec[1]
z = yvec[2]
dx = sigma*(y-x)
dy = x*(rho-z) - y
dz = x*y - beta*z
return np.array([dx,dy,dz])
assert np.allclose(lorentz_derivs((1,1,1),0, 1.0, 1.0, 2.0),[0.0,-1.0,-1.0])
Explanation: Lorenz system
The Lorenz system is one of the earliest studied examples of a system of differential equations that exhibits chaotic behavior, such as bifurcations, attractors, and sensitive dependence on initial conditions. The differential equations read:
$$ \frac{dx}{dt} = \sigma(y-x) $$
$$ \frac{dy}{dt} = x(\rho-z) - y $$
$$ \frac{dz}{dt} = xy - \beta z $$
The solution vector is $[x(t),y(t),z(t)]$ and $\sigma$, $\rho$, and $\beta$ are parameters that govern the behavior of the solutions.
Write a function lorenz_derivs that works with scipy.integrate.odeint and computes the derivatives for this system.
End of explanation
def solve_lorentz(ic, max_time=4.0, sigma=10.0, rho=28.0, beta=8.0/3.0):
Solve the Lorenz system for a single initial condition.
Parameters
----------
ic : array, list, tuple
Initial conditions [x,y,z].
max_time: float
The max time to use. Integrate with 250 points per time unit.
sigma, rho, beta: float
Parameters of the differential equation.
Returns
-------
soln : np.ndarray
The array of the solution. Each row will be the solution vector at that time.
t : np.ndarray
The array of time points used.
t = np.linspace(0,max_time,250)
soln = odeint(lorentz_derivs, ic, t, args=(sigma, rho, beta))
return (soln, t)
assert True # leave this to grade solve_lorenz
Explanation: Write a function solve_lorenz that solves the Lorenz system above for a particular initial condition $[x(0),y(0),z(0)]$. Your function should return a tuple of the solution array and time array.
End of explanation
N = 5
colors = plt.cm.hot(np.linspace(0,1,N))
for i in range(N):
# To use these colors with plt.plot, pass them as the color argument
print(colors[i])
np.random.seed(1)
g=[]
h=[]
f=[]
for i in range(5):
rnd = np.random.random(size=3)
a,b,c = 30*rnd - 15
g.append(a)
h.append(b)
f.append(c)
g,h,f
def plot_lorentz(N=10, max_time=4.0, sigma=10.0, rho=28.0, beta=8.0/3.0):
Plot [x(t),z(t)] for the Lorenz system.
Parameters
----------
N : int
Number of initial conditions and trajectories to plot.
max_time: float
Maximum time to use.
sigma, rho, beta: float
Parameters of the differential equation.
np.random.seed(1)
colors = plt.cm.hot(np.linspace(0,1,N))
f = plt.figure(figsize=(7,7))
for i in range(N):
ic = 30*np.random.random(size=3) - 15
soln, t = solve_lorentz(ic, max_time, sigma, rho, beta)
plt.plot(soln[:,0], soln[:,2], color=colors[i])
plt.xlabel('x(t)')
plt.ylabel('z(t)')
plt.title('Lorenz System: x(t) vs. z(t)')
plt.ylim(-20,110)
plt.xlim(-60,60)
plot_lorentz()
assert True # leave this to grade the plot_lorenz function
Explanation: Write a function plot_lorentz that:
Solves the Lorenz system for N different initial conditions. To generate your initial conditions, draw uniform random samples for x, y and z in the range $[-15,15]$. Call np.random.seed(1) a single time at the top of your function to use the same seed each time.
Plot $[x(t),z(t)]$ using a line to show each trajectory.
Color each line using the hot colormap from Matplotlib.
Label your plot and choose an appropriate x and y limit.
The following cell shows how to generate colors that can be used for the lines:
End of explanation
interact(plot_lorentz, N=(1,50), max_time=(1,10), sigma=(0.0,50.0), rho=(0.0,50.0), beta=fixed(8/3));
Explanation: Use interact to explore your plot_lorenz function with:
max_time an integer slider over the interval $[1,10]$.
N an integer slider over the interval $[1,50]$.
sigma a float slider over the interval $[0.0,50.0]$.
rho a float slider over the interval $[0.0,50.0]$.
beta fixed at a value of $8/3$.
End of explanation |
12,545 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Word prediction using Quadgram
This program reads the corpus line by line.This reads the corpus one line at a time loads it into the memory
Time Complexity for word prediction
Step1: <u>Do preprocessing</u>
Step2: Tokenize and load the corpus data
Step3: Create a Hash Table for Probable words for Trigram sentences
Step4: Sort the probable words
Step5: <u>Driver function for doing the prediction</u>
Step6: <u>For Taking input from the User</u>
Step7: <u>Test Score, Perplexity Calculation
Step8: For Computing the Perplexity
Step9: <u> For Computing Interpolated Probability</u>
Step10: <u>Driver Function for Testing the Language Model</u>
Step13: <u>main function</u>
Step14: <i><u>For Debugging Purpose Only</u></i>
<i>Uncomment the above two cells and ignore running the cells below if not debugging</i> | Python Code:
from nltk.util import ngrams
from collections import defaultdict
from collections import OrderedDict
import string
import time
import gc
start_time = time.time()
Explanation: Word prediction using Quadgram
This program reads the corpus line by line, loading one line at a time into memory
Time Complexity for word prediction : O(1)
Time Complexity for word prediction with rank 'r': O(r)
<u>Import corpus</u>
End of explanation
#returns: string
#arg: string
#remove punctuations and make the string lowercase
def removePunctuations(sen):
#split the string into word tokens
temp_l = sen.split()
i = 0
#changes the word to lowercase and removes punctuations from it
for word in temp_l :
for l in word :
if l in string.punctuation:
word = word.replace(l," ")
temp_l[i] = word.lower()
i=i+1
#splitting is done here because in sentences like "here---so", after punctuation removal the text should
#become "here so"
content = " ".join(temp_l)
return content
Explanation: <u>Do preprocessing</u>:
Remove the punctuations and lowercase the tokens
End of explanation
#returns : void
#arg: string,dict,dict,dict,dict
#loads the corpus for the dataset and makes the frequency count of quadgram and trigram strings
def loadCorpus(file_path,bi_dict,tri_dict,quad_dict,vocab_dict):
w1 = '' #for storing the 3rd last word to be used for next token set
w2 = '' #for storing the 2nd last word to be used for next token set
w3 = '' #for storing the last word to be used for next token set
token = []
word_len = 0
#open the corpus file and read it line by line
with open(file_path,'r') as file:
for line in file:
#split the line into tokens
token = line.split()
i = 0
#for each word in the token list ,remove pucntuations and change to lowercase
for word in token :
for l in word :
if l in string.punctuation:
word = word.replace(l," ")
token[i] = word.lower()
i += 1
#make the token list into a string
content = " ".join(token)
token = content.split()
word_len = word_len + len(token)
if not token:
continue
#add the last word from previous line
if w3!= '':
token.insert(0,w3)
temp0 = list(ngrams(token,2))
#since we are reading line by line some combinations of word might get missed for pairing
#for trigram
#first add the previous words
if w2!= '':
token.insert(0,w2)
#tokens for trigrams
temp1 = list(ngrams(token,3))
#insert the 3rd last word from previous line for quadgram pairing
if w1!= '':
token.insert(0,w1)
#add new unique words to the vocabulary set if available
for word in token:
if word not in vocab_dict:
vocab_dict[word] = 1
else:
vocab_dict[word]+= 1
#tokens for quadgrams
temp2 = list(ngrams(token,4))
#count the frequency of the bigram sentences
for t in temp0:
sen = ' '.join(t)
bi_dict[sen] += 1
#count the frequency of the trigram sentences
for t in temp1:
sen = ' '.join(t)
tri_dict[sen] += 1
#count the frequency of the quadgram sentences
for t in temp2:
sen = ' '.join(t)
quad_dict[sen] += 1
#then take out the last 3 words
n = len(token)
#store the last few words for the next sentence pairing
w1 = token[n -3]
w2 = token[n -2]
w3 = token[n -1]
return word_len
Explanation: Tokenize and load the corpus data
End of explanation
#returns: void
#arg: dict,dict,dict,dict,dict,int
#creates dict for storing probable words with their probabilities for a trigram sentence
def createProbableWordDict(bi_dict,tri_dict,quad_dict,prob_dict,vocab_dict,token_len):
for quad_sen in quad_dict:
prob = 0.0
quad_token = quad_sen.split()
tri_sen = ' '.join(quad_token[:3])
tri_count = tri_dict[tri_sen]
if tri_count != 0:
prob = interpolatedProbability(quad_token,token_len, vocab_dict, bi_dict, tri_dict, quad_dict,
l1 = 0.25, l2 = 0.25, l3 = 0.25 , l4 = 0.25)
if tri_sen not in prob_dict:
prob_dict[tri_sen] = []
prob_dict[tri_sen].append([prob,quad_token[-1]])
else:
prob_dict[tri_sen].append([prob,quad_token[-1]])
prob = None
tri_count = None
quad_token = None
tri_sen = None
Explanation: Create a Hash Table for Probable words for Trigram sentences
End of explanation
#returns: void
#arg: dict
#for sorting the probable word acc. to their probabilities
def sortProbWordDict(prob_dict):
for key in prob_dict:
if len(prob_dict[key])>1:
prob_dict[key].sort(reverse=True)
Explanation: Sort the probable words
End of explanation
#returns: string
#arg: string,dict,int
#does prediction for the the sentence
def doPrediction(sen, prob_dict, rank = 1):
if sen in prob_dict:
if rank <= len(prob_dict[sen]):
return prob_dict[sen][rank-1][1]
else:
return prob_dict[sen][0][1]
else:
return "Can't predict"
Explanation: <u>Driver function for doing the prediction</u>
End of explanation
#returns: string
#arg: void
#for taking input from user
def takeInput():
cond = False
#take input
while(cond == False):
sen = input('Enter the string\n')
sen = removePunctuations(sen)
temp = sen.split()
if len(temp) < 3:
print("Please enter atleast 3 words !")
else:
cond = True
temp = temp[-3:]
sen = " ".join(temp)
return sen
Explanation: <u>For Taking input from the User</u>
End of explanation
#return:int
#arg:list,dict,dict,dict,dict
#computes the score for test data
def computeTestScore(test_sent,tri_dict,quad_dict,vocab_dict,prob_dict):
#increment the score value if correct prediction is made else decrement its value
score = 0
w = open('test_result.txt','w')
for sent in test_sent:
sen_token = sent[:3]
sen = " ".join(sen_token)
correct_word = sent[3]
# print(sen,':',correct_word)
result = doPrediction(sen,prob_dict)
if result == correct_word:
s = sen +" : "+result+'\n'
w.write(s)
score+=1
w.close()
return score
Explanation: <u>Test Score ,Perplexity Calculation:</u>
For computing the Test Score
End of explanation
#return:float
#arg:list,int,dict,dict,dict,dict
#computes the perplexity for the test data
def computePerplexity(test_quadgrams,token_len,tri_dict,quad_dict,vocab_dict,prob_dict):
perplexity = float(1.0)
n = token_len
for item in quad_dict:
sen_token = item.split()
sen = ' '.join(sen_token[0:3])
prob = quad_dict[item]/tri_dict[sen]
perplexity = perplexity * ( prob**(1./n))
return perplexity
Explanation: For Computing the Perplexity
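For reference, the usual definition for a model that assigns probability $P(w_i \mid w_{i-3} w_{i-2} w_{i-1})$ to each of the $N$ test tokens is
$$PP = \left( \prod_{i=1}^{N} \frac{1}{P(w_i \mid w_{i-3}\, w_{i-2}\, w_{i-1})} \right)^{1/N}$$
i.e. the inverse geometric mean of the conditional probabilities. Note that computePerplexity above iterates over the quadgram table and multiplies prob**(1./n) directly, which yields a geometric mean of probabilities rather than its reciprocal, so its output is not directly comparable to perplexity values computed with the standard definition.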
End of explanation
#returns: float
#arg: float,float,float,float,list,list,dict,dict,dict,dict
#for calculating the interpolated probability
def interpolatedProbability(quad_token,token_len, vocab_dict, bi_dict, tri_dict, quad_dict,
l1 = 0.25, l2 = 0.25, l3 = 0.25 , l4 = 0.25):
sen = ' '.join(quad_token)
prob =(
l1*(quad_dict[sen] / tri_dict[' '.join(quad_token[0:3])])
+ l2*(tri_dict[' '.join(quad_token[1:4])] / bi_dict[' '.join(quad_token[1:3])])
+ l3*(bi_dict[' '.join(quad_token[2:4])] / vocab_dict[quad_token[2]])
+ l4*(vocab_dict[quad_token[3]] / token_len)
)
return prob
Explanation: <u> For Computing Interpolated Probability</u>
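In formula form, the interpolated estimate computed above (with counts $C(\cdot)$ taken from the corpus and $N$ the total number of tokens) is
$$P_{\lambda}(w_4 \mid w_1 w_2 w_3) = \lambda_1 \frac{C(w_1 w_2 w_3 w_4)}{C(w_1 w_2 w_3)} + \lambda_2 \frac{C(w_2 w_3 w_4)}{C(w_2 w_3)} + \lambda_3 \frac{C(w_3 w_4)}{C(w_3)} + \lambda_4 \frac{C(w_4)}{N}$$
with the weights satisfying $\lambda_1 + \lambda_2 + \lambda_3 + \lambda_4 = 1$ (all set to 0.25 here).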
End of explanation
#return: void
#arg:string,string,dict,dict,dict,dict,dict
#Used for testing the Language Model
def trainCorpus(train_file,test_file,bi_dict,tri_dict,quad_dict,vocab_dict,prob_dict):
test_result = ''
score = 0
#load the training corpus for the dataset
token_len = loadCorpus('training_corpus.txt',bi_dict,tri_dict,quad_dict,vocab_dict)
print("---Processing Time for Corpus Loading: %s seconds ---" % (time.time() - start_time))
start_time1 = time.time()
#creates a dictionary of probable words
createProbableWordDict(bi_dict,tri_dict,quad_dict,prob_dict,vocab_dict,token_len)
#sort the dictionary of probable words
sortProbWordDict(prob_dict)
gc.collect()
print("---Processing Time for Creating Probable Word Dict: %s seconds ---" % (time.time() - start_time1))
test_data = ''
#Now load the test corpus
with open('testing_corpus.txt','r') as file :
test_data = file.read()
#remove punctuations from the test data
test_data = removePunctuations(test_data)
test_token = test_data.split()
#split the test data into 4 words list
test_token = test_data.split()
test_quadgrams = list(ngrams(test_token,4))
#print(len(test_token))
start_time1 = time.time()
score = computeTestScore(test_quadgrams,tri_dict,quad_dict,vocab_dict,prob_dict)
print('Score:',score)
print("---Processing Time for computing score: %s seconds ---" % (time.time() - start_time1))
start_time2 = time.time()
perplexity = computePerplexity(test_token,token_len,tri_dict,quad_dict,vocab_dict,prob_dict)
print('Perplexity:',perplexity)
print("---Processing Time for computing Perplexity: %s seconds ---" % (time.time() - start_time2))
test_result += 'TEST RESULTS\nScore: '+str(score) + '\nPerplexity: '+str(perplexity)
with open('test_results.txt','w') as file:
file.write(test_result)
Explanation: <u>Driver Function for Testing the Language Model</u>
End of explanation
def main():
#variable declaration
tri_dict = defaultdict(int) #for keeping count of sentences of three words
quad_dict = defaultdict(int) #for keeping count of sentences of four words
vocab_dict = defaultdict(int) #for storing the different words with their frequencies
prob_dict = OrderedDict() #for storing the probabilities of probable words for a sentence
bi_dict = defaultdict(int)
#load the corpus for the dataset
token_len = loadCorpus('corpusfile.txt',bi_dict,tri_dict,quad_dict,vocab_dict)
print("---Preprocessing Time for Corpus loading: %s seconds ---" % (time.time() - start_time))
start_time1 = time.time()
#creates a dictionary of probable words
createProbableWordDict(bi_dict,tri_dict,quad_dict,prob_dict,vocab_dict,token_len)
#sort the dictionary of probable words
sortProbWordDict(prob_dict)
gc.collect()
print("---Preprocessing Time for Creating Probable Word Dict: %s seconds ---" % (time.time() - start_time1))
input_sen = takeInput()
start_time2 = time.time()
prediction = doPrediction(input_sen,prob_dict)
print('Word Prediction:',prediction)
print("---Time for Prediction Operation: %s seconds ---" % (time.time() - start_time2))
"
if __name__ == '__main__':
main()
Explanation: <u>main function</u>
End of explanation
#variable declaration
tri_dict = defaultdict(int) #for keeping count of sentences of three words
quad_dict = defaultdict(int) #for keeping count of sentences of four words
vocab_dict = defaultdict(int) #for storing the different words with their frequencies
prob_dict = OrderedDict() #for storing the probabilities of probable words for a sentence
bi_dict = defaultdict(int)
#load the corpus for the dataset
#loadCorpus('corpusfile.txt',bi_dict,tri_dict,quad_dict,vocab_dict)
print("---Preprocessing Time for Corpus loading: %s seconds ---" % (time.time() - start_time))
train_file = 'training_corpus.txt'
test_file = 'testing_corpus.txt'
#load the corpus for the dataset
trainCorpus(train_file,test_file,bi_dict,tri_dict,quad_dict,vocab_dict,prob_dict)
Explanation: <i><u>For Debugging Purpose Only</u></i>
<i>Uncomment the above two cells and ignore running the cells below if not debugging</i>
End of explanation |
12,546 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ABU Quantitative System User Documentation
<center>
<img src="./image/abu_logo.png" alt="" style="vertical-align
Step1: 1. UI interface
Step2: From the backtest result you can see that the final return is positive. Because a relatively high take-profit level is used, the strategy leans toward the profit/loss ratio and the win rate is not high. Whether the strategy should ultimately use this set of parameters, i.e. how to choose the optimal parameters, was covered in 'Section 7 - Finding optimal strategy parameters and scoring', so it is not repeated here; only the more concise interface usage and UI operation are demonstrated.
First, the UI operation is demonstrated with WidgetGridSearch, as follows:
Note:
The detailed UI steps are demonstrated in the later video tutorials
For more UI operations, run the UI feature files under the abupy/abupy_ui/ folder directly
More simple UI operation examples
Step3: 2. Finding optimal strategy parameters with Grid Search in code
GridSearch.grid_search can be used to search the buy-side AbuDownUpTrend strategy parameters as follows:
short-term base 'xd'
Step4: The optimal result above is the best of the scores over all parameter combinations. Because the strategy's strict buy conditions lead to a small overall number of trades, it is often necessary to pick the parameters to use from the top-n output.
Use show_top_score_metrics to display more of the best results; a positive top_cnt shows the highest-scoring parameter combinations and their backtest results, a negative top_cnt the opposite. Below, top_cnt=3 shows the 3 best parameter combinations and backtest results:
Step5: Below, top_cnt=-1 shows the worst-scoring backtest parameter combination and its backtest result:
Step6: 3 Cross-correlation strategy validation
Grid search can tentatively select the strategy parameters, but further validation of the strategy's effectiveness and generality requires enlarging the backtest range and the number of symbols. You could backtest the strategy over many years of history on the whole market, but the cross-correlation strategy validation module built into abupy is recommended instead. Its overall idea is:
Take all symbols in a market and compute each symbol's correlation coefficient with the market benchmark index
Split all symbols in the market into several groups according to the computed correlation coefficients
From each correlation group, repeatedly sample a few symbols and run historical strategy backtests
Merge the trade backtest metrics over all correlation groups and output them
Merge the trade backtest metrics of the repeatedly sampled symbols within each correlation group and output them
AbuCrossVal built into abupy implements the above; a usage example follows:
First validate with the top-1 parameter combination from the grid search above:
Buy strategy
Step8: The final output above is
the merged trade backtest metrics over all correlation groups
the merged trade backtest metrics of the repeatedly sampled symbols within each correlation group
Because random sampling is involved, the result differs on every run
Judging effectiveness depends on the overall style of the strategy:
With the aggressive take-profit/stop-loss setting above, the take-profit multiple (0.5) is below the stop-loss multiple (1), and both values are low:
this means the overall style of the strategy is mean reversion
it therefore needs a high win rate in the cross-correlation group results, while the profit/loss ratio may be low
Note that in many groups the average expected profit is lower than the average expected loss.
Generality is judged, on top of effectiveness, across the different correlation groups:
In the backtest results above, every correlation backtest group has a win rate above 55%, which to some extent indicates good generality
In general, if 7 or more of the 10 groups pass the criterion, the strategy can be considered good.
Because of the random sampling, the result differs on every run, so you can run it several times and also switch to different markets to validate.
The validation above used a low take-profit level, which makes the overall strategy style mean reversion. To build a trend-following style you need a fairly high take-profit level, but the top results output by
GridSearch.show_top_score_metrics
above are all mean-reversion parameter sets. You can use
GridSearch.show_top_constraints_metrics
to constrain the top ranking; for example, the demo below asks for the top of all the metrics above under the condition that the take-profit stop_win_n equals 3.0. The code is shown below:
Step10: The constrained output above is the best metric result and parameter combination with the take-profit level fixed at 3.0.
Next, run correlation cross-validation with the constrained top-1 parameter combination:
Buy strategy
Step11: With the overall take-profit/stop-loss setting above, the take-profit multiple (3.0) >> the stop-loss multiple (1.0), i.e. the take-profit level is high:
this means the overall style of the strategy is trend following
the overall cross-validation win rate below 50% is consistent with a trend-following style,
and the average expected profit is greater than the average expected loss in every group.
One correlation backtest group has both a low win rate and a low profit/loss ratio, which makes its total profit and loss negative
To make sure the strategy is effective, the validation can be run several times.
After fit has run, all the backtest trades can be inspected with show_cross_val_se, using the start and end parameters as a cursor, as in the following example:
Step12: 4. Human analysis is smarter and more sensible than artificial intelligence
Besides using the final metrics above as the criterion for the strategy's effectiveness and generality, you can also save every trade of the correlation cross-validation backtests locally through the visualization interface, then inspect the buy and sell points of each trade one by one to check whether the strategy has problems and whether there is room for improvement.
Use the plot_all_cross_val_orders interface to save all the trades of the cross-validation backtests; the saved result looks like this:
Note: to emphasize again, in the human analysis of strategy results it is very important to focus on the failed trades, whether an improvement exists, and whether the improvement would introduce new problems. Do not reject manual work; human analysis is smarter and more sensible than artificial intelligence.
Step13: After saving, the snapshots are stored under ~/abu/data/save_png in a folder named by the current date; they can be opened directly with the following command:
Step14: 5. UI interface
# import basic libraries
from __future__ import print_function
from __future__ import division
import warnings
warnings.filterwarnings('ignore')
warnings.simplefilter('ignore')
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
import os
import sys
# insert at position 0 so that only the github version is used, avoiding version mismatches with a pip-installed abupy
sys.path.insert(0, os.path.abspath('../'))
import abupy
# the first examples use the sandbox data; the later examples require downloading the cached data
abupy.env.enable_example_env_ipython()
us_choice_symbols = ['usTSLA', 'usNOAH', 'usSFUN', 'usBIDU', 'usAAPL', 'usGOOG', 'usWUBA', 'usVIPS']
cn_choice_symbols = ['002230', '300104', '300059', '601766', '600085', '600036', '600809', '000002', '002594', '002739']
hk_choice_symbols = ['hk03333', 'hk00700', 'hk02333', 'hk01359', 'hk00656', 'hk03888', 'hk02318']
from abupy import ABuSymbolPd, AbuUpDownTrend, AbuDownUpTrend, AbuUpDownGolden, AbuMetricsBase
from abupy import AbuFactorCloseAtrNStop, AbuFactorAtrNStop, AbuFactorPreAtrNStop, tl, ABuProgress
from abupy import GridSearch, AbuCrossVal, WidgetCrossVal, EMarketTargetType, abu, WidgetGridSearch
Explanation: ABU Quantitative System User Documentation
<center>
<img src="./image/abu_logo.png" alt="" style="vertical-align:middle;padding:10px 20px;"><font size="6" color="black"><b>Section 32: Validating strategy effectiveness</b></font>
</center>
Author: abu (阿布)
Copyright abu quant, all rights reserved. Reproduction without permission is prohibited.
abu quantitative system github address (stars welcome)
ipython notebook for this section
The previous section showed examples of pairing position management with buy strategies; this section covers validating strategy effectiveness.
First import the abupy modules used in this section:
Most of this section cannot run on the sandbox data; the cached data must be downloaded. Baidu cloud address:
6 years of daily k-line data in csv format for US stocks, A-shares, HK stocks, coins and futures. Password: gvtr
End of explanation
# initial capital
cash = 3000000
def run_loo_back(choice_symbols, ps=None, n_folds=3, start=None, end=None, only_info=False):
Wrap a backtest run: returns the backtest result tuple and the metrics object
if choice_symbols[0].startswith('us'):
abupy.env.g_market_target = EMarketTargetType.E_MARKET_TARGET_US
else:
abupy.env.g_market_target = EMarketTargetType.E_MARKET_TARGET_CN
abu_result_tuple, _ = abu.run_loop_back(cash,
buy_factors,
sell_factors,
ps,
start=start,
end=end,
n_folds=n_folds,
choice_symbols=choice_symbols)
ABuProgress.clear_output()
metrics = AbuMetricsBase.show_general(*abu_result_tuple, returns_cmp=only_info,
only_info=only_info,
only_show_returns=True)
return abu_result_tuple, metrics
The buy strategy uses AbuDownUpTrend:
short-term base xd=30: the overall trend over the last 30 trading days is up; it also serves as the base for the long-term downtrend multiplier and the 30-day turtle breakout
long-term multiplier past_factor=4: xd * 4 = 30 * 4 = 120, so the overall trend over the past 120 trading days is down
trend angle threshold down_deg_threshold: the fitted angle used to decide whether a trend is up or down is +-4
buy_factors = [{'class': AbuDownUpTrend, 'xd': 30, 'past_factor': 4, 'down_deg_threshold': -4}]
sell_factors = [{'stop_loss_n': 1.0, 'stop_win_n': 3.0,
'class': AbuFactorAtrNStop},
{'class': AbuFactorPreAtrNStop, 'pre_atr_n': 1.5},
{'class': AbuFactorCloseAtrNStop, 'close_atr_n': 1.5}]
# start the backtest
_, _ = run_loo_back(us_choice_symbols, only_info=True)
Explanation: 1. UI interface: Grid Search for the optimal strategy parameters
Sections 30 and 31 both used AbuDownUpTrend, a long/short-term buy strategy built into abupy. Its default parameters work as follows:
1. Look for stocks that have fallen over the long term, e.g. the overall trend over one quarter (4 months) is down
2. Whose short-term movement is up, e.g. the overall trend over one month is up
3. Finally use the turtle xd-day breakout as the final buy signal
The 'one quarter (4 months)' and 'xd-day breakout' in the description above are default parameters of the strategy; they can be changed by setting the factor's parameters, and the trend-fitting angle threshold used to decide whether a trend is up or down can also be set. Below, the strategy is changed to:
Look for stocks that have fallen over the long term: the overall trend over the past 120 trading days is down
Whose short-term movement is up: the overall trend over the past 30 trading days is up
Finally use a 30-day breakout as the final buy signal
The fitted angle threshold for deciding whether a trend is up or down is +-4
A code example follows:
End of explanation
# launch the grid search UI directly
WidgetGridSearch()()
Explanation: From the backtest result you can see that the final return is positive. Because a relatively high take-profit level is used, the strategy leans toward the profit/loss ratio and the win rate is not high. Whether the strategy should ultimately use this set of parameters, i.e. how to choose the optimal parameters, was covered in 'Section 7 - Finding optimal strategy parameters and scoring', so it is not repeated here; only the more concise interface usage and UI operation are demonstrated.
First, the UI operation is demonstrated with WidgetGridSearch, as follows:
Note:
The detailed UI steps are demonstrated in the later video tutorials
For more UI operations, run the UI feature files under the abupy/abupy_ui/ folder directly
More simple UI operation examples
End of explanation
buy_factors = {'class': AbuDownUpTrend, 'xd': [20, 30, 40],
'past_factor': [3, 4, 5], 'down_deg_threshold': [-2, -3, -4]}
sell_factors = [{'class': AbuFactorAtrNStop, 'stop_loss_n': [0.5, 1.0, 1.5],
'stop_win_n': [0.5, 1.0, 2.0, 3.0]},
]
# use the class method GridSearch.grid_search to search for the optimal parameters
scores, score_tuple_array = GridSearch.grid_search(us_choice_symbols, buy_factors, sell_factors)
Explanation: 2. Finding optimal strategy parameters with Grid Search in code
GridSearch.grid_search can be used to search the buy-side AbuDownUpTrend strategy parameters:
short-term base 'xd': [20, 30, 40]
long-term multiplier 'past_factor': [3, 4, 5]
trend angle threshold 'down_deg_threshold': [-2, -3, -4]
and the sell-side AbuFactorAtrNStop strategy parameters:
stop-loss atr multiple stop_loss_n: [0.5, 1.0, 1.5]
take-profit atr multiple stop_win_n: [0.5, 1.0, 2.0, 3.0]
End of explanation
GridSearch.show_top_score_metrics(scores, score_tuple_array, top_cnt=3)
Explanation: The optimal result above is the best of the scores over all parameter combinations. Because the strategy's strict buy conditions lead to a small overall number of trades, it is often necessary to pick the parameters to use from the top-n output.
Use show_top_score_metrics to display more of the best results; a positive top_cnt shows the highest-scoring parameter combinations and their backtest results, a negative top_cnt the opposite. Below, top_cnt=3 shows the 3 best parameter combinations and backtest results:
End of explanation
GridSearch.show_top_score_metrics(scores, score_tuple_array, top_cnt=-1)
Explanation: Below, top_cnt=-1 shows the worst-scoring backtest parameter combination and its backtest result:
End of explanation
# cross-correlation strategy validation only supports local, non-sandbox data mode
abupy.env.disable_example_env_ipython()
# validate with the top-1 parameter combination from the grid search above
buy_factors = [{'class': AbuDownUpTrend, 'down_deg_threshold': -2, 'past_factor': 5, 'xd': 20}]
sell_factors = [{'stop_loss_n': 1, 'stop_win_n': 0.5,
'class': AbuFactorAtrNStop},
{'class': AbuFactorPreAtrNStop, 'pre_atr_n': 1.5},
{'class': AbuFactorCloseAtrNStop, 'close_atr_n': 1.5}]
cross_val = AbuCrossVal()
cross_val.fit(buy_factors, sell_factors, cv=10)
Explanation: 3 Cross-correlation strategy validation
Grid search can tentatively select the strategy parameters, but further validation of the strategy's effectiveness and generality requires enlarging the backtest range and the number of symbols. You could backtest the strategy over many years of history on the whole market, but the cross-correlation strategy validation module built into abupy is recommended instead. Its overall idea is:
Take all symbols in a market and compute each symbol's correlation coefficient with the market benchmark index
Split all symbols in the market into several groups according to the computed correlation coefficients
From each correlation group, repeatedly sample a few symbols and run historical strategy backtests
Merge the trade backtest metrics over all correlation groups and output them
Merge the trade backtest metrics of the repeatedly sampled symbols within each correlation group and output them
AbuCrossVal built into abupy implements the above; a usage example follows:
First validate with the top-1 parameter combination from the grid search above:
Buy strategy: [{'past_factor': 3, 'xd': 30, 'down_deg_threshold': -4}]
Sell strategy: [{'stop_loss_n': 1.5, 'stop_win_n': 3.0}]
The code is shown below:
End of explanation
def constraints(scores, score_tuple_array, top_cnt):
result_top = []
for sc_ind in scores.index:
for sell_factor in score_tuple_array[sc_ind].sell_factors:
if 'stop_win_n' in sell_factor and sell_factor['stop_win_n'] == 3.0:
result_top.append(score_tuple_array[sc_ind])
if len(result_top) >= top_cnt:
return result_top
return result_top
# top_cnt=1: output the best parameter combination metrics with stop_win_n=3.0
GridSearch.show_top_constraints_metrics(constraints, scores, score_tuple_array, top_cnt=1)
Explanation: The final output above is
the merged trade backtest metrics over all correlation groups
the merged trade backtest metrics of the repeatedly sampled symbols within each correlation group
Because random sampling is involved, the result differs on every run
Judging effectiveness depends on the overall style of the strategy:
With the aggressive take-profit/stop-loss setting above, the take-profit multiple (0.5) is below the stop-loss multiple (1), and both values are low:
this means the overall style of the strategy is mean reversion
it therefore needs a high win rate in the cross-correlation group results, while the profit/loss ratio may be low
Note that in many groups the average expected profit is lower than the average expected loss.
Generality is judged, on top of effectiveness, across the different correlation groups:
In the backtest results above, every correlation backtest group has a win rate above 55%, which to some extent indicates good generality
In general, if 7 or more of the 10 groups pass the criterion, the strategy can be considered good.
Because of the random sampling, the result above differs on every run, so you can run it several times and also switch to different markets to validate.
The validation above used a low take-profit level, which makes the overall strategy style mean reversion. To build a trend-following style you need a fairly high take-profit level, but the top results output by
GridSearch.show_top_score_metrics
above are all mean-reversion parameter sets. You can use
GridSearch.show_top_constraints_metrics
to constrain the top ranking; for example, the demo below asks for the top of all the metrics above under the condition that the take-profit stop_win_n equals 3.0. The code is shown below:
End of explanation
# the constrained top-1 parameters from the grid search result
buy_factors = [{'class': AbuDownUpTrend, 'down_deg_threshold': -3, 'past_factor': 4, 'xd': 20}]
# the constraint is stop_win_n equal to 3.0
sell_factors = [{'stop_loss_n': 1.5, 'stop_win_n': 3.0,
'class': AbuFactorAtrNStop},
{'class': AbuFactorPreAtrNStop, 'pre_atr_n': 1.5},
{'class': AbuFactorCloseAtrNStop, 'close_atr_n': 1.5}]
cross_val.fit(buy_factors, sell_factors, cv=10)
Explanation: The constrained output above is the best metric result and parameter combination with the take-profit level fixed at 3.0.
Next, run correlation cross-validation with the constrained top-1 parameter combination:
Buy strategy: [{'past_factor': 4, 'xd': 20, 'down_deg_threshold': -4}]
Sell strategy: [{'stop_loss_n': 1.0, 'stop_win_n': 3.0}]
The code and its output are shown below:
End of explanation
cross_val.show_cross_val_se(start=8, end=10)
Explanation: With the overall take-profit/stop-loss setting above, the take-profit multiple (3.0) >> the stop-loss multiple (1.0), i.e. the take-profit level is high:
this means the overall style of the strategy is trend following
the overall cross-validation win rate below 50% is consistent with a trend-following style,
and the average expected profit is greater than the average expected loss in every group.
One correlation backtest group has both a low win rate and a low profit/loss ratio, which makes its total profit and loss negative
To make sure the strategy is effective, the validation can be run several times.
After fit has run, all the backtest trades can be inspected with show_cross_val_se, using the start and end parameters as a cursor, as in the following example:
End of explanation
cross_val.plot_all_cross_val_orders()
Explanation: 4. Human analysis is smarter and more sensible than artificial intelligence
Besides using the final metrics above as the criterion for the strategy's effectiveness and generality, you can also save every trade of the correlation cross-validation backtests locally through the visualization interface, then inspect the buy and sell points of each trade one by one to check whether the strategy has problems and whether there is room for improvement.
Use the plot_all_cross_val_orders interface to save all the trades of the cross-validation backtests; the saved result looks like this:
Note: to emphasize again, in the human analysis of strategy results it is very important to focus on the failed trades, whether an improvement exists, and whether the improvement would introduce new problems. Do not reject manual work; human analysis is smarter and more sensible than artificial intelligence.
End of explanation
if abupy.env.g_is_mac_os:
!open $abupy.env.g_project_data_dir
else:
!echo $abupy.env.g_project_data_dir
Explanation: After saving, the snapshots are stored under ~/abu/data/save_png in a folder named by the current date; they can be opened directly with the following command:
End of explanation
WidgetCrossVal()()
Explanation: 5. UI interface: cross-correlation strategy validation
Similar to finding the optimal parameters, WidgetCrossVal can be used to perform cross-correlation strategy validation through the UI, as follows:
Note:
The detailed UI steps are demonstrated in the later video tutorials
For more UI operations, run the UI feature files under the abupy/abupy_ui/ folder directly
More simple UI operation examples
End of explanation |
12,547 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Ocnbgchem
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
4. Key Properties --> Transport Scheme
5. Key Properties --> Boundary Forcing
6. Key Properties --> Gas Exchange
7. Key Properties --> Carbon Chemistry
8. Tracers
9. Tracers --> Ecosystem
10. Tracers --> Ecosystem --> Phytoplankton
11. Tracers --> Ecosystem --> Zooplankton
12. Tracers --> Disolved Organic Matter
13. Tracers --> Particules
14. Tracers --> Dic Alkalinity
1. Key Properties
Ocean Biogeochemistry key properties
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Model Type
Is Required
Step7: 1.4. Elemental Stoichiometry
Is Required
Step8: 1.5. Elemental Stoichiometry Details
Is Required
Step9: 1.6. Prognostic Variables
Is Required
Step10: 1.7. Diagnostic Variables
Is Required
Step11: 1.8. Damping
Is Required
Step12: 2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
Time stepping method for passive tracers transport in ocean biogeochemistry
2.1. Method
Is Required
Step13: 2.2. Timestep If Not From Ocean
Is Required
Step14: 3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
Time stepping framework for biology sources and sinks in ocean biogeochemistry
3.1. Method
Is Required
Step15: 3.2. Timestep If Not From Ocean
Is Required
Step16: 4. Key Properties --> Transport Scheme
Transport scheme in ocean biogeochemistry
4.1. Type
Is Required
Step17: 4.2. Scheme
Is Required
Step18: 4.3. Use Different Scheme
Is Required
Step19: 5. Key Properties --> Boundary Forcing
Properties of biogeochemistry boundary forcing
5.1. Atmospheric Deposition
Is Required
Step20: 5.2. River Input
Is Required
Step21: 5.3. Sediments From Boundary Conditions
Is Required
Step22: 5.4. Sediments From Explicit Model
Is Required
Step23: 6. Key Properties --> Gas Exchange
*Properties of gas exchange in ocean biogeochemistry *
6.1. CO2 Exchange Present
Is Required
Step24: 6.2. CO2 Exchange Type
Is Required
Step25: 6.3. O2 Exchange Present
Is Required
Step26: 6.4. O2 Exchange Type
Is Required
Step27: 6.5. DMS Exchange Present
Is Required
Step28: 6.6. DMS Exchange Type
Is Required
Step29: 6.7. N2 Exchange Present
Is Required
Step30: 6.8. N2 Exchange Type
Is Required
Step31: 6.9. N2O Exchange Present
Is Required
Step32: 6.10. N2O Exchange Type
Is Required
Step33: 6.11. CFC11 Exchange Present
Is Required
Step34: 6.12. CFC11 Exchange Type
Is Required
Step35: 6.13. CFC12 Exchange Present
Is Required
Step36: 6.14. CFC12 Exchange Type
Is Required
Step37: 6.15. SF6 Exchange Present
Is Required
Step38: 6.16. SF6 Exchange Type
Is Required
Step39: 6.17. 13CO2 Exchange Present
Is Required
Step40: 6.18. 13CO2 Exchange Type
Is Required
Step41: 6.19. 14CO2 Exchange Present
Is Required
Step42: 6.20. 14CO2 Exchange Type
Is Required
Step43: 6.21. Other Gases
Is Required
Step44: 7. Key Properties --> Carbon Chemistry
Properties of carbon chemistry biogeochemistry
7.1. Type
Is Required
Step45: 7.2. PH Scale
Is Required
Step46: 7.3. Constants If Not OMIP
Is Required
Step47: 8. Tracers
Ocean biogeochemistry tracers
8.1. Overview
Is Required
Step48: 8.2. Sulfur Cycle Present
Is Required
Step49: 8.3. Nutrients Present
Is Required
Step50: 8.4. Nitrous Species If N
Is Required
Step51: 8.5. Nitrous Processes If N
Is Required
Step52: 9. Tracers --> Ecosystem
Ecosystem properties in ocean biogeochemistry
9.1. Upper Trophic Levels Definition
Is Required
Step53: 9.2. Upper Trophic Levels Treatment
Is Required
Step54: 10. Tracers --> Ecosystem --> Phytoplankton
Phytoplankton properties in ocean biogeochemistry
10.1. Type
Is Required
Step55: 10.2. Pft
Is Required
Step56: 10.3. Size Classes
Is Required
Step57: 11. Tracers --> Ecosystem --> Zooplankton
Zooplankton properties in ocean biogeochemistry
11.1. Type
Is Required
Step58: 11.2. Size Classes
Is Required
Step59: 12. Tracers --> Disolved Organic Matter
Disolved organic matter properties in ocean biogeochemistry
12.1. Bacteria Present
Is Required
Step60: 12.2. Lability
Is Required
Step61: 13. Tracers --> Particules
Particulate carbon properties in ocean biogeochemistry
13.1. Method
Is Required
Step62: 13.2. Types If Prognostic
Is Required
Step63: 13.3. Size If Prognostic
Is Required
Step64: 13.4. Size If Discrete
Is Required
Step65: 13.5. Sinking Speed If Prognostic
Is Required
Step66: 14. Tracers --> Dic Alkalinity
DIC and alkalinity properties in ocean biogeochemistry
14.1. Carbon Isotopes
Is Required
Step67: 14.2. Abiotic Carbon
Is Required
Step68: 14.3. Alkalinity
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'ipsl', 'sandbox-3', 'ocnbgchem')
Explanation: ES-DOC CMIP6 Model Properties - Ocnbgchem
MIP Era: CMIP6
Institute: IPSL
Source ID: SANDBOX-3
Topic: Ocnbgchem
Sub-Topics: Tracers.
Properties: 65 (37 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-20 15:02:45
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
4. Key Properties --> Transport Scheme
5. Key Properties --> Boundary Forcing
6. Key Properties --> Gas Exchange
7. Key Properties --> Carbon Chemistry
8. Tracers
9. Tracers --> Ecosystem
10. Tracers --> Ecosystem --> Phytoplankton
11. Tracers --> Ecosystem --> Zooplankton
12. Tracers --> Disolved Organic Matter
13. Tracers --> Particules
14. Tracers --> Dic Alkalinity
1. Key Properties
Ocean Biogeochemistry key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of ocean biogeochemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean biogeochemistry model code (PISCES 2.0,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Geochemical"
# "NPZD"
# "PFT"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Model Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of ocean biogeochemistry model
End of explanation
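As an illustration of how an ENUM property is answered (the choice below is illustrative, not the documented IPSL configuration), one of the listed valid choices is passed to DOC.set_value:
```
# Illustrative choice only -- use the option that matches the model being documented.
DOC.set_value("PFT")
```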
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Fixed"
# "Variable"
# "Mix of both"
# TODO - please enter value(s)
Explanation: 1.4. Elemental Stoichiometry
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe elemental stoichiometry (fixed, variable, mix of the two)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.5. Elemental Stoichiometry Details
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe which elements have fixed/variable stoichiometry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.6. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
List of all prognostic tracer variables in the ocean biogeochemistry component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.7. Diagnostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
List of all diagnostic tracer variables in the ocean biogeochemistry component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.damping')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.8. Damping
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any tracer damping used (such as artificial correction or relaxation to climatology,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
Time stepping method for passive tracers transport in ocean biogeochemistry
2.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time stepping framework for passive tracers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 2.2. Timestep If Not From Ocean
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Time step for passive tracers (if different from ocean)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
Time stepping framework for biology sources and sinks in ocean biogeochemistry
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time stepping framework for biology sources and sinks
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Timestep If Not From Ocean
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Time step for biology sources and sinks (if different from ocean)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline"
# "Online"
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Transport Scheme
Transport scheme in ocean biogeochemistry
4.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of transport scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Use that of ocean model"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 4.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Transport scheme used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.use_different_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.3. Use Different Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the transport scheme if different from that of the ocean model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.atmospheric_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Atmospheric Chemistry model"
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Boundary Forcing
Properties of biogeochemistry boundary forcing
5.1. Atmospheric Deposition
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how atmospheric deposition is modeled
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.river_input')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Land Surface model"
# TODO - please enter value(s)
Explanation: 5.2. River Input
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how river input is modeled
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_boundary_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.3. Sediments From Boundary Conditions
Is Required: FALSE Type: STRING Cardinality: 0.1
List which sediments are specified from boundary conditions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_explicit_model')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.4. Sediments From Explicit Model
Is Required: FALSE Type: STRING Cardinality: 0.1
List which sediments are specified from an explicit sediment model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Gas Exchange
Properties of gas exchange in ocean biogeochemistry
6.1. CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CO2 gas exchange modeled ?
End of explanation
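Boolean properties follow the same pattern and take True or False (the value below is illustrative only):
```
# Illustrative only -- record whether CO2 gas exchange is actually modelled.
DOC.set_value(True)
```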
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.2. CO2 Exchange Type
Is Required: FALSE Type: ENUM Cardinality: 0.1
Describe CO2 gas exchange
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.3. O2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is O2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.4. O2 Exchange Type
Is Required: FALSE Type: ENUM Cardinality: 0.1
Describe O2 gas exchange
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.5. DMS Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is DMS gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.6. DMS Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify DMS gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.7. N2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is N2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.8. N2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify N2 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.9. N2O Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is N2O gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.10. N2O Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify N2O gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.11. CFC11 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CFC11 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.12. CFC11 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify CFC11 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.13. CFC12 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CFC12 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.14. CFC12 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify CFC12 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.15. SF6 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is SF6 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.16. SF6 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify SF6 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.17. 13CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is 13CO2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.18. 13CO2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify 13CO2 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.19. 14CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is 14CO2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.20. 14CO2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify 14CO2 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.other_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.21. Other Gases
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any other gas exchange
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other protocol"
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Carbon Chemistry
Properties of carbon chemistry biogeochemistry
7.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how carbon chemistry is modeled
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.pH_scale')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea water"
# "Free"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7.2. PH Scale
Is Required: FALSE Type: ENUM Cardinality: 0.1
If NOT OMIP protocol, describe pH scale.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.constants_if_not_OMIP')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Constants If Not OMIP
Is Required: FALSE Type: STRING Cardinality: 0.1
If NOT OMIP protocol, list carbon chemistry constants.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Tracers
Ocean biogeochemistry tracers
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of tracers in ocean biogeochemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.sulfur_cycle_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 8.2. Sulfur Cycle Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is sulfur cycle modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nutrients_present')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrogen (N)"
# "Phosphorous (P)"
# "Silicium (S)"
# "Iron (Fe)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.3. Nutrients Present
Is Required: TRUE Type: ENUM Cardinality: 1.N
List nutrient species present in ocean biogeochemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_species_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrates (NO3)"
# "Amonium (NH4)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.4. Nitrous Species If N
Is Required: FALSE Type: ENUM Cardinality: 0.N
If nitrogen present, list nitrous species.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_processes_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dentrification"
# "N fixation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.5. Nitrous Processes If N
Is Required: FALSE Type: ENUM Cardinality: 0.N
If nitrogen present, list nitrous processes.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_definition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Tracers --> Ecosystem
Ecosystem properties in ocean biogeochemistry
9.1. Upper Trophic Levels Definition
Is Required: TRUE Type: STRING Cardinality: 1.1
Definition of upper trophic level (e.g. based on size) ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Upper Trophic Levels Treatment
Is Required: TRUE Type: STRING Cardinality: 1.1
Define how upper trophic level are treated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "PFT including size based (specify both below)"
# "Size based only (specify below)"
# "PFT only (specify below)"
# TODO - please enter value(s)
Explanation: 10. Tracers --> Ecosystem --> Phytoplankton
Phytoplankton properties in ocean biogeochemistry
10.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of phytoplankton
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.pft')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diatoms"
# "Nfixers"
# "Calcifiers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.2. Pft
Is Required: FALSE Type: ENUM Cardinality: 0.N
Phytoplankton functional types (PFT) (if applicable)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microphytoplankton"
# "Nanophytoplankton"
# "Picophytoplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.3. Size Classes
Is Required: FALSE Type: ENUM Cardinality: 0.N
Phytoplankton size classes (if applicable)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "Size based (specify below)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11. Tracers --> Ecosystem --> Zooplankton
Zooplankton properties in ocean biogeochemistry
11.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of zooplankton
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microzooplankton"
# "Mesozooplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.2. Size Classes
Is Required: FALSE Type: ENUM Cardinality: 0.N
Zooplankton size classes (if applicable)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.bacteria_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12. Tracers --> Disolved Organic Matter
Disolved organic matter properties in ocean biogeochemistry
12.1. Bacteria Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there bacteria representation ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.lability')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Labile"
# "Semi-labile"
# "Refractory"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.2. Lability
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe treatment of lability in dissolved organic matter
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diagnostic"
# "Diagnostic (Martin profile)"
# "Diagnostic (Balast)"
# "Prognostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Tracers --> Particules
Particulate carbon properties in ocean biogeochemistry
13.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is particulate carbon represented in ocean biogeochemistry?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.types_if_prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "POC"
# "PIC (calcite)"
# "PIC (aragonite"
# "BSi"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Types If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.N
If prognostic, type(s) of particulate matter taken into account
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No size spectrum used"
# "Full size spectrum"
# "Discrete size classes (specify which below)"
# TODO - please enter value(s)
Explanation: 13.3. Size If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, describe whether a particle size spectrum is used to represent the distribution of particles in the water volume
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_discrete')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13.4. Size If Discrete
Is Required: FALSE Type: STRING Cardinality: 0.1
If prognostic and discrete size, describe which size classes are used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.sinking_speed_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Function of particule size"
# "Function of particule type (balast)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.5. Sinking Speed If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, the method used to calculate the sinking speed of particles
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.carbon_isotopes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "C13"
# "C14)"
# TODO - please enter value(s)
Explanation: 14. Tracers --> Dic Alkalinity
DIC and alkalinity properties in ocean biogeochemistry
14.1. Carbon Isotopes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which carbon isotopes are modelled (C13, C14)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.abiotic_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 14.2. Abiotic Carbon
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is abiotic carbon modelled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.alkalinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Prognostic"
# "Diagnostic)"
# TODO - please enter value(s)
Explanation: 14.3. Alkalinity
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is alkalinity modelled ?
End of explanation |
12,548 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Landcover factor exploration
This notebook explores the relationship between the soundscape power and contributing land cover area for sounds in a pumilio database.
Required packages
pandas <br />
numpy <br />
matplotlib <br />
pymilio <br />
colour
Import statements
Step1: Connect to database
Step2: ...
Step3: sort on landcover percentage
Step4: NDSI | Python Code:
import pandas
from Pymilio import database
import numpy as np
from colour import Color
import matplotlib.pylab as plt
%matplotlib inline
Explanation: Landcover factor exploration
This notebook explores the relationship between the soundscape power and contributing land cover area for sounds in a pumilio database.
Required packages
pandas <br />
numpy <br />
matplotlib <br />
pymilio <br />
colour
Import statements
End of explanation
db = database.Pymilio_db_connection(user='pumilio',
database='pumilio',
read_default_file='~/.my.cnf.pumilio')
Sounds = db.fetch_as_pandas_df(table='Sounds', fields=['SoundID', 'SiteID', 'ColID']).set_index('SoundID')
Sites = db.fetch_as_pandas_df(table='Sites',
fields=['SiteID', 'ID', 'SiteName'],
where='ID > 0 AND ID <= 30').set_index('ID')
LandcoverTypes = db.fetch_as_pandas_df(table='LandcoverTypes',
fields=['*']).set_index('ID')
Explanation: Connect to database
End of explanation
LandcoverArea = db.fetch_as_pandas_df(table='LandcoverAreas',
fields=['*'],
where='IncludedArea = "500m"').set_index('ID')
landcover_area = LandcoverArea.join(Sites.drop('SiteID', axis=1), on='SiteID', how='right').sort_values(by='SiteID')
full_area = 775665.717
Explanation: ...
End of explanation
landcover_area_nosort = landcover_area
landcover_area.sort_values(by=['9', '2'],
ascending=[False, True],
inplace=True)
landcover_area['SiteID'].as_matrix()
# all categories
plt.figure(figsize=(15, 5))
bar_width = 0.9
ID = np.array([ n for n in range(len(landcover_area)) ])
SiteIDs = landcover_area['SiteID'].as_matrix()
left = ID + 0.05
height = np.zeros(len(landcover_area))
bottom = np.zeros(len(landcover_area))
for index, column in landcover_area.ix[:,'1':'15'].iteritems():
height = column.as_matrix()
plt.bar(left=left,
height=height,
bottom=bottom,
width=bar_width,
color=LandcoverTypes['Color'][int(index)],
edgecolor=None,
linewidth=0)
bottom = bottom + height
plt.xlim(0, 30)
plt.ylim(0, full_area)
plt.xlabel('Sites')
plt.ylabel('Area (square meters)')
plt.title('Landcover')
xticks = ID + 0.5
xticklabels = landcover_area['SiteName'].as_matrix()
xticklabels = [ "{0} - {1}".format(xticklabels[i], SiteIDs[i]) for i in ID ]
plt.xticks(xticks, xticklabels, rotation='vertical')
plt.show()
Explanation: sort on landcover percentage
End of explanation
IndexNDSI = db.fetch_as_pandas_df(table='IndexNDSI', fields=['Sound', 'ndsi_left', 'ndsi_right', 'biophony_left', 'biophony_right', 'anthrophony_left', 'anthrophony_right']).set_index('Sound')
ndsi = IndexNDSI.join(Sounds).join(Sites.drop('SiteID', axis=1), on='SiteID')
ndsi_collection1 = ndsi.groupby('ColID').get_group(1)
ndsi_collection1 = ndsi_collection1.rename(columns={"SiteID": "ID"})
ndsi_collection1_byID = ndsi_collection1.groupby('ID')
xticklabels = landcover_area['SiteName'].as_matrix()
xticklabels = [ "{0} - {1}".format(xticklabels[i], SiteIDs[i]) for i in ID ]
plt.figure(figsize=(15,10))
for name, group in ndsi_collection1_byID:
x = (group['ID'].as_matrix() - 100) - 0.1
y = group['biophony_left'].as_matrix()
plt.plot(x, y, 'r-')
plt.scatter(x, y, color='red', marker='.')
x = (group['ID'].as_matrix() - 100) + 0.1
y = group['biophony_right'].as_matrix()
plt.plot(x, y, 'b-')
plt.scatter(x, y, color='blue', marker='.')
plt.xlim(0, 31)
plt.ylim(0, 2.5)
plt.xlabel('Sites')
plt.ylabel('biophony')
plt.title('biophony_left and biophony_right')
xticks = [i for i in range(1, len(ndsi_collection1_byID)+1)]
xticklabels = Sites.sort_index()['SiteName'].as_matrix()
xticklabels = ["{0} - {1}".format(xticklabels[i-1], i) for i in xticks]
plt.xticks(xticks, xticklabels, rotation='vertical')
plt.grid()
Explanation: NDSI
End of explanation |
12,549 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Exclusive OR
Following symbol are used to explain Exclusive OR in logic, or mathematics.
$A{\veebar}B \ A{\oplus}B$
logical operation
| A | B | $A{\oplus}B$ |
|---|---|---|
| F | F | F |
| F | T | T |
| T | F | T |
| T | T | F |
$X \oplus False(0) = X \ X \oplus True(1) = \lnot X$
Commutative low, Associative low
Like Addition or Multiplication, commulative low and associative low can be applied to XOR.
Commutative low
$A \oplus B = B \oplus A$
Assciative low
$A \oplus (B \oplus C) = (A \oplus B) \oplus C$
| A | B | C | $A{\oplus}(B{\oplus}C)$ | $(A{\oplus}B){\oplus}C$ |
|---|---|---|---|---|
| F | F | F | F | F |
| F | F | T | T | T |
| F | T | F | T | T |
| F | T | T | F | F |
| T | F | F | T | T |
| T | F | T | F | F |
| T | T | F | F | F |
| T | T | T | T | T |
Step1: Find general rule from specific case
What will be returned for 1 True $\oplus$ 100 False
What will be returned for 2 True $\oplus$ 100 False
What will be returned for 3 True $\oplus$ 100 False
What will be returned for 101 True $\oplus$ 10000 False
What is the general rule ?
Bit operation
When 2 integers are calculated with XOR operator, each bit is calculated separately. (Same as AND, OR, NOT)
```
10(0b1010) xor 6(0b0110) = 12 (0b1100)
10(0b1010) and 6(0b0110) = 2 (0b0010)
10(0b1010) or 6(0b0110) = 14 (0b1110)
NOT 10(0b1010) = 5 (0b0101)
0b1010 0b1010 0b1010 0b1010
0b0110 0b0110 0b0110
-(XOR)- -(AND)- -(OR)- -(NOT)-
0b1100 0b0010 0b1110 0b0101
```
Step2: XOR usage
Bit reverse (ex. 32bit integer, remain upper 16bit, reverse lower 16bit)
CPU register 0 clear (Often used in assemblar)
Step3: XOR parity recalculation
Initialization | Python Code:
# In Python, XOR operator is ^
for a in (False, True):
for b in (False,True):
for c in (False, True):
# print (a, b, c, a^(b^c), (a^b)^c)
print ("{!r:5} {!r:5} {!r:5} | {!r:5} {!r:5}".format(a, b, c, a^(b^c), (a^b)^c))
Explanation: Exclusive OR
The following symbols are used to denote Exclusive OR in logic and mathematics.
$A{\veebar}B \ A{\oplus}B$
logical operation
| A | B | $A{\oplus}B$ |
|---|---|---|
| F | F | F |
| F | T | T |
| T | F | T |
| T | T | F |
$X \oplus False(0) = X \ X \oplus True(1) = \lnot X$
Commutative law, Associative law
Like addition or multiplication, the commutative and associative laws apply to XOR.
Commutative law
$A \oplus B = B \oplus A$
Associative law
$A \oplus (B \oplus C) = (A \oplus B) \oplus C$
| A | B | C | $A{\oplus}(B{\oplus}C)$ | $(A{\oplus}B){\oplus}C$ |
|---|---|---|---|---|
| F | F | F | F | F |
| F | F | T | T | T |
| F | T | F | T | T |
| F | T | T | F | F |
| T | F | F | T | T |
| T | F | T | F | F |
| T | T | F | F | F |
| T | T | T | T | T |
End of explanation
# CONSTANT is not supported in Python, Upper case variable is typically used as Constant
INT10 = 0b1010
INT06 = 0b0110
REV_MASK = 0b1111
print('INT10:', INT10, 'INT06:', INT06)
print('Bit XOR:', INT10^INT06)
print('Bit AND:', INT10&INT06)
print('Bit OR: ', INT10|INT06)
print('Bit NOT', INT10, ':', (~INT10)&0x0f)
print('Reverse INT10:', INT10^REV_MASK)
print('Reverse INT06:', INT06^REV_MASK)
Explanation: Find general rule from specific case
What will be returned for 1 True $\oplus$ 100 False
What will be returned for 2 True $\oplus$ 100 False
What will be returned for 3 True $\oplus$ 100 False
What will be returned for 101 True $\oplus$ 10000 False
What is the general rule ?
Bit operation
When 2 integers are calculated with XOR operator, each bit is calculated separately. (Same as AND, OR, NOT)
```
10(0b1010) xor 6(0b0110) = 12 (0b1100)
10(0b1010) and 6(0b0110) = 2 (0b0010)
10(0b1010) or 6(0b0110) = 14 (0b1110)
NOT 10(0b1010) = 5 (0b0101)
0b1010 0b1010 0b1010 0b1010
0b0110 0b0110 0b0110
-(XOR)- -(AND)- -(OR)- -(NOT)-
0b1100 0b0010 0b1110 0b0101
```
End of explanation
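All of the questions above follow a single rule: XOR-folding a collection of booleans returns True exactly when the number of True values is odd. A short check of that rule (functools.reduce is used here only for this illustration):
```
from functools import reduce
from operator import xor

def xor_all(values):
    # Fold XOR over the values; the result is True iff the count of True is odd
    return reduce(xor, values, False)

print(xor_all([True] * 1 + [False] * 100))      # True  (1 is odd)
print(xor_all([True] * 2 + [False] * 100))      # False (2 is even)
print(xor_all([True] * 3 + [False] * 100))      # True
print(xor_all([True] * 101 + [False] * 10000))  # True  (101 is odd)
```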
# XOR data recovery test, Create parity from 3 random data, Recover any data from other data and parity
import random
data = [random.randrange(1000000) for i in range(3)]
print(data)
parity = 0
for x in data:
parity ^= x
print('XOR of all data is:', parity)
print('DATA[0]:', data[1]^data[2]^parity)
print('DATA[1]:', data[0]^data[2]^parity)
print('DATA[2]:', data[0]^data[1]^parity)
print('Parity: ', data[0]^data[1]^data[2])
print('XOR of all data + Parity:', data[0]^data[1]^data[2]^parity)
Explanation: XOR usage
Bit reverse (ex. 32bit integer, remain upper 16bit, reverse lower 16bit)
CPU register 0 clear (often used in assembler): $X \oplus X = 0$
Encryption ($Original \oplus Key = Encrypted \rightarrow Encrypted \oplus Key = Original$)
Parity (RAID5 etc, N HDD with (N-1) Capacity, If one of the HDD is broken, its data can be recovered from other data)
XOR parity
$A \oplus A = 0$
$A \oplus B = 0 \rightarrow A = B$
$A \oplus B = C \rightarrow A \oplus B \oplus C = 0 \rightarrow A = B \oplus C$
$A \oplus B \oplus C \oplus ... Z = 0 \rightarrow X =$ XOR of other DATA
End of explanation
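Of the usages listed above, the encryption idea is the quickest to demonstrate: XORing with the same key twice restores the original value. This is a toy illustration only, not a secure cipher:
```
message = 0b10110010
key     = 0b01101100
encrypted = message ^ key      # "encrypt"
restored  = encrypted ^ key    # applying the same key again decrypts
print(bin(encrypted), bin(restored), restored == message)
```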
import random
data = [random.randrange(1000000) for i in range(5)]
print(data)
# Initialize Parity
parity = 0
for x in data:
parity ^= x
# Update data[2]
olddata = data[2]
newdata = random.randrange(1000000)
print('data[2] is updated from:', olddata, 'to', newdata)
data[2] = newdata
diff = olddata ^ newdata
# Calc new parity
newparity = parity ^ diff
print('old parity:', parity)
parity = newparity
print('new parity:', parity)
# Check if parity is correct
chk_parity = 0
for x in data:
chk_parity ^= x
print('updated parity:', chk_parity)
Explanation: XOR parity recalculation
Initialization: XOR of every data
Update:
Calculate $OldData \oplus NewData = Diff$
$NewParity=OldParity \oplus Diff$
End of explanation |
12,550 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The purpose of this notebook is twofold. First, it demonstrates the basic functionality of PyLogit for estimating Mixed Logit models. Secondly, it compares the estimation results for a Mixed Logit model from PyLogit and MLogit on a common dataset and model specification. The dataset is the "Electricity" dataset. Both the dataset and the model specification come from Kenneth Train's exercises. See <a href=https://cran.r-project.org/web/packages/mlogit/vignettes/Exercises.pdf>this mlogit pdf</a>.
Step1: 1. Load the Electricity Dataset
Step2: 2. Clean the dataset
Step3: 6. Specify and Estimate the Mixed Logit Model
6a. Specify the Model
Step4: 6b. Estimate the model
Compared to estimating a Multinomial Logit Model, creating Mixed Logit models requires the declaration of more arguments.
In particular, we have to specify the identification column over which the coefficients will be randomly distributed. Usually, this is the "observation id" for our dataset. While it is unfortunately named (for now), the "obs_id_col" argument should be used to specify the id of each choice situation. The "mixing_id_col" should be used to specify the id of each unit of observation over which the coefficients are believed to be randomly distributed.
At the moment, PyLogit does not support estimation of Mixed Logit models where some coefficients are mixed over one unit of observation (such as individuals) and other coefficients are mixed over a different unit of observation (such as households).
Beyond, specification of the mixing_id_col, one must also specify the variables that will be treated as random variables. This is done by passing a list containing the names of the coefficients in the model. Note, the strings in the passed list must be present in one of the lists within "names.values()".
When estimating the mixed logit model, we must specify the number of draws to be taken from the distributions of the random coefficients. This is done through the "num_draws" argument of the "fit_mle()" function. Additionally, we can specify a random seed so the results of our estimation are reproducible. This is done through the "seed" argument of the "fit_mle()" function. Finally, the initial values argument should specify enough initial values for the original index coefficients as well as the standard deviation values of the coefficients that are being treated as random variables. | Python Code:
from collections import OrderedDict # For recording the model specification
import pandas as pd # For file input/output
import numpy as np # For vectorized math operations
import pylogit as pl # For choice model estimation
Explanation: The purpose of this notebook is twofold. First, it demonstrates the basic functionality of PyLogit for estimating Mixed Logit models. Secondly, it compares the estimation results for a Mixed Logit model from PyLogit and MLogit on a common dataset and model specification. The dataset is the "Electricity" dataset. Both the dataset and the model specification come from Kenneth Train's exercises. See <a href=https://cran.r-project.org/web/packages/mlogit/vignettes/Exercises.pdf>this mlogit pdf</a>.
End of explanation
# Load the raw electricity data
long_electricity = pd.read_csv("../data/electricity_r_data_long.csv")
long_electricity.head().T
Explanation: 1. Load the Electricity Dataset
End of explanation
# Make sure that the choice column contains ones and zeros as opposed
# to true and false
long_electricity["choice"] = long_electricity["choice"].astype(int)
# List the variables that are the index variables
index_var_names = ["pf", "cl", "loc", "wk", "tod", "seas"]
# Transform all of the index variable columns to have float dtypes
for col in index_var_names:
long_electricity[col] = long_electricity[col].astype(float)
Explanation: 2. Clean the dataset
End of explanation
# Create the model's specification dictionary and variable names dictionary
# NOTE: - Keys should be variables within the long format dataframe.
# The sole exception to this is the "intercept" key.
# - For the specification dictionary, the values should be lists
# or lists of lists. Within a list, or within the inner-most
# list should be the alternative ID's of the alternative whose
# utility specification the explanatory variable is entering.
example_specification = OrderedDict()
example_names = OrderedDict()
# Note that the names used below are simply for consistency with
# the coefficient names given in the mlogit vignette.
for col in index_var_names:
example_specification[col] = [[1, 2, 3, 4]]
example_names[col] = [col]
Explanation: 6. Specify and Estimate the Mixed Logit Model
6a. Specify the Model
End of explanation
# Provide the module with the needed input arguments to create
# an instance of the Mixed Logit model class.
# Note that "chid" is used as the obs_id_col because "chid" is
# the choice situation id.
# Currently, the obs_id_col argument name is unfortunate because
# in the most general of senses, it refers to the situation id.
# In panel data settings, the mixing_id_col argument is what one
# would generally think of as a "observation id".
# For mixed logit models, the "mixing_id_col" argument specifies
# the units of observation that the coefficients are randomly
# distributed over.
example_mixed = pl.create_choice_model(data=long_electricity,
alt_id_col="alt",
obs_id_col="chid",
choice_col="choice",
specification=example_specification,
model_type="Mixed Logit",
names=example_names,
mixing_id_col="id",
mixing_vars=index_var_names)
# Note 2 * len(index_var_names) is used because we are estimating
# both the mean and standard deviation of each of the random coefficients
# for the listed index variables.
example_mixed.fit_mle(init_vals=np.zeros(2 * len(index_var_names)),
num_draws=600,
seed=123)
# Look at the estimated results
example_mixed.get_statsmodels_summary()
Explanation: 6b. Estimate the model
Compared to estimating a Multinomial Logit Model, creating Mixed Logit models requires the declaration of more arguments.
In particular, we have to specify the identification column over which the coefficients will be randomly distributed. Usually, this is the "observation id" for our dataset. While it is unfortunately named (for now), the "obs_id_col" argument should be used to specify the id of each choice situation. The "mixing_id_col" should be used to specify the id of each unit of observation over which the coefficients are believed to be randomly distributed.
At the moment, PyLogit does not support estimation of Mixed Logit models where some coefficients are mixed over one unit of observation (such as individuals) and other coefficients are mixed over a different unit of observation (such as households).
Beyond, specification of the mixing_id_col, one must also specify the variables that will be treated as random variables. This is done by passing a list containing the names of the coefficients in the model. Note, the strings in the passed list must be present in one of the lists within "names.values()".
When estimating the mixed logit model, we must specify the number of draws to be taken from the distributions of the random coefficients. This is done through the "num_draws" argument of the "fit_mle()" function. Additionally, we can specify a random seed so the results of our estimation are reproducible. This is done through the "seed" argument of the "fit_mle()" function. Finally, the initial values argument should specify enough initial values for the original index coefficients as well as the standard deviation values of the coefficients that are being treated as random variables.
End of explanation |
12,551 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Facies Classification Solution By Team_BGC
Cheolkyun Jeong and Ping Zhang From Team_BGC
Import Header
Step1: 1. Data Preprocessing
1) Filtered data preparation
After the initial data validation, we find that the NM_M input is a key differentiator between the non-marine facies (sandstone, c_siltstone, and f_siltstone) and the marine facies (marine_silt_shale, mudstone, wackestone, dolomite, packstone, and bafflestone) in this field. Our team decided to use this classifier aggressively and prepared a filtered dataset that removes the conflicting outliers.
Step2: Using Full data to train
Step3: Add Missing PE by following AR4 Team
Step4: 2. Feature Selection
Log Plot of Facies
Filtered Data
Step5: Filtered facies
Step6: Normalization
Step7: 3. Prediction Model
Accuracy
Step8: Augment Features
Step9: SVM
Step10: 4. Result Analysis
Prepare test data
Step11: 5. Using Tensorflow
Filtered Data Model
Step12: Result from DNN
Step13: Post Processing
Step14: Find best model by setting different parameters | Python Code:
##### import basic function
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
##### import stuff from scikit learn
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from sklearn.metrics import confusion_matrix, make_scorer, f1_score, accuracy_score, recall_score, precision_score
Explanation: Facies Classification Solution By Team_BGC
Cheolkyun Jeong and Ping Zhang From Team_BGC
Import Header
End of explanation
# Input file paths
facies_vector_path = 'facies_vectors.csv'
train_path = 'training_data.csv'
test_path = 'validation_data_nofacies.csv'
# Read training data to dataframe
#training_data = pd.read_csv(train_path)
Explanation: 1. Data Preprocessing
1) Filtered data preparation
After the initial data validation, we find that the NM_M input is a key differentiator between the non-marine facies (sandstone, c_siltstone, and f_siltstone) and the marine facies (marine_silt_shale, mudstone, wackestone, dolomite, packstone, and bafflestone) in this field. Our team decided to use this classifier aggressively and prepared a filtered dataset that removes the conflicting outliers.
End of explanation
# 1=sandstone 2=c_siltstone 3=f_siltstone # 4=marine_silt_shale
#5=mudstone 6=wackestone 7=dolomite 8=packstone 9=bafflestone
facies_colors = ['#F4D03F', '#F5B041', '#DC7633','#A569BD',
'#000000', '#000080', '#2E86C1', '#AED6F1', '#196F3D']
feature_names = ['GR', 'ILD_log10', 'DeltaPHI', 'PHIND', 'PE', 'NM_M', 'RELPOS']
facies_labels = ['SS', 'CSiS', 'FSiS', 'SiSh', 'MS',
'WS', 'D','PS', 'BS']
#facies_color_map is a dictionary that maps facies labels
#to their respective colors
training_data = pd.read_csv(facies_vector_path)
X = training_data[feature_names].values
y = training_data['Facies'].values
well = training_data['Well Name'].values
depth = training_data['Depth'].values
facies_color_map = {}
for ind, label in enumerate(facies_labels):
facies_color_map[label] = facies_colors[ind]
def label_facies(row, labels):
return labels[ row['Facies'] -1]
training_data.loc[:,'FaciesLabels'] = training_data.apply(lambda row: label_facies(row, facies_labels), axis=1)
training_data.describe()
# Filtering out some outliers
j = []
for i in range(len(training_data)):
if ((training_data['NM_M'].values[i]==2)and ((training_data['Facies'].values[i]==1)or(training_data['Facies'].values[i]==2)or(training_data['Facies'].values[i]==3))):
j.append(i)
elif((training_data['NM_M'].values[i]==1)and((training_data['Facies'].values[i]!=1)and(training_data['Facies'].values[i]!=2)and(training_data['Facies'].values[i]!=3))):
j.append(i)
training_data_filtered = training_data.drop(training_data.index[j])
print(np.shape(training_data_filtered))
Explanation: Using Full data to train
End of explanation
#X = training_data_filtered[feature_names].values
# Testing without filtering
X = training_data[feature_names].values
reg = RandomForestRegressor(max_features='sqrt', n_estimators=50)
# DataImpAll = training_data_filtered[feature_names].copy()
DataImpAll = training_data[feature_names].copy()
DataImp = DataImpAll.dropna(axis = 0, inplace=False)
Ximp=DataImp.loc[:, DataImp.columns != 'PE']
Yimp=DataImp.loc[:, 'PE']
reg.fit(Ximp, Yimp)
X[np.array(DataImpAll.PE.isnull()),4] = reg.predict(DataImpAll.loc[DataImpAll.PE.isnull(),:].drop('PE',axis=1,inplace=False))
Explanation: Add Missing PE by following AR4 Team
End of explanation
#count the number of unique entries for each facies, sort them by
#facies number (instead of by number of entries)
#facies_counts_filtered = training_data_filtered['Facies'].value_counts().sort_index()
facies_counts = training_data['Facies'].value_counts().sort_index()
#use facies labels to index each count
#facies_counts_filtered.index = facies_labels
facies_counts.index = facies_labels
#facies_counts_filtered.plot(kind='bar',color=facies_colors,
# title='Distribution of Filtered Training Data by Facies')
facies_counts.plot(kind='bar',color=facies_colors,
title='Distribution of Filtered Training Data by Facies')
#facies_counts_filtered
#training_data_filtered.columns
#facies_counts_filtered
training_data.columns
facies_counts
Explanation: 2. Feature Selection
Log Plot of Facies
Filtered Data
End of explanation
correct_facies_labels = training_data['Facies'].values
feature_vectors = training_data.drop(['Formation', 'Well Name', 'Depth','Facies','FaciesLabels'], axis=1)
Explanation: Filtered facies
End of explanation
from sklearn import preprocessing
from sklearn.cross_validation import train_test_split
scaler = preprocessing.StandardScaler().fit(X)
scaled_features = scaler.transform(X)
Explanation: Normalization
End of explanation
def accuracy(conf):
total_correct = 0.
nb_classes = conf.shape[0]
for i in np.arange(0,nb_classes):
total_correct += conf[i][i]
acc = total_correct/sum(sum(conf))
return acc
adjacent_facies = np.array([[1], [0,2], [1], [4], [3,5], [4,6,7], [5,7], [5,6,8], [6,7]])
def accuracy_adjacent(conf, adjacent_facies):
nb_classes = conf.shape[0]
total_correct = 0.
for i in np.arange(0,nb_classes):
total_correct += conf[i][i]
for j in adjacent_facies[i]:
total_correct += conf[i][j]
return total_correct / sum(sum(conf))
Explanation: 3. Prediction Model
Accuracy
End of explanation
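A quick sanity check of the accuracy helper on a toy 2x2 confusion matrix (illustrative numbers only):
```
toy_conf = np.array([[8, 2],
                     [1, 9]])
print(accuracy(toy_conf))  # (8 + 9) / 20 = 0.85
```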
# HouMath Team algorithm
# Feature windows concatenation function
def augment_features_window(X, N_neig):
# Parameters
N_row = X.shape[0]
N_feat = X.shape[1]
# Zero padding
X = np.vstack((np.zeros((N_neig, N_feat)), X, (np.zeros((N_neig, N_feat)))))
# Loop over windows
X_aug = np.zeros((N_row, N_feat*(2*N_neig+1)))
for r in np.arange(N_row)+N_neig:
this_row = []
for c in np.arange(-N_neig,N_neig+1):
this_row = np.hstack((this_row, X[r+c]))
X_aug[r-N_neig] = this_row
return X_aug
# HouMath Team algorithm
# Feature gradient computation function
def augment_features_gradient(X, depth):
# Compute features gradient
d_diff = np.diff(depth).reshape((-1, 1))
d_diff[d_diff==0] = 0.001
X_diff = np.diff(X, axis=0)
X_grad = X_diff / d_diff
# Compensate for last missing value
X_grad = np.concatenate((X_grad, np.zeros((1, X_grad.shape[1]))))
return X_grad
# HouMath Team algorithm
# Feature augmentation function
def augment_features(X, well, depth, N_neig=1):
# Augment features
X_aug = np.zeros((X.shape[0], X.shape[1]*(N_neig*2+2)))
for w in np.unique(well):
w_idx = np.where(well == w)[0]
X_aug_win = augment_features_window(X[w_idx, :], N_neig)
X_aug_grad = augment_features_gradient(X[w_idx, :], depth[w_idx])
X_aug[w_idx, :] = np.concatenate((X_aug_win, X_aug_grad), axis=1)
# Find padded rows
padded_rows = np.unique(np.where(X_aug[:, 0:7] == np.zeros((1, 7)))[0])
return X_aug, padded_rows
X_aug, padded_rows = augment_features(scaled_features, well, depth)
X_aug.shape
from sklearn.cross_validation import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X_aug, y, test_size=0.3, random_state=16)
X_train_full, X_test_zero, y_train_full, y_test_full = train_test_split(X_aug, y, test_size=0.0, random_state=42)
X_train_full.shape
Explanation: Augment Features
End of explanation
from classification_utilities import display_cm, display_adj_cm
import sklearn.svm as svm
clf_filtered = svm.SVC(C=10, gamma=1)
clf_filtered.fit(X_train, y_train)
#predicted_labels_filtered = clf_filtered.predict(X_test_filtered)
predicted_labels = clf_filtered.predict(X_test)
cv_conf_svm = confusion_matrix(y_test, predicted_labels)
print('Optimized facies classification accuracy = %.2f' % accuracy(cv_conf_svm))
print('Optimized adjacent facies classification accuracy = %.2f' % accuracy_adjacent(cv_conf_svm, adjacent_facies))
display_cm(cv_conf_svm, facies_labels,display_metrics=True, hide_zeros=True)
Explanation: SVM
End of explanation
well_data = pd.read_csv('validation_data_nofacies.csv')
well_data['Well Name'] = well_data['Well Name'].astype('category')
well_features = well_data.drop(['Formation', 'Well Name', 'Depth'], axis=1)
# Prepare test data
well_ts = well_data['Well Name'].values
depth_ts = well_data['Depth'].values
X_ts = well_data[feature_names].values
X_ts = scaler.transform(X_ts)
# Augment features
X_ts, padded_rows = augment_features(X_ts, well_ts, depth_ts)
# Using all data and optimize parameter to train the data
clf_filtered = svm.SVC(C=10, gamma=1)
clf_filtered.fit(X_train_full, y_train_full)
#clf_filtered.fit(X_train_filtered, y_train_filtered)
y_pred = clf_filtered.predict(X_ts)
well_data['Facies'] = y_pred
well_data
well_data.to_csv('predict_result_svm_full_data.csv')
Explanation: 4. Result Analysis
Prepare test data
End of explanation
X_train.shape
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import tensorflow as tf
# Specify that all features have real-value data
# feature_columns_filtered = [tf.contrib.layers.real_valued_column("", dimension=7)]
feature_columns_filtered = [tf.contrib.layers.real_valued_column("", dimension=28)]
# Build DNN
classifier_filtered = tf.contrib.learn.DNNClassifier(feature_columns=feature_columns_filtered,
hidden_units=[7,14],
n_classes=10)
classifier_filtered.fit(x=X_train,y=y_train,steps=5000)
y_predict = []
predictions = classifier_filtered.predict(x=X_test)
for i, p in enumerate(predictions):
y_predict.append(p)
#print("Index %s: Prediction - %s, Real - %s" % (i + 1, p, y_test_filtered[i]))
# Evaluate accuracy.
accuracy_score = classifier_filtered.evaluate(x=X_test, y=y_test)["accuracy"]
print('Accuracy: {0:f}'.format(accuracy_score))
cv_conf_dnn = confusion_matrix(y_test, y_predict)
print('Optimized facies classification accuracy = %.2f' % accuracy(cv_conf_dnn))
print('Optimized adjacent facies classification accuracy = %.2f' % accuracy_adjacent(cv_conf_dnn, adjacent_facies))
display_cm(cv_conf_dnn, facies_labels,display_metrics=True, hide_zeros=True)
Explanation: 5. Using Tensorflow
Filtered Data Model
End of explanation
multirun=5
dnn_result_array = np.zeros((multirun,830))
for j in range(0,multirun):
classifier_filtered.fit(x=X_train_full,
y=y_train_full,
steps=10000)
predictions = classifier_filtered.predict(X_ts)
y_predict_filtered = []
for i, p in enumerate(predictions):
y_predict_filtered.append(p)
dnn_result_array[j]=y_predict_filtered
final_prediction = []
from scipy.stats import mode
for k in range(0,830):
pp = mode(dnn_result_array[0:multirun,k])[0][0]
final_prediction.append(pp)
well_data['Facies'] = final_prediction
well_data
well_data.to_csv('predict_result_dnn_ModeResult.csv')
#dnn_result_array.to_csv('dnn_result_array.csv')
final_prediction = []
for k in range(0,830):
pp = dnn_result_array[4,k]
final_prediction.append(pp)
well_data['Facies'] = final_prediction
well_data
well_data.to_csv('predict_result_dnn_Result5.csv')
final_prediction = []
from scipy.stats import mode
for k in range(0,830):
pp = mode(dnn_result_array[0:multirun,k])[0][0]
final_prediction.append(pp)
well_data['Facies'] = final_prediction
well_data
well_data.to_csv('predict_result_dnn_ModeResult.csv')
Explanation: Result from DNN
End of explanation
well_data["Well Name"].value_counts()
idx_s = []
idx_c = []
for i in range(len(well_data)):
if (well_data["Well Name"].values[i] == "STUART"):
idx_s.append(i)
else:
idx_c.append(i)
well_s = well_data.drop(well_data.index[idx_c])
well_c = well_data.drop(well_data.index[idx_s])
def smooth_results(w_data):
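    """Smooth a predicted facies log: a run shorter than 4 samples is
    overwritten with the last facies that persisted for at least 4 samples,
    provided that same facies resumes immediately after the short run."""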
data = w_data.copy()
first_face = 0
next_face = 0
face_len = 0
last_face = 0
for i in range(len(data)):
if(i==0):
first_face = data["Facies"].values[i]
continue
next_face = data["Facies"].values[i]
if (first_face == next_face):
face_len = face_len+1
if (face_len >=4):
last_face = first_face
if (first_face != next_face):
if(last_face == next_face) and (face_len <4):
for j in range(i-face_len, i):
data["Facies"].values[j]=last_face
face_len = 1
else:
face_len = 1
first_face = next_face
return data
well_s_s = smooth_results(well_s)
well_c_s = smooth_results(well_c)
smooth_result = well_s_s
smooth_result = smooth_result.append(well_c_s)
smooth_result.to_csv('predict_result_dnn_full_data_smooth.csv')
Explanation: Post Processing
End of explanation
def dnn_prediction(dnn, s_num):
    # Train the given DNN classifier for s_num steps and score it on the hold-out set
    dnn.fit(x=X_train, y=y_train, steps=s_num)
    y_predict = []
    predictions = dnn.predict(x=X_test)
    for i, p in enumerate(predictions):
        y_predict.append(p)
    score = dnn.evaluate(x=X_test, y=y_test)["accuracy"]
    print('Accuracy: {0:f}'.format(score))
    cv_conf = confusion_matrix(y_test, y_predict)
    return score, cv_conf
Explanation: Find best model by setting different parameters
End of explanation |
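A usage sketch for the helper above (the candidate step counts and the reuse of classifier_filtered are assumptions for illustration, not tuned values): try a few training lengths and keep the best-scoring one.
```
# Hypothetical parameter sweep -- adjust the candidate step counts as needed.
best_score, best_steps = 0.0, None
for s_num in [2000, 5000, 10000]:
    score, conf = dnn_prediction(classifier_filtered, s_num)
    if score > best_score:
        best_score, best_steps = score, s_num
print('Best steps:', best_steps, 'accuracy:', best_score)
```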
12,552 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Digit Classification using K-Neighbours and Logistic Regression
http
Step1: What does our data look like?
This is how we represent a handwritten '0' character - values with a 0 are dark, and comparatively higher values are much lighter.
We have a . suffix, to indicate this is a floating point number, for more accuracy in computations.
Step2: Extract our input data (X digits), our target output data (Y digits) and the number of samples we will process.
Step3: Extract 90% of our available data as training data for the models.
Step4: Extract 10% of our available data as test data, to check the level of accuracy for the models.
Step5: Use the K-Neighbours algorithm to create a Classifier implementing the k-nearest neighbors vote.
Step6: Use the Logistic Regression algorithm
Step7: We can see from the outcomes KNN was better at predicting the target result, with ~96% accuracy. | Python Code:
from sklearn import datasets, neighbors, linear_model
digits = datasets.load_digits() # Retrieves digits dataset from scikit-learn
print(digits['DESCR'])
Explanation: Digit Classification using K-Neighbours and Logistic Regression
http://scikit-learn.org/stable/auto_examples/exercises/plot_digits_classification_exercise.html#sphx-glr-auto-examples-exercises-plot-digits-classification-exercise-py
Scikit-Learn includes a number of datasets to practice with, and many machine learning algorithms.
End of explanation
digits['images'][0]
import matplotlib.pyplot as plt
plt.gray()
plt.matshow(digits.images[0])
plt.matshow(digits.images[10])
plt.show()
for i in range(0,10):
plt.matshow(digits.images[i])
plt.show()
Explanation: What does our data look like?
This is how we represent a handwritten '0' character - values with a 0 are dark, and comparatively higher values are much lighter.
We have a . suffix, to indicate this is a floating point number, for more accuracy in computations.
End of explanation
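A small sanity check (assuming the standard scikit-learn digits layout): each row of digits.data is simply the corresponding 8x8 image flattened into 64 features.
import numpy as np
print(digits.data.shape, digits.images.shape)  # (1797, 64) vs (1797, 8, 8)
print(np.array_equal(digits.data[0], digits.images[0].ravel()))  # True if data is the flattened images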
X_digits = digits.data
X_digits
y_digits = digits.target
y_digits
n_samples = len(X_digits)
n_samples
Explanation: Extract our input data (X digits), our target output data (Y digits) and the number of samples we will process.
End of explanation
X_train = X_digits[:int(.9 * n_samples)]
X_train
y_train = y_digits[:int(.9 * n_samples)]
y_train
Explanation: Extract 90% of our available data as training data for the models.
End of explanation
X_test = X_digits[int(.9 * n_samples):]
y_test = y_digits[int(.9 * n_samples):]
X_test
Explanation: Extract 10% of our available data as test data, to check the level of accuracy for the models.
End of explanation
knn = neighbors.KNeighborsClassifier() # Retrieve the default K-Neighbours Classification algorithm
knn
fitting = knn.fit(X_train, y_train) # Train the algorithm on 90% of the samples
fitting
knn_score = fitting.score(X_test, y_test) # Score the algorithm on how well it fits the 10% of the data that was left out
print('KNN score: %f' % knn_score)
Explanation: Use the K-Neighbours algorithm to create a Classifier implementing the k-nearest neighbors vote.
End of explanation
logistic = linear_model.LogisticRegression()
logistic
log_regression_fitting = logistic.fit(X_train, y_train)
log_regression_fitting
log_regression_score = log_regression_fitting.score(X_test, y_test)
print('LogisticRegression score: %f' % log_regression_score)
Explanation: Use the Logistic Regression algorithm
End of explanation
print('KNN score: %f' % knn_score)
print('LGR score: %f' % log_regression_score)
Explanation: We can see from the outcomes KNN was better at predicting the target result, with ~96% accuracy.
End of explanation |
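As a hedged follow-up to this comparison, per-digit confusion matrices (computed from the classifiers already fitted above) show where each model loses its few percent of accuracy.
from sklearn.metrics import confusion_matrix
print(confusion_matrix(y_test, knn.predict(X_test)))       # rows: true digit, columns: predicted digit
print(confusion_matrix(y_test, logistic.predict(X_test)))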
12,553 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Departamento de Física - Faculdade de Ciências e Tecnologia da Universidade de Coimbra
Computational Physics - Worksheet 4 - Systems of Linear Equations
Rafael Isaque Santos - 2012144694 - BSc in Physics (Licenciatura em Física)
1 - Solving a system of linear equations $Ax = b$ with the Gauss elimination method
Step1: Gauss elimination method (without pivoting)
Step2: Backward Substitution
Step3: Forward Substitution
Step4: Gauss elimination method (with partial pivoting)
Step5: Functions to solve systems using first the method without pivoting and then the one with partial pivoting
Step6: Function to generate Hilbert matrices of order n
2 - Solve the Hilbert matrix of orders 3, 5 and 10.
Step7: The determinant came out as 2.164480500135833e-53, which is close to the value obtained by computing the determinant with NumPy
Exercise 3
Step8: 3 - Computing the inverse matrix using first Gauss elimination and then Backward Substitution.
The idea is to solve n systems for a square matrix of order n, taking the unit vectors as right-hand sides.
Step9: Using NumPy, the inverse of A is determined as
Step10: Exercise 4 - Solve a system to understand the need for pivoting | Python Code:
import numpy as np
Explanation: Departamento de Física - Faculdade de Ciências e Tecnologia da Universidade de Coimbra
Computational Physics - Worksheet 4 - Systems of Linear Equations
Rafael Isaque Santos - 2012144694 - BSc in Physics (Licenciatura em Física)
1 - Solving a system of linear equations $Ax = b$ with the Gauss elimination method
End of explanation
def gauss_elim_nopiv(n, matrix, b):
origin_m = np.array(matrix, dtype=float)
origin_b = np.array(b, dtype=float)
aux_m = np.array([[0.]*n]*n)
for k in range(0, n):
# print(origin_m, origin_b)
for i in range(k+1, n):
aux_m[i, k] = -origin_m[i, k] / origin_m[k, k]
origin_m[i, k] = 0
origin_b[i] = origin_b[i] + aux_m[i, k] * origin_b[k]
for j in range(k + 1, n):
origin_m[i, j] = origin_m[i, j] + aux_m[i, k] * origin_m[k, j]
determ = origin_m[0, 0]
for d in range(1, n):
determ *= origin_m[d, d]
return origin_m, aux_m, origin_b, determ
Explanation: Gauss elimination method (without pivoting)
End of explanation
def backward_sub(n, U, c):
x = np.zeros_like(c)
x[n-1] = c[n-1]/U[n-1, n-1]
for k in range(n-2, -1, -1):
s = 0
for j in range(k+1, n):
# print(k, j)
s += U[k, j] * x[j]
# print(k, j, U[k, j], x[j], s)
x[k] = (1/U[k, k]) * (c[k] - s)
# print(x[k], x)
return x
Explanation: Backward Substitution
End of explanation
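A quick sanity check of the two routines above on a small made-up system (not one from the worksheet), comparing the eliminated-and-back-substituted result against numpy.linalg.solve.
A_demo = np.array([[2.0, 1.0, -1.0],
                   [-3.0, -1.0, 2.0],
                   [-2.0, 1.0, 2.0]])
b_demo = np.array([8.0, -11.0, -3.0])
U_demo, _, c_demo, det_demo = gauss_elim_nopiv(3, A_demo, b_demo)
x_demo = backward_sub(3, U_demo, c_demo)
print(x_demo)                           # expected close to [2, 3, -1]
print(np.linalg.solve(A_demo, b_demo))
print(det_demo, np.linalg.det(A_demo))  # both should be about -1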
def forward_sub(n, L, b):
y = np.zeros_like(b, dtype=float)
y[0] = b[0] / L[0, 0]
for k in range(1, n):
soma = 0
for j in range(0, k): soma += L[k, j] * y[j]
y[k] = (b[k] - soma) / L[k, k]
return y
Explanation: Forward Substitution
End of explanation
def gauss_elim_partial(n, matrix, b):
origin_m = np.copy(matrix)
origin_b = np.copy(b)
for k in range(0, n - 1):
maxi = 0
km = k
for k1 in range(k, n):
if abs(origin_m[k1, k]) > maxi:
maxi = abs(origin_m[k1, k])
km = k1
if km != k:
origin_m[[k, km]] = origin_m[[km, k]]  # swap the full rows (fancy indexing copies, so the swap is safe)
origin_b[k], origin_b[km] = origin_b[km], origin_b[k]
U_A, aux_A, c_A, det_A = gauss_elim_nopiv(n, origin_m, origin_b)
return U_A, aux_A, c_A, det_A
Explanation: Gauss elimination method (with partial pivoting)
End of explanation
def solve_nopiv(matrix, b):
n = len(matrix)
u, aux, c, det = gauss_elim_nopiv(n, matrix, b)
x = backward_sub(n, u, c)
return x
def solve_partpiv(matrix, b):
n = len(matrix)
u, aux, c, det = gauss_elim_partial(n, matrix, b)
x = backward_sub(n, u, c)
return x
A = np.array([[1, 2, 1], [2, 3, 3], [-1, -3, 1]])
b = np.array([1, 3, 2])
A, b
gauss_elim_partial(3, A, b)
gauss_elim_nopiv(3, A, b)
U_A, aux_A, c_A, det_A = gauss_elim_nopiv(3, A, b)
print(U_A, aux_A, c_A, det_A)
x_A = backward_sub(3, U_A, c_A)
print(x_A)
# y_A = forward_sub(3, aux_A, c_A)
# print(y_A)
Explanation: Functions to solve systems using first the method without pivoting and then the one with partial pivoting
End of explanation
def hilbert_gen(n):
H = np.array([[1.]*n]*n)
H_b = []
for b in range(1, n+1): H_b.append(float(b))
for i in range(0, n):
for j in range(0, n):
H[i, j] = (1/((i + 1) + (j + 1) - 1))
return H, np.array(H_b)
Hil_3, Hil_3_b = hilbert_gen(3)
print(Hil_3, Hil_3_b)
print(np.linalg.solve(Hil_3, Hil_3_b))
print(np.linalg.det(Hil_3))
H3_S, H3_A, H3_B, H3_D = gauss_elim_nopiv(3, Hil_3, Hil_3_b)
backward_sub(3, H3_S, H3_B)
print(gauss_elim_nopiv(3, Hil_3, Hil_3_b))
Hil_5, Hil_5_b = hilbert_gen(5)
gauss_elim_nopiv(5, Hil_5, Hil_5_b)
Hil_10, Hil_10_b = hilbert_gen(10)
print(Hil_10, Hil_10_b)
print(np.linalg.solve(Hil_10, Hil_10_b))
print(np.linalg.det(Hil_10))
gauss_elim_nopiv(10, Hil_10, Hil_10_b)
Explanation: Function to generate Hilbert matrices of order n
2 - Solve the Hilbert matrix of orders 3, 5 and 10.
End of explanation
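The loss of accuracy for the higher-order Hilbert systems is usually explained by their conditioning; a short sketch printing the condition numbers of the matrices generated above makes the point.
for order in (3, 5, 10):
    H_order, _ = hilbert_gen(order)
    print(order, np.linalg.cond(H_order))  # the condition number grows extremely fast with the order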
A = np.array([[1, 2, 1], [2, 3, 3], [-1, -3, 1]])
b = np.array([1, 3, 2])
A, b
Explanation: The determinant came out as 2.164480500135833e-53, which is close to the value obtained by computing the determinant with NumPy
Exercise 3
End of explanation
def matrix_inverse(matrix):
n = len(matrix)
inv_matrix = [[] for _ in range(n)]
for i in range(0, n):
e = [0.0] * n
e[i] = 1.0
U, aux, c, det = gauss_elim_nopiv(n, matrix, e)
x = backward_sub(n, U, c)
inv_matrix[i] = x
inv_matrix = np.array(inv_matrix)
return inv_matrix.transpose()
A_inv = matrix_inverse(A)
print(A_inv)
print(A.dot(A_inv))
Explanation: 3 - Computing the inverse matrix using first Gauss elimination and then Backward Substitution.
The idea is to solve n systems for a square matrix of order n, taking the unit vectors as right-hand sides.
End of explanation
np.linalg.inv(A)
Explanation: Using NumPy, the inverse of A is determined as:
End of explanation
C = [[-10e-20, 1], [2, 1]]
cs = [1, 0]
solve_partpiv(C, cs)
x1 = -0.4999975
x2 = 0.999995
C = [[1, 2], [3, 4]]
D = [[1, 1], [3, 2]]
Explanation: Exercise 4 - Solve a system to understand the need for pivoting
End of explanation |
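A short sketch that makes the need for pivoting concrete: with a pivot of order 1e-19 in the first row, the elimination without pivoting loses the first unknown to round-off, while the version with a row exchange recovers the answer NumPy gives (this assumes the solve_partpiv helper defined above performs the row swap before eliminating).
eps = -10e-20
C_demo = np.array([[eps, 1.0], [2.0, 1.0]])
d_demo = np.array([1.0, 0.0])
print("without pivoting:", solve_nopiv(C_demo, d_demo))
print("with pivoting   :", solve_partpiv(C_demo, d_demo))
print("numpy           :", np.linalg.solve(C_demo, d_demo))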
12,554 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Euler Bernoulli Beam "solver"
The Euler-Bernoulli equation describes the relationship between the beam's deflection and the applied load
$$\frac{d^2}{dx^2}\left(EI\frac{d^2w}{dx^2}\right) = q \enspace .$$
The curve $w(x)$ describes the deflection of the beam at some point $x$, and $q$ is a distributed load.
This equation cannot be solved in this form in Sympy. Nevertheless, we can "trick" it to do it for us. Let us rewrite the equation as two equations
$$\begin{align}
&-\frac{d^2 M}{dx^2} = q \enspace ,\\
&- \frac{d^2w}{dx^2} = \frac{M}{EI} \enspace ,
\end{align}$$
where $M$ is the bending moment in the beam. We can then solve the two equations as if they had source terms and then couple the two solutions.
The code below does this
Step1: We want to be sure that this solution is ok. We replaced known values for $E$, $I$ and $q$ to check it.
Cantilever beam with end load
Step2: Cantilever beam with uniformly distributed load
Step3: Cantilever beam with exponential loading
Step4: Load written as a Taylor series and constant EI
We can prove that the general function is written as
Step5: Uniform load and varying cross-section
Step6: The shear stress would be | Python Code:
from sympy import *
%matplotlib notebook
init_printing()
x = symbols('x')
E, I = symbols('E I', positive=True)
C1, C2, C3, C4 = symbols('C1 C2 C3 C4')
w, M, q, f = symbols('w M q f', cls=Function)
EI = symbols('EI', cls=Function, nonnegative=True)
M_eq = -diff(M(x), x, 2) - q(x)
M_eq
M_sol = dsolve(M_eq, M(x)).rhs.subs([(C1, C3), (C2, C4)])
M_sol
w_eq = f(x) + diff(w(x),x,2)
w_eq
w_sol = dsolve(w_eq, w(x)).subs(f(x), M_sol/EI(x)).rhs
w_sol
Explanation: Euler Bernoulli Beam "solver"
The Euler-Bernoulli equation describes the relationship between the beam's deflection and the applied load
$$\frac{d^2}{dx^2}\left(EI\frac{d^2w}{dx^2}\right) = q \enspace .$$
The curve $w(x)$ describes the deflection of the beam at some point $x$, and $q$ is a distributed load.
This equation cannot be solved in this form in Sympy. Nevertheless, we can "trick" it to do it for us. Let us rewrite the equation as two equations
$$\begin{align}
&-\frac{d^2 M}{dx^2} = q \enspace ,\\
&- \frac{d^2w}{dx^2} = \frac{M}{EI} \enspace ,
\end{align}$$
where $M$ is the bending moment in the beam. We can then solve the two equations as if they had source terms and then couple the two solutions.
The code below does this
End of explanation
sub_list = [(q(x), 0), (EI(x), E*I)]
w_sol1 = w_sol.subs(sub_list).doit()
L, F = symbols('L F')
# Fixed end
bc_eq1 = w_sol1.subs(x, 0)
bc_eq2 = diff(w_sol1, x).subs(x, 0)
# Free end
bc_eq3 = diff(w_sol1, x, 2).subs(x, L)
bc_eq4 = diff(w_sol1, x, 3).subs(x, L) + F/(E*I)
[bc_eq1, bc_eq2, bc_eq3, bc_eq4]
constants = solve([bc_eq1, bc_eq2, bc_eq3, bc_eq4], [C1, C2, C3, C4])
constants
w_sol1.subs(constants).simplify()
Explanation: We want to be sure that this solution is ok. We replaced known values for $E$, $I$ and $q$ to check it.
Cantilever beam with end load
End of explanation
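A quick cross-check (a sketch, assuming the sign conventions used above): the tip deflection of an end-loaded cantilever is usually quoted as F*L**3/(3*E*I), and evaluating the solution just obtained at x = L should reproduce that value.
w_end = w_sol1.subs(constants).simplify()
print(w_end.subs(x, L))                               # tip deflection from the symbolic solution
print(simplify(w_end.subs(x, L) - F*L**3/(3*E*I)))    # expected to reduce to 0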
sub_list = [(q(x), 1), (EI(x), E*I)]
w_sol1 = w_sol.subs(sub_list).doit()
L = symbols('L')
# Fixed end
bc_eq1 = w_sol1.subs(x, 0)
bc_eq2 = diff(w_sol1, x).subs(x, 0)
# Free end
bc_eq3 = diff(w_sol1, x, 2).subs(x, L)
bc_eq4 = diff(w_sol1, x, 3).subs(x, L)
constants = solve([bc_eq1, bc_eq2, bc_eq3, bc_eq4], [C1, C2, C3, C4])
w_sol1.subs(constants).simplify()
Explanation: Cantilever beam with uniformly distributed load
End of explanation
sub_list = [(q(x), exp(x)), (EI(x), E*I)]
w_sol1 = w_sol.subs(sub_list).doit()
L = symbols('L')
# Fixed end
bc_eq1 = w_sol1.subs(x, 0)
bc_eq2 = diff(w_sol1, x).subs(x, 0)
# Free end
bc_eq3 = diff(w_sol1, x, 2).subs(x, L)
bc_eq4 = diff(w_sol1, x, 3).subs(x, L)
constants = solve([bc_eq1, bc_eq2, bc_eq3, bc_eq4], [C1, C2, C3, C4])
w_sol1.subs(constants).simplify()
Explanation: Cantilever beam with exponential loading
End of explanation
k = symbols('k', integer=True)
C = symbols('C1:4')
D = symbols('D', cls=Function)
w_sol1 = 6*(C1 + C2*x) - 1/(E*I)*(3*C3*x**2 + C4*x**3 -
6*Sum(D(k)*x**(k + 4)/((k + 1)*(k + 2)*(k + 3)*(k + 4)),(k, 0, oo)))
w_sol1
Explanation: Load written as a Taylor series and constant EI
We can prove that the general function is written as
End of explanation
Q, alpha = symbols("Q alpha")
sub_list = [(q(x), Q), (EI(x), E*x**3/12/tan(alpha))]
w_sol1 = w_sol.subs(sub_list).doit()
M_eq = -diff(M(x), x, 2) - Q
M_eq
M_sol = dsolve(M_eq, M(x)).rhs.subs([(C1, C3), (C2, C4)])
M_sol
w_eq = f(x) + diff(w(x),x,2)
w_eq
w_sol1 = dsolve(w_eq, w(x)).subs(f(x), M_sol/(E*x**3/tan(alpha)**3)).rhs
w_sol1 = w_sol1.doit()
expand(w_sol1)
limit(w_sol1, x, 0)
L = symbols('L')
# Fixed end
bc_eq1 = w_sol1.subs(x, L)
bc_eq2 = diff(w_sol1, x).subs(x, L)
# Finite solution
bc_eq3 = C3
constants = solve([bc_eq1, bc_eq2, bc_eq3], [C1, C2, C3, C4])
simplify(w_sol1.subs(constants).subs(C4, 0))
Explanation: Uniform load and varying cross-section
End of explanation
M = -E*x**3/tan(alpha)**3*diff(w_sol1.subs(constants).subs(C4, 0), x, 2)
M
diff(M, x)
w_plot = w_sol1.subs(constants).subs({C4: 0, L: 1, Q: -1, E: 1, alpha: pi/9})
plot(w_plot, (x, 1e-6, 1));
from IPython.core.display import HTML
def css_styling():
styles = open('./styles/custom_barba.css', 'r').read()
return HTML(styles)
css_styling()
Explanation: The shear stress would be
End of explanation |
12,555 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Classification
$$
\renewcommand{\like}{{\cal L}}
\renewcommand{\loglike}{{\ell}}
\renewcommand{\err}{{\cal E}}
\renewcommand{\dat}{{\cal D}}
\renewcommand{\hyp}{{\cal H}}
\renewcommand{\Ex}[2]{E_{#1}[#2]}
\renewcommand{\x}{{\mathbf x}}
\renewcommand{\v}[1]{{\mathbf #1}}
$$
Note
Step1: Using sklearn
Step2: Remember that the form of data we will use always is
with the "response" as a plain array
[1,1,0,0,0,1,0,1,0....].
Your turn
Step3: In the Linear Regression Mini Project, the last (extra credit) exercise was to write a K-Fold cross-validation. Feel free to use that code below, or just use the cv_score function we've provided.
Step4: First, we try a basic Logistic Regression
Step5: While this looks like a pretty great model, we would like to ensure two things
Step6: Things to think about
You may notice that this particular value of C may or may not do as well as simply running the default model on a random train-test split.
Do you think that's a problem?
Why do we need to do this whole cross-validation and grid search stuff anyway?
Not necessarily. The goal in cross validation is to avoid overfitting. We want to find the best choice of regularization parameter that will allow us to generalize our predictions. So our choice of C is influenced by how well the various parameters perform in the split up training data. We use this to pick the best parameter. The accuracy score at the end comparing the predicted values to the true ones is really a test of the choice of MODEL. We would compare this score with alternative models/algorithms.
We do these processes to avoid over-fitting. We want to make sure we are fitting our data so as to best make predictions about unseen data. By creating multiple training sets, and looking at performance across each split, we have a better chance at making our model robust to new entries.
We note that all the Cs apart from 0.1 onwards returned the same score.
Use scikit-learn's GridSearchCV tool
Your turn (extra credit)
Step7: We do not obtain the same value of $C$. Here we get that $C=0.001$ is the highest scoring choice. This time we find that the choices from C=1 and upwards score the exact same.
Using the gridSearchCV tool, we now get a higher accuracy score using $C=0.001$ at $0.9256$, instead of our previous value of $0.9252$ for $C=0.01$.
Recap of the math behind Logistic Regression (optional, feel free to skip)
Setting up some code
Let's make a small diversion, though, and set some code up for classification using cross-validation so that we can easily run classification models in scikit-learn. We first set up a function cv_optimize which takes a classifier clf, a grid of hyperparameters (such as a complexity parameter or regularization parameter as in the last ) implemented as a dictionary parameters, a training set (as a samples x features array) Xtrain, and a set of labels ytrain. The code takes the training set, splits it into n_folds parts, sets up n_folds folds, and carries out a cross-validation by splitting the training set into a training and validation section for each fold for us. It prints the best value of the parameters, and returns the best classifier to us.
Step8: We then use this best classifier to fit the entire training set. This is done inside the do_classify function which takes a dataframe indf as input. It takes the columns in the list featurenames as the features used to train the classifier. The column targetname sets the target. The classification is done by setting those samples for which targetname has value target1val to the value 1, and all others to 0. We split the dataframe into 80% training and 20% testing by default, standardizing the dataset if desired. (Standardizing a data set involves scaling the data so that it has 0 mean and is described in units of its standard deviation. We then train the model on the training set using cross-validation. Having obtained the best classifier using cv_optimize, we retrain on the entire training set and calculate the training and testing accuracy, which we print. We return the split data and the trained classifier.
Step9: Logistic Regression
Step10: So we then come up with our rule by identifying
Step11: In the figure here showing the results of the logistic regression, we plot the actual labels of both the training(circles) and test(squares) samples. The 0's (females) are plotted in red, the 1's (males) in blue. We also show the classification boundary, a line (to the resolution of a grid square). Every sample on the red background side of the line will be classified female, and every sample on the blue side, male. Notice that most of the samples are classified well, but there are misclassified people on both sides, as evidenced by leakage of dots or squares of one color ontothe side of the other color. Both test and traing accuracy are about 92%.
The probabilistic interpretaion
Remember we said earlier that if $h > 0.5$ we ought to identify the sample with $y=1$? One way of thinking about this is to identify $h(\v{w}\cdot\v{x})$ with the probability that the sample is a '1' ($y=1$). Then we have the intuitive notion that lets identify a sample as 1 if we find that the probabilty of being a '1' is $\ge 0.5$.
So suppose we say then that the probability of $y=1$ for a given $\v{x}$ is given by $h(\v{w}\cdot\v{x})$?
Then, the conditional probabilities of $y=1$ or $y=0$ given a particular sample's features $\v{x}$ are
Step12: Discriminative classifier
Logistic regression is what is known as a discriminative classifier. Let us plot the probabilities obtained from predict_proba, overlayed on the samples with their true labels | Python Code:
%matplotlib inline
import numpy as np
import scipy as sp
import matplotlib as mpl
import matplotlib.cm as cm
import matplotlib.pyplot as plt
import pandas as pd
pd.set_option('display.width', 500)
pd.set_option('display.max_columns', 100)
pd.set_option('display.notebook_repr_html', True)
import seaborn as sns
sns.set_style("whitegrid")
sns.set_context("poster")
c0=sns.color_palette()[0]
c1=sns.color_palette()[1]
c2=sns.color_palette()[2]
from matplotlib.colors import ListedColormap
cmap_light = ListedColormap(['#FFAAAA', '#AAFFAA', '#AAAAFF'])
cmap_bold = ListedColormap(['#FF0000', '#00FF00', '#0000FF'])
cm = plt.cm.RdBu
cm_bright = ListedColormap(['#FF0000', '#0000FF'])
def points_plot(ax, Xtr, Xte, ytr, yte, clf, mesh=True, colorscale=cmap_light, cdiscrete=cmap_bold, alpha=0.1, psize=10, zfunc=False, predicted=False):
h = .02
X=np.concatenate((Xtr, Xte))
x_min, x_max = X[:, 0].min() - .5, X[:, 0].max() + .5
y_min, y_max = X[:, 1].min() - .5, X[:, 1].max() + .5
xx, yy = np.meshgrid(np.linspace(x_min, x_max, 100),
np.linspace(y_min, y_max, 100))
#plt.figure(figsize=(10,6))
if zfunc:
p0 = clf.predict_proba(np.c_[xx.ravel(), yy.ravel()])[:, 0]
p1 = clf.predict_proba(np.c_[xx.ravel(), yy.ravel()])[:, 1]
Z=zfunc(p0, p1)
else:
Z = clf.predict(np.c_[xx.ravel(), yy.ravel()])
ZZ = Z.reshape(xx.shape)
if mesh:
plt.pcolormesh(xx, yy, ZZ, cmap=cmap_light, alpha=alpha, axes=ax)
if predicted:
showtr = clf.predict(Xtr)
showte = clf.predict(Xte)
else:
showtr = ytr
showte = yte
ax.scatter(Xtr[:, 0], Xtr[:, 1], c=showtr-1, cmap=cmap_bold, s=psize, alpha=alpha,edgecolor="k")
# and testing points
ax.scatter(Xte[:, 0], Xte[:, 1], c=showte-1, cmap=cmap_bold, alpha=alpha, marker="s", s=psize+10)
ax.set_xlim(xx.min(), xx.max())
ax.set_ylim(yy.min(), yy.max())
return ax,xx,yy
def points_plot_prob(ax, Xtr, Xte, ytr, yte, clf, colorscale=cmap_light, cdiscrete=cmap_bold, ccolor=cm, psize=10, alpha=0.1):
ax,xx,yy = points_plot(ax, Xtr, Xte, ytr, yte, clf, mesh=False, colorscale=colorscale, cdiscrete=cdiscrete, psize=psize, alpha=alpha, predicted=True)
Z = clf.predict_proba(np.c_[xx.ravel(), yy.ravel()])[:, 1]
Z = Z.reshape(xx.shape)
plt.contourf(xx, yy, Z, cmap=ccolor, alpha=.2, axes=ax)
cs2 = plt.contour(xx, yy, Z, cmap=ccolor, alpha=.6, axes=ax)
plt.clabel(cs2, fmt = '%2.1f', colors = 'k', fontsize=14, axes=ax)
return ax
Explanation: Classification
$$
\renewcommand{\like}{{\cal L}}
\renewcommand{\loglike}{{\ell}}
\renewcommand{\err}{{\cal E}}
\renewcommand{\dat}{{\cal D}}
\renewcommand{\hyp}{{\cal H}}
\renewcommand{\Ex}[2]{E_{#1}[#2]}
\renewcommand{\x}{{\mathbf x}}
\renewcommand{\v}[1]{{\mathbf #1}}
$$
Note: We've adapted this Mini Project from Lab 5 in the CS109 course. Please feel free to check out the original lab, both for more exercises, as well as solutions.
We turn our attention to classification[^classification]. Classification tries to predict, which of a small set of classes, a sample in a population belongs to. Mathematically, the aim is to find $y$, a label based on knowing a feature vector $\x$. For instance, consider predicting gender from seeing a person's face, something we do fairly well as humans. To have a machine do this well, we would typically feed the machine a bunch of images of people which have been labelled "male" or "female" (the training set), and have it learn the gender of the person in the image. Then, given a new photo, the algorithm learned returns us the gender of the person in the photo.
There are different ways of making classifications. One idea is shown schematically in the image below, where we find a line that divides "things" of two different types in a 2-dimensional feature space.
End of explanation
dflog=pd.read_csv("data/01_heights_weights_genders.csv")
dflog.head()
Explanation: Using sklearn: The heights and weights example
We'll use a dataset of heights and weights of males and females to hone our understanding of classifiers. We load the data into a dataframe and plot it.
End of explanation
#your turn
plt.scatter(dflog.Weight, dflog.Height, c=[cmap_bold.colors[i] for i in dflog.Gender=="Male"], alpha=0.1);
Explanation: Remember that the form of data we will use always is
with the "response" as a plain array
[1,1,0,0,0,1,0,1,0....].
Your turn:
Create a scatter plot of Weight vs. Height
Color the points differently by Gender
End of explanation
from sklearn.model_selection import KFold
from sklearn.metrics import accuracy_score
def cv_score(clf, x, y, score_func=accuracy_score):
result = 0
nfold = 5
kf = KFold(shuffle =False, n_splits= nfold)
for train, test in kf.split(x):# split data into train/test groups, 5 times
clf.fit(x[train], y[train]) # fit
result += score_func(clf.predict(x[test]), y[test]) # evaluate score function on held-out data
return result / nfold # average
Explanation: In the Linear Regression Mini Project, the last (extra credit) exercise was to write a K-Fold cross-validation. Feel free to use that code below, or just use the cv_score function we've provided.
End of explanation
from sklearn.model_selection import train_test_split
Xlr, Xtestlr, ylr, ytestlr = train_test_split(dflog[['Height','Weight']].values,
(dflog.Gender=="Male").values,random_state=5)
from sklearn.linear_model import LogisticRegression
clf = LogisticRegression()
clf.fit(Xlr,ylr)
print(accuracy_score(clf.predict(Xtestlr),ytestlr))
clf = LogisticRegression()
score = cv_score(clf, Xlr, ylr)
print(score)
Explanation: First, we try a basic Logistic Regression:
Split the data into a training and test (hold-out) set
Train on the training set, and test for accuracy on the testing set
End of explanation
#the grid of parameters to search over
Cs = [0.001, 0.01, 0.1, 1, 10, 100]
score_array = []
#your turn
for reg_param in Cs:
clf = LogisticRegression(C=reg_param)
score = cv_score(clf,Xlr,ylr)
score_array.append(score)
#score2 =
max_score = max(score_array)
max_idx = score_array.index(max(score_array))
best_C = Cs[max_idx]
print(score_array)
print("Best score: ", max_score, ", from C =", best_C)
#your turn
clf = LogisticRegression(C=best_C)
clf.fit(Xlr,ylr)
y_prediction = clf.predict(Xtestlr)
print("Accuracy score is: ",accuracy_score(y_prediction,ytestlr))
Explanation: While this looks like a pretty great model, we would like to ensure two things:
We have found the best model (in terms of model parameters).
The model is highly likely to generalize i.e. perform well on unseen data.
For tuning your model, you will use a mix of cross-validation and grid search. In Logistic Regression, the most important parameter to tune is the regularization parameter C. You will now implement some code to perform model tuning.
Your turn: Implement the following search procedure to find a good model
You are given a list of possible values of C below
For each C:
Create a logistic regression model with that value of C
Find the average score for this model using the cv_score function only on the training set (Xlr,ylr)
Pick the C with the highest average score
Your goal is to find the best model parameters based only on the training set, without showing the model test set at all (which is why the test set is also called a hold-out set).
End of explanation
#your turn
from sklearn.model_selection import GridSearchCV
clf2 = LogisticRegression()
parameters = {"C": [0.0001, 0.001, 0.01, 0.1, 1, 10, 100]}
grid_fit = GridSearchCV(clf2,param_grid=parameters,cv=5, scoring="accuracy")
grid_fit.fit(Xlr,ylr)
print("Best parameter is: ", grid_fit.best_params_)
clf2 = LogisticRegression(C=grid_fit.best_params_['C'])
clf2.fit(Xlr,ylr)
y_predictions=clf2.predict(Xtestlr)
print("Accuracy score is: ", accuracy_score(y_predictions, ytestlr))
print("grid scores were: ", grid_fit.cv_results_['mean_test_score'])
Explanation: Things to think about
You may notice that this particular value of C may or may not do as well as simply running the default model on a random train-test split.
Do you think that's a problem?
Why do we need to do this whole cross-validation and grid search stuff anyway?
Not necessarily. The goal in cross validation is to avoid overfitting. We want to find the best choice of regularization parameter that will allow us to generalize our predictions. So our choice of C is influenced by how well the various parameters perform in the split up training data. We use this to pick the best parameter. The accuracy score at the end comparing the predicted values to the true ones is really a test of the choice of MODEL. We would compare this score with alternative models/algorithms.
We do these processes to avoid over-fitting. We want to make sure we are fitting our data so as to best make predictions about unseen data. By creating multiple training sets, and looking at performance across each split, we have a better chance at making our model robust to new entries.
We note that all the Cs apart from 0.1 onwards returned the same score.
Use scikit-learn's GridSearchCV tool
Your turn (extra credit): Use scikit-learn's GridSearchCV tool to perform cross validation and grid search.
Instead of writing your own loops above to iterate over the model parameters, can you use GridSearchCV to find the best model over the training set?
Does it give you the same best value of C?
How does this model you've obtained perform on the test set?
End of explanation
def cv_optimize(clf, parameters, Xtrain, ytrain, n_folds=5):
gs = GridSearchCV(clf, param_grid=parameters, cv=n_folds)
gs.fit(Xtrain, ytrain)
print "BEST PARAMS", gs.best_params_
best = gs.best_estimator_
return best
Explanation: We do not obtain the same value of $C$. Here we get that $C=0.001$ is the highest scoring choice. This time we find that the choices from C=1 and upwards score the exact same.
Using the gridSearchCV tool, we now get a higher accuracy score using $C=0.001$ at $0.9256$, instead of our previous value of $0.9252$ for $C=0.01$.
Recap of the math behind Logistic Regression (optional, feel free to skip)
Setting up some code
Let's make a small diversion, though, and set some code up for classification using cross-validation so that we can easily run classification models in scikit-learn. We first set up a function cv_optimize which takes a classifier clf, a grid of hyperparameters (such as a complexity parameter or regularization parameter as in the last ) implemented as a dictionary parameters, a training set (as a samples x features array) Xtrain, and a set of labels ytrain. The code takes the training set, splits it into n_folds parts, sets up n_folds folds, and carries out a cross-validation by splitting the training set into a training and validation section for each fold for us. It prints the best value of the parameters, and returns the best classifier to us.
End of explanation
from sklearn.model_selection import train_test_split
def do_classify(clf, parameters, indf, featurenames, targetname, target1val, standardize=False, train_size=0.8):
subdf=indf[featurenames]
if standardize:
subdfstd=(subdf - subdf.mean())/subdf.std()
else:
subdfstd=subdf
X=subdfstd.values
y=(indf[targetname].values==target1val)*1
Xtrain, Xtest, ytrain, ytest = train_test_split(X, y, train_size=train_size)
clf = cv_optimize(clf, parameters, Xtrain, ytrain)
clf=clf.fit(Xtrain, ytrain)
training_accuracy = clf.score(Xtrain, ytrain)
test_accuracy = clf.score(Xtest, ytest)
print "Accuracy on training data: %0.2f" % (training_accuracy)
print "Accuracy on test data: %0.2f" % (test_accuracy)
return clf, Xtrain, ytrain, Xtest, ytest
Explanation: We then use this best classifier to fit the entire training set. This is done inside the do_classify function which takes a dataframe indf as input. It takes the columns in the list featurenames as the features used to train the classifier. The column targetname sets the target. The classification is done by setting those samples for which targetname has value target1val to the value 1, and all others to 0. We split the dataframe into 80% training and 20% testing by default, standardizing the dataset if desired. (Standardizing a data set involves scaling the data so that it has 0 mean and is described in units of its standard deviation. We then train the model on the training set using cross-validation. Having obtained the best classifier using cv_optimize, we retrain on the entire training set and calculate the training and testing accuracy, which we print. We return the split data and the trained classifier.
End of explanation
h = lambda z: 1./(1+np.exp(-z))
zs=np.arange(-5,5,0.1)
plt.plot(zs, h(zs), alpha=0.5);
Explanation: Logistic Regression: The Math
We could approach classification as linear regression, there the class, 0 or 1, is the target variable $y$. But this ignores the fact that our output $y$ is discrete valued, and futhermore, the $y$ predicted by linear regression will in general take on values less than 0 and greater than 1. Thus this does not seem like a very good idea.
But what if we could change the form of our hypotheses $h(x)$ instead?
The idea behind logistic regression is very simple. We want to draw a line in feature space that divides the '1' samples from the '0' samples, just like in the diagram above. In other words, we wish to find the "regression" line which divides the samples. Now, a line has the form $w_1 x_1 + w_2 x_2 + w_0 = 0$ in 2-dimensions. On one side of this line we have
$$w_1 x_1 + w_2 x_2 + w_0 \ge 0,$$
and on the other side we have
$$w_1 x_1 + w_2 x_2 + w_0 < 0.$$
Our classification rule then becomes:
\begin{eqnarray}
y = 1 &if& \v{w}\cdot\v{x} \ge 0\
y = 0 &if& \v{w}\cdot\v{x} < 0
\end{eqnarray}
where $\v{x}$ is the vector ${1,x_1, x_2,...,x_n}$ where we have also generalized to more than 2 features.
What hypotheses $h$ can we use to achieve this? One way to do so is to use the sigmoid function:
$$h(z) = \frac{1}{1 + e^{-z}}.$$
Notice that at $z=0$ this function has the value 0.5. If $z > 0$, $h > 0.5$ and as $z \to \infty$, $h \to 1$. If $z < 0$, $h < 0.5$ and as $z \to -\infty$, $h \to 0$. As long as we identify any value of $y > 0.5$ as 1, and any $y < 0.5$ as 0, we can achieve what we wished above.
This function is plotted below:
End of explanation
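A tiny numerical check of the rule described above: h(z) crosses 0.5 exactly at z = 0, so thresholding h(w·x) at 0.5 is the same as thresholding w·x at 0.
test_z = np.array([-2.0, -0.1, 0.0, 0.1, 2.0])
print(h(test_z))
print((h(test_z) >= 0.5) == (test_z >= 0))  # all True: the two thresholds agree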
dflog.head()
clf_l, Xtrain_l, ytrain_l, Xtest_l, ytest_l = do_classify(LogisticRegression(),
{"C": [0.01, 0.1, 1, 10, 100]},
dflog, ['Weight', 'Height'], 'Gender','Male')
plt.figure()
ax=plt.gca()
points_plot(ax, Xtrain_l, Xtest_l, ytrain_l, ytest_l, clf_l, alpha=0.2);
Explanation: So we then come up with our rule by identifying:
$$z = \v{w}\cdot\v{x}.$$
Then $h(\v{w}\cdot\v{x}) \ge 0.5$ if $\v{w}\cdot\v{x} \ge 0$ and $h(\v{w}\cdot\v{x}) \lt 0.5$ if $\v{w}\cdot\v{x} \lt 0$, and:
\begin{eqnarray}
y = 1 &if& h(\v{w}\cdot\v{x}) \ge 0.5\
y = 0 &if& h(\v{w}\cdot\v{x}) \lt 0.5.
\end{eqnarray}
We will show soon that this identification can be achieved by minimizing a loss in the ERM framework called the log loss :
$$ R_{\cal{D}}(\v{w}) = - \sum_{y_i \in \cal{D}} \left ( y_i log(h(\v{w}\cdot\v{x})) + ( 1 - y_i) log(1 - h(\v{w}\cdot\v{x})) \right )$$
More generally we add a regularization term (as in the ridge regression):
$$ R_{\cal{D}}(\v{w}) = - \sum_{y_i \in \cal{D}} \left ( y_i log(h(\v{w}\cdot\v{x})) + ( 1 - y_i) log(1 - h(\v{w}\cdot\v{x})) \right ) + \frac{1}{C} \v{w}\cdot\v{w},$$
where $C$ is the regularization strength (corresponding to $1/\alpha$ from the Ridge case), and smaller values of $C$ mean stronger regularization. As before, the regularization tries to prevent features from having terribly high weights, thus implementing a form of feature selection.
How did we come up with this loss? We'll come back to that, but let us see how logistic regression works out.
End of explanation
clf_l.predict_proba(Xtest_l)
Explanation: In the figure here showing the results of the logistic regression, we plot the actual labels of both the training (circles) and test (squares) samples. The 0's (females) are plotted in red, the 1's (males) in blue. We also show the classification boundary, a line (to the resolution of a grid square). Every sample on the red background side of the line will be classified female, and every sample on the blue side, male. Notice that most of the samples are classified well, but there are misclassified people on both sides, as evidenced by leakage of dots or squares of one color onto the side of the other color. Both test and training accuracy are about 92%.
The probabilistic interpretation
Remember we said earlier that if $h > 0.5$ we ought to identify the sample with $y=1$? One way of thinking about this is to identify $h(\v{w}\cdot\v{x})$ with the probability that the sample is a '1' ($y=1$). Then we have the intuitive notion that lets identify a sample as 1 if we find that the probabilty of being a '1' is $\ge 0.5$.
So suppose we say then that the probability of $y=1$ for a given $\v{x}$ is given by $h(\v{w}\cdot\v{x})$?
Then, the conditional probabilities of $y=1$ or $y=0$ given a particular sample's features $\v{x}$ are:
\begin{eqnarray}
P(y=1 | \v{x}) &=& h(\v{w}\cdot\v{x}) \
P(y=0 | \v{x}) &=& 1 - h(\v{w}\cdot\v{x}).
\end{eqnarray}
These two can be written together as
$$P(y|\v{x}, \v{w}) = h(\v{w}\cdot\v{x})^y \left(1 - h(\v{w}\cdot\v{x}) \right)^{(1-y)} $$
Then multiplying over the samples we get the probability of the training $y$ given $\v{w}$ and the $\v{x}$:
$$P(y|\v{x},\v{w}) = P({y_i} | {\v{x}i}, \v{w}) = \prod{y_i \in \cal{D}} P(y_i|\v{x_i}, \v{w}) = \prod_{y_i \in \cal{D}} h(\v{w}\cdot\v{x_i})^{y_i} \left(1 - h(\v{w}\cdot\v{x_i}) \right)^{(1-y_i)}$$
Why use probabilities? Earlier, we talked about how the regression function $f(x)$ never gives us the $y$ exactly, because of noise. This hold for classification too. Even with identical features, a different sample may be classified differently.
We said that another way to think about a noisy $y$ is to imagine that our data $\dat$ was generated from a joint probability distribution $P(x,y)$. Thus we need to model $y$ at a given $x$, written as $P(y|x)$, and since $P(x)$ is also a probability distribution, we have:
$$P(x,y) = P(y | x) P(x) ,$$
and can obtain our joint probability ($P(x, y))$.
Indeed its important to realize that a particular training set can be thought of as a draw from some "true" probability distribution (just as we did when showing the hairy variance diagram). If for example the probability of classifying a test sample as a '0' was 0.1, and it turns out that the test sample was a '0', it does not mean that this model was necessarily wrong. After all, in roughly a 10th of the draws, this new sample would be classified as a '0'! But, of-course its more unlikely than its likely, and having good probabilities means that we'll be likely right most of the time, which is what we want to achieve in classification. And furthermore, we can quantify this accuracy.
Thus its desirable to have probabilistic, or at the very least, ranked models of classification where you can tell which sample is more likely to be classified as a '1'. There are business reasons for this too. Consider the example of customer "churn": you are a cell-phone company and want to know, based on some of my purchasing habit and characteristic "features" if I am a likely defector. If so, you'll offer me an incentive not to defect. In this scenario, you might want to know which customers are most likely to defect, or even more precisely, which are most likely to respond to incentives. Based on these probabilities, you could then spend a finite marketing budget wisely.
Maximizing the probability of the training set.
Now if we maximize $$P(y|\v{x},\v{w})$$, we will maximize the chance that each point is classified correctly, which is what we want to do. While this is not exactly the same thing as maximizing the 1-0 training risk, it is a principled way of obtaining the highest probability classification. This process is called maximum likelihood estimation since we are maximising the likelihood of the training data y,
$$\like = P(y|\v{x},\v{w}).$$
Maximum likelihood is one of the corenerstone methods in statistics, and is used to estimate probabilities of data.
We can equivalently maximize
$$\loglike = log(P(y|\v{x},\v{w}))$$
since the natural logarithm $log$ is a monotonic function. This is known as maximizing the log-likelihood. Thus we can equivalently minimize a risk that is the negative of $log(P(y|\v{x},\v{w}))$:
$$R_{\cal{D}}(h(x)) = -\loglike = -log \like = - log(P(y|\v{x},\v{w})).$$
Thus
\begin{eqnarray}
R_{\cal{D}}(h(x)) &=& -log\left(\prod_{y_i \in \cal{D}} h(\v{w}\cdot\v{x_i})^{y_i} \left(1 - h(\v{w}\cdot\v{x_i}) \right)^{(1-y_i)}\right)\
&=& -\sum_{y_i \in \cal{D}} log\left(h(\v{w}\cdot\v{x_i})^{y_i} \left(1 - h(\v{w}\cdot\v{x_i}) \right)^{(1-y_i)}\right)\
&=& -\sum_{y_i \in \cal{D}} log\,h(\v{w}\cdot\v{x_i})^{y_i} + log\,\left(1 - h(\v{w}\cdot\v{x_i}) \right)^{(1-y_i)}\
&=& - \sum_{y_i \in \cal{D}} \left ( y_i log(h(\v{w}\cdot\v{x})) + ( 1 - y_i) log(1 - h(\v{w}\cdot\v{x})) \right )
\end{eqnarray}
This is exactly the risk we had above, leaving out the regularization term (which we shall return to later) and was the reason we chose it over the 1-0 risk.
Notice that this little process we carried out above tells us something very interesting: Probabilistic estimation using maximum likelihood is equivalent to Empiricial Risk Minimization using the negative log-likelihood, since all we did was to minimize the negative log-likelihood over the training samples.
sklearn will return the probabilities for our samples, or for that matter, for any input vector set ${\v{x}_i}$, i.e. $P(y_i | \v{x}_i, \v{w})$:
End of explanation
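As a sketch of the connection above between maximum likelihood and the log loss, the negative log-likelihood can be computed directly from predict_proba and compared with sklearn's log_loss helper; they should agree closely, up to sklearn's internal clipping of the probabilities.
from sklearn.metrics import log_loss
probs = clf_l.predict_proba(Xtest_l)[:, 1]
manual_nll = -np.mean(ytest_l * np.log(probs) + (1 - ytest_l) * np.log(1 - probs))
print(manual_nll, log_loss(ytest_l, probs))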
plt.figure()
ax=plt.gca()
points_plot_prob(ax, Xtrain_l, Xtest_l, ytrain_l, ytest_l, clf_l, psize=20, alpha=0.1);
Explanation: Discriminative classifier
Logistic regression is what is known as a discriminative classifier. Let us plot the probabilities obtained from predict_proba, overlayed on the samples with their true labels:
End of explanation |
12,556 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2020 The TensorFlow Authors.
Step1: Keras 中的遮盖和填充
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https
Step2: 简介
遮盖的作用是告知序列处理层输入中有某些时间步骤丢失,因此在处理数据时应将其跳过。
填充是遮盖的一种特殊形式,其中被遮盖的步骤位于序列的起点或开头。填充是出于将序列数据编码成连续批次的需要:为了使批次中的所有序列适合给定的标准长度,有必要填充或截断某些序列。
让我们仔细看看。
填充序列数据
在处理序列数据时,各个样本常常具有不同长度。请考虑以下示例(文本被切分为单词):
[
["Hello", "world", "!"],
["How", "are", "you", "doing", "today"],
["The", "weather", "will", "be", "nice", "tomorrow"],
]
进行词汇查询后,数据可能会被向量化为整数,例如:
[
[71, 1331, 4231]
[73, 8, 3215, 55, 927],
[83, 91, 1, 645, 1253, 927],
]
此数据是一个嵌套列表,其中各个样本的长度分别为 3、5 和 6。由于深度学习模型的输入数据必须为单一张量(例如在此例中形状为 (batch_size, 6, vocab_size)),短于最长条目的样本需要用占位符值进行填充(或者,也可以在填充短样本前截断长样本)。
Keras 提供了一个效用函数来截断和填充 Python 列表,使其具有相同长度:tf.keras.preprocessing.sequence.pad_sequences。
Step3: 遮盖
既然所有样本现在都具有了统一长度,那就必须告知模型,数据的某些部分实际上是填充,应该忽略。这种机制就是遮盖。
在 Keras 模型中引入输入掩码有三种方式:
添加一个 keras.layers.Masking 层。
使用 mask_zero=True 配置一个 keras.layers.Embedding 层。
在调用支持 mask 参数的层(如 RNN 层)时,手动传递此参数。
掩码生成层:Embedding 和 Masking
这些层将在后台创建一个掩码张量(形状为 (batch, sequence_length) 的二维张量),并将其附加到由 Masking 或 Embedding 层返回的张量输出上。
Step4: 您可以在输出结果中看到,该掩码是一个形状为 (batch_size, sequence_length) 的二维布尔张量,其中每个 False 条目表示对应的时间步骤应在处理时忽略。
函数式 API 和序列式 API 中的掩码传播
在使用函数式 API 或序列式 API 时,由 Embedding 或 Masking 层生成的掩码将通过网络传播给任何能够使用它们的层(如 RNN 层)。Keras 将自动提取与输入相对应的掩码,并将其传递给任何知道该掩码使用方法的层。
例如,在下面的序贯模型中,LSTM 层将自动接收掩码,这意味着它将忽略填充的值:
Step5: 对以下函数式 API 的情况也是如此:
Step6: 将掩码张量直接传递给层
能够处理掩码的层(如 LSTM 层)在其 __call__ 方法中有一个 mask 参数。
同时,生成掩码的层(如 Embedding)会公开一个 compute_mask(input, previous_mask) 方法,供您调用。
因此,您可以将掩码生成层的 compute_mask() 方法的输出传递给掩码使用层的 __call__ 方法,如下所示:
Step8: 在自定义层中支持遮盖
有时,您可能需要编写生成掩码的层(如 Embedding),或者需要修改当前掩码的层。
例如,任何生成与其输入具有不同时间维度的张量的层(如在时间维度上进行连接的 Concatenate 层)都需要修改当前掩码,这样下游层才能正确顾及被遮盖的时间步骤。
为此,您的层应实现 layer.compute_mask() 方法,该方法会根据输入和当前掩码生成新的掩码。
以下是需要修改当前掩码的 TemporalSplit 层的示例。
Step9: 下面是关于 CustomEmbedding 层的另一个示例,该层能够根据输入值生成掩码:
Step10: 在兼容层上选择启用掩码传播
大多数层都不会修改时间维度,因此无需修改当前掩码。但是,这些层可能仍希望能够将当前掩码不加更改地传播到下一层。这是一种可以选择启用的行为。默认情况下,自定义层将破坏当前掩码(因为框架无法确定传播该掩码是否安全)。
如果您有一个不会修改时间维度的自定义层,且您希望它能够传播当前的输入掩码,您应该在层构造函数中设置 self.supports_masking = True。在这种情况下,compute_mask() 的默认行为是仅传递当前掩码。
下面是被列入掩码传播白名单的层的示例:
Step11: 现在,您可以在掩码生成层(如 Embedding)和掩码使用层(如 LSTM)之间使用此自定义层,它会将掩码一路传递到掩码使用层。
Step12: 编写需要掩码信息的层
有些层是掩码使用者:他们会在 call 中接受 mask 参数,并使用该参数来决定是否跳过某些时间步骤。
要编写这样的层,您只需在 call 签名中添加一个 mask=None 参数。与输入关联的掩码只要可用就会被传递到您的层。
下面是一个简单示例:示例中的层在输入序列的时间维度(轴 1)上计算 Softmax,同时丢弃遮盖的时间步骤。 | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2020 The TensorFlow Authors.
End of explanation
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
Explanation: Masking and padding in Keras
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://tensorflow.google.cn/guide/keras/masking_and_padding"><img src="https://tensorflow.google.cn/images/tf_logo_32px.png">在 TensorFlow.org 上查看 </a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/zh-cn/guide/keras/masking_and_padding.ipynb"><img src="https://tensorflow.google.cn/images/colab_logo_32px.png">在 Google Colab 中运行</a></td>
<td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/zh-cn/guide/keras/masking_and_padding.ipynb"><img src="https://tensorflow.google.cn/images/GitHub-Mark-32px.png">在 GitHub 上查看源代码</a></td>
<td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/zh-cn/guide/keras/masking_and_padding.ipynb"><img src="https://tensorflow.google.cn/images/download_logo_32px.png" class="_active_edit_src">下载笔记本</a></td>
</table>
Setup
End of explanation
raw_inputs = [
[711, 632, 71],
[73, 8, 3215, 55, 927],
[83, 91, 1, 645, 1253, 927],
]
# By default, this will pad using 0s; it is configurable via the
# "value" parameter.
# Note that you could "pre" padding (at the beginning) or
# "post" padding (at the end).
# We recommend using "post" padding when working with RNN layers
# (in order to be able to use the
# CuDNN implementation of the layers).
padded_inputs = tf.keras.preprocessing.sequence.pad_sequences(
raw_inputs, padding="post"
)
print(padded_inputs)
Explanation: Introduction
Masking is a way to tell sequence-processing layers that certain timesteps in an input are missing, and thus should be skipped when processing the data.
Padding is a special form of masking where the masked steps are at the start or the end of a sequence. Padding comes from the need to encode sequence data into contiguous batches: in order to make all sequences in a batch fit a given standard length, it is necessary to pad or truncate some sequences.
Let's take a close look.
Padding sequence data
When processing sequence data, it is very common for individual samples to have different lengths. Consider the following example (text tokenized as words):
[
["Hello", "world", "!"],
["How", "are", "you", "doing", "today"],
["The", "weather", "will", "be", "nice", "tomorrow"],
]
After vocabulary lookup, the data might be vectorized as integers, e.g.:
[
[71, 1331, 4231]
[73, 8, 3215, 55, 927],
[83, 91, 1, 645, 1253, 927],
]
The data is a nested list where individual samples have length 3, 5, and 6, respectively. Since the input data for a deep learning model must be a single tensor (of shape e.g. (batch_size, 6, vocab_size) in this case), samples that are shorter than the longest item need to be padded with some placeholder value (alternatively, one might also truncate long samples before padding short samples).
Keras provides a utility function to truncate and pad Python lists to a common length: tf.keras.preprocessing.sequence.pad_sequences.
End of explanation
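The same utility can also truncate: with maxlen set, sequences longer than maxlen are cut as well (from the beginning by default, or from the end with truncating="post"); a short sketch:
truncated_inputs = tf.keras.preprocessing.sequence.pad_sequences(
    raw_inputs, padding="post", truncating="post", maxlen=4
)
print(truncated_inputs)  # every sequence is now exactly 4 steps long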
embedding = layers.Embedding(input_dim=5000, output_dim=16, mask_zero=True)
masked_output = embedding(padded_inputs)
print(masked_output._keras_mask)
masking_layer = layers.Masking()
# Simulate the embedding lookup by expanding the 2D input to 3D,
# with embedding dimension of 10.
unmasked_embedding = tf.cast(
tf.tile(tf.expand_dims(padded_inputs, axis=-1), [1, 1, 10]), tf.float32
)
masked_embedding = masking_layer(unmasked_embedding)
print(masked_embedding._keras_mask)
Explanation: Masking
Now that all samples have a uniform length, the model must be informed that some part of the data is actually padding and should be ignored. That mechanism is masking.
There are three ways to introduce input masks in Keras models:
Add a keras.layers.Masking layer.
Configure a keras.layers.Embedding layer with mask_zero=True.
Pass a mask argument manually when calling layers that support this argument (e.g. RNN layers).
Mask-generating layers: Embedding and Masking
Under the hood, these layers will create a mask tensor (a 2D tensor with shape (batch, sequence_length)), and attach it to the tensor output returned by the Masking or Embedding layer.
End of explanation
model = keras.Sequential(
[layers.Embedding(input_dim=5000, output_dim=16, mask_zero=True), layers.LSTM(32),]
)
Explanation: As you can see from the printed result, the mask is a 2D boolean tensor with shape (batch_size, sequence_length), where each individual False entry indicates that the corresponding timestep should be ignored during processing.
Mask propagation in the Functional API and Sequential API
When using the Functional API or the Sequential API, a mask generated by an Embedding or Masking layer will be propagated through the network for any layer that is capable of using it (for example, RNN layers). Keras will automatically fetch the mask corresponding to an input and pass it to any layer that knows how to use it.
For instance, in the following Sequential model, the LSTM layer will automatically receive a mask, which means it will ignore padded values:
End of explanation
inputs = keras.Input(shape=(None,), dtype="int32")
x = layers.Embedding(input_dim=5000, output_dim=16, mask_zero=True)(inputs)
outputs = layers.LSTM(32)(x)
model = keras.Model(inputs, outputs)
Explanation: This is also the case for the following Functional API model:
End of explanation
class MyLayer(layers.Layer):
def __init__(self, **kwargs):
super(MyLayer, self).__init__(**kwargs)
self.embedding = layers.Embedding(input_dim=5000, output_dim=16, mask_zero=True)
self.lstm = layers.LSTM(32)
def call(self, inputs):
x = self.embedding(inputs)
# Note that you could also prepare a `mask` tensor manually.
# It only needs to be a boolean tensor
# with the right shape, i.e. (batch_size, timesteps).
mask = self.embedding.compute_mask(inputs)
output = self.lstm(x, mask=mask) # The layer will ignore the masked values
return output
layer = MyLayer()
x = np.random.random((32, 10)) * 100
x = x.astype("int32")
layer(x)
Explanation: Passing mask tensors directly to layers
Layers that can handle masks (such as the LSTM layer) have a mask argument in their __call__ method.
Meanwhile, layers that produce a mask (e.g. Embedding) expose a compute_mask(input, previous_mask) method which you can call.
Thus, you can pass the output of the compute_mask() method of a mask-producing layer to the __call__ method of a mask-consuming layer, like this:
End of explanation
class TemporalSplit(keras.layers.Layer):
Split the input tensor into 2 tensors along the time dimension.
def call(self, inputs):
# Expect the input to be 3D and mask to be 2D, split the input tensor into 2
# subtensors along the time axis (axis 1).
return tf.split(inputs, 2, axis=1)
def compute_mask(self, inputs, mask=None):
# Also split the mask into 2 if it presents.
if mask is None:
return None
return tf.split(mask, 2, axis=1)
first_half, second_half = TemporalSplit()(masked_embedding)
print(first_half._keras_mask)
print(second_half._keras_mask)
Explanation: Supporting masking in your custom layers
Sometimes, you may need to write layers that generate a mask (like Embedding), or layers that need to modify the current mask.
For instance, any layer that produces a tensor with a different time dimension than its input, such as a Concatenate layer that concatenates on the time dimension, will need to modify the current mask so that downstream layers can properly take masked timesteps into account.
To do this, your layer should implement the layer.compute_mask() method, which produces a new mask given the input and the current mask.
Here is an example of a TemporalSplit layer that needs to modify the current mask.
End of explanation
class CustomEmbedding(keras.layers.Layer):
def __init__(self, input_dim, output_dim, mask_zero=False, **kwargs):
super(CustomEmbedding, self).__init__(**kwargs)
self.input_dim = input_dim
self.output_dim = output_dim
self.mask_zero = mask_zero
def build(self, input_shape):
self.embeddings = self.add_weight(
shape=(self.input_dim, self.output_dim),
initializer="random_normal",
dtype="float32",
)
def call(self, inputs):
return tf.nn.embedding_lookup(self.embeddings, inputs)
def compute_mask(self, inputs, mask=None):
if not self.mask_zero:
return None
return tf.not_equal(inputs, 0)
layer = CustomEmbedding(10, 32, mask_zero=True)
x = np.random.random((3, 10)) * 9
x = x.astype("int32")
y = layer(x)
mask = layer.compute_mask(x)
print(mask)
Explanation: Here is another example of a CustomEmbedding layer that is capable of generating a mask from input values:
End of explanation
class MyActivation(keras.layers.Layer):
def __init__(self, **kwargs):
super(MyActivation, self).__init__(**kwargs)
# Signal that the layer is safe for mask propagation
self.supports_masking = True
def call(self, inputs):
return tf.nn.relu(inputs)
Explanation: Opting-in to mask propagation on compatible layers
Most layers don't modify the time dimension, so they don't need to modify the current mask. However, they may still want to be able to propagate the current mask, unchanged, to the next layer. This is an opt-in behavior. By default, a custom layer will destroy the current mask (since the framework has no way to tell whether propagating it is safe to do).
If you have a custom layer that does not modify the time dimension, and you want it to be able to propagate the current input mask, you should set self.supports_masking = True in the layer constructor. In this case, the default behavior of compute_mask() is to just pass the current mask through.
Here's an example of a layer that is whitelisted for mask propagation:
End of explanation
inputs = keras.Input(shape=(None,), dtype="int32")
x = layers.Embedding(input_dim=5000, output_dim=16, mask_zero=True)(inputs)
x = MyActivation()(x) # Will pass the mask along
print("Mask found:", x._keras_mask)
outputs = layers.LSTM(32)(x) # Will receive the mask
model = keras.Model(inputs, outputs)
Explanation: You can now use this custom layer in-between a mask-generating layer (like Embedding) and a mask-consuming layer (like LSTM), and it will pass the mask along so that it reaches the mask-consuming layer.
End of explanation
class TemporalSoftmax(keras.layers.Layer):
def call(self, inputs, mask=None):
broadcast_float_mask = tf.expand_dims(tf.cast(mask, "float32"), -1)
inputs_exp = tf.exp(inputs) * broadcast_float_mask
inputs_sum = tf.reduce_sum(
inputs_exp * broadcast_float_mask, axis=-1, keepdims=True
)
return inputs_exp / inputs_sum
inputs = keras.Input(shape=(None,), dtype="int32")
x = layers.Embedding(input_dim=10, output_dim=32, mask_zero=True)(inputs)
x = layers.Dense(1)(x)
outputs = TemporalSoftmax()(x)
model = keras.Model(inputs, outputs)
y = model(np.random.randint(0, 10, size=(32, 100)), np.random.random((32, 100, 1)))
Explanation: Writing layers that need mask information
Some layers are mask consumers: they accept a mask argument in call and use it to determine whether to skip certain time steps.
To write such a layer, you can simply add a mask=None argument to your call signature. The mask associated with the inputs will be passed to your layer whenever it is available.
Here's a simple example below: a layer that computes a softmax over the time dimension (axis 1) of an input sequence, while discarding masked timesteps.
End of explanation |
12,557 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
用Python 3开发网络爬虫
By Terrill Yang (Github
Step1: urllib.request是一个库, 隶属urllib. 点此打开官方相关文档. 官方文档应该怎么使用呢? 首先点刚刚提到的这个链接进去的页面有urllib的几个子库, 我们暂时用到了request, 所以我们先看urllib.request部分. 首先看到的是一句话介绍这个库是干什么用的
Step2: 3. 用Python简单处理URL
如果要抓取百度上面搜索关键词为Jecvay Notes的网页, 则代码如下 | Python Code:
#encoding:UTF-8
import urllib.request
url = "http://www.pku.edu.cn"
data = urllib.request.urlopen(url).read()
data = data.decode('UTF-8')
print(data)
Explanation: 用Python 3开发网络爬虫
By Terrill Yang (Github: https://github.com/yttty)
由你需要这些:Python3.x爬虫学习资料整理 - 知乎专栏整理而来。
用Python 3开发网络爬虫 - Chapter 01
1. 一个简单的伪代码
以下这个简单的伪代码用到了set和queue这两种经典的数据结构, 集与队列. 集的作用是记录那些已经访问过的页面, 队列的作用是进行广度优先搜索.
queue Q
set S
StartPoint = "http://jecvay.com"
Q.push(StartPoint) # 经典的BFS开头
S.insert(StartPoint) # 访问一个页面之前先标记他为已访问
while (Q.empty() == false) # BFS循环体
T = Q.top() # 并且pop
for point in PageUrl(T) # PageUrl(T)是指页面T中所有url的集合, point是这个集合中的一个元素.
if (point not in S)
Q.push(point)
S.insert(point)
这个伪代码不能执行, 但是看懂是没问题的, 这就是个最简单的BFS结构. 我是看了知乎里面的那个伪代码之后, 自己用我的风格写了一遍. 你也需要用你的风格写一遍.
这里用到的Set其内部原理是采用了Hash表, 传统的Hash对爬虫来说占用空间太大, 因此有一种叫做Bloom Filter的数据结构更适合用在这里替代Hash版本的set. 我打算以后再看这个数据结构怎么使用, 现在先跳过, 因为对于零基础的我来说, 这不是重点.
2. 用Python抓取指定页面
End of explanation
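A runnable (and deliberately tiny) Python version of the BFS pseudocode above — a sketch only: it assumes a very naive href extraction with a regular expression, while a real crawler would use a proper HTML parser, politeness delays and robots.txt handling.
import re
from collections import deque

def bfs_crawl(start_url, max_pages=5):
    queue, seen, fetched = deque([start_url]), {start_url}, 0   # the deque is Q, the set is S
    while queue and fetched < max_pages:
        url = queue.popleft()
        fetched += 1
        try:
            html = urllib.request.urlopen(url, timeout=5).read().decode("utf-8", "ignore")
        except Exception:
            continue  # skip pages that fail to download or decode
        for link in re.findall(r'href="(http[^"]+)"', html):
            if link not in seen:     # mark a page as visited before enqueueing it
                seen.add(link)
                queue.append(link)
    return seen

# print(len(bfs_crawl("http://www.pku.edu.cn")))   # uncomment to try it (needs network access)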
a = urllib.request.urlopen(url)
type(a)
a.geturl()
a.info()
a.getcode()
Explanation: urllib.request is a library that belongs to urllib. Click here to open the relevant official documentation. How should the official documentation be used? The page linked above lists several sub-modules of urllib; for now we only use request, so we look at the urllib.request part first. The first thing we see is a one-sentence description of what this library is for:
The urllib.request module defines functions and classes which help in opening URLs (mostly HTTP) in a complex world — basic and digest authentication, redirections, cookies and more.
Then we read through the part about the urlopen() function that our code uses.
urllib.request.urlopen(url, data=None, [timeout, ]*, cafile=None, capath=None, cadefault=False)
The key part is the return value: this function returns an http.client.HTTPResponse object, which in turn has various methods, such as the read() method we used; all of these can be followed up through the links in the official documentation. Following the official documentation, after running the program above in the console I went on to run the following code to get more familiar with what these assorted methods do.
End of explanation
import urllib
import urllib.request
data={}
data['word']='Jecvay Notes'
url_values=urllib.parse.urlencode(data)
url="http://www.baidu.com/s?"
full_url=url+url_values
data=urllib.request.urlopen(full_url).read()
data=data.decode('UTF-8')
print(data)
Explanation: 3. Simple URL handling with Python
If we want to crawl the Baidu search results page for the keyword Jecvay Notes, the code is as follows
End of explanation |
12,558 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Partial Dependence Plots with categorical values
Sigurd Carlsen Feb 2019
Holger Nahrstaedt 2020
.. currentmodule
Step1: objective function
Here we define a function that we evaluate.
Step2: Bayesian optimization
Step3: Partial dependence plot
Here we see an example of using partial dependence. Even when setting
n_points all the way down to 10 from the default of 40, this method is
still very slow. This is because partial dependence calculates 250 extra
predictions for each point on the plots.
Step4: Plot without partial dependence
Here we plot without partial dependence. We see that it is a lot faster.
Also the values for the other parameters are set to the default "result",
which is the parameter set of the best observed value so far; for this
objective that is simply the best parameter combination found during the search.
Step5: Modify the shown minimum
Here we try setting the other parameters to something other than
"result". When dealing with categorical dimensions we can't use
'expected_minimum'. Therefore we try "expected_minimum_random",
which is a naive way of finding the minimum of the surrogate by
random sampling alone. n_minimum_search sets the number of random
samples used to find the minimum
Step6: Set a minimum location
Lastly we can also define these parameters ourselves by
passing a list as the sample_source and minimum arguments | Python Code:
print(__doc__)
import numpy as np
np.random.seed(123)
import matplotlib.pyplot as plt
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score
from skopt.space import Integer, Categorical
from skopt import gp_minimize
from skopt.plots import plot_objective
Explanation: Partial Dependence Plots with categorical values
Sigurd Carlsen Feb 2019
Holger Nahrstaedt 2020
.. currentmodule:: skopt
Plot objective now supports optional use of partial dependence as well as
different methods of defining parameter values for dependency plots.
End of explanation
def objective(params):
clf = DecisionTreeClassifier(
**{dim.name: val for dim, val in
zip(SPACE, params) if dim.name != 'dummy'})
    return -np.mean(cross_val_score(clf, *load_breast_cancer(return_X_y=True)))
Explanation: objective function
Here we define a function that we evaluate.
End of explanation
SPACE = [
Integer(1, 20, name='max_depth'),
Integer(2, 100, name='min_samples_split'),
Integer(5, 30, name='min_samples_leaf'),
Integer(1, 30, name='max_features'),
Categorical(list('abc'), name='dummy'),
Categorical(['gini', 'entropy'], name='criterion'),
Categorical(list('def'), name='dummy'),
]
result = gp_minimize(objective, SPACE, n_calls=20)
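# Illustrative sanity check (not part of the original example): gp_minimize
# returns a scipy-style OptimizeResult, so the best point found can be inspected.
print("Best cross-validation score:", -result.fun)
print("Best parameters:", list(zip([dim.name for dim in SPACE], result.x)))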
Explanation: Bayesian optimization
End of explanation
_ = plot_objective(result, n_points=10)
Explanation: Partial dependence plot
Here we see an example of using partial dependence. Even when setting
n_points all the way down to 10 from the default of 40, this method is
still very slow. This is because partial dependence calculates 250 extra
predictions for each point on the plots.
End of explanation
_ = plot_objective(result, sample_source='result', n_points=10)
Explanation: Plot without partial dependence
Here we plot without partial dependence. We see that it is a lot faster.
Also the values for the other parameters are set to the default "result",
which is the parameter set of the best observed value so far; for this
objective that is simply the best parameter combination found during the search.
End of explanation
_ = plot_objective(result, n_points=10, sample_source='expected_minimum_random',
minimum='expected_minimum_random', n_minimum_search=10000)
Explanation: Modify the shown minimum
Here we try setting the other parameters to something other than
"result". When dealing with categorical dimensions we can't use
'expected_minimum'. Therefore we try "expected_minimum_random",
which is a naive way of finding the minimum of the surrogate by
random sampling alone. n_minimum_search sets the number of random
samples used to find the minimum
End of explanation
_ = plot_objective(result, n_points=10, sample_source=[15, 4, 7, 15, 'b', 'entropy', 'e'],
minimum=[15, 4, 7, 15, 'b', 'entropy', 'e'])
Explanation: Set a minimum location
Lastly we can also define these parameters ourselves by
passing a list as the sample_source and minimum arguments:
End of explanation |
12,559 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Rough outline brainstorm
Reading in data
briefly address encoding issues. I believe pandas defaults to ASCII? (-- it's actually UTF-8)
basic manipulations
Subsetting
Accessing rows/columns/individual items
changing column headers
creating calculated values
using lambda functions (or functions in general) to manipulate the data
more advanced manipulations
group by
melting
merging
miscellaneous thoughts
TBD
Step1: Lesson 1
Step2: R equivalent
Step3: R equivalent
```R
install.packages('dplyr')
library(dplyr)
adults_df <- filter(df, age>=18)
ggplot(data=adults_df, mapping=aes(x=age, y=height)) + geom_point()
```
Step4: R equivalent
Step5: R equivalent
Step6: R equivalent
Step7: Lesson 2
Step8: One central abstraction in pandas is the DataFrame, which is similar to a data frame in R — that is, basically a spreadsheet. It is made up of columns, which are usually names, and rows, which may be named or just accessed by index.
Pandas is designed to be fast and efficient, so the table isn't necessarily stored the way you think it is internally. In particular, data is stored in columns, and each column is a pandas Series, which itself builds on numpy arrays, not native Python arrays.
Pandas can read data in many formats. CSV and JSON are common ones to use. You can control many aspects about how the data is read. Above, you see that the structure of the file is csv-like, but instead the ';' is used as the column separator. This is not a problem. Pandas can also handle different file encodings (UTF-8 is the default), etc.
Basic frame manipulations — Accessing columns, rows, elements
Step9: In many cases, columns of a frame can be accessed like an array. The result is a pandas Series object, which, as you see, has a name, index, and type.
Aside — why all the calls to 'head()'?
Series and frames can be very large. The methods head() and tail() can be used to get a few of the first and last rows, respectively. By default, the first/last 5 rows are returned. It's used here to limit output to a small number of rows, since there is no need to see the whole table.
Step10: Rows are accessed using the method loc or iloc. The method 'loc' takes the index label, while 'iloc' takes the integer row position. In the above case, these are the same, but that is not always true. For example...
Step11: To access an individual cell, specify a row and column using loc.
Step12: Another aside -- loc vs. iloc
As you saw above, the method loc takes the "label" of the index. The method iloc takes integer positions as arguments, in the form [row-index, col-index]
Step13: Basic frame manipulations — data subsetting
Step14: Specifying an array of column names returns a frame containing just those columns.
Step15: It's also possible to access a subset of the rows by index. More commonly, though, you will want to subset the data by some property.
Step16: This is intuitive to understand, but may seem a little magical at first. It is worth understanding what is going on underneath the covers.
The expression
df['age'] >= 18
returns a series of bool indicating whether the expression is true or false for that row (identified by index).
Step17: When such a series is the argument to the indexing operator, [], pandas returns a frame containing the rows where the value is True. These kinds of expressions can be combined as well, using the bitwise operators (not and/or).
Step18: This way, code for subsetting is intuitive to understand. It is also possible to subset rows and columns simultaneously.
Step19: Basic frame manipulations — renaming columns
Renaming columns
Step20: Creating columns based on other columns
If I wanted to create a new column based on adding up the weight and age, I could do this
Step21: If I wanted to create a calculated column using a dictionary replacement, I could use the map function
Step22: What about using a lambda function to create a new column? | Python Code:
# Import the packages we will use
import pandas as pd
import numpy as np
from IPython.display import display, HTML
import matplotlib
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
Explanation: Rough outline brainstorm
Reading in data
briefly address encoding issues. I believe pandas defaults to ASCII? (-- it's actually UTF-8)
basic manipulations
Subsetting
Accessing rows/columns/individual items
changing column headers
creating calculated values
using lambda functions (or functions in general) to manipulate the data
more advanced manipulations
group by
melting
merging
miscellaneous thoughts
TBD
End of explanation
# Read some data into a frame
# A frame is like a table in a spreadsheet.
# It contains columns (which usually have names) and rows (which can be indexed by number,
# but may also have names)
df = pd.read_csv('https://raw.githubusercontent.com/rmcelreath/rethinking/master/data/Howell1.csv', sep=";")
df.head()
Explanation: Lesson 1 : Demonstration
This first example follows Chapter 4, section 3 of Richard McElreath's book Statistical Rethinking.
The task is to understand height in a population, in this case using data about the !Kung San people. Anthropologist Nancy Howell conducted interviews with the !Kung San and collected the data used here.
The data are available in the github repository for the book: https://github.com/rmcelreath/rethinking/blob/master/data/Howell1.csv
Throughout this lesson, I'll provide the R equivalent for these actions in markdown as such:
R equivalent:
```r
df <- read.csv("https://raw.githubusercontent.com/rmcelreath/rethinking/master/data/Howell1.csv", sep = ";", header = TRUE)
head(df)
```
End of explanation
# Graph the data -- let's look at height vs. age
df.plot.scatter(x='age', y='height')
Explanation: R equivalent:
Installing ggplot2 is optional if you already have it.
```R
install.packages("ggplot2")
library(ggplot2)
ggplot(data=df, mapping=aes(x=age, y=height)) + geom_point()
```
End of explanation
# Filter to adults, since height and age are correlated in children
adults_df = df[df['age'] >= 18]
# Look at height vs. age again
adults_df.plot.scatter(x='age', y='height')
Explanation: R equivalent
```R
install.packages('dplyr')
library(dplyr)
adults_df <- filter(df, age>=18)
ggplot(data=adults_df, mapping=aes(x=age, y=height)) + geom_point()
```
End of explanation
# Print out how many rows are in each frame
len(df), len(adults_df)
Explanation: R equivalent:
``` R
nrow(adults_df); nrow(df)
```
End of explanation
# Let's look at how the data are distributed
adults_df['height'].plot.hist()
Explanation: R equivalent:
``` R
ggplot(data=adults_df, mapping=aes(x=height)) + geom_histogram()
```
End of explanation
# Split data into male and female
# -- first add in a sex column to make it less confusing
df['sex'] = df.apply(lambda row: 'Male' if row['male'] == 1 else 'Female', axis=1)
# -- re-apply the filter, since we modified the data
adults_df = df[df['age'] >= 18]
adults_df.head()
# Let's summarize the data
adults_df[['age', 'height', 'weight']].describe()
# Let's look at the data broken down by sex
adults_df[['age', 'height', 'weight', 'sex']].groupby('sex').describe()
# Let's focus on the means and std
summary_df = adults_df[['age', 'height', 'weight', 'sex']].groupby('sex').describe()
summary_df.loc[(slice(None),['mean', 'std']), :]
# Let's look at this visually -- plot height broken down by sex
g = sns.FacetGrid(adults_df, hue='sex', size=6)
g.map(sns.distplot, "height")
g.add_legend()
# Actually, let's look at everything
# -- first, get rid of the male column, it's redundant and confusing
del adults_df['male']
adults_df.head()
# -- now flatten the data -- very confusing, it will be explained later
flat_df = adults_df.set_index('sex', append=True)
flat_df = flat_df.stack().reset_index([1, 2])
flat_df.columns = ['sex', 'measurement', 'value']
flat_df.head()
# Plot!
g = sns.FacetGrid(flat_df, col='measurement', hue='sex', size=6, sharex=False)
g.map(sns.distplot, "value")
g.add_legend()
Explanation: R equivalent:
```R
```
End of explanation
df = pd.read_csv('https://raw.githubusercontent.com/rmcelreath/rethinking/master/data/Howell1.csv', sep=";")
df.head()
Explanation: Lesson 2: Details
What just happened!? Let's take a deeper look at what was done above.
Reading in data
End of explanation
df['height'].head()
Explanation: One central abstraction in pandas is the DataFrame, which is similar to a data frame in R — that is, basically a spreadsheet. It is made up of columns, which are usually names, and rows, which may be named or just accessed by index.
Pandas is designed to be fast and efficient, so the table isn't necessarily stored the way you think it is internally. In particular, data is stored in columns, and each column is a pandas Series, which itself builds on numpy arrays, not native Python arrays.
Pandas can read data in many formats. CSV and JSON are common ones to use. You can control many aspects about how the data is read. Above, you see that the structure of the file is csv-like, but instead the ';' is used as the column separator. This is not a problem. Pandas can also handle different file encodings (UTF-8 is the default), etc.
Basic frame manipulations — Accessing columns, rows, elements
End of explanation
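To make those read options concrete, here is the same file read again with the parsing options spelled out explicitly; this is only an illustration, with ';' as the separator and UTF-8 (the default) named as the encoding.
df_explicit = pd.read_csv('https://raw.githubusercontent.com/rmcelreath/rethinking/master/data/Howell1.csv',
                          sep=';', encoding='utf-8')
df_explicit.head()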
df.loc[0]
Explanation: In many cases, columns of a frame can be accessed like an array. The result is a pandas Series object, which, as you see, has a name, index, and type.
Aside — why all the calls to 'head()'?
Series and frames can be very large. The methods head() and tail() can be used to get a few of the first and last rows, respectively. By default, the first/last 5 rows are returned. It's used here to limit output to a small number of rows, since there is no need to see the whole table.
End of explanation
summary_df = df.describe()
summary_df.loc['mean']
Explanation: Rows are accessed using the method loc or iloc. The method 'loc' takes the index label, while 'iloc' takes the integer row position. In the above case, these are the same, but that is not always true. For example...
End of explanation
summary_df.loc['mean', 'age']
Explanation: To access an individual cell, specify a row and column using loc.
End of explanation
# select row index 0, and all the columns in that row
df.iloc[0,:]
# select all the rows in column 0 by index
df.iloc[:,0]
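# Illustrative extra: slices work the same way -- the first three rows and the
# first two columns, selected purely by integer position.
df.iloc[0:3, 0:2]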
Explanation: Another aside -- loc vs. iloc
As you saw above, the method loc takes the "label" of the index. The method iloc takes integer positions as arguments, in the form [row-index, col-index]
End of explanation
df[['age', 'height', 'weight']].head()
Explanation: Basic frame manipulations — data subsetting
End of explanation
df.iloc[0:5]
Explanation: Specifying an array of column names returns a frame containing just those columns.
End of explanation
df[df['age'] >= 18].head()
Explanation: It's also possible to access a subset of the rows by index. More commonly, though, you will want to subset the data by some property.
End of explanation
(df['age'] >= 18).head()
(df['male'] == 0).head()
Explanation: This is intuitive to understand, but may seem a little magical at first. It is worth understanding what is going on underneath the covers.
The expression
df['age'] >= 18
returns a series of bool indicating whether the expression is true or false for that row (identified by index).
End of explanation
((df['age'] >= 18) & (df['male'] == 0)).head()
df[(df['age'] >= 18) & (df['male'] == 0)].head()
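# Illustrative extra: the other bitwise operators work the same way --
# | is "or" and ~ negates a boolean series.
df[(df['age'] < 18) | (df['male'] == 1)].head()
df[~(df['age'] >= 18)].head()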
Explanation: When such a series is the argument to the indexing operator, [], pandas returns a frame containing the rows where the value is True. These kinds of expressions can be combined as well, using the bitwise operators (not and/or).
End of explanation
df.loc[(df['age'] >= 18) & (df['male'] == 0), ['height', 'weight', 'age']].head()
Explanation: This way, code for subsetting is intuitive to understand. It is also possible to subset rows and columns simultaneously.
End of explanation
df.columns = ['new_height', 'new_weight', 'new_age', 'coded_gender']
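# Illustrative alternative: rename maps old -> new for just the columns you list
# and, without inplace=True, returns a new frame, leaving df itself unchanged.
df.rename(columns={'coded_gender': 'is_male'}).head(2)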
Explanation: Basic frame manipulations — renaming columns
Renaming columns: just assign a list of new column names to df.columns
End of explanation
df['new_id'] = df['new_weight'] + df['new_age']
df.head(2)
Explanation: Creating columns based on other columns
If I wanted to create a new column based on adding up the weight and age, I could do this:
End of explanation
gender_text = {1: 'Male', 0: 'Female'}
df['text_gender'] = df['coded_gender'].map(gender_text)
df.head(2)
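# Illustrative note: values missing from the dictionary become NaN, so a default
# can be supplied with fillna; coded_gender is always 0 or 1 here, so the
# fallback never actually triggers in this data.
df['coded_gender'].map(gender_text).fillna('Unknown').head(2)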
Explanation: If I wanted to create a calculated column using a dictionary replacement, I could use the map function
End of explanation
df['double_age'] = df['new_age'].apply(lambda x: x*2)
df.head(2)
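# Illustrative sketch: the same idea with a named function applied row by row.
# Howell heights are in cm and weights in kg, so this computes BMI.
def bmi(row):
    return row['new_weight'] / (row['new_height'] / 100) ** 2

df['bmi'] = df.apply(bmi, axis=1)
df.head(2)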
Explanation: What about using a lambda function to create a new column?
End of explanation |
12,560 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Toplevel
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Flux Correction
3. Key Properties --> Genealogy
4. Key Properties --> Software Properties
5. Key Properties --> Coupling
6. Key Properties --> Tuning Applied
7. Key Properties --> Conservation --> Heat
8. Key Properties --> Conservation --> Fresh Water
9. Key Properties --> Conservation --> Salt
10. Key Properties --> Conservation --> Momentum
11. Radiative Forcings
12. Radiative Forcings --> Greenhouse Gases --> CO2
13. Radiative Forcings --> Greenhouse Gases --> CH4
14. Radiative Forcings --> Greenhouse Gases --> N2O
15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
17. Radiative Forcings --> Greenhouse Gases --> CFC
18. Radiative Forcings --> Aerosols --> SO4
19. Radiative Forcings --> Aerosols --> Black Carbon
20. Radiative Forcings --> Aerosols --> Organic Carbon
21. Radiative Forcings --> Aerosols --> Nitrate
22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
24. Radiative Forcings --> Aerosols --> Dust
25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
27. Radiative Forcings --> Aerosols --> Sea Salt
28. Radiative Forcings --> Other --> Land Use
29. Radiative Forcings --> Other --> Solar
1. Key Properties
Key properties of the model
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 2. Key Properties --> Flux Correction
Flux correction properties of the model
2.1. Details
Is Required
Step7: 3. Key Properties --> Genealogy
Genealogy and history of the model
3.1. Year Released
Is Required
Step8: 3.2. CMIP3 Parent
Is Required
Step9: 3.3. CMIP5 Parent
Is Required
Step10: 3.4. Previous Name
Is Required
Step11: 4. Key Properties --> Software Properties
Software properties of model
4.1. Repository
Is Required
Step12: 4.2. Code Version
Is Required
Step13: 4.3. Code Languages
Is Required
Step14: 4.4. Components Structure
Is Required
Step15: 4.5. Coupler
Is Required
Step16: 5. Key Properties --> Coupling
**
5.1. Overview
Is Required
Step17: 5.2. Atmosphere Double Flux
Is Required
Step18: 5.3. Atmosphere Fluxes Calculation Grid
Is Required
Step19: 5.4. Atmosphere Relative Winds
Is Required
Step20: 6. Key Properties --> Tuning Applied
Tuning methodology for model
6.1. Description
Is Required
Step21: 6.2. Global Mean Metrics Used
Is Required
Step22: 6.3. Regional Metrics Used
Is Required
Step23: 6.4. Trend Metrics Used
Is Required
Step24: 6.5. Energy Balance
Is Required
Step25: 6.6. Fresh Water Balance
Is Required
Step26: 7. Key Properties --> Conservation --> Heat
Global heat conservation properties of the model
7.1. Global
Is Required
Step27: 7.2. Atmos Ocean Interface
Is Required
Step28: 7.3. Atmos Land Interface
Is Required
Step29: 7.4. Atmos Sea-ice Interface
Is Required
Step30: 7.5. Ocean Seaice Interface
Is Required
Step31: 7.6. Land Ocean Interface
Is Required
Step32: 8. Key Properties --> Conservation --> Fresh Water
Global fresh water conservation properties of the model
8.1. Global
Is Required
Step33: 8.2. Atmos Ocean Interface
Is Required
Step34: 8.3. Atmos Land Interface
Is Required
Step35: 8.4. Atmos Sea-ice Interface
Is Required
Step36: 8.5. Ocean Seaice Interface
Is Required
Step37: 8.6. Runoff
Is Required
Step38: 8.7. Iceberg Calving
Is Required
Step39: 8.8. Endoreic Basins
Is Required
Step40: 8.9. Snow Accumulation
Is Required
Step41: 9. Key Properties --> Conservation --> Salt
Global salt conservation properties of the model
9.1. Ocean Seaice Interface
Is Required
Step42: 10. Key Properties --> Conservation --> Momentum
Global momentum conservation properties of the model
10.1. Details
Is Required
Step43: 11. Radiative Forcings
Radiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5)
11.1. Overview
Is Required
Step44: 12. Radiative Forcings --> Greenhouse Gases --> CO2
Carbon dioxide forcing
12.1. Provision
Is Required
Step45: 12.2. Additional Information
Is Required
Step46: 13. Radiative Forcings --> Greenhouse Gases --> CH4
Methane forcing
13.1. Provision
Is Required
Step47: 13.2. Additional Information
Is Required
Step48: 14. Radiative Forcings --> Greenhouse Gases --> N2O
Nitrous oxide forcing
14.1. Provision
Is Required
Step49: 14.2. Additional Information
Is Required
Step50: 15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
Tropospheric ozone forcing
15.1. Provision
Is Required
Step51: 15.2. Additional Information
Is Required
Step52: 16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
Stratospheric ozone forcing
16.1. Provision
Is Required
Step53: 16.2. Additional Information
Is Required
Step54: 17. Radiative Forcings --> Greenhouse Gases --> CFC
Ozone-depleting and non-ozone-depleting fluorinated gases forcing
17.1. Provision
Is Required
Step55: 17.2. Equivalence Concentration
Is Required
Step56: 17.3. Additional Information
Is Required
Step57: 18. Radiative Forcings --> Aerosols --> SO4
SO4 aerosol forcing
18.1. Provision
Is Required
Step58: 18.2. Additional Information
Is Required
Step59: 19. Radiative Forcings --> Aerosols --> Black Carbon
Black carbon aerosol forcing
19.1. Provision
Is Required
Step60: 19.2. Additional Information
Is Required
Step61: 20. Radiative Forcings --> Aerosols --> Organic Carbon
Organic carbon aerosol forcing
20.1. Provision
Is Required
Step62: 20.2. Additional Information
Is Required
Step63: 21. Radiative Forcings --> Aerosols --> Nitrate
Nitrate forcing
21.1. Provision
Is Required
Step64: 21.2. Additional Information
Is Required
Step65: 22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
Cloud albedo effect forcing (RFaci)
22.1. Provision
Is Required
Step66: 22.2. Aerosol Effect On Ice Clouds
Is Required
Step67: 22.3. Additional Information
Is Required
Step68: 23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
Cloud lifetime effect forcing (ERFaci)
23.1. Provision
Is Required
Step69: 23.2. Aerosol Effect On Ice Clouds
Is Required
Step70: 23.3. RFaci From Sulfate Only
Is Required
Step71: 23.4. Additional Information
Is Required
Step72: 24. Radiative Forcings --> Aerosols --> Dust
Dust forcing
24.1. Provision
Is Required
Step73: 24.2. Additional Information
Is Required
Step74: 25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
Tropospheric volcanic forcing
25.1. Provision
Is Required
Step75: 25.2. Historical Explosive Volcanic Aerosol Implementation
Is Required
Step76: 25.3. Future Explosive Volcanic Aerosol Implementation
Is Required
Step77: 25.4. Additional Information
Is Required
Step78: 26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
Stratospheric volcanic forcing
26.1. Provision
Is Required
Step79: 26.2. Historical Explosive Volcanic Aerosol Implementation
Is Required
Step80: 26.3. Future Explosive Volcanic Aerosol Implementation
Is Required
Step81: 26.4. Additional Information
Is Required
Step82: 27. Radiative Forcings --> Aerosols --> Sea Salt
Sea salt forcing
27.1. Provision
Is Required
Step83: 27.2. Additional Information
Is Required
Step84: 28. Radiative Forcings --> Other --> Land Use
Land use forcing
28.1. Provision
Is Required
Step85: 28.2. Crop Change Only
Is Required
Step86: 28.3. Additional Information
Is Required
Step87: 29. Radiative Forcings --> Other --> Solar
Solar forcing
29.1. Provision
Is Required
Step88: 29.2. Additional Information
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'cmcc', 'cmcc-cm2-hr4', 'toplevel')
Explanation: ES-DOC CMIP6 Model Properties - Toplevel
MIP Era: CMIP6
Institute: CMCC
Source ID: CMCC-CM2-HR4
Sub-Topics: Radiative Forcings.
Properties: 85 (42 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:50
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Flux Correction
3. Key Properties --> Genealogy
4. Key Properties --> Software Properties
5. Key Properties --> Coupling
6. Key Properties --> Tuning Applied
7. Key Properties --> Conservation --> Heat
8. Key Properties --> Conservation --> Fresh Water
9. Key Properties --> Conservation --> Salt
10. Key Properties --> Conservation --> Momentum
11. Radiative Forcings
12. Radiative Forcings --> Greenhouse Gases --> CO2
13. Radiative Forcings --> Greenhouse Gases --> CH4
14. Radiative Forcings --> Greenhouse Gases --> N2O
15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
17. Radiative Forcings --> Greenhouse Gases --> CFC
18. Radiative Forcings --> Aerosols --> SO4
19. Radiative Forcings --> Aerosols --> Black Carbon
20. Radiative Forcings --> Aerosols --> Organic Carbon
21. Radiative Forcings --> Aerosols --> Nitrate
22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
24. Radiative Forcings --> Aerosols --> Dust
25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
27. Radiative Forcings --> Aerosols --> Sea Salt
28. Radiative Forcings --> Other --> Land Use
29. Radiative Forcings --> Other --> Solar
1. Key Properties
Key properties of the model
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Top level overview of coupled model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of coupled model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.flux_correction.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Flux Correction
Flux correction properties of the model
2.1. Details
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how flux corrections are applied in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.year_released')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Genealogy
Genealogy and history of the model
3.1. Year Released
Is Required: TRUE Type: STRING Cardinality: 1.1
Year the model was released
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP3_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.2. CMIP3 Parent
Is Required: FALSE Type: STRING Cardinality: 0.1
CMIP3 parent if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP5_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. CMIP5 Parent
Is Required: FALSE Type: STRING Cardinality: 0.1
CMIP5 parent if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.previous_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.4. Previous Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Previously known as
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Software Properties
Software properties of model
4.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.components_structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.4. Components Structure
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how model realms are structured into independent software components (coupled via a coupler) and internal software components.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.coupler')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OASIS"
# "OASIS3-MCT"
# "ESMF"
# "NUOPC"
# "Bespoke"
# "Unknown"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 4.5. Coupler
Is Required: FALSE Type: ENUM Cardinality: 0.1
Overarching coupling framework for model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Coupling
**
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of coupling in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_double_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.2. Atmosphere Double Flux
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the atmosphere passing a double flux to the ocean and sea ice (as opposed to a single one)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_fluxes_calculation_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Atmosphere grid"
# "Ocean grid"
# "Specific coupler grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 5.3. Atmosphere Fluxes Calculation Grid
Is Required: FALSE Type: ENUM Cardinality: 0.1
Where are the air-sea fluxes calculated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_relative_winds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.4. Atmosphere Relative Winds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are relative or absolute winds used to compute the flux? I.e. do ocean surface currents enter the wind stress calculation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Tuning Applied
Tuning methodology for model
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics/diagnostics retained. Document the relative weight given to climate performance metrics/diagnostics versus process oriented metrics/diagnostics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics/diagnostics of the global mean state used in tuning model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics/diagnostics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics/diagnostics used in tuning model/component (such as 20th century)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.energy_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.5. Energy Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how energy balance was obtained in the full system: in the various components independently or at the components coupling stage?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.fresh_water_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.6. Fresh Water Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how fresh_water balance was obtained in the full system: in the various components independently or at the components coupling stage?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Conservation --> Heat
Global heat conservation properties of the model
7.1. Global
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how heat is conserved globally
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Atmos Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the atmosphere/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Atmos Land Interface
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how heat is conserved at the atmosphere/land coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.4. Atmos Sea-ice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the atmosphere/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.5. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.land_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.6. Land Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the land/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Key Properties --> Conservation --> Fresh Water
Global fresh water conservation properties of the model
8.1. Global
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how fresh_water is conserved globally
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Atmos Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh_water is conserved at the atmosphere/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Atmos Land Interface
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how fresh water is conserved at the atmosphere/land coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.4. Atmos Sea-ice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the atmosphere/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.5. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.runoff')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.6. Runoff
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how runoff is distributed and conserved
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.iceberg_calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.7. Iceberg Calving
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how iceberg calving is modeled and conserved
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.endoreic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.8. Endoreic Basins
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how endoreic basins (no ocean access) are treated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.snow_accumulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.9. Snow Accumulation
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how snow accumulation over land and over sea-ice is treated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.salt.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Key Properties --> Conservation --> Salt
Global salt conservation properties of the model
9.1. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how salt is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.momentum.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10. Key Properties --> Conservation --> Momentum
Global momentum conservation properties of the model
10.1. Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how momentum is conserved in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11. Radiative Forcings
Radiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5)
11.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of radiative forcings (GHG and aerosols) implementation in model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12. Radiative Forcings --> Greenhouse Gases --> CO2
Carbon dioxide forcing
12.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Radiative Forcings --> Greenhouse Gases --> CH4
Methane forcing
13.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Radiative Forcings --> Greenhouse Gases --> N2O
Nitrous oxide forcing
14.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
Tropospheric ozone forcing
15.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
Stratospheric ozone forcing
16.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17. Radiative Forcings --> Greenhouse Gases --> CFC
Ozone-depleting and non-ozone-depleting fluorinated gases forcing
17.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.equivalence_concentration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "Option 1"
# "Option 2"
# "Option 3"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.2. Equivalence Concentration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Details of any equivalence concentrations used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18. Radiative Forcings --> Aerosols --> SO4
SO4 aerosol forcing
18.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19. Radiative Forcings --> Aerosols --> Black Carbon
Black carbon aerosol forcing
19.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20. Radiative Forcings --> Aerosols --> Organic Carbon
Organic carbon aerosol forcing
20.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21. Radiative Forcings --> Aerosols --> Nitrate
Nitrate forcing
21.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 21.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
Cloud albedo effect forcing (RFaci)
22.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 22.2. Aerosol Effect On Ice Clouds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative effects of aerosols on ice clouds are represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
Cloud lifetime effect forcing (ERFaci)
23.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 23.2. Aerosol Effect On Ice Clouds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative effects of aerosols on ice clouds are represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.RFaci_from_sulfate_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 23.3. RFaci From Sulfate Only
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative forcing from aerosol cloud interactions from sulfate aerosol only?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24. Radiative Forcings --> Aerosols --> Dust
Dust forcing
24.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 24.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
Tropospheric volcanic forcing
25.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.2. Historical Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in historical simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.3. Future Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in future simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 25.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
Stratospheric volcanic forcing
26.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26.2. Historical Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in historical simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26.3. Future Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in future simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27. Radiative Forcings --> Aerosols --> Sea Salt
Sea salt forcing
27.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 28. Radiative Forcings --> Other --> Land Use
Land use forcing
28.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.crop_change_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 28.2. Crop Change Only
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Land use change represented via crop change only?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "irradiance"
# "proton"
# "electron"
# "cosmic ray"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 29. Radiative Forcings --> Other --> Solar
Solar forcing
29.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How solar forcing is provided
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation |
12,561 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2020 The TensorFlow IO Authors.
Step1: Loading metrics from a Prometheus server
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https
Step2: Installing and setting up CoreDNS and Prometheus
For demo purposes, a CoreDNS server runs locally with port 9053 open to receive DNS queries and port 9153 (the default) open to expose metrics for scraping. The following is a basic Corefile configuration for CoreDNS, which is available for download.
.
Step3: As the next step, set up a Prometheus server and use Prometheus to scrape the CoreDNS metrics exposed on port 9153 above. The prometheus.yml file used for the configuration is also available for download.
Step4: To generate some activity, you can issue a few DNS queries against the CoreDNS server you set up, using the dig command.
Step5: The metrics of the CoreDNS server are now scraped by the Prometheus server and ready to be consumed from TensorFlow.
Creating a Dataset for CoreDNS metrics and using it in TensorFlow
Create a Dataset of the CoreDNS metrics that are available from the Prometheus server, which can be done with tfio.experimental.IODataset.from_prometheus. At least two arguments are required: query is passed to the Prometheus server to select the metrics, and length is the period of time you want to load into the Dataset.
Starting with "coredns_dns_request_count_total" and "5" (seconds), you can create the Dataset below. Since two DNS queries were sent earlier in the tutorial, the metric for "coredns_dns_request_count_total" is expected to be "2.0" at the end of the time series.
Step6: Let's take a closer look at the spec of the Dataset.
( TensorSpec(shape=(), dtype=tf.int64, name=None), { 'coredns'
Step7: The created Dataset can now be passed directly to tf.keras for training or inference purposes.
Using the Dataset for model training
Once the metrics Dataset is created, it can be passed straight to tf.keras for model training or inference.
For demo purposes, this tutorial uses a very simple LSTM model with 1 feature and 2 steps as input.
Step8: The dataset to be used is the 'go_memstats_sys_bytes' value from CoreDNS, with 10 samples. However, because a sliding window with window=n_steps and shift=1 is formed, additional samples are needed (for every two consecutive elements, the first is fed as x and the second as y for training). The total is 10 + n_steps - 1 + 1 = 12 seconds.
The data values are also scaled to [0, 1]. | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2020 The TensorFlow IO Authors.
End of explanation
import os
try:
%tensorflow_version 2.x
except Exception:
pass
!pip install tensorflow-io
from datetime import datetime
import tensorflow as tf
import tensorflow_io as tfio
Explanation: Loading metrics from a Prometheus server
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://www.tensorflow.org/io/tutorials/prometheus"><img src="https://www.tensorflow.org/images/tf_logo_32px.png">TensorFlow.org에서 보기</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ko/io/tutorials/prometheus.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png">Google Colab에서 실행하기</a></td>
<td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ko/io/tutorials/prometheus.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">GitHub에서소스 보기</a></td>
<td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ko/io/tutorials/prometheus.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">노트북 다운로드하기</a></td>
</table>
Caution: In addition to Python packages, this notebook uses sudo apt-get install to install third-party packages.
Overview
This tutorial loads CoreDNS metrics from a Prometheus server into a tf.data.Dataset, then uses tf.keras for training and inference.
CoreDNS is a DNS server with a focus on service discovery, and it is widely deployed as part of Kubernetes clusters. For that reason it is often closely monitored in operations.
This tutorial is an example of how DevOps teams could use machine learning to automate operations.
Setup and usage
Install the required tensorflow-io package, and restart the runtime
End of explanation
!curl -s -OL https://github.com/coredns/coredns/releases/download/v1.6.7/coredns_1.6.7_linux_amd64.tgz
!tar -xzf coredns_1.6.7_linux_amd64.tgz
!curl -s -OL https://raw.githubusercontent.com/tensorflow/io/master/docs/tutorials/prometheus/Corefile
!cat Corefile
# Run `./coredns` as a background process.
# IPython doesn't recognize `&` in inline bash cells.
get_ipython().system_raw('./coredns &')
Explanation: Installing and setting up CoreDNS and Prometheus
For demo purposes, a CoreDNS server runs locally with port 9053 open to receive DNS queries and port 9153 (the default) open to expose metrics for scraping. The following is a basic Corefile configuration for CoreDNS, which is available for download:
.:9053 { prometheus whoami }
Details about the installation can be found in the CoreDNS documentation.
End of explanation
!curl -s -OL https://github.com/prometheus/prometheus/releases/download/v2.15.2/prometheus-2.15.2.linux-amd64.tar.gz
!tar -xzf prometheus-2.15.2.linux-amd64.tar.gz --strip-components=1
!curl -s -OL https://raw.githubusercontent.com/tensorflow/io/master/docs/tutorials/prometheus/prometheus.yml
!cat prometheus.yml
# Run `./prometheus` as a background process.
# IPython doesn't recognize `&` in inline bash cells.
get_ipython().system_raw('./prometheus &')
Explanation: As the next step, set up a Prometheus server and use Prometheus to scrape the CoreDNS metrics exposed on port 9153 above. The prometheus.yml file used for the configuration is also available for download.
End of explanation
!sudo apt-get install -y -qq dnsutils
!dig @127.0.0.1 -p 9053 demo1.example.org
!dig @127.0.0.1 -p 9053 demo2.example.org
Explanation: To generate some activity, you can issue a few DNS queries against the CoreDNS server you set up, using the dig command.
End of explanation
dataset = tfio.experimental.IODataset.from_prometheus(
"coredns_dns_request_count_total", 5, endpoint="http://localhost:9090")
print("Dataset Spec:\n{}\n".format(dataset.element_spec))
print("CoreDNS Time Series:")
for (time, value) in dataset:
# time is milli second, convert to data time:
time = datetime.fromtimestamp(time // 1000)
print("{}: {}".format(time, value['coredns']['localhost:9153']['coredns_dns_request_count_total']))
Explanation: The metrics of the CoreDNS server are now scraped by the Prometheus server and ready to be consumed from TensorFlow.
Creating a Dataset for CoreDNS metrics and using it in TensorFlow
Create a Dataset of the CoreDNS metrics that are available from the Prometheus server, which can be done with tfio.experimental.IODataset.from_prometheus. At least two arguments are required: query is passed to the Prometheus server to select the metrics, and length is the period of time you want to load into the Dataset.
Starting with "coredns_dns_request_count_total" and "5" (seconds), you can create the Dataset below. Since two DNS queries were sent earlier in the tutorial, the metric for "coredns_dns_request_count_total" is expected to be "2.0" at the end of the time series.
End of explanation
dataset = tfio.experimental.IODataset.from_prometheus(
"go_memstats_gc_sys_bytes", 5, endpoint="http://localhost:9090")
print("Time Series CoreDNS/Prometheus Comparision:")
for (time, value) in dataset:
# time is milli second, convert to data time:
time = datetime.fromtimestamp(time // 1000)
print("{}: {}/{}".format(
time,
value['coredns']['localhost:9153']['go_memstats_gc_sys_bytes'],
value['prometheus']['localhost:9090']['go_memstats_gc_sys_bytes']))
Explanation: Let's take a closer look at the spec of the Dataset.
( TensorSpec(shape=(), dtype=tf.int64, name=None), { 'coredns': { 'localhost:9153': { 'coredns_dns_request_count_total': TensorSpec(shape=(), dtype=tf.float64, name=None) } } } )
It is easy to see that the dataset consists of (time, values) tuples, where the values field is a Python dict that expands to:
"job_name": { "instance_name": { "metric_name": value, }, }
In the example above, 'coredns' is the job name, 'localhost:9153' is the instance name, and 'coredns_dns_request_count_total' is the metric name. Depending on the Prometheus query used, multiple jobs/instances/metrics could be returned. This is also why a Python dict is used for the structure of the Dataset.
Take another query, "go_memstats_gc_sys_bytes", as an example. Since both CoreDNS and Prometheus are written in Golang, the "go_memstats_gc_sys_bytes" metric is available for both the "coredns" job and the "prometheus" job.
Note: this cell may raise an error the first time it is run. Running it again will pass.
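As a small aside (a sketch, not part of the original tutorial), the job and instance names returned for each sample can be inspected directly inside the loop above:
print(list(value.keys()))             # job names, e.g. 'coredns' and 'prometheus'
print(list(value['coredns'].keys()))  # instance name(s) for the coredns job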
End of explanation
n_steps, n_features = 2, 1
simple_lstm_model = tf.keras.models.Sequential([
tf.keras.layers.LSTM(8, input_shape=(n_steps, n_features)),
tf.keras.layers.Dense(1)
])
simple_lstm_model.compile(optimizer='adam', loss='mae')
Explanation: The created Dataset can now be passed directly to tf.keras for training or inference purposes.
Using the Dataset for model training
Once the metrics Dataset is created, it can be passed straight to tf.keras for model training or inference.
For demo purposes, this tutorial uses a very simple LSTM model with 1 feature and 2 steps as input.
End of explanation
n_samples = 10
dataset = tfio.experimental.IODataset.from_prometheus(
"go_memstats_sys_bytes", n_samples + n_steps - 1 + 1, endpoint="http://localhost:9090")
# take go_memstats_sys_bytes from the coredns job
dataset = dataset.map(lambda _, v: v['coredns']['localhost:9153']['go_memstats_sys_bytes'])
# find the max value and scale the value to [0, 1]
v_max = dataset.reduce(tf.constant(0.0, tf.float64), tf.math.maximum)
dataset = dataset.map(lambda v: (v / v_max))
# expand the dimension by 1 to fit n_features=1
dataset = dataset.map(lambda v: tf.expand_dims(v, -1))
# take a sliding window
dataset = dataset.window(n_steps, shift=1, drop_remainder=True)
dataset = dataset.flat_map(lambda d: d.batch(n_steps))
# the first value is x and the next value is y, only take 10 samples
x = dataset.take(n_samples)
y = dataset.skip(1).take(n_samples)
dataset = tf.data.Dataset.zip((x, y))
# pass the final dataset to model.fit for training
simple_lstm_model.fit(dataset.batch(1).repeat(10), epochs=5, steps_per_epoch=10)
Explanation: The dataset to be used is the 'go_memstats_sys_bytes' value from CoreDNS, with 10 samples. However, because a sliding window with window=n_steps and shift=1 is formed, additional samples are needed (for every two consecutive elements, the first is fed as x and the second as y for training). The total is 10 + n_steps - 1 + 1 = 12 seconds.
The data values are also scaled to [0, 1].
End of explanation |
12,562 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Ateliers
Step1: 1. Importing the data
Definition of the working directory, of the names of the various files used, and of the global variables.
First, you need to download the files Categorie_reduit.csv and lucene_stopwords.txt, available in the wikistat data corpus.
Once downloaded, place these files in the working directory of your choice and set the path of this directory in the DATA_DIR variable
Step2: ### Read & Split Dataset
Function that reads the training file and creates two Pandas DataFrames, one for training and one for validation.
The first method builds a DataFrame by reading the whole file, then splits the DataFrame in two with the dedicated scikit-learn function.
Step3: 2. Cleaning the data
In order to limit the dimension of the variable (feature) space while keeping the essential information, the data must be cleaned by applying several steps
Step4: Text-cleaning function
Function that takes a text as input and returns the cleaned text, applying the following steps in turn
Step5: Cleaning the DataFrames
Applies the cleaning to every row of the DataFrame
Step6: Display the first 5 rows of the training DataFrame after cleaning.
Step7: 3 Building the features (TF-IDF)
Introduction
Vectorization, that is, building the features from the list of words, is done in 2 steps
Step8: 4. Modelling and performance | Python Code:
#Importation des librairies utilisées
import unicodedata
import time
import pandas as pd
import numpy as np
import random
import nltk
import collections
import itertools
import csv
import warnings
from sklearn.cross_validation import train_test_split
Explanation: Workshops: Artificial Intelligence Technologies
<center>
<a href="http://www.insa-toulouse.fr/" ><img src="http://www.math.univ-toulouse.fr/~besse/Wikistat/Images/logo-insa.jpg" style="float:left; max-width: 120px; display: inline" alt="INSA"/></a>
<a href="http://wikistat.fr/" ><img src="http://www.math.univ-toulouse.fr/~besse/Wikistat/Images/wikistat.jpg" width=400, style="max-width: 150px; display: inline" alt="Wikistat"/></a>
<a href="http://www.math.univ-toulouse.fr/" ><img src="http://www.math.univ-toulouse.fr/~besse/Wikistat/Images/logo_imt.jpg" width=400, style="float:right; display: inline" alt="IMT"/> </a>
</center>
Natural Language Processing (NLP): Categorizing Cdiscount Products
This is a simplified version of the challenge proposed by Cdiscount and published on the datascience.net site. The training data are available on request from Cdiscount, but the solutions for the challenge's test sample are not, and will not be, made public. A test sample is therefore built for the purposes of this tutorial. The goal is to predict the category of a product from its description (text mining). Only the main category (1st level, 47 classes) is predicted, instead of the three levels required in the challenge. The aim is rather to compare the performance of methods and technologies as a function of the size of the training set, and to illustrate the preprocessing of text data on a complex example.
The full data set (15M products) allows a full-scale test of how the preparation (munging), vectorization (hashing, TF-IDF) and learning phases scale with volume, depending on the technology used.
A synthesis of the results obtained is presented in Besse et al. 2016 (section 5).
Part 2-1 Categorizing Cdiscount Products with Scikit-learn from <a href="http://spark.apache.org/"><img src="http://spark.apache.org/images/spark-logo-trademark.png" style="max-width: 100px; display: inline" alt="Python"/></a>
The main goal is to compare the performance (computation time, quality of results) of the main technologies; here, Python with the Scikit-Learn library. This is a text-mining problem that necessarily chains several steps, and the choice of the best strategy depends on the step:
- Spark for data preparation: cleaning, stemming
- Python Scikit-learn for the following transformation (TF-IDF) and for learning, in particular with logistic regression, which gives the best results.
The goal here is to compare the performance of methods and technologies as a function of the size of the training set. The category under/over-sampling strategy, which improves prediction, has not been implemented.
The example is presented with the option of sub-sampling in order to reduce computation time.
The reduced sample can be reduced further and then, after "cleaning", split into 2 parts: training and test.
The text data of the training sample are "stemmed", "hashed" and "vectorized" before modelling.
The same transformations, in particular (hashing and TF-IDF) fitted on the training sample, are applied to the test sample.
A single "multinomial" logistic regression model is estimated; more precisely and implicitly, one model per class.
Various parameters could still be optimized: vectorization parameters (hashing, TF-IDF) and logistic regression parameters (L1 penalty).
End of explanation
# Répertoire de travail
DATA_DIR = ""
# Nom des fichiers
training_reduit_path = DATA_DIR + "data/cdiscount_train.csv.zip"
# Variable Globale
HEADER_TEST = ['Description','Libelle','Marque']
HEADER_TRAIN =['Categorie1','Categorie2','Categorie3','Description','Libelle','Marque']
## Si nécessaire (première exécution) chargement de nltk, librairie pour la suppression
## des mots d'arrêt et la racinisation
# nltk.download()
Explanation: 1. Importing the data
Definition of the working directory, of the names of the various files used, and of the global variables.
First, you need to download the files Categorie_reduit.csv and lucene_stopwords.txt, available in the wikistat data corpus.
Once downloaded, place these files in the working directory of your choice and set the path of this directory in the DATA_DIR variable
End of explanation
def split_dataset(input_path, nb_line, tauxValid,columns):
time_start = time.time()
data_all = pd.read_csv(input_path,sep=",",names=columns,nrows=nb_line)
data_all = data_all.fillna("")
data_train, data_valid = train_test_split(data_all, test_size = tauxValid)
time_end = time.time()
print("Split Takes %d s" %(time_end-time_start))
return data_train, data_valid
nb_line=20000 # part totale extraite du fichier initial ici déjà réduit
tauxValid=0.10 # part totale extraite du fichier initial ici déjà réduit
data_train, data_valid = split_dataset(training_reduit_path, nb_line, tauxValid, HEADER_TRAIN)
# Cette ligne permet de visualiser les 5 premières lignes de la DataFrame
data_train.head(5)
Explanation: ### Read & Split Dataset
Function that reads the training file and creates two Pandas DataFrames, one for training and one for validation.
The first method builds a DataFrame by reading the whole file, then splits the DataFrame in two with the dedicated scikit-learn function.
End of explanation
# Librairies
from bs4 import BeautifulSoup #Nettoyage d'HTML
import re # Regex
import nltk # Nettoyage des données
## listes de mots à supprimer dans la description des produits
## Depuis NLTK
nltk_stopwords = nltk.corpus.stopwords.words('french')
## Depuis Un fichier externe.
lucene_stopwords =open(DATA_DIR+"data/lucene_stopwords.txt","r").read().split(",") #En local
## Union des deux fichiers de stopwords
stopwords = list(set(nltk_stopwords).union(set(lucene_stopwords)))
## Fonction de setmming de stemming permettant la racinisation
stemmer=nltk.stem.SnowballStemmer('french')
Explanation: 2. Cleaning the data
In order to limit the dimension of the variable (feature) space while keeping the essential information, the data must be cleaned by applying several steps:
* Every word is written in lower case.
* Numeric terms, punctuation and other symbols are removed.
* 155 common, and therefore uninformative, French words are removed (STOPWORDS), e.g. le, la, du, alors, etc.
* Every word is "stemmed", via the STEMMER.stem function of the nltk library. Stemming reduces a word to its radical or root. For example, the words cheval, chevaux, chevalier, chevalerie, chevaucher are all replaced by "cheva".
Importing the libraries and the file used for data cleaning.
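A quick sketch (not in the original notebook) of what the stemmer does to the example words above; the exact radicals depend on the nltk Snowball implementation:
for w in ["cheval", "chevaux", "chevalier", "chevalerie", "chevaucher"]:
    print(w, "->", stemmer.stem(w))  # prints the radical produced for each inflected form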
End of explanation
# Fonction clean générale
def clean_txt(txt):
### remove html stuff
txt = BeautifulSoup(txt,"html.parser",from_encoding='utf-8').get_text()
### lower case
txt = txt.lower()
### special escaping character '...'
txt = txt.replace(u'\u2026','.')
txt = txt.replace(u'\u00a0',' ')
### remove accent btw
txt = unicodedata.normalize('NFD', txt).encode('ascii', 'ignore').decode("utf-8")
###txt = unidecode(txt)
### remove non alphanumeric char
txt = re.sub('[^a-z_]', ' ', txt)
### remove french stop words
tokens = [w for w in txt.split() if (len(w)>2) and (w not in stopwords)]
### french stemming
tokens = [stemmer.stem(token) for token in tokens]
### tokens = stemmer.stemWords(tokens)
return ' '.join(tokens)
def clean_marque(txt):
txt = re.sub('[^a-zA-Z0-9]', '_', txt).lower()
return txt
Explanation: Text-cleaning function
Function that takes a text as input and returns the cleaned text by successively applying the following steps: removal of HTML markup, conversion to lower case, uniform encoding, removal of non-alphanumeric characters (punctuation), removal of stopwords, and stemming of each individual word.
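A minimal usage sketch with a made-up product description (it assumes the stopword and stemmer cells above have been run):
print(clean_txt("<b>Chaussures de sport</b> pour les enfants, taille 34 !"))  # lower-cased, de-accented, stemmed tokens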
End of explanation
# fonction de nettoyage du fichier(stemming et liste de mots à supprimer)
def clean_df(input_data, column_names= ['Description', 'Libelle', 'Marque']):
#Test if columns entry match columns names of input data
column_names_diff= set(column_names).difference(set(input_data.columns))
if column_names_diff:
warnings.warn("Column(s) '"+", ".join(list(column_names_diff)) +"' do(es) not match columns of input data", Warning)
nb_line = input_data.shape[0]
print("Start Clean %d lines" %nb_line)
# Cleaning start for each columns
time_start = time.time()
clean_list=[]
for column_name in column_names:
column = input_data[column_name].values
if column_name == "Marque":
array_clean = np.array(list(map(clean_marque,column)))
else:
array_clean = np.array(list(map(clean_txt,column)))
clean_list.append(array_clean)
time_end = time.time()
print("Cleaning time: %d secondes"%(time_end-time_start))
#Convert list to DataFrame
array_clean = np.array(clean_list).T
data_clean = pd.DataFrame(array_clean, columns = column_names)
return data_clean
# Takes approximately 2 minutes for 100,000 rows
data_valid_clean = clean_df(data_valid)
data_train_clean = clean_df(data_train)
Explanation: Cleaning the DataFrames
Applies the cleaning to every row of the DataFrame
End of explanation
data_train_clean.head(5)
Explanation: Displays the first 5 rows of the training DataFrame after cleaning.
End of explanation
## Création d’une matrice indiquant
## les fréquences des mots contenus dans chaque description
## de nombreux paramètres seraient à tester
from sklearn.feature_extraction.text import TfidfVectorizer, TfidfTransformer
from sklearn.feature_extraction import FeatureHasher
def vectorizer_train(df, columns=['Description', 'Libelle', 'Marque'], nb_hash=None, stop_words=None):
# Hashage
if nb_hash is None:
data_hash = map(lambda x : " ".join(x), df[columns].values)
feathash = None
# TFIDF
vec = TfidfVectorizer(
min_df = 1,
stop_words = stop_words,
smooth_idf=True,
norm='l2',
sublinear_tf=True,
use_idf=True,
ngram_range=(1,2)) #bi-grams
tfidf = vec.fit_transform(data_hash)
else:
df_text = map(lambda x : collections.Counter(" ".join(x).split(" ")), df[columns].values)
feathash = FeatureHasher(nb_hash)
data_hash = feathash.fit_transform(map(collections.Counter,df_text))
vec = TfidfTransformer(use_idf=True,
smooth_idf=True, sublinear_tf=False)
tfidf = vec.fit_transform(data_hash)
return vec, feathash, tfidf
def apply_vectorizer(df, vec, columns =['Description', 'Libelle', 'Marque'], feathash = None ):
#Hashage
if feathash is None:
data_hash = map(lambda x : " ".join(x), df[columns].values)
else:
df_text = map(lambda x : collections.Counter(" ".join(x).split(" ")), df[columns].values)
data_hash = feathash.transform(df_text)
# TFIDF
tfidf=vec.transform(data_hash)
return tfidf
vec, feathash, X = vectorizer_train(data_train_clean, nb_hash=60)
Y = data_train['Categorie1'].values
Xv = apply_vectorizer(data_valid_clean, vec, feathash=feathash)
Yv=data_valid['Categorie1'].values
Explanation: 3 Building the features (TF-IDF)
Introduction
Vectorization, that is, building the features from the list of words, is done in 2 steps:
* Hashing. It reduces the variable space (the size of the vocabulary) to a limited number of features, n_hash, fixed in advance. It relies on a hash function $h$ which maps an index $j$, defined over the natural integers, to an index $i=h(j)$ in the reduced feature space (1 to n_hash). The weight of index $i$ in the new space is thus the combination of the weights of all indices $j$ such that $i=h(j)$ in the original space. Here, the weights are combined following the method described by Weinberger et al. (2009).
N.B. $h$ is not generated randomly. Hence, for the same training (or test) file and the same integer n_hash, the result of the hash function is identical.
TF-IDF. TF-IDF highlights the relative importance of each word $m$ (or pair of consecutive words) in a product text or description $d$, with respect to the full list of products. The function $TF(m,d)$ counts the number of occurrences of word $m$ in description $d$. The function $IDF(m)$ measures the importance of the term over the whole set of documents or descriptions, giving more weight to the least frequent terms, which are considered the most discriminating (a motivation analogous to that of the chi-square metric in correspondence analysis). $IDF(m,l)=\log\frac{D}{f(m)}$ where $D$ is the number of documents, i.e. the size of the training sample, and $f(m)$ is the number of documents or descriptions containing word $m$. The new variable or feature is $V_m(l)=TF(m,l)\times IDF(m,l)$.
As with the transformations of quantitative variables (centering, scaling), the same transformation, i.e. the same weights, computed on the training sample is applied to the test sample.
Vectorization function
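A small numerical illustration of the weighting formula above, using made-up counts:
D, f_m, tf = 1000, 10, 3      # 1000 descriptions; the word appears in 10 of them, 3 times in this one
idf = np.log(D / f_m)         # IDF(m) = log(D / f(m)), about 4.6 here
print(tf * idf)               # V_m = TF x IDF, about 13.8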
End of explanation
# Regression Logistique
## estimation
from sklearn.linear_model import LogisticRegression
cla = LogisticRegression(C=100)
cla.fit(X,Y)
score=cla.score(X,Y)
print('# training score:',score)
## erreur en validation
scoreValidation=cla.score(Xv,Yv)
print('# validation score:',scoreValidation)
#Méthode CART
from sklearn import tree
clf = tree.DecisionTreeClassifier()
time_start = time.time()
clf = clf.fit(X, Y)
time_end = time.time()
print("CART Takes %d s" %(time_end-time_start) )
score=clf.score(X,Y)
print('# training score :',score)
scoreValidation=clf.score(Xv,Yv)
print('# validation score :',scoreValidation)
# Random forest
from sklearn.ensemble import RandomForestClassifier
rf = RandomForestClassifier(n_estimators=100,n_jobs=-1,max_features=24)
time_start = time.time()
rf = rf.fit(X, Y)
time_end = time.time()
print("RF Takes %d s" %(time_end-time_start) )
score=rf.score(X,Y)
print('# training score :',score)
scoreValidation=rf.score(Xv,Yv)
print('# validation score :',scoreValidation)
Explanation: 4. Modelling and performance
End of explanation |
12,563 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Parsing Citations
With AnyStyle.io and Crossref's search api
Step1: OK, we've got a pile of citations, but they aren't in shape. When we look at the cites, they are a collection of 1,505 strings, which isn't very useful for data analysis. We need to get them into shape; that is, we need to break them up into their component parts.
Step2: Note that these citations are not always formatted in the same way; for example, let's look at some from a different part of the pile.
Step3: Parsing citations is a whole area of research to be discussed at another time. I am going to use AnyStyle.io to try to parse these citations because it has a nicely designed API.
Step4: Sweet! | Python Code:
import pandas as pd
citations = pd.read_csv("cites.csv")
citations
citations.iloc[0]
len(citations)
Explanation: Parsing Citations
With AnyStyle.io and Crossref's search api
End of explanation
citations.iloc[0:5]
Explanation: OK, we've got a pile of citations, but they aren't in shape. When we look at the cites, they are a collection of 1,505 strings, which isn't very useful for data analysis. We need to get them into shape; that is, we need to break them up into their component parts.
End of explanation
citations.iloc[390:395]
Explanation: Note that these citations are not always formatted in the same way; for example, let's look at some from a different part of the pile.
End of explanation
import requests
import os
import json
import numpy as np
import time
# get the API key for AnyStyle.io from a text file in this directory
with open('anystyle_key.txt','r') as f:
api_key = f.read()
# I want to figure out what cite is causing the error
parsed_cites = []
for cite in list(citations['cite']):  # iterate over every citation individually to find the offender
payload = {"format": "json",
"access_token": api_key,
"references": cite}
headers = {"Content-Type": "application/json;charset=UTF-8"}
#print("Payload Build, requesting")
r = requests.post("http://anystyle.io/parse/references",
headers=headers,
data=json.dumps(payload))
#print("Got response", r)
if r.status_code == 400:
print(cite)
#parsed_cites.append(r.json())
parsed_cites = []
for segment in np.array_split(citations,5):
print("Segment Length: ",len(segment))
cite_pile = list(segment['cite'])
payload = {"format": "json",
"access_token": api_key,
"references": cite_pile}
headers = {"Content-Type": "application/json;charset=UTF-8"}
print("Payload Build, requesting")
r = requests.post("http://anystyle.io/parse/references",
headers=headers,
data=json.dumps(payload))
print("Got response", r)
parsed_cites.append(r.json())
print(len(parsed_cites))
parsed_cites_master = [cite for cites in parsed_cites for cite in cites]
parsed_cites_master[0:10]
Explanation: Parsing citations is a whole area of research to be discussed at another time. I am going to use AnyStyle.io to try to parse these citations because it has a nicely designed API.
End of explanation
with open("parsed_cites.json",'w') as f:
print(json.dumps(parsed_cites_master, indent=4), file=f )
df_citations = pd.DataFrame(parsed_cites_master)
df_citations
df_citations.to_csv("parsed_cites.csv")
Explanation: Sweet!
End of explanation |
12,564 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Titanic Survival Analysis
Step1: The next step is to explore the dataset
Step2: We can see that Passenger ID, Name and Cabin have little value to the analysis, so we drop these columns off the dataset
Step3: Data cleaning
Step4: Someone's family size would be equal to their number of spouses/siblings and parents/children on the ship, plus themselves
Step5: Now we would extract the survived dataset for future analysis
Step6: Women and children first?
Assuming people are neutral on the gender of a kid, I would split the passengers into 3 types
Step7: We can see that male adults are initially the largest group of people on the ship, followed by female adults and children.
Now looking into the survival rate
Step8: Compared to the initial number of people of each type, we can see that children have a survival rate of more than 50% and female adults an impressive survival rate of around 75%, while male adults have a small survival rate of around 16% relative to their initial numbers. So we can see that there was an inherent "women and children first" code when it came to saving people on the ship.
Step9: The histogram of the age distribution of the survivor group also confirms that younger people had a survival advantage compared to older passengers.
Socio-economic classes
Step10: Approximately 55% of the passengers belonged to the third class, while the rest of the ship belonged to the first and second classes. Now we'll see if the first and second class passengers also paid a premium when it comes to safety?
Step11: The survival rate of the first class passengers was more than 60%, while the survival rate of the third class ones was merely around 25%. So we can see that there was a bias based on wealth and socio-economic status, even in life-threatening situations.
Now, what if we factor in both passenger classes and types (male, female or children), which would have more weight in survival rate?
Step12: We can see the women and children of the first class had a significantly impressive survival rate (more than 90% and 80% respectively), while the women and children of the third class had a much lower survival rate (more than 45% and around 40% respectively). However, the women and children from the third class did have a higher survival rate than the men from higher classes. Men from the first class had a survival rate of around 35%, which was actually below the overall survival rate of 38.38%. Men from the second and third classes suffered very low survival rates, around 8% and 12% respectively compared to their initial numbers.
Family size
Step13: We can see that the majority of passengers traveled by themselves, followed by families of 2 or 3. The families that had more than 3 members made up a small part of the ship. Now let's look into the survival statistics | Python Code:
# Import the libraries
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import scipy
# Read the csv file
titanic = pd.read_csv("titanic-data.csv")
Explanation: Titanic Survival Analysis:
Questions:
According to Wikipedia, "Women and children first" is a code of conduct dating from 1860, whereby the lives of women and children were to be saved first in a life-threatening situation, typically abandoning ship, when survival resources such as lifeboats were limited. The wiki page actually gives some insights and statistics on the survival rate of the Titanic; however, in this analysis, I will reconfirm them and attempt to find out which other factors determined the survival rate in the Titanic tragedy.
The questions I am going to answer in this analysis are:
Was there really a "Women and children first" rule on the Titanic?
Did other factors such as wealth/classes and family sizes affect someone's chance of survival?
First steps:
First, we need to import all the libraries needed for the analysis and load the data file:
End of explanation
titanic.shape
titanic.columns
titanic
Explanation: The next step is to explore the dataset:
End of explanation
titanic = titanic.drop(['PassengerId','Name','Ticket', 'Cabin', 'Embarked'], axis=1)
titanic['Survived'].describe()
Explanation: We can see that Passenger ID, Name and Cabin have little value to the analysis, so we drop these columns off the dataset:
End of explanation
titanic['Age'].describe()
average_age = titanic["Age"].mean()
std_age = titanic["Age"].std()
count_nan_age = titanic["Age"].isnull().sum()
# generate random numbers between (mean - std) & (mean + std)
rand = np.random.randint(average_age - std_age, average_age + std_age, size = count_nan_age)
# Fill NAs in Age with the random values generated above
titanic['Age'][np.isnan(titanic["Age"])] = rand
titanic['Age'].describe()
sns.distplot(titanic['Age'])
plt.ylabel('Distribution')
plt.title("The distribution of ages of Titanic passengers")
plt.show()
Explanation: Data cleaning:
We can see that the Age column has a lot of NAs. We need to fill in the blanks with random values generated within one standard deviation of the mean age.
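A quick sanity check (not in the original analysis) that the imputation above left no missing ages:
titanic['Age'].isnull().sum()  # expected to be 0 after the random-value fill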
End of explanation
# Family size
titanic['Family_size'] = titanic['SibSp'] + titanic['Parch'] + 1
Explanation: Someone's family size would be equal to their number of spouses/siblings and parents/children on the ship, plus themselves:
End of explanation
survived = titanic[titanic['Survived'] == 1]
Explanation: Now we would extract the survived dataset for future analysis:
End of explanation
def passenger_type(person):
if person['Age'] <= 16:
return "child"
elif person['Sex'] == "female":
return "female_adult"
else:
return "male_adult"
titanic['Type'] = titanic.apply(passenger_type, axis = 1)
titanic
titanic['Type'].value_counts()
sns.set(style="darkgrid")
ax = sns.countplot(x="Type", data = titanic)
plt.title("Number of passengers sorted by type")
plt.show()
Explanation: Women and children first?
Assuming people are neutral on the gender of a kid, I would split the passengers into 3 types:
End of explanation
survived = titanic[titanic['Survived'] == 1]
non_survived = titanic[titanic['Survived'] == 0]
survived['Type'].value_counts()
non_survived['Type'].value_counts()
sns.set(style="darkgrid")
ax = sns.countplot(x="Survived", hue = "Type", data = titanic)
plt.title("Numbers of survivals and non-survivals, sorted by type")
plt.show()
Explanation: We can see that male adults are initially the largest group of people on the ship, followed by female adults and children.
Now looking into the survival rate:
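To put exact numbers on these rates, here is a small sketch added for this write-up (it assumes the Type column built above):
titanic.groupby('Type')['Survived'].mean()  # survival rate per passenger type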
End of explanation
sns.distplot(survived['Age'])
plt.ylabel("Distribution")
plt.title("The distribution of ages of Titanic survivals")
plt.show()
Explanation: Compared to the initial number of people of each type, we can see that children have a survival rate of more than 50% and female adults an impressive survival rate of around 75%, while male adults have a small survival rate of around 16% relative to their initial numbers. So we can see that there was an inherent "women and children first" code when it came to saving people on the ship.
End of explanation
titanic['Pclass'].value_counts()
sns.set(style="darkgrid")
ax = sns.countplot(x = "Pclass", data = titanic)
plt.xlabel("Passenger classes")
plt.title("Number of passengers, sorted by passenger classes")
plt.show()
Explanation: The histogram of the age distribution of the survivor group also confirms that younger people had a survival advantage compared to older passengers.
Socio-economic classes:
We can assume that someone's class on the Titanic represented their socio-economic status. Also, we would assume that the fares have a direct correlation with the classes; so we only need to examine one of them.
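That assumed fare/class relationship can be checked directly; the line below is a sketch, not part of the original notebook:
titanic.groupby('Pclass')['Fare'].mean()  # average fare per passenger class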
End of explanation
survived['Pclass'].value_counts()
sns.set(style="darkgrid")
ax = sns.countplot(x = "Pclass", hue = "Survived", data = titanic)
plt.xlabel("Passenger classes")
plt.title("Number of survivals and non-survivals, sorted by passenger classes")
plt.show()
Explanation: Approximately 55% of the passengers belonged to the third class, while the rest of the ship belonged to the first and second classes. Now we'll see if the first and second class passengers also paid a premium when it comes to safety?
End of explanation
titanic.groupby(['Pclass', 'Type']).Type.count()
sns.set(style="darkgrid")
ax = sns.countplot(x = "Pclass", hue = "Type", data = titanic)
plt.title("Number of people per type, sorted by passenger classes")
plt.show()
titanic.groupby(['Pclass', 'Type']).agg({'Survived': 'sum'})
sns.set(style="darkgrid")
ax = sns.countplot(x = "Pclass", hue = "Type", data = survived)
plt.title("Number of survivals and non-survivals per type, sorted by passenger classes")
plt.show()
Explanation: The survival rate of the first class passengers was more than 60%, while the survival rate of the third class ones was merely around 25%. So we can see that there was a bias based on wealth and socio-economic status, even in life-threatening situations.
Now, what if we factor in both passenger classes and types (male, female or children), which would have more weight in survival rate?
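A compact way to get those combined rates (a sketch using the columns defined above):
titanic.groupby(['Pclass', 'Type'])['Survived'].mean()  # survival rate per class/type combination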
End of explanation
titanic['Family_size'].value_counts()
Explanation: We can see the women and children of the first class had a significantly impressive survival rate (more than 90% and 80% respectively), while the women and children of the third class had a much lower survival rate (more than 45% and around 40% respectively). However, the women and children from the third class did have a higher survival rate than the men from higher classes. Men from the first class had a survival rate of around 35%, which was actually below the overall survival rate of 38.38%. Men from the second and third classes suffered very low survival rates, around 8% and 12% respectively compared to their initial numbers.
Family size:
Did people have a higher chance of survival if they traveled with family rather than traveling alone? We'll find out.
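As with the other factors, the rate per family size can be computed directly (a sketch):
titanic.groupby('Family_size')['Survived'].mean()  # survival rate per family size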
End of explanation
survived['Family_size'].value_counts()
sns.boxplot(x="Survived", y="Family_size", data=titanic)
plt.title("The distribution of family sizes of non-survivals and survivals")
plt.show()
sns.kdeplot(survived['Family_size'], shade=True)
plt.ylabel("Distribution")
plt.xlabel("Family size")
plt.title("The distribution of family sizes of non-survivals and survivals")
plt.show()
Explanation: We can see that the majority of passengers traveled by themselves, followed by families of 2 or 3. The families that had more than 3 members made up a small part of the ship. Now let's look into the survival statistics:
End of explanation |
12,565 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Visualization of Bus Bunching
When we are working on spatio-temporal data sets, it is handy if we can visualize the spatial components of the data while understanding their relation to the time series. In this post, I present an example of how to visualize bus bunching (buses of the same service number arriving at the same stop) in New York City.
The original data set is available at New York City Bus Data. In addition, we download the bus stop information for New York City
This Jupyter notebook is available at my GitHub page
Step1: New York City Bus Stop Info
Step2: Determine the Bus Bunching Based on the NextStopPointName
Step3: Prepare Data for Visualization
Assign bus bunching status for all the bus records.
Group the RecordedAtTime into 30-second intervals.
Convert the latitude, longitude information since Bokeh employs Web Mercator projection for mapping. Credit goes to Charlie Harper's post on Visualizing Data with Bokeh and Pandas, where he detailed a helper function for the conversion using Pyproj.
Step4: Visualization with Bokeh
Step5: The following plot visualizes the bus bunching instance at period around 12
Step6: Add An Interactive Dropdown Menu
Finally, we can use ipywidgets to generate a dropdown menu, which allows us to visualize the potential bus bunching at any selected time interval.
Note that this interactive menu requires a functioning local server. You may download this notebook and play with the visualization using the following dropdown menu. | Python Code:
import pandas as pd
import numpy as np
df = pd.read_csv("mta_1706.csv")
# Set to datetime object
df['RecordedAtTime'] = pd.to_datetime(df['RecordedAtTime'])
df = df[(df['RecordedAtTime'] < pd.Timestamp('2017-06-02')) & (df['RecordedAtTime'] > pd.Timestamp('2017-05-31'))]
# filter missing values
df = df.dropna(axis=0, how='any')
# BusCoord records both Longitude and Latitude info
df['BusCoord'] = list(zip(df["VehicleLocation.Longitude"], df["VehicleLocation.Latitude"]))
df.head(5)
vehicle_gb = df.groupby(["RecordedAtTime", "PublishedLineName", "DirectionRef", "VehicleRef"])
vehicle_cnt_df = vehicle_gb.count()
# The following demonstrates an example of duplicate record
vehicle_gb.get_group(vehicle_cnt_df[vehicle_cnt_df["BusCoord"] > 1].index[0])
bus_df = vehicle_gb.head(1).copy()
Explanation: Visualization of Bus Bunching
When we are working on spatio-temporal data sets, it is handy if we can visualize the spatial components of the data while understanding their relation to the time series. In this post, I present an example of how to visualize bus bunching (buses of the same service number arriving at the same stop) in New York City.
The original data set is available at New York City Bus Data. In addition, we download the bus stop information for New York City
This Jupyter notebook is available at my GitHub page: VisualizeBusBunching.ipynb, and it is part of the repository jqlearning.
Preprocessing Data
Before loading the city bus data of 2017-06, an additional preprocessing step is required. The file 'mta_1706.csv' contains extra ',' characters that prevent us from loading the correct number of columns with pd.read_csv(). For instance, if you look at line 53292, you will notice information such as 'CPW/110st (non-public, for GEO)'. The comma that follows the word non-public causes the loading error. We can therefore remove this extra comma with regular expressions.
The data set for 2017-06 is about 1 GB in size. For simplicity, we focus on the records of 2017-06-01 and filter out any row with missing values.
Some vehicles could have duplicate records due to different scheduled arrival time values. We filter these duplicates by keeping only one record per vehicle.
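A minimal sketch of that regular-expression clean-up; it assumes the offending text is exactly the 'non-public, for GEO' string quoted above and writes the result to a hypothetical mta_1706_fixed.csv (point the pd.read_csv call above at the cleaned file, or overwrite the original):
import re
with open("mta_1706.csv") as fin, open("mta_1706_fixed.csv", "w") as fout:
    for line in fin:
        fout.write(re.sub(r"non-public, for GEO", "non-public for GEO", line))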
End of explanation
stop_bronx_df = pd.read_csv("stops_bronx.txt")
stop_brooklyn_df = pd.read_csv("stops_brooklyn.txt")
stop_manhattan_df = pd.read_csv("stops_manhattan.txt")
stop_queens_df = pd.read_csv("stops_queens.txt")
stop_staten_island_df = pd.read_csv("stops_staten_island.txt")
stop_new_york_df = pd.concat([stop_bronx_df,
stop_brooklyn_df,
stop_manhattan_df,
stop_queens_df,
stop_staten_island_df], axis=0)
stop_new_york_df.drop_duplicates(inplace=True)
Explanation: New York City Bus Stop Info
End of explanation
bus_at_stop_df = bus_df[bus_df["ArrivalProximityText"] == "at stop"]
# For bus bunching, buses of the same service number arriving at the same stop
bus_at_stop_gb = bus_at_stop_df.groupby(["RecordedAtTime", "PublishedLineName", "DirectionRef", "NextStopPointName"])
bus_at_stop_cnt = bus_at_stop_gb.count()
# Bunched buses have multiple locations
bunched_bus_df = bus_at_stop_cnt[bus_at_stop_cnt["BusCoord"] > 1]
# An example of a bus bunching scenario
bus_at_stop_gb.get_group(bunched_bus_df.index[0])
Explanation: Determine the Bus Bunching Based on the NextStopPointName
End of explanation
# Initialize as False
bus_df["BunchedStatus"] = False
bus_df["BunchedStatus"] = bus_df.apply(lambda x: True if (x["RecordedAtTime"],
x["PublishedLineName"],
x["DirectionRef"],
x["NextStopPointName"]) in bunched_bus_df.index else False, axis=1)
bus_df['TimeInterval'] = bus_df['RecordedAtTime'].map(lambda x: x.floor('30s'))
from pyproj import Proj, transform
# helper function to convert lat/long to easting/northing for mapping
def LongLat_to_EN(long, lat):
try:
easting, northing = transform(Proj(init='epsg:4326'), Proj(init='epsg:3857'), long, lat)
return easting, northing
except:
return None, None
bus_df['VehicleLocation.E'], bus_df['VehicleLocation.N'] = zip(*bus_df.apply(
lambda x: LongLat_to_EN(x['VehicleLocation.Longitude'], x['VehicleLocation.Latitude']), axis=1))
stop_new_york_df["stop.E"], stop_new_york_df["stop.N"] = zip(*stop_new_york_df.apply(
lambda x: LongLat_to_EN(x["stop_lon"], x["stop_lat"]), axis=1))
Explanation: Prepare Data for Visualization
Assign bus bunching status for all the bus records.
Group the RecordedAtTime into 30-second intervals.
Convert the latitude, longitude information since Bokeh employs Web Mercator projection for mapping. Credit goes to Charlie Harper's post on Visualizing Data with Bokeh and Pandas, where he detailed a helper function for the conversion using Pyproj.
End of explanation
from bokeh.plotting import figure, show, ColumnDataSource
from bokeh.tile_providers import CARTODBPOSITRON
from bokeh.io import output_notebook, push_notebook
from bokeh.models import HoverTool
output_notebook()
def busSources(datetime):
# given a datetime, separate bunched bus and non-bunched bus sources.
# Selected data sourceNonBunched for bunched bus
sourceBunched = ColumnDataSource(data=dict(
lon=bus_df[(bus_df['BunchedStatus'] == True) & (bus_df['TimeInterval'] == datetime)]["VehicleLocation.E"],
lat=bus_df[(bus_df['BunchedStatus'] == True) & (bus_df['TimeInterval'] == datetime)]["VehicleLocation.N"],
PublishedLineName=bus_df[(bus_df['BunchedStatus'] == True) & (bus_df['TimeInterval'] == datetime)]["PublishedLineName"],
DirectionRef=bus_df[(bus_df['BunchedStatus'] == True) & (bus_df['TimeInterval'] == datetime)]["DirectionRef"],
VehicleRef=bus_df[(bus_df['BunchedStatus'] == True) & (bus_df['TimeInterval'] == datetime)]["VehicleRef"],
RecordedAtTime=bus_df[(bus_df['BunchedStatus'] == True) & (bus_df['TimeInterval'] == datetime)]["RecordedAtTime"],
NextStopPoint=bus_df[(bus_df['BunchedStatus'] == True) & (bus_df['TimeInterval'] == datetime)]["NextStopPointName"]
))
# Selected data sourceNonBunched for non-bunched buses
sourceNonBunched = ColumnDataSource(data=dict(
lon=bus_df[(bus_df['BunchedStatus'] == False) & (bus_df['TimeInterval'] == datetime)]["VehicleLocation.E"],
lat=bus_df[(bus_df['BunchedStatus'] == False) & (bus_df['TimeInterval'] == datetime)]["VehicleLocation.N"],
PublishedLineName=bus_df[(bus_df['BunchedStatus'] == False) & (bus_df['TimeInterval'] == datetime)]["PublishedLineName"],
DirectionRef=bus_df[(bus_df['BunchedStatus'] == False) & (bus_df['TimeInterval'] == datetime)]["DirectionRef"],
VehicleRef=bus_df[(bus_df['BunchedStatus'] == False) & (bus_df['TimeInterval'] == datetime)]["VehicleRef"],
RecordedAtTime=bus_df[(bus_df['BunchedStatus'] == False) & (bus_df['TimeInterval'] == datetime)]["RecordedAtTime"],
NextStopPoint=bus_df[(bus_df['BunchedStatus'] == False) & (bus_df['TimeInterval'] == datetime)]["NextStopPointName"]
))
return sourceBunched, sourceNonBunched
def visualize_selected_time(datetime):
plot = figure(x_range=(bus_df["VehicleLocation.E"].min(), bus_df["VehicleLocation.E"].max()),
y_range=(bus_df["VehicleLocation.N"].min(), bus_df["VehicleLocation.N"].max()),
x_axis_type="mercator", y_axis_type="mercator")
plot.add_tile(CARTODBPOSITRON)
sourceBunched, sourceNonBunched = busSources(datetime)
# Bus Stop Info
bus_stops = ColumnDataSource(data=dict(
lon=stop_new_york_df["stop.E"],
lat=stop_new_york_df["stop.N"],
Name=stop_new_york_df["stop_name"]))
circle1 = plot.circle('lon', 'lat', size=2.5, color="orange", alpha=0.3, source=bus_stops)
# add a circle renderer with a size, color, and alpha
circle2 = plot.circle('lon', 'lat', size=5, color="navy", alpha=0.5, source=sourceNonBunched)
circle3 = plot.circle('lon', 'lat', size=8, color="red", alpha=0.5, source=sourceBunched)
plot.add_tools(HoverTool(renderers=[circle2, circle3], tooltips=[("BusService", "@PublishedLineName"),
("Direction", "@DirectionRef"),
("Vehicle", "@VehicleRef"),
# The timestamp will be automatically converted to epoch time by default
# https://bokeh.pydata.org/en/latest/docs/reference/models/formatters.html#bokeh.models.formatters.NumeralTickFormatter.format
("Time", "@RecordedAtTime{%F %T}"),
("NextStopPoint", "@NextStopPoint")],
formatters={"RecordedAtTime": "datetime"}))
plot.add_tools(HoverTool(renderers=[circle1], tooltips=[
("Stop Name", "@Name")]))
return plot, sourceBunched, sourceNonBunched
plot, sourceBunched, sourceNonBunched = visualize_selected_time(pd.Timestamp("20170601 12:44:00"))
Explanation: Visualization with Bokeh
End of explanation
show(plot, notebook_handle=True)
Explanation: The following plot visualizes the bus bunching instances in the time interval around 12:44:00. The bunched bus records are highlighted in red, whereas the other bus records are marked in blue. The thousands of bus stops in New York City are shown as orange circles.
End of explanation
TimeIntervalStr = bus_df["TimeInterval"].astype(str)
UniqueTimeIntervalStr = TimeIntervalStr.unique()
UniqueTimeIntervalStr = sorted(UniqueTimeIntervalStr)
def update_plot(datetime="2017-06-01 12:44:00"):
timestamp = pd.Timestamp(datetime)
newBunched, newNonBunched = busSources(timestamp)
sourceBunched.data = newBunched.data
sourceNonBunched.data = newNonBunched.data
push_notebook()
from ipywidgets import interact
interact_panel = interact(update_plot, datetime=UniqueTimeIntervalStr)
Explanation: Add An Interactive Dropdown Menu
Finally, we can use ipywidgets to generate a dropdown menu, which allows us to visualize the potential bus bunching at any selected time interval.
Note that this interactive menu requires a functioning local server. You may download this notebook and play with the visualization using the following dropdown menu.
End of explanation |
12,566 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
.. _tut_stats_cluster_sensor_rANOVA_tfr
Mass-univariate twoway repeated measures ANOVA on single trial power
This script shows how to conduct a mass-univariate repeated measures
ANOVA. As the model to be fitted assumes two fully crossed factors,
we will study the interplay between perceptual modality
(auditory VS visual) and the location of stimulus presentation
(left VS right). Here we use single trials as replications
(subjects) while iterating over time slices plus frequency bands
to fit our mass-univariate model. For the sake of simplicity we
will confine this analysis to one single channel of which we know
that it exposes a strong induced response. We will then visualize
each effect by creating a corresponding mass-univariate effect
image. We conclude with accounting for multiple comparisons by
performing a permutation clustering test using the ANOVA as
clustering function. The final results will be compared to
multiple comparisons using False Discovery Rate correction.
Step1: Set parameters
Step2: Setup repeated measures ANOVA
Step3: Account for multiple comparisons using FDR versus permutation clustering test | Python Code:
# Authors: Denis Engemann <[email protected]>
# Eric Larson <[email protected]>
# Alexandre Gramfort <[email protected]>
#
# License: BSD (3-clause)
import numpy as np
import matplotlib.pyplot as plt
import mne
from mne import io
from mne.time_frequency import single_trial_power
from mne.stats import f_threshold_mway_rm, f_mway_rm, fdr_correction
from mne.datasets import sample
print(__doc__)
Explanation: .. _tut_stats_cluster_sensor_rANOVA_tfr
Mass-univariate twoway repeated measures ANOVA on single trial power
This script shows how to conduct a mass-univariate repeated measures
ANOVA. As the model to be fitted assumes two fully crossed factors,
we will study the interplay between perceptual modality
(auditory VS visual) and the location of stimulus presentation
(left VS right). Here we use single trials as replications
(subjects) while iterating over time slices plus frequency bands
to fit our mass-univariate model. For the sake of simplicity we
will confine this analysis to one single channel of which we know
that it exposes a strong induced response. We will then visualize
each effect by creating a corresponding mass-univariate effect
image. We conclude with accounting for multiple comparisons by
performing a permutation clustering test using the ANOVA as
clustering function. The final results will be compared to
multiple comparisons using False Discovery Rate correction.
End of explanation
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_raw.fif'
event_fname = data_path + '/MEG/sample/sample_audvis_raw-eve.fif'
event_id = 1
tmin = -0.2
tmax = 0.5
# Setup for reading the raw data
raw = io.Raw(raw_fname)
events = mne.read_events(event_fname)
include = []
raw.info['bads'] += ['MEG 2443'] # bads
# picks MEG gradiometers
picks = mne.pick_types(raw.info, meg='grad', eeg=False, eog=True,
stim=False, include=include, exclude='bads')
ch_name = raw.info['ch_names'][picks[0]]
# Load conditions
reject = dict(grad=4000e-13, eog=150e-6)
event_id = dict(aud_l=1, aud_r=2, vis_l=3, vis_r=4)
epochs = mne.Epochs(raw, events, event_id, tmin, tmax,
picks=picks, baseline=(None, 0),
reject=reject)
# make sure all conditions have the same counts, as the ANOVA expects a
# fully balanced data matrix and does not forgive imbalances that generously
# (risk of type-I error)
epochs.equalize_event_counts(event_id, copy=False)
# Time vector
times = 1e3 * epochs.times # change unit to ms
# Factor to down-sample the temporal dimension of the PSD computed by
# single_trial_power.
decim = 2
frequencies = np.arange(7, 30, 3) # define frequencies of interest
sfreq = raw.info['sfreq'] # sampling in Hz
n_cycles = frequencies / frequencies[0]
baseline_mask = times[::decim] < 0
# now create TFR representations for all conditions
epochs_power = []
for condition in [epochs[k].get_data()[:, 97:98, :] for k in event_id]:
this_power = single_trial_power(condition, sfreq=sfreq,
frequencies=frequencies, n_cycles=n_cycles,
decim=decim)
this_power = this_power[:, 0, :, :] # we only have one channel.
# Compute ratio with baseline power (be sure to correct time vector with
# decimation factor)
epochs_baseline = np.mean(this_power[:, :, baseline_mask], axis=2)
this_power /= epochs_baseline[..., np.newaxis]
epochs_power.append(this_power)
Explanation: Set parameters
End of explanation
n_conditions = len(epochs.event_id)
n_replications = epochs.events.shape[0] / n_conditions
# we will tell the ANOVA how to interpret the data matrix in terms of
# factors. This done via the factor levels argument which is a list
# of the number factor levels for each factor.
factor_levels = [2, 2] # number of levels in each factor
effects = 'A*B' # this is the default signature for computing all effects
# Other possible options are 'A' or 'B' for the corresponding main effects
# or 'A:B' for the interaction effect only (this notation is borrowed from the
# R formula language)
n_frequencies = len(frequencies)
n_times = len(times[::decim])
# Now we'll assemble the data matrix and swap axes so the trial replications
# are the first dimension and the conditions are the second dimension
data = np.swapaxes(np.asarray(epochs_power), 1, 0)
# reshape last two dimensions in one mass-univariate observation-vector
data = data.reshape(n_replications, n_conditions, n_frequencies * n_times)
# so we have replications * conditions * observations:
print(data.shape)
# while the iteration scheme used above for assembling the data matrix
# makes sure the first two dimensions are organized as expected (with A =
# modality and B = location):
#
# A1B1 A1B2 A2B1 A2B2
# trial 1 1.34 2.53 0.97 1.74
# trial ... .... .... .... ....
# trial 56 2.45 7.90 3.09 4.76
#
# Now we're ready to run our repeated measures ANOVA.
fvals, pvals = f_mway_rm(data, factor_levels, effects=effects)
effect_labels = ['modality', 'location', 'modality by location']
# let's visualize our effects by computing f-images
for effect, sig, effect_label in zip(fvals, pvals, effect_labels):
plt.figure()
# show naive F-values in gray
plt.imshow(effect.reshape(8, 211), cmap=plt.cm.gray, extent=[times[0],
times[-1], frequencies[0], frequencies[-1]], aspect='auto',
origin='lower')
# create mask for significant Time-frequency locations
effect = np.ma.masked_array(effect, [sig > .05])
plt.imshow(effect.reshape(8, 211), cmap='RdBu_r', extent=[times[0],
times[-1], frequencies[0], frequencies[-1]], aspect='auto',
origin='lower')
plt.colorbar()
plt.xlabel('time (ms)')
plt.ylabel('Frequency (Hz)')
plt.title(r"Time-locked response for '%s' (%s)" % (effect_label, ch_name))
plt.show()
# Note: as we treat trials as subjects, the test only accounts for
# time-locked responses, despite the 'induced' approach.
# For analysis of induced power at the group level, averaged TFRs
# are required.
Explanation: Setup repeated measures ANOVA
End of explanation
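A tiny synthetic illustration (an addition, not part of the original script) of the input layout that f_mway_rm expects: replications x conditions x observations, with the conditions ordered A1B1, A1B2, A2B1, A2B2 when factor_levels=[2, 2]:

```python
# toy data: 20 "subjects", 2x2 fully crossed conditions, 3 observations each
toy = np.random.RandomState(0).randn(20, 4, 3)
fvals_toy, pvals_toy = f_mway_rm(toy, factor_levels=[2, 2], effects='A*B')
print(np.shape(fvals_toy))  # the first axis indexes the effects (A, B, A:B), as in the zip() loop above
```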
# First we need to slightly modify the ANOVA function to be suitable for
# the clustering procedure. We also want to set some defaults.
# Let's first override effects to confine the analysis to the interaction
effects = 'A:B'
# A stat_fun must deal with a variable number of input arguments.
def stat_fun(*args):
# Inside the clustering function each condition will be passed as
# flattened array, necessitated by the clustering procedure.
# The ANOVA however expects an input array of dimensions:
# subjects X conditions X observations (optional).
# The following expression catches the list input and swaps the first and
# the second dimension and finally calls the ANOVA function.
return f_mway_rm(np.swapaxes(args, 1, 0), factor_levels=factor_levels,
effects=effects, return_pvals=False)[0]
# The ANOVA returns a tuple f-values and p-values, we will pick the former.
pthresh = 0.00001 # set threshold rather high to save some time
f_thresh = f_threshold_mway_rm(n_replications, factor_levels, effects,
pthresh)
tail = 1 # f-test, so tail > 0
n_permutations = 256 # Save some time (the test won't be too sensitive ...)
T_obs, clusters, cluster_p_values, h0 = mne.stats.permutation_cluster_test(
epochs_power, stat_fun=stat_fun, threshold=f_thresh, tail=tail, n_jobs=1,
n_permutations=n_permutations, buffer_size=None)
# Create new stats image with only significant clusters
good_clusters = np.where(cluster_p_values < .05)[0]
T_obs_plot = np.ma.masked_array(T_obs,
                                np.invert(clusters[np.squeeze(good_clusters)]))
plt.figure()
for f_image, cmap in zip([T_obs, T_obs_plot], [plt.cm.gray, 'RdBu_r']):
plt.imshow(f_image, cmap=cmap, extent=[times[0], times[-1],
frequencies[0], frequencies[-1]], aspect='auto',
origin='lower')
plt.xlabel('time (ms)')
plt.ylabel('Frequency (Hz)')
plt.title('Time-locked response for \'modality by location\' (%s)\n'
' cluster-level corrected (p <= 0.05)' % ch_name)
plt.show()
# now using FDR
mask, _ = fdr_correction(pvals[2])
T_obs_plot2 = np.ma.masked_array(T_obs, np.invert(mask))
plt.figure()
for f_image, cmap in zip([T_obs, T_obs_plot2], [plt.cm.gray, 'RdBu_r']):
plt.imshow(f_image, cmap=cmap, extent=[times[0], times[-1],
frequencies[0], frequencies[-1]], aspect='auto',
origin='lower')
plt.xlabel('time (ms)')
plt.ylabel('Frequency (Hz)')
plt.title('Time-locked response for \'modality by location\' (%s)\n'
' FDR corrected (p <= 0.05)' % ch_name)
plt.show()
# Both cluster-level and FDR correction help get rid of
# the putatively significant spots we saw in the naive f-images.
Explanation: Account for multiple comparisons using FDR versus permutation clustering test
End of explanation |
12,567 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Fortran, Cython and numpy
A comparison of different approaches to solving the diffusion equation with an explicit scheme.
Computing the Laplace operator on the grid with f2py, using vectorized f90 code, turns out to be the fastest!
Step1: Simulation parameters
Step2: Validation of the results
Step3: we know the solution of the diffusion equation on an infinite domain starting from a point source
$$ u(x,0) = \delta (x)$$
$$ u(x,t) = \frac{1}{4 \, \pi D t}e^{\left(-\frac{x^{2} + y^{2}}{4 D t}\right)}$$
Step5: Benchmarks
Step7: The fastest version - vectorized Fortran
It is worth noting that
Step8: now we can compile the same code multiple times and load the module | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
%load_ext Cython
%%cython
cimport cython
cimport numpy as np
@cython.wraparound(False)
@cython.boundscheck(False)
def cython_diff2d(np.ndarray[double, ndim=2] u,np.ndarray[double, ndim=2] v, double dx2, double dy2, double c):
cdef unsigned int i, j
for i in xrange(1,u.shape[0]-1):
for j in xrange(1, u.shape[1]-1):
v[i,j] = u[i,j] + c*( (u[i+1, j] + u[i-1, j]-2.0*u[i,j])/dy2 +
(u[i, j+1] + u[i, j-1]-2.0*u[i,j])/dx2 )
def numpy_diff2d(u,v,dx2,dy2,c):
v[1:-1,1:-1] =u[1:-1,1:-1] + c*((u[2:,1:-1]+u[:-2,1:-1]-2.0*u[1:-1,1:-1])/dy2 +
(u[1:-1,2:] + u[1:-1,:-2]-2.0*u[1:-1,1:-1])/dx2)
def numpy_diff2d_a(u,v,dx2,dy2,c):
A = (1.0-2.0*(c/dx2+c/dy2))
v[1:-1,1:-1] =A*u[1:-1,1:-1] + c/dy2*(u[2:,1:-1] + u[:-2,1:-1]) + \
c/dx2*(u[1:-1,2:] + u[1:-1,:-2])
def numpy_diff2d_b(u,v,dx2,dy2,c):
v[1:-1,1:-1] =u[1:-1,1:-1] + c/dx2*(np.diff(u,2,axis=0)[:,1:-1] + np.diff(u,2,axis=1)[1:-1,:])
def calc(N, Niter, func, dx2,dy2,c):
u = np.zeros([N, N])
v = np.zeros_like(u)
u[u.shape[0]//2,u.shape[1]//2] = 1.0/np.sqrt(dx2*dy2)
for i in range(Niter//2):
func(u,v,dx2,dy2,c)
func(v,u,dx2,dy2,c)
return u
Explanation: Fortran, Cython and numpy
A comparison of different approaches to solving the diffusion equation with an explicit scheme.
Computing the Laplace operator on the grid with f2py, using vectorized f90 code, turns out to be the fastest!
End of explanation
N = 100
dx = 0.1
dy = 0.1
dx2 = dx*dx
dy2 = dy*dy
dt = 0.01
D = 0.1
c = D*dt
print("CLF = ",c/dx2,c/dy2)
Explanation: Simulation parameters
End of explanation
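A quick stability check can be added here (this is a hand-derived sketch, not part of the original notebook): the explicit 2D scheme requires $D\,\Delta t\,(1/\Delta x^2 + 1/\Delta y^2) \le 1/2$.

```python
# stability number for the explicit scheme; c = D*dt was defined above
cfl = c * (1.0/dx2 + 1.0/dy2)
print("stability number:", cfl, "(should be <= 0.5)")
```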
u = calc(N,125,numpy_diff2d_b,dx2,dy2,c)
plt.imshow(u)
u = calc(N,125,cython_diff2d,dx2,dy2,c)
plt.imshow(u)
Explanation: Validation of the results
End of explanation
Lx,Ly = N/2*dx,N/2*dy
x = np.linspace(-Lx,Lx,N)
y = np.linspace(-Ly,Ly,N)
X,Y = np.meshgrid(x,y )
Niter = 125
t = dt*Niter
P = 1/(4*np.pi*D*t)*np.exp(-(X**2+Y**2)/(4*D*t) )
plt.contourf(X,Y,P)
plt.axes().set_aspect('equal')
u = calc(N,Niter,cython_diff2d,dx2,dy2,c)
np.sum(P)*dx2,np.sum(u)*dx2
plt.plot(X[X.shape[0]//2,:],P[X.shape[0]//2,:],'b')
plt.plot(X[X.shape[0]//2,:],u[X.shape[0]//2,:],'r')
Explanation: we know the solution of the diffusion equation on an infinite domain starting from a point source
$$ u(x,0) = \delta (x)$$
$$ u(x,t) = \frac{1}{4 \, \pi D t}e^{\left(-\frac{x^{2} + y^{2}}{4 D t}\right)}$$
End of explanation
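Beyond the eyeball comparison above, a simple quantitative check (a small addition, not in the original notebook) is the discrete L2 error between the numerical solution u and the analytical profile P on the same grid:

```python
# discrete L2 error between numerical and analytical solutions
err = np.sqrt(np.sum((u - P)**2) * dx * dy)
print("L2 error:", err)
```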
%%time
u = calc(1000,200,cython_diff2d,dx2,dy2,c)
%%time
u = calc(1000,200,numpy_diff2d,dx2,dy2,c)
%%time
u = calc(1000,200,numpy_diff2d_a,dx2,dy2,c)
%%time
u = calc(1000,200,numpy_diff2d_b,dx2,dy2,c)
N = 1000
fortran_source = """
subroutine fortran_diff2d(u, v, dx2, dy2, c)
real(8), intent(inout) :: u({0}, {1})
real(8), intent(inout) :: v({0}, {1})
real(8), intent(in) :: dx2, dy2, c
v(2:{0}-1,2:{1}-1) = u(2:{0}-1,2:{1}-1)+ c*( (u(3:,2:{1}-1)+u(:{0}-2,2:{1}-1))/dy2 + &
(u(2:{0}-1,3:) + u(2:{0}-1,:{1}-2))/dx2)
end subroutine
""".format(N,N)
fp = open("myfile.f90", "w")
fp.write(fortran_source)
fp.close()
!cat myfile.f90
%%capture f2py.log
!f2py -c -m my_fortran_module myfile.f90
!ls -l
from my_fortran_module import fortran_diff2d
def calcF(N, Niter, func, dx2,dy2,c):
u = np.zeros([N, N],order='F')
v = np.zeros_like(u)
u[u.shape[0]//2,u.shape[1]//2] = 1.0/np.sqrt(dx2*dy2)
for i in range(Niter//2):
func(u,v,dx2,dy2,c)
func(v,u,dx2,dy2,c)
return u
%%time
u = calcF(1000,200,fortran_diff2d,dx2,dy2,c)
Explanation: Benchmarks
End of explanation
import subprocess
subprocess.check_output(["pwd"])
import subprocess
import importlib
counter = 12
def prepare_fortran_module(N=100):
global counter
    fortran_source = """
subroutine fortran_diff2d(u, v, dx2, dy2, c)
real(8), intent(in) :: u({0}, {1})
real(8), intent(inout) :: v({0}, {1})
real(8), intent(in) :: dx2, dy2, c
v(2:{0}-1,2:{1}-1) = u(2:{0}-1,2:{1}-1)+ c*( (u(3:,2:{1}-1)+u(:{0}-2,2:{1}-1))/dy2 + &
(u(2:{0}-1,3:) + u(2:{0}-1,:{1}-2))/dx2)
end subroutine
subroutine fortran_diff2d_a(u, dx2, dy2, c)
real(8), intent(inout) :: u({0}, {1})
real(8), intent(in) :: dx2, dy2, c
u(2:{0}-1,2:{1}-1) = u(2:{0}-1,2:{1}-1)+ c*( (u(3:,2:{1}-1)+u(:{0}-2,2:{1}-1))/dy2 + &
(u(2:{0}-1,3:) + u(2:{0}-1,:{1}-2))/dx2)
end subroutine
    """.format(N,N)
fp = open("myfile.f90", "w")
fp.write(fortran_source)
fp.close()
counter=counter+1
try:
output = subprocess.check_output(["f2py", "-c","-m", "fortran_module%05d"%counter, "myfile.f90"])
m = importlib.import_module("fortran_module%05d"%counter)
except:
print ("problem z kompilacja!")
return output
return m
fortran_module = prepare_fortran_module(N=1000)
fortran_diff2d, fortran_diff2d_a = fortran_module.fortran_diff2d, fortran_module.fortran_diff2d_a
def calcF(N, Niter, func, dx2,dy2,c):
u = np.zeros([N, N],order='F')
v = np.zeros_like(u)
u[u.shape[0]//2,u.shape[1]//2] = 1.0/np.sqrt(dx2*dy2)
for i in range(Niter//2):
func(u,v,dx2,dy2,c)
func(v,u,dx2,dy2,c)
return u
Explanation: The fastest version - vectorized Fortran
It is worth noting that:
the difference between numpy/Cython/Fortran is not always this large; check, for example, mapping a function over an array of values
numpy is a few times slower because it creates intermediate arrays for every operation
this can be worked around with the numexpr module, which compiles the whole expression on numpy arrays - check it out (a sketch follows below)!
vectorized Fortran compiles expressions of the form $u_i+u_{i+1}$ most efficiently
Conclusion - it is worth using Fortran for certain operations!
Automatic code generation
Problem: compiled modules cannot be reloaded.
Let us look at a simple code-generation approach
End of explanation
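Following up on the numexpr remark above, here is a rough, unbenchmarked sketch of what such a kernel could look like (it assumes the numexpr package is installed and is not part of the original comparison):

```python
import numexpr as ne

def numexpr_diff2d(u, v, dx2, dy2, c):
    # evaluate the whole stencil expression in one pass, avoiding numpy temporaries
    uc = u[1:-1, 1:-1]
    un, us = u[2:, 1:-1], u[:-2, 1:-1]
    ue, uw = u[1:-1, 2:], u[1:-1, :-2]
    v[1:-1, 1:-1] = ne.evaluate(
        "uc + c*((un + us - 2.0*uc)/dy2 + (ue + uw - 2.0*uc)/dx2)")
```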
N = 1000
fortran_module = prepare_fortran_module(N=N)
fortran_diff2d, fortran_diff2d_a = fortran_module.fortran_diff2d, fortran_module.fortran_diff2d_a
u = calcF(N,1225,fortran_diff2d,dx2,dy2,c)
plt.imshow(u)
def calcF_a(N, Niter, func, dx2,dy2,c):
u = np.zeros([N, N],order='F')
v = np.zeros_like(u)
u[u.shape[0]//2,u.shape[1]//2] = 1.0/np.sqrt(dx2*dy2)
for i in range(Niter):
func(u,dx2,dy2,c)
return u
u = calcF_a(1000,1225,fortran_diff2d_a,dx2,dy2,c)
plt.imshow(u)
%%time
u = calc(1000,200,numpy_diff2d_a,dx2,dy2,c)
%%time
u = calc(1000,200,cython_diff2d,dx2,dy2,c)
fortran_diff2d = prepare_fortran_module(N=1000).fortran_diff2d
%%time
u = calcF(1000,200,fortran_diff2d,dx2,dy2,c)
%%time
u = calcF_a(1000,200,fortran_diff2d_a,dx2,dy2,c)
Explanation: now we can compile the same code multiple times and load the module
End of explanation |
12,568 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Fitting a diagonal covariance Gaussian mixture model to text data
In a previous assignment, we explored k-means clustering for a high-dimensional Wikipedia dataset. We can also model this data with a mixture of Gaussians, though with increasing dimension we run into two important issues associated with using a full covariance matrix for each component.
* Computational cost becomes prohibitive in high dimensions
Step1: We also have a Python file containing implementations for several functions that will be used during the course of this assignment.
Step2: Load Wikipedia data and extract TF-IDF features
Load the Wikipedia data and transform each of the first 5000 documents into a TF-IDF representation.
Step3: Using a utility we provide, we will create a sparse matrix representation of the documents. This is the same utility function you used during the previous assignment on k-means with text data.
Step4: As in the previous assignment, we will normalize each document's TF-IDF vector to be a unit vector.
Step5: We can check that the length (Euclidean norm) of each row is now 1.0, as expected.
Step6: EM in high dimensions
EM for high-dimensional data requires some special treatment
Step7: Initializing cluster weights
We will initialize each cluster weight to be the proportion of documents assigned to that cluster by k-means above.
Step8: Initializing covariances
To initialize our covariance parameters, we compute $\hat{\sigma}_{k, j}^2 = \sum_{i=1}^{N}(x_{i,j} - \hat{\mu}_{k, j})^2$ for each feature $j$. For features with really tiny variances, we assign 1e-8 instead to prevent numerical instability. We do this computation in a vectorized fashion in the following code block.
Step9: Running EM
Now that we have initialized all of our parameters, run EM.
Step10: Interpret clustering results
In contrast to k-means, EM is able to explicitly model clusters of varying sizes and proportions. The relative magnitudes of the variances in the word dimensions tell us much about the nature of the clusters.
Write yourself a cluster visualizer as follows. Examining each cluster's mean vector, list the 5 words with the largest mean values (5 most common words in the cluster). For each word, also include the associated variance parameter (diagonal element of the covariance matrix).
A sample output may be
Step11: Quiz Question. Select all the topics that have a cluster in the model created above. [multiple choice]
Comparing to random initialization
Create variables for randomly initializing the EM algorithm. Complete the following code block.
Step12: Quiz Question | Python Code:
import graphlab
'''Check GraphLab Create version'''
from distutils.version import StrictVersion
assert (StrictVersion(graphlab.version) >= StrictVersion('1.8.5')), 'GraphLab Create must be version 1.8.5 or later.'
Explanation: Fitting a diagonal covariance Gaussian mixture model to text data
In a previous assignment, we explored k-means clustering for a high-dimensional Wikipedia dataset. We can also model this data with a mixture of Gaussians, though with increasing dimension we run into two important issues associated with using a full covariance matrix for each component.
* Computational cost becomes prohibitive in high dimensions: score calculations have complexity cubic in the number of dimensions M if the Gaussian has a full covariance matrix.
* A model with many parameters requires more data: observe that a full covariance matrix for an M-dimensional Gaussian will have M(M+1)/2 parameters to fit. With the number of parameters growing roughly as the square of the dimension, it may quickly become impossible to find a sufficient amount of data to make good inferences.
Both of these issues are avoided if we require the covariance matrix of each component to be diagonal, as then it has only M parameters to fit and the score computation decomposes into M univariate score calculations. Recall from the lecture that the M-step for the full covariance is:
\begin{align}
\hat{\Sigma}k &= \frac{1}{N_k^{soft}} \sum{i=1}^N r_{ik} (x_i-\hat{\mu}_k)(x_i - \hat{\mu}_k)^T
\end{align}
Note that this is a square matrix with M rows and M columns, and the above equation implies that the (v, w) element is computed by
\begin{align}
\hat{\Sigma}_{k, v, w} &= \frac{1}{N_k^{soft}} \sum_{i=1}^N r_{ik} (x_{iv}-\hat{\mu}_{kv})(x_{iw} - \hat{\mu}_{kw})
\end{align}
When we assume that this is a diagonal matrix, then non-diagonal elements are assumed to be zero and we only need to compute each of the M elements along the diagonal independently using the following equation.
\begin{align}
\hat{\sigma}^2_{k, v} &= \hat{\Sigma}_{k, v, v} \\
&= \frac{1}{N_k^{soft}} \sum_{i=1}^N r_{ik} (x_{iv}-\hat{\mu}_{kv})^2
\end{align}
In this section, we will use an EM implementation to fit a Gaussian mixture model with diagonal covariances to a subset of the Wikipedia dataset. The implementation uses the above equation to compute each variance term.
We'll begin by importing the dataset and coming up with a useful representation for each article. After running our algorithm on the data, we will explore the output to see whether we can give a meaningful interpretation to the fitted parameters in our model.
Note to Amazon EC2 users: To conserve memory, make sure to stop all the other notebooks before running this notebook.
Import necessary packages
The following code block will check if you have the correct version of GraphLab Create. Any version later than 1.8.5 will do. To upgrade, read this page.
End of explanation
from em_utilities import *
Explanation: We also have a Python file containing implementations for several functions that will be used during the course of this assignment.
End of explanation
wiki = graphlab.SFrame('people_wiki.gl/').head(5000)
wiki['tf_idf'] = graphlab.text_analytics.tf_idf(wiki['text'])
Explanation: Load Wikipedia data and extract TF-IDF features
Load the Wikipedia data and transform each of the first 5000 documents into a TF-IDF representation.
End of explanation
tf_idf, map_index_to_word = sframe_to_scipy(wiki, 'tf_idf')
Explanation: Using a utility we provide, we will create a sparse matrix representation of the documents. This is the same utility function you used during the previous assignment on k-means with text data.
End of explanation
tf_idf = normalize(tf_idf)
Explanation: As in the previous assignment, we will normalize each document's TF-IDF vector to be a unit vector.
End of explanation
for i in range(5):
doc = tf_idf[i]
print(np.linalg.norm(doc.todense()))
Explanation: We can check that the length (Euclidean norm) of each row is now 1.0, as expected.
End of explanation
from sklearn.cluster import KMeans
np.random.seed(5)
num_clusters = 25
# Use scikit-learn's k-means to simplify workflow
kmeans_model = KMeans(n_clusters=num_clusters, n_init=5, max_iter=400, random_state=1, n_jobs=-1)
kmeans_model.fit(tf_idf)
centroids, cluster_assignment = kmeans_model.cluster_centers_, kmeans_model.labels_
means = [centroid for centroid in centroids]
Explanation: EM in high dimensions
EM for high-dimensional data requires some special treatment:
* E step and M step must be vectorized as much as possible, as explicit loops are dreadfully slow in Python.
* All operations must be cast in terms of sparse matrix operations, to take advantage of computational savings enabled by sparsity of data.
* Initially, some words may be entirely absent from a cluster, causing the M step to produce zero mean and variance for those words. This means any data point with one of those words will have 0 probability of being assigned to that cluster since the cluster allows for no variability (0 variance) around that count being 0 (0 mean). Since there is a small chance for those words to later appear in the cluster, we instead assign a small positive variance (~1e-10). Doing so also prevents numerical overflow.
We provide the complete implementation for you in the file em_utilities.py. For those who are interested, you can read through the code to see how the sparse matrix implementation differs from the previous assignment.
You are expected to answer some quiz questions using the results of clustering.
Initializing mean parameters using k-means
Recall from the lectures that EM for Gaussian mixtures is very sensitive to the choice of initial means. With a bad initial set of means, EM may produce clusters that span a large area and are mostly overlapping. To eliminate such bad outcomes, we first produce a suitable set of initial means by using the cluster centers from running k-means. That is, we first run k-means and then take the final set of means from the converged solution as the initial means in our EM algorithm.
End of explanation
num_docs = tf_idf.shape[0]
weights = []
for i in xrange(num_clusters):
# Compute the number of data points assigned to cluster i:
num_assigned = ... # YOUR CODE HERE
w = float(num_assigned) / num_docs
weights.append(w)
Explanation: Initializing cluster weights
We will initialize each cluster weight to be the proportion of documents assigned to that cluster by k-means above.
End of explanation
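One possible way to fill in the blank above (a sketch, not necessarily the intended course solution) is to count the k-means labels directly; equivalently, the whole weights list can be built in one line:

```python
# sketch: proportion of documents k-means assigned to each cluster
weights = (np.bincount(cluster_assignment, minlength=num_clusters) / float(num_docs)).tolist()
```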
covs = []
for i in xrange(num_clusters):
member_rows = tf_idf[cluster_assignment==i]
cov = (member_rows.power(2) - 2*member_rows.dot(diag(means[i]))).sum(axis=0).A1 / member_rows.shape[0] \
+ means[i]**2
cov[cov < 1e-8] = 1e-8
covs.append(cov)
Explanation: Initializing covariances
To initialize our covariance parameters, we compute $\hat{\sigma}_{k, j}^2 = \sum_{i=1}^{N}(x_{i,j} - \hat{\mu}_{k, j})^2$ for each feature $j$. For features with really tiny variances, we assign 1e-8 instead to prevent numerical instability. We do this computation in a vectorized fashion in the following code block.
End of explanation
out = EM_for_high_dimension(tf_idf, means, covs, weights, cov_smoothing=1e-10)
out['loglik']
Explanation: Running EM
Now that we have initialized all of our parameters, run EM.
End of explanation
# Fill in the blanks
def visualize_EM_clusters(tf_idf, means, covs, map_index_to_word):
print('')
print('==========================================================')
num_clusters = len(means)
for c in xrange(num_clusters):
print('Cluster {0:d}: Largest mean parameters in cluster '.format(c))
print('\n{0: <12}{1: <12}{2: <12}'.format('Word', 'Mean', 'Variance'))
# The k'th element of sorted_word_ids should be the index of the word
# that has the k'th-largest value in the cluster mean. Hint: Use np.argsort().
sorted_word_ids = ... # YOUR CODE HERE
for i in sorted_word_ids[:5]:
print '{0: <12}{1:<10.2e}{2:10.2e}'.format(map_index_to_word['category'][i],
means[c][i],
covs[c][i])
print '\n=========================================================='
'''By EM'''
visualize_EM_clusters(tf_idf, out['means'], out['covs'], map_index_to_word)
Explanation: Interpret clustering results
In contrast to k-means, EM is able to explicitly model clusters of varying sizes and proportions. The relative magnitudes of the variances in the word dimensions tell us much about the nature of the clusters.
Write yourself a cluster visualizer as follows. Examining each cluster's mean vector, list the 5 words with the largest mean values (5 most common words in the cluster). For each word, also include the associated variance parameter (diagonal element of the covariance matrix).
A sample output may be:
```
==========================================================
Cluster 0: Largest mean parameters in cluster
Word Mean Variance
football 1.08e-01 8.64e-03
season 5.80e-02 2.93e-03
club 4.48e-02 1.99e-03
league 3.94e-02 1.08e-03
played 3.83e-02 8.45e-04
...
```
End of explanation
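For the sorted_word_ids blank in the visualizer above, one common choice (a sketch, not necessarily the graded solution) is a descending argsort of the cluster mean:

```python
# inside the cluster loop above, one possible completion is:
#     sorted_word_ids = np.argsort(means[c])[::-1]
```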
np.random.seed(5) # See the note below to see why we set seed=5.
num_clusters = len(means)
num_docs, num_words = tf_idf.shape
random_means = []
random_covs = []
random_weights = []
for k in range(num_clusters):
# Create a numpy array of length num_words with random normally distributed values.
# Use the standard univariate normal distribution (mean 0, variance 1).
# YOUR CODE HERE
mean = ...
# Create a numpy array of length num_words with random values uniformly distributed between 1 and 5.
# YOUR CODE HERE
cov = ...
# Initially give each cluster equal weight.
# YOUR CODE HERE
weight = ...
random_means.append(mean)
random_covs.append(cov)
random_weights.append(weight)
Explanation: Quiz Question. Select all the topics that have a cluster in the model created above. [multiple choice]
Comparing to random initialization
Create variables for randomly initializing the EM algorithm. Complete the following code block.
End of explanation
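One hedged reading of the three blanks above (the graded solution may differ in details such as the weight normalization):

```python
# sketch for the loop body above:
#     mean = np.random.randn(num_words)              # standard normal draws
#     cov = np.random.uniform(1.0, 5.0, num_words)   # uniform on [1, 5]
#     weight = 1.0 / num_clusters                    # equal initial weights
```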
# YOUR CODE HERE. Use visualize_EM_clusters, which will require you to pass in tf_idf and map_index_to_word.
...
Explanation: Quiz Question: Try fitting EM with the random initial parameters you created above. (Use cov_smoothing=1e-5.) Store the result to out_random_init. What is the final loglikelihood that the algorithm converges to?
Quiz Question: Is the final loglikelihood larger or smaller than the final loglikelihood we obtained above when initializing EM with the results from running k-means?
Quiz Question: For the above model, out_random_init, use the visualize_EM_clusters method you created above. Are the clusters more or less interpretable than the ones found after initializing using k-means?
End of explanation |
12,569 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Modeling and Simulation in Python
Rabbit example
Copyright 2017 Allen Downey
License
Step1: Rabbit is Rich
This notebook starts with a version of the rabbit population growth model. You will modify it using some of the tools in Chapter 5. Before you attempt this diagnostic, you should have a good understanding of State objects, as presented in Section 5.4. And you should understand the version of run_simulation in Section 5.7.
Separating the State from the System
Here's the System object from the previous diagnostic. Notice that it includes system parameters, which don't change while the simulation is running, and population variables, which do. We're going to improve that by pulling the population variables into a State object.
Step2: In the following cells, define a State object named init that contains two state variables, juveniles and adults, with initial values 0 and 10. Make a version of the System object that does NOT contain juvenile_pop0 and adult_pop0, but DOES contain init.
Step4: Updating run_simulation
Here's the version of run_simulation from last time
Step5: In the cell below, write a version of run_simulation that works with the new System object (the one that contains a State object named init).
Hint
Step6: Test your changes in run_simulation
Step8: Plotting the results
Here's a version of plot_results that plots both the adult and juvenile TimeSeries.
Step9: If your changes in the previous section were successful, you should be able to run this new version of plot_results.
Step10: That's the end of the diagnostic. If you were able to get it done quickly, and you would like a challenge, here are two bonus questions
Step11: Bonus question #2
Factor out the update function.
Write a function called update that takes a State object and a System object and returns a new State object that represents the state of the system after one time step.
Write a version of run_simulation that takes an update function as a parameter and uses it to compute the update.
Run your new version of run_simulation and plot the results.
WARNING | Python Code:
%matplotlib inline
from modsim import *
Explanation: Modeling and Simulation in Python
Rabbit example
Copyright 2017 Allen Downey
License: Creative Commons Attribution 4.0 International
End of explanation
system = System(t0 = 0,
t_end = 20,
juvenile_pop0 = 0,
adult_pop0 = 10,
birth_rate = 0.9,
mature_rate = 0.33,
death_rate = 0.5)
system
Explanation: Rabbit is Rich
This notebook starts with a version of the rabbit population growth model. You will modify it using some of the tools in Chapter 5. Before you attempt this diagnostic, you should have a good understanding of State objects, as presented in Section 5.4. And you should understand the version of run_simulation in Section 5.7.
Separating the State from the System
Here's the System object from the previous diagnostic. Notice that it includes system parameters, which don't change while the simulation is running, and population variables, which do. We're going to improve that by pulling the population variables into a State object.
End of explanation
# Solution goes here
# Solution goes here
Explanation: In the following cells, define a State object named init that contains two state variables, juveniles and adults, with initial values 0 and 10. Make a version of the System object that does NOT contain juvenile_pop0 and adult_pop0, but DOES contain init.
End of explanation
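A sketch of one possible answer (not the official solution), using the same parameter values as before:

```python
# state variables pulled out of the System object
init = State(juveniles=0, adults=10)

system = System(t0=0, t_end=20,
                birth_rate=0.9, mature_rate=0.33, death_rate=0.5,
                init=init)
```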
def run_simulation(system):
    """Runs a proportional growth model.
    Adds TimeSeries to `system` as `results`.
    system: System object
    """
juveniles = TimeSeries()
juveniles[system.t0] = system.juvenile_pop0
adults = TimeSeries()
adults[system.t0] = system.adult_pop0
for t in linrange(system.t0, system.t_end):
maturations = system.mature_rate * juveniles[t]
births = system.birth_rate * adults[t]
deaths = system.death_rate * adults[t]
if adults[t] > 30:
market = adults[t] - 30
else:
market = 0
juveniles[t+1] = juveniles[t] + births - maturations
adults[t+1] = adults[t] + maturations - deaths - market
system.adults = adults
system.juveniles = juveniles
Explanation: Updating run_simulation
Here's the version of run_simulation from last time:
End of explanation
# Solution goes here
Explanation: In the cell below, write a version of run_simulation that works with the new System object (the one that contains a State object named init).
Hint: you only have to change two lines.
End of explanation
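The two lines in question are the ones that read the initial populations; a sketch of the change (not the official solution):

```python
# inside run_simulation, replace the two initialization lines with
#     juveniles[system.t0] = system.init.juveniles
#     adults[system.t0] = system.init.adults
```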
run_simulation(system)
system.adults
Explanation: Test your changes in run_simulation:
End of explanation
def plot_results(system, title=None):
    """Plot the estimates and the model.
    system: System object with `results`
    """
newfig()
plot(system.adults, 'bo-', label='adults')
plot(system.juveniles, 'gs-', label='juveniles')
decorate(xlabel='Season',
ylabel='Rabbit population',
title=title)
Explanation: Plotting the results
Here's a version of plot_results that plots both the adult and juvenile TimeSeries.
End of explanation
plot_results(system, title='Proportional growth model')
Explanation: If your changes in the previous section were successful, you should be able to run this new version of plot_results.
End of explanation
# Solution goes here
run_simulation(system)
# Solution goes here
plot_results(system)
Explanation: That's the end of the diagnostic. If you were able to get it done quickly, and you would like a challenge, here are two bonus questions:
Bonus question #1
Write a version of run_simulation that puts the results into a single TimeFrame named results, rather than two TimeSeries objects.
Write a version of plot_results that can plot the results in this form.
WARNING: This question is substantially harder, and requires you to have a good understanding of everything in Chapter 5. We don't expect most people to be able to do this exercise at this point.
End of explanation
# Solution goes here
run_simulation(system, update)
plot_results(system)
Explanation: Bonus question #2
Factor out the update function.
Write a function called update that takes a State object and a System object and returns a new State object that represents the state of the system after one time step.
Write a version of run_simulation that takes an update function as a parameter and uses it to compute the update.
Run your new version of run_simulation and plot the results.
WARNING: This question is substantially harder, and requires you to have a good understanding of everything in Chapter 5. We don't expect most people to be able to do this exercise at this point.
End of explanation |
12,570 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Automatic Hyperparameter tuning
This notebook will show you how to extend the code in the cloud-ml-housing-prices notebook to take advantage of Cloud ML Engine's automatic hyperparameter tuning.
We will use it to determine the ideal number of hidden units to use in our neural network.
Cloud ML Engine uses bayesian optimization to find the hyperparameter settings for you. You can read the details of how it works here.
1) Modify Tensorflow Code
We need to make code changes to
Step1: 2) Define Hyperparameter Configuration File
Here you specify
Step2: 3) Train
Step3: Run local
It's a best practice to first run locally to check for errors. Note you can ignore the warnings in this case, as long as there are no errors.
Step4: Run on cloud (1 cloud ML unit) | Python Code:
%%bash
mkdir trainer
touch trainer/__init__.py
%%writefile trainer/task.py
import argparse
import pandas as pd
import tensorflow as tf
import os #NEW
import json #NEW
from tensorflow.contrib.learn.python.learn import learn_runner
from tensorflow.contrib.learn.python.learn.utils import saved_model_export_utils
print(tf.__version__)
tf.logging.set_verbosity(tf.logging.ERROR)
data_train = pd.read_csv(
filepath_or_buffer='https://storage.googleapis.com/spls/gsp418/housing_train.csv',
names=["CRIM","ZN","INDUS","CHAS","NOX","RM","AGE","DIS","RAD","TAX","PTRATIO","MEDV"])
data_test = pd.read_csv(
filepath_or_buffer='https://storage.googleapis.com/spls/gsp418/housing_test.csv',
names=["CRIM","ZN","INDUS","CHAS","NOX","RM","AGE","DIS","RAD","TAX","PTRATIO","MEDV"])
FEATURES = ["CRIM", "ZN", "INDUS", "NOX", "RM",
"AGE", "DIS", "TAX", "PTRATIO"]
LABEL = "MEDV"
feature_cols = [tf.feature_column.numeric_column(k)
for k in FEATURES] #list of Feature Columns
def generate_estimator(output_dir):
return tf.estimator.DNNRegressor(feature_columns=feature_cols,
hidden_units=[args.hidden_units_1, args.hidden_units_2], #NEW (use command line parameters for hidden units)
model_dir=output_dir)
def generate_input_fn(data_set):
def input_fn():
features = {k: tf.constant(data_set[k].values) for k in FEATURES}
labels = tf.constant(data_set[LABEL].values)
return features, labels
return input_fn
def serving_input_fn():
#feature_placeholders are what the caller of the predict() method will have to provide
feature_placeholders = {
column.name: tf.placeholder(column.dtype, [None])
for column in feature_cols
}
#features are what we actually pass to the estimator
features = {
# Inputs are rank 1 so that we can provide scalars to the server
# but Estimator expects rank 2, so we expand dimension
key: tf.expand_dims(tensor, -1)
for key, tensor in feature_placeholders.items()
}
return tf.estimator.export.ServingInputReceiver(
features, feature_placeholders
)
train_spec = tf.estimator.TrainSpec(
input_fn=generate_input_fn(data_train),
max_steps=3000)
exporter = tf.estimator.LatestExporter('Servo', serving_input_fn)
eval_spec=tf.estimator.EvalSpec(
input_fn=generate_input_fn(data_test),
steps=1,
exporters=exporter)
######START CLOUD ML ENGINE BOILERPLATE######
if __name__ == '__main__':
parser = argparse.ArgumentParser()
# Input Arguments
parser.add_argument(
'--output_dir',
help='GCS location to write checkpoints and export models',
required=True
)
parser.add_argument(
'--job-dir',
help='this model ignores this field, but it is required by gcloud',
default='junk'
)
parser.add_argument(
'--hidden_units_1', #NEW (expose hyperparameter to command line)
help='number of neurons in first hidden layer',
type = int,
default=10
)
parser.add_argument(
'--hidden_units_2', #NEW (expose hyperparameter to command line)
help='number of neurons in second hidden layer',
type = int,
default=10
)
args = parser.parse_args()
arguments = args.__dict__
output_dir = arguments.pop('output_dir')
output_dir = os.path.join(#NEW (give each trial its own output_dir)
output_dir,
json.loads(
os.environ.get('TF_CONFIG', '{}')
).get('task', {}).get('trial', '')
)
######END CLOUD ML ENGINE BOILERPLATE######
#initiate training job
tf.estimator.train_and_evaluate(generate_estimator(output_dir), train_spec, eval_spec)
Explanation: Automatic Hyperparameter tuning
This notebook will show you how to extend the code in the cloud-ml-housing-prices notebook to take advantage of Cloud ML Engine's automatic hyperparameter tuning.
We will use it to determine the ideal number of hidden units to use in our neural network.
Cloud ML Engine uses bayesian optimization to find the hyperparameter settings for you. You can read the details of how it works here.
1) Modify Tensorflow Code
We need to make code changes to:
1. Expose any hyperparameter we wish to tune as a command line argument (this is how CMLE passes new values)
2. Modify the output_dir so each hyperparameter 'trial' gets written to a unique directory
These changes are illustrated below. Any change from the original code has a #NEW comment next to it for easy reference
End of explanation
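For reference, each trial of a tuning job receives its trial id through the TF_CONFIG environment variable, which is exactly what the boilerplate above parses. An illustrative (trimmed-down, not verbatim) example:

```python
# illustration only: a simplified TF_CONFIG as seen by trial 3 of a tuning job
#   {"task": {"trial": "3", ...}, ...}
# so trial 3 writes its checkpoints under <output_dir>/3 instead of clobbering other trials
```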
%%writefile config.yaml
trainingInput:
hyperparameters:
goal: MINIMIZE
hyperparameterMetricTag: average_loss
maxTrials: 5
maxParallelTrials: 1
params:
- parameterName: hidden_units_1
type: INTEGER
minValue: 1
maxValue: 100
scaleType: UNIT_LOG_SCALE
- parameterName: hidden_units_2
type: INTEGER
minValue: 1
maxValue: 100
scaleType: UNIT_LOG_SCALE
Explanation: 2) Define Hyperparameter Configuration File
Here you specify:
Which hyperparamters to tune
The min and max range to search between
The metric to optimize
The number of trials to run
End of explanation
GCS_BUCKET = 'gs://vijays-sandbox-ml' #CHANGE THIS TO YOUR BUCKET
PROJECT = 'vijays-sandbox' #CHANGE THIS TO YOUR PROJECT ID
REGION = 'us-central1' #OPTIONALLY CHANGE THIS
import os
os.environ['GCS_BUCKET'] = GCS_BUCKET
os.environ['PROJECT'] = PROJECT
os.environ['REGION'] = REGION
Explanation: 3) Train
End of explanation
%%bash
gcloud ml-engine local train \
--module-name=trainer.task \
--package-path=trainer \
-- \
--output_dir='./output'
Explanation: Run local
It's a best practice to first run locally to check for errors. Note you can ignore the warnings in this case, as long as there are no errors.
End of explanation
%%bash
gcloud config set project $PROJECT
%%bash
JOBNAME=housing_$(date -u +%y%m%d_%H%M%S)
gcloud ml-engine jobs submit training $JOBNAME \
--region=$REGION \
--module-name=trainer.task \
--package-path=./trainer \
--job-dir=$GCS_BUCKET/$JOBNAME/ \
--runtime-version 1.4 \
--config config.yaml \
-- \
--output_dir=$GCS_BUCKET/$JOBNAME/output
Explanation: Run on cloud (1 cloud ML unit)
End of explanation |
12,571 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Rejecting bad data spans
This tutorial covers manual marking of bad spans of data, and automated
rejection of data spans based on signal amplitude.
Step1: Annotating bad spans of data
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The tutorial tut-events-vs-annotations describes how
Step2: .. sidebar
Step3: Now we can confirm that the annotations are centered on the EOG events. Since
blinks are usually easiest to see in the EEG channels, we'll only plot EEG
here
Step4: See the section tut-section-programmatic-annotations for more details
on creating annotations programmatically.
Rejecting Epochs based on channel amplitude
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Besides "bad" annotations, the
Step5: The values that are appropriate are dataset- and hardware-dependent, so some
trial-and-error may be necessary to find the correct balance between data
quality and loss of power due to too many dropped epochs. Here, we've set the
rejection criteria to be fairly stringent, for illustration purposes.
Two additional parameters, reject_tmin and reject_tmax, are used to
set the temporal window in which to calculate peak-to-peak amplitude for the
purposes of epoch rejection. These default to the same tmin and tmax
of the entire epoch. As one example, if you wanted to only apply the
rejection thresholds to the portion of the epoch that occurs before the
event marker around which the epoch is created, you could set
reject_tmax=0. A summary of the causes of rejected epochs can be
generated with the
Step6: Notice that we've passed reject_by_annotation=False above, in order to
isolate the effects of the rejection thresholds. If we re-run the epoching
with reject_by_annotation=True (the default) we see that the rejections
due to EEG and EOG channels have disappeared (suggesting that those channel
fluctuations were probably blink-related, and were subsumed by rejections
based on the "bad blink" label).
Step7: More importantly, note that many more epochs are rejected (~20% instead of
~2.5%) when rejecting based on the blink labels, underscoring why it is
usually desirable to repair artifacts rather than exclude them.
The
Step8: Finally, it should be noted that "dropped" epochs are not necessarily deleted
from the
Step9: Alternatively, if rejection thresholds were not originally given to the | Python Code:
import os
import mne
sample_data_folder = mne.datasets.sample.data_path()
sample_data_raw_file = os.path.join(sample_data_folder, 'MEG', 'sample',
'sample_audvis_filt-0-40_raw.fif')
raw = mne.io.read_raw_fif(sample_data_raw_file, verbose=False)
events_file = os.path.join(sample_data_folder, 'MEG', 'sample',
'sample_audvis_filt-0-40_raw-eve.fif')
events = mne.read_events(events_file)
Explanation: Rejecting bad data spans
This tutorial covers manual marking of bad spans of data, and automated
rejection of data spans based on signal amplitude.
We begin as always by importing the necessary Python modules and loading some
example data <sample-dataset>; to save memory we'll use a pre-filtered
and downsampled version of the example data, and we'll also load an events
array to use when converting the continuous data to epochs:
End of explanation
fig = raw.plot()
fig.canvas.key_press_event('a')
Explanation: Annotating bad spans of data
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The tutorial tut-events-vs-annotations describes how
:class:~mne.Annotations can be read from embedded events in the raw
recording file, and tut-annotate-raw describes in detail how to
interactively annotate a :class:~mne.io.Raw data object. Here, we focus on
best practices for annotating bad data spans so that they will be excluded
from your analysis pipeline.
The reject_by_annotation parameter
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In the interactive raw.plot() window, the annotation controls can be
opened by pressing :kbd:a. Here, new annotation labels can be created or
existing annotation labels can be selected for use.
End of explanation
eog_events = mne.preprocessing.find_eog_events(raw)
onsets = eog_events[:, 0] / raw.info['sfreq'] - 0.25
durations = [0.5] * len(eog_events)
descriptions = ['bad blink'] * len(eog_events)
blink_annot = mne.Annotations(onsets, durations, descriptions,
orig_time=raw.info['meas_date'])
raw.set_annotations(blink_annot)
Explanation: .. sidebar:: Annotating good spans
The default "BAD\_" prefix for new labels can be removed simply by
pressing the backspace key four times before typing your custom
annotation label.
You can see that the default annotation label is "BAD_"; this can be edited
prior to pressing the "Add label" button to customize the label. The intent
is that users can annotate with as many or as few labels as makes sense for
their research needs, but that annotations marking spans that should be
excluded from the analysis pipeline should all begin with "BAD" or "bad"
(e.g., "bad_cough", "bad-eyes-closed", "bad door slamming", etc). When this
practice is followed, many processing steps in MNE-Python will automatically
exclude the "bad"-labelled spans of data; this behavior is controlled by a
parameter reject_by_annotation that can be found in many MNE-Python
functions or class constructors, including:
creation of epoched data from continuous data (:class:mne.Epochs)
independent components analysis (:class:mne.preprocessing.ICA)
functions for finding heartbeat and blink artifacts
(:func:~mne.preprocessing.find_ecg_events,
:func:~mne.preprocessing.find_eog_events)
covariance computations (:func:mne.compute_raw_covariance)
power spectral density computation (:meth:mne.io.Raw.plot_psd,
:func:mne.time_frequency.psd_welch)
For example, when creating epochs from continuous data, if
reject_by_annotation=True the :class:~mne.Epochs constructor will drop
any epoch that partially or fully overlaps with an annotated span that begins
with "bad".
Generating annotations programmatically
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The tut-artifact-overview tutorial introduced the artifact detection
functions :func:~mne.preprocessing.find_eog_events and
:func:~mne.preprocessing.find_ecg_events (although that tutorial mostly
relied on their higher-level wrappers
:func:~mne.preprocessing.create_eog_epochs and
:func:~mne.preprocessing.create_ecg_epochs). Here, for demonstration
purposes, we make use of the lower-level artifact detection function to get
an events array telling us where the blinks are, then automatically add
"bad_blink" annotations around them (this is not necessary when using
:func:~mne.preprocessing.create_eog_epochs, it is done here just to show
how annotations are added non-interactively). We'll start the annotations
250 ms before the blink and end them 250 ms after it:
End of explanation
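A small sanity check (an addition, not in the original tutorial): there should be one 0.5-second annotation per detected blink.

```python
print(len(eog_events), len(raw.annotations))
print(raw.annotations.description[:5])
```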
eeg_picks = mne.pick_types(raw.info, meg=False, eeg=True)
raw.plot(events=eog_events, order=eeg_picks)
Explanation: Now we can confirm that the annotations are centered on the EOG events. Since
blinks are usually easiest to see in the EEG channels, we'll only plot EEG
here:
End of explanation
reject_criteria = dict(mag=3000e-15, # 3000 fT
grad=3000e-13, # 3000 fT/cm
eeg=100e-6, # 100 μV
eog=200e-6) # 200 μV
flat_criteria = dict(mag=1e-15, # 1 fT
grad=1e-13, # 1 fT/cm
eeg=1e-6) # 1 μV
Explanation: See the section tut-section-programmatic-annotations for more details
on creating annotations programmatically.
Rejecting Epochs based on channel amplitude
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Besides "bad" annotations, the :class:mne.Epochs class constructor has
another means of rejecting epochs, based on signal amplitude thresholds for
each channel type. In the overview tutorial
<tut-section-overview-epoching> we saw an example of this: setting maximum
acceptable peak-to-peak amplitudes for each channel type in an epoch, using
the reject parameter. There is also a related parameter, flat, that
can be used to set minimum acceptable peak-to-peak amplitudes for each
channel type in an epoch:
End of explanation
epochs = mne.Epochs(raw, events, tmin=-0.2, tmax=0.5, reject_tmax=0,
reject=reject_criteria, flat=flat_criteria,
reject_by_annotation=False, preload=True)
epochs.plot_drop_log()
Explanation: The values that are appropriate are dataset- and hardware-dependent, so some
trial-and-error may be necessary to find the correct balance between data
quality and loss of power due to too many dropped epochs. Here, we've set the
rejection criteria to be fairly stringent, for illustration purposes.
Two additional parameters, reject_tmin and reject_tmax, are used to
set the temporal window in which to calculate peak-to-peak amplitude for the
purposes of epoch rejection. These default to the same tmin and tmax
of the entire epoch. As one example, if you wanted to only apply the
rejection thresholds to the portion of the epoch that occurs before the
event marker around which the epoch is created, you could set
reject_tmax=0. A summary of the causes of rejected epochs can be
generated with the :meth:~mne.Epochs.plot_drop_log method:
End of explanation
epochs = mne.Epochs(raw, events, tmin=-0.2, tmax=0.5, reject_tmax=0,
reject=reject_criteria, flat=flat_criteria, preload=True)
epochs.plot_drop_log()
Explanation: Notice that we've passed reject_by_annotation=False above, in order to
isolate the effects of the rejection thresholds. If we re-run the epoching
with reject_by_annotation=True (the default) we see that the rejections
due to EEG and EOG channels have disappeared (suggesting that those channel
fluctuations were probably blink-related, and were subsumed by rejections
based on the "bad blink" label).
End of explanation
print(epochs.drop_log)
Explanation: More importantly, note that many more epochs are rejected (~20% instead of
~2.5%) when rejecting based on the blink labels, underscoring why it is
usually desirable to repair artifacts rather than exclude them.
The :meth:~mne.Epochs.plot_drop_log method is a visualization of an
:class:~mne.Epochs attribute, namely epochs.drop_log, which stores
empty lists for retained epochs and lists of strings for dropped epochs, with
the strings indicating the reason(s) why the epoch was dropped. For example:
End of explanation
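As a small illustrative sketch (not part of the original tutorial), the drop
reasons stored in ``epochs.drop_log`` can be tallied directly:
from collections import Counter
reason_counts = Counter(reason for log in epochs.drop_log for reason in log)
print(reason_counts)  # how often each channel or criterion caused a rejection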
epochs.drop_bad()
Explanation: Finally, it should be noted that "dropped" epochs are not necessarily deleted
from the :class:~mne.Epochs object right away. Above, we forced the
dropping to happen when we created the :class:~mne.Epochs object by using
the preload=True parameter. If we had not done that, the
:class:~mne.Epochs object would have been memory-mapped_ (not loaded into
RAM), in which case the criteria for dropping epochs are stored, and the
actual dropping happens when the :class:~mne.Epochs data are finally loaded
and used. There are several ways this can get triggered, such as:
explicitly loading the data into RAM with the :meth:~mne.Epochs.load_data
method
plotting the data (:meth:~mne.Epochs.plot,
:meth:~mne.Epochs.plot_image, etc)
using the :meth:~mne.Epochs.average method to create an
:class:~mne.Evoked object
You can also trigger dropping with the :meth:~mne.Epochs.drop_bad method;
if reject and/or flat criteria have already been provided to the
epochs constructor, :meth:~mne.Epochs.drop_bad can be used without
arguments to simply delete the epochs already marked for removal (if the
epochs have already been dropped, nothing further will happen):
End of explanation
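A quick sanity check (sketch): the length of the Epochs object counts only the
retained epochs, while ``drop_log`` keeps one entry per original event:
print('retained {} of {} events'.format(len(epochs), len(epochs.drop_log)))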
stronger_reject_criteria = dict(mag=2000e-15, # 2000 fT
grad=2000e-13, # 2000 fT/cm
eeg=100e-6, # 100 μV
eog=100e-6) # 100 μV
epochs.drop_bad(reject=stronger_reject_criteria)
print(epochs.drop_log)
Explanation: Alternatively, if rejection thresholds were not originally given to the
:class:~mne.Epochs constructor, they can be passed to
:meth:~mne.Epochs.drop_bad later instead; this can also be a way of
imposing progressively more stringent rejection criteria:
End of explanation |
12,572 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
MNIST Image Classification with TensorFlow on Cloud ML Engine
This notebook demonstrates how to implement different image models on MNIST using Estimator.
Note the MODEL_TYPE; change it to try out different models
Step1: Run as a Python module
In the previous notebook (mnist_linear.ipynb) we ran our code directly from the notebook.
Now since we want to run our code on Cloud ML Engine, we've packaged it as a python module.
The model.py and task.py containing the model code is in <a href="mnistmodel/trainer">mnistmodel/trainer</a>
Complete the TODOs in model.py before proceeding!
Once you've completed the TODOs, set MODEL_TYPE and run it locally for a few steps to test the code.
Step2: Now, let's do it on Cloud ML Engine so we can train on GPU
Step3: Monitor training with TensorBoard
To activate TensorBoard within the JupyterLab UI navigate to "<b>File</b>" - "<b>New Launcher</b>". Then double-click the 'Tensorboard' icon on the bottom row.
TensorBoard 1 will appear in the new tab. Navigate through the three tabs to see the active TensorBoard. The 'Graphs' and 'Projector' tabs offer very interesting information including the ability to replay the tests.
You may close the TensorBoard tab when you are finished exploring.
Deploying and predicting with model
Deploy the model
Step4: To predict with the model, let's take one of the example images.
Step5: Send it to the prediction service | Python Code:
import os
PROJECT = "cloud-training-demos" # REPLACE WITH YOUR PROJECT ID
BUCKET = "cloud-training-demos-ml" # REPLACE WITH YOUR BUCKET NAME
REGION = "us-central1" # REPLACE WITH YOUR BUCKET REGION e.g. us-central1
MODEL_TYPE = "dnn" # "linear", "dnn", "dnn_dropout", or "cnn"
# Do not change these
os.environ["PROJECT"] = PROJECT
os.environ["BUCKET"] = BUCKET
os.environ["REGION"] = REGION
os.environ["MODEL_TYPE"] = MODEL_TYPE
os.environ["TFVERSION"] = "1.13" # Tensorflow version
%%bash
gcloud config set project $PROJECT
gcloud config set compute/region $REGION
Explanation: MNIST Image Classification with TensorFlow on Cloud ML Engine
This notebook demonstrates how to implement different image models on MNIST using Estimator.
Note the MODEL_TYPE; change it to try out different models
End of explanation
%%bash
rm -rf mnistmodel.tar.gz mnist_trained
gcloud ai-platform local train \
--module-name=trainer.task \
--package-path=${PWD}/mnistmodel/trainer \
-- \
--output_dir=${PWD}/mnist_trained \
--train_steps=100 \
--learning_rate=0.01 \
--model=$MODEL_TYPE
Explanation: Run as a Python module
In the previous notebook (mnist_linear.ipynb) we ran our code directly from the notebook.
Now since we want to run our code on Cloud ML Engine, we've packaged it as a python module.
The files model.py and task.py containing the model code are in <a href="mnistmodel/trainer">mnistmodel/trainer</a>
Complete the TODOs in model.py before proceeding!
Once you've completed the TODOs, set MODEL_TYPE and run it locally for a few steps to test the code.
End of explanation
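Purely for illustration (an assumed sketch, not the lab's actual model.py, and the function signature here is an assumption), a "dnn"-type model function for 28x28 MNIST images could look roughly like this in TF 1.x:
import tensorflow as tf

def dnn_model(img, mode, hparams):
    # Flatten the 28x28 image and stack a few fully connected ReLU layers.
    X = tf.reshape(img, [-1, 28 * 28])
    h1 = tf.layers.dense(X, 300, activation=tf.nn.relu)
    h2 = tf.layers.dense(h1, 100, activation=tf.nn.relu)
    h3 = tf.layers.dense(h2, 30, activation=tf.nn.relu)
    ylogits = tf.layers.dense(h3, 10, activation=None)  # 10 digit classes
    return ylogits, 10  # logits and number of classes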
%%bash
OUTDIR=gs://${BUCKET}/mnist/trained_${MODEL_TYPE}
JOBNAME=mnist_${MODEL_TYPE}_$(date -u +%y%m%d_%H%M%S)
echo $OUTDIR $REGION $JOBNAME
gsutil -m rm -rf $OUTDIR
gcloud ai-platform jobs submit training $JOBNAME \
--region=$REGION \
--module-name=trainer.task \
--package-path=${PWD}/mnistmodel/trainer \
--job-dir=$OUTDIR \
--staging-bucket=gs://$BUCKET \
--scale-tier=BASIC_GPU \
--runtime-version=$TFVERSION \
-- \
--output_dir=$OUTDIR \
--train_steps=10000 --learning_rate=0.01 --train_batch_size=512 \
--model=$MODEL_TYPE --batch_norm
Explanation: Now, let's do it on Cloud ML Engine so we can train on GPU: --scale-tier=BASIC_GPU
Note that the GPU speed-up depends on the model type. You'll notice the more complex CNN model trains significantly faster on GPU; however, the speed-up on the simpler models is not as pronounced.
End of explanation
%%bash
MODEL_NAME="mnist"
MODEL_VERSION=${MODEL_TYPE}
MODEL_LOCATION=$(gsutil ls gs://${BUCKET}/mnist/trained_${MODEL_TYPE}/export/exporter | tail -1)
echo "Deleting and deploying $MODEL_NAME $MODEL_VERSION from $MODEL_LOCATION ... this will take a few minutes"
#gcloud ai-platform versions delete ${MODEL_VERSION} --model ${MODEL_NAME}
#gcloud ai-platform models delete ${MODEL_NAME}
gcloud ai-platform models create ${MODEL_NAME} --regions $REGION
gcloud ai-platform versions create ${MODEL_VERSION} --model ${MODEL_NAME} --origin ${MODEL_LOCATION} --runtime-version=$TFVERSION
Explanation: Monitor training with TensorBoard
To activate TensorBoard within the JupyterLab UI navigate to "<b>File</b>" - "<b>New Launcher</b>". Then double-click the 'Tensorboard' icon on the bottom row.
TensorBoard 1 will appear in the new tab. Navigate through the three tabs to see the active TensorBoard. The 'Graphs' and 'Projector' tabs offer very interesting information including the ability to replay the tests.
You may close the TensorBoard tab when you are finished exploring.
Deploying and predicting with model
Deploy the model:
End of explanation
import json, codecs
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
HEIGHT = 28
WIDTH = 28
mnist = input_data.read_data_sets("mnist/data", one_hot = True, reshape = False)
IMGNO = 5 #CHANGE THIS to get different images
jsondata = {"image": mnist.test.images[IMGNO].reshape(HEIGHT, WIDTH).tolist()}
json.dump(jsondata, codecs.open("test.json", "w", encoding = "utf-8"))
plt.imshow(mnist.test.images[IMGNO].reshape(HEIGHT, WIDTH));
Explanation: To predict with the model, let's take one of the example images.
End of explanation
%%bash
gcloud ai-platform predict \
--model=mnist \
--version=${MODEL_TYPE} \
--json-instances=./test.json
Explanation: Send it to the prediction service
End of explanation |
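The same request can also be issued from Python instead of the gcloud CLI. A sketch using the Google API client library, assuming it is installed and that PROJECT, MODEL_TYPE and the jsondata dict from the previous cells are still in scope:
from googleapiclient import discovery
from oauth2client.client import GoogleCredentials

credentials = GoogleCredentials.get_application_default()
api = discovery.build("ml", "v1", credentials=credentials)
model_path = "projects/{}/models/{}/versions/{}".format(PROJECT, "mnist", MODEL_TYPE)
request = api.projects().predict(name=model_path, body={"instances": [jsondata]})
print(request.execute())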
12,573 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Getting started
Step1: One of the essential paradigms of do-mpc is a modular architecture, where individual building bricks can be used independently our jointly, depending on the application.
In the following we will present the configuration, setup and connection between these blocks, starting with the model.
Example system
First, we introduce a simple system for which we setup do-mpc. We want to control a triple mass spring system as depicted below
Step2: Model variables
The next step is to define the model variables. It is important to define the variable type, name and optionally shape (default is scalar variable). The following types are available
Step3: Note that model.set_variable() returns the symbolic variable
Step4: Query variables
If at any time you need to obtain the model variables, e.g. if you create the model in a different file than additional do-mpc modules, you might need to retrieve the defined variables. do-mpc facilitates this process with the Model properties x, u, z, p, tvp, y and aux
Step5: The properties itself a structured symbolic variables, which hold the user-defined variables.
These can be accessed with indices
Step6: Note that this is identical to the output of model.set_variable from above
Step7: Further indices are possible in the case of variables with multiple elements
Step8: Note that you can use the following methods
Step9: Model parameters
Next we define parameters. Known values can and should be hardcoded but with robust MPC in mind, we define uncertain parameters explictly. We assume that the inertia is such an uncertain parameter and hardcode the spring constant and friction coefficient.
Step10: Right-hand-side equation
Finally, we set the right-hand-side of the model by calling model.set_rhs(var_name, expr) with the var_name from the state variables defined above and an expression in terms of $x, u, z, p$.
Step11: For the vector valued state dphi we need to concatenate symbolic expressions. We import the symbolic library CasADi
Step12: The model setup is completed by calling model.setup()
Step13: After calling model.setup() we cannot define further variables etc.
Configuring the MPC controller
With the configured and setup model we can now create the optimizer for model predictive control (MPC). We start by creating the object (with the model as the only input)
Step14: Optimizer parameters
Next, we need to parametrize the optimizer. Please see the API documentation for optimizer.set_param() for a full description of available parameters and their meaning. Many parameters already have suggested default values. Most importantly, we need to set n_horizon and t_step. We also choose n_robust=1 for this example, which would default to 0.
Note that by default the continuous system is discretized with collocation.
Step15: Objective function
The MPC formulation is at its core an optimization problem for which we need to define an objective function
Step16: Part of the objective function is also the penality for the control inputs. This penalty can often be used to smoothen the obtained optimal solution and is an important tuning parameter. We add a quadratic penalty on changes
Step17: where the keyword arguments refer to the previously defined input names. Note that in the notation above ($\Delta u_k^T R \Delta u_k$), this results in setting the diagonal elements of $R$.
Constraints
It is an important feature of MPC to be able to set constraints on inputs and states. In do-mpc these constraints are set like this
Step18: Scaling
Scaling is an important feature if the OCP is poorly conditioned, e.g. different states have significantly different magnitudes. In that case the unscaled problem might not lead to a (desired) solution.
Scaling factors can be introduced for all states, inputs and algebraic variables and the objective is to scale them to roughly the same order of magnitude. For the given problem, this is not necessary but we briefly show the syntax (note that this step can also be skipped).
Step19: Uncertain Parameters
An important feature of do-mpc is scenario based robust MPC. Instead of predicting and controlling a single future trajectory, we investigate multiple possible trajectories depending on different uncertain parameters. These parameters were previously defined in the model (the mass inertia). Now we must provide the optimizer with different possible scenarios.
This can be done in the following way
Step20: We provide a number of keyword arguments to the method optimizer.set_uncertain_parameter(). For each referenced parameter the value is a numpy.ndarray with a selection of possible values. The first value is the nominal case, where further values will lead to an increasing number of scenarios. Since we investigate each combination of possible parameters, the number of scenarios is growing rapidly. For our example, we are therefore only treating the inertia of mass 1 and 2 as uncertain and supply only one possible value for the mass of inertia 3.
Setup
The last step of configuring the optimizer is to call optimizer.setup, which finalizes the setup and creates the optimization problem. Only now can we use the optimizer to obtain the control input.
Step21: Configuring the Simulator
In many cases a developed control approach is first tested on a simulated system. do-mpc responds to this need with the do_mpc.simulator class. The simulator uses state-of-the-art DAE solvers, e.g. Sundials CVODE to solve the DAE equations defined in the supplied do_mpc.model. This will often be the same model as defined for the optimizer but it is also possible to use a more complex model of the same system.
In this section we demonstrate how to setup the simulator class for the given example. We initilize the class with the previously defined model
Step22: Simulator parameters
Next, we need to parametrize the simulator. Please see the API documentation for simulator.set_param() for a full description of available parameters and their meaning. Many parameters already have suggested default values. Most importantly, we need to set t_step. We choose the same value as for the optimizer.
Step23: Uncertain parameters
In the model we have defined the inertia of the masses as parameters, for which we have chosen multiple scenarios in the optimizer. The simulator is now parametrized to simulate with the "true" values at each timestep. In the most general case, these values can change, which is why we need to supply a function that can be evaluted at each time to obtain the current values.
do-mpc requires this function to have a specific return structure which we obtain first by calling
Step24: This object is a CasADi structure
Step25: which can be indexed with the following keys
Step26: We need to now write a function which returns this structure with the desired numerical values. For our simple case
Step27: This function is now supplied to the simulator in the following way
Step28: Setup
Similarly to the optimizer we need to call simulator.setup() to finalize the setup of the simulator.
Step29: Creating the control loop
In theory, we could now also create an estimator but for this concise example we just assume direct state-feedback. This means we are now ready to setup and run the control loop.
The control loop consists of running the optimizer with the current state ($x_0$) to obtain the current control input ($u_0$) and then running the simulator with the current control input ($u_0$) to obtain the next state.
As discussed before, we setup a controller for regulating a triple-mass-spring system. To show some interesting control action we choose an arbitrary initial state $x_0\neq 0$
Step30: and use the x0 property to set the initial state.
Step31: While we are able to set just a regular numpy array, this populates the state structure which was inherited from the model
Step32: We can thus easily obtain the value of particular states by calling
Step33: Note that the properties x0 (as well as u0, z0 and t0) always display the values of the current variables in the class.
To set the initial guess of the MPC optimization problem we call
Step34: The chosen initial guess is based on x0, z0 and u0 which are set for each element of the MPC sequence.
Setting up the Graphic
To investigate the controller performance AND the MPC predictions, we are using the do-mpc graphics module. This versatile tool allows us to conveniently configure a user-defined plot based on Matplotlib and visualize the results stored in the mpc.data, simulator.data (and if applicable estimator.data) objects.
We start by importing matplotlib
Step35: And initializing the graphics module with the data object of interest.
In this particular example, we want to visualize both the mpc.data as well as the simulator.data.
Step36: Next, we create a figure and obtain its axis object. Matplotlib offers multiple alternative ways to obtain an axis object, e.g. subplots, subplot2grid, or simply gca. We use subplots
Step37: Most important API element for setting up the graphics module is graphics.add_line, which mimics the API of model.add_variable, except that we also need to pass an axis.
We want to show both the simulator and MPC results on the same axis, which is why we configure both of them identically
Step38: Running the simulator
We start investigating the do-mpc simulator and the graphics package by simulating the autonomous system without control inputs ($u = 0$). This can be done as follows
Step39: We can visualize the resulting trajectory with the pre-defined graphic
Step40: As desired, the motor angle (input) is constant at zero and the oscillating masses slowly come to a rest. Our control goal is to significantly shorten the time until the discs are stationary.
Remember the animation you saw above, of the uncontrolled system? This is where the data came from.
Running the optimizer
To obtain the current control input we call optimizer.make_step(x0) with the current state ($x_0$)
Step41: Note that we obtained the output from IPOPT regarding the given optimal control problem (OCP). Most importantly we obtained Optimal Solution Found.
We can also visualize the predicted trajectories with the configure graphics instance. First we clear the existing lines from the simulator by calling
Step42: And finally, we can call plot_predictions to obtain
Step43: We are seeing the predicted trajectories for the states and the optimal control inputs. Note that we are seeing different scenarios for the configured uncertain inertia of the three masses.
We can also see that the solution is considering the defined upper and lower bounds. This is especially true for the inputs.
Changing the line appearance
Before we continue, we should probably improve the visualization a bit. We can easily obtain all line objects from the graphics module by using the result_lines and pred_lines properties
Step44: We obtain a structure that can be queried conveniently as follows
Step45: We obtain all lines for our first state. To change the color we can simply
Step46: Note that we can work in the same way with the result_lines property.
For example, we can use it to create a legend
Step47: Running the control loop
Finally, we are now able to run the control loop as discussed above. We obtain the input from the optimizer and then run the simulator.
To make sure we start fresh, we erase the history and set the initial state for the simulator
Step48: This is the main-loop. We run 20 steps, whic is identical to the prediction horizon. Note that we use "capture" again, to supress the output from IPOPT.
It is usually suggested to display the output as it contains important information about the state of the solver.
Step49: We can now plot the previously shown prediction from time $t=0$, as well as the closed-loop trajectory from the simulator
Step50: The simulated trajectory with the nominal value of the parameters follows almost exactly the nominal open-loop predictions. The simulted trajectory is bounded from above and below by further uncertain scenarios.
Data processing
Saving and retrieving results
do-mpc results can be stored and retrieved with the methods save_results and load_results from the do_mpc.data module. We start by importing these methods
Step51: The method save_results is passed a list of the do-mpc objects that we want to store. In our case, the optimizer and simulator are available and configured.
Note that by default results are stored in the subfolder results under the name results.pkl. Both can be changed and the folder is created if it doesn't exist already.
Step52: We investigate the content of the newly created folder
Step53: Automatically, the save_results call will check if a file with the given name already exists. To avoid overwriting, the method prepends an index. If we save again, the folder contains
Step54: The pickled results can be loaded manually by writing
Step55: The obtained results is a dictionary with the data objects from the passed do-mpc modules. Such that
Step56: As expected, we have 20 elements (we ran the loop for 20 steps) and 8 states. Further indices allow to get selected states
Step57: For vector-valued states we can even query
Step58: Of course, we could also query inputs etc.
Furthermore, we can easily retrieve the predicted trajectories with the prediction method. The syntax is slightly different
Step59: The first dimension shows that this state is a scalar, the second dimension shows the horizon and the third dimension refers to the nine uncertain scenarios that were investigated.
Animating results
Animating MPC results, to compare prediction and closed-loop trajectories, allows for a very meaningful investigation of the obtained results.
do-mpc significantly facilitates this process while working hand in hand with Matplotlib for full customizability. Obtaining publication ready animations is as easy as writing the following short blocks of code
Step60: The graphics module can also be used without restrictions with loaded do-mpc data. This allows for convenient data post-processing, e.g. in a Jupyter Notebook. We simply would have to initiate the graphics module with the loaded results from above. | Python Code:
import numpy as np
# Add do_mpc to path. This is not necessary if it was installed via pip.
import sys
sys.path.append('../../')
# Import do_mpc package:
import do_mpc
Explanation: Getting started: MPC
In this Jupyter Notebook we illustrate the core functionalities of do-mpc.
Open an interactive online Jupyter Notebook with this content on Binder:
We start by importing the required modules, most notably do_mpc.
End of explanation
model_type = 'continuous' # either 'discrete' or 'continuous'
model = do_mpc.model.Model(model_type)
Explanation: One of the essential paradigms of do-mpc is a modular architecture, where individual building bricks can be used independently or jointly, depending on the application.
In the following we will present the configuration, setup and connection between these blocks, starting with the model.
Example system
First, we introduce a simple system for which we setup do-mpc. We want to control a triple mass spring system as depicted below:
Three rotating discs are connected via springs and we denote their angles as $\phi_1, \phi_2, \phi_3$.
The two outermost discs are each connected to a stepper motor with additional springs. The stepper motor angles ($\phi_{m,1}$ and $\phi_{m,2}$) are used as inputs to the system. Relevant parameters of the system are the inertia $\Theta$ of the three discs, the spring constants $c$ as well as the damping factors $d$.
The second-order ODEs of this system can be written as follows:
\begin{align}
\Theta_1 \ddot{\phi}_1 &= -c_1 \left(\phi_1 - \phi_{m,1} \right) - c_2 \left(\phi_1 - \phi_2 \right) - d_1 \dot{\phi}_1\\
\Theta_2 \ddot{\phi}_2 &= -c_2 \left(\phi_2 - \phi_1 \right) - c_3 \left(\phi_2 - \phi_3 \right) - d_2 \dot{\phi}_2\\
\Theta_3 \ddot{\phi}_3 &= -c_3 \left(\phi_3 - \phi_2 \right) - c_4 \left(\phi_3 - \phi_{m,2} \right) - d_3 \dot{\phi}_3
\end{align}
The uncontrolled system, starting from a non-zero initial state, will oscillate for an extended period of time, as shown below:
Later, we want to be able to use the motors efficiently to bring the oscillating masses to a rest. It will look something like this:
Creating the model
As indicated above, the model block is essential for the application of do-mpc.
In mathematical terms the model is defined either as a continuous ordinary differential equation (ODE), a differential algebraic equation (DAE) or a discrete equation.
In the case of a DAE/ODE we write:
\begin{align}
\frac{\partial x}{\partial t} &= f(x,u,z,p)\\
0 &= g(x,u,z,p)\\
y &= h(x,u,z,p)
\end{align}
We denote $x\in \mathbb{R}^{n_x}$ as the states, $u \in \mathbb{R}^{n_u}$ as the inputs, $z\in \mathbb{R}^{n_z}$ the algebraic states and $p \in \mathbb{R}^{n_p}$ as parameters.
We reformulate the second-order ODEs above as the following first-order ODEs, by introducing the following states:
\begin{align}
x_1 &= \phi_1\\
x_2 &= \phi_2\\
x_3 &= \phi_3\\
x_4 &= \dot{\phi}_1\\
x_5 &= \dot{\phi}_2\\
x_6 &= \dot{\phi}_3
\end{align}
and derive the right-hand-side function $f(x,u,z,p)$ as:
\begin{align}
\dot{x}_1 &= x_4\\
\dot{x}_2 &= x_5\\
\dot{x}_3 &= x_6\\
\dot{x}_4 &= -\frac{c_1}{\Theta_1} \left(x_1 - u_1 \right) -\frac{c_2}{\Theta_1} \left(x_1 - x_2 \right) - \frac{d_1}{\Theta_1} x_4\\
\dot{x}_5 &= -\frac{c_2}{\Theta_2} \left(x_2 - x_1 \right) -\frac{c_3}{\Theta_2} \left(x_2 - x_3 \right) - \frac{d_2}{\Theta_2} x_5\\
\dot{x}_6 &= -\frac{c_3}{\Theta_3} \left(x_3 - x_2 \right) -\frac{c_4}{\Theta_3} \left(x_3 - u_2 \right) - \frac{d_3}{\Theta_3} x_6
\end{align}
With this theoretical background we can start configuring the do-mpc model object.
First, we need to decide on the model type. For the given example, we are working with a continuous model.
End of explanation
phi_1 = model.set_variable(var_type='_x', var_name='phi_1', shape=(1,1))
phi_2 = model.set_variable(var_type='_x', var_name='phi_2', shape=(1,1))
phi_3 = model.set_variable(var_type='_x', var_name='phi_3', shape=(1,1))
# Variables can also be vectors:
dphi = model.set_variable(var_type='_x', var_name='dphi', shape=(3,1))
# Two states for the desired (set) motor position:
phi_m_1_set = model.set_variable(var_type='_u', var_name='phi_m_1_set')
phi_m_2_set = model.set_variable(var_type='_u', var_name='phi_m_2_set')
# Two additional states for the true motor position:
phi_1_m = model.set_variable(var_type='_x', var_name='phi_1_m', shape=(1,1))
phi_2_m = model.set_variable(var_type='_x', var_name='phi_2_m', shape=(1,1))
Explanation: Model variables
The next step is to define the model variables. It is important to define the variable type, name and optionally shape (default is scalar variable). The following types are available:
|Long name | short name | Remark |
|-----------|---------------------------|----------------|
|states | _x | Required |
|inputs | _u | Required |
|algebraic | _z | Optional |
|parameter | _p | Optional |
|timevarying_parameter | _tvp | Optional |
End of explanation
print('phi_1={}, with phi_1.shape={}'.format(phi_1, phi_1.shape))
print('dphi={}, with dphi.shape={}'.format(dphi, dphi.shape))
Explanation: Note that model.set_variable() returns the symbolic variable:
End of explanation
model.x
Explanation: Query variables
If at any time you need to obtain the model variables, e.g. if you create the model in a different file than additional do-mpc modules, you might need to retrieve the defined variables. do-mpc facilitates this process with the Model properties x, u, z, p, tvp, y and aux:
End of explanation
model.x['phi_1']
Explanation: Each of these properties is itself a structured symbolic variable, which holds the user-defined variables.
These can be accessed with indices:
End of explanation
bool(model.x['phi_1'] == phi_1)
Explanation: Note that this is identical to the output of model.set_variable from above:
End of explanation
model.x['dphi',0]
Explanation: Further indices are possible in the case of variables with multiple elements:
End of explanation
model.x.keys()
model.x.labels()
Explanation: Note that you can use the following methods:
.keys()
.labels()
to get more information from the symbolic structures:
End of explanation
# As shown in the table above, we can use Long names or short names for the variable type.
Theta_1 = model.set_variable('parameter', 'Theta_1')
Theta_2 = model.set_variable('parameter', 'Theta_2')
Theta_3 = model.set_variable('parameter', 'Theta_3')
c = np.array([2.697, 2.66, 3.05, 2.86])*1e-3
d = np.array([6.78, 8.01, 8.82])*1e-5
Explanation: Model parameters
Next, we define parameters. Known values can and should be hardcoded but with robust MPC in mind, we define uncertain parameters explicitly. We assume that the inertia is such an uncertain parameter and hardcode the spring constant and friction coefficient.
End of explanation
model.set_rhs('phi_1', dphi[0])
model.set_rhs('phi_2', dphi[1])
model.set_rhs('phi_3', dphi[2])
Explanation: Right-hand-side equation
Finally, we set the right-hand-side of the model by calling model.set_rhs(var_name, expr) with the var_name from the state variables defined above and an expression in terms of $x, u, z, p$.
End of explanation
from casadi import *
dphi_next = vertcat(
-c[0]/Theta_1*(phi_1-phi_1_m)-c[1]/Theta_1*(phi_1-phi_2)-d[0]/Theta_1*dphi[0],
-c[1]/Theta_2*(phi_2-phi_1)-c[2]/Theta_2*(phi_2-phi_3)-d[1]/Theta_2*dphi[1],
-c[2]/Theta_3*(phi_3-phi_2)-c[3]/Theta_3*(phi_3-phi_2_m)-d[2]/Theta_3*dphi[2],
)
model.set_rhs('dphi', dphi_next)
tau = 1e-2
model.set_rhs('phi_1_m', 1/tau*(phi_m_1_set - phi_1_m))
model.set_rhs('phi_2_m', 1/tau*(phi_m_2_set - phi_2_m))
Explanation: For the vector valued state dphi we need to concatenate symbolic expressions. We import the symbolic library CasADi:
End of explanation
model.setup()
Explanation: The model setup is completed by calling model.setup():
End of explanation
mpc = do_mpc.controller.MPC(model)
Explanation: After calling model.setup() we cannot define further variables etc.
Configuring the MPC controller
With the configured and setup model we can now create the optimizer for model predictive control (MPC). We start by creating the object (with the model as the only input)
End of explanation
setup_mpc = {
'n_horizon': 20,
't_step': 0.1,
'n_robust': 1,
'store_full_solution': True,
}
mpc.set_param(**setup_mpc)
Explanation: Optimizer parameters
Next, we need to parametrize the optimizer. Please see the API documentation for optimizer.set_param() for a full description of available parameters and their meaning. Many parameters already have suggested default values. Most importantly, we need to set n_horizon and t_step. We also choose n_robust=1 for this example, which would default to 0.
Note that by default the continuous system is discretized with collocation.
End of explanation
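One further set_param option worth knowing about (a sketch; check the API docs of your installed do-mpc version): solver settings can be forwarded to CasADi's nlpsol via nlpsol_opts, e.g. to silence IPOPT. We do not apply it here, since we want to see the solver output below.
suppress_ipopt = {'ipopt.print_level': 0, 'ipopt.sb': 'yes', 'print_time': 0}
# mpc.set_param(nlpsol_opts=suppress_ipopt)  # uncomment to silence the solver output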
mterm = phi_1**2 + phi_2**2 + phi_3**2
lterm = phi_1**2 + phi_2**2 + phi_3**2
mpc.set_objective(mterm=mterm, lterm=lterm)
Explanation: Objective function
The MPC formulation is at its core an optimization problem for which we need to define an objective function:
$$
C = \sum_{k=0}^{n-1}\left( \underbrace{l(x_k,u_k,z_k,p)}_{\text{lagrange term}}
+ \underbrace{\Delta u_k^T R \Delta u_k}_{\text{r-term}}\right)
+ \underbrace{m(x_n)}_{\text{meyer term}}
$$
We need to define the meyer term (mterm) and lagrange term (lterm). For the given example we set:
$$
l(x_k,u_k,z_k,p) = \phi_1^2+\phi_2^2+\phi_3^2\\
m(x_n) = \phi_1^2+\phi_2^2+\phi_3^2
$$
End of explanation
mpc.set_rterm(
phi_m_1_set=1e-2,
phi_m_2_set=1e-2
)
Explanation: Part of the objective function is also the penalty for the control inputs. This penalty can often be used to smooth the obtained optimal solution and is an important tuning parameter. We add a quadratic penalty on changes:
$$
\Delta u_k = u_k - u_{k-1}
$$
and automatically supply the solver with the previous solution of $u_{k-1}$ for $\Delta u_0$.
The user can set the tuning factor for these quadratic terms like this:
End of explanation
# Lower bounds on states:
mpc.bounds['lower','_x', 'phi_1'] = -2*np.pi
mpc.bounds['lower','_x', 'phi_2'] = -2*np.pi
mpc.bounds['lower','_x', 'phi_3'] = -2*np.pi
# Upper bounds on states
mpc.bounds['upper','_x', 'phi_1'] = 2*np.pi
mpc.bounds['upper','_x', 'phi_2'] = 2*np.pi
mpc.bounds['upper','_x', 'phi_3'] = 2*np.pi
# Lower bounds on inputs:
mpc.bounds['lower','_u', 'phi_m_1_set'] = -2*np.pi
mpc.bounds['lower','_u', 'phi_m_2_set'] = -2*np.pi
# Lower bounds on inputs:
mpc.bounds['upper','_u', 'phi_m_1_set'] = 2*np.pi
mpc.bounds['upper','_u', 'phi_m_2_set'] = 2*np.pi
Explanation: where the keyword arguments refer to the previously defined input names. Note that in the notation above ($\Delta u_k^T R \Delta u_k$), this results in setting the diagonal elements of $R$.
Constraints
It is an important feature of MPC to be able to set constraints on inputs and states. In do-mpc these constraints are set like this:
End of explanation
mpc.scaling['_x', 'phi_1'] = 2
mpc.scaling['_x', 'phi_2'] = 2
mpc.scaling['_x', 'phi_3'] = 2
Explanation: Scaling
Scaling is an important feature if the OCP is poorly conditioned, e.g. different states have significantly different magnitudes. In that case the unscaled problem might not lead to a (desired) solution.
Scaling factors can be introduced for all states, inputs and algebraic variables and the objective is to scale them to roughly the same order of magnitude. For the given problem, this is not necessary but we briefly show the syntax (note that this step can also be skipped).
End of explanation
inertia_mass_1 = 2.25*1e-4*np.array([1., 0.9, 1.1])
inertia_mass_2 = 2.25*1e-4*np.array([1., 0.9, 1.1])
inertia_mass_3 = 2.25*1e-4*np.array([1.])
mpc.set_uncertainty_values(
Theta_1 = inertia_mass_1,
Theta_2 = inertia_mass_2,
Theta_3 = inertia_mass_3
)
Explanation: Uncertain Parameters
An important feature of do-mpc is scenario based robust MPC. Instead of predicting and controlling a single future trajectory, we investigate multiple possible trajectories depending on different uncertain parameters. These parameters were previously defined in the model (the mass inertia). Now we must provide the optimizer with different possible scenarios.
This can be done in the following way:
End of explanation
mpc.setup()
Explanation: We provide a number of keyword arguments to the method mpc.set_uncertainty_values(). For each referenced parameter the value is a numpy.ndarray with a selection of possible values. The first value is the nominal case, where further values will lead to an increasing number of scenarios. Since we investigate each combination of possible parameters, the number of scenarios grows rapidly. For our example, we are therefore only treating the inertia of mass 1 and 2 as uncertain and supply only one possible value for the mass of inertia 3.
Setup
The last step of configuring the optimizer is to call optimizer.setup, which finalizes the setup and creates the optimization problem. Only now can we use the optimizer to obtain the control input.
End of explanation
simulator = do_mpc.simulator.Simulator(model)
Explanation: Configuring the Simulator
In many cases a developed control approach is first tested on a simulated system. do-mpc responds to this need with the do_mpc.simulator class. The simulator uses state-of-the-art DAE solvers, e.g. Sundials CVODE to solve the DAE equations defined in the supplied do_mpc.model. This will often be the same model as defined for the optimizer but it is also possible to use a more complex model of the same system.
In this section we demonstrate how to set up the simulator class for the given example. We initialize the class with the previously defined model:
End of explanation
# Instead of supplying a dict with the splat operator (**), as with the optimizer.set_param(),
# we can also use keywords (and call the method multiple times, if necessary):
simulator.set_param(t_step = 0.1)
Explanation: Simulator parameters
Next, we need to parametrize the simulator. Please see the API documentation for simulator.set_param() for a full description of available parameters and their meaning. Many parameters already have suggested default values. Most importantly, we need to set t_step. We choose the same value as for the optimizer.
End of explanation
p_template = simulator.get_p_template()
Explanation: Uncertain parameters
In the model we have defined the inertia of the masses as parameters, for which we have chosen multiple scenarios in the optimizer. The simulator is now parametrized to simulate with the "true" values at each timestep. In the most general case, these values can change, which is why we need to supply a function that can be evaluated at each time to obtain the current values.
do-mpc requires this function to have a specific return structure which we obtain first by calling:
End of explanation
type(p_template)
Explanation: This object is a CasADi structure:
End of explanation
p_template.keys()
Explanation: which can be indexed with the following keys:
End of explanation
def p_fun(t_now):
p_template['Theta_1'] = 2.25e-4
p_template['Theta_2'] = 2.25e-4
p_template['Theta_3'] = 2.25e-4
return p_template
Explanation: We now need to write a function which returns this structure with the desired numerical values. For our simple case:
End of explanation
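If the "true" parameters did change over time, the same mechanism applies. A sketch with made-up numbers (not used below):
def p_fun_drift(t_now):
    # Illustrative only: let the first inertia drift slowly over time.
    p_template['Theta_1'] = 2.25e-4 * (1 + 0.05 * np.sin(t_now))
    p_template['Theta_2'] = 2.25e-4
    p_template['Theta_3'] = 2.25e-4
    return p_template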
simulator.set_p_fun(p_fun)
Explanation: This function is now supplied to the simulator in the following way:
End of explanation
simulator.setup()
Explanation: Setup
Similarly to the optimizer we need to call simulator.setup() to finalize the setup of the simulator.
End of explanation
x0 = np.pi*np.array([1, 1, -1.5, 1, -1, 1, 0, 0]).reshape(-1,1)
Explanation: Creating the control loop
In theory, we could now also create an estimator but for this concise example we just assume direct state-feedback. This means we are now ready to setup and run the control loop.
The control loop consists of running the optimizer with the current state ($x_0$) to obtain the current control input ($u_0$) and then running the simulator with the current control input ($u_0$) to obtain the next state.
As discussed before, we setup a controller for regulating a triple-mass-spring system. To show some interesting control action we choose an arbitrary initial state $x_0\neq 0$:
End of explanation
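For reference, if an estimator were used it would be created analogously to the other modules. A sketch, not wired into the loop below, where we feed the simulator state back directly:
estimator = do_mpc.estimator.StateFeedback(model)  # trivial estimator: measurement = state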
simulator.x0 = x0
mpc.x0 = x0
Explanation: and use the x0 property to set the initial state.
End of explanation
mpc.x0
Explanation: While we are able to set just a regular numpy array, this populates the state structure which was inherited from the model:
End of explanation
mpc.x0['phi_1']
Explanation: We can thus easily obtain the value of particular states by calling:
End of explanation
mpc.set_initial_guess()
Explanation: Note that the properties x0 (as well as u0, z0 and t0) always display the values of the current variables in the class.
To set the initial guess of the MPC optimization problem we call:
End of explanation
import matplotlib.pyplot as plt
import matplotlib as mpl
# Customizing Matplotlib:
mpl.rcParams['font.size'] = 18
mpl.rcParams['lines.linewidth'] = 3
mpl.rcParams['axes.grid'] = True
Explanation: The chosen initial guess is based on x0, z0 and u0 which are set for each element of the MPC sequence.
Setting up the Graphic
To investigate the controller performance AND the MPC predictions, we are using the do-mpc graphics module. This versatile tool allows us to conveniently configure a user-defined plot based on Matplotlib and visualize the results stored in the mpc.data, simulator.data (and if applicable estimator.data) objects.
We start by importing matplotlib:
End of explanation
mpc_graphics = do_mpc.graphics.Graphics(mpc.data)
sim_graphics = do_mpc.graphics.Graphics(simulator.data)
Explanation: And initializing the graphics module with the data object of interest.
In this particular example, we want to visualize both the mpc.data as well as the simulator.data.
End of explanation
%%capture
# We just want to create the plot and not show it right now. This "inline magic" suppresses the output.
fig, ax = plt.subplots(2, sharex=True, figsize=(16,9))
fig.align_ylabels()
Explanation: Next, we create a figure and obtain its axis object. Matplotlib offers multiple alternative ways to obtain an axis object, e.g. subplots, subplot2grid, or simply gca. We use subplots:
End of explanation
%%capture
for g in [sim_graphics, mpc_graphics]:
# Plot the angle positions (phi_1, phi_2, phi_2) on the first axis:
g.add_line(var_type='_x', var_name='phi_1', axis=ax[0])
g.add_line(var_type='_x', var_name='phi_2', axis=ax[0])
g.add_line(var_type='_x', var_name='phi_3', axis=ax[0])
# Plot the set motor positions (phi_m_1_set, phi_m_2_set) on the second axis:
g.add_line(var_type='_u', var_name='phi_m_1_set', axis=ax[1])
g.add_line(var_type='_u', var_name='phi_m_2_set', axis=ax[1])
ax[0].set_ylabel('angle position [rad]')
ax[1].set_ylabel('motor angle [rad]')
ax[1].set_xlabel('time [s]')
Explanation: Most important API element for setting up the graphics module is graphics.add_line, which mimics the API of model.add_variable, except that we also need to pass an axis.
We want to show both the simulator and MPC results on the same axis, which is why we configure both of them identically:
End of explanation
u0 = np.zeros((2,1))
for i in range(200):
simulator.make_step(u0)
Explanation: Running the simulator
We start investigating the do-mpc simulator and the graphics package by simulating the autonomous system without control inputs ($u = 0$). This can be done as follows:
End of explanation
sim_graphics.plot_results()
# Reset the limits on all axes in graphic to show the data.
sim_graphics.reset_axes()
# Show the figure:
fig
Explanation: We can visualize the resulting trajectory with the pre-defined graphic:
End of explanation
u0 = mpc.make_step(x0)
Explanation: As desired, the motor angle (input) is constant at zero and the oscillating masses slowly come to a rest. Our control goal is to significantly shorten the time until the discs are stationary.
Remember the animation you saw above, of the uncontrolled system? This is where the data came from.
Running the optimizer
To obtain the current control input we call optimizer.make_step(x0) with the current state ($x_0$):
End of explanation
sim_graphics.clear()
Explanation: Note that we obtained the output from IPOPT regarding the given optimal control problem (OCP). Most importantly we obtained Optimal Solution Found.
We can also visualize the predicted trajectories with the configure graphics instance. First we clear the existing lines from the simulator by calling:
End of explanation
mpc_graphics.plot_predictions()
mpc_graphics.reset_axes()
# Show the figure:
fig
Explanation: And finally, we can call plot_predictions to obtain:
End of explanation
mpc_graphics.pred_lines
Explanation: We are seeing the predicted trajectories for the states and the optimal control inputs. Note that we are seeing different scenarios for the configured uncertain inertia of the three masses.
We can also see that the solution is considering the defined upper and lower bounds. This is especially true for the inputs.
Changing the line appearance
Before we continue, we should probably improve the visualization a bit. We can easily obtain all line objects from the graphics module by using the result_lines and pred_lines properties:
End of explanation
mpc_graphics.pred_lines['_x', 'phi_1']
Explanation: We obtain a structure that can be queried conveniently as follows:
End of explanation
# Change the color for the three states:
for line_i in mpc_graphics.pred_lines['_x', 'phi_1']: line_i.set_color('#1f77b4') # blue
for line_i in mpc_graphics.pred_lines['_x', 'phi_2']: line_i.set_color('#ff7f0e') # orange
for line_i in mpc_graphics.pred_lines['_x', 'phi_3']: line_i.set_color('#2ca02c') # green
# Change the color for the two inputs:
for line_i in mpc_graphics.pred_lines['_u', 'phi_m_1_set']: line_i.set_color('#1f77b4')
for line_i in mpc_graphics.pred_lines['_u', 'phi_m_2_set']: line_i.set_color('#ff7f0e')
# Make all predictions transparent:
for line_i in mpc_graphics.pred_lines.full: line_i.set_alpha(0.2)
Explanation: We obtain all lines for our first state. To change the color we can simply:
End of explanation
# Get line objects (note sum of lists creates a concatenated list)
lines = sim_graphics.result_lines['_x', 'phi_1']+sim_graphics.result_lines['_x', 'phi_2']+sim_graphics.result_lines['_x', 'phi_3']
ax[0].legend(lines,'123',title='disc')
# also set legend for second subplot:
lines = sim_graphics.result_lines['_u', 'phi_m_1_set']+sim_graphics.result_lines['_u', 'phi_m_2_set']
ax[1].legend(lines,'12',title='motor')
Explanation: Note that we can work in the same way with the result_lines property.
For example, we can use it to create a legend:
End of explanation
simulator.reset_history()
simulator.x0 = x0
mpc.reset_history()
Explanation: Running the control loop
Finally, we are now able to run the control loop as discussed above. We obtain the input from the optimizer and then run the simulator.
To make sure we start fresh, we erase the history and set the initial state for the simulator:
End of explanation
%%capture
for i in range(20):
u0 = mpc.make_step(x0)
x0 = simulator.make_step(u0)
Explanation: This is the main loop. We run 20 steps, which is identical to the prediction horizon. Note that we use "capture" again, to suppress the output from IPOPT.
It is usually suggested to display the output as it contains important information about the state of the solver.
End of explanation
# Plot predictions from t=0
mpc_graphics.plot_predictions(t_ind=0)
# Plot results until current time
sim_graphics.plot_results()
sim_graphics.reset_axes()
fig
Explanation: We can now plot the previously shown prediction from time $t=0$, as well as the closed-loop trajectory from the simulator:
End of explanation
from do_mpc.data import save_results, load_results
Explanation: The simulated trajectory with the nominal value of the parameters follows almost exactly the nominal open-loop predictions. The simulated trajectory is bounded from above and below by further uncertain scenarios.
Data processing
Saving and retrieving results
do-mpc results can be stored and retrieved with the methods save_results and load_results from the do_mpc.data module. We start by importing these methods:
End of explanation
save_results([mpc, simulator])
Explanation: The method save_results is passed a list of the do-mpc objects that we want to store. In our case, the optimizer and simulator are available and configured.
Note that by default results are stored in the subfolder results under the name results.pkl. Both can be changed and the folder is created if it doesn't exist already.
End of explanation
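A sketch of the non-default usage mentioned above (argument names as in the do_mpc.data API; verify against your installed version). Writing to a separate folder leaves the default ./results/ folder used below untouched:
save_results([mpc, simulator], result_name='triple_mass', result_path='./my_results/', overwrite=True)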
!ls ./results/
Explanation: We investigate the content of the newly created folder:
End of explanation
save_results([mpc, simulator])
!ls ./results/
Explanation: Automatically, the save_results call will check if a file with the given name already exists. To avoid overwriting, the method prepends an index. If we save again, the folder contains:
End of explanation
results = load_results('./results/results.pkl')
Explanation: The pickled results can be loaded manually by writing:
python
with open(file_name, 'rb') as f:
results = pickle.load(f)
or by calling load_results with the appropriate file_name (and path). load_results contains simply the code above for more convenience.
End of explanation
results['mpc']
x = results['mpc']['_x']
x.shape
Explanation: The obtained results is a dictionary with the data objects from the passed do-mpc modules. Such that:
results['mpc'] and mpc.data contain the same information (similarly for simulator and, if applicable, estimator).
Working with data objects
The do_mpc.data.Data objects also hold some very useful properties that you should know about.
Most importantly, we can query them with indices, such as:
End of explanation
phi_1 = results['mpc']['_x','phi_1']
phi_1.shape
Explanation: As expected, we have 20 elements (we ran the loop for 20 steps) and 8 states. Further indices allow to get selected states:
End of explanation
dphi_1 = results['mpc']['_x','dphi', 0]
dphi_1.shape
Explanation: For vector-valued states we can even query:
End of explanation
phi_1_pred = results['mpc'].prediction(('_x','phi_1'), t_ind=0)
phi_1_pred.shape
Explanation: Of course, we could also query inputs etc.
Furthermore, we can easily retrieve the predicted trajectories with the prediction method. The syntax is slightly different: The first argument is a tuple that mimics the indices shown above. The second index is the time instance. With the following call we obtain the prediction of phi_1 at time 0:
End of explanation
from matplotlib.animation import FuncAnimation, FFMpegWriter, ImageMagickWriter
def update(t_ind):
sim_graphics.plot_results(t_ind)
mpc_graphics.plot_predictions(t_ind)
mpc_graphics.reset_axes()
Explanation: The first dimension shows that this state is a scalar, the second dimension shows the horizon and the third dimension refers to the nine uncertain scenarios that were investigated.
Animating results
Animating MPC results, to compare prediction and closed-loop trajectories, allows for a very meaningful investigation of the obtained results.
do-mpc significantly facilitates this process while working hand in hand with Matplotlib for full customizability. Obtaining publication ready animations is as easy as writing the following short blocks of code:
End of explanation
anim = FuncAnimation(fig, update, frames=20, repeat=False)
gif_writer = ImageMagickWriter(fps=3)
anim.save('anim.gif', writer=gif_writer)
Explanation: The graphics module can also be used without restrictions with loaded do-mpc data. This allows for convenient data post-processing, e.g. in a Jupyter Notebook. We simply would have to initiate the graphics module with the loaded results from above.
End of explanation |
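A sketch of that post-processing step, reusing the results dictionary loaded earlier and the same Graphics API as above:
loaded_graphics = do_mpc.graphics.Graphics(results['mpc'])
fig2, ax2 = plt.subplots(figsize=(16, 9))
loaded_graphics.add_line(var_type='_x', var_name='phi_1', axis=ax2)
loaded_graphics.plot_results()
loaded_graphics.reset_axes()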
12,574 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Deep Learning
Assignment 1
The objective of this assignment is to learn about simple data curation practices, and familiarize you with some of the data we'll be reusing later.
This notebook uses the notMNIST dataset to be used with python experiments. This dataset is designed to look like the classic MNIST dataset, while looking a little more like real data
Step2: First, we'll download the dataset to our local machine. The data consists of characters rendered in a variety of fonts on a 28x28 image. The labels are limited to 'A' through 'J' (10 classes). The training set has about 500k and the testset 19000 labelled examples. Given these sizes, it should be possible to train models quickly on any machine.
Step3: Extract the dataset from the compressed .tar.gz file.
This should give you a set of directories, labelled A through J.
Step4: Problem 1
Let's take a peek at some of the data to make sure it looks sensible. Each exemplar should be an image of a character A through J rendered in a different font. Display a sample of the images that we just downloaded. Hint
Step6: Now let's load the data in a more manageable format. Since, depending on your computer setup you might not be able to fit it all in memory, we'll load each class into a separate dataset, store them on disk and curate them independently. Later we'll merge them into a single dataset of manageable size.
We'll convert the entire dataset into a 3D array (image index, x, y) of floating point values, normalized to have approximately zero mean and standard deviation ~0.5 to make training easier down the road.
A few images might not be readable, we'll just skip them.
Step7: Problem 2
Let's verify that the data still looks good. Displaying a sample of the labels and images from the ndarray. Hint
Step8: Problem 3
Another check
Step9: Merge and prune the training data as needed. Depending on your computer setup, you might not be able to fit it all in memory, and you can tune train_size as needed. The labels will be stored into a separate array of integers 0 through 9.
Also create a validation dataset for hyperparameter tuning.
Step10: Next, we'll randomize the data. It's important to have the labels well shuffled for the training and test distributions to match.
Step11: Problem 4
Convince yourself that the data is still good after shuffling!
Step12: Finally, let's save the data for later reuse
Step13: Problem 5
By construction, this dataset might contain a lot of overlapping samples, including training data that's also contained in the validation and test set! Overlap between training and test can skew the results if you expect to use your model in an environment where there is never an overlap, but are actually ok if you expect to see training samples recur when you use it.
Measure how much overlap there is between training, validation and test samples.
Optional questions
Step14: Problem 6
Let's get an idea of what an off-the-shelf classifier can give you on this data. It's always good to check that there is something to learn, and that it's a problem that is not so trivial that a canned solution solves it.
Train a simple model on this data using 50, 100, 1000 and 5000 training samples. Hint | Python Code:
# Configure the matplotlib backend for inline plotting in IPython
%matplotlib inline
# These are all the modules we'll be using later. Make sure you can import them
# before proceeding further.
from __future__ import print_function
import matplotlib.pyplot as plt
import numpy as np
import os
import sys
import tarfile
from IPython.display import display, Image
from scipy import ndimage
from sklearn.linear_model import LogisticRegression
from six.moves.urllib.request import urlretrieve
from six.moves import cPickle as pickle
Explanation: Deep Learning
Assignment 1
The objective of this assignment is to learn about simple data curation practices, and familiarize you with some of the data we'll be reusing later.
This notebook uses the notMNIST dataset to be used with python experiments. This dataset is designed to look like the classic MNIST dataset, while looking a little more like real data: it's a harder task, and the data is a lot less 'clean' than MNIST.
End of explanation
url = 'http://commondatastorage.googleapis.com/books1000/'
def maybe_download(filename, expected_bytes, force=False):
  """Download a file if not present, and make sure it's the right size."""
if force or not os.path.exists(filename):
filename, _ = urlretrieve(url + filename, filename)
statinfo = os.stat(filename)
if statinfo.st_size == expected_bytes:
print('Found and verified', filename)
else:
raise Exception(
'Failed to verify ' + filename + '. Can you get to it with a browser?')
return filename
train_filename = maybe_download('notMNIST_large.tar.gz', 247336696)
test_filename = maybe_download('notMNIST_small.tar.gz', 8458043)
Explanation: First, we'll download the dataset to our local machine. The data consists of characters rendered in a variety of fonts on a 28x28 image. The labels are limited to 'A' through 'J' (10 classes). The training set has about 500k and the test set 19000 labelled examples. Given these sizes, it should be possible to train models quickly on any machine.
End of explanation
num_classes = 10
np.random.seed(133)
def maybe_extract(filename, force=False):
root = os.path.splitext(os.path.splitext(filename)[0])[0] # remove .tar.gz
if os.path.isdir(root) and not force:
# You may override by setting force=True.
print('%s already present - Skipping extraction of %s.' % (root, filename))
else:
print('Extracting data for %s. This may take a while. Please wait.' % root)
tar = tarfile.open(filename)
sys.stdout.flush()
tar.extractall()
tar.close()
data_folders = [
os.path.join(root, d) for d in sorted(os.listdir(root))
if os.path.isdir(os.path.join(root, d))]
if len(data_folders) != num_classes:
raise Exception(
'Expected %d folders, one per class. Found %d instead.' % (
num_classes, len(data_folders)))
print(data_folders)
return data_folders
train_folders = maybe_extract(train_filename)
test_folders = maybe_extract(test_filename)
Explanation: Extract the dataset from the compressed .tar.gz file.
This should give you a set of directories, labelled A through J.
End of explanation
import random
def display_sample_files(folders):
for f in folders:
display(f)
display(*[(Image(filename=os.path.join(f, name))) for name in random.sample(os.listdir(f), 5)])
display_sample_files(test_folders)
Explanation: Problem 1
Let's take a peek at some of the data to make sure it looks sensible. Each exemplar should be an image of a character A through J rendered in a different font. Display a sample of the images that we just downloaded. Hint: you can use the package IPython.display.
End of explanation
image_size = 28 # Pixel width and height.
pixel_depth = 255.0 # Number of levels per pixel.
def load_letter(folder, min_num_images):
"""Load the data for a single letter label."""
image_files = os.listdir(folder)
dataset = np.ndarray(shape=(len(image_files), image_size, image_size),
dtype=np.float32)
print(folder)
for image_index, image in enumerate(image_files):
image_file = os.path.join(folder, image)
try:
image_data = (ndimage.imread(image_file).astype(float) -
pixel_depth / 2) / pixel_depth
if image_data.shape != (image_size, image_size):
raise Exception('Unexpected image shape: %s' % str(image_data.shape))
dataset[image_index, :, :] = image_data
except IOError as e:
print('Could not read:', image_file, ':', e, '- it\'s ok, skipping.')
num_images = image_index + 1
dataset = dataset[0:num_images, :, :]
if num_images < min_num_images:
raise Exception('Many fewer images than expected: %d < %d' %
(num_images, min_num_images))
print('Full dataset tensor:', dataset.shape)
print('Mean:', np.mean(dataset))
print('Standard deviation:', np.std(dataset))
return dataset
def maybe_pickle(data_folders, min_num_images_per_class, force=False):
dataset_names = []
for folder in data_folders:
set_filename = folder + '.pickle'
dataset_names.append(set_filename)
if os.path.exists(set_filename) and not force:
# You may override by setting force=True.
print('%s already present - Skipping pickling.' % set_filename)
else:
print('Pickling %s.' % set_filename)
dataset = load_letter(folder, min_num_images_per_class)
try:
with open(set_filename, 'wb') as f:
pickle.dump(dataset, f, pickle.HIGHEST_PROTOCOL)
except Exception as e:
print('Unable to save data to', set_filename, ':', e)
return dataset_names
train_datasets = maybe_pickle(train_folders, 45000)
test_datasets = maybe_pickle(test_folders, 1800)
Explanation: Now let's load the data in a more manageable format. Since, depending on your computer setup, you might not be able to fit it all in memory, we'll load each class into a separate dataset, store them on disk and curate them independently. Later we'll merge them into a single dataset of manageable size.
We'll convert the entire dataset into a 3D array (image index, x, y) of floating point values, normalized to have approximately zero mean and standard deviation ~0.5 to make training easier down the road.
A few images might not be readable, we'll just skip them.
End of explanation
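# Rough memory estimate (an aside, not from the original notebook): the full
# training set stored as float32 is about 500000 images x 28 x 28 pixels x 4
# bytes, i.e. roughly 1.6 GB, which is why per-class pickling is used here.
n_images, side, bytes_per_value = 500000, 28, 4
print('approx. %.1f GB' % (n_images * side * side * bytes_per_value / 1e9))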
def display_sample_images(datasets):
for d in datasets:
with open(d, 'rb') as data:
images = pickle.load(data)
fig = plt.figure()
for i in range(1):
fig.add_subplot()
plt.imshow(images[random.randint(0, images.shape[0] - 1)])  # randint is inclusive at the upper bound
plt.show()
display_sample_images(test_datasets)
Explanation: Problem 2
Let's verify that the data still looks good. Display a sample of the labels and images from the ndarray. Hint: you can use matplotlib.pyplot.
End of explanation
def show_counts(datasets):
counts = []
for d in datasets:
with open(d, 'rb') as data:
images = pickle.load(data)
counts.append(images.shape[0])
display(counts)
show_counts(train_datasets)
show_counts(test_datasets)
Explanation: Problem 3
Another check: we expect the data to be balanced across classes. Verify that.
End of explanation
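# A quick numerical balance check (a sketch complementing the raw counts above):
# the per-class counts should agree to within a fraction of a percent.
def balance_report(pickle_files):
    counts = []
    for path in pickle_files:
        with open(path, 'rb') as f:
            counts.append(len(pickle.load(f)))
    counts = np.array(counts)
    spread = 100.0 * (counts.max() - counts.min()) / counts.mean()
    print('min %d, max %d, spread %.2f%%' % (counts.min(), counts.max(), spread))

balance_report(train_datasets)
balance_report(test_datasets)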
import numpy as np
def make_arrays(nb_rows, img_size):
if nb_rows:
dataset = np.ndarray((nb_rows, img_size, img_size), dtype=np.float32)
labels = np.ndarray(nb_rows, dtype=np.int32)
else:
dataset, labels = None, None
return dataset, labels
def merge_datasets(pickle_files, train_size, valid_size=0):
num_classes = len(pickle_files)
valid_dataset, valid_labels = make_arrays(valid_size, image_size)
train_dataset, train_labels = make_arrays(train_size, image_size)
vsize_per_class = valid_size // num_classes
tsize_per_class = train_size // num_classes
start_v, start_t = 0, 0
end_v, end_t = vsize_per_class, tsize_per_class
end_l = vsize_per_class+tsize_per_class
for label, pickle_file in enumerate(pickle_files):
try:
with open(pickle_file, 'rb') as f:
letter_set = pickle.load(f)
# let's shuffle the letters to have random validation and training set
np.random.shuffle(letter_set)
if valid_dataset is not None:
valid_letter = letter_set[:vsize_per_class, :, :]
valid_dataset[start_v:end_v, :, :] = valid_letter
valid_labels[start_v:end_v] = label
start_v += vsize_per_class
end_v += vsize_per_class
train_letter = letter_set[vsize_per_class:end_l, :, :]
train_dataset[start_t:end_t, :, :] = train_letter
train_labels[start_t:end_t] = label
start_t += tsize_per_class
end_t += tsize_per_class
except Exception as e:
print('Unable to process data from', pickle_file, ':', e)
raise
return valid_dataset, valid_labels, train_dataset, train_labels
train_size = 200000
valid_size = 10000
test_size = 10000
image_size = 28 # Pixel width and height.
pixel_depth = 255.0 # Number of levels per pixel.
train_datasets = ['notMNIST_large/A.pickle',
'notMNIST_large/B.pickle',
'notMNIST_large/C.pickle',
'notMNIST_large/D.pickle',
'notMNIST_large/E.pickle',
'notMNIST_large/F.pickle',
'notMNIST_large/G.pickle',
'notMNIST_large/H.pickle',
'notMNIST_large/I.pickle',
'notMNIST_large/J.pickle']
test_datasets = ['notMNIST_small/A.pickle',
'notMNIST_small/B.pickle',
'notMNIST_small/C.pickle',
'notMNIST_small/D.pickle',
'notMNIST_small/E.pickle',
'notMNIST_small/F.pickle',
'notMNIST_small/G.pickle',
'notMNIST_small/H.pickle',
'notMNIST_small/I.pickle',
'notMNIST_small/J.pickle']
valid_dataset, valid_labels, train_dataset, train_labels = merge_datasets(
train_datasets, train_size, valid_size)
_, _, test_dataset, test_labels = merge_datasets(test_datasets, test_size)
print('Training:', train_dataset.shape, train_labels.shape)
print('Validation:', valid_dataset.shape, valid_labels.shape)
print('Testing:', test_dataset.shape, test_labels.shape)
Explanation: Merge and prune the training data as needed. Depending on your computer setup, you might not be able to fit it all in memory, and you can tune train_size as needed. The labels will be stored into a separate array of integers 0 through 9.
Also create a validation dataset for hyperparameter tuning.
End of explanation
def randomize(dataset, labels):
permutation = np.random.permutation(labels.shape[0])
shuffled_dataset = dataset[permutation,:,:]
shuffled_labels = labels[permutation]
return shuffled_dataset, shuffled_labels
train_dataset, train_labels = randomize(train_dataset, train_labels)
test_dataset, test_labels = randomize(test_dataset, test_labels)
valid_dataset, valid_labels = randomize(valid_dataset, valid_labels)
Explanation: Next, we'll randomize the data. It's important to have the labels well shuffled for the training and test distributions to match.
End of explanation
%matplotlib inline
import matplotlib.pyplot as plt
import random
letters = ["A", "B", "C", "D", "E", "F", "G", "H", "I", "J"]
def verify(dataset, labels):
index = random.randint(0, dataset.shape[0] - 1)  # randint is inclusive at the upper bound
fig = plt.figure()
fig.add_subplot()
plt.imshow(dataset[index])
display(letters[labels[index]])
verify(train_dataset, train_labels)
verify(test_dataset, test_labels)
verify(valid_dataset, valid_labels)
Explanation: Problem 4
Convince yourself that the data is still good after shuffling!
End of explanation
pickle_file = 'notMNIST.pickle'
try:
f = open(pickle_file, 'wb')
save = {
'train_dataset': train_dataset,
'train_labels': train_labels,
'valid_dataset': valid_dataset,
'valid_labels': valid_labels,
'test_dataset': test_dataset,
'test_labels': test_labels,
}
pickle.dump(save, f, pickle.HIGHEST_PROTOCOL)
f.close()
except Exception as e:
print('Unable to save data to', pickle_file, ':', e)
raise
statinfo = os.stat(pickle_file)
print('Compressed pickle size:', statinfo.st_size)
Explanation: Finally, let's save the data for later reuse:
End of explanation
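# Optional variant (a sketch, not in the original): the print above labels the
# size "compressed", but pickle.dump alone does not compress; wrapping the file
# in gzip does, at the cost of slower reads.
import gzip
with gzip.open('notMNIST.pickle.gz', 'wb') as f:
    pickle.dump(save, f, pickle.HIGHEST_PROTOCOL)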
import pickle
pickle_file = 'notMNIST.pickle'
f = open(pickle_file, 'rb')
datasets = pickle.load(f)
f.close()
train_set = set([tuple(row) for row in datasets['train_dataset'].reshape(200000, 28*28)])
test_set = set([tuple(row) for row in datasets['test_dataset'].reshape(10000, 28*28)])
valid_set = set([tuple(row) for row in datasets['valid_dataset'].reshape(10000, 28*28)])
print('Overlap between test and valid: {0:.2f}%'.format(len(valid_set & test_set) / 10000 * 100))
print('Overlap between test and train: {0:.2f}%'.format(len(train_set & test_set) / 10000 * 100))
print('Overlap between train and valid: {0:.2f}%'.format(len(train_set & valid_set) / 10000 * 100))
sanitised_test = np.array(list(test_set - train_set)).reshape((-1, 28, 28))  # -1 avoids hard-coding the overlap-dependent count
sanitised_valid = np.array(list(valid_set - train_set)).reshape((-1, 28, 28))
print(sanitised_test.shape)
print(sanitised_valid.shape)
Explanation: Problem 5
By construction, this dataset might contain a lot of overlapping samples, including training data that's also contained in the validation and test set! Overlap between training and test can skew the results if you expect to use your model in an environment where there is never an overlap, but it is actually fine if you expect to see training samples recur when you use it.
Measure how much overlap there is between training, validation and test samples.
Optional questions:
- What about near duplicates between datasets? (images that are almost identical)
- Create a sanitized validation and test set, and compare your accuracy on those in subsequent assignments.
End of explanation
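# One possible take on the optional near-duplicate question above (a sketch,
# not part of the original solution): quantize pixel values before hashing so
# that images differing only by tiny intensity noise map to the same key.
def near_duplicate_overlap(dataset_a, dataset_b, decimals=1):
    def keys(dataset):
        return set(hash(np.round(img, decimals=decimals).tobytes()) for img in dataset)
    return len(keys(dataset_a) & keys(dataset_b))

print('Near-duplicate test/train overlap:',
      near_duplicate_overlap(datasets['test_dataset'], datasets['train_dataset']))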
from sklearn import metrics
train = train_dataset.reshape((-1, 28*28))
train_labels = datasets['train_labels']
(train_labels.shape, train.shape)
lr = LogisticRegression(multi_class='multinomial', solver='lbfgs')
model = lr.fit(train, train_labels)
test = test_dataset.reshape((-1, 28*28))
test_labels = datasets['test_labels']
predicted_labels = model.predict(test)
print('Precision: {0}'.format(metrics.precision_score(test_labels, predicted_labels, average='weighted')))  # average= is required for multiclass labels
Explanation: Problem 6
Let's get an idea of what an off-the-shelf classifier can give you on this data. It's always good to check that there is something to learn, and that it's a problem that is not so trivial that a canned solution solves it.
Train a simple model on this data using 50, 100, 1000 and 5000 training samples. Hint: you can use the LogisticRegression model from sklearn.linear_model.
Optional question: train an off-the-shelf model on all the data!
End of explanation |
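# Sketch of the 50/100/1000/5000-sample comparison asked for in Problem 6
# (names assume the train/test arrays defined in the cell above). Accuracy
# should climb steadily as more training samples are used.
from sklearn.metrics import accuracy_score

for n_samples in [50, 100, 1000, 5000]:
    clf = LogisticRegression()
    clf.fit(train[:n_samples], train_labels[:n_samples])
    acc = accuracy_score(test_labels, clf.predict(test))
    print('%5d training samples -> test accuracy %.3f' % (n_samples, acc))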
12,575 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Network Traffic Forecasting with AutoTSEstimator
In telco, accurate forecast of KPIs (e.g. network traffic, utilizations, user experience, etc.) for communication networks ( 2G/3G/4G/5G/wired) can help predict network failures, allocate resource, or save energy.
In this notebook, we demostrate a reference use case where we use the network traffic KPI(s) in the past to predict traffic KPI(s) in the future. We demostrate how to use AutoTS in project Chronos to do time series forecasting in an automated and distributed way.
For demonstration, we use the publicly available network traffic data repository maintained by the WIDE project and in particular, the network traffic traces aggregated every 2 hours (i.e. AverageRate in Mbps/Gbps and Total Bytes) in year 2018 and 2019 at the transit link of WIDE to the upstream ISP (dataset link).
Download raw dataset and load into dataframe
Now we download the dataset and load it into a pandas dataframe. Steps are as below.
First, run the script get_data.sh to download the raw data. It will download the monthly aggregated traffic data in year 2018 and 2019 into data folder. The raw data contains aggregated network traffic (average MBPs and total bytes) as well as other metrics.
Second, run extract_data.sh to extract relavant traffic KPI's from raw data, i.e. AvgRate for average use rate, and total for total bytes. The script will extract the KPI's with timestamps into data/data.csv.
Finally, use pandas to load data/data.csv into a dataframe as shown below
Step1: Below are some example records of the data
Step2: Data pre-processing
Now we need to do data cleaning and preprocessing on the raw data. Note that this part could vary for different dataset.
For the network traffic data we're using, the processing contains 2 parts
Step3: Plot the data to see how the KPI's look like
Step4: Time series forecasting with AutoTS
AutoTS provides AutoML support for building end-to-end time series analysis pipelines (including automatic feature generation, model selection and hyperparameter tuning).
The general workflow using automated training contains below two steps.
1. create a AutoTSEstimator to train a TSPipeline, save it to file to use later or elsewhere if you wish.
2. use TSPipeline to do prediction, evaluation, and incremental fitting as well.
Chronos uses Orca to enable distributed training and AutoML capabilities. Init orca as below. View Orca Context for more details. Note that argument init_ray_on_spark must be True for Chronos.
Step5: Then we initialize a AutoTSEstimator.
Step6: We need to split the data frame into train, validation and test data frame before training.
Then we impute the data to handle missing data and scale the data.
You can use TSDataset as an easy way to finish it.
Step7: Then we fit on train data and evaluate on validation data.
Step8: We get a TSPipeline after training. Let's print the hyper paramters selected.
Note that past_seq_len is the lookback value that is automatically chosen
Step9: We use tspipeline to predict and evaluate.
Step10: plot actual and prediction values for AvgRate KPI
Step11: Calculate mean square error and the symetric mean absolute percentage error.
Step12: You can save the pipeline to file and reload it to do incremental fitting or others.
Step13: You can stop the orca context after auto training.
Step14: Next, we demonstrate how to do incremental fitting with your saved pipeline file.
First load saved pipeline file.
Step15: Then do incremental fitting with TSPipeline.fit(). We use validation data frame as additional data for demonstration. You can use your new data frame.
Step16: predict and plot the result after incremental fitting.
Step17: Calculate mean square error and the symetric mean absolute percentage error. | Python Code:
import os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
raw_df = pd.read_csv("data/data.csv")
Explanation: Network Traffic Forecasting with AutoTSEstimator
In telco, accurate forecast of KPIs (e.g. network traffic, utilizations, user experience, etc.) for communication networks ( 2G/3G/4G/5G/wired) can help predict network failures, allocate resource, or save energy.
In this notebook, we demostrate a reference use case where we use the network traffic KPI(s) in the past to predict traffic KPI(s) in the future. We demostrate how to use AutoTS in project Chronos to do time series forecasting in an automated and distributed way.
For demonstration, we use the publicly available network traffic data repository maintained by the WIDE project and in particular, the network traffic traces aggregated every 2 hours (i.e. AverageRate in Mbps/Gbps and Total Bytes) in year 2018 and 2019 at the transit link of WIDE to the upstream ISP (dataset link).
Download raw dataset and load into dataframe
Now we download the dataset and load it into a pandas dataframe. Steps are as below.
First, run the script get_data.sh to download the raw data. It will download the monthly aggregated traffic data in year 2018 and 2019 into data folder. The raw data contains aggregated network traffic (average MBPs and total bytes) as well as other metrics.
Second, run extract_data.sh to extract relavant traffic KPI's from raw data, i.e. AvgRate for average use rate, and total for total bytes. The script will extract the KPI's with timestamps into data/data.csv.
Finally, use pandas to load data/data.csv into a dataframe as shown below
End of explanation
raw_df.head()
Explanation: Below are some example records of the data
End of explanation
df = pd.DataFrame(pd.to_datetime(raw_df.StartTime))
# we can find 'AvgRate' is of two scales: 'Mbps' and 'Gbps'
raw_df.AvgRate.str[-4:].unique()
# Unify AvgRate value
df['AvgRate'] = raw_df.AvgRate.apply(lambda x:float(x[:-4]) if x.endswith("Mbps") else float(x[:-4])*1000)
df["total"] = raw_df["total"]
df.head()
df.describe()
Explanation: Data pre-processing
Now we need to do data cleaning and preprocessing on the raw data. Note that this part could vary for different dataset.
For the network traffic data we're using, the processing contains 2 parts:
1. Convert string datetime to TimeStamp
2. Unify the measurement scale for AvgRate value - some uses Mbps, some uses Gbps
End of explanation
ax = df.plot(y='AvgRate',figsize=(12,5), title="AvgRate of network traffic data")
Explanation: Plot the data to see how the KPI's look like
End of explanation
from zoo.orca import init_orca_context
init_orca_context(cores=10, init_ray_on_spark=True)
Explanation: Time series forecasting with AutoTS
AutoTS provides AutoML support for building end-to-end time series analysis pipelines (including automatic feature generation, model selection and hyperparameter tuning).
The general workflow using automated training contains below two steps.
1. create a AutoTSEstimator to train a TSPipeline, save it to file to use later or elsewhere if you wish.
2. use TSPipeline to do prediction, evaluation, and incremental fitting as well.
Chronos uses Orca to enable distributed training and AutoML capabilities. Init orca as below. View Orca Context for more details. Note that argument init_ray_on_spark must be True for Chronos.
End of explanation
from zoo.chronos.autots import AutoTSEstimator, TSPipeline
import torch
import zoo.orca.automl.hp as hp
auto_estimator = AutoTSEstimator(model='lstm',
search_space="normal",
past_seq_len=hp.randint(50, 100),
future_seq_len=1,
metric="mse",
cpus_per_trial=2)
Explanation: Then we initialize a AutoTSEstimator.
End of explanation
from zoo.chronos.data import TSDataset
from sklearn.preprocessing import StandardScaler
tsdata_train, tsdata_val, tsdata_test = TSDataset.from_pandas(df,
dt_col="StartTime",
target_col="AvgRate",
with_split=True,
val_ratio=0.1,
test_ratio=0.1)
standard_scaler = StandardScaler()
for tsdata in [tsdata_train, tsdata_val, tsdata_test]:
tsdata.gen_dt_feature(one_hot_features=["HOUR", "WEEKDAY"])\
.impute(mode="last")\
.scale(standard_scaler, fit=(tsdata is tsdata_train))
Explanation: We need to split the data frame into train, validation and test data frame before training.
Then we impute the data to handle missing data and scale the data.
You can use TSDataset as an easy way to finish it.
End of explanation
%%time
ts_pipeline = auto_estimator.fit(data=tsdata_train,
epochs=20,
batch_size=128,
validation_data=tsdata_val,
n_sampling=4)
Explanation: Then we fit on train data and evaluate on validation data.
End of explanation
best_config = auto_estimator.get_best_config()
best_config
Explanation: We get a TSPipeline after training. Let's print the hyper paramters selected.
Note that past_seq_len is the lookback value that is automatically chosen
End of explanation
y_pred = ts_pipeline.predict(tsdata_test)
Explanation: We use tspipeline to predict and evaluate.
End of explanation
# plot the predicted values and actual values
lookback = best_config['past_seq_len']
plt.figure(figsize=(16,6))
test_df = tsdata_test.unscale().to_pandas()
tsdata_test.scale(standard_scaler, fit=False)
plt.plot(test_df.StartTime[lookback - 1:], y_pred[:,0,0], color='red', label='predicted values')
plt.plot(test_df.StartTime[lookback - 1:], test_df.AvgRate[lookback - 1:], color='blue', label='actual values')
plt.title('the predicted values and actual values (for the test data)')
plt.xlabel('datetime')
plt.legend(loc='upper left')
plt.show()
Explanation: plot actual and prediction values for AvgRate KPI
End of explanation
mse, smape = ts_pipeline.evaluate(tsdata_test, metrics=["mse", "smape"])
print("Evaluate: the mean square error is", mse)
print("Evaluate: the smape value is", smape)
Explanation: Calculate mean square error and the symmetric mean absolute percentage error.
End of explanation
# save pipeline file
my_ppl_file_path = "/tmp/saved_pipeline"
ts_pipeline.save(my_ppl_file_path)
Explanation: You can save the pipeline to file and reload it to do incremental fitting or others.
End of explanation
from zoo.orca import stop_orca_context
stop_orca_context()
Explanation: You can stop the orca context after auto training.
End of explanation
new_ts_pipeline = TSPipeline.load(my_ppl_file_path)
Explanation: Next, we demonstrate how to do incremental fitting with your saved pipeline file.
First load saved pipeline file.
End of explanation
new_ts_pipeline.fit(tsdata_val)
Explanation: Then do incremental fitting with TSPipeline.fit(). We use validation data frame as additional data for demonstration. You can use your new data frame.
End of explanation
# predict results of test_df
y_pred = new_ts_pipeline.predict(tsdata_test)
lookback = best_config['past_seq_len']
plt.figure(figsize=(16,6))
test_df = tsdata_test.unscale().to_pandas()
tsdata_test.scale(standard_scaler, fit=False)
plt.plot(test_df.StartTime[lookback - 1:], y_pred[:,0,0], color='red', label='predicted values')
plt.plot(test_df.StartTime[lookback - 1:], test_df.AvgRate[lookback - 1:], color='blue', label='actual values')
plt.title('the predicted values and actual values (for the test data)')
plt.xlabel('datetime')
plt.legend(loc='upper left')
plt.show()
Explanation: predict and plot the result after incremental fitting.
End of explanation
mse, smape = new_ts_pipeline.evaluate(tsdata_test, metrics=["mse", "smape"])
print("Evaluate: the mean square error is", mse)
print("Evaluate: the smape value is", smape)
Explanation: Calculate mean square error and the symetric mean absolute percentage error.
End of explanation |
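# For reference (an addition, not from the original notebook): the symmetric
# mean absolute percentage error reported by evaluate() can be computed by hand
# roughly as below; the exact scaling convention may differ from Chronos' own.
def smape(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true, dtype=float), np.asarray(y_pred, dtype=float)
    return 100.0 * np.mean(2.0 * np.abs(y_pred - y_true) /
                           (np.abs(y_true) + np.abs(y_pred)))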
12,576 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<font color='blue'>Data Science Academy - Python Fundamentos - Capítulo 9</font>
Download
Step1: Preço médio de um veículo por marca, bem como tipo de veículo | Python Code:
# Versão da Linguagem Python
from platform import python_version
print('Versão da Linguagem Python Usada Neste Jupyter Notebook:', python_version())
# Imports
import os
import subprocess
import stat
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib as mat
import matplotlib.pyplot as plt
from datetime import datetime
sns.set(style="white")
%matplotlib inline
np.__version__
pd.__version__
sns.__version__
mat.__version__
# Dataset
clean_data_path = "dataset/autos.csv"
df = pd.read_csv(clean_data_path,encoding="latin-1")
# Calculate the average price per brand and per vehicle type
trial = pd.DataFrame()
for b in list(df["brand"].unique()):
for v in list(df["vehicleType"].unique()):
z = df[(df["brand"] == b) & (df["vehicleType"] == v)]["price"].mean()
trial = trial.append(pd.DataFrame({'brand':b , 'vehicleType':v , 'avgPrice':z}, index=[0]))
trial = trial.reset_index()
del trial["index"]
trial["avgPrice"].fillna(0,inplace=True)
trial["avgPrice"].isnull().value_counts()
trial["avgPrice"] = trial["avgPrice"].astype(int)
trial.head(5)
Explanation: <font color='blue'>Data Science Academy - Python Fundamentos - Capítulo 9</font>
Download: http://github.com/dsacademybr
Mini-Project 2 - Exploratory Analysis of a Kaggle Dataset
Análise 4
End of explanation
# Create a heatmap of the average vehicle price by brand and vehicle type
tri = trial.pivot("brand","vehicleType", "avgPrice")
fig, ax = plt.subplots(figsize=(15,20))
sns.heatmap(tri,linewidths=1,cmap="YlGnBu",annot=True, ax=ax, fmt="d")
ax.set_title("Heatmap - Preço médio de um veículo por marca e tipo de veículo",fontdict={'size':20})
ax.xaxis.set_label_text("Tipo de Veículo",fontdict= {'size':20})
ax.yaxis.set_label_text("Marca",fontdict= {'size':20})
plt.show()
# Saving the plot
fig.savefig("plots/Analise4/heatmap-price-brand-vehicleType.png")
Explanation: Average price of a vehicle by brand, as well as by vehicle type
End of explanation |
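# A more idiomatic pandas alternative to the nested loops above (a sketch that
# produces the same brand x vehicleType average-price table, with zeros for
# missing combinations).
trial_alt = (df.groupby(['brand', 'vehicleType'])['price']
               .mean()
               .unstack(fill_value=0)
               .astype(int))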
12,577 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Breaking MCMC
Step1: Here's the Rosenbrock function in code. Since we're pretending it's the log-posterior, I've introduced a minus sign that doesn't normally appear.
Step2: Let's plot "standard delta-log-pdf" contours, just to get a feel for the shape of the function.
Step3: That's one ugly banana! It's worth noting that, in less extreme cases, cleverly re-parametrizing your model can be a significant help, for example sampling the logarithm of a parameter with a large dynamic range. As you'll see in the homework, however, it's important to be aware of the effect re-parametrizations have on the prior if not done carefully. As you can see below, considering $\ln(y)$ instead of $y$ makes part of the posterior distribution look easier to deal with... but at the cost of introducing another little problem.
Step4: Eggbox function
Step5: Yikes! Lot's of well separated peaks in the posterior distribution. To converge properly, chains need to be able to move between them, which is clearly a challenge for the approach we've used so far.
As it happens, the real-world CMB banana contour plot given above also features a (minor) secondary peak, which is visible in the marginal posterior. | Python Code:
from IPython.display import Image
Image(filename="DifficultDensities_banana_eg.png", width=350)
Explanation: Breaking MCMC: difficult densities
We're fortunate that much of the time the posterior functions we care about are relatively simple, i.e. unimodal and roughly Gaussian shaped. But not always! So, let's have a look at some density functions that simple MCMC samplers have trouble with. There are loads of functions to choose from (see this Wikipedia entry), but the two given here exemplify the cases that you're most likley to see in practice, namely
strong, non-linear degeneracies, and
multiple peaks.
Since these test functions are intended to work out minimization codes, we'll interpret them as minus the log-posterior.
Rosenbrock function: the dreaded "banana"
As the subtitle suggests, this function is an extreme version of the banana-shaped degeneracies we sometimes see in practice, for example this:
End of explanation
def Rosenbrock_lnP(x, y, a=1.0, b=100.0):
return -( (a-x)**2 + b*(y-x**2)**2 )
Explanation: Here's the Rosenbrock function in code. Since we're pretending it's the log-posterior, I've introduced a minus sign that doesn't normally appear.
End of explanation
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (8.0, 8.0)
xs = np.arange(-2.0, 3.0, 0.0025)
ys = np.arange(0.0, 5.0, 0.0025)
zs = np.array([Rosenbrock_lnP(xs,y) for y in ys])
plt.contour(xs, ys, -2.0*zs, levels=[2.3, 6.18, 11.8]);
Explanation: Let's plot "standard delta-log-pdf" contours, just to get a feel for the shape of the function.
End of explanation
xs = np.arange(-2.0, 3.0, 0.0025)
lnys = np.arange(-3.0, 1.6, 0.0025)
zs = np.array([Rosenbrock_lnP(xs,y) for y in np.exp(lnys)])
plt.contour(xs, lnys, -2.0*zs, levels=[2.3, 6.18, 11.8]);
Explanation: That's one ugly banana! It's worth noting that, in less extreme cases, cleverly re-parametrizing your model can be a significant help, for example sampling the logarithm of a parameter with a large dynamic range. As you'll see in the homework, however, it's important to be aware of the effect re-parametrizations have on the prior if not done carefully. As you can see below, considering $\ln(y)$ instead of $y$ makes part of the posterior distribution look easier to deal with... but at the cost of introducing another little problem.
End of explanation
def eggbox_lnP(x, y):
return (2.0 + np.cos(0.5*x)*np.cos(0.5*y))**3
xs = np.arange(0.0, 30.0, 0.1)
ys = np.arange(0.0, 30.0, 0.1)
zs = np.array([eggbox_lnP(xs,y) for y in ys])
plt.contour(xs, ys, -2.0*(zs-np.max(zs)), levels=[2.3, 6.18, 11.8]);
Explanation: Eggbox function: multiple modes
Next, we look at an "eggbox" function. Again, the function below will be interpreted as a log-posterior.
End of explanation
Image(filename="DifficultDensities_multimodes_eg.png", width=350)
Explanation: Yikes! Lots of well-separated peaks in the posterior distribution. To converge properly, chains need to be able to move between them, which is clearly a challenge for the approach we've used so far.
As it happens, the real-world CMB banana contour plot given above also features a (minor) secondary peak, which is visible in the marginal posterior.
End of explanation |
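# Illustration only (not from the original notebook): a minimal random-walk
# Metropolis sampler that can be pointed at either log-posterior above. With a
# fixed isotropic step it mixes poorly on the banana (the narrow ridge forces
# tiny steps) and on the eggbox (it almost never hops between peaks).
def metropolis(lnP, start, step=0.1, nsteps=10000):
    current = np.array(start, dtype=float)
    lnp_current = lnP(*current)
    chain = [current.copy()]
    for _ in range(nsteps):
        proposal = current + step * np.random.randn(len(current))
        lnp_proposal = lnP(*proposal)
        if np.log(np.random.rand()) < lnp_proposal - lnp_current:
            current, lnp_current = proposal, lnp_proposal
        chain.append(current.copy())
    return np.array(chain)

# e.g. chain = metropolis(Rosenbrock_lnP, [0.0, 1.0])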
12,578 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Example
Step1: 1
Step2: The input parameter
sele_atoms
enables the user to choose which atoms she/he wants to use as beads when constructing the ENM.
Standard options are
Step3: We can see that this takes considerably more time compared to the 3-beads choice.
2
Step4: We are usually not interested in the whole spectrum. In particular we can focus on the first (smallest) eigenvalues.
6 of them will be equal to zero, they correspond to the rototranslational null modes of the system
Step5: After these, we have the eigenvalues representing the non-null normal modes of the system.
Their amplitude is given by the inverse of the associated eigenvalue
Step6: 3
Step7: The MSF usually represent a decent approximation of the B-factors obtained from crystallography
4
Step8: 5
Step9: This is now much faster (x10) than the full-matrix diagonalization.
Step10: 5b
Step11: As we can see the final result is extremely accurate, while the computational time is much reduced, already for a relatively small-sized molecule like the one used here (71 nucleotides).
6
Step12: We can take a look at this trajectory using nglview
Step13: We can also save it as a pdb (or any other format) to visualize it later on with a different visualization software
Step14: References | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
# import barnaba
import barnaba.enm as enm
# define the input file
fname = "../test/data/sample1.pdb"
Explanation: Example: Elastic Network Model
Here we show how to use BaRNAba to construct an elastic network model (ENM) of an RNA molecule.
End of explanation
%time enm_obj=enm.Enm(fname,sparse=False)
Explanation: 1: ENM construction
End of explanation
%time enm_AA=enm.Enm(fname,sele_atoms="AA",cutoff=0.7)
Explanation: The input parameter
sele_atoms
enables the user to choose which atoms she/he wants to use as beads when constructing the ENM.
Standard options are:
- S-ENM (C1'): best 1-bead choice [2,3]
- SBP-ENM (C1', C2, P): optimal compromise between accuracy and computational burden [3]
- AA-ENM (all heavy (non-hydrogen) atoms): best level of accuracy [3,4]
For each of these beads choices there is an optimal cutoff radius (see the analysis in [2] for more details).
The default model implemented by BaRNAba is SBP-ENM, with a cutoff of 0.9 nm.
Let's try to build an AA-ENM
End of explanation
e_val=enm_obj.get_eval()
plt.plot(e_val)
Explanation: We can see that this takes considerably more time compared to the 3-beads choice.
2: Eigenvalues and eigenvectors
Let's take a look of the eigenvalues:
End of explanation
plt.plot(e_val[:10],marker='s')
plt.ylabel(r'$\lambda_i$')
plt.xlabel('$i$')
Explanation: We are usually not interested in the whole spectrum. In particular we can focus on the first (smallest) eigenvalues.
6 of them will be equal to zero, they correspond to the rototranslational null modes of the system:
End of explanation
plt.plot(1/e_val[6:26],marker='o')
plt.ylabel(r'$1/\lambda_i$')
plt.xlabel('$i$')
Explanation: After these, we have the eigenvalues representing the non-null normal modes of the system.
Their amplitude is given by the inverse of the associated eigenvalue:
$\sigma_i=1 / \lambda_i$
End of explanation
msf=enm_obj.get_MSF()
plt.figure(figsize=(12,3))
plt.plot(msf)
plt.ylabel('MSF$_i$')
plt.xlabel('$i$')
plt.xlim(-1,enm_obj.n_beads)
plt.grid()
Explanation: 3: Mean square fluctuations
The first information we can obtain from the ENM are the mean square fluctuations (MSF) of its beads.
This can be easily computed as:
$MSF_i=\langle \delta \mathbf{r}^2_i \rangle = \sum_\alpha 1/\lambda_\alpha \sum_\mu v^\alpha_{i,\mu} v^\alpha_{i,\mu}$
where $i=1,...,N$ is the bead index, $\mu=0,1,2$ indicates the coordinates component (x,y, or z), $\alpha=1,...,3N$ indicates the mode.
End of explanation
%time fluc_C2,reslist=enm_AA.c2_fluctuations()
plt.plot(fluc_C2,c='r')
plt.ylabel('C2-C2 fluctuations')
plt.xlabel('Res index')
plt.xlim(-0.5,69.5)
Explanation: The MSF usually represent a decent approximation of the B-factors obtained from crystallography
4: C2-C2 fluctuations
We can see that the MSFs become very large at the terminals. This is because they represent global fluctuations with respect to the equilibrium position, not local fluctuations of distances.
A way of estimating local flexibility is to compute inter-bead distances. In particular C2-C2 fluctuations have been shown to correlate well with SHAPE reactivity (Pinamonti et al. NAR 2015)
End of explanation
%time enm_sparse=enm.Enm(fname,sparse=True,sele_atoms="AA",cutoff=0.7)
Explanation: 5: Sparse diagonalization
The dynamics of the ENM is determined by the eigenvectors and eigenvalues of the interaction matrix $M_{ij}$, and the covariance matrix $C_{ij}$ can be obtained via pseudo-inversion of $M_{ij}$.
However, we are usually interested only in the eigenvectors corresponding to high amplitude modes, i.e. those with the smallest eigenvalues.
We can exploit the sparsity of $M_{ij}$ to speed up the computation of these modes.
End of explanation
plt.plot(1/enm_AA.get_eval()[6:26],label='Full',marker='x')
plt.plot(1/enm_sparse.get_eval()[6:],marker='o',label='Sparse',ls='')
plt.legend()
plt.ylabel(r'$1/\lambda_i$')
plt.xlabel('$i$')
Explanation: This is now much faster (x10) than the full-matrix diagonalization.
End of explanation
%time fluc_C2_sparse,reslist=enm_sparse.c2_fluctuations()
plt.plot(fluc_C2,c='r',label='Full')
plt.plot(fluc_C2_sparse,c='b',label='Sparse',ls='',marker='s')
plt.ylabel('C2-C2 fluctuations')
plt.xlabel('Res index')
plt.legend()
plt.xlim(-0.5,69.5)
Explanation: 5b: C2-C2 fluctuations with sparse matrices:
The problem here is that all eigenvectors are required to compute these fluctuations (see formula in [2])
Thus, we cannot use the results obtained in the previous section, because computing all 3N eigenvectors would be computationally very inefficient.
Luckily, there's a nice trick to solve this.
Since we are only interested in a subset of the system (i.e. the C2 atoms), we can obtain the “effective” interaction matrix, following the formula proposed in [6], as
$M_{eff}=M_a - W M_b^{-1} W^T$
This is the matrix governing the dynamics of the C2 beads. This is convenient because $M_a$ is in general much smaller than $M_{tot}$ (1/3 for SBP-ENM, ~1/20 for AA-ENM) and is therefore much quicker to diagonalize.
The calculation of $M_{eff}$ itself is simply performed using scipy.sparse.spsolve.
The computational time is considerably reduced when employing a AA-ENM for molecule of increasing sizes.
End of explanation
traj_mode=enm_AA.get_mode_traj(6,amp=5.0,nframes=50)
Explanation: As we can see the final result is extremely accurate, while the computational time is much reduced, already for a relatively small-sized molecule like the one used here (71 nucleotides).
6: Visualize the eigenmodes
What do the different eigenmodes of our ENM look like?
We can take a look at that using the method
get_mode_traj(i)
That will return a mdtraj.trajectory object representing the fluctuations defined by the i-th eigenvector
$\mathbf{x}(t)=\mathbf{x}_0+A \mathbf{v}^i \sin(\omega t) $
End of explanation
import nglview
view = nglview.show_mdtraj(traj_mode)
view
Explanation: We can take a look at this trajectory using nglview
End of explanation
traj_mode.save_pdb('./enm_traj_mode_6.pdb')
Explanation: We can also save it as a pdb (or any other format) to visualize it later on with a different visualization software
End of explanation
# default is beads_name="C2"
%time fluc_mat,res_list=enm_sparse.get_dist_fluc_mat()
plt.figure(figsize=(10,10))
plt.imshow(fluc_mat,)
tt=plt.xticks(np.arange(len(res_list)),res_list,rotation=90,fontsize=7)
tt=plt.yticks(np.arange(len(res_list)),res_list,fontsize=7)
# one can calculate fluctuations of different atoms
%time fluc_mat,res_list=enm_sparse.get_dist_fluc_mat(beads_name="C1\'")
plt.figure(figsize=(10,10))
plt.imshow(fluc_mat,)
tt=plt.xticks(np.arange(len(res_list)),res_list,rotation=90,fontsize=7)
tt=plt.yticks(np.arange(len(res_list)),res_list,fontsize=7)
Explanation: References:
[1] Tirion, Monique M. "Large amplitude elastic motions in proteins from a single-parameter, atomic analysis." Physical review letters 77.9 (1996): 1905.
[2] Pinamonti, Giovanni, et al. "Elastic network models for RNA: a comparative assessment with molecular dynamics and SHAPE experiments." Nucleic acids research 43.15 (2015): 7260-7269.
[3] Setny, Piotr, and Martin Zacharias. "Elastic network models of nucleic acids flexibility." Journal of chemical theory and computation 9.12 (2013): 5460-5470.
[4] Zimmermann, Michael T., and Robert L. Jernigan. "Elastic network models capture the motions apparent within ensembles of RNA structures." RNA 20.6 (2014): 792-804.
[5] Fuglebakk, Edvin, Nathalie Reuter, and Konrad Hinsen. "Evaluation of protein elastic network models based on an analysis of collective motions." Journal of chemical theory and computation 9.12 (2013): 5618-5628.
[6] Zen A. Carnevale V. Lesk A.M. Micheletti C. “Correspondences between low-energy modes in enzymes: dynamics-based alignment of enzymatic functional families.” Protein Sci. (2008) 17 918 929.
Extra credits: Distance fluctuation matrix
End of explanation |
12,579 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Catigorical Columns Don't just One-Hot, They Count.
Step1: Here we have 3 examples, each containing 5 strings.
Note
Step2: Now we define a categorical column to represent it.
Step3: Use indicator_column to define a dense representation, and input_layer to build the conversion operations.
Step4: Embedding Columns reduce Over Entries, Using the combiner
Step5: The result is 3, 10d embeddings, because it takes the mean over the strings in each example.
Step6: To Skip the reduce?
Use sequence_input_from_feature_columns
Note
Step7: If we run this, it gives a 10d embedding for each of the 5 strings in each of the 3 examples
Step8: Or some careful reshaping
Step9: Linear Bag of Words Model
Step10: Load the IMDB dataset
Step11: Look at an example review
(Punctuation and capitalization are stripped)
Step12: Create the Input Function
Step13: Test the input function
Step14: Note the zero padding, and that x and y have the same shuffle applied.
Step15: Build the Estimator
Step16: DNN Bag of words? | Python Code:
tf.reset_default_graph()
Explanation: Catigorical Columns Don't just One-Hot, They Count.
End of explanation
strings = np.array([['a','a','','b','c'],['a','c','zz','',''],['b','qq','qq','b','']])
Explanation: Here we have 3 examples, each containing 5 strings.
Note: empty strings are ignored, and can be used as padding
End of explanation
vocab = np.array(['a','b','c','<UNK>'],dtype=object)
sparse = tf.feature_column.categorical_column_with_vocabulary_list('strings',vocab, default_value=len(vocab)-1)
Explanation: Now we define a categorical column to represent it.
End of explanation
layer = tf.feature_column.input_layer({'strings':strings}, [tf.feature_column.indicator_column(sparse)])
with tf.train.MonitoredSession() as sess:
input_value = sess.run(layer)
pd.DataFrame(input_value, columns=vocab)
Explanation: Use indicator_column to define a dense representation, and input_layer to build the conversion operations.
End of explanation
tf.reset_default_graph()
layer = tf.feature_column.input_layer(
{'strings':strings},
[tf.feature_column.embedding_column(sparse,10, combiner='mean')])
init = tf.global_variables_initializer()
with tf.train.MonitoredSession() as sess:
sess.run(init)
input_value = sess.run(layer)
Explanation: Embedding Columns reduce Over Entries, Using the combiner
End of explanation
input_value.shape
input_value
Explanation: The result is 3, 10d embeddings, because it takes the mean over the strings in each example.
End of explanation
tf.reset_default_graph()
vocab
sparse = tf.contrib.layers.sparse_column_with_keys('strings', vocab)
layer = tf.contrib.layers.sequence_input_from_feature_columns(
{'strings':tf.constant(strings)},
[tf.contrib.layers.embedding_column(sparse ,10)])
Explanation: To Skip the reduce?
Use sequence_input_from_feature_columns
Note: this is only compatible with contrib feature_columns
End of explanation
init = tf.global_variables_initializer()
with tf.train.MonitoredSession() as sess:
sess.run(init)
input_value = sess.run(layer)
input_value.shape
Explanation: If we run this, it gives a 10d embedding for each of the 5 strings in each of the 3 examples
End of explanation
tf.reset_default_graph()
shape = tf.shape(strings)
embedding_dim = 10
layer = tf.feature_column.input_layer(
{'strings':tf.reshape(strings,[tf.reduce_prod(shape)])},
[tf.feature_column.embedding_column(sparse,embedding_dim, combiner='mean')])
layer = tf.reshape(layer,tf.concat([shape, [embedding_dim]],0))
init = tf.global_variables_initializer()
with tf.train.MonitoredSession() as sess:
sess.run(init)
input_value = sess.run(layer)
input_value.shape
Explanation: Or some careful reshaping
End of explanation
import tensorflow as tf
from tensorflow.contrib import keras as keras
import numpy as np
tf.reset_default_graph()
Explanation: Linear Bag of Words Model
End of explanation
NUM_WORDS=1000 # only use top 1000 words
MAX_LEN=250 # truncate after 250 words
INDEX_FROM=3 # word index offset
train,test = keras.datasets.imdb.load_data(maxlen=MAX_LEN, num_words=NUM_WORDS, index_from=INDEX_FROM)
train_x,train_y = train
test_x,test_y = test
Explanation: Load the IMDB dataset
End of explanation
word_to_id = keras.datasets.imdb.get_word_index()
word_to_id = {k:(v+INDEX_FROM) for k,v in word_to_id.items()}
word_to_id["<PAD>"] = 0
word_to_id["<START>"] = 1
word_to_id["<UNK>"] = 2
id_to_word = {value:key for key,value in word_to_id.items()}
print(' '.join(id_to_word[id] for id in train_x[0] ))
Explanation: Look at an example review
(Punctuation and capitalization are stripped)
End of explanation
def get_input_fn(x_in, y_in, shuffle=True, epochs=1):
def input_fn():
ys = tf.contrib.data.Dataset.from_tensor_slices(y_in)
# Convert x_in to a sparse tensor
nested_sparse = [
(np.array([[n]*len(x),range(len(x))]).T,x)
for n,x in enumerate(x_in)
]
indices = np.concatenate([idx for idx,value in nested_sparse], axis = 0)
values = np.concatenate([value for idx,value in nested_sparse], axis = 0)
max_len = max(len(ex) for ex in x_in)
xs = tf.SparseTensor(indices = indices, values = values, dense_shape=[25000, max_len])
xs = tf.contrib.data.Dataset.from_sparse_tensor_slices(xs)
xs = xs.map(lambda *x: tf.sparse_tensor_to_dense(tf.SparseTensor(*x)))
ds = tf.contrib.data.Dataset.zip([xs,ys]).repeat(epochs)
if shuffle:
ds = ds.shuffle(10000)
ds = ds.batch(32)
x,y = ds.make_one_shot_iterator().get_next()
return {'word_ids':x},y
return input_fn
Explanation: Create the Input Function
End of explanation
in_fn = get_input_fn(x_in = np.array([[1,1,1],[2,2],[3,3,3],[4,4,4,4],[5],[6,6],[7,7,7,7,7]]),
y_in = np.array([1,2,3,4,5,6,7]))
x,y = in_fn()
init = tf.global_variables_initializer()
with tf.train.MonitoredSession() as sess:
sess.run(init)
x,y = sess.run([x,y])
Explanation: Test the input function
End of explanation
x['word_ids']
y[:,None]
Explanation: Note the zero padding, and that x and y have the same shuffle applied.
End of explanation
word_ids = tf.feature_column.categorical_column_with_identity('word_ids', NUM_WORDS)
bow_estimator = tf.contrib.learn.LinearClassifier(feature_columns=[word_ids], model_dir='tensorboard/BOW')
for n in range(25):
bow_estimator.fit(input_fn=get_input_fn(train_x,train_y,epochs=10))
bow_estimator.evaluate(input_fn=get_input_fn(test_x, test_y,epochs=1))
Explanation: Build the Estimator
End of explanation
tf.reset_default_graph()
DNN_bow_estimator = tf.contrib.learn.DNNClassifier(
[256, 256], model_dir='tensorboard/DNN_BOW',
feature_columns=[tf.feature_column.embedding_column(word_ids, 30, combiner='mean')])
for n in range(25):
DNN_bow_estimator.fit(input_fn=get_input_fn(train_x,train_y,epochs=10))
DNN_bow_estimator.evaluate(input_fn=get_input_fn(test_x,test_y,epochs=1))
Explanation: DNN Bag of words?
End of explanation |
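# Side note (an addition, not in the original notebook): the MSF discussed above
# (the msf array from enm_obj.get_MSF()) can be put on the crystallographic
# B-factor scale via B = (8*pi^2/3) * MSF; mind the length units (nm^2 here vs.
# Angstrom^2 in PDB files) before comparing to experimental B-factors.
b_factors = (8.0 * np.pi ** 2 / 3.0) * msf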
12,580 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1> Analyzing convention speeches with Google Language API </h1>
This notebook accompanies my Medium article
Step1: <b> Note
Step2: <h2> Sentiment analysis with Language API </h2>
Let's evaluate the sentiment of nomination acceptance speeches for the last 5 election cycles. The text of the speeches will be downloaded from U. California, Santa Barbara. | Python Code:
APIKEY="AIzaSyBNa0Hw5_SZpmQP2-iXgUfchVHa4Ot956M"
Explanation: <h1> Analyzing convention speeches with Google Language API </h1>
This notebook accompanies my Medium article: <a href="https://medium.com/@lakshmanok/is-this-presidential-election-more-negative-than-years-past-yes-ca254e35eb9#.krrlhkryr"> Is this presidential election more negative then years past? Yes. </a>
<h2> API Key </h2>
To repeat my analysis, you need a Google Cloud Platform account (use the free trial). Then, visit <a href="http://code.google.com/apis/console">API console</a>, choose "Credentials" on the left-hand menu. Choose "API Key" and generate a server key for your application. You should probably restrict it by IP address to prevent abuse, but for now, just leave that field blank and delete the API key after trying out this demo.
Copy-paste your API Key here:
End of explanation
# Copyright 2016 Google Inc.
# Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
!pip install --upgrade google-api-python-client
Explanation: <b> Note: Make sure you got an API Key and pasted it above. Mine won't work for you </b>
From the same API console, choose "Dashboard" on the left-hand menu and "Enable API".
Finally, because we are calling the APIs from Python (clients in many other languages are available), let's install the Python package (it's not installed by default on Datalab)
End of explanation
from googleapiclient.discovery import build
import pandas as pd
import numpy as np
import urllib2
lservice = build('language', 'v1beta1', developerKey=APIKEY)
speeches = [
['Hillary Clinton', 'D', 2016, 'http://www.presidency.ucsb.edu/ws/index.php?pid=118051'],
['Donald Trump', 'R', 2016, 'http://www.presidency.ucsb.edu/ws/index.php?pid=117935'],
['Barack Obama', 'D', 2012, 'http://www.presidency.ucsb.edu/ws/index.php?pid=101968'],
['Mitt Romney', 'R', 2012, 'http://www.presidency.ucsb.edu/ws/index.php?pid=101966'],
['Barack Obama', 'D', 2008, 'http://www.presidency.ucsb.edu/ws/index.php?pid=78284'],
['John McCain', 'R', 2008, 'http://www.presidency.ucsb.edu/ws/index.php?pid=78576'],
['John Kerry', 'D', 2004, 'http://www.presidency.ucsb.edu/ws/index.php?pid=25971'],
['George W Bush', 'R', 2004, 'http://www.presidency.ucsb.edu/ws/index.php?pid=72727'],
['Al Gore', 'D', 2000, 'http://www.presidency.ucsb.edu/ws/index.php?pid=25963'],
['George W Bush', 'R', 2000, 'http://www.presidency.ucsb.edu/ws/index.php?pid=25954']
]
sentiment = []
for (speaker, party, year, url) in speeches:
text_of_speech = urllib2.urlopen(url).read()
response = lservice.documents().analyzeSentiment(
body={
'document': {
'type': 'HTML',
'content': unicode(text_of_speech, errors='ignore')
}
}).execute()
polarity = response['documentSentiment']['polarity']
magnitude = response['documentSentiment']['magnitude']
print('POLARITY=%s MAGNITUDE=%s SPEAKER=%s' % (polarity, magnitude, speaker))
sentiment.extend([speaker, party, year, float(polarity), float(magnitude)])
df = pd.DataFrame(data=np.array(sentiment).reshape(10,5),
columns=['speaker', 'party', 'year', 'polarity', 'magnitude'])
for col in ['year', 'polarity', 'magnitude']:
df[col] = pd.to_numeric(df[col])
print df
df = df.sort_values('year')
df.plot(x='speaker', y='polarity', kind='bar')
df.plot(x='speaker', y='magnitude', kind='bar')
df[df['party'] == 'D'].mean()
df[df['party'] == 'R'].mean()
Explanation: <h2> Sentiment analysis with Language API </h2>
Let's evaluate the sentiment of nomination acceptance speeches for the last 5 election cycles. The text of the speeches will be downloaded from U. California, Santa Barbara.
End of explanation |
12,581 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Now that we have created and saved a configuration file, let’s read it back and explore the data it holds.
Step1: Please note that default values have precedence over fallback values. For instance, in our example the 'CompressionLevel' key was specified only in the 'DEFAULT' section. If we try to get it from the section 'topsecret.server.com', we will always get the default, even if we specify a fallback
Step2: The same fallback argument can be used with the getint(), getfloat() and getboolean() methods, for example | Python Code:
config = configparser.ConfigParser()
config.sections()
config.read('example.ini')
config.sections()
'bitbucket.org' in config
'bytebong.com' in config
config['bitbucket.org']['User']
config['DEFAULT']['Compression']
topsecret = config['topsecret.server.com']
topsecret['ForwardX11']
topsecret['Port']
for key in config['bitbucket.org']:
print(key)
config['bitbucket.org']['ForwardX11']
type(topsecret['Port'])
type(int(topsecret['Port']))
type(topsecret.getint('Port'))
type(topsecret.getfloat('Port'))
int(topsecret['Port']) - 22.0
int(topsecret['Port']) - 22
try:
topsecret.getint('ForwardX11')
except ValueError:
print(True)
topsecret.getboolean('ForwardX11')
config['bitbucket.org'].getboolean('ForwardX11')
config.getboolean('bitbucket.org', 'Compression')
topsecret.get('Port')
topsecret.get('CompressionLevel')
topsecret.get('Cipher')
topsecret.get('Cipher', '3des-cbc')
Explanation: Now that we have created and saved a configuration file, let’s read it back and explore the data it holds.
End of explanation
topsecret.get('CompressionLevel', '3')
Explanation: Please note that default values have precedence over fallback values. For instance, in our example the 'CompressionLevel' key was specified only in the 'DEFAULT' section. If we try to get it from the section 'topsecret.server.com', we will always get the default, even if we specify a fallback:
End of explanation
'BatchMode' in topsecret
topsecret.getboolean('BatchMode', fallback=True)
config['DEFAULT']['BatchMode'] = 'no'
topsecret.getboolean('BatchMode', fallback=True)
import yaml
with open("config.yml", 'r') as ymlfile:
cfg = yaml.load(ymlfile)
for section in cfg:
print(section)
print(cfg['mysql'])
print(cfg['other'])
# Load the configuration file
with open("config.yml") as f:
sample_config = f.read()
config = configparser.RawConfigParser(allow_no_value=True)
config.readfp(io.BytesIO(sample_config))
# List all contents
print("List all contents")
for section in config.sections():
print("Section: %s" % section)
for options in config.options(section):
print("x %s:::%s:::%s" % (options,
config.get(section, options),
str(type(options))))
# Print some contents
print("\nPrint some contents")
print(config.get('other', 'use_anonymous')) # Just get the value
print(config.getboolean('other', 'use_anonymous')) # You know the datatype?
Explanation: The same fallback argument can be used with the getint(), getfloat() and getboolean() methods, for example:
End of explanation |
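# Editorial aside (not from the original): yaml.load without an explicit Loader
# is deprecated in modern PyYAML, and RawConfigParser will not actually parse
# YAML syntax; yaml.safe_load is the usual, safer way to read config.yml.
with open("config.yml", 'r') as ymlfile:
    cfg = yaml.safe_load(ymlfile)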
12,582 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Units and Quantities
Objectives
Use units
Create functions that accept quantities as arguments
Create new units
Basics
How do we define a Quantity and which parts does it have?
Step1: Quantities can be converted to other units systems or factors by using to()
Step2: We can do arithmetic operations when the quantities have the compatible units
Step3: Quantities can also be combined, for example to measure speed
Step4: <div style='background
Step5: Composed units
Many units are compositions of others, for example, one could create new combinationes for ease of use
Step6: and others are already a composition
Step7: Sometime we get no units quantitites
Step8: What happen if we add a number to this?
Step9: Equivalencies
Some conversions are not done by a conversion factor as between miles and kilometers, for example converting between wavelength and frequency.
Step10: Other built-in equivalencies are
Step11: Printing the quantities
Step12: Arrays
Quantities can also be applied to arrays
Step13: Plotting quantities
To work nicely with matplotlib we need to do as follows
Step14: Creating functions with quantities as units
We want to have functions that contain the information of the untis, and with them we can be sure that we will be always have the right result.
Step15: <div style='background
Step16: Create your own units
Some times we want to create our own units
Step17: <div style='background | Python Code:
from astropy import units as u
# Define a quantity length
length = 26.2 * u.meter
# print it
print(length) # length is a quantity
# Type of quantity
type(length)
# Type of unit
type(u.meter)
# Quantity
length
# value
length.value
# unit
length.unit
# information
length.info
Explanation: Units and Quantities
Objectives
Use units
Create functions that accept quantities as arguments
Create new units
Basics
How do we define a Quantity and which parts does it have?
End of explanation
# Convert it to: km, lyr
print(length.to(u.km))
print(length.to(u.lightyear))
Explanation: Quantities can be converted to other units systems or factors by using to()
End of explanation
# arithmetic with distances
distance_start = 10 * u.mm
distance_end = 23 * u.km
length = distance_end - distance_start
print(length)
Explanation: We can do arithmetic operations when the quantities have the compatible units:
End of explanation
# calculate a speed
time = 15 * u.minute
speed = length / time
print(speed)
# decompose it
print(speed.decompose())
print(speed.si)
Explanation: Quantities can also be combined, for example to measure speed
End of explanation
#1
from astropy.units import imperial
print(speed.to(imperial.mile/u.hour))
#2
imperial.pint > 0.5 * u.l
# A liquid pint in US is 473 ml; in UK is 568 ml
#3
rectangle_area = 3 * u.km * 5 * u.m
print(rectangle_area)
print(rectangle_area.decompose())
print(rectangle_area.to(imperial.yard ** 2))
Explanation: <div style='background:#B1E0A8; padding:10px 10px 10px 10px;'>
<H2> Challenges </H2>
<ol>
<li> Convert the speed in imperial units (miles/hour) using: <br>
```from astropy.units import imperial```
</li>
<li> Calculate whether a pint is more than half litre<br>
<emph>You can compare quantities as comparing variables.</emph> <br>
Something strange? Check what deffinition of <a href='https://en.wikipedia.org/wiki/Pint'>pint</a> astropy is using.
</li>
<li> Does units work with areas? calculate the area of a rectangle of 3 km of side and 5 meter of width. Show them in m^2 and convert them to yards^2</li>
</div>
End of explanation
# create a composite unit
cms = u.cm / u.s
speed.to(cms)
# and in the imperial system
mph = imperial.mile / u.hour
speed.to(mph)
Explanation: Composed units
Many units are compositions of others, for example, one could create new combinationes for ease of use:
End of explanation
# what can be converted from s-1?
(u.s ** -1).compose()
# or Jules?
(u.joule).compose()
# Unity of R
(13.605692 * u.eV).to(u.Ry)
Explanation: and others are already a composition:
End of explanation
# no units
nounits = 20. * u.cm / (1. * u.m)
nounits
Explanation: Sometimes we get quantities with no units
End of explanation
# arithmetic with no units
nounits + 3
# final value of a no unit quantity
nounits.decompose() # It's a unitless quantity
Explanation: What happens if we add a number to this?
End of explanation
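# Added illustrative example (not in the original notebook): even before decomposing,
# astropy knows that this ratio of lengths is dimensionless.
nounits.unit.physical_type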
# converting spectral quantities
(656.281 * u.nm).to(u.Hz) # Fails because they are not compatible
# but doing it right
(656.281 * u.nm).to(u.Hz, equivalencies=u.spectral())
Explanation: Equivalencies
Some conversions cannot be done with a simple conversion factor (unlike miles to kilometers), for example converting between wavelength and frequency.
End of explanation
# finding the equivalencies
u.Hz.find_equivalent_units()
# but also using other systems
u.Hz.find_equivalent_units(equivalencies=u.spectral())
Explanation: Other built-in equivalencies are:
- parallax()
- Doppler (doppler_radio, doppler_optical, doppler_relativistic)
- spectral flux density
- brightness temperature
- temperature energy
- and you can build your own
End of explanation
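# Added illustrative example (not in the original notebook): the parallax()
# equivalency converts an angle straight into a distance.
(1 * u.arcsec).to(u.parsec, equivalencies=u.parallax())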
# Printing values with different formats
print("{0.value:0.03f} {0.unit:FITS}".format(speed))
print("{0.value:0.03f} {0.unit:latex_inline}".format(speed))
Explanation: Printing the quantities
End of explanation
# different ways of defining a quantity for a single value
length = 44 * u.m
time = u.Quantity(23, u.s)
speed = length / time
speed
# now with lists
length_list = [1, 2, 3] * u.m
# and arrays
import numpy as np
time_array = np.array([1, 2, 3]) * u.s
# and its arithmetics
length_list / time_array
# angles are smart!
angle = u.Quantity(np.arange(180), u.deg)
print(angle[[0, -1]])
print(np.sin(angle[[0, -1]]))
Explanation: Arrays
Quantities can also be applied to arrays
End of explanation
# allowing for plotting
from astropy.visualization import quantity_support
quantity_support()
# loading matplotlib
%matplotlib inline
from matplotlib import pyplot as plt
# Ploting the previous array
plt.plot(angle, np.sin(angle))
Explanation: Plotting quantities
To work nicely with matplotlib we need to do as follows:
End of explanation
# Create a function for the Kinetic energy
@u.quantity_input(mass=u.kg, speed=u.m/u.s)
def kinetic(mass, speed):
return (mass * speed ** 2 / 2.).to(u.joule)
# run with and without units
kinetic(5, 10) # Fails! it doesn't specify the units.
kinetic(5 * u.kg, 100 * cms)
Explanation: Creating functions with quantities as units
We want to have functions that carry the unit information, so that we can be sure we will always get the right result.
End of explanation
#4
@u.quantity_input(mass=u.kg, height=u.m, g=u.m / u.s ** 2)
def potential(mass, height, g=9.8 * u.m / u.s **2):
return (mass * g * height).to(u.joule)
# run it for some values
potential(5 * u.kg, 30 *u.cm)
# on Mars:
potential(5 * u.kg, 1 * u.m, g=3.75 * u.m/u.s**2)
Explanation: <div style='background:#B1E0A8; padding:10px 10px 10px 10px;'>
<H2> Challenges </H2>
<ol start=4>
<li> Create a function that calculates potential energy where *g* defaults to Earth value,
but could be used for different planets.
Test it for any of the *g* values for any other
<a href="http://www.physicsclassroom.com/class/circles/Lesson-3/The-Value-of-g">planet</a>.
</li>
</ol>
</div>
End of explanation
# Create units for a laugh scale
titter = u.def_unit('titter')
chuckle = u.def_unit('chuckle', 5 * titter)
laugh = u.def_unit('laugh', 4 * chuckle)
guffaw = u.def_unit('guffaw', 3 * laugh)
rofl = u.def_unit('rofl', 4 * guffaw)
death_by_laughing = u.def_unit('death_by_laughing', 10 * rofl)
print((1 * rofl).to(titter))
Explanation: Create your own units
Sometimes we want to create our own units:
End of explanation
#5
ares = u.def_unit('ares', (10 * u.m)**2)
hectar = u.def_unit('hectares', 100 * ares)
print(rectangle_area.to(hectar))
Explanation: <div style='background:#B1E0A8; padding:10px 10px 10px 10px;'>
<H2> Challenges </H2>
<ol start=5>
<li> Convert the area `rectangle_area` calculated before to <a href="https://en.wikipedia.org/wiki/Hectare">hectares</a>
(1 hectare = 100 ares; 1 are = 100 m2).
</li>
</ol>
</div>
End of explanation |
12,583 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Quick Start
This quick start will show how to do the following
Step1: Now let's import a GAM that's made for regression problems.
Let's fit a spline term to the first 2 features, and a factor term to the 3rd feature.
Step2: Let's take a look at the model fit
Step3: Even though we have 3 terms with a total of (20 + 20 + 5) = 45 free variables, the default smoothing penalty (lam=0.6) reduces the effective degrees of freedom to just ~25.
By default, the spline terms, s(...), use 20 basis functions. This is a good starting point. The rule of thumb is to use a fairly large amount of flexibility, and then let the smoothing penalty regularize the model.
However, we can always use our expert knowledge to add flexibility where it is needed, or remove basis functions, and make fitting easier
Step4: Automatically tune the model
By default, spline terms, s() have a penalty on their 2nd derivative, which encourages the functions to be smoother, while factor terms, f() and linear terms l(), have a l2, ie ridge penalty, which encourages them to take on smaller values.
lam, short for $\lambda$, controls the strength of the regularization penalty on each term. Terms can have multiple penalties, and therefore multiple lam.
Step5: Our model has 3 lam parameters, currently just one per term.
Let's perform a grid-search over multiple lam values to see if we can improve our model.
We will seek the model with the lowest generalized cross-validation (GCV) score.
Our search space is 3-dimensional, so we have to be conservative with the number of points we consider per dimension.
Let's try 5 values for each smoothing parameter, resulting in a total of 5*5*5 = 125 points in our grid.
Step6: This is quite a bit better. Even though the in-sample $R^2$ value is lower, we can expect our model to generalize better because the GCV error is lower.
We could be more rigorous by using a train/test split, and checking our model's error on the test set. We were also quite lazy and only tried 125 values in our hyperopt. We might find a better model if we spent more time searching across more points.
For high-dimensional search-spaces, it is sometimes a good idea to try a randomized search.
We can achieve this by using numpy's random module
Step7: In this case, our deterministic search found a better model
Step8: The statistics_ attribute is populated after the model has been fitted.
There are lots of interesting model statistics to check out, although many are automatically reported in the model summary
Step9: Partial Dependence Functions
One of the most attractive properties of GAMs is that we can decompose and inspect the contribution of each feature to the overall prediction.
This is done via partial dependence functions.
Let's plot the partial dependence for each term in our model, along with a 95% confidence interval for the estimated function. | Python Code:
from pygam.datasets import wage
X, y = wage()
Explanation: Quick Start
This quick start will show how to do the following:
Install everything needed to use pyGAM.
fit a regression model with custom terms
search for the best smoothing parameters
plot partial dependence functions
Install pyGAM
Pip
pip install pygam
Conda
pyGAM is on conda-forge, however this is typically less up-to-date:
conda install -c conda-forge pygam
Bleeding edge
You can install the bleeding edge from github using flit.
First clone the repo, cd into the main directory and do:
pip install flit
flit install
Get pandas and matplotlib
pip install pandas matplotlib
Fit a Model
Let's get to it. First we need some data:
End of explanation
from pygam import LinearGAM, s, f
gam = LinearGAM(s(0) + s(1) + f(2)).fit(X, y)
Explanation: Now let's import a GAM that's made for regression problems.
Let's fit a spline term to the first 2 features, and a factor term to the 3rd feature.
End of explanation
gam.summary()
Explanation: Let's take a look at the model fit:
End of explanation
gam = LinearGAM(s(0, n_splines=5) + s(1) + f(2)).fit(X, y)
Explanation: Even though we have 3 terms with a total of (20 + 20 + 5) = 45 free variables, the default smoothing penalty (lam=0.6) reduces the effective degrees of freedom to just ~25.
By default, the spline terms, s(...), use 20 basis functions. This is a good starting point. The rule of thumb is to use a fairly large amount of flexibility, and then let the smoothing penalty regularize the model.
However, we can always use our expert knowledge to add flexibility where it is needed, or remove basis functions, and make fitting easier:
End of explanation
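# Added illustrative sketch (not from the original notebook): expert knowledge can
# also be expressed as shape constraints on a term, e.g. forcing a monotonically
# increasing effect (assumes pyGAM's `constraints` keyword):
# gam_mono = LinearGAM(s(0, constraints='monotonic_inc') + s(1) + f(2)).fit(X, y)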
print(gam.lam)
Explanation: Automatically tune the model
By default, spline terms, s() have a penalty on their 2nd derivative, which encourages the functions to be smoother, while factor terms, f() and linear terms l(), have a l2, ie ridge penalty, which encourages them to take on smaller values.
lam, short for $\lambda$, controls the strength of the regularization penalty on each term. Terms can have multiple penalties, and therefore multiple lam.
End of explanation
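# Added illustrative sketch (not from the original notebook): lam can also be set
# per term when the model is built, e.g. a stronger penalty on the first spline:
# gam_custom = LinearGAM(s(0, lam=10) + s(1, lam=0.6) + f(2, lam=0.6)).fit(X, y)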
import numpy as np
lam = np.logspace(-3, 5, 5)
lams = [lam] * 3
gam.gridsearch(X, y, lam=lams)
gam.summary()
Explanation: Our model has 3 lam parameters, currently just one per term.
Let's perform a grid-search over multiple lam values to see if we can improve our model.
We will seek the model with the lowest generalized cross-validation (GCV) score.
Our search space is 3-dimensional, so we have to be conservative with the number of points we consider per dimension.
Let's try 5 values for each smoothing parameter, resulting in a total of 5*5*5 = 125 points in our grid.
End of explanation
lams = np.random.rand(100, 3) # random points on [0, 1], with shape (100, 3)
lams = lams * 6 - 3 # shift values to -3, 3
lams = 10 ** lams # transforms values to 1e-3, 1e3
random_gam = LinearGAM(s(0) + s(1) + f(2)).gridsearch(X, y, lam=lams)
random_gam.summary()
Explanation: This is quite a bit better. Even though the in-sample $R^2$ value is lower, we can expect our model to generalize better because the GCV error is lower.
We could be more rigorous by using a train/test split, and checking our model's error on the test set. We were also quite lazy and only tried 125 values in our hyperopt. We might find a better model if we spent more time searching across more points.
For high-dimensional search-spaces, it is sometimes a good idea to try a randomized search.
We can achieve this by using numpy's random module:
End of explanation
gam.statistics_['GCV'] < random_gam.statistics_['GCV']
Explanation: In this case, our deterministic search found a better model:
End of explanation
list(gam.statistics_.keys())
Explanation: The statistics_ attribute is populated after the model has been fitted.
There are lots of interesting model statistics to check out, although many are automatically reported in the model summary:
End of explanation
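# Added illustrative example (not in the original notebook): individual statistics
# can be read directly from the dictionary using the key names listed above.
print(gam.statistics_['edof'], gam.statistics_['GCV'])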
import matplotlib.pyplot as plt
for i, term in enumerate(gam.terms):
if term.isintercept:
continue
XX = gam.generate_X_grid(term=i)
pdep, confi = gam.partial_dependence(term=i, X=XX, width=0.95)
plt.figure()
plt.plot(XX[:, term.feature], pdep)
plt.plot(XX[:, term.feature], confi, c='r', ls='--')
plt.title(repr(term))
plt.show()
Explanation: Partial Dependence Functions
One of the most attractive properties of GAMs is that we can decompose and inspect the contribution of each feature to the overall prediction.
This is done via partial dependence functions.
Let's plot the partial dependence for each term in our model, along with a 95% confidence interval for the estimated function.
End of explanation |
12,584 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2020 The TensorFlow Hub Authors.
Licensed under the Apache License, Version 2.0 (the "License");
Step1: A simple classification model using Keras with Cloud TPUs
Overview
This notebook shows how to use Keras to build a simple classification model. The model can train, evaluate, and generate predictions using Cloud TPUs. It uses the iris dataset to predict the species of the flower and also shows how to use your own data instead of using pre-loaded data. This model uses 4 input features (SepalLength, SepalWidth, PetalLength, PetalWidth) to determine one of these flower species (Setosa, Versicolor, Virginica).
The model trains for 50 epochs and completes in approximately 2 minutes.
This notebook is hosted on GitHub. To view it in its original repository, after opening the notebook, select File > View on GitHub.
NOTE
Step2: Resolve TPU Address
Step3: FLAGS used as model params
Step5: Download training input data and define prediction input & output
Step6: Define a Keras model (2 hidden layers with 10 neurons in each)
Step7: Compiling the model with a distribution strategy
To make the model usable by a TPU, we first must create and compile it using a distribution strategy.
Step8: Train the model on TPU
Step9: Evaluation of the model
Step10: Save the model
Step11: Prediction
Prediction data
Step12: Prediction on TPU
Step13: Prediction on CPU | Python Code:
# Copyright 2018 The TensorFlow Hub Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
Explanation: Copyright 2020 The TensorFlow Hub Authors.
Licensed under the Apache License, Version 2.0 (the "License");
End of explanation
import json
import os
import pandas as pd
import pprint
import tensorflow as tf
import time
import numpy as np
from tensorflow import keras
print(tf.__version__)
import distutils
if distutils.version.LooseVersion(tf.__version__) < '1.14':
raise Exception('This notebook is compatible with TensorFlow 1.14 or higher, for TensorFlow 1.13 or lower please use the previous version at https://github.com/tensorflow/tpu/blob/r1.13/tools/colab/classification_iris_data_with_keras.ipynb')
Explanation: A simple classification model using Keras with Cloud TPUs
Overview
This notebook shows how to use Keras to build a simple classification model. The model can train, evaluate, and generate predictions using Cloud TPUs. It uses the iris dataset to predict the species of the flower and also shows how to use your own data instead of using pre-loaded data. This model uses 4 input features (SepalLength, SepalWidth, PetalLength, PetalWidth) to determine one of these flower species (Setosa, Versicolor, Virginica).
The model trains for 50 epochs and completes in approximately 2 minutes.
This notebook is hosted on GitHub. To view it in its original repository, after opening the notebook, select File > View on GitHub.
NOTE: This tutorial is designed to show how to write a simple model using Keras. It should not be used for comparison with training on CPUs because of the very small amount of data being used.
Learning objectives
In this Colab, you will learn how to:
* Define a Keras model with 2 hidden layers and 10 nodes in each layer.
* Create and compile a Keras model on TPU with a distribution strategy.
* Train, evaluate, and generate predictions on Cloud TPU.
Instructions
<h3> Train on TPU <a href="https://cloud.google.com/tpu/"><img valign="middle" src="https://raw.githubusercontent.com/GoogleCloudPlatform/tensorflow-without-a-phd/master/tensorflow-rl-pong/images/tpu-hexagon.png" width="50"></a></h3>
On the main menu, click Runtime and select Change runtime type. Set "TPU" as the hardware accelerator.
Click Runtime again and select Runtime > Run All. You can also run the cells manually with Shift-ENTER.
Data, model, and training
Imports
End of explanation
use_tpu = True #@param {type:"boolean"}
if use_tpu:
assert 'COLAB_TPU_ADDR' in os.environ, 'Missing TPU; did you request a TPU in Notebook Settings?'
if 'COLAB_TPU_ADDR' in os.environ:
TF_MASTER = 'grpc://{}'.format(os.environ['COLAB_TPU_ADDR'])
else:
TF_MASTER=''
Explanation: Resolve TPU Address
End of explanation
# Model specific parameters
# TPU address
tpu_address = TF_MASTER
# Number of epochs
epochs = 50
# Number of steps_per_epoch
steps_per_epoch = 5
# NOTE: Total number of training steps = Number of epochs * Number of steps_per_epoch
Explanation: FLAGS used as model params
End of explanation
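# Added for illustration (not in the original notebook): the NOTE above, computed
# explicitly with the values defined here.
total_training_steps = epochs * steps_per_epoch
print(total_training_steps)  # 50 * 5 = 250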
TRAIN_URL = "http://download.tensorflow.org/data/iris_training.csv"
TEST_URL = "http://download.tensorflow.org/data/iris_test.csv"
CSV_COLUMN_NAMES = ['SepalLength', 'SepalWidth',
'PetalLength', 'PetalWidth', 'Species']
SPECIES = ['Setosa', 'Versicolor', 'Virginica']
PREDICTION_INPUT_DATA = {
'SepalLength': [6.9, 5.1, 5.9, 6.0, 5.5, 6.2, 5.5, 6.3],
'SepalWidth': [3.1, 3.3, 3.0, 3.4, 2.5, 2.9, 4.2, 2.8],
'PetalLength': [5.4, 1.7, 4.2, 4.5, 4.0, 4.3, 1.4, 5.1],
'PetalWidth': [2.1, 0.5, 1.5, 1.6, 1.3, 1.3, 0.2, 1.5],
}
PREDICTION_OUTPUT_DATA = ['Virginica', 'Setosa', 'Versicolor', 'Versicolor', 'Versicolor', 'Versicolor', 'Setosa', 'Virginica']
def maybe_download():
train_path = tf.keras.utils.get_file(TRAIN_URL.split('/')[-1], TRAIN_URL)
test_path = tf.keras.utils.get_file(TEST_URL.split('/')[-1], TEST_URL)
return train_path, test_path
def load_data(y_name='Species'):
"""Returns the iris dataset as (train_x, train_y), (test_x, test_y)."""
train_path, test_path = maybe_download()
train = pd.read_csv(train_path, names=CSV_COLUMN_NAMES, header=0, dtype={'SepalLength': np.float32,
'SepalWidth': np.float32, 'PetalLength': np.float32, 'PetalWidth': np.float32, 'Species': np.int32})
train_x, train_y = train, train.pop(y_name)
test = pd.read_csv(test_path, names=CSV_COLUMN_NAMES, header=0, dtype={'SepalLength': np.float32,
'SepalWidth': np.float32, 'PetalLength': np.float32, 'PetalWidth': np.float32, 'Species': np.int32})
test_x, test_y = test, test.pop(y_name)
return (train_x, train_y), (test_x, test_y)
Explanation: Download training input data and define prediction input & output
End of explanation
def get_model():
return keras.Sequential([
keras.layers.Dense(10, input_shape=(4,), activation=tf.nn.relu, name = "Dense_1"),
keras.layers.Dense(10, activation=tf.nn.relu, name = "Dense_2"),
keras.layers.Dense(3, activation=None, name = "logits"),
keras.layers.Dense(3, activation=tf.nn.softmax, name = "softmax")
])
Explanation: Define a Keras model (2 hidden layers with 10 neurons in each)
End of explanation
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(TF_MASTER)
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.experimental.TPUStrategy(resolver)
with strategy.scope():
model = get_model()
model.compile(optimizer=tf.keras.optimizers.Adagrad(learning_rate=0.1),
loss=tf.keras.losses.sparse_categorical_crossentropy,
metrics=['accuracy'])
model.summary()
Explanation: Compiling the model with a distribution strategy
To make the model usable by a TPU, we first must create and compile it using a distribution strategy.
End of explanation
# Fetch the data
(train_x, train_y), (test_x, test_y) = load_data()
# Train the model
model.fit(
train_x.values, train_y.values,
steps_per_epoch = steps_per_epoch,
epochs=epochs,
)
Explanation: Train the model on TPU
End of explanation
model.evaluate(test_x.values, test_y.values,
batch_size=8)
Explanation: Evaluation of the model
End of explanation
model.save_weights('./DNN_TPU_1024.h5', overwrite=True)
Explanation: Save the model
End of explanation
COLUMNS_NAME=['SepalLength', 'SepalWidth', 'PetalLength', 'PetalWidth']
data = pd.DataFrame(PREDICTION_INPUT_DATA, columns=COLUMNS_NAME)
print(data)
Explanation: Prediction
Prediction data
End of explanation
predictions = model.predict(data.values.astype(np.float32))
template = ('\nPrediction is "{}" ({:.1f}%), expected "{}"')
for pred_dict, expec in zip(predictions, PREDICTION_OUTPUT_DATA):
class_index = np.argmax(pred_dict)
class_probability = np.max(pred_dict)
print(template.format(SPECIES[class_index], 100*class_probability, expec))
Explanation: Prediction on TPU
End of explanation
cpu_model = get_model()
cpu_model.load_weights('./DNN_TPU_1024.h5')
cpu_predictions = cpu_model.predict(data)
template = ('\nPrediction is "{}" ({:.1f}%), expected "{}"')
for pred_dict, expec in zip(cpu_predictions, PREDICTION_OUTPUT_DATA):
class_index = np.argmax(pred_dict)
class_probability = np.max(pred_dict)
print(template.format(SPECIES[class_index], 100*class_probability, expec))
Explanation: Prediction on CPU
End of explanation |
12,585 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Regression Models
By Saurabh Mahindre - <a href="https
Step1: Training and generating weights
LeastSquaresRegression has to be initialised with the training features and training labels. Once that is done to learn from data we train() it. This also generates the $\text w$ from the general equation described above. To access $\text w$ use get_real_vector('w').
Step2: This value of $\text w$ is pretty close to 3, which certifies a pretty good fit for the training data. Now let's apply this trained machine to our test data to get the ouput values.
Step3: As an aid to visualisation, a plot of the output and also of the residuals is shown. The sum of the squares of these residuals is minimised.
Step4: Ridge Regression
The function we choose should not only best fit the training data but also generalise well. If the coefficients/weights are unconstrained, they are susceptible to high variance and overfitting. To control variance, one has to regularize the coefficients i.e. control how large the coefficients grow. This is what is done in Ridge regression which is L2 (sum of squared components of $\bf w$) regularized form of least squares. A penalty is imposed on the size of coefficients. The error to be minimized is
Step5: Relationship between weights and regularization
The prediction in the basic regression example was simliar to that of least squares one. To actually see ridge regression's forte, we analyse how the weights change along with the regularization constant. Data with slightly higher dimensions is sampled to do this because overfitting is more likely to occur in such data. Here put('tau', tau) method is used to set the necessary parameter.
Step6: The mean squared error (MSE) of an estimator measures the average of the squares of the errors. CMeanSquaredError class is used to compute the MSE as
Step7: Data with dimension
Step8: As seen from the plot of errors, regularisation doesn't seem to affect the errors significantly. One interpretation could be that this is beacuse there is less overfitting as we have large number of samples. For a small sample size as compared to the dimensionality, the test set performance may be poor even. The reason for this is that the regression function will fit the noise too much, while the interesting part of the signal is too small. We now generate 10 samples of 10-dimensions to test this.
Step9: The first plot is the famous ridge trace that is the signature of this technique. The plot is really very straight forward to read. It presents the standardized regression coefficients (weights) on the vertical axis and various values of tau (Regularisation constant) along the horizontal axis. Since the values of tau ($\tau$) span several orders of magnitude, we adopt a logarithmic scale along this axis. As tau is increased, the values of the regression estimates change, often wildly at first. At some point, the coefficients seem to settle down and then gradually drift towards zero. Often the value of tau for which these coefficients are at their stable values is the best one. This should be supported by a low error value for that tau.
Least Angle Regression and LASSO
LASSO (Least Absolute Shrinkage and Selection Operator) is another version of Least Squares regression, which uses a L1-norm of the parameter vector. This intuitively enforces sparse solutions, whereas L2-norm penalties usually result in smooth and dense solutions.
$$ \min \|X^T\beta - y\|^2 + \lambda\|\beta\|_1$$
In Shogun, following equivalent form is solved, where increasing $C$ selects more variables
Step10: CLeastAngleRegression requires the features to be normalized with a zero mean and unit norm. Hence we use two preprocessors
Step11: Next we train on the data. Keeping in mind that we had 10 attributes/dimensions in our data, let us have a look at the size of LASSO path which is obtained readily using get_path_size().
Step12: The weights generated ($\beta_i$) and their norm ($\sum_i|\beta_i|$) change with each step. This is when a new variable is added to path. To get the weights at each of these steps get_w_for_var() method is used. The argument is the index of the variable which should be in the range [0, path_size).
Step13: Each color in the plot represents a coefficient and the vertical lines denote steps. It is clear that the weights are piecewise linear function of the norm.
Kernel Ridge Regression
Kernel ridge regression (KRR) is a kernel-based regularized form of regression. The dual form of Ridge regression can be shown to be
Step14: As seen from the example KRR (using the kernel trick) can apply techniques for linear regression in the feature space to perform nonlinear regression in the input space.
Support Vector Regression
In Kernel Ridge Regression $(1)$ we have seen the result to be a dense solution. Thus all training examples are active which limits its usage to fewer number of training examples. Support Vector Regression (SVR) uses the concept of support vectors as in Support Vector Machines that leads to a sparse solution. In the SVM the penalty was paid for being on the wrong side of the discriminating plane. Here we do the same thing
Step15: Let us do comparison of time taken for the 2 different models simliar to that done in section 6 of [1]. The Boston Housing Dataset is used. | Python Code:
%pylab inline
%matplotlib inline
import os
SHOGUN_DATA_DIR=os.getenv('SHOGUN_DATA_DIR', '../../../data')
from cycler import cycler
# import all shogun classes
from shogun import *
import shogun as sg
slope = 3
X_train = rand(30)*10
y_train = slope*(X_train)+random.randn(30)*2+2
y_true = slope*(X_train)+2
X_test = concatenate((linspace(0,10, 50),X_train))
#Convert data to shogun format features
feats_train = features(X_train.reshape(1,len(X_train)))
feats_test = features(X_test.reshape(1,len(X_test)))
labels_train = RegressionLabels(y_train)
Explanation: Regression Models
By Saurabh Mahindre - <a href="https://github.com/Saurabh7">github.com/Saurabh7</a> as a part of <a href="http://www.google-melange.com/gsoc/project/details/google/gsoc2014/saurabh7/5750085036015616">Google Summer of Code 2014 project</a> mentored by - Heiko Strathmann - <a href="https://github.com/karlnapf">github.com/karlnapf</a> - <a href="http://herrstrathmann.de/">herrstrathmann.de</a>
This notebook demonstrates various regression methods provided in Shogun. Linear models like Least Squares regression, Ridge regression, Least Angle regression, etc. and also kernel based methods like Kernel Ridge regression are discussed and applied to toy and real life data.
Introduction
Least Squares regression
Prediction using Least Squares
Training and generating weights
Ridge Regression
Weights and regularization
Least Angle Regression and LASSO
Kernel Ridge Regression
Support Vector Regression
Introduction
Regression is a case of supervised learning where the goal is to learn a mapping from inputs $x\in\mathcal{X}$ to outputs $y\in\mathcal{Y}$, given a labeled set of input-output pairs $\mathcal{D} = \{(x_i,y_i)\}^{N}_{i=1} \subseteq \mathcal{X} \times \mathcal{Y}$. The response variable $y_i$ is continuous in regression analysis. Regression finds applications in many fields, such as predicting stock prices or consumption spending. In linear regression, the mapping is a linear (straight-line) equation.
Least Squares regression
A Linear regression model can be defined as $\text y =$ $\bf {w} \cdot \bf{x} $ $+ b$. Here $\text y$ is the predicted value, $\text x$ the independent variable and $\text w$ the so called weights.</br> We aim to find the linear function (line) that best explains the data, i.e. that minimises some measure of deviation to the training data $\mathcal{D}$. One such measure is the sum of squared distances. The Ordinary Least Sqaures method minimizes the sum of squared distances between the observed responses in the dataset and the responses predicted by the linear approximation.
The distances, called residuals, have to be minimized. This can be represented as:$$E({\bf{w}}) = \sum_{i=1}^N(y_i-{\bf w}\cdot {\bf x}_i)^2$$
One can differentiate with respect to $\bf w$ and equate to zero to determine the $\bf w$ that minimises $E({\bf w})$. This leads to solution of the form:
$${\bf w} = \left(\sum_{i=1}^N{\bf x}_i{\bf x}_i^T\right)^{-1}\left(\sum_{i=1}^N y_i{\bf x}_i\right)$$
Prediction using Least Squares
Regression using Least Squares is demonstrated below on toy data. Shogun provides the tools to do it using the CLeastSquaresRegression class. The data is a straight line with a lot of noise and a slope of 3. Comparing with the mathematical equation above we thus expect $\text w$ to be around 3 for a good prediction. Once the data is converted to Shogun format, we are ready to train the machine. To label the training data CRegressionLabels are used.
End of explanation
ls = LeastSquaresRegression(feats_train, labels_train)
ls.train()
w = ls.get_real_vector('w')
print('Weights:')
print(w)
Explanation: Training and generating weights
LeastSquaresRegression has to be initialised with the training features and training labels. Once that is done to learn from data we train() it. This also generates the $\text w$ from the general equation described above. To access $\text w$ use get_real_vector('w').
End of explanation
out = ls.apply(feats_test).get_labels()
Explanation: This value of $\text w$ is pretty close to 3, which certifies a pretty good fit for the training data. Now let's apply this trained machine to our test data to get the ouput values.
End of explanation
figure(figsize=(20,5))
#Regression and true plot
pl1 = subplot(131)
title('Regression')
_ = plot(X_train,labels_train, 'ro')
_ = plot(X_test,out, color='blue')
_ = plot(X_train, y_true, color='green')
p1 = Rectangle((0, 0), 1, 1, fc="r")
p2 = Rectangle((0, 0), 1, 1, fc="b")
p3 = Rectangle((0, 0), 1, 1, fc="g")
pl1.legend((p1, p2, p3), ["Training samples", "Predicted output", "True relationship"], loc=2)
xlabel('Samples (X)', fontsize=12)
ylabel('Response variable (Y)', fontsize=12)
#plot residues
pl2 = subplot(132)
title("Squared error and output")
_ = plot(X_test,out, linewidth=2)
gray()
_ = scatter(X_train,labels_train.get_labels(),c=ones(30) ,cmap=gray(), s=40)
for i in range(50,80):
plot([X_test[i],X_test[i]],[out[i],y_train[i-50]] , linewidth=2, color='red')
p1 = Rectangle((0, 0), 1, 1, fc="r")
p2 = Rectangle((0, 0), 1, 1, fc="b")
pl2.legend((p1, p2), ["Error/residuals to be squared", "Predicted output"], loc=2)
xlabel('Samples (X)', fontsize=12)
ylabel('Response variable (Y)', fontsize=12)
jet()
Explanation: As an aid to visualisation, a plot of the output and also of the residuals is shown. The sum of the squares of these residuals is minimised.
End of explanation
tau = 0.8
rr = LinearRidgeRegression(tau, feats_train, labels_train)
rr.train()
w = rr.get_real_vector('w')
print(w)
out = rr.apply(feats_test).get_labels()
figure(figsize=(20,5))
#Regression and true plot
pl1 = subplot(131)
title('Ridge Regression')
_ = plot(X_train,labels_train, 'ro')
_ = plot(X_test, out, color='blue')
_ = plot(X_train, y_true, color='green')
p1 = Rectangle((0, 0), 1, 1, fc="r")
p2 = Rectangle((0, 0), 1, 1, fc="b")
p3 = Rectangle((0, 0), 1, 1, fc="g")
pl1.legend((p1, p2, p3), ["Training samples", "Predicted output", "True relationship"], loc=2)
xlabel('Samples (X)', fontsize=12)
ylabel('Response variable (Y)', fontsize=12)
jet()
Explanation: Ridge Regression
The function we choose should not only best fit the training data but also generalise well. If the coefficients/weights are unconstrained, they are susceptible to high variance and overfitting. To control variance, one has to regularize the coefficients i.e. control how large the coefficients grow. This is what is done in Ridge regression which is L2 (sum of squared components of $\bf w$) regularized form of least squares. A penalty is imposed on the size of coefficients. The error to be minimized is:
$$E({\bf{w}}) = \sum_{i=1}^N(y_i-{\bf w}\cdot {\bf x}_i)^2 + \tau||{\bf w}||^2$$
Here $\tau$ imposes a penalty on the weights.</br>
By differentiating the regularised training error and equating to zero, we find the optimal $\bf w$, given by:
$${\bf w} = \left(\tau {\bf I}+ \sum_{i=1}^N{\bf x}_i{\bf x}_i^T\right)^{-1}\left(\sum_{i=1}^N y_i{\bf x}_i\right)$$
Ridge regression can be performed in Shogun using CLinearRidgeRegression class. It takes the regularization constant $\tau$ as an additional argument. Let us see the basic regression example solved using the same.
End of explanation
#Generate Data
def generate_data(N, D):
w = randn(D,1)
X = zeros((N,D))
y = zeros((N,1))
for i in range(N):
x = randn(1,D)
for j in range(D):
X[i][j] = x[0][j]
y = dot(X,w) + randn(N,1);
y.reshape(N,)
return X, y.T
def generate_weights(taus, feats_train, labels_train):
preproc = PruneVarSubMean(True)
preproc.init(feats_train)
feats_train.add_preprocessor(preproc)
feats_train.apply_preprocessor()
weights = []
rr = LinearRidgeRegression(tau, feats_train, labels_train)
#vary regularization
for t in taus:
rr.put('tau', t)
rr.train()
weights.append(rr.get_w())
return weights, rr
def plot_regularization(taus, weights):
ax = gca()
ax.set_prop_cycle(cycler('color', ['b', 'r', 'g', 'c', 'k', 'y', 'm']))
ax.plot(taus, weights, linewidth=2)
xlabel('Tau', fontsize=12)
ylabel('Weights', fontsize=12)
ax.set_xscale('log')
Explanation: Relationship between weights and regularization
The prediction in the basic regression example was similar to that of the least squares one. To actually see ridge regression's forte, we analyse how the weights change along with the regularization constant. Data with slightly higher dimensions is sampled to do this because overfitting is more likely to occur in such data. Here the put('tau', tau) method is used to set the necessary parameter.
End of explanation
def xval_results(taus):
errors = []
for t in taus:
rr.put('tau', t)
splitting_strategy = CrossValidationSplitting(labels_train, 5)
# evaluation method
evaluation_criterium = MeanSquaredError()
# cross-validation instance
cross_validation = CrossValidation(rr, feats_train, labels_train, splitting_strategy, evaluation_criterium, False)
cross_validation.put('num_runs', 100)
result = cross_validation.evaluate()
result = CrossValidationResult.obtain_from_generic(result)
errors.append(result.get_mean())
return errors
Explanation: The mean squared error (MSE) of an estimator measures the average of the squares of the errors. CMeanSquaredError class is used to compute the MSE as :
$$\frac{1}{|L|} \sum_{i=1}^{|L|} (L_i - R_i)^2$$
Here $L$ is the vector of predicted labels and $R$ is the vector of real labels.
We use 5-fold cross-validation to compute MSE and have a look at how MSE varies with regularisation.
End of explanation
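# Added illustrative sketch (not from the original notebook): besides cross-validation,
# the MSE of an already trained machine can be computed directly; this assumes the
# usual Shogun evaluation API (MeanSquaredError.evaluate(predicted, ground_truth)):
# pred = rr.apply(feats_train)
# print(MeanSquaredError().evaluate(pred, labels_train))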
n = 500
taus = logspace(-6, 4, n)
figure(figsize=(20,6))
suptitle('Effect of Regularisation for 10-dimensional data with 200 samples', fontsize=12)
matrix, y = generate_data(200,10)
feats_train = features(matrix.T)
labels_train = RegressionLabels(y[0])
weights, rr = generate_weights(taus, feats_train, labels_train)
errors = xval_results(taus)
p1=subplot(121)
plot_regularization(taus, weights)
p2 = subplot(122)
plot(taus, errors)
p2.set_xscale('log')
xlabel('Tau', fontsize=12)
ylabel('Error', fontsize=12)
jet()
Explanation: Data with dimension: 10 and number of samples: 200 is now sampled.
End of explanation
figure(figsize=(20,6))
suptitle('Effect of Regularisation for 10-dimensional data with 10 samples', fontsize=12)
matrix, y = generate_data(10,10)
feats_train = features(matrix.T)
labels_train = RegressionLabels(y[0])
weights, rr = generate_weights(taus, feats_train, labels_train)
errors = xval_results(taus)
p1 = subplot(121)
plot_regularization(taus, weights)
p2 = subplot(122)
plot(taus, errors)
p2.set_xscale('log')
xlabel('Tau', fontsize=12)
ylabel('Error', fontsize=12)
jet()
Explanation: As seen from the plot of errors, regularisation doesn't seem to affect the errors significantly. One interpretation could be that this is because there is less overfitting as we have a large number of samples. For a small sample size compared to the dimensionality, the test set performance may even be poor. The reason for this is that the regression function will fit the noise too much, while the interesting part of the signal is too small. We now generate 10 samples of 10-dimensions to test this.
End of explanation
#sample some data
X=rand(10)*1.5
for i in range(9):
x=random.standard_normal(10)*0.5
X=vstack((X, x))
y=ones(10)
feats_train=features(X)
labels_train=RegressionLabels(y)
Explanation: The first plot is the famous ridge trace that is the signature of this technique. The plot is straightforward to read. It presents the standardized regression coefficients (weights) on the vertical axis and various values of tau (the regularisation constant) along the horizontal axis. Since the values of tau ($\tau$) span several orders of magnitude, we adopt a logarithmic scale along this axis. As tau is increased, the values of the regression estimates change, often wildly at first. At some point, the coefficients seem to settle down and then gradually drift towards zero. Often the value of tau for which these coefficients are at their stable values is the best one. This should be supported by a low error value for that tau.
Least Angle Regression and LASSO
LASSO (Least Absolute Shrinkage and Selection Operator) is another version of Least Squares regression, which uses a L1-norm of the parameter vector. This intuitively enforces sparse solutions, whereas L2-norm penalties usually result in smooth and dense solutions.
$$ \min \|X^T\beta - y\|^2 + \lambda\|\beta\|_1$$
In Shogun, following equivalent form is solved, where increasing $C$ selects more variables:
$$\min \|X^T\beta - y\|^2 \quad s.t. \|\beta\|_1 \leq C $$
One way to solve this regularized form is by using Least Angle Regression (LARS).
LARS is essentially forward stagewise made fast. LARS can be briefly described as follows.
Start with an empty set.
Select $x_j$ that is most correlated with residuals.
Proceed in the direction of $x_j$ until another variable $x_k$ is equally correlated with residuals.
Choose equiangular direction between $x_j$ and $x_k$.
Proceed until third variable enters the active set, etc.
It should be noticed that instead of making tiny hops in the direction of one variable at a time, LARS makes optimally-sized leaps in optimal directions. These directions are chosen to make equal angles (equal correlations) with each of the variables currently in our set (equiangular).
Shogun provides tools for Least angle regression (LARS) and lasso using CLeastAngleRegression class. As explained in the mathematical formaulation, LARS is just like Stepwise Regression but increases the estimated variables in a direction equiangular to each one's correlations with the residual. The working of this is shown below by plotting the LASSO path. Data is generated in a similar way to the previous section.
End of explanation
#Preprocess data
preproc=PruneVarSubMean()
preproc.init(feats_train)
feats_train.add_preprocessor(preproc)
feats_train.apply_preprocessor()
preprocessor=NormOne()
preprocessor.init(feats_train)
feats_train.add_preprocessor(preprocessor)
feats_train.apply_preprocessor()
print("(No. of attributes, No. of samples) of data:")
print(feats_train.get_feature_matrix().shape)
Explanation: CLeastAngleRegression requires the features to be normalized with a zero mean and unit norm. Hence we use two preprocessors: PruneVarSubMean and NormOne.
End of explanation
#Train and generate weights
la=LeastAngleRegression()
la.put('labels', labels_train)
la.train(feats_train)
size=la.get_path_size()
print ("Size of path is %s" %size)
Explanation: Next we train on the data. Keeping in mind that we had 10 attributes/dimensions in our data, let us have a look at the size of LASSO path which is obtained readily using get_path_size().
End of explanation
#calculate weights
weights=[]
for i in range(size):
weights.append(la.get_w_for_var(i))
s = sum(abs(array(weights)), axis=1)
print ('Max. norm is %s' %s[-1])
figure(figsize(30,7))
#plot 1
ax=subplot(131)
title('Lasso path')
ax.plot(s, weights, linewidth=2)
ymin, ymax = ylim()
ax.vlines(s[1:-1], ymin, ymax, linestyle='dashed')
xlabel("Norm")
ylabel("weights")
#Restrict norm to half for early termination
la.put('max_l1_norm', s[-1]*0.5)
la.train(feats_train)
size=la.get_path_size()
weights=[]
for i in range(size):
weights.append(la.get_w_for_var(i))
s = sum(abs(array(weights)), axis=1)
#plot 2
ax2=subplot(132)
title('Lasso path with restricted norm')
ax2.plot(s, weights, linewidth=2)
ax2.vlines(s[1:-1], ymin, ymax, linestyle='dashed')
xlabel("Norm")
ylabel("weights")
print ('Restricted norm is %s' %(s[-1]))
Explanation: The weights generated ($\beta_i$) and their norm ($\sum_i|\beta_i|$) change with each step. This is when a new variable is added to path. To get the weights at each of these steps get_w_for_var() method is used. The argument is the index of the variable which should be in the range [0, path_size).
End of explanation
feats = features(CSVFile(os.path.join(SHOGUN_DATA_DIR, 'uci/housing/fm_housing.dat')))
train_labels = RegressionLabels(CSVFile(os.path.join(SHOGUN_DATA_DIR, 'uci/housing/housing_label.dat')))
mat = feats.get_real_matrix('feature_matrix')
crime_rate = mat[0]
feats_train = features(crime_rate.reshape(1, len(mat[0])))
preproc = RescaleFeatures()
preproc.init(feats_train)
feats_train.add_preprocessor(preproc)
feats_train.apply_preprocessor(True)
# Store preprocessed feature matrix.
preproc_data = feats_train.get_feature_matrix()
size=500
x1=linspace(0, 1, size)
width=0.5
tau=0.5
kernel=sg.kernel("GaussianKernel", log_width=np.log(width))
krr=KernelRidgeRegression(tau, kernel, train_labels)
krr.train(feats_train)
feats_test=features(x1.reshape(1,len(x1)))
kernel.init(feats_train, feats_test)
out = krr.apply().get_labels()
#Visualization of regression
fig=figure(figsize(6,6))
#first plot with only one attribute
title("Regression with 1st attribute")
_=scatter(preproc_data[0:], train_labels.get_labels(), c=ones(506), cmap=gray(), s=20)
_=xlabel('Crime rate')
_=ylabel('Median value of homes')
_=plot(x1,out, linewidth=3)
Explanation: Each color in the plot represents a coefficient and the vertical lines denote steps. It is clear that the weights are piecewise linear function of the norm.
Kernel Ridge Regression
Kernel ridge regression (KRR) is a kernel-based regularized form of regression. The dual form of Ridge regression can be shown to be:
$${\bf \alpha}=\left({\bf X}^T{\bf X}+\tau{\bf I}\right)^{-1}{\bf y} \quad \quad(1)$$
It can be seen that the equation to compute $\alpha$ only contains the vectors $\bf X$ in inner products with each other. If a non-linear mapping
$\Phi : x \rightarrow \Phi(x) \in \mathcal F$ is used, the equation can be defined in terms of inner products $\Phi(x)^T \Phi(x)$ instead. We can then use the kernel trick where a kernel function, which can be evaluated efficiently, is chosen: $K({\bf x_i, x_j})=\Phi({\bf x_i})\Phi({\bf x_j})$. This is done because it is sufficient to know these inner products only, instead of the actual vectors $\bf x_i$. Linear regression methods like the Ridge Regression discussed above can then be carried out in the feature space by using a kernel function representing a non-linear map, which amounts to nonlinear regression in the original input space.
KRR can be performed in Shogun using CKernelRidgeRegression class. Let us apply it on a non linear regression problem from the Boston Housing Dataset, where the task is to predict prices of houses by finding a relationship with the various attributes provided. The per capita crime rate attribute is used in this particular example.
End of explanation
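# Added illustrative sketch (not from the original notebook): the density of the KRR
# solution can be checked through its alphas, one coefficient per training example
# (assumes the KernelMachine API, get_alphas()):
# print(len(krr.get_alphas()))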
# Use different kernels
gaussian_kernel=sg.kernel("GaussianKernel", log_width=np.log(0.1))
kernel.init(feats_train, feats_train)
#Polynomial kernel of degree 2
poly_kernel=PolyKernel(feats_train, feats_train, 2, 1.0)
linear_kernel=LinearKernel(feats_train, feats_train)
kernels=[linear_kernel, poly_kernel, gaussian_kernel]
svr_param=1
svr_C=10
svr=LibSVR(svr_C, svr_param, gaussian_kernel, train_labels, LIBSVR_EPSILON_SVR)
#Visualization of regression
x1=linspace(0, 1, size)
feats_test_=features(x1.reshape(1,len(x1)))
def svr_regress(kernels):
fig=figure(figsize(8,8))
for i, kernel in enumerate(kernels):
svr.put('kernel', kernel)
svr.train()
out=svr.apply(feats_test_).get_labels()
#subplot(1,len(kernels), i)
#first plot with only one attribute
title("Support Vector Regression")
_=scatter(preproc_data[0:], train_labels.get_labels(), c=ones(506), cmap=gray(), s=20)
_=xlabel('Crime rate')
_=ylabel('Median value of homes')
_=plot(x1,out, linewidth=3)
ylim([0, 40])
p1 = Rectangle((0, 0), 1, 1, fc="r")
p2 = Rectangle((0, 0), 1, 1, fc="b")
p3 = Rectangle((0, 0), 1, 1, fc="g")
_=legend((p1, p2, p3), ["Gaussian Kernel", "Linear Kernel", "Polynomial Kernel"], loc=1)
svr_regress(kernels)
Explanation: As seen from the example KRR (using the kernel trick) can apply techniques for linear regression in the feature space to perform nonlinear regression in the input space.
Support Vector Regression
In Kernel Ridge Regression $(1)$ we have seen the result to be a dense solution. Thus all training examples are active, which limits its use to smaller numbers of training examples. Support Vector Regression (SVR) uses the concept of support vectors as in Support Vector Machines, which leads to a sparse solution. In the SVM the penalty was paid for being on the wrong side of the discriminating plane. Here we do the same thing: we introduce a penalty for being far away from the predicted line, but once you are close enough, i.e. in some “epsilon-tube” around this line, there is no penalty.
We are given a labeled set of input-output pairs $\mathcal{D}=\{(x_i,y_i)\}^N_{i=1}\subseteq \mathcal{X} \times \mathcal{Y}$ where $x\in\mathcal{X}$ and $y\in \mathcal{Y}$ and the primal problem is as follows:
$$\arg\min_{\mathbf{w},\mathbf{\xi}, b } \left(\frac{1}{2} \|\mathbf{w}\|^2 +C \sum_{i=1}^n (\xi_i+ {\xi_i}^*)\right)$$
For the constraints:
$$ {\bf w}^T{\bf x}_i+b-c_i-\xi_i\leq 0, \, \forall i=1\dots N$$
$$ -{\bf w}^T{\bf x}_i-b-c_i^*-\xi_i^*\leq 0, \, \forall i=1\dots N $$
with $c_i=y_i+ \epsilon$ and $c_i^*=-y_i+ \epsilon$
The resulting dual optimization problem is:
$$ \max_{{\bf \alpha},{\bf \alpha}^*} -\frac{1}{2}\sum_{i,j=1}^N(\alpha_i-\alpha_i^*)(\alpha_j-\alpha_j^*) {\bf x}_i^T {\bf x}_j-\sum_{i=1}^N(\alpha_i+\alpha_i^*)\epsilon - \sum_{i=1}^N(\alpha_i-\alpha_i^*)y_i$$ $$ \mbox{wrt}:$$
$${\bf \alpha},{\bf \alpha}^*\in{\bf R}^N,\ \mbox{s.t.}: 0\leq \alpha_i,\alpha_i^*\leq C,\, \forall i=1\dots N,\ \sum_{i=1}^N(\alpha_i-\alpha_i^*)y_i=0 $$
This class also supports the $\nu$-SVR version of the problem, where $\nu$ replaces the $\epsilon$ parameter and represents an upper bound on the fraction of margin errors and a lower bound on the fraction of support vectors. The resulting problem generally takes a bit longer to solve. The details and a comparison of these two versions can be found in [1].
Let us try regression using Shogun's LibSVR. The dataset from the last section is used. The svr_param argument is the $\epsilon$-tube for the $\epsilon$ version and is the $\nu$ parameter in the other case.
End of explanation
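# Added illustrative sketch (not from the original notebook): in contrast to KRR, the
# sparsity of the SVR solution can be inspected through its support vectors
# (assumes the usual Shogun SVM API, get_num_support_vectors()):
# svr.train()
# print(svr.get_num_support_vectors())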
import time
gaussian_kernel=sg.kernel("GaussianKernel", log_width=np.log(13))
nus=[0.2, 0.4, 0.6, 0.8]
epsilons=[0.16, 0.09, 0.046, 0.0188]
svr_C=10
def compare_svr(nus, epsilons):
time_eps=[]
time_nus=[]
for i in range(len(epsilons)):
svr_param=1
svr=LibSVR(svr_C, epsilons[i], gaussian_kernel, train_labels, LIBSVR_EPSILON_SVR)
t_start=time.clock()
svr.train()
time_test=(time.clock() - t_start)
time_eps.append(time_test)
for i in range(len(nus)):
svr_param=1
svr=LibSVR(svr_C, nus[i], gaussian_kernel, train_labels, LIBSVR_NU_SVR)
t_start=time.clock()
svr.train()
time_test=(time.clock() - t_start)
time_nus.append(time_test)
print("-"*72 )
print("|", "%15s" % 'Nu' ,"|", "%15s" % 'Epsilon',"|","%15s" % 'Time (Nu)' ,"|", "%15s" % 'Time(Epsilon)' ,"|")
for i in range(len(nus)):
print( "-"*72 )
print( "|", "%15s" % nus[i] ,"|", "%15s" %epsilons[i],"|","%15s" %time_nus[i] ,"|", "%15s" %time_eps[i] ,"|" )
print("-"*72 )
title_='SVR Performance on Boston Housing dataset'
print("%50s" %title_)
compare_svr(nus, epsilons)
Explanation: Let us do a comparison of the time taken by the two different models, similar to that done in section 6 of [1]. The Boston Housing Dataset is used.
End of explanation |
12,586 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Generic Android viewer
Step1: Test environment setup
For more details on this please check out examples/utils/testenv_example.ipynb.
devlib requires the ANDROID_HOME environment variable configured to point to your local installation of the Android SDK. If you do not have this variable configured in the shell used to start the notebook server, you need to run a cell to define where your Android SDK is installed or specify the ANDROID_HOME in your target configuration.
In case more than one Android device is connected to the host, you must specify the ID of the device you want to target in my_target_conf. Run adb devices on your host to get the ID.
Step2: Workload definition
The Viewer workload will simply read an URI and let Android pick the best application to view the item designated by that URI. That item could be a web page, a photo, a pdf, etc. For instance, if given an URL to a Google Maps location, the Google Maps application will be opened at that location. If the device doesn't have Google Play Services (e.g. HiKey960), it will open Google Maps through the default web browser.
The Viewer class is intended to be subclassed to customize your workload. There are pre_interact(), interact() and post_interact() methods that are made to be overridden.
In this case we'll simply execute a script on the target to swipe around a location on Gmaps. This script is generated using the TargetScript class, which is used here on System.{h,v}swipe() calls to accumulate commands instead of executing them directly. Those commands are then outputted to a script on the remote device, and that script is later on executed as the item is being viewed. See ${LISA_HOME}/libs/util/target_script.py
Step3: Workload execution
Step4: Traces visualisation | Python Code:
from conf import LisaLogging
LisaLogging.setup()
%pylab inline
import json
import os
# Support to access the remote target
import devlib
from env import TestEnv
# Import support for Android devices
from android import Screen, Workload, System, ViewerWorkload
from target_script import TargetScript
# Support for trace events analysis
from trace import Trace
# Suport for FTrace events parsing and visualization
import trappy
import pandas as pd
import sqlite3
from IPython.display import display
Explanation: Generic Android viewer
End of explanation
# Setup target configuration
my_conf = {
# Target platform and board
"platform" : 'android',
"board" : 'hikey960',
# Device serial ID
# Not required if there is only one device connected to your computer
"device" : "0123456789ABCDEF",
# Android home
# Not required if already exported in your .bashrc
#"ANDROID_HOME" : "/home/vagrant/lisa/tools/",
# Folder where all the results will be collected
"results_dir" : "Viewer_example",
# Define devlib modules to load
"modules" : [
'cpufreq' # enable CPUFreq support
],
# FTrace events to collect for all the tests configuration which have
# the "ftrace" flag enabled
"ftrace" : {
"events" : [
"sched_switch",
"sched_wakeup",
"sched_wakeup_new",
"sched_overutilized",
"sched_load_avg_cpu",
"sched_load_avg_task",
"sched_load_waking_task",
"cpu_capacity",
"cpu_frequency",
"cpu_idle",
"sched_tune_config",
"sched_tune_tasks_update",
"sched_tune_boostgroup_update",
"sched_tune_filter",
"sched_boost_cpu",
"sched_boost_task",
"sched_energy_diff"
],
"buffsize" : 100 * 1024,
},
# Tools required by the experiments
"tools" : [ 'trace-cmd', 'taskset'],
}
# Initialize a test environment using:
te = TestEnv(my_conf, wipe=False)
target = te.target
Explanation: Test environment setup
For more details on this please check out examples/utils/testenv_example.ipynb.
devlib requires the ANDROID_HOME environment variable configured to point to your local installation of the Android SDK. If you do not have this variable configured in the shell used to start the notebook server, you need to run a cell to define where your Android SDK is installed or specify the ANDROID_HOME in your target configuration.
In case more than one Android device is connected to the host, you must specify the ID of the device you want to target in my_target_conf. Run adb devices on your host to get the ID.
End of explanation
class GmapsViewer(ViewerWorkload):
def pre_interact(self):
self.script = TargetScript(te, "gmaps_swiper.sh")
# Define commands to execute during experiment
for i in range(2):
System.hswipe(self.script, 40, 60, 100, False)
self.script.append('sleep 1')
System.vswipe(self.script, 40, 60, 100, True)
self.script.append('sleep 1')
System.hswipe(self.script, 40, 60, 100, True)
self.script.append('sleep 1')
System.vswipe(self.script, 40, 60, 100, False)
self.script.append('sleep 1')
# Push script to the target
self.script.push()
def interact(self):
self.script.run()
def experiment():
# Configure governor
target.cpufreq.set_all_governors('sched')
# Get workload
wload = Workload.getInstance(te, 'gmapsviewer')
# Run workload
wload.run(out_dir=te.res_dir,
collect="ftrace",
uri="https://goo.gl/maps/D8Sn3hxsHw62")
# Dump platform descriptor
te.platform_dump(te.res_dir)
Explanation: Workload definition
The Viewer workload will simply read an URI and let Android pick the best application to view the item designated by that URI. That item could be a web page, a photo, a pdf, etc. For instance, if given an URL to a Google Maps location, the Google Maps application will be opened at that location. If the device doesn't have Google Play Services (e.g. HiKey960), it will open Google Maps through the default web browser.
The Viewer class is intended to be subclassed to customize your workload. There are pre_interact(), interact() and post_interact() methods that are made to be overridden.
In this case we'll simply execute a script on the target to swipe around a location on Gmaps. This script is generated using the TargetScript class, which is used here on System.{h,v}swipe() calls to accumulate commands instead of executing them directly. Those commands are then outputted to a script on the remote device, and that script is later on executed as the item is being viewed. See ${LISA_HOME}/libs/util/target_script.py
End of explanation
results = experiment()
# Load traces in memory (can take several minutes)
platform_file = os.path.join(te.res_dir, 'platform.json')
with open(platform_file, 'r') as fh:
platform = json.load(fh)
trace_file = os.path.join(te.res_dir, 'trace.dat')
trace = Trace(platform, trace_file, events=my_conf['ftrace']['events'], normalize_time=False)
Explanation: Workload execution
End of explanation
!kernelshark {trace_file} 2>/dev/null
Explanation: Traces visualisation
End of explanation |
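# Added illustrative sketch (not from the original notebook): besides kernelshark, the
# parsed Trace object can be inspected directly from Python; assuming the legacy LISA
# API, per-event pandas dataframes are exposed through trace.data_frame, e.g.:
# display(trace.data_frame.trace_event('sched_switch').head())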
12,587 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This notebook has some profiling of Dask used to make a selection along both first and second axes of a large-ish multidimensional array. The use case is making selections of genotype data, e.g., as required for making a web-browser for genotype data as in www.malariagen.net/apps/ag1000g.
Step1: Real data
Step2: Synthetic data | Python Code:
import zarr; print('zarr', zarr.__version__)
import dask; print('dask', dask.__version__)
import dask.array as da
import numpy as np
Explanation: This notebook has some profiling of Dask used to make a selection along both first and second axes of a large-ish multidimensional array. The use case is making selections of genotype data, e.g., as required for making a web-browser for genotype data as in www.malariagen.net/apps/ag1000g.
End of explanation
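If the dataset used below is not available to you, the same two-axis selection idiom can be tried first on a small in-memory array (all sizes here are arbitrary):
# Self-contained sketch of the selection pattern profiled in this notebook
toy = np.random.randint(0, 4, size=(10000, 50, 2)).astype('i1')
toy_d = da.from_array(toy, chunks=(1000, None, None))
keep_rows = np.random.randint(0, 2, size=toy.shape[0], dtype=bool)          # dim0 boolean condition
keep_cols = sorted(np.random.choice(toy.shape[1], size=10, replace=False))  # dim1 indices
toy_sel = toy_d[keep_rows][:, keep_cols]
toy_sel[:100].compute()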
# here's the real data
callset = zarr.open_group('/kwiat/2/coluzzi/ag1000g/data/phase1/release/AR3.1/variation/main/zarr2/zstd/ag1000g.phase1.ar3',
mode='r')
callset
# here's the array we're going to work with
g = callset['3R/calldata/genotype']
g
# wrap as dask array with very simple chunking of first dim only
%time gd = da.from_array(g, chunks=(g.chunks[0], None, None))
gd
# load condition used to make selection on first axis
dim0_condition = callset['3R/variants/FILTER_PASS'][:]
dim0_condition.shape, dim0_condition.dtype, np.count_nonzero(dim0_condition)
# invent a random selection for second axis
dim1_indices = sorted(np.random.choice(765, size=100, replace=False))
# setup the 2D selection - this is the slow bit
%time gd_sel = gd[dim0_condition][:, dim1_indices]
gd_sel
# now load a slice from this new selection - quick!
%time gd_sel[1000000:1100000].compute(optimize_graph=False)
# what's taking so long?
import cProfile
cProfile.run('gd[dim0_condition][:, dim1_indices]', sort='time')
cProfile.run('gd[dim0_condition][:, dim1_indices]', sort='cumtime')
Explanation: Real data
End of explanation
# create a synthetic dataset for profiling
a = zarr.array(np.random.randint(-1, 4, size=(20000000, 200, 2), dtype='i1'),
chunks=(10000, 100, 2), compressor=zarr.Blosc(cname='zstd', clevel=1, shuffle=2))
a
# create a synthetic selection for first axis
c = np.random.randint(0, 2, size=a.shape[0], dtype=bool)
# create a synthetic selection for second axis
s = sorted(np.random.choice(a.shape[1], size=100, replace=False))
%time d = da.from_array(a, chunks=(a.chunks[0], None, None))
d
%time ds = d[c][:, s]
cProfile.run('d[c][:, s]', sort='time')
%time ds[1000000:1100000].compute(optimize_graph=False)
# problem is in fact just the dim0 selection
cProfile.run('d[c]', sort='time')
Explanation: Synthetic data
End of explanation |
12,588 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
OLGA ppl files, examples and howto
For a ppl file the following methods are available
Step1: Profile selection
As for tpl files, a ppl file may contain hundreds of profiles, in particular for complex networks. For this reason a filtering method is quite useful.
The easiest way is to filter on all the profiles using patterns: the command ppl.filter_data("PT") filters all the pressure profiles (or better, all the profiles with "PT" in the description; if you have defined a temperature profile at the position "PTTOPSIDE", for example, that profile will be selected too).
The resulting python dictionary will have a unique index for each filtered profile that can be used to identify the interesting profile(s).
In case of an empty pattern all the available profiles will be reported.
Step2: The same output can be reported as a pandas dataframe
Step3: Dump to excel
To dump all the variables to an excel file use ppl.to_excel()
If no path is provided, an excel file with the same name as the ppl file is generated in the working folder. Depending on the ppl size this may take a while.
Extract a specific variable
Once you know the variable(s) index you are interested in (see the filtering paragraph above for more info) you can extract it (or them) and use the data directly in python.
Let's assume you are interested in the pressure and the temperature profile of the branch riser
Step4: Our targets are
Step5: The ppl object now has the two profiles available in the data attribute
Step6: while the label attribute stores the variable type
Step7: Ppl data structure
The ppl data structure at the moment contains
Step8: To plot the last timestep
Step9: The time can also be used as a parameter | Python Code:
ppl_path = '../../pyfas/test/test_files/'
fname = 'FC1_rev01.ppl'
ppl = fa.Ppl(ppl_path+fname)
Explanation: OLGA ppl files, examples and howto
For a ppl file the following methods are available:
<b>filter_data</b> - return a filtered subset of trends
<b>extract</b> - extract a single trend variable
<b>to_excel</b> - dump all the data to an excel file
The usual workflow should be:
Load the correct ppl
Select the desired variable(s)
Extract the results or dump all the variables to an excel file
Post-process your data in Excel or in the notebook itself
Ppl loading
To load a specific ppl file the correct path and filename have to be provided:
End of explanation
ppl.filter_data('PT')
Explanation: Profile selection
As for tpl files, a ppl file may contain hundreds of profiles, in particular for complex networks. For this reason a filtering method is quite useful.
The easiest way is to filter on all the profiles using patterns: the command ppl.filter_data("PT") filters all the pressure profiles (or better, all the profiles with "PT" in the description; if you have defined a temperature profile at the position "PTTOPSIDE", for example, that profile will be selected too).
The resulting python dictionary will have a unique index for each filtered profile that can be used to identify the interesting profile(s).
In case of an empty pattern all the available profiles will be reported.
End of explanation
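For example, passing an empty pattern lists every available profile, as described above (sketch):
# An empty pattern reports all the available profiles
all_profiles = ppl.filter_data('')
len(all_profiles)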
pd.DataFrame(ppl.filter_data('PT'), index=("Profiles",)).T
Explanation: The same output can be reported as a pandas dataframe:
End of explanation
pd.DataFrame(ppl.filter_data("TM"), index=("Profiles",)).T
pd.DataFrame(ppl.filter_data("PT"), index=("Profiles",)).T
Explanation: Dump to excel
To dump all the variables to an excel file use ppl.to_excel()
If no path is provided, an excel file with the same name as the ppl file is generated in the working folder. Depending on the ppl size this may take a while.
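A minimal sketch of that dump (the optional output path mentioned in the comment is an assumption, not a documented signature):
# Dump every variable to an excel file in the working folder
ppl.to_excel()
# a destination path can presumably be passed as well, e.g. ppl.to_excel('./dumps')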
Extract a specific variable
Once you know the variable(s) index you are interested in (see the filtering paragraph above for more info) you can extract it (or them) and use the data directly in python.
Let's assume you are interested in the pressure and the temperature profile of the branch riser:
End of explanation
ppl.extract(13)
ppl.extract(12)
Explanation: Our targets are:
<i>variable 13</i> for the temperature
and
<i>variable 12</i> for the pressure
Now we can proceed with the data extraction:
End of explanation
ppl.data.keys()
Explanation: The ppl object now has the two profiles available in the data attribute:
End of explanation
ppl.label[13]
Explanation: while the label attribute stores the variable type:
End of explanation
%matplotlib inline
geometry = ppl.data[12][0]
pt_riser = ppl.data[12][1]
tm_riser = ppl.data[13][1]
def ppl_plot(geo, v0, v1, ts):
fig, ax0 = plt.subplots(figsize=(12, 7));
ax0.grid(True)
p0, = ax0.plot(geo, v0[ts])
ax0.set_ylabel("[C]", fontsize=16)
ax0.set_xlabel("[m]", fontsize=16)
ax1 = ax0.twinx()
p1, = ax1.plot(geo, v1[ts]/1e5, 'r')
ax1.grid(False)
ax1.set_ylabel("[bara]", fontsize=16)
ax1.tick_params(axis="both", labelsize=16)
ax1.tick_params(axis="both", labelsize=16)
plt.legend((p0, p1), ("Temperature profile", "Pressure profile"), loc=3, fontsize=16)
plt.title("P and T for case FC1", size=20);
Explanation: Ppl data structure
The ppl data structure at the moment contains:
the geometry profile of the branch as ppl.data[variable_index][0]
the selected profile at the timestep 0 as ppl.data[variable_index][1][0]
the selected profile at the last timestep as ppl.data[variable_index][1][-1]
In other words the first index is the variable, the second is 0 for the geometry and 1 for the data, the last one identifies the timestep.
Data processing
The results available in the data attribute are numpy arrays and can be easily manipulated and plotted:
End of explanation
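Concretely, the indexing described above looks like this (sketch; variable 12 is the pressure profile extracted earlier):
# Geometry, first-timestep and last-timestep pressure profiles
geometry = ppl.data[12][0]
pt_first = ppl.data[12][1][0]
pt_last = ppl.data[12][1][-1]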
ppl_plot(geometry, tm_riser, pt_riser, -1)
Explanation: To plot the last timestep:
End of explanation
import ipywidgets.widgets as widgets
from ipywidgets import interact
timesteps=len(tm_riser)-1
@interact
def ppl_plot(ts=widgets.IntSlider(min=0, max=timesteps)):
fig, ax0 = plt.subplots(figsize=(12, 7));
ax0.grid(True)
p0, = ax0.plot(geometry, tm_riser[ts])
ax0.set_ylabel("[C]", fontsize=16)
ax0.set_xlabel("[m]", fontsize=16)
ax0.set_ylim(10, 12)
ax1 = ax0.twinx()
ax1.set_ylim(90, 130)
p1, = ax1.plot(geometry, pt_riser[ts]/1e5, 'r')
ax1.grid(False)
ax1.set_ylabel("[bara]", fontsize=16)
ax1.tick_params(axis="both", labelsize=16)
ax1.tick_params(axis="both", labelsize=16)
plt.legend((p0, p1), ("Temperature profile", "Pressure profile"), loc=3, fontsize=16)
plt.title("P and T for case FC1 @ timestep {}".format(ts), size=20);
Explanation: The time can also be used as a parameter:
End of explanation |
12,589 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Using an SBML model
Getting started
Installing libraries
Before you start, you will need to install a couple of libraries
Step1: Running an SBML model
If you have run your genome through RAST, you can download the SBML model and use that directly.
We have provided an SBML model of Citrobacter sedlakii that you can download and use. You can right-ctrl click on this link and save the SBML file in the same location you are running this iPython notebook.
We use this SBML model to demonstrate the key points of the FBA approach
Step2: Find all the reactions and identify those that are boundary reactions
We need a set of reactions to run in the model. In this case, we are going to run all the reactions in our SBML file. However, you can change this set if you want to knock out reactions, add reactions, or generally modify the model. We store those in the reactions_to_run set.
The boundary reactions refer to compounds that are secreted but then need to be removed from the reactions_to_run set. We usually include a consumption of those compounds that is open ended, as if they are draining away. We store those reactions in the uptake_secretion_reactions dictionary.
Step3: At this point, we can take a look at how many reactions are in the model, not counting the biomass reaction
Step4: Find all the compounds in the model, and filter out those that are secreted
We need to filter out uptake and secretion compounds from our list of all compounds before we can make a stoichiometric matrix.
Step5: Again, we can see how many compounds there are in the model.
Step6: And now we have the size of our stoichiometric matrix! Notice that the stoichiometric matrix is composed of the reactions that we are going to run and the compounds that are in those reactions (but not the uptake/secretion reactions and compounds).
Step7: Read the media file, and correct the media names
In our media directory, we have a lot of different media formulations, most of which we use with the Genotype-Phenotype project. For this example, we are going to use Lysogeny Broth (LB). There are many different formulations of LB, but we have included the recipe created by the folks at Argonne so that it is comparable with their analysis. You can download ArgonneLB.txt and put it in the same directory as this iPython notebook to run it.
Once we have read the file we need to correct the names in the compounds. Sometimes when compound names are exported to the SBML file they are modified slightly. This just corrects those names.
Step8: Set the reaction bounds for uptake/secretion compounds
The uptake and secretion compounds typically have reaction bounds that allow them to be consumed (i.e. diffuse away from the cell) but not produced. However, our media components can also increase in concentration (i.e. diffuse to the cell) and thus the bounds are set higher. Whenever you change the growth media, you also need to adjust the reaction bounds to ensure that the media can be consumed!
Step9: Run the FBA
Now that we have constructed our model, we can run the FBA! | Python Code:
from __future__ import print_function
import sys
import copy
import PyFBA
Explanation: Using an SBML model
Getting started
Installing libraries
Before you start, you will need to install a couple of libraries:
The ModelSeedDatabase has all the biochemistry we'll need. You can install that with git clone.
The PyFBA library has detailed installation instructions. Don't be scared, it's mostly just pip install.
(Optional) Also, get the SEED Servers as you can get a lot of information from them. You can install the git python repo from github. Make sure that the SEED_Servers_Python is in your PYTHONPATH.
We start with importing some modules that we are going to use.
We import sys so that we can use standard out and standard error if we have some error messages.<br>
We import copy so that we can make a deep copy of data structures for later comparisons.<br>
We import print_function so that we can use the Python 3.X print syntax.<br>
Then we import the PyFBA module to get started.
End of explanation
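If the optional SEED servers package is not already on your PYTHONPATH, one way to add it for the current session is shown below (the path is a placeholder for wherever you cloned SEED_Servers_Python):
# Optional: make the SEED servers package importable for this session
sys.path.append('/path/to/SEED_Servers_Python')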
sbml = PyFBA.parse.parse_sbml_file("Citrobacter_sedlakii.sbml")
Explanation: Running an SBML model
If you have run your genome through RAST, you can download the SBML model and use that directly.
We have provided an SBML model of Citrobacter sedlakii that you can download and use. You can right-ctrl click on this link and save the SBML file in the same location you are running this iPython notebook.
We use this SBML model to demonstrate the key points of the FBA approach: defining the reactions, including the boundary, or drainflux, reactions; the compounds, including the drain compounds; the media; and the reaction bounds.
We'll take it step by step!
We start by parsing the model:
End of explanation
# Get a dict of reactions.
# The key is the reaction ID, and the value is a metabolism.reaction.Reaction object
reactions = sbml.reactions
reactions_to_run = set()
uptake_secretion_reactions = {}
biomass_equation = None
for r in reactions:
if 'biomass_equation' in reactions[r].name.lower():
biomass_equation = reactions[r]
continue
is_boundary = False
for c in reactions[r].all_compounds():
if c.uptake_secretion:
is_boundary = True
if is_boundary:
reactions[r].is_uptake_secretion = True
uptake_secretion_reactions[r] = reactions[r]
else:
reactions_to_run.add(r)
Explanation: Find all the reactions and identify those that are boundary reactions
We need a set of reactions to run in the model. In this case, we are going to run all the reactions in our SBML file. However, you can change this set if you want to knock out reactions, add reactions, or generally modify the model. We store those in the reactions_to_run set.
The boundary reactions refer to compounds that are secreted but then need to be removed from the reactions_to_run set. We usually include a consumption of those compounds that is open ended, as if they are draining away. We store those reactions in the uptake_secretion_reactions dictionary.
End of explanation
print("There are {} reactions in the model".format(len(reactions)))
print("There are {}".format(len(uptake_secretion_reactions)),
"uptake/secretion reactions in the model")
print("There are {}".format(len(reactions_to_run)),
"reactions to be run in the model")
Explanation: At this point, we can take a look at how many reactions are in the model, not counting the biomass reaction:
End of explanation
# Get a dict of compounds.
# The key is the string representation of the compound and
# the value is a metabolite.compound.Compound object
all_compounds = sbml.compounds
# Filter for compounds that are boundary compounds
filtered_compounds = {}
for c in all_compounds:
if not all_compounds[c].uptake_secretion:
filtered_compounds[c] = all_compounds[c]
Explanation: Find all the compounds in the model, and filter out those that are secreted
We need to filter out uptake and secretion compounds from our list of all compounds before we can make a stoichiometric matrix.
End of explanation
print("There are {} total compounds in the model".format(len(all_compounds)))
print("There are {}".format(len(filtered_compounds)),
"compounds that are not involved in uptake and secretion")
Explanation: Again, we can see how many compounds there are in the model.
End of explanation
print("The stoichiometric matrix will",
"be {} reactions by {} compounds".format(len(reactions_to_run),
len(filtered_compounds)))
Explanation: And now we have the size of our stoichiometric matrix! Notice that the stoichiometric matrix is composed of the reactions that we are going to run and the compounds that are in those reactions (but not the uptake/secretion reactions and compounds).
End of explanation
# Read the media file
media = PyFBA.parse.read_media_file("ArgonneLB.txt")
# Correct the names
media = PyFBA.parse.correct_media_names(media, all_compounds)
Explanation: Read the media file, and correct the media names
In our media directory, we have a lot of different media formulations, most of which we use with the Genotype-Phenotype project. For this example, we are going to use Lysogeny Broth (LB). There are many different formulations of LB, but we have included the recipe created by the folks at Argonne so that it is comparable with their analysis. You can download ArgonneLB.txt and put it in the same directory as this iPython notebook to run it.
Once we have read the file we need to correct the names in the compounds. Sometimes when compound names are exported to the SBML file they are modified slightly. This just corrects those names.
End of explanation
# Adjust the lower bounds of uptake secretion reactions
# for things that are not in the media
for u in uptake_secretion_reactions:
is_media_component = False
for c in uptake_secretion_reactions[u].all_compounds():
if c in media:
is_media_component = True
if not is_media_component:
reactions[u].lower_bound = 0.0
uptake_secretion_reactions[u].lower_bound = 0.0
Explanation: Set the reaction bounds for uptake/secretion compounds
The uptake and secretion compounds typically have reaction bounds that allow them to be consumed (i.e. diffuse away from the cell) but not produced. However, our media components can also increase in concentration (i.e. diffuse to the cell) and thus the bounds are set higher. Whenever you change the growth media, you also need to adjust the reaction bounds to ensure that the media can be consumed!
End of explanation
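As a quick sanity check (sketch), you can count how many uptake/secretion reactions ended up closed by the loop above:
# How many uptake/secretion reactions had their lower bound set to zero?
closed = [u for u in uptake_secretion_reactions
          if uptake_secretion_reactions[u].lower_bound == 0.0]
print("{} of {} uptake/secretion reactions closed".format(len(closed), len(uptake_secretion_reactions)))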
status, value, growth = PyFBA.fba.run_fba(filtered_compounds, reactions,
reactions_to_run, media, biomass_equation,
uptake_secretion_reactions)
print("The FBA completed with a flux value of {} --> growth: {}".format(value, growth))
Explanation: Run the FBA
Now that we have constructed our model, we can run the FBA!
End of explanation |
12,590 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Identify Coupled Patterns between SLP and SST through Maximum Covariance Analysis
Maximum Covariance Analysis (MCA; Bretherton et al., 1992) is similar to Empirical Orthogonal Function Analysis (EOF) in that they both deal with the decomposition of a covariance matrix. In EOF, this is a covariance matrix based on a single spatio-temporal field, while MCA is based on the decomposition of a "cross-covariance" matrix derived from two variables. The resulting expansion coefficients (ECs) associated with the left and right singular vectors can then be projected onto the original data to obtain homogeneous and heterogeneous regression maps.
This notebook applies the MCA method to real data, the tropical Pacific sea level pressure (SLP) and sea surface temperature (SST) fields, and identifies the coupled patterns in the data, including the famous tropical Pacific climate variability known as the El Niño–Southern Oscillation (ENSO). ENSO has warm El Niño states and cool La Niña states, with changes found not only in the SST but also in the SLP.
The MCA approach is chosen because it is able to capture patterns of maximum covariance between two variables; it has been found to reasonably capture atmospheric and oceanic processes (Wilks, 2015). It is a robust
method to investigate dominant modes of interaction, because it favors a better understanding of the relationship between groups of variables (Frankignoul et al., 2011).
The SLP and SST data are downloaded from http://www.esrl.noaa.gov/psd/gcos_wgsp/Gridded/data.hadslp2.html and http://www.esrl.noaa.gov/psd/thredds/catalog/Datasets/kaplan_sst/catalog.html, respectively.
Step1: 2. Load data
Spatial domain
Step2: 2.2 SST
Step3: 2.3 Preprocess
Convert 3D SLP and SST to 2D arrays and only use non-NaN values of SST.
Step4: 3. Carry out Maximum Covariance Analysis
The key step is to apply SVD to decompose the covariance matrix of SLP and SST. It is worth noting that np.linalg.svd returns V.T rather than V, so in practice we have to transpose it.
3.1 MCA
Step5: 3.2 Postprocess
Calculate cumulative fraction of squares covariance explained
Extract the leading SLP MCA pattern and EC
Extract the leading SST MCA pattern and EC
Normalize MCA mode 1 by standardizing the EC1, so patterns correspond to a 1-std deviation variation in EC1. The positive EC1 of SST should match the Niño SSTA.
3.2.1 Calculate cumulative fraction of squares covariance explained
Step6: 3.2.2 Extract the leading SLP MCA pattern and EC
Step7: 3.2.3 Extract the leading SST MCA pattern and EC
Step8: 4 Visualize MCA results
4.1 Plot cumulative fraction of squares covariance explained
Step9: 4.2 Plot the leading SLP/SST MCA spatial pattern and EC | Python Code:
%matplotlib inline
import numpy as np
import xarray as xr
import cartopy.crs as ccrs
import datetime as dt
import matplotlib.pyplot as plt
import matplotlib as mpl
import matplotlib.gridspec as gridspec
import matplotlib.dates as mdates
mpl.rcParams['figure.figsize'] = 8.0, 4.0
mpl.rcParams['font.size'] = 13
Explanation: Identify Coupled Patterns between SLP and SST through Maximum Covariance Analysis
Maximum Covariance Analysis (MCA; Bretherton et al., 1992) is similar to Empirical Orthogonal Function Analysis (EOF) in that they both deal with the decomposition of a covariance matrix. In EOF, this is a covariance matrix based on a single spatio-temporal field, while MCA is based on the decomposition of a "cross-covariance" matrix derived from two variables. The resulting expansion coefficients (ECs) associated with the left and right singular vectors can then be projected onto the original data to obtain homogeneous and heterogeneous regression maps.
This notebook applies the MCA method to real data, the tropical Pacific sea level pressure (SLP) and sea surface temperature (SST) fields, and identifies the coupled patterns in the data, including the famous tropical Pacific climate variability known as the El Niño–Southern Oscillation (ENSO). ENSO has warm El Niño states and cool La Niña states, with changes found not only in the SST but also in the SLP.
The MCA approach is chosen because it is able to capture patterns of maximum covariance between two variables; it has been found to reasonably capture atmospheric and oceanic processes (Wilks, 2015). It is a robust
method to investigate dominant modes of interaction, because it favors a better understanding of the relationship between groups of variables (Frankignoul et al., 2011).
The SLP and SST data are downloaded from http://www.esrl.noaa.gov/psd/gcos_wgsp/Gridded/data.hadslp2.html and http://www.esrl.noaa.gov/psd/thredds/catalog/Datasets/kaplan_sst/catalog.html, respectively.
1. Load all needed libraries
End of explanation
ds1 = xr.open_dataset('data/slp.mnmean.hadslp2.nc')
slp = ds1.slp.sel(lat=slice(30, -30), lon=slice(180, 290), time=slice('1950-01-01','2005-12-31'))
lon_slp = ds1.lon.sel(lon=slice(180, 290))
lat_slp = ds1.lat.sel(lat=slice(30, -30))
dates = ds1.time.sel(time=slice('1950-01-01','2005-12-31')).values
# climatology
slp_clm = slp.groupby('time.month').mean(dim='time')
# anomaly
slp_anom = slp.groupby('time.month') - slp_clm
Explanation: 2. Load data
Spatial domain: -180W ~ -70W, and 30N ~ -30S
Period: 1950~2005
Raw SLP and SST are converted to monthly anomalies
2.1 SLP
End of explanation
ds2 = xr.open_dataset('data/sst.mon.anom.kaplan.nc')
sst_anom = ds2.sst.sel(lat=slice(-30, 30), lon=slice(180, 290), time=slice('1950-01-01','2005-12-31'))
lat_sst = ds2.lat.sel(lat=slice(-30, 30))
lon_sst = ds2.lon.sel(lon=slice(180, 290))
Explanation: 2.2 SST
End of explanation
slp2d = slp_anom.values
ntime, nrow_slp, ncol_slp = slp2d.shape
slp2d = np.reshape(slp2d, (ntime, nrow_slp*ncol_slp), order='F')
sst2d = sst_anom.values
ntime, nrow_sst, ncol_sst = sst2d.shape
sst2d = np.reshape(sst2d, (ntime, nrow_sst*ncol_sst), order='F')
nonMissingIndex = np.where(np.isnan(sst2d[0]) == False)[0]
sst2dNoMissing = sst2d[:, nonMissingIndex]
Explanation: 2.3 Preprocess
Convert 3D SLP and SST to 2D arrays and only use non-NaN values of SST.
End of explanation
Cxy = np.dot(slp2d.T, sst2dNoMissing)/(ntime-1.0)
U, s, V = np.linalg.svd(Cxy, full_matrices=False)
V = V.T
Explanation: 3. Carry out Maximum Covariance Analysis
The key step is to apply SVD to decompose the covariance matrix of SLP and SST. It is worth noting that np.linalg.svd returns V.T rather than V, so in practice we have to transpose it.
3.1 MCA
End of explanation
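As a quick check (sketch), the factors returned by the SVD should rebuild the covariance matrix; note that after the transpose above the reconstruction uses V.T again:
# Sanity check: U * diag(s) * V.T reproduces Cxy
print(np.allclose((U * s).dot(V.T), Cxy))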
scf = s**2./np.sum(s**2.0)
Explanation: 3.2 Postprocess
Calculate cumulative fraction of squares covariance explained
Extract the leading SLP MCA pattern and EC
Extract the leading SST MCA pattern and EC
Normalize MCA mode 1 by standardizing the EC1, so patterns correspond to a 1-std deviation variation in EC1. The positive EC1 of SST should match the Niño SSTA.
3.2.1 Calculate cumulative fraction of squares covariance explained
End of explanation
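For instance, the fraction of squared covariance carried by the leading mode can be printed directly (sketch):
# Squared covariance fraction of mode 1 and of the first five modes
print('Mode 1 SCF: {:.3f}'.format(scf[0]))
print('First 5 modes: {:.3f}'.format(np.cumsum(scf)[4]))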
# SLP MCA pattern
U1 = np.reshape(U[:,0, None], (nrow_slp, ncol_slp), order='F')
# EC1 of SLP
a1 = np.dot(slp2d, U[:,0, np.newaxis])
# normalize
U1_norm = U1*np.std(a1)
a1_norm = a1/np.std(a1)
Explanation: 3.2.2 Extract the leading SLP MCA pattern and EC
End of explanation
# SST MCA pattern
V1 = np.ones([nrow_sst*ncol_sst,1]) * np.NaN
V1 = V1.astype(V.dtype)
V1[nonMissingIndex,0] = V[:,0]
V1 = V1.reshape([nrow_sst,ncol_sst], order='F')
# EC1 of SST
b1 = np.dot(sst2dNoMissing, V[:,0, np.newaxis])
# normalize
V1_norm = V1*np.std(b1)
b1_norm = b1/np.std(b1)
Explanation: 3.2.3 Extract the leading SST MCA pattern and EC
End of explanation
plt.plot(np.cumsum(scf),'x')
plt.xlabel('SVD mode')
plt.ylabel('Cumulative squares covariance fraction')
plt.ylim([0.7,1.1])
plt.xlim([-0.5, 40])
Explanation: 4 Visualize MCA results
4.1 Plot cumulative fraction of squares covariance explained
End of explanation
gs = gridspec.GridSpec(2, 2)
gs.update(wspace=0.1, hspace=0.15)
fig = plt.figure(figsize = (14,10))
levels = np.arange(-1.0, 1.01, 0.05)
# SLP Pattern
ax0 = fig.add_subplot(gs[0,0], projection=ccrs.PlateCarree())
x1, y1 = np.meshgrid(lon_slp, lat_slp)
cs = ax0.contourf(x1, y1, U1_norm,
levels=levels,
transform=ccrs.PlateCarree(),
cmap='RdBu_r')
cb=fig.colorbar(cs, ax=ax0, shrink=0.8, aspect=20)
ax0.coastlines()
ax0.set_global()
ax0.set_extent([-180, -70, -19, 19])
ax0.set_title('Normalized SLP MCA Mode 1')
# SST Pattern
ax1 = fig.add_subplot(gs[0,1], projection=ccrs.PlateCarree())
x2, y2 = np.meshgrid(lon_sst, lat_sst)
cs2 = ax1.contourf(x2, y2, V1_norm,
levels=levels,
transform=ccrs.PlateCarree(),
cmap='RdBu_r')
cb=fig.colorbar(cs, ax=ax1, shrink=0.8, aspect=20)
ax1.coastlines()
ax1.set_global()
ax1.set_extent([-180, -70, -19, 19])
ax1.set_title('Normalized SST MCA Mode 1')
# EC1
ax2 = fig.add_subplot(gs[1,:])
ax2.plot(dates, a1_norm, label='SLP')
ax2.plot(dates, b1_norm, label='SST')
r = np.corrcoef(a1[:,0], b1[:,0])[0, 1]
ax2.set_title('Expansion Coefficients: SFC = '+ str(round(scf[0],2)) + ', R = ' + str(round(r,2)))
ax2.legend()
ax2.set_ylim([-4,4])
ax2.format_xdata = mdates.DateFormatter('%Y')
fig.autofmt_xdate()
Explanation: 4.2 Plot the leading SLP/SST MCA spatial pattern and EC
End of explanation |
12,591 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
5. Basic Numerical Analysis
5.1. NumPy
5.1.1. The standard Python library for linear algebra is numpy. In this section, we cover just enough of the numpy API to implement a few algorithms in the following sections.
5.1.2. The basic object in numpy is ndarray, an $n$-dimensional generalization of Python's list.
Step1: The shape of an ndarray gives us the dimensions. b is a 1-by-4 matrix, or a row vector. c is a 4-by-1 matrix, or a column vector. d is a 2-by-2 matrix.
5.1.3. Entering a column vector requires many brackets, which can be tedious. We can instead use transpose.
Step2: Similarly, a matrix can be entered by first typing out a one-dimensional array and then putting the array through reshape
Step3: 5.1.4. There are a number of pre-built functions for special kinds of vectors and matrices.
Step4: 5.1.5. Matrix multiplications are performed via dot
Step5: If you are running Python 3.5 or higher, the binary operator @ may be used to denote matrix multiplication.
Step6: 5.1.6. ndarray supports coordinatewise operations. Observe, in particular, that * does not result in matrix multiplication.
Step7: Operations with mismatching dimensions sometimes result in legitimate operation, via broadcasting. For example, scalar multiplication works fine
Step8: Row-wise operations and column-wise operations are also possible
Step9: 5.2. Floating-Point Arithmetic
5.2.1. The real and complex number systems that we use in mathematics unfortunately cannot be stored in a computer. On a computer, we use a finite approximation known as the floating-point number system.
The floating-point number system in base $\beta$ with precision $p$, minimum exponent $e_\min$, and maximum exponent $e_\max$ is the discrete subset $\mathscr{F} = \mathscr{F}(\beta,p,e_\min,e_\max)$ consisting of 0, $\infty$, $-\infty$, NaN, and numbers of the form
$$\pm(d_0 + d_1 / \beta + \cdots + d_{p-1} / \beta^{p-1}) \beta^e,$$
where
- $d_0,\ldots,d_{p-1},e$ are integers,
- $1 \leq d_0 < \beta$,
- $0 \leq d_i < \beta$ for all $1 \leq i \leq p-1$, and
- $e_\min \leq e \leq e_\max$.
Given a floating-point number of the above form, we call the number $d_0 + d_1/\beta + \cdots + d_{p-1}/\beta^{p-1}$ the significand, or the mantissa. The number $e$ is called the exponent. The sign $\pm$ is called, naturally, the sign.
The condition $d_0 \geq 1$ makes the floating-point representation of a number unique. For this reason, a floating-point number with $d_0 \geq 1$ is called normalized.
5.2.2. How do we compute the floating-point representation $\operatorname{fl}(x)$ of a real number $x$?
It makes sense to define $\operatorname{fl}(x)$ as the floating-point number nearest to $x$. To approximate $\operatorname{fl}(x)$, we observe that $x$ admits a unique infinite-sum representation
$$x = \pm (d_0 + d_1/\beta + \cdots + d_{p-1}/\beta^{p-1} + d_p /\beta^p + \cdots) \beta^e,$$
provided that $d_0 \geq 1$ and $d_i \neq \beta - 1$ for at least one $i \geq 1$. The infinite-sum representation of $x$ shows that $\operatorname{fl}(x)$ must lie between
$$\pm (d_0 + d_1/\beta + \cdots + d_{p-1}/\beta^{p-1}) \beta^e - \beta^{-(p-1)}\beta^e$$
and
$$\pm (d_0 + d_1/\beta + \cdots + d_{p-1}/\beta^{p-1}) \beta^e + \beta^{-(p-1)}\beta^e$$
It follows that
$$\vert\operatorname{fl}(x) - x\vert \leq 2\beta^{-(p-1)}\beta^e.$$
To obtain an error bound independent of the choice of $x$, we normalize the difference | Python Code:
import numpy as np
a = np.array([1, 2, 3, 4])
b = np.array([[1, 2, 3, 4]])
c = np.array([[1], [2], [3], [4]])
d = np.array([[1, 2], [3, 4]])
print(a)
print('shape of a: {}'.format(a.shape))
print()
print(b)
print('shape of b: {}'.format(b.shape))
print()
print(c)
print('shape of c: {}'.format(c.shape))
print()
print(d)
print('shape of d: {}'.format(d.shape))
Explanation: 5. Basic Numerical Analysis
5.1. NumPy
5.1.1. The standard Python library for linear algebra is numpy. In this section, we cover just enough of the numpy API to implement a few algorithms in the following sections.
5.1.2. The basic object in numpy is ndarray, an $n$-dimensional generalization of Python's list.
End of explanation
print(b)
print('shape of b: {}'.format(b.shape))
print()
print(b.transpose())
print('shape of b.transpose(): {}'.format(b.transpose().shape))
Explanation: The shape of an ndarray gives us the dimensions. b is a 1-by-4 matrix, or a row vector. c is a 4-by-1 matrix, or a column vector. d is a 2-by-2 matrix.
5.1.3. Entering a column vector requires many brackets, which can be tedious. We can instead use transpose.
End of explanation
print(b)
print('shape of b: {}'.format(b.shape))
print()
print(b.reshape((2,2)))
print('shape of b.reshape((2,2)): {}'.format(b.reshape((2,2)).shape))
print()
print(b.reshape((4,1)))
print('shape of b.reshape((4,1)): {}'.format(b.reshape((4,1)).shape))
Explanation: Similarly, a matrix can be entered by first typing out a one-dimensional array and then putting the array through reshape:
End of explanation
print(np.arange(5))
print()
print(np.arange(2, 8))
print()
print(np.arange(2, 15, 3))
print()
print(np.eye(1))
print()
print(np.eye(2))
print()
print(np.eye(3))
print()
print(np.zeros(1))
print()
print(np.zeros(2))
print()
print(np.zeros(3))
print()
Explanation: 5.1.4. There are a number of pre-built functions for special kinds of vectors and matrices.
End of explanation
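A few other constructors follow the same pattern; these particular calls are extra examples beyond the ones shown above:
print(np.ones((2, 3)))
print()
print(np.linspace(0, 1, 5))
print()
print(np.full((2, 2), 7))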
x = np.array([[2, 3], [5, 7]])
y = np.array([[1, -1], [-1, 1]])
print(np.dot(x,y))
print()
print(x.dot(y))
Explanation: 5.1.5. Matrix multiplications are performed via dot:
End of explanation
import sys
version_major, version_minor = sys.version_info[0:2]
if version_major >= 3 and version_minor >= 5:
print(x @ y)
else:
print('unsupported operation')
Explanation: If you are running Python 3.5 or higher, the binary operator @ may be used to denote matrix multiplication.
End of explanation
print(np.array([1, 2, 3]) + np.array([3,4,5]))
print()
print(np.array([[4,3],[2,1]]) - np.array([[1,1],[1,1]]))
print()
print(np.array([[1, 2, 3], [4, 5, 6]]) * np.array([[1, 2, 1], [3, -1, -1]]))
print()
print(np.array([[1], [3]]) / np.array([[2], [2]]))
Explanation: 5.1.6. ndarray supports coordinatewise operations. Observe, in particular, that * does not result in matrix multiplication.
End of explanation
3 * np.array([[2,4,3], [1,2,5], [-1, -1, -1]])
Explanation: Operations with mismatching dimensions sometimes result in legitimate operation, via broadcasting. For example, scalar multiplication works fine:
End of explanation
x = np.array([5, -1, 3])
y = np.arange(9).reshape((3, 3))
print(y)
print()
print(x.reshape((3, 1)) + y)
print()
print(x.reshape((1, 3)) + y)
Explanation: Row-wise operations and column-wise operations are also possible:
End of explanation
e = 1.0
while (1.0 + 0.5 * e) != 1.0:
e = 0.5 * e
print(e)
Explanation: 5.2. Floating-Point Arithmetic
5.2.1. The real and complex number systems that we use in mathematics unfortunately cannot be stored in a computer. On a computer, we use a finite approximation known as the floating-point number system.
The floating-point number system in base $\beta$ with precision $p$, minimum exponent $e_\min$, and maximum exponent $e_\max$ is the discrete subset $\mathscr{F} = \mathscr{F}(\beta,p,e_\min,e_\max)$ consisting of 0, $\infty$, $-\infty$, NaN, and numbers of the form
$$\pm(d_0 + d_1 / \beta + \cdots + d_{p-1} / \beta^{p-1}) \beta^e,$$
where
- $d_0,\ldots,d_{p-1},e$ are integers,
- $1 \leq d_0 < \beta$,
- $0 \leq d_i < \beta$ for all $1 \leq i \leq p-1$, and
- $e_\min \leq e \leq e_\max$.
Given a floating-point number of the above form, we call the number $d_0 + d_1/\beta + \cdots + d_{p-1}/\beta^{p-1}$ the significand, or the mantissa. The number $e$ is called the exponent. The sign $\pm$ is called, naturally, the sign.
The condition $d_0 \geq 1$ makes the floating-point representation of a number unique. For this reason, a floating-point number with $d_0 \geq 1$ is called normalized.
5.2.2. How do we compute the floating-point representation $\operatorname{fl}(x)$ of a real number $x$?
It makes sense to define $\operatorname{fl}(x)$ as the floating-point number nearest to $x$. To approximate $\operatorname{fl}(x)$, we observe that $x$ admits a unique infinite-sum representation
$$x = \pm (d_0 + d_1/\beta + \cdots + d_{p-1}/\beta^{p-1} + d_p /\beta^p + \cdots) \beta^e,$$
provided that $d_0 \geq 1$ and $d_i \neq \beta - 1$ for at least one $i \geq 1$. The infinite-sum representation of $x$ shows that $\operatorname{fl}(x)$ must lie between
$$\pm (d_0 + d_1/\beta + \cdots + d_{p-1}/\beta^{p-1}) \beta^e - \beta^{-(p-1)}\beta^e$$
and
$$\pm (d_0 + d_1/\beta + \cdots + d_{p-1}/\beta^{p-1}) \beta^e + \beta^{-(p-1)}\beta^e$$
It follows that
$$\vert\operatorname{fl}(x) - x\vert \leq 2\beta^{-(p-1)}\beta^e.$$
To obtain an error bound independent of the choice of $x$, we normalize the difference:
$$\delta_x = \frac{\vert \operatorname{fl}(x) - x\vert}{\vert x \vert}.$$
Since $d_0$ in the infinite-series representation of $x$ must at least be 1, it follows that $\vert x \vert \geq \beta^e$. We now see that
$$\delta_x \leq \frac{2 \beta^{-(p-1)}\beta^e}{\beta^e} = 2 \beta^{-(p-1)}$$
regardless of the choice of $x$.
Since there exists at least one bound independent of the choice of $x$, we can find the smallest number $\epsilon_{\beta, p}$ such that
$$\delta_x \leq \epsilon_{\beta, p}$$
for every real number $x$. The bound $\epsilon_{\beta, p}$ is called the machine epsilon with respect to base $\beta$ and precision $p$.
Instead of the optimal bound, many references settle for the bound
$$\delta_x \leq \frac{1}{2} \beta^{-(p-1)},$$
which is still four times better than the crude bound we have obtained above. It is common to see this quantity referred to as the machine epsilon.
5.2.3. We typically take the base $\beta$ to be 2. For notational convenience, let us write
$$(a_k \ldots a_1.b_1 \ldots b_m)_2$$
to denote the sum
$$\sum_{i=1}^k a_i 2^{i-1} + \sum_{j=1}^m b_j 2^{-j}.$$
A typical binary representation of a floating-point number is
$$\underbrace{s}_{\text{sign}} \mid \underbrace{e_1 \cdots e_q}_{\text{exponent}} \mid \underbrace{d_1 \cdots d_{p-1}}_{\text{significand}},$$
which codifies
$$\pm(1.d_1 \ldots d_{p-1})_2 \times 2^{(e_1,\ldots,e_q)_2 - (2^{q-1} - 1)}$$
Why is $d_0$ not included? Since $\beta = 2$, normalization implies that $d_0$ always equals 1. There is no need to record it.
The number of bits available for the exponent determines $e_\min$ and $e_\max$. The exponent is stored in a biased (excess) representation. If $e_1 = \cdots = e_q = 1$, then the above binary representation codifies $\infty$ if all significand bits are zero, and NaN otherwise. This practice exists to ensure consistent computational results across machines.
5.2.4. The IEEE Standard for Floating-Point Arithmetic (IEEE 754) stipulates how a floating-point number system should be implemented in order to guarantee consistent computational results across machines.
The single format uses 1 bit for the sign, 8 bits for the exponent, and 23 bits for the significand, totalling 32 bits. The double format uses 1 bit for sign, 11 bits for the exponent, and 52 bits for the significand, totalling 64 bits.
The floating-point representation of real numbers follows rounding modes specified in the standard. The most popular rounding mode is, of course, rounding to the nearest floating-point number (§5.2.3). The machine epsilon for the double format is approximately $2^{-52} \approx 10^{-16}$. We can check this with a simple binary search:
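(As a cross-check on the halving loop above, numpy also reports these spacings directly; a small sketch using the np alias imported earlier:)
print(np.finfo(np.float64).eps)   # about 2.22e-16 for the double format
print(np.finfo(np.float32).eps)   # about 1.19e-07 for the single format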
End of explanation |
12,592 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Learn the alphabet
http://machinelearningmastery.com/understanding-stateful-lstm-recurrent-neural-networks-python-keras/
Step1: Naive LSTM for Learning One-Char to One-Char Mapping
Let’s start off by designing a simple LSTM to learn how to predict the next character in the alphabet given the context of just one character.
We will frame the problem as a random collection of one-letter input to one-letter output pairs. As we will see this is a difficult framing of the problem for the LSTM to learn.
Let’s define an LSTM network with 32 units and a single output neuron with a softmax activation function for making predictions. Because this is a multi-class classification problem, we can use the log loss function (called “categorical_crossentropy” in Keras), and optimize the network using the ADAM optimization function.
The model is fit over 500 epochs with a batch size of 1.
Step2: We can see that this problem is indeed difficult for the network to learn.
The reason is, the poor LSTM units do not have any context to work with. Each input-output pattern is shown to the network in a random order and the state of the network is reset after each pattern (each batch where each batch contains one pattern).
This is abuse of the LSTM network architecture, treating it like a standard multilayer Perceptron.
Naive LSTM for a Three-Char Feature Window to One-Char Mapping
A popular approach to adding more context to data for multilayer Perceptrons is to use the window method.
This is where previous steps in the sequence are provided as additional input features to the network. We can try the same trick to provide more context to the LSTM network.
Step3: Stateful LSTM for a One-Char to One-Char Mapping
We have seen that we can break-up our raw data into fixed size sequences and that this representation can be learned by the LSTM, but only to learn random mappings of 3 characters to 1 character.
We have also seen that we can pervert batch size to offer more sequence to the network, but only during training.
Ideally, we want to expose the network to the entire sequence and let it learn the inter-dependencies, rather than us define those dependencies explicitly in the framing of the problem.
We can do this in Keras by making the LSTM layers stateful and manually resetting the state of the network at the end of the epoch, which is also the end of the training sequence.
This is truly how the LSTM networks are intended to be used. We find that by allowing the network itself to learn the dependencies between the characters, we need a smaller network (half the number of units) and fewer training epochs (almost half).
We first need to define our LSTM layer as stateful. In so doing, we must explicitly specify the batch size as a dimension on the input shape. This also means that when we evaluate the network or make predictions, we must also specify and adhere to this same batch size. This is not a problem now as we are using a batch size of 1. This could introduce difficulties when making predictions when the batch size is not one as predictions will need to be made in batch and in sequence.
Step4: LSTM with Variable-Length Input to One-Char Output
In the previous section, we discovered that the Keras “stateful” LSTM was really only a shortcut to replaying the first n-sequences, but didn’t really help us learn a generic model of the alphabet.
In this section we explore a variation of the “stateless” LSTM that learns random subsequences of the alphabet, in an effort to build a model that can be given arbitrary letters or subsequences of letters and predict the next letter in the alphabet.
Firstly, we are changing the framing of the problem. To simplify, we will define a maximum input sequence length and set it to a small value like 5 to speed up training. This defines the maximum length of the subsequences of the alphabet that will be drawn for training. In extensions, this could just as easily be set to the full alphabet (26) or longer if we allow looping back to the start of the sequence.
We also need to define the number of random sequences to create, in this case 1000. This too could be more or less. I expect fewer patterns are actually required. | Python Code:
import numpy
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import LSTM
from keras.utils import np_utils
# fix random seed for reproducibility
numpy.random.seed(7)
# define the raw dataset
alphabet = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
# create mapping of characters to integers (0-25) and the reverse
char_to_int = dict((c, i) for i, c in enumerate(alphabet))
int_to_char = dict((i, c) for i, c in enumerate(alphabet))
# prepare the dataset of input to output pairs encoded as integers
seq_length = 1
dataX = []
dataY = []
def create_XY(seq_length, alphabet, dataX, dataY):
for i in range(0, len(alphabet) - seq_length, 1):
seq_in = alphabet[i:i + seq_length]
seq_out = alphabet[i + seq_length]
dataX.append([char_to_int[char] for char in seq_in])
dataY.append(char_to_int[seq_out])
print(seq_in, '->', seq_out)
create_XY(seq_length, alphabet, dataX, dataY)
dataX, dataY
len(dataX)
# reshape X to be [samples, time steps, features]
X = numpy.reshape(dataX, (len(dataX), seq_length, 1))
X.shape
X[0:3]
# normalize
X = X / float(len(alphabet))
# one hot encode the output variable
y = np_utils.to_categorical(dataY)
Explanation: Learn the alphabet
http://machinelearningmastery.com/understanding-stateful-lstm-recurrent-neural-networks-python-keras/
End of explanation
# create and fit the model
model = Sequential()
model.add(LSTM(32, input_shape=(X.shape[1], X.shape[2])))
model.add(Dense(y.shape[1], activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(X, y, nb_epoch=500, batch_size=1, verbose=2)
# summarize performance of the model
scores = model.evaluate(X, y, verbose=0)
print("Model Accuracy: %.2f%%" % (scores[1]*100))
# demonstrate some model predictions
def predict(dataX):
for pattern in dataX:
x = numpy.reshape(pattern, (1, len(pattern), 1))
print(x.shape)
x = x / float(len(alphabet))
prediction = model.predict(x, verbose=0)
index = numpy.argmax(prediction)
result = int_to_char[index]
seq_in = [int_to_char[value] for value in pattern]
print(seq_in, "->", result)
predict(dataX)
Explanation: Naive LSTM for Learning One-Char to One-Char Mapping
Let’s start off by designing a simple LSTM to learn how to predict the next character in the alphabet given the context of just one character.
We will frame the problem as a random collection of one-letter input to one-letter output pairs. As we will see this is a difficult framing of the problem for the LSTM to learn.
Let’s define an LSTM network with 32 units and a single output neuron with a softmax activation function for making predictions. Because this is a multi-class classification problem, we can use the log loss function (called “categorical_crossentropy” in Keras), and optimize the network using the ADAM optimization function.
The model is fit over 500 epochs with a batch size of 1.
End of explanation
# prepare the dataset of input to output pairs encoded as integers
seq_length = 3
dataX, dataY = [], []
create_XY(seq_length, alphabet, dataX, dataY)
# reshape X to be [samples, time steps, features]
X = numpy.reshape(dataX, (len(dataX), seq_length, 1))
X.shape
X[0:3]
# normalize
X = X / float(len(alphabet))
# one hot encode the output variable
y = np_utils.to_categorical(dataY)
# reshape X to be [samples, time steps, features]
X = numpy.reshape(dataX, (len(dataX), seq_length, 1))
X.shape
X[0:3]
# normalize
X = X / float(len(alphabet))
# one hot encode the output variable
y = np_utils.to_categorical(dataY)
# create and fit the model
model = Sequential()
model.add(LSTM(32, input_shape=(X.shape[1], X.shape[2])))
model.add(Dense(y.shape[1], activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(X, y, nb_epoch=500, batch_size=1, verbose=2)
# summarize performance of the model
scores = model.evaluate(X, y, verbose=0)
print("Model Accuracy: %.2f%%" % (scores[1]*100))
predict(dataX)
Explanation: We can see that this problem is indeed difficult for the network to learn.
The reason is, the poor LSTM units do not have any context to work with. Each input-output pattern is shown to the network in a random order and the state of the network is reset after each pattern (each batch where each batch contains one pattern).
This is abuse of the LSTM network architecture, treating it like a standard multilayer Perceptron.
Naive LSTM for a Three-Char Feature Window to One-Char Mapping
A popular approach to adding more context to data for multilayer Perceptrons is to use the window method.
This is where previous steps in the sequence are provided as additional input features to the network. We can try the same trick to provide more context to the LSTM network.
End of explanation
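As an aside, the same three characters could instead be presented as three features on a single time step rather than three time steps of one feature; a sketch of that alternative reshape (not used in the runs below):
# Alternative framing: one time step carrying seq_length features
X_window = numpy.reshape(dataX, (len(dataX), 1, seq_length))
X_window.shape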
seq_length = 1
dataX = []
dataY = []
create_XY(seq_length, alphabet, dataX, dataY)
# reshape X to be [samples, time steps, features]
X = numpy.reshape(dataX, (len(dataX), seq_length, 1))
# normalize
X = X / float(len(alphabet))
# one hot encode the output variable
y = np_utils.to_categorical(dataY)
# create and fit the model
batch_size = 1
model = Sequential()
model.add(LSTM(16, batch_input_shape=(batch_size, X.shape[1], X.shape[2]), stateful=True))
model.add(Dense(y.shape[1], activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
for i in range(300):
model.fit(X, y, nb_epoch=1, batch_size=batch_size, verbose=2, shuffle=False)
model.reset_states()
# summarize performance of the model
scores = model.evaluate(X, y, batch_size=batch_size, verbose=0)
model.reset_states()
print("Model Accuracy: %.2f%%" % (scores[1]*100))
# demonstrate some model predictions
seed = [char_to_int[alphabet[0]]]
for i in range(0, len(alphabet)-1):
x = numpy.reshape(seed, (1, len(seed), 1))
x = x / float(len(alphabet))
prediction = model.predict(x, verbose=0)
index = numpy.argmax(prediction)
print(int_to_char[seed[0]], "->", int_to_char[index])
seed = [index]
model.reset_states()
# demonstrate a random starting point
letter = "K"
seed = [char_to_int[letter]]
print("New start: ", letter)
for i in range(0, 5):
x = numpy.reshape(seed, (1, len(seed), 1))
x = x / float(len(alphabet))
prediction = model.predict(x, verbose=0)
index = numpy.argmax(prediction)
print(int_to_char[seed[0]], "->", int_to_char[index])
seed = [index]
model.reset_states()
Explanation: Stateful LSTM for a One-Char to One-Char Mapping
We have seen that we can break-up our raw data into fixed size sequences and that this representation can be learned by the LSTM, but only to learn random mappings of 3 characters to 1 character.
We have also seen that we can pervert batch size to offer more sequence to the network, but only during training.
Ideally, we want to expose the network to the entire sequence and let it learn the inter-dependencies, rather than us define those dependencies explicitly in the framing of the problem.
We can do this in Keras by making the LSTM layers stateful and manually resetting the state of the network at the end of the epoch, which is also the end of the training sequence.
This is truly how the LSTM networks are intended to be used. We find that by allowing the network itself to learn the dependencies between the characters, we need a smaller network (half the number of units) and fewer training epochs (almost half).
We first need to define our LSTM layer as stateful. In so doing, we must explicitly specify the batch size as a dimension on the input shape. This also means that when we evaluate the network or make predictions, we must also specify and adhere to this same batch size. This is not a problem now as we are using a batch size of 1. This could introduce difficulties when making predictions when the batch size is not one as predictions will need to be made in batch and in sequence.
End of explanation
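With a batch size other than 1 you would also have to pass the same batch size at prediction time; a minimal sketch of that call (here it is redundant because batch_size is 1):
# Predictions must use the same batch size the stateful model was built with
prediction = model.predict(x, batch_size=batch_size, verbose=0)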
# prepare the dataset of input to output pairs encoded as integers
num_inputs = 1000
max_len = 5
dataX = []
dataY = []
for i in range(num_inputs):
start = numpy.random.randint(len(alphabet)-2)
end = numpy.random.randint(start, min(start+max_len,len(alphabet)-1))
sequence_in = alphabet[start:end+1]
sequence_out = alphabet[end + 1]
dataX.append([char_to_int[char] for char in sequence_in])
dataY.append(char_to_int[sequence_out])
print(sequence_in, '->', sequence_out)
from keras.preprocessing.sequence import pad_sequences
X = pad_sequences(dataX, max_len)
X.shape
X[:3]
# reshape X to be [samples, time steps, features]
X = numpy.reshape(X, (X.shape[0], max_len, 1))
# normalize
X = X / float(len(alphabet))
# one hot encode the output variable
y = np_utils.to_categorical(dataY)
# create and fit the model
batch_size = 1
model = Sequential()
model.add(LSTM(32, input_shape=(X.shape[1], 1)))
model.add(Dense(y.shape[1], activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(X, y, nb_epoch=50, batch_size=batch_size, verbose=2)
# summarize performance of the model
scores = model.evaluate(X, y, verbose=0)
print("Model Accuracy: %.2f%%" % (scores[1]*100))
# demonstrate some model predictions
for i in range(20):
pattern_index = numpy.random.randint(len(dataX))
pattern = dataX[pattern_index]
x = pad_sequences([pattern], maxlen=max_len)
x = numpy.reshape(x, (1, max_len, 1))
x = x / float(len(alphabet))
prediction = model.predict(x, verbose=0)
index = numpy.argmax(prediction)
result = int_to_char[index]
seq_in = [int_to_char[value] for value in pattern]
print(seq_in, "->", result)
Explanation: LSTM with Variable-Length Input to One-Char Output
In the previous section, we discovered that the Keras “stateful” LSTM was really only a shortcut to replaying the first n-sequences, but didn’t really help us learn a generic model of the alphabet.
In this section we explore a variation of the “stateless” LSTM that learns random subsequences of the alphabet, in an effort to build a model that can be given arbitrary letters or subsequences of letters and predict the next letter in the alphabet.
Firstly, we are changing the framing of the problem. To simplify, we will define a maximum input sequence length and set it to a small value like 5 to speed up training. This defines the maximum length of the subsequences of the alphabet that will be drawn for training. In extensions, this could just as easily be set to the full alphabet (26) or longer if we allow looping back to the start of the sequence.
We also need to define the number of random sequences to create, in this case 1000. This too could be more or less. I expect fewer patterns are actually required.
End of explanation |
12,593 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Pixels to Tabular Data
Agricultural Statistical Analysis Use Case
In this tutorial we convert the pixel values of clipped NDVI imagery into tabular summary statistics that can be compared across field blocks.
The use case addressed in this tutorial is
Step1: Get Field and Sample Blocks AOIs
Step2: Part 2
Step3: Now that we have found the images that match the search criteria, let's make sure all of the images fully contain the field AOI (we don't want to just get half of the field) and then let's see what the image footprint and the AOI look like together.
Step4: Whoa look! That AOI is tiny relative to the image footprint. We don't want to wrangle all those pixels outside of the AOI. We definitely want to clip the imagery footprints to the AOI.
Step 2
Step5: Step 2.2
Step6: Step 2.3
Step7: Step 3
Step8: Step 3.2
Step9: Step 4
Step10: Step 5
Step11: Step 6
Step12: The field AOI isn't an exact square so there are some blank pixels. Let's mask those out. We can use the UDM for that.
Step13: That looks better! We now have the NDVI values for the pixels within the field AOI.
Now, let's make that a little easier to generate.
Step14: In the images above, we are just using the default visualization for the imagery.
But this is NDVI imagery. Values are given between -1 and 1. Let's see how this looks if we use visualization specific to NDVI.
Step15: Well, the contrast has certainly gone down. This is because the NDVI values within the field are pretty uniform. That's what we would expect for a uniform field! So it is actually good news. The NDVI values are pretty low, ranging from 0.16 to just above 0.22. The time range used for this search is basically the month of April. This is pretty early in the growth season and so likely the plants are still tiny seedlings. So even the low NDVI value makes sense here.
Part 3
Step16: Step 2
Step17: Ok, great! The first image shows what would result from sampling 13 pixels. The second image is for nearly all the pixels and demonstrates that the mask is taken into account with sampling.
Now let's get down to calculating the summary statistics and placing them in a table entry.
Step 3
Step18: Okay! We have statistics for each block in a table. Yay! Okay, now let's move on to running this across a time series.
Step 4
Step19: Okay! We have 165 rows, which is (number of blocks) x (number of images). It all checks out.
Let's check out these stats in some plots!
In these plots, color indicates the blocks. The blocks are colored red, blue, green, black, and purple. The x axis is acquisition time. So each 'column' of colored dots is the block statistic value for a given image. | Python Code:
import datetime
import json
import os
from pathlib import Path
from pprint import pprint
import shutil
import time
from zipfile import ZipFile
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from planet import api
from planet.api import downloader, filters
import pyproj
import rasterio
from rasterio import plot
from rasterio.mask import raster_geometry_mask
from shapely.geometry import shape, MultiPolygon
from shapely.ops import transform
Explanation: Pixels to Tabular Data
Agricultural Statistical Analysis Use Case
This tutorial shows how to turn imagery pixels into tabular data for statistical analysis.
The use case addressed in this tutorial is:
As an agriculture customer, I'd like to create an imagery pipeline that provides for trialing different fungicides by ordering Planet imagery within a single field (AOI), cutting the imagery into multiple field blocks, filtering based on cloud coverage within the blocks, and comparing values across blocks in two ways. First, comparison is performed by extracting median, mean, variance NDVI values for each day (using random point sampling) in each block. Second, comparison is performed by random point selection in each block.
Introduction
Two things are interesting about this use case. First, we are gridding the AOI into blocks. Second, we are performing some calculations with the output to compare results across different blocks in the field.
Implementation
For this use case, the area of interest is specified (the field) but the time range is not. For time-series analysis the daily coverage of PS satellites is ideal. Because we are only looking at a field, we want to clip the images to the field area of interest to avoid unnecessary pixel wrangling. Also, we don't need all the bands; we are only interested in NDVI. We can use the Orders API to help us clip the images and calculate NDVI. Finally, we will need to implement a bit of functionality to filter to images that have no unusable pixels within the area of interest and for the comparisons across the field blocks.
To summarize, these are the major steps:
1. Part 1: Setup
1. Part 2: Get Field NDVI
1. Part 3: Sample Field Blocks
Part 1: Setup
In this section, we set up the notebook and define the field and field block geometries.
Import Dependencies
End of explanation
def load_geojson(filename):
with open(filename, 'r') as f:
return json.load(f)
# this feature comes from within the sacramento_crops aoi
# it is the first feature in 'ground-truth-test.geojson', which
# was prepared in crop-classification/datasets-prepare.ipynb
field_filename = os.path.join('pre-data', 'field.geojson')
field = load_geojson(field_filename)
pprint(field)
# visualize field and determine size in acres
print('{} acres'.format(field['properties']['ACRES']))
field_aoi = field['geometry']
shape(field_aoi)
# visualize field and sample blocks
# these blocks were drawn by hand randomly for this demo
# they don't actually represent test field blocks
blocks = load_geojson(os.path.join('pre-data', 'blocks.geojson'))
block_aois = [b['geometry'] for b in blocks]
MultiPolygon([shape(a) for a in [field_aoi] + block_aois])
Explanation: Get Field and Sample Blocks AOIs
End of explanation
# if your Planet API Key is not set as an environment variable, you can paste it below
API_KEY = os.environ.get('PL_API_KEY', 'PASTE_YOUR_KEY_HERE')
client = api.ClientV1(api_key=API_KEY)
# create an api request from the search specifications
# relax the cloud cover requirement as filtering will be done within the aoi
def build_request(aoi_geom, start_date, stop_date):
'''build a data api search request for clear PSScene imagery'''
query = filters.and_filter(
filters.geom_filter(aoi_geom),
filters.date_range('acquired', gt=start_date),
filters.date_range('acquired', lt=stop_date)
)
return filters.build_search_request(query, ['PSScene'])
def search_data_api(request, client, limit=500):
result = client.quick_search(request)
# this returns a generator
return result.items_iter(limit=limit)
# define test data for the filter
test_start_date = datetime.datetime(year=2019,month=4,day=1)
test_stop_date = datetime.datetime(year=2019,month=5,day=1)
request = build_request(field_aoi, test_start_date, test_stop_date)
print(request)
items = list(search_data_api(request, client))
print('{} images match the search criteria.'.format(len(items)))
# uncomment to see what an item looks like
# pprint(items[0])
Explanation: Part 2: Get Field NDVI
In this section, we use the Data and Orders APIs to find images that overlap the field AOI in the specified time period and then to download the NDVI values of pixels within the field for all of the images. Once the images are downloaded, we use the UDM2 asset to filter to images that have no unusable pixels within the AOI. Finally, we get to check out what the NDVI of the field looks like!
Step 1: Search Data API
The goal of this step is to get the scene ids that meet the search criteria for this use case.
End of explanation
footprints = [shape(i['geometry']) for i in items]
# make sure all footprints contain the field aoi (that is, no partial overlaps)
for f in footprints:
assert f.contains(shape(field_aoi))
# visualize aoi and footprint
MultiPolygon([shape(field_aoi), footprints[0]])
Explanation: Now that we have found the images that match the search criteria, let's make sure all of the images fully contain the field AOI (we don't want to just get half of the field) and then let's see what the image footprint and the AOI look like together.
End of explanation
def get_tools(aoi_geom):
# clip to AOI
clip_tool = {'clip': {'aoi': aoi_geom}}
# convert to NDVI
ndvi_tool = {'bandmath': {
"pixel_type": "32R",
"b1": "(b4 - b3) / (b4+b3)"
}}
tools = [clip_tool, ndvi_tool]
return tools
Explanation: Whoa look! That AOI is tiny relative to the image footprint. We don't want to wrangle all those pixels outside of the AOI. We definitely want to clip the imagery footprints to the AOI.
Step 2: Submit Order
Now that we have the scene ids, we can create the order. The output of this step is a single zip file that contains all of the scenes that meet our criteria.
The tools we want to apply are: clip imagery to AOI and convert imagery to NDVI.
Step 2.1: Define Toolchain Tools
End of explanation
def build_order(ids, name, aoi_geom):
# specify the PSScene 4-Band surface reflectance product
# make sure to get the *_udm2 bundle so you get the udm2 product
# note: capitalization really matters in item_type when using planet client orders api
item_type = 'PSScene'
bundle = 'analytic_sr_udm2'
orders_request = {
'name': name,
'products': [{
'item_ids': ids,
'item_type': item_type,
'product_bundle': bundle
}],
'tools': get_tools(aoi_geom),
'delivery': {
'single_archive': True,
'archive_filename':'{{name}}_{{order_id}}.zip',
'archive_type':'zip'
},
'notifications': {
'email': False
},
}
return orders_request
# uncomment to see what an order request would look like
# pprint(build_order(['id'], 'demo', test_aoi_geom), indent=4)
ids = [i['id'] for i in items]
name = 'pixels_to_tabular'
order_request = build_order(ids, name, field_aoi)
Explanation: Step 2.2: Build Order Requests
End of explanation
def create_order(order_request, client):
orders_info = client.create_order(order_request).get()
return orders_info['id']
order_id = create_order(order_request, client)
order_id
Explanation: Step 2.3: Submit Order
End of explanation
def poll_for_success(order_id, client, num_loops=50):
count = 0
while(count < num_loops):
count += 1
order_info = client.get_individual_order(order_id).get()
state = order_info['state']
print(state)
success_states = ['success', 'partial']
if state == 'failed':
            raise Exception(order_info)
elif state in success_states:
break
time.sleep(10)
poll_for_success(order_id, client)
Explanation: Step 3: Download Orders
Step 3.1: Wait Until Orders are Successful
Before we can download the orders, they have to be prepared on the server.
End of explanation
data_dir = os.path.join('data', 'field_statistical_analysis')
# make the download directory if it doesn't exist
Path(data_dir).mkdir(parents=True, exist_ok=True)
def poll_for_download(dest, endswith, num_loops=50):
count = 0
while(count < num_loops):
count += 1
matched_files = (f for f in os.listdir(dest)
if os.path.isfile(os.path.join(dest, f))
and f.endswith(endswith))
match = next(matched_files, None)
if match:
match = os.path.join(dest, match)
print('downloaded')
break
else:
print('waiting...')
time.sleep(10)
return match
def download_order(order_id, dest, client, limit=None):
'''Download an order by given order ID'''
# this returns download stats but they aren't accurate or informative
# so we will look for the downloaded file on our own.
dl = downloader.create(client, order=True)
urls = client.get_individual_order(order_id).items_iter(limit=limit)
dl.download(urls, [], dest)
endswith = '{}.zip'.format(order_id)
filename = poll_for_download(dest, endswith)
return filename
downloaded_file = download_order(order_id, data_dir, client)
downloaded_file
Explanation: Step 3.2: Run Download
For this step we will use the planet python orders API because the CLI doesn't do a complete download with large orders.
End of explanation
def unzip(filename, overwrite=False):
location = Path(filename)
zipdir = location.parent / location.stem
if os.path.isdir(zipdir):
if overwrite:
print('{} exists. overwriting.'.format(zipdir))
shutil.rmtree(zipdir)
else:
raise Exception('{} already exists'.format(zipdir))
with ZipFile(location) as myzip:
myzip.extractall(zipdir)
return zipdir
zipdir = unzip(downloaded_file)
zipdir
def get_unzipped_files(zipdir):
filedir = zipdir / 'files'
filenames = os.listdir(filedir)
return [filedir / f for f in filenames]
file_paths = get_unzipped_files(zipdir)
file_paths[0]
Explanation: Step 4: Unzip Order
In this section, we will unzip the order into a directory named after the downloaded zip file.
End of explanation
udm2_files = [f for f in file_paths if 'udm2' in str(f)]
# we want to find pixels that are inside the footprint but cloudy
# the easiest way to do this is to use the udm values (band 8)
# https://developers.planet.com/docs/data/udm-2/
# the UDM values are given in
# https://assets.planet.com/docs/Combined-Imagery-Product-Spec-Dec-2018.pdf
# Bit 0: blackfill (footprint)
# Bit 1: cloud covered
def read_udm(udm2_filename):
with rasterio.open(udm2_filename) as img:
# band 8 is the udm band
return img.read(8)
def get_cloudy_percent(udm_band):
blackfill = udm_band == int('1', 2)
footprint_count = udm_band.size - np.count_nonzero(blackfill)
    cloudy = udm_band == int('10', 2)  # bit 1 set -> cloud covered
cloudy_count = np.count_nonzero(cloudy)
return (cloudy_count / footprint_count)
get_cloudy_percent(read_udm(udm2_files[0]))
clear_udm2_files = [f for f in udm2_files
if get_cloudy_percent(read_udm(f)) < 0.00001]
print(len(clear_udm2_files))
def get_id(udm2_filename):
return udm2_filename.name.split('_3B')[0]
clear_ids = [get_id(f) for f in clear_udm2_files]
clear_ids[0]
Explanation: Step 5: Filter by Cloudiness
In this section, we will filter images that have any clouds within the AOI. We use the Unusable Data Mask (UDM2) to determine cloud pixels.
End of explanation
def get_img_path(img_id, file_paths):
filename = '{}_3B_AnalyticMS_SR_clip_bandmath.tif'.format(img_id)
return next(f for f in file_paths if f.name == filename)
def read_ndvi(img_filename):
with rasterio.open(img_filename) as img:
# ndvi is a single-band image
band = img.read(1)
return band
plot.show(read_ndvi(get_img_path(clear_ids[0], file_paths)))
Explanation: Step 6: Get Clear Images
End of explanation
def get_udm2_path(img_id, file_paths):
filename = '{}_3B_udm2_clip.tif'.format(img_id)
return next(f for f in file_paths if f.name == filename)
def read_blackfill(udm2_filename):
with rasterio.open(udm2_filename) as img:
# the last band is the udm band
udm_band = img.read(8)
blackfill = udm_band == int('1', 2)
return blackfill
plot.show(read_blackfill(get_udm2_path(clear_ids[0], file_paths)))
# there is an issue where some udms aren't the same size as the images
# to deal with this just cut off any trailing rows/columns
# this isn't ideal as it can result in up to one pixel shift in x or y direction
def crop(img, shape):
return img[:shape[0], :shape[1]]
def read_masked_ndvi(img_filename, udm2_filename):
ndvi = read_ndvi(img_filename)
blackfill = read_blackfill(udm2_filename)
# crop image and mask to same size
img_shape = min(ndvi.shape, blackfill.shape)
ndvi = np.ma.array(crop(ndvi, img_shape), mask=crop(blackfill, img_shape))
return ndvi
plot.show(read_masked_ndvi(get_img_path(clear_ids[0], file_paths),
get_udm2_path(clear_ids[0], file_paths)))
Explanation: The field AOI isn't an exact square so there are some blank pixels. Let's mask those out. We can use the UDM for that.
End of explanation
def read_masked_ndvi_by_id(iid, file_paths):
return read_masked_ndvi(get_img_path(iid, file_paths), get_udm2_path(iid, file_paths))
plot.show(read_masked_ndvi_by_id(clear_ids[0], file_paths))
Explanation: That looks better! We now have the NDVI values for the pixels within the field AOI.
Now, let's make that a little easier to generate.
End of explanation
# we demonstrated visualization in the best practices tutorial
# here, we save space by just importing the functionality
from visual import show_ndvi
# and here's what it looks like when we visualize as ndvi
# (data range -1 to 1). it actually looks worse because the
# pixel value range is so small
show_ndvi(read_masked_ndvi_by_id(clear_ids[0], file_paths))
Explanation: In the images above, we are just using the default visualization for the imagery.
But this is NDVI imagery. Values are given between -1 and 1. Let's see how this looks if we use visualization specific to NDVI.
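The show_ndvi helper comes from the local visual module, which isn't shown here; a minimal sketch of what such a function might look like (an assumption, not the module's actual code):
import matplotlib.pyplot as plt

def show_ndvi_sketch(ndvi, vmin=-1.0, vmax=1.0):
    # display an NDVI band on the full -1..1 scale with a diverging colormap
    fig, ax = plt.subplots()
    im = ax.imshow(ndvi, cmap='RdYlGn', vmin=vmin, vmax=vmax)
    fig.colorbar(im, ax=ax, label='NDVI')
    plt.show()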
End of explanation
def block_aoi_masks(block_aois, ref_img_path):
# find the coordinate reference system of the image
with rasterio.open(ref_img_path) as src:
dst_crs = src.crs
# geojson features (the field block geometries)
# are always given in WGS84
# project these to the image coordinates
wgs84 = pyproj.CRS('EPSG:4326')
project = pyproj.Transformer.from_crs(wgs84, dst_crs, always_xy=True).transform
proj_block_aois = [transform(project, shape(b)) for b in block_aois]
masks = [raster_geometry_mask(src, [b], crop=False)[0]
for b in proj_block_aois]
return masks
ref_img_path = get_img_path(clear_ids[0], file_paths)
block_masks = block_aoi_masks(block_aois, ref_img_path)
ndvi = read_masked_ndvi_by_id(clear_ids[0], file_paths)
fig, ax = plt.subplots(2,3, figsize=(15,10))
axf = ax.flatten()
fig.delaxes(axf[-1])
for i, mask in enumerate(block_masks):
ndvi.mask = mask
plot.show(ndvi, ax=axf[i])
Explanation: Well, the contrast has certainly gone down. This is because the NDVI values within the field are pretty uniform. That's what we would expect for a uniform field! So it is actually good news. The NDVI values are pretty low, ranging from 0.16 to just above 0.22. The time range used for this search is basically the month of April. This is pretty early in the growth season and so likely the plants are still tiny seedlings. So even the low NDVI value makes sense here.
Part 3: Sample Field Blocks
Ok, here is where we convert pixels to tabular data. We do this for one image then we expand to doing this for all images in the time series.
In this section, we want to sample the pixel values within each field block and put the values into a table. For this, we first need to identify the field block pixels. Next, we calculate the median, mean, variance, and random point value for each field block. We put those into a table. And at the end we visualize the results.
Step 1: Get Field Block Pixels
In this step, we find the pixel values that are associated with each field block. To get the field block pixels, we have to project the block geometries into the image coordinates. Then we create masks that just pull the field block pixels from the aoi.
End of explanation
np.random.seed(0) # 0 - make random sampling repeatable, no arg - nonrepeatable
def random_mask_sample(mask, count):
# get shape of unmasked pixels
unmasked = mask == False
unmasked_shape = mask[unmasked].shape
# uniformly sample pixel indices
num_unmasked = unmasked_shape[0]
idx = np.random.choice(num_unmasked, count, replace=False)
# assign uniformly sampled indices to False (unmasked)
random_mask = np.ones(unmasked_shape, dtype=np.bool)
random_mask[idx] = False
# reshape back to image shape and account for image mask
random_sample_mask = np.ones(mask.shape, dtype=np.bool)
random_sample_mask[unmasked] = random_mask
return random_sample_mask
# lets just check out how our random sampling performs
ndvi = read_masked_ndvi_by_id(clear_ids[0], file_paths)
ndvi.mask = random_mask_sample(ndvi.mask, 13)
plot.show(ndvi)
ndvi = read_masked_ndvi_by_id(clear_ids[0], file_paths)
ndvi.mask = random_mask_sample(ndvi.mask, 1300)
plot.show(ndvi)
Explanation: Step 2: Random Sampling
Summary statistics such as mean, mode, and variance will be easy to calculate with the numpy python package. We need to do a little work to get random sampling, however.
End of explanation
def get_stats(ndvi, masks):
def _get_stats(mask, block_number):
block = np.ma.array(ndvi, mask=mask)
mean = np.ma.mean(block)
median = np.ma.median(block)
var = np.ma.var(block)
random_mask = random_mask_sample(block.mask, 1)
random_val = np.ma.mean(np.ma.array(block, mask=random_mask))
return {'block': block_number,
'mean': mean,
'median': median,
'variance': var,
'random': random_val}
data = [_get_stats(m, i) for i, m in enumerate(masks)]
df = pd.DataFrame(data)
return df
ndvi = read_masked_ndvi_by_id(clear_ids[0], file_paths)
get_stats(ndvi, block_masks)
Explanation: Ok, great! The first image shows what would result from sampling 13 pixels. The second image is for nearly all the pixels and demonstrates that the mask is taken into account with sampling.
Now let's get down to calculating the summary statistics and placing them in a table entry.
Step 3: Prepare Table of Summary Statistics
Now that we have all the tools we need, we are ready to calculate summary statistics for each field block and put them into a table. We will calculate the median, mean, variance, and single random point value for each field block.
End of explanation
def get_stats_by_id(iid, block_masks, file_paths):
ndvi = read_masked_ndvi_by_id(iid, file_paths)
ndvi_stats = get_stats(ndvi, block_masks)
acquired = get_acquired(iid)
ndvi_stats['acquired'] = [acquired]*len(block_masks)
return ndvi_stats
def get_acquired(iid):
metadata_path = get_metadata(iid, file_paths)
with open(metadata_path) as src:
md = json.load(src)
return md['properties']['acquired']
def get_metadata(img_id, file_paths):
filename = '{}_metadata.json'.format(img_id)
return next(f for f in file_paths if f.name == filename)
get_stats_by_id(clear_ids[0], block_masks, file_paths)
dfs = [get_stats_by_id(i, block_masks, file_paths) for i in clear_ids]
all_stats = pd.concat(dfs)
all_stats
Explanation: Okay! We have statistics for each block in a table. Yay! Okay, now let's move on to running this across a time series.
Step 4: Perform Time Series Analysis
End of explanation
colors = {0:'red', 1:'blue', 2:'green', 3:'black', 4:'purple'}
df = all_stats
stats = ['mean', 'median', 'random', 'variance']
fig, axes = plt.subplots(2, 2, sharex=True, figsize=(15,15))
# print(dir(axes[0][0]))
for stat, ax in zip(stats, axes.flatten()):
ax.scatter(df['acquired'], df[stat], c=df['block'].apply(lambda x: colors[x]))
ax.set_title(stat)
plt.sca(ax)
plt.xticks(rotation=90)
plt.show()
Explanation: Okay! We have 165 rows, which is (number of blocks)x(number of images). It all checks out.
Let's check out these stats in some plots!
In these plots, color indicates the blocks. The blocks are colored red, blue, green, black, and purple. The x axis is acquisition time. So each 'column' of colored dots is the block statistic value for a given image.
End of explanation |
12,594 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Computational Communication Applications
Introduction to Recommender Systems
王成军
[email protected]
计算传播网 (Computational Communication) http
Step1: 1. User-based filtering
1.0 Finding similar users
Step2: This formula calculates the distance, which will be smaller for people who are more similar. However, you need a function that gives higher values for people who are similar. This can be done by adding 1 to the function (so you don't get a division-by-zero error) and inverting it
Step3: Pearson correlation coefficient
Step4: 1.1 Recommending Items
<img src='./img/usercf.png' width = 700px>
The five users most similar to Toby (Rose, Seymour, Puig, LaSalle, Matthews) and their similarities (0.99, 0.38, 0.89, 0.92, 0.66 respectively)
The three movies these five users have seen (Night, Lady, Luck) and their ratings
For example, Rose rated Night 3.0
S.xNight is the product of the user similarity and that user's rating of the movie
For example, for Rose the similarity to Toby times her rating of Night is 3.0*0.99 = 2.97
Summing these products gives a score for each movie
For example, Night's score is 12.89 = 2.97+1.14+4.02+2.77+1.99
Each movie's score is then weighted by (divided by) the sum of the user similarities
For example, the predicted score of Night is 3.35 = 12.89/3.84
Step5: 2. Item-based filtering
Now you know how to find similar people and recommend products for a given person
But what if you want to see which products are similar to each other?
This is actually the same method we used earlier to determine similarity between people—
Flip the keys and values of the item-user dictionary
Step6: Compute the similarity between items
Step7: Recommend users for an item
Step8: <img src = './img/itemcf1.png' width=800px>
Toby has seen three movies (Snakes, Superman, Dupree), rated 4.5, 4.0, and 1.0 respectively
Table 2-3 gives the similarity between these three movies and three other movies
For example, the similarity between Superman and Night is 0.103
R.xNight is the product of Toby's rating of each movie he has seen and that movie's similarity to Night
For example, 0.412 = 4.0*0.103
So Toby's score for Night can be expressed as 0.818+0.412+0.148 = 1.378
We already know that the sum of the similarities to Night is 0.182+0.103+0.148 = 0.433
So Toby's final predicted rating for Night can be expressed as 1.378/0.433 = 3.183
Step9: <img src = './img/itemcfNetwork.png' width = 700px>
A network representation of the item-based collaborative filtering algorithm
Graph-based models
User behavior can be represented as a bipartite graph, so graph-based algorithms can be applied to recommender systems.
<img src = './img/graphrec.png' width = 500px>
Step10: 3. MovieLens Recommender
MovieLens is a real movie-rating dataset developed by the GroupLens project at the University of Minnesota.
Data download
http
Step11: user-based filtering
Step12: Item-based filtering
Step13: Building Recommendation System with GraphLab
In this notebook we will import GraphLab Create and use it to
train two models that can be used for recommending new songs to users
compare the performance of the two models
Step14: After importing GraphLab Create, we can download data directly from S3. We have placed a preprocessed version of the Million Song Dataset on S3. This data set was used for a Kaggle challenge and includes data from The Echo Nest, SecondHandSongs, musiXmatch, and Last.fm. This file includes data for a subset of 10000 songs.
The CourseTalk dataset
Step15: In order to evaluate the performance of our model, we randomly split the observations in our data set into two partitions
Step16: Popularity model
Create a model that makes recommendations using item popularity. When no target column is provided, the popularity is determined by the number of observations involving each item. When a target is provided, popularity is computed using the item’s mean target value. When the target column contains ratings, for example, the model computes the mean rating for each item and uses this to rank items for recommendations.
One typically wants to initially create a simple recommendation system that can be used as a baseline and to verify that the rest of the pipeline works as expected. The recommender package has several models available for this purpose. For example, we can create a model that predicts songs based on their overall popularity across all users.
Step17: Item similarity Model
Collaborative filtering methods make predictions for a given user based on the patterns of other users' activities. One common technique is to compare items based on their Jaccard similarity. This measurement is a ratio
Step18: Factorization Recommender Model
Create a FactorizationRecommender that learns latent factors for each user and item and uses them to make rating predictions. This includes both standard matrix factorization as well as factorization machines models (in the situation where side data is available for users and/or items). link
Step19: Model Evaluation
It's straightforward to use GraphLab to compare models on a small subset of users in the test_set. The precision-recall plot that is computed shows the benefits of using the similarity-based model instead of the baseline popularity_model
Step20: Now let's ask the item similarity model for song recommendations on several users. We first create a list of users and create a subset of observations, users_ratings, that pertain to these users.
Step21: Next we use the recommend() function to query the model we created for recommendations. The returned object has four columns
Step22: To learn what songs these ids pertain to, we can merge in metadata about each song. | Python Code:
# A dictionary of movie critics and their ratings of a small
# set of movies
critics={'Lisa Rose': {'Lady in the Water': 2.5, 'Snakes on a Plane': 3.5,
'Just My Luck': 3.0, 'Superman Returns': 3.5, 'You, Me and Dupree': 2.5,
'The Night Listener': 3.0},
'Gene Seymour': {'Lady in the Water': 3.0, 'Snakes on a Plane': 3.5,
'Just My Luck': 1.5, 'Superman Returns': 5.0, 'The Night Listener': 3.0,
'You, Me and Dupree': 3.5},
'Michael Phillips': {'Lady in the Water': 2.5, 'Snakes on a Plane': 3.0,
'Superman Returns': 3.5, 'The Night Listener': 4.0},
'Claudia Puig': {'Snakes on a Plane': 3.5, 'Just My Luck': 3.0,
'The Night Listener': 4.5, 'Superman Returns': 4.0,
'You, Me and Dupree': 2.5},
'Mick LaSalle': {'Lady in the Water': 3.0, 'Snakes on a Plane': 4.0,
'Just My Luck': 2.0, 'Superman Returns': 3.0, 'The Night Listener': 3.0,
'You, Me and Dupree': 2.0},
'Jack Matthews': {'Lady in the Water': 3.0, 'Snakes on a Plane': 4.0,
'The Night Listener': 3.0, 'Superman Returns': 5.0, 'You, Me and Dupree': 3.5},
'Toby': {'Snakes on a Plane':4.5,'You, Me and Dupree':1.0,'Superman Returns':4.0}}
critics['Lisa Rose']['Lady in the Water']
critics['Toby']['Snakes on a Plane']=4.5
critics['Toby']
Explanation: Computational Communication Applications
Introduction to Recommender Systems
王成军
[email protected]
计算传播网 (Computational Communication) http://computational-communication.com
Programming Collective Intelligence
Collective intelligence means combining the behavior, preferences, or ideas of a group of people in order to create new insights. It is generally built either on clever algorithms (Netflix, Google) or on users who contribute content (Wikipedia).
Programming collective intelligence emphasizes the former: writing computer programs and building intelligent algorithms that collect and analyze user data to discover new information and even new knowledge.
Netflix
Google
Wikipedia
Toby Segaran, 2007, Programming Collective Intelligence. O'Reilly.
https://github.com/computational-class/programming-collective-intelligence-code/blob/master/chapter2/recommendations.py
Recommender systems
Currently the most common form of intelligent product on the internet.
The transition from the information age to the attention age:
information overload
scarcity of attention
The basic task of a recommender system is to connect users and items, helping users quickly discover useful information and addressing the problem of information overload.
Facing long-tail distributions, it identifies personalized needs and optimizes the allocation of resources.
Types of recommender systems
Social recommendation
Let friends recommend items
Content-based filtering
Recommend new items based on the content of items the user has already consumed, e.g., recommend new films based on the directors and actors of films the user has watched.
Collaborative filtering
Find users whose historical interests match a given user's, and recommend items based on the similarity between these users or between the items they consume.
Collaborative filtering algorithms
Neighborhood-based methods
User-based filtering
Item-based filtering
Latent factor models
Random walk on graphs
Comparing UserCF and ItemCF
UserCF is older: it was used in the Tapestry personalized email system in 1992 and in GroupLens personalized news recommendation in 1994, and was later adopted by Digg.
It recommends items liked by users who share the individual's interests (group hot spots, socialized).
It reflects how popular an item is within the small group the user belongs to.
ItemCF is newer and is used by the e-commerce site Amazon and the DVD rental site Netflix.
It recommends items similar to the items the user liked in the past (historical interests, personalized).
It reflects the continuity of the user's own interests.
News updates quickly and the number of items is huge, so item similarities change rapidly and maintaining an item-similarity table is impractical; for movies, music, and books it is feasible.
Evaluating recommender systems
User satisfaction
Prediction accuracy
$r_{ui}$ is the user's actual rating, $\hat{r}_{ui}$ is the rating predicted by the algorithm
Root mean square error (RMSE)
$RMSE = \sqrt{\frac{\sum_{u, i \in T} (r_{ui} - \hat{r}_{ui})^2}{\left | T \right |}} $
Mean absolute error (MAE)
$ MAE = \frac{\sum_{u, i \in T} \left | r_{ui} - \hat{r}_{ui} \right|}{\left | T \right|}$
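A minimal sketch (illustrative, not from the original notebook) of computing these two metrics with numpy from paired actual and predicted ratings:
import numpy as np

def rmse(records):
    # records: list of (user, item, actual_rating, predicted_rating) tuples
    errors = np.array([actual - pred for _, _, actual, pred in records])
    return np.sqrt(np.mean(errors ** 2))

def mae(records):
    errors = np.array([actual - pred for _, _, actual, pred in records])
    return np.mean(np.abs(errors))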
End of explanation
# 欧几里得距离
import numpy as np
np.sqrt(np.power(5-4, 2) + np.power(4-1, 2))
Explanation: 1. User-based filtering
1.0 Finding similar users
End of explanation
1.0 /(1 + np.sqrt(np.power(5-4, 2) + np.power(4-1, 2)) )
# Returns a distance-based similarity score for person1 and person2
def sim_distance(prefs,person1,person2):
# Get the list of shared_items
si={}
for item in prefs[person1]:
if item in prefs[person2]:
si[item]=1
# if they have no ratings in common, return 0
if len(si)==0: return 0
# Add up the squares of all the differences
sum_of_squares=np.sum([np.power(prefs[person1][item]-prefs[person2][item],2)
for item in prefs[person1] if item in prefs[person2]])
return 1/(1+np.sqrt(sum_of_squares) )
sim_distance(critics, 'Lisa Rose','Gene Seymour')
Explanation: This formula calculates the distance, which will be smaller for people who are more similar. However, you need a function that gives higher values for people who are similar. This can be done by adding 1 to the function (so you don't get a division-by-zero error) and inverting it:
End of explanation
# Returns the Pearson correlation coefficient for p1 and p2
def sim_pearson(prefs,p1,p2):
# Get the list of mutually rated items
si={}
for item in prefs[p1]:
if item in prefs[p2]: si[item]=1
# Find the number of elements
n=len(si)
# if they are no ratings in common, return 0
if n==0: return 0
# Add up all the preferences
sum1=np.sum([prefs[p1][it] for it in si])
sum2=np.sum([prefs[p2][it] for it in si])
# Sum up the squares
sum1Sq=np.sum([np.power(prefs[p1][it],2) for it in si])
sum2Sq=np.sum([np.power(prefs[p2][it],2) for it in si])
# Sum up the products
pSum=np.sum([prefs[p1][it]*prefs[p2][it] for it in si])
# Calculate Pearson score
num=pSum-(sum1*sum2/n)
den=np.sqrt((sum1Sq-np.power(sum1,2)/n)*(sum2Sq-np.power(sum2,2)/n))
if den==0: return 0
return num/den
sim_pearson(critics, 'Lisa Rose','Gene Seymour')
# Returns the best matches for person from the prefs dictionary.
# Number of results and similarity function are optional params.
def topMatches(prefs,person,n=5,similarity=sim_pearson):
scores=[(similarity(prefs,person,other),other)
for other in prefs if other!=person]
# Sort the list so the highest scores appear at the top
scores.sort( )
scores.reverse( )
return scores[0:n]
topMatches(critics,'Toby',n=3) # topN
Explanation: Pearson correlation coefficient
End of explanation
# Gets recommendations for a person by using a weighted average
# of every other user's rankings
def getRecommendations(prefs,person,similarity=sim_pearson):
totals={}
simSums={}
for other in prefs:
# don't compare me to myself
if other==person: continue
sim=similarity(prefs,person,other)
# ignore scores of zero or lower
if sim<=0: continue
for item in prefs[other]:
# only score movies I haven't seen yet
if item not in prefs[person] or prefs[person][item]==0:
# Similarity * Score
totals.setdefault(item,0)
totals[item]+=prefs[other][item]*sim
# Sum of similarities
simSums.setdefault(item,0)
simSums[item]+=sim
# Create the normalized list
rankings=[(total/simSums[item],item) for item,total in totals.items()]
# Return the sorted list
rankings.sort()
rankings.reverse()
return rankings
# Now you can find out what movies I should watch next:
getRecommendations(critics,'Toby')
# You’ll find that the results are only affected very slightly by the choice of similarity metric.
getRecommendations(critics,'Toby',similarity=sim_distance)
Explanation: 1.1 Recommending Items
<img src='./img/usercf.png' width = 700px>
The five users most similar to Toby (Rose, Seymour, Puig, LaSalle, Matthews) and their similarities (0.99, 0.38, 0.89, 0.92, 0.66 respectively)
The three movies these five users have seen (Night, Lady, Luck) and their ratings
For example, Rose rated Night 3.0
S.xNight is the product of the user similarity and that user's rating of the movie
For example, for Rose the similarity to Toby times her rating of Night is 3.0*0.99 = 2.97
Summing these products gives a score for each movie
For example, Night's score is 12.89 = 2.97+1.14+4.02+2.77+1.99
Each movie's score is then weighted by (divided by) the sum of the user similarities
For example, the predicted score of Night is 3.35 = 12.89/3.84
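As a rough sanity check (illustrative only; the Night ratings come from the critics dictionary defined above, the similarities from the table):
sims = [0.99, 0.38, 0.89, 0.92, 0.66]        # similarities of Rose, Seymour, Puig, LaSalle, Matthews to Toby
night_ratings = [3.0, 3.0, 4.5, 3.0, 3.0]    # each user's rating of The Night Listener
total = sum(s * r for s, r in zip(sims, night_ratings))
print total, total / sum(sims)               # approximately 12.9 and 3.35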
End of explanation
# you just need to swap the people and the items.
def transformPrefs(prefs):
result={}
for person in prefs:
for item in prefs[person]:
result.setdefault(item,{})
# Flip item and person
result[item][person]=prefs[person][item]
return result
movies = transformPrefs(critics)
Explanation: 2. Item-based filtering
Now you know how to find similar people and recommend products for a given person
But what if you want to see which products are similar to each other?
This is actually the same method we used earlier to determine similarity between people—
Flip the keys and values of the item-user dictionary
End of explanation
topMatches(movies,'Superman Returns')
Explanation: Compute the similarity between items
End of explanation
def calculateSimilarItems(prefs,n=10):
# Create a dictionary of items showing which other items they
# are most similar to.
result={}
# Invert the preference matrix to be item-centric
itemPrefs=transformPrefs(prefs)
c=0
for item in itemPrefs:
# Status updates for large datasets
c+=1
if c%100==0:
print "%d / %d" % (c,len(itemPrefs))
# Find the most similar items to this one
scores=topMatches(itemPrefs,item,n=n,similarity=sim_distance)
result[item]=scores
return result
itemsim=calculateSimilarItems(critics)
itemsim
Explanation: Recommend users for an item
End of explanation
def getRecommendedItems(prefs,itemMatch,user):
userRatings=prefs[user]
scores={}
totalSim={}
# Loop over items rated by this user
for (item,rating) in userRatings.items( ):
# Loop over items similar to this one
for (similarity,item2) in itemMatch[item]:
# Ignore if this user has already rated this item
if item2 in userRatings: continue
# Weighted sum of rating times similarity
scores.setdefault(item2,0)
scores[item2]+=similarity*rating
# Sum of all the similarities
totalSim.setdefault(item2,0)
totalSim[item2]+=similarity
# Divide each total score by total weighting to get an average
rankings=[(score/totalSim[item],item) for item,score in scores.items( )]
# Return the rankings from highest to lowest
rankings.sort( )
rankings.reverse( )
return rankings
getRecommendedItems(critics,itemsim,'Toby')
getRecommendations(movies,'Just My Luck')
getRecommendations(movies, 'You, Me and Dupree')
Explanation: <img src = './img/itemcf1.png' width=800px>
Toby has seen three movies (Snakes, Superman, Dupree), rated 4.5, 4.0, and 1.0 respectively
Table 2-3 gives the similarity between these three movies and three other movies
For example, the similarity between Superman and Night is 0.103
R.xNight is the product of Toby's rating of each movie he has seen and that movie's similarity to Night
For example, 0.412 = 4.0*0.103
So Toby's score for Night can be expressed as 0.818+0.412+0.148 = 1.378
We already know that the sum of the similarities to Night is 0.182+0.103+0.148 = 0.433
So Toby's final predicted rating for Night can be expressed as 1.378/0.433 = 3.183
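A rough check of the arithmetic above (illustrative only, with the similarity values copied from the explanation):
toby_ratings = {'Snakes': 4.5, 'Superman': 4.0, 'Dupree': 1.0}
sim_to_night = {'Snakes': 0.182, 'Superman': 0.103, 'Dupree': 0.148}
weighted = sum(toby_ratings[m] * sim_to_night[m] for m in toby_ratings)
print weighted / sum(sim_to_night.values())   # approximately 3.18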
End of explanation
# https://github.com/ParticleWave/RecommendationSystemStudy/blob/d1960056b96cfaad62afbfe39225ff680240d37e/PersonalRank.py
import os
import random
class Graph:
def __init__(self):
self.G = dict()
def addEdge(self, p, q):
if p not in self.G: self.G[p] = dict()
if q not in self.G: self.G[q] = dict()
self.G[p][q] = 1
self.G[q][p] = 1
def getGraphMatrix(self):
return self.G
graph = Graph()
graph.addEdge('A', 'a')
graph.addEdge('A', 'c')
graph.addEdge('B', 'a')
graph.addEdge('B', 'b')
graph.addEdge('B', 'c')
graph.addEdge('B', 'd')
graph.addEdge('C', 'c')
graph.addEdge('C', 'd')
G = graph.getGraphMatrix()
print(G.keys())
G
def PersonalRank(G, alpha, root, max_step):
# G is the biparitite graph of users' ratings on items
# alpha is the probability of random walk forward
# root is the studied User
# max_step if the steps of iterations.
rank = dict()
rank = {x:0.0 for x in G.keys()}
rank[root] = 1.0
for k in range(max_step):
tmp = {x:0.0 for x in G.keys()}
for i,ri in G.items():
for j,wij in ri.items():
if j not in tmp: tmp[j] = 0.0 #
tmp[j] += alpha * rank[i] / (len(ri)*1.0)
if j == root: tmp[j] += 1.0 - alpha
rank = tmp
print(k, rank)
return rank
print(PersonalRank(G, 0.8, 'A', 20))
# print(PersonalRank(G, 0.8, 'B', 20))
# print(PersonalRank(G, 0.8, 'C', 20))
Explanation: <img src = './img/itemcfNetwork.png' width = 700px>
A network representation of the item-based collaborative filtering algorithm
Graph-based models
User behavior can be represented as a bipartite graph, so graph-based algorithms can be applied to recommender systems.
<img src = './img/graphrec.png' width = 500px>
End of explanation
def loadMovieLens(path='/Users/chengjun/bigdata/ml-1m/'):
# Get movie titles
movies={}
for line in open(path+'movies.dat'):
(id,title)=line.split('::')[0:2]
movies[id]=title
# Load data
prefs={}
for line in open(path+'/ratings.dat'):
(user,movieid,rating,ts)=line.split('::')
prefs.setdefault(user,{})
prefs[user][movies[movieid]]=float(rating)
return prefs
prefs=loadMovieLens()
prefs['87']
Explanation: 3. MovieLens Recommender
MovieLens is a real movie-rating dataset developed by the GroupLens project at the University of Minnesota.
Data download
http://grouplens.org/datasets/movielens/1m/
These files contain 1,000,209 anonymous ratings of approximately 3,900 movies
made by 6,040 MovieLens users who joined MovieLens in 2000.
Data format
All ratings are contained in the file "ratings.dat" and are in the following format:
UserID::MovieID::Rating::Timestamp
1::1193::5::978300760
1::661::3::978302109
1::914::3::978301968
End of explanation
getRecommendations(prefs,'87')[0:30]
Explanation: user-based filtering
End of explanation
itemsim=calculateSimilarItems(prefs,n=50)
getRecommendedItems(prefs,itemsim,'87')[0:30]
Explanation: Item-based filtering
End of explanation
# set product key using GraphLab Create API
#import graphlab
#graphlab.product_key.set_product_key('4972-65DF-8E02-816C-AB15-021C-EC1B-0367')
%matplotlib inline
import graphlab
import graphlab as gl  # both the 'graphlab' and 'gl' names are used below
# set canvas to show sframes and sgraphs in ipython notebook
gl.canvas.set_target('ipynb')
import matplotlib.pyplot as plt
sf = graphlab.SFrame({'user_id': ["0", "0", "0", "1", "1", "2", "2", "2"],
'item_id': ["a", "b", "c", "a", "b", "b", "c", "d"],
'rating': [1, 3, 2, 5, 4, 1, 4, 3]})
sf
m = graphlab.recommender.create(sf, target='rating')
recs = m.recommend()
print recs
m['coefficients']
Explanation: Building Recommendation System with GraphLab
In this notebook we will import GraphLab Create and use it to
train two models that can be used for recommending new songs to users
compare the performance of the two models
End of explanation
#train_file = 'http://s3.amazonaws.com/dato-datasets/millionsong/10000.txt'
train_file = '/Users/chengjun/GitHub/cjc2016/data/ratings.dat'
sf = gl.SFrame.read_csv(train_file, header=False, delimiter='|', verbose=False)
sf.rename({'X1':'user_id', 'X2':'course_id', 'X3':'rating'}).show()
Explanation: After importing GraphLab Create, we can download data directly from S3. We have placed a preprocessed version of the Million Song Dataset on S3. This data set was used for a Kaggle challenge and includes data from The Echo Nest, SecondHandSongs, musiXmatch, and Last.fm. This file includes data for a subset of 10000 songs.
The CourseTalk dataset: loading and first look
Loading of the CourseTalk database.
End of explanation
(train_set, test_set) = sf.random_split(0.8, seed=1)
Explanation: In order to evaluate the performance of our model, we randomly split the observations in our data set into two partitions: we will use train_set when creating our model and test_set for evaluating its performance.
End of explanation
import graphlab as gl
popularity_model = gl.popularity_recommender.create(train_set, 'user_id', 'course_id', target = 'rating')
Explanation: Popularity model
Create a model that makes recommendations using item popularity. When no target column is provided, the popularity is determined by the number of observations involving each item. When a target is provided, popularity is computed using the item’s mean target value. When the target column contains ratings, for example, the model computes the mean rating for each item and uses this to rank items for recommendations.
One typically wants to initially create a simple recommendation system that can be used as a baseline and to verify that the rest of the pipeline works as expected. The recommender package has several models available for this purpose. For example, we can create a model that predicts songs based on their overall popularity across all users.
End of explanation
item_sim_model = gl.item_similarity_recommender.create(train_set, 'user_id', 'course_id', target = 'rating',
similarity_type='cosine')
Explanation: Item similarity Model
Collaborative filtering methods make predictions for a given user based on the patterns of other users' activities. One common technique is to compare items based on their Jaccard similarity. This measurement is a ratio: the number of items they have in common, over the total number of distinct items in both sets.
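For reference, a minimal sketch of the Jaccard measure just described (illustrative only, not GraphLab's implementation):
def jaccard_similarity(set_a, set_b):
    # ratio of shared elements to all distinct elements across both sets
    a, b = set(set_a), set(set_b)
    if not a and not b:
        return 0.0
    return len(a & b) / float(len(a | b))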
We could also have used another slightly more complicated similarity measurement, called Cosine Similarity.
If your data is implicit, i.e., you only observe interactions between users and items, without a rating, then use ItemSimilarityModel with Jaccard similarity.
If your data is explicit, i.e., the observations include an actual rating given by the user, then you have a wide array of options. ItemSimilarityModel with cosine or Pearson similarity can incorporate ratings. In addition, MatrixFactorizationModel, FactorizationModel, as well as LinearRegressionModel all support rating prediction.
Now data contains three columns: ‘user_id’, ‘item_id’, and ‘rating’.
itemsim_cosine_model = graphlab.recommender.create(data,
target=’rating’,
method=’item_similarity’,
similarity_type=’cosine’)
factorization_machine_model = graphlab.recommender.create(data,
target=’rating’,
method=’factorization_model’)
In the following code block, we compute all the item-item similarities and create an object that can be used for recommendations.
End of explanation
factorization_machine_model = gl.recommender.factorization_recommender.create(train_set, 'user_id', 'course_id',
target='rating')
Explanation: Factorization Recommender Model
Create a FactorizationRecommender that learns latent factors for each user and item and uses them to make rating predictions. This includes both standard matrix factorization as well as factorization machines models (in the situation where side data is available for users and/or items). link
End of explanation
result = gl.recommender.util.compare_models(test_set, [popularity_model, item_sim_model, factorization_machine_model],
user_sample=.1, skip_set=train_set)
Explanation: Model Evaluation
It's straightforward to use GraphLab to compare models on a small subset of users in the test_set. The precision-recall plot that is computed shows the benefits of using the similarity-based model instead of the baseline popularity_model: better curves tend toward the upper-right hand corner of the plot.
The following command finds the top-ranked items for all users in the first 500 rows of test_set. The observations in train_set are not included in the predicted items.
End of explanation
K = 10
users = gl.SArray(sf['user_id'].unique().head(100))
Explanation: Now let's ask the item similarity model for song recommendations on several users. We first create a list of users and create a subset of observations, users_ratings, that pertain to these users.
End of explanation
recs = item_sim_model.recommend(users=users, k=K)
recs.head()
Explanation: Next we use the recommend() function to query the model we created for recommendations. The returned object has four columns: user_id, song_id, the score that the algorithm gave this user for this song, and the song's rank (an integer from 0 to K-1). To see this we can grab the top few rows of recs:
End of explanation
# Get the meta data of the courses
courses = gl.SFrame.read_csv('/Users/chengjun/GitHub/cjc2016/data/cursos.dat', header=False, delimiter='|', verbose=False)
courses.rename({'X1':'course_id', 'X2':'title', 'X3':'avg_rating',
'X4':'workload', 'X5':'university', 'X6':'difficulty', 'X7':'provider'}).show()
courses = courses[['course_id', 'title', 'provider']]
results = recs.join(courses, on='course_id', how='inner')
# Populate observed user-course data with course info
userset = frozenset(users)
ix = sf['user_id'].apply(lambda x: x in userset, int)
user_data = sf[ix]
user_data = user_data.join(courses, on='course_id')[['user_id', 'title', 'provider']]
# Print out some recommendations
for i in range(5):
user = list(users)[i]
print "User: " + str(i + 1)
user_obs = user_data[user_data['user_id'] == user].head(K)
del user_obs['user_id']
user_recs = results[results['user_id'] == str(user)][['title', 'provider']]
print "We were told that the user liked these courses: "
print user_obs.head(K)
print "We recommend these other courses:"
print user_recs.head(K)
print ""
Explanation: To learn what songs these ids pertain to, we can merge in metadata about each song.
End of explanation |
12,595 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Pyomo - Stock problem v1
Pyomo installation
Step1: Version 1 (P1)
$x_t < 0$ = buy $|x|$ at time $t$
$x_t > 0$ = sell $|x|$ at time $t$
$s_t$ = the battery level at time $t$
$$
\begin{align}
\max_{x_1,x_2,x_3} & \quad p_1 x_1 + p_2 x_2 + p_3 x_3 \quad\quad\quad\quad \text{(P1)} \
\text{s.t.} & \quad s_0 = 0 \
& \quad s_t = s_{t-1} - x_t \
& \quad x_t \geq -(s_{\max} - s_{t-1}) \quad \quad ~ \text{can't buy more than the "free space" of the battery} \
& \quad x_t \leq s_{t-1} \quad \quad \quad \quad \quad \quad \text{can't sell more than what have been stored in the battery}
\end{align}
$$
Step2: Version 2 (P2)
$$
\begin{align}
\max_{x_1,x_2,x_3} & \quad p_1 x_1 + p_2 x_2 + p_3 x_3 \quad\quad\quad\quad \text{(P1)} \
\text{s.t.} & \quad s_0 = 0 \
& \quad s_t = s_{t-1} - x_t \
& \quad x_t \geq -(s_{\max} - s_{t-1}) \quad \quad ~ \text{can't buy more than the "free space" of the battery} \
& \quad x_t \leq s_{t-1} \quad \quad \quad \quad \quad \quad \text{can't sell more than what have been stored in the battery} \
& \
& \
\Leftrightarrow \min_{x_1,x_2,x_3} & \quad - p_1 x_1 - p_2 x_2 - p_3 x_3 \
\text{s.t.} & \quad s_0 = 0 \
& \quad s_t = s_{t-1} - x_t \
& \quad x_t \geq -(s_{\max} - s_{t-1}) \Leftrightarrow x_t \geq -s_{\max} + s_{t-1} \
& \quad x_t \leq s_{t-1} \
& \
& \
\Leftrightarrow \min_{x_1,x_2,x_3} & \quad - p_1 x_1 - p_2 x_2 - p_3 x_3 \
\text{s.t.} & \quad s_1 = -x_1 \
& \quad x_1 \geq -s_{\max} \
& \quad x_1 \leq 0 \
& \quad s_2 = s_1 - x_2 = -x_1 - x_2 \
& \quad x_2 \geq -s_{\max} + s_1 \Leftrightarrow x_2 \geq -s_{\max} - x_1 \
& \quad x_2 \leq s_1 \Leftrightarrow x_2 \leq -x_1 \
& \quad s_3 = s_2 - x_3 = -x_1 - x_2 - x_3 \
& \quad x_3 \geq -s_{\max} + s_2 \Leftrightarrow x_3 \geq -s_{\max} - x_1 - x_2 \
& \quad x_3 \leq s_2 \Leftrightarrow x_3 \leq -x_1 - x_2 \
& \
& \
\Leftrightarrow \min_{x_1,x_2,x_3} & \quad - p_1 x_1 - p_2 x_2 - p_3 x_3 \
\text{s.t.} & \quad x_1 \geq -s_{\max} \
& \quad x_1 \leq 0 \
& \quad x_2 \geq -s_{\max} - x_1 \
& \quad x_2 \leq -x_1 \
& \quad x_3 \geq -s_{\max} - x_1 - x_2 \
& \quad x_3 \leq -x_1 - x_2 \
& \
& \
\Leftrightarrow \min_{x_1,x_2,x_3} & \quad - p_1 x_1 - p_2 x_2 - p_3 x_3 \
\text{s.t.} & \quad x_1 \geq -s_{\max} \
& \quad x_1 \leq 0 \
& \quad x_1 + x_2 \geq -s_{\max} \
& \quad x_1 + x_2 \leq 0 \
& \quad x_1 + x_2 + x_3 \geq -s_{\max} \
& \quad x_1 + x_2 + x_3 \leq 0 \
& \
& \
\Leftrightarrow \min_{x_1,x_2,x_3} & \quad - p_1 x_1 - p_2 x_2 - p_3 x_3 \quad\quad\quad\quad \text{(P2)} \
\text{s.t.} & \quad -x_1 \leq s_{\max} \
& \quad x_1 \leq 0 \
& \quad -x_1 - x_2 \leq s_{\max} \
& \quad x_1 + x_2 \leq 0 \
& \quad -x_1 - x_2 - x_3 \leq s_{\max} \
& \quad x_1 + x_2 + x_3 \leq 0 \
& \
& \
\end{align}
$$
$$
\color{\red}{
\boldsymbol{c} = \begin{pmatrix}
-p_1 \
-p_2 \
-p_3
\end{pmatrix}
}
\quad
\color{\orange}{
\boldsymbol{A} = \begin{pmatrix}
-1 & 0 & 0 \
1 & 0 & 0 \
-1 & -1 & 0 \
1 & 1 & 0 \
-1 & -1 & -1 \
1 & 1 & 1
\end{pmatrix}
}
\quad
\color{\green}{
\boldsymbol{b} = \begin{pmatrix}
s_{\max} \
0 \
s_{\max} \
0 \
s_{\max} \
0
\end{pmatrix}
}
\quad
\color{\purple}{
\boldsymbol{B} = \begin{pmatrix}
- & - \
- & - \
- & -
\end{pmatrix}
}
$$ | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from pyomo.environ import *
Explanation: Pyomo - Stock problem v1
Pyomo installation: see http://www.pyomo.org/installation
pip install pyomo
End of explanation
# Cost of energy on the market
#cost = [10, 30, 20] # -> -100, 100, 0
#cost = [10, 30, 10, 30] # -> [-100., 100., -100., 100.]
#cost = [10, 30, 10, 30, 30] # -> [-100., 100., -100., 100., 0.]
#cost = [10, 20, 30, 40] # -> [-100., 0., 0., 100.]
#cost = [10, 30, 20, 50]
price = [10, 30, 20, 50]
stock_max = 100 # battery capacity
T = list(range(len(price) + 1)) # num decision variables
plt.plot(T[:-1], price, "o-")
plt.xlabel("Unit of time (t)")
plt.ylabel("Price of one unit of energy (p)")
plt.title("Price of energy on the market")
plt.show();
model = ConcreteModel(name="Stock problem v1")
model.x = Var(T)
model.s = Var(T)
def objective_fn(model):
return sum(price[t-1] * model.x[t] for t in T if t != 0)
model.obj = Objective(rule=objective_fn, sense=maximize)
########
model.constraint_s0 = Constraint(expr = model.s[0] == 0)
def constraint_stock_level(model, t):
if t > 0:
return model.s[t] == model.s[t-1] - model.x[t]
else:
return Constraint.Skip
model.constraint_st = Constraint(T, rule=constraint_stock_level)
def constraint_decision_inf(model, t):
if t > 0:
return model.x[t] >= -(stock_max - model.s[t-1])
else:
return Constraint.Skip
model.constraint_xt_inf = Constraint(T, rule=constraint_decision_inf)
def constraint_decision_sup(model, t):
if t > 0:
return model.x[t] <= model.s[t-1]
else:
return Constraint.Skip
model.constraint_xt_sup = Constraint(T, rule=constraint_decision_sup)
########
model.pprint()
# @tail:
print()
print("-" * 60)
print()
opt = SolverFactory('glpk')
results = opt.solve(model) # solves and updates instance
model.display()
print()
print("Optimal solution: ", [value(model.x[t]) for t in T if t != 0])
print("Gain of the optimal solution: ", value(model.obj))
# @:tail
Explanation: Version 1 (P1)
$x_t < 0$ = buy $|x|$ at time $t$
$x_t > 0$ = sell $|x|$ at time $t$
$s_t$ = the battery level at time $t$
$$
\begin{align}
\max_{x_1,x_2,x_3} & \quad p_1 x_1 + p_2 x_2 + p_3 x_3 \quad\quad\quad\quad \text{(P1)} \
\text{s.t.} & \quad s_0 = 0 \
& \quad s_t = s_{t-1} - x_t \
& \quad x_t \geq -(s_{\max} - s_{t-1}) \quad \quad ~ \text{can't buy more than the "free space" of the battery} \
& \quad x_t \leq s_{t-1} \quad \quad \quad \quad \quad \quad \text{can't sell more than what have been stored in the battery}
\end{align}
$$
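As a quick sanity check (not part of the Pyomo model), the gain of a hand-picked buy/sell schedule can be evaluated directly; for price = [10, 30, 20, 50] the schedule below buys low and sells high, which the solver should also find to be optimal:
price = [10, 30, 20, 50]
x = [-100, 100, -100, 100]   # buy 100 units, sell 100, buy 100, sell 100
print(sum(p * xi for p, xi in zip(price, x)))   # 5000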
End of explanation
T = list(range(len(price))) # num decision variables
M = list(range(len(price) * 2)) # num constraints (two per time step)
plt.plot(T, price, "o-")
plt.xlabel("Unit of time (t)")
plt.ylabel("Price of one unit of energy (p)")
plt.title("Price of energy on the market")
plt.show();
p = -np.array(price)
A = np.repeat(np.tril(np.ones(len(price))), 2, axis=0)
A[::2, :] *= -1
A
b = np.zeros(A.shape[0])
b[::2] = stock_max
b
model = ConcreteModel(name="Stock problem v2")
model.x = Var(T)
def objective_fn(model):
return sum(p[t] * model.x[t] for t in T)
model.obj = Objective(rule=objective_fn)
def constraint_fn(model, m):
return sum(A[m][t] * model.x[t] for t in T) <= b[m]
model.constraint = Constraint(M, rule=constraint_fn)
model.pprint()
# @tail:
print()
print("-" * 60)
print()
opt = SolverFactory('glpk')
results = opt.solve(model) # solves and updates instance
model.display()
print()
print("Optimal solution: ", [value(model.x[t]) for t in T])
print("Gain of the optimal solution: ", value(model.obj))
# @:tail
Explanation: Version 2 (P2)
$$
\begin{align}
\max_{x_1,x_2,x_3} & \quad p_1 x_1 + p_2 x_2 + p_3 x_3 \quad\quad\quad\quad \text{(P1)} \
\text{s.t.} & \quad s_0 = 0 \
& \quad s_t = s_{t-1} - x_t \
& \quad x_t \geq -(s_{\max} - s_{t-1}) \quad \quad ~ \text{can't buy more than the "free space" of the battery} \
& \quad x_t \leq s_{t-1} \quad \quad \quad \quad \quad \quad \text{can't sell more than what have been stored in the battery} \
& \
& \
\Leftrightarrow \min_{x_1,x_2,x_3} & \quad - p_1 x_1 - p_2 x_2 - p_3 x_3 \
\text{s.t.} & \quad s_0 = 0 \
& \quad s_t = s_{t-1} - x_t \
& \quad x_t \geq -(s_{\max} - s_{t-1}) \Leftrightarrow x_t \geq -s_{\max} + s_{t-1} \
& \quad x_t \leq s_{t-1} \
& \
& \
\Leftrightarrow \min_{x_1,x_2,x_3} & \quad - p_1 x_1 - p_2 x_2 - p_3 x_3 \
\text{s.t.} & \quad s_1 = -x_1 \
& \quad x_1 \geq -s_{\max} \
& \quad x_1 \leq 0 \
& \quad s_2 = s_1 - x_2 = -x_1 - x_2 \
& \quad x_2 \geq -s_{\max} + s_1 \Leftrightarrow x_2 \geq -s_{\max} - x_1 \
& \quad x_2 \leq s_1 \Leftrightarrow x_2 \leq -x_1 \
& \quad s_3 = s_2 - x_3 = -x_1 - x_2 - x_3 \
& \quad x_3 \geq -s_{\max} + s_2 \Leftrightarrow x_3 \geq -s_{\max} - x_1 - x_2 \
& \quad x_3 \leq s_2 \Leftrightarrow x_3 \leq -x_1 - x_2 \
& \
& \
\Leftrightarrow \min_{x_1,x_2,x_3} & \quad - p_1 x_1 - p_2 x_2 - p_3 x_3 \
\text{s.t.} & \quad x_1 \geq -s_{\max} \
& \quad x_1 \leq 0 \
& \quad x_2 \geq -s_{\max} - x_1 \
& \quad x_2 \leq -x_1 \
& \quad x_3 \geq -s_{\max} - x_1 - x_2 \
& \quad x_3 \leq -x_1 - x_2 \
& \
& \
\Leftrightarrow \min_{x_1,x_2,x_3} & \quad - p_1 x_1 - p_2 x_2 - p_3 x_3 \
\text{s.t.} & \quad x_1 \geq -s_{\max} \
& \quad x_1 \leq 0 \
& \quad x_1 + x_2 \geq -s_{\max} \
& \quad x_1 + x_2 \leq 0 \
& \quad x_1 + x_2 + x_3 \geq -s_{\max} \
& \quad x_1 + x_2 + x_3 \leq 0 \
& \
& \
\Leftrightarrow \min_{x_1,x_2,x_3} & \quad - p_1 x_1 - p_2 x_2 - p_3 x_3 \quad\quad\quad\quad \text{(P2)} \
\text{s.t.} & \quad -x_1 \leq s_{\max} \
& \quad x_1 \leq 0 \
& \quad -x_1 - x_2 \leq s_{\max} \
& \quad x_1 + x_2 \leq 0 \
& \quad -x_1 - x_2 - x_3 \leq s_{\max} \
& \quad x_1 + x_2 + x_3 \leq 0 \
& \
& \
\end{align}
$$
$$
\color{\red}{
\boldsymbol{c} = \begin{pmatrix}
-p_1 \
-p_2 \
-p_3
\end{pmatrix}
}
\quad
\color{\orange}{
\boldsymbol{A} = \begin{pmatrix}
-1 & 0 & 0 \
1 & 0 & 0 \
-1 & -1 & 0 \
1 & 1 & 0 \
-1 & -1 & -1 \
1 & 1 & 1
\end{pmatrix}
}
\quad
\color{\green}{
\boldsymbol{b} = \begin{pmatrix}
s_{\max} \
0 \
s_{\max} \
0 \
s_{\max} \
0
\end{pmatrix}
}
\quad
\color{\purple}{
\boldsymbol{B} = \begin{pmatrix}
- & - \
- & - \
- & -
\end{pmatrix}
}
$$
End of explanation |
12,596 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
List - Dictionary - Tuples
<font color='red'>What will be Covered?</font>
<OL>
<LI> List
<LI> Tuples
<LI> Dictionary
</OL>
<font color='red'>Reference Documents</font>
<OL>
<LI> <A HREF="http
Step1: Typical List Operations
Step2: Add Elements to a List
Step3: Slice Elements from a List
Step4: Search the Lists and find Elements
Step5: Delete Elements from the List
Step6: Python List Operators
Step7: Assignment Create Object Reference
Step8: Problem 1
Consider the list
Step9: Use operations on lists to manipulate oldList and create the list [[1,2],[4,6]]
Step10: <font color='red'>Tuples</font>
<UL>
<LI> Tuples are a sequence of objects just like lists.
<LI> Unlike lists, tuples are immutable objects
<LI> Tuples are faster than lists.
<LI> A good rule of thumb is to use list whenever you need a generic sequence
</UL>
Examples
Step11: Tuple = constant list; items cannot be modified
Step12: Tuples have no methods
Step13: You can add two tuples to form a new tuple
Step14: Accessing Values in Tuples
Step15: Operations on Tuples
Step16: Problem 2
Consider the tuple
Step17: Use tuple operations to manipulate the above tuple to get a new tuple
Step18: <font color='red'>Dictionary</font>
<UL>
<LI> An array with with text indices
(keys, even user-defined objects can be indices!)
<LI> Also called hash or associative array in other languages
<LI> Can store 'anything'
<LI> The text index is called key
</UL>
Initializing a Dictionary
Step19: Common Operations
Step20: Examples
Step21: Problem 3
Consider the following dictionary | Python Code:
myList = []
myList = ["The", "earth", "revolves", "around", "sun"]
myList[5]
myList[4]
myList[-1]
Explanation: List - Dictionary - Tuples
<font color='red'>What will be Covered?</font>
<OL>
<LI> List
<LI> Tuples
<LI> Dictionary
</OL>
<font color='red'>Reference Documents</font>
<OL>
<LI> <A HREF="http://effbot.org/zone/python-list.htm">An Introcduction to Python List</A>
<LI> <A HREF="http://www.python-course.eu/dictionaries.php">Python Dictionaries</A>
<LI> <A HREF="http://www.diveintopython.net/native_data_types/tuples.html">Introducing Tuples</A>
</OL>
<font color='red'>List</font>
<UL>
<LI> Stores elements one after another
<LI> It does not provide fast lookups. Finding an element is often slow. A search is required.
<LI> Elements can be easily added, looped over or sorted.
</UL>
Create a List
End of explanation
L = [] # declare empty list
L.append(1.2) # add a number 1.2
L.append('a') # add a text
L.append(None) # add None
L[0] = 1.3 # change an item
len(L) # length of list
x='a'
L.count(x) # count the number of times x occurs
L.index(x) # return the index of the first occurrence of x
L.remove(x) # delete the first occurrence of x
L.reverse() # reverse the order of elements in the list
del L[1] # delete an item
# after all this what elements are left in L?
print L
Explanation: Typical List Operations
End of explanation
myList.insert(0,"Yes")
len(myList)
myList.append(["a", "true"])
len(myList)
myList.extend(["statement", "for", "sure"])
print myList
len(myList)
Explanation: Add Elements to a List
End of explanation
myList[0]
myList[1:4]
myList[:4]
myList[-1]
myList[:]
Explanation: Slice Elements from a List
End of explanation
# which element in the list is 'revolves'?
myList.index("revolves")
# which element in the list is 'true'?
myList.index("true")
myList.index(["a", "true"])
"sun" in myList
"true" in myList
Explanation: Search the Lists and find Elements
End of explanation
# Remove an element from the list
print myList
myList.remove("Yes")
print myList
# To delete the last element of the list
myList.pop()
print myList
Explanation: Delete Elements from the List
End of explanation
# Combine two lists
print myList
myList = myList + ["sure"]
print myList
myList += ["."]
print myList
# Repeat the list
myList *= 2
print myList
Explanation: Python List Operators
End of explanation
x = [0,1,2]
# y = x causes x and y to point to the same list
y = x
print y
# Change to y also change x
y[1] = 6
print y
print x
# re-assigning y to a new list decouples the two lists
y = [3, 4]
Explanation: Assignment Create Object Reference
End of explanation
oldList = [[1,2,3],[4,5,6]]
Explanation: Problem 1
Consider the list:
End of explanation
# your code goes here!
Explanation: Use operations on lists to manipulate oldList and create the list [[1,2],[4,6]]
End of explanation
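One possible solution sketch for the problem above (several approaches work; this one removes the unwanted items from each inner list in place):
```python
oldList = [[1, 2, 3], [4, 5, 6]]
oldList[0].remove(3)   # first inner list becomes [1, 2]
oldList[1].remove(5)   # second inner list becomes [4, 6]
print oldList          # [[1, 2], [4, 6]]
```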
l1=[1.2, 1.3, 1.4] # list
t1=(1.2, 1.3, 1.4) # tuple
t1=1.2, 1.3, 1.4 # may skip parenthesis
l1[1]=0 # ok
l1.sort() # only
l1.append(0.0) # for
l1.remove(1.2) # lists
print l1
Explanation: <font color='red'>Tuples</font>
<UL>
<LI> Tuples are a sequence of objects just like lists.
<LI> Unlike lists, tuples are immutable objects
<LI> Tuples are faster than lists.
<LI> A good rule of thumb is to use list whenever you need a generic sequence
</UL>
Examples
End of explanation
t1[1]=0 # illegal
Explanation: Tuple = constant list; items cannot be modified
End of explanation
t1.sort() # nope
t1.append(0.0) # nuh-uh
t1.remove(1.2) # not gonna do it
Explanation: Tuples have no methods
End of explanation
t2 = ('a', 'b')
t3 = t1+t2
print t3
Explanation: You can add two tuples to form a new tuple
End of explanation
print "t3[0]: ", t3[0]
print "t3[1:3]: ", t3[1:3]
Explanation: Accessing Values in Tuples
End of explanation
print len(t3)
print "c" in t3
for x in t3:
print x
Explanation: Operations on Tuples
End of explanation
julia = ("Julia", "Roberts", 1967, "Duplicity", 2009, "Actress", "Atlanta, Georgia")
Explanation: Problem 2
Consider the tuple:
End of explanation
("Julia", "Roberts", 1967, "Eat Pray Love", 2010, "Actress", "Atlanta, Georgia")
Explanation: Use tuple operations to manipulate the above tuple to get a new tuple:
End of explanation
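One possible solution sketch for Problem 2: since tuples are immutable, build a new tuple from slices of the old one plus the replacement items:
```python
julia = ("Julia", "Roberts", 1967, "Duplicity", 2009, "Actress", "Atlanta, Georgia")
julia2 = julia[:3] + ("Eat Pray Love", 2010) + julia[5:]   # keep everything except the movie and year
print julia2
```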
d = {} # empty dictionary
value1 = 'Python BootCamp'
value2 = 2016
oxford = { 'key1' : value1, 'key2' : value2 }
oxford = dict(key1=value1, key2=value2)
d['list'] = [1.2, 2.5]
d['tuple'] = 'a','b','c'
d['pi'] = 3.14159265359
print d
Explanation: <font color='red'>Dictionary</font>
<UL>
<LI> An array with text indices
(keys, even user-defined objects can be indices!)
<LI> Also called hash or associative array in other languages
<LI> Can store 'anything'
<LI> The text index is called key
</UL>
Initializing a Dictionary
End of explanation
d['pi'] # extract item corresponding to key 'pi'
d.keys() # return copy of list of keys
len(d) # the number of items
d.get('pi',1.0) # return 1.0 if 'pi' is not a key
d.has_key('pi') # does d have a key 'pi'? (Python 2 only; use 'pi' in d in Python 3)
d.values() # return a list of all values in the dictionary
d.items() # return list of (key,value) tuples
d.copy() # create a copy of the dictionary
del d['pi'] # delete an item
d.clear() # remove all key/value pair from the dictionary
Explanation: Common Operations
End of explanation
# create an empty dictionary using curly brackets
record = {}
record['first'] = 'Jmes' # note the misspelling; update() below will overwrite it with the correct value
record['last'] = 'Maxwell'
record['born'] = 1831
print record
# create another dictionary with initial entries
new_record = {'first': 'James', 'middle':'Clerk'}
# now update the first dictionary with values from the new one
record.update(new_record)
print record
Explanation: Examples
End of explanation
ab = {'met_opt': '3',
'met_grid_type': 'A',
'do_timinterp_met': 'F',
'mrnum_in': '1',
'met_filnam_list': 'jan2004.list',
'do_read_met_list': 'T',
'do_cycle_met': 'F',
'gwet_opt': '1',
'mdt': '10800.0'
}
Explanation: Problem 3
Consider the following dictionary
End of explanation |
12,597 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: TV Script Generation
In this project, you'll generate your own Simpsons TV scripts using RNNs. You'll be using part of the Simpsons dataset of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at Moe's Tavern.
Get the Data
The data is already provided for you. You'll be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like "Moe's Cavern", "Flaming Moe's", "Uncle Moe's Family Feed-Bag", etc..
Step3: Explore the Data
Play around with view_sentence_range to view different parts of the data.
Step6: Implement Preprocessing Functions
The first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below
Step9: Tokenize Punctuation
We'll be splitting the script into a word array using spaces as delimiters. However, punctuation like periods and exclamation marks makes it hard for the neural network to distinguish between the word "bye" and "bye!".
Implement the function token_lookup to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token
Step11: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
Step13: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
Step15: Build the Neural Network
You'll build the components necessary to build a RNN by implementing the following functions below
Step18: Input
Implement the get_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders
Step21: Build RNN Cell and Initialize
Stack one or more BasicLSTMCells in a MultiRNNCell.
- The RNN size should be set using rnn_size
- Initialize Cell State using the MultiRNNCell's zero_state() function
- Apply the name "initial_state" to the initial state using tf.identity()
Return the cell and initial state in the following tuple (Cell, InitialState)
Step24: Word Embedding
Apply embedding to input_data using TensorFlow. Return the embedded sequence.
Step27: Build RNN
You created a RNN Cell in the get_init_cell() function. Time to use the cell to create a RNN.
- Build the RNN using the tf.nn.dynamic_rnn()
- Apply the name "final_state" to the final state using tf.identity()
Return the outputs and final state in the following tuple (Outputs, FinalState)
Step30: Build the Neural Network
Apply the functions you implemented above to
Step33: Batches
Implement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements
Step35: Neural Network Training
Hyperparameters
Tune the following parameters
Step37: Build the Graph
Build the graph using the neural network you implemented.
Step39: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
Step41: Save Parameters
Save seq_length and save_dir for generating a new TV script.
Step43: Checkpoint
Step46: Implement Generate Functions
Get Tensors
Get tensors from loaded_graph using the function get_tensor_by_name(). Get the tensors using the following names
Step49: Choose Word
Implement the pick_word() function to select the next word using probabilities.
Step51: Generate TV Script
This will generate the TV script for you. Set gen_length to the length of TV script you want to generate. | Python Code:
DON'T MODIFY ANYTHING IN THIS CELL
import helper
data_dir = './data/simpsons/moes_tavern_lines.txt'
text = helper.load_data(data_dir)
# Ignore notice, since we don't use it for analysing the data
text = text[81:]
Explanation: TV Script Generation
In this project, you'll generate your own Simpsons TV scripts using RNNs. You'll be using part of the Simpsons dataset of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at Moe's Tavern.
Get the Data
The data is already provided for you. You'll be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like "Moe's Cavern", "Flaming Moe's", "Uncle Moe's Family Feed-Bag", etc..
End of explanation
view_sentence_range = (0, 10)
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
scenes = text.split('\n\n')
print('Number of scenes: {}'.format(len(scenes)))
sentence_count_scene = [scene.count('\n') for scene in scenes]
print('Average number of sentences in each scene: {}'.format(np.average(sentence_count_scene)))
sentences = [sentence for scene in scenes for sentence in scene.split('\n')]
print('Number of lines: {}'.format(len(sentences)))
word_count_sentence = [len(sentence.split()) for sentence in sentences]
print('Average number of words in each line: {}'.format(np.average(word_count_sentence)))
print()
print('The sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
Explanation: Explore the Data
Play around with view_sentence_range to view different parts of the data.
End of explanation
import numpy as np
import problem_unittests as tests
def create_lookup_tables(text):
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
# TODO: Implement Function
return None, None
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_create_lookup_tables(create_lookup_tables)
Explanation: Implement Preprocessing Functions
The first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below:
- Lookup Table
- Tokenize Punctuation
Lookup Table
To create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:
- Dictionary to go from the words to an id, we'll call vocab_to_int
- Dictionary to go from the id to word, we'll call int_to_vocab
Return these dictionaries in the following tuple (vocab_to_int, int_to_vocab)
End of explanation
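A minimal sketch of one way to fill in the create_lookup_tables TODO above (not the project's official solution; it assumes text is already a list of words, as the docstring states):
```python
def create_lookup_tables(text):
    vocab = set(text)  # unique words
    vocab_to_int = {word: idx for idx, word in enumerate(vocab)}
    int_to_vocab = {idx: word for word, idx in vocab_to_int.items()}
    return vocab_to_int, int_to_vocab
```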
def token_lookup():
Generate a dict to turn punctuation into a token.
:return: Tokenize dictionary where the key is the punctuation and the value is the token
# TODO: Implement Function
return None
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_tokenize(token_lookup)
Explanation: Tokenize Punctuation
We'll be splitting the script into a word array using spaces as delimiters. However, punctuation like periods and exclamation marks makes it hard for the neural network to distinguish between the word "bye" and "bye!".
Implement the function token_lookup to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:
- Period ( . )
- Comma ( , )
- Quotation Mark ( " )
- Semicolon ( ; )
- Exclamation mark ( ! )
- Question mark ( ? )
- Left Parentheses ( ( )
- Right Parentheses ( ) )
- Dash ( -- )
- Return ( \n )
This dictionary will be used to tokenize the symbols and add the delimiter (space) around them. This separates each symbol into its own word, making it easier for the neural network to predict the next word. Make sure you don't use a token that could be confused with a word. Instead of using the token "dash", try using something like "||dash||".
End of explanation
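A possible sketch for the token_lookup TODO; the exact token strings are a free choice as long as they cannot be mistaken for real words:
```python
def token_lookup():
    # map each punctuation symbol to an unambiguous token
    return {
        '.': '||Period||',
        ',': '||Comma||',
        '"': '||Quotation_Mark||',
        ';': '||Semicolon||',
        '!': '||Exclamation_Mark||',
        '?': '||Question_Mark||',
        '(': '||Left_Parentheses||',
        ')': '||Right_Parentheses||',
        '--': '||Dash||',
        '\n': '||Return||'
    }
```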
DON'T MODIFY ANYTHING IN THIS CELL
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
Explanation: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import helper
import numpy as np
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
Explanation: Build the Neural Network
You'll build the components necessary to build a RNN by implementing the following functions below:
- get_inputs
- get_init_cell
- get_embed
- build_rnn
- build_nn
- get_batches
Check the Version of TensorFlow and Access to GPU
End of explanation
def get_inputs():
Create TF Placeholders for input, targets, and learning rate.
:return: Tuple (input, targets, learning rate)
# TODO: Implement Function
return None, None, None
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_inputs(get_inputs)
Explanation: Input
Implement the get_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:
- Input text placeholder named "input" using the TF Placeholder name parameter.
- Targets placeholder
- Learning Rate placeholder
Return the placeholders in the following tuple (Input, Targets, LearningRate)
End of explanation
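A possible sketch for get_inputs using the TensorFlow 1.x placeholder API; the batch and sequence dimensions are left as None so any batch size and sequence length can be fed:
```python
def get_inputs():
    inputs = tf.placeholder(tf.int32, [None, None], name='input')
    targets = tf.placeholder(tf.int32, [None, None], name='targets')
    learning_rate = tf.placeholder(tf.float32, name='learning_rate')
    return inputs, targets, learning_rate
```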
def get_init_cell(batch_size, rnn_size):
Create an RNN Cell and initialize it.
:param batch_size: Size of batches
:param rnn_size: Size of RNNs
:return: Tuple (cell, initialize state)
# TODO: Implement Function
return None, None
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_init_cell(get_init_cell)
Explanation: Build RNN Cell and Initialize
Stack one or more BasicLSTMCells in a MultiRNNCell.
- The RNN size should be set using rnn_size
- Initialize Cell State using the MultiRNNCell's zero_state() function
- Apply the name "initial_state" to the initial state using tf.identity()
Return the cell and initial state in the following tuple (Cell, InitialState)
End of explanation
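A possible sketch for get_init_cell (TF 1.x contrib API); a single LSTM layer is assumed here, but more cells can be appended to the list passed to MultiRNNCell:
```python
def get_init_cell(batch_size, rnn_size):
    lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size)
    cell = tf.contrib.rnn.MultiRNNCell([lstm])
    # name the zero state so it can be retrieved from the loaded graph later
    initial_state = tf.identity(cell.zero_state(batch_size, tf.float32), name='initial_state')
    return cell, initial_state
```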
def get_embed(input_data, vocab_size, embed_dim):
Create embedding for <input_data>.
:param input_data: TF placeholder for text input.
:param vocab_size: Number of words in vocabulary.
:param embed_dim: Number of embedding dimensions
:return: Embedded input.
# TODO: Implement Function
return None
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_embed(get_embed)
Explanation: Word Embedding
Apply embedding to input_data using TensorFlow. Return the embedded sequence.
End of explanation
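A possible sketch for get_embed: create a trainable embedding matrix and look up the vector for every word id in the batch:
```python
def get_embed(input_data, vocab_size, embed_dim):
    embedding = tf.Variable(tf.random_uniform((vocab_size, embed_dim), -1, 1))
    return tf.nn.embedding_lookup(embedding, input_data)
```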
def build_rnn(cell, inputs):
Create a RNN using a RNN Cell
:param cell: RNN Cell
:param inputs: Input text data
:return: Tuple (Outputs, Final State)
# TODO: Implement Function
return None, None
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_build_rnn(build_rnn)
Explanation: Build RNN
You created a RNN Cell in the get_init_cell() function. Time to use the cell to create a RNN.
- Build the RNN using the tf.nn.dynamic_rnn()
- Apply the name "final_state" to the final state using tf.identity()
Return the outputs and final state in the following tuple (Outputs, FinalState)
End of explanation
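A possible sketch for build_rnn:
```python
def build_rnn(cell, inputs):
    # unroll the cell over the time dimension of the embedded inputs
    outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)
    final_state = tf.identity(final_state, name='final_state')
    return outputs, final_state
```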
def build_nn(cell, rnn_size, input_data, vocab_size):
Build part of the neural network
:param cell: RNN cell
:param rnn_size: Size of rnns
:param input_data: Input data
:param vocab_size: Vocabulary size
:return: Tuple (Logits, FinalState)
# TODO: Implement Function
return None, None
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_build_nn(build_nn)
Explanation: Build the Neural Network
Apply the functions you implemented above to:
- Apply embedding to input_data using your get_embed(input_data, vocab_size, embed_dim) function.
- Build RNN using cell and your build_rnn(cell, inputs) function.
- Apply a fully connected layer with a linear activation and vocab_size as the number of outputs.
Return the logits and final state in the following tuple (Logits, FinalState)
End of explanation
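A possible sketch for build_nn that chains the helpers above; the embedding size of 300 is an arbitrary assumption, not a prescribed value:
```python
def build_nn(cell, rnn_size, input_data, vocab_size):
    embed = get_embed(input_data, vocab_size, 300)
    outputs, final_state = build_rnn(cell, embed)
    # project the RNN outputs to one logit per vocabulary word (linear activation)
    logits = tf.contrib.layers.fully_connected(outputs, vocab_size, activation_fn=None)
    return logits, final_state
```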
def get_batches(int_text, batch_size, seq_length):
Return batches of input and target
:param int_text: Text with the words replaced by their ids
:param batch_size: The size of batch
:param seq_length: The length of sequence
:return: Batches as a Numpy array
# TODO: Implement Function
return None
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_batches(get_batches)
Explanation: Batches
Implement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements:
- The first element is a single batch of input with the shape [batch size, sequence length]
- The second element is a single batch of targets with the shape [batch size, sequence length]
If you can't fill the last batch with enough data, drop the last batch.
For example, get_batches([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15], 2, 3) would return a Numpy array of the following:
```
[
# First Batch
[
# Batch of Input
[[ 1 2 3], [ 7 8 9]],
# Batch of targets
[[ 2 3 4], [ 8 9 10]]
],
# Second Batch
[
# Batch of Input
[[ 4 5 6], [10 11 12]],
# Batch of targets
[[ 5 6 7], [11 12 13]]
]
]
```
End of explanation
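A possible sketch for get_batches (it assumes numpy is imported as np, as it is earlier in this notebook); the targets are the inputs shifted by one word, and leftover words that do not fill a whole batch are dropped:
```python
def get_batches(int_text, batch_size, seq_length):
    words_per_batch = batch_size * seq_length
    n_batches = len(int_text) // words_per_batch
    xdata = np.array(int_text[:n_batches * words_per_batch])
    ydata = np.roll(xdata, -1)  # shift by one word to get the targets
    if len(int_text) > n_batches * words_per_batch:
        ydata[-1] = int_text[n_batches * words_per_batch]  # real next word when one is available
    x_batches = np.split(xdata.reshape(batch_size, -1), n_batches, axis=1)
    y_batches = np.split(ydata.reshape(batch_size, -1), n_batches, axis=1)
    return np.array(list(zip(x_batches, y_batches)))
```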
# Number of Epochs
num_epochs = None
# Batch Size
batch_size = None
# RNN Size
rnn_size = None
# Sequence Length
seq_length = None
# Learning Rate
learning_rate = None
# Show stats for every n number of batches
show_every_n_batches = None
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
save_dir = './save'
Explanation: Neural Network Training
Hyperparameters
Tune the following parameters:
Set num_epochs to the number of epochs.
Set batch_size to the batch size.
Set rnn_size to the size of the RNNs.
Set seq_length to the length of sequence.
Set learning_rate to the learning rate.
Set show_every_n_batches to the number of batches the neural network should print progress.
End of explanation
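One illustrative starting configuration for the cell above (these values are assumptions to be tuned against the loss curve, not recommended settings): num_epochs = 100, batch_size = 128, rnn_size = 256, seq_length = 16, learning_rate = 0.01, show_every_n_batches = 50.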
DON'T MODIFY ANYTHING IN THIS CELL
from tensorflow.contrib import seq2seq
train_graph = tf.Graph()
with train_graph.as_default():
vocab_size = len(int_to_vocab)
input_text, targets, lr = get_inputs()
input_data_shape = tf.shape(input_text)
cell, initial_state = get_init_cell(input_data_shape[0], rnn_size)
logits, final_state = build_nn(cell, rnn_size, input_text, vocab_size)
# Probabilities for generating words
probs = tf.nn.softmax(logits, name='probs')
# Loss function
cost = seq2seq.sequence_loss(
logits,
targets,
tf.ones([input_data_shape[0], input_data_shape[1]]))
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients]
train_op = optimizer.apply_gradients(capped_gradients)
Explanation: Build the Graph
Build the graph using the neural network you implemented.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
batches = get_batches(int_text, batch_size, seq_length)
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(num_epochs):
state = sess.run(initial_state, {input_text: batches[0][0]})
for batch_i, (x, y) in enumerate(batches):
feed = {
input_text: x,
targets: y,
initial_state: state,
lr: learning_rate}
train_loss, state, _ = sess.run([cost, final_state, train_op], feed)
# Show every <show_every_n_batches> batches
if (epoch_i * len(batches) + batch_i) % show_every_n_batches == 0:
print('Epoch {:>3} Batch {:>4}/{} train_loss = {:.3f}'.format(
epoch_i,
batch_i,
len(batches),
train_loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_dir)
print('Model Trained and Saved')
Explanation: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
# Save parameters for checkpoint
helper.save_params((seq_length, save_dir))
Explanation: Save Parameters
Save seq_length and save_dir for generating a new TV script.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
seq_length, load_dir = helper.load_params()
Explanation: Checkpoint
End of explanation
def get_tensors(loaded_graph):
Get input, initial state, final state, and probabilities tensor from <loaded_graph>
:param loaded_graph: TensorFlow graph loaded from file
:return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
# TODO: Implement Function
return None, None, None, None
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_tensors(get_tensors)
Explanation: Implement Generate Functions
Get Tensors
Get tensors from loaded_graph using the function get_tensor_by_name(). Get the tensors using the following names:
- "input:0"
- "initial_state:0"
- "final_state:0"
- "probs:0"
Return the tensors in the following tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
End of explanation
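A possible sketch for get_tensors:
```python
def get_tensors(loaded_graph):
    input_tensor = loaded_graph.get_tensor_by_name('input:0')
    initial_state_tensor = loaded_graph.get_tensor_by_name('initial_state:0')
    final_state_tensor = loaded_graph.get_tensor_by_name('final_state:0')
    probs_tensor = loaded_graph.get_tensor_by_name('probs:0')
    return input_tensor, initial_state_tensor, final_state_tensor, probs_tensor
```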
def pick_word(probabilities, int_to_vocab):
Pick the next word in the generated text
:param probabilities: Probabilites of the next word
:param int_to_vocab: Dictionary of word ids as the keys and words as the values
:return: String of the predicted word
# TODO: Implement Function
return None
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_pick_word(pick_word)
Explanation: Choose Word
Implement the pick_word() function to select the next word using probabilities.
End of explanation
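A possible sketch for pick_word that samples from the predicted distribution instead of always taking the most likely word (it assumes the word ids from create_lookup_tables run from 0 to len(int_to_vocab) - 1):
```python
def pick_word(probabilities, int_to_vocab):
    word_id = np.random.choice(len(int_to_vocab), p=probabilities)
    return int_to_vocab[word_id]
```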
gen_length = 200
# homer_simpson, moe_szyslak, or Barney_Gumble
prime_word = 'moe_szyslak'
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_dir + '.meta')
loader.restore(sess, load_dir)
# Get Tensors from loaded model
input_text, initial_state, final_state, probs = get_tensors(loaded_graph)
# Sentences generation setup
gen_sentences = [prime_word + ':']
prev_state = sess.run(initial_state, {input_text: np.array([[1]])})
# Generate sentences
for n in range(gen_length):
# Dynamic Input
dyn_input = [[vocab_to_int[word] for word in gen_sentences[-seq_length:]]]
dyn_seq_length = len(dyn_input[0])
# Get Prediction
probabilities, prev_state = sess.run(
[probs, final_state],
{input_text: dyn_input, initial_state: prev_state})
pred_word = pick_word(probabilities[dyn_seq_length-1], int_to_vocab)
gen_sentences.append(pred_word)
# Remove tokens
tv_script = ' '.join(gen_sentences)
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
tv_script = tv_script.replace(' ' + token.lower(), key)
tv_script = tv_script.replace('\n ', '\n')
tv_script = tv_script.replace('( ', '(')
print(tv_script)
Explanation: Generate TV Script
This will generate the TV script for you. Set gen_length to the length of TV script you want to generate.
End of explanation |
12,598 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<figure>
<IMG SRC="../../logo/logo.png" WIDTH=250 ALIGN="right">
</figure>
IHE Python course, 2017
Three-doors quiz (famous statistical fallacy)
T.N.Olsthoorn,
April 18, 2017
The famous three doors fallacy goes like this. You participate in a television game and have won. The only thing you still have to do is choose one out of three doors, behind one of which is the prize to be won. After choosing your door, the quizmaster opens one of the two other doors. He shows that the prize is not behind the door he opened. Now you're given the opportunity to change your mind. What do you decide? Stick with your door, choose randomly one of the two closed doors? Or switch to the other closed door? Does it make any difference and if so, why?
Below, this problem is coded in Python: first in pieces, step by step, so it is easy to check each step. At the end everything is put together in one cell, so that the game can be run any number of times, while the scores of each strategy are accumulated. Guess beforehand what the outcomes of the strategies "stick", "choose" or "switch" will be after a thousand trials.
First part, step by step
Step1: Three strategies will be explored
a) The candidate sticks with his/her initial door
b) The candidate chooses anew from the two remaining doors
c) The candidate switches to the other of the two remaining doors, the one he/she did not choose initially
Step2: So we have three doors, numbered 1, 2 and 3
The prize door will be chosen randomly from these three doors. It will not be known to anyone at this point.
Then the candidate chooses his/her door from the three doors.
Then we remove the chosen door from the doors, so that doors now only contains the two unchosen doors.
If verbose, we show the different doors (only for ourselves to check the procedure).
Step3: Then the quiz master steps in. He / she is informed behind the scenes which door has the prize and also knows the choise of the candicate. He / she then opens the door that was neither chosen nor has the prize.
Step4: Now, finally it's up to the candidate to make his / her decision. That is choose a stragegy, either stick, choose, or stick
Step5: And we score the successes for each strategy
Step6: Put all together in a single loop to do N trials | Python Code:
import random
verbose = True
N = 1 if verbose else 10000
Explanation: <figure>
<IMG SRC="../../logo/logo.png" WIDTH=250 ALIGN="right">
</figure>
IHE Python course, 2017
Three-doors quiz (famous statistical fallacy)
T.N.Olsthoorn,
April 18, 2017
The famous three doors fallacy goes like this. You participate in a television game and have won. The only thing you still have to do is choose one out of three doors, behind one of which is the prize to be won. After choosing your door, the quizmaster opens one of the two other doors. He shows that the prize is not behind the door he opened. Now you're given the opportunity to change your mind. What do you decide? Stick with your door, choose randomly one of the two closed doors? Or switch to the other closed door? Does it make any difference and if so, why?
Below, this problem is coded in Python: first in pieces, step by step, so it is easy to check each step. At the end everything is put together in one cell, so that the game can be run any number of times, while the scores of each strategy are accumulated. Guess beforehand what the outcomes of the strategies "stick", "choose" or "switch" will be after a thousand trials.
First part, step by step
End of explanation
# here are the scores (total number of successes) of the three strategies:
str_stick, str_choose, str_switch = 0, 0, 0 # initially all three zero
# here are the doors, a list (we could also use a set)
# it's easy to switch between list and set using list() and set() functions
doors = [1, 2, 3]
Explanation: Three strategies will be explored
a) The candidate sticks with his/her initial door
b) The candidate chooses anew from the two remaining doors
c) The candidate switches to the other of the two remaining doors, the one he/she did not choose initially
End of explanation
prize_door = random.choice(doors) # door with prize, chosen randomly
cand_door = random.choice(doors) # the candidate's choice
Explanation: So we have three doors, numbered 1, 2 and 3
The prize door will be chosen randomly from these three doors. It will not be known to anyone at this point.
Then the candidate chooses his/her door from the three doors.
Then we remove the chosen door from the doors, so that doors now only contains the two unchosen doors.
If verbose, we show the different doors (only for ourselves to check the procedure).
End of explanation
quizm_door = set(doors) - {prize_door, cand_door}
# note that quizm_door is a set, not a list; that's ok for now (it has one item, or two
# when the candidate happened to pick the prize door -- the full loop below handles that
# case by letting the quiz master pick one at random)
# we may need to use *quizm_door to unpack it when printing, see below
if verbose:
print("cand_door = ", cand_door)
print("prize_door = ", prize_door)
print("quizm_door = ", *quizm_door) # use * to unpack, because quizm_door is a set
closed_doors = set(doors) - quizm_door # closed_doors is a set, not a list, that's fine
if verbose:
print("closed_doors =", closed_doors)
Explanation: Then the quiz master steps in. He / she is informed behind the scenes which door has the prize and also knows the choice of the candidate. He / she then opens the door that was neither chosen nor has the prize.
End of explanation
stick = cand_door
choose = random.choice(list(closed_doors))
switch = closed_doors - {cand_door} # note: switch is a one-element set, not an int
if verbose:
print("prize = {}, stick = {}, choose = {}, switch = {}".format(prize_door, stick, choose, switch))
Explanation: Now, finally it's up to the candidate to make his / her decision. That is, choose a strategy: either stick, choose, or switch.
End of explanation
# strategy 1: stick with the original door
if stick == prize_door:
str_stick += 1
# strategy 2: choose anew from the two remaining doors
if choose == prize_door:
str_choose += 1
# strategy 3: choose the other closed door
if switch == {prize_door}: # we use set logic { }, since switch is a one-element set
str_switch += 1
Explanation: And we score the successes for each strategy
End of explanation
import random
verbose=False
N = 10 if verbose else 1000
if verbose:
print((" {:>8s}"*8).format(
"trial", "prize", "stick", "choose", "switch",
"str_stick", "str_choose", "str_switch"))
# initialize scores for the 3 scenarios
str_stick, str_choose, str_switch= 0, 0, 0 # initially all three zero
for trial in range(N):
doors = [1, 2, 3]
prize_door = random.choice(doors)
cand_door = random.choice(doors)
quizm_door = random.choice(list(set(doors) - {prize_door, cand_door}))
closed_doors = set(doors) - {quizm_door} # closed_doors is a set
#print(doors, prize_door, cand_door, quizm_door, closed_doors, cand_door)
# apply strategy, all three
stick = cand_door
choose = random.choice(list(closed_doors))
switch = list(closed_doors - {cand_door})[0]
# score result of each strategy
if stick == prize_door: str_stick += 1
if choose == prize_door: str_choose += 1
if switch == prize_door: str_switch += 1
if verbose:
print((" {:8d}" * 8).format(
trial,
prize_door,
stick,
choose,
switch,
str_stick,
str_choose,
str_switch))
print("Total")
print((" {:>8s}"*8).format("trial", "prize", "stick", "choose", "switch", "str_stick", "str_choose", "str_switch"))
print(((" {:8}") * 8).format(N, "_", "_", "_", "_", str_stick, str_choose, str_switch))
Explanation: Put all together in a single loop to do N trials
End of explanation |
12,599 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1 align="center">TensorFlow Neural Network Lab</h1>
<img src="image/notmnist.png">
In this lab, you'll use all the tools you learned from Introduction to TensorFlow to label images of English letters! The data you are using, <a href="http
Step3: The notMNIST dataset is too large for many computers to handle. It contains 500,000 images for just training. You'll be using a subset of this data, 15,000 images for each label (A-J).
Step5: <img src="image/Mean Variance - Image.png" style="height
Step6: Checkpoint
All your progress is now saved to the pickle file. If you need to leave and comeback to this lab, you no longer have to start from the beginning. Just run the code block below and it will load all the data and modules required to proceed.
Step7: Problem 2
Now it's time to build a simple neural network using TensorFlow. Here, your network will be just an input layer and an output layer.
<img src="image/network_diagram.png" style="height
Step8: <img src="image/Learn Rate Tune - Image.png" style="height
Step9: Test
You're going to test your model against your hold out dataset/testing data. This will give you a good indicator of how well the model will do in the real world. You should have a test accuracy of at least 80%. | Python Code:
import hashlib
import os
import pickle
from urllib.request import urlretrieve
import numpy as np
from PIL import Image
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelBinarizer
from sklearn.utils import resample
from tqdm import tqdm
from zipfile import ZipFile
print('All modules imported.')
Explanation: <h1 align="center">TensorFlow Neural Network Lab</h1>
<img src="image/notmnist.png">
In this lab, you'll use all the tools you learned from Introduction to TensorFlow to label images of English letters! The data you are using, <a href="http://yaroslavvb.blogspot.com/2011/09/notmnist-dataset.html">notMNIST</a>, consists of images of a letter from A to J in different fonts.
The above images are a few examples of the data you'll be training on. After training the network, you will compare your prediction model against test data. Your goal, by the end of this lab, is to make predictions against that test set with at least an 80% accuracy. Let's jump in!
To start this lab, you first need to import all the necessary modules. Run the code below. If it runs successfully, it will print "All modules imported".
End of explanation
def download(url, file):
Download file from <url>
:param url: URL to file
:param file: Local file path
if not os.path.isfile(file):
print('Downloading ' + file + '...')
urlretrieve(url, file)
print('Download Finished')
# Download the training and test dataset.
download('https://s3.amazonaws.com/udacity-sdc/notMNIST_train.zip', 'notMNIST_train.zip')
download('https://s3.amazonaws.com/udacity-sdc/notMNIST_test.zip', 'notMNIST_test.zip')
# Make sure the files aren't corrupted
assert hashlib.md5(open('notMNIST_train.zip', 'rb').read()).hexdigest() == 'c8673b3f28f489e9cdf3a3d74e2ac8fa',\
'notMNIST_train.zip file is corrupted. Remove the file and try again.'
assert hashlib.md5(open('notMNIST_test.zip', 'rb').read()).hexdigest() == '5d3c7e653e63471c88df796156a9dfa9',\
'notMNIST_test.zip file is corrupted. Remove the file and try again.'
# Wait until you see that all files have been downloaded.
print('All files downloaded.')
def uncompress_features_labels(file):
Uncompress features and labels from a zip file
:param file: The zip file to extract the data from
features = []
labels = []
with ZipFile(file) as zipf:
# Progress Bar
filenames_pbar = tqdm(zipf.namelist(), unit='files')
# Get features and labels from all files
for filename in filenames_pbar:
# Check if the file is a directory
if not filename.endswith('/'):
with zipf.open(filename) as image_file:
image = Image.open(image_file)
image.load()
# Load image data as 1 dimensional array
# We're using float32 to save on memory space
feature = np.array(image, dtype=np.float32).flatten()
# Get the the letter from the filename. This is the letter of the image.
label = os.path.split(filename)[1][0]
features.append(feature)
labels.append(label)
return np.array(features), np.array(labels)
# Get the features and labels from the zip files
train_features, train_labels = uncompress_features_labels('notMNIST_train.zip')
test_features, test_labels = uncompress_features_labels('notMNIST_test.zip')
# Limit the amount of data to work with a docker container
docker_size_limit = 150000
train_features, train_labels = resample(train_features, train_labels, n_samples=docker_size_limit)
# Set flags for feature engineering. This will prevent you from skipping an important step.
is_features_normal = False
is_labels_encod = False
# Wait until you see that all features and labels have been uncompressed.
print('All features and labels uncompressed.')
Explanation: The notMNIST dataset is too large for many computers to handle. It contains 500,000 images for just training. You'll be using a subset of this data, 15,000 images for each label (A-J).
End of explanation
# Problem 1 - Implement Min-Max scaling for grayscale image data
def normalize_grayscale(image_data):
Normalize the image data with Min-Max scaling to a range of [0.1, 0.9]
:param image_data: The image data to be normalized
:return: Normalized image data
# TODO: Implement Min-Max scaling for grayscale image data
return 0.1 + (image_data - 0) * (0.9 - 0.1) / (255 - 0)
### DON'T MODIFY ANYTHING BELOW ###
# Test Cases
np.testing.assert_array_almost_equal(
normalize_grayscale(np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 255])),
[0.1, 0.103137254902, 0.106274509804, 0.109411764706, 0.112549019608, 0.11568627451, 0.118823529412, 0.121960784314,
0.125098039216, 0.128235294118, 0.13137254902, 0.9],
decimal=3)
np.testing.assert_array_almost_equal(
normalize_grayscale(np.array([0, 1, 10, 20, 30, 40, 233, 244, 254,255])),
[0.1, 0.103137254902, 0.13137254902, 0.162745098039, 0.194117647059, 0.225490196078, 0.830980392157, 0.865490196078,
0.896862745098, 0.9])
if not is_features_normal:
train_features = normalize_grayscale(train_features)
test_features = normalize_grayscale(test_features)
is_features_normal = True
print('Tests Passed!')
if not is_labels_encod:
# Turn labels into numbers and apply One-Hot Encoding
encoder = LabelBinarizer()
encoder.fit(train_labels)
train_labels = encoder.transform(train_labels)
test_labels = encoder.transform(test_labels)
# Change to float32, so it can be multiplied against the features in TensorFlow, which are float32
train_labels = train_labels.astype(np.float32)
test_labels = test_labels.astype(np.float32)
is_labels_encod = True
print('Labels One-Hot Encoded')
assert is_features_normal, 'You skipped the step to normalize the features'
assert is_labels_encod, 'You skipped the step to One-Hot Encode the labels'
# Get randomized datasets for training and validation
train_features, valid_features, train_labels, valid_labels = train_test_split(
train_features,
train_labels,
test_size=0.05,
random_state=832289)
print('Training features and labels randomized and split.')
# Save the data for easy access
pickle_file = 'notMNIST.pickle'
if not os.path.isfile(pickle_file):
print('Saving data to pickle file...')
try:
with open('notMNIST.pickle', 'wb') as pfile:
pickle.dump(
{
'train_dataset': train_features,
'train_labels': train_labels,
'valid_dataset': valid_features,
'valid_labels': valid_labels,
'test_dataset': test_features,
'test_labels': test_labels,
},
pfile, pickle.HIGHEST_PROTOCOL)
except Exception as e:
print('Unable to save data to', pickle_file, ':', e)
raise
print('Data cached in pickle file.')
Explanation: <img src="image/Mean Variance - Image.png" style="height: 75%;width: 75%; position: relative; right: 5%">
Problem 1
The first problem involves normalizing the features for your training and test data.
Implement Min-Max scaling in the normalize_grayscale() function to a range of a=0.1 and b=0.9. After scaling, the values of the pixels in the input data should range from 0.1 to 0.9.
Since the raw notMNIST image data is in grayscale, the current values range from a min of 0 to a max of 255.
Min-Max Scaling:
$
X'=a+{\frac {\left(X-X_{\min }\right)\left(b-a\right)}{X_{\max }-X_{\min }}}
$
If you're having trouble solving problem 1, you can view the solution here.
End of explanation
%matplotlib inline
# Load the modules
import pickle
import math
import numpy as np
import tensorflow as tf
from tqdm import tqdm
import matplotlib.pyplot as plt
# Reload the data
pickle_file = 'notMNIST.pickle'
with open(pickle_file, 'rb') as f:
pickle_data = pickle.load(f)
train_features = pickle_data['train_dataset']
train_labels = pickle_data['train_labels']
valid_features = pickle_data['valid_dataset']
valid_labels = pickle_data['valid_labels']
test_features = pickle_data['test_dataset']
test_labels = pickle_data['test_labels']
del pickle_data # Free up memory
print('Data and modules loaded.')
Explanation: Checkpoint
All your progress is now saved to the pickle file. If you need to leave and come back to this lab, you no longer have to start from the beginning. Just run the code block below and it will load all the data and modules required to proceed.
End of explanation
# All the pixels in the image (28 * 28 = 784)
features_count = 784
# All the labels
labels_count = 10
# TODO: Set the features and labels tensors
features = tf.placeholder(tf.float32)
labels = tf.placeholder(tf.float32)
# TODO: Set the weights and biases tensors
weights = tf.Variable(tf.truncated_normal([features_count, labels_count]))
biases = tf.Variable(tf.zeros(10))
### DON'T MODIFY ANYTHING BELOW ###
#Test Cases
from tensorflow.python.ops.variables import Variable
assert features._op.name.startswith('Placeholder'), 'features must be a placeholder'
assert labels._op.name.startswith('Placeholder'), 'labels must be a placeholder'
assert isinstance(weights, Variable), 'weights must be a TensorFlow variable'
assert isinstance(biases, Variable), 'biases must be a TensorFlow variable'
assert features._shape == None or (\
features._shape.dims[0].value is None and\
features._shape.dims[1].value in [None, 784]), 'The shape of features is incorrect'
assert labels._shape == None or (\
labels._shape.dims[0].value is None and\
labels._shape.dims[1].value in [None, 10]), 'The shape of labels is incorrect'
assert weights._variable._shape == (784, 10), 'The shape of weights is incorrect'
assert biases._variable._shape == (10), 'The shape of biases is incorrect'
assert features._dtype == tf.float32, 'features must be type float32'
assert labels._dtype == tf.float32, 'labels must be type float32'
# Feed dicts for training, validation, and test session
train_feed_dict = {features: train_features, labels: train_labels}
valid_feed_dict = {features: valid_features, labels: valid_labels}
test_feed_dict = {features: test_features, labels: test_labels}
# Linear Function WX + b
logits = tf.matmul(features, weights) + biases
prediction = tf.nn.softmax(logits)
# Cross entropy
cross_entropy = -tf.reduce_sum(labels * tf.log(prediction), reduction_indices=1)
# Training loss
loss = tf.reduce_mean(cross_entropy)
# Create an operation that initializes all variables
init = tf.global_variables_initializer()
# Test Cases
with tf.Session() as session:
session.run(init)
session.run(loss, feed_dict=train_feed_dict)
session.run(loss, feed_dict=valid_feed_dict)
session.run(loss, feed_dict=test_feed_dict)
biases_data = session.run(biases)
assert not np.count_nonzero(biases_data), 'biases must be zeros'
print('Tests Passed!')
# Determine if the predictions are correct
is_correct_prediction = tf.equal(tf.argmax(prediction, 1), tf.argmax(labels, 1))
# Calculate the accuracy of the predictions
accuracy = tf.reduce_mean(tf.cast(is_correct_prediction, tf.float32))
print('Accuracy function created.')
Explanation: Problem 2
Now it's time to build a simple neural network using TensorFlow. Here, your network will be just an input layer and an output layer.
<img src="image/network_diagram.png" style="height: 40%;width: 40%; position: relative; right: 10%">
For the input here the images have been flattened into a vector of $28 \times 28 = 784$ features. Then, we're trying to predict the image digit so there are 10 output units, one for each label. Of course, feel free to add hidden layers if you want, but this notebook is built to guide you through a single layer network.
For the neural network to train on your data, you need the following <a href="https://www.tensorflow.org/resources/dims_types.html#data-types">float32</a> tensors:
- features
- Placeholder tensor for feature data (train_features/valid_features/test_features)
- labels
- Placeholder tensor for label data (train_labels/valid_labels/test_labels)
- weights
- Variable Tensor with random numbers from a truncated normal distribution.
- See <a href="https://www.tensorflow.org/api_docs/python/constant_op.html#truncated_normal">tf.truncated_normal() documentation</a> for help.
- biases
- Variable Tensor with all zeros.
- See <a href="https://www.tensorflow.org/api_docs/python/constant_op.html#zeros"> tf.zeros() documentation</a> for help.
If you're having trouble solving problem 2, review "TensorFlow Linear Function" section of the class. If that doesn't help, the solution for this problem is available here.
End of explanation
# Change if you have memory restrictions
batch_size = 128
# TODO: Find the best parameters for each configuration
epochs = 5
learning_rate = 0.2
### DON'T MODIFY ANYTHING BELOW ###
# Gradient Descent
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss)
# The accuracy measured against the validation set
validation_accuracy = 0.0
# Measurements use for graphing loss and accuracy
log_batch_step = 50
batches = []
loss_batch = []
train_acc_batch = []
valid_acc_batch = []
with tf.Session() as session:
session.run(init)
batch_count = int(math.ceil(len(train_features)/batch_size))
for epoch_i in range(epochs):
# Progress bar
batches_pbar = tqdm(range(batch_count), desc='Epoch {:>2}/{}'.format(epoch_i+1, epochs), unit='batches')
# The training cycle
for batch_i in batches_pbar:
# Get a batch of training features and labels
batch_start = batch_i*batch_size
batch_features = train_features[batch_start:batch_start + batch_size]
batch_labels = train_labels[batch_start:batch_start + batch_size]
# Run optimizer and get loss
_, l = session.run(
[optimizer, loss],
feed_dict={features: batch_features, labels: batch_labels})
# Log every 50 batches
if not batch_i % log_batch_step:
# Calculate Training and Validation accuracy
training_accuracy = session.run(accuracy, feed_dict=train_feed_dict)
validation_accuracy = session.run(accuracy, feed_dict=valid_feed_dict)
# Log batches
previous_batch = batches[-1] if batches else 0
batches.append(log_batch_step + previous_batch)
loss_batch.append(l)
train_acc_batch.append(training_accuracy)
valid_acc_batch.append(validation_accuracy)
# Check accuracy against Validation data
validation_accuracy = session.run(accuracy, feed_dict=valid_feed_dict)
loss_plot = plt.subplot(211)
loss_plot.set_title('Loss')
loss_plot.plot(batches, loss_batch, 'g')
loss_plot.set_xlim([batches[0], batches[-1]])
acc_plot = plt.subplot(212)
acc_plot.set_title('Accuracy')
acc_plot.plot(batches, train_acc_batch, 'r', label='Training Accuracy')
acc_plot.plot(batches, valid_acc_batch, 'x', label='Validation Accuracy')
acc_plot.set_ylim([0, 1.0])
acc_plot.set_xlim([batches[0], batches[-1]])
acc_plot.legend(loc=4)
plt.tight_layout()
plt.show()
print('Validation accuracy at {}'.format(validation_accuracy))
Explanation: <img src="image/Learn Rate Tune - Image.png" style="height: 70%;width: 70%">
Problem 3
Below are 2 parameter configurations for training the neural network. In each configuration, one of the parameters has multiple options. For each configuration, choose the option that gives the best accuracy.
Parameter configurations:
Configuration 1
* Epochs: 1
* Learning Rate:
* 0.8
* 0.5
* 0.1
* 0.05
* 0.01
Configuration 2
* Epochs:
* 1
* 2
* 3
* 4
* 5
* Learning Rate: 0.2
The code will print out a Loss and Accuracy graph, so you can see how well the neural network performed.
If you're having trouble solving problem 3, you can view the solution here.
End of explanation
### DON'T MODIFY ANYTHING BELOW ###
# The accuracy measured against the test set
test_accuracy = 0.0
with tf.Session() as session:
session.run(init)
batch_count = int(math.ceil(len(train_features)/batch_size))
for epoch_i in range(epochs):
# Progress bar
batches_pbar = tqdm(range(batch_count), desc='Epoch {:>2}/{}'.format(epoch_i+1, epochs), unit='batches')
# The training cycle
for batch_i in batches_pbar:
# Get a batch of training features and labels
batch_start = batch_i*batch_size
batch_features = train_features[batch_start:batch_start + batch_size]
batch_labels = train_labels[batch_start:batch_start + batch_size]
# Run optimizer
_ = session.run(optimizer, feed_dict={features: batch_features, labels: batch_labels})
# Check accuracy against Test data
test_accuracy = session.run(accuracy, feed_dict=test_feed_dict)
assert test_accuracy >= 0.80, 'Test accuracy at {}, should be equal to or greater than 0.80'.format(test_accuracy)
print('Nice Job! Test Accuracy is {}'.format(test_accuracy))
Explanation: Test
You're going to test your model against your hold out dataset/testing data. This will give you a good indicator of how well the model will do in the real world. You should have a test accuracy of at least 80%.
End of explanation |