Dataset columns: Unnamed: 0 (int64, 0 to 16k) | text_prompt (string, lengths 110 to 62.1k) | code_prompt (string, lengths 37 to 152k)
15,100 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Image Classification
In this project, you'll classify images from the CIFAR-10 dataset. The dataset consists of airplanes, dogs, cats, and other objects. You'll preprocess the images, then train a convolutional neural network on all the samples. The images need to be normalized and the labels need to be one-hot encoded. You'll get to apply what you've learned to build convolutional, max pooling, dropout, and fully connected layers. At the end, you'll get to see your neural network's predictions on the sample images.
Get the Data
Run the following cell to download the CIFAR-10 dataset for python.
Step2: Explore the Data
The dataset is broken into batches to prevent your machine from running out of memory. The CIFAR-10 dataset consists of 5 batches, named data_batch_1, data_batch_2, etc. Each batch contains the labels and images that are one of the following
Step5: Implement Preprocess Functions
Normalize
In the cell below, implement the normalize function to take in image data, x, and return it as a normalized Numpy array. The values should be in the range of 0 to 1, inclusive. The return object should be the same shape as x.
Step8: One-hot encode
Just like the previous code cell, you'll be implementing a function for preprocessing. This time, you'll implement the one_hot_encode function. The input, x, is a list of labels. Implement the function to return the list of labels as a one-hot encoded Numpy array. The possible values for labels are 0 to 9. The one-hot encoding function should return the same encoding for each value between each call to one_hot_encode. Make sure to save the map of encodings outside the function.
Hint
Step10: Randomize Data
As you saw from exploring the data above, the order of the samples is randomized. It doesn't hurt to randomize it again, but you don't need to for this dataset.
Preprocess all the data and save it
Running the code cell below will preprocess all the CIFAR-10 data and save it to file. The code below also uses 10% of the training data for validation.
Step12: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
Step17: Build the network
For the neural network, you'll build each layer into a function. Most of the code you've seen has been outside of functions. To test your code more thoroughly, we require that you put each layer in a function. This allows us to give you better feedback and test for simple mistakes using our unittests before you submit your project.
Note
Step20: Convolution and Max Pooling Layer
Convolution layers have a lot of success with images. For this code cell, you should implement the function conv2d_maxpool to apply convolution then max pooling
Step23: Flatten Layer
Implement the flatten function to change the dimension of x_tensor from a 4-D tensor to a 2-D tensor. The output should be the shape (Batch Size, Flattened Image Size). Shortcut option
Step26: Fully-Connected Layer
Implement the fully_conn function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option
Step29: Output Layer
Implement the output function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option
Step32: Create Convolutional Model
Implement the function conv_net to create a convolutional neural network model. The function takes in a batch of images, x, and outputs logits. Use the layers you created above to create this model
Step35: Train the Neural Network
Single Optimization
Implement the function train_neural_network to do a single optimization. The optimization should use optimizer to optimize in session with a feed_dict of the following
Step37: Show Stats
Implement the function print_stats to print loss and validation accuracy. Use the global variables valid_features and valid_labels to calculate validation accuracy. Use a keep probability of 1.0 to calculate the loss and validation accuracy.
Step38: Hyperparameters
Tune the following parameters
Step40: Train on a Single CIFAR-10 Batch
Instead of training the neural network on all the CIFAR-10 batches of data, let's use a single batch. This should save time while you iterate on the model to get a better accuracy. Once the final validation accuracy is 50% or greater, run the model on all the data in the next section.
Step42: Fully Train the Model
Now that you got a good accuracy with a single CIFAR-10 batch, try it with all five batches.
Step45: Checkpoint
The model has been saved to disk.
Test Model
Test your model against the test dataset. This will be your final accuracy. You should have an accuracy greater than 50%. If you don't, keep tweaking the model architecture and parameters. | Python Code:
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
import problem_unittests as tests
import tarfile
cifar10_dataset_folder_path = 'cifar-10-batches-py'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile('cifar-10-python.tar.gz'):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='CIFAR-10 Dataset') as pbar:
urlretrieve(
'https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz',
'cifar-10-python.tar.gz',
pbar.hook)
if not isdir(cifar10_dataset_folder_path):
with tarfile.open('cifar-10-python.tar.gz') as tar:
tar.extractall()
tar.close()
tests.test_folder_path(cifar10_dataset_folder_path)
Explanation: Image Classification
In this project, you'll classify images from the CIFAR-10 dataset. The dataset consists of airplanes, dogs, cats, and other objects. You'll preprocess the images, then train a convolutional neural network on all the samples. The images need to be normalized and the labels need to be one-hot encoded. You'll get to apply what you've learned to build convolutional, max pooling, dropout, and fully connected layers. At the end, you'll get to see your neural network's predictions on the sample images.
Get the Data
Run the following cell to download the CIFAR-10 dataset for python.
End of explanation
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import helper
import numpy as np
# Explore the dataset
batch_id = 1
sample_id = 5
helper.display_stats(cifar10_dataset_folder_path, batch_id, sample_id)
Explanation: Explore the Data
The dataset is broken into batches to prevent your machine from running out of memory. The CIFAR-10 dataset consists of 5 batches, named data_batch_1, data_batch_2, etc. Each batch contains the labels and images that are one of the following:
* airplane
* automobile
* bird
* cat
* deer
* dog
* frog
* horse
* ship
* truck
Understanding a dataset is part of making predictions on the data. Play around with the code cell below by changing the batch_id and sample_id. The batch_id is the id for a batch (1-5). The sample_id is the id for an image and label pair in the batch.
Ask yourself "What are all possible labels?", "What is the range of values for the image data?", "Are the labels in order or random?". Answers to questions like these will help you preprocess the data and end up with better predictions.
End of explanation
def normalize(x):
Normalize a list of sample image data in the range of 0 to 1
: x: List of image data. The image shape is (32, 32, 3)
: return: Numpy array of normalized data
# TODO: Implement Function
normOfX = list()
minOfX = np.min(x)
maxOfX = np.max(x)
for elements in x:
normOfX.append((elements - minOfX) / (maxOfX - minOfX))
return np.array(normOfX)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_normalize(normalize)
Explanation: Implement Preprocess Functions
Normalize
In the cell below, implement the normalize function to take in image data, x, and return it as a normalized Numpy array. The values should be in the range of 0 to 1, inclusive. The return object should be the same shape as x.
End of explanation
def one_hot_encode(x):
One hot encode a list of sample labels. Return a one-hot encoded vector for each label.
: x: List of sample Labels
: return: Numpy array of one-hot encoded labels
# TODO: Implement Function
oneHotEncodedVector = np.zeros((len(x),10))
for i,j in enumerate(x):
oneHotEncodedVector[i][j] = 1
return oneHotEncodedVector
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_one_hot_encode(one_hot_encode)
Explanation: One-hot encode
Just like the previous code cell, you'll be implementing a function for preprocessing. This time, you'll implement the one_hot_encode function. The input, x, is a list of labels. Implement the function to return the list of labels as a one-hot encoded Numpy array. The possible values for labels are 0 to 9. The one-hot encoding function should return the same encoding for each value between each call to one_hot_encode. Make sure to save the map of encodings outside the function.
Hint: Don't reinvent the wheel.
End of explanation
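The "wheel" hinted at above can be as small as an identity-matrix lookup; a minimal sketch (assuming the labels are integers in the range 0-9 and np is imported):
def one_hot_encode_eye(x):
    # Row i of the 10x10 identity matrix is the one-hot vector for label i
    return np.eye(10)[np.array(x)]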
DON'T MODIFY ANYTHING IN THIS CELL
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(cifar10_dataset_folder_path, normalize, one_hot_encode)
Explanation: Randomize Data
As you saw from exploring the data above, the order of the samples is randomized. It doesn't hurt to randomize it again, but you don't need to for this dataset.
Preprocess all the data and save it
Running the code cell below will preprocess all the CIFAR-10 data and save it to file. The code below also uses 10% of the training data for validation.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import pickle
import problem_unittests as tests
import helper
# Load the Preprocessed Validation data
valid_features, valid_labels = pickle.load(open('preprocess_validation.p', mode='rb'))
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
import tensorflow as tf
def neural_net_image_input(image_shape):
Return a Tensor for a batch of image input
: image_shape: Shape of the images
: return: Tensor for image input.
# TODO: Implement Function
return tf.placeholder(tf.float32,
shape=[None, image_shape[0], image_shape[1], image_shape[2]],
name='x')
def neural_net_label_input(n_classes):
Return a Tensor for a batch of label input
: n_classes: Number of classes
: return: Tensor for label input.
# TODO: Implement Function
return tf.placeholder(tf.float32,
shape=[None, n_classes],
name='y')
def neural_net_keep_prob_input():
Return a Tensor for keep probability
: return: Tensor for keep probability.
# TODO: Implement Function
return tf.placeholder(tf.float32,
name='keep_prob')
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tf.reset_default_graph()
tests.test_nn_image_inputs(neural_net_image_input)
tests.test_nn_label_inputs(neural_net_label_input)
tests.test_nn_keep_prob_inputs(neural_net_keep_prob_input)
Explanation: Build the network
For the neural network, you'll build each layer into a function. Most of the code you've seen has been outside of functions. To test your code more thoroughly, we require that you put each layer in a function. This allows us to give you better feedback and test for simple mistakes using our unittests before you submit your project.
Note: If you're finding it hard to dedicate enough time for this course each week, we've provided a small shortcut to this part of the project. In the next couple of problems, you'll have the option to use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages to build each layer, except the layers you build in the "Convolutional and Max Pooling Layer" section. TF Layers is similar to Keras's and TFLearn's abstraction of layers, so it's easy to pick up.
However, if you would like to get the most out of this course, try to solve all the problems without using anything from the TF Layers packages. You can still use classes from other packages that happen to have the same name as ones you find in TF Layers! For example, instead of using the TF Layers version of the conv2d class, tf.layers.conv2d, you would want to use the TF Neural Network version of conv2d, tf.nn.conv2d.
Let's begin!
Input
The neural network needs to read the image data, one-hot encoded labels, and dropout keep probability. Implement the following functions
* Implement neural_net_image_input
* Return a TF Placeholder
* Set the shape using image_shape with batch size set to None.
* Name the TensorFlow placeholder "x" using the TensorFlow name parameter in the TF Placeholder.
* Implement neural_net_label_input
* Return a TF Placeholder
* Set the shape using n_classes with batch size set to None.
* Name the TensorFlow placeholder "y" using the TensorFlow name parameter in the TF Placeholder.
* Implement neural_net_keep_prob_input
* Return a TF Placeholder for dropout keep probability.
* Name the TensorFlow placeholder "keep_prob" using the TensorFlow name parameter in the TF Placeholder.
These names will be used at the end of the project to load your saved model.
Note: None for shapes in TensorFlow allow for a dynamic size.
End of explanation
def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides):
Apply convolution then max pooling to x_tensor
:param x_tensor: TensorFlow Tensor
:param conv_num_outputs: Number of outputs for the convolutional layer
:param conv_ksize: kernal size 2-D Tuple for the convolutional layer
:param conv_strides: Stride 2-D Tuple for convolution
:param pool_ksize: kernal size 2-D Tuple for pool
:param pool_strides: Stride 2-D Tuple for pool
: return: A tensor that represents convolution and max pooling of x_tensor
# TODO: Implement Function
depth = x_tensor.get_shape().as_list()[-1]
padding = 'SAME'
conStrides = [1, *conv_strides, 1]
poolStrides = [1, *pool_strides, 1]
poolKSize = [1, *pool_ksize, 1]
biases = tf.Variable(tf.zeros(conv_num_outputs))
weights = tf.Variable(tf.truncated_normal([*conv_ksize, depth, conv_num_outputs],stddev=0.1))
conv_layer = tf.nn.conv2d(x_tensor, weights, conStrides, padding)
conv_layer = tf.nn.bias_add(conv_layer, biases)
conv_layer = tf.nn.relu(conv_layer)
conv_layer = tf.nn.max_pool(conv_layer, poolKSize,
poolStrides, padding)
return conv_layer
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_con_pool(conv2d_maxpool)
Explanation: Convolution and Max Pooling Layer
Convolution layers have a lot of success with images. For this code cell, you should implement the function conv2d_maxpool to apply convolution then max pooling:
* Create the weight and bias using conv_ksize, conv_num_outputs and the shape of x_tensor.
* Apply a convolution to x_tensor using weight and conv_strides.
* We recommend you use same padding, but you're welcome to use any padding.
* Add bias
* Add a nonlinear activation to the convolution.
* Apply Max Pooling using pool_ksize and pool_strides.
* We recommend you use same padding, but you're welcome to use any padding.
Note: You can't use TensorFlow Layers or TensorFlow Layers (contrib) for this layer, but you can still use TensorFlow's Neural Network package. You may still use the shortcut option for all the other layers.
End of explanation
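A quick shape sanity check can catch stride and padding mistakes early; a sketch (it assumes the conv2d_maxpool function defined above and a fresh default graph):
tf.reset_default_graph()
check_input = tf.placeholder(tf.float32, [None, 32, 32, 3])
check_output = conv2d_maxpool(check_input, 10, (3, 3), (1, 1), (2, 2), (2, 2))
# With SAME padding, conv stride 1 keeps 32x32 and the 2x2/stride-2 pool halves it.
print(check_output.get_shape().as_list())  # expected: [None, 16, 16, 10]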
def flatten(x_tensor):
Flatten x_tensor to (Batch Size, Flattened Image Size)
: x_tensor: A tensor of size (Batch Size, ...), where ... are the image dimensions.
: return: A tensor of size (Batch Size, Flattened Image Size).
# TODO: Implement Function
return tf.contrib.layers.flatten(x_tensor)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_flatten(flatten)
Explanation: Flatten Layer
Implement the flatten function to change the dimension of x_tensor from a 4-D tensor to a 2-D tensor. The output should be the shape (Batch Size, Flattened Image Size). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
End of explanation
def fully_conn(x_tensor, num_outputs):
Apply a fully connected layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
: num_outputs: The number of outputs that the new tensor should have.
: return: A 2-D tensor where the second dimension is num_outputs.
# TODO: Implement Function
return tf.contrib.layers.fully_connected(x_tensor, num_outputs)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_fully_conn(fully_conn)
Explanation: Fully-Connected Layer
Implement the fully_conn function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
End of explanation
def output(x_tensor, num_outputs):
Apply a output layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
: num_outputs: The number of outputs that the new tensor should have.
: return: A 2-D tensor where the second dimension is num_outputs.
# TODO: Implement Function
return tf.contrib.layers.fully_connected(inputs = x_tensor, num_outputs=num_outputs,activation_fn=None)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_output(output)
Explanation: Output Layer
Implement the output function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
Note: Activation, softmax, or cross entropy should not be applied to this.
End of explanation
def conv_net(x, keep_prob):
Create a convolutional neural network model
: x: Placeholder tensor that holds image data.
: keep_prob: Placeholder tensor that hold dropout keep probability.
: return: Tensor that represents logits
# TODO: Apply 1, 2, or 3 Convolution and Max Pool layers
# Play around with different number of outputs, kernel size and stride
# Function Definition from Above:
# conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides)
conv_num_outputs = 12
conv_ksize = (3, 3)
conv_strides = (1, 1)
pool_ksize = (2, 2)
pool_strides = (2, 2)
layer1 = conv2d_maxpool(x, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides)
layer2 = conv2d_maxpool(layer1, conv_num_outputs * 2, conv_ksize, conv_strides, pool_ksize, pool_strides)
layer3 = conv2d_maxpool(layer2, conv_num_outputs * 4, conv_ksize, conv_strides, pool_ksize, pool_strides)
# TODO: Apply a Flatten Layer
# Function Definition from Above:
# flatten(x_tensor)
flatten_layer3 = flatten(layer3)
# TODO: Apply 1, 2, or 3 Fully Connected Layers
# Play around with different number of outputs
# Function Definition from Above:
# fully_conn(x_tensor, num_outputs)
fully_connected_layer1 = fully_conn(flatten_layer3, 576)
fully_connected_layer1 = tf.nn.dropout(fully_connected_layer1, keep_prob)
fully_connected_layer2 = fully_conn(fully_connected_layer1, 384)
fully_connected_layer2 = tf.nn.dropout(fully_connected_layer2, keep_prob)
fully_connected_layer3 = fully_conn(fully_connected_layer2, 192)
fully_connected_layer3 = tf.nn.dropout(fully_connected_layer3, keep_prob)
# TODO: Apply an Output Layer
# Set this to the number of classes
# Function Definition from Above:
# output(x_tensor, num_outputs)
output_layer = output(fully_connected_layer3, 10)
# TODO: return output
return output_layer
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
##############################
## Build the Neural Network ##
##############################
# Remove previous weights, bias, inputs, etc..
tf.reset_default_graph()
# Inputs
x = neural_net_image_input((32, 32, 3))
y = neural_net_label_input(10)
keep_prob = neural_net_keep_prob_input()
# Model
logits = conv_net(x, keep_prob)
# Name logits Tensor, so that it can be loaded from disk after training
logits = tf.identity(logits, name='logits')
# Loss and Optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))
optimizer = tf.train.AdamOptimizer().minimize(cost)
# Accuracy
correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy')
tests.test_conv_net(conv_net)
Explanation: Create Convolutional Model
Implement the function conv_net to create a convolutional neural network model. The function takes in a batch of images, x, and outputs logits. Use the layers you created above to create this model:
Apply 1, 2, or 3 Convolution and Max Pool layers
Apply a Flatten Layer
Apply 1, 2, or 3 Fully Connected Layers
Apply an Output Layer
Return the output
Apply TensorFlow's Dropout to one or more layers in the model using keep_prob.
End of explanation
def train_neural_network(session, optimizer, keep_probability, feature_batch, label_batch):
Optimize the session on a batch of images and labels
: session: Current TensorFlow session
: optimizer: TensorFlow optimizer function
: keep_probability: keep probability
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
# TODO: Implement Function
session.run(optimizer, {x: feature_batch, y: label_batch, keep_prob: keep_probability})
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_train_nn(train_neural_network)
Explanation: Train the Neural Network
Single Optimization
Implement the function train_neural_network to do a single optimization. The optimization should use optimizer to optimize in session with a feed_dict of the following:
* x for image input
* y for labels
* keep_prob for keep probability for dropout
This function will be called for each batch, so tf.global_variables_initializer() has already been called.
Note: Nothing needs to be returned. This function is only optimizing the neural network.
End of explanation
def print_stats(session, feature_batch, label_batch, cost, accuracy):
Print information about loss and validation accuracy
: session: Current TensorFlow session
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
: cost: TensorFlow cost function
: accuracy: TensorFlow accuracy function
# TODO: Implement Function
train_loss = session.run(cost, {x: feature_batch, y: label_batch, keep_prob: 1.})
valid_loss = session.run(cost, {x: valid_features, y: valid_labels, keep_prob: 1.})
valid_acc = session.run(accuracy, {x: valid_features, y: valid_labels, keep_prob: 1.})
print('Train Loss: {:>10.6f}, Validation Loss: {:>10.6f}, Validation Accuracy: {:.6f}'
.format(train_loss, valid_loss, valid_acc))
Explanation: Show Stats
Implement the function print_stats to print loss and validation accuracy. Use the global variables valid_features and valid_labels to calculate validation accuracy. Use a keep probability of 1.0 to calculate the loss and validation accuracy.
End of explanation
# TODO: Tune Parameters
epochs = 20
batch_size = 256
keep_probability = 0.5
Explanation: Hyperparameters
Tune the following parameters:
* Set epochs to the number of iterations until the network stops learning or start overfitting
* Set batch_size to the highest number that your machine has memory for. Most people set them to common sizes of memory:
* 64
* 128
* 256
* ...
* Set keep_probability to the probability of keeping a node using dropout
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
print('Checking the Training on a Single Batch...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
batch_i = 1
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
Explanation: Train on a Single CIFAR-10 Batch
Instead of training the neural network on all the CIFAR-10 batches of data, let's use a single batch. This should save time while you iterate on the model to get a better accuracy. Once the final validation accuracy is 50% or greater, run the model on all the data in the next section.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
save_model_path = './image_classification'
print('Training...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
# Loop over all batches
n_batches = 5
for batch_i in range(1, n_batches + 1):
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
# Save Model
saver = tf.train.Saver()
save_path = saver.save(sess, save_model_path)
Explanation: Fully Train the Model
Now that you got a good accuracy with a single CIFAR-10 batch, try it with all five batches.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import tensorflow as tf
import pickle
import helper
import random
# Set batch size if not already set
try:
if batch_size:
pass
except NameError:
batch_size = 64
save_model_path = './image_classification'
n_samples = 4
top_n_predictions = 3
def test_model():
Test the saved model against the test dataset
test_features, test_labels = pickle.load(open('preprocess_training.p', mode='rb'))
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load model
loader = tf.train.import_meta_graph(save_model_path + '.meta')
loader.restore(sess, save_model_path)
# Get Tensors from loaded model
loaded_x = loaded_graph.get_tensor_by_name('x:0')
loaded_y = loaded_graph.get_tensor_by_name('y:0')
loaded_keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
loaded_logits = loaded_graph.get_tensor_by_name('logits:0')
loaded_acc = loaded_graph.get_tensor_by_name('accuracy:0')
# Get accuracy in batches for memory limitations
test_batch_acc_total = 0
test_batch_count = 0
for train_feature_batch, train_label_batch in helper.batch_features_labels(test_features, test_labels, batch_size):
test_batch_acc_total += sess.run(
loaded_acc,
feed_dict={loaded_x: train_feature_batch, loaded_y: train_label_batch, loaded_keep_prob: 1.0})
test_batch_count += 1
print('Testing Accuracy: {}\n'.format(test_batch_acc_total/test_batch_count))
# Print Random Samples
random_test_features, random_test_labels = tuple(zip(*random.sample(list(zip(test_features, test_labels)), n_samples)))
random_test_predictions = sess.run(
tf.nn.top_k(tf.nn.softmax(loaded_logits), top_n_predictions),
feed_dict={loaded_x: random_test_features, loaded_y: random_test_labels, loaded_keep_prob: 1.0})
helper.display_image_predictions(random_test_features, random_test_labels, random_test_predictions)
test_model()
Explanation: Checkpoint
The model has been saved to disk.
Test Model
Test your model against the test dataset. This will be your final accuracy. You should have an accuracy greater than 50%. If you don't, keep tweaking the model architecture and parameters.
End of explanation |
15,101 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Plotting
In order to do inline plotting within a notebook, IPython needs a magic command, i.e. one of the commands that start with the % character
Step1: Importing some modules (libraries) and giving them short names such as np and plt. You will find that most users will use these common ones.
Step2: It might be tempting to import a module in a blank namespace, to make for "more readable code" like the following example
Step3: Scatter plot
Step4: Multi-panel plots
Step5: Histogram | Python Code:
%matplotlib inline
Explanation: Plotting
In order to do inline plotting within a notebook, IPython needs a magic command, i.e. one of the commands that start with the % character
End of explanation
import numpy as np
import matplotlib.pyplot as plt
Explanation: Importing some modules (libraries) and giving them short names such as np and plt. You will find that most users will use these common ones.
End of explanation
x = 0.5*np.arange(20)
y = x*x*0.1
z = np.sqrt(x)*3
plt.plot(x,y,'o-',label='y')
plt.plot(x,z,'*--',label='z')
plt.title("$x^2$ and $\sqrt{x}$")
#plt.legend(loc='best')
plt.legend()
plt.xlabel('X axis')
plt.ylabel('Y axis')
#plt.xscale('log')
#plt.yscale('log')
#plt.savefig('sample1.png')
Explanation: It might be tempting to import a module in a blank namespace, to make for "more readable code" like the following example:
from math import *
s2 = sqrt(2)
but the danger of this is that importing multiple modules into a blank namespace can make some of them invisible, and it obfuscates where a given function came from. So it is safer to stick to an import where you keep the module namespace (or a shorter alias):
import math
s2 = math.sqrt(2)
Line plot
The array $x$ will contain numbers from 0 to 9.5 in steps of 0.5. We then compute two arrays $y$ and $z$ as follows:
$$
y = {1\over{10}}{x^2}
$$
and
$$
z = 3\sqrt{x}
$$
End of explanation
plt.scatter(x,y,s=40.0,c='r',label='y')
plt.scatter(x,z,s=20.0,c='g',label='z')
plt.legend(loc='best')
plt.show()
Explanation: Scatter plot
End of explanation
fig = plt.figure()
fig1 = fig.add_subplot(121)
fig1.scatter(x,z,s=20.0,c='g',label='z')
fig2 = fig.add_subplot(122)
fig2.scatter(x,y,s=40.0,c='r',label='y');
Explanation: Multi-panel plots
End of explanation
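As an aside, a minimal equivalent using plt.subplots, which creates the figure and both axes in one call (a sketch reusing the x, y, z arrays defined above):
fig, (ax1, ax2) = plt.subplots(1, 2)
ax1.scatter(x, z, s=20.0, c='g', label='z')
ax2.scatter(x, y, s=40.0, c='r', label='y')
plt.show()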
n = 100000
mean = 4.0
disp = 2.0
bins = 32
g = np.random.normal(mean,disp,n)
p = np.random.poisson(mean,n)
gh=plt.hist(g,bins)
ph=plt.hist(p,bins)
plt.hist([g,p],bins)
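# Optional sketch: the raw counts above depend on sample size and bin width.
# Normalizing with density=True (normed=True on very old matplotlib versions)
# makes the Gaussian and Poisson histograms comparable as probability densities.
plt.hist([g, p], bins, density=True, label=['normal', 'poisson'])
plt.legend()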
Explanation: Histogram
End of explanation |
15,102 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
How to import the data
1. Define search filters. This is needed if some data has to be filtered out.
2. Import data from ase databases.
3. Store references and calculate formation energies.
4. Export catmap input file.
It is important to pay attention to the search filters. If you get garbage results, it is likely because the search filters are not sufficient for your dataset. Make sure you filter calculator parameters such as XC-functional, basis set cutoffs, k-point sampling, etc., when necessary.
Importing data from correctly formatted .db files
Step1: The site_specific option accepts True, False or a string. In the latter case, the site key is recognized only if the value matches the string, while all other sites are treated as identical.
Your data is now stored in the EnergyLandscape object.
Get formation energies and export to catmap format.
Formation energies are calculated
Step2: references is a required parameter that should contain the gas-phase references. If a gas-phase reference depends on another, list the dependent one after the one it depends on.
Step3: How to import frequencies.
The field data always contains a dictionary. Use it with the key frequencies to make them accessible to the EnergyLandscape.
Importing frequencies is handled by the methods get_surfaces and get_molecules, which we have already used. It is necessary to pass the parameter frequency_db to it to import frequencies along with atomic structures like so
Step4: How to import transition states and pathways.
Transition states and paths have the mandatory key value pairs | Python Code:
# Import and instantiate energy_landscape object.
from catmap.api.ase_data import EnergyLandscape
energy_landscape = EnergyLandscape()
# Import all gas phase species from db.
search_filter_gas = []
energy_landscape.get_molecules('molecules.db', selection=search_filter_gas)
# Import all adsorbates and slabs from db.
search_filter_slab = []
energy_landscape.get_surfaces('surfaces.db', selection=search_filter_slab, site_specific=False)
Explanation: How to import the data
1. Define search filters. This is needed if some data has to be filtered out.
2. Import data from ase databases.
3. Store references and calculate formation energies.
4. Export catmap input file.
It is important to pay attention to the search filters. If you get garbage results, it is likely because the search filters are not sufficient for your dataset. Make sure you filter calculator parameters such as XC-functional, basis set cutoffs, k-point sampling, etc., when necessary.
Importing data from correctly formatted .db files:
End of explanation
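For illustration only, a populated filter could look like the sketch below; the key-value strings are hypothetical and must match whatever keys actually exist in your own .db files:
# Hypothetical example filters (adjust keys and values to your database):
search_filter_gas = ['xc=PBE', 'pw=500', 'vacuum=8']
search_filter_slab = ['xc=PBE', 'pw=500', 'layers=3']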
references = (('H', 'H2_gas'), ('O', 'H2O_gas'), ('C', 'CH4_gas'),)
energy_landscape.calc_formation_energies(references)
Explanation: The site_specific option accepts True, False or a string. In the latter case, the site key is recognized only if the value matches the string, while all other sites are treated as identical.
Your data is now stored in the EnergyLandscape object.
Get formation energies and export to catmap format.
Formation energies are calculated:
End of explanation
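For intuition, the ordering requirement can be seen from the reference arithmetic; the sketch below uses made-up total energies and mirrors the usual convention (the exact bookkeeping inside calc_formation_energies may differ):
E_gas = {'H2_gas': -6.8, 'H2O_gas': -14.2, 'CH4_gas': -24.0}  # hypothetical totals (eV)
E_H = E_gas['H2_gas'] / 2.          # H referenced to 1/2 H2
E_O = E_gas['H2O_gas'] - 2 * E_H    # O depends on the H reference, so H must come first
E_C = E_gas['CH4_gas'] - 4 * E_H    # C depends on the H reference as well
print(E_H, E_O, E_C)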
file_name = 'my_input.txt'
energy_landscape.make_input_file(file_name)
# Take a peek at the file.
with open(file_name) as fp:
for line in fp.readlines()[:5]:
print(line)
Explanation: references is a required parameter that should contain the gas-phase references. If a gas-phase reference depends on another, list the dependent one after the one it depends on.
End of explanation
energy_landscape.get_molecules('molecules.db', frequency_db='frequencies.db', selection=search_filter_gas)
Explanation: How to import frequencies.
The field data always contains a dictionary. Use it with the key frequencies to make them accessible to the EnergyLandscape.
Importing frequencies is handled by the methods get_surfaces and get_molecules, which we have already used. It is necessary to pass the parameter frequency_db to it to import frequencies along with atomic structures like so:
End of explanation
energy_landscape.get_transition_states('neb.db')
energy_landscape.calc_formation_energies(references)
Explanation: How to import transition states and pathways.
Transition states and paths have the mandatory key value pairs:
path_id
step or image
step or image is used to order the images.
There is one additional recommended key value pair:
distance
which is useful for making plots of the energy versus a reaction coordinate.
To add formation energies of transition states to your catmap input, you can use the method:
End of explanation |
15,103 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Dirichlet Distribution
You can think of this as a two-dimensional histogram; it is drawn with hexagonal bins because hexagons are closer to a circle than squares.
For the moments, the mode formula is the main one to remember. In the variance, the denominator is third order in $\alpha$, so the variance comes out small.
This is the process of finding the parameters in a Bayesian way: because the number of samples is finite the variance stays above zero, and only with infinitely many samples could the variance be driven to zero.
When $\alpha$ is not $(1,1,1)$, the distribution can be made to concentrate at a particular location, as shown below. Using this property, the Dirichlet distribution can be applied to the Bayesian problem of estimating the parameters of a multinomial distribution.
The Dirichlet distribution can be seen as an extension of the Beta distribution. The Beta distribution is used in Bayesian models of a single (univariate) random variable taking values between 0 and 1, while the Dirichlet distribution is used in Bayesian models of multivariate random variables taking values between 0 and 1. The Dirichlet distribution also has the constraint that the variables must sum to 1.
The probability density function of the Dirichlet distribution is as follows.
$$ f(x_1, x_2, \cdots, x_K) = \frac{1}{\mathrm{B}(\boldsymbol\alpha)} \prod_{i=1}^K x_i^{\alpha_i - 1} $$
where
$$ \mathrm{B}(\boldsymbol\alpha) = \frac{\prod_{i=1}^K \Gamma(\alpha_i)} {\Gamma\bigl(\sum_{i=1}^K \alpha_i\bigr)} $$
and the following constraint must hold:
$$ \sum_{i=1}^{K} x_i = 1 $$
Relationship between the Beta distribution and the Dirichlet distribution
The Beta distribution can be viewed as a Dirichlet distribution with $K=2$.
That is, if we set $x_1 = x$, $x_2 = 1 - x$, $\alpha_1 = a$, $\alpha_2 = b$, then
$$
\begin{eqnarray}
\text{Beta}(x;a,b)
&=& \frac{\Gamma(a+b)}{\Gamma(a)\Gamma(b)}\, x^{a-1}(1-x)^{b-1} \\
&=& \frac{\Gamma(\alpha_1+\alpha_2)}{\Gamma(\alpha_1)\Gamma(\alpha_2)}\, x_1^{\alpha_1 - 1} x_2^{\alpha_2 - 1} \\
&=& \frac{1}{\mathrm{B}(\alpha_1, \alpha_2)} \prod_{i=1}^2 x_i^{\alpha_i - 1}
\end{eqnarray}
$$
Moment properties of the Dirichlet distribution
The expectation, mode, and variance of the Dirichlet distribution are as follows.
Expectation
$$E[x_k] = \dfrac{\alpha_k}{\alpha}$$
where
$$\alpha=\sum\alpha_k$$
Mode
$$ \dfrac{\alpha_k - 1}{\alpha - K}$$
Variance
$$\text{Var}[x_k] =\dfrac{\alpha_k(\alpha - \alpha_k)}{\alpha^2(\alpha + 1)}$$
From the expectation formula we can see that the parameter $\boldsymbol\alpha = (\alpha_1, \alpha_2, \ldots, \alpha_K)$ is a shape factor that determines which of $(x_1, x_2, \ldots, x_K)$ is likely to come out larger. If all the $\alpha_i$ are equal, all the $x_i$ follow the same distribution.
The variance formula also shows that the larger the magnitude of $\boldsymbol\alpha$, the smaller the variance, i.e. the more strongly the distribution concentrates around particular values.
Applications of the Dirichlet distribution
Consider the following problem, which is a special case of the Dirichlet distribution with $K=3$ and $\alpha_1 = \alpha_2 = \alpha_3$.
<img src="https
Step1: The following function draws the generated points so that they can be viewed inside the two-dimensional triangle.
Step2: If we approach this problem naively by generating three independent uniform random variables between 0 and 1 and then normalizing them so that their sum is 1, the probability distribution concentrates near the center of the triangle, as in the following figure. In other words, the random variables are not spread out evenly.
Step3: However, a Dirichlet distribution with $\alpha=(1,1,1)$ generates evenly spread samples, as shown below.
Step4: When $\alpha$ is not $(1,1,1)$, the distribution can be made to concentrate at a particular location, as shown below. Using this property, the Dirichlet distribution can be applied to the Bayesian problem of estimating the parameters of a multinomial distribution. | Python Code:
from mpl_toolkits.mplot3d import Axes3D
from mpl_toolkits.mplot3d.art3d import Poly3DCollection
fig = plt.figure()
ax = Axes3D(fig)
x = [1, 0, 0]
y = [0, 1, 0]
z = [0, 0, 1]
verts = [zip(x, y, z)]
ax.add_collection3d(Poly3DCollection(verts, edgecolor="k", lw=5, alpha=0.4))
ax.text(1, 0, 0, "(1,0,0)", position=(0.7, 0.1))
ax.text(0, 1, 0, "(0,1,0)", position=(0, 1.04))
ax.text(0, 0, 1, "(0,0,1)", position=(-0.2, 0))
ax.set_xlabel("x")
ax.set_ylabel("y")
ax.set_zlabel("z")
ax.set_xticks([0, 1])
ax.set_yticks([0, 1])
ax.set_zticks([0, 1])
ax.view_init(20, -20)
plt.show()
Explanation: Dirichlet Distribution
You can think of this as a two-dimensional histogram; it is drawn with hexagonal bins because hexagons are closer to a circle than squares.
For the moments, the mode formula is the main one to remember. In the variance, the denominator is third order in $\alpha$, so the variance comes out small.
This is the process of finding the parameters in a Bayesian way: because the number of samples is finite the variance stays above zero, and only with infinitely many samples could the variance be driven to zero.
When $\alpha$ is not $(1,1,1)$, the distribution can be made to concentrate at a particular location, as shown below. Using this property, the Dirichlet distribution can be applied to the Bayesian problem of estimating the parameters of a multinomial distribution.
The Dirichlet distribution can be seen as an extension of the Beta distribution. The Beta distribution is used in Bayesian models of a single (univariate) random variable taking values between 0 and 1, while the Dirichlet distribution is used in Bayesian models of multivariate random variables taking values between 0 and 1. The Dirichlet distribution also has the constraint that the variables must sum to 1.
The probability density function of the Dirichlet distribution is as follows.
$$ f(x_1, x_2, \cdots, x_K) = \frac{1}{\mathrm{B}(\boldsymbol\alpha)} \prod_{i=1}^K x_i^{\alpha_i - 1} $$
where
$$ \mathrm{B}(\boldsymbol\alpha) = \frac{\prod_{i=1}^K \Gamma(\alpha_i)} {\Gamma\bigl(\sum_{i=1}^K \alpha_i\bigr)} $$
and the following constraint must hold:
$$ \sum_{i=1}^{K} x_i = 1 $$
Relationship between the Beta distribution and the Dirichlet distribution
The Beta distribution can be viewed as a Dirichlet distribution with $K=2$.
That is, if we set $x_1 = x$, $x_2 = 1 - x$, $\alpha_1 = a$, $\alpha_2 = b$, then
$$
\begin{eqnarray}
\text{Beta}(x;a,b)
&=& \frac{\Gamma(a+b)}{\Gamma(a)\Gamma(b)}\, x^{a-1}(1-x)^{b-1} \\
&=& \frac{\Gamma(\alpha_1+\alpha_2)}{\Gamma(\alpha_1)\Gamma(\alpha_2)}\, x_1^{\alpha_1 - 1} x_2^{\alpha_2 - 1} \\
&=& \frac{1}{\mathrm{B}(\alpha_1, \alpha_2)} \prod_{i=1}^2 x_i^{\alpha_i - 1}
\end{eqnarray}
$$
Moment properties of the Dirichlet distribution
The expectation, mode, and variance of the Dirichlet distribution are as follows.
Expectation
$$E[x_k] = \dfrac{\alpha_k}{\alpha}$$
where
$$\alpha=\sum\alpha_k$$
Mode
$$ \dfrac{\alpha_k - 1}{\alpha - K}$$
Variance
$$\text{Var}[x_k] =\dfrac{\alpha_k(\alpha - \alpha_k)}{\alpha^2(\alpha + 1)}$$
From the expectation formula we can see that the parameter $\boldsymbol\alpha = (\alpha_1, \alpha_2, \ldots, \alpha_K)$ is a shape factor that determines which of $(x_1, x_2, \ldots, x_K)$ is likely to come out larger. If all the $\alpha_i$ are equal, all the $x_i$ follow the same distribution.
The variance formula also shows that the larger the magnitude of $\boldsymbol\alpha$, the smaller the variance, i.e. the more strongly the distribution concentrates around particular values.
Applications of the Dirichlet distribution
Consider the following problem, which is a special case of the Dirichlet distribution with $K=3$ and $\alpha_1 = \alpha_2 = \alpha_3$.
<img src="https://datascienceschool.net/upfiles/d0acaf490aaa41389b975e20c58ac1ee.png" style="width:90%; margin: 0 auto 0 auto;">
The three-dimensional Dirichlet problem can be viewed as the problem of generating points on the equilateral triangle that connects the three points (1,0,0), (0,1,0), and (0,0,1) in three-dimensional space, as in the figure drawn above.
End of explanation
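As a quick numerical check of the formulas above (a sketch; scipy's dirichlet object exposes pdf, mean and var):
import numpy as np
import scipy.stats
# Beta(a, b) is the K=2 Dirichlet: the two densities agree.
a, b, x0 = 3.0, 4.0, 0.3
print(scipy.stats.beta(a, b).pdf(x0))
print(scipy.stats.dirichlet((a, b)).pdf([x0, 1 - x0]))
# The mean and variance match E[x_k] = alpha_k/alpha and
# Var[x_k] = alpha_k (alpha - alpha_k) / (alpha^2 (alpha + 1)).
alpha = np.array([3.0, 4.0, 2.0])
alpha0 = alpha.sum()
rv = scipy.stats.dirichlet(alpha)
print(rv.mean(), alpha / alpha0)
print(rv.var(), alpha * (alpha0 - alpha) / (alpha0**2 * (alpha0 + 1)))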
def plot_triangle(X, kind):
n1 = np.array([1, 0, 0])
n2 = np.array([0, 1, 0])
n3 = np.array([0, 0, 1])
n12 = (n1 + n2) / 2
m1 = np.array([1, -1, 0])
m2 = n3 - n12
m1 = m1/np.linalg.norm(m1)
m2 = m2/np.linalg.norm(m2)
X1 = (X-n12).dot(m1)
X2 = (X-n12).dot(m2)
g = sns.jointplot(X1, X2, kind=kind, xlim=(-0.8,0.8), ylim=(-0.45,0.9))
g.ax_joint.axis("equal")
plt.show()
Explanation: The following function draws the generated points so that they can be viewed inside the two-dimensional triangle.
End of explanation
X1 = np.random.rand(1000, 3)
X1 = X1 / X1.sum(axis=1)[:, np.newaxis]
plot_triangle(X1, kind="scatter")
plot_triangle(X1, kind="hex")
Explanation: If we approach this problem naively by generating three independent uniform random variables between 0 and 1 and then normalizing them so that their sum is 1, the probability distribution concentrates near the center of the triangle, as in the figure above. In other words, the random variables are not spread out evenly.
End of explanation
X2 = sp.stats.dirichlet((1, 1, 1)).rvs(1000)
plot_triangle(X2, kind="scatter")
plot_triangle(X2, kind="hex")
Explanation: However, a Dirichlet distribution with $\alpha=(1,1,1)$ generates evenly spread samples, as shown above.
End of explanation
def project(x):
n1 = np.array([1, 0, 0])
n2 = np.array([0, 1, 0])
n3 = np.array([0, 0, 1])
n12 = (n1 + n2)/2
m1 = np.array([1, -1, 0])
m2 = n3 - n12
m1 = m1/np.linalg.norm(m1)
m2 = m2/np.linalg.norm(m2)
return np.dstack([(x-n12).dot(m1), (x-n12).dot(m2)])[0]
def project_reverse(x):
n1 = np.array([1, 0, 0])
n2 = np.array([0, 1, 0])
n3 = np.array([0, 0, 1])
n12 = (n1 + n2)/2
m1 = np.array([1, -1, 0])
m2 = n3 - n12
m1 = m1/np.linalg.norm(m1)
m2 = m2/np.linalg.norm(m2)
return x[:,0][:, np.newaxis] * m1 + x[:,1][:, np.newaxis] * m2 + n12
eps = np.finfo(float).eps * 10
X = project([[1-eps,0,0], [0,1-eps,0], [0,0,1-eps]])
import matplotlib.tri as mtri
triang = mtri.Triangulation(X[:, 0], X[:, 1], [[0, 1, 2]])
refiner = mtri.UniformTriRefiner(triang)
triang2 = refiner.refine_triangulation(subdiv=6)
XYZ = project_reverse(np.dstack([triang2.x, triang2.y, 1-triang2.x-triang2.y])[0])
pdf = sp.stats.dirichlet((1,1,1)).pdf(XYZ.T)
plt.tricontourf(triang2, pdf)
plt.axis("equal")
plt.show()
pdf = sp.stats.dirichlet((3,4,2)).pdf(XYZ.T)
plt.tricontourf(triang2, pdf)
plt.axis("equal")
plt.show()
pdf = sp.stats.dirichlet((16,24,14)).pdf(XYZ.T)
plt.tricontourf(triang2, pdf)
plt.axis("equal")
plt.show()
Explanation: When $\alpha$ is not $(1,1,1)$, the distribution can be made to concentrate at a particular location, as shown above. Using this property, the Dirichlet distribution can be applied to the Bayesian problem of estimating the parameters of a multinomial distribution.
End of explanation |
15,104 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Enable GPU
This notebook and pretty much every other notebook in this repository will run faster if you are using a GPU.
On Colab
Step1: Image and patch generation functions
Step2: Train a regression model to predict density
Step3: Plots for book
Step4: Actual image
Let's try it on an actual berry image
<img height="512" width="512" src="berries.jpg" /> | Python Code:
import tensorflow as tf
print(tf.version.VERSION)
device_name = tf.test.gpu_device_name()
if device_name != '/device:GPU:0':
raise SystemError('GPU device not found')
print('Found GPU at: {}'.format(device_name))
Explanation: Enable GPU
This notebook and pretty much every other notebook in this repository will run faster if you are using a GPU.
On Colab:
* Navigate to Edit → Notebook Settings
* Select GPU from the Hardware Accelerator drop-down
On Cloud AI Platform Notebooks:
* Navigate to https://console.cloud.google.com/ai-platform/notebooks
* Create an instance with a GPU or select your instance and add a GPU
Next, we'll confirm that we can connect to the GPU with tensorflow:
End of explanation
import matplotlib.pylab as plt
import numpy as np
import tensorflow as tf
import scipy.ndimage as ndimage
from skimage import draw
FULL_IMG_WIDTH = 512
FULL_IMG_HEIGHT = 512
IMG_CHANNELS = 3
PATCH_WIDTH = 32
PATCH_HEIGHT = 32
INPUT_WIDTH = PATCH_WIDTH*2
INPUT_HEIGHT = PATCH_HEIGHT*2
def generate_image(with_intermediates=False):
# the image has a random greenish background that is smoothed
backgr = np.zeros([FULL_IMG_HEIGHT, FULL_IMG_WIDTH, IMG_CHANNELS])
backgr[ np.random.rand(FULL_IMG_HEIGHT, FULL_IMG_WIDTH) < 0.3, 1 ] = 1
backgr = ndimage.gaussian_filter(backgr, sigma=(5, 5, 0), order=0)
# between 25 and 75 berries randomly placed
num_berries = np.random.randint(25, 75)
berry_cx = np.random.randint(0, FULL_IMG_WIDTH, size=num_berries)
berry_cy = np.random.randint(0, FULL_IMG_HEIGHT, size=num_berries)
label = np.zeros([FULL_IMG_WIDTH, FULL_IMG_HEIGHT])
label[berry_cx, berry_cy] = 1
# draw the berries which are 10 px in radius
berries = np.zeros([FULL_IMG_HEIGHT, FULL_IMG_WIDTH])
for idx in range(len(berry_cx)):
rr, cc = draw.circle(berry_cx[idx], berry_cy[idx],
radius=10,
shape=berries.shape)
berries[rr, cc] = 1
# add berries to the background
img = np.copy(backgr)
img[berries > 0] = [1, 0, 0] # red
if with_intermediates:
print("{} berries simulated".format(num_berries))
return backgr, berries, img, label
else:
return img, label
images = generate_image(True)
f, ax = plt.subplots(1, len(images), figsize=(15, 5))
for idx, img in enumerate(images):
ax[idx].imshow(img)
ax[idx].axis('off')
## given an image, get the patches
def get_patches(img, label, verbose=False):
img = tf.expand_dims(img, axis=0)
label = tf.expand_dims(tf.expand_dims(label, axis=0), axis=-1)
if verbose:
print(img.shape, label.shape)
num_patches = (FULL_IMG_HEIGHT // PATCH_HEIGHT)**2
patches = tf.image.extract_patches(img,
sizes=[1, INPUT_HEIGHT, INPUT_WIDTH, 1],
strides=[1, PATCH_HEIGHT, PATCH_WIDTH, 1],
rates=[1, 1, 1, 1],
padding='SAME',
name='get_patches')
patches = tf.reshape(patches, [num_patches, -1])
labels = tf.image.extract_patches(label,
sizes=[1, PATCH_HEIGHT, PATCH_WIDTH, 1],
strides=[1, PATCH_HEIGHT, PATCH_WIDTH, 1],
rates=[1, 1, 1, 1],
padding='VALID',
name='get_labels')
labels = tf.reshape(labels, [num_patches, -1])
# the "density" is the number of points in the label patch
patch_labels = tf.math.reduce_sum(labels, axis=[1], name='calc_density')
if verbose:
print(patches.shape, labels.shape, patch_labels.shape)
return patches, patch_labels
Explanation: Image and patch generation functions
End of explanation
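A quick check of the patch geometry (a sketch): with 512x512 images, the 32x32 label patches tile each image into (512/32)^2 = 256 patches, each paired with a 64x64 input crop, and the per-patch label is simply how many berry centres fall inside it, so the labels sum to the berry count:
img, label = generate_image()
patches, patch_labels = get_patches(img, label)
print(patches.shape)                        # (256, 64*64*3)
print(patch_labels.shape)                   # (256,)
print(float(tf.reduce_sum(patch_labels)))   # total berries in this image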
# Getting input data
def create_dataset(num_full_images):
def generate_patches():
for i in range(num_full_images):
img, label = generate_image()
patches, patch_labels = get_patches(img, label)
# print(len(patches) * num_full_images)
for patch, patch_label in zip(patches, patch_labels):
yield patch, patch_label
return tf.data.Dataset.from_generator(
generate_patches,
(tf.float32, tf.float32), # patch, patch_label
(tf.TensorShape([INPUT_HEIGHT*INPUT_WIDTH*IMG_CHANNELS]),
tf.TensorShape([]))
)
trainds = create_dataset(1) # will create 256 patches per image
for img, label in trainds.take(3):
avg = tf.math.reduce_mean(img) # avg pixel in image
print(img.shape, label.numpy(), avg.numpy())
# Train
NUM_TRAIN = 200 # 10000 more realistic
NUM_EVAL = 10 # 1000 more realistic
NUM_EPOCHS = 5
def training_plot(metrics, history):
f, ax = plt.subplots(1, len(metrics), figsize=(5*len(metrics), 5))
for idx, metric in enumerate(metrics):
ax[idx].plot(history.history[metric], ls='dashed')
ax[idx].set_xlabel("Epochs")
ax[idx].set_ylabel(metric)
ax[idx].plot(history.history['val_' + metric]);
ax[idx].legend([metric, 'val_' + metric])
def train_and_evaluate(batch_size = 32,
lrate = 0.001, # default in Adam constructor
l1 = 0,
l2 = 0,
num_filters = 32):
regularizer = tf.keras.regularizers.l1_l2(l1, l2)
train_dataset = create_dataset(NUM_TRAIN).batch(batch_size)
eval_dataset = create_dataset(NUM_EVAL).batch(64)
# a simple convnet. you can make it more complex, of course
# the patch is flattened, so we start by reshaping to an image
model = tf.keras.Sequential([
tf.keras.layers.Reshape([INPUT_HEIGHT, INPUT_WIDTH, IMG_CHANNELS],
input_shape=[INPUT_WIDTH * INPUT_HEIGHT * IMG_CHANNELS]),
tf.keras.layers.Conv2D(num_filters, (3,3), activation='relu'),
tf.keras.layers.MaxPooling2D((2, 2)),
tf.keras.layers.Conv2D(num_filters*2, (3,3), activation='relu'),
tf.keras.layers.MaxPooling2D((2, 2)),
tf.keras.layers.Conv2D(num_filters*2, (3,3), activation='relu'),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(num_filters*2,
kernel_regularizer=regularizer,
activation=tf.keras.activations.relu),
tf.keras.layers.Dense(1, activation='linear')
])
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=lrate),
loss=tf.keras.losses.mean_squared_error,
metrics=['mse', 'mae'])
print(model.summary())
history = model.fit(train_dataset, validation_data=eval_dataset, epochs=NUM_EPOCHS)
training_plot(['loss', 'mse', 'mae'], history)
return model
model = train_and_evaluate()
## prediction.
def count_berries(model, img):
num_patches = (FULL_IMG_HEIGHT // PATCH_HEIGHT)**2
img = tf.expand_dims(img, axis=0)
patches = tf.image.extract_patches(img,
sizes=[1, INPUT_HEIGHT, INPUT_WIDTH, 1],
strides=[1, PATCH_HEIGHT, PATCH_WIDTH, 1],
rates=[1, 1, 1, 1],
padding='SAME',
name='get_patches')
patches = tf.reshape(patches, [num_patches, -1])
densities = model.predict(patches)
return tf.reduce_sum(densities)
# use an example image
f, ax = plt.subplots(4, 4, figsize=(20, 20))
for idx in range(16):
backgr, berries, img, label = generate_image(True)
ax[idx//4, idx%4].imshow(img)
ax[idx//4, idx%4].set_title("actual={:.1f} pred={:.1f}".format(
tf.reduce_sum(label).numpy(),
count_berries(model, img).numpy()
))
ax[idx//4, idx%4].axis('off')
Explanation: Train a regression model to predict density
End of explanation
# OPTIONAL, CAN BE OMITTED
img, label = images = generate_image()
patches, labels = get_patches(img, label, verbose=True)
# display a few patches
f, ax = plt.subplots(4, 4, figsize=(20, 20))
for idx in range(16):
r = np.random.randint(0, patches.shape[0])
ax[idx//4, idx%4].imshow(tf.reshape(patches[r], [INPUT_HEIGHT, INPUT_WIDTH, IMG_CHANNELS]).numpy())
ax[idx//4, idx%4].set_title("density={:.1f}".format(labels[r].numpy()))
ax[idx//4, idx%4].axis('off')
Explanation: Plots for book
End of explanation
!file berries.jpg
contents = tf.io.read_file('./berries.jpg')
img = tf.image.decode_image(contents)
img = tf.image.resize(img, [FULL_IMG_WIDTH, FULL_IMG_HEIGHT])
n = count_berries(model, img)
print(n.numpy())
Explanation: Actual image
Let's try it on an actual berry image
<img height="512" width="512" src="berries.jpg" />
End of explanation |
15,105 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Messy Sensor Data
Step1: The file seems to be tab separated. There are dates, and some empty items.
Can we read it more clearly?
pandas.read_csv() is very versatile with keyword arguments.
sep
Step2: Note
Step3: Small steps in data cleaning
The data seems complete, but still has some issues
Step4: Let wind be numbers
To process wind direction, we need them in numbers.
* North wind (↑) is 0 degrees, going clockwise.
* East wind (→) is 90 degrees.
We need to transform each direction label "N", "NNE", "NE", "ENE", "E", etc., to numbers on the column.
Series.apply()
Step5: Irregular data frequency
Our data has odd timestamps from time to time. For example
Step6: We can solve this. DataFrame.asfreq() forces a specific frequency on the index, discarding and filling the rest.
Setting data frequency
Let's set each data point to be every 30 minutes.
Step7: Plotting some data for a better picture
Step8: There seems to be gaps in our dataset between 4th Jan and 9th Jan.
Gaps in the data
Weather sensors sometimes drop out.
But for modelling purposes, we need a gap-free dataset.
Series.interpolate()
Step9: Plotting to be sure!
Step10: Now the dataset is ready to be used for modelling!
You too can keep our water healthy.
Today you learned how to use pandas in many ways | Python Code:
import pandas as pd
# Open a comma-separated values (CSV) file as a DataFrame
weather_observations = pd.read_csv('observations/Canberra_observations.csv')
# Print the first 5 entries
weather_observations.head()
Explanation: Messy Sensor Data:
A Programmer's Cleaning Guide
@Xavier_Ho, #pyconau
<small>Feel free to tag me on Twitter for questions and comments.</small>
Related #pyconau talks
This talk is a tutorial (thanks @evildmp!) for cleaning up messy data with pandas. You might also find these other ones useful:
Using Python in a Data Hackathon - Tennessee Leeuwenburg
Visualising data with Python - Clare Sloggett
Slides will be available online.
There will be time for questions at the end - happy to navigate specific problems you may have.
Motivation: healthy water
Aquatic microorganisms thrive and die with seasonal temperatures.
Cyanobacteria, or "blue-green algae", produces oxygen in water.
Too much cyanobacteria in water is harmful to consume.
We model cyanobacteria in water with weather data.
In short: we can predict and track cyanobacteria, and keep our water healthy.
Let's take a look at our weather sensor data...
In this talk, we use pandas for data wrangling.
You can install it with your favourite package manager (instructions).
$ conda install pandas
To begin:
* pandas.read_csv(): Opens a CSV file as a DataFrame, like a table.
* DataFrame.head(): Displays the first 5 entries.
Read a CSV file
End of explanation
# Supply pandas with some hints about the file to read
weather_observations = pd.read_csv('observations/Canberra_observations.csv',
# sep='\t',
# parse_dates={'Datetime': ['Date', 'Time']},
# dayfirst=True,
# infer_datetime_format=True,
# na_values=['-'],
)
# Display some entries
weather_observations.head()
Explanation: The file seems to be tab separated. There are dates, and some empty items.
Can we read it more clearly?
pandas.read_csv() is very versatile with keyword arguments.
sep: The separator between columns.
parse_dates: Treat one or more columns like dates.
dayfirst: Use DD.MM.YYYY format, not month first.
infer_datetime_format: Tell pandas to guess the date format.
na_values: Specify values to be treated as empty.
Read a CSV file the 🐼 way
End of explanation
# For consistency between slides
weather_observations = pd.read_csv('observations/Canberra_observations.csv',
sep='\t',
parse_dates={'Datetime': ['Date', 'Time']},
dayfirst=True,
infer_datetime_format=True,
na_values=['-']
)
Explanation: Note: NaN in the table above means empty, not the floating-point number value.
End of explanation
# Remove duplicated items with the same date and time
no_duplicates = weather_observations.drop_duplicates('Datetime', keep='last')
# Sorting is ascending by default, or chronological order
# sorted_dataframe = no_duplicates.sort_values('Datetime')
# Use `Datetime` as our DataFrame index
# indexed_weather_observations = sorted_dataframe.set_index('Datetime')
# Display some entries
no_duplicates.head()
# For consistency
no_duplicates = weather_observations.drop_duplicates('Datetime', keep='last')
sorted_dataframe = no_duplicates.sort_values('Datetime')
indexed_weather_observations = sorted_dataframe.set_index('Datetime')
Explanation: Small steps in data cleaning
The data seems complete, but still has some issues:
Each day includes midnight, and another midnight next day.
Order starts at end of day and goes backwards in time.
pandas offers some functions to help us out:
* DataFrame.drop_duplicates(): Delete duplicated items.
* DataFrame.sort_values(): Rearrange in order.
* DataFrame.set_index(): Specify a column to use as index.
Keep order in our data
End of explanation
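The same three steps also read nicely as a single chained expression (a sketch, equivalent to the cell above):
indexed_weather_observations = (
    weather_observations
    .drop_duplicates('Datetime', keep='last')
    .sort_values('Datetime')
    .set_index('Datetime')
)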
# Translate wind direction to degrees
wind_directions = {
'N': 0. , 'NNE': 22.5, 'NE': 45. , 'ENE': 67.5 ,
'E': 90. , 'ESE': 112.5, 'SE': 135. , 'SSE': 157.5 ,
'S': 180. , 'SSW': 202.5, 'SW': 225. , 'WSW': 247.5 ,
'W': 270. , 'WNW': 292.5, 'NW': 315. , 'NNW': 337.5 }
# Replace wind directions column with a new number column
# `get()` accesses values safely from dictionary
indexed_weather_observations['Wind dir'] = \
indexed_weather_observations['Wind dir'].apply(wind_directions.get)
# Display some entries
indexed_weather_observations.head()
Explanation: Let wind be numbers
To process wind direction, we need them in numbers.
* North wind is 0 degrees, going clockwise.
* East wind is 90 degrees.
We need to transform each direction label "N", "NNE", "NE", "ENE", "E", etc., to numbers on the column.
Series.apply(): Transforms each entry with a function.
Transforming DataFrames
End of explanation
# One section where the data has weird timestamps ...
indexed_weather_observations[1800:1806]
Explanation: Irregular data frequency
Our data has odd timestamps from time to time. For example:
End of explanation
# Force the index to be every 30 minutes
regular_observations = indexed_weather_observations.asfreq('30min')
# Same section at different indices since setting its frequency :)
regular_observations[1633:1638]
Explanation: We can solve this. DataFrame.asfreq() forces a specific frequency on the index, discarding and filling the rest.
Setting data frequency
Let's set each data point to be every 30 minutes.
End of explanation
# Plot the first 500 entries with selected columns
regular_observations[['Wind spd', 'Wind gust', 'Tmp', 'Feels like']][:500].plot()
Explanation: Plotting some data for a better picture
End of explanation
# Interpolate data to fill empty values
for column in regular_observations.columns:
regular_observations[column].interpolate('time', inplace=True)
# Display some interpolated entries
regular_observations[1633:1638]
Explanation: There seem to be gaps in our dataset between 4th Jan and 9th Jan.
Gaps in the data
Weather sensors sometimes drop out.
But for modelling purposes, we need a gap-free dataset.
Series.interpolate(): Fill in empty values based on index.
Interpolate and fill empty rows
End of explanation
# Plot it again - gap free!
regular_observations[['Wind spd', 'Wind gust', 'Tmp', 'Feels like']][:500].plot()
Explanation: Plotting to be sure!
End of explanation
# BONUS SECTION
# Similarly, for sky observations
sky_observations = pd.read_csv('observations/Canberra_sky.csv',
sep='\t',
parse_dates={'Datetime': ['Date', 'Time']},
dayfirst=True,
infer_datetime_format=True,
na_values=['-', 'obscured'])
sky_observations.head()
# As before, remove duplicates and set index to datetime.
sky_observations.drop_duplicates('Datetime', keep='last', inplace=True)
sky_observations.sort_values('Datetime', inplace=True)
sky_observations.set_index('Datetime', inplace=True)
sky_observations.head()
# Drop rows that have no data
sky_observations.dropna(how='all', inplace=True)
sky_observations.head()
# Display the inferred data types
sky_observations.dtypes
# What are the values in the 'Cloud' column?
sky_observations['Cloud'].unique()
# 'obscured' means that the visibility was too low to see clouds. We will consider it to be NaN.
# Define a function to Change the 'Cloud' column to numerical values
def cloud_to_numeric(s):
if s == 'clear' or pd.isnull(s):
return 0
else:
return int(s[0]) / 8.0
# Apply the function to every item and assign it back to the original dataframe
sky_observations['Cloud'] = \
sky_observations['Cloud'].apply(cloud_to_numeric, convert_dtype=False).astype('float64')
sky_observations.head()
# Plot the cloud cover with scatter plot using matplotlib
clouds = sky_observations[['Cloud']][:100]
plt.plot_date(clouds.index, clouds.values)
# Join the two observations together
combined_observations = regular_observations.combine_first(sky_observations[['Cloud']])
combined_observations.head()
# Create a new series with 30-minutely timestamps
time_series = pd.date_range('2013-01-01', '2017-01-01', freq='30min')[:-1]
time_series
# Reindex our dataset
indexed_observations = combined_observations.reindex(time_series)
indexed_observations
# Display the columns in the dataset
indexed_observations.columns
# Interpolate data to fill NaN
for column in indexed_observations.columns:
indexed_observations[column].interpolate('time', inplace=True, limit_direction='both')
# Preview the cleaned data
indexed_observations
# Current bug in pandas fails plotting some interpolated frequencies
# see https://github.com/pandas-dev/pandas/issues/14763 (to be fixed by 31 August, 2017)
# Convert pandas DateTimeIndex to Python's datetime.datetime
timestamps = indexed_observations.index[:1000].to_pydatetime()
# Selecting a few columns to plot
selection1 = indexed_observations[['Wind spd', 'Wind gust', 'Tmp', 'Feels like']][:1000]
selection2 = indexed_observations[['Cloud']][:1000]
# For now, we copy the index and values to matplotlib
# see https://stackoverflow.com/questions/43206554/typeerror-float-argument-must-be-a-string-or-a-number-not-period/45191625#45191625
legend = plt.plot_date(timestamps, selection1.values, '-')
plt.legend(selection1.columns)
plt.show()
legend = plt.plot_date(timestamps, selection2.values, '-')
plt.legend(selection2.columns)
plt.show()
Explanation: Now the dataset is ready to be used for modelling!
You too can keep our water healthy.
Today you learned how to use pandas in many ways:
Reading a CSV file with proper structures
Sorting your dataset
Transforming columns by applying a function
Regularizing the data frequency when timestamps are irregular
Interpolate and fill missing data
Plotting your dataset
pandas is much more powerful than what we covered today. Check out the documentation! You might find some gems.
Messy Sensor Data: A Programmer's Cleaning Guide
Slides available at github.com/Spaxe/pyconau2017-messy-sensor-data
There is a bonus section on combining two DataFrames with different frequencies in the notebook.
Let's be friends on Twitter: @Xavier_Ho
End of explanation |
15,106 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Read CIFAR10 dataset
Step1: Normalize data
This maps all values in trn. and tst. data to range <-0.5,0.5>.
Some kind of value normalization is preferable to
provide consistent behavior across different problems and datasets.
Step2: VGG net
http
Step3: Resnet
Inception
Build and compile model
Create the computation graph of the network and compile a 'model' for optimization, including the loss function and optimizer.
Step4: Define TensorBoard callback
TensorBoard is able to store network statistics (loss, accuracy, weight histograms, activation histograms, ...) and view them through a web interface. To view the statistics, run 'tensorboard --logdir=path/to/log-directory' and go to localhost
Step5: Predict and evaluate
Step6: Compute test accuracy by hand | Python Code:
from tools import readCIFAR, mapLabelsOneHot
# First run ../data/downloadCIFAR.sh
# This reads the dataset
trnData, tstData, trnLabels, tstLabels = readCIFAR('../data/cifar-10-batches-py')
plt.subplot(1, 2, 1)
img = collage(trnData[:16])
print(img.shape)
plt.imshow(img)
plt.subplot(1, 2, 2)
img = collage(tstData[:16])
plt.imshow(img)
plt.show()
# Convert categorical labels to one-hot encoding which
# is needed by categorical_crossentropy in Keras.
# This is not universal. The loss can be easily implemented
# with category IDs as labels.
trnLabels = mapLabelsOneHot(trnLabels)
tstLabels = mapLabelsOneHot(tstLabels)
print('One-hot trn. labels shape:', trnLabels.shape)
Explanation: Read CIFAR10 dataset
End of explanation
trnData = trnData.astype(np.float32) / 255.0 - 0.5
tstData = tstData.astype(np.float32) / 255.0 - 0.5
from keras.layers import Input, Reshape, Dense, Dropout, Flatten, BatchNormalization
from keras.layers import Activation, Conv2D, MaxPooling2D, PReLU
from keras.models import Model
from keras import regularizers
w_decay = 0.0001
w_reg = regularizers.l2(w_decay)
Explanation: Normalize data
This maps all values in trn. and tst. data to range <-0.5,0.5>.
Some kind of value normalization is preferable to
provide consistent behavior across different problems and datasets.
End of explanation
def build_VGG_block(net, channels, layers, prefix):
for i in range(layers):
net = Conv2D(channels, 3, activation='relu', padding='same',
name='{}.{}'.format(prefix, i))(net)
net = MaxPooling2D(2, 2, padding="same")(net)
return net
def build_VGG(input_data, block_channels=[16,32,64], block_layers=[2,2,2], fcChannels=[256,256], p_drop=0.4):
net = input_data
for i, (cCount, lCount) in enumerate(zip(block_channels, block_layers)):
net = build_VGG_block(net, cCount, lCount, 'conv{}'.format(i))
net = Flatten()(net)
for i, cCount in enumerate(fcChannels):
FC = Dense(cCount, activation='relu', name='fc{}'.format(i))
net = Dropout(rate=p_drop)(FC(net))
net = Dense(10, name='out', activation='softmax')(net)
return net
def build_VGG_Bnorm_block(net, channels, layers, prefix):
for i in range(layers):
net = Conv2D(channels, 3, padding='same',
name='{}.{}'.format(prefix, i))(net)
net = BatchNormalization()(net)
net = PReLU()(net)
net = MaxPooling2D(2, 2, padding="same")(net)
return net
def build_VGG_Bnorm(input_data, block_channels=[16,32,64], block_layers=[2,2,2], fcChannels=[256,256], p_drop=0.4):
net = input_data
for i, (cCount, lCount) in enumerate(zip(block_channels, block_layers)):
net = build_VGG_Bnorm_block(net, cCount, lCount, 'conv{}'.format(i))
net = Dropout(rate=0.25)(net)
net = Flatten()(net)
for i, cCount in enumerate(fcChannels):
net = Dense(cCount, name='fc{}'.format(i))(net)
net = BatchNormalization()(net)
net = PReLU()(net)
net = Dropout(rate=p_drop)(net)
net = Dense(10, name='out', activation='softmax')(net)
return net
Explanation: VGG net
http://www.robots.ox.ac.uk/~vgg/research/very_deep/
End of explanation
from keras import optimizers
from keras.models import Model
from keras import losses
from keras import metrics
input_data = Input(shape=(trnData.shape[1:]), name='data')
net = build_VGG_Bnorm(input_data, block_channels=[64,128,256], block_layers=[3,3,3], fcChannels=[320,320],
p_drop=0.5)
model = Model(inputs=[input_data], outputs=[net])
print('Model')
model.summary()
model.compile(
loss=losses.categorical_crossentropy,
optimizer=optimizers.Adam(lr=0.001),
metrics=[metrics.categorical_accuracy])
Explanation: Resnet
Inception
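The Resnet and Inception headings are placeholders in this notebook; only the VGG-style network above is actually trained. Purely as an illustrative sketch (not the model compiled below), a basic residual block could be written with the layers already imported; the helper name and the assumption that the input already has `channels` feature maps are ours:
from keras.layers import Add
def build_residual_block(net, channels, prefix):
    # Main path: two 3x3 convolutions with batch normalization
    shortcut = net
    net = Conv2D(channels, 3, padding='same', name='{}.conv0'.format(prefix))(net)
    net = BatchNormalization()(net)
    net = Activation('relu')(net)
    net = Conv2D(channels, 3, padding='same', name='{}.conv1'.format(prefix))(net)
    net = BatchNormalization()(net)
    # Skip connection: add the block input back before the final activation
    net = Add()([net, shortcut])
    net = Activation('relu')(net)
    return net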
Build and compile model
Create the computation graph of the network and compile a 'model' for optimization, including the loss function and optimizer.
End of explanation
import keras
tbCallBack = keras.callbacks.TensorBoard(
log_dir='./Graph',
histogram_freq=1,
write_graph=True, write_images=True)
model.fit(
x=trnData, y=trnLabels,
batch_size=48, epochs=20, verbose=1,
validation_data=[tstData, tstLabels], shuffle=True)#, callbacks=[tbCallBack])
Explanation: Define TensorBoard callback
TensorBoard is able to store network statistics (loss, accuracy, weight histograms, activation histograms, ...) and view them through a web interface. To view the statistics, run 'tensorboard --logdir=path/to/log-directory' and go to localhost:6006.
End of explanation
classProb = model.predict(x=tstData[0:2])
print('Class probabilities:', classProb, '\n')
loss, acc = model.evaluate(x=tstData, y=tstLabels, batch_size=1024)
print()
print('loss', loss)
print('acc', acc)
Explanation: Predict and evaluate
End of explanation
classProb = model.predict(x=tstData)
print(classProb.shape)
correctProb = (classProb * tstLabels).sum(axis=1)
wrongProb = (classProb * (1-tstLabels)).max(axis=1)
print(correctProb.shape, wrongProb.shape)
accuracy = (correctProb > wrongProb).mean()
print('Accuracy: ', accuracy)
Explanation: Compute test accuracy by hand
End of explanation |
15,107 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Similarity Queries using Annoy Tutorial
This tutorial is about using the Annoy(Approximate Nearest Neighbors Oh Yeah) library for similarity queries in gensim
Why use Annoy?
The current implementation for finding k nearest neighbors in a vector space in gensim has linear complexity via brute force in the number of indexed documents, although with extremely low constant factors. The retrieved results are exact, which is an overkill in many applications
Step1: A similarity query using Annoy is significantly faster than using the traditional brute force method
Note
Step2: Making a Similarity Query
For the following examples, we'll use the Lee Corpus (which you already have if you've installed gensim)
See the Word2Vec tutorial for how to initialize and save this model.
Step3: Creating an indexer
An instance of AnnoyIndexer needs to be created in order to use Annoy in gensim. The AnnoyIndexer class is located in gensim.similarities.index
AnnoyIndexer() takes two parameters
Step4: Now that we are ready to make a query, lets find the top 5 most similar words to "army" in the lee corpus. To make a similarity query we call Word2Vec.most_similar like we would traditionally, but with an added parameter, indexer. The only supported indexer in gensim as of now is Annoy.
Step5: Analyzing the results
The closer the cosine similarity of a vector is to 1, the more similar that word is to our query, which was the vector for "army".
Relationship between num_trees and initialization time
Step6: Initialization time of the annoy indexer increases in a linear fashion with num_trees. Initialization time will vary from corpus to corpus; in the graph above, the Lee corpus was used
Relationship between num_trees and accuracy | Python Code:
#Set up the model and vector that we are using in the comparison
from gensim.similarities.index import AnnoyIndexer
from gensim.models.word2vec import Word2Vec
model = Word2Vec.load("/tmp/leemodel")
model.init_sims()
vector = model.syn0norm[0]
annoy_index = AnnoyIndexer(model, 500)
%%time
#Traditional implementation:
model.most_similar([vector], topn=5)
%%time
#Annoy implementation:
neighbors = model.most_similar([vector], topn=5, indexer=annoy_index)
for neighbor in neighbors:
print neighbor
Explanation: Similarity Queries using Annoy Tutorial
This tutorial is about using the Annoy(Approximate Nearest Neighbors Oh Yeah) library for similarity queries in gensim
Why use Annoy?
The current implementation for finding k nearest neighbors in a vector space in gensim has linear complexity via brute force in the number of indexed documents, although with extremely low constant factors. The retrieved results are exact, which is an overkill in many applications: approximate results retrieved in sub-linear time may be enough. Annoy can find approximate nearest neighbors much faster.
Comparing the traditional implementation and the Annoy
End of explanation
# import modules & set up logging
import logging
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)
Explanation: A similarity query using Annoy is significantly faster than using the traditional brute force method
Note: Initialization time for the annoy indexer was not included in the times. The optimal knn algorithm for you to use will depend on how many queries you need to make and the size of the corpus. If you are making very few similarity queries, the time taken to initialize the annoy indexer will be longer than the time it would take the brute force method to retrieve results. If you are making many queries however, the time it takes to initialize the annoy indexer will be made up for by the incredibly fast retrieval times for queries once the indexer has been initialized
What is Annoy?
Annoy is an open source library to search for points in space that are close to a given query point. It also creates large read-only file-based data structures that are mmapped into memory so that many processes may share the same data. For our purpose, it is used to find similarity between words or documents in a vector space. See the tutorial on similarity queries for more information on them.
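For context, a minimal sketch of using the annoy library on its own (outside gensim) could look like the following; the dimensionality and the random vectors here are made up for illustration:
from annoy import AnnoyIndex
import random
f = 40                                  # dimensionality of the vectors
index = AnnoyIndex(f, 'angular')        # angular distance ~ cosine similarity
for i in range(1000):
    index.add_item(i, [random.gauss(0, 1) for _ in range(f)])
index.build(10)                         # build 10 trees
print(index.get_nns_by_item(0, 5))      # 5 approximate nearest neighbours of item 0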
Getting Started
First thing to do is to install annoy, by running the following in the command line:
sudo pip install annoy
And then set up the logger:
End of explanation
# Load the model
import gensim
model = gensim.models.Word2Vec.load('/tmp/leemodel')
print model
Explanation: Making a Similarity Query
For the following examples, we'll use the Lee Corpus (which you already have if you've installed gensim)
See the Word2Vec tutorial for how to initialize and save this model.
End of explanation
from gensim.similarities.index import AnnoyIndexer
# 100 trees are being used in this example
annoy_index = AnnoyIndexer(model,100)
Explanation: Creating an indexer
An instance of AnnoyIndexer needs to be created in order to use Annoy in gensim. The AnnoyIndexer class is located in gensim.similarities.index
AnnoyIndexer() takes two parameters:
model: A Word2Vec or Doc2Vec model
num_trees: A positive integer. num_trees effects the build time and the index size. A larger value will give more accurate results, but larger indexes. More information on what trees in Annoy do can be found here. The relationship between num_trees, build time, and accuracy will be investigated later in the tutorial.
End of explanation
# Derive the vector for the word "army" in our model
vector = model["army"]
# The instance of AnnoyIndexer we just created is passed
approximate_neighbors = model.most_similar([vector], topn=5, indexer=annoy_index)
# Neatly print the approximate_neighbors and their corresponding cosine similarity values
for neighbor in approximate_neighbors:
print neighbor
Explanation: Now that we are ready to make a query, lets find the top 5 most similar words to "army" in the lee corpus. To make a similarity query we call Word2Vec.most_similar like we would traditionally, but with an added parameter, indexer. The only supported indexer in gensim as of now is Annoy.
End of explanation
%matplotlib inline
import matplotlib.pyplot as plt, time
x_cor = []
y_cor = []
for x in range(200):
start_time = time.time()
AnnoyIndexer(model, x)
y_cor.append(time.time()-start_time)
x_cor.append(x)
plt.plot(x_cor, y_cor)
plt.title("num_trees vs initalization time")
plt.ylabel("Initialization time (s)")
plt.xlabel("num_tress")
plt.show()
Explanation: Analyzing the results
The closer the cosine similarity of a vector is to 1, the more similar that word is to our query, which was the vector for "army".
Relationship between num_trees and initialization time
End of explanation
exact_results = [element[0] for element in model.most_similar([model.syn0norm[0]], topn=100)]
x_axis = []
y_axis = []
for x in range(1,30):
annoy_index = AnnoyIndexer(model, x)
approximate_results = model.most_similar([model.syn0norm[0]],topn=100, indexer=annoy_index)
top_words = [result[0] for result in approximate_results]
x_axis.append(x)
y_axis.append(len(set(top_words).intersection(exact_results)))
plt.plot(x_axis, y_axis)
plt.title("num_trees vs accuracy")
plt.ylabel("% accuracy")
plt.xlabel("num_trees")
plt.show()
Explanation: Initialization time of the annoy indexer increases in a linear fashion with num_trees. Initialization time will vary from corpus to corpus; in the graph above, the Lee corpus was used
Relationship between num_trees and accuracy
End of explanation |
15,108 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Load the data
Step4: Computing the Cost Function
Fill in the compute_cost function below
Step6: Grid Search
Fill in the function grid_search() below
Step7: Let us play with the grid search demo now!
Step9: Gradient Descent
Again, please fill in the functions compute_gradient below
Step11: Please fill in the functions gradient_descent below
Step12: Test your gradient descent function through gradient descent demo shown below
Step15: Stochastic gradient descent | Python Code:
import datetime
from helpers import *
height, weight, gender = load_data(sub_sample=False, add_outlier=False)
x, mean_x, std_x = standardize(height)
y, tx = build_model_data(x, weight)
y.shape, tx.shape
Explanation: Load the data
End of explanation
def calculate_mse(e):
Calculate the mse for vector e.
return 1/2*np.mean(e**2)
def calculate_mae(e):
Calculate the mae for vector e.
return np.mean(np.abs(e))
def compute_loss(y, tx, w):
Calculate the loss.
You can calculate the loss using mse or mae.
e = y - tx.dot(w)
return calculate_mse(e)
Explanation: Computing the Cost Function
Fill in the compute_cost function below:
End of explanation
# from costs import *
def grid_search(y, tx, w0, w1):
Algorithm for grid search.
loss = np.zeros((len(w0), len(w1)))
# compute loss for each combinationof w0 and w1.
for ind_row, row in enumerate(w0):
for ind_col, col in enumerate(w1):
w = np.array([row, col])
loss[ind_row, ind_col] = compute_loss(y, tx, w)
return loss
Explanation: Grid Search
Fill in the function grid_search() below:
End of explanation
from grid_search import generate_w, get_best_parameters
from plots import grid_visualization
# Generate the grid of parameters to be swept
grid_w0, grid_w1 = generate_w(num_intervals=10)
# Start the grid search
start_time = datetime.datetime.now()
grid_losses = grid_search(y, tx, grid_w0, grid_w1)
# Select the best combination
loss_star, w0_star, w1_star = get_best_parameters(grid_w0, grid_w1, grid_losses)
end_time = datetime.datetime.now()
execution_time = (end_time - start_time).total_seconds()
# Print the results
print("Grid Search: loss*={l}, w0*={w0}, w1*={w1}, execution time={t:.3f} seconds".format(
l=loss_star, w0=w0_star, w1=w1_star, t=execution_time))
# Plot the results
fig = grid_visualization(grid_losses, grid_w0, grid_w1, mean_x, std_x, height, weight)
fig.set_size_inches(10.0,6.0)
fig.savefig("grid_plot") # Optional saving
Explanation: Let us play with the grid search demo now!
End of explanation
def compute_gradient(y, tx, w):
Compute the gradient.
err = y - tx.dot(w)
grad = -tx.T.dot(err) / len(err)
return grad, err
Explanation: Gradient Descent
Again, please fill in the functions compute_gradient below:
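For reference, with $e = y - Xw$ over $N$ samples, the loss and gradient implemented below are
$$\mathcal{L}(w) = \frac{1}{2N}\, e^{\top} e, \qquad \nabla \mathcal{L}(w) = -\frac{1}{N}\, X^{\top} e.$$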
End of explanation
def gradient_descent(y, tx, initial_w, max_iters, gamma):
Gradient descent algorithm.
# Define parameters to store w and loss
ws = [initial_w]
losses = []
w = initial_w
for n_iter in range(max_iters):
# compute loss, gradient
grad, err = compute_gradient(y, tx, w)
loss = calculate_mse(err)
# gradient w by descent update
w = w - gamma * grad
# store w and loss
ws.append(w)
losses.append(loss)
print("Gradient Descent({bi}/{ti}): loss={l}, w0={w0}, w1={w1}".format(
bi=n_iter, ti=max_iters - 1, l=loss, w0=w[0], w1=w[1]))
return losses, ws
Explanation: Please fill in the functions gradient_descent below:
End of explanation
# from gradient_descent import *
from plots import gradient_descent_visualization
# Define the parameters of the algorithm.
max_iters = 50
gamma = 0.7
# Initialization
w_initial = np.array([0, 0])
# Start gradient descent.
start_time = datetime.datetime.now()
gradient_losses, gradient_ws = gradient_descent(y, tx, w_initial, max_iters, gamma)
end_time = datetime.datetime.now()
# Print result
execution_time = (end_time - start_time).total_seconds()
print("Gradient Descent: execution time={t:.3f} seconds".format(t=execution_time))
# Time Visualization
from ipywidgets import IntSlider, interact
def plot_figure(n_iter):
fig = gradient_descent_visualization(
gradient_losses, gradient_ws, grid_losses, grid_w0, grid_w1, mean_x, std_x, height, weight, n_iter)
fig.set_size_inches(10.0, 6.0)
interact(plot_figure, n_iter=IntSlider(min=1, max=len(gradient_ws)))
Explanation: Test your gradient descent function through gradient descent demo shown below:
End of explanation
def compute_stoch_gradient(y, tx, w):
Compute a stochastic gradient from just a few examples n and their corresponding y_n labels.
err = y - tx.dot(w)
grad = -tx.T.dot(err) / len(err)
return grad, err
def stochastic_gradient_descent(
y, tx, initial_w, batch_size, max_iters, gamma):
Stochastic gradient descent.
# Define parameters to store w and loss
ws = [initial_w]
losses = []
w = initial_w
for n_iter in range(max_iters):
for y_batch, tx_batch in batch_iter(y, tx, batch_size=batch_size, num_batches=1):
# compute a stochastic gradient and loss
grad, _ = compute_stoch_gradient(y_batch, tx_batch, w)
# update w through the stochastic gradient update
w = w - gamma * grad
# calculate loss
loss = compute_loss(y, tx, w)
# store w and loss
ws.append(w)
losses.append(loss)
print("SGD({bi}/{ti}): loss={l}, w0={w0}, w1={w1}".format(
bi=n_iter, ti=max_iters - 1, l=loss, w0=w[0], w1=w[1]))
return losses, ws
# from stochastic_gradient_descent import *
# Define the parameters of the algorithm.
max_iters = 50
gamma = 0.7
batch_size = 1
# Initialization
w_initial = np.array([0, 0])
# Start SGD.
start_time = datetime.datetime.now()
sgd_losses, sgd_ws = stochastic_gradient_descent(
y, tx, w_initial, batch_size, max_iters, gamma)
end_time = datetime.datetime.now()
# Print result
execution_time = (end_time - start_time).total_seconds()
print("SGD: execution time={t:.3f} seconds".format(t=execution_time))
# Time Visualization
from ipywidgets import IntSlider, interact
def plot_figure(n_iter):
fig = gradient_descent_visualization(
sgd_losses, sgd_ws, grid_losses, grid_w0, grid_w1, mean_x, std_x, height, weight, n_iter)
fig.set_size_inches(10.0, 6.0)
interact(plot_figure, n_iter=IntSlider(min=1, max=len(gradient_ws)))
Explanation: Stochastic gradient descent
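The implementation above replaces the full gradient with one computed on a mini-batch $B$ drawn by batch_iter, giving the update
$$w^{(t+1)} = w^{(t)} - \gamma\, \nabla \mathcal{L}_B\big(w^{(t)}\big), \qquad \nabla \mathcal{L}_B(w) = -\frac{1}{|B|} \sum_{n \in B} \tilde{x}_n \big(y_n - \tilde{x}_n^{\top} w\big).$$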
End of explanation |
15,109 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<div style="width
Step1: Creating the file and dimensions
The first step is to create a new file and set up the shared dimensions we'll be using in the file. We'll be using the netCDF4-python library to do all of the requisite netCDF API calls.
Step2: We're going to start by adding some global attribute metadata. These are recommendations from the standard (not required), but they're easy to add and help users keep the data straight, so let's go ahead and do it.
Step3: At this point, this is the CDL representation of this dataset
Step4: The CDL representation now shows our dimensions
Step5: Now that we have the variable, we tell python to write our array of data to it.
Step6: If instead we wanted to write data sporadically, like once per time step, we could do that instead (though the for loop below might actually be at a higher level in the program
Step7: At this point, this is the CDL representation of our dataset
Step8: The resulting CDL (truncated to the variables only) looks like
Step9: We also define a coordinate variable pressure to reference our data in the vertical dimension. The standard_name of 'air_pressure' is sufficient to identify this coordinate variable as the vertical axis, but let's go ahead and specify the axis as well. We also specify the attribute positive to indicate whether the variable increases when going up or down. In the case of pressure, this is technically optional.
Step10: Time coordinates must contain a units attribute with a string value with a form similar to 'seconds since 2019-01-06 12
Step11: Now we can create the forecast_time variable just as we did before for the other coordinate variables
Step12: The CDL representation of the variables now contains much more information
Step13: Now we can create the needed variables. Both are dimensioned on y and x and are two-dimensional. The longitude variable is identified as actually containing such information by its required units of 'degrees_east', as well as the optional 'longitude' standard_name attribute. The case is the same for latitude, except the units are 'degrees_north' and the standard_name is 'latitude'.
Step14: With the variables created, we identify these variables as containing coordinates for the Temperature variable by setting the coordinates value to a space-separated list of the names of the auxiliary coordinate variables
Step15: This yields the following CDL
Step16: Now that we created the variable, all that's left is to set the grid_mapping attribute on our Temperature variable to the name of our dummy variable
Step17: This yields the CDL
Step18: Creation and basic setup
First we create a new file and define some dimensions. Since this is profile data, heights will be one dimension. We use station as our other dimension. We also set the global featureType attribute to 'profile' to indicate that this file holds "an ordered set of data points along a vertical line at a fixed horizontal position and fixed time". We also add a dimension to assist in storing our string station ids.
Step19: Which gives this CDL
Step20: The standard refers to these as "instance variables" because each one refers to an instance of a feature. From here we can create our height coordinate variable
Step21: Station IDs
Now we can also write our station IDs to a variable. This is a 2D variable, but one of the dimensions is simply there to facilitate treating strings as character arrays. We also assign this an attribute cf_role with a value of 'profile_id' to help software identify individual profiles
Step22: Now our CDL looks like | Python Code:
# Import some useful Python tools
from datetime import datetime, timedelta
import numpy as np
# Twelve hours of hourly output starting at 22Z today
start = datetime.utcnow().replace(hour=22, minute=0, second=0, microsecond=0)
times = np.array([start + timedelta(hours=h) for h in range(13)])
# 3km spacing in x and y
x = np.arange(-150, 153, 3)
y = np.arange(-100, 100, 3)
# Standard pressure levels in hPa
press = np.array([1000, 925, 850, 700, 500, 300, 250])
temps = np.random.randn(times.size, press.size, y.size, x.size)
Explanation: <div style="width:1000 px">
<div style="float:left; width:98 px; height:98px;">
<img src="https://www.unidata.ucar.edu/images/logos/netcdf-150x150.png" alt="netCDF Logo" style="height: 98px;">
</div>
<div style="float:right; width:98 px; height:98px;">
<img src="https://raw.githubusercontent.com/Unidata/MetPy/master/metpy/plots/_static/unidata_150x150.png" alt="Unidata Logo" style="height: 98px;">
</div>
<div style="text-align:center;">
<h1>NetCDF and CF: The Basics</h1>
</div>
<div style="clear:both"></div>
</div>
<hr style="height:2px;">
Overview
This workshop will teach some of the basics of Climate and Forecasting metadata for netCDF data files with some hands-on work available in Jupyter Notebooks using Python. Along with introduction to netCDF and CF, we will introduce the CF data model and discuss some netCDF implementation details to consider when deciding how to write data with CF and netCDF. We will cover gridded data as well as in situ data (stations, soundings, etc.) and touch on storing geometries data in CF.
This assumes a basic understanding of netCDF.
Outline
<a href="#gridded">Gridded Data</a>
<a href="#obs">Observation Data</a>
<a href="#exercises">Exercises</a>
<a href="#references">References</a>
<a name="gridded"></a>
Gridded Data
Let's say we're working with some numerical weather forecast model output. Let's walk through the steps necessary to store this data in netCDF, using the Climate and Forecasting metadata conventions to ensure that our data are available to as many tools as possible.
To start, let's assume the following about our data:
* It corresponds to forecast three dimensional temperature at several times
* The native coordinate system of the model is on a regular grid that represents the Earth on a Lambert conformal projection.
We'll also go ahead and generate some arrays of data below to get started:
End of explanation
from netCDF4 import Dataset
nc = Dataset('forecast_model.nc', 'w', format='NETCDF4_CLASSIC', diskless=True)
Explanation: Creating the file and dimensions
The first step is to create a new file and set up the shared dimensions we'll be using in the file. We'll be using the netCDF4-python library to do all of the requisite netCDF API calls.
End of explanation
nc.Conventions = 'CF-1.7'
nc.title = 'Forecast model run'
nc.institution = 'Unidata'
nc.source = 'WRF-1.5'
nc.history = str(datetime.utcnow()) + ' Python'
nc.references = ''
nc.comment = ''
Explanation: We're going to start by adding some global attribute metadata. These are recommendations from the standard (not required), but they're easy to add and help users keep the data straight, so let's go ahead and do it.
End of explanation
nc.createDimension('forecast_time', None)
nc.createDimension('x', x.size)
nc.createDimension('y', y.size)
nc.createDimension('pressure', press.size)
nc
Explanation: At this point, this is the CDL representation of this dataset:
netcdf forecast_model {
attributes:
:Conventions = "CF-1.7" ;
:title = "Forecast model run" ;
:institution = "Unidata" ;
:source = "WRF-1.5" ;
:history = "2019-07-16 02:21:52.005718 Python" ;
:references = "" ;
:comment = "" ;
}
Next, before adding variables to the file to define each of the data fields in this file, we need to define the dimensions that exist in this data set. We set each of x, y, and pressure to the size of the corresponding array. We set forecast_time to be an "unlimited" dimension, which allows the dataset to grow along that dimension if we write additional data to it later.
End of explanation
temps_var = nc.createVariable('Temperature', datatype=np.float32,
dimensions=('forecast_time', 'pressure', 'y', 'x'),
zlib=True)
Explanation: The CDL representation now shows our dimensions:
netcdf forecast_model {
dimensions:
forecast_time = UNLIMITED (currently 13) ;
x = 101 ;
y = 67 ;
pressure = 7 ;
attributes:
:Conventions = "CF-1.7" ;
:title = "Forecast model run" ;
:institution = "Unidata" ;
:source = "WRF-1.5" ;
:history = "2019-07-16 02:21:52.005718 Python" ;
:references = "" ;
:comment = "" ;
}
Creating and filling a variable
So far, all we've done is outlined basic information about our dataset: broad metadata and the dimensions of our dataset. Now we create a variable to hold one particular data field for our dataset, in this case the forecast air temperature. When defining this variable, we specify the datatype for the values being stored, the relevant dimensions, as well as enable optional compression.
End of explanation
temps_var[:] = temps
temps_var
Explanation: Now that we have the variable, we tell python to write our array of data to it.
End of explanation
next_slice = 0
for temp_slice in temps:
temps_var[next_slice] = temp_slice
next_slice += 1
Explanation: If instead we wanted to write data sporadically, like once per time step, we could do that instead (though the for loop below might actually be at a higher level in the program):
End of explanation
temps_var.units = 'Kelvin'
temps_var.standard_name = 'air_temperature'
temps_var.long_name = 'Forecast air temperature'
temps_var.missing_value = -9999
temps_var
Explanation: At this point, this is the CDL representation of our dataset:
netcdf forecast_model {
dimensions:
forecast_time = UNLIMITED (currently 13) ;
x = 101 ;
y = 67 ;
pressure = 7 ;
variables:
float Temperature(forecast_time, pressure, y, x) ;
attributes:
:Conventions = "CF-1.7" ;
:title = "Forecast model run" ;
:institution = "Unidata" ;
:source = "WRF-1.5" ;
:history = "2019-07-16 02:21:52.005718 Python" ;
:references = "" ;
:comment = "" ;
}
We can also add attributes to this variable to define metadata. The CF conventions require a units attribute to be set for all variables that represent a dimensional quantity. The value of this attribute needs to be parsable by the UDUNITS library. Here we set it to a value of 'Kelvin'. We also set the standard (optional) attributes of long_name and standard_name. The former contains a longer description of the variable, while the latter comes from a controlled vocabulary in the CF conventions. This allows users of data to understand, in a standard fashion, what a variable represents. If we had missing values, we could also set the missing_value attribute to an appropriate value.
NASA Dataset Interoperability Recommendations:
Section 2.2 - Include Basic CF Attributes
Include where applicable: units, long_name, standard_name, valid_min / valid_max, scale_factor / add_offset and others.
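As a sketch only (the numbers are made-up illustrations, not values this dataset requires), the remaining recommended attributes could be attached in the same way; scale_factor and add_offset are only meaningful when packing values into a smaller integer type:
temps_var.valid_min = np.float32(150.)   # assumed plausible lower bound, in Kelvin
temps_var.valid_max = np.float32(350.)   # assumed plausible upper bound, in Kelvin
# For a hypothetical packed (e.g. int16) variable one would additionally set:
# packed_var.scale_factor = np.float32(0.01)
# packed_var.add_offset = np.float32(250.)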
End of explanation
x_var = nc.createVariable('x', np.float32, ('x',))
x_var[:] = x
x_var.units = 'km'
x_var.axis = 'X' # Optional
x_var.standard_name = 'projection_x_coordinate'
x_var.long_name = 'x-coordinate in projected coordinate system'
y_var = nc.createVariable('y', np.float32, ('y',))
y_var[:] = y
y_var.units = 'km'
y_var.axis = 'Y' # Optional
y_var.standard_name = 'projection_y_coordinate'
y_var.long_name = 'y-coordinate in projected coordinate system'
Explanation: The resulting CDL (truncated to the variables only) looks like:
variables:
float Temperature(forecast_time, pressure, y, x) ;
Temperature:units = "Kelvin" ;
Temperature:standard_name = "air_temperature" ;
Temperature:long_name = "Forecast air temperature" ;
Temperature:missing_value = -9999.0 ;
Coordinate variables
To properly orient our data in time and space, we need to go beyond dimensions (which define common sizes and alignment) and include values along these dimensions, which are called "Coordinate Variables". Generally, these are defined by creating a one dimensional variable with the same name as the respective dimension.
To start, we define variables which define our x and y coordinate values. These variables include standard_names which allow associating them with projections (more on this later) as well as an optional axis attribute to make clear what standard direction this coordinate refers to.
End of explanation
press_var = nc.createVariable('pressure', np.float32, ('pressure',))
press_var[:] = press
press_var.units = 'hPa'
press_var.axis = 'Z' # Optional
press_var.standard_name = 'air_pressure'
press_var.positive = 'down' # Optional
Explanation: We also define a coordinate variable pressure to reference our data in the vertical dimension. The standard_name of 'air_pressure' is sufficient to identify this coordinate variable as the vertical axis, but let's go ahead and specify the axis as well. We also specify the attribute positive to indicate whether the variable increases when going up or down. In the case of pressure, this is technically optional.
End of explanation
from cftime import date2num
time_units = 'hours since {:%Y-%m-%d 00:00}'.format(times[0])
time_vals = date2num(times, time_units)
time_vals
Explanation: Time coordinates must contain a units attribute with a string value with a form similar to 'seconds since 2019-01-06 12:00:00.00'. 'seconds', 'minutes', 'hours', and 'days' are the most commonly used units for time. Due to the variable length of months and years, they are not recommended.
Before we can write data, we first need to convert our list of Python datetime instances to numeric values. We can use the cftime library to make this conversion easy, using the unit string defined above.
End of explanation
time_var = nc.createVariable('forecast_time', np.int32, ('forecast_time',))
time_var[:] = time_vals
time_var.units = time_units
time_var.axis = 'T' # Optional
time_var.standard_name = 'time' # Optional
time_var.long_name = 'time'
Explanation: Now we can create the forecast_time variable just as we did before for the other coordinate variables:
End of explanation
from pyproj import Proj
X, Y = np.meshgrid(x, y)
lcc = Proj({'proj':'lcc', 'lon_0':-105, 'lat_0':40, 'a':6371000.,
'lat_1':25})
lon, lat = lcc(X * 1000, Y * 1000, inverse=True)
Explanation: The CDL representation of the variables now contains much more information:
dimensions:
forecast_time = UNLIMITED (currently 13) ;
x = 101 ;
y = 67 ;
pressure = 7 ;
variables:
float x(x) ;
x:units = "km" ;
x:axis = "X" ;
x:standard_name = "projection_x_coordinate" ;
x:long_name = "x-coordinate in projected coordinate system" ;
float y(y) ;
y:units = "km" ;
y:axis = "Y" ;
y:standard_name = "projection_y_coordinate" ;
y:long_name = "y-coordinate in projected coordinate system" ;
float pressure(pressure) ;
pressure:units = "hPa" ;
pressure:axis = "Z" ;
pressure:standard_name = "air_pressure" ;
pressure:positive = "down" ;
float forecast_time(forecast_time) ;
forecast_time:units = "hours since 2019-07-16 00:00" ;
forecast_time:axis = "T" ;
forecast_time:standard_name = "time" ;
forecast_time:long_name = "time" ;
float Temperature(forecast_time, pressure, y, x) ;
Temperature:units = "Kelvin" ;
Temperature:standard_name = "air_temperature" ;
Temperature:long_name = "Forecast air temperature" ;
Temperature:missing_value = -9999.0 ;
Auxiliary Coordinates
Our data are still not CF-compliant because they do not contain latitude and longitude information, which is needed to properly locate the data. To solve this, we need to add variables with latitude and longitude. These are called "auxiliary coordinate variables", not because they are extra, but because they are not simple one-dimensional variables.
Below, we first generate longitude and latitude values from our projected coordinates using the pyproj library.
End of explanation
lon_var = nc.createVariable('lon', np.float64, ('y', 'x'))
lon_var[:] = lon
lon_var.units = 'degrees_east'
lon_var.standard_name = 'longitude' # Optional
lon_var.long_name = 'longitude'
lat_var = nc.createVariable('lat', np.float64, ('y', 'x'))
lat_var[:] = lat
lat_var.units = 'degrees_north'
lat_var.standard_name = 'latitude' # Optional
lat_var.long_name = 'latitude'
Explanation: Now we can create the needed variables. Both are dimensioned on y and x and are two-dimensional. The longitude variable is identified as actually containing such information by its required units of 'degrees_east', as well as the optional 'longitude' standard_name attribute. The case is the same for latitude, except the units are 'degrees_north' and the standard_name is 'latitude'.
End of explanation
temps_var.coordinates = 'lon lat'
Explanation: With the variables created, we identify these variables as containing coordinates for the Temperature variable by setting the coordinates value to a space-separated list of the names of the auxiliary coordinate variables:
End of explanation
proj_var = nc.createVariable('lambert_projection', np.int32, ())
proj_var.grid_mapping_name = 'lambert_conformal_conic'
proj_var.standard_parallel = 25.
proj_var.latitude_of_projection_origin = 40.
proj_var.longitude_of_central_meridian = -105.
proj_var.semi_major_axis = 6371000.0
proj_var
Explanation: This yields the following CDL:
double lon(y, x);
lon:units = "degrees_east";
lon:long_name = "longitude coordinate";
lon:standard_name = "longitude";
double lat(y, x);
lat:units = "degrees_north";
lat:long_name = "latitude coordinate";
lat:standard_name = "latitude";
float Temperature(time, y, x);
Temperature:units = "Kelvin" ;
Temperature:standard_name = "air_temperature" ;
Temperature:long_name = "Forecast air temperature" ;
Temperature:missing_value = -9999.0 ;
Temperature:coordinates = "lon lat";
Coordinate System Information
With our data specified on a Lambert conformal projected grid, it would be good to include this information in our metadata. We can do this using a "grid mapping" variable. This uses a dummy scalar variable as a namespace for holding all of the required information. Relevant variables then reference the dummy variable with their grid_mapping attribute.
Below we create a variable and set it up for a Lambert conformal conic projection on a spherical earth. The grid_mapping_name attribute describes which of the CF-supported grid mappings we are specifying. The names of additional attributes vary between the mappings.
End of explanation
temps_var.grid_mapping = 'lambert_projection' # or proj_var.name
Explanation: Now that we created the variable, all that's left is to set the grid_mapping attribute on our Temperature variable to the name of our dummy variable:
End of explanation
lons = np.array([-97.1, -105, -80])
lats = np.array([35.25, 40, 27])
heights = np.linspace(10, 1000, 10)
temps = np.random.randn(lats.size, heights.size)
stids = ['KBOU', 'KOUN', 'KJUP']
Explanation: This yields the CDL:
variables:
int lambert_projection ;
lambert_projection:grid_mapping_name = "lambert_conformal_conic ;
lambert_projection:standard_parallel = 25.0 ;
lambert_projection:latitude_of_projection_origin = 40.0 ;
lambert_projection:longitude_of_central_meridian = -105.0 ;
lambert_projection:semi_major_axis = 6371000.0 ;
float Temperature(forecast_time, pressure, y, x) ;
Temperature:units = "Kelvin" ;
Temperature:standard_name = "air_temperature" ;
Temperature:long_name = "Forecast air temperature" ;
Temperature:missing_value = -9999.0 ;
Temperature:coordinates = "lon lat" ;
Temperature:grid_mapping = "lambert_projection" ;
Cell Bounds
NASA Dataset Interoperability Recommendations:
Section 2.3 - Use CF โboundsโ attributes
CF conventions state: โWhen gridded data does not represent the point values of a field but instead represents some characteristic of the field within cells of finite โvolume,โ a complete description of the variable should include metadata that describes the domain or extent of each cell, and the characteristic of the field that the cell values represent.โ
For example, if a rain guage is read every 3 hours but only dumped every six hours, it might look like this
netcdf precip_bucket_bounds {
dimensions:
lat = 12 ;
lon = 19 ;
time = 8 ;
tbv = 2;
variables:
float lat(lat) ;
float lon(lon) ;
float time(time) ;
time:units = "hours since 2019-07-12 00:00:00.00";
time:bounds = "time_bounds" ;
float time_bounds(time,tbv)
float precip(time, lat, lon) ;
precip:units = "inches" ;
data:
time = 3, 6, 9, 12, 15, 18, 21, 24;
time_bounds = 0, 3, 0, 6, 6, 9, 6, 12, 12, 15, 12, 18, 18, 21, 18, 24;
}
So the time coordinate looks like
|---X
|-------X
|---X
|-------X
|---X
|-------X
|---X
|-------X
0 3 6 9 12 15 18 21 24
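A rough netCDF4-python sketch of the time portion of that hypothetical CDL (separate from the forecast file built earlier) might look like:
bounds_nc = Dataset('precip_bucket_bounds.nc', 'w', diskless=True)
bounds_nc.createDimension('time', 8)
bounds_nc.createDimension('tbv', 2)
bnds_time = bounds_nc.createVariable('time', np.float32, ('time',))
bnds_time.units = 'hours since 2019-07-12 00:00:00.00'
bnds_time.bounds = 'time_bounds'
bnds_time[:] = [3, 6, 9, 12, 15, 18, 21, 24]
time_bounds = bounds_nc.createVariable('time_bounds', np.float32, ('time', 'tbv'))
time_bounds[:] = [[0, 3], [0, 6], [6, 9], [6, 12], [12, 15], [12, 18], [18, 21], [18, 24]]
bounds_nc.close()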
<a name="obs"></a>
Observational Data
So far we've focused on how to handle storing data that are arranged in a grid. What about observation data? The CF conventions describe this as conventions for Discrete Sampling Geometries (DSG).
For data that are regularly sampled (say, all at the same heights) this is straightforward. First, let's define some sample profile data, all at a few heights less than 1000m:
End of explanation
nc.close()
nc = Dataset('obs_data.nc', 'w', format='NETCDF4_CLASSIC', diskless=True)
nc.createDimension('station', lats.size)
nc.createDimension('heights', heights.size)
nc.createDimension('str_len', 4)
nc.Conventions = 'CF-1.7'
nc.featureType = 'profile'
nc
Explanation: Creation and basic setup
First we create a new file and define some dimensions. Since this is profile data, heights will be one dimension. We use station as our other dimension. We also set the global featureType attribute to 'profile' to indicate that this file holds "an ordered set of data points along a vertical line at a fixed horizontal position and fixed time". We also add a dimension to assist in storing our string station ids.
End of explanation
lon_var = nc.createVariable('lon', np.float64, ('station',))
lon_var.units = 'degrees_east'
lon_var.standard_name = 'longitude'
lat_var = nc.createVariable('lat', np.float64, ('station',))
lat_var.units = 'degrees_north'
lat_var.standard_name = 'latitude'
Explanation: Which gives this CDL:
netcdf obs_data {
dimensions:
station = 3 ;
heights = 10 ;
str_len = 4 ;
attributes:
:Conventions = "CF-1.7" ;
:featureType = "profile" ;
}
We can create our coordinates with:
End of explanation
heights_var = nc.createVariable('heights', np.float32, ('heights',))
heights_var.units = 'meters'
heights_var.standard_name = 'altitude'
heights_var.positive = 'up'
heights_var[:] = heights
Explanation: The standard refers to these as "instance variables" because each one refers to an instance of a feature. From here we can create our height coordinate variable:
End of explanation
stid_var = nc.createVariable('stid', 'c', ('station', 'str_len'))
stid_var.cf_role = 'profile_id'
stid_var.long_name = 'Station identifier'
stid_var[:] = stids
Explanation: Station IDs
Now we can also write our station IDs to a variable. This is a 2D variable, but one of the dimensions is simply there to facilitate treating strings as character arrays. We also assign this an attribute cf_role with a value of 'profile_id' to help software identify individual profiles:
End of explanation
time_var = nc.createVariable('time', np.float32, ())
time_var.units = 'minutes since 2019-07-16 17:00'
time_var.standard_name = 'time'
time_var[:] = [5.]
temp_var = nc.createVariable('temperature', np.float32, ('station', 'heights'))
temp_var.units = 'celsius'
temp_var.standard_name = 'air_temperature'
temp_var.coordinates = 'lon lat heights time'
Explanation: Now our CDL looks like:
netcdf obs_data {
dimensions:
station = 3 ;
heights = 10 ;
str_len = 4 ;
variables:
double lon(station) ;
lon:units = "degrees_east" ;
lon:standard_name = "longitude" ;
double lat(station) ;
lat:units = "degrees_north" ;
lat:standard_name = "latitude" ;
float heights(heights) ;
heights:units = "meters" ;
heights:standard_name = "altitude";
heights:positive = "up" ;
char stid(station, str_len) ;
stid:cf_role = "profile_id" ;
stid:long_name = "Station identifier" ;
attributes:
:Conventions = "CF-1.7" ;
:featureType = "profile" ;
}
Writing the field
Now all that's left is to write our profile data, which looks fairly standard. We also add a scalar variable for the time at which these profiles were captured:
End of explanation |
15,110 | Given the following text description, write Python code to implement the functionality described.
Description:
This function takes two positive numbers x and y and returns the
biggest even integer number that is in the range [x, y] inclusive. If
there's no such number, then the function should return -1.
For example:
choose_num(12, 15) = 14
choose_num(13, 12) = -1
| Python Code:
def choose_num(x, y):
if x > y:
return -1
if y % 2 == 0:
return y
if x == y:
return -1
return y - 1 |
15,111 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2019 The TensorFlow Authors.
Step1: TensorBoard Profile
Step2: Confirm that TensorFlow can see the GPU.
Step7: Run a simple model with the TensorBoard callback
You will use Keras to build a simple model that uses ResNet56 (reference
Step8: Download the CIFAR-10 dataset from TensorFlow Datasets.
Step9: Build the data input pipeline and compile the ResNet56 model.
Step10: When you create the TensorBoard callback, you can specify the batch you want to profile. By default, TensorFlow profiles the second batch, because many one-time graph optimizations run during the first batch. You can change this by setting profile_batch. You can also turn profiling off by setting it to 0.
Here, you will profile the third batch.
Step11: Start training with Model.fit().
Step12: Visualize the profiling results with TensorBoard
Unfortunately, due to #1913, you cannot use TensorBoard inside Colab to visualize the profiling results. You need to download the log directory and start TensorBoard on your local machine.
Compress and download the logs
Step13: Right-click in the "Files" tab to download logdir.tar.gz.
Make sure the latest TensorBoard is installed on your local machine. Run the following commands on your local machine:
```
cd download/directory
tar -zxvf logs.tar.gz
tensorboard --logdir=logs/ --port=6006
```
Open a new tab in your Chrome browser, navigate to localhost:6006, and click the "Profile" tab. You may see profiling results like the following:
Trace viewer
When you click the Profile tab, you will see the trace viewer. The page shows a timeline of the different events that occurred on the CPU and the accelerator during the profiled period.
The trace viewer shows multiple event groups on the vertical axis. Each event group has multiple horizontal tracks, filled with trace events. A track is a basic timeline for events executed on a thread or a GPU stream. Individual events are the colored rectangular blocks on the timeline tracks. Time moves from left to right.
You can browse the results using w (zoom in), s (zoom out), a (scroll left), and d (scroll right).
A single rectangle represents a trace event, from its begin time to its end time. To study an individual rectangle, you can click on it after selecting the mouse-cursor icon in the floating toolbar. This will display information about the rectangle, such as its start time and duration.
Besides clicking, you can also drag the mouse to select a rectangle covering a group of trace events. This will give you a list of the events that intersect that rectangle, together with an aggregated summary. The m key can be used to measure the duration of the selected events.
Trace events are collected from three sources:
CPU
Step14: Re-run the model.
Step15: Woohoo! You just improved the training performance from ~235ms/step to ~200ms/step.
Step16: Download the logs directory again to see TensorBoard's new profiling results.
Iterator
Step17: Profiler Service | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2019 The TensorFlow Authors.
End of explanation
try:
  # %tensorflow_version only exists in Colab.
  %tensorflow_version 2.x
except Exception:
  pass
# Load the TensorBoard notebook extension.
%load_ext tensorboard
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from datetime import datetime
from packaging import version
import functools
import tensorflow as tf
import tensorflow_datasets as tfds
from tensorflow.python.keras import backend
from tensorflow.python.keras import layers
import numpy as np
print("TensorFlow version: ", tf.__version__)
Explanation: TensorBoard Profile: Profiling basic training metrics in Keras
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://tensorflow.google.cn/tensorboard/tensorboard_profiling_keras"><img src="https://tensorflow.google.cn/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/zh-cn/tensorboard/tensorboard_profiling_keras.ipynb"><img src="https://tensorflow.google.cn/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/zh-cn/tensorboard/tensorboard_profiling_keras.ipynb"><img src="https://tensorflow.google.cn/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/zh-cn/tensorboard/tensorboard_profiling_keras.ipynb"><img src="https://tensorflow.google.cn/images/download_logo_32px.png" />Download this notebook</a>
</td>
</table>
Overview
Performance is critical for machine learning. TensorFlow has a built-in profiler that lets you record the runtime of each op with very little effort. You can then visualize the profiling results in TensorBoard's Profile Plugin. This tutorial focuses on GPUs, but the Profile Plugin can also be used on TPUs by following the Cloud TPU tools.
This tutorial presents very basic examples to help you learn how to enable the profiler when developing your Keras model. You will learn how to use the Keras TensorBoard callback to visualize the profiling results. The Profiler API and Profiler Server mentioned in "Other ways for profiling" allow you to profile non-Keras TensorFlow jobs.
Prerequisites
Install the latest TensorBoard on your local machine.
Select "GPU" in the Accelerator drop-down menu in the Notebook Settings (assuming you are running this notebook in Colab).
Setup
End of explanation
device_name = tf.test.gpu_device_name()
if not tf.test.is_gpu_available():
raise SystemError('GPU device not found')
print('Found GPU at: {}'.format(device_name))
Explanation: Confirm that TensorFlow can see the GPU.
End of explanation
BATCH_NORM_DECAY = 0.997
BATCH_NORM_EPSILON = 1e-5
L2_WEIGHT_DECAY = 2e-4
def identity_building_block(input_tensor,
kernel_size,
filters,
stage,
block,
                            training=None):
  The identity block is the block that has no conv layer at the shortcut.
  Arguments:
    input_tensor: input tensor
    kernel_size: default 3, the kernel size of the
      middle conv layer on the main path
    filters: list of integers, the filters of the conv layers on the main path
    stage: integer, current stage label, used for generating layer names
    block: current block label, used for generating layer names
    training: Only used if training a keras model with Estimator. In other
      scenarios it is handled automatically.
  Returns:
    Output tensor for the block.
filters1, filters2 = filters
if tf.keras.backend.image_data_format() == 'channels_last':
bn_axis = 3
else:
bn_axis = 1
conv_name_base = 'res' + str(stage) + block + '_branch'
bn_name_base = 'bn' + str(stage) + block + '_branch'
x = tf.keras.layers.Conv2D(filters1, kernel_size,
padding='same',
kernel_initializer='he_normal',
kernel_regularizer=
tf.keras.regularizers.l2(L2_WEIGHT_DECAY),
bias_regularizer=
tf.keras.regularizers.l2(L2_WEIGHT_DECAY),
name=conv_name_base + '2a')(input_tensor)
x = tf.keras.layers.BatchNormalization(axis=bn_axis,
name=bn_name_base + '2a',
momentum=BATCH_NORM_DECAY,
epsilon=BATCH_NORM_EPSILON)(
x, training=training)
x = tf.keras.layers.Activation('relu')(x)
x = tf.keras.layers.Conv2D(filters2, kernel_size,
padding='same',
kernel_initializer='he_normal',
kernel_regularizer=
tf.keras.regularizers.l2(L2_WEIGHT_DECAY),
bias_regularizer=
tf.keras.regularizers.l2(L2_WEIGHT_DECAY),
name=conv_name_base + '2b')(x)
x = tf.keras.layers.BatchNormalization(axis=bn_axis,
name=bn_name_base + '2b',
momentum=BATCH_NORM_DECAY,
epsilon=BATCH_NORM_EPSILON)(
x, training=training)
x = tf.keras.layers.add([x, input_tensor])
x = tf.keras.layers.Activation('relu')(x)
return x
def conv_building_block(input_tensor,
kernel_size,
filters,
stage,
block,
strides=(2, 2),
training=None):
"""A block that has a conv layer at the shortcut.
Arguments:
input_tensor: input tensor
kernel_size: default 3, the kernel size of the middle conv layer on the main path
filters: list of integers, the filters of the conv layers on the main path
stage: integer, current stage label, used for generating layer names
block: current block label, used for generating layer names
training: Only used if training the Keras model with Estimator. In other scenarios it is handled automatically.
Returns:
Output tensor for the block.
Note that from stage 3, the first conv layer on the main path has strides=(2, 2),
and the shortcut should have strides=(2, 2) as well.
"""
filters1, filters2 = filters
if tf.keras.backend.image_data_format() == 'channels_last':
bn_axis = 3
else:
bn_axis = 1
conv_name_base = 'res' + str(stage) + block + '_branch'
bn_name_base = 'bn' + str(stage) + block + '_branch'
x = tf.keras.layers.Conv2D(filters1, kernel_size, strides=strides,
padding='same',
kernel_initializer='he_normal',
kernel_regularizer=
tf.keras.regularizers.l2(L2_WEIGHT_DECAY),
bias_regularizer=
tf.keras.regularizers.l2(L2_WEIGHT_DECAY),
name=conv_name_base + '2a')(input_tensor)
x = tf.keras.layers.BatchNormalization(axis=bn_axis,
name=bn_name_base + '2a',
momentum=BATCH_NORM_DECAY,
epsilon=BATCH_NORM_EPSILON)(
x, training=training)
x = tf.keras.layers.Activation('relu')(x)
x = tf.keras.layers.Conv2D(filters2, kernel_size, padding='same',
kernel_initializer='he_normal',
kernel_regularizer=
tf.keras.regularizers.l2(L2_WEIGHT_DECAY),
bias_regularizer=
tf.keras.regularizers.l2(L2_WEIGHT_DECAY),
name=conv_name_base + '2b')(x)
x = tf.keras.layers.BatchNormalization(axis=bn_axis,
name=bn_name_base + '2b',
momentum=BATCH_NORM_DECAY,
epsilon=BATCH_NORM_EPSILON)(
x, training=training)
shortcut = tf.keras.layers.Conv2D(filters2, (1, 1), strides=strides,
kernel_initializer='he_normal',
kernel_regularizer=
tf.keras.regularizers.l2(L2_WEIGHT_DECAY),
bias_regularizer=
tf.keras.regularizers.l2(L2_WEIGHT_DECAY),
name=conv_name_base + '1')(input_tensor)
shortcut = tf.keras.layers.BatchNormalization(
axis=bn_axis, name=bn_name_base + '1',
momentum=BATCH_NORM_DECAY, epsilon=BATCH_NORM_EPSILON)(
shortcut, training=training)
x = tf.keras.layers.add([x, shortcut])
x = tf.keras.layers.Activation('relu')(x)
return x
def resnet_block(input_tensor,
size,
kernel_size,
filters,
stage,
conv_strides=(2, 2),
training=None):
"""A block which applies a conv block followed by multiple identity blocks.
Arguments:
input_tensor: input tensor
size: integer, number of constituent conv/identity building blocks.
A conv block is applied once, followed by (size - 1) identity blocks.
kernel_size: default 3, the kernel size of the middle conv layer on the main path
filters: list of integers, the filters of the conv layers on the main path
stage: integer, current stage label, used for generating layer names
conv_strides: strides of the first conv layer in the block.
training: Only used if training the Keras model with Estimator. In other scenarios it is handled automatically.
Returns:
Output tensor after applying the conv and identity blocks.
"""
x = conv_building_block(input_tensor, kernel_size, filters, stage=stage,
strides=conv_strides, block='block_0',
training=training)
for i in range(size - 1):
x = identity_building_block(x, kernel_size, filters, stage=stage,
block='block_%d' % (i + 1), training=training)
return x
def resnet(num_blocks, classes=10, training=None):
"""Instantiates the ResNet architecture.
Arguments:
num_blocks: integer, the number of conv/identity blocks in each block.
The ResNet contains 3 blocks, each consisting of one conv block followed by
(layers_per_block - 1) identity blocks. Each conv/identity block has 2
convolutional layers. Together with the input convolutional layer and the
pooling layer at the end, this brings the total network size to (6 * num_blocks + 2).
classes: optional number of classes to classify images into
training: Only used if training the Keras model with Estimator. In other scenarios it is handled automatically.
Returns:
A Keras model instance.
"""
input_shape = (32, 32, 3)
img_input = layers.Input(shape=input_shape)
if backend.image_data_format() == 'channels_first':
x = layers.Lambda(lambda x: backend.permute_dimensions(x, (0, 3, 1, 2)),
name='transpose')(img_input)
bn_axis = 1
else: # channel_last
x = img_input
bn_axis = 3
x = tf.keras.layers.ZeroPadding2D(padding=(1, 1), name='conv1_pad')(x)
x = tf.keras.layers.Conv2D(16, (3, 3),
strides=(1, 1),
padding='valid',
kernel_initializer='he_normal',
kernel_regularizer=
tf.keras.regularizers.l2(L2_WEIGHT_DECAY),
bias_regularizer=
tf.keras.regularizers.l2(L2_WEIGHT_DECAY),
name='conv1')(x)
x = tf.keras.layers.BatchNormalization(axis=bn_axis, name='bn_conv1',
momentum=BATCH_NORM_DECAY,
epsilon=BATCH_NORM_EPSILON)(
x, training=training)
x = tf.keras.layers.Activation('relu')(x)
x = resnet_block(x, size=num_blocks, kernel_size=3, filters=[16, 16],
stage=2, conv_strides=(1, 1), training=training)
x = resnet_block(x, size=num_blocks, kernel_size=3, filters=[32, 32],
stage=3, conv_strides=(2, 2), training=training)
x = resnet_block(x, size=num_blocks, kernel_size=3, filters=[64, 64],
stage=4, conv_strides=(2, 2), training=training)
x = tf.keras.layers.GlobalAveragePooling2D(name='avg_pool')(x)
x = tf.keras.layers.Dense(classes, activation='softmax',
kernel_initializer='he_normal',
kernel_regularizer=
tf.keras.regularizers.l2(L2_WEIGHT_DECAY),
bias_regularizer=
tf.keras.regularizers.l2(L2_WEIGHT_DECAY),
name='fc10')(x)
inputs = img_input
# Create the model
model = tf.keras.models.Model(inputs, x, name='resnet56')
return model
resnet20 = functools.partial(resnet, num_blocks=3)
resnet32 = functools.partial(resnet, num_blocks=5)
resnet56 = functools.partial(resnet, num_blocks=9)
resnet110 = functools.partial(resnet, num_blocks=18)
Explanation: Run a simple model with the TensorBoard callback
You will use Keras to build a simple model that classifies images from the CIFAR-10 dataset with ResNet56 (reference: Deep Residual Learning for Image Recognition).
The ResNet model code is copied from the TensorFlow Model Garden.
End of explanation
cifar_builder = tfds.builder('cifar10')
cifar_builder.download_and_prepare()
Explanation: Download the CIFAR-10 dataset from TensorFlow Datasets.
End of explanation
HEIGHT = 32
WIDTH = 32
NUM_CHANNELS = 3
NUM_CLASSES = 10
BATCH_SIZE = 128
def preprocess_data(record):
image = record['image']
label = record['label']
# Resize the image to add four extra pixels on each side.
image = tf.image.resize_with_crop_or_pad(
image, HEIGHT + 8, WIDTH + 8)
# Randomly crop a [HEIGHT, WIDTH] section of the image.
image = tf.image.random_crop(image, [HEIGHT, WIDTH, NUM_CHANNELS])
# Randomly flip the image horizontally.
image = tf.image.random_flip_left_right(image)
# Subtract off the mean and divide by the variance of the pixels.
image = tf.image.per_image_standardization(image)
label = tf.compat.v1.sparse_to_dense(label, (NUM_CLASSES,), 1)
return image, label
train_data = cifar_builder.as_dataset(split=tfds.Split.TRAIN)
train_data = train_data.repeat()
train_data = train_data.map(
lambda value: preprocess_data(value))
train_data = train_data.shuffle(1024)
train_data = train_data.batch(BATCH_SIZE)
model = resnet56(classes=NUM_CLASSES)
model.compile(optimizer='SGD',
loss='categorical_crossentropy',
metrics=['categorical_accuracy'])
Explanation: Build the data input pipeline and compile the ResNet56 model.
End of explanation
log_dir="logs/profile/" + datetime.now().strftime("%Y%m%d-%H%M%S")
tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir=log_dir, histogram_freq=1, profile_batch = 3)
Explanation: When you create the TensorBoard callback, you can specify which batch you want to profile. By default, TensorFlow profiles the second batch, because many one-time graph optimizations run on the first batch. You can change this by setting profile_batch, and you can turn profiling off entirely by setting it to 0.
Here, you will profile the third batch.
End of explanation
model.fit(train_data,
steps_per_epoch=20,
epochs=5,
callbacks=[tensorboard_callback])
Explanation: Start training with Model.fit().
End of explanation
!tar -zcvf logs.tar.gz logs/profile/
Explanation: Visualize the profiling results with TensorBoard
Unfortunately, due to #1913, you cannot use TensorBoard inside Colab to visualize the profiling results. You need to download the log directory and start TensorBoard on your local machine.
Compress the logs for download:
End of explanation
train_data = cifar_builder.as_dataset(split=tfds.Split.TRAIN)
train_data = train_data.repeat()
train_data = train_data.map(
lambda value: preprocess_data(value))
train_data = train_data.shuffle(1024)
train_data = train_data.batch(BATCH_SIZE)
# It will prefetch the data for step s during step (s-1)
train_data = train_data.prefetch(buffer_size=tf.data.experimental.AUTOTUNE)
Explanation: Right-click in the "Files" tab to download logs.tar.gz.
Make sure the latest TensorBoard is installed on your local machine, then run the following commands there:
```
cd download/directory
tar -zxvf logs.tar.gz
tensorboard --logdir=logs/ --port=6006
```
Open a new tab in your Chrome browser, navigate to localhost:6006, and click the "Profile" tab. You may see profiling results like the following:
Trace Viewer
When you click on the Profile tab, you will see the Trace Viewer. The page shows a timeline of the different events that occurred on the CPU and the accelerator during the aggregation period.
The Trace Viewer shows multiple event groups on the vertical axis. Each event group has multiple horizontal tracks, filled with trace events. A track is a basic event timeline executed on a thread or a GPU stream. Individual events are the colored, rectangular blocks on the timeline tracks. Time moves from left to right.
You can navigate the results using w (zoom in), s (zoom out), a (scroll left), and d (scroll right).
A single rectangle represents a trace event, from its start time to its end time. To study an individual rectangle, click on it after selecting the mouse-cursor icon in the floating tool bar. This displays information about the rectangle, such as its start time and duration.
In addition to clicking, you can drag the mouse to select a rectangle covering a group of trace events. This gives you a list of the events intersecting that rectangle together with an event summary. The m key can be used to measure the time duration of the selected events.
Trace events are collected from three sources:
CPU: CPU events are under the event group named /host:CPU. Each track represents a thread on the CPU, e.g. input pipeline events, GPU op scheduling events, CPU op execution events, etc.
GPU: GPU events are under event groups prefixed with /device:GPU:. Except for stream:all, each event group represents one stream on the GPU; stream:all aggregates all events onto one GPU, e.g. memory copy events, kernel execution events, etc.
TensorFlow runtime: runtime events are under event groups prefixed with /job:. Runtime events represent the TensorFlow ops invoked by the Python program, e.g. tf.function execution events, etc.
Debug performance
Now you will use the Trace Viewer to improve your model's performance.
Let's go back to the profiling results captured above.
The GPU events show that the GPU does nothing during the first half of the step.
The CPU events show that, at the beginning of the step, the CPU is occupied by the data input pipeline.
In the TensorFlow runtime there is a big block called Iterator::GetNextSync, a blocking call that fetches the next batch from the data input pipeline, and it blocks the training step. So if you can prepare the input data for step s during step s-1, you can train the model faster.
You can do this with tf.data.prefetch.
End of explanation
log_dir="logs/profile/" + datetime.now().strftime("%Y%m%d-%H%M%S")
tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir=log_dir, histogram_freq=1, profile_batch = 3)
model.fit(train_data,
steps_per_epoch=20,
epochs=5,
callbacks=[tensorboard_callback])
Explanation: Run the model again.
End of explanation
!tar -zcvf logs.tar.gz logs/profile/
Explanation: Woohoo! You just improved the training performance from ~235 ms/step to ~200 ms/step.
End of explanation
# Context manager API
with tf.python.eager.profiler.Profiler('logdir_path'):
# Run your training here
pass
# Function API
tf.python.eager.profiler.start()
# Run your training here
profiler_result = tf.python.eager.profiler.stop()
tf.python.eager.profiler.save('logdir_path', profiler_result)
Explanation: Download the logs directory one more time to look at the new profiling results in TensorBoard.
The big Iterator::GetNextSync block is gone.
Good job!
Obviously this is still not optimal performance. Try it yourself and see whether there is more room for improvement.
Some useful references on performance tuning:
Data input pipelines
Training Performance: A user's guide to converge faster (TensorFlow Dev Summit 2018)
Other ways to profile
Besides the TensorBoard callback, TensorFlow provides two other ways to trigger the profiler manually: the Profiler APIs and the Profiler Service.
Note: do not run multiple profilers at the same time. If you want to use the Profiler APIs or the Profiler Service together with the TensorBoard callback, make sure to set the profile_batch argument to 0.
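For instance, a minimal sketch of a TensorBoard callback with profiling disabled (profile_batch=0 is the documented way to turn profiling off; the log_dir variable is reused from the earlier cell):
```
tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir=log_dir, profile_batch=0)
```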
Profiler APIs
End of explanation
# This API starts a gRPC server on your TensorFlow job that can receive profiling requests on demand.
tf.python.eager.profiler.start_profiler_server(6009)
# Write your TensorFlow job here
Explanation: Profiler Service
End of explanation |
15,112 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Data from H2Cooling with Gravity
Step1: $$
\rho_{BE} (r) = \frac{C_s^2}{4 \pi G r^2} =\frac{L_0^2 \rho _0 t_0}{t_0^2 L_0^2}
$$
Step2: Simulate Profile
Step3: $$
4\pi \rho_0 \int _0 ^R \frac{R_c r^2}{R_c +r^2}dr =
4\pi \rho_0 Rc \Big( R-\sqrt{R_c} \arctan\big( \frac{R}{\sqrt{R_C}}\big) \Big)
$$ | Python Code:
#HCG=np.load('../Data/H2CoolingG512.npz')
HCG=np.load('../Data/TabulatedG.npz')
f.quadruple(HCG,np.log10(HCG['RHO']),rows=2,nn=0,tlim=26,Save_Figure='H2CoolGRquad')
f.pprofile(HCG,'RHO',steps=4,itlim=26,tdk='Myrs',Save_Figure='H2CoolGRHOprofile',sc2='log',xlim=[0,20],yprop=256)
f.pprofile(HCG,'Temp',steps=4,itlim=26,tdk='Myrs',yl='Temperature (K)',Save_Figure='H2CoolGTempprofile',sc2='log',xlim=[0,20],yprop=256)
f.pprofile(HCG,'PRS',steps=4,itlim=26,tdk='Myrs',yl='Pressure $(\si{dyne.cm^{-2}})$',y0=f.PRS0,Save_Figure='H2CoolGPRSprofile',yprop=256,sc2='log',xlim=[0,15])
(HCG['T'][26]-HCG['T'][25])/1e6
NCG['RHO'][128,128,:60].argmax()
plt.figure(figsize=(12,6))
plt.plot(NCG['T'][:38]/1e6,NCG['PRS'][128,128,:][:38]*f.Temp0/NCG['RHO'][128,128,:][:38],label='No Cooling Simulation Data')
plt.plot(HCG['T']/1e6,HCG['PRS'][256,256,:]*f.Temp0/HCG['RHO'][256,256,:],label='Cooling Simulation Data')
plt.yscale('log')
plt.ylabel('Temperature (K)')
plt.xlabel('$\si{Myrs}$')
plt.legend()
datafolder='../Document/DataImages/'
plt.savefig(datafolder+'GRcenterTimeTemp.png',bbox_inches='tight',format='png', dpi=100)
plt.figure(figsize=(12,6))
plt.plot(NCG['T'][:38]/1e6,NCG['RHO'][128,128,:][:38]*f.RHO0,label='No Cooling Simulation Data')
plt.plot(HCG['T']/1e6,HCG['RHO'][256,256,:]*f.RHO0,label='Cooling Simulation Data')
plt.yscale('log')
plt.ylabel('$ \\rho\, (\si{g.cm^{-3}})$')
plt.xlabel('$\si{Myrs}$')
plt.legend()
datafolder='../Document/DataImages/'
plt.savefig(datafolder+'GRcenterTimeRHO.png',bbox_inches='tight',format='png', dpi=100)
mid=256
ind= HCG['RHO'][mid,mid,:].argmax()
print 'Maximum Density: {:.2e} g/cm3 ({:.2e} cu) at time {:.2f} Myrs (index: {})'.format(HCG['RHO'][mid,mid:,ind].max()*f.RHO0,HCG['RHO'][mid,mid,:].max(),HCG['T'][ind]/1e6,ind)
xs=HCG['X'][mid:]#/10.
ys=HCG['RHO'][mid,mid:,ind]#*f.RHO0
ps=HCG['PRS'][mid,mid:,ind]
cs=np.sqrt((5./3)*ps*f.PRS0/(ys*f.RHO0))
#cs=np.sqrt(ps*f.PRS0/(ys*f.RHO0))
xspc=xs*10.
xau=xs*206265.
yscgs=ys*f.RHO0
ybe=cs**2/(2.*np.pi*6.67e-8*(xspc*3.086e18)**2)
ax=plt.subplot(111)
ax.plot(xspc,yscgs,label='Cooling Simulation Data')
ax.plot(xspc[xspc<2.],ybe[xspc<2.],label='Bonnor-Ebert Sphere')
ax.plot(NCG['X'][128:]*10.,NCG['RHO'][128,128:,23]*f.RHO0,'--',alpha=1.,label='No Cooling Simulation Data')
ax.set_yscale('log')
ax.set_ylabel('$ \\rho\, (\si{g.cm^{-3}})$')
ax.set_xlabel('$\si{pc}$')
#ax.vlines([0.11267],1e-21,1.3e-17,linewidth=0.5,linestyles='--',label='Critical Radius')
ax.set_xlim(0,5.2)
plt.legend()
datafolder='../Document/DataImages/'
plt.savefig(datafolder+'H2CoolGRHOprofile-BE.png',bbox_inches='tight',format='png', dpi=100)
kmtopc=3.24078e-14
1.3*kmtopc/np.sqrt(4.*np.pi*6.67e-8*1e-19)
plt.plot(xspc,cs*1e-5)
plt.ylabel('$C_s \, (\si{km.s^{-1}})$')
plt.xlabel('$\si{pc}$')
plt.xlim(0,8)
#plt.yscale('log')
ax1=plt.subplot(111)
ax1.plot(xs,ys)
ax1.set_yscale('log')
ax1.set_ylabel('$ \\rho\, (\si{cu})$')
ax1.set_xlabel('$x \, (\si{cu}) $')
ax1.set_xlim(0,0.8)
regions= [[0.0,0.02],[0.04,0.2],[0.21,0.36],[0.01,0.35]]
regions=[[0.015,0.29]]
for region in regions:
xmin,xmax=region[0],region[1]
whr=np.logical_and(xs>xmin,xs<xmax)
xsl=np.log(xs[whr])
ysl=np.log(ys[whr])
from scipy.optimize import curve_fit
def ff(x,a,b,): return a*x+b
p,dp2=curve_fit(ff,xsl,ysl,[-2.1,100.])
dp=np.sqrt(np.diag(dp2))
print 'NonLinear Fit: ln(p) = ({:.1f}+-{:.2f})*ln(x)+({:.1f}+-{:.1f})'.format(p[0],dp[0],p[1],dp[1])
xx=np.linspace(xmin,xmax,100)
plt.plot(xx,np.exp(p[1])*xx**(p[0]),'--',label='{:.1f}'.format(p[0]))
plt.plot(xs,ys)
plt.yscale('log')
plt.ylabel('$ \\rho\, (\si{cu})$')
plt.xlabel('$x\, (\si{cu})$')
plt.legend()
plt.xlim(None,0.45)
Explanation: Data from H2Cooling with Gravity
End of explanation
xs=HCG['X'][mid:]
ysP=HCG['PRS'][mid,mid:,ind]
xscgs=xs*10.
xau=xscgs*206265.
ysPcgs=ysP*f.PRS0
ax=plt.subplot(111)
ax.plot(xscgs,ysPcgs)
ax.set_yscale('log')
ax.set_ylabel('$ P\, (\si{dyne.cm^{-2}})$')
ax.set_xlabel('$\si{pc}$')
ax.set_xlim(None,10)
Explanation: $$
\rho_{BE} (r) = \frac{C_s^2}{4 \pi G r^2} =\frac{L_0^2 \rho _0 t_0}{t_0^2 L_0^2}
$$
End of explanation
Radius = 1.0
Density1 = 10.
P0=1e-8
T0=10891304347826.088
def rho(r,a=2.3,rho1=10.,R=1.,rc=0.002): return np.piecewise(r, [r < R , r >= R], [lambda r: rho1/(rc+r**a), 1.])
def mass(r,sm=False):
return 4.*np.pi*r**2 *rho(r)*24.73 if sm else 4.*np.pi*r**2 *rho(r)
#def massmo(r,a,rc): return mass(r,a=-2.3,rho0=1e5,rc=0.005,R=10.) *24.73 #(10pc)^3 * hydrogen_mass /cm^3 = 24.73 Mo
A=10.
B=0.002
a=2.3
A/B,B**(1/a)
r=np.linspace(-2,3,500)
plt.ylabel('Number Density $(\si{cm^{-3}})$')
plt.xlabel('Radius $(\si{pc})$')
plt.yscale('log')
plt.plot(r*10.,rho(r),label=-2.3)
#plt.plot(r*10.,10./r**(2.3),label=-2.3)
#plt.vlines(0.07,1e2,1e5,linewidth=0.5)
plt.legend()
r=np.linspace(0,3,500)
plt.ylabel('$ \\rho\, (\si{g.cm^{-3}})$')
plt.xlabel('Radius $(\si{pc})$')
plt.yscale('log')
plt.plot(r*10.,rho(r)*f.RHO0,label='{} Density Profile'.format(-2.3))
plt.legend()
plt.xlim(0,16)
datafolder='../Document/DataImages/'
plt.savefig(datafolder+'SimRHOProfile.png',bbox_inches='tight',format='png', dpi=100)
plt.ylabel('Temperature$(\si{K})$')
plt.xlabel('Radius $(\SI{10}{pc})$')
plt.yscale('log')
plt.plot(r*10,P0*T0/rho(r),label=-2.3)
#plt.legend()
plt.xlim(0,16)
datafolder='../Document/DataImages/'
plt.savefig(datafolder+'SimTMPProfile.png',bbox_inches='tight',format='png', dpi=100)
(P0*T0/rho(r))[0]
plt.plot(r*10,mass(r,sm=True),label=-2.3)
#plt.plot(r,4.*np.pi*r**2 *rho(r),label=-2.3)
plt.ylabel('Mass $(\si{M_\odot})$')
plt.xlabel('Radius $(\si{pc})$')
plt.legend()
plt.plot(r,mass(r),label=-2.3)
plt.ylabel('Mass (cu)')
plt.xlabel('Radius $(\si{pc})$')
plt.legend()
TotalMass=quad(mass,0,1.,args=True)[0]
print u'Total Mass: {:.3e} Mโ'.format(TotalMass)
TotalMass=quad(mass,0,1.)[0]
print u'Total Mass: {:e} cu'.format(TotalMass)
Explanation: Simulate Profile
End of explanation
def integral(rho1,R):
return 17.952*rho1*R**0.7
#return 4.*np.pi*rho0*Rc*(R-np.sqrt(Rc)*np.arctan(R/np.sqrt(Rc)))
integral(10.,1.)
Explanation: $$
4\pi \rho_0 \int _0 ^R \frac{R_c r^2}{R_c +r^2}dr =
4\pi \rho_0 Rc \Big( R-\sqrt{R_c} \arctan\big( \frac{R}{\sqrt{R_C}}\big) \Big)
$$
End of explanation |
15,113 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<p><font size="4"> MOOC
Step1: 2) Letting $K$ range from 1 to 11, plot the loss probability for $\lambda = 4$ and for $\lambda = 10$ (and $\mu=5$). Remarks ? Compare it to the theoretical loss probability.
Observe on the curves that when $\rho<1$ ($\lambda=4$), the blocking probability of the $M/M/1/K$ queue tends to 0 as $K$ increases since the system tends to behave as a stable $M/M/1$ queue.
When $\rho>1$ ($\lambda=10$), the rate of arrivals exceeds that of departures and the corresponding $M/M/1$ queue is unstable. So, even if $K$ is large the loss probability of the $M/M/1/K$ queue does not tend to zero.
The loss probability in an M/M/1/K queue is
$$
\pi_K=\dfrac{1-\rho}{1-\rho^{K+1}}\rho^K
$$
Step2: Your answers for the exercise | Python Code:
%matplotlib inline
from pylab import *
def MM1K(K=3,lambda_ = 4.,mu = 5.,N0 = 2,Tmax=100):
N0 = min(N0,K)# enforcing buffer length constraint
p = lambda_/(mu+lambda_) # probability that the next event is an arrival when N(t) > 0
T = [0] # list of instants of events (arrivals/departures)
N = [N0] # initial number of customers in the system, list of number of customers at arrivals/departures
losses = 0 # number of lost clients
arrivals = 0
while T[-1]<Tmax:
if N[-1]==0:
# inter-event when N(t)=0:
tau = -1./lambda_*log(rand())
event = 1
else:
tau = -1./(lambda_+mu)*log(rand()) # inter-event when N(t)>0
event = 2*(rand()<p)-1 # +1 for an arrival, -1 for a departure
# Unlike in function MM1, when N[-1]==K, if a new client arrives this client is lost
# and the number of lost clients is incremented by 1
if event==1:
arrivals +=1 # event==1 corresponds to an arrival
if N[-1]==K:
###########################
# supply value of events and update losses
# when a customer arrives while N[-1]==K
event = 0 # a lost client does not increase N
losses += 1
###########################
N = N + [N[-1]+event]
T = T + [T[-1]+tau]
T = T[:-1] # event after Tmax is discarded
N = N[:-1]
return array(T),array(N),arrivals,losses
#------------------
T,N,arrivals,losses = MM1K(K=3,lambda_ = 4.,mu = 5.,N0 = 2,Tmax=10**3)
V1 = losses/arrivals
# Plotting the evolution of the number of clients in the system
def step(x,y,Tmax=0,color='b'):
# plots a step function representing the number
# of clients in the system at each instant
if Tmax==0:
Tmax = max(x)
x = append(x,[Tmax]) # number of clients
y = append(y,[y[-1]]) # instants of events
for k in range(len(x)-1):
vlines(x[k+1],y[k],y[k+1],color=color)
hlines(y[k],x[k],x[k+1],color=color)
K=3
T,N,arrivals,losses = MM1K()
rcParams['figure.figsize'] = [15,3]
step(T,N)
xlabel('Time')
ylabel('Number of clients')
lambda_ = 4.
mu = 5.
title('Number of clients in the M/M/1/K queue'
+r'($\rho =%g$, $K=%g$)'%(lambda_/mu,K))
axis(ymin=-1,ymax=4)
yticks(range(4), range(4));
Explanation: <p><font size="4"> MOOC: Understanding queues</font></p>
<p><font size="4"> Python simulations</p>
<p><font size="4"> Week IV: Continuous time Markov chains </p>
In this lab, we focus on the simulation of continuous time Markov chains. In the lab of week 2 we have simulated a M/M/1 queue. This week, we are going to study a $M/M/1/K$ queue to illustrate the effects of a finite buffer length. We will compute the loss probability in this model and observe the influence of the load $\rho$ when the capacity $K$ is large.
1) Complete the code of the function MM1K below. This function generates one trajectory of a $M/M/1/K$ queue. The function returns the instants of events (arrivals or departures), the number of customers in the system at these instants, as well as the number of arrivals and of lost customers. Customers are lost if the buffer is full when they arrive. Default parameters will be set as follows: MMM1K($K=3$, $\lambda = 4$, $\mu = 5$, $N0 = 2$, $Tmax=100$). $\lambda$ and $\mu$ are the arrival and departure rates, $K$ is the maximum number of customers in the system, $N0$ is the initial number of customers, and the evolution of the number of customers in the system is simulated over $[0,Tmax]$. Plot a trajectory of the number of customers in the system against time, obtained after running function MM1K with the default parameters.
End of explanation
Ks = arange(1,12,2) # system capacities
Ploss_est = zeros(len(Ks)) # estimated loss probabilities
########################################################
# complete the value returned by function estimate_Ploss
# that estimates the loss probability from the obbserved
# number of arrivals and number of lost arrivals
def estimate_Ploss(arrivals, losses):
return float(losses)/float(arrivals)
########################################################
mu = 5
for lambda_ in [4,10]:
# estimated loss probabilities:
for index,K in enumerate(Ks):
T,N,arrivals,losses = \
MM1K(lambda_=lambda_,K=K,Tmax=10**3)
Ploss_est[index] = estimate_Ploss(arrivals, losses)
plot(Ks,Ploss_est,label="$\lambda$=%d"%lambda_)
# loss probabilities:
rho = lambda_/mu
Ploss = (1-rho)/(1-rho**(Ks+1))*rho**Ks
plot(Ks,Ploss,'.',label="Theory, $\lambda$=%d"%lambda_)
axis(xmin=1,xmax=11)
xlabel("System capacity")
title("Loss probability")
legend(loc=(.85,.2))
#--------------------------
V2 = estimate_Ploss(2,1)
Explanation: 2) Letting $K$ range from 1 to 11, plot the loss probability for $\lambda = 4$ and for $\lambda = 10$ (and $\mu=5$). Remarks ? Compare it to the theoretical loss probability.
Observe on the curves that when $\rho<1$ ($\lambda=4$), the blocking probability of the $M/M/1/K$ queue tends to 0 as $K$ increases since the system tends to behave as a stable $M/M/1$ queue.
When $\rho>1$ ($\lambda=10$), the rate of arrivals exceeds that of departures and the corresponding $M/M/1$ queue is unstable. So, even if $K$ is large the loss probability of the $M/M/1/K$ queue does not tend to zero.
The loss probability in a M/M/1/K queue is:
$$
\pi_K=\dfrac{1-\rho}{1-\rho^{K+1}}\rho^K
$$
Clearly, when $\rho<1$, $\pi_K$ tends to 0 as $K$ tends to infinity, whereas $\pi_K$ tends to $(\rho-1)/\rho$ when $\rho>1$ and $K$ tends to infinity.
End of explanation
print("---------------------------\n"
+"RESULTS SUPPLIED FOR LAB 4:\n"
+"---------------------------")
results = ("V"+str(k) for k in range(1,3))
for x in results:
try:
print(x+" = {0:.2f}".format(eval(x)))
except:
print(x+": variable is undefined")
Explanation: Your answers for the exercise
End of explanation |
15,114 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Sympy (sympy.org) is a Python package used for solving equations with symbolic math.
Using Python and SymPy we can write and solve equations that come up in Engineering.
The example problem below contains two equations with two unknown variables. You could use a pencil and paper to solve the problem, but we are going to use Python and programming to solve the problem.
Given
Step1: Next we need to define six different variables
Step2: Now we can create two SymPy expressions that represent our two equations. We can subtract the %crystallinity from the left side of the equation to set the equation to zero. The result of moving the %crystallinity term to the other side of the equation is shown below. Note how the second equation equals zero.
$$ \%crystallinity = \frac{ \rho_c(\rho_s - \rho_a) }{\rho_s(\rho_c - \rho_a) } \times 100 \% $$
$$ \frac{ \rho_c(\rho_s - \rho_a) }{\rho_s(\rho_c - \rho_a) } \times 100 \% - \%crystallinity = 0 $$
Substitue in $\rho_s = \rho_1$ and $\rho_s = \rho_2$ into the expression above. Also substitue in $\%crystallinity = \%c_1$ and $\%crystallinity = \%c_2$. The result is two equations, each equation is equal to zero.
$$ \frac{ \rho_c(\rho_1 - \rho_a) }{\rho_1(\rho_c - \rho_a) } \times 100 \% - \%c_1 = 0 $$
$$ \frac{ \rho_c(\rho_2 - \rho_a) }{\rho_2(\rho_c - \rho_a) } \times 100 \% - \%c_2 = 0 $$
Now we have two equations (the two equations above) which we can solve for two unknowns ($\rho_a$ and $\rho_s$). The two equations can be coded into SymPy expressions. The SymPy expressions contain the variables we defined earlier.
Step3: Next, we'll substitute in the known values $\rho_1 = 0.904$ and $c_1 = 0.628$ into our first expression expr1. Note you need to set the output of SymPy's .subs method to a variable. SymPy expressions are not modified in-place. You need to capture the output of the .subs method in a variable.
Step4: Now we'll substitue the second set of given values $\rho_2 = 0.895$ and $c_2 = 0.544$ into our second expression expr2.
Step5: We'll use SymPy's nonlinsolve() function to solve the two equations expr1 and expr2 for to unknows pa and pc. SymPy's nonlinsolve() function expects a list of expressions [expr1,expr2] followed by a list variables [pa,pc] to solve for.
Step6: We see that the value of $\rho_a = 0.84079$ and $\rho_c = 0.94613$.
The solution is a SymPy FiniteSet object. To pull the values of $\rho_a$ and $\rho_c$ out of the FiniteSet, use the syntax sol.args[0][<var num>] to pull the answers out.
Step7: Use SymPy to calculate a numerical result
Besides solving equations, SymPy expressions can also be used to calculate a numerical result. A numerical result can be calculated if all of the variables in an expression are set to floats or integers.
Let's solve the following problem with SymPy and calculate a numerical result.
Given
Step8: Next, we will create three SymPy symbols objects. These three symbols objects will be used to build our expression.
Step9: The expression that relates % crystallinity of a polymer sample to the density of 100% amorphus and 100% crystalline versions of the same polymer is below.
$$ \%crystallinity = \frac{ \rho_c(\rho_s - \rho_a) }{\rho_s(\rho_c - \rho_a) } \times 100 \% $$
We can build a SymPy expression that represents the equation above using the symbols objects (variables) we just defined.
Step10: Now we can substitute our $ \rho_a $ and $ \rho_c $ from above. Note the SymPy's .subs() method does not modify an expression in place. We have to set the modified expression to a new variable before we can make another substitution. After the substitutions are complete, we can print out the numerical value of the expression. This is accomplished with SymPy's .evalf() method.
Step11: As a final step, we can print out the answer using a Python f-string. | Python Code:
from sympy import symbols, nonlinsolve
Explanation: Sympy (sympy.org) is a Python package used for solving equations with symbolic math.
Using Python and SymPy we can write and solve equations that come up in Engineering.
The example problem below contains two equations with two unknown variables. You could use a pencil and paper to solve the problem, but we are going to use Python and programming to solve the problem.
Given:
The density of two different samples of a polymer $\rho_1$ and $\rho_2$ are measured.
$$ \rho_1 = 0.904 \ g/cm^3 $$
$$ \rho_2 = 0.895 \ g/cm^3 $$
The percent crystallinity of the two samples ($\%c_1 $ and $\%c_2$) is known.
$$ \%c_1 = 62.8 \% $$
$$ \%c_2 = 54.4 \% $$
The percent crystallinity of a polymer sample is related to the density of 100% amorphous regions ($\rho_a$) and 100% crystalline regions ($\rho_c$) according to:
$$ \%crystallinity = \frac{ \rho_c(\rho_s - \rho_a) }{\rho_s(\rho_c - \rho_a) } \times 100 \% $$
Find:
Find the density of 100% amorphous regions ($\rho_a$) and the density of 100% crystalline regions ($\rho_c$) for this polymer.
Solution:
We are going to use Python and a package called SymPy to solve this problem. I recommend installing the Anaconda distribution of Python. If you install Anaconda, SymPy is included. If you downloaded Python from Python.org or if you are using a virtual environment, SymPy can be installed at a terminal using pip with the command below.
text
$ pip install sympy
We need a couple of functions from the SymPy package to solve this problem. We need the symbols() function to create symbolic math variables for the density of 100% amorphous and 100% crystalline regions ($\rho_a$ and $\rho_c$) and variables for the given information in the problem ($\%c_1 $, $\%c_2$, $\rho_1$ and $\rho_2$ ). We also need SymPy's nonlinsolve() function to solve a system of non-linear equations.
The symbols() function and the nonlinsolve() function can be imported from SymPy using the line below.
End of explanation
pc, pa, p1, p2, c1, c2 = symbols('pc pa p1 p2 c1 c2')
Explanation: Next we need to define six different variables:
$$\rho_c, \rho_a, \rho_1, \rho_2, c_1, c_2$$
Note commas are included in the symbols output, but there are no commas in the symbols input.
End of explanation
expr1 = ( (pc*(p1-pa) ) / (p1*(pc-pa)) - c1)
expr2 = ( (pc*(p2-pa) ) / (p2*(pc-pa)) - c2)
Explanation: Now we can create two SymPy expressions that represent our two equations. We can subtract the %crystallinity from the left side of the equation to set the equation to zero. The result of moving the %crystallinity term to the other side of the equation is shown below. Note how the second equation equals zero.
$$ \%crystallinity = \frac{ \rho_c(\rho_s - \rho_a) }{\rho_s(\rho_c - \rho_a) } \times 100 \% $$
$$ \frac{ \rho_c(\rho_s - \rho_a) }{\rho_s(\rho_c - \rho_a) } \times 100 \% - \%crystallinity = 0 $$
Substitue in $\rho_s = \rho_1$ and $\rho_s = \rho_2$ into the expression above. Also substitue in $\%crystallinity = \%c_1$ and $\%crystallinity = \%c_2$. The result is two equations, each equation is equal to zero.
$$ \frac{ \rho_c(\rho_1 - \rho_a) }{\rho_1(\rho_c - \rho_a) } \times 100 \% - \%c_1 = 0 $$
$$ \frac{ \rho_c(\rho_2 - \rho_a) }{\rho_2(\rho_c - \rho_a) } \times 100 \% - \%c_2 = 0 $$
Now we have two equations (the two equations above) which we can solve for two unknowns ($\rho_a$ and $\rho_s$). The two equations can be coded into SymPy expressions. The SymPy expressions contain the variables we defined earlier.
End of explanation
expr1 = expr1.subs(p1, 0.904)
expr1 = expr1.subs(c1, 0.628)
print(expr1)
Explanation: Next, we'll substitute in the known values $\rho_1 = 0.904$ and $c_1 = 0.628$ into our first expression expr1. Note you need to set the output of SymPy's .subs method to a variable. SymPy expressions are not modified in-place. You need to capture the output of the .subs method in a variable.
End of explanation
expr2 = expr2.subs(p2, 0.895)
expr2 = expr2.subs(c2, 0.544)
print(expr2)
Explanation: Now we'll substitue the second set of given values $\rho_2 = 0.895$ and $c_2 = 0.544$ into our second expression expr2.
End of explanation
sol = nonlinsolve([expr1,expr2],[pa,pc])
print(sol)
Explanation: We'll use SymPy's nonlinsolve() function to solve the two equations expr1 and expr2 for to unknows pa and pc. SymPy's nonlinsolve() function expects a list of expressions [expr1,expr2] followed by a list variables [pa,pc] to solve for.
End of explanation
print(type(sol))
pa = sol.args[0][0]
pc = sol.args[0][1]
print(f' Density of 100% amorphous polymer, pa = {round(pa,2)} g/cm3')
print(f' Density of 100% crystaline polymer, pc = {round(pc,2)} g/cm3')
Explanation: We see that the value of $\rho_a = 0.84079$ and $\rho_c = 0.94613$.
The solution is a SymPy FiniteSet object. To pull the values of $\rho_a$ and $\rho_c$ out of the FiniteSet, use the syntax sol.args[0][<var num>] to pull the answers out.
End of explanation
print(pa)
print(pc)
Explanation: Use SymPy to calculate a numerical result
Besides solving equations, SymPy expressions can also be used to calculate a numerical result. A numerical result can be calculated if all of the variables in an expression are set to floats or integers.
Let's solve the following problem with SymPy and calculate a numerical result.
Given:
The density of a 100% amorphous polymer sample $\rho_a$ and the density of a 100% crystalline sample $\rho_c$ of the same polymer are measured.
$$ \rho_a = 0.84 \ g/cm^3 $$
$$ \rho_c = 0.95 \ g/cm^3 $$
The density of a sample $\rho_s$ of the same polymer is measured.
$$ \rho_s = 0.921 \ g/cm^3 $$
Find:
What is the % crystallinity of the sample with a measured density $ \rho_s = 0.921 \ g/cm^3 $?
Solution
We have precise values for $ \rho_a $ and $ \rho_c $ from the previous problem. Let's see what the values of $ \rho_a $ and $ \rho_c $ are. We will use these more precise values that we calculated earlier to solve the problem.
End of explanation
pc, pa, ps = symbols('pc pa ps')
Explanation: Next, we will create three SymPy symbols objects. These three symbols objects will be used to build our expression.
End of explanation
expr = ( pc*(ps-pa) ) / (ps*(pc-pa))
Explanation: The expression that relates % crystallinity of a polymer sample to the density of 100% amorphus and 100% crystalline versions of the same polymer is below.
$$ \%crystallinity = \frac{ \rho_c(\rho_s - \rho_a) }{\rho_s(\rho_c - \rho_a) } \times 100 \% $$
We can build a SymPy expression that represents the equation above using the symbols objects (variables) we just defined.
End of explanation
expr = expr.subs(pa, 0.840789786223278)
expr = expr.subs(pc, 0.946134313397929)
expr = expr.subs(ps, 0.921)
print(expr.evalf())
Explanation: Now we can substitute our $ \rho_a $ and $ \rho_c $ from above. Note the SymPy's .subs() method does not modify an expression in place. We have to set the modified expression to a new variable before we can make another substitution. After the substitutions are complete, we can print out the numerical value of the expression. This is accomplished with SymPy's .evalf() method.
End of explanation
print(f'The percent crystallinity of the sample is {round(expr*100,1)} percent')
Explanation: As a final step, we can print out the answer using a Python f-string.
End of explanation |
15,115 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Testing TRICERATOPS EB modeling vs. isochrones
We want to test how the luminosity-scaling method compares to physical modeling based on colors & parallax.
This is the layout of the test
Step1: Sample space of allowed binaries
Step2: Let's check out the joint distribution of R1, R2 allowed by this sampling.
Step3: Compute $f_{EB}$ for each sample
The above sampling provides derived samples of the primary and secondary TESS mags. This allows us to compute $f_{EB}$. First note that
$$ f_{EB} = \frac{f_2}{f_1 + f_2} $$
From the magnitude difference TESS_mag_1 - TESS_mag_0 we can compute the secondary/primary flux ratio $f_2/f_1$, and we can rewrite $f_{EB}$ as follows
Step4: Compute primary & secondary radii using TRICERATOPS method
Step5: Compare derived R1, R2 with true R1, R2
Step6: OK, let's ask a more specific question. For each sample here (representing a fixed value of $f_{EB}$), how much is the estimate of the radius ratio $R_2/R_1$ biased with respect to the "true" value?
Step7: OK, so for this particular star, the estimated radius ratio tends to be off by about 15% from the truth. So the question remains
Step8: This is fairly different from assuming a flat prior on $f_{EB}$, though this will probably only matter in borderline cases, i.e., where the max-likelihood of EB model is close to that of the TP model.
Now remember, this was all done in the context of the true simulated star actually being a single star, meaning all the purported binary companions are forced toward low masses. Is this any different if the true scenario is actually a more luminous binary?
Pt 2 | Python Code:
from isochrones import get_ichrone
mist = get_ichrone('mist', bands=['TESS', 'V', 'K'])
mass, age, feh = (0.8, 9.7, 0.0)
distance = 10 # pc
AV = 0.0
simulated_props = mist.generate(mass, age, feh, distance=distance, AV=AV)
simulated_props[['mass', 'radius', 'TESS_mag', 'V_mag', 'K_mag']]
Explanation: Testing TRICERATOPS EB modeling vs. isochrones
We want to test how the luminosity-scaling method compares to physical modeling based on colors & parallax.
This is the layout of the test:
Choose arbitrary properties for our test target star.
Sample space of possible stellar binaries physically consistent with observed apparent V, K, and parallax.
Compute f_EB for each sample--this gives a distribution of the true allowed distribution of f_EB.
For each of these f_EB samples, compute the primary and secondary radius via the "luminosity scaling" method.
Compare these comupted R1, R2 with the actual true primary and secondary radii from the samples.
Set properties of primary star
End of explanation
from isochrones import BinaryStarModel
# set "observed" properties to the true simulated V, K, and parallax.
props = {'V': (float(simulated_props['V_mag']), 0.02), 'K': (float(simulated_props['K_mag']), 0.02), 'parallax': (100, 0.05)}
mod = BinaryStarModel(mist, **props, maxAV=0.0001, eep_bounds=(0, 450), name='triceratops_eb_1')
mod.fit()
Explanation: Sample space of allowed binaries
End of explanation
from corner import corner
corner(mod.derived_samples[['radius_0', 'radius_1']]);
Explanation: Let's check out the joint distribution of R1, R2 allowed by this sampling.
End of explanation
dmag = mod.derived_samples['TESS_mag_1'] - mod.derived_samples['TESS_mag_0']
f_ratio = 10**(-0.4 * dmag)
f_EB = 1 / (1./f_ratio + 1)
f_EB.describe()
Explanation: Compute $f_{EB}$ for each sample
The above sampling provides derived samples of the primary and secondary TESS mags. This allows us to compute $f_{EB}$. First note that
$$ f_{EB} = \frac{f_2}{f_1 + f_2} $$
From the magnitude difference TESS_mag_1 - TESS_mag_0 we can compute the secondary/primary flux ratio $f_2/f_1$, and we can rewrite $f_{EB}$ as follows:
$$ f_{EB} = \frac{1}{\frac{f_1}{f_2} + 1} $$
End of explanation
from triceratops.funcs import stellar_relations
def get_radii(L, f_EB):
L1 = L * (1 - f_EB)
L2 = L * f_EB
_, R1, _, _, _ = stellar_relations(lum=L1)
_, R2, _, _, _ = stellar_relations(lum=L2)
return R1, R2
R1, R2 = zip(*[get_radii(float(10**simulated_props['logL']), f) for f in f_EB])
Explanation: Compute primary & secondary radii using TRICERATOPS method
End of explanation
import numpy as np
def compare_radii(mod, R1, R2):
samples1 = np.array([R1, R2]).T
samples2 = mod.derived_samples[['radius_0', 'radius_1']]
param_range = [(min(min(R1), samples2.radius_0.min()), max(max(R1), samples2.radius_0.max())),
(min(min(R2), samples2.radius_1.min()), max(max(R2), samples2.radius_0.max()))]
fig = corner(samples1, range=param_range, color='red')
return corner(samples2, fig=fig, range=param_range)
compare_radii(mod, R1, R2);
Explanation: Compare derived R1, R2 with true R1, R2
End of explanation
import matplotlib.pyplot as plt
def bias_hist(mod, R1, R2):
R2R1_iso = mod.derived_samples.radius_1 / mod.derived_samples.radius_0
R2R1_tri = np.array(R2)/np.array(R1)
bias = R2R1_tri / R2R1_iso
plt.hist(bias);
plt.axvline(bias.mean(), color='k', ls='--')
plt.xlabel('radius ratio bias: derived/true')
bias_hist(mod, R1, R2)
Explanation: OK, let's ask a more specific question. For each sample here (representing a fixed value of $f_{EB}$), how much is the estimate of the radius ratio $R_2/R_1$ biased with respect to the "true" value?
End of explanation
plt.hist(f_EB);
Explanation: OK, so for this particular star, the estimated radius ratio tends to be off by about 15% from the truth. So the question remains: does this matter?
Well, the TRICERATOPS algorithm computes the EB light curve as a function of $f_{EB}$ and looks for the value of $f_{EB}$ that gives the best fit to the data. I think this bias should then not affect the actual maximum likelihood value (the most important number for FPP analysis), but rather that the actual computed value of the radius ratio $at$ the max-likelihood value of $f_{EB}$ will be biased by about this much.
However, one thing that perhaps should be different if we wanted to properly take into account the fact that we know the color of the target star, would be the prior on $f_{EB}$. Here's the distribution of $f_{EB}$ allowed by the color constraint:
End of explanation
mass1, mass2, age, feh = (1.0, 0.6, 9.7, 0.0)
distance = 10 # pc
AV = 0.0
simulated_props_2 = mist.generate_binary(mass1, mass2, age, feh)
props = {'V': (float(simulated_props_2['V_mag']), 0.02), 'K': (float(simulated_props_2['K_mag']), 0.02), 'parallax': (100, 0.05)}
mod_2 = BinaryStarModel(mist, **props, maxAV=0.0001, eep_bounds=(0, 450), name='triceratops_eb_2')
mod_2.fit()
corner(mod_2.derived_samples[['radius_0', 'radius_1']]);
dmag_2 = mod_2.derived_samples['TESS_mag_1'] - mod_2.derived_samples['TESS_mag_0']
f_ratio_2 = 10**(-0.4 * dmag_2)
f_EB_2 = 1 / (1./f_ratio_2 + 1)
L_tot = float(10**simulated_props_2['logL_0'] + 10**simulated_props_2['logL_1'])
R1_2, R2_2 = zip(*[get_radii(L_tot, f) for f in f_EB_2])
compare_radii(mod_2, R1_2, R2_2);
bias_hist(mod_2, R1_2, R2_2)
Explanation: This is fairly different from assuming a flat prior on $f_{EB}$, though this will probably only matter in borderline cases, i.e., where the max-likelihood of EB model is close to that of the TP model.
Now remember, this was all done in the context of the true simulated star actually being a single star, meaning all the purported binary companions are forced toward low masses. Is this any different if the true scenario is actually a more luminous binary?
Pt 2: binary simulation
Here, we do the same as above, but generate properties of a binary star instead of a single star
End of explanation |
15,116 | Given the following text description, write Python code to implement the functionality described.
Description:
Find sub
Python implementation of the approach ; Function to return the sum of the sub - matrix ; Function that returns true if it is possible to find the sub - matrix with required sum ; 2 - D array to store the sum of all the sub - matrices ; Filling of dp [ ] [ ] array ; Checking for each possible sub - matrix of size k X k ; Sub - matrix with the given sum not found ; Driver code ; Function call
| Python Code:
N = 4

# Return the sum of the sub-matrix spanning grid rows r1..r2-1 and
# columns c1..c2-1, using the prefix-sum table dp
def getSum(r1, r2, c1, c2, dp):
    return dp[r2][c2] - dp[r2][c1] - dp[r1][c2] + dp[r1][c1]

# Return True if some K x K sub-matrix of grid has sum S
def sumFound(K, S, grid):
    # 2-D prefix-sum array over the grid
    dp = [[0 for i in range(N + 1)] for j in range(N + 1)]
    # Fill the dp[][] array
    for i in range(N):
        for j in range(N):
            dp[i + 1][j + 1] = dp[i + 1][j] + dp[i][j + 1] - dp[i][j] + grid[i][j]
    # Check every possible K x K sub-matrix (upper-left corner must stay in bounds)
    for i in range(N - K + 1):
        for j in range(N - K + 1):
            if getSum(i, i + K, j, j + K, dp) == S:
                return True
    # No sub-matrix with the given sum was found
    return False

# Driver code
grid = [[1, 2, 3, 4],
        [5, 6, 7, 8],
        [9, 10, 11, 12],
        [13, 14, 15, 16]]
K = 2
S = 14

# Function call
if sumFound(K, S, grid):
    print("Yes")
else:
    print("No")
|
15,117 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Decision Tree Classifier - random_state
In the previous notebook we got an accuracy score of just over 40%.
Lets just do that again.
Step1: and again.
Step2: one more time
Step3: We see that the results are not the same. This is because the Decision Tree Classifier chooses a feature at random in order to try to avoid overfitting. As we are about to start trying to improve the results with different strategies for preparing and loading the data, having results that vary from run to run will be unhelpful.
To avoid this we can manually set the random_state.
# Imports
from sklearn import metrics
from sklearn.tree import DecisionTreeClassifier
import pandas as pd
# Training Data
training_raw = pd.read_table("../data/training_data.dat")
df_training = pd.DataFrame(training_raw)
# test Data
test_raw = pd.read_table("../data/test_data.dat")
df_test = pd.DataFrame(test_raw)
# target names
target_categories = ['Unclassified','Art','Aviation','Boating','Camping /Walking /Climbing','Collecting']
# Extract target results from panda
target = df_training["CategoryID"].values
# Create classifier class
model_dtc = DecisionTreeClassifier()
# features
feature_names_integers = ['Barcode','UnitRRP']
# Extra features from panda (without description)
training_data_integers = df_training[feature_names_integers].values
training_data_integers[:3]
# train model
model_dtc.fit(training_data_integers, target)
# Extract test data and test the model
test_data_integers = df_test[feature_names_integers].values
test_target = df_test["CategoryID"].values
expected = test_target
predicted_dtc = model_dtc.predict(test_data_integers)
print(metrics.classification_report(expected, predicted_dtc, target_names=target_categories))
print(metrics.confusion_matrix(expected, predicted_dtc))
metrics.accuracy_score(expected, predicted_dtc, normalize=True, sample_weight=None)
Explanation: Decision Tree Classifier - random_state
In the previous notebook we got an accuracy score of just over 40%.
Lets just do that again.
End of explanation
model_dtc = DecisionTreeClassifier()
model_dtc.fit(training_data_integers, target)
predicted_dtc = model_dtc.predict(test_data_integers)
metrics.accuracy_score(expected, predicted_dtc, normalize=True, sample_weight=None)
Explanation: and again.
End of explanation
model_dtc = DecisionTreeClassifier()
model_dtc.fit(training_data_integers, target)
predicted_dtc = model_dtc.predict(test_data_integers)
metrics.accuracy_score(expected, predicted_dtc, normalize=True, sample_weight=None)
Explanation: one more time :)
End of explanation
model_dtc = DecisionTreeClassifier(random_state=511)
model_dtc.fit(training_data_integers, target)
predicted_dtc = model_dtc.predict(test_data_integers)
metrics.accuracy_score(expected, predicted_dtc, normalize=True, sample_weight=None)
model_dtc = DecisionTreeClassifier(random_state=511)
model_dtc.fit(training_data_integers, target)
predicted_dtc = model_dtc.predict(test_data_integers)
metrics.accuracy_score(expected, predicted_dtc, normalize=True, sample_weight=None)
Explanation: We see that the results are not the same. This is because the Decision Tree Classifier chooses a feature at random in order to try to avoid overfitting. As we are about to start trying to improve the results with different strategies for preparing and loading the data, having results that vary from run to run will be unhelpful.
To avoid this we can manually set the random_state.
End of explanation |
15,118 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<small><i>This notebook was put together by Jake Vanderplas. Source and license info is on GitHub.</i></small>
Introduction to Scikit-Learn
Step1: This may seem like a trivial task, but it is a simple version of a very important concept.
By drawing this separating line, we have learned a model which can generalize to new
data
Step2: Again, this is an example of fitting a model to data, such that the model can make
generalizations about new data. The model has been learned from the training
data, and can be used to predict the result of test data
Step3: Quick Question
Step4: This data is four dimensional, but we can visualize two of the dimensions
at a time using a simple scatter-plot
Step5: Quick Exercise | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
plt.style.use('seaborn')
# Import the example plot from the figures directory
from fig_code import plot_sgd_separator
plot_sgd_separator()
Explanation: <small><i>This notebook was put together by Jake Vanderplas. Source and license info is on GitHub.</i></small>
Introduction to Scikit-Learn: Machine Learning with Python
This session will cover the basics of Scikit-Learn, a popular package containing a collection of tools for machine learning written in Python. See more at http://scikit-learn.org.
Outline
Main Goal: To introduce the central concepts of machine learning, and how they can be applied in Python using the Scikit-learn Package.
Definition of machine learning
Data representation in scikit-learn
Introduction to the Scikit-learn API
About Scikit-Learn
Scikit-Learn is a Python package designed to give access to well-known machine learning algorithms within Python code, through a clean, well-thought-out API. It has been built by hundreds of contributors from around the world, and is used across industry and academia.
Scikit-Learn is built upon Python's NumPy (Numerical Python) and SciPy (Scientific Python) libraries, which enable efficient in-core numerical and scientific computation within Python. As such, scikit-learn is not specifically designed for extremely large datasets, though there is some work in this area.
For this short introduction, I'm going to stick to questions of in-core processing of small to medium datasets with Scikit-learn.
What is Machine Learning?
In this section we will begin to explore the basic principles of machine learning.
Machine Learning is about building programs with tunable parameters (typically an
array of floating point values) that are adjusted automatically so as to improve
their behavior by adapting to previously seen data.
Machine Learning can be considered a subfield of Artificial Intelligence since those
algorithms can be seen as building blocks to make computers learn to behave more
intelligently by somehow generalizing rather that just storing and retrieving data items
like a database system would do.
We'll take a look at two very simple machine learning tasks here.
The first is a classification task: the figure shows a
collection of two-dimensional data, colored according to two different class
labels. A classification algorithm may be used to draw a dividing boundary
between the two clusters of points:
End of explanation
from fig_code import plot_linear_regression
plot_linear_regression()
Explanation: This may seem like a trivial task, but it is a simple version of a very important concept.
By drawing this separating line, we have learned a model which can generalize to new
data: if you were to drop another point onto the plane which is unlabeled, this algorithm
could now predict whether it's a blue or a red point.
If you'd like to see the source code used to generate this, you can either open the
code in the figures directory, or you can load the code using the %load magic command:
The next simple task we'll look at is a regression task: a simple best-fit line
to a set of data:
End of explanation
from IPython.core.display import Image, display
display(Image(filename='images/iris_setosa.jpg'))
print("Iris Setosa\n")
display(Image(filename='images/iris_versicolor.jpg'))
print("Iris Versicolor\n")
display(Image(filename='images/iris_virginica.jpg'))
print("Iris Virginica")
Explanation: Again, this is an example of fitting a model to data, such that the model can make
generalizations about new data. The model has been learned from the training
data, and can be used to predict the result of test data:
here, we might be given an x-value, and the model would
allow us to predict the y value. Again, this might seem like a trivial problem,
but it is a basic example of a type of operation that is fundamental to
machine learning tasks.
Representation of Data in Scikit-learn
Machine learning is about creating models from data: for that reason, we'll start by
discussing how data can be represented in order to be understood by the computer. Along
with this, we'll build on our matplotlib examples from the previous section and show some
examples of how to visualize data.
Most machine learning algorithms implemented in scikit-learn expect data to be stored in a
two-dimensional array or matrix. The arrays can be
either numpy arrays, or in some cases scipy.sparse matrices.
The size of the array is expected to be [n_samples, n_features]
n_samples: The number of samples: each sample is an item to process (e.g. classify).
A sample can be a document, a picture, a sound, a video, an astronomical object,
a row in database or CSV file,
or whatever you can describe with a fixed set of quantitative traits.
n_features: The number of features or distinct traits that can be used to describe each
item in a quantitative manner. Features are generally real-valued, but may be boolean or
discrete-valued in some cases.
The number of features must be fixed in advance. However it can be very high dimensional
(e.g. millions of features) with most of them being zeros for a given sample. This is a case
where scipy.sparse matrices can be useful, in that they are
much more memory-efficient than numpy arrays.
(Figure from the Python Data Science Handbook)
A Simple Example: the Iris Dataset
As an example of a simple dataset, we're going to take a look at the
iris data stored by scikit-learn.
The data consists of measurements of three different species of irises.
There are three species of iris in the dataset, which we can picture here:
End of explanation
from sklearn.datasets import load_iris
iris = load_iris()
iris.keys()
n_samples, n_features = iris.data.shape
print((n_samples, n_features))
print(iris.data[10])
print(iris.data.shape)
print(iris.target.shape)
print(iris.target)
print(iris.target_names)
Explanation: Quick Question:
If we want to design an algorithm to recognize iris species, what might the data be?
Remember: we need a 2D array of size [n_samples x n_features].
What would the n_samples refer to?
What might the n_features refer to?
Remember that there must be a fixed number of features for each sample, and feature
number i must be a similar kind of quantity for each sample.
Loading the Iris Data with Scikit-Learn
Scikit-learn has a very straightforward set of data on these iris species. The data consist of
the following:
Features in the Iris dataset:
sepal length in cm
sepal width in cm
petal length in cm
petal width in cm
Target classes to predict:
Iris Setosa
Iris Versicolour
Iris Virginica
scikit-learn embeds a copy of the iris CSV file along with a helper function to load it into numpy arrays:
End of explanation
import numpy as np
import matplotlib.pyplot as plt
x_index = 2
y_index = 1
# this formatter will label the colorbar with the correct target names
formatter = plt.FuncFormatter(lambda i, *args: iris.target_names[int(i)])
plt.scatter(iris.data[:, x_index], iris.data[:, y_index],
c=iris.target, cmap=plt.cm.get_cmap('RdYlBu', 3))
plt.colorbar(ticks=[0, 1, 2], format=formatter)
plt.clim(-0.5, 2.5)
plt.xlabel(iris.feature_names[x_index])
plt.ylabel(iris.feature_names[y_index]);
Explanation: This data is four dimensional, but we can visualize two of the dimensions
at a time using a simple scatter-plot:
End of explanation
from sklearn import datasets
# Type datasets.fetch_<TAB> or datasets.load_<TAB> in IPython to see all possibilities
# datasets.fetch_
# datasets.load_
Explanation: Quick Exercise:
Change x_index and y_index in the above script
and find a combination of two parameters
which maximally separate the three classes.
This exercise is a preview of dimensionality reduction, which we'll see later.
Other Available Data
They come in three flavors:
Packaged Data: these small datasets are packaged with the scikit-learn installation,
and can be loaded using the tools in sklearn.datasets.load_*
Downloadable Data: these larger datasets are available for download, and scikit-learn
includes tools which streamline this process. These tools can be found in
sklearn.datasets.fetch_*
Generated Data: there are several datasets which are generated from models based on a
random seed. These are available in the sklearn.datasets.make_*
You can explore the available dataset loaders, fetchers, and generators using IPython's
tab-completion functionality. After importing the datasets submodule from sklearn,
type
datasets.load_ + TAB
or
datasets.fetch_ + TAB
or
datasets.make_ + TAB
to see a list of available functions.
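For example (a small added illustration; any of the packaged loaders behaves the same way), the bundled digits dataset can be loaded directly:
digits = datasets.load_digits()
print(digits.data.shape)   # (1797, 64): n_samples x n_features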
End of explanation |
15,119 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Orbital Elements
We can add particles to a simulation by specifying cartesian components
Step1: Any components not passed automatically default to 0. REBOUND can also accept orbital elements.
Reference bodies
As a reminder, there is a one-to-one mapping between (x,y,z,vx,vy,vz) and orbital elements, and one should always specify what the orbital elements are referenced against (e.g., the central star, the system's barycenter, etc.). The differences betwen orbital elements referenced to these centers differ by $\sim$ the mass ratio of the largest body to the central mass. By default, REBOUND always uses Jacobi elements, which for each particle are always referenced to the center of mass of all particles with lower index in the simulation. This is a useful set for theoretical calculations, and gives a logical behavior as the mass ratio increase, e.g., in the case of a circumbinary planet. Let's set up a binary,
Step2: We always have to pass a semimajor axis (to set a length scale), but any other elements are by default set to 0. Notice that our second star has the same vz as the first one due to the default Jacobi elements. Now we could add a distant planet on a circular orbit,
Step3: This planet is set up relative to the binary center of mass (again due to the Jacobi coordinates), which is probably what we want. But imagine we now want to place a test mass in a tight orbit around the second star. If we passed things as above, the orbital elements would be referenced to the binary/outer-planet center of mass. We can override the default by explicitly passing a primary (any instance of the Particle class)
Step4: All simulations are performed in Cartesian elements, so to avoid the overhead, REBOUND does not update particles' orbital elements as the simulation progresses. However, we can always calculate them when required with sim.calculate_orbits(). Note that REBOUND will always output angles in the range $[-\pi,\pi]$, except the inclination which is always in $[0,\pi]$.
Step5: Notice that there is always one less orbit than there are particles, since orbits are only defined between pairs of particles. We see that we got the first two orbits right, but the last one is way off. The reason is that again the REBOUND default is that we always get Jacobi elements. But we initialized the last particle relative to the second star, rather than the center of mass of all the previous particles.
To get orbital elements relative to a specific body, you can manually use the calculate_orbit method of the Particle class
Step6: though we could have simply avoided this problem by adding bodies from the inside out (second star, test mass, first star, circumbinary planet).
Edge cases and orbital element sets
Different orbital elements lose meaning in various limits, e.g., a planar orbit and a circular orbit. REBOUND therefore allows initialization with several different types of variables that are appropriate in different cases. It's important to keep in mind that the procedure to initialize particles from orbital elements is not exactly invertible, so one can expect discrepant results for elements that become ill defined. For example,
Step7: The problem here is that $\omega$ (the angle from the ascending node to pericenter) is ill-defined for a circular orbit, so it's not clear what we mean when we pass it, and we get spurious results (i.e., $\omega = 0$ rather than 0.1, and $f=0.1$ rather than the default 0). Similarly, $f$, the angle from pericenter to the particle's position, is undefined. However, the true longitude $\theta$, the broken angle from the $x$ axis to the ascending node = $\Omega + \omega + f$, and then to the particle's position, is always well defined
Step8: To be clearer and ensure we get the results we expect, we could instead pass theta to specify the longitude we want, e.g.
Step9: Here we have a planar orbit, in which case the line of nodes becomes ill defined, so $\Omega$ is not a good variable, but we pass it anyway! In this case, $\omega$ is also undefined since it is referenced to the ascending node. Here we get that now these two ill-defined variables get flipped. The appropriate variable is pomega ($\varpi = \Omega + \omega$), which is the angle from the $x$ axis to pericenter
Step10: We can specify the pericenter of the orbit with either $\omega$ or $\varpi$
Step11: Note that if the inclination is exactly zero, REBOUND sets $\Omega$ (which is undefined) to 0, so $\omega = \varpi$.
Finally, we can initialize particles using mean, rather than true, longitudes or anomalies (for example, this might be useful for resonances). We can either use the mean anomaly $M$, which is referenced to pericenter (again ill-defined for circular orbits), or its better-defined counterpart the mean longitude l $= \lambda = \Omega + \omega + M$, which is analogous to $\theta$ above,
Step12: Accuracy
As a test of accuracy and demonstration of issues related to the last section, let's test the numerical stability by intializing particles with small eccentricities and true anomalies, computing their orbital elements back, and comparing the relative error. We choose the inclination and node longitude randomly
Step13: We see that the behavior is poor, which is physically due to $f$ becoming poorly defined at low $e$. If instead we initialize the orbits with the true longitude $\theta$ as discussed above, we get much better results
Step14: Hyperbolic & Parabolic Orbits
REBOUND can also handle hyperbolic orbits, which have negative $a$ and $e>1$
Step15: Currently there is no support for exactly parabolic orbits, but we can get a close approximation by passing a nearby hyperbolic orbit where we can specify the pericenter = $|a|(e-1)$ with $a$ and $e$. For example, for a 0.1 AU pericenter,
Step16: Retrograde Orbits
Orbital elements can be counterintuitive for retrograde orbits, but REBOUND tries to sort them out consistently. This can lead to some initially surprising results. For example, | Python Code:
import rebound
sim = rebound.Simulation()
sim.add(m=1., x=1., vz = 2.)
Explanation: Orbital Elements
We can add particles to a simulation by specifying cartesian components:
End of explanation
sim.add(m=1., a=1.)
sim.status()
Explanation: Any components not passed automatically default to 0. REBOUND can also accept orbital elements.
Reference bodies
As a reminder, there is a one-to-one mapping between (x,y,z,vx,vy,vz) and orbital elements, and one should always specify what the orbital elements are referenced against (e.g., the central star, the system's barycenter, etc.). Orbital elements referenced to these different centers differ by $\sim$ the mass ratio of the largest body to the central mass. By default, REBOUND always uses Jacobi elements, which for each particle are always referenced to the center of mass of all particles with lower index in the simulation. This is a useful set for theoretical calculations, and gives a logical behavior as the mass ratio increases, e.g., in the case of a circumbinary planet. Let's set up a binary,
End of explanation
sim.add(m=1.e-3, a=100.)
Explanation: We always have to pass a semimajor axis (to set a length scale), but any other elements are by default set to 0. Notice that our second star has the same vz as the first one due to the default Jacobi elements. Now we could add a distant planet on a circular orbit,
End of explanation
sim.add(primary=sim.particles[1], a=0.01)
Explanation: This planet is set up relative to the binary center of mass (again due to the Jacobi coordinates), which is probably what we want. But imagine we now want to place a test mass in a tight orbit around the second star. If we passed things as above, the orbital elements would be referenced to the binary/outer-planet center of mass. We can override the default by explicitly passing a primary (any instance of the Particle class):
End of explanation
orbits = sim.calculate_orbits()
for orbit in orbits:
print(orbit)
Explanation: All simulations are performed in Cartesian elements, so to avoid the overhead, REBOUND does not update particles' orbital elements as the simulation progresses. However, we can always calculate them when required with sim.calculate_orbits(). Note that REBOUND will always output angles in the range $[-\pi,\pi]$, except the inclination which is always in $[0,\pi]$.
End of explanation
print(sim.particles[3].calculate_orbit(primary=sim.particles[1]))
Explanation: Notice that there is always one less orbit than there are particles, since orbits are only defined between pairs of particles. We see that we got the first two orbits right, but the last one is way off. The reason is that again the REBOUND default is that we always get Jacobi elements. But we initialized the last particle relative to the second star, rather than the center of mass of all the previous particles.
To get orbital elements relative to a specific body, you can manually use the calculate_orbit method of the Particle class:
End of explanation
sim = rebound.Simulation()
sim.add(m=1.)
sim.add(a=1., e=0., inc=0.1, Omega=0.3, omega=0.1)
print(sim.particles[1].orbit)
Explanation: though we could have simply avoided this problem by adding bodies from the inside out (second star, test mass, first star, circumbinary planet).
Edge cases and orbital element sets
Different orbital elements lose meaning in various limits, e.g., a planar orbit and a circular orbit. REBOUND therefore allows initialization with several different types of variables that are appropriate in different cases. It's important to keep in mind that the procedure to initialize particles from orbital elements is not exactly invertible, so one can expect discrepant results for elements that become ill defined. For example,
End of explanation
print(sim.particles[1].theta)
Explanation: The problem here is that $\omega$ (the angle from the ascending node to pericenter) is ill-defined for a circular orbit, so it's not clear what we mean when we pass it, and we get spurious results (i.e., $\omega = 0$ rather than 0.1, and $f=0.1$ rather than the default 0). Similarly, $f$, the angle from pericenter to the particle's position, is undefined. However, the true longitude $\theta$, the broken angle from the $x$ axis to the ascending node = $\Omega + \omega + f$, and then to the particle's position, is always well defined:
End of explanation
sim = rebound.Simulation()
sim.add(m=1.)
sim.add(a=1., e=0., inc=0.1, Omega=0.3, theta = 0.4)
print(sim.particles[1].theta)
import rebound
sim = rebound.Simulation()
sim.add(m=1.)
sim.add(a=1., e=0.2, Omega=0.1)
print(sim.particles[1].orbit)
Explanation: To be clearer and ensure we get the results we expect, we could instead pass theta to specify the longitude we want, e.g.
End of explanation
print(sim.particles[1].pomega)
Explanation: Here we have a planar orbit, in which case the line of nodes becomes ill defined, so $\Omega$ is not a good variable, but we pass it anyway! In this case, $\omega$ is also undefined since it is referenced to the ascending node. Here we get that now these two ill-defined variables get flipped. The appropriate variable is pomega ($\varpi = \Omega + \omega$), which is the angle from the $x$ axis to pericenter:
End of explanation
import rebound
sim = rebound.Simulation()
sim.add(m=1.)
sim.add(a=1., e=0.2, pomega=0.1)
print(sim.particles[1].orbit)
Explanation: We can specify the pericenter of the orbit with either $\omega$ or $\varpi$:
End of explanation
sim = rebound.Simulation()
sim.add(m=1.)
sim.add(a=1., e=0.1, Omega=0.3, M = 0.1)
sim.add(a=1., e=0.1, Omega=0.3, l = 0.4)
print(sim.particles[1].l)
print(sim.particles[2].l)
import rebound
sim = rebound.Simulation()
sim.add(m=1.)
sim.add(a=1., e=0.1, omega=1.)
print(sim.particles[1].orbit)
Explanation: Note that if the inclination is exactly zero, REBOUND sets $\Omega$ (which is undefined) to 0, so $\omega = \varpi$.
Finally, we can initialize particles using mean, rather than true, longitudes or anomalies (for example, this might be useful for resonances). We can either use the mean anomaly $M$, which is referenced to pericenter (again ill-defined for circular orbits), or its better-defined counterpart the mean longitude l $= \lambda = \Omega + \omega + M$, which is analogous to $\theta$ above,
End of explanation
import random
import numpy as np
def simulation(par):
e,f = par
e = 10**e
f = 10**f
sim = rebound.Simulation()
sim.add(m=1.)
a = 1.
inc = random.random()*np.pi
Omega = random.random()*2*np.pi
sim.add(m=0.,a=a,e=e,inc=inc,Omega=Omega, f=f)
o=sim.particles[1].orbit
if o.f < 0: # avoid wrapping issues
o.f += 2*np.pi
err = max(np.fabs(o.e-e)/e, np.fabs(o.f-f)/f)
return err
random.seed(1)
N = 100
es = np.linspace(-16.,-1.,N)
fs = np.linspace(-16.,-1.,N)
params = [(e,f) for e in es for f in fs]
pool=rebound.InterruptiblePool()
res = pool.map(simulation, params)
res = np.array(res).reshape(N,N)
res = np.nan_to_num(res)
%matplotlib inline
import matplotlib.pyplot as plt
from matplotlib import ticker
from matplotlib.colors import LogNorm
import matplotlib
f,ax = plt.subplots(1,1,figsize=(7,5))
extent=[fs.min(), fs.max(), es.min(), es.max()]
ax.set_xlim(extent[0], extent[1])
ax.set_ylim(extent[2], extent[3])
ax.set_xlabel(r"true anomaly (f)")
ax.set_ylabel(r"eccentricity")
im = ax.imshow(res, norm=LogNorm(), vmax=1., vmin=1.e-16, aspect='auto', origin="lower", interpolation='nearest', cmap="RdYlGn_r", extent=extent)
cb = plt.colorbar(im, ax=ax)
cb.solids.set_rasterized(True)
cb.set_label("Relative Error")
Explanation: Accuracy
As a test of accuracy and demonstration of issues related to the last section, let's test the numerical stability by initializing particles with small eccentricities and true anomalies, computing their orbital elements back, and comparing the relative error. We choose the inclination and node longitude randomly:
End of explanation
def simulation(par):
e,theta = par
e = 10**e
theta = 10**theta
sim = rebound.Simulation()
sim.add(m=1.)
a = 1.
inc = random.random()*np.pi
Omega = random.random()*2*np.pi
omega = random.random()*2*np.pi
sim.add(m=0.,a=a,e=e,inc=inc,Omega=Omega, theta=theta)
o=sim.particles[1].orbit
if o.theta < 0:
o.theta += 2*np.pi
err = max(np.fabs(o.e-e)/e, np.fabs(o.theta-theta)/theta)
return err
random.seed(1)
N = 100
es = np.linspace(-16.,-1.,N)
thetas = np.linspace(-16.,-1.,N)
params = [(e,theta) for e in es for theta in thetas]
pool=rebound.InterruptiblePool()
res = pool.map(simulation, params)
res = np.array(res).reshape(N,N)
res = np.nan_to_num(res)
f,ax = plt.subplots(1,1,figsize=(7,5))
extent=[thetas.min(), thetas.max(), es.min(), es.max()]
ax.set_xlim(extent[0], extent[1])
ax.set_ylim(extent[2], extent[3])
ax.set_xlabel(r"true longitude (\theta)")
ax.set_ylabel(r"eccentricity")
im = ax.imshow(res, norm=LogNorm(), vmax=1., vmin=1.e-16, aspect='auto', origin="lower", interpolation='nearest', cmap="RdYlGn_r", extent=extent)
cb = plt.colorbar(im, ax=ax)
cb.solids.set_rasterized(True)
cb.set_label("Relative Error")
Explanation: We see that the behavior is poor, which is physically due to $f$ becoming poorly defined at low $e$. If instead we initialize the orbits with the true longitude $\theta$ as discussed above, we get much better results:
End of explanation
sim.add(a=-0.2, e=1.4)
sim.status()
Explanation: Hyperbolic & Parabolic Orbits
REBOUND can also handle hyperbolic orbits, which have negative $a$ and $e>1$:
End of explanation
sim = rebound.Simulation()
sim.add(m=1.)
q = 0.1
a=-1.e14
e=1.+q/np.fabs(a)
sim.add(a=a, e=e)
print(sim.particles[1].orbit)
Explanation: Currently there is no support for exactly parabolic orbits, but we can get a close approximation by passing a nearby hyperbolic orbit where we can specify the pericenter = $|a|(e-1)$ with $a$ and $e$. For example, for a 0.1 AU pericenter,
End of explanation
sim = rebound.Simulation()
sim.add(m=1.)
sim.add(a=1.,inc=np.pi,e=0.1, Omega=0., pomega=1.)
print(sim.particles[1].orbit)
Explanation: Retrograde Orbits
Orbital elements can be counterintuitive for retrograde orbits, but REBOUND tries to sort them out consistently. This can lead to some initially surprising results. For example,
End of explanation |
15,120 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
DataFrame
A DataFrame is a Dataset organized into named columns. It is conceptually equivalent to a table in a relational database or a data frame in R/Python, but with richer optimizations under the hood. DataFrames can be constructed from a wide array of sources such as
Step1: Once we have our RDD of Row we can infer and get a schema. We can operate on this schema with SQL queries.
Step2: We can use other dataframes for filtering our data efficiently. | Python Code:
from pyspark.sql import SQLContext
sqlContext = SQLContext(sc)
from pyspark.sql import Row
csv_data = raw.map(lambda l: l.split(","))
row_data = csv_data.map(lambda p: Row(
duration=int(p[0]),
protocol_type=p[1],
service=p[2],
flag=p[3],
src_bytes=int(p[4]),
dst_bytes=int(p[5])
)
)
Explanation: DataFrame
A DataFrame is a Dataset organized into named columns. It is conceptually equivalent to a table in a relational database or a data frame in R/Python, but with richer optimizations under the hood. DataFrames can be constructed from a wide array of sources such as: structured data files, tables in Hive, external databases, or existing RDDs
We want to convert our raw data into a table. But first we have to parse it and assign desired rows and headers, something like csv format.
End of explanation
kdd_df = sqlContext.createDataFrame(row_data)
kdd_df.registerTempTable("KDDdata")
# Select tcp network interactions with more than 2 second duration and no transfer from destination
tcp_interactions = sqlContext.sql("SELECT duration, dst_bytes FROM KDDdata WHERE protocol_type = 'tcp' AND duration > 2000 AND dst_bytes = 0")
tcp_interactions.show(10)
# Complete the query to filter data with duration > 2000, dst_bytes = 0.
# Then group the filtered elements by protocol_type and show the total count in each group.
# Refer - https://spark.apache.org/docs/latest/sql-programming-guide.html#dataframegroupby-retains-grouping-columns
kdd_df.select("protocol_type", "duration", "dst_bytes").filter(kdd_df.duration>2000)#.more query...
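# One possible completion of the exercise above (a sketch, not the only valid answer):
# kdd_df.select("protocol_type", "duration", "dst_bytes") \
#       .filter(kdd_df.duration > 2000).filter(kdd_df.dst_bytes == 0) \
#       .groupBy("protocol_type").count().show()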
def transform_label(label):
'''
Create a function to parse input label
such that if input label is not normal
then it is an attack
'''
row_labeled_data = csv_data.map(lambda p: Row(
duration=int(p[0]),
protocol_type=p[1],
service=p[2],
flag=p[3],
src_bytes=int(p[4]),
dst_bytes=int(p[5]),
label=transform_label(p[41])
)
)
kdd_labeled = sqlContext.createDataFrame(row_labeled_data)
'''
Write a query to select label,
group it and then count total elements
in that group
'''
# query
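# A possible sketch for the two exercises above (assuming any label other than
# 'normal.' in the raw KDD data counts as an attack):
# def transform_label(label):
#     return 'normal' if label == 'normal.' else 'attack'
# kdd_labeled.select("label").groupBy("label").count().show()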
Explanation: Once we have our RDD of Row we can infer and get a schema. We can operate on this schema with SQL queries.
End of explanation
kdd_labeled.select("label", "protocol_type", "dst_bytes").groupBy("label", "protocol_type", kdd_labeled.dst_bytes==0).count().show()
Explanation: We can use other dataframes for filtering our data efficiently.
End of explanation |
15,121 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Iterative Deepening
The function search takes three arguments to solve a search problem
Step1: The function depth_limited_search tries to find a solution to the search problem
$$ \langle Q, \texttt{next_states}, \texttt{start}, \texttt{goal} \rangle $$
that has a length of at most limit. The algorithm used is depth first search.
Step2: Solving the Sliding Puzzle | Python Code:
def search(start, goal, next_states):
limit = 36
while True:
Path = depth_limited_search(start, goal, next_states, [start], { start }, limit)
if Path is not None:
return Path
limit += 1
print(f'limit = {limit}')
Explanation: Iterative Deepening
The function search takes three arguments to solve a search problem:
- start is the start state of the search problem,
- goal is the goal state, and
- next_states is a function with signature $\texttt{next_states}:Q \rightarrow 2^Q$, where $Q$ is the set of states.
For every state $s \in Q$, $\texttt{next_states}(s)$ is the set of states that can be reached from $s$ in one step.
If successful, search returns a path from start to goal that is a solution of the search problem
$$ \langle Q, \texttt{next_states}, \texttt{start}, \texttt{goal} \rangle. $$
The procedure search tries to find a solution to the search problem by first trying to find a solution that has a length of $1$, then of length $2$, then of length $3$, etc.
The search only stops when a solution is found.
End of explanation
def depth_limited_search(state, goal, next_states, Path, PathSet, limit):
if state == goal:
return Path
if len(Path) == limit:
return None
for ns in next_states(state):
if ns not in PathSet:
Path .append(ns)
PathSet.add(ns)
Result = depth_limited_search(ns, goal, next_states, Path, PathSet, limit)
if Result is not None:
return Result
Path .pop()
PathSet.remove(ns)
return None
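# A small self-contained sanity check on a toy graph (an added illustration;
# the sliding-puzzle start, goal and next_states are loaded further below):
# toy_edges = { 'A': {'B', 'C'}, 'B': {'D'}, 'C': {'D'}, 'D': {'E'}, 'E': set() }
# search('A', 'E', lambda s: toy_edges[s])   # e.g. ['A', 'B', 'D', 'E']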
Explanation: The function depth_limited_search tries to find a solution to the search problem
$$ \langle Q, \texttt{next_states}, \texttt{start}, \texttt{goal} \rangle $$
that has a length of at most limit. The algorithm used is depth first search.
End of explanation
%run Sliding-Puzzle.ipynb
%load_ext memory_profiler
%%time
Path=search(start, goal, next_states)
len(Path)
animation(Path)
Explanation: Solving the Sliding Puzzle
End of explanation |
15,122 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Sensors
Hi ha quatre sensors diferents montats i connectats al robot
Step1: Sensor de tacte
รs un polsador, que segons estiga polsat o no, donarร un valor vertader (True) o fals (False). Per a comprovar-ho, proveu a executar vร ries vegades la funciรณ segรผent, amb el sensor activat o sense activar-lo.
Step2: Sensor de llum
Estร format per un transistor que emet llum, i un diode que detecta la llum reflectida per la superfรญcie. Dรณna un valor numรจric, mรฉs alt com mรฉs quantitat de llum, รฉs a dir, valors baixos (prรฒxims a 0) per a les superfรญcies fosques, i valor alts (prรฒxims a 100) per a les clares.
Step3: Sensor de so (micrรฒfon)
Permet mesurar el so ambient en decibelis, tornant un valor en percentatge, mรฉs alt com mรฉs fort รฉs el so. Per exemple
Step4: Sensor ultrasรฒnic
Aquest sensor funciona emetent ultrasons, i medint el temps que tarda l'eco del senyal en tornar al sensor. D'eixa manera permet calcular la distร ncia (en cm) a un obstacle que estiga al davant. รs el mateix principi que usen els ratpenats.
Step5: <img src="img/interrupt.png" align="right">
Comprovaciรณ dels sensors
Per a finalitzar, la segรผent funciรณ mostra repetidament en pantalla els valors de tots els sensors, per a comprovar fร cilment el funcionament correcte de tots ells. Per a parar l'execuciรณ, has de prรฉmer el botรณ interrupt kernel del panell de dalt.
Step6: รs el moment de fer nous programes amb els sensors, perรฒ abans cal desconnectar el robot d'esta pร gina. | Python Code:
from functions import connect, touch, light, sound, ultrasonic, disconnect, next_notebook
connect()
Explanation: Sensors
There are four different sensors mounted on and connected to the robot:
<img src="img/sensors.jpg" width=400>
The ones in the figure correspond to the NXT model, but the EV3 ones are equivalent.
Let's check how each of them works.
First, we need a few functions and, as always, to connect to the robot.
End of explanation
touch() # To run this repeatedly, use Ctrl + Enter
Explanation: Touch sensor
It is a push button that returns true (True) or false (False) depending on whether it is pressed or not. To check it, try running the following function several times, with the sensor pressed and without pressing it.
End of explanation
light() # To run this repeatedly, use Ctrl + Enter
Explanation: Light sensor
It is made up of a transistor that emits light and a diode that detects the light reflected by the surface. It gives a numeric value that is higher the more light there is, that is, low values (close to 0) for dark surfaces and high values (close to 100) for light ones.
End of explanation
sound() # To run this repeatedly, use Ctrl + Enter
Explanation: Sound sensor (microphone)
It measures the ambient sound in decibels, returning a value as a percentage that is higher the louder the sound. For example:
4-5% is like a living room in silence
5-10% would be someone talking at a distance
10-30% is a normal conversation close to the sensor or music played at a normal level
30-100% is people shouting or music being played at a high volume
End of explanation
ultrasonic() # To run this repeatedly, use Ctrl + Enter
Explanation: Ultrasonic sensor
This sensor works by emitting ultrasound and measuring the time it takes for the echo of the signal to return to the sensor. In this way it can calculate the distance (in cm) to an obstacle in front of it. It is the same principle that bats use.
End of explanation
from functions import test_sensors
test_sensors()
Explanation: <img src="img/interrupt.png" align="right">
Checking the sensors
To finish, the following function repeatedly displays the values of all the sensors on screen, so you can easily check that they all work correctly. To stop the execution, you have to press the interrupt kernel button in the panel above.
End of explanation
disconnect()
next_notebook('touch')
Explanation: It is time to write new programs with the sensors, but before that we need to disconnect the robot from this page.
End of explanation |
15,123 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2018 Google LLC.
Step1: Case Study
Step2: Understanding the Data Format
Each row represents one labeled example. Column 0 represents the label that a human rater has assigned for one handwritten digit. For example, if Column 0 contains '6', then a human rater interpreted the handwritten character as the digit '6'. The ten digits 0-9 are each represented, with a unique class label for each possible digit. Thus, this is a multi-class classification problem with 10 classes.
Columns 1 through 784 contain the feature values, one per pixel for the 28ร28=784 pixel values. The pixel values are on a gray scale in which 0 represents white, 255 represents black, and values between 0 and 255 represent shades of gray. Most of the pixel values are 0; you may want to take a minute to confirm that they aren't all 0. Modify the form below and run the code to view data for a given example.
Step3: Do you have Imbalanced Classes?
As we read in the course, imbalanced classes make classification harder. Let's look at the distribution of classes. Do you have imbalanced classes?
Step4: The preceding graph shows that the 10 classes are roughly equally represented.
Shuffle and Split Dataset
As part of Data Debugging best practices, ensure your splits are statistically equivalent by shuffling your data to remove any pre-existing order.
Step5: Process Data
Scale the data values to [0,1] since the values are bounded to [0,255] and do not contain outliers. Then check that the scaled data values are as expected by generating summary statistics using the DataFrame.describe() function.
Run the following cell to scale data and generate statistics. This cell takes a few minutes to run.
Step6: Oh no! Some of your features are all NaN. What do you think the cause is? Hint
Step7: Solution
Start exploring your data by generating a high-level summary using Dataframe.describe().
Step8: Because some of the feature columns are all zeros, the scaling function divided by 0 (because np.max returns 0). The division by 0 resulted in NaN values. This result shows you how easily NaNs can arise in engineered data. The describe function will not detect every occurrence of NaN (or None). Instead, use the command DataFrame.isnull().any().
Note
Step9: You should follow best practice and prevent this bug from recurring by writing a unit test to check for not having NaN values in your engineered data.
Remove All-Zero Features?
You might think that getting NaNs and discovering that some features were all-zero is good luck because those features can be discarded. However, your training data and validation data might have different all-zero features. Since you should not use validation data to make modeling decisions, you cannot remove only those features that are all-zero in both. Furthermore, data in the future might have different characteristics. There are pros and cons in either case. This Colab keeps the features since reducing the feature set is not a concern.
Establish Baseline
Following development best practices, you should establish a baseline. The simplest baseline is predicting the most common class. You saw that the most common class is 1. Let's check the accuracy when always predicting 1.
Step11: Your baseline accuracy is about 11%. Should be easy to beat, right?
Train a Linear Model
Let's start nice and easy with a linear model. All we need is an accuracy > 11%.
First, let's define a function to plot our loss and accuracy curves. The function will also print the final loss and accuracy. Instead of using verbose=1, you can call the function.
Step12: Now train a linear model with an output layer and a hidden layer.
Step13: Wow, that accuracy is terrible! What could the cause be?
Hint
Step14: Your loss curves are much better. Your accuracy has improved too. You're on the right track.
Train a Nonlinear Model
Switch to a nonlinear model by modifying the code below to use relu activation functions instead of linear activation functions. Run the code. What do you observe?
Step15: The quality of the nonlinear model is significantly better than of the linear model. Progress! Move onto the next section.
Adding a Second Layer
Increasing the model's capacity significantly improved your results. Perhaps you can continue this strategy by adding a second relu layer. Run the following code cell to train the model with another relu layer.
Step16: Guess what. Your previous model had training and validation accuracies of 100% and 95%. You can't do much better than that! So your new accuracy is about the same. How high can you push your accuracy? With this configuration the highest training and validation accuracies appear to be 100% and 96% respectively. Since the neural net returns similar accuracy with 1 or 2 layers, let's use the simpler model with 1 layer.
Does your model begin to overfit the training data if you train for long enough? (Your model starts overfitting training data at the point when your validation loss starts increasing.)
Check for Training/Validation Data Skew
Our validation accuracy is a little worse than our training accuracy. While this result is always expected, you should check for typical errors. The commonest cause is having different distributions of data and labels in training and validation. Confirm that the distribution of classes in training and validation data is similar.
Step17: Apply Dropout Regularization
Dropout regularization is a common regularization method that removes a random selection of a fixed number of units in a network layer for a single gradient step. Typically, dropout will improve generalization at a dropout rate of between 10% and 50% of neurons.
Try to reduce the divergence between training and validation loss by using dropout regularization with values between 0.1 and 0.5. Dropout does not improve the results in this case. However, at a dropout of 0.5, the difference in loss decreases, though both training and validation loss decrease in absolute terms.
Step18: Sample results using dropout regularization after 30 epochs
Step19: Testing for Anomalous Values
In the section Train a Linear Model, you debugged an incorrect calculation of loss. Before running your model, if you wrote a test to validate the output values, your test would detect the anomalous output. For example, you could test whether the distribution of predicted labels on the training dataset is similar to the actual distribution of training labels. A simple statistical implementation of this concept is to compare the standard deviation and mean of the predicted and actual labels.
First, check the standard deviation and mean of the actual labels.
Step20: Write tests to check if the mean and standard deviation of the predicted labels falls within the expected range. The expected range defined in the tests below is somewhat arbitrary. In practice, you will tune the range thresholds to accommodate natural variation in predictions.
Step21: Run the following cell to train a model with the wrong loss calculation and execute the tests. The tests should fail.
Step22: Since the tests fail, check the data distribution of predicted labels for anomalies. | Python Code:
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2018 Google LLC.
End of explanation
# Reset environment for a new run
% reset -f
# Load Libraries
from os.path import join # for joining file pathnames
import pandas as pd
import tensorflow as tf
from tensorflow import keras
import numpy as np
import matplotlib.pyplot as plt
import unittest
import sys
# Set Pandas display options
pd.options.display.max_rows = 10
pd.options.display.float_format = '{:.1f}'.format
# Load data
mnistDf_backup = pd.read_csv(
"https://download.mlcc.google.com/mledu-datasets/mnist_train_small.csv",
sep=",",
header=None)
# Shuffle data
mnistDf_backup.sample(frac=1).reset_index(drop=True)
# Use the first 5000 examples for faster prototyping
mnistDf = mnistDf_backup[0:5000]
mnistDf.head()
Explanation: Case Study: Debugging in Classification
This Colab quickly demonstrates a few concepts related to debugging classification models. You will explore potential problems in implementing these tasks:
Calculating loss for classification problems.
Optimizing your model
Applying regularization.
Following best practices in development and debugging.
Please make a copy of this Colab before running it. Click on File, and then click on Save a copy in Drive.
Load MNIST Data
MNIST is a dataset of images of the numbers 0 to 9. The problem is to classify the images as numbers. Setup libraries and load the MNIST dataset. Display the first few rows to verify that the data loaded. You'll explore the data format after the data loads.
End of explanation
showExample = 1000 # @param
digitData = np.reshape(mnistDf.iloc[showExample,0:-1],[28,28])
print(digitData)
Explanation: Understanding the Data Format
Each row represents one labeled example. Column 0 represents the label that a human rater has assigned for one handwritten digit. For example, if Column 0 contains '6', then a human rater interpreted the handwritten character as the digit '6'. The ten digits 0-9 are each represented, with a unique class label for each possible digit. Thus, this is a multi-class classification problem with 10 classes.
Columns 1 through 784 contain the feature values, one per pixel for the 28×28=784 pixel values. The pixel values are on a gray scale in which 0 represents white, 255 represents black, and values between 0 and 255 represent shades of gray. Most of the pixel values are 0; you may want to take a minute to confirm that they aren't all 0. Modify the form below and run the code to view data for a given example.
End of explanation
%hide_result # hides result of cell computation
# Calculate the number of classes
numClasses = mnistDf.iloc[:,0].unique().shape[0]
# Plot histogram of class distribution
plt.hist(mnistDf.iloc[:,0], bins=range(numClasses+1))
plt.xticks(range(numClasses+1))
Explanation: Do you have Imbalanced Classes?
As we read in the course, imbalanced classes make classification harder. Let's look at the distribution of classes. Do you have imbalanced classes?
End of explanation
# Shuffle data
mnistDf = mnistDf.sample(frac=1).reset_index(drop=True)
# Split dataset into data and labels
mnistData = mnistDf.iloc[:,1:-1].copy(deep=True)
mnistLabels = mnistDf.iloc[:,0].copy(deep=True)
Explanation: The preceding graph shows that the 10 classes are roughly equally represented.
Shuffle and Split Dataset
As part of Data Debugging best practices, ensure your splits are statistically equivalent by shuffling your data to remove any pre-existing order.
End of explanation
def minMaxScaler(arr):
min = np.min(arr)
max = np.max(arr)
arr = (arr-min)/max
return arr
for featureIdx in range(mnistData.shape[1]):
mnistData.iloc[:,featureIdx] = minMaxScaler(mnistData.iloc[:,featureIdx])
mnistData.describe()
Explanation: Process Data
Scale the data values to [0,1] since the values are bounded to [0,255] and do not contain outliers. Then check that the scaled data values are as expected by generating summary statistics using the DataFrame.describe() function.
Run the following cell to scale data and generate statistics. This cell takes a few minutes to run.
End of explanation
# First reload your data
mnistData = mnistDf.iloc[:,1:-1].copy(deep=True)
# Explore your data
Explanation: Oh no! Some of your features are all NaN. What do you think the cause is? Hint: While NaNs have many causes, in this case, the NaN values are caused by the properties of your data. Use the next code cell to explore your data. Then check the next cell for the solution. Try to find the solution yourself. Debugging NaNs and exploring your data are important skills.
End of explanation
mnistData.describe()
Explanation: Solution
Start exploring your data by generating a high-level summary using Dataframe.describe().
End of explanation
# Redefine the scaling function to check for zeros
def minMaxScaler(arr):
max = np.max(arr)
if(max!=0): # avoid /0
min = np.min(arr)
arr = (arr-min)/max
return arr
# Reload data
mnistData = mnistDf.iloc[:,1:-1].copy(deep=True)
# Scale data
for featureIdx in range(mnistData.shape[1]):
mnistData.iloc[:,featureIdx] = minMaxScaler(mnistData.iloc[:,featureIdx])
Explanation: Because some of the feature columns are all zeros, the scaling function divided by 0 (because np.max returns 0). The division by 0 resulted in NaN values. This result shows you how easily NaNs can arise in engineered data. The describe function will not detect every occurrence of NaN (or None). Instead, use the command DataFrame.isnull().any().
Note: Given the maximum value of the feature data is 255, you could simply divide the input by 255 instead of using min-max scaling, and avoid introducing NaNs. However, this example purposely uses min-max scaling to show how NaNs can appear in engineered data.
Now let's try scaling the data again.
End of explanation
np.sum(mnistLabels==1)*1.0/mnistLabels.shape[0]*100
Explanation: You should follow best practice and prevent this bug from recurring by writing a unit test to check for not having NaN values in your engineered data.
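A minimal sketch of such a test, using the names from this notebook, could be as simple as:
assert not mnistData.isnull().values.any(), 'engineered features contain NaN values'
assert not mnistLabels.isnull().values.any(), 'labels contain NaN values'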
Remove All-Zero Features?
You might think that getting NaNs and discovering that some features were all-zero is good luck because those features can be discarded. However, your training data and validation data might have different all-zero features. Since you should not use validation data to make modeling decisions, you cannot remove only those features that are all-zero in both. Furthermore, data in the future might have different characteristics. There are pros and cons in either case. This Colab keeps the features since reducing the feature set is not a concern.
Establish Baseline
Following development best practices, you should establish a baseline. The simplest baseline is predicting the most common class. You saw that the most common class is 1. Let's check the accuracy when always predicting 1.
End of explanation
def showClassificationResults(trainHistory):
Function to:
* Print final loss & accuracy.
* Plot loss & accuracy curves.
Args:
trainHistory: object returned by model.fit
# Print final loss and accuracy
print("Final training loss: " + str(trainHistory.history['loss'][-1]))
print("Final validation loss: " + str(trainHistory.history['val_loss'][-1]))
print("Final training accuracy: " + str(trainHistory.history['acc'][-1]))
print("Final validation accuracy: " + str(trainHistory.history['val_acc'][-1]))
# Plot loss and accuracy curves
f = plt.figure(figsize=(10,4))
axLoss = f.add_subplot(121)
axAcc = f.add_subplot(122)
axLoss.plot(trainHistory.history['loss'])
axLoss.plot(trainHistory.history['val_loss'])
axLoss.legend(['Training loss', 'Validation loss'], loc='best')
axLoss.set_xlabel('Training epochs')
axLoss.set_ylabel('Loss')
axAcc.plot(trainHistory.history['acc'])
axAcc.plot(trainHistory.history['val_acc'])
axAcc.legend(['Training accuracy', 'Validation accuracy'], loc='best')
axAcc.set_xlabel('Training epochs')
axAcc.set_ylabel('Accuracy')
Explanation: Your baseline accuracy is about 11%. Should be easy to beat, right?
Train a Linear Model
Let's start nice and easy with a linear model. All we need is an accuracy > 11%.
First, let's define a function to plot our loss and accuracy curves. The function will also print the final loss and accuracy. Instead of using verbose=1, you can call the function.
End of explanation
model = None
# Define
model = keras.Sequential()
model.add(keras.layers.Dense(mnistData.shape[1],
activation='linear',
input_dim=mnistData.shape[1]))
model.add(keras.layers.Dense(1, activation='linear'))
# Compile
model.compile(optimizer="adam", loss='mse', metrics=['accuracy'])
# Train
trainHistory = model.fit(mnistData, mnistLabels, epochs=10, batch_size=100,
validation_split=0.2, verbose=0)
# Plot
showClassificationResults(trainHistory)
Explanation: Now train a linear model with an output layer and a hidden layer.
End of explanation
model = None
# Define
model = keras.Sequential()
model.add(keras.layers.Dense(mnistData.shape[1], activation='linear',
input_dim = mnistData.shape[1]))
model.add(keras.layers.Dense(10, activation='softmax'))
# Compile
model.compile(optimizer="adam",
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
# Train
trainHistory = model.fit(mnistData, mnistLabels, epochs=10, batch_size=100,
validation_split=0.1, verbose=0)
# Plot
showClassificationResults(trainHistory)
Explanation: Wow, that accuracy is terrible! What could the cause be?
Hint: You followed the same procedure as for the previous regression problem. Do you need an adaptation for a classification problem? Experiment with the code above or skip to the solution below.
Solution
In regression, the last layer uses a linear activation function. In classification, the last layer cannot use a linear transform. Instead, one option is a softmax transform. Furthermore, in regression, the loss is calculated using MSE while in classification, loss is calculated using crossentropy. Before running your model, if you wrote a test to validate the output values, your test would detect the anomalous output. You'll look at such a test later. Move onto the next section to fix the loss calculation.
Fixing Loss Calculation
Since your labels are integers instead of one-hot encodings, use sparse_categorical_crossentropy instead of categorical_crossentropy so that you avoid converting the integers to one-hot encoding.
Retrain the model with the new loss calculation by running the following cell. Look through the code to note the changes. What do you think of the result?
End of explanation
model = None
# Define
model = keras.Sequential()
model.add(keras.layers.Dense(mnistData.shape[1], activation='', # use 'relu'
input_dim=mnistData.shape[1]))
model.add(keras.layers.Dense(10, activation='softmax'))
# Compile
model.compile(optimizer="adam", loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
# Train
trainHistory = model.fit(mnistData, mnistLabels, epochs=20, batch_size=100,
validation_split=0.1, verbose=0)
# Plot
showClassificationResults(trainHistory)
Explanation: Your loss curves are much better. Your accuracy has improved too. You're on the right track.
Train a Nonlinear Model
Switch to a nonlinear model by modifying the code below to use relu activation functions instead of linear activation functions. Run the code. What do you observe?
End of explanation
model = None
# Define
model = keras.Sequential()
model.add(keras.layers.Dense(mnistData.shape[1], activation='relu',
input_dim = mnistData.shape[1]))
model.add(keras.layers.Dense(mnistData.shape[1], activation='relu'))
model.add(keras.layers.Dense(10,activation='softmax'))
# Compile
model.compile(optimizer="adam", loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
# Train
trainHistory = model.fit(mnistData, mnistLabels, epochs=10, batch_size=100,
validation_split=0.1, verbose=0)
# Plot
showClassificationResults(trainHistory)
Explanation: The quality of the nonlinear model is significantly better than of the linear model. Progress! Move onto the next section.
Adding a Second Layer
Increasing the model's capacity significantly improved your results. Perhaps you can continue this strategy by adding a second relu layer. Run the following code cell to train the model with another relu layer.
End of explanation
%hide_result # hides result of cell computation
f = plt.figure(figsize=(10,3))
ax = f.add_subplot(1,2,1)
plt.hist(mnistLabels[0:len(mnistLabels)*8/10], bins=range(numClasses+1))
plt.xticks(range(numClasses+1))
ax2 = f.add_subplot(1,2,2,)
plt.hist(mnistLabels[len(mnistLabels)*8/10:-1], bins=range(numClasses+1))
plt.xticks(range(numClasses+1))
Explanation: Guess what. Your previous model had training and validation accuracies of 100% and 95%. You can't do much better than that! So your new accuracy is about the same. How high can you push your accuracy? With this configuration the highest training and validation accuracies appear to be 100% and 96% respectively. Since the neural net returns similar accuracy with 1 or 2 layers, let's use the simpler model with 1 layer.
Does your model begin to overfit the training data if you train for long enough? (Your model starts overfitting training data at the point when your validation loss starts increasing.)
Check for Training/Validation Data Skew
Our validation accuracy is a little worse than our training accuracy. While this result is always expected, you should check for typical errors. The commonest cause is having different distributions of data and labels in training and validation. Confirm that the distribution of classes in training and validation data is similar.
End of explanation
from keras import regularizers
model = None
# Define lambda
dropoutLambda = 0.5 #@param
# Define model
model = keras.Sequential()
model.add(keras.layers.Dense(mnistData.shape[1],
input_dim=mnistData.shape[1],
activation='relu'))
model.add(keras.layers.Dropout(dropoutLambda,
noise_shape=(1, mnistData.shape[1])))
model.add(keras.layers.Dense(10, activation='softmax'))
# Compile
model.compile(optimizer = "adam",
loss = 'sparse_categorical_crossentropy',
metrics = ['accuracy'])
# Train
trainHistory = model.fit(mnistData,
mnistLabels,
epochs=30,
batch_size=500,
validation_split=0.1,
verbose=0)
# Plot
showClassificationResults(trainHistory)
Explanation: Apply Dropout Regularization
Dropout regularization is a common regularization method that removes a random selection of a fixed number of units in a network layer for a single gradient step. Typically, dropout will improve generalization at a dropout rate of between 10% and 50% of neurons.
Try to reduce the divergence between training and validation loss by using dropout regularization with values between 0.1 and 0.5. Dropout does not improve the results in this case. However, at a dropout of 0.5, the difference in loss decreases, though both training and validation loss decrease in absolute terms.
End of explanation
from sklearn.metrics import classification_report
mnistPred = model.predict_classes(x = mnistData)
print(classification_report(mnistLabels, mnistPred))
Explanation: Sample results using dropout regularization after 30 epochs:
Lambda | Training Loss | Validation Loss
------- | ------------------------------------------------------
0.1 | 0.99 | 0.95
0.2 | 0.99 | 0.95
0.3 | 0.99 | 0.95
0.5 | 0.97 | 0.94
Check Accuracy for Data Slices
For classification problems, you should always check the metrics by class to ensure your model predicts well across all classes. Check accuracy on the 10 classes by running the next cell by using the function sklearn.metrics.classification_report from the scikit-learn library. In the output, the rows with indices 0 to 9 correspond to the classes for the labels 0 to 9. The columns "Precision", "Recall", and "F1-Score" correspond to the respective classification metrics for each class. "Support" is the number of examples for the class in question. For example, for the label "4", when predicting on 464 examples labelled "4", the model has a precision of 0.98, a recall of 0.97, and a F1 score of 0.98.
The classification metrics are very uniform across all classes, which is perfect. In your classification problem, in case any metric is lower for a class, then you should investigate why the model has lower-quality predictions for that class.
End of explanation
print("Mean of actual labels: " + str(np.mean(mnistLabels)))
print("Standard deviation of actual labels: " + str(np.std(mnistLabels)))
Explanation: Testing for Anomalous Values
In the section Train a Linear Model, you debugged an incorrect calculation of loss. Before running your model, if you wrote a test to validate the output values, your test would detect the anomalous output. For example, you could test whether the distribution of predicted labels on the training dataset is similar to the actual distribution of training labels. A simple statistical implementation of this concept is to compare the standard deviation and mean of the predicted and actual labels.
First, check the standard deviation and mean of the actual labels.
End of explanation
class mlTest(unittest.TestCase):
'''Class to test statistics of predicted output on training data against
statistics of labels to validate that model predictions are in the]
expected range.
'''
def testStd(self):
y = model.predict(mnistData)
yStd = np.std(y)
yStdActual = np.std(mnistLabels)
deltaStd = 0.05
errorMsg = 'Std. dev. of predicted values ' + str(yStd) + \
' and actual values ' + str(yStdActual) + \
' differs by >' + str(deltaStd) + '.'
self.assertAlmostEqual(yStd, yStdActual, delta=deltaStd, msg=errorMsg)
def testMean(self):
y = model.predict(mnistData)
yMean = np.mean(y)
yMeanActual = np.mean(mnistLabels)
deltaMean = 0.05
errorMsg = 'Mean of predicted values ' + str(yMean) + \
' and actual values ' + str(yMeanActual) + \
' differs by >' + str(deltaMean) + '.'
self.assertAlmostEqual(yMean, yMeanActual, delta=deltaMean, msg=errorMsg)
Explanation: Write tests to check if the mean and standard deviation of the predicted labels falls within the expected range. The expected range defined in the tests below is somewhat arbitrary. In practice, you will tune the range thresholds to accommodate natural variation in predictions.
End of explanation
#@title Train model and run tests
model = None
# Define
model = keras.Sequential()
model.add(keras.layers.Dense(mnistData.shape[1],
activation='linear',
input_dim=mnistData.shape[1]))
model.add(keras.layers.Dense(1, activation='linear'))
# Compile
model.compile(optimizer="adam", loss='mse', metrics=['accuracy'])
# Train
trainHistory = model.fit(mnistData, mnistLabels, epochs=10, batch_size=100,
validation_split=0.1, verbose=0)
suite = unittest.TestLoader().loadTestsFromTestCase(mlTest)
unittest.TextTestRunner(verbosity=1, stream=sys.stderr).run(suite)
Explanation: Run the following cell to train a model with the wrong loss calculation and execute the tests. The tests should fail.
End of explanation
yPred = model.predict(mnistData)
plt.hist(yPred, bins=range(11))
Explanation: Since the tests fail, check the data distribution of predicted labels for anomalies.
End of explanation |
15,124 | Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
I have dfs as follows: | Problem:
import pandas as pd
df1 = pd.DataFrame({'id': [1, 2, 3, 4, 5],
'city': ['bj', 'bj', 'sh', 'sh', 'sh'],
'district': ['ft', 'ft', 'hp', 'hp', 'hp'],
'date': ['2019/1/1', '2019/1/1', '2019/1/1', '2019/1/1', '2019/1/1'],
'value': [1, 5, 9, 13, 17]})
df2 = pd.DataFrame({'id': [3, 4, 5, 6, 7],
'date': ['2019/2/1', '2019/2/1', '2019/2/1', '2019/2/1', '2019/2/1'],
'value': [1, 5, 9, 13, 17]})
def g(df1, df2):
return pd.concat([df1,df2.merge(df1[['id','city','district']], how='left', on='id')],sort=False).reset_index(drop=True)
result = g(df1.copy(),df2.copy()) |
15,125 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
A strawberry flavour gene vector for Saccharomyces cerevisiae
This Jupyter notebook describes the simulated cloning of the strawberry Fragaria ร ananassa alcohol acyltransferase SAAT gene and the construction of a S. cerevisiae expression vector for this gene.
The SAAT gene is involved in the production of the strawberry fragrance. It is necessary to first produce cDNA, a process which is not decribed in this notebook. Here is a recent protocol for the extraction of nucleic acids from Strawberry.
Step1: In the cell below, primers relevant to the Yeast Pathway Kit are read into six sequence objects. These are similar to the ones created in cell [3]
Step2: The final vector pYPKa_TDH3_FaPDC_TEF1 has 8769 bp.
The sequence can be inspected by the hyperlink above.
The restriction enzyme PvuI cuts twice in the plasmid backbone and once in the SAAT gene. | Python Code:
# Import the pydna package functions
from pydna.all import *
# Give your email address to Genbank, so they can contact you.
# This is a requirement for using their services
gb=Genbank("[email protected]")
# download the SAAT CDS from Genbank
# We know from inspecting the Genbank record that the SAAT coding sequence (CDS) spans region 78..1895
saat = gb.nucleotide("AF193791 REGION: 78..1895")
# The representation of the saat Dseqrecord object contains a link to Genbank
saat
# design two new primers for SAAT
saat_amplicon = primer_design(saat)
fw="aa"+saat_amplicon.forward_primer
rv=saat_amplicon.reverse_primer
# We can set the primer identities to something descriptive
fw.id, rv.id = "fw_saat_cds", "rv_saat_cds"
saat_pcr_prod = pcr(fw,rv, saat)
# The result is an object of the Amplicon class
saat_pcr_prod
# The object has several useful methods like .figure()
# which shows how the primers anneal
saat_pcr_prod.figure()
# read the cloning vector from a local file
pYPKa=read("pYPKa.gb")
# This is a GenbankFile object, its representation include a link to the local file:
pYPKa
# import the restriction enzyme AjiI from Biopython
from Bio.Restriction import AjiI
# cut the vector with the .linearize method. This will give an error is more than one
# fragment is formed
pYPKa_AjiI = pYPKa.linearize(AjiI)
# The result from the digestion is a linear Dseqrecord object
pYPKa_AjiI
# clone the PCR product by adding the linearized vector to the insert
# and close it using the .looped() method.
pYPKa_A_saat = ( pYPKa_AjiI + saat_pcr_prod ).looped()
pYPKa_A_saat
# read promoter vector from a local file
pYPKa_Z_prom = read("pYPKa_Z_TEF1.gb")
# read terminator vector from a local file
pYPKa_E_term = read("pYPKa_E_TPI1.gb")
pYPKa_Z_prom
pYPKa_E_term
[pYPKa_Z_prom,pYPKa_Z_prom]
Explanation: A strawberry flavour gene vector for Saccharomyces cerevisiae
This Jupyter notebook describes the simulated cloning of the strawberry Fragaria × ananassa alcohol acyltransferase SAAT gene and the construction of a S. cerevisiae expression vector for this gene.
The SAAT gene is involved in the production of the strawberry fragrance. It is necessary to first produce cDNA, a process which is not described in this notebook. Here is a recent protocol for the extraction of nucleic acids from Strawberry.
End of explanation
# Standard primers
p567,p577,p468,p467,p568,p578 = parse_primers('''
>567_pCAPsAjiIF (23-mer)
GTcggctgcaggtcactagtgag
>577_crp585-557 (29-mer)
gttctgatcctcgagcatcttaagaattc
>468_pCAPs_release_fw (25-mer)
gtcgaggaacgccaggttgcccact
>467_pCAPs_release_re (31-mer)
ATTTAAatcctgatgcgtttgtctgcacaga
>568_pCAPsAjiIR (22-mer)
GTGCcatctgtgcagacaaacg
>578_crp42-70 (29-mer)
gttcttgtctcattgccacattcataagt''')
p567
# Promoter amplified using p577 and p567
p = pcr(p577, p567, pYPKa_Z_prom)
# Gene amplified using p468 and p467
g = pcr(p468, p467, pYPKa_A_saat)
# Terminator amplified using p568 and p578
t = pcr(p568, p578, pYPKa_E_term)
# Yeast backbone vector read from a local file
pYPKpw = read("pYPKpw.gb")
from Bio.Restriction import ZraI
# Vector linearized with ZraI
pYPKpw_lin = pYPKpw.linearize(ZraI)
# Assembly simulation between four linear DNA fragments:
# plasmid, promoter, gene and terminator
# Only one circular product is formed (8769 bp)
asm = Assembly( (pYPKpw_lin, p, g, t) )
asm
# Inspect the only circular product
candidate = asm.assemble_circular()[0]
candidate.figure()
# Synchronize vectors
pYPK0_TDH3_FaPDC_TEF1 = candidate.synced(pYPKa)
# Write new vector to local file
pYPK0_TDH3_FaPDC_TEF1.write("pYPK0_TDH3_FaPDC_TPI1.gb")
Explanation: In the cell below, primers relevant to the Yeast Pathway Kit are read into six sequence objects. These are similar to the ones created in cell [3]
End of explanation
from Bio.Restriction import PvuI
#PYTEST_VALIDATE_IGNORE_OUTPUT
%matplotlib inline
from pydna.gel import Gel, weight_standard_sample
standard = weight_standard_sample('1kb+_GeneRuler')
Gel( [ standard,
pYPKpw.cut(PvuI),
pYPK0_TDH3_FaPDC_TEF1.cut(PvuI) ] ).run()
Explanation: The final vector pYPKa_TDH3_FaPDC_TEF1 has 8769 bp.
The sequence can be inspected by the hyperlink above.
The restriction enzyme PvuI cuts twice in the plasmid backbone and once in the SAAT gene.
End of explanation |
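# Optional extra check (a sketch, not part of the original cells): PvuI cuts the final
# vector three times in total (twice in the plasmid backbone and once in the SAAT gene),
# so cutting the circular plasmid should yield three fragments.
from Bio.Restriction import PvuI
fragments = pYPK0_TDH3_FaPDC_TEF1.cut(PvuI)
print(len(fragments))  # expected: 3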
15,126 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Chapter 13
Step1: The try instruction allows exception handling in Python. If an exception occurs in a block marked by try, it is possible to handle the exception through the instruction except. It is possible to have many except blocks for the same try block.
Step3: If except receives the name of an exception, only that exception will be handled. If no exception name is passed as a parameter, all exceptions will be handled.
Example
Step4: The module traceback offers functions for dealing with error messages. The function format_exc() returns the output of the last exception formatted in a string.
The handling of exceptions may have an else block, which will be executed when no exception occurs, and a finally block, which will be executed anyway, whether an exception occurred or not (the finally declaration may be used for freeing resources that were used in the try block, such as database connections or open files). New types of exceptions may be defined through inheritance of the class Exception.
Since version 2.6, the instruction with is available, which may replace the combination of try / finally in many situations. It is possible to define an object that will be used during the with block execution. The object must support the context management protocol, which means that it needs to have an __enter__() method, which will be executed at the beginning of the block, and another called __exit__(), which will be called at the end of the block.
Example
Step5: Writing Exception Classes
Step6: Exception hierarchy
The class hierarchy for built-in exceptions is | Python Code:
print (10/0)
Explanation: Chapter 13: Exceptions
When a failure occurs in the program (such as division by zero, for example) at runtime, an exception is generated. If the exception is not handled, it will be propagated through function calls to the main program module, interrupting execution.
End of explanation
try:
print (1/0)
except ZeroDivisionError:
print ('Error trying to divide by zero.')
try:
print (1/0)
except:
print ('Error trying to divide by zero.')
try:
print (1/0)
except Exception as ex:
    # In real code, write ex to a log file instead of printing it to the console
    print('Error trying to divide by zero.', ex)
Explanation: The try instruction allows exception handling in Python. If an exception occurs in a block marked by try, it is possible to handle the exception through the instruction except. It is possible to have many except blocks for the same try block.
End of explanation
import sys
try:
print("... TESTing.. ")
with open('myfile.txt', "w") as myFile:
for a in ["a", "b", "c"]:
myFile.write(str(a))
for a in [1,2,3,4,5,"6"]:
myFile.write(str(a))
f = open('myfile.txt')
s = f.readline()
i = int(s.strip())
# raise Exception("Test Exception")
except OSError as err:
print("OS error: {0}".format(err))
except ValueError:
print("Could not convert data to an integer.")
raise
except:
print("Unexpected error:", sys.exc_info())
try:
print(1/0)
except:
print("Hallo, Ja")
raise
int("2A")
# -*- coding: utf-8 -*-
"""
Created on Fri Aug 5 08:50:42 2016
@author: [email protected]
"""
import traceback
# Try to get a file name
try:
fn = input('File Name (temp.txt): ').strip()
# Numbering lines
for i, s in enumerate(open(fn)):
print( i + 1,"> ", s,)
# If an error happens
except:
# Show it on the screen
trace = traceback.format_exc()
# And save it on a file
print ('An error happened:\n', trace)
with open("trace_asd.log", "a+") as file:
file.write(trace)
# file('trace_asd.log', 'a').write(trace)
# end the program
# raise SystemExit
Explanation: If except receives the name of an exception, only that exception will be handled. If no exception name is passed as a parameter, all exceptions will be handled.
Example:
End of explanation
def do_some_stuff():
print("Doing some stuff")
def do_some_stuff_e():
print("Doing some stuff and will now raise error")
raise ValueError('A very specific bad thing happened')
def rollback():
print("reverting the changes")
def commit():
print("commiting the changes")
print("Testing")
try:
# do_some_stuff()
do_some_stuff_e()
except:
rollback()
# raise
else:
commit()
finally:
print("Exiting out")
# #### ERROR Condtion
# Testing
# try block
# Doing some stuff and will now raise error
# except block
# reverting the changes
# Finally block
# Exiting out
# NO ERROR
# Testing
# Try block
# Doing some stuff
# else block
# commiting the changes
# finally block
# Exiting out
Explanation: The module traceback offers functions for dealing with error messages. The function format_exc() returns the output of the last exception formatted in a string.
The handling of exceptions may have an else block, which will be executed when no exception occurs, and a finally block, which will be executed anyway, whether an exception occurred or not (the finally declaration may be used for freeing resources that were used in the try block, such as database connections or open files). New types of exceptions may be defined through inheritance of the class Exception.
Since version 2.6, the instruction with is available, which may replace the combination of try / finally in many situations. It is possible to define an object that will be used during the with block execution. The object must support the context management protocol, which means that it needs to have an __enter__() method, which will be executed at the beginning of the block, and another called __exit__(), which will be called at the end of the block.
Example:
End of explanation
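# A minimal context manager sketch to make the __enter__/__exit__ protocol used by
# `with` concrete. The class name and messages are illustrative only.
class ManagedResource:
    def __enter__(self):
        print("acquiring resource")
        return self
    def __exit__(self, exc_type, exc_value, traceback_obj):
        print("releasing resource, exception:", exc_type)
        return False  # returning False lets any exception propagate

with ManagedResource() as res:
    print("inside the with block, using", res)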
class HostNotFound(Exception):
def __init__( self, host ):
self.host = host
Exception.__init__(self, 'Host Not Found exception: missing %s' % host)
try:
raise HostNotFound("gitpub.com")
except HostNotFound as hcf:
# Handle exception.
print(hcf)       # -> 'Host Not Found exception: missing gitpub.com'
print(hcf.host)  # -> 'gitpub.com'
try:
fh = open("nonexisting.txt", "r")
try:
fh.write("This is my test file for exception handling!!")
print(1/0)
except:
print("Caught error message")
finally:
print ("Going to close the file")
fh.close()
except IOError:
print ("Error: can\'t find file or read data")
try:
# fh = open("nonexisting.txt", "r")
try:
fh.write("This is my test file for exception handling!!")
print(1/0)
except:
print("Caught error message")
raise
finally:
print ("Going to close the file")
fh.close()
except:
print ("Error: can\'t find file or read data")
try:
# fh = open("nonexisting.txt", "r")
try:
# fh.write("This is my test file for exception handling!!")
print(1/0)
except:
print("Caught error message")
finally:
print ("Going to close the file")
# fh.close()print(1/0)
print(1/0)
except :
print ("Error: can\'t find file or read data")
raise
Explanation: Writing Exception Classes
End of explanation
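# A further sketch: wrapping a low-level error in a custom exception while keeping
# the original error as the cause with `raise ... from` (Python 3).
class ConfigError(Exception):
    pass

try:
    try:
        int("not-a-number")
    except ValueError as low_level:
        raise ConfigError("bad value in configuration") from low_level
except ConfigError as err:
    print(err, "| caused by:", repr(err.__cause__))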
import inspect
inspect.getclasstree(inspect.getmro(Exception))
# https://stackoverflow.com/questions/18296653/print-the-python-exception-error-hierarchy
def classtree(cls, indent=0):
print ('.' * indent, cls.__name__)
for subcls in cls.__subclasses__():
classtree(subcls, indent + 3)
classtree(BaseException)
Explanation: Exception hierarchy
The class hierarchy for built-in exceptions is:
```
BaseException
+-- SystemExit
+-- KeyboardInterrupt
+-- GeneratorExit
+-- Exception
+-- StopIteration
+-- StopAsyncIteration
+-- ArithmeticError
| +-- FloatingPointError
| +-- OverflowError
| +-- ZeroDivisionError
+-- AssertionError
+-- AttributeError
+-- BufferError
+-- EOFError
+-- ImportError
+-- LookupError
| +-- IndexError
| +-- KeyError
+-- MemoryError
+-- NameError
| +-- UnboundLocalError
+-- OSError
| +-- BlockingIOError
| +-- ChildProcessError
| +-- ConnectionError
| | +-- BrokenPipeError
| | +-- ConnectionAbortedError
| | +-- ConnectionRefusedError
| | +-- ConnectionResetError
| +-- FileExistsError
| +-- FileNotFoundError
| +-- InterruptedError
| +-- IsADirectoryError
| +-- NotADirectoryError
| +-- PermissionError
| +-- ProcessLookupError
| +-- TimeoutError
+-- ReferenceError
+-- RuntimeError
| +-- NotImplementedError
| +-- RecursionError
+-- SyntaxError
| +-- IndentationError
| +-- TabError
+-- SystemError
+-- TypeError
+-- ValueError
| +-- UnicodeError
| +-- UnicodeDecodeError
| +-- UnicodeEncodeError
| +-- UnicodeTranslateError
+-- Warning
+-- DeprecationWarning
+-- PendingDeprecationWarning
+-- RuntimeWarning
+-- SyntaxWarning
+-- UserWarning
+-- FutureWarning
+-- ImportWarning
+-- UnicodeWarning
+-- BytesWarning
+-- ResourceWarning
```
End of explanation |
15,127 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Display objects
A striplog depends on a hierarchy of objects. This notebook shows the objects related to display
Step1: A Decor attaches a display style to a Rock. From the docs
Step2: Like Rocks, we instantiate Decors with a dict of properties
Step3: Or instantiate with keyword parameters
Step4: You can access its attributes. It has two ways to understand colour
Step5: There are the standard matplotlib hatch patterns
Step6: And there are some custom ones. These really need to be reconciled and implemented in a more flexible way, perhaps even going as far as a redesign of the mpl implementation.
Step7: We can display hatches in a single plot for quick reference
Step9: <hr />
Legend
A look-up table to assist in the conversion of Components to a plot colour.
We'll define a legend in a CSV file. I can't think of a better way for now. It would be easy to make a web form to facilitate this with, for example, a colour picker. It may not be worth it, though; I imagine one would create one and then leave it alone most of the time.
Step11: Duplicate lithologies will result in a warning. To avoid strange results, you should fix the problem by removing duplicates.
Step12: We can also export a legend as CSV text
Step13: Builtin legends
There are several
Step14: There is also a default legend, which you can call with Legend.default() (no arguments).
Step15: There are also default timescales
Step16: Legend from image
If you have an image of a legend (just the colours), Striplog will have a go at reading colours from it.
Step17: Querying a legend
The legend is basically a query table. We can ask the Legend what colour to use for a given Rock object
Step18: Sometimes we also want to use a width for a given lithology
Step19: We can also ask the legend which Rock is represented by a particular colour. (I doubt you'd ever really need to do this, but I had to implement this to allow you to make a Striplog from an image
Step20: The Legend behaves more or less like a list, so we can index into it
Step21: Legends can plot themselves.
Step22: Sometimes you don't want to have to make a legend, so you can use a random one. Just pass a list of Components...
Step23: There is a default colour table for geological timescales too... it's based on the Wikipedia's colour scheme for the geological timetable. | Python Code:
from striplog import Decor
Explanation: Display objects
A striplog depends on a hierarchy of objects. This notebook shows the objects related to display:
Decor: One element from a legend โย describes how to display a Rock.
Legend: A set of Decorsย โย describes how to display a set of Rocks or a Striplog.
<hr />
Decor
End of explanation
from striplog import Component
r = {'colour': 'grey',
'grainsize': 'vf-f',
'lithology': 'sand',
'porosity': 0.123
}
rock = Component(r)
rock
d = {'color': '#267022',
'component': rock,
'width': 3
}
decor = Decor(d)
decor
from striplog import Component
r = {'colour': 'grey',
'grainsize': 'vf-f',
'lithology': 'sand',
'porosity': 0.123
}
rock = Component(r)
rock
Explanation: A Decor attaches a display style to a Rock. From the docs:
A single display style. A Decor describes how to display a given set
of Component properties.
In general, you will not usually use a Decor on its own. Instead, you
will want to use a Legend, which is just a list of Decors, and leave
the Decors to the Legend.
We are going to need a Component to make a Decor. 'Components' represent things like rock types.
End of explanation
d = {'color': '#267022',
'component': rock,
'width': 3
}
decor = Decor(d)
decor
Explanation: Like Rocks, we instantiate Decors with a dict of properties:
End of explanation
Decor(colour='#86f0b6',
component=Component({'colour': 'grey', 'grainsize': 'vf-f', 'porosity': 0.123, 'lithology': 'sand'}),
width=3.0
)
Explanation: Or instantiate with keyword parameters:
End of explanation
print("Hex: {}... and RGB: {}".format(decor.colour, decor.rgb))
print(decor)
%matplotlib inline
decor.plot()
decor.hatch = '+'
decor.plot()
Explanation: You can access its attributes. It has two ways to understand colour:
End of explanation
hatches = "\/|+x-.o*"
for h in hatches:
Decor({'component': Component({'hatch':h}), 'hatch': h, 'colour': '#eeeeee'}).plot()
Explanation: There are the standard matplotlib hatch patterns:
End of explanation
hatches = "pctLbs!=v^"
for h in hatches:
Decor({'component': Component({'hatch':h}), 'hatch': h, 'colour': '#eeeeee'}).plot(fmt="{hatch}")
Explanation: And there are some custom ones. These really need to be reconciled and implemented in a more flexible way, perhaps even going as far as a redesign of the mpl implementation.
End of explanation
import matplotlib.pyplot as plt
hatches = ['.', '..', 'o', 'p', 'c', '*', '-', '--', '=', '==', '|',
'||', '!', '!!', '+', '++', '/', '\\', '//', '\\\\', '///',
'\\\\\\', 'x', 'xx', '^', 'v', 't', 'l', 'b', 's']
fig, axs = plt.subplots(figsize=(16,5.25), ncols=10, nrows=3)
fig.subplots_adjust(hspace=0.5)
for ax, h in zip(axs.flatten(), hatches):
ax.set_title(h)
Decor(colour='#eeeeee',
component=Component({'hatch': h}),
hatch=h).plot(fmt='', ax=ax)
Explanation: We can display hatches in a single plot for quick reference:
End of explanation
l = u"""colour, width, component lithology, component colour, component grainsize
#F7E9A6, 3, Sandstone, Grey, VF-F
#FF99CC, 2, Anhydrite, ,
#DBD6BC, 3, Heterolithic, Grey,
#FF4C4A, 2, Volcanic, ,
#86F0B6, 5, Conglomerate, ,
#FF96F6, 2, Halite, ,
#F2FF42, 4, Sandstone, Grey, F-M
#DBC9BC, 3, Heterolithic, Red,
#A68374, 2, Siltstone, Grey,
#A657FA, 3, Dolomite, ,
#FFD073, 4, Sandstone, Red, C-M
#A6D1FF, 3, Limestone, ,
#FFDBBA, 3, Sandstone, Red, VF-F
#FFE040, 4, Sandstone, Grey, C-M
#A1655A, 2, Siltstone, Red,
#363434, 1, Coal, ,
#664A4A, 1, Mudstone, Red,
#666666, 1, Mudstone, Grey,
"""
from striplog import Legend
legend = Legend.from_csv(text=l)
legend[:5]
Explanation: <hr />
Legend
A look-up table to assist in the conversion of Components to a plot colour.
We'll define a legend in a CSV file. I can't think of a better way for now. It would be easy to make a web form to facilitate this with, for example, a colour picker. It may not be worth it, though; I imagine one would create one and then leave it alone most of the time.
End of explanation
l = u"""colour, component lithology
#F7E9A6, Sandstone
#F2FF42, Sandstone
#FF99CC, Anhydrite
#DBD6BC, Heterolithic
#FF4C4A, Volcanic
#86F0B6, Conglomerate
#FFD073, Sandstone
"""
Legend.from_csv(text=l)
Explanation: Duplicate lithologies will result in a warning. To avoid strange results, you should fix the problem by removing duplicates.
End of explanation
print(legend.to_csv())
Explanation: We can also export a legend as CSV text:
End of explanation
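# Round-trip sketch (assumes `legend` from above): a Legend rebuilt from its own CSV
# export should describe the same number of Decors. Legend is list-like, so len() is
# assumed to work here.
roundtrip = Legend.from_csv(text=legend.to_csv())
print(len(legend), len(roundtrip))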
legend = Legend.builtin('nsdoe')
legend
Explanation: Builtin legends
There are several:
'nsdoe': Nova Scotia Dept. of Energy
'nagmdm__6_2': USGS N. Am. Geol. Map Data Model 6.2 <<< default
'nagmdm__6_1': USGS N. Am. Geol. Map Data Model 6.1
'nagmdm__4_3': USGS N. Am. Geol. Map Data Model 4.3
'sgmc': USGS State Geologic Map Compilation
End of explanation
Legend.default()
Explanation: There is also a default legend, which you can call with Legend.default() (no arguments).
End of explanation
time = Legend.default_timescale()
time[:10]
Explanation: There are also default timescales:
End of explanation
from IPython.display import Image
Image('z_Lithology_legend_gapless2.png', width=15)
liths = [
'Conglomerate',
'Sandstone',
'Sandstone',
'Sandstone',
'Sandstone',
'Sandstone',
'Siltstone',
'Siltstone',
'Heterolithic',
'Heterolithic',
'Mudstone',
'Mudstone',
'Limestone',
'Dolomite',
'Anhydrite',
'Halite',
'Coal',
'Volcanic',
'NULL',
]
colours = [
None,
'Grey',
'Red',
'Grey',
'Grey',
'Red',
'Grey',
'Red',
'Grey',
'Red',
'Grey',
'Red',
None,
None,
None,
None,
None,
None,
None,
]
components = [Component({'lithology': l, 'colour': c}) for l, c in zip(liths, colours)]
Legend.from_image('z_Lithology_legend_gapless2.png', components)
Explanation: Legend from image
If you have an image of a legend (just the colours), Striplog will have a go at reading colours from it.
End of explanation
legend.get_colour(rock)
rock3 = Component({'colour': 'red',
'grainsize': 'vf-f',
'lithology': 'sandstone'})
legend.get_colour(rock3)
Legend.random(rock3)
Explanation: Querying a legend
The legend is basically a query table. We can ask the Legend what colour to use for a given Rock object:
End of explanation
legend.get_width(rock3)
Explanation: Sometimes we also want to use a width for a given lithology:
End of explanation
legend.get_component('#f7e9a6')
Explanation: We can also ask the legend which Rock is represented by a particular colour. (I doubt you'd ever really need to do this, but I had to implement this to allow you to make a Striplog from an image: it looks up the rocks to use by colour.)
End of explanation
legend[3:5]
Explanation: The Legend behaves more or less like a list, so we can index into it:
End of explanation
legend.plot()
Explanation: Legends can plot themselves.
End of explanation
# We'll scrape a quick list of 7 components from the default legend:
c = [d.component for d in legend[:7]]
l = Legend.random(c)
l.plot()
Explanation: Sometimes you don't want to have to make a legend, so you can use a random one. Just pass a list of Components...
End of explanation
time[74:79].plot(fmt="{age!t}") # Pass a format for proper case
Explanation: There is a default colour table for geological timescales too... it's based on the Wikipedia's colour scheme for the geological timetable.
End of explanation |
15,128 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Solvers
A constraints-based reconstruction and analysis model for biological systems is actually just an application of a class of discrete optimization problems typically solved with linear, mixed integer or quadratic programming techniques. Cobrapy does not implement any algorithms to find solutions to such problems but rather creates a biologically motivated abstraction to these techniques to make it easier to think of how metabolic systems work without paying much attention to how that is formulated as an optimization problem.
The actual solving is instead done by tools such as the free software glpk or commercial tools gurobi and cplex, which are all made available through a common programmer's interface via the optlang package.
When you have defined your model, you can switch solver backend by simply assigning to the model.solver property.
Step1: For information on how to configure and tune the solver, please see the documentation for the optlang project and note that model.solver is simply an optlang object of class Model. | Python Code:
import cobra.test
model = cobra.test.create_test_model('textbook')
model.solver = 'glpk'
# or if you have cplex installed
model.solver = 'cplex'
Explanation: Solvers
A constraints-based reconstruction and analysis model for biological systems is actually just an application of a class of discrete optimization problems typically solved with linear, mixed integer or quadratic programming techniques. Cobrapy does not implement any algorithms to find solutions to such problems but rather creates a biologically motivated abstraction to these techniques to make it easier to think of how metabolic systems work without paying much attention to how that is formulated as an optimization problem.
The actual solving is instead done by tools such as the free software glpk or commercial tools gurobi and cplex, which are all made available through a common programmer's interface via the optlang package.
When you have defined your model, you can switch solver backend by simply assigning to the model.solver property.
End of explanation
type(model.solver)
Explanation: For information on how to configure and tune the solver, please see the documentation for the optlang project and note that model.solver is simply an optlang object of class Model.
End of explanation |
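# A small tuning sketch. The attribute names below come from optlang's Configuration
# interface; support can vary between solver backends, so treat this as illustrative.
model.solver.configuration.verbosity = 1   # solver log level
model.solver.configuration.timeout = 60    # wall-clock limit in seconds
solution = model.optimize()
print(solution.objective_value)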
15,129 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Assessing classifiers using GO in Shalek2013
For the GO analysis, we'll need a few other packages
Step2: Utility functions for gene ontology
Step3: Read in the Shalek2013 data and classify it
Step4: Make the coefficients series
Step5: Let's remind ourselves of the distribution of the coefficients again
Step6: How can we biologically assess what genes are found by our classifier? One way is through Gene Ontology enrichment
Evaluating classifiers through Gene Ontology (GO) Enrichment
Gene ontology is a tree (aka directed acyclic graph or "dag") of gene annotations. The topmost node is the most general, and the bottommost nodes are the most specific. Here is an example GO graph.
Three GO Domains
Step7: GOEA Step 2
Step8: GOEA Step 3
Step9: GOEA Step 4
Step10: GOEA Step 5
Step11: Now we're going to ..... say it with me ...... add a column! that is the negative log10 of the p-value so it's easier to plot and deal with.
Step12: Let's make sure this dataframe is sorted in order of enrichment. By default, this is sorting in ascending order, and we want the most enriched to be at the top, so let's say ascending=False.
Step13: Let's look at the distribution of the log10 p-values
Step14: Now we can also plot this data! Let's just take a subset, say the first 10 and look at the distribution of p-values here.
Step15: Now let's plot the GO categories! We want to make sure they stay in the highest-lowest order by specifying "order" (otherwise they will be alphabetical!)
Step16: Exercise
Perform GO enrichment on the genes ABOVE the upper cutoff.
Plot the enrichment as a bar graph.
Try using only the LPS response genes for classification, and calculate GO enrichment on those, too. Do you have to change the background as well?
Use the cells below. | Python Code:
# Alphabetical order is standard
# We're doing "import superlongname as abbrev" for our laziness - this way we don't have to type out the whole thing each time.
import collections
# Python plotting library
import matplotlib.pyplot as plt
# Numerical python library (pronounced "num-pie")
import numpy as np
# Dataframes in Python
import pandas as pd
# Statistical plotting library we'll use
import seaborn as sns
sns.set(style='whitegrid')
# Label processing
from sklearn import preprocessing
# Matrix decomposition
from sklearn.decomposition import PCA, FastICA
# Classification
from sklearn.svm import SVC
# Gene ontology
import goatools
import mygene
# This is necessary to show the plotted figures inside the notebook -- "inline" with the notebook cells
%matplotlib inline
Explanation: Assessing classifiers using GO in Shalek2013
For the GO analysis, we'll need a few other packages:
mygene for looking up the gene ontology categories of genes
goatools for performing gene ontology enrichment analysis
fishers_exact_test for goatools
Use the following commands at your terminal to install the packages. Some of them are on Github so it's important to get the whole command right.
$ pip install mygene
$ pip install git+git://github.com/tanghaibao/goatools.git
$ pip install git+https://github.com/brentp/fishers_exact_test.git
End of explanation
GO_KEYS = 'go.BP', 'go.MF', 'go.CC'
def parse_mygene_output(mygene_output):
Convert mygene.querymany output to a gene id to go term mapping (dictionary)
Parameters
----------
mygene_output : dict or list
Dictionary (returnall=True) or list (returnall=False) of
output from mygene.querymany
Output
------
gene_name_to_go : dict
Mapping of gene name to a set of GO ids
# if "returnall=True" was specified, need to get just the "out" key
if isinstance(mygene_output, dict):
mygene_output = mygene_output['out']
gene_name_to_go = collections.defaultdict(set)
for line in mygene_output:
gene_name = line['query']
for go_key in GO_KEYS:
try:
go_terms = line[go_key]
except KeyError:
continue
if isinstance(go_terms, dict):
go_ids = set([go_terms['id']])
else:
go_ids = set(x['id'] for x in go_terms)
gene_name_to_go[gene_name] |= go_ids
return gene_name_to_go
Explanation: Utility functions for gene ontology
End of explanation
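# Tiny illustration of the helper with a hand-made record (not real mygene output):
_fake_hits = [{'query': 'Gene1', 'go.BP': {'id': 'GO:0008150'}},
              {'query': 'Gene2', 'go.MF': [{'id': 'GO:0003674'}, {'id': 'GO:0005488'}]}]
parse_mygene_output(_fake_hits)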
metadata = pd.read_csv('../data/shalek2013/metadata.csv',
# Sets the first (Python starts counting from 0 not 1) column as the row names
index_col=0)
expression = pd.read_csv('../data/shalek2013/expression.csv',
# Sets the first (Python starts counting from 0 not 1) column as the row names
index_col=0)
expression_feature = pd.read_csv('../data/shalek2013/expression_feature.csv',
# Sets the first (Python starts counting from 0 not 1) column as the row names
index_col=0)
# creating new column indicating color
metadata['color'] = metadata['maturity'].map(
lambda x: 'MediumTurquoise' if x == 'immature' else 'Teal')
metadata.loc[metadata['pooled'], 'color'] = 'black'
# Create a column indicating both maturity and pooled for coloring with seaborn, e.g. sns.pairplot
metadata['group'] = metadata['maturity']
metadata.loc[metadata['pooled'], 'group'] = 'pooled'
# Create a palette and ordering for using with sns.pairplot
palette = ['MediumTurquoise', 'Teal', 'black']
order = ['immature', 'mature', 'pooled']
singles_ids = [x for x in expression.index if x.startswith('S')]
singles = expression.loc[singles_ids]
# Use only the genes that are substantially expressed in single cells
singles = singles.loc[:, (singles > 1).sum() >= 3]
singles.shape
# Now because computers only understand numbers, we'll convert the
# category label of "mature" and "immature" into integers to a using a
# `LabelEncoder`. Let's look at that column again, only for mature cells:
singles_maturity = metadata.loc[singles.index, 'maturity']
# Instantiate the encoder
encoder = preprocessing.LabelEncoder()
# Get number of categories and transform "mature"/"immature" to numbers
target = encoder.fit_transform(singles_maturity)
## Run the classifier!!
# Yay so now we can run a classifier!
classifier = SVC(kernel='linear')
classifier.fit(singles, target)
Explanation: Read in the Shalek2013 data and classify it
End of explanation
coefficients = pd.Series(classifier.coef_.flat, index=singles.columns)
coefficients.head()
Explanation: Make the coefficients series
End of explanation
mean = coefficients.mean()
std = coefficients.std()
multiplier = 2
lower_cutoff = mean - multiplier * std
upper_cutoff = mean + multiplier * std
fig, ax = plt.subplots()
sns.distplot(coefficients)
# Add vertical lines
ymin, ymax = ax.get_ylim()
ax.vlines([lower_cutoff, upper_cutoff], ymin, ymax, linestyle='--', color='Crimson')
below_cutoff = coefficients[coefficients < lower_cutoff]
print(below_cutoff.shape)
below_cutoff.head()
Explanation: Let's remind ourselves of the distribution of the coefficients again
End of explanation
from goatools.base import download_go_basic_obo
obo_fname = download_go_basic_obo()
# Show the filename
obo_fname
Explanation: How can we biologically assess what genes are found by our classifier? One way is through Gene Ontology enrichment
Evaluating classifiers through Gene Ontology (GO) Enrichment
Gene ontology is a tree (aka directed acyclic graph or "dag") of gene annotations. The topmost node is the most general, and the bottommost nodes are the most specific. Here is an example GO graph.
Three GO Domains:
Cellular Component (CC)
Molecular Function (MF)
Biological Process (BP)
Perform GO enrichment analysis (GOEA)
GOEA Step 1: Download GO graph file of "obo" type (same for all species)
This will download the file "go-basic.obo" if it doesn't already exist. This only needs to be done once.
End of explanation
obo_dag = goatools.obo_parser.GODag(obo_file=obo_fname)
Explanation: GOEA Step 2: Create the GO graph (same for all species)
(this may take some time to build the graph)
End of explanation
# Initialize the "mygene.info" (http://mygene.info/) interface
mg = mygene.MyGeneInfo()
mygene_output = mg.querymany(singles.columns,
scopes='symbol', fields=['go.BP', 'go.MF', 'go.CC'], species='mouse',
returnall=True)
gene_name_to_go = parse_mygene_output(mygene_output)
Explanation: GOEA Step 3: Get gene ID to GO id mapping (species-specific and experiment-specific)
Here we are establishing the background for our GOEA. Defining your background is very important because, for example, there are lots of neural genes, so if you use all human genes as background in your study of which genes are upregulated in Neuron Type X vs Neuron Type Y, you'll get a bunch of neuron genes (which is true) but not the smaller differences between X and Y. Typically, you use all expressed genes as the background.
For our data, we can access all expressed genes very simply by getting the column names of the singles dataframe (the one we used for classifying), i.e. singles.columns; it contains all expressed genes in single cells. This will be our background.
End of explanation
go_enricher = goatools.GOEnrichmentStudy(singles.columns, gene_name_to_go, obo_dag)
Explanation: GOEA Step 4: Create a GO enrichment calculator object go_enricher (species- and experiment-specific)
In this step, we are using the two objects we've created (obo_dag from Step 2 and gene_name_to_go from Step 3) plus the gene ids to create a go_enricher object
End of explanation
genes_of_interest = below_cutoff.index
# "results" is a list and is annoying to deal with ...
# ... so we'll make a dataframe in the next step
results = go_enricher.run_study(genes_of_interest)
# Create a dataframe of the results so it's easier to deal with
below_cutoff_go_enrichment = pd.DataFrame([r.__dict__ for r in results])
print(below_cutoff_go_enrichment.shape)
below_cutoff_go_enrichment.head()
Explanation: GOEA Step 5: Calculate go enrichment!!! (species- and experiment-specific)
Now we are ready to run go enrichment!! Let's take our enriched genes of interest and run the enrichment analysis!
End of explanation
below_cutoff_go_enrichment['log10_p_bonferroni'] = -np.log10(below_cutoff_go_enrichment['p_bonferroni'])
print(below_cutoff_go_enrichment.shape)
below_cutoff_go_enrichment.head()
Explanation: Now we're going to ..... say it with me ...... add a column! that is the negative log10 of the p-value so it's easier to plot and deal with.
End of explanation
below_cutoff_go_enrichment = below_cutoff_go_enrichment.sort_values('log10_p_bonferroni', ascending=False)
print(below_cutoff_go_enrichment.shape)
below_cutoff_go_enrichment.head()
Explanation: Let's make sure this dataframe is sorted in order of enrichment. By default, this is sorting in ascending order, and we want the most enriched to be at the top, so let's say ascending=False.
End of explanation
sns.distplot(below_cutoff_go_enrichment['log10_p_bonferroni'])
Explanation: Let's look at the distribution of the log10 p-values
End of explanation
below_cutoff_go_enrichment_subset = below_cutoff_go_enrichment.iloc[:10, :]
sns.distplot(below_cutoff_go_enrichment_subset['log10_p_bonferroni'])
Explanation: Now we can also plot this data! Let's just take a subset, say the first 10 and look at the distribution of p-values here.
End of explanation
order = below_cutoff_go_enrichment_subset['name']
fig, ax = plt.subplots()
sns.barplot(x='log10_p_bonferroni', y='name', data=below_cutoff_go_enrichment_subset, orient='h', order=order)
fig.savefig("below_cutoff_go_enrichment.pdf")
Explanation: Now let's plot the GO categories! We want to make sure they stay in the highest-lowest order by specifying "order" (otherwise they will be alphabetical!)
End of explanation
# YOUR CODE HERE
# YOUR CODE HERE
# YOUR CODE HERE
Explanation: Exercise
Perform GO enrichment on the genes ABOVE the upper cutoff.
Plot the enrichment as a bar graph.
Try using only the LPS response genes for classification, and calculate GO enrichment on those, too. Do you have to change the background as well?
Use the cells below.
End of explanation |
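# A possible starting point for the first exercise item (a sketch that reuses the
# objects defined above): enrich the genes ABOVE the upper cutoff and plot them the
# same way as for the genes below the cutoff.
above_cutoff = coefficients[coefficients > upper_cutoff]
above_results = go_enricher.run_study(above_cutoff.index)
above_go = pd.DataFrame([r.__dict__ for r in above_results])
above_go['log10_p_bonferroni'] = -np.log10(above_go['p_bonferroni'])
above_go = above_go.sort_values('log10_p_bonferroni', ascending=False)
above_subset = above_go.iloc[:10, :]
sns.barplot(x='log10_p_bonferroni', y='name', data=above_subset, orient='h',
            order=above_subset['name'])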
15,130 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Reprise pour proof of concept du pdf PythonEdu Amiens
Passer par un logiciel Windows alors que jupyter et jupyterhub existent me semble une grossiรจre erreur d'aiguillage.
Je vais tenter de dรฉmontrer mon point de vue ร partir de ce Notebook. Les librairies utilisรฉes par le module lycee sont instalรฉes en python3 sur la raspberryPi du projet Tremplin.
Il va de soi que ce cahier peut s'exรฉcuter en local sur toute machine disposant d'une instalation de jupyter notebook.
L'รฉcriture du code en Python
Step1: # Exercice 1
Step2: la casse des caractรจres
Il est important de se souvenir que les instruction en python s'รฉcrivent en minuscule.
Lorsque une variable contient des Majuscules il faut dans tout le programme conserver ces majuscules.
Les champs de code suivants montrent ce qui se produit quand on suit la rรจgle puis lorsqu'on ne la suit pas.
Step3: L'affectation d'une valeur ร une variable
Step4: Les variables peuvent รชtre affectรฉes par lot. On peut rรฉaffecter les variables en mรชme temps ou successivement.
Step5: De l'utilitรฉ du module lycee pour nos รฉlรจves ?
Le projet Pythonedu semble considรฉrer qu'il faut mettre ร disposition des รฉlรจves un module qui masque les fonctionnalitรฉs de python ...
Il me semble en tant que prof validรฉ ISN que cette dรฉmarche est contre productive.
Que fait lycee ?
Il est liรฉ ร des librairies python qui doivent รชtre installรฉes pour qu'il puisse fonctionner.
import math
import tkinter as Tk
import tkinter.filedialog as tkf
import random as alea
import matplotlib.pyplot as repere
import numpy as np
import builtins
from scipy.stats import norm
Puis il crรฉe des fonctions qui sont pour le moins peu utiles ou en tout cas pas nรฉcessaires pour un รฉlรจve mรชme dรฉbutant. Voyons deux exemples, avec lycee et sans lycee, qui pourraient รชtre utilisรฉs.
Pour les besoins de la dรฉmonstration lycee.py est placรฉ dans le mรชme dossier que ce notebook sur la raspberryPi de Tremplin.
Step6: On constate donc dans le premier cas que la dรฉmarche masque ce que fait le programme en privant l'รฉlรจve d'un commentaire intermรฉdiaire expliquant les rรฉaffectation, et que la commande python est remplacรฉe par une fonction qui au final n'a aucune utilitรฉ pour comprendre comment le programme fonctionne.
En ajoutant l'importation de time on peut mรชme marquer des pauses qui permettent ร l'รฉlรจve de comprendre la sรฉquence qui se produit dans le programme. | Python Code:
#solution de l'รฉquation ax = b pour a = 2 et b = 6
a=2
b=6
print("la solution solution de ",a,"* x = ",b,"est :")
print("x =",b/a)
#rรฉsoudre l'รฉquation ax=b pour aโ 0
a = int(input("entrez une valeur pour a โ 0 : "))
#on s'assure que a est bien diffรฉrent de 0
if a==0:
#on redemande la saisie de a
a = int(input("Attention entrez une valeur pour a โ 0 : "))
#tant que a vaut 0
while a==0:
#on demande une valuer diffรฉrente de 0
a = int(input("Attention entrez une valeur pour a โ 0 : "))
#dรจs que a est diffรฉrent de 0
else:
#on demande la saisie de b
b = int(input("entrez une valeur pour b : "))
# on affiche le rรฉsultat
print("la solution solution de ",a,"* x = ",b,"est :")
print("x =",b/a)
else:
#quand a est diffรฉrent de 0 on demande la saisie de b
b = int(input("entrez une valeur pour b : "))
print("la solution solution de ",a,"* x = ",b,"est :")
print("x =",b/a)
Explanation: Reprise pour proof of concept du pdf PythonEdu Amiens
Passer par un logiciel Windows alors que jupyter et jupyterhub existent me semble une grossiรจre erreur d'aiguillage.
Je vais tenter de dรฉmontrer mon point de vue ร partir de ce Notebook. Les librairies utilisรฉes par le module lycee sont instalรฉes en python3 sur la raspberryPi du projet Tremplin.
Il va de soi que ce cahier peut s'exรฉcuter en local sur toute machine disposant d'une instalation de jupyter notebook.
L'รฉcriture du code en Python : des rรจgles ร respecter
l'indentation et les commentaires ##
La position du premier caractรจre d'une ligne de code obรฉit ร une rรจgle simple :
Une ligne de code commence au dรฉbut d'une ligne sauf si ":" terminent la ligne prรฉcรฉdente. Dans ce cas un dรฉcalage est nรฉcessaire, il s'agit d'une indentation.
Le nombre d'espaces est paramรฉtrable dans les รฉditeurs de code python mais gรฉnรฉralement il s'agit d'une tabulation.
Les commentaires sont insรฉrรฉs ร la suite d'un #
Les codes des champs suivants illustrent ce propos.
remarque : L'invite de saisie input() est utilisรฉe dans sa version "importer un entier" int(input()).
End of explanation
# Pensez ร faire une copie de ce Cahier avant de rรฉdiger "File โฆโฆ "Make a Copy" "
# Renommez le cahier avec votre nom.
# Utilisez ce champ de saisie pour rรฉdiger votre programme puis le tester.
Explanation: # Exercice 1 : appliquer ses connaissances #
Rรฉdigez un programme qui vous donne la solution de l'รฉquation
2ax + b = C
pour aโ 0
End of explanation
# utilisation de la casse dans les variables
CoefficientDirecteur=2
ordonneeAlOrigine=6
print ("y =",CoefficientDirecteur,"x +", ordonneeAlOrigine)
# non respect de la casse dans un nom de variable
CoefficientDirecteur=2
ordonneeAlOrigine=6
print ("y=",Coefficientdirecteur,"x +", ordonneeAlorigine)
Explanation: la casse des caractรจres
Il est important de se souvenir que les instruction en python s'รฉcrivent en minuscule.
Lorsque une variable contient des Majuscules il faut dans tout le programme conserver ces majuscules.
Les champs de code suivants montrent ce qui se produit quand on suit la rรจgle puis lorsqu'on ne la suit pas.
End of explanation
# affectation d'une chaine de caractรจre
a="ceci est une chaรฎne de caractรจres"
print(a)
b='ceci est une chaรฎne de caractรจres'
print(b)
c="l'idรฉe est de ne pas mรฉlanger les guillemets et les apostrophes"
print(c)
d='"Bien mal acqui de profite jamais"'
print(d)
e=3.8
print(e)
Explanation: L'affectation d'une valeur ร une variable : utilisation du signe =
L'affectation peut se faire pour des entiers, des rรฉels, des chaines de caractรจres
Le champ de code suivant montre l'unitรฉ de l'affectation quelque soit le type de donnรฉes.
End of explanation
#affection de a et b
a,b=" ceci est une chaรฎne de caractรจres ",' ceci est une chaรฎne de caractรจres '
print(a,b)
f=a+b
print(f)
e,g=3,4
print(e," : ",g)
h,i=e+g,e-g
print(h," : ",i)
Explanation: Les variables peuvent รชtre affectรฉes par lot. On peut rรฉaffecter les variables en mรชme temps ou successivement.
End of explanation
from lycee import *
# version utilisant lycee
from lycee import *
x=demande('Entrez une valeur pour x = ')
y=demande('Entrez une valeur pour y = ')
x,y=x+y,x-y
print("x = ",x,"y = ",y)
x,y=x+y,x-y
print ("maintenant , x =",x,"et y =",y)
# version utilisant python
x = int(input("Entrez un entier x : "))
y = int(input("Entrez un entier y : "))
print("rรฉaffectation de x ET y avec x,y = x+y,x-y ")
x,y=x+y,x-y
print("nouvelle valeur de x = ",x,"nouvelle valeur de y = ",y)
print("rรฉaffectation de x ET y avec x,y = x+y,x-y")
x,y=x+y,x-y
print("Maintenant la valeur de x = ",x," et la valeur de y = ",y)
Explanation: De l'utilitรฉ du module lycee pour nos รฉlรจves ?
Le projet Pythonedu semble considรฉrer qu'il faut mettre ร disposition des รฉlรจves un module qui masque les fonctionnalitรฉs de python ...
Il me semble en tant que prof validรฉ ISN que cette dรฉmarche est contre productive.
Que fait lycee ?
Il est liรฉ ร des librairies python qui doivent รชtre installรฉes pour qu'il puisse fonctionner.
import math
import tkinter as Tk
import tkinter.filedialog as tkf
import random as alea
import matplotlib.pyplot as repere
import numpy as np
import builtins
from scipy.stats import norm
Puis il crรฉe des fonctions qui sont pour le moins peu utiles ou en tout cas pas nรฉcessaires pour un รฉlรจve mรชme dรฉbutant. Voyons deux exemples, avec lycee et sans lycee, qui pourraient รชtre utilisรฉs.
Pour les besoins de la dรฉmonstration lycee.py est placรฉ dans le mรชme dossier que ce notebook sur la raspberryPi de Tremplin.
End of explanation
# version utilisant python
import time
x = int(input("Entrez un entier x : "))
y = int(input("Entrez un entier y : "))
print("rรฉaffectation de x ET y avec x,y = x+y,x-y ")
x,y=x+y,x-y
print("Calculez les valeurs attendue de x et y")
time.sleep(10)
print("nouvelle valeur de x = ",x,"nouvelle valeur de y = ",y)
time.sleep(10)
print("rรฉaffectation de x ET y avec x,y = x+y,x-y")
x,y=x+y,x-y
print("Calculez les valeurs attendue de x et y")
time.sleep(10)
print("Maintenant la valeur de x = ",x," et la valeur de y = ",y)
Explanation: On constate donc dans le premier cas que la dรฉmarche masque ce que fait le programme en privant l'รฉlรจve d'un commentaire intermรฉdiaire expliquant les rรฉaffectation, et que la commande python est remplacรฉe par une fonction qui au final n'a aucune utilitรฉ pour comprendre comment le programme fonctionne.
En ajoutant l'importation de time on peut mรชme marquer des pauses qui permettent ร l'รฉlรจve de comprendre la sรฉquence qui se produit dans le programme.
End of explanation |
15,131 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Exercise
Step1: Part 2
Use indexing (not a for loop) to find the 9 values representing $y_{i}+y_{i-1}$ for $i$ between 1 and 10.
Hint
Step2: Part 3
Write a function trapz(x, y), that applies the trapezoid formula to pre-computed values, where x and y are 1-d arrays. The function should not use a for loop.
Step3: Part 4
Verify that your function is correct by using the arrays created in #1 as input to trapz. Your answer should be a close approximation of $\int_0^3 x^2$ which is $9$.
Step4: Part 5 (extension)
numpy and scipy.integrate provide many common integration schemes. Find the documentation for NumPy's own version of the trapezoidal integration scheme and check its result with your own
Step5: Part 6 (extension)
Write a function trapzf(f, a, b, npts=100) that accepts a function f, the endpoints a and b and the number of samples to take npts. Sample the function uniformly at these
points and return the value of the integral.
Use the trapzf function to identify the minimum number of sampling points needed to approximate the integral $\int_0^3 x^2$ with an absolute error of $<=0.0001$. (A loop is necessary here) | Python Code:
import numpy as np
x = np.linspace(0, 3, 10)
y = x ** 2
print(x)
print(y)
Explanation: Exercise: trapezoidal integration
In this exercise, you are tasked with implementing the simple trapezoid rule
formula for numerical integration. If we want to compute the definite integral
$$
\int_{a}^{b}f(x)dx
$$
we can partition the integration interval $[a,b]$ into smaller subintervals. We then approximate the area under the curve for each subinterval by calculating the area of the trapezoid created by linearly interpolating between the two function values at each end of the subinterval:
For a pre-computed $y$ array (where $y = f(x)$ at discrete samples) the trapezoidal rule equation is:
$$
\int_{a}^{b}f(x)dx\approx\frac{1}{2}\sum_{i=1}^{n}\left(x_{i}-x_{i-1}\right)\left(y_{i}+y_{i-1}\right).
$$
In pure python, this can be written as:
def trapz_slow(x, y):
area = 0.
for i in range(1, len(x)):
area += (x[i] - x[i-1]) * (y[i] + y[i-1])
return area / 2
Exercise 2
Part 1
Create two arrays $x$ and $y$, where $x$ is a linearly spaced array in the interval $[0, 3]$ of length 10, and $y$ represents the function $f(x) = x^2$ sampled at $x$.
End of explanation
y_roll_sum = y[:-1] + y[1:]
print(y_roll_sum)
Explanation: Part 2
Use indexing (not a for loop) to find the 9 values representing $y_{i}+y_{i-1}$ for $i$ between 1 and 10.
Hint: What indexing would be needed to get all but the last element of the 1d array y. Similarly what indexing would be needed to get all but the first element of a 1d array.
End of explanation
def trapz(x, y):
return 0.5 * np.sum((x[1:] - x[:-1]) * (y[:-1] + y[1:]))
Explanation: Part 3
Write a function trapz(x, y), that applies the trapezoid formula to pre-computed values, where x and y are 1-d arrays. The function should not use a for loop.
End of explanation
trapz(x, y)
Explanation: Part 4
Verify that your function is correct by using the arrays created in #1 as input to trapz. Your answer should be a close approximation of $\int_0^3 x^2$ which is $9$.
End of explanation
print(np.trapz(y, x))
Explanation: Part 5 (extension)
numpy and scipy.integrate provide many common integration schemes. Find the documentation for NumPy's own version of the trapezoidal integration scheme and check its result with your own:
End of explanation
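# Equivalent checks with SciPy (a sketch; in older SciPy releases `simpson` is `simps`):
from scipy.integrate import quad, simpson
print(simpson(y, x=x))                  # Simpson's rule on the sampled points
print(quad(lambda t: t ** 2, 0, 3)[0])  # adaptive quadrature on the function itself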
def trapzf(f, a, b, npts=100):
x = np.linspace(a, b, npts)
y = f(x)
return trapz(x, y)
def x_squared(x):
return x ** 2
abs_err = 1.0
n_samples = 0
expected = 9
while abs_err > 0.0001:
n_samples += 1
integral = trapzf(x_squared, 0, 3, npts=n_samples)
abs_err = np.abs(integral - 9)
print('Minimum samples for absolute error less than or equal to 0.0001:', n_samples)
Explanation: Part 6 (extension)
Write a function trapzf(f, a, b, npts=100) that accepts a function f, the endpoints a and b and the number of samples to take npts. Sample the function uniformly at these
points and return the value of the integral.
Use the trapzf function to identify the minimum number of sampling points needed to approximate the integral $\int_0^3 x^2$ with an absolute error of $<=0.0001$. (A loop is necessary here)
End of explanation |
15,132 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Comparing Encoder-Decoders Analysis
Model Architecture
Step1: Perplexity on Each Dataset
Step2: Loss vs. Epoch
Step3: Perplexity vs. Epoch
Step4: Generations
Step5: BLEU Analysis
Step6: N-pairs BLEU Analysis
This analysis randomly samples 1000 pairs of generations/ground truths and treats them as translations, giving their BLEU score. We can expect very low scores for the ground truth, while high scores can expose hyper-common generations
Step7: Alignment Analysis
This analysis computes the average Smith-Waterman alignment score for generations, with the same intuition as N-pairs BLEU, in that we expect low scores for the ground truth and hyper-common generations to raise the scores | Python Code:
report_files = ["/Users/bking/IdeaProjects/LanguageModelRNN/experiment_results/encdec_noing6_200_512_04drb/encdec_noing6_200_512_04drb.json", "/Users/bking/IdeaProjects/LanguageModelRNN/experiment_results/encdec_noing10_200_512_04drb/encdec_noing10_200_512_04drb.json", "/Users/bking/IdeaProjects/LanguageModelRNN/experiment_results/encdec_noing15_200_512_04drb/encdec_noing15_200_512_04drb.json", "/Users/bking/IdeaProjects/LanguageModelRNN/experiment_results/encdec_noing23_200_512_04drb/encdec_noing23_200_512_04drb.json"]
log_files = ["/Users/bking/IdeaProjects/LanguageModelRNN/experiment_results/encdec_noing6_200_512_04drb/encdec_noing6_200_512_04drb_logs.json", "/Users/bking/IdeaProjects/LanguageModelRNN/experiment_results/encdec_noing10_200_512_04drb/encdec_noing10_200_512_04drb_logs.json", "/Users/bking/IdeaProjects/LanguageModelRNN/experiment_results/encdec_noing15_200_512_04drb/encdec_noing15_200_512_04drb_logs.json", "/Users/bking/IdeaProjects/LanguageModelRNN/experiment_results/encdec_noing23_200_512_04drb/encdec_noing23_200_512_04drb_logs.json"]
reports = []
logs = []
import json
import matplotlib.pyplot as plt
import numpy as np
for report_file in report_files:
with open(report_file) as f:
reports.append((report_file.split('/')[-1].split('.json')[0], json.loads(f.read())))
for log_file in log_files:
with open(log_file) as f:
logs.append((log_file.split('/')[-1].split('.json')[0], json.loads(f.read())))
for report_name, report in reports:
print '\n', report_name, '\n'
print 'Encoder: \n', report['architecture']['encoder']
print 'Decoder: \n', report['architecture']['decoder']
Explanation: Comparing Encoder-Decoders Analysis
Model Architecture
End of explanation
%matplotlib inline
from IPython.display import HTML, display
def display_table(data):
display(HTML(
u'<table><tr>{}</tr></table>'.format(
u'</tr><tr>'.join(
u'<td>{}</td>'.format('</td><td>'.join(unicode(_) for _ in row)) for row in data)
)
))
def bar_chart(data):
n_groups = len(data)
train_perps = [d[1] for d in data]
valid_perps = [d[2] for d in data]
test_perps = [d[3] for d in data]
fig, ax = plt.subplots(figsize=(10,8))
index = np.arange(n_groups)
bar_width = 0.3
opacity = 0.4
error_config = {'ecolor': '0.3'}
train_bars = plt.bar(index, train_perps, bar_width,
alpha=opacity,
color='b',
error_kw=error_config,
label='Training Perplexity')
valid_bars = plt.bar(index + bar_width, valid_perps, bar_width,
alpha=opacity,
color='r',
error_kw=error_config,
label='Valid Perplexity')
test_bars = plt.bar(index + 2*bar_width, test_perps, bar_width,
alpha=opacity,
color='g',
error_kw=error_config,
label='Test Perplexity')
plt.xlabel('Model')
plt.ylabel('Scores')
plt.title('Perplexity by Model and Dataset')
plt.xticks(index + bar_width / 3, [d[0] for d in data])
plt.legend()
plt.tight_layout()
plt.show()
data = [['<b>Model</b>', '<b>Train Perplexity</b>', '<b>Valid Perplexity</b>', '<b>Test Perplexity</b>']]
for rname, report in reports:
data.append([rname, report['train_perplexity'], report['valid_perplexity'], report['test_perplexity']])
display_table(data)
bar_chart(data[1:])
Explanation: Perplexity on Each Dataset
End of explanation
%matplotlib inline
plt.figure(figsize=(10, 8))
for rname, l in logs:
for k in l.keys():
plt.plot(l[k][0], l[k][1], label=str(k) + ' ' + rname + ' (train)')
plt.plot(l[k][0], l[k][2], label=str(k) + ' ' + rname + ' (valid)')
plt.title('Loss v. Epoch')
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.legend()
plt.show()
Explanation: Loss vs. Epoch
End of explanation
%matplotlib inline
plt.figure(figsize=(10, 8))
for rname, l in logs:
for k in l.keys():
plt.plot(l[k][0], l[k][3], label=str(k) + ' ' + rname + ' (train)')
plt.plot(l[k][0], l[k][4], label=str(k) + ' ' + rname + ' (valid)')
plt.title('Perplexity v. Epoch')
plt.xlabel('Epoch')
plt.ylabel('Perplexity')
plt.legend()
plt.show()
Explanation: Perplexity vs. Epoch
End of explanation
def print_sample(sample, best_bleu=None):
enc_input = ' '.join([w for w in sample['encoder_input'].split(' ') if w != '<pad>'])
gold = ' '.join([w for w in sample['gold'].split(' ') if w != '<mask>'])
print('Input: '+ enc_input + '\n')
print('Gend: ' + sample['generated'] + '\n')
print('True: ' + gold + '\n')
if best_bleu is not None:
cbm = ' '.join([w for w in best_bleu['best_match'].split(' ') if w != '<mask>'])
print('Closest BLEU Match: ' + cbm + '\n')
print('Closest BLEU Score: ' + str(best_bleu['best_score']) + '\n')
print('\n')
def display_sample(samples, best_bleu=False):
for enc_input in samples:
data = []
for rname, sample in samples[enc_input]:
gold = ' '.join([w for w in sample['gold'].split(' ') if w != '<mask>'])
data.append([rname, '<b>Generated: </b>' + sample['generated']])
if best_bleu:
cbm = ' '.join([w for w in sample['best_match'].split(' ') if w != '<mask>'])
data.append([rname, '<b>Closest BLEU Match: </b>' + cbm + ' (Score: ' + str(sample['best_score']) + ')'])
data.insert(0, ['<u><b>' + enc_input + '</b></u>', '<b>True: ' + gold+ '</b>'])
display_table(data)
def process_samples(samples):
# consolidate samples with identical inputs
result = {}
for rname, t_samples, t_cbms in samples:
for i, sample in enumerate(t_samples):
enc_input = ' '.join([w for w in sample['encoder_input'].split(' ') if w != '<pad>'])
if t_cbms is not None:
sample.update(t_cbms[i])
if enc_input in result:
result[enc_input].append((rname, sample))
else:
result[enc_input] = [(rname, sample)]
return result
samples = process_samples([(rname, r['train_samples'], r['best_bleu_matches_train'] if 'best_bleu_matches_train' in r else None) for (rname, r) in reports])
display_sample(samples, best_bleu='best_bleu_matches_train' in reports[1][1])
samples = process_samples([(rname, r['valid_samples'], r['best_bleu_matches_valid'] if 'best_bleu_matches_valid' in r else None) for (rname, r) in reports])
display_sample(samples, best_bleu='best_bleu_matches_valid' in reports[1][1])
samples = process_samples([(rname, r['test_samples'], r['best_bleu_matches_test'] if 'best_bleu_matches_test' in r else None) for (rname, r) in reports])
display_sample(samples, best_bleu='best_bleu_matches_test' in reports[1][1])
Explanation: Generations
End of explanation
def print_bleu(blue_structs):
data= [['<b>Model</b>', '<b>Overall Score</b>','<b>1-gram Score</b>','<b>2-gram Score</b>','<b>3-gram Score</b>','<b>4-gram Score</b>']]
for rname, blue_struct in blue_structs:
data.append([rname, blue_struct['score'], blue_struct['components']['1'], blue_struct['components']['2'], blue_struct['components']['3'], blue_struct['components']['4']])
display_table(data)
# Training Set BLEU Scores
print_bleu([(rname, report['train_bleu']) for (rname, report) in reports])
# Validation Set BLEU Scores
print_bleu([(rname, report['valid_bleu']) for (rname, report) in reports])
# Test Set BLEU Scores
print_bleu([(rname, report['test_bleu']) for (rname, report) in reports])
# All Data BLEU Scores
print_bleu([(rname, report['combined_bleu']) for (rname, report) in reports])
Explanation: BLEU Analysis
End of explanation
# Training Set BLEU n-pairs Scores
print_bleu([(rname, report['n_pairs_bleu_train']) for (rname, report) in reports])
# Validation Set n-pairs BLEU Scores
print_bleu([(rname, report['n_pairs_bleu_valid']) for (rname, report) in reports])
# Test Set n-pairs BLEU Scores
print_bleu([(rname, report['n_pairs_bleu_test']) for (rname, report) in reports])
# Combined n-pairs BLEU Scores
print_bleu([(rname, report['n_pairs_bleu_all']) for (rname, report) in reports])
# Ground Truth n-pairs BLEU Scores
print_bleu([(rname, report['n_pairs_bleu_gold']) for (rname, report) in reports])
Explanation: N-pairs BLEU Analysis
This analysis randomly samples 1000 pairs of generations/ground truths and treats them as translations, giving their BLEU score. We expect very low scores for the ground truth, while unusually high scores can expose hyper-common generations
End of explanation
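For reference, a minimal sketch of how such an n-pairs score could be computed with NLTK (the report's own implementation may differ; texts is a placeholder list of generated strings):
import random
from nltk.translate.bleu_score import corpus_bleu, SmoothingFunction

def n_pairs_bleu(texts, n_pairs=1000, seed=0):
    # Randomly pair up strings and score one against the other as if it were a translation.
    rng = random.Random(seed)
    hypotheses, references = [], []
    for _ in range(n_pairs):
        a, b = rng.sample(texts, 2)
        hypotheses.append(a.split())
        references.append([b.split()])
    smooth = SmoothingFunction().method1
    return corpus_bleu(references, hypotheses, smoothing_function=smooth)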
def print_align(reports):
data= [['<b>Model</b>', '<b>Average (Train) Generated Score</b>','<b>Average (Valid) Generated Score</b>','<b>Average (Test) Generated Score</b>','<b>Average (All) Generated Score</b>', '<b>Average (Gold) Score</b>']]
for rname, report in reports:
data.append([rname, report['average_alignment_train'], report['average_alignment_valid'], report['average_alignment_test'], report['average_alignment_all'], report['average_alignment_gold']])
display_table(data)
print_align(reports)
Explanation: Alignment Analysis
This analysis computes the average Smith-Waterman alignment score for generations, with the same intuition as n-pairs BLEU: we expect low scores for the ground truth, while hyper-common generations raise the scores
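For reference, a minimal sketch of a Smith-Waterman local alignment score over token sequences (the scoring scheme and gap penalties here are assumptions, not necessarily the report's own):
def smith_waterman_score(a_tokens, b_tokens, match=2, mismatch=-1, gap=-1):
    # Dynamic-programming local alignment: return the best score over all local alignments.
    rows, cols = len(a_tokens) + 1, len(b_tokens) + 1
    H = [[0] * cols for _ in range(rows)]
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            diag = H[i - 1][j - 1] + (match if a_tokens[i - 1] == b_tokens[j - 1] else mismatch)
            H[i][j] = max(0, diag, H[i - 1][j] + gap, H[i][j - 1] + gap)
            best = max(best, H[i][j])
    return best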
End of explanation |
15,133 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
1A.algo - Arbre et Trie (correction)
Correction.
Step1: Exercise 1
Step2: Exercise 2
Step3: With %timeit
Step4: Exercise 3
Step5: Exercise 4
Step6: Let $N$ be the number of words in the list
Step7: A few additional explanations about this correction
Step8: To illustrate the tree structure of the trie, we display it with the following function
Step9: One drawback remains with this representation. If we build the trie for the word ["aaa"] or for the words ["aa", "aaa"], we obtain the same trie
Step10: To avoid this, the simplest solution is to represent the end of a word as a character in its own right.
Step11: Exercise 7
Let $L$ be the maximum word length and $C$ the number of distinct letters; with a trie, the cost of a lookup is bounded by
Step12: Once again, the time needed to build the trie is not taken into account. The more lookups there are to perform, the more negligible it becomes.
The dictionary is a common object in most languages. In Python it uses a hash table, and the cost of accessing an element is not $O(\ln n)$ but $O(n)$ in the worst case (see time complexity). In C++, the dictionary (or map) uses a binary tree and accessing an element has a logarithmic cost
Step13: Trees are particular graphs because they contain no cycles. Their nodes can be traversed and numbered. They are widely used in machine learning with decision trees and random forests. They are sometimes hidden, as in the case of binary (dichotomic) search, which can be implemented on top of a tree structure.
In the case of binary search, we assume that the number of child nodes is always 2. The alphabetical order is the following | Python Code:
from jyquickhelper import add_notebook_menu
add_notebook_menu()
Explanation: 1A.algo - Arbre et Trie (correction)
Correction.
End of explanation
import random
def mot_alea (l) :
l = [ chr(97+random.randint(0,25)) for i in range(l) ]
return "".join(l)
taille = 20
N = 10000
mots = [ mot_alea(taille) for _ in range (N) ]
print(len(mots))
Explanation: Exercise 1
End of explanation
import time
debut = time.perf_counter()
for k in mots :
i = mots.index(k)
fin = time.perf_counter()
print ("recherche simple",fin - debut)
Explanation: Exercise 2
End of explanation
%timeit for k in mots : i = mots.index(k)
Explanation: With %timeit:
End of explanation
def dicho (mots, x) :
a = 0
b = len(mots)-1
while a < b :
m = (a+b)//2
t = mots[m]
if t < x :
a = m+1
elif t == x :
return m
else :
b = m-1
return a
mots.sort()
debut = time.perf_counter()
for k in mots :
i = dicho(mots, k)
fin = time.perf_counter()
print ("dichotomie",fin - debut)
%timeit for k in mots : i = dicho(mots, k)
Explanation: Exercise 3: binary search (dichotomy)
End of explanation
import math
for N in [10, 100, 1000, 10000, 100000] :
mots = [ mot_alea(taille) for _ in range (N) ]
tolook = [ mots[ random.randint(0,len(mots)-1) ] for i in range(0,1000) ]
mots.sort()
debut = time.perf_counter()
for k in tolook :
i = mots.index(k)
fin = time.perf_counter()
ds = fin-debut
debut = time.perf_counter()
for k in tolook :
i = dicho(mots, k)
fin = time.perf_counter()
dd = fin-debut
print(N, "simple",ds, "dicho",dd, "ratio", ds / max(dd, 1), " ratio thรฉorique ",
len(mots)/math.log(len(mots)) * math.log(2)/30)
for N in [10, 100, 1000, 10000, 100000] :
print("N=",N)
mots = [ mot_alea(taille) for _ in range (N) ]
tolook = [ mots[ random.randint(0,len(mots)-1) ] for i in range(0,1000) ]
mots.sort()
%timeit for k in tolook : i = mots.index(k)
%timeit for k in tolook : i = dicho(mots, k)
Explanation: Exercise 4
End of explanation
def build_trie(mots) :
trie = { }
for m in mots :
r = trie
for c in m :
if c not in r : r[c] = { }
r = r[c]
return trie
mots = [ "aaa", "aba", "aab", "baa", "bbb", "bba", "bab" ]
trie = build_trie(mots)
print(trie)
Explanation: Let $N$ be the number of words in the list:
cost of the naive search: $O(N)$
cost of the binary search: $O(\ln N)$
The ratio $N/\ln N$ measured in practice should be roughly equal to the theoretical ratio, up to a multiplicative factor. The sorting of the array that precedes the binary search is not taken into account; the more searches are performed, the more marginal its cost becomes. This cost nevertheless explains why binary search is not always used.
Exercise 5: trie
End of explanation
def lookup(trie, m) :
r = trie
for c in m :
if c in r :
r = r[c]
else :
return False
return True
for k in mots :
print(k, lookup(trie, k))
print("bcc", lookup(trie, "bcc"))
Explanation: A few additional explanations about this correction:
Question about tries
Exercise 6: searching in a trie
End of explanation
def print_trie(trie, niveau = 0):
for k,v in sorted(trie.items()):
print(" " * niveau + k)
if len(v) > 0 :
print_trie(v, niveau+1)
print_trie(trie)
Explanation: To illustrate the tree structure of the trie, we display it with the following function:
End of explanation
print_trie (build_trie( ["aaa"]) )
print_trie (build_trie( ["aaa", "aa"]) )
Explanation: One drawback remains with this representation. If we build the trie for the word ["aaa"] or for the words ["aa", "aaa"], we obtain the same trie:
End of explanation
print_trie (build_trie( ["aaa*"]) )
print_trie (build_trie( ["aaa*", "aa*"]) )
Explanation: To avoid this, the simplest solution is to represent the end of a word as a character in its own right.
End of explanation
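With that convention, a possible variant of the lookup function (a hypothetical helper, not part of the original correction) accepts only complete words, i.e. words whose terminating "*" node is present in the trie:
def lookup_word(trie, m):
    # True only if m was inserted as a complete word, i.e. terminated by "*"
    r = trie
    for c in m + "*":
        if c in r:
            r = r[c]
        else:
            return False
    return True
print(lookup_word(build_trie(["aaa*", "aa*"]), "aa"))  # True: "aa" is a stored word
print(lookup_word(build_trie(["aaa*"]), "aa"))         # False: "aa" is only a prefix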
for N in [10, 100, 1000, 10000, 100000, 200000, 400000] :
mots = [ mot_alea(taille) for _ in range (N) ]
tolook = [ mots[ random.randint(0,len(mots)-1) ] for i in range(0,10000) ]
trie = build_trie(mots)
mots.sort()
debut = time.perf_counter()
for k in tolook :
i = dicho(mots, k)
fin = time.perf_counter()
dd = fin-debut
debut = time.perf_counter()
for k in tolook :
i = lookup(trie, k)
fin = time.perf_counter()
dt = fin - debut
print(N, "dicho",dd, "trie", dt)
for N in [10, 100, 1000, 10000, 100000, 200000, 400000] :
print("N=",N)
mots = [ mot_alea(taille) for _ in range (N) ]
tolook = [ mots[ random.randint(0,len(mots)-1) ] for i in range(0,10000) ]
trie = build_trie(mots)
mots.sort()
%timeit for k in tolook : i = dicho(mots, k)
%timeit for k in tolook : i = lookup(trie, k)
Explanation: Exercise 7
Let $L$ be the maximum word length and $C$ the number of distinct letters; with a trie, the cost of a lookup is bounded by $O(L \ln C)$. We reuse the code from exercise 5 and add the code associated with the trie. We perform 10000 lookups instead of 1000 to get a better estimate of the difference (to convince yourself, just compare the timings obtained by two runs of the same code).
End of explanation
class Arbre:
def __init__(self, value):
self.value = value
self.children = [ ]
def add_child(self, child):
self.children.append(child)
def __str__(self):
rows = [ "value={0}".format(self.value) ]
for c in self.children:
s = str(c)
lines = [ " " + l for l in s.split("\n") ]
rows.extend(lines)
return "\n".join(rows)
root = Arbre("racine")
child1 = Arbre("child 1")
child1.add_child ( Arbre("child 2") )
child1.add_child ( Arbre("child 1000") )
root.add_child(child1)
root.add_child( Arbre ("child 3") )
print(root)
Explanation: Once again, the time needed to build the trie is not taken into account. The more lookups there are to perform, the more negligible it becomes.
The dictionary is a common object in most languages. In Python it uses a hash table, and the cost of accessing an element is not $O(\ln n)$ but $O(n)$ in the worst case (see time complexity). In C++, the dictionary (or map) uses a binary tree and accessing an element has a logarithmic cost: Standard C++ Containers.
More details
Binary search is equivalent to the search performed with a binary search tree (provided the latter is balanced, e.g. a red-black tree), which orders the elements alphabetically. A tree is usually represented by a class, and not by a dictionary as the last part of this session might have suggested.
End of explanation
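As a quick check of that remark, the same benchmark can be run against a plain Python set, whose average-case membership test is constant time (a sketch reusing the mots, tolook and dicho objects defined above):
# Hash-table membership (set) versus binary search on the same queries
mots_set = set(mots)
%timeit for k in tolook : k in mots_set
%timeit for k in tolook : i = dicho(mots, k)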
class ArbreDicho:
def __init__(self, value):
self.value = value
self.before = None
self.after = None
def __str__(self):
return "value={0}".format(self.value)
def add_before(self, child):
self.before = child
def add_after(self, child):
self.after = child
def find(self, word):
if self.value == word : return self
elif word < self.value :
if self.before is None : return None
else : return self.before.find(word)
else :
if self.after is None : return None
else : return self.after.find(word)
def sorted_list(self):
res = [ ]
if self.before is not None: res.extend ( self.before.sorted_list() )
res.append(self.value)
if self.after is not None: res.extend ( self.after.sorted_list() )
return res
# build a tree whose nodes satisfy the property stated above (the words appear in the right order)
root = ArbreDicho("milieu")
root.add_before(ArbreDicho("avant"))
root.add_after(ArbreDicho("zillion"))
root.before.add_before(ArbreDicho("alphabet"))
root.before.add_after(ArbreDicho("avant aprรจs"))
# check that this is indeed the case
all = root.sorted_list()
assert all == sorted(all)
print(all)
# perform the search
for a in all:
f = root.find(a)
print(f)
Explanation: Trees are particular graphs because they contain no cycles. Their nodes can be traversed and numbered. They are widely used in machine learning with decision trees and random forests. They are sometimes hidden, as in the case of binary search, which can be implemented on top of a tree structure.
In the case of binary search, we assume that the number of child nodes is always 2. The alphabetical order is the following: child node 1, current node, child node 2. Both child nodes may be null. The implementation of the tree would be the following:
End of explanation |
15,134 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Abstract
In this work we take a look at data visualization using Python and the Titanic dataset. It is not intended to be the most accurate analysis of the Titanic dataset; it is a project resulting from the Data Analyst Nanodegree course by Udacity. It is nevertheless a serious project, so I believe it will be interesting to demonstrate the process of generating a data visualization and drawing some superficial conclusions about the data.
Introduction
This work aims to explore one of the greatest disasters caused by nature and by successive human failures. It is almost certain (based on my feeling) that you, reading this document, have already heard of the tragic story of the Titanic; I could easily list 50 people who have heard it, or rather watched this story more than 5 times - my congratulations to James Cameron, Leonardo DiCaprio, Kate Winslet and everyone else involved. But with a little effort Jack could have fit on that door too, right? - anyway, the goal here is not to talk about the history of the Titanic, let alone the movie, "Near, far, wherever you are I believe that the heart does go on"; if you want to know about the history and the movie, just search on Google (http
Step1: Reading the file
Reads (loads) the file containing the dataset and superficially checks whether the data is correct
Step2: Data cleaning
Data loaded, so is it time to start exploring? More or less. In fact, to reach the goal of exploring and finally presenting the results in an easily observable way, we first need to go through another phase: the cleaning or correction of data and records. This phase cannot be skipped, since we would risk working with a dataset containing some problem that would invalidate the work or produce errors in the results, and it is not the goal of this work to make false revelations. Next, 3 actions will be executed in this cleaning process, keeping in mind the objectives defined at the beginning of the work.
1. Check the data related to sex
All 3 questions will require handling the passengers' sex information, so we need to evaluate whether everything is ok with this data. By analysing the dataset it is possible to observe that there is a column named Sex (surprise!), which apparently contains the sex of each passenger in the list - apparently, because there may be some invalid value or a missing value for some record (read: passenger).
So the first thing to do is to find out whether all values match the options male or female, which are the expected values in the Sex column for each record. To accomplish this task an "incredible" single line of code will be needed
Step3: Nice, right? With a single line of code it was possible to check the Sex column of 891 records, NICE! After running the code above we can observe that there is no missing data and that the only values present are
Step4: The "investigation" revealed 177 null values (NaN) in the Age column of the titanic_dataframe dataframe and, unfortunately, in the context of this work it is not possible to recover the missing values, so we will have to drop the records that have no value in the Age column.
The next code cell will keep only the non-null data from titanic_dataframe; in addition, the valid data will already be split by sex, which avoids "two trips", and a test will also be run on the Age value of each record to check that it is numeric - in other words, 3 birds with one stone. This test cannot be done before removing the records with NaN in the Age column, since numpy.isreal apparently treats NaN as a real value.
It is important to point out that the titanic_dataframe dataframe will not be modified; this keeps the original dataset intact, allowing it to be used in the future should it become necessary.
Step5: The code may look long, but only 4 lines are really necessary; the other lines are only there to make explicit that the operation was carried out successfully. Below, the first 5 records of the newly generated dataframes will be displayed, just for peace of mind and to get an idea of whether everything really went as expected.
Step6: Apparently everything is correct, so we can move on to the next and last cleaning step.
3. Check the survival values
Step7: Apparently no data is missing, so we will now check whether there are values other than 0 and 1 in the Survived column of each dataframe.
Step9: E aparentemente nenhum dado รฉ diferente de 0 e 1 e assim o processo de limpeza รฉ finalizado, agora รฉ o momento de partir para a exploraรงรฃo e revelaรงรตes, o que serรก que esse conjunto de dados irรก revelar? Algo ele terรก que revelar, por bem ou por mal, nรฃo cheguei atรฉ aqui atoa! #PAZ
Exploraรงรฃo e Revelaรงรตes
De agora em diante este trabalho terรก como objetivo buscar de forma direta as respostas para as 3 questรตes levantadas na introduรงรฃo.
1. Qual era a quantidade de pessoas por sexo?
Essa questรฃo รฉ bem simples de ser respondida com o conjunto de dados em questรฃo, ainda bem jรก que ela รฉ essencial para a anรกlise, muito do trabalho para resolver essa questรฃo jรก foi realizado na etapa de limpeza, agora sรณ resta gerar uma visualizaรงรฃo para comunicar os resultados.
Perceba que serรฃo utilizados os dataframes
Step10: The code above generates a bar chart; the "X" axis represents two categories, in this case sex, while the "Y" axis represents the count of each sex. It is possible to observe that males are the majority in the dataset, more specifically in the dataframes containing only records valid for analysis.
The code that generates the chart may seem complex and/or long, but it could be much simpler; however, my goal was to control every aspect of the chart - everyone has their quirks. What really matters is what the chart revealed after the data was organized and processed.
And so the first question is answered, and we can now move on to the second question.
2. What was the number of children, adults and elderly people, categorized by sex?
With the males_age_normalized and females_age_normalized dataframes containing only valid data for both sex and age, it is possible to split the data into children, adults and elderly people.
For that, the age "classes" used to separate each category will be defined
Step11: The "big" code above added an age_group column to each of the two dataframes
Step13: Aparentemente tudo estรก correto, sendo assim jรก รฉ possรญvel gerar um grรกfico categorizando os passageiros, aqueles que possuem os dados vรกlidos, por sexo e idade.
Step14: O cรณdigo acima gera um grรกfico de barras novamente divido em categorias no eixo "X", mas agora as categorias sรฃo sexo e grupo de idade, jรก o eixo "Y" informa a quantidade de cada conjunto de categorias. ร um grรกfico que comunica rรกpido muitos aspectos dos dados, como a pouca diferenรงa entre crianรงas do sexo masculino e feminino e o melhor de tudo รฉ ser uma forma que รฉ compreensรญvel pela grande maioria das pessoas.
Novamente uma cรฉlula de cรณdigo intimidadora, mas esse รฉ preรงo a se pagar por querer tudo nos minimos detalhes.
E assim a questรฃo 2 foi satisfeita, agora serรก proposto uma soluรงรฃo para a questรฃo 3.
3. Qual foi o sexo e idade maioria entre os sobreviventes?
A intenรงรฃo dessa questรฃo รฉ obter o sexo e a idade maioria entre os sobreviventes, para isso serรก selecionado somente os sobreviventes para cada dataframe
Step16: E por fim serรก gerado a visualizaรงรฃo com os valores dos novos dataframes criados no passo anterior
Step17: O grรกfico acima รฉ a grande revelaรงรฃo desse trabalho, o resultado final de todo esforรงo em fazer os dados "falarem", รฉ uma revelaรงรฃo mais emocionante do que o os episรณdios finais de Baccano!(fica a dica), mais emocionante do que final de temporada de sรฉrie, mais emocionante do que descobrir quem รฉ o assassino de alguma novela da Globo, ok, as novelas da Globo nรฃo sรฃo tudo isso(IMHO), mas ainda sim o grรกfico รฉ legal, pois atravรฉs dele รฉ possรญvel ter uma noรงรฃo da quantidade de sobreviventes de cada sexo e conjunto de idade, alรฉm tambรฉm de estar explicito a razรฃo de cada categoria de idade e sexo entre sobreviventes e nรฃo sobreviventes.
Esse grรกfico nos revela coisas interessantes como o fato de que em nenhuma categoria de idade o sexo masculino alcanรงou 50% de sobreviventes, enquanto o sexo feminino alcanรงou mais de 50% de sobreviventes em todas as categorias de idade.
Outro fato observรกvel graรงas a visualizaรงรฃo acima รฉ que 94% das idosas sobreviveram, enquanto menos de 15% de homens idosos sobreviveram, graรงas a organizaรงรฃo da visualizaรงรฃo รฉ possรญvel observar tambรฉm que atรฉ mesmo em nรบmeros absolutos a quantidade de idosas sobreviventes foi maior que a quantidade de homens idosos, e que isso รฉ vรกlido para os outros conjuntos de categorias de idade e sexo apresentados no grรกfico.
BONUS
Another view of the final question: | Python Code:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
Explanation: Abstract
In this work we take a look at data visualization using Python and the Titanic dataset. It is not intended to be the most accurate analysis of the Titanic dataset; it is a project resulting from the Data Analyst Nanodegree course by Udacity. It is nevertheless a serious project, so I believe it will be interesting to demonstrate the process of generating a data visualization and drawing some superficial conclusions about the data.
Introduction
This work aims to explore one of the greatest disasters caused by nature and by successive human failures. It is almost certain (based on my feeling) that you, reading this document, have already heard of the tragic story of the Titanic; I could easily list 50 people who have heard it, or rather watched this story more than 5 times - my congratulations to James Cameron, Leonardo DiCaprio, Kate Winslet and everyone else involved. But with a little effort Jack could have fit on that door too, right? - anyway, the goal here is not to talk about the history of the Titanic, let alone the movie, "Near, far, wherever you are I believe that the heart does go on"; if you want to know about the history and the movie, just search on Google (http://bfy.tw/Yys).
The goal of this work is to analyse this event from another perspective, not through the exposition of historical facts or "Hollywood" movies; this event will be analysed through a dataset (a fancy name for a data collection) containing demographic data and information on 891 of the 2224 passengers and crew aboard the Titanic - and no, Jack is not present in the data, but he will not be missed since he is a fictional character (please tell me you knew that!). So, will a dataset reveal interesting things?
More information about this dataset can be found on the [Kaggle website] (https://www.kaggle.com/c/titanic/data).
What does this analysis intend to "reveal"?
As already said, throughout this work a data analysis will be carried out using a whole set of techniques and technologies in order to make revelations about the Titanic dataset in question. However, it must first be stressed that all the revelations drawn from this dataset will be mere theories based on the observations, with no statistical value. No test was applied to know whether the observed differences could be caused by mere randomness.
Warning given, now we just need to move forward - but forward where? It is hard to get somewhere when you do not know where you want to go, so 3 questions will be defined, to be answered by torturing this dataset. These 3 questions will be the "north" of the work, giving a path and a destination to the analysis, besides defining what the "revelations" will be. The questions are:
What was the number of people of each sex?
What was the number of children, adults and elderly people, categorized by sex?
Which sex and age group formed the majority among the survivors?
Data preparation
Obtaining the data
Once the path and the destination are defined, other requirements must be met; the first of them is obtaining the data, since it is not possible to start "venturing" into data exploration without the basics, which is the data!
Imports
Three lines of extreme importance for the whole project; the code below is responsible for importing all the libraries or modules needed for the project:
End of explanation
titanic_dataframe = pd.read_csv("data/titanic_data.csv")
#Display total of rows in dataset
print "Total rows = "+str(len(titanic_dataframe))
print ""
#Print only the 5 first rows
print "Preview:"
titanic_dataframe.head()
Explanation: Reading the file
Reads (loads) the file containing the dataset and superficially checks whether the data is correct:
End of explanation
#Return the total counts of each value in column
print titanic_dataframe['Sex'].value_counts()
Explanation: Data cleaning
Data loaded, so is it time to start exploring? More or less. In fact, to reach the goal of exploring and finally presenting the results in an easily observable way, we first need to go through another phase: the cleaning or correction of data and records. This phase cannot be skipped, since we would risk working with a dataset containing some problem that would invalidate the work or produce errors in the results, and it is not the goal of this work to make false revelations. Next, 3 actions will be executed in this cleaning process, keeping in mind the objectives defined at the beginning of the work.
1. Check the data related to sex
All 3 questions will require handling the passengers' sex information, so we need to evaluate whether everything is ok with this data. By analysing the dataset it is possible to observe that there is a column named Sex (surprise!), which apparently contains the sex of each passenger in the list - apparently, because there may be some invalid value or a missing value for some record (read: passenger).
So the first thing to do is to find out whether all values match the options male or female, which are the expected values in the Sex column for each record. To accomplish this task an "incredible" single line of code will be needed:
End of explanation
#Check missing data
#Get all data in titanic_dataframe with null(NaN) in the column Age
count_errors_age_null = titanic_dataframe[pd.isnull(titanic_dataframe['Age'])]
print "Amount missing data: "+str(count_errors_age_null.Age.value_counts(dropna=False))
count_errors_age_null.head()
Explanation: Nice, right? With a single line of code it was possible to check the Sex column of 891 records, NICE! After running the code above we can observe that there is no missing data and that the only values present are: male or female. In other words, all the data in the Sex column of the DataFrame is correct for the purposes of this work, needing no corrections or interventions, so we can move on with the data cleaning.
2. Check the data related to age
To answer two of the three questions defined at the beginning of the work we need to handle the passengers' age data, so we will now check whether there is any problem with the Age column. Before looking for errors, though, we need to assume a list of errors that may exist in the dataset. In the current context, 2 possible errors were imagined:
Null data: we need to find out whether every record (passenger) has a non-null Age column.
Non-numeric data: the Age column identifies the passenger's age, and the age is expected to be in numeric format, so we will check whether there is any non-numeric value in the Age column.
First the existence of null data will be "investigated"; for that, once again an "incredible" 2 lines of code will be needed - the last one is only there to confirm that the Age values really are null, i.e. it is dispensable if you already have experience in the field.
End of explanation
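As a side note, the number of missing ages can also be read with a single call (a sketch):
# One-line alternative: count the missing values in the Age column
print(titanic_dataframe['Age'].isnull().sum())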
#Get all data with column Sex is 'equal' male and column Age 'is not null(NaN)'
males_notnull = titanic_dataframe[titanic_dataframe['Sex'] == 'male'][pd.notnull(titanic_dataframe['Age'])]
#Check all data in males_notnull is real ande get all wich is true
males_age_normalized = males_notnull[males_notnull['Age'].apply(np.isreal)]
#Get all data with column Sex is 'equal' female and column Age 'is not null(NaN)'
females_notnull = titanic_dataframe[titanic_dataframe['Sex'] == 'female'][pd.notnull(titanic_dataframe['Age'])]
#Check all data in females_notnull is real ande get all wich is true
females_age_normalized = females_notnull[females_notnull['Age'].apply(np.isreal)]
print ""
#Display amount erros
print "Errors: "+str(count_errors_age_null.Age.value_counts(dropna=False))
print ""
#Display amount valid males
print "Valid Age Men: "+str(len(males_age_normalized))
#Display amount valid females
print "Valid Age Women: "+str(len(females_age_normalized))
#Check if all data is present
print "Total: "+str(len(count_errors_age_null) + len(males_age_normalized) + len(females_age_normalized))+" = "+str(len(titanic_dataframe['Sex']=='male'))
print ""
print "Amount of valid data: "+str(+ len(males_age_normalized) + len(females_age_normalized))
Explanation: The "investigation" revealed 177 null values (NaN) in the Age column of the titanic_dataframe dataframe and, unfortunately, in the context of this work it is not possible to recover the missing values, so we will have to drop the records that have no value in the Age column.
The next code cell will keep only the non-null data from titanic_dataframe; in addition, the valid data will already be split by sex, which avoids "two trips", and a test will also be run on the Age value of each record to check that it is numeric - in other words, 3 birds with one stone. This test cannot be done before removing the records with NaN in the Age column, since numpy.isreal apparently treats NaN as a real value.
It is important to point out that the titanic_dataframe dataframe will not be modified; this keeps the original dataset intact, allowing it to be used in the future should it become necessary.
End of explanation
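A one-line check of the remark above about numpy.isreal and NaN (a sketch):
# np.isreal treats NaN as a real value, which is why the NaN rows must be dropped first
print(np.isreal(np.nan))  # True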
males_age_normalized.head()
females_age_normalized.head()
Explanation: The code may look long, but only 4 lines are really necessary; the other lines are only there to make explicit that the operation was carried out successfully. Below, the first 5 records of the newly generated dataframes are displayed, just for peace of mind and to get an idea of whether everything really went as expected.
End of explanation
print len(males_age_normalized[pd.isnull(males_age_normalized['Survived'])])
print len(females_age_normalized[pd.isnull(females_age_normalized['Survived'])])
Explanation: Apparently everything is correct, so we can move on to the next and last cleaning step.
3. Check the survival values:
This cleaning step will be simpler, since a good part of the work was already done in the previous step. The data has already been analysed and we know there is a Survived column, whose values vary between 0 and 1, where:
0 = Did not survive
1 = Survived
We need to check whether there is any problem with the Survived column data; the most common problem is a missing value, so the code below will look for missing data in the Survived column of each dataframe generated in the previous step.
End of explanation
print len(males_age_normalized[(males_age_normalized['Survived'] != 0) & (males_age_normalized['Survived'] != 1)])
print len(females_age_normalized[(females_age_normalized['Survived'] != 0) & (females_age_normalized['Survived'] != 1)])
Explanation: Apparently no data is missing, so we will now check whether there are values other than 0 and 1 in the Survived column of each dataframe.
End of explanation
ind = np.arange(1)
width = 0.35
fig, ax = plt.subplots()
rects1 = ax.bar(ind, males_age_normalized.Sex.count(), width, color='#7b92aa')
rects2 = ax.bar(ind + width + 0.04, females_age_normalized.Sex.count(), width, color='#c5a7ce')
ax.set_ylabel('Amount')
ax.set_title('Number of people per sex.')
plt.xticks((ind, ind + width + 0.04), ('Male', 'Female'))
#ax.set_xticklabels()
ax.set_ylim([0, 650])
ax.legend((rects1[0], rects2[0]), ('Male', 'Female'))
def autolabel(rects):
Attach a text label above each bar displaying its height
for rect in rects:
height = rect.get_height()
ax.text(rect.get_x() + rect.get_width()/2., 1.05*height,
'%d' % int(height),
ha='center', va='bottom')
autolabel(rects1)
autolabel(rects2)
plt.show()
Explanation: And apparently no value differs from 0 and 1, so the cleaning process is finished. Now it is time to move on to the exploration and revelations - what will this dataset reveal? It will have to reveal something, one way or another; I did not come this far for nothing! #PEACE
Exploration and Revelations
From now on this work will aim to directly seek the answers to the 3 questions raised in the introduction.
1. What was the number of people of each sex?
This question is quite simple to answer with the dataset at hand, which is good since it is essential for the analysis. Much of the work needed to solve it was already done in the cleaning phase; now all that remains is to generate a visualization to communicate the results.
Note that the following dataframes will be used:
males_age_normalized
females_age_normalized
These dataframes emerged during the data cleaning process and will be the basis for all the remaining work. Below is the visualization of these dataframes.
End of explanation
males_age_normalized["age_group"] = pd.cut(titanic_dataframe.Age, [0, 18, 50, 150], labels=["boys", "men", "old_men"])
females_age_normalized["age_group"] = pd.cut(titanic_dataframe.Age, [0, 18, 50, 150], labels=["girls", "women", "old_women"])
Explanation: The code above generates a bar chart; the "X" axis represents two categories, in this case sex, while the "Y" axis represents the count of each sex. It is possible to observe that males are the majority in the dataset, more specifically in the dataframes containing only records valid for analysis.
The code that generates the chart may seem complex and/or long, but it could be much simpler; however, my goal was to control every aspect of the chart - everyone has their quirks. What really matters is what the chart revealed after the data was organized and processed.
And so the first question is answered, and we can now move on to the second question.
2. What was the number of children, adults and elderly people, categorized by sex?
With the males_age_normalized and females_age_normalized dataframes containing only valid data for both sex and age, it is possible to split the data into children, adults and elderly people.
For that, the age "classes" used to separate each category are defined as:
Children: [0 - 18[
Adults: [18 - 50[
Elderly: [50 and above]
Having defined what children, adults and elderly mean, we can move on to the data:
End of explanation
males_age_normalized[males_age_normalized["age_group"]=='boys']['Sex'].value_counts()
females_age_normalized[females_age_normalized["age_group"]=='girls']['Sex'].value_counts()
males_age_normalized[males_age_normalized["age_group"]=='men']['Sex'].value_counts()
females_age_normalized[females_age_normalized["age_group"]=='women']['Sex'].value_counts()
males_age_normalized[males_age_normalized["age_group"]=='old_men']['Sex'].value_counts()
females_age_normalized[females_age_normalized["age_group"]=='old_women']['Sex'].value_counts()
Explanation: The "big" code above added an age_group column to each of the two dataframes, males_age_normalized and females_age_normalized, and classified each record based on the defined age classes.
To make sure everything is ok with the data before putting it to work revealing something, it is worth checking that everything is correct; with one line of code per age category this verification can be done:
End of explanation
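The same verification can also be done in a single call per dataframe with a groupby (a sketch assuming the dataframes above):
# Counts per age group, one call per dataframe
print(males_age_normalized.groupby('age_group')['Sex'].count())
print(females_age_normalized.groupby('age_group')['Sex'].count())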
N = 3
ind = np.arange(N)
width = 0.35
fig, ax = plt.subplots()
rects1 = ax.bar(ind, males_age_normalized.groupby(['age_group']).size(), width, color='#7b92aa')
rects2 = ax.bar(ind + width + 0.04, females_age_normalized.groupby(['age_group']).size(), width, color='#c5a7ce')
ax.set_ylabel('Amount')
ax.set_title('Number of children, adults and elderly by sex')
ax.set_xticks(ind + width / 2)
ax.set_xticklabels(('Children', 'Adults', 'Elderly'))
ax.set_ylim([0, 390])
ax.legend((rects1[0], rects2[0]), ('Male', 'Female'))
def autolabel(rects):
Attach a text label above each bar displaying its height
for rect in rects:
height = rect.get_height()
ax.text(rect.get_x() + rect.get_width()/2., 1.05*height,
'%d' % int(height),
ha='center', va='bottom')
autolabel(rects1)
autolabel(rects2)
plt.show()
Explanation: Apparently everything is correct, so it is already possible to generate a chart categorizing the passengers (those with valid data) by sex and age.
End of explanation
females_survived = females_age_normalized[females_age_normalized['Survived']==1]
males_survived = males_age_normalized[males_age_normalized['Survived']==1]
Explanation: The code above generates a bar chart again split into categories on the "X" axis, but now the categories are sex and age group, while the "Y" axis reports the count of each combination of categories. It is a chart that quickly communicates many aspects of the data, such as the small difference between male and female children, and best of all it is a format understandable by the vast majority of people.
Once again an intimidating code cell, but that is the price to pay for wanting every detail under control.
And so question 2 is answered; a solution for question 3 will now be proposed.
3. Which sex and age group formed the majority among the survivors?
The intent of this question is to find the sex and age group that formed the majority among the survivors; for that, only the survivors will be selected from each dataframe: males_age_normalized and females_age_normalized
End of explanation
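A quick numeric view of the same question (a sketch assuming the dataframes above): since Survived holds 0/1 values, its mean per age group is the survival rate.
# Survival rate per age group and sex
print(males_age_normalized.groupby('age_group')['Survived'].mean())
print(females_age_normalized.groupby('age_group')['Survived'].mean())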
N = 3
diff_males_total_survived = males_age_normalized.groupby(['age_group']).size() - males_survived.groupby(['age_group']).size()
diff_females_total_survived = females_age_normalized.groupby(['age_group']).size() - females_survived.groupby(['age_group']).size()
ind = np.arange(N)
width = 0.35
fig, ax = plt.subplots()
rects1a = ax.bar(ind, males_survived.groupby(['age_group']).size(), width, color='#7b92aa')
rects1b = ax.bar(ind, diff_males_total_survived, width, alpha=0.8, color='#D8E2EC',
bottom=males_survived.groupby(['age_group']).size())
rects2a = ax.bar(ind + width + 0.04, females_survived.groupby(['age_group']).size(), width, color='#c5a7ce')
rects2b = ax.bar(ind + width + 0.04, diff_females_total_survived, width, color='#F4EDF6',
bottom=females_survived.groupby(['age_group']).size())
ax.set_ylabel('Number of survivors')
ax.set_title('Rates of survivors of children, adults and the elderly by sex')
ax.set_xticks(ind + width / 2)
ax.set_xticklabels(('Children', 'Adults', 'Elderly'))
ax.set_ylim([0, 390])
ax.legend((rects1a[0], rects1b[0], rects2a[0], rects2b[0]), ('Male Survived', 'Male Total', 'Female Survived', 'Female Total'))
def autolabel_survived(rects, rects_top):
Attach a text label above each bar displaying its height
for rect in range(len(rects)):
height = rects[rect].get_height()
total = rects[rect].get_height() + rects_top[rect].get_height()
percent = int((height * 100) / total)
ax.text(x = rects[rect].get_x() + rects[rect].get_width()/2., y = height,
s = '%s' % str(percent)+'%',
ha='center', va='bottom')
autolabel_survived(rects1a, rects1b)
autolabel_survived(rects2a, rects2b)
plt.show()
Explanation: And finally the visualization is generated with the values of the new dataframes created in the previous step:
End of explanation
all_passengers_with_age = titanic_dataframe[pd.notnull(titanic_dataframe['Age'])]
my_plot = all_passengers_with_age['Age'].plot(kind='kde', color="red", figsize=(9, 8), linestyle='--')
my_plot = females_age_normalized['Age'].plot(kind='kde', color="#7b92aa")
my_plot = males_age_normalized['Age'].plot(kind='kde', color="#c5a7ce")
my_plot.set_xlabel("Age")
my_plot.set_ylabel("Density")
my_plot.grid(True)
my_plot.set_xticks(np.arange( -40, 110, 5))
my_plot.set_xlim([-20, 95])
my_plot.set_ylim([0, 0.035])
my_plot.set_title("Density of survival by age visualized by sex and total")
my_plot.legend(["Total Passengers", "Male Survived", "Female Survived",], loc=9,ncol=4)
plt.show()
Explanation: The chart above is the great revelation of this work, the final result of all the effort to make the data "speak". It is a revelation more exciting than the final episodes of Baccano! (hint), more exciting than a season finale, more exciting than discovering who the murderer is in some Globo soap opera - ok, Globo soap operas are not all that (IMHO), but the chart is still nice, because through it we can get a sense of the number of survivors of each sex and age group, and the ratio of survivors to non-survivors for each age and sex category is also made explicit.
This chart reveals interesting things, such as the fact that in no age category did males reach 50% survivors, while females reached more than 50% survivors in every age category.
Another fact observable thanks to the visualization above is that 94% of the elderly women survived, while fewer than 15% of the elderly men survived. Thanks to the organization of the visualization it is also possible to observe that even in absolute numbers the count of surviving elderly women was greater than the count of elderly men, and that this also holds for the other combinations of age and sex categories shown in the chart.
BONUS
Another view of the final question:
End of explanation |
15,135 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Tutorial 07
Step1: 1. Running a Default Simulation
In order to create a scenario object in Flow with network features depicted from OpenStreetMap, we will use the base Scenario class. This class can sufficiently support the generation of any .osm file.
Step2: In order to recreate the network features of a specific osm file, the path to the osm file must be specified in NetParams. For this example, we will use an osm file extracted from the section of the Bay Bridge as depicted in Figure 1.
In order to specify the path to the osm file, simply fill in the osm_path attribute with the path to the .osm file as follows
Step3: Next, we create all other parameters as we have in tutorials 1 and 2. For this example, we will assume a total of 1000 vehicles are uniformly spread across the Bay Bridge. Once again, if the choice of parameters is unclear, you are encouraged to review Tutorial 1.
Step4: We are finally ready to test our scenario in simulation. In order to do so, we create an Experiment object and run the simulation for a number of steps. This is done in the cell below.
Step5: 2. Customizing the Scenario
While the above example does allow you to view the network within Flow, the simulation is limited for two reasons. For one, vehicles are placed on all edges within the network; if we wished to simulate traffic solely on the bridge and did not care about the arterials, for instance, this would result in unnecessary computational burdens. Next, as you may have noticed if you ran the above example to completion, routes in the base scenario class default to consist of the vehicles' current edges only, meaning that vehicles exit the network as soon as they reach the end of the edge they originated on. In the next subsections, we discuss how the scenario can be modified to resolve these issues.
2.1 Specifying Traversable Edges
In order to limit the edges vehicles are placed on to the road section edges corresponding to the westbound Bay Bridge, we define an EDGES_DISTRIBUTION variable. This variable specifies the names of the edges within the network that vehicles are permitted to originate in, and is assigned to the scenario via the edges_distribution component of the InitialConfig input parameter, as seen in the code snippet below. Note that the names of the edges can be identified from the .osm file or by right clicking on specific edges from the SUMO gui (see the figure below).
<img src="img/osm_edge_name.png" width=600>
<center> Figure 2
Step6: 2.2 Creating Custom Routes
Next, we choose to specify the routes of vehicles so that they can traverse the entire Bay Bridge, instead of only the edge they are currently on. In order to do this, we create a new scenario class that inherits all its properties from Scenario and simply redefine the routes by modifying the specify_routes variable. This method was originally introduced in Tutorial 07
Step7: 2.3 Rerunning the Simulation
We are now ready to rerun the simulation with fully defined vehicle routes and a limited number of traversable edges. If we run the cell below, we can see the new simulation in action. | Python Code:
# the TestEnv environment is used to simply simulate the network
from flow.envs import TestEnv
# the Experiment class is used for running simulations
from flow.core.experiment import Experiment
# all other imports are standard
from flow.core.params import VehicleParams
from flow.core.params import NetParams
from flow.core.params import InitialConfig
from flow.core.params import EnvParams
from flow.core.params import SumoParams
Explanation: Tutorial 07: Networks from OpenStreetMap
In this tutorial, we discuss how networks that have been imported from OpenStreetMap can be integrated and run in Flow. This will all be presented via the Bay Bridge network, seen in the figure below. Networks from OpenStreetMap are commonly used in many traffic simulators for the purposes of replicating traffic in realistic traffic geometries. This is true in both SUMO and Aimsun (which are both supported in Flow), with each supporting several techniques for importing such network files. This process is further simplified and abstracted in Flow, with users simply required to specify the path to the osm file in order to simulate traffic in the network.
<img src="img/bay_bridge_osm.png" width=750>
<center> Figure 1: Snapshot of the Bay Bridge from OpenStreetMap </center>
Before we begin, let us import all relevant Flow parameters as we have done for previous tutorials. If you are unfamiliar with these parameters, you are encouraged to review tutorial 1.
End of explanation
from flow.scenarios import Scenario
Explanation: 1. Running a Default Simulation
In order to create a scenario object in Flow with network features depicted from OpenStreetMap, we will use the base Scenario class. This class can sufficiently support the generation of any .osm file.
End of explanation
net_params = NetParams(
osm_path='networks/bay_bridge.osm'
)
Explanation: In order to recreate the network features of a specific osm file, the path to the osm file must be specified in NetParams. For this example, we will use an osm file extracted from the section of the Bay Bridge as depicted in Figure 1.
In order to specify the path to the osm file, simply fill in the osm_path attribute with the path to the .osm file as follows:
End of explanation
# create the remainding parameters
env_params = EnvParams()
sim_params = SumoParams(render=True)
initial_config = InitialConfig()
vehicles = VehicleParams()
vehicles.add('human', num_vehicles=100)
# create the scenario
scenario = Scenario(
name='bay_bridge',
net_params=net_params,
initial_config=initial_config,
vehicles=vehicles
)
Explanation: Next, we create all other parameters as we have in tutorials 1 and 2. For this example, we will assume a total of 1000 vehicles are uniformly spread across the Bay Bridge. Once again, if the choice of parameters is unclear, you are encouraged to review Tutorial 1.
End of explanation
# create the environment
env = TestEnv(
env_params=env_params,
sim_params=sim_params,
scenario=scenario
)
# run the simulation for 1000 steps
exp = Experiment(env=env)
exp.run(1, 1000)
Explanation: We are finally ready to test our scenario in simulation. In order to do so, we create an Experiment object and run the simulation for a number of steps. This is done in the cell below.
End of explanation
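If the simulation is only needed to collect data, rendering can be disabled, which runs considerably faster (a sketch of the same run without the SUMO gui):
# Same experiment without the gui
sim_params_no_render = SumoParams(render=False)
env = TestEnv(
    env_params=env_params,
    sim_params=sim_params_no_render,
    scenario=scenario
)
Experiment(env=env).run(1, 1000)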
# we define an EDGES_DISTRIBUTION variable with the edges within
# the westbound Bay Bridge
EDGES_DISTRIBUTION = [
"11197898",
"123741311",
"123741303",
"90077193#0",
"90077193#1",
"340686922",
"236348366",
"340686911#0",
"340686911#1",
"340686911#2",
"340686911#3",
"236348361",
"236348360#0",
"236348360#1"
]
# the above variable is added to initial_config
new_initial_config = InitialConfig(
edges_distribution=EDGES_DISTRIBUTION
)
Explanation: 2. Customizing the Scenario
While the above example does allow you to view the network within Flow, the simulation is limited for two reasons. For one, vehicles are placed on all edges within the network; if we wished to simulate traffic solely on the bridge and did not care about the arterials, for instance, this would result in unnecessary computational burdens. Next, as you may have noticed if you ran the above example to completion, routes in the base scenario class default to consist of the vehicles' current edges only, meaning that vehicles exit the network as soon as they reach the end of the edge they originated on. In the next subsections, we discuss how the scenario can be modified to resolve these issues.
2.1 Specifying Traversable Edges
In order to limit the edges vehicles are placed on to the road section edges corresponding to the westbound Bay Bridge, we define an EDGES_DISTRIBUTION variable. This variable specifies the names of the edges within the network that vehicles are permitted to originate in, and is assigned to the scenario via the edges_distribution component of the InitialConfig input parameter, as seen in the code snippet below. Note that the names of the edges can be identified from the .osm file or by right clicking on specific edges from the SUMO gui (see the figure below).
<img src="img/osm_edge_name.png" width=600>
<center> Figure 2: Name of an edge from SUMO </center>
End of explanation
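If the SUMO gui is not at hand, the candidate way IDs can also be listed straight from the .osm file with the standard library (a sketch; SUMO typically derives its edge names from these IDs, possibly with suffixes such as #0 for split ways):
import xml.etree.ElementTree as ET

# List the OSM way IDs present in the file
tree = ET.parse('networks/bay_bridge.osm')
way_ids = [way.get('id') for way in tree.getroot().iter('way')]
print(len(way_ids), way_ids[:10])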
# we create a new scenario class to specify the expected routes
class BayBridgeOSMScenario(Scenario):
def specify_routes(self, net_params):
return {
"11197898": [
"11197898", "123741311", "123741303", "90077193#0", "90077193#1",
"340686922", "236348366", "340686911#0", "340686911#1",
"340686911#2", "340686911#3", "236348361", "236348360#0", "236348360#1",
],
"123741311": [
"123741311", "123741303", "90077193#0", "90077193#1", "340686922",
"236348366", "340686911#0", "340686911#1", "340686911#2",
"340686911#3", "236348361", "236348360#0", "236348360#1"
],
"123741303": [
"123741303", "90077193#0", "90077193#1", "340686922", "236348366",
"340686911#0", "340686911#1", "340686911#2", "340686911#3", "236348361",
"236348360#0", "236348360#1"
],
"90077193#0": [
"90077193#0", "90077193#1", "340686922", "236348366", "340686911#0",
"340686911#1", "340686911#2", "340686911#3", "236348361", "236348360#0",
"236348360#1"
],
"90077193#1": [
"90077193#1", "340686922", "236348366", "340686911#0", "340686911#1",
"340686911#2", "340686911#3", "236348361", "236348360#0", "236348360#1"
],
"340686922": [
"340686922", "236348366", "340686911#0", "340686911#1", "340686911#2",
"340686911#3", "236348361", "236348360#0", "236348360#1"
],
"236348366": [
"236348366", "340686911#0", "340686911#1", "340686911#2", "340686911#3",
"236348361", "236348360#0", "236348360#1"
],
"340686911#0": [
"340686911#0", "340686911#1", "340686911#2", "340686911#3", "236348361",
"236348360#0", "236348360#1"
],
"340686911#1": [
"340686911#1", "340686911#2", "340686911#3", "236348361", "236348360#0",
"236348360#1"
],
"340686911#2": [
"340686911#2", "340686911#3", "236348361", "236348360#0", "236348360#1"
],
"340686911#3": [
"340686911#3", "236348361", "236348360#0", "236348360#1"
],
"236348361": [
"236348361", "236348360#0", "236348360#1"
],
"236348360#0": [
"236348360#0", "236348360#1"
],
"236348360#1": [
"236348360#1"
]
}
Explanation: 2.2 Creating Custom Routes
Next, we choose to specify the routes of vehicles so that they can traverse the entire Bay Bridge, instead of only the edge they are currently on. In order to do this, we create a new scenario class that inherits all its properties from Scenario and simply redefine the routes by modifying the specify_routes variable. This method was originally introduced in Tutorial 07: Creating Custom Scenarios. The new scenario class looks as follows:
End of explanation
# create the scenario
new_scenario = BayBridgeOSMScenario(
name='bay_bridge',
net_params=net_params,
initial_config=new_initial_config,
vehicles=vehicles,
)
# create the environment
env = TestEnv(
env_params=env_params,
sim_params=sim_params,
scenario=new_scenario
)
# run the simulation for 1000 steps
exp = Experiment(env=env)
exp.run(1, 10000)
Explanation: 2.3 Rerunning the Simulation
We are now ready to rerun the simulation with fully defined vehicle routes and a limited number of traversable edges. If we run the cell below, we can see the new simulation in action.
End of explanation |
15,136 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Loops
In the course of solving client requirements, one comes across situations where a group of data needs to be processed against a defined set of instructions.
Loops help in situations where a piece of code needs to be executed against a set of data repeatedly, or until a certain condition is met or no longer met, or to process a large quantity of data, such as lines of a file or records of a database that must be processed by the same code block.
Python provides two constructs to help in these situations.
for
while
Let's start with the for loops.
For
It is one of the most frequently used constructs in Python. It can accept not only static sequences, but also sequences generated by iterators (structures which allow iteration, i.e. sequential access to a collection of elements). It runs the code block against a known number of iterations over the dataset.
The syntax for for is as follows
Step1: In the above example, "Manish Gupta" is a sequence of characters and the for loop traverses that sequence of characters. Also note that we end each print with a custom separator instead of a new line, using the end= option.
Similarly, in the below example we are going to use the range function to generate the sequence of numbers starting from 30 and ending at 5 with a step of -5.
The range() function
The function range(m, n, p) is very useful in loops, as it produces the integers starting at m and going up to (but not including) n in steps of length p, which can be used to drive the loop.
We can also define the start, stop and step size as range(start,stop,step size). step size defaults to 1 if not provided.
We can generate a sequence of numbers using range() function. range(10) will generate numbers from 0 to 9 (10 numbers).
This function does not store all the values in memory, it would be inefficient. So it remembers the start, stop, step size and generates the next number on the go.
To force this function to output all the items, we can use the function list().
Step2: Nested loops
We can also have nested for loops as shown in the below example
Step3: NOTE
Step4: for loop with a list
Step5: we can also have conditions where multiple values are returned every iteration.
Step6: for loop with dictionary.
Traversing the values.
Step7: Traversing the keys
Step8: Uses of for loops
Reading & processing a log file which contains logs one line at a time.
While
Executes a block of code repeatedly for as long as a condition holds true.
Syntax
Step9: NOTE
Step10: NOTE
Step11: Break
The break statement is used to exit a for or a while loop. Its purpose is to end the execution of the loop immediately, after which program control moves to the statement following the loop. If the loop has an optional else clause, break skips that clause as well.
Step12: Continue Statement
The continue statement is used in a while or for loop to send control back to the top of the loop without executing the remaining statements inside the loop body. Here is a simple example.
Step13: The else in for
Step14: Use cases for else
A common use case for the else clause in loops is to implement search loops: say you're searching for an item that meets a particular condition, and you need to perform additional processing or raise an error if no acceptable value is found
Step15: python
n-> 2
Step16: NOTE | Python Code:
for x in "Manish Gupta":
print(x, end="^~", flush=True)
Explanation: Loops
During the course of solving client requirements, one comes across situations where a group of data needs to be processed against a defined set of instructions.
Loops help in situations where a piece of code needs to be executed repeatedly against a set of data, or until a certain condition is met or unmet, or when a large quantity of data, such as the lines of a file or the records of a database, must be processed by the same code block.
Python provides two constructs to help in these situations.
for
while
Lets start with the for loops.
For
It is one of the most often used constructs in Python. It accepts not only static sequences, but also sequences generated by iterators (structures that allow iteration, i.e. sequential access to a collection of elements). It runs the code block for a known number of iterations over the dataset.
The syntax for for is as follows:
Syntax:
for <reference> in <sequence>:
<code block>
continue
break
else:
<code block>
During the execution of a for loop, the reference points to an element in the sequence. At each iteration, the reference is updated, in order for the for code block to process the corresponding element.
The clause break stops the loop and continue passes it to the next iteration. The code inside the else is executed at the end of the loop, except if the loop has been interrupted by break.
Example:
End of explanation
# Output: range(0, 10)
print(range(10))
# Output: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
print(list(range(10)))
# Output: [2, 3, 4, 5, 6, 7]
print(list(range(2, 8)))
# Output: [2, 5, 8, 11, 14, 17]
print(list(range(2, 20, 3)))
print(list(range(20, 2, -3)))
print(dir(range(10)))
# Sum 0 to 99
s = 0
for x in range(30, 1, -5):
print(x)
s = s + x
print("sum of 30 to 1 with steps -5 is", s)
# Sum 0 to 99
s = 0
for x in range(30, 1, -5):
print(x)
s += x
print("sum of 30 to 1 with steps -5 is", s)
Explanation: In the above example, "Manish Gupta" is a sequence of characters and the for loop traverses that sequence. Also note that each print call ends with a custom string instead of a new line, using the end= option.
Similarly, in the example below we use the range function to generate a sequence of numbers starting at 30 and going down to 5 in steps of 5.
The range() function
The function range(m, n, p) is very useful in loops: it yields the integers starting at m, up to but not including n, in steps of length p, which can be used to drive the loop.
We can also write this as range(start, stop, step size); the step size defaults to 1 if not provided.
We can generate a sequence of numbers using the range() function. range(10) will generate the numbers from 0 to 9 (10 numbers).
This function does not store all the values in memory, as that would be inefficient. It only remembers the start, stop and step size, and generates the next number on the fly.
To force this function to output all the items, we can use the function list().
End of explanation
for x in range(1, 6):
for y in range(1, x+1):
print(x, y)
Explanation: Nested loops
We can also have nested for loops as shown in the below example
End of explanation
for x in range(1, 6):
for x in range(1, x+1):
print(x, x)
Explanation: NOTE: Please avoid code like the case below, where the inner loop reuses the outer loop's variable x.
End of explanation
cols = ["Red", "Green", "Yellow", "White"]
for color in cols:
print(color)
cols = ["Red", "Green", "Yellow", "White"]
for color in cols:
print(color)
else:
print(" ~~~~ Done ~~~~")
Explanation: for loop with a list
End of explanation
# Advance code, to be used after learning slicing. Please use instead the other code
# for x in "Manish Gupta"[::-1]:
# print(x, end=" ")
reverse_text = ""
for char in "Manish Gupta":
reverse_text = char + reverse_text
print(reverse_text)
x_test = [[1,2],[3,4],[5,6]]
for x in x_test:
print(x)
a = x[0]
b = x[1]
print (a, b)
x_test = [[1,2],[3,4],[5,6], [7,8,9]]
for x in x_test:
print(x)
x_test = [[1, 2],[3, 4],[5, 6], [7, 8]]
for x, y in x_test:
print(x, y)
x_test = [[1, 2],[3, 4],[5, 6], [7, 8, 9]]
try:
for x, y in x_test:
print(x, y)
except Exception as e:
print(e)
x_test = [[1,2],[3,4],[5,6], [7,8,9]]
for x in x_test:
print(x)
a = x[0]
b = x[1]
print (a, b)
Explanation: we can also have cases where multiple values are unpacked in each iteration.
End of explanation
color = {"c2": "Red", "c1": "Green", "c3": "Orange"}
for value in color.values():
print(value)
Explanation: for loop with dictionary.
Traversing the values.
End of explanation
color = {"c1": "Red", "c2": "Green", "c3": "Orange"}
for col in color:
print(col, color[col])
color = {"c1": "Red", "c2": "Green", "c3": "Orange"}
for value in color.values():
if(value=="Green"):
break
print(value)
else:
print("Done")
color = {"c1": "Red", "c2": "Green", "c3": "Orange"}
for value in color.values():
if(value=="Green"):
continue
print(value)
else:
print("Done")
Explanation: Traversing the keys
End of explanation
# Sum 0 to 99
s = 0
x = 1
while x < 100:
s = s + x
x = x + 1
else:
print("!!! Hurry Hurry !!!")
print(x)
print ("Sum of 0 to 99", s)
# Sum 0 to 99
s = 0
x = 1
while x < 100:
s += x
x += 1
else:
print("!!! Hurry Hurry !!!")
print(x)
print ("Sum of 0 to 99", s)
Explanation: Uses of for loops
Reading and processing a log file that contains one log entry per line.
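For instance, a hedged sketch of that pattern (the file name and the filter string here are only illustrative):
with open("server.log") as log_file:
    for line in log_file:              # file objects are iterable line by line
        if "ERROR" in line:
            print(line.strip())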
While
Executes a block of code in response to a condition.
Syntax:
while <condition>:
<code block>
continue/break/pass
else:
<code block>
The code block inside the while loop is repeated while the loop condition is evaluated as true.
Example:
End of explanation
while x < 0:
print("Hello")
else:
print("Sorry")
# while x > 0:
# print("Hello")
# else:
# print("Sorry")
Explanation: NOTE: Bad Code Below
End of explanation
s = 0
x = 100
while x < 100:
s = s + x
x = x + 1
else:
print("x is already equal or greater than 100")
print(s)
x = 1;
s = 0
while (x < 10):
s = s + x
x = x + 1
if (x == 5):
break
else:
print('The sum of first 9 integers : ',s)
print('The sum of', x, 'numbers is :',s)
while 10 != int(input('Enter a passkeyid: ')):
print("Wrong Passkey"),
while int(input('Enter a passkeyid: ')) != 10:
print("Wrong Passkey"),
else:
print("!!! Welcome to the world of Magic !!!")
Explanation: NOTE: Please try to avoid code similar to above commented code
NOTE: The while loop is appropriate when there is no way to determine how many iterations will occur and there is no sequence to follow.
End of explanation
num_sum = 0
count = 0
for x in range(1, 9):
print(x)
num_sum = num_sum + x
count = count + 1
if count == 5:
break
print("Sum of first ",count,"integers is : ", num_sum)
Explanation: Break
The break statement is used to exit a for or a while loop. It ends the execution of the loop immediately and program control moves to the first statement after the loop. If the loop has an optional else clause, break skips that clause as well.
End of explanation
for x in range(8):
if (x == 3 or x==6):
print("\tSkipping:", x)
continue
print("This should never print")
else:
print(x)
Explanation: Continue Statement
The continue statement is used in a while or for loop to jump back to the top of the loop without executing the remaining statements in the loop body. Here is a simple example.
End of explanation
for x in [1, 10, 4]:
if x == 10:
continue
print("Hello", x)
else:
print("processing completed without issues.")
print("-" * 20)
for x in [1, 10, 4]:
if x == 10:
break
print("Hello", x)
else:
print("processing completed without issues.")
Explanation: The else in for
End of explanation
def meets_condition(x):
return x==20
data = [10, 20, 33, 42, 44]
for x in data:
if meets_condition(x):
break
else:
print("No one met the condition")
print("lets end it")
def meets_condition(x):
return x==21
data = [10, 20, 33, 42, 44]
for x in data:
if meets_condition(x):
break
else:
print("No one met the condition")
print("lets end it")
print(list(range(2, 4)))
print(4%2)
Explanation: Use cases for else
A common use case for the else clause in loops is to implement search loops: say you're searching for an item that meets a particular condition, and you need to perform additional processing or raise an error if no acceptable value is found:
End of explanation
for n in [2, 3, 4, 5, 6, 7, 8, 9]:
for x in range(2, n):
if n % x == 0:
print(n, 'equals', x, '*', n/x)
break
else:
# loop fell through without finding a factor
print(n, 'is a prime number')
Explanation: A hand trace of the prime-finding loop in this cell:
n = 2: range(2, 2) is empty, so the else clause runs -> 2 is a prime number
n = 3: x takes 2; 3 % 2 != 0, the loop finishes without break -> 3 is a prime number
n = 4: x takes 2; 4 % 2 == 0 -> prints "4 equals 2 * 2.0" and breaks, so the else is skipped
n = 5: x takes 2, 3, 4; no factor divides 5 -> 5 is a prime number
End of explanation
a = [1, 2, 3, 4, 5, 6, 7, 8]
b = [2, 6]
c , d = [], []
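# the trick below: x in b evaluates to False (0) or True (1), so (c, d)[x in b]
# selects list c when x is not in b and list d when it is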
for x in a:
(c, d)[x in b].append(x)
print(c, d)
Explanation: NOTE: When used with a loop, the else clause has more in common with the else clause of a try statement than it does that of if statements: a try statementโs else clause runs when no exception occurs, and a loopโs else clause runs when no break occurs. For more on the try statement and exceptions, see Handling Exceptions.
The tricky onces
End of explanation |
15,137 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
CNN HandsOn with Keras
Problem Definition
Recognize handwritten digits
Data
The MNIST database (link) contains images of handwritten digits.
The training set has $60,000$ samples.
The test set has $10,000$ samples.
The digits are size-normalized and centered in a fixed-size image.
The data page has a description of how the data was collected. It also reports the benchmarks of various algorithms on the test dataset.
Load the data
The data is available in the repo's data folder. Let's load that using the keras library.
For now, let's load the data and see how it looks.
Step1: Basic data analysis on the dataset
Step2: Display Images
Let's now display some of the images and see how they look
We will be using matplotlib library for displaying the image | Python Code:
import numpy as np
import keras
from keras.datasets import mnist
import os
os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID" # see issue #152
os.environ["CUDA_VISIBLE_DEVICES"] = ""
# Load the datasets
(X_train, y_train), (X_test, y_test) = mnist.load_data()
Explanation: CNN HandsOn with Keras
Problem Definition
Recognize handwritten digits
Data
The MNIST database (link) contains images of handwritten digits.
The training set has $60,000$ samples.
The test set has $10,000$ samples.
The digits are size-normalized and centered in a fixed-size image.
The data page has a description of how the data was collected. It also reports the benchmarks of various algorithms on the test dataset.
Load the data
The data is available in the repo's data folder. Let's load that using the keras library.
For now, let's load the data and see how it looks.
End of explanation
# What is the type of X_train?
# What is the type of y_train?
# Find number of observations in training data
# Find number of observations in test data
# Display first 2 records of X_train
# Display the first 10 records of y_train
# Find the number of observations for each digit in the y_train dataset
# Find the number of observations for each digit in the y_test dataset
# What is the dimension of X_train? What does that mean?
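# A hedged sketch of how a few of these could be answered (shapes assume the
# standard Keras MNIST split):
# print(type(X_train), type(y_train))             # both are numpy.ndarray
# print(X_train.shape, X_test.shape)              # (60000, 28, 28) and (10000, 28, 28)
# print(np.unique(y_train, return_counts=True))   # observations per digit in y_train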
Explanation: Basic data analysis on the dataset
End of explanation
from matplotlib import pyplot
import matplotlib as mpl
%matplotlib inline
# Displaying a training sample (index 20 of X_train)
fig = pyplot.figure()
ax = fig.add_subplot(1,1,1)
imgplot = ax.imshow(X_train[20], cmap=mpl.cm.Greys)
imgplot.set_interpolation('nearest')
ax.xaxis.set_ticks_position('top')
ax.yaxis.set_ticks_position('left')
pyplot.show()
# Let's now display the 11th record
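# A hedged sketch reusing the same plotting pattern for the 11th record (index 10):
# fig = pyplot.figure()
# ax = fig.add_subplot(1, 1, 1)
# ax.imshow(X_train[10], cmap=mpl.cm.Greys, interpolation='nearest')
# pyplot.show()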
Explanation: Display Images
Let's now display some of the images and see how they look
We will be using matplotlib library for displaying the image
End of explanation |
15,138 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Atmospheres & Passbands
Setup
Let's first make sure we have the latest version of PHOEBE 2.2 installed. (You can comment out this line if you don't use pip for your installation or don't want to update to the latest release).
Step1: As always, let's do imports and initialize a logger and a new bundle. See Building a System for more details.
Step2: And we'll add a single light curve dataset to expose all the passband-dependent options.
Step3: Relevant Parameters
An 'atm' parameter exists for each of the components in the system (for each set of compute options) and defines which atmosphere table should be used.
By default, these are set to 'ck2004' (Castelli-Kurucz) but can be set to 'blackbody' as well as 'extern_atmx' and 'extern_planckint' (which are included primarily for direct comparison with PHOEBE legacy).
Step4: Note that if you change the value of 'atm' to anything other than 'ck2004', the corresponding 'ld_func' will need to be changed to something other than 'interp' (warnings and errors will be raised to remind you of this).
Step5: A 'passband' parameter exists for each passband-dependent-dataset (i.e. not meshes or orbits, but light curves and radial velocities). This parameter dictates which passband should be used for the computation of all intensities.
Step6: The available choices will include both locally installed passbands as well as passbands currently available from the online PHOEBE repository. If you choose an online-passband, it will be downloaded and installed locally as soon as required by b.run_compute.
Step7: To see your current locally-installed passbands, call phoebe.list_installed_passbands().
Step8: These installed passbands can be in any of a number of directories, which can be accessed via phoebe.list_passband_directories().
The first entry is the global location - this is where passbands can be stored by a server-admin to be available to all PHOEBE-users on that machine.
The second entry is the local location - this is where individual users can store passbands and where PHOEBE will download and install passbands (by default).
Step9: To see the passbands available from the online repository, call phoebe.list_online_passbands().
Step10: Lastly, to manually download and install one of these online passbands, you can do so explicitly via phoebe.download_passband or by visiting tables.phoebe-project.org. See also the tutorial on updating passbands.
Note that this isn't necessary unless you want to explicitly download passbands before needed by run_compute (perhaps if you're expecting to have unreliable network connection in the future and want to ensure you have all needed passbands). | Python Code:
!pip install -I "phoebe>=2.2,<2.3"
Explanation: Atmospheres & Passbands
Setup
Let's first make sure we have the latest version of PHOEBE 2.2 installed. (You can comment out this line if you don't use pip for your installation or don't want to update to the latest release).
End of explanation
%matplotlib inline
import phoebe
from phoebe import u # units
import numpy as np
import matplotlib.pyplot as plt
logger = phoebe.logger()
b = phoebe.default_binary()
Explanation: As always, let's do imports and initialize a logger and a new bundle. See Building a System for more details.
End of explanation
b.add_dataset('lc', times=np.linspace(0,1,101), dataset='lc01')
Explanation: And we'll add a single light curve dataset to expose all the passband-dependent options.
End of explanation
b['atm']
b['atm@primary']
b['atm@primary'].description
b['atm@primary'].choices
Explanation: Relevant Parameters
An 'atm' parameter exists for each of the components in the system (for each set of compute options) and defines which atmosphere table should be used.
By default, these are set to 'ck2004' (Castelli-Kurucz) but can be set to 'blackbody' as well as 'extern_atmx' and 'extern_planckint' (which are included primarily for direct comparison with PHOEBE legacy).
End of explanation
b['ld_func@primary']
b['atm@primary'] = 'blackbody'
print(b.run_checks())
b['ld_mode@primary'] = 'manual'
b['ld_func@primary'] = 'logarithmic'
print(b.run_checks())
Explanation: Note that if you change the value of 'atm' to anything other than 'ck2004', the corresponding 'ld_func' will need to be changed to something other than 'interp' (warnings and errors will be raised to remind you of this).
End of explanation
b['passband']
Explanation: A 'passband' parameter exists for each passband-dependent-dataset (i.e. not meshes or orbits, but light curves and radial velocities). This parameter dictates which passband should be used for the computation of all intensities.
End of explanation
print(b['passband'].choices)
Explanation: The available choices will include both locally installed passbands as well as passbands currently available from the online PHOEBE repository. If you choose an online-passband, it will be downloaded and installed locally as soon as required by b.run_compute.
End of explanation
print(phoebe.list_installed_passbands())
Explanation: To see your current locally-installed passbands, call phoebe.list_installed_passbands().
End of explanation
print(phoebe.list_passband_directories())
Explanation: These installed passbands can be in any of a number of directories, which can be accessed via phoebe.list_passband_directories().
The first entry is the global location - this is where passbands can be stored by a server-admin to be available to all PHOEBE-users on that machine.
The second entry is the local location - this is where individual users can store passbands and where PHOEBE will download and install passbands (by default).
End of explanation
print(phoebe.list_online_passbands())
Explanation: To see the passbands available from the online repository, call phoebe.list_online_passbands().
End of explanation
phoebe.download_passband('Cousins:Rc')
print(phoebe.list_installed_passbands())
Explanation: Lastly, to manually download and install one of these online passbands, you can do so explicitly via phoebe.download_passband or by visiting tables.phoebe-project.org. See also the tutorial on updating passbands.
Note that this isn't necessary unless you want to explicitly download passbands before needed by run_compute (perhaps if you're expecting to have unreliable network connection in the future and want to ensure you have all needed passbands).
End of explanation |
15,139 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Matplotlib Exercise 3
Imports
Step2: Contour plots of 2d wavefunctions
The wavefunction of a 2d quantum well is
Step3: The contour, contourf, pcolor and pcolormesh functions of Matplotlib can be used for effective visualizations of 2d scalar fields. Use the Matplotlib documentation to learn how to use these functions along with the numpy.meshgrid function to visualize the above wavefunction
Step4: Next make a visualization using one of the pcolor functions | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
Explanation: Matplotlib Exercise 3
Imports
End of explanation
def well2d(x, y, nx, ny, L=1.0):
Compute the 2d quantum well wave function.
# YOUR CODE HERE
raise NotImplementedError()
psi = well2d(np.linspace(0,1,10), np.linspace(0,1,10), 1, 1)
assert len(psi)==10
assert psi.shape==(10,)
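# A hedged reference implementation of the formula above (commented out); it
# satisfies the same asserts:
# def well2d(x, y, nx, ny, L=1.0):
#     return (2.0 / L) * np.sin(nx * np.pi * x / L) * np.sin(ny * np.pi * y / L)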
Explanation: Contour plots of 2d wavefunctions
The wavefunction of a 2d quantum well is:
$$ \psi_{n_x,n_y}(x,y) = \frac{2}{L}
\sin{\left( \frac{n_x \pi x}{L} \right)}
\sin{\left( \frac{n_y \pi y}{L} \right)} $$
This is a scalar field and $n_x$ and $n_y$ are quantum numbers that measure the level of excitation in the x and y directions. $L$ is the size of the well.
Define a function well2d that computes this wavefunction for values of x and y that are NumPy arrays.
End of explanation
# YOUR CODE HERE
raise NotImplementedError()
assert True # use this cell for grading the contour plot
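# A hedged sketch of one possible contour visualization (assumes well2d is
# implemented; L=1.0 is used because L=0 would make the wavefunction undefined):
# X, Y = np.meshgrid(np.linspace(0, 1, 100), np.linspace(0, 1, 100))
# P = well2d(X, Y, 3, 2, 1.0)
# plt.contourf(X, Y, P, 20, cmap='RdBu')
# plt.colorbar(); plt.xlabel('x'); plt.ylabel('y')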
Explanation: The contour, contourf, pcolor and pcolormesh functions of Matplotlib can be used for effective visualizations of 2d scalar fields. Use the Matplotlib documentation to learn how to use these functions along with the numpy.meshgrid function to visualize the above wavefunction:
Use $n_x=3$, $n_y=2$ and $L=0$.
Use the limits $[0,1]$ for the x and y axis.
Customize your plot to make it effective and beautiful.
Use a non-default colormap.
Add a colorbar to you visualization.
First make a plot using one of the contour functions:
End of explanation
# YOUR CODE HERE
raise NotImplementedError()
assert True # use this cell for grading the pcolor plot
Explanation: Next make a visualization using one of the pcolor functions:
End of explanation |
15,140 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<img src="https
Step1: MCMC (emcee)
MCMC is a convenient tool for drawing a sample from a given probability distribution.
Therefore, it is mostly used to estimate parameters in a Bayesian way.
emcee
Step2: a simple example - drawing a sample from a uniform distribution
Step3: how about Gaussian distribution?
1-D Gaussian
$p(x|\mu, \sigma) \propto \exp{(-\frac{(x-\mu)^2}{2\sigma^2})}$
N-D Gaussian
$p(\overrightarrow{x}|\overrightarrow{\mu}, \Sigma) \propto \exp{(-\frac{1}{2}(\overrightarrow{x}-\overrightarrow{\mu})^T\Sigma^{-1} (\overrightarrow{x}-\overrightarrow{\mu}))}$
where $\Sigma$ is the covariance matrix
Step4: how to use MCMC to estimate model parameters?
suppose you choose a Gaussian likelihood
Step5: comparison with the results from optimization | Python Code:
%pylab inline
np.random.seed(0)
p = [3.2, 5.6, 9.2]
x = np.arange(-8., 5., 0.1)
y = np.polyval(p, x) + np.random.randn(x.shape[0])*1.
plt.plot(x, y);
# STEP 1 - define your model
def my_model(p, x):
return np.polyval(p, x)
# STEP 2 - define your cost function
def my_costfun(p, x, y):
return np.sum((my_model(p, x) - y)**2)
# STEP 3 - minimize cost function
from scipy.optimize import minimize
result = minimize(my_costfun, np.array([2., 3., 5.]), args=(x,y) )
print result
print 'RESULT:\n', result
print ''
print 'RELATIVE ERROR:\n', (result.x - p)/p*100., '%'
print ''
print 'Hessian ERROR:' #err = sqrt(diag(inv(Hessian)))
hess_err = np.sqrt(np.diag(result['hess_inv']))
print hess_err
Explanation: <img src="https://www.python.org/static/img/python-logo.png">
Welcome to my lessons
Bo Zhang (NAOC, bozhang@nao.cas.cn) will have a few lessons on python.
These lessons cover useful knowledge, skills and code styles for processing astronomical data with Python.
All materials can be found on my github page.
Jupyter notebook (formerly named IPython notebook) is recommended for working through them.
These lectures are organized as below:
1. install python
2. basic syntax
3. numerical computing
4. scientific computing
5. plotting
6. astronomical data processing
7. high performance computing
8. version control
numpy
Docs: http://docs.scipy.org/doc/numpy/user/index.html
scipy
Docs: http://docs.scipy.org/doc/scipy/reference/index.html
scipy.optimize.minimize
Docs: http://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.minimize.html
optimization / minimization
End of explanation
from emcee import EnsembleSampler
Explanation: MCMC (emcee)
MCMC is a convenient tool for drawing a sample from a given probability distribution.
Therefore, it is mostly used to estimate parameters in a Bayesian way.
emcee: http://dan.iel.fm/emcee/current/#
End of explanation
def lnprob(theta):
theta = np.array(theta)
if np.all(theta>-3.) and np.all(theta<3.):
return 0
return -np.inf
nwalkers = 10
ndim = 3
p0 = [np.random.rand(ndim) for i in range(nwalkers)]
sampler = EnsembleSampler(nwalkers, ndim, lnprob)
pos = sampler.run_mcmc(p0, 2000)
np.corrcoef(sampler.flatchain[0:2000, 0], sampler.flatchain[2000:4000, 0])
fig = plt.figure(figsize=(12,10))
ax = fig.add_subplot(311)
ax.plot(sampler.chain[:,:,0].T, '-', color='k', alpha=0.3)
ax = fig.add_subplot(312)
ax.plot(sampler.chain[:,:,1].T, '-', color='k', alpha=0.3)
ax = fig.add_subplot(313)
ax.plot(sampler.chain[:,:,2].T, '-', color='k', alpha=0.3);
import corner
fig = corner.corner(sampler.flatchain, labels=["p0", "p1", "p2"],
truths=[0., 0., 0.])
# fig.savefig("triangle.png")
Explanation: a simple example - drawing a sample from a uniform distribution
End of explanation
def lnprob(x, mu, ivar):
# if np.all(np.abs(x)<100.):
x = x.reshape(-1, 1)
mu = mu.reshape(-1, 1)
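    # note: the 1/2 factor from the Gaussian exponent is omitted below, so the chain
    # samples a Gaussian with covariance cov/2; multiply by 0.5 to match the formula exactly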
return -np.dot(np.dot((x-mu).T, ivar), x-mu)
# else:
# return -np.inf
mu = np.array([0.1, 0.2, 0.5])
cov = np.array([[1.0, 0.0, 0.0],
[0.0, 10, 9],
[0.0, 9, 10]])
ivar = np.linalg.inv(cov)
print 'ivar: \n', ivar
print 'det(cov): \n', np.linalg.det(cov)
print 'det(ivar): \n', np.linalg.det(ivar)
nwalkers = 10
ndim = 3
p0 = [np.random.rand(ndim) for i in range(nwalkers)]
sampler = EnsembleSampler(nwalkers, ndim, lnprob, args=(mu, ivar), threads=10)
pos,prob,state = sampler.run_mcmc(p0, 2000)
p0
fig = plt.figure(figsize=(12,10))
ax = fig.add_subplot(311)
ax.plot(sampler.chain[:,:,0].T, '-', color='k', alpha=0.3)
ax = fig.add_subplot(312)
ax.plot(sampler.chain[:,:,1].T, '-', color='k', alpha=0.3)
ax = fig.add_subplot(313)
ax.plot(sampler.chain[:,:,2].T, '-', color='k', alpha=0.3);
fig = corner.corner(sampler.flatchain, labels=["mu1", "mu2", "mu3"],
truths=mu)
print mu
print ivar
Explanation: how about Gaussian distribution?
1-D Gaussian
$p(x|\mu, \sigma) \propto \exp{(-\frac{(x-\mu)^2}{2\sigma^2})}$
N-D Gaussian
$p(\overrightarrow{x}|\overrightarrow{\mu}, \Sigma) \propto \exp{(-\frac{1}{2}(\overrightarrow{x}-\overrightarrow{\mu})^T\Sigma^{-1} (\overrightarrow{x}-\overrightarrow{\mu}))}$
where $\Sigma$ is the covariance matrix
End of explanation
def lnprior(theta):
if np.all(np.abs(theta)<10000.):
return 0
else:
return -np.inf
def lnlike(theta, x, y):
y_model = np.polyval(theta, x)
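    # note: the 1/(2*sigma^2) factor is dropped here (sigma taken as 1), which only
    # rescales the width of the resulting posterior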
return -np.sum((y_model-y)**2)
def lnprob(theta, x, y):
return lnprior(theta)+lnlike(theta, x, y)
nwalkers = 10
ndim = 3
p0 = [np.random.rand(ndim) for i in range(nwalkers)]
sampler = EnsembleSampler(nwalkers, ndim, lnprob, args=(x, y), threads=10)
pos,prob,state = sampler.run_mcmc(p0, 500)
np.corrcoef(sampler.flatchain[0:500, 0], sampler.flatchain[500:1000, 0])
fig = plt.figure(figsize=(12,10))
ax = fig.add_subplot(311)
ax.plot(sampler.chain[:,:,0].T, '-', color='k', alpha=0.3)
ax = fig.add_subplot(312)
ax.plot(sampler.chain[:,:,1].T, '-', color='k', alpha=0.3)
ax = fig.add_subplot(313)
ax.plot(sampler.chain[:,:,2].T, '-', color='k', alpha=0.3);
fig = corner.corner(sampler.flatchain, labels=["p0", "p1", "p2"],
truths=p)
sampler.reset()
pos,prob,state = sampler.run_mcmc(pos, 2000)
np.corrcoef(sampler.flatchain[0:2000, 0], sampler.flatchain[4000:6000, 0])
fig = plt.figure(figsize=(12,10))
ax = fig.add_subplot(311)
ax.plot(sampler.chain[:,:,0].T, '-', color='k', alpha=0.3)
ax = fig.add_subplot(312)
ax.plot(sampler.chain[:,:,1].T, '-', color='k', alpha=0.3)
ax = fig.add_subplot(313)
ax.plot(sampler.chain[:,:,2].T, '-', color='k', alpha=0.3);
fig = corner.corner(sampler.flatchain, labels=["p0", "p1", "p2"],
truths=p)
fig = corner.corner(sampler.flatchain, labels=["p0", "p1", "p2"],
truths=result.x)
Explanation: how to use MCMC to estimate model parameters?
suppose you choose a Gaussian likelihood:
$L(\theta|x_i,model) \propto \exp{(-\frac{(x_i-x_{i, model})^2}{2\sigma^2})} $
$ \log{(L(\theta|x_i,model))} \propto -\frac{(x_i-x_{i, model})^2}{2\sigma^2} = -\frac{1}{2}{\chi^2}$
End of explanation
# truth
p
# MCMC results
np.percentile(sampler.flatchain, [15., 50., 85.], axis=0)
print result.x - hess_err
print result.x
print result.x + hess_err
Explanation: comparison with the results from optimization
End of explanation |
15,141 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Image Classification
In this project, you'll classify images from the CIFAR-10 dataset. The dataset consists of airplanes, dogs, cats, and other objects. You'll preprocess the images, then train a convolutional neural network on all the samples. The images need to be normalized and the labels need to be one-hot encoded. You'll get to apply what you learned and build a convolutional, max pooling, dropout, and fully connected layers. At the end, you'll get to see your neural network's predictions on the sample images.
Get the Data
Run the following cell to download the CIFAR-10 dataset for python.
Step2: Explore the Data
The dataset is broken into batches to prevent your machine from running out of memory. The CIFAR-10 dataset consists of 5 batches, named data_batch_1, data_batch_2, etc.. Each batch contains the labels and images that are one of the following
Step5: Implement Preprocess Functions
Normalize
In the cell below, implement the normalize function to take in image data, x, and return it as a normalized Numpy array. The values should be in the range of 0 to 1, inclusive. The return object should be the same shape as x.
Step8: One-hot encode
Just like the previous code cell, you'll be implementing a function for preprocessing. This time, you'll implement the one_hot_encode function. The input, x, are a list of labels. Implement the function to return the list of labels as One-Hot encoded Numpy array. The possible values for labels are 0 to 9. The one-hot encoding function should return the same encoding for each value between each call to one_hot_encode. Make sure to save the map of encodings outside the function.
Hint
Step10: Randomize Data
As you saw from exploring the data above, the order of the samples are randomized. It doesn't hurt to randomize it again, but you don't need to for this dataset.
Preprocess all the data and save it
Running the code cell below will preprocess all the CIFAR-10 data and save it to file. The code below also uses 10% of the training data for validation.
Step12: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
Step17: Build the network
For the neural network, you'll build each layer into a function. Most of the code you've seen has been outside of functions. To test your code more thoroughly, we require that you put each layer in a function. This allows us to give you better feedback and test for simple mistakes using our unittests before you submit your project.
Note
Step20: Convolution and Max Pooling Layer
Convolution layers have a lot of success with images. For this code cell, you should implement the function conv2d_maxpool to apply convolution then max pooling
Step23: Flatten Layer
Implement the flatten function to change the dimension of x_tensor from a 4-D tensor to a 2-D tensor. The output should be the shape (Batch Size, Flattened Image Size). Shortcut option
Step26: Fully-Connected Layer
Implement the fully_conn function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option
Step29: Output Layer
Implement the output function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option
Step32: Create Convolutional Model
Implement the function conv_net to create a convolutional neural network model. The function takes in a batch of images, x, and outputs logits. Use the layers you created above to create this model
Step35: Train the Neural Network
Single Optimization
Implement the function train_neural_network to do a single optimization. The optimization should use optimizer to optimize in session with a feed_dict of the following
Step37: Show Stats
Implement the function print_stats to print loss and validation accuracy. Use the global variables valid_features and valid_labels to calculate validation accuracy. Use a keep probability of 1.0 to calculate the loss and validation accuracy.
Step38: Hyperparameters
Tune the following parameters
Step40: Train on a Single CIFAR-10 Batch
Instead of training the neural network on all the CIFAR-10 batches of data, let's use a single batch. This should save time while you iterate on the model to get a better accuracy. Once the final validation accuracy is 50% or greater, run the model on all the data in the next section.
Step42: Fully Train the Model
Now that you got a good accuracy with a single CIFAR-10 batch, try it with all five batches.
Step45: Checkpoint
The model has been saved to disk.
Test Model
Test your model against the test dataset. This will be your final accuracy. You should have an accuracy greater than 50%. If you don't, keep tweaking the model architecture and parameters. | Python Code:
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
import problem_unittests as tests
import tarfile
cifar10_dataset_folder_path = 'cifar-10-batches-py'
# Use Floyd's cifar-10 dataset if present
floyd_cifar10_location = '/input/cifar-10/python.tar.gz'
if isfile(floyd_cifar10_location):
tar_gz_path = floyd_cifar10_location
else:
tar_gz_path = 'cifar-10-python.tar.gz'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(tar_gz_path):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='CIFAR-10 Dataset') as pbar:
urlretrieve(
'https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz',
tar_gz_path,
pbar.hook)
if not isdir(cifar10_dataset_folder_path):
with tarfile.open(tar_gz_path) as tar:
tar.extractall()
tar.close()
tests.test_folder_path(cifar10_dataset_folder_path)
Explanation: Image Classification
In this project, you'll classify images from the CIFAR-10 dataset. The dataset consists of airplanes, dogs, cats, and other objects. You'll preprocess the images, then train a convolutional neural network on all the samples. The images need to be normalized and the labels need to be one-hot encoded. You'll get to apply what you learned and build a convolutional, max pooling, dropout, and fully connected layers. At the end, you'll get to see your neural network's predictions on the sample images.
Get the Data
Run the following cell to download the CIFAR-10 dataset for python.
End of explanation
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import helper
import numpy as np
# Explore the dataset
batch_id = 1
sample_id = 5
helper.display_stats(cifar10_dataset_folder_path, batch_id, sample_id)
Explanation: Explore the Data
The dataset is broken into batches to prevent your machine from running out of memory. The CIFAR-10 dataset consists of 5 batches, named data_batch_1, data_batch_2, etc.. Each batch contains the labels and images that are one of the following:
* airplane
* automobile
* bird
* cat
* deer
* dog
* frog
* horse
* ship
* truck
Understanding a dataset is part of making predictions on the data. Play around with the code cell below by changing the batch_id and sample_id. The batch_id is the id for a batch (1-5). The sample_id is the id for a image and label pair in the batch.
Ask yourself "What are all possible labels?", "What is the range of values for the image data?", "Are the labels in order or random?". Answers to questions like these will help you preprocess the data and end up with better predictions.
End of explanation
def normalize(x):
Normalize a list of sample image data in the range of 0 to 1
: x: List of image data. The image shape is (32, 32, 3)
: return: Numpy array of normalize data
norm = np.array (x / x.max())
return norm
#norm=np.linalg.norm(x)
#if norm==0:
# return x
#return x/norm
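    # note (hedged): dividing by x.max() gives the 0-1 range only because pixel values
    # are non-negative; a more general alternative is min-max scaling:
    # (x - x.min()) / (x.max() - x.min())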
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_normalize(normalize)
Explanation: Implement Preprocess Functions
Normalize
In the cell below, implement the normalize function to take in image data, x, and return it as a normalized Numpy array. The values should be in the range of 0 to 1, inclusive. The return object should be the same shape as x.
End of explanation
from sklearn import preprocessing
one_hot_classes = None
def one_hot_encode(x):
One hot encode a list of sample labels. Return a one-hot encoded vector for each label.
: x: List of sample Labels
: return: Numpy array of one-hot encoded labels
global one_hot_classes
# TODO: Implement Function
return preprocessing.label_binarize(x,classes=[0,1,2,3,4,5,6,7,8,9])
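    # a pure-numpy alternative would be (sketch): np.eye(10)[np.asarray(x)]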
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_one_hot_encode(one_hot_encode)
Explanation: One-hot encode
Just like the previous code cell, you'll be implementing a function for preprocessing. This time, you'll implement the one_hot_encode function. The input, x, are a list of labels. Implement the function to return the list of labels as One-Hot encoded Numpy array. The possible values for labels are 0 to 9. The one-hot encoding function should return the same encoding for each value between each call to one_hot_encode. Make sure to save the map of encodings outside the function.
Hint: Don't reinvent the wheel.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(cifar10_dataset_folder_path, normalize, one_hot_encode)
Explanation: Randomize Data
As you saw from exploring the data above, the order of the samples are randomized. It doesn't hurt to randomize it again, but you don't need to for this dataset.
Preprocess all the data and save it
Running the code cell below will preprocess all the CIFAR-10 data and save it to file. The code below also uses 10% of the training data for validation.
End of explanation
stddev=0.05
DON'T MODIFY ANYTHING IN THIS CELL
import pickle
import problem_unittests as tests
import helper
# Load the Preprocessed Validation data
valid_features, valid_labels = pickle.load(open('preprocess_validation.p', mode='rb'))
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
import tensorflow as tf
def neural_net_image_input(image_shape):
Return a Tensor for a batch of image input
: image_shape: Shape of the images
: return: Tensor for image input.
# TODO: Implement Function
x=tf.placeholder(tf.float32,(None, image_shape[0], image_shape[1], image_shape[2]), name='x')
return x
def neural_net_label_input(n_classes):
Return a Tensor for a batch of label input
: n_classes: Number of classes
: return: Tensor for label input.
# TODO: Implement Function
return tf.placeholder(tf.float32, (None, n_classes), name='y')
def neural_net_keep_prob_input():
Return a Tensor for keep probability
: return: Tensor for keep probability.
# TODO: Implement Function
return tf.placeholder(tf.float32,name='keep_prob')
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tf.reset_default_graph()
tests.test_nn_image_inputs(neural_net_image_input)
tests.test_nn_label_inputs(neural_net_label_input)
tests.test_nn_keep_prob_inputs(neural_net_keep_prob_input)
Explanation: Build the network
For the neural network, you'll build each layer into a function. Most of the code you've seen has been outside of functions. To test your code more thoroughly, we require that you put each layer in a function. This allows us to give you better feedback and test for simple mistakes using our unittests before you submit your project.
Note: If you're finding it hard to dedicate enough time for this course each week, we've provided a small shortcut to this part of the project. In the next couple of problems, you'll have the option to use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages to build each layer, except the layers you build in the "Convolutional and Max Pooling Layer" section. TF Layers is similar to Keras's and TFLearn's abstraction to layers, so it's easy to pickup.
However, if you would like to get the most out of this course, try to solve all the problems without using anything from the TF Layers packages. You can still use classes from other packages that happen to have the same name as ones you find in TF Layers! For example, instead of using the TF Layers version of the conv2d class, tf.layers.conv2d, you would want to use the TF Neural Network version of conv2d, tf.nn.conv2d.
Let's begin!
Input
The neural network needs to read the image data, one-hot encoded labels, and dropout keep probability. Implement the following functions
* Implement neural_net_image_input
* Return a TF Placeholder
* Set the shape using image_shape with batch size set to None.
* Name the TensorFlow placeholder "x" using the TensorFlow name parameter in the TF Placeholder.
* Implement neural_net_label_input
* Return a TF Placeholder
* Set the shape using n_classes with batch size set to None.
* Name the TensorFlow placeholder "y" using the TensorFlow name parameter in the TF Placeholder.
* Implement neural_net_keep_prob_input
* Return a TF Placeholder for dropout keep probability.
* Name the TensorFlow placeholder "keep_prob" using the TensorFlow name parameter in the TF Placeholder.
These names will be used at the end of the project to load your saved model.
Note: None for shapes in TensorFlow allow for a dynamic size.
End of explanation
import math
def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides):
Apply convolution then max pooling to x_tensor
:param x_tensor: TensorFlow Tensor
:param conv_num_outputs: Number of outputs for the convolutional layer
:param conv_ksize: kernal size 2-D Tuple for the convolutional layer
:param conv_strides: Stride 2-D Tuple for convolution
:param pool_ksize: kernal size 2-D Tuple for pool
:param pool_strides: Stride 2-D Tuple for pool
: return: A tensor that represents convolution and max pooling of x_tensor
# TODO: Implement Function
height = math.ceil((float(x_tensor.shape[1].value - conv_ksize[0] + 1))/float((conv_strides[0])))
width = math.ceil(float((x_tensor.shape[2].value - conv_ksize[1] + 1))/float((conv_strides[1])))
#height = math.ceil((float(x_tensor.shape[1].value - conv_ksize[0] + 2))/float((conv_strides[0] + 1)))
#width = math.ceil(float((x_tensor.shape[2].value - conv_ksize[1] + 2))/float((conv_strides[1] + 1)))
weight = tf.Variable(tf.truncated_normal((height, width, x_tensor.shape[3].value, conv_num_outputs),stddev=stddev))
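    # note: these kernel dims come from the output-size formula above; a filter built
    # from the requested kernel size would instead have shape
    # (conv_ksize[0], conv_ksize[1], x_tensor.shape[3].value, conv_num_outputs)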
bias = tf.Variable(tf.zeros(conv_num_outputs))
conv_layer = tf.nn.conv2d(x_tensor, weight, strides=[1,conv_strides[0],conv_strides[1],1], padding='SAME')
conv_layer = tf.nn.bias_add(conv_layer,bias)
conv_layer = tf.nn.relu(conv_layer)
maxpool_layer = tf.nn.max_pool(conv_layer, ksize=[1,pool_ksize[0],pool_ksize[1],1], strides=[1,pool_strides[0],pool_strides[1],1], padding='SAME')
return maxpool_layer
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_con_pool(conv2d_maxpool)
Explanation: Convolution and Max Pooling Layer
Convolution layers have a lot of success with images. For this code cell, you should implement the function conv2d_maxpool to apply convolution then max pooling:
* Create the weight and bias using conv_ksize, conv_num_outputs and the shape of x_tensor.
* Apply a convolution to x_tensor using weight and conv_strides.
* We recommend you use same padding, but you're welcome to use any padding.
* Add bias
* Add a nonlinear activation to the convolution.
* Apply Max Pooling using pool_ksize and pool_strides.
* We recommend you use same padding, but you're welcome to use any padding.
Note: You can't use TensorFlow Layers or TensorFlow Layers (contrib) for this layer, but you can still use TensorFlow's Neural Network package. You may still use the shortcut option for all the other layers.
End of explanation
def flatten(x_tensor):
Flatten x_tensor to (Batch Size, Flattened Image Size)
: x_tensor: A tensor of size (Batch Size, ...), where ... are the image dimensions.
: return: A tensor of size (Batch Size, Flattened Image Size).
# TODO: Implement Function
flattened = x_tensor.shape[1].value * x_tensor.shape[2].value * x_tensor.shape[3].value
return tf.reshape(x_tensor, shape=(-1, flattened))
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_flatten(flatten)
Explanation: Flatten Layer
Implement the flatten function to change the dimension of x_tensor from a 4-D tensor to a 2-D tensor. The output should be the shape (Batch Size, Flattened Image Size). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
End of explanation
def fully_conn(x_tensor, num_outputs):
Apply a fully connected layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
: num_outputs: The number of output that the new tensor should be.
: return: A 2-D tensor where the second dimension is num_outputs.
# TODO: Implement Function
weights = tf.Variable(tf.truncated_normal([x_tensor.shape[1].value, num_outputs],stddev=stddev))
bias = tf.Variable(tf.zeros([num_outputs], dtype=tf.float32))
fc1 = tf.add(tf.matmul(x_tensor, weights), bias)
out = tf.nn.relu(fc1)
return out
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_fully_conn(fully_conn)
Explanation: Fully-Connected Layer
Implement the fully_conn function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
End of explanation
def output(x_tensor, num_outputs):
Apply a output layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
: num_outputs: The number of output that the new tensor should be.
: return: A 2-D tensor where the second dimension is num_outputs.
weights = tf.Variable(tf.truncated_normal([x_tensor.shape[1].value, num_outputs],stddev=stddev))
bias = tf.Variable(tf.zeros([num_outputs], dtype=tf.float32))
return tf.add(tf.matmul(x_tensor, weights), bias)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_output(output)
Explanation: Output Layer
Implement the output function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
Note: Activation, softmax, or cross entropy should not be applied to this.
End of explanation
def conv_net(x, keep_prob):
Create a convolutional neural network model
: x: Placeholder tensor that holds image data.
: keep_prob: Placeholder tensor that hold dropout keep probability.
: return: Tensor that represents logits
# TODO: Apply 1, 2, or 3 Convolution and Max Pool layers
# Play around with different number of outputs, kernel size and stride
# Function Definition from Above:
#def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides):
stddev=0.01
conv_strides = (2,2) # Getting out of mem errors with stride=1
pool_strides = (2,2)
pool_ksize = (2,2)
conv_num_outputs1 = 32
conv_ksize1 = (2,2)
conv_num_outputs2 = 128
conv_ksize2 = (4,4)
conv_num_outputs3 = 128
conv_ksize3 = (2,2)
fully_conn_out1 = 1024
fully_conn_out2 = 512
fully_conn_out3 = 128
num_outputs = 10
x = conv2d_maxpool(x, conv_num_outputs1, conv_ksize1, conv_strides, pool_ksize, pool_strides)
#x = tf.nn.dropout(x, keep_prob)
x = conv2d_maxpool(x, conv_num_outputs2, conv_ksize2, conv_strides, pool_ksize, pool_strides)
x = tf.nn.dropout(x, keep_prob)
#x = conv2d_maxpool(x, conv_num_outputs3, conv_ksize3, conv_strides, pool_ksize, pool_strides)
# TODO: Apply a Flatten Layer
# Function Definition from Above:
x = flatten(x)
# TODO: Apply 1, 2, or 3 Fully Connected Layers
# Play around with different number of outputs
# Function Definition from Above:
x = fully_conn(x,fully_conn_out1)
x = tf.nn.dropout(x, keep_prob)
x = fully_conn(x,fully_conn_out2)
#x = tf.nn.dropout(x, keep_prob)
#x = fully_conn(x,fully_conn_out3)
# TODO: Apply an Output Layer
# Set this to the number of classes
# Function Definition from Above:
x = output(x, num_outputs)
return x
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
##############################
## Build the Neural Network ##
##############################
# Remove previous weights, bias, inputs, etc..
tf.reset_default_graph()
# Inputs
x = neural_net_image_input((32, 32, 3))
y = neural_net_label_input(10)
keep_prob = neural_net_keep_prob_input()
# Model
logits = conv_net(x, keep_prob)
# Name logits Tensor, so that is can be loaded from disk after training
logits = tf.identity(logits, name='logits')
# Loss and Optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))
optimizer = tf.train.AdamOptimizer().minimize(cost)
# Accuracy
correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy')
tests.test_conv_net(conv_net)
Explanation: Create Convolutional Model
Implement the function conv_net to create a convolutional neural network model. The function takes in a batch of images, x, and outputs logits. Use the layers you created above to create this model:
Apply 1, 2, or 3 Convolution and Max Pool layers
Apply a Flatten Layer
Apply 1, 2, or 3 Fully Connected Layers
Apply an Output Layer
Return the output
Apply TensorFlow's Dropout to one or more layers in the model using keep_prob.
End of explanation
def train_neural_network(session, optimizer, keep_probability, feature_batch, label_batch):
Optimize the session on a batch of images and labels
: session: Current TensorFlow session
: optimizer: TensorFlow optimizer function
: keep_probability: keep probability
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
session.run(optimizer, feed_dict={
x: feature_batch,
y: label_batch,
keep_prob: keep_probability})
pass
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_train_nn(train_neural_network)
Explanation: Train the Neural Network
Single Optimization
Implement the function train_neural_network to do a single optimization. The optimization should use optimizer to optimize in session with a feed_dict of the following:
* x for image input
* y for labels
* keep_prob for keep probability for dropout
This function will be called for each batch, so tf.global_variables_initializer() has already been called.
Note: Nothing needs to be returned. This function is only optimizing the neural network.
End of explanation
def print_stats(session, feature_batch, label_batch, cost, accuracy):
Print information about loss and validation accuracy
: session: Current TensorFlow session
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
: cost: TensorFlow cost function
: accuracy: TensorFlow accuracy function
# TODO: Implement Function
loss = session.run(cost, feed_dict={
x: feature_batch,
y: label_batch,
keep_prob: 1.})
valid_acc = sess.run(accuracy, feed_dict={
x: valid_features[:256],
y: valid_labels[:256],
keep_prob: 1.})
train_acc = session.run (accuracy, feed_dict = {
x: feature_batch,
y: label_batch,
keep_prob: 1.})
print('Loss: {:>10.4f} Training: {:.6f} Validation: {:.6f}'.format(
loss,
train_acc,
valid_acc))
pass
Explanation: Show Stats
Implement the function print_stats to print loss and validation accuracy. Use the global variables valid_features and valid_labels to calculate validation accuracy. Use a keep probability of 1.0 to calculate the loss and validation accuracy.
End of explanation
# TODO: Tune Parameters
epochs = 100
batch_size = 1024
keep_probability = 0.4
Explanation: Hyperparameters
Tune the following parameters:
* Set epochs to the number of iterations until the network stops learning or start overfitting
* Set batch_size to the highest number that your machine has memory for. Most people set them to common sizes of memory:
* 64
* 128
* 256
* ...
* Set keep_probability to the probability of keeping a node using dropout
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
print('Checking the Training on a Single Batch...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
batch_i = 1
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
Explanation: Train on a Single CIFAR-10 Batch
Instead of training the neural network on all the CIFAR-10 batches of data, let's use a single batch. This should save time while you iterate on the model to get a better accuracy. Once the final validation accuracy is 50% or greater, run the model on all the data in the next section.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
save_model_path = './image_classification'
print('Training...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
# Loop over all batches
n_batches = 5
for batch_i in range(1, n_batches + 1):
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
# Save Model
saver = tf.train.Saver()
save_path = saver.save(sess, save_model_path)
Explanation: Fully Train the Model
Now that you got a good accuracy with a single CIFAR-10 batch, try it with all five batches.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import tensorflow as tf
import pickle
import helper
import random
# Set batch size if not already set
try:
if batch_size:
pass
except NameError:
batch_size = 64
save_model_path = './image_classification'
n_samples = 4
top_n_predictions = 3
def test_model():
Test the saved model against the test dataset
test_features, test_labels = pickle.load(open('preprocess_test.p', mode='rb'))
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load model
loader = tf.train.import_meta_graph(save_model_path + '.meta')
loader.restore(sess, save_model_path)
# Get Tensors from loaded model
loaded_x = loaded_graph.get_tensor_by_name('x:0')
loaded_y = loaded_graph.get_tensor_by_name('y:0')
loaded_keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
loaded_logits = loaded_graph.get_tensor_by_name('logits:0')
loaded_acc = loaded_graph.get_tensor_by_name('accuracy:0')
# Get accuracy in batches for memory limitations
test_batch_acc_total = 0
test_batch_count = 0
for test_feature_batch, test_label_batch in helper.batch_features_labels(test_features, test_labels, batch_size):
test_batch_acc_total += sess.run(
loaded_acc,
feed_dict={loaded_x: test_feature_batch, loaded_y: test_label_batch, loaded_keep_prob: 1.0})
test_batch_count += 1
print('Testing Accuracy: {}\n'.format(test_batch_acc_total/test_batch_count))
# Print Random Samples
random_test_features, random_test_labels = tuple(zip(*random.sample(list(zip(test_features, test_labels)), n_samples)))
random_test_predictions = sess.run(
tf.nn.top_k(tf.nn.softmax(loaded_logits), top_n_predictions),
feed_dict={loaded_x: random_test_features, loaded_y: random_test_labels, loaded_keep_prob: 1.0})
helper.display_image_predictions(random_test_features, random_test_labels, random_test_predictions)
test_model()
Explanation: Checkpoint
The model has been saved to disk.
Test Model
Test your model against the test dataset. This will be your final accuracy. You should have an accuracy greater than 50%. If you don't, keep tweaking the model architecture and parameters.
End of explanation |
15,142 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Fully-Connected Neural Nets
In the previous homework you implemented a fully-connected two-layer neural network on CIFAR-10. The implementation was simple but not very modular since the loss and gradient were computed in a single monolithic function. This is manageable for a simple two-layer network, but would become impractical as we move to bigger models. Ideally we want to build networks using a more modular design so that we can implement different layer types in isolation and then snap them together into models with different architectures.
In this exercise we will implement fully-connected networks using a more modular approach. For each layer we will implement a forward and a backward function. The forward function will receive inputs, weights, and other parameters and will return both an output and a cache object storing data needed for the backward pass, like this
Step4: Affine layer
Step5: Affine layer
Step6: ReLU layer
Step7: ReLU layer
Step8: "Sandwich" layers
There are some common patterns of layers that are frequently used in neural nets. For example, affine layers are frequently followed by a ReLU nonlinearity. To make these common patterns easy, we define several convenience layers in the file cs231n/layer_utils.py.
For now take a look at the affine_relu_forward and affine_relu_backward functions, and run the following to numerically gradient check the backward pass
Step9: Loss layers
Step10: Two-layer network
In the previous assignment you implemented a two-layer neural network in a single monolithic class. Now that you have implemented modular versions of the necessary layers, you will reimplement the two layer network using these modular implementations.
Open the file cs231n/classifiers/fc_net.py and complete the implementation of the TwoLayerNet class. This class will serve as a model for the other networks you will implement in this assignment, so read through it to make sure you understand the API. You can run the cell below to test your implementation.
Step11: Solver
In the previous assignment, the logic for training models was coupled to the models themselves. Following a more modular design, for this assignment we have split the logic for training models into a separate class.
Open the file cs231n/solver.py and read through it to familiarize yourself with the API. After doing so, use a Solver instance to train a TwoLayerNet that achieves at least 50% accuracy on the validation set.
Step12: Multilayer network
Next you will implement a fully-connected network with an arbitrary number of hidden layers.
Read through the FullyConnectedNet class in the file cs231n/classifiers/fc_net.py.
Implement the initialization, the forward pass, and the backward pass. For the moment don't worry about implementing dropout or batch normalization; we will add those features soon.
Initial loss and gradient check
As a sanity check, run the following to check the initial loss and to gradient check the network both with and without regularization. Do the initial losses seem reasonable?
For gradient checking, you should expect to see errors around 1e-6 or less.
Step13: As another sanity check, make sure you can overfit a small dataset of 50 images. First we will try a three-layer network with 100 units in each hidden layer. You will need to tweak the learning rate and initialization scale, but you should be able to overfit and achieve 100% training accuracy within 20 epochs.
Step14: Now try to use a five-layer network with 100 units on each layer to overfit 50 training examples. Again you will have to adjust the learning rate and weight initialization, but you should be able to achieve 100% training accuracy within 20 epochs.
Step15: Inline question
Step16: Once you have done so, run the following to train a six-layer network with both SGD and SGD+momentum. You should see the SGD+momentum update rule converge faster.
Step17: RMSProp and Adam
RMSProp [1] and Adam [2] are update rules that set per-parameter learning rates by using a running average of the second moments of gradients.
In the file cs231n/optim.py, implement the RMSProp update rule in the rmsprop function and implement the Adam update rule in the adam function, and check your implementations using the tests below.
[1] Tijmen Tieleman and Geoffrey Hinton. "Lecture 6.5-rmsprop
Step18: Once you have debugged your RMSProp and Adam implementations, run the following to train a pair of deep networks using these new update rules
Step19: Train a good model!
Train the best fully-connected model that you can on CIFAR-10, storing your best model in the best_model variable. We require you to get at least 50% accuracy on the validation set using a fully-connected net.
If you are careful it should be possible to get accuracies above 55%, but we don't require it for this part and won't assign extra credit for doing so. Later in the assignment we will ask you to train the best convolutional network that you can on CIFAR-10, and we would prefer that you spend your effort working on convolutional nets rather than fully-connected nets.
You might find it useful to complete the BatchNormalization.ipynb and Dropout.ipynb notebooks before completing this part, since those techniques can help you train powerful models.
Step20: Test your model
Run your best model on the validation and test sets. You should achieve above 50% accuracy on the validation set. | Python Code:
# As usual, a bit of setup
from __future__ import print_function
import time
import numpy as np
import matplotlib.pyplot as plt
from cs231n.classifiers.fc_net import *
from cs231n.data_utils import get_CIFAR10_data
from cs231n.gradient_check import eval_numerical_gradient, eval_numerical_gradient_array
from cs231n.solver import Solver
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
def rel_error(x, y):
"""returns relative error"""
return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))
# Load the (preprocessed) CIFAR10 data.
data = get_CIFAR10_data()
for k, v in list(data.items()):
print(('%s: ' % k, v.shape))
Explanation: Fully-Connected Neural Nets
In the previous homework you implemented a fully-connected two-layer neural network on CIFAR-10. The implementation was simple but not very modular since the loss and gradient were computed in a single monolithic function. This is manageable for a simple two-layer network, but would become impractical as we move to bigger models. Ideally we want to build networks using a more modular design so that we can implement different layer types in isolation and then snap them together into models with different architectures.
In this exercise we will implement fully-connected networks using a more modular approach. For each layer we will implement a forward and a backward function. The forward function will receive inputs, weights, and other parameters and will return both an output and a cache object storing data needed for the backward pass, like this:
```python
def layer_forward(x, w):
"""Receive inputs x and weights w"""
# Do some computations ...
z = # ... some intermediate value
# Do some more computations ...
out = # the output
cache = (x, w, z, out) # Values we need to compute gradients
return out, cache
```
The backward pass will receive upstream derivatives and the cache object, and will return gradients with respect to the inputs and weights, like this:
```python
def layer_backward(dout, cache):
"""
Receive derivative of loss with respect to outputs and cache,
and compute derivative with respect to inputs.
"""
# Unpack cache values
x, w, z, out = cache
# Use values in cache to compute derivatives
dx = # Derivative of loss with respect to x
dw = # Derivative of loss with respect to w
return dx, dw
```
After implementing a bunch of layers this way, we will be able to easily combine them to build classifiers with different architectures.
In addition to implementing fully-connected networks of arbitrary depth, we will also explore different update rules for optimization, and introduce Dropout as a regularizer and Batch Normalization as a tool to more efficiently optimize deep networks.
End of explanation
# Test the affine_forward function
num_inputs = 2
input_shape = (4, 5, 6)
output_dim = 3
input_size = num_inputs * np.prod(input_shape)
weight_size = output_dim * np.prod(input_shape)
x = np.linspace(-0.1, 0.5, num=input_size).reshape(num_inputs, *input_shape)
w = np.linspace(-0.2, 0.3, num=weight_size).reshape(np.prod(input_shape), output_dim)
b = np.linspace(-0.3, 0.1, num=output_dim)
out, _ = affine_forward(x, w, b)
correct_out = np.array([[ 1.49834967, 1.70660132, 1.91485297],
[ 3.25553199, 3.5141327, 3.77273342]])
# Compare your output with ours. The error should be around 1e-9.
print('Testing affine_forward function:')
print('difference: ', rel_error(out, correct_out))
Explanation: Affine layer: forward
Open the file cs231n/layers.py and implement the affine_forward function.
Once you are done you can test your implementation by running the following:
End of explanation
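For reference, here is one possible sketch of affine_forward (an illustration only, not the assignment's official solution; it assumes numpy is imported as np as in the setup cell):
```python
def affine_forward(x, w, b):
    # Flatten each example into a row vector, then apply the affine transform.
    N = x.shape[0]
    out = x.reshape(N, -1).dot(w) + b
    cache = (x, w, b)  # keep the inputs for the backward pass
    return out, cache
```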
# Test the affine_backward function
np.random.seed(231)
x = np.random.randn(10, 2, 3)
w = np.random.randn(6, 5)
b = np.random.randn(5)
dout = np.random.randn(10, 5)
dx_num = eval_numerical_gradient_array(lambda x: affine_forward(x, w, b)[0], x, dout)
dw_num = eval_numerical_gradient_array(lambda w: affine_forward(x, w, b)[0], w, dout)
db_num = eval_numerical_gradient_array(lambda b: affine_forward(x, w, b)[0], b, dout)
_, cache = affine_forward(x, w, b)
dx, dw, db = affine_backward(dout, cache)
# The error should be around 1e-10
print('Testing affine_backward function:')
print('dx error: ', rel_error(dx_num, dx))
print('dw error: ', rel_error(dw_num, dw))
print('db error: ', rel_error(db_num, db))
Explanation: Affine layer: backward
Now implement the affine_backward function and test your implementation using numeric gradient checking.
End of explanation
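And a matching sketch of the backward pass, again just one way the chain rule for an affine layer can be written out (illustrative, not the official solution):
```python
def affine_backward(dout, cache):
    x, w, b = cache
    N = x.shape[0]
    dx = dout.dot(w.T).reshape(x.shape)  # gradient w.r.t. inputs, restored to the original shape
    dw = x.reshape(N, -1).T.dot(dout)    # gradient w.r.t. weights
    db = dout.sum(axis=0)                # gradient w.r.t. biases
    return dx, dw, db
```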
# Test the relu_forward function
x = np.linspace(-0.5, 0.5, num=12).reshape(3, 4)
out, _ = relu_forward(x)
correct_out = np.array([[ 0., 0., 0., 0., ],
[ 0., 0., 0.04545455, 0.13636364,],
[ 0.22727273, 0.31818182, 0.40909091, 0.5, ]])
# Compare your output with ours. The error should be around 5e-8
print('Testing relu_forward function:')
print('difference: ', rel_error(out, correct_out))
Explanation: ReLU layer: forward
Implement the forward pass for the ReLU activation function in the relu_forward function and test your implementation using the following:
End of explanation
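A minimal sketch of what relu_forward needs to do (illustrative only, assuming np is NumPy):
```python
def relu_forward(x):
    out = np.maximum(0, x)  # elementwise max(0, x)
    cache = x               # the backward pass only needs the sign pattern of the input
    return out, cache
```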
np.random.seed(231)
x = np.random.randn(10, 10)
dout = np.random.randn(*x.shape)
dx_num = eval_numerical_gradient_array(lambda x: relu_forward(x)[0], x, dout)
_, cache = relu_forward(x)
dx = relu_backward(dout, cache)
# The error should be around 3e-12
print('Testing relu_backward function:')
print('dx error: ', rel_error(dx_num, dx))
Explanation: ReLU layer: backward
Now implement the backward pass for the ReLU activation function in the relu_backward function and test your implementation using numeric gradient checking:
End of explanation
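And a possible sketch of relu_backward, which simply masks the upstream gradient with the sign pattern of the cached input:
```python
def relu_backward(dout, cache):
    x = cache
    dx = dout * (x > 0)  # gradient flows only where the input was positive
    return dx
```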
from cs231n.layer_utils import affine_relu_forward, affine_relu_backward
np.random.seed(231)
x = np.random.randn(2, 3, 4)
w = np.random.randn(12, 10)
b = np.random.randn(10)
dout = np.random.randn(2, 10)
out, cache = affine_relu_forward(x, w, b)
dx, dw, db = affine_relu_backward(dout, cache)
dx_num = eval_numerical_gradient_array(lambda x: affine_relu_forward(x, w, b)[0], x, dout)
dw_num = eval_numerical_gradient_array(lambda w: affine_relu_forward(x, w, b)[0], w, dout)
db_num = eval_numerical_gradient_array(lambda b: affine_relu_forward(x, w, b)[0], b, dout)
print('Testing affine_relu_forward:')
print('dx error: ', rel_error(dx_num, dx))
print('dw error: ', rel_error(dw_num, dw))
print('db error: ', rel_error(db_num, db))
Explanation: "Sandwich" layers
There are some common patterns of layers that are frequently used in neural nets. For example, affine layers are frequently followed by a ReLU nonlinearity. To make these common patterns easy, we define several convenience layers in the file cs231n/layer_utils.py.
For now take a look at the affine_relu_forward and affine_relu_backward functions, and run the following to numerically gradient check the backward pass:
End of explanation
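Conceptually these convenience layers just chain the primitives above. A rough sketch of how affine_relu_forward and affine_relu_backward could be composed (the real versions live in cs231n/layer_utils.py and may differ in detail):
```python
def affine_relu_forward(x, w, b):
    a, fc_cache = affine_forward(x, w, b)   # affine transform
    out, relu_cache = relu_forward(a)       # nonlinearity
    return out, (fc_cache, relu_cache)

def affine_relu_backward(dout, cache):
    fc_cache, relu_cache = cache
    da = relu_backward(dout, relu_cache)    # undo the ReLU first
    return affine_backward(da, fc_cache)    # then the affine layer
```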
np.random.seed(231)
num_classes, num_inputs = 10, 50
x = 0.001 * np.random.randn(num_inputs, num_classes)
y = np.random.randint(num_classes, size=num_inputs)
dx_num = eval_numerical_gradient(lambda x: svm_loss(x, y)[0], x, verbose=False)
loss, dx = svm_loss(x, y)
# Test svm_loss function. Loss should be around 9 and dx error should be 1e-9
print('Testing svm_loss:')
print('loss: ', loss)
print('dx error: ', rel_error(dx_num, dx))
dx_num = eval_numerical_gradient(lambda x: softmax_loss(x, y)[0], x, verbose=False)
loss, dx = softmax_loss(x, y)
# Test softmax_loss function. Loss should be 2.3 and dx error should be 1e-8
print('\nTesting softmax_loss:')
print('loss: ', loss)
print('dx error: ', rel_error(dx_num, dx))
Explanation: Loss layers: Softmax and SVM
You implemented these loss functions in the last assignment, so we'll give them to you for free here. You should still make sure you understand how they work by looking at the implementations in cs231n/layers.py.
You can make sure that the implementations are correct by running the following:
End of explanation
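For intuition, here is a numerically stable sketch of what softmax_loss computes; the provided implementation in cs231n/layers.py may differ in details:
```python
def softmax_loss(x, y):
    shifted = x - x.max(axis=1, keepdims=True)  # subtract the row max for numerical stability
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    probs = np.exp(log_probs)
    N = x.shape[0]
    loss = -log_probs[np.arange(N), y].mean()   # average negative log-likelihood of the true classes
    dx = probs.copy()
    dx[np.arange(N), y] -= 1                    # gradient of the loss w.r.t. the scores
    dx /= N
    return loss, dx
```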
np.random.seed(231)
N, D, H, C = 3, 5, 50, 7
X = np.random.randn(N, D)
y = np.random.randint(C, size=N)
std = 1e-3
model = TwoLayerNet(input_dim=D, hidden_dim=H, num_classes=C, weight_scale=std)
print('Testing initialization ... ')
W1_std = abs(model.params['W1'].std() - std)
b1 = model.params['b1']
W2_std = abs(model.params['W2'].std() - std)
b2 = model.params['b2']
assert W1_std < std / 10, 'First layer weights do not seem right'
assert np.all(b1 == 0), 'First layer biases do not seem right'
assert W2_std < std / 10, 'Second layer weights do not seem right'
assert np.all(b2 == 0), 'Second layer biases do not seem right'
print('Testing test-time forward pass ... ')
model.params['W1'] = np.linspace(-0.7, 0.3, num=D*H).reshape(D, H)
model.params['b1'] = np.linspace(-0.1, 0.9, num=H)
model.params['W2'] = np.linspace(-0.3, 0.4, num=H*C).reshape(H, C)
model.params['b2'] = np.linspace(-0.9, 0.1, num=C)
X = np.linspace(-5.5, 4.5, num=N*D).reshape(D, N).T
scores = model.loss(X)
correct_scores = np.asarray(
[[11.53165108, 12.2917344, 13.05181771, 13.81190102, 14.57198434, 15.33206765, 16.09215096],
[12.05769098, 12.74614105, 13.43459113, 14.1230412, 14.81149128, 15.49994135, 16.18839143],
[12.58373087, 13.20054771, 13.81736455, 14.43418138, 15.05099822, 15.66781506, 16.2846319 ]])
scores_diff = np.abs(scores - correct_scores).sum()
assert scores_diff < 1e-6, 'Problem with test-time forward pass'
print('Testing training loss (no regularization)')
y = np.asarray([0, 5, 1])
loss, grads = model.loss(X, y)
correct_loss = 3.4702243556
assert abs(loss - correct_loss) < 1e-10, 'Problem with training-time loss'
model.reg = 1.0
loss, grads = model.loss(X, y)
correct_loss = 26.5948426952
assert abs(loss - correct_loss) < 1e-10, 'Problem with regularization loss'
for reg in [0.0, 0.7]:
print('Running numeric gradient check with reg = ', reg)
model.reg = reg
loss, grads = model.loss(X, y)
for name in sorted(grads):
f = lambda _: model.loss(X, y)[0]
grad_num = eval_numerical_gradient(f, model.params[name], verbose=False)
print('%s relative error: %.2e' % (name, rel_error(grad_num, grads[name])))
Explanation: Two-layer network
In the previous assignment you implemented a two-layer neural network in a single monolithic class. Now that you have implemented modular versions of the necessary layers, you will reimplement the two layer network using these modular implementations.
Open the file cs231n/classifiers/fc_net.py and complete the implementation of the TwoLayerNet class. This class will serve as a model for the other networks you will implement in this assignment, so read through it to make sure you understand the API. You can run the cell below to test your implementation.
End of explanation
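To make the modular idea concrete, here is a rough sketch of the affine - ReLU - affine - softmax structure that TwoLayerNet is expected to implement, reusing the layer functions above. The class name TwoLayerNetSketch is invented for illustration; the real class must match the fc_net.py API exactly:
```python
class TwoLayerNetSketch(object):
    def __init__(self, input_dim=3*32*32, hidden_dim=100, num_classes=10,
                 weight_scale=1e-3, reg=0.0):
        self.reg = reg
        self.params = {
            'W1': weight_scale * np.random.randn(input_dim, hidden_dim),
            'b1': np.zeros(hidden_dim),
            'W2': weight_scale * np.random.randn(hidden_dim, num_classes),
            'b2': np.zeros(num_classes),
        }

    def loss(self, X, y=None):
        W1, b1 = self.params['W1'], self.params['b1']
        W2, b2 = self.params['W2'], self.params['b2']
        h, cache1 = affine_relu_forward(X, W1, b1)   # hidden layer
        scores, cache2 = affine_forward(h, W2, b2)   # class scores
        if y is None:
            return scores                            # test-time forward pass
        loss, dscores = softmax_loss(scores, y)
        loss += 0.5 * self.reg * (np.sum(W1 * W1) + np.sum(W2 * W2))
        dh, dW2, db2 = affine_backward(dscores, cache2)
        _, dW1, db1 = affine_relu_backward(dh, cache1)
        grads = {'W1': dW1 + self.reg * W1, 'b1': db1,
                 'W2': dW2 + self.reg * W2, 'b2': db2}
        return loss, grads
```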
model = TwoLayerNet()
solver = None
##############################################################################
# TODO: Use a Solver instance to train a TwoLayerNet that achieves at least #
# 50% accuracy on the validation set. #
##############################################################################
solver = Solver(model, data,
update_rule='sgd',
optim_config={
'learning_rate': 1e-3,
},
lr_decay=0.95,
num_epochs=9, batch_size=100,
print_every=100)
solver.train()
##############################################################################
# END OF YOUR CODE #
##############################################################################
# Run this cell to visualize training loss and train / val accuracy
plt.subplot(2, 1, 1)
plt.title('Training loss')
plt.plot(solver.loss_history, 'o')
plt.xlabel('Iteration')
plt.subplot(2, 1, 2)
plt.title('Accuracy')
plt.plot(solver.train_acc_history, '-o', label='train')
plt.plot(solver.val_acc_history, '-o', label='val')
plt.plot([0.5] * len(solver.val_acc_history), 'k--')
plt.xlabel('Epoch')
plt.legend(loc='lower right')
plt.gcf().set_size_inches(15, 12)
plt.show()
Explanation: Solver
In the previous assignment, the logic for training models was coupled to the models themselves. Following a more modular design, for this assignment we have split the logic for training models into a separate class.
Open the file cs231n/solver.py and read through it to familiarize yourself with the API. After doing so, use a Solver instance to train a TwoLayerNet that achieves at least 50% accuracy on the validation set.
End of explanation
np.random.seed(231)
N, D, H1, H2, C = 2, 15, 20, 30, 10
X = np.random.randn(N, D)
y = np.random.randint(C, size=(N,))
for reg in [0, 3.14]:
print('Running check with reg = ', reg)
model = FullyConnectedNet([H1, H2], input_dim=D, num_classes=C,
reg=reg, weight_scale=5e-2, dtype=np.float64)
loss, grads = model.loss(X, y)
print('Initial loss: ', loss)
for name in sorted(grads):
f = lambda _: model.loss(X, y)[0]
grad_num = eval_numerical_gradient(f, model.params[name], verbose=False, h=1e-5)
print('%s relative error: %.2e' % (name, rel_error(grad_num, grads[name])))
Explanation: Multilayer network
Next you will implement a fully-connected network with an arbitrary number of hidden layers.
Read through the FullyConnectedNet class in the file cs231n/classifiers/fc_net.py.
Implement the initialization, the forward pass, and the backward pass. For the moment don't worry about implementing dropout or batch normalization; we will add those features soon.
Initial loss and gradient check
As a sanity check, run the following to check the initial loss and to gradient check the network both with and without regularization. Do the initial losses seem reasonable?
For gradient checking, you should expect to see errors around 1e-6 or less.
End of explanation
# TODO: Use a three-layer Net to overfit 50 training examples.
num_train = 50
small_data = {
'X_train': data['X_train'][:num_train],
'y_train': data['y_train'][:num_train],
'X_val': data['X_val'],
'y_val': data['y_val'],
}
weight_scale = 1e-2
learning_rate = 1e-2
model = FullyConnectedNet([100, 100],
weight_scale=weight_scale, dtype=np.float64)
solver = Solver(model, small_data,
print_every=10, num_epochs=20, batch_size=25,
update_rule='sgd',
optim_config={
'learning_rate': learning_rate,
}
)
solver.train()
plt.plot(solver.loss_history, 'o')
plt.title('Training loss history')
plt.xlabel('Iteration')
plt.ylabel('Training loss')
plt.show()
Explanation: As another sanity check, make sure you can overfit a small dataset of 50 images. First we will try a three-layer network with 100 units in each hidden layer. You will need to tweak the learning rate and initialization scale, but you should be able to overfit and achieve 100% training accuracy within 20 epochs.
End of explanation
# TODO: Use a five-layer Net to overfit 50 training examples.
num_train = 50
small_data = {
'X_train': data['X_train'][:num_train],
'y_train': data['y_train'][:num_train],
'X_val': data['X_val'],
'y_val': data['y_val'],
}
learning_rate = 1e-3
weight_scale = 1e-1
model = FullyConnectedNet([100, 100, 100, 100],
weight_scale=weight_scale, dtype=np.float64)
solver = Solver(model, small_data,
print_every=10, num_epochs=20, batch_size=25,
update_rule='sgd',
optim_config={
'learning_rate': learning_rate,
}
)
solver.train()
plt.plot(solver.loss_history, 'o')
plt.title('Training loss history')
plt.xlabel('Iteration')
plt.ylabel('Training loss')
plt.show()
Explanation: Now try to use a five-layer network with 100 units on each layer to overfit 50 training examples. Again you will have to adjust the learning rate and weight initialization, but you should be able to achieve 100% training accuracy within 20 epochs.
End of explanation
from cs231n.optim import sgd_momentum
N, D = 4, 5
w = np.linspace(-0.4, 0.6, num=N*D).reshape(N, D)
dw = np.linspace(-0.6, 0.4, num=N*D).reshape(N, D)
v = np.linspace(0.6, 0.9, num=N*D).reshape(N, D)
config = {'learning_rate': 1e-3, 'velocity': v}
next_w, _ = sgd_momentum(w, dw, config=config)
expected_next_w = np.asarray([
[ 0.1406, 0.20738947, 0.27417895, 0.34096842, 0.40775789],
[ 0.47454737, 0.54133684, 0.60812632, 0.67491579, 0.74170526],
[ 0.80849474, 0.87528421, 0.94207368, 1.00886316, 1.07565263],
[ 1.14244211, 1.20923158, 1.27602105, 1.34281053, 1.4096 ]])
expected_velocity = np.asarray([
[ 0.5406, 0.55475789, 0.56891579, 0.58307368, 0.59723158],
[ 0.61138947, 0.62554737, 0.63970526, 0.65386316, 0.66802105],
[ 0.68217895, 0.69633684, 0.71049474, 0.72465263, 0.73881053],
[ 0.75296842, 0.76712632, 0.78128421, 0.79544211, 0.8096 ]])
print('next_w error: ', rel_error(next_w, expected_next_w))
print('velocity error: ', rel_error(expected_velocity, config['velocity']))
Explanation: Inline question:
Did you notice anything about the comparative difficulty of training the three-layer net vs training the five layer net?
Answer:
[FILL THIS IN]
Update rules
So far we have used vanilla stochastic gradient descent (SGD) as our update rule. More sophisticated update rules can make it easier to train deep networks. We will implement a few of the most commonly used update rules and compare them to vanilla SGD.
SGD+Momentum
Stochastic gradient descent with momentum is a widely used update rule that tends to make deep networks converge faster than vanilla stochastic gradient descent.
Open the file cs231n/optim.py and read the documentation at the top of the file to make sure you understand the API. Implement the SGD+momentum update rule in the function sgd_momentum and run the following to check your implementation. You should see errors less than 1e-8.
End of explanation
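As a reference point, a sketch of the classical momentum update that the check above exercises (the defaults mirror the documented config keys, but treat this as an illustration rather than the official solution):
```python
def sgd_momentum(w, dw, config=None):
    if config is None:
        config = {}
    config.setdefault('learning_rate', 1e-2)
    config.setdefault('momentum', 0.9)
    v = config.get('velocity', np.zeros_like(w))
    v = config['momentum'] * v - config['learning_rate'] * dw  # accumulate a velocity
    next_w = w + v                                              # step along the velocity
    config['velocity'] = v
    return next_w, config
```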
num_train = 4000
small_data = {
'X_train': data['X_train'][:num_train],
'y_train': data['y_train'][:num_train],
'X_val': data['X_val'],
'y_val': data['y_val'],
}
solvers = {}
for update_rule in ['sgd', 'sgd_momentum']:
print('running with ', update_rule)
model = FullyConnectedNet([100, 100, 100, 100, 100], weight_scale=5e-2)
solver = Solver(model, small_data,
num_epochs=5, batch_size=100,
update_rule=update_rule,
optim_config={
'learning_rate': 1e-2,
},
verbose=True)
solvers[update_rule] = solver
solver.train()
print()
plt.subplot(3, 1, 1)
plt.title('Training loss')
plt.xlabel('Iteration')
plt.subplot(3, 1, 2)
plt.title('Training accuracy')
plt.xlabel('Epoch')
plt.subplot(3, 1, 3)
plt.title('Validation accuracy')
plt.xlabel('Epoch')
for update_rule, solver in list(solvers.items()):
plt.subplot(3, 1, 1)
plt.plot(solver.loss_history, 'o', label=update_rule)
plt.subplot(3, 1, 2)
plt.plot(solver.train_acc_history, '-o', label=update_rule)
plt.subplot(3, 1, 3)
plt.plot(solver.val_acc_history, '-o', label=update_rule)
for i in [1, 2, 3]:
plt.subplot(3, 1, i)
plt.legend(loc='upper center', ncol=4)
plt.gcf().set_size_inches(15, 15)
plt.show()
Explanation: Once you have done so, run the following to train a six-layer network with both SGD and SGD+momentum. You should see the SGD+momentum update rule converge faster.
End of explanation
# Test RMSProp implementation; you should see errors less than 1e-7
from cs231n.optim import rmsprop
N, D = 4, 5
w = np.linspace(-0.4, 0.6, num=N*D).reshape(N, D)
dw = np.linspace(-0.6, 0.4, num=N*D).reshape(N, D)
cache = np.linspace(0.6, 0.9, num=N*D).reshape(N, D)
config = {'learning_rate': 1e-2, 'cache': cache}
next_w, _ = rmsprop(w, dw, config=config)
expected_next_w = np.asarray([
[-0.39223849, -0.34037513, -0.28849239, -0.23659121, -0.18467247],
[-0.132737, -0.08078555, -0.02881884, 0.02316247, 0.07515774],
[ 0.12716641, 0.17918792, 0.23122175, 0.28326742, 0.33532447],
[ 0.38739248, 0.43947102, 0.49155973, 0.54365823, 0.59576619]])
expected_cache = np.asarray([
[ 0.5976, 0.6126277, 0.6277108, 0.64284931, 0.65804321],
[ 0.67329252, 0.68859723, 0.70395734, 0.71937285, 0.73484377],
[ 0.75037008, 0.7659518, 0.78158892, 0.79728144, 0.81302936],
[ 0.82883269, 0.84469141, 0.86060554, 0.87657507, 0.8926 ]])
print('next_w error: ', rel_error(expected_next_w, next_w))
print('cache error: ', rel_error(expected_cache, config['cache']))
# Test Adam implementation; you should see errors around 1e-7 or less
from cs231n.optim import adam
N, D = 4, 5
w = np.linspace(-0.4, 0.6, num=N*D).reshape(N, D)
dw = np.linspace(-0.6, 0.4, num=N*D).reshape(N, D)
m = np.linspace(0.6, 0.9, num=N*D).reshape(N, D)
v = np.linspace(0.7, 0.5, num=N*D).reshape(N, D)
config = {'learning_rate': 1e-2, 'm': m, 'v': v, 't': 5}
next_w, _ = adam(w, dw, config=config)
expected_next_w = np.asarray([
[-0.40094747, -0.34836187, -0.29577703, -0.24319299, -0.19060977],
[-0.1380274, -0.08544591, -0.03286534, 0.01971428, 0.0722929],
[ 0.1248705, 0.17744702, 0.23002243, 0.28259667, 0.33516969],
[ 0.38774145, 0.44031188, 0.49288093, 0.54544852, 0.59801459]])
expected_v = np.asarray([
[ 0.69966, 0.68908382, 0.67851319, 0.66794809, 0.65738853,],
[ 0.64683452, 0.63628604, 0.6257431, 0.61520571, 0.60467385,],
[ 0.59414753, 0.58362676, 0.57311152, 0.56260183, 0.55209767,],
[ 0.54159906, 0.53110598, 0.52061845, 0.51013645, 0.49966, ]])
expected_m = np.asarray([
[ 0.48, 0.49947368, 0.51894737, 0.53842105, 0.55789474],
[ 0.57736842, 0.59684211, 0.61631579, 0.63578947, 0.65526316],
[ 0.67473684, 0.69421053, 0.71368421, 0.73315789, 0.75263158],
[ 0.77210526, 0.79157895, 0.81105263, 0.83052632, 0.85 ]])
print('next_w error: ', rel_error(expected_next_w, next_w))
print('v error: ', rel_error(expected_v, config['v']))
print('m error: ', rel_error(expected_m, config['m']))
Explanation: RMSProp and Adam
RMSProp [1] and Adam [2] are update rules that set per-parameter learning rates by using a running average of the second moments of gradients.
In the file cs231n/optim.py, implement the RMSProp update rule in the rmsprop function and implement the Adam update rule in the adam function, and check your implementations using the tests below.
[1] Tijmen Tieleman and Geoffrey Hinton. "Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude." COURSERA: Neural Networks for Machine Learning 4 (2012).
[2] Diederik Kingma and Jimmy Ba, "Adam: A Method for Stochastic Optimization", ICLR 2015.
End of explanation
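For orientation, minimal sketches of both update rules. The config keys follow the conventions used by the checks above; the exact defaults in cs231n/optim.py may differ:
```python
def rmsprop(w, dw, config=None):
    if config is None:
        config = {}
    config.setdefault('learning_rate', 1e-2)
    config.setdefault('decay_rate', 0.99)
    config.setdefault('epsilon', 1e-8)
    config.setdefault('cache', np.zeros_like(w))
    # Exponentially decayed average of squared gradients
    config['cache'] = config['decay_rate'] * config['cache'] + (1 - config['decay_rate']) * dw**2
    next_w = w - config['learning_rate'] * dw / (np.sqrt(config['cache']) + config['epsilon'])
    return next_w, config

def adam(w, dw, config=None):
    if config is None:
        config = {}
    config.setdefault('learning_rate', 1e-3)
    config.setdefault('beta1', 0.9)
    config.setdefault('beta2', 0.999)
    config.setdefault('epsilon', 1e-8)
    config.setdefault('m', np.zeros_like(w))
    config.setdefault('v', np.zeros_like(w))
    config.setdefault('t', 0)
    config['t'] += 1
    config['m'] = config['beta1'] * config['m'] + (1 - config['beta1']) * dw     # first moment
    config['v'] = config['beta2'] * config['v'] + (1 - config['beta2']) * dw**2  # second moment
    m_hat = config['m'] / (1 - config['beta1'] ** config['t'])                   # bias correction
    v_hat = config['v'] / (1 - config['beta2'] ** config['t'])
    next_w = w - config['learning_rate'] * m_hat / (np.sqrt(v_hat) + config['epsilon'])
    return next_w, config
```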
learning_rates = {'rmsprop': 1e-4, 'adam': 1e-3}
for update_rule in ['adam', 'rmsprop']:
print('running with ', update_rule)
model = FullyConnectedNet([100, 100, 100, 100, 100], weight_scale=5e-2)
solver = Solver(model, small_data,
num_epochs=5, batch_size=100,
update_rule=update_rule,
optim_config={
'learning_rate': learning_rates[update_rule]
},
verbose=True)
solvers[update_rule] = solver
solver.train()
print()
plt.subplot(3, 1, 1)
plt.title('Training loss')
plt.xlabel('Iteration')
plt.subplot(3, 1, 2)
plt.title('Training accuracy')
plt.xlabel('Epoch')
plt.subplot(3, 1, 3)
plt.title('Validation accuracy')
plt.xlabel('Epoch')
for update_rule, solver in list(solvers.items()):
plt.subplot(3, 1, 1)
plt.plot(solver.loss_history, 'o', label=update_rule)
plt.subplot(3, 1, 2)
plt.plot(solver.train_acc_history, '-o', label=update_rule)
plt.subplot(3, 1, 3)
plt.plot(solver.val_acc_history, '-o', label=update_rule)
for i in [1, 2, 3]:
plt.subplot(3, 1, i)
plt.legend(loc='upper center', ncol=4)
plt.gcf().set_size_inches(15, 15)
plt.show()
Explanation: Once you have debugged your RMSProp and Adam implementations, run the following to train a pair of deep networks using these new update rules:
End of explanation
best_model = None
################################################################################
# TODO: Train the best FullyConnectedNet that you can on CIFAR-10. You might #
# find batch normalization and dropout useful. Store your best model in the #
# best_model variable. #
################################################################################
learning_rates['sgd_momentum']=1e-2
best_model_score=0.0
for learning_rate in [1e-2,5e-3,1e-3]:
for weight_scale in [5e-2,5e-1]:
model = FullyConnectedNet([100, 100, 100, 100, 100], weight_scale=weight_scale)
solver = Solver(model, data,
num_epochs=8, batch_size=500,
update_rule='adam',
optim_config={
'learning_rate': learning_rate
},
verbose=True)
solver.train()
print(".")
if best_model_score < solver.val_acc_history[-1]:
best_model = model
best_model_score = solver.val_acc_history[-1]
print ("score is "+str(best_model_score))
################################################################################
# END OF YOUR CODE #
################################################################################
Explanation: Train a good model!
Train the best fully-connected model that you can on CIFAR-10, storing your best model in the best_model variable. We require you to get at least 50% accuracy on the validation set using a fully-connected net.
If you are careful it should be possible to get accuracies above 55%, but we don't require it for this part and won't assign extra credit for doing so. Later in the assignment we will ask you to train the best convolutional network that you can on CIFAR-10, and we would prefer that you spend your effort working on convolutional nets rather than fully-connected nets.
You might find it useful to complete the BatchNormalization.ipynb and Dropout.ipynb notebooks before completing this part, since those techniques can help you train powerful models.
End of explanation
y_test_pred = np.argmax(best_model.loss(data['X_test']), axis=1)
y_val_pred = np.argmax(best_model.loss(data['X_val']), axis=1)
print('Validation set accuracy: ', (y_val_pred == data['y_val']).mean())
print('Test set accuracy: ', (y_test_pred == data['y_test']).mean())
Explanation: Test your model
Run your best model on the validation and test sets. You should achieve above 50% accuracy on the validation set.
End of explanation |
15,143 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
PyMC Geomod 1
Step1: Simplest case
Step2: The axis here represents the number of cells, not the real values of GeoModeller
Step3: Setting Bayes Model
Step4: Plotting Posteriors
Step5: Extracting Posterior Traces to Arrays
Step6: Generating new model in Geomodeller
Setting the new folder where we want to work
Step7: Loading the GeoModeller project where we want to apply Bayesian inference (as in the first part of the notebook)
Step8: Showing pygeomod functions to extract data from the GeoModeller Project
Step9: Changing point position values and creating new xml projects
We want to change all points of the three formations. To do so we use the Section 1 that is what we are plotting
Step10: Now we can see the position of the points has changed (in this case just the last iteration, i.e. last value of our Metropolis chain)
Step12: Plotting the results | Python Code:
%matplotlib inline
from IPython.core.display import Image
import numpy as np
import matplotlib.pyplot as plt
import sys, os
import shutil
#import geobayes_simple as gs
import pymc as pm # PyMC 2
from pymc.Matplot import plot
from pymc import graph as gr
import numpy as np
#import daft
from IPython.core.pylabtools import figsize
figsize(12.5, 10)
# as we have our model and pygeomod in different paths, let's change the pygeomod path to the default path.
sys.path.append("C:\Users\Miguel\workspace\pygeomod\pygeomod")
#sys.path.append(r'/home/jni/git/tmp/pygeomod_tmp')
import geogrid
import geomodeller_xml_obj as gxml
reload(gxml)
reload(geogrid)
Explanation: PyMC Geomod 1: Basic concepts
The goal of this notebook is to show how to use pygeomod to change the position of points in a section, in combination with PyMC, using Metropolis sampling to infer those point positions
Importing
End of explanation
hor_lay = r'..\Geomodeller\Basic_case\3_horizontal_layers\horizontal_layers.xml'
print hor_lay
reload(geogrid)
G1 = geogrid.GeoGrid()
# Using G1, we can read the dimensions of our Murci geomodel
G1.get_dimensions_from_geomodeller_xml_project(hor_lay)
#G1.set_dimensions(dim=(0,23000,0,16000,-8000,1000))
nx = 400
ny = 2
nz = 400
G1.define_regular_grid(nx,ny,nz)
G1.update_from_geomodeller_project(hor_lay)
Explanation: Simplest case: three horizontal layers, with depth unknown
Loading pre-made Geomodeller model
End of explanation
G1.plot_section('y',cell_pos=1,colorbar = True, cmap='RdBu', figsize=(6,6),interpolation= 'nearest' ,ve = 1, geomod_coord= True)
Explanation: The axis here represents the number of cells, not the real values of GeoModeller
End of explanation
Image("Nice Notebooks\THL_no_thickness.png")
alpha = pm.Normal("alpha", -350, 0.005, value = -200)#, value= 250)
beta = pm.Normal("beta", -500, 0.0001, value = -300)#, value=0)
gamma = pm.Normal("gamma", -650, 0.0001, value = -650)#, value = 0)
#MODEL!!
model = pm.Model([alpha, beta, gamma])
M = pm.MCMC(model)
M.sample(iter=1500, burn = 800)
Explanation: Setting Bayes Model
End of explanation
plot(M)
Explanation: Plotting Posteriors
End of explanation
n_samples = 10
alpha_samples, alpha_samples_all = M.trace('alpha')[-n_samples:], M.trace("alpha")[:]
beta_samples, beta_samples_all = M.trace('beta')[-n_samples:], M.trace("beta")[:]
gamma_samples, gamma_samples_all = M.trace('gamma')[-n_samples:], M.trace('gamma')[:]
samples = zip (alpha_samples,beta_samples, gamma_samples,alpha_samples,beta_samples, gamma_samples)
Explanation: Extracting Posterior Traces to Arrays
End of explanation
try:
shutil.copytree('C:/Users/Miguel/workspace/Thesis/Geomodeller/Basic_case/3_horizontal_layers', 'Temp/')
except:
print "The folder is already created"
#r'..\Geomodeller\Basic_case\3_horizontal_layers\
Explanation: Generating new model in Geomodeller
Setting the new folder where we want to work
End of explanation
reload(gxml)
gmod_obj = gxml.GeomodellerClass()
gmod_obj.load_geomodeller_file(hor_lay)
gmod_obj.write_xml("backup\orihor_lay.xml")
Explanation: Loading the GeoModeller project where we want to apply Bayesian inference (as in the first part of the notebook):
End of explanation
# Section names:
section_names = gmod_obj.get_section_names()
print "section names",section_names, "\n"
# Choose the section we want to use, by position
sections = gmod_obj.get_sections()[0]
print "Chosen section by position", sections, "\n"
# Create a dictionary so we can access the section through its name
section_dict = gmod_obj.create_sections_dict()
print "Chosen section by entry", section_dict['Section1'], "\n"
# Formation names
formation_names = gmod_obj.get_formation_names()
print "formation names", formation_names, "\n"
# Get the points of all formation for a given section: Position
contact_points = gmod_obj.get_formation_point_data(sections) #to extract points you have to choose one of the sections
print "Contact points on the chosen section", contact_points, "\n", type(contact_points)
## Get the points of all formation for a given section: Dictionary
contact_points = gmod_obj.get_formation_point_data(section_dict['Section1']) #to extract points you have to choose one of the sections
print "Contact points on the chosen section", contact_points, "\n", type(contact_points)
# Showing contact points
points = gmod_obj.get_point_coordinates(contact_points)
print "Points coordinates", points
Explanation: Showing pygeomod functions to extract data from the GeoModeller Project
End of explanation
for j in range(n_samples):
for i, point in enumerate(contact_points):
gmod_obj.change_formation_point_pos(point, y_coord = [samples[j][i],samples[j][i]])
gmod_obj.write_xml("Temp/test"+ str(j)+".xml")
Explanation: Changing point position values and creating new xml projects
We want to change all points of the three formations. To do so we use Section 1, which is the section we are plotting
End of explanation
# Showing contact points
points_changed = gmod_obj.get_point_coordinates(contact_points)
print "Points coordinates", points_changed
Explanation: Now we can see the position of the points has changed (in this case just the last iteration, i.e. last value of our Metropolis chain)
End of explanation
fig, ax = plt.subplots(1, 2, figsize=(15, 5))
ax[0].hist(alpha_samples_all, histtype='stepfilled', bins=30, alpha=1,
label="Upper most layer", normed=True)
ax[0].hist(beta_samples_all, histtype='stepfilled', bins=30, alpha=1,
label="Middle layer", normed=True, color = "g")
ax[0].hist(gamma_samples_all, histtype='stepfilled', bins=30, alpha=1,
label="Bottom most layer", normed=True, color = "r")
ax[0].invert_xaxis()
ax[0].legend()
ax[0].set_title(r"Posterior distributions of the layers")
ax[0].set_xlabel("Depth(m)")
ax[1].set_title("Representation")
for j in range(n_samples):
hor_lay_new = 'Temp/test'+str(j)+'.xml'
# Read the new xml
#hor_lay_new = 'Temp_test/new.xml'
G1 = geogrid.GeoGrid()
# Getting dimensions and defining grid
G1.get_dimensions_from_geomodeller_xml_project(hor_lay_new)
nx = 400
ny = 2
nz = 400
G1.define_regular_grid(nx,ny,nz)
# Updating project
G1.update_from_geomodeller_project(hor_lay_new)
# Printing new model
G1.plot_section('y',cell_pos=1,colorbar = True, ax = ax[1], alpha = 0.3, cmap='RdBu', figsize=(6,6),interpolation= 'nearest' ,ve = 1, geomod_coord= True, contour = True)
Explanation: Plotting the results
End of explanation |
15,144 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step function
DGP papers have often demonstrated a step function, as this cannot be well captured by a GP with a stationary kernel. We'll do that here also.
Step1: We'll now use a 2 layer DGP
Step2: Here are samples from the final layer
Step3: We can also plot all the layers to see what's going on
Step4: Here's the three layer version | Python Code:
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
%matplotlib inline
from gpflow.likelihoods import Gaussian
from gpflow.kernels import RBF, White
from gpflow.models.gpr import GPR
from gpflow.training import AdamOptimizer, ScipyOptimizer
from doubly_stochastic_dgp.dgp import DGP
np.random.seed(0)
Ns = 300
Xs = np.linspace(-0.5, 1.5, Ns)[:, None]
N, M = 50, 25
X = np.random.uniform(0, 1, N)[:, None]
Z = np.random.uniform(0, 1, M)[:, None]
f_step = lambda x: 0. if x<0.5 else 1.
Y = np.reshape([f_step(x) for x in X], X.shape) + np.random.randn(*X.shape)*1e-2
def train_and_plot_gp(X, Y, kernel):
m_gp = GPR(X, Y, kernel)
ScipyOptimizer().minimize(m_gp)
m, v = m_gp.predict_y(Xs)
plt.plot(Xs, m, color='r')
l = (m - 2*v**0.5).flatten()
u = (m + 2*v**0.5).flatten()
plt.fill_between(Xs.flatten(), l, u, color='r', alpha=0.1)
plt.title('single layer GP')
plt.scatter(X, Y)
plt.show()
train_and_plot_gp(X, Y, RBF(1, lengthscales=0.2))
Explanation: Step function
DGP papers have often demonstrated a step function, as this cannot be well captured by a GP with a stationary kernel. We'll do that here also.
End of explanation
def make_DGP(L):
kernels = []
for l in range(L):
k = RBF(1, lengthscales=0.2, variance=1.) + White(1, variance=1e-5)
kernels.append(k)
m_dgp = DGP(X, Y, Z, kernels, Gaussian(), num_samples=100)
# init the layers to near determinisic
for layer in m_dgp.layers[:-1]:
layer.q_sqrt = layer.q_sqrt.value * 1e-5
return m_dgp
m_dgp_2 = make_DGP(2)
AdamOptimizer(0.01).minimize(m_dgp_2, maxiter=1000)
Explanation: We'll now use a 2 layer DGP
End of explanation
samples, _, _ = m_dgp_2.predict_all_layers_full_cov(Xs, 10)
plt.plot(Xs, samples[-1][:, :, 0].T, color='r', alpha=0.3)
plt.title('2 layer DGP')
plt.scatter(X, Y)
plt.show()
Explanation: Here are samples from the final layer
End of explanation
def plot_layers(model, X, Y):
L = len(model.layers)
f, axs = plt.subplots(L, 1, figsize=(4, 2*L), sharex=True)
if L == 1:
axs = [axs, ]
samples, _, _ = model.predict_all_layers_full_cov(Xs, 10)
for s, ax in zip(samples, axs):
ax.plot(Xs.flatten(), s[:, :, 0].T, color='r', alpha=0.2)
axs[-1].scatter(X, Y)
for l in range(L):
axs[l].set_title('layer {}'.format(l+1))
plt.show()
plot_layers(m_dgp_2, X, Y)
Explanation: We can also plot all the layers to see what's going on
End of explanation
m_dgp_3 = make_DGP(3)
AdamOptimizer(0.01).minimize(m_dgp_3, maxiter=1000)
plot_layers(m_dgp_3, X, Y)
Explanation: Here's the three layer version
End of explanation |
15,145 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<table align="left">
<td>
<a href="https
Step1: Set up your GCP project
The following steps are required, regardless of your notebook environment.
Select or create a GCP project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enter your project ID and region in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
Step2: Import libraries and define constants
Step3: Creating a BigQuery dataset
In this notebook, you will need to create a dataset in your project called bqml. To create it, run the following cell
Step4: Raw data
Before beginning, take a look at the raw data
Step5: Pre-process the data
With collaborative filtering (matrix factorization), the dataset must indicate a user's preference for a product, like a rating between 1 and 5 stars. However, in the retail industry, there is usually no or insufficient explicit feedback on how much a user liked a product. Thus, other behavioral metrics need to be used to infer their implicit "rating". One way to infer user interest in a product is to look at the total time spent on a product detail page (e.g., session duration).
With matrix factorization, in order to train the model, you will need a table with userId, itemId, and the rating. In this notebook example, session duration will be used as the implicit rating. If you have other metrics (e.g., frequency of pageviews), you can simply combine the metrics together using a weighted sum to compute a rating value.
|userId|itemId|rating|
|-|-|-|
|visitor1|productSKU_1|3000|
|visitor1|productSKU_4|15000|
|visitor1|productSKU_9|920|
|visitor2|productSKU_12|0|
Notice how every row is a unique combination of userId and itemId, along with the (implicit) rating.
The query below will pre-process the data by calculating the total pageview duration per product per user, and materialize the data in a new table, aggregate_web_stats.
Step6: The training data
With the data stored in an output table in the correct format for matrix factorization, the data is now ready for training a matrix factorization model.
Step7: Train the matrix factorization model
To train the matrix factorization model (with implicit feedback), you will need to set the options
Step8: Model Evaluation
Inspect the resulting metrics from model evaluation.
For more information on these metrics, read the ML.EVALUATE documentation here.
Step9: Hyperparameter Tuning
If you want to improve your model, some of the hyperparameters you can tune are
Step10: What are the names of the recommended products? Discover the product names by joining the resulting productSKU recommendations back with the product names
Step11: Batch predictions for all users
To retrieve the top 5 recommended products for all existing users, run the following query. As the result can be large (num_users * num_products * top N), this also outputs the recommendations to a separate table.
Step12: Using the predicted recommendations in production
Once you have the recommendations, plugging into your production pipeline will depend on your use case.
Here are a few possible ways to help you get started
Step13: To create a column per product, you can use the pivot() function as described in this blogpost.
For Google Analytics Data Import, it's recommended that you use clientId as the key, along with individual columns that show some propensity score. In other words, you may need to create a new column for each product that you are interested in recommending, and create a custom dimension in Google Analytics that can be then used to build your audiences. It's also likely best to ensure that you have one row per clientId. If you know you will be exporting predictions to Google Analytics, it's recommended that you train your models using clientId directly instead of visitorId.
Exporting the data from BigQuery into Google Analytics 360
The easiest way to export your BigQuery ML predictions from a BigQuery table to Google Analytics 360 is to use the MoDeM (Model Deployment for Marketing) reference implementation. MoDeM helps you load data into Google Analytics for eventual activation in Google Ads, Display & Video 360 and Search Ads 360.
To export to Google Analytics 360 from BigQuery
Step14: 2-2. Export predictions table to Google Cloud Storage
There are several ways to export the predictions table to Google Cloud Storage (GCS), so that you can use them in a separate service. Perhaps the easiest way is to export directly to GCS using SQL (documentation). | Python Code:
!pip install google-cloud-bigquery
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
Explanation: <table align="left">
<td>
<a href="https://colab.research.google.com/github/GoogleCloudPlatform/analytics-componentized-patterns/blob/master/notebooks/bqml_matrix_factorization_retail_ecommerce.ipynb">
<img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Run in Colab
</a>
</td>
<td>
<a href="https://console.cloud.google.com/ai-platform/notebooks/deploy-notebook?name=BigQuery%20ML%20-%20Retail%20Recommendation%20System&download_url=https%3A%2F%2Fgithub.com%2FGoogleCloudPlatform%2Fanalytics-componentized-patterns%2Fraw%2Fmaster%2Fretail%2Frecommendation-system%2Fbqml%2Fbqml_retail_recommendation_system.ipynb&url=https%3A%2F%2Fgithub.com%2FGoogleCloudPlatform%2Fanalytics-componentized-patterns%2Fblob%2Fmaster%2Fretail%2Frecommendation-system%2Fbqml%2Fbqml_retail_recommendation_system.ipynb">
<img src="https://cloud.google.com/images/products/ai/ai-solutions-icon.svg" alt="AI Platform Notebooks">Run on AI Platform Notebooks</a>
</td>
<td>
<a href="https://github.com/GoogleCloudPlatform/analytics-componentized-patterns/blob/master/notebooks/bqml_matrix_factorization_retail_ecommerce.ipynb">
<img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo">
View on GitHub
</a>
</td>
</table>
Overview
In this notebook, you'll learn how to build a product recommendation system in a retail scenario using matrix factorization, and how to use the predicted recommendations to drive marketing activation.
Why are recommendation systems so important?
The majority of consumers today expect personalization โ to see products and services relevant to their interests. Naturally, they can help businesses too. By learning from user behaviours and preferences, businesses can deliver their recommendations in a variety of ways, including personalized coupons, marketing emails, and search results, or targeted ads. Ultimately, this enables businesses to attract more customer spending with targeted cross-selling or upselling, while reducing unnecessary costs by marketing irrelevant products.
"Companies that fail to show customers they know them and their buying preferences risk losing business to competitors who are more attuned to what their customers want."
Harvard Business Review. "The Age of Personalization". September 2018
How does matrix factorization work?
Based on user preferences, matrix factorization (collaborative filtering) is one of the most common and effective methods of creating recommendation systems. For more information about how they work, see this introduction to recommendation systems here.
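As a toy illustration of the idea (not how BigQuery ML implements it internally), matrix factorization approximates the sparse user-item rating matrix with the product of two low-rank factor matrices, so a score can be predicted for every user-item pair:
```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, n_factors = 4, 5, 2
U = rng.normal(size=(n_users, n_factors))   # learned user embeddings
V = rng.normal(size=(n_items, n_factors))   # learned item embeddings
R_hat = U @ V.T                             # predicted preference for every user-item pair
print(R_hat.shape)                          # (4, 5): one score per user-item combination
```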
What is BigQuery ML?
BigQuery ML enables users to create and execute machine learning models in BigQuery by using standard SQL queries. This means, if your data is already in BigQuery, you don't need to export your data to train and deploy machine learning models - by training, you're also deploying in the same step. Combined with BigQuery's auto-scaling of compute resources, you won't have to worry about spinning up a cluster or building a model training and deployment pipeline. This means you'll be saving time building your machine learning pipeline, enabling your business to focus more on the value of machine learning instead of spending time setting up the infrastructure.
You may have also heard of Recommendations AI, a Google Cloud product purpose-built for real-time recommendations on a website using state-of-the-art deep learning models. Matrix factorization with BigQuery ML, on the other hand, is a more generic ML algorithm that can be used for offline and online recommendations (e.g. personalized e-mail campaigns).
Scope of this notebook
Dataset
The Google Analytics Sample dataset, which is hosted publicly on BigQuery, is a dataset that provides 12 months (August 2016 to August 2017) of obfuscated Google Analytics 360 data from the Google Merchandise Store, a real e-commerce store that sells Google-branded merchandise.
Objective
By the end of this notebook, you will know how to:
* pre-process data into the correct format needed to create a recommender system using BigQuery ML
* train (and deploy) the matrix factorization model in BigQuery ML
* evaluate the model
* make predictions using the model
* take action on the predicted recommendations:
* for activation via Google Ads, Display & Video 360 and Search Ads 360
* for activation via emails
* export predictions to a pandas dataframe
* export predictions into Google Cloud Storage
Costs
This tutorial uses billable components of Google Cloud Platform (GCP):
BigQuery
BigQuery ML
Learn about BigQuery pricing, BigQuery ML
pricing and use the Pricing
Calculator
to generate a cost estimate based on your projected usage.
PIP Install Packages and dependencies
End of explanation
PROJECT_ID = "your_project_id"
REGION = "US"
Explanation: Set up your GCP project
The following steps are required, regardless of your notebook environment.
Select or create a GCP project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enter your project ID and region in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
End of explanation
import time
import pandas as pd
from google.cloud import bigquery
pd.set_option("display.float_format", lambda x: "%.3f" % x)
Explanation: Import libraries and define constants
End of explanation
!bq mk --location=$REGION --dataset $PROJECT_ID:bqml
Explanation: Creating a BigQuery dataset
In this notebook, you will need to create a dataset in your project called bqml. To create it, run the following cell:
End of explanation
%%bigquery --project $PROJECT_ID
## follows the Google Analytics schema:
#https://support.google.com/analytics/answer/3437719?hl=en
SELECT
CONCAT(fullVisitorID,'-',CAST(visitNumber AS STRING)) AS visitorId,
hitNumber,
time,
page.pageTitle,
type,
productSKU,
v2ProductName,
v2ProductCategory,
productPrice/1000000 as productPrice_USD
FROM
`bigquery-public-data.google_analytics_sample.ga_sessions_20160801`,
UNNEST(hits) AS hits,
UNNEST(hits.product) AS hits_product
LIMIT 5
Explanation: Raw data
Before beginning, take a look at the raw data:
Note: Jupyter runs cells starting with %%bigquery as SQL queries
End of explanation
%%bigquery --project $PROJECT_ID
## follows schema from https://support.google.com/analytics/answer/3437719?hl=en&ref_topic=3416089
CREATE OR REPLACE TABLE bqml.aggregate_web_stats AS (
WITH
durations AS (
--calculate pageview durations
SELECT
CONCAT(fullVisitorID,'-',
CAST(visitNumber AS STRING),'-',
CAST(hitNumber AS STRING) ) AS visitorId_session_hit,
LEAD(time, 1) OVER (
PARTITION BY CONCAT(fullVisitorID,'-',CAST(visitNumber AS STRING))
ORDER BY
time ASC ) - time AS pageview_duration
FROM
`bigquery-public-data.google_analytics_sample.ga_sessions_2016*`,
UNNEST(hits) AS hit
),
prodview_durations AS (
--filter for product detail pages only
SELECT
CONCAT(fullVisitorID,'-',CAST(visitNumber AS STRING)) AS visitorId,
productSKU AS itemId,
IFNULL(dur.pageview_duration,
1) AS pageview_duration,
FROM
`bigquery-public-data.google_analytics_sample.ga_sessions_2016*` t,
UNNEST(hits) AS hits,
UNNEST(hits.product) AS hits_product
JOIN
durations dur
ON
CONCAT(fullVisitorID,'-',
CAST(visitNumber AS STRING),'-',
CAST(hitNumber AS STRING)) = dur.visitorId_session_hit
WHERE
#action_type: Product detail views = 2
eCommerceAction.action_type = "2"
),
aggregate_web_stats AS(
--sum pageview durations by visitorId, itemId
SELECT
visitorId,
itemId,
SUM(pageview_duration) AS session_duration
FROM
prodview_durations
GROUP BY
visitorId,
itemId )
SELECT
*
FROM
aggregate_web_stats
);
-- Show table
SELECT
*
FROM
bqml.aggregate_web_stats
LIMIT
10
Explanation: Pre-process the data
With collaborative filtering (matrix factorization), the dataset must indicate a user's preference for a product, like a rating between 1 and 5 stars. However, in the retail industry, there is usually no or insufficient explicit feedback on how much a user liked a product. Thus, other behavioral metrics need to be used to infer their implicit "rating". One way to infer user interest in a product is to look at the total time spent on a product detail page (e.g., session duration).
With matrix factorization, in order to train the model, you will need a table with userId, itemId, and the rating. In this notebook example, session duration will be used as the implicit rating. If you have other metrics (e.g., frequency of pageviews), you can simply combine the metrics together using a weighted sum to compute a rating value.
|userId|itemId|rating|
|-|-|-|
|visitor1|productSKU_1|3000|
|visitor1|productSKU_4|15000|
|visitor1|productSKU_9|920|
|visitor2|productSKU_12|0|
Notice how every row is a unique combination of userId and itemId, along with the (implicit) rating.
The query below will pre-process the data by calculating the total pageview duration per product per user, and materialize the data in a new table, aggregate_web_stats.
End of explanation
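If you do combine several behavioural metrics into one implicit rating, a small pandas sketch of the weighted-sum idea could look like the following; the metric columns and weights are purely illustrative and would need tuning for your data:
```python
import pandas as pd

metrics = pd.DataFrame({
    "visitorId": ["visitor1", "visitor1", "visitor2"],
    "itemId": ["productSKU_1", "productSKU_4", "productSKU_12"],
    "session_duration": [3000, 15000, 0],
    "pageviews": [2, 7, 1],
})

W_DURATION, W_PAGEVIEWS = 1.0, 500.0  # illustrative weights only
metrics["rating"] = (W_DURATION * metrics["session_duration"]
                     + W_PAGEVIEWS * metrics["pageviews"])
print(metrics[["visitorId", "itemId", "rating"]])
```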
%%bigquery --project $PROJECT_ID
SELECT
*
FROM
bqml.aggregate_web_stats
LIMIT
10
Explanation: The training data
With the data stored in an output table in the correct format for matrix factorization, the data is now ready for training a matrix factorization model.
End of explanation
%%bigquery --project $PROJECT_ID
CREATE OR REPLACE MODEL bqml.retail_recommender
OPTIONS(model_type='matrix_factorization',
user_col='visitorId',
item_col='itemId',
rating_col='session_duration',
feedback_type='implicit'
)
AS
SELECT * FROM bqml.aggregate_web_stats
Explanation: Train the matrix factorization model
To train the matrix factorization model (with implicit feedback), you will need to set the options:
* model_type: 'matrix_factorization'
* user_col: \<user column name>
* item_col: \<item column name>
* rating_col: \<rating column name>
* feedback_type: 'implicit' (default is 'explicit')
To learn more about the parameters when training a model, read the documentation on the CREATE MODEL statement for Matrix Factorization.
Note: You may need to setup slot reservations. For more information, you can read up on how to set up flex slots programmatically or via the BigQuery UI.
End of explanation
%%bigquery --project $PROJECT_ID
SELECT
*
FROM
ML.EVALUATE(MODEL bqml.retail_recommender)
Explanation: Model Evaluation
Inspect the resulting metrics from model evaluation.
For more information on these metrics, read the ML.EVALUATE documentation here.
End of explanation
%%bigquery --project $PROJECT_ID
#check for a single visitor
DECLARE MY_VISITORID STRING DEFAULT "0824461277962362623-1";
SELECT
*
FROM
ML.RECOMMEND(MODEL `bqml.retail_recommender`,
(SELECT MY_VISITORID as visitorID)
)
ORDER BY predicted_session_duration_confidence DESC
LIMIT 5
Explanation: Hyperparameter Tuning
If you want to improve your model, some of the hyperparameters you can tune are:
* NUM_FACTORS: Specifies the number of latent factors to use for matrix factorization models (int64_value)
* L2_REG: The amount of L2 regularization applied (float64_value)
* WALS_ALPHA: A hyperparameter for 'IMPLICIT' matrix factorization model (float64_value)
See the official documentation on CREATE MODEL (matrix factorization) for more information on hyperparameter tuning.
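For example, a possible sketch of retraining with explicit hyperparameters through the BigQuery Python client; the model name retail_recommender_tuned and the values chosen for num_factors, l2_reg and wals_alpha are placeholders to experiment with, not recommended settings:
```python
from google.cloud import bigquery

client = bigquery.Client(project=PROJECT_ID)
tuned_model_sql = """
CREATE OR REPLACE MODEL bqml.retail_recommender_tuned
OPTIONS(model_type='matrix_factorization',
        user_col='visitorId',
        item_col='itemId',
        rating_col='session_duration',
        feedback_type='implicit',
        num_factors=16,
        l2_reg=30,
        wals_alpha=1)
AS
SELECT * FROM bqml.aggregate_web_stats
"""
client.query(tuned_model_sql).result()  # blocks until training finishes
```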
Make predictions
Inspect the predicted recommendations for a single user
What are the top 5 items you could recommend to a specific visitorId?
End of explanation
%%bigquery --project $PROJECT_ID
DECLARE MY_VISITORID STRING DEFAULT "6499749315992064304-2";
WITH product_details AS(
SELECT
productSKU,
v2ProductName,
FROM
`bigquery-public-data.google_analytics_sample.ga_sessions_2016*`,
UNNEST(hits) AS hits,
UNNEST(hits.product) AS hits_product
GROUP BY 2,1
)
SELECT
r.*,
d.v2ProductName
FROM
ML.RECOMMEND(MODEL `bqml.retail_recommender`,
(
SELECT
MY_VISITORID as visitorId)) r
JOIN
product_details d
ON
r.itemId = d.productSKU
ORDER BY predicted_session_duration_confidence DESC
LIMIT 5
Explanation: What are the names of the recommended products? Discover the product names by joining the resulting productSKU recommendations back with the product names:
End of explanation
%%bigquery --project $PROJECT_ID
-- Create output table
CREATE OR REPLACE TABLE bqml.prod_recommendations AS (
WITH predictions AS (
SELECT
visitorId,
ARRAY_AGG(STRUCT(itemId,
predicted_session_duration_confidence)
ORDER BY
predicted_session_duration_confidence DESC
LIMIT 5) as recommended
FROM ML.RECOMMEND(MODEL bqml.retail_recommender)
GROUP BY visitorId
)
SELECT
visitorId,
itemId,
predicted_session_duration_confidence
FROM
predictions p,
UNNEST(recommended)
);
-- Show table
SELECT
*
FROM
bqml.prod_recommendations
ORDER BY
visitorId
LIMIT
20
Explanation: Batch predictions for all users
To retrieve the top 5 recommended products for all existing users, run the following query. As the result can be large (num_users * num_products * top N), this also outputs the recommendations to a separate table.
End of explanation
%%bigquery --project $PROJECT_ID
WITH predictions AS (
SELECT
visitorId,
ARRAY_AGG(STRUCT(itemId,
predicted_session_duration_confidence)
ORDER BY
predicted_session_duration_confidence) as recommended
FROM ML.RECOMMEND(MODEL bqml.retail_recommender)
WHERE itemId = "GGOEYOLR018699"
GROUP BY visitorId
)
SELECT
visitorId,
ML.MIN_MAX_SCALER(
predicted_session_duration_confidence
) OVER() as GGOEYOLR018699
FROM
predictions p,
UNNEST(recommended)
ORDER BY GGOEYOLR018699 DESC
Explanation: Using the predicted recommendations in production
Once you have the recommendations, plugging into your production pipeline will depend on your use case.
Here are a few possible ways to help you get started:
1. Export recommendations for marketing activation:
1. For activation via Google Ads, Display & Video 360 and Search Ads 360
1. For activation via emails
1. Other ways to export recommendations from BigQuery
1. BigQuery to pandas dataframes
1. Export the predictions to Google Cloud Storage
<a id="export_ga360"></a>
1-1. Export recommendations to Google Analytics 360 (Google Marketing Platform)
By exporting the resulting predictions from BigQuery ML back to Google Analytics, you will be able to generate custom remarketing audiences and target customers more effectively with ads, search, or email activation.
Formatting the data for Google Analytics 360
You may need to format the data output into something that Google Analytics can ingest, for example:
|clientId | LikelyToBuyProductA |
|-|-|
| 123 | 0.70 |
| 345 | 0.90 |
Here's a sample query for an itemId "GGOEYOLR018699", that normalizes the confidence scores between 0 and 1, using ML.MIN_MAX_SCALER:
End of explanation
%%bigquery df --project $PROJECT_ID
SELECT
*
FROM
bqml.prod_recommendations
LIMIT 100
df.head()
Explanation: To create a column per product, you can use the pivot() function as described in this blogpost.
For Google Analytics Data Import, it's recommended that you use clientId as the key, along with individual columns that show some propensity score. In other words, you may need to create a new column for each product that you are interested in recommending, and create a custom dimension in Google Analytics that can be then used to build your audiences. It's also likely best to ensure that you have one row per clientId. If you know you will be exporting predictions to Google Analytics, it's recommended that you train your models using clientId directly instead of visitorId.
Exporting the data from BigQuery into Google Analytics 360
The easiest way to export your BigQuery ML predictions from a BigQuery table to Google Analytics 360 is to use the MoDeM (Model Deployment for Marketing) reference implementation. MoDeM helps you load data into Google Analytics for eventual activation in Google Ads, Display & Video 360 and Search Ads 360.
To export to Google Analytics 360 from BigQuery:
- Follow the step-by-step instructions here to build your ETL pipeline from BigQuery ML to Google Analytics using MoDeM. You can also view the interactive instructions in this notebook.
1-2. Email activation using Salesforce Marketing Cloud
As Google Analytics does not contain email addresses, you may need to integrate with a 3rd-party platform like Salesforce Marketing Cloud for email activations.
Google Analytics 360 customers can activate their Analytics 360 audiences in Marketing Cloud on Salesforce direct marketing channels (email and SMS). This enables your marketing team to build audiences based on online web behavior and engage with those customers via emails and SMS.
Follow the step-by-step instructions here to integrate Google Analytics 360 with Salesforce Marketing Cloud, or learn more about Audience Activation through Salesforce Trailhead.
<a id="export_other"></a>
2. Other ways to export recommendations from BigQuery
If you want to use the predicted recommendations in other services, two ways to leverage the results are to export the data from BigQuery as a pandas dataframe, or if you want to store the result on Google Cloud Storage, you can also export the table directly as a CSV file.
2-1. Read from the predictions directly from BigQuery
With the predictions stored in a separate table, you can export the data into a Pandas dataframe using the BigQuery Storage API (see documentation and code samples). You can also use other BigQuery client libraries.
Alternatively you can also export directly into pandas in a notebook using the %%bigquery <variable name> as in:
End of explanation
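As a rough sketch of the pivot() step mentioned above — using the df DataFrame loaded in the cell above, whose columns (visitorId, itemId, predicted_session_duration_confidence) come from the bqml.prod_recommendations table — one way to get one column per product is:
# One row per visitorId, one column per itemId; NaN where no recommendation exists.
wide = df.pivot_table(index='visitorId',
                      columns='itemId',
                      values='predicted_session_duration_confidence')
wide.head()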
%%bigquery --project $PROJECT_ID
EXPORT DATA OPTIONS (
uri="gs://mybucket/myfile/recommendations_*.csv",
format=CSV
) AS
SELECT
*
FROM
bqml.prod_recommendations
Explanation: 2-2. Export predictions table to Google Cloud Storage
There are several ways to export the predictions table to Google Cloud Storage (GCS), so that you can use them in a separate service. Perhaps the easiest way is to export directly to GCS using SQL (documentation).
End of explanation |
15,146 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
A Neural Network using Numpy on Bike Sharing Time Series dataset
In this project, we'll build a neural network and use it to predict daily bike rental ridership.
Step1: Load and prepare the data
A critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data.
Step2: Checking out the data
This dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the cnt column. You can see the first few rows of the data above.
Below is a plot showing the number of bike riders over the first 10 days in the data set. You can see the hourly rentals here. This data is pretty complicated! The weekends have lower overall ridership and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of these likely affecting the number of riders.
Step3: Dummy variables
Here we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to get_dummies().
Step4: Scaling target variables
To make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1.
The scaling factors are saved so we can go backwards when we use the network for predictions.
Step5: Splitting the data into training, testing, and validation sets
We'll save the last 21 days of the data to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders.
Step6: We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).
Step7: Time to build the network
Below we'll build the network. We've built out the structure and the backwards pass. The forward pass through the network is to be implemented. We'll also set the hyperparameters
Step8: Training the network
Here we'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops.
We'll also be using a method known as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later.
Choose the number of epochs
This is the number of times the dataset will pass through the network, each time updating the weights. As the number of epochs increases, the network becomes better and better at predicting the targets in the training set. We'll need to choose enough epochs to train the network well but not too many or you'll be overfitting.
Choose the learning rate
This scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. A good choice to start at is 0.1. If the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge.
Choose the number of hidden nodes
The more hidden nodes you have, the more accurate predictions the model will make. Try a few different numbers and see how it affects the performance. We can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough space to learn and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in number of hidden units you choose.
Step9: Check out the predictions
Here, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly.
Step10: Unit tests
Run these unit tests to check the correctness of your network implementation. These tests must all be successful to pass the project. | Python Code:
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
Explanation: A Neural Network using Numpy on Bike Sharing Time Series dataset
In this project, we'll build a neural network and use it to predict daily bike rental ridership.
End of explanation
data_path = 'Bike-Sharing-Dataset/hour.csv'
rides = pd.read_csv(data_path)
rides.head()
Explanation: Load and prepare the data
A critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data.
End of explanation
rides[:24*10].plot(x='dteday', y='cnt')
Explanation: Checking out the data
This dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the cnt column. You can see the first few rows of the data above.
Below is a plot showing the number of bike riders over the first 10 days in the data set. You can see the hourly rentals here. This data is pretty complicated! The weekends have lower overall ridership and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of these likely affecting the number of riders.
End of explanation
dummy_fields = ['season', 'weathersit', 'mnth', 'hr', 'weekday']
for each in dummy_fields:
dummies = pd.get_dummies(rides[each], prefix=each, drop_first=False)
rides = pd.concat([rides, dummies], axis=1)
fields_to_drop = ['instant', 'dteday', 'season', 'weathersit',
'weekday', 'atemp', 'mnth', 'workingday', 'hr']
data = rides.drop(fields_to_drop, axis=1)
data.head()
Explanation: Dummy variables
Here we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to get_dummies().
End of explanation
quant_features = ['casual', 'registered', 'cnt', 'temp', 'hum', 'windspeed']
# Store scalings in a dictionary so we can convert back later
scaled_features = {}
for each in quant_features:
mean, std = data[each].mean(), data[each].std()
scaled_features[each] = [mean, std]
data.loc[:, each] = (data[each] - mean)/std
Explanation: Scaling target variables
To make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1.
The scaling factors are saved so we can go backwards when we use the network for predictions.
End of explanation
# Save the last 21 days
test_data = data[-21*24:]
data = data[:-21*24]
# Separate the data into features and targets
target_fields = ['cnt', 'casual', 'registered']
features, targets = data.drop(target_fields, axis=1), data[target_fields]
test_features, test_targets = test_data.drop(target_fields, axis=1), test_data[target_fields]
Explanation: Splitting the data into training, testing, and validation sets
We'll save the last 21 days of the data to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders.
End of explanation
# Hold out the last 60 days of the remaining data as a validation set
train_features, train_targets = features[:-60*24], targets[:-60*24]
val_features, val_targets = features[-60*24:], targets[-60*24:]
Explanation: We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).
End of explanation
_VERBOSE = False
class NeuralNetwork(object):
def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Set number of nodes in input, hidden and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Initialize weights
self.weights_input_to_hidden = np.random.normal(0.0, self.hidden_nodes ** -0.5,
(self.hidden_nodes, self.input_nodes))
self.weights_hidden_to_output = np.random.normal(0.0, self.output_nodes ** -0.5,
(self.output_nodes, self.hidden_nodes))
self.lr = learning_rate
#### Set this to your implemented sigmoid function ####
# Activation function is the sigmoid function
self.activation_function = (lambda x: 1 / (1 + np.exp(-x)))
# All shapes
if _VERBOSE:
print(
'Inputs: {0}, Hidden: {1}, Output: {2}'.format(self.input_nodes, self.hidden_nodes, self.output_nodes))
print('Weights - Input-to-Hidden: {0}, Hidden-to-Output: {1}'.format(self.weights_input_to_hidden.shape,
self.weights_hidden_to_output.shape))
def train(self, inputs_list, targets_list):
# Convert inputs list to 2d array
inputs = np.array(inputs_list, ndmin=2).T
targets = np.array(targets_list, ndmin=2).T
if _VERBOSE:
print('Input-list: {0}, Target-list: {1}'.format(inputs_list.shape, targets_list.shape))
print('Transposed - Input-list: {0}, Target-list: {1}'.format(inputs.shape, targets.shape))
print('Targets:', targets_list, targets)
#### Implement the forward pass here ####
### Forward pass ###
# Hidden layer (Input to Hidden)
hidden_inputs = np.dot(self.weights_input_to_hidden, inputs) # (2, 56) x (56, 1) -> (2, 1)
hidden_outputs = self.activation_function(hidden_inputs) # (2, 1) -> (2, 1)
# Output layer (Hidden to Output)
final_inputs = np.dot(self.weights_hidden_to_output, hidden_outputs) # (1, 2) -> (2, 1) -> (1, 1)
final_outputs = final_inputs # signals from final output layer, eg. f(x)=x. (1, 1)
if _VERBOSE:
print('Final inputs:', final_inputs.shape, 'Final outputs:', final_outputs.shape)
#### Implement the backward pass here ####
### Backward pass ###
# Output error
output_errors = targets - final_outputs # Output layer error is the difference between desired target and actual output.
# (1, 1) - (1, 1) -> (1, 1)
if _VERBOSE:
print('Shapes - Targets:', targets.shape, 'Final outputs:', final_outputs.shape, 'Output errors:',
output_errors.shape)
print('Values - Targets:', targets, 'Final outputs:', final_outputs, 'Output errors:', output_errors)
# Backpropagated error
hidden_errors = np.dot(self.weights_hidden_to_output.T, output_errors) # errors propagated to the hidden layer
        # (2, 1) x (1, 1) -> (2, 1)
hidden_grad = hidden_outputs * (1 - hidden_outputs) # hidden layer gradients. (2, 1) -> (2, 1)
if _VERBOSE:
print('Shapes - Output errors:', output_errors.shape, 'Weights/Hidden to Output:',
self.weights_hidden_to_output.shape, 'Hidden errors:', hidden_errors.shape)
print('Shapes - Hidden outputs:', hidden_outputs.shape, 'Hidden grad:', hidden_grad.shape)
# Update the weights
self.weights_hidden_to_output += np.dot(output_errors,
                                                 hidden_outputs.T) * self.lr  # update hidden-to-output weights with gradient descent step. (1, 1) x (1, 2) -> (1, 2)
if _VERBOSE:
print('Shapes - Output errors:', output_errors.shape, 'Hidden errors:', hidden_outputs.T.shape,
'Weights/Hidden to Output:', self.weights_hidden_to_output.shape)
print('Shapes - Hidden errors:', hidden_errors.shape, 'Hidden grad:', hidden_grad.shape, 'Input (trans):',
inputs.T.shape, 'Weights/Input to Hidden:', self.weights_input_to_hidden.shape)
self.weights_input_to_hidden += np.dot(hidden_errors * hidden_grad,
inputs.T) * self.lr # update input-to-hidden weights with gradient descent step
def run(self, inputs_list):
# Run a forward pass through the network
inputs = np.array(inputs_list, ndmin=2).T
#### Implement the forward pass here ####
# Hidden layer
hidden_inputs = np.dot(self.weights_input_to_hidden, inputs) # signals into hidden layer
hidden_outputs = self.activation_function(hidden_inputs) # signals from hidden layer
# Output layer
final_inputs = np.dot(self.weights_hidden_to_output, hidden_outputs) # signals into final output layer
final_outputs = final_inputs # signals from final output layer
return final_outputs
def MSE(y, Y):
return np.mean((y-Y)**2)
Explanation: Time to build the network
Below we'll build the network. We've built out the structure and the backwards pass. The forward pass through the network is to be implemented. We'll also set the hyperparameters: the learning rate, the number of hidden units, and the number of training passes.
The network has two layers, a hidden layer and an output layer. The hidden layer will use the sigmoid function for activations. The output layer has only one node and is used for the regression, the output of the node is the same as the input of the node. That is, the activation function is $f(x)=x$. A function that takes the input signal and generates an output signal, but takes into account the threshold, is called an activation function. We work through each layer of our network calculating the outputs for each neuron. All of the outputs from one layer become inputs to the neurons on the next layer. This process is called forward propagation.
We use the weights to propagate signals forward from the input to the output layers in a neural network. We use the weights to also propagate error backwards from the output back into the network to update our weights. This is called backpropagation.
Hint: You'll need the derivative of the output activation function ($f(x) = x$) for the backpropagation implementation. If you aren't familiar with calculus, this function is equivalent to the equation $y = x$. What is the slope of that equation? That is the derivative of $f(x)$.
End of explanation
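To make the hint above concrete, here is a tiny illustrative snippet showing the two derivatives the backward pass relies on: the sigmoid derivative sigma(x) * (1 - sigma(x)), which appears as hidden_grad in the class above, and the derivative of the output activation f(x) = x, which is simply 1 — which is why output_errors is used directly without an extra gradient factor.
def sigmoid(x):
    return 1 / (1 + np.exp(-x))

x = 0.5
sigmoid_grad = sigmoid(x) * (1 - sigmoid(x))  # derivative of the sigmoid at x
output_grad = 1                               # derivative of f(x) = x is constant
print(sigmoid_grad, output_grad)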
import sys
### Set the hyperparameters here ###
epochs = 3000
learning_rate = 0.01
hidden_nodes = 15
output_nodes = 1
N_i = train_features.shape[1]
network = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)
losses = {'train':[], 'validation':[]}
for e in range(epochs):
# Go through a random batch of 128 records from the training data set
batch = np.random.choice(train_features.index, size=128)
    for record, target in zip(train_features.loc[batch].values,
                              train_targets.loc[batch]['cnt']):
network.train(record, target)
# Printing out the training progress
train_loss = MSE(network.run(train_features), train_targets['cnt'].values)
val_loss = MSE(network.run(val_features), val_targets['cnt'].values)
sys.stdout.write("\rProgress: " + str(100 * e/float(epochs))[:4] \
+ "% ... Training loss: " + str(train_loss)[:5] \
+ " ... Validation loss: " + str(val_loss)[:5])
losses['train'].append(train_loss)
losses['validation'].append(val_loss)
plt.plot(losses['train'], label='Training loss')
plt.plot(losses['validation'], label='Validation loss')
plt.legend()
plt.ylim(ymax=0.5)
Explanation: Training the network
Here we'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops.
We'll also be using a method known as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later.
Choose the number of epochs
This is the number of times the dataset will pass through the network, each time updating the weights. As the number of epochs increases, the network becomes better and better at predicting the targets in the training set. We'll need to choose enough epochs to train the network well but not too many or you'll be overfitting.
Choose the learning rate
This scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. A good choice to start at is 0.1. If the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge.
Choose the number of hidden nodes
The more hidden nodes you have, the more accurate predictions the model will make. Try a few different numbers and see how it affects the performance. We can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough space to learn and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in number of hidden units you choose.
End of explanation
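If you want to compare a few settings side by side, a minimal sketch could loop over the number of hidden nodes and report the validation loss for each — the candidate values and the reduced epoch count below are only illustrative so the comparison runs quickly.
for n_hidden in [5, 10, 15, 30]:
    candidate = NeuralNetwork(N_i, n_hidden, output_nodes, learning_rate)
    for e in range(200):  # far fewer epochs than above, just for a quick comparison
        batch = np.random.choice(train_features.index, size=128)
        for record, target in zip(train_features.loc[batch].values,
                                  train_targets.loc[batch]['cnt']):
            candidate.train(record, target)
    val_loss = MSE(candidate.run(val_features), val_targets['cnt'].values)
    print('hidden nodes:', n_hidden, 'validation loss:', val_loss)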
fig, ax = plt.subplots(figsize=(8,4))
mean, std = scaled_features['cnt']
predictions = network.run(test_features)*std + mean
ax.plot(predictions[0], label='Prediction')
ax.plot((test_targets['cnt']*std + mean).values, label='Data')
ax.set_xlim(right=len(predictions))
ax.legend()
dates = pd.to_datetime(rides.loc[test_data.index]['dteday'])
dates = dates.apply(lambda d: d.strftime('%b %d'))
ax.set_xticks(np.arange(len(dates))[12::24])
_ = ax.set_xticklabels(dates[12::24], rotation=45)
# A classification-style accuracy is not meaningful for this regression task,
# so report the test-set loss (MSE) instead.
test_loss = MSE(network.run(test_features), test_targets['cnt'].values)
print('Test loss (MSE):', test_loss)
Explanation: Check out the predictions
Here, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly.
End of explanation
import unittest
inputs = [0.5, -0.2, 0.1]
targets = [0.4]
test_w_i_h = np.array([[0.1, 0.4, -0.3],
[-0.2, 0.5, 0.2]])
test_w_h_o = np.array([[0.3, -0.1]])
class TestMethods(unittest.TestCase):
##########
# Unit tests for data loading
##########
def test_data_path(self):
# Test that file path to dataset has been unaltered
self.assertTrue(data_path.lower() == 'bike-sharing-dataset/hour.csv')
def test_data_loaded(self):
# Test that data frame loaded
self.assertTrue(isinstance(rides, pd.DataFrame))
##########
# Unit tests for network functionality
##########
def test_activation(self):
network = NeuralNetwork(3, 2, 1, 0.5)
# Test that the activation function is a sigmoid
self.assertTrue(np.all(network.activation_function(0.5) == 1/(1+np.exp(-0.5))))
def test_train(self):
# Test that weights are updated correctly on training
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
network.train(inputs, targets)
self.assertTrue(np.allclose(network.weights_hidden_to_output,
np.array([[ 0.37275328, -0.03172939]])))
self.assertTrue(np.allclose(network.weights_input_to_hidden,
np.array([[ 0.10562014, 0.39775194, -0.29887597],
[-0.20185996, 0.50074398, 0.19962801]])))
def test_run(self):
# Test correctness of run method
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
self.assertTrue(np.allclose(network.run(inputs), 0.09998924))
suite = unittest.TestLoader().loadTestsFromModule(TestMethods())
unittest.TextTestRunner().run(suite)
Explanation: Unit tests
Run these unit tests to check the correctness of your network implementation. These tests must all be successful to pass the project.
End of explanation |
15,147 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Document retrieval from wikipedia data
Fire up GraphLab Create
Step1: Load some text data - from wikipedia, pages on people
Step2: Data contains
Step3: Explore the dataset and checkout the text it contains
Exploring the entry for president Obama
Step4: Exploring the entry for actor George Clooney
Step5: Get the word counts for Obama article
Step6: Sort the word counts for the Obama article
Turning the dictionary of word counts into a table
The stack function takes one SFrame column containing a dict and stacks its (key, value) pairs into separate rows, one after another.
Step7: Sorting the word counts to show most common words at the top
Step8: Most common words include uninformative words like "the", "in", "and",...
Compute TF-IDF for the corpus
To give more weight to informative words, we weigh them by their TF-IDF scores.
Step9: Examine the TF-IDF for the Obama article
Step10: Words with highest TF-IDF are much more informative.
Manually compute distances between a few people
Let's manually compare the distances between the articles for a few famous people.
Step11: Is Obama closer to Clinton than to Beckham?
We will use cosine distance, which is given by
(1-cosine_similarity)
and find that the article about president Obama is closer to the one about former president Clinton than that of footballer David Beckham.
(Tip: lower number means closer distance and therefore higher similarity.)
Step12: Build a nearest neighbor model for document retrieval
We now create a nearest-neighbors model and apply it to document retrieval.
Step13: Applying the nearest-neighbors model for retrieval
Who is closest to Obama?
Step14: As we can see, president Obama's article is closest to the one about his vice-president Biden, and those of other politicians.
Other examples of document retrieval | Python Code:
import graphlab
Explanation: Document retrieval from wikipedia data
Fire up GraphLab Create
End of explanation
people = graphlab.SFrame('people_wiki.gl/')
Explanation: Load some text data - from wikipedia, pages on people
End of explanation
people.head()
len(people)
Explanation: Data contains: link to wikipedia article, name of person, text of article.
End of explanation
obama = people[people['name'] == 'Barack Obama']
obama
obama['text']
Explanation: Explore the dataset and checkout the text it contains
Exploring the entry for president Obama
End of explanation
clooney = people[people['name'] == 'George Clooney']
clooney['text']
Explanation: Exploring the entry for actor George Clooney
End of explanation
obama['word_count'] = graphlab.text_analytics.count_words(obama['text'])
print obama['word_count']
Explanation: Get the word counts for Obama article
End of explanation
obama_word_count_table = obama[['word_count']].stack('word_count', new_column_name = ['word','count'])
Explanation: Sort the word counts for the Obama article
Turning the dictionary of word counts into a table
The stack function takes one SFrame column containing a dict and stacks its (key, value) pairs into separate rows, one after another.
End of explanation
obama_word_count_table.head()
obama_word_count_table.sort('count',ascending=False)
Explanation: Sorting the word counts to show most common words at the top
End of explanation
people['word_count'] = graphlab.text_analytics.count_words(people['text'])
people.head()
tfidf = graphlab.text_analytics.tf_idf(people['word_count'])
tfidf
people['tfidf'] = tfidf['docs']
people.head()
Explanation: Most common words include uninformative words like "the", "in", "and",...
Compute TF-IDF for the corpus
To give more weight to informative words, we weigh them by their TF-IDF scores.
End of explanation
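For intuition, here is a toy computation of one common TF-IDF variant for a single word in a single document; the exact smoothing GraphLab Create applies may differ slightly, so treat this only as an illustration of the idea.
import math

docs = [['the', 'cat', 'sat'], ['the', 'dog', 'sat'], ['the', 'cat', 'ran']]
word = 'cat'
doc = docs[0]

tf = doc.count(word)                    # term frequency in this document
df = sum(1 for d in docs if word in d)  # number of documents containing the word
idf = math.log(float(len(docs)) / df)   # inverse document frequency
print('tf-idf of %r in doc 0: %.3f' % (word, tf * idf))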
obama = people[people['name'] == 'Barack Obama']
obama[['tfidf']].stack('tfidf',new_column_name=['word','tfidf']).sort('tfidf',ascending=False)
Explanation: Examine the TF-IDF for the Obama article
End of explanation
clinton = people[people['name'] == 'Bill Clinton']
beckham = people[people['name'] == 'David Beckham']
Explanation: Words with highest TF-IDF are much more informative.
Manually compute distances between a few people
Let's manually compare the distances between the articles for a few famous people.
End of explanation
graphlab.distances.cosine(obama['tfidf'][0],clinton['tfidf'][0])
graphlab.distances.cosine(obama['tfidf'][0],beckham['tfidf'][0])
Explanation: Is Obama closer to Clinton than to Beckham?
We will use cosine distance, which is given by
(1-cosine_similarity)
and find that the article about president Obama is closer to the one about former president Clinton than that of footballer David Beckham.
(Tip: lower number means closer distance and therefore higher similarity.)
End of explanation
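For reference, the same cosine distance can be computed by hand on two small vectors — a minimal sketch using numpy (which is not imported elsewhere in this notebook, so the import is included here):
import numpy as np

a = np.array([1.0, 2.0, 0.0])
b = np.array([2.0, 1.0, 1.0])

# Cosine distance = 1 - cosine similarity.
print(1 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))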
knn_model = graphlab.nearest_neighbors.create(people,features=['tfidf'],label='name')
Explanation: Build a nearest neighbor model for document retrieval
We now create a nearest-neighbors model and apply it to document retrieval.
End of explanation
knn_model.query(obama)
Explanation: Applying the nearest-neighbors model for retrieval
Who is closest to Obama?
End of explanation
swift = people[people['name'] == 'Taylor Swift']
knn_model.query(swift)
jolie = people[people['name'] == 'Angelina Jolie']
knn_model.query(jolie)
arnold = people[people['name'] == 'Arnold Schwarzenegger']
knn_model.query(arnold)
elton = people[people['name'] == 'Elton John']
elton_word_count_table = elton[['word_count']].stack('word_count', new_column_name = ['word','count']).sort('count',ascending=False)
elton_word_count_table.head()
elton[['tfidf']].stack('tfidf', new_column_name=['word','tfidf']).sort('tfidf',ascending=False)
victoria = people[people['name'] == 'Victoria Beckham']
mccartney = people[people['name'] == 'Paul McCartney']
graphlab.distances.cosine(elton['tfidf'][0],victoria['tfidf'][0])
graphlab.distances.cosine(elton['tfidf'][0],mccartney['tfidf'][0])
nn_model_wc = graphlab.nearest_neighbors.create(people, distance='cosine',features=['word_count'],label='name')
nn_model_tfidf = graphlab.nearest_neighbors.create(people, distance='cosine',features=['tfidf'],label='name')
nn_model_wc.query(elton)
nn_model_tfidf.query(elton)
nn_model_wc.query(victoria)
nn_model_tfidf.query(victoria)
Explanation: As we can see, president Obama's article is closest to the one about his vice-president Biden, and those of other politicians.
Other examples of document retrieval
End of explanation |
15,148 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
We will train a model to predict drug resistance values from sequence.
This is the other general variant of supervised learning - where instead of predicting a "label" for a class (classification), we are predicting a "number" or a "value". This is called regression, analogous to, say, linear regression or logistic regression.
Step1: Exercise
Practice what you've learned! Split the data into features, response variable, and then do a train/test split.
Step2: Exercise
Now, let's train the Random Forest Regressor on the data.
Step3: Make a plot of what the predictions (y-axis) against the actual values (x-axis)
Step4: Evaluating the Model
Just as with classification tasks, we also need metrics to help evaluate how good a trained model is, given the input features.
Exercise
Look through the sklearn.metrics module. What might be a suitable metric to use?
Justify the use of two of them, and write the code that computes the evaluation metric.
Step5: Discussion
What does the distribution of values look like? Where is its skew? How could we tell?
What would be a better way of transforming the data prior to doing ML?
Live-Coding
Let's try log10-transforming the values to be predicted.
Step6: Note
Step7: The r-squared score goes up. Less skew is always good.
Step8: Challenge Exercise
Can you compare the following algorithms to see which one performs best?
RandomForestRegressor
GradientBoostingRegressor
AdaBoostRegressor
ExtraTreesRegressor
Statistical Practices
Here, I will show you how to use the ShuffleSplit iterator, alongside the cross_val_score method, to evaluate the performance of different regressor models.
The methods here aren't limited to regression, though. You can easily implement these for classification as well. | Python Code:
# Load the sequence data as a Pandas dataframe.
seqids = [s.id for s in SeqIO.parse('data/hiv-protease-sequences-expanded.fasta', 'fasta')]
sequences = [s for s in SeqIO.parse('data/hiv-protease-sequences-expanded.fasta', 'fasta')]
sequences = MultipleSeqAlignment(sequences)
sequences = pd.DataFrame(np.array(sequences))
sequences.index = seqids
# Ensure that all of the letters are upper-case, otherwise the replace function in the next cell won't work.
for col in sequences.columns:
sequences[col] = sequences[col].apply(lambda x: x.upper())
sequences[col] = sequences[col].replace('*', np.nan)
sequences.head()
seqdf = sequences.replace(isoelectric_points.keys(), isoelectric_points.values())
seqdf.head()
# Load the drug resistance values
dr_vals = pd.read_csv('data/hiv-protease-data-expanded.csv', index_col=0)
dr_vals.set_index('seqid', inplace=True)
dr_vals.head()
# Join the sequence data together with that of one drug of interest.
drug_name = 'FPV'
data_matrix = seqdf.join(dr_vals[drug_name]).dropna() # we have to drop NaN values because scikit-learn algorithms are not designed to accept them.
data_matrix.head()
Explanation: We will train a model to predict drug resistance values from sequence.
This is the other general variant of supervised learning - where instead of predicting a "label" for a class (classification), we are predicting a "number" or a "value". This is called regression, analogous to, say, linear regression or logistic regression.
End of explanation
# Your Answer
# Hint: to select a set of columns from a dataframe, use: dataframe[[columns]]
# Hint: the columns 0 to 98 can be expressed as a list comprehension: [i for i in range(99)]
X = data_matrix[[i for i in range(99)]]
Y = data_matrix[drug_name]
X_train, X_test, Y_train, Y_test = train_test_split(X, Y)
Explanation: Exercise
Practice what you've learned! Split the data into features, response variable, and then do a train/test split.
End of explanation
# Answer
mdl = RandomForestRegressor()
mdl.fit(X_train, Y_train)
preds = mdl.predict(X_test)
Explanation: Exercise
Now, let's train the Random Forest Regressor on the data.
End of explanation
plt.scatter(Y_test, preds)
Explanation: Make a plot of the predictions (y-axis) against the actual values (x-axis)
End of explanation
# Metric 1: correlation coefficient.
r2_score(preds, Y_test)
# Metric 2: mean squared error
mean_squared_error(preds, Y_test)
Explanation: Evaluating the Model
Just as with classification tasks, we also need metrics to help evaluate how good a trained model is, given the input features.
Exercise
Look through the sklearn.metrics module. What might be a suitable metric to use?
Justify the use of two of them, and write the code that computes the evaluation metric.
End of explanation
X = data_matrix[[i for i in range(99)]]
Y = data_matrix[drug_name].apply(np.log10) # the log10 transformation is applied here.
X_train, X_test, Y_train, Y_test = train_test_split(X, Y)
mdl = RandomForestRegressor()
mdl.fit(X_train, Y_train)
preds = mdl.predict(X_test)
Explanation: Discussion
What does the distribution of values look like? Where is its skew? How could we tell?
What would be a better way of transforming the data prior to doing ML?
Live-Coding
Let's try log10-transforming the values to be predicted.
End of explanation
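To answer the discussion questions above, one quick way to see the skew is to histogram the raw and log10-transformed resistance values — a sketch using the data_matrix and drug_name defined earlier:
fig, axes = plt.subplots(1, 2, figsize=(10, 4))
axes[0].hist(data_matrix[drug_name], bins=30)
axes[0].set_title('raw %s values' % drug_name)
axes[1].hist(np.log10(data_matrix[drug_name]), bins=30)
axes[1].set_title('log10 %s values' % drug_name)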
mean_squared_error(preds, Y_test)
Explanation: Note: the MSE goes down because of the log10 transform. However, that's exactly what we would have expected by definition.
End of explanation
r2_score(preds, Y_test)
Explanation: The r-squared score goes up. Reducing the skew of the target generally makes the relationship easier for the model to fit.
End of explanation
cv = ShuffleSplit(n=len(X), n_iter=10, test_size=0.3)
models = dict()
models['rf'] = RandomForestRegressor()
models['gb'] = GradientBoostingRegressor()
models['ad'] = AdaBoostRegressor()
models['ex'] = ExtraTreesRegressor()
scores = dict()
for abbr, model in models.items():
print(abbr, model)
score = cross_val_score(model, X, Y, cv=cv, scoring='mean_squared_error')
scores[abbr] = -score # a known issue in the scikit-learn package; we have to take the negative of the result.
score_summary = pd.DataFrame(scores)
score_summary = pd.DataFrame(score_summary.unstack()).reset_index()
score_summary.columns = ['model', 'idx', 'error']
sns.violinplot(x='model', y='error', data=score_summary)
Explanation: Challenge Exercise
Can you compare the following algorithms to see which one performs best?
RandomForestRegressor
GradientBoostingRegressor
AdaBoostRegressor
ExtraTreesRegressor
Statistical Practices
Here, I will show you how to use the ShuffleSplit iterator, alongside the cross_val_score method, to evaluate the performance of different regressor models.
The methods here aren't limited to regression, though. You can easily implement these for classification as well.
End of explanation |
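As a sketch of the classification variant mentioned above — here the resistance values are binarized at their median purely for illustration, since this dataset has no categorical label of its own:
from sklearn.ensemble import RandomForestClassifier

# Illustration only: turn the continuous target into two classes at the median.
Y_class = (data_matrix[drug_name] > data_matrix[drug_name].median()).astype(int)

clf = RandomForestClassifier()
clf_scores = cross_val_score(clf, X, Y_class, cv=cv, scoring='accuracy')
print(clf_scores.mean())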
15,149 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Using tf.keras
This Colab is about how to use Keras to define and train simple models on the data generated in the last Colab 1_data.ipynb
Step2: Attention
Step3: Linear model
Step4: Convolutional model
Step5: Store model
Step6: ----- Optional part -----
Learn from errors
Looking at classification mistakes is a great way to better understand how a model is performing. This section walks you through the necessary steps to load some examples from the dataset, make predictions, and plot the mistakes.
Step7: Data from DataFrame
For comparison, this section shows how you would load data from a pandas.DataFrame and then use Keras for training. Note that this approach does not scale well and can only be used for quite small datasets.
Step8: TPU Support
For using TF with a TPU we'll need to make some adjustments. Generally, please note that several TF TPU features are experimental and might not work as smooth as it does with a CPU or GPU.
Attention
Step9: Attention | Python Code:
# In Jupyter, you would need to install TF 2.0 via !pip.
%tensorflow_version 2.x
import tensorflow as tf
import json, os
# Tested with TensorFlow 2.1.0
print('version={}, CUDA={}, GPU={}, TPU={}'.format(
tf.__version__, tf.test.is_built_with_cuda(),
# GPU attached?
len(tf.config.list_physical_devices('GPU')) > 0,
# TPU accessible? (only works on Colab)
'COLAB_TPU_ADDR' in os.environ))
Explanation: Using tf.keras
This Colab is about how to use Keras to define and train simple models on the data generated in the last Colab 1_data.ipynb
End of explanation
# Load data from Drive (Colab only).
data_path = '/content/gdrive/My Drive/amld_data/zoo_img'
# Or, you can load data from different sources, such as:
# From your local machine:
# data_path = './amld_data'
# Or use a prepared dataset from Cloud (Colab only).
# - 50k training examples, including pickled DataFrame.
# data_path = 'gs://amld-datasets/zoo_img_small'
# - 1M training examples, without pickled DataFrame.
# data_path = 'gs://amld-datasets/zoo_img'
# - 4.1M training examples, without pickled DataFrame.
# data_path = 'gs://amld-datasets/animals_img'
# - 29M training examples, without pickled DataFrame.
# data_path = 'gs://amld-datasets/all_img'
# Store models on Drive (Colab only).
models_path = '/content/gdrive/My Drive/amld_data/models'
# Or, store models to local machine.
# models_path = './amld_models'
if data_path.startswith('/content/gdrive/'):
from google.colab import drive
drive.mount('/content/gdrive')
if data_path.startswith('gs://'):
from google.colab import auth
auth.authenticate_user()
!gsutil ls -lh "$data_path"
else:
!sleep 1 # wait a bit for the mount to become ready
!ls -lh "$data_path"
labels = [label.strip() for label
in tf.io.gfile.GFile('{}/labels.txt'.format(data_path))]
print('All labels in the dataset:', ' '.join(labels))
counts = json.load(tf.io.gfile.GFile('{}/counts.json'.format(data_path)))
print('Splits sizes:', counts)
# This dictionary specifies what "features" we want to extract from the
# tf.train.Example protos (i.e. what they look like on disk). We only
# need the image data "img_64" and the "label". Both features are tensors
# with a fixed length.
# You need to specify the correct "shape" and "dtype" parameters for
# these features.
feature_spec = {
# Single label per example => shape=[1] (we could also use shape=() and
# then do a transformation in the input_fn).
'label': tf.io.FixedLenFeature(shape=[1], dtype=tf.int64),
# The bytes_list data is parsed into tf.string.
'img_64': tf.io.FixedLenFeature(shape=[64, 64], dtype=tf.int64),
}
def parse_example(serialized_example):
# Convert string to tf.train.Example and then extract features/label.
features = tf.io.parse_single_example(serialized_example, feature_spec)
label = features['label']
label = tf.one_hot(tf.squeeze(label), len(labels))
features['img_64'] = tf.cast(features['img_64'], tf.float32) / 255.
return features['img_64'], label
batch_size = 100
steps_per_epoch = counts['train'] // batch_size
eval_steps_per_epoch = counts['eval'] // batch_size
# Create datasets from TFRecord files.
train_ds = tf.data.TFRecordDataset(tf.io.gfile.glob(
'{}/train-*'.format(data_path)))
train_ds = train_ds.map(parse_example)
train_ds = train_ds.batch(batch_size).repeat()
eval_ds = tf.data.TFRecordDataset(tf.io.gfile.glob(
'{}/eval-*'.format(data_path)))
eval_ds = eval_ds.map(parse_example)
eval_ds = eval_ds.batch(batch_size)
# Read a single batch of examples from the training set and display shapes.
for img_feature, label in train_ds:
break
print('img_feature.shape (batch_size, image_height, image_width) =',
img_feature.shape)
print('label.shape (batch_size, number_of_labels) =', label.shape)
# Visualize some examples from the training set.
from matplotlib import pyplot as plt
def show_img(img_64, title='', ax=None):
  """Displays an image.

  Args:
    img_64: Array (or Tensor) with monochrome image data.
    title: Optional title.
    ax: Optional Matplotlib axes to show the image in.
  """
  # Convert to a numpy array first so that .reshape() below always works.
  if isinstance(img_64, tf.Tensor):
    img_64 = img_64.numpy()
  (ax if ax else plt).matshow(img_64.reshape((64, -1)), cmap='gray')
  ax = ax if ax else plt.gca()
  ax.set_xticks([])
  ax.set_yticks([])
  ax.set_title(title)
rows, cols = 3, 5
for img_feature, label in train_ds:
break
_, axs = plt.subplots(rows, cols, figsize=(2*cols, 2*rows))
for i in range(rows):
for j in range(cols):
show_img(img_feature[i*rows+j].numpy(),
title=labels[label[i*rows+j].numpy().argmax()], ax=axs[i][j])
Explanation: Attention: Please avoid using the TPU runtime (TPU=True) for now. The notebook contains an optional part on TPU usage at the end if you're interested. You can change the runtime via: "Runtime > Change runtime type > Hardware Accelerator" in Colab.
Data from Protobufs
End of explanation
# Sample linear model.
linear_model = tf.keras.Sequential()
linear_model.add(tf.keras.layers.Flatten(input_shape=(64, 64,)))
linear_model.add(tf.keras.layers.Dense(len(labels), activation='softmax'))
# "adam, categorical_crossentropy, accuracy" and other string constants can be
# found at https://keras.io.
linear_model.compile(
optimizer='adam',
loss='categorical_crossentropy',
metrics=['accuracy', tf.keras.metrics.categorical_accuracy])
linear_model.summary()
linear_model.fit(train_ds,
validation_data=eval_ds,
steps_per_epoch=steps_per_epoch,
validation_steps=eval_steps_per_epoch,
epochs=1,
verbose=True)
Explanation: Linear model
End of explanation
# Let's define a convolutional model:
conv_model = tf.keras.Sequential([
tf.keras.layers.Reshape(target_shape=(64, 64, 1), input_shape=(64, 64)),
tf.keras.layers.Conv2D(filters=32,
kernel_size=(10, 10),
padding='same',
activation='relu'),
tf.keras.layers.Conv2D(filters=32,
kernel_size=(10, 10),
padding='same',
activation='relu'),
tf.keras.layers.MaxPooling2D(pool_size=(4, 4), strides=(4,4)),
tf.keras.layers.Conv2D(filters=64,
kernel_size=(5, 5),
padding='same',
activation='relu'),
tf.keras.layers.MaxPooling2D(pool_size=(4, 4), strides=(4,4)),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(256, activation='relu'),
tf.keras.layers.Dropout(0.3),
tf.keras.layers.Dense(len(labels), activation='softmax'),
])
# YOUR ACTION REQUIRED:
# Compile + print summary of the model (analogous to the linear model above).
# YOUR ACTION REQUIRED:
# Train the model (analogous to linear model above).
# Note: You might want to reduce the number of steps if if it takes too long.
# Pro tip: Change the runtime type ("Runtime" menu) to GPU! After the change you
# will need to rerun the cells above because the Python kernel's state is reset.
Explanation: Convolutional model
End of explanation
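One possible completion of the two exercise cells above — the optimizer, loss, and single epoch simply mirror the linear model and are a reasonable starting point rather than the only choice:
conv_model.compile(
    optimizer='adam',
    loss='categorical_crossentropy',
    metrics=['accuracy'])
conv_model.summary()

conv_model.fit(train_ds,
               validation_data=eval_ds,
               steps_per_epoch=steps_per_epoch,
               validation_steps=eval_steps_per_epoch,
               epochs=1,
               verbose=True)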
tf.io.gfile.makedirs(models_path)
# Save model as Keras model.
keras_path = os.path.join(models_path, 'linear.h5')
linear_model.save(keras_path)
# Keras model is a single file.
!ls -hl "$keras_path"
# Load Keras model.
loaded_keras_model = tf.keras.models.load_model(keras_path)
loaded_keras_model.summary()
# Save model as Tensorflow Saved Model.
saved_model_path = os.path.join(models_path, 'saved_model/linear')
linear_model.save(saved_model_path, save_format='tf')
# Inspect saved model directory structure.
!find "$saved_model_path"
saved_model = tf.keras.models.load_model(saved_model_path)
saved_model.summary()
# YOUR ACTION REQUIRED:
# Store the convolutional model and any additional models that you trained
# in the previous sections in Keras format so we can use them in later
# notebooks for prediction.
Explanation: Store model
End of explanation
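A minimal sketch for the exercise above — saving the convolutional model (and, in the same way, any other model you trained) in Keras format under the models_path defined at the top of the notebook; the file name is arbitrary:
conv_model.save(os.path.join(models_path, 'conv.h5'))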
import collections
Mistake = collections.namedtuple('Mistake', 'label pred img_64')
mistakes = []
eval_ds_iter = iter(eval_ds)
for img_64_batch, label_onehot_batch in eval_ds_iter:
break
img_64_batch.shape, label_onehot_batch.shape
# YOUR ACTION REQUIRED:
# Use model.predict() to get a batch of predictions.
# One possible solution (any trained model from above works; conv_model would work too):
preds = linear_model.predict(img_64_batch)
# Iterate through the batch:
for label_onehot, pred, img_64 in zip(label_onehot_batch, preds, img_64_batch):
  # YOUR ACTION REQUIRED:
  # Both `label_onehot` and pred are vectors with length=len(labels), with every
  # element corresponding to a probability of the corresponding class in
  # `labels`. Get the value with the highest value to get the index within
  # `labels`.
  label_i = label_onehot.numpy().argmax()
  pred_i = pred.argmax()
  if label_i != pred_i:
    mistakes.append(Mistake(label_i, pred_i, img_64.numpy()))
# You can run this and above 2 cells multiple times to get more mistakes.
len(mistakes)
# Let's examine the cases when our model gets it wrong. Would you recognize
# these images correctly?
# YOUR ACTION REQUIRED:
# Run above cell but using a different model to get a different set of
# classification mistakes. Then copy over this cell to plot the mistakes for
# comparison purposes. Can you spot a pattern?
rows, cols = 5, 5
plt.figure(figsize=(cols*2.5, rows*2.5))
for i, mistake in enumerate(mistakes[:rows*cols]):
ax = plt.subplot(rows, cols, i + 1)
title = '{}? {}!'.format(labels[mistake.pred], labels[mistake.label])
show_img(mistake.img_64, title, ax)
Explanation: ----- Optional part -----
Learn from errors
Looking at classification mistakes is a great way to better understand how a model is performing. This section walks you through the necessary steps to load some examples from the dataset, make predictions, and plot the mistakes.
End of explanation
# Note: used memory BEFORE loading the DataFrame.
!free -h
# Loading all the data in memory takes a while (~40s).
import pickle
df = pickle.load(tf.io.gfile.GFile('%s/dataframe.pkl' % data_path, mode='rb'))
print(len(df))
print(df.columns)
df_train = df[df.split == b'train']
len(df_train)
# Note: used memory AFTER loading the DataFrame.
!free -h
# Show some images from the dataset.
from matplotlib import pyplot as plt
def show_img(img_64, title='', ax=None):
(ax if ax else plt).matshow(img_64.reshape((64, -1)), cmap='gray')
ax = ax if ax else plt.gca()
ax.set_xticks([])
ax.set_yticks([])
ax.set_title(title)
rows, cols = 3, 3
_, axs = plt.subplots(rows, cols, figsize=(2*cols, 2*rows))
for i in range(rows):
for j in range(cols):
d = df.sample(1).iloc[0]
show_img(d.img_64, title=labels[d.label], ax=axs[i][j])
df_x = tf.convert_to_tensor(df_train.img_64, dtype=tf.float32)
df_y = tf.one_hot(df_train.label, depth=len(labels), dtype=tf.float32)
# Note: used memory AFTER defining the Tenors based on the DataFrame.
!free -h
# Checkout the shape of these rather large tensors.
df_x.shape, df_x.dtype, df_y.shape, df_y.dtype
# Copied code from section "Linear model" above.
linear_model = tf.keras.Sequential()
linear_model.add(tf.keras.layers.Flatten(input_shape=(64 * 64,)))
linear_model.add(tf.keras.layers.Dense(len(labels), activation='softmax'))
# "adam, categorical_crossentropy, accuracy" and other string constants can be
# found at https://keras.io.
linear_model.compile(
optimizer='adam',
loss='categorical_crossentropy',
metrics=['accuracy', tf.keras.metrics.categorical_accuracy])
linear_model.summary()
# How much of a speedup do you see because the data is already in memory?
# How would this compare to the convolutional model?
linear_model.fit(df_x, df_y, epochs=1, batch_size=100)
Explanation: Data from DataFrame
For comparison, this section shows how you would load data from a pandas.DataFrame and then use Keras for training. Note that this approach does not scale well and can only be used for quite small datasets.
End of explanation
%tensorflow_version 2.x
import json, os
import numpy as np
from matplotlib import pyplot as plt
import tensorflow as tf
# Disable duplicate logging output in TF.
logger = tf.get_logger()
logger.propagate = False
# This will fail if no TPU is connected...
tpu = tf.distribute.cluster_resolver.TPUClusterResolver()
# Set up distribution strategy.
tf.config.experimental_connect_to_cluster(tpu)
tf.tpu.experimental.initialize_tpu_system(tpu);
strategy = tf.distribute.experimental.TPUStrategy(tpu)
# Tested with TensorFlow 2.1.0
print('\n\nTF version={} TPUs={} accelerators={}'.format(
tf.__version__, tpu.cluster_spec().as_dict()['worker'],
strategy.num_replicas_in_sync))
Explanation: TPU Support
For using TF with a TPU we'll need to make some adjustments. Generally, please note that several TF TPU features are experimental and might not work as smooth as it does with a CPU or GPU.
Attention: Please make sure to switch the runtime to TPU for this part. You can do so via: "Runtime > Change runtime type > Hardware Accelerator" in Colab. As this might create a new environment this section can be executed isolated from anything above.
End of explanation
from google.colab import auth
auth.authenticate_user()
# Browse datasets:
# https://console.cloud.google.com/storage/browser/amld-datasets
# - 50k training examples, including pickled DataFrame.
data_path = 'gs://amld-datasets/zoo_img_small'
# - 1M training examples, without pickled DataFrame.
# data_path = 'gs://amld-datasets/zoo_img'
# - 4.1M training examples, without pickled DataFrame.
# data_path = 'gs://amld-datasets/animals_img'
# - 29M training examples, without pickled DataFrame.
# data_path = 'gs://amld-datasets/all_img'
#@markdown **Copied and adjusted data definition code from above**
#@markdown
#@markdown Note: You can double-click this cell to see its code.
#@markdown
#@markdown The changes have been highlighted with `!` in the contained code
#@markdown (things like the `batch_size` and added `drop_remainder=True`).
#@markdown
#@markdown Feel free to just **click "execute"** and ignore the details for now.
labels = [label.strip() for label
in tf.io.gfile.GFile('{}/labels.txt'.format(data_path))]
print('All labels in the dataset:', ' '.join(labels))
counts = json.load(tf.io.gfile.GFile('{}/counts.json'.format(data_path)))
print('Splits sizes:', counts)
# This dictionary specifies what "features" we want to extract from the
# tf.train.Example protos (i.e. what they look like on disk). We only
# need the image data "img_64" and the "label". Both features are tensors
# with a fixed length.
# You need to specify the correct "shape" and "dtype" parameters for
# these features.
feature_spec = {
# Single label per example => shape=[1] (we could also use shape=() and
# then do a transformation in the input_fn).
'label': tf.io.FixedLenFeature(shape=[1], dtype=tf.int64),
# The bytes_list data is parsed into tf.string.
'img_64': tf.io.FixedLenFeature(shape=[64, 64], dtype=tf.int64),
}
def parse_example(serialized_example):
# Convert string to tf.train.Example and then extract features/label.
features = tf.io.parse_single_example(serialized_example, feature_spec)
# Important step: remove "label" from features!
# Otherwise our classifier would simply learn to predict
# label=features['label'].
label = features['label']
label = tf.one_hot(tf.squeeze(label), len(labels))
features['img_64'] = tf.cast(features['img_64'], tf.float32)
return features['img_64'], label
# Adjust the batch size to the given hardware (#accelerators).
batch_size = 64 * strategy.num_replicas_in_sync
# !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
steps_per_epoch = counts['train'] // batch_size
eval_steps_per_epoch = counts['eval'] // batch_size
# Create datasets from TFRecord files.
train_ds = tf.data.TFRecordDataset(tf.io.gfile.glob(
'{}/train-*'.format(data_path)))
train_ds = train_ds.map(parse_example)
train_ds = train_ds.batch(batch_size, drop_remainder=True).repeat()
# !!!!!!!!!!!!!!!!!!!
eval_ds = tf.data.TFRecordDataset(tf.io.gfile.glob(
'{}/eval-*'.format(data_path)))
eval_ds = eval_ds.map(parse_example)
eval_ds = eval_ds.batch(batch_size, drop_remainder=True)
# !!!!!!!!!!!!!!!!!!!
# Read a single example and display shapes.
for img_feature, label in train_ds:
break
print('img_feature.shape (batch_size, image_height, image_width) =',
img_feature.shape)
print('label.shape (batch_size, number_of_labels) =', label.shape)
# Model definition code needs to be wrapped in scope.
with strategy.scope():
linear_model = tf.keras.Sequential()
linear_model.add(tf.keras.layers.Flatten(input_shape=(64, 64,)))
linear_model.add(tf.keras.layers.Dense(len(labels), activation='softmax'))
linear_model.compile(
optimizer='adam',
loss='categorical_crossentropy',
metrics=['accuracy', tf.keras.metrics.categorical_accuracy])
linear_model.summary()
linear_model.fit(train_ds,
validation_data=eval_ds,
steps_per_epoch=steps_per_epoch,
validation_steps=eval_steps_per_epoch,
epochs=1,
verbose=True)
# Model definition code needs to be wrapped in scope.
with strategy.scope():
conv_model = tf.keras.Sequential([
tf.keras.layers.Reshape(target_shape=(64, 64, 1), input_shape=(64, 64)),
tf.keras.layers.Conv2D(filters=32,
kernel_size=(10, 10),
padding='same',
activation='relu'),
tf.keras.layers.ZeroPadding2D((1,1)),
tf.keras.layers.Conv2D(filters=32,
kernel_size=(10, 10),
padding='same',
activation='relu'),
tf.keras.layers.MaxPooling2D(pool_size=(4, 4), strides=(4,4)),
tf.keras.layers.Conv2D(filters=64,
kernel_size=(5, 5),
padding='same',
activation='relu'),
tf.keras.layers.MaxPooling2D(pool_size=(4, 4), strides=(4,4)),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(256, activation='relu'),
tf.keras.layers.Dropout(0.3),
tf.keras.layers.Dense(len(labels), activation='softmax'),
])
conv_model.compile(
optimizer='adam',
loss='categorical_crossentropy',
metrics=['accuracy'])
conv_model.summary()
conv_model.fit(train_ds,
validation_data=eval_ds,
steps_per_epoch=steps_per_epoch,
validation_steps=eval_steps_per_epoch,
epochs=3,
verbose=True)
conv_model.evaluate(eval_ds, steps=eval_steps_per_epoch)
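# Added sketch (bucket name is a placeholder, not part of the original notebook): as noted
# below, TPU runs can only read and write Cloud Storage paths, so saving the trained model
# should target a gs:// bucket you own rather than the local filesystem, e.g.
# conv_model.save('gs://<your-bucket>/amld/conv_model')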
!nvidia-smi
Explanation: Attention: TPUs require all files (input and models) to be stored in cloud storage buckets (gs://bucket-name/...). If you plan to use TPUs please choose the data_path below accordingly. Otherwise, you might run into File system scheme '[local]' not implemented errors.
End of explanation |
15,150 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Display Exercise 1
Imports
Put any needed imports needed to display rich output the following cell
Step1: Basic rich display
Find a Physics related image on the internet and display it in this notebook using the Image object.
Load it using the url argument to Image (don't upload the image to this server).
Make sure the set the embed flag so the image is embedded in the notebook data.
Set the width and height to 600px.
Step3: Use the HTML object to display HTML in the notebook that reproduces the table of Quarks on this page. This will require you to learn about how to create HTML tables and then pass that to the HTML object for display. Don't worry about styling and formatting the table, but you should use LaTeX where appropriate. | Python Code:
from IPython.display import display
from IPython.display import Image
from IPython.display import HTML
assert True # leave this to grade the import statements
Explanation: Display Exercise 1
Imports
Put any needed imports needed to display rich output the following cell:
End of explanation
Image(url='https://english.tau.ac.il/sites/default/files/styles/reaserch_main_image_580_x_330/public/sackler%20physics%20cropped.jpg?itok=oanzfnK-')
assert True # leave this to grade the image display
Explanation: Basic rich display
Find a Physics related image on the internet and display it in this notebook using the Image object.
Load it using the url argument to Image (don't upload the image to this server).
Make sure the set the embed flag so the image is embedded in the notebook data.
Set the width and height to 600px.
End of explanation
s = <table>
<tr>
<th>Header 1</th>
<th>Header 2</th>
</tr>
<tr>
<td>row 1, cell 1</td>
<td>row 1, cell 2</td>
</tr>
<tr>
<td>row 2, cell 1</td>
<td>row 2, cell 2</td>
</tr>
</table>
h = HTML(s)
display(h)
assert True # leave this here to grade the quark table
Explanation: Use the HTML object to display HTML in the notebook that reproduces the table of Quarks on this page. This will require you to learn about how to create HTML tables and then pass that to the HTML object for display. Don't worry about styling and formatting the table, but you should use LaTeX where appropriate.
End of explanation |
15,151 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This notebook is intended to demonstrate the basic features of the Python API for constructing input files and running OpenMC. In it, we will show how to create a basic reflective pin-cell model that is equivalent to modeling an infinite array of fuel pins. If you have never used OpenMC, this can serve as a good starting point to learn the Python API. We highly recommend having a copy of the Python API reference documentation open in another browser tab that you can refer to.
Step1: Defining Materials
Materials in OpenMC are defined as a set of nuclides with specified atom/weight fractions. To begin, we will create a material by making an instance of the Material class. In OpenMC, many objects, including materials, are identified by a "unique ID" that is simply just a positive integer. These IDs are used when exporting XML files that the solver reads in. They also appear in the output and can be used for identification. Since an integer ID is not very useful by itself, you can also give a material a name as well.
Step2: On the XML side, you have no choice but to supply an ID. However, in the Python API, if you don't give an ID, one will be automatically generated for you
Step3: We see that an ID of 2 was automatically assigned. Let's now move on to adding nuclides to our uo2 material. The Material object has a method add_nuclide() whose first argument is the name of the nuclide and second argument is the atom or weight fraction.
Step4: We see that by default it assumes we want an atom fraction.
Step5: Now we need to assign a total density to the material. We'll use the set_density for this.
Step6: You may sometimes be given a material specification where all the nuclide densities are in units of atom/b-cm. In this case, you just want the density to be the sum of the constituents. In that case, you can simply run mat.set_density('sum').
With UO2 finished, let's now create materials for the clad and coolant. Note the use of add_element() for zirconium.
Step7: An astute observer might now point out that this water material we just created will only use free-atom cross sections. We need to tell it to use an $S(\alpha,\beta)$ table so that the bound atom cross section is used at thermal energies. To do this, there's an add_s_alpha_beta() method. Note the use of the GND-style name "c_H_in_H2O".
Step8: When we go to run the transport solver in OpenMC, it is going to look for a materials.xml file. Thus far, we have only created objects in memory. To actually create a materials.xml file, we need to instantiate a Materials collection and export it to XML.
Step9: Note that Materials is actually a subclass of Python's built-in list, so we can use methods like append(), insert(), pop(), etc.
Step10: Finally, we can create the XML file with the export_to_xml() method. In a Jupyter notebook, we can run a shell command by putting ! before it, so in this case we are going to display the materials.xml file that we created.
Step11: Element Expansion
Did you notice something really cool that happened to our Zr element? OpenMC automatically turned it into a list of nuclides when it exported it! The way this feature works is as follows
Step12: We see that now O16 and O17 were automatically added. O18 is missing because our cross sections file (which is based on ENDF/B-VII.1) doesn't have O18. If OpenMC didn't know about the cross sections file, it would have assumed that all isotopes exist.
The cross_sections.xml file
The cross_sections.xml tells OpenMC where it can find nuclide cross sections and $S(\alpha,\beta)$ tables. It serves the same purpose as MCNP's xsdir file and Serpent's xsdata file. As we mentioned, this can be set either by the OPENMC_CROSS_SECTIONS environment variable or the Materials.cross_sections attribute.
Let's have a look at what's inside this file
Step13: Enrichment
Note that the add_element() method has a special argument enrichment that can be used for Uranium. For example, if we know that we want to create 3% enriched UO2, the following would work
Step14: Mixtures
In OpenMC it is also possible to define materials by mixing existing materials. For example, if we wanted to create MOX fuel out of a mixture of UO2 (97 wt%) and PuO2 (3 wt%) we could do the following
Step15: The 'wo' argument in the mix_materials() method specifies that the fractions are weight fractions. Materials can also be mixed by atomic and volume fractions with 'ao' and 'vo', respectively. For 'ao' and 'wo' the fractions must sum to one. For 'vo', if fractions do not sum to one, the remaining fraction is set as void.
Defining Geometry
At this point, we have three materials defined, exported to XML, and ready to be used in our model. To finish our model, we need to define the geometric arrangement of materials. OpenMC represents physical volumes using constructive solid geometry (CSG), also known as combinatorial geometry. The object that allows us to assign a material to a region of space is called a Cell (same concept in MCNP, for those familiar). In order to define a region that we can assign to a cell, we must first define surfaces which bound the region. A surface is a locus of zeros of a function of Cartesian coordinates $x$, $y$, and $z$, e.g.
A plane perpendicular to the x axis
Step16: Note that by default the sphere is centered at the origin so we didn't have to supply x0, y0, or z0 arguments. Strictly speaking, we could have omitted R as well since it defaults to one. To get the negative or positive half-space, we simply need to apply the - or + unary operators, respectively.
(NOTE
Step17: Now let's see if inside_sphere actually contains points inside the sphere
Step18: Everything works as expected! Now that we understand how to create half-spaces, we can create more complex volumes by combining half-spaces using Boolean operators
Step19: For many regions, OpenMC can automatically determine a bounding box. To get the bounding box, we use the bounding_box property of a region, which returns a tuple of the lower-left and upper-right Cartesian coordinates for the bounding box
Step20: Now that we see how to create volumes, we can use them to create a cell.
Step21: By default, the cell is not filled by any material (void). In order to assign a material, we set the fill property of a Cell.
Step22: Universes and in-line plotting
A collection of cells is known as a universe (again, this will be familiar to MCNP/Serpent users) and can be used as a repeatable unit when creating a model. Although we don't need it yet, the benefit of creating a universe is that we can visualize our geometry while we're creating it.
Step23: The Universe object has a plot method that will display our the universe as current constructed
Step24: By default, the plot will appear in the $x$-$y$ plane. We can change that with the basis argument.
Step25: If we have particular fondness for, say, fuchsia, we can tell the plot() method to make our cell that color.
Step26: Pin cell geometry
We now have enough knowledge to create our pin-cell. We need three surfaces to define the fuel and clad
Step27: With the surfaces created, we can now take advantage of the built-in operators on surfaces to create regions for the fuel, the gap, and the clad
Step28: Now we can create corresponding cells that assign materials to these regions. As with materials, cells have unique IDs that are assigned either manually or automatically. Note that the gap cell doesn't have any material assigned (it is void by default).
Step29: Finally, we need to handle the coolant outside of our fuel pin. To do this, we create x- and y-planes that bound the geometry.
Step30: The water region is going to be everything outside of the clad outer radius and within the box formed as the intersection of four half-spaces.
Step31: OpenMC also includes a factory function that generates a rectangular prism that could have made our lives easier.
Step32: Pay attention here -- the object that was returned is NOT a surface. It is actually the intersection of four surface half-spaces, just like we created manually before. Thus, we don't need to apply the unary operator (-box). Instead, we can directly combine it with +clad_or.
Step33: The final step is to assign the cells we created to a universe and tell OpenMC that this universe is the "root" universe in our geometry. The Geometry is the final object that is actually exported to XML.
Step34: Starting source and settings
The Python API has a module openmc.stats with various univariate and multivariate probability distributions. We can use these distributions to create a starting source using the openmc.Source object.
Step35: Now let's create a Settings object and give it the source we created along with specifying how many batches and particles we want to run.
Step36: User-defined tallies
We actually have all the required files needed to run a simulation. Before we do that though, let's give a quick example of how to create tallies. We will show how one would tally the total, fission, absorption, and (n,$\gamma$) reaction rates for $^{235}$U in the cell containing fuel. Recall that filters allow us to specify where in phase-space we want events to be tallied and scores tell us what we want to tally
Step37: The what is the total, fission, absorption, and (n,$\gamma$) reaction rates in $^{235}$U. By default, if we only specify what reactions, it will gives us tallies over all nuclides. We can use the nuclides attribute to name specific nuclides we're interested in.
Step38: Similar to the other files, we need to create a Tallies collection and export it to XML.
Step39: Running OpenMC
Running OpenMC from Python can be done using the openmc.run() function. This function allows you to set the number of MPI processes and OpenMP threads, if need be.
Step40: Great! OpenMC already told us our k-effective. It also spit out a file called tallies.out that shows our tallies. This is a very basic method to look at tally data; for more sophisticated methods, see other example notebooks.
Step41: Geometry plotting
We saw before that we could call the Universe.plot() method to show a universe while we were creating our geometry. There is also a built-in plotter in the codebase that is much faster than the Python plotter and has more options. The interface looks somewhat similar to the Universe.plot() method. Instead though, we create Plot instances, assign them to a Plots collection, export it to XML, and then run OpenMC in geometry plotting mode. As an example, let's specify that we want the plot to be colored by material (rather than by cell) and we assign yellow to fuel and blue to water.
Step42: With our plot created, we need to add it to a Plots collection which can be exported to XML.
Step43: Now we can run OpenMC in plotting mode by calling the plot_geometry() function. Under the hood this is calling openmc --plot.
Step44: OpenMC writes out a peculiar image with a .ppm extension. If you have ImageMagick installed, this can be converted into a more normal .png file.
Step45: We can use functionality from IPython to display the image inline in our notebook
Step46: That was a little bit cumbersome. Thankfully, OpenMC provides us with a method on the Plot class that does all that "boilerplate" work. | Python Code:
%matplotlib inline
import openmc
Explanation: This notebook is intended to demonstrate the basic features of the Python API for constructing input files and running OpenMC. In it, we will show how to create a basic reflective pin-cell model that is equivalent to modeling an infinite array of fuel pins. If you have never used OpenMC, this can serve as a good starting point to learn the Python API. We highly recommend having a copy of the Python API reference documentation open in another browser tab that you can refer to.
End of explanation
uo2 = openmc.Material(1, "uo2")
print(uo2)
Explanation: Defining Materials
Materials in OpenMC are defined as a set of nuclides with specified atom/weight fractions. To begin, we will create a material by making an instance of the Material class. In OpenMC, many objects, including materials, are identified by a "unique ID" that is simply just a positive integer. These IDs are used when exporting XML files that the solver reads in. They also appear in the output and can be used for identification. Since an integer ID is not very useful by itself, you can also give a material a name as well.
End of explanation
mat = openmc.Material()
print(mat)
Explanation: On the XML side, you have no choice but to supply an ID. However, in the Python API, if you don't give an ID, one will be automatically generated for you:
End of explanation
help(uo2.add_nuclide)
Explanation: We see that an ID of 2 was automatically assigned. Let's now move on to adding nuclides to our uo2 material. The Material object has a method add_nuclide() whose first argument is the name of the nuclide and second argument is the atom or weight fraction.
End of explanation
# Add nuclides to uo2
uo2.add_nuclide('U235', 0.03)
uo2.add_nuclide('U238', 0.97)
uo2.add_nuclide('O16', 2.0)
Explanation: We see that by default it assumes we want an atom fraction.
End of explanation
uo2.set_density('g/cm3', 10.0)
Explanation: Now we need to assign a total density to the material. We'll use the set_density for this.
End of explanation
zirconium = openmc.Material(2, "zirconium")
zirconium.add_element('Zr', 1.0)
zirconium.set_density('g/cm3', 6.6)
water = openmc.Material(3, "h2o")
water.add_nuclide('H1', 2.0)
water.add_nuclide('O16', 1.0)
water.set_density('g/cm3', 1.0)
Explanation: You may sometimes be given a material specification where all the nuclide densities are in units of atom/b-cm. In this case, you just want the density to be the sum of the constituents. In that case, you can simply run mat.set_density('sum').
With UO2 finished, let's now create materials for the clad and coolant. Note the use of add_element() for zirconium.
End of explanation
water.add_s_alpha_beta('c_H_in_H2O')
Explanation: An astute observer might now point out that this water material we just created will only use free-atom cross sections. We need to tell it to use an $S(\alpha,\beta)$ table so that the bound atom cross section is used at thermal energies. To do this, there's an add_s_alpha_beta() method. Note the use of the GND-style name "c_H_in_H2O".
End of explanation
mats = openmc.Materials([uo2, zirconium, water])
Explanation: When we go to run the transport solver in OpenMC, it is going to look for a materials.xml file. Thus far, we have only created objects in memory. To actually create a materials.xml file, we need to instantiate a Materials collection and export it to XML.
End of explanation
mats = openmc.Materials()
mats.append(uo2)
mats += [zirconium, water]
isinstance(mats, list)
Explanation: Note that Materials is actually a subclass of Python's built-in list, so we can use methods like append(), insert(), pop(), etc.
End of explanation
mats.export_to_xml()
!cat materials.xml
Explanation: Finally, we can create the XML file with the export_to_xml() method. In a Jupyter notebook, we can run a shell command by putting ! before it, so in this case we are going to display the materials.xml file that we created.
End of explanation
water.remove_nuclide('O16')
water.add_element('O', 1.0)
mats.export_to_xml()
!cat materials.xml
Explanation: Element Expansion
Did you notice something really cool that happened to our Zr element? OpenMC automatically turned it into a list of nuclides when it exported it! The way this feature works is as follows:
First, it checks whether Materials.cross_sections has been set, indicating the path to a cross_sections.xml file.
If Materials.cross_sections isn't set, it looks for the OPENMC_CROSS_SECTIONS environment variable.
If either of these are found, it scans the file to see what nuclides are actually available and will expand elements accordingly.
Let's see what happens if we change O16 in water to elemental O.
End of explanation
!cat $OPENMC_CROSS_SECTIONS | head -n 10
print(' ...')
!cat $OPENMC_CROSS_SECTIONS | tail -n 10
Explanation: We see that now O16 and O17 were automatically added. O18 is missing because our cross sections file (which is based on ENDF/B-VII.1) doesn't have O18. If OpenMC didn't know about the cross sections file, it would have assumed that all isotopes exist.
The cross_sections.xml file
The cross_sections.xml tells OpenMC where it can find nuclide cross sections and $S(\alpha,\beta)$ tables. It serves the same purpose as MCNP's xsdir file and Serpent's xsdata file. As we mentioned, this can be set either by the OPENMC_CROSS_SECTIONS environment variable or the Materials.cross_sections attribute.
Let's have a look at what's inside this file:
End of explanation
uo2_three = openmc.Material()
uo2_three.add_element('U', 1.0, enrichment=3.0)
uo2_three.add_element('O', 2.0)
uo2_three.set_density('g/cc', 10.0)
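# Note (added): the enrichment argument above is interpreted as weight percent U235.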
Explanation: Enrichment
Note that the add_element() method has a special argument enrichment that can be used for Uranium. For example, if we know that we want to create 3% enriched UO2, the following would work:
End of explanation
# Create PuO2 material
puo2 = openmc.Material()
puo2.add_nuclide('Pu239', 0.94)
puo2.add_nuclide('Pu240', 0.06)
puo2.add_nuclide('O16', 2.0)
puo2.set_density('g/cm3', 11.5)
# Create the mixture
mox = openmc.Material.mix_materials([uo2, puo2], [0.97, 0.03], 'wo')
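# Added sketch (fractions are illustrative only): mixing by volume fraction instead; with
# 'vo', any fraction left over (here 5%) is treated as void.
# mox_vo = openmc.Material.mix_materials([uo2, puo2], [0.90, 0.05], 'vo')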
Explanation: Mixtures
In OpenMC it is also possible to define materials by mixing existing materials. For example, if we wanted to create MOX fuel out of a mixture of UO2 (97 wt%) and PuO2 (3 wt%) we could do the following:
End of explanation
sph = openmc.Sphere(r=1.0)
Explanation: The 'wo' argument in the mix_materials() method specifies that the fractions are weight fractions. Materials can also be mixed by atomic and volume fractions with 'ao' and 'vo', respectively. For 'ao' and 'wo' the fractions must sum to one. For 'vo', if fractions do not sum to one, the remaining fraction is set as void.
Defining Geometry
At this point, we have three materials defined, exported to XML, and ready to be used in our model. To finish our model, we need to define the geometric arrangement of materials. OpenMC represents physical volumes using constructive solid geometry (CSG), also known as combinatorial geometry. The object that allows us to assign a material to a region of space is called a Cell (same concept in MCNP, for those familiar). In order to define a region that we can assign to a cell, we must first define surfaces which bound the region. A surface is a locus of zeros of a function of Cartesian coordinates $x$, $y$, and $z$, e.g.
A plane perpendicular to the x axis: $x - x_0 = 0$
A cylinder parallel to the z axis: $(x - x_0)^2 + (y - y_0)^2 - R^2 = 0$
A sphere: $(x - x_0)^2 + (y - y_0)^2 + (z - z_0)^2 - R^2 = 0$
Between those three classes of surfaces (planes, cylinders, spheres), one can construct a wide variety of models. It is also possible to define cones and general second-order surfaces (tori are not currently supported).
Note that defining a surface is not sufficient to specify a volume -- in order to define an actual volume, one must reference the half-space of a surface. A surface half-space is the region whose points satisfy a positive or negative inequality of the surface equation. For example, for a sphere of radius one centered at the origin, the surface equation is $f(x,y,z) = x^2 + y^2 + z^2 - 1 = 0$. Thus, we say that the negative half-space of the sphere, is defined as the collection of points satisfying $f(x,y,z) < 0$, which one can reason is the inside of the sphere. Conversely, the positive half-space of the sphere would correspond to all points outside of the sphere.
Let's go ahead and create a sphere and confirm that what we've told you is true.
End of explanation
inside_sphere = -sph
outside_sphere = +sph
Explanation: Note that by default the sphere is centered at the origin so we didn't have to supply x0, y0, or z0 arguments. Strictly speaking, we could have omitted R as well since it defaults to one. To get the negative or positive half-space, we simply need to apply the - or + unary operators, respectively.
(NOTE: Those unary operators are defined by special methods: __pos__ and __neg__ in this case).
End of explanation
print((0,0,0) in inside_sphere, (0,0,2) in inside_sphere)
print((0,0,0) in outside_sphere, (0,0,2) in outside_sphere)
Explanation: Now let's see if inside_sphere actually contains points inside the sphere:
End of explanation
z_plane = openmc.ZPlane(z0=0)
northern_hemisphere = -sph & +z_plane
Explanation: Everything works as expected! Now that we understand how to create half-spaces, we can create more complex volumes by combining half-spaces using Boolean operators: & (intersection), | (union), and ~ (complement). For example, let's say we want to define a region that is the top part of the sphere (all points inside the sphere that have $z > 0$.
End of explanation
northern_hemisphere.bounding_box
Explanation: For many regions, OpenMC can automatically determine a bounding box. To get the bounding box, we use the bounding_box property of a region, which returns a tuple of the lower-left and upper-right Cartesian coordinates for the bounding box:
End of explanation
cell = openmc.Cell()
cell.region = northern_hemisphere
# or...
cell = openmc.Cell(region=northern_hemisphere)
Explanation: Now that we see how to create volumes, we can use them to create a cell.
End of explanation
cell.fill = water
Explanation: By default, the cell is not filled by any material (void). In order to assign a material, we set the fill property of a Cell.
End of explanation
universe = openmc.Universe()
universe.add_cell(cell)
# this also works
universe = openmc.Universe(cells=[cell])
Explanation: Universes and in-line plotting
A collection of cells is known as a universe (again, this will be familiar to MCNP/Serpent users) and can be used as a repeatable unit when creating a model. Although we don't need it yet, the benefit of creating a universe is that we can visualize our geometry while we're creating it.
End of explanation
universe.plot(width=(2.0, 2.0))
Explanation: The Universe object has a plot method that will display the universe as currently constructed:
End of explanation
universe.plot(width=(2.0, 2.0), basis='xz')
Explanation: By default, the plot will appear in the $x$-$y$ plane. We can change that with the basis argument.
End of explanation
universe.plot(width=(2.0, 2.0), basis='xz',
colors={cell: 'fuchsia'})
Explanation: If we have particular fondness for, say, fuchsia, we can tell the plot() method to make our cell that color.
End of explanation
fuel_or = openmc.ZCylinder(r=0.39)
clad_ir = openmc.ZCylinder(r=0.40)
clad_or = openmc.ZCylinder(r=0.46)
Explanation: Pin cell geometry
We now have enough knowledge to create our pin-cell. We need three surfaces to define the fuel and clad:
The outer surface of the fuel -- a cylinder parallel to the z axis
The inner surface of the clad -- same as above
The outer surface of the clad -- same as above
These three surfaces will all be instances of openmc.ZCylinder, each with a different radius according to the specification.
End of explanation
fuel_region = -fuel_or
gap_region = +fuel_or & -clad_ir
clad_region = +clad_ir & -clad_or
Explanation: With the surfaces created, we can now take advantage of the built-in operators on surfaces to create regions for the fuel, the gap, and the clad:
End of explanation
fuel = openmc.Cell(1, 'fuel')
fuel.fill = uo2
fuel.region = fuel_region
gap = openmc.Cell(2, 'air gap')
gap.region = gap_region
clad = openmc.Cell(3, 'clad')
clad.fill = zirconium
clad.region = clad_region
Explanation: Now we can create corresponding cells that assign materials to these regions. As with materials, cells have unique IDs that are assigned either manually or automatically. Note that the gap cell doesn't have any material assigned (it is void by default).
End of explanation
pitch = 1.26
left = openmc.XPlane(x0=-pitch/2, boundary_type='reflective')
right = openmc.XPlane(x0=pitch/2, boundary_type='reflective')
bottom = openmc.YPlane(y0=-pitch/2, boundary_type='reflective')
top = openmc.YPlane(y0=pitch/2, boundary_type='reflective')
Explanation: Finally, we need to handle the coolant outside of our fuel pin. To do this, we create x- and y-planes that bound the geometry.
End of explanation
water_region = +left & -right & +bottom & -top & +clad_or
moderator = openmc.Cell(4, 'moderator')
moderator.fill = water
moderator.region = water_region
Explanation: The water region is going to be everything outside of the clad outer radius and within the box formed as the intersection of four half-spaces.
End of explanation
box = openmc.rectangular_prism(width=pitch, height=pitch,
boundary_type='reflective')
type(box)
Explanation: OpenMC also includes a factory function that generates a rectangular prism that could have made our lives easier.
End of explanation
water_region = box & +clad_or
Explanation: Pay attention here -- the object that was returned is NOT a surface. It is actually the intersection of four surface half-spaces, just like we created manually before. Thus, we don't need to apply the unary operator (-box). Instead, we can directly combine it with +clad_or.
End of explanation
root = openmc.Universe(cells=(fuel, gap, clad, moderator))
geom = openmc.Geometry()
geom.root_universe = root
# or...
geom = openmc.Geometry(root)
geom.export_to_xml()
!cat geometry.xml
Explanation: The final step is to assign the cells we created to a universe and tell OpenMC that this universe is the "root" universe in our geometry. The Geometry is the final object that is actually exported to XML.
End of explanation
point = openmc.stats.Point((0, 0, 0))
src = openmc.Source(space=point)
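# Added sketch (assumes the default Watt spectrum parameters are acceptable): other
# distributions from openmc.stats can be attached to the same source, e.g. an energy
# spectrum in addition to the spatial distribution.
# src = openmc.Source(space=point, energy=openmc.stats.Watt())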
Explanation: Starting source and settings
The Python API has a module openmc.stats with various univariate and multivariate probability distributions. We can use these distributions to create a starting source using the openmc.Source object.
End of explanation
settings = openmc.Settings()
settings.source = src
settings.batches = 100
settings.inactive = 10
settings.particles = 1000
settings.export_to_xml()
!cat settings.xml
Explanation: Now let's create a Settings object and give it the source we created along with specifying how many batches and particles we want to run.
End of explanation
cell_filter = openmc.CellFilter(fuel)
t = openmc.Tally(1)
t.filters = [cell_filter]
Explanation: User-defined tallies
We actually have all the required files needed to run a simulation. Before we do that though, let's give a quick example of how to create tallies. We will show how one would tally the total, fission, absorption, and (n,$\gamma$) reaction rates for $^{235}$U in the cell containing fuel. Recall that filters allow us to specify where in phase-space we want events to be tallied and scores tell us what we want to tally:
$$X = \underbrace{\int d\mathbf{r} \int d\mathbf{\Omega} \int dE}{\text{filters}} \; \underbrace{f(\mathbf{r},\mathbf{\Omega},E)}{\text{scores}} \psi (\mathbf{r},\mathbf{\Omega},E)$$
In this case, the where is "the fuel cell". So, we will create a cell filter specifying the fuel cell.
End of explanation
t.nuclides = ['U235']
t.scores = ['total', 'fission', 'absorption', '(n,gamma)']
Explanation: The what is the total, fission, absorption, and (n,$\gamma$) reaction rates in $^{235}$U. By default, if we only specify what reactions, it will gives us tallies over all nuclides. We can use the nuclides attribute to name specific nuclides we're interested in.
End of explanation
tallies = openmc.Tallies([t])
tallies.export_to_xml()
!cat tallies.xml
Explanation: Similar to the other files, we need to create a Tallies collection and export it to XML.
End of explanation
openmc.run()
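# Added sketch (process/thread counts are arbitrary examples): the run can also be
# parallelized explicitly.
# openmc.run(threads=4, mpi_args=['mpiexec', '-n', '2'])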
Explanation: Running OpenMC
Running OpenMC from Python can be done using the openmc.run() function. This function allows you to set the number of MPI processes and OpenMP threads, if need be.
End of explanation
!cat tallies.out
Explanation: Great! OpenMC already told us our k-effective. It also spit out a file called tallies.out that shows our tallies. This is a very basic method to look at tally data; for more sophisticated methods, see other example notebooks.
End of explanation
p = openmc.Plot()
p.filename = 'pinplot'
p.width = (pitch, pitch)
p.pixels = (200, 200)
p.color_by = 'material'
p.colors = {uo2: 'yellow', water: 'blue'}
Explanation: Geometry plotting
We saw before that we could call the Universe.plot() method to show a universe while we were creating our geometry. There is also a built-in plotter in the codebase that is much faster than the Python plotter and has more options. The interface looks somewhat similar to the Universe.plot() method. Instead though, we create Plot instances, assign them to a Plots collection, export it to XML, and then run OpenMC in geometry plotting mode. As an example, let's specify that we want the plot to be colored by material (rather than by cell) and we assign yellow to fuel and blue to water.
End of explanation
plots = openmc.Plots([p])
plots.export_to_xml()
!cat plots.xml
Explanation: With our plot created, we need to add it to a Plots collection which can be exported to XML.
End of explanation
openmc.plot_geometry()
Explanation: Now we can run OpenMC in plotting mode by calling the plot_geometry() function. Under the hood this is calling openmc --plot.
End of explanation
!convert pinplot.ppm pinplot.png
Explanation: OpenMC writes out a peculiar image with a .ppm extension. If you have ImageMagick installed, this can be converted into a more normal .png file.
End of explanation
from IPython.display import Image
Image("pinplot.png")
Explanation: We can use functionality from IPython to display the image inline in our notebook:
End of explanation
p.to_ipython_image()
Explanation: That was a little bit cumbersome. Thankfully, OpenMC provides us with a method on the Plot class that does all that "boilerplate" work.
End of explanation |
15,152 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Computing a covariance matrix
Many methods in MNE, including source estimation and some classification
algorithms, require covariance estimations from the recordings.
In this tutorial we cover the basics of sensor covariance computations and
construct a noise covariance matrix that can be used when computing the
minimum-norm inverse solution. For more information, see BABDEEEB.
Step1: Source estimation method such as MNE require a noise estimations from the
recordings. In this tutorial we cover the basics of noise covariance and
construct a noise covariance matrix that can be used when computing the
inverse solution. For more information, see BABDEEEB.
Step2: The definition of noise depends on the paradigm. In MEG it is quite common
to use empty room measurements for the estimation of sensor noise. However if
you are dealing with evoked responses, you might want to also consider
resting state brain activity as noise.
First we compute the noise using empty room recording. Note that you can also
use only a part of the recording with tmin and tmax arguments. That can be
useful if you use resting state as a noise baseline. Here we use the whole
empty room recording to compute the noise covariance (tmax=None is the same
as the end of the recording, see
Step3: Now that you have the covariance matrix in an MNE-Python object you can
save it to a file with
Step4: Note that this method also attenuates any activity in your
source estimates that resemble the baseline, if you like it or not.
Step5: Plot the covariance matrices
Try setting proj to False to see the effect. Notice that the projectors in
epochs are already applied, so proj parameter has no effect.
Step6: How should I regularize the covariance matrix?
The estimated covariance can be numerically
unstable and tends to induce correlations between estimated source amplitudes
and the number of samples available. The MNE manual therefore suggests to
regularize the noise covariance matrix (see
cov_regularization), especially if only few samples are available.
Unfortunately it is not easy to tell the effective number of samples, hence,
to choose the appropriate regularization.
In MNE-Python, regularization is done using advanced regularization methods
described in [1]_. For this the 'auto' option can be used. With this
option cross-validation will be used to learn the optimal regularization
Step7: This procedure evaluates the noise covariance quantitatively by how well it
whitens the data using the
negative log-likelihood of unseen data. The final result can also be visually
inspected.
Under the assumption that the baseline does not contain a systematic signal
(time-locked to the event of interest), the whitened baseline signal should
follow a multivariate Gaussian distribution, i.e.,
whitened baseline signals should be between -1.96 and 1.96 at a given time
sample.
Based on the same reasoning, the expected value for the global field power
(GFP) is 1 (calculation of the GFP should take into account the true degrees
of freedom, e.g. ddof=3 with 2 active SSP vectors)
Step8: This plot displays both, the whitened evoked signals for each channels and
the whitened GFP. The numbers in the GFP panel represent the estimated rank
of the data, which amounts to the effective degrees of freedom by which the
squared sum across sensors is divided when computing the whitened GFP.
The whitened GFP also helps detecting spurious late evoked components which
can be the consequence of over- or under-regularization.
Note that if data have been processed using signal space separation
(SSS) [2],
gradiometers and magnetometers will be displayed jointly because both are
reconstructed from the same SSS basis vectors with the same numerical rank.
This also implies that both sensor types are not any longer statistically
independent.
These methods for evaluation can be used to assess model violations.
Additional
introductory materials can be found here <https
Step9: This will plot the whitened evoked for the optimal estimator and display the
GFPs for all estimators as separate lines in the related panel.
Finally, let's have a look at the difference between empty room and
event related covariance. | Python Code:
import os.path as op
import mne
from mne.datasets import sample
Explanation: Computing a covariance matrix
Many methods in MNE, including source estimation and some classification
algorithms, require covariance estimations from the recordings.
In this tutorial we cover the basics of sensor covariance computations and
construct a noise covariance matrix that can be used when computing the
minimum-norm inverse solution. For more information, see BABDEEEB.
End of explanation
data_path = sample.data_path()
raw_empty_room_fname = op.join(
data_path, 'MEG', 'sample', 'ernoise_raw.fif')
raw_empty_room = mne.io.read_raw_fif(raw_empty_room_fname)
raw_fname = op.join(data_path, 'MEG', 'sample', 'sample_audvis_raw.fif')
raw = mne.io.read_raw_fif(raw_fname)
raw.set_eeg_reference('average', projection=True)
raw.info['bads'] += ['EEG 053'] # bads + 1 more
Explanation: Source estimation methods such as MNE require a noise estimate from the
recordings. In this tutorial we cover the basics of noise covariance and
construct a noise covariance matrix that can be used when computing the
inverse solution. For more information, see BABDEEEB.
End of explanation
raw_empty_room.info['bads'] = [
bb for bb in raw.info['bads'] if 'EEG' not in bb]
raw_empty_room.add_proj(
[pp.copy() for pp in raw.info['projs'] if 'EEG' not in pp['desc']])
noise_cov = mne.compute_raw_covariance(
raw_empty_room, tmin=0, tmax=None)
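# Added sketch (file name is an assumption; MNE expects covariance files to end in
# '-cov.fif'): the covariance can be saved to disk and read back later.
# mne.write_cov('ernoise-cov.fif', noise_cov)
# noise_cov = mne.read_cov('ernoise-cov.fif')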
Explanation: The definition of noise depends on the paradigm. In MEG it is quite common
to use empty room measurements for the estimation of sensor noise. However if
you are dealing with evoked responses, you might want to also consider
resting state brain activity as noise.
First we compute the noise using empty room recording. Note that you can also
use only a part of the recording with tmin and tmax arguments. That can be
useful if you use resting state as a noise baseline. Here we use the whole
empty room recording to compute the noise covariance (tmax=None is the same
as the end of the recording, see :func:mne.compute_raw_covariance).
Keep in mind that you want to match your empty room dataset to your
actual MEG data, processing-wise. Ensure that filters
are all the same and if you use ICA, apply it to your empty-room and subject
data equivalently. In this case we did not filter the data and
we don't use ICA. However, we do have bad channels and projections in
the MEG data, and, hence, we want to make sure they get stored in the
covariance object.
End of explanation
events = mne.find_events(raw)
epochs = mne.Epochs(raw, events, event_id=1, tmin=-0.2, tmax=0.5,
baseline=(-0.2, 0.0), decim=3, # we'll decimate for speed
verbose='error') # and ignore the warning about aliasing
Explanation: Now that you have the covariance matrix in an MNE-Python object you can
save it to a file with :func:mne.write_cov. Later you can read it back
using :func:mne.read_cov.
You can also use the pre-stimulus baseline to estimate the noise covariance.
First we have to construct the epochs. When computing the covariance, you
should use baseline correction when constructing the epochs. Otherwise the
covariance matrix will be inaccurate. In MNE this is done by default, but
just to be sure, we define it here manually.
End of explanation
noise_cov_baseline = mne.compute_covariance(epochs, tmax=0)
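# Note (added): tmax=0 restricts the estimate to the pre-stimulus baseline samples, so only
# activity preceding the events contributes to this "noise" covariance.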
Explanation: Note that this method also attenuates any activity in your
source estimates that resemble the baseline, if you like it or not.
End of explanation
noise_cov.plot(raw_empty_room.info, proj=True)
noise_cov_baseline.plot(epochs.info, proj=True)
Explanation: Plot the covariance matrices
Try setting proj to False to see the effect. Notice that the projectors in
epochs are already applied, so proj parameter has no effect.
End of explanation
noise_cov_reg = mne.compute_covariance(epochs, tmax=0., method='auto',
rank=None)
Explanation: How should I regularize the covariance matrix?
The estimated covariance can be numerically
unstable and tends to induce correlations between estimated source amplitudes
and the number of samples available. The MNE manual therefore suggests to
regularize the noise covariance matrix (see
cov_regularization), especially if only few samples are available.
Unfortunately it is not easy to tell the effective number of samples, hence,
to choose the appropriate regularization.
In MNE-Python, regularization is done using advanced regularization methods
described in [1]_. For this the 'auto' option can be used. With this
option cross-validation will be used to learn the optimal regularization:
End of explanation
evoked = epochs.average()
evoked.plot_white(noise_cov_reg, time_unit='s')
Explanation: This procedure evaluates the noise covariance quantitatively by how well it
whitens the data using the
negative log-likelihood of unseen data. The final result can also be visually
inspected.
Under the assumption that the baseline does not contain a systematic signal
(time-locked to the event of interest), the whitened baseline signal should
follow a multivariate Gaussian distribution, i.e.,
whitened baseline signals should be between -1.96 and 1.96 at a given time
sample.
Based on the same reasoning, the expected value for the global field power
(GFP) is 1 (calculation of the GFP should take into account the true degrees
of freedom, e.g. ddof=3 with 2 active SSP vectors):
End of explanation
noise_covs = mne.compute_covariance(
epochs, tmax=0., method=('empirical', 'shrunk'), return_estimators=True,
rank=None)
evoked.plot_white(noise_covs, time_unit='s')
Explanation: This plot displays both, the whitened evoked signals for each channels and
the whitened GFP. The numbers in the GFP panel represent the estimated rank
of the data, which amounts to the effective degrees of freedom by which the
squared sum across sensors is divided when computing the whitened GFP.
The whitened GFP also helps detecting spurious late evoked components which
can be the consequence of over- or under-regularization.
Note that if data have been processed using signal space separation
(SSS) [2],
gradiometers and magnetometers will be displayed jointly because both are
reconstructed from the same SSS basis vectors with the same numerical rank.
This also implies that both sensor types are not any longer statistically
independent.
These methods for evaluation can be used to assess model violations.
Additional
introductory materials can be found here <https://goo.gl/ElWrxe>.
For expert use cases or debugging the alternative estimators can also be
compared (see
sphx_glr_auto_examples_visualization_plot_evoked_whitening.py) and
sphx_glr_auto_examples_inverse_plot_covariance_whitening_dspm.py):
End of explanation
evoked_meg = evoked.copy().pick_types(meg=True, eeg=False)
noise_cov_meg = mne.pick_channels_cov(noise_cov_baseline, evoked_meg.ch_names)
noise_cov['method'] = 'empty_room'
noise_cov_meg['method'] = 'baseline'
evoked_meg.plot_white([noise_cov_meg, noise_cov], time_unit='s')
Explanation: This will plot the whitened evoked for the optimal estimator and display the
GFPs for all estimators as separate lines in the related panel.
Finally, let's have a look at the difference between empty room and
event related covariance.
End of explanation |
15,153 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Dealing with Lookahead Conflicts
This notebook discusses conflicts that have their origin in insufficient looakahead.
We will discuss the following grammar
Step1: Specification of the Parser
Step2: The start variable of our grammar is expr, but we dont't have to specify that. The default
start variable is the first vvariable that is defined.
Step4: We can specify multiple expressions in a single rule. In this case, we have used the passstatement
as we just want to generate some conflicts.
Step5: Setting the optional argument write_tables to False <B style="color
Step6: Let's look at the action table that is generated. Conflicts are always resolved in favour of shifting. | Python Code:
import ply.lex as lex
tokens = [ 'USELESS' ]
literals = ['U', 'V', 'W', 'X']
def t_USELESS(t):
r'This will never be used.'
__file__ = 'main'
lexer = lex.lex()
Explanation: Dealing with Lookahead Conflicts
This notebook discusses conflicts that have their origin in insufficient looakahead.
We will discuss the following grammar:
```
a : b 'U' 'V'
| c 'U' 'W'
b : 'X'
c : 'X'
```
Specification of the Scanner
We implement a minimal scanner. Below we need to declare. The reason is that Ply only works when the list
tokens is defined and contains at least one token.
End of explanation
import ply.yacc as yacc
Explanation: Specification of the Parser
End of explanation
start = 'a'
Explanation: The start variable of our grammar is expr, but we dont't have to specify that. The default
start variable is the first vvariable that is defined.
End of explanation
def p_a(p):
a : b 'U' 'V'
| c 'U' 'W'
b : 'X'
c : 'X'
pass
def p_error(p):
if p:
print(f'Syntax error at {p.value}.')
else:
print('Syntax error at end of input.')
Explanation: We can specify multiple expressions in a single rule. In this case, we have used the passstatement
as we just want to generate some conflicts.
End of explanation
parser = yacc.yacc(write_tables=False, debug=True)
Explanation: Setting the optional argument write_tables to False <B style="color:red">is required</B> to prevent an obscure bug where the parser generator tries to read an empty parse table.
End of explanation
!cat parser.out
Explanation: Let's look at the action table that is generated. Conflicts are always resolved in favour of shifting.
End of explanation |
15,154 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Data Ingestion & Exploratory Analysis of the UFO Database
Unidentified Flying Objects (UFOs) have been an interesting topic for most enthusiasts and hence people all over the United States report such findings online at National UFO Report Center (NUFORC). Some of these reports are hoax and amongst those that seem legitimate, there isnโt currently an established method to confirm that they indeed are events related to flying objects from aliens in outer space. However, the database provides a wealth of information that can be exploited to provide various analyses and insights such as social reporting, identifying real-time spatial events and much more. We perform analysis to localize these time-series geospatial events and correlate with known real-time events. This paper does not confirm any legitimacy of alien activity, but rather attempts to gather information from likely legitimate reports of UFOs by studying the online reports. These events happen in geospatial clusters and also are time-based. We present a scheme consisting of feature extraction by filtering related datasets over a time-band of 24 hrs and use multi-dimensional textual summaries along with geospatial information to determine best clusters of UFO activity. Later, we look at cluster density and data visualization to search the space of various cluster realizations to decide best probable clusters that provide us information about proximity of such activity. A random forest classifier is also presented that is used to identify true events and hoax events, using the best possible features available such as region, week, time-period and duration. Lastly, we show the performance of the scheme on various days and discover interesting correlations with real-time events!
Step1: The UFO database as you can see below has the following columns
Step2: The data used in this research is collected and made public by the National UFO Reporting Center launched in 1974. The NUFORC site hosts an extensive database of UFO sighting reports that are submitted either online or though a 24-hour telephone hotline. The data undergoes an internal quality check by the NUFORC staff before being made public and, at the moment, presents one of the most comprehensive UFO reports databases available online. It provides the following information
Step3: Let us convert the bin the events by time since the time of reporting is continous and by the minute. A binning of the events helps analyse statistics such as, how many events occurred during noon?
Step4: Let us look at events where durations are reported as null
Step5: Impute duration column with mean
Step6: From the above you can see that there are no null values in Durations.
Let us look at the unique states and which states are most reporting UFO events.
Step7: Let us look at how many of these are in the United States.
Step8: We can see from above that many states are outside of United States. Since we are looking to study most events that occurred in the United States, let us create a column "US".
Step9: Let us see how much we stand to lose in terms of considering data inside of US by plotting a count plot with a hue.
Step10: We can study from countplot that there is a small percentage of data that we stand to lose by ignoring states outisde of United States. Let us keep only states within US for our initial analysis.
Step11: Let us explore how the city column looks. It is important to take a look at lots of values in the dataset to check for anomalies or data with noise. For example the city data has noise with text containing additional information within () and other such noises.
Step12: Transfrom the City column to exclude all irrelevant text entries (e.g., additional comments).
Step13: Latitude & Longitude of reported events.
These events are reported at various cities. We need the latitude and longitude information to perform geospatial analysis. A process of coverting an address to a latitude and longitude is called forward geocoding. The geopy library is useful for forward geocoding. It connects to a network and looks up the address and returns back the latitude and longitude information. The code below is used to determine the coordianates. We have already run the code below and generated a file. Hence, do not run this portion of the code below. Also it takes a while to look up all the addresses.
Warning
Don't run this code as it is a placeholder; instead use data exported to a csv
Step14: Now that we have the cleaned and processed the data, let us extract right columns from the dataframe that are useful and shall be our features for modeling.
Step15: Adding other data sources
In order to have a better understanding of the UFO reports, let us add the following external-data sources
Step16: We can now merge the two datasets on state abbreviations.
Step17: Hoax Prediction
The inevitable presence of IFO reports in the dataset can, in fact, be considered an added value, since the non-UFO reports are still indicative of actual events taking place. Therefore, our analysis focuses on the events that are reported as UFOs, regardless of them being an alien activity or in future recognized as an IFO. In addition to general reporting trends, the analysis of NUFORC data can offer insight into the UFO perception and their validity as some of the latter are labeled to be hoax reports by NUFORC.
Step18: Average reports during the day per state grouped by Time.
Step19: Violin plots of reports on weekdays vs weekends
Create a categorical variable column called WeekEnd. Violin plots showcase the distribution of events that aren't hoax over the weekdays vs weekends. This will be a large plot as we can get information about the density of reports in all states.
Step20: Let us look at the states which reported the highest UFO events and look at their violin plots.
Step21: Reports of various shapes in a violin plot on a weekday vs weekend.
Let us look at the largest shapes reported.
Step22: Remove the rows where State or City is unknown. Alternatively impute the rows for missing values. | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
import statsmodels.api as sm
import pandas as pd
import numpy as np
import geocoder
import re
import math
Explanation: Data Ingestion & Exploratory Analysis of the UFO Database
Unidentified Flying Objects (UFOs) have been an interesting topic for most enthusiasts and hence people all over the United States report such findings online at National UFO Report Center (NUFORC). Some of these reports are hoax and amongst those that seem legitimate, there isnโt currently an established method to confirm that they indeed are events related to flying objects from aliens in outer space. However, the database provides a wealth of information that can be exploited to provide various analyses and insights such as social reporting, identifying real-time spatial events and much more. We perform analysis to localize these time-series geospatial events and correlate with known real-time events. This paper does not confirm any legitimacy of alien activity, but rather attempts to gather information from likely legitimate reports of UFOs by studying the online reports. These events happen in geospatial clusters and also are time-based. We present a scheme consisting of feature extraction by filtering related datasets over a time-band of 24 hrs and use multi-dimensional textual summaries along with geospatial information to determine best clusters of UFO activity. Later, we look at cluster density and data visualization to search the space of various cluster realizations to decide best probable clusters that provide us information about proximity of such activity. A random forest classifier is also presented that is used to identify true events and hoax events, using the best possible features available such as region, week, time-period and duration. Lastly, we show the performance of the scheme on various days and discover interesting correlations with real-time events!
End of explanation
ufo_data = pd.read_csv("data/ufo/ufo_data.csv", sep='\t')
ufo_data.head()
Explanation: The UFO database as you can see below has the following columns:
* Date/Time of the event.
* City where the event was reported.
* State of the city.
* Shape that the observer thought they saw.
* Duration of the event.
* Summary - description of the UFO event.
* Date posted by the UFO website.
End of explanation
ufo_data['Month'] = [int(r.split('/')[0]) for r in ufo_data['Date/Time']]
ufo_data['Day'] = [int(r.split('/')[1]) for r in ufo_data['Date/Time']]
ufo_data['Date'] = [(r.split(' ')[0]) for r in ufo_data['Date/Time']]
ufo_data['Time'] = [(r.split(' ')[-1]) for r in ufo_data['Date/Time']]
Explanation: The data used in this research is collected and made public by the National UFO Reporting Center, launched in 1974. The NUFORC site hosts an extensive database of UFO sighting reports that are submitted either online or through a 24-hour telephone hotline. The data undergoes an internal quality check by the NUFORC staff before being made public and, at the moment, presents one of the most comprehensive UFO report databases available online. It provides the following information: Date/Time, City, State, Shape, Duration, Summary, and Posting date. The data gets occasionally used for local news reports as well as broader-level reporting.
The Date/Time column needs to be parsed to extract the date components. The datetime utility cannot be easily used because the Date format doesn't come with padded 0s for single digits.
End of explanation
def time_period(time):
'''
    Convert time into periods 1, 2, ..., 12. If the time is hh:mm
    and 2*i <= hh < 2*(i+1), then hh:mm belongs to period i+1.
    For example, 6:30am has hh = 6; since 2*3 <= 6 < 2*4 (i = 3), it falls in period 4.
Args:
time (time): Time period.
Returns:
periods (time): Formatted time period.
'''
periods=[]
for t in time:
try:
p = int(t.split(':')[0])
for i in range(12):
if(p>=2*i) & (p<2*(i+1)):
periods.append(i+1)
except ValueError:
periods.append(-1)
return periods
ufo_data['TimePeriod'] = time_period(ufo_data['Time'])
ufo_data.head(1)
Explanation: Let us bin the events by time, since the time of reporting is continuous and recorded by the minute. Binning the events helps us analyse statistics such as how many events occurred around noon.
End of explanation
null_data = ufo_data[ufo_data.Duration.isnull()]
null_data.head(1)
def duration_sec(duration_text):
'''
Add a duration column with normalized units of measurement (seconds). Extracts the duration in seconds
by infering the duration from the text.
Args:
text (str): String of text.
Returns:
(int): Time duration in seconds.
'''
try:
metric_text = ["second","s","Second","minute","m","min","Minute","hour","h","Hour"]
metric_seconds = [1,1,1,60,60,60,60,3600,3600,3600]
for m,st in zip(metric_text, metric_seconds):
regex = "\s*(\d+)\+?\s*{}s?".format(m)
a = re.findall(regex, duration_text)
if len(a)>0:
return int(int(a[0]) * st)
else:
return None
except:
return None
Explanation: Let us look at events where durations are reported as null
End of explanation
ufo_data["Duration_Sec"] = ufo_data["Duration"].apply(duration_sec)
ufo_data["Duration_Sec"] = ufo_data.Duration_Sec.fillna(int(ufo_data.Duration_Sec.mean()))
ufo_data['Duration_Sec'].unique()
Explanation: Impute duration column with mean
End of explanation
sns.set(style="darkgrid")
plt.figure(figsize=(8, 12))
sns.countplot(y="State", data=ufo_data, palette="Greens_d");
Explanation: From the above you can see that there are no null values in Durations.
Let us look at the unique states and see which states report the most UFO events.
End of explanation
all_states = ufo_data['State'].value_counts(dropna=False)
US = ["AL", "AK", "AZ", "AR", "CA", "CO", "CT", "DC", "DE",
"FL", "GA", "HI", "ID", "IL", "IN", "IA", "KS", "KY",
"LA", "ME", "MD", "MA", "MI", "MN", "MS", "MO", "MT",
"NE", "NV", "NH", "NJ", "NM", "NY", "NC", "ND", "OH",
"OK", "OR", "PA", "RI", "SC", "SD", "TN", "TX", "UT",
"VT", "VA", "WA", "WV", "WI", "WY"]
print([state for state in all_states.index if state not in US])
Explanation: Let us look at how many of these are in the United States.
End of explanation
def is_US(state):
'''
Check if the state is in United States or not.
Args:
state (str): state reported in UFO data.
Returns:
(boolean): True if state is in US or False otherwise.
'''
if state in US:
return True
else:
return False
ufo_data['US'] = ufo_data['State'].apply(is_US)
ufo_data.head(1)
Explanation: We can see from above that many states are outside of the United States. Since we want to focus on events that occurred in the United States, let us create a column "US".
End of explanation
plt.figure(figsize=(8, 12))
g = sns.countplot(y="State", hue="US", data=ufo_data)
Explanation: Let us see how much data we stand to lose by only considering data inside the US, by plotting a count plot with a hue.
End of explanation
ufo_data = ufo_data[ufo_data.US == 1]
Explanation: We can see from the count plot that only a small percentage of the data is lost by ignoring states outside of the United States. Let us keep only states within the US for our initial analysis.
End of explanation
#print(ufo_data['City'].unique()[0:1000])
Explanation: Let us explore how the City column looks. It is important to inspect many values in the dataset to check for anomalies or noisy data. For example, the City column contains entries with additional information inside parentheses and other such noise.
End of explanation
def clean_city_data(city_name):
'''
Cleans the city string of additional comments and irrelevant data.
Args:
city_name (str): Name of the city
Returns:
(str): correct city name.
'''
try:
city_name = city_name.split('/')[0]
city_name = city_name.split('(')[0]
city_name = city_name.split(',')[0]
city_name = city_name.split('?')[0]
return city_name
except AttributeError:
return 'Unknown'
ufo_data['City'] = ufo_data['City'].apply(clean_city_data)
Explanation: Transform the City column to exclude all irrelevant text entries (e.g., additional comments).
End of explanation
# from geopy.geocoders import ArcGIS
# from geopy.exc import GeocoderTimedOut
# Latitude=[]
# Longitude=[]
# geolocator = ArcGIS()
# fails=[]
# for i in range(len(df)):
# try:
# location = geolocator.geocode(df.iloc[i,1]+','+df.iloc[i,2])
# df.iloc[i,-2] = location.latitude
# df.iloc[i,-1] = location.longitude
# print (i, location.address, df.iloc[i,-2], df.iloc[i,-1])
# df.to_csv('data_coord_1.csv',sep='\t', encoding='utf-8', index=False)
# except (AttributeError, GeocoderTimedOut) as e:
# df.to_csv('data_coord_1.csv',sep='\t', encoding='utf-8', index=False)
# print ('exception:', i)
Explanation: Latitude & Longitude of reported events.
These events are reported at various cities. We need the latitude and longitude information to perform geospatial analysis. The process of converting an address to a latitude and longitude is called forward geocoding. The geopy library is useful for forward geocoding: it connects to a network, looks up the address and returns the latitude and longitude information. The code below is used to determine the coordinates. We have already run the code below and generated a file, hence do not run this portion of the code. Also, it takes a while to look up all the addresses.
Warning
Don't run this code as it is a placeholder; instead use the data exported to a csv: data_coord.csv
This code was used to extract coordinates for each state-city combination.
End of explanation
features = ['Date', 'Month', 'Day', 'Time', 'TimePeriod', 'City', 'State', 'Lat', 'Long',
            'Shape', 'Duration', 'Duration_Sec', 'Summary', 'Posted', 'US']
ufo_data = pd.DataFrame(ufo_data, columns = features)
ufo_data.head(1)
ufo_data['Lat'] = np.nan
ufo_data['Long'] = np.nan
ufo_data = pd.read_csv("data/ufo/data_coord.csv", sep='\t')
ufo_data = ufo_data[(~ufo_data.Long.isnull()) & (~ufo_data.Lat.isnull())]
ufo_data['Date'] = pd.to_datetime(ufo_data['Date'])
ufo_data.set_index('Date', inplace=True)
ufo_data = ufo_data.reset_index()
ufo_data['WeekDay'] = ufo_data['Date'].dt.dayofweek
ufo_data['Week'] = ufo_data['Date'].dt.weekofyear
ufo_data['Quarter'] = ufo_data['Date'].dt.quarter
ufo_data['Year'] = ufo_data['Date'].dt.year
ufo_data.columns
Explanation: Now that we have cleaned and processed the data, let us extract the columns from the dataframe that are useful and shall serve as our features for modeling.
End of explanation
ufo_stats = pd.read_excel("data/ufo/stats2.xlsx")
ufo_stats.head(2)
Explanation: Adding other data sources
In order to have a better understanding of the UFO reports, let us add the following external-data sources:
* dates of astronomical events in CY 2014-2015
* dates of national holidays in CY 2014-2015
* US state population and share of active military population per year for each state.
US state population and share of active military population per year for each state.
End of explanation
ufo_stats['State'] = ufo_stats['state abbr']
ufo_data = pd.merge(ufo_data, ufo_stats, on='State')
ufo_data.columns
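# Pop = state population for the report year; Milit_Share = active-duty members (2014 column) divided by that year's population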
ufo_data.loc[ufo_data['Year'] == 2014, 'Pop'] = ufo_data['2014 Popualtion'][ufo_data['Year'] == 2014]
ufo_data.loc[ufo_data['Year'] == 2015, 'Pop'] = ufo_data['2015 Population'][ufo_data['Year'] == 2015]
ufo_data.loc[ufo_data['Year'] == 2014, 'Milit_Share'] = ufo_data['Number of Active Duty members 2014'][ufo_data['Year'] == 2014]/ufo_data['2014 Popualtion'][ufo_data['Year'] == 2014]
ufo_data.loc[ufo_data['Year'] == 2015, 'Milit_Share'] = ufo_data['Number of Active Duty members 2014'][ufo_data['Year'] == 2015]/ufo_data['2015 Population'][ufo_data['Year'] == 2015]
Explanation: We can now merge the two datasets on state abbreviations.
End of explanation
#Adding a Hoax column derived from the Summary column
pattern = '|'.join(["HOAX","NUFORC Note"])
ufo_data['Validity'] = ufo_data.Summary.str.contains(pattern)
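# Validity encodes the NUFORC flag: 0 = flagged as a possible hoax (Summary contains "HOAX" or a "NUFORC Note"), 1 = not flagged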
def binary_convert(value):
if value==True:
return 0
else:
return 1
ufo_data['Validity'] = ufo_data['Validity'].apply(lambda x: binary_convert(x))
ufo_data_hoax = ufo_data[['Summary','Validity']]
ufo_data_hoax[ufo_data_hoax.Validity==0].shape
Explanation: Hoax Prediction
The inevitable presence of IFO reports in the dataset can, in fact, be considered an added value, since the non-UFO reports are still indicative of actual events taking place. Therefore, our analysis focuses on the events that are reported as UFOs, regardless of whether they turn out to be alien activity or are later recognized as IFOs. In addition to general reporting trends, the analysis of NUFORC data can offer insight into UFO perception and report validity, as some of the reports are labeled as hoaxes by NUFORC.
End of explanation
# sns.set(style="white", palette="muted", color_codes=True)
g = sns.distplot(ufo_data.groupby(['State'])['Time'].count().nlargest(10), color="r")
Explanation: Average reports during the day per state grouped by Time.
End of explanation
ufo_data['WeekEnd'] = "WeekDay"
ufo_data.loc[ufo_data['WeekDay'] > 4, 'WeekEnd'] = "WeekEnd"
Explanation: Violin plots of reports on weekdays vs weekends
Create a categorical variable column called WeekEnd. Violin plots showcase the distribution of events that aren't hoax over the weekdays vs weekends. This will be a large plot as we can get information about the density of reports in all states.
End of explanation
ufo_states = ufo_data.groupby('State')['Year'].count().nlargest(10)
plt.figure(figsize=(10, 10))
sns.set(style="whitegrid", palette="pastel", color_codes=True)
sns.violinplot(y="State", x="Validity", hue="WeekEnd",
data=ufo_data.loc[ufo_data['State'].isin(ufo_states.index.tolist())],
split=True, inner="quart", palette={"WeekDay": "b", "WeekEnd": "y"})
Explanation: Let us look at the states which reported the highest number of UFO events and look at their violin plots.
End of explanation
ufo_shapes = ufo_data.groupby('Shape')['Year'].count().nlargest(10)
print(ufo_shapes)
import seaborn as sns
import matplotlib.pyplot as plt
sns.set(style="whitegrid")
# Set up the matplotlib figure
f, ax = plt.subplots(figsize=(10, 8))
# Draw a violinplot with a narrower bandwidth than the default
sns.violinplot(y="Validity", x="Shape", hue="WeekEnd", data=ufo_data.loc[ufo_data['Shape'].isin(ufo_shapes.index.tolist())],
split=True, inner="quart", palette="Set3", bw=.2, cut=2, linewidth=1)
# Finalize the figure
sns.despine(left=True, bottom=True)
Explanation: Reports of various shapes in a violin plot on a weekday vs weekend.
Let us look at the most frequently reported shapes.
End of explanation
ufo_data = ufo_data[~ufo_data.State.isnull() & ~ufo_data.City.isnull()]
ufo_data.isnull().sum()
import folium
# Get a basic world map.
UFOmap = folium.Map(location=[30, 0], zoom_start=2)
# Draw markers on the map.
for name, row in ufo_data.iterrows():
    folium.CircleMarker(location=[row["Lat"], row["Long"]], radius=2, popup=str(row["City"])).add_to(UFOmap)
# Save and show the map.
UFOmap.save('UFOmap.html')
UFOmap
Explanation: Remove the rows where State or City is unknown. Alternatively impute the rows for missing values.
End of explanation |
15,155 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Creating Labeled Data from a Planet Mosaic with Label Maker
In this notebook, we create labeled data for training a machine learning algorithm. As inputs, we use OpenStreetMap as the ground truth source and a Planet mosaic as the image source. Development Seed's Label Maker tool is used to download and prepare the ground truth data, chip the Planet imagery, and package the two to feed into the training process.
The primary interface for Label Maker is through the command-line interface (cli). It is configured through the creation of a configuration file. More information about that configuration file and command line usage can be found in the Label Maker repo README.
RUNNING NOTE
This notebook is meant to be run in a docker image specific to this folder. The docker image must be built from the custom Dockerfile according to the directions below.
In label-data directory
Step1: Define Mosaic Parameters
In this tutorial, we use the Planet mosaic tile service. There are many mosaics to choose from. For a list of mosaics available, visit https
Step2: Prepare label maker config file
This config file is pulled from the label-maker repo README.md example and then customized to utilize the Planet mosaic. The imagery url is set to the Planet mosaic url and the zoom is changed to 15, the maximum zoom supported by the Planet tile services.
See the label-maker README.md file for a description of the config entries.
Step3: Visualize Mosaic at config area of interest
Step4: Download OSM tiles
In this step, label-maker downloads the OSM vector tiles for the country specified in the config file.
According to Label Maker documentation, these can be visualized with mbview. So far I have not been successful getting mbview to work. I will keep on trying and would love to hear how you got this to work!
Step5: Create ground-truth labels from OSM tiles
In this step, the OSM tiles are chipped into label tiles at the zoom level specified in the config file. Also, a geojson file is created for visual inspection.
Step6: Visualizing classification.geojson in QGIS gives
Step7: Other than the fact that 4 tiles were created instead of the specified 3, the results look pretty good! All Road examples have roads, and all Building examples have buildings.
Create image tiles
In this step, we invoke label-maker images, which downloads and chips the mosaic into tiles that match the label tiles.
Interestingly, only 372 image tiles are downloaded, while 576 label tiles were generated. Looking at the label tile generation output (370 Road tiles, 270 Building tiles) along with the classification.geojson visualization (only two tiles that are Building and not Road), we find that there are only 372 label tiles that represent at least one of the Road/Building classes. This is why only 372 image tiles were generated.
Step8: Package tiles and labels
Convert the image and label tiles into train and test datasets.
Step9: Check Package
Let's load the packaged data and look at the train and test datasets. | Python Code:
import json
import os
import ipyleaflet as ipyl
import ipywidgets as ipyw
from IPython.display import Image
import numpy as np
Explanation: Creating Labeled Data from a Planet Mosaic with Label Maker
In this notebook, we create labeled data for training a machine learning algorithm. As inputs, we use OpenStreetMap as the ground truth source and a Planet mosaic as the image source. Development Seed's Label Maker tool is used to download and prepare the ground truth data, chip the Planet imagery, and package the two to feed into the training process.
The primary interface for Label Maker is through the command-line interface (cli). It is configured through the creation of a configuration file. More information about that configuration file and command line usage can be found in the Label Maker repo README.
RUNNING NOTE
This notebook is meant to be run in a docker image specific to this folder. The docker image must be built from the custom Dockerfile according to the directions below.
In label-data directory:
docker build -t planet-notebooks:label .
Then start up the docker container as you usually would, specifying planet-notebooks:label as the image.
Install Dependencies
In addition to the python packages imported below, the label-maker python package is also a dependency. However, its primary usage is through the command-line interface (cli), so we use Jupyter notebook bash magic to run label-maker via the cli instead of importing the python package.
End of explanation
# Planet tile server base URL (Planet Explorer Mosaics Tiles)
mosaic = 'global_monthly_2018_02_mosaic'
mosaicsTilesURL_base = 'https://tiles.planet.com/basemaps/v1/planet-tiles/{}/gmap/{{z}}/{{x}}/{{y}}.png'.format(mosaic)
mosaicsTilesURL_base
# Planet tile server url with auth
planet_api_key = os.environ['PL_API_KEY']
planet_mosaic = mosaicsTilesURL_base + '?api_key=' + planet_api_key
# url is not printed because it will show private api key
Explanation: Define Mosaic Parameters
In this tutorial, we use the Planet mosaic tile service. There are many mosaics to choose from. For a list of mosaics available, visit https://api.planet.com/basemaps/v1/mosaics.
We first build the url for the xyz basemap tile service, then we add authorization in the form of the Planet API key.
End of explanation
# create data directory
data_dir = os.path.join('data', 'label-maker-mosaic')
if not os.path.isdir(data_dir):
os.makedirs(data_dir)
# label-maker doesn't clean up, so start with a clean slate
!cd $data_dir && rm -R *
# create config file
bounding_box = [1.09725, 6.05520, 1.34582, 6.30915]
config = {
"country": "togo",
"bounding_box": bounding_box,
"zoom": 15,
"classes": [
{ "name": "Roads", "filter": ["has", "highway"] },
{ "name": "Buildings", "filter": ["has", "building"] }
],
"imagery": planet_mosaic,
"background_ratio": 1,
"ml_type": "classification"
}
# define project files and folders
config_filename = os.path.join(data_dir, 'config.json')
# write config file
with open(config_filename, 'w') as cfile:
cfile.write(json.dumps(config))
print('wrote config to {}'.format(config_filename))
Explanation: Prepare label maker config file
This config file is pulled from the label-maker repo README.md example and then customized to utilize the Planet mosaic. The imagery url is set to the Planet mosaic url and the zoom is changed to 15, the maximum zoom supported by the Planet tile services.
See the label-maker README.md file for a description of the config entries.
End of explanation
# calculate center of map
bounds_lat = [bounding_box[1], bounding_box[3]]
bounds_lon = [bounding_box[0], bounding_box[2]]
def calc_center(bounds):
return bounds[0] + (bounds[1] - bounds[0])/2
map_center = [calc_center(bounds_lat), calc_center(bounds_lon)] # lat/lon
print(bounding_box)
print(map_center)
# create and visualize mosaic at approximately the same bounds as defined in the config file
map_zoom = 12
layout=ipyw.Layout(width='800px', height='800px') # set map layout
mosaic_map = ipyl.Map(center=map_center, zoom=map_zoom, layout=layout)
mosaic_map.add_layer(ipyl.TileLayer(url=planet_mosaic))
mosaic_map
mosaic_map.bounds
Explanation: Visualize Mosaic at config area of interest
End of explanation
!cd $data_dir && label-maker download
Explanation: Download OSM tiles
In this step, label-maker downloads the OSM vector tiles for the country specified in the config file.
According to Label Maker documentation, these can be visualized with mbview. So far I have not been successful getting mbview to work. I will keep on trying and would love to hear how you got this to work!
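As a stopgap while mbview is not cooperating, here is a minimal sketch to peek at the downloaded vector tiles; it assumes label-maker saved an MBTiles file (a plain SQLite database with a tiles table) somewhere under the data folder -- adjust the glob if your version stores it elsewhere.
import sqlite3, glob
mbtiles_paths = glob.glob(os.path.join(data_dir, 'data', '**', '*.mbtiles'), recursive=True)
if mbtiles_paths:
    conn = sqlite3.connect(mbtiles_paths[0])
    # count the stored tiles and check the zoom range in the MBTiles file
    print(conn.execute('SELECT COUNT(*), MIN(zoom_level), MAX(zoom_level) FROM tiles').fetchone())
    conn.close()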
End of explanation
!cd $data_dir && label-maker labels
Explanation: Create ground-truth labels from OSM tiles
In this step, the OSM tiles are chipped into label tiles at the zoom level specified in the config file. Also, a geojson file is created for visual inspection.
End of explanation
# !cd $data_dir && label-maker preview -n 3
# !ls $data_dir/data/examples
# for fclass in ('Roads', 'Buildings'):
# example_dir = os.path.join(data_dir, 'data', 'examples', fclass)
# print(example_dir)
# for img in os.listdir(example_dir):
# print(img)
# display(Image(os.path.join(example_dir, img)))
Explanation: Visualizing classification.geojson in QGIS gives:
Although Label Maker doesn't tell us which classes line up with the labels (see the legend in the visualization for labels), it looks like the following relationships hold:
- (1,0,0) - no roads or buildings
- (0,1,1) - both roads and buildings
- (0,0,1) - only buildings
- (0,1,0) - only roads
Most of the large region with no roads or buildings at the bottom portion of the image is the water off the coast.
Preview image chips
Create a subset of the image chips for preview before creating them all. Preview chips are placed in subdirectories named after each class specified in the config file.
NOTE This section is commented out because preview fails due to imagery-offset arg. See more:
https://github.com/developmentseed/label-maker/issues/79
End of explanation
!cd $data_dir && label-maker images
# look at three tiles that were generated
tiles_dir = os.path.join(data_dir, 'data', 'tiles')
print(tiles_dir)
for img in os.listdir(tiles_dir)[:3]:
print(img)
display(Image(os.path.join(tiles_dir, img)))
Explanation: Other than the fact that 4 tiles were created instead of the specified 3, the results look pretty good! All Road examples have roads, and all Building examples have buildings.
Create image tiles
In this step, we invoke label-maker images, which downloads and chips the mosaic into tiles that match the label tiles.
Interestingly, only 372 image tiles are downloaded, while 576 label tiles were generated. Looking at the label tile generation output (370 Road tiles, 270 Building tiles) along with the classification.geojson visualization (only two tiles that are Building and not Road), we find that there are only 372 label tiles that represent at least one of the Road/Building classes. This is why only 372 image tiles were generated.
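A hedged way to double-check that count from the intermediate labels file (this assumes the label-maker version in use writes data/labels.npz with one one-hot array per tile and the background class in position 0 -- adjust if yours differs):
labels = np.load(os.path.join(data_dir, 'data', 'labels.npz'))
tiles_with_a_class = sum(1 for key in labels.files if labels[key][1:].any())
print('label tiles containing at least one class:', tiles_with_a_class)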
End of explanation
# will not be able to open image tiles that weren't generated because the label tiles contained no classes
!cd $data_dir && label-maker package
Explanation: Package tiles and labels
Convert the image and label tiles into train and test datasets.
End of explanation
data_file = os.path.join(data_dir, 'data', 'data.npz')
data = np.load(data_file)
for k in data.keys():
print('data[\'{}\'] shape: {}'.format(k, data[k].shape))
Explanation: Check Package
Let's load the packaged data and look at the train and test datasets.
End of explanation |
15,156 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Landice
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Grid
4. Glaciers
5. Ice
6. Ice --> Mass Balance
7. Ice --> Mass Balance --> Basal
8. Ice --> Mass Balance --> Frontal
9. Ice --> Dynamics
1. Key Properties
Land ice key properties
1.1. Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Ice Albedo
Is Required
Step7: 1.4. Atmospheric Coupling Variables
Is Required
Step8: 1.5. Oceanic Coupling Variables
Is Required
Step9: 1.6. Prognostic Variables
Is Required
Step10: 2. Key Properties --> Software Properties
Software properties of land ice code
2.1. Repository
Is Required
Step11: 2.2. Code Version
Is Required
Step12: 2.3. Code Languages
Is Required
Step13: 3. Grid
Land ice grid
3.1. Overview
Is Required
Step14: 3.2. Adaptive Grid
Is Required
Step15: 3.3. Base Resolution
Is Required
Step16: 3.4. Resolution Limit
Is Required
Step17: 3.5. Projection
Is Required
Step18: 4. Glaciers
Land ice glaciers
4.1. Overview
Is Required
Step19: 4.2. Description
Is Required
Step20: 4.3. Dynamic Areal Extent
Is Required
Step21: 5. Ice
Ice sheet and ice shelf
5.1. Overview
Is Required
Step22: 5.2. Grounding Line Method
Is Required
Step23: 5.3. Ice Sheet
Is Required
Step24: 5.4. Ice Shelf
Is Required
Step25: 6. Ice --> Mass Balance
Description of the surface mass balance treatment
6.1. Surface Mass Balance
Is Required
Step26: 7. Ice --> Mass Balance --> Basal
Description of basal melting
7.1. Bedrock
Is Required
Step27: 7.2. Ocean
Is Required
Step28: 8. Ice --> Mass Balance --> Frontal
Description of calving/melting from the ice shelf front
8.1. Calving
Is Required
Step29: 8.2. Melting
Is Required
Step30: 9. Ice --> Dynamics
**
9.1. Description
Is Required
Step31: 9.2. Approximation
Is Required
Step32: 9.3. Adaptive Timestep
Is Required
Step33: 9.4. Timestep
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'mri', 'sandbox-1', 'landice')
Explanation: ES-DOC CMIP6 Model Properties - Landice
MIP Era: CMIP6
Institute: MRI
Source ID: SANDBOX-1
Topic: Landice
Sub-Topics: Glaciers, Ice.
Properties: 30 (21 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:19
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Grid
4. Glaciers
5. Ice
6. Ice --> Mass Balance
7. Ice --> Mass Balance --> Basal
8. Ice --> Mass Balance --> Frontal
9. Ice --> Dynamics
1. Key Properties
Land ice key properties
1.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of land surface model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of land surface model code
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.ice_albedo')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "function of ice age"
# "function of ice density"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Ice Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify how ice albedo is modelled
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.atmospheric_coupling_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.4. Atmospheric Coupling Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
Which variables are passed between the atmosphere and ice (e.g. orography, ice mass)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.oceanic_coupling_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.5. Oceanic Coupling Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
Which variables are passed between the ocean and ice
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice velocity"
# "ice thickness"
# "ice temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.6. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which variables are prognostically calculated in the ice model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Software Properties
Software properties of land ice code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3. Grid
Land ice grid
3.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the grid in the land ice scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 3.2. Adaptive Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is an adaptive grid being used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.base_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.3. Base Resolution
Is Required: TRUE Type: FLOAT Cardinality: 1.1
The base resolution (in metres), before any adaption
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.resolution_limit')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.4. Resolution Limit
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If an adaptive grid is being used, what is the limit of the resolution (in metres)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.projection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.5. Projection
Is Required: TRUE Type: STRING Cardinality: 1.1
The projection of the land ice grid (e.g. albers_equal_area)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Glaciers
Land ice glaciers
4.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of glaciers in the land ice scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of glaciers, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.dynamic_areal_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 4.3. Dynamic Areal Extent
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Does the model include a dynamic glacial extent?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Ice
Ice sheet and ice shelf
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the ice sheet and ice shelf in the land ice scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.grounding_line_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grounding line prescribed"
# "flux prescribed (Schoof)"
# "fixed grid size"
# "moving grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 5.2. Grounding Line Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the technique used for modelling the grounding line in the ice sheet-ice shelf coupling
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.ice_sheet')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.3. Ice Sheet
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are ice sheets simulated?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.ice_shelf')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.4. Ice Shelf
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are ice shelves simulated?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.surface_mass_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Ice --> Mass Balance
Description of the surface mass balance treatment
6.1. Surface Mass Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how and where the surface mass balance (SMB) is calculated. Include the temporal coupling frequency from the atmosphere, whether or not a separate SMB model is used, and if so details of this model, such as its resolution
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.basal.bedrock')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Ice --> Mass Balance --> Basal
Description of basal melting
7.1. Bedrock
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of basal melting over bedrock
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.basal.ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Ocean
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of basal melting over the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.frontal.calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Ice --> Mass Balance --> Frontal
Description of calving/melting from the ice shelf front
8.1. Calving
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of calving from the front of the ice shelf
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.frontal.melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Melting
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of melting from the front of the ice shelf
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Ice --> Dynamics
**
9.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of ice sheet and ice shelf dynamics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.approximation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SIA"
# "SAA"
# "full stokes"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9.2. Approximation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Approximation type used in modelling ice dynamics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.adaptive_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 9.3. Adaptive Timestep
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there an adaptive time scheme for the ice scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 9.4. Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep (in seconds) of the ice scheme. If the timestep is adaptive, then state a representative timestep.
End of explanation |
15,157 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Major League Baseball's Billion Dollar Problem
A study of MLB pitching injuries at the NYU Stern School of Business
Written by Isaac Gammal ([email protected])
Background
Since the first Tommy John surgery was performed in 1974, shoulder and elbow injuries have become priority issues for players, coaches and general managers. Recovery from shoulder and elbow soft tissue injury, particularly ulnar collateral ligament (UCL) tears and glenoid labrum tears, is often slow and grueling due to the drastic nature of surgical reconstruction and the intense rehabilitation required. With 112 UCL injuries requiring reconstructive surgery in the 2015 season alone, the competitive costs, and substantial economic costs, continue to rise, prompting many to investigate risk factors associated with upper extremity injuries. Some have posited an association with rising fastball velocity, pitch counts, and pitch variability; however, due to the small sample sizes, few have found statistically significant relationships. Nevertheless, many professional and amateur organizations are taking conservative approaches to developing young pitchers, encouraging them to limit pitch counts, extend rest between starts, and delay the use of off-speed pitches.
Purpose
In the first part of this paper, I used the MLB disabled list data culled by Fangraphs writer Jeff Zimmerman and salary information provided by Spotrac to compute the average length of playing time lost due to injury, and the economic costs from lost salary over the past five seasons. In the second part, I used pitchf/x data, from a pitch tracking system created by Sportvision and installed in every MLB stadium, to look at pitching characteristics leading up to an injury.
Step1: The graph above plots average length of disability due to injury, broken down by injury location. Shoulder and elbow injuries are far and away the most devastating and common pitching injuries. The aggregate number of days spent on the DL due to UCL injury is 10,414, representing a staggering 31% of the total number of days for all elbow injuries, and 12% for all pitching injuries.
Step2: Interestingly, despite being the most severe injuries, shoulder and elbow injuries are middle-of-the-pack in terms of lost salary (~$4,000,000). There may be several possible explanations for this finding. Perhaps pitchers with a history of these injuries are labeled as such, and then offered lower salaries in contract negotiations.
Next I loaded the pitchf/x database and merged it with the DL database. Because the databases were divided into injured and healthy pitchers, I first separated the two and then concatenated both to get a database of all pitchers. The variables in the pitchf/x database included maximum velocity (vFA), the difference between the maximum and minimum velocity pitch (delta), and the number of unique pitches thrown (# pitches). These variables were used as predictors and regressed against innings pitched, a continuous variable used as a proxy for injury, and scaled up for relievers vs starters. | Python Code:
'''Data were imported from referenced sources and stored locally'''
import sys # system module
import pandas as pd # data package
import matplotlib.pyplot as plt # graphics module
import datetime as dt # date and time module
import numpy as np # foundation for Pandas
import statsmodels.formula.api as smf
%matplotlib inline
file1 = '/Users/isaacgammal/Desktop/Sports data/pitchers.xlsx'
df1 = pd.read_excel(file1, usecols=[0,1,2,3,4,5,6,7,8,9,10,11,12,13]) #injured pitchers and salaries
file2 = '/Users/isaacgammal/Downloads/fx.xlsx' #pitchf/x data for injured and healthy pitchers
df2 = fx = pd.read_excel(file2)
df1.head()
#compute and plot mean length of time on disabled list by season
x = df1['Days on DL'].groupby(df1['Location']).mean()
fig, ax = plt.subplots()
plt.style.use('fivethirtyeight')
x.plot(kind='barh', ax=ax, legend=False)
ax.set_title('Time Spent on Disabled List by Injury Location', fontsize=16)
ax.set_xlabel('Average Days on DL')
ax.set_ylabel('Injury Location')
ax.get_children()[4].set_color('r')
ax.get_children()[17].set_color('r')
Explanation: Major League Baseball's Billion Dollar Problem
A study of MLB pitching injuries at the NYU Stern School of Business
Written by Isaac Gammal ([email protected])
Background
Since the first Tommy John surgery was performed in 1974, shoulder and elbow injuries have become priority issues for players, coaches and general managers. Recovery from shoulder and elbow soft tissue injury, particularly ulnar collateral ligament (UCL) tears and glenoid labrum tears, is often slow and grueling due to the drastic nature of surgical reconstruction and the intense rehabilitation required. With 112 UCL injuries requiring reconstructive surgery in the 2015 season alone, the competitive costs, and substantial economic costs, continue to rise, prompting many to investigate risk factors associated with upper extremity injuries. Some have posited an association with rising fastball velocity, pitch counts, and pitch variability; however, due to the small sample sizes, few have found statistically significant relationships. Nevertheless, many professional and amateur organizations are taking conservative approaches to developing young pitchers, encouraging them to limit pitch counts, extend rest between starts, and delay the use of off-speed pitches.
Purpose
In the first part of this paper, I used the MLB disabled list data culled by Fangraphs writer Jeff Zimmerman and salary information provided by Spotrac to compute the average length of playing time lost due to injury, and the economic costs from lost salary over the past five seasons. In the second part, I used pitchf/x data, from a pitch tracking system created by Sportvision and installed in every MLB stadium, to look at pitching characteristics leading up to an injury.
End of explanation
y = df1['Salary'].groupby(df1['Location']).mean()
fig, ax = plt.subplots(figsize=(10,6))
plt.style.use('fivethirtyeight')
y.plot(kind='barh', ax=ax, legend=False)
ax.set_title('Average Sunk Salary by Injury Location', fontsize=16)
ax.set_xlabel('Average Salary')
ax.set_ylabel('Injury Location')
ax.get_children()[4].set_color('r')
ax.get_children()[17].set_color('r')
Explanation: The graph above plots average length of disability due to injury, broken down by injury location. Shoulder and elbow injuries are far and away the most devastating and common pitching injuries. The aggregate number of days spent on the DL due to UCL injury is 10,414, representing a staggering 31% of the total number of days for all elbow injuries, and 12% for all pitching injuries.
End of explanation
fx_injured = pd.merge(df1,df2,how='left',on=['Name','Season'])
fx_healthy = pd.read_csv('/Users/isaacgammal/Downloads/healthy.csv')
fx = pd.concat([fx_injured,fx_healthy],axis=0)
fx.head()
lm = smf.ols(formula='vFA ~ IP', data=fx).fit()
lm.params
lm.summary()
lm2 = smf.ols(formula='Delta ~ IP', data=fx).fit()
lm2.params
lm2.summary()
Explanation: Interestingly, despite being the most severe injuries, shoulder and elbow injuries are middle-of-the-pack in terms of lost salary (~$4,000,000). There may be several possible explanations for this finding. Perhaps pitchers with a history of these injuries are labeled as such, and then offered lower salaries in contract negotiations.
Next I loaded the pitchf/x database and merged it with the DL database. Because the databases were divided into injured and healthy pitchers, I first separated the two and then concatenated both to get a database of all pitchers. The variables in the pitchf/x database included maximum velocity (vFA), the difference between the maximum and minimum velocity pitch (delta), and the number of unique pitches thrown (# pitches). These variables were used as predictors and regressed against innings pitched, a continuous variable used as a proxy for injury, and scaled up for relievers vs starters.
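As a hedged extension of the single-variable fits above (not part of the original analysis), the velocity measures can also be combined in one specification, here with innings pitched as the response; the column names are assumed to match the merged frame used above:
lm_combined = smf.ols(formula='IP ~ vFA + Delta', data=fx).fit()
lm_combined.summary()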
End of explanation |
15,158 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Using Apache Spark
Spark applications run as independent sets of processes on a cluster, coordinated by the SparkContext object in your main program (called the driver program).
SparkContext allocates resources across applications.
Once connected, Spark acquires executors on nodes in the cluster, which are processes that run computations and store data for your application.
Next, it sends your application code (defined by JAR or Python files passed to SparkContext) to the executors.
Finally, SparkContext sends tasks to the executors to run.
Step1: Interactive programming
Step2: Answer 2
Step3: Answer 3
Step4: Answer 4
Step5: Answer 5 | Python Code:
import pyspark
sc = pyspark.SparkContext(appName="my_spark_app")
sc
Explanation: Using Apache Spark
Spark applications run as independent sets of processes on a cluster, coordinated by the SparkContext object in your main program (called the driver program).
SparkContext allocates resources across applications.
Once connected, Spark acquires executors on nodes in the cluster, which are processes that run computations and store data for your application.
Next, it sends your application code (defined by JAR or Python files passed to SparkContext) to the executors.
Finally, SparkContext sends tasks to the executors to run.
End of explanation
## quick check that the sc variable is defined and not empty
print("is SparkContext loaded?", sc != '')
Explanation: Interactive programming is the practice of writing parts of a program while it is already running. The Jupyter Notebook will be the frontend for our active program.
For interactive programming we will have:
* A Jupyter/IPython notebook: where we run Python code
* PySparkShell application UI: to monitor Spark Cluster
Monitoring Spark Jobs
Every SparkContext launches its own instance of Web UI which is available at http://[master]:4040 by default.
Web UI comes with the following tabs:
* Jobs
* Stages
* Storage with RDD size and memory use
* Environment
* Executors
* SQL
By default, this information is available only while the application is running.
Jobs
Job id
Description
Submission date
Job Duration
Stages
Tasks
Stages
What is a Stage?:
A stage is a physical unit of execution. It is a step in a physical execution plan.
A stage is a set of parallel tasks, one per partition of an RDD, that compute partial results of a function executed as part of a Spark job.
In other words, a Spark job is a computation with that computation sliced into stages.
A stage is uniquely identified by id. When a stage is created, DAGScheduler increments internal counter nextStageId to track the number of stage submissions.
A stage can only work on the partitions of a single RDD (identified by rdd), but can be associated with many other dependent parent stages (via internal field parents), with the boundary of a stage marked by shuffle dependencies.
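A minimal sketch (reusing the sc created above) showing how a shuffle introduces a stage boundary:
pairs = sc.parallelize(range(100), 4).map(lambda x: (x % 10, 1))
counts = pairs.reduceByKey(lambda a, b: a + b)  # reduceByKey forces a shuffle, so a new stage
print(counts.toDebugString())                   # the lineage shows the shuffle dependency
counts.count()                                  # running an action makes the two stages visible in the Stages tab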
Storage
The Storage page lets us see how RDDs are partitioned across the cluster.
Environment
This tab shows configuration and variables used in Apache Spark execution.
Executors
In this tab, we can see information about executors available in the cluster.
We can have relevant information about CPU and Memory, as well as RDD storage.
We can also have information about executed tasks.
Main Spark Concepts
Partitions
Spark's basic abstraction is the Resilient Distributed Dataset, or RDD.
That fragmentation is what enables Spark to execute in parallel, and the level of fragmentation is a function of the number of partitions of your RDD.
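A quick sketch (again assuming the sc created above) of how the partition count controls the split:
demo = sc.parallelize(range(12), 4)
print(demo.getNumPartitions())   # 4 partitions -> up to 4 parallel tasks
print(demo.glom().collect())     # the elements grouped by partition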
Caching
You will often hear: "Spark handles all data in memory".
This is tricky, and here is where the magic lies. Most of the time you will be working with metadata rather than with all the data, and computations are deferred until you need the results.
Storing those results, versus leaving them to be recomputed, has a big impact on response times. When you store the results, the RDD is said to be cached.
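A small sketch of caching in practice (using the sc created above):
nums = sc.parallelize(range(10000), 4).map(lambda x: x * x)
nums.cache()        # mark the RDD for caching; it is materialized on the first action
nums.count()        # the first action computes and stores the partitions
nums.sum()          # later actions reuse the cached partitions
nums.unpersist()    # free the storage when finished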
Shuffling
(from: https://0x0fff.com/spark-architecture-shuffle/)
(more about shuffling: https://spark.apache.org/docs/1.3.1/programming-guide.html#performance-impact)
(best practices: https://robertovitillo.com/2015/06/30/spark-best-practices/)
There are many different tasks that require shuffling the data across the cluster, for instance a table join: to join two tables on the field "id", you must be sure that all the data for the same values of "id" for both of the tables are stored in the same chunks.
Imagine the tables with integer keys ranging from 1 to 1,000,000. By storing the data in the same chunks I mean that, for instance, for both tables the values of the keys 1-100 are stored in a single partition/chunk; this way, instead of going through the whole second table for each partition of the first one, we can join partition with partition directly, because we know that the key values 1-100 are stored only in these two partitions. To achieve this, both tables should have the same number of partitions; this way their join requires much less computation. So now you can understand how important shuffling is.
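A hedged sketch of the co-partitioning idea with pair RDDs; partitioning both sides the same way lets the join match partition against partition:
left = sc.parallelize([(i, 'L%d' % i) for i in range(1000)]).partitionBy(8)
right = sc.parallelize([(i, 'R%d' % i) for i in range(1000)]).partitionBy(8)
joined = left.join(right)   # same partitioner on both sides, so the keys already line up
joined.take(3)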
Exercises
(from: http://blog.insightdatalabs.com/jupyter-on-apache-spark-step-by-step/)
Exercise 1: Check that SparkContext is loaded in your current environment.
Exercise 2: Create your first RDD with 20 partitions and check in the WebUI that the RDD has created a job, a stage and 20 partitions. The RDD must contain a list of 1000 integers starting from 0. Get the number of partitions using getNumPartitions().
(Hint 1: you can use sc.parallelize)
(Hint 2: check Spark API docs: http://spark.apache.org/docs/latest/api/python/pyspark.html#pyspark.SparkContext.parallelize)
Exercise 3: Get 5 elements of the RDD.
Exercise 4: Name the RDD as "my_rdd" and persist it into memory and disk serialized.
Exercise 5: Perform a transformation to group the numbers into the lowest 100s and count the total frequency for each bin.
Exercise 6: Browse the WebUI. And:
* identify the RDD generated in Exercise X and its job
* identify the job in Exercise X
* check that the RDD has been cached
* identify the job in Exercise X
Answer 1:
End of explanation
rdd = sc.parallelize([x for x in range(1000)],20)
rdd.getNumPartitions()
Explanation: Answer 2:
End of explanation
rdd.take(5)
Explanation: Answer 3:
End of explanation
rdd.setName("my_rdd").persist(pyspark.StorageLevel.MEMORY_AND_DISK_SER)
Explanation: Answer 4:
End of explanation
rdd.map(lambda r: ((r // 100) * 100, 1))\
.reduceByKey(lambda x,y: x+y)\
.collect()
Explanation: Answer 5:
End of explanation |
15,159 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Test. Confidence intervals for the mean
Step1: ะะปั 61 ะฑะพะปััะพะณะพ ะณะพัะพะดะฐ ะฒ ะะฝะณะปะธะธ ะธ ะฃัะปััะต ะธะทะฒะตััะฝั ััะตะดะฝัั ะณะพะดะพะฒะฐั ัะผะตััะฝะพััั ะฝะฐ 100000 ะฝะฐัะตะปะตะฝะธั (ะฟะพ ะดะฐะฝะฝัะผ 1958โ1964) ะธ ะบะพะฝัะตะฝััะฐัะธั ะบะฐะปััะธั ะฒ ะฟะธััะตะฒะพะน ะฒะพะดะต (ะฒ ัะฐัััั
ะฝะฐ ะผะธะปะปะธะพะฝ). ะงะตะผ ะฒััะต ะบะพะฝัะตะฝััะฐัะธั ะบะฐะปััะธั, ัะตะผ ะถััััะต ะฒะพะดะฐ. ะะพัะพะดะฐ ะดะพะฟะพะปะฝะธัะตะปัะฝะพ ะฟะพะดะตะปะตะฝั ะฝะฐ ัะตะฒะตัะฝัะต ะธ ัะถะฝัะต.
Step2: ะะพัััะพะนัะต 95% ะดะพะฒะตัะธัะตะปัะฝัะน ะธะฝัะตัะฒะฐะป ะดะปั ััะตะดะฝะตะน ะณะพะดะพะฒะพะน ัะผะตััะฝะพััะธ ะฒ ะฑะพะปััะธั
ะณะพัะพะดะฐั
. ะงะตะผั ัะฐะฒะฝะฐ ะตะณะพ ะฝะธะถะฝัั ะณัะฐะฝะธัะฐ? ะะบััะณะปะธัะต ะพัะฒะตั ะดะพ 4 ะทะฝะฐะบะพะฒ ะฟะพัะปะต ะดะตัััะธัะฝะพะน ัะพัะบะธ.
Step3: ะะฐ ะดะฐะฝะฝัั
ะธะท ะฟัะตะดัะดััะตะณะพ ะฒะพะฟัะพัะฐ ะฟะพัััะพะนัะต 95% ะดะพะฒะตัะธัะตะปัะฝัะน ะธะฝัะตัะฒะฐะป ะดะปั ััะตะดะฝะตะน ะณะพะดะพะฒะพะน ัะผะตััะฝะพััะธ ะฟะพ ะฒัะตะผ ัะถะฝัะผ ะณะพัะพะดะฐะผ. ะงะตะผั ัะฐะฒะฝะฐ ะตะณะพ ะฒะตัั
ะฝัั ะณัะฐะฝะธัะฐ? ะะบััะณะปะธัะต ะพัะฒะตั ะดะพ 4 ะทะฝะฐะบะพะฒ ะฟะพัะปะต ะดะตัััะธัะฝะพะน ัะพัะบะธ.
Step4: ะะฐ ัะตั
ะถะต ะดะฐะฝะฝัั
ะฟะพัััะพะนัะต 95% ะดะพะฒะตัะธัะตะปัะฝัะน ะธะฝัะตัะฒะฐะป ะดะปั ััะตะดะฝะตะน ะณะพะดะพะฒะพะน ัะผะตััะฝะพััะธ ะฟะพ ะฒัะตะผ ัะตะฒะตัะฝัะผ ะณะพัะพะดะฐะผ. ะะตัะตัะตะบะฐะตััั ะปะธ ััะพั ะธะฝัะตัะฒะฐะป ั ะฟัะตะดัะดััะธะผ? ะะฐะบ ะฒั ะดัะผะฐะตัะต, ะบะฐะบะพะน ะธะท ััะพะณะพ ะผะพะถะฝะพ ัะดะตะปะฐัั ะฒัะฒะพะด?
Step5: ะะตัะตัะตะบะฐัััั ะปะธ 95% ะดะพะฒะตัะธัะตะปัะฝัะต ะธะฝัะตัะฒะฐะปั ะดะปั ััะตะดะฝะตะน ะถัััะบะพััะธ ะฒะพะดั ะฒ ัะตะฒะตัะฝัั
ะธ ัะถะฝัั
ะณะพัะพะดะฐั
?
Step6: <b>
ะัะฟะพะผะฝะธะผ ัะพัะผัะปั ะดะพะฒะตัะธัะตะปัะฝะพะณะพ ะธะฝัะตัะฒะฐะปะฐ ะดะปั ััะตะดะฝะตะณะพ ะฝะพัะผะฐะปัะฝะพ ัะฐัะฟัะตะดะตะปัะฝะฝะพะน ัะปััะฐะนะฝะพะน ะฒะตะปะธัะธะฝั ั ะดะธัะฟะตััะธะตะน ฯ2 | Python Code:
import pandas as pd
import numpy as np
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"
Explanation: Test. Confidence intervals for the mean
End of explanation
water_data = pd.read_table('water.txt')
water_data.info()
water_data.describe()
water_data.head()
Explanation: ะะปั 61 ะฑะพะปััะพะณะพ ะณะพัะพะดะฐ ะฒ ะะฝะณะปะธะธ ะธ ะฃัะปััะต ะธะทะฒะตััะฝั ััะตะดะฝัั ะณะพะดะพะฒะฐั ัะผะตััะฝะพััั ะฝะฐ 100000 ะฝะฐัะตะปะตะฝะธั (ะฟะพ ะดะฐะฝะฝัะผ 1958โ1964) ะธ ะบะพะฝัะตะฝััะฐัะธั ะบะฐะปััะธั ะฒ ะฟะธััะตะฒะพะน ะฒะพะดะต (ะฒ ัะฐัััั
ะฝะฐ ะผะธะปะปะธะพะฝ). ะงะตะผ ะฒััะต ะบะพะฝัะตะฝััะฐัะธั ะบะฐะปััะธั, ัะตะผ ะถััััะต ะฒะพะดะฐ. ะะพัะพะดะฐ ะดะพะฟะพะปะฝะธัะตะปัะฝะพ ะฟะพะดะตะปะตะฝั ะฝะฐ ัะตะฒะตัะฝัะต ะธ ัะถะฝัะต.
End of explanation
mort_mean = water_data['mortality'].mean()
print('Mean mortality: %f' % mort_mean)
from statsmodels.stats.weightstats import _tconfint_generic
mort_mean_std = water_data['mortality'].std() / np.sqrt(water_data['mortality'].shape[0])
print('Mortality 95%% interval: %s' % str(_tconfint_generic(mort_mean, mort_mean_std, water_data['mortality'].shape[0] - 1,
0.05, 'two-sided')))
Explanation: ะะพัััะพะนัะต 95% ะดะพะฒะตัะธัะตะปัะฝัะน ะธะฝัะตัะฒะฐะป ะดะปั ััะตะดะฝะตะน ะณะพะดะพะฒะพะน ัะผะตััะฝะพััะธ ะฒ ะฑะพะปััะธั
ะณะพัะพะดะฐั
. ะงะตะผั ัะฐะฒะฝะฐ ะตะณะพ ะฝะธะถะฝัั ะณัะฐะฝะธัะฐ? ะะบััะณะปะธัะต ะพัะฒะตั ะดะพ 4 ะทะฝะฐะบะพะฒ ะฟะพัะปะต ะดะตัััะธัะฝะพะน ัะพัะบะธ.
End of explanation
water_data_south = water_data[water_data.location == 'South']
mort_mean_south = water_data_south['mortality'].mean()
print('Mean south mortality: %f' % mort_mean_south)
mort_mean_south_std = water_data_south['mortality'].std() / np.sqrt(water_data_south['mortality'].shape[0])
print('Mortality south 95%% interval: %s' % str(_tconfint_generic(mort_mean_south, mort_mean_south_std,
water_data_south['mortality'].shape[0] - 1,
0.05, 'two-sided')))
Explanation: Using the data from the previous question, build a 95% confidence interval for the mean annual mortality over all southern cities. What is its upper bound? Round the answer to 4 decimal places.
End of explanation
water_data_north = water_data[water_data.location == 'North']
mort_mean_north = water_data_north['mortality'].mean()
print('Mean north mortality: %f' % mort_mean_north)
mort_mean_north_std = water_data_north['mortality'].std() / np.sqrt(water_data_north['mortality'].shape[0])
print('Mortality north 95%% interval: %s' % str(_tconfint_generic(mort_mean_north, mort_mean_north_std,
water_data_north['mortality'].shape[0] - 1,
0.05, 'two-sided')))
Explanation: On the same data, build a 95% confidence interval for the mean annual mortality over all northern cities. Does this interval overlap with the previous one? What conclusion do you think can be drawn from this?
End of explanation
hardness_mean_south = water_data_south['hardness'].mean()
print('Mean south hardness: %f' % hardness_mean_south)
hardness_mean_north = water_data_north['hardness'].mean()
print('Mean north hardness: %f' % hardness_mean_north)
hardness_mean_south_std = water_data_south['hardness'].std() / np.sqrt(water_data_south['hardness'].shape[0])
print('Hardness south 95%% interval: %s' % str(_tconfint_generic(hardness_mean_south, hardness_mean_south_std,
water_data_south['hardness'].shape[0] - 1,
0.05, 'two-sided')))
hardness_mean_north_std = water_data_north['hardness'].std() / np.sqrt(water_data_north['hardness'].shape[0])
print('Hardness north 95%% interval: %s' % str(_tconfint_generic(hardness_mean_north, hardness_mean_north_std,
water_data_north['hardness'].shape[0] - 1,
0.05, 'two-sided')))
Explanation: Do the 95% confidence intervals for the mean water hardness in the northern and southern cities overlap?
End of explanation
from scipy import stats
np.ceil((stats.norm.ppf(1-0.05/2) / 0.1)**2)
Explanation: Recall the formula for the confidence interval for the mean of a normally distributed random variable with known variance $\sigma^2$:
$$\bar{X} \pm z_{1-\alpha/2}\,\frac{\sigma}{\sqrt{n}}$$
With $\sigma = 1$, what sample size is needed to estimate the mean to within $\pm 0.1$ at the 95% confidence level?
End of explanation |
15,160 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
First, load the UKDALE dataset into NILMTK. Here we are loading the HDF5 version of UKDALE which you can download by following the instructions on the UKDALE website.
Step1: Next, to speed up processing, we'll set a "window of interest" so NILMTK will only consider one month of data.
Step2: Get the ElecMeter associated with the Fridge in House 1
Step3: Now load the activations | Python Code:
dataset = nilmtk.DataSet('/data/mine/vadeec/merged/ukdale.h5')
Explanation: First, load the UKDALE dataset into NILMTK. Here we are loading the HDF5 version of UKDALE which you can download by following the instructions on the UKDALE website.
End of explanation
dataset.set_window("2014-06-01", "2014-07-01")
Explanation: Next, to speed up processing, we'll set a "window of interest" so NILMTK will only consider one month of data.
End of explanation
BUILDING = 1
elec = dataset.buildings[BUILDING].elec
fridge = elec['fridge']
Explanation: Get the ElecMeter associated with the Fridge in House 1:
End of explanation
activations = fridge.get_activations()
print("Number of activations =", len(activations))
activations[1].plot()
plt.show()
Explanation: Now load the activations:
End of explanation |
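As a small extension (a sketch, assuming each activation returned by get_activations() is a pandas Series of power readings indexed by timestamps), we can summarise how long the activations last and how much power they draw:
import pandas as pd
durations = [act.index[-1] - act.index[0] for act in activations]
mean_powers = [act.mean() for act in activations]
print("Median activation duration:", pd.Series(durations).median())
print("Median mean power (W):", pd.Series(mean_powers).median())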
15,161 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: working with numpy arrays
This notebook demonstrates how to use differences and sums to calculate derivatives and integrals and make some simple plots using the matplotlib module. If you haven't seen matlab-style array indexing before, take a look at
Step2: 1) Take the derivative of this function with python
Find the first derivative of $y = 6x^3 + 5$
Answer
Step3: What's going on above at x=0?
2) Find the definite integral of this function with python
In first year you learned that if you start with this sum | Python Code:
import numpy as np
from matplotlib import pyplot as plt
def cubeit(x,a,b):
    """construct cubic polynomial of the form
    y = ax^3 + b

    Parameters
    ----------
    x: vector or float
       x values
    a: float
       coefficient to multiply
    b: float
       coefficient to add
    """
return a*x**3 + b
Explanation: working with numpy arrays
This notebook demonstrates how to use differences and sums to calculate derivatives and integrals and make some simple plots using the matplotlib module. If you haven't seen matlab-style array indexing before, take a look at:
The Whirlwind section on lists -- search for "indexing and slicing"
The Pine section on numpy array slicing and addressing
For more on plotting see Chapter 5 of the Pine book
End of explanation
%matplotlib inline
#
# create 1000 x values from -5 to 5
#
spacing=0.01
x = np.arange(-5,5,spacing)
#
# find dx and dy
#
dx = np.diff(x)
y=cubeit(x,6,5)
dy = np.diff(y)
deriv = dy/dx
#
# compare to the exact answer
# note that deriv is one element shorter than x or y, so find
# the average value for each interval so they line up
#
exact = 18*x**2.
exact = (exact[1:] + exact[:-1])/2.
avgx=(x[1:] + x[:-1])/2.
fig, (ax1,ax2) = plt.subplots(1,2,figsize=(12,6))
ax1.plot(avgx,deriv,linewidth=4,alpha=0.6,label='python')
ax1.plot(avgx,exact,linestyle=':',linewidth=4,label='calculus')
ax1.legend()
ax1.set(xlabel='x',ylabel='dy/dx',title='approx and exact')
ax2.plot(avgx,(deriv - exact)/exact)
_=ax2.set(ylabel = '(approx - exact)/exact',xlabel='x',title='relative error')
Explanation: 1) Take the derivative of this function with python
Find the first derivative of $y = 6x^3 + 5$
Answer: $\frac{dy}{dx} = 18x^2$
In first year you learned that the first derivative was:
$\frac{dy}{dx} = \lim_{\Delta x \to 0} \frac{\Delta y}{\Delta x}$
So calculate $\Delta y$ and $\Delta x$ in python using numpy.diff and divide, does it agree with the calculus answer?
End of explanation
yavg = (y[1:] + y[:-1])/2.
np.sum(yavg*dx)
Explanation: What's going on above at x=0?
2) Find the definite integral of this function with python
In first year you learned that if you start with this sum:
$\sum\limits_{x= -5}^{x=+5} \left ( 6x^3 + 5 \right ) \Delta x$
and take $\lim_{\Delta x \to 0}$ you get the definite integral $I =\int_{-5}^5 \left ( 6 x^3 + 5 \right ) dx$
which Newton and Leibniz figured out resulted in $I=50$:
$\int_{-5}^5 6 x^3 + 5 dx = \left .\left ( (6/4)x^4 + 5x \right ) \right |_{-5}^5 = (6/4)*(5^4 - (-5)^4) + ((5\times 5) - ((-5)\times 5)) = 50$
So to do this integral in python, just use numpy.sum(yavg*dx). The only trick is that np.diff(x) creates a vector that is 1 shorter than x. So replace y with the average value of y in each dx interval so that you can multiply vectors of the same length.
End of explanation |
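As a cross-check (a sketch reusing the x and y arrays defined above), numpy's built-in trapezoid rule should give essentially the same estimate, and the exact answer is 50:
approx = np.trapz(y, x)   # trapezoid rule over the same grid
print("np.trapz estimate:", approx)
print("difference from the exact value of 50:", approx - 50.0)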
15,162 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Batch Normalization
One way to make deep networks easier to train is to use more sophisticated optimization procedures such as SGD+momentum, RMSProp, or Adam. Another strategy is to change the architecture of the network to make it easier to train. One idea along these lines is batch normalization which was recently proposed by [3].
The idea is relatively straightforward. Machine learning methods tend to work better when their input data consists of uncorrelated features with zero mean and unit variance. When training a neural network, we can preprocess the data before feeding it to the network to explicitly decorrelate its features; this will ensure that the first layer of the network sees data that follows a nice distribution. However even if we preprocess the input data, the activations at deeper layers of the network will likely no longer be decorrelated and will no longer have zero mean or unit variance since they are output from earlier layers in the network. Even worse, during the training process the distribution of features at each layer of the network will shift as the weights of each layer are updated.
The authors of [3] hypothesize that the shifting distribution of features inside deep neural networks may make training deep networks more difficult. To overcome this problem, [3] proposes to insert batch normalization layers into the network. At training time, a batch normalization layer uses a minibatch of data to estimate the mean and standard deviation of each feature. These estimated means and standard deviations are then used to center and normalize the features of the minibatch. A running average of these means and standard deviations is kept during training, and at test time these running averages are used to center and normalize features.
It is possible that this normalization strategy could reduce the representational power of the network, since it may sometimes be optimal for certain layers to have features that are not zero-mean or unit variance. To this end, the batch normalization layer includes learnable shift and scale parameters for each feature dimension.
[3] Sergey Ioffe and Christian Szegedy, "Batch Normalization
Step2: Batch normalization
Step3: Batch Normalization
Step4: Batch Normalization
Step5: Fully Connected Nets with Batch Normalization
Now that you have a working implementation for batch normalization, go back to your FullyConnectedNet in the file cs231n/classifiers/fc_net.py. Modify your implementation to add batch normalization.
Concretely, when the flag use_batchnorm is True in the constructor, you should insert a batch normalization layer before each ReLU nonlinearity. The outputs from the last layer of the network should not be normalized. Once you are done, run the following to gradient-check your implementation.
HINT
Step6: Batchnorm for deep networks
Run the following to train a six-layer network on a subset of 1000 training examples both with and without batch normalization.
Step7: Run the following to visualize the results from two networks trained above. You should find that using batch normalization helps the network to converge much faster.
Step8: Batch normalization and initialization
We will now run a small experiment to study the interaction of batch normalization and weight initialization.
The first cell will train 8-layer networks both with and without batch normalization using different scales for weight initialization. The second layer will plot training accuracy, validation set accuracy, and training loss as a function of the weight initialization scale. | Python Code:
# As usual, a bit of setup
import time
import numpy as np
import matplotlib.pyplot as plt
from cs231n.classifiers.fc_net import *
from cs231n.data_utils import get_CIFAR10_data
from cs231n.gradient_check import eval_numerical_gradient, eval_numerical_gradient_array
from cs231n.solver import Solver
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
def rel_error(x, y):
    """ returns relative error """
return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))
# Load the (preprocessed) CIFAR10 data.
data = get_CIFAR10_data()
for k, v in data.iteritems():
print '%s: ' % k, v.shape
Explanation: Batch Normalization
One way to make deep networks easier to train is to use more sophisticated optimization procedures such as SGD+momentum, RMSProp, or Adam. Another strategy is to change the architecture of the network to make it easier to train. One idea along these lines is batch normalization which was recently proposed by [3].
The idea is relatively straightforward. Machine learning methods tend to work better when their input data consists of uncorrelated features with zero mean and unit variance. When training a neural network, we can preprocess the data before feeding it to the network to explicitly decorrelate its features; this will ensure that the first layer of the network sees data that follows a nice distribution. However even if we preprocess the input data, the activations at deeper layers of the network will likely no longer be decorrelated and will no longer have zero mean or unit variance since they are output from earlier layers in the network. Even worse, during the training process the distribution of features at each layer of the network will shift as the weights of each layer are updated.
The authors of [3] hypothesize that the shifting distribution of features inside deep neural networks may make training deep networks more difficult. To overcome this problem, [3] proposes to insert batch normalization layers into the network. At training time, a batch normalization layer uses a minibatch of data to estimate the mean and standard deviation of each feature. These estimated means and standard deviations are then used to center and normalize the features of the minibatch. A running average of these means and standard deviations is kept during training, and at test time these running averages are used to center and normalize features.
It is possible that this normalization strategy could reduce the representational power of the network, since it may sometimes be optimal for certain layers to have features that are not zero-mean or unit variance. To this end, the batch normalization layer includes learnable shift and scale parameters for each feature dimension.
[3] Sergey Ioffe and Christian Szegedy, "Batch Normalization: Accelerating Deep Network Training by Reducing
Internal Covariate Shift", ICML 2015.
End of explanation
# Check the training-time forward pass by checking means and variances
# of features both before and after batch normalization
# Simulate the forward pass for a two-layer network
N, D1, D2, D3 = 200, 50, 60, 3
X = np.random.randn(N, D1)
W1 = np.random.randn(D1, D2)
W2 = np.random.randn(D2, D3)
a = np.maximum(0, X.dot(W1)).dot(W2)
print 'Before batch normalization:'
print ' means: ', a.mean(axis=0)
print ' stds: ', a.std(axis=0)
# Means should be close to zero and stds close to one
print 'After batch normalization (gamma=1, beta=0)'
a_norm, _ = batchnorm_forward(a, np.ones(D3), np.zeros(D3), {'mode': 'train'})
print ' mean: ', a_norm.mean(axis=0)
print ' std: ', a_norm.std(axis=0)
# Now means should be close to beta and stds close to gamma
gamma = np.asarray([1.0, 2.0, 3.0])
beta = np.asarray([11.0, 12.0, 13.0])
a_norm, _ = batchnorm_forward(a, gamma, beta, {'mode': 'train'})
print 'After batch normalization (nontrivial gamma, beta)'
print ' means: ', a_norm.mean(axis=0)
print ' stds: ', a_norm.std(axis=0)
# Check the test-time forward pass by running the training-time
# forward pass many times to warm up the running averages, and then
# checking the means and variances of activations after a test-time
# forward pass.
N, D1, D2, D3 = 200, 50, 60, 3
W1 = np.random.randn(D1, D2)
W2 = np.random.randn(D2, D3)
bn_param = {'mode': 'train'}
gamma = np.ones(D3)
beta = np.zeros(D3)
for t in xrange(50):
X = np.random.randn(N, D1)
a = np.maximum(0, X.dot(W1)).dot(W2)
batchnorm_forward(a, gamma, beta, bn_param)
bn_param['mode'] = 'test'
X = np.random.randn(N, D1)
a = np.maximum(0, X.dot(W1)).dot(W2)
a_norm, _ = batchnorm_forward(a, gamma, beta, bn_param)
# Means should be close to zero and stds close to one, but will be
# noisier than training-time forward passes.
print 'After batch normalization (test-time):'
print ' means: ', a_norm.mean(axis=0)
print ' stds: ', a_norm.std(axis=0)
Explanation: Batch normalization: Forward
In the file cs231n/layers.py, implement the batch normalization forward pass in the function batchnorm_forward. Once you have done so, run the following to test your implementation.
End of explanation
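For reference, a minimal sketch of the train-time computation (an illustration of the standard batchnorm equations, not the assignment's full implementation; it omits the 'test' branch, and the helper name batchnorm_forward_sketch is hypothetical):
def batchnorm_forward_sketch(x, gamma, beta, bn_param):
    eps = bn_param.get('eps', 1e-5)
    momentum = bn_param.get('momentum', 0.9)
    mu = x.mean(axis=0)                     # per-feature batch mean
    var = x.var(axis=0)                     # per-feature batch variance
    x_hat = (x - mu) / np.sqrt(var + eps)   # normalize
    out = gamma * x_hat + beta              # learnable scale and shift
    # keep running statistics for test-time use
    bn_param['running_mean'] = momentum * bn_param.get('running_mean', np.zeros_like(mu)) + (1 - momentum) * mu
    bn_param['running_var'] = momentum * bn_param.get('running_var', np.zeros_like(var)) + (1 - momentum) * var
    cache = (x_hat, mu, var, eps, gamma)
    return out, cache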
# Gradient check batchnorm backward pass
N, D = 4, 5
x = 5 * np.random.randn(N, D) + 12
gamma = np.random.randn(D)
beta = np.random.randn(D)
dout = np.random.randn(N, D)
bn_param = {'mode': 'train'}
fx = lambda x: batchnorm_forward(x, gamma, beta, bn_param)[0]
fg = lambda a: batchnorm_forward(x, gamma, beta, bn_param)[0]
fb = lambda b: batchnorm_forward(x, gamma, beta, bn_param)[0]
dx_num = eval_numerical_gradient_array(fx, x, dout)
da_num = eval_numerical_gradient_array(fg, gamma, dout)
db_num = eval_numerical_gradient_array(fb, beta, dout)
_, cache = batchnorm_forward(x, gamma, beta, bn_param)
dx, dgamma, dbeta = batchnorm_backward(dout, cache)
print 'dx error: ', rel_error(dx_num, dx)
print 'dgamma error: ', rel_error(da_num, dgamma)
print 'dbeta error: ', rel_error(db_num, dbeta)
Explanation: Batch Normalization: backward
Now implement the backward pass for batch normalization in the function batchnorm_backward.
To derive the backward pass you should write out the computation graph for batch normalization and backprop through each of the intermediate nodes. Some intermediates may have multiple outgoing branches; make sure to sum gradients across these branches in the backward pass.
Once you have finished, run the following to numerically check your backward pass.
End of explanation
N, D = 100, 500
x = 5 * np.random.randn(N, D) + 12
gamma = np.random.randn(D)
beta = np.random.randn(D)
dout = np.random.randn(N, D)
bn_param = {'mode': 'train'}
out, cache = batchnorm_forward(x, gamma, beta, bn_param)
t1 = time.time()
dx1, dgamma1, dbeta1 = batchnorm_backward(dout, cache)
t2 = time.time()
dx2, dgamma2, dbeta2 = batchnorm_backward_alt(dout, cache)
t3 = time.time()
print 'dx difference: ', rel_error(dx1, dx2)
print 'dgamma difference: ', rel_error(dgamma1, dgamma2)
print 'dbeta difference: ', rel_error(dbeta1, dbeta2)
print 'speedup: %.2fx' % ((t2 - t1) / (t3 - t2))
Explanation: Batch Normalization: alternative backward
In class we talked about two different implementations for the sigmoid backward pass. One strategy is to write out a computation graph composed of simple operations and backprop through all intermediate values. Another strategy is to work out the derivatives on paper. For the sigmoid function, it turns out that you can derive a very simple formula for the backward pass by simplifying gradients on paper.
Surprisingly, it turns out that you can also derive a simple expression for the batch normalization backward pass if you work out derivatives on paper and simplify. After doing so, implement the simplified batch normalization backward pass in the function batchnorm_backward_alt and compare the two implementations by running the following. Your two implementations should compute nearly identical results, but the alternative implementation should be a bit faster.
NOTE: You can still complete the rest of the assignment if you don't figure this part out, so don't worry too much if you can't get it.
End of explanation
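One common simplification (a sketch; the cache layout used here matches the forward sketch above and is an assumption, not the assignment's required interface) collapses the whole graph into a single expression for dx:
def batchnorm_backward_alt_sketch(dout, cache):
    x_hat, mu, var, eps, gamma = cache
    N = dout.shape[0]
    inv_std = 1.0 / np.sqrt(var + eps)
    dbeta = np.sum(dout, axis=0)
    dgamma = np.sum(dout * x_hat, axis=0)
    dxhat = dout * gamma
    dx = (inv_std / N) * (N * dxhat - np.sum(dxhat, axis=0) - x_hat * np.sum(dxhat * x_hat, axis=0))
    return dx, dgamma, dbeta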
N, D, H1, H2, C = 2, 15, 20, 30, 10
X = np.random.randn(N, D)
y = np.random.randint(C, size=(N,))
for reg in [0, 3.14]:
print 'Running check with reg = ', reg
model = FullyConnectedNet([H1, H2], input_dim=D, num_classes=C,
reg=reg, weight_scale=5e-2, dtype=np.float64,
use_batchnorm=True)
loss, grads = model.loss(X, y)
print 'Initial loss: ', loss
for name in sorted(grads):
f = lambda _: model.loss(X, y)[0]
grad_num = eval_numerical_gradient(f, model.params[name], verbose=False, h=1e-5)
print '%s relative error: %.2e' % (name, rel_error(grad_num, grads[name]))
if reg == 0: print
Explanation: Fully Connected Nets with Batch Normalization
Now that you have a working implementation for batch normalization, go back to your FullyConnectedNet in the file cs231n/classifiers/fc_net.py. Modify your implementation to add batch normalization.
Concretely, when the flag use_batchnorm is True in the constructor, you should insert a batch normalization layer before each ReLU nonlinearity. The outputs from the last layer of the network should not be normalized. Once you are done, run the following to gradient-check your implementation.
HINT: You might find it useful to define an additional helper layer similar to those in the file cs231n/layer_utils.py. If you decide to do so, do it in the file cs231n/classifiers/fc_net.py.
End of explanation
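One possible shape for the helper the hint mentions (a sketch assuming the affine_forward/affine_backward and relu_forward/relu_backward functions from cs231n/layers.py):
def affine_bn_relu_forward(x, w, b, gamma, beta, bn_param):
    a, fc_cache = affine_forward(x, w, b)
    a_bn, bn_cache = batchnorm_forward(a, gamma, beta, bn_param)
    out, relu_cache = relu_forward(a_bn)
    return out, (fc_cache, bn_cache, relu_cache)

def affine_bn_relu_backward(dout, cache):
    fc_cache, bn_cache, relu_cache = cache
    da_bn = relu_backward(dout, relu_cache)
    da, dgamma, dbeta = batchnorm_backward(da_bn, bn_cache)
    dx, dw, db = affine_backward(da, fc_cache)
    return dx, dw, db, dgamma, dbeta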
# Try training a very deep net with batchnorm
hidden_dims = [100, 100, 100, 100, 100]
num_train = 1000
small_data = {
'X_train': data['X_train'][:num_train],
'y_train': data['y_train'][:num_train],
'X_val': data['X_val'],
'y_val': data['y_val'],
}
weight_scale = 2e-2
bn_model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, use_batchnorm=True)
model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, use_batchnorm=False)
bn_solver = Solver(bn_model,
small_data,
num_epochs=10,
batch_size=50,
update_rule='adam',
optim_config={
'learning_rate': 1e-3,},
verbose=True,
print_every=200)
bn_solver.train()
solver = Solver(model,
small_data,
num_epochs=10,
batch_size=50,
update_rule='adam',
optim_config={
'learning_rate': 1e-3,
},
verbose=True,
print_every=200)
solver.train()
Explanation: Batchnorm for deep networks
Run the following to train a six-layer network on a subset of 1000 training examples both with and without batch normalization.
End of explanation
plt.subplot(3, 1, 1)
plt.title('Training loss')
plt.xlabel('Iteration')
plt.subplot(3, 1, 2)
plt.title('Training accuracy')
plt.xlabel('Epoch')
plt.subplot(3, 1, 3)
plt.title('Validation accuracy')
plt.xlabel('Epoch')
plt.subplot(3, 1, 1)
plt.plot(solver.loss_history, 'o', label='baseline')
plt.plot(bn_solver.loss_history, 'o', label='batchnorm')
plt.subplot(3, 1, 2)
plt.plot(solver.train_acc_history, '-o', label='baseline')
plt.plot(bn_solver.train_acc_history, '-o', label='batchnorm')
plt.subplot(3, 1, 3)
plt.plot(solver.val_acc_history, '-o', label='baseline')
plt.plot(bn_solver.val_acc_history, '-o', label='batchnorm')
for i in [1, 2, 3]:
plt.subplot(3, 1, i)
plt.legend(loc='upper center', ncol=4)
plt.gcf().set_size_inches(15, 15)
plt.show()
Explanation: Run the following to visualize the results from two networks trained above. You should find that using batch normalization helps the network to converge much faster.
End of explanation
# Try training a very deep net with batchnorm
hidden_dims = [50, 50, 50, 50, 50, 50, 50]
num_train = 1000
small_data = {
'X_train': data['X_train'][:num_train],
'y_train': data['y_train'][:num_train],
'X_val': data['X_val'],
'y_val': data['y_val'],
}
bn_solvers = {}
solvers = {}
weight_scales = np.logspace(-4, 0, num=20)
for i, weight_scale in enumerate(weight_scales):
print 'Running weight scale %d / %d' % (i + 1, len(weight_scales))
bn_model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, use_batchnorm=True)
model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, use_batchnorm=False)
bn_solver = Solver(bn_model,
small_data,
num_epochs=10,
batch_size=50,
update_rule='adam',
optim_config={'learning_rate': 1e-3,},
verbose=False,
print_every=200)
bn_solver.train()
bn_solvers[weight_scale] = bn_solver
solver = Solver(model,
small_data,
num_epochs=10,
batch_size=50,
update_rule='adam',
optim_config={
'learning_rate': 1e-3,
},
verbose=False,
print_every=200)
solver.train()
solvers[weight_scale] = solver
# Plot results of weight scale experiment
best_train_accs, bn_best_train_accs = [], []
best_val_accs, bn_best_val_accs = [], []
final_train_loss, bn_final_train_loss = [], []
for ws in weight_scales:
best_train_accs.append(max(solvers[ws].train_acc_history))
bn_best_train_accs.append(max(bn_solvers[ws].train_acc_history))
best_val_accs.append(max(solvers[ws].val_acc_history))
bn_best_val_accs.append(max(bn_solvers[ws].val_acc_history))
final_train_loss.append(np.mean(solvers[ws].loss_history[-100:]))
bn_final_train_loss.append(np.mean(bn_solvers[ws].loss_history[-100:]))
plt.subplot(3, 1, 1)
plt.title('Best val accuracy vs weight initialization scale')
plt.xlabel('Weight initialization scale')
plt.ylabel('Best val accuracy')
plt.semilogx(weight_scales, best_val_accs, '-o', label='baseline')
plt.semilogx(weight_scales, bn_best_val_accs, '-o', label='batchnorm')
plt.legend(ncol=2, loc='lower right')
plt.subplot(3, 1, 2)
plt.title('Best train accuracy vs weight initialization scale')
plt.xlabel('Weight initialization scale')
plt.ylabel('Best training accuracy')
plt.semilogx(weight_scales, best_train_accs, '-o', label='baseline')
plt.semilogx(weight_scales, bn_best_train_accs, '-o', label='batchnorm')
plt.legend()
plt.subplot(3, 1, 3)
plt.title('Final training loss vs weight initialization scale')
plt.xlabel('Weight initialization scale')
plt.ylabel('Final training loss')
plt.semilogx(weight_scales, final_train_loss, '-o', label='baseline')
plt.semilogx(weight_scales, bn_final_train_loss, '-o', label='batchnorm')
plt.legend()
plt.gcf().set_size_inches(10, 15)
plt.show()
Explanation: Batch normalization and initialization
We will now run a small experiment to study the interaction of batch normalization and weight initialization.
The first cell will train 8-layer networks both with and without batch normalization using different scales for weight initialization. The second layer will plot training accuracy, validation set accuracy, and training loss as a function of the weight initialization scale.
End of explanation |
15,163 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Notebook-8
Step1: At the danger of repeating ourselves (but to make the point!)
Step2: This doesn't work because you can't use a list (["key1",1]) as a key, though as you saw above you can use a list as a value. For more on the subject of (im)mutability checkout this SO answer ).
Accessing Dictionaries
Like lists, we access an element in a dictionary using a 'location' marked out by a pair of square brackets ([...]). The difference is that the index is no longer an integer indicating the position of the item that we want to access, but is a key in the key
Step3: Notice how now we just jump straight to the item we want? We don't need to think about "Was that the fourth item on the list? Or the fifth?" We just use a sensible key, and we can ask for the associated value directly.
A challenge for you!
How would you print out "2nd Value" from myDict?
Step4: When it comes to error messages, dicts and lists behave in similar ways. If you try to access a dictionary using a key that doesn't exist then Python raises an exception.
What is the name of the exception generated by the following piece of code? Can you find it the Official Docs?
Step5: Handy, no? Again, Python's error messages are giving you helpful clues about where the problem it's encountering might be! Up above we had a TypeError when we tried to create a key using a list. Here, we have a KeyError that tells us something must be wrong with using 99 as a key in myDict. In this case, it's that there is no key 99!
A challenge for you
Can you turn these lists into dictionary called capitalDict?
Step6: We already found out that we can easily convert between different types. Dictionary is also a data type, so we can convert these lists to dictionary just like we convert strings into integers using str and int. However, since we need to pair them, we need an additional function called zip to pair the keys and values.
Step7: How would you print out the capital city of Croatia from capitalDict?
Step8: Creating a Simple Phone Book
One of the simplest uses of a dictionary is as a phone book! (If you're not sure what a phone book is here's a handy guide and here's an example of someone using one).
So here are some useful contact numbers
Step9: Useful Dictionary Methods
We are going to see in the next couple of notebooks how to systematically access values in a dictionary (amongst other things). For now, let's also take in the fact the dictionaries also have utility methods similar to what we saw with the list. And as with the list, these methods are functions that only make sense when you're working with a dictionary, so they're bundled up in a way that makes them easy to use.
Let's say that you have forgotten what keys you put in your dictionary...
Step10: Or maybe you just need to access all of the values without troubling to ask for each key
Step11: Or maybe you even need to get them as pairs
Step12: A challenge for you
Can you access all the values of capitalDict from the previous challenge?
Step13: Are You On the List? (Part 2)
As with the list data type, you can check the presence or absence of a key in a dictionary, using the in / not in operators... but note that they only work on keys.
Step14: What Do You Do if You're Not On the List?
One challenge with dictionaries is that sometimes we have no real idea if a key exists or not. With a list, it's pretty easy to figure out whether or not an index exists because we can just ask Python to tell us the length of the list. So that makes it fairly easy to avoid having the list 'blow up' by throwing an exception.
It's rather harder for a dictionary though, so that's why we have the dedicated get() method
Step15: See how this works
Step16: So how would we access something inside the list returned from cityData[0]?
Why not try
Step17: A challenge for you
Can you retrieve and print the following from cityData
Step18: A Phonebook+
So that's an LoL (list-of-lists). Let's extend this idea to what we'll call Phonebook+ which will be a DoL (dictionary-of-lists). In other words, a phonebook that can do more than just give us phone numbers! We're going to build on the emergency phonebook example above.
Step19: A Challenge for you
See if you can create the rest of the eNumbers dictionary and then print out the Russian and British emergency numbers.
Step20: Dictionary-of-Dictionaries
OK, this is the last thing we're going to throw at you today โ getting your head around 'nested' lists and dictionaries is hard. Really hard. But it's the all-important first step to thinking about data the way that computer 'thinks' about it. This is really abstract
Step21: Try the following code in the code cell below
Step22: Now, figure out how to print
Step23: And print It has a density of 21153.899 persons per square km.
Hint
Step24: Do the same for London.
Step25: Code (Applied Geo-example)
Let's continue our trips around the world! This time though, we'll do things better, and instead of using a simple URL, we are going to use a real-word geographic data type, that you can use on a web-map or in your favourite GIS software.
If you look down below at the KCL_position variable you'll see that we're assigning it a complex and scary data structure. Don't be afraid! If you look closely enough you will notice that is just made out the "building blocks" that we've seen so far
Step26: And here we request a remote GeoJSON file (from url), convert to a dictionary, and place it in a map as a new layer.
Step27: As proof that behind this all is just a dictionary | Python Code:
myDict = {
"key1": "Value 1",
3: "3rd Value",
"key2": "2nd Value",
"Fourth Key": [4.0, 'Jon']
}
print(myDict)
Explanation: Notebook-8: Dictionaries
Lesson Content
In this lesson, we'll continue our exploration of more advanced data structures. Last time we took a peek at a way to represent ordered collections of items via lists.
This time we'll use dictionaries to create collections of unordered items (this is just an easy distinction - there's much more to it - but it's a good way to start wrapping your head around the subject).
In this Notebook
Creating dictionaries
Accessing elements of dictionaries
Methods of dictionaries
Dictionaries
Dictionaries are another data type in Python that, like lists, contains multiple items. The difference is that while lists use an index to access ordered items, dictionaries use 'keys' to access unordered values.
You can go back to our short video that talks about dictionaries:
Like lists, dictionaries are also found in other programming languages, often under a different name. For instance, Python dictionaries might be referred to elsewhere as "maps", "hashes", or "associative arrays").
According to the Official Docs:
It is best to think of a dictionary as an unordered set of key-value pairs, with the requirement that the keys are unique (within one dictionary). A pair of braces creates an empty dictionary: {}
In other words, dictionaries are not lists: instead of just a checklist, we now have a key and a value. We use the key to find the value. So a generic dictionary looks like this:
python
theDictionary = {
key1: value1,
key2: value2,
key3: value3,
...
}
Each key/value pair is linked by a ':', and each pair is separated by a ','. It doesn't really matter if you put everything on newlines (as we do here) or all on the same line. We're just doing it this way to make it easier to read.
Here's a more useful implementation of a dictionary:
End of explanation
# this will result in an error
myFaultyDict = {
["key1", 1]: "Value 1",
"key2": "2nd Value",
3: "3rd Value",
8.0: [5, 'jon']
}
Explanation: At the danger of repeating ourselves (but to make the point!): an important difference between dictionaries and lists is that dictionaries are unordered. Always remember that you have no idea where things are stored in a dictionary and you can't rely on indexing like you can with a list. From this perspective, a Python dictionary is not like a real dictionary (as a real dictionary presents the keys, i.e. words, in alphabetical order).
And notice too that every type of data can go into a dictionary: strings, integers, and floats. There's even a list in this dictionary ([4.0, 'Jon'])! The only constraint is that the key must be immutable; this means that it is a simple, static identifier and that can't change.
End of explanation
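To see the immutability rule from the other side (a small sketch that is not in the original notebook): a tuple, which is immutable, works perfectly well as a key where a list does not.
okDict = {
    ("key1", 1): "Value 1"   # a tuple is immutable, so this is a legal key
}
print(okDict[("key1", 1)])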
print(myDict["key1"])
print(myDict["Fourth Key"])
Explanation: This doesn't work because you can't use a list (["key1",1]) as a key, though as you saw above you can use a list as a value. For more on the subject of (im)mutability, check out this SO answer.
Accessing Dictionaries
Like lists, we access an element in a dictionary using a 'location' marked out by a pair of square brackets ([...]). The difference is that the index is no longer an integer indicating the position of the item that we want to access, but is a key in the key:value pair:
End of explanation
print(???)
print(myDict["key2"])
Explanation: Notice how now we just jump straight to the item we want? We don't need to think about "Was that the fourth item on the list? Or the fifth?" We just use a sensible key, and we can ask for the associated value directly.
A challenge for you!
How would you print out "2nd Value" from myDict?
End of explanation
print(myDict[99])
Explanation: When it comes to error messages, dicts and lists behave in similar ways. If you try to access a dictionary using a key that doesn't exist then Python raises an exception.
What is the name of the exception generated by the following piece of code? Can you find it in the Official Docs?
End of explanation
country = ['Costa Rica','Croatia','Cuba'] #keys
capital_city = ['San Jose','Zagreb','Havana'] #values
Explanation: Handy, no? Again, Python's error messages are giving you helpful clues about where the problem it's encountering might be! Up above we had a TypeError when we tried to create a key using a list. Here, we have a KeyError that tells us something must be wrong with using 99 as a key in myDict. In this case, it's that there is no key 99!
A challenge for you
Can you turn these lists into a dictionary called capitalDict?
End of explanation
capitalDict = ???(zip(???,???))
capitalDict = dict(zip(country,capital_city))
capitalDict
Explanation: We already found out that we can easily convert between different types. A dictionary is also a data type, so we can convert these lists into a dictionary just as we convert strings into integers using str and int. However, since we need to pair the keys with the values, we need an additional function, zip, to do the pairing.
End of explanation
print(???)
print(capitalDict['Croatia'])
Explanation: How would you print out the capital city of Croatia from capitalDict?
End of explanation
eNumbers = {
"IS": '112', # It's not very important here whether we use single- or double-quotes
"US": '911'
}
print("The Icelandic emergency number is " + eNumbers['IS'])
print("The American emergency number is " + eNumbers['US'])
Explanation: Creating a Simple Phone Book
One of the simplest uses of a dictionary is as a phone book! (If you're not sure what a phone book is here's a handy guide and here's an example of someone using one).
So here are some useful contact numbers:
1. American Emergency Number: 911
2. British Emergency Number: 999
3. Icelandic Emergency Number: 112
4. French Emergency Number: 112
5. Russian Emergency Number: 102
Now, how would you create a dictionary that allowed us to look up and print out an emergency phone number based on the two-character ISO country code? It's going to look a little like this:
python
eNumbers = {
...
}
print("The Icelandic emergency number is " + eNumbers['IS'])
print("The American emergency number is " + eNumbers['US'])
End of explanation
programmers = {
"Charles": "Babbage",
"Ada": "Lovelace",
"Alan": "Turing"
}
print(programmers.keys())
Explanation: Useful Dictionary Methods
We are going to see in the next couple of notebooks how to systematically access values in a dictionary (amongst other things). For now, let's also take in the fact that dictionaries also have utility methods similar to what we saw with the list. And as with the list, these methods are functions that only make sense when you're working with a dictionary, so they're bundled up in a way that makes them easy to use.
Let's say that you have forgotten what keys you put in your dictionary...
End of explanation
print(programmers.values())
Explanation: Or maybe you just need to access all of the values without troubling to ask for each key:
End of explanation
# Output is a list of key-value pairs!
print(programmers.items())
Explanation: Or maybe you even need to get them as pairs:
End of explanation
print(???)
print(capitalDict.values())
Explanation: A challenge for you
Can you access all the values of capitalDict from the previous challenge?
End of explanation
print("Charles" in programmers)
print("Babbage" in programmers)
print(True not in programmers)
Explanation: Are You On the List? (Part 2)
As with the list data type, you can check the presence or absence of a key in a dictionary, using the in / not in operators... but note that they only work on keys.
End of explanation
print(programmers.get("Lady Ada", "Are you sure you spelled that right?") )
Explanation: What Do You Do if You're Not On the List?
One challenge with dictionaries is that sometimes we have no real idea if a key exists or not. With a list, it's pretty easy to figure out whether or not an index exists because we can just ask Python to tell us the length of the list. So that makes it fairly easy to avoid having the list 'blow up' by throwing an exception.
It's rather harder for a dictionary though, so that's why we have the dedicated get() method: it not only allows us to fetch the value associated with a key, it also allows us to specify a default value in case the key does not exist:
End of explanation
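A common pattern that builds on get() (again, a small sketch that is not part of the original notebook) is updating a count without worrying whether the key exists yet:
stock = {}
stock['apples'] = stock.get('apples', 0) + 3   # works even though 'apples' wasn't there yet
stock['apples'] = stock.get('apples', 0) + 2
print(stock['apples'])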
# Format: city, country, population, area (km^2)
cityData = [
['London','U.K.',8673713,1572],
['Paris','France',2229621,105],
['Washington, D.C.','U.S.A.',672228,177],
['Abuja','Nigeria',1235880,1769],
['Beijing','China',21700000,16411],
]
print(cityData[0])
Explanation: See how this works: the key doesn't exist, but unlike what happened when we asked for myDict[99] we don't get an exception, we get the default value specified as the second input to the method get.
So you've learned two things here: that functions can take more than one input (this one takes both the key that we're looking for, and value to return if Python can't find the key); and that different types (or classes) of data have different methods (there's no get for lists).
Lists of Lists, Dictionaries of Lists, Dictionaries of Dictionaries... Oh my!
OK, this is where it's going to get a little weird but you're also going to see how programming is a little like Lego: once you get the building blocks, you can make lots of cool/strange/useful contraptions from some pretty simple concepts.
Remember that a list or dictionary can store anything: so the first item in your list could itself be a list! For most people starting out on programming, this is the point where their brain starts hurting (it happened to us) and you might want to throw up your hands in frustration thinking "I'm never going to understand this!" But if you stick with it, you will.
And this is really the start of the power of computation.
A Data Set of City Attributes
Let's start out with what some (annoying) people would call a 'trivial' example of how a list-of-lists (LoLs, though most people aren't laughing) can be useful. Let's think through what's going on below: what happens if we write cityData[0]?
End of explanation
print(cityData[1][1])
print(cityData[4][3])
print(cityData[2][0])
Explanation: So how would we access something inside the list returned from cityData[0]?
Why not try:
python
cityData[0][1]
See if you can figure out how to retrieve and print the following from cityData:
1. France
2. 16411
3. Washington, D.C.
Type the code into the coding area below...
End of explanation
print(???)
print(???)
print(???)
print(cityData[3][1])
print(cityData[0][2])
print(cityData[2][3])
Explanation: A challenge for you
Can you retrieve and print the following from cityData:
1. Nigeria
2. 8673713
3. 177
End of explanation
# American Emergency Number: 911
# British Emergency Number: 999
# Icelandic Emergency Number: 112
# French Emergency Number: 112
# Russian Emergency Number: 102
eNumbers = {
'IS': ['Icelandic',112],
'US': ['American',911],
'FR': ['French',112],
'RU': ['Russian',102],
'UK': ['British',999]
}
print("The " + eNumbers['IS'][0] + " emergency number is " + str(eNumbers['IS'][1]))
print("The " + eNumbers['US'][0] + " emergency number is " + str(eNumbers['US'][1]))
print("The " + eNumbers['FR'][0] + " emergency number is " + str(eNumbers['FR'][1]))
Explanation: A Phonebook+
So that's an LoL (list-of-lists). Let's extend this idea to what we'll call Phonebook+ which will be a DoL (dictionary-of-lists). In other words, a phonebook that can do more than just give us phone numbers! We're going to build on the emergency phonebook example above.
End of explanation
print("The " + ??? + " emergency number is " + ???)
print("The " + ??? + " emergency number is " + ???)
print("The " + eNumbers['RU'][0] + " emergency number is " + str(eNumbers['RU'][1]))
print("The " + eNumbers['UK'][0] + " emergency number is " + str(eNumbers['UK'][1]))
Explanation: A Challenge for you
See if you can create the rest of the eNumbers dictionary and then print out the Russian and British emergency numbers.
End of explanation
cityData2 = {
'London' : {
'population': 8673713,
'area': 1572,
'location': [51.507222, -0.1275],
'country': {
'ISO2': 'UK',
'Full': 'United Kingdom',
},
},
'Paris' : {
'population': 2229621,
'area': 105.4,
'location': [48.8567, 2.3508],
'country': {
'ISO2': 'FR',
'Full': 'France',
},
}
}
Explanation: Dictionary-of-Dictionaries
OK, this is the last thing we're going to throw at you today: getting your head around 'nested' lists and dictionaries is hard. Really hard. But it's the all-important first step to thinking about data the way that a computer 'thinks' about it. This is really abstract: something that you access by keys, which in turn gives you access to other keys... it's got a name: recursion. And it's probably one of the cleverest things about computing.
Here's a bit of a complex DoD, combined with a DoL, and other nasties:
End of explanation
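When a nested structure like cityData2 gets hard to read, one handy trick (a sketch; the json module is used again later in this notebook) is to pretty-print it:
import json
print(json.dumps(cityData2, indent=2))   # shows the nesting visually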
#your code here
print(cityData2['Paris'])
print(cityData2['Paris']['country']['ISO2'])
print(cityData2['Paris']['location'][0])
Explanation: Try the following code in the code cell below:
python
print(cityData2['Paris'])
print(cityData2['Paris']['country']['ISO2'])
print(cityData2['Paris']['location'][0])
End of explanation
print("The population of Paris, the capital of " + str(cityData2['Paris']['country']['Full']) + " " \
+ "(" + ??? + ") " + "is "+ ??? + ".")
print("The population of Paris, the capital of " + str(cityData2['Paris']['country']['Full']) + " " \
+ "(" + str(cityData2['Paris']['country']['ISO2']) + ") " + "is "+ str(cityData2['Paris']['population']) + ".")
Explanation: Now, figure out how to print:
The population of Paris, the capital of France (FR), is 2229621.
End of explanation
print("It has a density of " + ???)
print("It has a density of " + str(cityData2['Paris']['population'] / cityData2['Paris']['area'] ))
Explanation: And print It has a density of 21153.899 persons per square km.
Hint: to calculate density, divide population with area.
End of explanation
print(???)
# Note that we can tweak the formatting a bit: Python is smart
# enough to understand that if you have a '+' on the end of a
# string and there next line is also a string then it'll
# continue to concatenate the string...
print("The population of " + 'London' + ", the capital of " +
cityData2['London']['country']['Full'] + " (" + cityData2['London']['country']['ISO2'] + "), is " +
str(cityData2['London']['population']) + ". It has a density of " +
str(cityData2['London']['population']/cityData2['London']['area']) + " persons per square km")
# But a _better_ way to do this might be one in which we don't
# hard-code 'London' into the output -- by changing the variable
# 'c' to Paris we can change the output completely...
c = 'Paris'
cd = cityData2[c]
print("The population of " + c + ", the capital of " +
cd['country']['Full'] + " (" + cd['country']['ISO2'] + "), is " +
str(cd['population']) + ". It has a density of " +
"{0:8.1f}".format(cd['population']/cd['area']) + " persons per square km")
Explanation: Do the same for London.
End of explanation
# Don't worry about the following lines
# I'm simply requesting some modules to
# have additional functions at my disposal
# which usually are not immediately available
import json
from ipyleaflet import Map, GeoJSON, basemaps
# King's College coordinates
# What format are they in? Does it seem appropriate?
# How would you convert them back to numbers?
longitude = -0.11596798896789551
latitude = 51.51130657591914
# Set this up as a coordinate pair
KCL_Coords = [longitude, latitude ]
# How can you assign KCLCoords to
# the key KCLGeometry["coordinates"]?
KCL_Geometry = {
"type": "Point",
"coordinates": KCL_Coords
}
KCL_Position = {
"type": "FeatureCollection",
"features": [
{
"type": "Feature",
"properties": {
"marker-color": "#7e7e7e",
"marker-size": "medium",
"marker-symbol": "building",
"name": "KCL"
},
"geometry": KCL_Geometry
}
]
}
# OUTPUT
# -----------------------------------------------------------
# I'm justing using the "imported" module to print the output
# in a nice and formatted way
print(json.dumps(KCL_Position, indent=4))
# We can also show this in Jupyter directly
# (it won't show up in the PDF version though)
m = Map(center = (51.51, -0.10), zoom=12, min_zoom=5, max_zoom=20,
basemap=basemaps.OpenTopoMap)
geo = GeoJSON(data=KCL_Position)
m.add_layer(geo)
m
Explanation: Code (Applied Geo-example)
Let's continue our trips around the world! This time though, we'll do things better, and instead of using a simple URL, we are going to use a real-word geographic data type, that you can use on a web-map or in your favourite GIS software.
If you look down below at the KCL_position variable you'll see that we're assigning it a complex and scary data structure. Don't be afraid! If you look closely enough you will notice that is just made out the "building blocks" that we've seen so far: floats, lists, strings..all wrapped comfortably in a cosy dictionary!
This is simply a formalised way to represent a geographic marker (a pin on the map!) in a format called GeoJSON.
According to the awesome Lizy Diamond
GeoJSON is an open and popular geographic data format commonly used in web applications. It is an extension of a format called JSON, which stands for JavaScript Object Notation. Basically, JSON is a table turned on its side. GeoJSON extends JSON by adding a section called "geometry" such that you can define coordinates for the particular object (point, line, polygon, multi-polygon, etc). A point in a GeoJSON file might look like this:
{
"type": "Feature",
"geometry": {
"type": "Point",
"coordinates": [
-122.65335738658904,
45.512083676585156
]
},
"properties": {
"name": "Hungry Heart Cupcakes",
"address": "1212 SE Hawthorne Boulevard",
"website": "http://www.hungryheartcupcakes.com",
"gluten free": "no"
}
}
GeoJSON files have to have both a "geometry" section and a "properties" section. The "geometry" section houses the geographic information of the feature (its location and type) and the "properties" section houses all of the descriptive information about the feature (like fields in an attribute table). Source
Now, in order to have our first "webmap", we have to re-create such GeoJSON structure.
As you can see there are two variables containing King's College Longitude/Latitude coordinate position. Unfortunately, they are in the wrong data type. Also, the variable longitude is not included in the list KCLCoords and the list itself is not assigned as a value to the KCLGeometrydictionary.
Take all the necessary steps to fix the code, using the functions we've seen so far.
End of explanation
import json
import random
import requests
from ipyleaflet import Map, GeoJSON
url = 'https://github.com/jupyter-widgets/ipyleaflet/raw/master/examples/europe_110.geo.json'
r = requests.get(url)
d = r.content.decode("utf-8")
j = json.loads(d)
def random_color(feature):
return {
'color': 'black',
'fillColor': random.choice(['red', 'yellow', 'green', 'orange']),
}
m = Map(center=(50.6252978589571, 0.34580993652344), zoom=3)
geo_json = GeoJSON(
data=j,
style={
'opacity': 1, 'dashArray': '9', 'fillOpacity': 0.1, 'weight': 1
},
hover_style={
'color': 'white', 'dashArray': '0', 'fillOpacity': 0.5
},
style_callback=random_color
)
m.add_layer(geo_json)
m
Explanation: And here we request a remote GeoJSON file (from url), convert to a dictionary, and place it in a map as a new layer.
End of explanation
print(json.dumps(j, indent=4))
Explanation: As proof that behind this all is just a dictionary:
End of explanation |
15,164 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Possible Solution
Step1: Explaining my code
Okay guys, my code is a bit complicated to understand at first glance, but I'm going to talk you through it.
board_temp = ["B"] * bomb_count + [non_bomb_character] * (row * col-bomb_count)
So this line of code looks like a monster, but actually once you think about it is not so difficult to understand. Basically we start by making a list containing "B" characters, the number of "B"'s we add to the list is dependant on the number of bombs the board is supposed to have (which is given by the bomb count argument).
Step2: The next step is to multiply the row by the col to get the total number of squares our board should have. We subtract the bomb_count from this figure because we have already added those to the board and we do not wish to 'double count'.
For example, if bomb_count is 10 and our board is 10x10 then we want to add 90 non-bomb characters to the list (10*10 - 10 = 90). The result is a list of length 100, 10 of which are bombs.
Step3: The next two lines of code are | Python Code:
import random
def build_board(num_rows, num_cols, bomb_count=0, non_bomb_character="-"):
board_temp = ["B"] * bomb_count + [non_bomb_character] * (num_rows * num_cols - bomb_count)
if bomb_count:
random.shuffle(board_temp)
board = []
for i in range(0, num_rows*num_cols, num_cols):
board.append(board_temp[i:i+num_cols])
return board
# Running the tests...
test_board()
# Note: if you receive an error message saying test_board is not found,
# try hitting the run button on the test_board cell and try again.
Explanation: Possible Solution
End of explanation
bomb_count = 3
["B"] * bomb_count
Explanation: Explaining my code
Okay guys, my code is a bit complicated to understand at first glance, but I'm going to talk you through it.
board_temp = ["B"] * bomb_count + [non_bomb_character] * (row * col-bomb_count)
So this line of code looks like a monster, but actually once you think about it, it is not so difficult to understand. Basically we start by making a list containing "B" characters; the number of "B"s we add to the list is dependent on the number of bombs the board is supposed to have (which is given by the bomb_count argument).
End of explanation
row = 2
col = 2
b_count = 1
part_one_of_list = ["B"] * b_count
part_two_of_list = ["-"] * (row * col-b_count)
our_list = part_one_of_list + part_two_of_list
print(our_list) # --> 2x2 grid with 1 bomb.
Explanation: The next step is to multiply the row by the col to get the total number of squares our board should have. We subtract the bomb_count from this figure because we have already added those to the board and we do not wish to 'double count'.
For example, if bomb_count is 10 and our board is 10x10 then we want to add 90 non-bomb characters to the list (10*10 - 10 = 90). The result is a list of length 100, 10 of which are bombs.
End of explanation
## Create a 1d board
board_temp = [str(i).zfill(2) for i in range(1,21)]
print(board_temp)
row = 5
col = 4
# print board with dims (5, 4)
for i in range(0, row*col, col ):
print(board_temp[i : i+col])
row = 3
col = 9
# print board with dims (3, 9)
for i in range(0, row*col, col ):
print(board_temp[i : i+col])
row = 6
col = 2
# print board with dims (6, 2)
for i in range(0, row*col, col ):
print(board_temp[i : i+col])
Explanation: The next two lines of code are:
if bomb_count:
random.shuffle(board_temp)
So this code shuffles the list (but only if bombs >= 1), we take a list with all the bombs at the start and we jumble them around.
board = []
for i in range(0, row*col, col ):
board.append(board_temp[i : i+col])
So this is the most complex part of my function, what are we doing here?
We have a range function, which starts at 0, ends at row * col and has a step of size col. Next up, we use those values to slice the temp_board we made earlier. In effect this is creating a 'moving window', and this gives us our board with the correct dimensions.
Lets try to visualise what is happening here by running it a few times with print statement:
End of explanation |
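As one last sanity check (a sketch, not part of the original write-up), we can build a board with build_board and confirm both its shape and its bomb count:
board = build_board(4, 6, bomb_count=5)
print(len(board), len(board[0]))               # 4 rows of 6 columns
print(sum(row.count("B") for row in board))    # exactly 5 bombs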
15,165 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Title
Step1: Create Population Data From Non-Normal Distribution
Step2: View the True Mean Of Population
Step3: Take A Sample Mean, Repeat 1000 Times
Step4: Plot The Sample Means Of All 1000 Samples
Step5: This is the critical chart: remember that the population distribution was uniform, yet this distribution is approaching normality. This is the key point of the central limit theorem, and the reason we can assume sample means are not biased.
View The Mean Sample Mean
Step6: Compare To True Mean | Python Code:
# Import packages
import pandas as pd
import numpy as np
# Set matplotlib as inline
%matplotlib inline
Explanation: Title: Demonstrate The Central Limit Theorem
Slug: demonstrate_the_central_limit_theorem
Summary: Python introduction to the central limit theorem
Date: 2016-05-01 12:00
Category: Statistics
Tags: Basics
Authors: Chris Albon
Preliminaries
End of explanation
# Create an empty dataframe
population = pd.DataFrame()
# Create an column that is 10000 random numbers drawn from a uniform distribution
population['numbers'] = np.random.uniform(0,10000,size=10000)
# Plot a histogram of the score data.
# This confirms the data is not a normal distribution.
population['numbers'].hist(bins=100)
Explanation: Create Population Data From Non-Normal Distribution
End of explanation
# View the mean of the numbers
population['numbers'].mean()
Explanation: View the True Mean Of Population
End of explanation
# Create a list
sampled_means = []
# For 1000 times,
for i in range(0,1000):
# Take a random sample of 100 rows from the population, take the mean of those rows, append to sampled_means
sampled_means.append(population.sample(n=100).mean().values[0])
Explanation: Take A Sample Mean, Repeat 1000 Times
End of explanation
# Plot a histogram of sampled_means.
# It is clearly normally distributed and centered around 5000
pd.Series(sampled_means).hist(bins=100)
Explanation: Plot The Sample Means Of All 1000 Samples
End of explanation
# View the mean of the sampled_means
pd.Series(sampled_means).mean()
Explanation: This is the critical chart: remember that the population distribution was uniform; however, this distribution of sample means is approaching normality. This is the key point of the central limit theorem, and the reason we can assume sample means are not biased.
View The Mean Sample Mean
End of explanation
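As an extra illustrative check (not part of the original notebook), the central limit theorem also predicts the spread of the sample means: it should be close to the population standard deviation divided by the square root of the sample size (here 100).
# Compare the observed spread of the sample means to the theoretical standard error
theoretical_se = population['numbers'].std() / np.sqrt(100)
observed_se = pd.Series(sampled_means).std()
print('Theoretical standard error: %f' % theoretical_se)
print('Observed spread of the sample means: %f' % observed_se)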
# Subtract Mean Sample Mean From True Population Mean
error = population['numbers'].mean() - pd.Series(sampled_means).mean()
# Print
print('The Mean Sample Mean is only %f different from the True Population mean!' % error)
Explanation: Compare To True Mean
End of explanation |
15,166 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Graph format
The EDeN library allows the vectorization of graphs, i.e. the transformation of graphs into sparse vectors.
The graphs that can be processed by the EDeN library have the following restrictions
Step1: Build graphs and then display them
Step2: Create a vector representation
Step3: 2D plot using OneClass classifier to identify density curves
Step4: Compute pairwise similarity matrix | Python Code:
%matplotlib inline
import pylab as plt
import networkx as nx
G=nx.Graph()
G.add_node(0, label='A')
G.add_node(1, label='B')
G.add_node(2, label='C')
G.add_edge(0,1, label='x')
G.add_edge(1,2, label='y')
G.add_edge(2,0, label='z')
from eden.util import display
print display.serialize_graph(G)
from eden.util import display
display.draw_graph(G, size=15, node_size=1500, font_size=24, node_border=True, size_x_to_y_ratio=3)
G=nx.Graph()
G.add_node(0, label=[0,0,.1])
G.add_node(1, label=[0,.1,0])
G.add_node(2, label=[.1,0,0])
G.add_edge(0,1, label='x')
G.add_edge(1,2, label='y')
G.add_edge(2,0, label='z')
display.draw_graph(G, size=15, node_size=1500, font_size=24, node_border=True, size_x_to_y_ratio=3)
G=nx.Graph()
G.add_node(0, label={'A':1, 'B':2, 'C':3})
G.add_node(1, label={'A':1, 'B':2, 'D':3})
G.add_node(2, label={'A':1, 'D':2, 'E':3})
G.add_edge(0,1, label='x')
G.add_edge(1,2, label='y')
G.add_edge(2,0, label='z')
display.draw_graph(G, size=15, node_size=1500, font_size=24, node_border=True, size_x_to_y_ratio=3)
G=nx.Graph()
G.add_node(0, label='A')
G.add_node(1, label='B')
G.add_node(2, label='C')
G.add_node(3, label='D')
G.add_node(4, label='E')
G.add_node(5, label='F')
G.add_edge(0,1, label='x')
G.add_edge(0,2, label='y')
G.add_edge(1,3, label='z', nesting=True)
G.add_edge(0,3, label='z', nesting=True)
G.add_edge(2,3, label='z', nesting=True)
G.add_edge(3,4, label='k')
G.add_edge(3,5, label='j')
display.draw_graph(G, size=15, node_size=1500, font_size=24, node_border=True, size_x_to_y_ratio=3, prog='circo')
Explanation: Graph format
The EDeN library allows the vectorization of graphs, i.e. the transformation of graphs into sparse vectors.
The graphs that can be processed by the EDeN library have the following restrictions:
- the graphs are implemented as networkx graphs
- nodes and edges have identifiers: the following identifiers are used as reserved words
1. label
2. weight
3. entity
4. nesting
- nodes and edges must have the 'label' attribute
- the 'label' attribute can be one of the following types:
  1. string
  2. vector
  3. dictionary
- strings are used to represent categorical values; vectors (lists of floats) are used to represent dense numeric labels; dictionaries are used to represent sparse vectors: keys are of string type and values are of type float
- nodes and edges can have a 'weight' attribute of type float (a short sketch using 'weight' follows after this explanation)
- nodes can have an 'entity' attribute of type string
- nesting edges must have a 'nesting' attribute of type boolean set to True
End of explanation
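For instance, a minimal sketch of a graph that also uses the optional 'weight' attribute (purely illustrative; it is not one of the example graphs built below):
import networkx as nx
Gw = nx.Graph()
Gw.add_node(0, label='A', weight=1.0)
Gw.add_node(1, label='B', weight=0.5)
Gw.add_edge(0, 1, label='x', weight=2.0)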
import networkx as nx
graph_list = []
G=nx.Graph()
G.add_node(0, label='A', entity='CATEG')
G.add_node(1, label='B', entity='CATEG')
G.add_node(2, label='C', entity='CATEG')
G.add_edge(0,1, label='a', entity='CATEG_EDGE')
G.add_edge(1,2, label='b', entity='CATEG_EDGE')
graph_list += [G.copy()]
G=nx.Graph()
G.add_node(0, label='A', entity='CATEG')
G.add_node(1, label='B', entity='CATEG')
G.add_node(2, label='X', entity='CATEG')
G.add_edge(0,1, label='a', entity='CATEG_EDGE')
G.add_edge(1,2, label='b', entity='CATEG_EDGE')
graph_list += [G.copy()]
G=nx.Graph()
G.add_node(0, label='A', entity='CATEG')
G.add_node(1, label='B', entity='CATEG')
G.add_node(2, label='X', entity='CATEG')
G.add_edge(0,1, label='x', entity='CATEG_EDGE')
G.add_edge(1,2, label='x', entity='CATEG_EDGE')
graph_list += [G.copy()]
G=nx.Graph()
G.add_node(0, label='X', entity='CATEG')
G.add_node(1, label='X', entity='CATEG')
G.add_node(2, label='X', entity='CATEG')
G.add_edge(0,1, label='x', entity='CATEG_EDGE')
G.add_edge(1,2, label='x', entity='CATEG_EDGE')
graph_list += [G.copy()]
G=nx.Graph()
G.add_node(0, label=[1,0,0], entity='VEC')
G.add_node(1, label=[0,1,0], entity='VEC')
G.add_node(2, label=[0,0,1], entity='VEC')
G.add_edge(0,1, label='a', entity='CATEG_EDGE')
G.add_edge(1,2, label='b', entity='CATEG_EDGE')
graph_list += [G.copy()]
G=nx.Graph()
G.add_node(0, label=[1,1,0], entity='VEC')
G.add_node(1, label=[0,1,1], entity='VEC')
G.add_node(2, label=[0,0,1], entity='VEC')
G.add_edge(0,1, label='a', entity='CATEG_EDGE')
G.add_edge(1,2, label='b', entity='CATEG_EDGE')
graph_list += [G.copy()]
G=nx.Graph()
G.add_node(0, label=[1,0.1,0.2], entity='VEC')
G.add_node(1, label=[0.3,1,0.4], entity='VEC')
G.add_node(2, label=[0.5,0.6,1], entity='VEC')
G.add_edge(0,1, label='a', entity='CATEG_EDGE')
G.add_edge(1,2, label='b', entity='CATEG_EDGE')
graph_list += [G.copy()]
G=nx.Graph()
G.add_node(0, label=[0.1,0.2,0.3], entity='VEC')
G.add_node(1, label=[0.4,0.5,0.6], entity='VEC')
G.add_node(2, label=[0.7,0.8,0.9], entity='VEC')
G.add_edge(0,1, label='a', entity='CATEG_EDGE')
G.add_edge(1,2, label='b', entity='CATEG_EDGE')
graph_list += [G.copy()]
G=nx.Graph()
G.add_node(0, label={'A':1, 'B':1, 'C':1}, entity='SPVEC')
G.add_node(1, label={'a':1, 'B':1, 'C':1}, entity='SPVEC')
G.add_node(2, label={'a':1, 'b':1, 'C':1}, entity='SPVEC')
G.add_edge(0,1, label='a', entity='CATEG_EDGE')
G.add_edge(1,2, label='b', entity='CATEG_EDGE')
graph_list += [G.copy()]
G=nx.Graph()
G.add_node(0, label={'A':1, 'C':1, 'D':1}, entity='SPVEC')
G.add_node(1, label={'a':1, 'C':1, 'D':1}, entity='SPVEC')
G.add_node(2, label={'a':1, 'C':1, 'D':1}, entity='SPVEC')
G.add_edge(0,1, label='a', entity='CATEG_EDGE')
G.add_edge(1,2, label='b', entity='CATEG_EDGE')
graph_list += [G.copy()]
G=nx.Graph()
G.add_node(0, label={'A':1, 'D':1, 'E':1}, entity='SPVEC')
G.add_node(1, label={'a':1, 'D':1, 'E':1}, entity='SPVEC')
G.add_node(2, label={'a':1, 'D':1, 'E':1}, entity='SPVEC')
G.add_edge(0,1, label='a', entity='CATEG_EDGE')
G.add_edge(1,2, label='b', entity='CATEG_EDGE')
graph_list += [G.copy()]
G=nx.Graph()
G.add_node(0, label={'A':1, 'B':1, 'C':1, 'D':1, 'E':1}, entity='SPVEC')
G.add_node(1, label={'a':1, 'B':1, 'C':1, 'D':1, 'E':1}, entity='SPVEC')
G.add_node(2, label={'a':1, 'b':1, 'C':1, 'D':1, 'E':1}, entity='SPVEC')
G.add_edge(0,1, label='a', entity='CATEG_EDGE')
G.add_edge(1,2, label='b', entity='CATEG_EDGE')
graph_list += [G.copy()]
from eden.util import display
for g in graph_list:
display.draw_graph(g, size=5, node_size=800, node_border=1, layout='shell', secondary_vertex_label = 'entity')
Explanation: Build graphs and then display them
End of explanation
%%time
from eden.graph import Vectorizer
vectorizer = Vectorizer(complexity=2, n=4 )
vectorizer.fit(graph_list)
from itertools import islice
X = vectorizer.transform(islice(graph_list,0,12))
print 'Instances: %d \nFeatures: %d with an avg of %d features per instance' % (X.shape[0], X.shape[1], X.getnnz()/X.shape[0])
from sklearn.svm import OneClassSVM
import numpy as np
def plot2D(X_reduced,labels):
size=11
plt.figure(figsize=(size,size))
#make mesh
x_min, x_max = X_reduced[:, 0].min(), X_reduced[:, 0].max()
y_min, y_max = X_reduced[:, 1].min(), X_reduced[:, 1].max()
step_num = 100
h = min( ( x_max - x_min ) / step_num , ( y_max - y_min ) / step_num )# step size in the mesh
b = h * 50 # border size
x_min, x_max = X_reduced[:, 0].min() - b, X_reduced[:, 0].max() + b
y_min, y_max = X_reduced[:, 1].min() - b, X_reduced[:, 1].max() + b
xx, yy = np.meshgrid( np.arange( x_min, x_max, h ), np.arange( y_min, y_max, h ) )
#induce a predictive model
clf = OneClassSVM( gamma = 10**3, nu = 0.01 )
clf.fit( X_reduced )
# Plot the decision boundary. For that, we will assign a color to each
# point in the mesh [x_min, m_max] . [y_min, y_max].
if hasattr(clf, "decision_function"):
Z = clf.decision_function(np.c_[xx.ravel(), yy.ravel()])
else:
Z = clf.predict_proba(np.c_[xx.ravel(), yy.ravel()])[:, 1]
# Put the result into a color plot
levels = np.linspace(min(Z), max(Z), 40)
Z = Z.reshape(xx.shape)
plt.contourf(xx, yy, Z, cmap=plt.get_cmap('YlOrRd'), alpha=.9,levels=levels)
plt.scatter(X_reduced[:, 0], X_reduced[:, 1],
alpha=.5,
s=70,
edgecolors='none',
c = 'white',
cmap = plt.get_cmap('YlOrRd'))
#labels
for id in range( X_reduced.shape[0] ):
label = labels[id]
x = X_reduced[id, 0]
y = X_reduced[id, 1]
plt.annotate(label,xy = (x,y), xytext = (-12, -3), textcoords = 'offset points')
plt.show()
%%time
import numpy as np
#make dense feature representation
n_components = max(2, X.shape[0])
from sklearn.kernel_approximation import Nystroem
feature_map_nystroem = Nystroem(gamma=0.1, n_components=n_components)
X_explicit=feature_map_nystroem.fit_transform(X)
# Visualize result using PCA
from sklearn.decomposition import TruncatedSVD
pca = TruncatedSVD(n_components=2)
X_reduced = pca.fit_transform(X_explicit)
Explanation: Create a vector representation
End of explanation
plot2D(X_reduced,range(X_reduced.shape[0]))
Explanation: 2D plot using OneClass classifier to identify density curves
End of explanation
from ipy_table import *
def prep_table(K):
header = [' ']
header += [i for i in range(K.shape[0])]
mat = [header]
for id, row in enumerate(K):
new_row = [id]
new_row += list(row)
mat.append(new_row)
return mat
from sklearn import metrics
K=metrics.pairwise.pairwise_kernels(X, metric='linear')
mat=prep_table(K)
make_table(mat)
apply_theme('basic')
set_global_style(float_format = '%0.2f')
Explanation: Compute pairwise similarity matrix
End of explanation |
15,167 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Topic 2
Step1: Creating a Reddit Application
Go to https
Step2: Capturing Reddit Posts
Now for a given subreddit, we can get the newest posts to that sub.
Post titles are generally short, so you could treat them as something similar to a tweet.
Step3: Leveraging Reddit's Voting
Getting the new posts gives us the most up-to-date information.
You can also get the "hot" posts, "top" posts, etc. that should be of higher quality.
In theory.
Caveat emptor
Step4: Following Multiple Subreddits
Reddit has a mechanism called "multireddits" that essentially allow you to view multiple reddits together as though they were one.
To do this, you need to concatenate your subreddits of interesting using the "+" sign.
Step5: Accessing Reddit Comments
While you're never supposed to read the comments, for certain live streams or new and rising posts, the comments may provide useful insight into events on the ground or people's sentiment.
New posts may not have comments yet though.
Comments are attached to the post title, so for a given submission, you can pull its comments directly.
Note Reddit returns pages of comments to prevent server overload, so you will not get all comments at once and will have to write code for getting more comments than the top ones returned at first.
This pagination is performed using the MoreXYZ objects (e.g., MoreComments or MorePosts).
Step6: Other Functionality
Reddit has a deep comment structure, and the code above only goes two levels down (top comment and top comment reply).
You can view Praw's additional functionality, replete with examples on its website here
Step7: Connecting to the Facebook Graph
Facebook has a "Graph API" that lets you explore its social graph.
For privacy concerns, however, Facebook's Graph API is extremely limited in the kinds of data it can view.
For instance, Graph API applications can now only view profiles of people who already have installed that particular application.
These restrictions make it quite difficult to see a lot of Facebook's data.
That being said, Facebook does have many popular public pages (e.g., BBC World News), and articles or messages posted by these public pages are accessible.
In addition, many posts and comments made in reply to these public posts are also publically available for us to explore.
To connect to Facebook's API though, we need an access token (unlike Reddit's API).
Fortunately, for research and testing purposes, getting an access token is very easy.
Acquiring a Facebook Access Token
Log in to your Facebook account
Go to Facebook's Graph Explorer (https
Step8: Now we can use the Facebook Graph API with this temporary access token (it does expire after maybe 15 minutes).
Step9: Parsing Posts from a Public Page
To get a public page's posts, all you need is the name of the page.
Then we can pull the page's feed, and for each post on the page, we can pull its comments and the name of the comment's author.
While it's unlikely that we can get more user information than that, author name and sentiment or text analytics can give insight into bursting topics and demographics.
Step10: <hr>
<img src="files/TwitterLogo.png" width="20%">
Topic 2.1
Step11: Creating Twitter Credentials
For more in-depth instructions for creating a Twitter account and/or setting up a Twitter account to use the following code, I will provide a walkthrough on configuring and generating this information.
First, we assume you already have a Twitter account.
If this is not true, either create one real quick or follow along.
See the attached figures.
Step 1. Create a Twitter account If you haven't already done this, do this now at Twitter.com.
Step 2. Setting your mobile number Log into Twitter and go to "Settings." From there, click "Mobile" and fill in an SMS-enabled phone number. You will be asked to confirm this number once it's set, and you'll need to do so before you can create any apps for the next step.
<img src="files/TwitterInstructions_f1.png" scale="10%"/>
<img src="files/TwitterInstructions_f2.png" scale="10%"/>
Step 3. Create an app in Twitter's Dev site Go to (apps.twitter.com), and click the "Create New App" button. Fill in the "Name," "Description," and "Website" fields, leaving the callback one blank (we're not going to use it). Note that the website must be a fully qualified URL, so it should look like
Step12: Connecting to Twitter
Once we have the authentication details set, we can connect to Twitter using the Tweepy OAuth handler, as below.
Step13: Testing our Connection
Now that we are connected to Twitter, let's do a brief check that we can read tweets by pulling the first few tweets from our own timeline (or the account associated with your Twitter app) and printing them.
Step14: Searching Twitter for Keywords
Now that we're connected, we can search Twitter for specific keywords with relative ease just like you were using Twitter's search box.
While this search only goes back 7 days and/or 1,500 tweets (whichever is less), it can be powerful if an event you want to track just started.
Note that you might have to deal with paging if you get lots of data. Twitter will only return you one page of up to 100 tweets at a time.
Step15: More Complex Queries
Twitter's Search API exposes many capabilities, like filtering for media, links, mentions, geolocations, dates, etc.
We can access these capabilities directly with the search function.
For a list of operators Twitter supports, go here
Step16: Dealing with Pages
As mentioned, Twitter serves results in pages.
To get all results, we can use Tweepy's Cursor implementation, which handles this iteration through pages for us in the background.
Step23: Other Search Functionality
The Tweepy wrapper and Twitter API are pretty extensive.
You can do things like pull the last 3,200 tweets from other users' timelines, find all retweets of your account, get follower lists, search for users matching a query, etc.
More information on Tweepy's capabilities is available at its documentation page
Step24: Now we set up the stream using the listener above | Python Code:
# For our first piece of code, we need to import the package
# that connects to Reddit. Praw is a thin wrapper around reddit's
# web APIs and works well
import praw
Explanation: Topic 2: Collecting Social Media Data
This notebook contains examples for using web-based APIs (Application Programming Interfaces) to download data from social media platforms.
Our examples will include:
Reddit
Facebook
Twitter
For most services, we need to register with the platform in order to use their API.
Instructions for the registration processes are outlined in each specific section below.
We will use APIs because they can be much faster than manually copying and pasting data from the web site, they provide uniform methods for accessing resources (searching for keywords, places, or dates), and their use should conform to the platform's terms of service (important for partnering and publications).
Note however that each of these platforms has strict limits on access times: e.g., requests per hour, search history depth, maximum number of items returned per request, and similar.
<hr>
<img src="files/RedditLogo.jpg" width="20%">
Topic 2.1: Reddit API
Reddit's API used to be the easiest to use since it did not require credentials to access data on its subreddit pages.
Unfortunately, this process has been changed, and developers now need to create a Reddit application on Reddit's app page located here: (https://www.reddit.com/prefs/apps/).
End of explanation
# Now we specify a "unique" user agent for our code
# This is primarily for identification, I think, and some
# user-agents of bad actors might be blocked
redditApi = praw.Reddit(client_id='OdpBKZ1utVJw8Q',
client_secret='KH5zzauulUBG45W-XYeAS5a2EdA',
user_agent='crisis_informatics_v01')
Explanation: Creating a Reddit Application
Go to https://www.reddit.com/prefs/apps/.
Scroll down to "create application", select "web app", and provide a name, description, and URL (which can be anything).
After you press "create app", you will be redirected to a new page with information about your application. Copy the unique identifiers below "web app" and beside "secret". These are your client_id and client_secret values, which you need below.
<img src="files/reddit_screens/0-001.png" scale="10%"/>
<img src="files/reddit_screens/1-002.png" scale="20%"/>
<img src="files/reddit_screens/1-003.png" scale="10%"/>
End of explanation
subreddit = "worldnews"
targetSub = redditApi.subreddit(subreddit)
submissions = targetSub.new(limit=10)
for post in submissions:
print(post.title)
Explanation: Capturing Reddit Posts
Now for a given subreddit, we can get the newest posts to that sub.
Post titles are generally short, so you could treat them as something similar to a tweet.
End of explanation
subreddit = "worldnews"
targetSub = redditApi.subreddit(subreddit)
submissions = targetSub.hot(limit=5)
for post in submissions:
print(post.title)
Explanation: Leveraging Reddit's Voting
Getting the new posts gives us the most up-to-date information.
You can also get the "hot" posts, "top" posts, etc. that should be of higher quality.
In theory.
Caveat emptor
End of explanation
subreddit = "worldnews+aww"
targetSub = redditApi.subreddit(subreddit)
submissions = targetSub.new(limit=10)
for post in submissions:
print(post.title)
Explanation: Following Multiple Subreddits
Reddit has a mechanism called "multireddits" that essentially allow you to view multiple reddits together as though they were one.
To do this, you need to concatenate your subreddits of interesting using the "+" sign.
End of explanation
subreddit = "worldnews"
breadthCommentCount = 5
targetSub = redditApi.subreddit(subreddit)
submissions = targetSub.hot(limit=1)
for post in submissions:
print (post.title)
post.comment_limit = breadthCommentCount
# Get the top few comments
for comment in post.comments.list():
if isinstance(comment, praw.models.MoreComments):
continue
print ("---", comment.name, "---")
print ("\t", comment.body)
for reply in comment.replies.list():
if isinstance(reply, praw.models.MoreComments):
continue
print ("\t", "---", reply.name, "---")
print ("\t\t", reply.body)
Explanation: Accessing Reddit Comments
While you're never supposed to read the comments, for certain live streams or new and rising posts, the comments may provide useful insight into events on the ground or people's sentiment.
New posts may not have comments yet though.
Comments are attached to the post title, so for a given submission, you can pull its comments directly.
Note Reddit returns pages of comments to prevent server overload, so you will not get all comments at once and will have to write code for getting more comments than the top ones returned at first.
This pagination is performed using the MoreXYZ objects (e.g., MoreComments or MorePosts).
End of explanation
# As before, the first thing we do is import the Facebook
# wrapper
import facebook
Explanation: Other Functionality
Reddit has a deep comment structure, and the code above only goes two levels down (top comment and top comment reply).
You can view Praw's additional functionality, replete with examples on its website here: http://praw.readthedocs.io/
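For example, a minimal sketch of walking an entire comment tree (a hedged illustration assuming praw 4+, where replace_more expands the MoreComments pagination objects mentioned above; it is not code from the original walkthrough):
submissions = redditApi.subreddit("worldnews").hot(limit=1)
for post in submissions:
    post.comments.replace_more(limit=0)   # fetch and flatten all MoreComments objects
    for comment in post.comments.list():  # every comment, at every depth
        print(comment.body[:80])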
<hr>
<img src="files/FacebookLogo.jpg" width="20%">
Topic 2.2: Facebook API
Getting access to Facebook's API is slightly easier than Twitter's in that you can go to the Graph API explorer, grab an access token, and immediately start playing around with the API.
The access token isn't good forever though, so if you plan on doing long-term analysis or data capture, you'll need to go the full OAuth route and generate tokens using the approved paths.
End of explanation
fbAccessToken = "EAACEdEose0cBAKZAZBoGzF6ZAJBk3uSB0gXSgxPrZBJ5nsZCXkM25xZBT0GzVABvsZBOvARxRukoLxhVEyO42QO1D1IInuE1ZBgQfffxh10BC0iHJmnKfNGHn9bY6ioZA8gHTYAXoOGL0A07hZBKXxMKO1yS3ZAPDB50MVGLBxDjJJDWAYBFhUIoeaAaMAZAzxcT4lMZD"
Explanation: Connecting to the Facebook Graph
Facebook has a "Graph API" that lets you explore its social graph.
For privacy concerns, however, Facebook's Graph API is extremely limited in the kinds of data it can view.
For instance, Graph API applications can now only view profiles of people who already have installed that particular application.
These restrictions make it quite difficult to see a lot of Facebook's data.
That being said, Facebook does have many popular public pages (e.g., BBC World News), and articles or messages posted by these public pages are accessible.
In addition, many posts and comments made in reply to these public posts are also publically available for us to explore.
To connect to Facebook's API though, we need an access token (unlike Reddit's API).
Fortunately, for research and testing purposes, getting an access token is very easy.
Acquiring a Facebook Access Token
Log in to your Facebook account
Go to Facebook's Graph Explorer (https://developers.facebook.com/tools/explorer/)
Copy the long string out of "Access Token" box and paste it in the code cell bedlow
<img src="files/FacebookInstructions_f1.png"/>
End of explanation
# Connect to the graph API, note we use version 2.5
graph = facebook.GraphAPI(access_token=fbAccessToken, version='2.5')
Explanation: Now we can use the Facebook Graph API with this temporary access token (it does expire after maybe 15 minutes).
End of explanation
# What page to look at?
targetPage = "nytimes"
# Other options for pages:
# nytimes, bbc, bbcamerica, bbcafrica, redcross, disaster
maxPosts = 10 # How many posts should we pull?
maxComments = 5 # How many comments for each post?
post = graph.get_object(id=targetPage + '/feed')
# For each post, print its message content and its ID
for v in post["data"][:maxPosts]:
print ("---")
print (v["message"], v["id"])
# For each comment on this post, print its number,
# the name of the author, and the message content
print ("Comments:")
comments = graph.get_object(id='%s/comments' % v["id"])
for (i, comment) in enumerate(comments["data"][:maxComments]):
print ("\t", i, comment["from"]["name"], comment["message"])
Explanation: Parsing Posts from a Public Page
To get a public page's posts, all you need is the name of the page.
Then we can pull the page's feed, and for each post on the page, we can pull its comments and the name of the comment's author.
While it's unlikely that we can get more user information than that, author name and sentiment or text analytics can give insight into bursting topics and demographics.
End of explanation
# For our first piece of code, we need to import the package
# that connects to Twitter. Tweepy is a popular and fully featured
# implementation.
import tweepy
Explanation: <hr>
<img src="files/TwitterLogo.png" width="20%">
Topic 2.1: Twitter API
Twitter's API is probably the most useful and flexible but takes several steps to configure.
To get access to the API, you first need to have a Twitter account and have a mobile phone number (or any number that can receive text messages) attached to that account.
Then, we'll use Twitter's developer portal to create an "app" that will then give us the keys tokens and keys (essentially IDs and passwords) we will need to connect to the API.
So, in summary, the general steps are:
Have a Twitter account,
Configure your Twitter account with your mobile number,
Create an app on Twitter's developer site, and
Generate consumer and access keys and secrets.
We will then plug these four strings into the code below.
End of explanation
# Use the strings from your Twitter app webpage to populate these four
# variables. Be sure and put the strings BETWEEN the quotation marks
# to make it a valid Python string.
consumer_key = "IQ03DPOdXz95N3rTm2iMNE8va"
consumer_secret = "0qGHOXVSX1D1ffP7BfpIxqFalLfgVIqpecXQy9SrUVCGkJ8hmo"
access_token = "867193453159096320-6oUq9riQW8UBa6nD3davJ0SUe9MvZrZ"
access_secret = "5zMwq2DVhxBnvjabM5SU2Imkoei3AE6UtdeOQ0tzR9eNU"
Explanation: Creating Twitter Credentials
For more in-depth instructions for creating a Twitter account and/or setting up a Twitter account to use the following code, I will provide a walkthrough on configuring and generating this information.
First, we assume you already have a Twitter account.
If this is not true, either create one real quick or follow along.
See the attached figures.
Step 1. Create a Twitter account If you haven't already done this, do this now at Twitter.com.
Step 2. Setting your mobile number Log into Twitter and go to "Settings." From there, click "Mobile" and fill in an SMS-enabled phone number. You will be asked to confirm this number once it's set, and you'll need to do so before you can create any apps for the next step.
<img src="files/TwitterInstructions_f1.png" scale="10%"/>
<img src="files/TwitterInstructions_f2.png" scale="10%"/>
Step 3. Create an app in Twitter's Dev site Go to (apps.twitter.com), and click the "Create New App" button. Fill in the "Name," "Description," and "Website" fields, leaving the callback one blank (we're not going to use it). Note that the website must be a fully qualified URL, so it should look like: http://test.url.com. Then scroll down and read the developer agreement, checking that agree, and finally click "Create your Twitter application."
<img src="files/TwitterInstructions_f3.png" scale="10%"/>
<img src="files/TwitterInstructions_f4.png"/>
Step 4. Generate keys and tokens with this app After your application has been created, you will see a summary page like the one below. Click "Keys and Access Tokens" to view and manage keys. Scroll down and click "Create my access token." After a moment, your page should refresh, and it should show you four long strings of characters and numbers, a consume key, consumer secret, an access token, and an access secret (note these are case-sensitive!). Copy and past these four strings into the quotes in the code cell below.
<img src="files/TwitterInstructions_f5.png" scale="10%"/>
<img src="files/TwitterInstructions_f6.png"/>
End of explanation
# Now we use the configured authentication information to connect
# to Twitter's API
auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(access_token, access_secret)
api = tweepy.API(auth)
print("Connected to Twitter!")
Explanation: Connecting to Twitter
Once we have the authentication details set, we can connect to Twitter using the Tweepy OAuth handler, as below.
End of explanation
# Get tweets from our timeline
public_tweets = api.home_timeline()
# print the first five authors and tweet texts
for tweet in public_tweets[:5]:
print (tweet.author.screen_name, tweet.author.name, "said:", tweet.text)
Explanation: Testing our Connection
Now that we are connected to Twitter, let's do a brief check that we can read tweets by pulling the first few tweets from our own timeline (or the account associated with your Twitter app) and printing them.
End of explanation
# Our search string
queryString = "earthquake"
# Perform the search
matchingTweets = api.search(queryString)
print ("Searched for:", queryString)
print ("Number found:", len(matchingTweets))
# For each tweet that matches our query, print the author and text
print ("\nTweets:")
for tweet in matchingTweets:
print (tweet.author.screen_name, tweet.text)
Explanation: Searching Twitter for Keywords
Now that we're connected, we can search Twitter for specific keywords with relative ease just like you were using Twitter's search box.
While this search only goes back 7 days and/or 1,500 tweets (whichever is less), it can be powerful if an event you want to track just started.
Note that you might have to deal with paging if you get lots of data. Twitter will only return you one page of up to 100 tweets at a time.
End of explanation
# Lets find only media or links about earthquakes
queryString = "earthquake (filter:media OR filter:links)"
# Perform the search
matchingTweets = api.search(queryString)
print ("Searched for:", queryString)
print ("Number found:", len(matchingTweets))
# For each tweet that matches our query, print the author and text
print ("\nTweets:")
for tweet in matchingTweets:
print (tweet.author.screen_name, tweet.text)
Explanation: More Complex Queries
Twitter's Search API exposes many capabilities, like filtering for media, links, mentions, geolocations, dates, etc.
We can access these capabilities directly with the search function.
For a list of operators Twitter supports, go here: https://dev.twitter.com/rest/public/search
End of explanation
# Lets find only media or links about earthquakes
queryString = "earthquake (filter:media OR filter:links)"
# How many tweets should we fetch? Upper limit is 1,500
maxToReturn = 100
# Perform the search, and for each tweet that matches our query,
# print the author and text
print ("\nTweets:")
for status in tweepy.Cursor(api.search, q=queryString).items(maxToReturn):
print (status.author.screen_name, status.text)
Explanation: Dealing with Pages
As mentioned, Twitter serves results in pages.
To get all results, we can use Tweepy's Cursor implementation, which handles this iteration through pages for us in the background.
End of explanation
# First, we need to create our own listener for the stream
# that will stop after a few tweets
class LocalStreamListener(tweepy.StreamListener):
    """A simple stream listener that breaks out after X tweets"""
# Max number of tweets
maxTweetCount = 10
# Set current counter
def __init__(self):
tweepy.StreamListener.__init__(self)
self.currentTweetCount = 0
# For writing out to a file
self.filePtr = None
# Create a log file
def set_log_file(self, newFile):
if ( self.filePtr ):
self.filePtr.close()
self.filePtr = newFile
# Close log file
def close_log_file(self):
if ( self.filePtr ):
self.filePtr.close()
# Pass data up to parent then check if we should stop
def on_data(self, data):
print (self.currentTweetCount)
tweepy.StreamListener.on_data(self, data)
if ( self.currentTweetCount >= self.maxTweetCount ):
return False
# Increment the number of statuses we've seen
def on_status(self, status):
self.currentTweetCount += 1
# Could write this status to a file instead of to the console
print (status.text)
# If we have specified a file, write to it
if ( self.filePtr ):
self.filePtr.write("%s\n" % status._json)
# Error handling below here
def on_exception(self, exc):
print (exc)
def on_limit(self, track):
        """Called when a limitation notice arrives"""
print ("Limit", track)
return
def on_error(self, status_code):
        """Called when a non-200 status code is returned"""
print ("Error:", status_code)
return False
def on_timeout(self):
        """Called when stream connection times out"""
print ("Timeout")
return
def on_disconnect(self, notice):
        """Called when twitter sends a disconnect notice"""
print ("Disconnect:", notice)
return
    def on_warning(self, notice):
        """Called when a disconnection warning message arrives"""
        print ("Warning:", notice)
Explanation: Other Search Functionality
The Tweepy wrapper and Twitter API are pretty extensive.
You can do things like pull the last 3,200 tweets from other users' timelines, find all retweets of your account, get follower lists, search for users matching a query, etc.
More information on Tweepy's capabilities is available at its documentation page: (http://tweepy.readthedocs.io/en/v3.5.0/api.html)
Other information on the Twitter API is available here: (https://dev.twitter.com/rest/public/search).
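For instance, a brief sketch of pulling recent tweets from another account's timeline (the screen name is just a placeholder, not a recommendation):
# Grab the five most recent tweets from a public account's timeline
for status in api.user_timeline(screen_name='nytimes', count=5):
    print(status.author.screen_name, status.text)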
Twitter Streaming
Up to this point, all of our work has been retrospective.
An event has occurred, and we want to see how Twitter responded over some period of time.
To follow an event in real time, Twitter and Tweepy support Twitter streaming.
Streaming is a bit complicated, but it essentially lets us track a set of keywords, places, or users.
To keep things simple, I will provide a simple class and show methods for printing the first few tweets.
Larger solutions exist specifically for handling Twitter streaming.
You could take this code though and easily extend it by writing data to a file rather than the console.
I've marked where that code could be inserted.
End of explanation
listener = LocalStreamListener()
localStream = tweepy.Stream(api.auth, listener)
# Stream based on keywords
localStream.filter(track=['earthquake', 'disaster'])
listener = LocalStreamListener()
localStream = tweepy.Stream(api.auth, listener)
# List of screen names to track
screenNames = ['bbcbreaking', 'CNews', 'bbc', 'nytimes']
# Twitter stream uses user IDs instead of names
# so we must convert
userIds = []
for sn in screenNames:
user = api.get_user(sn)
userIds.append(user.id_str)
# Stream based on users
localStream.filter(follow=userIds)
listener = LocalStreamListener()
localStream = tweepy.Stream(api.auth, listener)
# Specify coordinates for a bounding box around area of interest
# In this case, we use San Francisco
swCornerLat = 36.8
swCornerLon = -122.75
neCornerLat = 37.8
neCornerLon = -121.75
boxArray = [swCornerLon, swCornerLat, neCornerLon, neCornerLat]
# Say we want to write these tweets to a file
import codecs
listener.set_log_file(codecs.open("tweet_log.json", "w", "utf8"))
# Stream based on location
localStream.filter(locations=boxArray)
# Close the log file
listener.close_log_file()
Explanation: Now we set up the stream using the listener above
End of explanation |
15,168 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Word2Vec Tutorial
In case you missed the buzz, word2vec is widely featured as a member of the "new wave" of machine learning algorithms based on neural networks, commonly referred to as "deep learning" (though word2vec itself is rather shallow). Using large amounts of unannotated plain text, word2vec learns relationships between words automatically. The output is a set of vectors, one vector per word, with remarkable linear relationships that allow us to do things like vec("king") - vec("man") + vec("woman") =~ vec("queen"), or vec("Montreal Canadiens") - vec("Montreal") + vec("Toronto") resembles the vector for "Toronto Maple Leafs".
Word2vec is very useful in automatic text tagging, recommender systems and machine translation.
Check out an online word2vec demo where you can try this vector algebra for yourself. That demo runs word2vec on the Google News dataset, of about 100 billion words.
This tutorial
In this tutorial you will learn how to train and evaluate word2vec models on your business data.
Preparing the Input
Starting from the beginning, gensim's word2vec expects a sequence of sentences as its input. Each sentence is a list of words (utf8 strings)
Step1: Keeping the input as a Python built-in list is convenient, but can use up a lot of RAM when the input is large.
Gensim only requires that the input provide sentences sequentially, when iterated over. No need to keep everything in RAM
Step2: Say we want to further preprocess the words from the files - convert to unicode, lowercase, remove numbers, extract named entities... All of this can be done inside the MySentences iterator and word2vec doesn't need to know. All that is required is that the input yields one sentence (list of utf8 words) after another.
Note to advanced users
Step3: More data would be nice
For the following examples, we'll use the Lee Corpus (which you already have if you've installed gensim)
Step4: Training
Word2Vec accepts several parameters that affect both training speed and quality.
One of them is for pruning the internal dictionary. Words that appear only once or twice in a billion-word corpus are probably uninteresting typos and garbage. In addition, there's not enough data to make any meaningful training on those words, so it's best to ignore them
Step5: Bigger size values require more training data, but can lead to better (more accurate) models. Reasonable values are in the tens to hundreds.
The last of the major parameters (full list here) is for training parallelization, to speed up training
Step6: The workers parameter only has an effect if you have Cython installed. Without Cython, you'll only be able to use one core because of the GIL (and word2vec training will be miserably slow).
Memory
At its core, word2vec model parameters are stored as matrices (NumPy arrays). Each array is #vocabulary (controlled by min_count parameter) times #size (size parameter) of floats (single precision aka 4 bytes).
Three such matrices are held in RAM (work is underway to reduce that number to two, or even one). So if your input contains 100,000 unique words, and you asked for layer size=200, the model will require approx. 100,000*200*4*3 bytes = ~229MB.
There's a little extra memory needed for storing the vocabulary tree (100,000 words would take a few megabytes), but unless your words are extremely loooong strings, memory footprint will be dominated by the three matrices above.
Evaluating
Word2Vec training is an unsupervised task, so there's no good way to objectively evaluate the result. Evaluation depends on your end application.
Google have released their testing set of about 20,000 syntactic and semantic test examples, following the "A is to B as C is to D" task. It is provided in the 'datasets' folder.
For example a syntactic analogy of comparative type is bad
Step7: This accuracy takes an
optional parameter restrict_vocab
which limits which test examples are to be considered.
In the December 2016 release of Gensim we added a better way to evaluate semantic similarity.
By default it uses an academic dataset WS-353 but one can create a dataset specific to your business based on it. It contains word pairs together with human-assigned similarity judgments. It measures the relatedness or co-occurrence of two words. For example, coast and shore are very similar as they appear in the same context. At the same time clothes and closet are less similar because they are related but not interchangeable.
Step8: Once again, good performance on Google's or WS-353 test set doesn't mean word2vec will work well in your application, or vice versa. It's always best to evaluate directly on your intended task. For an example of how to use word2vec in a classifier pipeline, see this tutorial.
Storing and loading models
You can store/load models using the standard gensim methods
Step9: which uses pickle internally, optionally mmap'ing the model's internal large NumPy matrices into virtual memory directly from disk files, for inter-process memory sharing.
In addition, you can load models created by the original C tool, both using its text and binary formats
Step10: You may need to tweak the total_words parameter to train(), depending on what learning rate decay you want to simulate.
Note that it's not possible to resume training with models generated by the C tool, KeyedVectors.load_word2vec_format(). You can still use them for querying/similarity, but information vital for training (the vocab tree) is missing there.
Using the model
Word2Vec supports several word similarity tasks out of the box
Step11: You can get the probability distribution for the center word given the context words as input
Step12: The results here don't look good because the training corpus is very small. To get meaningful results one needs to train on 500k+ words.
If you need the raw output vectors in your application, you can access these either on a word-by-word basis | Python Code:
# import modules & set up logging
import gensim, logging
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)
sentences = [['first', 'sentence'], ['second', 'sentence']]
# train word2vec on the two sentences
model = gensim.models.Word2Vec(sentences, min_count=1)
Explanation: Word2Vec Tutorial
In case you missed the buzz, word2vec is widely featured as a member of the "new wave" of machine learning algorithms based on neural networks, commonly referred to as "deep learning" (though word2vec itself is rather shallow). Using large amounts of unannotated plain text, word2vec learns relationships between words automatically. The output is a set of vectors, one vector per word, with remarkable linear relationships that allow us to do things like vec("king") - vec("man") + vec("woman") =~ vec("queen"), or vec("Montreal Canadiens") - vec("Montreal") + vec("Toronto") resembles the vector for "Toronto Maple Leafs".
Word2vec is very useful in automatic text tagging, recommender systems and machine translation.
Check out an online word2vec demo where you can try this vector algebra for yourself. That demo runs word2vec on the Google News dataset, of about 100 billion words.
This tutorial
In this tutorial you will learn how to train and evaluate word2vec models on your business data.
Preparing the Input
Starting from the beginning, gensim's word2vec expects a sequence of sentences as its input. Each sentence is a list of words (utf8 strings):
End of explanation
# create some toy data to use with the following example
import smart_open, os
if not os.path.exists('./data/'):
os.makedirs('./data/')
filenames = ['./data/f1.txt', './data/f2.txt']
for i, fname in enumerate(filenames):
with smart_open.smart_open(fname, 'w') as fout:
for line in sentences[i]:
fout.write(line + '\n')
class MySentences(object):
def __init__(self, dirname):
self.dirname = dirname
def __iter__(self):
for fname in os.listdir(self.dirname):
for line in open(os.path.join(self.dirname, fname)):
yield line.split()
sentences = MySentences('./data/') # a memory-friendly iterator
print(list(sentences))
# generate the Word2Vec model
model = gensim.models.Word2Vec(sentences, min_count=1)
print(model)
print(model.wv.vocab)
Explanation: Keeping the input as a Python built-in list is convenient, but can use up a lot of RAM when the input is large.
Gensim only requires that the input provide sentences sequentially, when iterated over. No need to keep everything in RAM: we can provide one sentence, process it, forget it, load another sentence...
For example, if our input is strewn across several files on disk, with one sentence per line, then instead of loading everything into an in-memory list, we can process the input file by file, line by line:
End of explanation
# build the same model, making the 2 steps explicit
new_model = gensim.models.Word2Vec(min_count=1) # an empty model, no training
new_model.build_vocab(sentences) # can be a non-repeatable, 1-pass generator
new_model.train(sentences, total_examples=new_model.corpus_count, epochs=new_model.iter)
# can be a non-repeatable, 1-pass generator
print(new_model)
print(model.wv.vocab)
Explanation: Say we want to further preprocess the words from the files - convert to unicode, lowercase, remove numbers, extract named entities... All of this can be done inside the MySentences iterator and word2vec doesn't need to know. All that is required is that the input yields one sentence (list of utf8 words) after another.
Note to advanced users: calling Word2Vec(sentences, iter=1) will run two passes over the sentences iterator. In general it runs iter+1 passes. By the way, the default value is iter=5 to comply with Google's word2vec in C language.
1. The first pass collects words and their frequencies to build an internal dictionary tree structure.
2. The second pass trains the neural model.
These two passes can also be initiated manually, in case your input stream is non-repeatable (you can only afford one pass), and you're able to initialize the vocabulary some other way:
End of explanation
# Set file names for train and test data
test_data_dir = '{}'.format(os.sep).join([gensim.__path__[0], 'test', 'test_data']) + os.sep
lee_train_file = test_data_dir + 'lee_background.cor'
class MyText(object):
def __iter__(self):
for line in open(lee_train_file):
# assume there's one document per line, tokens separated by whitespace
yield line.lower().split()
sentences = MyText()
print(sentences)
Explanation: More data would be nice
For the following examples, we'll use the Lee Corpus (which you already have if you've installed gensim):
End of explanation
# default value of min_count=5
model = gensim.models.Word2Vec(sentences, min_count=10)
# default value of size=100
model = gensim.models.Word2Vec(sentences, size=200)
Explanation: Training
Word2Vec accepts several parameters that affect both training speed and quality.
One of them is for pruning the internal dictionary. Words that appear only once or twice in a billion-word corpus are probably uninteresting typos and garbage. In addition, there's not enough data to make any meaningful training on those words, so it's best to ignore them:
End of explanation
# default value of workers=3 (tutorial says 1...)
model = gensim.models.Word2Vec(sentences, workers=4)
Explanation: Bigger size values require more training data, but can lead to better (more accurate) models. Reasonable values are in the tens to hundreds.
The last of the major parameters (full list here) is for training parallelization, to speed up training:
End of explanation
model.accuracy('./datasets/questions-words.txt')
Explanation: The workers parameter only has an effect if you have Cython installed. Without Cython, you'll only be able to use one core because of the GIL (and word2vec training will be miserably slow).
Memory
At its core, word2vec model parameters are stored as matrices (NumPy arrays). Each array is #vocabulary (controlled by min_count parameter) times #size (size parameter) of floats (single precision aka 4 bytes).
Three such matrices are held in RAM (work is underway to reduce that number to two, or even one). So if your input contains 100,000 unique words, and you asked for layer size=200, the model will require approx. 100,000*200*4*3 bytes = ~229MB.
There's a little extra memory needed for storing the vocabulary tree (100,000 words would take a few megabytes), but unless your words are extremely loooong strings, memory footprint will be dominated by the three matrices above.
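A quick back-of-the-envelope check of that figure (illustrative arithmetic only):
# 100,000 words x 200 dimensions x 4 bytes x 3 matrices
print(100000 * 200 * 4 * 3 / 1024.0**2)   # roughly 229 MB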
Evaluating
Word2Vec training is an unsupervised task, so there's no good way to objectively evaluate the result. Evaluation depends on your end application.
Google have released their testing set of about 20,000 syntactic and semantic test examples, following the "A is to B as C is to D" task. It is provided in the 'datasets' folder.
For example, a syntactic analogy of comparative type is bad:worse;good:?. There are a total of 9 types of syntactic comparisons in the dataset, like plural nouns and nouns of opposite meaning.
The semantic questions contain five types of semantic analogies, such as capital cities (Paris:France;Tokyo:?) or family members (brother:sister;dad:?).
Gensim supports the same evaluation set, in exactly the same format:
End of explanation
model.evaluate_word_pairs(test_data_dir +'wordsim353.tsv')
Explanation: This accuracy takes an
optional parameter restrict_vocab
which limits which test examples are to be considered.
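For instance (an illustrative call reusing the questions file from above; 30000 is just an example cut-off, not a recommendation):
model.accuracy('./datasets/questions-words.txt', restrict_vocab=30000)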
In the December 2016 release of Gensim we added a better way to evaluate semantic similarity.
By default it uses an academic dataset WS-353 but one can create a dataset specific to your business based on it. It contains word pairs together with human-assigned similarity judgments. It measures the relatedness or co-occurrence of two words. For example, coast and shore are very similar as they appear in the same context. At the same time clothes and closet are less similar because they are related but not interchangeable.
End of explanation
from tempfile import mkstemp
fs, temp_path = mkstemp("gensim_temp") # creates a temp file
model.save(temp_path) # save the model
new_model = gensim.models.Word2Vec.load(temp_path) # open the model
Explanation: Once again, good performance on Google's or WS-353 test set doesn't mean word2vec will work well in your application, or vice versa. It's always best to evaluate directly on your intended task. For an example of how to use word2vec in a classifier pipeline, see this tutorial.
Storing and loading models
You can store/load models using the standard gensim methods:
End of explanation
model = gensim.models.Word2Vec.load(temp_path)
more_sentences = [['Advanced', 'users', 'can', 'load', 'a', 'model', 'and', 'continue',
'training', 'it', 'with', 'more', 'sentences']]
model.build_vocab(more_sentences, update=True)
model.train(more_sentences, total_examples=model.corpus_count, epochs=model.iter)
# cleaning up temp
os.close(fs)
os.remove(temp_path)
Explanation: which uses pickle internally, optionally mmap'ing the model's internal large NumPy matrices into virtual memory directly from disk files, for inter-process memory sharing.
In addition, you can load models created by the original C tool, both using its text and binary formats:
model = gensim.models.KeyedVectors.load_word2vec_format('/tmp/vectors.txt', binary=False)
# using gzipped/bz2 input works too, no need to unzip:
model = gensim.models.KeyedVectors.load_word2vec_format('/tmp/vectors.bin.gz', binary=True)
Online training / Resuming training
Advanced users can load a model and continue training it with more sentences and new vocabulary words:
End of explanation
model.most_similar(positive=['human', 'crime'], negative=['party'], topn=1)
model.doesnt_match("input is lunch he sentence cat".split())
print(model.similarity('human', 'party'))
print(model.similarity('tree', 'murder'))
Explanation: You may need to tweak the total_words parameter to train(), depending on what learning rate decay you want to simulate.
Note that it's not possible to resume training with models generated by the C tool, KeyedVectors.load_word2vec_format(). You can still use them for querying/similarity, but information vital for training (the vocab tree) is missing there.
Using the model
Word2Vec supports several word similarity tasks out of the box:
End of explanation
print(model.predict_output_word(['emergency','beacon','received']))
Explanation: You can get the probability distribution for the center word given the context words as input:
End of explanation
model['tree'] # raw NumPy vector of a word
Explanation: The results here don't look good because the training corpus is very small. To get meaningful results one needs to train on 500k+ words.
If you need the raw output vectors in your application, you can access these either on a word-by-word basis:
End of explanation |
15,169 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Landice
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Grid
4. Glaciers
5. Ice
6. Ice --> Mass Balance
7. Ice --> Mass Balance --> Basal
8. Ice --> Mass Balance --> Frontal
9. Ice --> Dynamics
1. Key Properties
Land ice key properties
1.1. Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Ice Albedo
Is Required
Step7: 1.4. Atmospheric Coupling Variables
Is Required
Step8: 1.5. Oceanic Coupling Variables
Is Required
Step9: 1.6. Prognostic Variables
Is Required
Step10: 2. Key Properties --> Software Properties
Software properties of land ice code
2.1. Repository
Is Required
Step11: 2.2. Code Version
Is Required
Step12: 2.3. Code Languages
Is Required
Step13: 3. Grid
Land ice grid
3.1. Overview
Is Required
Step14: 3.2. Adaptive Grid
Is Required
Step15: 3.3. Base Resolution
Is Required
Step16: 3.4. Resolution Limit
Is Required
Step17: 3.5. Projection
Is Required
Step18: 4. Glaciers
Land ice glaciers
4.1. Overview
Is Required
Step19: 4.2. Description
Is Required
Step20: 4.3. Dynamic Areal Extent
Is Required
Step21: 5. Ice
Ice sheet and ice shelf
5.1. Overview
Is Required
Step22: 5.2. Grounding Line Method
Is Required
Step23: 5.3. Ice Sheet
Is Required
Step24: 5.4. Ice Shelf
Is Required
Step25: 6. Ice --> Mass Balance
Description of the surface mass balance treatment
6.1. Surface Mass Balance
Is Required
Step26: 7. Ice --> Mass Balance --> Basal
Description of basal melting
7.1. Bedrock
Is Required
Step27: 7.2. Ocean
Is Required
Step28: 8. Ice --> Mass Balance --> Frontal
Description of calving/melting from the ice shelf front
8.1. Calving
Is Required
Step29: 8.2. Melting
Is Required
Step30: 9. Ice --> Dynamics
**
9.1. Description
Is Required
Step31: 9.2. Approximation
Is Required
Step32: 9.3. Adaptive Timestep
Is Required
Step33: 9.4. Timestep
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'bcc', 'sandbox-2', 'landice')
Explanation: ES-DOC CMIP6 Model Properties - Landice
MIP Era: CMIP6
Institute: BCC
Source ID: SANDBOX-2
Topic: Landice
Sub-Topics: Glaciers, Ice.
Properties: 30 (21 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:39
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Grid
4. Glaciers
5. Ice
6. Ice --> Mass Balance
7. Ice --> Mass Balance --> Basal
8. Ice --> Mass Balance --> Frontal
9. Ice --> Dynamics
1. Key Properties
Land ice key properties
1.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of land surface model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of land surface model code
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.ice_albedo')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "function of ice age"
# "function of ice density"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Ice Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify how ice albedo is modelled
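For illustration, a hypothetical selection from the valid choices listed in the code cell above might be:
DOC.set_value("function of ice age")  # hypothetical choice -- use whichever option matches the model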
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.atmospheric_coupling_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.4. Atmospheric Coupling Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
Which variables are passed between the atmosphere and ice (e.g. orography, ice mass)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.oceanic_coupling_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.5. Oceanic Coupling Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
Which variables are passed between the ocean and ice
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice velocity"
# "ice thickness"
# "ice temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.6. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which variables are prognostically calculated in the ice model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Software Properties
Software properties of land ice code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3. Grid
Land ice grid
3.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the grid in the land ice scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 3.2. Adaptive Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is an adaptive grid being used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.base_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.3. Base Resolution
Is Required: TRUE Type: FLOAT Cardinality: 1.1
The base resolution (in metres), before any adaption
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.resolution_limit')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.4. Resolution Limit
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If an adaptive grid is being used, what is the limit of the resolution (in metres)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.projection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.5. Projection
Is Required: TRUE Type: STRING Cardinality: 1.1
The projection of the land ice grid (e.g. albers_equal_area)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Glaciers
Land ice glaciers
4.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of glaciers in the land ice scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of glaciers, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.dynamic_areal_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 4.3. Dynamic Areal Extent
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Does the model include a dynamic glacial extent?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Ice
Ice sheet and ice shelf
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the ice sheet and ice shelf in the land ice scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.grounding_line_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grounding line prescribed"
# "flux prescribed (Schoof)"
# "fixed grid size"
# "moving grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 5.2. Grounding Line Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the technique used for modelling the grounding line in the ice sheet-ice shelf coupling
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.ice_sheet')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.3. Ice Sheet
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are ice sheets simulated?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.ice_shelf')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.4. Ice Shelf
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are ice shelves simulated?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.surface_mass_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Ice --> Mass Balance
Description of the surface mass balance treatment
6.1. Surface Mass Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how and where the surface mass balance (SMB) is calculated. Include the temporal coupling frequency from the atmosphere, whether or not a separate SMB model is used, and if so details of this model, such as its resolution
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.basal.bedrock')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Ice --> Mass Balance --> Basal
Description of basal melting
7.1. Bedrock
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of basal melting over bedrock
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.basal.ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Ocean
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of basal melting over the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.frontal.calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Ice --> Mass Balance --> Frontal
Description of calving/melting from the ice shelf front
8.1. Calving
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of calving from the front of the ice shelf
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.frontal.melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Melting
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of melting from the front of the ice shelf
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Ice --> Dynamics
**
9.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of ice sheet and ice shelf dynamics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.approximation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SIA"
# "SAA"
# "full stokes"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9.2. Approximation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Approximation type used in modelling ice dynamics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.adaptive_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 9.3. Adaptive Timestep
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there an adaptive time scheme for the ice scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 9.4. Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep (in seconds) of the ice scheme. If the timestep is adaptive, then state a representative timestep.
End of explanation |
15,170 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
1. I want to make sure my Plate ID is a string. Can't lose the leading zeroes!
2. I don't think anyone's car was built in 0AD. Discard the '0's as NaN.
3. I want the dates to be dates! Read the read_csv documentation to find out how to make pandas automatically parse dates.
Step1: 4. "Date first observed" is a pretty weird column, but it seems like it has a date hiding inside. Using a function with .apply, transform the string (e.g. "20140324") into a Python date. Make the 0's show up as NaN.
Step2: 5. "Violation time" is... not a time. Make it a time
Step3: 6. There sure are a lot of colors of cars, too bad so many of them are the same. Make "BLK" and "BLACK", "WT" and "WHITE", and any other combinations that you notice.
Step4: 7. Join the data with the Parking Violations Code dataset from the NYC Open Data site
Step5: 8. How much money did NYC make off of parking violations?
Step6: 9. What's the most lucrative kind of parking violation? The most frequent?
Step7: 10. New Jersey has bad drivers, but does it have bad parkers, too? How much money does NYC make off of all non-New York vehicles?
Step8: 11. Make a chart of the top few
Step9: 12. What time of day do people usually get their tickets? You can break the day up into several blocks - for example 12am-6am, 6am-12pm, 12pm-6pm,6pm-12am.
Step10: 13. What's the average ticket cost in NYC?
Step11: 14. Make a graph of the number of tickets per day.
Step12: 16. Manually construct a dataframe out of https | Python Code:
import pandas as pd
#import pandas as pd
import datetime
import datetime as dt
# import datetime
# import datetime as dt
dt.datetime.strptime('08/04/2013', '%m/%d/%Y')
datetime.datetime(2013, 8, 4, 0, 0)
parser = lambda date: pd.datetime.strptime(date, '%m/%d/%Y')
!head -n 10000 violations.csv > small-violations.csv
df = pd.read_csv("small-violations.csv", na_values= {'Vehicle Year': ['0']}, parse_dates=[4], date_parser=parser, dtype=str)
df.tail(20)
Explanation: 1. I want to make sure my Plate ID is a string. Can't lose the leading zeroes!
2. I don't think anyone's car was built in 0AD. Discard the '0's as NaN.
3. I want the dates to be dates! Read the read_csv documentation to find out how to make pandas automatically parse dates.
End of explanation
df['Date First Observed'].value_counts()
import dateutil.parser
import numpy as np  # needed for np.nan below
def first_observed_function(x):
    try:
        x = str(x)
        if x == '0':
            print("NaN")
            return np.nan
        else:
            print("transforming...")
            date_clean = dateutil.parser.parse(x)
            # year-month-day; the original "%Y-%d-%m" swapped the day and month fields
            return date_clean.strftime("%Y-%m-%d")
    except:
        return None
first_observed_function('20130731')
df['Clean Date First Observed']= df['Date First Observed'].apply(first_observed_function)
df['Clean Date First Observed'].value_counts()
Explanation: 4. "Date first observed" is a pretty weird column, but it seems like it has a date hiding inside. Using a function with .apply, transform the string (e.g. "20140324") into a Python date. Make the 0's show up as NaN.
End of explanation
df['Violation Time']
def violation_time_transformed(x):
try:
hour = x[0:2]
minutes = x[2:4]
pam= x[4]
time= hour + ":" + minutes + " " + pam + 'm'
changed_time = dateutil.parser.parse(time)
return changed_time.strftime("%H:%M%p")
except:
return None
df['New Violation Time']= df['Violation Time'].apply(violation_time_transformed)
df['New Violation Time'].head(20)
Explanation: 5. Violation time" is... not a time. Make it a time
End of explanation
df['Vehicle Color'].value_counts()
def color(color):
if (color == "BK") or (color == "BL"):
return 'BLACK'
if (color == "WHT") or (color == "WT") or (color == 'WH'):
return 'WHITE'
else:
return color
#example
color('BK'), color('WHT'), color('BL'), color('WT'), color('WH')
df['B&W Clean Vehicle Color'] = df['Vehicle Color'].apply(color)
df['B&W Clean Vehicle Color'].value_counts()
Explanation: 6. There sure are a lot of colors of cars, too bad so many of them are the same. Make "BLK" and "BLACK", "WT" and "WHITE", and any other combinations that you notice.
End of explanation
!head -n 10000 DOF_Parking_Violation_Codes.csv > small_DOF_Parking_Violation_Codes.csv
violations_data = pd.read_csv("small_DOF_Parking_Violation_Codes.csv")
violations_data.head(2)
type(violations_data['CODE'])
violations_data['CODE'].value_counts()
def transform_code(x):
try:
new_code = x[0:2]
return new_code
except:
return None
single_code = violations_data['CODE'].apply(transform_code)
violations_data['int CODE'] = single_code.astype(int)
violations_data['int CODE'].dtype #now is an integer
violations_data.head(129)
#I need to do this same process to the df['Violation Code'] because to transform it to a INT
old_df = df["Violation Code"].apply(transform_code)
df.head(10)
df['Violation Code 2'] = old_df.astype(int)
#Merging the two data sets
new_df= df.merge(violations_data, left_on="Violation Code 2", right_on="int CODE")
new_df.head(40)
Explanation: 7. Join the data with the Parking Violations Code dataset from the NYC Open Data site
End of explanation
new_df['All Other Areas'].value_counts()
new_df[ 'Manhattan\xa0 96th St. & below'].value_counts()
#First, I will transfrom all values into integers
def money_transformer(money_string):
if money_string == '200 (Heavy Tow plus violation fine)':
string_only = money_string[:3]
return int(string_only)
if money_string == '100\n(Regular Tow, plus violation fine)':
string_only = money_string[:3]
return int(string_only)
try:
return int(money_string.replace("$","").replace(",",""))
except:
return None
new_df['All Other Areas 2'] = new_df['All Other Areas'].apply(money_transformer)
new_df['Manhattan\xa0 96th St. & below 2'] = new_df['Manhattan\xa0 96th St. & below'].apply(money_transformer)
outcome1 = new_df['All Other Areas 2'].sum()
outcome2 = new_df['Manhattan\xa0 96th St. & below 2'].sum()
print("NYC makes between","$", outcome1, "US dollars and","$", outcome2, "US dollars of parking violations")
#PS. Data set has been cut to 10000 rows for memory saving reasons. Output would be considerably higher with the complete DF.
Explanation: 8. How much money did NYC make off of parking violations?
End of explanation
import matplotlib.pyplot as plt
%matplotlib inline
plt.style.use('ggplot')
new_df['Violation Code 2'].value_counts().head(10).plot.bar()
print("The most frequent is the infraction 21, followed by infraction 46 and 14")
print("this is how the top 3 are defined:")
new_df.groupby('CODE')['DEFINITION'].value_counts().sort_values(ascending=False).head(3)
#Looking for the most lucrative
violations_data.sort_values(by='CODE').head(46)
new_df['Violation Code 2'].value_counts().head(3)
#21 cost $65, 46 cost $115, 14 cost $115
def money_new(money_str):
if money_str == 1894:
return money_str * 65
if money_str == 1366:
return money_str * 115
if money_str == 987:
return money_str * 115
print("For al the 21 infractions the city has made", money_new(1894))
print("For al the 46 infractions the city has made", money_new(1366))
print("For al the 14 infractions the city has made", money_new(987))
print("Seems that infraction 46 is the most lucrative")
Explanation: 9. What's the most lucrative kind of parking violation? The most frequent?
End of explanation
new_df.groupby('Registration State')['All Other Areas 2'].sum().sort_values(ascending=False).head(10)
print('The city has made $274810 of the non newyorkers')
NY_fines= new_df.groupby('Registration State')['All Other Areas 2'].sum().sort_values(ascending=False).head(1)
outcome1 - NY_fines
Explanation: 10. New Jersey has bad drivers, but does it have bad parkers, too? How much money does NYC make off of all non-New York vehicles?
End of explanation
new_df['Registration State'].value_counts().sort_values().tail(10).plot.barh(color= 'Blue')
Explanation: 11. Make a chart of the top few
End of explanation
new_df['New Violation Time'].head(10)
def hour_transformer(x):
try:
time = int(x[:2])
if time <= 6:
return '12am-6am'
elif time <= 12:
return '6am-12pm'
elif time <= 18:
return '12pm-6pm'
elif time <= 24:
return '6pm-12am'
else:
pass
except:
pass
day_time = new_df['New Violation Time'].apply(hour_transformer)
day_time.value_counts().plot.pie(title='Ticket time!')
Explanation: 12. What time of day do people usually get their tickets? You can break the day up into several blocks - for example 12am-6am, 6am-12pm, 12pm-6pm,6pm-12am.
End of explanation
new_df['All Other Areas 2'].mean()
Explanation: 13. What's the average ticket cost in NYC?
End of explanation
new_df['Issue Date'].describe()
new_df.groupby('Issue Date')['Issue Date'].value_counts(sort=False).plot.bar(figsize=(15, 6))
plt.ylabel('Number of tickets')
plt.xlabel('Days')
#it seems like all the data is concentrated
#only in a few years
new_df.groupby('Issue Date')['All Other Areas 2'].sum().plot(kind="bar", figsize=(15, 6))
plt.ylabel('Amount in $')
plt.xlabel('Days')
Explanation: 14. Make a graph of the number of tickets per day.
End of explanation
#Still haven't figured out how :(
Explanation: 16. Manually construct a dataframe out of https://dmv.ny.gov/statistic/2015licinforce-web.pdf (only NYC boroughs - bronx, queens, manhattan, staten island, brooklyn), having columns for borough name, abbreviation, and number of licensed drivers.
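One possible sketch for this step (the abbreviations are just one common scheme and the driver counts are left as placeholders to be filled in from the linked PDF):
boroughs = pd.DataFrame({
    'borough': ['Bronx', 'Brooklyn', 'Manhattan', 'Queens', 'Staten Island'],
    'abbreviation': ['BX', 'BK', 'MN', 'QN', 'SI'],      # placeholder abbreviations
    'licensed_drivers': [None, None, None, None, None]   # fill in counts from the PDF
})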
End of explanation |
15,171 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Compute distance to roads
This notebook computes the distance to each of the nearest road types in a 'roads' vector map from a vector map of 'points' (sample locations).
This notebook uses GRASS GIS (7.0.4), and must be run inside of a GRASS environment (start the jupyter notebook server from the GRASS command line).
Required packages
numpy <br />
pandas <br />
pyprind
Variable declarations
points – vector map with points to measure distance from (sample locations) <br />
roads – vector map with roads data <br />
road_type_field – field name containing the road classification type (i.e. residential, secondary, etc.) <br />
distance_table_filename – path to export the distances table as a csv file
Step1: Import statements
Step2: GRASS import statements
Step3: Function declarations
connect to an attribute table
Step4: finds the nearest element in a vector map (to) for elements in another vector map (from) <br />
calls the GRASS v.distance command
Step5: selects vector features from an existing vector map and creates a new vector map containing only the selected features <br />
calls the GRASS v.extract command
Step6: Get unique 'roads' types
Step7: Get 'points' attribute table
Step8: Loop through 'roads' types and compute the distances from all 'points'
Step9: Export distances table to a csv file | Python Code:
points = 'sample_points_field'
roads = 'highway'
road_type_field = 'Type'
distance_table_filename = ""
Explanation: Compute distance to roads
This notebook computes the distance to each of the nearest road types in a 'roads' vector map from a vector map of 'points' (sample locations).
This notebook uses GRASS GIS (7.0.4), and must be run inside of a GRASS environment (start the jupyter notebook server from the GRASS command line).
Required packages
numpy <br />
pandas <br />
pyprind
Variable declarations
points – vector map with points to measure distance from (sample locations) <br />
roads – vector map with roads data <br />
road_type_field – field name containing the road classification type (i.e. residential, secondary, etc.) <br />
distance_table_filename – path to export the distances table as a csv file
End of explanation
import pandas
import numpy as np
import pyprind
Explanation: Import statements
End of explanation
import grass.script as gscript
from grass.pygrass.vector import VectorTopo
from grass.pygrass.vector.table import DBlinks
Explanation: GRASS import statements
End of explanation
def connectToAttributeTable(map):
vector = VectorTopo(map)
vector.open(mode='r')
dblinks = DBlinks(vector.c_mapinfo)
link = dblinks[0]
return link.table()
Explanation: Function declarations
connect to an attribute table
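A quick usage sketch of this helper, reusing the 'roads' map declared above:
roads_attrs = connectToAttributeTable(map=roads)
print(roads_attrs.columns.names())  # list the attribute columns of the roads map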
End of explanation
def computeDistance(from_map, to_map):
upload = 'dist'
result = gscript.read_command('v.distance',
from_=from_map,
to=to_map,
upload=upload,
separator='comma',
flags='p')
return result.split('\n')
Explanation: finds the nearest element in a vector map (to) for elements in another vector map (from) <br />
calls the GRASS v.distance command
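A usage sketch, assuming the 'points' and 'roads' maps declared at the top of the notebook:
nearest = computeDistance(points, roads)
print(nearest[:3])  # header line plus the first couple of distance records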
End of explanation
def extractFeatures(input_, type_, output):
where = "{0} = '{1}'".format(road_type_field, type_)
gscript.read_command('v.extract',
input_=input_,
where=where,
output=output,
overwrite=True)
Explanation: selects vector features from an existing vector map and creates a new vector map containing only the selected features <br />
calls the GRASS v.extract command
End of explanation
roads_table = connectToAttributeTable(map=roads)
roads_table.filters.select(road_type_field)
cursor = roads_table.execute()
result = np.array(cursor.fetchall())
cursor.close()
road_types = np.unique(result)
print(road_types)
Explanation: Get unique 'roads' types
End of explanation
point_table = connectToAttributeTable(map=points)
point_table.filters.select()
columns = point_table.columns.names()
cursor = point_table.execute()
result = np.array(cursor.fetchall())
cursor.close()
point_data = pandas.DataFrame(result, columns=columns).set_index('cat')
Explanation: Get 'points' attribute table
End of explanation
distances = pandas.DataFrame(columns=road_types, index=point_data.index)
progress_bar = pyprind.ProgBar(road_types.size, bar_char='█', title='Progress', monitor=True, stream=1, width=50)
for type_ in road_types:
# update progress bar
progress_bar.update(item_id=type_)
# extract road data based on type query
extractFeatures(input_=roads, type_=type_, output='roads_tmp')
# compute distance from points to road type
results = computeDistance(points, 'roads_tmp')
# save results to data frame
distances[type_] = [ d.split(',')[1] for d in results[1:len(results)-1] ]
# match index with SiteID
distances['SiteID'] = point_data['ID']
distances.set_index('SiteID', inplace=True)
Explanation: Loop through 'roads' types and compute the distances from all 'points'
End of explanation
distances.to_csv(distance_table_filename, header=False)
Explanation: Export distances table to a csv file
End of explanation |
15,172 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Interact Exercise 6
Imports
Put the standard imports for Matplotlib, Numpy and the IPython widgets in the following cell.
Step1: Exploring the Fermi distribution
In quantum statistics, the Fermi-Dirac distribution is related to the probability that a particle will be in a quantum state with energy $\epsilon$. The equation for the distribution $F(\epsilon)$ is
Step3: In this equation
Step4: Write a function plot_fermidist(mu, kT) that plots the Fermi distribution $F(\epsilon)$ as a function of $\epsilon$ as a line plot for the parameters mu and kT.
Use energies over the range $[0,10.0]$ and a suitable number of points.
Choose an appropriate x and y limit for your visualization.
Label your x and y axis and the overall visualization.
Customize your plot in 3 other ways to make it effective and beautiful.
Step5: Use interact with plot_fermidist to explore the distribution | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from IPython.display import Image
from IPython.html.widgets import interact, interactive, fixed
Explanation: Interact Exercise 6
Imports
Put the standard imports for Matplotlib, Numpy and the IPython widgets in the following cell.
End of explanation
Image('fermidist.png')
Explanation: Exploring the Fermi distribution
In quantum statistics, the Fermi-Dirac distribution is related to the probability that a particle will be in a quantum state with energy $\epsilon$. The equation for the distribution $F(\epsilon)$ is:
End of explanation
def fermidist(energy, mu, kT):
    """Compute the Fermi distribution at energy, mu and kT."""
    F = 1 / (np.exp((energy - mu) / kT) + 1)
return F
assert np.allclose(fermidist(0.5, 1.0, 10.0), 0.51249739648421033)
assert np.allclose(fermidist(np.linspace(0.0,1.0,10), 1.0, 10.0),
np.array([ 0.52497919, 0.5222076 , 0.51943465, 0.5166605 , 0.51388532,
0.51110928, 0.50833256, 0.50555533, 0.50277775, 0.5 ]))
Explanation: In this equation:
$\epsilon$ is the single particle energy.
$\mu$ is the chemical potential, which is related to the total number of particles.
$k$ is the Boltzmann constant.
$T$ is the temperature in Kelvin.
In the cell below, typeset this equation using LaTeX:
\begin{equation}
F(\epsilon) = \frac{1}{e^{\frac{\epsilon-\mu}{kT}} + 1}
\end{equation}
Define a function fermidist(energy, mu, kT) that computes the distribution function for a given value of energy, chemical potential mu and temperature kT. Note here, kT is a single variable with units of energy. Make sure your function works with an array and don't use any for or while loops in your code.
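A quick sanity check of the formula (not part of the original exercise): when $\epsilon = \mu$ the exponential term equals 1, so $F$ should be exactly 0.5 at any temperature.
print(1 / (np.exp((1.0 - 1.0) / 10.0) + 1))  # prints 0.5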
End of explanation
def plot_fermidist(mu, kT):
energy = np.linspace(0, 10, 100)
plt.figure(figsize=(15,5))
plt.plot(energy, fermidist(energy, mu, kT))
plt.grid(True)
plt.xlabel('Particle Energy')
plt.ylabel('Fermi-Dirac Distribution')
plt.xticks([0, 2, 4, 6, 8, 10], ['0', '2$\epsilon$', '4$\epsilon$', '6$\epsilon$', '8$\epsilon$', '10$\epsilon$'])
plt.ylim(-1,1)
plot_fermidist(4.0, 1.0)
assert True # leave this for grading the plot_fermidist function
Explanation: Write a function plot_fermidist(mu, kT) that plots the Fermi distribution $F(\epsilon)$ as a function of $\epsilon$ as a line plot for the parameters mu and kT.
Use energies over the range $[0,10.0]$ and a suitable number of points.
Choose an appropriate x and y limit for your visualization.
Label your x and y axis and the overall visualization.
Customize your plot in 3 other ways to make it effective and beautiful.
End of explanation
interact(plot_fermidist, mu = (0.0, 5.0, 0.5), kT = (0.1, 10.0, 0.1))
Explanation: Use interact with plot_fermidist to explore the distribution:
For mu use a floating point slider over the range $[0.0,5.0]$.
for kT use a floating point slider over the range $[0.1,10.0]$.
End of explanation |
15,173 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Compute MxNE with time-frequency sparse prior
The TF-MxNE solver is a distributed inverse method (like dSPM or sLORETA)
that promotes focal (sparse) sources (such as dipole fitting techniques).
The benefit of this approach is that
Step1: Run solver
Step2: View in 2D and 3D ("glass" brain like 3D plot) | Python Code:
# Author: Alexandre Gramfort <[email protected]>
#
# License: BSD (3-clause)
import mne
from mne.datasets import sample
from mne.minimum_norm import make_inverse_operator, apply_inverse
from mne.inverse_sparse import tf_mixed_norm
from mne.viz import plot_sparse_source_estimates
print(__doc__)
data_path = sample.data_path()
subjects_dir = data_path + '/subjects'
fwd_fname = data_path + '/MEG/sample/sample_audvis-meg-eeg-oct-6-fwd.fif'
ave_fname = data_path + '/MEG/sample/sample_audvis-no-filter-ave.fif'
cov_fname = data_path + '/MEG/sample/sample_audvis-shrunk-cov.fif'
# Read noise covariance matrix
cov = mne.read_cov(cov_fname)
# Handling average file
condition = 'Left visual'
evoked = mne.read_evokeds(ave_fname, condition=condition, baseline=(None, 0))
evoked = mne.pick_channels_evoked(evoked)
# We make the window slightly larger than what you'll eventually be interested
# in ([-0.05, 0.3]) to avoid edge effects.
evoked.crop(tmin=-0.1, tmax=0.4)
# Handling forward solution
forward = mne.read_forward_solution(fwd_fname, force_fixed=False,
surf_ori=True)
Explanation: Compute MxNE with time-frequency sparse prior
The TF-MxNE solver is a distributed inverse method (like dSPM or sLORETA)
that promotes focal (sparse) sources (such as dipole fitting techniques).
The benefit of this approach is that:
it is spatio-temporal without assuming stationarity (sources properties
can vary over time)
activations are localized in space, time and frequency in one step.
with a built-in filtering process based on a short time Fourier
transform (STFT), data does not need to be low passed (just high pass
to make the signals zero mean).
the solver solves a convex optimization problem, hence cannot be
trapped in local minima.
References:
A. Gramfort, D. Strohmeier, J. Haueisen, M. Hamalainen, M. Kowalski
Time-Frequency Mixed-Norm Estimates: Sparse M/EEG imaging with
non-stationary source activations
Neuroimage, Volume 70, 15 April 2013, Pages 410-422, ISSN 1053-8119,
DOI: 10.1016/j.neuroimage.2012.12.051.
A. Gramfort, D. Strohmeier, J. Haueisen, M. Hamalainen, M. Kowalski
Functional Brain Imaging with M/EEG Using Structured Sparsity in
Time-Frequency Dictionaries
Proceedings Information Processing in Medical Imaging
Lecture Notes in Computer Science, 2011, Volume 6801/2011,
600-611, DOI: 10.1007/978-3-642-22092-0_49
https://doi.org/10.1007/978-3-642-22092-0_49
End of explanation
# alpha_space regularization parameter is between 0 and 100 (100 is high)
alpha_space = 50. # spatial regularization parameter
# alpha_time parameter promotes temporal smoothness
# (0 means no temporal regularization)
alpha_time = 1. # temporal regularization parameter
loose, depth = 0.2, 0.9 # loose orientation & depth weighting
# Compute dSPM solution to be used as weights in MxNE
inverse_operator = make_inverse_operator(evoked.info, forward, cov,
loose=loose, depth=depth)
stc_dspm = apply_inverse(evoked, inverse_operator, lambda2=1. / 9.,
method='dSPM')
# Compute TF-MxNE inverse solution
stc, residual = tf_mixed_norm(evoked, forward, cov, alpha_space, alpha_time,
loose=loose, depth=depth, maxit=200, tol=1e-4,
weights=stc_dspm, weights_min=8., debias=True,
wsize=16, tstep=4, window=0.05,
return_residual=True)
# Crop to remove edges
stc.crop(tmin=-0.05, tmax=0.3)
evoked.crop(tmin=-0.05, tmax=0.3)
residual.crop(tmin=-0.05, tmax=0.3)
# Show the evoked response and the residual for gradiometers
ylim = dict(grad=[-120, 120])
evoked.pick_types(meg='grad', exclude='bads')
evoked.plot(titles=dict(grad='Evoked Response: Gradiometers'), ylim=ylim,
proj=True)
residual.pick_types(meg='grad', exclude='bads')
residual.plot(titles=dict(grad='Residuals: Gradiometers'), ylim=ylim,
proj=True)
Explanation: Run solver
End of explanation
plot_sparse_source_estimates(forward['src'], stc, bgcolor=(1, 1, 1),
opacity=0.1, fig_name="TF-MxNE (cond %s)"
% condition, modes=['sphere'], scale_factors=[1.])
time_label = 'TF-MxNE time=%0.2f ms'
clim = dict(kind='value', lims=[10e-9, 15e-9, 20e-9])
brain = stc.plot('sample', 'inflated', 'rh', clim=clim, time_label=time_label,
smoothing_steps=5, subjects_dir=subjects_dir)
brain.show_view('medial')
brain.set_data_time_index(120)
brain.add_label("V1", color="yellow", scalar_thresh=.5, borders=True)
brain.add_label("V2", color="red", scalar_thresh=.5, borders=True)
Explanation: View in 2D and 3D ("glass" brain like 3D plot)
End of explanation |
15,174 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: <h1>Audio Applications</h1>
<BR>
Now that we have established the theory behind the geometry of 1D time series sliding window embeddings, we will look at our first real applications
Step2: <h1>Biphonation Overview</h1>
Biphonation refers to the presence of two or more simultaneous frequencies in a signal which are "incommensurate"; that is, their frequencies are linearly independent over the rational numbers. In other words, the frequencies are "inharmonic." We saw a synthetic example in class 1 of cos(x) + cos(pi x). Today, we will examine how this manifests itself in biology with horse whinnies that occur during states of high emotional valence. During the steady state of a horse whinnie, biphonation is found
<table>
<tr><td>
<img src = "Whinnie.png">
</td></tr>
<tr><td>
<b>Figure 1</b>
Step3: The code below will extract a subsection of the signal and perform a sliding window embedding + 1D persistent homology. Using the interactive plot of the audio waveform above, find two different time ranges to plot
Step4: <h1>Music Analysis</h1>
<img src = "journey.jpg"><BR><BR>
Music is full of repetition. For instance, there is usually a hierarchy of rhythm which determines how the music "pulses," or repeates itself in beat patterns. Often, a dominant rhythm level is deemed the "tempo" of the music. Typical tempos range from about 50 beats per minute to 200 beats per minute. Let's take a moderate tempo level of 120 beats per minute, for instance, which occurs in the song "Don't Stop Believin'" by Journey. This corresponds to a period of 0.5 seconds. As we saw in the horse example, sound is sampled at 44100 samples per second. This corresponds to an ideal sliding window interval length of 22050. Let's try to compute the sliding window embedding of the raw audio to see if the tempo manifests itself with TDA. First, we will load in "Don't Stop Believing" below
Step5: <BR><BR>
Now let's do a sliding window with a window length equal to the sample rate over 2, corresponding to the fact that a beat period is a half of a second in this song. We will have to have a large <code>Tau</code> and <code>dT</code>, since there is such a high sampling rate, because otherwise the TDA code will grind to a halt with way too many points. We will also need to set up special sliding window code that skips the spline interpolation step, because this will also be prohibitively slow at this sampling rate. In other words, we will assume that <code>Tau</code> and <code>dT</code> are integers. Here's the code that does all of this on the first three seconds of audio
Step6: Unfortunately, the sample rate is just to high and the signal is just too messy for this algorithm to work. We will have to do some more sophisticated preprocessing before applying the algorithm
<h1>Audio Novelty Functions And Music vs Speech</h1>
One way to deal with the fact that music is both messy and at a high sampling rate is to derive something called the "audio novelty function," which is designed explicitly to pick up on rhythmic events. To see how it's motivated, let's look at the <a href = "https
Step7: You might notice that there are vertical streaks in a semi-periodic pattern. These correspond to "broadband percussive events," or, on other words, likely onsets for beats when drums occur. An audio novelty function is derived from a spectrogram by looking at the difference between successive frames to try to pick up on this. The code below extracts the audio novelty function and displays it for the same audio snippet.
Step8: Not only is the audio novelty function a cleaner signal, but it is also at a much lower sample rate. Since the "hop size" between each spectrogram window is 256 samples, the temporal resolution is coarser by that factor.
<h2>Music Audio Novelty Embedding</h2>
Let's now try our sliding window with a snippet of the audio novelty function of the previous example instead of the raw audio
Step9: <h2>Speech Example</h2>
In our final experiment in this module, we will look at a sliding window embedding of the audio novelty function on a speech excerpt which does not have a clear rhythmic structure (courtesy of <a href = "http | Python Code:
##Do all of the imports and setup inline plotting
%matplotlib notebook
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA
from scipy.interpolate import InterpolatedUnivariateSpline
from ripser import ripser
from persim import plot_diagrams
import scipy.io.wavfile
from IPython.display import clear_output
def getSlidingWindow(x, dim, Tau, dT):
Return a sliding window of a time series,
using arbitrary sampling. Use linear interpolation
to fill in values in windows not on the original grid
Parameters
----------
x: ndarray(N)
The original time series
dim: int
Dimension of sliding window (number of lags+1)
Tau: float
Length between lags, in units of time series
dT: float
Length between windows, in units of time series
Returns
-------
X: ndarray(N, dim)
All sliding windows stacked up
N = len(x)
NWindows = int(np.floor((N-dim*Tau)/dT))
if NWindows <= 0:
print("Error: Tau too large for signal extent")
return np.zeros((3, dim))
X = np.zeros((NWindows, dim))
spl = InterpolatedUnivariateSpline(np.arange(N), x)
for i in range(NWindows):
idxx = dT*i + Tau*np.arange(dim)
start = int(np.floor(idxx[0]))
end = int(np.ceil(idxx[-1]))+2
# Only take windows that are within range
if end >= len(x):
X = X[0:i, :]
break
X[i, :] = spl(idxx)
return X
Explanation: <h1>Audio Applications</h1>
<BR>
Now that we have established the theory behind the geometry of 1D time series sliding window embeddings, we will look at our first real applications: audio signals. First, we will import all of the necessary libraries and define the sliding window code as before.
End of explanation
#Read in the audio file. Fs is the sample rate, and
#X is the audio signal
Fs, X = scipy.io.wavfile.read("horsewhinnie.wav")
plt.figure()
plt.plot(np.arange(len(X))/float(Fs), X)
plt.xlabel("Time (Seconds)")
plt.title("Horse Whinnie Waveform")
plt.show()
from IPython.display import Audio
# load a remote WAV file
Audio('horsewhinnie.wav')
Explanation: <h1>Biphonation Overview</h1>
Biphonation refers to the presence of two or more simultaneous frequencies in a signal which are "incommensurate"; that is, their frequencies are linearly independent over the rational numbers. In other words, the frequencies are "inharmonic." We saw a synthetic example in class 1 of cos(x) + cos(pi x). Today, we will examine how this manifests itself in biology with horse whinnies that occur during states of high emotional valence. During the steady state of a horse whinnie, biphonation is found
<table>
<tr><td>
<img src = "Whinnie.png">
</td></tr>
<tr><td>
<b>Figure 1</b>: Audio of a horse whinnie. Courtesy of <a href = "http://www.nature.com/articles/srep09989#s1">http://www.nature.com/articles/srep09989#s1</a></td></tr>
</table>
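For intuition, the cos(x) + cos(pi x) example mentioned above can be synthesized directly (a small illustrative snippet, not part of the original notebook):
x = np.linspace(0, 50, 5000)
biphonic = np.cos(x) + np.cos(np.pi * x)  # two incommensurate frequencies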
<!--<iframe width="560" height="315" src="https://www.youtube.com/embed/f8DdGpHkzu4" frameborder="0" allowfullscreen></iframe>!-->
<BR><BR>
<h1>Biphonation Example with Horse Whinnies</h1>
<BR>
Let's now load the audio from the horse whinnie example, interactively plot the audio waveform, and listen to it.
End of explanation
#These variables are used to adjust the window size
F0 = 493 #First fundamental frequency
G0 = 1433 #Second fundamental frequency
###TODO: Modify this variable (time in seconds)
time = 0.91
#Step 1: Extract an audio snippet starting at the chosen time
SigLen = 512 #The number of samples to take after the start time
iStart = int(round(time*Fs))
x = X[iStart:iStart + SigLen]
W = int(round(Fs/G0))
#Step 2: Get the sliding window embedding
Y = getSlidingWindow(x, W, 2, 2)
#Mean-center and normalize
Y = Y - np.mean(Y, 1)[:, None]
Y = Y/np.sqrt(np.sum(Y**2, 1))[:, None]
#Step 3: Do the 1D rips filtration
PDs = ripser(Y, maxdim=1)['dgms']
PD = PDs[1]
#Step 4: Figure out the second largest persistence
sP = 0
sPIdx = 0
if PD.shape[0] > 1:
Pers = PD[:, 1] - PD[:, 0]
sPIdx = np.argsort(-Pers)[1]
sP = Pers[sPIdx]
#Step 5: Plot the results
plt.figure(figsize=(8, 4))
plt.subplot(121)
plt.title("Starting At %g Seconds"%time)
plt.plot(time + np.arange(SigLen)/Fs, x)
plt.xlabel("Time")
plt.subplot(122)
plot_diagrams(PDs)
plt.plot([PD[sPIdx, 0]]*2, PD[sPIdx, :], 'r')
plt.scatter(PD[sPIdx, 0], PD[sPIdx, 1], 20, 'r')
plt.title("Second Largest Persistence: %g"%sP)
Explanation: The code below will extract a subsection of the signal and perform a sliding window embedding + 1D persistent homology. Using the interactive plot of the audio waveform above, find two different time ranges to plot:<BR>
<ol>
<li>A region with a pure tone (single sinusoid), which can be found towards the beginning</li>
<li>A region with biphonation, which can be found towards the middle. Ensure that this region has two strongly persistent classes with early birth times. The class will compete to find the region which shows biphonation the most clearly with these statistics, and the score will be based on the <i>second largest persistence</i>, which will be indicated in the persistence diagram plot<BR></li>
</ol>
To interactively search for regions, use the pan icon
<img src = "PanIcon.png">
then left click and drag to translate, and right click and drag to zoom. Once you've found a region, modify the "time" variable in the code below accordingly, and run it.
End of explanation
Fs, X = scipy.io.wavfile.read("journey.wav") #Don't Stop Believing
X = X/(2.0**15) #Loaded in as 16 bit shorts, convert to float
plt.figure()
plt.plot(np.arange(len(X))/float(Fs), X)
plt.xlabel("Time (Seconds)")
plt.title("Don't Stop Believin")
plt.show()
Audio('journey.wav')
Explanation: <h1>Music Analysis</h1>
<img src = "journey.jpg"><BR><BR>
Music is full of repetition. For instance, there is usually a hierarchy of rhythm which determines how the music "pulses," or repeates itself in beat patterns. Often, a dominant rhythm level is deemed the "tempo" of the music. Typical tempos range from about 50 beats per minute to 200 beats per minute. Let's take a moderate tempo level of 120 beats per minute, for instance, which occurs in the song "Don't Stop Believin'" by Journey. This corresponds to a period of 0.5 seconds. As we saw in the horse example, sound is sampled at 44100 samples per second. This corresponds to an ideal sliding window interval length of 22050. Let's try to compute the sliding window embedding of the raw audio to see if the tempo manifests itself with TDA. First, we will load in "Don't Stop Believing" below:
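The window-length arithmetic from that paragraph, spelled out:
Fs_audio = 44100                    # audio samples per second
beat_period = 60.0 / 120            # 120 bpm -> 0.5 seconds per beat
print(int(Fs_audio * beat_period))  # 22050 samples per beat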
End of explanation
#Sliding window code here assumes integer x, dim, and Tau so no interpolation
#is needed (for computational efficiency)
def getSlidingWindowInteger(x, dim, Tau, dT):
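    # Same idea as getSlidingWindow above, but dim, Tau and dT are assumed to be
    # integers, so window samples are taken directly with no spline interpolation.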
N = len(x)
NWindows = int(np.floor((N-dim*Tau)/dT)) #The number of windows
if NWindows <= 0:
print("Error: Tau too large for signal extent")
return np.zeros((3, dim))
X = np.zeros((NWindows, dim)) #Create a 2D array which will store all windows
idx = np.arange(N)
for i in range(NWindows):
#Figure out the indices of the samples in this window
idxx = np.array(dT*i + Tau*np.arange(dim), dtype=np.int32)
X[i, :] = x[idxx]
return X
#Note that dim*Tau here spans a half a second of audio,
#since Fs is the sample rate
dim = round(Fs/200)
Tau = 100
dT = Fs/100
Y = getSlidingWindowInteger(X[0:Fs*3], dim, Tau, dT)
print("Y.shape = ", Y.shape)
#Mean-center and normalize
Y = Y - np.mean(Y, 1)[:, None]
Y = Y/np.sqrt(np.sum(Y**2, 1))[:, None]
PDs = ripser(Y, maxdim=1)['dgms']
pca = PCA()
Z = pca.fit_transform(Y)
plt.figure(figsize=(8, 4))
plt.subplot(121)
plt.title("2D PCA")
plt.scatter(Z[:, 0], Z[:, 1])
plt.subplot(122)
plot_diagrams(PDs)
plt.title("Persistence Diagram")
plt.show()
Explanation: <BR><BR>
Now let's do a sliding window with a window length equal to the sample rate over 2, corresponding to the fact that a beat period is a half of a second in this song. We will have to have a large <code>Tau</code> and <code>dT</code>, since there is such a high sampling rate, because otherwise the TDA code will grind to a halt with way too many points. We will also need to set up special sliding window code that skips the spline interpolation step, because this will also be prohibitively slow at this sampling rate. In other words, we will assume that <code>Tau</code> and <code>dT</code> are integers. Here's the code that does all of this on the first three seconds of audio:
End of explanation
from MusicFeatures import *
#Compute the power spectrogram and audio novelty function
winSize = 512
hopSize = 256
plt.figure()
(S, novFn) = getAudioNoveltyFn(X, Fs, winSize, hopSize)
plt.imshow(np.log(S.T), cmap = 'afmhot', aspect = 'auto')
plt.title('Log-frequency power spectrogram')
plt.show()
Explanation: Unfortunately, the sample rate is just too high and the signal is just too messy for this algorithm to work. We will have to do some more sophisticated preprocessing before applying the algorithm
<h1>Audio Novelty Functions And Music vs Speech</h1>
One way to deal with the fact that music is both messy and at a high sampling rate is to derive something called the "audio novelty function," which is designed explicitly to pick up on rhythmic events. To see how it's motivated, let's look at the <a href = "https://en.wikipedia.org/wiki/Spectrogram">audio spectrogram</a> of this song
End of explanation
plt.figure(figsize=(8, 4))
#Plot the spectrogram again
plt.subplot(211)
plt.imshow(np.log(S.T), cmap = 'afmhot', aspect = 'auto')
plt.ylabel('Frequency Bin')
plt.title('Log-frequency power spectrogram')
#Plot the audio novelty function
plt.subplot(212)
plt.plot(np.arange(len(novFn))*hopSize/float(Fs), novFn)
plt.xlabel("Time (Seconds)")
plt.ylabel('Audio Novelty')
plt.xlim([0, len(novFn)*float(hopSize)/Fs])
plt.show()
Explanation: You might notice that there are vertical streaks in a semi-periodic pattern. These correspond to "broadband percussive events," or, in other words, likely onsets for beats when drums occur. An audio novelty function is derived from a spectrogram by looking at the difference between successive frames to try to pick up on this. The code below extracts the audio novelty function and displays it for the same audio snippet.
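As a rough sketch of that idea (illustrative only; not the actual implementation inside MusicFeatures.getAudioNoveltyFn):
def simple_novelty(S):
    # half-wave rectified frame-to-frame difference, summed over frequency bins
    diff = np.maximum(S[1:, :] - S[:-1, :], 0)
    return diff.sum(axis=1)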
End of explanation
(S, novFn) = getAudioNoveltyFn(X, Fs, winSize, hopSize)
#Take the first 3 seconds of the novelty function
fac = int(Fs/hopSize)
novFn = novFn[fac*4:fac*7]
#Make sure the window size is half of a second, noting that
#the audio novelty function has been downsampled by a "hopSize" factor
dim = 20
Tau = (Fs/2)/(float(hopSize)*dim)
dT = 1
Y = getSlidingWindowInteger(novFn, dim, Tau, dT)
print("Y.shape = ", Y.shape)
#Mean-center and normalize
Y = Y - np.mean(Y, 1)[:, None]
Y = Y/np.sqrt(np.sum(Y**2, 1))[:, None]
PDs = ripser(Y, maxdim=1)['dgms']
pca = PCA()
Z = pca.fit_transform(Y)
plt.figure(figsize=(12, 6))
plt.subplot(121)
plt.title("2D PCA")
plt.scatter(Z[:, 0], Z[:, 1])
plt.subplot(122)
plot_diagrams(PDs)
plt.title("Persistence Diagram")
plt.show()
Explanation: Not only is the audio novelty function a cleaner signal, but it is also at a much lower sample rate. Since the "hop size" between each spectrogram window is 256 samples, the temporal resolution is coarser by that factor.
<h2>Music Audio Novelty Embedding</h2>
Let's now try our sliding window with a snippet of the audio novelty function of the previous example instead of the raw audio:<BR>
End of explanation
#Read in the audio file. Fs is the sample rate, and
#X is the audio signal
Fs, X = scipy.io.wavfile.read("speech.wav")
X = X/(2.0**15)
(S, novFn) = getAudioNoveltyFn(X, Fs, winSize, hopSize)
plt.figure()
plt.plot(np.arange(len(novFn))*hopSize/float(Fs), novFn)
plt.xlabel("Time (Seconds)")
plt.title("Audio Novelty Function for Speech")
Audio('speech.wav')
plt.show()
#Get the novelty function for the first three seconds, and use the
#exact same parameters as before
novFn = novFn[0:int((Fs/hopSize)*3)]
dim = 20
Tau = (Fs/2)/(float(hopSize)*dim)
dT = 1
Y = getSlidingWindowInteger(novFn, dim, Tau, dT)
print("Y.shape = ", Y.shape)
#Mean-center and normalize
Y = Y - np.mean(Y, 1)[:, None]
Y = Y/np.sqrt(np.sum(Y**2, 1))[:, None]
PDs = ripser(Y, maxdim=1)['dgms']
pca = PCA()
Z = pca.fit_transform(Y)
plt.figure(figsize=(12, 6))
plt.subplot(121)
plt.title("2D PCA")
plt.scatter(Z[:, 0], Z[:, 1])
plt.subplot(122)
plot_diagrams(PDs[1], labels=['H1'])
plt.title("Persistence Diagram")
plt.show()
Explanation: <h2>Speech Example</h2>
In our final experiment in this module, we will look at a sliding window embedding of the audio novelty function on a speech excerpt which does not have a clear rhythmic structure (courtesy of <a href = "http://marsyasweb.appspot.com/download/data_sets/">http://marsyasweb.appspot.com/download/data_sets/</a>). Click on the cell below to load the speech audio, and click on the cell below that to run the sliding window embedding + persistent homology
End of explanation |
15,175 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Setup
Step1: Sometimes it's useful to have a zero value in the dataset, e.g. for padding
Step2: 3 char model
Step3: RNN
Step4: The first character of each sequence goes through dense_in(), to create our first hidden activations.
Then for each successive layer, we combine the output of dense_in() on the next character with the output of dense_hidden() on the current hidden state, to create the new hidden state
Step5: Keras RNN
Step6: Returning sequences
Step7: Sequence model with Keras
Step8: Stateful model with Keras | Python Code:
path = get_file('nietzsche.txt', origin="https://s3.amazonaws.com/text-datasets/nietzsche.txt")
text = open(path).read()
print('corpus length:', len(text))
text
chars = sorted(list(set(text)))
vocab_size = len(chars)+1
print('total chars:', vocab_size)
chars
Explanation: Setup
End of explanation
chars.insert(0, '\0')
''.join(chars)
char_indices = dict((c,i) for i,c in enumerate(chars))
indices_char = dict((i, c) for i,c in enumerate(chars))
idx = [char_indices[c] for c in text]
idx[:10]
''.join(indices_char[i] for i in idx[:70])
Explanation: Sometimes it's useful to have a zero value in the dataset, e.g. for padding
End of explanation
c1_dat = [idx[i] for i in xrange(0, len(idx)-4, 3)]
c2_dat = [idx[i+1] for i in xrange(0, len(idx)-4, 3)]
c3_dat = [idx[i+2] for i in xrange(0, len(idx)-4, 3)]
c4_dat = [idx[i+3] for i in xrange(0, len(idx)-4, 3)]
x1 = np.stack(c1_dat[:-2])
x2 = np.stack(c2_dat[:-2])
x3 = np.stack(c3_dat[:-2])
y = np.stack(c4_dat[:-2])
x1
x2
x3
y
def embedding_input(name, n_in, n_out):
inp = Input(shape=(1,), dtype='int64', name=name)
emb = Embedding(n_in, n_out, input_length=1)(inp)
return inp, Flatten()(emb)
n_fac=42
c1_in, c1 = embedding_input('c1', vocab_size, n_fac)
c2_in, c2 = embedding_input('c2', vocab_size, n_fac)
c3_in, c3 = embedding_input('c3', vocab_size, n_fac)
n_hidden = 256
dense_in = Dense(n_hidden, activation='relu')
c1_hidden = dense_in(c1)
c2_dense = dense_in(c2)
c3_dense = dense_in(c3)
dense_hidden = Dense(n_hidden, activation='tanh')
hidden_2 = dense_hidden(c1_hidden)
c2_hidden = merge([c2_dense, hidden_2])
hidden_3 = dense_hidden(c2_hidden)
c3_hidden = merge([c3_dense, hidden_3])
dense_out = Dense(vocab_size, activation='softmax')
c4_out = dense_out(c3_hidden)
model = Model([c1_in, c2_in, c3_in], c4_out)
model.compile(loss = 'sparse_categorical_crossentropy', optimizer=Adam())
model.summary()
model.fit([x1, x2, x3], y, batch_size=64, nb_epoch=4, verbose=2)
model.optimizer.lr=0.01
model.fit([x1, x2, x3], y, batch_size=64, nb_epoch=4, verbose=2)
model.optimizer.lr = 0.000001
model.fit([x1, x2, x3], y, batch_size=64, nb_epoch=4, verbose=2)
model.optimizer.lr = 0.01
model.fit([x1, x2, x3], y, batch_size=64, nb_epoch=4, verbose=2)
def get_next(inp):
idxs = [char_indices[c] for c in inp]
arrs = [np.array(i)[np.newaxis] for i in idxs]
p = model.predict(arrs)
i = np.argmax(p)
return chars[i]
get_next('phi')
get_next(' th')
get_next(' an')
Explanation: 3 char model
End of explanation
cs=8
c_in_dat = [[idx[i+n] for i in xrange(0, len(idx)-1-cs, cs)]
for n in range(cs)]
c_out_dat = [idx[i+cs] for i in xrange(0, len(idx)-1-cs, cs)]
xs = [np.stack(c[:-2]) for c in c_in_dat]
c_in_dat
c_out_dat
xs
len(xs)
xs[0].shape
y = np.stack(c_out_dat[:-2])
[xs[n][:cs] for n in range(cs)]
y[:cs]
n_fac = 42
def embedding_input(name, n_in, n_out):
inp = Input(shape=(1,), dtype='int64', name=name+'_in')
emb = Embedding(n_in, n_out, input_length=1, name=name+'_emb')(inp)
return inp, Flatten()(emb)
c_ins = [embedding_input('c'+str(n), vocab_size, n_fac) for n in range(cs)]
c_ins
n_hidden = 256
dense_in = Dense(n_hidden, activation='relu')
dense_hidden = Dense(n_hidden, activation='relu', init='identity')
dense_out = Dense(vocab_size, activation='softmax')
Explanation: RNN
End of explanation
hidden = dense_in(c_ins[0][1])
for i in range(1,cs):
c_dense = dense_in(c_ins[i][1])
hidden = dense_hidden(hidden)
hidden = merge([c_dense, hidden])
c_out = dense_out(hidden)
model = Model([c[0] for c in c_ins], c_out)
model.summary()
model.compile(loss = 'sparse_categorical_crossentropy', optimizer=Adam())
model.fit(xs, y, batch_size=64, nb_epoch=12, verbose=2)
def get_next(inp):
idxs = [np.array(char_indices[c])[np.newaxis] for c in inp]
p = model.predict(idxs)
return chars[np.argmax(p)]
get_next('for thos')
get_next('part of ')
get_next('queens a')
Explanation: The first character of each sequence goes through dense_in(), to create our first hidden activations.
Then for each successive layer, we combine the output of dense_in() on the next character with the output of dense_hidden() on the current hidden state, to create the new hidden state
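In symbols, the recurrence being wired up is roughly hidden_{n+1} = dense_in(char_{n+1}) + dense_hidden(hidden_n), with dense_out applied to the final hidden state to predict the next character.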
End of explanation
n_hidden, n_fac, cs, vocab_size = (256, 42, 8, 86)
model = Sequential([
Embedding(vocab_size, n_fac, input_length=cs),
SimpleRNN(n_hidden, activation='relu', inner_init='identity'),
Dense(vocab_size, activation='softmax')
])
model.summary()
model.compile(loss='sparse_categorical_crossentropy', optimizer=Adam())
model.fit(np.concatenate(xs,axis=1), y, batch_size=64, nb_epoch=8, verbose=2)
def get_next_keras(inp):
idxs = [char_indices[c] for c in inp]
arrs = np.array(idxs)[np.newaxis,:]
p = model.predict(arrs)[0]
return chars[np.argmax(p)]
get_next_keras('this is ')
get_next_keras('part of ')
get_next_keras('queens a')
Explanation: Keras RNN
End of explanation
c_out_dat = [[idx[i+n] for i in xrange(1, len(idx)-cs, cs)]
for n in range(cs)]
ys = [np.stack(c[:-2]) for c in c_out_dat]
[xs[n][:cs] for n in range(cs)]
[ys[n][:cs] for n in range(cs)]
dense_in = Dense(n_hidden, activation='relu')
dense_hidden = Dense(n_hidden, activation='relu', init='identity')
dense_out = Dense(vocab_size, activation='softmax', name='output')
inp1 = Input(shape=(n_fac,), name='zeros')
hidden = dense_in(inp1)
outs = []
for i in range(cs):
c_dense = dense_in(c_ins[i][1])
hidden = dense_hidden(hidden)
hidden = merge([c_dense, hidden], mode='sum')
# every layer now has an output
outs.append(dense_out(hidden))
model = Model([inp1] + [c[0] for c in c_ins], outs)
model.compile(loss='sparse_categorical_crossentropy', optimizer=Adam())
zeros = np.tile(np.zeros(n_fac), (len(xs[0]),1))
zeros.shape
model.fit([zeros]+xs, ys, batch_size=64, nb_epoch=12, verbose=2)
def get_nexts(inp):
idxs = [char_indices[c] for c in inp]
arrs = [np.array(i)[np.newaxis] for i in idxs]
p = model.predict([np.zeros(n_fac)[np.newaxis,:]] + arrs)
print(list(inp))
return [chars[np.argmax(o)] for o in p]
get_nexts(' this is')
get_nexts(' part of')
Explanation: Returning sequences
End of explanation
n_hidden, n_fac, cs, vocab_size
model=Sequential([
Embedding(vocab_size, n_fac, input_length=cs),
SimpleRNN(n_hidden, return_sequences=True, activation='relu', inner_init='identity'),
TimeDistributed(Dense(vocab_size, activation='softmax')),
])
model.summary()
model.compile(loss='sparse_categorical_crossentropy', optimizer=Adam())
xs[0].shape
x_rnn=np.stack(xs, axis=1)
y_rnn=np.expand_dims(np.stack(ys, axis=1), -1)
x_rnn.shape, y_rnn.shape
x_rnn[:,:,0].shape
model.fit(x_rnn[:,:,0], y_rnn[:,:,0], batch_size=64, nb_epoch=8, verbose=2)
def get_nexts_keras(inp):
idxs = [char_indices[c] for c in inp]
arr = np.array(idxs)[np.newaxis,:]
p = model.predict(arr)[0]
print(list(inp))
return [chars[np.argmax(o)] for o in p]
get_nexts_keras(' this is')
Explanation: Sequence model with Keras
End of explanation
bs=64
model=Sequential([
Embedding(vocab_size, n_fac, input_length=cs, batch_input_shape=(bs,8)),
BatchNormalization(),
LSTM(n_hidden, return_sequences=True, stateful=True),
TimeDistributed(Dense(vocab_size, activation='softmax')),
])
model.compile(loss='sparse_categorical_crossentropy', optimizer=Adam())
mx = len(x_rnn)//bs*bs
mx
x_rnn.shape
y_rnn.shape
model.fit(x_rnn[:mx, :, 0], y_rnn[:mx, :, :, 0], batch_size=bs, nb_epoch=4, shuffle=False, verbose=2)
model.optimizer.lr=1e-4
model.fit(x_rnn[:mx, :, 0], y_rnn[:mx, :, :, 0], batch_size=bs, nb_epoch=4, shuffle=False, verbose=2)
model.fit(x_rnn[:mx, :, 0], y_rnn[:mx, :, :, 0], batch_size=bs, nb_epoch=4, shuffle=False, verbose=2)
Explanation: Stateful model with Keras
End of explanation |
15,176 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Dropout
Dropout [1] is a technique for regularizing neural networks by randomly setting some features to zero during the forward pass. In this exercise you will implement a dropout layer and modify your fully-connected network to optionally use dropout.
[1] Geoffrey E. Hinton et al, "Improving neural networks by preventing co-adaptation of feature detectors", arXiv 2012
Step2: Dropout forward pass
In the file cs231n/layers.py, implement the forward pass for dropout. Since dropout behaves differently during training and testing, make sure to implement the operation for both modes.
Once you have done so, run the cell below to test your implementation.
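If you want a reference point while writing that code, the sketch below shows one common way to implement an inverted-dropout forward pass. It is only an illustration: the helper name, the meaning of p (treated here as the keep probability), and the returned cache layout are assumptions rather than the course skeleton's exact interface.
import numpy as np
def dropout_forward_sketch(x, dropout_param):
    # Hedged sketch of inverted dropout; p is assumed to be the keep probability.
    p, mode = dropout_param['p'], dropout_param['mode']
    if 'seed' in dropout_param:
        np.random.seed(dropout_param['seed'])
    if mode == 'train':
        mask = (np.random.rand(*x.shape) < p) / p  # scale now so test time is a no-op
        out = x * mask
    else:
        mask = None
        out = x
    return out.astype(x.dtype, copy=False), (dropout_param, mask)
Scaling by 1/p at training time (the "inverted" convention) keeps the expected activation unchanged, which is why the test-time branch can simply pass the input through.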
Step3: Dropout backward pass
In the file cs231n/layers.py, implement the backward pass for dropout. After doing so, run the following cell to numerically gradient-check your implementation.
Step4: Fully-connected nets with Dropout
In the file cs231n/classifiers/fc_net.py, modify your implementation to use dropout. Specifically, if the constructor of the net receives a nonzero value for the dropout parameter, then the net should add dropout immediately after every ReLU nonlinearity. After doing so, run the following to numerically gradient-check your implementation.
Step5: Regularization experiment
As an experiment, we will train a pair of two-layer networks on 500 training examples | Python Code:
# As usual, a bit of setup
import sys
import os
sys.path.insert(0, os.path.abspath('..'))
import time
import numpy as np
import matplotlib.pyplot as plt
from cs231n.classifiers.fc_net import *
from cs231n.data_utils import get_CIFAR10_data
from cs231n.gradient_check import eval_numerical_gradient, eval_numerical_gradient_array
from cs231n.solver import Solver
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
def rel_error(x, y):
""" returns relative error """
return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))
# Load the (preprocessed) CIFAR10 data.
data = get_CIFAR10_data()
for k, v in data.iteritems():
print '%s: ' % k, v.shape
Explanation: Dropout
Dropout [1] is a technique for regularizing neural networks by randomly setting some features to zero during the forward pass. In this exercise you will implement a dropout layer and modify your fully-connected network to optionally use dropout.
[1] Geoffrey E. Hinton et al, "Improving neural networks by preventing co-adaptation of feature detectors", arXiv 2012
End of explanation
x = np.random.randn(500, 500) + 10
for p in [0.3, 0.6, 0.75]:
out, _ = dropout_forward(x, {'mode': 'train', 'p': p})
out_test, _ = dropout_forward(x, {'mode': 'test', 'p': p})
print 'Running tests with p = ', p
print 'Mean of input: ', x.mean()
print 'Mean of train-time output: ', out.mean()
print 'Mean of test-time output: ', out_test.mean()
print 'Fraction of train-time output set to zero: ', (out == 0).mean()
print 'Fraction of test-time output set to zero: ', (out_test == 0).mean()
print
Explanation: Dropout forward pass
In the file cs231n/layers.py, implement the forward pass for dropout. Since dropout behaves differently during training and testing, make sure to implement the operation for both modes.
Once you have done so, run the cell below to test your implementation.
End of explanation
x = np.random.randn(10, 10) + 10
dout = np.random.randn(*x.shape)
dropout_param = {'mode': 'train', 'p': 0.8, 'seed': 123}
out, cache = dropout_forward(x, dropout_param)
dx = dropout_backward(dout, cache)
dx_num = eval_numerical_gradient_array(lambda xx: dropout_forward(xx, dropout_param)[0], x, dout)
print 'dx relative error: ', rel_error(dx, dx_num)
Explanation: Dropout backward pass
In the file cs231n/layers.py, implement the backward pass for dropout. After doing so, run the following cell to numerically gradient-check your implementation.
End of explanation
N, D, H1, H2, C = 2, 15, 20, 30, 10
X = np.random.randn(N, D)
y = np.random.randint(C, size=(N,))
for dropout in [0, 0.25, 0.5]:
print 'Running check with dropout = ', dropout
model = FullyConnectedNet([H1, H2], input_dim=D, num_classes=C,
weight_scale=5e-2, dtype=np.float64,
dropout=dropout, seed=123)
loss, grads = model.loss(X, y)
print 'Initial loss: ', loss
for name in sorted(grads):
f = lambda _: model.loss(X, y)[0]
grad_num = eval_numerical_gradient(f, model.params[name], verbose=False, h=1e-5)
print '%s relative error: %.2e' % (name, rel_error(grad_num, grads[name]))
print
Explanation: Fully-connected nets with Dropout
In the file cs231n/classifiers/fc_net.py, modify your implementation to use dropout. Specifically, if the constructor of the net receives a nonzero value for the dropout parameter, then the net should add dropout immediately after every ReLU nonlinearity. After doing so, run the following to numerically gradient-check your implementation.
End of explanation
# Train two identical nets, one with dropout and one without
num_train = 500
small_data = {
'X_train': data['X_train'][:num_train],
'y_train': data['y_train'][:num_train],
'X_val': data['X_val'],
'y_val': data['y_val'],
}
solvers = {}
dropout_choices = [0, 0.75]
for dropout in dropout_choices:
model = FullyConnectedNet([500], dropout=dropout)
print dropout
solver = Solver(model, small_data,
num_epochs=25, batch_size=100,
update_rule='adam',
optim_config={
'learning_rate': 5e-4,
},
verbose=True, print_every=100)
solver.train()
solvers[dropout] = solver
# Plot train and validation accuracies of the two models
train_accs = []
val_accs = []
for dropout in dropout_choices:
solver = solvers[dropout]
train_accs.append(solver.train_acc_history[-1])
val_accs.append(solver.val_acc_history[-1])
plt.subplot(3, 1, 1)
for dropout in dropout_choices:
plt.plot(solvers[dropout].train_acc_history, 'o', label='%.2f dropout' % dropout)
plt.title('Train accuracy')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.legend(ncol=2, loc='lower right')
plt.subplot(3, 1, 2)
for dropout in dropout_choices:
plt.plot(solvers[dropout].val_acc_history, 'o', label='%.2f dropout' % dropout)
plt.title('Val accuracy')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.legend(ncol=2, loc='lower right')
plt.gcf().set_size_inches(15, 15)
plt.show()
Explanation: Regularization experiment
As an experiment, we will train a pair of two-layer networks on 500 training examples: one will use no dropout, and one will use a dropout probability of 0.75. We will then visualize the training and validation accuracies of the two networks over time.
End of explanation |
15,177 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Skip-gram word2vec
In this notebook, I'll lead you through using TensorFlow to implement the word2vec algorithm using the skip-gram architecture. By implementing this, you'll learn about embedding words for use in natural language processing. This will come in handy when dealing with things like machine translation.
Readings
Here are the resources I used to build this notebook. I suggest reading these either beforehand or while you're working on this material.
A really good conceptual overview of word2vec from Chris McCormick
First word2vec paper from Mikolov et al.
NIPS paper with improvements for word2vec also from Mikolov et al.
An implementation of word2vec from Thushan Ganegedara
TensorFlow word2vec tutorial
Word embeddings
When you're dealing with words in text, you end up with tens of thousands of classes to predict, one for each word. Trying to one-hot encode these words is massively inefficient: you'll have one element set to 1 and the other 50,000 set to 0. The matrix multiplication going into the first hidden layer will have almost all of the resulting values be zero. This is a huge waste of computation.
To solve this problem and greatly increase the efficiency of our networks, we use what are called embeddings. Embeddings are just a fully connected layer like you've seen before. We call this layer the embedding layer and the weights are embedding weights. We skip the multiplication into the embedding layer by instead directly grabbing the hidden layer values from the weight matrix. We can do this because the multiplication of a one-hot encoded vector with a matrix returns the row of the matrix corresponding to the index of the "on" input unit.
Instead of doing the matrix multiplication, we use the weight matrix as a lookup table. We encode the words as integers, for example "heart" is encoded as 958, "mind" as 18094. Then to get hidden layer values for "heart", you just take the 958th row of the embedding matrix. This process is called an embedding lookup and the number of hidden units is the embedding dimension.
<img src='assets/tokenize_lookup.png' width=500>
There is nothing magical going on here. The embedding lookup table is just a weight matrix. The embedding layer is just a hidden layer. The lookup is just a shortcut for the matrix multiplication. The lookup table is trained just like any weight matrix as well.
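As a quick illustration of that point (a toy example, not part of the original notebook; the sizes are made up), indexing a row of the weight matrix gives exactly the same values as multiplying by the one-hot vector:
import numpy as np
vocab_size, embed_dim = 10, 4          # toy sizes for the illustration
embedding = np.random.randn(vocab_size, embed_dim)
word_idx = 3                           # pretend this is the integer token for some word
one_hot = np.zeros(vocab_size)
one_hot[word_idx] = 1
via_matmul = one_hot @ embedding       # the full matrix multiplication
via_lookup = embedding[word_idx]       # the "embedding lookup" shortcut
print(np.allclose(via_matmul, via_lookup))  # True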
Embeddings aren't only used for words of course. You can use them for any model where you have a massive number of classes. A particular type of model called Word2Vec uses the embedding layer to find vector representations of words that contain semantic meaning.
Word2Vec
The word2vec algorithm finds much more efficient representations by finding vectors that represent the words. These vectors also contain semantic information about the words. Words that show up in similar contexts, such as "black", "white", and "red" will have vectors near each other. There are two architectures for implementing word2vec, CBOW (Continuous Bag-Of-Words) and Skip-gram.
<img src="assets/word2vec_architectures.png" width="500">
In this implementation, we'll be using the skip-gram architecture because it performs better than CBOW. Here, we pass in a word and try to predict the words surrounding it in the text. In this way, we can train the network to learn representations for words that show up in similar contexts.
First up, importing packages.
Step1: Load the text8 dataset, a file of cleaned up Wikipedia articles from Matt Mahoney. The next cell will download the data set to the data folder. Then you can extract it and delete the archive file to save storage space.
Step2: Preprocessing
Here I'm fixing up the text to make training easier. This comes from the utils module I wrote. The preprocess function converts any punctuation into tokens, so a period is changed to <PERIOD>. In this data set, there aren't any periods, but it will help in other NLP problems. I'm also removing all words that show up five or fewer times in the dataset. This will greatly reduce issues due to noise in the data and improve the quality of the vector representations. If you want to write your own functions for this stuff, go for it.
Step3: And here I'm creating dictionaries to convert words to integers and back again, integers to words. The integers are assigned in descending frequency order, so the most frequent word ("the") is given the integer 0 and the next most frequent is 1 and so on. The words are converted to integers and stored in the list int_words.
Step4: Subsampling
Words that show up often such as "the", "of", and "for" don't provide much context to the nearby words. If we discard some of them, we can remove some of the noise from our data and in return get faster training and better representations. This process is called subsampling by Mikolov. For each word $w_i$ in the training set, we'll discard it with probability given by
$$ P(w_i) = 1 - \sqrt{\frac{t}{f(w_i)}} $$
where $t$ is a threshold parameter and $f(w_i)$ is the frequency of word $w_i$ in the total dataset.
I'm going to leave this up to you as an exercise. This is more of a programming challenge, than about deep learning specifically. But, being able to prepare your data for your network is an important skill to have. Check out my solution to see how I did it.
Exercise
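If you would like a concrete starting point before checking the solution, one possible reading of the formula is sketched below. It assumes the int_words list from the earlier cell, uses t = 1e-5 as the threshold, and is only one way to do it, not the notebook's reference answer.
import random
from collections import Counter
threshold = 1e-5                                  # the t in the formula above (an assumed value)
word_counts = Counter(int_words)
total = len(int_words)
freqs = {word: count / total for word, count in word_counts.items()}
p_drop = {word: 1 - (threshold / freqs[word]) ** 0.5 for word in word_counts}
train_words = [word for word in int_words if random.random() > p_drop[word]]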
Step5: Making batches
Now that our data is in good shape, we need to get it into the proper form to pass it into our network. With the skip-gram architecture, for each word in the text, we want to grab all the words in a window around that word, with size $C$.
From Mikolov et al.
Step6: Here's a function that returns batches for our network. The idea is that it grabs batch_size words from a words list. Then for each of those words, it gets the target words in the window. I haven't found a way to pass in a random number of target words and get it to work with the architecture, so I make one row per input-target pair. This is a generator function by the way, helps save memory.
Step7: Building the graph
From Chris McCormick's blog, we can see the general structure of our network.
The input words are passed in as integers. This will go into a hidden layer of linear units, then into a softmax layer. We'll use the softmax layer to make a prediction like normal.
The idea here is to train the hidden layer weight matrix to find efficient representations for our words. We can discard the softmax layer because we don't really care about making predictions with this network. We just want the embedding matrix so we can use it in other networks we build from the dataset.
I'm going to have you build the graph in stages now. First off, creating the inputs and labels placeholders like normal.
Exercise
Step8: Embedding
The embedding matrix has a size of the number of words by the number of units in the hidden layer. So, if you have 10,000 words and 300 hidden units, the matrix will have size $10,000 \times 300$. Remember that we're using tokenized data for our inputs, usually as integers, where the number of tokens is the number of words in our vocabulary.
Exercise
Step9: Negative sampling
For every example we give the network, we train it using the output from the softmax layer. That means for each input, we're making very small changes to millions of weights even though we only have one true example. This makes training the network very inefficient. We can approximate the loss from the softmax layer by only updating a small subset of all the weights at once. We'll update the weights for the correct label, but only a small number of incorrect labels. This is called "negative sampling". Tensorflow has a convenient function to do this, tf.nn.sampled_softmax_loss.
Exercise
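For orientation, the call usually ends up looking roughly like the sketch below. The variable names (softmax_w, softmax_b, embed, n_vocab) and the number of negative samples are placeholders for whatever you define in your graph, so treat this as a shape reminder rather than the exact answer.
# Hedged sketch: softmax_w has shape (n_vocab, n_embedding), softmax_b has shape (n_vocab,),
# labels has shape (batch_size, 1), and embed is the looked-up embedding for the inputs.
loss = tf.nn.sampled_softmax_loss(weights=softmax_w,
                                  biases=softmax_b,
                                  labels=labels,
                                  inputs=embed,
                                  num_sampled=100,   # example value for the negative samples
                                  num_classes=n_vocab)
cost = tf.reduce_mean(loss)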
Step10: Validation
This code is from Thushan Ganegedara's implementation. Here we're going to choose a few common words and few uncommon words. Then, we'll print out the closest words to them. It's a nice way to check that our embedding table is grouping together words with similar semantic meanings.
Step11: Training
Below is the code to train the network. Every 100 batches it reports the training loss. Every 1000 batches, it'll print out the validation words.
Step12: Restore the trained network if you need to
Step13: Visualizing the word vectors
Below we'll use T-SNE to visualize how our high-dimensional word vectors cluster together. T-SNE is used to project these vectors into two dimensions while preserving local structure. Check out this post from Christopher Olah to learn more about T-SNE and other ways to visualize high-dimensional data. | Python Code:
import time
import numpy as np
import tensorflow as tf
import utils
from collections import Counter
import random
Explanation: Skip-gram word2vec
In this notebook, I'll lead you through using TensorFlow to implement the word2vec algorithm using the skip-gram architecture. By implementing this, you'll learn about embedding words for use in natural language processing. This will come in handy when dealing with things like machine translation.
Readings
Here are the resources I used to build this notebook. I suggest reading these either beforehand or while you're working on this material.
A really good conceptual overview of word2vec from Chris McCormick
First word2vec paper from Mikolov et al.
NIPS paper with improvements for word2vec also from Mikolov et al.
An implementation of word2vec from Thushan Ganegedara
TensorFlow word2vec tutorial
Word embeddings
When you're dealing with words in text, you end up with tens of thousands of classes to predict, one for each word. Trying to one-hot encode these words is massively inefficient: you'll have one element set to 1 and the other 50,000 set to 0. The matrix multiplication going into the first hidden layer will have almost all of the resulting values be zero. This is a huge waste of computation.
To solve this problem and greatly increase the efficiency of our networks, we use what are called embeddings. Embeddings are just a fully connected layer like you've seen before. We call this layer the embedding layer and the weights are embedding weights. We skip the multiplication into the embedding layer by instead directly grabbing the hidden layer values from the weight matrix. We can do this because the multiplication of a one-hot encoded vector with a matrix returns the row of the matrix corresponding to the index of the "on" input unit.
Instead of doing the matrix multiplication, we use the weight matrix as a lookup table. We encode the words as integers, for example "heart" is encoded as 958, "mind" as 18094. Then to get hidden layer values for "heart", you just take the 958th row of the embedding matrix. This process is called an embedding lookup and the number of hidden units is the embedding dimension.
<img src='assets/tokenize_lookup.png' width=500>
There is nothing magical going on here. The embedding lookup table is just a weight matrix. The embedding layer is just a hidden layer. The lookup is just a shortcut for the matrix multiplication. The lookup table is trained just like any weight matrix as well.
Embeddings aren't only used for words of course. You can use them for any model where you have a massive number of classes. A particular type of model called Word2Vec uses the embedding layer to find vector representations of words that contain semantic meaning.
Word2Vec
The word2vec algorithm finds much more efficient representations by finding vectors that represent the words. These vectors also contain semantic information about the words. Words that show up in similar contexts, such as "black", "white", and "red" will have vectors near each other. There are two architectures for implementing word2vec, CBOW (Continuous Bag-Of-Words) and Skip-gram.
<img src="assets/word2vec_architectures.png" width="500">
In this implementation, we'll be using the skip-gram architecture because it performs better than CBOW. Here, we pass in a word and try to predict the words surrounding it in the text. In this way, we can train the network to learn representations for words that show up in similar contexts.
First up, importing packages.
End of explanation
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
import zipfile
dataset_folder_path = 'data'
dataset_filename = 'text8.zip'
dataset_name = 'Text8 Dataset'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(dataset_filename):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc=dataset_name) as pbar:
urlretrieve(
'http://mattmahoney.net/dc/text8.zip',
dataset_filename,
pbar.hook)
if not isdir(dataset_folder_path):
with zipfile.ZipFile(dataset_filename) as zip_ref:
zip_ref.extractall(dataset_folder_path)
with open('data/text8') as f:
text = f.read()
Explanation: Load the text8 dataset, a file of cleaned up Wikipedia articles from Matt Mahoney. The next cell will download the data set to the data folder. Then you can extract it and delete the archive file to save storage space.
End of explanation
words = utils.preprocess(text)
print(words[:30])
print("Total words: {}".format(len(words)))
print("Unique words: {}".format(len(set(words))))
Explanation: Preprocessing
Here I'm fixing up the text to make training easier. This comes from the utils module I wrote. The preprocess function converts any punctuation into tokens, so a period is changed to <PERIOD>. In this data set, there aren't any periods, but it will help in other NLP problems. I'm also removing all words that show up five or fewer times in the dataset. This will greatly reduce issues due to noise in the data and improve the quality of the vector representations. If you want to write your own functions for this stuff, go for it.
End of explanation
vocab_to_int, int_to_vocab = utils.create_lookup_tables(words)
int_words = [vocab_to_int[word] for word in words]
Explanation: And here I'm creating dictionaries to convert words to integers and back again, integers to words. The integers are assigned in descending frequency order, so the most frequent word ("the") is given the integer 0 and the next most frequent is 1 and so on. The words are converted to integers and stored in the list int_words.
End of explanation
counts = Counter(int_words)
freqs = { i: counts[i]/len(words) for i in range(len(vocab_to_int)) }
[counts[vocab_to_int['the']], len(words)]
words[15]
np.random.seed(666)
## Your code here
def keep(freq, t=1e-5):
x = np.sqrt(t/freq)
r = np.random.random()
return (x, r, r < x)
def foo(pos):
return [freqs[int_words[pos]], words[pos], keep(freqs[int_words[pos]])]
for i in range(10):
print(foo(i))
train_words = [ int_words[i] for i in range(len(int_words)) if keep(freqs[int_words[i]])[2] ]
len(train_words)
subsampled_count = Counter(train_words)
subsampled_count[0]
[int_to_vocab[2], subsampled_count[2], counts[2]]
[np.max(train_words), len(vocab_to_int)]
Explanation: Subsampling
Words that show up often such as "the", "of", and "for" don't provide much context to the nearby words. If we discard some of them, we can remove some of the noise from our data and in return get faster training and better representations. This process is called subsampling by Mikolov. For each word $w_i$ in the training set, we'll discard it with probability given by
$$ P(w_i) = 1 - \sqrt{\frac{t}{f(w_i)}} $$
where $t$ is a threshold parameter and $f(w_i)$ is the frequency of word $w_i$ in the total dataset.
I'm going to leave this up to you as an exercise. This is more of a programming challenge, than about deep learning specifically. But, being able to prepare your data for your network is an important skill to have. Check out my solution to see how I did it.
Exercise: Implement subsampling for the words in int_words. That is, go through int_words and discard each word given the probablility $P(w_i)$ shown above. Note that $P(w_i)$ is the probability that a word is discarded. Assign the subsampled data to train_words.
End of explanation
def get_target(words, idx, window_size=5):
''' Get a list of words in a window around an index. '''
r = np.random.randint(window_size) + 1
return words[idx-r:idx]+words[idx+1:idx+1+r]
Explanation: Making batches
Now that our data is in good shape, we need to get it into the proper form to pass it into our network. With the skip-gram architecture, for each word in the text, we want to grab all the words in a window around that word, with size $C$.
From Mikolov et al.:
"Since the more distant words are usually less related to the current word than those close to it, we give less weight to the distant words by sampling less from those words in our training examples... If we choose $C = 5$, for each training word we will select randomly a number $R$ in range $< 1; C >$, and then use $R$ words from history and $R$ words from the future of the current word as correct labels."
Exercise: Implement a function get_target that receives a list of words, an index, and a window size, then returns a list of words in the window around the index. Make sure to use the algorithm described above, where you choose a random number of words from the window.
End of explanation
def get_batches(words, batch_size, window_size=5):
''' Create a generator of word batches as a tuple (inputs, targets) '''
n_batches = len(words)//batch_size
# only full batches
words = words[:n_batches*batch_size]
for idx in range(0, len(words), batch_size):
x, y = [], []
batch = words[idx:idx+batch_size]
for ii in range(len(batch)):
batch_x = batch[ii]
batch_y = get_target(batch, ii, window_size)
y.extend(batch_y)
x.extend([batch_x]*len(batch_y))
yield x, y
Explanation: Here's a function that returns batches for our network. The idea is that it grabs batch_size words from a words list. Then for each of those words, it gets the target words in the window. I haven't found a way to pass in a random number of target words and get it to work with the architecture, so I make one row per input-target pair. This is a generator function by the way, helps save memory.
End of explanation
train_graph = tf.Graph()
with train_graph.as_default():
inputs = tf.placeholder(tf.int32, [None], name='inputs')
labels = tf.placeholder(tf.int32, [None, None], name='labels')
Explanation: Building the graph
From Chris McCormick's blog, we can see the general structure of our network.
The input words are passed in as integers. This will go into a hidden layer of linear units, then into a softmax layer. We'll use the softmax layer to make a prediction like normal.
The idea here is to train the hidden layer weight matrix to find efficient representations for our words. We can discard the softmax layer because we don't really care about making predictions with this network. We just want the embedding matrix so we can use it in other networks we build from the dataset.
I'm going to have you build the graph in stages now. First off, creating the inputs and labels placeholders like normal.
Exercise: Assign inputs and labels using tf.placeholder. We're going to be passing in integers, so set the data types to tf.int32. The batches we're passing in will have varying sizes, so set the batch sizes to [None]. To make things work later, you'll need to set the second dimension of labels to None or 1.
End of explanation
n_vocab = len(int_to_vocab)
n_embedding = 200
with train_graph.as_default():
embedding = tf.Variable(tf.random_uniform([n_vocab, n_embedding], minval=-1, maxval=1))
embed = tf.nn.embedding_lookup(embedding, inputs)
Explanation: Embedding
The embedding matrix has a size of the number of words by the number of units in the hidden layer. So, if you have 10,000 words and 300 hidden units, the matrix will have size $10,000 \times 300$. Remember that we're using tokenized data for our inputs, usually as integers, where the number of tokens is the number of words in our vocabulary.
Exercise: Tensorflow provides a convenient function tf.nn.embedding_lookup that does this lookup for us. You pass in the embedding matrix and a tensor of integers, then it returns rows in the matrix corresponding to those integers. Below, set the number of embedding features you'll use (200 is a good start), create the embedding matrix variable, and use tf.nn.embedding_lookup to get the embedding tensors. For the embedding matrix, I suggest you initialize it with a uniform random numbers between -1 and 1 using tf.random_uniform.
End of explanation
[inputs.shape, embed.shape, labels.shape]
# Number of negative labels to sample
n_sampled = 100
with train_graph.as_default():
softmax_w = tf.Variable(tf.truncated_normal([n_vocab, n_embedding], mean=0.0, stddev=0.1))
softmax_b = tf.Variable(tf.zeros(n_vocab))
# Calculate the loss using negative sampling
loss = tf.nn.sampled_softmax_loss(softmax_w, softmax_b, labels, embed, n_sampled,
num_classes=len(vocab_to_int), num_true=1)
cost = tf.reduce_mean(loss)
optimizer = tf.train.AdamOptimizer().minimize(cost)
Explanation: Negative sampling
For every example we give the network, we train it using the output from the softmax layer. That means for each input, we're making very small changes to millions of weights even though we only have one true example. This makes training the network very inefficient. We can approximate the loss from the softmax layer by only updating a small subset of all the weights at once. We'll update the weights for the correct label, but only a small number of incorrect labels. This is called "negative sampling". Tensorflow has a convenient function to do this, tf.nn.sampled_softmax_loss.
Exercise: Below, create weights and biases for the softmax layer. Then, use tf.nn.sampled_softmax_loss to calculate the loss. Be sure to read the documentation to figure out how it works.
End of explanation
with train_graph.as_default():
## From Thushan Ganegedara's implementation
valid_size = 16 # Random set of words to evaluate similarity on.
valid_window = 100
# pick 8 samples from (0,100) and (1000,1100) each ranges. lower id implies more frequent
valid_examples = np.array(random.sample(range(valid_window), valid_size//2))
valid_examples = np.append(valid_examples,
random.sample(range(1000,1000+valid_window), valid_size//2))
valid_dataset = tf.constant(valid_examples, dtype=tf.int32)
# We use the cosine distance:
norm = tf.sqrt(tf.reduce_sum(tf.square(embedding), 1, keep_dims=True))
normalized_embedding = embedding / norm
valid_embedding = tf.nn.embedding_lookup(normalized_embedding, valid_dataset)
similarity = tf.matmul(valid_embedding, tf.transpose(normalized_embedding))
# If the checkpoints directory doesn't exist:
!mkdir checkpoints
Explanation: Validation
This code is from Thushan Ganegedara's implementation. Here we're going to choose a few common words and few uncommon words. Then, we'll print out the closest words to them. It's a nice way to check that our embedding table is grouping together words with similar semantic meanings.
End of explanation
epochs = 10
batch_size = 1000
window_size = 10
with train_graph.as_default():
saver = tf.train.Saver()
with tf.Session(graph=train_graph) as sess:
iteration = 1
loss = 0
sess.run(tf.global_variables_initializer())
for e in range(1, epochs+1):
batches = get_batches(train_words, batch_size, window_size)
start = time.time()
for x, y in batches:
feed = {inputs: x,
labels: np.array(y)[:, None]}
train_loss, _ = sess.run([cost, optimizer], feed_dict=feed)
loss += train_loss
if iteration % 100 == 0:
end = time.time()
print("Epoch {}/{}".format(e, epochs),
"Iteration: {}".format(iteration),
"Avg. Training loss: {:.4f}".format(loss/100),
"{:.4f} sec/batch".format((end-start)/100))
loss = 0
start = time.time()
if iteration % 1000 == 0:
## From Thushan Ganegedara's implementation
# note that this is expensive (~20% slowdown if computed every 500 steps)
sim = similarity.eval()
for i in range(valid_size):
valid_word = int_to_vocab[valid_examples[i]]
top_k = 8 # number of nearest neighbors
nearest = (-sim[i, :]).argsort()[1:top_k+1]
log = 'Nearest to %s:' % valid_word
for k in range(top_k):
close_word = int_to_vocab[nearest[k]]
log = '%s %s,' % (log, close_word)
print(log)
iteration += 1
save_path = saver.save(sess, "checkpoints/text8.ckpt")
embed_mat = sess.run(normalized_embedding)
Explanation: Training
Below is the code to train the network. Every 100 batches it reports the training loss. Every 1000 batches, it'll print out the validation words.
End of explanation
with train_graph.as_default():
saver = tf.train.Saver()
with tf.Session(graph=train_graph) as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
embed_mat = sess.run(embedding)
Explanation: Restore the trained network if you need to:
End of explanation
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE
viz_words = 500
tsne = TSNE()
embed_tsne = tsne.fit_transform(embed_mat[:viz_words, :])
fig, ax = plt.subplots(figsize=(14, 14))
for idx in range(viz_words):
plt.scatter(*embed_tsne[idx, :], color='steelblue')
plt.annotate(int_to_vocab[idx], (embed_tsne[idx, 0], embed_tsne[idx, 1]), alpha=0.7)
Explanation: Visualizing the word vectors
Below we'll use T-SNE to visualize how our high-dimensional word vectors cluster together. T-SNE is used to project these vectors into two dimensions while preserving local structure. Check out this post from Christopher Olah to learn more about T-SNE and other ways to visualize high-dimensional data.
End of explanation |
15,178 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Example of Pythran Usage Within a Full Project
This notebook covers the creation of a simple, distutils-powered, project that ships a pythran kernel.
But first some cleanup
Step2: Project layout
The Pythran file is really dumb.
The expected layout is
Step4: And so is the __init__.py file.
Step5: The setup.py file contains the classical metadata, plus a special header. This header basically states: if Pythran is available, use it; otherwise fall back to the plain Python file.
Step6: Running setup.py
With the described configuration, the normal python setup.py targets should « just work ».
If pythran is in the path, it is used to generate the alternative c++ extension when building a source release. Note the hello.cpp!
Step7: But if pythran is no longer in the PYTHONPATH, the installation does not fail
Step8: In case of binary distribution, the native module is generated alongside the original source.
Step9: And if pythran is not in the PYTHONPATH, this still works \o/ | Python Code:
!rm -rf hello setup.py && mkdir hello
Explanation: Example of Pythran Usage Within a Full Project
This notebook covers the creation of a simple, distutils-powered, project that ships a pythran kernel.
But first some cleanup
End of explanation
%%file hello/hello.py
#pythran export hello()
def hello():
"""Wave hello."""
print("Hello from Pythran o/")
Explanation: Project layout
The Pythran file is really dumb.
The expected layout is:
setup.py
hello/
+---- __init__.py
+---- hello.py
End of explanation
%%file hello/__init__.py
"""Hello package, featuring a Pythran kernel."""
from hello import hello
Explanation: And so is the __init__.py file.
End of explanation
%%file setup.py
from distutils.core import setup
try:
from pythran.dist import PythranExtension, PythranBuildExt
setup_args = {
'cmdclass': {"build_ext": PythranBuildExt},
'ext_modules': [PythranExtension('hello.hello', sources = ['hello/hello.py'])],
}
except ImportError:
print("Not building Pythran extension")
setup_args = {}
setup(name = 'hello',
version = '1.0',
description = 'Yet another demo package',
packages = ['hello'],
**setup_args)
Explanation: The setup.py file contains the classical metadata, plus a special header. This header basically states: if Pythran is available, use it; otherwise fall back to the plain Python file.
End of explanation
%%sh
rm -rf build dist
python setup.py sdist 2>/dev/null 1>/dev/null
tar tf dist/hello-1.0.tar.gz | grep -E 'hello/hello.(py|cpp)' -o | sort
Explanation: Running setup.py
With the described configuration, the normal python setup.py targets should « just work ».
If pythran is in the path, it is used to generate the alternative c++ extension when building a source release. Note the hello.cpp!
End of explanation
%%sh
rm -rf build dist
PYTHONPATH= python setup.py sdist 2>/dev/null 1>/dev/null
tar tf dist/hello-1.0.tar.gz | grep -E 'hello/hello.py' -o
Explanation: But if pythran is no longer in the PYTHONPATH, the installation does not fail: the regular Python source can still be used.
End of explanation
%%sh
rm -rf build dist
python setup.py bdist 2>/dev/null 1>/dev/null
tar tf dist/hello-1.0.linux-x86_64.tar.gz | grep -E 'hello/hello.(py|cpp)' -o
Explanation: In case of binary distribution, the native module is generated alongside the original source.
End of explanation
%%sh
rm -rf build dist
PYTHONPATH= python setup.py bdist 2>/dev/null 1>/dev/null
tar tf dist/hello-1.0.linux-x86_64.tar.gz | grep -E 'hello/hello.py' -o
Explanation: And if pythran is not in the PYTHONPATH, this still works \o/
End of explanation |
15,179 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Quick overview
Here are some quick examples of what you can do with xarray.DataArray objects. Everything is explained in much more detail in the rest of the documentation.
To begin, import numpy, pandas and xarray using their customary abbreviations
Step1: Create a DataArray
You can make a DataArray from scratch by supplying data in the form of a numpy array or list, with optional dimensions and coordinates
Step2: If you supply a pandas Series or DataFrame, metadata is copied directly
Step3: Here are the key properties for a DataArray
Step4: Indexing
xarray supports four kinds of indexing. These operations are just as fast as in pandas, because we borrow pandas' indexing machinery.
Step5: Computation
Data arrays work very similarly to numpy ndarrays
Step6: However, aggregation operations can use dimension names instead of axis numbers
Step7: Arithmetic operations broadcast based on dimension name. This means you don't need to insert dummy dimensions for alignment
Step8: Another broadcast example
Step9: It also means that in most cases you do not need to worry about the order of dimensions
Step10: Operations also align based on index labels
Step11: GroupBy
xarray supports grouped operations using a very similar API to pandas
Step12: Convert to pandas
A key feature of xarray is robust conversion to and from pandas objects
Step13: Datasets and NetCDF
xarray.Dataset is a dict-like container of DataArray objects that share index labels and dimensions. It looks a lot like a netCDF file
Step14: You can do almost everything you can do with DataArray objects with Dataset objects if you prefer to work with multiple variables at once.
Datasets also let you easily read and write netCDF files | Python Code:
import numpy as np
import pandas as pd
import xarray as xr
Explanation: Quick overview
Here are some quick examples of what you can do with xarray.DataArray objects. Everything is explained in much more detail in the rest of the documentation.
To begin, import numpy, pandas and xarray using their customary abbreviations:
End of explanation
xr.DataArray(np.random.randn(2, 3))
data = xr.DataArray(np.random.randn(2, 3), [('x', ['a', 'b']), ('y', [-2, 0, 2])])
data
Explanation: Create a DataArray
You can make a DataArray from scratch by supplying data in the form of a numpy array or list, with optional dimensions and coordinates:
End of explanation
xr.DataArray(pd.Series(range(3), index=list('abc'), name='foo'))
Explanation: If you supply a pandas Series or DataFrame, metadata is copied directly:
End of explanation
data.values
data.dims
data.coords
len(data.coords)
data.coords['x']
data.attrs
Explanation: Here are the key properties for a DataArray:
End of explanation
data[[0, 1]]
data.loc['a':'b']
data.loc
data.isel(x=slice(2))
data.sel(x=['a', 'b'])
Explanation: Indexing
xarray supports four kinds of indexing. These operations are just as fast as in pandas, because we borrow pandas' indexing machinery.
End of explanation
data
data + 10
np.sin(data)
data.T
data.sum()
Explanation: Computation
Data arrays work very similarly to numpy ndarrays:
End of explanation
data.mean(dim='x')
Explanation: However, aggregation operations can use dimension names instead of axis numbers:
End of explanation
a = xr.DataArray(np.random.randn(3), [data.coords['y']])
b = xr.DataArray(np.random.randn(4), dims='z')
a
b
a + b
Explanation: Arithmetic operations broadcast based on dimension name. This means you don't need to insert dummy dimensions for alignment:
End of explanation
v1 = xr.DataArray(np.random.rand(3, 2, 4), dims=['t', 'y', 'x'])
v2 = xr.DataArray(np.random.rand(2, 4), dims=['y', 'x'])
v1
v2
v1 + v2
Explanation: Another broadcast example:
End of explanation
data - data.T
Explanation: It also means that in most cases you do not need to worry about the order of dimensions:
End of explanation
data[:-1]
data[:1]
data[:-1] - data[:1]
Explanation: Operations also align based on index labels:
End of explanation
labels = xr.DataArray(['E', 'F', 'E'], [data.coords['y']], name='labels')
labels
data
data.groupby(labels).mean('y')
data.groupby(labels).apply(lambda x: x - x.min())
Explanation: GroupBy
xarray supports grouped operations using a very similar API to pandas:
End of explanation
data.to_series()
data.to_pandas()
Explanation: Convert to pandas
A key feature of xarray is robust conversion to and from pandas objects:
End of explanation
ds = data.to_dataset(name='foo')
ds
Explanation: Datasets and NetCDF
xarray.Dataset is a dict-like container of DataArray objects that share index labels and dimensions. It looks a lot like a netCDF file:
End of explanation
ds.to_netcdf('example.nc')
xr.open_dataset('example.nc')
Explanation: You can do almost everything you can do with DataArray objects with Dataset objects if you prefer to work with multiple variables at once.
Datasets also let you easily read and write netCDF files:
End of explanation |
15,180 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Exercise 17
Analyze how travelers expressed their feelings on Twitter
A sentiment analysis job about the problems of each major U.S. airline.
Twitter data was scraped from February of 2015 and contributors were
asked to first classify positive, negative, and neutral tweets, followed
by categorizing negative reasons (such as "late flight" or "rude service").
Step1: Proportion of tweets with each sentiment
Step2: Proportion of tweets per airline
Step3: Exercise 17.1
Predict the sentiment using CountVectorizer
use Random Forest classifier | Python Code:
import pandas as pd
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
# read the data and set the datetime as the index
tweets = pd.read_csv('https://github.com/albahnsen/PracticalMachineLearningClass/raw/master/datasets/Tweets.zip', index_col=0)
tweets.head()
tweets.shape
Explanation: Exercise 17
Analyze how travelers expressed their feelings on Twitter
A sentiment analysis job about the problems of each major U.S. airline.
Twitter data was scraped from February of 2015 and contributors were
asked to first classify positive, negative, and neutral tweets, followed
by categorizing negative reasons (such as "late flight" or "rude service").
End of explanation
tweets['airline_sentiment'].value_counts()
Explanation: Proportion of tweets with each sentiment
End of explanation
tweets['airline'].value_counts()
pd.Series(tweets["airline"]).value_counts().plot(kind = "bar",figsize=(8,6),rot = 0)
pd.crosstab(index = tweets["airline"],columns = tweets["airline_sentiment"]).plot(kind='bar',figsize=(10, 6),alpha=0.5,rot=0,stacked=True,title="Sentiment by airline")
Explanation: Proportion of tweets per airline
End of explanation
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.ensemble import RandomForestClassifier
from nltk.stem.snowball import SnowballStemmer
from nltk.stem import WordNetLemmatizer
X = tweets['text']
y = tweets['airline_sentiment'].map({'negative':-1,'neutral':0,'positive':1})
Explanation: Exercise 17.1
Predict the sentiment using CountVectorizer
use Random Forest classifier
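A minimal way to finish the exercise is sketched below; it reuses the imports from the cell above, and the split ratio, vocabulary cap, and forest size are arbitrary choices rather than the course's reference solution.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)
vect = CountVectorizer(stop_words='english', max_features=5000)   # 5000 is an arbitrary cap
X_train_dtm = vect.fit_transform(X_train)
X_test_dtm = vect.transform(X_test)
clf = RandomForestClassifier(n_estimators=100, n_jobs=-1, random_state=42)
clf.fit(X_train_dtm, y_train)
print('Test accuracy:', clf.score(X_test_dtm, y_test))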
End of explanation |
15,181 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2018 The TensorFlow Authors.
Step1: Learn about overfitting and underfitting
Step2: The Higgs dataset
The goal of this tutorial is not to do particle physics, so don't dwell on the details of the dataset. It contains 11,000,000 examples, each with 28 features and a binary class label.
Step3: The tf.data.experimental.CsvDataset class can be used to read csv records directly from a gzip file, with no intermediate decompression step.
Step4: That csv reader class returns a list of scalars for each record. The following function repacks that list of scalars into a (feature_vector, label) pair.
Step5: TensorFlow is most efficient when operating on large batches of data.
So, instead of repacking each row individually, make a new Dataset that takes batches of 10,000 examples, applies the pack_row function to each batch, and then splits the batches back up into individual records.
Step6: Have a look at some of the records from this new packed_ds.
The features are not perfectly normalized, but they are sufficient for this tutorial.
Step7: To keep this tutorial relatively short, use only the first 1,000 samples for validation and the next 10,000 for training.
Step8: The Dataset.skip and Dataset.take methods make this easy.
At the same time, use the Dataset.cache method to ensure that the loader doesn't need to re-read the data from the file on each epoch.
Step9: These datasets return individual examples. Use the Dataset.batch method to create batches of an appropriate size for training. Before batching, also remember to Dataset.shuffle and Dataset.repeat the training set.
Step10: Demonstrate overfitting
The simplest way to prevent overfitting is to start with a small model: a model with a small number of learnable parameters (which is determined by the number of layers and the number of units per layer). In deep learning, the number of learnable parameters in a model is often referred to as the model's "capacity".
Intuitively, a model with more parameters has more "memorization capacity" and can therefore easily learn a perfect dictionary-like mapping between training samples and their targets, a mapping without any generalization power, which would be useless when making predictions on previously unseen data.
Deep learning models tend to be good at fitting the training data, but the real challenge is generalization, not fitting.
On the other hand, if the network has limited memorization resources, it will not be able to learn the mapping as easily. To minimize its loss, it will have to learn compressed representations that have more predictive power. At the same time, if you make your model too small, it will have difficulty fitting the training data. There is a balance between "too much capacity" and "not enough capacity".
Unfortunately, there is no magical formula to determine the right size or architecture of your model (in terms of the number of layers, or the right size for each layer). You will have to experiment using a series of different architectures.
To find an appropriate model size, it's best to start with relatively few layers and parameters, then begin increasing the size of the layers or adding new layers until you see diminishing returns on the validation loss.
Start with a simple model using only densely connected layers (tf.keras.layers.Dense) as a baseline, then create larger versions and compare them.
Create a baseline
Many models train better if you gradually reduce the learning rate during training. Use tf.keras.optimizers.schedules to reduce the learning rate over time.
Step11: The code above sets a tf.keras.optimizers.schedules.InverseTimeDecay schedule that hyperbolically decreases the learning rate to 1/2 of the base rate at 1,000 epochs, 1/3 at 2,000 epochs, and so on.
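The cell being described is not reproduced in this summary; as a rough sketch of such a schedule (the training-set size, batch size, and initial rate below are assumptions based on the description, not the notebook's exact values), it can be set up like this:
N_TRAIN = 10000                      # assumed size of the training split described earlier
BATCH_SIZE = 500                     # assumed batch size
STEPS_PER_EPOCH = N_TRAIN // BATCH_SIZE
lr_schedule = tf.keras.optimizers.schedules.InverseTimeDecay(
    0.001,
    decay_steps=STEPS_PER_EPOCH * 1000,   # so the rate is halved at about 1,000 epochs
    decay_rate=1,
    staircase=False)
def get_optimizer():
    return tf.keras.optimizers.Adam(lr_schedule)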
Step12: Each model in this tutorial will use the same training configuration, so set these up in a reusable way, starting with the list of callbacks.
The training for this tutorial runs for many short epochs. To reduce the logging noise, use tfdocs.EpochDots, which simply prints a . for each epoch and a full set of metrics every 100 epochs.
Next, include tf.keras.callbacks.EarlyStopping to avoid long and unnecessary training times. Note that this callback is set to monitor val_binary_crossentropy, not val_loss. This difference will be important later.
Use callbacks.TensorBoard to generate TensorBoard logs for the training.
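One way to bundle those callbacks into a reusable helper is sketched below; the helper name and the EarlyStopping patience value are assumptions, while tfdocs and logdir refer to the objects created in the setup cell:
def get_callbacks(name):
    # Hedged sketch: patience=200 is an assumed value, not necessarily the notebook's.
    return [
        tfdocs.modeling.EpochDots(),
        tf.keras.callbacks.EarlyStopping(monitor='val_binary_crossentropy',
                                         patience=200),
        tf.keras.callbacks.TensorBoard(logdir / name),
    ]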
Step13: Similarly, each model will use the same Model.compile and Model.fit settings.
Step14: Tiny model
Start by training a model.
Step15: Now check how the model did.
Step16: Small model
To check whether you can beat the performance of the small model, progressively train some larger models.
Build a model with two hidden layers and 16 units per layer.
Step17: Medium model
Next, build a model with three hidden layers and 64 units per layer.
Step18: And train it using the same data.
Step19: Large model
As an exercise, you can create an even larger model and check how quickly it begins overfitting. Next, add to this benchmark a network with far more capacity than the problem warrants.
Step20: And, again, train the model using the same data.
Step21: Plot the training and validation losses
The solid lines show the training loss, and the dashed lines show the validation loss (remember: a lower validation loss indicates a better model).
While building a larger model gives it more power, if this power is not constrained somehow it can easily overfit to the training set.
In this example, typically only the "Tiny" model manages to avoid overfitting altogether, and each of the larger models overfits the data more quickly. The overfitting becomes so severe for the "large" model that you need to switch the plot to a log scale to really see what's happening.
This is apparent if you plot and compare the validation metrics to the training metrics.
It's normal for there to be a small difference.
If both metrics are moving in the same direction, everything is fine.
If the validation metric begins to stagnate while the training metric continues to improve, you are probably close to overfitting.
If the validation metric is going in the wrong direction, the model is clearly overfitting.
Step22: Note
Step23: You can view the results of a previous run of this notebook on TensorBoard.dev.
TensorBoard.dev is a managed experience for hosting, tracking, and sharing machine learning experiments with everyone.
It's also included in an <iframe> for convenience:
Step24: If you want to share TensorBoard results, you can upload the logs to TensorBoard.dev by copying the following into a code cell.
Note
Step25: Add weight regularization
You may be familiar with Occam's Razor principle: given two explanations for something, the explanation most likely to be correct is the "simplest" one, the one that makes the fewest assumptions. This also applies to the models learned by neural networks: given some training data and a network architecture, there are multiple sets of weight values (multiple models) that could explain the data, and simpler models are less likely to overfit than complex ones.
A "simple model" in this context is a model where the distribution of parameter values has less entropy (or, as seen in the section above, a model with fewer parameters altogether). Thus a common way to mitigate overfitting is to constrain the weights to take only small values, which makes the distribution of weight values more "regular". This is called "weight regularization", and it is done by adding to the network's loss function a cost associated with having large weights. This cost comes in two flavors:
L1 regularization
Step26: l2(0.001) means that every coefficient in the weight matrix of the layer will add 0.001 * weight_coefficient_value**2 to the total loss of the network.
That is why we're monitoring binary_crossentropy directly: it doesn't have this regularization component mixed in.
So, the same "Large" model with an L2 regularization penalty performs much better.
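The corresponding layer definition pattern looks roughly like the sketch below; the layer count, width, and activation are assumptions, since the notebook's own "L2" model defines its own architecture:
l2_model = tf.keras.Sequential([
    layers.Dense(512, activation='elu',
                 kernel_regularizer=regularizers.l2(0.001),
                 input_shape=(FEATURES,)),
    layers.Dense(512, activation='elu',
                 kernel_regularizer=regularizers.l2(0.001)),
    layers.Dense(1)   # single logit for the binary label
])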
Step27: As you can see, the "L2" regularized model is now roughly on par with the "Tiny" model. The "L2" model is also much more resistant to overfitting than the "Large" model, despite both models having the same number of parameters.
More info
There are two important things to note about this kind of regularization.
First, if you are writing your own training loop, you need to be sure to ask the model for its regularization losses.
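In code, that usually means something along these lines; this is a sketch that assumes a model built with kernel_regularizer as above, and loss_fn, features, and labels are placeholders rather than names from the notebook:
with tf.GradientTape() as tape:
    predictions = l2_model(features, training=True)
    main_loss = loss_fn(labels, predictions)
    # The per-layer L2 penalties are collected in model.losses; add them in explicitly.
    total_loss = main_loss + tf.add_n(l2_model.losses)
gradients = tape.gradient(total_loss, l2_model.trainable_variables)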
Step28: This implementation works by adding the weight penalties to the model's loss and then applying a standard optimization procedure after that.
There is a second approach that instead only runs the optimizer on the raw loss; then, while applying the calculated step, the optimizer also applies some weight decay. This "decoupled weight decay" is used in optimizers such as tf.keras.optimizers.Ftrl and tfa.optimizers.AdamW.
Add dropout
Dropout is one of the most effective and most commonly used regularization techniques for neural networks, developed by Hinton and his students at the University of Toronto. Dropout, applied to a layer, consists of randomly "dropping out" (that is, setting to zero) a number of the features output by the layer during training. For example, a given layer would normally return a vector [0.2, 0.5, 1.3, 0.8, 1.1] for a given input sample during training; after applying dropout, this vector will have a few entries zeroed out at random, for example [0, 0.5, 1.3, 0, 1.1].
The "dropout rate" is the fraction of the features that are zeroed out; it is usually set between 0.2 and 0.5. At test time, no units are dropped out; instead, the layer's output values are scaled down by a factor equal to the dropout rate, to balance for the fact that more units are active than at training time.
Put simply, because individual nodes in the network cannot rely on the output of the other nodes, each node has to output features that are useful on their own.
In Keras, you can introduce dropout into a network via the tf.keras.layers.Dropout layer, which applies dropout to the output of the layer right before it.
Add two dropout layers to your network to check how well they do at reducing overfitting:
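In Keras that pattern looks roughly like the sketch below; the sizes and the 0.5 rate are illustrative assumptions, since the notebook's "dropout" model chooses its own:
dropout_model = tf.keras.Sequential([
    layers.Dense(512, activation='elu', input_shape=(FEATURES,)),
    layers.Dropout(0.5),
    layers.Dense(512, activation='elu'),
    layers.Dropout(0.5),
    layers.Dense(1)
])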
Step29: This plot shows that both of these regularization approaches improve the behavior of the "Large" model, but neither beats the "Tiny" baseline.
Next, try them both together and see whether that does better.
Combine L2 and dropout
Step30: The model with the "Combined" regularization is obviously the best one so far.
View in TensorBoard
These models also recorded TensorBoard logs.
To open an embedded TensorBoard viewer inside a notebook, copy the following into a code cell:
%tensorboard --logdir {logdir}/regularizers
You can view the results of a previous run of this notebook on TensorBoard.dev.
It's also included in an <iframe> for convenience: | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#@title MIT License
#
# Copyright (c) 2017 François Chollet
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
Explanation: Copyright 2018 The TensorFlow Authors.
End of explanation
import tensorflow as tf
from tensorflow.keras import layers
from tensorflow.keras import regularizers
print(tf.__version__)
!pip install git+https://github.com/tensorflow/docs
import tensorflow_docs as tfdocs
import tensorflow_docs.modeling
import tensorflow_docs.plots
from IPython import display
from matplotlib import pyplot as plt
import numpy as np
import pathlib
import shutil
import tempfile
logdir = pathlib.Path(tempfile.mkdtemp())/"tensorboard_logs"
shutil.rmtree(logdir, ignore_errors=True)
Explanation: Explore overfit and underfit
<table class="tfo-notebook-buttons" align="left">
<td> <a target="_blank" href="https://www.tensorflow.org/tutorials/keras/overfit_and_underfit"><img src="https://www.tensorflow.org/images/tf_logo_32px.png">View on TensorFlow.org</a>
</td>
<td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ja/tutorials/keras/overfit_and_underfit.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png">Run in Google Colab</a>
</td>
<td> <a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ja/tutorials/keras/overfit_and_underfit.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">View source on GitHub</a>
</td>
<td> <a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ja/tutorials/keras/overfit_and_underfit.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">Download notebook</a>
</td>
</table>
As always, the code in this example uses the tf.keras API; see the TensorFlow Keras guide for details.
In both of the previous examples (classifying movie reviews and predicting fuel efficiency), the accuracy of the model on the validation data would peak after training for a number of epochs and then start decreasing.
In other words, the model is overfitting the training data. Learning how to deal with overfitting is important: it is often possible to achieve high accuracy on the training set, but what you really want is a model that generalizes well to a test set (data it has never seen before).
The opposite of overfitting is underfitting. Underfitting occurs when the model could still do better on the test data. It can happen for a number of reasons: the model is not powerful enough, it is over-regularized, or it has simply not been trained long enough. It means the network has not yet learned the relevant patterns in the training data.
If you train for too long, however, the model will start to overfit and learn patterns from the training data that do not generalize to the test data. You therefore need to aim for a balance between overfitting and underfitting, and as you will see below, that means training for just the right number of epochs.
The best solution to prevent overfitting is to use more training data. The dataset should cover the full range of inputs that the model is expected to handle; additional data is only useful if it covers new and interesting cases.
A model trained on more data will naturally generalize better. When that is no longer possible, the next best solution is to use techniques like regularization. These place constraints on the quantity and type of information your model can store. If a network can only afford to memorize a small number of patterns, the optimization process will force it to focus on the most prominent patterns, which have a better chance of generalizing well.
In this notebook, you'll explore several common regularization techniques and use them to improve a classification model.
Setup
First, import the necessary packages.
End of explanation
gz = tf.keras.utils.get_file('HIGGS.csv.gz', 'http://mlphysics.ics.uci.edu/data/higgs/HIGGS.csv.gz')
FEATURES = 28
Explanation: The Higgs dataset
The goal of this tutorial is not to do particle physics, so don't dwell on the details of the dataset. It contains 11,000,000 examples, each with 28 features and a binary class label.
End of explanation
ds = tf.data.experimental.CsvDataset(gz,[float(),]*(FEATURES+1), compression_type="GZIP")
Explanation: The tf.data.experimental.CsvDataset class can be used to read csv records directly from a gzip file with no intermediate decompression step.
End of explanation
def pack_row(*row):
label = row[0]
features = tf.stack(row[1:],1)
return features, label
Explanation: That csv reader class returns a list of scalars for each record. The following function repacks that list of scalars into a (feature_vector, label) pair.
End of explanation
packed_ds = ds.batch(10000).map(pack_row).unbatch()
Explanation: TensorFlow is most efficient when operating on large batches of data.
So, instead of repacking each row individually, make a new Dataset that takes batches of 10,000 examples, applies the pack_row function to each batch, and then splits the batches back up into individual records.
End of explanation
for features,label in packed_ds.batch(1000).take(1):
print(features[0])
plt.hist(features.numpy().flatten(), bins = 101)
Explanation: Have a look at some of the records from this new packed_ds.
The features are not perfectly normalized, but this is sufficient for this tutorial.
End of explanation
N_VALIDATION = int(1e3)
N_TRAIN = int(1e4)
BUFFER_SIZE = int(1e4)
BATCH_SIZE = 500
STEPS_PER_EPOCH = N_TRAIN//BATCH_SIZE
Explanation: To keep this tutorial relatively short, use just the first 1,000 samples for validation, and the next 10,000 for training.
End of explanation
validate_ds = packed_ds.take(N_VALIDATION).cache()
train_ds = packed_ds.skip(N_VALIDATION).take(N_TRAIN).cache()
train_ds
Explanation: The Dataset.skip and Dataset.take methods make this easy.
At the same time, use the Dataset.cache method to ensure that the loader doesn't need to re-read the data from the file on each epoch.
End of explanation
validate_ds = validate_ds.batch(BATCH_SIZE)
train_ds = train_ds.shuffle(BUFFER_SIZE).repeat().batch(BATCH_SIZE)
Explanation: These datasets return individual examples. Use the Dataset.batch method to create batches of an appropriate size for training. Before batching, also remember to Dataset.shuffle and Dataset.repeat the training set.
End of explanation
lr_schedule = tf.keras.optimizers.schedules.InverseTimeDecay(
0.001,
decay_steps=STEPS_PER_EPOCH*1000,
decay_rate=1,
staircase=False)
def get_optimizer():
return tf.keras.optimizers.Adam(lr_schedule)
Explanation: Demonstrate overfitting
The simplest way to prevent overfitting is to keep the model small, i.e. to keep the number of learnable parameters in the model small (the number of learnable parameters is determined by the number of layers and the number of units per layer). In deep learning, the number of learnable parameters in a model is often referred to as the model's "capacity".
Intuitively, a model with more parameters has more "memorization capacity" and can therefore easily learn a perfect, dictionary-like mapping between training samples and their targets. Such a mapping has no generalization power at all, and it is useless when making predictions on data the model has never seen.
Deep learning models tend to be good at fitting the training data, but the real challenge is generalization, not fitting.
On the other hand, if the network has limited memorization capacity, it cannot learn that kind of mapping easily. To reduce its loss, it has to learn compressed representations that have more predictive power. At the same time, if you make the model too small, it will have difficulty fitting the training data. There is a sweet spot between "too much capacity" and "not enough capacity".
Unfortunately, there is no magical formula to decide the right size or architecture of a model (in terms of the number of layers or the size of each layer). You will have to experiment with a series of different architectures.
To find an appropriate model size, it is best to start with relatively few layers and parameters, and then gradually increase the size of the layers or add new layers until you see diminishing returns on the validation loss.
As a baseline for comparison, build a simple model that uses only densely connected layers (tf.keras.layers.Dense), and then create larger versions and compare them.
Create a baseline
Many models train better if you gradually reduce the learning rate during training. Use tf.keras.optimizers.schedules to reduce the learning rate over time.
End of explanation
step = np.linspace(0,100000)
lr = lr_schedule(step)
plt.figure(figsize = (8,6))
plt.plot(step/STEPS_PER_EPOCH, lr)
plt.ylim([0,max(plt.ylim())])
plt.xlabel('Epoch')
_ = plt.ylabel('Learning Rate')
Explanation: The code above sets a tf.keras.optimizers.schedules.InverseTimeDecay to hyperbolically decrease the learning rate to 1/2 of the base rate at 1,000 epochs, 1/3 at 2,000 epochs, and so on.
End of explanation
def get_callbacks(name):
return [
tfdocs.modeling.EpochDots(),
tf.keras.callbacks.EarlyStopping(monitor='val_binary_crossentropy', patience=200),
tf.keras.callbacks.TensorBoard(logdir/name),
]
Explanation: Each model in this tutorial will use the same training configuration, so set it up in a reusable way, starting with the list of callbacks.
The training in this tutorial runs for many short epochs. To reduce the amount of unnecessary logging, use tfdocs.EpochDots, which simply prints a . for each epoch and a full set of metrics every 100 epochs.
Next, include tf.keras.callbacks.EarlyStopping so that training does not run longer than necessary. Note that this callback is set to monitor val_binary_crossentropy, not val_loss. This difference will be important later.
Use callbacks.TensorBoard to generate TensorBoard logs for the training.
End of explanation
def compile_and_fit(model, name, optimizer=None, max_epochs=10000):
if optimizer is None:
optimizer = get_optimizer()
model.compile(optimizer=optimizer,
loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
metrics=[
tf.keras.losses.BinaryCrossentropy(
from_logits=True, name='binary_crossentropy'),
'accuracy'])
model.summary()
history = model.fit(
train_ds,
steps_per_epoch = STEPS_PER_EPOCH,
epochs=max_epochs,
validation_data=validate_ds,
callbacks=get_callbacks(name),
verbose=0)
return history
Explanation: Similarly, each model will use the same Model.compile and Model.fit settings.
End of explanation
tiny_model = tf.keras.Sequential([
layers.Dense(16, activation='elu', input_shape=(FEATURES,)),
layers.Dense(1)
])
size_histories = {}
size_histories['Tiny'] = compile_and_fit(tiny_model, 'sizes/Tiny')
Explanation: Tiny model
Start by training a model.
End of explanation
plotter = tfdocs.plots.HistoryPlotter(metric = 'binary_crossentropy', smoothing_std=10)
plotter.plot(size_histories)
plt.ylim([0.5, 0.7])
Explanation: Now check how the model did.
End of explanation
small_model = tf.keras.Sequential([
# `input_shape` is only required here so that `.summary` works.
layers.Dense(16, activation='elu', input_shape=(FEATURES,)),
layers.Dense(16, activation='elu'),
layers.Dense(1)
])
size_histories['Small'] = compile_and_fit(small_model, 'sizes/Small')
Explanation: Small model
To check whether you can beat the performance of the small model, progressively train some larger models.
Build a model with two hidden layers of 16 units each.
End of explanation
medium_model = tf.keras.Sequential([
layers.Dense(64, activation='elu', input_shape=(FEATURES,)),
layers.Dense(64, activation='elu'),
layers.Dense(64, activation='elu'),
layers.Dense(1)
])
Explanation: Medium model
Next, build a model with three hidden layers of 64 units each.
End of explanation
size_histories['Medium'] = compile_and_fit(medium_model, "sizes/Medium")
Explanation: And train the model using the same data.
End of explanation
large_model = tf.keras.Sequential([
layers.Dense(512, activation='elu', input_shape=(FEATURES,)),
layers.Dense(512, activation='elu'),
layers.Dense(512, activation='elu'),
layers.Dense(512, activation='elu'),
layers.Dense(1)
])
Explanation: Large model
As an exercise, you can create an even larger model and check how quickly it begins overfitting. Next, add to this benchmark a network with far more capacity than the problem would warrant.
End of explanation
size_histories['large'] = compile_and_fit(large_model, "sizes/large")
Explanation: And, again, train this model using the same data.
End of explanation
plotter.plot(size_histories)
a = plt.xscale('log')
plt.xlim([5, max(plt.xlim())])
plt.ylim([0.5, 0.7])
plt.xlabel("Epochs [Log Scale]")
Explanation: Plot the training and validation losses
The solid lines show the training loss, and the dashed lines show the validation loss (remember: a lower validation loss indicates a better model).
Building a larger model gives it more power, but if that power is not constrained in some way the model can easily overfit the training set.
In this example, typically only the "Tiny" model manages to avoid overfitting altogether, and each of the larger models overfits the data more quickly. The overfitting becomes so severe for the "large" model that you need to switch the plot to a log scale to see what is really happening.
This is apparent if you plot and compare the validation metrics with the training metrics.
It is normal for there to be a small difference between them.
If both metrics are moving in the same direction, everything is fine.
If the validation metric begins to stagnate while the training metric continues to improve, you are probably close to overfitting.
If the validation metric is moving in the wrong direction, the model is clearly overfitting.
End of explanation
#docs_infra: no_execute
# Load the TensorBoard notebook extension
%load_ext tensorboard
# Open an embedded TensorBoard viewer
%tensorboard --logdir {logdir}/sizes
Explanation: Note: All of the training runs above used callbacks.EarlyStopping to end the training once it was clear the model was not making progress.
View in TensorBoard
These models all wrote TensorBoard logs during training.
Open an embedded TensorBoard viewer inside the notebook.
End of explanation
display.IFrame(
src="https://tensorboard.dev/experiment/vW7jmmF9TmKmy3rbheMQpw/#scalars&_smoothingWeight=0.97",
width="100%", height="800px")
Explanation: You can view the results of a previous run of this notebook on TensorBoard.dev.
TensorBoard.dev is a managed experience for hosting, tracking, and sharing machine learning experiments.
It's also included in an <iframe> for convenience.
End of explanation
shutil.rmtree(logdir/'regularizers/Tiny', ignore_errors=True)
shutil.copytree(logdir/'sizes/Tiny', logdir/'regularizers/Tiny')
regularizer_histories = {}
regularizer_histories['Tiny'] = size_histories['Tiny']
Explanation: If you want to share TensorBoard results, you can upload the logs to TensorBoard.dev by copying the following into a code cell.
Note: This step requires a Google account.
!tensorboard dev upload --logdir {logdir}/sizes
Caution: This command does not terminate. It is designed to continuously upload the results of long-running experiments. Once your data is uploaded, you need to stop it using the "interrupt execution" option of your notebook tool.
Strategies to prevent overfitting
Before getting into the content of this section, copy the training logs from the "Tiny" model above to use as a baseline for comparison.
End of explanation
l2_model = tf.keras.Sequential([
layers.Dense(512, activation='elu',
kernel_regularizer=regularizers.l2(0.001),
input_shape=(FEATURES,)),
layers.Dense(512, activation='elu',
kernel_regularizer=regularizers.l2(0.001)),
layers.Dense(512, activation='elu',
kernel_regularizer=regularizers.l2(0.001)),
layers.Dense(512, activation='elu',
kernel_regularizer=regularizers.l2(0.001)),
layers.Dense(1)
])
regularizer_histories['l2'] = compile_and_fit(l2_model, "regularizers/l2")
Explanation: Add weight regularization
You may be familiar with Occam's razor: given two explanations for something, the explanation most likely to be correct is the "simplest" one, the one that makes the fewest assumptions. This also applies to the models learned by neural networks: given some training data and a network architecture, there are multiple sets of weight values (multiple models) that could explain the data, and simpler models are less likely to overfit than complex ones.
A "simple model" in this context is a model where the distribution of parameter values has less entropy (or, as seen above, a model with fewer parameters altogether). A common way to mitigate overfitting is therefore to constrain the weights to take only small values, which makes the distribution of weight values more "regular". This is called "weight regularization", and it is done by adding to the network's loss function a cost associated with having large weights. This cost comes in two flavors:
L1 regularization: the cost added is proportional to the absolute value of the weight coefficients (the "L1 norm" of the weights).
L2 regularization: the cost added is proportional to the square of the weight coefficients (the squared "L2 norm" of the weights). L2 regularization is also called weight decay in the context of neural networks. Don't let the different name confuse you: weight decay is mathematically identical to L2 regularization.
L1 regularization pushes some weight parameters to exactly zero, which makes the model sparse. L2 regularization penalizes the weight parameters without making them sparse, which is one reason why L2 is more common.
In tf.keras, weight regularization is added by passing weight regularizer instances to layers as keyword arguments. Add L2 weight regularization here.
End of explanation
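For reference, a minimal sketch (not part of the original notebook) of the other penalties discussed above; the same module exposes the L1 and combined variants:
# l1 pushes weights towards exactly zero; l1_l2 applies both penalties at once
l1_reg = regularizers.l1(0.001)
l1l2_reg = regularizers.l1_l2(l1=0.001, l2=0.001)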
plotter.plot(regularizer_histories)
plt.ylim([0.5, 0.7])
Explanation: l2(0.001) means that every coefficient in the weight matrix of the layer adds 0.001 * weight_coefficient_value**2 to the total loss of the network.
That is why binary_crossentropy is monitored directly: it does not have this regularization component mixed in.
So that same "Large" model with an L2 regularization penalty performs much better.
End of explanation
result = l2_model(features)
regularization_loss=tf.add_n(l2_model.losses)
Explanation: As you can see, the "L2" regularized model is now roughly on par with the "Tiny" model. The "L2" model is also much more resistant to overfitting than the "Large" model, despite both having the same number of parameters.
More info
There are two important things to note about this sort of regularization:
If you are writing your own training loop, you need to be sure to ask the model for its regularization losses.
End of explanation
dropout_model = tf.keras.Sequential([
layers.Dense(512, activation='elu', input_shape=(FEATURES,)),
layers.Dropout(0.5),
layers.Dense(512, activation='elu'),
layers.Dropout(0.5),
layers.Dense(512, activation='elu'),
layers.Dropout(0.5),
layers.Dense(512, activation='elu'),
layers.Dropout(0.5),
layers.Dense(1)
])
regularizer_histories['dropout'] = compile_and_fit(dropout_model, "regularizers/dropout")
plotter.plot(regularizer_histories)
plt.ylim([0.5, 0.7])
Explanation: This implementation works by adding the weight penalties to the model's loss and then applying a standard optimization procedure after that.
There is a second approach that instead only runs the optimizer on the raw loss, and then, while applying the calculated step, the optimizer also applies some weight decay. This "decoupled weight decay" is used in optimizers such as tf.keras.optimizers.Ftrl and tfa.optimizers.AdamW.
Add dropout
Dropout is one of the most effective and most commonly used regularization techniques for neural networks, developed by Hinton and his students at the University of Toronto.
Dropout, applied to a layer, consists of randomly "dropping out" (i.e. setting to zero) a number of output features of the layer during training. For example, a given layer would normally return a vector [0.2, 0.5, 1.3, 0.8, 1.1] for a given input sample during training; after applying dropout, this vector will have a few zero entries distributed at random, e.g. [0, 0.5, 1.3, 0, 1.1].
The "dropout rate" is the fraction of the features that are zeroed out; it is usually set between 0.2 and 0.5. At test time no units are dropped out, and instead the layer's output values are scaled down by a factor equal to the dropout rate, to balance for the fact that more units are active than at training time.
The intuition is that, because individual nodes in the network cannot rely on the output of the others, each node has to output features that are useful on their own.
In Keras, you can introduce dropout in a network via the tf.keras.layers.Dropout layer, which gets applied to the output of the layer right before it.
Add two Dropout layers to the network to check how well they do at reducing overfitting.
End of explanation
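As a quick illustrative sketch (not part of the original notebook), the behaviour described above can be seen directly on a small tensor. Note that Keras implements "inverted" dropout: the kept activations are scaled up by 1/(1 - rate) during training, so no rescaling is needed at inference time.
demo_layer = tf.keras.layers.Dropout(0.2)
demo_data = tf.constant([[0.2, 0.5, 1.3, 0.8, 1.1]])
print(demo_layer(demo_data, training=True))   # some entries randomly zeroed, the rest scaled by 1/0.8
print(demo_layer(demo_data, training=False))  # returned unchanged at inference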
combined_model = tf.keras.Sequential([
layers.Dense(512, kernel_regularizer=regularizers.l2(0.0001),
activation='elu', input_shape=(FEATURES,)),
layers.Dropout(0.5),
layers.Dense(512, kernel_regularizer=regularizers.l2(0.0001),
activation='elu'),
layers.Dropout(0.5),
layers.Dense(512, kernel_regularizer=regularizers.l2(0.0001),
activation='elu'),
layers.Dropout(0.5),
layers.Dense(512, kernel_regularizer=regularizers.l2(0.0001),
activation='elu'),
layers.Dropout(0.5),
layers.Dense(1)
])
regularizer_histories['combined'] = compile_and_fit(combined_model, "regularizers/combined")
plotter.plot(regularizer_histories)
plt.ylim([0.5, 0.7])
Explanation: From this plot you can see that both of these regularization approaches improve the behavior of the "Large" model, but neither beats the "Tiny" baseline.
Next, try them both together and see if that does better.
Combine L2 and dropout
End of explanation
display.IFrame(
src="https://tensorboard.dev/experiment/fGInKDo8TXes1z7HQku9mw/#scalars&_smoothingWeight=0.97",
width = "100%",
height="800px")
Explanation: The model with the "Combined" regularization is obviously the best one so far.
View in TensorBoard
These models also recorded TensorBoard logs.
To open an embedded TensorBoard viewer inside a notebook, copy the following into a code cell:
%tensorboard --logdir {logdir}/regularizers
You can view the results of a previous run of this notebook on TensorBoard.dev.
It's also included in an <iframe> for convenience.
End of explanation |
15,182 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1><center>[Notebooks](../) - [Numerical Cartography](../numerical cartography)</center></h1>
Geodetic datum transformations
It is common practice in geospatial data science to work with datasets collected at different epochs and/or referenced to different reference systems. In this context, the transformation parameters needed to convert data accurately into a more up-to-date reference system are often missing or, when available, are valid over wide areas, which affects the accuracy of the transformation. In these briefing notes a simplified approach to derive datum transformation parameters is introduced.
A datum transformation can be defined as a geometric transformation between two three-dimensional coordinate reference systems. A common method consists in applying a linear transformation in the three-dimensional space (x,y,z).
A general linear transformation of a vector $x$ to another vector $y$ takes the form
$$y=Mx+t \quad (1)$$
Each element of the $y$ vector is a combination of the elements of $x$ plus a translation or shift represented by an element of the $t$ vector. The matrix $M$ is called the transformation matrix and $t$ is called the translation vector. With $M$ being square and nonsingular, the inverse relation exists (eq. 2)
$$x = M^{-1}(y-t) \quad (2)$$
in which case the transformation is called an affine transformation.
Limited to the two- and three-dimensional space, six elementary transformations are identified, each representing a single effect. They are geometrically described as
Step1: Random points in DATUM 1
Step2: Random points in Datum 2
(transformation performed using pyproj)
Step3: testing dataset
Step4: Conforme 2D
Step5: Affine 2D
Step6: Helmert 7 Parameters | Python Code:
#import the pyproj and numpy library
import pyproj
import numpy as np
# set a reference point P with coordinates:
P = (-70.93931369842528, 43.13567095719326)
# define projection UTM 19 N:
# UTM zone 19, WGS84 ellipse, WGS84 datum, defined by epsg code 32619
p1 = pyproj.Proj(init='epsg:32619')
#Find UTM coordinates for the point P(-70.93931369842528,43.13567095719326)
x1, y1 = p1(P[0],P[1])
# define projection: UTM zone 19, Clarke 1866, NAD27 datum
p3 = pyproj.Proj(init='epsg:26719')
# transform the UTM coordinates for the point P to projection 3 coordinates.
x3, y3 = pyproj.transform(p1,p3,x1,y1)
# generate a set of random points in the range of 100 meters from P1
# note: we use a fake altitude to perform a 3D transformation
# the value of 6371 is the ray of the spheroid in km
xrand = (np.random.random_sample((50,))*100)+x1
yrand = (np.random.random_sample((50,))*100)+y1
zrand = (np.random.random_sample((50,))*10)+(6371*1000)
xrand,yrand,zrand
# transform the UTM coordinates for the points [xrand, yrand] to the projection 3 coordinates.
x, y = pyproj.transform(p1,p3,xrand[:],yrand[:])
# now generate 2 dataframes to store the x,y,z coordinates in the two different DATUM
# and save the reults in a space delimited text file
import pandas as pd
d1 = pd.DataFrame(np.array([xrand,yrand,zrand],dtype=np.float).T, columns=['x','y','z'])
d2 = pd.DataFrame(np.array([x,y,zrand],dtype=np.float).T, columns=['x','y','z'])
d1.to_csv('d1.csv', index=False, header=False, sep=" ")
d2.to_csv('d2.csv', index=False, header=False, sep=" ")
Explanation: <h1><center>[Notebooks](../) - [Numerical Cartography](../numerical cartography)</center></h1>
Geodetic datum transformations
It is common practice in geospatial data science to work with datasets collected at different epochs and/or referenced to different reference systems. In this context, the transformation parameters needed to convert data accurately into a more up-to-date reference system are often missing or, when available, are valid over wide areas, which affects the accuracy of the transformation. In these briefing notes a simplified approach to derive datum transformation parameters is introduced.
A datum transformation can be defined as a geometric transformation between two three-dimensional coordinate reference systems. A common method consists in applying a linear transformation in the three dimensional space (x,y,z).
A general linear transformation of a vector $x$ to another vector $y$ takes the form
$$y=Mx+t \quad (1)$$
Each element of the $y$ vector is a combination of the elements of $x$ plus a translation or shift represented by an element of the $t$ vector. The matrix $M$ is called the transformation matrix and $t$ is called the translation vector. With $M$ being square and nonsingular, the inverse relation exists (eq. 2)
$$x = M^{-1}(y-t) \quad (2)$$
in which case the transformation is called an affine transformation.
Limited to the two- and three-dimensional space, six elementary transformations are identified, each representing a single effect. They are geometrically described as: Translation, Uniform scale, Rotation, Reflection, Stretch (Nonuniform scale factors) and Skew (Shear).
<img src="../images/linear-transformation.svg", width="80%">
<center>Figure 1: Elementary transformations</center>
The Helmert 7-parameter transformation
<img src="../images/helmert.svg", width="80%">
<center>Figure 2: Roto-translation and scaling in three-dimensional space</center>
The Helmert 7-parameter transformation, which is an affine (distortion-free) transformation in three dimensions, is extensively used in geodesy to perform Datum transformations. It is applied to geocentric coordinates and can be factored into seven elementary transformations: one uniform scale change, three translations, and three rotations.
Consider a generic point $P$ represented in two orthogonal three-dimensional Cartesian spatial reference frames $D_1 (x,y,z)$ and $D_2 (x',y',z')$. For small rotations the direct transformation $P_{D_1} \to P_{D_2}$ is given by
$$
\begin{pmatrix}
x_p \\
y_p \\
z_p
\end{pmatrix}_{D_2} = \begin{pmatrix}
x^{\prime}_0 \\
y^{\prime}_0 \\
z^{\prime}_0
\end{pmatrix} + (1+k) \begin{pmatrix}
1 & R_z & -R_y \\
-R_z & 1 & R_x \\
R_y & -R_x & 1
\end{pmatrix} \begin{pmatrix}
x^{\prime}_p \\
y^{\prime}_p \\
z^{\prime}_p
\end{pmatrix}_{D_1} \quad (3)
$$
Horizontal geodetic datum transformations
It is common in geodesy to separate altimetry from planimetry. In this scenario, the affine transformation (eq. 3) can be simplified to a plane roto-translation with isotropic scale variation, which requires only four parameters: one scale factor $(\lambda)$, one rotation $(\alpha)$, and two translations $(x'_0, y'_0)$. The direct (eq. 4) and inverse (eq. 5) transformations are expressed by:
<img src="../images/plane.svg", width="80%">
<center>Figure 3: Roto-translation and scaling in two-dimensional space</center>
$$
\begin{pmatrix}
x_p \\
y_p
\end{pmatrix}_{D_2} = \begin{pmatrix}
T_x \\
T_y
\end{pmatrix} + \lambda \begin{pmatrix}
\cos \alpha & \sin \alpha \\
-\sin \alpha & \cos \alpha
\end{pmatrix} \begin{pmatrix}
x'_p \\
y'_p
\end{pmatrix}_{D_1} \quad (4)
$$
$$
\begin{pmatrix}
x'_p \\
y'_p
\end{pmatrix}_{D_1} = \lambda^{-1} \begin{pmatrix}
\cos \alpha & -\sin \alpha \\
\sin \alpha & \cos \alpha
\end{pmatrix} \begin{pmatrix}
x_p - T_x \\
y_p - T_y
\end{pmatrix}_{D_2} \quad (5)
$$
To estimate the four parameters $(\lambda, \alpha, x'_0, y'_0)$ at least two planimetric positions known in both systems are needed. However, if more positions are available, a least squares method (fitting) can be used, solving the linear system:
$$
\left\{
\begin{array}{l l}
x'_0 + a x'_p + b y'_p - x_p = 0 \\
y'_0 + a y'_p - b x'_p - y_p = 0
\end{array} \right. \quad (6)
$$
with:
$$
a = \lambda \cos \alpha
$$
$$
b = \lambda \sin \alpha
$$
The linear system (6) can be solved knowing at least two points in $(D_1, D_2)$. Once the four unknown parameters $(a, b, {x'}_0, {y'}_0)$ are estimated, it is possible to derive the rotation angle $\alpha$ and the scale factor $\lambda$ by:
$$
\left\{
\begin{array}{l l}
\lambda = \sqrt{a^2 + b^2} \\
\alpha = \arctan \frac{b}{a}
\end{array} \right. \quad (7)
$$
The relation expressed in (eq. 6) can be used in two different ways:
Knowing the four parameters it is possible to transform the coordinates of $P$ from $D_1 \to D_2$;
Knowing the position of at least 2 points in both systems $(D_1, D_2)$ it is possible to estimate the four parameters by the Least Square Method.
Implementation
Conforme 2D
Affine 2D
Helmert-7-Parameters (3D)
First we need to generate a proper test dataset; to do so we'll use a combination of pyproj and numpy.
Starting from the data used in the Working with coordinates - Datum-transformation example, we generate a series of 50 random points in two different datums:
UTM zone 19, WGS84 ellipse, WGS84
UTM zone 19, Clarke 1866, NAD27
End of explanation
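Before using the project's transform helpers below, it may help to see the least-squares set-up of eq. (6)-(7) spelled out. The following is a minimal numpy sketch (the function name and array layout are my own and are not part of the transform module):
def estimate_conforme_2d(src_xy, dst_xy):
    # src_xy, dst_xy: (n, 2) arrays of matching planimetric coordinates in D1 and D2
    n = src_xy.shape[0]
    A = np.zeros((2 * n, 4))
    L = np.zeros(2 * n)
    # unknowns ordered as [x'_0, y'_0, a, b], following eq. (6)
    A[0::2] = np.column_stack([np.ones(n), np.zeros(n), src_xy[:, 0], src_xy[:, 1]])
    A[1::2] = np.column_stack([np.zeros(n), np.ones(n), src_xy[:, 1], -src_xy[:, 0]])
    L[0::2] = dst_xy[:, 0]
    L[1::2] = dst_xy[:, 1]
    x0, y0, a, b = np.linalg.lstsq(A, L, rcond=None)[0]
    scale = np.hypot(a, b)      # lambda, eq. (7)
    alpha = np.arctan2(b, a)    # rotation angle, eq. (7)
    return x0, y0, scale, alpha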
d1
Explanation: Random points in DATUM 1
End of explanation
d2
Explanation: Random points in Datum 2
(transformation performed using pyproj)
End of explanation
#d1,d2 subsample
d1s=d1[:11]
d2s=d2[:11]
d=d1[11:]
# save to file
d1s.to_csv('d1s.csv', index=False, header=False, sep=" ")
d2s.to_csv('d2s.csv', index=False, header=False, sep=" ")
d.to_csv('d.csv', index=False, header=False, sep=" ")
Explanation: testing dataset:
We'll first select the first 10 points in both dataframe and use them to estimate the transformation parameters. Then we'll use the other 40 points in $d1$ as input for the transformation. Finally compare the results with the output of pyproj.
End of explanation
from transform import conforme
res_conforme = conforme(gcpD1='d1s.csv', gcpD2='d2s.csv', knowD1='d.csv', output='conforme.csv')
res_conforme
Explanation: Conforme 2D
End of explanation
from transform import affine
res_affine = affine(gcpD1='d1s.csv', gcpD2='d2s.csv', knowD1='d.csv', output='affine.csv')
res_affine
Explanation: Affine 2D
End of explanation
from transform import helmert
res_helmert = helmert(gcp1='d1s.csv', gcp2='d2s.csv', inputf='d.csv', output='helmert.txt')
res_helmert
d2[11:][['x','y']]
delta_conforme = (d2[11:]['x'].values - res_conforme[:,0], d2[11:]['y'].values - res_conforme[:,1])
delta_conforme
delta_affine = (d2[11:]['x'].values - res_affine[:,0], d2[11:]['y'].values - res_affine[:,1])
delta_affine
delta_helmert = (d2[11:]['x'].values - res_helmert[:,0], d2[11:]['y'].values - res_helmert[:,1])
delta_helmert
Explanation: Helmert 7 Parameters
End of explanation |
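One possible way to summarise the three comparisons above in a single number (a sketch; rmse is an assumed helper, not part of the notebook):
def rmse(delta):
    # delta is a (dx, dy) pair of residual arrays, as built above
    dx, dy = delta
    return np.sqrt(np.mean(dx**2 + dy**2))
for name, delta in [('conforme', delta_conforme), ('affine', delta_affine), ('helmert', delta_helmert)]:
    print(name, rmse(delta))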
15,183 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Chapter 10 โ Introduction to Artificial Neural Networks
This notebook contains all the sample code and solutions to the exercises in chapter 10.
Setup
First, let's make sure this notebook works well in both python 2 and 3, import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures
Step1: Perceptrons
Step2: Activation functions
Step3: FNN for MNIST
using tf.learn
Step8: Using plain TensorFlow
Step9: Using dense() instead of neuron_layer()
Note
Step10: Exercise solutions
1. to 8.
See appendix A.
9.
Train a deep MLP on the MNIST dataset and see if you can get over 98% precision. Just like in the last exercise of chapter 9, try adding all the bells and whistles (i.e., save checkpoints, restore the last checkpoint in case of an interruption, add summaries, plot learning curves using TensorBoard, and so on).
First let's create the deep net. It's exactly the same as earlier, with just one addition
Step11: Now we need to define the directory to write the TensorBoard logs to
Step12: Now we can create the FileWriter that we will use to write the TensorBoard logs
Step13: Hey! Why don't we implement early stopping? For this, we are going to need a validation set. Luckily, the dataset returned by TensorFlow's input_data() function (see above) is already split into a training set (60,000 instances, already shuffled for us), a validation set (5,000 instances) and a test set (5,000 instances). So we can easily define X_valid and y_valid | Python Code:
# To support both python 2 and python 3
from __future__ import division, print_function, unicode_literals
# Common imports
import numpy as np
import os
# to make this notebook's output stable across runs
def reset_graph(seed=42):
tf.reset_default_graph()
tf.set_random_seed(seed)
np.random.seed(seed)
# To plot pretty figures
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
plt.rcParams['axes.labelsize'] = 14
plt.rcParams['xtick.labelsize'] = 12
plt.rcParams['ytick.labelsize'] = 12
# Where to save the figures
PROJECT_ROOT_DIR = "."
CHAPTER_ID = "ann"
def save_fig(fig_id, tight_layout=True):
path = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID, fig_id + ".png")
print("Saving figure", fig_id)
if tight_layout:
plt.tight_layout()
plt.savefig(path, format='png', dpi=300)
Explanation: Chapter 10 – Introduction to Artificial Neural Networks
This notebook contains all the sample code and solutions to the exercises in chapter 10.
Setup
First, let's make sure this notebook works well in both python 2 and 3, import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures:
End of explanation
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import Perceptron
iris = load_iris()
X = iris.data[:, (2, 3)] # petal length, petal width
y = (iris.target == 0).astype(np.int)
per_clf = Perceptron(random_state=42)
per_clf.fit(X, y)
y_pred = per_clf.predict([[2, 0.5]])
y_pred
a = -per_clf.coef_[0][0] / per_clf.coef_[0][1]
b = -per_clf.intercept_ / per_clf.coef_[0][1]
axes = [0, 5, 0, 2]
x0, x1 = np.meshgrid(
np.linspace(axes[0], axes[1], 500).reshape(-1, 1),
np.linspace(axes[2], axes[3], 200).reshape(-1, 1),
)
X_new = np.c_[x0.ravel(), x1.ravel()]
y_predict = per_clf.predict(X_new)
zz = y_predict.reshape(x0.shape)
plt.figure(figsize=(10, 4))
plt.plot(X[y==0, 0], X[y==0, 1], "bs", label="Not Iris-Setosa")
plt.plot(X[y==1, 0], X[y==1, 1], "yo", label="Iris-Setosa")
plt.plot([axes[0], axes[1]], [a * axes[0] + b, a * axes[1] + b], "k-", linewidth=3)
from matplotlib.colors import ListedColormap
custom_cmap = ListedColormap(['#9898ff', '#fafab0'])
plt.contourf(x0, x1, zz, cmap=custom_cmap, linewidth=5)
plt.xlabel("Petal length", fontsize=14)
plt.ylabel("Petal width", fontsize=14)
plt.legend(loc="lower right", fontsize=14)
plt.axis(axes)
save_fig("perceptron_iris_plot")
plt.show()
Explanation: Perceptrons
End of explanation
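Side note (a sketch, not from the book's code): the Perceptron class is equivalent to a linear SGD classifier trained with the perceptron loss, so the model above could also be written as:
from sklearn.linear_model import SGDClassifier
sgd_clf = SGDClassifier(loss="perceptron", learning_rate="constant", eta0=1, penalty=None, random_state=42)
sgd_clf.fit(X, y)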
def logit(z):
return 1 / (1 + np.exp(-z))
def relu(z):
return np.maximum(0, z)
def derivative(f, z, eps=0.000001):
return (f(z + eps) - f(z - eps))/(2 * eps)
z = np.linspace(-5, 5, 200)
plt.figure(figsize=(11,4))
plt.subplot(121)
plt.plot(z, np.sign(z), "r-", linewidth=2, label="Step")
plt.plot(z, logit(z), "g--", linewidth=2, label="Logit")
plt.plot(z, np.tanh(z), "b-", linewidth=2, label="Tanh")
plt.plot(z, relu(z), "m-.", linewidth=2, label="ReLU")
plt.grid(True)
plt.legend(loc="center right", fontsize=14)
plt.title("Activation functions", fontsize=14)
plt.axis([-5, 5, -1.2, 1.2])
plt.subplot(122)
plt.plot(z, derivative(np.sign, z), "r-", linewidth=2, label="Step")
plt.plot(0, 0, "ro", markersize=5)
plt.plot(0, 0, "rx", markersize=10)
plt.plot(z, derivative(logit, z), "g--", linewidth=2, label="Logit")
plt.plot(z, derivative(np.tanh, z), "b-", linewidth=2, label="Tanh")
plt.plot(z, derivative(relu, z), "m-.", linewidth=2, label="ReLU")
plt.grid(True)
#plt.legend(loc="center right", fontsize=14)
plt.title("Derivatives", fontsize=14)
plt.axis([-5, 5, -0.2, 1.2])
save_fig("activation_functions_plot")
plt.show()
def heaviside(z):
return (z >= 0).astype(z.dtype)
def sigmoid(z):
return 1/(1+np.exp(-z))
def mlp_xor(x1, x2, activation=heaviside):
return activation(-activation(x1 + x2 - 1.5) + activation(x1 + x2 - 0.5) - 0.5)
x1s = np.linspace(-0.2, 1.2, 100)
x2s = np.linspace(-0.2, 1.2, 100)
x1, x2 = np.meshgrid(x1s, x2s)
z1 = mlp_xor(x1, x2, activation=heaviside)
z2 = mlp_xor(x1, x2, activation=sigmoid)
plt.figure(figsize=(10,4))
plt.subplot(121)
plt.contourf(x1, x2, z1)
plt.plot([0, 1], [0, 1], "gs", markersize=20)
plt.plot([0, 1], [1, 0], "y^", markersize=20)
plt.title("Activation function: heaviside", fontsize=14)
plt.grid(True)
plt.subplot(122)
plt.contourf(x1, x2, z2)
plt.plot([0, 1], [0, 1], "gs", markersize=20)
plt.plot([0, 1], [1, 0], "y^", markersize=20)
plt.title("Activation function: sigmoid", fontsize=14)
plt.grid(True)
Explanation: Activation functions
End of explanation
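A quick sanity check (illustrative, not in the original notebook): the hand-built two-layer MLP above reproduces the XOR truth table when used with the step activation.
print(mlp_xor(np.array([0., 0., 1., 1.]), np.array([0., 1., 0., 1.])))  # expected: [0. 1. 1. 0.]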
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("/tmp/data/")
X_train = mnist.train.images
X_test = mnist.test.images
y_train = mnist.train.labels.astype("int")
y_test = mnist.test.labels.astype("int")
import tensorflow as tf
config = tf.contrib.learn.RunConfig(tf_random_seed=42) # not shown in the config
feature_cols = tf.contrib.learn.infer_real_valued_columns_from_input(X_train)
dnn_clf = tf.contrib.learn.DNNClassifier(hidden_units=[300,100], n_classes=10,
feature_columns=feature_cols, config=config)
dnn_clf = tf.contrib.learn.SKCompat(dnn_clf) # if TensorFlow >= 1.1
dnn_clf.fit(X_train, y_train, batch_size=50, steps=40000)
from sklearn.metrics import accuracy_score
y_pred = dnn_clf.predict(X_test)
accuracy_score(y_test, y_pred['classes'])
from sklearn.metrics import log_loss
y_pred_proba = y_pred['probabilities']
log_loss(y_test, y_pred_proba)
Explanation: FNN for MNIST
using tf.learn
End of explanation
import tensorflow as tf
n_inputs = 28*28 # MNIST
n_hidden1 = 300
n_hidden2 = 100
n_outputs = 10
reset_graph()
X = tf.placeholder(tf.float32, shape=(None, n_inputs), name="X")
y = tf.placeholder(tf.int64, shape=(None), name="y")
def neuron_layer(X, n_neurons, name, activation=None):
with tf.name_scope(name):
n_inputs = int(X.get_shape()[1])
stddev = 2 / np.sqrt(n_inputs)
init = tf.truncated_normal((n_inputs, n_neurons), stddev=stddev)
W = tf.Variable(init, name="kernel")
b = tf.Variable(tf.zeros([n_neurons]), name="bias")
Z = tf.matmul(X, W) + b
if activation is not None:
return activation(Z)
else:
return Z
with tf.name_scope("dnn"):
hidden1 = neuron_layer(X, n_hidden1, name="hidden1",
activation=tf.nn.relu)
hidden2 = neuron_layer(hidden1, n_hidden2, name="hidden2",
activation=tf.nn.relu)
logits = neuron_layer(hidden2, n_outputs, name="outputs")
with tf.name_scope("loss"):
xentropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y,
logits=logits)
loss = tf.reduce_mean(xentropy, name="loss")
learning_rate = 0.01
with tf.name_scope("train"):
optimizer = tf.train.GradientDescentOptimizer(learning_rate)
training_op = optimizer.minimize(loss)
with tf.name_scope("eval"):
correct = tf.nn.in_top_k(logits, y, 1)
accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))
init = tf.global_variables_initializer()
saver = tf.train.Saver()
n_epochs = 40
batch_size = 50
with tf.Session() as sess:
init.run()
for epoch in range(n_epochs):
for iteration in range(mnist.train.num_examples // batch_size):
X_batch, y_batch = mnist.train.next_batch(batch_size)
sess.run(training_op, feed_dict={X: X_batch, y: y_batch})
acc_train = accuracy.eval(feed_dict={X: X_batch, y: y_batch})
acc_test = accuracy.eval(feed_dict={X: mnist.test.images,
y: mnist.test.labels})
print(epoch, "Train accuracy:", acc_train, "Test accuracy:", acc_test)
save_path = saver.save(sess, "./my_model_final.ckpt")
with tf.Session() as sess:
saver.restore(sess, "./my_model_final.ckpt") # or better, use save_path
X_new_scaled = mnist.test.images[:20]
Z = logits.eval(feed_dict={X: X_new_scaled})
y_pred = np.argmax(Z, axis=1)
print("Predicted classes:", y_pred)
print("Actual classes: ", mnist.test.labels[:20])
from IPython.display import clear_output, Image, display, HTML
def strip_consts(graph_def, max_const_size=32):
"""Strip large constant values from graph_def."""
strip_def = tf.GraphDef()
for n0 in graph_def.node:
n = strip_def.node.add()
n.MergeFrom(n0)
if n.op == 'Const':
tensor = n.attr['value'].tensor
size = len(tensor.tensor_content)
if size > max_const_size:
tensor.tensor_content = b"<stripped %d bytes>"%size
return strip_def
def show_graph(graph_def, max_const_size=32):
"""Visualize TensorFlow graph."""
if hasattr(graph_def, 'as_graph_def'):
graph_def = graph_def.as_graph_def()
strip_def = strip_consts(graph_def, max_const_size=max_const_size)
code = """
<script>
function load() {{
document.getElementById("{id}").pbtxt = {data};
}}
</script>
<link rel="import" href="https://tensorboard.appspot.com/tf-graph-basic.build.html" onload=load()>
<div style="height:600px">
<tf-graph-basic id="{id}"></tf-graph-basic>
</div>
""".format(data=repr(str(strip_def)), id='graph'+str(np.random.rand()))
iframe = """
<iframe seamless style="width:1200px;height:620px;border:0" srcdoc="{}"></iframe>
""".format(code.replace('"', '&quot;'))
display(HTML(iframe))
show_graph(tf.get_default_graph())
Explanation: Using plain TensorFlow
End of explanation
n_inputs = 28*28 # MNIST
n_hidden1 = 300
n_hidden2 = 100
n_outputs = 10
reset_graph()
X = tf.placeholder(tf.float32, shape=(None, n_inputs), name="X")
y = tf.placeholder(tf.int64, shape=(None), name="y")
with tf.name_scope("dnn"):
hidden1 = tf.layers.dense(X, n_hidden1, name="hidden1",
activation=tf.nn.relu)
hidden2 = tf.layers.dense(hidden1, n_hidden2, name="hidden2",
activation=tf.nn.relu)
logits = tf.layers.dense(hidden2, n_outputs, name="outputs")
with tf.name_scope("loss"):
xentropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=logits)
loss = tf.reduce_mean(xentropy, name="loss")
learning_rate = 0.01
with tf.name_scope("train"):
optimizer = tf.train.GradientDescentOptimizer(learning_rate)
training_op = optimizer.minimize(loss)
with tf.name_scope("eval"):
correct = tf.nn.in_top_k(logits, y, 1)
accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))
init = tf.global_variables_initializer()
saver = tf.train.Saver()
n_epochs = 20
batch_size = 50
with tf.Session() as sess:
init.run()
for epoch in range(n_epochs):
for iteration in range(mnist.train.num_examples // batch_size):
X_batch, y_batch = mnist.train.next_batch(batch_size)
sess.run(training_op, feed_dict={X: X_batch, y: y_batch})
acc_train = accuracy.eval(feed_dict={X: X_batch, y: y_batch})
acc_test = accuracy.eval(feed_dict={X: mnist.test.images, y: mnist.test.labels})
print(epoch, "Train accuracy:", acc_train, "Test accuracy:", acc_test)
save_path = saver.save(sess, "./my_model_final.ckpt")
show_graph(tf.get_default_graph())
Explanation: Using dense() instead of neuron_layer()
Note: the book uses tensorflow.contrib.layers.fully_connected() rather than tf.layers.dense() (which did not exist when this chapter was written). It is now preferable to use tf.layers.dense(), because anything in the contrib module may change or be deleted without notice. The dense() function is almost identical to the fully_connected() function, except for a few minor differences:
* several parameters are renamed: scope becomes name, activation_fn becomes activation (and similarly the _fn suffix is removed from other parameters such as normalizer_fn), weights_initializer becomes kernel_initializer, etc.
* the default activation is now None rather than tf.nn.relu.
* a few more differences are presented in chapter 11.
End of explanation
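As a compact illustration of the renaming (a sketch; the contrib call is shown only for comparison and may not exist in later TensorFlow releases):
# contrib (book):  fully_connected(X, 300, scope="hidden1")                        # ReLU applied by default
# core API (here): tf.layers.dense(X, 300, name="hidden1", activation=tf.nn.relu)  # activation must be passed explicitly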
n_inputs = 28*28 # MNIST
n_hidden1 = 300
n_hidden2 = 100
n_outputs = 10
reset_graph()
X = tf.placeholder(tf.float32, shape=(None, n_inputs), name="X")
y = tf.placeholder(tf.int64, shape=(None), name="y")
with tf.name_scope("dnn"):
hidden1 = tf.layers.dense(X, n_hidden1, name="hidden1",
activation=tf.nn.relu)
hidden2 = tf.layers.dense(hidden1, n_hidden2, name="hidden2",
activation=tf.nn.relu)
logits = tf.layers.dense(hidden2, n_outputs, name="outputs")
with tf.name_scope("loss"):
xentropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=logits)
loss = tf.reduce_mean(xentropy, name="loss")
loss_summary = tf.summary.scalar('log_loss', loss)
learning_rate = 0.01
with tf.name_scope("train"):
optimizer = tf.train.GradientDescentOptimizer(learning_rate)
training_op = optimizer.minimize(loss)
with tf.name_scope("eval"):
correct = tf.nn.in_top_k(logits, y, 1)
accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))
accuracy_summary = tf.summary.scalar('accuracy', accuracy)
init = tf.global_variables_initializer()
saver = tf.train.Saver()
Explanation: Exercise solutions
1. to 8.
See appendix A.
9.
Train a deep MLP on the MNIST dataset and see if you can get over 98% precision. Just like in the last exercise of chapter 9, try adding all the bells and whistles (i.e., save checkpoints, restore the last checkpoint in case of an interruption, add summaries, plot learning curves using TensorBoard, and so on).
First let's create the deep net. It's exactly the same as earlier, with just one addition: we add a tf.summary.scalar() to track the loss and the accuracy during training, so we can view nice learning curves using TensorBoard.
End of explanation
from datetime import datetime
def log_dir(prefix=""):
now = datetime.utcnow().strftime("%Y%m%d%H%M%S")
root_logdir = "tf_logs"
if prefix:
prefix += "-"
name = prefix + "run-" + now
return "{}/{}/".format(root_logdir, name)
logdir = log_dir("mnist_dnn")
Explanation: Now we need to define the directory to write the TensorBoard logs to:
End of explanation
file_writer = tf.summary.FileWriter(logdir, tf.get_default_graph())
Explanation: Now we can create the FileWriter that we will use to write the TensorBoard logs:
End of explanation
X_valid = mnist.validation.images
y_valid = mnist.validation.labels
m, n = X_train.shape
n_epochs = 10001
batch_size = 50
n_batches = int(np.ceil(m / batch_size))
checkpoint_path = "/tmp/my_deep_mnist_model.ckpt"
checkpoint_epoch_path = checkpoint_path + ".epoch"
final_model_path = "./my_deep_mnist_model"
best_loss = np.infty
epochs_without_progress = 0
max_epochs_without_progress = 50
with tf.Session() as sess:
if os.path.isfile(checkpoint_epoch_path):
# if the checkpoint file exists, restore the model and load the epoch number
with open(checkpoint_epoch_path, "rb") as f:
start_epoch = int(f.read())
print("Training was interrupted. Continuing at epoch", start_epoch)
saver.restore(sess, checkpoint_path)
else:
start_epoch = 0
sess.run(init)
for epoch in range(start_epoch, n_epochs):
for iteration in range(mnist.train.num_examples // batch_size):
X_batch, y_batch = mnist.train.next_batch(batch_size)
sess.run(training_op, feed_dict={X: X_batch, y: y_batch})
accuracy_val, loss_val, accuracy_summary_str, loss_summary_str = sess.run([accuracy, loss, accuracy_summary, loss_summary], feed_dict={X: X_valid, y: y_valid})
file_writer.add_summary(accuracy_summary_str, epoch)
file_writer.add_summary(loss_summary_str, epoch)
if epoch % 5 == 0:
print("Epoch:", epoch,
"\tValidation accuracy: {:.3f}%".format(accuracy_val * 100),
"\tLoss: {:.5f}".format(loss_val))
saver.save(sess, checkpoint_path)
with open(checkpoint_epoch_path, "wb") as f:
f.write(b"%d" % (epoch + 1))
if loss_val < best_loss:
saver.save(sess, final_model_path)
best_loss = loss_val
else:
epochs_without_progress += 5
if epochs_without_progress > max_epochs_without_progress:
print("Early stopping")
break
os.remove(checkpoint_epoch_path)
with tf.Session() as sess:
saver.restore(sess, final_model_path)
accuracy_val = accuracy.eval(feed_dict={X: X_test, y: y_test})
accuracy_val
Explanation: Hey! Why don't we implement early stopping? For this, we are going to need a validation set. Luckily, the dataset returned by TensorFlow's input_data() function (see above) is already split into a training set (60,000 instances, already shuffled for us), a validation set (5,000 instances) and a test set (5,000 instances). So we can easily define X_valid and y_valid:
End of explanation |
15,184 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Generation Flow of Fragment Mechanism
Steps
Step1: 0. helper methods
Step2: 1. load text-format fragment mech
Step3: 2. get thermo and kinetics
Step4: 2.1 correct entropy for certain fragments
Step5: 2.2 correct kinetics for reactions with certain fragments
Step6: 3. save in chemkin format
Step7: 4. correct atom count in chemkin | Python Code:
import os
from tqdm import tqdm
from rmgpy import settings
from rmgpy.data.rmg import RMGDatabase
from rmgpy.kinetics import KineticsData
from rmgpy.rmg.model import getFamilyLibraryObject
from rmgpy.data.kinetics.family import TemplateReaction
from rmgpy.data.kinetics.depository import DepositoryReaction
from rmgpy.data.kinetics.common import find_degenerate_reactions
from rmgpy.chemkin import saveChemkinFile, saveSpeciesDictionary
import afm
import afm.fragment
import afm.reaction
Explanation: Generation Flow of Fragment Mechanism
Steps:
load text fragment mechanism (text based: mech and smiles)
create fragments and fragment reactions (from smiles, check isomorphic duplicate, add reaction_repr for fragment reaction)
get thermo and kinetics
Input:
text fragment mechanism and smiles dict
Output:
chemkin file for fragment mechanism
IMPORTANT: USE RMG-Py frag_kinetics_gen_new branch
End of explanation
def read_frag_mech(frag_mech_path):
reaction_string_dict = {}
current_family = ''
with open(frag_mech_path) as f_in:
for line in f_in:
if line.startswith('#') and ':' in line:
_, current_family = [token.strip() for token in line.split(':')]
elif line.strip() and not line.startswith('#'):
reaction_string = line.strip()
if current_family not in reaction_string_dict:
reaction_string_dict[current_family] = [reaction_string]
else:
reaction_string_dict[current_family].append(reaction_string)
return reaction_string_dict
def parse_reaction_string(reaction_string):
reactant_side, product_side = [token.strip() for token in reaction_string.split('==')]
reactant_strings = [token.strip() for token in reactant_side.split('+')]
product_strings = [token.strip() for token in product_side.split('+')]
return reactant_strings, product_strings
Explanation: 0. helper methods
End of explanation
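A small usage example of the two helpers (the reaction string below is made up for illustration; real strings come from frag_mech.txt):
parse_reaction_string('RCCCCR + LCC*CCL == RCC*CCR + LCCCCL')
# -> (['RCCCCR', 'LCC*CCL'], ['RCC*CCR', 'LCCCCL'])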
job_name = 'two-sided'
afm_base = os.path.dirname(afm.__path__[0])
working_dir = os.path.join(afm_base, 'examples', 'pdd_chemistry', job_name)
# load RMG database to create reactions
database = RMGDatabase()
database.load(
path = settings['database.directory'],
thermoLibraries = ['primaryThermoLibrary'], # can add others if necessary
kineticsFamilies = 'all',
reactionLibraries = [],
kineticsDepositories = ''
)
thermodb = database.thermo
# Add training reactions
for family in database.kinetics.families.values():
family.addKineticsRulesFromTrainingSet(thermoDatabase=thermodb)
# average up all the kinetics rules
for family in database.kinetics.families.values():
family.fillKineticsRulesByAveragingUp()
# load fragment from smiles-like string
fragment_smiles_filepath = os.path.join(working_dir, 'fragment_smiles.txt')
fragments = []
with open(fragment_smiles_filepath) as f_in:
for line in f_in:
if line.strip() and not line.startswith('#') and ':' in line:
label, smiles = [token.strip() for token in line.split(":")]
frag = afm.fragment.Fragment(label=label).from_SMILES_like_string(smiles)
frag.assign_representative_species()
frag.species_repr.label = label
for prev_frag in fragments:
if frag.isIsomorphic(prev_frag):
raise Exception('Isomorphic duplicate found: {0} and {1}'.format(label, prev_frag.label))
fragments.append(frag)
# construct label-key fragment dictionary
fragment_dict = {}
for frag0 in fragments:
if frag0.label not in fragment_dict:
fragment_dict[frag0.label] = frag0
else:
raise Exception('Fragment with duplicated labels found: {0}'.format(frag0.label))
# put aromatic isomer in front of species.molecule
# 'cause that's the isomer we want to react
for frag in fragments:
species = frag.species_repr
species.generateResonanceIsomers()
for mol in species.molecule:
if mol.isAromatic():
species.molecule = [mol]
break
# load fragment mech in text
fragment_mech_filepath = os.path.join(working_dir, 'frag_mech.txt')
reaction_string_dict = read_frag_mech(fragment_mech_filepath)
# generate reactions
fragment_rxns = []
for family_label in reaction_string_dict:
# parse reaction strings
print "Processing {0}...".format(family_label)
for reaction_string in tqdm(reaction_string_dict[family_label]):
reactant_strings, product_strings = parse_reaction_string(reaction_string)
reactants = [fragment_dict[reactant_string].species_repr for reactant_string in reactant_strings]
products = [fragment_dict[product_string].species_repr.molecule[0] for product_string in product_strings]
for idx, reactant in enumerate(reactants):
for mol in reactant.molecule:
mol.props['label'] = reactant_strings[idx]
for idx, product in enumerate(products):
product.props['label'] = product_strings[idx]
# this script requires reactants to be a list of Species objects
# products to be a list of Molecule objects.
# returned rxns have reactants and products in Species type
new_rxns = database.kinetics.generate_reactions_from_families(reactants=reactants,
products=products,
only_families=[family_label],
resonance=True)
if len(new_rxns) != 1:
print reaction_string + family_label
raise Exception('Non-unique reaction is generated with {0}'.format(reaction_string))
# create fragment reactions
rxn = new_rxns[0]
fragrxn = afm.reaction.FragmentReaction(index=-1,
reversible=True,
family=rxn.family,
reaction_repr=rxn)
fragment_rxns.append(fragrxn)
Explanation: 1. load text-format fragment mech
End of explanation
from rmgpy.data.rmg import getDB
from rmgpy.thermo.thermoengine import processThermoData
from rmgpy.thermo import NASA
import rmgpy.constants as constants
import math
thermodb = getDB('thermo')
# calculate thermo for each species
for fragrxn in tqdm(fragment_rxns):
rxn0 = fragrxn.reaction_repr
for spe in rxn0.reactants + rxn0.products:
thermo0 = thermodb.getThermoData(spe)
if spe.label in ['RCCCCR', 'LCCCCR', 'LCCCCL']:
thermo0.S298.value_si += constants.R * math.log(2)
spe.thermo = processThermoData(spe, thermo0, NASA)
family = getFamilyLibraryObject(rxn0.family)
# Get the kinetics for the reaction
kinetics, source, entry, isForward = family.getKinetics(rxn0, \
templateLabels=rxn0.template, degeneracy=rxn0.degeneracy, \
estimator='rate rules', returnAllKinetics=False)
rxn0.kinetics = kinetics
if not isForward:
rxn0.reactants, rxn0.products = rxn0.products, rxn0.reactants
rxn0.pairs = [(p,r) for r,p in rxn0.pairs]
# convert KineticsData to Arrhenius forms
if isinstance(rxn0.kinetics, KineticsData):
rxn0.kinetics = rxn0.kinetics.toArrhenius()
# correct barrier heights of estimated kinetics
if isinstance(rxn0,TemplateReaction) or isinstance(rxn0,DepositoryReaction): # i.e. not LibraryReaction
rxn0.fixBarrierHeight() # also converts ArrheniusEP to Arrhenius.
fragrxts = [fragment_dict[rxt.label] for rxt in rxn0.reactants]
fragprds = [fragment_dict[prd.label] for prd in rxn0.products]
fragpairs = [(fragment_dict[p0.label],fragment_dict[p1.label]) for p0,p1 in rxn0.pairs]
fragrxn.reactants=fragrxts
fragrxn.products=fragprds
fragrxn.pairs=fragpairs
fragrxn.kinetics=rxn0.kinetics
Explanation: 2. get thermo and kinetics
End of explanation
for frag in fragments:
spe = frag.species_repr
thermo0 = thermodb.getThermoData(spe)
if spe.label in ['RCCCCR', 'LCCCCR', 'LCCCCL']:
thermo0.S298.value_si += constants.R * math.log(2)
spe.thermo = processThermoData(spe, thermo0, NASA)
if spe.label in ['RCCCCR', 'LCCCCR', 'LCCCCL']:
print spe.label
print spe.getFreeEnergy(670)/4184
Explanation: 2.1 correct entropy for certain fragments
End of explanation
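For reference (a sketch, not in the original notebook), the symmetry correction applied above adds a fixed amount of entropy per two-sided fragment:
print(constants.R * math.log(2))  # R*ln(2) ~= 5.76 J/(mol*K), i.e. about 1.38 cal/(mol*K)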
for fragrxn in tqdm(fragment_rxns):
rxn0 = fragrxn.reaction_repr
if rxn0.family in ['R_Recombination', 'H_Abstraction', 'R_Addition_MultipleBond']:
for spe in rxn0.reactants + rxn0.products:
if spe.label in ['RCC*CCR', 'LCC*CCR', 'LCC*CCL']:
rxn0.kinetics.changeRate(4)
fragrxn.kinetics=rxn0.kinetics
Explanation: 2.2 correct kinetics for reactions with certain fragments
End of explanation
species_list = []
for frag in fragments:
species = frag.species_repr
species_list.append(species)
len(fragments)
reaction_list = []
for fragrxn in fragment_rxns:
rxn = fragrxn.reaction_repr
reaction_list.append(rxn)
len(reaction_list)
# dump chemkin files
chemkin_path = os.path.join(working_dir, 'chem_annotated.inp')
dictionaryPath = os.path.join(working_dir, 'species_dictionary.txt')
saveChemkinFile(chemkin_path, species_list, reaction_list)
saveSpeciesDictionary(dictionaryPath, species_list)
Explanation: 3. save in chemkin format
End of explanation
def update_atom_count(tokens, parts, R_count):
# remove R_count*2 C and R_count*5 H
string = ''
if R_count == 0:
return 'G'.join(parts)
else:
H_count = int(tokens[2].split('C')[0])
H_count_update = H_count - 5*R_count
C_count = int(tokens[3])
C_count_update = C_count - 2*R_count
tokens = tokens[:2] + [str(H_count_update)+'C'] + [C_count_update]
# Line 1
string += '{0:<16} '.format(tokens[0])
string += '{0!s:<2}{1:>3d}'.format('H', H_count_update)
string += '{0!s:<2}{1:>3d}'.format('C', C_count_update)
string += ' ' * (4 - 2)
string += 'G' + parts[1]
return string
corrected_chemkin_path = os.path.join(working_dir, 'chem_annotated.inp')
output_string = ''
with open(chemkin_path) as f_in:
readThermo = False
for line in f_in:
if line.startswith('THERM ALL'):
readThermo = True
if not readThermo:
output_string += line
continue
if line.startswith('!'):
output_string += line
continue
if 'G' in line and '1' in line:
parts = [part for part in line.split('G')]
tokens = [token.strip() for token in parts[0].split()]
species_label = tokens[0]
R_count = species_label.count('R')
L_count = species_label.count('L')
updated_line = update_atom_count(tokens, parts, R_count+L_count)
output_string += updated_line
else:
output_string += line
with open(corrected_chemkin_path, 'w') as f_out:
f_out.write(output_string)
Explanation: 4. correct atom count in chemkin
End of explanation |
15,185 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
intakeOutput
Intake and output recorded for patients. Entered from the nursing flowsheet (either manually or interfaced into the hospital system).
Step2: Examine a single patient
Step3: Above we can see that the type of data recorded is described by the cellpath. cellpath is hierarchical, with pipes (|) separating hierarchies. As expected, most data here will fall under the I&O hierarchy. We can see the patient body weight is recorded in both pounds (lbs) and kilograms (kg). The patient's urine output is also documented.
Urine output
Though not recommended for actual use in a study, we can write a query to quickly get an idea of urine output for this patient.
Step4: General intake/output
The columns intaketotal, outputtotal, dialysistotal, and nettotal give us an easy way to plot the patient's fluid balance over time.
Step6: Of course, it is unlikely that the patient has good urine output for almost 20 hours with no corresponding fluid intake, and even less likely that urine output is the only factor affecting the patient's fluid balance; the nettotal column is likely a naive aggregation of the information documented in the intakeOutput table. Indeed, we can see from the infusionDrug table that the patient is receiving both heparin and nitroglycerin, which should be factored in as inputs when calculating the patient's fluid balance.
Hospitals with data available | Python Code:
# Import libraries
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import psycopg2
import getpass
import pdvega
# for configuring connection
from configobj import ConfigObj
import os
%matplotlib inline
# Create a database connection using settings from config file
config='../db/config.ini'
# connection info
conn_info = dict()
if os.path.isfile(config):
config = ConfigObj(config)
conn_info["sqluser"] = config['username']
conn_info["sqlpass"] = config['password']
conn_info["sqlhost"] = config['host']
conn_info["sqlport"] = config['port']
conn_info["dbname"] = config['dbname']
conn_info["schema_name"] = config['schema_name']
else:
conn_info["sqluser"] = 'postgres'
conn_info["sqlpass"] = ''
conn_info["sqlhost"] = 'localhost'
conn_info["sqlport"] = 5432
conn_info["dbname"] = 'eicu'
conn_info["schema_name"] = 'public,eicu_crd'
# Connect to the eICU database
print('Database: {}'.format(conn_info['dbname']))
print('Username: {}'.format(conn_info["sqluser"]))
if conn_info["sqlpass"] == '':
# try connecting without password, i.e. peer or OS authentication
try:
if (conn_info["sqlhost"] == 'localhost') & (conn_info["sqlport"]=='5432'):
con = psycopg2.connect(dbname=conn_info["dbname"],
user=conn_info["sqluser"])
else:
con = psycopg2.connect(dbname=conn_info["dbname"],
host=conn_info["sqlhost"],
port=conn_info["sqlport"],
user=conn_info["sqluser"])
except:
conn_info["sqlpass"] = getpass.getpass('Password: ')
con = psycopg2.connect(dbname=conn_info["dbname"],
host=conn_info["sqlhost"],
port=conn_info["sqlport"],
user=conn_info["sqluser"],
password=conn_info["sqlpass"])
query_schema = 'set search_path to ' + conn_info['schema_name'] + ';'
Explanation: intakeOutput
Intake and output recorded for patients. Entered from the nursing flowsheet (either manually or interfaced into the hospital system).
End of explanation
patientunitstayid = 242380
query = query_schema + """
select *
from intakeoutput
where patientunitstayid = {}
order by intakeoutputoffset
""".format(patientunitstayid)
df = pd.read_sql_query(query, con)
df.head()
Explanation: Examine a single patient
End of explanation
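As a quick check of that hierarchy, we can split cellpath on the pipe character into its separate levels (a small sketch assuming the cellpath column returned by the query above):
levels = df['cellpath'].str.split('|', expand=True)
levels.head()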
df_uo = df.loc[df['celllabel'].str.contains('Urine'), :].copy()
df_uo['uo'] = pd.to_numeric(df_uo['cellvaluenumeric'], errors='coerce')
df_uo['uo'] = df_uo['uo'].cumsum()
cols = ['uo']
df_uo.set_index('intakeoutputoffset')[cols].vgplot()
Explanation: Above we can see that the type of data recorded is described by the cellpath. cellpath is hierarchical, with pipes (|) separating hierarchies. As expected, most data here will fall under the I&O hierarchy. We can see the patient body weight is recorded in both pounds (lbs) and kilograms (kg). The patient's urine output is also documented.
Urine output
Though not recommended for actual use in a study, we can write a query to quickly get an idea of urine output for this patient.
End of explanation
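For a rough hourly view, note that intakeoutputoffset is measured in minutes from ICU admission, so the raw values can be binned by hour (again only for quick exploration, not for a study):
df_uo['hour'] = df_uo['intakeoutputoffset'] // 60
df_uo.groupby('hour')['cellvaluenumeric'].sum().head(10)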
cols = ['intaketotal', 'outputtotal', 'dialysistotal', 'nettotal']
df.set_index('intakeoutputoffset')[cols].vgplot()
Explanation: General intake/output
The columns intaketotal, outputtotal, dialysistotal, and nettotal give us an easy way to plot the patient's fluid balance over time.
End of explanation
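The totals above only reflect what is charted in intakeOutput. As a sketch, we can also peek at concurrent infusions for the same patient (this assumes the standard eICU infusiondrug table is available in the same schema):
query = query_schema + """
select drugname, infusionoffset, drugrate
from infusiondrug
where patientunitstayid = {}
order by infusionoffset
""".format(patientunitstayid)
pd.read_sql_query(query, con).head()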
query = query_schema + """
select
  pt.hospitalid
  , count(distinct pt.patientunitstayid) as number_of_patients
  , count(distinct a.patientunitstayid) as number_of_patients_with_tbl
from patient pt
left join intakeoutput a
  on pt.patientunitstayid = a.patientunitstayid
group by pt.hospitalid
""".format(patientunitstayid)
df = pd.read_sql_query(query, con)
df['data completion'] = df['number_of_patients_with_tbl'] / df['number_of_patients'] * 100.0
df.sort_values('number_of_patients_with_tbl', ascending=False, inplace=True)
df.head(n=10)
df[['data completion']].vgplot.hist(bins=10,
var_name='Number of hospitals',
value_name='Percent of patients with data')
Explanation: Of course, it is unlikely that the patient has good urine output for almost 20 hours with no corresponding fluid intake, and even less likely that urine output is the only factor affecting the patient's fluid balance; the nettotal column is likely a naive aggregation of the information documented in the intakeOutput table. Indeed, we can see from the infusionDrug table that the patient is receiving both heparin and nitroglycerin, which should be factored in as inputs when calculating the patient's fluid balance.
Hospitals with data available
End of explanation |
15,186 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Smooth Overlap of Atomic Positions
SOAP is a local descriptor that maps the local environment around a point very accurately. It eliminates rotational and permutational redundancies by integrating the overlap of atomic positions that have been smoothed out by Gaussian smearing, and mapping them onto coefficients of orthonormal basis functions.
This is done by the following steps
Step1: Atom description
We'll make an ase.Atoms class for NaCl
Step2: Setting SOAP hyper-parameters
Next we set the hyper-parameters to SOAP.
1. calcpos, the center of the SOAP calculation
2. rcut, sets the cutoff for atoms whose gaussian densities will be included in the integral.
3. nmax, sets the number of orthogonal radial basis functions to use.
4. lmax, sets the number of angular momentum terms, so l = 0, 1, ..., lmax
**Note
Step3: Calculation
Now we call the soap function, and pass all the parameters
Step4: Rotational invariance
Step5: Recompute SOAP for the same atom, after rotation and show the difference in descriptors
Step6: Remark
The power spectrum at a desired position x is the fingerprint of the local chemical environment at this specific position. Thus, it can be used to
Step7: 2. Construct a global SOAP
Use the atomic environments to construct an average SOAP descriptor for molecules | Python Code:
# --- INITIAL DEFINITIONS ---
import numpy, math, random
from visualise import view
from ase import Atoms
import sys
sys.path.insert(0, './data/descriptor_codes/')
sys.path.insert(0, './data/descriptor_codes/src')
from dscribe.descriptors import SOAP
Explanation: Smooth Overlap of Atomic Positions
SOAP is a local descriptor that maps the local environment around a point very accurately. It eliminates rotational and permutational redundancies by integrating the overlap of atomic positions that have been smoothed out by Gaussian smearing, and mapping them onto coefficients of orthonormal basis functions.
This is done by the following steps:
Smooth out the atomic positions:
The atomic positions are point objects in space. Integrating them would need a lot of basis functions. Thus, the atoms' positions are smeared as gaussian functions.
$$ \rho(r) = \sum_i e^{-(r-r_i)^2}$$
However, this also makes all the elements indistinguishable. Thus, SOAP is calculated for the individual elements in the molecule/unit cell, and the values are concatenated at the end.
Image courtesy Jäger Marc
Generate orthonormal basis set:
The obtained smeared atomic position, or atomic density, if you will, is decomposed using Laplace Spherical Harmonics -- spherical harmonics in real space -- and orthogonal basis set: $\Upsilon_{lm}(\theta, \phi)$ and $g_n(r) $.
Basis function for s orbital of hydrogen:
Laplace spherical harmonics $\Upsilon_{\ell m}$ for $\ell = 0, \dots, 4$ (top to bottom) and $m = 0, \dots, \ell$ (left to right). The negative order harmonics $\Upsilon_{\ell\,-m}$ would be shown rotated about the z axis by $90^\circ$ with respect to the positive order ones.
Image courtesy wikipedia.org/wiki/User:Cyp
Integrate for all coefficients:
$$c_{nlm} = \left< \rho | g_n(r)\Upsilon_{lm} \right> = \int_V g_n(r)\Upsilon_{lm}(\theta, \phi)\rho(r, \theta, \phi)dV$$
Further, a power spectrum, essentially a density matrix, is built from these coefficients and summed over all m's for rotational invariance.
$$P_{nn'l} = \sum_m c_{nlm}c^*_{n'lm}$$
For more info see:
Bartók, Albert P., Risi Kondor, and Gábor Csányi. <i>Physical Review B 87.18</i> (2013): <b>184115</b>
For calculating SOAP, we use the DScribe package as developed by Surfaces and Interfaces at the Nanoscale, Aalto
Example
We are going to see SOAP in action for a simple NaCl system.
End of explanation
# Define the system under study: NaCl in a conventional cell.
NaCl_conv = Atoms(
cell=[
[5.6402, 0.0, 0.0],
[0.0, 5.6402, 0.0],
[0.0, 0.0, 5.6402]
],
scaled_positions=[
[0.0, 0.5, 0.0],
[0.0, 0.5, 0.5],
[0.0, 0.0, 0.5],
[0.0, 0.0, 0.0],
[0.5, 0.5, 0.5],
[0.5, 0.5, 0.0],
[0.5, 0.0, 0.0],
[0.5, 0.0, 0.5]
],
symbols=["Na", "Cl", "Na", "Cl", "Na", "Cl", "Na", "Cl"],
)
view(NaCl_conv)
Explanation: Atom description
We'll make an ase.Atoms class for NaCl:
End of explanation
# Computing SOAP
calcpos = [0, 0, 0]
soaper = SOAP(
rcut=8,
nmax=5,
lmax=5,
species=['Na', 'Cl'],
sparse=False
)
Explanation: Setting SOAP hyper-parameters
Next we set the hyper-parameters to SOAP.
1. calcpos, the center of the SOAP calculation
2. rcut, sets the cutoff for atoms whose gaussian densities will be included in the integral.
3. nmax, sets the number of orthogonal radial basis functions to use.
4. lmax, sets the number of angular momentum terms, so l = 0, 1, ..., lmax
**Note: even when giving one SOAP calculation position, it should be wrapped in a list, as shown in example below**
End of explanation
#calculation
soap1 = soaper.create(NaCl_conv, positions=[calcpos])
print("Size of descriptor: {}".format(soap1.shape[1]))
print("First five values, for position {}: {}".format(calcpos, soap1[0,:5]))
Explanation: Calculation
Now we call the soap function, and pass all the parameters
End of explanation
#Rotation of positions
print("Original positions:\n {}".format(NaCl_conv.positions))
NaCl_conv.rotate(90, [0,1,1], center=calcpos)
print("Rotated positions:\n {}".format(NaCl_conv.positions))
view(NaCl_conv)
Explanation: Rotational invariance
End of explanation
soap2 = soaper.create(NaCl_conv, positions=[calcpos])
print(numpy.linalg.norm(soap1 - soap2))
Explanation: Recompute SOAP for the same atom, after rotation and show the difference in descriptors:
End of explanation
# DIY...
Explanation: Remark
The power spectrum at a desired position x is the fingerprint of the local chemical environment at this specific position. Thus, it can be used to:
1. Compare the similarity of two local chemical environments by comparing their SOAP descriptors.
2. Machine learn local properties, like charges, adsorption energies, etc.
Exercises
1. Smoothness
Verify that the SOAP is smooth under translations of point of interest.
End of explanation
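One possible sketch for this exercise (using the NaCl_conv system and soaper defined above): evaluate the descriptor on a line of closely spaced positions and check that consecutive descriptors change gradually.
shifts = numpy.linspace(0.0, 0.5, 6)
trans_soaps = soaper.create(NaCl_conv, positions=[[s, 0.0, 0.0] for s in shifts])
# Norms of the changes between neighbouring positions; for a smooth descriptor
# these stay small and vary continuously with the displacement.
print(numpy.linalg.norm(numpy.diff(trans_soaps, axis=0), axis=1))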
# atomic positions as matrix
molxyz = numpy.load("./data/molecule.coords.npy")
# atom types
moltyp = numpy.load("./data/molecule.types.npy")
atoms_sys = Atoms(positions=molxyz, numbers=moltyp)
view(atoms_sys)
# build SOAP at each atom location
# ...
# compute average soap for each specie
# ...
# concatenate the soaps to the the overall global one
# ...
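# One possible sketch (hypothetical hyper-parameters; assumes atoms_sys above):
# a local SOAP at every atom, averaged per species and then concatenated.
soap_local = SOAP(rcut=4.0, nmax=3, lmax=3,
                  species=sorted(set(atoms_sys.get_chemical_symbols())),
                  sparse=False)
per_atom = soap_local.create(atoms_sys, positions=atoms_sys.get_positions())
symbols = numpy.array(atoms_sys.get_chemical_symbols())
per_species = [per_atom[symbols == s].mean(axis=0) for s in sorted(set(symbols))]
global_soap = numpy.concatenate(per_species)
print(per_atom.shape, global_soap.shape)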
Explanation: 2. Construct a global SOAP
Use the atomic environments to construct an average SOAP descriptor for molecules
End of explanation |
15,187 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h2 align="center">Click the icons below to run HanLP online</h2>
<div align="center">
<a href="https
Step1: Load the model
HanLP's workflow starts by loading a model. Model identifiers are stored in the hanlp.pretrained package, organized by NLP task.
Step2: Call hanlp.load to load it; the model is downloaded to a local cache automatically. Natural language processing consists of many tasks, and tokenization is only the most basic one. Rather than building a separate model for every task, we use HanLP's joint model to complete several tasks at once:
Step3: Dependency parsing
The fewer the tasks, the faster the run. To perform only dependency parsing:
Step4: The return value is a Document
Step5: doc['dep'] is the list of dependency trees for the sentences; the i-th pair gives, for the i-th word, [index of its head word, dependency relation to the head].
Visualize the dependency trees:
Step6: Convert to CoNLL format:
Step7: Perform dependency parsing on pre-tokenized sentences: | Python Code:
!pip install hanlp -U
Explanation: <h2 align="center">Click the icons below to run HanLP online</h2>
<div align="center">
<a href="https://colab.research.google.com/github/hankcs/HanLP/blob/doc-zh/plugins/hanlp_demo/hanlp_demo/zh/dep_mtl.ipynb" target="_blank"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
<a href="https://mybinder.org/v2/gh/hankcs/HanLP/doc-zh?filepath=plugins%2Fhanlp_demo%2Fhanlp_demo%2Fzh%2Fdep_mtl.ipynb" target="_blank"><img src="https://mybinder.org/badge_logo.svg" alt="Open In Binder"/></a>
</div>
Installation
Whether you are on Windows, Linux, or macOS, HanLP can be installed with a single line:
End of explanation
import hanlp
hanlp.pretrained.mtl.ALL # joint MTL models; the tasks are listed in the model name, and the language is given by the last field of the name or the corresponding corpus
Explanation: Load the model
HanLP's workflow starts by loading a model. Model identifiers are stored in the hanlp.pretrained package, organized by NLP task.
End of explanation
HanLP = hanlp.load(hanlp.pretrained.mtl.CLOSE_TOK_POS_NER_SRL_DEP_SDP_CON_ELECTRA_BASE_ZH)
Explanation: Call hanlp.load to load it; the model is downloaded to a local cache automatically. Natural language processing consists of many tasks, and tokenization is only the most basic one. Rather than building a separate model for every task, we use HanLP's joint model to complete several tasks at once:
End of explanation
doc = HanLP(['2021年HanLPv2.1为生产环境带来次世代最先进的多语种NLP技术。', '阿婆主来到北京立方庭参观自然语义科技公司。'], tasks='dep')
Explanation: Dependency Parsing
The fewer the tasks, the faster the run. To perform only dependency parsing:
End of explanation
print(doc)
Explanation: The return value is a Document:
End of explanation
doc.pretty_print()
Explanation: doc['dep'] is the list of dependency trees for the sentences; the i-th pair gives, for the i-th word, [index of its head word, dependency relation to the head].
Visualize the dependency trees:
End of explanation
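The raw arcs can also be read directly from the list described above; each pair is the head index (0 conventionally standing for the root) and the relation label:
for i, (head, rel) in enumerate(doc['dep'][0], start=1):
    print(i, head, rel)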
print(doc.to_conll())
Explanation: Convert to CoNLL format:
End of explanation
HanLP([
["HanLP", "ไธบ", "็ไบง", "็ฏๅข", "ๅธฆๆฅ", "ๆฌกไธไปฃ", "ๆ", "ๅ
่ฟ", "็", "ๅค่ฏญ็ง", "NLP", "ๆๆฏ", "ใ"],
["ๆ", "็", "ๅธๆ", "ๆฏ", "ๅธๆ", "ๅผ ๆ้", "็", "่ๅฝฑ", "่ขซ", "ๆ้", "ๆ ็บข", "ใ"]
], tasks='dep', skip_tasks='tok*').pretty_print()
Explanation: Perform dependency parsing on pre-tokenized sentences:
End of explanation |
15,188 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Scikit-Learn singalong
Step1: Download EEG Data
The following code downloads a copy of the EEG Eye State dataset. All data is from one continuous EEG measurement with the Emotiv EEG Neuroheadset. The duration of the measurement was 117 seconds. The eye state was detected via a camera during the EEG measurement and added later manually to the file after analysing the video frames. '1' indicates the eye-closed and '0' the eye-open state. All values are in chronological order with the first measured value at the top of the data.
Let's import the same dataset directly with pandas
Step2: Explore Data
Once we have loaded the data, let's take a quick look. First the dimension of the frame
Step3: Now let's take a look at the top of the frame
Step4: The first two columns contain an ID and the response. The "diagnosis" column is the response. Let's take a look at the column names. The data contains derived features from the medical images of the tumors.
Step5: To select a subset of the columns to look at, typical Pandas indexing applies
Step6: Now let's select a single column, for example -- the response column, and look at the data more closely
Step7: It looks like a binary response, but let's validate that assumption
Step8: We can query the categorical "levels" as well ('B' and 'M' stand for "Benign" and "Malignant" diagnosis)
Step9: Since "diagnosis" column is the response we would like to predict, we may want to check if there are any missing values, so let's look for NAs. To figure out which, if any, values are missing, we can use the isna method on the diagnosis column. The columns in an H2O Frame are also H2O Frames themselves, so all the methods that apply to a Frame also apply to a single column.
Step10: The isna method doesn't directly answer the question, "Does the diagnosis column contain any NAs?", rather it returns a 0 if that cell is not missing (Is NA? FALSE == 0) and a 1 if it is missing (Is NA? TRUE == 1). So if there are no missing values, then summing over the whole column should produce a summand equal to 0.0. Let's take a look
Step11: Great, no missing labels.
Out of curiosity, let's see if there is any missing data in this frame
Step12: The next thing I may wonder about in a binary classification problem is the distribution of the response in the training data. Is one of the two outcomes under-represented in the training set? Many real datasets have what's called an "imbalanace" problem, where one of the classes has far fewer training examples than the other class. Let's take a look at the distribution, both visually and numerically.
Step13: Ok, the data is not exactly evenly distributed between the two classes -- there are more 0's than 1's in the dataset. However, this level of imbalance shouldn't be much of an issue for the machine learning algos. (We will revisit this later in the modeling section below).
Let's calculate the percentage that each class represents
Step14: Split H2O Frame into a train and test set
So far we have explored the original dataset (all rows). For the machine learning portion of this tutorial, we will break the dataset into three parts
Step15: Machine Learning in H2O
We will do a quick demo of the H2O software -- trying to predict eye state (open/closed) from EEG data.
Specify the predictor set and response
The response, y, is the 'diagnosis' column, and the predictors, x, are all the columns aside from the first two columns ('id' and 'diagnosis').
Step16: Split H2O Frame into a train and test set
Step17: Train and Test a GBM model
Step18: Inspect Model
Step19: Model Performance on a Test Set
Step20: Cross-validated Performance | Python Code:
import pandas as pd
import numpy as np
from collections import Counter
Explanation: Scikit-Learn singalong: EEG Eye State Classification
Author: Kevin Yang
Contact: [email protected]
This tutorial replicates Erin LeDell's oncology demo using Scikit Learn and Pandas, and is intended to provide a comparison of the syntactical and performance differences between sklearn and H2O implementations of Gradient Boosting Machines.
We'll be using Pandas, Numpy and the collections package for most of the data exploration.
End of explanation
csv_url = "http://www.stat.berkeley.edu/~ledell/data/eeg_eyestate_splits.csv"
data = pd.read_csv(csv_url)
Explanation: Download EEG Data
The following code downloads a copy of the EEG Eye State dataset. All data is from one continuous EEG measurement with the Emotiv EEG Neuroheadset. The duration of the measurement was 117 seconds. The eye state was detected via a camera during the EEG measurement and added later manually to the file after analysing the video frames. '1' indicates the eye-closed and '0' the eye-open state. All values are in chronological order with the first measured value at the top of the data.
Let's import the same dataset directly with pandas
End of explanation
data.shape
Explanation: Explore Data
Once we have loaded the data, let's take a quick look. First the dimension of the frame:
End of explanation
data.head()
Explanation: Now let's take a look at the top of the frame:
End of explanation
data.columns.tolist()
Explanation: Let's take a look at the column names. For this dataset the columns are the 14 EEG channel measurements, the eyeDetection response, and a split indicator.
End of explanation
columns = ['AF3', 'eyeDetection', 'split']
data[columns].head(10)
Explanation: To select a subset of the columns to look at, typical Pandas indexing applies:
End of explanation
data['eyeDetection'].head()
Explanation: Now let's select a single column, for example -- the response column, and look at the data more closely:
End of explanation
data['eyeDetection'].unique()
Explanation: It looks like a binary response, but let's validate that assumption:
End of explanation
data['eyeDetection'].nunique()
Explanation: We can count the number of distinct levels as well (here '1' indicates the eye-closed and '0' the eye-open state):
End of explanation
data.isnull()
data['eyeDetection'].isnull()
Explanation: Since the eyeDetection column is the response we would like to predict, we may want to check if there are any missing values, so let's look for NAs. To figure out which, if any, values are missing, we can use the isnull method on the eyeDetection column. The columns of a pandas DataFrame are Series, so the same methods that apply to the frame also apply to a single column.
End of explanation
data['eyeDetection'].isnull().sum()
Explanation: The isnull method doesn't directly answer the question, "Does the eyeDetection column contain any NAs?"; rather it returns 0 if a cell is not missing and 1 if it is missing. So if there are no missing values, summing over the whole column should produce 0. Let's take a look:
End of explanation
data.isnull().sum()
Explanation: Great, no missing labels.
Out of curiosity, let's see if there is any missing data in this frame:
End of explanation
Counter(data['eyeDetection'])
Explanation: The next thing I may wonder about in a binary classification problem is the distribution of the response in the training data. Is one of the two outcomes under-represented in the training set? Many real datasets have what's called an "imbalance" problem, where one of the classes has far fewer training examples than the other class. Let's take a look at the distribution, both visually and numerically.
End of explanation
n = data.shape[0] # Total number of training samples
np.array(list(Counter(data['eyeDetection']).values()))/float(n)
Explanation: Ok, the data is not exactly evenly distributed between the two classes -- there are more 0's than 1's in the dataset. However, this level of imbalance shouldn't be much of an issue for the machine learning algos. (We will revisit this later in the modeling section below).
Let's calculate the percentage that each class represents:
End of explanation
train = data[data['split']=="train"]
train.shape
valid = data[data['split']=="valid"]
valid.shape
test = data[data['split']=="test"]
test.shape
Explanation: Split the data into train, validation and test sets
So far we have explored the original dataset (all rows). For the machine learning portion of this tutorial, we will break the dataset into three parts: a training set, a validation set and a test set.
The split column already encodes the explicit splits we want (for reproducibility reasons), so we can simply subset the DataFrame to get the partitions.
End of explanation
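A quick sanity check on the relative sizes of the three partitions:
data['split'].value_counts(normalize=True)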
y = 'eyeDetection'
x = data.columns.drop(['eyeDetection','split'])
Explanation: Machine learning with scikit-learn
We will do a quick demo mirroring the H2O workflow -- trying to predict eye state (open/closed) from EEG data.
Specify the predictor set and response
The response, y, is the 'eyeDetection' column, and the predictors, x, are all the remaining columns aside from 'eyeDetection' and 'split'.
End of explanation
from sklearn.ensemble import GradientBoostingClassifier
import sklearn
test.shape
Explanation: Import scikit-learn's GBM implementation and take a quick look at the test split
End of explanation
model = GradientBoostingClassifier(n_estimators=100,
max_depth=4,
learning_rate=0.1)
X=train[x].reset_index(drop=True)
y=train[y].reset_index(drop=True)
model.fit(X, y)
print(model)
Explanation: Train and Test a GBM model
End of explanation
model.get_params()
Explanation: Inspect Model
End of explanation
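Another quick way to inspect the fitted model is to look at which EEG channels the GBM relies on most:
pd.Series(model.feature_importances_, index=x).sort_values(ascending=False).head()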
from sklearn.metrics import r2_score, roc_auc_score, mean_squared_error
X_test = test[x].reset_index(drop=True)
y_test = test['eyeDetection'].reset_index(drop=True)
y_pred = model.predict(X_test)
r2_score(y_test, y_pred)
roc_auc_score(y_test, y_pred)
mean_squared_error(y_test, y_pred)
Explanation: Model Performance on a Test Set
End of explanation
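AUC is usually computed from predicted class probabilities rather than hard labels; as a quick check on the held-out test split:
test_proba = model.predict_proba(test[x].reset_index(drop=True))[:, 1]
roc_auc_score(test['eyeDetection'].reset_index(drop=True), test_proba)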
from sklearn import model_selection
model_selection.cross_val_score(model, X, y, scoring='roc_auc', cv=5)
model_selection.cross_val_score(model, valid[x].reset_index(drop=True), valid['eyeDetection'].reset_index(drop=True), scoring='roc_auc', cv=5)
Explanation: Cross-validated Performance
End of explanation |
15,189 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Word2Vec Example
(C) 2018 by Damir Cavar
Version
Step1: Using One-Hot Vectors
We can create a one-hot vector that selects the 3rd row
Step2: Let us create a matrix $A$ of four rows
Step3: We can use the column vector $x$ to select a row in matrix $A$
Step4: Computing the Dot-Product
Let us simplify and assume that the dot-product of two vectors is the sum of the products of the scalars in the particular dimension of each vector, e.g.
Step5: If we want to convert the scores into a probability distribution that represents the likelihood that on the basis of these scores the observation belongs to one of the classes, we can compute the Softmax of the vector
Step6: We can provide a parameter $\theta$ to the function, to be able to scale the probability for low values up. The larger $\theta$, the higher the probability assigned to lower values. We set the default for $\theta$ to $1.0$ in the softmax definition
Step7: For a vector of values $[ 4.0, 4.0, 2.0 ]$, we get the following probability distribution given a default $\theta$ of $1.0$
Step8: If we double $\theta$, the probability assigned to the third scalar increases significantly
Step9: Computing Word2Vec
Let us first focus on the skip-gram model. For every word $w$ at position $t$ with some window size or radius of $m$ we want to predict the context of $w$. Our objective function is to maximize the probability of any context word given the current center word
Step10: The rounded sum results in $1.0$
Step11: We could now compare the resulting probabilities with some ground truth and compute the error rate for example. If the ground truth for the above result would be
Step12: <img src="Tangent-calculus.svg.png" width="30%" height="30%">
<p><center><a href="https
Step13: The blue lines show the contour lines of the value of the objective function. Computing the Gradient we identify the direction of the steepest descent. Walking in small steps down this line of the steepest descent takes us to the minimum.
<img src="512px-Gradient_descent.svg.png" width="30%" height="30%">
<p><center>(<a href="https
Step14: Example using Gensim
Gensim provides an API documentation for word2vec. There is a GitHub repository with the code for a high-performance similarity server using Gensim and word2vec. This short intro is based on the Rare Technologies tutorial.
The input for Gensim's word2vec has to be a list of sentences and sentences are a list of tokens. We will import gensim and train a model on some sample sentences
Step15: In a more efficient implementation that does not hold an entire corpus in RAM, one can specify a file-reader and process the input line by line, as shown by Radim ลehลฏลek in his tutorial
Step16: The code above might not run in Jupyter Notebooks, due to restrictions over some modules. Copy the code into a Python file and run it in the command line.
Gensim's word2vec allows you to prune the internal dictionary. This can eliminate tokens that occur a minimum number of times. The min_count parameter provides this restriction.
Gensim offers an API for
Step17: We can access a word vector directly | Python Code:
import numpy as np
Explanation: Word2Vec Example
(C) 2018 by Damir Cavar
Version: 1.1, November 2018
License: Creative Commons Attribution-ShareAlike 4.0 International License (CA BY-SA 4.0)
This is a tutorial related to the L665 course on Machine Learning for NLP focusing on Deep Learning, Spring and Fall 2018 at Indiana University. This material is based on Chris Manning's lecture 2 Word Vector Representations: word2vec and additional sources with extended annotations and explanations.
Introduction
Here we will discuss briefly the necessary methods to understand the Word2Vec algorithm. We will use Numpy for the basic computations.
End of explanation
x = np.array([0, 0, 1, 0])
x
Explanation: Using One-Hot Vectors
We can create a one-hot vector that selects the 3rd row:
End of explanation
A = np.array([[1, 2, 3, 4],
[5, 6, 7, 8],
[9, 10, 11, 12],
[13, 14, 15, 16]])
A
Explanation: Let us create a matrix $A$ of four rows:
End of explanation
x.dot(A)
Explanation: We can use the column vector $x$ to select a row in matrix $A$:
End of explanation
y = np.array([4.0, 2.5, 1.1])
Explanation: Computing the Dot-Product
Let us simplify and assume that the dot-product of two vectors is the sum of the products of the scalars in the particular dimension of each vector, e.g.:
$$
u^T v = u \cdot v = \sum_{i=1}^n{u_i v_i}
$$
The more similar two vectors $u$ and $v$ are in terms of directionality, the larger the dot-product is.
We could think of the dot product $u \cdot v$ as projecting one vector on the other. If the two vectors point into the same direction, the dot-product will be the largest. If one vector is orthogonal to the other, it will be $0$. If the vectors are pointing into opposite directions, the dot-product will be negative.
Computing Softmax
Assume that we have some data or results of mutually exclusive variables that represent scores for an observation being of class $C = [ c_1, c_2, c_3 ]$, as represented by the columns in the vector $y$ below:
End of explanation
def softmax1(y):
return np.exp(y) / np.sum(np.exp(y), axis=0)
softmax1([4.0, 4.0, 2.0])
Explanation: If we want to convert the scores into a probability distribution that represents the likelihood that on the basis of these scores the observation belongs to one of the classes, we can compute the Softmax of the vector:
$$
p(C_n) = \frac{\exp(\theta \cdot X_n)}{\sum_{i=1}^N{\exp(\theta \cdot X_i)}}
$$
The parameter $\theta$ allows us to scale the results to increase the probabilities of lower scalars in the vector. The exponentiation of $X$ makes larger values much larger. If we include a parameter like $\theta$, we can scale the effect and increase the probabilities assigned to lower values. See for more details the implementation of softmax below.
In Python we can write this using Numpy's exp and sum functions. The axis parameter determines that the some is performed row-wise:
End of explanation
def softmax(y, t=1.0):
return np.exp(y / t) / np.sum(np.exp(y / t), axis=0)
Explanation: We can provide a parameter $\theta$ to the function, to be able to scale the probability for low values up. The larger $\theta$, the higher the probability assigned to lower values. We set the default for $\theta$ to $1.0$ in the softmax definition:
End of explanation
softmax(np.array([4.0, 4.0, 2.0]))
Explanation: For a vector of values $[ 4.0, 4.0, 2.0 ]$, we get the following probability distribution given a default $\theta$ of $1.0$:
End of explanation
softmax(np.array([4.0, 4.0, 2.0]), 2.0)
Explanation: If we double $\theta$, the probability assigned to the third scalar increases significantly:
End of explanation
softmax(np.array([1.7, 0.3, 0.1, -0.7, -0.2, 0.1, 0.7]))
Explanation: Computing Word2Vec
Let us first focus on the skip-gram model. For every word $w$ at position $t$ with some window size or radius of $m$ we want to predict the context of $w$. Our objective function is to maximize the probability of any context word given the current center word:
$$J'(\theta) = \prod_{t=1}^T \prod_{\substack{-m \leq j \leq m\j \neq 0}} P(w_{t+j} | w_t; \theta)$$
Note that $\theta$ here is a hyper-parameter. We will get back to it later. There is also another hidden hyper-parameter, that is the radius $m$.
We can reformulate this equation as the sum of the log-likelihoods:
$$
J(\theta) = - \frac{1}{T} \sum_{t=1}^T{ \sum_{\substack{-m \leq j \leq m\ j \neq 0}} \log(P(w_{t+j}| w_t)) }
$$
In the equation above we are averaging over all words in the text by dividing with $T$. We take the minus of the sums to get a positive score, since the log of a probability will be always negative. This way our goal is also to minimize the loss function $J(\theta)$, that is to minimize the negative log-likelihood.
The task can be described as: with a specific center word somewhere in the middle of a sentence, pick one random word in a specific radius of $n$ words left and right, we will get as output the probability for every word in the vocabulary of being the selected context word.
The model is trained using word pairs extracted from a text. We go word by word through some text, selecting this word as the center word, and pick the context words from a windows of $n$ size. For example, here for $n=3$ (left and right of the center word):
<img src="Sample_Generation_Word2vec.png" width="70%" height="70%">
Our model will be based on the distributional properties of the word pairs, that is the frequency of their cooccurrence. The left word in the example pairs above is the center word (red) and the right word is one of the context words.
Assume that we have learned 300 features represented in vectors for center words and their context respectively, that is, we have two independent vectors for each word, one that represents its features when it is a context word, and one that represents its features as a center word. This is our model $p(w_{t+j}|w_t)$ for a word at position $t$ in the text, and words in $t+j$ positions in the text relative to $t$. For the purpose of explanation here, we could assume that we have such vector pairs of 300 features for 100,000 such words.
In the following equation we will assume $c$ and $o$ to be indices in the space of the word types that we model. These indices now refer to the list of vocabulary, not to positions in the text, as $t$ does above.
We will compute the Softmax of two word vectors for a context word (index $o$ in the vocabulary) and a center word (index $c$ in the vocabulary) by taking the dot-product of their 300-dimensional feature vectors. That is, one is the vector of a word in the context $u_o$, and the other is the vector of the center word $v_c$. The $u$ vectors are in the context word vectors and the $v$ vectors are the center word vectors.
$$
p(o|c) = \frac{\exp(u_o^T v_c)}{\sum_{W=1}^V{\exp(u_w^T v_c)}}
$$
We compute the dot-product between the one-hot vector representing one center word $w_t$ in the center word matrix $W$. This is the lookup matrix (looking up column) of word embedding matrix as representation of center word. (See Lecture 2, Word Vector Representations: word2vec, Stanford presentation by Chris Manning). As explained above, this will result in picking a concrete center word feature vector from the corresponding matrix:
$$\begin{bmatrix}
0 \[0.3em]
0 \[0.3em]
0 \[0.3em]
0 \[0.3em]
1 \[0.3em]
0 \[0.3em]
0 \[0.3em]
0
\end{bmatrix}
\begin{bmatrix}
-- & 0.2 & -- \[0.3em]
-- & -1.4 & -- \[0.3em]
-- & 0.3 & -- \[0.3em]
-- & -0.1 & -- \[0.3em]
-- & 0.1 & -- \[0.3em]
-- & -0.3 & --
\end{bmatrix} =
\begin{bmatrix}
0.2 \[0.3em]
-1.4 \[0.3em]
0.3 \[0.3em]
-0.1 \[0.3em]
0.1 \[0.3em]
-0.3
\end{bmatrix} = V_c
$$
Take this center word matrix to be the hidden layer of a simple neural network.
We then take the dot-product of this vector of the center word with the matrix of the context vectors (here the scalars dashed out), or output word representation, for each word in the context (or the $n$-sized radius) around the center word, which gives us a 100,000 dimensional vector of weights for each word from the vocabulary in the context of the center vector:
$$V_c \cdot \begin{bmatrix}
-- & -- & -- \[0.3em]
-- & -- & -- \[0.3em]
-- & -- & -- \[0.3em]
-- & -- & -- \[0.3em]
-- & -- & -- \[0.3em]
-- & -- & --
\end{bmatrix} = u_o \cdot v_c$$
Take this context vector matrix to be the output matrix.
In the next step we use Softmax to compute the probability distribution of this vector:
$$\mbox{Softmax}(u_o \cdot v_c) = \mbox{Softmax}(
\begin{bmatrix}
1.7 \[0.3em]
0.3 \[0.3em]
0.1 \[0.3em]
-0.7 \[0.3em]
-0.2 \[0.3em]
0.1 \[0.3em]
0.7
\end{bmatrix}) = \begin{bmatrix}
0.44 \[0.3em]
0.11 \[0.3em]
0.09 \[0.3em]
0.04 \[0.3em]
0.07 \[0.3em]
0.09 \[0.3em]
0.16
\end{bmatrix}
$$
Here once more the softmax function over the vector $u_o \cdot v_c$:
End of explanation
sum([0.44, 0.11, 0.09, 0.04, 0.07, 0.09, 0.16])
Explanation: The rounded sum results in $1.0$:
End of explanation
x_old = 0
x_new = 6
eps = 0.01 # step size
precision = 0.00001
def f_derivative(x):
return 4 * x**3 - 9 * x**2
while abs(x_new - x_old) > precision:
x_old = x_new
x_new = x_old - eps * f_derivative(x_old)
print("x_old:", x_old, " Local minimum occurs at", x_new)
Explanation: We could now compare the resulting probabilities with some ground truth and compute the error rate for example. If the ground truth for the above result would be:
$$\begin{bmatrix}
0 \[0.3em]
0 \[0.3em]
0 \[0.3em]
0 \[0.3em]
0 \[0.3em]
0 \[0.3em]
1
\end{bmatrix}$$
this would mean that the model assigned a probability of $0.16$ to this word, then we could compute the loss.
How do we learn the parameters for each word that would maximize the prediction of a context word (or vice versa)?
Training the Model
Assume that all parameters of the model are defined in a long vector $\theta$. This vector will have twice the length of the vocabulary size, containing a weight for every word as a center word, and every word as a context word:
$$
\theta = \begin{bmatrix}
v_{a} \[0.3em]
v_{ant} \[0.3em]
\vdots \[0.3em]
v_{zero} \[0.3em]
u_{a} \[0.3em]
u_{ant} \[0.3em]
\vdots \[0.3em]
u_{zero}
\end{bmatrix} \in \mathbb{R}^{2dV}
$$
We repeat here again the objective function in which we want to minimize the log-likelihood:
$$
J(\theta) = - \frac{1}{T} \sum_{t=1}^T{ \sum_{\substack{-m \leq j \leq m\ j \neq 0}} \log(P(w_{t+j}| w_t)) }
$$
Our softmax function discussed above fits into this equation:
$$
p(o|c) = \frac{\exp(u_o^T v_c)}{\sum_{W=1}^V{\exp(u_w^T v_c)}}
$$
$$
J(\theta) = - \frac{1}{T} \sum_{t=1}^T{ \sum_{\substack{-m \leq j \leq m\ j \neq 0}} \log(p(o|c)) }
$$
Our goal is to minimize the loss function and maximize the likelihood that we predict the right word in the context for any given center word.
Changing the parameters can be achieved using the gradient. We want to minimize the negative log of the following equation, computing the partial derivative with respect to the center vector ($v_c$):
$$
\frac{\partial }{\partial v_c} \log(\frac{\exp(u_o^T v_c)}{\sum_{W=1}^V{\exp(u_w^T v_c)}})
$$
Note a partial derivative of a function with several variables is its derivative with respect to one of these variables. Here it is $v_c$.
The log of a division can be converted into a subtraction:
$$
\frac{\partial }{\partial v_c} \ \log(\exp(u_o^T v_c)) - \log(\sum_{W=1}^V{\exp(u_w^T v_c)})
$$
We can simplify the first part of the subtraction, since $\log$ and $\exp$ cancel each other out, and the partial derivative with respect to $v_c$ of the simplified equation is simply $u_0$:
$$
\frac{\partial }{\partial v_c} \ \log(\exp(u_o^T v_c)) = \frac{\partial }{\partial v_c} \ u_o^T v_c = u_0
$$
For the second part of the subtraction above, we get:
$$
\frac{\partial }{\partial v_c} \ \log(\sum_{W=1}^V{\exp(u_w^T v_c)})
$$
Using the chain rule we can simplify this equation as well. The chain rule expresses the derivative of the composition of two functions, that is mapping $x$ onto $f(g(x))$, in terms of derivatives of $f$ and $g$.
Let us first start with derivatives of simple functions. What is the derivative of a constant $5$ for example? It is $0$. Since the derivative measures the change of a function, and a constant does not change, the derivative of a constant is $0$. The derivative of a variable $x$ is $1$. The derivative of the product of a constant and a variable is the constant. The derivative of $x^n$ is $n \cdot x^{n-1}$.
A differentiable function of a variable is a function whose derivative exists at each point in its domain (see here).The graph of a differentiable function must have a (non-vertical) tangent line at each point in its domain, and cannot contain any breaks, bends, or cusps. The following function $y = |x|$ (absolute value function) is not differentiable:
<img src="600px-Absolute_value.svg.png" width="30%" height="30%">
The following function is differentiable:
<img src="Polynomialdeg3.svg.png" width="30%" height="30%">
The chain rule allows one to find derivatives of compositions of functions by knowing the derivative of the elementary functions from the composition. Let $g(x)$ be differentiable at $x$ and $f(x)$ be differentiable at $f(g(x))$. Then, if $y=f(g(x))$ and $u=g(x)$:
$$\frac{dy}{dx}=\frac{dy}{du}\cdot\frac{du}{dx}$$
...
see for the derivation Chris Manning's video...
The equation he derives is:
$$
u_o - \sum_{x=1}^V p(x|c) u_x
$$
He labels the left part ($u_o$) as observation. This is the context word vectors that we identified in the texts or data. He labels the second part ($\sum_{x=1}^V p(x|c) u_x$) as expectation. This is the part that we want to tweak such that the loss function is minimized.
For maximizing or minimizing the cost (or objective) function using gradient descent, consider this simple example: Find the local minimum of the function $f(x) = x^4 - 3 x^3 + 2$ with derivative $f'(x) = 4 x^3 - 9 x^2$.
Subtracting a fraction of the gradient moves you to the minimum:
End of explanation
while True:
theta_grad = evaluate_gradient(J, corpus, theta)
theta = theta - alpha * theta_grad
Explanation: <img src="Tangent-calculus.svg.png" width="30%" height="30%">
<p><center><a href="https://en.wikipedia.org/wiki/Derivative">Wikipedia Derivative</a></center></p>
Here the idea is to identify the gradient at point $x$, we subtract a little fraction of the gradient, which moves us downhill towards the minimum. We continue with computing the gradient again at this point and walking down towards the minimum.
Gradient Descent
Minimization of an objective function $J(\theta)$ over the entire training corpus makes it necessary to compute gradients for all windows.
We would need to update for each element of $\theta$ to identify the derivatives of the objective function with respect to all the parameters:
$$
\theta_j^{new} = \theta_j^{old} - \alpha \frac{\partial }{\partial \theta_j^{old}} J(\theta)
$$
$\alpha$ is the step size in the Gradient Descent algorithm. We have some parameter values ($\theta_j^{old}$) and we identified the gradient (the $\frac{\partial}{\partial\theta_j^{old}}$ portion) at this position ($j$). We subtract a fraction of the gradient from the old parameters and get new ones. If this gives us a lower objection value, this takes us towards the minimum.
Matrix notation for all parameters:
$$
\theta^{new} = \theta^{old} - \alpha \frac{\partial}{\partial \theta^{old}} J(\theta)
$$
$$
\theta^{new} = \theta^{old} - \alpha \nabla_\theta J(\theta)
$$
Generic Gradient Descent code with some stopping condition to be added to it:
End of explanation
while True:
window = sample_window(corpus)
theta_grad = evaluate_gradient(J, window, theta)
theta = theta - alpha * theta_grad
Explanation: The blue lines show the contour lines of the value of the objective function. Computing the Gradient we identify the direction of the steepest descent. Walking in small steps down this line of the steepest descent takes us to the minimum.
<img src="512px-Gradient_descent.svg.png" width="30%" height="30%">
<p><center>(<a href="https://en.wikipedia.org/wiki/Gradient_descent">Wikipedia: Gradient Descent</a>)</center></p>
To achieve walking continuously down towards the minimum, $\alpha$ needs to be small enough so that one does not jump over the minimum to the other side.
The problem would be that, if we have a billion token corpus, this might include a lot of computations and optimizations. Computing the gradient of the objective function for a very large corpus will take very long for even the first gradient update.
Stochastic Gradient Descent
We compute the gradient and optimize at one position $t$ in the corpus ($t$ is the index of the center word).
$$
\theta^{new} = \theta^{old} - \alpha \nabla_\theta J_t(\theta)
$$
End of explanation
import gensim
sentences = [['Tom', 'loves', 'pizza'], ['Peter', 'loves', 'fries']]
model = gensim.models.Word2Vec(sentences, min_count=1)
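# With this toy corpus the learned vocabulary is tiny; listing it first makes
# the later lookups easier to follow (gensim 3.x style API, as used below).
print(sorted(model.wv.vocab.keys()))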
Explanation: Example using Gensim
Gensim provides an API documentation for word2vec. There is a GitHub repository with the code for a high-performance similarity server using Gensim and word2vec. This short intro is based on the Rare Technologies tutorial.
The input for Gensim's word2vec has to be a list of sentences and sentences are a list of tokens. We will import gensim and train a model on some sample sentences:
End of explanation
import os
class MySentences(object):
def __init__(self, dirname):
self.dirname = dirname
def __iter__(self):
for fname in os.listdir(self.dirname):
for line in open(os.path.join(self.dirname, fname)):
yield line.split()
sentences = MySentences('examples') # load Gensim_example_1.txt from folder, a memory-friendly iterator
model = gensim.models.Word2Vec(sentences)
Explanation: In a more efficient implementation that does not hold an entire corpus in RAM, one can specify a file-reader and process the input line by line, as shown by Radim ลehลฏลek in his tutorial:
End of explanation
model.wv.most_similar(positive=['woman', 'king'], negative=['man'], topn=1)
model.doesnt_match("breakfast cereal dinner lunch".split())
model.similarity('woman', 'man')
Explanation: The code above might not run in Jupyter Notebooks, due to restrictions over some modules. Copy the code into a Python file and run it in the command line.
Gensim's word2vec allows you to prune the internal dictionary. This can eliminate tokens that occur a minimum number of times. The min_count parameter provides this restriction.
Gensim offers an API for:
- evaluation using standard data sets and formats (e.g. Google test set)
- storing and loading of models
- online or resuming of training
We can compute similarities using
End of explanation
model.wv['loves']
Explanation: We can access a word vector directly:
End of explanation |
15,190 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Writing an algorithm (using Spark/Thunder)
In this notebook, we show how to write an algorithm and put it in a function that can be submitted to the NeuroFinder challenge. In these examples, the algorithms will use functionality from Spark / Thunder for distributed image and time series processing. See the other tutorials for an example submission that does the entire job using only the core Python scientific stack (numpy, scipy, etc.)
Setup plotting
Step1: Load the data
First, let's load some example data so we have something to play with. We'll load the first 100 images from one of the data sets.
Step2: Our images is a class from Thunder for representing time-varying image sequences. Let's cache and count it, which forces it to be loaded and saved, and we'll also compute a reference mean image, which will be useful for displays
Step3: We'll also load the ground truth and the metadata for this data set
Step4: Algorithm structure
We're going to write a function that takes the images variable as an input, as well as an info dictionary with data-set specific metadata, and returns identified sources as an output. It'll look like this (for now our function will just pass and thus do nothing)
Step5: The first thing we could do is use one of Thunder's built-in methods for spatio-temporal feature detection, for example, the localmax algorithm. This is a very simple algorithm that computes the mean, and then applies some very simple image processing to detect local image peaks.
Step6: Let's run our function on the example data and inspect the output
Step7: Let's see how well it did on the example data
Step8: This algorithm isn't doing particularly well, but you could submit this right now to the challenge. Take the run function we wrote, put it in a file run.py in a folder called run, and add an empty __init__.py file in the same folder. Then fork the the neurofinder repository on GitHub and add this folder inside submissions. See here for more detailed instructions.
Tweaking a built-in algorithm
Let's try to improve the algorithm a bit. One option is to use the same algorithm, but just tweak the parameters. We can inspect the algorithm we used with ? to see all the available parameters.
Step9: Try increasing the maximum number of sources, and decrease the minimum distance
Step10: Hmm, that did a bit better, but still not great. Note that the precision (the number of extra sources the algorithm found) is particularly bad.
Step11: You probably don't want to submit this one, but using and tweaking the existing algorithms is a perfectly valid way to submit algorithms! You might end up with something that works really well.
Trying a different algorithm class
Most likely the algorithm about just isn't the right algorithm for these data. Let's try a block algorithm, which does more complex spatio-temporal feature extraction on sub-regions, or blocks, of the full movie
Step12: Let's run this algorithm. It'll take a little longer because it's more complex, that's one of the reasons we try to parallelize these computations!
Step13: Inspect the result
Step14: The overall score is worse, but note that the precision is incredibly high. We missed a lot of sources, but the ones we found are all good. You can see that in the image above
Step15: Writing a custom block algorithm
For our final example, we'll build a custom algorithm from strach using the constructors from Thunder. First, we'll define a function to run on each block. For testing and debugging our function, we'll grab a single block. We'll pick one with a large total standard deviation (in both space and time), so it's likely to have some structure.
Step16: This should be a single numpy array with shape (100,40,40), corresponding to the dimensions in time and space.
Step17: Let's write a function that computes the standard deviation over time, finds the index of the max, draws a circle around the peak, and returns it as a Source.
Step18: Test that our function does something reasonable on the test block, showing the recovered source and the mean of the block over time side by side
Step19: Now we can build a block method that uses this function. We just need to import the classes for constructing block methods, and define an extract function to run on each block. In this case, we'll just call our stdpeak function from above, but to form a complete submission you'd need to include this function alongside run. See the inline comments for what we're doing at each step.
Step20: Now run and evaluate the algorithm | Python Code:
%matplotlib inline
from thunder import Colorize
image = Colorize.image
tile = Colorize.tile
Explanation: Writing an algorithm (using Spark/Thunder)
In this notebook, we show how to write an algorithm and put it in a function that can be submitted to the NeuroFinder challenge. In these examples, the algorithms will use functionality from Spark / Thunder for distributed image and time series processing. See the other tutorials for an example submission that does the entire job using only the core Python scientific stack (numpy, scipy, etc.)
Setup plotting
End of explanation
bucket = "s3n://neuro.datasets/"
path = "challenges/neurofinder/01.00/"
images = tsc.loadImages(bucket + path + 'images', startIdx=0, stopIdx=100)
Explanation: Load the data
First, let's load some example data so we have something to play with. We'll load the first 100 images from one of the data sets.
End of explanation
images.cache()
images.count()
ref = images.mean()
Explanation: Our images is a class from Thunder for representing time-varying image sequences. Let's cache and count it, which forces it to be loaded and saved, and we'll also compute a reference mean image, which will be useful for displays
End of explanation
sources = tsc.loadSources(bucket + path + 'sources')
info = tsc.loadJSON(bucket + path + 'info.json')
Explanation: We'll also load the ground truth and the metadata for this data set
End of explanation
def run(data, info=None):
# do an analysis on the images
# optionally make use of the metadata
# return a set of sources
pass
Explanation: Algorithm structure
We're going to write a function that takes the images variable as an input, as well as an info dictionary with data-set specific metadata, and returns identified sources as an output. It'll look like this (for now our function will just pass and thus do nothing):
End of explanation
def run(data, info):
from thunder import SourceExtraction
method = SourceExtraction('localmax')
result = method.fit(data)
return result
Explanation: The first thing we could do is use one of Thunder's built-in methods for spatio-temporal feature detection, for example, the localmax algorithm. This is a very simple algorithm that computes the mean, and then applies some very simple image processing to detect local image peaks.
End of explanation
out = run(images, info)
image(out.masks((512,512), base=ref, outline=True))
Explanation: Let's run our function on the example data and inspect the output
End of explanation
recall, precision, score = sources.similarity(out, metric='distance', minDistance=5)
print('score: %.2f' % score)
Explanation: Let's see how well it did on the example data
End of explanation
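# The combined score hides the trade-off, so it helps to look at the recall
# and precision components returned above separately.
print('recall: %.2f' % recall)
print('precision: %.2f' % precision)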
from thunder.extraction.feature.methods.localmax import LocalMaxFeatureAlgorithm
LocalMaxFeatureAlgorithm?
Explanation: This algorithm isn't doing particularly well, but you could submit this right now to the challenge. Take the run function we wrote, put it in a file run.py in a folder called run, and add an empty __init__.py file in the same folder. Then fork the neurofinder repository on GitHub and add this folder inside submissions. See here for more detailed instructions.
Tweaking a built-in algorithm
Let's try to improve the algorithm a bit. One option is to use the same algorithm, but just tweak the parameters. We can inspect the algorithm we used with ? to see all the available parameters.
End of explanation
def run(data, info):
from thunder import SourceExtraction
method = SourceExtraction('localmax', maxSources=500, minDistance=5)
result = method.fit(data)
return result
out = run(images, info)
image(out.masks((512,512), base=ref, outline=True))
recall, precision, score = sources.similarity(out, metric='distance', minDistance=5)
print('score: %.2f' % score)
Explanation: Try increasing the maximum number of sources, and decrease the minimum distance
End of explanation
print('precision: %.2f' % precision)
Explanation: Hmm, that did a bit better, but still not great. Note that the precision (the number of extra sources the algorithm found) is particularly bad.
End of explanation
def run(data, info):
from thunder import SourceExtraction
from thunder.extraction import OverlapBlockMerger
merger = OverlapBlockMerger(0.1)
method = SourceExtraction('nmf', merger=merger, componentsPerBlock=5, percentile=95, minArea=100, maxArea=500)
result = method.fit(data, size=(32, 32), padding=8)
return result
Explanation: You probably don't want to submit this one, but using and tweaking the existing algorithms is a perfectly valid way to submit algorithms! You might end up with something that works really well.
Trying a different algorithm class
Most likely the algorithm above just isn't the right algorithm for these data. Let's try a block algorithm, which does more complex spatio-temporal feature extraction on sub-regions, or blocks, of the full movie
End of explanation
out = run(images, info)
Explanation: Let's run this algorithm. It'll take a little longer because it's more complex, that's one of the reasons we try to parallelize these computations!
End of explanation
image(out.masks((512,512), base=ref, outline=True))
recall, precision, score = sources.similarity(out, metric='distance', minDistance=5)
print('score: %.2f' % score)
Explanation: Inspect the result
End of explanation
print('precision: %.2f' % precision)
Explanation: The overall score is worse, but note that the precision is incredibly high. We missed a lot of sources, but the ones we found are all good. You can see that in the image above: every identified region does indeed look like it found a neuron.
End of explanation
b = images.toBlocks(size=(40,40)).values().filter(lambda x: x.std() > 1000).first()
Explanation: Writing a custom block algorithm
For our final example, we'll build a custom algorithm from scratch using the constructors from Thunder. First, we'll define a function to run on each block. For testing and debugging our function, we'll grab a single block. We'll pick one with a large total standard deviation (in both space and time), so it's likely to have some structure.
End of explanation
b.shape
Explanation: This should be a single numpy array with shape (100,40,40), corresponding to the dimensions in time and space.
End of explanation
def stdpeak(block):
# compute the standard deviation over time
s = block.std(axis=0)
# get the indices of the peak
from numpy import where
r, c = where(s == s.max())
# define a circle around the center, clipping at the boundaries
from skimage.draw import circle
rr, cc = circle(r[0], c[0], 10, shape=block.shape[1:])
coords = list(zip(rr, cc))
# return as a list of sources (in this case it's just one)
from thunder.extraction.source import Source
if len(coords) > 0:
return [Source(coords)]
else:
return []
Explanation: Let's write a function that computes the standard deviation over time, finds the index of the max, draws a circle around the peak, and returns it as a Source.
End of explanation
s = stdpeak(b)
tile([s[0].mask((40,40)), b.std(axis=0)])
Explanation: Test that our function does something reasonable on the test block, showing the recovered source and the mean of the block over time side by side
End of explanation
def run(data, info):
# import the classes we need for construction
from thunder.extraction.block.base import BlockAlgorithm, BlockMethod
# create a custom class by extending the base method
class TestBlockAlgorithm(BlockAlgorithm):
# write an extract function which draws a circle around the pixel
# in each block with peak standard deviation
def extract(self, block):
return stdpeak(block)
# now instantiate our new method and use it to fit the data
method = BlockMethod(algorithm=TestBlockAlgorithm())
result = method.fit(data, size=(40, 40))
return result
Explanation: Now we can build a block method that uses this function. We just need to import the classes for constructing block methods, and define an extract function to run on each block. In this case, we'll just call our stdpeak function from above, but to form a complete submission you'd need to include this function alongside run. See the inline comments for what we're doing at each step.
End of explanation
out = run(images, info)
image(out.masks((512,512), base=sources, outline=True))
recall, precision, score = sources.similarity(out, metric='distance', minDistance=5)
print('score: %.2f' % score)
Explanation: Now run and evaluate the algorithm
End of explanation |
15,191 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Unit Tests
Overview and Principles
Testing is the process by which you exercise your code to determine if it performs as expected. The code you are testing is referred to as the code under test.
There are two parts to writing tests.
1. invoking the code under test so that it is exercised in a particular way;
1. evaluating the results of executing code under test to determine if it behaved as expected.
The collection of tests performed are referred to as the test cases. The fraction of the code under test that is executed as a result of running the test cases is referred to as test coverage.
For dynamical languages such as Python, it's extremely important to have a high test coverage. In fact, you should try to get 100% coverage. This is because little checking is done when the source code is read by the Python interpreter. For example, the code under test might contain a line that has a function that is undefined. This would not be detected until that line of code is executed.
Test cases can be of several types. Below are listed some common classifications of test cases.
- Smoke test. This is an invocation of the code under test to see if there is an unexpected exception. It's useful as a starting point, but this doesn't tell you anything about the correctness of the results of a computation.
- One-shot test. In this case, you call the code under test with arguments for which you know the expected result.
- Edge test. The code under test is invoked with arguments that should cause an exception, and you evaluate if the expected exception occurs.
- Pattern test - Based on your knowledge of the calculation (not implementation) of the code under test, you construct a suite of test cases for which the results are known or there are known patterns in these results that are used to evaluate the results returned.
Another principle of testing is to limit what is done in a single test case. Generally, a test case should focus on one use of one function. Sometimes, this is a challenge since the function being tested may call other functions that you are testing. This means that bugs in the called functions may cause failures in the tests of the calling functions. Often, you sort this out by knowing the structure of the code and focusing first on failures in lower level tests. In other situations, you may use more advanced techniques called mocking. A discussion of mocking is beyond the scope of this course.
A best practice is to develop your tests while you are developing your code. Indeed, one school of thought in software engineering, called test-driven development, advocates that you write the tests before you implement the code under test so that the test cases become a kind of specification for what the code under test should do.
Examples of Test Cases
This section presents examples of test cases. The code under test is the calculation of entropy.
Entropy of a set of probabilities
$$
H = -\sum_i p_i \log(p_i)
$$
where $\sum_i p_i = 1$.
Step1: Suppose that all of the probability of a distribution is at one point. An example of this is a coin with two heads. Whenever you flip it, you always get heads. That is, the probability of a head is 1.
What is the entropy of such a distribution? From the calculation above, we see that the entropy should be $log(1)$, which is 0. This means that we have a test case where we know the result!
Step2: Question
Step3: Now let's consider a pattern test. Examining the structure of the calculation of $H$, we consider a situation in which there are $n$ equal probabilities. That is, $p_i = \frac{1}{n}$.
$$
H = -\sum_{i=1}^{n} p_i \log(p_i)
= -\sum_{i=1}^{n} \frac{1}{n} \log(\frac{1}{n})
= n (-\frac{1}{n} \log(\frac{1}{n}) )
= -\log(\frac{1}{n})
$$
For example, entropy([0.5, 0.5]) should be $-log(0.5)$.
Step4: You see that there are many, many cases to test. So far, we've been writing special codes for each test case. We can do better.
Unittest Infrastructure
There are several reasons to use a test infrastructure
Step7: Code for homework or your work should use test files. In this lesson, we'll show how to write test codes in a Jupyter notebook. This is done for pedagogical reasons. It is NOT something you should do in practice, except as an intermediate exploratory approach.
As expected, the first test passes, but the second test fails.
Exercise
Rewrite the above one-shot test for entropy using the unittest infrastructure.
Step8: Testing For Exceptions
Edge test cases often involves handling exceptions. One approach is to code this directly.
Step9: unittest provides help with testing exceptions.
Step10: Test Files
Although I presented the elements of unittest in a notebook, your tests should be in a file. If the name of the module with the code under test is foo.py, then the name of the test file should be test_foo.py.
The structure of the test file will be very similar to cells above. You will import unittest. You must also import the module with the code under test. Take a look at test_prime.py in this directory to see an example.
Discussion
Question | Python Code:
import numpy as np
# Code Under Test
def entropy(ps):
if not np.isclose(np.sum(ps), 1.0):
raise ValueError("Probability is not 1.")
items = ps * np.log(ps)
return -np.sum(items)
# Smoke test
probs = [
[0.1, 0.8, 0.1],
[0.1, 0.9],
[0.5, 0.5],
[1.0]
]
for prob in probs:
try:
entropy(prob)
except:
print("%s failed." % str(prob))
print ("Testing completed.")
Explanation: Unit Tests
Overview and Principles
Testing is the process by which you exercise your code to determine if it performs as expected. The code you are testing is referred to as the code under test.
There are two parts to writing tests.
1. invoking the code under test so that it is exercised in a particular way;
1. evaluating the results of executing code under test to determine if it behaved as expected.
The collection of tests performed are referred to as the test cases. The fraction of the code under test that is executed as a result of running the test cases is referred to as test coverage.
For dynamical languages such as Python, it's extremely important to have a high test coverage. In fact, you should try to get 100% coverage. This is because little checking is done when the source code is read by the Python interpreter. For example, the code under test might contain a line that has a function that is undefined. This would not be detected until that line of code is executed.
Test cases can be of several types. Below are listed some common classifications of test cases.
- Smoke test. This is an invocation of the code under test to see if there is an unexpected exception. It's useful as a starting point, but this doesn't tell you anything about the correctness of the results of a computation.
- One-shot test. In this case, you call the code under test with arguments for which you know the expected result.
- Edge test. The code under test is invoked with arguments that should cause an exception, and you evaluate if the expected exception occurs.
- Pattern test - Based on your knowledge of the calculation (not implementation) of the code under test, you construct a suite of test cases for which the results are known or there are known patterns in these results that are used to evaluate the results returned.
Another principle of testing is to limit what is done in a single test case. Generally, a test case should focus on one use of one function. Sometimes, this is a challenge since the function being tested may call other functions that you are testing. This means that bugs in the called functions may cause failures in the tests of the calling functions. Often, you sort this out by knowing the structure of the code and focusing first on failures in lower level tests. In other situations, you may use more advanced techniques called mocking. A discussion of mocking is beyond the scope of this course.
A best practice is to develop your tests while you are developing your code. Indeed, one school of thought in software engineering, called test-driven development, advocates that you write the tests before you implement the code under test so that the test cases become a kind of specification for what the code under test should do.
Examples of Test Cases
This section presents examples of test cases. The code under test is the calculation of entropy.
Entropy of a set of probabilities
$$
H = -\sum_i p_i \log(p_i)
$$
where $\sum_i p_i = 1$.
End of explanation
# One-shot test. Need to know the correct answer.
entries = [
[0, [1]],
]
for entry in entries:
ans = entry[0]
prob = entry[1]
if not np.isclose(entropy(prob), ans):
print("Test failed!")
print ("Test completed!")
Explanation: Suppose that all of the probability of a distribution is at one point. An example of this is a coin with two heads. Whenever you flip it, you always get heads. That is, the probability of a head is 1.
What is the entropy of such a distribution? From the calculation above, we see that the entropy should be $log(1)$, which is 0. This means that we have a test case where we know the result!
End of explanation
# Edge test. This is something that should cause an exception.
entropy([0.5])
Explanation: Question: What is an example of another one-shot test? (Hint: You need to know the expected result.)
One edge test of interest is to provide an input that is not a distribution in that probabilities don't sum to 1.
End of explanation
# Pattern test
def test_equal_probabilities(n):
prob = 1.0/n
ps = np.repeat(prob , n)
if not np.isclose(entropy(ps), -np.log(prob)):
import pdb; pdb.set_trace()
print ("Bad result.")
else:
print("Worked!")
# Run a test
test_equal_probabilities(100)
Explanation: Now let's consider a pattern test. Examining the structure of the calculation of $H$, we consider a situation in which there are $n$ equal probabilities. That is, $p_i = \frac{1}{n}$.
$$
H = -\sum_{i=1}^{n} p_i \log(p_i)
= -\sum_{i=1}^{n} \frac{1}{n} \log(\frac{1}{n})
= n (-\frac{1}{n} \log(\frac{1}{n}) )
= -\log(\frac{1}{n})
$$
For example, entropy([0.5, 0.5]) should be $-log(0.5)$.
End of explanation
import unittest
# Define a class in which the tests will run
class UnitTests(unittest.TestCase):
# Each method in the class to execute a test
def test_success(self):
self.assertEqual(1, 1)
def test_success1(self):
self.assertTrue(1 == 1)
def test_failure(self):
self.assertEqual(1, 2)
suite = unittest.TestLoader().loadTestsFromTestCase(UnitTests)
_ = unittest.TextTestRunner().run(suite)
# Function the handles test loading
#def test_setup(argument ?):
Explanation: You see that there are many, many cases to test. So far, we've been writing special codes for each test case. We can do better.
Unittest Infrastructure
There are several reasons to use a test infrastructure:
- If you have many test cases (which you should!), the test infrastructure will save you from writing a lot of code.
- The infrastructure provides a uniform way to report test results, and to handle test failures.
- A test infrastructure can tell you about coverage so you know what tests to add.
We'll be using the unittest framework. This is a separate Python package. Using this infrastructure, requires the following:
1. import the unittest module
1. define a class that inherits from unittest.TestCase
1. write methods that run the code to be tested and check the outcomes.
The last item has two subparts. First, we must identify which methods in the class inheriting from unittest.TestCase are tests. You indicate that a method is to be run as a test by having the method name begin with "test".
Second, the "test methods" should communicate with the infrastructure the results of evaluating output from the code under test. This is done by using assert statements. For example, self.assertEqual takes two arguments. If these are objects for which == returns True, then the test passes. Otherwise, the test fails.
End of explanation
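When the tests live in a regular test file rather than a notebook, a common pattern (assumed here, not shown in the original) is to end the file with a main guard so the tests can be run as a plain script or with python -m unittest:
# at the bottom of a test file such as test_foo.py
if __name__ == '__main__':
    unittest.main()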
# Implementating a pattern test. Use functions in the test.
import unittest
# Define a class in which the tests will run
class TestEntropy(unittest.TestCase):
def test_equal_probability(self):
def test(count):
"""Invokes the entropy function for a number of values equal to count
that have the same probability.
:param int count:
"""
raise RuntimeError("Not implemented.")
#
test(2)
test(20)
test(200)
#test_setup(TestEntropy)
import unittest
# Define a class in which the tests will run
class TestEntropy(unittest.TestCase):
"""Write the full set of tests."""
Explanation: Code for homework or your work should use test files. In this lesson, we'll show how to write test codes in a Jupyter notebook. This is done for pedagogical reasons. It is NOT something you should do in practice, except as an intermediate exploratory approach.
As expected, the first test passes, but the second test fails.
Exercise
Rewrite the above one-shot test for entropy using the unittest infrastructure.
End of explanation
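One possible solution for this exercise (a sketch; it reuses the entropy function and the numpy import defined earlier in this lesson):
import unittest

class TestEntropyOneShot(unittest.TestCase):

    def test_single_outcome(self):
        # all probability on one outcome => entropy should be 0
        self.assertAlmostEqual(entropy([1.0]), 0.0)

suite = unittest.TestLoader().loadTestsFromTestCase(TestEntropyOneShot)
_ = unittest.TextTestRunner().run(suite)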
import unittest
# Define a class in which the tests will run
class TestEntropy(unittest.TestCase):
def test_invalid_probability(self):
try:
entropy([0.1, 0.5])
self.assertTrue(False)
except ValueError:
self.assertTrue(True)
#test_setup(TestEntropy)
Explanation: Testing For Exceptions
Edge test cases often involves handling exceptions. One approach is to code this directly.
End of explanation
import unittest
# Define a class in which the tests will run
class TestEntropy(unittest.TestCase):
def test_invalid_probability(self):
with self.assertRaises(ValueError):
entropy([0.1, 0.5])
suite = unittest.TestLoader().loadTestsFromTestCase(TestEntropy)
_ = unittest.TextTestRunner().run(suite)
Explanation: unittest provides help with testing exceptions.
End of explanation
import unittest
# Define a class in which the tests will run
class TestGeomean(unittest.TestCase):
def test_oneshot(self):
self.assertEqual(geomean([1,1]), 1)
def test_oneshot2(self):
self.assertEqual(geomean([3, 3, 3]), 3)
#test_setup(TestGeomean)
#def geomean(argument?):
# return ?
Explanation: Test Files
Although I presented the elements of unittest in a notebook, your tests should be in a file. If the name of the module with the code under test is foo.py, then the name of the test file should be test_foo.py.
The structure of the test file will be very similar to cells above. You will import unittest. You must also import the module with the code under test. Take a look at test_prime.py in this directory to see an example.
Discussion
Question: What tests would you write for a plotting function?
Test Driven Development
Start by writing the tests. Then write the code.
We illustrate this by considering a function geomean that takes a list of numbers as input and produces the geometric mean on output.
End of explanation |
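One possible geomean implementation for the test-driven exercise above (a sketch; note that because of floating-point rounding, assertAlmostEqual is safer than assertEqual for a check like geomean([3, 3, 3]) == 3):
def geomean(numbers):
    # product of all numbers, then the n-th root
    product = 1.0
    for num in numbers:
        product *= num
    return product ** (1.0 / len(numbers))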
15,192 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
1. Define a function maximum that takes two numbers as arguments and returns the largest of them. Use the if-then-else construct available in Python. (It is true that Python has the max() function built in, but writing it yourself is nevertheless a good exercise.)
Step1: 2. Define a function max_of_three that takes three numbers as arguments and returns the largest of them.
Step2: 3. Define a function length that computes the length of a given list or string. (It is true that Python has the len() function built in, but writing it yourself is nevertheless a good exercise.)
Step3: 4. Write a function is_vowel that takes a character (i.e. a string of length 1) and returns True if it is a vowel, False otherwise.
Step4: 5. Define a function accumulate and a function multiply that sums and multiplies (respectively) all the numbers in a list of numbers. For example, sum([1, 2, 3, 4]) should return 10, and multiply([1, 2, 3, 4]) should return 24.
Step5: A more elegant and generic solution is given hereafter. It uses a functional approach, as the function that is to be calculated is passed to the function
Step6: 6. Define a function reverse that computes the reversal of a string. For example, reverse("I am testing") should return the string "gnitset ma I".
Step7: 7. Define a function is_palindrome that recognizes palindromes (i.e. words that look the same written backwards). For example, is_palindrome("radar") should return True.
Step8: 8. Write a function is_member that takes a value (i.e. a number, string, etc) x and a list of values a, and returns True if x is a member of a, False otherwise. (Note that this is exactly what the in operator does, but for the sake of the exercise you should pretend Python did not have this operator.)
Step9: 9. Define a procedure histogram that takes a list of integers and prints a histogram to the screen. For example, histogram([4, 9, 7]) should print the following
Step10: 10. Write a function filter_long_words that takes a list of words and an integer n and returns the list of words that are longer than n.
Step11: 11. A pangram is a sentence that contains all the letters of the English alphabet at least once, for example
Step12: 12. Represent a small bilingual lexicon as a Python dictionary in the following fashion {"may"
Step13: 13. In cryptography, a Caesar cipher is a very simple encryption techniques in which each letter in the plain text is replaced by a letter some fixed number of positions down the alphabet. For example, with a shift of 3, A would be replaced by D, B would become E, and so on. The method is named after Julius Caesar, who used it to communicate with his generals. ROT-13 ("rotate by 13 places") is a widely used example of a Caesar cipher where the shift is 13. In Python, the key for ROT-13 may be represented by means of the following dictionary
Step14: 14. Write a procedure char_freq_table that accepts the file name jedi.txt as argument, builds a frequency listing of the characters contained in the file, and prints a sorted and nicely formatted character frequency table to the screen. | Python Code:
assert maximum(3, 3) == 3
assert maximum(1, 2) == 2
assert maximum(3, 2) == 3
Explanation: 1. Define a function maximum that takes two numbers as arguments and returns the largest of them. Use the if-then-else construct available in Python. (It is true that Python has the max() function built in, but writing it yourself is nevertheless a good exercise.)
End of explanation
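One possible implementation that satisfies the asserts above (a sketch using if-then-else, as the exercise asks):
def maximum(a, b):
    if a > b:
        return a
    else:
        return b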
assert max_of_three(1, 2, 3) == 3
assert max_of_three(1, 1, 2) == 2
assert max_of_three(2, 1 , .5) == 2
assert max_of_three(0, 0, 0) == 0
Explanation: 2. Define a function max_of_three that takes three numbers as arguments and returns the largest of them.
End of explanation
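A possible sketch, assuming the maximum function from the previous exercise is available:
def max_of_three(a, b, c):
    return maximum(maximum(a, b), c)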
assert length([1, 2, 3]) == 3
assert length('this is some string') == 19
Explanation: 3. Define a function length that computes the length of a given list or string. (It is true that Python has the len() function built in, but writing it yourself is nevertheless a good exercise.)
End of explanation
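A possible sketch that counts elements without calling len():
def length(obj):
    count = 0
    for _ in obj:
        count += 1
    return count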
def is_vowel(s):
return s in 'aeiou'
assert is_vowel('t') == False
assert is_vowel('a') == True
Explanation: 4. Write a function is_vowel that takes a character (i.e. a string of length 1) and returns True if it is a vowel, False otherwise.
End of explanation
assert accumulate([1, 2, 3, 4]) == 10
assert multiply([1, 2, 3, 4]) == 24
Explanation: 5. Define a function accumulate and a function multiply that sums and multiplies (respectively) all the numbers in a list of numbers. For example, sum([1, 2, 3, 4]) should return 10, and multiply([1, 2, 3, 4]) should return 24.
End of explanation
from operator import add, mul
def calc(obj, func):
res = None
if func == add:
res = 0
if func == mul:
res = 1
for num in obj:
res = func(res, num)
return res
print(calc([1, 2, 3, 4], mul))
print(calc([1, 2, 3, 4], add))
Explanation: A more elegant and generic solution is given hereafter. It uses a functional approach, as the function that is to be calculated is passed to the function:
End of explanation
assert reverse('I am testing') == 'gnitset ma I'
Explanation: 6. Define a function reverse that computes the reversal of a string. For example, reverse("I am testing") should return the string "gnitset ma I".
End of explanation
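A possible sketch that builds the reversed string one character at a time:
def reverse(s):
    result = ''
    for ch in s:
        result = ch + result
    return result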
assert is_palindrome('radar') == True
assert is_palindrome('sonar') == False
Explanation: 7. Define a function is_palindrome that recognizes palindromes (i.e. words that look the same written backwards). For example, is_palindrome("radar") should return True.
End of explanation
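A possible sketch, assuming the reverse function from the previous exercise is available:
def is_palindrome(word):
    return word == reverse(word)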
assert is_member([1, 2, 3], 4) == False
assert is_member([1, 2, 3], 2) == True
Explanation: 8. Write a function is_member that takes a value (i.e. a number, string, etc) x and a list of values a, and returns True if x is a member of a, False otherwise. (Note that this is exactly what the in operator does, but for the sake of the exercise you should pretend Python did not have this operator.)
End of explanation
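A possible sketch; note that the argument order follows the asserts above (list first, then value):
def is_member(a, x):
    for item in a:
        if item == x:
            return True
    return False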
histogram([4, 9, 7])
Explanation: 9. Define a procedure histogram that takes a list of integers and prints a histogram to the screen. For example, histogram([4, 9, 7]) should print the following:
```
****
*********
*******
```
End of explanation
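A possible sketch that prints one row of asterisks per integer:
def histogram(counts):
    for n in counts:
        print('*' * n)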
assert len(filter_long_words('this is some sentence'.split(), 3)) == 3
Explanation: 10. Write a function filter_long_words that takes a list of words and an integer n and returns the list of words that are longer than n.
End of explanation
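A possible sketch using a list comprehension:
def filter_long_words(words, n):
    return [word for word in words if len(word) > n]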
assert is_pangram('foo') == False
assert is_pangram('The quick brown fox jumps over the lazy dog') == True
Explanation: 11. A pangram is a sentence that contains all the letters of the English alphabet at least once, for example: "The quick brown fox jumps over the lazy dog". Your task here is to write a function is_pangram to check a sentence to see if it is a pangram or not.
End of explanation
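A possible sketch that checks whether every lowercase letter of the alphabet appears in the sentence:
import string

def is_pangram(sentence):
    return set(string.ascii_lowercase) <= set(sentence.lower())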
assert translate("may the force be with you".split()) == ['möge', 'die', 'macht', 'sein', 'mit', 'dir']
Explanation: 12. Represent a small bilingual lexicon as a Python dictionary in the following fashion {"may": "möge", "the": "die", "force": "macht", "be": "sein", "with": "mit", "you": "dir"} and use it to translate the sentence "may the force be with you" from English into German. That is, write a function translate that takes a list of English words and returns a list of German words.
End of explanation
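A possible sketch using the dictionary from the exercise (the name lexicon is our choice):
lexicon = {"may": "möge", "the": "die", "force": "macht", "be": "sein",
           "with": "mit", "you": "dir"}

def translate(words):
    return [lexicon[word] for word in words]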
text = 'this is some text'
assert rot13(rot13(text)) == text
Explanation: 13. In cryptography, a Caesar cipher is a very simple encryption techniques in which each letter in the plain text is replaced by a letter some fixed number of positions down the alphabet. For example, with a shift of 3, A would be replaced by D, B would become E, and so on. The method is named after Julius Caesar, who used it to communicate with his generals. ROT-13 ("rotate by 13 places") is a widely used example of a Caesar cipher where the shift is 13. In Python, the key for ROT-13 may be represented by means of the following dictionary:
key = {'a':'n', 'b':'o', 'c':'p', 'd':'q', 'e':'r', 'f':'s', 'g':'t', 'h':'u',
'i':'v', 'j':'w', 'k':'x', 'l':'y', 'm':'z', 'n':'a', 'o':'b', 'p':'c',
'q':'d', 'r':'e', 's':'f', 't':'g', 'u':'h', 'v':'i', 'w':'j', 'x':'k',
'y':'l', 'z':'m', 'A':'N', 'B':'O', 'C':'P', 'D':'Q', 'E':'R', 'F':'S',
'G':'T', 'H':'U', 'I':'V', 'J':'W', 'K':'X', 'L':'Y', 'M':'Z', 'N':'A',
'O':'B', 'P':'C', 'Q':'D', 'R':'E', 'S':'F', 'T':'G', 'U':'H', 'V':'I',
'W':'J', 'X':'K', 'Y':'L', 'Z':'M'}
Your task in this exercise is to implement an encoder/decoder of ROT-13 called rot13. Once you're done, you will be able to read the following secret message:
Pnrfne pvcure? V zhpu cersre Pnrfne fnynq!
Note that since English has 26 characters, your ROT-13 program will be able to both encode and decode texts written in English.
End of explanation
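A possible sketch, assuming the key dictionary from the exercise text has been defined; characters not in the key (spaces, punctuation) pass through unchanged, so applying rot13 twice returns the original text:
def rot13(text):
    return ''.join(key.get(ch, ch) for ch in text)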
with open('material/jedi_frequencies.txt') as fh:
print(fh.read())
Explanation: 14. Write a procedure char_freq_table that accepts the file name jedi.txt as argument, builds a frequency listing of the characters contained in the file, and prints a sorted and nicely formatted character frequency table to the screen.
End of explanation |
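A possible sketch using collections.Counter; the exact output formatting is an assumption, since only the expected frequencies file is shown above:
from collections import Counter

def char_freq_table(filename):
    with open(filename) as fh:
        freqs = Counter(fh.read())
    for char, count in sorted(freqs.items()):
        print(repr(char), count)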
15,193 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2018 The TensorFlow Authors.
Step1: ํ
์ํ๋ก๋ก ๋ถ์ฐ ํ๋ จํ๊ธฐ
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step2: ์ ๋ต์ ์ข
๋ฅ
tf.distribute.Strategy๋ ์๋ก ๋ค๋ฅธ ๋ค์ํ ์ฌ์ฉ ํํ๋ฅผ ์์ฐ๋ฅด๋ ค๊ณ ํฉ๋๋ค. ๋ช ๊ฐ์ง ์กฐํฉ์ ํ์ฌ ์ง์ํ์ง๋ง, ์ถํ์ ์ถ๊ฐ๋ ์ ๋ต๋ค๋ ์์ต๋๋ค. ์ด๋ค ์ค ๋ช ๊ฐ์ง๋ฅผ ์ดํด๋ณด๊ฒ ์ต๋๋ค.
๋๊ธฐ ํ๋ จ ๋ ๋น๋๊ธฐ ํ๋ จ
Step3: MirroredStrategy ์ธ์คํด์ค๊ฐ ์๊ฒผ์ต๋๋ค. ํ
์ํ๋ก๊ฐ ์ธ์ํ ๋ชจ๋ GPU๋ฅผ ์ฌ์ฉํ๊ณ , ์ฅ์น ๊ฐ ํต์ ์๋ NCCL์ ์ฌ์ฉํ ๊ฒ์
๋๋ค.
์ฅ๋น์ GPU ์ค ์ผ๋ถ๋ง ์ฌ์ฉํ๊ณ ์ถ๋ค๋ฉด, ๋ค์๊ณผ ๊ฐ์ด ํ๋ฉด ๋ฉ๋๋ค.
Step4: ์ฅ์น ๊ฐ ํต์ ๋ฐฉ๋ฒ์ ๋ฐ๊พธ๊ณ ์ถ๋ค๋ฉด, cross_device_ops ์ธ์์ tf.distribute.CrossDeviceOps ํ์
์ ์ธ์คํด์ค๋ฅผ ๋๊ธฐ๋ฉด ๋ฉ๋๋ค. ํ์ฌ ๊ธฐ๋ณธ๊ฐ์ธ tf.distribute.NcclAllReduce ์ด์ธ์ tf.distribute.HierarchicalCopyAllReduce์ tf.distribute.ReductionToOneDevice ๋ ๊ฐ์ง ์ถ๊ฐ ์ต์
์ ์ ๊ณตํฉ๋๋ค.
Step5: CentralStorageStrategy
tf.distribute.experimental.CentralStorageStrategy๋ ๋๊ธฐ ํ๋ จ์ ํฉ๋๋ค. ํ์ง๋ง ๋ณ์๋ฅผ ๋ฏธ๋ฌ๋งํ์ง ์๊ณ , CPU์์ ๊ด๋ฆฌํฉ๋๋ค. ์์
์ ๋ชจ๋ ๋ก์ปฌ GPU๋ค๋ก ๋ณต์ ๋ฉ๋๋ค. ๋จ, ๋ง์ฝ GPU๊ฐ ํ๋๋ฐ์ ์๋ค๋ฉด ๋ชจ๋ ๋ณ์์ ์์
์ด ๊ทธ GPU์ ๋ฐฐ์น๋ฉ๋๋ค.
๋ค์๊ณผ ๊ฐ์ด CentralStorageStrategy ์ธ์คํด์ค๋ฅผ ๋ง๋์ญ์์ค.
Step6: CentralStorageStrategy ์ธ์คํด์ค๊ฐ ๋ง๋ค์ด์ก์ต๋๋ค. ์ธ์ํ ๋ชจ๋ GPU์ CPU๋ฅผ ์ฌ์ฉํฉ๋๋ค. ๊ฐ ๋ณต์ ๋ณธ์ ๋ณ์ ๋ณ๊ฒฝ์ฌํญ์ ๋ชจ๋ ์์ง๋ ํ ๋ณ์์ ์ ์ฉ๋ฉ๋๋ค.
Note
Step7: MultiWorkerMirroredStrategy์ ์ฌ์ฉํ ์ ์๋ ์์ง ์ฐ์ฐ ๊ตฌํ์ ํ์ฌ ๋ ๊ฐ์ง์
๋๋ค. CollectiveCommunication.RING๋ gRPC๋ฅผ ์ฌ์ฉํ ๋ง ๋คํธ์ํฌ ๊ธฐ๋ฐ์ ์์ง ์ฐ์ฐ์
๋๋ค. CollectiveCommunication.NCCL๋ Nvidia์ NCCL์ ์ฌ์ฉํ์ฌ ์์ง ์ฐ์ฐ์ ๊ตฌํํ ๊ฒ์
๋๋ค. CollectiveCommunication.AUTO๋ก ์ค์ ํ๋ฉด ๋ฐํ์์ด ์์์ ๊ตฌํ์ ๊ณ ๋ฆ
๋๋ค. ์ต์ ์ ์์ง ์ฐ์ฐ ๊ตฌํ์ GPU์ ์์ ์ข
๋ฅ, ํด๋ฌ์คํฐ์ ๋คํธ์ํฌ ์ฐ๊ฒฐ ๋ฑ์ ๋ฐ๋ผ ๋ค๋ฅผ ์ ์์ต๋๋ค. ์๋ฅผ ๋ค์ด ๋ค์๊ณผ ๊ฐ์ด ์ง์ ํ ์ ์์ต๋๋ค.
Step8: ๋ค์ค GPU๋ฅผ ์ฌ์ฉํ๋ ๊ฒ๊ณผ ๋น๊ตํด์ ๋ค์ค ์์ปค๋ฅผ ์ฌ์ฉํ๋ ๊ฒ์ ๊ฐ์ฅ ํฐ ์ฐจ์ด์ ์ ๋ค์ค ์์ปค์ ๋ํ ์ค์ ๋ถ๋ถ์
๋๋ค. ํด๋ฌ์คํฐ๋ฅผ ๊ตฌ์ฑํ๋ ๊ฐ ์์ปค์ "TF_CONFIG" ํ๊ฒฝ๋ณ์๋ฅผ ์ฌ์ฉํ์ฌ ํด๋ฌ์คํฐ ์ค์ ์ ํ๋ ๊ฒ์ด ํ
์ํ๋ก์ ํ์ค์ ์ธ ๋ฐฉ๋ฒ์
๋๋ค. ์๋์ชฝ "TF_CONFIG" ํญ๋ชฉ์์ ์ด๋ป๊ฒ ํ๋์ง ์์ธํ ์ดํด๋ณด๊ฒ ์ต๋๋ค.
Note
Step9: ์ ์์์๋ MirroredStrategy๋ฅผ ์ฌ์ฉํ๊ธฐ ๋๋ฌธ์, ํ๋์ ์ฅ๋น๊ฐ ๋ค์ค GPU๋ฅผ ๊ฐ์ง ๊ฒฝ์ฐ์ ์ฌ์ฉํ ์ ์์ต๋๋ค. strategy.scope()๋ก ๋ถ์ฐ ์ฒ๋ฆฌํ ๋ถ๋ถ์ ์ฝ๋์ ์ง์ ํ ์ ์์ต๋๋ค. ์ด ๋ฒ์(scope) ์์์ ๋ชจ๋ธ์ ๋ง๋ค๋ฉด, ์ผ๋ฐ์ ์ธ ๋ณ์๊ฐ ์๋๋ผ ๋ฏธ๋ฌ๋ง๋ ๋ณ์๊ฐ ๋ง๋ค์ด์ง๋๋ค. ์ด ๋ฒ์ ์์์ ์ปดํ์ผ์ ํ๋ค๋ ๊ฒ์ ์์ฑ์๊ฐ ์ด ์ ๋ต์ ์ฌ์ฉํ์ฌ ๋ชจ๋ธ์ ํ๋ จํ๋ ค๊ณ ํ๋ค๋ ์๋ฏธ์
๋๋ค. ์ด๋ ๊ฒ ๊ตฌ์ฑํ๊ณ ๋์, ์ผ๋ฐ์ ์ผ๋ก ์คํํ๋ ๊ฒ์ฒ๋ผ ๋ชจ๋ธ์ fit ํจ์๋ฅผ ํธ์ถํฉ๋๋ค.
MirroredStrategy๊ฐ ๋ชจ๋ธ์ ํ๋ จ์ ์ฌ์ฉ ๊ฐ๋ฅํ GPU๋ค๋ก ๋ณต์ ํ๊ณ , ๊ทธ๋๋์ธํธ๋ค์ ์์งํ๋ ๊ฒ ๋ฑ์ ์์์ ์ฒ๋ฆฌํฉ๋๋ค.
Step10: ์์์๋ ํ๋ จ๊ณผ ํ๊ฐ ์
๋ ฅ์ ์ํด tf.data.Dataset์ ์ฌ์ฉํ์ต๋๋ค. ๋ํ์ด(numpy) ๋ฐฐ์ด๋ ์ฌ์ฉํ ์ ์์ต๋๋ค.
Step11: ๋ฐ์ดํฐ์
์ด๋ ๋ํ์ด๋ฅผ ์ฌ์ฉํ๋ ๋ ๊ฒฝ์ฐ ๋ชจ๋ ์
๋ ฅ ๋ฐฐ์น๊ฐ ๋์ผํ ํฌ๊ธฐ๋ก ๋๋์ด์ ธ์ ์ฌ๋ฌ ๊ฐ๋ก ๋ณต์ ๋ ์์
์ ์ ๋ฌ๋ฉ๋๋ค. ์๋ฅผ ๋ค์ด, MirroredStrategy๋ฅผ 2๊ฐ์ GPU์์ ์ฌ์ฉํ๋ค๋ฉด, ํฌ๊ธฐ๊ฐ 10๊ฐ์ธ ๋ฐฐ์น(batch)๊ฐ ๋ ๊ฐ์ GPU๋ก ๋ฐฐ๋ถ๋ฉ๋๋ค. ์ฆ, ๊ฐ GPU๋ ํ ๋จ๊ณ๋ง๋ค 5๊ฐ์ ์
๋ ฅ์ ๋ฐ๊ฒ ๋ฉ๋๋ค. ๋ฐ๋ผ์ GPU๊ฐ ์ถ๊ฐ๋ ์๋ก ๊ฐ ์ํฌํฌ(epoch) ๋น ํ๋ จ ์๊ฐ์ ์ค์ด๋ค๊ฒ ๋ฉ๋๋ค. ์ผ๋ฐ์ ์ผ๋ก๋ ๊ฐ์๊ธฐ๋ฅผ ๋ ์ถ๊ฐํ ๋๋ง๋ค ๋ฐฐ์น ์ฌ์ด์ฆ๋ ๋ ํค์๋๋ค. ์ถ๊ฐํ ์ปดํจํ
์์์ ๋ ํจ๊ณผ์ ์ผ๋ก ์ฌ์ฉํ๊ธฐ ์ํด์์
๋๋ค. ๋ชจ๋ธ์ ๋ฐ๋ผ์๋ ํ์ต๋ฅ (learning rate)์ ์ฌ์กฐ์ ํด์ผ ํ ์๋ ์์ ๊ฒ์
๋๋ค. ๋ณต์ ๋ณธ์ ์๋ strategy.num_replicas_in_sync๋ก ์ป์ ์ ์์ต๋๋ค.
Step12: ํ์ฌ ์ด๋ค ๊ฒ์ด ์ง์๋ฉ๋๊น?
| ํ๋ จ API | MirroredStrategy | TPUStrategy | MultiWorkerMirroredStrategy | CentralStorageStrategy | ParameterServerStrategy |
|---------------- |--------------------- |----------------------- |----------------------------------- |----------------------------------- |--------------------------- |
| Keras API | ์ง์ | ์ง์ | ์คํ ๊ธฐ๋ฅ์ผ๋ก ์ง์ | ์คํ ๊ธฐ๋ฅ์ผ๋ก ์ง์ | 2.3 ์ดํ ์ง์ ์์ |
์์ ์ ํํ ๋ฆฌ์ผ
์์์ ์ค๋ช
ํ ์ผ๋ผ์ค ๋ถ์ฐ ํ๋ จ ๋ฐฉ๋ฒ์ ๋ํ ํํ ๋ฆฌ์ผ๊ณผ ์์ ๋ค์ ๋ชฉ๋ก์
๋๋ค.
MirroredStrategy๋ฅผ ์ฌ์ฉํ MNIST ํ๋ จ ํํ ๋ฆฌ์ผ.
ImageNet ๋ฐ์ดํฐ์ MirroredStrategy๋ฅผ ์ฌ์ฉํ ๊ณต์ ResNet50 ํ๋ จ.
ํด๋ผ์ฐ๋ TPU์์ ImageNet ๋ฐ์ดํฐ์ TPUStrategy๋ฅผ ์ฌ์ฉํ ResNet50 ํ๋ จ. ์ด ์์ ๋ ํ์ฌ ํ
์ํ๋ก 1.x ๋ฒ์ ์์๋ง ๋์ํฉ๋๋ค.
MultiWorkerMirroredStrategy๋ฅผ ์ฌ์ฉํ MNIST ํ๋ จ ํํ ๋ฆฌ์ผ.
MirroredStrategy๋ฅผ ์ฌ์ฉํ NCF ํ๋ จ.
MirroredStrategy๋ฅผ ์ฌ์ฉํ Transformer ํ๋ จ.
์ฌ์ฉ์ ์ ์ ํ๋ จ ๋ฃจํ์ ํจ๊ป tf.distribute.Strategy ์ฌ์ฉํ๊ธฐ
์ง๊ธ๊น์ง ์ดํด๋ณธ ๊ฒ์ฒ๋ผ ๊ณ ์์ค API์ ํจ๊ป tf.distribute.Strategy๋ฅผ ์ฌ์ฉํ๋ ค๋ฉด ์ฝ๋ ๋ช ์ค๋ง ๋ฐ๊พธ๋ฉด ๋์์ต๋๋ค. ์กฐ๊ธ๋ง ๋ ๋
ธ๋ ฅ์ ๋ค์ด๋ฉด ์ด๋ฐ ํ๋ ์์ํฌ๋ฅผ ์ฌ์ฉํ์ง ์๋ ์ฌ์ฉ์๋ tf.distribute.Strategy๋ฅผ ์ฌ์ฉํ ์ ์์ต๋๋ค.
ํ
์ํ๋ก๋ ๋ค์ํ ์ฉ๋๋ก ์ฌ์ฉ๋ฉ๋๋ค. ์ฐ๊ตฌ์๋ค ๊ฐ์ ์ผ๋ถ ์ฌ์ฉ์๋ค์ ๋ ๋์ ์์ ๋์ ํ๋ จ ๋ฃจํ์ ๋ํ ์ ์ด๋ฅผ ์ํฉ๋๋ค. ์ด ๋๋ฌธ์ ์ถ์ ๊ธฐ๋ ์ผ๋ผ์ค ๊ฐ์ ๊ณ ์์ค API๋ฅผ ์ฌ์ฉํ๊ธฐ ํ๋ ๊ฒฝ์ฐ๊ฐ ์์ต๋๋ค. ์๋ฅผ ๋ค์ด, GAN์ ์ฌ์ฉํ๋๋ฐ ๋งค๋ฒ ์์ฑ์(generator)์ ํ๋ณ์(discriminator) ๋จ๊ณ์ ์๋ฅผ ๋ฐ๊พธ๊ณ ์ถ์ ์ ์์ต๋๋ค. ๋น์ทํ๊ฒ, ๊ณ ์์ค API๋ ๊ฐํ ํ์ต(Reinforcement learning)์๋ ๊ทธ๋ค์ง ์ ์ ํ์ง ์์ต๋๋ค. ๊ทธ๋์ ์ด๋ฐ ์ฌ์ฉ์๋ค์ ๋ณดํต ์์ ๋ง์ ํ๋ จ ๋ฃจํ๋ฅผ ์์ฑํ๊ฒ ๋ฉ๋๋ค.
์ด ์ฌ์ฉ์๋ค์ ์ํ์ฌ, tf.distribute.Strategy ํด๋์ค๋ค์ ์ผ๋ จ์ ์ฃผ์ ๋ฉ์๋๋ค์ ์ ๊ณตํฉ๋๋ค. ์ด ๋ฉ์๋๋ค์ ์ฌ์ฉํ๋ ค๋ฉด ์ฒ์์๋ ์ฝ๋๋ฅผ ์ด๋ฆฌ์ ๋ฆฌ ์กฐ๊ธ ์ฎ๊ฒจ์ผ ํ ์ ์๊ฒ ์ง๋ง, ํ๋ฒ ์์
ํด ๋์ผ๋ฉด ์ ๋ต ์ธ์คํด์ค๋ง ๋ฐ๊ฟ์ GPU, TPU, ์ฌ๋ฌ ์ฅ๋น๋ก ์ฝ๊ฒ ๋ฐ๊ฟ๊ฐ๋ฉฐ ํ๋ จ์ ํ ์ ์์ต๋๋ค.
์์์ ์ดํด๋ณธ ์ผ๋ผ์ค ๋ชจ๋ธ์ ์ฌ์ฉํ ํ๋ จ ์์ ๋ฅผ ํตํ์ฌ ์ฌ์ฉํ๋ ๋ชจ์ต์ ๊ฐ๋จํ๊ฒ ์ดํด๋ณด๊ฒ ์ต๋๋ค.
๋จผ์ , ์ ๋ต์ ๋ฒ์(scope) ์์์ ๋ชจ๋ธ๊ณผ ์ตํฐ๋ง์ด์ ๋ฅผ ๋ง๋ญ๋๋ค. ์ด๋ ๋ชจ๋ธ์ด๋ ์ตํฐ๋ง์ด์ ๋ก ๋ง๋ค์ด์ง ๋ณ์๊ฐ ๋ฏธ๋ฌ๋ง ๋๋๋ก ๋ง๋ญ๋๋ค.
Step13: ๋ค์์ผ๋ก๋ ์
๋ ฅ ๋ฐ์ดํฐ์
์ ๋ง๋ ๋ค์, tf.distribute.Strategy.experimental_distribute_dataset ๋ฉ์๋๋ฅผ ํธ์ถํ์ฌ ์ ๋ต์ ๋ง๊ฒ ๋ฐ์ดํฐ์
์ ๋ถ๋ฐฐํฉ๋๋ค.
Step14: ๊ทธ๋ฆฌ๊ณ ๋์๋ ํ ๋จ๊ณ์ ํ๋ จ์ ์ ์ํฉ๋๋ค. ๊ทธ๋๋์ธํธ๋ฅผ ๊ณ์ฐํ๊ธฐ ์ํด tf.GradientTape๋ฅผ ์ฌ์ฉํฉ๋๋ค. ์ด ๊ทธ๋๋์ธํธ๋ฅผ ์ ์ฉํ์ฌ ์ฐ๋ฆฌ ๋ชจ๋ธ์ ๋ณ์๋ฅผ ๊ฐฑ์ ํ๊ธฐ ์ํด์๋ ์ตํฐ๋ง์ด์ ๋ฅผ ์ฌ์ฉํฉ๋๋ค. ๋ถ์ฐ ํ๋ จ์ ์ํ์ฌ ์ด ํ๋ จ ์์
์ step_fn ํจ์ ์์ ๊ตฌํํฉ๋๋ค. ๊ทธ๋ฆฌ๊ณ step_fn์ ์์์ ๋ง๋ dist_dataset์์ ์ป์ ์
๋ ฅ ๋ฐ์ดํฐ์ ํจ๊ป tf.distrbute.Strategy.experimental_run_v2๋ฉ์๋๋ก ์ ๋ฌํฉ๋๋ค.
Step15: ์ ์ฝ๋์์ ๋ช ๊ฐ์ง ๋ ์ง์ด๋ณผ ์ ์ด ์์ต๋๋ค.
์์ค(loss)์ ๊ณ์ฐํ๊ธฐ ์ํ์ฌ tf.nn.softmax_cross_entropy_with_logits๋ฅผ ์ฌ์ฉํ์์ต๋๋ค. ๊ทธ๋ฆฌ๊ณ ์์ค์ ํฉ์ ์ ์ฒด ๋ฐฐ์น ํฌ๊ธฐ๋ก ๋๋๋ ๋ถ๋ถ์ด ์ค์ํฉ๋๋ค. ์ด๋ ๋ชจ๋ ๋ณต์ ๋ ํ๋ จ์ด ๋์์ ์ด๋ฃจ์ด์ง๊ณ ์๊ณ , ๊ฐ ๋จ๊ณ์ ํ๋ จ์ด ์ด๋ฃจ์ด์ง๋ ์
๋ ฅ์ ์๋ ์ ์ฒด ๋ฐฐ์น ํฌ๊ธฐ์ ๊ฐ๊ธฐ ๋๋ฌธ์
๋๋ค. ๋ฐ๋ผ์ ์์ค ๊ฐ์ ๊ฐ ๋ณต์ ๋ ์์
๋ด์ ๋ฐฐ์น ํฌ๊ธฐ๊ฐ ์๋๋ผ ์ ์ฒด ๋ฐฐ์น ํฌ๊ธฐ๋ก ๋๋์ด์ผ ๋ง์ต๋๋ค.
tf.distribute.Strategy.mirrored_strategy.run์์ ๋ฐํ๋ ๊ฒฐ๊ณผ๋ฅผ ๋ชจ์ผ๊ธฐ ์ํ์ฌ tf.distribute.Strategy.reduce API๋ฅผ ์ฌ์ฉํ์์ต๋๋ค. tf.distribute.Strategy.mirrored_strategy.run๋ ์ ๋ต์ ๊ฐ ๋ณต์ ๋ณธ์์ ์ป์ ๊ฒฐ๊ณผ๋ฅผ ๋ฐํํฉ๋๋ค. ๊ทธ๋ฆฌ๊ณ ์ด ๊ฒฐ๊ณผ๋ฅผ ์ฌ์ฉํ๋ ๋ฐฉ๋ฒ์ ์ฌ๋ฌ ๊ฐ์ง๊ฐ ์์ต๋๋ค. ์ข
ํฉํ ๊ฒฐ๊ณผ๋ฅผ ์ป๊ธฐ ์ํ์ฌ reduce ํจ์๋ฅผ ์ฌ์ฉํ ์ ์์ต๋๋ค. tf.distribute.Strategy.experimental_local_results ๋ฉ์๋๋ก ๊ฐ ๋ณต์ ๋ณธ์์ ์ป์ ๊ฒฐ๊ณผ์ ๊ฐ๋ค ๋ชฉ๋ก์ ์ป์ ์๋ ์์ต๋๋ค.
๋ถ์ฐ ์ ๋ต ๋ฒ์ ์์์ apply_gradients ๋ฉ์๋๊ฐ ํธ์ถ๋๋ฉด, ํ์์๋ ๋์์ด ๋ค๋ฆ
๋๋ค. ๊ตฌ์ฒด์ ์ผ๋ก๋ ๋๊ธฐํ๋ ํ๋ จ ์ค ๋ณ๋ ฌํ๋ ๊ฐ ์์
์์ ๊ทธ๋๋์ธํธ๋ฅผ ์ ์ฉํ๊ธฐ ์ ์, ๋ชจ๋ ๋ณต์ ๋ณธ์ ๊ทธ๋๋์ธํธ๋ฅผ ๋ํด์ง๋๋ค.
ํ๋ จ ๋จ๊ณ๋ฅผ ์ ์ํ์ผ๋ฏ๋ก, ๋ง์ง๋ง์ผ๋ก๋ dist_dataset์ ๋ํ์ฌ ํ๋ จ์ ๋ฐ๋ณตํฉ๋๋ค.
Step16: ์ ์์์๋ dist_dataset์ ์ฐจ๋ก๋๋ก ์ฒ๋ฆฌํ๋ฉฐ ํ๋ จ ์
๋ ฅ ๋ฐ์ดํฐ๋ฅผ ์ป์์ต๋๋ค. tf.distribute.Strategy.make_experimental_numpy_dataset๋ฅผ ์ฌ์ฉํ๋ฉด ๋ํ์ด ์
๋ ฅ๋ ์ธ ์ ์์ต๋๋ค. tf.distribute.Strategy.experimental_distribute_dataset ํจ์๋ฅผ ํธ์ถํ๊ธฐ ์ ์ ์ด API๋ก ๋ฐ์ดํฐ์
์ ๋ง๋ค๋ฉด ๋ฉ๋๋ค.
๋ฐ์ดํฐ๋ฅผ ์ฐจ๋ก๋๋ก ์ฒ๋ฆฌํ๋ ๋ ๋ค๋ฅธ ๋ฐฉ๋ฒ์ ๋ช
์์ ์ผ๋ก ๋ฐ๋ณต์(iterator)๋ฅผ ์ฌ์ฉํ๋ ๊ฒ์
๋๋ค. ์ ์ฒด ๋ฐ์ดํฐ๋ฅผ ๋ชจ๋ ์ฌ์ฉํ์ง ์๊ณ , ์ ํด์ง ํ์๋งํผ๋ง ํ๋ จ์ ํ๊ณ ์ถ์ ๋ ์ ์ฉํฉ๋๋ค. ๋ฐ๋ณต์๋ฅผ ๋ง๋ค๊ณ ๋ช
์์ ์ผ๋ก next๋ฅผ ํธ์ถํ์ฌ ๋ค์ ์
๋ ฅ ๋ฐ์ดํฐ๋ฅผ ์ป๋๋ก ํ๋ฉด ๋ฉ๋๋ค. ์ ๋ฃจํ ์ฝ๋๋ฅผ ๋ฐ๊ฟ๋ณด๋ฉด ๋ค์๊ณผ ๊ฐ์ต๋๋ค.
Step17: tf.distribute.Strategy API๋ฅผ ์ฌ์ฉํ์ฌ ์ฌ์ฉ์ ์ ์ ํ๋ จ ๋ฃจํ๋ฅผ ๋ถ์ฐ ์ฒ๋ฆฌ ํ๋ ๊ฐ์ฅ ๋จ์ํ ๊ฒฝ์ฐ๋ฅผ ์ดํด๋ณด์์ต๋๋ค. ํ์ฌ API๋ฅผ ๊ฐ์ ํ๋ ๊ณผ์ ์ค์ ์์ต๋๋ค. ์ด API๋ฅผ ์ฌ์ฉํ๋ ค๋ฉด ์ฌ์ฉ์ ์ชฝ์์ ๊ฝค ๋ง์ ์์
์ ํด์ผ ํ๋ฏ๋ก, ๋์ค์ ๋ณ๋์ ๋ ์์ธํ ๊ฐ์ด๋๋ก ์ค๋ช
ํ๋๋ก ํ๊ฒ ์ต๋๋ค.
ํ์ฌ ์ด๋ค ๊ฒ์ด ์ง์๋ฉ๋๊น?
| ํ๋ จ API | MirroredStrategy | TPUStrategy | MultiWorkerMirroredStrategy | CentralStorageStrategy | ParameterServerStrategy |
|
Step18: ์ ์์ ์์๋ ๊ธฐ๋ณธ์ผ๋ก ์ ๊ณต๋๋ ์ถ์ ๊ธฐ๋ฅผ ์ฌ์ฉํ์์ง๋ง, ์ง์ ๋ง๋ ์ถ์ ๊ธฐ๋ ๋์ผํ ์ฝ๋๋ก ์ฌ์ฉํ ์ ์์ต๋๋ค. train_distribute๊ฐ ํ๋ จ์ ์ด๋ป๊ฒ ๋ถ์ฐ์ํฌ์ง๋ฅผ ์ง์ ํ๊ณ , eval_distribute๊ฐ ํ๊ฐ๋ฅผ ์ด๋ป๊ฒ ๋ถ์ฐ์ํฌ์ง๋ฅผ ์ง์ ํฉ๋๋ค. ์ผ๋ผ์ค์ ํจ๊ป ์ฌ์ฉํ ๋ ํ๋ จ๊ณผ ํ๊ฐ์ ๋์ผํ ๋ถ์ฐ ์ ๋ต์ ์ฌ์ฉํ๋ ๊ฒ๊ณผ๋ ์ฐจ์ด๊ฐ ์์ต๋๋ค.
๋ค์๊ณผ ๊ฐ์ด ์
๋ ฅ ํจ์๋ฅผ ์ง์ ํ๋ฉด ์ถ์ ๊ธฐ์ ํ๋ จ๊ณผ ํ๊ฐ๋ฅผ ํ ์ ์์ต๋๋ค. | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2018 The TensorFlow Authors.
End of explanation
# Import the TensorFlow package
import tensorflow as tf
Explanation: ํ
์ํ๋ก๋ก ๋ถ์ฐ ํ๋ จํ๊ธฐ
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/guide/distributed_training"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />TensorFlow.org์์ ๋ณด๊ธฐ</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ko/guide/distributed_training.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />๊ตฌ๊ธ ์ฝ๋ฉ(Colab)์์ ์คํํ๊ธฐ</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ko/guide/distributed_training.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />๊นํ๋ธ(GitHub) ์์ค ๋ณด๊ธฐ</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ko/guide/distributed_training.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
Note: ์ด ๋ฌธ์๋ ํ
์ํ๋ก ์ปค๋ฎค๋ํฐ์์ ๋ฒ์ญํ์ต๋๋ค. ์ปค๋ฎค๋ํฐ ๋ฒ์ญ ํ๋์ ํน์ฑ์ ์ ํํ ๋ฒ์ญ๊ณผ ์ต์ ๋ด์ฉ์ ๋ฐ์ํ๊ธฐ ์ํด ๋
ธ๋ ฅํจ์๋
๋ถ๊ตฌํ๊ณ ๊ณต์ ์๋ฌธ ๋ฌธ์์ ๋ด์ฉ๊ณผ ์ผ์นํ์ง ์์ ์ ์์ต๋๋ค.
์ด ๋ฒ์ญ์ ๊ฐ์ ํ ๋ถ๋ถ์ด ์๋ค๋ฉด
tensorflow/docs-l10n ๊นํ ์ ์ฅ์๋ก ํ ๋ฆฌํ์คํธ๋ฅผ ๋ณด๋ด์ฃผ์๊ธฐ ๋ฐ๋๋๋ค.
๋ฌธ์ ๋ฒ์ญ์ด๋ ๋ฆฌ๋ทฐ์ ์ฐธ์ฌํ๋ ค๋ฉด
[email protected]๋ก
๋ฉ์ผ์ ๋ณด๋ด์ฃผ์๊ธฐ ๋ฐ๋๋๋ค.
๊ฐ์
tf.distribute.Strategy๋ ํ๋ จ์ ์ฌ๋ฌ GPU ๋๋ ์ฌ๋ฌ ์ฅ๋น, ์ฌ๋ฌ TPU๋ก ๋๋์ด ์ฒ๋ฆฌํ๊ธฐ ์ํ ํ
์ํ๋ก API์
๋๋ค. ์ด API๋ฅผ ์ฌ์ฉํ๋ฉด ๊ธฐ์กด์ ๋ชจ๋ธ์ด๋ ํ๋ จ ์ฝ๋๋ฅผ ์กฐ๊ธ๋ง ๊ณ ์ณ์ ๋ถ์ฐ์ฒ๋ฆฌ๋ฅผ ํ ์ ์์ต๋๋ค.
tf.distribute.Strategy๋ ๋ค์์ ์ฃผ์ ๋ชฉํ๋ก ์ค๊ณํ์์ต๋๋ค.
์ฌ์ฉํ๊ธฐ ์ฝ๊ณ , ์ฐ๊ตฌ์, ๊ธฐ๊ณ ํ์ต ์์ง๋์ด ๋ฑ ์ฌ๋ฌ ์ฌ์ฉ์ ์ธต์ ์ง์ํ ๊ฒ.
๊ทธ๋๋ก ์ ์ฉํ๊ธฐ๋ง ํ๋ฉด ์ข์ ์ฑ๋ฅ์ ๋ณด์ผ ๊ฒ.
์ ๋ต๋ค์ ์ฝ๊ฒ ๊ฐ์ ๋ผ์ธ ์ ์์ ๊ฒ.
tf.distribute.Strategy๋ ํ
์ํ๋ก์ ๊ณ ์์ค API์ธ tf.keras ๋ฐ tf.estimator์ ํจ๊ป ์ฌ์ฉํ ์ ์์ต๋๋ค. ์ฝ๋ ํ๋ ์ค๋ง ์ถ๊ฐํ๋ฉด ๋ฉ๋๋ค. ์ฌ์ฉ์ ์ ์ ํ๋ จ ๋ฃจํ(๊ทธ๋ฆฌ๊ณ ํ
์ํ๋ก๋ฅผ ์ฌ์ฉํ ๋ชจ๋ ๊ณ์ฐ ์์
)์ ํจ๊ป ์ฌ์ฉํ ์ ์๋ API๋ ์ ๊ณตํฉ๋๋ค.
ํ
์ํ๋ก 2.0์์๋ ์ฌ์ฉ์๊ฐ ํ๋ก๊ทธ๋จ์ ์ฆ์ ์คํ(eager execution)ํ ์๋ ์๊ณ , tf.function์ ์ฌ์ฉํ์ฌ ๊ทธ๋ํ์์ ์คํํ ์๋ ์์ต๋๋ค. tf.distribute.Strategy๋ ๋ ๊ฐ์ง ์คํ ๋ฐฉ์์ ๋ชจ๋ ์ง์ํ๋ ค๊ณ ํฉ๋๋ค. ์ด ๊ฐ์ด๋์์๋ ๋๋ถ๋ถ์ ๊ฒฝ์ฐ ํ๋ จ์ ๋ํ์ฌ ์ด์ผ๊ธฐํ๊ฒ ์ง๋ง, ์ด API ์์ฒด๋ ์ฌ๋ฌ ํ๊ฒฝ์์ ํ๊ฐ๋ ์์ธก์ ๋ถ์ฐ ์ฒ๋ฆฌํ๊ธฐ ์ํ์ฌ ์ฌ์ฉํ ์๋ ์๋ค๋ ์ ์ ์ฐธ๊ณ ํ์ญ์์ค.
์ ์ ํ ๋ณด์๊ฒ ์ง๋ง ์ฝ๋๋ฅผ ์ฝ๊ฐ๋ง ๋ฐ๊พธ๋ฉด tf.distribute.Strategy๋ฅผ ์ฌ์ฉํ ์ ์์ต๋๋ค. ๋ณ์, ์ธต, ๋ชจ๋ธ, ์ตํฐ๋ง์ด์ , ์งํ, ์๋จธ๋ฆฌ(summary), ์ฒดํฌํฌ์ธํธ ๋ฑ ํ
์ํ๋ก๋ฅผ ๊ตฌ์ฑํ๊ณ ์๋ ๊ธฐ๋ฐ ์์๋ค์ ์ ๋ต(Strategy)์ ์ดํดํ๊ณ ์ฒ๋ฆฌํ ์ ์๋๋ก ์์ ํ๊ธฐ ๋๋ฌธ์
๋๋ค.
์ด ๊ฐ์ด๋์์๋ ๋ค์ํ ํ์์ ์ ๋ต์ ๋ํด์, ๊ทธ๋ฆฌ๊ณ ์ฌ๋ฌ ๊ฐ์ง ์ํฉ์์ ์ด๋ค์ ์ด๋ป๊ฒ ์ฌ์ฉํด์ผ ํ๋์ง ์์๋ณด๊ฒ ์ต๋๋ค.
End of explanation
mirrored_strategy = tf.distribute.MirroredStrategy()
Explanation: ์ ๋ต์ ์ข
๋ฅ
tf.distribute.Strategy๋ ์๋ก ๋ค๋ฅธ ๋ค์ํ ์ฌ์ฉ ํํ๋ฅผ ์์ฐ๋ฅด๋ ค๊ณ ํฉ๋๋ค. ๋ช ๊ฐ์ง ์กฐํฉ์ ํ์ฌ ์ง์ํ์ง๋ง, ์ถํ์ ์ถ๊ฐ๋ ์ ๋ต๋ค๋ ์์ต๋๋ค. ์ด๋ค ์ค ๋ช ๊ฐ์ง๋ฅผ ์ดํด๋ณด๊ฒ ์ต๋๋ค.
๋๊ธฐ ํ๋ จ ๋ ๋น๋๊ธฐ ํ๋ จ: ๋ถ์ฐ ํ๋ จ์ ํ ๋ ๋ฐ์ดํฐ๋ฅผ ๋ณ๋ ฌ๋ก ์ฒ๋ฆฌํ๋ ๋ฐฉ๋ฒ์ ํฌ๊ฒ ๋ ๊ฐ์ง๊ฐ ์์ต๋๋ค. ๋๊ธฐ ํ๋ จ์ ํ ๋๋ ๋ชจ๋ ์์ปค(worker)๊ฐ ์
๋ ฅ ๋ฐ์ดํฐ๋ฅผ ๋๋์ด ๊ฐ๊ณ ๋์์ ํ๋ จํฉ๋๋ค. ๊ทธ๋ฆฌ๊ณ ๊ฐ ๋จ๊ณ๋ง๋ค ๊ทธ๋๋์ธํธ(gradient)๋ฅผ ๋ชจ์๋๋ค. ๋น๋๊ธฐ ํ๋ จ์์๋ ๋ชจ๋ ์์ปค๊ฐ ๋
๋ฆฝ์ ์ผ๋ก ์
๋ ฅ ๋ฐ์ดํฐ๋ฅผ ์ฌ์ฉํด ํ๋ จํ๊ณ ๊ฐ๊ฐ ๋น๋๊ธฐ์ ์ผ๋ก ๋ณ์๋ค์ ๊ฐฑ์ ํฉ๋๋ค. ์ผ๋ฐ์ ์ผ๋ก ๋๊ธฐ ํ๋ จ์ ์ฌ ๋ฆฌ๋์ค(all-reduce)๋ฐฉ์์ผ๋ก ๊ตฌํํ๊ณ , ๋น๋๊ธฐ ํ๋ จ์ ํ๋ผ๋ฏธํฐ ์๋ฒ ๊ตฌ์กฐ๋ฅผ ์ฌ์ฉํฉ๋๋ค.
ํ๋์จ์ด ํ๋ซํผ: ํ ์ฅ๋น์ ์๋ ๋ค์ค GPU๋ก ๋๋์ด ํ๋ จํ ์๋ ์๊ณ , ๋คํธ์ํฌ๋ก ์ฐ๊ฒฐ๋ (GPU๊ฐ ์๊ฑฐ๋ ์ฌ๋ฌ ๊ฐ์ GPU๋ฅผ ๊ฐ์ง) ์ฌ๋ฌ ์ฅ๋น๋ก ๋๋์ด์, ๋ ํน์ ํด๋ผ์ฐ๋ TPU์์ ํ๋ จํ ์๋ ์์ต๋๋ค.
์ด๋ฐ ์ฌ์ฉ ํํ๋ค์ ์ํ์ฌ, ํ์ฌ 6๊ฐ์ง ์ ๋ต์ ์ฌ์ฉํ ์ ์์ต๋๋ค. ์ดํ ๋ด์ฉ์์ ํ์ฌ TF 2.2์์ ์ํฉ๋ง๋ค ์ด๋ค ์ ๋ต์ ์ง์ํ๋์ง ์ด์ผ๊ธฐํ๊ฒ ์ต๋๋ค. ์ผ๋จ ๊ฐ๋จํ ๊ฐ์๋ ๋ค์๊ณผ ๊ฐ์ต๋๋ค.
| ํ๋ จ API | MirroredStrategy | TPUStrategy | MultiWorkerMirroredStrategy | CentralStorageStrategy | ParameterServerStrategy |
|:-------------------------- |:------------------- |:--------------------- |:--------------------------------- |:--------------------------------- |:-------------------------- |
| Keras API | ์ง์ | ์ง์ | ์คํ ๊ธฐ๋ฅ์ผ๋ก ์ง์ | ์คํ ๊ธฐ๋ฅ์ผ๋ก ์ง์ | 2.3 ์ดํ ์ง์ ์์ |
| ์ฌ์ฉ์ ์ ์ ํ๋ จ ๋ฃจํ | ์ง์ | ์ง์ | ์คํ ๊ธฐ๋ฅ์ผ๋ก ์ง์ | ์คํ ๊ธฐ๋ฅ์ผ๋ก ์ง์ | 2.3 ์ดํ ์ง์ ์์ |
| Estimator API | ์ ํ์ ์ผ๋ก ์ง์ | ๋ฏธ์ง์ | ์ ํ์ ์ผ๋ก ์ง์ | ์ ํ์ ์ผ๋ก ์ง์ | ์ ํ์ ์ผ๋ก ์ง์ |
MirroredStrategy
tf.distribute.MirroredStrategy๋ ์ฅ๋น ํ๋์์ ๋ค์ค GPU๋ฅผ ์ด์ฉํ ๋๊ธฐ ๋ถ์ฐ ํ๋ จ์ ์ง์ํฉ๋๋ค. ๊ฐ๊ฐ์ GPU ์ฅ์น๋ง๋ค ๋ณต์ ๋ณธ์ด ๋ง๋ค์ด์ง๋๋ค. ๋ชจ๋ธ์ ๋ชจ๋ ๋ณ์๊ฐ ๋ณต์ ๋ณธ๋ง๋ค ๋ฏธ๋ฌ๋ง ๋ฉ๋๋ค. ์ด ๋ฏธ๋ฌ๋ง๋ ๋ณ์๋ค์ ํ๋์ ๊ฐ์์ ๋ณ์์ ๋์๋๋๋ฐ, ์ด๋ฅผ MirroredVariable๋ผ๊ณ ํฉ๋๋ค. ์ด ๋ณ์๋ค์ ๋์ผํ ๋ณ๊ฒฝ์ฌํญ์ด ํจ๊ป ์ ์ฉ๋๋ฏ๋ก ๋ชจ๋ ๊ฐ์ ๊ฐ์ ์ ์งํฉ๋๋ค.
์ฌ๋ฌ ์ฅ์น์ ๋ณ์์ ๋ณ๊ฒฝ์ฌํญ์ ์ ๋ฌํ๊ธฐ ์ํ์ฌ ํจ์จ์ ์ธ ์ฌ ๋ฆฌ๋์ค ์๊ณ ๋ฆฌ์ฆ์ ์ฌ์ฉํฉ๋๋ค. ์ฌ ๋ฆฌ๋์ค ์๊ณ ๋ฆฌ์ฆ์ ๋ชจ๋ ์ฅ์น์ ๊ฑธ์ณ ํ
์๋ฅผ ๋ชจ์ ๋ค์, ๊ทธ ํฉ์ ๊ตฌํ์ฌ ๋ค์ ๊ฐ ์ฅ๋น์ ์ ๊ณตํฉ๋๋ค. ์ด ํตํฉ๋ ์๊ณ ๋ฆฌ์ฆ์ ๋งค์ฐ ํจ์จ์ ์ด์ด์ ๋๊ธฐํ์ ๋ถ๋ด์ ๋ง์ด ๋์ด๋ผ ์ ์์ต๋๋ค. ์ฅ์น ๊ฐ์ ์ฌ์ฉ ๊ฐ๋ฅํ ํต์ ๋ฐฉ๋ฒ์ ๋ฐ๋ผ ๋ค์ํ ์ฌ ๋ฆฌ๋์ค ์๊ณ ๋ฆฌ์ฆ๊ณผ ๊ตฌํ์ด ์์ต๋๋ค. ๊ธฐ๋ณธ๊ฐ์ผ๋ก๋ NVIDIA NCCL์ ์ฌ ๋ฆฌ๋์ค ๊ตฌํ์ผ๋ก ์ฌ์ฉํฉ๋๋ค. ๋ํ ์ ๊ณต๋๋ ๋ค๋ฅธ ๋ช ๊ฐ์ง ๋ฐฉ๋ฒ ์ค์ ์ ํํ๊ฑฐ๋, ์ง์ ๋ง๋ค ์๋ ์์ต๋๋ค.
MirroredStrategy๋ฅผ ๋ง๋๋ ๊ฐ์ฅ ์ฌ์ด ๋ฐฉ๋ฒ์ ๋ค์๊ณผ ๊ฐ์ต๋๋ค.
End of explanation
mirrored_strategy = tf.distribute.MirroredStrategy(devices=["/gpu:0", "/gpu:1"])
Explanation: MirroredStrategy ์ธ์คํด์ค๊ฐ ์๊ฒผ์ต๋๋ค. ํ
์ํ๋ก๊ฐ ์ธ์ํ ๋ชจ๋ GPU๋ฅผ ์ฌ์ฉํ๊ณ , ์ฅ์น ๊ฐ ํต์ ์๋ NCCL์ ์ฌ์ฉํ ๊ฒ์
๋๋ค.
์ฅ๋น์ GPU ์ค ์ผ๋ถ๋ง ์ฌ์ฉํ๊ณ ์ถ๋ค๋ฉด, ๋ค์๊ณผ ๊ฐ์ด ํ๋ฉด ๋ฉ๋๋ค.
End of explanation
mirrored_strategy = tf.distribute.MirroredStrategy(
cross_device_ops=tf.distribute.HierarchicalCopyAllReduce())
Explanation: ์ฅ์น ๊ฐ ํต์ ๋ฐฉ๋ฒ์ ๋ฐ๊พธ๊ณ ์ถ๋ค๋ฉด, cross_device_ops ์ธ์์ tf.distribute.CrossDeviceOps ํ์
์ ์ธ์คํด์ค๋ฅผ ๋๊ธฐ๋ฉด ๋ฉ๋๋ค. ํ์ฌ ๊ธฐ๋ณธ๊ฐ์ธ tf.distribute.NcclAllReduce ์ด์ธ์ tf.distribute.HierarchicalCopyAllReduce์ tf.distribute.ReductionToOneDevice ๋ ๊ฐ์ง ์ถ๊ฐ ์ต์
์ ์ ๊ณตํฉ๋๋ค.
End of explanation
central_storage_strategy = tf.distribute.experimental.CentralStorageStrategy()
Explanation: CentralStorageStrategy
tf.distribute.experimental.CentralStorageStrategy๋ ๋๊ธฐ ํ๋ จ์ ํฉ๋๋ค. ํ์ง๋ง ๋ณ์๋ฅผ ๋ฏธ๋ฌ๋งํ์ง ์๊ณ , CPU์์ ๊ด๋ฆฌํฉ๋๋ค. ์์
์ ๋ชจ๋ ๋ก์ปฌ GPU๋ค๋ก ๋ณต์ ๋ฉ๋๋ค. ๋จ, ๋ง์ฝ GPU๊ฐ ํ๋๋ฐ์ ์๋ค๋ฉด ๋ชจ๋ ๋ณ์์ ์์
์ด ๊ทธ GPU์ ๋ฐฐ์น๋ฉ๋๋ค.
๋ค์๊ณผ ๊ฐ์ด CentralStorageStrategy ์ธ์คํด์ค๋ฅผ ๋ง๋์ญ์์ค.
End of explanation
multiworker_strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()
Explanation: CentralStorageStrategy ์ธ์คํด์ค๊ฐ ๋ง๋ค์ด์ก์ต๋๋ค. ์ธ์ํ ๋ชจ๋ GPU์ CPU๋ฅผ ์ฌ์ฉํฉ๋๋ค. ๊ฐ ๋ณต์ ๋ณธ์ ๋ณ์ ๋ณ๊ฒฝ์ฌํญ์ ๋ชจ๋ ์์ง๋ ํ ๋ณ์์ ์ ์ฉ๋ฉ๋๋ค.
Note: ์ด ์ ๋ต์ ์์ง ๊ฐ์ ์ค์ด๊ณ ๋ ๋ง์ ๊ฒฝ์ฐ์ ์ธ ์ ์๋๋ก ๋ง๋ค๊ณ ์๊ธฐ ๋๋ฌธ์, ์คํ ๊ธฐ๋ฅ์ผ๋ก ์ง์๋ฉ๋๋ค. ๋ฐ๋ผ์ ๋ค์์ API๊ฐ ๋ฐ๋ ์ ์์์ ์ ๋
ํ์ญ์์ค.
MultiWorkerMirroredStrategy
tf.distribute.experimental.MultiWorkerMirroredStrategy์ MirroredStrategy์ ๋งค์ฐ ๋น์ทํฉ๋๋ค. ๋ค์ค ์์ปค๋ฅผ ์ด์ฉํ์ฌ ๋๊ธฐ ๋ถ์ฐ ํ๋ จ์ ํฉ๋๋ค. ๊ฐ ์์ปค๋ ์ฌ๋ฌ ๊ฐ์ GPU๋ฅผ ์ฌ์ฉํ ์ ์์ต๋๋ค. MirroredStrategy์ฒ๋ผ ๋ชจ๋ธ์ ์๋ ๋ชจ๋ ๋ณ์์ ๋ณต์ฌ๋ณธ์ ๋ชจ๋ ์์ปค์ ๊ฐ ์ฅ์น์ ๋ง๋ญ๋๋ค.
๋ค์ค ์์ปค(multi-worker)๋ค ์ฌ์ด์์๋ ์ฌ ๋ฆฌ๋์ค(all-reduce) ํต์ ๋ฐฉ๋ฒ์ผ๋ก CollectiveOps๋ฅผ ์ฌ์ฉํ์ฌ ๋ณ์๋ค์ ๊ฐ์ ๊ฐ์ผ๋ก ์ ์งํฉ๋๋ค. ์์ง ์ฐ์ฐ(collective op)์ ํ
์ํ๋ก ๊ทธ๋ํ์ ์ํ๋ ์ฐ์ฐ ์ค ํ๋์
๋๋ค. ์ด ์ฐ์ฐ์ ํ๋์จ์ด๋ ๋คํธ์ํฌ ๊ตฌ์ฑ, ํ
์ ํฌ๊ธฐ์ ๋ฐ๋ผ ํ
์ํ๋ก ๋ฐํ์์ด ์ง์ํ๋ ์ฌ ๋ฆฌ๋์ค ์๊ณ ๋ฆฌ์ฆ์ ์๋์ผ๋ก ์ ํํฉ๋๋ค.
์ฌ๊ธฐ์ ์ถ๊ฐ ์ฑ๋ฅ ์ต์ ํ๋ ๊ตฌํํ๊ณ ์์ต๋๋ค. ์๋ฅผ ๋ค์ด ์์ ํ
์๋ค์ ์ฌ๋ฌ ์ฌ ๋ฆฌ๋์ค ์์
์ ํฐ ํ
์๋ค์ ๋ ์ ์ ์ฌ ๋ฆฌ๋์ค ์์
์ผ๋ก ๋ฐ๊พธ๋ ์ ์ ์ต์ ํ ๊ธฐ๋ฅ์ด ์์ต๋๋ค. ๋ฟ๋ง์๋๋ผ ํ๋ฌ๊ทธ์ธ ๊ตฌ์กฐ๋ฅผ ๊ฐ๋๋ก ์ค๊ณํ์์ต๋๋ค. ๋ฐ๋ผ์ ์ถํ์๋ ์ฌ์ฉ์๊ฐ ์์ ์ ํ๋์จ์ด์ ๋ ์ต์ ํ๋ ์๊ณ ๋ฆฌ์ฆ์ ์ฌ์ฉํ ์๋ ์์ ๊ฒ์
๋๋ค. ์ฐธ๊ณ ๋ก ์ด ์์ง ์ฐ์ฐ์ ์ฌ ๋ฆฌ๋์ค ์ธ์ ๋ธ๋ก๋์บ์คํธ(broadcast)๋ ์ ์ฒด ์์ง(all-gather)๋ ๊ตฌํํ๊ณ ์์ต๋๋ค.
MultiWorkerMirroredStrategy๋ฅผ ๋ง๋๋ ๊ฐ์ฅ ์ฌ์ด ๋ฐฉ๋ฒ์ ๋ค์๊ณผ ๊ฐ์ต๋๋ค.
End of explanation
multiworker_strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy(
tf.distribute.experimental.CollectiveCommunication.NCCL)
Explanation: MultiWorkerMirroredStrategy์ ์ฌ์ฉํ ์ ์๋ ์์ง ์ฐ์ฐ ๊ตฌํ์ ํ์ฌ ๋ ๊ฐ์ง์
๋๋ค. CollectiveCommunication.RING๋ gRPC๋ฅผ ์ฌ์ฉํ ๋ง ๋คํธ์ํฌ ๊ธฐ๋ฐ์ ์์ง ์ฐ์ฐ์
๋๋ค. CollectiveCommunication.NCCL๋ Nvidia์ NCCL์ ์ฌ์ฉํ์ฌ ์์ง ์ฐ์ฐ์ ๊ตฌํํ ๊ฒ์
๋๋ค. CollectiveCommunication.AUTO๋ก ์ค์ ํ๋ฉด ๋ฐํ์์ด ์์์ ๊ตฌํ์ ๊ณ ๋ฆ
๋๋ค. ์ต์ ์ ์์ง ์ฐ์ฐ ๊ตฌํ์ GPU์ ์์ ์ข
๋ฅ, ํด๋ฌ์คํฐ์ ๋คํธ์ํฌ ์ฐ๊ฒฐ ๋ฑ์ ๋ฐ๋ผ ๋ค๋ฅผ ์ ์์ต๋๋ค. ์๋ฅผ ๋ค์ด ๋ค์๊ณผ ๊ฐ์ด ์ง์ ํ ์ ์์ต๋๋ค.
End of explanation
mirrored_strategy = tf.distribute.MirroredStrategy()
with mirrored_strategy.scope():
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(1,))])
model.compile(loss='mse', optimizer='sgd')
Explanation: ๋ค์ค GPU๋ฅผ ์ฌ์ฉํ๋ ๊ฒ๊ณผ ๋น๊ตํด์ ๋ค์ค ์์ปค๋ฅผ ์ฌ์ฉํ๋ ๊ฒ์ ๊ฐ์ฅ ํฐ ์ฐจ์ด์ ์ ๋ค์ค ์์ปค์ ๋ํ ์ค์ ๋ถ๋ถ์
๋๋ค. ํด๋ฌ์คํฐ๋ฅผ ๊ตฌ์ฑํ๋ ๊ฐ ์์ปค์ "TF_CONFIG" ํ๊ฒฝ๋ณ์๋ฅผ ์ฌ์ฉํ์ฌ ํด๋ฌ์คํฐ ์ค์ ์ ํ๋ ๊ฒ์ด ํ
์ํ๋ก์ ํ์ค์ ์ธ ๋ฐฉ๋ฒ์
๋๋ค. ์๋์ชฝ "TF_CONFIG" ํญ๋ชฉ์์ ์ด๋ป๊ฒ ํ๋์ง ์์ธํ ์ดํด๋ณด๊ฒ ์ต๋๋ค.
Note: ์ด ์ ๋ต์ ์์ง ๊ฐ์ ์ค์ด๊ณ ๋ ๋ง์ ๊ฒฝ์ฐ์ ์ธ ์ ์๋๋ก ๋ง๋ค๊ณ ์๊ธฐ ๋๋ฌธ์, ์คํ ๊ธฐ๋ฅ์ผ๋ก ์ง์๋ฉ๋๋ค. ๋ฐ๋ผ์ ๋์ค์ API๊ฐ ๋ฐ๋ ์ ์์์ ์ ๋
ํ์ญ์์ค.
TPUStrategy
tf.distribute.experimental.TPUStrategy๋ ํ
์ํ๋ก ํ๋ จ์ ํ
์์ฒ๋ฆฌ์ฅ์น(Tensor Processing Unit, TPU)์์ ์ํํ๋ ์ ๋ต์
๋๋ค. TPU๋ ๊ตฌ๊ธ์ ํน๋ณํ ์ฃผ๋ฌธํ ๋ฐ๋์ฒด(ASIC)๋ก์, ๊ธฐ๊ณ ํ์ต ์์
์ ๊ทน์ ์ผ๋ก ๊ฐ์ํ๊ธฐ ์ํ์ฌ ์ค๊ณ๋์์ต๋๋ค. TPU๋ ๊ตฌ๊ธ ์ฝ๋ฉ, Tensorflow Research Cloud, Google Compute Engine์์ ์ฌ์ฉํ ์ ์์ต๋๋ค.
๋ถ์ฐ ํ๋ จ ๊ตฌ์กฐ์ ์ธก๋ฉด์์, TPUStrategy๋ MirroredStrategy์ ๋์ผํฉ๋๋ค. ๋๊ธฐ ๋ถ์ฐ ํ๋ จ ๋ฐฉ์์ ์ฌ์ฉํฉ๋๋ค. TPU๋ ์์ฒด์ ์ผ๋ก ์ฌ๋ฌ TPU ์ฝ์ด๋ค์ ๊ฑธ์น ์ฌ ๋ฆฌ๋์ค ๋ฐ ๊ธฐํ ์์ง ์ฐ์ฐ์ ํจ์จ์ ์ผ๋ก ๊ตฌํํ๊ณ ์์ต๋๋ค. ์ด ๊ตฌํ์ด TPUStrategy์ ์ฌ์ฉ๋ฉ๋๋ค.
TPUStrategy๋ฅผ ์ฌ์ฉํ๋ ๋ฐฉ๋ฒ์ ๋ค์๊ณผ ๊ฐ์ต๋๋ค.
Note: ์ฝ๋ฉ์์ ์ด ์ฝ๋๋ฅผ ์ฌ์ฉํ๋ ค๋ฉด, ์ฝ๋ฉ ๋ฐํ์์ผ๋ก TPU๋ฅผ ์ ํํด์ผ ํฉ๋๋ค. TPUStrategy๋ฅผ ์ฌ์ฉํ๋ ๋ฐฉ๋ฒ์ ๋ํ ํํ ๋ฆฌ์ผ์ ๊ณง ์ถ๊ฐํ๊ฒ ์ต๋๋ค.
cluster_resolver = tf.distribute.cluster_resolver.TPUClusterResolver(
tpu=tpu_address)
tf.config.experimental_connect_to_host(cluster_resolver.master())
tf.tpu.experimental.initialize_tpu_system(cluster_resolver)
tpu_strategy = tf.distribute.experimental.TPUStrategy(cluster_resolver)
TPUClusterResolver ์ธ์คํด์ค๋ TPU๋ฅผ ์ฐพ๋๋ก ๋์์ค๋๋ค. ์ฝ๋ฉ์์๋ ์๋ฌด๋ฐ ์ธ์๋ฅผ ์ฃผ์ง ์์๋ ๋ฉ๋๋ค. ํด๋ผ์ฐ๋ TPU์์ ์ฌ์ฉํ๋ ค๋ฉด, TPU ์์์ ์ด๋ฆ์ tpu ๋งค๊ฐ๋ณ์์ ์ง์ ํด์ผ ํฉ๋๋ค. ๋ํ TPU๋ ๊ณ์ฐํ๊ธฐ ์ ์ด๊ธฐํ(initialize)๊ฐ ํ์ํฉ๋๋ค. ์ด๊ธฐํ ์ค TPU ๋ฉ๋ชจ๋ฆฌ๊ฐ ์ง์์ ธ์ ๋ชจ๋ ์ํ ์ ๋ณด๊ฐ ์ฌ๋ผ์ง๋ฏ๋ก, ํ๋ก๊ทธ๋จ ์์์์ ๋ช
์์ ์ผ๋ก TPU ์์คํ
์ ์ด๊ธฐํ(initialize)ํด ์ฃผ์ด์ผ ํฉ๋๋ค.
Note: ์ด ์ ๋ต์ ์์ง ๊ฐ์ ์ค์ด๊ณ ๋ ๋ง์ ๊ฒฝ์ฐ์ ์ธ ์ ์๋๋ก ๋ง๋ค๊ณ ์๊ธฐ ๋๋ฌธ์, ์คํ ๊ธฐ๋ฅ์ผ๋ก ์ง์๋ฉ๋๋ค. ๋ฐ๋ผ์ ๋์ค์ API๊ฐ ๋ฐ๋ ์ ์์์ ์ ๋
ํ์ญ์์ค.
ParameterServerStrategy
tf.distribute.experimental.ParameterServerStrategy์ ์ฌ๋ฌ ์ฅ๋น์์ ํ๋ จํ ๋ ํ๋ผ๋ฏธํฐ ์๋ฒ๋ฅผ ์ฌ์ฉํฉ๋๋ค. ์ด ์ ๋ต์ ์ฌ์ฉํ๋ฉด ๋ช ๋์ ์ฅ๋น๋ ์์ปค ์ญํ ์ ํ๊ณ , ๋ช ๋๋ ํ๋ผ๋ฏธํฐ ์๋ฒ ์ญํ ์ ํ๊ฒ ๋ฉ๋๋ค. ๋ชจ๋ธ์ ๊ฐ ๋ณ์๋ ํ ํ๋ผ๋ฏธํฐ ์๋ฒ์ ํ ๋น๋ฉ๋๋ค. ๊ณ์ฐ ์์
์ ๋ชจ๋ ์์ปค์ GPU๋ค์ ๋ณต์ฌ๋ฉ๋๋ค.
์ฝ๋๋ง ๋๊ณ ๋ณด์์ ๋๋ ๋ค๋ฅธ ์ ๋ต๋ค๊ณผ ๋น์ทํฉ๋๋ค.
ps_strategy = tf.distribute.experimental.ParameterServerStrategy()
๋ค์ค ์์ปค ํ๊ฒฝ์์ ํ๋ จํ๋ ค๋ฉด, ํด๋ฌ์คํฐ์ ์ํ ํ๋ผ๋ฏธํฐ ์๋ฒ์ ์์ปค๋ฅผ "TF_CONFIG" ํ๊ฒฝ๋ณ์๋ฅผ ์ด์ฉํ์ฌ ์ค์ ํด์ผ ํฉ๋๋ค. ์์ธํ ๋ด์ฉ์ ์๋์ชฝ "TF_CONFIG"์์ ์ค๋ช
ํ๊ฒ ์ต๋๋ค.
์ฌ๊ธฐ๊น์ง ์ฌ๋ฌ ๊ฐ์ง ์ ๋ต๋ค์ด ์ด๋ป๊ฒ ๋ค๋ฅด๊ณ , ์ด๋ป๊ฒ ์ฌ์ฉํ๋์ง ์ดํด๋ณด์์ต๋๋ค. ์ด์ด์ง๋ ์ ๋ค์์๋ ํ๋ จ์ ๋ถ์ฐ์ํค๊ธฐ ์ํ์ฌ ์ด๋ค์ ์ด๋ป๊ฒ ์ฌ์ฉํด์ผ ํ๋์ง ์ดํด๋ณด๊ฒ ์ต๋๋ค. ์ด ๋ฌธ์์์๋ ๊ฐ๋จํ ์ฝ๋ ์กฐ๊ฐ๋ง ๋ณด์ฌ๋๋ฆฌ๊ฒ ์ง๋ง, ์ฒ์๋ถํฐ ๋๊น์ง ์ ์ฒด ์ฝ๋๋ฅผ ์คํํ ์ ์๋ ๋ ๊ธด ํํ ๋ฆฌ์ผ์ ๋งํฌ๋ ํจ๊ป ์๋ดํด๋๋ฆฌ๊ฒ ์ต๋๋ค.
์ผ๋ผ์ค์ ํจ๊ป tf.distribute.Strategy ์ฌ์ฉํ๊ธฐ
tf.distribute.Strategy๋ ํ
์ํ๋ก์ ์ผ๋ผ์ค API ๋ช
์ธ ๊ตฌํ์ธ tf.keras์ ํจ๊ป ์ฌ์ฉํ ์ ์์ต๋๋ค. tf.keras๋ ๋ชจ๋ธ์ ๋ง๋ค๊ณ ํ๋ จํ๋ ๊ณ ์์ค API์
๋๋ค. ๋ถ์ฐ ์ ๋ต์ tf.keras ๋ฐฑ์๋์ ํจ๊ป ์ธ ์ ์์ผ๋ฏ๋ก, ์ผ๋ผ์ค ์ฌ์ฉ์๋ค๋ ์ผ๋ผ์ค ํ๋ จ ํ๋ ์์ํฌ๋ก ์์ฑํ ํ๋ จ ์์
์ ์ฝ๊ฒ ๋ถ์ฐ ์ฒ๋ฆฌํ ์ ์๊ฒ ๋์์ต๋๋ค. ํ๋ จ ํ๋ก๊ทธ๋จ์์ ๊ณ ์ณ์ผํ๋ ๋ถ๋ถ์ ๊ฑฐ์ ์์ต๋๋ค. (1) ์ ์ ํ tf.distribute.Strategy ์ธ์คํด์ค๋ฅผ ๋ง๋ ๋ค์ (2)
์ผ๋ผ์ค ๋ชจ๋ธ์ ์์ฑ๊ณผ ์ปดํ์ผ์ strategy.scope ์์ผ๋ก ์ฎ๊ฒจ์ฃผ๊ธฐ๋ง ํ๋ฉด ๋ฉ๋๋ค. Sequential , ํจ์ํ API, ํด๋์ค ์์ ๋ฑ ๋ชจ๋ ๋ฐฉ์์ผ๋ก ๋ง๋ ์ผ๋ผ์ค ๋ชจ๋ธ์ ๋ค ์ง์ํฉ๋๋ค.
๋ค์์ ํ ๊ฐ์ ๋ฐ์ง ์ธต(dense layer)์ ๊ฐ์ง ๋งค์ฐ ๊ฐ๋จํ ์ผ๋ผ์ค ๋ชจ๋ธ์ ๋ถ์ฐ ์ ๋ต์ ์ฌ์ฉํ๋ ์ฝ๋์ ์ผ๋ถ์
๋๋ค.
End of explanation
dataset = tf.data.Dataset.from_tensors(([1.], [1.])).repeat(100).batch(10)
model.fit(dataset, epochs=2)
model.evaluate(dataset)
Explanation: ์ ์์์๋ MirroredStrategy๋ฅผ ์ฌ์ฉํ๊ธฐ ๋๋ฌธ์, ํ๋์ ์ฅ๋น๊ฐ ๋ค์ค GPU๋ฅผ ๊ฐ์ง ๊ฒฝ์ฐ์ ์ฌ์ฉํ ์ ์์ต๋๋ค. strategy.scope()๋ก ๋ถ์ฐ ์ฒ๋ฆฌํ ๋ถ๋ถ์ ์ฝ๋์ ์ง์ ํ ์ ์์ต๋๋ค. ์ด ๋ฒ์(scope) ์์์ ๋ชจ๋ธ์ ๋ง๋ค๋ฉด, ์ผ๋ฐ์ ์ธ ๋ณ์๊ฐ ์๋๋ผ ๋ฏธ๋ฌ๋ง๋ ๋ณ์๊ฐ ๋ง๋ค์ด์ง๋๋ค. ์ด ๋ฒ์ ์์์ ์ปดํ์ผ์ ํ๋ค๋ ๊ฒ์ ์์ฑ์๊ฐ ์ด ์ ๋ต์ ์ฌ์ฉํ์ฌ ๋ชจ๋ธ์ ํ๋ จํ๋ ค๊ณ ํ๋ค๋ ์๋ฏธ์
๋๋ค. ์ด๋ ๊ฒ ๊ตฌ์ฑํ๊ณ ๋์, ์ผ๋ฐ์ ์ผ๋ก ์คํํ๋ ๊ฒ์ฒ๋ผ ๋ชจ๋ธ์ fit ํจ์๋ฅผ ํธ์ถํฉ๋๋ค.
MirroredStrategy๊ฐ ๋ชจ๋ธ์ ํ๋ จ์ ์ฌ์ฉ ๊ฐ๋ฅํ GPU๋ค๋ก ๋ณต์ ํ๊ณ , ๊ทธ๋๋์ธํธ๋ค์ ์์งํ๋ ๊ฒ ๋ฑ์ ์์์ ์ฒ๋ฆฌํฉ๋๋ค.
End of explanation
import numpy as np
inputs, targets = np.ones((100, 1)), np.ones((100, 1))
model.fit(inputs, targets, epochs=2, batch_size=10)
Explanation: ์์์๋ ํ๋ จ๊ณผ ํ๊ฐ ์
๋ ฅ์ ์ํด tf.data.Dataset์ ์ฌ์ฉํ์ต๋๋ค. ๋ํ์ด(numpy) ๋ฐฐ์ด๋ ์ฌ์ฉํ ์ ์์ต๋๋ค.
End of explanation
# Compute the global batch size from the number of replicas.
BATCH_SIZE_PER_REPLICA = 5
global_batch_size = (BATCH_SIZE_PER_REPLICA *
mirrored_strategy.num_replicas_in_sync)
dataset = tf.data.Dataset.from_tensors(([1.], [1.])).repeat(100)
dataset = dataset.batch(global_batch_size)
LEARNING_RATES_BY_BATCH_SIZE = {5: 0.1, 10: 0.15}
learning_rate = LEARNING_RATES_BY_BATCH_SIZE[global_batch_size]
Explanation: ๋ฐ์ดํฐ์
์ด๋ ๋ํ์ด๋ฅผ ์ฌ์ฉํ๋ ๋ ๊ฒฝ์ฐ ๋ชจ๋ ์
๋ ฅ ๋ฐฐ์น๊ฐ ๋์ผํ ํฌ๊ธฐ๋ก ๋๋์ด์ ธ์ ์ฌ๋ฌ ๊ฐ๋ก ๋ณต์ ๋ ์์
์ ์ ๋ฌ๋ฉ๋๋ค. ์๋ฅผ ๋ค์ด, MirroredStrategy๋ฅผ 2๊ฐ์ GPU์์ ์ฌ์ฉํ๋ค๋ฉด, ํฌ๊ธฐ๊ฐ 10๊ฐ์ธ ๋ฐฐ์น(batch)๊ฐ ๋ ๊ฐ์ GPU๋ก ๋ฐฐ๋ถ๋ฉ๋๋ค. ์ฆ, ๊ฐ GPU๋ ํ ๋จ๊ณ๋ง๋ค 5๊ฐ์ ์
๋ ฅ์ ๋ฐ๊ฒ ๋ฉ๋๋ค. ๋ฐ๋ผ์ GPU๊ฐ ์ถ๊ฐ๋ ์๋ก ๊ฐ ์ํฌํฌ(epoch) ๋น ํ๋ จ ์๊ฐ์ ์ค์ด๋ค๊ฒ ๋ฉ๋๋ค. ์ผ๋ฐ์ ์ผ๋ก๋ ๊ฐ์๊ธฐ๋ฅผ ๋ ์ถ๊ฐํ ๋๋ง๋ค ๋ฐฐ์น ์ฌ์ด์ฆ๋ ๋ ํค์๋๋ค. ์ถ๊ฐํ ์ปดํจํ
์์์ ๋ ํจ๊ณผ์ ์ผ๋ก ์ฌ์ฉํ๊ธฐ ์ํด์์
๋๋ค. ๋ชจ๋ธ์ ๋ฐ๋ผ์๋ ํ์ต๋ฅ (learning rate)์ ์ฌ์กฐ์ ํด์ผ ํ ์๋ ์์ ๊ฒ์
๋๋ค. ๋ณต์ ๋ณธ์ ์๋ strategy.num_replicas_in_sync๋ก ์ป์ ์ ์์ต๋๋ค.
End of explanation
with mirrored_strategy.scope():
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(1,))])
optimizer = tf.keras.optimizers.SGD()
Explanation: ํ์ฌ ์ด๋ค ๊ฒ์ด ์ง์๋ฉ๋๊น?
| ํ๋ จ API | MirroredStrategy | TPUStrategy | MultiWorkerMirroredStrategy | CentralStorageStrategy | ParameterServerStrategy |
|---------------- |--------------------- |----------------------- |----------------------------------- |----------------------------------- |--------------------------- |
| Keras API | ์ง์ | ์ง์ | ์คํ ๊ธฐ๋ฅ์ผ๋ก ์ง์ | ์คํ ๊ธฐ๋ฅ์ผ๋ก ์ง์ | 2.3 ์ดํ ์ง์ ์์ |
์์ ์ ํํ ๋ฆฌ์ผ
์์์ ์ค๋ช
ํ ์ผ๋ผ์ค ๋ถ์ฐ ํ๋ จ ๋ฐฉ๋ฒ์ ๋ํ ํํ ๋ฆฌ์ผ๊ณผ ์์ ๋ค์ ๋ชฉ๋ก์
๋๋ค.
MirroredStrategy๋ฅผ ์ฌ์ฉํ MNIST ํ๋ จ ํํ ๋ฆฌ์ผ.
ImageNet ๋ฐ์ดํฐ์ MirroredStrategy๋ฅผ ์ฌ์ฉํ ๊ณต์ ResNet50 ํ๋ จ.
ํด๋ผ์ฐ๋ TPU์์ ImageNet ๋ฐ์ดํฐ์ TPUStrategy๋ฅผ ์ฌ์ฉํ ResNet50 ํ๋ จ. ์ด ์์ ๋ ํ์ฌ ํ
์ํ๋ก 1.x ๋ฒ์ ์์๋ง ๋์ํฉ๋๋ค.
MultiWorkerMirroredStrategy๋ฅผ ์ฌ์ฉํ MNIST ํ๋ จ ํํ ๋ฆฌ์ผ.
MirroredStrategy๋ฅผ ์ฌ์ฉํ NCF ํ๋ จ.
MirroredStrategy๋ฅผ ์ฌ์ฉํ Transformer ํ๋ จ.
์ฌ์ฉ์ ์ ์ ํ๋ จ ๋ฃจํ์ ํจ๊ป tf.distribute.Strategy ์ฌ์ฉํ๊ธฐ
์ง๊ธ๊น์ง ์ดํด๋ณธ ๊ฒ์ฒ๋ผ ๊ณ ์์ค API์ ํจ๊ป tf.distribute.Strategy๋ฅผ ์ฌ์ฉํ๋ ค๋ฉด ์ฝ๋ ๋ช ์ค๋ง ๋ฐ๊พธ๋ฉด ๋์์ต๋๋ค. ์กฐ๊ธ๋ง ๋ ๋
ธ๋ ฅ์ ๋ค์ด๋ฉด ์ด๋ฐ ํ๋ ์์ํฌ๋ฅผ ์ฌ์ฉํ์ง ์๋ ์ฌ์ฉ์๋ tf.distribute.Strategy๋ฅผ ์ฌ์ฉํ ์ ์์ต๋๋ค.
ํ
์ํ๋ก๋ ๋ค์ํ ์ฉ๋๋ก ์ฌ์ฉ๋ฉ๋๋ค. ์ฐ๊ตฌ์๋ค ๊ฐ์ ์ผ๋ถ ์ฌ์ฉ์๋ค์ ๋ ๋์ ์์ ๋์ ํ๋ จ ๋ฃจํ์ ๋ํ ์ ์ด๋ฅผ ์ํฉ๋๋ค. ์ด ๋๋ฌธ์ ์ถ์ ๊ธฐ๋ ์ผ๋ผ์ค ๊ฐ์ ๊ณ ์์ค API๋ฅผ ์ฌ์ฉํ๊ธฐ ํ๋ ๊ฒฝ์ฐ๊ฐ ์์ต๋๋ค. ์๋ฅผ ๋ค์ด, GAN์ ์ฌ์ฉํ๋๋ฐ ๋งค๋ฒ ์์ฑ์(generator)์ ํ๋ณ์(discriminator) ๋จ๊ณ์ ์๋ฅผ ๋ฐ๊พธ๊ณ ์ถ์ ์ ์์ต๋๋ค. ๋น์ทํ๊ฒ, ๊ณ ์์ค API๋ ๊ฐํ ํ์ต(Reinforcement learning)์๋ ๊ทธ๋ค์ง ์ ์ ํ์ง ์์ต๋๋ค. ๊ทธ๋์ ์ด๋ฐ ์ฌ์ฉ์๋ค์ ๋ณดํต ์์ ๋ง์ ํ๋ จ ๋ฃจํ๋ฅผ ์์ฑํ๊ฒ ๋ฉ๋๋ค.
์ด ์ฌ์ฉ์๋ค์ ์ํ์ฌ, tf.distribute.Strategy ํด๋์ค๋ค์ ์ผ๋ จ์ ์ฃผ์ ๋ฉ์๋๋ค์ ์ ๊ณตํฉ๋๋ค. ์ด ๋ฉ์๋๋ค์ ์ฌ์ฉํ๋ ค๋ฉด ์ฒ์์๋ ์ฝ๋๋ฅผ ์ด๋ฆฌ์ ๋ฆฌ ์กฐ๊ธ ์ฎ๊ฒจ์ผ ํ ์ ์๊ฒ ์ง๋ง, ํ๋ฒ ์์
ํด ๋์ผ๋ฉด ์ ๋ต ์ธ์คํด์ค๋ง ๋ฐ๊ฟ์ GPU, TPU, ์ฌ๋ฌ ์ฅ๋น๋ก ์ฝ๊ฒ ๋ฐ๊ฟ๊ฐ๋ฉฐ ํ๋ จ์ ํ ์ ์์ต๋๋ค.
์์์ ์ดํด๋ณธ ์ผ๋ผ์ค ๋ชจ๋ธ์ ์ฌ์ฉํ ํ๋ จ ์์ ๋ฅผ ํตํ์ฌ ์ฌ์ฉํ๋ ๋ชจ์ต์ ๊ฐ๋จํ๊ฒ ์ดํด๋ณด๊ฒ ์ต๋๋ค.
๋จผ์ , ์ ๋ต์ ๋ฒ์(scope) ์์์ ๋ชจ๋ธ๊ณผ ์ตํฐ๋ง์ด์ ๋ฅผ ๋ง๋ญ๋๋ค. ์ด๋ ๋ชจ๋ธ์ด๋ ์ตํฐ๋ง์ด์ ๋ก ๋ง๋ค์ด์ง ๋ณ์๊ฐ ๋ฏธ๋ฌ๋ง ๋๋๋ก ๋ง๋ญ๋๋ค.
End of explanation
with mirrored_strategy.scope():
dataset = tf.data.Dataset.from_tensors(([1.], [1.])).repeat(1000).batch(
global_batch_size)
dist_dataset = mirrored_strategy.experimental_distribute_dataset(dataset)
Explanation: ๋ค์์ผ๋ก๋ ์
๋ ฅ ๋ฐ์ดํฐ์
์ ๋ง๋ ๋ค์, tf.distribute.Strategy.experimental_distribute_dataset ๋ฉ์๋๋ฅผ ํธ์ถํ์ฌ ์ ๋ต์ ๋ง๊ฒ ๋ฐ์ดํฐ์
์ ๋ถ๋ฐฐํฉ๋๋ค.
End of explanation
@tf.function
def train_step(dist_inputs):
def step_fn(inputs):
features, labels = inputs
with tf.GradientTape() as tape:
logits = model(features)
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(
logits=logits, labels=labels)
loss = tf.reduce_sum(cross_entropy) * (1.0 / global_batch_size)
grads = tape.gradient(loss, model.trainable_variables)
optimizer.apply_gradients(list(zip(grads, model.trainable_variables)))
return cross_entropy
per_example_losses = mirrored_strategy.run(
step_fn, args=(dist_inputs,))
mean_loss = mirrored_strategy.reduce(
tf.distribute.ReduceOp.MEAN, per_example_losses, axis=0)
return mean_loss
Explanation: ๊ทธ๋ฆฌ๊ณ ๋์๋ ํ ๋จ๊ณ์ ํ๋ จ์ ์ ์ํฉ๋๋ค. ๊ทธ๋๋์ธํธ๋ฅผ ๊ณ์ฐํ๊ธฐ ์ํด tf.GradientTape๋ฅผ ์ฌ์ฉํฉ๋๋ค. ์ด ๊ทธ๋๋์ธํธ๋ฅผ ์ ์ฉํ์ฌ ์ฐ๋ฆฌ ๋ชจ๋ธ์ ๋ณ์๋ฅผ ๊ฐฑ์ ํ๊ธฐ ์ํด์๋ ์ตํฐ๋ง์ด์ ๋ฅผ ์ฌ์ฉํฉ๋๋ค. ๋ถ์ฐ ํ๋ จ์ ์ํ์ฌ ์ด ํ๋ จ ์์
์ step_fn ํจ์ ์์ ๊ตฌํํฉ๋๋ค. ๊ทธ๋ฆฌ๊ณ step_fn์ ์์์ ๋ง๋ dist_dataset์์ ์ป์ ์
๋ ฅ ๋ฐ์ดํฐ์ ํจ๊ป tf.distrbute.Strategy.experimental_run_v2๋ฉ์๋๋ก ์ ๋ฌํฉ๋๋ค.
End of explanation
with mirrored_strategy.scope():
for inputs in dist_dataset:
print(train_step(inputs))
Explanation: ์ ์ฝ๋์์ ๋ช ๊ฐ์ง ๋ ์ง์ด๋ณผ ์ ์ด ์์ต๋๋ค.
์์ค(loss)์ ๊ณ์ฐํ๊ธฐ ์ํ์ฌ tf.nn.softmax_cross_entropy_with_logits๋ฅผ ์ฌ์ฉํ์์ต๋๋ค. ๊ทธ๋ฆฌ๊ณ ์์ค์ ํฉ์ ์ ์ฒด ๋ฐฐ์น ํฌ๊ธฐ๋ก ๋๋๋ ๋ถ๋ถ์ด ์ค์ํฉ๋๋ค. ์ด๋ ๋ชจ๋ ๋ณต์ ๋ ํ๋ จ์ด ๋์์ ์ด๋ฃจ์ด์ง๊ณ ์๊ณ , ๊ฐ ๋จ๊ณ์ ํ๋ จ์ด ์ด๋ฃจ์ด์ง๋ ์
๋ ฅ์ ์๋ ์ ์ฒด ๋ฐฐ์น ํฌ๊ธฐ์ ๊ฐ๊ธฐ ๋๋ฌธ์
๋๋ค. ๋ฐ๋ผ์ ์์ค ๊ฐ์ ๊ฐ ๋ณต์ ๋ ์์
๋ด์ ๋ฐฐ์น ํฌ๊ธฐ๊ฐ ์๋๋ผ ์ ์ฒด ๋ฐฐ์น ํฌ๊ธฐ๋ก ๋๋์ด์ผ ๋ง์ต๋๋ค.
tf.distribute.Strategy.mirrored_strategy.run์์ ๋ฐํ๋ ๊ฒฐ๊ณผ๋ฅผ ๋ชจ์ผ๊ธฐ ์ํ์ฌ tf.distribute.Strategy.reduce API๋ฅผ ์ฌ์ฉํ์์ต๋๋ค. tf.distribute.Strategy.mirrored_strategy.run๋ ์ ๋ต์ ๊ฐ ๋ณต์ ๋ณธ์์ ์ป์ ๊ฒฐ๊ณผ๋ฅผ ๋ฐํํฉ๋๋ค. ๊ทธ๋ฆฌ๊ณ ์ด ๊ฒฐ๊ณผ๋ฅผ ์ฌ์ฉํ๋ ๋ฐฉ๋ฒ์ ์ฌ๋ฌ ๊ฐ์ง๊ฐ ์์ต๋๋ค. ์ข
ํฉํ ๊ฒฐ๊ณผ๋ฅผ ์ป๊ธฐ ์ํ์ฌ reduce ํจ์๋ฅผ ์ฌ์ฉํ ์ ์์ต๋๋ค. tf.distribute.Strategy.experimental_local_results ๋ฉ์๋๋ก ๊ฐ ๋ณต์ ๋ณธ์์ ์ป์ ๊ฒฐ๊ณผ์ ๊ฐ๋ค ๋ชฉ๋ก์ ์ป์ ์๋ ์์ต๋๋ค.
๋ถ์ฐ ์ ๋ต ๋ฒ์ ์์์ apply_gradients ๋ฉ์๋๊ฐ ํธ์ถ๋๋ฉด, ํ์์๋ ๋์์ด ๋ค๋ฆ
๋๋ค. ๊ตฌ์ฒด์ ์ผ๋ก๋ ๋๊ธฐํ๋ ํ๋ จ ์ค ๋ณ๋ ฌํ๋ ๊ฐ ์์
์์ ๊ทธ๋๋์ธํธ๋ฅผ ์ ์ฉํ๊ธฐ ์ ์, ๋ชจ๋ ๋ณต์ ๋ณธ์ ๊ทธ๋๋์ธํธ๋ฅผ ๋ํด์ง๋๋ค.
ํ๋ จ ๋จ๊ณ๋ฅผ ์ ์ํ์ผ๋ฏ๋ก, ๋ง์ง๋ง์ผ๋ก๋ dist_dataset์ ๋ํ์ฌ ํ๋ จ์ ๋ฐ๋ณตํฉ๋๋ค.
End of explanation
with mirrored_strategy.scope():
iterator = iter(dist_dataset)
for _ in range(10):
print(train_step(next(iterator)))
Explanation: ์ ์์์๋ dist_dataset์ ์ฐจ๋ก๋๋ก ์ฒ๋ฆฌํ๋ฉฐ ํ๋ จ ์
๋ ฅ ๋ฐ์ดํฐ๋ฅผ ์ป์์ต๋๋ค. tf.distribute.Strategy.make_experimental_numpy_dataset๋ฅผ ์ฌ์ฉํ๋ฉด ๋ํ์ด ์
๋ ฅ๋ ์ธ ์ ์์ต๋๋ค. tf.distribute.Strategy.experimental_distribute_dataset ํจ์๋ฅผ ํธ์ถํ๊ธฐ ์ ์ ์ด API๋ก ๋ฐ์ดํฐ์
์ ๋ง๋ค๋ฉด ๋ฉ๋๋ค.
๋ฐ์ดํฐ๋ฅผ ์ฐจ๋ก๋๋ก ์ฒ๋ฆฌํ๋ ๋ ๋ค๋ฅธ ๋ฐฉ๋ฒ์ ๋ช
์์ ์ผ๋ก ๋ฐ๋ณต์(iterator)๋ฅผ ์ฌ์ฉํ๋ ๊ฒ์
๋๋ค. ์ ์ฒด ๋ฐ์ดํฐ๋ฅผ ๋ชจ๋ ์ฌ์ฉํ์ง ์๊ณ , ์ ํด์ง ํ์๋งํผ๋ง ํ๋ จ์ ํ๊ณ ์ถ์ ๋ ์ ์ฉํฉ๋๋ค. ๋ฐ๋ณต์๋ฅผ ๋ง๋ค๊ณ ๋ช
์์ ์ผ๋ก next๋ฅผ ํธ์ถํ์ฌ ๋ค์ ์
๋ ฅ ๋ฐ์ดํฐ๋ฅผ ์ป๋๋ก ํ๋ฉด ๋ฉ๋๋ค. ์ ๋ฃจํ ์ฝ๋๋ฅผ ๋ฐ๊ฟ๋ณด๋ฉด ๋ค์๊ณผ ๊ฐ์ต๋๋ค.
End of explanation
mirrored_strategy = tf.distribute.MirroredStrategy()
config = tf.estimator.RunConfig(
train_distribute=mirrored_strategy, eval_distribute=mirrored_strategy)
regressor = tf.estimator.LinearRegressor(
feature_columns=[tf.feature_column.numeric_column('feats')],
optimizer='SGD',
config=config)
Explanation: We have now covered the simplest case of using the tf.distribute.Strategy API to distribute a custom training loop. The API is still being improved, and using it currently requires a fair amount of work on the user's side, so a separate, more detailed guide will cover it later.
What is supported now?
| Training API | MirroredStrategy | TPUStrategy | MultiWorkerMirroredStrategy | CentralStorageStrategy | ParameterServerStrategy |
|:----------------------- |:------------------- |:------------------- |:----------------------------- |:------------------------ |:------------------------- |
| Custom training loop | Supported | Supported | Experimental support | Experimental support | Support planned after 2.3 |
Examples and tutorials
Here are some examples that use distribution strategies with custom training loops:
Tutorial that trains MNIST with MirroredStrategy.
DenseNet example using MirroredStrategy.
BERT example trained using MirroredStrategy and TPUStrategy. This example is particularly helpful for understanding how to load from a checkpoint and create periodic checkpoints during distributed training.
NCF example trained with MirroredStrategy, which can be enabled with the keras_use_ctl flag.
NMT example trained using MirroredStrategy.
Using tf.distribute.Strategy with Estimator
tf.estimator is a distributed-training TensorFlow API that originally supported the asynchronous parameter-server approach. As with Keras, you can use tf.distribute.Strategy with tf.estimator. An Estimator user only needs to change a small amount of code to switch how training is distributed, so Estimator users can now also run synchronous distributed training on multiple GPUs, multiple workers, and TPUs. This support is, however, limited; see the "What is supported now?" section below for details.
Using tf.distribute.Strategy with an Estimator is slightly different from the Keras case: instead of using strategy.scope, you pass the strategy object into the Estimator's RunConfig.
The following code shows how to use MirroredStrategy with the premade LinearRegressor estimator.
End of explanation
def input_fn():
dataset = tf.data.Dataset.from_tensors(({"feats":[1.]}, [1.]))
return dataset.repeat(1000).batch(10)
regressor.train(input_fn=input_fn, steps=10)
regressor.evaluate(input_fn=input_fn, steps=10)
Explanation: The example above uses a premade Estimator, but a custom Estimator works with exactly the same code. train_distribute determines how training will be distributed, and eval_distribute determines how evaluation will be distributed. This is another difference from Keras, where the same strategy is used for both training and evaluation.
You can now train and evaluate this Estimator by specifying an input function as follows.
End of explanation |
15,194 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Prediction using normal score for wall street columns using the same data clusters.
Here we test how prediction with receptive fields that are mixed in time compares with prediction from non-mixed receptive fields, where the data clusters are kept the same for every time step.
First as usual we load everything that we need.
Step1: Without Spaces
Load the code vectors and the features
Step2: Do the loop and calculate the predictions | Python Code:
import numpy as np
from sklearn import svm, cross_validation
import h5py
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
import sys
sys.path.append("../")
Explanation: Prediction using normal score for wall street columns using the same data clusters.
Here we test how prediction with receptive fields that are mixed in time compares with prediction from non-mixed receptive fields, where the data clusters are kept the same for every time step.
First as usual we load everything that we need.
End of explanation
# Data to use
Ndata = 10000
# First we load the file
file_location = '../results_database/text_wall_street_columns_indp.hdf5'
# Now we need to get the letters and align them
text_directory = '../data/wall_street_letters_spaces.npy'
letters_sequence = np.load(text_directory)
Nletters = len(letters_sequence)
symbols = set(letters_sequence)
targets = []
for index in range(Ndata):
letter_index = index // 10
targets.append(letters_sequence[letter_index])
# Transform to array
targets = np.array(targets)
Explanation: Without Spaces
Load the code vectors and the features
End of explanation
# Calculate the predictions
Ntime_clusters_set = np.arange(10, 37, 3)
scores_mixed = []
scores_indp = []
# Nexa parameters
Nspatial_clusters = 3
Nembedding = 3
for Ntime_clusters in Ntime_clusters_set:
print(Ntime_clusters)
# Here calculate the scores for the mixes
run_name = '/test'
f = h5py.File(file_location, 'r')
parameters_string = '/' + str(Nspatial_clusters)
parameters_string += '-' + str(Ntime_clusters)
parameters_string += '-' + str(Nembedding)
nexa = f[run_name + parameters_string]
cluster_to_index = nexa['cluster_to_index']
code_vectors_softmax = np.array(nexa['code-vectors-softmax'])
# Now we need to classify
X = code_vectors_softmax[:Ndata]
y = targets
X_train, X_test, y_train, y_test = cross_validation.train_test_split(X, y, test_size=0.10)
clf_linear = svm.SVC(C=1.0, kernel='linear')
clf_linear.fit(X_train, y_train)
score = clf_linear.score(X_test, y_test) * 100.0
scores_mixed.append(score)
# Here calculate the scores for the independent
run_name = '/indep'
f = h5py.File(file_location, 'r')
parameters_string = '/' + str(Nspatial_clusters)
parameters_string += '-' + str(Ntime_clusters)
parameters_string += '-' + str(Nembedding)
nexa = f[run_name + parameters_string]
cluster_to_index = nexa['cluster_to_index']
code_vectors_softmax = np.array(nexa['code-vectors-softmax'])
# Now we need to classify
X = code_vectors_softmax[:Ndata]
y = targets
X_train, X_test, y_train, y_test = cross_validation.train_test_split(X, y, test_size=0.10)
clf_linear = svm.SVC(C=1.0, kernel='linear')
clf_linear.fit(X_train, y_train)
score = clf_linear.score(X_test, y_test) * 100.0
scores_indp.append(score)
fig = plt.figure(figsize=(16, 12))
ax = fig.add_subplot(111)
ax.plot(Ntime_clusters_set, scores_indp, 'o-', label='independent', lw=2, markersize=10)
ax.plot(Ntime_clusters_set, scores_mixed, 'o-', label='mixed', lw=2, markersize=10)
ax.set_ylim(0, 105)
ax.set_ylabel('Accuracy')
ax.set_xlabel('Number of Data Clusters')
ax.set_title('Accuracy vs Number of Data Clusters for different features (Without Sapces)')
ax.legend()
targets[0:20]
Explanation: Do the loop and calculate the predictions
End of explanation |
15,195 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Algebra and Functions
Step1: Contents
1.Laws of Indices
- Multiplication
- Division
- Raising to power
- Taking a root
- Fraction Indices
- Zero indices
- Negative indices
2.Surds
- Addition and subtraction
- Multiplication and division
- Simplifying
- Rationalising the denominator
3.Quadratic Functions
- Graphing
- Discriminant
- Completing the square
4.Simultaneous Equations
- Elimination
- Substitution
5.Linear and Quadratic Inequalitites
- Rules
- Graphing
6.Polynomials
- Expanding brackets
- Factorising
- Division
- Remainder Theorem
- Factor Theorem
7.Graphs
- Plotting polynomials
- Modulus of linear function
- Graphing log
- Plotting reciprocal of x
8.Functions
- Composite functions
- Inverse functions
- Transformation of functions
9.Rational Functions
- Rational functions to partial fractions
<a id='Laws_of_Indices'></a>
Laws of Indices For All Rational Exponents
<a id='Multiplication'></a>
Multiplication
$x^a(x^b) = x^{a+b}$
<a id='Division'></a>
Division
$\frac{x^a}{x^b} = x^{a-b}$
<a id='Raising_to_power'></a>
Raising to power
$(x^a)^b = x^{ab}$
<a id='Taking_a_root'></a>
Taking a root
$\sqrt[b]{x^a} = x^{\frac{a}{b}}$
<a id='Fraction_Indices'></a>
Fraction Indices
$\frac{x}{y}^{\frac{a}{b}} = \frac{x^{\frac{a}{b}}}{y^{\frac{a}{b}}}$
$= \frac{\sqrt[b]{(x^a)}}{(\sqrt[b]{y})^a}$
note
Step2: <a id='Discriminant'></a>
Discriminant
The discriminant is $b^2 - 4ac$
noted as $\Delta$
if $b^2 - 4ac = 0$
- the line touches the x axis
- 1 "real" or repeated solution
- left with $x = \frac{-b}{2a}$
- this is the equation for the axis of symmetry
- eg. $4^2 - 4\times2\times2 = 0$
if $b^2 - 4ac > 0$
- the line intersects the x axis
- 2 "real" solutions
- can be $+\sqrt{b^2 - 4ac}$
- or $-\sqrt{b^2 - 4ac}$
- eg. $5^2 - 4\times2\times-12 = 121$
if $b^2 - 4ac < 0$
- the line doesn't touch or intersect the x axis
- 0 "real" solutions
- because no "real" roots of any negative number
- eg. $(-11)^2 - 4\times-4\times-16 = -135$
Step3: <a id='Completing_the_square'></a>
Completing the square
to rewrite $ax^2 + bx + c = 0$
to $(x + a)^2 + b$
eg. $x^2 +2x+3 = 0$
$(x + 1)^2 + 2 = 0$
Example
Step4: <a id='Polynomials'></a>
Polynomials
<a id='Expanding_brackets'></a>
Expanding brackets
Single brackets
Step5: <a id='Modulus_of_linear_function'></a>
Modulus of linear function
The Modulus is the absolute value or the magnitude of the number
It has no polarity eg. |-3| = 3
Step6: <a id='Graphing_log'></a>
Graphing log
Logarithm
Step7: <a id='Plotting_reciprocal_of_x'></a>
Plotting reciprocal of x
Reciprocal = $x^{-1}$
$ax^{-n} = \frac{a}{x^n}$
Step8: Horizontal and Vertical Asymptotes
The linear line in the direction to which a curve approaches as it heads towards infinity
Step9: <a id='Functions'></a>
Functions
<a id='Composite_functions'></a>
Composite functions
multiple functions represented as 1 combined function
$hgf(x) = h(g(f(x)))$
$f^2(x) = f(f(x))$
Example
Step10: <a id='Transformation_of_functions'></a>
Transformation of functions
$f(x) + a$ | Python Code:
import numpy as np
import matplotlib.pyplot as plt
import math
Explanation: Algebra and Functions
End of explanation
# sample x values
x = np.linspace(-10, 10, 2001).astype(np.float32)
# known variables
a = 2.0
b = 5.0
c = -12.0
# quadratic equation
def eq(x,a,b,c):
return a*x**2 + b*x + c
y = eq(x,a,b,c)
sym = (-b)/(2*a)
# list ot store x values where y = 0 (aka the x intercepts)
x_intercepts = []
# loop through y values find x intercepts (out of the sample values)
# later we'll implement the maths
for i in range(y.size):
if y[i] < 0.1 and y[i] > -0.1:
x_intercepts.append(i)
# plot graph and points
plt.plot(x,y, label='line', color='b')
plt.plot(x[x_intercepts[0]], y[x_intercepts[0]], 'om', color='g', label='x intercept')
plt.plot(x[x_intercepts[1]], y[x_intercepts[1]], 'om', color='g', label='x intercept')
plt.axvline(x=sym, color='r', linestyle='--', label='line of symmetry')
plt.plot(sym, eq(sym,a,b,c), 'om', color='y', label='vertex')
plt.legend(loc=1)
plt.grid(True)
plt.show()
Explanation: Contents
1.Laws of Indices
- Multiplication
- Division
- Raising to power
- Taking a root
- Fraction Indices
- Zero indices
- Negative indices
2.Surds
- Addition and subtraction
- Multiplication and division
- Simplifying
- Rationalising the denominator
3.Quadratic Functions
- Graphing
- Discriminant
- Completing the square
4.Simultaneous Equations
- Elimination
- Substitution
5.Linear and Quadratic Inequalitites
- Rules
- Graphing
6.Polynomials
- Expanding brackets
- Factorising
- Division
- Remainder Theorem
- Factor Theorem
7.Graphs
- Plotting polynomials
- Modulus of linear function
- Graphing log
- Plotting reciprocal of x
8.Functions
- Composite functions
- Inverse functions
- Transformation of functions
9.Rational Functions
- Rational functions to partial fractions
<a id='Laws_of_Indices'></a>
Laws of Indices For All Rational Exponents
<a id='Multiplication'></a>
Multiplication
$x^a(x^b) = x^{a+b}$
<a id='Division'></a>
Division
$\frac{x^a}{x^b} = x^{a-b}$
<a id='Raising_to_power'></a>
Raising to power
$(x^a)^b = x^{ab}$
<a id='Taking_a_root'></a>
Taking a root
$\sqrt[b]{x^a} = x^{\frac{a}{b}}$
<a id='Fraction_Indices'></a>
Fraction Indices
$\frac{x}{y}^{\frac{a}{b}} = \frac{x^{\frac{a}{b}}}{y^{\frac{a}{b}}}$
$= \frac{\sqrt[b]{(x^a)}}{(\sqrt[b]{y})^a}$
note: the order of dealing with denominator and numerator of the indices doesn't matter
<a id='Zero_indices'></a>
Zero indices
$x^0 = 1$
<a id='Negative_indices'></a>
Negative indices
$x^{-a} = \frac{1}{x^a}$
<a id='Surds'></a>
Surds
<a id='Addition_and_subtraction'></a>
Addition and subtraction
$a\sqrt{b} + c\sqrt{b} = (a + c)\sqrt{b}$
$a\sqrt{b} - c\sqrt{b} = (a - c)\sqrt{b}$
<a id='Multiplication_and_division'></a>
Multiplication and division
$\sqrt{ab} = \sqrt{a}\sqrt{b}$
$\sqrt{\frac{a}{b}} = \frac{\sqrt{a}}{\sqrt{b}}$
<a id='Simplifying'></a>
Simplifying
$\sqrt{75} = \sqrt{25}\sqrt{3}$
$= 5\sqrt{3}$
<a id='Rationalising_the_denominator'></a>
Rationalising the denominator
$\frac{a}{\sqrt{b}} = \frac{a}{\sqrt{b}} \times \frac{\sqrt{b}}{\sqrt{b}}$
$= \frac{a\sqrt{b}}{b}$
Example
$\frac{\sqrt{7} + 1}{\sqrt{7} - 2} = \frac{\sqrt{7} + 1}{\sqrt{7} - 2} \times \frac{\sqrt{7} + 2}{\sqrt{7} + 2}$
$= \frac{9 + 3\sqrt{7}}{3}$
$= 3 + \sqrt{7}$
<a id='Quadratic_Functions'></a>
Quadratic Functions
in format (or any polynomial of degree 2):
$ax^2 + bx + c = 0$
Quadratic Formula: $x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}$
the graph is always an $x^2$ curve
the solution is always the x intercepts
there can be 2, 1 or 0 solutions
<a id='Quadratic_graphing'></a>
Graphing:
note: a, b and c doesn't always represent the values in $ax^2 + bx + c = 0$
In factored form $(x + a)(x + b) = 0$:
- find x intercepts
- find vertex
In standard form $ax^2 + bx + c = 0$:
- find symmetry with $x = \frac{-b}{2a}$
- find vertex by substituting in x value for symmetry
- find x intercepts of y values for x values either side
In vertex form $a(x + b)^2 + c = 0$:
- find when $x + b = 0$ to find symmetry x value
- c = y value for vertex
- if b < 0, graph is negative
- then substitute x values either side of symmetry for other 2 points required
End of explanation
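As a quick numerical sanity check of the surd examples above (a small sketch added here, using the math module imported earlier):
import math
print(math.sqrt(75), 5 * math.sqrt(3))                            # simplifying surds: both approx 8.660
print((math.sqrt(7) + 1) / (math.sqrt(7) - 2), 3 + math.sqrt(7))  # rationalising: both approx 5.646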
# sample x values
x = np.linspace(-10, 10, 2001).astype(np.float32)
# quadratic equation
def eq(a, b, c):
return a*x**2 + b*x + c
# setup figure and axes
fig, ax = plt.subplots(1, 3, sharex=True, figsize=(16,5))
# define variables for each graph using examples from above
graph_vars = [[2, 4, 2], [2, 5, -12], [-4, -11, -16]]
# loop for 3 graphs
for a in range(3):
# put the current variables into the equation
y = eq(graph_vars[a][0], graph_vars[a][1], graph_vars[a][2])
# list ot store x values where y = 0 (aka the x intercepts)
x_intercepts = []
# loop through y values find x intercepts (out of the sample values)
for i in range(y.size):
if y[i] < 0.01 and y[i] > -0.01:
x_intercepts.append(i)
# plot the current data and x intercept
ax[a].plot(x,y)
for x_intercept in x_intercepts:
ax[a].plot(x[x_intercept], y[x_intercept], 'om')
ax[a].grid(True)
plt.show()
Explanation: <a id='Discriminant'></a>
Discriminant
The discriminant is $b^2 - 4ac$
noted as $\Delta$
if $b^2 - 4ac = 0$
- the line touches the x axis
- 1 "real" or repeated solution
- left with $x = \frac{-b}{2a}$
- this is the equation for the axis of symmetry
- eg. $4^2 - 4\times2\times2 = 0$
if $b^2 - 4ac > 0$
- the line intersects the x axis
- 2 "real" solutions
- can be $+\sqrt{b^2 - 4ac}$
- or $-\sqrt{b^2 - 4ac}$
- eg. $5^2 - 4\times2\times-12 = 121$
if $b^2 - 4ac < 0$
- the line doesn't touch or intersect the x axis
- 0 "real" solutions
- because no "real" roots of any negative number
- eg. $(-11)^2 - 4\times-4\times-16 = -135$
End of explanation
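A small helper (a sketch, not in the original notebook) that computes the discriminant for the three coefficient sets used above and reports how many real solutions to expect:
def discriminant(a, b, c):
    return b**2 - 4*a*c

for a, b, c in [(2, 4, 2), (2, 5, -12), (-4, -11, -16)]:
    d = discriminant(a, b, c)
    if d > 0:
        roots = "2 real solutions"
    elif d == 0:
        roots = "1 repeated solution"
    else:
        roots = "0 real solutions"
    print(a, b, c, "->", d, roots)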
# sample x values
x = np.linspace(-10, 10, 2001).astype(np.float32)
# known variables
M = 1
C = 1
# linear equation
def lin_eq(x, M, C):
return M*x + C
lin_y = lin_eq(x, M, C)
# known variables
a = 2.0
b = -5.0
c = -12.0
# quadratic equation
def quad_eq(x,a,b,c):
return a*x**2 + b*x + c
quad_y = quad_eq(x,a,b,c)
# list ot store x values where y = 0 (aka the x intercepts)
x_intercepts = []
# loop through y values find x intercepts (out of the sample values)
# later we'll implement the maths
for i in range(quad_y.size):
if quad_y[i] < 0.1 and quad_y[i] > -0.1:
x_intercepts.append(i)
# plot graph and points
fig, ax = plt.subplots(2, 1, sharex=True, figsize=(6,10))
ax[0].set_title('$y = x + 1$')
ax[0].plot(x,lin_y, label='line', color='b')
ax[0].fill_between(x, 20, lin_y, label='$y > x + 1$', facecolor='pink')
ax[1].set_title('$y = 2x^2 - 5x - 12$')
ax[1].plot(x,quad_y, label='line', color='b')
ax[1].plot(x[x_intercepts[0]], quad_y[x_intercepts[0]], 'om', color='g', label='x intercept')
ax[1].plot(x[x_intercepts[1]], quad_y[x_intercepts[1]], 'om', color='g', label='x intercept')
ax[1].fill_between(x[0:x_intercepts[0]], 0, quad_eq(x[0:x_intercepts[0]],a,b,c), label='$0 < 2x^2 - 5x - 12 < y$', facecolor='pink')
ax[1].fill_between(x[x_intercepts[1]:], 0, quad_eq(x[x_intercepts[1]:],a,b,c), facecolor='pink')
for axes in ax:
axes.legend(loc=1)
axes.grid(True)
plt.show()
Explanation: <a id='Completing_the_square'></a>
Completing the square
to rewrite $ax^2 + bx + c = 0$
to $(x + a)^2 + b$
eg. $x^2 +2x+3 = 0$
$(x + 1)^2 + 2 = 0$
Example:
$4x^2 + 20x + 25 = 0$
$x^2 + 5x + \frac{25}{4} = 0$
$(x + \frac{5}{2})^2 - \frac{25}{4} + \frac{25}{4} = 0$
$(x + \frac{5}{2})^2 = 0$
$x + \frac{5}{2} = 0$
$x = - \frac{5}{2}$
note: the discriminant = 0 $\therefore$ only one repeated solution for x
<a id='Simultaneous_Equations'></a>
Simultaneous Equations
<a id='Elimination'></a>
Elimination
Equations:
a) $3x + 2y = 280$
b) $x + 4y = 260$
Solve x and y of above equations:
2a) $6x + 4y = 560$
2a - b) $5x = 300$
$x = 60$
substitute x's value into original equation
$60 + 4y = 260$
$4y = 200$
$y = 50$
<a id='Substitution'></a>
Substitution
Equations:
a)$y - 3x + 2 = 0$
b)$y^2 - x - 6x^2 = 0$
Solve x and y of above equations:
make unknown variable the subject in a equation (a in my case)
a) $y = 3x - 2$
substitute unknown variable in terms of other unknown into the other equation
b) $(3x - 2)^2 - x - 6x^2 = 0$
$(9x^2 - 12x + 4) - x - 6x^2 = 0$
$3x^2 - 13x + 4 = 0$
$(3x - 1)(x - 4) = 0$
$3x - 1 = 0$
$x = \frac{1}{3}$
or
$x - 4 = 0$
$x = 4$
substitute into original equations
$y = 3(4) - 2$
$y = 10$
or
$y = 3(\frac{1}{3}) - 2$
$y = -1$
Linear and Quadratic example
Equations:
a) $2x + y = 8$
b) $3x^2 + xy = 1$
In form $a + b\sqrt{17}$, where a and b are integers.
$a\times x$) $2x^2 + xy = 8x$
$b - a\times x$) $x^2 = 1 - 8x$
$x^2 + 8x - 1 = 0$
$\frac{-8 \pm \sqrt{8^2 - 4(-1)}}{2}$
$\frac{-8 \pm \sqrt{68}}{2}$
$-4 \pm \sqrt{17}$
$x = -4 + \sqrt{17}$ or $-4 - \sqrt{17}$
$y = 8 - 2(-4 \pm \sqrt{17})$
$y = 16 - 2\sqrt{17}$ or $16 + 2\sqrt{17}$
<a id='Linear_and_Quadratic_Inequalitites'></a>
Linear and Quadratic Inequalitites
<a id='Rules'></a>
Rules:
Treat like an equation
When multiplying by negative, switch the sign polarity
'or' means either/both statement(s) can be met
'and' means both statements must be met
compound inequalities should be treated like an equation also (can also be split)
there can be no solution
example:
$(2x + 1)(x - 2) > 2(x + 5)$
$2x^2 - 3x - 2 > 2x + 10$
$2x^2 - 5x - 12 > 0$
$(2x + 3)(x - 4) > 0$
since $p \times p = p$ and $n \times n = p$:
either $2x + 3 > 0$ and $x - 4 > 0$
or $2x + 3 < 0$ and $x - 4 < 0$
$2x + 3 > 0$
$x > -\frac{3}{2}$
and $x - 4 > 0$
$x > 4$
which is equivalent to $x > 4$
$2x + 3 < 0$
$x < -\frac{3}{2}$
and $x - 4 < 0$
$x < 4$
which is equivalent to $x < -\frac{3}{2}$
$\therefore$ $x < -\frac{3}{2}$ or $x > 4$
<a id='Inequality_graphing'></a>
End of explanation
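The elimination example above (3x + 2y = 280, x + 4y = 260) can also be checked numerically with NumPy's linear solver; this short sketch is added here purely for illustration:
A = np.array([[3, 2], [1, 4]])
b = np.array([280, 260])
print(np.linalg.solve(A, b))   # expected [60. 50.]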
# sample x values
x = np.linspace(-10, 10, 2001).astype(np.float32)
# equation
def poly_eq(x, a, n):
return(a*x**n)
# setup figure and axes
fig, ax = plt.subplots(1, 4, sharex=True, figsize=(16,5))
# define variables for each graph using examples from above
graph_vars = [[1, 2], [1, 3], [-1, 2], [-1, 3]]
# loop for 4 graphs
for a in range(4):
# put the current variables into the equation
y = poly_eq(x, graph_vars[a][0], graph_vars[a][1])
# list ot store x values where y = 0 (aka the x intercepts)
x_intercepts = []
# loop through y values find x intercepts (out of the sample values)
for i in range(y.size):
if y[i] < 0.01 and y[i] > -0.01:
x_intercepts.append(i)
# plot the current data and x intercept
ax[a].plot(x,y)
for x_intercept in x_intercepts:
ax[a].plot(x[x_intercept], y[x_intercept], 'om')
ax[a].grid(True)
plt.show()
Explanation: <a id='Polynomials'></a>
Polynomials
<a id='Expanding_brackets'></a>
Expanding brackets
Single brackets:
$a(b + c) = ab + ac$
$3xy(2x + y^2) = 6x^2y + 3xy^3$
Double brackets:
$(a + b)(a + d) = a^2 + (b + d)a + bd$
$(3x - 10)(5x - 9) = 15x^2 - 27x - 50x + 90$
$= 15x^2 - 77x + 90$
note: $(a + b)(a - b) = a^2 - b^2$ is the difference of 2 squares
<a id='Factorising'></a>
Factorising
Single brackets:
$ab + ac = a(b + c)$
$9xy^2 - 6x^4z = 3x(3y^2 - 2x^3z)$
Double brackets:
$x^2 + bx + c = (x + p)(x + q)$
$p \times q = c$
$p + q = b$
$x^2 - x - 12 = (x - 4)(x + 3)$
With non 1 co-efficient:
$ax^2 + bx + c = (rx + p)(sx + q)$
find factor pair of $ac$
which sum $= b$
$4x^2 + 4x - 15$
$ac = -60$
$b = 4$
factor pair = 10 and -6
then substitute into equation instead of bx
$4x^2 + 10x -6x - 15$
then group:
$= 2x(2x + 5) -3(2x + 5)$
$= (2x - 3)(2x + 5)$
<a id='Division'></a>
Division
Use long division
$\frac{3x^3 - 2x^2 + 7x - 4}{x^2 + 1}$
$x^2 + 1|3x^3 - 2x^2 + 7x - 4$
divide the highest degree ($x^2$ in our case)
our current output:
$3x$
take away what we have just taken out the divided
$-(3x^3 + 3x)$
$x^2 + 1|- 2x^2 + 4x - 4$
our current output:
$3x - 2$
$-(-2x^2 - 2)$
$x^2 + 1|4x - 2$
remainder of $4x - 2$
$\frac{3x^3 - 2x^2 + 7x - 4}{x^2 + 1} = 3x - 2 + \frac{4x - 2}{x^2+1}$
<a id='Remainder_Theorem'></a>
Remainder Theorem
To find the remainder of division (or any polynomial):
$f(x) = ax^2 + bx + c$
$\frac{ax^2 + bx + c}{x - d}$
remainder = f(d)
Example:
$f(x) = 3x^2 - 4x + 7$
$\frac{3x^2 - 4x + 7}{x - 1} = (3x - 1) + 6$
$3(1)^2 - 4(1) + 7 = 3 - 4 + 6 = 6$
<a id='Factor_Theorem'></a>
Factor Theorem
Opposite of remainder theorem:
$f(x) = ax^2 + bx + c$
if f(d) = 0:
$\frac{ax^2 + bx + c}{x - d}$
$x - d$ is a factor
Example:
$f(x) = x^3 + 6x^2 + 11x + 6$
$f(-1) = (-1)^3 + 6(-1)^2 + 11(-1) + 6$
$= -1 + 6 + -11 + 6$
$= 0$
$x^3 + 6x^2 + 11x + 6 = (x + 1)(x^2 + 5x + 6)$
$= (x + 1)(x + 2)(x + 3)$
<a id='Graphs'></a>
Graphs
<a id='Plotting_polynomials'></a>
Plotting polynomials
changes of direction are called turning points
polynomial of n'th degree has $\le n -1$ turning points
our 0's are the x intercepts
for function:
$f(x) = ax^n ...$
Leading Coefficient:
if a > 0 and n is even: will increase without bound positively at both endpoints (eg. $x^2$)
if a > 0 and n is odd: will increase without bound positively at the right end and decrease without bound at the left end (eg. $x^3$)
if a < 0 and n is even: will decrease without bound negatively at both endpoints (eg. $-x^2$)
if a < 0 and n is odd: will decrease without bound negatively at the right end and increase without bound at the left end (eg. $-x^3$)
Then:
- plot x intercepts
- plot y intercept, (0,f(0))
- plot more points to give sketch of graph
- wont work if any non "real" numbers involved
End of explanation
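NumPy can verify the division and remainder-theorem examples above; the following sketch (added here) divides 3x^2 - 4x + 7 by x - 1 and evaluates f(1):
quotient, remainder = np.polydiv([3, -4, 7], [1, -1])
print(quotient, remainder)        # quotient ~ 3x - 1, remainder ~ 6
print(np.polyval([3, -4, 7], 1))  # remainder theorem: f(1) = 6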
x = np.linspace(-10, 10, 2001).astype(np.float32)
M = 2
c = 6
y = M*x + c
fig, ax = plt.subplots()
plt.plot(x, np.absolute(y))
ax.grid(True)
plt.show()
Explanation: <a id='Modulus_of_linear_function'></a>
Modulus of linear function
The Modulus is the absolute value or the magnitude of the number
It has no polarity eg. |-3| = 3
End of explanation
x = np.linspace(0, 10, 1001).astype(np.float32)
a = 2
b = np.e
c = 6
y = a*(np.log(x)/np.log(b)) + c
fig, ax = plt.subplots()
plt.plot(x, y)
ax.grid(True)
plt.show()
Explanation: <a id='Graphing_log'></a>
Graphing log
Logarithm: the power which the base must be raised to, to result a specified value
note: ln = natural log ($log_e()$ or $ln()$)
End of explanation
# x data
x = np.linspace(-1, 1, 2001).astype(np.float32)
# variables
a = 1 # coefficient
n = -1 # power (representing all odd values)
n2 = -2 # power (representing all even values)
# y values for first power
y = a*x**n
# y values for second power
y2 = a*x**n2
# set up plot
fig, ax = plt.subplots(1, 2, sharex=True, figsize=(16,5))
ax[0].plot(x, y)
ax[1].plot(x, y2)
ax[0].set_title("$ax^{-1}$")
ax[1].set_title("$ax^{-2}$")
for axes in ax:
axes.grid(True)
plt.show()
Explanation: <a id='Plotting_reciprocal_of_x'></a>
Plotting reciprocal of x
Reciprocal = $x^{-1}$
$ax^{-n} = \frac{a}{x^n}$
End of explanation
# x data
x = np.linspace(-1, 1, 2001).astype(np.float32)
# variables
a = 1 # coefficient
n = -1 # power (representing all odd values)
n2 = -2 # power (representing all even values)
# y values for first power
y = a*x**n
# y values for second power
y2 = a*x**n2
# set up plot
fig, ax = plt.subplots(1, 2, sharex=True, figsize=(16,5))
ax[0].plot(x, y, color='b', label='$ax^{-1}$')
ax[1].plot(x, y2, color='b', label='$ax^{-2}$')
ax[0].axvline(x=0, color='r', linestyle='--', label='vertical asymptote')
ax[1].axvline(x=0, color='r', linestyle='--', label='vertical asymptote')
ax[0].axhline(y=0, color='g', linestyle='--', label='horizontal asymptote')
ax[1].axhline(y=0, color='g', linestyle='--', label='horizontal asymptote')
ax[0].set_title("$ax^{-1}$")
ax[1].set_title("$ax^{-2}$")
ax[0].set_xlim([-1,1])
ax[0].set_ylim([-100,100])
ax[1].set_xlim([-0.5,0.5])
ax[1].set_ylim([-100,1000])
for axes in ax:
axes.legend(loc=1)
axes.grid(True)
plt.show()
Explanation: Horizontal and Vertical Asymptotes
The linear line in the direction to which a curve approaches as it heads towards infinity
End of explanation
# x data
x = np.linspace(-4, 4, 2001).astype(np.float32)
def f(x):
return x**2
def g(x):
return 2*x - 1
def f2(x):
return (x**2)/3 - 1
def f2i(x):
return np.sqrt(3*(x+1))
fy = f(x)
gy = g(x)
gf = g(f(x))
ff = f(f(x))
f2y = f2(x)
f2iy = f2i(x)
domains = 0
domainf = 3
# set up plot
fig, ax = plt.subplots(3, 1, figsize=(8,18))
ax[0].plot(x, fy, color='b', linestyle='--', label='$f(x) = x^2$')
ax[0].plot(x, gy, color='g', linestyle='--', label='$g(x) = 2x - 1$')
ax[0].plot(x, gf, color='r', linestyle='--', label='$g(f(x)) = 2(x^2) - 1$')
ax[0].plot(x, ff, color='pink', linestyle='--', label='$f(f(x)) = (x^2)^2$')  # ff (f composed with itself) was defined above
ax[1].plot(x, f2y, color='b', linestyle='--', label='$f(x) = (x^2)/3 - 1$')
ax[1].plot(x, f2iy, color='g', linestyle='--', label='$g(x) = \sqrt{3(y+1)}$')
x2 = np.linspace(domains, domainf, 1001).astype(np.float32)
ax[2].plot(x, f2y, color='b', linestyle='--', label='$f(x) = (x^2)/3 - 1$')
ax[2].plot(x2, f2i(x2), color='g', linestyle='--', label='$g(x) = \sqrt{3(y+1)}$')
ax[2].plot(x, x, '--', label='line of symmetry')
ax[2].axvline(domains, color='red', lw=2, alpha=0.5, label='domain')
ax[2].axvline(domainf, color='red', lw=2, alpha=0.5)
ax[2].axhline(np.min(f2(domains)).reshape(1), color='green', lw=2, alpha=0.5, label='range')
ax[2].axhline(np.max(f2(domainf)).reshape(1), color='green', lw=2, alpha=0.5)
ax[0].set_title("Composite functions")
ax[1].set_title("Inverse functions")
ax[2].set_title("Inverse functions")
for axes in ax:
axes.legend(loc=9)
axes.grid(True)
plt.show()
Explanation: <a id='Functions'></a>
Functions
<a id='Composite_functions'></a>
Composite functions
multiple functions represented as 1 combined function
$hgf(x) = h(g(f(x)))$
$f^2(x) = f(f(x))$
Example:
$g(x) = 4x - 7$
$f(x) = 2x^2 + 3x$
$gf(x) = 4(2x^2 + 3x) - 7$
$gf(3) = 4(18 + 9) - 7$
$= 101$
<a id='Inverse_functions'></a>
Inverse functions
Reverse of a function, maps output to input (compared to normal function)
$f(x) = \frac{1}{\frac{x}{4} + 2}$
$f^{-1}(y) = 4(\frac{1}{y} - 2)$
can solve algebraically (eg. $y = kx \to \frac{y}{k} = x$)
with $x^2$, can only work with + vals
possibly wont work if you have 0 values
Example:
$f(x) = \frac{4^{x^2}}{5} + 3$
$f^{-1}(y) = \sqrt{\log_4(5(y - 3))}$
$f(3) = 52431.8$
$f^{-1}(52431.8) = 3$
note: doesn't work with certain values because log or root or fractions with 0 denominators wont work
To combat this we add a domain
This is the range of x values as inputs we're using
The range is the range of the output (y) values
When graphed, the line of symmetry between the graphs is y = x
End of explanation
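A short check (sketch added here) of the composite-function example above, where g(x) = 4x - 7 and f(x) = 2x^2 + 3x give gf(3) = 101:
g_ex = lambda x: 4*x - 7
f_ex = lambda x: 2*x**2 + 3*x
print(g_ex(f_ex(3)))   # expected 101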
# x data
x = np.linspace(-2, 2, 2001).astype(np.float32)
def f(x, a, b, c, d):
return c*(a*x+b)**2+d
y = f(x, 1, 0, 1, 0)
y1 = f(x, 1, 0, 1, 1)
y2 = f(x, 1, 1, 1, 0)
y3 = f(x, 1, 0, 2, 0)
y4 = f(x, 2, 0, 1, 0)
y5 = f(x, 2, 1, 2, 1)
y6 = f(x, 2, 1, -1, 0)
# set up plot
fig, ax = plt.subplots(3, 2, figsize=(12,12), sharex=True)
ax[0, 0].plot(x, y, color='b', label='$f(x)$')
ax[0, 0].plot(x, y1, color='g', label='$f(x) + a$')
ax[0, 1].plot(x, y, color='b', label='$f(x)$')
ax[0, 1].plot(x, y2, color='g', label='$f(x + a)$')
ax[1, 0].plot(x, y, color='b', label='$f(x)$')
ax[1, 0].plot(x, y3, color='g', label='$af(x)$')
ax[1, 1].plot(x, y, color='b', label='$f(x)$')
ax[1, 1].plot(x, y4, color='g', label='$f(ax)$')
ax[2, 0].plot(x, y, color='b', label='$f(x)$')
ax[2, 0].plot(x, y5, color='g', label='test')
ax[2, 1].plot(x, y, color='b', label='$f(x)$')
ax[2, 1].plot(x, y6, color='g', label='test')
for i in ax:
for axes in i:
axes.legend(loc=9)
axes.grid(True)
plt.show()
Explanation: <a id='Transformation_of_functions'></a>
Transformation of functions
$f(x) + a$:
- moves in y dimension
- by value $a$
$f(x + a)$:
- moves in x dimension
- by value $a$
$af(x)$:
- stretches or compresses in y dimension
- multiplies all y values by $a$
- therefore, gradient also increases by $a$
$f(ax)$:
- stretches or compresses in x dimension
- multiplies all x values as input by $a$
- therefore multiplies all y values by $f(a)$
- therefore multiplies gradient by $f(a)$
Combinations just add or multiply to the function the same way
End of explanation |
15,196 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
How many movies are listed in the titles dataframe?
Step1: What are the earliest two films listed in the titles dataframe?
Step2: How many movies have the title "Hamlet"?
Step3: How many movies are titled "North by Northwest"?
Step4: When was the first movie titled "Hamlet" made?
Step5: List all of the "Treasure Island" movies from earliest to most recent.
Step6: How many movies were made in the year 1950?
Step7: How many movies were made in the year 1960?
Step8: How many movies were made from 1950 through 1959?
Step9: In what years has a movie titled "Batman" been released?
Step10: How many roles were there in the movie "Inception"?
Step11: How many roles in the movie "Inception" are NOT ranked by an "n" value?
Step12: But how many roles in the movie "Inception" did receive an "n" value?
Step13: Display the cast of "North by Northwest" in their correct "n"-value order, ignoring roles that did not earn a numeric "n" value.
Step14: Display the entire cast, in "n"-order, of the 1972 film "Sleuth".
Step15: Now display the entire cast, in "n"-order, of the 2007 version of "Sleuth".
Step16: How many roles were credited in the silent 1921 version of Hamlet?
Step17: How many roles were credited in Branaghโs 1996 Hamlet?
Step18: How many "Hamlet" roles have been listed in all film credits through history?
Step19: How many people have played an "Ophelia"?
Step20: How many people have played a role called "The Dude"?
Step21: How many people have played a role called "The Stranger"?
Step22: How many roles has Sidney Poitier played throughout his career?
Step23: How many roles has Judi Dench played?
Step24: List the supporting roles (having n=2) played by Cary Grant in the 1940s, in order by year.
Step25: List the leading roles that Cary Grant played in the 1940s in order by year.
Step26: How many roles were available for actors in the 1950s?
Step27: How many roles were avilable for actresses in the 1950s?
Step28: How many leading roles (n=1) were available from the beginning of film history through 1980?
Step29: How many non-leading roles were available through from the beginning of film history through 1980?
Step30: How many roles through 1980 were minor enough that they did not warrant a numeric "n" rank? | Python Code:
titles.shape[0]
Explanation: How many movies are listed in the titles dataframe?
End of explanation
titles.sort(columns='year')[0:2]
Explanation: What are the earliest two films listed in the titles dataframe?
End of explanation
titles[titles['title']=='Hamlet'].shape[0]
Explanation: How many movies have the title "Hamlet"?
End of explanation
titles[titles['title']=='North by Northwest'].shape[0]
Explanation: How many movies are titled "North by Northwest"?
End of explanation
titles[titles['title']=='Hamlet'].sort(columns='year')['year'].values[0]
Explanation: When was the first movie titled "Hamlet" made?
End of explanation
titles[titles['title']=='Treasure Island'].sort(columns='year')
Explanation: List all of the "Treasure Island" movies from earliest to most recent.
End of explanation
titles[titles['year']==1950].shape[0]
Explanation: How many movies were made in the year 1950?
End of explanation
titles[titles['year']==1960].shape[0]
Explanation: How many movies were made in the year 1960?
End of explanation
titles[(titles['year']>=1950)&(titles['year']<1960)].shape[0]
Explanation: How many movies were made from 1950 through 1959?
End of explanation
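With newer pandas versions, an equivalent sketch for the 1950-1959 count uses Series.between, which is inclusive on both ends:
titles[titles['year'].between(1950, 1959)].shape[0]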
print(titles[titles['title']=="Batman"]['year'].values)
Explanation: In what years has a movie titled "Batman" been released?
End of explanation
cast[(cast['title']=="Inception")&(cast['year']==2010)].shape[0]
Explanation: How many roles were there in the movie "Inception"?
End of explanation
sum(cast[(cast['title']=="Inception")&(cast['year']==2010)]['n'].isnull())
Explanation: How many roles in the movie "Inception" are NOT ranked by an "n" value?
End of explanation
sum(cast[(cast['title']=="Inception")&(cast['year']==2010)]['n'].notnull())
Explanation: But how many roles in the movie "Inception" did receive an "n" value?
End of explanation
cast[cast['title']=='North by Northwest'].dropna().sort(columns='n')
Explanation: Display the cast of "North by Northwest" in their correct "n"-value order, ignoring roles that did not earn a numeric "n" value.
End of explanation
cast[(cast['title']=='Sleuth')&(cast['year']==1972)].sort(columns='n')
Explanation: Display the entire cast, in "n"-order, of the 1972 film "Sleuth".
End of explanation
cast[(cast['title']=='Sleuth')&(cast['year']==2007)].sort(columns='n')
Explanation: Now display the entire cast, in "n"-order, of the 2007 version of "Sleuth".
End of explanation
cast[(cast['title']=='Hamlet')&(cast['year']==1921)].shape[0]
Explanation: How many roles were credited in the silent 1921 version of Hamlet?
End of explanation
cast[(cast['title']=='Hamlet')&(cast['year']==1996)].shape[0]
Explanation: How many roles were credited in Branagh's 1996 Hamlet?
End of explanation
cast[(cast['character']=='Hamlet')].shape[0]
Explanation: How many "Hamlet" roles have been listed in all film credits through history?
End of explanation
cast[(cast['character']=='Ophelia')]['name'].unique().shape[0]
Explanation: How many people have played an "Ophelia"?
End of explanation
cast[(cast['character']=='The Dude')]['name'].unique().shape[0]
Explanation: How many people have played a role called "The Dude"?
End of explanation
cast[(cast['character']=='The Stranger')]['name'].unique().shape[0]
Explanation: How many people have played a role called "The Stranger"?
End of explanation
cast[(cast['name']=='Sidney Poitier')]['character'].unique().shape[0]
Explanation: How many roles has Sidney Poitier played throughout his career?
End of explanation
cast[(cast['name']=='Judi Dench')]['character'].unique().shape[0]
Explanation: How many roles has Judi Dench played?
End of explanation
cast[(cast['name']=='Cary Grant')&(cast['n']==2)&(cast['year']>=1940)&(cast['year']<1950)].sort(columns='year')
Explanation: List the supporting roles (having n=2) played by Cary Grant in the 1940s, in order by year.
End of explanation
cast[(cast['name']=='Cary Grant')&(cast['n']==1)&(cast['year']>=1940)&(cast['year']<1950)].sort(columns='year')
Explanation: List the leading roles that Cary Grant played in the 1940s in order by year.
End of explanation
cast[(cast['type']=='actor')&(cast['year']>=1950)&(cast['year']<1960)].shape[0]
Explanation: How many roles were available for actors in the 1950s?
End of explanation
cast[(cast['type']=='actress')&(cast['year']>=1950)&(cast['year']<1960)].shape[0]
Explanation: How many roles were available for actresses in the 1950s?
End of explanation
cast[(cast['n']==1)&(cast['year']<=1980)].shape[0]
Explanation: How many leading roles (n=1) were available from the beginning of film history through 1980?
End of explanation
cast[(cast['n']!=1)&(cast['year']<=1980)].shape[0]
Explanation: How many non-leading roles were available from the beginning of film history through 1980?
End of explanation
sum(cast[(cast['year']<=1980)]['n'].isnull())
Explanation: How many roles through 1980 were minor enough that they did not warrant a numeric "n" rank?
End of explanation |
15,197 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step4: 1T_Pandas๋ก ๋ฐฐ์ฐ๋ SQL ์์ํ๊ธฐ (1) - WHERE, ORDER BY
๊ฐฏ์ ์ธ๊ธฐ (COUNT)
์นผ๋ผ๋ช
๋ณ๊ฒฝํ๊ธฐ (AS)
์ ๋ ฌ (ORDER BY)
ํน์ ์กฐ๊ฑด์ ๋ํ ํํฐ๋ง(WHERE)
Pandas์์๋
JOIN ( merge ) ( * )
GROUP BY ( * )
์ ์ ์ฃผ์์ ๋ํด์
Step7: SQL(Structured Query Language; ๊ตฌ์กฐํ๋ ์ง์ ์ธ์ด == Python Programming Language)
์ ๊ทํํ์ ๊ฐ์ ๊ฒฝ์ฐ์๋ ํ๋์ ์ฝ์์ด์ง ํ๋ก๊ทธ๋๋ฐ ์ธ์ด๋ผ๊ณ ํ ์ ์๋ค.
๊ทธ๋์ ๋ด๋ถ์ ์ผ๋ก ํจ์ ๊ฐ์ ๊ธฐ๋ฅ์ด ์๋ค. count, sum, mean...
์ฐ์ฐ์ ๋ํ ๋ถ๋ถ์ด ์๋๋ค, SQL ์ด๋ฐ ์ฐ์ฐ์ ๊ต์ฅํ ์ต์ ํ => ๋น ๋ฅด๊ฒ
๋์ ๋๋ฆฐ ๋ถ๋ถ์ DATA๊ฐ 100๋ง๊ฐ๋ผ๋ฉด Name, Population์ ๋ฝ์์ ๋ค์ด๋ก๋ ๋ฐ๊ฒ ๋๋ฉด ๋คํธ์ํฌ์ ๋ํ ๋น์ฉ์ด ํฌ๊ฒ ๋ถ๋ด๋๋ค. ๊ทธ๋์ 10์ด ์ด์ ๊ฑธ๋ฆฌ๊ฒ ๋๋ค. DATA๊ฐ ๋ค ๋ฐ์์ ์ง ๋๊น์ง. ๊ทธ๋ฐ๋ฐ ๋ง์ฝ์ COUNT๋ฅผ ์ฐ๊ฒ ๋๋ฉด ์ด๋ฏธ ์ฐ์ฐ์ ์๋ฒ์์ ๋๊ณ ์ซ์๋ง ๋ฐ๋ก ๋ ์์ค๋ ๊ฒ์ด๋ค. ๊ทธ๋ฌ๋ฉด ๋ฐ๋ก ์ ๋ณด๋ฅผ ๋ฐ์ ์ ์๋ ๊ฒ์ด๋ค. ๊ทธ๋์ ๊ฐ๋ฅํ๋ฉด SQL์์ ์ฐ์ฐ์ด ๊ฐ๋ฅํ ๊ฒ์ SQL์์ ๋๋ฆฌ๊ณ pandas์์๋ง ํธํ๊ฒ ํ ์ ์๋ ๊ฒ์ pasdas์์. ํ์ง๋ง pandas๊ฐ ํธํ๊ธฐ ๋๋ฌธ์ csv๋ก ๋ฐ์์์ ์ฒ๋ฆฌํด๋ ๋๊ธด ํ์ง๋ง ๊ณ์ฐ๋ ๋ฐ์ดํฐ๋ง ๋ฐ์์ค๋ ๊ฒ์ด ํธํ ๊ฒฝ์ฐ์๋ SQL์์
Step9: Country => Name, Population, Continent
Step15: Name => ์ด๋ฆ
Poplation => ์ธ๊ตฌ
Continent = > ๋๋ฅ์ผ๋ก ๋ณ๊ฒฝ
Step16: ๋ง์ฝ ์ด๋ค ๋ฐ์ดํฐ์ ์ซ์๋ง์ ์ธ์ผ ํ๋ ์ํฉ์ด๋ผ๋ฉด? Pandas or SQL? SQL์ด ์ข๋ค.
Step18: SQL์ ์ด๋ฐ ์ฐ์ฐ์ ๊ต์ฅํ ์ต์ ํ ๋์ด ์๊ธฐ ๋๋ฌธ์ ๋ค๋ฅธ ์ธ์ด๋ณด๋ค ๋น ๋ฅด๋ค.
๋์ ์ ๋๋ฆฐ ๋ถ๋ถ์?
๋ฐ์ดํฐ๊ฐ 100๋ง๊ฐ๋ผ๋ฉด. Name, Population ๋ฑ์ ๋ฝ์ ๋ ๋ค์ด๋ก๋ ๋ฐ์์ผ ํ๋ค.
๋คํธ์ํฌ์ ๋ํ ๋น์ฉ์ด ๊ฑธ๋ฆฐ๋ค. ๋ค์ด ๋ฐ๋ ์๋ ๋๋ฌธ์ ์ค๋ ๊ฑธ๋ฆฐ๋ค.
๊ทธ๋์ ๋น ๋ฅด๊ฒ ํ๋ ๋ฐฉ๋ฒ์? SQL์์ ๊ฐ๋ฅํ ์ฐ์ฐ์ SQL์์ ๋๋ฆฌ๊ณ ๋ฐ๋ ๊ฒ์ด ์ข๋ค.
์กฐ๊ฑด์ ๋ฐ๋ฅธ Filtering
Continent == Europe
Population > 10,000,000 ๋ฐ์ดํฐ๋ฅผ ๋ฝ์๋ผ
Step19: Asia๋ ํน์ Europe์ ์ํ๋ ๊ตญ๊ฐ๋ฅผ 1.pandas 2.sql ๋ ๋ค ๋ฝ์๋ณด๊ธฐ
์ ๋ต 4๊ฐ์ง
1. pandas (1)
Step20: 2. pandas (2)
Step22: 3. sql (1)
Step24: 4. sql (2)
Step25: ์ ๋ ฌ
Pandas์์ ์ธ๊ตฌ์ ๋ด๋ฆผ์ฐจ์์ผ๋ก ์ ๋ ฌ(์ฆ, ์ธ๊ตฌ๊ฐ ๋ง์ ์์๋๋ก)
Step26: sort() - deprecated(์ค์๋๊ฐ ๋จ์ด์ ธ ๋ ์ด์ ์ฌ์ฉ๋์ง ์๊ณ ์์ผ๋ก๋ ์ฌ๋ผ์ง๊ฒ ๋ (์ปดํจํฐ ์์คํ
๊ธฐ๋ฅ ๋ฑ))
Step28: ์ธ๊ตฌ์๊ฐ 1์ต ๋ช
์ด์์ด๋ฉด์, ์์์๋ ์ ๋ฝ์ ํฌํจ๋ ๊ตญ๊ฐ๋ค์ ์ธ๊ตฌ์ ๋ด๋ฆผ์ฐจ์์ผ๋ก ์ถ๋ ฅํ๋ผ ์ ๊ฐ์ ๋ฌธ์ ๋ ํ ์ ์๋ค. | Python Code:
def hello():
์ฃผ์์
๋๋ค, docstring => ํ์ด์ฌ ์ฝ๋๋ฅผ ๋ฌธ์ํ
pass
# ์ต ์ด๊ฒ๋ ์ฃผ์
a =
#Multiline String์
๋๋ค.
a = "์ฌ๋ฌ์ค
ํ
์คํธ"
a = "์ฌ๋ฌ์ค\
ํ
์คํธ"
a
a =
์๋
ํ์ธ์.
์ ๋ ๊น๊ธฐํ์
๋๋ค.
a
a.strip()
a.replace("\n", " ")
a.replace("\n", " ").strip()
import pymysql
db = pymysql.connect(
"db.fastcamp.us",
"root",
"dkstncks",
"world",
charset="utf8",
)
SQL_QUERY =
SELECT *
FROM Country
;
country_df = pd.read_sql(SQL_QUERY, db)
# pd.read_sql("SELECT * FROM Country;", db)
# ์ฌ์ค ์ด๋ ๊ฒ ๋ฌธ์ฅ์ ์ค์ฌ์ ์ธ ์ ์๋ค.
Explanation: 1T_Getting started with SQL through Pandas (1) - WHERE, ORDER BY
Counting rows (COUNT)
Renaming columns (AS)
Sorting (ORDER BY)
Filtering on a specific condition (WHERE)
In Pandas:
JOIN ( merge ) ( * )
GROUP BY ( * )
About comments
End of explanation
SQL_QUERY =
SELECT ____ # which columns, or which values, do we want to fetch from the DB
FROM ____ # which table (like an Excel sheet) do we fetch the information from
;
SQL_QUERY =
SELECT *
FROM Country
;
# "select * from Country" => ๋๊ฐ์ด ๋์์ ํ๊ฒ ์ง๋ง ๋๋ฌธ์๋ก ์ฐ๋ ๊ฒ์ด ์ฝ์
# ;(์ธ๋ฏธ์ฝ๋ก ) ์์ผ๋ ์์ผ๋ ์ด์ ๋ ๋์ํ์ง๋ง ๊ทธ๋๋ ๋๋ฌ๋ค๋ ๊ฒ์ ๋ช
์ํ๊ธฐ ์ํด ์ฐ๋ ๊ฒ ๋ซ๋ค.
Explanation: SQL (Structured Query Language) is, like Python, a proper programming language.
Something like a regular expression, by contrast, is only a convention and cannot really be called a programming language.
Because SQL is a language, it has built-in, function-like features: count, sum, mean, and so on.
SQL is heavily optimized for these operations, so they run very fast.
The slow part is the network: if a table has 1,000,000 rows and you pull down Name and Population, downloading all of that data is expensive and can easily take 10+ seconds before everything arrives. If you use COUNT instead, the computation is done on the server and only a single number comes back, so you get the answer immediately. So, wherever possible, push the computations that SQL can do into SQL, and keep the parts that are more convenient in pandas in pandas. Pandas is convenient, so you can also export a CSV and process everything there, but when it is easier to receive only the computed result, do it in SQL.
End of explanation
SQL_QUERY =
SELECT Name, Population, Continent
FROM Country
;
df = pd.read_sql(SQL_QUERY, db)
df.head()
country_df[["Name", "Population", "Continent"]].head()
Explanation: Country => Name, Population, Continent
End of explanation
npc_df = country_df[["Name", "Population", "Continent"]]
npc_df.rename(columns={"Name": "์ด๋ฆ", "Population": "์ธ๊ตฌ", "Continent": "๋๋ฅ"})
SQL_QUERY =
SELECT Name AS "์ด๋ฆ", Population AS "์ธ๊ตฌ์", Continent AS "๋๋ฅ"
FROM Country
;
pd.read_sql(SQL_QUERY, db)
SQL_QUERY =
SELECT Name "์ด๋ฆ", Population "์ธ๊ตฌ์", Continent "๋๋ฅ"
FROM Country
;
# AS๊ฐ ์์ด๋ ๋์ผํ๊ฒ ๋์ํ๋ค. ์ด๊ฑด ์์ด ์จ๋ ๋ฉ๋๋ค.
pd.read_sql(SQL_QUERY, db)
country_df.count()
len(country_df)
SQL_QUERY =
SELECT COUNT(*)
FROM Country;
pd.read_sql(SQL_QUERY, db)
SQL_QUERY =
SELECT COUNT(*) "count"
FROM Country;
pd.read_sql(SQL_QUERY, db)
SQL_QUERY =
SELECT SUM(Population)
FROM Country;
#SQL์๋ SUM, AVG ๋ฑ ๋ค์ํ ๋ด์ฅํจ์๊ฐ ์๋ค.
pd.read_sql(SQL_QUERY, db)
Explanation: Rename the columns: Name => "์ด๋ฆ„" (name),
Population => "์ธ๊ตฌ" (population),
and Continent => "๋Œ€๋ฅ™" (continent).
End of explanation
import time
start_time = time.time()
#์ฌ๊ธฐ์๋ถํฐ ์์
count = len(pd.read_sql("SELECT * FROM Country;", db))
print(count)
end_time = time.time()
excute_time = end_time - start_time
print(excute_time)
start_time = time.time()
#์ฌ๊ธฐ์๋ถํฐ ์์
pd.read_sql("SELECT COUNT(*) FROM Country;", db)
end_time = time.time()
excute_time = end_time - start_time
print(excute_time)
Explanation: If all you need is to count the number of rows, should you use Pandas or SQL? SQL is the better choice.
End of explanation
is_europe = country_df["Continent"] == "Europe"
is_population = country_df["Population"] > 10000000
country_df[is_europe][is_population]
# AND => ์ ๋ฝ์ ์๊ณ , ์ธ๊ตฌ๊ฐ 1000๋ง ์ด์์ธ ์ ๋ค
# OR => ์ ๋ฝ์ ์๊ฑฐ๋, ์ธ๊ตฌ๊ฐ 1000๋ง ์ด์์ธ ์ ๋ค
# country_df[is_europe & is_population]
# country_df[is_europe | is_population]
SQL_QUERY =
SELECT *
FROM Country
WHERE
Continent = "Europe"
AND Population > 10000000
;
pd.read_sql(SQL_QUERY, db)
Explanation: SQL is heavily optimized for this kind of operation, so it is faster than other languages here.
What is slow, then?
If there are 1,000,000 rows, pulling out Name, Population, etc. means downloading all of them.
That network transfer has a cost, so it takes a long time simply because of the download speed.
How do we make it fast? Run the computations that SQL can handle inside SQL and only receive the results.
Filtering on conditions:
Continent == Europe
Extract the rows with Population > 10,000,000
End of explanation
is_asia = country_df["Continent"] == "Asia"
is_europe = country_df["Continent"] == "Europe"
country_df[is_asia | is_europe]
Explanation: Extract the countries that belong to Asia or Europe, in both 1. pandas and 2. SQL.
There are 4 possible answers.
1. pandas (1)
End of explanation
is_asia_or_europe = country_df["Continent"].isin(["Asia", "Europe"])
country_df[is_asia_or_europe].head(3)
# df["Continent"].str.contains("Asia") => ์ด๊ฑด ํ
์คํธ๋ง์ด๋ ๋ฐฉ๋ฒ
Explanation: 2. pandas (2)
End of explanation
SQL_QUERY =
SELECT *
FROM Country
WHERE
Continent = "Asia"
OR Continent = "Europe"
pd.read_sql(SQL_QUERY, db)
Explanation: 3. sql (1)
End of explanation
SQL_QUERY =
SELECT *
FROM Country
WHERE
Continent IN ("Asia", "Europe")
;
pd.read_sql(SQL_QUERY, db)
Explanation: 4. sql (2)
End of explanation
country_df.sort_values("Population", ascending=False)[["Name", "Population"]]
Explanation: Sorting
In Pandas, sort by population in descending order (i.e., largest population first).
End of explanation
country_df.sort("Population")
Explanation: sort() - deprecated (its importance has declined, it is no longer used, and it will eventually be removed from the library).
End of explanation
SQL_QUERY =
SELECT Name, Population
FROM Country
ORDER BY Population DESC
;
# ascending (ASC) is the default
pd.read_sql(SQL_QUERY, db)
Explanation: We can now also solve a problem like "list the countries with a population of at least 100 million that belong to Asia or Europe, sorted by population in descending order".
End of explanation |
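A possible solution sketch for that exercise (assuming the same db connection opened earlier), combining WHERE and ORDER BY:
SQL_QUERY = """
SELECT Name, Population, Continent
FROM Country
WHERE
    Population >= 100000000
    AND Continent IN ("Asia", "Europe")
ORDER BY Population DESC
;
"""
pd.read_sql(SQL_QUERY, db)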
15,198 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction to Machine Learning with scikit-learn
Lab 2
Step1: In the following cell, we are just defining what is needed to train the classifier and display it
Step2: Binary Support Vector Machines
We will start with the support vector machine (SVM) classifier.
Step3: Linear kernel
Step4: Gaussian (or RBF for radial basis function) kernel
Step5: Polynomial kernel
Step6: Decision Trees
In scikit-learn, decision trees classifier are simply referred to as DecisionTreeClassifier.
Step7: Model evaluation
There are several ways to evaluate a model without having to visualize the data. The accuracy is the simplest of them
Step8: Multiclass classification
The previous case showed a binary classification example. As we saw during the lecture, classification can be made in the multiclass scenario (3 or more classes).
Here again, we will generate data (3 classes labeled '0', '1' and '2').
Step9: Support Vector machines
There are 2 main ways to extend a binary classifier (such as the SVM) to the multiclass case
Step10: One versus all (rest) classification
Step11: On an imported dataset
The Iris dataset
In this section, we will apply the classification algorithms we have seen to a standard dataset
Step12: The iris object has several fields
Step13: We can show the dataset description
Step14: The fields that interest us the most concern the features (data and feature_names), the labels (target and target_names).
Step15: The digits dataset
There is another classification dataset available out-of-the-box from sklearn. It is the digits dataset. | Python Code:
%matplotlib inline
import numpy as np
# class '0'
features_0 = [-1.5, 0] + np.random.randn(100, 2)
labels_0 = np.zeros(100)
# class '1'
features_1 = [+1.5, 0] + np.random.randn(100, 2)
labels_1 = np.ones(100)
# show the training set with matplotlib
import matplotlib.pyplot as plt
plt.figure()
plt.plot(features_0[:, 0], features_0[:, 1], 'r+') # r+ means red pluses
plt.plot(features_1[:, 0], features_1[:, 1], 'bo') # bo means blue circles
Explanation: Introduction to Machine Learning with scikit-learn
Lab 2: Classification
The goal of this lab session is to discover a few classification tools from scikit-learn.
Classification of generated data
Data generation
We will start by generating a 2D training set composed of two classes (labelled 0 and 1).
End of explanation
# Merge the two classes in a single set
features = np.concatenate((features_0, features_1)) # for features
labels = np.concatenate((labels_0, labels_1)) # for labels
# Define a mesh grid on which we will test the classifiers
mesh_size = 0.1
x_min, x_max = features[:, 0].min() - 1, features[:, 0].max() + 1
y_min, y_max = features[:, 1].min() - 1, features[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, mesh_size),
np.arange(y_min, y_max, mesh_size))
# Define a function that shows the
def show_results(classifier, title):
Z = classifier.predict(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)
plt.contourf(xx, yy, Z, cmap=plt.cm.Paired, alpha=0.8)
plt.scatter(features[:, 0], features[:, 1], c=labels, cmap=plt.cm.Paired)
# c=labels means that the color will correspond to the label
# cmap=plt.cm.Paired is a colormap less agressive than the blue/red default color (which is somewhat flashy)
plt.title(title)
Explanation: In the following cell, we are just defining what is needed to train the classifier and display it
End of explanation
# in scikit-learn, the SVM classifier is referred to as SVC (Support Vector Classifier)
from sklearn.svm import SVC
Explanation: Binary Support Vector Machines
We will start with the support vector machine (SVM) classifier.
End of explanation
my_C = 10 # the soft-margin trade-off parameter
my_kernel = 'linear' # the kernel
my_linear_classifier = SVC(kernel = my_kernel, C = my_C).fit(features, labels) # train the classifier
show_results(my_linear_classifier, my_kernel) # display the results
Explanation: Linear kernel
End of explanation
my_kernel = 'rbf'
my_gamma = 0.5
my_gaussian_classifier = SVC(kernel = my_kernel, C = my_C, gamma = my_gamma).fit(features, labels)
show_results(my_gaussian_classifier, my_kernel)
Explanation: Gaussian (or RBF for radial basis function) kernel
End of explanation
my_kernel = 'poly'
my_degree = 5
my_polynomial_classifier = SVC(kernel = my_kernel, C = my_C, degree = my_degree).fit(features, labels)
show_results(my_polynomial_classifier, my_kernel)
Explanation: Polynomial kernel
End of explanation
from sklearn.tree import DecisionTreeClassifier
my_tree_classifier = DecisionTreeClassifier().fit(features, labels)
show_results(my_tree_classifier, 'tree')
Explanation: Decision Trees
In scikit-learn, decision trees classifier are simply referred to as DecisionTreeClassifier.
End of explanation
from sklearn import metrics
predicted_labels = my_linear_classifier.predict(features)
print "accuracy of the linear classifier: ", metrics.accuracy_score(labels, predicted_labels)
predicted_labels = my_gaussian_classifier.predict(features)
print "accuracy of the rbf classifier: ", metrics.accuracy_score(labels, predicted_labels)
predicted_labels = my_polynomial_classifier.predict(features)
print "accuracy of the polynomial classifier: ", metrics.accuracy_score(labels, predicted_labels)
Explanation: Model evaluation
There are several ways to evaluate a model without having to visualize the data. The accuracy is the simplest of them: it is the proportion of well classified samples. Standard evaluation metrics can be found in sklearn.metrics.
Note: In this lab session, we won't go into much detail about model evaluation. There will be a session entirely dedicated to model evaluation and selection.
End of explanation
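As a supplementary sketch (not part of the original lab), k-fold cross-validation gives a less optimistic accuracy estimate than evaluating on the training data; depending on your scikit-learn version the import lives in sklearn.model_selection or sklearn.cross_validation.
from sklearn.model_selection import cross_val_score  # older versions: sklearn.cross_validation
cv_scores = cross_val_score(SVC(kernel='linear', C=1.0), features, labels, cv=5)
print("5-fold cross-validated accuracy:", cv_scores.mean())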
# Generate the data
features_0 = [-2.0, 0] + np.random.randn(100, 2)
features_1 = [+2.0, 0] + np.random.randn(100, 2)
features_2 = [0, +2.0] + np.random.randn(100, 2)
labels_0 = np.zeros(100)
labels_1 = np.ones(100)
labels_2 = 2 * np.ones(100)
# Merge the data
features = np.concatenate((features_0, features_1, features_2))
labels = np.concatenate((labels_0, labels_1, labels_2))
# Re-define the meshgrid
mesh_size = 0.1
x_min, x_max = features[:, 0].min() - 1, features[:, 0].max() + 1
y_min, y_max = features[:, 1].min() - 1, features[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, mesh_size),
np.arange(y_min, y_max, mesh_size))
Explanation: Multiclass classification
The previous case showed a binary classification example. As we saw during the lecture, classification can be made in the multiclass scenario (3 or more classes).
Here again, we will generate data (3 classes labeled '0', '1' and '2').
End of explanation
# feel free to play with the parameters (kernel, C, gamma, degree).
my_C = 1.0
my_kernel = 'linear'
my_linear_classifier = SVC(kernel = my_kernel, C = my_C, decision_function_shape = 'ovo').fit(features, labels)
show_results(my_linear_classifier, my_kernel)
Explanation: Support Vector machines
There are 2 main ways to extend a binary classifier (such as the SVM) to the multiclass case: "one versus one" and "one versus all" (or "one versus rest").
The SVC classifier handles both extensions with the decision_function_shape parameter (either equal to 'ovo' or 'ovr'). If the decision_function_shape option is not set, 'ovr' will be set by default.
One versus one classification
End of explanation
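scikit-learn also exposes these reductions as explicit meta-estimators; the brief sketch below (added here) wraps a linear SVM in OneVsRestClassifier and reuses the show_results helper defined earlier.
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import LinearSVC
ovr_clf = OneVsRestClassifier(LinearSVC(C=1.0)).fit(features, labels)
show_results(ovr_clf, 'linear (explicit one-vs-rest)')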
my_linear_classifier = SVC(kernel = my_kernel, C = my_C, decision_function_shape = 'ovr').fit(features, labels)
show_results(my_linear_classifier, my_kernel)
my_kernel = 'rbf'
my_gamma = 0.5
my_gaussian_classifier = SVC(kernel = my_kernel, C = my_C, gamma = my_gamma).fit(features, labels)
show_results(my_gaussian_classifier, my_kernel)
my_kernel = 'poly'
my_degree = 5
my_polynomial_classifier = SVC(kernel = my_kernel, C = my_C, degree = my_degree).fit(features, labels)
show_results(my_polynomial_classifier, my_kernel)
from sklearn.tree import DecisionTreeClassifier
my_tree_classifier = DecisionTreeClassifier().fit(features, labels)
show_results(my_tree_classifier, 'tree')
from sklearn.tree import export_graphviz
export_graphviz(my_tree_classifier, out_file='tree.dot')
# The following is OS dependent
# Ubuntu (you'll need to install GraphViz: 'apt-get install graphviz')
# PS/PDF format:
!dot -Tps tree.dot -o tree.ps
!evince tree.ps
# MacOS: You'll also need GraphViz: 'brew install graphviz'
# !dot -Tpng tree.dot -o tree.png
# !open tree.png # works with MacOS
# Note: The "!" at the beginning of a Python instruction means that what follow will be run as if you were in a terminal.
# (Hence, these commands depend on your OS and what's installed on it).
Explanation: One versus all (rest) classification
End of explanation
from sklearn import datasets
iris = datasets.load_iris()
Explanation: On an imported dataset
The Iris dataset
In this section, we will apply the classification algorithms we have seen to a standard dataset: The Iris dataset (https://en.wikipedia.org/wiki/Iris_flower_data_set).
It is a really small dataset which can be loaded this way:
End of explanation
print iris
Explanation: The iris object has several fields:
End of explanation
print iris.DESCR
Explanation: We can show the dataset description:
End of explanation
features = iris.data
print features
features_names = iris.feature_names
print features_names
labels = iris.target
print labels
label_names = iris.target_names
print label_names
plt.figure()
plt.scatter(features[:, 0], features[:, 1], c=labels)
plt.figure()
plt.scatter(features[:, 2], features[:, 3], c=labels)
C = 1.0
my_linear_classifier = SVC(kernel='linear', C=C).fit(features, labels)
predicted_labels = my_linear_classifier.predict(features)
print "accuracy of the linear classifier: ", metrics.accuracy_score(labels, predicted_labels)
from sklearn.cross_validation import train_test_split
# or, depending on your sklearn version
# from sklearn.model_selection import train_test_split
features_train, features_test, labels_train, labels_test = train_test_split(features, labels, test_size = 0.5)
my_linear_classifier = SVC(kernel='linear', C=C).fit(features_train, labels_train)
predicted_labels_train = my_linear_classifier.predict(features_train)
print "accuracy of the linear classifier: ", metrics.accuracy_score(labels_train, predicted_labels_train)
predicted_labels_test = my_linear_classifier.predict(features_test)
print "accuracy of the linear classifier: ", metrics.accuracy_score(labels_test, predicted_labels_test)
Explanation: The fields that interest us the most concern the features (data and feature_names), the labels (target and target_names).
End of explanation
from sklearn.datasets import load_digits
digits = load_digits()
print digits.DESCR
print digits.target_names
features = digits.data
labels = digits.target
images = digits.images
print images
print images[0]
features[0, :]
features.shape
labels.shape
for i in range(4):
plt.subplot(1, 4, i + 1)
plt.axis('off')
plt.imshow(images[i], cmap=plt.cm.gray_r, interpolation='nearest')
plt.title('Label: %i' % labels[i])
C = 0.01
my_linear_classifier = SVC(kernel='linear', C=C).fit(features, labels)
predicted_labels = my_linear_classifier.predict(features)
print "accuracy of the linear classifier: ", metrics.accuracy_score(labels, predicted_labels)
print my_linear_classifier
features_train, features_test, labels_train, labels_test = train_test_split(features, labels, test_size = 0.5)
my_linear_classifier = SVC(kernel='rbf', C=C).fit(features_train, labels_train)
predicted_labels_train = my_linear_classifier.predict(features_train)
print "accuracy of the linear classifier: ", metrics.accuracy_score(labels_train, predicted_labels_train)
predicted_labels_test = my_linear_classifier.predict(features_test)
print "accuracy of the linear classifier: ", metrics.accuracy_score(labels_test, predicted_labels_test)
Explanation: The digits dataset
There is another classification dataset available out-of-the-box from sklearn. It is the digits dataset.
End of explanation |
15,199 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Sentiment Analysis on Movie Reviews
Using Logistic Regression, SGD, Naive Bayes, OneVsOne Models
0 - negative
1 - somewhat negative
2 - neutral
3 - somewhat positive
4 - positive
Load Libraries
Step1: Load & Read Datasets
Step2: Extracting features
In order to perform machine learning on text documents, we first need to turn the text content into numerical feature vectors.
Bags of words
The most intuitive way to do so is the bags of words representation
Step3: Convert Occurrence to Frequency
Problem with occurrence count of words
Step4: In the above code, we first used the fit() method to fit our estimator and then the transform() method to transform our count-matrix to a tf-idf representation.
These two steps can be combined using fit_transform() method.
Step5: Train Classifier
We train our classifier by inputing our features and expecting our classifier to output/predict the sentiment value for each phrase in test dataset.
Naive Bayes Classifier
Step6: Building a Pipeline
In order to make the vectorizer => transformer => classifier easier to work with, scikit-learn provides a Pipeline class that behaves like a compound classifier.
You can compare the above accuracy result of the classifier without using Pipeline and the below accuracy result of the classifier while using Pipeline class. It's the same. Hence, Pipeline class highly simplifies our task of tokenizing and tfidf conversion.
Step7: Let's use stop words filter in CountVectorizer method and see how it affects the classifier's accuracy. We see that this increases accuracy.
Step8: Classification Report (precision, recall, f1-score)
Step9: Confusion Matrix
Step10: Stochastic Gradient Descent (SGD) Classifier
Step11: Logistic Regression Classifier
Step12: OneVsOne Classifier
Step13: Create Submission | Python Code:
import nltk
import pandas as pd
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.pipeline import Pipeline
from sklearn.linear_model import LogisticRegression, SGDClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import LinearSVC
from sklearn.multiclass import OneVsRestClassifier, OneVsOneClassifier
from sklearn.metrics import classification_report, confusion_matrix
Explanation: Sentiment Analysis on Movie Reviews
Using Logistic Regression, SGD, Naive Bayes, OneVsOne Models
0 - negative
1 - somewhat negative
2 - neutral
3 - somewhat positive
4 - positive
Load Libraries
End of explanation
train = pd.read_csv('train.tsv', delimiter='\t')
test = pd.read_csv('test.tsv', delimiter='\t')
train.shape, test.shape
train.head()
test.head()
# unique sentiment labels
train.Sentiment.unique()
train.info()
train.Sentiment.value_counts()
train.Sentiment.value_counts() / train.Sentiment.count()
Explanation: Load & Read Datasets
End of explanation
X_train = train['Phrase']
y_train = train['Sentiment']
# Convert a collection of text documents to a matrix of token counts
count_vect = CountVectorizer()
# Fit followed by Transform
# Learn the vocabulary dictionary and return term-document matrix
X_train_counts = count_vect.fit_transform(X_train)
#X_train_counts = X_train_counts.toarray()
# 156060 rows of train data & 15240 features (one for each vocabulary word)
X_train_counts.shape
# get all words in the vocabulary
vocab = count_vect.get_feature_names()
print (vocab)
# get index of any word
count_vect.vocabulary_.get(u'100')
# Sum up the counts of each vocabulary word
dist = np.sum(X_train_counts, axis=0)
# print (dist) # matrix
dist = np.squeeze(np.asarray(dist))
print (dist) # array
zipped = sorted(zip(vocab, dist), key=lambda t: t[1], reverse=True)  # sort words by highest number of occurrences
# For each, print the vocabulary word and the number of times it
# appears in the training set
for tag, count in zipped:
    print(count, tag)
Explanation: Extracting features
In order to perform machine learning on text documents, we first need to turn the text content into numerical feature vectors.
Bags of words
The most intuitive way to do so is the bags of words representation:
assign a fixed integer id to each word occurring in any document of the training set (for instance by building a dictionary from words to integer indices).
for each document $#i$, count the number of occurrences of each word $w$ and store it in $X[i, j]$ as the value of feature $#j$ where $j$ is the index of word $w$ in the dictionary
Reference: http://scikit-learn.org/stable/tutorial/text_analytics/working_with_text_data.html
The Bag of Words model learns a vocabulary from all of the documents, then models each document by counting the number of times each word appears.
We'll be using the CountVectorizer feature extractor module from scikit-learn to create bag-of-words features.
End of explanation
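As a tiny sketch of that mapping, here is CountVectorizer applied to a two-sentence corpus invented purely for illustration (not taken from the dataset):
from sklearn.feature_extraction.text import CountVectorizer

toy_docs = ["the movie was good", "the movie was not good at all"]
toy_vect = CountVectorizer()
toy_counts = toy_vect.fit_transform(toy_docs)
print(toy_vect.vocabulary_)     # word -> column index j
print(toy_counts.toarray())     # entry [i, j] = count of word j in document i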
tf_transformer = TfidfTransformer(use_idf=False).fit(X_train_counts)
X_train_tf = tf_transformer.transform(X_train_counts)
# 156060 rows of train data & 15240 features (one for each vocabulary word)
X_train_tf.shape
# print some values of the tf-transformed feature vector (idf is disabled here via use_idf=False)
print(X_train_tf[1:2])
Explanation: Convert Occurrence to Frequency
Problem with occurrence count of words:
- longer documents will have higher average count values than shorter documents, even though they might talk about the same topics
Solution:
- divide the number of occurrences of each word in a document by the total number of words in the document
- new features formed by this method are called tf (Term Frequencies)
Refinement on tf:
- downscale weights for words that occur in many documents in the corpus and are therefore less informative than those that occur only in a smaller portion of the corpus
- this downscaling is called tf-idf (Term Frequency times Inverse Document Frequency)
Let's compute tf and tf-idf :
End of explanation
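A minimal numeric sketch of those two ideas on a made-up count matrix; sklearn's own formula adds smoothing and l2 normalization, so treat this as the intuition rather than the library's exact numbers.
import numpy as np

toy_counts = np.array([[2., 1., 0.],
                       [1., 1., 3.]])                        # rows = documents, columns = vocabulary words
toy_tf = toy_counts / toy_counts.sum(axis=1, keepdims=True)  # tf: counts divided by document length
doc_freq = (toy_counts > 0).sum(axis=0)                      # number of documents containing each word
toy_idf = np.log(toy_counts.shape[0] / doc_freq) + 1.0       # idf: rarer words get a larger weight
print(toy_tf * toy_idf)                                      # tf-idf weights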
tfidf_transformer = TfidfTransformer()
X_train_tfidf = tfidf_transformer.fit_transform(X_train_counts)
X_train_tfidf.shape
Explanation: In the above code, we first used the fit() method to fit our estimator and then the transform() method to transform our count-matrix to a tf-idf representation.
These two steps can be combined using fit_transform() method.
End of explanation
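A quick sketch to check that equivalence, simply comparing the two matrices element-wise:
step_by_step = TfidfTransformer().fit(X_train_counts).transform(X_train_counts)
print(abs(step_by_step - X_train_tfidf).max() < 1e-12)  # True: both routes give the same matrix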
clf = MultinomialNB().fit(X_train_tfidf, y_train)
predicted = clf.predict(X_train_tfidf)
np.mean(predicted == y_train)
Explanation: Train Classifier
We train our classifier by inputing our features and expecting our classifier to output/predict the sentiment value for each phrase in test dataset.
Naive Bayes Classifier
End of explanation
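Note that the accuracy above is measured on the same phrases the model was fit on, which is optimistic. A rough held-out check is sketched below; the 80/20 split and the random_state are arbitrary choices made here.
from sklearn.model_selection import train_test_split

X_tr, X_val, y_tr, y_val = train_test_split(X_train_tfidf, y_train, test_size=0.2, random_state=42)
val_clf = MultinomialNB().fit(X_tr, y_tr)
print(np.mean(val_clf.predict(X_val) == y_val))  # accuracy on phrases the model has not seen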
text_clf = Pipeline([
('vect', CountVectorizer()),
('tfidf', TfidfTransformer()),
('clf', MultinomialNB()),
])
text_clf.fit(X_train, y_train)
predicted = text_clf.predict(X_train)
np.mean(predicted == y_train)
Explanation: Building a Pipeline
In order to make the vectorizer => transformer => classifier easier to work with, scikit-learn provides a Pipeline class that behaves like a compound classifier.
You can compare the accuracy of the classifier above (without a Pipeline) with the accuracy of the classifier below (using a Pipeline): the results are the same. The Pipeline class therefore packages the tokenization and tf-idf conversion steps into a single object and greatly simplifies the workflow.
End of explanation
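Because the fitted pipeline keeps its stages, intermediate objects remain inspectable through named_steps; the 'vect' key below is simply the name given to the CountVectorizer stage above.
print(text_clf.named_steps['vect'])                   # the CountVectorizer stage
print(len(text_clf.named_steps['vect'].vocabulary_))  # vocabulary size learned during fit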
text_clf = Pipeline([
('vect', CountVectorizer(stop_words='english')),
('tfidf', TfidfTransformer()),
('clf', MultinomialNB()),
])
text_clf.fit(X_train, y_train)
predicted = text_clf.predict(X_train)
np.mean(predicted == y_train)
Explanation: Let's use the stop-words filter in CountVectorizer and see how it affects the classifier's accuracy. We see that this increases accuracy.
End of explanation
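For reference, a small sketch of what stop_words='english' removes: scikit-learn ships a built-in list of a few hundred very common English words.
from sklearn.feature_extraction.text import ENGLISH_STOP_WORDS

print(len(ENGLISH_STOP_WORDS))
print(sorted(ENGLISH_STOP_WORDS)[:15])  # a small sample of the filtered words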
# the five sentiment labels as strings, for the classification report below
target_names = ['0', '1', '2', '3', '4']
print(classification_report(y_train, predicted, target_names=target_names))
Explanation: Classification Report (precision, recall, f1-score)
End of explanation
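The per-class report can also be condensed into a single number; macro-averaging treats the five classes equally, while weighted averaging follows the class frequencies. A short sketch:
from sklearn.metrics import f1_score

print(f1_score(y_train, predicted, average='macro'))
print(f1_score(y_train, predicted, average='weighted'))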
print (confusion_matrix(y_train, predicted))
Explanation: Confusion Matrix
End of explanation
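In the matrix above, row i holds the true label and column j the predicted one; normalizing each row (a small sketch) makes the per-class behaviour easier to read.
cm = confusion_matrix(y_train, predicted)
print(np.round(cm / cm.sum(axis=1, keepdims=True), 2))  # each row now sums to 1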
text_clf = Pipeline([
('vect', CountVectorizer(stop_words='english')),
('tfidf', TfidfTransformer()),
('clf', SGDClassifier(loss='modified_huber', shuffle=True, penalty='l2', alpha=1e-3, random_state=42, max_iter=5, tol=None)),
])
text_clf.fit(X_train, y_train)
predicted = text_clf.predict(X_train)
np.mean(predicted == y_train)
Explanation: Stochastic Gradient Descent (SGD) Classifier
End of explanation
text_clf = Pipeline([
('vect', CountVectorizer(stop_words='english', max_features=5000)),
('tfidf', TfidfTransformer()),
('clf', LogisticRegression())
])
text_clf.fit(X_train, y_train)
predicted = text_clf.predict(X_train)
np.mean(predicted == y_train)
Explanation: Logistic Regression Classifier
End of explanation
text_clf = Pipeline([
('vect', CountVectorizer(stop_words='english', max_features=5000)),
('tfidf', TfidfTransformer()),
('clf', OneVsOneClassifier(LinearSVC()))
])
text_clf.fit(X_train, y_train)
predicted = text_clf.predict(X_train)
np.mean(predicted == y_train)
Explanation: OneVsOne Classifier
End of explanation
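One way to see what one-vs-one means in practice: after fitting, the classifier holds one binary LinearSVC per pair of classes, i.e. 5*4/2 = 10 of them here.
print(len(text_clf.named_steps['clf'].estimators_))  # number of pairwise LinearSVC models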
test.info()
X_test = test['Phrase']
phraseIds = test['PhraseId']
predicted = text_clf.predict(X_test)
output = pd.DataFrame( data={"PhraseId":phraseIds, "Sentiment":predicted} )
#output.to_csv( "submission.csv", index=False, quoting=3 )
Explanation: Create Submission
End of explanation |